Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
The camera will capture two consecutive images of the scene, and the Java code I want to write must subtract the second image from the first. The result should be zero, since both are images of the same scene.
To subtract the two images from each other, I want to connect the camera to Java and read each image (which is composed of pixels) into an array, so that I can subtract them cell by cell.
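The subtraction step described above can be sketched as follows. This is a minimal example, assuming the two frames are already available as `java.awt.image.BufferedImage` objects (e.g. loaded with `javax.imageio.ImageIO` or handed over by a camera library; the capture step itself is not shown):

```java
import java.awt.image.BufferedImage;

public class ImageDiff {
    // Subtract image b from image a channel by channel.
    // Each cell of the result is the summed absolute difference of the
    // R, G and B channels; 0 means the pixel is identical in both frames.
    static int[][] subtract(BufferedImage a, BufferedImage b) {
        int w = a.getWidth(), h = a.getHeight();
        int[][] diff = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int p = a.getRGB(x, y), q = b.getRGB(x, y);
                int dr = Math.abs(((p >> 16) & 0xFF) - ((q >> 16) & 0xFF));
                int dg = Math.abs(((p >> 8)  & 0xFF) - ((q >> 8)  & 0xFF));
                int db = Math.abs((p & 0xFF) - (q & 0xFF));
                diff[y][x] = dr + dg + db;
            }
        }
        return diff;
    }
}
```

With two truly identical frames every cell of the returned array is zero; with real camera frames it will not be, for the noise reasons discussed below.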
EDIT: By the way, the resulting image will almost never have all pixels exactly zero. Depending on the quality of your camera, there will be more or less noise in the pictures, which leads to slightly different images.
You are right. While writing the code, the result wasn't zero unless I loaded the same picture twice. My goal is that if the image changes, it means somebody entered the room, so the alarm must turn on, a graph illustrating the trace of the intruder's movement is drawn, and some other security effects follow. I am stuck on the condition I need to write so that the camera recognizes a change of image by subtracting the two pictures from each other, because even if the scene didn't change, the result of the subtraction will not be zero. Any ideas?
Motion depends on Linux, which I am not familiar with. I have an idea: what if I convert the pictures taken by the camera to black and white before subtracting them? Then, if no one entered the room, would the result of the subtraction be zero?
It doesn't matter if you convert the images before doing the subtraction. The noise is in the original images, so it also will be in any image that is based on them.
If there is noise in each image taken of the scene, how can I make the code treat them as similar? Does the code read the same color in the scene differently in the two images? If so, how can I deal with this problem? What if the condition for treating the two images as the same is not that the RGB values of the pixels are equal, but that the RGB values of the pixels of the first image minus the RGB values of the pixels of the second image are less than 5, for example?
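That tolerance idea can be sketched like this: treat a pixel as "changed" only when some channel differs by at least a tolerance, and report motion only when the fraction of changed pixels is large enough. The method name and both thresholds (`tol`, `frac`) are illustrative choices, not values from this thread; you would tune them for your camera:

```java
import java.awt.image.BufferedImage;

public class ChangeDetector {
    // A pixel counts as changed only if some RGB channel differs by >= tol.
    // Motion is reported only if more than frac of all pixels changed,
    // so a few noisy pixels do not trigger the alarm.
    static boolean changed(BufferedImage a, BufferedImage b, int tol, double frac) {
        int w = a.getWidth(), h = a.getHeight(), hits = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int p = a.getRGB(x, y), q = b.getRGB(x, y);
                int dr = Math.abs(((p >> 16) & 0xFF) - ((q >> 16) & 0xFF));
                int dg = Math.abs(((p >> 8)  & 0xFF) - ((q >> 8)  & 0xFF));
                int db = Math.abs((p & 0xFF) - (q & 0xFF));
                if (Math.max(dr, Math.max(dg, db)) >= tol) hits++;
            }
        }
        return hits > frac * w * h;
    }
}
```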
Yes, if you assume that noise (with a somewhat decent cam) causes only minimal changes in the pictures, then ignoring minimal differences should work. It may also help to break the images down into smaller chunks. For example, if only one or two pixels differ in a 10x10 pixel block, it is unlikely that there was motion; it is more likely that the differences are simply noise. The size of the pixel blocks should depend on the size of the original images. It wouldn't make much sense to use 50x50 pixel blocks with a 320x240 image, but it might with a 1920x1080 image.
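The block-based idea above can be sketched as follows. The block size and the per-block pixel count are hypothetical parameters you would tune; the point is that isolated noisy pixels scattered across blocks never reach the threshold, while a moving object changes many pixels inside one block:

```java
import java.awt.image.BufferedImage;

public class BlockDetector {
    // Split both frames into block x block squares. A block signals motion
    // only when at least minChanged of its pixels differ by >= tol, so
    // one or two noisy pixels per block are ignored.
    static boolean motion(BufferedImage a, BufferedImage b,
                          int block, int tol, int minChanged) {
        int w = a.getWidth(), h = a.getHeight();
        for (int by = 0; by < h; by += block) {
            for (int bx = 0; bx < w; bx += block) {
                int hits = 0;
                for (int y = by; y < Math.min(by + block, h); y++) {
                    for (int x = bx; x < Math.min(bx + block, w); x++) {
                        int p = a.getRGB(x, y), q = b.getRGB(x, y);
                        int d = Math.abs(((p >> 16) & 0xFF) - ((q >> 16) & 0xFF))
                              + Math.abs(((p >> 8)  & 0xFF) - ((q >> 8)  & 0xFF))
                              + Math.abs((p & 0xFF) - (q & 0xFF));
                        if (d >= tol) hits++;
                    }
                }
                if (hits >= minChanged) return true;
            }
        }
        return false;
    }
}
```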
Last edited by TobiSGD; 04-04-2012 at 10:59 AM.
Reason: fixed typo
It seems very improbable that your simplistic approach will ever be successful, due to a number of real-world factors. The imaging device will never be noise-free, and is subject to variations in sensitivity to various spectra due to temperature, possibly humidity, aging, noise coupled from power sources and noise from other electrical sources, etc. The optics of the imaging system will also be subject to long term changes. The image itself will change due to variations of lighting, mechanical vibrations of the imaging device, and other environmental factors.
To have any sort of reliable detection of movement, you will need to do more analysis than frame-by-frame comparison. You will probably need some form of image-recognition analysis, which is able to identify gross components within images, and then compare those on a frame-by-frame, or frame-series basis. There are packages available to assist with this, although I have no first-hand experience with any. I am given to believe that most higher quality packages are commercial products.
You may be able to glean some techniques from code that performs motion video compression. As I understand it, much of the compression technique is to compare frames and then store only the differences between sequential frames. At the root level, this sounds like what you are attempting to do, followed by some quantization of the frame-to-frame differences. There should be an abundance of open-source code that performs MPEG and other forms of motion video compression for your scrutiny.