Quote:
Originally Posted by aarsh
the programmer found the histogram of two images then, squared the difference b/w them and lastly found the square root ! why so ?
To have any idea what that function is doing, you need a detailed understanding of what the histogram function is doing. I had no clue of that when I read your post, but with google I found the histogram section of the following page:
http://www.pythonware.com/library/pi...book/image.htm
Also, be aware that you left out a key step in describing the function you quoted: "squared the difference b/w them and lastly found the square root"
A Root Mean Square computation, such as the one you quoted, computes the differences element by element between two aggregates. Then it squares each of those differences.
Then it averages all those squares together into one value (the step you left out of your description), and finally takes the square root.
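Those steps can be sketched in plain Python (this is my own illustration of the general RMS formula, not the code you quoted):

```python
import math

def rms_difference(a, b):
    """Root Mean Square difference between two equal-length sequences.

    Steps: element-wise difference, square each difference,
    average the squares, then take the square root.
    """
    if len(a) != len(b):
        raise ValueError("sequences must be the same length")
    squared = [(x - y) ** 2 for x, y in zip(a, b)]
    mean = sum(squared) / len(squared)
    return math.sqrt(mean)

# Differences are 3, 4, 0 -> squares 9, 16, 0 -> mean 25/3 -> sqrt
print(rms_difference([0, 0, 0], [3, 4, 0]))
```

Note the averaging step in the middle: without it, you'd just have the length of the difference vector, which grows with the number of elements instead of staying comparable across sizes.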
One key requirement for any validity in RMS is that the elements correspond in meaning between the two aggregates. Element N of one aggregate should carry the same meaning as element N of the other.
In two pictures of the identical scene, would you expect the 523rd pixel of one to correspond to the 523rd pixel of the other? Even with a camera held still while snapping two pictures in a row, the image would be expected to shift by more than a pixel. As soon as the elements fail to correspond, the whole validity of RMS is gone.
The page you linked takes the pixel-by-pixel difference between two images (meaningless unless the pixels correspond perfectly), then takes a histogram of that difference (I have no clue why), then does the rest of the RMS computation. It is hard to imagine any value in the result.
The code you quoted instead takes the histogram of each image before computing the difference. That fixes the issue of pixel positions failing to correspond, but at the cost of throwing away 100% of the shape information in the image, and almost all of the color information (by segregating the primary colors as described on the page I linked).
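Here is a toy sketch of that histogram-first approach, using flat lists of grayscale values as stand-in "images" (the `histogram` function below imitates what PIL's `Image.histogram()` returns; it is my illustration, not the code you quoted):

```python
import math

def histogram(pixels, bins=256):
    """Count pixel values into bins - a toy stand-in for PIL's
    Image.histogram(), which returns raw counts per intensity value."""
    counts = [0] * bins
    for p in pixels:
        counts[p] += 1
    return counts

def histogram_rms(pixels_a, pixels_b):
    """RMS difference of the two histograms, not of the pixels themselves.

    Pixel positions no longer matter, so all shape information
    in the images is discarded before the comparison happens."""
    ha = histogram(pixels_a)
    hb = histogram(pixels_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(ha, hb)) / len(ha))

# A rearranged copy of the same pixels gives an RMS of exactly 0 here,
# even though a pixel-by-pixel comparison would show large differences:
scene = [10, 20, 30, 40]
rearranged = [40, 10, 20, 30]
print(histogram_rms(scene, rearranged))
```

The example at the bottom shows both the fix and the cost: position mismatches vanish, but so does every difference that isn't a difference in tonal distribution.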
There might be some meaning left in the RMS result, but I find it hard to imagine. The slightest change in overall brightness between the two images would misalign the histogram elements, destroying almost all the meaning in the RMS.
What is the source of the two images you want to compare? You need to understand the nature of the differences you want to measure (and even more so the nature of the differences you want to ignore) in order to have any hope.
Quote:
Also I am looking for ward to have some Python code that compares two images regardless their dimensions and file formant.
IIUC, the histogram is in raw pixel counts, so you can't meaningfully compare histograms from images of different sizes. That would be easy to fix by scaling each histogram to proportions of the total pixel count instead of leaving the raw counts.
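The scaling fix is a one-liner; here is a sketch (my own illustration of the idea, not code from the page):

```python
def normalize(hist):
    """Scale raw bin counts to proportions of the total pixel count,
    so histograms from images of different sizes become comparable."""
    total = sum(hist)
    return [count / total for count in hist]

# A 4-pixel image and an 8-pixel image with the same tonal balance
# produce identical normalized histograms:
small = [2, 2]  # raw counts per bin
large = [4, 4]
print(normalize(small))
print(normalize(large))
```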
Quote:
but I am not getting what I want
I don't know what you want, but I suspect what you want is either impossible or far too difficult for you to achieve.
Comparing two pictures in the same format and dimension to see whether they are absolutely identical should be pretty easy.
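For that easy case, a plain byte-for-byte comparison is enough; hashing is one way to sketch it (the function below is my illustration, and `hashlib` is simply one convenient way to do it):

```python
import hashlib

def files_identical(data_a: bytes, data_b: bytes) -> bool:
    """Byte-for-byte identity check via SHA-256 digests.

    Only meaningful when both files use the same format and
    dimensions; a direct data_a == data_b works just as well
    for two in-memory buffers."""
    return hashlib.sha256(data_a).digest() == hashlib.sha256(data_b).digest()

print(files_identical(b"same bytes", b"same bytes"))   # True
print(files_identical(b"same bytes", b"other bytes"))  # False
```

Hashing mainly pays off when you want to find duplicates among many files, since each file only needs to be read once.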
But comparing two pictures to see whether or not they are very similar is almost impossible.
I don't know the math behind the methods used to measure the effectiveness of lossy compression. What matters in lossy compression is whether the result will look nearly the same to a human viewer, but no known math measures that directly. Some mathematical methods are used to estimate the significance of the differences introduced by compression. That may be very close to what you're asking for, in which case you might attack this extremely difficult problem by finding someone else's solution.