First of all, by all means look for existing libraries that will do what you want to do with your images. Really, there's no reason for you to do what has already been done...
Assuming (for the sake of simplicity) that you know you are dealing with a 32-bit environment, a standard int type will almost certainly be 32 bits long. (Strictly speaking, the C standard only guarantees that int is at least 16 bits, but predefined constants such as INT_MAX in &lt;limits.h&gt;, and the fixed-width types in &lt;stdint.h&gt;, will provide this information in a reliable and platform-independent way. Let's assume for the sake of argument that it's known to be "32.")
The most-significant bit (MSB) will be the sign bit: 0=positive, 1=negative. This so-called two's complement notation is used almost universally because it eliminates the need for special rules (and therefore, special hardware designs) to handle signs. If you subtract 2 from the value $00000001 (using hexadecimal notation here...) it naturally becomes $FFFFFFFF, which is "-1." No funky hardware designs required.
(Note: Hexadecimal notation uses 16 digits (0-9, A-F), so that each digit represents 4 bits. Therefore, "F" represents 1111. A preceding "$," by convention, indicates to you that the number has been written in hexadecimal; in C source code the prefix is "0x" instead, as in 0xFFFFFFFF.)
If you declare the number to be "unsigned," then the MSB will not be interpreted as a sign-indicator. But, notice that I use the phrase, "be interpreted as." The manner in which the MSB is regarded is entirely up to you, and you must be consistent.
This is particularly true with functions like printf(), where a format specifier such as "%u" (rather than "%d") must be used to request an "unsigned" output format. There is nothing intrinsically special about the most-significant bit of a particular n-bit quantity: it's up to you.
Last edited by sundialsvcs; 04-07-2009 at 08:14 AM.