signed and unsigned
What's the difference between these three definitions:
char, signed char, and unsigned char? Generally, when and how should each be used (e.g. in networking or scientific computing)? |
Well, a signed char can hold either positive or negative values, while an unsigned char can only hold non-negative values. Ex:
Code:
signed char myChar = -10;   /* fine: a signed char can be negative */
Code:
unsigned char anotherChar = 200;   /* fine for unsigned char (0..255), but out of range for signed char */
|
This feels like a homework question. Read K&R sections 2.2 and 2.9 if you've got it around.
|
'char' is just a number, like 'short' and 'long'. Since it has only 256 values, we generally choose to represent letters using 'char' and raw binary using 'unsigned char', though the computer sees both as 1-byte numbers. The usual default for plain 'char' is 'signed', but you always have the option of making the signedness explicit. The C++ standard says that any pointer can be cast to 'unsigned char*' to access an object's binary representation without undefined behavior, which is why 'unsigned char' is mostly used for working with single bytes of raw data. Really, the only differences between the two are how arithmetic other than + and - affects them, and how std::cout (in C++) chooses to display them (and, of course, the pointer types don't implicitly convert.)
ta0kira |
You use 'unsigned char' when you want the compiler to treat it as unsigned.
An example Code:
char a = 127, b = 1, c;
c = a + b;   /* with a signed 8-bit char, c wraps around to -128, so a > c */
Change the char in the code to unsigned char and this will be the result: Code:
a + b = 127 + 1 = 128
If, however, you're programming in a DOS/Windows environment, you will have the extended characters (like à, ç, etc., which have the values 128 to 255). When comparing those characters (a<b or a>=b), the sign is important (see the above example). |