Originally Posted by zali
Thanks for your reply, but if we have an unsigned value greater than 128 and cast it to a char variable, then the char variable would be corrupted
No, it is not corrupted.
While the C99 standard (section 6.3.1.3) says that converting a value to a signed integer type that cannot represent it gives an implementation-defined result, all current architectures use two's complement to represent signed integers, so casts between integer types of the same size do not modify the bit pattern at all. Conversion to a larger type replicates the high bit (the sign bit) into the new bits when the source type is signed, and fills the new bits with zeros when the source type is unsigned; conversion to a smaller type keeps only the low bits. Signedness does not affect same-size integer casts at all. In other words, all architectures running Linux behave the same way: they use the same actual bit pattern. Thus, no corruption.
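To make that concrete, here is a minimal sketch (the 0xfffffff1 output assumes two's complement and a 32-bit int, as on x86 Linux) showing that widening a signed source sign-extends, widening an unsigned source zero-fills, and narrowing keeps only the low bits:

#include <stdio.h>

int main(void)
{
    signed char   sc = (signed char)0xF1; /* bit pattern 0xF1, value -15 */
    unsigned char uc = 0xF1;              /* same bit pattern, value 241 */
    int           wide = 0x12345678;

    /* Widening: the source's signedness decides the new high bits. */
    printf("(int)sc = %d (0x%x)\n", (int)sc, (unsigned)(int)sc); /* -15, 0xfffffff1 */
    printf("(int)uc = %d (0x%x)\n", (int)uc, (unsigned)(int)uc); /* 241, 0xf1 */

    /* Narrowing: only the low bits survive. */
    printf("(unsigned char)wide = 0x%x\n", (unsigned char)wide); /* 0x78 */

    return 0;
}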
(As far as I know, this actually extends to all currently mass-produced CPU architectures. There are some historical ones that used one's complement, and thus have +0 and -0 with differing bit patterns, but they have a lot of other weirdnesses too. You can safely assume all architectures use two's complement signed integers, and that casting between signed and unsigned integer types of the same size does not modify the bit patterns.)
Your example code has a fatal flaw: both i and j are converted to int first (the default argument promotions that printf applies to its arguments). Because i is (usually) a signed type, its new high bits will be duplicates of the old highest bit (the sign bit, in two's complement format). Because j is unsigned, the new high bits will be zero.
Try this code instead:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    char          c1; /* plain char, which is signed on x86 Linux */
    signed char   s1;
    unsigned char u1, u2;

    u1 = 0xF1; /* 241 in decimal */
    s1 = u1;
    c1 = u1;
    u2 = s1;

    printf("u1 = %d (0x%x)\n", u1, u1);
    printf("c1 = %d (0x%x) when converted to int\n", c1, c1);
    printf("s1 = %d (0x%x) when converted to int\n", s1, s1);
    printf("s1 = %d (0x%x) actual bits\n", s1 & ((1 << CHAR_BIT) - 1), s1 & ((1 << CHAR_BIT) - 1));
    printf("u2 = %d (0x%x)\n", u2, u2);
    return 0;
}
The output will be
u1 = 241 (0xf1)
c1 = -15 (0xfffffff1) when converted to int
s1 = -15 (0xfffffff1) when converted to int
s1 = 241 (0xf1) actual bits
u2 = 241 (0xf1)
except that the number of consecutive f's in the c1 and s1 lines depends on the size of int on the architecture (32 bits in Linux, always, as far as I know).
This output shows that casting does not affect the bit pattern, and that the bit pattern in the signed char is exactly the same as the bit pattern in the unsigned char. Thus, casting between signed and unsigned types in Linux never changes the actual bits, only their interpretation.
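If you want to verify this in your own code, here is a minimal sketch of that round trip; the assert on -15 assumes two's complement, as above:

#include <assert.h>
#include <stdio.h>

int main(void)
{
    unsigned char u = 0xF1;           /* value 241, bits 1111 0001 */
    signed char   s = (signed char)u; /* same bits; reads as -15 on two's complement */

    assert((unsigned char)s == u);    /* the round trip preserves the bit pattern */
    assert(s == -15);                 /* only the interpretation changed */
    printf("s = %d, u = %u, same bits\n", s, (unsigned)u);
    return 0;
}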