Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
0x0001 is the hexadecimal equivalent of the value of 1. I'll let you guess what 0x0002 is.
As for the enumerated identifiers, they were chosen to uniquely identify a value: "send" is 0x0001 and "receive" is 0x0002. Whether these numbers actually mean anything, or were chosen arbitrarily, cannot be deduced from the code snippet you provided.
Note that when an enumeration is defined, if the first identifier is not assigned a value, it is given the default value of zero; each subsequent identifier has a value one more than the previous, unless explicitly assigned a value. For example:
Code:
enum Foo
{
    ZERO,        // has the value of 0
    APPLE,       // has the value of 1
    ORANGE = 3,  // obviously has value of 3
    TOMATO,      // has the value of 4
    ZUCCHINI = 6 // obviously has value of 6
};
Enumerated values are not 16-bit values; on most systems an enum is the size of an int, a full 32 bits (4 bytes), though strictly speaking the size is implementation-defined. See for yourself on your system using this program:
Code:
#include <stdio.h>

enum Foo
{
    ZERO,
    ONE
};

int main(void)
{
    /* sizeof yields a size_t, so use the %zu conversion specifier */
    printf("sizeof(ZERO) = %zu\n", sizeof(ZERO));
    return 0;
}
Last edited by dwhitney67; 03-22-2011 at 06:53 PM.
What is the use of hexadecimal values if these numbers are arbitrarily chosen? And thanks for the good explanation.
I'm not sure if this is a question... but oftentimes, hex values are used in lieu of decimal because they are easier to read. All numbers within a computer are stored in binary, so the use of hex or even base-10 numbers serves no purpose other than to gratify the humans who both develop and read code.
Can anyone explain the meaning of 0x0001 and 0x0002? I would be very glad if anyone could explain with an example.
Those aren't 16-bit numbers. The leading zeros mean nothing; they may just be there to align the code or make it look nicer. You could write 0x1 or 0x0000000000000001 and it would mean the same thing to the compiler.
Hexadecimal is used because these are masks for bits. Very long ago I wrote some assembler routines that did serial input/output directly, one character at a time. There was a data register and a status register: the last bit of the status register was set while a character was still being sent, and the bit before it was set when a character had been received but not yet processed. To send a string, you wrote the first character to the data register, waited until the last bit was unset, then wrote the next, and so on; input worked similarly. To see whether a character had just been received, you loaded "receive" into a register and logically ANDed it with the status register: if the result was non-zero, you could copy the character from the input data register, which reset the status bit ready for the next character.
Also, since each hex digit represents exactly 4 bits, it is almost trivial to create code in very low-level languages like assembler to convert numbers to their human-readable ASCII format as hexadecimal. Same for octal. Converting to ASCII in a decimal radix is comparatively difficult to code, requires larger code and is slower.