Forza, please bear with me. I have not ignored any of the posts here. I need time to sort through them all, and to study their content. Most of them are over my head, as C++ is very new to me, and my programming experience in general is very limited (which should be obvious). Please, continue to be patient with me.
Sure, we stay patient :-) But, it would be great to get some progress on this issue, because it is a very simple problem. Let's start again ...
The distinction between an integer and a string (char *) should be clear. A string (char *) is a contiguous block of memory containing character encodings, e.g. one byte per ASCII character, terminated by a NUL ('\0') byte in the case of ASCII.
Environment variables generally exist to modify the behavior of your program without changing the source code or re-compiling it. For example, the environment variable LD_LIBRARY_PATH instructs the dynamic loader where to search for dynamic libraries. Linux and other operating systems store environment variables as strings (char *). So, if you set the environment variable CONTENT_LENGTH to, say, 1024, you will NOT get the value 1024 but the string "1024", consisting of the ASCII characters '1', '0', '2', '4'. So if you do a getenv("CONTENT_LENGTH"), you get a pointer to that character sequence. For practical purposes, let's assume that getenv(3) returns the address 0xc0000000 (decimal: 3221225472).
So far, so good. Wherever this environment variable comes from, what you want to do is convert the string "1024" ('1','0','2','4') into the value 1024. I hope this distinction is clear as well, because otherwise we would need to start at the very beginning of computer science, and that definitely isn't the purpose of this forum.
So, based on this theory, let's get back to your code.
The character pointer now points to the value of the environment variable "CONTENT_LENGTH", i.e. the address 0xc0000000 from above. If you dereferenced this pointer, you would get the value 49 (the ASCII code of '1'), assuming that the environment variable has the value "1024".
Code:
int nContentLength = atoi(billschedContentLength);
Here, you convert the string "1024" into the value 1024, which is good!!! :-)
As you said, malloc(3) is a function to allocate memory from the heap. To do so, you need to tell malloc(3) how many bytes to reserve; it expects this value to be of type 'size_t', which is 'unsigned long' on most platforms. But what you actually do is pass the address (0xc0000000) to malloc, not the value 1024. Just to be clear: neither C nor C++ implicitly converts a string (char *) to a number. So in your case you would allocate a _very_ large amount of memory, although you only want 1024 bytes. Finally, you need to change this last line of code to
Code:
billschedBuffer = malloc(nContentLength+1);
Clear?
But, just a different question. You always said, that you got this template from somewhere? Could you give us this link? Because this template seems to be completely incorrect ...
Thank you. Having studied some more tutorials coupled with the information provided in this thread, I'm getting a better understanding of variables, strings, arrays and pointers in C++. I've just never dealt with a language this low-level, directly working with memory and such. Thanks for your time & input.
Okay, thanks for providing this link. That explains where the env vars come from and it also explains where your error comes from. BTW, you should have looked at the attached code (at this tutorial), http://library.thinkquest.org/16728/data/cgi.cpp, there the coding is correct ...
I've been hung up at one point: I was going to ask, "how can an array of characters include an integer at its end?" But, having researched this, I'll rather ask if the following was in error:
Quote:
A string is actually a pointer to the first character of an array of chars. The last character of a string is always 0 (not ASCII '0', but the integer 0. It's often specified as "'\0'" instead of "0" in this case.).
Should this read "The last character of a string is always 0 (not the integer 0, but ASCII '\0'?]"
Otherwise, I'm confused, since an array is supposed to be of one data type.
Last edited by bluegospel; 07-07-2010 at 07:37 AM.
Quote:
Originally Posted by bluegospel
Should this read "The last character of a string is always 0 (not the integer 0, but ASCII '0'?]"
WRONG WRONG WRONG
Quote:
Originally Posted by bluegospel
Otherwise, I'm confused, since an array is supposed to be of one data type.
ASCII and integers are the SAME DATA TYPE! char is actually a very short int (because there aren't many ASCII characters, using int would be overkill and a serious waste of RAM), and it can be used to store small numbers!
ASCII is just a specification of what integer corresponds to what picture printed on the screen. That's all. The CPU has no concept of ASCII, characters, or letters.
In fact, when the C compiler comes across a character in single quotes, it just looks up an ASCII table and pastes in the appropriate integer (this is a bit oversimplified, but you get the idea).
For example, the ASCII character '0' is actually just the integer value 48, and 'A' is 65.
Quote:
Originally Posted by bluegospel
I've been hung up at one point: I was going to ask, "how can an array of characters include an integer at its end?" But, having researched this, I'll rather ask if the following was in error:
Should this read "The last character of a string is always 0 (not the integer 0, but ASCII '\0'?]"
Otherwise, I'm confused, since an array is supposed to be of one data type.
If you have a null-terminated string, its last character is ASCII '\0'. Which for the majority of CPUs means a single byte with 0 value.
Last edited by Sergei Steshenko; 07-07-2010 at 08:00 AM.
Quote:
ASCII is just a specification of what integer corresponds to what picture printed on the screen. That's all. The CPU has no concept of ASCII, characters, or letters.
Okay, so 00000011 is fundamentally a 3 no matter what data type is specified. But the 3 is interpreted differently depending how it's used. It's not the 00000011 being interpreted by use, but the number it represents. So 00000011 does not mean one thing here, and another there. The 3 means one thing here, and another there, because 00000011 is fundamentally a 3.
Last edited by bluegospel; 07-07-2010 at 07:54 AM.
Quote:
Originally Posted by bluegospel
Okay, so 00000011 is fundamentally a 3 no matter what data type is specified. But the 3 is interpreted differently depending how it's used. It's not the 00000011 being interpreted by use, but the number it represents. So 00000011 does not mean one thing here, and another there. The 3 means one thing here, and another there, because 00000011 is fundamentally a 3.
Well, but if you consider 10011, it may be 19 or, for example, 13: the latter is its reading as BCD (Binary Coded Decimal).
Quote:
Well, but if you consider 10011, it may be 19 or, for example, 13 - the latter is for BCD (Binary Coded Decimals).
Okay, thanks. What I'm really after at this point is the question of the basic substance of an 8-bit byte: is it fundamentally a 19, which could be interpreted variously, or is it 10011, which can be interpreted variously?
Quote:
Originally Posted by bluegospel
Okay, thanks. What I'm really after at this point is the question of the basic substance of an 8-bit byte: is it fundamentally a 19, which could be interpreted variously, or is it 10011, which can be interpreted variously?
Somebody somewhere decides how to represent numbers. Plain binary and BCD are just different decisions/representation. One rarely encounters BCD nowadays. IBM used to like BCD a lot.
The substance of an 8-bit byte is 8 bits. What they mean depends on the code that you write.
Examples:
If it contains the bit pattern 01101000, the code can use it as a number (104) or as a character ('h').
If it contains 11111111, the code can use it as a signed number (-1) or an unsigned number (255).