The disadvantage of that is the case where you accidentally change the type of *r without changing the malloc call.
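To illustrate (the names and types here are just for the example): with the type spelled out in the malloc call, a later change to the pointer's declaration silently under-allocates, whereas the sizeof(*r) form follows the declaration automatically.
Code:
#include <stdlib.h>

int main(void)
{
    /* declaration later changed from float* to double*, but the malloc
       call was not updated: typically 4 bytes allocated for an 8-byte object */
    double *a = malloc(sizeof(float));

    /* the sizeof(*b) form tracks the declared type, so the same change
       would not break it */
    double *b = malloc(sizeof(*b));

    free(a);
    free(b);
    return 0;
}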
Hmmm, interesting, but I believe this is why typedef was invented, or, easier than that, a #define.
Using a pointer because, on the off chance, the value may be a different kind of variable is silly and dangerous. Even if it's not an issue at run time, it makes the code hard for others to understand. I would say that just because you can do something doesn't mean you have to do it that way, or that it is the best way to do it.
Code:
// don't know if a #define would work this way
#define type float

char *r;                     /* needs #include <stdlib.h> for malloc */
r = malloc(sizeof(type));
Or, hell, a better and easier to understand method:
Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *r;
    size_t sze;

    sze = sizeof(float); /* change this if you need a different size */
    r = malloc(sze);
    free(r);
    return 0;
}
In a complex application, using a typedef or a define would become unwieldy because of the sheer number you need. Of course, a macro might be useful in allocation in some cases:
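(The macro itself did not come through in the quote; something along these lines fits the description in the next post. The NEW_ARRAY name is just a placeholder of mine, not the original.)
Code:
#include <stdlib.h>

/* hypothetical reconstruction: allocate 'count' elements of whatever 'ptr'
   points to; the sizeof(*(ptr)) part still adapts if the pointer's type
   changes later */
#define NEW_ARRAY(ptr, count) ((ptr) = malloc((count) * sizeof(*(ptr))))

int main(void)
{
    double *r;
    NEW_ARRAY(r, 16);   /* arguably reads better than r = malloc(16 * sizeof(*r)) */
    free(r);
    return 0;
}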
Yes, this would still generate the sizeof(*r) mechanism, but this is probably the cleanest way to handle allocation of an array. Also, it is more self-documenting than a bare malloc(count*sizeof(*r)). Thoughts?
For me, changing a sizeof(type) would be much easier to spot as something to change, if need be. If you need to change the type of the pointer, change the type being evaluated for size (sizeof(type2), etc.); that is very easy to debug. This makes a lot more sense to me, since the size of a primitive is not based on information in the variable, it is based on the type definition for the architecture. Pointer issues, such as a pointer not being valid, are a lot tougher to debug, for me at least.
That said, in my opinion, I still think it is poor practice to dereference pointers that are not initialized. To each their own, I suppose. Let us just hope that this technique is not used by accident in a construct that actually dereferences the pointer. That would be a bit difficult to debug, as the results would vary.
The macro you are proposing would simply expand to the same code, in place, once the preprocessor has parsed it. But, again, to each their own.
My thoughts were these:
- Pointer values or variable values (in theory, not range) are separate from the space allocated to them.
- Space allocated to variables/objects depends on the architecture's/language grammar's/programmer's definition of that type, not on an individual instantiation of that type.
These are the other reasons I may opt to use a general-case 'type' to define the size, instead of an individual case of a type (I am defining individual/special cases of a type to be instantiations of objects, variables, etc.).
I guess my conclusion would be to use whatever method is more readable and in good practice for your specifications, while still avoiding pointer issues.
All that being said, the OP never posted complete code. We don't know what the variable r is.
I'm assuming from the code that it's a measurement of some kind, thus a float or a double. If that's the case, using this method is somewhat confusing, but not illegal.
If it is an int, then I would rather see size_t used instead.
If it's a void pointer that changes its type throughout the program and needs a different amount of memory allocated for each type, then I would say use the largest type that is used. That uses a little more memory, but is probably safer than reallocating memory as the type changes.
Yes, if it is a void pointer, much consideration would have to be given to size. In the case of an object, structure, or array this could be anywhere from 1 to N, where N mod word size = 0 (in theory, the largest size ever needed is unbounded, in multiples of the word size). I agree, if a singular primitive is what the pointer refers to, for flexibility it would be a good idea to allocate the largest primitive size to the pointer's space.
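A minimal sketch of the "allocate for the largest type up front" idea discussed above (the union and member names are purely illustrative):
Code:
#include <stdlib.h>

/* a union is as large as its largest member, so one allocation covers
   every type the pointer might refer to over its lifetime */
union slot { int i; float f; double d; };

int main(void)
{
    void *r = malloc(sizeof(union slot));
    if (r != NULL) {
        *(double *)r = 1.5;   /* the block can hold a double now...        */
        *(int *)r = 42;       /* ...and an int later, without reallocating */
        free(r);
    }
    return 0;
}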
elyk1212, I agree that dereferencing a pointer of unknown context is a really bad idea. That being said, sizeof() is evaluated at compile time, so it is not problematic. Also, in some cases, a single pointer may have memory allocated in a number of places (say, a general purpose char* buffer), and it is easy to miss updating one of those locations after a change (to, say, wchar_t), which can also cause bugs.
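A minimal sketch of why the sizeof(*r) form is safe even though r holds no value yet (assuming no variable-length array types are involved, sizeof never evaluates its operand):
Code:
#include <stdlib.h>

int main(void)
{
    double *r;              /* uninitialized: holds an indeterminate value    */
    r = malloc(sizeof *r);  /* fine: sizeof only inspects the type of *r,
                               r is never dereferenced at run time            */
    if (r != NULL) {
        *r = 3.14;          /* a real dereference only after malloc succeeds  */
        free(r);
    }
    return 0;
}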
On the flip side of all this, I am honestly not sure what sizeof(*(void *)) returns. What is the size of a (void)?
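For what it's worth, strictly conforming C does not allow sizeof on an incomplete type such as void; GCC accepts it as an extension (the same one that permits arithmetic on void pointers) and evaluates it as 1, warning, if I remember right, only under -Wpointer-arith or -pedantic. A small check:
Code:
#include <stdio.h>

int main(void)
{
    void *p = 0;

    /* both lines are constraint violations in ISO C (void is incomplete);
       GCC compiles them as an extension and prints 1 for each */
    printf("%zu\n", sizeof(void));
    printf("%zu\n", sizeof *p);
    return 0;
}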
I think it is all really a matter of personal preferences and specific usage.