Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
WTF? That is called a MEMORY LEAK and is very bad programming practice, and telling a newbie that it is OK is just an outright stupid thing to do.
I like to assume that most programmers are able to think, so they don't need to be managed by a bunch of overly strict rules.
When you write general purpose functions and/or components of large projects, you should expect any memory leak to be potentially large enough that it must be fixed for your code to be correct.
But at the top level of a program, or in trivial programs, most memory leaks can be easily seen to be trivial. Fixing a trivial memory leak wastes your time as a programmer and wastes the computer's time and memory at run time. It takes at least code space and maybe data space to fix a trivial memory leak and no memory is usefully released by doing so.
I think I clearly stated the conditions under which it is OK to ignore the memory leak, and any programmer who can think for himself will recognize those conditions are only common in trivial programs. Those conditions also occur in various places in non trivial programs. But the vast majority of memory allocation points in non trivial programs would be serious memory leaks if freeing memory were not done correctly.
Distribution: M$ Windows / Debian / Ubuntu / DSL / many others
Posts: 2,339
Rep:
Quote:
Originally Posted by johnsfine
I like to assume that most programmers are able to think, so they don't need to be managed by a bunch of overly strict rules.
When you write general purpose functions and/or components of large projects, you should expect any memory leak to be potentially large enough that it must be fixed for your code to be correct.
But at the top level of a program, or in trivial programs, most memory leaks can be easily seen to be trivial. Fixing a trivial memory leak wastes your time as a programmer and wastes the computer's time and memory at run time. It takes at least code space and maybe data space to fix a trivial memory leak and no memory is usefully released by doing so.
I think I clearly stated the conditions under which it is OK to ignore the memory leak, and any programmer who can think for himself will recognize those conditions are only common in trivial programs. Those conditions also occur in various places in non trivial programs. But the vast majority of memory allocation points in non trivial programs would be serious memory leaks if freeing memory were not done correctly.
well i learned programming in DOS, where you want to squeeze every byte out of the 640K, and i was told never to have memory leaks, so yeah, i think it's a bad idea and not a waste of space or time.
I don't want to beat this question to death, but:
I learned to program in environments where 640KB was inconceivably large, so you really had to squeeze every byte. So knowing when not to fix a memory leak was critical.
My point was that sometimes you know an absolute upper bound on the cumulative losses from a memory leak. Sometimes that upper bound is effectively zero (because the memory can't be released until the program is about to exit anyway). If the code or data needed to track and release the memory is larger than the upper bound on the cumulative losses, then fixing the memory leak is worse than not fixing it.
With gigabytes of virtual memory, the above may be trivial. (Fixing the leak may still be "worse" than not fixing it, but on a scale of don't care vs. don't care). But I'd rather see programmers think in preference to just following rules.
You were told "never have memory leaks". Many people have told me the same. If that were the worst inflexible rule I'd been given for programming, I might be OK with inflexible rules in general and follow that one too. But lots of the common inflexible rules are much worse, so as long as I think rather than blindly follow rules, I'm going to think before following that one as well.
Quote:
Originally Posted by johnsfine
I don't want to beat this question to death, but:
I learned to program in environments where 640KB was inconceivably large, so you really had to squeeze every byte. So knowing when not to fix a memory leak was critical.
My point was that sometimes you know an absolute upper bound on the cumulative losses from a memory leak. Sometimes that upper bound is effectively zero (because the memory can't be released until the program is about to exit anyway). If the code or data needed to track and release the memory is larger than the upper bound on the cumulative losses, then fixing the memory leak is worse than not fixing it.
With gigabytes of virtual memory, the above may be trivial. (Fixing the leak may still be "worse" than not fixing it, but on a scale of don't care vs. don't care). But I'd rather see programmers think in preference to just following rules.
You were told "never have memory leaks". Many people have told me the same. If that were the worst inflexible rule I'd been given for programming, I might be OK with inflexible rules in general and follow that one too. But lots of the common inflexible rules are much worse, so as long as I think rather than blindly follow rules, I'm going to think before following that one as well.
i was also told goto and global variables are bad, and they seem fine
but a memory leak is a terrible thing
what if someone decides to use your program in a script and you did not expect it? it would loop 100s, even 1000s, of times, eating up all the memory
There's really no point in discussing this with you, since you're not going to change your mind.
That said, if a program had the memory leak as described by johnsfine and you used the program in a script and invoked it 1000s of times, guess what would go wrong? Absolutely nothing.
That's because all the memory from the program is freed at exit time. That's why you don't bother explicitly freeing it beforehand: there is really no point in doing that, since it just adds extra code that doesn't do anything useful. The "memory leak" doesn't survive program termination. The memory doesn't stay "lost".
Now had you said that someone took johnsfine's program and modified it, such that it itself invoked the memory-leak condition multiple times, then you would have a point. But at that stage, it's the new programmer's responsibility to recognize that he now has a non-trivial memory leak that he has to fix. None of this invalidates what johnsfine said.
Global variables are bad: you never know who's using them. Have you not heard of information hiding? Gotos are bad too: you never know where you came from, although gotos can be useful for exception-handling code.
There's really no point in discussing this with you, since you're not going to change your mind.
You're probably right about that and definitely right about memory at program exit. But I also can't resist adding some explanation:
The memory allocation we are talking about is a two level activity. The OS allocates memory to the malloc module inside your process and malloc inside your process allocates memory to whatever inside your process calls it.
When you free memory inside your process, it almost always goes back only to the malloc module inside your process. It almost never causes the malloc module to give memory back to the OS (explaining that "almost never" as opposed to "never" would require another whole discussion).
The OS remembers all the memory it gave to your process. When the process exits, the OS takes all that memory back. The OS has no way to even know (nor any reason to care) whether the memory it is taking back was properly returned by your code to the malloc module's free pool inside your process, or whether that memory was leaked inside your process. Either way, it is memory the OS gave your process that wasn't given back (all the way to the OS) before your process exits, so it is taken back when your process exits.
On those other topics:
In big projects, global variables usually cause more confusion than they are worth.
Beginning programmers ought to learn fairly soon how to avoid global variables even in simple situations, where using the global would be less confusing. Otherwise, they won't know how to avoid globals when it becomes important to do so.
But when a beginner is focusing on some other item of new material, I don't think he should be locked into a "no globals ever" rule to complicate simple projects and distract him from the current topic. (Similarly, a beginner should learn to avoid memory leaks on some examples that are barely more serious than the "trivial" discussed above, or he won't know how when it matters. But "zero memory leaks" is also a pointless distraction.)
Many of the tricks used to avoid globals (class statics, etc.) are sometimes helpful ways to organize the information and sometimes extra sources of confusion even worse than the globals would have been. You need to focus on organizing the information, not on following a "no globals" rule.
GOTOs: I use more GOTOs than most programmers and I am confident that I use them well. To some extent, I use more because I tend to work on the kind of problem where GOTOs make the flow of control clearer. To some extent, I use more because I avoid the constructs used instead of a GOTO that make the flow of control obscure.
The proven universal method to avoid a GOTO is by using a state variable instead (plus some tests of that state variable plus rarely also some extra levels of loop to assist the state variable). That almost always obscures the flow of control.
When you feel like a GOTO is required, you should think through the flow of control. There is usually a better way to both avoid the GOTO and make the flow of control clearer. The common/easy way to eliminate the GOTO is to use a state variable instead. I see that all the time. Someone who could have rethought the flow of control to make it better than with a GOTO instead just follows the "no GOTOs" rule using a state variable to make flow of control worse than with a GOTO.
After you rethink the flow of control and there are no choices other than using a GOTO or adding a state variable, I usually use a GOTO. But there are always exceptions. Sometimes a state variable is clearer than a GOTO.
I learned to program in environments where 640KB was inconceivably large, so you really had to squeeze every byte. So knowing when not to fix a memory leak was critical.
My point was that sometimes you know an absolute upper bound on the cumulative losses from a memory leak. Sometimes that upper bound is effectively zero (because the memory can't be released until the program is about to exit anyway). If the code or data needed to track and release the memory is larger than the upper bound on the cumulative losses, then fixing the memory leak is worse than not fixing it.
With gigabytes of virtual memory, the above may be trivial. (Fixing the leak may still be "worse" than not fixing it, but on a scale of don't care vs. don't care). But I'd rather see programmers think in preference to just following rules.
You were told "never have memory leaks". Many people have told me the same. If that were the worst inflexible rule I'd been given for programming, I might be OK with inflexible rules in general and follow that one too. But lots of the common inflexible rules are much worse, so as long as I think rather than blindly follow rules, I'm going to think before following that one as well.
Memory leaks only matter - or even have meaning - in multitasking environments. If you are in a single task or foreground-background (RT-11) environment, memory leaks are pretty much meaningless; you have the whole machine anyway.
While I understand your argument, I don't think the overhead associated with issuing a free() to match an earlier malloc() is significant, and failing to do so is just sloppy practice.
You start with this trivial leak that you choose to ignore, then later that program gets merged into a much larger project and it no longer is the top-level routine, and suddenly your leak IS significant and someone has to debug to find it.
i was also told goto and global variables are bad, and they seem fine
The first time you find yourself debugging a large program that is full of gotos and global variables, you'll change your mind.
In fact, you will find that careless use of global variables, by itself, will result in a much less scalable (read: able to grow and expand) program, AND will GREATLY increase debugging time when you change ANYTHING in the program.
Quote:
but a memory leak is a terrible thing
Often trivial. Not that I agree with the practice of not freeing memory, but as Johnsfine has already pointed out, it often hurts nothing.
Quote:
what if someone decides to use your program in a script and you did not expect it? it would loop 100s, even 1000s, of times, eating up all the memory
Nope. Doesn't work that way. When used in a script, your program is reinvoked on every pass through the script. Every time it exits, all memory is freed, and when it is reinvoked it starts over.
last post about this, i promise:
Turbo C++ does not free memory you don't free yourself!
and goto is good for things like this
Code:
while (bla1) {
    while (bla2) {
        if (bla3) goto endit;  /* stop both loops */
    }
}
endit: ;
You shouldn't do this. I can point you to languages and implementations where doing this will foul up your stack because you are not allowing the system to clean up the loops you were in when you exit them.
You might claim that not cleaning up correctly is a compiler or an environment bug, and I will agree with you. But your code will still be broken, and you will have a very hard time finding it.
Play it safe and exit cleanly from your while statements. Use a state variable to terminate your whiles and set the variable to the termination value when you want out. Or else use break successively through the various levels.
GOTOs: I use more GOTOs than most programmers and I am confident that I use them well. To some extent, I use more because I tend to work on the kind of problem where GOTOs make the flow of control clearer. To some extent, I use more because I avoid the constructs used instead of a GOTO that make the flow of control obscure.
I also use gotos occasionally. I try to avoid it because usually it makes control flow less obvious. There are some places, though, where speed and efficiency is the primary consideration ahead of everything else.
Commonly, therefore, I use gotos in interrupt service routines, and in time-sensitive routines that are talking to other hardware, either over a PCI bus or via some other interface.
Quote:
Originally Posted by jiml8
You shouldn't do this. I can point you to languages and implementations where doing this will foul up your stack because you are not allowing the system to clean up the loops you were in when you exit them.
You might claim that not cleaning up correctly is a compiler or an environment bug, and I will agree with you. But your code will still be broken, and you will have a very hard time finding it.
Play it safe and exit cleanly from your while statements. Use a state variable to terminate your whiles and set the variable to the termination value when you want out. Or else use break successively through the various levels.
actually it works fine and uses no extra space
Quote:
Nope. Doesn't work that way. When used in a script, your program is reinvoked on every pass through the script. Every time it exits, all memory is freed, and when it is reinvoked it starts over.
maybe not, but what if you have a function that does not free its memory
and you run it in a loop without realizing it?
chances are there will be some big trouble
That may indeed be true with gcc. But if you will take the time and trouble to re-read what I wrote (and I have had to bounce you about that before...) you will see that I said: "I can point you to languages and implementations where doing this will foul up your stack..."
That it works in one environment does not mean it will work in another environment.
Quote:
maybe not, but what if you have a function that does not free its memory
and you run it in a loop without realizing it?
And that is a completely different condition than the condition you originally specified, now isn't it. And furthermore, it is a condition that I, gzunk, and johnsfine have all already pointed out.
You do yourself no credit this way. Learn to read carefully.