C programming: memory use?
Hi all,
I am busy writing some simple apps and have a question. I defined a struct:

Code:
struct entitysize {
    char *str1;
    char *str2;
    int nr;
};

In function1 I allocate memory to hold five of these structs. The pointer to the allocated memory is a global variable (varfilelist). In function2 I initialize all the structs: Code:
void init_file_dir_lists() {
Now my question :p As soon as I change the pointer, what will happen to the memory that is allocated for the string "test"? Will it be lost somewhere, as if I hadn't freed dynamically allocated memory? Thanks in advance! |
The code you quoted isn't consistent between the parts, which makes it hard to be sure what you mean, but I think I can explain the key concept you seem to be missing:
It is unfortunate that C allows that. A string literal ought to be assigned only to pointers declared `char const *`. When you make that assignment, no new memory is allocated and no text is copied; you are just setting a pointer. The memory for a string literal such as "test" is allocated at compile time. The instruction you quoted just sets ptrentity->entity to the address of that compile-time allocation.
You used that string literal "test" inside a loop inside a function. Every time that instruction is executed, it uses the address of the same literal; it does not allocate anything at run time. You also used identical literals "test" in more than one place, and the compiler may choose to share a single copy of the string between them. |
You said that str1 is a pointer to an array of characters. You then assigned one of those pointers to contain (of course...) the address of a static string array, allocated by the compiler, which contains: [ 't', 'e', 's', 't', '\0' ]
Remember that C is, by design, a very low-level language; by default it is not much more than assembler. C++ builds on top of it, partly through a much larger runtime library and more expressive semantics. (Early C++ implementations were themselves built on C.) If you are accustomed to other programming languages (many of which were originally implemented in C), you can be misled into thinking that you see in C what is not actually there. |
Thanks for your response! It certainly made a few things clearer!
Currently I am using the readdir function to get the file entries in a directory. For each file I use stat to get the file size. If the size is in the top 5 (kept in the structs located in the dynamically allocated memory), the entry gets put into the allocated memory. If it is the new biggest file, I walk over all the structs and move the pointers to change the order, placing the new biggest file at the end. The smallest one then falls off: the pointers in the struct at the beginning are overwritten to point at the new smallest entry in the top 5.

What I wonder is whether I actually lose the memory for that smallest entry (a memory leak), or not, since it is only the array of structs that is dynamically allocated; all the rest are char *. However, it could be that readdir/stat allocate memory themselves and I need to free that space... In that case I would waste a lot of memory when going over 1000s of files. Thanks again! |
Quote:
Code:
char d_name[256]; /* filename */
|
Personally, I'd use a C99 structure similar to the following: Code:
struct fileinfo {
    off_t size;   /* the sort key; also kept in a separate size array */
    char  name[]; /* C99 flexible array member for the file name
                     (sketch: the exact fields in the original post
                     were cut off) */
};
For each new directory entry, obtain its file size and compare it against the initial entry in the file size array. If the new file is larger, discard the initial fileinfo structure and create a new one for the new directory entry. Also update the initial file size array entry to reflect the new file size. Then scan through the file size array to see if there is a smaller file size than the new initial entry. If so, swap the two, so that the entry corresponding to the smallest file is always first in the arrays. Note: you do not need to do more than one swap, total. Scan the array first, then do one swap, if necessary. This is also why the sort key (the file size array) is kept separate: so that it can be scanned with minimal CPU use. For small arrays (depends on the CPU, but say up to a couple of dozen entries) this is the fastest option.

The above is reasonably fast for several dozen or even hundreds of entries, but it scales poorly when the arrays get much bigger. In other words, with enough entries in the array, it gets slow.

If you wanted to keep a lot of fileinfo structures, order the arrays as a binary min-heap. The element at index i has its parent (with equal or smaller file size) at index (i-1)/2, and possibly children (with equal or larger file sizes) at indices 2*i+1 and 2*i+2. The entry corresponding to the smallest known file is at index 0, so most of the time you'll simply be comparing against that one. When you replace it, you percolate the entry down, swapping it with its smaller child, until the heap property (parent filesize <= filesize <= child filesize) is restored. This takes just log₂N steps even in the worst case, so it is very fast even if the array size N gets very large. (To append to the heap, you add the data at the end, then percolate it up, swapping with the parent, until the parent's file size is smaller or equal.) |