I believe I have found a serious flaw in memory management using malloc() and free(). This sounds incredible, but please bear with me. I have made a test program that causes the problem, and the program is too simple to have a bug. It is listed below.
Test Program Logic
Code:
1. Read two input parameters: allocation amount and loop count.
2. Accept input of 'return' to continue or 'x' to exit.
3. Loop for loop count:
     malloc() a random amount of memory
     (average = the allocation amount from step 1)
4. Loop for loop count:
     free() each block of memory allocated in step 3
5. Loop back to step 2.
Note that whenever the program is waiting for input, all of the memory it allocated has already been freed.
Procedure to test:
1. Start a resource monitor so you can watch the amount of real memory in use.
2. Start the test program in one terminal. I used amount = 100 and loops = 10 million, so that roughly an even 1 gigabyte is allocated in total (10 million blocks averaging about 100 bytes each).
3. Watch the monitor. Memory use may go up and then come back down, which is the expected behavior, since the program frees all the memory immediately after allocating it. The 10 million calls to malloc() and free() normally take a few seconds. Cycle the program with 'return' until you see the memory usage go up and not come back down. This is abnormal and deadly.
4. Leave the first test program in a state where the allocated memory stays high.
5. Start a second test program in another terminal and repeat the procedure. After a few iterations it will also fail to release its memory.
6. Do the same in a 3rd terminal window.
7. The system will eventually run out of memory and slow to a crawl, with swap usage slowly increasing. This happens even though plenty of memory is wrongly retained by test programs that have released everything and are just sitting at the input prompt.
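For anyone following the procedure above, the resident memory of a single process can also be read directly from /proc instead of a graphical resource monitor. This is a Linux-specific sketch; `$$` below is the shell's own PID, so substitute the PID of the running malloctest (e.g. from `pgrep malloctest`) to watch the test program itself:

```shell
# Read the resident set size (VmRSS) of a process from /proc.
# "$$" is this shell's own PID; replace it with the PID of the
# malloctest process to watch the test program instead.
grep VmRSS /proc/$$/status

# Or watch overall system memory usage once per second:
# watch -n 1 free -m
```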
If the randomized malloc() amount is replaced with a fixed amount, the test programs work as expected (i.e. malloc() and free() work as expected). The bug is related to the highly variable sizes of the allocations. To verify this, comment out the randomized line in the code below and uncomment the fixed-amount line.
I am using Ubuntu 10.04 with kernel 2.6.32, 64-bit.
Ubuntu 8.10 with kernel 2.6.27 also exhibits the problem, but with different characteristics: each test program fails to release its memory on the first iteration only, and then works normally forever after that. A memory crisis can still be caused by running many test programs for one cycle each and leaving them waiting for input.
Can someone else please test this?
If I have gotten anyone's attention, what is the next step to get the attention of someone able to fix it?
Mike
Code:
/**************************************************************************
test malloc() and free()
compile: gcc -o malloctest malloctest.c
***************************************************************************/
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv)
{
int nn, amount, loops, bytes;
char **pmem, x;
amount = loops = 0;
printf("enter allocation amount: "); // bytes to allocate each loop
nn = scanf("%d",&amount);
if (nn != 1) return 0;
if (amount < 1) return 0;
printf("enter allocation loop count: "); // number of loops
nn = scanf("%d",&loops);
if (nn != 1) return 0;
if (loops < 1) return 0;
while (getchar() != '\n') ;  // discard the rest of the line so the first prompt waits
while (1)
{
printf("return to repeat, x to exit \n");
scanf("%c",&x);
if (x == 'x') return 0;
printf("allocating %d blocks of size %d ... \n",loops,amount);
pmem = (char **) malloc(loops * sizeof(char *));
if (! pmem) {
printf("malloc failure \n");
return 0;
}
for (nn = 0; nn < loops; nn++)
{
bytes = 1 + rand() % (2 * amount); // randomized amount
/// bytes = 1 + amount; // fixed amount
pmem[nn] = malloc(bytes);
if (! pmem[nn]) {
printf("malloc failure \n");
return 0;
}
}
printf("freeing the allocated blocks ... \n");
for (nn = 0; nn < loops; nn++)
free(pmem[nn]);
free(pmem);
}
}