LinuxQuestions.org

-   Programming (https://www.linuxquestions.org/questions/programming-9/)
-   Valgrind reports memory leak at opendir (https://www.linuxquestions.org/questions/programming-9/valgrind-reports-memory-leak-at-opendir-495092/)

Rayven 10-24-2006 07:42 AM

Valgrind reports memory leak at opendir
 
When I run Valgrind memcheck on my C application, it reports that I have a memory leak on an opendir command:

Code:

if( (dp = opendir( inputDir )) == NULL )
{
    logError( "Could not open directory path." );
    return -1;
}

The inputDir variable is a char * that is freed when the program terminates. However, this application is designed to run continuously for very long stretches without exiting. This is the only memory leak Valgrind reports, and I do not understand why it is reported or how to fix it. I am running under Red Hat WS 4 with the Portland Group compilers.

Thanks

dmail 10-24-2006 08:00 AM

Is this in main or in its own function? If it is in main and opendir fails, you return -1 (exit).
If it succeeds, do you close the stream?

Rayven 10-24-2006 08:38 AM

Quote:

Originally Posted by dmail
Is this in main or in its own function? If it is in main and opendir fails, you return -1 (exit).
If it succeeds, do you close the stream?

No, this is not in main, it is in its own function. I guess I should have put this code in just after the opendir call:

Code:

if( (dp = opendir( inputDir )) == NULL )  <======== Valgrind reports error on this line
{
    logError( "Could not open directory path." );
    return -1;
}

while( (dirp = readdir( dp )) != NULL )
{
    if( (strcmp( dirp->d_name, "." ) != 0) &&
        (strcmp( dirp->d_name, ".." ) != 0) )
    {
        // fork child process to handle the file contents
    }
}

if( closedir( dp ) < 0 )
{
  logError( "Error closing file" );
}

return numProcessed;


jlliagre 11-01-2006 02:33 AM

I read elsewhere that you are developing on both Red Hat and Solaris.
If you are compiling on SPARC hardware, I would suggest an alternative method for finding memory leaks on the Solaris side: the Sun Studio 11 compiler and dbx RTC.

lorebett 11-01-2006 03:42 AM

Sometimes Valgrind reports memory leaks in library functions, because some library functions do not actually free memory but keep it in a pool. Might this be the case?

jim mcnamara 11-01-2006 03:03 PM

Quote:

opendir()
opens the directory dirname and associates a directory
stream with it. opendir() returns a pointer used to
identify the directory stream in subsequent operations.
opendir() uses malloc(3C) to allocate memory.

This is from the man page for dirent, which you should read.

Add a free(dp) to release the DIR struct when you are done with it.

exvor 11-01-2006 03:38 PM

opendir returns a pointer to a DIR stream, so free(dp) wouldn't make any sense. He does close the stream properly. So if there is a problem with malloc inside the opendir function, then it's a problem with the standard library call opendir. If this is the case, try to locate the source for the library and see if it properly frees the memory. I'm guessing this is going to be encapsulated with the rest of the similar calls.

xhi 11-01-2006 06:56 PM

Quote:

Originally Posted by jim mcnamara
This is from the man page for dirent, which you should read.

Add a free(dp) to release the DIR struct when you are done with it.

and what is wrong with using closedir()?

tuxdev 11-01-2006 07:43 PM

closedir() is better; just doing free() may leak a file handle. Also, never, ever free a FILE* or DIR* directly, because those are managed by the system.

For the OP issue, what happens in valgrind if you have a trivial program that simply opens and closes a directory?

Rayven 11-02-2006 10:14 AM

I am attempting to respond to multiple questions at once, so this is a rather long post! :study:

Quote:

Originally Posted by jlliagre
I read elsewhere that you are developing on both Red Hat and Solaris.
If you are compiling on SPARC hardware, I would suggest an alternative method for finding memory leaks on the Solaris side: the Sun Studio 11 compiler and dbx RTC.

I am using Sun Studio 10 on the Solaris side; however, I have not run the code through Studio 10 to look for memory leaks on Solaris, since this application will primarily run on RHEL4.

Quote:

Originally Posted by lorebett
Sometimes Valgrind reports memory leaks in library functions, because some library functions do not actually free memory but keep it in a pool. Might this be the case?

This may be why my application's memory usage is increasing so quickly. Is there a C function call to force RHEL to release the memory from the pool? The reason I believe this may be the problem is that when I stop the application and continue to watch the memory, it slowly drops by about 50MB or so, and then holds at that value.

Quote:

Originally Posted by tuxdev
closedir() is better; just doing free() may leak a file handle. Also, never, ever free a FILE* or DIR* directly, because those are managed by the system.

For the OP issue, what happens in valgrind if you have a trivial program that simply opens and closes a directory?

I ran a quick test using opendir/closedir (please excuse any syntax/semantic errors in the code below; I typed this in from memory, and I have a bad memory!)

Code:

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

int main( int argc, char *argv[] )
{
    DIR *dp;
    struct dirent *dirp;

    if( (dp = opendir( argv[1] )) == NULL )
    {
        fprintf( stderr, "Could not open %s\n", argv[1] );
        exit( 1 );
    }

    while( (dirp = readdir( dp )) != NULL )
    {
        fprintf( stderr, "File %s\n", dirp->d_name );
    }

    closedir( dp );

    exit( 0 );
}

Valgrind did not report any memory issues with the test code, even when I wrapped the opendir/closedir code in a for loop. I am wondering if the memory leak occurs when a child process does not "finish" before closedir() is called. The child processes do not use anything involving the DIR or dirent pointers; the filenames are copied before fork is called.

Code:


if( (dp = opendir( inputDir )) == NULL )  <======== Valgrind reports error on this line
{
    logError( "Could not open directory path." );
    return -1;
}

while( (dirp = readdir( dp )) != NULL )
{
    if( (strcmp( dirp->d_name, "." ) != 0) &&
        (strcmp( dirp->d_name, ".." ) != 0) )
    {
        // check if valid file using dirp->d_name

        // Copy dirp->d_name using snprintf, adding the path and new prefix

        // fork child process to handle the file contents

        // Child process does not interact with the parent process at all

        // Parent process moves to the next file in the list
    }
}

if( closedir( dp ) < 0 )
{
    logError( "Error closing directory" );
}

return numProcessed;

I do not have direct access to the code right now. It is not on this machine, and the printer is down. If needed, I will try and get more code.

lorebett 11-02-2006 10:23 AM

Quote:

Originally Posted by lorebett
Sometimes Valgrind reports memory leaks in library functions, because some library functions do not actually free memory but keep it in a pool. Might this be the case?

By the way, if this is the case (and for library functions it usually is), you can simply suppress this leak warning by using Valgrind's suppression options.
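A sketch of what such a suppression entry might look like — the entry name is arbitrary, and the fun: frames must match what Valgrind actually reports (running with --gen-suppressions=all prints ready-made entries you can paste into a file):

```
{
   opendir_pool_leak
   Memcheck:Leak
   fun:malloc
   fun:opendir
}
```

Save it to a file and run valgrind with --suppressions=that-file; the opendir report should then disappear from the summary.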

jlliagre 11-02-2006 02:03 PM

Quote:

Originally Posted by Rayven
I am using Sun Studio 10 on the Solaris side; however, I have not run the code through Studio 10 to look for memory leaks on Solaris, since this application will primarily run on RHEL4.

Testing an application on different platforms is a good thing, as it often allows you to identify bugs faster.
Quote:

This may be why my application's memory usage is increasing so quickly. Is there a C function call to force RHEL to release the memory from the pool? The reason I believe this may be the problem is that when I stop the application and continue to watch the memory, it slowly drops by about 50MB or so, and then holds at that value.
What memory value are you referring to?

