LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - General (https://www.linuxquestions.org/questions/linux-general-1/)
-   -   Hep me---File size limit exceeded--error (https://www.linuxquestions.org/questions/linux-general-1/hep-me-file-size-limit-exceeded-error-572639/)

Ratheeshshenoy 07-27-2007 12:44 AM

Help me---File size limit exceeded--error
 
Hi,

I am running a tool whose output is a text file. The tool stops with the error 'File size limit exceeded' once the file reaches 2GB.
The file system is ext3,
and the OS is Red Hat Enterprise Linux AS.
Please suggest a Linux OS version that supports LFS.

Tinkster 07-27-2007 04:41 AM

Hi, and welcome to LQ,

And what do you mean when you say LFS, Linux from Scratch?
And what is the tool you're using?



Cheers,
Tink

Ratheeshshenoy 07-27-2007 07:23 AM

Quote:

Originally Posted by Tinkster
Hi, and welcome to LQ,

And what do you mean when you say LFS, Linux from Scratch?
And what is the tool you're using?



Cheers,
Tink

I meant Large File Support. The tool I'm running is a third-party tool.
The ulimit command says it's unlimited, there is no quota set, and I am logged in as root.

I know the output file will grow beyond 4GB, but it stops after 2GB with the error 'file size limit exceeded'.

Thanks
Ratheesh

wjevans_7d1@yahoo.co 07-27-2007 07:29 AM

If ulimit is not the problem, then the problem is with the third party tool.

If it was written in C (which is quite likely), then they probably did not compile it to use LFS. Many developers either are unaware that they need to do something special to allow files to be larger than 2GB, or (worse) they are unaware that their customers might need to deal with files that large.

Ratheeshshenoy 07-27-2007 07:38 AM

Quote:

Originally Posted by wjevans_7d1@yahoo.co
If ulimit is not the problem, then the problem is with the third party tool.

If it was written in C (which is quite likely), then they probably did not compile it to use LFS. Many developers either are unaware that they need to do something special to allow files to be larger than 2GB, or (worse) they are unaware that their customers might need to deal with files that large.

I am using Red Hat Enterprise Linux AS. Does it support LFS? If not, could you please tell me some Linux versions that have LFS support?

wjevans_7d1@yahoo.co 07-27-2007 07:58 AM

It is almost certain that your system has Large File Support. But "almost certain" isn't good enough, I'm sure. (grin)

Run the following shell script. It will create a C program, compile it, and run it. That C program will attempt to create a file called data which is slightly over 2GB. If _FILE_OFFSET_BITS had not been defined, that program would have failed.

Hope this helps.

Code:

#!/bin/sh

cat > testlarge.c <<'EOD'
#define _FILE_OFFSET_BITS 64  /* must come before the #includes */

#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
  int     jndex;
  int     phyle;

  ssize_t write_result;

  char    kbuffer[1024] = {0};

  unlink("data");

  phyle = open("data", O_RDWR | O_CREAT, 0600);

  if (phyle == -1)
  {
    perror("data");

    exit(1);
  }

  /* 2*1024*1024 + 1 writes of 1KB each: just over 2GB */
  for (jndex = 0; jndex <= 1024 * 1024 * 2; jndex++)
  {
    write_result = write(phyle, kbuffer, sizeof(kbuffer));

    if (write_result != (ssize_t)sizeof(kbuffer))
    {
      fprintf(stderr,
              "\nwrite result was %zd, not %zu\n",
              write_result,
              sizeof(kbuffer));

      break;  /* <--------- */
    }

    /* countdown, printed once every 128MB */
    if (jndex % (128 * 1024) == 0)
    {
      printf("%d ", (1024 * 1024 * 2 - jndex) / (128 * 1024));
      fflush(stdout);
    }
  }

  printf("\n");

  if (close(phyle))
  {
    perror("close");

    exit(1);
  }

  return 0;

} /* main() */
EOD

cc testlarge.c -o testlarge

status=$?

if [ "$status" -ne 0 ]
then
  echo "oops -- compile errors"
else
  ./testlarge
fi


Ratheeshshenoy 07-30-2007 05:47 AM

Quote:

Originally Posted by wjevans_7d1@yahoo.co
It is almost certain that your system has Large File Support. But "almost certain" isn't good enough, I'm sure. (grin)

Run the following shell script. It will create a C program, compile it, and run it. That C program will attempt to create a file called data which is slightly over 2GB. If _FILE_OFFSET_BITS had not been defined, that program would have failed.

Hope this helps.

Hi,

Thanks to all. I ran the command

dd if=/dev/zero of=largefile bs=100M count=21

which created a file greater than 2GB, so I can confirm that the tool has the problem.

Thanks again for the kind help, and please let me know if there is more information on this.

Thanks and regards,
Ratheesh

