Programming: This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
05-12-2004, 04:31 AM | #1
LQ Newbie | Registered: May 2004 | Posts: 4
memory full when reading large files with fgetc()
I'm trying to read a large text file (about 600MB) using fgetc(), processing the file character by character and not storing anything in memory or using any buffers.
But it seems that the OS caches the file while I'm reading it, and memory usage keeps growing until it is full and everything becomes very slow.
So, is there a way to set a maximum size for the buffers/cache used for caching files?
The code is something like this:
#include <stdio.h>

int main(void)
{
    int ch;  /* int, not char: fgetc() returns an int so EOF can be detected */
    FILE *myf = fopen("1train", "r");

    if (myf == NULL)
        return 1;

    while ((ch = fgetc(myf)) != EOF)
    {
        /* process ch */
    }

    fclose(myf);
    return 0;
}
05-12-2004, 08:50 AM | #2
Senior Member | Registered: Jan 2004 | Location: Oregon, USA | Distribution: Slackware | Posts: 1,246
I don't think buffers/cache is your problem. The kernel will automatically free memory used for that as memory is needed for applications. Maybe fopen() allocates memory for the entire file. You might want to try the lower-level file routines (open(), et al.)
Code:
#include <fcntl.h>   /* open(), O_RDONLY */
#include <unistd.h>  /* read(), close() */

char ch;
int fd = open("1train", O_RDONLY);

while (read(fd, &ch, 1) > 0)
{
    /* perform operations on ch */
}
close(fd);
I don't know if that will help, but it will speed your program up anyway 
Last edited by itsme86; 05-12-2004 at 08:51 AM.
05-12-2004, 09:14 AM | #3
Member | Registered: Mar 2004 | Location: Upstate NY | Distribution: Slackware/YDL | Posts: 77
Have you confirmed that the memory image is actually growing?
If it's not, then the slowdown may be caused by the huge number of I/O operations being performed. I've had this experience, but I usually only notice it when I'm doing large amounts of console output as well (for a machine simulator and a Dinero clone).
05-12-2004, 09:17 AM | #4
Member | Registered: May 2002 | Posts: 964
Something does not sound right here.
The standard I/O routines are probably not the cause; I routinely work with 2GB+ files with no problems. Input file buffers get flushed and refreshed.
They normally do not expand forever, unless someone changed something in the kernel.
What are you doing with the data inside your code after you read in the character?
Are you sure your process is the problem?
05-12-2004, 11:07 AM | #5
LQ Newbie (Original Poster) | Registered: May 2004 | Posts: 4
No, open() and read() did not solve the problem.
This is what I get while running the program:
Code:
$ free
             total       used       free     shared    buffers     cached
Mem:        497964     493592       4372          0        876     424824
-/+ buffers/cache:      67892     430072
Swap:      1012084        496    1011588
As you can see, the memory is filled with cached data, and the whole system became very slow.
I'm sure there is no problem with the standard I/O routines. I'm wondering if it is possible to set a maximum for buffers/cache? I'm using Fedora Core 1.
Thanks!
05-12-2004, 02:32 PM | #6
Member | Registered: Mar 2004 | Location: Massachusetts | Distribution: Debian | Posts: 557
Your swap usage is under a meg, so I'm not sure why this should be slowing you down. It is normal for the kernel to cache as much of the disk as possible -- I would guess that the large amount cached simply means that no other process is demanding use of RAM.
Here are my stats at the moment:
Code:
15:28 aluser@alf:~$ free
             total       used       free     shared    buffers     cached
Mem:       1033648     995192      38456          0     184296     269888
-/+ buffers/cache:     541008     492640
Swap:      2008084       1008    2007076
As you can see, my system fills up most of its ram, too.
05-13-2004, 02:33 AM | #7
LQ Newbie (Original Poster) | Registered: May 2004 | Posts: 4
I don't know what is happening! I have 512MB of RAM, and the program runs very well with files smaller than my memory, but when I run it with the 600MB test file, the processing and the whole system become very slow! What is really annoying me is that the program runs very well on MS Windows installed on the same machine!
I think this slowdown happens because when the kernel caches the file and physical memory is full, the swap is used.
So I need either to limit the amount of memory the kernel uses for caching the file, or some C code I can use in my program to tell the kernel to free the cache allocated for the file.
Thanks to everyone!
05-13-2004, 06:03 AM | #8
Senior Member | Registered: Aug 2002 | Location: Groningen, The Netherlands | Distribution: Debian | Posts: 2,536
Very strange.
I tried your program, literally copied and pasted, and have no problems.
Using: Debian sarge/testing, (vanilla) kernel 2.6.6, glibc6 v2.3.2ds1, gcc 3.3.3.
05-13-2004, 06:30 AM | #9
LQ Newbie | Registered: May 2004 | Posts: 12
Hi,
I think you should try reading much more data at once and then processing each character as you want. This will surely speed up your program (and if not, then there's something wrong with your system settings).
Code:
#include <stdio.h>

enum { BUFLEN = 200000 };

FILE *myf = fopen("1train", "r");
char buf[BUFLEN], ch;
int i, nread;

while ((nread = fread(buf, 1, BUFLEN, myf)) > 0)
{
    for (i = 0; i < nread; i++)
    {
        ch = buf[i];
        /* your stuff here */
    }
}
fclose(myf);
05-30-2004, 07:45 AM | #10
LQ Newbie (Original Poster) | Registered: May 2004 | Posts: 4
Hi everyone,
I faced the above problem when running my code on Fedora Core 1.
When testing the same code on the same machine, but using Mandrake 9.1 or Windows XP, the program ran without problems!
Is this a bug in memory management in FC1, or a misconfiguration?