Old 09-04-2012, 08:20 AM   #1
rudreshsm
LQ Newbie
 
Registered: Sep 2012
Posts: 2

Rep: Reputation: Disabled
fclose() operation takes a very long time to execute on ext4 file system


Hi All,

This is regarding an issue with the fclose() operation, which takes a very long time (40-80 seconds) to execute while creating a large file (e.g. a 4 GB file).

Some observations on this issue:

1) This issue is seen on ext4-mounted partitions. The same operation takes much less time on ext3-mounted partitions.

2) This issue is seen only when we reopen an existing file in write ("w") mode; opening and closing the file for the first time, or opening an existing file in append ("a") mode, doesn't take much time. No such issue is observed on ext3-mounted partitions.

The sample code snippet to create a 4 GB (approx.) file is below:

Code:
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int bufferSize = 1048576;
    unsigned long long size = 0;   /* 64-bit counter so the 4 GB limit doesn't overflow */
    int len = 0;
    char buffer[bufferSize];
    FILE *fp;

    memset(buffer, 0x55, bufferSize);

    fp = fopen(argv[1], "w");
    if (fp == NULL)
        return 1;

    /* 4 GB (approx.) file */
    while (size <= 4294967296ULL)
    {
        len = fwrite(buffer, 1, bufferSize, fp);
        size = size + len;
    }
    fflush(fp);
    fclose(fp);
    return 0;
}

Please let me know:

- Is there any known issue with mounting the ext4 file system on Linux machines that would explain this?

Thanks,
Rudresh
 
Old 09-04-2012, 03:47 PM   #2
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,007

Rep: Reputation: 742
I may be way off base here, but I'd take a look at moving the fflush(fp) into the loop, right after the fwrite(). Flush early and often and see what happens. You might also want to increase the size of your buffer from 1 MB to, maybe, 10 MB (or even more; that depends on your drive controller, so take a look at its specifications) -- but move the fflush() in any event. A rough sketch of what I mean is below.
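Untested, but roughly what I have in mind is this; the 10 MB buffer size is just an example, and I've put it on the heap with malloc() since a buffer that large is better not left on the stack:

Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    size_t bufferSize = 10 * 1048576;        /* 10 MB buffer, just as an example */
    unsigned long long size = 0;             /* 64-bit so the 4 GB limit can't overflow */
    size_t len;
    char *buffer;
    FILE *fp;

    if (argc < 2)
        return 1;

    buffer = malloc(bufferSize);
    if (buffer == NULL)
        return 1;
    memset(buffer, 0x55, bufferSize);

    fp = fopen(argv[1], "w");
    if (fp == NULL)
        return 1;

    /* ~4 GB file, flushing after every write */
    while (size <= 4294967296ULL)
    {
        len = fwrite(buffer, 1, bufferSize, fp);
        fflush(fp);                          /* flush early and often */
        size = size + len;
    }

    fclose(fp);
    free(buffer);
    return 0;
}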

Hope this helps some.
 
Old 09-13-2012, 12:53 AM   #3
rudreshsm
LQ Newbie
 
Registered: Sep 2012
Posts: 2

Original Poster
Rep: Reputation: Disabled
Hi,

Thanks for your suggestion. I tried both ways: moving fflush() right after fwrite() in the loop, and increasing the buffer size to 10 MB. But the fclose() operation still takes a long time to complete. Any other suggestions?

Ramesh
 
Old 09-13-2012, 05:25 AM   #4
H_TeXMeX_H
Guru
 
Registered: Oct 2005
Location: $RANDOM
Distribution: slackware64
Posts: 12,928
Blog Entries: 2

Rep: Reputation: 1269
Use the 'data=ordered' option in ext4.
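If it isn't set already, that option goes in the mount options field for the partition in /etc/fstab; the device and mount point below are only placeholders, substitute your own:

Code:
/dev/sdb1   /data   ext4   defaults,data=ordered   0   2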
 
Old 09-13-2012, 08:31 AM   #5
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,007

Rep: Reputation: 742
Not to argue, but isn't data=ordered the default? From http://kernel.org/doc/Documentation/...stems/ext4.txt,
Quote:
When mounting an ext4 filesystem, the following option are accepted:
(*) == default
...
data=ordered (*) All data are forced directly out to the main file
system prior to its metadata being committed to the
journal.
...
The documentation talks about Data Mode:
Quote:
Data Mode
=========
There are 3 different data modes:

* writeback mode
In data=writeback mode, ext4 does not journal data at all. This mode provides
a similar level of journaling as that of XFS, JFS, and ReiserFS in its default
mode - metadata journaling. A crash+recovery can cause incorrect data to
appear in files which were written shortly before the crash. This mode will
typically provide the best ext4 performance.

* ordered mode
In data=ordered mode, ext4 only officially journals metadata, but it logically
groups metadata information related to data changes with the data blocks into a
single unit called a transaction. When it's time to write the new metadata
out to disk, the associated data blocks are written first. In general,
this mode performs slightly slower than writeback but significantly faster than journal mode.

* journal mode
data=journal mode provides full data and metadata journaling. All new data is
written to the journal first, and then to its final location.
In the event of a crash, the journal can be replayed, bringing both data and
metadata into a consistent state. This mode is the slowest except when data
needs to be read from and written to disk at the same time where it
outperforms all others modes. Currently ext4 does not have delayed
allocation support if this data journalling mode is selected.
Seems like data=writeback may be a slightly better option?

One thing I suggest might be worth a try -- because the program is doing, essentially, raw writes -- is to open the file with
Code:
fd = open(argv[1], O_WRONLY | O_CREAT, 0644);
(and no O_TRUNC, which seems a definite no-no with ext4!) rather than fp = fopen(argv[1], "w");. I seem to remember that there is a difference between open and fopen that may -- may -- affect things (might be grasping at straws here).

Note to the OP: fd (a file descriptor) is an integer, whereas fp is a pointer to a FILE structure; don't intermingle them if you decide to try this. A rough sketch is below.
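If you want to try it, an untested sketch of the open()/write()/close() version might look something like this (error handling kept minimal, same 4 GB limit and buffer fill as your program):

Code:
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    static char buffer[1048576];             /* 1 MB buffer, kept off the stack */
    unsigned long long size = 0;
    int fd;

    if (argc < 2)
        return 1;

    memset(buffer, 0x55, sizeof(buffer));

    fd = open(argv[1], O_WRONLY | O_CREAT, 0644);   /* note: no O_TRUNC */
    if (fd < 0)
        return 1;

    /* ~4 GB file, written through the file descriptor directly */
    while (size <= 4294967296ULL)
    {
        ssize_t len = write(fd, buffer, sizeof(buffer));
        if (len < 0)
            break;
        size += (unsigned long long)len;
    }

    close(fd);
    return 0;
}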

Hope this helps some.
 
  

