Old 12-02-2010, 11:46 PM   #1
ZAMO
Member
 
Registered: Mar 2007
Distribution: Redhat & CentOS
Posts: 598

Rep: Reputation: 30
Failure of I/O to files larger than 2 GB


Hi,

I have a problem on my server: it will not allow I/O to files larger than 2 GB. ulimit shows 'open files' as 1024; could that be affecting the server? Is there any other way to fix this?
Code:
$ uname -a
Linux zamo22  2.4.21-27.ELsmp #1 SMP Wed Dec 1 21:59:02 EST 2004 i686 i686 i386 GNU/Linux
$ cat /etc/redhat-release
Red Hat Enterprise Linux AS release 3 (Taroon Update 4)
$ ulimit -a
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) 4
max memory size       (kbytes, -m) unlimited
open files                    (-n) 1024
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) 10240
cpu time             (seconds, -t) unlimited
max user processes            (-u) 7168
virtual memory        (kbytes, -v) unlimited
 
Old 12-03-2010, 12:30 AM   #2
paulsm4
LQ Guru
 
Registered: Mar 2004
Distribution: SusE 8.2
Posts: 5,863
Blog Entries: 1

Rep: Reputation: Disabled
Hi -

Of *course* Linux supports file sizes >> 2GB. Look here:

Comparison of file systems

But you happen to have a very old version of Redhat, and an even older kernel version.

Support for >2 GB files at the time of your Redhat required something called "LFS" ("Large File Support"). Maybe you have LFS support in your current system. Or maybe you can add it. Otherwise, you might just have to upgrade. LFS support is built into all current (2.6.x) kernels.
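
For instance, a quick end-to-end test (a throwaway sketch; the file name is arbitrary, and it assumes your 'dd' binary itself was built with LFS, which RHEL 3's fileutils should be): seek past the 2 GB mark and write one block, creating a ~3 GB sparse file.
Code:
# If the kernel, the filesystem and dd all handle large files, this
# succeeds; a non-LFS path should fail near the 2 GB boundary with
# "File too large" (EFBIG).
dd if=/dev/zero of=/tmp/lfstest bs=1M count=1 seek=3000
ls -l /tmp/lfstest
rm /tmp/lfstest
If that succeeds, the OS and filesystem are fine and the problem is in the application.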
 
Old 12-03-2010, 12:35 AM   #3
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 386

Rep: Reputation: 105
oops. Totally off the mark. My apologies, I retract what I said earlier.

However, the 'open files' limit is the number of file descriptors a process can have open simultaneously, and I don't think it has any bearing on your problem.
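
(For reference, a quick way to see that limit next to what a process actually has open; the shell is just a convenient example:)
Code:
# The per-process cap on simultaneously open file descriptors:
ulimit -n
# ...versus how many descriptors this shell currently holds:
ls /proc/$$/fd | wc -l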

Last edited by tommylovell; 12-03-2010 at 01:14 AM. Reason: technically out in left field... somewhere...
 
Old 12-03-2010, 12:46 AM   #4
paulsm4
LQ Guru
 
Registered: Mar 2004
Distribution: SusE 8.2
Posts: 5,863
Blog Entries: 1

Rep: Reputation: Disabled
Hi -

Minor correction: ext3 currently supports maximum file sizes between 16GB and 2TB:

http://en.wikipedia.org/wiki/Ext3

http://www.novell.com/documentation/...ml/apas04.html
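
The exact ceiling depends on the filesystem's block size (1 KB blocks give a 16 GB maximum file size, 2 KB give 256 GB, 4 KB give 2 TB). If you want to check yours (device name is a placeholder):
Code:
dumpe2fs -h /dev/sdXN | grep "Block size"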
 
Old 12-03-2010, 01:17 AM   #5
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,426

Rep: Reputation: 2786
Have you seen this: https://access.redhat.com/support/po...pdates/errata/ ? Basically, RHEL3 is out of support unless you want to pay extra for the "Extended Life Cycle Phase".
 
Old 12-03-2010, 01:28 AM   #6
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 386

Rep: Reputation: 105
Hmmm...

See http://linuxmafia.com/faq/VALinux-kb...ize-limit.html. That article seems to indicate it is a characteristic of very, very old 32-bit systems (as paulsm4 pointed out). I can't imagine that your system is that old. Sorry this doesn't help much.
 
Old 12-03-2010, 01:37 AM   #7
ZAMO
Member
 
Registered: Mar 2007
Distribution: Redhat & CentOS
Posts: 598

Original Poster
Rep: Reputation: 30
Thanks to all for the valuable input. But I have other boxes running the same RHEL 3 and the same 2.4 kernel (with the same open-files limit as well), and they have no issue handling files larger than 2 GB.

One more question: is file I/O handled by the kernel, or does it depend on the filesystem? I don't think ext3 is an issue for files larger than 2 GB. Correct me if I am wrong.
 
Old 12-03-2010, 02:13 AM   #8
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 386

Rep: Reputation: 105
Here's a long shot.

If you do a "dumpe2fs -h" against that filesystem, does "Filesystem features:" contain "large_file"?

Code:
[root@athlonz ~]# dumpe2fs -h /dev/mapper/vgz00-lvz01 | grep "Filesystem features"
dumpe2fs 1.41.4 (27-Jan-2009)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
The man page on tune2fs says:
Quote:
large_file
Filesystem can contain files that are greater than
2GB. (Modern kernels set this feature automatically
when a file > 2GB is created.)
Like I say, a long shot.

Quote:
One more question , whether the file I/O is handled by kernel or it depends on file system ? I don't think ext3 is not an issue for the >>2g files . correct me , if am wrong .
I think the answer is "both". The I/O request is initially handled by the system's VFS code (kernel), which in turn hands the request off to the code that supports the underlying filesystem (FAT, NTFS, ext2/3/4, Reiser, or whatever).

So, yes, I do think ext3 is the issue here, but I'm clearly not an authority.
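
(For what it's worth, you can see the filesystem drivers the VFS can hand requests to, and which driver backs each mount:)
Code:
cat /proc/filesystems     # filesystem types this kernel can dispatch to
mount                     # which type backs each mounted device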
 
Old 12-03-2010, 06:00 AM   #9
ZAMO
Member
 
Registered: Mar 2007
Distribution: Redhat & CentOS
Posts: 598

Original Poster
Rep: Reputation: 30
tommylovell,

Thanks for the worthy information. dumpe2fs shows the filesystem has the large_file feature. As I said, other boxes with the same OS/kernel/FS features work with files larger than 2 GB. Is there any other parameter that affects I/O beyond 2 GB here?

Code:
#dumpe2fs -h /dev/sda8 |  grep "Filesystem features"
dumpe2fs 1.32 (09-Nov-2002)
Filesystem features:      has_journal filetype needs_recovery sparse_super large_file
 
Old 12-03-2010, 10:20 AM   #10
tommylovell
Member
 
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 386

Rep: Reputation: 105
Quote:
Is there any other parameter which affect the i/o of >> 2G here
None that I know of. You've already touched on the user limit values, and you have "file size (blocks, -f) unlimited". (I assume you issued the 'ulimit -a' command logged in or su'd to the user that is having the difficulty. Limits are set on a per-user or per-group basis, and are picked up the next time that user logs in.)

And if it were ulimit related, I think you would get the error 'file size limit exceeded'. And, of course, if it were something like this, it would imply that someone tampered (either inadvertently or deliberately) with this one system and not the others that are OK.
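
(That message is easy to demonstrate, by the way. A throwaway example, with an arbitrary scratch file name, of what a ulimit-triggered failure looks like:)
Code:
# Cap the file size for this shell at 1024 blocks (512 KB), then try
# to write 1 MB; the shell should report the resulting SIGXFSZ as
# "File size limit exceeded".
ulimit -f 1024
dd if=/dev/zero of=/tmp/junkfile bs=1M count=1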

I think I've been assuming that you had adequate space in that filesystem (taking into account that if your process is non-root the reserved blocks are not usable).

Doing a 'cp' where I ran out of space, the error reported back to the application was "No space left on device".

Code:
[root@athlonz ~]# cp -p /dev/urandom /mnt/z
cp: writing `/mnt/z': No space left on device
The error returned to cp's write would have been "ENOSPC". 'cp' converted that to "No space left on device" for my benefit. (See 'man errno' and 'man -s2 write' for errnos.)

Other errors returned could be
"EDQUOT Disk quota exceeded (POSIX.1)";
"EFBIG File too large."; or
"EIO A low-level I/O error occurred while modifying the inode."

Is there any indication from your app what error is being returned by the system?
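
If the app doesn't surface the errno itself, one way to catch it (a generic sketch; the program name and trace path are placeholders) is to run it under strace and look at the failing syscall:
Code:
# Log the app's file-related syscalls, then look for the error code
# (EFBIG, ENOSPC, EIO, ...) on the call that fails.
strace -f -e trace=file,write -o /tmp/app.trace ./yourapp
grep -E 'EFBIG|ENOSPC|EIO|EDQUOT' /tmp/app.trace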
 
Old 12-03-2010, 10:22 AM   #11
paulsm4
LQ Guru
 
Registered: Mar 2004
Distribution: SusE 8.2
Posts: 5,863
Blog Entries: 1

Rep: Reputation: Disabled
Quote:
I can't imagine that your system is that old.
Argh! It *is* that old!!! Look at the links in my first post!

To repeat:
Quote:
But you happen to have a very old version of Redhat, and an even older kernel version.

Support for >2 GB files at the time of your Redhat required something called "LFS" ("Large File Support"). Maybe you have LFS support in your current system. Or maybe you can add it. Otherwise, you might just have to upgrade. LFS support is built into all current (2.6.x) kernels.

http://en.wikipedia.org/wiki/Comparison_of_file_systems

http://www.suse.de/~aj/linux_lfs.html

http://en.wikipedia.org/wiki/Ext3

http://www.novell.com/documentation/...ml/apas04.html
"ext3" is a perfectly good filesystem.

Now...

One thing nobody's asked until now is exactly HOW you're trying to create the files.

Remember - "Large file support" was something of "add on" before the 2.6.x kernel. Even if the OS and the filesystem both support 2++GB files, it's entirely possible (likely, even ) that any given app is doing 32-bit reads and writes. And guess what the limit of a 32-bit signed integer is ?

So.....

If you're having problems:
1. Make sure the OS supports LFS (all current kernels do by default)
2. Make sure the filesystem supports LFS (virtually all current Linux filesystems do by default)
3. Make sure ulimit isn't constrained (the very first post indicated "unlimited". Good!)
4. Make sure the application supports LFS (almost any application compiled against a modern kernel should by default, as long as it doesn't try to do naive 32-bit signed lseek()'s).

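To expand on point 4: a minimal sketch of what building with LFS looks like (source and binary names are hypothetical; 'getconf' has reported the LFS flags for years, and they usually expand to -D_FILE_OFFSET_BITS=64 and friends):
Code:
# Compile a 32-bit app so that off_t is 64 bits and the *64() file
# calls are used under the covers.
gcc $(getconf LFS_CFLAGS) -o yourapp yourapp.c $(getconf LFS_LDFLAGS) $(getconf LFS_LIBS)
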
'Hope that helps

Last edited by paulsm4; 12-03-2010 at 10:24 AM.
 
Old 12-03-2010, 10:53 AM   #12
ZAMO
Member
 
Registered: Mar 2007
Distribution: Redhat & CentOS
Posts: 598

Original Poster
Rep: Reputation: 30
Thanks; I failed to follow up earlier. The error that the apps throw is "File size limit exceeded".
Quote:
If you're having problems:
1. Make sure the OS supports LFS (all current kernels do by default)
2. Make sure the filesystem supports LFS (virtually all current Linux filesystems do by default)
3. Make sure ulimit isn't constrained (the very first post indicated "unlimited". Good!)
4. Make sure the application supports LFS (almost any application compiled against a modern kernel should by default, as long as it doesn't try to do naive 32-bit signed lseek()'s).
Is "large_file" support to be enabled for all the filesystems? Not on all of them here; it is enabled only on the FS on which the app resides.
While I try to enable it with
Code:
tune2fs -O[+]feature[,large_file] /dev/sda5
I am unable to do so. Correct me if I am wrong with the switches.

Last edited by ZAMO; 12-03-2010 at 11:00 AM.
 
Old 12-03-2010, 01:18 PM   #13
ZAMO
Member
 
Registered: Mar 2007
Distribution: Redhat & CentOS
Posts: 598

Original Poster
Rep: Reputation: 30
tune2fs man page says
Code:
 -O [^]feature[,...]
	      Set  or clear the indicated filesystem features (options) in the
	      filesystem.  More than one filesystem feature can be cleared  or
	      set  by  separating  features  with commas.  Filesystem features
	      prefixed with a caret character ('^') will  be  cleared  in  the
	      filesystem's  superblock;  filesystem  features without a prefix
	      character or prefixed with a plus character ('+') will be  added
	      to the filesystem.

	      The  following  filesystem  features can be set or cleared using
	      tune2fs:

		   large_file
			  Filesystem can contain files that are  greater  than
			  2GB.	(Modern kernels set this feature automatically
			  when a file > 2GB is created.)
But I am unable to understand which option to set. Can tune2fs be executed on a mounted filesystem?
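
If I read that syntax right, my guess (not verified) is that the command should be something like:
Code:
tune2fs -O large_file /dev/sda5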
 
Old 12-03-2010, 03:07 PM   #14
paulsm4
LQ Guru
 
Registered: Mar 2004
Distribution: SusE 8.2
Posts: 5,863
Blog Entries: 1

Rep: Reputation: Disabled
Hi -
Quote:
The error that the apps throw is "File size limit exceeded"
Q: Which apps?

As I tried to say above, if the app itself isn't compiled with LFS support ... then you're screwed. (And "File size limit exceeded" actually fits: if I remember right, a 2.4 kernel sends SIGXFSZ when a write would cross the 2 GB mark on a descriptor opened without O_LARGEFILE, even with 'ulimit -f' unlimited, and the shell reports that signal as "File size limit exceeded".)
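
One rough check (assuming the binary isn't stripped; the path is a placeholder): an LFS-aware 32-bit app normally references the 64-bit variants of the file calls, so look for them in the dynamic symbols:
Code:
# open64/lseek64/fopen64/stat64 showing up as undefined ("U") symbols
# is a good sign the app was compiled with large file support.
nm -D /path/to/yourapp | grep -E '(open|lseek|fopen|stat)64'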

Again, support for >2 GB (large) files began in earnest in the Windows NT 4.x / Linux kernel 2.4.x timeframe. By the time Windows XP/Server 2003 and Linux 2.6 arrived, large file support was standard, both in the OS and in any applications built against a current version of the OS.

But *before* that time period, LFS support was "iffy".

If you can't recompile the apps in question, the odds are pretty good you need to update your OS, along with your apps.

Please do revisit my links: there might be something that can help in your scenario.

Sorry I can't be more encouraging
 
Old 12-03-2010, 03:16 PM   #15
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 27,491

Rep: Reputation: 8123
And something that may be related: is this a SAN volume, by any chance? I remember having to deal with that, and having to explicitly mount the SAN volume with a parameter to get it to use large files.
 
  

