Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
#1 | 12-02-2010, 11:46 PM | ZAMO | Member | Registered: Mar 2007 | Distribution: Red Hat & CentOS | Posts: 598
Failure of I/O to files larger than 2 GB
Hi,
I have a problem on my server: it will not allow I/O to files larger than 2 GB. ulimit shows open files as 1024; could that affect the server? Is there any other way to fix this?
Code:
$ uname -a
Linux zamo22 2.4.21-27.ELsmp #1 SMP Wed Dec 1 21:59:02 EST 2004 i686 i686 i386 GNU/Linux
$ cat /etc/redhat-release
Red Hat Enterprise Linux AS release 3 (Taroon Update 4)
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) 4
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 7168
virtual memory (kbytes, -v) unlimited
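For anyone trying to reproduce this, a minimal test of the 2 GB boundary (a sketch: dd stands in for the failing app, and the output path is hypothetical; pick a filesystem with roughly 3 GB free):
Code:
# try to write straight past the 2 GiB mark
$ dd if=/dev/zero of=/some/fs/testfile bs=1M count=2100
# on an affected system the write should stop near byte 2147483647,
# typically with "File size limit exceeded" or errno EFBIG
$ rm -f /some/fs/testfile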
#2 | 12-03-2010, 12:30 AM | paulsm4 | LQ Guru | Registered: Mar 2004 | Distribution: SuSE 8.2 | Posts: 5,863
Hi -
Of *course* Linux supports file sizes well beyond 2 GB. Look here:
Comparison of file systems
But you happen to have a very old version of Red Hat, and an even older kernel.
Support for >2 GB files at the time of your Red Hat release required something called LFS ("Large File Support"). Maybe you have LFS support in your current system, or maybe you can add it. Otherwise, you might just have to upgrade. LFS support is built into all current (2.6.x) kernels.
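One quick, non-authoritative way to check what the installed glibc thinks about LFS (assuming getconf is present, which it should be on a RHEL 3-era system; the mount point below is a stand-in):
Code:
# flags glibc recommends for building large-file-aware programs
# (on 32-bit systems this typically prints -D_FILE_OFFSET_BITS=64)
$ getconf LFS_CFLAGS
# effective file-size width for files on a given mount point; 64 = LFS-capable
$ getconf FILESIZEBITS /home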
#3 | 12-03-2010, 12:35 AM | tommylovell | Member | Registered: Nov 2005 | Distribution: Raspbian, Debian, Ubuntu | Posts: 386
Oops. Totally off the mark. My apologies; I retract what I said earlier.
However, the 'open files' ulimit is the number of file descriptors a process can have open simultaneously, and I don't think it has any bearing on your problem.
Last edited by tommylovell; 12-03-2010 at 01:14 AM.
Reason: technically out in left field... somewhere...
#5 | 12-03-2010, 01:17 AM | LQ Guru | Registered: Aug 2004 | Location: Sydney | Distribution: Rocky 9.2 | Posts: 18,426
Have you seen this: https://access.redhat.com/support/po...pdates/errata/ ? Basically, RHEL 3 is out of support unless you want to pay extra for the "Extended Life Cycle Phase".
#6 | 12-03-2010, 01:28 AM | tommylovell | Member | Registered: Nov 2005 | Distribution: Raspbian, Debian, Ubuntu | Posts: 386
Hmmm...
See http://linuxmafia.com/faq/VALinux-kb...ize-limit.html. That article seems to indicate the 2 GB limit is a characteristic of very, very old 32-bit systems (as paulsm4 pointed out). I can't imagine that your system is that old. Sorry this doesn't help much.
#7 | 12-03-2010, 01:37 AM | ZAMO | Member (Original Poster) | Registered: Mar 2007 | Distribution: Red Hat & CentOS | Posts: 598
Thanks for the valuable input, everyone. But I have other boxes running the same RHEL 3 with the 2.4 kernel (they have the same open-files limit as well), and they have no issue handling files larger than 2 GB.
One more question: is file I/O handled by the kernel, or does it depend on the filesystem? I don't think ext3 is an issue for >2 GB files. Correct me if I'm wrong.
#8 | 12-03-2010, 02:13 AM | tommylovell | Member | Registered: Nov 2005 | Distribution: Raspbian, Debian, Ubuntu | Posts: 386
Here's a long shot.
If you do a "dumpe2fs -h" against that filesystem does "Filesystem features:" contain "large_file"?
Code:
[root@athlonz ~]# dumpe2fs -h /dev/mapper/vgz00-lvz01 | grep "Filesystem features"
dumpe2fs 1.41.4 (27-Jan-2009)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
The man page on tune2fs says:
Quote:
large_file
Filesystem can contain files that are greater than
2GB. (Modern kernels set this feature automatically
when a file > 2GB is created.)
Like I say, a long shot.
Quote:
One more question: is file I/O handled by the kernel, or does it depend on the filesystem? I don't think ext3 is an issue for >2 GB files. Correct me if I'm wrong.
I think the answer is "both". The I/O request is initially handled by the kernel's VFS code, which in turn hands the request off to the code supporting the underlying filesystem (FAT, NTFS, ext2/3/4, ReiserFS, or whatever).
So, yes, I do think ext3 is the layer where the issue would live, but I'm clearly not an authority.
#9 | 12-03-2010, 06:00 AM | ZAMO | Member (Original Poster) | Registered: Mar 2007 | Distribution: Red Hat & CentOS | Posts: 598
tommylovell,
Thanks for the worthy information. dumpe2fs shows that my filesystem supports large_file. As I said, other boxes with the same OS/kernel/filesystem features work fine with files larger than 2 GB. Is there any other parameter that could affect I/O beyond 2 GB here?
Code:
# dumpe2fs -h /dev/sda8 | grep "Filesystem features"
dumpe2fs 1.32 (09-Nov-2002)
Filesystem features: has_journal filetype needs_recovery sparse_super large_file
#10 | 12-03-2010, 10:20 AM | tommylovell | Member | Registered: Nov 2005 | Distribution: Raspbian, Debian, Ubuntu | Posts: 386
Quote:
Is there any other parameter that could affect I/O beyond 2 GB here?
None that I know of. You've already touched on the user limit values, and you have "file size (blocks, -f) unlimited". (I assume you issued the 'ulimit -a' command logged in or su'd as the user that is having the difficulty; limits are set on a per-user or per-group basis, and are picked up the next time that user logs in.)
And if it were ulimit-related, I think you would get the error 'File size limit exceeded'. And, of course, if it were something like this, it would imply that someone tampered (either inadvertently or deliberately) with this one system and not the others that are OK.
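As an illustration of that failure mode, a throwaway subshell can force it (a sketch; /tmp/lfs-test is an arbitrary scratch path):
Code:
# cap the file-size ulimit at 1024 blocks (512 KB), then try to exceed it
$ bash -c 'ulimit -f 1024; dd if=/dev/zero of=/tmp/lfs-test bs=1k count=2048'
# dd is killed by SIGXFSZ and the shell prints "File size limit exceeded",
# the exact message a ulimit-constrained process would produce
$ rm -f /tmp/lfs-test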
I've been assuming that you have adequate space in that filesystem (taking into account that if your process is non-root, the reserved blocks are not usable).
Doing a 'cp' where I ran out of space, the error reported back to the application was "No space left on device".
Code:
[root@athlonz ~]# cp -p /dev/urandom /mnt/z
cp: writing `/mnt/z': No space left on device
The error returned to cp's write() would have been ENOSPC; 'cp' converted that to "No space left on device" for my benefit. (See 'man errno' and 'man -s2 write' for the errno values.)
Other errors that could come back include:
"EDQUOT - Disk quota exceeded (POSIX.1)";
"EFBIG - File too large"; or
"EIO - A low-level I/O error occurred while modifying the inode."
Is there any indication from your app what error is being returned by the system?
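If the app doesn't surface the errno itself, tracing it at the syscall level should show which call fails and why (a sketch, assuming strace is installed; 'yourapp' is a placeholder for the real program):
Code:
# watch the file-related syscalls; the failing write()/lseek() and its
# errno (EFBIG, ENOSPC, ...) will appear near the end of the trace
$ strace -f -e trace=open,write,lseek,_llseek ./yourapp 2>&1 | tail -20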
#11 | 12-03-2010, 10:22 AM | paulsm4 | LQ Guru | Registered: Mar 2004 | Distribution: SuSE 8.2 | Posts: 5,863
Quote:
I can't imagine that your system is that old.
Argh! It *is* that old!!! Look at the links in my first post!
To repeat: ext3 is a perfectly good filesystem.
Now...
One thing nobody's asked until now is exactly HOW you're trying to create the files.
Remember: "large file support" was something of an add-on before the 2.6.x kernel. Even if the OS and the filesystem both support >2 GB files, it's entirely possible (likely, even) that any given app is doing 32-bit reads and writes. And guess what the limit of a 32-bit signed integer is? 2^31 - 1 = 2,147,483,647 bytes, i.e. 2 GB.
So...
If you're having problems:
1. Make sure the OS supports LFS (all current kernels do by default).
2. Make sure the filesystem supports LFS (virtually all current Linux filesystems do by default).
3. Make sure ulimit isn't constrained (the very first post showed "unlimited". Good!).
4. Make sure the application supports LFS (almost any application compiled against a modern kernel should by default, as long as it doesn't do naive 32-bit signed lseek()'s); see the sketch after this list.
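As a sketch of item 4: on a 32-bit 2.4-era system an application needs the LFS feature-test macros at compile time to get a 64-bit off_t ('myapp.c' is a hypothetical stand-in for the failing program):
Code:
# enable 64-bit off_t so lseek()/ftello() stop wrapping at 2^31 - 1
$ gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -o myapp myapp.c
# or let glibc supply the exact flags it wants
$ gcc $(getconf LFS_CFLAGS) -o myapp myapp.c $(getconf LFS_LDFLAGS)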
'Hope that helps
Last edited by paulsm4; 12-03-2010 at 10:24 AM.
#12 | 12-03-2010, 10:53 AM | ZAMO | Member (Original Poster) | Registered: Mar 2007 | Distribution: Red Hat & CentOS | Posts: 598
Thanks. Sorry, I failed to follow up earlier. The error the apps throw is "File size limit exceeded".
Quote:
If you're having problems:
1. Make sure the OS supports LFS (all current kernels do by default).
2. Make sure the filesystem supports LFS (virtually all current Linux filesystems do by default).
3. Make sure ulimit isn't constrained (the very first post showed "unlimited". Good!).
4. Make sure the application supports LFS (almost any application compiled against a modern kernel should by default, as long as it doesn't do naive 32-bit signed lseek()'s).
On point 2: is "large_file" support supposed to be enabled on all of the filesystems? It is not enabled on all of them here, only on the FS on which the app resides.
When I try to enable it with
Code:
tune2fs -O[+]feature[,large_file] /dev/sda5
I am unable to. Correct me if I'm wrong about the switches.
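For what it's worth, the brackets, plus sign, and commas in that synopsis are man-page grammar, not characters to type. A plausible invocation, assuming /dev/sda5 really is the filesystem that lacks the feature, would be:
Code:
# add just the large_file feature ('+' is implied when adding)
tune2fs -O large_file /dev/sda5
# confirm it took
dumpe2fs -h /dev/sda5 | grep 'Filesystem features'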
Last edited by ZAMO; 12-03-2010 at 11:00 AM.
#13 | 12-03-2010, 01:18 PM | ZAMO | Member (Original Poster) | Registered: Mar 2007 | Distribution: Red Hat & CentOS | Posts: 598
The tune2fs man page says:
Code:
-O [^]feature[,...]
Set or clear the indicated filesystem features (options) in the
filesystem. More than one filesystem feature can be cleared or
set by separating features with commas. Filesystem features
prefixed with a caret character ('^') will be cleared in the
filesystem's superblock; filesystem features without a prefix
character or prefixed with a plus character ('+') will be added
to the filesystem.
The following filesystem features can be set or cleared using
tune2fs:
large_file
Filesystem can contain files that are greater than
2GB. (Modern kernels set this feature automatically
when a file > 2GB is created.)
But I'm unable to work out which option to set. Also, can tune2fs be run on a mounted filesystem?
#14 | 12-03-2010, 03:07 PM | paulsm4 | LQ Guru | Registered: Mar 2004 | Distribution: SuSE 8.2 | Posts: 5,863
Hi -
Quote:
The error that the apps throw is "File size limit exceeded"
Q: Which apps?
As I tried to say above, if the app itself isn't compiled with LFS support... then you're screwed.
Again, support for >2 GB (large) files began in earnest in the Windows NT 4.x / Linux kernel 2.4.x timeframe. By the time Windows XP/Server 2003 and Linux 2.6 arrived, large file support was standard, both in the OS and in any applications built against a current version of it.
But *before* that time period, LFS support was "iffy".
If you can't recompile the apps in question, the odds are pretty good that you need to update your OS along with your apps.
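One rough way to guess whether an existing 32-bit binary was built with LFS is to look for the 64-suffixed glibc entry points (a sketch, assuming the binary is dynamically linked; the path is a placeholder):
Code:
# an LFS-aware 32-bit build typically imports open64/lseek64/fopen64/stat64
$ nm -D /path/to/yourapp | grep -E 'open64|lseek64|fopen64|stat64'
# no matches suggests plain 32-bit file I/O, capped just under 2 GB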
Please do revisit my links: there might be something there that can help in your scenario.
Sorry I can't be more encouraging.
#15 | 12-03-2010, 03:16 PM | LQ Guru | Registered: Jul 2003 | Location: Birmingham, Alabama | Distribution: SuSE, RedHat, Slack, CentOS | Posts: 27,491
And something that may be related: is this a SAN volume, by any chance? I remember having to deal with that, and having to explicitly mount the SAN volume with a parameter to get it to use large files.