LinuxQuestions.org
Linux - Server: This forum is for the discussion of Linux software used in a server-related context.

Old 03-21-2008, 10:35 AM   #1
petepdx
LQ Newbie
 
Registered: Mar 2008
Posts: 1

Rep: Reputation: 0
"Best file systems" for only large (10-30GB) files ?


Red Hat EL 5.1

I'm looking for options for the 'best' file system for large
files only. I will be starting with a logical disk created
using multipath and LVM that will be 3 TB and could grow to
over 10 TB.

The files will be from 10 GB to 30 GB in size and will most
of the time be read or written sequentially. The number of
files being simultaneously accessed will be from 1 to a
maximum of about 15 (not yet known, just a guess). Hence,
not that many files.

If it helps, the application is Tivoli TSM and this file system
will be for one of the disk pools.

The disk is on a Hitachi SAN (SATA): 3 RAID 5 groups of 14 x 500 GB
SATA disks each, pathing is 2 x 2 Gb Fibre Channel, and the LUNs
are 200 GB each. I will be using LVM to join and stripe the LUNs.
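The join-and-stripe step could be sketched like this (a sketch only: the multipath device names, PV count, and stripe size below are hypothetical, not taken from this setup):

```shell
# Hypothetical multipath device names -- substitute the real LUNs.
pvcreate /dev/mapper/mpath0 /dev/mapper/mpath1 /dev/mapper/mpath2 /dev/mapper/mpath3

# One volume group spanning all of the LUNs.
vgcreate tsm_vg /dev/mapper/mpath0 /dev/mapper/mpath1 /dev/mapper/mpath2 /dev/mapper/mpath3

# Stripe across all 4 PVs with a 256 KB stripe size, using all free space.
lvcreate -i 4 -I 256 -l 100%FREE -n tsm_lv tsm_vg
```

Further LUNs can later be added with vgextend, which is how the pool would grow toward 10 TB.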

TIA ...

-pete
 
Old 03-21-2008, 12:32 PM   #2
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Rep: Reputation: 51
Absolutely XFS. I could go on a long diatribe about it, or you can research XFS for streaming large files. We've done lots of testing on this issue. XFS is the way to go when you're dealing with large files and scrubbing through them. Granted, I've not compared XFS to the new-ish ZFS, but I'd still put my money on XFS.

edited to add: our testing is with film/video editing, so we deal with moving lots of large files around. XFS performs noticeably faster (and benchmarks faster) than ext2/ext3 or ReiserFS.
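For what it's worth, a minimal sketch of creating and mounting such a filesystem (the device path, stripe geometry, and mount point are assumptions, not from this thread; su/sw should match the real RAID/LVM stripe):

```shell
# Hypothetical logical volume; align su/sw to the underlying stripe geometry.
mkfs.xfs -d su=256k,sw=4 -l size=128m /dev/tsm_vg/tsm_lv

# noatime avoids a metadata update on every read;
# logbufs=8 gives the XFS log more in-memory buffers.
mount -t xfs -o noatime,logbufs=8 /dev/tsm_vg/tsm_lv /tsmpool
```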

Last edited by BrianK; 03-21-2008 at 12:36 PM.
 
Old 03-21-2008, 01:18 PM   #3
jens
Senior Member
 
Registered: May 2004
Location: Belgium
Distribution: Debian, Slackware, Fedora
Posts: 1,436

Rep: Reputation: 275Reputation: 275Reputation: 275
XFS was originally built for large video data, so it's indeed superior here (large files).

ZFS isn't really an option yet; the only Linux port runs through FUSE (in user space), so you really can't compare that.

I agree with BrianK: XFS is by far your best bet for large files.
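Whichever filesystem you pick, a rough sequential-write sanity check can be done with plain dd; run it from a directory on the mounted pool (the file name is a placeholder, and conv=fdatasync forces the data to disk so the reported rate isn't just page cache):

```shell
# Write 1 GiB sequentially; dd prints the elapsed time and throughput.
dd if=/dev/zero of=ddtest.tmp bs=1M count=1024 conv=fdatasync

# Clean up the scratch file.
rm ddtest.tmp
```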
 
Old 03-24-2008, 01:17 PM   #4
mjh
Member
 
Registered: Nov 2007
Posts: 37

Rep: Reputation: 15
Isn't the file system that Google uses available to end users?

They have a minimum file size of 3GB or something.
 
  

