LinuxQuestions.org
Linux - General: optimise xfs for large partition
(https://www.linuxquestions.org/questions/linux-general-1/optimise-xfs-for-large-partition-609363/)

carpman 12-27-2007 04:31 AM

optimise xfs for large partition
 
Hello, I am building a new 64-bit system with Gentoo and will have a 650GB partition for storing my digital images. Files will range in size from 3MB to 200MB, with an average of about 20MB.


The partition is on a RAID 1 array. The hardware is an Areca SATA PCIe RAID card with 256MB of RAM and a 500MHz processor; the drives are two 750GB Hitachi 7K1000s. The 650GB array is set to a 128KB stripe size.


Now, I have read and used the IBM article on setting up XFS, but never for an array this size.

The guide talks about allocation groups:


Quote:

The second option lets you enhance the performance of your new filesystem by telling mkfs.xfs to minimize the number of allocation groups that are created. Normally, mkfs.xfs chooses the number of allocation groups automatically, but from my experience it usually picks a number that is a bit too high for most general-purpose Linux workstations and servers. As you'll recall from my previous article, allocation groups let XFS perform multiple metadata operations in parallel. This comes in handy on high-end servers, but too many allocation groups do add a bit of overhead. So rather than let mkfs.xfs choose the number of allocation groups for your filesystem, specify a number by using the -d agcount=x option. Set x to a small number, something like 4, 6, or 8. You'll need at least one allocation group for every 4 GB of capacity in your target block device.
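Taken at face value, the quote's "one allocation group per 4GB" floor works out to roughly 163 allocation groups on a 650GB device, not thousands. A quick sanity check of that arithmetic (plain shell, no filesystem commands involved):

```shell
# Pure arithmetic: one allocation group per 4GB on a 650GB partition.
size_gb=650
agcount=$(( size_gb / 4 ))   # integer division; add one more AG for the remainder
echo "$agcount"              # prints 162
```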


But following that, a 650GB XFS partition would end up with a very high number of allocation groups. Is there a maximum, or should I go with one for every 4GB?

If I do, the allocation group count would be over 160!



Any tips for setting up XFS on this partition?

cheers

jay73 12-27-2007 02:05 PM

If the average file size is only 20MB, you have other things to worry about than allocation speed, especially on a computer with moderate specs (XFS tends to require a good deal of RAM to perform well). Having too many allocation groups becomes non-productive, if not counterproductive, beyond a certain point, as performance suffers from the increased search activity.

It may be more interesting to focus on the area where XFS tends to lag a bit: file deletion. XFS is noticeably slower in this respect than ext3 or ReiserFS, especially for smaller files. Remember that XFS is optimized for large files, i.e. hundreds of megabytes. The good news is that you can get noticeable performance gains by tuning the journal size and log buffers. Just prepare your partition(s) with mkfs.xfs -l size=64m (or even 128m) and mount them with noatime,nodiratime,logbufs=8. XFS filesystems also tend to perform better if the journal is placed on a partition of its own.
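Spelled out as commands, the suggestion above comes out to something like the sketch below. The device name /dev/sdb1 and mount point /mnt/images are placeholders of mine, not from the thread; substitute your real array device:

```shell
# Sketch of the suggested tuning; /dev/sdb1 and /mnt/images are hypothetical.
#
#   mkfs.xfs -l size=64m /dev/sdb1                               # 64MB internal journal
#   mount -o noatime,nodiratime,logbufs=8 /dev/sdb1 /mnt/images  # tuned mount
#
# The recommended mount options as one string (e.g. for an /etc/fstab entry):
OPTS="noatime,nodiratime,logbufs=8"
echo "$OPTS"
```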

carpman 12-28-2007 03:07 AM

Hello, and thanks for the reply. I am not worried about deletion times, but good read and write speed would be nice.

I have an AMD64 3700+ with 2GB of RAM, soon to be 4GB, so system spec is not an issue.

The IBM guide I refer to is at www.ibm.com/developerworks/library/l-fs10.html


I normally use the following to create an XFS filesystem, setting agcount= according to partition size.

Quote:

mkfs.xfs -d agcount=4 -l size=32m /dev/hda1

Which, as you can see, already uses a journal size setting. I can raise the logbufs count as you suggest, but I am still not sure about the agcount= setting: how many for a 650GB partition?

Also, can speed be improved by using other XFS settings, such as block size?
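A hedged note on the block-size question, since the thread does not settle it: on Linux, an XFS data block size larger than the kernel page size (4KB on amd64) will not mount, so 4096 is effectively both the default and the ceiling here. A combined mkfs line drawing on the thread's suggestions might look like this sketch; /dev/sdb1 is a placeholder:

```shell
# Hypothetical combined command (shown as a comment, not run); /dev/sdb1 is a
# placeholder for the real array device:
#
#   mkfs.xfs -b size=4096 -d agcount=8 -l size=64m /dev/sdb1
#
# At a 4096-byte block size, a 650GB partition holds this many data blocks:
blocks=$(( 650 * 1024 * 1024 * 1024 / 4096 ))
echo "$blocks"               # prints 170393600
```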

cheers

jay73 12-28-2007 04:46 PM

http://everything2.com/index.pl?node_id=1479435

Maybe you should use something like Bonnie to find out what works best for you.

