I have two hardware RAID 6 arrays concatenated via LVM. I have a single XFS file system on top of this. The RAID 6 arrays are 11 and 12 disks wide.
I'd like to provide XFS with a stripe width (sw) and stripe unit (su) at mount time for enhanced performance.
I know my stripe unit size (64k), but what stripe width do I provide? My best guess is 19 (9 data spindles from array0, 10 data spindles from array1), but that's just a guess.
Anyone have experience? OS is CentOS 5. Thanks in advance...
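For reference, here's roughly what I'd run if 19 turns out to be right (device and mount-point names are placeholders, and the sector arithmetic is mine, not from the FAQ):

# At mkfs time, su/sw are given directly:
mkfs.xfs -d su=64k,sw=19 /dev/vg/lv

# At mount time, XFS takes sunit/swidth in 512-byte sectors:
# sunit = 64k / 512 = 128; swidth = 19 * 128 = 2432
mount -o sunit=128,swidth=2432 /dev/vg/lv /mnt/data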
I'm not sure what you mean by "concatenated" via LVM, but I presume you mean enclosed in a single Volume Group.
Each RAID6 array is a single Physical Volume as far as the VG is concerned.
Did you set a specific physical extent size when creating the VG?
Did you set the number of stripes when you created the LV? If so, how many? If not, then each RAID unit now appears logically as a single device; the underlying stripes within the RAID are hidden by the VG.
Regardless, if you've created a VG and LV, the whole thing is now a single logical unit and the default mkfs.xfs will suffice.
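You can check what the VG and LV actually look like with the LVM reporting tools; something like this (assuming your VG is named vg):

# PE size and totals for the VG
vgdisplay vg

# One row per LV segment: a non-striped LV shows segtype "linear",
# plus the device(s) each segment sits on
lvs --segments -o lv_name,segtype,seg_start,seg_size,devices vg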
Optimizing the performance of RAID and LVM is very detailed work, specific to both usage and the underlying hardware, e.g. database indexes on 15,000 RPM SAS drives.
What kind of RAID is it: motherboard/OS software RAID, or a dedicated controller?
What parameters were used creating the RAID, the Volume Group and the Logical Volume?
What is the intended use of the volume?
The short answer, according to the XFS FAQ, is to let mkfs.xfs figure things out. Only optimize when performance is, or becomes, an issue.
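In other words (a sketch; device and mount-point names are placeholders), just take the defaults and then see what it picked:

mkfs.xfs /dev/vg/lv
mount /dev/vg/lv /mnt/data
# xfs_info reports the sunit/swidth the filesystem is actually using
xfs_info /mnt/data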
I had to duplicate my RAID so that I could rebuild it differently, then use the original as a backup.
YMMV
Quote:
I'm not sure what you mean by "concatenated" via LVM, but I presume you mean enclosed in a single Volume Group.
Concatenated as opposed to LVM striping.
Quote:
Did you set a specific physical extent size when creating the VG?
No.
Quote:
Did you set the number of stripes when you created the LV? If so, how many?
No, I did not use LVM striping.
Quote:
If not, then each RAID unit now appears logically as a single device; the underlying stripes within the RAID are hidden by the VG.
I agree with you up to this point. My question has more to do with the fact that, by specifying a stripe unit and stripe width when mounting an XFS volume, writes are more likely to be spread across multiple spindles instead of concentrated on fewer (or possibly one). If I had a single PV on top of a single RAID device, I would specify the stripe unit and width that my array uses. XFS would ostensibly write to LVM with the correct stride size for the underlying device, and the data would be distributed optimally across multiple spindles.
But this is more complicated: I have two PVs, and they do not have the same stripe width. To optimize write distribution, I need to know whether LVM will (1) write to one PV until it fills up, (2) choose a PV at random (on a per-write basis?), or (3) distribute data across both PVs even though they are not striped. In the first case, I can test with the two different stripe widths and figure out which PV I am writing to *right now*; then, when that PV fills up and LVM starts writing to the second PV, I can switch to the appropriate stripe width. In the second case, I will never know the optimal stripe width. In the third case, the stripe width might be related to the total width of the two RAID devices, or it might not.
Does this make sense?
The specific section of the XFS FAQ that I'm screwing around with is here.
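Since the allocation question matters here, I figure I can at least check where my extents are landing today (VG/LV and device names are placeholders, and the swidth numbers are my own arithmetic):

# Which PV(s) hold the LV's segments right now; with default linear
# allocation I'd expect one segment on the first PV until it fills
lvs --segments -o lv_name,segtype,devices vg
pvdisplay /dev/sda /dev/sdb   # Allocated PE / Free PE per PV

# Candidate mount options if I have to match one array at a time
# (su=64k, so sunit = 128 sectors):
# array0 (9 data disks):  swidth = 9 * 128 = 1152
# array1 (10 data disks): swidth = 10 * 128 = 1280
mount -o sunit=128,swidth=1152 /dev/vg/lv /mnt/data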
Since you've created two PVs, when you created your VG you presumably did something like the following:
pvcreate /dev/sda
pvcreate /dev/sdb
vgcreate vg /dev/sda /dev/sdb
# lvcreate needs a size; -l 100%FREE allocates all free extents
lvcreate -n lv -l 100%FREE vg
You can really only have two stripes, one on each PV, but as you say, you created a non-striped (linear) LV.
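For comparison, a striped LV across the two PVs would have been created with something like this (stripe size is in KB; names are placeholders):

lvcreate -n lv -i 2 -I 64 -l 100%FREE vg   # -i 2 = stripe across both PVs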
I don't believe those XFS parameters will make any difference (not speaking from authority; I have not contributed to XFS code).
Read the LVM HOWTO, sections 8.1 through 8.4 (Hardware RAID), combined with the XFS FAQ section you mentioned.
Could you post a copy of the LVM metadata from /etc/lvm/backup? (You can trim out the unnecessary bits if you want to.)
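The backup is a plain-text file named after the VG (VG name is a placeholder here):

cat /etc/lvm/backup/vg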