[solved] Problems with Raid5 performance
Hiho!
I've built a new home server based on Ubuntu Desktop 10.10 64-bit. The server is powered by an AMD X2 450e, 2GB DDR3/ECC, a 60GB 2.5" IDE system hard disk drive, and three Samsung HD204UI 2TB hard disk drives for the RAID set (4k sectors; the firmware is already updated). Curiously, and contrary to what you might expect, I don't have a write performance problem: the issue concerns the read performance. My setup:
mdadm.conf:
Code:
DEVICE /dev/sd[bcd]1
xfs_info:
Code:
meta-data=/dev/md0 isize=256 agcount=32, agsize=30523616 blks
mount:
Code:
/dev/md0 on /mnt/raid5 type xfs (rw,noatime)
cat /proc/mdstat:
Code:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
Write test:
Code:
dd if=/dev/zero of=/mnt/raid5/bonnie/4GB bs=1M count=4000
Read test:
Code:
time cp /mnt/raid5/bonnie/4GB /dev/null
Any suggestion is welcome! Regards OK |
Very interesting, but writing zeros to XFS is not a benchmark. And you need to use conv=fdatasync for dd to sync the data to the disk; otherwise it may just be cached.
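For illustration, a minimal synced write test might look like the following sketch (the file location and size are placeholders of mine, not from the thread):

```shell
# Write test with forced sync: conv=fdatasync makes dd flush the file to
# disk before reporting its timing, so the figure reflects disk speed
# rather than the page cache. Path and size here are illustrative only.
TESTFILE=$(mktemp)
dd if=/dev/zero of="$TESTFILE" bs=1M count=16 conv=fdatasync
SIZE=$(wc -c < "$TESTFILE")
rm -f "$TESTFILE"
```

On a real run you would point the output file at the RAID filesystem and use a file larger than RAM (several GB), so caching cannot mask the disk speed.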
To check the real-life performance, change to the filesystem to be tested, then create a random file, say 4GB of it:
Code:
dd if=/dev/urandom of=4GB bs=1048576 count=4096
Then, check the read performance (in 256k chunks) via
Code:
sudo sh -c 'sync ; echo 3 > /proc/sys/vm/drop_caches'
and the copy performance via
Code:
sudo sh -c 'sync ; echo 3 > /proc/sys/vm/drop_caches'
A good write performance test would require you to put the random file, say 1GB of it, on a ramdisk:
Code:
sudo mkdir -m 0700 /tmp/ramdisk
Code:
sudo sh -c 'sync ; echo 3 > /proc/sys/vm/drop_caches'
Code:
sudo umount /tmp/ramdisk/
Nominal Animal |
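The code blocks in the post above appear truncated in this copy of the thread. As a rough reconstruction (my assumptions: the read and copy steps use dd at bs=262144, i.e. 256k as stated, and the ramdisk is tmpfs; paths and sizes are guesses, not the original commands):

```shell
# Assumed reconstruction of the truncated test sequence; not verbatim
# from the original post.

# Read test: drop the page cache, then read the random file in 256k chunks.
sudo sh -c 'sync ; echo 3 > /proc/sys/vm/drop_caches'
dd if=4GB of=/dev/null bs=262144

# Copy test: drop caches again, then copy the file within the filesystem.
sudo sh -c 'sync ; echo 3 > /proc/sys/vm/drop_caches'
dd if=4GB of=4GB-copy bs=262144 conv=fdatasync

# Write test: stage a 1GB random file on a tmpfs ramdisk, drop caches,
# then write it to the filesystem under test.
sudo mkdir -m 0700 /tmp/ramdisk
sudo mount -t tmpfs -o size=1100m tmpfs /tmp/ramdisk
sudo dd if=/dev/urandom of=/tmp/ramdisk/1GB bs=1048576 count=1024
sudo sh -c 'sync ; echo 3 > /proc/sys/vm/drop_caches'
dd if=/tmp/ramdisk/1GB of=1GB bs=262144 conv=fdatasync
sudo umount /tmp/ramdisk/
```

The 256k block size matches the "Read performance using 256k chunks" results quoted in the next post (262144 bytes = 256 × 1024).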
For comparison, here are my results on software RAID0 over two Samsung F1 1TB (HD103UJ) drives, on a 94% full ext4 filesystem, run just now using exactly the commands above.
My leftover extents are all at the very end of the disks, so I'm getting the worst performance possible out of any new files. Other than normal desktop usage, the drives were idle. Kernel is vanilla 2.6.36.3 + autogroup patches. If you run the copy or write tests repeatedly, remember to remove the output file from the old run first.
Read performance using 256k chunks:
Code:
4294967296 bytes (4.3 GB) copied, 35.9747 s, 119 MB/s
Copy performance using 256k chunks:
Code:
4294967296 bytes (4.3 GB) copied, 105.057 s, 40.9 MB/s
Write performance using 256k chunks, reading from a ramdisk:
Code:
1073741824 bytes (1.1 GB) copied, 11.6993 s, 91.8 MB/s
It is possible there is some kind of a RAID0 performance regression in the kernel; I'd have to run the tests on an older kernel to check.
Nominal Animal |
Thanks for your reply. I'm going to run the tests tomorrow and post the results.
|
Here's the output for further analysis (just copy-and-pasted; I'm too tired for more today):
Code:
tpm@ubuntu-amd64:~$ ls /proc/sys/vm/drop_caches |
In addition: when I run a read speed test with the disk utility that comes with GNOME, I get an average read speed of 200 MB/s!
|
The problem still exists and I can't figure out the reason. I would appreciate any suggestions.
PS: I forgot to mention that I've switched to ext4... with nearly the same results. |
It seems that the mdadm version that comes with Ubuntu 10.10 has a negative impact on the partition alignment. After compiling version 3.1.4, I got the results below:
Info: option -b, 20GB volume @ RAID5 @ 3x2TB HDDs @ 1TB offset
Code:
Version 1.96       ------Sequential Output------ --Sequential Input- --Random- |
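Since 4k-sector alignment turned out to be the culprit, a quick sanity check for readers hitting the same issue (my own sketch, not from the thread) is to look at each member partition's start sector, e.g. from /sys/block/sdX/sdX1/start or fdisk -lu, and verify that it is a multiple of 8:

```shell
# A partition start sector (counted in 512-byte units) is 4k-aligned when
# it is a multiple of 8, since 8 x 512 B = 4096 B. The helper function and
# the example sector numbers below are my own illustration.
is_4k_aligned() {
    if [ $(($1 % 8)) -eq 0 ]; then echo aligned; else echo misaligned; fi
}

is_4k_aligned 63    # old fdisk default start sector -> misaligned
is_4k_aligned 2048  # modern default start sector    -> aligned
```

On the array itself, `mdadm --examine` on each member also reports the data offset, which matters for the same reason.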