Very slow Raid performance - Only write speed
Hey people
I have recently migrated my file server over to an HP MicroServer. The server has two 1TB disks in a software RAID-1 array managed by mdadm. For the migration I simply moved the mirrored disks over from the old server (Ubuntu 9.10 Server) to the new one (Ubuntu 10.04.1 Server). I have since noticed that write speed to the RAID array is *VERY* slow: on the order of 1-2MB/s (more info below). Obviously this is not optimal performance, to say the least. I have checked a few things: CPU utilisation is not abnormal (<5%), nor is memory/swap. When I took a disk out and rebuilt the array with only one disk (I tried both), write performance was as expected (>~70MB/s). The read speed seems to be unaffected, however!

Creating a file on the single (non-RAID) disk is fine:
Code:
root@BigBertha:/# dd if=/dev/zero of=ddfile.big bs=1MB count=1k

Copying that file onto the array is slow:
Code:
root@BigBertha:/# time cp ddfile.big /storage/big/

Writing directly to the array with dd:
Code:
root@BigBertha:/storage/big# dd if=/dev/zero of=ddfile.big bs=1MB count=1k

Reading back from the array:
Code:
root@BigBertha:/storage/big# dd if=ddfile.big of=/dev/null

Copying from the array back to the single disk:
Code:
root@BigBertha:/storage/big# time cp ddfile.big ~

Array status:
Code:
root@BigBertha:~# cat /proc/mdstat

I'm tempted to think there is something funny going on with the storage subsystem, as copying from the single disk to the array is even slower than creating a file on the array with dd from /dev/zero. Either way, I can't try the array in another computer right now, so I thought I would ask whether people have seen anything like this! At the moment I'm not sure if it is something strange from having simply chucked the mirrored array into the new server; perhaps a different version of mdadm? I'm wondering if it's worth backing up and starting from scratch! Anyhow, this has really got me scratching my head, and it's a bit of a pain! Any help here would be awesome, e-cookies at the ready! Cheers SS |
Are the disks Western Digital and labeled "advanced format" or something? If so, read this thread:
http://www.linuxquestions.org/questi...0/#post4200465 |
Quote:
Thanks for the response, Guttorm. But no, these are Seagate Barracuda LP (5900 RPM) drives from late 2009, so they don't have the 4K sector size (fdisk reports 512). Also, I didn't think sector size affected performance much; I thought Advanced Format was about reducing overhead, i.e. the usable space lost in formatting drives. |
Hmm. Sorry, then I don't know. I had a similar problem a little while ago, where writing was very slow with RAID 1. In that case Advanced Format was the problem, and recreating the partitions with some options to fdisk solved it.
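For reference, the usual Advanced Format fix is to start partitions on a 4096-byte boundary. A minimal sketch of the alignment check (the start sector here is a sample value, not from this thread; on a real system take it from `fdisk -l`):

```shell
# A partition start sector (in 512-byte units) is 4K-aligned when it is
# divisible by 8, since 8 x 512B = 4096B.
start=63      # a typical old-style, misaligned start sector (sample value)
aligned=$(( start % 8 == 0 ? 1 : 0 ))
echo "aligned=$aligned"
# Prints aligned=0: sector 63 is not 4K-aligned. Modern tools default
# to sector 2048, which is aligned.
```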
|
Quote:
I'm pulling my hair out over this. There seem to be no other people with similar problems (I've done plenty of googling), and no real reason for it to happen. I'm going to have to mess around with this over the weekend, I think :( |
iSpaZZZ^...did you ever find the fix for this? I'm running into the same issue with a little home NAS I built with two Samsung F4EG HD155UI 1.5TB drives running in software RAID 1.
|
hi chudux,
Yes, I did get it sorted in the end. I found out it was down to how the drive was mounted. I can't remember the exact switches that were set in /etc/fstab, but I remember it was set to 'sync' not 'async'; putting it back to 'defaults' brought performance back to the level I would expect. Hope that helps. |
** Solution Found **
Hi guys, I see that this thread is two years old, but spaz's response at the end helped me as well. I feel it wasn't spelled out very clearly, though, so I made an account just to document it here in case it can help somebody else.

I have a RAID-1 array that I migrated from one machine to another (used as storage, not the root dir). The machine the array was created on was running Ubuntu 12.04 LTS; the machine the array was moved to was running Debian Wheezy 7.2. When I migrated the drives I migrated /etc/fstab and /etc/mdadm/mdadm.conf as well. On the Debian machine I was seeing 150kB/s write speeds, while on the Ubuntu machine I was seeing 50MB/s write speeds with the exact same settings in mdadm.conf and fstab.

The solution was like spaz said above: change the mount options in /etc/fstab to "defaults", or change "sync" to "async". When I changed "sync" to "async", or just put in "defaults", I got normal performance from the array.

Old fstab that gave slow write speeds on Debian 7.2: Quote:
Quote:
Quote:
|
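The original fstab lines were not preserved in this thread, but a hypothetical before/after illustrating the fix might look like this (the /dev/md0 device, /storage mount point, and ext4 filesystem are assumptions, not the poster's actual values):

```
# Hypothetical /etc/fstab entries -- not the poster's actual config.
# Slow: "sync" forces every write to complete on disk before returning.
/dev/md0  /storage  ext4  sync,noatime  0  2

# Fast: "defaults" implies async writes (the normal behaviour).
/dev/md0  /storage  ext4  defaults      0  2
```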
Still very relevant
Quote:
I could not resist doing the same and posting a thanks. This thread was a huge help to me; I was just about to give up on kernel RAID, and clearing the sync mount option made all the difference! Thank you all! The only other thing I would add is that the server I am building is a heavily experimental virtual client host using SSDs, HDDs, external USB 3 HDDs, thumb drives and network shares. My Debian OS and my client partitions are all over the place: some on BIOS RAID, some on kernel RAID, some not on RAID at all, and I was suspecting everything but the mount options. I should not have been, since once sync is cleared everything works the way I expected it to. Lesson learned. Cheers! |