LinuxQuestions.org (/questions/)
-   Linux - Server (https://www.linuxquestions.org/questions/linux-server-73/)
-   -   Very slow Raid performance - Only write speed (https://www.linuxquestions.org/questions/linux-server-73/very-slow-raid-performance-only-write-speed-854375/)

iSpaZZZ^ 01-05-2011 09:15 AM

Very slow Raid performance - Only write speed
 
Hey people

I have recently migrated my file server over to an HP MicroServer. The server has two 1TB disks in a software RAID-1 array, managed with mdadm. When I migrated, I simply moved the mirrored disks over from the old server, running Ubuntu 9.10 (server), to the new one on 10.04.1 (server).

I have recently noticed that write speed to the RAID array is *VERY* slow: on the order of 1-2 MB/s (more info below). Obviously this is not optimal performance, to say the least. I have checked a few things: CPU utilisation is not abnormal (<5%), nor is memory/swap. When I took a disk out and rebuilt the array with only one disk (tried both), performance was as expected (write speed >~70 MB/s). The read speed seems to be unaffected, however!


Creating a file on the single (non-RAID) disk is fine:
Code:

root@BigBertha:/# dd if=/dev/zero of=ddfile.big bs=1MB count=1k
1024+0 records in 
1024+0 records out 
1024000000 bytes (1.0 GB) copied, 14.9 s, 68.7 MB/s

However, copying the 1GB file to the array (mounted on /storage/big/) takes ~25 minutes (~0.6 MB/s), which is not normal:
Code:

root@BigBertha:/# time cp ddfile.big /storage/big/ 
 real    25m51.021s 
 user    0m0.128s 
 sys    0m10.957s

Creating the file on the array directly from /dev/zero (no other storage involved) is again very slow:
Code:

root@BigBertha:/storage/big# dd if=/dev/zero of=ddfile.big bs=1MB count=1k
1024+0 records in
1024+0 records out
1024000000 bytes (1.0 GB) copied, 71.4943 s, 14.3 MB/s

Reading from the array, however, is perfectly fine:
Code:

root@BigBertha:/storage/big# dd if=ddfile.big of=/dev/null
2000000+0 records in
2000000+0 records out
1024000000 bytes (1.0 GB) copied, 12.4544 s, 82.2 MB/s

As is copying from the array to the single disk:
Code:

root@BigBertha:/storage/big# time cp ddfile.big ~
real    0m13.547s
user    0m0.040s
sys    0m4.156s

Also, here is the output of 'cat /proc/mdstat':
Code:

root@BigBertha:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc1[0] sdb1[1]
      976759936 blocks [2/2] [UU]
     
unused devices: <none>
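
(The [UU] and the lack of a recovery line should mean both disks are active and no resync is running in the background. If the fuller state is useful I can post that too, e.g.:)
Code:

# full array state, write-intent bitmap and sync status (md0 is my device from above)
mdadm --detail /dev/md0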

As mentioned above, when I remove a disk and mount a single drive from the array, performance is normal with either disk. Whilst I understand that write speeds to RAID-1 mirrors might not be blazing, I am certain that something is amiss here!

I'm tempted to think that there is something funny going on with the storage subsystem, as copying from the single disk to the array is slower than creating a file on the array from /dev/zero using dd.

Either way, I can't try the array in another computer right now, so I thought I would ask to see if anyone has seen anything like this!

At the moment I'm not sure if it is something strange to do with having simply chucked the mirrored array into the new server, perhaps a different version of mdadm? I'm wondering if it's worth backing up and starting from scratch! Anyhow, this has really got me scratching my head, and it's a bit of a pain! Any help here would be awesome, e-cookies at the ready! Cheers

SS

Guttorm 01-05-2011 11:17 AM

Are the disks Western Digital and labeled "advanced format" or something? If so, read this thread:

http://www.linuxquestions.org/questi...0/#post4200465

iSpaZZZ^ 01-05-2011 12:23 PM

Quote:

Originally Posted by Guttorm (Post 4214162)
Are the disks Western Digital and labeled "advanced format" or something? If so, read this thread:

http://www.linuxquestions.org/questi...0/#post4200465


Thanks for the response, Guttorm. But no, these are Seagate Barracuda LPs (5900 RPM) from late 2009, so they don't have the 4k sector size (fdisk reports 512). Also, I didn't think that sector size affected performance much; I thought it was about reducing the overhead, i.e. usable space lost to formatting the drives.

Guttorm 01-05-2011 12:39 PM

Hmm. Sorry, then I don't know. I had a similar problem a little while ago, where writing was very slow with RAID 1. In that case it was Advanced Format that was the problem, and recreating the partitions with some options to fdisk solved it.

iSpaZZZ^ 01-06-2011 07:27 AM

Quote:

Originally Posted by Guttorm (Post 4214245)
Hmm. Sorry, then I don't know. I had a similar problem a little while ago, where writing was very slow with RAID 1. In that case it was Advanced Format that was the problem, and recreating the partitions with some options to fdisk solved it.

Heh, no need to apologise, glad you mentioned it, although I am starting to think that rebuilding the array from the beginning is the only thing I can try!

salasi 01-06-2011 01:35 PM

Quote:

Originally Posted by iSpaZZZ^ (Post 4214230)
Also, I didn't think that sector size affected performance much; I thought it was about reducing the overhead, i.e. usable space lost to formatting the drives.

Yes and no. The aim is to reduce the space lost to error correction, what with areal density going up and the margins in the error correction subsystem going down; that's true. OTOH, a side effect is that if your computer writes a 512 B block, the disk has to read the containing 4k block, modify 512 B of it and write the whole 4k block back to the platter. If, to take a very obvious example, you write a long run of 512 B blocks one at a time, that is going to be very, very inefficient and, therefore, slow.

Quote:

Originally Posted by iSpaZZZ^ (Post 4214230)
..But no, these are Seagate Barracuda LPs (5900 RPM) from late 2009, so they don't have the 4k sector size (fdisk reports 512).

With that date, I'd guess that you are correct, but if it were me I'd still want to check the manufacturer's datasheet, given that if this assumption were wrong it would probably be the easiest thing to deal with.
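
If it were me I'd also cross-check what the drive reports to the kernel, rather than relying on fdisk alone. Something along these lines should do it (sdb/sdc are just the device names from your mdstat output, adjust to taste):
Code:

# logical vs physical sector size as seen by the kernel
cat /sys/block/sdb/queue/logical_block_size
cat /sys/block/sdb/queue/physical_block_size

# what the drive itself advertises
hdparm -I /dev/sdb | grep -i sector

It would also be worth watching iostat -x 1 (from the sysstat package) during one of the slow writes, to see whether one of the two disks is saturated while the other sits idle.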

iSpaZZZ^ 01-07-2011 04:35 AM

Quote:

Originally Posted by salasi (Post 4215560)
OTOH, a side effect is that if your computer writes a 512 B block, the disk has to read the containing 4k block, modify 512 B of it and write the whole 4k block back to the platter. If, to take a very obvious example, you write a long run of 512 B blocks one at a time, that is going to be very, very inefficient and, therefore, slow.

Well, according to the datasheet they have 512 bytes per sector. Whilst I'm aware that the 4k sector size can cause a slowdown if the OS/kernel is not 4k-aware, I don't think it is the problem here.

I'm pulling my hair out over this; there seem to be no other people with similar problems (I've done plenty of googling) and no real reason for it to happen. I'm going to have to mess around with this over the weekend, I think :(

chudux 06-05-2012 10:20 AM

iSpaZZZ^... did you ever find the fix for this? I'm running into the same issue with a little home NAS I built with two Samsung F4EG HD155UI 1.5TB drives running in software RAID 1.

iSpaZZZ^ 06-07-2012 07:45 AM

hi chudux,

Yes, I did get it sorted in the end. I found out it was down to the way the array was mounted. I can't remember the exact options that were set in /etc/fstab, but I remember that it was set to 'sync' rather than 'async'; putting it back to defaults made it go back to the performance level I would expect.
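
I don't have the old line to hand any more, but to see what options an array is actually mounted with, and to flip it over without a reboot, something like this should do it (mount point from my earlier posts):
Code:

# show the options the array is currently mounted with
grep /storage/big /proc/mounts

# switch to asynchronous writes on the fly
# (make the same change permanent in /etc/fstab)
mount -o remount,async /storage/big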

Hope that helps.

etoixpi 11-05-2013 06:50 PM

** Solution Found **

Hi Guys,

I see that this thread is 2 years old, but spaz's response at the end helped me as well. I feel it wasn't spelled out very clearly though, so I made an account just to document this here in case it can also help somebody else.

I have a RAID 1 array that I migrated from one machine to another (used as storage, not the root dir). The machine the array was created on was running Ubuntu 12.04 LTS. The machine that the array was moved to was running Debian Wheezy 7.2. When I migrated the drives I carried over /etc/fstab and /etc/mdadm/mdadm.conf as well. On the Debian machine I was seeing 150 kB/s write speeds, while on the Ubuntu machine I was seeing 50 MB/s write speeds with the exact same settings in mdadm.conf and fstab.

The solution was as spaz said above: change the mount options in /etc/fstab to "defaults", or change "sync" to "async". When I changed "sync" to "async", or just put in "defaults", I got normal performance from the array.

Old fstab that gave slow write speeds on Debian 7.2:
Quote:

/dev/mapper/raid1 /media/storage ext4 rw,nosuid,nodev,noexec,auto,user,sync 0 2
New fstab that gave normal performance on Debian 7.2:
Quote:

/dev/mapper/raid1 /media/storage ext4 rw,nosuid,nodev,noexec,auto,user,async 0 2
Alternatively, this worked as well on Debian 7.2:
Quote:

/dev/mapper/raid1 /media/storage ext4 defaults 0 2
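
To pick up the edited options without a reboot, a plain remount should be enough, and checking /proc/mounts will show whether "sync" is still in effect (the path is from my fstab lines above):
Code:

mount -o remount /media/storage
grep /media/storage /proc/mounts
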
Cheers. I hope this helps someone else!

alabit 11-07-2014 01:31 AM

Still very relevant
 
Quote:

Originally Posted by etoixpi (Post 5058991)
I see that this thread is 2 years old, but spaz's response at the end helped me as well. I feel it wasn't spelled out very clearly though, so I made an account just to document this here in case it can also help somebody else.

...

Cheers. I hope this helps someone else!

Greetings,

I could not resist doing the same and posting a thanks. This thread was a huge help to me; I was just about to give up on kernel RAID, but clearing the sync mount option made all the difference! Thank you all!

The only other thing I would like to add is that the server I am building is a heavily experimental virtual client host using SSDs, HDDs, external USB 3 HDDs, thumb drives and network shares, and my Debian OS and my client partitions are all over the place: some on BIOS RAID, some on kernel RAID, some not on RAID at all. I was suspecting everything but the mount options. I should not have been, since once sync is cleared everything works the way I expected it to. Lesson learned. Cheers!
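
If anyone else needs to hunt down a stray sync flag across a pile of mounts like mine, something as simple as this should flag the offenders:
Code:

# list any filesystems currently mounted with the sync option
grep -w sync /proc/mounts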

