LinuxQuestions.org


road hazard 07-29-2016 05:07 PM

Linux SSD write speed is crappy
 
My PC has some hot-swap bays. I have two SanDisk SSDs. One has Windows loaded on it and the other Linux Mint.

When I boot my PC with the Windows SSD and copy a file from my server, over a gigabit connection, the speed is over 100MB/s the entire time. If I go into Device Manager and disable the write back caching policy for this drive, performance drops to about 60MB/s.

I shut down the PC, pop in my Linux SSD, boot up, and copy that same file from my server; performance is about 60MB/s. If I check the drive with hdparm -W, it reports that write caching is 'on', but the performance just isn't there. I wiped Linux from this drive, installed Windows, and ran the same test; with write caching enabled, 100+MB/s. What in Linux is slowing me down?

I used nmon to watch disk activity and it's all over the place. Writes to my SSD start at 90MB/s, jump to 200+, 300, then drop to zero for a split second, then 60MB/s, 90, 250, 400... pause, rinse, repeat until the transfer is done. Watching the performance monitor, the transfer rate stays around 60MB/s as far as data being received over the network. It's as if the SSD is choking, not keeping up with the gigabit transfer coming in, and can't write the data fast enough. Is there a Linux version of that 'enable write back caching' option from Device Manager in Windows? I think this will solve my problem.
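For reference, the hdparm check I mentioned was roughly this (my SSD shows up as /dev/sda):

Code:

sudo hdparm -W /dev/sda
# /dev/sda:
#  write-caching =  1 (on)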

BUT, funny thing is, I used to get 100+MB/s when copying to my Linux box up until a few weeks ago. As far as I know, nothing has changed.

suicidaleggroll 07-29-2016 05:17 PM

Have you confirmed it's the SSD that's slowing down the transfer?

Try copying the file over the network to /dev/null
Also try using dd to dump from /dev/zero to a file on the SSD

If indeed you're seeing that transferring over the network to /dev/null is >100 MB/s, and using dd to dump a file on the SSD is ~60 MB/s, you can pursue SSD optimization, but I have my doubts that will be the case.
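For the network half, if the share is mounted somewhere locally, something along these lines would do it (the file name and mount path are just placeholders):

Code:

dd if=/path/to/mounted/share/bigfile.mkv of=/dev/null bs=1M status=progress

That pulls the file over the network and throws the data away, so the SSD is out of the picture entirely.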

road hazard 07-29-2016 05:54 PM

Quote:

Originally Posted by suicidaleggroll (Post 5583140)
Have you confirmed it's the SSD that's slowing down the transfer?

Try copying the file over the network to /dev/null
Also try using dd to dump from /dev/zero to a file on the SSD

If indeed you're seeing that transferring over the network to /dev/null is >100 MB/s, and using dd to dump a file on the SSD is ~60 MB/s, you can pursue SSD optimization, but I have my doubts that will be the case.

I ran iperf between the server and my PC (with the Win 7 SSD installed) and network utilization stayed at 99% the entire time. When I copy from the server to this machine with the Linux SSD installed, the server's network utilization (in Task Manager) bounces around from 50-80% constantly.

Being a newb and all, can you give me the exact command I need to type into the terminal? (My SSD is at /dev/sda.) I'm going to boot back into Linux in a few minutes and will poke around on Google to see if I can figure out how to carry out your test.

Thanks

Update: I used this command:

Code:

dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc status=progress

and got this:

Code:

1030750208 bytes (1.0 GB, 983 MiB) copied, 2.03404 s, 507 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.47454 s, 196 MB/s

Still working on the /dev/null network copy.

suicidaleggroll 07-29-2016 06:00 PM

Can't help on copying the file over the network, since I don't know how you're doing it currently. Just change the destination of this copy to /dev/null instead.

As for the dd test, you could do something like:
Code:

dd if=/dev/zero of=/path/to/mount/point/bigfile.bin bs=1M count=10000
That'll write a 10 GB file to /path/to/mount/point/bigfile.bin (change this to an actual directory that will be located on the drive), time it, and tell you the average write speed when it's done.
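Note that without a sync the reported speed can be inflated by data still sitting in the page cache; adding conv=fdatasync makes dd flush everything to the drive before it prints the result:

Code:

dd if=/dev/zero of=/path/to/mount/point/bigfile.bin bs=1M count=10000 conv=fdatasync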

road hazard 07-29-2016 06:04 PM

Quote:

Originally Posted by suicidaleggroll (Post 5583152)
Can't help on copying the file over the network, since I don't know how you're doing it currently. Just change the destination of this copy to /dev/null instead.

As for the dd test, you could do something like:
Code:

dd if=/dev/zero of=/path/to/mount/point/bigfile.bin bs=1M count=10000
That'll write a 10 GB file to /path/to/mount/point/bigfile.bin (change this to an actual directory that will be located on the drive), time it, and tell you the average write speed when it's done.

Results:

Code:

10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 43.5942 s, 241 MB/s

road hazard 07-29-2016 06:08 PM

Quote:

Originally Posted by suicidaleggroll (Post 5583152)
Can't help on copying the file over the network, since I don't know how you're doing it currently. Just change the destination of this copy to /dev/null instead.

As for the dd test, you could do something like:
Code:

dd if=/dev/zero of=/path/to/mount/point/bigfile.bin bs=1M count=10000
That'll write a 10 GB file to /path/to/mount/point/bigfile.bin (change this to an actual directory that will be located on the drive), time it, and tell you the average write speed when it's done.

I found this page: https://sites.google.com/site/easyli...-write-actions and carried out the steps, but it seems this is more about reducing wear on your SSD and less about enabling write-back caching. Maybe I'm crazy and 60MB/s is normal for an SSD under Linux when copying large amounts of data over gigabit? Can you pull some multi-gig file over your network and see if it's around 60MB/s like mine, or 100+, thus making me want to bang my head into the desk in frustration? :)

road hazard 07-29-2016 10:39 PM

Quote:

Originally Posted by suicidaleggroll (Post 5583152)
Can't help on copying the file over the network, since I don't know how you're doing it currently. Just change the destination of this copy to /dev/null instead.

As for the dd test, you could do something like:
Code:

dd if=/dev/zero of=/path/to/mount/point/bigfile.bin bs=1M count=10000
That'll write a 10 GB file to /path/to/mount/point/bigfile.bin (change this to an actual directory that will be located on the drive), time it, and tell you the average write speed when it's done.

Something really weird I just discovered.

To recap, my Windows 7 "server" is a Plex media server. It has an SSD.

I have another PC (we'll call it 'Gamer') with hot-swap drives: an SSD with Windows 7 and an SSD with Linux Mint, so I can boot into either OS on the same hardware.

While using the Windows SSD, and with write back caching enabled for my SSD, I can pull a 10+ gig file from my Plex server at 100+MB/s. If I turn off write back caching, it drops to 60MB/s.

While using the Linux SSD on this same box, I can pull a 10+ gig file from my Plex server at 60MB/s.

While in Linux, if I copy that same file over to the RAID 5 array, it copies over at 120+MB/s. If I copy that same file back to my SSD, it comes over at well over 150MB/s.

So while in Linux, my SSD can copy to/from the RAID 5 array (spindle drives) really quickly, but when pulling from the SSD on the Plex server over the gigabit connection, the transfer rate is about 60MB/s. Is that normal? Why can this same hardware (booted from the Windows 7 SSD with write-back caching enabled) pull the same file at over 100MB/s? So I guess SSD performance in Linux is fine, as long as the source isn't a file coming in over the network card? Unless 60MB/s is normal for gigabit?

This makes no sense, especially since some friends of mine can pull huge files from their Windows servers over a gigabit connection, with write caching turned OFF on the target PC, and still maintain a 100MB/s transfer rate.

If anyone else is reading this, can you copy a file from a gigabit-connected device to your Linux install on an SSD and tell me what the transfer rate is?

Thanks

suicidaleggroll 07-30-2016 09:08 AM

You haven't told us what the network protocol is. Encryption overhead can easily drop the transfer speed to ~60 MB/s depending on hardware, cipher used, etc.
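It's also worth measuring the raw TCP throughput between the two boxes, separate from any file copying, e.g. with iperf (it needs to be running on both ends; the IP below is a placeholder):

Code:

iperf -s                  # on the server
iperf -c 192.168.1.10     # on the Linux box, pointed at the server's IP

A clean gigabit link should report roughly 940 Mbit/s; if it does, the wire and NICs are fine and the slowdown is in the protocol or the disk path.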

smallpond 07-30-2016 10:32 AM

If you like living with no safety net you can mount the filesystem with the "nobarrier" option to operate like Windows.
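A sketch of what that looks like in /etc/fstab (the UUID, mount point and filesystem type are placeholders; match them to your existing entry):

Code:

# root filesystem with write barriers disabled -- faster, but no safety net on power loss
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,nobarrier  0  1

Then remount or reboot for it to take effect.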

road hazard 07-30-2016 12:11 PM

Quote:

Originally Posted by suicidaleggroll (Post 5583330)
You haven't told us what the network protocol is. Encryption overhead can easily drop the transfer speed to ~60 MB/s depending on hardware, cipher used, etc.

Sorry, using Samba all the way around and TCP.

road hazard 07-30-2016 12:41 PM

Quote:

Originally Posted by smallpond (Post 5583345)
If you like living with no safety net you can mount the filesystem with the "nobarrier" option to operate like Windows.

Adding that to fstab and rebooting made no difference. Still transferring from the server's SSD to my Linux SSD (over the gigabit network) at around 60MB/s.

suicidaleggroll 07-30-2016 10:46 PM

Quote:

Originally Posted by road hazard (Post 5583367)
Sorry, using Samba all the way around and TCP.

Samba may be the cause; have you tried NFS (if possible)?
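If the Windows box can serve NFS, the client side is just a mount; and if you stay on Samba, it may also be worth forcing a newer SMB dialect, since mount.cifs still defaults to the old SMB1 dialect while Windows 7 can speak 2.1. Server name, share, and mount points below are placeholders:

Code:

# NFS test mount (assumes the server actually exports /media over NFS)
sudo mount -t nfs plexserver:/media /mnt/plex-nfs

# CIFS mount pinned to SMB 2.1 instead of the default SMB1 dialect
sudo mount -t cifs //plexserver/media /mnt/plex -o username=youruser,vers=2.1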

