LinuxQuestions.org
Old 07-29-2016, 06:07 PM   #1
road hazard
Member
 
Registered: Nov 2015
Posts: 39

Rep: Reputation: Disabled
Linux SSD write speed is crappy


My PC has some hot-swap bays. I have two SanDisk SSDs: one has Windows loaded on it, the other Linux Mint.

When I boot my PC with the Windows SSD and copy a file from my server, over a gigabit connection, the speed is over 100MB/s the entire time. If I go into Device Manager and disable the write back caching policy for this drive, performance drops to about 60MB/s.

I shut down the PC, pop the Linux SSD in, boot up, and copy that same file from my server; performance is about 60MB/s. hdparm -W reports that the SSD's write caching is 'on', but the performance just isn't there. I wiped Linux from this drive, installed Windows, and ran the same test: with write caching enabled, 100+MB/s. What in Linux is slowing me down?

I used nmon to watch disk activity and it's all over the place. Writes to my SSD start at 90MB/s, jump to 200+, then 300, then drop to zero for a split second, then 60MB/s, 90, 250, 400... pause, rinse, repeat until the transfer is done. Watching in the performance monitor, the transfer rate stays around 60MB/s as far as data received over the network. It's as if the SSD is choking, unable to keep up with the incoming gigabit transfer and write the data fast enough. Is there a Linux equivalent of the 'enable write back caching' option in Windows Device Manager? I think this would solve my problem.

BUT, the funny thing is, I used to get 100+MB/s when copying to my Linux box up until a few weeks ago. As far as I know, nothing has changed.
 
Old 07-29-2016, 06:17 PM   #2
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,258

Rep: Reputation: 1947
Have you confirmed it's the SSD that's slowing down the transfer?

Try copying the file over the network to /dev/null
Also try using dd to dump from /dev/zero to a file on the SSD

If indeed you're seeing that transferring over the network to /dev/null is >100 MB/s, and using dd to dump a file on the SSD is ~60 MB/s, you can pursue SSD optimization, but I have my doubts that will be the case.
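As a concrete sketch of those two tests (the mount point and file names here are placeholders for your setup):

```shell
# Test 1: network read speed with the SSD out of the loop.
# /mnt/server is a hypothetical mount point for the server share:
#   cp /mnt/server/bigfile.mkv /dev/null

# Test 2: SSD write speed with the network out of the loop.
# conv=fdatasync forces the data to actually reach the disk before dd
# reports a speed; bump count up for a longer, more realistic test.
dd if=/dev/zero of=/tmp/ssd_test.bin bs=1M count=64 conv=fdatasync
rm -f /tmp/ssd_test.bin
```

If test 1 stays above 100 MB/s while test 2 sits near 60 MB/s, the SSD really is the bottleneck; if test 1 also drops, look at the network path instead.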
 
Old 07-29-2016, 06:54 PM   #3
road hazard
Member
 
Registered: Nov 2015
Posts: 39

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by suicidaleggroll View Post
Have you confirmed it's the SSD that's slowing down the transfer?

Try copying the file over the network to /dev/null
Also try using dd to dump from /dev/zero to a file on the SSD

If indeed you're seeing that transferring over the network to /dev/null is >100 MB/s, and using dd to dump a file on the SSD is ~60 MB/s, you can pursue SSD optimization, but I have my doubts that will be the case.
I used iperf on the server and in Windows on my PC (Win 7 SSD installed) and network utilization stayed at 99% the entire time. When I copy from the server to this machine with the Linux SSD installed, the server network utilization (in Task Manager) bounces around from 50-80% constantly.

Being a newb and all, can you give me the exact formatting of what I need to type into the terminal? (My SSD is at /dev/sda.) I'm going to boot back into Linux in a few minutes and will poke around on Google to see if I can figure out how to carry out your test.

Thanks

Update: I used this command:

dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc status=progress

and got this....

1030750208 bytes (1.0 GB, 983 MiB) copied, 2.03404 s, 507 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.47454 s, 196 MB/s

Still working on the /dev/null network copy.

Last edited by road hazard; 07-29-2016 at 07:00 PM. Reason: Update
 
Old 07-29-2016, 07:00 PM   #4
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,258

Rep: Reputation: 1947
Can't help on copying the file over the network, since I don't know how you're doing it currently. Just change the destination of this copy to /dev/null instead.

As for the dd test, you could do something like:
Code:
dd if=/dev/zero of=/path/to/mount/point/bigfile.bin bs=1M count=10000
That'll write a 10 GB file to /path/to/mount/point/bigfile.bin (change this to an actual directory that will be located on the drive), time it, and tell you the average write speed when it's done.
 
Old 07-29-2016, 07:04 PM   #5
road hazard
Member
 
Registered: Nov 2015
Posts: 39

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by suicidaleggroll View Post
Can't help on copying the file over the network, since I don't know how you're doing it currently. Just change the destination of this copy to /dev/null instead.

As for the dd test, you could do something like:
Code:
dd if=/dev/zero of=/path/to/mount/point/bigfile.bin bs=1M count=10000
That'll write a 10 GB file to /path/to/mount/point/bigfile.bin (change this to an actual directory that will be located on the drive), time it, and tell you the average write speed when it's done.
Results:

10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 43.5942 s, 241 MB/s
 
Old 07-29-2016, 07:08 PM   #6
road hazard
Member
 
Registered: Nov 2015
Posts: 39

Original Poster
Rep: Reputation: Disabled
I found this page: https://sites.google.com/site/easyli...-write-actions and carried out the steps, but it seems this is more about reducing wear on your SSD and less about enabling write-back caching. Maybe I'm crazy and 60MB/s is normal for an SSD under Linux when copying large amounts of data over gigabit? Can you pull a multi-gig file over your network and see whether it's around 60MB/s like mine, or 100+, thus making me want to bang my head into the desk in frustration?

Last edited by road hazard; 07-29-2016 at 07:08 PM. Reason: Grammer
 
Old 07-29-2016, 11:39 PM   #7
road hazard
Member
 
Registered: Nov 2015
Posts: 39

Original Poster
Rep: Reputation: Disabled
Something really weird I just discovered.

To recap, my Windows 7 "server" is a Plex media server. It has an SSD.

I have another PC (we'll call it 'Gamer') with hot-swap drives and a SSD with Windows 7 and a SSD with Linux Mint that I can switch to boot into either OS on the same hardware.

While using the Windows SSD, and with write back caching enabled for my SSD, I can pull a 10+ gig file from my Plex server at 100+MB/s. If I turn off write back caching, it drops to 60MB/s.

While using the Linux SSD on this same box, I can pull a 10+ gig file from my Plex server at 60MB/s.

While in Linux, if I copy that same file over to the RAID 5 array, it copies over at 120+MB/s. If I copy that same file back to my SSD, it comes over at well over 150MB/s.

So while in Linux, my SSD can copy to/from the RAID 5 array (spindle drives) really quickly, but pulling from the SSD on the Plex server over the gigabit connection, the transfer rate is about 60MB/s. Is that normal? Why can this same hardware (booted from the Windows 7 SSD with write back caching enabled) pull the same file at over 100MB/s? So I guess the SSD performance in Linux is fine, as long as the source isn't a file coming in over the network card? Unless 60MB/s is normal for gigabit?

This makes no sense, especially since some friends of mine can pull huge files from their Windows-connected servers on a gigabit connection with write caching turned OFF on the target PC and still maintain a 100MB/s transfer rate.

If anyone else is reading this, can you copy a file from a gigabit-connected device to your Linux install on an SSD and tell me what the transfer rate is?

Thanks
 
Old 07-30-2016, 10:08 AM   #8
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,258

Rep: Reputation: 1947
You haven't told us what the network protocol is. Encryption overhead can easily drop the transfer speed to ~60 MB/s depending on hardware, the cipher used, etc.
 
Old 07-30-2016, 11:32 AM   #9
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 (pre-systemd)
Posts: 2,613

Rep: Reputation: 703
If you like living with no safety net you can mount the filesystem with the "nobarrier" option to operate like Windows.
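For ext4 that would be a mount option in /etc/fstab, something like the entry below (the UUID and mount point are placeholders for your own; be aware that with barriers off, a power loss mid-write can corrupt data the drive had only buffered):

```
# example /etc/fstab entry with write barriers disabled (ext4)
UUID=xxxx-xxxx  /  ext4  defaults,nobarrier  0  1
```

It can also be tried on a running system with "mount -o remount,nobarrier" on the mount point before committing it to fstab.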
 
Old 07-30-2016, 01:11 PM   #10
road hazard
Member
 
Registered: Nov 2015
Posts: 39

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by suicidaleggroll View Post
You haven't told us what the network protocol is. Encryption overhead can easily drop the transfer speed to ~60 MB/s depending on hardware, cypher used, etc.
Sorry, using Samba all the way around and TCP.
 
Old 07-30-2016, 01:41 PM   #11
road hazard
Member
 
Registered: Nov 2015
Posts: 39

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by smallpond View Post
If you like living with no safety net you can mount the filesystem with the "nobarrier" option to operate like Windows.
Adding that to fstab and rebooting made no difference. Still transferring from the server's SSD to my Linux SSD (over gigabit network) at around 60MB/s.
 
Old 07-30-2016, 11:46 PM   #12
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,258

Rep: Reputation: 1947
Quote:
Originally Posted by road hazard View Post
Sorry, using Samba all the way around and TCP.
Samba may be the cause; have you tried NFS (if possible)?
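If the server can export NFS at all (on Windows that needs the optional "Services for NFS" component or third-party software), a minimal read-only setup might look like this; the paths and addresses are made up for illustration:

```
# on an NFS-capable server, /etc/exports:
/srv/media  192.168.1.0/24(ro,async,no_subtree_check)

# on the Linux client, /etc/fstab:
server:/srv/media  /mnt/media  nfs  ro  0  0
```

Repeating the same large-file copy over NFS vs. Samba would show whether the protocol, rather than the SSD, is costing the ~40 MB/s.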
 
  

