Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
My PC has some hot-swap bays. I have two SanDisk SSDs: one has Windows loaded on it and the other Linux Mint.
When I boot my PC with the Windows SSD and copy a file from my server, over a gigabit connection, the speed is over 100MB/s the entire time. If I go into Device Manager and disable the write back caching policy for this drive, performance drops to about 60MB/s.
I shut down the PC, pop in my Linux SSD, boot up, and copy that same file from my server; performance is about 60MB/s. If I check with hdparm -W, it reports that the SSD's write caching is 'on', but the performance just isn't there. I wiped Linux from this drive, installed Windows, and ran the same test: with write caching enabled, 100+MB/s. What in Linux is slowing me down?
I used nmon to watch disk activity and it's all over the place. Writes to my SSD start at 90MB/s, jump to 200+, 300, then drop to zero for a split second, then 60MB/s, 90, 250, 400... pause, rinse, repeat until the transfer is done. Watching in performance monitor, the transfer rate stays around 60MB/s as far as data being received over the network. It's as if the SSD is choking, not keeping up with the incoming gigabit transfer, and can't write the data fast enough. Is there a Linux equivalent of that 'enable write back caching' option in Device Manager in Windows? I think this would solve my problem.
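For what it's worth, Linux does use write-back caching by default via the kernel page cache; a quick, read-only way to inspect the knobs that govern when cached writes are flushed (a sketch only; the right values depend on RAM and workload):

```shell
# Percentage of RAM allowed to hold dirty (not-yet-flushed) write data
# before writers are throttled:
cat /proc/sys/vm/dirty_ratio
# Threshold at which background flushing to disk starts:
cat /proc/sys/vm/dirty_background_ratio
```

The drive's own on-board write cache, which `hdparm -W` reports, is a separate layer from this kernel-side caching.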
BUT, funny thing is, I used to get 100+MB/s when copying to my Linux box up until a few weeks ago. As far as I know, nothing has changed.
Have you confirmed it's the SSD that's slowing down the transfer?
Try copying the file over the network to /dev/null
Also try using dd to dump from /dev/zero to a file on the SSD
If indeed you're seeing that transferring over the network to /dev/null is >100 MB/s, and using dd to dump a file on the SSD is ~60 MB/s, you can pursue SSD optimization, but I have my doubts that will be the case.
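A minimal sketch of that network-only test, assuming the server share is mounted locally (the /mnt/server path and filename are placeholders):

```shell
# Copying to /dev/null discards the data, so the local disk is taken out
# of the equation and only the network read speed is measured.
time cp /mnt/server/bigfile.mkv /dev/null
```

Divide the file size by the elapsed time to get MB/s.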
I used iperf on the server and in Windows on my PC (Win 7 SSD installed) and network utilization stayed at 99% the entire time. When I copy from the server to this machine with the Linux SSD installed, the server network utilization (in Task Manager) bounces around from 50-80% constantly.
Being a newb and all, can you give me the exact formatting of what I need to type into the terminal? (My SSD is at /dev/sda.) I'm going to boot back into Linux here in a few minutes and will poke around on Google to see if I can figure out how to carry out your test.
Can't help on copying the file over the network, since I don't know how you're doing it currently. Just change the destination of this copy to /dev/null instead.
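The dd command this reply refers to appears to have been lost in the thread copy; judging from the output posted below (10000 records at a 1 MiB block size, 10,485,760,000 bytes), it was most likely:

```shell
# Writes 10 GB of zeros to a file on the drive under test and reports
# the average write speed when done. The destination path is a placeholder.
dd if=/dev/zero of=/path/to/mount/point/bigfile.bin bs=1M count=10000
```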
That'll write a 10 GB file to /path/to/mount/point/bigfile.bin (change this to an actual directory that will be located on the drive), time it, and tell you the average write speed when it's done.
Results:
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 43.5942 s, 241 MB/s
I found this page: https://sites.google.com/site/easyli...-write-actions and carried out the steps, but it seems this is more about reducing wear on your SSD and less about enabling write-back caching. Maybe I'm crazy and 60MB/s is normal for an SSD under Linux when copying large amounts of data over gigabit? Can you pull some multi-gig file over your network and see if it's around 60MB/s like mine, or 100+, thus making me want to bang my head into the desk in frustration?
Last edited by road hazard; 07-29-2016 at 06:08 PM.
Reason: Grammar
Something really weird I just discovered.
To recap, my Windows 7 "server" is a Plex media server. It has an SSD.
I have another PC (we'll call it 'Gamer') with hot-swap drives, an SSD with Windows 7, and an SSD with Linux Mint, so I can boot into either OS on the same hardware.
While using the Windows SSD, and with write back caching enabled for my SSD, I can pull a 10+ gig file from my Plex server at 100+MB/s. If I turn off write back caching, it drops to 60MB/s.
While using the Linux SSD on this same box, I can pull a 10+ gig file from my Plex server at 60MB/s.
While in Linux, if I copy that same file over to the RAID 5 array, it copies over at 120+MB/s. If I copy that same file back to my SSD, it comes over at well over 150MB/s.
So while in Linux, my SSD can copy to/from the RAID 5 array (spindle drives) really quickly, but pulling from the SSD on the Plex server over the gigabit connection, the transfer rate is about 60MB/s. Is that normal? Why can this same hardware (booted from the Windows 7 SSD with write-back caching enabled) pull the same file at over 100MB/s? So I guess SSD performance in Linux is fine, so long as the source isn't a file coming in over the network card? Unless 60MB/s is normal for gigabit?
This makes no sense, especially since some friends of mine can pull huge files from their Windows servers over a gigabit connection with write caching turned OFF on the target PC and maintain a 100MB/s transfer rate.
If anyone else is reading this, can you copy a file from a gigabit-connected device to your Linux install on an SSD and tell me what the transfer rate is?
You haven't told us what the network protocol is. Encryption overhead can easily drop the transfer speed to ~60 MB/s depending on hardware, cipher used, etc.
If you like living with no safety net, you can mount the filesystem with the "nobarrier" option to operate like Windows.
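For reference, a sketch of what that might look like as an fstab entry; the UUID, mount point, and filesystem type here are placeholders, and note that nobarrier trades crash safety for speed (recent kernels have removed the option entirely):

```
# /etc/fstab (example only; replace UUID and mount point with your own)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,nobarrier  0  1
```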
Adding that to fstab and rebooting made no difference. Still transferring from the server's SSD to my Linux SSD (over gigabit network) at around 60MB/s.