LinuxQuestions.org
Old 07-06-2013, 10:16 AM   #1
sysbox
Member
 
Registered: Jul 2005
Posts: 117

Rep: Reputation: 15
SSD Write Performance on Linux


I have a 512 GB Samsung 840 Pro SSD in a Linux machine (CentOS 6.4, 64-bit) with an i7 CPU. The SSD is connected to a 6 Gb/sec SATA connector on the system board, and the entire drive holds one large 480 GB ext4 partition. Tom's Hardware reports this SSD as reaching about 480 MB/sec for sequential writes. However, when I fill the disk with repeated calls to this dd command:
Quote:
dd count=1048576 bs=1024 if=/dev/zero of=file.nnn
I only see about 240 MB/sec. If I run fstrim on the entire partition and rerun the test, I still see only 240 MB/sec. If I delete the files, do NOT run fstrim, and rerun the test, I again see only 240 MB/sec. I tried other block sizes as well, but I can't get the write speed to budge from 240 MB/sec.

So it seems that either fstrim isn't working for me, this isn't the correct way to benchmark sequential writes, the SSD or fstrim isn't configured correctly, or Linux doesn't handle SSDs properly. Does anyone know why I'm not seeing faster write speeds, and what I can do to get them?
 
Old 07-06-2013, 01:10 PM   #2
propofol
Member
 
Registered: Nov 2007
Location: Seattle
Distribution: Debian Wheezy & Jessie; Ubuntu
Posts: 331

Rep: Reputation: 59
Try:
Code:
dd if=/dev/zero of=file.nnn bs=1024 count=1048576 conv=fdatasync
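(A scaled-down sketch of what the flag changes; the 64 MB size and the mktemp output file are illustrative, not from the thread. Without conv=fdatasync, dd's reported rate mostly measures the page cache; with it, dd flushes the data to the device before printing the summary.)

```shell
# Scaled-down comparison (64 MB instead of the original 1 GB).
out=$(mktemp)

# Without fdatasync: the reported rate mostly measures the RAM page cache.
dd if=/dev/zero of="$out" bs=1M count=64 2>&1 | tail -n1

# With conv=fdatasync: dd calls fdatasync() before exiting, so the reported
# rate includes flushing the data to the device.
dd if=/dev/zero of="$out" bs=1M count=64 conv=fdatasync 2>&1 | tail -n1

rm -f "$out"
```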
 
Old 07-06-2013, 06:35 PM   #3
sysbox
Member
 
Registered: Jul 2005
Posts: 117

Original Poster
Rep: Reputation: 15
I'm not sure what that was supposed to do, but I tried 'conv=fdatasync' and write speeds dropped to about 190 MB/sec.
 
Old 07-06-2013, 11:26 PM   #4
propofol
Member
 
Registered: Nov 2007
Location: Seattle
Distribution: Debian Wheezy & Jessie; Ubuntu
Posts: 331

Rep: Reputation: 59
Quote:
Originally Posted by sysbox View Post
I'm not sure what that was supposed to do, but I tried 'conv=fdatasync' and write speeds were about 190 megabytes/second.
From the dd man page:
"fdatasync: physically write output file data before finishing"
In other words, it should show the actual data throughput without the effect of caching in RAM. You might find this interesting:
https://wiki.archlinux.org/index.php/SSD_Benchmarking

Regards,
Stefan
 
1 members found this post helpful.
Old 07-07-2013, 12:02 AM   #5
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,600

Rep: Reputation: 1241
I would also suggest using a buffer larger than 1 KB. Try 4k/8k/16k.

A 1k block size is rather bad for I/O scheduling and DMA (lots of per-request overhead).
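(A sketch of such a block-size sweep; the 64 MB total and mktemp output file are illustrative, scaled down from the thread's 1 GB test.)

```shell
#!/bin/sh
# Sweep dd block sizes while keeping the total bytes written constant;
# conv=fdatasync keeps the reported rates honest (flushed to the device).
total=$((64 * 1024 * 1024))
out=$(mktemp)
for bs in 1024 4096 8192 16384 1048576; do
    count=$((total / bs))
    printf '%8d bytes/block: ' "$bs"
    dd if=/dev/zero of="$out" bs="$bs" count="$count" conv=fdatasync 2>&1 | tail -n1
done
rm -f "$out"
```

Larger blocks mean fewer system calls per byte written, so the rate usually climbs until the drive itself becomes the bottleneck.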
 
Old 07-07-2013, 04:31 AM   #6
AwesomeMachine
Senior Member
 
Registered: Jan 2005
Location: USA and Italy
Distribution: Debian testing/sid; OpenSuSE; Fedora
Posts: 1,829

Rep: Reputation: 257
SSD speed figures can be wildly contrived. For instance, the hdparm -T command might read 2,000 MB/s. But that's from the drive buffer cache, so it's not really fair. Making a 50 GB file of zeroes with dd is better, but dd uses four buffers. I believe the source data is read into the first buffer, a second buffer verifies the data against the original read buffer, after which the data is written to the target and then read back into the third buffer, which is compared to the first buffer for verification. If all that checks out, dd moves on to the next block of data.

To say the least, that is going to be slow. It's not like an office application writing to the disk. From what I know, about 250 MB/s is a realistic rate for a drive like yours.
 
Old 07-07-2013, 06:15 AM   #7
mddnix
Member
 
Registered: Mar 2013
Location: Bengaluru, India
Distribution: Redhat, Arch, Ubuntu
Posts: 498

Rep: Reputation: 137
What speed do you get when using hdparm?
Code:
# hdparm -t --direct /dev/sdxX
Also try these links:
How to maximise SSD performance with Linux
How to Tweak Your SSD in Ubuntu for Better Performance


After Edit: You can also check which interface speed it is using (1.5/3/6 Gb/s). For example, my SATA HDD shows 1.5 Gb/s and 3 Gb/s; in your case it should also show 6 Gb/s. If not, check the SATA cable. Can't say for sure that this works the same for an SSD, though.
Code:
# hdparm -I /dev/sdc | grep speed
	   *	Gen1 signaling speed (1.5Gb/s)
	   *	Gen2 signaling speed (3.0Gb/s)
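(Editor's sketch: the hdparm -I lines above list the speeds the drive supports; the speed the link actually negotiated shows up in the kernel log, and on newer kernels in sysfs. The sysfs path may not exist on older kernels such as CentOS 6's.)

```shell
# Supported speeds (listed above) vs. the speed the link actually negotiated:
dmesg 2>/dev/null | grep -i 'SATA link up' \
    || echo 'no SATA link line found (need dmesg access on a SATA system)'

# On newer kernels the negotiated speed per ATA link is also exposed in sysfs:
cat /sys/class/ata_link/link*/sata_spd 2>/dev/null || true
```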

Last edited by mddnix; 07-07-2013 at 06:43 AM.
 
1 members found this post helpful.
Old 07-07-2013, 08:20 AM   #9
sysbox
Member
 
Registered: Jul 2005
Posts: 117

Original Poster
Rep: Reputation: 15
I tried the following, and hdparm reported read speeds around 307 MB/s:
Quote:
> hdparm -t --direct /dev/sda
I also tried this:
Quote:
> hdparm -I /dev/sda | grep speed
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
This is interesting. The SSD is connected to the SATA 6.0 Gb/s connector on the system board, so why does hdparm report 3.0 Gb/s? You said to check the cable. The SSD did not come with a SATA cable, so I used whatever I had handy. I see on Newegg that some SATA cables are listed as 6.0 Gb/s. Are they really different from other cables?
 
Old 07-07-2013, 09:07 AM   #10
mddnix
Member
 
Registered: Mar 2013
Location: Bengaluru, India
Distribution: Redhat, Arch, Ubuntu
Posts: 498

Rep: Reputation: 137
Quote:
Originally Posted by sysbox View Post
You said to check the cable. The SSD did not come with a SATA cable, so I used whatever I had handy. I see on newegg that some SATA Cables are listed as 6.0 Gb/s. Are they really different then other cables?
Actually I was suggesting that you check whether the board has separate SATA2 and SATA3 interfaces, and whether you have plugged into the correct SATA3 one. But a quick Google search showed there should be no difference (between 3 and 6 Gb/s) unless the SATA cable is a cheap one. SATA 3Gb/s vs. 6Gb/s Cable Performance

Anyway, these are some of the areas you should check for problems:
  • Motherboard interface (SATA III)
  • BIOS (update the BIOS if possible)
  • Write cache (some SSDs come with a cache; if so, disabling the write cache greatly improves write speed)
  • SATA cable
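(For the write-cache item above, hdparm can query and toggle the drive's volatile write cache; /dev/sda here is an assumption standing in for whatever device node the SSD has.)

```shell
# Query the drive's write-cache state; prints "write-caching = 1 (on)" or "= 0 (off)".
hdparm -W /dev/sda

# Toggle it to see the effect on the benchmark, then restore the original setting:
hdparm -W0 /dev/sda   # disable volatile write cache
hdparm -W1 /dev/sda   # re-enable it
```

These need root, and the setting usually reverts on the next power cycle.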
 
Old 07-07-2013, 09:24 AM   #11
sysbox
Member
 
Registered: Jul 2005
Posts: 117

Original Poster
Rep: Reputation: 15
I'm using the SATA III connection on the system board. I checked for BIOS updates; there are some, but none of them address SATA III. The SATA cable shouldn't matter, according to the article you referenced. I'm not sure what else to check, but Tom's Hardware clearly achieved about twice the write (and read) bandwidth on this exact drive under Windows that I'm seeing under Linux. So something is definitely wrong.

The 3.0 Gb/s output from hdparm must be a clue.
 
Old 07-07-2013, 10:37 AM   #12
cascade9
Senior Member
 
Registered: Mar 2011
Location: Brisneyland
Distribution: Debian, aptosid
Posts: 3,718

Rep: Reputation: 906
Quote:
Originally Posted by sysbox View Post
The SSD is connected to the SATA 6.0 Gb/s connector on the system board.
Not all SATA controllers are created equal. Many of the add-on SATA controllers found on motherboards use junk Marvell (etc.) chips, which often run very badly even under Windows; under Linux they run far worse. Even controllers labeled 'SATA6' or 'SATAIII' can be limited to SATAII speeds; I've seen reports of that even with Windows.

At least some of them do not support TRIM either.

What motherboard are you using?

Last edited by cascade9; 07-07-2013 at 10:42 AM.
 
Old 07-07-2013, 11:06 AM   #13
Shadow_7
Senior Member
 
Registered: Feb 2003
Distribution: debian
Posts: 2,321
Blog Entries: 1

Rep: Reputation: 447
You might want to write a short script that takes a timestamp before and after the dd run and calculates the write speed yourself; dd could just be reporting an incorrect value. Also bear in mind that writing files through a filesystem has more overhead than writing data to the raw disk.
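(A rough sketch of that idea; the 64 MB size and mktemp file are illustrative, and the whole-second timestamps make the rate coarse on fast runs.)

```shell
#!/bin/sh
# Time the write ourselves instead of trusting dd's own summary line.
out=$(mktemp)
bytes=$((64 * 1024 * 1024))
start=$(date +%s)
dd if=/dev/zero of="$out" bs=1M count=64 conv=fdatasync 2>/dev/null
end=$(date +%s)
elapsed=$((end - start))
[ "$elapsed" -gt 0 ] || elapsed=1   # guard against divide-by-zero on sub-second runs
echo "wrote $((bytes / 1024 / 1024)) MB in ${elapsed}s: $((bytes / elapsed / 1024 / 1024)) MB/s"
rm -f "$out"
```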
 
Old 07-07-2013, 11:22 AM   #14
mddnix
Member
 
Registered: Mar 2013
Location: Bengaluru, India
Distribution: Redhat, Arch, Ubuntu
Posts: 498

Rep: Reputation: 137
Try this; one user reported that changing to AHCI improved the speed:

Quote:
BIOS > On-Chip SATA Type > AHCI. (Previously only booted on 'Native IDE')
From:
[SOLVED] Samsung 840 Pro-- very slow in new build. Why?
 
Old 07-07-2013, 11:30 AM   #15
sysbox
Member
 
Registered: Jul 2005
Posts: 117

Original Poster
Rep: Reputation: 15
Quote:
What motherboard are you using?
I'm using an Asus Sabertooth X58. It has a Marvell SATA III controller, and the boot messages list that controller as 6 Gb/sec.

Quote:
You might want to write a short script that does the dd with a timestamp at the start and timestamp at the end and calculate the write speed yourself.
Yes, I did that originally. That is where the 240 MB/sec numbers come from.

Quote:
BIOS > On-Chip SATA Type > AHCI. (Previously only booted on 'Native IDE')
I did something similar: in the BIOS, I changed the Marvell controller from IDE to AHCI. It didn't make any difference in the speed tests.
 
Old 07-08-2013, 02:50 AM   #16
mddnix
Member
 
Registered: Mar 2013
Location: Bengaluru, India
Distribution: Redhat, Arch, Ubuntu
Posts: 498

Rep: Reputation: 137Reputation: 137

Take a look at these links:

Solid-State Disk Deployment Guidelines
SSD performance tips for RHEL6 and Fedora
 
  

