I don't know if Linux has any impact on this. I got a new SSD: the high-speed Intel X25-E in 32GB. I've been benchmarking it at various read block sizes. Smaller block sizes perform poorly, as generally expected, and larger block sizes perform better. One peculiar thing that shows up is that going from a block size of 16k to 32k produces a slight, but consistently detected, performance drop. Going from 256k to 512k also levels off, while 1024k rises some. So it seems odd powers of 2 have some kind of effect somewhere. Any ideas? Beyond 4M (tested previously up to 64M) it has leveled out.
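For reference, here is roughly the kind of sweep I've been running (a sketch, not my exact script; it assumes the drive is /dev/sdg as in the listings further down, and uses iflag=direct to keep the page cache out of the measurement):
Code:
#!/bin/bash
# Read-benchmark sketch: read the first 512 MiB of the device at each
# power-of-2 block size from 4k (2^12) to 4M (2^22), bypassing the
# page cache with O_DIRECT, and let dd report the throughput.
dev=/dev/sdg                            # assumption: the SSD under test
for shift in $(seq 12 22); do
    bs=$((1 << shift))
    count=$(( (512 * 1024 * 1024) / bs ))
    echo "bs=$bs"
    dd if=$dev of=/dev/null bs=$bs count=$count iflag=direct 2>&1 | tail -1
done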
The first thing that strikes me is that the drive will be well worn before you put any data onto it.
An SSD is a form of RAM, which must have an internal organisation. That is, there is a transparent mapping from head/track/sector to physical addresses in flash. It stands to reason that there would be an optimum transfer size. Further, at close to 220 MB/s I would expect you to be fairly close to motherboard speed limits, so a whole lot of unpredictable things could happen there.
It's SLC, so it should last a lot longer than MLC. And this is read testing, which is not supposed to involve the flash erases that cause the wear.
The motherboard controller could be the culprit. It's a SuperMicro X8STE. I moved the drive over to my desktop and got speeds of 262 to 274 MB/sec. And my desktop has the same SuperMicro X8STE motherboard. The server has Ubuntu server 10.04.2 while the desktop has Ubuntu desktop 10.10 (both amd64 versions). The server has a Xeon with ECC RAM and the desktop has a Core i7 with non-ECC RAM (yes, both can work in socket 1366 with the X58 chipset). Maybe it's the ECC that is slowing down the transfers?
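One thing worth ruling out is the negotiated SATA link speed on each board: a 3.0 Gbps link tops out around 300 MB/s of raw bandwidth, which is not far above the 262 to 274 MB/sec I got on the desktop. A quick check (a sketch; the kernel's ata numbering will differ between machines):
Code:
# show what speed each SATA link negotiated at boot,
# e.g. "ata3: SATA link up 3.0 Gbps ..."
dmesg | grep -i 'SATA link up'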
The server also has a 9650SE RAID card, but the SSD was connected directly to a motherboard SATA port, not the RAID card. The RAID was mostly idle (though the OS resides on the RAID, so there would be some activity). One of the purposes of examining the SSD is to put the OS on there instead of on the RAID (but swap space would stay on the RAID, or be eliminated).
My testing so far has been on the whole device; it has not been partitioned yet. GPT in this case has the disadvantage of chopping a whole megabyte off the end (as well as the beginning) when one is keeping whole-megabyte alignments (which I generally recommend even for spinning-platter drives, for better cache performance among other things).
I'd only be losing 33 sectors at the end if I were disregarding any alignment. This device has exactly 32000000000 bytes, so there's an exact 1M chunk at the end, and the 33 sectors cut into it. Maybe they need to make the secondary table optional in GPT (based on my pattern of using MBR over the past 16 years, I have zero use for the secondary table). That it is mandatory is a disadvantage of GPT (IMHO). If it were optional, then it would be a clear advantage.
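For when I do get around to partitioning it, something along these lines would keep whole-megabyte alignment under GPT (a sketch, assuming GNU parted; the start at 1MiB and the end pulled back by 1MiB both land on aligned boundaries, which is exactly where that lost megabyte at the end comes from):
Code:
# sketch: GPT label plus a single partition aligned on 1 MiB boundaries;
# the -- keeps parted from parsing -1MiB as an option
parted --align optimal /dev/sdg -- mklabel gpt mkpart primary 1MiB -1MiB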
Code:
lorentz/root /root 147# fdisk -l /dev/sdg
Disk /dev/sdg: 32.0 GB, 32000000000 bytes
255 heads, 63 sectors/track, 3890 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdg doesn't contain a valid partition table
lorentz/root /root 148# gdisk -l /dev/sdg
GPT fdisk (gdisk) version 0.5.1
Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present
Creating new GPT entries.
Disk /dev/sdg: 62500000 sectors, 29.8 GiB
Disk identifier (GUID): EF2EBE12-8FF6-0BA1-F087-3515DAD67007
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 62499966
Total free space is 62499933 sectors (29.8 GiB)
Number Start (sector) End (sector) Size Code Name
lorentz/root /root 149#
Quote:
Originally Posted by onebuck
I use the Intel X25-V SATA SSD 40GB with good results.
Very satisfied for the $ spent.
That's an MLC model. How does it show up in fdisk/gdisk?
While the "Using dd" section is also testing reads of buffer-cached data, ordinarily, when I am only interested in the drive itself, I use the option iflag=direct (for read testing) or oflag=direct (for write testing) to bypass as much buffering as I can (I also use these at times for other reasons).
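Concretely, that looks like this (a sketch; the device name is just illustrative, and the write line destroys whatever is on the device):
Code:
# read test straight off the device, bypassing the page cache
dd if=/dev/sdg of=/dev/null bs=1M count=1024 iflag=direct
# write test -- WARNING: overwrites the device's contents
dd if=/dev/zero of=/dev/sdg bs=1M count=1024 oflag=direct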
BTW, I've often gotten inconsistent results from hdparm for benchmarking.
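The spread shows up just by repeating the run (a sketch; hdparm -t times buffered device reads, so back-to-back results often wander):
Code:
# run the read-timing test a few times and compare the numbers
for i in 1 2 3; do hdparm -t /dev/sdg; done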
The big thing I'm asking about is the non-uniform rise in performance with increased block size (e.g. why it goes back down at 32k and stays level at 512k). Basically, why do I not see an increase in performance when doubling the block size, but do when quadrupling it? Maybe I'm hitting or missing certain sizes that tend to be optimal or non-optimal?