LinuxQuestions.org
Linux - Hardware This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?

Old 06-30-2012, 05:55 PM   #1
glorsplitz
Senior Member
 
Registered: Dec 2002
Distribution: slackware!
Posts: 1,308

Rep: Reputation: 368
your experience with ssd


Hi, I'm doing a new computer build and thinking about an SSD for the system: Slackware64, latest stable.

I've read up and have seen what to do about system tweaks like TRIM.
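For reference, the TRIM setup mentioned here usually boils down to one of two approaches: periodic trimming with fstrim, or the continuous discard mount option. A minimal sketch (the device name /dev/sda1 and an ext4 root are assumptions; adjust for your layout):

```shell
# Option 1: periodic TRIM via fstrim, e.g. from a weekly cron job.
# -v prints how many bytes were trimmed.
fstrim -v /

# Option 2: continuous TRIM via the discard mount option in /etc/fstab
# (assumed device and filesystem; noatime is a common companion tweak):
#   /dev/sda1  /  ext4  defaults,noatime,discard  1 1
```

Periodic fstrim is often preferred over discard, since continuous trimming can add latency to deletes on some drives.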

What I would like to know, if you are actively using an SSD: what size is it, about what did you pay, how long has it been installed, have you done any system tweaks, which distro/kernel, how is the system performance, is it what you thought it would be, and would you suggest anything else?

Thank you
 
Old 06-30-2012, 07:38 PM   #2
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
I use an Intel X25-V 40GB SATA 2 SSD in my laptop with Slackware 13.37 (will go back to -current when the AMD legacy driver is released). I use ext4 on it with the usual tweaks (proper partition alignment, TRIM from time to time using fstrim, the noop scheduler); it has worked fine for about 2.5 years in that machine, and the price was about 120€ at the time.
I also use a Corsair Force 3 120GB SATA 3 in my main machine with a Vista/Slackware64 -current dual boot, also ext4 with the same optimizations. This piece of hardware is about 6 months old and works like a charm.
 
1 member found this post helpful.
Old 07-01-2012, 07:43 AM   #3
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,925
Blog Entries: 44

Rep: Reputation: 3159
Member Response

Hi,

Look at: #26 in Slackware on an SSD

Loads of reference links within. I too like the 'noop' scheduler (which is a FIFO) for an SSD. In #26 you will find methods to set things up.
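For anyone landing here later, a minimal sketch of selecting the noop scheduler (the device name sda is an assumption, and writing the sysfs file requires root):

```shell
# Show the available schedulers; the active one is shown in brackets,
# e.g. "noop deadline [cfq]".
cat /sys/block/sda/queue/scheduler

# Select noop for this device until the next reboot:
echo noop > /sys/block/sda/queue/scheduler

# To make it the default for all devices at boot, add a kernel parameter
# to your bootloader configuration (lilo.conf / grub):
#   elevator=noop
```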
HTH!
 
1 member found this post helpful.
Old 07-01-2012, 09:38 AM   #4
glorsplitz
Senior Member
 
Registered: Dec 2002
Distribution: slackware!
Posts: 1,308

Original Poster
Rep: Reputation: 368
Thank you

Thank you responders.

That's what I was looking for.
 
Old 07-01-2012, 10:34 AM   #5
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,925
Blog Entries: 44

Rep: Reputation: 3159
Member Response

Hi,
Quote:
Originally Posted by glorsplitz View Post
Thank you responders.

That's what I was looking for.
You're welcome!

Please remember that LQ search functions will get you additional information on 'SSD' or for that matter any Linux related topic.

You can also look at the suggested threads by scrolling down within the thread review.
 
1 member found this post helpful.
Old 07-03-2012, 03:21 AM   #6
GreggT
Member
 
Registered: Jan 2012
Distribution: Debian, CentOS, Fedora, TinyCore
Posts: 45

Rep: Reputation: Disabled
Newegg has been having some super deals on SSDs; I've seen some 120GB models go for $70 multiple times over the past month. These are different SSDs, and there are lots of deals all the time.

It looks like most consumer MLC SSDs can take 5GB of writes per day for around 5 years before dying, so that's what you want to be aware of.. it shouldn't be an issue at all, but this is what testing & Intel have said.

Keep an eye out for those deals though; they'll save you a good chunk.
 
Old 07-03-2012, 09:00 PM   #7
glorsplitz
Senior Member
 
Registered: Dec 2002
Distribution: slackware!
Posts: 1,308

Original Poster
Rep: Reputation: 368
Yeah, the deals at Newegg are what got me curious.

They have a Corsair at $119, $90 after mail-in rebate, and it seems to have pretty good feedback.
 
Old 07-03-2012, 10:54 PM   #8
GreggT
Member
 
Registered: Jan 2012
Distribution: Debian, CentOS, Fedora, TinyCore
Posts: 45

Rep: Reputation: Disabled
Yeah, I'm running the 90GB version (had it for half a year or so) and it runs very well. W7 boots up in a few seconds (not counting the BIOS), and Linux on it doesn't even seem to boot, it just pops up with the login screen (lol).. and of course the BIOS takes longer than anything.
 
Old 07-04-2012, 04:20 AM   #9
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,925
Blog Entries: 44

Rep: Reputation: 3159
Member Response

Hi,
Quote:
Originally Posted by GreggT View Post
<snip>

most of the consumer MLC SSD's it looks like can take 5GB of writes per day for around 5 years before it dies, so that's what you want to be aware of.. which it shouldn't be an issue at all but this is what testing & intel has said.

keep an eye out for those deals though, save you a good chunk
It would be nice if you shared where you got the 5GB-per-day write figure and the 5-year time frame, because Intel provides better numbers than what you posted:
Quote:
From http://techreport.com/articles.x/15433

If you don't want to crunch through the math, Intel estimates that the 80GB X25-M will last for five years with "much greater than" 100GB of write-erase per day. That's a relatively long time for much more data than most folks are likely to write or erase on a daily basis.

Actual drive lifespans aside, Intel rates the X25-M's Mean Time Between Failures (MTBF) at 1.2 million hours. That's competitive with the MTBF rating of other MLC-based flash drives and equivalent to common MTBF ratings for enterprise-class mechanical hard drives.
Old article, but still comparative data. Newer MLC SSDs have a typical MTBF of 1,500,000 hours. Wear leveling & write amplification have greatly improved with the newer MLC controllers.
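As a rough sanity check on what those MTBF figures mean in practice, an MTBF in hours can be converted to an approximate annualized failure rate (AFR ≈ hours-per-year / MTBF, valid when the MTBF far exceeds one year). A sketch using the two figures above:

```shell
#!/bin/sh
# Approximate AFR from MTBF: the fraction of drives expected to fail per year.
# 8760 = hours in a (non-leap) year; MTBF figures are from the posts above.
awk 'BEGIN { printf "X25-M (1.2M h MTBF): AFR ~ %.2f%%\n", 100*8760/1200000 }'
awk 'BEGIN { printf "newer MLC (1.5M h MTBF): AFR ~ %.2f%%\n", 100*8760/1500000 }'
# -> roughly 0.73% and 0.58% per year, respectively
```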

Another good example;
Quote:
from 'A Wear Leveling Refresher: How Long Will My SSD Last?'

If I never install another application and just go about my business, my drive has 203.4GB of space to spread out those 7GB of writes per day. That means in roughly 29 days my SSD, if it wear levels perfectly, I will have written to every single available flash block on my drive. Tack on another 7 days if the drive is smart enough to move my static data around to wear level even more properly. So we’re at approximately 36 days before I exhaust one out of my ~10,000 write cycles. Multiply that out and it would take 360,000 days of using my machine the way I have been for the past two weeks for all of my NAND to wear out; once again, assuming perfect wear leveling. That’s 986 years. Your NAND flash cells will actually lose their charge well before that time comes, in about 10 years.
This assumes a perfectly wear leveled drive, but as you can already guess - that’s not exactly possible.
'A Wear Leveling Refresher: How Long Will My SSD Last?' article is a good practical read from AnandTech.
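The arithmetic in that quote can be reproduced directly; the inputs (203.4GB of free space, 7GB of writes per day, ~10,000 P/E cycles, plus 7 extra days for static-data shuffling) are all taken from the quoted article:

```shell
#!/bin/sh
# Reproduce the AnandTech wear-leveling estimate; awk handles the float math.
days=$(awk 'BEGIN { printf "%d", int(203.4/7) + 7 }')   # ~36 days per P/E cycle
total=$((days * 10000))                                  # days to exhaust all cycles
years=$(awk -v d="$total" 'BEGIN { printf "%d", d/365 }')
echo "$days days/cycle, $total total days, ~$years years"
# -> 36 days/cycle, 360000 total days, ~986 years
```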

HTH!
 
Old 07-04-2012, 04:26 AM   #10
GreggT
Member
 
Registered: Jan 2012
Distribution: Debian, CentOS, Fedora, TinyCore
Posts: 45

Rep: Reputation: Disabled
From reading forums & watching videos.. somewhere; it might have been at a meeting too.

Also, the faster MLCs have a worse lifespan than the slower ones from years ago.. most of them aren't even using new controllers.

http://www.storagesearch.com/ssdmyths-endurance.html

That's just the first link from google.

Also take into consideration that 5GB of writes a day is like 50 times the average person, whose writes are going to be just the browser cache and that's about it.

Also their NAND technology is impressive:
http://en.wikipedia.org/wiki/NAND_gate
http://en.wikipedia.org/wiki/NAND_logic
http://en.wikipedia.org/wiki/NAND_flash#NAND_flash

And I'm glad to see it works so well, because not having to replace drives every <1 year while not using SLCs will be great.

Last edited by GreggT; 07-04-2012 at 04:34 AM.
 
Old 07-04-2012, 05:56 AM   #11
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,925
Blog Entries: 44

Rep: Reputation: 3159
Member Response

Hi,

That is one of the reasons to use some older data relative to MLC, given both the density differences today and the new controllers, to have a fair comparison. Wear leveling & write amplification are the two major variables that everyone should pay attention to.

People who use an MLC-based SSD in an enterprise setting and then complain when wear or failure occurs should not be maintaining, let alone building, a system around SSD MLC technology. Instead, a competent admin would choose EFD 'SSD SLC' technology for the enterprise and not worry about the cost factor. Right tool for the job!

For NAND flash memory;
Quote:
from http://www.anandtech.com/show/2928

The first 25nm product is an 8GB (64Gbit) 2-bits-per-cell MLC NAND flash. A single 8GB die built on IMFT’s 25nm process has a die size of 167mm2. Immersion lithography is apparently necessary to produce these 25nm NAND devices, but the extent is unclear. This is technically Intel’s first device that requires immersion lithography to manufacture.
The above excerpt describes Intel's MLC technology. The whole article is very informative.

Intel & Micron now use 20nm by way of IM Flash Technologies (IMFT) to double the density for newer drives, which is one of the reasons everyone is seeing SSDs resold/dumped at great savings. Newer SSDs that use IMFT will have greater density with improved controllers.

Too much 'FUD' out there!

BTW, nice links to the logic gates, which are not what we are talking about. Plus, the flash link is not valid for what we are speaking about. NAND flash technology has moved to 2- or 3-bit cells for use within an SSD.

HTH!
 
Old 07-04-2012, 01:41 PM   #12
GreggT
Member
 
Registered: Jan 2012
Distribution: Debian, CentOS, Fedora, TinyCore
Posts: 45

Rep: Reputation: Disabled
You directly posted about an SSD with NAND, so sorry for talking about it *rolleyes*
 
Old 07-05-2012, 09:13 PM   #13
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,925
Blog Entries: 44

Rep: Reputation: 3159
Member Response

Hi,

Quote:
Originally Posted by GreggT View Post
You directly posted an about an SSD with NAND, so sorry to talk about it *rolleyes*
We are talking about 2-3 bit NAND flash memory, not a Boolean NAND logic gate. You can implement a NAND gate using a DIP or transistor(s). Sure, the Boolean function is a good example; your logic gate examples are for the Boolean NAND, and yes, you can build a memory using logic gates.

We can use the logic tree to show the function of a NAND, and therefore the state of the memory cell we are using.

HTH!
 
Old 07-07-2012, 03:20 PM   #14
angel115
Member
 
Registered: Jul 2005
Location: France / Ireland
Distribution: Debian mainly, and Ubuntu
Posts: 542

Rep: Reputation: 79
If you are planning to use an SSD drive with Linux, I would recommend you use the Btrfs filesystem instead of ext2, 3 or 4.

https://www.youtube.com/watch?v=hxWuaozpe2I
 
Old 07-07-2012, 03:31 PM   #15
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Btrfs still lacks a working fsck, so I would still consider it not ready for production systems.
 
  

