Does Your Primary Linux Desktop Have An HDD or SSD?
View Poll Results: Does Your Primary Linux Desktop Have An HDD or SSD?
I have one 240 GB SSD in my first machine and five 500 GB HDDs in RAID 0 (yes, five) in my second PC with Linux.
The RAID 0 machine reads and writes faster than the PC with the SSD.
More disk capacity, more speed, cheaper, and more physical space. Haha
But I'm very happy with both PCs.
For big sequential read/writes I could see the HDDs being faster than the SSD, but they still won't hold a candle to the SSD for seek time and random I/O.
Reliability won't even be comparable with 5 HDDs in RAID 0, I wouldn't expect that to last very long at all before you lose the entire array.
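The sequential-vs-random gap is easy to measure directly with fio. A hedged sketch (the test file path is hypothetical; adjust it to a mount point on the array or SSD you want to test):

```shell
# Sequential 1M reads: a big RAID 0 of HDDs can genuinely win here.
fio --name=seqread --filename=/mnt/raid/fio-testfile --size=1G \
    --rw=read --bs=1M --direct=1 --runtime=30 --time_based

# Random 4K reads at some queue depth: this is where seek time bites
# and an SSD typically pulls far ahead of any spinning array.
fio --name=randread --filename=/mnt/raid/fio-testfile --size=1G \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
    --runtime=30 --time_based
```

Comparing the two IOPS figures per device usually makes the "won't hold a candle" point without any argument.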
I voted SSD, but in fact my primary machine boots Fedora on a hybrid architecture.
The actual boot is on a 0.5 TB OCZ Vertex 4 SSD, but the LVM also includes a RAID 0 system in a Mediasonic box connected over USB 3 through a powered USB 3 hub.
Using two WD Red 2 TB HDDs, this gives me an LVM space of about 4.5 TB without the high cost of going all-SSD, which would have added at least $4000 CAD to the build.
Backup is done on two NAS boxes, one RAID 5 and the other RAID 0, all connected via a NETGEAR R6300 at gigabit speed.
This has some caveats when I have to reboot, but these are not the end of the world.
Just as a reference, my home server that runs ssh, ftp, http, nfs, and samba servers writes around 1.6 GB/day to the SSD that houses the OS, the rest of the data goes onto the platters. Even though it's only a 40 GB drive, it shouldn't hit its write limit until somewhere around the year 2078.
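The arithmetic behind an estimate like that is simple. A sketch of the naive wear model; the 1,000 P/E cycle rating and the write-amplification factor here are assumptions, not this drive's actual spec:

```python
# Naive SSD endurance estimate: total rated write volume divided by
# daily writes. The P/E cycle count is an assumption; check the
# drive's datasheet for the real rating.
def years_until_worn_out(capacity_gb, writes_per_day_gb,
                         pe_cycles=1000, write_amplification=1.0):
    """Years until the drive's rated write volume is exhausted."""
    total_writes_gb = capacity_gb * pe_cycles / write_amplification
    days = total_writes_gb / writes_per_day_gb
    return days / 365.0

# 40 GB drive, 1.6 GB written per day, assumed 1,000 P/E cycles:
print(round(years_until_worn_out(40, 1.6)))  # roughly 68 years
```

With those assumptions a 40 GB drive at 1.6 GB/day lasts on the order of decades, which is the same ballpark as the year-2078 figure above.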
Since folks asked, going back through my (paper) logs:
It looks like this was January 2011. The disk was the previous OCZ generation (so a Vertex 2).
The OS was Debian; I recall it being wheezy, in order to get the TRIM extensions ... but I tried lots of others.
The original box was a fanless mini-ITX; I also bought a DreamPlug and used an external enclosure, with the same result.
The OS would install fine and run like a dream, very fast; then I'd shut down (but leave power applied) and it would fail to boot.
One such message was "bad inode in journal (block 8)". Sometimes, if I just left it alone, an 'ls -l' would show files which, when you tried to open them, would be missing. I spent a YEAR on this, buying new hardware and trying lots of OS versions. On reading up at the time, the theory I came up with was:
+ SSDs are in effect quite complex computers in their own right
+ they attempt to "wear level" by remapping chips to addresses ... e.g. move data from chip #1 to chip #2, then change the mapping so #1 becomes #2 and vice versa
+ they can only do this remapping when the SSD is idle
+ shutting down the CPU, but leaving the motherboard (and hence the SSD) powered, gives the SSD a chance to "tidy up"
+ there were bugs in the SSD firmware
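The remapping idea in those bullets can be sketched as a logical-to-physical block table that the controller shuffles while idle. This is a toy model, not how any particular controller actually works:

```python
# Toy wear-leveling model: the controller keeps a logical->physical map
# plus per-physical-block erase counts, and when idle it swaps a
# heavily-worn block with a lightly-worn one.
class ToySSD:
    def __init__(self, nblocks):
        self.mapping = list(range(nblocks))   # logical -> physical
        self.erase_counts = [0] * nblocks     # wear per physical block
        self.data = [None] * nblocks          # contents by physical block

    def write(self, logical, value):
        phys = self.mapping[logical]
        self.erase_counts[phys] += 1
        self.data[phys] = value

    def read(self, logical):
        return self.data[self.mapping[logical]]

    def idle_tidy(self):
        """Swap the most- and least-worn physical blocks (idle-time only)."""
        hot = self.erase_counts.index(max(self.erase_counts))
        cold = self.erase_counts.index(min(self.erase_counts))
        if hot == cold:
            return
        self.data[hot], self.data[cold] = self.data[cold], self.data[hot]
        # Update the logical map so reads still find their data.
        for logical, phys in enumerate(self.mapping):
            if phys == hot:
                self.mapping[logical] = cold
            elif phys == cold:
                self.mapping[logical] = hot
```

Losing power (or hitting a firmware bug) halfway through `idle_tidy`, with data moved but the map only half-updated, is exactly the kind of scrambled-blocks failure described in this post.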
So I could run all the disk tests I liked: speed tests, r/w tests ... all ran fine and fast (SMART said the disk was perfect).
Then leave it overnight (powered) and come back to a hopeless scramble of disk blocks ... to the point where fsck(8) was clearly just dealing with random blocks and failing.
Put an HDD in the same position and all worked fine.
Bought a much cheaper SanDisk SSD, and that also worked fine. Got the OCZ replaced with a Vertex 3; so far that is fine ... however, I've not trusted either with my "always on" box, which is now a DreamPlug with an SD card (SLC, of course!)
So my current hierarchy is:
1: Registers: very, very fast; data only safe for nanoseconds
2: RAM: very fast; good for whole minutes of storage
3: SSD: fast; can store data for several days, but will lose it all with no warning
4: HDD: slow; can store data for several years; will generally fail slowly with warning, but will lose it all in the end
5: Optical media: very slow; robust (impacts); lasts many years but will lose all data eventually (may be recoverable)
6: Magnetic tape: usually the reader fails after several years; the data is safe but unreadable (1600 bpi 1/2" tape, anybody?)
7: Paper: very, very slow; lasts many years; slowly degrades; can burn up suddenly
8: Stone tablet: very, very, very slow ... can last centuries; usually recoverable
9: Large stone construction: very low data rate ("look what a great king I am!"); longest lifespan
10: Gold disc bolted to the side of a spacecraft fired beyond the reach of Sol going nova ... the jury is still out on this one.
I use HDDs all the way, for now though. I'll always use smaller drives for the OS and store the main data on larger mechanical drives, purely because mechanical drives have a cheaper $-to-GB ratio. However, the day SSDs become cheaper per GB/TB than mechanical drives, I will swap (whilst keeping the old drives as backup).
I currently use 2x 2 TB HDDs for data storage plus two old 150 GB drives pulled from old laptops, one for Arch and one for Windows 7 and games. I will upgrade to a 256 GB mSATA SSD as soon as it becomes affordable to me, replacing the two old laptop drives. My motherboard (Zotac Z77-ITX WiFi) can hold 4 SATA drives total + 1 mSATA on the motherboard. I'm very keen to ditch the two slow, low-capacity OS drives I have now in favor of one similarly sized SSD and 4x 2 TB HDDs.
Now that I'm on the topic, my two SATA 2, 5400 rpm laptop drives are brutally slow. Does anyone know of a good technique to cache reads/writes in RAM to ease the burden on these? I'm one of those people who spent most of their money on everything but the HDDs in their rig; I never gave much thought to storage. RAM is the one thing I have in idle/wasted abundance. Does anyone remember Flashfire for the Acer Aspire One? Looking forward to your ideas, guys.
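One angle on this: the kernel's page cache already uses spare RAM for read caching, and its writeback behaviour can be tuned so slow drives absorb writes later and in bigger batches. A sketch of the relevant sysctls; the values are purely illustrative, not recommendations, and anything still dirty in RAM is lost on a crash:

```shell
# Let dirty pages accumulate in RAM longer before forcing writeback.
# Illustrative values only; tune for your own RAM size and workload.
sysctl -w vm.dirty_background_ratio=20    # start background writeback at 20% of RAM
sysctl -w vm.dirty_ratio=60               # only block writers past 60% of RAM
sysctl -w vm.dirty_expire_centisecs=6000  # let dirty pages sit up to 60 s
```

Put the same `key = value` lines in /etc/sysctl.conf (or a file under /etc/sysctl.d/) to make them persistent.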
I am using a laptop as intended (portability). The laptop has three external USB drives in a simple letter desk carrier, an internal SSD in one bay, and an HDD in the second bay. The laptop sits on a LapDesk and is moved from a portable desk to my lap. I actively use this laptop for my personal needs, LQ, and other client work. Using a laptop does not mean a user cannot design their own environment. Personally, I could use a desktop (sometimes I do), but I prefer sitting in my La-Z-Boy while working and being comfortable.
Not everyone who responds here at LQ works in an enterprise. I have repaired several machines that were abused by their users: anything from coffee or liquid spills to dropping or banging the laptop around. Sure, newer laptop designs do cover some environmental abuse, but things still happen to those. You should see some of the case dings I have come across. No excuse here!
Laptops are not the only machines that get abuse.
I was responding specifically to the guy who was saying laptop HDDs only last 1-2 years in a business/enterprise; if that were the case, there would be a lot of businesses with issues.
How do you handle all the weight of the laptop, drives, portable desk, etc.? That sounds like much more than my lap could handle.
I haven't had a desktop, computer desk, or even a mouse in over a decade. I've always run portables, on my lap, on the couch, etc. I make sure to buy smaller, lighter, business-grade machines so I don't need lap desks, cooling fans, and so on. I also don't split my data between multiple drives; it's all in one place, with multiple backups.
Reliability won't even be comparable with 5 HDDs in RAID 0, I wouldn't expect that to last very long at all before you lose the entire array.
That's true, but I perform an incremental backup every 2 days and a full one every 7 days. I do this even with plain disks. It is a good habit.
You're right about random access, but 5 HDDs in RAID 0 surpass the speed of an SSD for throughput. At least, every speed-test program proves it; the ones I've used, of course.
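An incremental-plus-full rotation like that can be sketched with rsync's `--link-dest`, which hard-links unchanged files against the previous run so incrementals cost almost nothing. The paths here are hypothetical:

```shell
# Hypothetical paths: back up /home into dated snapshot directories.
# Unchanged files are hard-linked against the previous snapshot, so
# each run looks like a full backup but only stores what changed.
SRC=/home
DEST=/mnt/backup
TODAY=$(date +%F)

rsync -a --delete --link-dest="$DEST/latest" "$SRC/" "$DEST/$TODAY/"
ln -sfn "$DEST/$TODAY" "$DEST/latest"
```

Run it from cron every couple of days; deleting old dated directories is safe because each snapshot stands alone.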
Last edited by scanray; 10-16-2013 at 07:07 AM.
Reason: I forgot something
This is an interesting benchmark on how little one needs for an installation that appears to be, in terms of working data, little more than a series of servers.
My case is a bit different, in that I need to keep a series of working files and use the NAS box as a backup, and sometimes as a data collector/mirror.
Then, when my motherboard (which really is a bit too old; read $$$) decides to crash, I don't lose data, thanks to the backup.
Down the road, I will build another whole machine, when the cost fits comfortably in my budget.
I don't know if many have done custom kernel compiles of late, but my recent one, based on the distro's kernel /boot/config, came to over 10 GB for the source tree with objects once compiled. Only 3 GB if I did a make localmodconfig.
The number of writes is a lot more than you think if you actually use your computer: browser cache, auto-save, logging, media editing, source compiling, updates. So many writes, all the time. Only 3,000 cycles? That kind of makes me cringe. One full write a day for slightly more than 8 years? If you edit media, maybe create a DVD from content, less than a year. And that assumes the estimate wasn't pessimistic. I've done at least that on my current HDD with no signs (yet) of pending doom.
I do that all the time.
Code:
>>> du -sh linux-3.11.5
1.6G linux-3.11.5
This one is with Slackware's "huge" configuration, directly after compilation. I have about 3 TB written to this SSD now and it still reports 100% lifetime. My oldest one, a 40 GB Intel SSD from their cheaper value series, about 3 years old with roughly the same amount of writes, reports 97% lifetime; so even the cheaper ones, not from the latest generation, have pretty good longevity.
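Lifetime figures like these come out of the drive's SMART attributes. A hedged one-liner with smartctl (the device path is an example, and the attribute names vary by vendor: Intel exposes Media_Wearout_Indicator, Samsung Wear_Leveling_Count, many drives Total_LBAs_Written):

```shell
# Dump SMART attributes and pick out the wear-related ones.
# Device path and attribute names differ between drives and vendors.
smartctl -A /dev/sda | grep -iE 'wear|lifetime|total_lbas_written|media_wearout'
```

Multiplying Total_LBAs_Written by the logical sector size gives the cumulative bytes written that the lifetime percentage is based on.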
I need the three things SSDs still suck at: reliability (especially in the longevity aspect), lots of space, and low cost.
And SSDs are only cheap if you compare the technology against itself. Saying they're cheap is like saying Microsoft makes a good OS because Win 7 is better than Vista. Windows still sucks, because Vista is a low bar to set; it's a bar set by the lone jumper rather than by the competition. If you have to exclude the competition in order to make a particular metric look good...
It's much the same with reliability and space: comparisons are frequently made against previous iterations of the technology rather than against other technologies. I'm glad you added a few inches to your 3-foot high jump, but your competition is jumping 8 to 10 feet.
If we look at speed only, well, HDDs suck on cache misses. Otherwise the performance is much the same. We've been at it long enough that minimizing cache misses is a refined art: add more memory, read less (e.g. compression), be smarter about discards (use a decent OS). But yeah, cache misses still happen often enough that the speed of the device matters. More important than my preferred three metrics, though? You get one or the other, and that's a lot to trade off for speed. At least from my perspective.
+1
SSDs are crap for reliability and real work; at this point they are nothing more than glorified SD cards.