Slackware on an SSD
Been a while since I posted in this forum...
Anyways, since I'm slowly (and finally) getting on my way toward a new computer, I have been researching SSDs to complement my new system. I was considering installing Slackware on an SSD and keeping a conventional hard drive just for general storage. So I am wondering: what are everyone's opinions of and experiences with Slackware on an SSD, and most importantly, what type of filesystem is appropriate for an SSD? |
See the benchmark:
http://www.phoronix.com/scan.php?pag...38_large&num=1 To me it seems JFS is the best fit for an SSD. One thing to keep in mind is that SSDs may need firmware updates, and not all companies provide Linux utilities (OCZ does). |
Hello:
I have a new Lenovo ThinkPad T520 with a 160GB SSD. I did a complete install of Slackware 13.37 64-bit directly from the DVD onto an ext4 partition. I find it faster, but I do not know if there is anything special that should be done when using an SSD. To be honest, at this moment Windows 7 Professional is still installed, and the SSD makes it extremely quick compared to XP on a regular hard disk. Bottom line: Slackware is slightly faster, but the big difference in speed is really evident in Windows. |
I use ext4 with a 240 GB OCZ Vertex 3 SSD. Plenty fast.
A few things that I've read about optimizing SSDs with Linux:
- Use the 'noop' I/O scheduler as opposed to cfq.
- The 'discard' (i.e. TRIM) mount option may hurt performance. Idle garbage collection is pretty good on the Vertex 3. You can also run fstrim about once a month to TRIM the drive.
If you're leaning towards an OCZ Vertex 3, make sure that you download the 2.15 firmware. With any other version you'll probably experience random lockups or erratic behavior from your system. I've read that Intel and Samsung SSDs are the most stable out of the box. http://www.zdnet.com/blog/perlow/gee...and-linux/9190 http://en.opensuse.org/SDB:SSD_disca...rim%29_support |
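Since the post above suggests running fstrim about once a month, a root crontab entry along these lines would automate it. This is a hedged sketch: the fstrim path and the schedule are assumptions, not from the thread.

```
# Hypothetical root crontab entry -- monthly TRIM job.
# At 03:00 on the 1st of each month, trim the root filesystem:
0 3 1 * * /usr/sbin/fstrim -v /
```

Running `fstrim -v /` once by hand first is a good way to confirm the filesystem and kernel support it before scheduling it.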
Why would you use the noop I/O scheduler? (I would never use it)
I would use deadline instead. Either way, you can benchmark them easily. You can switch them on a running system: http://blog.nexcess.net/2010/11/07/c...-io-scheduler/ |
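As the linked article shows, the active scheduler appears in brackets when you read the sysfs file, and you switch by writing a name back to it. The helper below is a hypothetical sketch that only parses the bracketed entry out of that output; the device names in the comments are examples.

```shell
#!/bin/sh
# Extract the active scheduler (the bracketed entry) from a string
# as printed by /sys/block/sdX/queue/scheduler, e.g. "noop deadline [cfq]".
active_scheduler() {
    printf '%s\n' "$1" | sed 's/.*\[\([^]]*\)\].*/\1/'
}

active_scheduler "noop deadline [cfq]"    # prints: cfq

# On a real system (as root) you would inspect and switch like this:
#   cat /sys/block/sda/queue/scheduler
#   echo deadline > /sys/block/sda/queue/scheduler
```

The change takes effect immediately and lasts until reboot, which makes it convenient for benchmarking schedulers against each other.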
Interesting so far; the results seem promising. I do like JFS as my default FS. However, the problem with most filesystems in Linux is that they come from the days of conventional drives. My biggest concern about SSDs is their limited number of write cycles, and using a journaled FS like ext3, JFS, XFS, Reiser, or even BTRFS (though still not considered stable) would seem to add to the number of write cycles. Sure, NAND flash is getting up there, but I would venture to guess that a conventional HD still has a much longer lifespan.
Now, if I am just using a conventional HD for general storage, then that obviously offsets the issue, but I'm still not sure what to use as an FS. Chances are I'll just stick with JFS, since I've never seen Slackware offer JFFS (the only actual flash-based FS in the kernel) as an option to install on. |
If you are concerned about limited writes, why not set it up so that everything that is mostly read-only is on the SSD, and the stuff you write often is stored on the HDD? For example, I would put /home and /tmp on the HDD for sure, as well as swap if you use it (I recommend zcache or zram instead).
|
I am not using an SSD yet, but I am looking closely, since I'm planning to upgrade my notebook. I have found this link about various optimizations etc. for Linux on an SSD. Take a look; it is very interesting reading:
http://www.ocztechnologyforum.com/fo...ighlight=linux Many suggestions from this reading (like using RAM for /tmp and moving the Firefox cache to /tmp, for example) can be used without an SSD and give you some performance benefits (if you have enough RAM, of course, but it's cheap today anyway). If I were buying one for a desktop, I would definitely install / (root) to the SSD (with ext4 optimized as suggested in the above link), while leaving /home on a regular disk and /tmp in RAM with tmpfs. The biggest disadvantage of an SSD is that it has a limited number of overwrites. So less writing means a longer life. |
Yeah, I would definitely put /home and /tmp on the conventional drive. Not sure if I would be able to make the entire SSD read-only though, since that would be tricky. I would still need to write when there are security updates, or if I just want to install a program or two, unless I can somehow manually place the appropriate directories on the conventional drive. Sounds like an interesting project too :p.
|
Member response
Hi,
I just purchased another refurbished Dell laptop, and from another vendor a Patriot Pyro SE SSD: MLC architecture, SATA III (Patriot PPSE60GS25SSDR Pyro SE Solid State Drive Data Sheet.pdf).
I installed 'Slackware64-current' last night and will finish tweaking the install sometime this evening. Nothing really big jumped out for the 'SSD': just setting the partitioning scheme, the filesystem of choice, and setting up 'fstab'. One noted problem was the sound & wireless setup, but that too was fixed. This is not my first 'SSD' drive. I have another Dell laptop with an Intel X25-V SATA 40GB 'SSD' with Slackware64 13.1 on it. Works great! If it ain't broke, don't fix it! Love those 'SSD's, and good portable external USB 500GB drives for data storage with quality enclosures. :) |
For optimizations, I keep the Firefox cache on /dev/shm, but I don't recommend doing this with /tmp, because you can easily run out of RAM. I recommend zcache, and I have found that this command also helps throughput: Code:
blockdev --setra 16384 /dev/sda |
The impact of writes on the lifespan isn't as harsh as most people think. I use an Intel SSD (X25-V, 40GB, ext4 mounted with the discard option) in my notebook, and besides putting /tmp in RAM I have not done any optimizations to reduce writes. And I am not concerned about that. c't, a popular German computer magazine, is currently running such a test. For several weeks now they have been writing non-compressible data (to make it hard for the SandForce controllers to compress the data) to several SSDs, with no failures so far. Most modern SSDs have a wearout or lifetime indicator that can be read out over SMART. Here is a little something to think about if you wonder whether you really should be so concerned:
If you take the worst result from that OCZ drive: if a typical Slackware install is about 6GB, you can do 4166 complete installs before something like that happens. If you look at the G.Skill drive, that would be more than 25K complete installs. So if you use the SSD for the OS and a mechanical disk for your data, there should be nothing to worry about. By the way, keep in mind to get a motherboard that is capable of SATA 3 (aka SATA 6G) to get the most out of the SSD. |
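The arithmetic above is easy to reproduce. The total-writes figure below is an assumption (roughly 25 TB, chosen because it matches the post's 4166-install result); the 6GB install size is from the post.

```shell
#!/bin/sh
# Hypothetical endurance arithmetic: total data written before failure
# divided by the size of one complete Slackware install.
endurance_gb=25000   # ~25 TB survived in the endurance test (assumed figure)
install_gb=6         # typical Slackware install size, per the post
echo $(( endurance_gb / install_gb ))    # prints: 4166
```

By the same reasoning, the >25K installs quoted for the G.Skill drive would correspond to roughly 150 TB of total writes.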
Install slackware on ssd
I have had Slackware on the 20 gig SSD of my Asus 901 Eee PC for over 2 years without a problem. It is very easy to install and everything works well. I do not remember exactly, but I think all drivers were included with the distro.
|
One more thing, /var should also be on the HDD.
|
/var, /home and /tmp on HDD
|
Member response
Hi,
An old reference that is still useful to take note of: http://cptl.org/wp/index.php/2010/03...ives-in-linux/ I like the Patriot forum & the Intel Support Community. edit: Onebuck's SSD participation thread links, with loads of reference links and discussion, are in post #24. HTH! |
I currently run Slackware on an SSD. I wouldn't recommend using anything other than ext4 as the filesystem, because it's stable and supports TRIM, which is very important for maintaining the drive's performance over time. I also tweaked a few things: the "noatime" mount option, setting the I/O scheduler to "noop" for that drive, using tmpfs for /tmp and compiling things in there, and keeping the Firefox cache on /tmp too (the cache could also be disabled if you have a fast Internet connection). I did not bother to use a HDD for /var, and I'm not sure it's really worth it. Not much gets written there, and when something is, like files downloaded by a package manager or a database, you're probably interested in the performance benefit anyway.
My two cents. |
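The mount-option tweaks mentioned above fit in a couple of fstab lines. This is a minimal sketch, assuming an ext4 root on the SSD; the device name and the tmpfs size are illustrative, not from the post.

```
# SSD root with access-time updates disabled
/dev/sda1   /      ext4    defaults,noatime                 0 1
# /tmp in RAM (size is an example; pick one that fits your RAM)
tmpfs       /tmp   tmpfs   defaults,nosuid,nodev,size=2G    0 0
```

With /tmp on tmpfs its contents vanish on reboot, which is usually fine for /tmp but worth keeping in mind if you compile large things there.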
Arch's wiki has some good info about optimizing for an SSD... though it sounds like most of it has been mentioned in this thread already:
link |
Member response
Hi,
Sorry about that, I forgot about the timeout. Here are some of the links: |
|
Member response
Hi,
Another point for 'swap': place 'echo 1 > /proc/sys/vm/swappiness' in '/etc/rc.d/rc.local' to limit swapping. The default swappiness in Slackware is '60'. Even with today's large memory footprints, I like to keep a swap. As memory gets >=8GB, consider locating high read/write operations in memory, or on a physical HDD if you prefer, at the cost of time. You can also improve performance by placing 'echo 50 > /proc/sys/vm/vfs_cache_pressure' in '/etc/rc.d/rc.local'. The Slackware default for '/proc/sys/vm/vfs_cache_pressure' is '100'. Note: also be sure to use 'tune2fs -o discard /dev/sdX' (where X = a,b,c...) if you do not use 'discard' in 'fstab'. If you have both a 'SSD' & a 'HDD' in the system, then you can use the 'noop' scheduler for the SSD, and this can be placed in '/etc/rc.d/rc.local': 'echo noop > /sys/block/sdX/queue/scheduler' (where 'X' is a,b,c,d,e...). The Slackware default is the [cfq] scheduler. You can verify this with 'cat /sys/block/sdX/queue/scheduler'; the scheduler in use will be shown in brackets, e.g. [cfq], and the same output lists the other available schedulers. If you are worried that udev may assign a different '/dev/' node to the drive(s) because of kernel updates, system upgrades, etc., you should take measures to assign 'noop' to the correct device and place this in '/etc/rc.d/rc.local'; Code:
This code snippet from ArchWiki; Code:
Note (information revised from the ArchWiki): you should only switch the scheduler to 'noop' for the 'SSD(s)' in the system. Keep the 'cfq' scheduler for every other physical 'HDD' in the system. Some say to use 'deadline', but for a 'SSD' device, which has no mechanical heads or spinning disks to introduce delay, there is no advantage for 'deadline' over 'noop', which is a 'FIFO' queue. Sometimes you may need to completely reset the 'SSD' cells to factory state. 'TRIM' can degrade performance over time on some 'SSD's, even ones that support native 'TRIM'. I suggest getting 'SSD' utilities from the drive manufacturer. Manufacturers usually provide utilities along with firmware upgrades, but not always.

Partition schemes for 'SSD's are always debated. Personally I think the longevity of the 'SSD' can be extended by placing active partitions like '/var' on a physical HDD rather than on the 'SSD'; '/tmp' can be treated the same way. That said, nothing says you cannot set up highly active partitions on a 'SSD' when the 'SSD' is optimized properly. 'SSD' longevity is increasing for newer devices; for me, a 'SSD' that lasts 5-8 years outlives the usage lifetime of my hardware. Everyone should consider the type of 'SSD' selected, be it synchronous 'SLC' ($$$) or asynchronous 'MLC' ($). Each has advantages and disadvantages. Some manufacturers keep costs down by using asynchronous 'MLC' with a known good controller (SandForce is widely used). Maximum performance can be had with synchronous 'SLC' and a known good SandForce controller, but you will pay for those advantages. Densities are getting better for 'MLC' at a much lower cost for consumer-grade 'SSD's.
So unless you have mission-critical performance needs that call for a SandForce synchronous 'SLC'-based 'SSD' at a much greater cost, my suggestion is to select a premium-grade SandForce asynchronous 'MLC'-based 'SSD' at a lower cost. I like Patriot 'SSD's while others prefer OCZ. I buy computer systems all the time as the need fits. I just purchased another refurbished Dell laptop at a great cost savings. My old Dell laptop (5 years old) is still functional (LQ machine) and will continue to be used, but for other duties. I placed a Patriot Pyro SE 60GB drive in the new Dell laptop and am still doing a lot of tweaking for this system with Slackware64 -current. Most of this 'SSD' information is based on the tweaking of the 'SSD's in the new & old Dell laptops. There is so much information out there on 'SSD' usage: some FUD, some factual and useful, from LQ, manufacturers & wikis. Note: be very careful with mixing old & new 'SSD' advice & information. I will revise this as I continue benchmarking and tweaking the new Dell laptop w/ Patriot Pyro SE SSD. I will also place revisions in So you want to be a Slacker! What do I do next? Buyer beware! But: "No man ever yet became great by imitation." -Samuel Johnson HTH! |
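The ArchWiki code snippet referenced above did not survive the forum export. A udev rule in the same spirit, which applies 'noop' only to non-rotational devices so mechanical disks keep their default scheduler, would look something like this (the filename and exact rule are an assumption based on the ArchWiki approach, not the poster's snippet):

```
# /etc/udev/rules.d/60-ioscheduler.rules (hypothetical)
# Non-rotational devices (SSDs) get noop; rotational disks are untouched.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
```

A rule like this sidesteps the /dev node reshuffling concern entirely, because it keys off the device's rotational attribute rather than its name.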
Member Response
Hi,
I really do not like to resurrect old threads, but: for some 'SSD's you can get some additional gain by setting up a write-back cache. Not all 'SSD's support this feature. There is no harm in trying it as root from the CLI; Code:
~# hdparm -W1 /dev/sdX   # where X = a,b,c,d... You can turn it off with; Code:
~# hdparm -W0 /dev/sdX   # where X = a,b,c,d...
|
Stick to HDDs. They're cheaper, they last longer, and they work better. |
Don't buy a hard drive. They are electro-mechanical, and the mechanical parts wear out. If the mechanical parts don't get you, faulty firmware will. They're doomed to fail. C'mon...
The naysayers fear that your drive will wear out in what seems like a matter of months. Hard drives aren't what they used to be either: just about any consumer hard drive you buy today comes with only a 1-year warranty. Go back to the mid-'90s, and many drives had a 3- or 5-year warranty. Are you telling me they're so reliable these days that you don't need a decent warranty? I don't think so. I have an SSD and a hard drive in my laptop, and I'm happy with the SSD. The prices are coming down, and each generation improves upon the previous. Just do your homework, as you would with any other purchase. |
Member Response
Hi,
Personal & professional preferences are what dictate hardware specifications for me. I still use mechanical hard drives, but find that the use of a 'SSD' enhances my hardware experience. As to the life comparisons between a 'SSD' & a 'HDD': my base hardware will reach end of life before a failure of either device. I replace the system long before a failure will occur on a properly configured system with a 'SSD'. To limit the use of a 'SSD' because of 'FUD' is just plain silly. Go ahead and design your system around the device you wish to use. I will use my 'SSD' along with a cheap large-storage 'HDD' to provide a good workable machine. Today's 'SSD' controller design, along with the use of 'MLC', will ensure a good experience over time for consumer-grade drives. After a few years (2-4) of positive experience I will be moving up to another platform with newer technology. :hattip:
i have a SSD...
read the section about SSDs in the Arch Linux wiki and in the end got a bit pissed that i have to do so many tweaks and read so much in order to just use the damn thing... eventually what i did was:
- use AHCI mode from the BIOS (don't even remember anymore why)
- use gdisk to partition the disk as GPT (so partitions are aligned properly, whatever that means)
- use ext4 to format the partitions (because some options had to be passed in fstab, see below)
- use the following options for e.g. the root partition in fstab (don't remember what they are for either): /dev/sda1 / ext4 defaults,noatime,discard 0 1
I do not know if everything I did was right; maybe I screwed something up... I have mounted /tmp/SBo into RAM, something that I always do. Yes, i know i should've mounted /home, /var and /tmp on a separate "conventional" hard disk, but since i don't have any, i just use these partitions on the SSD. And to be honest: I DON'T CARE! Sure it is fast, but not as fast as I imagined from reading some reviews. No magic: /usr/bin is displayed in Thunar within a second, and it takes a fraction of a second to start SeaMonkey. But the CPU is an AMD Phenom Black (4 cores), so i suppose it would've been fast even with a HDD. There is something weird though: when LILO boots Slackware, it is very slow while the "Loading Linux..............." completes. After that, the boot is very fast (~15 sec or less with the generic kernel) |
A Slackware user that is pissed by reading documentation? That is rather rare, I would think.
- AHCI should be activated for any modern disk, regardless of whether it is an SSD or not.
- You can align partitions with fdisk.
- ext4/btrfs are recommended for SSDs because they support TRIM, an important thing for SSDs.
It seems you have done everything right. I have been using SSDs since 2010 and have never had any problems; the SMART status of my oldest SSD (Intel X25-V 40GB) reports 0% wearout. I recommend SSDs to anyone. You fear losing data because of wear-out? Make a backup! I have been doing that ever since I lost data when a 6-month-old mechanical disk died. |
As an additional data point, I'm using a Raspberry Pi, which is a tiny development board with a slow ARM CPU that uses an SD card as its storage medium -- i.e. a *very* slow SSD.
I tried the conventional advice to use the 'noop' scheduler. What a DISASTER! Any background load, such as compiling, would kill interactive response for minutes at a time. The 'noop' and 'deadline' schedulers are equivalent if and only if your SSD performance is so fast, and utilisation is so low, that there is no queueing. But in any situation where your SSD is a bottleneck -- in other words, whenever actual scheduling is actually needed -- 'deadline' or 'cfq' will maintain responsiveness and 'noop' will be epic fail. 'noop' is the scheduler of choice only for workloads that are so utterly trivial that they don't ever need a scheduler. I'm guessing that, in practice, the 'noop' advocates are being saved from themselves by cleverness in their SSD controllers. |
I'm just trying to illustrate the larger point that people seem to refuse to take a second look at a technology because it may have been rougher around the edges a few years earlier. They dwell on the negatives of old and come to expect that those will be there for eternity. |
Add the "compact" option to /etc/lilo.conf, run lilo and enjoy! |
I've got two Intel SSDs in my workstation:
The / SSD: Device Model: INTEL SSDSA2M080G2GC Serial Number: CVPO0120057J080BGN Firmware Version: 2CV102HD User Capacity: 80,026,361,856 bytes The /home SSD: Device Model: INTEL SSDSA2CW300G3 Serial Number: CVPR112601L4300EGN Firmware Version: 4PC10302 User Capacity: 300,069,052,416 bytes The / SSD has been online for 8561 hours and the /home SSD for 6388 hours. I use my system quite heavily, with lots of compilation, a boatload of git commit/push/pull, and lots of copying to and from network drives. This is not a box that is idling 90% of the day. Currently the smartctl "233 Media_Wearout_Indicator" reports 99 for both SSDs.
I use ext4 with the discard option added to fstab. I can honestly say that buying those two SSDs has made a tremendous difference in my computing experience. They are A LOT faster than even the fastest HDDs. Probably one of the best hardware buys I've ever made. |
Member Response
Hi,
I have experienced 'HDD' failure, and that is no walk in the park. For me the 'MTBF' of a 'SSD' surpasses the lifetime of my general hardware. I replace the system long before a failure will occur on a properly configured system with a 'SSD'. I am currently replacing a laptop that will be used for a 2-3 year time frame. The 'SSD' will be a Patriot drive around 160GB-250GB, with a secondary USB 3.0 'HDD' at 500GB. Still shopping for the 'SSD', but I think I will stick with a 'MLC'-based unit for cost reasons. My laptop choices are narrowing, and a new selection will be made soon. |
Member Response
Hi,
'deadline' does provide good scheduling for a 'SSD' that could be used for DB management with potential overlay issues. I would like to see the data to support your statements about 'noop' and 'deadline'. If you make a system-wide scheduler change, then I can see some issues with a mix of 'SSD' & 'HDD' scheduling. If the system is configured with an independent scheduler for each device, then the data/benchmarks state otherwise.
|
Furthermore, it is super easy to recover data from a failed HDD. |
Btw, there are benchmarks done by the BFQ people (BFQ is a superior I/O scheduler for *rotational* disks) showing that for disks with NCQ (native command queueing) the I/O scheduler strategy matters less, to the point of NOOP being competitive with BFQ! http://algo.ing.unimo.it/people/paol...mark-suite.php And finally, my ceterum censeo: people wanting the ultimate in interactivity should not forget the CPU scheduler and look into BFS. |
The one primary storage application where an SSD can really help is the niche once occupied by WORM (write once, read many) optical drives. Reference data and media files that are often stored without being changed or overwritten for long periods, but are frequently read and accessed, can benefit from the extra speed. The limitation here for SSDs is their limited size and cost/GB.
|
Ok, either of you, do you care to suggest exactly *which* something "must be wrong" with my "system configuration"? Go on, be creative. |
- Contrary to most recommendations, I don't use the discard option in fstab. From what I have read, this can in some cases lead to serious performance decreases. I use fstrim from time to time to trim the partition. - I added this line to rc.local: Code:
echo noop > /sys/block/sda/queue/scheduler
- I changed fstab to mount /tmp to RAM; this I would recommend regardless of whether you have an SSD, if you have 4 or more GB of RAM.
- My /home partition is on the SSD, with symlinks to a (mechanical) HDD for directories like Downloads, Pictures, Documents and Videos.
Possibly important: I use i3 as my WM, so there are no things like indexing services running in the background (as when you run default KDE). Other than that, apart from some lightweight programs in place of the standard ones (Ranger, Newsbeuter, Claws Mail), my systems are pretty much standard. |
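The symlink layout described above can be scripted. This is a hypothetical sketch: the relocate helper and the /tmp/demo paths are invented for illustration, and the demonstration runs against a scratch directory standing in for /home and the HDD mount.

```shell
#!/bin/sh
# Move a directory to another filesystem and leave a symlink behind.
relocate() {    # usage: relocate <src_dir> <dest_parent>
    src=$1
    dest=$2/$(basename "$1")
    mkdir -p "$dest"
    # move any contents, then replace the original dir with a symlink
    if [ "$(ls -A "$src" 2>/dev/null)" ]; then
        mv "$src"/* "$dest"/
    fi
    rmdir "$src" && ln -s "$dest" "$src"
}

# Demonstration in a scratch area:
mkdir -p /tmp/demo/home/Downloads /tmp/demo/hdd
touch /tmp/demo/home/Downloads/file.iso
relocate /tmp/demo/home/Downloads /tmp/demo/hdd
ls -ld /tmp/demo/home/Downloads    # now a symlink into /tmp/demo/hdd
```

In real use you would point the destination at the HDD mount point (e.g. somewhere under a mounted /mnt/hdd) and run it once per bulky directory.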