Slackware: This forum is for the discussion of Slackware Linux.
The one primary storage application where an SSD can really help is the niche once occupied by WORM (write once, read many) optical drives. Reference data and media files that are stored for long periods without being changed or overwritten, but are frequently read, can benefit from the extra speed. The limitations here for SSDs are their small capacity and cost per GB.
Hi,
SD controllers and SSD controllers are different, so writes and reads will be handled differently.
Yes, which is why I wrote "I'm guessing that, in practice, the 'noop' advocates are being saved from themselves by cleverness in their SSD controllers". Conjecturally: coalescing requests, prioritising small random reads over big writes, or similar non-FIFO behaviour.
Quote:
Originally Posted by onebuck
'noop' is a 'FIFO'
My point exactly. On an SD card, when a couple of reads from one process are queued behind a minute's worth of writes from another, there will be a minute's delay before they are serviced. On an SSD, the delay would be a fraction of a second, but a FIFO is still a FIFO. I fail to see why my observation of a FIFO doing what a FIFO does is somehow evidence of this:
Quote:
Originally Posted by onebuck
Your system may not have been configured properly.
Quote:
Originally Posted by TobiSGD
Posting this while playing a YouTube video and compiling a kernel with -j10 on my Corsair Force 3 120GB with the noop scheduler, just to test this. No, I can't see any sluggishness. Something must be wrong with your system configuration.
There seems to be an echo in here.
Ok, either of you, do you care to suggest exactly *which* something "must be wrong" with my "system configuration"? Go on, be creative.
How should I know? I don't sit in front of your computer and I don't know what you have changed on your system. But I can tell you the changes I made (and those I deliberately didn't), starting from a clean -current install on the SSD, using ext4 for the / partition (with aligned partitions and AHCI enabled, of course):
- Contrary to most recommendations, I don't use the discard option in fstab. From what I have read, this can in some cases lead to serious performance decreases. Instead, I run fstrim from time to time to trim the partition.
- I added this line to rc.local:
Code:
echo noop > /sys/block/sda/queue/scheduler
Obviously, sda is the SSD.
- I changed fstab to mount /tmp in RAM; I would recommend this regardless of whether you have an SSD, as long as you have 4GB or more of RAM.
- My /home partition is on the SSD, with symlinks to a (mechanical) HDD for directories like Downloads, Pictures, Documents and Videos.
Possibly important: I use i3 as my WM, so there are no indexing services running in the background (as there are when you run default KDE).
Other than using some lightweight programs instead of the standard ones (Ranger, Newsbeuter, Claws Mail), my systems are pretty much standard.
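For reference, a rough sketch of the tweaks above; the tmpfs size, device names, and the HDD mount point are illustrative assumptions, not values taken from the post:

```shell
# /etc/fstab entry to mount /tmp in RAM (the 2G size is an assumed example):
#   tmpfs   /tmp   tmpfs   defaults,size=2G   0 0

# Trim free blocks occasionally instead of mounting with 'discard'
# (run as root, on a TRIM-capable SSD and filesystem):
fstrim -v /

# Keep bulky directories on a mechanical HDD via symlinks from the SSD /home
# (/mnt/hdd is an assumed mount point):
ln -s /mnt/hdd/Downloads /home/user/Downloads
```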
Well, I don't want to drag LQ into a pissing contest, and all the things you list are valid, but you'll just have to take my word for it that I know everything on your list just as well as you do. I'm not so dumb as to run an indexing service on a 700MHz armv6 with 192MB of memory, thanks all the same.
SSDs are made to have work put on them; they can handle that really fast. They are not made to store large chunks of data (at least not yet). So what is your point?
My point will be "Please read the thread and get an idea of what we are talking about before you reply".
Quote:
Originally Posted by TobiSGD
Sure, let me see how you recover data from a HDD with a headcrash. By the way, recovering data is for people who are either too lazy or too ignorant to make proper backups.
My point will be "Please read the thread and get an idea of what we are talking about before you reply".
I have read the thread, and its point, beginning from the OP, is to gather the experiences of Slackware users with SSDs used together with HDDs in the same system. This evolved into a discussion of the lifetime of SSDs. So I still have to ask: what is your point with that post?
Furthermore, it is super easy to recover data from a failed HDD.
That was actually meant as: hey, if it crashes it is pretty easy to give it to a professional data recovery company, and it will only cost a few hundred to a thousand dollars.
Quote:
Originally Posted by 55020
Well, I don't want to drag LQ into a pissing contest, and all the things you list are valid, but you'll just have to take my word for it that I know everything on your list just as well as you do. I'm not so dumb as to run an indexing service on a 700MHz armv6 with 192MB of memory, thanks all the same.
Maybe there are some other, pretty simple factors that make your system slow with the noop scheduler.
Some things that come to my mind, but I can't say if anything applies to your system:
- a slow SD card controller.
- SD cards are orders of magnitude slower than SSDs, especially in write speed, but also in read speed.
- As far as I know, SD cards/controllers don't support the TRIM feature, which can have serious impact on write speed.
I have never heard from an SSD user with the same problems that you have with the SD card (which doesn't mean that those problems can't exist), so I would simply assume that the problem is caused by your system.
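For what it's worth, whether a given drive actually advertises TRIM support can be checked directly; /dev/sda is an assumed example device:

```shell
# Query the drive's identify data and look for the TRIM capability flag
# (run as root; a TRIM-capable SSD reports "Data Set Management TRIM supported"):
hdparm -I /dev/sda | grep -i trim

# lsblk can report per-device discard (TRIM) capabilities as well; all-zero
# DISC-GRAN/DISC-MAX columns mean the device does not support discard:
lsblk --discard /dev/sda
```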
Anyway, since I'm slowly (and finally) getting around to buying a new computer, I have been researching SSDs to supplement my new system. I was considering installing Slackware on an SSD and leaving a conventional hard drive just for general storage.
So I am wondering: what are everyone's opinions/experiences with Slackware on an SSD, and most importantly, what type of filesystem is appropriate for an SSD?
I want to chime in with my (probably different) point of view, and at the same time give a plug (hope that's OK) for my SourceForge project called "Joe's Boot Disk (JBD)" which you can find at:
I'm currently using two SSDs and I have Slackware installed on both of them. Perhaps it was unwise, but I just installed it in the normal way giving no consideration for the type of drive. I don't remember the file system types, but I think I used ext3 or ext4, the same as what I would normally have used.
What I did differently, however, is that the SSDs are each in a Sabrent USB enclosure. I made JBDs (see my SourceForge project) to boot them. The resulting combination has been working well for me. Most people are not familiar with this idea, so I'll say it again more slowly. Slackware (in each) is on the SSD in the USB enclosure. The enclosure connects to the Linux box (or CPU, tower, or whatever you like to call it) with a USB cable. The enclosure is very small as it is hardly bigger than the drive, and it needs no power beyond what USB provides. The Linux boxes are configured to boot first from CDROM. The JBD (a CDROM) does the job of booting Slackware on the SSD automatically when the Linux box is powered on.
I said I have two such SSDs in use. One serves as my firewall and DHCP server (instead of a standard, off-the-shelf router) for my home network; the Linux box it plugs into therefore has two network interface cards. It has been in use for about one year. The other SSD serves as an educational computer in the special ed class where I work. The students mostly use SeaMonkey, KTuberling, and MPlayer, and it has been in use for about two years. Both give quick, reliable performance and are a pleasure to use. Also, being external to the Linux box, they can easily be moved (and I have moved them) from one Linux box to another.
In summary, the benefits of an SSD for me are (1) low power requirement, (2) small form factor, (3) speed of operation, and (4) works well with my JBDs. Although I've been conscious of their limitations, I've not treated them any differently from a hard drive.
-Joe
Last edited by jrosevear; 07-19-2012 at 11:13 PM.
Reason: Punctuation.
Quote:
Maybe there are some other, pretty simple factors that make your system slow with the noop scheduler.
Some things that come to my mind, but I can't say if anything applies to your system:
- a slow SD card controller.
- SD cards are orders of magnitude slower than SSDs, especially in write speed, but also in read speed.
- As far as I know, SD cards/controllers don't support the TRIM feature, which can have a serious impact on write speed.
I have never heard from an SSD user with the same problems that you have with the SD card (which doesn't mean that those problems can't exist), so I would simply assume that the problem is caused by your system.
But that was exactly my point to start with. By running this experiment on a setup that is maybe 200 times slower than a typical SSD, and that does *not* have any of the fancy features that alleviate the noop scheduler's shortcomings, you can see clearly in isolation what effect the noop scheduler has. And that effect is not good. I suggest that the noop scheduler can and does cause these problems with typical SSD setups, but people don't care, because it's all happening 200 times faster.
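That 200-times argument can be illustrated with a toy FIFO queue model; the service times below are assumed round numbers for the sake of arithmetic, not measurements of any real device:

```shell
# Assumed figures: a minute's worth of writes sitting in a FIFO queue,
# 20 ms per write on a slow SD card, 0.1 ms (100 us) per write on an SSD.
queued_writes=3000
sd_write_ms=20
ssd_write_us=100

# Under a pure FIFO (noop), a read submitted last waits for every queued write:
sd_read_wait_s=$(( queued_writes * sd_write_ms / 1000 ))
ssd_read_wait_ms=$(( queued_writes * ssd_write_us / 1000 ))

echo "read stuck behind the queue on the SD card: ${sd_read_wait_s} s"
echo "read stuck behind the queue on the SSD:     ${ssd_read_wait_ms} ms"
```

The read is stuck behind the same queue in both cases; only the roughly 200-times-faster service time hides the FIFO's behaviour on the SSD.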
All I wanted to do was to make people question the conventional advice about the noop scheduler and to make people think critically about the noop scheduler itself. Apparently I failed: probably because the conventional advice has become a matter of faith. Alas.
Edit: I'm worried that this thread is starting to look anti-TobiSGD. No! I really admire Tobi's work at LQ, it's outstanding, invaluable and irreplaceable.
Quote:
I have read the thread, and its point, beginning from the OP, is to gather the experiences of Slackware users with SSDs used together with HDDs in the same system. This evolved into a discussion of the lifetime of SSDs. So I still have to ask: what is your point with that post?
I think my point is clear enough from this post: http://www.linuxquestions.org/questi...ml#post4732705
If you still believe "storage reliability = rated lifetime" then I have nothing to say. Please feel free to put your eggs into one basket.
Quote:
Originally Posted by 55020
All I wanted to do was to make people question the conventional advice about the noop scheduler and to make people think critically about the noop scheduler itself. Apparently I failed: probably because the conventional advice has become a matter of faith. Alas.
More a matter of experience. But anyway, I have never really noticed differences between the schedulers, so maybe you are right, but the differences in performance are so small that you can't notice them even under the heaviest load.
Quote:
Originally Posted by guanx
I think my point is clear enough from this post: http://www.linuxquestions.org/questi...ml#post4732705
If you still believe "storage reliability = rated lifetime" then I have nothing to say. Please feel free to put your eggs into one basket.
I still don't get it. This thread is still about using SSDs in conjunction with HDDs. Also, no one said that using an SSD means that you don't have to do your backups. So please elaborate on how using an SSD is putting "your eggs into one basket".
Quote:
Originally Posted by 55020
But that was exactly my point to start with. By running this experiment on a setup that is maybe 200 times slower than a typical SSD, and that does *not* have any of the fancy features that alleviate the noop scheduler's shortcomings, you can see clearly in isolation what effect the noop scheduler has. And that effect is not good. I suggest that the noop scheduler can and does cause these problems with typical SSD setups, but people don't care, because it's all happening 200 times faster.
You are still comparing two different storage technologies. Show me the data or information to support your claims. As I said before, an SD controller manages writes and reads differently than an SSD controller does. How do you define noop's shortcomings? Not just a verbal claim, please; support your argument.
Quote:
Originally Posted by 55020
All I wanted to do was to make people question the conventional advice about the noop scheduler and to make people think critically about the noop scheduler itself. Apparently I failed: probably because the conventional advice has become a matter of faith. Alas.
Not a matter of faith but of fact. You will find gains when using 'noop' on an SSD for general overall system usage. 'deadline' should be used for indexed/overlay usage and bottlenecks of that type. If you as a user are not sure, then use the default [cfq] scheduler.
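For anyone wanting to experiment with the choices above, the scheduler can be inspected and switched per device at runtime; sda is an assumed example device:

```shell
# Show the available schedulers; the active one appears in brackets,
# e.g. "noop deadline [cfq]":
cat /sys/block/sda/queue/scheduler

# Switch at runtime (as root); the change is immediate but does not
# survive a reboot:
echo deadline > /sys/block/sda/queue/scheduler

# To make it stick, repeat the echo in /etc/rc.d/rc.local (as shown earlier
# in the thread), or pass elevator=deadline on the kernel command line,
# which sets the default for all block devices.
```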
Quote:
Originally Posted by 55020
Edit: I'm worried that this thread is starting to look anti-TobiSGD. No! I really admire Tobi's work at LQ, it's outstanding, invaluable and irreplaceable.
I believe TobiSGD can hold his own. Debates are healthy, with good, challenging, factual points for exchange. Hearsay and FUD have no place when discussing something of this sort. Opinions are fine, but support them with details.
Personally, I have been reading docs, benchmarks and anything available to make positive choice(s) for the scheduler to use. For one laptop, the 'noop' scheduler is the best overall choice for its SSD; along with proper configuration, 'noop' provides the best fit there. This machine uses a Patriot Pyro SSD, which supports write-back cache. That is another point for performance gains, but not all SSD controllers support write-back. Users should dig into the SSD manufacturer's data to ensure that maximum performance can be gained by tweaks for their system. Ask the technical support people to define or explain any of your queries/questions.
One other thing: be sure to use relevant information; do not mix old SSD configuration techniques with newer SSDs, since this can be a problem. Newer controllers provide better control techniques than older versions.
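Whether a particular drive's write-back cache is actually enabled can be checked with hdparm; /dev/sda is an assumed example device:

```shell
# Report the drive's write-caching setting (run as root); a drive with
# write-back caching enabled reports "write-caching = 1 (on)":
hdparm -W /dev/sda

# It can also be toggled explicitly:
#   hdparm -W1 /dev/sda   # enable write-back caching
#   hdparm -W0 /dev/sda   # disable it (safer on power loss, slower writes)
```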