LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Slackware (https://www.linuxquestions.org/questions/slackware-14/)
-   -   Slackware on an SSD (https://www.linuxquestions.org/questions/slackware-14/slackware-on-an-ssd-925789/)

NyteOwl 07-19-2012 02:38 PM

The one primarily-storage application where an SSD can really help is the role once filled by WORM (write once, read many) optical drives. Reference data and media files that are stored without being changed or overwritten for long periods, but are frequently read and accessed, can benefit from the extra speed. The limitation here for SSDs is their limited size and cost per GB.

55020 07-19-2012 06:02 PM

Quote:

Originally Posted by onebuck (Post 4732611)
Hi,
'SD' controllers and 'SSD' controllers are different, thus reads and writes will be handled differently.

Yes, which is why I wrote "I'm guessing that, in practice, the 'noop' advocates are being saved from themselves by cleverness in their SSD controllers". Conjecturally, by coalescing requests, prioritising small random reads over big writes, or similar non-FIFO behaviour.

Quote:

Originally Posted by onebuck (Post 4732611)
'noop' is a 'FIFO'

My point exactly. On an SD, when a couple of reads from one process are queued behind a minute's worth of writes from another, there will be a minute's delay before they are serviced. On an SSD, the delay would be a fraction of a second, but a FIFO is still a FIFO. I fail to see why my observation of a FIFO doing what a FIFO does is somehow evidence of this:

Quote:

Originally Posted by onebuck (Post 4732611)
Your system may not have been configured properly.

Quote:

Originally Posted by TobiSGD (Post 4732634)
Posting this while playing a YouTube video and compiling a kernel with -j10 on my Corsair Force 3 120GB, noop scheduler, just to test this. No, I can't see any sluggishness. Something must be wrong with your system configuration.

There seems to be an echo in here.

Ok, either of you, do you care to suggest exactly *which* something "must be wrong" with my "system configuration"? Go on, be creative.

TobiSGD 07-19-2012 06:29 PM

Quote:

Originally Posted by 55020 (Post 4733185)
Ok, either of you, do you care to suggest exactly *which* something "must be wrong" with my "system configuration"? Go on, be creative.

How should I know? I don't sit in front of your computer and I don't know what you have changed on your system. But I can tell you the changes I made, and those I didn't, starting from a clean -current install on the SSD, using ext4 for the /-partition (of course with aligned partitions and AHCI enabled):

- Contrary to most recommendations, I don't use the discard option in fstab. From what I have read, this can in some cases lead to serious performance degradation. Instead, I run fstrim from time to time to trim the partition.

- I added this line to rc.local:
Code:

echo noop > /sys/block/sda/queue/scheduler
Obviously, sda is the SSD.

- I changed fstab to mount /tmp in RAM; I would recommend this whether or not you have an SSD, as long as you have 4GB or more of RAM.

- My /home partition is on the SSD, with symlinks to a (mechanical) HDD for directories like Downloads, Pictures, Documents and Videos.
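Pulled together, the fstab side of those tweaks might look roughly like this (the tmpfs size, device name and mount options here are illustrative assumptions, not copied from an actual config):

```shell
# --- /etc/fstab fragments (illustrative) ---
# tmpfs      /tmp   tmpfs   defaults,size=2G   0 0    # keep /tmp in RAM
# /dev/sda1  /      ext4    defaults           0 1    # note: no "discard" option

# Periodic TRIM instead of mount-time discard; run as root,
# e.g. from a weekly cron job:
fstrim -v /
```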

Possibly important: I use i3 as my WM, so there are no indexing services running in the background (as there are when you run default KDE).

Other than using some lightweight programs instead of the standard ones (Ranger, newsbeuter, Claws-Mail), my systems are pretty much standard.

55020 07-19-2012 06:52 PM

Quote:

Originally Posted by TobiSGD (Post 4733195)
How should I know?

Well, I don't want to drag LQ into a pissing contest, and all the things you list are valid, but you'll just have to take my word for the fact that I know everything in your list just as well as you do. I'm not so dumb as to run an indexing service on a 700MHz armv6 with 192MB of memory, thanks all the same.

guanx 07-19-2012 07:09 PM

Quote:

Originally Posted by TobiSGD (Post 4732728)
SSDs are made to have work put on them; they can handle that really fast. They are not made to store large chunks of data (at least not for now). So what is your point?

My point will be "Please read the thread and get an idea of what we are talking about before you reply".

Quote:

Originally Posted by TobiSGD (Post 4732728)
Sure, let me see how you recover data from a HDD with a headcrash. By the way, recovering data is for people who are either too lazy or too ignorant to make proper backups.

Take this for example: http://istcolloq.gsfc.nasa.gov/fall2...s/pederson.pdf

TobiSGD 07-19-2012 07:37 PM

Quote:

Originally Posted by guanx (Post 4733215)
My point will be "Please read the thread and get an idea of what we are talking about before you reply".

I have read the thread, and its point, beginning with the OP, is the experiences of Slackware users with SSDs used together with HDDs in the same system. This evolved into a discussion of the lifetime of SSDs. So I still have to ask: what is your point with that post?

Ah, now I see. Your statement:
Quote:

Furthermore, it is super easy to recover data from a failed HDD.
was actually meant as: hey, if it crashes, it is pretty easy to give it to a professional data recovery company, and it will only cost a few hundred to a thousand dollars.

Quote:

Originally Posted by 55020 (Post 4733207)
Well, I don't want to drag LQ into a pissing contest, and all the things you list are valid, but you'll just have to take my word for the fact that I know everything in your list just as well as you do. I'm not so dumb as to run an indexing service on a 700MHz armv6 with 192MB of memory, thanks all the same.

Maybe there are some other, pretty simple factors that make your system slow with the noop scheduler.
Some things that come to mind, though I can't say whether any of them apply to your system:
- a slow SD card controller.
- SD cards are orders of magnitude slower than SSDs, especially in write speed, but also in read speed.
- As far as I know, SD cards/controllers don't support the TRIM feature, which can have a serious impact on write speed.

I have never heard from an SSD user with the same problems that you have with the SD card (which doesn't mean that those problems can't exist), so I would simply assume that the problem is caused by your system.
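Whether a given drive or controller advertises TRIM at all can be checked from userspace; a hedged sketch using hdparm (the device name /dev/sda is an assumption, and the command needs root):

```shell
#!/bin/sh
# Check whether a drive reports TRIM support in its identify data
# (run as root; /dev/sda is an illustrative device name).
if command -v hdparm >/dev/null 2>&1; then
    hdparm -I /dev/sda 2>/dev/null | grep -i trim \
        || echo "no TRIM capability reported"
else
    echo "hdparm not installed"
fi
```

On a TRIM-capable SATA SSD the identify output includes a "Data Set Management TRIM supported" line; SD card readers typically report nothing.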

jrosevear 07-19-2012 07:54 PM

Quote:

Originally Posted by Jeebizz (Post 4584914)
Been a while since I posted in this forum...

Anyways, since I'm slowly (and finally) getting on my way in getting a new computer, I have been researching SSDs to supplement my new system. I was considering installing Slackware on an SSD and leave a conventional hard drive just for general storage.

So I am wondering what is everyone's opinions/experiences with Slackware on an SSD, and most importantly what type of filesystems are appropriate for an SSD?

I want to chime in with my (probably different) point of view, and at the same time give a plug (hope that's OK) for my SourceForge project called "Joe's Boot Disk (JBD)" which you can find at:

http://www.sourceforge.net/projects/joesbootdisk

I'm currently using two SSDs and I have Slackware installed on both of them. Perhaps it was unwise, but I just installed it in the normal way giving no consideration for the type of drive. I don't remember the file system types, but I think I used ext3 or ext4, the same as what I would normally have used.

What I did differently, however, is that the SSDs are each in a Sabrent USB enclosure. I made JBDs (see my SourceForge project) to boot them. The resulting combination has been working well for me. Most people are not familiar with this idea, so I'll say it again more slowly. Slackware (in each) is on the SSD in the USB enclosure. The enclosure connects to the Linux box (or CPU, tower, or whatever you like to call it) with a USB cable. The enclosure is very small as it is hardly bigger than the drive, and it needs no power beyond what USB provides. The Linux boxes are configured to boot first from CDROM. The JBD (a CDROM) does the job of booting Slackware on the SSD automatically when the Linux box is powered on.

I said I have two such SSDs in use. One serves as my firewall and dhcp server (instead of a standard, off-the-shelf router) for my home network. The Linux box it plugs into therefore has two network interface cards. It has been in use for about one year. The other SSD serves as an educational computer in the special ed class where I work. The students mostly use seamonkey, ktuberling, and mplayer, and it has been in use for about two years. Both give quick, reliable performance and are a pleasure to use. Also, being external to the Linux box, I can (and have) moved them from one Linux box to another with little effort.

In summary, the benefits of an SSD for me are (1) low power requirement, (2) small form factor, (3) speed of operation, and (4) works well with my JBDs. Although I've been conscious of their limitations, I've not treated them any differently from a hard drive.

-Joe

55020 07-20-2012 04:53 AM

Quote:

Originally Posted by TobiSGD (Post 4733231)
Maybe there are some other, pretty simple factors that make your system slow with the noop scheduler.
Some things that come to mind, though I can't say whether any of them apply to your system:
- a slow SD card controller.
- SD cards are orders of magnitude slower than SSDs, especially in write speed, but also in read speed.
- As far as I know, SD cards/controllers don't support the TRIM feature, which can have a serious impact on write speed.

I have never heard from an SSD user with the same problems that you have with the SD card (which doesn't mean that those problems can't exist), so I would simply assume that the problem is caused by your system.

But that was exactly my point to start with. By running this experiment on a setup that is maybe 200 times slower than a typical SSD, and that does *not* have any of the fancy features that alleviate the noop scheduler's shortcomings, you can see clearly in isolation what effect the noop scheduler has. And that effect is not good. I suggest that the noop scheduler can and does cause these problems with typical SSD setups, but people don't care, because it's all happening 200 times faster.
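For anyone who wants to repeat the comparison on their own hardware, the active scheduler can be inspected, and switched at runtime, through sysfs; a small sketch (the paths are standard sysfs, but the device name sda in the switch example is an assumption):

```shell
#!/bin/sh
# Print the I/O scheduler line for every block device; the kernel
# marks the active scheduler with brackets, e.g. "noop deadline [cfq]".
for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] || continue
    dev=${f#/sys/block/}
    printf '%s: %s\n' "${dev%%/*}" "$(cat "$f")"
done

# To switch a device to noop for a test, as root:
# echo noop > /sys/block/sda/queue/scheduler
```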

All I wanted to do was to make people question the conventional advice about the noop scheduler and to make people think critically about the noop scheduler itself. Apparently I failed: probably because the conventional advice has become a matter of faith. Alas.

Edit: I'm worried that this thread is starting to look anti-TobiSGD. No! I really admire Tobi's work at LQ, it's outstanding, invaluable and irreplaceable.

guanx 07-20-2012 05:13 AM

Quote:

Originally Posted by TobiSGD (Post 4733231)
I have read the thread, and its point, beginning with the OP, is the experiences of Slackware users with SSDs used together with HDDs in the same system. This evolved into a discussion of the lifetime of SSDs. So I still have to ask: what is your point with that post?

I think my point is clear enough from this post: http://www.linuxquestions.org/questi...ml#post4732705
If you still believe "storage reliability = rated lifetime" then I have nothing to say. Please feel free to put all your eggs in one basket.

TobiSGD 07-20-2012 06:20 AM

Quote:

Originally Posted by 55020 (Post 4733543)
All I wanted to do was to make people question the conventional advice about the noop scheduler and to make people think critically about the noop scheduler itself. Apparently I failed: probably because the conventional advice has become a matter of faith. Alas.

More a matter of experience. But anyway, I have never really noticed differences between the schedulers, so maybe you are right, but the differences in performance are so small that you can't notice them even under the heaviest load.

Quote:

Originally Posted by guanx (Post 4733555)
I think my point is clear enough from this post: http://www.linuxquestions.org/questi...ml#post4732705
If you still believe "storage reliability = rated lifetime" then I have nothing to say. Please feel free to put all your eggs in one basket.

I still don't get it. This thread is still about using SSDs in conjunction with HDDs. Also, no one said that using an SSD means that you don't have to do your backups. So please elaborate on how using an SSD is putting "your eggs into one basket".

onebuck 07-20-2012 10:57 AM

Member Response
 
Hi,

Quote:

Originally Posted by 55020 (Post 4733543)
But that was exactly my point to start with. By running this experiment on a setup that is maybe 200 times slower than a typical SSD, and that does *not* have any of the fancy features that alleviate the noop scheduler's shortcomings, you can see clearly in isolation what effect the noop scheduler has. And that effect is not good. I suggest that the noop scheduler can and does cause these problems with typical SSD setups, but people don't care, because it's all happening 200 times faster.

You are still comparing two different storage technologies. Show me the data or information to support your claims. As I said before, an 'SD' controller manages writes and reads differently than an 'SSD' controller. How do you define the 'noop' scheduler's shortcomings? Not just a verbal claim, please; support your argument.
Quote:

Originally Posted by 55020 (Post 4733543)
All I wanted to do was to make people question the conventional advice about the noop scheduler and to make people think critically about the noop scheduler itself. Apparently I failed: probably because the conventional advice has become a matter of faith. Alas.

Not a matter of faith but fact. You will find gains when using 'noop' on an 'SSD' for general overall system usage. 'deadline' should be used for indexed/overlay workloads and bottlenecks of that type. If you as a user are not sure, then use the default [cfq] scheduler.
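On Slackware, one way to make such a scheduler choice stick at boot, rather than echoing into sysfs per device, is the kernel's elevator parameter; a minimal lilo.conf sketch (the image path, partition and label are illustrative, not from an actual system):

```shell
# /etc/lilo.conf fragment -- set the default I/O scheduler system-wide
# image = /boot/vmlinuz
#   root = /dev/sda1              # illustrative partition
#   label = Linux
#   append = "elevator=noop"      # or elevator=deadline / elevator=cfq
#   read-only
```

Remember to re-run lilo after editing; note that elevator= sets the default for all block devices, so per-device overrides still belong in rc.local.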

Quote:

Originally Posted by 55020 (Post 4733543)
Edit: I'm worried that this thread is starting to look anti-TobiSGD. No! I really admire Tobi's work at LQ, it's outstanding, invaluable and irreplaceable.

I believe 'TobiSGD' can hold his own. Debates are healthy, with good, challenging, factual points of exchange. Hearsay or FUD has no place when discussing something of this sort. Opinions are fine, but support them with details. :)

Personally, I have been reading docs, benchmarks and anything else available to make a positive choice of scheduler. For one laptop, the 'noop' scheduler is the best overall choice for its 'SSD'. Along with proper configuration, 'noop' provides the best fit. This machine uses a 'Patriot' Pyro 'SSD', which supports 'write-back cache'. That is another point for performance gains, but not all 'SSD' controllers support 'write-back'. Users should dig into the 'SSD' manufacturer's data to ensure that maximum performance can be gained by tweaks for their system. Ask the technical support people to define or explain anything in your queries/questions.

One other thing: be sure to use relevant information; do not mix old 'SSD' configuration techniques with newer 'SSD's, since this can be a problem. Newer controllers provide better control techniques than older versions.

Buyer beware!

TobiSGD 07-20-2012 12:50 PM

Quote:

Originally Posted by 55020 (Post 4733543)
Edit: I'm worried that this thread is starting to look anti-TobiSGD.

Don't worry, I don't see it that way, and even if it were, you can be sure that I could handle it.

Martinus2u 07-21-2012 03:35 AM

Quote:

Originally Posted by 55020 (Post 4733185)
a FIFO is still a FIFO

Still, all modern SSDs and HDDs use NCQ, which means they re-order requests at their leisure. Your SD card does not.
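Whether the kernel is actually queuing requests on a given drive can be read from sysfs; a small sketch (the device name sda is an assumption, and the queue_depth attribute only exists for devices driven through the SCSI/ATA layer):

```shell
#!/bin/sh
# NCQ queue depth of a SATA drive: 1 means no queuing is in effect,
# while 31/32 is typical for NCQ-capable drives on an AHCI controller.
f=/sys/block/sda/device/queue_depth
if [ -r "$f" ]; then
    echo "queue depth: $(cat "$f")"
else
    echo "no queue_depth attribute (not a SCSI/SATA device?)"
fi
```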

55020 07-21-2012 04:25 AM

Quote:

Originally Posted by Martinus2u (Post 4734343)
Still, all modern SSDs and HDDs use NCQ, which means they re-order requests at their leisure. Your SD card does not.

How many times do I have to repeat myself?

Quote:

Originally Posted by 55020 (Post 4731619)
I'm guessing that, in practice, the 'noop' advocates are being saved from themselves by cleverness in their SSD controllers

I see no point in continuing to participate in this thread if people can't remember beyond the last thing they read.

Martinus2u 07-21-2012 06:10 AM

No need to get ratty if we all seem to agree on the point. Btw, in addition to your guesswork, I pointed out proof in my earlier post.

