Old 11-24-2014, 06:46 AM   #16
solarfields
Senior Member
 
Registered: Feb 2006
Location: Outer Shpongolia
Distribution: Slackware
Posts: 1,027

Rep: Reputation: 605

pchristy,

You can take a look at the corresponding section of the Arch Linux wiki. What I did a couple of years ago was to stick with ext4, modifying my /etc/fstab like this for /:

/dev/sda1 / ext4 defaults,noatime,discard 0 1

So far I have not had any issues, but I would appreciate it if someone with more knowledge could confirm that this setup is fine.
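
If you want to double-check that the drive actually advertises TRIM before adding discard, something like this should do it (just a sketch, assuming hdparm is installed and sda is the SSD):

Code:
# look for "Data Set Management TRIM supported" in the identify output
hdparm -I /dev/sda | grep -i trim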

Last edited by solarfields; 11-24-2014 at 06:47 AM.
 
Old 11-24-2014, 07:55 AM   #17
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 6,179

Rep: Reputation: 3922
I totally agree with TobiSGD. Modern SSDs are not nearly as prone to wearing out as the older ones. That was the whole point of me mentioning that Tech Report article. With the way wear leveling works and the advances in technology that increase the actual write limits, you'd have to go through an astronomical amount of data to actually wear the drive out. And when I say astronomical, Intel's spec is rated for 20GB written to the drive, EVERY DAY, for THREE YEARS!! Who does that? And that's just what it's rated for. The tests showed that it was actually capable of 30x that amount before the drive actually failed. They were able to write over 700TB to the drive before it broke. I did the math, and that's the equivalent of 380GB written to the drive, EVERY DAY, for FIVE YEARS! Depending on the size of your drive, that could literally be wiping the drive every day and copying everything back over for 5 years. (2 drives have over 1.5PB written and were still going on the last update -- I don't think I need to do the math there ) Basically, with modern SSDs, you will almost always have the hardware fail before you run out of writes.

So no, I'm not worried about putting swap on the SSD. As TobiSGD mentioned, when you start using swap, it's because your extremely fast RAM filled up. Why not use the next fastest available media if you're not worried about it dying? I actually have a very basic partition layout. I have swap, /, and /home on my SSD, and all my media is stored on the other 5 traditional hard drives. I don't worry about moving /tmp over, because /tmp is where a lot of software is compiled by SlackBuilds, and I want that fast. I suppose I could move /tmp to RAM for an increase in speed, but I only have 8GB on my system (maxed out -- the system is almost 7 years old) and I tend to max out my RAM pretty consistently already. I can definitely see the reasoning behind TobiSGD's linking of mostly static home content to other drives, but that's just too much effort and would mess with my folder structure on my media drives. But from a logical standpoint, it does make sense to move things you wouldn't normally access to a traditional drive. I would, without a doubt, at least include /usr for the same reasons TobiSGD mentioned. You want the applications you open to benefit from that increased speed you have available.
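
For anyone who does want /tmp in RAM, it's only a one-line fstab entry (just a sketch; the size= value is an example and should be tuned to your RAM):

Code:
# /etc/fstab -- mount /tmp as RAM-backed tmpfs; size is an example
tmpfs   /tmp   tmpfs   defaults,noatime,mode=1777,size=2G   0 0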

Anyway, I'm about ready to get off my soapbox. The only other thing I can recommend is considering switching from noop to deadline. Both are immensely better than cfq; however, I personally believe deadline is the better option if you're not running a production server. The reason is that it queues all the requests and gives preference to reads over writes, so it will generally leave your system more responsive if you're doing some intense hard drive activity. Either way, using the command you listed, you can easily try out the two and see if you find one more to your liking.
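
For reference, switching at runtime is just a couple of commands (as root; sda is assumed to be the SSD, and the change only lasts until reboot):

Code:
# the scheduler currently in use is the one shown in brackets
cat /sys/block/sda/queue/scheduler
# switch to deadline (or noop) for this session only
echo deadline > /sys/block/sda/queue/scheduler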

Once you're ready to make it permanent, you can do it through lilo/grub, though note that a kernel parameter changes the scheduler for all drives connected to the system. With lilo, it's a simple append line in your lilo.conf:

Code:
boot = /dev/sda
image = /boot/vmlinuz
  label = Linux
  root = /dev/sda1
  append = "elevator=deadline"
  read-only
You can also play around with udev rules and have it set there, though I haven't tinkered with this much. The rule below should set any non-rotational drive to deadline, but I'm not sure how it would fare with removable flash media like thumb drives and memory cards; I'd guess any issues would be minimal.

Code:
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
You can just add this to a file you create in /etc/udev/rules.d/ (call it something like 55-ssd.rules) and then reboot. Verify it was set properly via cat /sys/block/sda/queue/scheduler.
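
If you'd rather not reboot, the rule can usually be applied on the fly as well (a sketch; behavior may vary a bit between udev versions):

Code:
# reload the rules and re-trigger block devices so the new rule takes effect
udevadm control --reload-rules
udevadm trigger --subsystem-match=block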

Anyway, I think I've said enough for now
 
2 members found this post helpful.
Old 11-24-2014, 10:22 AM   #18
rogan
Member
 
Registered: Aug 2004
Distribution: Slackware
Posts: 161

Rep: Reputation: 63
Deadline is definitely the better choice if it's going to be a kernel parameter, since that changes the default for all drives in the system.
The tests I've done with bonnie++ on platter drives and SSDs, all with JFS, show that as far as performance is concerned you really should not use cfq for anything. The performance hit with cfq is on the order of ten-fold under some circumstances.

Here are two examples on an aic7xxx controller with a Fujitsu MAP 10k rpm 36GB SCSI drive:

cfq on generic 3.10.17 smp kernel:
Code:
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
               Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                 1G  9902  98 46816  58 18580  24 11293  98 65370  32 381.8   5
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 10   105  11 +++++ +++   124   0   107  10 +++++ +++   124   4

deadline on generic 3.10.17 smp kernel:
Code:
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
               Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
                 1G  9947  98 53247  59 25271  31 11355  98 66862  32 410.7   4
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 10   481  39 +++++ +++  1277  23   559  47 +++++ +++   997  35
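
For reference, output like the above comes from an invocation roughly like this (a sketch; the test directory and user are examples, -s 1G and -n 10 match the Size and files columns shown):

Code:
# 1GB sequential I/O test plus a 10*1024-file create/read/delete test
bonnie++ -d /mnt/test -s 1G -n 10 -u nobody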

The difference in performance when it comes to many small files, as on e.g. a mail server, is ten-fold in this case.
If you want to see how JFS performs compared to other file systems on a modern Linux kernel on SSDs:
http://www.phoronix.com/scan.php?pag...ux317_fs&num=1
The editor claims that JFS delays journal writes and thus "cheats".
I tend to think that if it was good enough for IBM mainframes, it's good enough for a speed freak like me.

Last edited by rogan; 11-24-2014 at 10:55 AM.
 
Old 11-24-2014, 01:08 PM   #19
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 6,179

Rep: Reputation: 3922
Quote:
Originally Posted by rogan View Post
The difference in performance when it comes to many small files, as on e.g. a mail server, is ten-fold in this case.
If you want to see how JFS performs compared to other file systems on a modern Linux kernel on SSDs:
http://www.phoronix.com/scan.php?pag...ux317_fs&num=1
The editor claims that JFS delays journal writes and thus "cheats".
I tend to think that if it was good enough for IBM mainframes, it's good enough for a speed freak like me.
Seeing that, it seems that JFS is really only the best when it can queue the writes. On the other tasks, it seemed to lag behind many other contenders. Overall, I still don't think there is a clear winner on the best filesystem for SSDs. JFS can be extremely fast in some scenarios, but it didn't do nearly as well in the compile benchmarks.
 
Old 11-24-2014, 01:19 PM   #20
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Rep: Reputation: 491
Make sure to use the deadline I/O scheduler with JFS, as this works best.
 
Old 11-24-2014, 02:11 PM   #21
rogan
Member
 
Registered: Aug 2004
Distribution: Slackware
Posts: 161

Rep: Reputation: 63
Scheduler choice is probably more hardware- and application-dependent than filesystem-dependent.
I have seen cfq do pretty well on older low-spec hardware, even with JFS, as long as you don't run database loads.
 
Old 11-24-2014, 02:20 PM   #22
BratPit
Member
 
Registered: Jan 2011
Posts: 237

Rep: Reputation: 85
Some thoughts about SSDs.

1. Better to buy a UPS, or an SSD with power-loss capacitors.

http://lkcl.net/reports/ssd_analysis.html

2. If you enable "discard" (TRIM), the good news is longer endurance; the bad news is that after a TRIM the data are unrecoverable.

3. If you use TRIM on an LVM backend, remember to set the "issue_discards" option in /etc/lvm/lvm.conf (see the sketch at the end of this post).

4. There are security issues with wiping data from an SSD: neither an ATA "secure erase" nor filling the entire disk with zeros gives 100% certainty.
e.g.
http://nvsl.ucsd.edu/index.php?path=projects/sanitize
https://www.google.pl/url?sa=t&rct=j...80185997,d.cWc

5. If you encrypt the disk with LUKS, it is better to move the LUKS header to an HDD to avoid incidental damage caused by the disk's garbage collection.
Of course you can switch on "discard" at the device-mapper level (also sketched below). It helps, but no one knows how each manufacturer's firmware behaves in this case; there is no standard.

6. If you encrypt the disk with TRIM enabled, that is another issue. Better not to TRIM :-)

http://asalor.blogspot.hu/2011/08/tr...-problems.html

If none of that bothers you, you can enjoy a system that boots in 10-15 seconds.
You have to move forward :-)
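
To put points 3 and 5 into concrete form, the relevant snippets look roughly like this (only a sketch; the device name and mapping name are examples, whether your init scripts honor crypttab options varies, and whether you want discard through dm-crypt at all is exactly the question raised above):

Code:
# /etc/lvm/lvm.conf -- pass discards down through LVM (point 3)
devices {
    issue_discards = 1
}

# /etc/crypttab -- allow discards through dm-crypt (point 5); roughly equivalent to
# opening with: cryptsetup luksOpen --allow-discards /dev/sda2 cryptroot
cryptroot   /dev/sda2   none   luks,discard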

Last edited by BratPit; 11-24-2014 at 02:39 PM.
 
Old 11-24-2014, 02:24 PM   #23
jtsn
Member
 
Registered: Sep 2011
Posts: 922

Rep: Reputation: 477
Quote:
Originally Posted by BratPit View Post
5. If you encrypt the disk with LUKS, it is better to move the LUKS header to an HDD to avoid incidental damage caused by the disk's garbage collection.
Why should GC cause any damage to user data?

Quote:
Of course you can switch on "discard" at the device-mapper level. It helps, but no one knows how each manufacturer's firmware behaves in this case; there is no standard.
Actually ATA TRIM is standardized very well.
 
Old 11-24-2014, 03:24 PM   #24
BratPit
Member
 
Registered: Jan 2011
Posts: 237

Rep: Reputation: 85
Sorry for my English. By "damage" I mean that the part of the partition containing the LUKS header and key slots may end up copied, in part or in full, to a completely different and unpredictable place. For an encrypted device, that is "damage" in security terms, IMO.

SSD controllers internally re-route the write location of ATA commands for wear-leveling purposes. One side effect of this is that you could end up with two (or more) copies of your LUKS header on the drive. On a regular HDD, if one passphrase gets compromised, you can just use LUKS to revoke that key and create a new one (as long as the master key wasn't compromised in the process). On an SSD, if you do this, an attacker could potentially find an old copy of your LUKS header and use the compromised passphrase to gain access to your entire drive, even data that was written after the change (because you're still using the same master key until you reformat).

and

http://code.google.com/p/cryptsetup/...AskedQuestions
Section
5.19 What about SSDs, Flash and Hybrid Drives?
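
For what it's worth, the detached-header setup from point 5 of my earlier post would look roughly like this (only a sketch; device names, paths and the header size are examples, and --header needs a reasonably recent cryptsetup):

Code:
# create a container file for the header on the HDD (size is an example, ample for a LUKS header)
truncate -s 16M /mnt/hdd/luks-header.img
# format the SSD partition, storing the header in that file instead of on the SSD
cryptsetup luksFormat --header /mnt/hdd/luks-header.img /dev/sdb1
# open it later by pointing at the detached header
cryptsetup luksOpen --header /mnt/hdd/luks-header.img /dev/sdb1 cryptssd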
 
Old 11-24-2014, 05:02 PM   #25
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 4,298

Rep: Reputation: 1957
Quote:
Originally Posted by jtsn View Post
Actually ATA TRIM is standardized very well.
Prior to the SATA 3.1 spec, introduced in July 2011, the TRIM command was not queueable and could result in severe performance degradation if issued after every filesystem delete operation. Unless you have newer devices and drivers that support the queued TRIM command, you are better off running a daily cron job that runs an fstrim command.
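
A minimal version of such a cron job might look like this (a sketch; the mount points are examples, fstrim needs to be on cron's PATH, and the filesystems must support discard):

Code:
#!/bin/sh
# /etc/cron.daily/fstrim -- trim the SSD-backed filesystems once a day
fstrim -v /
fstrim -v /home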

You can run
Code:
hdparm -I /dev/sdX | grep Transport:
to see if your drive supports SATA Rev 3.1.

Last edited by rknichols; 11-24-2014 at 05:07 PM. Reason: Add "You can run..."
 
Old 11-24-2014, 05:26 PM   #26
jtsn
Member
 
Registered: Sep 2011
Posts: 922

Rep: Reputation: 477
Quote:
Originally Posted by BratPit View Post
SSD controllers internally re-route the write location of ATA commands for wear-leveling purposes. One side effect of this is that you could end up with two (or more) copies of your LUKS header on the drive.
But only one of them will be accessible over SATA with read commands.
Quote:
On a regular HDD, if one passphrase gets compromised, you can just use LUKS to revoke that key and create a new one (as long as the master key wasn't compromised in the process). On an SSD, if you do this, an attacker could potentially find an old copy of your LUKS header
No, you can't, because the SSD won't give you access to the overwritten LUKS header.
 
Old 11-24-2014, 05:41 PM   #27
Jeebizz
Senior Member
 
Registered: May 2004
Distribution: Slackware14.2 64-Bit Desktop, Devuan 2.0 ASCII Toshiba Satellite Notebook
Posts: 2,722

Rep: Reputation: 757
I use JFS with discard, no problems whatsoever. I would still prefer to use F2FS, but that hasn't been an option yet; hopefully that will change in the not-too-distant future, since it is a filesystem designed FOR such disks.
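
In fstab terms that's nothing exotic; a sketch of such an entry (the device and mount point are examples):

Code:
/dev/sda1   /   jfs   defaults,noatime,discard   0 1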

Last edited by Jeebizz; 11-24-2014 at 05:42 PM.
 
Old 11-24-2014, 08:18 PM   #28
jtsn
Member
 
Registered: Sep 2011
Posts: 922

Rep: Reputation: 477
For maximum SSD performance, you want something with the least possible CPU load.
 
Old 11-24-2014, 11:41 PM   #29
ttk
Member
 
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 877
Blog Entries: 27

Rep: Reputation: 1220
I prefer ext4 on my SSDs and on my root filesystems, and XFS on non-root filesystems on spinning drives.

XFS really shines when directory sizes become very large (tens of thousands or hundreds of thousands of files), and when accessing deeply-nested hierarchies of subdirectories. I use it for bulk data storage filesystems.

Ext4 has better SSD support, and is measurably faster at accessing small files in typically-sized directories (as seen on root filesystems). Multi-process locking/unlocking of lockfiles, in particular, is about 25% faster (on my hardware, under Slackware 14.1) under ext4 than XFS.

On the other hand, if I think I'll want to use XFS's administrative tools (particularly xfsdump, xfsrestore, and xfs_freeze), I'll use XFS even if it's on an SSD and/or root filesystem. They're only really worth it in my experience on systems with a lot of shell users, or webmasters, each with their own /home directories.
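
For anyone who hasn't used them, the dump/restore pair is straightforward (a sketch; paths, labels and the dump level are examples):

Code:
# level-0 (full) dump of an XFS /home to a file, then restore it elsewhere
xfsdump -l 0 -L home -M local -f /backup/home.dump /home
xfsrestore -f /backup/home.dump /mnt/restore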
 
Old 11-25-2014, 03:53 PM   #30
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Rep: Reputation: 491
Quote:
Originally Posted by jtsn View Post
For maximum SSD performance, you want something with the least possible CPU load.
How come? Either way, JFS has the lowest CPU load; I'm just wondering why.
 
  

