
LinuxQuestions.org (/questions/)
-   Slackware - Installation (https://www.linuxquestions.org/questions/slackware-installation-40/)
-   -   Upcoming fresh installation with SSD... (https://www.linuxquestions.org/questions/slackware-installation-40/upcoming-fresh-installation-with-ssd-4175652052/)

andrew.46 04-12-2019 08:38 PM

Upcoming fresh installation with SSD...
 
I will be building a new system soon and installing Slackware -current, no other OS. A big change for me is that I will be using a 2TB SSD (suddenly they have become affordable!!) instead of my usual mechanical drives.

My question is: should I be doing anything different with a 2TB SSD than my normal practice of partitioning with fdisk and using ext4? Many will disagree but I usually only have a partition for swap and a partition for everything else...

My suspicion is that there will be no difference, but I thought I would check with any Slackware user who has greater experience than me with SSDs. Not hard to have greater experience as I have zero experience :)

jbuckley2004 04-12-2019 09:42 PM

I'm not sure how to do this in Slack (Fedora user here). But there are some things you should be aware of. Well, one, anyway.

It amounts to this - you want to minimize the number of disk writes. Frequent overwriting of the same bytes on the SSD will cause it to fail, eventually. But unlike hard-drives, there's no way to mark a sector "bad" and automatically bypass it. The SSD just fails abruptly.

To minimize it, you can send things that are written frequently to the memory cache. For most of us that occurs in two places - the browser, when pages are cached, and the Linux log files written during a session.

From: https://www.howtoguides.org/Firefox-...-of-Disk-Cache
Quote:

1. Step Open Firefox
2. Step Enter about:config into the address bar where you normally enter addresses into – this is the ultimate config panel in Firefox
3. Step Enter browser.cache.disk.enable into the Filter
4. Step As you can see the value is by default set to true
5. Step Double-click on the entry to set it to false
6. Step Right-click into the white area and click on New Integer

7. Step Enter browser.cache.memory.capacity and hit enter
8. Step You are now supposed to enter an integer. This integer will be the amount of RAM in KB. So, enter the amount of RAM in KB that you want to dedicate to the memory cache, e.g. 131072 for 128MB RAM [F29; 153600]
There are similar tutorials for Chrome, IE9 and Opera (I think).

Similarly, log files are written every time you login to your session. Those too can be written to memory cache. In Fedora you just have to add a couple of lines to /etc/fstab, but I'm entirely ignorant of what to do in Slackware. Perhaps you can find a similar tutorial that addresses that distro specifically.
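For what it's worth, the fstab approach is distro-agnostic, so the same lines should work on Slackware. A minimal sketch, assuming you accept that logs vanish at every reboot (the mount options and size are examples, not a recommendation):

Code:

# /etc/fstab -- keep /var/log in RAM; everything here is lost at reboot
tmpfs /var/log tmpfs defaults,noatime,size=128m 0 0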

andrew.46 04-12-2019 10:44 PM

Quote:

Originally Posted by jbuckley2004 (Post 5984391)
It amounts to this - you want to minimize the number of disk writes. Frequent overwriting of the same bytes on the SSD will cause it to fail, eventually. But unlike hard-drives, there's no way to mark a sector "bad" and automatically bypass it. The SSD just fails abruptly.

Thank you for the points that you have made. As for the life of the drives, I have had a look at the SSD I have in mind: the Crucial MX500 2 TB 2.5" Solid State Drive. This has a TBW (Terabytes Written) company spec of 700TB. So if I write 20GB per day (this may average out over the year!) I will have total writes of 7,300GB per year, or roughly 7.3TB. So potentially a roughly 95-year life for the drive...

If I go nuts and increase overall writes to 50GB per day I would amass writes of 18,250GB per year, or 18.25TB. Even in this unlikely usage the guaranteed life of the SSD, based on the company spec of 700TB TBW, would still be about 38 years :).

I am aware that this is perhaps a really simplistic way of looking at this and I am, as always, ready to be corrected...
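The arithmetic above can be sketched in a few lines (the 700TB TBW figure is the rating quoted in the post; everything else is just unit conversion):

```python
# Back-of-the-envelope SSD endurance estimate, using the 700 TBW
# (terabytes written) rating quoted above for the Crucial MX500 2TB.
TBW_RATING_TB = 700

def years_of_life(gb_written_per_day):
    """Years until the rated endurance is reached at a constant daily write rate."""
    tb_per_year = gb_written_per_day * 365 / 1000
    return TBW_RATING_TB / tb_per_year

print(round(years_of_life(20), 1))  # roughly 96 years at 20 GB/day
print(round(years_of_life(50), 1))  # roughly 38 years at 50 GB/day
```

This ignores write amplification inside the drive, so the real figure is somewhat lower, but the order of magnitude stands.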

jbuckley2004 04-13-2019 10:41 AM

Andrew - If I understand the issue correctly, it isn't that things are written to the SSD. The problem is that, physically, the same bytes are overwritten relatively frequently in some circumstances. I have my Firefox set up to delete the cached files when it's shut down. That same space is often reused when I restart it, which may be several times a day. I don't know it for a fact, but I can easily believe that the Linux log files are treated similarly by the system. Of course, most files are not written, deleted and rewritten daily, so they are not a problem.

Either way, I suspect it's a preventative measure, one that's pretty easy to implement. I really don't expect my Crucial SSD (yes, I bought one too!) to die tomorrow regardless - but better safe than sorry, ya know!

bassmadrigal 04-15-2019 04:03 PM

Most filesystems (including ext4, which is the default in Slackware) will change operations if they detect they're being run on an SSD or NVMe drive. As long as the drive isn't full, they'll move writes around. It doesn't cost any extra time with it being an SSD to write data in different blocks from the original location. Not only that, but the SSD/NVMe controller will do wear leveling to move data around to prevent wearing out the same location. Finally, every drive comes with "overprovisioned" space, meaning additional space that is not allowed to be partitioned like normal. This is used when cells are getting close to their failure point. The SSD/NVMe controller will mark that cell as failed and will start using one in the overprovisioned space. These cells are referred to as reallocated, and you can typically see it in the output of your SMART data.
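If you want to watch those reallocation counters yourself, smartmontools can display them (the device name below is an example):

Code:

# Print SMART attributes, including reallocated sector/cell counts
smartctl -A /dev/sda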

@OP, I think you're fine sticking with your normal procedures for installing Slackware. I have now installed Slackware on several SSD/NVMe systems; right now it is on two (my media center with an SSD and my desktop with an NVMe drive). My desktop had an SSD for several years before I built a new system with an NVMe drive... that SSD is still kicking. I pretty much follow the same procedures as when I installed on mechanical drives, except I do specify noatime and nodiratime in my fstab (just so it doesn't update access times whenever I access a folder or file). I include my swap on my SSD/NVMe as well.
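A sketch of the kind of fstab entry described above (the device and mount point are examples; on recent kernels noatime already implies nodiratime, but listing both is harmless):

Code:

/dev/sda2  /  ext4  defaults,noatime,nodiratime  1 1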

Didier Spaier 04-19-2019 07:26 AM

+1 for noatime and nodiratime.

Let me also suggest these settings:
1. in /etc/fstab:
Code:

tmpfs  /tmp      tmpfs  defaults  0 0
tmpfs  /var/tmp  tmpfs  defaults  0 0
tmpfs  /dev/shm  tmpfs  defaults  0 0

2. in ~/.profile:
Code:

# Keep per-user cache and runtime files in RAM (/dev/shm is a tmpfs)
export XDG_CACHE_HOME=/dev/shm/$(whoami)
export XDG_RUNTIME_DIR=$XDG_CACHE_HOME
mkdir -p "$XDG_CACHE_HOME"
chmod 700 "$XDG_CACHE_HOME"

So, all temp and cached files will go in RAM, and this includes the files that are otherwise stored in ~/.cache, including the cached files of web browsers and Thunderbird, etc.

Some people would disagree with doing that for /var/tmp; the decision is yours.

Additionally, this auto-cleans these cached files at each reboot, which is good unless you have a really slow internet connection.

Only caveat: having /tmp in a tmpfs of course precludes using /tmp to store files you want to keep across reboots.

bassmadrigal 04-19-2019 12:17 PM

Quote:

Originally Posted by Didier Spaier (Post 5986609)
Some people would disagree to do that with /var/tmp, decision is yours.

"Some people" like the FHS? ;)

Quote:

The /var/tmp directory is made available for programs that require temporary files or directories that are preserved between system reboots.

Didier Spaier 04-19-2019 12:28 PM

Quote:

Originally Posted by bassmadrigal (Post 5986694)
"Some people" like the FHS? ;)

Indeed ;)

However, the statements in the chapter you linked to are not perfectly clear to me:
Quote:

The /var/tmp directory is made available for programs that require temporary files or directories that are preserved between system reboots. Therefore, data stored in /var/tmp is more persistent than data in /tmp.

Files and directories located in /var/tmp must not be deleted when the system is booted. Although data stored in /var/tmp is typically deleted in a site-specific manner, it is recommended that deletions occur at a less frequent interval than /tmp.
What exactly does "more persistent" mean? And how frequent is "less frequent"?

Anyway they recommend that the deletion occurs "in a site-specific manner", so I feel allowed to decide for my site (i.e., my personal laptop) this manner be "at each boot" ;)

sevendogsbsd 04-19-2019 12:32 PM

And trim should be kept in mind as well. Since slack does not have systemd (I am not advocating systemd here), you will have to either run fstrim -v <file system> manually or do so in a cron job. It is not recommended to add the "discard" parameter to /etc/fstab because this does a continuous trim and is not necessary.
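A sketch of the cron approach (run crontab -e as root; the schedule and mount points are examples):

Code:

# Trim / and /home early every Sunday morning
30 3 * * 0  /usr/sbin/fstrim -v / && /usr/sbin/fstrim -v /home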

I am also not sure about trying to minimize writes to the SSD. Everything I have read recently says the technology is pretty good now and doesn't require any special handling except for trim. I have a few samsung 850 pros and have been beating the heck out of them over the 3 or 4 years I have had them and they show minimal wear.

I have read you should not run any sort of wiping software on an SSD you are still using, in case that is something you regularly do. I do not know if this is true or not but makes sense.

bassmadrigal 04-20-2019 12:20 AM

Quote:

Originally Posted by Didier Spaier (Post 5986699)
However the statements in the chapter you linked to are not perfectly clear for me:What exactly means "more" persistent? And how frequent is "less frequent"?

Looking at my files in there, it mainly seems to be KDE cache files (which come from a symlink in my home folder ~/.kde/cache-$COMPUTER_NAME). So, if these are deleted, they'd need to be regenerated on boot. Not a huge deal, but it is regenerated data that may not need to be regenerated.

Quote:

Originally Posted by Didier Spaier (Post 5986699)
Anyway they recommend that the deletion occurs "in a site-specific manner", so I feel allowed to decide for my site (i.e., my personal laptop) this manner be "at each boot" ;)

I'm not even sure what they mean by "site-specific manner"... is that left up to the programs or the administrator?

Either way, "tmp" files should have the expectation they can be deleted and should be able to be regenerated. Whether or not you delete them is a choice. I'm lazy, so I'll stick with leaving them as is :)

Quote:

Originally Posted by sevendogsbsd (Post 5986702)
And trim should be kept in mind as well.

Good call! I forgot about trim.

Quote:

Originally Posted by sevendogsbsd (Post 5986702)
It is not recommended to add the "discard" parameter to /etc/fstab because this does a continuous trim and is not necessary.

I think this is still a relic from older times with SSDs and things just haven't been updated (this is just my belief... it may not be accurate). I used discard on my old SSD without any ill effects. The only reason I didn't mention it above with the noatime and nodiratime options was that I completely forgot to set up any sort of trim on my new NVMe drive (I just ran fstrim on my /home partition and it cleared 246GB, plus an additional 85GB on my / partition!!). Personally, I've never had issues with discard and it seems to work perfectly normally. That said, if a person feels more comfortable with it, they can set up a cron job to run fstrim periodically on their system.

@OP don't be like me and forget to set up any type of trim... it can end up hurting performance on your drive. Basically, each NAND cell needs to be blank before it can be written to, unlike HDDs, which can simply be written over. So, if you have a bunch of cells that had data on them, then you deleted the data, and now the only place the drive can write to is where that data "used to be", it would first need to "discard" those blocks before it can write the new data. This can increase write times quite substantially. discard will basically reset those cells when you delete the files, whereas fstrim will do it whenever it is run (whether manually or from a cron job). Based on my numbers above, I had 246GB of deleted data in my /home partition whose cells still contained the old data and still needed to be cleared. If I had run out of "empty" space on the drive, then every new write would first have to clear cells, slowing it down.
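If you are unsure whether your drive and kernel support TRIM at all, you can check before scheduling anything (device name is an example); nonzero DISC-GRAN and DISC-MAX values mean discard requests are supported:

Code:

lsblk --discard /dev/sda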

Quote:

Originally Posted by sevendogsbsd (Post 5986702)
I am also not sure about trying to minimize writes to the SSD. Everything I have read recently says the technology is pretty good now and doesn't require any special handling except for trim. I have a few samsung 850 pros and have been beating the heck out of them over the 3 or 4 years I have had them and they show minimal wear.

Yeah, trying to minimize writes is something that needed to be done late last decade and, for some drives, early this decade. But for pretty much all current drives, it'll be almost impossible for someone to run into write issues with normal system usage.

Quote:

Originally Posted by sevendogsbsd (Post 5986702)
I have read you should not run any sort of wiping software on an SSD you are still using, in case that is something you regularly do. I do not know if this is true or not but makes sense.

This is simply because of the way NAND works. Anything you delete will be completely removed as soon as the next trim is run on the filesystem. These drives don't retain data the way HDDs do. Running wiping software just increases your writes for no reason. And if you do need to wipe the device, you can simply run blkdiscard, which discards all data within the region specified (by default, the whole device or partition you point it at).
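A sketch of that wipe (this destroys all data on the target, and the device name is an example):

Code:

# Discard every cell on the device -- irreversible
blkdiscard /dev/sdX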

Lysander666 04-21-2019 06:15 AM

Not worth making another topic but thanks to bass and all for the advice about fstrim.

I used it on my desktop SSD which I have never trimmed and this was the result:

Code:

root@psychopig-xxxiv:~# fstrim /home -v
/home: 2.4 GiB (2593120256 bytes) trimmed
root@psychopig-xxxiv:~# fstrim / -v
/: 2.5 GiB (2694127616 bytes) trimmed

Interestingly enough, it doesn't affect the data: the partitions are still as full as they were before. I'll have to clean them up another way.

wpeckham 04-21-2019 08:03 AM

Quote:

Originally Posted by Lysander666 (Post 5987161)
Not worth making another topic but thanks to bass and all for the advice about fstrim.

I used it on my desktop SSD which I have never trimmed and this was the result:

Code:

root@psychopig-xxxiv:~# fstrim /home -v
/home: 2.4 GiB (2593120256 bytes) trimmed
root@psychopig-xxxiv:~# fstrim / -v
/: 2.5 GiB (2694127616 bytes) trimmed

Interestingly enough, it doesn't affect the data: the partitions are still as full as they were before. I'll have to clean them up another way.

I think you misunderstand what trim does. It does not trim used data, it cleans up deleted (invisible) data in deleted blocks to speed up writes. This does not change space usage, it speeds up storage based performance on write.

Lysander666 04-21-2019 08:27 AM

Quote:

Originally Posted by wpeckham (Post 5987175)
I think you misunderstand what trim does. It does not trim used data, it cleans up deleted (invisible) data in deleted blocks to speed up writes. This does not change space usage, it speeds up storage based performance on write.

Yes, I knew I had something wrong and suspected that someone would set me straight. Thanks, wpeckham.

andrew.46 04-24-2019 04:12 AM

Thanks to all who have responded to this thread which I will now mark as solved. My 'take away' messages have been:
  1. Partition and create file system as normal
  2. Don't worry about limited write endurance with the newest SSDs
  3. Intermittent usage of fstrim is a great idea

Now just to save a little more money for this build which will be holding the SSD...

gus3 05-05-2019 07:37 PM

I also delete the ext4 journal on SSDs, to reduce writes even more. ext4 can be tuned to be very flash-friendly, but this particular boost comes at the cost of fsck time in the event of a system crash.
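For anyone wanting to try this, a sketch with tune2fs (the filesystem must be unmounted; the device name is an example, and the feature can be re-enabled later with -O has_journal):

Code:

tune2fs -O ^has_journal /dev/sdXN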

My 2¢.

