Upcoming fresh installation with SSD...
I will be building a new system soon and installing Slackware -current, no other OS. A big change for me is that I will be using a 2TB SSD (suddenly they have become affordable!!) instead of my usual mechanical drives.
My question is: should I be doing anything different with a 2TB SSD than my normal practice of partitioning with fdisk and using ext4? Many will disagree, but I usually only have a partition for swap and a partition for everything else... My suspicion is that there will be no difference, but I thought I would check with any Slackware user who has greater experience than me with SSDs. Not hard to have greater experience, as I have zero :) |
I'm not sure how to do this in Slack (Fedora user here). But there are some things you should be aware of. Well, one, anyway.
It amounts to this - you want to minimize the number of disk writes. Frequently overwriting the same flash cells will wear them out, and the drive's controller can only remap worn cells for so long before the drive fails. To minimize this, you can redirect things that are written frequently to a memory cache. For most of us that happens in two places - the browser, when pages are cached, and the Linux log files written during each session. From: https://www.howtoguides.org/Firefox-...-of-Disk-Cache
Similarly, log files are written every time you log in to your session. Those too can be redirected to a memory cache. In Fedora you just have to add a couple of lines to /etc/fstab, but I'm entirely ignorant of what to do in Slackware. Perhaps you can find a similar tutorial that addresses that distro specifically. |
If I go nuts and increase overall writes to 50GB per day, I would amass 18,250GB of writes per year, or 18.25TB. Even in this unlikely usage, the drive's rated endurance of 750TB TBW would last about 41 years :). I am aware that this is perhaps a really simplistic way of looking at it and I am, as always, ready to be corrected... |
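For anyone who wants to redo the arithmetic above with their own numbers, here is a quick sketch (the 50 GB/day and 750 TBW figures are simply the ones from the post; substitute your own):

```shell
# Estimate years until a drive's rated TBW (terabytes written) is reached.
# daily_gb and tbw_tb are the figures quoted above; adjust for your drive.
daily_gb=50      # assumed writes per day, in GB
tbw_tb=750       # drive's rated endurance, in TB written
awk -v d="$daily_gb" -v t="$tbw_tb" 'BEGIN {
    yearly_tb = d * 365 / 1000                # GB/day -> TB/year
    printf "%.2f TB/year, %.1f years to rated TBW\n", yearly_tb, t / yearly_tb
}'
```

With the post's numbers this prints 18.25 TB/year and roughly 41 years, which backs up the "don't worry about endurance" conclusion.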
Andrew - if I understand the issue correctly, it isn't simply that things are written to the SSD. The problem is that the same locations are rewritten relatively frequently in some circumstances. I have Firefox set up to delete its cached files when it shuts down, and that same disk space is often reused when I restart it, which may be several times a day. I don't know it for a fact, but I can easily believe the Linux log files are treated similarly by the system. Of course, most files are not written, deleted, and rewritten daily, so they are not a problem.
Either way, I suspect it's a preventative measure, and one that's pretty easy to implement. I really don't expect my Crucial SSD (yes, I bought one too!) to die tomorrow regardless - but better safe than sorry, ya know! |
Most filesystems (including ext4, which is the default in Slackware) will change their behavior if they detect they're running on an SSD or NVMe drive. As long as the drive isn't full, they'll spread writes around; on an SSD, writing data to different blocks than the original location costs no extra time. On top of that, the SSD/NVMe controller performs wear leveling, moving data around so the same locations don't wear out. Finally, every drive ships with "overprovisioned" space - extra capacity that can't be partitioned like normal. When cells get close to their failure point, the controller marks them as failed and starts using cells from the overprovisioned space instead. These cells are referred to as reallocated, and you can typically see them in the output of your SMART data.
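If you want to look at those SMART counters yourself, here is one way, assuming the smartmontools package is installed (the device name is an example and the grep pattern is just a convenience filter; typically needs root):

```shell
# Inspect SMART wear/reallocation counters with smartmontools.
# /dev/sda is an example device name -- adjust for your drive.
dev=/dev/sda
if command -v smartctl >/dev/null 2>&1; then
    # -A prints the vendor attribute table; the interesting rows are the
    # reallocated-sector/cell and wear-leveling counters
    smartctl -A "$dev" | grep -Ei 'realloc|wear|percent' || true
else
    echo "smartctl not found: install the smartmontools package"
fi
```

On NVMe drives smartctl reports a slightly different table (e.g. a "Percentage Used" field), but the same command works.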
@OP, I think you're fine sticking with your normal procedures for installing Slackware. I have now installed Slackware on several SSD/NVMe systems - right now it's on two (my media center with an SSD and my desktop with an NVMe drive; my desktop ran an SSD for several years before I built a new system with an NVMe drive, and that SSD is still kicking). I pretty much follow the same procedures as on mechanical drives, except that I specify noatime and nodiratime in my fstab (so access times aren't updated whenever I access a folder or file). I keep my swap on the SSD/NVMe as well. |
+1 for noatime and nodiratime.
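For reference, a minimal /etc/fstab entry reflecting that setup might look like this (the device, mount point, and fsck order are examples; use your actual partition or its UUID):

```
# /etc/fstab -- example root entry with noatime
/dev/sda1   /    ext4   defaults,noatime   1 1
```

Note that on Linux, noatime already implies nodiratime, so listing both is harmless but redundant.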
Let me also suggest these settings: 1. in /etc/fstab: Code:
tmpfs /tmp tmpfs defaults 0 0
2. in your shell profile (e.g. /etc/profile or ~/.profile): Code:
export XDG_CACHE_HOME=/dev/shm/$(whoami) Some people would disagree with doing the same for /var/tmp; the decision is yours. An added benefit is that these cached files are automatically cleaned at each reboot - which is good, unless you have a really slow internet connection. Only caveat: having /tmp in a tmpfs of course precludes using /tmp to store permanent files. |
However, the statements in the chapter you linked to are not perfectly clear to me. Anyway, they recommend that the deletion occur "in a site-specific manner", so I feel entitled to decide that for my site (i.e., my personal laptop) this manner is "at each boot" ;) |
And trim should be kept in mind as well. Since Slackware does not have systemd (I am not advocating systemd here), you will have to either run fstrim -v <file system> manually or do so from a cron job. Adding the "discard" mount option to /etc/fstab is not recommended, because that performs a continuous trim on every delete, which is unnecessary.
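A sketch of the cron approach, assuming a weekly run is often enough (the file name and trimmed paths are examples; remember to make the script executable):

```
#!/bin/sh
# /etc/cron.weekly/fstrim -- trim mounted filesystems once a week
# (list whichever mount points live on the SSD)
/usr/sbin/fstrim -v /
/usr/sbin/fstrim -v /home
```

Newer util-linux versions also offer fstrim --all, which trims every mounted filesystem that supports discard, so you don't have to list them individually.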
I am also not sure about trying to minimize writes to the SSD. Everything I have read recently says the technology is pretty good now and doesn't require any special handling except for trim. I have a few Samsung 850 Pros and have been beating the heck out of them over the 3 or 4 years I have had them, and they show minimal wear. I have also read that you should not run any sort of disk-wiping software on an SSD you are still using, in case that is something you regularly do. I do not know if this is true or not, but it makes sense. |
Either way, "tmp" files should come with the expectation that they can be deleted and can be regenerated. Whether or not you delete them is a choice. I'm lazy, so I'll stick with leaving them as is :)
@OP, don't be like me and forget to set up any type of trim... it can end up hurting your drive's performance. Basically, each NAND cell needs to be blank before it can be written to, unlike on HDDs, which can simply be written over. So if a bunch of cells held data, you deleted that data, and now the only place the drive can write to is where that data "used to be", the drive first needs to "discard" those blocks before it can write the new data. This can increase write times quite substantially. The discard mount option resets those cells when you delete the files, whereas fstrim does it whenever it is run (whether manually or from a cron job). Based on my data above, I had 246GB of deleted data in my /home directory, but the cells still contained the data and still needed to be cleared. If I were out of "empty" space on the drive, then any time I needed to write new data, the drive would first have to go and clear cells.
|
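Before setting up trim, it is also worth confirming that the kernel actually sees discard support on the drive; a quick check, assuming util-linux's lsblk is available:

```shell
# Check whether the kernel reports discard (TRIM) support for your drives:
# non-zero DISC-GRAN and DISC-MAX values mean the device accepts TRIM.
lsblk --discard
```

If those columns show zeros for an SSD, trim requests won't reach the drive (a common culprit is an intervening layer such as some USB adapters or RAID controllers).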
Not worth making another topic but thanks to bass and all for the advice about fstrim.
I used it on my desktop SSD which I have never trimmed and this was the result: Code:
root@psychopig-xxxiv:~# fstrim /home -v |
Thanks to all who have responded to this thread, which I will now mark as solved. My 'take away' messages have been: stick with my usual partitioning and ext4; mount with noatime; run fstrim periodically (e.g. from cron) rather than using the discard mount option; and don't worry too much about write endurance on a modern SSD.
Now just to save a little more money for this build which will be holding the SSD... |
I also delete the ext4 journal on SSD, to reduce writes even more. ext4 can be tuned to be very flash-friendly, but this particular boost comes at the cost of fsck time in the event of system crash.
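For reference, the journal removal mentioned above is a one-liner with tune2fs; the filesystem must be unmounted (and clean) when you do it. Here it is demonstrated on a throwaway image file rather than a real device:

```shell
# Demonstrate ext4 journal removal on a scratch image (nothing real is touched).
# On an actual system you would unmount the filesystem and point tune2fs at
# the device node instead, e.g. /dev/sda1 (example name).
truncate -s 64M /tmp/ext4-demo.img
mkfs.ext4 -q /tmp/ext4-demo.img
tune2fs -O ^has_journal /tmp/ext4-demo.img
tune2fs -l /tmp/ext4-demo.img | grep 'Filesystem features'  # has_journal is gone
rm -f /tmp/ext4-demo.img
```

The journal can be restored later with tune2fs -O has_journal, again on an unmounted filesystem.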
My 2¢. |