LinuxQuestions.org (/questions/)
-   Slackware (https://www.linuxquestions.org/questions/slackware-14/)
-   -   Advice - Moving existing system to a SSD. (https://www.linuxquestions.org/questions/slackware-14/advice-moving-existing-system-to-a-ssd-4175557637/)

dalgrim 10-31-2015 09:50 AM

Advice - Moving existing system to an SSD.
 
I have a working 14.1 installation running on a small 150GB partition of my 1TB WD drive. I'm getting a new Samsung 850 Pro 512GB drive. I've never migrated a running system before, so I'd like some advice on how to do it. Can I just dd the devices once they are both in? Also, I had some questions because this is an SSD:

1) What, if any, areas should I NOT put on the SSD? (write cycle limitations with SSDs)
2) What file system should I use for a SSD? (trim support, etc)
3) Would I be better off doing a fresh install? (-current perhaps?) Or have there been whisperings of 14.2 or 15.0 coming soon...
4) I don't want to be one of those RTFM people. Any good reading on the subject?

-Brian

Didier Spaier 10-31-2015 10:27 AM

I am not sure about the write cycle limitations of recent SSDs, but I just mount / with noatime,nodiratime and still use ext4. Caveat: see the warning in "man mount" about noatime possibly breaking some apps; consider strictatime instead in that case. But I am not (yet?) a mutt user.
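
For reference, the root line in my /etc/fstab looks something like this (the device name is only an example, adapt it to your drive):

Code:

# example fstab entry; the device name is illustrative
/dev/sda1   /   ext4   defaults,noatime,nodiratime   1   1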

As an alternative to dd you could consider rsync.
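
If you go the rsync way, a minimal sketch, assuming the SSD is already partitioned, formatted and mounted on /mnt/ssd (the paths are illustrative), would be:

Code:

# Copy the whole system, preserving permissions, ownership, ACLs,
# xattrs and hard links; skip pseudo-filesystems and mount points.
rsync -aAXHv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*"} / /mnt/ssd

You would still have to adjust /etc/fstab on the SSD and reinstall the boot loader afterwards.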

It never hurts to do a fresh install (I do that instead of upgrading; it's a good occasion to clean house).

No one (i.e., not even Pat) can give you a reliable ETA for 14.2, I think.

PS: I have only one / partition on the SSD, and put big files/directories like ISOs, virtual disks and mirrors in /archives on a 1TB hard disk.

EDIT: actually noatime suffices, as it implies nodiratime, as stated in "man 2 mount".

Emerson 10-31-2015 10:30 AM

I used cp -a to migrate my Gentoo systems to SSD, followed by chrooting in and installing the bootloader.
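
In outline it went something like this; device names and paths are only examples, and on Slackware 14.1 the boot loader step would typically be lilo rather than grub:

Code:

# Copy the top-level system directories, preserving all attributes
# (add /lib64 or anything else present at the top of your /):
cp -a /bin /boot /etc /home /lib /opt /root /sbin /srv /usr /var /mnt/ssd/
mkdir /mnt/ssd/{dev,proc,sys,tmp,run,mnt}
chmod 1777 /mnt/ssd/tmp
# Bind-mount the pseudo-filesystems and chroot in:
mount --bind /dev /mnt/ssd/dev
mount --bind /proc /mnt/ssd/proc
mount --bind /sys /mnt/ssd/sys
chroot /mnt/ssd /bin/bash
# Inside the chroot: adjust /etc/fstab and /etc/lilo.conf for the new
# drive, then run lilo (or your bootloader's install command).
lilo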

sidzen 10-31-2015 10:34 AM

Taking into account what Didier Spaier said (regarding noatime, etc.), try Clonezilla and an unused USB stick; after booting Clonezilla, choose the beginner option and 'save part', then select the partition(s) desired. This way, the 'restore part' step on the other end (the SSD) will not return the error that there is not enough space.

smallpond 10-31-2015 10:42 AM

Write endurance is 300 TBW for that drive, so even over a 10-year life you would have to average roughly 80 GB of writes per day to use it up (300 TB / 3,650 days ≈ 82 GB/day). So why are you worrying about it?

http://www.samsung.com/global/busine...fications.html

Richard Cranium 10-31-2015 10:47 AM

If you were using LVM, it would be a simple matter of adding the new drive as a physical volume, adding it to your volume group, and issuing a pvmove command. That would move all the extents off the chosen physical volume while the system is running.
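
Roughly like this, with made-up names (/dev/sdb1 being a partition on the new SSD, /dev/sda2 the old physical volume, and vg0 your volume group):

Code:

pvcreate /dev/sdb1           # initialize the SSD partition as a physical volume
vgextend vg0 /dev/sdb1       # add it to the existing volume group
pvmove /dev/sda2 /dev/sdb1   # move every extent off the old PV, online
vgreduce vg0 /dev/sda2       # finally remove the old disk from the group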

onebuck 10-31-2015 11:31 AM

Member response
 
Hi,

You really do not need 'nodiratime', since it is implied by 'noatime'.

You may find this helpful; http://www.linuxquestions.org/questi...ml#post4585376

Plus this post has information that may help: #26

Be sure to look at your SSD specifications and have them on hand while setting up your system. The links above are related to Slackware.

You could do an LQ Advanced Search to find more Slackware SSD information.

As to modern SSD technology, on a well maintained system you will likely stop using the 'SSD' before it has a chance to fail. Most consumer grade 'SSD' have a long lifetime thanks to the newer controllers in MLC type drives.

Hope this helps.
Have fun & enjoy!
:hattip:

Didier Spaier 10-31-2015 12:02 PM

Quote:

Originally Posted by onebuck (Post 5442871)
You really do not need 'nodiratime', since it is implied by 'noatime'.

Thanks for the clarification. I should have read "man 2 mount", not only "man 1 mount", as that is where it is stated. I edited my first post accordingly.

onebuck 10-31-2015 12:13 PM

Member response
 
Hi,

That's OK!

I do read a lot when researching system configurations. Users should be aware of how deep one needs to go to really understand a system's configuration. There is too much 'FUD' out there, which can cause harm when users take a statement as true instead of researching the real facts.

The detail that members provide is one of the reasons the LQ Slackware forum is so popular. :)

Documentation is your friend and should be used by all. Memory does fade over time if not used! I still document all my system admin/maintenance work.

:hattip:

imitheos 10-31-2015 12:14 PM

My setup is pretty much the same as Didier's. I have /, /usr, /var, etc. on the SSD, and I have the home directories, /usr/src, git repositories, VM images, wine trees, etc. on a 1TB HDD.

I use noatime on every filesystem, and I didn't take any special measures when I migrated the data to the SSD, but I do not use swap and have /tmp on a memory-backed tmpfs. I use btrfs (with the additional autodefrag mount option) but I cannot recommend it, because while I have never had any problems, it is not considered stable yet. ext4 is probably the safest bet.
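
For reference, a memory-backed /tmp takes just one fstab line, something like this (the size is only an example):

Code:

# memory-backed /tmp; the size is illustrative
tmpfs   /tmp   tmpfs   defaults,noatime,mode=1777,size=2G   0   0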

Regarding the write cycles, avoid putting swap and /tmp on the SSD _if_ you can, but do not sweat this much. The number one failure factor for SSDs is metadata corruption, due to power loss or the like, which leads to a bricked SSD. Modern MLC SSDs have great endurance and can withstand many times their rated GB/day.

Edit: Also, try to take advantage of your SSD and use it instead of letting it sit. Most guides recommend watching your write patterns to avoid wearing out the flash, and I used to believe that myself, but I have reconsidered. For the money you pay for a 512GB SSD you can buy a 2TB (or even larger) HDD. You do not buy an SSD to store large amounts of data for extreme periods; you buy it for its speed. With prices falling, you will probably have replaced it with 2 more SSDs before the 10-year warranty has passed. So take advantage of the speed and write as much data as you like.

onebuck 10-31-2015 03:14 PM

Member response
 
Hi,
Quote:

Originally Posted by imitheos (Post 5442887)
<snip>

Regarding the write cycles, avoid putting swap and /tmp on the SSD _if_ you can, but do not sweat this much.

Most manufacturer data does not show this for newer 'SSD'.

Quote:

Originally Posted by imitheos (Post 5442887)
The number one failure factor for SSDs is metadata corruption, due to power loss or the like, which leads to a bricked SSD.

Do you have any data or a reference for this? I do, from SanDisk: http://www.sandisk.com/assets/docs/A...Protection.pdf
Quote:

Most Enterprise- and industrial-class solid state drives rely on power failure circuitry that monitors the supply voltage and generates an "early warning" signal to the SSD controller if the voltage drops below a predefined threshold. A secondary voltage hold-up circuit is implemented to ensure the drive has power for a sufficient time to harden data whenever that warning is received. As an example of an Enterprise-class implementation, Figure 1 below illustrates the power failure circuit block diagram of [...]
Most modern 'SSD' do have control circuitry to protect the device.

Quote:

Originally Posted by imitheos (Post 5442887)
Modern MLC SSDs have great endurance and can withstand many times their rated GB/day.

Where do you get the GB/day limit?

Quote:

Originally Posted by imitheos (Post 5442887)
Edit: Also, try to take advantage of your SSD and use it instead of letting it sit. Most guides recommend watching your write patterns to avoid wearing out the flash, and I used to believe that myself, but I have reconsidered. For the money you pay for a 512GB SSD you can buy a 2TB (or even larger) HDD. You do not buy an SSD to store large amounts of data for extreme periods; you buy it for its speed. With prices falling, you will probably have replaced it with 2 more SSDs before the 10-year warranty has passed. So take advantage of the speed and write as much data as you like.

My 'SSD' usage supports long term data retention for any kind of data. A properly configured modern/current 'SSD' can be used for writing large data or for any data type. If you have heavy DB usage then you should consider the 'deadline' or 'cfq' I/O scheduler. Leave wear to the 'SSD' controller to maintain and control.
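
On a stock kernel you can check and switch the scheduler at runtime; a quick sketch ('sda' is only a placeholder for your SSD device):

Code:

# show the available schedulers; the active one is in brackets
cat /sys/block/sda/queue/scheduler
# switch to deadline for this session (as root; not persistent across reboots)
echo deadline > /sys/block/sda/queue/scheduler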

imitheos 10-31-2015 03:57 PM

Quote:

Originally Posted by onebuck (Post 5442938)
Hi,
Do you have any data or a reference for this? I do, from SanDisk: http://www.sandisk.com/assets/docs/A...Protection.pdf Most modern 'SSD' do have control circuitry to protect the device.

No, I do not have any concrete evidence. I said it based on my (admittedly very small) experience and on what I read on review sites. If I remember correctly, the only current non-enterprise SSD that has power failure protection is the Intel 730. Some other drives (the Crucial ones, for example) do have capacitors, but they only "partially" protect. Even the document you gave mentions "Most Enterprise- and industrial-class drives"; note the keywords "Enterprise" and "most". Anyway, I mentioned power failure as an example. There were other causes that bricked SSDs through metadata corruption, like inadequately tested firmware. What I wanted to say is that flash wear-out is not to be feared so much.

Quote:

Originally Posted by onebuck (Post 5442938)
Where do you get the GB/day limit?

Don't all the manufacturers mention it?

Quote:

Originally Posted by onebuck (Post 5442938)
My 'SSD' usage supports long term data retention for any kind of data. A properly configured modern/current 'SSD' can be used for writing large data or for any data type. If you have heavy DB usage then you should consider the 'deadline' or 'cfq' I/O scheduler. Leave wear to the 'SSD' controller to maintain and control.

Yes, of course it can be used for different purposes. I didn't have a large database, or a server in general, in mind when I wrote my post. Based on the OP's post, I assumed he was talking about a typical desktop/laptop case. It was wrong of me to assume a certain scenario for the OP.

Darth Vader 10-31-2015 05:31 PM

No one needs more than one standard/cheapo 128GB SSD for his personal Slackware root filesystem, with a WD Blue of about 1TB for /home and swap, and some RAID5 or even RAID6 made from 4 to 6 x 3TB Enterprise grade WD drives for the personal collection of Pron, if applicable. For developers, an additional 500GB WD Black Edition will be useful. :hattip:

onebuck 10-31-2015 05:43 PM

Member response
 
Hi,

'GB/day' is a general specification derived from the maximum GB written over the rated life, given for comparison. It is not a hard limit, just the available GB/day that meets the rated life in general. MTBF and MTTF are not concrete specs but QC specs, tested over a specific number of sample pieces. 2 million hours to potential issues is a longer term than most users will ever reach (2,000,000 hours is 83,333 days, or about 228 years). It would be time for a new system before a new drive, if you lived that long.

As to power backup for 'SSD' from the same link http://www.sandisk.com/assets/docs/A...Protection.pdf
Quote:

3.1 Supercapacitor
A supercapacitor is an electrolytic capacitive charge storage device. It is capable of storing a large amount of energy in a comparatively small three-dimensional space. A generic supercapacitor-based voltage hold-up circuit is consistent with the block diagram shown in Figure 1.

Designing a supercapacitor-based power failure protection circuit is easy to do, and many SSDs employ the approach for this reason. Unfortunately, there are a number of concerns related to long term supercapacitor reliability that makes this component type questionable for Enterprise-class SSDs.

Supercapacitors are typically Aluminum Electrolytic Capacitors. This type of capacitor is known for a high capacitance-to-size ratio, and is an attractive choice for applications requiring large bulk capacitance like a solid state drive. However, like all electrolytic capacitors, supercapacitors suffer from a well known set of deficiencies with regard to long term reliability. Supercapacitors "wear out", resulting in reduced capacitance over time. They use a wet electrolyte, and the packaging is subject to ongoing losses via leakage and diffusion.

The performance of the supercapacitor degrades slowly with electrolyte loss, until the onset of total failure occurs with little or no warning. In addition, the loss rate increases with higher operating voltage, and in higher operating and non-operating temperature environments. For every 10°C of ambient operating temperature rise, the life expectancy of a supercapacitor can be cut approximately in half.

A thorough analysis done by SanDisk has shown that supercapacitors are not reliable enough to meet the required reliability standards for the high-performance enterprise and industrial computing markets served by SanDisk.
As to the firmware failures, most manufacturers do provide firmware updates for issues of this type when they are discovered. Keeping 'SSD' firmware up to date is the responsibility of the consumer/user, using the available updates.

I keep all my systems on a UPS so they will have clean power without interruptions, assuming the system PSUs do not fail and cause issues. I feel confident that my systems will continue providing me with reliable service. Backups!!! Plus properly configured systems.
:hattip:

dalgrim 11-01-2015 06:38 PM

Thanks for all the responses. I'll have my new SSD on Wednesday. I'll probably just put /tmp and maybe /var on the old HDD, and mount the rest of / on the SSD with discard,noatime.
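
Something like these fstab entries is what I have in mind (device names are placeholders until the drive arrives):

Code:

# SSD root with online TRIM and no atime updates
/dev/sdb1   /      ext4   defaults,noatime,discard   1   1
# /tmp staying on the old HDD
/dev/sda2   /tmp   ext4   defaults,noatime           1   2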

-Brian

