[SOLVED] Moving slackware system from one drive to another
I know the original poster has moved on to other methods, but for future readers of this thread I want to suggest another rsync option in addition to those above. First boot into a live USB such as Clonezilla or GParted, mount both drives, and run:
rsync -aAXHv /mnt/hdd/ /mnt/ssd
(Note the trailing slash on the source, so the contents of /mnt/hdd are copied rather than the directory itself.) A stock Slackware 15.0 install has many hard links, mostly among the time zone files from memory, so adding the preserve-hard-links option (-H) should satisfy the OCD among us. Then just make the usual fstab and lilo/elilo tweaks as necessary.
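For anyone unsure what "the usual fstab and lilo tweaks" look like after the copy, here is a rough sketch. The device names (/dev/sdb1, /dev/sdb) and mount point are assumptions for illustration; adjust them for your own layout:

```shell
# Find the new root partition's UUID (hypothetical device name /dev/sdb1)
blkid /dev/sdb1

# Edit the copied fstab so the root entry points at the SSD,
# either by device name or (more robustly) by UUID, e.g.:
#   UUID=41f820e7-...  /  ext4  defaults  1 1
vi /mnt/ssd/etc/fstab

# Reinstall the boot loader onto the SSD. With -r, lilo chroots into
# the mounted system and reads /mnt/ssd/etc/lilo.conf, so the boot=
# and root= lines there must already name the SSD (e.g. boot=/dev/sdb)
lilo -r /mnt/ssd
```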
Quote:
Originally Posted by asarangan
'dumpe2fs -h /dev/sdb2' also gives me an I/O error:
dumpe2fs 1.46.5 (30-Dec-2021)
dumpe2fs: Input/output error while trying to open /dev/sdb2
Couldn't find valid filesystem superblock.
However, mkfs.ext4 -n /dev/sdb2 gave me the following:
mkfs.ext4 -n /dev/sdb2
mke2fs 1.46.5 (30-Dec-2021)
64-bit filesystem support is not enabled. The larger fields afforded by this feature enable full-strength checksumming. Pass -O 64bit to rectify.
Creating filesystem with 241172480 4k blocks and 60293120 inodes
Filesystem UUID: 41f820e7-e585-4b5c-88c8-729df6844afb
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
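For readers hitting the same "Couldn't find valid filesystem superblock" error: the point of mkfs.ext4 -n above is that -n is a dry run, so it only prints where the backup superblocks would live without writing to the disk. Those locations can then be fed to e2fsck or mount. A sketch, assuming /dev/sdb2 is the damaged partition:

```shell
# Dry run: list backup superblock locations WITHOUT touching the disk
mkfs.ext4 -n /dev/sdb2

# Try fsck against one of the listed backup superblocks
e2fsck -b 32768 /dev/sdb2

# Or mount read-only via a backup superblock. mount's sb= option is
# given in 1k units, so block 32768 on a 4k-block filesystem becomes
# sb=131072 (32768 * 4)
mount -o ro,sb=131072 /dev/sdb2 /mnt/rescue
```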
I am moving a system from HDD to SSD. They are both the same size.
Quote:
Originally Posted by asarangan
/dev/sdb used to run just fine. I was simply running dd from it, not to it. I am pretty sure, because I have my command history in the shell
Did you previously save the data from that HDD, which may have been failing from the start?
Did you check dmesg before executing the dd command, to be sure the if= source was the HDD (spinning) and not the SSD (flash)?
Having the command history still in the shell sounds unusual, since you would normally power off and unplug the SATA source drive before booting the SSD target drive.
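Double-checking which device is which before running dd is cheap insurance. A couple of ways to do it (the sd[ab] names are just examples):

```shell
# List block devices with size, model and rotational flag;
# ROTA=1 means a spinning HDD, ROTA=0 a flash SSD
lsblk -o NAME,SIZE,MODEL,ROTA

# Or grep the kernel log for how each disk was detected
dmesg | grep -i 'sd[ab]'
```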
It turns out that my problem was a combination of two factors. I had done all the copying correctly (rsync or cp), but I had not run lilo correctly. I booted with the installation USB stick, mounted the new drive as /mnt, and then did chroot /mnt. However, things would not work unless I also edited /etc/lilo.conf and changed image=/boot/vmlinuz to image=/mnt/boot/vmlinuz. Both run without errors, because both paths exist, but the former would not boot correctly. Once I made that change, everything worked fine. It was important, however, to change that line back to image=/boot/vmlinuz once booted from the new drive, before running lilo again.
However, there was a second problem. Initially I assumed that my problem was improper copying, so I tried several things, including dd. I had some bad sectors on my source drive, and dd was halting with an I/O error. So I tried dd with conv=noerror, but doing this completely wrecked the drive. I am not sure whether that was coincidental or whether dd pushed the failing drive over the edge. The superblock was corrupt; I tried mounting with a backup superblock, and that didn't work either. There were lots and lots of I/O errors, and I figured the drive was totally lost. Luckily I had most of the data from the prior copy attempts, so all was well.
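For future readers copying a drive with bad sectors: dd's option for continuing past read errors is conv=noerror (the status= option only controls progress output), and it should be paired with sync so failed reads are padded instead of silently shortening the copy. A sketch, assuming the failing HDD is /dev/sda and the target SSD is /dev/sdb:

```shell
# conv=noerror : keep going after read errors instead of halting
# conv=sync    : pad each failed block with zeros so offsets stay aligned
# status=progress : show a running byte count (status= does not take "noerror")
dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync status=progress
```

That said, for a drive that is actively failing, GNU ddrescue is usually the gentler choice, since it retries bad regions and keeps a log of what it could not read.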
It is for situations like this that lilo has the switch -r
Example:
Code:
lilo -r /mnt
Using that switch, there is no need to temporarily mess around with lilo.conf.