Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-tos, this is the place!
I am relatively new to anything beyond user-level tasks and basic package management in Linux, so hopefully this will be an easy answer.
I was recently tasked with moving a Red Hat 6.7 instance to a new drive. The original system is a "snowflake", so a reinstall and reconfigure was out of the question. The system is being moved from a RAID 0 array to a solid-state drive.
Here is what I have done so far and where I stand now:
1) created identical partitions on the new solid-state drive
2) rsynced all of the data to the new drive while running from a live CD
3) swapped out the hard drives
4) updated the drive info and UUIDs in device.map, grub.conf, and /etc/fstab
5) booted to a live CD again and chrooted to the system partition
6) ran grub-install
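The steps above could be sketched roughly as below. Everything here is an assumption for illustration: the device names (/dev/sda as the old disk, /dev/sdb as the new SSD), the two-partition layout, and the mount points are placeholders, not the poster's actual values.

```shell
# Sketch of the cloning procedure, run from a live CD. DESTRUCTIVE: the
# function is defined but deliberately not called; double-check devices first.
clone_to_ssd() {
    # 1) partition the new drive to match the old layout (interactive)
    fdisk /dev/sdb
    mkfs.ext4 /dev/sdb1   # /boot
    mkfs.ext4 /dev/sdb2   # /

    # 2) copy everything, preserving permissions, ACLs, xattrs, and hard links
    mkdir -p /mnt/old /mnt/new
    mount /dev/sda2 /mnt/old
    mount /dev/sdb2 /mnt/new
    rsync -aAXH /mnt/old/ /mnt/new/

    # 4) note the new UUIDs for device.map, grub.conf, and /etc/fstab
    blkid /dev/sdb1 /dev/sdb2

    # 5) + 6) chroot into the clone and reinstall the boot loader
    mount --bind /dev  /mnt/new/dev
    mount --bind /proc /mnt/new/proc
    mount --bind /sys  /mnt/new/sys
    chroot /mnt/new grub-install /dev/sdb
}

# Only invoke on a live CD after verifying the device names:
# clone_to_ssd
```

The trailing slashes on the rsync paths matter: `/mnt/old/` copies the contents of the mount point rather than the directory itself.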
Here is my problem: when I boot the computer I see my GRUB options and it seems to load the kernel (I see all the messages about starting the different daemons/services),
then I go on to a screen with just a small spinning circle in the upper-left corner of the screen. The circle will spin for about 10 minutes and then the screen goes black. If I press the power button I see all the normal shutdown messages and it powers off gracefully.
So far I haven't noticed anything obvious in /var/log/messages or dmesg.
What am I missing?
Edit: digging into boot.log has shown me that HAL failed to start for some reason. I will keep digging there.
Advice is still appreciated.
Last edited by ThatGuywiththeComputer; 04-22-2016 at 11:01 AM.
Reason: new information
If the system isn't the currently running system, double-check /etc/fstab and grub.conf. The UUID is partition-specific, and is more reliable IMO than /dev/ device names or labels.
# blkid /dev/sda2
(or whatever applies)
And for GRUB: vmlinuz root=UUID=########-####-####-####-############... versus root=/dev/????. When you create a new partition, it gets a different UUID. I have an early UEFI machine that only boots USB with DOS partitions, not the newer GPT.
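A quick consistency check along those lines could look like the following sketch. The device name is an example, and the grub.conf path assumes GRUB legacy on RHEL 6; the block is guarded so it does nothing if the example device is absent.

```shell
# Example device; substitute the real root partition.
ROOT_DEV=/dev/sda2

if [ -b "$ROOT_DEV" ]; then
    # The filesystem UUID as blkid reports it
    UUID=$(blkid -s UUID -o value "$ROOT_DEV")
    echo "root UUID: $UUID"

    # The same UUID should appear in /etc/fstab ...
    grep "UUID=$UUID" /etc/fstab || echo "missing from fstab"

    # ... and on the kernel line of grub.conf (GRUB legacy)
    grep "root=UUID=$UUID" /boot/grub/grub.conf || echo "missing from grub.conf"
fi
```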
I appear to have solved my own problem. but first a few things I learned:
- the small black circle I was seeing in the upper-left corner should have been the mouse cursor, but there was no movement or response from the keyboard, (I am assuming) because the HAL daemon was failing to start.
- when I added the emergency flag to the kernel options in GRUB I could get to a command line and interact with the local file system
- it is running GRUB version 0.97, which some sites indicate does not play well with ext4 partitions (I believe because of the 128- vs 256-byte inode size). For this reason I chose to start fresh with the clone, creating the partitions from within a Red Hat 6 installation disk.
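For reference, the emergency boot described above works by editing the kernel line at the GRUB legacy menu (press `e` on the entry) and appending the flag. A hypothetical grub.conf entry for a RHEL 6.7 system might look like this; the kernel version and the root UUID placeholder are illustrative, not taken from the poster's machine:

```
title Red Hat Enterprise Linux (2.6.32-573.el6.x86_64)
    root (hd0,0)
    kernel /vmlinuz-2.6.32-573.el6.x86_64 ro root=UUID=<root-uuid> emergency
    initrd /initramfs-2.6.32-573.el6.x86_64.img
```

The same edit can be made temporarily from the GRUB menu without touching grub.conf, which is safer for one-off troubleshooting.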
On my initial attempt I had used a CentOS 7 live CD as my working environment. After recreating the partitions from the Red Hat 6.5 disk, the partitions show as ext4 (version 1) in the disk utility.
I rsynced the data from both the boot and system partitions to the new solid-state drive
mounted the new partitions and updated device.map, grub.conf, and /etc/fstab
powered down and replaced the drive in the system
powered the machine on
disabled the RAID controller on the system
a full reboot brought me to a normal login prompt
added the discard option to /etc/fstab (because solid state)
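Since the inode size turned out to be the likely culprit, it may help to show how to check it. This is a sketch: the device name is an example, and the demo call is guarded so it only runs if that device actually exists.

```shell
# Report the inode size of an ext2/3/4 filesystem.
inode_size() {
    tune2fs -l "$1" | awk -F: '/^Inode size/ {gsub(/[ \t]/, "", $2); print $2}'
}

# Example device; guarded so this is a no-op if it does not exist.
if [ -b /dev/sda1 ]; then
    inode_size /dev/sda1
fi

# GRUB 0.97 reportedly has trouble reading ext4 with 256-byte inodes, so a
# /boot filesystem it must read could be recreated with 128-byte inodes.
# DESTRUCTIVE, example device only:
#   mke2fs -t ext4 -I 128 /dev/sda1
```

Creating the partitions from the Red Hat 6 installer, as done above, sidesteps the issue because that era's mke2fs defaults are what its GRUB expects.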
Things I do not know for sure:
- was it the inode size of the partitions causing the problem, or was it a transfer error?
- were there any issues downstream of the failed HAL after the first attempt?
- how the hell did I forget to pour a glass of whiskey when I finished?
i) the inode size, in all likelihood. GRUB legacy needed a patch to handle 256-byte inodes, but e2fsprogs also needed to be at an appropriate level.
ii) probably not
iii) a good Islay, I trust ...
Your cloning procedure looks fine as a general methodology.
I powered down the system to take care of some other issues and now the system will not boot again. The root file system seems to be mounting as read-only now. I have made no changes since my last successful boot, so I am perplexed.
Edit: curiouser and curiouser. After sorting through the boot log, it said there were errors in fstab on each line with partition information. I removed the discard option and was able to boot again. Not sure how I was able to get through two boot cycles with no problem only to be locked down later.
Should I be using any special options in fstab besides defaults? I'm not quite sure what just happened.
Last edited by ThatGuywiththeComputer; 04-26-2016 at 12:51 PM.
Given you are running GRUB classic and the old small inodes, I would doubt discard is supported. I would expect it to be flagged as an unknown option, though.
Is fsck running automatically at boot? Let's see your fstab and the relevant messages.
fsck is running automatically at boot on both the boot and root partitions. I have removed the discard option and added noatime; at this point I have put it through about 20 boot cycles with no new issues. It is probably for the best, since this SSD has been added to the blacklist for TRIM support (Samsung 850 EVO series). The drive may die faster, but I am comfortable with its current state. Thank you all for your help and insights.
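For reference, a minimal fstab along the lines described might look like the fragment below. The UUIDs are placeholders, not real values, and the fsck pass numbers follow the usual convention (1 for root, 2 for other filesystems):

```
# /etc/fstab on the cloned system; UUIDs are placeholders
UUID=aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa  /boot  ext4  defaults,noatime  1 2
UUID=bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb  /      ext4  defaults,noatime  1 1
```

On drives where TRIM is trusted, an occasional fstrim run from cron is the usual alternative to the discard mount option; with a blacklisted drive, leaving both off as done here is the conservative choice.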