Slackware 12 problem on reboot under VMware Workstation
I have been playing with Slackware 12 under VMware Workstation running on Windows XP. Initially there were no problems: setup loaded fine from the install ISO and completed without issues, and after entering the darkstar/root login and password I was able to boot Slackware from the LILO boot menu. After this I rebooted a number of times and it started up without any problems.
1) The main problem is with Slackware, not VMware. What is the proper procedure for rebooting Slackware? I have logged off, thinking that I had unmounted the drives, and then powered down the system, only to find on the next boot that "hda1 was not unmounted successfully" and the partition is then scanned for errors.
This happened about three times, until the system got stuck in a loop: it scans hda1 "for file system errors" and then reboots straight back into the same scan without repairing anything. I get a prompt asking if I want to log in as root and manually fix/scan/alter the hda1 partition.
Can someone tell me what I need to do to repair my hda1 partition and what I have done to cause it damage?
If anyone can respond, or tell me what further information they need to help sort the problem it would be greatly appreciated.
I don't know what might be needed to repair the partition, other than just wiping the whole setup and beginning again. Is there an easier way to make changes or run scans on the boot partition (hda1)?
From the sounds of it, you are logging out of KDE and then powering off the machine. If so, you need to do a proper shutdown, which unmounts the filesystems and then powers off the machine itself. So if that is the case, once you shut down KDE and are back at the console prompt, as root, just run halt (or shutdown -h now).
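For reference, a quick sketch of the usual clean shutdown and reboot commands on a Slackware-style system (these are standard sysvinit commands, but check man shutdown on your own system to confirm the options):

```shell
# Power off cleanly: syncs disks and unmounts filesystems before halting
shutdown -h now    # or simply: halt

# Reboot cleanly instead of power-cycling the VM
shutdown -r now    # or simply: reboot
```

Powering off the VM from VMware's controls without doing this first is the equivalent of pulling the plug, which is exactly what leaves hda1 marked as not cleanly unmounted.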
OK, this gives me the answer to one part of my problem. Thanks.
I now have a Slackware installation that boots to the message "hda1 contains a filesystem with errors. check forced." and then prompts me to fix the partition manually. Any suggestions?
"an error occurred during the filesystem check.
you will now be given a chance to log into the root
system in single user mode to fix the problem.
if you are using the ext2 file system, running e2fsck -v -y may help."
I am then asked for the root password...
The partition is ext3. I am going to read up a bit on approaches to checking and fixing this partition; any help would be appreciated. Is it really possible for a partition to be damaged by a single shutdown without unmounting?
OK, I seem to have solved my own problem. All I needed to do was run fsck manually after it had run itself in forced-check mode. I didn't really know what fsck was doing, but apparently during boot it checks the drives in read-only mode, and when you run it manually it will actually make the fixes. After three reboots I haven't come across the same problem. Can anyone give me an idea of how often fsck will pick up errors, and what types of errors are serious? When I ran fsck manually it found about 10 problems and prompted me to fix them; no problems after that. Is it usual for a newly created filesystem (on a new disk) to have these errors?
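If anyone wants to see what fsck actually does without risking a real partition, here is a safe experiment (assuming e2fsprogs is installed; the /tmp path and image size are just examples): build a tiny ext2 filesystem inside an ordinary file and check it, no root or mounting needed.

```shell
# Create an 8 MB file and format it as ext2 (-F: operate on a plain file,
# -q: quiet). Nothing is mounted, so this is completely harmless.
dd if=/dev/zero of=/tmp/demo.img bs=1M count=8 2>/dev/null
mke2fs -F -q /tmp/demo.img

# Force a full check (-f) and auto-answer "yes" to any fix prompts (-y),
# just as the boot message suggests with "e2fsck -v -y".
e2fsck -f -y /tmp/demo.img
echo "e2fsck exit status: $?"
```

Exit status 0 means the filesystem is clean, 1 means errors were found and corrected; anything higher means fsck needs attention. On a real partition, the same rule applies that the boot prompt enforces: run e2fsck only while the filesystem is unmounted, which is why you are dropped to a single-user root shell first.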
Perhaps there are specific problems because I am using VMware to create a virtual disk and then set up my partition inside it...
Thanks for your post. Using halt seems to stop fsck from automatically forcing checks on the partition.