Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's, this is the place!
If you boot a live disc, your working root partition is a ramdisk created for that purpose. It may look like your normal root partition, because all root directories have the same content, but it is not. Consequently the /etc directory that you see is not your normal /etc directory but a temporary one created for this boot, and similarly with /etc/fstab. What you need to do is examine /dev/sda directly.
Code:
fdisk -l /dev/sda
will give you a list of the partitions on it. You can then mount them by hand, one by one, on /mnt and examine them.
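For example, a minimal sketch of that process (the partition numbers here are illustrative; use whatever fdisk actually reported):
Code:
mount -o ro /dev/sda1 /mnt   # mount read-only, just to look around
cat /mnt/etc/fstab           # if this is the real root, this is the real fstab
umount /mnt                  # unmount before trying the next one
mount -o ro /dev/sda2 /mnt
Mounting read-only is a sensible precaution while you are only inspecting the contents.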
Quote:
Originally Posted by kevinbenko
*: forgive me, I am on a "live disk"
*: I tried to determine which drive was bad/needed replacing
*: When I tried to find it, I discovered that the computer wouldn't boot
*{darn}!!!!!
I can boot into the / directory ONLY
I edited /etc/fstab to remove every drive except / (duh) and /home... with no success.
Bad disk? What has happened to make you suspect that?
If I understand your plight, you can boot off the live DVD and see your data on the hard disks -- a real good sign, BTW, and makes one wonder if there really was a bad disk -- but cannot boot from the hard disk itself? Or is the boot from the hard disk the situation where the other filesystems are not mounting? (Booting from the live DVD is not going to mount the filesystems on the hard disks. How would it know about them?)
While you're booted using the live DVD, have you run fsck for the hard disk partitions? Including the "/" partition?
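For example, something along these lines, run as root from the live environment with the partitions unmounted (partition names assumed):
Code:
fsck -f /dev/sda2   # force a full check of the "/" partition
fsck -f /dev/sda3   # then the others, one at a time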
Did you remove or merely comment out the /etc/fstab entries for all non-/ filesystems? (I would probably never do the former but have opted for the latter during a migration of a previous fstab to fresh installs.)
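For the record, commenting out an entry just means putting a "#" in front of the line, e.g. (device and mount point invented for illustration):
Code:
# /dev/sdb1   /home   ext4   defaults   0   2
Mount processing at boot then ignores that line, but the original entry stays recoverable.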
Anyhoo, if the system is booting from the hard disk and only "/" is mounted, have you tried mounting the other filesystems by hand? If so, and if the mount commands threw error messages, can you capture any that are displayed? You can either write those down and re-type them here or capture the mount session using "script":
Code:
# script sdb2_mount.log
Script started, file is sdb2_mount.log
# mount /dev/sdb2 /mnt # for example -- no errors on this one
# umount /mnt
# exit
Script done, file is sdb2_mount.log
#
and then attach that log file or paste the contents into "code" tags. Note: you'll need write access to wherever the script log file is going to be written. If you've booted from the live DVD, you should be able to mount a thumb drive to hold the script log file(s).
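For instance, assuming the thumb drive shows up as /dev/sdc1 (check with fdisk -l or lsblk), and using a mount point other than /mnt so it does not collide with the mounts you are testing:
Code:
mkdir /tmp/usb
mount /dev/sdc1 /tmp/usb
script /tmp/usb/sdb2_mount.log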
Do one thing at a time and make sure you record results. Making multiple modifications that result in a change in system behavior makes it very difficult to know which modification is responsible for the change in behavior.
After I boot, it takes much, much longer than usual; it does a check on my main HDD, eventually dropping into "scream and die, you are screwed" mode.
=============================
The errors are practically all the same. While my / partition (/dev/sda2) seems to be fine, the other three partitions are not:
time out on /dev/sda{1, 3, 4}
dependency failure for /dev/sda{1, 3, 4}
dependency failure for {the names of the other partitions}
=============================
Also, when I had the "scream and die" problem, I put the bad drive back in, and everything seemed to be the way it was normally. I did that BEFORE I had asked for help.
Sorry...
S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) support consists of the smartd daemon and the smartctl command-line tool, both installed via the smartmontools package.
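For example, to get the drive's overall health verdict and the full attribute/error report (device name assumed):
Code:
smartctl -H /dev/sda   # overall health self-assessment
smartctl -a /dev/sda   # all SMART information, including the error log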
So how many drives? Post the output of the command
Code:
lsblk
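Adding the -f flag also shows the filesystem type, label, and UUID of each partition, which is useful for ruling out filesystem-type surprises:
Code:
lsblk -f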
Quote:
Originally Posted by kevinbenko
I am a little concerned about the cause of the dependency failure. Are you using encryption? Did you switch from ext4 to btrfs, or do anything else that might cause that? That message hints at a missing kernel module needed to open the filesystem on those partitions.
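One quick way to see what is actually on those partitions before running any fsck (assuming the disk is /dev/sda):
Code:
blkid /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4
If any of them report TYPE="crypto_LUKS", or an unexpected filesystem, that would explain the dependency failures.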
If you have not done anything that could cause those errors, then you need to boot into the live USB and then run (as root)
Code:
fsck.ext4 /dev/sda2
fsck.ext4 /dev/sda3
fsck.ext4 /dev/sda4
one command for each of those three partitions.
When that completes then run
Code:
fsck -t vfat /dev/sda1
Once all of those have completed successfully and you are sure the filesystems are clean, it should just boot normally.
OK... I did something "questionable"
Since the whole thing started because I replaced a "dead" drive with a good drive, I did a netinstall (Debian) on the good drive, and that is now my system drive. I will be migrating my directory trees (just /home and /home/VDO) to my good drive.
In addition, I will be mounting the non-/home drives to /mnt.
Tomorrow.
Once I stop beating my head against the table.
I guess I will probably be buying 4 3TB drives in the near future.
{as an aside..... once upon a time, Seagate was pretty {darn} good. Not so good recently. Can anyone tell me what brand you suggest? Yeah... this question is probably for a different forum on LQ.... just asking.....}
OH, yeah... I will call this "Solved".... sort of.....