I think I have hardware problems that prevent Kubuntu 14.04 from running normally; see my post
http://www.linuxquestions.org/questi...ad-4175571478/
and want to check my disks, but I can't because fsck claims they are "in use". This happens under three different Linux systems booted from an optical drive: Knoppix, Ubuntu, and SystemRescueCD, all downloaded and burned recently. I'm writing this on the SystemRescueCD. The hardware is a Gigabyte motherboard and an AMD A6 CPU, both less than a year old, with 3 SATA disks. The disks are reported as "in use" before I ever make any move to mount anything. This is what I tried:
root@sysresccd /etc % e2fsck /dev/sdc1
e2fsck 1.42.13 (17-May-2015)
/dev/sdc1 is in use.
e2fsck: Cannot continue, aborting.
root@sysresccd /etc % mount
udev on /dev type devtmpfs (rw,nosuid,relatime,size=10240k,nr_inodes=882330,mode=755)
tmpfs on /livemnt/boot type tmpfs (rw,relatime,size=2097152k)
/dev/loop0 on /livemnt/squashfs type squashfs (ro,relatime)
tmpfs on /livemnt/memory type tmpfs (rw,relatime)
none on / type aufs (rw,noatime,si=144c061a3e47e95d)
tmpfs on /livemnt/tftpmem type tmpfs (rw,relatime,size=524288k)
none on /tftpboot type aufs (rw,relatime,si=144c06180df5c95d)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run type tmpfs (rw,nodev,relatime,size=708580k,mode=755)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
root@sysresccd /etc % lsof -n |grep sdc
(no output)
root@sysresccd /etc % ps aux |grep sdc
root 4078 0.0 0.0 3944 1292 pts/2 S+ 06:00 0:00 grep sdc
From research on the web I take it this means the kernel itself has my disks open - but why, for heaven's sake, in a rescue system? And how can I wrest control of my disks from it?
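In case it matters: my understanding is that when lsof and ps come up empty, the holder is usually visible in sysfs and device-mapper rather than in the process table. This is a sketch of what I intend to run next (device names are taken from my machine; I haven't captured the output here):

```shell
# Kernel-level holders of the whole disk show up as symlinks here
ls /sys/block/sdc/holders/

# Same check for the partition itself
ls /sys/block/sdc/sdc1/holders/

# List every device-mapper mapping the kernel has set up
dmsetup ls

# An "Open count" > 0 here would explain the "in use" message
dmsetup info
```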
One strange thing I notice in the output of fdisk -l: /dev/sdc appears twice under different names (excerpt):
Disk /dev/sdc: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x1c511c51
Device Boot Start End Sectors Size Id Type
/dev/sdc1 * 63 976773167 976773105 465.8G 83 Linux
Disk /dev/mapper/nvidia_geahifbj: 465.8 GiB, 500107860992 bytes, 976773166 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x1c511c51
Device Boot Start End Sectors Size Id Type
/dev/mapper/nvidia_geahifbj1 * 63 976773167 976773105 465.8G 83 Linux
where /dev/mapper/nvidia_geahifbj1 is a symlink to the block device /dev/dm-1.
But I can't fsck any of the 3 disks.
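From the nvidia_* mapper name I'm guessing there is leftover NVIDIA fakeRAID metadata on the disk that dmraid in each live system picks up and maps, which would keep the underlying disk busy. Would something along these lines be the right way to release the disks? (Just a sketch of what I've gathered so far, not yet run; the mapping name comes from my fdisk output above.)

```shell
# Deactivate all dmraid-discovered (fakeRAID) sets so the kernel
# releases the underlying SATA disks
dmraid -an

# Alternatively, tear down the mappings by hand: the partition
# mapping first, then the whole-disk mapping
dmsetup remove nvidia_geahifbj1
dmsetup remove nvidia_geahifbj

# With the mappings gone, fsck should no longer see the disk as in use
e2fsck -f /dev/sdc1
```

If this really is stale metadata from a previous RAID setup, I gather it could be erased permanently with "dmraid -rE /dev/sdc", but since that writes to the disk I'd rather ask before trying it.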
Thank you for any pointers.