Mounting Virtual Disks in a SLES11/SP3 KVM Hosted VM (shows busy after reboot)
Ok, so I'm working on a SLES11/SP3 server that is set up with KVM to host VMs. I've allocated two spare drives, as RAID-0 volumes, to one of those VMs. When I first stand up the VM, I'm able to run sfdisk/mkfs, update /etc/fstab, and mount the filesystems (after creating the mount points and such).
Then I reboot the VM (for something like setting the IP addresses), and when it comes up those filesystems won't mount; I get a busy message:

note_osdump:/etc # mount /data
mount: /dev/vdb1 already mounted or /data busy
note_osdump:/etc # fuser -c /data
note_osdump:/etc # mount
/dev/sda2 on / type ext3 (rw,acl,user_xattr,errors=panic)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/sda5 on /var type ext3 (rw,acl,user_xattr,errors=panic)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /var/lib/ntp/proc type proc (ro,nosuid,nodev)
note_osdump:/etc # cat /etc/fstab
LABEL=ROOT-BE1  /                  ext3     acl,user_xattr,errors=panic  1 1
LABEL=VAR-BE1   /var               ext3     acl,user_xattr,errors=panic  1 2
proc            /proc              proc     defaults                     0 0
sysfs           /sys               sysfs    noauto                       0 0
debugfs         /sys/kernel/debug  debugfs  noauto                       0 0
devpts          /dev/pts           devpts   mode=0620,gid=5              0 0
/dev/vda1       /repo              ext3     defaults,nofail              0 0
/dev/vdb1       /data              ext3     defaults,nofail              0 0
note_osdump:/etc # fdisk -l /dev/vda

Disk /dev/vda: 899.0 GB, 898999779328 bytes
16 heads, 63 sectors/track, 1741923 cylinders, total 1755858944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x1442b0d0

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1               1  1755858383   877929191+  83  Linux
note_osdump:/etc # fdisk -l /dev/vdb

Disk /dev/vdb: 899.0 GB, 898999779328 bytes
16 heads, 63 sectors/track, 1741923 cylinders, total 1755858944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf895f817

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1               1  1755858383   877929191+  83  Linux
note_osdump:/etc # ls -l /dev/vd*
brw-rw---- 1 root disk 253,  0 Feb 17 17:50 /dev/vda
brw-rw---- 1 root disk 253,  1 Feb 17 17:50 /dev/vda1
brw-rw---- 1 root disk 253, 16 Feb 17 17:50 /dev/vdb
brw-rw---- 1 root disk 253, 17 Feb 17 17:50 /dev/vdb1
note_osdump:/etc #

I can see the partitions, so we can access the devices. What I'm trying to figure out is what has taken over ownership of those devices. Any suggestions?
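[Editor's note: one way to spot this kind of "already mounted or busy" situation is to compare the kernel's mount table (/proc/mounts) against the userspace /etc/mtab file, since df and mount read the latter and the two can disagree. A minimal sketch, using hypothetical sample files in place of the real /proc/mounts and /etc/mtab:]

```shell
# Sample data standing in for the real files (contents are hypothetical):
# the kernel lists /data as mounted, but mtab does not.
cat > /tmp/proc_mounts.sample <<'EOF'
/dev/sda2 / ext3 rw 0 0
/dev/vdb1 /data ext3 rw 0 0
EOF
cat > /tmp/mtab.sample <<'EOF'
/dev/sda2 / ext3 rw 0 0
EOF

# Print mount points the kernel knows about but mtab is missing.
# First pass (NR==FNR) records mtab mount points; second pass prints
# any /proc/mounts entry whose mount point was not recorded.
awk 'NR==FNR { seen[$2]=1; next } !($2 in seen) { print $2 }' \
    /tmp/mtab.sample /tmp/proc_mounts.sample
# prints: /data

# On a real system, compare the actual files instead:
#   awk 'NR==FNR { seen[$2]=1; next } !($2 in seen) { print $2 }' /etc/mtab /proc/mounts
```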
We actually figured this one out.
Due to the mount options used in /etc/fstab, /etc/mtab wasn't being updated properly. When we looked at /proc/mounts, those filesystems were actually mounted; we just couldn't see them in the df output because they were not in /etc/mtab. We fixed that by changing the fstab entries to:

/dev/vda1  /repo  ext3  acl,user_xattr  1 2
/dev/vdb1  /data  ext3  acl,user_xattr  1 2

Problem solved. I'm not sure whether this information provides value to others. I find these posts often supply the one piece of data I need to solve a problem I'm working on, so I wanted to contribute. I'll try a few more to see if anybody gets anything out of this (if not, I won't waste anybody's time).
Thanks RedDog2 for providing your resolution.
Much appreciated! :) All the best, Tim