Access drive from failed Openfiler SAN
I have a disk that contains a VMFS partition that was presented to my ESXi server using Openfiler. The Openfiler server has died and I need to get the data back. I have the disk attached to a Linux workstation and can see the LUN, but I cannot mount it.
Here are the results from the scans:

root@fileserver /# pvscan
  PV /dev/sdb1   VG vpilun.003   lvm2 [465.75 GiB / 0    free]
  Total: 1 [465.75 GiB] / in use: 1 [465.75 GiB] / in no VG: 0 [0   ]
root@fileserver /# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "vpilun.003" using metadata type lvm2
root@fileserver /# lvscan
  ACTIVE            '/dev/vpilun.003/vpi003' [465.75 GiB] inherit
root@fileserver /# fdisk -l

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009b612

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        9469    76054528   83  Linux
/dev/sda2            9469        9730     2094081    5  Extended
/dev/sda5            9469        9730     2094080   82  Linux swap / Solaris

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0xe74ec4f4

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1       60802   488384512   fd  Linux raid autodetect
/dev/sdb2           62004      109419   380859392   83  Linux

Disk /dev/dm-0: 500.1 GB, 500095254528 bytes
255 heads, 63 sectors/track, 60799 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000ab83d

      Device Boot      Start         End      Blocks   Id  System
/dev/dm-0p1                 1       60799   488367903+  fb  VMware VMFS

And here is the error I get when I try to mount it:
root@fileserver /# mount -t ext3 /dev/vpilun.003/vpi003 /mnt/vpi/
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vpilun.003-vpi003,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
root@fileserver /# dmesg | tail
[   10.332957] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
[   10.333150] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[   20.892008] eth0: no IPv6 routers present
[ 2397.714789] VFS: Can't find ext3 filesystem on dev dm-0.
[ 2412.666004] VFS: Can't find ext3 filesystem on dev dm-0.
[ 2700.674085] VFS: Can't find ext3 filesystem on dev dm-0.
[ 3101.786046] VFS: Can't find ext3 filesystem on dev dm-0.
[ 3715.222044] EXT4-fs (dm-0): VFS: Can't find ext4 filesystem
[ 3735.046075] EXT4-fs (dm-0): VFS: Can't find ext4 filesystem
[ 4552.530438] VFS: Can't find ext3 filesystem on dev dm-0.
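Note that the mount is failing for a structural reason, not because the data is damaged: fdisk shows the logical volume carries its own partition table whose single partition is type fb (VMware VMFS), so `mount -t ext3` can never succeed on it. A minimal sketch of exposing that inner partition as a block device, assuming the same volume group and LV names seen in the scans above:

```shell
# Activate the volume group recovered from the Openfiler disk
vgchange -ay vpilun.003

# The LV /dev/vpilun.003/vpi003 contains its own partition table
# (the VMFS partition shown by fdisk as /dev/dm-0p1). kpartx creates
# a device-mapper node for it, typically:
#   /dev/mapper/vpilun.003-vpi003p1
kpartx -av /dev/vpilun.003/vpi003

# That node is VMFS, so it still cannot be mounted as ext3/ext4;
# it needs a VMFS-aware tool (see below).
```

The exact mapper name kpartx prints can vary by distribution, so check the output of `ls /dev/mapper` after running it.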
There was a script around that allowed Linux to mount VMFS volumes. It came with big warnings about writing to the device while VMware was using it. I believe I found it among the bits and pieces that shipped with VMware Server 1 or 2 for Linux. It didn't sound like production-quality code, but it should get you out of your corner.
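A more current alternative to that old script is the open-source vmfs-tools package, which provides a FUSE driver for read-only access to VMFS. A sketch, assuming a Debian/Ubuntu-style workstation and the mapper name kpartx would create for this LV (adjust to whatever actually appears under /dev/mapper):

```shell
# Install the VMFS userspace tools (package name on Debian/Ubuntu)
apt-get install vmfs-tools

# Mount the VMFS partition read-only via FUSE
mkdir -p /mnt/vpi
vmfs-fuse /dev/mapper/vpilun.003-vpi003p1 /mnt/vpi

# The VMDKs and other datastore files should now be visible
ls /mnt/vpi

# When finished, detach the FUSE mount
fusermount -u /mnt/vpi
```

Access is read-only, which is exactly what you want for recovery: copy the VMDK files off to safe storage rather than working on the dying disk.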