LinuxQuestions.org
Forums > Linux Forums > Linux - Virtualization and Cloud
Old 12-07-2012, 04:41 PM   #1
ciforbg
LQ Newbie
 
Registered: Sep 2007
Posts: 16

Rep: Reputation: 1
/dev/sd* devices change their names on each virtual machine reboot


Hello,

I have a qemu/KVM virtual machine used for Oracle, and I needed to add some emulated disk devices (four 2GB disks and four 1GB disks) to use as ASM disks. Then I discovered that the virtual machine changes the names of the disk drives on each reboot, which is a big problem for me.

For example, this is how my disk drives are named before I reboot:

Code:
[root@node01 ~]# fdisk -l |grep /dev/sd
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1   *           1          64      512000   83  Linux
/dev/sda2              64        2611    20458496   8e  Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
/dev/sdb1               2       20480    20970496   8e  Linux LVM
Disk /dev/sdc: 2147 MB, 2147483648 bytes
/dev/sdc1               1        1009     2095662   83  Linux
Disk /dev/sdd: 2147 MB, 2147483648 bytes
Disk /dev/dm-1 doesn't contain a valid partition table
/dev/sdd1               1        1009     2095662   83  Linux
Disk /dev/sde: 2147 MB, 2147483648 bytes
/dev/sde1               1        1009     2095662   83  Linux
Disk /dev/sdf: 2147 MB, 2147483648 bytes
/dev/sdf1               1        1009     2095662   83  Linux
Disk /dev/sdg: 1073 MB, 1073741824 bytes
/dev/sdg1               1        1011     1048376+  83  Linux
Disk /dev/sdh: 1073 MB, 1073741824 bytes
/dev/sdh1               1        1011     1048376+  83  Linux
Disk /dev/sdi: 1073 MB, 1073741824 bytes
/dev/sdi1               1        1011     1048376+  83  Linux
Disk /dev/sdj: 1073 MB, 1073741824 bytes
/dev/sdj1               1        1011     1048376+  83  Linux
Disk /dev/dm-2 doesn't contain a valid partition table
Disk /dev/dm-3 doesn't contain a valid partition table
Disk /dev/dm-4 doesn't contain a valid partition table
/dev/sda and /dev/sdb are physical volumes for two VGs available on the virtual machine;
/dev/sdc to /dev/sdj are the aforementioned four 2GB and four 1GB drives (each formatted as one primary partition) which I intend to use for ASM.

And here is after reboot:

Code:
[root@node01 ~]# fdisk -l |grep /dev/sd
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/sda: 2147 MB, 2147483648 bytes
/dev/sda1               1        1009     2095662   83  Linux
Disk /dev/sdb: 2147 MB, 2147483648 bytes
/dev/sdb1               1        1009     2095662   83  Linux
Disk /dev/sdi: 21.5 GB, 21474836480 bytes
/dev/sdi1   *           1          64      512000   83  Linux
/dev/sdi2              64        2611    20458496   8e  Linux LVM
Disk /dev/sdc: 2147 MB, 2147483648 bytes
Disk /dev/dm-1 doesn't contain a valid partition table
/dev/sdc1               1        1009     2095662   83  Linux
Disk /dev/sdg: 1073 MB, 1073741824 bytes
/dev/sdg1               1        1011     1048376+  83  Linux
Disk /dev/sde: 1073 MB, 1073741824 bytes
/dev/sde1               1        1011     1048376+  83  Linux
Disk /dev/sdj: 21.5 GB, 21474836480 bytes
/dev/sdj1               2       20480    20970496   8e  Linux LVM
Disk /dev/sdd: 2147 MB, 2147483648 bytes
/dev/sdd1               1        1009     2095662   83  Linux
Disk /dev/sdf: 1073 MB, 1073741824 bytes
/dev/sdf1               1        1011     1048376+  83  Linux
Disk /dev/sdh: 1073 MB, 1073741824 bytes
/dev/sdh1               1        1011     1048376+  83  Linux
Disk /dev/dm-2 doesn't contain a valid partition table
Disk /dev/dm-3 doesn't contain a valid partition table
Disk /dev/dm-4 doesn't contain a valid partition table
This is how I attached the newly created LVs from the virtualization host machine as block devices to the virtual machine:

Code:
[root@c4 ~]# lvcreate -L 2G -n virtdisks_node01_asm1 vg_c4
  Logical volume "virtdisks_node01_asm1" created
[root@c4 ~]# virsh # attach-disk node01 /dev/mapper/vg_c4-virtdisks_node01_asm1 sdc --persistent
Disk attached successfully

From the virtual machine dmesg log:

scsi 2:0:2:0: Direct-Access     QEMU     QEMU HARDDISK    0.15 PQ: 0 ANSI: 5
scsi target2:0:2: tagged command queuing enabled, command queue depth 16.
scsi target2:0:2: Beginning Domain Validation
scsi target2:0:2: Domain Validation skipping write tests
scsi target2:0:2: Ending Domain Validation
sd 2:0:2:0: Attached scsi generic sg3 type 0
sd 2:0:2:0: [sdc] 4194304 512-byte logical blocks: (2.14 GB/2.00 GiB)
sd 2:0:2:0: [sdc] Write Protect is off
sd 2:0:2:0: [sdc] Mode Sense: 1f 00 00 08
sd 2:0:2:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
 sdc:
sd 2:0:2:0: [sdc] Attached SCSI disk



.... I did this for all of the LVs.
In the qemu guest's XML file, the disks are defined with the /dev/sd* target names I want:

Code:
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/vg_c4-virtdisks_node01_asm1'/>
      <target dev='sdc' bus='scsi'/>
      <address type='drive' controller='0' bus='0' unit='2'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/vg_c4-virtdisks_node01_asm2'/>
      <target dev='sdd' bus='scsi'/>
      <address type='drive' controller='0' bus='0' unit='3'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/vg_c4-virtdisks_node01_asm3'/>
      <target dev='sde' bus='scsi'/>
      <address type='drive' controller='0' bus='0' unit='4'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/vg_c4-virtdisks_node01_asm4'/>
      <target dev='sdf' bus='scsi'/>
      <address type='drive' controller='0' bus='0' unit='5'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/vg_c4-virtdisks_node01_asm5'/>
      <target dev='sdg' bus='scsi'/>
      <address type='drive' controller='0' bus='0' unit='6'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/vg_c4-virtdisks_node01_asm6'/>
      <target dev='sdh' bus='scsi'/>
      <address type='drive' controller='1' bus='0' unit='0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/vg_c4-virtdisks_node01_asm7'/>
      <target dev='sdi' bus='scsi'/>
      <address type='drive' controller='1' bus='0' unit='1'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/vg_c4-virtdisks_node01_asm8'/>
      <target dev='sdj' bus='scsi'/>
      <address type='drive' controller='1' bus='0' unit='2'/>
    </disk>
But still, when I reboot the virtual machine, the device names get shuffled.

Host machine: Fedora 16 x86_64
Virtual machine: Oracle Linux 6 x86_64

Can someone please tell me why and advise how to overcome it?

Thanks a lot in advance!

Any comments are most welcome.

Last edited by ciforbg; 12-07-2012 at 05:00 PM.
 
Old 12-08-2012, 10:10 AM   #2
dyasny
Member
 
Registered: Dec 2007
Location: Canada
Distribution: RHEL,Fedora
Posts: 827

Rep: Reputation: 91
Do NOT use the SCSI emulation with qemu. It's buggy, unsupported, and not actively developed. Use virtio_blk instead.
 
Old 12-08-2012, 10:22 AM   #3
ciforbg
LQ Newbie
 
Registered: Sep 2007
Posts: 16

Original Poster
Rep: Reputation: 1
Quote:
Originally Posted by dyasny View Post
Do NOT use the SCSI emulation with qemu. It's buggy, unsupported, and not actively developed. Use virtio_blk instead.
Thanks for that hint.
So you're telling me that this naming mix-up comes from using SCSI emulation?

Also, can you tell me how to safely convert from SCSI emulation to virtio_blk?

I can't see any virtio option that I can choose from the attach-disk command:

Code:
 attach-disk domain-id source target [--driver driver] [--subdriver subdriver] [--cache cache] [--type type] [--mode mode] [--persistent] [--sourcetype soucetype] [--serial
       serial] [--shareable] [--address address]
           Attach a new disk device to the domain.  source and target are paths for the files and devices.  driver can be file, tap or phy for the Xen hypervisor depending on the
           kind of access; or qemu for the QEMU emulator.  type can indicate cdrom or floppy as alternative to the disk default, although this use only replaces the media within
           the existing virtual cdrom or floppy device; consider using update-device for this usage instead.  mode can specify the two specific mode readonly or shareable.
           persistent indicates the changes will affect the next boot of the domain.  sourcetype can indicate the type of source (block|file) cache can be one of "default",
           "none", "writethrough", "writeback", or "directsync".  serial is the serial of disk device. shareable indicates the disk device is shareable between domains.  address
           is the address of disk device in the form of pci:domain.bus.slot.function, scsi:controller.bus.unit or ide:controller.bus.unit.
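If libvirt infers the bus from the target name prefix (vdX suggesting virtio), maybe something like this would work? An untested sketch - vdc here is just an example target name:

```
# Hypothetical sketch: attach the same LV with a virtio-style target name.
# Whether the bus is actually inferred this way should be verified on your
# libvirt version before relying on it.
virsh attach-disk node01 /dev/mapper/vg_c4-virtdisks_node01_asm1 vdc --persistent
```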

Last edited by ciforbg; 12-08-2012 at 10:28 AM.
 
Old 12-08-2012, 10:50 AM   #4
ciforbg
LQ Newbie
 
Registered: Sep 2007
Posts: 16

Original Poster
Rep: Reputation: 1
Following dyasny's advice, I've changed all the block-device entries in the guest domain's XML file

from:

Code:
<disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/vg_c4-virtdisks_node01_asm1'/>
      <target dev='sdc' bus='scsi'/>
      <address type='drive' controller='0' bus='0' unit='2'/>
    </disk>
to:

Code:
<disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/mapper/vg_c4-virtdisks_node01_asm1'/>
      <target dev='vdc' bus='virtio'/>
    </disk>
rebooted the guest and here are the results now:

Code:
[root@node01 ~]# fdisk -l |grep /dev/
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/sda: 21.5 GB, 21474836480 bytes
/dev/sda1   *           1          64      512000   83  Linux
/dev/sda2              64        2611    20458496   8e  Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
/dev/sdb1               2       20480    20970496   8e  Linux LVM
Disk /dev/vda: 2147 MB, 2147483648 bytes
/dev/vda1               1        1009     2095662   83  Linux
Disk /dev/vdb: 2147 MB, 2147483648 bytes
/dev/vdb1               1        1009     2095662   83  Linux
Disk /dev/vdc: 2147 MB, 2147483648 bytes
/dev/vdc1               1        1009     2095662   83  Linux
Disk /dev/vdd: 2147 MB, 2147483648 bytes
/dev/vdd1               1        1009     2095662   83  Linux
Disk /dev/vde: 1073 MB, 1073741824 bytes
Disk /dev/dm-1 doesn't contain a valid partition table
/dev/vde1               1        1011     1048376+  83  Linux
Disk /dev/vdf: 1073 MB, 1073741824 bytes
/dev/vdf1               1        1011     1048376+  83  Linux
Disk /dev/vdg: 1073 MB, 1073741824 bytes
/dev/vdg1               1        1011     1048376+  83  Linux
Disk /dev/vdh: 1073 MB, 1073741824 bytes
/dev/vdh1               1        1011     1048376+  83  Linux
Disk /dev/dm-0: 4227 MB, 4227858432 bytes
Disk /dev/dm-2 doesn't contain a valid partition table
Disk /dev/dm-3 doesn't contain a valid partition table
Disk /dev/dm-4 doesn't contain a valid partition table
Disk /dev/dm-5 doesn't contain a valid partition table
Disk /dev/dm-1: 16.7 GB, 16710107136 bytes
Disk /dev/dm-2: 5368 MB, 5368709120 bytes
Disk /dev/dm-3: 5368 MB, 5368709120 bytes
Disk /dev/dm-4: 5368 MB, 5368709120 bytes
Disk /dev/dm-5: 5268 MB, 5268045824 bytes
[root@node01 ~]#
It seems to me that the 'target dev=' attribute in the <disk> element doesn't work, because the disks were simply named sequentially from vda to vdh, which is not the order they are set up in the XML config file. The same applies to /dev/sda and /dev/sdb: according to the XML they should have been named hda and hdb.

And if I decide to add, edit, or drop disks some other time, should I expect the device names to change again?

Can this be made persistent somehow?
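One approach I've seen suggested (a sketch, not verified here; the serial value "asm1" is just an example I made up) is to give each virtio disk a <serial> element in the domain XML, so the guest's udev creates a stable /dev/disk/by-id symlink regardless of probe order:

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/mapper/vg_c4-virtdisks_node01_asm1'/>
  <target dev='vdc' bus='virtio'/>
  <serial>asm1</serial>
</disk>
<!-- In the guest this disk should then also appear as
     /dev/disk/by-id/virtio-asm1, whatever /dev/vd* name it gets -->
```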
 
Old 12-08-2012, 01:24 PM   #5
dyasny
Member
 
Registered: Dec 2007
Location: Canada
Distribution: RHEL,Fedora
Posts: 827

Rep: Reputation: 91
Right, you're on track now. All that's left is to set the disks up with UUIDs in the guest and use those UUIDs in fstab:

http://www.cyberciti.biz/faq/linux-f...-update-fstab/

The guide is for Ubuntu, but it's quite generic.
 
Old 12-09-2012, 04:32 AM   #6
ciforbg
LQ Newbie
 
Registered: Sep 2007
Posts: 16

Original Poster
Rep: Reputation: 1
Quote:
Originally Posted by dyasny View Post
right, you're on track now. All you have left is to set the disks up with UUIDs in the guest, and use those UUIDs in fstab

http://www.cyberciti.biz/faq/linux-f...-update-fstab/

the guide is for ubuntu, but it's quite generic
Hi dyasny,

Thanks for that info.
Unfortunately, I don't think this will work for the Oracle ASM volumes, because they aren't ext* filesystems; they are ASM-formatted:

Code:
[root@node01 ~]# blkid |grep /dev/vd
/dev/vda1: LABEL="PROD01_2G_D1" TYPE="oracleasm" 
/dev/vdb1: LABEL="PROD01_2G_D2" TYPE="oracleasm" 
/dev/vdc1: LABEL="PROD01_2G_D3" TYPE="oracleasm" 
/dev/vdd1: LABEL="PROD01_2G_D4" TYPE="oracleasm" 
/dev/vde1: LABEL="PROD01_1G_D1" TYPE="oracleasm" 
/dev/vdf1: LABEL="PROD01_1G_D2" TYPE="oracleasm" 
/dev/vdg1: LABEL="PROD01_1G_D3" TYPE="oracleasm" 
/dev/vdh1: LABEL="PROD01_1G_D4" TYPE="oracleasm"
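As a side note, that blkid output can at least be turned into a label-to-device map in a script, so nothing else has to hard-code the unstable /dev/vd* names. A sketch - the sample input is pasted from the output above rather than read from a live system (on a real system you'd pipe in blkid itself):

```shell
# Sketch: derive a LABEL -> current-device map from blkid-style output.
blkid_output='/dev/vda1: LABEL="PROD01_2G_D1" TYPE="oracleasm"
/dev/vdb1: LABEL="PROD01_2G_D2" TYPE="oracleasm"'

# Split on ':' and '"' so $1 is the device and $3 is the label.
printf '%s\n' "$blkid_output" | awk -F'[:"]' '/oracleasm/ {print $3 " -> " $1}'
# Output:
#   PROD01_2G_D1 -> /dev/vda1
#   PROD01_2G_D2 -> /dev/vdb1
```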
However, I do have UUIDs for the rest of my ext* filesystems:

Code:
/dev/sda1: UUID="0f5c5a32-b644-4272-b657-1201e5ffd543" SEC_TYPE="ext2" TYPE="ext3" */boot partition
/dev/sda2: UUID="xjRVaA-E5QQ-ORsS-0Cuq-xPNa-e7t2-bmUUO3" TYPE="LVM2_member" 
/dev/sdb1: UUID="SkUpW1-dZ3D-IdR1-6103-G32W-Tsh2-cYoor7" TYPE="LVM2_member" 
-----------------
/dev/mapper/vg_node01-lv_swap: UUID="ed887e17-7b32-4830-901f-49735a109b5f" TYPE="swap" 
/dev/mapper/vg_node01-lv_root: UUID="537b26bc-c61a-4dd4-af7d-f904f5f3ba57" TYPE="ext4" 
/dev/mapper/vg_ora-ora_home1: UUID="7a972044-a1ce-4bcb-8218-c2f4f4d02ee2" SEC_TYPE="ext2" TYPE="ext3" 
/dev/mapper/vg_ora-ora_home2: UUID="9e4435fb-6450-4125-ab90-d920a67ede54" TYPE="ext3" 
/dev/mapper/vg_ora-ora_data1: UUID="b3d11a50-1165-4215-b1fa-e8adee4a5950" SEC_TYPE="ext2" TYPE="ext3" 
/dev/mapper/vg_ora-orafra: UUID="9299810b-06b8-492c-a1f5-38ee0d3e375a" SEC_TYPE="ext2" TYPE="ext3"
The contents of the /etc/fstab file are as follows:

Code:
/dev/mapper/vg_node01-lv_root /                       ext4    defaults        1 1
UUID=0f5c5a32-b644-4272-b657-1201e5ffd543 /boot                   ext3    defaults        1 2
/dev/mapper/vg_ora-ora_data1 /ora_data               ext3    defaults        1 2
/dev/mapper/vg_ora-orafra /ora_fra                ext3    defaults        1 2
/dev/mapper/vg_ora-ora_home1 /ora_soft               ext3    defaults        1 2
/dev/mapper/vg_ora-ora_home2 /ora_soft/app/grid/product/11.2.0               ext3    defaults        1 2
/dev/mapper/vg_node01-lv_swap swap                    swap    defaults        0 0
So I guess that upon boot, no matter what happens (KVM changing the device naming, etc.), the /boot partition is recognized regardless of which /dev/* name the kernel assigns to the block device - is that correct?

However, if I edit my fstab to refer to these partitions by UUID instead of by their /dev/mapper/* names, what will that achieve as far as the device names are concerned?

If you scroll up a bit you will see that no matter what names the kernel gives the block devices, the system boots up properly, identifying the disks, LVM partitions, VGs, LVs, etc. - but I guess that has nothing to do with the device names.

So my question is still: how do I control the names of the devices the kernel detects at boot in the guest virtual machine?
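For example, would a custom udev rule like this pin stable symlinks? A sketch only - the file name is hypothetical, and it assumes udev has already imported ID_FS_TYPE/ID_FS_LABEL for partitions, as the stock persistent-storage rules on EL6 do:

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules  (hypothetical file name)
# Create /dev/oracleasm/<label> symlinks that survive reboots and renames,
# and give the oracle user ownership of the ASM partitions.
KERNEL=="vd?[0-9]", ENV{ID_FS_TYPE}=="oracleasm", SYMLINK+="oracleasm/$env{ID_FS_LABEL}", OWNER="oracle", GROUP="dba", MODE="0660"
```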
 
Old 12-09-2012, 04:41 AM   #7
descendant_command
Member
 
Registered: Mar 2012
Posts: 757

Rep: Reputation: 159Reputation: 159
I'm not familiar with ASM, but you should be able to use the labels in fstab instead of UUIDs, i.e.:
LABEL=PROD01_2G_D1 /mountpoint fstype options
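Also, since udev already creates /dev/disk/by-label symlinks from those same labels, you could point ASM's discovery string at the stable paths instead of /dev/vd*. A sketch - check the exact diskstring syntax against Oracle's docs:

```
ls -l /dev/disk/by-label/
# e.g.  PROD01_2G_D1 -> ../../vda1
# then in ASM (hypothetical example):
#   ALTER SYSTEM SET asm_diskstring='/dev/disk/by-label/PROD01_*';
```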
 
Old 01-03-2013, 01:44 AM   #8
trijit
Member
 
Registered: Sep 2010
Location: Kolkata
Distribution: Ubuntu
Posts: 34

Rep: Reputation: 3
Quote:
Originally Posted by ciforbg View Post
Hello,

I have a virtual machine (using qemu/KVM) used for Oracle and I needed to add some emulated disk devices... [full text of post #1, quoted in the original; snipped]
Are you using multipath or something similar? In that case you can make the labels persistent.
 
  

