I tried the procedure below, but it was not successful ....
1. Enable the Intel VT-d extensions
The Intel VT-d extensions provide hardware support for directly assigning physical devices to guests.
The VT-d extensions are required for PCI passthrough with Red Hat Enterprise Linux. The
extensions must be enabled in the BIOS; some system manufacturers disable them by default.
These extensions go by different names in the BIOS, varying from manufacturer to
manufacturer, so consult your system manufacturer's documentation.
2. Activate Intel VT-d in the kernel
Activate Intel VT-d in the kernel by appending the intel_iommu=on parameter to the kernel line
in the /boot/grub/grub.conf file.
The example below is a modified grub.conf file with Intel VT-d activated.
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-36.x86-64)
root (hd0,0)
kernel /vmlinuz-2.6.32-36.x86-64 ro root=/dev/VolGroup00/LogVol00 rhgb
quiet intel_iommu=on
initrd /initrd-2.6.32-36.x86-64.img
3. Ready to use
Reboot the system to enable the changes. Your system is now PCI pass-through capable.
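After the reboot, it can help to confirm that the intel_iommu=on parameter actually made it onto the running kernel's command line before assuming passthrough will work. The helper below is a small sketch; the iommu_enabled function name is my own, not part of any standard tool:

```shell
# Sketch: check whether a given kernel command line enables an IOMMU.
# iommu_enabled is a hypothetical helper, not a standard command.
iommu_enabled() {
    case " $1 " in
        *" intel_iommu=on "*|*" amd_iommu=on "*) return 0 ;;
        *) return 1 ;;
    esac
}

# On a live system you would pass the real command line:
#   iommu_enabled "$(cat /proc/cmdline)" && echo "IOMMU parameter present"
```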
Preparing an AMD system for PCI pass-through
Enable AMD IOMMU extensions
The AMD IOMMU extensions are required for PCI pass-through with Red Hat Enterprise Linux.
The extensions must be enabled in the BIOS. Some system manufacturers disable these
extensions by default.
AMD systems only require that the IOMMU is enabled in the BIOS. The system is ready for PCI
passthrough once the IOMMU is enabled.
Adding a PCI device with virsh
Follow the steps below:
1. Identify the device
Identify the PCI device designated for passthrough to the guest. The virsh nodedev-list
command lists all devices attached to the system. The --tree option is useful for identifying
devices attached to the PCI device (for example, disk controllers and USB controllers).
# virsh nodedev-list --tree
For a list of only PCI devices, run the following command:
# virsh nodedev-list | grep pci
In the output from this command, each PCI device is identified by a string, as shown in the following example:
[root@speedRHEV ~]# virsh nodedev-list | grep pci
pci_0000_00_00_0
pci_0000_00_01_0
pci_0000_00_03_0
pci_0000_00_04_0
pci_0000_00_05_0
pci_0000_00_06_0
pci_0000_00_07_0
pci_0000_00_09_0
pci_0000_00_14_0
pci_0000_00_14_1
pci_0000_00_14_2
pci_0000_00_1a_0
pci_0000_00_1a_1
pci_0000_00_1a_7
pci_0000_00_1d_0
pci_0000_00_1d_1
pci_0000_00_1d_7
pci_0000_00_1e_0
pci_0000_00_1f_0
pci_0000_00_1f_2
pci_0000_01_00_0
pci_0000_01_00_1
pci_0000_02_00_0
pci_0000_02_00_1
pci_0000_03_00_0
pci_0000_04_00_0
pci_0000_05_00_0
pci_0000_06_02_0
pci_0000_06_04_0
pci_0000_07_00_0
pci_0000_07_00_1
pci_0000_08_00_0
pci_0000_08_00_1
pci_0000_09_00_0
pci_0000_0a_00_0
pci_0000_0b_03_0
We can cross-check which PCI device is which using the output of "lspci -vv".
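The virsh nodedev names map directly onto the domain:bus:slot.function addresses that lspci prints, so pci_0000_0a_00_0 corresponds to 0000:0a:00.0. A small conversion sketch (the nodedev_to_bdf function name is illustrative, not a real command):

```shell
# Convert a virsh nodedev name (e.g. pci_0000_0a_00_0) into the
# domain:bus:slot.function form used by lspci (0000:0a:00.0).
nodedev_to_bdf() {
    echo "$1" | sed -e 's/^pci_//' -e 's/_/:/' -e 's/_/:/' -e 's/_/./'
}

# Example use on the host:
#   lspci -vv -s "$(nodedev_to_bdf pci_0000_0a_00_0)"
```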
2. Information on the domain, bus, slot, and function is available from the output of the virsh nodedev-dumpxml command:
[root@speedRHEV ~]# virsh nodedev-dumpxml pci_0000_0a_00_0
<device>
<name>pci_0000_0a_00_0</name>
<parent>pci_0000_00_09_0</parent>
<driver>
<name>mpt2sas</name>
</driver>
<capability type='pci'>
<domain>0</domain>
<bus>10</bus>
<slot>0</slot>
<function>0</function>
<product id='0x0072'>SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]</product>
<vendor id='0x1000'>LSI Logic / Symbios Logic</vendor>
</capability>
</device>
The device above is the HBA card that we need to assign directly to the virtual host.
3. Detach the device from the system. Attached devices cannot be used and may cause various errors if connected to a guest without being detached first.
# virsh nodedev-dettach pci_0000_0a_00_0
Device pci_0000_0a_00_0 detached
4. Convert the slot and function values from decimal to hexadecimal to get the PCI bus
address. Prepend "0x" to each converted value to mark it as a hexadecimal number; these values are used in the next step.
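For the HBA above (domain 0, bus 10, slot 0, function 0 from the nodedev-dumpxml output), the conversion can be done with printf; a minimal sketch:

```shell
# Convert the decimal values from nodedev-dumpxml into the hex form
# expected in the <address> element. Values below are taken from the
# example HBA (domain 0, bus 10, slot 0, function 0).
domain=0; bus=10; slot=0; function=0
printf "domain='0x%04x' bus='0x%02x' slot='0x%02x' function='0x%x'\n" \
    "$domain" "$bus" "$slot" "$function"
# → domain='0x0000' bus='0x0a' slot='0x00' function='0x0'
```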
5. Run virsh edit (or virsh attach-device) and add a device entry in the <devices> section to attach the PCI device to the guest:
# virsh edit speedbkp
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
</source>
</hostdev>
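As an alternative to virsh edit, the same <hostdev> fragment can be written to a file and attached with virsh attach-device. A sketch using the HBA's converted address (the /tmp/hostdev.xml path is arbitrary):

```shell
# Write the hostdev fragment for the example HBA (0000:0a:00.0) to a file.
cat > /tmp/hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF

# Then attach it on the host (requires libvirt; not run here):
#   virsh attach-device speedbkp /tmp/hostdev.xml
```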
6. Once the guest is configured to use the PCI address, the host system must stop using the device. The mpt2sas driver is loaded by default for this HBA controller; the virsh nodedev-dettach command run in step 3 unbinds the device from that host driver.
7. Set an SELinux boolean to allow the management of the PCI device from the guest:
$ setsebool -P virt_manage_sysfs 1
8. Start the guest system:
# virsh start speedbkp
The PCI device should now be successfully attached to the guest and accessible to the guest
operating system.
Once the HBA card is in use by the backup-server guest, the same device can no longer be used on the virtual host server itself.
We can also add the hardware through the graphical virtual console manager, following the same procedure as discussed above.