I tried the procedure below but was not successful ....
1. Enable the Intel VT-d extensions
The Intel VT-d extensions provide hardware support for directly assigning physical devices to guests.
The VT-d extensions are required for PCI passthrough with Red Hat Enterprise Linux. The
extensions must be enabled in the BIOS. Some system manufacturers disable these extensions by default.
These extensions go by different names in the BIOS depending on the manufacturer. Consult your
system manufacturer's documentation.
2. Activate Intel VT-d in the kernel
Activate Intel VT-d in the kernel by appending the intel_iommu=on parameter to the kernel line
in the /boot/grub/grub.conf file.
The example below is a modified grub.conf file with Intel VT-d activated.
title Red Hat Enterprise Linux Server (2.6.32-36.x86-64)
kernel /vmlinuz-2.6.32-36.x86-64 ro root=/dev/VolGroup00/LogVol00 rhgb intel_iommu=on
3. Ready to use
Reboot the system to enable the changes. Your system is now PCI pass-through capable.
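After rebooting, you can sanity-check that the parameter took effect and that the IOMMU initialized (exact kernel log messages vary by kernel version):

```shell
# Confirm intel_iommu=on is on the running kernel's command line
grep -o 'intel_iommu=on' /proc/cmdline

# Look for DMAR/IOMMU initialization messages in the kernel log
dmesg | grep -i -e DMAR -e IOMMU
```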
Preparing an AMD system for PCI pass-through
Enable AMD IOMMU extensions
The AMD IOMMU extensions are required for PCI pass-through with Red Hat Enterprise Linux.
The extensions must be enabled in the BIOS. Some system manufacturers disable these
extensions by default.
AMD systems only require that the IOMMU is enabled in the BIOS. The system is ready for PCI
passthrough once the IOMMU is enabled.
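On AMD systems you can likewise confirm from the kernel log that the IOMMU (reported as AMD-Vi) came up after enabling it in the BIOS:

```shell
# AMD-Vi is the kernel's name for the AMD IOMMU; initialization
# messages appear in the kernel log when the BIOS has it enabled
dmesg | grep -i AMD-Vi
```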
Adding a PCI device with virsh
Follow the steps below:
1. Identify the device
Identify the PCI device designated for passthrough to the guest. The virsh nodedev-list
command lists all devices attached to the system. The --tree option is useful for identifying
devices attached to the PCI device (for example, disk controllers and USB controllers).
# virsh nodedev-list --tree
For a list of only PCI devices, run the following command:
# virsh nodedev-list | grep pci
In the output from this command, each PCI device is identified by a string, as shown in the
following example:
[root@speedRHEV ~]# virsh nodedev-list | grep pci
The PCI device can also be identified from the output of lspci -vv.
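The lspci address maps directly onto the virsh device name: bus 0a, slot 00, function 0 becomes pci_0000_0a_00_0. For example (the "SAS" search keyword here is specific to this HBA):

```shell
# Find the HBA's PCI address (bus:slot.function) in lspci output,
# e.g. "0a:00.0 Serial Attached SCSI controller: LSI ... SAS2008"
lspci | grep -i SAS

# The matching virsh node-device name prepends the domain (0000)
# and replaces ':' and '.' with '_': pci_0000_0a_00_0
virsh nodedev-list | grep pci_0000_0a
```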
2. Information on the domain, bus, slot, and function is available from the output of the virsh nodedev-dumpxml command:
[root@speedRHEV ~]# virsh nodedev-dumpxml pci_0000_0a_00_0
<product id='0x0072'>SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]</product>
<vendor id='0x1000'>LSI Logic / Symbios Logic</vendor>
Above is the HBA card that we need to assign directly to the guest.
3. Detach the device from the system. Attached devices cannot be used and may cause various errors if connected to a guest without detaching first.
# virsh nodedev-dettach pci_0000_0a_00_0
4. Convert the bus, slot, and function values from decimal to hexadecimal to get the PCI bus
addresses. Prepend "0x" to each converted value to indicate that it is a hexadecimal number, and use these values in the device XML.
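For example, a device at decimal bus 10, slot 0, function 0 converts as follows (printf does the decimal-to-hexadecimal conversion):

```shell
# Convert decimal bus/slot/function values to the 0x-prefixed
# hexadecimal form libvirt expects in the <address> element
bus=10; slot=0; function=0
printf 'bus=0x%02x slot=0x%02x function=0x%x\n' "$bus" "$slot" "$function"
# prints: bus=0x0a slot=0x00 function=0x0
```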
5. Run virsh edit (or virsh attach-device) and add a device entry in the <devices> section to attach the PCI device to the guest.
Add the lines below:
# virsh edit speedbkp
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
</hostdev>
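Alternatively, the same <hostdev> snippet can be saved to a file (hba.xml is a hypothetical filename here) and attached with virsh attach-device instead of editing the guest definition by hand:

```shell
# hba.xml contains the <hostdev> element for the passed-through device
virsh attach-device speedbkp hba.xml
```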
6. Once the guest is configured to use the PCI address, the host system must be configured
to stop using the device. Because managed='yes' is set in the <hostdev> entry, libvirt handles this automatically; otherwise the host driver bound to the HBA (mpt2sas for this SAS2008 controller) must be unbound first.
7. Detach the device from the host (this repeats step 3 and is harmless if the device is already detached):
# virsh nodedev-dettach pci_0000_0a_00_0
8. Set an SELinux boolean to allow the management of the PCI device from the guest:
# setsebool -P virt_manage_sysfs 1
9. Start the guest:
# virsh start speedbkp
The PCI device should now be successfully attached and accessible to the guest.
Once the HBA card is in use by the backup (guest) server, the same device can no longer be used on the virtualization host.
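To confirm the assignment, you can list PCI devices from inside the guest OS; the passed-through HBA should appear there:

```shell
# Run inside the guest: the assigned SAS2008 HBA should be listed
lspci | grep -i SAS
```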
The hardware can also be added through the virtual console manager (virt-manager) using the same procedure discussed above.