This is all slightly over my head, but that has never stopped me before. I'll answer what I can. If I'm wrong or incomplete on any points, someone please correct or add. To make matters worse, my Fibre Channel experience is
limited to Linux on Z (the IBM mainframe).
What is an HBA card?
An HBA is a Host Bus Adapter. It is the Fibre Channel equivalent of an Ethernet NIC.
Unlike most NICs, it can be configured with target LUN information so that you can
boot off of it. If you have a blade server with no local disk, this comes in handy.
How to detect an HBA card in a server?
We needed to rebuild the initial ramdisk to include Fibre Channel drivers, then add configuration
info to a file, /etc/zfcp.conf, so that the zfcp driver could find the LUNs out in the SAN.
There is probably something equivalent in the x86 world. I don't know what it is. Each LUN found
is represented as a separate SCSI device, e.g. /dev/sda, /dev/sdb, etc.
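For illustration, here is roughly what those /etc/zfcp.conf entries look like. Every value below is made up, and the field layout has changed between releases (older releases used a five-field form that included SCSI IDs), so check your distro's documentation before copying anything:

```
# /etc/zfcp.conf -- one line per LUN:
#   <FCP device bus ID>  <remote port WWPN>  <FCP LUN>
# All addresses below are invented examples.
0.0.0100 0x5005076300c18154 0x4010400000000000
# Same WWPN and LUN through a second FCP device = a second path
# to the same disk (useful later for multipathing):
0.0.0101 0x5005076300c18154 0x4010400000000000
```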
How to make a LUN visible to the Red Hat OS?
As mentioned above, we need to put definitions in place in /etc. Also, there is a "feature"
of the fcp driver where it stops at 8 devices when it scans. If you have more
than 8 devices (drives × paths), you need to add a statement like:
options scsi_mod max_luns=50
to /etc/modprobe.conf, then rebuild the initial ramdisk with something like:
mkinitrd --with=zfcp -f initrd-$(uname -r).img $(uname -r)
What are the features of a LUN?
It simply looks like a SCSI disk partition -- not a whole disk. Its size and characteristics are
determined by "the storage guy" who is doing your SAN work. You can see that this is way over my head.
How to use a LUN as a partition?
I would imagine you would format it as you would any partition, i.e. you would use mkfs.ext3. We don't
use the devices directly. Instead, we use dm-multipath, then put the multipath device into an LVM volume group.
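As a sketch of that, with made-up device and volume group names (mpath0, vg_san, lv_data) -- and note these commands will destroy whatever is already on the device, so double-check it first:

```shell
# All names below are examples, not real configuration.
pvcreate /dev/mapper/mpath0            # make the multipath device an LVM PV
vgcreate vg_san /dev/mapper/mpath0     # put it in a volume group
lvcreate -L 10G -n lv_data vg_san      # carve out a logical volume
mkfs.ext3 /dev/vg_san/lv_data          # format it like any partition
mount /dev/vg_san/lv_data /mnt/data
```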
Can we use a LUN as a raw device?
Yes. I believe you can use it just as you would any drive. But why would you want to? The performance
hit from using LVM is almost impossible to measure...
What is multipathing?
Multipathing allows you to have two (or more) HBAs configured to point to the same LUN.
With planning (and money) you have more than one physical path all the way from your server
to the backend disk array. In the event an HBA, fiber, SAN switch, or disk array port fails,
there is another path to the device.
There are two methods of multipathing. The older method (pre-RHEL4.5?) used software raid to
take two devices, say /dev/sda and /dev/sdb, and represent them (actually "it", not "them"
as in this case, they are the same LUN) as a single MD device, say /dev/md0.
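Under that scheme, the setup looked something like this (device names are examples, and both devices really must be the same LUN seen down different paths):

```shell
# Legacy MD multipath: /dev/sda and /dev/sdb are the SAME LUN, seen once
# down each path. /dev/md0 becomes the single device you actually use.
mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sda /dev/sdb
cat /proc/mdstat   # md0 should show up as a multipath array
```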
The second method, DM-Multipath, is the preferred method. It can take the devices and represent
them as a single device, say as /dev/mapper/mpath0. The advantage of DM-Multipath is that it
can do round-robin load balancing, etc., is very efficient, and is very robust.
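For completeness, a minimal /etc/multipath.conf sketch. The directive names are from multipath.conf(5); whether you need any of this depends on your release's built-in defaults, so treat it as a starting point rather than a recipe:

```
defaults {
        user_friendly_names yes        # gives you mpath0 instead of the WWID
        path_grouping_policy multibus  # round-robin across all paths
}
blacklist {
        devnode "^(ram|loop|fd|md|sr)[0-9]*"   # don't multipath local devices
}
```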
How to check whether multipathing is configured?
With Software Raid multipathing, 'cat /proc/mdstat' would show that to you;
with DM-Multipath, 'multipath -ll' would show all of the devices, e.g.:
# multipath -ll
mpath1 (36005076306ffc4af0000000000004801) dm-7 IBM,2107900
\_ round-robin 0 [prio=2][active]
\_ 0:0:0:2 sdc 8:32 [active][ready]
\_ 1:0:0:2 sdd 8:48 [active][ready]
mpath0 (36005076306ffc4af0000000000004800) dm-6 IBM,2107900
\_ round-robin 0 [prio=2][active]
\_ 0:0:0:1 sda 8:0 [active][ready]
\_ 1:0:0:1 sdb 8:16 [active][ready]
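If you want to sanity-check path counts from a script, something like this works on the `multipath -ll` output. The here-doc below just replays the sample output above so the snippet is self-contained; on a live system you would pipe from `multipath -ll` instead:

```shell
# Count ready paths per multipath map by parsing `multipath -ll` output.
awk '/^mpath/ { map = $1 }
     /\[active\]\[ready\]/ { paths[map]++ }
     END { for (m in paths) print m, paths[m] }' <<'EOF' | sort
mpath1 (36005076306ffc4af0000000000004801) dm-7 IBM,2107900
\_ round-robin 0 [prio=2][active]
\_ 0:0:0:2 sdc 8:32 [active][ready]
\_ 1:0:0:2 sdd 8:48 [active][ready]
mpath0 (36005076306ffc4af0000000000004800) dm-6 IBM,2107900
\_ round-robin 0 [prio=2][active]
\_ 0:0:0:1 sda 8:0 [active][ready]
\_ 1:0:0:1 sdb 8:16 [active][ready]
EOF
```

Each map should report as many ready paths as you have HBAs; fewer than that means a path is down somewhere.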
I don't know the answers to:
How to check how many HBAs are installed?
How to check whether one or more (if any) HBA cards are working or not.
How to check whether the link between switch and card is working or not?