Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
I am trying to set up a RHEL 4 cluster with two Dell machines, FC3 cables, one zone on a Cisco switch, and a StorageTek D280 SAN. I have been unable to get the machines talking to the SAN. I did at one point get the Red Hat Cluster Suite loaded on both boxes, along with GFS (Global File System).
Here is the local configuration:
/dev/sda 70 GB (actually double since it is 2 disks, RAID 1)
/dev/sdb 300 GB (actually double since it is 2 disks, RAID 1)
/dev/sdc 300 GB (actually double since it is 2 disks, RAID 1)
Running fdisk -l, I am seeing PVs within LVM2. So I see /dev/sdd through /dev/sdo or something like that -- basically half the alphabet used up. Red Hat provides device-mapper-multipath-0.4.5-16.1.RHEL4. I am not sure whether that handles all of the multipathing, and I need to find a good tutorial on that in and of itself.
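If those extra /dev/sdd through /dev/sdo entries are just multiple paths to the same SAN LUNs, device-mapper-multipath should collapse them into one device per LUN. A minimal sketch of how that is usually enabled with the stock RHEL 4 package (the blacklist pattern assumes the three local RAID 1 disks are sda through sdc; verify against your own device names before using it):

```shell
# Sketch of enabling dm-multipath on RHEL 4. Assumes sda..sdc are the
# local RAID 1 disks and everything else is a SAN path -- adjust to taste.

# 1. Blacklist the local disks in /etc/multipath.conf, e.g.:
#      blacklist {
#          devnode "^sd[a-c]$"
#      }

# 2. Start the daemon and build the maps:
chkconfig multipathd on
service multipathd start
multipath -v2        # create the multipath maps
multipath -ll        # show each LUN with its grouped paths
```

The coalesced devices then show up under /dev/mapper/, and those (rather than the raw sdX nodes) are what you would hand to pvcreate/vgcreate.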
I have created and shared a couple of volume groups, named something like test_volume and test_vg. dmesg shows them on bootup, so the local boxes are definitely seeing the SAN volumes; however, they are not being served up correctly and mapped to the LUNs. At one point last week I got both boxes to see the /gfs filesystem I created, and when I created a file on one machine it showed up on the other with ls. I can probably get back to that point again, but that doesn't solve the overall problem. The network is stable with static IPs, the RPMs are loaded, and the machines are up2dated.
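For reference, the shared-volume steps I used were roughly the following (names like test_vg, test_volume, "mycluster", and the device path are placeholders; this assumes the RHEL 4 clvmd/GFS tooling):

```shell
# Rough sketch of the clustered LVM + GFS setup on RHEL 4.
# "mycluster", the sizes, and the device paths are placeholders.

service ccsd start            # cluster configuration daemon
service cman start            # cluster membership
service clvmd start           # clustered LVM daemon

pvcreate /dev/mapper/mpath0   # use the multipathed device, not a raw sdX
vgcreate test_vg /dev/mapper/mpath0
lvcreate -L 100G -n test_volume test_vg

# One journal (-j) per node that will mount the filesystem:
gfs_mkfs -p lock_dlm -t mycluster:test_gfs -j 2 /dev/test_vg/test_volume
mount -t gfs /dev/test_vg/test_volume /gfs
```

The lock_dlm protocol and the cluster-name prefix in -t are what let both nodes mount the same filesystem safely, which would match the behavior I saw when a file created on one box showed up on the other.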
RHEL 4 comes with the lpfc driver, which works great with my HBA cards (Emulex LPe 11000). The HBA drivers loaded fine, as did the HBAnyware utilities. I updated the firmware on the cards as of yesterday.
I need to get a driver from somewhere. StorageTek (aka Sun) provides a driver (rdac), but it does not support these HBA cards. OTOH, these HBA cards are the only cards we know of that are PCI Express and will fit in the servers. So if I can't make this work with the current configuration, I will have to move to other cards, and therefore probably other servers.
At least, the driver documentation does not state that it supports them; it pretty explicitly says that you need other card models. So for production purposes I don't think it would be a supported configuration even if I could get it to work with that driver. Hence I am looking for another driver.
Can anyone point me to a driver that will be a better choice for talking to the SAN? I am thinking that at some point I should start seeing LUNs and have total end-to-end connectivity, but that hasn't happened yet. Or if anyone knows of something else I can do to fix this situation, please let me know.
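In the meantime, here is the kind of checklist I have been running to see whether the LUNs are visible end-to-end (the host number is a placeholder; lpfc usually registers one SCSI host per HBA port):

```shell
# Hedged checklist for verifying HBA -> switch -> SAN visibility on a
# RHEL 4 (2.6.9) kernel. Replace host0 with your actual lpfc host(s).

lsmod | grep lpfc                 # is the driver loaded?
cat /proc/scsi/scsi               # what the kernel currently sees

# Force a rescan of an HBA's targets/LUNs without rebooting:
echo "- - -" > /sys/class/scsi_host/host0/scan

# Watch for new sdX devices or lpfc errors as the scan runs:
dmesg | tail -30
```

If the rescan turns up no new devices at all, that would point at zoning or LUN masking on the switch/SAN side rather than at the driver.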