Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I'd suggest you install lsscsi (yum install) then run that command to see what SCSI disks (including SAN attached) are seen by the OS.
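Assuming a RHEL/CentOS box with the yum repos configured (and that the package is named "lsscsi" in the base channel), that would look roughly like:

```shell
# Install the SCSI listing tool
yum install lsscsi

# List every SCSI device the kernel currently sees, including SAN-attached LUNs
lsscsi
```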
I'd also suggest you install Hitachi's HORCM software for Linux and run the "inqraid -CLI -fxng" command to see what disks it thinks are assigned to the host.
Often you have to rescan the fiber HBAs to make them take notice of drives you've mapped in. The two tools mentioned will let you know for sure that the disks you think are mapped are in fact seen and accessible by the host.
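One common way to trigger that rescan on RHEL6 is to write to each HBA's sysfs scan node, along these lines (run as root; your host numbers will differ):

```shell
# Rescan all SCSI/FC hosts: "- - -" means all channels, all targets, all LUNs
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
```

On some fibre HBAs you may also need to issue a LIP first by writing 1 to /sys/class/fc_host/hostX/issue_lip before the scan picks up new LUNs.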
You didn't mention fiber switches for your SAN. Are you using any? Did you update the zoning there to pass the disks through? Here we have to map the drives both in our SAN switches and in our Hitachi array.
For RHEL6 make sure you've updated to the most recent device-mapper-multipath and device-mapper-multipath-libs packages.
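On a registered RHEL6 box that update is just a matter of:

```shell
# Check what is installed, then pull the latest from the RHEL channels
rpm -q device-mapper-multipath device-mapper-multipath-libs
yum update device-mapper-multipath device-mapper-multipath-libs
```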
For RHEL6 multipath.conf we typically:
• Locate the line that starts "#blacklist {".
• Insert a comment just above it, similar to: "# Uncommented blacklist and added DELL PERC and FusionIO to blacklist. Left default comments."
• Uncomment that line so it starts just as "blacklist {".
• Leave the commented-out wwid and devnode lines below it commented, then below those insert the rules for the DELL and Fusion devices.
• Uncomment the "#}" at the end of the blacklist so it closes the bracket opened at the start.
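Putting those steps together, the edited blacklist section ends up looking something like this (the DELL PERC and FusionIO entries are examples from our environment; adjust the vendor/product regex and devnode pattern for whatever internal disks your host has):

```
# Uncommented blacklist and added DELL PERC and FusionIO to blacklist. Left default comments.
blacklist {
#       wwid 26353900f02796769
#       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        device {
                vendor  "DELL"
                product "PERC.*"
        }
        devnode "^fio[a-z]*"
}
```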
• Verify the user_friendly_names setting is uncommented and set to yes:
defaults {
user_friendly_names yes
}
We also enable the multipath daemon:
Run "chkconfig --list multipathd" which should show:
multipathd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
Run "chkconfig multipathd on"
Run "service multipathd start" which should output:
Starting multipathd daemon: [ OK ]
Run "ps -ef |grep multipath" which should show the multipathd now running similar to the following:
root 23225 1 0 17:19 ? 00:00:00 /sbin/multipathd
NOTE:
Every time multipath.conf is changed you must bounce the daemon ("service multipathd restart") for it to see the changes.
You can run "multipath -l -v2" to see what multipath thinks is configured.
"Friendly names" are "mpath*" but change on each boot. There is more setup you can do in multipath.conf to give each device a persistent name but that's an advanced topic.