
surkude 03-26-2007 08:50 PM

Unable to see new scsi disks

I am trying to expose some LUNs to the guest (running SLES 9 SP3 RC4) using the raw device mapping feature of VMware's ESX server. The LUNs are visible on the guest only after a reboot. SUSE does seem to support hot-adding disks, though, since the LUNs sometimes show up even without a reboot.

I run a rescan to detect the devices, but the scan rarely succeeds, and whenever it fails I see the following message in the console log:

mptbase: ioc0: IOCStatus(0x004b): SCSI IOC Terminated
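The poster does not give the exact rescan command, but on a 2.6 kernel such as SLES 9's, a SCSI bus rescan is typically triggered through sysfs. A sketch, assuming the standard `/sys/class/scsi_host` interface (run as root on the guest; host numbers vary):

```shell
#!/bin/sh
# Sketch: ask every SCSI host adapter to rescan all channels/targets/LUNs.
# "- - -" are wildcards for channel, target, and LUN.
for scan in /sys/class/scsi_host/host*/scan; do
    if [ -w "$scan" ]; then
        echo "- - -" > "$scan"
        echo "rescanned ${scan%/scan}"
    fi
done
```

After the rescan, newly exposed LUNs should appear in `dmesg` and under `/proc/scsi/scsi`; on systems without writable scan files the loop is simply a no-op.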

So I'm pretty sure the SCSI driver in the Linux kernel isn't able to detect the newly added disk. I also thought this could be a problem on the VMware side, but the same LUNs show up consistently on a Windows guest without any problems, so I think something is wrong with my Linux kernel.

Has anyone seen such a problem before? I would really appreciate it if anyone could point me to a solution.


jdmcdaniel3 03-27-2007 12:07 AM

Your problem sounds Luny to me...
I guess I am not sure just what you are trying to do. Normally, you set up your disk space so that it is all part of your unified file system under Linux, no matter how many drives you have. Then, under VMware, you create a kind of logical drive per session that you set up to exist somewhere in the Linux folder system. I think you should not be trying to map raw drive access straight through SuSE to VMware, but that is just my opinion.

If you think the problem is due to the older Linux kernel, you might want to consider an update to SLES 10, which is loosely based on SuSE 10.1 and has better support for newer hard drives and RAID setups.

Thank You,

surkude 03-27-2007 01:42 PM

Thanks for the reply. The ESX server has a number of disks, and I would like to assign only a few of them to this particular guest (the one running SLES 9); raw device mapping is one of the approaches to do so.

There's an important aspect to this problem: the LUNs seen by the ESX server are actually iSCSI LUNs coming from a NetApp array. But these LUNs appear as FC LUNs to the guest, since VMware makes the underlying transport transparent to it. The LUN scanning probably fails because the timeout values for FC are much smaller than those required for iSCSI LUNs. Does anyone know of any tunables I can use to adjust the FC discovery/scanning timeout values?
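One related knob that does exist on 2.6 kernels is the per-device SCSI command timeout exposed in sysfs. It governs command completion rather than FC discovery itself, so it may or may not address the scanning failure, but it is worth checking. A sketch, with `sdb` as a hypothetical device name (run as root on the guest):

```shell
#!/bin/sh
# Sketch: inspect and raise the SCSI command timeout (in seconds) for one
# block device. /sys/block/<dev>/device/timeout is the standard 2.6 path;
# sdb is a placeholder for whichever disk maps to the slow iSCSI-backed LUN.
t=/sys/block/sdb/device/timeout
if [ -f "$t" ]; then
    echo "current timeout: $(cat "$t")s"
    echo 120 > "$t"     # raise to 120 s to tolerate slow iSCSI responses
fi
```

On machines without an `sdb` the script does nothing; the change is per-device and does not persist across reboots.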
