AIX: This forum is for the discussion of IBM AIX.
eserver and other IBM related questions are also on topic.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
1. umount /company
2. varyoffvg sanvg    (sanvg is on an EMC disk storage array mirrored with another machine)
3. chdev -l hdisk2 -a queue_depth=32
4. varyonvg sanvg
5. mount -a
When I tried the chdev step, it gave me:
Method error (/etc/methods/chgdisk):
0514-018 The values specified for the following attributes
are not valid:
queue_depth Queue DEPTH
I googled it, but nothing really explained what the issue was. Is it because the disk array I have didn't like the value of 32? I tried 24 as well and it gave me the same problem. The current value is 8 (lsattr -E -l hdisk2 shows me this).
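For what it's worth, `lsattr -E` prints a user-settable flag in its last column, which is worth checking before blaming the value itself. A minimal sketch; the `lsattr` line below is illustrative sample output I've written by hand, not captured from the poster's box:

```shell
# Illustrative output of `lsattr -E -l hdisk2 -a queue_depth`;
# the last column ("True"/"False") says whether the attribute
# is user-settable. The current value of 8 matches the thread.
line='queue_depth 8 Queue DEPTH False'

# Pull the last field; "False" here would explain the 0514-018
# method error from chdev even when running as root.
settable=$(echo "$line" | awk '{print $NF}')
echo "user-settable: $settable"
```

If that flag reads False, no value you pass to chdev will be accepted.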
Why is it settable on other boxes, then? And why, as root, can't I set it? Could it be something to do with the fibre channel SAN and the way that is set up?
Can you set the queue depth on the same type of disk subsystem on other boxes? If so, then I would check your driver and microcode levels for your disks and adapters and compare them to the system where it works...
I have worked out what the issue is with regard to not being able to change the queue_depth. When the disks are added, presumably via smit, the person setting up the disk is presented with a list of supported disk types to choose from. This list defines which attributes the disk should have as far as the OS is concerned. My server's disk type is "SYMM_RAID1_RDF1 (EMC Symmetrix FCP RDF1 Raid1)", which has a queue_depth of 8 that is NOT user-settable. Another server's disk is "SYMM_RAID1 (EMC Symmetrix FCP Raid1)", which has a queue_depth of 8 that IS user-settable. The difference between the two is the RDF1 part: my server uses SRDF.
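That per-disk-type settability lives in the ODM's predefined attributes (PdAt), which you can query with `odmget` on a real box. A hedged sketch over a hand-written stanza; my reading of the `generic` field (a `U` flag marking a user-settable attribute) is an assumption from memory of the PdAt convention, not something stated in the thread:

```shell
# On AIX you would pull the real stanza with something like:
#   odmget -q 'uniquetype=disk/fcp/SYMM_RAID1_RDF1 and attribute=queue_depth' PdAt
# Hand-written illustrative stanza; a "generic" field without "U"
# is what makes chdev refuse the change.
stanza='uniquetype = "disk/fcp/SYMM_RAID1_RDF1"
attribute = "queue_depth"
deflt = "8"
generic = "D"'

# Extract the generic flags and test for the user-settable marker.
generic=$(echo "$stanza" | sed -n 's/.*generic = "\([^"]*\)".*/\1/p')
case "$generic" in
  *U*) echo "queue_depth is user-settable" ;;
  *)   echo "queue_depth is not user-settable" ;;
esac
```

Comparing this stanza between the SYMM_RAID1 and SYMM_RAID1_RDF1 uniquetypes on the two servers would confirm the diagnosis.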
Thanks, Frustin.
You should never have to use smit in order to add a disk.
On a standard system, do as I do: simply connect the new disks and at the keyboard type "lsdev -Ccdisk; time cfgmgr; lsdev -Ccdisk", and you should see the new disks appear automatically. The disks should then come up with the correct settings and the correct user-settable attributes.
If you don't have the necessary drivers installed on your system (the driver LPPs are usually delivered with the disk subsystem box, or on the Bull Enhancement CDs), the disks will appear as "other FC disk devices"; installing the adequate drivers will make them show up as EMC devices again.
Manually installing the disk devices can give unpredictable results. Moreover, I'm pretty sure a system should never see the SRDF disks: they are usually invisible backup disks that become accessible only when the production disks fail.
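The "other FC disk devices" symptom described above can be spotted straight from lsdev output. A small sketch over sample output; the hdisk names and location codes below are made up for illustration:

```shell
# Made-up `lsdev -Ccdisk` output: two disks matched the EMC ODM
# entries, one fell back to the generic type because the vendor
# driver filesets are not installed.
lsdev_out='hdisk0 Available 10-60-00-0,0 EMC Symmetrix FCP RDF1 Raid1
hdisk1 Available 10-60-00-1,0 EMC Symmetrix FCP Raid1
hdisk2 Available 20-58-01     Other FC SCSI Disk Drive'

# Count disks still using the generic type; for each of these you
# would install the drivers, rmdev -dl the hdisk, and rerun cfgmgr.
echo "$lsdev_out" | grep -c 'Other FC SCSI'
```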