[SOLVED] Drive partition labels: why/how do they change, and why are there extra in /dev?
I'm ashamed to even be asking this, but I have to no matter how goofy it is.
I have a server with 8 SSDs in it. I installed CentOS 6.2 using Kickstart. The idea was to format only the first 2 disks and leave the rest as JBOD (a software requirement). Initially the install failed with a "KeyError: /dev/sda" message during drive partitioning, and a colleague of mine suggested I change the bootloader line from "--driveorder=sda,sdb,sdc,etc." to start at sdc. I did, and the installation succeeded.
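For reference, this is the shape of that change in the kickstart file (illustrative only: the --location flag is an assumption on my part here, and the exact drive lists are made up since the real line enumerates all eight disks):

    # Before (failed with "KeyError: /dev/sda" during partitioning):
    bootloader --location=mbr --driveorder=sda,sdb,sdc,sdd,sde,sdf,sdg,sdh
    # After (install succeeded); everything past sdc is illustrative:
    bootloader --location=mbr --driveorder=sdc,sdd,sde,sdf,sdg,sdh,sdi,sdj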
Now that I'm logged in, fdisk -l shows that sda and sdb were in fact created and partitioned according to my scheme, so I have sda1 and sda2, and sdb1 and sdb2. The rest of the drives are untouched, as I'd wanted. Why did I have to specify a starting point of sdc for the install to succeed? And why doesn't the partitioning start at sdc then; why did it revert to sda?
Also, when I do fdisk -l I notice that the devices run like this: sda, sdb, sdc, sdd, sde, sdf, sdi, sdj. /proc/partitions matches this as well. The /dev directory and /sys/block show sdg and sdh too, though. Why did the installation skip those two and jump to "i"? Are they actually being used for something else? How can I tell?
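Here's what I've been running to try to answer that myself (stock CentOS 6 tools; sdg is just the device I'm poking at):

    cat /proc/partitions                       # what the kernel thinks exists
    ls -l /sys/block/sdg/device                # which controller/port the node hangs off
    udevadm info --query=all --name=/dev/sdg   # everything udev knows about it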
To add, I went ahead and ran two scripts that I would be using for deploying these servers. The first checks for the presence of an OS on the drives to make sure I didn't inadvertently screw with the wrong ones. It returned:
The OS disks:
/dev/sda1 /dev/sdb1 /dev/sda2 /dev/sdb2
The non OS disks:
/dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
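That first script is nothing fancy; below is a simplified sketch of the idea, not the exact code. It treats any sdXN partition that blkid recognizes as carrying a filesystem as an "OS" partition, and reports every whole-disk node separately:

    #!/bin/bash
    shopt -s nullglob
    os=""
    non_os=""
    # Any partition with a recognizable filesystem counts as an OS partition.
    for part in /dev/sd[a-z][0-9]*; do
        blkid "$part" >/dev/null 2>&1 && os="$os $part"
    done
    # Whole-disk nodes are listed separately, OS or not.
    for disk in /dev/sd[a-z]; do
        non_os="$non_os $disk"
    done
    echo "The OS disks:$os"
    echo "The non OS disks:$non_os"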
The second script runs various hdparm commands against each device that ls on /dev returns.
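In essence it does something like this (simplified; the real script runs several different hdparm invocations):

    # Query drive identity and settings for every whole-disk node.
    for disk in /dev/sd[a-z]; do
        echo "=== $disk ==="
        hdparm -I "$disk"
    done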
And to further complicate things: upon reboot /dev/md0, which was using /dev/sda and /dev/sdb, is now using /dev/sda and /dev/sde. fdisk -l likewise shows sda and sde as the two formatted and partitioned drives. Should I now be moving this from the newb forum?
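For anyone following along, I'm checking md0's membership with the standard interfaces:

    cat /proc/mdstat            # kernel's view of all md arrays and their members
    mdadm --detail /dev/md0     # per-array detail, including which sdX devices it holds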
I found (I think) that the order in which Linux detects drives during boot determines which label each drive gets, which is why the drive I installed the OS on may be /dev/sda during one session and /dev/sde in another. The workaround seems to be referencing disks by something stable instead of the sdX names; see the sketch below. I'm opening a separate thread to discuss the hdparm output.
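The stable handles I've found so far (all present on a stock CentOS 6 install):

    ls -l /dev/disk/by-id/      # persistent per-drive symlinks (serial/WWN based)
    blkid /dev/sda1             # filesystem UUIDs, usable in /etc/fstab
    mdadm --examine --scan      # ARRAY lines keyed on UUID, for /etc/mdadm.conf

These survive reboots even when the sdX letters shuffle, so md0 and fstab entries can be tied to UUIDs instead of device names.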