Ah...
That's been one of my biggest complaints about how Linux handles disk devices: it's not much better than DOS drive letters. Add or remove a drive and everything "downstream" gets a new name. If you do this on the first SCSI bus, not only do the downstream devices get new names, but the entire second (or third, etc.) bus is screwed up. Insane, IMHO. I had hoped, a few years ago, that the SCSI team would adopt the naming convention used by, say, SysV and some of the other commercial UNIXes and give devices names that encode the controller number, SCSI ID, and partition number (which ought to be familiar to anyone who's spent five minutes poking around on a Solaris box). I was encouraged when I learned that the devfs filesystem was moving in that direction; it handled IDE devices in a similar fashion, IIRC. But now devfs is pretty much phased out. Using udev might be of some help, but so far it looks to me to be a work in progress and a bit too complex to be really useful for this. (OK, end of rant :-) )
There is a method for making the OS less sensitive to changes in SCSI device names, and it involves assigning a volume label to each partition. You then refer to that volume label in /etc/fstab. For example, instead of having something like
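Code:
# (mount point and options here are just an example)
/dev/sda2    /usr    ext3    defaults    1 2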
in fstab, you could label the partition using
Code:
tune2fs -L "usr" /dev/sda2
and then the record in fstab would be changed to read something like
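Code:
LABEL=usr    /usr    ext3    defaults    1 2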
The downside to using labels is that you eventually forget just where things are physically located, and if a device fails you may not immediately be able to tell what's been clobbered. Keep a comment in /etc/fstab noting which physical partition each label lives on, to avoid confusion later.
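For instance, something along these lines (the labels, mount points, and options here are just an example layout):

Code:
# LABEL=usr  is /dev/sda2 (first SCSI disk)
# LABEL=home is /dev/sdb1 (second SCSI disk)
LABEL=usr    /usr     ext3    defaults    1 2
LABEL=home   /home    ext3    defaults    1 2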
In your case, the boot process would still have complained, but you would have seen the volume name that it was having trouble with. Then you would have been able to go into single user mode, edit /etc/fstab, and comment out the line(s) corresponding to the removed device.
Here are a couple of interesting situations where volume labels are handy:

1.) The boot-time filesystem check complains that /dev/sdb1 has problems. Checking it in single user mode shows all sorts of errors. You panic, because that's where half of the OS is supposed to be. What happened? The second disk didn't spin up (a failure looming in its future), and fsck actually checked what is normally /dev/sdc1 (a swap partition), which had been renamed /dev/sdb1. (I just had this happen about a week ago.)

2.) The check of all of the filesystems on the last SCSI disk fails. What happened? Well, the same thing as in case one could have happened; it depends on the layout of the partitions. Or drive three could have been the culprit and drive four got seen as drive three. Whatever. The use of labels would have nailed down exactly which drive was the problem. (Like I mentioned above, having a record of what's where is still important.)
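If you're ever unsure which physical device currently holds which label, the labels themselves can tell you. Something along these lines should work, assuming ext2/ext3 filesystems (adjust the device names and label to your own setup):

Code:
# show the label on a given partition
e2label /dev/sda2

# or go the other way: which device currently carries the label "usr"?
findfs LABEL=usr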
Check labels out. They aren't just some gimmicky thing thrown into tune2fs for the heck of it.