I work in a government department with a radar system.
In the back end, we use Linux on 3 different machines.
Each Linux system connects to the others through NFS mount points;
we use CentOS 5, Ubuntu 10.04, and Ubuntu 12.04.
We created a script to automatically copy data between the machines,
but the hardest part is that the device name on each machine sometimes changes, say from sda to sdb, and on another day to sdc, so we can't make the script run reliably.
So why does the device name suddenly change?
Even when we try to copy data from one machine to another manually, the copy process suddenly stops because the device name changed.
The device names change due to the random nature of disk spin-up.
To improve boot time, the kernel scans each controller in parallel. Usually /dev/sda is fairly consistent because it is the disk spun up by the BIOS, so it is the first one recognized. After that, the disks are assigned names as they spin up.
Unfortunately, that means that what is identified as /dev/sdb may be on the second controller instead of the first... or on the first controller IF that disk spins up first. Two disks from the same manufacturer can alternate (though usually, once you identify the order, they tend to come up in the same order the next time... until you add another disk).
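If you want to see how the kernel happened to map the disks on the current boot, the symlink directories under /dev/disk/ show the live assignments. A quick check (the path and names below are illustrative; yours will differ):

    ls -l /dev/disk/by-path/
    # each symlink points at whatever /dev/sdX name that disk received this boot, e.g.:
    # pci-0000:00:1f.2-ata-1-part1 -> ../../sda1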
There are three ways to uniquely identify a disk: by UUID, by volume label, or by vendor model and serial number (a combined sketch follows the list).
1. Give the filesystem a UUID that uniquely identifies the disk. If a UUID isn't specified, one is generated by default when the filesystem is created. Mount it with "mount UUID=<the long UUID string> /mountpoint" or "mount /dev/disk/by-uuid/<the long UUID string> /mountpoint".
2. Give the filesystem a label. This is frequently easier to deal with, as you can use mnemonic names that identify the function of the particular filesystem being mounted. Mount it with "mount LABEL=<label> /mountpoint" or "mount /dev/disk/by-label/<label> /mountpoint".
3. By model and serial number (also known as "by id"). This is a bit tricky, as different vendors identify their disks in different ways. You can see the vendor identifiers in the directory /dev/disk/by-id. The easy part is identifying the partition: it is always at the end of the identifier, as "...-partn", where n is the partition number. In my case, I have a number of Samsung SATA disks, which are identified as "ata-SAMSUNG_<model>_<serial>" with the partition number appended. The problem is that not all manufacturers use the same style... Seagate has a slightly simpler naming scheme and uses "ata-ST<model>-<revision>_<serial>", which may have the partition number added. These may be mounted with "mount /dev/disk/by-id/<the identification string> /mountpoint".
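As a rough sketch of all three methods in one place (every UUID, label, and id string below is made up for illustration; substitute the real values from your own disks):

    # 1. by UUID
    mount UUID=0a1b2c3d-1111-2222-3333-444455556666 /mnt/radar
    mount /dev/disk/by-uuid/0a1b2c3d-1111-2222-3333-444455556666 /mnt/radar

    # 2. by volume label
    mount LABEL=radar-data /mnt/radar
    mount /dev/disk/by-label/radar-data /mnt/radar

    # 3. by vendor/model/serial ("by id")
    mount /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90Z123456-part1 /mnt/radar

    # the same identifiers work in /etc/fstab, which is usually where you want them:
    # LABEL=radar-data  /mnt/radar  ext3  defaults  0  2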
USB devices that get added and removed very DEFINITELY get different names depending on whether one or more are plugged in at a time.
In most (if not all) cases it should be possible to use udev rules to force known names... but you end up writing rules keyed to the same identifiers (UUID, volume label, or the /dev/disk/by-id name), and you still don't know what device names will show up for new storage devices. You can find out what the current association is, because "ls -l /dev/disk/by-uuid/" (and likewise by-label and by-id) shows a symbolic link for each name pointing to the current /dev/sdX<partition> name.
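For completeness, a persistent-name rule might look something like the sketch below (untested; the serial number is hypothetical, and the available match keys differ between udev versions, so check what your systems actually report first; on newer systems that is "udevadm info --query=all --name=/dev/sdb", while CentOS 5's old udev uses "udevinfo" instead):

    # /etc/udev/rules.d/99-radar-disk.rules
    # give the disk with this serial number a stable /dev/radar_disk symlink
    KERNEL=="sd?", ENV{ID_SERIAL_SHORT}=="S246J90Z123456", SYMLINK+="radar_disk"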
For your purposes, I suspect that using a volume label would be the easiest. It avoids the long UUID strings, lets you set the name easily, and saves you from having to remember which partition it is on, but that assumes the filesystem you are using supports volume labels. Ext filesystems are labeled with tune2fs (or e2label).
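Setting a label on an existing ext filesystem is a one-liner; for example (the device name and label are placeholders):

    tune2fs -L radar-data /dev/sdb1
    # or, equivalently:
    e2label /dev/sdb1 radar-data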
You can use the "blkid" utility to list the current UUIDs and volume labels.
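For example (the output shown is illustrative, not from your machines):

    blkid
    # /dev/sda1: LABEL="radar-data" UUID="0a1b2c3d-1111-2222-3333-444455556666" TYPE="ext3"
    # /dev/sdb1: UUID="7e8f9a0b-5555-6666-7777-888899990000" TYPE="ext3"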
If you need more help, you might try to get permission to post the script you are using. NFS itself doesn't use device names on client systems - just mountpoints. I assume that the problem is on the server where the device names change.
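To illustrate that last point: a client-side NFS mount references the server and the export path, never a device name (the server name and paths below are placeholders):

    # client /etc/fstab entry for an NFS mount
    radar-server:/export/radar  /mnt/radar  nfs  defaults  0  0

So if changing device names are breaking the copies, the fix belongs on the machine that exports the data, in its own fstab (using LABEL= or UUID=) and /etc/exports.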