LinuxQuestions.org > Forums > Linux Forums > Linux - Server
Linux - Server: this forum is for the discussion of Linux software used in a server-related context.
Old 05-23-2012, 01:38 PM   #1
bluefish1
Member
 
Registered: Apr 2004
Location: PA
Distribution: CentOS 6
Posts: 47

Rep: Reputation: 0
Persistent Naming of a Block Device CentOS 6


I have a CentOS 6 server hosting multiple virtual machines.

Sometimes when I reboot the server, the drives change names.

This is not a problem for the RAID volumes, as they always seem to assemble correctly regardless of the device name, but the drives that are not in a RAID get lost to the virtual systems when the name changes, e.g. /dev/sdh1 becomes /dev/sdc1.

I started writing a udev rule (20-persistent-naming) to ensure each drive is named as expected.
Code:
# This prints out UUID 
# blkid  -o value -s UUID  

# HOST SYSTEM OS
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid  -o value -s UUID", RESULT=="6a2dc245-c87c-45b9-b2e0-819282889a37", NAME="sdi1" 
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid  -o value -s UUID", RESULT=="044d3adc-ebf1-40c1-8b34-96c1726d14de", NAME="sdi2" 
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid  -o value -s UUID", RESULT=="b0db3a7f-82ed-4ac2-a3d9-7395ffb96e04", NAME="sdi3"

# BACKUP Server Drive Mount
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid  -o value -s UUID", RESULT=="8946a854-1e9b-46b1-9fe8-4b318ebebab7", NAME="backup1"
I think the rule syntax is correct, though I have not tested it yet. Assuming it is, I am not sure how to handle the RAID volumes, since their partitions do not have unique UUIDs; I assume that is how the RAID knows to assemble itself. I suspect the device name is irrelevant to the soft RAID? Do I spec them as:
Code:
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID",
RESULT=="03611a9f-f269-6fcd-9229-ebd614d37619", NAME="sda1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID",
RESULT=="03611a9f-f269-6fcd-9229-ebd614d37619", NAME="sdb1"
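One likely issue with the rules as written: `blkid -o value -s UUID` with no device argument scans every device and prints all of their UUIDs, so the single-value RESULT comparison may never match. A hedged, untested sketch (assuming the udev `%N` substitution for the node being processed is available on CentOS 6, and using a hypothetical symlink name), matching RAID members on UUID_SUB, which is unique per member:

```
# Query only the device currently being processed (%N expands to a
# temporary node for it), and match on UUID_SUB: the plain UUID is
# shared by every member of an array, UUID_SUB is per-member.
# SYMLINK+= creates an extra stable name instead of renaming the node.
KERNEL=="sd*", PROGRAM=="/sbin/blkid -o value -s UUID_SUB %N", RESULT=="ba22bec9-750c-c207-4486-893655b347f2", SYMLINK+="raid1-member-a1"
```

That said, mdadm assembles arrays from the metadata on the members themselves, not from their node names, so the RAID members generally do not need stable names at all.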

Code:
# Raid 1 set
/dev/sda1: UUID="03611a9f-f269-6fcd-9229-ebd614d37619" 
/dev/sda2: UUID="4fa3fe58-64fe-d0d9-e8e5-fd4b4fb2ae41" 
/dev/sda3: UUID="9c044aa4-e4b1-3e40-742b-6fc74f1ac5a6" 
/dev/sda5: UUID="34d7be3f-21e9-016b-ee28-8a30b4fadb33" 
/dev/sda6: UUID="a10381d7-dfec-017b-3e9f-cefbece711da" 
/dev/sda7: UUID="7d8b71a7-5ae4-a338-2582-634877b41ce0" 

/dev/sdb1: UUID="03611a9f-f269-6fcd-9229-ebd614d37619" 
/dev/sdb2: UUID="4fa3fe58-64fe-d0d9-e8e5-fd4b4fb2ae41" 
/dev/sdb3: UUID="9c044aa4-e4b1-3e40-742b-6fc74f1ac5a6" 
/dev/sdb5: UUID="34d7be3f-21e9-016b-ee28-8a30b4fadb33" 
/dev/sdb6: UUID="a10381d7-dfec-017b-3e9f-cefbece711da" 
/dev/sdb7: UUID="7d8b71a7-5ae4-a338-2582-634877b41ce0"

# Unused mount 
/dev/sdc1: UUID="1c666101-e599-413d-a646-cc8f0f42d6cc" 

# Raid 5 set 
/dev/sde1: UUID="3506e6c2-b59b-44b2-2bd3-df137f6c48cc" 
/dev/sde2: UUID="e84f8a6d-beb5-16d7-d8b6-698dad143eb1"
 
/dev/sdd1: UUID="3506e6c2-b59b-44b2-2bd3-df137f6c48cc" 
/dev/sdd2: UUID="e84f8a6d-beb5-16d7-d8b6-698dad143eb1"
 
/dev/sdf1: UUID="3506e6c2-b59b-44b2-2bd3-df137f6c48cc" 
/dev/sdf2: UUID="e84f8a6d-beb5-16d7-d8b6-698dad143eb1"
 
/dev/sdg1: UUID="3506e6c2-b59b-44b2-2bd3-df137f6c48cc" 
/dev/sdg2: UUID="e84f8a6d-beb5-16d7-d8b6-698dad143eb1"
 
 
# backup volume 
/dev/sdh1: UUID="8946a854-1e9b-46b1-9fe8-4b318ebebab7"


# Boot system 
/dev/sdi1: UUID="6a2dc245-c87c-45b9-b2e0-819282889a37" 
/dev/sdi2: UUID="044d3adc-ebf1-40c1-8b34-96c1726d14de" 
/dev/sdi3: UUID="b0db3a7f-82ed-4ac2-a3d9-7395ffb96e04"
 
Old 05-23-2012, 01:42 PM   #2
bluefish1
Member
 
Registered: Apr 2004
Location: PA
Distribution: CentOS 6
Posts: 47

Original Poster
Rep: Reputation: 0
Label is not the right word... it is the device name that is the issue, e.g. /dev/sdh1 becomes /dev/sdc1.
 
Old 05-23-2012, 01:58 PM   #3
anomie
Senior Member
 
Registered: Nov 2004
Location: Texas
Distribution: RHEL, Scientific Linux, Debian, Fedora, Lubuntu, FreeBSD
Posts: 3,930
Blog Entries: 5

Rep: Reputation: Disabled
Looks like you have multiple paths to these block devices. Is that correct? (If so, is there any reason you're not using DM-Multipath?)
 
Old 05-23-2012, 02:17 PM   #4
bluefish1
Member
 
Registered: Apr 2004
Location: PA
Distribution: CentOS 6
Posts: 47

Original Poster
Rep: Reputation: 0
Do I? I am not sure I understand what that is.

I only access the block devices from within the virtual systems.
E.g.:
/dev/sdh1 goes to the backup virtual machine
/dev/md1 (sda1 + sdb1) goes to the file server virtual machine

I am open to alternate configurations. Here is my setup: http://dl.dropbox.com/u/2295801/...tup_5.2012.jpg
http://dl.dropbox.com/u/2295801/Host...12.graffle.jpg
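If the hypervisor is libvirt/KVM (an assumption; the thread does not say which is in use), one way to sidestep the renaming problem entirely is to map the guest disk by a path that is already stable, such as the /dev/disk/by-uuid symlink udev maintains, instead of by /dev/sdX:

```xml
<!-- Hypothetical libvirt <disk> stanza: the by-uuid path follows the
     filesystem UUID, so it survives /dev/sdX reordering on reboot. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-uuid/8946a854-1e9b-46b1-9fe8-4b318ebebab7'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

The target device name and bus here are placeholders; the point is only that the source path does not depend on enumeration order.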

Last edited by bluefish1; 05-23-2012 at 02:22 PM.
 
Old 05-23-2012, 02:22 PM   #5
anomie
Senior Member
 
Registered: Nov 2004
Location: Texas
Distribution: RHEL, Scientific Linux, Debian, Fedora, Lubuntu, FreeBSD
Posts: 3,930
Blog Entries: 5

Rep: Reputation: Disabled
To put it a different way, are /dev/sda1 and /dev/sdb1 (for instance) really the same block device? You have multiple partitions where the UUIDs are identical.

(I don't completely understand the graphic you provided. The text within it is quite small in my web browser.)
 
Old 05-23-2012, 02:53 PM   #6
bluefish1
Member
 
Registered: Apr 2004
Location: PA
Distribution: CentOS 6
Posts: 47

Original Poster
Rep: Reputation: 0
Posted a bigger image, if that helps: http://dl.dropbox.com/u/2295801/Host...12.graffle.jpg
I thought it was odd that the RAIDs had the same UUIDs, but I never noticed until I started this. Should I be pulling the UUID_SUB?

[14:48:52 root]$ cat mdadm.conf
ARRAY /dev/md3 metadata=1.2 level=5 num-devices=4 name=localhost.home:3 UUID=3506e6c2:b59b44b2:2bd3df13:7f6c48cc
# md3 : active raid5 sdg1[2] sdd1[0] sde1[1] sdf1[3]
# 135278592 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
ARRAY /dev/md0 metadata=1.2 level=5 num-devices=4 name=localhost.home:0 UUID=e84f8a6d:beb516d7:d8b6698d:ad143eb1
# md0 : active raid5 sdg2[2] sdd2[0] sde2[1] sdf2[3]
# 1036838400 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
ARRAY /dev/md1 metadata=1.2 level=1 num-devices=2 name=localhost.home:1 UUID=03611a9f:f2696fcd:9229ebd6:14d37619
# md1 : active raid1 sdb1[1] sda1[0]
# 765463928 blocks super 1.2 [2/2] [UU]
ARRAY /dev/md2 metadata=1.2 level=1 num-devices=2 name=localhost.home:2 UUID=4fa3fe58:64fed0d9:e8e5fd4b:4fb2ae41
# md2 : active raid1 sdb2[1] sda2[0]
# 44049134 blocks super 1.2 [2/2] [UU]
ARRAY /dev/md4 metadata=1.2 level=1 num-devices=2 name=localhost.home:4 UUID=9c044aa4:e4b13e40:742b6fc7:4f1ac5a6
# md4 : active raid1 sdb3[1] sda3[0]
# 44049134 blocks super 1.2 [2/2] [UU]
ARRAY /dev/md5 metadata=1.2 level=1 num-devices=2 name=localhost.home:5 UUID=34d7be3f:21e9016b:ee288a30:b4fadb33
# md5 : active raid1 sdb5[1] sda5[0]
# 34610915 blocks super 1.2 [2/2] [UU]
ARRAY /dev/md6 metadata=1.2 level=1 num-devices=2 name=localhost.home:6 UUID=a10381d7:dfec017b:3e9fcefb:ece711da
# md6 : active raid1 sdb6[1] sda6[0]
# 34610915 blocks super 1.2 [2/2] [UU]
ARRAY /dev/md7 metadata=1.2 level=1 num-devices=2 name=localhost.home:7 UUID=7d8b71a7:5ae4a338:25826348:77b41ce0
# md7 : active raid1 sdb7[1] sda7[0]
# 53969240 blocks super 1.2 [2/2] [UU]

[13:19:09 root]$ blkid
/dev/sda1: UUID="03611a9f-f269-6fcd-9229-ebd614d37619" UUID_SUB="ba22bec9-750c-c207-4486-893655b347f2" LABEL="localhost.home:1" TYPE="linux_raid_member"
/dev/sda2: UUID="4fa3fe58-64fe-d0d9-e8e5-fd4b4fb2ae41" UUID_SUB="910c4220-bb95-a561-9977-3e419cc8c137" LABEL="localhost.home:2" TYPE="linux_raid_member"
/dev/sda3: UUID="9c044aa4-e4b1-3e40-742b-6fc74f1ac5a6" UUID_SUB="f9559f80-459e-2575-4795-4b451e8c3047" LABEL="localhost.home:4" TYPE="linux_raid_member"
/dev/sda5: UUID="34d7be3f-21e9-016b-ee28-8a30b4fadb33" UUID_SUB="1a326e2f-bf2b-3b82-d9db-9090490b9968" LABEL="localhost.home:5" TYPE="linux_raid_member"
/dev/sda6: UUID="a10381d7-dfec-017b-3e9f-cefbece711da" UUID_SUB="7ea7fe8c-4e02-3459-d7fd-43ddb524e906" LABEL="localhost.home:6" TYPE="linux_raid_member"
/dev/sda7: UUID="7d8b71a7-5ae4-a338-2582-634877b41ce0" UUID_SUB="b2f7c6d2-6e18-c359-cac7-5cbc2f5f75d4" LABEL="localhost.home:7" TYPE="linux_raid_member"
/dev/sdc1: UUID="1c666101-e599-413d-a646-cc8f0f42d6cc" TYPE="ext4"
/dev/sdb1: UUID="03611a9f-f269-6fcd-9229-ebd614d37619" UUID_SUB="579d7357-6cd4-c918-371b-ba3178df5da7" LABEL="localhost.home:1" TYPE="linux_raid_member"
/dev/sdb2: UUID="4fa3fe58-64fe-d0d9-e8e5-fd4b4fb2ae41" UUID_SUB="a027572b-aa57-b2c0-24c4-4a6ada7b55fa" LABEL="localhost.home:2" TYPE="linux_raid_member"
/dev/sdb3: UUID="9c044aa4-e4b1-3e40-742b-6fc74f1ac5a6" UUID_SUB="e13f9f67-ffb0-73a3-d420-f1ad87f39447" LABEL="localhost.home:4" TYPE="linux_raid_member"
/dev/sdb5: UUID="34d7be3f-21e9-016b-ee28-8a30b4fadb33" UUID_SUB="16fa06de-f368-b8e6-fff3-df1061668554" LABEL="localhost.home:5" TYPE="linux_raid_member"
/dev/sdb6: UUID="a10381d7-dfec-017b-3e9f-cefbece711da" UUID_SUB="e978f199-da30-8613-0218-8a6568451b59" LABEL="localhost.home:6" TYPE="linux_raid_member"
/dev/sdb7: UUID="7d8b71a7-5ae4-a338-2582-634877b41ce0" UUID_SUB="edc631fd-097d-1294-418f-a3935dad295f" LABEL="localhost.home:7" TYPE="linux_raid_member"
/dev/sde1: UUID="3506e6c2-b59b-44b2-2bd3-df137f6c48cc" UUID_SUB="f1d04a29-fecb-fbf1-bcf6-4eaff4c6842b" LABEL="localhost.home:3" TYPE="linux_raid_member"
/dev/sde2: UUID="e84f8a6d-beb5-16d7-d8b6-698dad143eb1" UUID_SUB="09e56f2b-55b1-360e-1308-acfa7dd6e488" LABEL="localhost.home:0" TYPE="linux_raid_member"
/dev/sdd1: UUID="3506e6c2-b59b-44b2-2bd3-df137f6c48cc" UUID_SUB="61edf730-a581-fae6-ea9a-feb3c477ef83" LABEL="localhost.home:3" TYPE="linux_raid_member"
/dev/sdd2: UUID="e84f8a6d-beb5-16d7-d8b6-698dad143eb1" UUID_SUB="2a642833-5006-feaa-19e8-ff547cde372e" LABEL="localhost.home:0" TYPE="linux_raid_member"
/dev/sdf1: UUID="3506e6c2-b59b-44b2-2bd3-df137f6c48cc" UUID_SUB="1642e40c-c0d0-dbfd-e09e-1a68e4e4161b" LABEL="localhost.home:3" TYPE="linux_raid_member"
/dev/sdf2: UUID="e84f8a6d-beb5-16d7-d8b6-698dad143eb1" UUID_SUB="d145aa2e-a082-12b3-7644-83bdfaef5114" LABEL="localhost.home:0" TYPE="linux_raid_member"
/dev/sdg1: UUID="3506e6c2-b59b-44b2-2bd3-df137f6c48cc" UUID_SUB="22d56cdc-dd0c-ddef-abf1-f6e2d3e67be9" LABEL="localhost.home:3" TYPE="linux_raid_member"
/dev/sdg2: UUID="e84f8a6d-beb5-16d7-d8b6-698dad143eb1" UUID_SUB="5087c6b6-1c70-375e-440f-2b3ca41c79c6" LABEL="localhost.home:0" TYPE="linux_raid_member"
/dev/sdh1: UUID="8946a854-1e9b-46b1-9fe8-4b318ebebab7" TYPE="ext3"
/dev/sdi1: UUID="6a2dc245-c87c-45b9-b2e0-819282889a37" TYPE="ext4"
/dev/sdi2: UUID="044d3adc-ebf1-40c1-8b34-96c1726d14de" TYPE="swap"
/dev/sdi3: UUID="b0db3a7f-82ed-4ac2-a3d9-7395ffb96e04" TYPE="ext4"
/dev/md2p1: UUID="78710a2e-7c21-442c-b542-d2f2cb825477" TYPE="ext4"
/dev/md2p2: UUID="X1iIG8-bdCB-ZgIE-CxkD-uBsf-umQy-hSojfH" TYPE="LVM2_member"
/dev/md5: UUID="867bfb23-f9b1-4551-a01f-82302d8f7ce7" TYPE="ext4"
/dev/md1: UUID="3c35000b-3d58-4e25-b02c-fc8835ccaa2d" TYPE="ext4"
/dev/md4: UUID="4e02b889-5494-4e84-9cd7-6a31919ba64f" SEC_TYPE="ext2" TYPE="ext3"
/dev/md4p1: UUID="C6F4DCF5F4DCE927" TYPE="ntfs"
/dev/md6: UUID="0c6c4ee1-a353-4785-b579-be218b9ee377" TYPE="ext4"
/dev/md0: UUID="6f894609-133f-4b38-9314-7ec36d4bdc08" TYPE="ext4"
/dev/md3: UUID="761ab9ab-d9cf-4d35-a7fa-f37aed8ef8e5" TYPE="ext4"
/dev/md7: UUID="82de017f-a0dc-4d4a-a68b-2efb2158e219" TYPE="ext4"
/dev/mapper/vg_haweater-lv_root: UUID="177c7b5a-561a-4c6d-9436-5bf07053fe1e" TYPE="ext4"
/dev/mapper/vg_haweater-lv_swap: UUID="c88d92b8-5a7c-4c82-8728-987d82d0b271" TYPE="swap"
/dev/md6p1: LABEL="System Reserved" UUID="BC6809F86809B1E6" TYPE="ntfs"
/dev/md6p2: UUID="0E4816814816682B" TYPE="ntfs"
/dev/md5p1: LABEL="System Reserved" UUID="5018137E18136270" TYPE="ntfs"
/dev/md5p2: UUID="9A701F32701F151B" TYPE="ntfs"
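The mdadm.conf UUIDs and the blkid UUIDs above are the same 128-bit values printed with different grouping: mdadm uses four colon-separated groups of 8 hex digits, blkid the dashed 8-4-4-4-12 form. A small sketch to convert one to the other for comparison:

```shell
# Convert an mdadm-style array UUID (four colon-separated groups of 8
# hex digits) into the dashed 8-4-4-4-12 form that blkid prints.
md_uuid="3506e6c2:b59b44b2:2bd3df13:7f6c48cc"   # /dev/md3 from mdadm.conf
hex=$(printf '%s' "$md_uuid" | tr -d ':')       # 32 hex digits
blkid_uuid=$(printf '%s' "$hex" | sed -E 's/^(.{8})(.{4})(.{4})(.{4})(.{12})$/\1-\2-\3-\4-\5/')
printf '%s\n' "$blkid_uuid"                     # 3506e6c2-b59b-44b2-2bd3-df137f6c48cc
```

This matches the UUID blkid reports for /dev/sdd1, /dev/sde1, /dev/sdf1 and /dev/sdg1: the array UUID identifies the array as a whole, while UUID_SUB identifies the individual member.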

Last edited by bluefish1; 05-23-2012 at 02:55 PM.
 
Old 05-23-2012, 05:10 PM   #7
anomie
Senior Member
 
Registered: Nov 2004
Location: Texas
Distribution: RHEL, Scientific Linux, Debian, Fedora, Lubuntu, FreeBSD
Posts: 3,930
Blog Entries: 5

Rep: Reputation: Disabled
Thanks for posting the larger image - much easier to read. I apologize, but I don't totally understand the architecture you have in place. (The image is helpful, but I hesitate to give _any_ advice on something I do not follow.)

My DM-Multipath comments may have been out of line. I assumed you had multiple paths to devices because of the duplicated UUIDs, but now I'm not sure what I am looking at.
 
Old 05-23-2012, 08:43 PM   #8
bluefish1
Member
 
Registered: Apr 2004
Location: PA
Distribution: CentOS 6
Posts: 47

Original Poster
Rep: Reputation: 0
I don't think my configuration is out of the ordinary. I just need to know how to define the RAID volumes in the udev rule.
 
Old 05-23-2012, 10:19 PM   #9
bluefish1
Member
 
Registered: Apr 2004
Location: PA
Distribution: CentOS 6
Posts: 47

Original Poster
Rep: Reputation: 0
Will this work?

# This prints out UUID of the /DEV/sd? it parses.
# blkid -o value -s UUID

# HOST SYSTEM OS
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID", RESULT=="6a2dc245-c87c-45b9-b2e0-819282889a37", NAME="sda1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID", RESULT=="044d3adc-ebf1-40c1-8b34-96c1726d14de", NAME="sda2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID", RESULT=="b0db3a7f-82ed-4ac2-a3d9-7395ffb96e04", NAME="sda3"

# Raid 5
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="f1d04a29-fecb-fbf1-bcf6-4eaff4c6842b", NAME="sdb1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="09e56f2b-55b1-360e-1308-acfa7dd6e488", NAME="sdb2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="61edf730-a581-fae6-ea9a-feb3c477ef83", NAME="sdc1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="2a642833-5006-feaa-19e8-ff547cde372e", NAME="sdc2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="1642e40c-c0d0-dbfd-e09e-1a68e4e4161b", NAME="sdd1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="d145aa2e-a082-12b3-7644-83bdfaef5114", NAME="sdd2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="22d56cdc-dd0c-ddef-abf1-f6e2d3e67be9", NAME="sde1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="5087c6b6-1c70-375e-440f-2b3ca41c79c6", NAME="sde2"

# BACKUP Server Drive Mount
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID", RESULT=="8946a854-1e9b-46b1-9fe8-4b318ebebab7", NAME="sdf1"

# Raid 1
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="ba22bec9-750c-c207-4486-893655b347f2", NAME="sdg1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="910c4220-bb95-a561-9977-3e419cc8c137", NAME="sdg2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="f9559f80-459e-2575-4795-4b451e8c3047", NAME="sdg3"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="1a326e2f-bf2b-3b82-d9db-9090490b9968", NAME="sdg5"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="7ea7fe8c-4e02-3459-d7fd-43ddb524e906", NAME="sdg6"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="b2f7c6d2-6e18-c359-cac7-5cbc2f5f75d4", NAME="sdg7"

KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="579d7357-6cd4-c918-371b-ba3178df5da7", NAME="sdh1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="a027572b-aa57-b2c0-24c4-4a6ada7b55fa", NAME="sdh2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="e13f9f67-ffb0-73a3-d420-f1ad87f39447", NAME="sdh3"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="16fa06de-f368-b8e6-fff3-df1061668554", NAME="sdh5"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="e978f199-da30-8613-0218-8a6568451b59", NAME="sdh6"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID_SUB", RESULT=="edc631fd-097d-1294-418f-a3935dad295f", NAME="sdh7"

# Unused
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID", RESULT=="1c666101-e599-413d-a646-cc8f0f42d6cc", NAME="sdi1"
 
Old 10-05-2012, 09:05 PM   #10
bluefish1
Member
 
Registered: Apr 2004
Location: PA
Distribution: CentOS 6
Posts: 47

Original Poster
Rep: Reputation: 0
Still looking for an answer here.
The issue is that the boot drive moves on each reboot. That would not be a problem except that I am mounting a drive into a virtual system mapped by device name. All of my other drives are in RAID sets, so they are not affected, as the system knows how to handle them.

I created this udev rule (20-persistent-naming), but it has no effect:
# HOST SYSTEM OS
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID", RESULT=="6a2dc245-c87c-45b9-b2e0-819282889a37", NAME="sda1"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID", RESULT=="044d3adc-ebf1-40c1-8b34-96c1726d14de", NAME="sda2"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID", RESULT=="b0db3a7f-82ed-4ac2-a3d9-7395ffb96e04", NAME="sda3"

# BACKUP Server Drive Mount
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID", RESULT=="8946a854-1e9b-46b1-9fe8-4b318ebebab7", NAME="sdb1"

# 2.5" drive -- unused
KERNEL=="sd*", BUS=="scsi", PROGRAM=="blkid -o value -s UUID", RESULT=="1c666101-e599-413d-a646-cc8f0f42d6cc", NAME="sdc1"
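One plausible reason the rule has no effect: udev's NAME key is intended for naming the node as it is created, and renaming kernel-named block devices this way is unreliable (later udev versions ignore NAME for block devices outright). A hedged alternative sketch: create a stable symlink instead, from a rules file that sorts after 60-persistent-storage.rules so that ENV{ID_FS_UUID} has already been imported, then point the guest mapping at the symlink:

```
# Hypothetical /etc/udev/rules.d/99-local.rules (the 99- prefix makes
# it run after 60-persistent-storage.rules, which sets ID_FS_UUID).
# Creates /dev/backup1 -> whichever sdX1 currently holds this filesystem.
KERNEL=="sd*", ENV{ID_FS_UUID}=="8946a854-1e9b-46b1-9fe8-4b318ebebab7", SYMLINK+="backup1"
```

Alternatively, /dev/disk/by-uuid/8946a854-1e9b-46b1-9fe8-4b318ebebab7 is already maintained by the stock persistent-storage rules for exactly this purpose and needs no custom rule at all.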

Last edited by bluefish1; 10-05-2012 at 09:10 PM.
 
  

