I don't believe disks get UUIDs; I think partitions do. So if the disks are just plugged in with no partitions, I don't think they have UUIDs yet.
You can try blkid /dev/sde, but it will probably fail since there's no partition for it to look at.
That would work if your md0 array was assembled, but it sounds like it is not. (Which doesn't sound too good.)
Try instead
Code:
mdadm --detail --scan
to actually scan your disks looking for md superblocks and report the detail.
There is no UUID associated with a partition. (If the partition is formatted with EXT2/3/4, then there is a UUID in the file system's superblock. Yours isn't. That's my guess as to why it doesn't show.) There is a serial number associated with a disk or a SAN LUN. DM-Multipath uses that to sort things out.
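For what it's worth, here's a quick sketch of how you can check that with blkid (the device names are just examples, and the exact output depends on your blkid version):
Code:
# a partition holding an ext3/ext4 filesystem reports a filesystem UUID,
# e.g. UUID="..." TYPE="ext4"  (example device name)
blkid /dev/sda1

# a partition that is only an md member has no filesystem UUID of its own;
# newer blkid versions report it as TYPE="linux_raid_member"
blkid /dev/sde1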
'vol_id' used to scan all disks for UUID or LABEL. It appears broken in Fedora 10, and non-existent in Fedora 14.
As always, I may be wrong.
P.S. The md superblock is usually in the last 128KB of each block device that comprises a software RAID array. You could always 'dd' skip into that device and pipe it into 'hexdump -C'. Tedious.
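If you ever wanted to do that by hand, something along these lines would work (a sketch only: the device name is a placeholder, and it assumes the old 0.90 metadata, which keeps its superblock near the end of the device):
Code:
# dump the last 128KiB of a member device and eyeball it for the md superblock
# (/dev/sdXY is a placeholder; 0.90 metadata sits near the end of the device)
DEV=/dev/sdXY
SIZE=$(blockdev --getsize64 $DEV)
dd if=$DEV bs=512 skip=$(( (SIZE - 131072) / 512 )) 2>/dev/null | hexdump -C | less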
The RAID arrays are all running fine... no problem there. My question was about the way mdadm assembles the partitions to form an array.
In my case, one of my HDDs failed. I removed it, but of course Linux remapped the drives accordingly, so every drive that came after the failed one shifted by one letter (sde to sdd, sdf to sde...). mdadm is configured to assemble md6 from sde5 & sdf5, but sdf5 went missing, so it reported a degraded array.
I'd like to use UUIDs to assemble the arrays instead of sdX names, because I believe relying on device names is dangerous...
If so, how do I find the UUID of the partitions that are part of the array?
You don't. They don't have a UUID. The two block devices that comprise that "md device" both have an md Superblock near the end of the block device. Both superblocks have the same UUID. That's how it knows that the two devices are paired together, and device names can change without affecting the RAID array(s). (In your case the block device is a partition, like /dev/sdXY, but it just as easily could be a whole drive, like /dev/sdX.)
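If you want to see those paired superblocks for yourself, 'mdadm --examine' reads the superblock straight off a member device (the device names below are placeholders):
Code:
# run this against each member device -- both should report the same array UUID
mdadm --examine /dev/sdXY

# the assembled array reports that same UUID
mdadm --detail /dev/mdN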
Since one of your devices is new, you need to add it to the array, like
Code:
mdadm --manage /dev/mdN --add /dev/sdXY
Once you've done that you can, of course, watch it rebuild
Code:
cat /proc/mdstat
That'll also show you what drives are paired. A 'mdadm --detail /dev/mdN' will show you a lot more information.
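If you'd rather have the rebuild status refresh on its own, a simple (entirely optional) way is something like:
Code:
# re-run 'cat /proc/mdstat' every 5 seconds (interval is arbitrary)
watch -n 5 cat /proc/mdstat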
And yes, the UUID reported by the 'mdadm --detail --scan' is the UUID of the array.
OK, so to be sure I get it right, please consider the following scenario:
I have 2 hard drives. The first one is /dev/sda and connected to the motherboard's SATA port 1. The second drive is /dev/sdb and connected to the motherboard's SATA port 2.
sda has a partition sda5 & sdb has a partition sdb5. sda5 & sdb5 are assembled in an array as md1.
Now let's say I unplug the second drive (sdb), plug a totally new drive into the second SATA port, and replug what was "sdb" before into SATA port 3, so it now becomes sdc (and its partition sdc5). What happens to my RAID array?
Your md1 RAID array should assemble correctly. Mine did.
I did an experiment (not that I doubted the advice I was giving, but I have been known to be wrong...). I added a drive "between" my two existing drives, the same as your scenario.
So, assuming your /etc/mdadm.conf has UUID specified for each RAID array like mine, you should be fine.
Code:
[root@athlonz ~]# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=bdd8f198:ed3d0863:7f9dce92:2db94737
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=397c7395:9c3c48fa:68e69357:c9f2169f
Here is my system before the drive was added.
Code:
[root@athlonz ~]# fdisk -l
Disk /dev/sda: 1500.3 GB, 1500300828160 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00031558
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 3126 25005172+ fd Linux raid autodetect
/dev/sda3 3127 182401 1440026437+ 5 Extended
/dev/sda5 3127 182401 1440026406 fd Linux raid autodetect
Disk /dev/sdb: 1500.3 GB, 1500300828160 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0002a7c0
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 13 104391 83 Linux
/dev/sdb2 14 3126 25005172+ fd Linux raid autodetect
/dev/sdb3 3127 182401 1440026437+ 5 Extended
/dev/sdb5 3127 182401 1440026406 fd Linux raid autodetect
[root@athlonz ~]#
Code:
[root@athlonz ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Tue Mar 10 06:59:43 2009
Raid Level : raid1
Array Size : 25005056 (23.85 GiB 25.61 GB)
Used Dev Size : 25005056 (23.85 GiB 25.61 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Feb 18 18:27:37 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 397c7395:9c3c48fa:68e69357:c9f2169f
Events : 0.632582
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
[root@athlonz ~]# mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Tue Mar 10 06:59:44 2009
Raid Level : raid1
Array Size : 1440026304 (1373.32 GiB 1474.59 GB)
Used Dev Size : 1440026304 (1373.32 GiB 1474.59 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Fri Feb 18 19:30:58 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : bdd8f198:ed3d0863:7f9dce92:2db94737
Events : 0.27236
Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
1 8 21 1 active sync /dev/sdb5
[root@athlonz ~]#
After the addition of a third drive.
Code:
[root@athlonz ~]# fdisk -l
Disk /dev/sda: 1500.3 GB, 1500300828160 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00031558
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 3126 25005172+ fd Linux raid autodetect
/dev/sda3 3127 182401 1440026437+ 5 Extended
/dev/sda5 3127 182401 1440026406 fd Linux raid autodetect
Disk /dev/sdb: 500.1 GB, 500130372608 bytes
255 heads, 63 sectors/track, 60804 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 1500.3 GB, 1500300828160 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0002a7c0
Device Boot Start End Blocks Id System
/dev/sdc1 * 1 13 104391 83 Linux
/dev/sdc2 14 3126 25005172+ fd Linux raid autodetect
/dev/sdc3 3127 182401 1440026437+ 5 Extended
/dev/sdc5 3127 182401 1440026406 fd Linux raid autodetect
[root@athlonz ~]#
Code:
[root@athlonz ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Tue Mar 10 06:59:43 2009
Raid Level : raid1
Array Size : 25005056 (23.85 GiB 25.61 GB)
Used Dev Size : 25005056 (23.85 GiB 25.61 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Feb 18 20:10:26 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 397c7395:9c3c48fa:68e69357:c9f2169f
Events : 0.632582
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 34 1 active sync /dev/sdc2
[root@athlonz ~]# mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Tue Mar 10 06:59:44 2009
Raid Level : raid1
Array Size : 1440026304 (1373.32 GiB 1474.59 GB)
Used Dev Size : 1440026304 (1373.32 GiB 1474.59 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Fri Feb 18 20:03:30 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : bdd8f198:ed3d0863:7f9dce92:2db94737
Events : 0.27236
Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
1 8 37 1 active sync /dev/sdc5
[root@athlonz ~]#
OK, that makes sense. Your arrays still assemble fine because mdadm does not rely on the sdX nomenclature but on UUIDs instead. That's not my case, as you can see below:
As you can see, md6 is not assembled using UUIDs but sdX names...
md6 details
Code:
root@lhost2:~# mdadm --detail /dev/md6
/dev/md6:
Version : 0.90
Creation Time : Sat Oct 9 22:02:36 2010
Raid Level : raid1
Array Size : 1465135936 (1397.26 GiB 1500.30 GB)
Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 6
Persistence : Superblock is persistent
Update Time : Sat Feb 19 00:01:59 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 03a9b2d5:f6e2c821:85427e01:1986d539
Events : 0.38
Number Major Minor RaidDevice State
0 8 65 0 active sync /dev/sde1
1 8 81 1 active sync /dev/sdf1
fdisk of sde
Code:
root@lhost2:~# fdisk -l /dev/sde
Disk /dev/sde: 1500.3 GB, 1500301910016 bytes
30 heads, 63 sectors/track, 1550411 cylinders
Units = cylinders of 1890 * 512 = 967680 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009fd41
Device Boot Start End Blocks Id System
/dev/sde1 1 1550411 1465138363+ fd Linux raid autodetect
Can I use the UUID reported by mdadm --detail in the mdadm.conf? What happens if sde1 becomes sdg1, for example? I believe the RAID array won't assemble... right?
If you change md6 so the md driver assembles that RAID array by UUID, then it won't matter what the underlying devices are and it will assemble. If you leave it as it is now (assembling /dev/sde1 and /dev/sdf1 into /dev/md6), then no, it won't assemble.
If you are nervous about making this change, just copy your mdadm.conf to mdadm.conf.old before you make your change. That way you can easily 'mv' the old one back in place should something go wrong.
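In case it helps, here's a rough sketch of that change. I'm assuming the config lives at /etc/mdadm.conf as on my Fedora box; on Debian-based systems it may be /etc/mdadm/mdadm.conf instead. Double-check the generated ARRAY lines before you reboot, since '--detail --scan' just reports whatever happens to be assembled right now.
Code:
# keep a copy of the current config so you can 'mv' it back if needed
cp /etc/mdadm.conf /etc/mdadm.conf.old

# append UUID-based ARRAY lines for the currently assembled arrays...
mdadm --detail --scan >> /etc/mdadm.conf

# ...then edit the file and remove the old device-name-based ARRAY lines
vi /etc/mdadm.conf
If your distro builds mdadm.conf into the initramfs, you may also need to regenerate the initramfs afterwards so the change is picked up at boot time.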