Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I was having some trouble with the boot partition(s) of my RAID-1. My fstab has this as /dev/md2. When I booted from the DVD to check things, /proc/mdstat showed it as /dev/md125, not /dev/md2. I ran the following in an attempt to rename it correctly:
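The command the poster ran is not shown. For reference, a typical sequence for renaming an array in this situation would look something like the following (a sketch only; /dev/sda2 and /dev/sdb2 as the member devices are assumptions, not taken from the thread):

```shell
# Stop the array under its current, transient name.
mdadm --stop /dev/md125

# Reassemble it under the desired name. --update=name rewrites the
# name stored in the superblock (v1.x metadata only); the member
# devices shown here are assumed for illustration.
mdadm --assemble /dev/md2 --update=name --name=2 /dev/sda2 /dev/sdb2
```

Note that without a matching entry in /etc/mdadm.conf, the name can still revert on the next assembly, which is the point made later in the thread.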
You're in a recovery environment, it's just a device name - mount it and check the data. Seems you already have enough problems without inventing new ones.
It is the correct procedure, albeit transient; the place to store array configuration permanently is /etc/mdadm.conf. What is the status of the array after it is stopped? (cat /proc/mdstat, mdadm -D)
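To make the name persistent as suggested above, the running configuration can be recorded in /etc/mdadm.conf. A common way to do this (a sketch, not from the thread) is:

```shell
# Append ARRAY lines for all currently assembled arrays.
mdadm --detail --scan >> /etc/mdadm.conf

# On distributions that copy mdadm.conf into the initramfs, the
# initramfs must also be regenerated so the name is known at early
# boot, e.g. 'dracut -f' (Fedora/RHEL) or 'update-initramfs -u'
# (Debian/Ubuntu).
```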
Quote:
Originally Posted by syg00
You're in a recovery environment, it's just a device name - mount it and check the data. Seems you already have enough problems without inventing new ones.
Hmmm, never thought of that. Maybe it would have been fine once I rebooted back to that as the actual boot drive.
Quote:
Originally Posted by lvm_
It is the correct procedure, albeit transient, place to store array configuration permanently is /etc/mdadm.conf. What is the status of the array after it is stopped? (cat /proc/mdstat, mdadm -D)
In the "recovery environment" there is no mdadm.conf. On the device in question it is:
So maybe syg00 is right. The thing is, I've booted to recovery numerous times in the past on other RAID-configured systems and never noticed a /dev/md125 before. That's what threw me off -- along with numerous posts on how to (supposedly) rename the array. Also, the RAID is initially created from the setup DVD with 'mdadm --create /dev/md0 ...', not md125, but of course there is still no mdadm.conf on subsequent boots from the DVD. AND, the other partition was still /dev/md1, not something "made up". All this confused me.
Right now, I've temporarily restored the boot drive image to a non-RAID device to get the machine up and running for production. I will re-stage these on a RAID config very soon and I'll check to see what the devices look like when I boot from DVD.
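As background on the md125 name: when mdadm assembles an array whose recorded name or homehost does not match the running system (as in a rescue environment with no mdadm.conf), it falls back to a free minor number counting down from 127, which is why rescue boots often show md125/md126/md127. The name and homehost recorded in a member's superblock can be inspected directly (the member device /dev/sda2 here is an assumption):

```shell
# Show the array name (in host:name form) stored in the superblock
# of an assumed member partition.
mdadm --examine /dev/sda2 | grep -i 'name'
```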
Quote:
In the "recovery environment" there is no mdadm.conf.
Oh, missed that - then why bother? It doesn't affect what the device is called after a normal boot. The device name you are used to is stored in mdadm.conf; it is because that file is unavailable that the name has changed. But the fact that the device won't stop properly still indicates an issue.
I'm trying to run tests on various aspects of this issue. I booted from the installation DVD with only the former sda member of the RAID-1 installed. I was pretty sure I had formatted those partitions back to type 82 'Linux swap' and 83 'Linux', and fdisk shows that:
Code:
$ fdisk -l /dev/sda
Disk /dev/sda: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST32000542AS
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xb2b13a31
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 16779263 16777216 8G 82 Linux swap
/dev/sda2 * 16779264 3907029167 3890249904 1.8T 83 Linux
Re-partitioning doesn't remove the RAID superblock; use 'mdadm --zero-superblock'.
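A sketch of that cleanup, assuming /dev/sda1 and /dev/sda2 were the former members (this is destructive, so double-check the device names first):

```shell
# The array must be stopped and the partitions unmounted beforehand.
# Erase the md superblock so the kernel no longer detects the
# partitions as RAID members.
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sda2

# Verify that no RAID metadata is detected any more.
mdadm --examine /dev/sda1 /dev/sda2
```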
Good to know! I'll add that to my documentation for next time.
Meanwhile, I've formatted these drives to be RAID-1 members again and restored the backups. All seems to be working, so I'll put this RAID back into production.