Linux - Server
I need to know whether I can fix this, or if I should bite the bullet and start reloading my DVD backups. Again. I have 90-95% of it backed up, but it's about 1.6 TB of data on hundreds of DVDs, so you know how painful reloading is.
I have six 400 GB SATA drives in a RAID5 array mounted at /home. The system is Ubuntu 6.06 Server, kernel 2.6.15, with mdadm (I don't know the version). I recently installed a new motherboard with more on-board SATA connectors, as I was also planning to start adding more drives. The plan was to add a 3-bay enclosure that can hold 5 drives, set up three 500 GB drives in another RAID5 array now, and expand to five later. There were two issues with this, both of which I now realize could probably have been resolved if I had simply taken the time to learn how to compile a new kernel. The network drivers weren't loaded when it booted (two interfaces on the board), so I had to use a card, and I remember reading that I would need a newer kernel than was available in the apt repository for Ubuntu 6.06 to expand an array.
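For what it's worth, expanding an md RAID5 onto an extra disk looks roughly like this; reshape support for RAID5 arrived around kernel 2.6.17, which is why stock 6.06 couldn't do it. A sketch only, with /dev/sdX standing in for the new drive:
Code:
# add the new disk as a spare, then reshape the array onto it
mdadm --add /dev/md0 /dev/sdX
mdadm --grow /dev/md0 --raid-devices=7
# watch the reshape progress
cat /proc/mdstat
# afterwards, grow the filesystem into the new space (ext2/ext3 shown)
resize2fs /dev/md0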
So, instead of compiling a new kernel, I decided to do a fresh install of 6.10 Server. During install there was some problem with DHCP, and it took me back to the menu. I got it sorted out, but didn't realize until it finished that it had managed to skip several sections of the installation, including user setup. With no login, I ran the recovery. When it finished, it looked OK: mdadm showed the device as /dev/md0 (what it had been), and it mounted fine. If everything else had been fine, it would have been what I wanted; however, the install had missed other things besides user accounts. For example, not only were there no apt sources configured, there were no man pages installed.
Another reinstall, all the way through this time, but now mdadm didn't set the array up correctly. Here is the current status:
Quote:
# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sun Apr 15 19:41:18 2007
Raid Level : raid5
Array Size : 1953556480 (1863.06 GiB 2000.44 GB)
Device Size : 390711296 (372.61 GiB 400.09 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Apr 16 02:17:39 2007
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Number Major Minor RaidDevice State
0 8 64 0 active sync /dev/sde
1 8 96 1 active sync /dev/sdg
2 8 80 2 active sync /dev/sdf
3 8 48 3 active sync /dev/sdd
4 8 128 4 active sync /dev/sdi
5 8 112 5 active sync /dev/sdh
Quote:
Disk /dev/md0: 2000.4 GB, 2000441835520 bytes
255 heads, 63 sectors/track, 243206 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/md0p1 1 48641 390708801 fd Linux raid autodetect
The only other thing: last night it showed the array as degraded and resyncing one drive, and the resync finished. What should my next step be?
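One way to see what has happened here is to compare the md superblocks on the whole disks with any on the partitions; if the array was originally built from /dev/sd?1 partitions but has now been assembled from the whole disks, --examine will show it. A diagnostic sketch using the device names from the output above:
Code:
# look for an md superblock on a whole disk and on its first partition
mdadm --examine /dev/sdd
mdadm --examine /dev/sdd1
# repeat for the other members; the per-device size above (372.61 GiB)
# and the md0p1 partition showing through in fdisk hint at whole-disk members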
Did you build a new mdadm.conf as a result of this, or did you keep the one that had been on it? You didn't run mdadm --create, did you? Where did the current mdadm.conf come from (auto-generated, or did you make it?), and what are its contents?
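For comparison, the auto-generated config on Ubuntu normally lives in /etc/mdadm/mdadm.conf, and its ARRAY lines can be regenerated from a running array; a sketch:
Code:
# print ARRAY lines for whatever is currently assembled
mdadm --detail --scan
# the Ubuntu installer/initramfs config usually lives here
cat /etc/mdadm/mdadm.conf
# a typical line pins the members by UUID, e.g.:
# ARRAY /dev/md0 level=raid5 num-devices=6 UUID=...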
Just a guess...
I've read a German thread about this very same error.
Someone had built a RAID array from /dev/sda1, /dev/sdb1, and so on.
After a kernel update his RAID seemed OK, but mdadm showed /dev/sda, /dev/sdb, and so on as members; fdisk -l would still show that /dev/sda1, /dev/sdb1 were there.
There was also the same discrepancy between device and array size.
His solution was to zero the superblocks on the false members /dev/sda, /dev/sdb, ... and reboot.
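Spelled out, that recipe would look something like the following (a sketch using the German thread's device names; the array has to be stopped first so the members aren't busy):
Code:
mdadm --stop /dev/md0
# zero only the superblocks on the whole-disk "false members";
# the real superblocks on /dev/sda1, /dev/sdb1, ... are left intact
mdadm --zero-superblock /dev/sda /dev/sdb
reboot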
That German thread describes exactly what mine is doing. If you don't mind translating the solution, I'd be grateful. I tried:
Quote:
# mdadm --zero-superblock /dev/sd[d-i]
mdadm: Couldn't open /dev/sdd for write - not zeroing
mdadm: Couldn't open /dev/sde for write - not zeroing
mdadm: Couldn't open /dev/sdf for write - not zeroing
mdadm: Couldn't open /dev/sdg for write - not zeroing
mdadm: Couldn't open /dev/sdh for write - not zeroing
mdadm: Couldn't open /dev/sdi for write - not zeroing
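Those "Couldn't open ... for write" errors usually mean the devices are still held open by the running array; the superblock can only be zeroed once the array is stopped. A sketch of the order:
Code:
mdadm --stop /dev/md0
# caution: if the array genuinely was created on the whole disks,
# this wipes its only metadata
mdadm --zero-superblock /dev/sd[d-i]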
When I was messing with mdadm just after installing it, it once created several arrays out of seemingly random drives on my system. How about reassembling it by hand? Have you tried that?
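By hand, that would be something like the following (a sketch; the member names assume the array was originally built on partitions):
Code:
mdadm --stop /dev/md0
# assemble explicitly from the partition members instead of letting
# auto-detection grab the whole disks
mdadm --assemble /dev/md0 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1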
Um, so it may be worse now. I stopped the array and assembled it, but it came up the same as before, with /dev/md0p1 in fdisk. I stopped it again and tried the zero-superblock, since last time I had run the commands out of order and hadn't stopped the array first. It worked. Then I tried to assemble, and it said:
Quote:
# mdadm --assemble /dev/md0 /dev/sd[e-i]
mdadm: no recogniseable superblock
I rebooted, and it did the same thing. I'll point out that when it booted, it made md0 out of sd[a-c] (the 500s) and md1 out of 4 of the 6 400s. After stopping both, assemble gave me the "no recogniseable superblock" error for both arrays, even though I didn't run zero-superblock on the sd[a-c] array.
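At this point it would help to see what md metadata, if any, is left on each device; a hedged diagnostic loop over both sets of drives (errors for nonexistent partitions are expected):
Code:
# dump whatever md superblocks remain on the whole disks and first partitions
for d in /dev/sd[a-i]; do
    mdadm --examine "$d"
    mdadm --examine "${d}1"
done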
I don't know the consequences of not specifying the RAID level during an assemble operation. Also, is that a typo, or did you really use sd[e-i]? Or did you type something different from what you're reporting here?
Basically, what is discussed there is recovery of a RAID5 after zeroing all the superblocks of the included partitions; it seems to work.
I've tested it with VMware and a RAID1: I killed all the superblocks, and mdadm would not assemble/start the array. Then I tried the above-mentioned --create and, lo and behold, I could mount it; no data was lost. mdadm complained about a pre-existing filesystem, but I forced it to do its magic and it worked.
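In command form, that test was roughly the following (a sketch of the recreate-over-existing-data trick; the device names are hypothetical, and the level, member order, and count must match the original array exactly, so rehearse it in a VM as described above):
Code:
# recreate the array in place; with matching geometry the data blocks
# line up and the filesystem underneath survives
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mdadm warns that the members appear to contain a filesystem;
# confirm only if you are sure of the original layout
mount /dev/md0 /mnt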