LinuxQuestions.org
Old 05-02-2012, 10:31 PM   #1
Pyro666, LQ Newbie (registered May 2012)
RAID 6 issues after upgrade to 12.04... and also after a fresh install. :(


Hi All,

I'm new here and still a beginner, so please be nice if I ask stupid questions. (Oops, this should have been in the Linux Software section; can a moderator move it? Thanks.)

Well, I'm having some trouble with my RAID 6 setup. To give you some background: before these problems I had a fully working Ubuntu 11.04 Desktop running a 12x2TB RAID 6 setup, all working perfectly with no issues. That was my second Ubuntu RAID setup; the first was an 8x1TB RAID 5 that I also got working fine, though I still have a lot to learn. That's the good part of the story.

Now for the bad part. Stupid me decided to upgrade to 11.10 and then to 12.04, and that is where my problems started. The upgrade to 11.10 went fine and the RAID stayed intact, but when I upgraded to 12.04 I lost the RAID and things went from bad to worse. I should have remembered: if it ain't broke, don't bloody touch it.

12.04 itself installed OK, but with no RAID. At first there was no mdadm at all, so I installed it, and that is where the problems started. After that the machine would not even boot correctly: it hangs on a blank light-purplish screen (I don't really know what that is). If I restart with Ctrl-Alt-Del I get into GRUB; selecting the default entry takes me to a screen saying one of the RAID arrays is degraded and asking whether to continue (y/N), but whichever I choose it drops me into a BusyBox shell at an "(initramfs)" prompt, where I have to type exit and then quit to reach the login screen. That is the only way I can get in, even after a restart. I spent hours on all sorts of pages trying to get this working again, with no luck.

So, figuring that I must have (and in fact did) screw it up totally, I decided I might as well do a fresh install of 12.04. That is what I did. It installed fine and booted OK, but then I installed mdadm, rebooted, and found myself in exactly the same place as before the fresh install, booting problems and all, so nothing was gained from it.

When I installed mdadm I got the output below:

Code:
 * Starting MD monitoring service mdadm --monitor                               
mdadm: only give one device per ARRAY line: /dev/md/20TB and Raid
mdadm: only give one device per ARRAY line: /dev/md/20TB and 6
mdadm: only give one device per ARRAY line: /dev/md/20TB and Raid
mdadm: only give one device per ARRAY line: /dev/md/20TB and 6
mdadm: only give one device per ARRAY line: /dev/md/20TB and Raid
mdadm: only give one device per ARRAY line: /dev/md/20TB and 6
mdadm: only give one device per ARRAY line: /dev/md/20TB and Raid
mdadm: only give one device per ARRAY line: /dev/md/20TB and 6
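My guess is that the spaces in the array name are what mdadm is choking on: mdadm.conf is whitespace-delimited, so a name like "20TB Raid 6" on an ARRAY line gets read as a device plus two extra tokens ("Raid" and "6"), which matches the errors above exactly. Here is a sketch of the sort of fix I mean, demonstrated on a throwaway copy; the /dev/md0 device name and the UUID-only replacement line are my guesses, not anything mdadm generated:

```shell
# Demonstration on a throwaway file; the heredoc reproduces the broken
# ARRAY lines from mkconf. In practice you would edit /etc/mdadm/mdadm.conf
# (keep a backup!) and then run "sudo update-initramfs -u" afterwards.
cat > mdadm.conf.test <<'EOF'
ARRAY /dev/md/20TB Raid 6 metadata=1.2 UUID=8c8142a8:f05a16ac:2cc9128c:3312cf1c name=:20TB Raid 6
   spares=4
ARRAY /dev/md/20TB Raid 6 metadata=1.2 UUID=8c8142a8:f05a16ac:2cc9128c:3312cf1c name=:20TB Raid 6
EOF
# Drop the malformed ARRAY lines and the stray "spares=" continuation...
sed -i '/^ARRAY /d; /^[[:space:]]*spares=/d' mdadm.conf.test
# ...and replace them with one well-formed line keyed on the UUID alone.
echo 'ARRAY /dev/md0 metadata=1.2 UUID=8c8142a8:f05a16ac:2cc9128c:3312cf1c' >> mdadm.conf.test
cat mdadm.conf.test
```

The UUID identifies the array regardless of device names, so a UUID-only line sidesteps the whole spaces-in-the-name problem.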
Anyway, sorry for that boring part; now for the fun of getting this RAID up and running again. Hopefully someone can help, because I'm stumped on where to go from here.


So, apart from the painful way I have to get into Ubuntu, here is what I currently have.

Disk Utility states that the RAID is "not running, partially assembled". No numbered drives are listed under Edit Components.

Quote:
sudo blkid
/dev/sda1: UUID="30decba4-afc4-4a47-8454-3c5295c8a4e9" TYPE="ext4"
/dev/sda5: UUID="b4e531b9-20e7-4096-a261-942841f237e1" TYPE="swap"
/dev/sda6: UUID="2ac05c62-2515-420b-aad6-a37623135e62" TYPE="ext4"
/dev/sda7: UUID="0fc56403-0f86-4344-a172-6bdc92f5138a" TYPE="ext4"
/dev/sdb1: UUID="8c8142a8-f05a-16ac-2cc9-128c3312cf1c" UUID_SUB="fd4ae176-0ec2-4f1b-33cf-d93da5099d38" LABEL=":20TB Raid 6" TYPE="linux_raid_member"
/dev/sdc1: UUID="8c8142a8-f05a-16ac-2cc9-128c3312cf1c" UUID_SUB="b7409352-aff3-b356-2a93-5e5e38dbda37" LABEL=":20TB Raid 6" TYPE="linux_raid_member"
/dev/sdd1: UUID="8c8142a8-f05a-16ac-2cc9-128c3312cf1c" UUID_SUB="29a07134-1d4b-6468-1d0b-b8f8d56edbac" LABEL=":20TB Raid 6" TYPE="linux_raid_member"
/dev/sde1: UUID="8c8142a8-f05a-16ac-2cc9-128c3312cf1c" UUID_SUB="cb13aa13-25e5-4f41-9eb7-13aa1ae2be18" LABEL=":20TB Raid 6" TYPE="linux_raid_member"
/dev/sdf1: UUID="8c8142a8-f05a-16ac-2cc9-128c3312cf1c" UUID_SUB="aa5b9c19-13ea-f424-12ee-3f3b00dd5587" LABEL=":20TB Raid 6" TYPE="linux_raid_member"
/dev/sdg1: UUID="8c8142a8-f05a-16ac-2cc9-128c3312cf1c" UUID_SUB="8cfaf1e0-1783-c2fc-a538-77f86fb75a62" LABEL=":20TB Raid 6" TYPE="linux_raid_member"
/dev/sdh1: UUID="8c8142a8-f05a-16ac-2cc9-128c3312cf1c" UUID_SUB="5902e566-941b-89df-da0a-065f6a0cc5a9" LABEL=":20TB Raid 6" TYPE="linux_raid_member"
/dev/sdj1: UUID="8c8142a8-f05a-16ac-2cc9-128c3312cf1c" UUID_SUB="7bb77084-44a9-3f05-6013-f99794cd4efb" LABEL=":20TB Raid 6" TYPE="linux_raid_member"
/dev/sdi1: UUID="8c8142a8-f05a-16ac-2cc9-128c3312cf1c" UUID_SUB="4778ea59-4954-7bb8-57ac-541eedab8410" LABEL=":20TB Raid 6" TYPE="linux_raid_member"
/dev/sdk1: UUID="8c8142a8-f05a-16ac-2cc9-128c3312cf1c" UUID_SUB="5d6c6cd5-dc7d-9b2c-d24a-ed2b2abcd07c" LABEL=":20TB Raid 6" TYPE="linux_raid_member"
/dev/sdm1: UUID="8c8142a8-f05a-16ac-2cc9-128c3312cf1c" UUID_SUB="310b75b6-310a-c3dd-326a-2a74adf47509" LABEL=":20TB Raid 6" TYPE="linux_raid_member"
/dev/sdl1: UUID="8c8142a8-f05a-16ac-2cc9-128c3312cf1c" UUID_SUB="572e01f6-026a-7ce5-a1dd-fbdec82accfe" LABEL=":20TB Raid 6" TYPE="linux_raid_member"
fdisk -l
Code:
sudo fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006003c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      976895      487424   83  Linux
/dev/sda2          978942   976771071   487896065    5  Extended
/dev/sda5          978944     4882431     1951744   82  Linux swap / Solaris
/dev/sda6         4884480   102539263    48827392   83  Linux
/dev/sda7       102541312   976771071   437114880   83  Linux

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d9e7f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63  3907024064  1953512001   fd  Linux RAID autodetect

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00023d5b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1              63  3907024064  1953512001   fd  Linux RAID autodetect

Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001b5c0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1              63  3907024064  1953512001   fd  Linux RAID autodetect

Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005539e

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1              63  3907024064  1953512001   fd  Linux RAID autodetect

Disk /dev/sdf: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000467fd

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1              63  3907024064  1953512001   fd  Linux RAID autodetect

Disk /dev/sdg: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003c224

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1              63  3907024064  1953512001   fd  Linux RAID autodetect

Disk /dev/sdh: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00009d12

   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1              63  3907024064  1953512001   fd  Linux RAID autodetect

Disk /dev/sdj: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000aa75a

   Device Boot      Start         End      Blocks   Id  System
/dev/sdj1              63  3907024064  1953512001   fd  Linux RAID autodetect

Disk /dev/sdi: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00003471

   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1              63  3907024064  1953512001   fd  Linux RAID autodetect

Disk /dev/sdk: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d9168

   Device Boot      Start         End      Blocks   Id  System
/dev/sdk1              63  3907024064  1953512001   fd  Linux RAID autodetect

Disk /dev/sdm: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000bff63

   Device Boot      Start         End      Blocks   Id  System
/dev/sdm1              63  3907024064  1953512001   fd  Linux RAID autodetect

Disk /dev/sdl: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c8807

   Device Boot      Start         End      Blocks   Id  System
/dev/sdl1              63  3907024064  1953512001   fd  Linux RAID autodetect
cat /proc/mdstat
Quote:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdd1[1](S) sdc1[2](S) sdb1[3](S) sde1[0](S)
7814043908 blocks super 1.2
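From what I've read, an inactive array whose members are all flagged (S) means the kernel only grabbed some of the disks at boot (four of the twelve here) and parked them as spares without starting the array. The usual recovery sketch is to stop the half-assembled array and let mdadm rescan; assembling does not rewrite the data. The run() wrapper and DRY_RUN flag below are my own additions so the commands can be previewed first (set DRY_RUN=0 to actually execute them):

```shell
#!/bin/sh
# Preview-first sketch: stop the partially assembled md127, then let mdadm
# reassemble the array from the superblocks it finds on the member disks.
# DRY_RUN=1 (the default here) only prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"            # show what would run
    else
        sudo "$@"              # actually run it
    fi
}
run mdadm --stop /dev/md127
run mdadm --assemble --scan --verbose
run sh -c 'cat /proc/mdstat'   # check the result afterwards
```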

cat /etc/fstab
Quote:
cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda6 during installation
UUID=2ac05c62-2515-420b-aad6-a37623135e62 / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda1 during installation
UUID=30decba4-afc4-4a47-8454-3c5295c8a4e9 /boot ext4 defaults 0 2
# /home was on /dev/sda7 during installation
UUID=0fc56403-0f86-4344-a172-6bdc92f5138a /home ext4 defaults 0 2
# swap was on /dev/sda5 during installation
UUID=b4e531b9-20e7-4096-a261-942841f237e1 none swap sw 0 0
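One more thing visible in that fstab: there is no entry for the array's filesystem, so even once the array assembles it will not mount at boot. Something like the line below would do it once things work again; the /dev/md0 name, mount point, and ext4 type are my assumptions, and "nofail" lets the system boot even if the array is absent:

```
# hypothetical /etc/fstab entry for the array (adjust device, mount point, fs type)
/dev/md0  /mnt/20tb  ext4  defaults,nofail  0  2
```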
cat /etc/mdadm/mdadm.conf
Quote:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
#ARRAY /dev/md/20TB Raid 6 metadata=1.2 UUID=8c8142a8:f05a16ac:2cc9128c:3312cf1c name=:20TB Raid 6

# This file was auto-generated on Thu, 03 May 2012 01:24:20 +1000
# by mkconf $Id$
And here is where I think the problem comes from. When I run
sudo /usr/share/mdadm/mkconf
Quote:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/20TB Raid 6 metadata=1.2 UUID=8c8142a8:f05a16ac:2cc9128c:3312cf1c name=:20TB Raid 6
spares=4
ARRAY /dev/md/20TB Raid 6 metadata=1.2 UUID=8c8142a8:f05a16ac:2cc9128c:3312cf1c name=:20TB Raid 6
You can see that the same array is listed twice.
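A quick check that this really is one array listed twice, rather than two different arrays, is that both ARRAY lines carry the same UUID. The heredoc below just stands in for the mkconf output above:

```shell
# Extract the UUID field from each ARRAY line of the mkconf output and
# de-duplicate; one surviving line means one array, listed twice.
uuids=$(grep -o 'UUID=[^ ]*' <<'EOF' | sort -u
ARRAY /dev/md/20TB Raid 6 metadata=1.2 UUID=8c8142a8:f05a16ac:2cc9128c:3312cf1c name=:20TB Raid 6
   spares=4
ARRAY /dev/md/20TB Raid 6 metadata=1.2 UUID=8c8142a8:f05a16ac:2cc9128c:3312cf1c name=:20TB Raid 6
EOF
)
echo "$uuids"   # prints the single shared UUID
```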


So if anyone out there can point me in the right direction and help, I would appreciate it a lot. If I've missed some info, tell me what you need.

I will try anything at this stage.

I must admit, though, this is a good way to learn about something: break it and try to put it back together. I bet a lot of us have been there before, more than once, asking ourselves wtf we did that for in the first place.

Thanks in advance.

Last edited by Pyro666; 05-02-2012 at 11:01 PM.
 
Old 05-05-2012, 04:40 PM   #2
Pyro666 (original poster)
After trying many more things I eventually stuffed the RAID array so badly that it could not be assembled at all, so I decided to cut my losses and start from scratch: a new install of 12.04, recreate the array, and copy everything across.

I tried this, and eventually found out that half my problems were down to a bug in Ubuntu 12.04's mdadm, which 11.10 has as well. I wish I had never upgraded to either one. I do wonder why they release new versions that fix things and add features but break others.

The booting issue I was having is mentioned here, along with how to fix it, in case anyone else hits it. As for how to recreate a RAID array from a previous version, I still have no idea how to do that.
 
  

