LinuxQuestions.org
Linux - Server: This forum is for the discussion of Linux software used in a server-related context.

Old 02-28-2012, 10:12 AM   #1
velvetbulldozer
LQ Newbie
 
Registered: Feb 2012
Posts: 14

Rep: Reputation: Disabled
Raid: How to kill a connection to an array/volume


Hi guys,
I keep getting good answers here, so I keep asking....
Yesterday my RAID 5 array failed entirely after I finally managed to attach a new drive.
Today I have a two-disk RAID 1 array in the lab. I want to add one spare, but it doesn't let me. I tried to stop the array and unmount the volume, but that doesn't work either:

"Error stopping array: mdadm exited with exit code 1: mdadm: Cannot get exclusive access to /dev/md0: Perhaps a running process, mounted filesystem or active volume group?"

How can I kill all the connections to the array without rebooting?
Should I be allowed to add a spare to a RAID 1 array, or any type of RAID array, while the array is hot? That sounds like a nice feature, though.

Thanks,
Dan
 
Old 02-28-2012, 01:16 PM   #2
T3RM1NVT0R
Senior Member
 
Registered: Dec 2010
Location: Internet
Distribution: Linux Mint, Ubuntu, SLES, CentOS
Posts: 1,790

Rep: Reputation: 324

Hi velvetbulldozer,

Welcome to LQ!!

Paste the output of:

1.
Code:
df -h
2.
Code:
cat /etc/fstab
 
Old 02-28-2012, 01:33 PM   #3
velvetbulldozer
LQ Newbie
 
Registered: Feb 2012
Posts: 14

Original Poster
Rep: Reputation: Disabled
Hi, here is the requested info. Thanks.


[root@linux ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/vg_linux-lv_root   50G  5.0G   44G  11% /
tmpfs                         3.9G  100K  3.9G   1% /dev/shm
/dev/sda1                     485M   69M  392M  15% /boot
/dev/mapper/vg_linux-lv_home  400G  9.2G  370G   3% /home
/dev/md127                    459G  774M  435G   1% /media/linux.tank

# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Feb 20 08:49:15 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_linux-lv_root / ext4 defaults 1 1
UUID=baee4f8e-63b7-4628-b46b-a02bd2bd8185 /boot ext4 defaults 1 2
/dev/mapper/vg_linux-lv_home /home ext4 defaults 1 2
/dev/mapper/vg_linux-lv_swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
[root@linux ~]#
 
Old 02-28-2012, 01:41 PM   #4
T3RM1NVT0R
Senior Member
 
Registered: Dec 2010
Location: Internet
Distribution: Linux Mint, Ubuntu, SLES, CentOS
Posts: 1,790

Rep: Reputation: 324

Code:
/dev/md127 459G 774M 435G 1% /media/linux.tank
This appears to be your RAID device, which is mounted on /media/linux.tank. Just to confirm, let me know the output of the following command:

Code:
cat /proc/mdstat
If this turns out to be the RAID device we are looking for, then you first have to unmount /media/linux.tank and then stop the RAID array.
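A minimal sketch of that sequence, using the device and mount point from this thread. The fuser/lsof step is an assumption on my part for finding whatever still holds the mount open, which is also how you "kill connections" without rebooting; the vgchange line only applies if the array is used as an LVM physical volume:

```shell
# See which processes still have files open on the mount point
fuser -vm /media/linux.tank        # or: lsof +f -- /media/linux.tank

# Optionally terminate them (careful: -k sends a signal to those processes)
# fuser -km /media/linux.tank

# If the array backs an LVM volume group, deactivate it first:
# vgchange -an <vgname>

# Now unmount the filesystem and stop the array
umount /media/linux.tank
mdadm --stop /dev/md0
```

These commands need root and a real block device, so treat them as a sketch rather than something to paste blindly.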
 
Old 02-28-2012, 02:21 PM   #5
velvetbulldozer
LQ Newbie
 
Registered: Feb 2012
Posts: 14

Original Poster
Rep: Reputation: Disabled
reply

...

[root@linux ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb1[3] sdc1[1] sdd1[0]
488385390 blocks super 1.2 [3/2] [UU_]
[=====>...............] recovery = 27.2% (133158080/488385390) finish=521.2min speed=11356K/sec

unused devices: <none>



 
Old 02-28-2012, 02:26 PM   #6
T3RM1NVT0R
Senior Member
 
Registered: Dec 2010
Location: Internet
Distribution: Linux Mint, Ubuntu, SLES, CentOS
Posts: 1,790

Rep: Reputation: 324

Code:
[root@linux ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sdb1[3] sdc1[1] sdd1[0]
488385390 blocks super 1.2 [3/2] [UU_]
[=====>...............] recovery = 27.2% (133158080/488385390) finish=521.2min speed=11356K/sec
Alright, so md127 is the RAID array we should be looking at. However, from the output it appears that a recovery is in progress. Did you add or remove any device from the RAID array, or did you initiate the rebuild yourself?
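The recovery progress shown in /proc/mdstat above can be watched until it finishes (the array name is taken from this thread):

```shell
# Refresh the rebuild status every 5 seconds; Ctrl-C to exit
watch -n 5 cat /proc/mdstat

# More detail: each member's role and state, plus the rebuild percentage
mdadm --detail /dev/md127
```

mdadm --detail needs root, so run it with sudo on most systems.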

What exactly are we looking for now?
 
Old 02-28-2012, 02:35 PM   #7
velvetbulldozer
LQ Newbie
 
Registered: Feb 2012
Posts: 14

Original Poster
Rep: Reputation: Disabled
reply...

Well, it seems that when I was trying to add a spare to the array it didn't let me, but after the reboot it found the plugged-in disk and started to rebuild. I guess it's smarter than I thought. What I actually wanted to achieve, though, was to add a spare to the array.
I am learning with USB drives, and these seem to behave unreliably. Of course, me being on a steep learning curve doesn't make it easier....

Can I normally add a spare to a RAID 1 array?

 
Old 02-28-2012, 02:40 PM   #8
T3RM1NVT0R
Senior Member
 
Registered: Dec 2010
Location: Internet
Distribution: Linux Mint, Ubuntu, SLES, CentOS
Posts: 1,790

Rep: Reputation: 324

Yes, you can add a spare to a RAID1 array. Here is the abstract from man mdadm:

Code:
       -x, --spare-devices=
              Specify the number of  spare  (eXtra)  devices  in  the  initial
              array.   Spares can also be added and removed later.  The number
              of component devices listed on the command line must  equal  the
              number of RAID devices plus the number of spare devices.
Are you doing this on a production system? I would suggest that you take a good backup before you try anything with the RAID device. It is better to be safe than sorry :-)
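To add a spare to an already-running array (as opposed to declaring spares with -x at creation time), something like this should work. /dev/md127 is the array from this thread; /dev/sde1 is a hypothetical partition on the new spare disk:

```shell
# Hot-add a spare device to the running RAID 1 array
mdadm /dev/md127 --add /dev/sde1

# Verify: the new device should be listed with the role "spare"
mdadm --detail /dev/md127
cat /proc/mdstat
```

If a member later fails, md will automatically start rebuilding onto the spare.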
 
Old 02-28-2012, 03:05 PM   #9
velvetbulldozer
LQ Newbie
 
Registered: Feb 2012
Posts: 14

Original Poster
Rep: Reputation: Disabled
thanks...

No, it's not a production environment yet. I want to build a home storage system that is not Windows-based. I will get a JBOD and two servers soon, so I am currently learning Linux. The idea is a setup with RAID 1 on the servers and possibly RAID 6 on the JBOD, which has 24 bays connected to twin LSI cards on the server side. Lots of people have advised me to run without the LSI cards and go Linux RAID + LVM all the way. I am not sure yet.
One idea would be to run the RAID cards at the lowest level, present LVM with one huge PV, and take it from there.
What would you think about that?



 
Old 02-28-2012, 03:33 PM   #10
T3RM1NVT0R
Senior Member
 
Registered: Dec 2010
Location: Internet
Distribution: Linux Mint, Ubuntu, SLES, CentOS
Posts: 1,790

Rep: Reputation: 324

Quote:
One idea would be to run the raid cards at the lowest level and present to lvm a huge PV and take it further from there.
What does that mean? Instead of creating one huge PV I would suggest creating moderate PVs, just to avoid a single point of failure. You can always expand your LV by adding a new PV whenever required; that is the benefit of using LVM.

And yes, I am with you on using RAID cards. You cannot depend on software RAID alone when you are in a production environment.
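The expand-by-adding-a-PV workflow mentioned above looks roughly like this. The vg_linux/lv_home names are taken from this thread's fstab; /dev/sdf1 is a hypothetical partition on a newly added disk:

```shell
# Initialize the new partition as an LVM physical volume
pvcreate /dev/sdf1

# Add it to the existing volume group
vgextend vg_linux /dev/sdf1

# Grow the logical volume by 100 GiB and resize the ext4
# filesystem in the same step (-r invokes fsadm/resize2fs)
lvextend -L +100G -r /dev/vg_linux/lv_home
```

This can be done online for ext4; the filesystem stays mounted throughout.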
 
  

