LinuxQuestions.org
Linux - Server This forum is for the discussion of Linux Software used in a server related context.

Old 07-13-2020, 10:43 PM   #1
Sum1
Member
 
Registered: Jul 2007
Distribution: Fedora, CentOS, and would like to get back to Gentoo
Posts: 332

Rep: Reputation: 30
Break RAID1 with lvm volume group without losing filesystem and data


I cannot recall the precise steps of my original configuration.
I thought I had noted all my actions in a log or journal but I can't find it now.

CentOS 7 Server
UEFI bios
SATA Controller Mode set to: RAID1 in bios
I configured /dev/sdc and /dev/sdd as a RAID1 array using mdadm, creating a 2TB mirrored device.
I then created a 2TB lvm physical volume on it, allocated all allocatable space to a single volume group, and formatted the logical volume with xfs.
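For reference, the original setup would have looked roughly like this. This is a reconstruction, not the poster's actual commands; the logical volume name `vol0` is an assumption, since the posts never show it:

```shell
# Build the RAID1 mirror from the two disks (destructive to both disks).
mdadm --create /dev/md125 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Layer LVM on top of the array: PV -> VG "data" -> one LV using all space.
pvcreate /dev/md125
vgcreate data /dev/md125
lvcreate -l 100%FREE -n vol0 data   # "vol0" is an assumed LV name

# Format the logical volume with xfs and mount it.
mkfs.xfs /dev/data/vol0
mount /dev/data/vol0 /mnt/data
```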

pvdisplay --

--- Physical volume ---
PV Name /dev/md125
VG Name data
PV Size <1.82 TiB / not usable 3.06 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 476899
Free PE 0
Allocated PE 476899
PV UUID dsvFH3-9eL0-cOqt-jOGJ-cNrN-Nxwk-cWGGlI


vgdisplay --

--- Volume group ---
VG Name data
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size <1.82 TiB
PE Size 4.00 MiB
Total PE 476899
Alloc PE / Size 476899 / <1.82 TiB
Free PE / Size 0 / 0
VG UUID 1lI07H-Wgbr-2NvR-qojp-aGX9-QeDl-nDt1ja


/etc/fstab shows - /dev/mapper/data /mnt/data xfs defaults 0 0

I want to save the current data on one of the drives and use the remaining drive for other data.
Is there a way to remove the RAID1 configuration without losing any data on the xfs filesystem?
Is it possible to keep the current Volume Group located and mounted at /dev/mapper/data /mnt/data but remove the underlying RAID1 configuration?

Thank you for your guidance.
 
Old 07-14-2020, 01:42 AM   #2
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,159

Rep: Reputation: 4125
Redacted - sorry, I wasn't thinking clearly.

Last edited by syg00; 07-14-2020 at 01:57 AM.
 
Old 07-14-2020, 06:34 AM   #3
Sum1
Member
 
Registered: Jul 2007
Distribution: Fedora, CentOS, and would like to get back to Gentoo
Posts: 332

Original Poster
Rep: Reputation: 30
Closing the thread.
I don't think it's possible.
The logical volume needs to be moved to another <<volume group / physical volume>> before I "fail" and "remove" the RAID1 array currently located at /dev/md125.
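That move could be sketched like this, assuming a spare disk is available; the device name `/dev/sde` is an assumption for illustration:

```shell
# Add a spare disk as a new PV and migrate the data off the mirror.
pvcreate /dev/sde                 # /dev/sde is an assumed spare disk
vgextend data /dev/sde            # grow VG "data" onto the new PV
pvmove /dev/md125 /dev/sde        # move all allocated extents off the RAID PV
vgreduce data /dev/md125          # drop the now-empty RAID PV from the VG
pvremove /dev/md125               # wipe the LVM label from it

# Only after this can /dev/md125 be stopped and the mirror disassembled
# without touching the filesystem.
```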
~~~~~~~~~~~~~~~~~~~~

If I did not use lvm, I would have been able to disassemble the RAID1 and keep using one drive --

mdadm /dev/md125 --fail /dev/sdd --remove /dev/sdd

Modify /etc/fstab to use the remaining drive:
/dev/sdc1 /mnt/data xfs defaults 0 0

Reboot, then stop the array and clear the RAID metadata (mdadm has no --destroy option; --stop plus --zero-superblock is the usual sequence):
mdadm --stop /dev/md125
mdadm --zero-superblock /dev/sdd

(Mounting the surviving member directly like that only works if the md metadata sits at the end of the device, i.e. 0.90/1.0 format; with the default 1.2 metadata the filesystem is offset from the start of the disk.)

Last edited by Sum1; 07-14-2020 at 06:54 AM.
 
Old 07-14-2020, 06:57 AM   #4
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,159

Rep: Reputation: 4125
... you should be able to stop md125, then fail one of the devices so the RAID runs degraded. Then you can fail and remove that device with mdadm. Whether you can then convert to linear I have no idea, but nominally it should run degraded until you can arrange to get things sorted properly.
Don't know how that interacts with the BIOS RAID1; I never use it.
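That degrade-and-run approach might be sketched as follows. This is hedged: it assumes a plain mdadm array (not firmware/IMSM RAID) and leaves the LVM stack on md125 untouched:

```shell
# Fail and remove one mirror leg; the array keeps running degraded.
mdadm /dev/md125 --fail /dev/sdd
mdadm /dev/md125 --remove /dev/sdd

# Optionally shrink the array to a deliberate single-disk RAID1 so it
# no longer reports as degraded; --force is required for this.
mdadm --grow /dev/md125 --raid-devices=1 --force

# Free the removed disk for other use by wiping its md metadata.
mdadm --zero-superblock /dev/sdd
```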

Edit: damn, I hate it when posts (edit) cross in flight like that.

Last edited by syg00; 07-14-2020 at 07:15 AM. Reason: strike
 
Old 07-14-2020, 07:20 AM   #5
Sum1
Member
 
Registered: Jul 2007
Distribution: Fedora, CentOS, and would like to get back to Gentoo
Posts: 332

Original Poster
Rep: Reputation: 30
Quote:
Originally Posted by syg00
...
Edit: damn, I hate it when posts (edit) cross in flight like that.
Thanks, Syg --- it gives me a boost to know I was tracking the same line of troubleshooting.
lvm got in the way on this issue.
If I had more hardware and disk storage, all the lvm flexibility would have been useful.
 
  




