Need to "refresh my memory": adding disk -> partitioning -> lvm -> format
1 Attachment(s)
Hello!
For the last 7 years my PC has had 4 SAS disks arranged in two pairs, each pair comprising one "RAID 0" array: Code:
<localhost.localdomain>.../user>lsscsi

and they're /dev/sda, on which the OS and /home are installed. The pair of TOSHIBA MK2001TRKB 2TB disks comprise the KA_RAID_1 array and they're /dev/sdb, on which an "lvm pv" partition is created and mounted on /home/user/GRAPHICS; here's a line from the "mount" output: Code:
/dev/mapper/VolGroup_graphics-lv_graphics on /home/user/GRAPHICS type ext4 ...

The old disks are developing a growing number of bad sectors, so I bought a pair of new TOSHIBA MG04SCA20EE disks and created from them a new array, KA_RAID_2, which became /dev/sdc. What I would like to do is partition and format the new pair of TOSHIBA MG04SCA20EE disks exactly like the old pair of TOSHIBA MK2001TRKB, and mount them as /home/user/GRAPHICS2 (I guess I'll need to make a different LVM volume with "2" added at the end of the name). But since I did the partitioning/formatting of the old disks about 7 years ago, I forgot what the order of operations should be, and that's what I would like the people here to remind me of.

I worked with "GParted" and with "KDE Partition Manager" (strangely, "GParted" can format, while in "KDE Partition Manager" I can't find "format"; OTOH, "KDE Partition Manager" can deal with logical volumes while "GParted" can't, or at least I can't find such a menu). Maybe there is a better GUI tool that can both format and deal with LVM? I've attached a screen grab of "GParted" where there is an option of formatting the /dev/sdc1 partition. I'm not sure which filesystem I should select: in the "mount" line I see that the old disks' (/dev/sdb1) logical volume is formatted as ext4, but if I open this partition's view in "GParted" I see the "lvm pv" filesystem type.

I need someone knowledgeable to refresh my memory as to the order of operations needed to bring the new array /dev/sdc to be partitioned/formatted exactly like the old array /dev/sdb, only the mount point will be "/home/user/GRAPHICS2".
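For reference, a minimal sketch of the usual order of operations (partition -> PV -> VG -> LV -> filesystem -> mount), assuming the new array is /dev/sdc and mirroring the "2"-suffixed naming planned above; the exact partitioning commands and names here are assumptions, not taken from the thread:

```shell
# 1. Partition the new array and flag the partition for LVM use
parted /dev/sdc --script mklabel gpt mkpart primary 0% 100% set 1 lvm on

# 2. Initialize the partition as an LVM physical volume
pvcreate /dev/sdc1

# 3. Create a volume group on it (name assumed, "2" appended as planned)
vgcreate VolGroup_graphics2 /dev/sdc1

# 4. Create a logical volume spanning the whole VG
lvcreate -l 100%FREE -n lv_graphics2 VolGroup_graphics2

# 5. Put an ext4 filesystem on the LV. Note: the "lvm pv" type shown in
#    GParted is the *partition's* content type; the ext4 filesystem lives
#    inside the logical volume, which GParted does not look into.
mkfs.ext4 /dev/VolGroup_graphics2/lv_graphics2

# 6. Mount at the new mount point
mkdir -p /home/user/GRAPHICS2
mount /dev/VolGroup_graphics2/lv_graphics2 /home/user/GRAPHICS2
```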
When I have the new array partitioned/formatted like the old one, I plan (after a third backup; I've already done two) to copy the contents of "/home/user/GRAPHICS" to "/home/user/GRAPHICS2", then to swap the connectors to the SAS controller (Adaptec ICP5165BR) between the old array disks and the new array disks, in the hope that the new pair will become /dev/sdb mounted as "/home/user/GRAPHICS", and after that I can throw away the old pair of TOSHIBA MK2001TRKB disks. TIA for any help, kaza. |
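For the copy step described above, one common approach (a sketch, not something from the thread) is rsync in archive mode, so ownership, permissions, timestamps, and links survive the copy:

```shell
# -a  archive mode (recursive; preserves permissions, owners, times, symlinks)
# -H  preserve hard links
# -A / -X  preserve ACLs and extended attributes, if any are in use
# The trailing slash on the source copies the *contents* of GRAPHICS,
# not the directory itself.
rsync -aHAX --info=progress2 /home/user/GRAPHICS/ /home/user/GRAPHICS2/

# A second, dry-run pass catches stragglers and verifies nothing differs
rsync -aHAX --dry-run --itemize-changes /home/user/GRAPHICS/ /home/user/GRAPHICS2/
```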
I had a long LVM post all typed out then realised you were talking hardware RAID.
Can you simply add the two new drives to the RAID1, let the card sync them, then fail and remove the two problematic drives? Then neither Linux nor LVM needs to know anything about them, and will just truck along merrily. Certainly this would be the best/easiest solution if it were software RAID. |
That's something I haven't thought about.
The current setup is that each pair of disks comprises a RAID 0 array (striped). What happens when adding a new array? Will the data remain on the array which'll become larger? And how can I copy data from the old disks to the new ones if they're all the same array? TIA, kaza. |
OK, it seems I managed to do the first part: to add the new pair
of disks, to create a new logical volume and to mount it. Just the plain and simple "CLI" way of "lvm -> lvcreate" followed by "mkfs" and "mount": Code:
lvm> lvcreate -l 100%VG -n lv_graphics2 VolGroup_graphics2

and now I have both old and new arrays mounted: Code:
<root localhost.localdomain>.../root>df
Filesystem                                   1K-blocks       Used  Available Use% Mounted on
devtmpfs                                       8198564          0    8198564   0% /dev
tmpfs                                          8212360      15412    8196948   1% /dev/shm
tmpfs                                          8212360       1444    8210916   1% /run
tmpfs                                          8212360          0    8212360   0% /sys/fs/cgroup
/dev/mapper/VolGroup-lv_root                 102009824   15490320   81294648  17% /
tmpfs                                          8212360         16    8212344   1% /tmp
/dev/sda1                                       999320     562136     368372  61% /boot
/dev/mapper/VolGroup-lv_home                 134948080  114096016   13974064  90% /home
/dev/mapper/VolGroup_graphics-lv_graphics   3839360400 1157858172 2486450916  32% /home/user/GRAPHICS
tmpfs                                          1642472         12    1642460   1% /run/user/1000
/dev/mapper/VolGroup_graphics2-lv_graphics2 3838422480      90140 3643281028   1% /home/user/GRAPHICS2

Now I'll have to copy the entire contents of "/home/user/GRAPHICS" to "/home/user/GRAPHICS2", make a full backup (excluding "/home/user/GRAPHICS") and then "go to the land of the unknown": swap the connections between the old pair of disks and the new pair and see if the data stays.

Now to some "byproduct" of what I did: immediately after "mount" I started seeing in "gkrellm" a constant disk traffic of about 6-7 MB/sec. "iotop" showed the same and indicated that it's "ext4lazyinit" which creates the traffic. After some reading I understand that since I didn't instruct "mkfs" to disable "lazy initialization", the "ext4lazyinit" process now does the initialization in the background. I guess I'll need to wait many hours for it to complete before shutting down; let's hope I won't suffer any power outages longer than what the UPS can handle...

Another question: when I open "KDE Partition Manager" and click on the newly created VolGroup_graphics2-lv_graphics2, it shows 186.1 GiB used (out of 3.6 TiB). How can that be? "df" reports only 1% used (which should be about 40 GiB)... TIA, kaza. |
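As an aside on the ext4lazyinit traffic mentioned above: mkfs.ext4 can be told to initialize the inode tables and journal up front, which makes mkfs itself slower but avoids hours of background writes after the first mount. A sketch for a future run (the LV path matches the one created in this thread):

```shell
# Do the inode-table and journal initialization at mkfs time instead of
# lazily in the background after the first mount.
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 \
    /dev/VolGroup_graphics2/lv_graphics2
```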
Could have been done differently, but that's fine.
Don't worry about the 186 GiB - ext4 reserves 5% to ensure it can continue when the filesystem fills - see "man mkfs.ext4", option -m. It can be set to zero safely for data-only (i.e. non-system) filesystems. Also, you can do the copy while the background init is running. |
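The 5% reserve can also be reclaimed after the fact on an existing filesystem; a sketch, assuming the mapper path shown earlier in the thread:

```shell
# Drop the reserved-blocks percentage to 0 on a data-only filesystem
tune2fs -m 0 /dev/mapper/VolGroup_graphics2-lv_graphics2

# Confirm: "Reserved block count" should now read 0
tune2fs -l /dev/mapper/VolGroup_graphics2-lv_graphics2 | grep -i reserved
```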
Done!
Swapping the connectors of the disk pairs didn't cause havoc with the data - everything remained; in fact, EXACTLY everything remained: the order of the RAID arrays didn't change. So I hit "Ctrl+A" during boot to enter the Adaptec "Array Configuration Utility" (BIOS). I didn't see any simple way to reorder arrays, but there was a "Ctrl+B" option which makes an array bootable by making its order "0". After a few experiments (like a homework exercise from some "computer studies" course: arrange an array of 3 elements into a desired order when all you can do is move some element to the front, with the previous first element taking its place) I got them in the order I wanted: RAID_0 RAID_2 RAID_1, and indeed, after reboot I've got: Code:
<localhost.localdomain>.../kaza>lsscsi

Before swapping the connectors I placed a file "graphics_new" on the new GRAPHICS2 disk and a file "graphics_old" on the old GRAPHICS disk, and nothing changed there. Then, after some learning of lvm commands, I found the sequence which gave me what I needed:
1) umount /home/user/GRAPHICS
   umount /home/user/GRAPHICS2
2) vgrename /dev/VolGroup_graphics /dev/VolGroup_graphics_old
3) lvrename VolGroup_graphics_old lv_graphics lv_graphics_old
4) vgrename /dev/VolGroup_graphics2 /dev/VolGroup_graphics
5) lvrename VolGroup_graphics lv_graphics2 lv_graphics
6) vgrename /dev/VolGroup_graphics_old /dev/VolGroup_graphics2
7) lvrename /dev/VolGroup_graphics2 lv_graphics_old lv_graphics2
8) mount -t ext4 /dev/mapper/VolGroup_graphics-lv_graphics /home/user/GRAPHICS
   mount -t ext4 /dev/mapper/VolGroup_graphics2-lv_graphics2 /home/user/GRAPHICS2
and now the graphics_old file is on the GRAPHICS2 disk and graphics_new is on the GRAPHICS disk. Thanks to everyone for replies; this issue is closed. kaza. |
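After a three-way swap of VG and LV names like the sequence above, it's worth confirming that PVs, LVs, and mounts all line up before relying on the renamed volumes; a quick verification sketch:

```shell
# Which PVs back which VGs: the new MG04SCA20EE pair should now sit
# under VolGroup_graphics, the old MK2001TRKB pair under VolGroup_graphics2
pvs -o pv_name,vg_name

# List LVs with their device paths to confirm the renames took effect
lvs -o vg_name,lv_name,lv_path

# Check the active mounts at both mount points
findmnt /home/user/GRAPHICS
findmnt /home/user/GRAPHICS2
```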