Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
This is not a question, just a sort-of plea for assistance.
I am having a heck of a time implementing RAID.
There are tutorials, mostly based on "mdadm", and incomplete.
This is what works - sort of - not necessarily in the proper sequence:
1. Use gparted to add /dev/sdx -
assign the "raid" flag to it
2. Use mdadm to --create RAIDx
3. Use Disks to mount /dev/sdx and /dev/mdx
4. Use gparted to create the /dev/mdx partition table type and
create a /dev/mdx partition - not sure when
5. (Test) copy something to ONE of the /dev/sdx
6. Use mdadm to watch for smoke
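For reference, the usual mdadm order differs from the steps above: the array is created first from raw member devices, and partitioning / formatting happens on the md device, not on the members. A sketch, with the device names /dev/sdb, /dev/sdc and /dev/sdd being assumptions, not from this thread:

```shell
# Build the array first; member disks need no partitions or "raid" flags
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sdb /dev/sdc /dev/sdd

# Put a filesystem directly on the md device (partitioning it is optional)
sudo mkfs.ext4 /dev/md0

# Mount it; files go to the array, never to an individual /dev/sdx member
sudo mkdir -p /mnt/raid5
sudo mount /dev/md0 /mnt/raid5
```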
However, the most challenging part is that RAID uses SEVERAL valid / legal designations:
"standard" Linux /dev/sdx
mdadm CAN assign /dev/mdX or /dev/md/name
gparted WILL change at random from /dev/md1 to /dev/md127 or /dev/md/name
Disks by default uses the UUID - name, icon name and icon symbol optional
The Ubuntu file manager goes by Name or Label - assigned by the user in gparted -
or by the UUID assigned by Disks - impossible to track.
And I'll skip mount points for now.
Any suggestions how to manage this mess?
Post-it notes ?
Besides the old doctor's joke:
"doctor - it hurts when I do this..."
"so don't do that!"
If you want it to be FREAKISHLY easy, read up on BTRFS! BTRFS does RAID (0, 1, 10 at least; newer versions do 5 and 6, but with some cautions) without mdadm or LVM helpers.
BUT, using it is a totally different animal from using mdadm.
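As a sketch of how btrfs handles RAID natively (the device names /dev/sdb and /dev/sdc and the mount point are assumptions, not from the thread):

```shell
# Create a two-device btrfs filesystem with RAID1 for both data and metadata
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# Mount either member device; btrfs assembles the whole array from it
sudo mount /dev/sdb /mnt/data

# Check how data is spread across the devices
sudo btrfs filesystem usage /mnt/data
```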
btrfs has supported 5/6 for years - regardless of the doom-sayers. But I wouldn't recommend it to the OP.
For some time LVM has had native (i.e. not requiring mdadm) RAID - super easy to implement and add devices, as well as policies that kick in in the event of a device failure. I use it on my pi3 router/firewall, for instance, for the simplicity.
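A minimal sketch of that LVM-native approach, assuming three spare disks and made-up names (vg_raid, lv_data):

```shell
# Register the disks with LVM and group them into one volume group
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd
sudo vgcreate vg_raid /dev/sdb /dev/sdc /dev/sdd

# Carve a raid5 logical volume out of the group (2 data stripes + parity)
sudo lvcreate --type raid5 --stripes 2 -L 10G -n lv_data vg_raid

# Format and mount it like any other block device
sudo mkfs.ext4 /dev/vg_raid/lv_data
sudo mount /dev/vg_raid/lv_data /mnt/data
```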
Now for a real stupid question.
I briefly scanned the article, and it looks as if the /dev/sdx still has to be "assigned" before btrfs is used, right?
I'll give it a go - for test purposes, on the same HDD. Later.
The link posted by emerson in post 2 is the best guide. Ideally you want to use new blank disks with no partitions of any type on them.
Using mdadm requires that it be able to create the raid metadata in the partition table area of the disk so they need to already be blank or you can manually blank that area.
raid 0 & 1 require a minimum of 2 devices, raid 5 requires a minimum of 3 devices, raid 6 & 10 require a minimum of 4 devices.
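Blanking that metadata area by hand can be sketched like this (assuming /dev/sdb is a disk you intend to reuse - double-check the device name first, this is destructive):

```shell
# Erase all filesystem, RAID and partition-table signatures from the disk
sudo wipefs --all /dev/sdb

# If the disk was previously an mdadm member, clear its old superblock too
sudo mdadm --zero-superblock /dev/sdb
```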
I recently assisted another individual to create a raid array for use with a media server using mdadm.
I do recommend that you not even consider anything with btrfs for raid at present.
Last edited by computersavvy; 02-09-2021 at 09:52 PM.
I want to add that no matter what you do with RAID, it does not replace BACKUPS. RAID can either provide performance benefits or protect you from single storage device (or select dual device) failure. It does not protect you from data corruption, storage SYSTEM failure, or server failure that corrupts or destroys the contents of the storage devices. It clearly does not protect you from catastrophic failures from the environment, such as fire, flood, cats, etc. If the data matters, if it has value, back it up two different ways (or more) using offsite generational backup images, and verify your backups often.
Amazingly enough, my storage survived a flood. All my equipment was under water for a week in 2016 during the historic Louisiana flood - I did not expect 6 feet of water in my house and did not put my stuff in the attic. My Intel NUC lost its infrared receiver; I emailed Intel and asked what type it was, intending to solder a new one in. Intel replaced my NUC for free and even paid shipping! My Supermicro-based server with ECC RAM and 6 hard drives works again; I did not even have to replace the power supply. Indeed, I took everything apart and cleaned it thoroughly before powering up. It was a lot of work, but it paid off in the end. And all my other computers also survived.
I went for KISS and used mdadm to create a new TEST RAID 5 array:
sudo mdadm --create --verbose /dev/md10 --level=5 --raid-devices=3 /dev/sda19 /dev/sdb9 /dev/sdg27
Yes, I added small /dev/sdx partitions first using gparted.
Did not format, did not add the "raid" flag, did not mess with Label or Name, did not use Disks to mount.
Just ran "create", waited for "recovery" to finish, and then ran --detail.
Quote:
a@a-desktop:~$ sudo mdadm --detail /dev/md10
[sudo] password for a:
/dev/md10:
Version : 1.2
Creation Time : Wed Feb 10 08:02:59 2021
Raid Level : raid5
Array Size : 2043904 (1996.00 MiB 2092.96 MB)
Used Dev Size : 1021952 (998.00 MiB 1046.48 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Wed Feb 10 08:03:58 2021
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : a-desktop:10 (local to host a-desktop)
UUID : 3fee7915:97923e1c:5165ee2e:3e5191c5
Events : 18
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/sda19
1 8 25 1 active sync /dev/sdb9
3 259 22 2 active sync /dev/sdg27
a@a-desktop:~$
Let me restate - this is just a RAID TEST.
I would prefer NOT to discuss the pros and cons of RAID any more.
Yes, I can do a variety of "save / backup" - my C IDE does one when optioned.
I used RAID when windowze was introduced and it did the job.
I am a little stubborn and want the Linux RAID to work as expected ON FIRST TRY. I do realize there are holes in my procedures - mainly identifying the hardware and setting mount points at the correct time.
My current dilemma / question is:
I have a working / functioning RAID array - according to mdadm - but
my file manager does not let me write anything to the RAID5 array BECAUSE
I cannot identify the /dev/sda19 (for example).
WHY?
.... fill in blank....
PS
At present "Disks" won't let me change / add / modify ANY RAID device mount points - including "/dev/md10".
PPS
In case it is not clear - the RAID technology works; it is my procedure which is incorrect, and "buy a new computer" is not a solution.
PPPS
I'll try the other method (AKA new computer) AFTER I fix RAID.
Quote:
I went for KISS and used mdadm to create a new TEST RAID 5 array:
sudo mdadm --create --verbose /dev/md10 --level=5 --raid-devices=3 /dev/sda19 /dev/sdb9 /dev/sdg27
Yes, I added small /dev/sdx partitions first using gparted.
---snip
My current dilemma / question is:
I have a working / functioning RAID array - according to mdadm - but
my file manager does not let me write anything to the RAID5 array BECAUSE
I cannot identify the /dev/sda19 (for example).
WHY?
.... fill in blank....
PS
At present "Disks" won't let me change / add / modify ANY RAID device mount points - including "/dev/md10".
You created the raid device, /dev/md10
Now you need to
1. create a partition on it,
2. then format the file system there,
3. then mount it.
in that order.
Disks is a gui that can only handle the physical devices and cannot access the raid device. (Its name tells you that)
Use gparted (a gui partitioning tool) which can handle the raid device (/dev/md10) to create the partition and format it.
Or
Use gdisk (a cli partitioning tool), which can create the partition but does not format it. Gdisk would have to be followed by mkfs.ext4 to format the partition.
The command "gparted /dev/md10" will get you started there.
Once partitioned and formatted you still will have to manually create the mount point and mount the file system before you can access it.
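The partition / format / mount sequence above can be sketched as follows. The partition name /dev/md10p1 is an assumption - check lsblk for the actual name after partitioning:

```shell
# 1. Wipe any stale label and create one partition spanning /dev/md10
sudo sgdisk --zap-all --new=1:0:0 /dev/md10

# 2. Format the new partition
sudo mkfs.ext4 /dev/md10p1

# 3. Create a mount point and mount the file system
sudo mkdir -p /mnt/raid5
sudo mount /dev/md10p1 /mnt/raid5
```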
GUI tools like disks are great !!!,
--- until they cannot do what you need done....
Then you need to go back to the cli to do what the gui is not designed for.
Last edited by computersavvy; 02-10-2021 at 12:45 PM.
my file manager does not let me to write anything to the RAID5 array BECAUSE
As posted many, many times, you cannot use the RAID until it is formatted with a file system and mounted. To mount the RAID once formatted, you use its device ID, i.e. /dev/md10, or its filesystem's UUID.
As far as I know, it is still true that you cannot partition a RAID device unless it is created with the --auto=mdp option.
Problem:
When I use "gparted" to create (sizeable) /dev/sdx devices and then
use "mdadm --create ....", it starts "recovering", hence
there is no way to use gparted until mdadm is done "recovering" the empty /dev/sdx array.
Solution:
take a break...
Now you need to
-2. access gparted, select "/dev/mdx"
-1. build the partition table type
0. now you are cooking
1. create a partition on it,
2. then format the file system there,
since the newly created partition has a "file type" assigned -
not sure if that is necessary, but it won't hurt
3. then mount it.
gparted at that point has no option to mount /dev/mdx,
so I switch to "Disks" to mount /dev/mdx, choose
"mount on startup / reboot"
and do so.
I have not figured out how NOT to have to reboot to make the mount configured / set up in Disks stick.
BUT
I just "discovered" today that Disks has an option to SELECT how the mount point is going to show up elsewhere -
bye bye UUID (assigned by default by Disks), and good riddance!
The UUID is still there, but now I have a humanly readable interface value!
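On distros where Disks writes its "mount at startup" settings to /etc/fstab (typical on Ubuntu), the reboot can usually be skipped by applying the edited fstab by hand - a sketch:

```shell
# Tell systemd to regenerate its mount units from the edited /etc/fstab
sudo systemctl daemon-reload

# Mount everything listed in fstab that is not already mounted
sudo mount -a
```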
in that order.
Disks is a gui that can only handle the physical devices and cannot access the raid device. (Its name tells you that)
Not true - it actually is the key to my (partial) success.
Use gparted (a gui partitioning tool) which can handle the raid device (/dev/md10) to create the partition and format it.
Or
Use gdisk (a cli partitioning tool), which can create the partition but does not format it. Gdisk would have to be followed by mkfs.ext4 to format the partition.
The command "gparted /dev/md10" will get you started there.
Really ?
Quote:
a@a-desktop:~$ sudo gparted /dev/md10
Unit boot.mount does not exist, proceeding anyway.
Unit tmp.mount does not exist, proceeding anyway.
GParted 1.0.0
configuration --enable-libparted-dmraid --enable-online-resize
libparted 3.3
/dev/md10: unrecognised disk label
a@a-desktop:~$
And of course it opens the gparted GUI; you "select /dev/md10" and can start over - again from 0. Fun.
Actually, I wondered how mdadm --detail /dev/md10 looks now. Checked - it is (still) clean.
Once partitioned and formatted you still will have to manually create the mount point and mount the file system before you can access it.
GUI tools like disks are great !!!,
--- until they cannot do what you need done....
Then you need to go back to the cli to do what the gui is not designed for.
In (partial) closing -
I really, really appreciate you sticking with me until this is resolved.