LinuxQuestions.org
Slackware This Forum is for the discussion of Slackware Linux.

Old 03-23-2023, 09:20 AM   #16
LuckyCyborg
Senior Member
 
Registered: Mar 2010
Posts: 3,500

Rep: Reputation: 3308

Quote:
Originally Posted by wingfy01 View Post
Emmm, I don't know if I understand well what you said.
Do many NAS solution providers like QNAP also use the madam for RAID?
It's MDADM, NOT Madam! It stands for Multiple Device ADMinistration.

The word "Madam" in my language means "Madame" or "Lady" or "Mistress", as in the boss of the house, and it seems it's the same in English and not only there.

At least be kind enough to name the software correctly.

Code:
root@darkstar:~# mdadm --help
mdadm is used for building, managing, and monitoring
Linux md devices (aka RAID arrays)
Usage: mdadm --create device options...
            Create a new array from unused devices.
       mdadm --assemble device options...
            Assemble a previously created array.
       mdadm --build device options...
            Create or assemble an array without metadata.
       mdadm --manage device options...
            make changes to an existing array.
       mdadm --misc options... devices
            report on or modify various md related devices.
       mdadm --grow options device
            resize/reshape an active array
       mdadm --incremental device
            add/remove a device to/from an array as appropriate
       mdadm --monitor options...
            Monitor one or more array for significant changes.
       mdadm device options...
            Shorthand for --manage.
Any parameter that does not start with '-' is treated as a device name
or, for --examine-bitmap, a file name.
The first such name is often the name of an md device.  Subsequent
names are often names of component devices.

 For detailed help on the above major modes use --help after the mode
 e.g.
         mdadm --assemble --help
 For general help on options use
         mdadm --help-options
root@darkstar:~#
BTW, there is also no operating system named slk15.0; there is Slackware 15.0.

Last edited by LuckyCyborg; 03-23-2023 at 09:47 AM.
 
Old 03-23-2023, 09:47 AM   #17
wingfy01
Member
 
Registered: Jul 2009
Location: HZ, China
Distribution: Slackware14 /Debian 6/CentOS5
Posts: 64

Original Poster
Rep: Reputation: 6

Quote:
Originally Posted by LuckyCyborg View Post
It's MDADM, NOT Madam! [...]
Sorry... Sometimes typing fast produces the wrong word, and then I kept mistyping it....
And where can I change the post title to correct it?

Last edited by wingfy01; 03-23-2023 at 09:51 AM.
 
Old 03-23-2023, 09:54 AM   #18
LuckyCyborg
Senior Member
 
Registered: Mar 2010
Posts: 3,500

Rep: Reputation: 3308
Quote:
Originally Posted by wingfy01 View Post
Sorry...Sometime typing fast make the wrong word and after miss typing many times....
And where to change the post title for correction?
Edit the first post in Advanced Mode.

And while you are at it, please write Slackware 15.0 instead of slk15.0, out of respect for Mr. Volkerding, who spent 30 years giving you this wonderful operating system gratis. So I guess it's OK for you to spend 2 more seconds spelling its name correctly.

Last edited by LuckyCyborg; 03-23-2023 at 09:55 AM.
 
1 member found this post helpful.
Old 03-23-2023, 09:57 AM   #19
wingfy01
Member
 
Registered: Jul 2009
Location: HZ, China
Distribution: Slackware14 /Debian 6/CentOS5
Posts: 64

Original Poster
Rep: Reputation: 6

Quote:
Originally Posted by LuckyCyborg View Post
Edit first post in Advanced Mode. [...]
All corrections done! You are right.
 
Old 03-23-2023, 10:44 AM   #20
LuckyCyborg
Senior Member
 
Registered: Mar 2010
Posts: 3,500

Rep: Reputation: 3308
Quote:
Originally Posted by wingfy01 View Post
All correction done! You are right.
Thank you!

Now, regarding your topic...

I for one don't like ZFS and BTRFS, because I believe it is not a filesystem's business to handle disks and/or partitions. They both try to be a Swiss Army knife, but in going that way they limit the user to their own features.

And let's not forget the old saying often aired in this forum: do one thing and do it well; this is the UNIX way!

And yes, in this case I believe that "the UNIX way" should be enforced.

I know, I know, the BTRFS evangelists try to convince you with snapshots, mirrors, migrations and so on...

BUT MD(ADM) is also capable of those things, and probably does them even better. Heck, the former forum member Darth Vader even made a live system with persistence using only MD support.

So, I for one, IF I were in your shoes, would go with a good old RAID1 using MD, and as the filesystem I would probably choose EXT4. Bonus points: an MD device can also be partitioned, so you can have /dev/md2p3 and so on. And this is extremely useful.

BTW, the kernel driver is simply named MD; mdadm is the administration tool in user space.
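A minimal sketch of that setup, assuming two spare disks. The device names /dev/sdb and /dev/sdc are examples only; double-check yours first, because these commands destroy whatever is on them:

```
# Create a RAID1 mirror over two whole disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# The md device can be partitioned like any plain disk...
fdisk /dev/md0

# ...and each resulting partition formatted as usual
mkfs.ext4 /dev/md0p1

# Record the array so it gets assembled on boot
mdadm --detail --scan >> /etc/mdadm.conf
```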

Last edited by LuckyCyborg; 03-23-2023 at 10:53 AM.
 
2 members found this post helpful.
Old 03-23-2023, 10:52 AM   #21
wingfy01
Member
 
Registered: Jul 2009
Location: HZ, China
Distribution: Slackware14 /Debian 6/CentOS5
Posts: 64

Original Poster
Rep: Reputation: 6
Quote:
Originally Posted by LuckyCyborg View Post
I for one, I don't like ZFS and BTRFS [...]
If using MD + ext4, is LVM required?
And is an external journal supported for ext4 on MD?
 
Old 03-23-2023, 11:03 AM   #22
LuckyCyborg
Senior Member
 
Registered: Mar 2010
Posts: 3,500

Rep: Reputation: 3308
Quote:
Originally Posted by wingfy01 View Post
If using MD + ext4, does LVM required?
And is external journal supported for md with ext4fs?
This is the beauty of Linux MD: it has no business with the filesystems; it handles only the disks. So you have great flexibility.

Regarding LVM, you probably ask because of partitioning. You can use LVM for partitioning, OR, like I already said, you can simply partition an MD device. Just as with a plain disk, you can run "fdisk /dev/md0" and get partitions like "/dev/md0p1" and so on.

Regarding that "external journal", I do not quite understand what you mean. Yes, from what I know, EXT4 supports an external journal, BUT it is no business of MD(ADM) whether a filesystem's journal is internal or external. You can handle that however you like.
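For the record, an external journal for ext4 is indeed set up purely at the filesystem level; a sketch (device names are hypothetical, /dev/sdc1 would be a small fast partition, and journal and filesystem should use the same block size):

```
# Create a dedicated external journal device
mke2fs -O journal_dev /dev/sdc1

# Create ext4 on the MD array, pointing it at that journal
mkfs.ext4 -J device=/dev/sdc1 /dev/md0p1
```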

On the other hand, there are mirrors and snapshots in the MD engine logic, but it is probably better to just use LVM for device snapshots.

BTW, LVM also uses the MD engine from the kernel, so you can see it as another face of MD.

HOWEVER, please bear in mind that RAID wasn't invented for DATA SAFETY, but for SYSTEM RESILIENCE: to survive a faulty drive until the admin intervenes. So RAID is not a replacement for backups.

You should build a consistent backup strategy for both system and data, no matter what RAID you use.

Last edited by LuckyCyborg; 03-23-2023 at 11:16 AM.
 
1 member found this post helpful.
Old 03-23-2023, 11:14 AM   #23
wingfy01
Member
 
Registered: Jul 2009
Location: HZ, China
Distribution: Slackware14 /Debian 6/CentOS5
Posts: 64

Original Poster
Rep: Reputation: 6
Quote:
Originally Posted by LuckyCyborg View Post
This is the beauty of Linux MD: it has no business with the filesystems [...]
Thanks. Snapshots are not necessary in my case.
And how does mdadm + ext4 detect inconsistent data (bit rot)?
And how does it fix it?
 
Old 03-23-2023, 11:20 AM   #24
LuckyCyborg
Senior Member
 
Registered: Mar 2010
Posts: 3,500

Rep: Reputation: 3308
Quote:
Originally Posted by wingfy01 View Post
Thanks. Snapshot is not necessary in my case.
And how mdadm + ext4 detects the data incosistent(bit-rot?)?
And how mdadm + ext4 fix it?
Again? Let's do NOT confuse the disks (or devices) with the filesystems.

MD(ADM) has its own logic for detecting inconsistency in the RAID data; EXT4 has its own methods for finding inconsistency in the filesystem data. Those are entirely different things.

The MD engine cares only that the bits of data are, for example, properly mirrored (or striped, etc.) and, when needed, rebuilds the RAID (or kicks out a disk), while EXT4, like any other FS, does its business as usual. It makes no difference to EXT4 whether it runs on an MD RAID, a bare disk, or even a loop file.

BTW, from the RAID POV, IF you especially care about consistency, I strongly recommend RAID5, which requires at least 3 disks and gives you the space of 2 of them, with one disk's worth of capacity used for parity (the parity is actually distributed across all the disks, not kept on a single one). It is also more efficient than RAID1 in read/write speed and in offered storage space.

RAID1: 2 disks of 100GB means 100GB storage, read 2x, write 1x as speed.
RAID1: 3 disks of 100GB means 100GB storage, read 3x, write 1x as speed.

RAID5: 3 disks of 100GB means 200GB storage, read 3x, write 2x as speed.

Also, it's more scalable.

RAID5: 4 disks of 100GB means 300GB storage, read 4x, write 3x as speed.
RAID5: 5 disks of 100GB means 400GB storage, read 5x, write 4x as speed.

Of course, the read/write speed also depends on whether the hardware throughput can sustain it.
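To make the arithmetic above concrete, here is a tiny shell sketch (the helper names are mine, not from any tool) computing usable capacity from the number of disks N and the per-disk size S:

```shell
# Usable capacity, following the figures above.
# raid1: every disk holds the same data -> capacity of one disk.
raid1_capacity() { n=$1; s=$2; echo "$s"; }
# raid5: one disk's worth of space goes to parity -> (N-1)*S.
raid5_capacity() { n=$1; s=$2; echo "$(( (n - 1) * s ))"; }

echo "RAID1, 3 x 100GB: $(raid1_capacity 3 100) GB"   # 100 GB
echo "RAID5, 3 x 100GB: $(raid5_capacity 3 100) GB"   # 200 GB
echo "RAID5, 5 x 100GB: $(raid5_capacity 5 100) GB"   # 400 GB
```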

Last edited by LuckyCyborg; 03-23-2023 at 11:34 AM.
 
2 members found this post helpful.
Old 03-23-2023, 11:37 AM   #25
wingfy01
Member
 
Registered: Jul 2009
Location: HZ, China
Distribution: Slackware14 /Debian 6/CentOS5
Posts: 64

Original Poster
Rep: Reputation: 6

Quote:
Originally Posted by LuckyCyborg View Post
Again? Let's do NOT confuse the disks (or devices) with the filesystems. [...]
Thanks. Nice! Very clear!
 
Old 03-23-2023, 11:52 AM   #26
marav
LQ Sage
 
Registered: Sep 2018
Location: Gironde
Distribution: Slackware
Posts: 5,355

Rep: Reputation: 4067
Quote:
Originally Posted by LuckyCyborg View Post
Again? Let's do NOT confuse the disks (or devices) with the filesystems. [...]
Write performance with RAID5 is very poor.
The calculation is: N*x/4
N: number of disks
x: IOPS per disk
So with 3 to 5 disks, the performance is almost the same as a single disk's, because on every single write RAID5 reads the data, reads the parity, writes the data, and finally writes the parity.
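That rule of thumb can be sketched in shell (the function name is mine); with typical per-disk IOPS it shows why small arrays barely beat a single disk on random writes:

```shell
# Effective random-write IOPS for RAID5 per the N*x/4 rule above:
# each logical write costs read-data + read-parity + write-data + write-parity.
raid5_write_iops() { n=$1; x=$2; echo "$(( n * x / 4 ))"; }

echo "3 disks @ 100 IOPS each: $(raid5_write_iops 3 100)"   # 75
echo "5 disks @ 100 IOPS each: $(raid5_write_iops 5 100)"   # 125
```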

Last edited by marav; 03-23-2023 at 12:03 PM.
 
2 members found this post helpful.
Old 03-23-2023, 12:19 PM   #27
wingfy01
Member
 
Registered: Jul 2009
Location: HZ, China
Distribution: Slackware14 /Debian 6/CentOS5
Posts: 64

Original Poster
Rep: Reputation: 6
Quote:
Originally Posted by LuckyCyborg View Post
Again? Let's do NOT confuse the disks (or devices) with the filesystems. [...]
So with RAID1, if md detects a mismatch, how does it fix it? How does md know which of the two disks holds the correct data?...
 
Old 03-23-2023, 12:33 PM   #28
LuckyCyborg
Senior Member
 
Registered: Mar 2010
Posts: 3,500

Rep: Reputation: 3308
Quote:
Originally Posted by wingfy01 View Post
So with raid1, if md detecs the mismatch and how does it fix it? how does the md know which data is correct in two disks?...
Consistency is checked by the MD engine automatically, and the array rebuilds and/or disk kicking also happen automatically.

I am not a programmer (so I don't know precisely how the MD driver works), but I guess that on RAID1 the data read from both copies (one on each disk) should be equal, otherwise the array starts rebuilding. BUT I believe that RAID1 has no way to check WHICH of two mismatched copies is the correct one. Anyway, any RAID1 system, whether software or hardware, should react this way.

That's why I believe RAID5 is superior: it has parity for every data stripe, so it can even rebuild missing data. For example, introducing a disk (replacing a faulty one) in a 3-disk RAID5 will end with your entire data rebuilt correctly.

This is better than RAID1, where I guess rebuilding just means mirroring the data from the "good" disk to the new disk.

And let's not talk about RAID0, which has no way to recover from a faulty disk. That one is only for getting maximum storage speed.

BUT, like I already told you, RAID is not a backup (data integrity) solution. It is about system resilience when a disk fails. Read: staying online in a consistent state for some time (even with degraded performance) when that event occurs.

So, you should have your own separate backup solution.

Long story short, RAID isn't a backup solution. None of its variants are.
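For what it's worth, the MD driver does expose a manual consistency check ("scrub") through sysfs; a sketch (the array name md0 is an example):

```
# Start a check pass: md reads all copies/parity and counts mismatches
echo check > /sys/block/md0/md/sync_action

# Watch progress
cat /proc/mdstat

# When finished, the number of mismatched sectors found
cat /sys/block/md0/md/mismatch_cnt

# 'repair' rewrites mismatches; on RAID1 md simply picks one copy,
# since it cannot tell which copy is the correct one
echo repair > /sys/block/md0/md/sync_action
```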

Last edited by LuckyCyborg; 03-23-2023 at 12:56 PM.
 
1 member found this post helpful.
Old 03-23-2023, 12:37 PM   #29
wingfy01
Member
 
Registered: Jul 2009
Location: HZ, China
Distribution: Slackware14 /Debian 6/CentOS5
Posts: 64

Original Poster
Rep: Reputation: 6

Quote:
Originally Posted by LuckyCyborg View Post
The consistency is detected by MD engine automatically [...]
Thanks a lot. Backup is necessary.
 
Old 03-23-2023, 12:44 PM   #30
LuckyCyborg
Senior Member
 
Registered: Mar 2010
Posts: 3,500

Rep: Reputation: 3308
Quote:
Originally Posted by wingfy01 View Post
Thanks a lot. Backup is necessary.
RAID just gives you more time to react before the server goes fully offline.

If you really care about your data, a consistent backup solution is absolutely necessary.

And by this I mean data synced to one or more disks which are normally NOT physically connected to that server.

And there are two flavors: (operating) system backup and data backup.

It is usually far simpler to recover the operating system (e.g. by a clean reinstall) than to recover data (e.g. by writing or translating a book again), so I believe each should have its own backup strategy.
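A minimal sketch of such a data backup with rsync to an external disk that is attached only for the backup (the paths and device name are examples):

```
# Mount the backup disk only for the duration of the backup
mount /dev/sdx1 /mnt/backup

# Mirror the data, preserving hard links, ACLs and xattrs
rsync -aHAX --delete /home/ /mnt/backup/home/

# Detach it again, so a failure on the server cannot touch it
umount /mnt/backup
```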

Last edited by LuckyCyborg; 03-23-2023 at 01:07 PM.
 
1 member found this post helpful.
  

