LinuxQuestions.org
Linux - Kernel This forum is for all discussion relating to the Linux kernel.

Old 08-02-2022, 07:16 PM   #1
shellwedance
LQ Newbie
 
Registered: Aug 2022
Posts: 2

Rep: Reputation: 0
mdadm s/w raid device files remain after stopping array in privileged container


Hello all.
I am testing software RAID in a container environment.
The issue is that stale mdadm device files remain on the host after I stop the RAID device inside the container.

Below I create a s/w RAID-1 array from two NVMe-oF devices in a privileged container.
Code:
## host kernel version is 5.4.153
$ uname -r
5.4.153-1.20211202.el7.x86_64

$ docker run --privileged -i -t --name test-volume1 -v /dev:/dev -v /sys:/sys centos_7.9.2009_x86_64 /bin/bash

## In privileged container
$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0    0   11G  0 disk 
nvme1n1 259:1    0   11G  0 disk 
...


$ /sbin/mdadm --create /dev/md/testraidvol --assume-clean --failfast --bitmap=internal --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/testraidvol started.

## After creating the s/w raid, the disk status is as below.
$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
nvme0n1 259:0    0   11G  0 disk  
`-md127   9:126  0   11G  0 raid1 
nvme1n1 259:1    0   11G  0 disk  
`-md127   9:126  0   11G  0 raid1 
...

$ ls /dev/md*
/dev/md127

/dev/md:
testraidvol
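For completeness, the array can also be checked against the kernel's own view (a sketch: /proc/mdstat reflects the md driver's state, while the entries under /dev are separate filesystem objects created by udev/mdadm, so comparing the two shows whether a /dev entry is stale):

```shell
# The kernel's view of md arrays lives in /proc/mdstat; the nodes under
# /dev are separate filesystem entries. If an entry appears under /dev
# but not in /proc/mdstat, it is stale.
cat /proc/mdstat 2>/dev/null || echo "md driver not loaded"
ls -l /dev/md* 2>/dev/null || echo "no md device nodes"
```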

Below I delete the s/w raid in the privileged container.
Code:
## Stopping s/w raid
$ /sbin/mdadm --stop /dev/md/testraidvol
mdadm: stopped /dev/md/testraidvol


$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0    0   11G  0 disk 
nvme1n1 259:1    0   11G  0 disk 
...

## The raid device no longer appears in lsblk, but its device files still remain.
$ ls /dev/md*
/dev/md127

/dev/md:
testraidvol
The stale mdadm device files clearly remain under /dev.
After a host reboot the leftover files are gone, but we don't want to reboot every time we delete a raid device.
If the same procedure is done on the host itself, no stale files are left behind...

I need your help. Please ask me anything!
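For reference, here is a minimal manual-cleanup sketch (an assumption on my part: the leftovers are ordinary device nodes and symlinks in the bind-mounted /dev, and the array is really gone from /sys/block). DEVDIR is parameterized so the snippet can be dry-run safely; on the real host it would be /dev:

```shell
# Manual cleanup sketch: once the array is gone from the kernel
# (no /sys/block/<name> entry), the leftover /dev entries are just
# stale filesystem objects and can be unlinked by hand.
set -eu

DEVDIR="${DEVDIR:-/dev}"   # /dev on a real host; a temp dir for dry runs
ARRAY_NAME=md127           # stale block-device node name from the session
LINK_NAME=testraidvol      # stale symlink under $DEVDIR/md

if [ ! -e "/sys/block/$ARRAY_NAME" ]; then
    rm -f "$DEVDIR/$ARRAY_NAME" "$DEVDIR/md/$LINK_NAME"
    rmdir "$DEVDIR/md" 2>/dev/null || true   # drop the dir if now empty
fi
```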
 
Old 08-02-2022, 08:54 PM   #2
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,131

Rep: Reputation: 4121
Have a look at --remove. To save having to specify the device each time, maybe try "detached".
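Something like the following (a sketch, untested here; array name taken from your first post, and my reading of mdadm(8)):

```shell
# Manage-mode form per mdadm(8): the array comes first, then the action.
# "detached" is a keyword matching component devices whose /dev node has
# disappeared (opening it fails with ENXIO), so it only acts on components
# that are actually gone. Guarded so the sketch is a no-op when the array
# does not exist.
if [ -e /dev/md127 ]; then
    mdadm /dev/md127 --fail detached --remove detached
else
    echo "array /dev/md127 not present; nothing to do"
fi
```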
 
Old 08-03-2022, 12:46 PM   #3
shellwedance
LQ Newbie
 
Registered: Aug 2022
Posts: 2

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by syg00
Have a look at --remove. To save having to specify the device each time, maybe try "detached".
I appreciate your suggestion.
Unfortunately, though, it seems that --remove and "detached" do not work...

If I use --remove instead of --stop, the command does nothing, as below.
Code:
## In privileged container,

$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
nvme0n1 259:0    0   11G  0 disk  
└─md126   9:126  0   11G  0 raid1 
nvme1n1 259:1    0   11G  0 disk  
└─md126   9:126  0   11G  0 raid1 
...
  
$ ls /dev/md*
/dev/md126  /dev/md127

/dev/md:
testvol

$ /sbin/mdadm --remove /dev/md126 detached

$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
nvme0n1 259:0    0   11G  0 disk  
└─md126   9:126  0   11G  0 raid1 
nvme1n1 259:1    0   11G  0 disk  
└─md126   9:126  0   11G  0 raid1 
...

$ ls /dev/md*
/dev/md126  /dev/md127

/dev/md:
testvol
If I run --remove after --stop, it cannot find the raid device, as below.
Code:
## In privileged container,

$ /sbin/mdadm --stop /dev/md126
mdadm: stopped /dev/md126

$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0    0   11G  0 disk 
nvme1n1 259:1    0   11G  0 disk 
...

$ ls /dev/md*
/dev/md126  /dev/md127

/dev/md:
testvol

$ /sbin/mdadm --remove /dev/md126 detached
mdadm: Cannot get array info for /dev/md126
May I ask what functionality exactly mdadm --remove with "detached" performs?
 
  

