Hello all.
I am testing software RAID in a container environment.
The issue is that stale mdadm device files remain on the host after I stop the RAID device inside the container.
Below I create a software RAID (RAID 1) from two NVMe-oF devices in a privileged container.
Code:
## Host kernel version is 5.4.153
$ uname -r
5.4.153-1.20211202.el7.x86_64
$ docker run --privileged -i -t --name test-volume1 -v /dev:/dev -v /sys:/sys centos_7.9.2009_x86_64 /bin/bash
## In privileged container
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 11G 0 disk
nvme1n1 259:1 0 11G 0 disk
...
$ /sbin/mdadm --create /dev/md/testraidvol --assume-clean --failfast --bitmap=internal --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/testraidvol started.
## After creating the software RAID, the disk status looks like this.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 11G 0 disk
`-md127 9:127 0 11G 0 raid1
nvme1n1 259:1 0 11G 0 disk
`-md127 9:127 0 11G 0 raid1
...
$ ls /dev/md*
/dev/md127
/dev/md:
testraidvol
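In case it helps, the array state can also be double-checked from inside the container with standard tools (using /dev/md127, the node lsblk reported above):
Code:
## Kernel's view of active arrays
$ cat /proc/mdstat
## Detailed status of the array created above
$ /sbin/mdadm --detail /dev/md127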
Below I stop the software RAID in the privileged container.
Code:
## Stopping the software RAID
$ /sbin/mdadm --stop /dev/md/testraidvol
mdadm: stopped /dev/md/testraidvol
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 11G 0 disk
nvme1n1 259:1 0 11G 0 disk
...
## The RAID device no longer appears in lsblk, but its device files still remain
$ ls /dev/md*
/dev/md127
/dev/md:
testraidvol
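Because /dev is bind-mounted into the container (-v /dev:/dev), the host sees the same /dev entries, so the leftovers can be checked directly on the host as well:
Code:
## On the host, outside the container
$ ls -l /dev/md127 /dev/md/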
As shown above, the stale mdadm device files still remain under /dev.
After a host reboot the leftover device files are gone, but we don't want to reboot every time we delete a RAID device.
If the same procedure is performed on the host itself, no stale mdadm files are left behind...
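The only non-reboot cleanup I can think of is removing the leftover entries by hand after --stop, which feels like papering over the real problem (a sketch, assuming the leftover names shown above):
Code:
## Manual cleanup after 'mdadm --stop' (sketch; names taken from the listings above)
$ rm -f /dev/md127             ## stale device node
$ rm -f /dev/md/testraidvol    ## stale symlink created for the named array
$ rmdir /dev/md                ## removes the directory only if it is now empty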
I need your help. Please feel free to ask me anything!