Old 04-19-2016, 07:45 AM   #1
Elaidon
LQ Newbie
 
Registered: Apr 2016
Posts: 12

Rep: Reputation: Disabled
Hybrid RAID1 configuration between HDD partition and Ramdisk does not reassemble on reboot


Hi all.

First of all, I want to apologize because I am going to post about a topic similar to one I have already opened on another forum (specific to the distribution I am using, CentOS 7), but since I have not received an answer there, I am trying here.

In my office we are running some performance tests. In particular, we are trying to create a mirrored RAID 1 configuration between a 16 GB HDD primary partition and a ramdisk of the same size. The plan is to build the RAID level 1 array with the --write-mostly option on the HDD, so that most writes go to the HDD while the ramdisk is used mostly for reading.

We are aware that there are alternatives such as flashcache or a tmpfs ramdisk and so on, but for now we want to test this configuration between disk and ramdisk.

The first problem we faced was how to get a 16 GB ramdisk (our test server is an OVH server with 32 GB of RAM).
After some googling we realized that changing the GRUB file was not enough: we had to recompile the kernel, changing the setting under "Default RAM Disk Size".
To do this we had to download an updated custom OVH server kernel. We do:

yum install make gcc ncurses ncurses-devel -y
cd /usr/src
wget https://www.kernel.org/pub/linux/kernel ... 4.6.tar.xz
tar xf linux-4.4.6.tar.xz
cd linux-4.4.6
make mrproper
wget ftp://ftp.ovh.net/made-in-ovh/bzImage/4 ... td-ipv6-64
mv config-4.4.6-xxxx-std-ipv6-64 .config


At this point we start the configuration utility:
make menuconfig

We perform the following steps:
- load .config file
- Change: Device Drivers > Block Devices > RAM Block Device Support: Default RAM Disk Size = 16500000 (the resulting .config entries are shown below)
- Enable loadable module support
- General Setup > Kernel Compression Mode = XZ (this is just for our convenience)
- Save the configuration
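
If I remember correctly, after saving, the relevant lines in .config should look roughly like this (the size value is in kilobytes; the exact symbols may vary slightly between kernel versions):

CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=16500000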


We compile everything and regenerate the GRUB configuration:
make
make modules
make modules_install
cp arch/x86_64/boot/bzImage /boot/bzImage-modules-on-4.4.6-xxxx-grs-ipv6-64
grub2-mkconfig -o /boot/grub2/grub.cfg


Then we reboot the system and verify the updated kernel version:
reboot

And after:
uname -r

and everything is correct.

I unmount the 16GB partition that I created in the process of installing the server:
umount /dev/sda5

I edit the partition, setting it as primary with type FD (Linux RAID autodetect), and update the partition table:
cfdisk /dev/sda5 (inside the utility I modify settings)
partprobe


Now I create the RAID 1 array with the option to use the physical disk mainly for writing (--write-mostly), leaving the ramdisk slot as "missing" so that the array can be formatted with only the physical disk, which is synchronized with the ramdisk afterwards:
mdadm --create /dev/md6 --level=1 --raid-devices=2 missing --write-mostly /dev/sda5

Now I format and mount the RAID, then add the ramdisk to it:
mkfs -t ext4 /dev/md6
mkdir /rrdisk
mount /dev/md6 /rrdisk
mdadm /dev/md6 -a /dev/ram0


To check the synchronization status of the HDD with the ramdisk I run the following a few times:
mdadm --detail /dev/md6
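
Alternatively, the resync progress can be followed continuously with something like:

watch -n 5 cat /proc/mdstat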

Once the sync reaches 100% everything is fine and the RAID 1 works well.
I test the performance with the hdparm tool:
- RAID 1: hdparm -t = about 3000 MB/s
- Other HDD partition: hdparm -t = about 120 MB/s


So far, so good.
At this point we wondered whether the RAID would be reassembled after rebooting the machine.
It is clear that after a restart the RAID 1 is degraded, because it is missing one drive, the ramdisk.
We were wondering whether it would resynchronize by itself, but instead nothing happens.

After a reboot, to restore the RAID we performed the following manual steps:
mdadm --stop /dev/md6
mdadm --assemble --scan
mount /dev/md6 /rrdisk
mdadm /dev/md6 -a /dev/ram0


Done this way everything works, but we want the RAID to be restored automatically on reboot.

We have tried modifying /etc/fstab to mount the RAID automatically (also using the UUID), but it does not work and CentOS drops into emergency mode, unless we add options that skip the mount on error, which clearly does not help.
We have read that the problem is probably that the fstab mount happens before the software RAID is reassembled, or even before the ramdisk is created.
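
(For reference, the kind of fstab options I mean would be something like the following, where the UUID is a placeholder; nofail lets the boot continue if the device is missing and x-systemd.device-timeout shortens the wait:

UUID=<raid-filesystem-uuid> /rrdisk ext4 defaults,nofail,x-systemd.device-timeout=10 0 0

but this only avoids emergency mode, it does not actually get the RAID mounted.)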

I do not yet know the inner mechanisms of CentOS and Linux, so I need some help to figure out how to bring this software RAID 1 with a ramdisk back up automatically on boot.

Can someone help me?

Thank you all
Elaidon
 
Old 04-19-2016, 08:52 AM   #2
frostschutz
Member
 
Registered: Apr 2004
Distribution: Gentoo
Posts: 95

Rep: Reputation: 28
There is no automated reassembly for a degraded RAID. This step `mdadm /dev/mdX --add /dev/ramX` is always necessary.

You can automate this yourself by adding it to some kind of startup service script. In order to avoid warnings about degraded RAID you can also have a shutdown script that kicks the ramdisk out cleanly by using something like `mdadm --grow /dev/mdX --raid-devices=1 --force` which reduces it to a 1 device RAID-1 (you have to experiment a bit with the drive order so that actually kicks the ramdisk and not the hdd), and in the above add command you would add `--grow --raid-devices=2` accordingly.
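
For example, roughly like this (an untested sketch; /dev/md0 and /dev/ram0 are placeholders for your actual devices, and the script names are arbitrary):

#!/bin/bash
# raid-ram-down.sh -- run from a shutdown script/service.
# Shrink to a single-device RAID-1 so it is not considered degraded.
# As noted above, test which member actually gets dropped: it must
# be the ramdisk, not the HDD.
mdadm --grow /dev/md0 --raid-devices=1 --force

#!/bin/bash
# raid-ram-up.sh -- run from a startup script/service.
# Grow back to two devices and re-add the (now empty) ramdisk,
# which triggers a full resync from the HDD.
mdadm /dev/md0 --grow --raid-devices=2 --add /dev/ram0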

Technically there should be little point in using ramdisks for RAID. This should be covered by regular filesystem caches. It would make more sense with SSD instead of RAMdisk, if you couldn't afford to have two SSD instead of just one.

Special setups such as yours always require some degree of manual labor... good luck.
 
Old 04-19-2016, 10:32 AM   #3
Elaidon
LQ Newbie
 
Registered: Apr 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
Hi Frostschutz,

Thank you for your quick answer, but my question was not clear.
I did not know that a degraded RAID cannot be reassembled automatically; this is valuable information that saves me from wasting time searching for something that does not exist.

Now I will try to be more precise (English is not my mother tongue and I have not yet mastered the technical terms).

To automate assembly of the degraded RAID I will create a startup script as you suggested, but the problem is not so much that the RAID is not rebuilt (it would be usable even with only one disk), but that during boot the system does not mount it at all. As a result, other programs cannot access the degraded RAID because it is unreachable.
Given that distinction, does your answer stay the same? That is, during boot it can be neither assembled nor mounted?

Regarding your point that the filesystem cache makes a RAID 1 with a ramdisk pointless: this is exactly one of the things our test is meant to verify.
In fact we also ran the test with:

hdparm -T /rrdisk VS hdparm -T /dev/sda3 (normal disk)

and the result was that the read speeds from cache were similar.
The other goal of this test is to try to combine the best performance with data safety.

My overall target is to obtain the lowest possible latency and the best speed from a single web server, already at the first request. If I use the cache, performance improves for files that are frequently requested, but I need the best performance on the first request.
I need a web server that answers a page request based on a MongoDB query as quickly as possible. For the same reason the production server will be 2x Intel Xeon E5-2630v3 with 128 GB 1866 MHz RAM, 2x2TB HDD in hardware RAID and 2x2GB SSD in hardware RAID. Because of this particular target, we thought that with a ramdisk we could decide from the start which files to keep in it, without having to wait for the cache to notice which files are frequently requested. For example, the whole MongoDB database could be in RAM. But of course a ramdisk is risky if the machine crashes. So we thought of a couple of alternatives:

1) RAID 1: HDD + ramdisk
2) a traditional tmpfs (with no redundancy) containing the scripts and MongoDB, plus some sort of rsync to back up the files and/or two MongoDB instances (the first entirely in RAM, the second as a replica set on the HDD) -- see the sketch below
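
For option 2, the backup part could be something as simple as a periodic rsync from cron (a rough sketch; the tmpfs mount point and the backup directory are just examples):

# hypothetical /etc/cron.d/ramdisk-backup entry, every 5 minutes
*/5 * * * * root rsync -a --delete /mnt/ramdisk/ /disk1/ramdisk-backup/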


That said, I am open to any suggestions for better solutions.

Thank you all
Elaidon
 
Old 04-19-2016, 12:03 PM   #4
frostschutz
Member
 
Registered: Apr 2004
Distribution: Gentoo
Posts: 95

Rep: Reputation: 28
Some distributions have this unfathomable / silly notion of making boot fail if the RAID is degraded (as in still operational but lost redundancy)... and these distros have to be configured specifically to allow degraded RAID during bootup. This is distribution specific and thus there are no specific instructions... (I'm not actually familiar with CentOS, sorry). My suggestion to forcibly "grow" it to a single-disk RAID on shutdown was actually aimed in this direction... this would give you a 1 disk RAID-1 which is not degraded because 1 disk is all there is. But of course that would not help you in the case of server suffering from an actual power loss, so you should set your system to accept degraded RAID in any case.

Quote:
If I use the cache, performance improves for files that are frequently requested, but I need the best performance on the first request.
You could have a background process that once an hour runs `cat $your $list $of $files > /dev/null` to make sure those files are in the filesystem cache and stay there.

That is if it really is such a huge issue that files are not cached on the very first run.
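
For example, something along these lines (the paths are just placeholders; the file list is of course yours to define):

# hypothetical /etc/cron.d/warm-cache entry, once an hour
0 * * * * root cat /var/lib/mongo/* /var/www/html/* > /dev/null 2>&1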

Best of luck in your experiments
 
Old 04-19-2016, 01:59 PM   #5
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,973

Rep: Reputation: 3623
It might be worth looking at one of the pci-e ssd's.

I'd tune my system for this use too. Dedicated kernel and minimal system.
 
Old 04-19-2016, 07:17 PM   #6
Elaidon
LQ Newbie
 
Registered: Apr 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by frostschutz View Post
You could have a background process that once an hour runs `cat $your $list $of $files > /dev/null` to make sure those files are in the filesystem cache and stay there.
That is if it really is such a huge issue that files are not cached on the very first run.
This is not an elegant solution, but it is really effective and clever. In any case, before trying other approaches, it has now become a matter of pride to get the RAID 1 running correctly.

Now I have found another interesting detail.
After a reboot I do:

mdadm --stop /dev/md6
mdadm --assemble --scan
grub2-mkconfig -o /boot/grub2/grub.cfg
dracut --regenerate-all --force

dracut saves the current state into the initramfs.
This way, if I reboot the OS, the process works and the degraded RAID is mounted.
However, if I add the ramdisk and then reboot, the process fails.

In practice it is as if, with both RAID members active, after a sudden shutdown the OS is not able to restart from a configuration where one member has gone missing...
 
Old 04-21-2016, 02:32 PM   #7
Elaidon
LQ Newbie
 
Registered: Apr 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
I am starting down a new path to try to understand the problem. Tonight I will set up a normal software RAID on two classic HDD partitions. Then I will check whether the behaviour is the same as with the ramdisk.
 
Old 04-21-2016, 04:27 PM   #8
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: Fedora
Posts: 4,138

Rep: Reputation: 1263
The system cannot auto-assemble your ramdisk because it has lost its superblock containing the RAID metadata on reboot. So you will always have to add that drive back after boot.

The system should be able to auto-start degraded RAID, as this is a very common requirement for servers. Are you getting to a grub menu? What is in your grub configuration?
 
Old 04-21-2016, 05:03 PM   #9
Elaidon
LQ Newbie
 
Registered: Apr 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by smallpond View Post
The system cannot auto-assemble your ramdisk because it has lost its superblock containing the RAID metadata on reboot. So you will always have to add that drive back after boot.

The system should be able to auto-start degraded RAID, as this is a very common requirement for servers. Are you getting to a grub menu? What is in your grub configuration?
Smallpond, I agree with you: even in the case of a physical HDD, if it breaks the OS can no longer read its superblock, and yet the system should still be able to start a RAID 1 degraded to a single disk.

I am now setting the whole thing up again, and as soon as I have done so I will post the contents of the GRUB file.

Last edited by Elaidon; 04-21-2016 at 05:04 PM.
 
Old 04-21-2016, 06:36 PM   #10
Elaidon
LQ Newbie
 
Registered: Apr 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
OK, let me share my whole OS setup with you.
During installation on the OVH server I can create partitions and I must name them. I cannot have more than 4 primary partitions. So the starting partitions are:

[root@servertest ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.8T 933M 1.7T 1% /
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 9.8M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda5 16G 40M 15G 1% /disk2
/dev/sda4 16G 40M 15G 1% /disk1

tmpfs 3.2G 0 3.2G 0% /run/user/0


[root@servertest linux-4.4.6]# fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 40 2048 1004.5K BIOS boot parti primary
2 4096 3840440319 1.8T Linux filesyste primary
3 3840440320 3841486847 511M Linux swap primary
4 3841486848 3874252799 15.6G Linux filesyste primary
5 3874252800 3907018751 15.6G Linux filesyste primary

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes



I have created 2 HDD partitions:
- /dev/sda4 (mounted as /disk1)
- /dev/sda5 (mounted as /disk2)

The original OS is CentOS 7.2-1511 with kernel:

[root@servertest ~]# uname -r
3.14.32-xxxx-grs-ipv6-64


OVH templates are based on standard kernel with some modifications.

Because I need a 16 GB ramdisk and the default on CentOS has a 16 MB limit, I have to modify the kernel configuration and recompile. While I'm at it, I upgrade to the latest kernel version available from OVH.

[root@servertest ~]# cd /usr/src
[root@servertest src]#
[root@servertest src]# wget https://www.kernel.org/pub/linux/ker...x-4.4.6.tar.xz
--2016-04-22 00:15:12-- https://www.kernel.org/pub/linux/ker...x-4.4.6.tar.xz
Resolving www.kernel.org (www.kernel.org)... 2001:4f8:1:10:0:1991:8:25, 2620:3:c000:a:0:1991:8:25, 149.20.4.69, ...
Connecting to www.kernel.org (www.kernel.org)|2001:4f8:1:10:0:1991:8:25|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 87308328 (83M) [application/x-xz]
Saving to: ‘linux-4.4.6.tar.xz’

100%[=============================================>] 87,308,328  7.18MB/s   in 11s

2016-04-22 00:15:24 (7.71 MB/s) - ‘linux-4.4.6.tar.xz’ saved [87308328/87308328]

[root@servertest src]# tar xf linux-4.4.6.tar.xz
[root@servertest src]# cd linux-4.4.6
[root@servertest linux-4.4.6]# make mrproper
[root@servertest linux-4.4.6]#


I also need the OVH config file:

[root@servertest linux-4.4.6]# wget ftp://ftp.ovh.net/made-in-ovh/bzImag...xx-std-ipv6-64
--2016-04-22 00:16:56-- ftp://ftp.ovh.net/made-in-ovh/bzImag...xx-std-ipv6-64
=> ‘config-4.4.6-xxxx-std-ipv6-64’
Resolving ftp.ovh.net (ftp.ovh.net)... 213.186.33.9
Connecting to ftp.ovh.net (ftp.ovh.net)|213.186.33.9|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done. ==> PWD ... done.
==> TYPE I ... done. ==> CWD (1) /made-in-ovh/bzImage/4.4.6 ... done.
==> SIZE config-4.4.6-xxxx-std-ipv6-64 ... 100735
==> PASV ... done. ==> RETR config-4.4.6-xxxx-std-ipv6-64 ... done.
Length: 100735 (98K) (unauthoritative)

100%[=============================================>] 100,735     --.-K/s   in 0.06s

2016-04-22 00:16:56 (1.50 MB/s) - ‘config-4.4.6-xxxx-std-ipv6-64’ saved [100735]

[root@servertest linux-4.4.6]# mv config-4.4.6-xxxx-std-ipv6-64 .config



Now I can start the process to configure kernel:

[root@servertest linux-4.4.6]# make menuconfig


Operations:

- load .config file
- Change: Device Drivers > Block Devices > RAM Block Device Support: Default RAM Disk Size = 16500000
- Enable loadable module support
- General Setup > Kernel Compression Mode = XZ (this is just for our convenience)
- Save the configuration


Then:

[root@servertest linux-4.4.6]# make    (compiling everything takes several minutes)
[root@servertest linux-4.4.6]# make modules
[root@servertest linux-4.4.6]# make modules_install


At the end:

[root@servertest linux-4.4.6]# cp arch/x86_64/boot/bzImage /boot/bzImage-modules-on-4.4.6-xxxx-grs-ipv6-64
[root@servertest linux-4.4.6]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/bzImage-modules-on-4.4.6-xxxx-grs-ipv6-64
done


and:

[root@servertest linux-4.4.6]# reboot


When the OS starts I check again the kernel version:

[root@servertest ~]# uname -r
4.4.6-xxxx-std-ipv6-64


It is correctly updated.
Because I want to create a RAID 1 between a ramdisk and an HDD partition, I unmount /dev/sda5:

[root@servertest ~]# umount /dev/sda5

And this is the status:

[root@servertest ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.8T 2.2G 1.7T 1% /
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 9.7M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda4 16G 40M 15G 1% /disk1
tmpfs 3.2G 0 3.2G 0% /run/user/0



[root@servertest ~]# fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 40 2048 1004.5K BIOS boot parti primary
2 4096 3840440319 1.8T Linux filesyste primary
3 3840440320 3841486847 511M Linux swap primary
4 3841486848 3874252799 15.6G Linux filesyste primary
5 3874252800 3907018751 15.6G Linux filesyste primary

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes



I modify the HDD partition:

[root@servertest ~]# cfdisk /dev/sda5

Operations:
- Primary
- Type: FD
- Save & Quit

To update the table in the OS:

[root@servertest ~]# partprobe

And finally I create the RAID 1. I create it with the ramdisk marked as missing, because I cannot partition the ramdisk with fdisk like a normal HDD; if I try to fdisk it, the system crashes.
So I first create the RAID 1 with the ramdisk missing, format it, and only after formatting do I add the ramdisk. This way the RAID 1 software will mirror the HDD partition onto the ramdisk.

[root@servertest ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing --write-mostly /dev/sda5
mdadm: /dev/sda5 appears to contain an ext2fs file system
size=16382976K mtime=Fri Apr 22 00:48:59 2016
mdadm: /dev/sda5 appears to be part of a raid array:
level=raid0 devices=0 ctime=Thu Jan 1 01:00:00 1970
mdadm: partition table exists on /dev/sda5 but will be lost or
meaningless after creating array
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.


Now I can format the new array:

[root@servertest ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1024000 inodes, 4093696 blocks
204684 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
125 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done



Currently there is no mdadm configuration file on the system, so I let mdadm append its configuration to /etc/mdadm.conf:

[root@servertest ~]# mdadm --examine --scan >> /etc/mdadm.conf

Now this is the content of the file:

ARRAY /dev/md/0 metadata=1.2 UUID=2f4c70f6:b02798e1:3a153742:1cbfd3cd name=servertest:0

But the UUID is wrong. The right one is:

[root@servertest ~]# blkid | grep /dev/md0
/dev/md0: UUID="9ece8839-3a44-44df-93ac-07eec9bc8e9e" TYPE="ext4"


So I modify /etc/mdadm.conf with the right UUID:

[root@servertest ~]# nano /etc/mdadm.conf

The content of the file now is:

ARRAY /dev/md/0 metadata=1.2 UUID=9ece8839-3a44-44df-93ac-07eec9bc8e9e name=servertest:0


I also need to add the new RAID device to /etc/fstab. I use the UUID of the RAID 1 md0 in the /etc/fstab file:

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/sda2 / ext4 errors=remount-ro 0 1
/dev/sda3 swap swap defaults 0 0
/dev/sda4 /disk1 ext4 defaults 1 2
#/dev/sda5 /disk2 ext4 defaults 1 2
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts defaults 0 0
UUID=9ece8839-3a44-44df-93ac-07eec9bc8e9e /rrdisk ext4 defaults 0 0



I create a directory on which to mount the RAID:
[root@servertest ~]# mkdir /rrdisk


I launch dracut to update the initramfs:

[root@servertest ~]# dracut --mdadmconf --add-drivers "raid1" --filesystems "ext4" --force /boot/initramfs-$(uname -r).img $(uname -r)


Mount the raid1 and verify its status:

[root@servertest ~]# mount /dev/md0 /rrdisk
[root@servertest ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md0 : active raid1 sda5[1](W)
16374784 blocks super 1.2 [2/1] [_U]

unused devices: <none>



Now I add the ramdisk to raid1:

[root@servertest ~]# mdadm --add /dev/md0 /dev/ram0
mdadm: added /dev/ram0


and check:

[root@servertest ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Apr 22 00:52:36 2016
Raid Level : raid1
Array Size : 16374784 (15.62 GiB 16.77 GB)
Used Dev Size : 16374784 (15.62 GiB 16.77 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Fri Apr 22 01:22:28 2016
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

Rebuild Status : 36% complete

Name : servertest:0 (local to host servertest)
UUID : 2f4c70f6:b02798e1:3a153742:1cbfd3cd
Events : 265

Number Major Minor RaidDevice State
2 1 0 0 spare rebuilding /dev/ram0
1 8 5 1 active sync writemostly /dev/sda5




As soon as the RAID finishes rebuilding, I test the RAID 1:

[root@servertest rrdisk]# hdparm -t /dev/md0

/dev/md0:
Timing buffered disk reads: 6296 MB in 3.00 seconds = 2097.61 MB/sec



The test on the normal HDD is:

[root@servertest rrdisk]# hdparm -t /dev/sda4

/dev/sda4:
Timing buffered disk reads: 292 MB in 3.02 seconds = 96.80 MB/sec



Currently everything is perfect and the RAID 1 is running. These are the configuration files:


FILE: /etc/mdadm.conf

ARRAY /dev/md/0 metadata=1.2 UUID=9ece8839-3a44-44df-93ac-07eec9bc8e9e name=servertest:0


FILE: /etc/fstab

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/sda2 / ext4 errors=remount-ro 0 1
/dev/sda3 swap swap defaults 0 0
/dev/sda4 /disk1 ext4 defaults 1 2
#/dev/sda5 /disk2 ext4 defaults 1 2
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts defaults 0 0
UUID=9ece8839-3a44-44df-93ac-07eec9bc8e9e /rrdisk ext4 defaults 0 0



FILE: /etc/default/grub

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
# under any circumstances, keep net.ifnames=0!
# http://www.freedesktop.org/wiki/Soft...nterfaceNames/
GRUB_CMDLINE_LINUX="net.ifnames=0"
GRUB_DISABLE_RECOVERY="true"
GRUB_DISABLE_LINUX_UUID="true"



I have read in many places that it could be better to use the UUID in the GRUB file as well. In particular, I read many suggestions to modify the following line like this:

GRUB_CMDLINE_LINUX="net.ifnames=0 rd.auto rd.auto=1 rd.md.uuid=9ece8839-3a44-44df-93ac-07eec9bc8e9e"

In fact from the man page:

rd.auto rd.auto=1
enable autoassembly of special devices like cryptoLUKS, dmraid,
mdraid or lvm. Default is off as of dracut version >= 024.



But for now, I try to reboot without modifying GRUB.
 
Old 04-21-2016, 06:41 PM   #11
Elaidon
LQ Newbie
 
Registered: Apr 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
During boot this is shown:

[** ] A start job is running for dev-disk....c9bc8e9e.device (1min 30s / 1min 30 sec)

Then the OS enters emergency mode. At this point, if I modify /etc/fstab by commenting out the UUID line, the OS boots again, but obviously without mounting the RAID 1 md0.

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/sda2 / ext4 errors=remount-ro 0 1
/dev/sda3 swap swap defaults 0 0
/dev/sda4 /disk1 ext4 defaults 1 2
#/dev/sda5 /disk2 ext4 defaults 1 2
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts defaults 0 0
#UUID=9ece8839-3a44-44df-93ac-07eec9bc8e9e /rrdisk ext4 defaults 0 0



I have also seen this line during the boot sequence:

[FAILED] Failed to start Software RAID monitoring and management.
See 'systemctl status mdmonitor.service' for details.


This is the result of the command:

[root@servertest ~]# systemctl status mdmonitor.service
● mdmonitor.service - Software RAID monitoring and management
Loaded: loaded (/usr/lib/systemd/system/mdmonitor.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2016-04-22 01:44:42 CEST; 1min 55s ago
Process: 468 ExecStart=/sbin/mdadm --monitor --scan -f --pid-file=/var/run/mdadm/mdadm.pid (code=exited, status=1/FAILURE)

Apr 22 01:44:42 servertest systemd[1]: Starting Software RAID monitoring and management...
Apr 22 01:44:42 servertest mdadm[468]: mdadm: No mail address or alert command - not monitoring.
Apr 22 01:44:42 servertest systemd[1]: mdmonitor.service: control process exited, code=exited status=1
Apr 22 01:44:42 servertest systemd[1]: Failed to start Software RAID monitoring and management.
Apr 22 01:44:42 servertest systemd[1]: Unit mdmonitor.service entered failed state.
Apr 22 01:44:42 servertest systemd[1]: mdmonitor.service failed.


So I added this to /etc/mdadm.conf:

MAILADDR elaidonwebsolutions@gmail.com

and this problem is solved, but the boot still fails.

Last edited by Elaidon; 04-21-2016 at 06:53 PM.
 
Old 04-21-2016, 06:50 PM   #12
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,119

Rep: Reputation: 4120
There are a bunch of possible solutions for transparently caching disk I/O on something faster. Typically just putting an SSD in front of the I/O is sufficient, but that also suffers from the "initial read" latency.
For your project I suggest you read this; especially section 5. It's LVM based which adds the ability to specify that you want the RAMDISK leg of the RAID to be where most reads come from. Perfect for your scenario.
Note also the use of brd to create the RAMDISK without kernel recompile. It also provides systemd unit files to handle the creation/breakdown of the RAID. I have this on my (long) list of things to try out.
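
(For reference, loading the ramdisk via the brd module would be something like the following, assuming brd is built as a module in your kernel; rd_size is in KiB, so 16 GiB is 16777216:

modprobe brd rd_nr=1 rd_size=16777216

which creates /dev/ram0 without recompiling anything.)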
 
Old 04-21-2016, 07:01 PM   #13
Elaidon
LQ Newbie
 
Registered: Apr 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by syg00 View Post
There are a bunch of possible solutions for transparently caching disk I/O on something faster. Typically just putting an SSD in front of the I/O is sufficient, but that also suffers from the "initial read" latency.
For your project I suggest you read this; especially section 5. It's LVM based which adds the ability to specify that you want the RAMDISK leg of the RAID to be where most reads come from. Perfect for your scenario.
Note also the use of brd to create the RAMDISK without kernel recompile. It also provides systemd unit files to handle the creation/breakdown of the RAID. I have this on my (long) list of things to try out.
Thank you syg00 for your suggestions. I will certainly read your link in the next few minutes. But right now, beyond the possible alternatives, this has become a matter of pride: I am sure I am close to succeeding, so first I want to get this RAID 1 working, because I cannot let the computer beat me. Man has to win!

After that I will step back and run tests to see what works best. In any case, I will read your link shortly because it could give me ideas for what I am trying to do now.
 
Old 04-25-2016, 08:40 AM   #14
Elaidon
LQ Newbie
 
Registered: Apr 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
I found the solution and I am sharing it with everyone.

The solution is to write a script that stops and reassembles the RAID, and run it at boot.

For CentOS 7:

nano /root/startraid.sh

Inside the file write:

#!/bin/bash
mdadm --stop /dev/md0
mdadm --assemble --scan
mount /dev/md0 /rrdisk
mdadm /dev/md0 -a /dev/ram0


Save and exit.

Then:

chmod +x /root/startraid.sh


Now:
nano /etc/rc.d/rc.local

Add at the end of the file:

sh /root/startraid.sh

Save and exit.

Finally:

chmod +x /etc/rc.d/rc.local

and reboot
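
As a possible alternative to rc.local, the same script could be run from a small systemd unit (an untested sketch; the unit name and the ordering on local-fs.target are just an example):

[Unit]
Description=Reassemble RAID1 and re-add the ramdisk
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/root/startraid.sh

[Install]
WantedBy=multi-user.target

saved for example as /etc/systemd/system/startraid.service and enabled with "systemctl enable startraid.service".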
 
Old 04-27-2016, 01:41 PM   #15
Elaidon
LQ Newbie
 
Registered: Apr 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
I tested my RAID 1 (traditional ramdisk + HDD with --write-mostly) against a tmpfs ramdisk for read speed. These are the results:

Raid1:

[root@servertest ~]# dd if=/rrdisk/tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.170897 s, 6.3 GB/s


Ramdisk tmpfs:

[root@servertest ~]# dd if=/mnt/ramdisk/tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.193274 s, 5.6 GB/s


I find it very interesting, but why is the RAID faster than a pure in-memory ramdisk?

Last edited by Elaidon; 04-27-2016 at 01:42 PM.
 
  

