OK, I'll try to share my whole OS setup with you.
During installation of the OVH server I can create partitions and must name them. I cannot have more than 4 primary partitions. So the starting partitions are:
[root@servertest ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.8T 933M 1.7T 1% /
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 9.8M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda5 16G 40M 15G 1% /disk2
/dev/sda4 16G 40M 15G 1% /disk1
tmpfs 3.2G 0 3.2G 0% /run/user/0
[root@servertest linux-4.4.6]# fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
# Start End Size Type Name
1 40 2048 1004.5K BIOS boot parti primary
2 4096 3840440319 1.8T Linux filesyste primary
3 3840440320 3841486847 511M Linux swap primary
4 3841486848 3874252799 15.6G Linux filesyste primary
5 3874252800 3907018751 15.6G Linux filesyste primary
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
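As a sanity check, the sizes fdisk prints can be reproduced from the sector ranges; a quick sketch for partition 4 (512-byte sectors, numbers taken from the fdisk output above):

```shell
# Start/end sectors of /dev/sda4 as printed by fdisk
start=3841486848
end=3874252799
# (end - start + 1) sectors * 512 bytes, converted to GiB
awk -v s="$start" -v e="$end" \
    'BEGIN { printf "%.1f GiB\n", (e - s + 1) * 512 / 1024^3 }'
# → 15.6 GiB, matching fdisk's "15.6G"
```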
I have created 2 HDD partitions:
- /dev/sda4 (mounted at /disk1)
- /dev/sda5 (mounted at /disk2)
The original OS is CentOS 7.2-1511 with kernel:
[root@servertest ~]# uname -r
3.14.32-xxxx-grs-ipv6-64
OVH templates are based on a standard kernel with some modifications.
Because I need a 16GB ramdisk and the CentOS default has a 16MB limit, I have to modify the kernel configuration and recompile. While I'm at it, I upgrade to the latest version available at OVH.
[root@servertest ~]# cd /usr/src
[root@servertest src]#
[root@servertest src]# wget https://www.kernel.org/pub/linux/ker...x-4.4.6.tar.xz
--2016-04-22 00:15:12-- https://www.kernel.org/pub/linux/ker...x-4.4.6.tar.xz
Resolving www.kernel.org (www.kernel.org)... 2001:4f8:1:10:0:1991:8:25, 2620:3:c000:a:0:1991:8:25, 149.20.4.69, ...
Connecting to www.kernel.org (www.kernel.org)|2001:4f8:1:10:0:1991:8:25|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 87308328 (83M) [application/x-xz]
Saving to: ‘linux-4.4.6.tar.xz’
100%[==========================================>] 87,308,328 7.18MB/s in 11s
2016-04-22 00:15:24 (7.71 MB/s) - ‘linux-4.4.6.tar.xz’ saved [87308328/87308328]
[root@servertest src]# tar xf linux-4.4.6.tar.xz
[root@servertest src]# cd linux-4.4.6
[root@servertest linux-4.4.6]# make mrproper
[root@servertest linux-4.4.6]#
I also need the OVH config file:
[root@servertest linux-4.4.6]# wget ftp://ftp.ovh.net/made-in-ovh/bzImag...xx-std-ipv6-64
--2016-04-22 00:16:56-- ftp://ftp.ovh.net/made-in-ovh/bzImag...xx-std-ipv6-64
=> ‘config-4.4.6-xxxx-std-ipv6-64’
Resolving ftp.ovh.net (ftp.ovh.net)... 213.186.33.9
Connecting to ftp.ovh.net (ftp.ovh.net)|213.186.33.9|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done. ==> PWD ... done.
==> TYPE I ... done. ==> CWD (1) /made-in-ovh/bzImage/4.4.6 ... done.
==> SIZE config-4.4.6-xxxx-std-ipv6-64 ... 100735
==> PASV ... done. ==> RETR config-4.4.6-xxxx-std-ipv6-64 ... done.
Length: 100735 (98K) (unauthoritative)
100%[==========================================>] 100,735 --.-K/s in 0.06s
2016-04-22 00:16:56 (1.50 MB/s) - ‘config-4.4.6-xxxx-std-ipv6-64’ saved [100735]
[root@servertest linux-4.4.6]# mv config-4.4.6-xxxx-std-ipv6-64 .config
Now I can start configuring the kernel:
[root@servertest linux-4.4.6]# make menuconfig
Operations:
- load .config file
- Change: Device Drivers > Block Devices > RAM Block Device Support: Default RAM Disk Size = 16500000
- Enable loadable module support
- General Setup > Kernel Compression Mode = XZ (this is just for convenience)
- Save the configuration
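After saving, the relevant lines in .config should look roughly like this (a hypothetical excerpt; option names are from the mainline Kconfig, and CONFIG_BLK_DEV_RAM_SIZE is expressed in KiB, so 16500000 comfortably covers the 15.6G partition it will mirror):

```shell
# Hypothetical excerpt of the saved .config after the changes above
cat > /tmp/config-excerpt <<'EOF'
CONFIG_MODULES=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=16500000
CONFIG_KERNEL_XZ=y
EOF
# The size is in KiB: 16500000 KiB is about 15.7 GiB
awk -F= '/RAM_SIZE/ { printf "%.1f GiB\n", $2 / 1024 / 1024 }' /tmp/config-excerpt
# → 15.7 GiB
```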
Then:
[root@servertest linux-4.4.6]# make (compiling everything takes several minutes)
[root@servertest linux-4.4.6]# make modules
[root@servertest linux-4.4.6]# make modules_install
At the end:
[root@servertest linux-4.4.6]# cp arch/x86_64/boot/bzImage /boot/bzImage-modules-on-4.4.6-xxxx-grs-ipv6-64
[root@servertest linux-4.4.6]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/bzImage-modules-on-4.4.6-xxxx-grs-ipv6-64
done
and:
[root@servertest linux-4.4.6]# reboot
When the OS comes back up, I check the kernel version again:
[root@servertest ~]# uname -r
4.4.6-xxxx-std-ipv6-64
The kernel has been updated correctly.
Because I want to create a raid1 between a ramdisk and an HDD partition, I unmount /dev/sda5:
[root@servertest ~]# umount /dev/sda5
And this is the status:
[root@servertest ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 1.8T 2.2G 1.7T 1% /
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 9.7M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda4 16G 40M 15G 1% /disk1
tmpfs 3.2G 0 3.2G 0% /run/user/0
[root@servertest ~]# fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
# Start End Size Type Name
1 40 2048 1004.5K BIOS boot parti primary
2 4096 3840440319 1.8T Linux filesyste primary
3 3840440320 3841486847 511M Linux swap primary
4 3841486848 3874252799 15.6G Linux filesyste primary
5 3874252800 3907018751 15.6G Linux filesyste primary
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
I change the type of the HDD partition:
[root@servertest ~]# cfdisk /dev/sda5
Operations:
- Primary
- Type: FD (Linux raid autodetect)
- Save & Quit
To make the OS re-read the partition table:
[root@servertest ~]# partprobe
And finally I create the raid1. I create it with the ramdisk missing, because the ramdisk cannot be partitioned with fdisk like a normal HDD; if I try to fdisk it, the system crashes.
So I first create the raid1 with the ramdisk missing and format it; only after formatting will I add the ramdisk. That way the raid1 software will mirror the HDD partition onto the ramdisk.
[root@servertest ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing --write-mostly /dev/sda5
mdadm: /dev/sda5 appears to contain an ext2fs file system
size=16382976K mtime=Fri Apr 22 00:48:59 2016
mdadm: /dev/sda5 appears to be part of a raid array:
level=raid0 devices=0 ctime=Thu Jan 1 01:00:00 1970
mdadm: partition table exists on /dev/sda5 but will be lost or
meaningless after creating array
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Now I can format the new array:
[root@servertest ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1024000 inodes, 4093696 blocks
204684 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
125 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Currently there is no mdadm configuration file on the system, so I let mdadm append its configuration to /etc/mdadm.conf:
[root@servertest ~]# mdadm --examine --scan >> /etc/mdadm.conf
Now this is the content of the file:
ARRAY /dev/md/0 metadata=1.2 UUID=2f4c70f6:b02798e1:3a153742:1cbfd3cd name=servertest:0
Note that this colon-separated value is the md array UUID. blkid shows a different identifier for /dev/md0, the ext4 filesystem UUID, which is the one /etc/fstab needs:
[root@servertest ~]# blkid | grep /dev/md0
/dev/md0: UUID="9ece8839-3a44-44df-93ac-07eec9bc8e9e" TYPE="ext4"
I put the filesystem UUID into /etc/mdadm.conf as well (strictly speaking, mdadm matches arrays by the array UUID, so the line it generated would also have worked):
[root@servertest ~]# nano /etc/mdadm.conf
The content of the file now is:
ARRAY /dev/md/0 metadata=1.2 UUID=9ece8839-3a44-44df-93ac-07eec9bc8e9e name=servertest:0
I also need to replace the device entries in /etc/fstab with the new RAID device, using the filesystem UUID of the raid1 md0:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/sda2 / ext4 errors=remount-ro 0 1
/dev/sda3 swap swap defaults 0 0
/dev/sda4 /disk1 ext4 defaults 1 2
#/dev/sda5 /disk2 ext4 defaults 1 2
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts defaults 0 0
UUID=9ece8839-3a44-44df-93ac-07eec9bc8e9e /rrdisk ext4 defaults 0 0
I create a mount point for the raid:
[root@servertest ~]# mkdir /rrdisk
I run dracut to regenerate the initramfs with raid support:
[root@servertest ~]# dracut --mdadmconf --add-drivers "raid1" --filesystems "ext4" --force /boot/initramfs-$(uname -r).img $(uname -r)
Mount the raid1 and verify its status:
[root@servertest ~]# mount /dev/md0 /rrdisk
[root@servertest ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md0 : active raid1 sda5[1](W)
16374784 blocks super 1.2 [2/1] [_U]
unused devices: <none>
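Before adding /dev/ram0, it's worth checking that the ramdisk is at least as large as the array's per-device size; a quick sketch using the sizes from this setup (the "Used Dev Size" mdadm reports, and the ramdisk size configured in the kernel build):

```shell
used_dev_kib=16374784   # "Used Dev Size" reported by mdadm --detail, in KiB
ram_kib=16500000        # CONFIG_BLK_DEV_RAM_SIZE from the kernel build, in KiB
if [ "$ram_kib" -ge "$used_dev_kib" ]; then
    echo "ram0 is large enough to join the mirror"
else
    echo "ram0 is too small" >&2
fi
# → ram0 is large enough to join the mirror
```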
Now I add the ramdisk to the raid1:
[root@servertest ~]# mdadm --add /dev/md0 /dev/ram0
mdadm: added /dev/ram0
and check:
[root@servertest ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Apr 22 00:52:36 2016
Raid Level : raid1
Array Size : 16374784 (15.62 GiB 16.77 GB)
Used Dev Size : 16374784 (15.62 GiB 16.77 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Fri Apr 22 01:22:28 2016
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 36% complete
Name : servertest:0 (local to host servertest)
UUID : 2f4c70f6:b02798e1:3a153742:1cbfd3cd
Events : 265
Number Major Minor RaidDevice State
2 1 0 0 spare rebuilding /dev/ram0
1 8 5 1 active sync writemostly /dev/sda5
As soon as the raid has finished rebuilding, I test the raid1:
[root@servertest rrdisk]# hdparm -t /dev/md0
/dev/md0:
Timing buffered disk reads: 6296 MB in 3.00 seconds = 2097.61 MB/sec
The test on the normal HDD is:
[root@servertest rrdisk]# hdparm -t /dev/sda4
/dev/sda4:
Timing buffered disk reads: 292 MB in 3.02 seconds = 96.80 MB/sec
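That is roughly a 20x read speedup: with --write-mostly set on /dev/sda5, raid1 serves reads from the ramdisk whenever possible. The ratio from the two runs above:

```shell
# hdparm figures from the two tests above (MB/s)
awk 'BEGIN { printf "raid1 vs plain HDD reads: %.0fx faster\n", 2097.61 / 96.80 }'
# → raid1 vs plain HDD reads: 22x faster
```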
Currently everything is working and the raid1 is running. These are the configuration files:
FILE: /etc/mdadm.conf
ARRAY /dev/md/0 metadata=1.2 UUID=9ece8839-3a44-44df-93ac-07eec9bc8e9e name=servertest:0
FILE: /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/sda2 / ext4 errors=remount-ro 0 1
/dev/sda3 swap swap defaults 0 0
/dev/sda4 /disk1 ext4 defaults 1 2
#/dev/sda5 /disk2 ext4 defaults 1 2
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts defaults 0 0
UUID=9ece8839-3a44-44df-93ac-07eec9bc8e9e /rrdisk ext4 defaults 0 0
FILE: /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
# under any circumstances, keep net.ifnames=0!
# http://www.freedesktop.org/wiki/Soft...nterfaceNames/
GRUB_CMDLINE_LINUX="net.ifnames=0"
GRUB_DISABLE_RECOVERY="true"
GRUB_DISABLE_LINUX_UUID="true"
I have read in many places that it could be better to use the UUID in the GRUB file as well. In particular, I found many suggestions to modify the following line like this:
GRUB_CMDLINE_LINUX="net.ifnames=0 rd.auto rd.auto=1 rd.md.uuid=9ece8839-3a44-44df-93ac-07eec9bc8e9e"
(Careful, though: rd.md.uuid expects the md array UUID reported by mdadm --detail, not the filesystem UUID used here.)
In fact from the man page:
rd.auto rd.auto=1
enable autoassembly of special devices like cryptoLUKS, dmraid,
mdraid or lvm. Default is off as of dracut version >= 024.
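If I do apply it later, the variant below seems safer, substituting the array UUID from mdadm --detail; an untested sketch:

```shell
# /etc/default/grub — hypothetical variant using the md array UUID
GRUB_CMDLINE_LINUX="net.ifnames=0 rd.auto rd.auto=1 rd.md.uuid=2f4c70f6:b02798e1:3a153742:1cbfd3cd"
```

followed by grub2-mkconfig -o /boot/grub2/grub.cfg to regenerate the config.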
But for now, I reboot without modifying GRUB.