First, I implemented RAID 1 and it worked. Then I deleted the partitions sda6 and sda7 and created sda6, sda7, and sda8 for a fresh implementation of RAID Level 5.
I stopped the existing RAID:
Code:
[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
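Before touching the partition table I also wanted to be sure the array was really released. This is only a rough sketch of the kind of checks that apply here (commands only, not pasted from my machine): /proc/mdstat lists the arrays the kernel currently has assembled, so md0 should no longer appear there after the stop.
Code:
# Sketch of post-stop sanity checks (output omitted):
cat /proc/mdstat            # lists active md arrays; md0 should be absent after "mdadm -S /dev/md0"
mdadm --detail /dev/md0     # should now fail or report nothing useful, since the array is stopped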
This is the scene after deleting the partitions and recreating them:
Code:
[root@localhost ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1275    10241406   83  Linux
/dev/sda2            1276        1657     3068415   82  Linux swap / Solaris
/dev/sda3            1658        2874     9775552+   5  Extended
/dev/sda5            1658        1901     1959898+  83  Linux
/dev/sda6            1902        1914      104391   fd  Linux raid autodetect
/dev/sda7            1915        1927      104391   fd  Linux raid autodetect
/dev/sda8            1928        1940      104391   fd  Linux raid autodetect
Disk /dev/md0: 213 MB, 213647360 bytes
2 heads, 4 sectors/track, 52160 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk /dev/md0 doesn't contain a valid partition table
[root@localhost ~]#
The last line above, "Disk /dev/md0 doesn't contain a valid partition table", looks like an error or warning. Why is it there?
My trainer has told me that I must now create other partitions, sda9, sda10, and sda11, for the new RAID because the previous ones are causing the problem. I am trying that now; however, I want to know why we have to leave sda6 and sda7 unused even after deleting and recreating them.
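One thing I want to check, though I have not run it yet and am not sure this is what my trainer meant, is whether the old RAID 1 metadata is still sitting on those partitions. As far as I understand, mdadm --examine reads the md superblock from a member device and --zero-superblock clears it; this is only a sketch:
Code:
# Sketch: look for leftover RAID1 superblocks on the recreated partitions.
mdadm --examine /dev/sda6       # prints md metadata found on the device, if any
mdadm --examine /dev/sda7

# If stale metadata shows up, it could be wiped before reusing the partitions:
# mdadm --zero-superblock /dev/sda6
# mdadm --zero-superblock /dev/sda7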
Well, here is what I am trying now: