Success. Verification of the software raid1 functionality has finally been achieved. I was told that even though the raid partitions were not given mount points during raid creation, I could still mount them after the raid was broken. I guess an experienced linux user would have known that. Anyway, the steps I followed to verify that raid 1 did indeed create two separate copies of data are included below. Thanks for your help.
Verifying Software Raid1
Raid1 should result in 2 copies of identical data when writing to the raid. This procedure attempts to verify that 2 copies do indeed exist.
Note: this procedure breaks apart the raid and expects the user to re-build it afterwards, so we try to minimize the changes between the 2 copies of data. There is no guarantee that the minimal changes made here will be transparent to the raid when it is re-established.
Key words: software raid1, verification raid1, testing raid1, duplicate raid1 data
1. Assumptions and terminology (substitute the names used in your raid configuration for the corresponding names used here)
1.1. A software raid 1 configuration has been set up. The OS is CentOS 5.1
1.2. The 2 elements to be raided are software partitions (on the same hard disk). On the system tested they are called /dev/hda5 and /dev/hda6
1.3. The logical device name of the raid is /dev/md0 and is mounted to /work_1. This mount point was assigned during raid 1 setup.
1.4. The tool mdadm exists on the linux system. This tool is used to change and determine status of the raid.
1.5. Login as root to avoid privilege violations.
2. Populate the raid with data
2.1. fdisk -l > /work_1/fdisk_output.txt
2.1.1. I believe you need root privileges to use the fdisk -l option. If this is undesirable you can send the output of a different command to the disk.
2.1.2. I ignored the following message “Disk /dev/md0 doesn’t contain a valid partition table”, which seems to follow every fdisk -l command
3. Ensure that fdisk_output.txt is saved in the format of your text editing tool. I don’t know if this step is needed, but I feel it is a good idea. (Optional)
3.1. OpenOffice was used to open and then save /work_1/fdisk_output.txt with no changes.
3.2. Choose “yes”, you want to save in text format.
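If you’d rather avoid the OpenOffice round-trip, a checksum gives an exact record of the file before the raid is broken. A minimal sketch (the temporary directory and placeholder contents are just so it runs anywhere; on the real system you would checksum /work_1/fdisk_output.txt):

```shell
# Record a checksum before breaking the raid so each copy can later be
# verified byte-for-byte. mktemp is used here so the sketch runs anywhere;
# substitute /work_1/fdisk_output.txt on the real system.
work=$(mktemp -d)
fdisk -l > "$work/fdisk_output.txt" 2>/dev/null \
    || echo "placeholder contents" > "$work/fdisk_output.txt"  # fdisk -l needs root
md5sum "$work/fdisk_output.txt" > "$work/fdisk_output.md5"
md5sum -c "$work/fdisk_output.md5"   # reports OK while the file is unmodified
```

After the raid is broken, running md5sum -c against each mounted copy tells you immediately whether that copy has been modified.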
4. Break apart the raid
4.1. Effectively “fail” one of the partitions (note that two dashes precede the mdadm options --fail, --remove, --add and --detail)
4.1.1. mdadm /dev/md0 --fail /dev/hda5
4.2. Remove the failed partition from the raid
4.2.1. mdadm /dev/md0 --remove /dev/hda5
4.3. Verify that the raid has been broken (Optional)
4.3.1. mdadm --detail /dev/md0
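Step 4.3 can also be scripted: mdadm --detail prints a State line that includes “degraded” once a member has been failed and removed. A sketch of that check, demonstrated against an illustrative stand-in for the real output (on the real system you would pipe mdadm --detail /dev/md0 into grep):

```shell
# Check for a degraded array without reading the output by eye. The
# State line below is an illustrative stand-in for real mdadm output;
# on the real system:  mdadm --detail /dev/md0 | grep -q degraded
is_degraded() { grep -q 'degraded' "$1"; }
detail=$(mktemp)
echo '          State : clean, degraded' > "$detail"   # sample State line
is_degraded "$detail" && echo "raid is degraded (broken apart)"
```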
5. Make the identical files different.
5.1. Create a directory to which we will mount the removed partition
5.1.1. mkdir /home/test_a
5.2. Mount the “removed” partition to the test area
5.2.1. mount /dev/hda5 /home/test_a (root privileges needed)
5.3. Make a minor modification to file /home/test_a/fdisk_output.txt
5.3.1. OpenOffice was used
5.3.2. The uppercase “D” in line 1 was changed to a lower case “d”
5.3.3. Save the file and accept the changes. Choose “yes”, you want to save in text format
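The one-character edit in 5.3 can also be made from the command line with sed, avoiding the editor entirely. A sketch, demonstrated on a temporary file standing in for /home/test_a/fdisk_output.txt:

```shell
# Lower-case the leading "D" on line 1 without opening an editor.
# Demonstrated on a temporary file; on the real system the target
# would be /home/test_a/fdisk_output.txt.
f=$(mktemp)
echo "Disk /dev/hda: 80.0 GB" > "$f"   # stand-in for line 1 of the fdisk output
sed -i '1s/^D/d/' "$f"                 # the same edit made in step 5.3.2
head -1 "$f"                           # prints "disk /dev/hda: 80.0 GB"
```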
6. Verify that 2 separate files do indeed exist
6.1. Display the file just modified
6.1.1. cat /home/test_a/fdisk_output.txt
6.1.2. Note the lower case “d”
6.2. Display the file still part of the degraded raid
6.2.1. cat /work_1/fdisk_output.txt
6.2.2. Note the upper case “D”
6.3. Note that the time stamps of the 2 files are also different
6.3.1. ls -l /home/test_a/fdisk_output.txt
6.3.2. ls -l /work_1/fdisk_output.txt
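Rather than comparing the two cat listings by eye, cmp (or diff) can confirm the copies differ. A sketch using two temporary files as stand-ins for /home/test_a/fdisk_output.txt and /work_1/fdisk_output.txt:

```shell
# Confirm the two copies differ without eyeballing the cat output.
# Temporary files stand in for the two real paths.
a=$(mktemp)   # stands in for /home/test_a/fdisk_output.txt
b=$(mktemp)   # stands in for /work_1/fdisk_output.txt
echo "disk /dev/hda: 80.0 GB" > "$a"   # modified copy (lower-case "d")
echo "Disk /dev/hda: 80.0 GB" > "$b"   # copy still in the degraded raid
if cmp -s "$a" "$b"; then
    echo "identical -- the copies were not independent"
else
    echo "copies differ -- two separate files exist"
fi
```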
7. Re-build the raid (Optional)
7.1. Unmount the removed partition
7.1.1. cd /home (ensures a different location than /home/test_a)
7.1.2. umount /dev/hda5
7.2. Add the removed partition back into the raid
7.2.1. mdadm /dev/md0 --add /dev/hda5
7.3. Interrogate the status of the raid (Optional)
7.3.1. mdadm --detail /dev/md0
7.3.2. Verify the raid has been rebuilt.
You may see the raid in an intermediate rebuilding state. Wait approximately 30 seconds and re-issue the interrogate command. The status should then show the raid successfully re-established.
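Instead of guessing 30 seconds, you can watch /proc/mdstat, which shows “recovery” (or “resync”) while the mirror rebuilds. A sketch of that check; the sample file contents here are an illustrative stand-in for the real /proc/mdstat:

```shell
# While the mirror rebuilds, /proc/mdstat contains "recovery" (or
# "resync"). On the real system you would poll the live file:
#   while grep -Eq 'resync|recovery' /proc/mdstat; do sleep 5; done
mdstat_busy() { grep -Eq 'resync|recovery' "$1"; }
sample=$(mktemp)
echo 'md0 : active raid1 hda6[1] hda5[0]' > "$sample"   # illustrative stand-in
mdstat_busy "$sample" && echo "still rebuilding" || echo "rebuild finished"
```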
Note: The surviving data belonged to /work_1/fdisk_output.txt, the file that resided in the degraded raid. If you were to break apart the raid again, you would notice that both files are now identical and match the surviving version (upper case “D”).