To make yourself feel better, make a backup copy (cp -a ...) of /boot somewhere in the “/” partition/logical volume (e.g. /boot_backup) before you start. That way, if all else fails, you can always recreate the RAID1 and repopulate it from the backup.
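For example (using the /boot_backup name from above):

# cp -a /boot /boot_backup    # archive copy: preserves permissions, ownership, and symlinks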
You shouldn't need to do anything about GRUB or anything else. If you remove the “bad” drive in rescue mode (leaving a “degraded” RAID1), physically put in the new, properly partitioned drive, and then add the new partition to the RAID1 in rescue mode, the “degraded” RAID1 will be rebuilt within seconds of adding the new partition and everything will be like new again.
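As a sketch, once the new (blank) drive is in place: assuming /dev/sda is the surviving drive, /dev/sdb is the replacement, /dev/sdb2 is the RAID member partition, and /dev/md0 is the array (substitute your own device names), one common way to clone the partition layout and kick off the rebuild is:

# sfdisk -d /dev/sda | sfdisk /dev/sdb    # copy the MBR partition table from the good drive
# mdadm /dev/md0 -a /dev/sdb2             # add the new partition; the resync starts immediately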
From rescue mode, you can prove it to yourself by intentionally failing and removing one of the RAID1 partitions and then adding it back, something like this (using the correct RAID device and partitions, of course):
# mdadm /dev/md0 -f /dev/sdb2 -r /dev/sdb2    # mark the partition faulty, then remove it
# mdadm --detail /dev/md0                     # the array now shows as degraded
# mdadm /dev/md0 -a /dev/sdb2                 # add the partition back; the rebuild begins
# mdadm --detail /dev/md0                     # run this quickly to catch the rebuild in progress
where --detail will show you the status of the RAID1. If you run the second --detail quickly after adding the partition, you will catch it in the middle of rebuilding the RAID1.
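You can also watch the rebuild from the kernel's side:

# cat /proc/mdstat    # shows each array's state and a progress bar during recovery

or run it under watch to follow the progress continuously.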
If you are concerned about GRUB, then make a GRUB boot floppy as described in the GRUB Manual:
http://www.gnu.org/software/grub/man...UB-boot-floppy
Then, if something goes wrong, you can reinstall GRUB natively by following the instructions here:
http://www.gnu.org/software/grub/man...-GRUB-natively
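The native reinstall comes down to two commands at the grub shell prompt. A sketch, assuming /boot is on the first partition of the first BIOS drive (adjust (hd0,0) to match your layout): root points grub at the partition holding /boot/grub, and setup installs the boot loader into that drive's MBR:

grub> root (hd0,0)
grub> setup (hd0)

With a RAID1 /boot you would normally repeat this for the second drive too (e.g. root (hd1,0) and setup (hd1)) so that either disk can boot on its own.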