It doesn't have to be a nightmare, but you will have to have the confidence to try some things and see what works for you.
I pretty much gave you the extent of their help (from my notes), minus the lessons and learning dialogue. The command that restored my RAID 5 was:
Code:
mdadm --create /dev/md0 --assume-clean -l5 -n4 -c512 /dev/sd[bcd]1 missing
I don't know if you can remove the "--assume-clean" if that worries you; you can always try it without it. We had to take several stabs at restoring mine because I didn't know my chunk size, etc. Once he explained that the command was only writing to the superblock, I pretty much ran with it from there.
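If any of your superblocks is still readable, that's where the parameters come from. Something like this should pull them out (a sketch; swap in whichever of your partitions still has intact metadata):
Code:
mdadm --examine /dev/sdc1 | grep -E 'Level|Devices|Chunk|Role'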
I've had to restore my RAID since then because of a power failure, and I used these same commands. From what I can see, the RAID is fairly bulletproof as long as you don't kick off a rebuild/resync...
Disclaimer... I'm no expert (which is why I didn't suggest you try the create command), and I'm old and working with a very bad memory, but they basically explained that the RAID shouldn't try to rebuild as long as you include the "missing" device parameter. And as long as it doesn't try to rebuild/resync, all this command does is create a new/clean superblock. It shouldn't disturb any of your data, because what you're creating is a degraded array: with one drive marked missing, there's nothing for it to rebuild parity onto.
In my case I would create a superblock with xxx chunk size, mount the drive, then try to read the data. When that didn't work, I'd try yyy chunk size, then zzz chunk size, and so on. Each time I ran the command, all it was doing was creating a new superblock, which assembled the RAID. My data would be there, but unreadable. Once we found the right chunk size, which was 512, I was able to read my data...
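In case it helps, the loop looked roughly like this (a sketch assuming my device names and a /mnt mount point; the read-only mount is the important part, so a wrong guess can't write anything):
Code:
mdadm --create /dev/md0 --assume-clean -l5 -n4 -c64 /dev/sd[bcd]1 missing
mount -o ro /dev/md0 /mnt    # read-only, so nothing touches the data
ls /mnt                      # readable files = right chunk size
umount /mnt
mdadm --stop /dev/md0        # then repeat with -c128, -c256, -c512...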
Is there any way you can make a backup of 3 of these drives? If so, make a backup and try fiddling with the command to see if you can make it work. The backup doesn't have to be to another similar drive; you can make a compressed image of the drives.
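One way to do that (a sketch, assuming sdc1 and enough free space under /backup; repeat per drive):
Code:
dd if=/dev/sdc1 bs=1M | gzip > /backup/sdc1.img.gz
# and to put it back if the fiddling goes wrong:
gunzip -c /backup/sdc1.img.gz | dd of=/dev/sdc1 bs=1M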
You know all your parameters from that initial mdadm --examine /dev/sd[bcde]1 you posted. You can also back up your superblocks, because again, your data will be there as long as the RAID doesn't rebuild/resync, and you can copy your backup superblocks back if your attempt fails.
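A rough way to save those, assuming version 1.1/1.2 metadata (which sits near the start of each partition; 0.90 and 1.0 metadata live at the end instead, so check your --examine output first):
Code:
for d in b c d e; do dd if=/dev/sd${d}1 of=/root/sd${d}1-super.bak bs=1M count=1; done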
I think your problem is that drive sdb has that Recovery Offset flag set. I don't see why this command wouldn't work if you use your original drives [cde]...
Code:
mdadm --assemble --force /dev/md127 /dev/sd[cde]1
or
Code:
mdadm --assemble --scan --force
Remember, the drive letters will change because sdb won't be there; one of these drives will probably be designated sdb, so you will have to get the drive letters from the drives once the system boots, then try the command. From what I see, sdc has an outdated last update time, but as long as nothing was written to the drives in the two hours it was out of the array, you can force it in and you should be fine.
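To sort out which letter each disk ended up with after the boot, something like this should do it (a sketch; match the disks by their Device Role and Update Time rather than by letter):
Code:
mdadm --examine /dev/sd[a-e]1 | grep -E '^/dev/|Device Role|Update Time'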
But if that doesn't work, unless you can find someone who can tell you how to manipulate your current superblocks, you're just about forced to use the create command, which will write you new superblocks. That's not the end of the world as long as you use only 3 drives, because including all 4 will kick off a resync and you're toast...
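If it comes to that, the last-resort create would look something like this (a sketch only, using my 512 chunk as a placeholder; take the level, device count, and chunk size from your own --examine output, and make sure the device order matches the original Device Role order, with "missing" standing in for sdb so nothing can resync):
Code:
mdadm --create /dev/md127 --assume-clean -l5 -n4 -c512 missing /dev/sd[cde]1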