I wrote this up at around the same time as the previous post.
Okay, so I did that, and checking cat /proc/mdstat a few minutes later gave me:
Code:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[3](S) sdb1[4](F) sda1[0]
1953519872 blocks level 5, 64k chunk, algorithm 2 [3/1] [U__]
unused devices: <none>
One failed drive and one spare.
I have been fighting with mdadm for months now.
The detailed output (mdadm --detail /dev/md0) shows this:
Code:
/dev/md0:
Version : 00.90
Creation Time : Mon Dec 15 10:50:54 2008
Raid Level : raid5
Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Mon Dec 15 12:13:21 2008
State : clean, degraded
Active Devices : 1
Working Devices : 2
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : 5224c5ea:00a41204:7b403d38:22f8ac8c (local to host bible)
Events : 0.8
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 0 0 1 removed
2 0 0 2 removed
3 8 33 - spare /dev/sdc1
4 8 17 - faulty spare /dev/sdb1
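For anyone hitting the same state: with only one of three members active ([3/1] [U__]), a raid5 can't rebuild in place, because there isn't enough redundancy left to reconstruct anything. What people usually try at this point is inspecting the superblocks and then a forced re-assembly. This is only a sketch of that common sequence, not advice verified on this box; the device names come from the listing above, and --assemble --force can lose data if the event counts have diverged badly, so examine first:

```shell
# Non-destructive first step: see what each member's superblock claims
# (event counts, last update time, role in the array).
mdadm --examine /dev/sda1
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdc1

# If sdb1 only dropped out from a transient glitch (cable/controller)
# rather than truly dying, a commonly attempted recovery is to stop the
# degraded array and force-assemble it from the original members:
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1

# Then watch the resync progress:
cat /proc/mdstat
```

If --examine shows sdb1's events far behind the others, the data on it is stale and forcing it back in is a gamble; at that point most people copy what they can off the array first.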
The install of Debian that this is running under is almost completely clean. The only change made was removing the default Apache settings (the apache2-default triggered home page).