Software Raid 5 State - Dirty
Hi All,
System: RHEL4, latest updates and patches.

Below is the output of `mdadm /dev/md0 --detail`. I'm curious about the "State : dirty" line. Is this something to worry about? It will show "State : clean" sometimes and "State : dirty" other times. Can someone expand on the meaning of "State : dirty"?

Thanks,
Joe

=============================================
/dev/md0:
        Version : 00.90.01
  Creation Time : Thu May 18 13:26:41 2006
     Raid Level : raid5
     Array Size : 208298496 (198.65 GiB 213.30 GB)
    Device Size : 69432832 (66.22 GiB 71.10 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Jan 30 07:30:01 2007
          State : dirty
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

    Number   Major   Minor   RaidDevice   State
       0       8        3        0        active sync   /dev/sda3
       1       8       18        1        active sync   /dev/sdb2
       2       8       33        2        active sync   /dev/sdc1
       3       8       49        3        active sync   /dev/sdd1

           UUID : fd15945d:fd36deb4:77f2eb73:320be835
         Events : 0.1602838
While the array is active (being written to), md sets the "dirty" flag. If the system boots and finds the "dirty" flag still set, it means the array was not shut down cleanly and may need to be resynced.
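On a live system you can watch this flag with `mdadm --detail /dev/md0 | grep 'State :'`. As a minimal sketch that runs anywhere (the captured line below stands in for real mdadm output, since a test box may have no md array), the field can be extracted with plain shell parameter expansion:

```shell
# In practice: state=$(mdadm --detail /dev/md0 | awk -F' : ' '/State :/ {print $2; exit}')
# Sketch: parse a captured "State" line from mdadm --detail output.
line='          State : dirty'
state=${line#*: }          # strip everything up to and including ": "
echo "$state"              # prints "dirty"
```

The same one-liner works for any of the `Field : value` lines mdadm prints.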
This might answer your question.
RAID Dirty
Thank you for the reply. I looked at that KB article, and the following is what it said:
===========================================================
Symptom:

The RAID device /dev/md0 shuts down properly during a reboot. However, the following mdadm query shows the state as being "dirty, no-errors":

mdadm --detail /dev/md0

Solution:

This is the normal operation of a software RAID array. While an array is active, it is considered "dirty", as there could be incorrect parity blocks from new data written to the disk.
=============================================================

That is basically what "wpn146" said as well.

Thanks again,
Joe
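The KB's distinction can be summed up as: "dirty" just means writes are (or recently were) in flight, so parity may be mid-update; only a dirty flag found at boot signals an unclean shutdown. A small sketch of that classification (the state string is hard-coded here as an assumption; on a live system it could come from `mdadm --detail` or, on newer kernels, `/sys/block/md0/md/array_state`):

```shell
# Classify an md array state string per the KB article's explanation.
state="dirty"
case "$state" in
  clean)        echo "array idle, parity consistent" ;;
  active|dirty) echo "array in use; parity may be mid-update (normal)" ;;
  *)            echo "unexpected state: $state" ;;
esac
```

So seeing the state flip between "clean" and "dirty" on a healthy, mounted array is expected behavior.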