Linux - General: This Linux forum is for general Linux questions and discussion.
If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
My server crashed overnight and I'm still struggling to find the cause, as the logs don't tell me anything.
This machine has software RAID set up, so I started querying the RAID config.
I'm new to mdadm and I didn't set it up originally, but I get the following.
Maybe someone with mdadm skills will be able to help out.
Standard disk info stuff
--------------------------------
# fdisk -l
Disk /dev/hdc: 80.0 GB, 80026361856 bytes
16 heads, 63 sectors/track, 155061 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Device Boot Start End Blocks Id System
/dev/hdc1 * 1 207 104296+ fd Linux raid autodetect
/dev/hdc2 208 2312 1060920 fd Linux raid autodetect
/dev/hdc3 2313 155061 76985496 fd Linux raid autodetect
Disk /dev/hda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 13 104391 fd Linux raid autodetect
/dev/hda2 14 145 1060290 fd Linux raid autodetect
/dev/hda3 146 9729 76983480 fd Linux raid autodetect
This shows you the full details of what has been removed etc.
If hda is damaged, you want to replace it ASAP.
1. go and buy a new hard drive of the same size (it simplifies everything).
2. make the partitions the same size as they used to be on the old hda
3. run this command to add the partition to the array:
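A minimal sketch of the steps above, assuming the failed member is /dev/hda3 in /dev/md2 (substitute your actual device names from mdadm --detail before running anything):

```shell
# Copy the partition layout from the healthy disk to the replacement
# (double-check the source/target order before running!)
sfdisk -d /dev/hdc | sfdisk /dev/hda

# Hot-add the rebuilt partition; the kernel starts resyncing automatically
mdadm /dev/md2 --add /dev/hda3

# Watch the rebuild progress
cat /proc/mdstat
```

These commands modify disks and arrays, so treat them as a recipe to adapt, not to paste blindly.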
It doesn't seem like the mdadm.conf file is being used
Quote:
# more /etc/mdadm.conf
# mdadm configuration file
#
# mdadm will function properly without the use of a configuration file,
# but this file is useful for keeping track of arrays and member disks.
# In general, an mdadm.conf file is created, and updated, after arrays
# are created. This is the opposite behavior of /etc/raidtab which is
# created prior to array construction.
#
#
# the config file takes two types of lines:
#
# DEVICE lines specify a list of devices of where to look for
# potential member disks
#
# ARRAY lines specify information about how to identify arrays so
# that they can be activated
#
# You can have more than one device line and use wild cards. The first
# example includes the first partition of SCSI disks /dev/sdb,
# /dev/sdc, /dev/sdd, /dev/sdj, /dev/sdk, and /dev/sdl. The second
# line looks for array slices on IDE disks.
#
#DEVICE /dev/sd[bcdjkl]1
#DEVICE /dev/hda1 /dev/hdb1
#
# If you mount devfs on /dev, then a suitable way to list all devices is:
#DEVICE /dev/discs/*/*
#
#
#
# ARRAY lines specify an array to assemble and a method of identification.
# Arrays can currently be identified by using a UUID, superblock minor number,
# or a listing of devices.
#
# super-minor is usually the minor number of the metadevice
# UUID is the Universally Unique Identifier for the array
# Each can be obtained using
#
# mdadm -D <md>
#
#ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
#ARRAY /dev/md1 super-minor=1
#ARRAY /dev/md2 devices=/dev/hda1,/dev/hda2
#
# ARRAY lines can also specify a "spare-group" for each array. mdadm --monitor
# will then move a spare between arrays in a spare-group if one array has a failed
# drive but no spare
#ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df spare-group=group1
#ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977 spare-group=group1
#
# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program. This can be given with "mailaddr"
# and "program" lines so that monitoring can be started using
# mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#PROGRAM /usr/sbin/handle-mdadm-events
I basically inherited this system, so I'm trying to make sense of its RAID config.
Since I have no prior experience with mdadm, I don't want to start putting in commands that could potentially blow up this RAID.
How would I manually try to start hdb?
But if there was an hdb in this config, would this mean that there was a mirror across 3 disks?
Distribution: Mandriva mostly, vector 5.1, tried many.Suse gone from HD because bad Novell/Zinblows agreement
Posts: 1,606
Rep:
What is the output of
mdadm --detail /dev/md2
Have you got the contents of
/etc/raidtab ?
BTW, which distro have you got?
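To gather that information without changing anything, these read-only queries are safe to run (device names assumed from the fdisk output earlier in the thread):

```shell
cat /proc/mdstat                 # overall array status, e.g. [UU] or [_U]
mdadm --detail /dev/md2          # array state, level, and member devices
mdadm --examine /dev/hda3        # per-member RAID superblock, including the array UUID
```

None of these write to the disks, so they cannot make a degraded array worse.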
I am pretty sure /dev/hda and hdc are part of the RAID.
You can have a RAID with 3 hard drives (but I suppose it would say RAID 5 then).
Forget about me asking about hdb, it just sounded strange, but
it is possible to have a RAID across hda and hdc.
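For reference, mdadm does allow a mirror with three members as well; a hypothetical example (the device names are illustrative only, not taken from this system):

```shell
# A three-way RAID 1 mirror: every member holds a full copy of the data
mdadm --create /dev/md9 --level=1 --raid-devices=3 \
    /dev/hda1 /dev/hdb1 /dev/hdc1
```

So a three-disk setup does not have to be RAID 5; it can simply be a mirror with extra redundancy.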
You are not necessarily using mdadm at the minute; there is
a series of utilities called raidtools (I think?).
You have [_U]. One of your hard drives is malfunctioning / dead.
But because you have a RAID 1 system (mirror), the system still works.
Re [_U]:
Looks like it, yes (but I am like you: is this 100% sure?).
You might want to do some backups first and then try
to restart the RAID with some of the raidtools
rather than mdadm. I heard mdadm is "better" and I use it,
but then you will need to edit mdadm.conf.
I don't have enough knowledge to see why fdisk still sees both hard drives.
Maybe one of the drives is not that damaged?
The above means that the first drive in the array is unavailable; [U_] would mean that the second drive is unavailable.
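The bracket string can be decoded mechanically: each character corresponds to one member slot in order, with U for up and _ for down. A small sketch (the status value is hard-coded here for illustration; normally you would read it from /proc/mdstat):

```shell
#!/bin/sh
# Hard-coded sample status; on a real system, extract it from /proc/mdstat.
status="[_U]"

# Strip the brackets, then walk the flag characters slot by slot.
flags=${status#[}
flags=${flags%]}
slot=0
while [ -n "$flags" ]; do
    c=${flags%"${flags#?}"}        # first remaining character
    flags=${flags#?}               # drop it from the string
    case "$c" in
        U) echo "slot $slot: up" ;;
        _) echo "slot $slot: down (failed or missing)" ;;
    esac
    slot=$((slot + 1))
done
```

With [_U] this reports slot 0 as down and slot 1 as up, matching the explanation above.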
I believe that trial and error is the only way to find out. You are right in thinking that doing fdisk -l will give you an indication of which one is broken. If you do that and see that hdb is not listed in fdisk -l, then you can open up the PC and see if hdb is in fact a hard drive.
raidtab has nothing to do with mdadm. They are two different packages for doing the same thing. raidtab is older, and mdadm is becoming more popular.
You will find that /etc/mdadm.conf is probably unused. I have never used it; in fact, I didn't know it existed!
I suppose one can do without mdadm.conf while using mdadm with some scripts,
and this will depend on the distro.
On my distro the RAID is started automatically, and I think mdadm
takes the info it needs from mdadm.conf (which I configured by hand).
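For what it's worth, a common minimal way to (re)generate mdadm.conf from whatever arrays are currently assembled (a sketch; back up any existing file first):

```shell
# Tell mdadm where to scan for potential member partitions
echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > /etc/mdadm.conf

# Append one ARRAY line per assembled array, identified by UUID
mdadm --detail --scan >> /etc/mdadm.conf
```

The DEVICE pattern here is an assumption for an IDE/SCSI mix like the one in this thread; adjust it to match your actual disks.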
My point was that possibly raidtools was used by stefaandk's legacy system
rather than mdadm.
It must be said that stefaandk can indeed use either raidtools or mdadm.