Linux - Hardware: This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
I'm running RedHat Enterprise Linux 4, kernel 2.6.9-42.0.10.ELsmp (lspci and dmesg output below). I recently inherited this box and, according to the user, a RAID array used to be mounted as /data2 but has not been mounted since the move over here. The only things that have changed are:
1. We patched the system as soon as it went live on the network (I'm guessing this might have caused the issues)
2. The drives may have been put back into the server out of order (the person who did the move can't say).
The system boots fine; it's just not finding those volumes anymore. The key error I'm seeing is:
Mar 20 12:23:25 tunl mount: mount: special device LABEL=/data2 does not exist
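Since mount is failing on a LABEL= lookup, a first step is to check which fstab entries mount by label and whether any attached filesystem still carries that label. A minimal diagnostic sketch, assuming the mount lives in /etc/fstab as on a stock RHEL 4 install (the device glob in the last comment is a placeholder):

```shell
# List every fstab entry that mounts by label, to confirm /data2 really
# is referenced as LABEL=/data2:
if [ -r /etc/fstab ]; then
    awk '$1 ~ /^LABEL=/ {print $1, "->", $2}' /etc/fstab
fi

# findfs (util-linux) resolves a LABEL= spec to a device node; a non-zero
# exit means no attached filesystem currently carries that label:
if command -v findfs >/dev/null 2>&1; then
    findfs LABEL=/data2 || echo "no device currently carries LABEL=/data2"
fi

# e2label prints an ext2/ext3 filesystem's label (needs root; substitute
# the partitions your system actually shows):
# for dev in /dev/sd?1; do echo -n "$dev: "; e2label "$dev"; done
```

If findfs finds nothing, the kernel is not seeing the volume at all, which points at the array itself rather than at a stale fstab entry.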
The name of the partition implies that it was data. You also say that you inherited it. Does that mean the data isn't important, and that you would just like a working raid array? If so, you could create a new array from scratch using a graphical partitioning tool.
Otherwise, you will need to find out which program RHEL uses to control raid arrays. That program, or an associated one, may have an option to force the drives to be marked good once you get the order correct.
I have SuSE, and it uses mdadm, which has an --assume-clean option (used with --create or --build) to mark the raid components as clean. If you can get the right order, a similar command and option may work. (I believe Fedora and RH ship mdadm as well.)
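Before guessing at the order, it's worth noting that if this is Linux software raid, each member device carries an md superblock recording the array UUID and that device's slot, so the original order can usually be read back rather than guessed. A hedged sketch using mdadm (device names are placeholders; run as root):

```shell
# Inspect the md superblock, if any, on each candidate partition.
# The "Device Role"/"this" field in the output gives the slot order.
for dev in /dev/sd?1; do
    [ -b "$dev" ] || continue
    mdadm --examine "$dev" 2>/dev/null || true
done

# --assemble --scan then reassembles arrays by matching those UUIDs,
# regardless of which bay each disk ended up in:
# mdadm --assemble --scan
```

Because assembly matches superblock UUIDs rather than bus positions, disks put back "out of order" are normally harmless to a software array.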
I have assumed that this is using software raid and not hardware raid. You didn't supply enough details on your system for me to know.
SuSE uses mdadm, and if I used raid I would read the documentation in /usr/share/doc/packages/mdadm/Software-RAID.HOWTO.html.
Look in /usr/share/doc/packages/. There may be similar documentation there for your system.
Sorry I missed some details; I wrote the first post in haste. Here are more details:
1. The data is VERY important
2. It's unclear if this was a hardware or software RAID. There is no sign of a software RAID ever being configured on the box, and the BIOS on the RAID controller is password-protected; no one knows the password.
3. The RAID controller is an Areca Technology Corporation RAID Controller 1120 PCI-X.
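Given that controller, a hedged sketch of what I'd check next: Areca's ARC-11xx cards are handled by the arcmsr driver, which only entered the mainline kernel later in the 2.6 series, so a stock RHEL 4 kernel may rely on an out-of-tree module. If that module wasn't rebuilt for the patched kernel, the controller's volumes would simply vanish, which would fit the timing described in point 1. Here "boot.log" is a placeholder for saved dmesg/boot output:

```shell
# Is the Areca driver loaded right now?
if lsmod 2>/dev/null | grep -q arcmsr; then
    echo "arcmsr driver loaded"
else
    echo "arcmsr not loaded -- the controller's volumes will be invisible"
fi

# Did the kernel mention the controller at boot? (boot.log is a
# placeholder for wherever you saved the dmesg output.)
grep -iE 'arcmsr|areca' boot.log || echo "no Areca messages in log"
```

If the driver is missing, reinstalling/rebuilding it for the running kernel (or booting the previous kernel from the GRUB menu) would be a low-risk way to confirm before touching the array itself.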