Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's, this is the place!
From the 'INSTALL' file included with the download, it says this:
To build mdadm, simply run:
to install, run
No configuration is necessary.
Looks like you don't need to configure it the way you wanted to?
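For what it's worth, the commands that INSTALL file refers to are just the plain make sequence (a sketch from memory of the mdadm source tree; the tarball name is a placeholder, so verify against your own copy):

```shell
# unpack the downloaded source (filename is a placeholder)
tar xzf mdadm-X.Y.tar.gz
cd mdadm-X.Y/
# build — there is no ./configure step for mdadm
make
# install the binary and man pages (usually needs root)
make install
```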
For your unresponsive program, you do not need to reboot. Type "ps -A" to list all processes, find the PID (process ID) of your unresponsive program in the list, then use the kill command to get rid of it, e.g.: kill 1936
If it doesn't go away after a few seconds, force it with SIGKILL (signal 9), e.g.: kill -s 9 1936
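The whole sequence can be tried out safely with a dummy process standing in for the hung program (the `sleep 300` and the PID are just stand-ins):

```shell
# start a dummy long-running process to play the part of the hung program
sleep 300 &
pid=$!
# confirm it shows up, as it would in the "ps -A" listing
ps -p "$pid" > /dev/null && echo "found process $pid"
# polite request to terminate: kill sends SIGTERM by default
kill "$pid"
wait "$pid" 2>/dev/null || true   # reap it so it doesn't linger as a zombie
# if it had ignored SIGTERM, this is where you would escalate to signal 9
kill -0 "$pid" 2>/dev/null && kill -s 9 "$pid" || echo "process $pid is gone"
```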
Thanks for your answer, I managed to make it work.
Now I have a new question:
lsraid -a /dev/md0
I get some information about the raid; if a disk fails, I can read which one must be replaced. The problem is, I don't know which one is physically the broken one.
Should this happen, I will find out that sdb1, sdc1, sdd1 or sde1 failed, but I don't know which disk inside the server is the one...
The mainboard is an Asus P5AD2-E Premium with 8 SATA connectors, 4 of which belong to the raid (SATA_RAID1 to 4); the OS is on a 5th disk. Which one is /dev/sdb1? If I remove the wrong one, all the data gets lost.
I have no clue, maybe someone else can help.
I don't even know what md or sd are.
The reason I was (hopefully) able to help with the initial question is that I took a few minutes and researched it.
I would just be guessing with this problem...
what was the output of the lsraid command?
So are you saying you have 5 HDs, 4 of them (sdb, sdc, sdd, sde?) for data or backup or something, and sda(?) for your OS? And are there 5 places where you connected the HDs for your raid setup? If so, I would think it's logical that 1 matches up with a, 2 with b, etc.
If not, then all I would suggest is trial and error: unplug one and boot, until you unplug the one that makes the computer not boot (the OS one).
How come you would lose all data if you unplug the wrong one? Wouldn't it just not boot, or boot improperly?
Or try a live CD like Knoppix (or, for a smaller/quicker download, Damn Small Linux): unplug one drive, boot to the live CD, mount the drives that are still plugged in, and check their contents to see which one is missing..?
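A less disruptive way to match /dev/sdX names to physical drives is by serial number: read each device's serial and compare it with the label printed on the disk itself. A sketch, assuming udev provides /dev/disk/by-id symlinks (on older systems without them, `hdparm -i /dev/sdb` also prints the serial):

```shell
# build a "kernel name -> by-id alias" listing for every disk udev knows;
# the by-id name embeds the model and serial printed on the drive's sticker
mapping=""
if [ -d /dev/disk/by-id ]; then
    for link in /dev/disk/by-id/*; do
        mapping="$mapping$(readlink -f "$link") -> ${link##*/}
"
    done
fi
printf '%s' "$mapping" | sort -u
```

Then you can pull the drive whose printed serial matches the failed /dev/sdX without touching the healthy members.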
The raid is made of 4 disks, /dev/sdb1 etc. It's a RAID5, so the data is striped across all disks with redundancy. If 1 disk fails, the raid keeps working on 3 disks; the failed one should be replaced ASAP and will be rebuilt from the redundant data. If a disk is broken and I remove the wrong one, that means I have lost 2 disks ==> all the data is gone.
I could remove 1 disk now, while all disks are OK, to find out which /dev/sd.. is which physical disk (and if necessary repeat this for all 4 disks), but the rebuild takes very, very long (maybe half a day). I have 4 * 400GB disks (unformatted) => available space on the raid is 1TB, already half full, so I hoped to find out without trial and error.
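As a sanity check on that capacity figure: RAID5 usable space is (number of disks - 1) x disk size, since one disk's worth of capacity is consumed by parity spread across all members:

```shell
disks=4
size_gb=400
# RAID5: one disk's worth of space holds parity, the rest is usable
usable_gb=$(( (disks - 1) * size_gb ))
echo "${usable_gb} GB usable"   # 1200 GB, roughly 1.1 TiB before formatting
```

1200 decimal GB is about 1.1 TiB, which lines up with the roughly 1TB reported above once filesystem overhead is counted.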