Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!
and now /dev/md0 is created. I mounted it on a directory called /raid0 and put some data into it.
But when I tried to set the partition /dev/hda6 as faulty with the command:
mdadm --fail /dev/md0 /dev/hda6
the output said it was set as faulty. But when I went to /raid0 and accessed the data, it was still there. It should actually not be there, right? So what mistake am I making? Please help.
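For reference, the full sequence described above would look roughly like the sketch below. The chunk size defaults and the exact mdadm invocation style are assumptions; partition names are taken from the post. Note that the RAID 0 personality has no redundancy and therefore no degraded mode, so the md driver may simply refuse the --fail request on recent kernels.

```shell
# Build a two-member RAID 0 (striped) array from the partitions.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/hda5 /dev/hda6

# Put a filesystem on it and mount it.
mkfs.ext3 /dev/md0
mkdir -p /raid0
mount /dev/md0 /raid0

# Attempt to mark one member as faulty.
# RAID 0 has no concept of a faulty-but-running member, so this is
# expected to either be rejected outright or have no useful effect.
mdadm --manage /dev/md0 --fail /dev/hda6
```

Check `cat /proc/mdstat` afterwards: for RAID 0 you will typically not see an (F) flag next to a member the way you would with RAID 1 or RAID 5.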
Hi friend,
Thanks for that, but it is actually about the other RAID methods, where if one device is faulty the others keep working; that part already works for me. I am looking at the RAID 0 implementation, in which even removing a faulty device is not possible. Thank you.
So if I'm understanding you correctly, you want to initiate a disk failure so that the logical partition becomes unavailable? If that's the case, why not just pull/disconnect a disk?
Indeed, RAID 0 has no redundancy at all. If this RAID 0 array is small and holds nothing important, format one half of it; a RAID 0 array in that state is simply not recoverable. RAID 0 requires both partitions to be working, as it literally splits all data half and half, so once you lose one half, everything is gone. The other RAID levels still hold all the data should one device become faulty and can keep operating, losing only their redundancy (rather than the entire array).
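To see the contrast, the same fail-and-remove test against a RAID 1 mirror behaves very differently. This is a sketch only; /dev/hda7, /dev/hda8, and the mount point are made-up names, assuming two spare partitions of equal size:

```shell
# Build a two-member RAID 1 (mirrored) array.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda7 /dev/hda8
mkfs.ext3 /dev/md1
mkdir -p /mnt/raid1
mount /dev/md1 /mnt/raid1

# Mark one member faulty, then remove it from the array.
mdadm /dev/md1 --fail /dev/hda8
mdadm /dev/md1 --remove /dev/hda8

# The mirror keeps running in degraded mode: files under the mount
# point are still readable, which is exactly what cannot happen
# with RAID 0.
ls /mnt/raid1
cat /proc/mdstat
```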
Quote: so if i'm understanding you correctly, you want to initiate a disk failure so that the logical partition becomes unavailable...? if that's the case, why not just pull/disconnect a disk.
Because it's hard to pull a partition ;) ... his RAID slices reside on the same hdd.
I think I got the answer for this. Please let me know if this is correct.
1. Make two partitions, hda5 and hda6.
2. Combine them into /dev/md0 with RAID level 0.
3. Make an ext3 filesystem on it and mount it to a folder, say /raid0.
4. cd /raid0
5. touch file1 file2 file3
6. fdisk /dev/hda
7. Delete /dev/hda6.
8. partprobe
9. Reboot.
10. cd /raid0
11. The system will give an error message, failing to read.
--------------------
But if you do the same procedure on a RAID 1 or RAID 5 array, you can still read file1, file2, and file3.
Thank you, friends.
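The steps above can be sketched as a single (destructive) script. Partition names, mount point, and filenames are taken from the post; using `sfdisk --delete` as a non-interactive stand-in for the interactive fdisk session is my assumption:

```shell
# Steps 1-3: build the striped array and put a filesystem on it.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/hda5 /dev/hda6
mkfs.ext3 /dev/md0
mkdir -p /raid0
mount /dev/md0 /raid0

# Steps 4-5: create some test files on the array.
cd /raid0
touch file1 file2 file3

# Steps 6-8: destroy one member by deleting its partition, then
# re-read the partition table. (Equivalent to deleting partition 6
# inside an interactive fdisk session.)
sfdisk --delete /dev/hda 6
partprobe /dev/hda

# Steps 9-11: after the reboot, the array cannot assemble with only
# one member, so any access under /raid0 fails.
reboot
```

After the reboot, `cat /proc/mdstat` should show /dev/md0 missing or inactive. A RAID 1 or RAID 5 array put through the same procedure would come back degraded but still readable.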