RHEL 5 on Dell H/W - LVM commands, RAID 0, disk error, filesystem recreate.
Red Hat: This forum is for the discussion of Red Hat Linux.
A logical volume/filesystem question. I'm rather new to Linux, but know a particular UNIX rather well.
Which one????
Quote:
There is a volume on a RHEL 5 system consisting of a number of disk drives arranged in RAID 0 configuration. One of the disks becomes unusable.
What is the best way of restoring filesystem functionality on the remaining drives (with reduced capacity)? Note: Outage is (obviously!) expected. There are no data considerations to worry about.
You don't say if this is a hardware or software RAID setup, so it's hard to provide specific advice. Also, RHEL 5 is old...the last supported version is 5.9, I believe. If you're having hardware problems, now would be an EXCELLENT time to upgrade the system to the latest 7, since you've got hardware to fix, and are obviously going to have an outage anyway.
Quote:
You don't say if this is a hardware or software RAID setup, so it's hard to provide specific advice. Also, RHEL 5 is old...the last supported version is 5.9, I believe. If you're having hardware problems, now would be an EXCELLENT time to upgrade the system to the latest 7, since you've got hardware to fix, and are obviously going to have an outage anyway.
a) Solaris.
b) i) I don't care about the data. ii) Hardware.
It's just a question about how to resume operations using a (somewhat slower) 3-disk stripe when a failure occurs. A new disk would be ordered, of course, but it would be nice to get something working before it arrives (probably about 5 hours from time of putative failure).
Then you know a 'real' unix, and not something like AIX. It should be pretty easy for you to get around in Linux.
Quote:
b) i) I don't care about the data. ii) Hardware.
It's just a question about how to resume operations using a (somewhat slower) 3-disk stripe when a failure occurs. A new disk would be ordered, of course, but it would be nice to get something working before it arrives (probably about 5 hours from time of putative failure).
You don't. As said, RAID0 provides ZERO redundancy. If you lose a disk, you lose the array, period. There is nothing that holds parity or provides a hot failover. You would resume normal operations after restoring your backup to whatever new array you build.
Taking your question at face value, to get back up and running with the drives that are working, all you need to do is partition and format the new RAID entity. You can use parted or fdisk or cfdisk or Gnome Disks (GUI) for that, and it should just work:
1. Find the device node for the drive using dmesg.
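A sketch of that first step, assuming the kernel has already logged the array (the `sd[a-z]` pattern and device names are illustrative, not from the thread):

```shell
# Scan recent kernel messages for SCSI/SATA disk devices; the new
# array typically shows up as sdb, sdc, etc., depending on the controller.
dmesg | grep -i 'sd[a-z]'

# Cross-check against the block devices the kernel currently knows about.
cat /proc/partitions
```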
2. Assuming the drive is at /dev/sdb, partition it:
Code:
parted /dev/sdb mklabel msdos
3. Get total size of disk:
Code:
parted /dev/sdb print | grep Disk
4. Assume the drive is 500G. Create a partition spanning entire disk:
Code:
parted /dev/sdb mkpart primary 1 500000
5. Create the filesystem (note: ext4 is only a technology preview on RHEL 5; use mkfs.ext3 instead if mkfs.ext4 isn't available):
Code:
mkfs.ext4 -L my500drive /dev/sdb1
Done.
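If you want to put the new filesystem straight into service, a minimal sketch (the mount point is an assumption, not from the thread):

```shell
# Create a mount point and mount the freshly formatted partition.
mkdir -p /mnt/data
mount /dev/sdb1 /mnt/data

# Verify the mount and its (reduced) capacity.
df -h /mnt/data

# For a persistent mount across reboots, an /etc/fstab line like:
# LABEL=my500drive  /mnt/data  ext4  defaults  0 2
```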
But given that you specifically state that this is an LVM question, maybe you are asking how to get LVM running using this hardware-RAIDed array as part of the storage pool, because you intend to expand it later. In that case, do something similar, but with LVM commands, so that your RAID drive becomes part of your logical volume space:
1. Mark your drive as available for inclusion into your storage pool:
Code:
pvcreate /dev/sdb1
2. Create a volume group to include your drive:
Code:
vgcreate mygroup /dev/sdb1
3. Create a logical volume within this storage pool:
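The post cuts off here, so for completeness, a hedged sketch of step 3 using standard LVM2 commands (the volume name `myvol` and the mount point are assumptions; `mygroup` comes from step 2):

```shell
# Create a logical volume spanning all free extents in the volume group.
lvcreate -n myvol -l 100%FREE mygroup

# Put a filesystem on it; on stock RHEL 5, use ext3 if mkfs.ext4 is absent.
mkfs.ext3 /dev/mygroup/myvol

# Mount it (mount point is illustrative).
mkdir -p /mnt/lvdata
mount /dev/mygroup/myvol /mnt/lvdata
```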