Linux - Enterprise: This forum is for all items relating to using Linux in the Enterprise.
So we have some logical volumes that are no longer in use. The drives were removed from the server (SAN storage). The problem is we didn't quite coordinate it, and the volumes weren't deleted before the storage was pulled.
Now LVM is throwing errors on almost any command. lvremove won't remove the volumes because it says it can't get information about the volume group. However, we do still have valid volumes mounted and in use in those volume groups.
A call to support turned up the usual: confusion, a request for more output, and a follow-up promised at some future point.
I'm wondering if it's as simple as making copies of the VG files in /etc/lvm/backup, deleting the old volumes, and going from there?
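For what it's worth, LVM already keeps per-VG metadata backups in /etc/lvm/backup (with older revisions in /etc/lvm/archive), and vgcfgrestore is the supported way to restore from them. Before touching anything, a minimal first step is just to snapshot those files somewhere safe. A sketch, with the helper name and destination path being my own choices, not anything LVM ships:

```shell
# Sketch: archive the LVM metadata backups before changing anything.
# archive_lvm_backups SRC DEST copies every file from SRC into DEST.
archive_lvm_backups() {
    src=$1
    dest=$2
    mkdir -p "$dest"
    cp -a "$src"/. "$dest"/
}

# On the real system (stock paths; adjust for your distro):
# archive_lvm_backups /etc/lvm/backup "/root/lvm-backup-$(date +%Y%m%d)"
```

Copying the files alone doesn't change LVM's state, though; the on-disk metadata still references the missing PVs, which is why the commands keep failing.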
Any help is appreciated.
The errors are multiples of:
/dev/dm-16: read failed after 0 of 4096 at 0: Input/output error
/dev/dm-17: read failed after 0 of 4096 at 0: Input/output error
Couldn't find device with uuid 'X5x8vG-chsN-CVqI-H2Ky-wdyq-95aG-vf9sgV'.
Couldn't find all physical volumes for volume group oravg04.
Do you have a complete list of the device files for the disks that were removed?
You are probably going to have to manually move the device files to a different folder (to be removed later if all goes well).
If you still have the disks, you could try reattaching them to the SAN and removing the volumes, but I wouldn't expect that to be easy (if even possible).
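For the stale /dev/dm-* nodes specifically, an alternative to moving device files by hand is to tear down the dead device-mapper entries with dmsetup. A hedged sketch; the device names come from the error output above and may differ on your system, and dmsetup remove is destructive to the mapping, so confirm each one is really dead first:

```shell
# Inspect the device-mapper entries whose backing disks are gone.
dmsetup info /dev/dm-16
dmsetup info /dev/dm-17

# Remove the stale mappings. This tears down only the mapping table
# entry, not any data, but double-check the names before running it.
dmsetup remove /dev/dm-16
dmsetup remove /dev/dm-17
```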
No, no list of the old devices that I see. I think the disks, along with the array they were in, are gone.
You may want to look at man 8 vgreduce.
Particularly the --removemissing option.
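A sketch of that approach, using the volume group name from the errors above. Note that plain --removemissing refuses to act if any LVs still occupy the missing PVs; the --force variant also removes those (now unrecoverable) LVs, so archive /etc/lvm/backup first:

```shell
# Show which PVs the VG thinks it has; missing ones appear as
# 'unknown device' alongside the "Couldn't find device" warnings.
pvs -o pv_name,vg_name,pv_uuid 2>&1 | grep -i oravg04

# Drop the missing PVs from the VG metadata.
vgreduce --removemissing oravg04

# If LVs still reference the missing PVs, the above refuses;
# --force also deletes those LVs (their data is already gone).
vgreduce --removemissing --force oravg04

# Verify the VG is consistent again.
vgs oravg04
```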
Yes, this ended up being what I did, and it seems to have worked out. It also let me clean up the PowerPath devices. I assume I'll just need to reboot to clean up the device files for the disks that are no longer visible to the server.
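A reboot may not actually be required: stale SCSI paths can usually be deleted through sysfs, and PowerPath has its own dead-path cleanup. A hedged sketch; sdX is a placeholder for each stale device, and powermt behavior can vary by PowerPath version:

```shell
# Ask PowerPath to drop dead paths (prompts per device by default).
powermt check

# Delete a stale SCSI device node via sysfs (sdX is a placeholder
# for a disk that no longer exists on the SAN).
echo 1 > /sys/block/sdX/device/delete

# Re-scan the remaining LVM devices to confirm the I/O errors are gone.
pvscan --cache
```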