postgresql: is there a way to make a hero dump of a db?
Hi!
Is there a way to make an offline dump from a server that crashed because of physical problems with the disks that made up a RAID5 array? I was able to partially recover the data from the disks and rebuild the RAID image, and I copied /var/lib/postgresql off the server, but when we try to run pg_dump against it, it starts complaining about missing/corrupt files. I know we won't be able to recover exactly the data that was in the DB before it crashed, but I'd like to get as much of it as possible. pg_dump (or the server) will halt at the first problem it finds in the DB... is there a way to make a hero effort to get as much (consistent) data as possible, either online or offline?
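One trick that sometimes helps here (a sketch only, untested against your cluster; "mydb" is a placeholder for your database name): instead of one pg_dump of the whole database, dump each table separately, so a single corrupt table only loses that table instead of aborting the entire run. Note that on 8.3 the database name is a positional argument to pg_dump (-d meant "dump as INSERTs" back then).

```shell
#!/bin/sh
# Sketch: dump each user table on its own, so a damaged table only
# costs us that table. Assumes a local server and a database "mydb"
# (placeholder -- adjust name/connection options for your setup).
DB=mydb
OUT=/tmp/salvage
mkdir -p "$OUT"

# List user tables from the catalogs, skipping system schemas.
psql -At -d "$DB" -c \
  "SELECT schemaname || '.' || tablename FROM pg_tables
   WHERE schemaname NOT IN ('pg_catalog', 'information_schema');" |
while read -r tbl; do
    echo "dumping $tbl ..."
    # If pg_dump dies on this table, note it and keep going.
    pg_dump -t "$tbl" -f "$OUT/$tbl.sql" "$DB" ||
        echo "$tbl" >> "$OUT/failed.txt"
done
```

At the end, $OUT/failed.txt lists the tables that need more drastic measures, and everything else is already safely dumped.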
The server was running Ubuntu 9.10 (PostgreSQL 8.3) and I have it up on a virtual machine with the same versions of Ubuntu/PostgreSQL.
When you copy the complete contents of /var/lib/postgresql to the new server, can you start Postgres and connect to the databases?
If the architecture is different (e.g. copying from a physical server to a virtual one), that could be a problem when restoring from the raw data files.
I'm curious about the solution; at first glance, restoring data from raw files looks tricky.
We can start the server (after some hacks on the postgresql.conf file to avoid certain errors) and do a few other things (not much).
PostgreSQL starts and you can send queries and so on... the problem is that some queries break the connection altogether (though the server stays up), and pg_dump fails because of problems with this or that file. But does the service start? Oh yeah, it does!
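Since some queries kill the connection and others don't, it's worth finding out up front exactly which tables are damaged. A sketch (again "mydb" is a placeholder): force a full scan of every table with SELECT count(*), and record which ones error out, so you know where the damage is before attempting dumps.

```shell
#!/bin/sh
# Sketch: probe every user table with a full sequential scan.
# Tables whose scan fails are the ones sitting on damaged pages.
DB=mydb   # placeholder database name
psql -At -d "$DB" -c \
  "SELECT schemaname || '.' || tablename FROM pg_tables
   WHERE schemaname NOT IN ('pg_catalog', 'information_schema');" |
while read -r tbl; do
    # count(*) forces a read of every page in the table.
    if psql -d "$DB" -c "SELECT count(*) FROM $tbl;" >/dev/null 2>&1; then
        echo "ok      $tbl"
    else
        echo "BROKEN  $tbl"
    fi
done
```

The connection being dropped mid-query is exactly why each table gets its own psql invocation: one crashed backend doesn't stop the loop.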
About the procedure: we replace the whole /var/lib/postgresql directory for each recovery attempt.
Just keep in mind that because it comes from a corrupted filesystem (the RAID was faulty), there could be a lot of garbage in the PostgreSQL data files. I just want to know how to get the most data out of the DB: skip what's junk, but recover as much correct data as possible. Is there a combination of postgresql.conf parameters that could make PostgreSQL tolerate as many errors as possible?
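There is one parameter aimed at exactly this situation: zero_damaged_pages. When it's on, PostgreSQL replaces pages with a detectably corrupt header with zeroes instead of raising an error, so scans can get past the damage (the rows on those pages are lost for good, which is why it should only ever be used on a throwaway copy of the data directory like your VM). A sketch:

```
# postgresql.conf (or SET zero_damaged_pages = on in a superuser session)
zero_damaged_pages = on   # zero pages with damaged headers instead of erroring
fsync = off               # optional: faster on a disposable recovery instance
```

With that set, running VACUUM on each table zeroes out the broken pages, after which pg_dump has a much better chance of completing. It only catches pages whose headers are visibly damaged, though; garbage inside structurally valid pages can still make individual queries fail.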
Thanks in advance.
PS: would it be too tricky to do it directly from the files? I mean, write an offline reader for the raw PostgreSQL data files.