
eantoranz 11-09-2011 12:26 PM

postgresql: is there a way to make a hero dump of a db?

Is there a way to make an offline dump from a server that crashed because of physical problems with the disks that made up a RAID5 array? I was able to partially recover the data from the disks and rebuild the RAID image, and I copied /var/lib/postgresql off the server, but when we try to run pg_dump against it, it complains about missing/corrupt files. I know we won't be able to recover the exact data that was in the DB before it crashed, but I'd like to get as much of it as possible. pg_dump (or the server) halts at the first problem it finds in the DB.... is there a way to make a hero effort to get as much (consistent) data as possible? Either online or offline?
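A per-table dump loop can sometimes salvage the tables that are still readable, since a single corrupt table no longer aborts the whole dump. A minimal sketch, assuming the copied cluster starts and accepts connections; the database name and output directory are placeholders, not from the thread:

```shell
# Dump a damaged database table by table, logging failures and moving on.
# Usage: rescue_dump <dbname> <output-dir>
rescue_dump() {
    db=$1
    out=$2
    mkdir -p "$out"
    # psql -At: unaligned, tuples-only output -- one "schema.table" per line.
    psql -At -c "SELECT schemaname || '.' || tablename
                 FROM pg_tables
                 WHERE schemaname NOT IN ('pg_catalog','information_schema');" \
         "$db" |
    while read -r tbl; do
        # pg_dump exits non-zero when it hits a corrupt table;
        # record the failure and continue instead of giving up.
        if pg_dump -t "$tbl" "$db" > "$out/$tbl.sql" 2> "$out/$tbl.err"; then
            echo "OK   $tbl"
        else
            echo "FAIL $tbl (see $out/$tbl.err)"
        fi
    done
}
```

At the end you have one .sql file per readable table and an .err file pointing at each table that needs deeper surgery.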

The server was running Ubuntu 9.10 (PostgreSQL 8.3) and I have it up on a virtual machine with the same versions of Ubuntu/PostgreSQL.

Thanks in advance.

resolv_25 11-11-2011 03:38 AM

When you copied the complete contents of your /var/lib/postgresql to the new server, could you start Postgres and connect to the databases?
If the architecture differs (e.g. copying from a physical server to a virtual one), it seems that could be a problem when restoring from raw data.
I'm curious about the solution; at first glance, restoring data from raw files looks tricky.

Eventually, this procedure might work:

eantoranz 11-11-2011 08:58 AM

We can start the server (after some hacks on the postgresql.conf file to work around a few errors) and do a little more, but not much.

PostgreSQL starts and you are able to send queries and so on... the problem is that some queries break the connection altogether (though the server stays up), and pg_dump fails because of problems with this or that file. But does the service start? Oh yeah, it does!

About the procedure: we are replacing the whole /var/lib/postgresql directory to make our recovery attempt.

eantoranz 11-11-2011 09:06 AM

Just keep in mind that because it comes from a corrupted FS (the RAID was faulty), there could be a lot of junk in the available pg files. I just want to know how to get as much data as possible out of the DB... skip what's junk, but recover as much correct data as possible. Is there a combination of postgresql.conf parameters that could force PostgreSQL to try to avoid as many errors as possible?
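There is in fact a parameter aimed at exactly this situation: zero_damaged_pages. When enabled, the server zeroes out any page whose header is corrupt instead of raising an error, so sequential reads and dumps can get past the damage. The rows on a zeroed page are lost for good, so it should only ever be used on a copy of the data directory (which is the case here). A sketch:

```
# postgresql.conf -- DESTRUCTIVE: zeroed pages are gone permanently.
# Use only on a copy of the damaged cluster, never on the original.
zero_damaged_pages = on
```

It can also be enabled for a single session with `SET zero_damaged_pages = on;` (superuser only) just before running a problematic query or dump, which limits the blast radius.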

Thanks in advance.

PS: would it be too tricky to try to do it directly from the files, I mean, write an offline PostgreSQL raw-file reader?
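Such an offline reader already exists: pg_filedump, a community tool packaged outside core PostgreSQL, decodes page headers and tuples straight from a table's heap file with no server running. A minimal sketch; the path layout is standard (`base/<db_oid>/<relfilenode>`) but the concrete OIDs are placeholders, and exact flags vary by pg_filedump version:

```shell
# Decode a table's heap file offline with pg_filedump.
# $1: full path to the file, e.g.
#     /var/lib/postgresql/8.3/main/base/<db_oid>/<relfilenode>
# Find the relfilenode beforehand (while the server is up) with:
#     SELECT relfilenode FROM pg_class WHERE relname = 'mytable';
inspect_heap_file() {
    # -i prints per-item (tuple) detail in addition to page headers.
    pg_filedump -i "$1"
}
```

Pages that decode cleanly can then be distinguished from garbage by eye, which is about as close to a "raw files reader" as it gets without writing one from scratch.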

eantoranz 11-11-2011 09:07 AM

And the word is not "avoid", but rather "skip over".

All times are GMT -5.