I regularly take a pg_dumpall dump from the production database and restore it into the staging database.
The production database size is 4.5 GB.
After each restore, the staging database size grows.
The following command is used to restore the pg_dumpall file:
psql -U postgres -f dbdumpall
My aim is to keep the production database and the staging database in sync.
You'll want to look at something like DBI-Link, or a replication
solution like Slony. With pg_dumpall you can't do anything
that resembles incremental backups, and if you have any form
of constraints on the target database the import will fail.
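As a side note on the growth you're seeing: a plain restore just replays the dump on top of whatever is already in staging. pg_dumpall has a --clean option that writes DROP statements into the dump, so the restore replaces the staging databases instead of stacking rows onto them. A minimal sketch, reusing the dump file name and user from your post:

# On production: include DROP commands in the dump
pg_dumpall -U postgres --clean > dbdumpall

# On staging: each database is now dropped and recreated,
# so staging stays close to production's size
psql -U postgres -f dbdumpall

This still isn't incremental; it re-copies all 4.5 GB every time, but it stops the size from creeping up.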
You could conceivably use a user-defined function that skips the
insert when the target row already exists (a rough sketch below),
but I'd expect that to take a rather long time to complete given
the size of your DB.
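For illustration only, here is the skip-if-exists idea boiled down to a single statement; the table t(id, val) and the source table src are invented names, not anything from your schema:

# Hypothetical: insert only rows whose key is missing from the target
psql -U postgres -d staging -c "
  INSERT INTO t (id, val)
  SELECT s.id, s.val
  FROM src s
  WHERE NOT EXISTS (SELECT 1 FROM t WHERE t.id = s.id);"

The NOT EXISTS probe runs once per candidate row, which is why this gets slow on a multi-gigabyte table unless the key column is indexed.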
I'd recommend reading their documentation, but I don't think it
can be done without any downtime at all, and in a mission-critical
environment that wouldn't be very good.
The next best approach would be to see whether you can export
data based on criteria like the age of a record or something
similar (a rough example follows). Of course, you haven't given
us any indication of the nature of the data, its complexity, or
the relationships between tables.
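If your tables do carry a timestamp, a criteria-based export could look something like this; 'orders' and 'updated_at' are invented names standing in for whatever your schema actually has:

# Hypothetical incremental copy of recently changed rows
psql -U postgres -d production -c "\copy (SELECT * FROM orders WHERE updated_at > now() - interval '1 day') TO 'orders_delta.csv' CSV"
psql -U postgres -d staging -c "\copy orders FROM 'orders_delta.csv' CSV"

That only works if changes are append-only, or if you delete the matching rows on staging first; otherwise you'll run into the same duplicate and constraint problems mentioned above.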