2013 LinuxQuestions.org Members Choice Awards
This forum is for the 2013 LinuxQuestions.org Members Choice Awards. You can now vote for your favorite products of 2013. This is your chance to be heard! Voting ends on February 4th.
I've used PostgreSQL for half a dozen years. It never seems to have an issue, even if the server dies. It is fast even on old, slow hardware and has been easy to work with. I'm not convinced there isn't a better db out there; I just haven't had a need to look.
I use PostgreSQL, Oracle, Unify (not in this list), MySQL/MariaDB, SQLite, and CSV (not in this list) on a daily basis, and converting data between them has become routine.
*ALL* databases suck in one way or another, but PostgreSQL sucks least of all. CSV is easy to port: every table is a file and can be hand-edited, but of course it is slow. Using Perl with DBD::CSV, however, makes a folder of CSV files look just like a database, and conditionally filling a "real" relational database from CSV datasets suddenly becomes easy (see the sketch below).
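For illustration, here is a minimal sketch of that workflow, assuming a hypothetical /data/csv directory of CSV files and a hypothetical PostgreSQL database called "warehouse"; the table and column names are made up:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# A folder of CSV files queried like a database with DBD::CSV;
# each file in f_dir is treated as a table.
my $csv = DBI->connect("dbi:CSV:", undef, undef, {
    f_dir      => "/data/csv",   # hypothetical folder of CSV "tables"
    f_ext      => ".csv/r",      # tables are the *.csv files
    RaiseError => 1,
});

# The "real" relational database (hypothetical name and credentials).
my $pg = DBI->connect("dbi:Pg:dbname=warehouse", "user", "pass",
    { RaiseError => 1, AutoCommit => 0 });

# Plain SQL against the CSV file customers.csv ...
my $sel = $csv->prepare(
    "SELECT id, name, email FROM customers WHERE country = 'NL'");
# ... conditionally filling the relational database.
my $ins = $pg->prepare(
    "INSERT INTO customers (id, name, email) VALUES (?, ?, ?)");

$sel->execute;
while (my @row = $sel->fetchrow_array) {
    $ins->execute(@row);
}
$pg->commit;

Because DBD::CSV speaks the same DBI interface as DBD::Pg, the copy loop doesn't care which side is a flat file and which is a real server; only the two connect calls differ.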