Programming
This forum is for all programming questions. The question does not have to be directly related to Linux and any language is fair game.
I've installed PostgreSQL, but it seems to be slow with the default configuration. I read the documentation and found some interesting parameters:
Quote:
shared_buffers (integer), currently 3500
Sets the number of shared memory buffers used by the database server.
sort_mem (integer)
Specifies the amount of memory to be used by internal sort operations and hash tables before switching to temporary disk files. Note that for a complex query, several sort or hash operations might be running in parallel.
vacuum_mem (integer)
Specifies the maximum amount of memory to be used by VACUUM to keep track of to-be-reclaimed rows. The value is specified in kilobytes, and defaults to 8192 kB. Larger settings may improve the speed of vacuuming large tables that have many deleted rows.
fsync (boolean), currently false
If this option is on, the PostgreSQL server will use the fsync() system call in several places to make sure that updates are physically written to disk. However, using fsync() results in a performance penalty: when a transaction is committed, PostgreSQL must wait for the operating system to flush the write-ahead log to disk. When fsync is disabled, the operating system is allowed to do its best in buffering, ordering, and delaying writes.
But changing them without any knowledge is not the right way. I need some tips for that - could you please help me?
The other thing is how to test the performance of Postgres (the number of transactions per second). I want to see what effect it has when I change a parameter.
How is it that you're getting the feeling of it being slow? You're running a query that seems to take longer than it should? Over how much data? How many people access the database? Are you using indexes? You need to know what is going slow before you can fix it. For instance, changing vacuum_mem only improves the maintenance of the database, not its operation.
If a particular query is running slow, you can use EXPLAIN to show what's happening for that query.
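As a sketch (the "customers" table here is made up for illustration), you would run something like this in psql:

```sql
-- EXPLAIN shows the plan the optimizer chose without running the query;
-- EXPLAIN ANALYZE actually executes it and reports real timings.
EXPLAIN SELECT * FROM customers WHERE name LIKE 'A%';
EXPLAIN ANALYZE SELECT * FROM customers WHERE name LIKE 'A%';
```

A sequential scan over a big table in the output is usually the first hint that an index is missing.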
As for benchmarking... that's harder depending on what's going on. For instance, if you're the only user of the database, it's going to spend most of its time waiting for you to do something, so transactions per second is not going to be a useful figure. You could write up a script for psql that performs 1000 inserts and use 'time' to see how long it takes. Also test 1000 queries, updates, and deletes for a better picture.
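A minimal sketch of that timing approach (the table name "bench" and database name "test" are made up for illustration; the second half needs a live server):

```shell
#!/bin/sh
# Generate a file with 1000 INSERT statements for a made-up table "bench".
i=1
while [ $i -le 1000 ]; do
    echo "INSERT INTO bench VALUES ($i, 'row $i');"
    i=$((i + 1))
done > inserts.sql

# Then time how long psql takes to run them all:
#   psql test -c "CREATE TABLE bench (id integer, note text);"
#   time psql test -f inserts.sql
```

Wrapping the statements in a single transaction (a BEGIN at the top of the file and a COMMIT at the end) is usually much faster, since otherwise each statement commits - and with fsync on, syncs - individually.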
I think that is one of the main problems for improving speed. But I cannot find out where to turn it on; createdb does not seem to have an option for it.
Quote:
You need to know what is going slow before you can fix it
Yes, you're right. I'm just imagining it, because I have no experience. But I want my database system to handle as many transactions per second as possible. As for the maximum number of people logged in, I don't think it's so important (50 is more than enough).
For example, I've now set 50 as the maximum number of connections - how do I boost the number of transactions? Can you give me something more?
Indexes are specific to your data. For example, if you have a table called "customers", and you regularly do a query like
select * from customers where name like 'A%'
then your query would be sped up a lot by creating an index on the name field, which you would do via an SQL command like "CREATE INDEX custname ON customers (name)"
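Putting that together (the table and its contents are made up for illustration):

```sql
-- Create the index, refresh planner statistics, and check whether it is used.
CREATE INDEX custname ON customers (name);
ANALYZE customers;
EXPLAIN SELECT * FROM customers WHERE name LIKE 'A%';
-- With enough rows (and a locale that allows it), the plan may show an
-- Index Scan instead of a Seq Scan.
```

One caveat: for LIKE patterns to use a plain btree index, the database generally needs to be in the C locale; otherwise the planner may stick with a sequential scan.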
I understand you want maximum performance, but that can only be achieved on a per-system basis. You can tune other things separately, like your hard drive performance, and that will very likely improve postgres as a side effect, but to try and measure transactions per second makes almost no sense. Inserting a row into a 40 column table will take longer than inserting a row into a 2 column table, but they are both transactions. A query that retrieves data from fifteen different tables with lots of crazy joins is a single transaction too, but will probably take far longer than the two inserts. You need to set up your system and create tests for the activities that will apply to your system - there is no single 'golden configuration' that just makes Postgres a speed demon.
This url http://www.phpbuilder.com/columns/sm...21.php3?page=2 helps explain the shared memory settings you listed above. The fsync parameter improves speed at the possible expense of data integrity (if the system crashes or loses power, your database may be in an inconsistent state - whether this is important to you or not depends on how much your database is used and how important the data is).
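For reference, these parameters live in postgresql.conf. The values below are only illustrative placeholders, not recommendations - tune them against your own RAM and workload:

```
# postgresql.conf (illustrative values only)
shared_buffers = 3500      # shared memory buffers
sort_mem = 1024            # per-sort memory, in kB
vacuum_mem = 8192          # memory for VACUUM, in kB
fsync = true               # keep on unless losing recent commits on a crash is acceptable
```

You need to restart the server after changing shared_buffers; a reload is not enough for that one.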