LinuxQuestions.org
Old 03-25-2008, 05:21 AM   #1
sridhar_dct3
Member
 
Registered: May 2007
Location: india,chennai
Distribution: debian,ubuntu linux
Posts: 30

Rep: Reputation: 15
postgres - performance


Dear friends,

I am developing a project which uses a PostgreSQL database as the back end.
The response time was fine when there were fewer records (in the initial stage).

When the number of records in a table increased to some 50 lakh records, I found that the response of PostgreSQL became very slow.

The table size is about 1GB and total size of the database is about 2.2GB.

Even simple SQL queries take a long time.

Ex:

Select count(f1) from mytable1; - 34 seconds
Select f1 from mytable1; - 34 seconds


Experts, is there any constraint in Postgres that it should not go beyond some number of lakh records, or should I not use Postgres to keep about 50 lakh records in a table?


I am surprised that Google gives results over 4 million records within 4 seconds (how??).

NOTE:
I am using indexes properly - no use.
Tried tuning postgresql.conf - no use.
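One way to see where the time goes (a sketch, assuming psql access; mytable1 and f1 are the names from the example above) is to ask the planner:

```sql
-- Show the actual plan and timing for the slow query.
-- A "Seq Scan" here means the whole ~1 GB table is read for every count;
-- in PostgreSQL, count() always scans the table (MVCC keeps no global
-- row count), so the question becomes disk throughput and cache,
-- not whether an index exists.
EXPLAIN ANALYZE SELECT count(f1) FROM mytable1;
```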

Here is the hardware configuration:
/proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 15
model : 4
model name : Intel(R) Celeron(R) CPU 2.66GHz
stepping : 1
cpu MHz : 2667.013
cache size : 256 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc up pni monitor ds_cpl tm2 cid cx16 xtpr
bogomips : 5340.26

/proc/meminfo
MemTotal: 1003128 kB
MemFree: 14172 kB
Buffers: 2580 kB
Cached: 794580 kB
SwapCached: 9184 kB
Active: 297484 kB
Inactive: 665028 kB
HighTotal: 97216 kB
HighFree: 252 kB
LowTotal: 905912 kB
LowFree: 13920 kB
SwapTotal: 979956 kB
SwapFree: 957100 kB
Dirty: 244 kB
Writeback: 0 kB
AnonPages: 165164 kB
Mapped: 64620 kB
Slab: 15860 kB
PageTables: 2424 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 1481520 kB
Committed_AS: 369788 kB
VmallocTotal: 114680 kB
VmallocUsed: 4460 kB
VmallocChunk: 109560 kB
 
Old 03-25-2008, 09:54 PM   #2
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 7.7 (?), Centos 8.1
Posts: 17,831

Rep: Reputation: 2559
Show us the table definition.
how much is 'lakh' in metric ?
 
Old 03-26-2008, 01:09 PM   #3
kromberg
Member
 
Registered: Feb 2007
Location: Colorado
Distribution: FC6, FC7 x86_64
Posts: 218

Rep: Reputation: 30
Postgres stores all the tables in the filesystem and relies heavily on the OS file caching system for performance. The biggest problem I see is that the machine has only about 1 GB of memory: 1003128 kB / 1024 = 979 MB. The best thing to do is to increase the amount of memory in the machine and periodically force the files of the database into the cache:

find /var/lib/pgsql/data -type f -exec cat {} \; > /dev/null
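The same cache-warming idea can be sketched on a throwaway directory (the pgsql data path above is the real target; the mktemp directory here just makes the example self-contained):

```shell
# Stand-in for /var/lib/pgsql/data: a temp dir with a couple of files.
dir=$(mktemp -d)
echo "block one" > "$dir/f1"
echo "block two" > "$dir/f2"

# Read every file once so the OS page cache ends up holding it.
find "$dir" -type f -exec cat {} + > /dev/null && echo "warmed"

rm -r "$dir"
```

Note the caveat implied above: with only ~1 GB of RAM and a ~2.2 GB database, the page cache cannot hold everything, so warming only helps the tables that fit.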

Keith
 
Old 03-26-2008, 06:30 PM   #4
AdaHacker
Member
 
Registered: Oct 2001
Location: Brockport, NY
Distribution: Kubuntu
Posts: 384

Rep: Reputation: 32
Quote:
Originally Posted by chrism01 View Post
how much is 'lakh' in metric ?
According to Wikipedia, a lakh is 100,000, so 50-lakh would be 5,000,000 rows.

As to the original problem, I'd have to concur with kromberg: a multi-million row production database probably shouldn't be running on a 2.66GHz Celeron (!?!) with only one gigabyte of RAM. You might want to consider buying a real server.
 
Old 03-27-2008, 11:39 AM   #5
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541

Rep: Reputation: 1062
In every DBMS I've used (not including PostgreSQL, alas), 5M rows is, well, trivial - particularly for executing count (which usually returns almost instantly if there's no where clause). What is not trivial, however, is row size (and that has almost everything to do with system memory, buffers and all of that). When I find a DBMS with the kind of performance (or lack of it) cited, the first thing I'll look at is the row size, and start thinking about dividing up a large table (meaning a large row size, not a large number of rows) into multiple tables with join columns (most engines can do joins a lot faster than they can handle huge rows in queries). I also look hard at indexes; I frequently find poor performance directly caused by poor indexing. Given the example, count(f1), my first question (after asking about the row size) would be: what is f1? A numeric? A string (and how big a string)? Is there a huge composite index that includes f1? Is there an index at all?

As an example, I run a GIS database in MySQL on a 3GHz processor with 1G of RAM that has over 25 million rows. It returns count in about 3 seconds (and I can't believe that PostgreSQL performs that much worse than MySQL). The table has a unique numeric identifier, longitude (decimal), latitude (decimal), altitude (integer), population (integer), country code (string), identifier (string), native name (string) and English name (string). There is an index on the unique numeric identifier, a composite index on the longitude and latitude, an index on the alphabetic identifier, and an index on the English name. The only one that's really necessary is the first; the others are for querying on those columns. I only offer this as an example of reasonable performance on a not-too-big platform.

If sridhar_dct3 would post the table schema (including the indexes) perhaps that would help to analyze where the problem might lie.
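The point above about building indexes to match real queries might be sketched like this (illustrative; the column names are borrowed from the schema posted later in the thread, and the index name is made up):

```sql
-- A query that is always constrained on the same two columns...
SELECT count(*) FROM call_log
 WHERE test_call_stime >= '2008-03-01'
   AND test_call_status = 'DETECTED';

-- ...is usually served better by one composite index covering exactly
-- those columns than by two separate single-column indexes.
CREATE INDEX call_stime_status_idx
    ON call_log (test_call_stime, test_call_status);
```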
 
Old 04-03-2008, 01:22 AM   #6
sridhar_dct3
Member
 
Registered: May 2007
Location: india,chennai
Distribution: debian,ubuntu linux
Posts: 30

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by tronayne View Post
If sridhar_dct3 would post the table schema (including the indexes) perhaps that would help to analyze where the problem might lie.
Here is the table schema
CREATE TABLE testing
(
test_call_id integer default nextval('call_log_seq'::text) NOT NULL,
test_source_ext_no varchar (15),
test_dest_ext_no varchar (15),
test_call_type varchar (20) default 'IN', -- IN/OUT
test_caller_id varchar (20),
test_dialed_no varchar (20),
test_call_stime timestamp with time zone default now(),
test_call_etime timestamp with time zone default now(),
test_is_complete boolean,
test_call_duration integer,
test_voice_recorded boolean,
test_screen_recorded boolean,
test_no_of_holds integer,
test_no_of_transfers integer,
test_no_of_rings integer,
test_total_hold_time integer,
test_record_rule integer,
test_call_status varchar (20) default 'DETECTED',
test_voice_in_filesize integer,
test_voice_out_filesize integer,
test_screen_filesize integer,
test_project_id integer default 0,
test_campaign_id integer default 0,
test_aldy_record_chkd boolean default 'f',
test_aldy_record_chkd_admin boolean default 'f',
test_voice_codec varchar (20),
test_voice_samplerate varchar (20),
test_voice_datasize varchar (15),
test_voice_compression_format varchar (10),
test_screen_compression_format varchar (10),
test_caller_id_desc varchar (20),
test_dtmf_values text,
test_srecord_fps integer,
test_srecord_time timestamp with time zone
);

CREATE INDEX call_stime_index ON testing USING btree (test_call_stime);
CREATE INDEX call_etime_index ON testing USING btree (test_call_etime);

ALTER TABLE ONLY testing
ADD CONSTRAINT testing_pkey PRIMARY KEY (test_call_id);


-> A row can contain a maximum of 1000 bytes.
-> I will try the same with MySQL as well.

-> An index on test_call_id will be created by default since it is the primary key field.

For your info:
I tried the same on a 3GHz machine with 4GB RAM and the result is:

select count(test_call_id) from testing; - 4 secs.
I am still not satisfied with the performance of Postgres.
I am expecting the result within a second, because if I fire a SELECT query on a non-indexed field I have to wait for 5 minutes - that's too bad.

Last edited by sridhar_dct3; 04-03-2008 at 01:27 AM.
 
Old 04-03-2008, 08:44 AM   #7
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541

Rep: Reputation: 1062
If I may make a couple of suggestions, I would first consider changing the definition of test_call_id to serial (I would do this irrespective of what DBMS I was using) with a unique or primary key constraint, perhaps with an initial default value. I would use serial unless I was sure that there would be more than 2^31 values over the life of this table (which is, pretty much, unlikely), in which case I might consider bigserial (but I wouldn't do so by default).

The reason for this is that, as you have the column defined, you're essentially making the engine execute SQL rather than letting it generate the next value internally. The PostgreSQL manual shows the serial type as "equivalent to", which is not the same thing as "equal to", if you know what I mean. As a general rule of thumb, always -- always -- use the data types defined in the DBMS (they're optimized to be as efficient as possible) rather than "rolling your own."

As shown, you have indexes on the test_call_id, test_call_stime and test_call_etime columns. If you alter the test_call_id column to serial you will probably see a speed improvement. I would expect queries on non-indexed columns to take quite some time to execute, and would suggest that you analyze your table from the point of view of what you're going to be querying and build individual or composite indexes to support that; don't overdo it, but run some typical queries (that include multi-column constraints), then create a composite index on those columns and see what happens. It's usually a good hint that you need an index when a query runs like a three-legged dog with a busted tail, eh?

Let the engine do the work -- that's what it's good at -- and try to keep things as simple as possible (avoid stored procedures and triggers if at all possible) and life will be good.

Hope this helps some.
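A minimal sketch of the serial suggestion above (the column name is from the posted schema; the comment shows the hand-rolled definition it replaces):

```sql
-- Hand-rolled version from the posted schema:
--   test_call_id integer DEFAULT nextval('call_log_seq'::text) NOT NULL
-- Letting the engine define the sequence, default and constraint itself:
CREATE TABLE testing (
    test_call_id serial PRIMARY KEY
    -- ... remaining columns as in the posted schema ...
);
```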
 
Old 04-03-2008, 12:41 PM   #8
Tinkster
Moderator
 
Registered: Apr 2002
Location: in a fallen world
Distribution: slackware by choice, others too :} ... android.
Posts: 23,067
Blog Entries: 11

Rep: Reputation: 914
Can you please post your postgresql.conf ?

And what's the I/O subsystem?

[edit]
Oh, and when did you last VACUUM that table?
[/edit]


Cheers,
Tink
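For reference, the maintenance command being asked about (autovacuum is available in 8.1, but an explicit run on the big table would show whether bloat and stale statistics are part of the problem):

```sql
-- Reclaim space from dead row versions and refresh planner statistics.
VACUUM ANALYZE testing;
```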

Last edited by Tinkster; 04-03-2008 at 12:56 PM.
 
Old 04-04-2008, 04:28 AM   #9
sridhar_dct3
Member
 
Registered: May 2007
Location: india,chennai
Distribution: debian,ubuntu linux
Posts: 30

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by Tinkster View Post
Can you please post your postgresql.conf ?

And what's the I/O subsystem?

[edit]
Oh, and when did you last VACUUM that table?
[/edit]


Cheers,
Tink
========================================================================
grep "^\s*#" -v /etc/postgresql/8.1/main/postgresql.conf | tr -s '\n'

hba_file = '/etc/postgresql/8.1/main/pg_hba.conf' # host-based authentication file
ident_file = '/etc/postgresql/8.1/main/pg_ident.conf' # IDENT configuration file
external_pid_file = '/var/run/postgresql/8.1-main.pid' # write an extra pid file
listen_addresses = '*' # what IP address(es) to listen on;
port = 5432
max_connections = 100
unix_socket_directory = '/var/run/postgresql'
shared_buffers = 4096 # min 16 or max_connections*2, 8KB each
temp_buffers = 1000 # min 100, 8KB each
work_mem = 102400 # min 64, size in KB
maintenance_work_mem = 524288 # min 1024, size in KB
effective_cache_size = 200000 # typically 8KB each
cpu_tuple_cost = 0.01 # (same)
log_rotation_age = 1440 # Automatic rotation of logfiles will
client_min_messages = notice # Values, in order of decreasing detail:
log_min_messages = log # Values, in order of decreasing detail:

log_connections = on
log_disconnections = on
log_duration = on
log_line_prefix = '%t [%p] ' # Special values:
log_statement = 'all' # none, mod, ddl, all
stats_row_level = on
autovacuum = on # enable autovacuum subprocess?
datestyle = 'SQL, DMY'
lc_messages = 'en_IN' # locale for system error message
lc_monetary = 'en_IN' # locale for monetary formatting
lc_numeric = 'en_IN' # locale for number formatting
lc_time = 'en_IN' # locale for time formatting
========================================================================
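For scale (a side calculation, not from the thread): shared_buffers and effective_cache_size above are counted in 8 KB pages, so the posted values work out to:

```shell
# shared_buffers = 4096 pages x 8 KB each
echo "shared_buffers: $(( 4096 * 8 / 1024 )) MB"
# effective_cache_size = 200000 pages x 8 KB each
echo "effective_cache_size: $(( 200000 * 8 / 1024 )) MB"
```

32 MB of shared buffers on a 4 GB machine is on the small side by most tuning guides, which may be worth revisiting alongside the disk question.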
HOST:~# free
             total       used       free     shared    buffers     cached
Mem:       4076464    2465364    1611100          0     151968    2149580
-/+ buffers/cache:     163816    3912648
Swap:       979956         96     979860
HOST:~#
========================================================================
HOST:~# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Core(TM)2 CPU 4300 @ 1.80GHz
stepping : 2
cpu MHz : 1798.315
cache size : 2048 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm
bogomips : 3599.31

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 15
model name : Intel(R) Core(TM)2 CPU 4300 @ 1.80GHz
stepping : 2
cpu MHz : 1798.315
cache size : 2048 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pni monitor ds_cpl est tm2 cx16 xtpr lahf_lm
bogomips : 3596.69

========================================================================

Please tell me if there is anything I need to upgrade.
 
Old 04-04-2008, 08:54 AM   #10
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541

Rep: Reputation: 1062
Just for grins I created your call_log table in the test database in MySQL on two machines: a Sun SPARC 5 (running Solaris 8) and a Linux server (3GHz, 1G RAM), using this schema (which is altered to use default data types):
Code:
drop    table if exists call_log;

create  table   call_log (
        call_id                         serial,
        source_ext_no                   varchar(15),
        dest_ext_no                     varchar(15),
        call_type                       varchar(20) default 'in',
        caller_id                       varchar(20),
        dialed_no                       varchar(20),
        call_stime                      timestamp,
        call_etime                      timestamp,
        is_complete                     bool,
        call_duration                   integer,
        voice_recorded                  bool,
        screen_recorded                 bool,
        no_of_holds                     integer,
        no_of_transfers                 integer,
        no_of_rings                     integer,
        total_hold_time                 integer,
        record_rule                     integer,
        call_status                     varchar(20) default 'detected',
        voice_in_filesize               integer,
        voice_out_filesize              integer,
        screen_filesize                 integer,
        project_id                      integer default 0,
        campaign_id                     integer default 0,
        aldy_record_chkd                bool default 0,
        aldy_record_chkd_admin          bool default 0,
        voice_codec                     varchar(20),
        voice_samplerate                varchar(20),
        voice_datasize                  varchar(15),
        voice_compression_format        varchar(10),
        screen_compression_format       varchar(10),
        caller_id_desc                  varchar(20),
        dtmf_values                     text,
        srecord_fps                     integer,
        srecord_time                    timestamp
);

create  index   call_stime_index on call_log (call_stime);
create  index   call_etime_index on call_log (call_etime);
The serial data type implies primary key; executing describe call_log shows:
Code:
+---------------------------+---------------------+------+-----+---------------------+----------------+
| Field                     | Type                | Null | Key | Default             | Extra          |
+---------------------------+---------------------+------+-----+---------------------+----------------+
| call_id                   | bigint(20) unsigned | NO   | PRI | NULL                | auto_increment |
| source_ext_no             | varchar(15)         | YES  |     | NULL                |                |
| dest_ext_no               | varchar(15)         | YES  |     | NULL                |                |
I wrote a little program that creates 1,000,000 rows of random strings and numerics; the first two rows of the data look like this:
Code:
0|qKUCUzTcxewO|mQQkvAg gzD |WnaZDpTJIynv|MEimwn zC hG|x NQxXHoiRfH|||0|1331946527|1|1|2093923550|409228709|140138229|1218327146|2075019552|fVSxtQSa mv |1886741086|1020826200|256306606|1785695289|2008890539|0|0|SVFz mvnkTBG| GgsvIJBEdql|sKJ RoU Du o|HEHz H DRQOl| E  vXcYTj b|yUssvab ponJ|NzWkRl oSzl |1477942413||
0|NTGhOcACEiOE|UGcuu NBk tA| NazQQEKiwyX|erFWQsYPn Pi|wmAdI XHXI g|||1|961689752|0|1|2075204513|1448851265|1861928064|923117909|475674527|Zl cZkCcnbOa|172852799
I then loaded the million rows into that table and ran count() on it; the Linux server result:
Code:
time print "select count(dtmf_values) from call_log;" | mysql test
count(dtmf_values)
100000

real    0m0.02s
user    0m0.01s
sys     0m0.00s
and
Code:
time print "select count(*) from call_log;" | mysql test
count(*)
100000

real    0m0.01s
user    0m0.01s
sys     0m0.01s
The above is about what I'd expect (counting the column without an index takes just slightly longer), and was virtually identical on both the Solaris and Linux servers (a SPARC 5 is not a fast box).

My machines use the "huge" MySQL configuration (for machines with 1G-2G memory) but that doesn't really matter all that much.

So, either PostgreSQL is a real dog (which I don't believe), or your system has a real problem, or your schema could use some attention (which I do believe), or there's something seriously incorrect in your PostgreSQL configuration.
 
Old 04-06-2008, 02:44 PM   #11
Tinkster
Moderator
 
Registered: Apr 2002
Location: in a fallen world
Distribution: slackware by choice, others too :} ... android.
Posts: 23,067
Blog Entries: 11

Rep: Reputation: 914
Quote:
Originally Posted by sridhar_dct3 View Post
Please tell me if there is anything I need to upgrade.
You haven't mentioned your disk sub-system, which, to an RDBMS,
is more important than the CPU.

And I created that table structure with 1,000,000 rows on
my notebook - a "Genuine Intel(R) CPU T2300 @ 1.66GHz"
and ran that query.
Code:
# select count(*) from call_log;
  count  
---------
 1000000
(1 row)

Time: 302.699 ms
0.3 seconds. Not too bad.

That's with a bored postgres and only one user.


Now, if the system on YOUR end was busy, specifically with
inserts or deletes on that table, I'd expect slower results
for a variety of reasons; to begin with, w/o locking the
table exclusive you won't be getting a precise result in the
first place - it's always going to be just a ball-park figure.

How many concurrent sessions is that machine handling, when
did you last run a 'vacuum analyze' on that table?



Cheers,
Tink
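Related to the imprecision mentioned above: when a ball-park figure is acceptable anyway, the planner's own row estimate is essentially free (kept in pg_class and refreshed by VACUUM/ANALYZE; table name from the thread):

```sql
-- Instant approximate row count, no table scan.
SELECT reltuples::bigint AS approx_rows
  FROM pg_class
 WHERE relname = 'testing';
```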

Last edited by Tinkster; 04-06-2008 at 02:58 PM.
 
Old 04-09-2008, 02:43 AM   #12
Tinkster
Moderator
 
Registered: Apr 2002
Location: in a fallen world
Distribution: slackware by choice, others too :} ... android.
Posts: 23,067
Blog Entries: 11

Rep: Reputation: 914
Quote:
Originally Posted by tronayne View Post
Code:
time print "select count(*) from call_log;" | mysql test
count(*)
100000

real    0m0.01s
user    0m0.01s
sys     0m0.01s
Only just noticed that your million is a hundred thousand :}



Cheers,
Tink
 
Old 04-09-2008, 06:45 AM   #13
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541

Rep: Reputation: 1062
Ouch! Those are the test ones! Here are the "real" ones:
Code:
time print "select count(*) from call_log;" | mysql test
count(*)
1000000

real    0m0.01s
user    0m0.01s
sys     0m0.00s
and
Code:
time print "select count(dtmf_values) from call_log" | mysql test
count(dtmf_values)
1000000

real    0m0.86s
user    0m0.01s
sys     0m0.01s
Still and all, not anything to get too excited about.

Darn it, I hate getting old and blind; thanks for letting me know.
 
  

