LinuxQuestions.org
Old 12-09-2012, 10:50 PM   #1
landysaccount
Member
 
Registered: Sep 2008
Location: Dominican Republic
Distribution: Debian Squeeze
Posts: 177

Rep: Reputation: 17
Server is extremely overloaded.


I have a webserver (PHP/MySQL/Apache2) that runs fine hosting four websites; one of these averages 750 visits and around 8,000 pageviews. I take two backups of the server's directories and the entire database and transfer them to another box on the same network.

Things have been getting strange lately; my compressed backup is already 11 GB. I've discovered that while this backup runs, the server gets extremely sluggish. Today I ran uptime and got these very ugly numbers that scared me:

Quote:
uptime
00:35:14 up 7:49, 1 user, load average: 54.71, 41.73, 35.05
Quote:
ps ax
16486 ? S 0:00 /bin/bash /root/backupWebServer.sh
16558 ? S 0:14 tar czf webserverBackUp-Dec-10-2012-00.tar.gz /root/webserver_backup.sql /home/
16559 ? D 2:21 gzip
I don't know what is really causing such an overload on this box. Things started acting weird about a week ago.

What measures should I take to prevent this overload, and what else should I look at?

Thanks in advance for your time and help.
 
Old 12-10-2012, 01:37 AM   #2
rylan76
Senior Member
 
Registered: Apr 2004
Location: Potchefstroom, South Africa
Distribution: Fedora 17 - 3.3.4-5.fc17.x86_64
Posts: 1,475

Rep: Reputation: 87
As for why you suddenly started seeing these loads, no idea. A log file explosion, or maybe a big increase in traffic causing bigger log files?

You also do not state WHAT you back up - do you take the entire file system, or only the Apache webroot and a dump of the MySQL databases?

As for the loads, you appear to be compressing with gzip - so what I'd try first is to lower the priority (the so-called "renice") of the gzip process, to give other processes more CPU time. The compression will take much longer, but at least your system won't slow down for critical activities other than the gzipping.
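For example (just a sketch - the tar command and the gzip PID are taken from your ps listing; adjust the paths to your own script):
Code:
# Run the backup's tar/gzip at the lowest CPU priority (and idle I/O priority, if ionice is available)
nice -n 19 ionice -c3 tar czf webserverBackUp.tar.gz /root/webserver_backup.sql /home/

# Or renice a gzip that is already running, by PID (16559 in your ps output)
renice 19 -p 16559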

If you are transferring to a backup server on the same intranet, bandwidth costs shouldn't be an issue, and if you can afford the extra bandwidth, don't compress the file at all. You will save enormous amounts of CPU time.

If size is an issue, you can also try the 7zip compressor; it gives much better ratios than gzip but can be computationally intensive.

I back up several sites off my primary webserver this way - tar the webroot, 7zip it, then use a simple PHP FTP script to move the file over to the backup server. It is automated via a cronjob and runs once every 24 hours, roughly as sketched below.
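A rough sketch of that kind of job (the paths, host name and schedule are made up, and I use scp here instead of my PHP FTP script just to keep it short):
Code:
#!/bin/bash
# /root/backupWebroot.sh - tar the webroot, 7zip it, copy it to the backup box
DAY=$(date +%F)
nice -n 19 tar cf /tmp/webroot-$DAY.tar /var/www
nice -n 19 7z a -mx=7 /tmp/webroot-$DAY.tar.7z /tmp/webroot-$DAY.tar
scp /tmp/webroot-$DAY.tar.7z backupbox:/backups/
rm -f /tmp/webroot-$DAY.tar /tmp/webroot-$DAY.tar.7z

# crontab entry, once every 24 hours:
#   0 2 * * * /root/backupWebroot.sh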
 
Old 12-10-2012, 02:26 AM   #3
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 12,050

Rep: Reputation: 971Reputation: 971Reputation: 971Reputation: 971Reputation: 971Reputation: 971Reputation: 971Reputation: 971
Try this to get an idea of the loadavg contributors
Code:
top -b -n 1 | awk '{if (NR <=7) print; else if ($8 == "D") {print; count++} } END {print "Total status D: "count}' > saveit.txt
Post the file.

Later: are you shipping these backups over NFS by any chance?

Last edited by syg00; 12-10-2012 at 02:33 AM.
 
Old 12-10-2012, 06:06 AM   #4
landysaccount
Member
 
Registered: Sep 2008
Location: Dominican Republic
Distribution: Debian Squeeze
Posts: 177

Original Poster
Rep: Reputation: 17
rylan76, syg00 thanks for replying.

Yes. I back up Apache's webroot and do a mysqldump. Today I can't even log on to the system; it's very slow.

I compress the webroot so I can move just that one big file to the backup server... Since I'm moving it within the same network, size doesn't matter.

I don't want to reboot because I would like to run syg00's command.
 
Old 12-10-2012, 06:47 AM   #5
rylan76
Senior Member
 
Registered: Apr 2004
Location: Potchefstroom, South Africa
Distribution: Fedora 17 - 3.3.4-5.fc17.x86_64
Posts: 1,475

Rep: Reputation: 87
Hmm rather run the command you got...

I don't know what could be wrong there, but I think you're looking in the wrong place. I've worked with some VERY underpowered and very old systems running Linux - often doing exactly what you're doing, with Apache webroot archiving and mysqldumping - and I've never been able to pull even the oldest, slowest Pentium down into unresponsiveness with a simple tar/gzip mysqldump/gzip (or bzip, or 7zip).

At worst load averages went to 2 - this was on Red Hat 9, FC 3, etc.

It sounds rather as if you have massive concurrency loads on your MySQL instance. How busy is the server in terms of web serving? The only place I've ever seen such loads on a webserver is in a massively concurrent environment, possibly combined with sub-optimal MySQL table design - row or table deadlocks, or very slow queries?
 
Old 12-10-2012, 10:56 AM   #6
landysaccount
Member
 
Registered: Sep 2008
Location: Dominican Republic
Distribution: Debian Squeeze
Posts: 177

Original Poster
Rep: Reputation: 17
rylan76 thanks for replying.

Our website has been experiencing high traffic over the last couple of days... visits have doubled lately; maybe that's where the problem comes from...

running top:
Quote:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1372 mysql 20 0 224m 59m 3624 S 123 3.9 103:55.07 mysqld
11145 www-data 20 0 55616 21m 4536 S 18 1.4 0:03.10 apache2
11163 www-data 20 0 53044 19m 4528 S 12 1.3 0:03.06 apache2
11122 www-data 20 0 54368 19m 4412 S 9 1.3 0:00.30 apache2
11168 www-data 20 0 53540 19m 4536 R 7 1.3 0:02.62 apache2

uptime
12:55:14 up 4:25, 2 users, load average: 9.16, 4.01, 2.95
Now, this is without running the backup; normal operation.
 
Old 12-11-2012, 12:57 AM   #7
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.5, Centos 5.10
Posts: 16,225

Rep: Reputation: 2021Reputation: 2021Reputation: 2021Reputation: 2021Reputation: 2021Reputation: 2021Reputation: 2021Reputation: 2021Reputation: 2021Reputation: 2021Reputation: 2021
If visits of late have doubled, either someone has come up with a magnificent page, or possibly you've been hacked. Checked your Apache access_log/error_log recently?
Compare before the uptick with current access numbers.

Incidentally, re compression tools: most, including gzip and bzip2, have an option to trade off compression efficiency against speed.
They usually default to a middling value (e.g. gzip defaults to 6 on a scale of 1-9; see http://linux.die.net/man/1/gzip).
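For the backup in this thread, something along these lines trades ratio for speed (a sketch; the file names come from the OP's ps output):
Code:
# gzip -1 = fastest/least compression, -9 = slowest/best; the default is -6
tar cf - /root/webserver_backup.sql /home/ | gzip -1 > webserverBackUp.tar.gz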
 
Old 12-11-2012, 01:06 AM   #8
rylan76
Senior Member
 
Registered: Apr 2004
Location: Potchefstroom, South Africa
Distribution: Fedora 17 - 3.3.4-5.fc17.x86_64
Posts: 1,475

Rep: Reputation: 87
Yikes.

You may need to either optimize your MySQL database tables (use the OPTIMIZE TABLE command on each and see if it helps) or generate some indices on them...

I run the OPTIMIZE TABLE command via cron on all the tables on my main online MySQL server daily, and it does seem to make a difference in keeping server response times down. You can also try (depending on whether your MySQL version is recent enough to have a stable InnoDB engine) changing your tables from MyISAM to InnoDB - use

Code:
alter table tablename engine = innodb;
to do this.

WARNING - if you use fulltext indices you currently cannot use those indices with InnoDB as it does not support such indices in current MySQL stable versions. InnoDB is apparently more efficient with locking operations and you are (hopefully) less likely to run into table deadlocks in high-concurrency conditions than you are with MyISAM.

WARNING - do the ALTER to InnoDB when you can live without the DB for a few minutes. Converting a MyISAM table to an InnoDB table can take some time, depending on how large it is. On a Core i7 machine with 32 GB of RAM running MySQL 5.1 on Debian Squeeze, it takes about 8 minutes to convert a table with 40 columns and about 545,000 records from MyISAM to InnoDB.

WARNING - back up your database before attempting a table engine alter, just in case!
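For the nightly OPTIMIZE TABLE run I mentioned above, a minimal sketch (the credentials file is an assumption - adjust to your own setup):
Code:
# /etc/cron.d/mysql-optimize - mysqlcheck -o runs OPTIMIZE TABLE on every table, nightly at 03:00
0 3 * * * root mysqlcheck --defaults-extra-file=/root/.my.cnf --optimize --all-databases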

Other than a brute-force approach like upgrading hardware (doubling memory, for example, might - or might not - have a beneficial impact), as a last resort you may need to redesign the queries in your applications, or rethink your application entirely.

For the http://poweralert.co.za site, for example - one that experienced VERY high concurrency loads at the time - I came up with a solution whereby the pages I need are generated only once every 10 minutes, since the data the site presents is the same for every visitor. E.g. even with 5000 concurrent visitors, the site hits the MySQL database only ONCE for all 5000 visitors - on the last ten-minute refresh that ran. It uses the results returned to build text files only ONCE for that ten-minute period, containing the HTML required to build the site. These text files are then simply included in the main HTML file.

Maybe you can do something similar? Including a text file is "lighter" on processing resources than "phoning up" MySQL each time for each and every user that comes into the site...
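On the Linux side the refresh can be as simple as a cron job (a sketch only; the URL and the cache path are made up for illustration - the generator page is whatever script runs your heavy queries):
Code:
# /etc/cron.d/page-cache - rebuild the expensive fragment every 10 minutes;
# visitors only ever read the static file, so MySQL is hit once per interval
*/10 * * * * www-data curl -s -o /var/www/site/cache/fragment.html http://localhost/build_fragment.php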

Last edited by rylan76; 12-11-2012 at 01:09 AM.
 
Old 12-18-2012, 04:33 PM   #9
landysaccount
Member
 
Registered: Sep 2008
Location: Dominican Republic
Distribution: Debian Squeeze
Posts: 177

Original Poster
Rep: Reputation: 17
Quote:
Originally Posted by chrism01 View Post
If visits of late have doubled, either someone has come up with a magnificent page, or possibly you've been hacked. Checked your Apache access_log/error_log recently?
Compare before the uptick with current access numbers.

Incidentally, re compression tools: most, including gzip and bzip2, have an option to trade off compression efficiency against speed.
They usually default to a middling value (e.g. gzip defaults to 6 on a scale of 1-9; see http://linux.die.net/man/1/gzip).
chrism01.

I check access_log/error_log regularly and see only the usual traffic to my sites... I don't know what else to tweak; I've noticed the problem is with MySQL...
 
Old 12-18-2012, 04:41 PM   #10
landysaccount
Member
 
Registered: Sep 2008
Location: Dominican Republic
Distribution: Debian Squeeze
Posts: 177

Original Poster
Rep: Reputation: 17
I ran OPTIMIZE TABLE on all tables in all databases... I can't notice any change or improvement. I even doubled the memory capacity and nothing...

I don't know if I should have mentioned that the website was built with Joomla, which runs many queries per page load.

This server is just getting smashed:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1361 mysql 20 0 255m 97m 6288 S 130 3.2 83:27.04 mysqld
7185 www-data 20 0 50392 20m 5096 S 5 0.7 0:20.34 apache2
7189 www-data 20 0 50396 20m 5084 S 5 0.7 0:10.72 apache2
7192 www-data 20 0 51756 21m 5212 S 5 0.7 0:24.88 apache2
7164 www-data 20 0 47064 17m 4888 S 4 0.6 0:17.84 apache2
7184 www-data 20 0 49936 20m 5216 S 4 0.7 0:25.76 apache2
7187 www-data 20 0 48088 18m 5076 S 4 0.6 0:24.62 apache2
7167 www-data 20 0 50368 19m 4264 S 3 0.7 0:23.32 apache2
7172 www-data 20 0 50400 20m 5052 S 3 0.7 0:21.50 apache2
7183 www-data 20 0 50376 19m 4268 S 3 0.7 0:25.44 apache2
7171 www-data 20 0 50400 20m 5072 S 2 0.7 0:19.52 apache2
7173 www-data 20 0 50632 20m 4288 S 2 0.7 0:24.18 apache2

# uptime
18:40:50 up 2:14, 2 users, load average: 26.45, 26.00, 26.23


Any other ideas?
 
Old 12-19-2012, 02:20 AM   #11
TenTenths
Senior Member
 
Registered: Aug 2011
Location: Dublin
Distribution: Centos 5 / 6
Posts: 1,398

Rep: Reputation: 433Reputation: 433Reputation: 433Reputation: 433Reputation: 433
Busy websites that use a PHP front-end and MySQL backend tend to run fine and then hit a "tipping point" where they start running slowly and showing high server load.

What is the actual specification of your server and how many page views are you doing per day?

You may have just reached the performance limit of that server.
 
Old 12-20-2012, 12:26 AM   #12
rylan76
Senior Member
 
Registered: Apr 2004
Location: Potchefstroom, South Africa
Distribution: Fedora 17 - 3.3.4-5.fc17.x86_64
Posts: 1,475

Rep: Reputation: 87
Good point - I've seen this before.

There are two more things the OP might try - separate the MySQL and PHP servers into two physical machines (thus splitting the SQL load and the web-serving load), and try InnoDB as the storage engine.

I don't know whether Joomla will support InnoDB for all storage tasks; the only potential problem is fulltext indices, which InnoDB does not support.

Since Joomla is already in use, I'm assuming all the relevant tables are ALREADY correctly indexed... which is bad news, since there is then no optimisation to be squeezed out of redesigning or indexing the tables.
 
Old 12-20-2012, 02:21 AM   #13
TenTenths
Senior Member
 
Registered: Aug 2011
Location: Dublin
Distribution: Centos 5 / 6
Posts: 1,398

Rep: Reputation: 433Reputation: 433Reputation: 433Reputation: 433Reputation: 433
Quote:
Originally Posted by rylan76 View Post
[..]separate the MySQL and PHP servers into two physical machines (thus splitting the SQL load and the web-serving load)
The efficiency of this will depend on the infrastructure between the two machines. Ideally they would be inter-connected either over a direct crossover cable or via a dedicated 100Mb/1Gb switch. I've seen this thing done in data-centers where both machines had (for bandwidth/contract reasons) 10Mb switch ports and a single NIC for all traffic on the PHP server, thus creating another performance bottleneck.
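If you do go down that road, the moving parts are roughly these (a sketch only; the addresses are made up):
Code:
# On the DB box, bind MySQL to the dedicated back-end interface
# (/etc/mysql/my.cnf, under [mysqld]):
#     bind-address = 192.168.100.2
# Then, from the web box, verify the link and point the application's DB host at that address:
mysql -h 192.168.100.2 -u webapp -p -e 'SELECT 1;'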
 
Old 12-20-2012, 06:32 AM   #14
rylan76
Senior Member
 
Registered: Apr 2004
Location: Potchefstroom, South Africa
Distribution: Fedora 17 - 3.3.4-5.fc17.x86_64
Posts: 1,475

Rep: Reputation: 87
Quote:
Originally Posted by TenTenths View Post
The efficiency of this will depend on the infrastructure between the two machines. Ideally they would be inter-connected either over a direct crossover cable or via a dedicated 100Mb/1Gb switch. I've seen this thing done in data-centers where both machines had (for bandwidth/contract reasons) 10Mb switch ports and a single NIC for all traffic on the PHP server, thus creating another performance bottleneck.
Yup, fully agree. I've seen the above (dedicated crossover) done, and it helped immensely with a certain site I was involved with at the time - the webserver machine got a second NIC installed and used it just for database comms to the MySQL server over the crossover cable at 1 Gb throughput. eth0 on the web machine stayed connected to the "world" for traffic. It helped a lot with load, just from the fact that a separate machine was running the queries and the webserver could get on with serving.

Then again, that webserver had a reasonably sophisticated caching setup - so moving SQL duties off-machine left it more CPU time to just flick already-cached files (created with deferred SQL at night) onto the browsers that came visiting. So, as you say, depending on your precise hardware and software architecture, it might or might not have a beneficial effect.

AFAIK Joomla, though, is pretty standard and should already be quite optimally keyed, etc.?

It could be, as another poster said, that the OP's machine is simply reaching its absolute processing limits for the volume of traffic thrown at it.
 
Old 12-20-2012, 05:07 PM   #15
landysaccount
Member
 
Registered: Sep 2008
Location: Dominican Republic
Distribution: Debian Squeeze
Posts: 177

Original Poster
Rep: Reputation: 17
Thanks all for replying...

I'm only getting 10K - 14K pageviews, which I don't think is much, but since Joomla runs so many queries I think that might be what's killing the server...

I was thinking of moving the MySQL server to another box, but I think I'd be in the same situation since the other box is exactly the same as the current server (not a powerful machine).

I also wanted to install a cache on the server, but that was going to eat a lot of memory... I don't know if that would be a good idea.

After reading, searching, and doing a lot of research... I found that enabling the Joomla cache (I was very hesitant) makes Joomla load webpages faster, with some trade-offs.

I enabled it and have been testing it for about two days now...


uptime
19:04:22 up 2 days, 2:37, 1 user, load average: 0.52, 0.74, 0.67

top:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
18528 mysql 20 0 207m 64m 6488 S 35 2.1 11:41.26 mysqld
20543 www-data 20 0 50372 19m 4208 S 11 0.7 0:02.32 apache2
20468 www-data 20 0 46260 16m 4224 S 6 0.5 0:05.58 apache2
20496 www-data 20 0 46000 16m 4392 S 1 0.5 0:07.32 apache2
4 root 15 -5 0 0 0 S 1 0.0 0:16.68 ksoftirqd/0

Pageviews and visits are now up a little.

Still testing... Thinking of adding a Squid cache to the server...

Any opinions?
 
  

