Hi,
I've just started experiencing a weird problem with my Linux server. Processes will just randomly die with the message "Killed" printed to standard output, as if someone had sent them SIGTERM with kill. While the machine does not have a *lot* of memory, the output of "free" does show available memory besides swap. Specifically, mysqld and asterisk are the two servers that seem to be crashing a lot. Both are the latest versions. The server is a Pentium 3 with 256MB of RAM and plenty of swap allocated. With MySQL, mysqld_safe seems to be doing its job of restarting mysqld when it dies, but there's still no indication in any log file that would tell me exactly why these processes are dying. Asterisk just isn't there suddenly; I have to restart it manually. I was going to write a script that puts it in a loop to make sure it stays running, but I would really like to know exactly why these processes are being killed. I could add more RAM to the system if it would help; however, I'm trying to understand why it would be necessary.
I do not know all the rules governing this, but I do know that the kernel will kill processes that are running away, or sucking resources, or doing something not allowed, or interfering with the kernel itself.
I have seen it myself under one circumstance. I have found that there is a memory leak in some combination of X, compiz, and firefox such that after running a session for a week or so I sometimes have full RAM, nearly full swap, and a sluggish system. I then restart X to clear things out. This leaves a lot of swap in use, and I then issue a swapoff -a to force all that swapped data back into RAM.
It can sometimes happen that I run out of RAM before swap is emptied. In this event, the system appears to hang for several minutes, then the kernel kills my swapoff command (giving the message "Killed", just as you have reported) and resumes operating normally, with swap re-enabled.
So the kernel will do this. Why is it doing it to you? I have no clue.
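That "Killed" message with nothing in the application logs is the classic signature of the kernel's out-of-memory (OOM) killer, and it does leave a trace in the kernel log. Something like the following should show it (the exact log file name depends on your distro, so treat these paths as examples):

    # Look in the kernel ring buffer for OOM killer activity
    dmesg | grep -i "out of memory"
    dmesg | grep -i "killed process"

    # Or in the syslog, depending on where your distro sends kernel messages
    grep -i "out of memory" /var/log/messages
    grep -i oom /var/log/syslog

If you find lines like "Out of memory: Killed process 1234 (mysqld)" there, you know it is memory pressure and not a crash inside the daemon itself.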
OK, so I found out it is indeed a memory problem; the mysqld process apparently goes haywire and gets stuck in a loop eating memory. I finally observed the server directly in its anguish as both the 1GB of swap and the RAM got completely used up and the kernel killed mysqld processes to rescue itself. Guessing it's a MySQL bug (4.1.22) - have to check whether there are more updates for the 4.x series.
Set up a cron job to run every minute or two that runs top in batch mode and sends the output to a file.
Or use ps. That way you will have a record of which processes are doing what.
Also use sar, e.g. sar -qrwW.
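A minimal sketch of such a cron job (the script name and log paths here are just placeholders):

    # crontab entry: take a snapshot every 2 minutes
    */2 * * * * /usr/local/bin/snapshot.sh

    # /usr/local/bin/snapshot.sh
    #!/bin/sh
    # Append a timestamped snapshot of process and memory state to a log
    {
        date
        top -b -n 1 | head -20            # one batch-mode pass of top
        ps aux --sort=-%mem | head -10    # biggest memory consumers
        free                              # overall RAM/swap picture
        echo "----"
    } >> /var/log/proc-snapshot.log

When a process next gets killed, the last few snapshots will show what was eating the memory.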
Check whether any MySQL query is looping.
Enable the MySQL slow query log and check how long queries take to execute.
I didn't have the same problem, but some MySQL queries were making my server slow. On checking, I found that one query was taking 30-40 seconds to execute (it should take less than 1 second). On checking the table, I found there were millions of records, which was slowing things down.
Check whether any issues of that sort are present.
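In the 4.x and 5.0 series the slow query log is turned on from my.cnf, roughly like this (the 2-second threshold and log path are just examples; tune them to taste):

    # /etc/my.cnf
    [mysqld]
    log-slow-queries = /var/log/mysql/slow.log
    long_query_time  = 2              # log anything slower than 2 seconds
    log-queries-not-using-indexes     # also log unindexed full-table scans

Restart mysqld afterwards and watch the log file for the offending queries.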
As a patch, set up a cron job to run every minute or so to check for the presence of the mysql daemon. If it is gone, restart it.
Probably someone has a badly written query that is looping and sucking the life out of your system. This query runs as part of a daily batch update, which is why you only see it at night. Enable all logging and see if you can figure out which user is doing it, but in the meantime that cron job that restarts mysqld is your key to getting a full night of sleep.
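A minimal sketch of that watchdog, assuming an init script at /etc/init.d/mysql (adjust the path and service name for your distro):

    #!/bin/sh
    # /usr/local/bin/mysql-watchdog.sh (hypothetical name)
    # Run from cron every minute: * * * * * /usr/local/bin/mysql-watchdog.sh
    if ! pidof mysqld > /dev/null; then
        logger "mysql-watchdog: mysqld not running, restarting it"
        /etc/init.d/mysql restart
    fi

pidof returns non-zero when no mysqld process exists, so the restart only fires when the daemon has actually died.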
Hah, so it turns out the entire thing was because of wiki spam! I had a MediaWiki running on my server. The database was... ready for this... 955MB of data! But I did upgrade to MySQL 5.0.51 anyway, and now all seems to be good! Going to do the research now on how to lock down this wiki - specifically, only letting registered users post!
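For the lockdown part: MediaWiki can restrict editing and account creation with a couple of lines in LocalSettings.php, something along these lines (double-check against the manual for your MediaWiki version):

    // in LocalSettings.php
    $wgGroupPermissions['*']['edit'] = false;          // anonymous users cannot edit
    $wgGroupPermissions['*']['createaccount'] = false; // close open self-registration

With those set, only logged-in accounts you create can post, which should stop the spam bots.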