Brace yourselves for this one; I'm totally clueless.
At random, my server will stop responding (PHP errors out, sites don't load, MySQL refuses logins, etc.). It'll sit in that coma state for about 10-15 minutes, then resume working like nothing ever went wrong.
After that mess is over, "df -h" shows that 8-10GB of space has gone MIA. It's literally missing: the partition size doesn't change, and I can't locate anything on the partition consuming that space. No log shows anything going wrong at all, hence I'm left totally clueless.
I've checked for weird services, signs of being hacked, etc, but nothing comes up.
This has happened twice now, exactly 13 days apart, but at different times of day. If it happens a third time, the server will run out of space.
I don't know why it's happening, but the du command should help you find where the files are being written. For example, run du -sh /* to get the space usage of each top-level directory in your file system. If, say, there was unusually large usage under /var, you could run du -sh /var/* and keep drilling down like that until you find the files that were created.
Alternatively, if you know roughly when it happens, you could use the find command to look for files newer than a particular age.
The find command will do that - have a look at the -amin, -atime, -mmin and -mtime options. For example, the following will list the files under /tmp that were last accessed at least 100 minutes ago: