Quote:
Originally Posted by gacanepa
Hi everyone,
I talked to a support representative and the only thing he was able to come up with was that the database server seemed to be getting restarted because of insufficient memory:
Code:
root@centosvps [~]# free -m
             total       used       free     shared    buffers     cached
Mem:          1024        997         26          0          0        782
-/+ buffers/cache:        215        808
Swap:            0          0
And then of course, he offered me more RAM. It is not that expensive, but I want to make sure that there is something in this issue that would justify buying more resources, especially when mysql only uses around 4% RAM as shown by the top command.
However, I noticed that when running a simple query (note that this database has over 6 million records) the CPU usage of mysql went up to 99.7%! but the memory usage did not increase significantly.
I appreciate any comments. They will be more than welcome.
That diagnosis is based on a pretty common misconception about Linux. You look at the output above, see 26 MB of free memory, and think "WTF is going on with my system?" Without going into the gory details, the Linux kernel manages memory differently than Windows: it keeps previously used pages around as cache and only "cleans" them when another application actually needs that memory. Until then, memory the system has used at some point sits in an inactive/cached state and shows up as used.
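If you want to see that behavior for yourself on a test box, you can tell the kernel to throw away its clean caches and watch the "free" number jump back up. This is purely a demonstration, not a fix for anything, and it assumes a kernel new enough to have the drop_caches knob (2.6.16+):
Code:
sync                                  # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches     # 3 = drop pagecache + dentries/inodes
free -m                               # "free" goes up, "cached" goes down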
Using free -m on its own tends to give confusing output for exactly this reason.
Run cat /proc/meminfo to see what is actually going on with your memory.
Here is one of my prod servers, which runs with a bunch of memory. free -m shows little to no free memory:
Code:
free -m
             total       used       free     shared    buffers     cached
Mem:         19939      19834        105          0        666      15043
-/+ buffers/cache:       4125      15814
Swap:         9983          0       9983
So, to see what's really going on, I dig into the /proc directory:
Code:
cat /proc/meminfo
MemTotal: 20418256 kB
MemFree: 107732 kB
Buffers: 679172 kB
Cached: 15432136 kB
SwapCached: 0 kB
Active: 6396636 kB
Inactive: 12651936 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 20418256 kB
LowFree: 107732 kB
SwapTotal: 10223608 kB
SwapFree: 10223036 kB
Dirty: 33524 kB
Writeback: 4 kB
AnonPages: 2937276 kB
Mapped: 82772 kB
Slab: 1134120 kB
PageTables: 56380 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 20432736 kB
Committed_AS: 11044808 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 269708 kB
VmallocChunk: 34359468443 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
Reviewing the above output, I can see the following:
Code:
MemTotal: 20418256 kB
MemFree: 107732 kB
Inactive: 12651936 kB
This tells me I actually have 12759668 kB (MemFree + Inactive = 107732 kB + 12651936 kB) free of my 20418256 kB.
Another note: your cached memory, reflected on the second line of the free -m output (-/+ buffers/cache), can give you an idea of how much you actually have free, but it's not 100% accurate.
As you can see in my meminfo file, 15432136 kB of memory are cached, but 6396636 kB are still active, so only the 12651936 kB marked Inactive is available to be cleaned for use.
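If you want that estimate without doing the math by hand, a rough one-liner along the same lines (just summing MemFree and Inactive, as above) would be something like:
Code:
awk '/^MemFree:|^Inactive:/ {sum += $2} END {print sum " kB free + reclaimable"}' /proc/meminfo
On newer kernels (roughly 3.14 and later) /proc/meminfo also exposes a MemAvailable line, which is a more accurate version of the same estimate.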
Sounds like the support rep honestly didn't know what the problem was, and I feel for those folks, as I've done that job before working for a hosting company. At the end of the day, most of their support scopes don't include troubleshooting. They will say, "Oh, your MySQL is dying? Well, reprovision it and it won't happen again." Basically, it's the data and config you put on the system that cause it to crash; if you take their out-of-the-box image, I'm sure MySQL runs fine. Unless you pay them for actual server administration services, you won't get very far with support.
What does your /var/log/messages file report around the time the service dies?
What exists in /var/lib/mysql when the service dies, before you restart it?
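Also worth checking: if the kernel's OOM killer is what's taking mysqld down, it usually says so in the system log. On a stock CentOS box, something along these lines should turn it up (the MySQL error log path is a guess; it depends on your my.cnf):
Code:
grep -iE 'out of memory|oom-killer|killed process' /var/log/messages   # kernel OOM messages
tail -n 50 /var/lib/mysql/*.err                                        # MySQL's own error log, if it lives here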