Very Strange MySQL Problem...works fine on small box, terrible on big box
I have a MySQL 4.1 server running on a quad-Opteron machine running Red Hat Enterprise Linux 4 Update 4.
This MySQL instance runs at about 30-40% CPU usage. Its tables are no larger than 2 GB, and they are all MyISAM.
The CPUs in this box are 2.2 GHz Opterons, and there is 20 GB of RAM.
This box also serves as a web server, so we wanted to migrate MySQL off of it, onto a 16-core Opteron box with 32 GB of RAM and a fibre-channel-attached RAID array (running Gentoo with a vanilla 2.6.19 kernel, and it runs very well). That machine already hosts a MySQL 4.1 instance with several databases, and a MySQL 5.0.26 instance with tables upwards of 6 GB (a vBulletin site). Those databases have NO problem at ALL.
With both of those running (and that vBulletin site sees a LOT of traffic), the load average is 1-2, and CPU usage is no more than 10%. (It is 16 cores, after all.)
I created a new MySQL instance, config file, and init script, and moved our binary data files over (they were created using mysqlhotcopy).
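For reference, the snapshot was the kind of thing mysqlhotcopy produces; a rough sketch of that sort of invocation (the database name, credentials, and target path here are all hypothetical, not my actual ones):

```shell
# Hypothetical snapshot step: mysqlhotcopy flushes and locks the MyISAM
# tables, then copies the raw .frm/.MYI/.MYD files to the target directory.
DB=mydb
DEST=/backup/mydb
if command -v mysqlhotcopy >/dev/null 2>&1; then
    mysqlhotcopy --user=root --password="$MYSQL_PWD" "$DB" "$DEST"
else
    # Guard so the sketch degrades gracefully on hosts without the tool.
    echo "mysqlhotcopy not installed on this host" >&2
fi
```

The resulting files are what I rsync'd over and pointed the new instance's datadir at.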
I started the server (using even the same config file, MySQL-variable-wise), and CPU usage slams the server, going upwards of 1100%. The load average gets as high as 90.
I have tried:
clearing the Linux disk cache, then cat'ing the .MYI/.MYD files to /dev/null just to get them back into the cache
switching the tables to InnoDB
fine-tuning the mysql variables
several other things
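The cache-clearing/warming step above looked roughly like this (the function name and datadir path are my own placeholders; the drop_caches write needs root and a 2.6.16+ kernel):

```shell
# warm_cache: drop the page cache, then stream every MyISAM index (.MYI)
# and data (.MYD) file in a datadir back through it.
warm_cache() {
    datadir=$1
    # Flush dirty pages, then drop the clean page cache so the next
    # reads come from disk (root only; no-op if the knob isn't writable).
    sync
    if [ -w /proc/sys/vm/drop_caches ]; then
        echo 1 > /proc/sys/vm/drop_caches
    fi
    # Re-read the files so they land back in the OS cache.
    for f in "$datadir"/*.MYI "$datadir"/*.MYD; do
        [ -e "$f" ] && cat "$f" > /dev/null
    done
}

# Example: warm_cache /var/lib/mysql/mydb
```

None of it made a difference on the big box.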
Now, quite obviously, the 16-core machine is a much beefier server, and the NIC wasn't being over-utilized, yet the mysqld process was taking at the very least 400-500% CPU, with actual numbers around 900%. It was spiking the load average, making the other MySQL instances... slow.
I moved the EXACT SAME data files back to the quad-Opteron box (which rules out something strictly related to the files themselves), and it runs perfectly.
I don't even know where to BEGIN trying to figure out why it would perform so differently. Anyone have any ideas?