MySQL Cluster setup
Posted 05-04-2011 at 12:46 AM by EricTRA
Hi,
Recently I had to set up MySQL Cluster for testing purposes on RHEL 5.5 virtual machines. Below is the procedure I followed, which I'm sharing with you as a first blog post. It is of course just a guide, put together piece by piece from material found on the internet plus solutions for the bugs and problems I ran into during installation. The result is a working MySQL Cluster.
Since MySQL Cluster uses its own binaries, you need servers with no previous MySQL instance installed (or you have to remove it first).
Get the Cluster package mysql-cluster-gpl-7.1.9a-linux-x86_64-glibc23.tar.gz from one of the available mirrors at http://dev.mysql.com/downloads/mirror.php?id=397969 if it's not locally available yet.
Put the package in the root user's home directory, /root.
Extract using:
Code:
tar xvf mysql-cluster-gpl-7.1.9a-linux-x86_64-glibc23.tar.gz
Move the directory contents to /usr/local/mysql after creating it:
Code:
mkdir /usr/local/mysql
mv mysql-cluster-gpl-7.1.9a-linux-x86_64-glibc23/* /usr/local/mysql/
Create some extra directories:
Code:
mkdir /usr/local/mysql/{backup,mysql-cluster}
Edit the /etc/profile configuration file so the path gets added to PATH automatically, by editing/adding the following content:
Code:
# Path manipulation
if [ "$EUID" = "0" ]; then
    pathmunge /sbin
    pathmunge /usr/sbin
    pathmunge /usr/local/sbin
    pathmunge /usr/local/mysql/bin
fi
if [ "$EUID" != "0" ]; then
    pathmunge /usr/local/mysql/bin
fi
This ensures that you can execute the necessary mysql commands without having to type the complete path to the binaries.
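For reference, pathmunge is a helper function defined near the top of RHEL's /etc/profile. The sketch below is a simplified version (an assumption, not the exact RHEL code; behavior should be the same) that you can run to see how it prepends a directory to PATH exactly once:

```shell
# Simplified sketch of RHEL's pathmunge helper (assumed implementation)
pathmunge () {
    # add $1 to PATH only if it is not already there
    if ! echo "$PATH" | grep -E -q "(^|:)$1($|:)"; then
        if [ "$2" = "after" ]; then
            PATH="$PATH:$1"   # append
        else
            PATH="$1:$PATH"   # prepend (default)
        fi
    fi
}

PATH=/usr/bin:/bin
pathmunge /usr/local/mysql/bin
pathmunge /usr/local/mysql/bin   # second call is a no-op
echo "$PATH"                     # prints /usr/local/mysql/bin:/usr/bin:/bin
```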
Create the mysql group and user with the following commands:
Code:
groupadd mysql
useradd -g mysql mysql
Give the mysql user ownership of the relevant MySQL Cluster directories:
Code:
chown -R mysql:wheel /usr/local/mysql/*
Create the directory to hold the cluster configuration files:
Code:
mkdir /etc/mysql
Now set up the configuration file for the management node first. On the management server, create a file /etc/mysql/config.ini with the following content (change IP addresses/hostnames according to your network):
Code:
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=80M    # How much memory to allocate for data storage
IndexMemory=18M   # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the "world" database takes up
# only about 500KB, this should be more than enough for
# this example Cluster setup.

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]
datadir=/usr/local/mysql/mysql-cluster

[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (this system)
HostName=mysql-mngmt
nodeid=1

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=mysql-node1
DataDir=/usr/local/mysql/data
BackupDataDir=/usr/local/mysql/backup
DataMemory=512M
nodeid=2

[NDBD]
# IP address of the second storage node
HostName=mysql-node2
DataDir=/usr/local/mysql/data
BackupDataDir=/usr/local/mysql/backup
DataMemory=512M
nodeid=3

# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
Edit your /etc/hosts file on all management and node servers to hold the IP/hostname combinations. MySQL Cluster has a bug that doesn't let you use just IP addresses, and by default it doesn't accept a mixture of both, so it's better to stick with IP/hostname pairs in the hosts file:
Code:
192.168.0.200   mysql-mngmt.localdomain   mysql-mngmt
192.168.0.201   mysql-node1.localdomain   mysql-node1
192.168.0.202   mysql-node2.localdomain   mysql-node2
Check connectivity by pinging from every node to every other node.
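The connectivity check can be scripted if you prefer; this is just a convenience sketch (the hostnames are the ones from the example hosts file above and are assumptions for your network):

```shell
# Ping each cluster host once and report the result
check_hosts() {
    for host in "$@"; do
        if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
            echo "$host: ok"
        else
            echo "$host: FAILED"
        fi
    done
}

check_hosts mysql-mngmt mysql-node1 mysql-node2
```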
Next we can start the management daemon with the following command:
Code:
ndb_mgmd -f /etc/mysql/config.ini --configdir=/usr/local/mysql/mysql-cluster/
You can check if the process is running with:
Code:
ps ax | grep ndb
In order to automatically start the ndb_mgmd daemon at reboot, execute the following (note the shebang line, the chkconfig header, and the chmod that makes the script executable; chkconfig won't register it otherwise):
Code:
echo '#!/bin/sh' >/etc/init.d/ndb_mgmd
echo '# chkconfig: 2345 80 05' >>/etc/init.d/ndb_mgmd
echo '# description: MySQL Cluster Management Daemon' >>/etc/init.d/ndb_mgmd
echo 'ndb_mgmd -f /etc/mysql/config.ini --configdir=/usr/local/mysql/mysql-cluster/' >>/etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
chkconfig --add ndb_mgmd
chkconfig ndb_mgmd on
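If you prefer, the same init script can be written in one go with a here-document instead of repeated echo commands. The sketch below writes to /tmp purely for illustration; the real target is /etc/init.d/ndb_mgmd:

```shell
# Write the ndb_mgmd init script with a here-document
# (illustrative target path; use /etc/init.d/ndb_mgmd on the real system)
cat > /tmp/ndb_mgmd.init <<'EOF'
#!/bin/sh
# chkconfig: 2345 80 05
# description: MySQL Cluster Management Daemon
ndb_mgmd -f /etc/mysql/config.ini --configdir=/usr/local/mysql/mysql-cluster/
EOF
chmod 755 /tmp/ndb_mgmd.init
```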
Setting up the MySQL nodes
Copy the startup script mysql.server to /etc/init.d/ with the following command:
Code:
cp /usr/local/mysql/support-files/mysql.server /etc/init.d/
Make it start at boot with the following, but don't start it yet:
Code:
chkconfig --add mysql.server
Now edit the file /etc/my.cnf to show the following content (change IP/hostname according to your setup):
Code:
[client]
port=3306
socket=/tmp/mysql.sock

[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=mysql-mngmt
default-storage-engine=NDBCLUSTER
# These are for future tuning
#max_connections=341
#query_cache_size=16M
#thread_concurrency=4

[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=mysql-mngmt
Next, start ndbd on both nodes for the first time:
Code:
ndbd --initial
(Please note: we have to run ndbd --initial only when we start MySQL for the first time, or if /etc/mysql/config.ini on mysql-mngmt changes.)
On the management node mysql-mngmt you can check the connection status of the nodes:
Code:
ndb_mgm
At the ndb_mgm> prompt, type show, which will give you something like this:
Code:
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.201  (mysql-5.1.51 ndb-7.1.9, Nodegroup: 0, Master)
id=3    @192.168.0.202  (mysql-5.1.51 ndb-7.1.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.200  (mysql-5.1.51 ndb-7.1.9)

[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from any host)
id=5 (not connected, accepting connect from any host)
Now we need to install the default mysql database on both nodes; without it the MySQL server will not start. To do so, execute the following on both nodes:
Code:
cd /usr/local/mysql
sh scripts/mysql_install_db
chown -R mysql:wheel *
Next, start the MySQL server on both nodes:
Code:
/etc/init.d/mysql.server start
Last but not least, all databases you want or need to synchronize between the nodes have to be adapted to use the NDBCLUSTER engine. If you use only MyISAM or InnoDB, the databases will be created but none of their contents will be synchronized.
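Only tables on the NDBCLUSTER storage engine are synchronized between the nodes. A sketch of what that looks like in SQL, to run against a live cluster (the table and column names here are made up for illustration):

```sql
-- Create a new table directly on the cluster engine
CREATE TABLE city (
    id   INT NOT NULL PRIMARY KEY,
    name VARCHAR(64)
) ENGINE=NDBCLUSTER;

-- Or convert an existing MyISAM/InnoDB table
ALTER TABLE city ENGINE=NDBCLUSTER;
```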
Kind regards,
Eric