Linux - Enterprise: This forum is for all items relating to using Linux in the Enterprise.
I'm trying to configure a RHEL HA cluster with MySQL and GFS.
I have two virtual machines running CentOS 5.2, virt1.xen and virt2.xen.
They share an iSCSI device formatted with GFS2, which is where I want the MySQL database to live.
I installed the MySQL server on both virtual machines, but I can't configure the cluster service properly so that it becomes an HA MySQL cluster.
Please help; I can't find any proper howtos about this.
I get these errors:
clurgmgrd[4070]: Stopping service service:MySQL
clurgmgrd[4070]: stop on mysql "mysql" returned 1 (generic error)
clurgmgrd: [4070]: Checking Existence Of File /var/run/cluster/mysql/mysql:mysql.pid [mysql:mysql] > Failed - File Doesn't Exist
clurgmgrd: [4070]: Stopping Service mysql:mysql > Failed
clurgmgrd[4070]: Marking service:MySQL as 'disabled', but some resources may still be allocated!
clurgmgrd[4070]: Service service:MySQL is disabled
clurgmgrd[4070]: stop on clusterfs "GFS" returned 2 (invalid argument(s))
clurgmgrd: [4070]: stop: Could not match /dev/vgoo/iscsi with a real device
clurgmgrd[4070]: Marking service:MySQL as 'disabled', but some resources may still be allocated!
clurgmgrd[4070]: Service service:MySQL is disabled
clurgmgrd[4070]: Stopping service service:MySQL
clurgmgrd[4070]: stop on clusterfs "GFS" returned 2 (invalid argument(s))
# cat /etc/my.cnf
[mysqld]
datadir=/data
socket=/var/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
hostname = virt1.xen
192.168.122.24 virt2.xen
Please help. If there are any other files you need, please tell me.
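For reference, a service definition for this setup would normally live in /etc/cluster/cluster.conf. Below is a hypothetical sketch only: the resource and attribute names follow the stock RHCS 5 (rgmanager) resource agents, the device, mount point, and config path are taken from the post above, and the service IP 192.168.122.100 is an invented placeholder.

```xml
<!-- Hypothetical sketch of the rgmanager service stanza.
     Device, mountpoint, and config_file come from the post above;
     the IP address is a placeholder and must be adapted. -->
<service autostart="1" name="MySQL" recovery="relocate">
    <clusterfs device="/dev/vgoo/iscsi" fstype="gfs2"
               mountpoint="/data" name="GFS" force_unmount="0">
        <ip address="192.168.122.100" monitor_link="1"/>
        <mysql config_file="/etc/my.cnf" name="mysql"
               listen_address="192.168.122.100" shutdown_wait="10"/>
    </clusterfs>
</service>
```

Nesting the ip and mysql resources inside clusterfs makes rgmanager mount the GFS2 filesystem before starting mysqld and stop mysqld before unmounting it.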
Some things to be said, after roughly going over the information you sent:
1. Add both cluster node names and IPs to your /etc/hosts.
2. Simplify your cluster configuration. Why do you have that strange multicast configuration? Remove it or have a reason for it; it looks very odd.
3. Review AND UNDERSTAND your configuration. Ask yourself:
3.1 Why MySQL on GFS?
3.2 What exactly should GFS do?
3.3 Read more documentation.
4. Review the logs you sent. What do the error messages tell you? How about checking for /dev/vgoo/iscsi? Why is it there and not /dev/sda1? Did you really set up GFS on /dev/sda1?
5. Simplify your cluster.conf: remove all services except one, i.e. the clusterfs, and try to bring that up first.
6. Send more information, like the output of:
cman_tool services
cman_tool status
cman_tool nodes
clustat
group_tool dump
gfs_tool getsb /dev/sda1
That's it for a first pass.
Don't lose your head, and have fun.
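For point 4, the "Could not match /dev/vgoo/iscsi with a real device" error usually means the logical volume is simply not active on that node. A minimal check, assuming the device name from the error log and a clustered LVM setup (run it on both nodes):

```shell
# Does the device the clusterfs resource references exist on this node?
dev=/dev/vgoo/iscsi          # device name from the error log above
if [ -b "$dev" ]; then
    status=present
else
    # Likely causes on a cluster: clvmd is not running, or the VG is
    # not active on this node; "vgchange -ay vgoo" would activate it.
    status=missing
fi
echo "clusterfs device $dev: $status"
```

If the device is missing on one node only, check that clvmd is started there before rgmanager.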
Sorry if this is not the best way to discuss this, but this thread is one of the first Google results on the topic.
I think I want the same thing, but maybe via a different approach. I've managed to get a RHEL cluster running in active/passive mode: I've created a service IP that runs the mysql service, and I can move the service between nodes. I'm using GFS with clustered LVM. For now I have little data, so I haven't done big performance tests.
So, I'd like to use GFS and MySQL in active/active mode. The problem, as I understand MySQL clustering, is that the scripts for creating tables would have to be modified to change the storage engine. In my case I have to host a proprietary app whose database has about 100 tables, and doing that is not supported. Also, as I understand it, the MySQL cluster engine would replicate the data among nodes, and I can't predict the behaviour when, because of the shared storage (it's a SAN disk), the data is already there; the likely outcome is duplicate records or duplicate-key errors.
RHCS is searching for the pid file in
/var/run/mysql/mysql.pid
but your my.cnf configures the pid file as
/var/run/mysqld/mysqld.pid
Change your my.cnf to create a pid file named "mysql.pid" in
/var/run/mysql
Also increase the shutdown_wait time to 5 or 10 seconds.
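Following that advice, the relevant my.cnf lines would look roughly like this (a sketch only; the datadir and socket are copied from the my.cnf posted earlier, and the pid-file path must match whatever your RHCS mysql agent actually checks):

```ini
[mysqld]
datadir=/data
socket=/var/lib/mysql/mysql.sock
user=mysql
# Make mysqld write its pid where the cluster agent looks for it:
pid-file=/var/run/mysql/mysql.pid
```

The shutdown_wait value, by contrast, is set on the mysql resource in cluster.conf, not in my.cnf.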
Well, thanks for your help, but I forgot to mention something that could help:
1) The error message is "110913 16:20:03 InnoDB: Retrying to lock the first data file
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files."
Maybe your solution works with MyISAM tables.
2) The pid file is /var/lib/mysql/$HOSTNAME.pid, so the pid file should not conflict (/var/lib/mysql is a GFS filesystem)
If you still think it's possible and I've forgotten something to override the lock, please let me know.
Quote:
Originally Posted by omgs
Well, thanks for your help, but I forgot to mention something that could help:
1) The error message is "110913 16:20:03 InnoDB: Retrying to lock the first data file
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files."
If I understand you right, you are trying to access the MySQL data concurrently from two or more nodes?
If so, this is a very, very bad idea. A database cannot be used in active/active mode if the database itself does not support it. The database does extensive caching, and if those caches are not kept coherent between the nodes, your data will get corrupted, or at the very least you will get messages of the type you've seen.
If you want to do such things, the DB needs to be aware of active/active usage, like Oracle RAC for example. The cluster file system, on the other hand, only provides concurrent access and consistency at the file system level. The data itself has to be handled by the application (in your case MySQL).
As far as I know, MySQL (itself) only supports active/active configurations on a shared-nothing basis (no cluster file system required) and with databases kept in memory (but this information might be outdated). There are also master/master/slave configurations, but all of those are essentially shared-nothing configurations as well.
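The "error: 11" in the InnoDB message is EAGAIN: the second mysqld is refused the advisory lock on ibdata1 that the first one already holds. A minimal sketch reproducing the same conflict with flock(1) from util-linux, using a temporary file as a stand-in for ibdata1:

```shell
# Two opens of the same file get independent flock() locks, just as two
# mysqld instances do when they open the same ibdata1 on shared storage.
f="$(mktemp)"                           # stand-in for ibdata1
exec 9>"$f"
flock -n 9 && first=ok                  # first "mysqld": lock acquired
flock -n "$f" -c true || second=eagain  # second "mysqld": lock refused
echo "first=$first second=$second"      # → first=ok second=eagain
```

This is exactly why the lock is there: it protects you from the concurrent access that would otherwise corrupt the data files.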
Quote:
Originally Posted by omgs
Maybe your solution works with MyISAM tables.
2) The pid file is /var/lib/mysql/$HOSTNAME.pid, so the pid file should not be a conflict (/var/lib/mysql is a gfs filesystem)
If you still think it's possible and I've forgotten something to override the lock, please let me know.
I would really recommend reviewing your setup and asking whether you truly need MySQL in active/active mode.