Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I have a two-node cluster configured with DRBD 8.4.4, Corosync 2.3.3, and Pacemaker 1.1.10 on Ubuntu 14.04 LTS servers, managed with LCMC 1.7.2. Using LVM, I created a block storage device on each node.
DRBD is up and running and replicating between the two servers:
root@server1:/var/lib/mysql# cat /etc/drbd.d/r0.res
resource r0 {
    on server1 {
        volume 0 {
            device /dev/drbd0;
            disk /dev/AOS_VG1/lvol0;
            flexible-meta-disk internal;
        }
        address 10.0.6.61:7788;
    }
    on server2 {
        volume 0 {
            device /dev/drbd0;
            disk /dev/AOS_VG1/lvol0;
            flexible-meta-disk internal;
        }
        address 10.0.6.62:7788;
    }
}
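On a setup like this, the replication state is usually confirmed from /proc/drbd. As a minimal sketch (assuming the DRBD 8.4-style status line shown in the sample string, not output captured from this cluster), here is how that line can be parsed programmatically, e.g. for a monitoring check:

```python
import re

def parse_drbd_status(proc_drbd_text):
    """Extract connection state, roles, and disk states from a DRBD 8.4
    /proc/drbd status line such as:
      0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    Returns a dict, or None if no status line is found."""
    m = re.search(r"cs:(\S+) ro:(\S+)/(\S+) ds:(\S+)/(\S+)", proc_drbd_text)
    if not m:
        return None
    return {
        "connection": m.group(1),   # e.g. Connected, StandAlone, WFConnection
        "local_role": m.group(2),   # Primary or Secondary
        "peer_role": m.group(3),
        "local_disk": m.group(4),   # e.g. UpToDate, Inconsistent
        "peer_disk": m.group(5),
    }

# Example: a healthy replicated resource as seen from the primary node.
sample = " 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----"
status = parse_drbd_status(sample)
print(status["connection"], status["local_role"], status["peer_role"])
```

In practice you would feed it the contents of /proc/drbd (or `drbdadm status` on newer releases, which uses a different format).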
Pacemaker shows that all is well:
Online: [ server1 server2 ]
Master/Slave Set: ms_drbd_1 [res_drbd_1]
Masters: [ server1 ]
Slaves: [ server2 ]
res_Filesystem_1 (ocf::heartbeat:Filesystem): Started server1
res_IPaddr2_1 (ocf::heartbeat:IPaddr2): Started server1
res_mysql_1 (ocf::heartbeat:mysql): Started server1
res_anything_AOS (ocf::heartbeat:anything): Started server1
The issue is that when I fail over from server1 to server2, the user and group ownership of the mounted data is not mysql:mysql on server2. Because of this, MySQL does not start until I chown the files back to the correct owner.
Here is what I see for the mounted directory at /var/lib/mysql:
drwxr-xr-x 13 109 117 4096 Nov 4 14:46 data/
It should be
drwxr-xr-x 13 mysql mysql 4096 Nov 4 14:46 data/
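The numeric 109/117 in the first listing is how `ls -l` displays an owner when the UID or GID stored on disk has no matching entry in the local passwd/group database. A minimal sketch of that fallback, using Python's standard pwd module (the specific UIDs below are for illustration only):

```python
import pwd

def owner_display(uid):
    """Mimic how `ls -l` labels a file's owner: the user name if the UID
    exists in the local passwd database, otherwise the raw number."""
    try:
        return pwd.getpwuid(uid).pw_name
    except KeyError:
        return str(uid)

# UID 0 exists on every Linux system, so it resolves to a name:
print(owner_display(0))  # prints "root"

# A UID with no local passwd entry is shown numerically, which is
# exactly what a standby node does with the other node's mysql UID.
print(owner_display(123456))
```

The ownership on the DRBD device is stored as raw numbers, so both nodes see the same UID/GID; only the name-to-number mapping differs per node.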
There must be something obvious that I am missing in the configuration. Can someone point me in the right direction?
OK, thank you for pointing me in the right direction. It turned out that the mysql UID and GID were mismatched between the two servers, which caused the ownership to show up as the numeric values from the other node after a failover. I made them consistent on both servers and it is working now.
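For anyone hitting the same thing: the check boils down to comparing `id -u mysql` and `id -g mysql` across the nodes. A small sketch of that comparison (the hostnames and the 105/113 values on one node are hypothetical; 109/117 are the numbers from the listing above):

```python
def check_id_consistency(account, node_ids):
    """node_ids maps hostname -> (uid, gid) for the given account,
    e.g. gathered by running `id -u mysql; id -g mysql` on each node.
    Returns a one-line report flagging any mismatch."""
    distinct = set(node_ids.values())
    if len(distinct) == 1:
        return f"{account}: consistent across {', '.join(sorted(node_ids))}"
    details = "; ".join(
        f"{host}={uid}:{gid}" for host, (uid, gid) in sorted(node_ids.items())
    )
    return f"{account}: MISMATCH ({details})"

# The situation from this thread: different mysql IDs on the two nodes.
print(check_id_consistency("mysql", {"server1": (109, 117),
                                     "server2": (105, 113)}))
```

Once a mismatch is found, the usual remedy is what the OP did: align the IDs (e.g. with usermod -u and groupmod -g on one node) and then chown the data directory once so both nodes agree.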