Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-tos, this is the place!
To safely purge binary log files, follow this procedure:
On each slave server, use SHOW SLAVE STATUS to check which log file it is reading.
Obtain a listing of the binary log files on the master server with SHOW BINARY LOGS.
Determine the earliest log file among all the slaves. This is the target file. If all the slaves are up to date, this is the last log file on the list.
Make a backup of all the log files you are about to delete. (This step is optional, but always advisable.)
Purge all log files up to but not including the target file.
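The steps above can be sketched as a shell fragment. The log file names below are made-up placeholders, and the mysql/mysqldump invocations are shown only as comments, since they need a live server:

```shell
# Hypothetical values for illustration; on a real system they come from:
#   mysql -e "SHOW SLAVE STATUS\G"   on each slave (the log file it is reading)
#   mysql -e "SHOW BINARY LOGS"      on the master
slave1=mysql-bin.000142
slave2=mysql-bin.000137

# Step 3: the earliest file any slave still reads is the purge target.
# Binlog names sort lexicographically, so plain sort finds the earliest.
target=$(printf '%s\n' "$slave1" "$slave2" | sort | head -n 1)
echo "$target"

# Steps 4-5 (require a live server, shown as comments only):
#   mysqldump --all-databases > backup.sql
#   mysql -e "PURGE BINARY LOGS TO '$target'"
```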
These are points to consider before purging. Purging logs up to a given log file doesn't affect the database, but it's always safe practice to make a database dump first, just in case; you never know if Murphy is around. The PURGE command deletes the logs up to, but not including, the file you indicate, which then becomes the first file in the list.
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 15G 8.1G 6.0G 58% /
and
du -sh /var/log/mysql
4.6G    /var/log/mysql
I had taken a DB dump; it's around 2.7 GB, and I also moved it to external storage.
So I think that if I remove all files up to the current binary log, the used space on the system will be around 3.5 GB (even considering my DB size is around 2.7 GB). So I guess that removing the logs will lead to data loss. In the MySQL forums they haven't mentioned anything in detail.
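For what it's worth, the space arithmetic in the post above checks out (values taken from the df/du output quoted earlier), although freeing that space has nothing to do with table data:

```shell
# 8.1G currently used, 4.6G of that is binary logs; purging roughly
# brings used space down to their difference.
echo "8.1 4.6" | awk '{printf "%.1f\n", $1 - $2}'
```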
Purging the binary logs doesn't affect the data at all. Let me state that a bit more carefully: it shouldn't affect data or result in data loss at all, if executed correctly. The binary logs are, as stated by the site referenced above, files that contain information about data modifications:
Quote:
The binary log is a set of files that contain information about data modifications made by the MySQL server
Of course it's always good practice to have a current backup. So just go ahead with the purge, following the steps indicated, and check your database(s) afterwards.
It's always great to get confirmation. Glad it worked out for you. If you consider this question/problem solved then please mark it as such using the Thread Tools.
I'll be looking forward to your next question, as will the whole community.
One more query.
If I am adding one more slave to the existing master, is there any option to replicate the existing (old) data rather than using mysqldump? I mean, if I have a backup of all the binary logs on the master, can I set the slave to use an old MASTER_LOG_FILE and position? Does that make any sense?
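For reference, the syntax being asked about looks roughly like this (a sketch with placeholder host, file name, and position; it only works if the new slave's data is consistent with that exact position and none of the binlogs needed from that point on have been purged):

```sql
-- On the new slave; host, file name, and position are placeholders.
CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_LOG_FILE='mysql-bin.000101',
    MASTER_LOG_POS=4;
START SLAVE;
```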
Thanks,
Ajayan
Hi,
The best solution has already been pointed out by quanta.
I am getting stuck on one problem with MySQL fail-over. Currently my DB servers are set up on AWS (Amazon Web Services), with a master (DB 1) and a slave (DB 2) that is currently replicating. On master (DB 1) fail-over, the slave (DB 2) will be promoted to master, and a new slave (DB 3) will be created from the last snapshot of the original slave (DB 2).

Now the problem: if the master (DB 1) has 100 records, then obviously the slave (DB 2) also has 100 records. But the last snapshot, taken 10 minutes earlier, might have only 95 records. So when I create the new slave from the last snapshot, the new master (DB 2) holds 100 records but the new slave (DB 3) holds only 95. Per the MySQL docs, I have to dump data from the master to the slave to fill in the missing data; since my data volume is high, that is not going to be a practical solution. I also tried the mysqlbinlog utility, so that I could read the log position from the MySQL binary logs, compare it with the new slave's position, and then point the slave at that particular location. But that ended up in table duplication/corruption.
Can anybody suggest how to fill in the missing data on the new slave?
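For reference, the mysqlbinlog approach described above is roughly the following (placeholder file name and position; it requires a running mysqld on both ends, so it is shown only as a comment). Getting the start position exactly right is what makes it fragile: replaying events the slave already has is what produces the duplication.

```shell
# Replay everything after the snapshot's last applied position onto the
# new slave (placeholder file/position; needs live servers):
#   mysqlbinlog --start-position=4 /var/log/mysql/mysql-bin.000099 \
#       | mysql -h new-slave -u root -p
```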
Why would you only activate the second slave when the first one becomes master? In other words, you can have more than one slave per master. In that setup, if the master fails, slave #1 becomes master while slave #2 stays a slave and is always up to date. No need for snapshots, and no risk of losing data through the use of snapshots/dumps.
Eric,
The master is in region 1 (US East) and the slave in region 2 (US West). If region 1 fails, we have to ensure that the database is up in region 2. The client can't afford to set up two slave DBs; they only want to pay for an instance at the time of master failure. That's why I am looking at creating slave 2 from a snapshot. My concern is: will it be possible to fill in the missing data from the master's binary log (without duplication/corruption)?
Regards,
Ajayan
Hi,
If that's the case, then the only sure and certain option I see is what quanta pointed out: lock the tables, make a dump, and get that over to the other server. This will mean downtime, as I understand it, since the data volume is rather on the high side.
That is, if you want to stay with a master-slave setup. Another option is to configure the two nodes in a master-master configuration. I have that setup in a production environment here, and I can have one node go down for 10 days max and it will synchronize when restarted within that timeframe.
I don't know if that might interest your customer, since there's no extra hardware involved.
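A minimal sketch of the my.cnf settings such a master-master setup typically relies on (server IDs and offsets here are illustrative, not from the thread): each node needs a distinct server-id, and the auto-increment settings keep the two nodes from generating colliding keys while both accept writes.

```ini
# Node A (node B would use server-id = 2 and auto_increment_offset = 2)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1
```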