Thursday, March 15, 2012

Setting Up Master-Slave MySQL Replication on CentOS

I'm currently doing some research and testing into a MySQL Master-Slave setup on CentOS. It's a bit of a finicky setup, and I hit a couple of stumbling blocks which may or may not happen to you. Just in case, these are documented further down the post.

No sense beating about the bush, let's jump right in...

First of all, this original guide was most helpful in my quest:


Assumptions:
Base install of CentOS completed.
IP of MySQL Master: 192.168.1.10
IP of MySQL Slave: 192.168.1.11
Default MySQL data path: /var/lib/mysql
Replication user: slave_user

On both Master and Slave:

0. Log in as root or su to root


1. Install the MySQL packages
yum install mysql*

2. Configure to start on boot
chkconfig mysqld on

3. Start daemon and set relevant passwords
service mysqld start
# set the root password
mysqladmin -u root password 'newpasswordgoeshere'

On Master:

1. Modify my.cnf

a)  Make sure the following lines are commented out (or deleted) so that MySQL listens on the network:
#skip-networking
#bind-address = 127.0.0.1


b)    Designate a server ID and enable binary logging (the master must write a binary log, or there is nothing to replicate)
server-id = 1
log-bin = mysql-bin

c)    Set an expiration period for the binlogs
expire-logs-days = 7

d)    Restart MySQL
service mysqld restart
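Taken together, the relevant part of the master's /etc/my.cnf ends up looking something like this (a sketch only; log-bin is the line stock configs often lack, and binary logging must be on for SHOW MASTER STATUS to return anything):

```ini
# /etc/my.cnf on the master -- relevant lines only
[mysqld]
server-id        = 1
log-bin          = mysql-bin     # binary logging is required on the master
expire-logs-days = 7             # prune binlogs older than a week
# networking must stay enabled, i.e. these remain commented out:
# skip-networking
# bind-address = 127.0.0.1
```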


2. Create Slave user and check Master Status.
Log in to mysql as root and add a replication user account with the associated privileges

GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY 'slave_password';
-- for a production environment, please specify IPs or hostnames
-- (note '%' is the MySQL wildcard; '_' matches only a single character)
-- …TO 'slave_user'@'192.168.1.%' …

FLUSH PRIVILEGES;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

Expected output:

+---------------+----------+--------------+------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+---------------+----------+--------------+------------------+
| mysql-bin.001 |      500 |              |                  |
+---------------+----------+--------------+------------------+

!! Take note of the binlog file and position -- they are needed to start replication on the slave!

3. Back up the database using your preferred method, then transfer it to the slave and restore it.
In this instance, I used the innobackupex tool from Percona XtraBackup (see blog post)

a) On the Master, take the backup
innobackupex --defaults-file=/etc/my.cnf --user=***** --password=***** --databases=mydbname /path/to/backup/

b) SCP the backup files over the network to the slave
scp -r /path/to/backup/ user@server:/path/to/slave/destination**
**not the mysql data directory!!!



4. Unlock the tables.
Back in the mysql session on the master (note: the read lock from step 2 only lasts as long as the session that took it, so keep that session open until the backup is done), run:
UNLOCK TABLES;


On Slave:

1. Restore the backup

a) On the Slave, prep the backup
innobackupex --apply-log --user=****** --password=****** /path/to/slave/destination

b) Restore the backup on the slave
service mysqld stop
cp -r /path/to/slave/destination/* /var/lib/mysql/
cd /var/lib/mysql
chown -R mysql:mysql *
chmod -R 771 *
service mysqld start


2. Configure the Slave
a) Open /etc/my.cnf and add the following under the [mysqld] section
nano /etc/my.cnf

server-id=2
master-host=192.168.1.10
master-user=slave_user
master-password=slave_password
master-connect-retry=60
#Specify database to replicate
replicate-wild-do-table=mydbname.%

relay-log = /var/lib/mysql/mysql-relay-bin
relay-log-info-file = /var/lib/mysql/mysql-relay-log.info
relay-log-index = /var/lib/mysql/mysql-relay-bin.index
master-info-file = /var/lib/mysql/mysql-master.info

#more than likely this entry exists, however, it does not hurt to check
log-error = /var/log/mysqld.log

Save and exit.
(Note: the master-* options were removed from my.cnf in MySQL 5.5; on newer versions, leave them out and rely on the CHANGE MASTER TO statement in step 3.)

b) Restart MySQL
service mysqld restart

3. Start Replication on Slave
a) Log in to mysql and execute the following:

STOP SLAVE;
#For this next step, please confirm the binlog name and position from the SHOW MASTER STATUS command on the master, before the tables were unlocked!!
CHANGE MASTER TO MASTER_HOST='192.168.1.10', MASTER_USER='slave_user', MASTER_PASSWORD='slave_password', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.001', MASTER_LOG_POS=500;

START SLAVE;

b) To ensure that the slave is replicating, run the following command and check the output:

SHOW SLAVE STATUS\G
Expected output (snippet)
xxxxxxxxxx
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
xxxxxxxxxx
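If you'd rather script this check than eyeball it, here's a minimal sketch in shell. The canned heredoc stands in for real output; in practice you'd capture it with something like mysql -e 'SHOW SLAVE STATUS\G':

```shell
#!/bin/sh
# Pass/fail check on SHOW SLAVE STATUS output: both replication
# threads must report Yes. The heredoc below is canned demo output.
status=$(cat <<'EOF'
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
EOF
)
if echo "$status" | grep -q 'Slave_IO_Running: Yes' &&
   echo "$status" | grep -q 'Slave_SQL_Running: Yes'; then
  echo "replication OK"
else
  echo "replication BROKEN"
fi
```

Wired up to a cron job, the same two greps make a crude replication alarm.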

4. Troubleshooting?
If the status is anything other than what is listed above, exit the mysql CLI and check the mysql log for troubleshooting.
cat /var/log/mysqld.log
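Rather than reading the whole log, grepping for error and replication lines narrows things down quickly. A self-contained sketch (the heredoc fakes a log excerpt for the demo; point grep at /var/log/mysqld.log for the real thing):

```shell
#!/bin/sh
# Filter a MySQL log down to error/replication lines only.
# The sample log content below is made up for the demo.
log=$(mktemp)
cat > "$log" <<'EOF'
120315 10:00:01 [Note] Event Scheduler: Loaded 0 events
120315 10:00:01 [Note] Slave I/O thread: connected to master
120315 10:05:42 [ERROR] Error reading packet from server (server_errno=1236)
EOF
grep -iE 'error|slave' "$log" | tail -n 20   # prints the Slave and ERROR lines
rm -f "$log"
```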

5. Issues and Fixes (these may be specific to my test environment, so they may not apply to all scenarios)
I encountered an issue in my test where replication simply was not starting.
The output of SHOW SLAVE STATUS was as follows:
xxxxxxxxxx
Slave_IO_State:
Slave_IO_Running: No
Slave_SQL_Running: Yes
Seconds_Behind_Master: NULL
xxxxxxxxxx
Analysis of the mysqld log indicated:
Error 1236: Could not find first log file name in binary log index.
I followed the advice from this website and got it to work:
http://forums.mysql.com/read.php?26,9390,9390#msg-9390

STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
UNLOCK TABLES;
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.001';
CHANGE MASTER TO MASTER_LOG_POS=500;
START SLAVE;



--End.

That was long. You deserve a cup of coffee.

Coming soon - Monitoring the replication and checking for replication errors. I'm still in the research phase here, so I'll update as soon as I can.

-noveck

Thursday, March 1, 2012

Find specific files then move them using a list [Linux]

In a previous post I mentioned finding and deleting certain files from Linux.
http://noveckg.blogspot.com/2011/01/find-and-delete-certain-files-in-centos.html

This time around, in order to maintain an audit trail and keep recovery options open, a prudent approach is to move the files out of the directory and park them somewhere disk utilization is not critical.

In this example, I used these scripts to clean up my MoodleData directory of old backup zipfiles. It saved tons of time, since I didn't have to peruse over 8000 courses to clean this crap up by hand. This may or may not apply to all Moodle environments. Obviously, if you rely on this form of course backups, these scripts are not the solution for you. I'll reiterate what was said in that earlier post:


Basically, what these commands do is search a specified directory for a name/filetype and then move the matching files.
If you run them from the top-level directory (/), chances are something important may get moved and you can screw things up.

Command 1: Generate a list of backup zip files complete with filepath (datestamped for convenience). This can also be used as an index in case any files need to be replaced

find /path/to/moodledata/ -name \*backup\*.zip -exec ls -lh {} \; | awk '{ print $9}' > /path/to/course-backup-filelist-$(date +%Y-%m-%d).txt


Command 2: read the list line by line, and move each file to the alternative location (note the filename uses the same %Y-%m-%d datestamp that Command 1 generates)

cat /path/to/course-backup-filelist-2012-03-01.txt | while read line; do mv "$line" /path/to/other/location; done
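The two commands can be sanity-checked end to end on a scratch directory before pointing them at real data. A self-contained sketch with made-up filenames (incidentally, find prints paths by default, so the ls/awk round-trip is optional, and `read -r` plus quoting keeps odd paths intact):

```shell
#!/bin/sh
# End-to-end demo of list-then-move on a throwaway directory.
set -e
base=$(mktemp -d)
mkdir -p "$base/moodledata/course1" "$base/archive"
touch "$base/moodledata/course1/backup-course1-20110101.zip"
touch "$base/moodledata/course1/assignment.pdf"

# Command 1: build the datestamped index of backup zips.
list="$base/course-backup-filelist-$(date +%Y-%m-%d).txt"
find "$base/moodledata" -name '*backup*.zip' > "$list"

# Command 2: read the list line by line and move each file.
while read -r line; do
  mv "$line" "$base/archive/"
done < "$list"

ls "$base/archive"    # → backup-course1-20110101.zip
rm -rf "$base"
```

Only the zip is moved; the unrelated assignment.pdf stays put, and the list file doubles as the audit trail.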


I managed to recover close to 20GB of storage from mostly unneeded files.

\m/ (rock on, moodlers)

-noveck