Showing posts with label MySQL. Show all posts

Thursday, February 19, 2015

Installing MySQL 5.6 on CentOS 6

MySQL on CentOS 6 is currently limited to version 5.1, which sucks. A lot of modern web applications can benefit from running the most recent version of MySQL (up to 5.6 at the time of writing this post), so we will be installing MySQL 5.6 on CentOS 6.x, patched to the latest sub-version.

There are several ways of accomplishing this, but I think it's a good idea to take the software directly from the source.

It's also a pretty good idea to have a backup mechanism, and sadly the free version does not play well with hot backups, which are performed while the database is running in full read/write mode.

Percona has a wonderful set of tools, and offer a free hot backup solution for MySQL that works awesomely. I'll provide the instructions to install the tool, you can feel free to peruse their docs to implement your own backup strategy :)

More on Percona Xtrabackup here: http://www.percona.com/doc/percona-xtrabackup/2.2/

More on MySQL Community Server here: http://dev.mysql.com/downloads/mysql/


0. Got root/sudo?

1. Get and Install the RPM from MySQL/Oracle
cd /tmp
wget http://dev.mysql.com/get/mysql-community-release-el6-5.noarch.rpm
rpm -Uvh mysql-community-release-el6-5.noarch.rpm

2. Install MySQL and start / enable service
yum install mysql-community-server 
service mysqld start && chkconfig mysqld on

3. Secure MySQL server

/usr/bin/mysql_secure_installation

4. Install Percona Xtrabackup with repos
rpm -Uvh http://www.percona.com/downloads/XtraBackup/XtraBackup-2.2.9/binary/redhat/6/x86_64/percona-xtrabackup-2.2.9-5067.el6.x86_64.rpm

5. Verify successful install and check version
xtrabackup --version




--EOF

Thursday, September 26, 2013

Upgrading / Migrating a MySQL 5.0 Database to MySQL 5.5 [InnoDB]

This post is an ultra, no, make that uber-paranoid method of upgrading/migrating a relatively large (20+ GB on file) InnoDB database from MySQL version 5.0 to MySQL 5.5. Some might consider it overkill, but with a database of this size and maturity, I'd prefer not to take any unnecessary risks.

This assumes that the old server is running MySQL 5.0 on CentOS 5.x and that MySQL 5.5 is installed on a new server running CentOS 6, using the remi repositories. This is covered here.

Phase 1 - Prepare Data on the Old Server

1. Execute Database Check to ensure tables are clean
From terminal:
mysqlcheck -c mydbname -u root -p
<enter password when prompted>


2. Re-index tables before the dump
From mysql (single line!):
select concat('ALTER TABLE `', table_schema, '`.`', table_name, '` Engine=InnoDB;') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/InnoBatch.sql';

From shell:
mysql -u root -p --verbose < /tmp/InnoBatch.sql

3. Export the database as a dump file
From shell:
mysqldump -u root -p -e -c --verbose --default-character-set=utf8 --skip-set-charset --max_allowed_packet=100M --single-transaction --databases mydbname -r /root/Desktop/mydbdump.sql

4. Copy to new DB server
scp -r /root/Desktop/mydbdump.sql root@new.db.srv.ip:/root/Desktop/


Phase 2 - Import to New Server

1. Create empty database shell for import

From mysql:
create database mydbname character set utf8 collate utf8_unicode_ci;

2. Issue Grant permissions to new DB (I hope you have this documented, else you might need to dump/restore the mysql.user table to new DB)
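If not, recreating the application account looks something like the following (a sketch; the user, host, and privilege list here are placeholders, so match them to the grants on your old server):

From mysql:
grant select,insert,update,delete,create,drop,index,alter on mydbname.* to 'mydbuser'@'app.server.ip' identified by 'mydbuserpassword';
flush privileges;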

3. Import SQL file (but first set a really high global value for max_allowed_packet to handle the large data import)
From mysql:
set global max_allowed_packet = 1000000000;
source /root/Desktop/mydbdump.sql

4. Check mysql for transaction warnings
from mysql:
show warnings\G

5. Run upgrade script
From shell:
mysql_upgrade -u root -p --force

6. Rebuild InnoDB tables, which would force the InnoDB tables to upgrade
(source: http://www.mysqlperformanceblog.com/2010/05/14/mysql_upgrade-and-innodb-tables/ )

From mysql (single line!):
select concat('ALTER TABLE `', table_schema, '`.`', table_name, '` Engine=InnoDB;') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/InnoBatch.sql';
From shell:
mysql -u root -p --verbose < /tmp/InnoBatch.sql

7. Execute Database Check to ensure newly imported/upgraded tables are clean
From shell:
mysqlcheck -c mydbname -u root -p

Phase 3 - Compare old and new database
Checking data consistency to ensure all the data was transferred via an accurate record count.
http://dev.mysql.com/doc/refman/5.0/en/innodb-restrictions.html

1. On Old db server, generate query to perform record count on each table
From mysql (single line!):
select concat('SELECT "', table_name, '" as table_name, count(*) as exact_row_count from ', table_schema, '.', table_name, ' UNION') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/TableAnalysisQuery.sql';

From shell:
nano /tmp/TableAnalysisQuery.sql
and remove the LAST UNION from the end of the last line in the file.
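If you'd rather not edit by hand, a one-liner can do the same thing (a sketch, assuming GNU sed; the $ address restricts the substitution to the last line only):

sed -i '$ s/ UNION$//' /tmp/TableAnalysisQuery.sql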

2. Run the query to get table row count for all tables
From shell:
mysql -u root -p < /tmp/TableAnalysisQuery.sql > /root/Desktop/TableAnalysisResults-$(hostname).txt


3. On New db server, generate query to perform record count on each table
From mysql (single line!):
select concat('SELECT "', table_name, '" as table_name, count(*) as exact_row_count from ', table_schema, '.', table_name, ' UNION') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/TableAnalysisQuery.sql';

From shell:
nano /tmp/TableAnalysisQuery.sql
and remove the LAST UNION from the end of the last line in the file (or use the sed one-liner shown above).

4. Run the query to get table row count for all tables
From shell:
mysql -u root -p < /tmp/TableAnalysisQuery.sql > /root/Desktop/TableAnalysisResults-$(hostname).txt

5. Copy both text files to a third machine for comparison

On OLD db server, from shell:
scp -r /root/Desktop/TableAnalysisResults-myolddb.mydomain.com.txt root@third.machine.ip:/root/Desktop

On NEW db server, from shell:
scp -r /root/Desktop/TableAnalysisResults-mynewdb.mydomain.com.txt root@third.machine.ip:/root/Desktop

On the third server, from shell:
diff -a /root/Desktop/TableAnalysisResults-myolddb.mydomain.com.txt /root/Desktop/TableAnalysisResults-mynewdb.mydomain.com.txt
No output from the previous command means the data is consistent (as far as per-table row counts go) on both servers, and the new database can be made active / brought into production.

<EOF>

That's it!
-noveck

Monday, August 19, 2013

Installing MySQL 5.5 on CentOS 6.x

This post covers the installation of MySQL 5.5 on CentOS 6 (64bit)

By default, CentOS 6 ships with MySQL 5.1, but to take full advantage of the more recent versions, it is generally recommended to use 5.5 or 5.6 if possible, especially on a new server.

0. Login as root/ su

1. Open Terminal Interface

2. Go to Temporary Folder

cd /tmp

3. Get and Install EPEL Repo (this example uses the 64-bit rpm; get the 32-bit one if needed!)

wget http://fedora.mirror.nexicom.net/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm


4. Get and Install REMI Repo
wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
rpm -Uvh remi-release-6.rpm

5. Check MySQL version to be installed
yum --enablerepo=remi list mysql mysql-server

**If the following error is observed during this step, see below for resolution

ERROR: Cannot retrieve metalink for repository: epel. Please verify its path and try again.

FIX: Edit epel.repo and change all https references in the "mirrorlist" lines to http
cd /etc/yum.repos.d/
nano epel.repo
Find: mirrorlist=https://mirrors.fedorapro……
Change to: mirrorlist=http://mirrors.fedorapro…..
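Or do the same edit as a one-liner (a sketch, assuming GNU sed):

sed -i 's|^mirrorlist=https|mirrorlist=http|' /etc/yum.repos.d/epel.repo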


6. Install mysql 5.5
yum install --enablerepo=remi mysql mysql-server

7. Start MySQL and configure to start on boot
service mysqld start
chkconfig mysqld on


8. Run mysql upgrade script
mysql_upgrade -u root -p

9. Change Mysql default Password
/usr/bin/mysqladmin -u root password 'yourpasswordhere'

10. Check to ensure that the mysql is at the desired version
mysql --version

11. Set proper permissions on /tmp
chown -R root:root /tmp
chmod -R 1777 /tmp


12. Secure MySQL
Optional but recommended for production servers
See link for details: http://dev.mysql.com/doc/refman/5.5/en/mysql-secure-installation.html

/usr/bin/mysql_secure_installation

13. Restart mysql service
service mysqld restart

That's it!

-noveck



Tuesday, June 11, 2013

Reset MySQL root password on CentOS 5.x

I had one of those oh-crap moments and forgot the mysql root password in one of my development/test machines.
This is a reblog of someone else's post, in case it ever gets deleted. I must say it saved my bacon (or at the very least a couple hours of hair-pulling and a reinstall).

Credit to: http://gettechgo.wordpress.com/2012/05/10/how-to-reset-mysql-root-password-linux-o-s/

0. Login as root/su

1. Stop the MySQL service
 
service mysqld stop
 

2. Start MySQL Safe mode with skip grant tables option 
mysqld_safe --skip-grant-tables &
(press ctrl+z to exit, if required)

3. Start the MySQL service

service mysqld start

4. Log into the MySQL server without any password

mysql -u root mysql
 

5. Reset the password for ‘root’ user
UPDATE user SET password=PASSWORD('new-password') where user='root';

6. Flush privileges
 
flush privileges;

7. Restart the MySQL service

service mysqld restart
 

8. Log-in with the new password 
mysql -u root -p
<enter new password when prompted>


Cheers,
Noveck

Monday, March 11, 2013

InnoDB Restore Backup Procedure

My previous post elaborated on some scripts on backing up a MySQL Database with the InnoDB storage engine.

This post documents the restore procedure using the actual backup generated. Steps 7 and 8 are optional, but recommended. Better safe than sorry!

Assumptions:
· Target restore candidate server has Percona Xtrabackup installed.
· Target database server (restore candidate), has no active/production databases running.

Test Environment:
CentOS 5.x
MySQL 5.0 (distro version)
Percona Xtrabackup 2.0.3 installed as per this howto

Backup Directory Structure
/bkp
/bkp/Hourly
/bkp/Daily
/bkp/Monthly


0. Got root? 
 
1. Locate appropriate backup file
Assumption that it is compressed like the Daily or Monthly in my previous post.

cp -r /bkp/Daily/mydbnamexxxxxx.tgz /root/Desktop

2. Uncompress backup file

cd /root/Desktop
tar xzf mydbnamexxxxxx.tgz


3. Prepare the backup file 
cd /path/to/extracted/backup/ 
ls
(expected output should be ibdata1 file, folder containing database base and other xtrabackup named files)

innobackupex --apply-log /root/Desktop/*_dbxxxx/201xxxxx/

4. Stop the MYSQL Daemon and copy the files from the desktop to the mysql directory
service mysqld stop
rsync -rvt --exclude 'xtrabackup_checkpoints' --exclude 'xtrabackup_logfile' * /var/lib/mysql


5. Change ownership of the restored files and restart the MYSQL Daemon
chown -R mysql:mysql /var/lib/mysql
chmod -R 771 /var/lib/mysql
service mysqld start


6. Login to mysql and ensure the database has been restored via a table count
(It helps to have this documented in your production environment)
mysql -u root -p
<enter password when prompted>
show databases;
use mydbname; (use appropriate database name from list)
show tables;
Quit MYSQL:
\q
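If you know how many tables the database should contain, an exact count beats eyeballing the list (a sketch, run from the mysql prompt before quitting; substitute your own database name):

select count(*) from information_schema.tables where table_schema = 'mydbname';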

7: Execute a mysqlcheck with check flag to determine any data inconsistencies
mysqlcheck -c mydbname -u root -p
<enter password when prompted>

8: Execute a mysqlcheck with optimize flag to optimize restored table data
mysqlcheck -o mydbname -u root -p
<enter password when prompted>

--eof
-noveck

Monday, February 18, 2013

Monitoring and reporting on a remote MySQL database.

Depending on your environment, you won't grant every single server on your network the ability to send email. This sort of security precaution had me in a bit of a pickle, as I wanted to step up the mysqlchecks I currently have scripted via cron on my database servers. If your database server has the ability to send email, this solution may not be the best option for you.


I already had a Zabbix Server sitting around, cheerfully gathering data and pumping reports on my little server farm and I thought about giving it some more work to do, without any extra pay.  The zabbix server already had specific port access to these servers, so the security experts at my organization would not give me hell for this particular solution.


The logic was simple: execute scripted checks from a remote host and run some data processing; if everything is good, just log a timestamp of execution. If everything is not good, send an email for the sysadmin or DBA to action immediately. The nice thing about this script is that it lists the tables that are not flagged as OK by MySQL.

Simple stuff, really.


Assumptions:
Database Server is 192.168.1.10
Monitoring Server is 192.168.1.11
Database to be monitored = mydb
Location of user scripts = /scripts
SSMTP is installed on the monitoring server. Installation Instructions here.

Both servers should already see each other or be able to ssh to/from one another. If not, seek assistance from your company's network gurus (walk with chocolates).

0. Login as root on the Database server.
You'll need to create an account that the remote server will access over the network.

Login as mysql root.

mysql -u root -p
<enter password>

GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,INDEX,ALTER on mydb.* to mydbuser@'192.168.1.11' identified by 'mydbuserpassword';

flush privileges;

1. Test that the remote connection works
From terminal on the monitoring server
mysql -u mydbuser -h 192.168.1.10 -p
<enter password>
You should be presented with a mysql prompt.

2. Install script on the Monitoring server
Copy and paste the following into a remotedbcheck.sh file in your scripts directory. Ensure the executable flag is set

chmod +x remotedbcheck.sh

Actual Script
#!/bin/sh
# A Script to execute mysqlcheck -c from a remote server
# and send an email if any table are flagged as NOT OK.
# The tables with issues are sent in the body of the
# email for admin action.
# Written by: Noveck Gowandan
# 09-01-2013
# Version 1.1
# Added script timer which will be appended to new logfile in mylocation (see variables)

##Variables
    # Location of Results and Status Files (default: /tmp)
    mylocation=/tmp 
    # Database to check
    mydbname=mydb
    # Database user with appropriate mysql access
    mydbuser=mydbuser
    # DB pass
    mydbpass=mydbpassword
    # Specify remote database host (IP preferred, hostname if entries are in /etc/hosts)
    mydbhost=192.168.1.10

    # Email variables (Valid Recipent and Sender Email required)
    notifyemail=myemailaddress@myorg.com
    serveremail=monitoring.server@myorg.com

    # These variables can be left as is
    resultsfilename=mysqlcheck-c-results.txt
    statusfilename=mysqlcheck-c-status.txt
    status=0
   
# Start timer
time_start=`date +%s`

#Remove any resultsfile that exists
rm -rf $mylocation/$mydbhost-$resultsfilename

#Remove any statusfile that exists
rm -rf $mylocation/$mydbhost-$statusfilename

#Execute mysqlcheck with -c flag only
mysqlcheck -c $mydbname -u $mydbuser -p$mydbpass -h $mydbhost > $mylocation/$mydbhost-$resultsfilename

#Check results file and generate status file
grep -v OK $mylocation/$mydbhost-$resultsfilename | wc -l > $mylocation/$mydbhost-$statusfilename

#Grab any non-OK lines for the email body (this must happen after mysqlcheck has run)
msgtext=`grep -v OK $mylocation/$mydbhost-$resultsfilename`

#Check Status file, send email
status=`cat $mylocation/$mydbhost-$statusfilename`;

if test $status -ge 1
then

ssmtp $notifyemail << EOF
To: $notifyemail
From: $serveremail
Subject: PROBLEM: MYSQLCHECK $mydbhost

$msgtext
EOF

else

echo "Last Check OK!" >> $mylocation/$mydbhost-$mydbname-check.log
fi

# Stop timer and calculate total time
time_end=`date +%s`
total_time=`expr $(( $time_end - $time_start ))`

# Log output: datestamp and time takes to execute

date >> $mylocation/$mydbhost-$mydbname-check.log
echo "Execution Time was $total_time seconds." >> $mylocation/$mydbhost-$mydbname-check.log
echo "____________" >> $mylocation/$mydbhost-$mydbname-check.log

3. Add script to cron
Depending on your organization and criticality of the service, this may need to be done daily. Given the size of the database in my case and the length of time it takes to run, this example is weekly on Sundays at 2:01 AM.

nano /etc/crontab


Append to file
01 2 * * 0 root /bin/sh /scripts/remotedbcheck.sh


That's it!
The script will execute quietly and will not send any email if everything reports as OK. Be prepared if you do get an email from this script: you should act immediately!

Cheers,
-noveck

Wednesday, January 9, 2013

InnoDB Backup Scripts

Following up on my earlier post about converting the storage engine on a MySQL Database from MyISAM to InnoDB, I'd like to share the following scripts with backup rotation built in.
The next blog entry should be a step-by-step restore procedure using the actual backups below.

Test Environment:
CentOS 5.x
MySQL 5.0 (distro version)
Percona Xtrabackup 2.0.3 installed as per this howto

Backup Directory Structure
/bkp
/bkp/Hourly
/bkp/Daily
/bkp/Monthly

Script Output Log Directory
/tmp

Script Directory
/scripts

Disclaimer: I do not guarantee this is the BEST way of doing this, but it works for me. Copy and paste the following into a *.sh file in your scripts directory. Ensure the executable flag is set

chmod +x scriptname.sh

Actual Scripts
innodb_backup_monthly.sh

#!/bin/sh
# An InnoDB Backup Script to backup database Monthly
#
# Written by: Noveck Gowandan
# 02-10-2012
# Version 1.1
# Modified filename convention
# Uses tar-gzip to further compress final archive
# Added script timer and modified output to log time
# Start timer

time_start=`date +%s`

# Go to backup location and create Monthly folder with datestamp
cd /bkp/
mkdir M_db_$(date +%Y%m%d)

# Execute backup using innobackupex and send to folder created previously
innobackupex --defaults-file=/etc/my.cnf --user=****** --password=****** --databases=mydbname /bkp/M_db_$(date +%Y%m%d)

# Compress backup into a tarball
tar czf mydbname_$(date +%Y%m%d).tgz M_db*

# Backup rotation section
rm -rf M_db*
rm -rf /bkp/Monthly/$(date +"%B")
mkdir /bkp/Monthly/$(date +"%B")
mv mydbname* /bkp/Monthly/$(date +"%B")

# Stop timer and calculate total time
time_end=`date +%s`
total_time=`expr $(( $time_end - $time_start ))`

# Log output: datestamp and time takes to execute
echo "____________" >> /tmp/db_backup_monthly.log
date  >> /tmp/db_backup_monthly.log
echo "Execution Time was $total_time seconds." >> /tmp/db_backup_monthly.log
innodb_backup_daily.sh
#!/bin/sh
# An InnoDB Backup Script to backup database DAILY
#
# Written by: Noveck Gowandan
# 02-10-2012
# Version 1.1
# Modified filename convention
# Uses tar-gzip to further compress final archive
# Added script timer and modified output to log time
# Start timer
time_start=`date +%s`

# Go to backup location and create Daily folder with datestamp
cd /bkp/
mkdir D_db_$(date +%Y%m%d)

# Execute backup using innobackupex and send to folder created previously
innobackupex --defaults-file=/etc/my.cnf --user=****** --password=****** --databases=mydbname /bkp/D_db_$(date +%Y%m%d)

# Compress backup into a tarball
tar czf mydbname_$(date +%Y%m%d).tgz D_db*

# Backup rotation section
rm -rf D_db*
rm -rf /bkp/Daily/$(date +"%A")
mkdir /bkp/Daily/$(date +"%A")
mv mydbname* /bkp/Daily/$(date +"%A")

# Stop timer and calculate total time
time_end=`date +%s`
total_time=`expr $(( $time_end - $time_start ))`

# Log output: datestamp and time takes to execute
echo "____________" >> /tmp/db_backup_daily.log
date  >> /tmp/db_backup_daily.log
echo "Execution Time was $total_time seconds." >> /tmp/db_backup_daily.log


innodb_backup_hourly.sh
#!/bin/sh
# An InnoDB Backup Script to backup database HOURLY
#
# Written by: Noveck Gowandan
# 02-10-2012
# Version 1.1
# Modified filename convention
# Added script timer and modified output to log time
# Start timer
time_start=`date +%s`
# Go to backup location and create Hourly folder with datestamp
cd /bkp/
mkdir H_db_$(date +%Y%m%d)
# Execute backup using innobackupex and send to folder created previously
innobackupex --defaults-file=/etc/my.cnf --user=****** --password=****** --databases=mydbname /bkp/H_db_$(date +%Y%m%d)
# Backup rotation section
rm -rf /bkp/Hourly/$(date +"%H")
mkdir /bkp/Hourly/$(date +"%H")
mv H_db* /bkp/Hourly/$(date +"%H")
# Stop timer and calculate total time
time_end=`date +%s`
total_time=`expr $(( $time_end - $time_start ))`
# Log output: datestamp and time takes to execute
echo "____________" >> /tmp/db_backup_hourly.log
date  >> /tmp/db_backup_hourly.log
echo "Execution Time was $total_time seconds." >> /tmp/db_backup_hourly.log

Monday, August 6, 2012

Identifying and Repairing MySQL (MYISAM) Table fragmentation

I had to perform some server maintenance recently, so I decided to re-run the handy mysql tuner tool, as mentioned in my Database Tuning entry.

Lo and behold, a couple tables were identified as fragmented, even though I have a weekly optimization script that checks and repairs any fragmentation.

Just in case there was some sort of anomaly that would bite me in the butt, I decided to run a check to identify the fragmented tables and manually repair those specifically. I found this post by Sean Hull on the DatabaseJournal very helpful!

To quote the author:
"MySQL tables, including MyISAM and InnoDB, two of the most common types, experience fragmentation as data is inserted and deleted randomly. Fragmentation can leave large holes in your table, blocks which must be read when scanning the table. Optimizing your table can therefore make full table scans and range scans more efficient."


The process:

0. Login as root on the Mysql server
mysql -u root -p ************

1. Identify fragmented tables
Take note of the tables, or perhaps select into an outfile if you so desire. This may come in handy if you want to script or automate the identification and repair of the fragmented tables.

select table_schema, table_name, data_free, engine
from information_schema.tables where table_schema
not in ('information_schema', 'mysql') and data_free > 0;


2. Repair the table fragmentation
This could be scripted (see the sketch below), but it was manually executed as I only had about 4 fragmented tables.
optimize table tablename1;
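If you have more than a handful, the repair can be batch-generated the same way as the ALTER batches elsewhere on this blog (a sketch; assumes mysql can write to /tmp):

From mysql (single line!):
select concat('OPTIMIZE TABLE `', table_schema, '`.`', table_name, '`;') from information_schema.tables where table_schema not in ('information_schema', 'mysql') and data_free > 0 into outfile '/tmp/OptimizeBatch.sql';

From shell:
mysql -u root -p --verbose < /tmp/OptimizeBatch.sql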

3. Rerun Step 1 to ensure the fragmentation has been resolved.
 
-noveck

Monday, June 25, 2012

MYSQL ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (111)

So I was thrown this error recently, while setting up a testing mysql environment and attempting to access it from a remote host. Usually it's the simple things that are overlooked, so here's how to resolve the error:
ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (111)

The first thing to check is whether the user from the remote host is allowed.

1. Login as root on mysql server
mysql -u root -p

2. Select database and show users.
select * from mysql.user\G

**A vertical list will be displayed. Look at the first two fields for each entry. If the user is only allowed to connect from localhost, this may be the problem.
Example:

Host: localhost
User: mydbuser

A user will have to be defined with the same parameters as mydbuser for the remote host (or hosts)
Here's where your documentation will come in handy (or you can hope the old query exists in the mysql buffer!)

3. Allow remote hosts to connect
grant select,insert,update,delete,create,drop,index,alter on mydbname.* to mydbuser@'192.168.1.%' identified by 'mydbpassword' ;

flush privileges;

Note: if you only want to allow a certain host, specify the IP instead of the wildcard.


The second issue that may cause this error is the MySQL configuration.

1. Open MySQL config file
nano /etc/my.cnf

2. Ensure that the following are commented out.

#skip-external-locking
#skip-networking
#bind-address = xx.xx.xx.xx

Save and exit

3. Restart mysql service
service mysqld restart

The third issue that may contribute to this error is the security configuration rejecting incoming requests on the server.

1. Login as root on db server

2. Add rule to iptables
/sbin/iptables -A INPUT -i eth0 -s 192.168.1.0/24 -p tcp --destination-port 3306 -j ACCEPT

** this grants access to the entire subnet, use a specific IP where applicable.

service iptables save

3. Restart iptables service
service iptables restart


Test from remote host by using the following:
mysql -h 192.168.my.dbip -u mydbuser -p


-noveck



Monday, June 18, 2012

Moodle Issues: Backup and restore / course import fails

It was one of those strange situations that popped up after a disaster recovery scenario.

The database server suffered hardware failure, but that was quickly resolved and some tests performed on the database -
MYISAMCHK
and
MYSQLCHECK

Both tests indicated no issues.

Soon after I started getting reports of the course imports or (backup/restores) failing, and troubleshooting took a couple hours. There were a lot of leads on the moodle forums but this is the solution to my particular issue. I strongly recommend turning debugging on from the moodle interface, simply to get as much description on the error as possible.
Debugging revealed a cryptic message: Warning: fopen******* [function.fopen]: failed to open stream: No such file or directory in ******/backuplib.php on line ***, followed by a bunch of fwrite problems.

PROBLEM MESSAGE: An error occurred while backing up course start

Solution:

0. Login as admin to the moodle interface.


1. Open the mysqladmin tool  (this can also be done from the MYSQL Server)
2. Truncate the following tables:
  • mdl_backup_courses
  • mdl_backup_files
  • mdl_backup_ids
TRUNCATE TABLE tablename;

3. Run a mysqlcheck with the check and repair flags, just in case.

In my case, this solved the problem. The important point to note is that the debugging may lead you in the correct direction, so DO NOT USE THIS SOLUTION if you are not certain your problem is identical!


Cheers,
-noveck

Thursday, April 12, 2012

Moodle: PHP script to convert Mysql database from MYISAM to InnoDB

I wrote (or rather, modified) an existing script that batch converts the Moodle database from MyISAM to InnoDB. This saves a lot of time compared to writing a couple hundred ALTER TABLE statements, or even using the phpMyAdmin interface.
Credit for the original script:
http://stackoverflow.com/questions/3856435/how-to-convert-all-tables-from-myisam-to-innodb
 
Please test on a copy of the database before using. Do not run on a production database! 
I can't emphasize that enough.

It's been tested on both Moodle 1.9 and 2.0/2.2; see the script preamble for further details.

It's a potentially hazardous script, so I won't recommend placing it in any web-accessible folders (like /var/www/html/..). It's been made relatively simple to implement, and the instructions are below:

0. Login as root/su

1. Ensure that InnoDB is enabled on your server
mysql -u root -p *****
SHOW ENGINES;
Expected output
...
InnoDB | YES | Supports transactions...


2. Install script on your server
Copy/Paste the code at the bottom of the post into a new file named innodb_convert.php on your Moodle server where the database is located.
Please ensure that it is not in a web accessible folder.



3. Give the script execute permissions
chmod +x innodb_convert.php

4. Configuration
Open the file and navigate to the database configuration section (approx. line 48). Set the relevant variables.
$host = 'myhost'; //Specify host
$dbuser = 'myuser'; // Specify user with alter permissions 
$dbpass = 'mypass'; //user password
$mydb = 'mydb'; //specify schema or db to be modified
5. Save and exit.

6. From the command line, call the script using php
cd /tmp
php innodb_convert.php



7. Normal output expected:
ALTER TABLE mytable1 engine=InnoDB;
ALTER TABLE mytable2 engine=InnoDB;
ALTER TABLE mytablen engine=InnoDB;
Operation Completed

Confirm Storage Engine conversion using phpmyadmin
or from Mysql: show create table tablename.
Script Execution time: 999.99 seconds

The script (see preamble for additional information):

Download text file here.

<?php
// WARNING: PLEASE TEST BEFORE RUNNING ON A PRODUCTION DATABASE
// Tested on Moodle 1.9 and Moodle 2.0/2.2
//
// Disclaimer:
//
// This script was written to be executed from the command line in 
// Linux using a MYSQL database 
// and MYISAM storage engine. It converts the storage engine to 
// INNODB using the ALTER TABLE command.
//
// It assumes that the INNODB engine has been enabled. 
// This can be confirmed by logging into the mysql prompt 
// and running the command: 
// show engines;
//
// If it is disabled, check your my.cnf and 
// comment out the line: skip-innodb
//
// It does not check whether InnoDB is disabled, so the script would execute
// and attempt to convert the tables anyway.
// It contains secure information, so DO NOT PLACE IN WEB DIRECTORY
//
// Execute from /temp and delete afterwards
//
// This is a modification of a script located here:
// http://stackoverflow.com/questions/3856435/how-to-convert-all-tables-from-myisam-to-innodb 
//
// Version : 1.2
// Date  : 11 April 2012
// Diff  :  check for recordset before executing alter table
//
// Version : 1.1
// Date  : 31 January 2012
// Diff  :  Added Timer to script; Display user and DB being affected
//
//
// Version :  1.0 
// Date  : 25 January 2012
//
// 
//
// Noveck Gowandan http://noveckg.blogspot.com

//Time Calculation - start count

$time = explode(' ', microtime());
$time = $time[1] + $time[0]; // seconds + microseconds
$begintime = $time; //define begin time

//Your database connection items here
 
$host = 'myhost'; //Specify host
$dbuser = 'myuser'; // Specify user with alter permissions 
$dbpass = 'mypass'; //user password
$mydb = 'mydb'; //specify schema or db to be modified


//connect to database using variables above
$link = mysql_connect($host,$dbuser,$dbpass);
$db = mysql_select_db($mydb);
   
if (!$link) {
die('Could not connect: ' . mysql_error());

}
echo "Connected Successfully to: $host." . "\n\n";
echo "Using database: $mydb." . "\n\n"; 
echo "Running script as $dbuser." . "\n\n";


//show tables in database
$sql = 'SHOW TABLES';
$rs = mysql_query($sql);

echo $sql;
echo "\n";

if (!$rs) {
die('SQL Recordset Error: ' . mysql_error());

}
else {
//loop through tables and convert to InnoDB
while($row = mysql_fetch_array($rs))
{
$tbl = $row[0];
$sql = "ALTER TABLE $tbl engine=InnoDB;";
mysql_query($sql);

echo $sql;
echo "\n";
}

echo 'Operation Completed.' . "\n\n";
echo 'Confirm Storage Engine conversion using phpmyadmin ' . "\n" . 'or from mysql: show create table tblname.' . "\n";

}
//close connection
mysql_close($link);

$time = explode(" ", microtime());
$time = $time[1] + $time[0]; 
$endtime = $time; //define end time
$totaltime = ($endtime - $begintime);
echo "Script Execution Time: $totaltime" . " seconds." . "\n\n";


?>

Thursday, March 15, 2012

Setting Up Master-Slave MySQL Replication on CentOS

I'm currently doing some research and testing into a MySQL Master-Slave setup on CentOS. It's a bit of a finicky setup and I had a couple stumbling blocks which may or may not happen to you. Just in case, these are documented further down the post.

No sense beating about the bush, let's jump right in...

First of all, this original guide was most helpful in my quest:


 Assumptions:
Base install of Centos completed.
IP of Mysql Master: 192.168.1.10
IP of Mysql Slave: 192.168.1.11
Default Mysql Path: /var/lib/mysql
Replication User: slave_user

On both Master and Slave:

0. Login as root or su


1. Install mysql packages
yum install mysql*

2. Configure to start on boot
chkconfig mysqld on

3. Start daemon and set relevant passwords
service mysqld start
--set root pw
mysqladmin -u root password "newpasswordgoeshere"

On Master:

1. Modify my.cnf

a)  remove comments or delete lines to enable networking and bind
#skip-networking
#bind-address = 127.0.0.1


b)    Designate server id
server-id = 1

c)    set an expiration date for the binlog
expire-logs-days=7

d)    Restart Mysql
service mysqld restart


2. Create Slave user and check Master Status.
Login to mysql as root and add a replication user account with associated privileges

GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY 'slave_password';
--for a production environment, please specify IPs or hostnames.
-- …TO 'slave_user'@'192.168.1.%' …

FLUSH PRIVILEGES;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

Expected output:
File          | Position | Binlog_Do_DB | Binlog_Ignore_DB
mysql-bin.001 | 500      |              |
!!Take note of the bin file and position, it is needed to start replication on the slave!

3. Backup the database using your preferred method, transfer to the slave, and restore.
In this instance, I used the innobackupex tool from Percona XtraBackup (see blog post)

a) On the Master, Take the backup
innobackupex --defaults-file=/etc/my.cnf --user=***** --password=***** --databases=mydbname /path/to/backup/

b) SCP the backup files over the network to the slave
scp -r /path/to/backup/ user@server:/path/to/slave/destination**
**not the mysql directory!!!



4. Unlock the tables.
Login to Mysql
UNLOCK TABLES;


On Slave:

1. Restore the backup

a) On the Slave, Prep the backup
innobackupex --apply-log --user=****** --password=****** /path/to/slave/destination

b) Restore the backup on slave
service mysqld stop
cp -r /path/to/slave/destination /var/lib/mysql
chown -R mysql:mysql *
chmod -R 771 *
service mysqld start


2. Configure Slave
a) open /etc/my.cnf and add under section [mysqld]
nano /etc/my.cnf
server-id=2
master-host=192.168.1.10
master-user=slave_user
master-password=slave_password
master-connect-retry=60
#Specify database to replicate
replicate-wild-do-table=mydb.%

relay-log = /var/lib/mysql/mysql-relay-bin
relay-log-info-file = /var/lib/mysql/mysql-relay-log.info
relay-log-index = /var/lib/mysql/mysql-relay-bin.index
master-info-file = /var/lib/mysql/mysql-master.info


#more than likely this entry exists, however, it does not hurt to check
log-error = /var/log/mysqld.log
Save and exit.

b) Restart Mysql
service mysqld restart

3. Start Replication on Slave
a) Login to Mysqld and execute the following:

STOP SLAVE;
#For this next step, please confirm the binlog name and position from the SHOW MASTER STATUS command on the master, before the tables were unlocked!!
CHANGE MASTER TO MASTER_HOST='192.168.1.10', MASTER_USER='slave_user', MASTER_PASSWORD='slave-password', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.001', MASTER_LOG_POS=500;

START SLAVE;

b) To ensure that the slave is replicating, run the following command and take note of the following output:

SHOW SLAVE STATUS\G
Expected output (snippet)
xxxxxxxxxx
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
xxxxxxxxxx

4. Troubleshooting?
If the status is anything other than what is listed above, exit the mysql CLI and check the mysql log for troubleshooting.
cat /var/log/mysqld.log

5. Optional Issues and Fixes (may be specific to my test environment, so may not apply to all scenarios )
I encountered an issue in my test where the replication simply was not starting.
The output of the show slave status was as follows:
xxxxxxxxxx
Slave_IO_State:
Slave_IO_Running: No
Slave_SQL_Running: Yes
Seconds_Behind_Master: NULL
xxxxxxxxxx
Analysis of the mysqld log indicated:
Error 1236: Could not find first log file name in binary log index.
I followed the advice from this website and got it to work:
http://forums.mysql.com/read.php?26,9390,9390#msg-9390

STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
UNLOCK TABLES;
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.001';
CHANGE MASTER TO MASTER_LOG_POS=500;
START SLAVE;



--End.

That was long. You deserve a cup of coffee.

Coming soon - Monitoring the replication and checking for replication errors. I'm still in the research phase here, so I'll update as soon as I can.
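In the meantime, a bare-bones check from cron is possible (a sketch; assumes a monitoring account with REPLICATION CLIENT privileges, and the user/password shown are placeholders):

mysql -u monitor -pmonitorpass -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'

Anything other than two Yes values and a low Seconds_Behind_Master means it's time to look at /var/log/mysqld.log.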

-noveck

Monday, February 6, 2012

Testing Percona Xtrabackup (innobackupex)

My last post dealt with the installation of the Percona XtraBackup tool for use on a MySQL InnoDB database.
After some tweaking and help from the Percona docs, the following is a test backup, prep, and restore of the full database (not incremental).

This assumes a test environment, so please don't try this on a running production environment. Before the database can be restored, the existing one will have to be renamed or moved. The options for this are straightforward. Stop the mysql daemon and make a binary copy of the files, or rename the source folder.
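For example, moving the existing data directory aside first (a sketch, assuming the default datadir):

service mysqld stop
mv /var/lib/mysql /var/lib/mysql.old
mkdir /var/lib/mysql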

0. Login as root/su

1. Prerequisite configuration items

Configure datadir in my.cnf and restart the mysqld service
nano /etc/my.cnf
 --find section
[mysqld]
--add line
datadir=/var/lib/mysql

service mysqld restart

2. Run full backup of specific database
innobackupex --defaults-file=/etc/my.cnf --user=***** --password=***** --databases=mydbname /path/to/backup/

3. Prepare backup files
innobackupex --apply-log --user=****** --password=****** /path/to/backup

4. Restore backup
service mysqld stop
cp -r /path/to/backup /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
chmod -R 771 /var/lib/mysql
service mysqld start

-n

Wednesday, February 1, 2012

Installing Percona xtrabackup on CentOS 5.x

This covers the basic installation of the xtrabackup package on CentOS. This is necessary in order to take hot backups of a database using the InnoDB storage engine. By default, Mysql has no options to take a full hot backup of a live InnoDB database.

For more information,  please see Percona's website located here: http://www.percona.com/doc/percona-xtrabackup/

0. Login as root

1. Get rpm from Percona
(check your server version before installing; there are specific RPMs for 32-bit and 64-bit)
see here for rpm list: http://www.percona.com/downloads/percona-release/
cd /tmp
wget http://www.percona.com/downloads/percona-release/percona-release-0.0-1.i386.rpm

2. Install rpm (32-bit on this test machine)
rpm -Uvh percona-release-0.0-1.i386.rpm

3. Check repo for package
yum list | grep percona


4. Install xtrabackup
yum install xtrabackup --nogpgcheck

I still have testing to do on actual usage, so I'll create a new post on tests of backups and restoring databases.

-n

Wednesday, January 11, 2012

Installation of Mysql 5.5 on CentOS 5.x

This particular sub-entry focuses on installing MySQL 5.5 on CentOS 5.7.
For reference, CentOS ships with MySQL 5.0.77 by default at the time of this entry (January 2012). This entry does not involve any migration and assumes a new server is being deployed. Nonstandard repos are required (remi and EPEL).

0. Login as root

1. Remove previously installed MySQL (if applicable)
yum erase mysql

2. Install additional repos

Navigate to temporary directory
cd /tmp

Install EPEL repo
wget http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm

UPDATE 11 MAY 2012: The above URL is sporadic. Please use this alternative if the original does not work. - http://download.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm

rpm -Uvh epel-release-5-4.noarch.rpm

Install remi repo
wget http://rpms.famillecollet.com/enterprise/remi-release-5.rpm
rpm -Uvh remi-release-5.rpm


3. Ensure the repo contains MySQL 5.5
yum --enablerepo=remi list mysql mysql-server

4. Install MySQL 5.5
yum install --enablerepo=remi mysql*

5. Start MySQL and configure to start on boot
service mysqld start
chkconfig mysqld on 


6. Run mysql upgrade script
mysql_upgrade -u root -p

7. Change Mysql default Password 
/usr/bin/mysqladmin -u root password 'yourpasswordhere'

8. Check to ensure mysql 5.5 is installed.
mysql --version


That's it! MySQL 5.5 should now be installed. Don't forget to tune your server:
http://noveckg.blogspot.com/2011/04/some-mysql-database-tuning-from-non-dba.html

-noveck

Monday, November 7, 2011

Moodle Apache/DB Connection Errors!

Moodle's config.php includes a setting to email connection errors to an email address:

$CFG->emailconnectionerrorsto = 'myemail@org.com';

I've noticed some random errors popping up in my inbox, at the same time my stats script is configured to run.
Something along the lines of:

WARNING: Database connection error: http://mysitemoodle.com

I found this especially curious, since my Zabbix server was not reporting any errors. So I dug a little deeper and ran a couple of the MySQL reporting tools I use.

Turns out I was maxing out my max_connections setting.
My old value was 385, and I had originally increased it to 500 with the deployment of the LB/HA Moodle environment (earlier post).

I had to do some more tweaking of the MySQL configuration file, /etc/my.cnf.


The default wait_timeout is very long: 14400 sec.
In my original deployment in May this year, I reduced this to 3600 sec.
With these errors randomly cropping up during the statsrun, I had to tweak it yet again.

I reduced the key_buffer_size to increase the max_connections to 550. This was necessary to ensure that the server does not over-allocate RAM. I also reduced the wait_timeout to 1800 sec, which is fine considering the hardware my server uses. (2 Quad core proc, 32GB RAM)
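For reference, the relevant my.cnf section ended up looking something like this (a sketch built from the values above; the key_buffer line is deliberately left vague, size it against your own RAM):

[mysqld]
max_connections = 550
wait_timeout = 1800
# key_buffer_size reduced so the server does not over-allocate RAM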


A quick save and restart of the mysql service was all it took.
96+ hours in, and I'm still actively monitoring; no more connection errors, and we're down to a max of about 300 connections at peak.



 -n

Friday, October 28, 2011

Useful Queries for Moodle

I've come to compile a list of queries for Moodle, aggregated from various sources on the internet. Credit to the original authors, whoever they may be (sorry for not keeping track).

# Top 20 courses in terms of student enrolment for a specific year or term (assuming year code is built in to courses)
SELECT DISTINCT c.fullname, c.shortname, count( usr.id )
FROM mdl_course c
INNER JOIN mdl_context cx ON c.id = cx.instanceid
AND cx.contextlevel = '50'
INNER JOIN mdl_role_assignments ra ON cx.id = ra.contextid
INNER JOIN mdl_role r ON ra.roleid = r.id
INNER JOIN mdl_user usr ON ra.userid = usr.id
WHERE r.name = 'Student' and c.idnumber like '%\_2010%'
GROUP BY c.fullname
ORDER BY count( usr.id ) DESC
LIMIT 0 , 20


# List of lecturers and how many courses they teach
SELECT user.firstname, user.lastname, user.email, count( role.id ) AS courses
FROM mdl_course course
JOIN mdl_context context ON ( course.id = context.instanceid
AND context.contextlevel =50 )
JOIN mdl_role_assignments assign ON ( context.id = assign.contextid )
JOIN mdl_role role ON ( assign.roleid = role.id )
JOIN mdl_user user ON ( assign.userid = user.id )
WHERE role.shortname = 'editingteacher'
GROUP BY user.firstname, user.lastname, user.email
ORDER BY user.lastname, user.firstname


# Courses with MMS streaming media for a specific year or term (assuming year code is built in to courses)
SELECT DISTINCT c.shortname
FROM mdl_resource r, mdl_course c
WHERE r.reference LIKE '%mms://%'
AND c.shortname LIKE '%\_2010%'
AND r.course = c.id

# List of courses and number of activities for a specific year or term (assuming year code is built in to courses), sorted by total num_activities
SELECT m.id, m.`shortname` , m.fullname, cCount.totalcount
FROM mdl_course m
LEFT JOIN (
SELECT courseCount.course, sum( courseCount.subcount ) AS totalcount
FROM (
SELECT course, count( * ) AS subcount
FROM mdl_resource
GROUP BY course
UNION ALL SELECT course, count( * ) AS subcount
FROM mdl_quiz
GROUP BY course
UNION ALL SELECT course, count( * ) AS subcount
FROM mdl_assignment
GROUP BY course
UNION ALL SELECT course, count( * ) AS subcount
FROM mdl_survey
GROUP BY course
UNION ALL SELECT course, count( * ) AS subcount
FROM mdl_label
GROUP BY course
UNION ALL SELECT course, count( * ) AS subcount
FROM mdl_glossary
GROUP BY course
UNION ALL SELECT course, count( * ) AS subcount
FROM mdl_wiki
GROUP BY course) AS courseCount
GROUP BY courseCount.course) AS cCount ON cCount.course = m.id
WHERE m.shortname LIKE '%\_2010%'
ORDER BY cCount.totalcount ASC

Friday, July 22, 2011

Build your own updated Moodle Schema

This particular post is to assist in getting a Moodle DB schema specific to your database. It is especially handy if there are non-standard plugins installed on Moodle.

As mentioned in an earlier post, getting a relevant ERD on Moodle.org is almost herculean. This tool will reverse engineer the database into an ERD.

The application is Fabforce4 located at http://www.fabforce.net/dbdesigner4/

Installation on Windows is straightforward, simply download, install and configure to point to your database.

Installation on CentOS Linux was not as smooth. The application refused to launch, so I decided to run it under Wine (which I had to install here).

I'll update with a sample schema of my test database as soon as the operation completes.

-n

Friday, July 8, 2011

Moodle Issues after an upgrade! [Resolved!]

Part of my project in migrating to a load-balanced virtualized infrastructure involved running a minor version upgrade from 1.9.7 to 1.9.11, which addressed several bugs in our core Moodle usage. The project plan included checking the database for corruption before and after, as well as re-indexing, using mysqlcheck and myisamchk respectively; no errors were detected, either in the test environment or during the upgrade in production. Basic checks were done post-upgrade to ensure core Moodle functionality. To my surprise (read: shock, horror, pain), two weeks after the upgrade, two major issues were reported by several lecturers:
    1. Quiz attempts were not being saved, where the quiz spanned multiple pages.
    2. Assignments (Advanced Uploading of Files) could not be submitted.

Tracking down this issue took some time, and since so many variables had changed from the original environment and so much time had elapsed since the upgrade, reverting was not an option.
The order of troubleshooting included:
  •     Determining if the course/activity settings were at fault.
  •     Turning on Debugging to check for errors upon activity submits.
  •     Determining if my (tested) load balancer environment was causing the issue.
  •     Checking the new webserver configuration
  •     Checking the database configuration
  •     Checking and rechecking and attempting to repair the database for corruption using variations of switches on the respective tools (mysqlcheck and myisamchk)
  •     Checking if caching was causing the issue (disabling memcached)
I then decided to try two different tests:
  •     Parallel installation of moodle on the same new environment/hardware.
  •     Try the current database on different hardware
The first test required using a copy of the existing moodle code, but on a fresh database. To my surprise (read: temporary relief), the environment was fine. The empty database with a test course worked perfectly. The next logical step assumed that the database might be the issue, as the previous test used the same moodle code, effectively eliminating it as the root cause.
The second test confirmed this. I migrated the same Moodle code plus a copy of the database to a clean server, and the problem was replicated. At this point the epic horror of the possibilities ahead dawned on me, as visions of rebuilding from scratch haunted me.

At this point, only two variables needed investigating:
  •     Database Indexes
  •     Database Data/Structure
In my test environment (where the problem was replicated), I attempted some last-ditch attempts to recover the environment. First, I stripped all the indexes from the database and used the myisamchk tool to rebuild them.
I took a backup of the database (without indexes) using the following code:
mysqlhotcopy -u root -p yourpassword --addtodest --noindices db_name /path/to/backup/destination/
I then used the MYSQL restore method in my earlier post to restore the database to the test environment.

Since the indexes had to be rebuilt using myisamchk, it required the mysql service to be stopped. The indexes were rebuilt using:
myisamchk -rq /var/lib/mysql/db_name/*.MYI

I then ran the mysqlcheck command as in step 7 of my earlier post.

Again, no errors/issues were reported. I held my breath as I started the test quiz and assignment upload... and it failed again. That meant the data was royally screwed (...and that's putting it nicely).
I then decided I was going to take a look at the database structure and perhaps even delve into the data itself and try to see where the issues were.

I took mysqldumps of the structure of the working database and the non-working database and compared them line by line in Notepad++ (a tedious task, but some real team effort came into play here), but came to the conclusion that the structure was fundamentally the same. Therefore, by the process of elimination, the root cause was the actual tables/data.
The structure was analyzed using the following:
mysqldump --no-data --host localhost --user=root -p db_name > /path/to/report.sql

On to the analysis of the data. Analysis of the tables from the phpmyadmin interface showed no anomalies. Moodle ships with a database report in the base code which contains a bit more useful data and can be accessed via:
http://yourmoodleurl.com/admin/dbperformance.php

I ran the script, used the view tables link, and started scouring the tables one by one, searching for anomalies. Interestingly enough, I stumbled upon two very strange issues.
The auto_increment fields for two tables were HUGE, compared to the number of rows the tables contained:
  1. mdl_question_sessions
  2. mdl_assignment_submissions
The figures were large enough to be beyond the range of the ID field (BIGINT-10) and the names of the questionable tables coincided with the issues we were having. A faint light began to appear at the end of this proverbial tunnel.
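In hindsight, a query along these lines would have flagged the culprits quickly (a sketch; 'moodle' is a placeholder for your database name, and the tables with the biggest auto_increment-to-row-count gaps float to the top):

select table_name, table_rows, auto_increment from information_schema.tables where table_schema = 'moodle' order by (auto_increment - table_rows) desc;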
The problem lay with the primary keys of these two tables. After searching for the Moodle database schema on Moodle.org (almost a herculean task) and careful analysis of the stable structure, I learned that the primary keys were not foreign keys in other tables. The proverbial light grew a little bigger.

The only way forward at this point would be to delete the entire primary key fields on the two affected tables, recreate the field (and associated options) and reset the auto_increment to 1.
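For one of the affected tables, the operation boils down to something like this (a sketch using mdl_question_sessions; copy the exact column definition from a known-good schema before dropping anything):

ALTER TABLE mdl_question_sessions DROP COLUMN id;
ALTER TABLE mdl_question_sessions ADD COLUMN id BIGINT(10) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
ALTER TABLE mdl_question_sessions AUTO_INCREMENT = 1;
(MySQL will bump the counter up to MAX(id)+1 automatically.)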

This operation was done from the phpmyadmin interface and needed to be tested significantly to ensure that other functionality would not be negatively affected. Success! It worked in the test environment. The light grew stronger...

I was especially wary about implementation in production, having a very narrow maintenance window to attempt the fix. After the database fix, I re-ran the myisamchk with repair as well as the mysqlcheck to err on the safe side. The operation was a success, and I finally saw the end of the tunnel...Problem solved.

Thoughts: I've yet to determine WHY the moodle upgrade script failed and caused this database anomaly. From the front-end, the upgrade was successful and NO errors were reported. I've also yet to understand how mysqlcheck and myisamchk could not detect that a table field contained data beyond its configured range. Even turning on debugging on the Moodle site failed to show anything useful about a failed insert, to at least provide a tip in the general direction of the issue.

Tuesday, April 5, 2011

Some MySQL database tuning - from a non-DBA perspective.

A DBA I'm not, but that does not mean I have to accept the default configuration options for Mysql.

There are a couple scripts which can assist with tuning an existing database server, such as
a) Tuning-Primer: http://www.day32.com/MySQL/tuning-primer.sh
which makes recommendations for the following:

  • Slow Query Log
  • Max Connections
  • Worker Threads
  • Key Buffer
  • Query Cache
  • Sort Buffer
  • Joins
  • Temp Tables
  • Table (Open & Definition) Cache
  • Table Locking
  • Table Scans (read_buffer)
  • Innodb Status
b) Then there is also mysqltuner.pl [perl script], which essentially does the same thing. I used both, with some pretty decent results in tweaking an old server.

The above-mentioned scripts only work properly on a currently running setup. They do not really help in tuning a new database server, which I'm in the process of building.
Even the sample MySQL config files shipped with the distro are limited, with the largest (my-huge.cnf) only catering to a database server with 1-2GB of memory.

My intention is to tune a dedicated database server with the following base specifications:
CentOS 5.5 64-bit installed on an
HP ProLiant DL380 G7 Server
2 Intel Xeon E5620 quad-core processors
32GB memory



There is a built-in benchmark tool with MySQL. I'd recommend running the benchmark tests before any changes are made, then run and document the results after each configuration change. Normal elevated privileges apply - login as root.

0. Run the initial benchmark test - then copy/paste (or pipe) the result into a text file for further analysis.

cd /usr/share/sql-bench
perl run-all-tests --user='yourdbusername' --password='yourdbpassword'

Sample output - not mine, sample taken from here:
alter-table: Total time: 8 wallclock secs ( 0.02 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.03 CPU)
ATIS: Total time: 2 wallclock secs ( 1.20 usr 0.09 sys + 0.00 cusr 0.00 csys = 1.29 CPU)
big-tables: Total time: 5 wallclock secs ( 2.45 usr 0.08 sys + 0.00 cusr 0.00 csys = 2.53 CPU)
connect: Total time: 50 wallclock secs (12.74 usr 4.50 sys + 0.00 cusr 0.00 csys = 17.24 CPU)
create: Total time: 31 wallclock secs ( 1.20 usr 0.44 sys + 0.00 cusr 0.00 csys = 1.64 CPU)
insert: Total time: 397 wallclock secs (97.95 usr 13.61 sys + 0.00 cusr 0.00 csys = 111.56 CPU)
select: Total time: 44 wallclock secs ( 8.71 usr 0.88 sys + 0.00 cusr 0.00 csys = 9.59 CPU)
transactions: Test skipped because the database doesn't support transactions
wisconsin: Total time: 3 wallclock secs ( 0.91 usr 0.23 sys + 0.00 cusr 0.00 csys = 1.14 CPU)
TOTALS 562.00 123.77 19.82 143.59 3425950

I first started off by keeping a backup of the existing /etc/my.cnf and then using the distro's my-huge.cnf, making that my baseline config.

1. Backup existing config, set new baseline, and restart mysql service
mv /etc/my.cnf /etc/my.cnf.bkp
cp /usr/share/doc/mysql-server-5.0.77/my-huge.cnf /etc/my.cnf
service mysqld restart

The MySQL documentation suggests tuning the following server disk parameters:
hdparm - which allows configuration of the disk interface.
MySQL suggests the following configuration:

2. Tuning disk parameters (reboot not necessary, but I did anyway)
hdparm -m 16 -d 1 /dev/sdX
(substitute your data disk for /dev/sdX)

It should be noted that I mounted the /var partition on its own disk array to avoid disk contention.
MySQL also suggests the following mount options on the DB disks:
noatime and async

nano /etc/fstab

Find your /var partition, or whatever partition your db resides on, and append noatime and async after the default option:

LABEL=/var            /var                  ext3            defaults,noatime,async 1 2

Note: different filesystems have their own pros and cons for use on a database server. Ext3 is the middle ground in terms of performance and stability.

3. Tune additional my.cnf options (requires a restart of the mysql service before changes are applied)

MySQL documentation suggests that the two most important parameters to begin tuning are:
key_buffer_size (key_buffer in older versions of MySQL)
table_open_cache (table_cache in older versions)

The key_buffer_size allows you to store the MyISAM table indexes in memory. The generally accepted rule of thumb is setting this to 25-50% of server memory on a dedicated database server.
nano /etc/my.cnf
find key_buffer_size (or key_buffer)
adjust to the suggested value above with the following syntax:
key_buffer_size = 16384M

The table_open_cache (or table_cache) is related to the max_connections configuration. Tuning this parameter relies on obtaining information from the running setup and tweaking accordingly.
Tips on tuning this and other parameters can be found here: http://www.databasejournal.com/features/mysql/article.php/3367871/Optimizing-the-mysqld-variables.htm
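To gauge whether the cache is sized right on a running server, compare the cache size against how often MySQL has had to open tables (a sketch, from the mysql prompt; a rapidly growing Opened_tables counter suggests the cache is too small):

SHOW VARIABLES LIKE 'table_cache';
SHOW GLOBAL STATUS LIKE 'Open%tables';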


Remember to run the benchmark after each change - analysis of the results should be fun.

    -n