This post is an ultra, no, make that uber paranoid method of upgrading/migrating a relatively large (20+ GB on disk) InnoDB database from MySQL 5.0 to MySQL 5.5. Some might consider it overkill, but for a database of this size and maturity, I'd prefer not to take any unnecessary risks.
This assumes the old server is running MySQL 5.0 on CentOS 5.x and that MySQL 5.5 is installed on a new server running CentOS 6, using the remi repositories. This is covered here.
Phase 1 - Prepare Data on the Old Server
1. Execute Database Check to ensure tables are clean
From terminal:
mysqlcheck -c mydbname -u root -p
<enter password when prompted>
2. Re-index tables before the dump
From mysql: (single line!)
select concat('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/InnoBatch.sql';
From shell:
mysql -u root -p --verbose < /tmp/InnoBatch.sql
3. Export the database as a dump file
From shell:
mysqldump -u root -p -e -c --verbose --default-character-set=utf8 --skip-set-charset --max_allowed_packet=100M --single-transaction --databases mydbname -r /root/Desktop/mydbdump.sql
4. Copy to new DB server
scp -r /root/Desktop/mydbdump.sql root@new.db.srv.ip:/root/Desktop/
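In the paranoid spirit of this post, it's worth confirming the dump survived the transfer bit-for-bit by comparing checksums on both machines. A sketch (the stand-in files below are illustrative; on the real servers you'd run md5sum against /root/Desktop/mydbdump.sql on each side):

```shell
# Sketch: verify a copied file by checksum. The temp files stand in for
# the dump on the old server and its scp'd copy on the new server.
src=$(mktemp)
echo "dummy dump" > "$src"
cp "$src" "$src.copy"    # stands in for the scp transfer

md5sum "$src" | awk '{print $1}' > /tmp/dump.md5.src
md5sum "$src.copy" | awk '{print $1}' > /tmp/dump.md5.dst

# Identical checksums mean the transfer was clean
cmp -s /tmp/dump.md5.src /tmp/dump.md5.dst && echo "checksums match"
```

If the checksums differ, re-run the scp before wasting time on a corrupt import.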
Phase 2 - Import to New Server
1. Create empty database shell for import
From mysql:
create database mydbname character set utf8 collate utf8_unicode_ci;
2. Issue GRANT permissions for the new DB (I hope you have these documented; otherwise you may need to dump and restore the mysql.user table to the new server)
3. Import the SQL file (but first set a really high global value for max_allowed_packet to handle the large data import; the global setting applies to new connections, so reconnect before sourcing)
set global max_allowed_packet = 1000000000;
source /root/Desktop/mydbdump.sql
4. Check mysql for transaction warnings
from mysql:
show warnings\G
5. Run upgrade script
From shell:
mysql_upgrade -u root -p --force
6. Rebuild InnoDB tables, which would force the InnoDB tables to upgrade
(source: http://www.mysqlperformanceblog.com/2010/05/14/mysql_upgrade-and-innodb-tables/ )
From mysql: (single line!)
select concat('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/InnoBatch.sql';
From shell:
mysql -u root -p --verbose < /tmp/InnoBatch.sql
7. Execute Database Check to ensure newly imported/upgraded tables are clean
From shell:
mysqlcheck -c mydbname -u root -p
Phase 3 - Compare old and new database
Check data consistency by comparing an exact record count for every table on both servers, to ensure all the data was transferred.
http://dev.mysql.com/doc/refman/5.0/en/innodb-restrictions.html
1. On Old db server, generate query to perform record count on each table
from mysql: (single line!)
select concat('SELECT "', table_name, '" as table_name, count(*) as exact_row_count from ', table_schema, '.', table_name, ' UNION') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/TableAnalysisQuery.sql';
From shell:
nano /tmp/TableAnalysisQuery.sql
Remove the LAST UNION from the end of the last line in the file.
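Rather than editing the file by hand in nano, the trailing UNION can be stripped with a one-line sed. A sketch, demonstrated on a stand-in file so it can be tried safely (on the server you'd point it at /tmp/TableAnalysisQuery.sql):

```shell
# Strip the trailing " UNION" from the last line of the generated query.
# Demo file stands in for /tmp/TableAnalysisQuery.sql.
f=/tmp/TableAnalysisQuery.demo.sql
printf '%s\n' \
  'SELECT "a" as table_name, count(*) as exact_row_count from db.a UNION' \
  'SELECT "b" as table_name, count(*) as exact_row_count from db.b UNION' > "$f"

sed -i '$ s/ UNION$//' "$f"   # the $ address edits only the last line

tail -n 1 "$f"
```

The earlier lines keep their UNIONs; only the final one is trimmed, leaving a valid query.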
2. Run the query to get table row count for all tables
From shell:
mysql -u root -p < /tmp/TableAnalysisQuery.sql > /root/Desktop/TableAnalysisResults-$(hostname).txt
3. On New db server, generate query to perform record count on each table
from mysql: (single line!)
select concat('SELECT "', table_name, '" as table_name, count(*) as exact_row_count from ', table_schema, '.', table_name, ' UNION') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/TableAnalysisQuery.sql';
From shell:
nano /tmp/TableAnalysisQuery.sql
remove the LAST Union from the end of last line in the file.
4. Run the query to get table row count for all tables
From shell:
mysql -u root -p < /tmp/TableAnalysisQuery.sql > /root/Desktop/TableAnalysisResults-$(hostname).txt
5. Copy both text files to a third machine for comparison
On OLD db server, from shell:
scp -r /root/Desktop/TableAnalysisResults-myolddb.mydomain.com.txt root@third.machine.ip:/root/Desktop
On NEW db server, from shell:
scp -r /root/Desktop/TableAnalysisResults-mynewdb.mydomain.com.txt root@third.machine.ip:/root/Desktop
ON third server
from shell:
diff -a /root/Desktop/TableAnalysisResults-myolddb.mydomain.com.txt /root/Desktop/TableAnalysisResults-mynewdb.mydomain.com.txt
No output from the previous command means the row counts match on both servers, and the new database can be made active / brought into production.
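Since diff exits 0 when the files are identical and non-zero otherwise, the final check can be scripted rather than eyeballed. A sketch with stand-in files (substitute the two TableAnalysisResults files on the third machine):

```shell
# Gate the go/no-go decision on diff's exit status.
# Stand-in files simulate the old- and new-server row-count reports.
old=$(mktemp); new=$(mktemp)
printf 'tbl_a 100\ntbl_b 200\n' > "$old"
printf 'tbl_a 100\ntbl_b 200\n' > "$new"

if diff -a "$old" "$new" > /dev/null; then
    result="row counts consistent"
else
    result="MISMATCH - do not promote the new server"
fi
echo "$result"
```
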
<EOF>
That's it!
-noveck
Content moved to tucuche-consulting.com as of April 2019
Thursday, September 26, 2013
Monday, March 11, 2013
InnoDB Restore Backup Procedure
My previous post covered scripts for backing up a MySQL database running the InnoDB storage engine.
This post documents the restore procedure using the actual backup generated. Steps 7 and 8 are optional, but recommended. Better safe than sorry!
Assumptions:
· Target restore candidate server has Percona Xtrabackup installed.
· Target database server (restore candidate) has no active/production databases running.
Test Environment:
CentOS 5.x
MySQL 5.0 (distro version)
Percona Xtrabackup 2.0.3 installed as per this howto
Backup Directory Structure
/bkp
/bkp/Hourly
/bkp/Daily
/bkp/Monthly
0. Got root?
1. Locate appropriate backup file
This assumes it is compressed, like the Daily or Monthly backups in my previous post.
cp -r /bkp/Daily/mydbnamexxxxxx.tgz /root/Desktop
2. Uncompress backup file
cd /root/Desktop
tar xzf mydbnamexxxxxx.tgz
3. Prepare the backup file
cd /path/to/extracted/backup/
ls
(expected output: the ibdata1 file, a folder containing the database, and other xtrabackup-named files)
innobackupex --apply-log /root/Desktop/*_dbxxxx/201xxxxx/
4. Stop the MYSQL Daemon and copy the files from the desktop to the mysql directory
service mysqld stop
rsync -rvt --exclude 'xtrabackup_checkpoints' --exclude 'xtrabackup_logfile' * /var/lib/mysql
5. Change ownership of the restored files and restart the MYSQL Daemon
chown -R mysql:mysql /var/lib/mysql
chmod -R 771 /var/lib/mysql
service mysqld start
6. Login to mysql and ensure the database has been restored via a table count
(It helps to have this documented in your production environment)
mysql -u root -p
<enter password when prompted>
show databases;
use mydbname; (use appropriate database name from list)
show tables;
Quit MYSQL:
\q
7: Execute a mysqlcheck with check flag to determine any data inconsistencies
mysqlcheck -c mydbname -u root -p
<enter password when prompted>
8: Execute a mysqlcheck with optimize flag to optimize restored table data
mysqlcheck -o mydbname -u root -p
<enter password when prompted>
--eof
-noveck
Labels:
CentOS,
innobackupex,
InnoDB,
Linux,
MySQL,
Percona,
xtrabackup
Wednesday, January 9, 2013
InnoDB Backup Scripts
Following up on my earlier post about converting the storage engine on a MySQL Database from MyISAM to InnoDB, I'd like to share the following scripts with backup rotation built in.
The next blog entry should be a step-by-step restore procedure using the actual backups below.
Test Environment:
CentOS 5.x
MySQL 5.0 (distro version)
Percona Xtrabackup 2.0.3 installed as per this howto
Backup Directory Structure
/bkp
/bkp/Hourly
/bkp/Daily
/bkp/Monthly
Script Output Log Directory
/tmp
Script Directory
/scripts
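The directory layout above can be created in one command. A sketch, using a temporary prefix so it can be tried anywhere (on the real server these paths would be /bkp/..., /tmp and /scripts, with no prefix):

```shell
# Create the /bkp tree, log dir and script dir from the post.
# BASE is a stand-in prefix for safe experimentation.
BASE=$(mktemp -d)
mkdir -p "$BASE/bkp/Hourly" "$BASE/bkp/Daily" "$BASE/bkp/Monthly" "$BASE/scripts"
ls "$BASE/bkp"
```

mkdir -p is idempotent, so re-running this (or putting it at the top of the backup scripts) is harmless.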
Disclaimer: I do not guarantee this is the BEST way of doing this, but it works for me. Copy and paste the following into a *.sh file in your scripts directory, and ensure the executable flag is set:
chmod +x scriptname.sh
Actual Scripts
innodb_backup_monthly.sh
#!/bin/sh
# An InnoDB Backup Script to backup database Monthly
#
# Written by: Noveck Gowandan
# 02-10-2012
# Version 1.1
# Modified filename convention
# Uses tar-gzip to further compress final archive
# Added script timer and modified output to log time
# Start timer
time_start=`date +%s`
# Go to backup location and create Monthly folder with datestamp
cd /bkp/
mkdir M_db_$(date +%Y%m%d)
# Execute backup using innobackupex and send to folder created previously
innobackupex --defaults-file=/etc/my.cnf --user=****** --password=****** --databases=mydbname /bkp/M_db_$(date +%Y%m%d)
# Compress backup into a tarball
tar czf mydbname_$(date +%Y%m%d).tgz M_db*
# Backup rotation section
rm -rf M_db*
rm -rf /bkp/Monthly/$(date +"%B")
mkdir /bkp/Monthly/$(date +"%B")
mv mydbname* /bkp/Monthly/$(date +"%B")
# Stop timer and calculate total time
time_end=`date +%s`
total_time=$((time_end - time_start))
# Log output: datestamp and time takes to execute
echo "____________" >> /tmp/db_backup_monthly.log
date >> /tmp/db_backup_monthly.log
echo "Execution Time was $total_time seconds." >> /tmp/db_backup_monthly.log
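Since the rotation step deletes the previous month's copy, it may be worth verifying each tarball is actually readable before trusting it. A sketch on a stand-in archive (substitute the real mydbname_YYYYMMDD.tgz); this check is my addition, not part of the original script:

```shell
# Verify a backup tarball is listable before relying on it.
# A stand-in archive is built here so the check can be tried anywhere.
work=$(mktemp -d)
echo "data" > "$work/ibdata1"
tar czf "$work/demo.tgz" -C "$work" ibdata1

# tar tzf exits non-zero on a truncated/corrupt archive
if tar tzf "$work/demo.tgz" > /dev/null; then
    archive_ok=yes
else
    archive_ok=no
fi
echo "archive readable: $archive_ok"
```
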
innodb_backup_daily.sh
#!/bin/sh
# An InnoDB Backup Script to backup database DAILY
#
# Written by: Noveck Gowandan
# 02-10-2012
# Version 1.1
# Modified filename convention
# Uses tar-gzip to further compress final archive
# Added script timer and modified output to log time
# Start timer
time_start=`date +%s`
# Go to backup location and create Daily folder with datestamp
cd /bkp/
mkdir D_db_$(date +%Y%m%d)
# Execute backup using innobackupex and send to folder created previously
innobackupex --defaults-file=/etc/my.cnf --user=****** --password=****** --databases=mydbname /bkp/D_db_$(date +%Y%m%d)
# Compress backup into a tarball
tar czf mydbname_$(date +%Y%m%d).tgz D_db*
# Backup rotation section
rm -rf D_db*
rm -rf /bkp/Daily/$(date +"%A")
mkdir /bkp/Daily/$(date +"%A")
mv mydbname* /bkp/Daily/$(date +"%A")
# Stop timer and calculate total time
time_end=`date +%s`
total_time=$((time_end - time_start))
# Log output: datestamp and time takes to execute
echo "____________" >> /tmp/db_backup_daily.log
date >> /tmp/db_backup_daily.log
echo "Execution Time was $total_time seconds." >> /tmp/db_backup_daily.log
innodb_backup_hourly.sh
#!/bin/sh
# An InnoDB Backup Script to backup database HOURLY
#
# Written by: Noveck Gowandan
# 02-10-2012
# Version 1.1
# Modified filename convention
# Added script timer and modified output to log time
# Start timer
time_start=`date +%s`
# Go to backup location and create Hourly folder with datestamp
cd /bkp/
mkdir H_db_$(date +%Y%m%d)
# Execute backup using innobackupex and send to folder created previously
innobackupex --defaults-file=/etc/my.cnf --user=****** --password=****** --databases=mydbname /bkp/H_db_$(date +%Y%m%d)
# Backup rotation section
rm -rf /bkp/Hourly/$(date +"%H")
mkdir /bkp/Hourly/$(date +"%H")
mv H_db* /bkp/Hourly/$(date +"%H")
# Stop timer and calculate total time
time_end=`date +%s`
total_time=$((time_end - time_start))
# Log output: datestamp and time takes to execute
echo "____________" >> /tmp/db_backup_hourly.log
date >> /tmp/db_backup_hourly.log
echo "Execution Time was $total_time seconds." >> /tmp/db_backup_hourly.log
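The post doesn't show how these scripts are scheduled; presumably via cron. A sketch of a schedule, assuming the scripts live in /scripts as above (the times here are arbitrary, not from the original post):

```shell
# /etc/cron.d/innodb-backups -- illustrative schedule; adjust to taste
# m  h  dom mon dow  user  command
0    *  *   *   *    root  /scripts/innodb_backup_hourly.sh
30   1  *   *   *    root  /scripts/innodb_backup_daily.sh
45   2  1   *   *    root  /scripts/innodb_backup_monthly.sh
```

Staggering the daily and monthly runs away from the top of the hour avoids overlapping with the hourly backup.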
Labels:
CentOS,
innobackupex,
InnoDB,
Linux,
MySQL,
Percona,
xtrabackup
Monday, November 5, 2012
Batch convert MyISAM tables to InnoDB from Terminal
A couple of months ago I posted a modified PHP script to convert MySQL tables from MyISAM to InnoDB. It worked well in my test environment, but I realized it may not be the best solution for a dedicated database server where PHP is not installed.
In this case, this post assumes you have both su access and MySQL root access in order to batch-generate the queries to convert the storage engine. I strongly recommend isolating the database from the front end at this point, which means shutting your application down until the operation is complete.
Please TEST before running in your production environment!
Credit to this post on RackerHacker for the kickstart query generation.
This post does not cover the advantages/disadvantages of either storage engine; that is covered at MySQL's site and at the Oracle Blog.
The benefits of Moodle on InnoDB is covered here.
Some prerequisites to get out of the way.
a. Ensure that InnoDB is enabled on the server
mysql -u root -p
SHOW ENGINES;
Expected output:
...
InnoDB | YES | Supports transactions...
b. Disable the writeback cache (as per MySQL's recommendation)
hdparm -W0 /dev/sdx (where sdx contains /var/lib/mysql or your specific mysql location)
c. Tune the my.cnf according to MySQL's recommendation
Later on, Day32's Tuning Primer and the MySQL Tuner script can be used to fine-tune the other server parameters.
Prereq's aside, let's push forward onto the crux of the matter.
0. Login as root on your database server.
1. Take a backup before proceeding any further.
2. Double check that you have a backup. Please.
3. Run a check on the database to avoid a GIGO issue.
mysqlcheck -c -o -r -f -h localhost -u mydbusername -p mydbname
<enter your db password>
4. Login as MySQL root and generate the batch queries, send to output file (note the single quotes plus backticks)
select concat('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;')
from information_schema.tables
where table_schema = 'mydbname'
and engine = 'MyISAM'
into outfile '/tmp/InnoBatchConvert.sql';
5. Logout from MySQL root and back to shell
At the shell:
mysql -u root -p --verbose < /tmp/InnoBatchConvert.sql
<enter password>
6. After the script has completed, check all the tables to ensure the storage engine is InnoDB
select table_name, engine
from information_schema.tables
where table_schema = 'mydatabasename';
7. Check the database again.
mysqlcheck -c -h localhost -u mydbusername -p mydbname
<enter your db password>
That's it! All you need to do now is ensure you have a decent backup strategy in place; the InnoDB backup script is covered here. I'll follow up soon with a detailed restore procedure, as well as perhaps my own version of the InnoDB backup scripts with rotation built in.
-noveck
Labels:
Convert Database,
InnoDB
Thursday, April 12, 2012
Moodle: PHP script to convert Mysql database from MYISAM to InnoDB
I wrote (or rather modified an existing) script that batch-converts the Moodle database from MyISAM to InnoDB. This saves a lot of time compared to writing a couple hundred ALTER TABLE statements, or even using the phpMyAdmin interface.
Credit for the original script:
http://stackoverflow.com/questions/3856435/how-to-convert-all-tables-from-myisam-to-innodb
Please test on a copy of the database before using. Do not run on a production database!
I can't emphasize that enough.
It's been tested on both Moodle 1.9 and 2.0/2.2; see the script preamble for further details.
It's a potentially hazardous script, so I won't recommend placing it in any web accessible folders (like /var/www/html/..). It's been made relatively simple to implement, and the instructions are below:
0. Login as root/su
1. Ensure that InnoDB is enabled on your server
mysql -u root -p
SHOW ENGINES;
Expected output
...
InnoDB | YES | Supports transactions...
2. Install script on your server
Copy/Paste the code at the bottom of the post into a new file named innodb_convert.php on your Moodle server where the database is located.
Please ensure that it is not in a web accessible folder.
3. Give the file execute permissions to the script
chmod +x innodb_convert.php
4. Configuration
Open the file and navigate to the database configuration section (approx line 48). Set relevant variables.
$host = 'myhost'; //Specify host
$dbuser = 'myuser'; // Specify user with alter permissions
$dbpass = 'mypass'; //user password
$mydb = 'mydb'; //specify schema or db to be modified
5. Save and exit.
6. From the command line, call the script using php
cd /tmp
php innodb_convert.php
7. Normal output expected:
ALTER TABLE mytable1 engine=InnoDB;
ALTER TABLE mytable2 engine=InnoDB;
ALTER TABLE mytablen engine=InnoDB;
Operation Completed
Confirm Storage Engine conversion using phpmyadmin
or from mysql: show create table tblname.
Script Execution time: 999.99 seconds
The script (see preamble for additional information):
Download text file here.
<?php
// WARNING: PLEASE TEST BEFORE RUNNING ON A PRODUCTION DATABASE
// Tested on Moodle 1.9 and Moodle 2.0/2.2
//
// Disclaimer:
//
// This script was written to be executed from the command line in
// Linux using a MYSQL database and MYISAM storage engine.
// It converts the storage engine to INNODB using the ALTER TABLE command.
//
// It assumes that the INNODB engine has been enabled.
// This can be confirmed by logging into the mysql prompt
// and running the command:
//   show engines;
//
// If it is disabled, check your my.cnf and
// comment out the line: skip-innodb
//
// It does not check if disabled, so the script would execute
// and attempt to convert anyway.
//
// It contains secure information, so DO NOT PLACE IN WEB DIRECTORY
//
// Execute from /tmp and delete afterwards
//
// This is a modification of a script located here:
// http://stackoverflow.com/questions/3856435/how-to-convert-all-tables-from-myisam-to-innodb
//
// Version : 1.2
// Date    : 11 April 2012
// Diff    : check for recordset before executing alter table
//
// Version : 1.1
// Date    : 31 January 2012
// Diff    : Added Timer to script; Display user and DB being affected
//
// Version : 1.0
// Date    : 25 January 2012
//
// Noveck Gowandan http://noveckg.blogspot.com

//Time Calculation - start count
$time = explode(' ', microtime());
$time = $time[1] + $time[0]; //return array
$begintime = $time; //define begin time

//Your database connection items here
$host = 'myhost'; //Specify host
$dbuser = 'myuser'; // Specify user with alter permissions
$dbpass = 'mypass'; //user password
$mydb = 'mydb'; //specify schema or db to be modified

//connect to database using variables above
$link = mysql_connect($host, $dbuser, $dbpass);
$db = mysql_select_db($mydb);

if (!$link) {
    die('Could not connect: ' . mysql_error());
}

echo "Connected Successfully to: $host." . "\n\n";
echo "Using database: $mydb." . "\n\n";
echo "Running script as $dbuser." . "\n\n";

//show tables in database
$sql = 'SHOW TABLES';
$rs = mysql_query($sql);
echo $sql;
echo "\n";

if (!$rs) {
    die('SQL Recordset Error: ' . mysql_error());
} else {
    //loop through tables and convert to InnoDB
    while ($row = mysql_fetch_array($rs)) {
        $tbl = $row[0];
        $sql = "ALTER TABLE $tbl engine=InnoDB;";
        mysql_query($sql);
        echo $sql;
        echo "\n";
    }
    echo 'Operation Completed.' . "\n\n";
    echo 'Confirm Storage Engine conversion using phpmyadmin ' . "\n" . 'or from mysql: show create table tblname.' . "\n";
}

//close connection
mysql_close($link);

//Time Calculation - stop count
$time = explode(' ', microtime());
$time = $time[1] + $time[0];
$endtime = $time; //define end time
$totaltime = ($endtime - $begintime);
echo "Script Execution Time: $totaltime" . " seconds." . "\n\n";
?>
Labels:
CentOS,
Convert Database,
InnoDB,
Linux,
Moodle,
Moodle 1.9,
Moodle 2,
MyISAM,
MySQL,
php
Monday, February 6, 2012
Testing Percona Xtrabackup (innobackupex)
My last post dealt with the installation of the Percona Xtrabackup tool for use on a MySQL InnoDB database.
After some tweaking and help from the Percona Docs, the following is a test backup, prep and restore of the full database (not incremental).
This assumes a test environment, so please don't try this on a running production environment. Before the database can be restored, the existing one will have to be renamed or moved. The options for this are straightforward. Stop the mysql daemon and make a binary copy of the files, or rename the source folder.
0. Login as root/su
1. Prerequisite configuration items
Configure datadir in my.cnf and restart the mysqld service
nano /etc/my.cnf
--find section
[mysqld]
--add line
datadir=/var/lib/mysql
service mysqld restart
2. Run full backup of specific database
innobackupex --defaults-file=/etc/my.cnf --user=***** --password=***** --databases=mydbname /path/to/backup/
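One paranoid addition: innobackupex reports success with a "completed OK!" line, so a wrapper can gate on that marker instead of assuming the backup worked. A sketch against a canned log so the logic can be tried without a live server (on the real box you'd capture the log with something like innobackupex ... 2>&1 | tee /tmp/innobackupex.log):

```shell
# Check an innobackupex run log for the "completed OK!" success marker.
# A canned log stands in for real innobackupex output here.
log=/tmp/innobackupex.demo.log
printf '%s\n' \
  'innobackupex: Backup created in directory ...' \
  'innobackupex: completed OK!' > "$log"

if grep -q 'completed OK!' "$log"; then
    backup_status="ok"
else
    backup_status="FAILED"
fi
echo "backup status: $backup_status"
```
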
3. Prepare backup files
innobackupex --apply-log --user=****** --password=****** /path/to/backup
4. Restore backup
service mysqld stop
cp -r /path/to/backup /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
chmod -R 771 /var/lib/mysql
service mysqld start
-n
Labels:
innobackupex,
InnoDB,
MySQL,
Percona,
xtrabackup
Wednesday, February 1, 2012
Installing Percona xtrabackup on CentOS 5.x
This covers the basic installation of the xtrabackup package on CentOS. This is necessary in order to take hot backups of a database using the InnoDB storage engine; by default, MySQL has no option to take a full hot backup of a live InnoDB database.
For more information, please see Percona's website located here: http://www.percona.com/doc/percona-xtrabackup/
0. Login as root
1. Get rpm from Percona
(check your server version before installing, there are specific RPM’s for 32-bit and 64-bit)
see here for rpm list: http://www.percona.com/downloads/percona-release/
cd /tmp
wget http://www.percona.com/downloads/percona-release/percona-release-0.0-1.i386.rpm
2. Install rpm (32-bit on this test machine)
rpm -Uvh percona-release-0.0-1.i386.rpm
3. Check repo for package
yum list | grep percona
4. Install xtrabackup
yum install xtrabackup --nogpgcheck
I still have testing to do on actual usage, so I'll create a new post on tests of backups and restoring databases.
-n
Labels:
InnoDB,
MySQL,
Percona,
xtrabackup