
Monday, February 26, 2018

Troubleshooting Cron (Cron not working)

I had a very curious case on a development server where, after a couple of weeks, I realized the cron jobs were not running. The crontab was fine, and the scripts and their permissions were fine and could be executed manually. The issue seemed to be cron itself.

It took a bit of digging, and the key turned out to be an obscure line in the man page. Much thanks to this thread for pointing me in the right direction:
https://askubuntu.com/questions/23009/why-crontab-scripts-are-not-working

Two things were wrong here.

1. The crond service had stalled, quietly. Check this first and start it if it's stopped; even if it's running, consider restarting the service.
service crond status
service crond restart

2. The crontab needs a trailing newline at the end of the file in order to load.
My version of crontab does not match the one in the article (I'm on CentOS 6), but the trailing newline was missing. Oddly enough, all the other servers that were fine had this blank line at the end. It's apparently important, even if poorly documented.
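One quick way to check for that trailing newline (assuming root's crontab, which lives under /var/spool/cron on CentOS 6):

# print the last character of root's crontab; it should be \n
tail -c 1 /var/spool/cron/root | od -c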

At any rate, cron performs important user functions, so consider checking it from time to time.

--EOF

Friday, September 30, 2016

Installing Docker on CentOS 7 (behind corporate proxy)

As part of my research into Percona's new open source offering, the Percona Monitoring and Management platform, I realized that a core component is provided via a Docker container. I've played around with Docker on a small scale before, but this needed to be done in an actual server environment on the corporate network. One little item caused a brief moment of grief with the proxy, but I eventually sorted it out.

Docker Engine installation on CentOS7

0. Login as root
1. Update machine
yum update
2. Add the Docker yum repo

tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

3. Install the Docker Engine Package
yum install docker-engine

4. Start the Daemon
systemctl start docker

5. Set to run at boot
systemctl enable docker

6. Verify Operation with simple test
docker run hello-world

**Note: if you're behind a proxy, you may see an error like
... dial tcp xx.xx.xx.xx:53: getsockopt: connection refused

You may need to do the following

a. create a systemd drop-in directory for the docker service
mkdir /etc/systemd/system/docker.service.d

b. create a proxy configuration file
...in the directory just created, in my case I needed both an HTTP and an HTTPS proxy to get it to work
vi /etc/systemd/system/docker.service.d/http-proxy.conf

Add the following (the Environment entry is all one line):
[Service]
Environment="HTTP_PROXY=http://your.proxy.ip.addr:port/" "HTTPS_PROXY=http://your.proxy.ip.addr:port/"

Save and exit
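If the daemon should bypass the proxy for internal hosts (say, a local registry), a NO_PROXY entry can be added to the same Environment line before saving; the hostnames below are placeholders:

[Service]
Environment="HTTP_PROXY=http://your.proxy.ip.addr:port/" "HTTPS_PROXY=http://your.proxy.ip.addr:port/" "NO_PROXY=localhost,127.0.0.1,registry.internal.example"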

c. flush changes
systemctl daemon-reload

d. Verify that the configuration has been loaded:
systemctl show --property=Environment docker

e. Restart Docker:
systemctl restart docker
Verify Operation with simple test (works, yay!)
docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Tuesday, July 12, 2016

Using Grep to search for a string inside a file

Simple story here really.

I had a borked config somewhere inside my /etc/ folder that was throwing application errors. It's a development machine with poor documentation, so I had to figure out where the typo happened.

Grep is a simple but incredibly powerful command that took care of this quite easily. The flags I used print the filenames that contain the matching string as well as the line number where the match occurs. Helpful for fixing or tweaking some borked config or code.

Usage is quite simple:


grep -inr "badconfigurationstring" /searchdirectory

The flags are as follows:
-i = ignore case
-n = print line number of matching string
-r = recursively read all files under search directory
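A hypothetical run and its output would look something like this (path and string are made up); note the filename:linenumber: prefix on each match, and that -i matched despite the case difference:

grep -inr "badconfigurationstring" /etc
/etc/myapp/myapp.conf:27:setting = BadConfigurationString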

more info at the grep man page: http://linux.die.net/man/1/grep

--end.






Monday, February 15, 2016

TurnitinTwo Issues with Moodle 3 on CentOS 6

I recently stumbled upon a Turnitin bug in my Moodle 3 environment where it was simply failing to connect to Turnitin through the configuration interface.


The message was fairly cryptic and the logs even more so.

The error message was simply "Could not connect to Turnitin. Double check your API URL setting". The URL was fine, so I looked at the API log, which indicated a "Curl error: Proxy CONNECT aborted" alongside an HTTP 502 error.


When Turnitin was contacted, they gave the following response, which did not work for me:
If you encounter connectivity issues while using the Turnitin Moodle Direct V2 integration (error: Turnitin API Base URL incorrect or unavailable, or error: Double check your API URL setting) this could be related to a CA certificate being unavailable to cURL. Viewing the Turnitin Apilog files will identify if this is the case.

The Moodle Direct plugin uses the server operating system's implementation of cURL. If cURL has an out of date (or no) CA certificates, the interaction with Turnitin will fail due to cURL performing peer SSL certificate verification and not being able to verify the Turnitin SSL certificate. Until cURL 7.18.0 some CA certificates were provided, but after 7.18.0 no CA certificates have been provided at all. Because of this, the Moodle server administrator would need to ensure that an up to date CA certificate bundle is used.

For Debian and RedHat based distributions:
CA certificates are distributed in the ca-certificates package. Gentoo servers provide them via the app-misc/ca-certificates ebuild. It's also a good idea to make sure that the OpenSSL libraries (libssl) and cURL libraries (libcurl) are up to date on your server.

You will also need to place a file with the Bundle of CA Root Certificates (downloadable from
http://curl.haxx.se/ca/cacert.pem) on your webserver and make a curl.cainfo reference to this file in your php.ini.

For Windows based servers:

1. You need to be running PHP 5.3.7 or later.
2. Download
https://raw.github.com/bagder/curl/master/lib/mk-ca-bundle.vbs
from the Curl repository on GitHub.
3. Open a Command Prompt as Administrator and go to the directory in which you downloaded mk-ca-bundle.vbs .
4. Run mk-ca-bundle.vbs . Accept the default file name and do not include the text information for each certificate.
5. After running this you will end up with a file ca-bundle.crt.
6. Copy that to a known location, e.g. {path}/ca-bundle.crt.
7. Add curl.cainfo={path}/ca-bundle.crt to php.ini. See PHP Runtime Configuration for more details
[PHP]
;;;;;;;;;;;;;;;;;;;
; CURL Settings ;
;;;;;;;;;;;;;;;;;;;
curl.cainfo={path}/ca-bundle.crt
8. Restart the IIS web site

We were eventually able to resolve the issue through a combination of factors:

1. In addition to having an http_proxy environment variable in the operating system, I also needed to explicitly set an https_proxy. Whether this applies depends on whether your server currently uses an http_proxy environment variable; do not make any changes if your server can access the web directly.


vi /etc/bashrc
Add the lines:
export http_proxy='http://yourproxyip:port/'
export https_proxy='http://yourproxyip:port/'

Save and exit the shell.

2. The CA certificates bundle from curl.haxx.se did not work and resulted in a bunch of errors related to the SSL CA cert (Message: Problem with the SSL CA cert (path? access rights?)), so I re-installed the ca-certificates bundle from the CentOS repositories.
yum reinstall ca-certificates openssl
3. I then ran the update-ca-trust command to update the certificate store.
update-ca-trust
4. I removed the Moodle proxy configuration from the Moodle application interface:
    Dashboard ► Site administration ► Server ► HTTP (Server Proxy section)
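As a sanity check that cURL can now verify certificates and traverse the proxy, a quick test from the shell helps (any HTTPS URL works as a target):

curl -vI https://www.google.com
# in the verbose output, look for "SSL certificate verify ok"
# rather than a CA error or "Proxy CONNECT aborted"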


Please note that these steps fixed the issue in my particular environment. If you are faced with similar issues, I'd suggest starting with steps 2 and 3; if you are having proxy connect issues beforehand, try step 1 first.

Good luck and happy moodling!

-Noveck

Thursday, January 8, 2015

Pluggable Authentication Modules (PAM) - some basic tricks on CentOS 6

I've been playing around with PAM on a couple of distros recently, and I thought I'd share some quick tips and tricks for setting up a secure CentOS 6 Linux multi-user environment. Whilst these are not bulletproof password policies, they are a step beyond the default distribution configuration, and not so complex that users would be bugging you, the friendly neighbourhood sysadmin.

As usual, any feedback is appreciated, so drop me a line: noveck@woblag.com. Once it gets past the spam filters, I'll try my best to respond asap.

1. Use PAM to disable the use of null passwords in user accounts.

vi /etc/pam.d/system-auth

Find line 
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok

Remove/delete nullok option, so the line now reads:
password sufficient pam_unix.so md5 shadow try_first_pass use_authtok

save and close file


2. Use PAM to prevent re-using/recycling passwords.

This example prevents the use of the last 3 passwords.

vi /etc/pam.d/system-auth
find line
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok

Add remember=3 to the end of the line (if you removed nullok in step 1, your line won't include it):
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok remember=3

save and close file

3. Set password minimum length

This example sets the minimum password length to 8 characters.

vi /etc/pam.d/system-auth

find line
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok

Add a new line BEFORE it:
password requisite pam_cracklib.so minlen=8
save and close file

4. Configure server to deny access with multiple incorrect login attempts

This example temporarily denies access after 5 failed attempts, with the lockout time set to 1 hour (3600 seconds).

vi /etc/pam.d/system-auth

Add the following line to end of file
auth required pam_tally.so onerr=fail deny=5 unlock_time=3600

save and close file
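If an account does get locked out, pam_tally can inspect or reset the counter from the shell (the username below is just an example; on systems using pam_tally2.so, the analogous tool is pam_tally2):

pam_tally --user baduser          # baduser is a placeholder; shows the current failure count
pam_tally --user baduser --reset  # clears the counter / unlocks the account early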

--END

Thursday, March 6, 2014

SSL Secured Apache Webserver

Here's a quick way to run an SSL-secured webserver. Ideally, a trusted Certificate Authority should be used, but as a proof of concept, we'll generate our own self-signed certificate.

This assumes a fully functional Apache Webserver running on CentOS Linux.

0. Login as root/sudo into the terminal

1. Install prerequisites
yum install mod_ssl openssl

2. Generate Certificate / Private Key
(or use instructions from trusted CA with a purchased certificate) 
openssl genrsa -out ca.key 1024

3. Generate Certificate Signing Request (CSR)
openssl req -new -key ca.key -out ca.csr

4. Generate Self Signed Key
openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt

5. Copy files to appropriate locations
cp ca.crt /etc/pki/tls/certs
cp ca.key /etc/pki/tls/private/ca.key
cp ca.csr /etc/pki/tls/private/ca.csr

6. For SELinux
restorecon -Rvf /etc/pki

7. Update the Apache SSL config file
vi +/SSLCertificateFile /etc/httpd/conf.d/ssl.conf

Edit the two entries in the file

SSLCertificateFile /etc/pki/tls/certs/ca.crt
SSLCertificateKeyFile /etc/pki/tls/private/ca.key


8. Restart Apache
service httpd restart



9. Configure the firewall to accept incoming SSL requests
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
service iptables save
iptables -L -v


10. Test
From a web browser, hit https://servername.com and the page should be displayed (expect a browser warning, since the certificate is self-signed).
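The certificate can also be inspected from the command line; with a self-signed certificate, a verify error (num=18, self signed certificate) is expected and harmless here:

openssl s_client -connect servername.com:443 < /dev/null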


Finito!
Now get some coffee. :)

-noveck



Tuesday, October 29, 2013

Standalone server reporting with Sar and KSar

Zabbix is a great tool for monitoring and reporting across multiple servers, and that installation is covered here. In this instance, Sar and kSar will be used as a standalone data collection tool, since this server falls outside the physical and network reach of my Zabbix server.


Sar is a neat little tool that is part of the sysstat package; more information can be found on the author's website. In my case, it will be used to collect data on CPU, memory, swap, network and all the other metrics that can make or break a Linux-based service.

kSar is a separate java based tool that generates some lovely graphs using the collected sar data, because a picture paints a thousand words, or in this case, a graph summarizes a crapload of data.

This tutorial covers the installation of both, as well as a practical usage scenario.

0. Got root/ sudo?



1. Install the packages
yum install sysstat java



2. Set the sysstat cron to run
nano /etc/cron.d/sysstat
Ensure the following lines are active / not commented out. The first line specifies how often the tool should take a snapshot, the second is when the daily summary is processed.


*/10 * * * * root /usr/lib/sa/sa1 1 1
53 23 * * * root /usr/lib/sa/sa2 -A

3. Configure sar to keep a month's worth of data

By default, sar keeps 7 days worth of data. Since we need data monthly, the configuration needs updating.

nano /etc/sysconfig/sysstat

Update the line HISTORY=7 to read:
HISTORY=31

4. Download kSar

Located at http://sourceforge.net/projects/ksar
Create a folder to store kSar and Monthly text files
mkdir /mon
Extract kSar into /mon
Change permissions to make kSar executable
chmod +x /mon/kSar


5. View Daily Server Data (default)
Prep report:
LC_ALL=C sar -A > /mon/sardata.txt

See the graphs
cd /mon/kSarx.x.x/
./run.sh


Click "Data" Menu option -> Load from text file
Select /mon/sardata.txt

6. View Monthly Server Data (see below for actual script)

Prep report:
cd /mon/
./sarprep_monthly.sh
(Ensure script is executable before running!)

See the graphs
cd /mon/kSarx.x.x/
./run.sh


Click "Data" Menu option -> Load from text file
Select /mon/sarmonthly_July.txt (use appropriate month name)

7. All done!

sarprep_monthly.sh
#!/bin/sh
# cleanup old monthly file
rm -rf /mon/sarmonthly_$(date +"%B").txt
# loop through 31 possible days, merge all daily files into one
# (seq -w zero-pads the counter, matching the sa01..sa31 file names under /var/log/sa)
for i in $(seq -w 1 31); do
LC_ALL=C sar -A -f /var/log/sa/sa$i >> /mon/sarmonthly_$(date +"%B").txt
done
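If you'd rather not run the prep by hand, the script could be scheduled alongside the sysstat jobs; the timing below is only a suggestion (shortly after month-end, once sa2 has summarized the last day):

# /etc/cron.d/sarprep -- example schedule, 00:30 on the 1st of each month
30 0 1 * * root /mon/sarprep_monthly.sh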


That's it! Til next time...
-noveck

Thursday, September 26, 2013

Upgrading / Migrating a MySQL 5.0 Database to MySQL 5.5 [InnoDB]

This post is an ultra, no, make that uber paranoid method of upgrading/migrating a relatively large (20+ GB on disk) InnoDB database from MySQL 5.0 to MySQL 5.5. Some might consider it overkill, but for a database of this size and maturity, I'd prefer not to take any unnecessary risks.

This assumes that the old server is running MySQL 5.0 on CentOS 5.x and that MySQL 5.5 is installed on a new server running CentOS 6, using the remi repositories. This is covered here.

Phase 1 - Prepare Data on the Old Server

1. Execute Database Check to ensure tables are clean
From terminal:
mysqlcheck -c mydbname -u root -p
<enter password when prompted>


2. Re-index tables before the dump
From mysql: (single line!)
select concat('ALTER TABLE `', table_schema, '`.`', table_name, '` Engine=InnoDB;') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/InnoBatch.sql';

From shell:
mysql -u root -p --verbose < /tmp/InnoBatch.sql

3. Export the database as a dump file
 From shell:
mysqldump -u root -p -e -c --verbose --default-character-set=utf8 --skip-set-charset --max_allowed_packet=100M --single-transaction --databases mydbname -r /root/Desktop/mydbdump.sql

4. Copy to new DB server
scp -r /root/Desktop/mydbdump.sql root@new.db.srv.ip:/root/Desktop/


Phase 2 - Import to New Server

1. Create empty database shell for import

From mysql:
create database mydbname character set utf8 collate utf8_unicode_ci;

2. Issue GRANT permissions on the new DB (I hope you have these documented; otherwise you might need to dump/restore the mysql.user table to the new DB)

3. Import the SQL file (but first set a really high global value for max_allowed_packet to handle the large data import, then reconnect so the new value applies to your session)
 set global max_allowed_packet = 1000000000;
source /root/Desktop/mydbdump.sql

4. Check mysql for transaction warnings
from mysql:
show warnings\G

5. Run upgrade script
From shell:
mysql_upgrade -u root -p --force

6. Rebuild the InnoDB tables, which forces them to upgrade
(source: http://www.mysqlperformanceblog.com/2010/05/14/mysql_upgrade-and-innodb-tables/ )

From mysql: (single line!)
select concat('ALTER TABLE `', table_schema, '`.`', table_name, '` Engine=InnoDB;') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/InnoBatch.sql';
From shell:
mysql -u root -p --verbose < /tmp/InnoBatch.sql

7. Execute Database Check to ensure newly imported/upgraded tables are clean
From shell:
mysqlcheck -c mydbname -u root -p

Phase 3 - Compare old and new database
Check data consistency to ensure all the data was transferred, using an exact record count on each table:
http://dev.mysql.com/doc/refman/5.0/en/innodb-restrictions.html

1. On Old db server, generate query to perform record count on each table
from mysql: (single line!)
select concat('SELECT "', table_name, '" as table_name, count(*) as exact_row_count from ', table_schema, '.', table_name, ' UNION') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/TableAnalysisQuery.sql';

From shell:
nano /tmp/TableAnalysisQuery.sql
Remove the last UNION from the end of the final line in the file, then save (or use the sed one-liner below).
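If you'd rather script that edit, a sed one-liner can strip the trailing UNION from the last line (a sketch; eyeball the file afterwards):

sed -i '$ s/ UNION$//' /tmp/TableAnalysisQuery.sql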

2. Run the query to get table row count for all tables
From shell:
mysql -u root -p < /tmp/TableAnalysisQuery.sql > /root/Desktop/TableAnalysisResults-$(hostname).txt


3. On New db server, generate query to perform record count on each table
from mysql: (single line!)
select concat('SELECT "', table_name, '" as table_name, count(*) as exact_row_count from ', table_schema, '.', table_name, ' UNION') from information_schema.tables where table_schema = 'mydbname' into outfile '/tmp/TableAnalysisQuery.sql';

From shell:
nano /tmp/TableAnalysisQuery.sql
Remove the last UNION again (the same sed one-liner from Phase 3, step 1 works here too).

4. Run the query to get table row count for all tables
From shell:
mysql -u root -p < /tmp/TableAnalysisQuery.sql > /root/Desktop/TableAnalysisResults-$(hostname).txt

5. Copy both text files to a third machine for comparison

On OLD db server, from shell:
scp -r /root/Desktop/TableAnalysisResults-myolddb.mydomain.com.txt root@third.machine.ip:/root/Desktop

On NEW db server, from shell:
scp -r /root/Desktop/TableAnalysisResults-mynewdb.mydomain.com.txt root@third.machine.ip:/root/Desktop

ON third server
from shell:
diff -a /root/Desktop/TableAnalysisResults-myolddb.mydomain.com.txt /root/Desktop/TableAnalysisResults-mynewdb.mydomain.com.txt
No output from the previous command means the data is consistent (at least as far as row counts per table go) on both servers, and the new database can be made active / brought into production.

<EOF>

That's it!
-noveck

Wednesday, July 17, 2013

CentOS 5 top menu vanished after update

Well this is a tickler. After I performed an update on a CentOS system, the top menu pulled a Houdini on me. The panel was totally empty and non-responsive, save for the time.

This is a quick and easy fix, so don't panic. Just force the gnome panel to reload.

0. Login as root or sudo.
You should find the Terminal via right-click on the desktop. Thankfully.

1. Reload the gnome panel (forcefully)
killall gnome-panel


2. That's it!
The menu items should return. If that didn't work, you might need further help.

-noveck

Tuesday, June 11, 2013

Reset MySQL root password on CentOS 5.x

I had one of those oh-crap moments and forgot the mysql root password in one of my development/test machines.
This is a reblog of someone else's post, in case the original ever gets deleted. I must say it saved my bacon (or at the very least a couple hours of hair-pulling and a reinstall).

Credit to: http://gettechgo.wordpress.com/2012/05/10/how-to-reset-mysql-root-password-linux-o-s/

0. Login as root/su

1. Stop the MySQL service
 
service mysqld stop
 

2. Start MySQL Safe mode with skip grant tables option 
mysqld_safe --skip-grant-tables &
(press ctrl+z to exit, if required)

3. Start the MySQL service

service mysqld start

4. Log into the MySQL server without any password

 mysql -u root -p mysql
 

5. Reset the password for ‘root’ user
UPDATE user SET password=PASSWORD('new-password') where user='root';

6. Flush privileges
 
flush privileges;

7. Restart the MySQL service

service mysqld restart
 

8. Log-in with the new password 
mysql -u root -p
<enter new password when prompted>


Cheers,
Noveck

Monday, April 8, 2013

Issues with ip6tables

Whilst troubleshooting a VM hanging issue with an ESXi 5.0 guest running CentOS 5, I noticed a strange error after my last kernel update.


It had me a bit confused, as I distinctly recall disabling all IPv6 support while building the machine.

Anyhow, since I don't need IPv6 support at the moment and to avoid any unnecessary red flags while booting, I'll just go ahead and disable it completely.

0. Login as root / su

1. Check to see if it loads on boot.
chkconfig --list | grep ip6tables

//in my instance, it was enabled for levels 2,3,4 and 5;

2. Disable ip6tables
chkconfig ip6tables off

3. Stop ip6tables service (if running)
service ip6tables status (if the process is running, stop it; if not, ignore the next line)
service ip6tables stop
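To confirm the change took, list the service again; every runlevel should now read off:

chkconfig --list ip6tables
# expected: ip6tables 0:off 1:off 2:off 3:off 4:off 5:off 6:off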

That's it!


It may or may not help my VM hanging issue, but at least it'd clear that pesky red flag.

Cheers,
noveck

Monday, March 11, 2013

InnoDB Restore Backup Procedure

My previous post elaborated on some scripts on backing up a MySQL Database with the InnoDB storage engine.

This post documents the restore procedure using an actual backup generated by those scripts. Steps 7 and 8 are optional, but recommended. Better safe than sorry!

Assumptions:
· Target restore candidate server has Percona Xtrabackup installed.
· Target database server (restore candidate), has no active/production databases running.

Test Environment:
CentOS 5.x
MySQL 5.0 (distro version)
Percona Xtrabackup 2.0.3 installed as per this howto

Backup Directory Structure
/bkp
/bkp/Hourly
/bkp/Daily
/bkp/Monthly


0. Got root? 
 
1. Locate appropriate backup file
Assumption that it is compressed like the Daily or Monthly in my previous post.

cp -r /bkp/Daily/mydbnamexxxxxx.tgz /root/Desktop

2. Uncompress backup file

cd /root/Desktop
tar xzf mydbnamexxxxxx.tgz


3. Prepare the backup file 
cd /path/to/extracted/backup/ 
ls
(expected output should be ibdata1 file, folder containing database base and other xtrabackup named files)

innobackupex --apply-log /root/Desktop/*_dbxxxx/201xxxxx/
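To confirm the prepare step completed, peek at the checkpoints file inside the extracted backup directory; after a successful --apply-log, backup_type should read full-prepared rather than full-backuped:

cat xtrabackup_checkpoints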

4. Stop the MYSQL Daemon and copy the files from the desktop to the mysql directory
service mysqld stop
rsync -rvt --exclude 'xtrabackup_checkpoints' --exclude 'xtrabackup_logfile' * /var/lib/mysql


5. Change ownership of the restored files and restart the MYSQL Daemon
chown -R mysql:mysql /var/lib/mysql
chmod -R 771 /var/lib/mysql
service mysqld start


6. Login to mysql and ensure the database has been restored via a table count
(It helps to have this documented in your production environment)
mysql -u root -p
<enter password when prompted>
show databases;
use mydbname; (use appropriate database name from list)
show tables;
Quit MYSQL:
\q

7: Execute a mysqlcheck with check flag to determine any data inconsistencies
mysqlcheck -c mydbname -u root -p
<enter password when prompted>

8: Execute a mysqlcheck with optimize flag to optimize restored table data
mysqlcheck -o mydbname -u root -p
<enter password when prompted>

--eof
-noveck

Wednesday, January 16, 2013

Issues with CentOS 6 and EPEL

I was running into some issues with the EPEL repo on CentOS 6.x. I had just installed the OS and the EPEL/remi repos, and whilst trying to search for a package, the same error kept repeating:

Cannot retrieve metalink for repository: epel

Every attempt failed the same way.

The fix involved editing the EPEL repo and making a simple change from https to http. Why or how this broke is beyond me, but this fix may help someone else.

Assuming you have root or su:

1. Edit the epel repo
nano /etc/yum.repos.d/epel.repo

Find line:
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch

Change to
mirrorlist=http://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch

Save and exit.
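The same edit can be scripted if you have a few servers to fix; a one-liner along these lines works (check the file afterwards):

sed -i 's|^mirrorlist=https|mirrorlist=http|' /etc/yum.repos.d/epel.repo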

2. Clean yum
yum clean all


That did it for me!

Cheers,
noveck 

Wednesday, January 9, 2013

InnoDB Backup Scripts

Following up on my earlier post about converting the storage engine on a MySQL Database from MyISAM to InnoDB, I'd like to share the following scripts with backup rotation built in.
The next blog entry should be a step-by-step restore procedure using the actual backups below.

Test Environment:
CentOS 5.x
MySQL 5.0 (distro version)
Percona Xtrabackup 2.0.3 installed as per this howto

Backup Directory Structure
/bkp
/bkp/Hourly
/bkp/Daily
/bkp/Monthly

Script Output Log Directory
/tmp

Script Directory
/scripts

Disclaimer: I do not guarantee this is the BEST way of doing this, but it works for me. Copy and paste the following into a *.sh file in your scripts directory, and ensure the executable flag is set:

chmod +x scriptname.sh

Actual Scripts
innodb_backup_monthly.sh

#!/bin/sh
# An InnoDB Backup Script to backup database Monthly
#
# Written by: Noveck Gowandan
# 02-10-2012
# Version 1.1
# Modified filename convention
# Uses tar-gzip to further compress final archive
# Added script timer and modified output to log time
# Start timer

time_start=`date +%s`

# Go to backup location and create Monthly folder with datestamp
cd /bkp/
mkdir M_db_$(date +%Y%m%d)

# Execute backup using innobackupex and send to folder created previously
innobackupex --defaults-file=/etc/my.cnf --user=****** --password=****** --databases=mydbname /bkp/M_db_$(date +%Y%m%d)

# Compress backup into a tarball
tar czf mydbname_$(date +%Y%m%d).tgz M_db*

# Backup rotation section
rm -rf M_db*
rm -rf /bkp/Monthly/$(date +"%B")
mkdir /bkp/Monthly/$(date +"%B")
mv mydbname* /bkp/Monthly/$(date +"%B")

# Stop timer and calculate total time
time_end=`date +%s`
total_time=`expr $(( $time_end - $time_start ))`

# Log output: datestamp and time takes to execute
echo "____________" >> /tmp/db_backup_monthly.log
date  >> /tmp/db_backup_monthly.log
echo "Execution Time was $total_time seconds." >> /tmp/db_backup_monthly.log
innodb_backup_daily.sh
#!/bin/sh
# An InnoDB Backup Script to backup database DAILY
#
# Written by: Noveck Gowandan
# 02-10-2012
# Version 1.1
# Modified filename convention
# Uses tar-gzip to further compress final archive
# Added script timer and modified output to log time
# Start timer
time_start=`date +%s`

# Go to backup location and create Daily folder with datestamp
cd /bkp/
mkdir D_db_$(date +%Y%m%d)

# Execute backup using innobackupex and send to folder created previously
innobackupex --defaults-file=/etc/my.cnf --user=****** --password=****** --databases=mydbname /bkp/D_db_$(date +%Y%m%d)

# Compress backup into a tarball
tar czf mydbname_$(date +%Y%m%d).tgz D_db*

# Backup rotation section
rm -rf D_db*
rm -rf /bkp/Daily/$(date +"%A")
mkdir /bkp/Daily/$(date +"%A")
mv mydbname* /bkp/Daily/$(date +"%A")

# Stop timer and calculate total time
time_end=`date +%s`
total_time=`expr $(( $time_end - $time_start ))`

# Log output: datestamp and time takes to execute
echo "____________" >> /tmp/db_backup_daily.log
date  >> /tmp/db_backup_daily.log
echo "Execution Time was $total_time seconds." >> /tmp/db_backup_daily.log


innodb_backup_hourly.sh
#!/bin/sh
# An InnoDB Backup Script to backup database HOURLY
#
# Written by: Noveck Gowandan
# 02-10-2012
# Version 1.1
# Modified filename convention
# Added script timer and modified output to log time
# Start timer
time_start=`date +%s`
# Go to backup location and create Hourly folder with datestamp
cd /bkp/
mkdir H_db_$(date +%Y%m%d)
# Execute backup using innobackupex and send to folder created previously
innobackupex --defaults-file=/etc/my.cnf --user=****** --password=****** --databases=mydbname /bkp/H_db_$(date +%Y%m%d)
# Backup rotation section
rm -rf /bkp/Hourly/$(date +"%H")
mkdir /bkp/Hourly/$(date +"%H")
mv H_db* /bkp/Hourly/$(date +"%H")
# Stop timer and calculate total time
time_end=`date +%s`
total_time=`expr $(( $time_end - $time_start ))`
# Log output: datestamp and time takes to execute
echo "____________" >> /tmp/db_backup_hourly.log
date  >> /tmp/db_backup_hourly.log
echo "Execution Time was $total_time seconds." >> /tmp/db_backup_hourly.log