Monday, December 20, 2010

VMware ESXi error - Server could not interpret client request (fixed!)

My VMware Infrastructure client simply refused to connect to my ESXi server today. The error:
"Error - Server could not interpret client request".

Very uninformative.

It took a while to figure out, and VMware's site was not much help.

Here is how to re-establish connectivity with your server.

1. Log on to the ESXi physical server directly (if it's in a different building, put on your walking shoes)
2. Navigate to the option: Restart Management Agents
3. Restart the management agents (F11 to confirm)
4. Log in using the VMware Infrastructure Client

Connectivity should now be established.


Thursday, October 14, 2010

Flat out

Procrastination: Something about some shit I'm gonna do tomorrow.

Thursday, September 2, 2010

Installing Rkhunter on CentOS 5.x

Rkhunter is a rootkit scanning tool for Linux/Unix environments. If you are running a Linux-based webserver, it is a good idea to install it and configure it to run nightly.

0. Login as root or su (whatever floats your boat)

1. Install the RPMForge repo if not already installed.

This example is for a 32-bit system; there is a different rpm for 64-bit. Download the rpmforge-release rpm into /temp first.
cd /temp
rpm -Uhv rpmforge-release-0.3.6-1.el5.rf.i386.rpm
rm rpmforge-release-0.3.6-1.el5.rf.i386.rpm

2. Install rkhunter
yum install rkhunter -y

3. Perform Initial scan
rkhunter --propupd
rkhunter -c

It is recommended to execute this daily, especially for a high-traffic server. Shell script time!

4. Create shell script
cd /your/script/directory
Create the script file (the name is up to you, e.g. rkhunter_daily.sh) and make it executable:
chmod +x rkhunter_daily.sh

Add these lines to the script:

rkhunter --update
sleep 60
rkhunter --checkall --cronjob --skip-keypress

cat /var/log/rkhunter.log | mail -s "Daily rkhunter scan report"

5. Add script to crontab
nano /etc/crontab
Add a line like the following, which executes the script at 1:00 am daily:

00 1 * * * root /bin/sh /your/script/directory/
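Putting steps 4 and 5 together, the daily script might look like this; the filename, location, and recipient address are placeholders (this sketch writes it under /tmp for illustration):

```shell
# Sketch of the daily rkhunter job; rkhunter_daily.sh and the
# recipient address are hypothetical placeholders.
cat > /tmp/rkhunter_daily.sh <<'EOF'
#!/bin/sh
# Refresh rkhunter's definitions, then run a full non-interactive scan.
rkhunter --update
sleep 60
rkhunter --checkall --cronjob --skip-keypress
# Mail the log to the admin (replace the address with your own).
cat /var/log/rkhunter.log | mail -s "Daily rkhunter scan report" admin@example.com
EOF
chmod +x /tmp/rkhunter_daily.sh
```

Move the script to your script directory and point the crontab entry at it.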



Thursday, August 19, 2010

Show passwords on Browser page

This handy piece of javascript was written by:

Copy and paste this into your browser's address bar (it only works on a page where a password is saved and hidden).

Definitely works in IE and Firefox.

Cheers (with props to the original author),

Setting up an NFS Share on CentOS 5.x

This outlines how to set up an NFS client/server architecture. I use it for Moodle to access datafiles, as I employ multiple webservers.

0. Package Installation

Login as root and install the following packages on NFS Client(s) and NFS Server
yum install nfs-utils nfs4-acl-tools portmap
chkconfig nfs on
chkconfig portmap on
service nfs start
service portmap start

1. Prep NFS Server
Export the share:
nano /etc/exports
add lines like:

Allow clients to be able to connect:
nano /etc/hosts.allow
add lines like:

Add clients to the hosts file:
nano /etc/hosts
make sure all relevant hosts are listed in the format:
192.168.x.x server1

Restart the nfs and portmap services:
service nfs restart
service portmap restart
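As a sketch of what those "lines like" entries look like - the export path and the 192.168.0.0/24 network below are placeholder assumptions, written here to example files:

```shell
# Example entries only; /mount/folder and the 192.168.0.0/24 network
# are placeholders for your environment.
cat > /tmp/exports.example <<'EOF'
# /etc/exports - export /mount/folder read-write to the local subnet
/mount/folder 192.168.0.0/255.255.255.0(rw,sync,no_root_squash)
EOF
cat > /tmp/hosts.allow.example <<'EOF'
# /etc/hosts.allow - let the NFS clients reach portmap and nfsd
portmap: 192.168.0.0/255.255.255.0
EOF
```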
2. Prep the NFS Client(s)
Repeat this step on each NFS client.
Edit the fstab
nano /etc/fstab

adding line:
server.ip.address:/mount/folder /mnt nfs rw,hard,intr 0 0
Mount the share:
mount nfshostname:/mount/folder /mnt

Restart netfs and set it to mount network filesystems at boot:
service netfs restart
chkconfig netfs on

3. Test NFS

Create a file from one client, and it should be automatically available on the other clients.
From Client1
echo "This is an NFS Test" > /mnt/file.txt


Tuesday, August 3, 2010

Sending mail from command line or shell script

How to send mail from the Linux Command line (or shell script) using the mail command.

echo "This is the body of the email" | mail -s "Email Subject" -- -f

The -- -f section is used to plug in a sender address; otherwise the mail will be sent from your root or user account at the actual physical server name.

This comes in handy for notification of script execution, for example, jobs in the crontab. Placing this code immediately after the last command in a shell script will ensure relevant person(s) are notified when the script is called.
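A fleshed-out sketch, with a hypothetical subject and addresses (the guard keeps it harmless on a box without the mail command):

```shell
# Hypothetical subject and addresses; adjust to your environment.
BODY_FILE=/tmp/mail_body.txt
echo "Nightly job completed on $(hostname)" > "$BODY_FILE"
# -s sets the subject; arguments after -- are handed to the local
# sendmail, so -f sets the envelope sender address.
if command -v mail >/dev/null 2>&1; then
    mail -s "Nightly job report" admin@example.com -- -f reports@example.com < "$BODY_FILE"
fi
```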


Friday, July 30, 2010

Load Balancing - Never a dull moment

Currently doing some research into implementing an updated version of session aware loadbalancing for a specialized Moodle installation.

The solution I inherited with the system was Ultramonkey in a High Availability setup, but I figure I'm gonna throw in some dedicated loadbalancing in the new equation.

Updates to follow soon, as well as some more scripting!

Tuesday, July 20, 2010

Adding a virtual disk to a Linux Virtual Machine.

Needed to add another Virtual Disk to an existing machine in VMWare ESXi. Adding the disk to the VM Settings was the easy part and fairly straightforward. This particular guide is to get the Disk recognized on the actual Linux Server and mount it.

0. Login as root (Seeing a pattern yet?)

1. Identify the drive.
df -h and take note of the entries.
cd /dev/ && ls

By process of elimination, the entry in /dev/ that does not appear in the df -h listing is the new disk.

2. Create the filesystem
/sbin/mkfs -t ext3 /dev/xxx  where xxx is the drive identified in Step 1.

3. Mount the filesystem
mkdir /mnt/myfoldername
mount -t ext3 /dev/xxx /mnt/myfoldername  where myfoldername is - you guessed it!

4. Check to make sure it is there!
df -h
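The elimination step can also be scripted. This sketch compares the kernel's block device list against what df reports as mounted; device naming (and things like LVM mapper paths) will vary per system:

```shell
# List every block device the kernel knows about...
awk 'NR>2 {print "/dev/"$4}' /proc/partitions > /tmp/all_block_devs.txt
# ...and every device df reports as mounted.
df -h | awk 'NR>1 && $1 ~ /^\// {print $1}' > /tmp/mounted_devs.txt
# Devices in the first list that never appear in the second are
# candidates for the newly added virtual disk.
grep -v -F -f /tmp/mounted_devs.txt /tmp/all_block_devs.txt > /tmp/candidate_devs.txt || true
cat /tmp/candidate_devs.txt
```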



Monday, July 19, 2010

Backup, Backup Backup - never underestimate the value of it.

There I was, bitching about non-geek stuff, when *poof*, suffice it to say, never underestimate the value of a good backup strategy.

One of the Moodle virtual servers that I manage failed and (conveniently) the RAID 5 with the backups went up in smoke as well. It literally just disappeared. Usually I prefer Hard disk backup to tape but in this case, hosting a server for someone else (bureaucratic favour), my preference turned around and bit me in the ass.

The metadata on the RAID controller apparently became corrupt, thanks to some sort of power blip (even with enterprise-level UPS and generator backups) that happened at the worst possible time. Kind of like lightning hitting you in the dead of summer whilst casually walking in a field of lightning rods.

Anyhow, disaster aside, time to attempt either a recovery or a rebuild.

Part One - RAID Recovery.
In order to get the server to recognize the drives once more, the array needed to be deleted. Sounds scary, but technically the drives will not be wiped, per se. Then the array needed to be recreated using the same settings as its predecessor. Blind luck helped in this case, as the original guy who created the array took off, presumably because he got a premonition about this shitstorm and decided to flee.

So the Array got recreated and the server was able to see the 500 GB once again. YAY!...... well no. The 500GB appeared to be empty.

Time to see if we can work some magic.

Part two -  Attempted data recovery.
This will be done using UBCD for linux - two words. GET IT.

Using the PhotoRec tool by CG Security, which is included on the CD, an attempt is currently being made to recover the data.

An update will follow soon.


All we managed to recover was junk data. The server was rebuilt using an older backup. The client now understands the importance of a little request, like an extra HDD for backups.


Tuesday, May 18, 2010


Seems that the non-geek spell continues.
Currently working on some documentation, as well as:
Batch uploading users to myelearning (for manual accounts)
Batch creating courses (thanks to

Massive server changes coming soon, plus testing and deployment on VMWare ESXi, so plenty to write about in the upcoming weeks/months.


Thursday, April 22, 2010

Windows Batch Script to remove spaces from filenames

Part of managing Moodle (for me) includes using a Windows streaming server to host large media files.
On several occasions, files to be uploaded have arrived with non-web-friendly filenames, spaces et al.

This script fixes that.

This is a windows batch script, which needs to be executed in the folder containing the files, and will replace the spaces and dots in the filenames. Simply copy the script below, name it remove_spaces.bat and execute in the folder needed.

Pretty handy stuff.

Credit to

@echo off
setlocal enabledelayedexpansion
for %%j in (*.*) do (
set filename=%%~nj
set filename=!filename:.=_!
set filename=!filename: =_!
if not "!filename!"=="%%~nj" ren "%%j" "!filename!%%~xj"
)

**Updated on 05-07-2012
To strip the dots and spaces entirely, rather than replacing them with underscores, use the following in place of lines 5 and 6
set filename=!filename:.=!
set filename=!filename: =!

Wednesday, April 7, 2010


Seems like the geek stuff is on the back burner for the next few weeks.
Currently drafting some documentation and doing some planning for when I get new hardware. ;)
Moodle Networking is also on the cards...


Tuesday, March 23, 2010

In pursuit of a monitoring solution for CentOS Linux: Part VI - Zabbix Agent Installation

Continuation of "In pursuit of a monitoring solution for CentOS Linux"
Part I
Part II
Part III
Part IV
Part V 

Part VI - Zabbix Agent Install [Linux Specific]

0. Login as root on the server to be monitored.

1. Install prerequisite packages
yum install gcc

2. Copy Zabbix tarball and extract
mkdir /temp/zabbix && cd /temp/zabbix
copy the zabbix tarball into this folder (via scp from server, or it can be re-downloaded)
tar -xzvf zabbix-1.8.1.tar.gz
cd zabbix-1.8.1

3. Install the zabbix agent
./configure --enable-agent --prefix=/usr/local/zabbix
make install

4. Create the Zabbix User
useradd zabbix

5. Configure Zabbix Agent
echo 'zabbix_agent 10050/tcp' >> /etc/services
echo 'zabbix_trap 10051/tcp' >> /etc/services

6. Copy the sample configs for the agentd.
mkdir /etc/zabbix
cp misc/conf/zabbix_agentd.conf /etc/zabbix

7. Edit agentd config
nano /etc/zabbix/zabbix_agentd.conf
Hostname=your_server_name ***[Remember this name! Case sensitive]
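A minimal agentd config for this version needs little more than the server IP, the case-sensitive hostname, and a log file. The values below are placeholders, sketched against an example file:

```shell
# Example only; the server IP and hostname are placeholders.
cat > /tmp/zabbix_agentd.conf.example <<'EOF'
# IP of the Zabbix server allowed to query this agent
Server=192.168.0.10
# Must match the host name configured in the Zabbix web interface (case sensitive)
Hostname=your_server_name
LogFile=/tmp/zabbix_agentd.log
EOF
```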

8. Configure Agent to Automatically start
cp misc/init.d/redhat/zabbix_agentd_ctl /etc/init.d/zabbix_agentd
nano /etc/init.d/zabbix_agentd
Just below #!/bin/sh, add these two lines, including the # hash marks:
# chkconfig: 345 95 95
# description: Zabbix Agentd
Then enable the service:
chkconfig --level 345 zabbix_agentd on

9. Edit hostsfile to allow communication
nano /etc/hosts.allow
ALL: your.zabbix.server.ip : allow

10. Depending on your firewall setup, open the following ports: 10050 and 10051.

11. Start the Zabbix Agent Service
service zabbix_agentd start

12. Make sure the agent is operational
cat /tmp/zabbix_agentd.log

The expected output is a few lines, one of which should say Zabbix Agent is running...

Now login to the Zabbix Server web interface/ GUI.

13.  Setup the host on the server
Go to Configuration –> Hosts –> Create Host
Host: Your Server Name (use the same name from step 7)
DNS name: your dns info [your dns name only]
IP address: 192.168.x.x [Your Host ip address]
Port: 10050
Status: Monitored
Link with Template: Template_Linux


Documentation of Triggers and other necessary information on further configuration can be found on the Zabbix Website.


Wednesday, March 17, 2010

MySQL Maintenance for MyISAM Tables

It is recommended to perform regular re-indexing of the tables in a large Moodle installation. Since my install of Moodle uses the MyISAM storage engine, this will be accomplished using the myisamchk command.

This will mean stopping the MySQL service whilst the command is running!

0. Login as root on the database server.

1. Stop the Mysql Service
service mysqld stop

2. Run the myisamchk command
myisamchk --analyze --check --extend-check --force --verbose /var/lib/mysql/yourdatabasename/*.MYI

3. Start the Mysql Service
 service mysqld start

It is recommended to do this monthly for very large databases with high activity.
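The three steps drop neatly into a script you can point a monthly cron entry at; the script name and database path are placeholders (written under /tmp here for illustration):

```shell
# Sketch of a monthly MyISAM maintenance job; the script name and
# database name are placeholders.
cat > /tmp/myisam_maintenance.sh <<'EOF'
#!/bin/sh
# MySQL must be stopped while myisamchk rebuilds the indexes.
service mysqld stop
myisamchk --analyze --check --extend-check --force --verbose /var/lib/mysql/yourdatabasename/*.MYI
service mysqld start
EOF
chmod +x /tmp/myisam_maintenance.sh
```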


Monday, March 15, 2010

In pursuit of a monitoring solution for CentOS Linux: Part V - Zabbix Server Installation

Continuation of "In pursuit of a monitoring solution for CentOS Linux"
Part I
Part II
Part III
Part IV

Part V - Zabbix Server Installation

This install was done mostly from this guide located at

0. Login as Root on the Monitoring Server (assuming the same server in the Nagios Server Setup)

1. Install prerequisite packages
yum -y install ntp php php-bcmath php-gd php-mysql httpd mysql gcc mysql-server mysql-devel net-snmp net-snmp-utils net-snmp-devel net-snmp-libs curl-devel make

2. Fping is not part of the base repository, so the RPM is downloaded from the DAG repository.
rpm -Uvh fping-2.4-1.b2.2.el5.rf.i386.rpm
chmod 7555 /usr/sbin/fping

3. Create the Zabbix User
useradd zabbix

4. Download and Install Zabbix
mkdir /temp/zabbix && cd /temp/zabbix
tar -xzvf zabbix-1.8.1.tar.gz

5. Start/Restart Mysql and create the empty database shell
service mysqld restart
mysql -u root -p
mysql> CREATE DATABASE zabbix character set utf8;
mysql> GRANT DROP,INDEX,CREATE,SELECT,INSERT,UPDATE,ALTER,DELETE ON zabbix.* TO zabbixmysqluser@localhost IDENTIFIED BY 'zabbixmysqlpassword';
mysql> quit;

6. Create the DB Schema
cd zabbix-1.8.1
cat create/schema/mysql.sql | mysql -u zabbixmysqluser -pzabbixmysqlpassword zabbix
cat create/data/data.sql | mysql -u zabbixmysqluser -pzabbixmysqlpassword zabbix
cat create/data/images_mysql.sql | mysql -u zabbixmysqluser -pzabbixmysqlpassword zabbix
./configure --enable-server --prefix=/usr/local/zabbix --with-mysql --with-net-snmp --with-libcurl
make install
make clean

7. Compile the Zabbix Agent
./configure --enable-agent --prefix=/usr/local/zabbix --enable-static
make install

8. Add Zabbix Server and Agent ports to the Services file
echo 'zabbix_agent 10050/tcp' >> /etc/services
echo 'zabbix_trap 10051/tcp' >> /etc/services

9. Copy the sample configs to /etc/zabbix for server and agentd.
mkdir /etc/zabbix
cp misc/conf/zabbix_agentd.conf /etc/zabbix
cp misc/conf/zabbix_server.conf /etc/zabbix

10. Modify Zabbix server configuration
nano /etc/zabbix/zabbix_server.conf
Change the following entries
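At minimum, the entries to change are the database credentials created in step 5. As a sketch, written to an example file (the user and password match the placeholders used in the GRANT above):

```shell
# Example only; credentials must match the GRANT from step 5.
cat > /tmp/zabbix_server.conf.example <<'EOF'
DBName=zabbix
DBUser=zabbixmysqluser
DBPassword=zabbixmysqlpassword
EOF
```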

11. Modify the Agent configuration
nano /etc/zabbix/zabbix_agentd.conf
Change the following entries

12. Copy Server and Agent init scripts
cp misc/init.d/redhat/zabbix_agentd_ctl /etc/init.d/zabbix_agentd
cp misc/init.d/redhat/zabbix_server_ctl /etc/init.d/zabbix_server

13. Edit Agentd and Server config
nano /etc/init.d/zabbix_agentd (Note the # hash marks, they are necessary), add near the top, just below #!/bin/sh:
# chkconfig: 345 95 95
# description: Zabbix Agentd

nano /etc/init.d/zabbix_server (again, note the # Hash marks, they are required), add near the top, just below #!/bin/sh:
# chkconfig: 345 95 95
# description: Zabbix Server

14. Configure Services to start automatically on boot.
chkconfig zabbix_server on
chkconfig zabbix_agentd on
chkconfig httpd on
chkconfig mysqld on

15. Depending on your firewall setup, open the following ports: 80, 10050, and 10051.

16. Copy phpfiles to site root
cp -r frontends/php /var/www/html/zabbix

17. Tweak php.ini
nano /etc/php.ini
max_execution_time = 300
date.timezone = Pick a Timezone from here

18. Start Apache and make the zabbix folder writable
service httpd start
chmod 777 /var/www/html/zabbix/conf

19. Launch http://your.server.ip/zabbix in your web browser.
A Setup Screen should be displayed.
Accept the EULA, and confirm all prerequisites are met on the next screen.
Set up the DB connection using the user and password created in step 5.

20. When the setup is complete, the webroot needs to be secured and the services restarted.
chmod 755 /var/www/html/zabbix/conf
mv /var/www/html/zabbix/setup.php /var/www/html/zabbix/setup.php.bak
service zabbix_agentd start
service zabbix_server start

21. Zabbix is now accessible using http://your.server.ip/zabbix
username: admin
(no password)

On to configuring the Zabbix Agents!


Tuesday, March 9, 2010

In pursuit of a monitoring solution for CentOS Linux: Part IV - Nagios Client Installation

Continuation of "In pursuit of a monitoring solution for CentOS Linux"
Part I
Part II
Part III
Part IV - Nagios Client Installation

This is accomplished using Nagios Plugins and the NRPE Daemon to report information to the Monitoring Server.

0. Login as root on Client (Server to be monitored)

1. Install prerequisite packages - openssl-devel and xinetd are needed.
yum install openssl-devel xinetd

2. Create nagios account
useradd nagios
passwd nagios

3. Create folder to store downloaded files
mkdir -p /opt/Nagios/Nagios_Plugins && cd /opt/Nagios/Nagios_Plugins
**Go to and get the URL for the latest versions of the software. Use wget to download to directory.

4. Extract Files
tar xzf nagios-plugins-1.4.13.tar.gz
cd nagios-plugins-1.4.13

5. Compile and Configure plugins
./configure --with-nagios-user=nagios --with-nagios-group=nagios
make install

6. Change permissions on plugins and plugin directory
chown nagios.nagios /usr/local/nagios
chown -R nagios.nagios /usr/local/nagios/libexec

Install NRPE Daemon

7. Create folder to store downloaded files
mkdir -p /opt/Nagios/Nagios_NRPE && cd /opt/Nagios/Nagios_NRPE
**Go to and get the URL for the latest versions of the software. Use wget to download to directory.

8. Extract the files
tar -xzf nrpe-2.12.tar.gz
cd nrpe-2.12

9. Compile and Configure NRPE
./configure

Expected Output
General Options:
NRPE port: 5666
NRPE user: nagios
NRPE group: nagios
Nagios user: nagios
Nagios group: nagios

make all
make install-plugin
make install-daemon
make install-daemon-config
make install-xinetd

10. NRPE Configuration
nano /etc/xinetd.d/nrpe
edit line:
only_from = your.nagios.SERVER.address

nano /etc/services

add line:
nrpe 5666/tcp # NRPE

11. Restart Xinetd and Set to start at boot
chkconfig xinetd on
service xinetd restart

12. Open Port 5666 on Firewall

13. Test NRPE Daemon

netstat -at |grep nrpe
Expected Output:
tcp 0 0 *:nrpe *:* LISTEN

/usr/local/nagios/libexec/check_nrpe -H localhost
Expected Output:
NRPE v2.12

At this point further configuration is needed on the Nagios Server, to accept data from the client.

14. Login as root on Nagios Server

15. Download and Install NRPE Plugin

mkdir -p /opt/Nagios/Nagios_NRPE && cd /opt/Nagios/Nagios_NRPE
**Go to and get the URL for the latest versions of the software. Use wget to download to directory.

16. Extract the Files
tar -xzf nrpe-2.12.tar.gz
cd nrpe-2.12

17. Compile and Configure NRPE
make all
make install-plugin

18. Test connection to client
/usr/local/nagios/libexec/check_nrpe -H client.ip.address
Expected Output:
NRPE v2.12

19. Create NRPE Command Definition

nano /usr/local/nagios/etc/objects/commands.cfg

Add the following:
# Command to use NRPE to check remote host systems

define command{
command_name check_nrpe
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

20. Create Linux Object template
nano /usr/local/nagios/etc/objects/linux-box-remote.cfg

Add the following and replace the values "host_name", "alias", and "address" with the values that match your setup:
** The "host_name" you set in the "define host" section must match the "host_name" in each "define service" section **
define host{
name linux-box-remote ; Name of this template
use generic-host ; Inherit default values
check_period 24x7
check_interval 5
retry_interval 1
max_check_attempts 10
check_command check-host-alive
notification_period 24x7
notification_interval 30
notification_options d,r
contact_groups admins
register 0 ; Template only - do not register
}

define host{
use linux-box-remote ; Inherit default values from a template
host_name Centos5 ; The name we're giving to this server
alias Centos5 ; A longer name for the server
address ; IP address of the server
}

define service{
use generic-service
host_name Centos5
service_description CPU Load
check_command check_nrpe!check_load
}
define service{
use generic-service
host_name Centos5
service_description Current Users
check_command check_nrpe!check_users
}
define service{
use generic-service
host_name Centos5
service_description /dev/hda1 Free Space
check_command check_nrpe!check_hda1
}
define service{
use generic-service
host_name Centos5
service_description Total Processes
check_command check_nrpe!check_total_procs
}
define service{
use generic-service
host_name Centos5
service_description Zombie Processes
check_command check_nrpe!check_zombie_procs
}

21. Activate the linux-box-remote.cfg template
nano /usr/local/nagios/etc/nagios.cfg
And add:
# Definitions for monitoring remote Linux machine
cfg_file=/usr/local/nagios/etc/objects/linux-box-remote.cfg

22. Verify Nagios Config Files

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
Expected Output:
Total Warnings: 0
Total Errors: 0

23. Restart Nagios Service
service nagios restart

Please check the Nagios website if any errors are encountered. This install went off relatively smoothly, no major errors.

Only the standard NRPE plugins were demonstrated in this install; please visit the Nagios website for more plugins!


In pursuit of a monitoring solution for CentOS Linux: Part III - Nagios Server Installation

Continuation of "In pursuit of a monitoring solution for CentOS Linux"
Part I
Part II

Part III and IV covers installation of the Nagios Server and configuration of the Clients.

Nagios Server Install on CentOS 5.x

For the purpose of this install, Server will refer to the monitoring server and Client will refer to the servers being monitored.

0. Log in to Server as root

1. Install the necessary packages

yum install httpd gcc glibc glibc-common gd gd-devel php

2. Create user and group for Nagios
useradd -m nagios
groupadd nagcmd
usermod -a -G nagcmd nagios
usermod -a -G nagcmd apache

3. Create directory to store Nagios Downloaded files

mkdir /opt/Nagios && cd /opt/Nagios
**Go to and get the URL for the latest versions of the software. Use wget to download to directory.

4. Extract files
cd /opt/Nagios
tar xzf nagios-3.0.6.tar.gz
cd nagios-3.0.6

5. Compile and Configure
./configure --with-command-group=nagcmd
make all
make install
make install-init
make install-config
make install-commandmode
make install-webconf

6. Create Web Interface Login User
Create user "nagiosadmin" (remember the password assigned!)
htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

7. Restart Apache
service httpd restart

Install the Nagios Plugins

8. Extract files
cd /opt/Nagios
tar xzf nagios-plugins-1.4.13.tar.gz
cd nagios-plugins-1.4.13

9. Compile and Configure Nagios Plugins
./configure --with-nagios-user=nagios --with-nagios-group=nagios
make install

10. Configure the email address to send notifications to
nano /usr/local/nagios/etc/objects/contacts.cfg
email nagios@localhost ; << CHANGE THIS TO YOUR EMAIL ADDRESS

11. Verify config file
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

Expected output
Total Warnings: 0
Total Errors: 0

12. Start Nagios and set to start automatically on reboot.
chkconfig --add nagios
chkconfig nagios on
chkconfig httpd on
service nagios start

13. Log on to the server from a web browser (http://your.server.ip/nagios)

On to Part IV - Nagios Client install/configuration


Monday, March 8, 2010

In pursuit of a monitoring solution for CentOS Linux: Part II - The Winners

This is an update to my earlier post: In pursuit of a monitoring solution for CentOS Linux

Based on testing over the past couple of weeks, the winner, or rather winners, turned out to be Nagios and Zabbix.

Nagios proved to be a great monitoring solution: highly customizable, with plugins that were simple to modify.

Zabbix is a lot more complex, but offers very nifty graphs for analysis. Graphs are always good ;)

Future posts will have the Nagios and Zabbix installation instructions for CentOS5.x - both the Client and the Server.


Monday, March 1, 2010

Installing memcached on CentOS 5.x

Memcached is a distributed memory object caching system - as quoted from

In other words, it caches the results of common database queries, which typically reduces database load.

It is not part of the standard CentOS distro, so installation of the DAG repositories was necessary.

0. Login as root on your webserver(s)

This install is specific to the 32-bit version of CentOS 5.x

1. Get the rpmforge rpm (if not already installed)

2. Install the rpmforge rpm
rpm -Uvh rpmforge-release-0.3.6-1.el5.rf.i386.rpm

3. Install memcached and other packages required
yum install --enablerepo=rpmforge memcached php-pecl-memcache

4. Modify php.ini
nano /etc/php.ini

Add the extension line under the Dynamic Extensions section.

5. Create memcached configuration file
nano /etc/sysconfig/memcached
Add to file:
# Specify port
# Specify user for the service to run as
# Specify maximum number of connections
# Set cache size based on memory available
# Specify which interface to listen on - security measure
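For reference, the CentOS sysconfig file for memcached uses the following variables. The numbers below are illustrative defaults, and the 127.0.0.1 bind assumes memcached runs on the same host as the webserver:

```shell
# Example /etc/sysconfig/memcached; values are illustrative defaults.
cat > /tmp/memcached.sysconfig.example <<'EOF'
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
# Listen only on localhost - security measure
OPTIONS="-l 127.0.0.1"
EOF
```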

6. Add the memcached user (the same username entered in step 5)
useradd memcached

7. Start the memcached daemon and set to automatically start on reboot
service memcached start
chkconfig memcached on

8. Test memcached
Original Script located at
Create script in site root, or other preferred location.
touch /var/www/html/memcachedtest.php

Paste the following code:

<?php
//memcached simple test
$memcache = new Memcache;
$memcache->connect('localhost', 11211) or die ("Could not connect");
$key = md5('42data');  //something unique
for ($k=0; $k<5; $k++) {
    $data = $memcache->get($key);
    if ($data == NULL) {
        $data = array();
        //generate an array of random shit
        echo "expensive query";
        for ($i=0; $i<100; $i++) {
            for ($j=0; $j<10; $j++) {
                $data[$i][$j] = 42;  //who cares
            }
        }
        $memcache->set($key, $data, 0, 3600);  //cache for an hour
    } else {
        echo "cached";
    }
}
?>
Run it to see if it works.
On the first load you should see "expensive query".
If memcached is working, refreshing the page should show "cached" instead.


Wednesday, February 24, 2010

Stubborn Data CD Recovery

Have you ever come across a stubborn data CD that just refuses to copy? In my case even creating an ISO image failed, in both Windows and Linux. The Windows PC would not even recognize the CD, on either optical drive!

Had to use a unique combination to recover/copy this data using Linux and some handy commands.

0. Mount CD into drive.

1. Navigate to the CD and copy the data
cp -r /media/* /path/to/copied/files

2. Compare the Original and the copy for any differences
diff -qr /media/ /path/to/copied/files

Got no errors this time, but in case any differences are found, it should be as simple as navigating to the source file and copying it to the backup directory.

Being on ancient hardware meant that this Linux machine did not have a CD burner. The files were copied to a Windows PC to burn onto CD.



Friday, February 19, 2010

In pursuit of a monitoring solution for CentOS Linux

It's about that time where it has become almost necessary to setup a monitoring/analysis solution for the linux servers. The hardware is getting a bit outdated and empirical data speaks for itself when asking for new stuff!

Based on my research, there is no one perfect solution - especially when it comes to Open Source and Linux.

At the very least, the following will need to be monitored:
  • Apache
  • Mysql
  • Network Traffic
  • Disk Space
  • Server Load
  • Memory Utilization
  • Swap usage
  • Processes

In the next few weeks, some tests will be done using the following software offerings (in no particular order):
  • cacti
  • munin
  • nagios
  • opennms
  • zabbix
  • zenoss
Which one will be the best fit? Time and Testing will tell.


Wednesday, February 17, 2010

Recovery of a DNS server

There was a DNS server sitting on the network that had been around for ages. Thermal overload and an 'unclean reboot' on an old Red Hat install finally killed it. At least there was failover support!

After some discussions with some colleagues, it was eventually decided that an attempted revival was out of the question.  Rebuild time!

The weapon OS of choice was CentOS (of course), and an attempt was made to salvage whatever data and configurations possible from the old server.

Tomorrow: an update on what was done.

Luckily, we were able to retrieve the named.conf and the zone databases from the dead server using a SLAX Live CD.

0. Assuming a Clean install of CentOS and Ethernet / other system configurations complete - log in as root

1. The first thing is to update the files to the latest version
yum update

2. Install the following packages
yum install bind bind-chroot bind-libs bind-utils

It should be noted that the old DNS server did not use the chroot security option, which is essentially a 'jail' to prevent full system access to a hacker using any bind exploit. This meant that the locations of the conf file and the databases needed to be changed for the new install.

3. Rename the original named.conf
mv /var/named/chroot/etc/named.conf /var/named/chroot/etc/named.conf.orig

4. Copy the named.conf which was taken from the old server
cp /location/of/backup/named.conf /var/named/chroot/etc/

5. Copy the zones
cp /location/of/backup/zones/* /var/named/chroot/var/named/

6. Check to see if the named service is operational
service named restart

Once it starts without problems, proceed to step 7
Troubleshooting? Useful resources here:

7. Set named to automatically start on reboot.
chkconfig named on

8. Configure the firewall
If the server is only being used for DNS, only allow incoming DNS and perhaps SSH/Telnet.

Set firewall to Enabled (*)
Go to Customize
Leave Trusted Devices and Masquerade Devices empty (dependent on your configuration)
Add to "other ports" section
53:tcp 53:udp
Check box by either SSH or Telnet (whichever preferred)

Save and exit

9. Check to see if service is operational
service named status

This setup was based on a restore, so a new install of Bind will need additional tweaking dependent on the environment.

Update: Permissions issue was preventing updates to the server.

10. Further permission fix

cd /var/named/chroot/etc
*Backup named.conf
cp -p named.conf named.conf.bkp

cd /var/named/chroot/var/named
chown named:named db.*

cd /var/named
chown -R named:named ./chroot

chmod g+w /var/named/chroot/var/named


Monday, February 8, 2010

Installing the Eaccelerator cache for php - on CentOS

eAccelerator is an opensource optimizer/cache for php. For a Moodle install, it typically brings down server load (a lot!) by caching frequently requested content. The following setup was used for CentOS 5.x, but should be similar for the other Linux flavours.

0. Login as root.

1. Server preparation
The following packages are needed for the eAccelerator install. From the terminal:
yum  -y install php-devel
yum -y groupinstall 'Development Tools'

2. Get the eAccelerator Package
mkdir /temp (if not already created)
cd /temp
tar xvfj eaccelerator-

3. Configure and Install
cd into the extracted eaccelerator folder, then:
phpize
./configure --with-eaccelerator-shared-memory
make install

4. Create Cache Directory and set permissions
mkdir /var/cache/eaccelerator
chmod 777 /var/cache/eaccelerator

5. Create the config file
nano /etc/php.d/eaccelerator.ini
Add the following lines:
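A typical eaccelerator.ini contains directives like the following; shm_size is an example value, and cache_dir matches the directory created in step 4 (sketched here to an example file):

```shell
# Example eaccelerator.ini; shm_size is an illustrative value.
cat > /tmp/eaccelerator.ini.example <<'EOF'
extension="eaccelerator.so"
eaccelerator.shm_size="16"
eaccelerator.cache_dir="/var/cache/eaccelerator"
eaccelerator.enable="1"
eaccelerator.optimizer="1"
EOF
```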


6. Restart the webserver
service httpd restart

7. Check to see if installed properly
nano /var/www/html/phpinfo.php
Add to file:
<?php phpinfo(); ?>
Save and exit
Now call the script from your internet browser: http://yourservername/phpinfo.php
Look for the eAccelerator section.
After a short time, the Cached Scripts entry should be > 0.
It works!


Thursday, February 4, 2010

Backing up Moodle Datafiles to a remote server

As part of any disaster recovery plan, backup of the MoodleData folder is also critical. The limiting variable in this equation is disk space - there never seems to be enough!

Making regular backups of the entire MoodleData folder is a bit impractical - at least in my scenario, where it is over 100 GB. Instead, I chose to mirror the moodledata folder on a different server, using rsync. This is purely for a disaster recovery scenario and will not consider data recovery on a per-user basis. In an environment of 22000+ users, this is highly impractical.

Assumptions: Another server has already been configured with CentOS, all necessary firewall/SELinux/networking settings have been completed.
In this example, the two machines are simply the Moodle Server and the Backup Server - substitute your own hostnames or IP addresses wherever they appear below.

0. Login as root on Moodle Server

A "trust" needs to be setup between the Moodle Server and the Backup Server, so that the automated backup does not fail due to password prompts! This is also an added measure of security as well, so that root / sudo passwords are not stored in plain-text in the shell scripts.

1. Setup a trust for password free rsync over ssh
ssh-keygen -t dsa
cat ~/.ssh/id_dsa.pub | ssh root@yourbackupserver 'cat >> .ssh/authorized_keys'
You will be prompted for the password ONCE. Enter it correctly and that's it!

2. Test the password free connection (this will only be one way). From Moodle Server:
ssh -l root yourbackupserver
The backup server should be accessible without entering a password!

3. On the Moodle Backup, create a folder on the largest available disk.
mkdir /path/to/mirror

4. On the Moodle Server, create the script that will mirror the moodledata folder.
cd /temp/scripts
touch moodledatamirror.sh
chmod +x moodledatamirror.sh

5. Edit the script -  log each time the folder is mirrored. Ensure that the folder exists [/temp/logs]
Add to file:

rsync --delete-after -e ssh -avz /path/to/your/moodledata/folder/ root@yourbackupserver:/path/to/mirror/
date >> /temp/logs/moodledatamirror.log

Save and exit
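Putting steps 4 and 5 together, here is a sketch of the whole mirror script. The paths and the yourbackupserver hostname are placeholders - substitute your own. It echoes the rsync command and only executes it when RUN=1, so you can eyeball it before letting it loose:

```shell
#!/bin/sh
# Mirror the moodledata folder to the backup server, then log the run.
# All paths and the hostname below are placeholders - substitute your own.
SRC="/path/to/your/moodledata/"
DEST="root@yourbackupserver:/path/to/mirror/"
LOG="/temp/logs/moodledatamirror.log"

CMD="rsync --delete-after -e ssh -avz $SRC $DEST"
echo "$CMD"                      # show the command before running it
if [ "${RUN:-0}" = "1" ]; then
    $CMD && date >> "$LOG"       # append a timestamp only on success
fi
```

Running it without RUN=1 just prints the rsync command, which is a cheap way to check the paths before the first real transfer.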

6. Test script
cd /temp/scripts
sh moodledatamirror.sh
Some output should be visible and the script will notify when complete.

7. Check to see if the files went across properly!
Login via ssh to Moodle Backup server
ssh -l root yourbackupserver
cd /path/to/mirror
ls -all

Compare the results of the list to the original folder on the Moodle Server!
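To make that comparison less eyeball-driven, you can compare file counts on each side. The sketch below demonstrates the idea on two scratch directories (on the real servers, run the find on each host and compare the numbers):

```shell
#!/bin/sh
# Compare file counts between a source folder and its mirror.
# Scratch directories stand in for moodledata and the remote mirror here.
SRC=$(mktemp -d)
DST=$(mktemp -d)
touch "$SRC/file1" "$SRC/file2" "$SRC/file3"
cp -a "$SRC/." "$DST/"                 # stand-in for the rsync transfer

src_count=$(find "$SRC" -type f | wc -l)
dst_count=$(find "$DST" -type f | wc -l)
echo "source: $src_count files, mirror: $dst_count files"
```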

8. Add the script to the crontab to execute nightly
Schedule to run this script at periods of low activity - 2:00am should be fine for most installs. Remember to factor in the Database backup time!

nano /etc/crontab
Append the following:

#moodledata mirror script

00 2 * * * root /bin/sh /temp/scripts/moodledatamirror.sh

The MoodleData files will now be mirrored nightly!
This same strategy can be used to transfer the database backups to the backup server.


Wednesday, February 3, 2010

Restoring a Moodle Database Backup

This is an addendum to my earlier post: Backing Up your Moodle Database

Now you have a database backup. In the event that something horrible happens (knock on wood) and the database needs to be restored...
This assumes that the database is located in the default MySQL installation location: /var/lib/mysql/yourdbname

0. Login as root on the database server

1. Stop the mysqld service
service mysqld stop

2. Backup the current corrupted database (in case forensic analysis is needed later)
cp -r /var/lib/mysql/yourdbname /var/lib/mysql/yourdbname_backup

3. Empty the files from the database.
cd /var/lib/mysql/yourdbname
rm -rf *

4. Copy the backup to the empty database shell
cp /path/to/your/backup/* /var/lib/mysql/yourdbname/
*make sure that the folder /var/lib/mysql/yourdbname/ contains .MYI, .MYD, .frm and .opt files ONLY.

5.  Ensure that permissions are correctly set.
cd /var/lib/mysql/yourdbname/
chown mysql:mysql *
chmod 771 *
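As a sanity check, here is what the chmod in step 5 does, demonstrated on a scratch directory with stand-in table files (chown mysql:mysql is omitted because it needs root and the real server's mysql user):

```shell
#!/bin/sh
# Show the effect of step 5's chmod 771 on some stand-in table files.
DB=$(mktemp -d)
touch "$DB/mdl_config.MYD" "$DB/mdl_config.MYI" "$DB/mdl_config.frm"
chmod 771 "$DB"/*                      # same as: chmod 771 * inside the db folder
mode=$(stat -c '%a' "$DB/mdl_config.MYD")
echo "$mode"
```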

6. Start the Mysql Service.
service mysqld start

7. Run a Check/Repair/Optimize on the restored database.
This is to ensure consistency of the data you just restored.
There is no need to login to mysql, simply execute from the terminal.

mysqlcheck yourdbname -c -o -a -r -f -h localhost -u yourdbusername -pyourdbpassword
(note: there is no space between -p and the password, otherwise mysqlcheck prompts for one and treats the password as a table name)

Note: If the database for the Moodle install is large, a weekly or nightly optimization can be configured to minimize the chances of irreparable corruption.
This can be done similarly to the cron job in my earlier post.

7.1 Create script
cd /temp/scripts
touch database_check.sh
chmod +x database_check.sh
7.2 Edit the script
nano database_check.sh
Add the following (I chose to log each time the script runs)

mysqlcheck yourdbname -c -o -a -r -f -h localhost -u yourdbusername -pyourdbpassword > /temp/logs/database_check_$(date +%Y%m%d)_$(date +%H%M).txt
The log folder needs to be created so the script does not throw an error!
cd /temp
mkdir logs
7.3 Test
sh database_check.sh

Check the latest logfile for output
cat /temp/logs/database_check_xxxxxx.txt
You should see some output with the table names and status next to it - e.g
mdl_config      OK


Backing up Moodle Database

This particular utility is only applicable to MySQL databases using the MyISAM storage engine. I could have used the mysqldump utility, but I prefer the mysqlhotcopy tool as it executes much faster on larger databases with less overhead.

So the Moodle install is up and running. Time for a simple database backup using the Mysqlhotcopy tool. It allows for literally "hot" copying of the database without stopping the service!

This can be shell scripted to run nightly.

0. Login as root.

1. Make a script directory and create a bare shell script
mkdir /temp/scripts && cd /temp/scripts
touch database_backup.sh
chmod +x database_backup.sh

2. Edit the script using vi or nano (I <3 nano)
cd /temp/scripts
nano database_backup.sh
Add lines (edit to include your database name, username, password etc). Since the database password will be stored in the file as plaintext, make sure the folder is owned by root. If preferred, the date can be appended to the end of the foldername for simple identification.
mysqlhotcopy -u yourdbusername -p yourdbpassword --addtodest yourdbname /path/to/backup/folder
mv /path/to/backup/folder /path/to/backup/folder_$(date +%Y%m%d)_$(date +%H%M)

Save and exit
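The mv line above stamps the backup folder name with the date and time. A quick sketch of just that naming logic (the path is a placeholder):

```shell
#!/bin/sh
# Build the date-stamped folder name used by the mv line in step 2.
stamp="$(date +%Y%m%d)_$(date +%H%M)"
dest="/path/to/backup/folder_$stamp"   # placeholder path
echo "$dest"
```

Note that both date calls need the $( ) command substitution; leaving the $ off the second one produces a literal "(date +%H%M)" in the folder name.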

3. Test the script to make sure it works as advertised!
cd /temp/scripts
sh database_backup.sh

Once it runs without errors, it can be set to run automatically via cron.

4. Add the script to the crontab to execute nightly
I prefer to run this script at periods of low activity - 1:00am should be fine for most installs.

nano /etc/crontab
Append the following:

#database backup script

00 1 * * * root /bin/sh /temp/scripts/database_backup.sh

This tutorial demonstrates how to make a copy of the raw database files on the same server that the database resides on.

Later on (when I get some time), I  will show how to extend the strategy to compress the backup and transfer the files to another server - as well as how to do a Database restore from the raw files.


Tuesday, February 2, 2010

Moodle Installation on CentOS 5.x

This is for a single server Moodle Install.

Assumption: A base install of CentOS 5.x is completed, and mysqld and httpd have already been installed.

1. Server preparation - Login as root or superuser
yum install php* php-mysql httpd* mysql*
Not all of these packages are necessary, but if you are going to install any plugins, they may be needed.

2. Make sure Mysql and Http are started and can start automatically on reboot.
service mysqld start
service httpd start
chkconfig httpd on
chkconfig mysqld on

3. Assign a root password to mysql
/usr/bin/mysqladmin -u root password 'yourpasswordhere'
It is recommended to set a secure root password!

4. Login to Mysql, Create an empty database and assign a user for moodle.
mysql -u root -pyourpasswordhere
(note: there is no space between -p and the password)
At the mysql prompt:

GRANT select,insert,update,delete,create,drop,index,alter 
ON mydbname.*
TO mymoodleuser@localhost IDENTIFIED BY 'moodleuserpassword';

flush privileges;


5. Download moodle and extract to webserver root.
cd /tmp
Download the latest moodle-weekly-19.tgz from download.moodle.org into /tmp [check for latest version!!!]
tar xzf moodle-weekly-19.tgz
cd moodle-weekly-19
cp -r * /var/www/html/
chown -R root:root /var/www/html/

6. Create Moodledata folder [not in webserver root!]
mkdir /usr/moodledata
cd /usr/moodledata
chown -R apache:apache /usr/moodledata

Note: I use the paranoid file permissions (see Paranoid Moodle Permissions!).

7. Setup your config.php
Please see here for further details.

8. Edit Apache config file 
I prefer nano, not VIM
nano /etc/httpd/conf/httpd.conf
Add lines to end of file:
[Directory "/var/www/html"]*
DirectoryIndex index.php
AcceptPathInfo on
AllowOverride None
Options None
Order allow,deny
Allow from all
[/Directory]*

*substitute the [] with <> !!

9. Setup the Moodle Cron job.
nano /etc/crontab
add line
*/5 * * * * /usr/bin/wget -O /dev/null http://localhost/admin/cron.php

10. Open up a web browser on the server and hit: http://localhost/admin

Happy Moodling!


Monday, February 1, 2010

First and foremost...

I need to start collecting my notes, which seem to be scattered all over the place.
A memory stick here, a disconnected IDE drive there, perhaps even on one of the (many!) desktops that have to deal with my nonsense and, last but not least, the notepads that seem to have more doodling than actual writing.