Thursday, December 1, 2011
I'll be on vacation for the rest of 2011, but I've lined up a couple of items to work on:
1. Splitting the statistics processing from the moodlecron
2. Investigating MySQL replication as part of the disaster recovery plan
3. The much-dreaded Moodle 2.x upgrade strategy.
Cheers,
-n
Monday, November 7, 2011
Moodle Apache/DB Connection Errors!
Moodle's config.php includes a setting to email database connection errors to an address:
$CFG->emailconnectionerrorsto = 'myemail@org.com';
I noticed some random errors popping up in my inbox at the same time my stats script is configured to run.
Something along the lines of:
WARNING: Database connection error: http://mysitemoodle.com
I found this especially curious, since my Zabbix server was not reporting any errors. So I dug a little deeper and ran a couple of the MySQL reporting tools I use.
Turns out I was maxing out my max_connections config.
My old value was 385, which I had originally increased to 500 with the deployment of the LB/HA Moodle environment (earlier post).
I had to do some more tweaking of the MySQL configuration file, /etc/my.cnf.
The default wait_timeout is very long: 14400 sec.
In my original deployment in May this year, I reduced this to 3600 sec.
With these errors randomly cropping up during the statsrun, I had to tweak it yet again.
I reduced the key_buffer_size so that max_connections could be increased to 550 without the server over-allocating RAM. I also reduced the wait_timeout to 1800 sec, which is fine considering the hardware my server uses (2 quad-core processors, 32GB RAM).
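For reference, the relevant my.cnf lines ended up looking roughly like this (an illustrative sketch - the key_buffer_size figure here is an assumption to show the trade-off, not my actual value):
[mysqld]
max_connections = 550      # raised from 500 to absorb the statsrun spike
wait_timeout = 1800        # seconds; reclaim idle connections faster
key_buffer_size = 12288M   # reduced so the extra connections don't over-allocate RAM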
A quick save and restart of the mysql service was all it took.
96+ hours in and I'm still actively monitoring: no more connection errors, and we're down to a max of about 300 connections at peak.
-n
Friday, October 28, 2011
Useful Queries for Moodle
I've compiled a list of useful queries for Moodle, aggregated from various sources on the internet. Credit to the original authors, whoever they may be (sorry for not keeping track).
# Top 20 courses in terms of student enrolment for a specific year or term (assuming the year code is built into course idnumbers)
SELECT DISTINCT c.fullname, c.shortname, count( usr.id )
FROM mdl_course c
INNER JOIN mdl_context cx ON c.id = cx.instanceid
AND cx.contextlevel = '50'
INNER JOIN mdl_role_assignments ra ON cx.id = ra.contextid
INNER JOIN mdl_role r ON ra.roleid = r.id
INNER JOIN mdl_user usr ON ra.userid = usr.id
WHERE r.name = 'Student' and c.idnumber like '%\_2010%'
GROUP BY c.fullname
ORDER BY count( usr.id ) DESC
LIMIT 0 , 20
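A quick way to run any of these from the shell and keep the output (a minimal sketch; 'moodle' and top20.sql are placeholder names for your database and a file containing the query):
mysql -u root -p moodle < top20.sql > top20.txt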
# List of lecturers and how many courses they teach
SELECT user.firstname, user.lastname, user.email, count( role.id ) AS courses
FROM mdl_course course
JOIN mdl_context context ON ( course.id = context.instanceid
AND context.contextlevel =50 )
JOIN mdl_role_assignments assign ON ( context.id = assign.contextid )
JOIN mdl_role role ON ( assign.roleid = role.id )
JOIN mdl_user user ON ( assign.userid = user.id )
WHERE role.shortname = 'editingteacher'
GROUP BY user.firstname, user.lastname, user.email
ORDER BY user.lastname, user.firstname
# Courses with MMS streaming media for a specific year or term (assuming the year code is built into course shortnames)
SELECT DISTINCT c.shortname
FROM mdl_resource r, mdl_course c
WHERE r.reference LIKE '%mms://%'
AND c.shortname LIKE '%\_2010%'
AND r.course = c.id
# List of courses and number of activities for a specific year or term (assuming the year code is built into course shortnames), sorted by total activity count
SELECT m.id, m.`shortname` , m.fullname, cCount.totalcount
FROM mdl_course m
LEFT JOIN (
SELECT courseCount.course, sum( courseCount.subcount ) AS totalcount
FROM (
SELECT course, count( * ) AS subcount
FROM mdl_resource
GROUP BY course
UNION ALL SELECT course, count( * ) AS subcount
FROM mdl_quiz
GROUP BY course
UNION ALL SELECT course, count( * ) AS subcount
FROM mdl_assignment
GROUP BY course
UNION ALL SELECT course, count( * ) AS subcount
FROM mdl_survey
GROUP BY course
UNION ALL SELECT course, count( * ) AS subcount
FROM mdl_label
GROUP BY course
UNION ALL SELECT course, count( * ) AS subcount
FROM mdl_glossary
GROUP BY course
UNION ALL SELECT course, count( * ) AS subcount
FROM mdl_wiki
GROUP BY course) AS courseCount
GROUP BY courseCount.course) AS cCount ON cCount.course = m.id
WHERE m.shortname LIKE '%\_2010%'
ORDER BY cCount.totalcount ASC
Labels:
Moodle,
MySQL,
queries,
SQL Moodle Query
Deploying a loadbalanced Apache front end
This particular configuration is being used to support a large Moodle installation.
The front end comprises:
1 Loadbalancer
3 webservers
The load balancer virtual IP
Prerequisites:
Fully built and configured CentOS Apache-based webservers
One clean CentOS server to act as loadbalancer
IPs:
Webserver 1 - 192.168.100.1
Webserver 2 - 192.168.100.2
Webserver 3 - 192.168.100.3
Loadbalancer - 192.168.100.5
Virtual IP - 192.168.100.10
On Each Webserver
1. Open a terminal.
2. Create a test file on each webserver for the load balancer health check:
echo foo > /var/www/html/test.html
3. Create unique identifiers on each webserver to ensure the load balancing algorithm works. These files will be deleted later on.
On each webserver x (replace x with associated number)
echo "This is Webserver X" > /var/www/html/index.html
4. Create a loopback alias on each webserver so the virtual IP can terminate there:
nano /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.100.10
NETMASK=255.255.255.255
ONBOOT=yes
NAME=loopback
5. Configure the kernel to suppress ARP replies for the virtual IP (so only the load balancer answers ARP for it)
nano /etc/sysctl.conf
Modify the following entries
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
6. Reload modified kernel parameters and bring up loopback:
sysctl -p
ifup lo:0
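For convenience, the webserver-side steps can be rolled into one script. A rough sketch (prep-webserver.sh is a hypothetical helper; set WEBNUM per host, run as root, and note the sysctl append will create duplicate entries if run twice):
#!/bin/bash
# prep-webserver.sh - run as root on each webserver
WEBNUM=1                    # set per host: 1, 2 or 3
VIP=192.168.100.10          # the load balancer virtual IP

# health-check file and temporary identifier
echo foo > /var/www/html/test.html
echo "This is Webserver $WEBNUM" > /var/www/html/index.html

# bind the VIP to a loopback alias so direct-routed packets are accepted
cat > /etc/sysconfig/network-scripts/ifcfg-lo:0 <<EOF
DEVICE=lo:0
IPADDR=$VIP
NETMASK=255.255.255.255
ONBOOT=yes
NAME=loopback
EOF

# suppress ARP for the VIP so only the load balancer answers for it
cat >> /etc/sysctl.conf <<EOF
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
EOF

sysctl -p
ifup lo:0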
On the Loadbalancer
1. Open a command line.
2. Install the necessary packages:
yum install -y heartbeat heartbeat-ldirectord ipvsadm
3. Add ldirectord to the autostart list (and remove heartbeat from it, since only ldirectord is used in this setup):
chkconfig --add ldirectord
chkconfig --del heartbeat
4. Modify kernel to allow IP forwarding
nano /etc/sysctl.conf
find the following parameter:
net.ipv4.ip_forward = 0
change the parameter:
net.ipv4.ip_forward = 1
5. Reload modified kernel parameter:
sysctl -p
6. Configure secondary ethernet interface for Virtual IP:
nano /etc/sysconfig/network-scripts/ifcfg-eth0:0
DEVICE=eth0:0
BOOTPROTO=none
ONBOOT=yes
HWADDR=3a:5d:23:ad:67:47   (replace with your NIC's MAC address)
NETMASK=255.255.255.0
IPADDR=192.168.100.10
GATEWAY=192.168.100.1
TYPE=Ethernet
7. Create the ldirectord configuration file:
nano /etc/ha.d/ldirectord.cf
checktimeout=10
checkinterval=2
autoreload=no
logfile="/var/log/ldirectord.log"
quiescent=no
virtual=192.168.100.10:80
        real=192.168.100.1:80 gate
        real=192.168.100.2:80 gate
        real=192.168.100.3:80 gate
        service=http
        request="test.html"
        receive="foo"
        scheduler=wlc
        protocol=tcp
        checktype=negotiate
8. Restart the ldirectord service:
service ldirectord restart
9. Test the configuration:
ipvsadm -l
Expected output:
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP yoursite.com:http wlc
-> web03:http Route 1 412 0
-> web02:http Route 1 411 0
-> web01:http Route 1 411 0
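Before touching DNS, a quick sanity check from any machine on the network (assuming the VIP and health-check file from above):
curl http://192.168.100.10/test.html   # should print: foo
curl http://192.168.100.10/            # repeated requests should cycle through the webserver identifiers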
That should be it! Your DNS may need to be configured to point your external IP to the Virtual IP.
-n
Labels:
Apache,
CentOS,
IPVS,
Ldirectord,
Linux,
Load Balancing
Tuesday, August 16, 2011
NTFS support on CentOS 5.x
If you have a removable storage device (USB memory stick, USB hard drive) that is formatted as NTFS, it may not mount on CentOS by default.
Just a couple of packages are needed to automagically detect and mount NTFS-formatted drives.
0. Login as root
1. Update, then install the packages
yum -y update
yum install fuse fuse-ntfs-3g
2. Plug in your drive and it will auto-mount and display in a little window.
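If the auto-mount doesn't kick in (e.g. on a headless box), a manual mount is a one-liner - /dev/sdb1 and /mnt/usb are assumptions for your device and mountpoint:
mkdir -p /mnt/usb
mount -t ntfs-3g /dev/sdb1 /mnt/usb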
-n
Friday, July 22, 2011
Installing Wine on CentOS Linux
Wine is a compatibility layer that allows installation and execution of some native Windows apps on Linux. It is not included in the standard CentOS distro, so this is a simple installation tutorial.
0. Either sudo or log in as root on the machine
cd /tmp
1. Get the rpmforge rpm (if not already installed)
wget http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
2. Install the rpmforge rpm
rpm -Uvh rpmforge-release-0.3.6-1.el5.rf.i386.rpm
3. Install wine
yum install wine -y --enablerepo=rpmforge
4. Launch Wine from the CentOS Applications menu
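Alternatively, a Windows binary can be run straight from the shell (yourapp.exe is a placeholder):
wine /path/to/yourapp.exe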
-n
Build your own updated Moodle Schema
This particular post is to assist in getting a Moodle DB schema specific to your database. It is especially handy if there are non-standard plugins installed on Moodle.
As mentioned in an earlier post, getting a relevant ERD on Moodle.org is almost herculean. This tool will reverse engineer the ERD from the database.
The application is DBDesigner 4 from fabFORCE, located at http://www.fabforce.net/dbdesigner4/
Installation on Windows is straightforward: simply download, install, and configure it to point to your database.
Installation on CentOS Linux was not as smooth. The application refused to launch, so I decided to run it on Wine (which I had to install here).
I'll update with a sample schema of my test database as soon as the operation completes.
-n
Labels:
CentOS,
DBDesigner,
ERD,
Moodle,
Moodle ERD,
MySQL
Friday, July 8, 2011
Moodle Issues after an upgrade! [Resolved!]
Part of my project in migrating to a load-balanced, virtualized infrastructure involved running a minor version upgrade from 1.9.7 to 1.9.11, which addressed several bugs in our core Moodle usage. The project plan included checking the database for corruption before and after, as well as re-indexing, using mysqlcheck and myisamchk respectively; no errors were detected, either in the test environment or during the upgrade in production. Basic checks were done post-upgrade to confirm core Moodle functionality. To my surprise (read: shock, horror, pain), two weeks after the upgrade, two major issues were reported by several lecturers:
1. Quiz attempts were not being saved, where the quiz spanned multiple pages.
2. Assignments (Advanced Uploading of Files) could not be submitted.
Tracking down this issue took some time, and since so many variables changed from the original environment and so much time had elapsed since the upgrade, reverting was not an option.
The order of troubleshooting included:
- Determining if the course/activity settings were at fault.
- Turning on debugging to check for errors upon activity submits.
- Determining if my (tested) load balancer environment was causing the issue.
- Checking the new webserver configuration.
- Checking the database configuration.
- Checking, rechecking, and attempting to repair the database for corruption using variations of switches on the respective tools (mysqlcheck and myisamchk).
- Checking if caching was causing the issue (disabling memcached).
- Parallel installation of Moodle on the same new environment/hardware.
- Trying the current database on different hardware.
The second test confirmed this: I migrated the same Moodle code plus a copy of the database to a clean server, and the problem was replicated. At that point the epic horror of the possibilities ahead dawned on me, as visions of rebuilding from scratch haunted me.
At this point, only two variables needed investigating:
- Database indexes
- Database data/structure
I took a backup of the database (without indexes) using the following code:
mysqlhotcopy -u root -p yourpassword --addtodest --noindices db_name /path/to/backup/destination/
I then used the MySQL restore method in my earlier post to restore the database to the test environment.
Since the indexes had to be rebuilt using myisamchk, it required the mysql service to be stopped. The indexes were rebuilt using:
myisamchk -rq /var/lib/mysql/db_name/*.MYI
I then ran the mysqlcheck command as in step 7 of my earlier post.
Again, no errors/issues were reported. I held my breath as I started the test quiz and assignment upload....and it failed again. That meant the data was royally screwed. (...and that's putting it nicely).
I then decided I was going to take a look at the database structure and perhaps even delve into the data itself and try to see where the issues were.
I took mysqldumps of the structure of the working database and the non-working database and compared them line by line in Notepad++ (a tedious task, but some real team effort came into play here), but came to the conclusion that the structure was fundamentally the same. Therefore, by the process of elimination, the root cause was the actual tables/data.
The structure was analyzed using the following:
mysqldump --no-data --host localhost --user=root -p db_name > /path/to/report.sql
On to the analysis of data. Inspecting the tables from the phpmyadmin interface showed no anomalies. Moodle ships with a database report in the base code that contains a bit more useful data; it can be accessed via:
http://yourmoodleurl.com/admin/dbperformance.php
I ran the script, used the view tables link, and started scouring the tables one by one, searching for anomalies. Interestingly enough, I stumbled upon two very strange issues.
The auto_increment fields for two tables were HUGE compared to the number of rows the tables contained:
- mdl_question_sessions
- mdl_assignment_submissions
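A quick way to spot this kind of anomaly is to compare each table's auto_increment counter against its row count - a sketch using information_schema (available from MySQL 5.0; 'moodle' is the assumed database name and the 10x threshold is arbitrary):
SELECT table_name, table_rows, auto_increment
FROM information_schema.tables
WHERE table_schema = 'moodle'
AND auto_increment > table_rows * 10;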
The problem lay with the primary keys of these two tables. After searching for the Moodle database schema on Moodle.org (almost a herculean task) and careful analysis of the table structure, I learned that the primary keys were not used as foreign keys in other tables. The proverbial light grew a little bigger.
The only way forward at this point was to drop the primary key field on each of the two affected tables, recreate the field (with its associated options) and reset the auto_increment to 1.
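In raw SQL, the fix would look roughly like this (a sketch, assuming the standard Moodle 1.9 id column and that nothing references it - exactly what the schema analysis above was meant to confirm):
ALTER TABLE mdl_question_sessions DROP COLUMN id;
ALTER TABLE mdl_question_sessions ADD COLUMN id BIGINT(10) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
-- repeat for mdl_assignment_submissions; MySQL renumbers the rows and restarts the counter just past the new maximum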
This operation was done from the phpmyadmin interface and needed to be tested significantly to ensure that other functionality would not be negatively affected. Success! It worked in the test environment. The light grew stronger...
I was especially wary about implementation in production, having a very narrow maintenance window to attempt the fix. After the database fix, I re-ran the myisamchk with repair as well as the mysqlcheck to err on the safe side. The operation was a success, and I finally saw the end of the tunnel...Problem solved.
Thoughts: I've yet to determine WHY the Moodle upgrade script failed and caused this database anomaly. From the front end, the upgrade was successful and NO errors were reported. I've also yet to understand how mysqlcheck and myisamchk could not detect that a table field contained data beyond its configured range. Even turning on debugging on the Moodle site failed to show anything useful about a failed insert, to at least provide a tip in the general direction of the issue.
Wednesday, May 4, 2011
Some news from the trenches.
I have a crapload of docs to prepare and upload:
- Building a loadbalanced web cluster for Moodle
- Basic Apache tuning
- VNC server/client
- Additional Moodle tweaks
- Adding a new virtual disk to a Linux VM on ESXi
Til then...
-n
Tuesday, April 5, 2011
Some MySQL database tuning - from a non-DBA perspective.
A DBA I'm not, but that does not mean I have to accept the default configuration options for MySQL.
There are a couple of scripts which can assist with tuning an existing database server, such as:
a) Tuning-Primer: http://www.day32.com/MySQL/tuning-primer.sh
which makes recommendations for the following:
Slow Query Log
Max Connections
Worker Threads
Key Buffer
Query Cache
Sort Buffer
Joins
Temp Tables
Table (Open & Definition) Cache
Table Locking
Table Scans (read_buffer)
Innodb Status
b) Then, there is also the mysqltuner.pl [perl script] which essentially does the same thing. I used both with some pretty decent results in tweaking an old server.
The above-mentioned scripts only work properly on a currently running setup. They do not really help in tuning a new database server, which is what I'm in the process of building.
Even the sample MySQL config files shipped with the distro are limited, with the largest (my-huge.cnf) only catering to a database server with 1-2GB of memory.
My intention is to tune a dedicated database server with the following base specifications:
CentOS 5.5 64 bit installed on
HP ProLiant DL380 G7 Server
2 Intel Xeon E5620 Quad-core processors
32GB memory
MySQL ships with a built-in benchmark tool. I'd recommend running the benchmark tests before any changes are made, then running and documenting the results after each configuration change. Normal elevated privileges apply - log in as root.
0. Run the initial benchmark test - then copy/paste (or pipe) the result into a textfile for further analysis.
cd /usr/share/sql-bench
perl run-all-tests --user='yourdbusername' --password='yourdbpassword'
Sample output - not mine, sample taken from here:
alter-table: Total time: 8 wallclock secs ( 0.02 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.03 CPU)
ATIS: Total time: 2 wallclock secs ( 1.20 usr 0.09 sys + 0.00 cusr 0.00 csys = 1.29 CPU)
big-tables: Total time: 5 wallclock secs ( 2.45 usr 0.08 sys + 0.00 cusr 0.00 csys = 2.53 CPU)
connect: Total time: 50 wallclock secs (12.74 usr 4.50 sys + 0.00 cusr 0.00 csys = 17.24 CPU)
create: Total time: 31 wallclock secs ( 1.20 usr 0.44 sys + 0.00 cusr 0.00 csys = 1.64 CPU)
insert: Total time: 397 wallclock secs (97.95 usr 13.61 sys + 0.00 cusr 0.00 csys = 111.56 CPU)
select: Total time: 44 wallclock secs ( 8.71 usr 0.88 sys + 0.00 cusr 0.00 csys = 9.59 CPU)
transactions: Test skipped because the database doesn’t support transactions
wisconsin: Total time: 3 wallclock secs ( 0.91 usr 0.23 sys + 0.00 cusr 0.00 csys = 1.14 CPU)
TOTALS 562.00 123.77 19.82 143.59 3425950
I first started off by keeping a backup of the existing /etc/my.cnf and then using the distro's my-huge.cnf, making that my baseline config.
1. backup existing config, set new baseline and restart mysql service
mv /etc/my.cnf /etc/my.cnf.bkp
cp /usr/share/doc/mysql-server-5.0.77/my-huge.cnf /etc/my.cnf
service mysqld restart
The MySQL documentation also suggests tuning the server's disk parameters with hdparm, which allows configuration of the disk interface. The suggested configuration:
2. Tuning disk parameters (reboot not necessary, but I did anyway)
hdparm -m 16 -d 1 /dev/sdX   (substitute your actual disk device)
It should be noted that I mounted the /var partition on its own disk array to avoid disk contention.
MySQL also suggests the following mount options on the DB disks:
noatime and async
nano /etc/fstab
find your /var partition, or whatever partition your db resides on, and append noatime and async after the defaults option:
LABEL=/var /var ext3 defaults,noatime,async 1 2
Note: different filesystems have their own pros and cons for use on a database server. Ext3 is the middle ground in terms of performance and stability.
3. Tune additional my.cnf options (requires a restart of mysql service before changes are applied)
MySQL documentation suggests that the two most important parameters to begin tuning are:
key_buffer_size (key_buffer on older versions of MySQL)
table_open_cache
The key_buffer_size allows you to store the MyISAM table indexes in memory. The generally accepted rule of thumb is setting this to 25-50% of server memory on a dedicated database server.
nano /etc/my.cnf
find key_buffer_size (or key_buffer)
adjust to suggested value above with the following syntax
key_buffer_size = 16384M
The table_open_cache (table_cache on older versions) is related to the max_connections setting. Tuning this parameter relies on obtaining information from the running setup and tweaking accordingly.
Tips on tuning this and other parameters can be found here: http://www.databasejournal.com/features/mysql/article.php/3367871/Optimizing-the-mysqld-variables.htm
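To make the my.cnf edits concrete, here is a hedged excerpt of a starting point for this class of hardware (illustrative values, not a recommendation):
[mysqld]
key_buffer_size = 16384M   # ~50% of RAM for MyISAM indexes on a dedicated box
table_open_cache = 2048    # tune against Opened_tables from SHOW GLOBAL STATUS
max_connections = 500
wait_timeout = 3600        # reclaim idle connections sooner than the default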
Remember to run the benchmark after each change - analysis of the results should be fun.
-n
Labels:
CentOS,
DB Tuning,
Linux,
my.cnf,
MySQL,
MYSQL Tuning,
Server Tuning
Tuesday, March 22, 2011
MySQL - checking read/write ratio
At some point during database tuning, it becomes necessary to check the amount of reads and write transactions to determine the best way forward.
In order to check these stats, you need to log in as the MySQL admin, then execute the following query:
show global status like 'Com%';
* These stats are only gathered for the duration the mysql service is running. All data will be reset if the service is restarted.
Expected output:
In this case, the read:write ratio is quite high - selects are at 28 million, while the other write-based operations add up to approx 11 million.
+--------------------------+----------+
| Variable_name | Value |
+--------------------------+----------+
| Com_admin_commands | 1 |
| Com_alter_db | 0 |
| Com_alter_table | 164 |
| Com_analyze | 4 |
| Com_backup_table | 0 |
| Com_begin | 0 |
| Com_call_procedure | 0 |
| Com_change_db | 6903942 |
| Com_change_master | 0 |
| Com_check | 0 |
| Com_checksum | 0 |
| Com_commit | 0 |
| Com_create_db | 0 |
| Com_create_function | 0 |
| Com_create_index | 0 |
| Com_create_table | 246 |
| Com_create_user | 0 |
| Com_dealloc_sql | 0 |
| Com_delete | 68561 |
| Com_delete_multi | 0 |
| Com_do | 0 |
| Com_drop_db | 0 |
| Com_drop_function | 0 |
| Com_drop_index | 0 |
| Com_drop_table | 246 |
| Com_drop_user | 0 |
| Com_execute_sql | 0 |
| Com_flush | 26 |
| Com_grant | 0 |
| Com_ha_close | 0 |
| Com_ha_open | 0 |
| Com_ha_read | 0 |
| Com_help | 0 |
| Com_insert | 1859347 |
| Com_insert_select | 3288 |
| Com_kill | 0 |
| Com_load | 328 |
| Com_load_master_data | 0 |
| Com_load_master_table | 0 |
| Com_lock_tables | 26 |
| Com_optimize | 4 |
| Com_preload_keys | 0 |
| Com_prepare_sql | 0 |
| Com_purge | 0 |
| Com_purge_before_date | 0 |
| Com_rename_table | 0 |
| Com_repair | 6804 |
| Com_replace | 0 |
| Com_replace_select | 0 |
| Com_reset | 0 |
| Com_restore_table | 0 |
| Com_revoke | 0 |
| Com_revoke_all | 0 |
| Com_rollback | 0 |
| Com_savepoint | 0 |
| Com_select | 29848728 |
| Com_set_option | 6755821 |
| Com_show_binlog_events | 0 |
| Com_show_binlogs | 0 |
| Com_show_charsets | 0 |
| Com_show_collations | 0 |
| Com_show_column_types | 0 |
| Com_show_create_db | 0 |
| Com_show_create_table | 0 |
| Com_show_databases | 0 |
| Com_show_errors | 0 |
| Com_show_fields | 760708 |
| Com_show_grants | 0 |
| Com_show_innodb_status | 0 |
| Com_show_keys | 0 |
| Com_show_logs | 0 |
| Com_show_master_status | 0 |
| Com_show_ndb_status | 0 |
| Com_show_new_master | 0 |
| Com_show_open_tables | 0 |
| Com_show_privileges | 0 |
| Com_show_processlist | 0 |
| Com_show_slave_hosts | 0 |
| Com_show_slave_status | 0 |
| Com_show_status | 3 |
| Com_show_storage_engines | 0 |
| Com_show_tables | 65 |
| Com_show_triggers | 0 |
| Com_show_variables | 26 |
| Com_show_warnings | 0 |
| Com_slave_start | 0 |
| Com_slave_stop | 0 |
| Com_stmt_close | 0 |
| Com_stmt_execute | 0 |
| Com_stmt_fetch | 0 |
| Com_stmt_prepare | 0 |
| Com_stmt_reset | 0 |
| Com_stmt_send_long_data | 0 |
| Com_truncate | 3048 |
| Com_unlock_tables | 26 |
| Com_update | 2871820 |
| Com_update_multi | 2540 |
| Com_xa_commit | 0 |
| Com_xa_end | 0 |
| Com_xa_prepare | 0 |
| Com_xa_recover | 0 |
| Com_xa_rollback | 0 |
| Com_xa_start | 0 |
| Compression | OFF |
+--------------------------+----------+
104 rows in set (0.00 sec)
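If you'd rather compute the ratio than eyeball the table, a rough one-liner (it counts only the common DML counters, so treat the result as an approximation):
mysql -u root -p -N -e "SHOW GLOBAL STATUS LIKE 'Com_%'" | awk '$1=="Com_select"{r=$2} $1~/^Com_(insert|update|delete|replace)$/{w+=$2} END{printf "reads=%d writes=%d ratio=%.1f:1\n", r, w, r/w}'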
Labels:
MySQL
Wednesday, March 9, 2011
Choosing the right scheduling algorithm for a Linux Based Load Balancer - Part II
This is a continuation of an earlier post:
Choosing the right scheduling algorithm for a Linux Based Load Balancer
This is the current scenario:
- Two identical webservers, mirrored hardware and software specifications.
- The main variable is the physical location of the second webserver: it is on the same physical network, but in a different building.
- Network traffic may or may not be an issue.
With that scenario in mind, the algorithm under consideration is the default:
Weighted Least-Connections (default)
Distributes more requests to servers with fewer active connections relative to their capacities. Capacity is indicated by a user-assigned weight, which is then adjusted upward or downward by dynamic load information. The addition of weighting makes this algorithm ideal when the real server pool contains hardware of varying capacity.
-n
Wednesday, February 23, 2011
Choosing the right scheduling algorithm for a Linux Based Load Balancer
I'm currently doing some research into choosing the right scheduling algorithm for a Linux based Load balancer for a dual front end moodle installation.
The actual loadbalancer implementation will be detailed later.
This particular post outlines the scheduling algorithms available in IPVS and which would be the best option for my particular scenario. The assumption in my case is that both webservers are identical, from hardware specifications to OS to PHP code.
From the CentOS Documentation on IPVS Scheduling Algorithms:
Round-Robin Scheduling
Distributes each request sequentially around the pool of real servers. Using this algorithm, all the real servers are treated as equals without regard to capacity or load. This scheduling model resembles round-robin DNS but is more granular, due to the fact that it is network-connection based and not host-based. LVS round-robin scheduling also does not suffer the imbalances caused by cached DNS queries.
Weighted Round-Robin Scheduling
Distributes each request sequentially around the pool of real servers but gives more jobs to servers with greater capacity. Capacity is indicated by a user-assigned weight factor, which is then adjusted upward or downward by dynamic load information.
Weighted round-robin scheduling is a preferred choice if there are significant differences in the capacity of real servers in the pool. However, if the request load varies dramatically, the more heavily weighted server may answer more than its share of requests.
Least-Connection
Distributes more requests to real servers with fewer active connections. Because it keeps track of live connections to the real servers through the IPVS table, least-connection is a type of dynamic scheduling algorithm, making it a better choice if there is a high degree of variation in the request load. It is best suited for a real server pool where each member node has roughly the same capacity. If a group of servers have different capabilities, weighted least-connection scheduling is a better choice.
Weighted Least-Connections (default)
Distributes more requests to servers with fewer active connections relative to their capacities. Capacity is indicated by a user-assigned weight, which is then adjusted upward or downward by dynamic load information. The addition of weighting makes this algorithm ideal when the real server pool contains hardware of varying capacity.
Locality-Based Least-Connection Scheduling
Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is designed for use in a proxy-cache server cluster. It routes the packets for an IP address to the server for that address unless that server is above its capacity and has a server in its half load, in which case it assigns the IP address to the least loaded real server.
Locality-Based Least-Connection Scheduling with Replication Scheduling
Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is also designed for use in a proxy-cache server cluster. It differs from Locality-Based Least-Connection Scheduling by mapping the target IP address to a subset of real server nodes. Requests are then routed to the server in this subset with the lowest number of connections. If all the nodes for the destination IP are above capacity, it replicates a new server for that destination IP address by adding the real server with the least connections from the overall pool of real servers to the subset of real servers for that destination IP. The most loaded node is then dropped from the real server subset to prevent over-replication.
Destination Hash Scheduling
Distributes requests to the pool of real servers by looking up the destination IP in a static hash table. This algorithm is designed for use in a proxy-cache server cluster.
Source Hash Scheduling
Distributes requests to the pool of real servers by looking up the source IP in a static hash table. This algorithm is designed for LVS routers with multiple firewalls.
I'm currently doing some testing with one or two of the more viable options and will follow up with my choice (and why I chose it).
Part II here.
-n
Tuesday, February 15, 2011
Using the Linux diff command
I need to do a Moodle minor version upgrade (1.9.x to 1.9.y), but my basecode is highly customized. For a vanilla installation of Moodle, an upgrade is very straightforward - but how many Moodle installs are actually vanilla?
What I intend to do is a side-by-side comparison to check from a file standpoint the differences between the updated moodle core and my customized version.
0. Copy the production Moodle code to a test machine (if possible)
1. On the test machine, download the latest version of Moodle from www.moodle.org and extract it to a folder in /tmp
2. run the following command:
diff -qry /path/to/current/code /path/to/downloaded/code > /path/to/textfile.txt
3. Have fun comparing files. I recommend opening the file in a spreadsheet editor.
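Once the report flags a file, drilling into an individual difference is a one-liner (lib/weblib.php is just an example path):
diff -u /path/to/current/code/lib/weblib.php /path/to/downloaded/code/lib/weblib.php | less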
Cheers,
-n
Friday, January 28, 2011
Find and Delete certain files in CentOS Linux
This command can be scripted (it needs to be run as root), but the scripting is not covered here.
Basically what this command does is search a specified directory for a name/filetype and then delete.
If you run it from the top-level directory (/), chances are something important may get deleted. Be sure to specify your folder; it will traverse subfolders.
My particular reason for using this is I need to clean up over 3000 courses in Moodle and remove any old course backups that were used for creating course copies by the site admin. This involves sorting through tens of thousands of folders for course backups.
Bear in mind that this action cannot be undone, so if Lecturers/Teachers manage their own Moodle course backups, this script will not work for you. As always, you should have backups of all necessary files before attempting this sort of activity.
In order to ensure that no unnecessary files get deleted, first run the command without the delete section:
find /path/to/your/folder -name \*backup\*.zip
*edit: to pipe the list, including the size of the files, to a textfile, see below:
find /path/to/your/folder -size +5k -name \*searchstring\*.zip -exec ls -lh {} \; | awk '{ print $9 ": " $5 }' > /temp/report.txt
Much thanks to http://www.cyberciti.biz/faq/find-large-files-linux/ for the size addition.
A list of files matching your search string will be displayed.
Breakdown:
find - Linux search command
/path/to/your/folder/ - self explanatory
-name - tells the command to search for specific filenames
\*backup\*.zip - search string; will return files like:
mybackup.zip
mybackup2001.zip
backup2002.zip
but will not return:
backup.txt
mybackup.doc
backup1.log
| xargs - feeds the results of the previous command to another command
/bin/rm -f - delete command
Now for the cleanup:
find /path/to/your/folder -name \*backup\*.zip | xargs /bin/rm -f
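One caveat: plain xargs breaks on filenames containing spaces. A null-delimited variant of the same cleanup is safer:
find /path/to/your/folder -name \*backup\*.zip -print0 | xargs -0 /bin/rm -f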
Congrats! All your base are belong to us..err wait, I meant the files should have been deleted.
Rerun the original find command with your search string and it should no longer return any results.
Cheers,
-n
Labels:
CentOS,
Linux,
Moodle,
Scripting,
Selective Deleting
Friday, January 21, 2011
Installing AIDE (Advanced Intrusion Detection Environment) on CentOS 5.x
AIDE (Advanced Intrusion Detection Environment) is a great approach to layered security on a Linux server. This covers a basic install and configuration to run once per day with a report to your email account (or not).
It is recommended to run this on a "clean" system - i.e. one that is freshly installed and configured, before deployment on the web.
Any updates to software or system on the server after this point will trigger false positives, so be sure to update the database after any such work is done.
0. Log in as root
1. Install the package
yum install aide
2. Edit the config file to be able to send to your email address
NOTE: If you do not wish to receive a daily report or you want to inspect the logs manually, skip this section and go to step 3.
nano /etc/aide.conf
look for the following lines and comment them out with a # at the beginning of the line:
report_url=file:@@{LOGDIR}/aide.log
report_url=stdout
report_url=syslog:LOG_AUTH
it should now read:
#report_url=file:@@{LOGDIR}/aide.log
#report_url=stdout
#report_url=syslog:LOG_AUTH
then add the following line immediately below the commented-out section:
report_url=mailto:youremail@yourdomain.com
save and exit /etc/aide.conf
3. Run AIDE to create the initial database
Steps 3 and 4 will need to be repeated each time you do a system update or modify any configuration files, so be warned. Security and convenience are mutually exclusive.
aide --init
4. Copy the database to default setting - this is the baseline database.
cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
5. Run the AIDE first check
aide --check
Expected output in a perfect world:
AIDE, version 0.13.1
### All files match AIDE database. Looks okay!
6. Set up a daily cron job (in this case to run at 11pm):
nano /etc/crontab
if you wish to receive the email as configured in step 2, add to the end of the file:
00 23 * * * /usr/sbin/aide --check | /bin/mail -s "$HOSTNAME - Daily AIDE integrity check" youremail@yourdomain.com
if you do not wish to receive any email, add instead:
00 23 * * * /usr/sbin/aide --check
the default logs can be accessed at:
/var/log/aide/aide.log
The log will list any modified files and filesystem changes.
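After legitimate changes (package updates, config edits), the baseline can be refreshed with AIDE's update mode instead of a full re-init - a quick sketch:
aide --update
cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz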
Further information on AIDE as well as troubleshooting can be found here: http://www.cs.tut.fi/~rammer/aide/manual.html
Cheers,
-n
Thursday, January 20, 2011
Installing Tomcat5 on CentOS 5.x
This covers a pure installation of the Tomcat server on CentOS alongside Apache httpd; any configuration details can be found on the Tomcat site: http://tomcat.apache.org/
0. Login as root (standard rules apply)
1. Install the prerequisite packages
yum install -y httpd-devel gcc-java tomcat*
2. Set Auto startup on boot
chkconfig tomcat5 on
service tomcat5 restart
3. Check that the server is operational
From a web browser on the machine (or on the network) type in the following:
http://localhost:8080 (http://your.network.ip.address:8080 from a network machine)
An Apache Tomcat test page should be displayed.
4. Upload a test script
The default tomcat directory is /var/lib/tomcat5/webapps/ROOT/
Create a sample jsp on the server named test.jsp and run it from a browser to ensure the server is operating normally.
cd /var/lib/tomcat5/webapps/ROOT/
nano test.jsp
Paste the following and save the file:
(taken from http://java.sun.com/developer/technicalArticles/xml/WebAppDev/)
<HTML>
<HEAD>
<TITLE>JSP Example</TITLE>
</HEAD>
<BODY BGCOLOR="ffffcc">
<CENTER>
<H2>Date and Time</H2>
<%
java.util.Date today = new java.util.Date();
out.println("Today's date is: "+today);
%>
</CENTER>
</BODY>
</HTML>
5. Access the test script from a web browser
http://localhost:8080/test.jsp (http://your.network.ip.addr:8080/test.jsp)
You should get a yellow page showing the current system date/time.
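The same check works from the shell if no browser is handy (assuming curl is installed):
curl http://localhost:8080/test.jsp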
Cheers,
-n
Labels:
Apache,
CentOS,
Linux,
Open Source,
Tomcat
Wednesday, January 19, 2011
Lost your root password?
Just finished a fresh install of CentOS 5.5, and in my pangs of hunger, I found myself locked out of my root account, as I was unable to remember the bloody password I set 5 minutes ago.
This outlines how to quickly change the root password if locked out.
1. Boot into single-user mode. Boot the machine, and at the startup options press 'a' to append to the GRUB loader config.
The following text should be visible
ro root=LABEL=/
Delete any text after this and append the following line (be sure to include a space after the slash)
single
It should look like this
ro root=LABEL=/ single
The Single user prompt should appear after booting:
sh-2.05b#
2. Change the root password
Enter the following command to change the root password:
passwd root
You will be prompted to change and confirm the new root password
3. Reboot
Upon reboot, you should be able to access the root account.
Cheers,
-n
Labels:
CentOS,
Linux,
lockout,
root,
single user
Thursday, January 6, 2011
2011
2011 is here, and with it some more stuff.
Plans for the next few weeks/months.
NTP
High Availability Load Balancing
...more to come.
-n