Monday, August 16, 2010

Running multiple Tomcat 6 instances in Ubuntu, the quick and dirty way

Here's a quick list for running multiple Tomcat 6 instances on Ubuntu 10.04:

  • Copy the /etc/init.d/tomcat6 script to a new name in the same directory

  • Update the NAME variable in the new startup script to the name you've chosen for the new instance.

  • Copy /usr/share/$OLD_NAME to the new name you've just chosen, along with /var/lib/$OLD_NAME and /etc/default/$OLD_NAME

  • Edit the server.xml file under /var/lib/$NEW_NAME/conf/ and change all the ports (i.e. the shutdown port and all your Connectors) so that they don't conflict with the old instance

  • Run /etc/init.d/$NEW_NAME start


You should now have a second, fully functional Tomcat instance running on the server, on its own set of ports.
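The steps above can be sketched as a short shell session. This is a sketch only, to be run as root: the instance name "tomcat6-second" and the 9xxx ports are assumptions, not Ubuntu defaults, so adjust them to your setup.

```shell
#!/bin/sh
# Clone the stock tomcat6 instance under a new name (names/ports assumed).
NEW=tomcat6-second

cp /etc/init.d/tomcat6 /etc/init.d/$NEW
sed -i "s/^NAME=.*/NAME=$NEW/" /etc/init.d/$NEW   # point the script at the new instance

cp -a /usr/share/tomcat6   /usr/share/$NEW
cp -a /var/lib/tomcat6     /var/lib/$NEW
cp    /etc/default/tomcat6 /etc/default/$NEW

# Bump the ports in the copy so they don't clash with the original
sed -i -e 's/port="8005"/port="9005"/' \
       -e 's/port="8080"/port="9080"/' \
       -e 's/port="8009"/port="9009"/' /var/lib/$NEW/conf/server.xml

/etc/init.d/$NEW start
```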

Wednesday, August 04, 2010

Quickly dropping all the tables in a MySQL database without dropping the database itself

I recently came across a case where I needed to drop all the tables in my database (i.e. effectively truncate the database), but MySQL has no built-in command for doing so. This is where the magic of the command line becomes very useful. I found a great little trick here that will very quickly let you get rid of all that annoying data so you can load new test data into your database:

mysqldump -u[USERNAME] -p[PASSWORD] --add-drop-table --no-data [DATABASE] | grep ^DROP | mysql -u[USERNAME] -p[PASSWORD] [DATABASE]

If you've got GnuWin32 or another set of GNU programs installed on your Windows box, you can do this on Windows without even changing the syntax!
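To see what the pipeline is actually doing, here's the filter stage in isolation with some made-up dump output (the table names are just examples):

```shell
# mysqldump --add-drop-table prefixes each table with a DROP statement;
# grep ^DROP throws everything else away before feeding it back into mysql.
printf '%s\n' \
  '-- MySQL dump 10.13' \
  'DROP TABLE IF EXISTS `users`;' \
  'CREATE TABLE `users` (id INT);' \
  'DROP TABLE IF EXISTS `orders`;' \
  'CREATE TABLE `orders` (id INT);' | grep ^DROP
# → DROP TABLE IF EXISTS `users`;
# → DROP TABLE IF EXISTS `orders`;
```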

Tuesday, August 03, 2010

Quickly dumping a MySQL database out to a file

I've found that sometimes I just need a quick and dirty copy of a database to test changes against, and it doesn't matter if the data is recent, or even consistent for that matter. That's where mysqldump comes in handy with the --single-transaction option. It can be used on a live database because it doesn't lock the tables (for transactional engines like InnoDB), so it won't prevent your web application from continuing to insert, modify, and delete records. One example would be:

mysqldump -u myusername -h myhost -p --single-transaction mydbname | gzip > mybackupfile.20100803.sql.gz &

This can be made even quicker by piping the dump through SSH so the output lands directly on another machine.
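For example, a hypothetical one-liner along those lines (the host "backuphost" and the target path are placeholders, and I haven't tested this exact command):

```shell
# Dump, compress, and copy to another box in a single pipeline; nothing
# is written to the local disk along the way.
mysqldump -u myusername -h myhost -p --single-transaction mydbname \
  | gzip | ssh me@backuphost "cat > /backups/mydbname.20100803.sql.gz"
```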

Piping a MySQL database from one server to another

I know there are a lot of great things that can be done on-the-fly through SSH. It's one of the greatest tools out there for moving data or communicating between two machines. So I thought, "Why not try to move my database in the fastest way possible via SSH?" Here's the command I found:

mysqldump -ux -px database | ssh me@newhost "mysql -ux -px database"

This is of course a very basic, stripped-down version of the command, which I haven't tested yet, but it's a good start on what seems to be a very common problem among developers.

Converting Unix timestamps to something readable in Excel

Our company uses log files in our machines to log raw data in real time so that we have a history of what happened on a job that can be analyzed and, if necessary, used to prove to inspectors that our machines are doing what they say they are. We have our own log file parsing and analysis package for performing various analyses, but sometimes we need to look at the raw signals themselves in the logs, and this is where Excel comes in handy. The timestamps on the signals are Unix timestamps, i.e. seconds since the epoch (Jan. 1, 1970 at 00:00:00), which are not directly useful in Excel, because Excel has its own epoch (Jan. 1, 1900). To make things just a little bit more complex, we have machines in different timezones, which we need to account for as well. Thanks to a post at this site, I managed to come up with a modified version of their formula that converts the Unix timestamps to Excel format and adjusts them for the timezone offset:

=(ROUNDDOWN(A10116,0) / 86400) + 25569 - TIME(6,0,0)

Here ROUNDDOWN strips any fractional seconds, dividing by 86400 converts seconds to days, 25569 is the Excel serial date for Jan. 1, 1970, and TIME(6,0,0) subtracts the six-hour offset for a machine in a UTC-6 timezone; change that last term to match the machine's own offset.
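As a sanity check on the arithmetic, the same conversion can be spot-checked at a shell prompt (this assumes GNU date; the timestamp is an arbitrary example). 1281052800 / 86400 = 14827 days since the Unix epoch, and 14827 + 25569 = 40396, the Excel serial for that instant before the TIME(6,0,0) shift.

```shell
# GNU date does the epoch math for us; handy for spot-checking a few
# spreadsheet rows against the raw log values.
date -u -d @1281052800 +'%Y-%m-%d %H:%M:%S'   # → 2010-08-06 00:00:00
```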