Wednesday, October 13, 2010

Determining what ports Tomcat is running on

I've recently had problems connecting to a Tomcat server set up by another developer. In order to troubleshoot, I wanted to use netstat to see which ports were being bound, but listening (server) sockets don't show up in netstat's default output. If you want to see what server sockets are in use on your machine, use the following :

netstat -lntp


The flags tell netstat to show listening sockets (-l), restrict the output to TCP (-t), print numeric ports instead of service names (-n), and show the owning process (-p). That last column is what lets you pick out the ports the Tomcat JVM has bound, though you'll need root to see processes you don't own.
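
On a busy box you can narrow the output down to the Tomcat process ; a minimal sketch, assuming Tomcat shows up as a plain java process :

sudo netstat -lntp | grep java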

Friday, October 01, 2010

Event models in Silverlight vs WPF

To really understand the event model in Microsoft's Silverlight / WPF frameworks, you need to start off with a proper mental model. Think of the root XML element in a XAML document as being at ground level ; each successive child element goes deeper into the ground. With that in mind, there are two terms that both frameworks use to describe how events propagate between objects : Bubbling and Tunneling. With Tunneling, an event starts at the root element and "tunnels" deeper into the "earth" (your control stack) until it reaches the original source of the event, which is the control that triggered it. With Bubbling, the event starts at the original source (the control that initiated it), and then "bubbles up" to the root. The earth analogy works because the terminology goes hand in hand with gravity : tunneling follows gravity, like digging into the earth, and bubbling goes against gravity, like a bubble rising to the surface from the bottom of the ocean. I never really had a clear mental model of the Silverlight / WPF event models until right now.

With all that said, there are some differences between Silverlight and WPF that turn out to be very important when it comes to implementing your event handlers and maintaining compatibility between the two. The biggest difference is that WPF supports both Bubbling and Tunneling events, whereas Silverlight supports only Bubbling events. Keep this in mind if you're designing desktop applications that you might want to port over to web applications at some point.
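
In WPF, the paired routed events make this concrete : every Preview* event is the tunneling half, and its non-Preview partner is the bubbling half. A minimal C# sketch (the control names are hypothetical, and since Silverlight has no Preview events, only something like the second handler would port over) :

// using System.Diagnostics;
// rootGrid is the root element of the XAML document ; innerBorder is a descendant of it
rootGrid.PreviewMouseDown += (s, e) =>
    Debug.WriteLine("tunneling : the root sees the event on the way down");
innerBorder.MouseDown += (s, e) =>
    Debug.WriteLine("bubbling : the source sees it first, then it rises to the root");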

Thursday, September 30, 2010

Holy crap, a Windows ramdisk program that works

In the past, I've had a bit of experience trying to install and run ramdisks in Windows XP, but I never found anything really good. Microsoft provided a ramdisk driver for Windows 2000 that one could get working in XP, but it wasn't really useful because it only provided a maximum of 32 MB of storage (why ?!). I decided to revisit the issue of ramdisks at work today for Windows 7 because I wanted a way to speed up Visual Studio's caching and other operations (and Eclipse, but that's another matter). After searching around some more, I found a program called ImDisk. The installation is dead easy (if a little lacking in the notification department, since it never tells you it installed successfully). Even better, you can make disks of arbitrary size, have it simulate various kinds of devices, and set up multiple ramdisks. The only catch is that the service has to be started from an Administrator prompt, which is slightly non-obvious but easy using the following steps :

1. Start -> All Programs -> Accessories -> Command Prompt -> Right-click -> Run as administrator
2. sc config imdisk start= auto (note the space between start= and auto, this got me the first time)
3. net start imdisk
4. Open up Control Panel -> ImDisk Virtual Disk Driver

... and have at it!
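
For scripting the disk creation itself, ImDisk also installs a command line tool ; a minimal sketch from an elevated prompt (the size, drive letter and filesystem parameters are just examples, so check imdisk /? for the real details) :

imdisk -a -t vm -s 512M -m R: -p "/fs:ntfs /q /y"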

Friday, September 17, 2010

Editing your file system mappings for TFS paths

So, as you may or may not know, when using Microsoft Team Foundation Server for version control, TFS maps remote project paths onto local file system paths for checkout, etc. As I learned today, there are times when you check out the wrong path, and/or map it to the wrong place in the file system. If you ever need to modify or just nuke your file system mappings in TFS, here's how you go about doing it :


Team Explorer -> Source Control (double click) -> Workspaces (dropdown) -> Workspaces ...


Once you've selected your workspace in the dialog that comes up (you'll likely only have one anyway), click on :

Edit ... -> Working Folders


... and then select the folder-to-file-system mappings that you want to remove, or create whatever new mappings you want right there.
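
If you prefer the command line, the tf.exe tool (run from a Visual Studio command prompt) can do the same job ; a sketch with hypothetical server and local paths, written from memory, so double check it against tf help workfold :

tf workspaces
tf workfold /map "$/MyProject" "C:\src\MyProject" /workspace:MYWORKSPACE
tf workfold /unmap "C:\src\MyProject"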

Starting a new job ... and a new philosophy

Ok, so I've left my previous employer and started at a new job. This means new domains of knowledge, new tools, and new people. My new employer is a Microsoft-exclusive shop for almost every aspect of their software. If you've read this blog in any significant amount in the past, you'll know that I'm really not a Microsoft fan. In fact, I hate almost everything that's ever come out of Redmond, because for the most part it's deficient in how it's been engineered, and not as usable as other products on the market (or even a lot of open source products). Therefore, the tone of this blog is probably going to change somewhat, and I'll be ranting and raving like a lunatic about the things I'm learning while dealing with Microsoft products. It's going to be interesting.

Thursday, September 02, 2010

Finally ... MySQL workbench sucks less

MySQL Workbench has been out for quite a while, under the auspices of the people at MySQL. For the longest time, I stuck to using the individual MySQL Query Browser and Administrator tools because they weren't too bad, and there really wasn't anything out there that I liked much better as a query browsing alternative. I tried out the old Workbench back when MySQL was still a standalone company, but it really wasn't a very positive experience, so I just dropped it.

However, lately, something drove me to search for better alternatives to the MySQL Query Browser again, and I don't even know why. In my Google search, MySQL Workbench came up, and I saw that the current release was a good 0.2 versions newer than the last one I had used, so I figured I'd give it a try. The difference was startling. Not only did they completely revamp the interface (at least for the Mac), but the Workbench is just generally much more reliable and performant than the old Query Browser. If you get the chance, give it a shot. The new integrated interface is much more user friendly, and there's a bunch of new "Copy to clipboard" snippets that I personally find incredibly convenient and useful.

Monday, August 16, 2010

Running multiple Tomcat 6 instances in Ubuntu, the quick and dirty way

Here's a quick list for running multiple Tomcat 6 instances on Ubuntu 10.04 :

  • Copy the /etc/init.d/tomcat script to a new name in the same directory

  • Update the NAME variable in the startup script with a name for the new instance that you want to run.

  • Copy /usr/share/$(old)NAME to the (new) NAME you've just created, along with /var/lib/$(old)NAME and /etc/default/$(old)NAME

  • Edit the server.xml file under /var/lib/$(new)NAME/conf/ and change all the ports (i.e. the shutdown port and all your Connector ports) so that they don't conflict with the old instance

  • Run /etc/init.d/$(new tomcat script name) start


You should now have a running, fully functional second instance of Tomcat on the server, listening on its own set of ports.
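
Condensed into shell commands, the whole thing looks something like this (assuming the stock instance is named tomcat6 and the new one tomcat6-b ; the names and port numbers are examples from memory, so adapt them to your box) :

sudo cp /etc/init.d/tomcat6 /etc/init.d/tomcat6-b
sudo sed -i 's/^NAME=.*/NAME=tomcat6-b/' /etc/init.d/tomcat6-b
sudo cp -a /usr/share/tomcat6 /usr/share/tomcat6-b
sudo cp -a /var/lib/tomcat6 /var/lib/tomcat6-b
sudo cp /etc/default/tomcat6 /etc/default/tomcat6-b
sudo sed -i -e 's/8005/8105/' -e 's/8080/8180/' -e 's/8009/8109/' /var/lib/tomcat6-b/conf/server.xml
sudo /etc/init.d/tomcat6-b start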

Wednesday, August 04, 2010

Quickly dropping all the tables in a MySQL database without dropping the database itself

I've recently come across a case where I needed to drop all the tables in my database (i.e. effectively truncate the database), but MySQL has no built-in command for doing so. This is where the magic of the command line becomes very useful. I found a great little trick here that will very quickly let you get rid of all that annoying data so you can load new test data into your database :

mysqldump -u[USERNAME] -p[PASSWORD] --add-drop-table --no-data [DATABASE] | grep ^DROP | mysql -u[USERNAME] -p[PASSWORD] [DATABASE]

If you've got GnuWin32 or another set of GNU programs installed on your Windows box, you can do this in Windows without even changing the syntax !

Tuesday, August 03, 2010

Quickly dumping a MySQL database out to a file

I've found that sometimes, I just need a quick and dirty copy of a database to test changes against, and it doesn't matter if the data is recent, or even consistent for that matter. That's where mysqldump comes in handy with the --single-transaction option. It can be used on a live database because it doesn't lock the tables (provided they're InnoDB), so your web application can keep inserting, modifying and deleting records while the dump runs. One example would be :

mysqldump -u myusername -h myhost -p --single-transaction mydbname | gzip > mybackupfile.20100803.sql.gz &

This can be made even quicker by piping the dump straight into an SSH transfer, so the output lands on another machine as it's produced.
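
Something like the following should do it (the hostname and target path are hypothetical) :

mysqldump -u myusername -h myhost -p --single-transaction mydbname | gzip | ssh me@otherhost "cat > /tmp/mydbname.20100803.sql.gz"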

Piping a MySQL database from one server to another

I know that there are a lot of great, wonderful things that can be done on-the-fly through SSH. It's one of the greatest tools out there for moving data or communicating between two machines. So I thought "Why not try to move my database in the fastest way possible via SSH?", and here's the command I found :

mysqldump -ux -px database | ssh me@newhost "mysql -ux -px database"

This is of course a very basic, stripped-down version of the command, which I haven't tested yet, but it's a good start on what seems to be a very common problem among developers.

Converting Unix timestamps to something readable in Excel

Our company uses log files in our machines to log raw data in real-time, so that we have a history of what's happened on jobs that can be analyzed and, if necessary, used to prove to inspectors that our machines are doing what they say they are. We have our own log file parsing and analysis package for performing various analyses, but sometimes we need to look at the raw signals themselves in the logs, and this is where Excel comes in handy. The timestamps on the signals are Unix timestamps, i.e. seconds since the Unix epoch (Jan. 1, 1970 at 00:00:00), which are not directly usable in Excel, because Excel has its own epoch (Jan. 1, 1900). To make things just a little more complex, we have machines in different timezones, which we need to account for as well. Thanks to a post at this site, I managed to come up with a modified version of their formula that will convert the Unix timestamps to Excel format and adjust them for timezone offsets :

=(ROUNDDOWN(A10116,0) / 86400) + 25569 - TIME(6,0,0)

Breaking that down : dividing by 86400 converts seconds into days, 25569 is Excel's serial number for Jan. 1, 1970, and TIME(6,0,0) backs the result off by six hours for a UTC-6 timezone (adjust that last term to your own offset).

Thursday, July 15, 2010

Coding for Effect vs Coding a Model

One of the software engineers I've worked with in the past has a very different way of doing things. He will take the most expedient way of implementing a particular piece of software in order to get the job done, always. Every time. (Admittedly, there is the odd time where this is the most effective means of doing software in order to get a job done, and potentially avoid getting fired, but that's for another discussion.) The person in question takes no pride in his own work (by his own admission) and doesn't care about its quality. Essentially, he codes with only the desired effect of his work in mind. I've found that this way of doing things (while faster) is a great way to introduce bugs and to cause problems that arise later from the lack of planning and forethought : poor maintainability, and an inability to add features to the software.

Whenever I write my own software, I write it to model what's going on in a business or scientific process. (Really, isn't that what software's supposed to do ? ;P) I think about what's going on, and I try to model it in the software, taking into account all possible factors (or at least everything that reasonably occurs to me) that may influence the process. This typically results in a reliable system that very rarely breaks down. When a breakdown does occur, it's always been the result of something that I hadn't anticipated. I then go back to the software, re-evaluate the model to see if there's something I need to change, and change it. This way of implementing software results in software that's easy to maintain, is largely free of side effects, and exhibits very little undesired behaviour.

Monday, July 05, 2010

Working with CSV files in Excel the way you want ... rather than the way Microsoft tells you to by default

In both my current job and my past jobs, I've worked with CSV files in Microsoft Excel as a matter of necessity. Excel is just too useful not to use (OpenOffice has its own quirks, but that's for another post). The only problem with Excel is that it formats fields automatically when you open a CSV file by double clicking on it or by going through the File -> Open option (or Ribbon -> Open in Office 2007 and later). Sometimes this is nice, but most times it's a pain in the ass, especially when you want to be able to save that CSV data right back out to CSV again. What happens is that Excel converts the values to particular data types after introspecting the data in the cells. This is annoying and stupid, especially when you have financial, scientific and engineering data in a format that doesn't fit well into Microsoft's algorithm for dealing with numbers, dates, times, and currencies.

There is a way around this giant annoyance. Instead of taking the easy way of opening the file, open Excel directly with a fresh spreadsheet. Then, on the ribbon, go to the Data tab and click on the 'From Text' button. This lets you open a delimited text file and treat it as a data source, and Excel will walk you through options for how to interpret its contents. This is a much better way of dealing with the data, especially when working with tab-delimited files.

Thursday, June 10, 2010

Background on the home search page ? WTF Google ?

Today I went to Google (as I do most days) to search for some stuff, but when I got there, I was greeted with a horrible looking background image on the search page. My very first thought was "Did I just go to Bing by mistake ?", then I looked closer at the page and realized it was indeed Google. Needless to say, I was incredibly pissed. Why did Google change their signature search page ? It was great the way it was ! That was specifically why I went to Google. If I wanted to use a search engine that looked like a piece of shit, I'd use Bing. In my mind, this is nothing short of an epic fail on Google's part. And apparently, I'm not alone in this feeling.

Wednesday, May 05, 2010

My hatred of Microsoft is justified ... yet again

There are two different schools of thought when it comes to being a vendor of very large software used by millions of people :

  1. Move your software forward as often as possible, even if a few customers have to suffer lack of backward compatibility

  2. Move your software forward as little as possible, even if a few customers have to suffer lack of innovation


I've noticed from my own personal experiences that Apple tends to take the former attitude, whereas Microsoft tends to take the latter attitude. In the end, someone (probably you) is going to get screwed over at some point, it's just a matter of how you want it to happen. As a consumer, I generally like Apple products, so on my personal technology front, I choose the former. As a software developer, I'm forced by business constraints to accept the latter. The specific circumstances that bring me to mention this :

Today I was working with version 4.0 of the .NET Framework, the very latest (and supposedly greatest) from Microsoft, along with the very latest version of their Visual Studio (2010) software. I'm also using these with WPF (the Windows Presentation Foundation) to create an application for my employer. For various reasons, I had to go back and refactor some old code, part of which required opening log files for viewing within the application. WPF has no file dialog of its own ; to open a file, you must fall back on an OpenFileDialog borrowed from outside WPF (the Microsoft.Win32 wrapper, or the one that comes with the Windows Forms library), dialogs which have been around since Windows 2000 (if not earlier, I'm admittedly not as familiar with the lifecycle of this technology as I could be). If you're a developer on the Windows platform, or even just an observant user, you'll have noticed that the OpenFileDialog has a Places bar on the left hand side that provides common places for storing files, some of which are very general and require some drilling down into subfolders to find what you actually want. I wanted to be able to add a place to this bar so that users of the application (company employees who work as technicians out on job sites) could have a direct link to the directory typically used to store log files on their machines.

It turns out that adding a folder to the Places bar is obscenely difficult, and must be done through registry hacks. My question is this (directed squarely at the developers at Microsoft responsible for writing the GUI controls, specifically these dialogs) : what the hell was going through your head that would make you think the way you've implemented these dialogs is a good thing ? It's obscenely hard for developers to customize the generalized tools you've given them, and this in turn makes it hard on the users of software made by developers bound by these stupid and seemingly arbitrary constraints. I find myself growing increasingly frustrated by decisions of the Microsoft developers that seem obviously poor when you look at them from the perspective of a framework user. These poor decisions are costing me a lot in terms of time I have to spend developing around them and researching alternatives. It's now no wonder to me how other companies can make absurd sums of money off of controls designed as workarounds for the short-sightedness of Microsoft's developers, and frankly, it's quite depressing.
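
For the record, the hack most commonly cited is the comdlg32 Placesbar key, which holds values Place0 through Place4 ; I'm reconstructing this from memory (and it targets the old-style common dialog), so verify the path and whether your particular dialog honours it before relying on it :

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar" /v Place0 /t REG_SZ /d "D:\JobLogs" /f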

Tuesday, May 04, 2010

Fixing telnet disconnects

I've recently been working with a lot of embedded systems via telnet, and I'll frequently forget to disconnect my session before physically disconnecting the machine from my local LAN. Unfortunately, this results in telnet hanging the session, and me being unable to do anything else. Until now, I've just closed the window and opened another, but that became a pain in the ass, especially when I had to discard my (very useful) terminal history. It turns out you can rescue a hung telnet session and avoid having to kill your terminal by pressing Ctrl+5, which most terminals send as telnet's escape character (the same character as Ctrl+]). That drops you out of the dead session to the telnet> prompt, where typing quit ends the session and returns you to your shell. Incredibly useful. Looks like there's a ton of other useful stuff at the site where I found this tidbit.

Wednesday, April 21, 2010

Resolving slow SSH login times

At work, like most other organizations, we have a Linux server that we access via SSH. Lately, my use of it has skyrocketed in order to test software, and logins and file copying have been very slow. In my efforts to find out why, I discovered that sshd has a setting in its configuration file called 'UseDNS'. The default is 'yes', so even if the setting is commented out, sshd will try to perform reverse DNS lookups on the IP addresses of users logging in. This was a bad thing for us, especially since we have this box locked down and unable to contact any DNS servers, so every lookup had to time out. After disabling the DNS lookups, my login and file copy times dropped to near zero. I hope this helps somebody.
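
The change itself is a one-liner (the config lives at /etc/ssh/sshd_config on our Ubuntu box ; your distro may differ) :

# add to /etc/ssh/sshd_config :
UseDNS no

# then restart the daemon (Ubuntu) :
sudo /etc/init.d/ssh restart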

*EDIT:* I've also encountered this problem on the BeagleBone Black boards which I've recently acquired for use in a project.

Tuesday, April 20, 2010

Two bash functions that will make your software development life 10% easier

... if you use SSH to move between machines a lot. Which I do. I have several machines that I work on regularly, and a centralized development machine shared with others that I frequently have to exchange files with. The following are the two most useful bash functions you can put in your .bashrc file if you use SSH as much as I do :

function ssh-create-keys {
    # generate an RSA keypair as ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
    ssh-keygen -t rsa
}

function ssh-setup-on {
    # append our public key to the remote account's authorized_keys
    cat ~/.ssh/id_rsa.pub | ssh "$1" "cat - >>.ssh/authorized_keys"
}

The first sets up an SSH key pair for you in your home directory. This key will be used to identify you on other machines, and can be used for various purposes on your local machine as well. The second function logs you into the machine given by its first argument (in username@anothermachine form), and adds your public key to the list of authorized keys for that machine.

Once you've run the first function, and run the second to set up your public key on an account on another machine, you'll be able to use SSH to log in and copy files freely to and from the account on the other machine without having to enter a password. While this is incredibly convenient, it also comes with a caveat : it's dangerous. If somebody other than yourself gains physical access to your machine and can log on as you (or use your already logged-on account), they can move to those same machines freely and perform possibly malicious actions, as you. Keep that in mind.
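
Typical usage looks like this (the hostname is hypothetical) :

ssh-create-keys          # accept the defaults, or set a passphrase
ssh-setup-on me@devbox   # enter the remote password one last time
ssh me@devbox            # no password prompt from here on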

Saturday, April 17, 2010

Copying files in Linux

Lately I've been doing a lot of embedded development with Linux, and copying files between systems has been a bit of a pain. Fortunately, a combination of rsync and SSH solved my problems, with a command that lets me copy files from a directory on one system to a directory on another system, recursively, with symlink preservation (and even duplication !) :

rsync -azuv -e ssh  user@systemaddress:~/path/to/dir/* .
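
For the curious : -a is archive mode (recursion, symlinks, permissions, times), -z compresses in transit, -u skips files that are newer on the receiving side, and -v is verbose. Pushing the other way is the same command with the arguments swapped (paths hypothetical) :

rsync -azuv -e ssh ./path/to/dir/ user@systemaddress:~/path/to/dir/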

Sunday, April 04, 2010

WTF ?! Windows (as of Vista) no longer supports multiple different monitors!? WHY ????

Apparently, as of Windows Vista, Windows no longer supports multiple video cards that don't use the same driver!! Why ? I use a Radeon and a Matrox QID to drive 6 displays on my development box, and this just entirely fucks me over. At best, I can only use four of my displays now, and even that will only happen when Matrox gets off their lazy ass and releases a Windows 7 driver for their QID LP PCIe video cards. I'm so disappointed with what's a complete step backward for Microsoft. Epic fail, Microsoft.

Saturday, January 23, 2010

Seeing where Perl looks for its modules on a system

Because we have very limited space on some of the embedded devices we use that run Perl, we can only store a handful of modules on these systems. This is a consequence of the fact that we run Busybox on these things, so by definition everything on these boxes is limited, if present at all. Therefore, we have to check whether a module is available before we can use it in our code. Fortunately, there's a quick one-liner to see where Perl looks for modules on a system :

perl -e'print join "\n", @INC'


I got this off a forum post from somewhere, and I'd post it here if I could find the link again. My apologies to the author of that forum post if they ever happen to run across this blog.
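
There's also a quick way to check whether one specific module is present without writing any real code (the module name is just an example) :

perl -MSome::Module -e1 2>/dev/null && echo available || echo missing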

Embarking on a Perl journey

A long time ago, in a job far, far away, I had to deal with some Perl. I learned just enough to get me by for the duration of the task at hand, and then pretty much forgot everything I had learned. Now, at my latest job, I'm dealing extensively with legacy systems that have a considerable amount of logic written in Perl. Some of it needs to be ported over to other languages (for various reasons), and some of it needs to be updated or written fresh, because Perl is the only language that's both abstract enough and light enough on the processor to run on the embedded systems we deal with. Therefore, you're going to start seeing a lot more Perl posts on this blog.

Thursday, January 21, 2010

Setting up Tomcat (5.5) on Ubuntu Server 8.10

I recently ran into some old quirks when provisioning a new server for our company's web applications on Ubuntu 8.10 (Intrepid Ibex). Because the manager apps are no longer installed by default, you need to install a couple of extra packages along with Tomcat :

sudo apt-get install -y tomcat5.5 tomcat5.5-admin tomcat5.5-webapps


If you're copying configuration over from a previous Tomcat / Ubuntu installation, you need to make sure the permissions on all the files you copy are set correctly. In most cases, you'll have to run :

chown -R tomcat55:adm [file and folder list here]


If you're securing the applications with a certificate, make sure it's valid for the host it's being served from, and ensure that you've set it up properly in your server.xml configuration file. If you want useful logging, you'll also have to place a log4j.properties file in

$CATALINA_HOME/common/classes
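
For reference, a minimal log4j.properties along the lines of the Tomcat 5.5 docs looks something like this (the file name, levels and sizes are just examples, and if memory serves you'll also need log4j's jar in $CATALINA_HOME/common/lib) :

log4j.rootLogger=INFO, R
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=${catalina.home}/logs/tomcat.log
log4j.appender.R.MaxFileSize=10MB
log4j.appender.R.MaxBackupIndex=10
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d %p [%t] %c - %m%n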


Hope this helps.

Dumping just your schema with MySQL dump

A simple one-liner :
mysqldump -u root -p mydatabasename --no-data=true --add-drop-table=false > test_dump.sql


With this command, you'll be prompted for your root password. I got this from here. Simple.

Tuesday, January 12, 2010

The Curious Case of Damned DataIntegrityViolationException

In one of the projects on which I contract, we recently started encountering a problem importing and parsing text record files into our system, a process which had previously given us no trouble. My first thought on hearing this was that the partner from whom we obtain the files had changed the file format (again). Upon closer inspection, nothing had changed in the files. My next step was to try importing them into a development system and seeing what was going on. As it turned out, the application was catching Spring's DataIntegrityViolationException. I was floored as soon as I saw this, because our application was supposed to be catching this exception behind one of the business interfaces and converting it to an internal exception that's used in business logic. After some more poking around to confirm what was really going on, I threw the problem into Google, and the second result was a post in the Spring forum made by a user having exactly the same problem I was.


To summarize their problem quickly : they were using a transaction manager, and they were intercepting their business methods (via interfaces) with Aspects, which we're also doing. The problem was this : as soon as the internal Aspects were applied to the business interface, the ordering of advice applied to the interface implementor changed, so the Hibernate session underneath was now getting flushed later, by the transaction manager, instead of in the business method where it had been flushed previously. The result was that DataIntegrityViolationExceptions were now being thrown outside of the intercepted method, instead of inside it where they were expected. A manual session.flush() inside of a HibernateCallback within the business method fixed this :


/**
 * @see AchPaymentNoticeOfChangeService#registerNoc(AchPaymentNoticeOfChange)
 */
@Override
public void registerNoc(final AchPaymentNoticeOfChange changeNotification)
        throws IllegalArgumentException, NoticeOfChangeAlreadyExistsException, Exception {
    try {
        getHibernateTemplate().execute(new HibernateCallback() {
            @Override
            public Object doInHibernate(Session session) throws HibernateException, SQLException {
                session.save(changeNotification);

                // Flush the session so the database is synchronized by the end
                // of this call, rather than waiting for a transaction manager
                // to flush it later and risk the DataIntegrityViolationException
                // occurring outside of this method's handling.
                session.flush();

                return null;
            }
        });
    } catch (DataIntegrityViolationException dive) {
        throw new NoticeOfChangeAlreadyExistsException("A notice of change already exists for payment ["
                + changeNotification.getAchPayment().getId() + "]", dive, changeNotification);
    }
}


I hope this post helps somebody else who runs into this problem.