Tuesday, July 14, 2009

Keeping your servers up to date

As I've previously mentioned on this blog, our company uses Ubuntu for our servers. I won't reiterate the reasons why in this post; you can search this blog using the Ubuntu tag if you're interested in them. Our servers have uptimes of months and months, and as a result the clocks on the machines tend to drift over time, which has inspired me to start using NTP to keep them synchronized. A quick Google search yielded the desired results. To manually synchronize with NTP, you can run the following commands:

sudo /etc/network/if-up.d/ntpdate

sudo ntpdate pool.ntp.org
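If you just want to see how far off the clock is without actually changing anything, ntpdate also has a query-only mode (no sudo needed, since the clock isn't touched):

ntpdate -q pool.ntp.org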


Obviously, entering this command every day or every week gets tiresome and stupid, so you can get cron to run it daily for you. Writing into /etc/cron.daily needs root, so pipe through sudo tee rather than using a plain shell redirect:

echo "sudo ntpdate ntp.ubuntu.com" >> /etc/cron.daily/ntpdate
chmod 755 /etc/cron.daily/ntpdate
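The result is just a one-line script in /etc/cron.daily/ntpdate, which the daily cron run executes as root (so no sudo is needed inside it). If you'd rather create the file by hand in an editor, something like this works too; the shebang just makes it explicit that it's a shell script:

#!/bin/sh
ntpdate ntp.ubuntu.com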

Tomcat keystore: too many open files - Continued

Further to my last post, it seems there was another, larger underlying problem that was causing the exceptionally high number of connections to our servers. One of our clients "required" (I use that term loosely, because they didn't really need the information) extra information for transactions that was not included in the optimized change metadata they had been instructed to query in order to update their transactions. So instead of just querying for the update metadata, they would do that, and then they'd also query for each individual transaction.

Assume we page our metadata at 100 transactions per page. If the client had submitted a batch of 800 transactions, then instead of making 800 / 100 = 8 calls to our server to update the transactions in the batch, they'd make 8 + (800 * 1) = 808 calls to update their system. They were effectively launching a denial-of-service attack on our servers every time they wanted to update a batch of transactions.

Needless to say, I consulted with them on the issue and updated our change metadata to include the information they "need" (which they've already got in their system), and they've updated their system to bring the number of requests down to the proper level. So let this be a lesson to anybody reading this blog post who has to develop systems that deal with external clients: make sure your clients fully understand the purpose and intent of all the features of your system before they start developing against it and using it.
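To make the difference concrete, here's a rough sketch of the two update strategies in Java. This is illustrative only; the interface and method names are made up for the example and are not our actual API.

import java.util.List;

// Hypothetical sketch; the API names below are invented for illustration.
public class BatchUpdateSketch {

    static final int PAGE_SIZE = 100; // change metadata is paged at 100 transactions per page

    interface TransactionApi {
        List<String> getChangeMetadataPage(int pageNumber); // one call per page of change metadata
        String getTransaction(String transactionId);        // one call per individual transaction
    }

    // The intended approach: an 800-transaction batch needs 800 / 100 = 8 calls.
    static void updateFromChangeMetadata(TransactionApi api, int batchSize) {
        int pages = (batchSize + PAGE_SIZE - 1) / PAGE_SIZE;
        for (int page = 0; page < pages; page++) {
            List<String> changes = api.getChangeMetadataPage(page);
            // apply the change metadata locally; no further calls to the server are needed
        }
    }

    // What the client was doing: 8 metadata calls plus one extra call per
    // transaction, i.e. 8 + (800 * 1) = 808 calls for the same batch.
    static void updatePerTransaction(TransactionApi api, int batchSize) {
        int pages = (batchSize + PAGE_SIZE - 1) / PAGE_SIZE;
        for (int page = 0; page < pages; page++) {
            for (String transactionId : api.getChangeMetadataPage(page)) {
                api.getTransaction(transactionId); // one extra round trip per transaction
            }
        }
    }
}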

Wednesday, July 08, 2009

Too many open sockets error in Tomcat 5.5

Our company recently started a new project. Our primary client is the first one to use it, and use it they have. I'd suspected they were putting an unusually high load on our servers, and tonight it was confirmed when our server stopped handling requests, refusing them and then generating "Too many open files" errors in the Tomcat log files, referring to my Tomcat SSL keystore. After doing some brief research on the error, I've discovered that this can happen when the Tomcat Connector element in server.xml is configured with 'maxThreads' and 'acceptCount' values that are too low. I've since tripled the number of threads and acceptable connections. Hopefully this will resolve things.
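For anyone wondering where these settings live: they're attributes on the HTTPS Connector in conf/server.xml. 'maxThreads' caps the number of request-processing threads, and 'acceptCount' caps the queue of connections waiting for a free thread. The snippet below is only an illustration with made-up values and placeholder keystore details, not our production configuration:

<Connector port="8443" maxHttpHeaderSize="8192"
           maxThreads="450" acceptCount="300"
           enableLookups="false" disableUploadTimeout="true"
           scheme="https" secure="true"
           keystoreFile="/path/to/keystore" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />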