Application Deployment Best Practices

Someone just asked me to define “best practices” for a collection of application deployments. The question was impossible to answer because the applications we were talking about were all bespoke, so each one had good and bad ways of being deployed. It would take an age to go through each one and explain the best way of handling its unique installation operations.

However, I’ve given it some thought and I think there are still a handful of best practices for application deployments that extend across almost all applications – certainly all the ones we were talking about. Here are some of the things I would define as best practices for application deployments.

  1. Keep the installation structure SIMPLE. Files and directories should be kept to a minimum. Don’t install anything that’s never going to be used.
  2. Always get rid of old files. When something goes wrong on a production host, the last thing I want to be doing is trawling through random directories and copies of old files to find what’s gone wrong.
  3. Automate it – this almost goes without saying, but deployments should NOT be manual; there’s far too much room for human error. Use a tool for doing deployments, something that supports the native OS operations, like rpm via yum for Red Hat. Alternatively, if you’re deploying to multiple different OSs, try using a scripting language to script the deployments (see the sketch after this list).
  4. Don’t overdo it with the symlinks. Use them only if you have to. It’s too easy to end up with symlinks pointing to the wrong place, or to break them altogether. It’s also a bad idea for your applications themselves to rely on symlinks. I would rather enforce standardisation of the environments and have my applications use real paths than rely on symlinks, which simply add another level of configuration and a reliance on something else that is all too breakable.
  5. Delete everything first. If you’re simply deploying a new directory or package, completely remove the existing one. Take a backup if necessary, but delete that backup at the end if the deployment is successful. This is similar to point 2, but more robust. If at all possible, you shouldn’t rely on sync tools like rsync/xcopy/robocopy to do your deployments. If your time and bandwidth allow, delete everything first and upload the complete new package, not just a delta.
  6. Have a rollback strategy. Things can sometimes go wrong, and often the best policy is to roll back to a known-working version. Keeping a backup of the last known working version locally on the target machine can often be the quickest and simplest method, but again I would avoid this option if bandwidth allows – I don’t like having old versions sitting around on servers; it leads to cluttered production boxes. I would much rather do a rollback using the same mechanism used for a normal deployment.
  7. Don’t make changes to your deploy mechanism or deploy scripts between deploying to different environments. This is just common sense, but I’ve seen a process where, in order to actually deploy something onto a production server, the deploy script had to be manually changed! Suffice it to say I wasn’t a fan of that idea.
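
To make points 3, 5 and 6 concrete, here’s a minimal sketch of what such a deployment script might look like. Everything in it – the paths, the tarball package format, the function names – is an illustrative assumption of mine, not a prescription; the point is the shape: automate it, delete the old install first, deploy the complete package, and roll back through the same mechanism.

```python
#!/usr/bin/env python3
"""Minimal deploy sketch: automated, delete-first, with a rollback path.
The paths and the tarball package format are illustrative assumptions."""
import shutil
import tarfile
from pathlib import Path

INSTALL_DIR = Path("/opt/myapp")     # hypothetical install location
BACKUP_DIR = Path("/opt/myapp.bak")  # temporary safety net, removed on success

def deploy(package: Path) -> None:
    """Back up the current install, delete it, unpack the complete new package."""
    if BACKUP_DIR.exists():
        shutil.rmtree(BACKUP_DIR)       # never leave stale backups around (point 2)
    if INSTALL_DIR.exists():
        INSTALL_DIR.rename(BACKUP_DIR)  # keep a temporary backup (point 5)
    try:
        INSTALL_DIR.mkdir(parents=True)
        with tarfile.open(package) as tar:
            tar.extractall(INSTALL_DIR)  # the full package, not a delta (point 5)
    except Exception:
        # The unpack failed: restore the previous version and re-raise.
        if INSTALL_DIR.exists():
            shutil.rmtree(INSTALL_DIR)
        if BACKUP_DIR.exists():
            BACKUP_DIR.rename(INSTALL_DIR)
        raise
    else:
        if BACKUP_DIR.exists():
            shutil.rmtree(BACKUP_DIR)  # success, so delete the backup (point 5)

def rollback(last_good_package: Path) -> None:
    """Roll back by redeploying the last known-good package
    through the exact same mechanism as a normal deployment (point 6)."""
    deploy(last_good_package)
```

A real version would obviously need logging, service stop/start and so on, but the delete-first, full-package, same-mechanism-for-rollback shape is the part I care about.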

As you can probably appreciate, this list is generally not written with installers (such as msi files) in mind; maybe I’ll look at those another time.

Principles of Continuous Integration

I’m a big fan of CI, and as a simple best practice/process for development teams I think it’s right up there as one of the most important to get right. Most software places now have a Continuous Integration system, but do they actually practice Continuous Integration? In my experience the answer isn’t always “yes”.

For me, practicing CI goes a lot further than simply having a CI system set up, running check-in/nightly builds, unit tests, code inspections and so on.

So what is my definition of “Practicing Continuous Integration”? Well, for me it is simply:

  • Having a CI system (obviously)
  • Adopting the principles of Continuous Integration

I did a quick google to see if I could find a simple list of some good CI principles – I know I’ve read many of them before in numerous good books such as Paul M. Duvall’s “Continuous Integration” and Jez Humble’s “Continuous Delivery”, but I couldn’t find an easy cut-out-and-use version on the internet, so I’ve decided to knock something up here 🙂

Everyone loves a good list so here’s a list of what I believe to be some principles of CI:

  1. Fix your build failures, immediately. Never leave a build broken. If you do, the build team should be well within their rights to roll back your last commit.
  2. Every check-in should be an improvement on the last. Each check-in should have value, otherwise why do it? An improvement could be defined as a new piece of functionality, a bug fix, an increase in code coverage etc. If the improvement cannot be easily measured, ensure you detail it in your commit log.
  3. Never check in on a broken build (unless you’re fixing the breakage)! If there are multiple commits to a codebase after the original breaking commit, it leaves a bit of a bad smell in your CI system. Firstly, unless your commits are coming thick and fast, it means you aren’t reacting to your build failures, and this is not acceptable – it basically means you’re not playing ball and you’re letting the rest of the team down. Secondly, the cause of the original breakage can be a lot harder to identify if several other check-ins have happened since the commit that broke the build!
  4. Code should be built and tested before checking in to source control. You can do this either by building locally yourself and checking that the tests pass, or by investing in a CI/VC system that can perform pre-commit builds, such as Pulse, TeamCity and ElectricCommander (list obtained from Jez Humble’s Continuous Delivery book). Alternatively, you can use pre-commit hooks in svn to achieve something similar, and I’m led to believe this can also be done in Git and buildbot (there’s a rough hook sketch after this list).
  5. Everyone should be interested in the output of the CI system. And that includes project managers and testers. Working on the principle that at some point a build works, and every commit thereafter is an improvement, we should be proud to display the results of every build to the whole project team, and we should react quickly if, for any reason, our last check-in was not an improvement to the state of our software.
  6. Functional automated tests should be run as part of the CI process. If you have a suite of automated regression tests, the chances are it’s no great effort to hook them up to the CI system, and definitely worth it in the long run.
  7. Make the builds fit for purpose. I absolutely hate having to wait ages for a simple check-in build to compile, when all I really want to know is whether I’ve broken my unit tests or violated some important rule. Keep the check-in builds quite lightweight – run your unit tests and code coverage only, perhaps. Do you really need to run all your code inspections and analysis tools, as well as automated functional tests, every time there’s a check-in? Chances are you don’t, so keep the check-in builds light and leave the heavy-duty work for the nightly builds, while we’re all asleep!
  8. React to the feedback. The CI system is a tool to help drive improvement and quality as much as anything else, so we should never ignore it. However, it’s easy to ignore your CI system if the feedback isn’t relevant. The solution is to work closely with the build team, as well as the rest of the project members, to ensure the build feedback is targeted and relevant.
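
To illustrate point 4 (and the “keep check-in builds light” idea from point 7), here’s a rough sketch of a Git pre-commit hook. The build and test commands are my own placeholders – substitute whatever your project actually uses. Dropped into .git/hooks/pre-commit and made executable, it refuses the commit if the fast checks fail, leaving the heavier suites for the CI system’s builds.

```python
#!/usr/bin/env python3
"""Sketch of a Git pre-commit hook: build and run the fast unit tests
before a commit is accepted. Save as .git/hooks/pre-commit and make it
executable. The commands below are placeholder assumptions."""
import subprocess
import sys

# Keep this list light (point 7): compile and unit tests only,
# no slow functional suites or code inspections.
CHECKS = [
    ["make", "build"],      # hypothetical compile step
    ["make", "unit-test"],  # hypothetical fast unit-test target
]

def main() -> int:
    for cmd in CHECKS:
        print("pre-commit: running " + " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("pre-commit: check failed, refusing the commit", file=sys.stderr)
            return 1  # any non-zero exit makes Git abort the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```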

That’s about it for my list so far, I’m sure I’ll add to it as time goes by. Please feel free to add your suggestions!