Continuous Delivery with a Difference: They’re Using Windows!

 

Last night I was taken to the London premiere of Warren Miller’s latest film called “Flow State”. Free beers, a free goodie bag and an hour and a half of the best snowboarders and skiers in the world doing tricks which in my head I can also do, but in reality are about 10000000% better than anything I can manage. Good times. Anyway, on my way home I was thinking about “flow” and how it applies to DevOps. It’s a tricky thing to maintain in an Ops capacity. It reminded me of a talk I went to last week where the speakers talked of the importance of “Flow” in their project, and it’s inspired me to write it up:

Thoughtworkers Pat and Aleksander have been working at a top secret location* for a top secret company** on a top secret mission to implement continuous delivery in a corporate Windows world***
* Ok, I actually forgot to ask where it was located

** It’s probably not a secret, but they weren’t telling me

*** It’s obviously not a secret mission seeing as how: a) it was the title of their talk and b) I just said what it was

Pat and Aleksander put their collective PowerPoint skills to good use and made a presentation on the stuff they’d done and learned during their time working on this top secret project, but rather than call their presentation “Stuff We Learned Doing a Project” (that’s what I would have named it) they decided to call it “Experience Report: Continuous Delivery in a Corporate Windows World”, which is probably why nobody ever asks me to come up with names for their presentations.

This talk was at Skills Matter in London, and on my way over there I compiled a list of questions which I was keen to hear their answers to. The questions were as follows:

  1. What tools did they use for infrastructure deployments? What VM technology did they use?
  2. How did they do db deployments?
  3. What development tools did they use? TFS?? And what were their good/bad points?
  4. Did they use a front-end tool to manage deployments (i.e. did they manage them via a C.I. system)?
  5. Was the company already bought-in to Continuous Delivery before they started?
  6. What breed of agile did they follow? Scrum, Kanban, TDD etc.
  7. What format were the built artifacts? Did they produce .msi installers?
  8. What C.I. system did they use (and why did they decide on that one)?
  9. Did they use a repository tool like Nexus/Artifactory?
  10. If they could do it all again, what would they do differently?

During the evening (mainly in the pub afterwards) Pat and Aleksander answered almost all of the questions above, but before I list them, I’ll give a brief summary of their talk. Disclaimer: I’m probably paraphrasing where I’m quoting them, and most of the content is actually my opinion, sorry about that. And apologies also for the completely unrelated snowboarding pictures.

Cultural Aspects of Continuous Delivery

Although CD is commonly associated with tools and processes, Pat and Aleksander were very keen to push the cultural aspects as well. I couldn’t agree more with this – for me, Continuous Delivery is more than just a development practice, it’s something which fundamentally changes the way we deliver software. We need to have an extremely efficient and reliable automated deployment system, a very high degree of automated testing, small consumable-sized stories to work from, very reliable environment management, and a simplified release management process which doesn’t create more problems than it solves. Getting these things right is essential to doing Continuous Delivery successfully. As you can imagine, implementing these things can be a significant departure from traditional software delivery systems (which tend to rely heavily on manual deployments and testing, as well as having quite restrictive project and release management processes). This is where the cultural change comes into effect. Developers, testers, release managers, BAs, PMs and Ops engineers all need to embrace Continuous Delivery and significantly change the traditional software delivery model.

FACT BOMB: Some snowboarders are called Pat. Some are called Aleksander.

Tools

When Aleksander and Pat started out on this project, the dev teams were already using TFS as a build system and for source control. They eventually moved to using TeamCity as a Continuous Integration system, and Git-TFS as the devs’ primary interface to the source control system.

The most important tool for a snowboarder is a snowboard. Here’s the one I’ve got!

  • Builds were done using MSBuild
  • They used TeamCity to store the build artifacts
  • They opted for NUnit for unit testing
  • Their build created zip files rather than .msi installers
  • They chose SpecFlow for stories/specs/acceptance criteria etc.
  • They used PowerShell to do deployments
  • Sites were hosted on IIS
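
For anyone who hasn’t come across SpecFlow: the specs are plain Gherkin text files which SpecFlow binds to .NET test code. A hypothetical example (the feature and values are invented, nothing to do with their project):

```gherkin
# Hypothetical SpecFlow feature file - scenario invented for illustration.
Feature: Account withdrawal
  Scenario: Withdraw within balance
    Given an account with a balance of 100
    When I withdraw 30
    Then the balance should be 70
```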

 

Practices

“Work in Progress” and “Flow” were the key metrics in this project, which is why they chose Kanban. I neglected to ask them if they actually measured their flow against quality; if I find out I’ll make a note here. Anyway, back to the project… although they used Kanban, they still ran iterations and weekly showcases. These were more for show than anything else: they continued to work off a backlog, and any tasks that were unfinished at the end of one “iteration” simply rolled over to the next.

Another point that Pat and Aleksander stressed was the importance of having good Business Analysts. They were essential in carving up stories into manageable chunks, avoiding “analysis paralysis”, shielding the devs from “fluctuating functionality” and ensuring stories never got stuck for too long. Some other random notes on their processes/practices:

  • Used TDD with pairing
  • Testers were embedded in the team
  • Maintained a single branch of code
  • Regression testing was automated
  • They still had to raise a release request with Ops to get stuff deployed!
  • The same artifact was deployed to every environment*
  • The same deploy script was used on every environment*

* I mention these two points because they’re oh-so important principles of Continuous Delivery.
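
The “same script, every environment” principle is worth sketching. Shown here in portable shell for brevity (their scripts were PowerShell); the environment names and hostnames are hypothetical:

```shell
# One deploy function for every environment: only the lookup differs, never the logic.
# Hostnames below are invented for illustration.
deploy() {
  env="$1"
  case "$env" in
    testarea) server="test-web01" ;;
    uat)      server="uat-web01" ;;
    live)     server="live-web01" ;;
    *) echo "unknown environment: $env" >&2; return 1 ;;
  esac
  echo "deploying build.zip to $server"
}
```

The point is that nothing environment-specific is hard-coded into the deployment steps themselves, so the script that worked in UAT is the very same script that runs against Live.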

Obviously I approve of the whole TDD thing, testers embedded in the team, automated regression testing and so on, but I’m not so impressed with the idea of having to raise a release request (manually) with Ops whenever they want to get stuff deployed – it’s not very “devops” 🙂 I’d seek to automate that request/approval process. As for the “single branch of code”, well, it’s nice work if you can get it. I’m sure we’d all like a single branch to work from, but in my experience it’s rarely possible. And please don’t say “feature toggling” at me.

One area the guys struggled with was performance testing. Firstly, it kept getting de-prioritised, so by the time they got round to it, it was a little late in the day – I assume this ruled out any design changes the results might otherwise have prompted. Secondly, they had trouble actually setting up the load testing in Visual Studio – settings hidden all over the place, etc.

Infrastructure

Speaking with Pat, he was clearly very impressed with the wonders of PowerShell scripting! He said they used it very extensively for installing components on top of the OS. I’ve just started using it myself (I’m working with Windows servers again) and I’m very glad it exists! However, Aleksander and Pat did reveal that the procedure for deploying a new test environment wasn’t fully automated, and they had to work off a checklist of things to do. Unfortunately, the reality was that every machine in every environment required some degree of manual intervention. I wish I had a bit more detail about this – I’d like to understand what the actual blockers were (before I run into them myself), and I hate to think that Windows can be a real blocker for environment automation.
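
I don’t know what their scripts actually looked like, but the flavour of thing PowerShell gets used for in this context is along these lines – a minimal sketch, where the share, path and app-pool names are all hypothetical (the cmdlets come from the standard WebAdministration module):

```powershell
# A sketch of a PowerShell deployment step: copy a build drop into IIS
# and recycle the app pool. Paths and names below are invented.
Import-Module WebAdministration

Copy-Item -Path '\\buildserver\drops\MyApp\*' -Destination 'C:\inetpub\MyApp' -Recurse -Force
Restart-WebAppPool -Name 'MyAppPool'
```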

Anyway, that’s enough of the detail, let’s get to the answers to the questions (I’ve added scores to the answers because I’m silly like that):

  1. What tools did they use for infrastructure deployments? What VM technology did they use? – PowerShell! They didn’t provision the actual VMs themselves, the Ops team did that. They weren’t sure what tools the Ops team used. 1
  2. How did they do db deployments? – Pass 0
  3. What development tools did they use? TFS?? And what were their good/bad points? – TFS. Source Control and C.I. were bad so they moved to TeamCity and Git-TFS 2
  4. Did they use a C.I. tool to manage deployments? – Nope 0
  5. Was the company already bought-in to Continuous Delivery before they started? – They hired ThoughtWorks so I guess they must have been at least partly sold on the idea! Agile adoption was on their roadmap 1
  6. What breed of agile did they follow? Scrum, Kanban, TDD etc. – TDD with Kanban 2
  7. What format were the built artifacts? Did they produce .msi installers? – Negatory, they used zip files like any normal person would. 2
  8. What C.I. system did they use (and why did they decide on that one)? – TeamCity, which is interesting seeing as how ThoughtWorks Studios produce their own C.I. system called “Go”. I’ve used Go before and it’s pretty good conceptually, but it’s also expensive and hard to manage once you’re running over 50 builds and 100 test agents. The UI is buggy too. However, it has great features, but the Open Source competitors are catching up fast. 2
  9. Did they use a repository tool like Nexus/Artifactory? – They used TeamCity’s own internal repo, a bit like with Jenkins where you can store a build artifact. 1
  10. If they could do it all again, what would they do differently? – They wouldn’t push so hard for the Git TFS integration, it was probably not worth the considerable effort at the end of the day. 1

TOTAL: 12

What does this total mean? Absolutely nothing at all.

What significance do all the snowboard pictures have in this article? None. Absolutely none whatsoever.


Coping With Big C.I.

Last night I went along to another C.I. meetup to listen to Tom Duckering, a DevOps consultant at ThoughtWorks, deliver a talk about managing a scaled-up build/release/CI system. In his talk, Tom discussed Continuous Delivery, common mistakes, best practices, monkeys, Jamie Oliver and McDonald’s.

Big CI and Build Monkeys

First of all, Tom started out by defining what he meant by “Big CI”.

Big CI means large-scale build and Continuous Integration systems. We’re talking about maybe 100+ bits of software being built, and doing C.I. properly. In Big CI there’s usually a dedicated build team, which in itself raises a few issues which are covered a bit later. The general formula for getting to Big CI, as far as build engineers (henceforth termed “build monkeys”) are concerned goes as follows:

build monkey + projects = build monkeys

build monkeys + projects + projects = build monkey society

build monkey society + projects = über build monkey

über build monkey + build monkey society + projects = BIG CI

 

What are the Issues with Big CI?

Big CI isn’t without its problems, and Tom presented a number of Anti-Patterns which he has witnessed in practice. I’ve listed most of them and added my own thoughts where necessary:

Anti-Pattern: Slavish Standardisation

As build monkeys we all strive for a decent degree of standardisation – it makes our working lives so much easier! The fewer systems, technologies and languages we have to support, the easier our lives are – it’s like macro configuration management in a way: the less variation the better. However, Tom highlighted that mass standardisation is the work of the devil, and by the devil of course, I mean McDonald’s.

McDonald’s vs Jamie Oliver 


Jamie Oliver might be a smug mockney git who loves the sound of his own voice BUT he does know how to make tasty food, apparently (I don’t know, he’s never cooked for me before). McDonald’s make incredibly tasty food if you’re a teenager or unemployed, but beyond that, they’re pretty lame. However, they do have consistency on their side. Go into a McDonald’s in London and have a cheeseburger – it’s likely to taste exactly the same as a cheeseburger from a McDonald’s in Moscow (i.e. bland and rubbery, and nothing like in the pictures either). This is thanks to mass standardisation.

Jamie Oliver, or so Tom Duckering says (I’m staying well out of this) is less consistent. His meals may be of a higher standard, but they’re likely to be slightly different each time. Let’s just say that Jamie Oliver’s dishes are more “unique”.

Anyway, back to the Continuous Integration stuff! In Big CI, you can be tempted by mass standardisation, but all you’ll achieve is mediocrity. With less flexibility you’re preventing project teams from achieving their potential, by stifling their creativity and individuality. So, as Tom says, when we look at our C.I. system we have to ask ourselves “Are we making burgers?”

Are we making burgers?

– T. Duckering, 2011

Anti-Pattern: The Team Who Knew Too Much

There is a phenomenon in the natural world known as “Build Monkey Affinity”, in which build engineers tend to congregate and work together, rather than integrate with the rest of society. Fascinating stuff. The trouble is, this usually leads the build monkeys to assume all the responsibilities of the CI system, because their lack of integration with the rest of the known world makes them isolated, cold and bitter (Ok, I’m going overboard here). Anyway, the point is this: if the build team don’t work with the project team, and every build task has to go through the build team, there will be a disconnect, possibly bottlenecks and a general lack of agility. Responsibility for build-related activities should be devolved to the project teams as much as possible, so that bottlenecks and disconnects don’t arise. It also stops all the build knowledge from staying in one place.

Anti-Pattern: Big Ball of CI Mud

This is where you have a load of complexity in your build system, loads of obscure build scripts, multitudes of properties files, and all sorts of nonsense, all just to get a build working. It tends to happen when you over engineer your build solution because you’re compensating for a project that’s not in a fit state. I’ve worked in places where there are projects that have no regard for configuration management, project structures in source control that don’t match what they need to look like to do a build, and projects where the team have no idea what the deployed artifact should look like – so they just check all their individual work into source control and leave it for the build system to sort the whole mess out. Obviously, in these situations, you’re going to end up with some sort of crazy Heath Robinson build system which is bordering on artificial intelligence in its complexity. This is a big ball of CI mud.

Heath Robinson Build System a.k.a. “a mess”

Anti-Pattern: “They” Broke MY Build…

This is a situation that often arises when you have upstream and downstream dependencies. Let’s say your build depends on library X. Someone in another team makes a change to library X and now your build fails. This seriously blows. It happens most often when you are forced to accept the latest changes from an upstream build. This is called a “push” method. An alternative is to use the “pull” method, which is where you choose whether or not you want to accept a new release from an upstream build – i.e. you can choose to stick with the existing version that you know works with your project.

The truth is, neither system is perfect, but what would be nice is if the build system had the flexibility to be either push or pull.
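
To illustrate the difference in dependency-management terms (borrowing Maven notation purely as an illustration – the project in the talk was .NET, and the coordinates here are invented): pinning a version gives you “pull”, while tracking the latest gives you “push”.

```xml
<!-- "Pull": pin a known-good version of the upstream library; upgrades are deliberate. -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>library-x</artifactId>
  <version>1.4.2</version>
</dependency>

<!-- "Push": always track the newest upstream build, so their changes land on you
     immediately. (Maven's LATEST keyword, deprecated in later versions, shows the idea.) -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>library-x</artifactId>
  <version>LATEST</version>
</dependency>
```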

The Solutions!

Fear not, for Tom has come up with some thoroughly decent solutions to some of these anti-patterns!

Project Teams Should Own Their Builds

Don’t have a separated build team – devolve the build responsibilities to the project team, share the knowledge and share the problems! Basically just buy into the whole agile idea of getting the expertise within the project team.

Project teams should involve the infrastructure team as early as possible in the project, and again, infrastructure responsibilities should be devolved to the project team as much as possible.

Have CI Experts

Have a small number of CI experts, then use them wisely! Have a program of pairing or secondment. Pair the experts with the developers, or have a rotational system of secondment where a developer or two are seconded into the build team for a couple of months. Meanwhile, the CI experts should be encouraged to go out and get a thoroughly rounded idea of current CI practices by getting involved in the wider CI community and attending meetups… like this one!

Personal Best Metrics

The trouble with targets, metrics and goals is that they can create an environment where it’s hard to take risks, for fear of missing your target. And without risks there’s less reward. Innovations come from taking the odd risk and not being afraid to try something different.

It’s also almost impossible to come up with “proper” metrics for CI. There are no standard rules, builds can’t all be under 10 minutes, and projects are simply too diverse. So if you need to have metrics and targets, make them pertinent, or personal for each project.

Treat Your Build Environments Like They Are Production

Don’t hand crank your build environments. Sorry, I should have started with “You wouldn’t hand crank your production environments would you??” but of course, I know the answer to that isn’t always “no”. But let’s just take it as read that if you have a large production estate, to do anything other than automate the provision of new infrastructure would be very bad indeed. Tom suggests using the likes of Puppet and Chef, and here at Caplin we’re using VMware which works pretty well for us. The point is, extend this same degree of infrastructure automation to your build and CI environments as well, make it easy to create new CI servers. And automate the configuration management while you’re at it!
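
To give a flavour of what that means in practice, here’s a minimal Puppet sketch – the package and service names are hypothetical, the point being that a build agent gets described as desired state, exactly as a production box would:

```puppet
# Describe the desired state of a CI build agent, just like production.
# Package and service names are invented for illustration.
package { 'jdk':
  ensure => installed,
}

service { 'build-agent':
  ensure  => running,
  enable  => true,
  require => Package['jdk'],
}
```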

Provide a Toolbox, Not a Rigid Framework

Flexibility is the name of the game here. The project teams have far more flexibility if you, as a build team, are able to offer a selection of technologies, processes and tricks, to help them create their own build system, rather than force a rigid framework on them which may not be ideal for each project. Wouldn’t it be nice, from a build team perspective, if you could allow the project teams to pick and choose whichever build language they wanted, without worrying that it’ll cause a nightmare for you? It would be great if you could support builds written in Maven, Ant, Gradle and MSBuild without any problems. This is where a toolkit comes in. If you can provide a certain level of flexibility and make your system build-language agnostic, and devolve the ownership of the scripts to the project team, then things will get done much quicker.

Consumer-Driven Contracts

It would be nice if we could somehow give upstream builds a “contract”, like a test API layer or something. Something that they must conform to, or make sure they don’t break, before they expose their build to your project. This is a sort of push/pull compromise.

And that pretty much covers it for the content of Tom’s talk. It was really well delivered, with good audience participation and the content was thought-provoking. I may have paraphrased him on the whole Jamie Oliver thing, but never mind that.

It was really interesting to hear someone so experienced in build management actually promote flexibility rather than standardisation. It’s been my experience that until recently the general mantra has been “standardise and conform!”. But the truth is that standardisation can very easily lead to inflexibility, and the cost is that projects take longer to get out of the door because we spend so much time compromising and conforming to a rigid process or framework.

Chatting to Christian Blunden a couple of months back about developer anarchy was about the first time I really thought that such a high degree of flexibility could actually be a good thing. It’s really not easy to get to a place where you can support such flexibility, it requires a LOT of collaboration with the rest of the dev team, and I really believe that secondment and pairing is a great way to achieve that. Fortunately, that’s something we do quite well at Caplin, which is pretty lucky because we’re up to 6 build languages and 4 different C.I. systems already!

ClickOnce and Nant – The Plot Thickens

Turns out that these ClickOnce deployment builds aren’t as piss-easy as I once thought.
Turns out that the builds need to be customised for different environments – nothing new there – but (and here comes the catch) all the environmental settings have to be applied at BUILD TIME!!! Why? I hear you ask, and the answer is: because if you edit the config files post-build it changes some checksum jiggery pokery wotnot and then the thingumyjig goes and fails!!! Typical. (Basically, the details you need to configure are held in files you cannot edit post-build, because the manifest file will do a checksum evaluation, see that someone has edited the file, and throw errors.)
So what I’ve decided to do is this….

  1. Copy all configurable files to separate, environmentally-named folders pre-build
  2. Use sed to replace tokens for each environment in these files
  3. Copy one of them back to the build folder
  4. Compile
  5. Copy the output to the environmentally-named folder

Repeat steps 3, 4 and 5 for all the environments.
And hey presto, this works.
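
Step 2’s token replacement boils down to a plain sed invocation. A sketch, where the token names (@@DBSERVER@@, @@APPURL@@) and the values are hypothetical:

```shell
# Swap environment tokens in a copied config file.
# Token names and values below are invented for illustration.
printf 'server=@@DBSERVER@@\nurl=@@APPURL@@\n' > app.config
sed -e 's/@@DBSERVER@@/uat-db01/' \
    -e 's|@@APPURL@@|http://uat/ClickOnce/|' app.config > app.config.UAT
cat app.config.UAT
```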

<target name="changeconfigs">
<!-- This bit sets up some folders where I'll do the prep work for each environment -->
<delete dir="${config.dir}\${project.name}" verbose="true" if="${directory::exists(config.dir+'\'+project.name)}" />
<mkdir dir="${config.dir}\${project.name}\TestArea" />
<mkdir dir="${config.dir}\${project.name}\DevArea" />
<mkdir dir="${config.dir}\${project.name}\Staging" />
<mkdir dir="${config.dir}\${project.name}\UAT" />
<mkdir dir="${config.dir}\${project.name}\Live" />

<!-- This bit moves a tokenised config file to these folders -->
<copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\TestArea\app.config" />
<copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\DevArea\app.config" />
<copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\Staging\app.config" />
<copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\UAT\app.config" />
<copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\Live\app.config" />

<!-- This bit calls sed, which replaces the tokens with relevant values for each environment, more on sed another time! -->
<exec program="${sedUAT.exe}" commandline="${sedParse.dir}" />
<exec program="${sedTestArea.exe}" commandline="${sedParse.dir}" />
<exec program="${sedDevArea.exe}" commandline="${sedParse.dir}" />
<exec program="${sedStaging.exe}" commandline="${sedParse.dir}" />
<exec program="${sedLive.exe}" commandline="${sedParse.dir}" />
</target>

<!-- This bit copies the edited file back to the build directory -->
<target name="prepTestArea">
<delete file="${source.dir}\app.config" />
<copy file="${config.dir}\${project.name}\TestArea\app.config" tofile="${source.dir}\app.config" />
</target>

<!-- This bit builds the ClickOnce project -->
<target name="publishTestArea">
<msbuild project="${base.dir}\Proj1\ClickOnce.vbproj">
<arg value="/t:Rebuild" />
<arg value="/property:Configuration=Release"/>
<arg value="/p:ApplicationVersion=${version.num}"/>
<arg value="/p:InstallUrl=http://testarea/ClickOnce/"/>
<arg value="/t:publish"/>
<arg value="/p:UpdateRequired=true"/>
<arg value="/p:MinimumRequiredVersion=${version.num}"/>
</msbuild>
</target>

<!-- This bit copies the output to an environment-named folder, ready for deployment -->
<target name="copyfilesTestArea">
<mkdir dir="${versioned.dir}\TestArea" />
<copy todir="${versioned.dir}\TestArea" includeemptydirs="true">
<fileset basedir="${base.dir}\Proj1\bin\Release\">
<include name="**.publish\**\*.*" />
</fileset>
</copy>
</target>

REPEAT LAST 3 TARGETS FOR THE OTHER ENVIRONMENTS.

Now that wasn’t too hard, and it doesn’t take up too much extra time.
I suppose I’d better mention some of the arguments I’m passing in the MSBuild calls:

<arg value="/t:Rebuild" /> – I do this because it must rebuild the .deploy files each time, or you get the previous build’s environment settings left in there, because MSBuild decides to skip files that haven’t changed…

<arg value="/property:Configuration=Release"/> – Obvious

<arg value="/p:ApplicationVersion=${version.num}"/> – ClickOnce apps have a version stamped on them for various reasons, one of them being for use in automatic upgrades – people with InstallShield knowledge will know what a joke that can be!

<arg value="/p:InstallUrl=http://testarea/ClickOnce/"/> – A pretty important one this, it stamps the URL for the download onto the manifest or application file.

<arg value="/t:publish"/> – just calls the publish target; I do this because it’s what produces the setup.exe

<arg value="/p:UpdateRequired=true"/>
<arg value="/p:MinimumRequiredVersion=${version.num}"/> – These 2 together mean the app will do a forced upgrade when a new version becomes available

So far, so good. My next trick will hopefully be how to get 2 installations working side-by-side. Currently it doesn’t work because one will overwrite the other. I’m working on it okay!!??

Building ClickOnce Applications with NAnt

Since I don’t like to actually do any work, and would much rather automate everything I’m required to do, I decided to automate a ClickOnce application build, because doing it manually was taking me literally, er, seconds, and this is waaaaaay too much like hard work. So, I naturally turned to NAnt, which is so often the answer to all my deployment questions… The answer came in the form of using NAnt to call MSBuild and pass the publish target, along with the version number. So, this is what you need to do:

Add a property to your NAnt script containing your build number (you can get this from CruiseControl.NET if you’re using CCNet to do your builds):

<property name="version.num" value="1.2.3.4"/>
Then just compile the project using NAnt’s MSBuild task, and call the publish target:

<target name="publish">
<msbuild project="${base.dir}\ClickOnce.vbproj">
<arg value="/property:Configuration=Release"/>
<arg value="/p:ApplicationVersion=${version.num}"/>
<arg value="/t:publish" />
</msbuild>
</target>

The next thing you need to do is create or update the publish.htm file. What I’ve done for this is to take a copy of a previously generated publish.htm, and replace the occurrences of the application name with a token. Then in the NAnt script, I replace the token with the relevant application name and version number. I do this because the version number will change with each build, and rather than manually update it, which is much too complicated for me, I’d rather just automate it so that I can go back to sleep while it builds. I tokenised the application name because of a much darker, more sinister reason that I’ll maybe explain at another time, but the world’s just not ready for that yet.
Anyway, here’s all that in NAntish:

<copy todir="${config.dir}\${project.name}">
<fileset basedir=".">
<include name="publish.htm" />
</fileset>
<filterchain>
<replacetokens>
<token key="VERSION" value="${version.num}" />
<token key="APPNAME" value="${appname}" />
</replacetokens>
</filterchain>
</copy>

Best Practices for Build and Release Management Part 2

Ok, as promised in Part 1, I’ll go into a bit more detail about each of the areas outlined previously, starting with…

The Build Process

This area, perhaps more than any other area I’ll be covering in this section, has benefited most from the introduction of some ultra handy tools. Back in the day, building/compiling software was fairly manual and could only be automated to a certain degree: makefiles and batch scripts were about as good as it got, and even they relied on a LOT of planning and could quite often be a nightmare to manage.

These days though, the build phase is exceedingly well catered for and is now a very simple process, and what’s more, we can now get an awful lot more value out of this single area.

As I mentioned before, one of the aims of release management is to make software builds simple, quick and reliable. Tools such as Ant, NAnt (the .NET version of Ant), Maven, Rake and MSBuild help us on our path towards that goal in many ways. Ant, MSBuild and NAnt are very simple XML-based scripting languages which offer a wide-ranging level of control – for instance, you can build entire solutions with a single line of script, or you can individually compile each project and specify each dependency – it’s up to you to decide what level of control you need. I believe that build scripts should be kept simple and easy to manage, so when dealing with NAnt and MSBuild for .NET solutions I like to build each project by calling a .proj file rather than specifically compiling each library. The .proj files should be constructed correctly and stored in source control. Each build should get the latest .proj file (and the rest of the code, including shared libraries – more on that later) and compile the project.
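
Building an entire solution with a single line of script looks something like this in NAnt (using the <msbuild> task from NAntContrib; the directory and solution names are hypothetical):

```xml
<!-- Hand the whole solution to MSBuild in one step; names here are invented. -->
<target name="compile">
  <msbuild project="${base.dir}\MySolution.sln">
    <arg value="/property:Configuration=Release" />
  </msbuild>
</target>
```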

For Java projects, Ant and Maven are the most popular tools. Ant, like NAnt, gives the user a great deal of control, while Maven has less inherent flexibility and forces users to adhere to its processes. However, both are equally good at helping us make our builds simple, quick and reliable. Maven uses POM files to control how projects are built. Within these POM files a build engineer will define all the goals needed to compile the project. This might sound a little tedious, but the situation is made easier by the fact that POM files can inherit from master/parent POM files, reducing the amount of repetition and keeping your project build files smaller, cleaner and easier to manage. I would always recommend storing as much as possible in parent POM files, and as little as you can get away with in the project POMs.
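
A minimal sketch of that inheritance (the groupIds and versions are invented):

```xml
<!-- Parent POM: shared configuration lives here, once. -->
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>parent</artifactId>
  <version>1.0</version>
  <packaging>pom</packaging>
  <properties>
    <maven.compiler.source>1.6</maven.compiler.source>
  </properties>
</project>

<!-- Child POM: inherits everything above and declares only what differs. -->
<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.example</groupId>
    <artifactId>parent</artifactId>
    <version>1.0</version>
  </parent>
  <artifactId>my-app</artifactId>
</project>
```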

One of the great improvements in software building in recent years has been the introduction of Continuous Integration. The most popular CI tools around are CruiseControl, CruiseControl.Net, Hudson and Bamboo. In their simplest forms, CI tools are basically just schedulers, and they essentially just kick off your build tools. However, that’s just the tip of the iceberg, because these tools can do much, MUCH more than that – I’ll explain more later, but for now I’ll just say that they allow us to do our builds automatically, without the need for any human intervention. CI tools make it very easy for us to setup listeners to poll our source code repositories for any changes, and then automatically kick off a build, and then send us an email to let us know how the build went. It’s very simple stuff indeed.
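
For example, a CruiseControl.NET project block along these lines (server names, paths and addresses are all hypothetical, and the exact schema should be checked against the CCNet docs) polls source control, runs the build and emails the result:

```xml
<project name="my-app">
  <!-- Poll source control every 60 seconds. -->
  <triggers>
    <intervalTrigger seconds="60" />
  </triggers>
  <sourcecontrol type="svn">
    <trunkUrl>http://svnserver/my-app/trunk</trunkUrl>
  </sourcecontrol>
  <!-- Kick off the build script when a change is detected. -->
  <tasks>
    <nant>
      <buildFile>build\default.build</buildFile>
    </nant>
  </tasks>
  <!-- Let everyone know how the build went. -->
  <publishers>
    <email from="ci@example.com" mailhost="smtp.example.com" includeDetails="true">
      <users>
        <user name="team" group="devs" address="team@example.com" />
      </users>
      <groups>
        <group name="devs" notification="always" />
      </groups>
    </email>
  </publishers>
</project>
```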

So let’s take a look at what we’ve done with our build process so far:

  • We’ve moved away from manually building projects and started using simple build scripts, making the build process less onerous and not so open to human error. Reliability is on the up!
  • We’ve made our build scripts as simple as possible – no more 1000 line batch files for us! Our troubleshooting time has been significantly reduced.
  • We’ve moved away from using development UIs to make our builds – our builds are now more streamlined and faster.
  • We’ve introduced a Continuous Integration system to trigger our builds whenever a piece of code is committed – our builds are now automated.

So in summary, we’ve implemented some really simple steps and already our first goal is achieved – we’ve now got simple, quick and reliable builds. Time for a cup of tea!