Maven Release Plugin and Continuous Delivery

I was setting up a Continuous Delivery system using Maven as the build tool, Perforce as the SCM and Go (ThoughtWorks’ CI system). All was going perfectly well until I got to the point when I no longer wanted to make snapshot builds…

The idea behind my Continuous Delivery system was this:

  • Every check-in runs a load of unit tests
  • If they pass it runs a load of acceptance tests
  • If they pass we run more tests – Integration, scenario and performance tests
  • If they all pass we run a bunch of static analysis and produce pretty reports and eventually deploy the candidate to a “Release Candidate” repository where QA and other like-minded people can look at it, prod it, and eventually give it a seal of approval.

As you can see, there’s no room for the notion of “snapshot” and “release” builds being separate here. Every build is a potential release build. So, a few days ago I went right ahead and used the maven release plugin, and that was the last time I remember smiling, having fun, getting a full night’s sleep, and my brain not hurting.

The problem is this: the maven release plugin doesn’t really work for continuous delivery. And what’s more, it REALLY doesn’t work with Go and Perforce. I’ll start with the Go/Perforce issues: I got loads of errors thanks to the way Go runs as the system user, and creates its own clientspecs. The results of this debacle are detailed in the two posts that follow.

I managed to finally get around the clientspec/P4/Go problems with some help from my colleague Toby Catlin who bears the scars of similar skirmishes with Go and Perforce from days gone by. The “fix” was to create a perforce config file and an “uber” client spec. The perforce config file specified the uber clientspec and it lived in the root of the project directory. It was hardly a satisfactory workaround, as it meant that every project would need to have this file, and the uber clientspec would need to be updated every time a new build job was created. But never mind that, it was just a relief to see the builds going green for a change.
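
For the record, a Perforce config file of that sort (the file named by the P4CONFIG setting, sitting in the root of the project directory) is just a handful of KEY=VALUE settings. Something along these lines – the client and user names here are illustrative rather than the real ones:

P4CLIENT=uber-clientspec
P4PORT=perforce.mycompany.com:1666
P4USER=go-agent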

And that’s when it happened… the release build completed. The maven release plugin increased the version number in the pom and checked it in. And then Go detected the change in the pom and checked it out again and started building again. This then updated the version number and checked it in, which in turn got detected and kicked off another build… CAN YOU SEE THE GLARINGLY OBVIOUS PROBLEM HERE????

It’s obvious really. I’ve always made my maven release builds a manual process in the past, and that was exactly why – I’d just forgotten all about it. So, I’ve decided not to use the maven release plugin at all. Every build now just creates a “release” build, because I’ve removed all instances of the word SNAPSHOT from the poms. If they pass all their tests and look good enough, they’re automatically promoted to the release candidate repository. And everyone’s happy. Also, I’ve added a property to the builds which pulls in a variable from the Go system, and if that’s not present the “deploy to release candidate repository” step fails – this is to stop developers from manually creating releases; all release builds must come from the CI system.
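
Something along these lines in the pom would do that last part (this is a sketch rather than the actual config – it uses the maven-enforcer-plugin, and GO_PIPELINE_LABEL is a guess at the variable Go exposes to its agents):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <!-- bound to the deploy phase, so a developer running a plain local build
           isn't affected, but a deploy without the CI-provided variable fails -->
      <id>only-release-from-ci</id>
      <phase>deploy</phase>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <!-- GO_PIPELINE_LABEL is an assumption about what Go provides -->
          <requireEnvironmentVariable>
            <variableName>GO_PIPELINE_LABEL</variableName>
          </requireEnvironmentVariable>
        </rules>
        <fail>true</fail>
      </configuration>
    </execution>
  </executions>
</plugin>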


Maven Release Plugin and Perforce Clientspecs

I’m getting a very annoying error trying to do maven release builds on our CI servers, which isn’t appearing on my local workstation. The build seems to fail because the pom file is under source control (I’m using Perforce) and so it’s read-only. However, the release plugin should check out the pom file so that it isn’t read-only. Alas, that doesn’t seem to be working. Here are the errors I got initially:

[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error writing POM: C:\Program Files\Cruise Agent\pipelines\yadda\yadda\pom.xml (Access is denied)

The reason behind it appeared to be:

[ERROR] CommandLineException Exit code: 1 - Client 'xpcruisebuildvm1-STANDARD-ALL' can only be used from host 'xpcruisebuildvm1'.

Command line was: p4 -d "C:\Program Files\Cruise Agent\pipelines\yadda\yadda" -p perforce.mycompany.com:1666 edit pom.xml
org.codehaus.plexus.util.cli.CommandLineException: Exit code: 1 - Client 'xpcruisebuildvm1-STANDARD-ALL' can only be used from host 'xpcruisebuildvm1'

So the first thing I did was create a new clientspec for that machine which had all the files mapped to its c:\temp directory, and then tried to run the build again, assuming this would fix the issue but present me with a whole new problem about how to fit this all into my CI system. However, this also failed:

[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error writing POM: C:\TEMP\yadda\pom.xml (Access is denied)

This is confusing because the clientspec I’m using does have these files in its view, and therefore Maven should be able to edit them. Also, this is how I have it set up on my local workstation, and that works fine…

After a lot of trial and error I managed to make some progress. One of the issues was that the client spec had the following mapping in it:

//JavaDevelopment/main/yadda/... //XPCRUISEVM808_release/temp/yadda/...

and the root was set to C:\

This means all P4 files should sync to my C:\TEMP directory, which already existed on the machine. As expected, the sync worked fine and all files appeared in C:\TEMP\yadda.

And therein lies the rub: the client spec uses lowercase “temp”, while the Windows directory was uppercase “TEMP”. As simple as that. I changed the client spec to match the filesystem, and for good measure updated my release plugin version to 2.1. Problem solved (well, actually it presents me with a whole new problem, because I don’t want every build agent to have different client specs, but I’m using Go, and that puts each build job’s files in its own directory under the name of the build job. Grrrr).
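
For reference, the fix was literally just a case change in the mapping, something like:

//JavaDevelopment/main/yadda/... //XPCRUISEVM808_release/TEMP/yadda/...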

POM ‘ release:prepare release’ not found in repository

I’m rather irritatingly getting this error in my maven builds at the moment, while trying to set up some release builds using Go:

POM ' release:prepare release' not found in repository

One issue I have with Go is that it doesn’t natively support maven. This isn’t really much of a big deal because I can just tell it to run a custom command, and point it to the mvn shell or batch file (this could be a bit of a pain if I want my builds run on windows and/or linux but don’t want to have to define separate build jobs for each one, but I don’t, and I can’t think of any reason why I would, so that’s ok). Anyway, the issue this time is with the way I set up the build job. I used the new (in version 2.2) clicky-UI to set up the job, like telling it to run the mvn batch file, and what arguments to pass. This just seemed not to work. When I looked at the Go xml file it looked a bit like this:

<job name="build_release">
  <tasks>
    <exec command="D:\buildTools\maven\2.2.1\bin\mvn.bat" workingdir="yadda\yadda">
      <args>-B release:prepare release:perform</args>
    </exec>
  </tasks>
  <resources>
    <resource>windows</resource>
  </resources>
</job>

So I deleted it and manually edited the xml, making it look like this instead:

<job name="build_release">
  <tasks>
    <exec command="D:\buildTools\maven\2.2.1\bin\mvn.bat" args="-B release:prepare release:perform" workingdir="yadda\yadda" />
  </tasks>
  <resources>
    <resource>windows</resource>
  </resources>
</job>

And this seems to have fixed it. Not very impressive at all.

Continuous Delivery using build pipelines with Jenkins and Ant

My idea of a good build system is one which will give me fast, concise, relevant feedback, but I also want it to produce a proper finished article when I’ve checked in my code. I’d like every check-in to result in a potential release candidate. Why? Well, why not?

I used to employ a system where release candidates were produced separately to my check-in builds (also known as “snapshot” builds). This encouraged people to treat snapshot builds as second rate. The main focus was on the release builds. However, if every build is a potential release build, then the focus on each build is increased. Consequently, I need to make sure every build goes through the most rigorous testing possible, and I would like to see a comprehensive report on the stability and design of the build before it gets released. I would also like to do all of this automatically, as I am inherently lazy, and have a Facebook profile to constantly update!

This presents me with a problem: I want instant feedback on check-in builds, and to have full static analysis performed on them, and yet I still want every check-in build to undergo a full suite of testing, be packaged correctly AND be deployed to our test environments. Clearly this will take a lot longer than I’m prepared to wait! The solution to this problem is to break the build process down into smaller sections.

Pipelines to the Rescue!

The concept of build pipelines has been around for a couple of years at least. It’s nothing new, but it’s not yet standard practice, which is a pity because I think it has some wonderful advantages. The concept is simple: the build as a whole is broken down into sections, such as the unit test, acceptance test, packaging, reporting and deployment phases. The pipeline phases can be executed in series or parallel, and if one phase is successful, it automatically moves on to the next phase (hence the relevance of the name “pipeline”). This means I can set up a build system where unit tests, acceptance tests and my static analysis are all run simultaneously at commit stage (if I so wish), but the next stage in the pipeline will not start unless they all pass. This means I don’t have to wait around too long for my acceptance test results or static analysis report.

Continuous Delivery

Continuous delivery has also been around for a while. I remember hearing about it in about 2006 and loving the concept. It seems to be back in the news again since the publication of “Continuous Delivery”, an excellent book from Jez Humble and David Farley. Again the concept is simple: roughly speaking, it means that every build gets made available for deployment to production if it passes all the quality gates along the way. Continuous Delivery is sometimes confused with Continuous Deployment. Both follow the same basic principle; the main difference is that with Continuous Deployment it is implied that each and every successful build will be deployed to production, whereas with Continuous Delivery it is implied that each successful build will be made available for deployment to production. The decision of whether or not to actually deploy the finished article to the production environment is entirely up to you.

Continuous Delivery using Build Pipelines

You can have continuous delivery without using build pipelines, and you can use build pipelines without doing continuous delivery, but the fact is they seem made for each other. Here’s my example framework for a continuous delivery system using build pipelines:

I check some code in to source control – this triggers some unit tests. If these pass it notifies me, and automatically triggers my acceptance tests AND produces my code-coverage and static analysis report at the same time. If the acceptance tests all pass my system will trigger the deployment of my project to an integration environment and then invoke my integration test suite AND a regression test suite. If these pass they will trigger another deployment, this time to UAT and a performance test environment, where performance tests are kicked off. If these all pass, my system will then automatically promote my project to my release repository and send out an alert, including test results and release notes.

So, in a nutshell, my “template” pipeline will consist of the following stages:

  • Unit-tests
  • Acceptance tests
  • Code coverage and static analysis
  • Deployment to integration environment
  • Integration tests
  • Scenario/regression tests
  • Deployments to UAT and Performance test environment
  • More scenario/regression tests
  • Performance tests
  • Alerts, reports and Release Notes sent out
  • Deployment to release repository

Introducing the Tools

Thankfully, implementing continuous delivery doesn’t require any special tools outside of the usual toolset you’d find in a normal Continuous Integration system. It’s true to say that some tools and applications lend themselves to this system better than others, but I’ll demonstrate that it can be achieved with the most common/popular tools out there.

Who’s this Jenkins person??

Jenkins is an open-source Continuous Integration application, like Hudson, CruiseControl and many others (it’s basically Hudson, or was Hudson, but isn’t Hudson any more. It’s a trifle confusing*, but it’s not important right now!). So, what is Jenkins? Well, as a CI server, it’s basically a glorified scheduler, a cron job if you like, with a swish front end. Ok, so it’s a very swish front end, but my point is that your CI server isn’t usually very complicated; in a nutshell, it just executes the build scripts whenever there’s a trigger. There’s a more important aspect than just this though, and that’s the fact that Jenkins has a build pipelines plugin, which was written recently by Centrum Systems. This pipelines plugin gives us exactly what we want: a way of breaking down our builds into smaller loops, and running stages in parallel.

Ant

Ant has been possibly the most popular build scripting language for the last few years. It’s been around for a long while, and its success lies in its simplicity. Ant is an XML based scripting language tailored specifically for software build related tasks (specifically Java; NAnt is the .NET version of Ant and is almost identical).

Sonar

Sonar is a quality measurement and reporting tool, which produces metrics on build quality such as unit test coverage (using Cobertura) and the results of static analysis tools (Findbugs, PMD and Checkstyle). I like to use Sonar as it provides a very readable report and contains a great deal of useful information all in one place.

Setting up the Tools

Installing Jenkins is incredibly simple. There’s a Debian package for operating systems such as Ubuntu, so you can install it using apt-get. For Red Hat users there’s an RPM, so you can install it via yum.

Alternatively, if you’re already running Tomcat v5 or above, you can simply deploy the jenkins.war to your tomcat container.

Yet another alternative, and probably the simplest way to quickly get up and running with Jenkins is to download the war and execute:

java -jar jenkins.war

This will run Jenkins through its own Winstone servlet container.

You can also use this method for installing Jenkins on Windows, and then, once it’s up and running, you can go to “manage jenkins” and click on the option to install Jenkins as a Windows Service! There’s also a Windows installer, which you can download from the Jenkins website.

Ant is also fairly simple to install; however, you’ll need the Java JDK installed as a prerequisite. To install Ant itself you just need to download and extract the tar, and then create the environment variable ANT_HOME (point this to the directory you unzipped Ant into). Then add ${ANT_HOME}/bin (or %ANT_HOME%\bin if you’re on Windows) to your PATH, and that’s about it.

Configuring Jenkins

One of the best things about Jenkins is the way it uses plugins, and how simple it is to get them up and running. The “Manage Jenkins” page has a “Manage Plugins” link on it, which takes you to a list of all the available plugins for your Jenkins installation.

To install the build pipeline plugin, simply put a tick in the checkbox next to “build pipeline plugin” (it’s 2/3 of the way down on the list) and click “install”. It’s as simple as that.

The Project

The project I’m going to create for the purpose of this example is going to be a very simple java web application. I’m going to have a unit test and an acceptance test stage.  The build system will be written in Ant and it will compile the project and run the tests, and also deploy the build to a tomcat server. Sonar will be used for producing the reports (such as test coverage and static analysis).
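
To give a flavour of the sort of thing the Ant script does, here’s a rough sketch – the target names, directory layout and the assumption that the JUnit jar lives in a lib directory are illustrative, not lifted from the real build file:

<project name="example-webapp" default="unit-test" basedir=".">
  <property name="src.dir" value="src/main/java"/>
  <property name="test.dir" value="src/test/java"/>
  <property name="build.dir" value="build"/>

  <!-- everything the compiler and the JUnit task need on the classpath -->
  <path id="test.classpath">
    <pathelement location="${build.dir}/classes"/>
    <fileset dir="lib" includes="*.jar"/>
  </path>

  <target name="compile">
    <mkdir dir="${build.dir}/classes"/>
    <javac srcdir="${src.dir}" destdir="${build.dir}/classes" classpathref="test.classpath" includeantruntime="false"/>
    <javac srcdir="${test.dir}" destdir="${build.dir}/classes" classpathref="test.classpath" includeantruntime="false"/>
  </target>

  <target name="unit-test" depends="compile">
    <mkdir dir="${build.dir}/reports"/>
    <junit haltonfailure="true" printsummary="true">
      <classpath refid="test.classpath"/>
      <formatter type="xml"/>
      <batchtest todir="${build.dir}/reports">
        <fileset dir="${test.dir}" includes="**/*Test.java"/>
      </batchtest>
    </junit>
  </target>

  <target name="war" depends="compile">
    <war destfile="${build.dir}/example-webapp.war" webxml="src/main/webapp/WEB-INF/web.xml">
      <classes dir="${build.dir}/classes"/>
    </war>
  </target>

  <target name="deploy" depends="war">
    <!-- the crudest possible deployment: drop the war into Tomcat's webapps
         directory; tomcat.home is an assumed property -->
    <copy file="${build.dir}/example-webapp.war" todir="${tomcat.home}/webapps"/>
  </target>
</project>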

The Pipelines

For the sake of simplicity, I’ve only created 6 pipeline sections; these are:

  • Unit test phase
  • Acceptance test phase
  • Deploy to test phase
  • Integration test phase
  • Sonar report phase
  • Deploy to UAT phase

The successful completion of the unit tests will initiate the acceptance tests. Once these complete, 2 pipeline stages are triggered:

  • Deployment to a test server

and

  • Production of Sonar reports.

Once the deployment to the test server has completed, the integration test pipeline phase will start. If these pass, we’ll deploy our application to our UAT environment.

To create a pipeline in Jenkins we first have to create the build jobs. Each pipeline section represents one build job, which in this case runs just one Ant target. You then have to tell each build job about the downstream build which it must trigger, using the “build other projects” option.

Obviously I only want each pipeline section to do the single task it’s designed to do, i.e. I want the unit test section to run just the unit tests, and not the whole build. You can easily do this by targeting the exact section(s) of the build file that you want to run. For instance, in my acceptance test stage, I only want to run my acceptance tests. There’s no need to do a clean, or recompile my source code, but I do need to compile my acceptance tests and execute them, so I choose the targets “compile_ATs” and “run_ATs” which I have written in my ant script. The build job configuration page allows me to specify which targets to call.
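
For illustration, those two targets might be no more complicated than this (a sketch following on from the skeleton earlier in this post – only the names compile_ATs and run_ATs come from my actual script, the rest is assumed):

<target name="compile_ATs" depends="compile">
  <mkdir dir="${build.dir}/at-classes"/>
  <javac srcdir="src/acceptance-test/java" destdir="${build.dir}/at-classes" classpathref="test.classpath" includeantruntime="false"/>
</target>

<target name="run_ATs" depends="compile_ATs">
  <mkdir dir="${build.dir}/at-reports"/>
  <junit haltonfailure="true" printsummary="true">
    <classpath>
      <path refid="test.classpath"/>
      <pathelement location="${build.dir}/at-classes"/>
    </classpath>
    <formatter type="xml"/>
    <batchtest todir="${build.dir}/at-reports">
      <fileset dir="src/acceptance-test/java" includes="**/*Test.java"/>
    </batchtest>
  </junit>
</target>

The acceptance test build job then simply invokes those two targets, rather than the default target.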

Once the 6 build jobs are created, we need to make a new view, so that we can start to visualise this as a pipeline.

We now have a new pipeline! The next thing to do is kick it off and see it in action.

Oops! Looks like the deploy to QA has failed. It turns out to be an error in my deploy script. But what this highlights is that the Sonar report is still produced in parallel with the deploy step, so we still get our build metrics! This functionality can become very useful if you have a great deal of different tests which could all be run at the same time, for instance performance tests or OS/browser-compatibility tests, which could all be running on different operating systems or web browsers simultaneously.

Finally, I’ve got my deploy scripts working so all my stages are looking green! I’ve also edited my pipeline view to display the results of the last 3 pipeline builds.

Alternatives

The pipelines plugin also works for Hudson, as you would expect. However, I’m not aware of such a plugin for Bamboo. Bamboo does support the concept of downstream builds, but that’s really only half the story here. The pipeline “view” in Jenkins is what really brings it all together.


“Go”, the enterprise Continuous Integration effort from ThoughtWorks, not only supports pipelines, but it was pretty much designed with them in mind. Suffice to say that it works exceedingly well; in fact, I use it every day at work! On the downside though, it costs money, whereas Jenkins doesn’t.

As far as build tools/scripts/languages are concerned, this system is largely agnostic. It really doesn’t matter whether you use Ant, Nant, Gradle or Maven; they all support the functionality required to get this system up and running (namely the ability to target specific build phases). However, Maven does make hard work of this in a couple of ways. Firstly, because of the way Maven lifecycles work, you cannot invoke the “deploy” phase in an isolated way – it implicitly calls all the preceding phases, such as the compile and test phases. If your tests are bound to one of these phases, and they take a long time to run, then this can make your deploy seem to take a lot longer than you would expect. In this instance there’s a workaround – you can skip the tests using -DskipTests – but this doesn’t help with all the other phases which are implicitly called. Another drawback with Maven is the way it distinguishes between snapshot and release builds. Ultimately we want to create a release build, but at the point of check-in Maven would normally give us a snapshot build. This suggests that at some point in the pipeline we’re going to have to recompile in “release mode”, which in my book is a bad thing, because it means we have to run ALL of the tests again. The only solution I have thought of so far is to make every build a release build and simply not use snapshots.
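
To illustrate that last point with a (hypothetical) version number – the only change to each pom is that the version loses its -SNAPSHOT suffix, so the artifact built at check-in is already the one that can be promoted:

<!-- before: the artifact would have to be rebuilt later as a release -->
<version>1.2.0-SNAPSHOT</version>

<!-- after: what gets built at check-in is what gets released -->
<version>1.2.0</version>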


* A footnote about the Hudson/Jenkins “thing”: It’s a little confusing because there’s still Hudson, which is owned by Oracle. The whole thing came about when there was a dispute between Oracle, the “owners” of Hudson, and Kohsuke Kawaguchi along with most of the rest of the Hudson community. The story goes that Kawaguchi moved the codebase to GitHub and Oracle didn’t like that idea, and so the split started.

Maven assembly plugin inheritance headache

Today I’ve had a headache with the maven assembly plugin, and the way it inherits from a parent. The story goes as follows:

I have an uber parent pom, which defines the assembly plugin in the pluginManagement section, and it looks like this:

<artifactId>maven-assembly-plugin</artifactId>
<version>2.2-beta-1</version>
<configuration>
  <descriptors>
    <descriptor>src/assembly/kit.xml</descriptor>
  </descriptors>
  <finalName>${pom.artifactId}-${pom.version}</finalName>
  <outputDirectory>build/maven/${pom.artifactId}/target</outputDirectory>
  <workDirectory>build/maven/${pom.artifactId}/target/assembly/work</workDirectory>
</configuration>
<executions>
  <execution>
    <id>make-assembly</id>
    <phase>package</phase>
    <goals>
      <goal>single</goal>
    </goals>
  </execution>
</executions>

Then I have a parent pom for a collection of projects, and this parent pom has the following definition in it:

<artifactId>maven-assembly-plugin</artifactId>
<executions>
  <execution>
    <id>make-assembly</id>
    <phase>package</phase>
    <goals>
      <goal>attached</goal>
    </goals>
  </execution>
</executions>
<configuration>
  <descriptors>
    <descriptor>src/main/assembly/kit.xml</descriptor>
  </descriptors>
</configuration>

Now, if I simply put <artifactId>maven-assembly-plugin</artifactId> in one of the module’s pom files, it inherits most of everything from the project parent, which makes sense to me. The problem arises when we want to do something nifty with the assembly plugin, namely, create a jar with dependencies, and THEN include that in the package as described by an assembly descriptor which is different to the parent.

Here’s what I had in my project pom:

<artifactId>maven-assembly-plugin</artifactId>
<executions>
  <execution>
    <id>make-assembly</id>
    <phase>package</phase>
    <goals>
      <goal>attached</goal>
    </goals>
  </execution>
  <execution>
    <id>copy-fields-conf-file</id>
    <phase>package</phase>
    <goals>
      <goal>attached</goal>
    </goals>
  </execution>
</executions>
<configuration>
  <descriptors>
    <descriptor>src/main/assembly/kit.xml</descriptor>
  </descriptors>
  <descriptorRefs>
    <descriptorRef>jar-with-dependencies</descriptorRef>
  </descriptorRefs>
  <archive>
    <manifestEntries>
      <build-number>${build-number}</build-number>
    </manifestEntries>
  </archive>
</configuration>

The problem was that the initial make-assembly execution was inheriting its configuration from the parent, which tells it there’s an assembly descriptor in src/main/assembly/kit.xml. That file DOES exist there, but inside it, it has:

<file>
  <source>build/maven/permissioning-auth-module/target/permissioning-auth-module-${project.version}-jar-with-dependencies.jar</source>
  <outputDirectory>/</outputDirectory>
</file>

Now, this is a problem because this jar hasn’t been created yet – that’s what we’re trying to create! So, I commented this section out from the parent, only to find that it inherits it from the uber parent, and here’s why:

In the assembly plugin definition you have execution sections and configurations. You can tie a configuration to an execution by simply putting it inside the execution’s scope. If you don’t, and you leave it outside, then it becomes a general configuration which is inherited by any child projects whenever they call the assembly plugin and don’t override it explicitly. This is usually fine, but not when you don’t want to declare an assembly descriptor at all. Because the assembly descriptor is defined in the parent(s), the plugin always goes looking for it, even in executions that don’t want one (like ours). There’s a workaround – create a blank assembly descriptor and point your execution at that – but that’s not very elegant. The trick is to always tie your configurations to an execution, so the parent pom(s) end up looking something more like this:

<artifactId>maven-assembly-plugin</artifactId>
<version>2.2-beta-1</version>
<executions>
  <execution>
    <id>make-assembly</id>
    <phase>package</phase>
    <goals>
      <goal>single</goal>
    </goals>
    <configuration>
      <descriptors>
        <descriptor>src/assembly/kit.xml</descriptor>
      </descriptors>
      <finalName>${pom.artifactId}-${pom.version}</finalName>
      <outputDirectory>build/maven/${pom.artifactId}/target</outputDirectory>
      <workDirectory>build/maven/${pom.artifactId}/target/assembly/work</workDirectory>
    </configuration>
  </execution>
</executions>

And the module’s pom can now look like this:

<artifactId>maven-assembly-plugin</artifactId>
<executions>
  <execution>
    <id>pack-assembly</id>
    <phase>prepare-package</phase>
    <goals>
      <goal>single</goal>
    </goals>
    <configuration>
      <descriptors>
      </descriptors>
      <descriptorRefs>
        <descriptorRef>jar-with-dependencies</descriptorRef>
      </descriptorRefs>
      <archive>
        <manifestEntries>
          <build-number>${build-number}</build-number>
        </manifestEntries>
      </archive>
    </configuration>
  </execution>
  <execution>
    <id>copy-fields-conf-file</id>
    <phase>package</phase>
    <goals>
      <goal>attached</goal>
    </goals>
    <configuration>
      <descriptors>
        <descriptor>src/main/assembly/kit.xml</descriptor>
      </descriptors>
    </configuration>
  </execution>
</executions>

Headache over 🙂

Principles of Continuous Integration

I’m a big fan of CI, and as a simple best practice/process for development teams I think it’s right up there as one of the most important to get right. Most software places now have a Continuous Integration system, but do they actually practice Continuous Integration? In my experience the answer isn’t always “yes”.

For me, practicing CI goes a lot further than simply having a CI system set up, running check-in/nightly builds, and running unit tests, code inspections and so on.

So what is my definition of “Practicing Continuous Integration”? Well, for me it is simply:

  • Having a CI system (obviously)
  • Adopting the principles of Continuous Integration

I did a quick google to see if I could find a simple list of some good CI principles – I know I’ve read many of them before in numerous good books such as Paul M Duvall’s “Continuous Integration” and Jez Humble’s “Continuous Delivery”, but I couldn’t find an easy cut-out-and-use version on the internet, so I’ve decided to knock something up here 🙂

Everyone loves a good list so here’s a list of what I believe to be some principles of CI:

  1. Fix your build failures, immediately. Never leave a build broken. If you do, the build team should be within their rights to roll back your last commit.
  2. Every check-in should be an improvement on the last. Each check-in should have value, otherwise why do it? An improvement could be defined as a new piece of functionality, a bug fix, an increase in code coverage etc. If the improvement cannot be easily measured, ensure you detail it in your commit log.
  3. Never check in on a broken build (unless you’re fixing the breakage)! If you have multiple commits to a codebase after the original breaking commit, then this leaves a bit of a bad smell in your CI system. Firstly, unless your commits are coming thick and fast, it means you aren’t reacting to your build failures, and this is not acceptable – it basically means you’re not playing ball and you’re letting the rest of the team down. Secondly, the cause of the original breakage can potentially be a lot harder to identify if several other check-ins have happened since the original check-in which broke the build!
  4. Code should be built and tested before checking in to source control. You can either do this by compiling the build locally yourself and checking that the tests pass, or you could invest in a CI/VC system that can perform pre-commit builds, such as Pulse, TeamCity and ElectricCommander (list obtained from Jez Humble’s Continuous Delivery book). However, you can use pre-commit hooks in svn to achieve something similar, and I’m led to believe this can also be done in Git and buildbot.
  5. Everyone should be interested in the output of the CI system. And that includes project managers and testers. Working on the principle that at some point a build works, and every commit thereafter is an improvement, we should be proud to display the results of every build to the whole project team, and we should react quickly if, for any reason, our last check-in was not an improvement to the state of our software.
  6. Functional automated tests should be run as part of the CI process. If you have a suite of automated regression tests, the chances are it’s no great effort to hook them up to the CI system, and definitely worth it in the long run.
  7. Make the builds fit for purpose. I absolutely hate having to wait ages for a simple check-in build to compile, when all I really want to know is whether I have broken my unit tests or violated some important rule. Make the check-in builds quite lightweight, run your unit tests and code coverage tests only, perhaps. Do you really need to run all your code inspections and analysis tools as well as run automated functional tests every time there’s a check-in? Chances are you don’t, so keep the check-in builds light, and leave the heavy duty work for the nightly builds, while we’re all asleep!
  8. React to the feedback. The CI system is a tool to help drive improvement and quality as much as anything else, so we should never ignore it. However, it’s easy to ignore your CI system if the feedback isn’t relevant. The solution is to work closely with the build team, as well as the rest of the project members, to ensure the build feedback is targeted and relevant.

That’s about it for my list so far, I’m sure I’ll add to it as time goes by. Please feel free to add your suggestions!

Build Reports For the Project Team

Reading through Jez Humble and David Farley’s new(ish) book “Continuous Delivery” at the moment. It’s just great to read a book where the authors are really speaking your language. The concepts are simple and correct. There are also a few new ideas in there that I’ve not thought of before. But most of all I like the way it gets me thinking about how to implement some of the concepts in a “challenging” atmosphere. Not all companies understand the value of Continuous Integration, let alone Continuous Deployment/Delivery, but this book lays out the concepts in such a simple, straightforward way that it seems hard to believe that anyone would not see the advantages in them.

I picked up on one idea which I’d like to start implementing ASAP – and that is to provide feedback to project teams about the state of the builds and environments involved in that project, throughout the whole project lifecycle. I think that might help the rest of the project team understand the importance of delivery, by drawing attention to and raising awareness of the builds and the deployment environments. I can well imagine that some project team members won’t even know what they’re looking at when they’re presented with a build report, but there’s no reason for this. Everyone on the project should be interested in the build reports; they should have a vested interest in the quality of the code and the state of the deployment environment. These are real, tangible things, not abstract concepts, so they should be presented as real, tangible things for everyone to see.