Read-only Gradle Wrapper Files Are Bad, mmkaay

I’ve just been messing around with a Gradle build using the Gradle Wrapper, trying to import an Ant build which does some clever stuff. I couldn’t get it to work, so I eventually changed the Ant script to just echo a variable, then tried importing the ant file into gradle and calling the echo task, like this:

ant file (test.xml):

<project name="test" default="test">
    <target name="test">
        <echo>the value of main.version is ${main.version}</echo>
    </target>
</project>

Gradle file:

ant.importBuild 'test.xml'
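Incidentally, if you want the echo to print an actual value, you can set the Ant property from the Gradle side before the import – a minimal sketch, where 1.0 is just an assumed example value:

// set the Ant property before ant.importBuild so the imported echo task can see it
ant.property(name: 'main.version', value: '1.0')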

And I was just running this:

gradlew test

Simples, right? Well, as it happens, no. I’m running the 1.0 “release” of gradle wrapper (which is actually versioned as 1.0-rc-3, strangely enough). Whenever I tried to run my highly complicated build (ahem), I got this lovely error:

Could not open task artifact state cache (D:\development\ReleaseEngineering\main\commonBuildStuff\.gradle\1.0-rc-3\taskArtifacts).
> java.io.FileNotFoundException: D:\development\ReleaseEngineering\main\commonBuildStuff\.gradle\1.0-rc-3\taskArtifacts\cache.properties.lock (Access is denied)

Access is denied! Gah! Of course it is! Wait, why is access denied? Well, basically that file is read-only because it’s in source control and I’m using Perforce. I removed the read-only flag and the build worked. Problem temporarily solved. It’s going to be interesting to see how this is going to work in the C.I. system…
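If you can’t (or don’t want to) take the files out of source control, one stop-gap for the build agents – just a sketch, I haven’t wired this into the C.I. system yet – is to clear the read-only flag as a pre-build step on Windows:

rem clears the read-only attribute on the wrapper's cache files (path from my setup above)
attrib -R "D:\development\ReleaseEngineering\main\commonBuildStuff\.gradle\*" /S /D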


Continuous Delivery Using Maven

I’m currently working on a continuous delivery system where I work, so I thought I would write something up about what I’m doing. The continuous delivery system, in a nutshell, looks a bit like this:

I started out with a bit of a carte blanche with regards to what tools to use, but here’s a list of what was already in use, in one form or another, when I started my adventure:

  • Ant (the main build tool)
  • Maven (used for dependency management)
  • CruiseControl
  • CruiseControl.Net
  • Go
  • Monit
  • JUnit
  • js-test-driver
  • Selenium
  • Artifactory
  • Perforce

The decision of which of these tools to use for my system was influenced by a number of factors. Firstly I’ll explain why I decided to use Maven as the build tool (shock!!).

I’m a big fan of Ant, I’d usually choose it (or probably Gradle now) over Maven any day of the week, but there was already an existing Ant build system in place, which had grown a bit monolithic (that’s my polite way of saying it was a huge mess), so I didn’t want to go there! And besides, the first project that would be going into the new continuous delivery system was a simple Java project – way too straightforward to justify rewriting the whole ant system from scratch and improving it, so I went for Maven. Furthermore, since the project was (from a build perspective) fairly straightforward, I thought Maven could handle it without too much bother. I’ve used Maven before, so I’ve had my run-ins with it, and I know how hard it can be if you want to do anything outside of “The Maven Way”. But, as I said, the project I was working on seemed pretty simple so Maven got the nod.

GO was the latest and greatest C.I. server in use, and the CruiseControl systems were a bit of a handful already, so I went for GO (also I’d never used it before, so I thought that would be cool, and it’s from ThoughtWorks Studios, so I figured it’d be pretty good). I particularly liked the pipeline feature it has, and the way it manages each of its own agents. A colleague of mine, Andy Berry, had already done quite a bit of work on the GO C.I. system, so there was already something to start from. I would have gone for Jenkins had there not already been a considerable investment in GO by the company prior to my arrival.

I decided to use Artifactory as the artifact repository manager, simply because there was already an instance installed, and it was sort-of already setup. The existing build system didn’t really use it, as most artifacts/dependencies were served from network shares. I would have considered Nexus if Artifactory wasn’t already installed.

I setup Sonar to act as a build analysis/reporting tool, because we were starting with a Java project. I really like what Sonar does, I think the information it presents can be used very effectively. Most of all I just like the way in which it delivers the information. The Maven site plugin can produce pretty much all of the information that Sonar does, but I think the way Sonar presents the information is far superior – more on this later.

Perforce was the incumbent source control system, and so it was a no-brainer to carry on with that. In fact, changing the SC system wasn’t ever in question. That said, I would have chosen Subversion if this was an option, just because it’s so utterly freeeeeeee!!!

That was about it for the tools I wanted to use. It was up to the rest of the project team to determine which tools to use for testing and developing. All that I needed for the system I was setting up was a distinction between the Unit Tests, Acceptance Tests and Integration Tests. In the end, the team went with JUnit, Mockito and a couple of in-house apps to take care of the testing.

The Maven Build, and the Joys of the Release Plugin!

The idea behind my Continuous Delivery system was this:

  • Every check-in runs a load of unit tests
  • If they pass it runs a load of acceptance tests
  • If they pass we run more tests – Integration, scenario and performance tests
  • If they all pass we run a bunch of static analysis and produce pretty reports and eventually deploy the candidate to a “Release Candidate” repository where QA and other like-minded people can look at it, prod it, and eventually give it a seal of approval.

This is the basic outline of a build pipeline:

Maven isn’t exactly fantastic at fitting into the pipeline process. For starters we’re running multiple test phases, and Maven follows a “lifecycle” process, meaning that every time you call a particular lifecycle phase, it runs all the preceding phases again. Our pipeline needs to run the maven Surefire plugin twice, because that’s the plugin we use to execute our different tests. The first time we run it, we want to execute all the unit tests. The second time we run it we want to execute the acceptance tests – but we don’t want it to run the unit tests again, obviously.

You probably need some familiarity with the maven build lifecycle at this point, because we’re going to be binding the Surefire plugin to two different phases of the maven lifecycle so that we can run it twice and have it run different tests each time. Here is the maven lifecycle (for a more detailed description, check out Maven’s own lifecycle page):

Clean Lifecycle

  • pre-clean
  • clean
  • post-clean

Default Lifecycle

  • validate
  • initialize
  • generate-sources
  • process-sources
  • generate-resources
  • process-resources
  • compile
  • process-classes
  • generate-test-sources
  • process-test-sources
  • generate-test-resources
  • process-test-resources
  • test-compile
  • process-test-classes
  • test
  • prepare-package
  • package
  • pre-integration-test
  • integration-test
  • post-integration-test
  • verify
  • install
  • deploy

Site Lifecycle

  • pre-site
  • site
  • post-site
  • site-deploy

So, we want to bind our Surefire plugin to both the test phase to execute the UTs, and the integration-test phase to run the ATs, like this:

<plugin>
  <!-- Separates the unit tests from the integration tests. -->
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <argLine>-Xms256m -Xmx512m</argLine>
    <skip>true</skip>
  </configuration>
  <executions>
    <execution>
      <id>unit-tests</id>
      <phase>test</phase>
      <goals>
        <goal>test</goal>
      </goals>
      <configuration>
        <testClassesDirectory>target/test-classes</testClassesDirectory>
        <skip>false</skip>
        <includes>
          <include>**/*Test.java</include>
        </includes>
        <excludes>
          <exclude>**/acceptance/*.java</exclude>
          <exclude>**/benchmark/*.java</exclude>
          <exclude>**/requestResponses/*Test.java</exclude>
        </excludes>
      </configuration>
    </execution>
    <execution>
      <id>acceptance-tests</id>
      <phase>integration-test</phase>
      <goals>
        <goal>test</goal>
      </goals>
      <configuration>
        <testClassesDirectory>target/test-classes</testClassesDirectory>
        <skip>false</skip>
        <includes>
          <include>**/acceptance/*.java</include>
          <include>**/benchmark/*.java</include>
          <include>**/requestResponses/*Test.java</include>
        </includes>
      </configuration>
    </execution>
  </executions>
</plugin>

Now in the first stage of our pipeline, which polls Perforce for changes, triggers a build and runs the unit tests, we simply call:

mvn clean test

This will run the test phase of the maven lifecycle, which executes Surefire. As you can see from the Surefire plugin configuration above, during the “test” phase execution of Surefire (i.e. the first time we run it) it’ll run all of the tests except for the acceptance tests – these are explicitly excluded from the execution in the “excludes” section. The other thing we want to do in this phase is quickly check the unit test coverage for our project, and maybe make the build fail if the test coverage is below a certain level. To do this we use the cobertura plugin, and configure it as follows:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
  <version>2.4</version>
  <configuration>
    <instrumentation>
      <excludes><!-- this is why this isn't in the parent -->
        <exclude>**/acceptance/*.class</exclude>
        <exclude>**/benchmark/*.class</exclude>
        <exclude>**/requestResponses/*.class</exclude>
      </excludes>
    </instrumentation>
    <check>
      <haltOnFailure>true</haltOnFailure>
      <branchRate>80</branchRate>
      <lineRate>80</lineRate>
      <packageLineRate>80</packageLineRate>
      <packageBranchRate>80</packageBranchRate>
      <totalBranchRate>80</totalBranchRate>
      <totalLineRate>80</totalLineRate>
    </check>
    <formats>
      <format>html</format>
      <format>xml</format>
    </formats>
  </configuration>
  <executions>
    <execution>
      <phase>test</phase>
      <goals>
        <goal>clean</goal>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>

To get the cobertura plugin to execute, we need to call “mvn cobertura:cobertura”, or run the maven “verify” phase by calling “mvn verify”, because the cobertura plugin binds to the verify lifecycle phase by default. But if we delve a little deeper into what this does, we see that it runs the whole test phase all over again, and of course the integration-test phase too, because they precede the verify phase – and cobertura:cobertura itself invokes the test phase before executing. So what I’ve done is change the lifecycle phase that cobertura binds to, as you can see above. I’ve made it bind to the test phase only, so that it only executes when the unit tests run. A consequence of this is that we can now change the maven command we run, to something like this:

mvn clean cobertura:cobertura

This will run the Unit Tests implicitly and also check the coverage!

In the second stage of the pipeline, which runs the acceptance tests, we can call:

mvn clean integration-test

This will again run the Surefire plugin, but this time it will run through the test phase (thus executing the unit tests again) and then execute the integration-test phase, which actually runs our acceptance tests.

You’ll notice that we’ve run the unit tests twice now, and this is a problem. Or is it? Well actually no it isn’t, not for me anyway. One of the reasons why the pipeline is broken down into sections is to allow us to separate different tasks according to their purpose. My Unit Tests are meant to run very quickly (less than 3 minutes ideally – they actually take 15 seconds on this particular project) so that if they fail, I know about it asap, and I don’t have to wait around for a lifetime before I can either continue checking in, or start fixing the failed tests. So my unit test pipeline phase needs to be quick, but what difference does an extra few seconds make to my Acceptance Tests? Not too much to be honest, so I’m actually not too fussed about the unit tests running for a second time. If it was a problem, I would of course have to somehow skip the unit tests, but only in the test phase on the second run. This is doable, but not very easy. The best way I’ve thought of is to skip the tests using skipTests, which makes the surefire plugin skip its test runs, and then run your acceptance tests using a different plugin (the AntRun plugin, for instance).
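Another option – an untested sketch on my part, using a hypothetical skip.unit.tests property – is to make the skip flag of the unit-tests execution property-driven, so the acceptance test stage can switch just that execution off:

<properties>
  <!-- hypothetical property: unit tests run by default -->
  <skip.unit.tests>false</skip.unit.tests>
</properties>

<!-- ...and in the surefire plugin's unit-tests execution: -->
<execution>
  <id>unit-tests</id>
  <phase>test</phase>
  <goals>
    <goal>test</goal>
  </goals>
  <configuration>
    <!-- when -Dskip.unit.tests=true is passed, only this execution is skipped -->
    <skip>${skip.unit.tests}</skip>
  </configuration>
</execution>

The acceptance test stage would then call something like “mvn clean integration-test -Dskip.unit.tests=true”.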

The next thing we want to do is create a built artifact (a jar or zip for example) and upload it to our artifact repository. We’ll use 5 artifact repositories in our continuous delivery system, these are:

  1. A cached copy of the maven central repo
  2. A C.I. repository where all builds go
  3. A Release Candidate (RC) repository where all builds under QA go
  4. A Release repository where all builds which have passed QA go
  5. A Downloads repository, from where the downloads to customers are actually served

Once our build has passed all the automated test phases it gets deployed to the C.I. repository. This is done by configuring the C.I. repository in the maven pom file as follows:

<distributionManagement>
  <repository>
    <id>CI-repo</id>
    <url>http://artifactory.mycompany.com/ci-repo</url>
  </repository>
</distributionManagement>

and calling:

mvn clean deploy

Now, since Maven follows the lifecycle pattern, it’ll rerun the tests again, and we don’t want all that, we just want to deploy the artifacts. In fact, there’s no reason why we shouldn’t just deploy the artifact straight after the Acceptance Test stage is completed, so that’s exactly what we’ll do. This means we need to go back and change the maven command for our Acceptance Test stage as follows:

mvn clean deploy

This does the same as it did before, because the integration-test phase is implicit and is executed on the way to reaching the “deploy” phase as part of the maven lifecycle, but of course it does more than it did before, it actually deploys the artifact to the C.I. repository.

One thing that is worth noting here is that I’m not using the maven release plugin, and that’s because it’s not very well suited to continuous delivery, as I’ve noted here. The main problem is that the release plugin will increment the build number in the pom and check it in, which will in turn kick off another build, and if every build is doing this, then you’ll have an infinitely building loop. Maven declares builds as either a “release build”, which uses the release plugin, or a SNAPSHOT build, which is basically anything else. I want to create releases out of SNAPSHOT builds, but I don’t want them to be called SNAPSHOT builds, because they’re releases! So what I need to do is simply remove the word SNAPSHOT from my pom – get rid of it entirely. This will now build a normal “snapshot” build, but without the SNAPSHOT label, and since we’re not running the release plugin, that’s fine (WARNING: if you try removing the word SNAPSHOT from your pom and then run a release build using the release plugin, it’ll fail).

Ok, let’s briefly catch up with what our system can now do:

  • We’ve got a build pipeline with 2 stages
  • It’s executed every time code is checked-in
  • Unit tests are executed in the first stage
  • Code coverage is checked, also in the first stage
  • The second stage runs the acceptance tests
  • The jar/zip is built and deployed to our ci repo, also in the second stage of our pipeline

So we have a jar, and it’s in our “ci” repo, and we have a code coverage report. But where’s the rest of our static analysis? The build should report a lot more than just the code coverage. What about coding styles & standards, rules violations, potential defect hot spots, copy and pasted code, and so forth??? Thankfully, there’s a great tool which collects all this information for us, and it’s called Sonar.

I won’t go into detail about how to set up and install Sonar, because I’ve already detailed it here.

Installing Sonar is very simple, and to get your builds to produce Sonar reports is as simple as adding a small amount of configuration to your pom, and adding the Sonar plugin to your plugins section. To produce the Sonar reports for your project, you can simply run:

mvn sonar:sonar

So that’s exactly what we’ll do in the next section of our build pipeline.

So we now have 3 pipeline sections and we’re producing Sonar reports with every build. The Sonar reports look something like this:

Sonar report

As you can see, Sonar produces a wealth of useful information which we can pore over and discuss in our daily stand-ups. As a rule we try to fix any “critical” rule violations, and keep the unit test coverage percentage up in the 90s (where appropriate). Some people might argue that unit test coverage isn’t a valuable metric, but bear in mind that Sonar allows you to exclude certain files and directories from your analysis, so that you’re only measuring the unit test coverage of the code you want to have covered by unit tests. For me, this makes it a valuable metric.

Moving on from Sonar now, we get to the next stage of my pipeline, and here I’m going to run some Integration Tests (finally!!). The ITs have a much wider scope than the Unit Tests, and they also have greater requirements, in that we need an Integration Test Environment to run them in. I’m going to use Ant to control this phase of the pipeline, because it gives me more control than Maven does, and I need to do a couple of funky things, namely:

  • Provision an environment
  • Deploy all the components I need to test with
  • Get my newly built artifact from the ci repository in Artifactory
  • Deploy it to my IT environment
  • Kick off the tests

The Ant script is fairly straightforward, but I’ll just mention that getting our artifact from Artifactory is as simple as using Ant’s own “get” task (you don’t need to use Ivy just to do this):

<get src="${artifactory.url}/${repo.name}/${namespace}/${jarname}-${version}" dest="${temp.dir}/${jarname}-${version}" />

The Integration Test stage takes a little longer than the previous stages, and so to speed things up we can run this stage in parallel with the previous stage. Go allows us to do this by setting up 2 jobs in one pipeline stage. Our Sonar stage now changes to “Reports and ITs”, and includes 2 jobs:

<jobs>
  <job name="sonar">
    <tasks>
      <exec command="mvn" args="sonar:sonar" workingdir="JavaDevelopment" />
    </tasks>
    <resources>
      <resource>windows</resource>
    </resources>
  </job>
  <job name="ITs">
    <tasks>
      <ant buildfile="run_ITs.xml" target="build" workingdir="JavaDevelopment" />
    </tasks>
    <resources>
      <resource>windows</resource>
    </resources>
  </job>
</jobs>

Once this phase completes successfully, we know we’ve got a half decent looking build! At this point I’m going to throw a bit of a spanner into the works. The QA team want to perform some manual exploratory tests on the build. Good idea! But how does that fit in with our Continuous Delivery model? Well, what I did was to create a separate “Release Candidate” (RC) repository, also known as a QA repo. Builds that pass the IT stage get promoted to the RC repo, and from there the QA team can take them and do their exploratory testing.

Does this stop us from practicing “Continuous Delivery”? Well, not really. In my opinion, Continuous Delivery is more about making sure that every build creates a potentially releasable artifact, rather than making every build actually deploy an artifact to production – that’s Continuous Deployment.

Our final stage in the deployment pipeline is to deploy our build to a performance test environment, and execute some load tests. Once this stage completes we deploy our build to the Release Repository, as it’s all signed off and ready to hand over to customers. At this point there’s a manual decision gate, which in reality is a button in my CI system. Only the product owner, or some such responsible person, can decide whether or not to actually release this build into the wild. They may decide not to, simply because they don’t feel that the changes included in this build are particularly worth deploying. On the other hand, they may decide to release it, and to do this they simply click the button. What does the button do? Well, it simply copies the build to the “downloads” repository, from where a link is served and sent to customers, informing them that a new release is available – that’s just the way things are done here. In a hosted environment (like a web-based company), this button-press could initiate the deploy script to deploy this build to the production environment.

A Word on Version Numbers

This system is actually dependent on each build producing a unique artifact. If a code change is checked in, the resultant build must be uniquely identifiable, so that when we come to release it, we know we’re releasing the exact same build that has gone through the whole pipeline, not some older build. To do this, we need to version each build with a unique number. The CI system is very useful for doing this. In Go, as with most other CI systems, you can retrieve a unique “counter” for your build, which is incremented every time there’s a build. No two builds of the same name can have the same counter. So we could add this unique number to our artifact’s version, something like this (let’s say the counter is 33, meaning this is the 33rd build):

myproject.jar-1.0.33

This is good, but it doesn’t tell us much, apart from that this is the 33rd build of “myproject”. A more meaningful version number is the source control revision number, which relates to the code commit which kicked off the build. This is extremely useful. From this we can cross reference every build to the code in our source control system, and this saves us from having to “tag” the source code with every build. I can access the source control revision number via my CI system, because Go sets it as an environment variable at build time, so I simply pass it to my build script in my CI system’s xml, like this:

cobertura:cobertura -Dp4.revision=${env.GO_PIPELINE_LABEL} -Dbuild.counter=${env.GO_PIPELINE_COUNTER}

p4.revision and build.counter are used in the maven build script, where I set the version number:

<groupId>com.mycompany</groupId>
<artifactId>myproject</artifactId>
<packaging>jar</packaging>
<version>${main.version}-${build.number}-${build.counter}</version>
<name>myproject</name>

<properties>
  <build.number>${p4.revision}</build.number>
  <major.version>1</major.version>
  <minor.version>0</minor.version>
  <patch.version>0</patch.version>
  <main.version>${major.version}.${minor.version}.${patch.version}</main.version>
</properties>

If my Perforce check-in number was 1234, then this build, for example, will produce:

myproject.jar-1.0.0-1234-33

And that just about covers it. I hope this is useful to some people, especially those who are using Maven and are struggling with the release plugin!

Changing File Permissions On Files in Perforce

I had a script file which I had written on my Windows machine, which I wanted to execute on the build server. However, the build server couldn’t execute it because it didn’t have execute permissions by default, and I couldn’t change this on my windows machine.

So, what I had to do was check out the file on my linux box, using a clientspec I’d created, and edit the permissions like this:

p4 -c my_clientspec_test edit -t text+x jamestest.sh

my_clientspec_test is the name of the clientspec I used

jamestest.sh is my script file, obviously.

-t  tells the system to open the file as a particular type – in this case we pass text+x, which means text, with the “executable” modifier.

Of course, if your file isn’t text type you need to change the command to suit, i.e. -t binary+x if it’s a binary.
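One thing to remember: the edit only opens the file on your client, so the new file type doesn’t take effect for anyone else until you submit it. Something like this, using the same clientspec:

p4 -c my_clientspec_test submit -d "Give jamestest.sh the executable bit"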

What is in a name? Usually a version number, actually.

Another fascinating topic for you – build versioning! Ok, fun it might not be, but it is important and mostly unavoidable. In an earlier blog I outlined a build versioning strategy I was proposing to use with our Java builds. Since then, the requirements have changed, as they tend to, and so I’ve had to change the versioning convention.

Essentially, what I’m after is a way of using artifact version numbers to tell me some useful at-a-glance information about the artifact I have created. Also, customers want the version number to meet their expectations – that is, when they get a new build, they want to see an easily identifiable difference in the version number between the new build and their old one. What they don’t want is a long complicated list of numbers which are hard to distinguish. For instance, it’s much easier to identify which of the following 2 versions is the latest:

  • 5.0.1
  • 5.0.4

but it’s not so easy to work out which of these is the latest:

  • 5.0.1.13573
  • 5.0.1.13753

As we’re practicing continuous delivery, any given check-in can feasibly produce a release build. So, I would like some way of identifying exactly which check-in produced my builds, or at least a way of working out which bits of source code went into my released package. There are a couple of ways we can do this:

Tag the source code – We could make the builds tag the source code in our SCM system (Perforce) with every build. This is relatively easy to do using Ant and Maven. With Ant there are numerous different ways of doing it depending on your SCM system, for instance, with subversion you need to use the SvnAnt tasks from subclipse (http://subclipse.tigris.org/svnant/svn.html) and basically perform a copy of your source url:

<copy srcUrl="${src.url}" destUrl="${dest.url}" message="${version.num}"/>

(this is because tags in svn are just cheap copies with a label).

With Maven you just need to use the release plugin – this automatically handles tagging for you.

Tagging the source code is great – it keeps the version numbers as simple as I’d like, and it’s nicely traceable. However, it’s time consuming, and can result in a lot of tags.  The other problem is, I can’t tell which check-in caused the build just by looking at the version number of an artifact.

Use the commit number in the build version – We use a build version of Major.Release.Patch-Build in our artifacts. The build number used to be an auto-incrementing number – this worked fine but it didn’t give us a link back to which commit had caused the build to be made. So, I decided to use the perforce changelist id (i.e. the commit version) as the build number in the version, so that builds would end up looking something like this: 1.0.0-11531.

The problem here is that the version number is not customer friendly – so I remove the build number as a final step, before the builds get released to customers. To track what version the customers have got, I still keep a record of the full build number (including the commit number) in the release notes, and I could also easily inject it into an assembly info or properties/config file if I so wished, so that customers could very easily read out the full version number just by looking in a menu somewhere.
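The removal step itself is just a rename at the end of the pipeline. A sketch of how it might look in Ant, using the 1.0.0-11531 example from above (release.dir and customer.dir are assumed property names):

<!-- drop the build number from the artifact name before it goes out to customers -->
<copy file="${release.dir}/myproject-1.0.0-11531.jar" tofile="${customer.dir}/myproject-1.0.0.jar" />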

There were several obstacles I had to overcome to get this working. The first obstacle, and really this was the one that stopped me from tagging the source code, was that the maven release plugin is abysmal when it comes to continuous delivery. I needed to use the release plugin to tag the source code, but one of the other things that the maven release plugin does is to remove the word SNAPSHOT, increment the version number, and check the pom back into source control. This would cause another build to trigger in the CI system, which in turn would increment the build number etc and cause another build to trigger – so on and so on. Basically it would create a continually building project.

So I have decided not to use the maven release plugin at all – it doesn’t seem to fit in with Continuous Delivery. In order to create potential release candidates with every successful build, I’ve removed the word SNAPSHOT from all the poms, so we aren’t making any snapshot builds anymore either (except when you build locally – more on that later). The version in the poms now takes the P4 commit number, which is injected via the Continuous Integration system, which in my case is Go. Jenkins also supports this, using the subversion plugin (if you use subversion), which sets an environment variable with the svn revision number (more details here). The Jenkins Perforce plugin does the same thing, setting the P4_CHANGELIST environment variable – so it can easily be consumed (more details here).
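So on Jenkins the build step might look something like this (a sketch, assuming the P4_CHANGELIST environment variable that the Perforce plugin sets, as mentioned above):

mvn clean deploy -Dp4.revision=${P4_CHANGELIST}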

Go takes the P4 changelist number and puts it in an environment variable called “GO_PIPELINE_LABEL”. I read this variable in, and assign it to a property called p4.revision. I do this in the command that kicks off the build, so that it overwrites a default value which I can keep in my pom – this is useful because it means my colleagues and I don’t have to make any changes to the pom if we want to run a build locally (bear in mind if we run it locally this environment variable won’t exist on our PCs, so the build would otherwise fail). Here’s a basic run down of a sample pom, with more details to follow:

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>etc.so.forth</groupId>
  <artifactId>MyArtifact</artifactId>
  <packaging>jar</packaging>
  <version>${main.version}-${build.number}</version>

  <description>Description about this application</description>

  <properties>
    <p4.revision>SNAP</p4.revision>
    <build.number>${p4.revision}</build.number>
    <main.version>5.0.2</main.version>
  </properties>

  <scm>
  </scm>

  <repositories>
    <repository>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
      <id>release-candidate-repo</id>
      <url>http://artifactory.me.com/my-rc</url>
    </repository>
  </repositories>

  <build>
  </build>
</project>

The value for p4.revision is “SNAP” by default, meaning that if I make a local build, I’ll get an artifact with the version 5.0.2-SNAP. I know that these builds should never be promoted to production or handed to customers because the word SNAP gives it away.  However, when a build is created by the CI system, the following command is passed:

clean deploy sonar:sonar -Dp4.revision=${env.GO_PIPELINE_LABEL}

This overwrites the value for p4.revision, passing in the Perforce commit number, and the build will create something like 5.0.2-1234 (where 1234 is my imaginary p4 commit number).

I’ve added a property called main.version, which is the same as the full version but without the build number. I’ve done this so that I can package up my customer builds (in a zip) and label them with the version 5.0.2. After all, customers don’t care about the build number.
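Incidentally, if the zip is built by Maven itself, one way to get the customer-friendly file name (a sketch, not necessarily how you’d package yours) is to base the build’s finalName on main.version – finalName only affects the file name in target, while the artifact is still deployed under its full version:

<build>
  <!-- names the packaged file with just the customer-facing version, no build number -->
  <finalName>${project.artifactId}-${main.version}</finalName>
</build>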

An important policy to follow is once a build is released to a customer, one of the other version numbers MUST be increased, meaning all further builds will be at least 5.0.3. The decision of which version number to increase depends on various business factors – I like to increase the 3rd number if I’m releasing a patch to a previously released build. If I’m releasing new functionality I increase the second number. The first number gets increased for major releases. The whole issue of version numbers becomes a lot less complicated if you’re in the business of releasing software to web servers and you don’t actually have to hand software over to customers. In this instance, I just keep the full version number with the build number at the end, as it’s usually someone like me who has to look after the production system anyway!

Are Tools and Processes Stifling Our Creativity and Productivity?

I had lunch with my friend Christian the other day (we went to wagamamas, I had Ginger Chicken Udon – delicious) and he was telling me about a company who work according to what sounded very much like developer anarchy – basically everyone is allowed to go their own route, use whatever tools or languages they please, using any framework they want to, to deliver projects. This sounded like fresh lunacy, and I couldn’t get past how much of a nightmare it must be as a build/release or sysadmin guy, to make all of that come together and then look after it.


Developer Anarchy

However, it did get me thinking about the tools and frameworks that I’ve used over the years, and whether they were too restrictive. That led me on to thinking about some of the business processes I’ve worked with, and how counter-productive they could be too. So I decided to list some of these tools, processes and frameworks, and critique them with respect to how inflexible and restrictive they can be.

Maven and Ant

Rather predictably, I’ll start with Maven. Maven is both a build tool and a framework; it uses plugins to hide the complexities of the tasks it performs “behind the scenes”. Rather than explicitly coding out what you want to compile, or what unit tests you want to run, Maven says “if you put your files here, and call this plugin, I’ll take care of it”. Most of the Maven plugins are fairly configurable, but only to an extent, and you have to consult the (very hit-and-miss) documentation every time you want to do something even slightly off the beaten path. It very much relies on the “convention over configuration” principle – as long as you conform to its convention, you’ll be fine. And it’s this characteristic which inevitably makes it so restrictive. It’s testament to how difficult it is to veer from the “Maven way” that I almost always end up using the antrun plugin to call an ant script to do most of the non-standard Maven things that I want to do with my builds. For instance, I currently have a requirement to copy my distributable to 2 separate repositories, one with some supporting files (which need to be filtered) and one without. This isn’t a particularly weird requirement, and you can probably do it with Maven, but despite the fact that I’ve worked with Maven on a daily basis for over 3 years, I still find it much easier to just call Ant to do stuff like this, rather than write my own plugin or go trawling through Maven’s online documentation.
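To illustrate the escape hatch, here’s roughly how that two-repository copy might look via antrun – purely a sketch: repo.plain.dir, repo.full.dir and the src/main/support directory are assumed names, and the @VERSION@ token filtering stands in for whatever the supporting files actually need:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <tasks>
          <!-- first repository: just the distributable -->
          <copy file="target/${project.build.finalName}.zip" todir="${repo.plain.dir}" />
          <!-- second repository: the distributable plus token-filtered supporting files -->
          <copy file="target/${project.build.finalName}.zip" todir="${repo.full.dir}" />
          <copy todir="${repo.full.dir}">
            <fileset dir="src/main/support" />
            <filterset>
              <filter token="VERSION" value="${project.version}" />
            </filterset>
          </copy>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>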

Dev Team Ring-Fencing

I have worked in companies where discussion and interaction between development and QA/NetOps etc have actively been discouraged. The idea behind this insanity has been to give the developers time to do their work, without being constantly interrupted by testers, release engineers etc and so on. I can just about understand how this can happen, but I can’t really understand how someone can come to the conclusion that ring-fencing a whole team and discouraging interactions with other teams is a sustainable and sensible solution. The solution should be to find out exactly WHY the QA or NetOps team need to constantly interrupt the developers. Being told that you can’t talk freely with other teams develops business silos which can be very hard to break down, and also prevents people from getting the information they require, which can seriously impact productivity.

Audits

If I had a pound and a pat on the head every time I heard the phrase “we can’t do that, it would never get past the auditors”, by now I’d be a rich man with a flat head. Working in build and release engineering, I quite often get caught up with security and audit issues when it comes to doing deployments – what account should be used to do deployments to production? How often should passwords be changed? How do we pass the passwords? Are they encrypted? What level of access should release engineers have to the codebase? All of these questions come up at one point or another. In my experience, developers, sysadmins, managers and the like, will always come back with “you can’t do that, it’ll never get past the auditors” whenever you propose a new or slightly radical way of doing things, it’s like their safety net, and I think it’s just their way of saying “we don’t like that, it’s different, different is bad”. My suggestion is to speak with the auditors and challenge what you’ve been told. If that’s not possible, try to find out exactly what rule you’re breaking and then find out what the guidelines for compliance are on the rule in question – google is remarkably good for this, so are forums and meetups. In my experience, most of the so called compliance rules are not nearly as draconian or restrictive as many of my own colleagues would have me believe.

Meetings

Oh my god. So many meetings. I now hate pretty much all meetings. Prior to working at Caplin Systems I think my job was to attend meetings all day. I can’t really remember, I can’t actually remember a time before meetings.

The worst meetings of all are the “weekly status meetings”. I don’t know if these are widely popular but if they are, and you have to attend them, then you have my sympathies. They amount to a lot of people talking about stuff that’s already happened. Quite a lot of it has nothing to do with you, and absolutely all of it could be communicated more efficiently without needing a meeting. Also, someone takes notes. Taking notes in a meeting about stuff that’s already happened so that they can tell everyone else about what happened in a meeting about stuff that’s already happened. The pain is still raw. Don’t do this, it’s stupid. Do demos instead, they’re more interesting and we actually get to SEE stuff, rather than just hear someone droning on about stuff they’ve already done. Meetings are not productive, and they stifle my creativity by making me want to go to sleep.

MS Project

“You have to work on this today, and it has to take you eight hours, and then you have to work on a different project for 50% tomorrow, and then you have to work on bla bla bla”. No. Wrong. This is totally unrealistic. You only have to be out by 10% with your LOE (Level of Effort, or SWAG “Sophisticated Wild-Arsed Guess”) to throw the whole thing into disarray. Be sick for a day and the system is in free-fall; one project needs to shift by 1 day and suddenly you’re clashing with all these other projects and you’re assigned to work a 16 hour day. MS Project is often the tool of choice for companies who follow the Waterfall process, and it’s probably the process as much as the tool itself which causes this mess, but the tool certainly doesn’t help. I prefer to use tools that allow me to manage my time on a 2-weekly basis (like Acunote, see here); the decision of when I should work on each task is determined by me and the team I’m working with, and it has worked very successfully so far.

Source Control Systems

Aside from Visual Source Safe, I’ve never worked with a truly awful SCM. They all have really neat features and at the end of the day they perform an exceedingly important job. However, the everyday use of these tools varies from one to another, and some are more restrictive and controlling than others. For instance, Perforce, which has long been one of my favorites, only allows you to have 1 root for your client spec – this is a pain in a CI system if you can’t tell your server to switch clientspec (or workspace). I’m currently working with Perforce coupled with Go as the Continuous Integration server, and Go forces me to check out my files to “C:\Program Files\Cruise Agent” but my build has a requirement to also sync some files to D:\. This can’t be done with Perforce because you can only have one root directory (in this case C:\).

I like SCM systems where it’s easy to create branches. I’m a big fan of personal branches, somewhere to check in my personal work, POCs, or just incomplete bug fixes that I’d like to check in somewhere safe before I go home. Subversion and Git make this easy, and they also allow you to merge your changes to another branch. I personally find Subversion better at this than Perforce, but that might just be me (I get confused with Perforce’s “yours” and “theirs”). SCM systems that allow for easy branching are far less restrictive, and encourage you to try something different, without having to worry about checking in to the main branch and breaking everything.
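In Git, for example, parking unfinished work somewhere safe is a couple of commands (the branch name is just illustrative):

# park incomplete work on a personal branch and push it somewhere safe
git checkout -b personal/james/half-finished-fix
git push origin personal/james/half-finished-fix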

Continuous Integration Systems

C.I. systems can really get in the way of your productivity if they:

  • Take ages to set up a build job – How can I be productive if I’m spending so long creating a build job, or waiting for one to be made?
  • Keep reporting false-negatives – How can I be productive if I’m always looking into a failing build which isn’t really failing?
  • Don’t provide a friendly interface
  • Don’t provide you with the information that you need!

A good C.I. system should be a service, and a hub of information. It should be easy to copy build jobs or create new ones, and it should provide all users with a single point of reference for all the build output, like test results and static analysis results. At a previous job, Pankaj Sahu, the build engineer, set up a system of automatically creating build jobs in Bamboo using Selenium and Ant, and it was brilliant. Bamboo also allows you to copy jobs, which is in itself a real time saver. Jenkins also has a similar feature. Go, which is a good C.I. system, is slightly harder work, but its main advantages lie elsewhere.

Maven Release Plugin and Continuous Delivery

I was setting up a Continuous Delivery system using Maven as the build tool, Perforce as the SCM and Go (ThoughtWorks’ CI system). All was going perfectly well until I got to the point when I no longer wanted to make snapshot builds…

The idea behind my Continuous Delivery system was this:

  • Every check-in runs a load of unit tests
  • If they pass it runs a load of acceptance tests
  • If they pass we run more tests – Integration, scenario and performance tests
  • If they all pass we run a bunch of static analysis and produce pretty reports and eventually deploy the candidate to a “Release Candidate” repository where QA and other like-minded people can look at it, prod it, and eventually give it a seal of approval.

As you can see, there’s no room for the notion of “snapshot” and “release” builds being separate here. Every build is a potential release build. So, a few days ago I went right ahead and used the maven release plugin, and that was the last time I remember smiling, having fun, getting a full night’s sleep, and my brain not hurting.

The problem is this: the maven release plugin doesn’t really work for continuous delivery. And what’s more, it REALLY doesn’t work with Go and Perforce. I’ll start with the Go/Perforce issues: I got loads of errors thanks to the way Go runs as the system user, and creates its own clientspecs. The results of this debacle are detailed here and here.

I managed to finally get around the clientspec/P4/Go problems with some help from my colleague Toby Catlin who bears the scars of similar skirmishes with Go and Perforce from days gone by. The “fix” was to create a perforce config file and an “uber” client spec. The perforce config file specified the uber clientspec and it lived in the root of the project directory. It was hardly a satisfactory workaround, as it meant that every project would need to have this file, and the uber clientspec would need to be updated every time a new build job was created. But never mind that, it was just a relief to see the builds going green for a change.

And that’s when it happened… the release build completed. The maven release plugin increased the version number in the pom and checked it in. And then Go detected the change in the pom and checked it out again and started building again. This then updated the version number and checked it in, which in turn got detected and kicked off another build CAN YOU SEE THE GLARINGLY OBVIOUS PROBLEM HERE????

It’s obvious really. I’ve always made my maven release builds a manual process in the past and that was exactly why, I’d just forgotten all about it. So, I’ve decided not to use the maven release plugin at all. Every build now just creates a “release” build because I’ve removed all instances of the word SNAPSHOT from the poms. If they pass all their tests and look good enough, they’re automatically promoted to the release candidate repository. And everyone’s happy. Also, I’ve added a property to the builds which pulls in a variable from the Go system, and if that’s not present the “deploy to release candidate repository” step fails – this is to stop developers from manually creating releases – all release builds must come from the CI system.
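For what it’s worth, here’s a sketch of one way to implement that kind of guard, using the maven-enforcer-plugin’s requireProperty rule (this isn’t exactly what I did, but the idea is the same – the pom’s local default value of SNAP fails the regex, so only CI-supplied changelist numbers get through):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>no-local-releases</id>
      <phase>deploy</phase>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <requireProperty>
            <property>p4.revision</property>
            <!-- anything that still looks like a local build is rejected -->
            <regex>\d+</regex>
            <regexMessage>p4.revision must be a Perforce changelist number - release builds must come from the CI system.</regexMessage>
          </requireProperty>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>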

Maven Release Plugin and Perforce Clientspecs

I’m getting a very annoying error trying to do maven release builds on our CI servers, which isn’t appearing on my local workstation. The build seems to fail because the pom file is under source control (I’m using Perforce) and so it’s read-only. However, it should check out the pom file so that it isn’t read-only. Alas, that doesn’t seem to be working. Here are the errors I got initially:

[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error writing POM: C:\Program Files\Cruise Agent\pipelines\yadda\yadda\pom.xml (Access is denied)

The reason behind it appeared to be:

[ERROR] CommandLineException Exit code: 1 - Client 'xpcruisebuildvm1-STANDARD-ALL' can only be used from host 'xpcruisebuildvm1'.

Command line was: p4 -d "C:\Program Files\Cruise Agent\pipelines\yadda\yadda" -p perforce.mycompany.com:1666 edit pom.xml
org.codehaus.plexus.util.cli.CommandLineException: Exit code: 1 - Client 'xpcruisebuildvm1-STANDARD-ALL' can only be used from host 'xpcruisebuildvm1'

So the first thing I did was create a new clientspec for that machine which had all the files mapped to its c:\temp directory, and then tried to run the build again, assuming this would fix the issue but present me with a whole new problem about how to fit this all in to my CI system. However, this also failed:

[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Error writing POM: C:\TEMP\yadda\pom.xml (Access is denied)

This is confusing because the clientspec I’m using does have these files in its view and therefore Maven should be able to edit them. Also, this is how I have it setup on my local workstation, and that works fine…

After a lot of trial and error I managed to make some progress. One of the issues was that the client spec had the following mapping in it:

//JavaDevelopment/main/yadda/… //XPCRUISEVM808_release/temp/yadda/…

and the root was set to C:\

This means all P4 files should sync to my C:\TEMP directory, which already existed on the machine. As expected, the sync worked fine and all files appeared in C:\TEMP\yadda.

And therein lies the rub: the client spec uses lowercase “temp”, while the windows directory was uppercase “TEMP”. As simple as that. I changed the client spec to match the filesystem, and for good measure updated my release plugin version to 2.1. Problem solved (well, actually that presents me with a whole new problem, because I don’t want every build agent to have different client specs – I’m using Go, and it puts them in its own directory under the name of the build job. Grrrr).