POM ' release:prepare release' not found in repository

I’m getting this rather irritating error in my Maven builds at the moment, while trying to set up some release builds using Go:

POM ' release:prepare release' not found in repository

One issue I have with Go is that it doesn’t natively support Maven. This isn’t much of a big deal, because I can just tell it to run a custom command and point it to the mvn shell or batch file (this could be a pain if I wanted my builds to run on both Windows and Linux without defining separate build jobs for each, but I don’t, and I can’t think of any reason why I would, so that’s ok). Anyway, the issue this time is with the way I set up the build job. I used the clicky UI (new in version 2.2) to set up the job, telling it to run the mvn batch file and what arguments to pass. This just seemed not to work. When I looked at the Go XML file it looked a bit like this:

<job name="build_release">
  <tasks>
    <exec command="D:\buildTools\maven\2.2.1\bin\mvn.bat" workingdir="yadda\yadda">
      <args>-B release:prepare release:perform</args>
    </exec>
  </tasks>
  <resources>
    <resource>windows</resource>
  </resources>
</job>

So I deleted it and manually edited the XML, making it look like this instead:

<job name="build_release">
  <tasks>
    <exec command="D:\buildTools\maven\2.2.1\bin\mvn.bat" args="-B release:prepare release:perform" workingdir="yadda\yadda" />
  </tasks>
  <resources>
    <resource>windows</resource>
  </resources>
</job>

And this seems to have fixed it. Not very impressive at all.

Build Versioning Strategy

Over the last few years I’ve followed a build versioning strategy of the following format:

<Major Version>.<Release Version>.<Patch Number>.<Build ID>

Separating the version into dot-delimited segments allows us to implement an auto-incrementing strategy for our builds, meaning the Build ID doesn’t need to be manually changed each time we produce a build, as this is taken care of by the build system. Both Maven and Ant have simple methods of incrementing this number.
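
For example, Ant’s built-in <buildnumber> task handles the auto-increment (a minimal sketch; the major, release and patch properties are assumptions, defined elsewhere in your script):

<!-- Reads build.number from disk, increments it, writes it back, and exposes ${build.number} -->
<target name="version">
  <buildnumber file="build.number"/>
  <property name="full.version" value="${major}.${release}.${patch}.${build.number}"/>
  <echo message="Building version ${full.version}"/>
</target>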

Ensuring that each build has a unique version number (by incrementing the Build ID) allows us to distinguish between builds, as no two builds of the same project will have the same Build ID. The other numbers are changed manually, as and when required.

When to Change Versions:

Major Version – Typically changes when there are very large changes to the product or project, such as after a rewrite, or a significant change to functionality

Release Version – Incremented when there is an official release of a project which is not considered a Major Version change. For example, we may plan to release a project to a customer in 2 or 3 separate releases. These releases may well represent the same major version (say version 5) but we would still like to be able to identify the fact that these are subsequent planned releases, and not patches.

Patch Number – This denotes a patch to an existing release. The release that is being patched is reflected in the Release Version. A patch is usually issued to fix a critical bug or a collection of major issues, and as such is different to a “planned” release.

Build ID – This auto-increments with each release build in the CI system. This ensures that each build has a unique version number. When the Major Version, Release Version or Patch Number is increased, the Build ID is reset to 1.

Examples:

17.23.0.9 – This represents release 17.23. It is the 9th build of this release.

17.24.0.1 – This is the next release, release 17.24. This is the first build of 17.24.

17.24.1.2 – This represents a patch for release 17.24. This is the first patch release, and happens to be the 2nd build of that patch.

Continuous Delivery using build pipelines with Jenkins and Ant

My idea of a good build system is one which will give me fast, concise, relevant feedback, but I also want it to produce a proper finished article when I’ve checked in my code. I’d like every check-in to result in a potential release candidate. Why? Well, why not?

I used to employ a system where release candidates were produced separately from my check-in builds (also known as “snapshot” builds). This encouraged people to treat snapshot builds as second rate; the main focus was on the release builds. However, if every build is a potential release build, then the focus on each build is increased. Consequently, if every build could be a potential release candidate, then I need to make sure every build goes through the most rigorous testing possible, and I would like to see a comprehensive report on the stability and design of the build before it gets released. I would also like to do all of this automatically, as I am inherently lazy, and have a Facebook profile to constantly update!

This presents me with a problem: I want instant feedback on check-in builds, and to have full static analysis performed on them and yet I still want every check-in build to undergo a full suite of testing, be packaged correctly AND be deployed to our test environments. Clearly this will take a lot longer than I’m prepared to wait! The solution to this problem is to break the build process down into smaller sections.

Pipelines to the Rescue!

The concept of build pipelines has been around for a couple of years at least. It’s nothing new, but it’s not yet standard practice, which is a pity because I think it has some wonderful advantages. The concept is simple: the build as a whole is broken down into sections, such as the unit test, acceptance test, packaging, reporting and deployment phases. The pipeline phases can be executed in series or parallel, and if one phase is successful, it automatically moves on to the next phase (hence the relevance of the name “pipeline”). This means I can set up a build system where unit tests, acceptance tests and my static analysis are all run simultaneously at commit stage (if I so wish), but the next stage in the pipeline will not start unless they all pass. This means I don’t have to wait around too long for my acceptance test results or static analysis report.

Continuous Delivery

Continuous delivery has also been around for a while. I remember hearing about it in about 2006 and loving the concept. It seems to be back in the news again since the publication of “Continuous Delivery”, an excellent book from Jez Humble and David Farley. Again the concept is simple: roughly speaking, it means that every build gets made available for deployment to production if it passes all the quality gates along the way. Continuous Delivery is sometimes confused with Continuous Deployment. Both follow the same basic principle; the main difference is that with Continuous Deployment it is implied that each and every successful build will be deployed to production, whereas with Continuous Delivery it is implied that each successful build will be made available for deployment to production. The decision of whether or not to actually deploy the finished article to the production environment is entirely up to you.

Continuous Delivery using Build Pipelines

You can have continuous delivery without using build pipelines, and you can use build pipelines without doing continuous delivery, but the fact is they seem made for each other. Here’s my example framework for a continuous delivery system using build pipelines:

I check some code in to source control – this triggers some unit tests. If these pass it notifies me, and automatically triggers my acceptance tests AND produces my code-coverage and static analysis report at the same time. If the acceptance tests all pass my system will trigger the deployment of my project to an integration environment and then invoke my integration test suite AND a regression test suite. If these pass they will trigger another deployment, this time to UAT and a performance test environment, where performance tests are kicked off. If these all pass, my system will then automatically promote my project to my release repository and send out an alert, including test results and release notes.

So, in a nutshell, my “template” pipeline will consist of the following stages:

  • Unit-tests
  • Acceptance tests
  • Code coverage and static analysis
  • Deployment to integration environment
  • Integration tests
  • Scenario/regression tests
  • Deployments to UAT and Performance test environment
  • More scenario/regression tests
  • Performance tests
  • Alerts, reports and Release Notes sent out
  • Deployment to release repository

Introducing the Tools:

Thankfully, implementing continuous delivery doesn’t require any special tools outside of the usual toolset you’d find in a normal Continuous Integration system. It’s true to say that some tools and applications lend themselves to this system better than others, but I’ll demonstrate that it can be achieved with the most common/popular tools out there.

Who’s this Jenkins person??

Jenkins is an open-source Continuous Integration application, like Hudson, CruiseControl and many others (it’s basically Hudson, or was Hudson, but isn’t Hudson any more. It’s a trifle confusing*, but it’s not important right now!). So, what is Jenkins? Well, as a CI server, it’s basically a glorified scheduler, a cron job if you like, with a swish front end. Ok, so it’s a very swish front end, but my point is that your CI server isn’t usually very complicated; in a nutshell, it just executes the build scripts whenever there’s a trigger. There’s a more important aspect than just this though, and that’s the fact that Jenkins has a build pipelines plugin, which was written recently by Centrum Systems. This pipelines plugin gives us exactly what we want: a way of breaking down our builds into smaller loops, and running stages in parallel.

Ant

Ant has been possibly the most popular build scripting language for the last few years. It’s been around for a long while, and its success lies in its simplicity. Ant is an XML-based scripting language tailored specifically for software build related tasks, specifically Java (NAnt is the .NET version of Ant, and is almost identical).

Sonar

Sonar is a quality measurement and reporting tool, which produces metrics on build quality such as unit test coverage (using Cobertura) and static analysis (using FindBugs, PMD and Checkstyle). I like to use Sonar as it provides a very readable report and contains a great deal of useful information all in one place.

Setting up the Tools

Installing Jenkins is incredibly simple. There’s a Debian package for operating systems such as Ubuntu, so you can install it using apt-get. For Red Hat users there’s an RPM, so you can install it via yum.
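
Assuming you’ve already added the Jenkins package repository for your distribution, it’s a one-liner either way:

sudo apt-get install jenkins    # Debian/Ubuntu
sudo yum install jenkins        # Red Hat/CentOS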

Alternatively, if you’re already running Tomcat v5 or above, you can simply deploy the jenkins.war to your Tomcat container.

Yet another alternative, and probably the simplest way to quickly get up and running with Jenkins is to download the war and execute:

java -jar jenkins.war

This will run Jenkins through its own Winstone servlet container.
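
By default Winstone listens on port 8080; if that clashes with something else on your machine, you can pass a different port on the command line:

java -jar jenkins.war --httpPort=9090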

You can also use this method for installing Jenkins on Windows, and then, once it’s up and running, you can go to “Manage Jenkins” and click on the option to install Jenkins as a Windows service! There’s also a Windows installer, which you can download from the Jenkins website.

Ant is also fairly simple to install, however you’ll need the Java JDK installed as a prerequisite. To install Ant itself you just need to download and extract the tar, and then create the environment variable ANT_HOME (point this at the directory you unzipped Ant into). Then add ${ANT_HOME}/bin (or %ANT_HOME%\bin if you’re on Windows) to your PATH, and that’s about it.
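
On Linux that boils down to something like this (the install path here is just an example):

export ANT_HOME=/opt/apache-ant
export PATH=${PATH}:${ANT_HOME}/bin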

Configuring Jenkins

One of the best things about Jenkins is the way it uses plugins, and how simple it is to get them up and running. The “Manage Jenkins” page has a “Manage Plugins” link on it, which takes you to a list of all the available plugins for your Jenkins installation.

To install the build pipeline plugin, simply put a tick in the checkbox next to “build pipeline plugin” (it’s 2/3 of the way down on the list) and click “install”. It’s as simple as that.

The Project

The project I’m going to create for the purpose of this example is going to be a very simple Java web application. I’m going to have a unit test and an acceptance test stage. The build system will be written in Ant; it will compile the project, run the tests, and also deploy the build to a Tomcat server. Sonar will be used for producing the reports (such as test coverage and static analysis).

The Pipelines

For the sake of simplicity, I’ve only created 6 pipeline sections. These are:

  • Unit test phase
  • Acceptance test phase
  • Deploy to test phase
  • Integration test phase
  • Sonar report phase
  • Deploy to UAT phase

The successful completion of the unit tests will initiate the acceptance tests. Once these complete, 2 pipeline stages are triggered:

  • Deployment to a test server

and

  • Production of Sonar reports.

Once the deployment to the test server has completed, the integration test pipeline phase will start. If these pass, we’ll deploy our application to our UAT environment.

To create a pipeline in Jenkins we first have to create the build jobs. Each pipeline section represents one build job, which in this case runs just one Ant invocation each. You then have to tell each build job about the downstream build which it must trigger, using the “build other projects” option.
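
Under the covers that option just adds a build trigger to the job’s config.xml. It ends up looking roughly like this (a sketch, with “acceptance_tests” as a hypothetical downstream job name):

<publishers>
  <hudson.tasks.BuildTrigger>
    <!-- The downstream job to kick off when this one goes green -->
    <childProjects>acceptance_tests</childProjects>
    <threshold>
      <name>SUCCESS</name>
      <ordinal>0</ordinal>
      <color>BLUE</color>
    </threshold>
  </hudson.tasks.BuildTrigger>
</publishers>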

Obviously I only want each pipeline section to do the single task it’s designed to do, i.e. I want the unit test section to run just the unit tests, and not the whole build. You can easily do this by targeting the exact section(s) of the build file that you want to run. For instance, in my acceptance test stage, I only want to run my acceptance tests. There’s no need to do a clean, or recompile my source code, but I do need to compile my acceptance tests and execute them, so I choose the targets “compile_ATs” and “run_ATs” which I have written in my ant script (sketched below). The build job configuration page allows me to specify which targets to call.
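
For illustration, those two targets might look something like this (a sketch only; the directory layout and JUnit setup are my assumptions, not the actual script):

<!-- Compile just the acceptance tests against the already-built production classes -->
<target name="compile_ATs">
  <mkdir dir="build/at-classes"/>
  <javac srcdir="src/acceptance" destdir="build/at-classes" classpathref="test.classpath" includeantruntime="false"/>
</target>

<!-- Run everything that looks like an acceptance test, failing the build on any failure -->
<target name="run_ATs" depends="compile_ATs">
  <junit haltonfailure="true" printsummary="true">
    <classpath refid="test.classpath"/>
    <classpath path="build/at-classes"/>
    <formatter type="xml"/>
    <batchtest todir="build/reports">
      <fileset dir="build/at-classes" includes="**/*Test.class"/>
    </batchtest>
  </junit>
</target>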

Once the 6 build jobs are created, we need to make a new view, so that we can start to visualise this as a pipeline.

We now have a new pipeline! The next thing to do is kick it off and see it in action.

Oops! Looks like the deploy to QA has failed. It turns out to be an error in my deploy script. But what this highlights is that the Sonar report is still produced in parallel with the deploy step, so we still get our build metrics! This functionality can become very useful if you have a great deal of different tests which could all be run at the same time, for instance performance tests or OS/browser-compatibility tests, which could all be running on different operating systems or web browsers simultaneously.

Finally, I’ve got my deploy scripts working so all my stages are looking green! I’ve also edited my pipeline view to display the results of the last 3 pipeline builds.

Alternatives

The pipelines plugin also works for Hudson, as you would expect. However, I’m not aware of such a plugin for Bamboo. Bamboo does support the concept of downstream builds, but that’s really only half the story here. The pipeline “view” in Jenkins is what really brings it all together.


“Go”, the enterprise Continuous Integration effort from ThoughtWorks, not only supports pipelines, but was pretty much designed with them in mind. Suffice to say that it works exceedingly well; in fact, I use it every day at work! On the downside though, it costs money, whereas Jenkins doesn’t.

As far as build tools/scripts/languages are concerned, this system is largely agnostic. It really doesn’t matter whether you use Ant, NAnt, Gradle or Maven; they all support the functionality required to get this system up and running (namely the ability to target specific build phases). However, Maven does make hard work of this in a couple of ways. Firstly, because of the way Maven lifecycles work, you cannot invoke the “deploy” phase in an isolated way; it implicitly calls all the preceding phases, such as the compile and test phases. If your tests are bound to one of these phases, and they take a long time to run, then this can make your deploy seem to take a lot longer than you would expect. In this instance there’s a workaround: you can skip the tests using -DskipTests, but this doesn’t work for all the other phases which are implicitly called. Another drawback with Maven is the way it uses snapshot and release builds. Ultimately we want to create a release build, but at the point of check-in Maven gives us a snapshot build. This suggests that at some point in the pipeline we’re going to have to recompile in “release mode”, which in my book is a bad thing, because it means we have to run ALL of the tests again. The only solution I have thought of so far is to make every build a release build and simply not use snapshots.
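
As an aside, one way to take the sting out of the first problem is to hang the skip flag off a property, so each pipeline stage can decide whether the unit tests run again. This is only a sketch; skip.unit.tests is a made-up property name, and it’s surefire’s skipTests parameter doing the actual skipping:

<properties>
  <!-- Default: tests run. Later pipeline stages pass -Dskip.unit.tests=true, having already run them -->
  <skip.unit.tests>false</skip.unit.tests>
</properties>

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <skipTests>${skip.unit.tests}</skipTests>
  </configuration>
</plugin>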


* A footnote about the Hudson/Jenkins “thing”: It’s a little confusing because there’s still Hudson, which is owned by Oracle. The whole thing came about when there was a dispute between Oracle, the “owners” of Hudson, and Kohsuke Kawaguchi along with most of the rest of the Hudson community. The story goes that Kawaguchi moved the codebase to GitHub and Oracle didn’t like that idea, and so the split started.

Maven Release plugin: issues with Perforce

I was using the “default” maven release plugin to do a release build (which tags my Perforce SCM as part of its process), and I got the following error:

‘login’ not necessary, no password set for this user

Then, if I supplied my username and password I got:

You don’t have permission for this operation

But my Perforce user doesn’t have a password, so I tried leaving it blank, which gave me:

password is required for the perforce scm plugin.

I “fixed” it by explicitly referencing version 2.0 of the release plugin in my pom:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-release-plugin</artifactId>
      <version>2.0</version>
    </plugin>
  </plugins>
</build>

Maven Assembly Plugin Filtering

A colleague at Caplin made some changes to a build pom to set up some filtering on some files. We followed the instructions given on the Maven site. It basically tells you to list the files you want to filter in your assembly descriptor, and then to configure your filter file inside your assembly plugin’s configuration element, like this:

<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>2.2.1</version>
  <configuration>
    <filters>
      <filter>src/assemble/filter.properties</filter>
    </filters>
    <descriptors>
      <descriptor>src/assemble/distribution.xml</descriptor>
    </descriptors>
  </configuration>
</plugin>

If you do this though, you’ll get an error saying something like:

[INFO] Error configuring: org.apache.maven.plugins:maven-assembly-plugin.

Reason: ERROR: Cannot override read-only parameter: filters in goal: assembly:single

This is basically because the Maven documentation is wrong. In reality you need to add the filters section as a child of the build element, not a child of the assembly plugin’s configuration element.

So, it should look more like this:

<build>
  <filters>
    <filter>src/assemble/filter.properties</filter>
  </filters>
  <plugins>
    <plugin>
      <artifactId>maven-assembly-plugin</artifactId>
      <version>2.2-beta-1</version>
      <executions>
        <execution>
          <id>make-assembly</id>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
          <configuration>
            <descriptors>
              <descriptor>src/main/assembly/kit.xml</descriptor>
            </descriptors>
            <finalName>${pom.artifactId}-${pom.version}-${p4.revision}</finalName>
            <outputDirectory>build/maven/${pom.artifactId}/target</outputDirectory>
            <workDirectory>build/maven/${pom.artifactId}/target/assembly/work</workDirectory>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Acunote – Agile Project Management Tool

This is just a short little post about a task management tool I’ve been using for roughly a couple of years now. It’s called Acunote (http://www.acunote.com) and I’ve come to love it. I use it as a task management tool at the moment but in the past I’ve used it as a project management tool and a team management tool.

Acunote is based on the agile scrum system, and as such it allows you to create “sprints”. In your sprints you create tasks (in the real world these would be derived from user stories). You can assign tasks to individuals, estimate them, set a priority and continually update this information.

[Screenshot: an Acunote sprint]

As you can see, Acunote will work out a burn-down chart for you as you go along (which you can view by user, multi-user or team), and will give predictions on whether you’re going to hit your targets etc. It also allows you to keep a backlog, and adding a bunch of tasks from your backlog into a new sprint is as simple as a click of a button.

All in all I’ve found it to be fantastic, and I couldn’t imagine not using it now. I highly recommend taking a look at it.

Changing a filename using the maven assembly plugin

I’m currently working on a project which requires the build to produce a zip archive of the jar and some other stuff (doc files mainly). I’ve used the maven assembly plugin, with an assembly descriptor to help me out here. However, there was one slightly unusual requirement – I needed to change the name of the jar so that inside the zip, it has a non-standard name, and no version number (the reasons behind this are fairly weak, but basically it’s because a load of other scripts, which we can’t change, expect to find the jar in this non-standard format).

So, the build produces:

myproject-1.0.0.0-SNAPSHOT.jar

but in the zip I need to have:

my-project.jar

Here’s how I did it.

  • Include the maven-assembly plugin in the build:

<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>2.2-beta-1</version>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
      <configuration>
        <descriptors>
          <descriptor>src/main/assembly/kit.xml</descriptor>
        </descriptors>
        <finalName>${pom.artifactId}-${pom.version}</finalName>
        <outputDirectory>build/maven/${pom.artifactId}/target</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>

  • Create the kit.xml in src/main/assembly and specify <destName> in the file inclusion. Here’s my kit.xml:

<assembly>
  <id>kit</id>
  <formats>
    <format>zip</format>
  </formats>
  <fileSets>
    <fileSet>
      <directory>src/main/resources</directory>
      <outputDirectory>doc</outputDirectory>
      <includes>
        <include>*.*</include>
      </includes>
    </fileSet>
  </fileSets>
  <files>
    <file>
      <source>build/maven/${artifactId}/target/${artifactId}-${version}.${packaging}</source>
      <outputDirectory>/</outputDirectory>
      <destName>my-project.jar</destName>
    </file>
  </files>
</assembly>

As you can see, the assembly descriptor has a “files” section, which is what does the trick for us. I’ve isolated the actual section below for clarity:

<files>
  <file>
    <source>build/maven/${artifactId}/target/${artifactId}-${version}.${packaging}</source>
    <outputDirectory>/</outputDirectory>
    <destName>my-project.jar</destName>
  </file>
</files>

Maven assembly plugin inheritance headache

Today I’ve had a headache with the Maven assembly plugin, and the way it inherits from a parent. The story goes as follows:

I have an uber parent pom, which defines the assembly plugin in the pluginManagement section, and it looks like this:

<artifactId>maven-assembly-plugin</artifactId>
<version>2.2-beta-1</version>
<configuration>
  <descriptors>
    <descriptor>src/assembly/kit.xml</descriptor>
  </descriptors>
  <finalName>${pom.artifactId}-${pom.version}</finalName>
  <outputDirectory>build/maven/${pom.artifactId}/target</outputDirectory>
  <workDirectory>build/maven/${pom.artifactId}/target/assembly/work</workDirectory>
</configuration>
<executions>
  <execution>
    <id>make-assembly</id>
    <phase>package</phase>
    <goals>
      <goal>single</goal>
    </goals>
  </execution>
</executions>

Then I have a parent pom for a collection of projects, and this parent pom has the following definition in it:

<artifactId>maven-assembly-plugin</artifactId>
<executions>
  <execution>
    <id>make-assembly</id>
    <phase>package</phase>
    <goals>
      <goal>attached</goal>
    </goals>
  </execution>
</executions>
<configuration>
  <descriptors>
    <descriptor>src/main/assembly/kit.xml</descriptor>
  </descriptors>
</configuration>

Now, if I simply put <artifactId>maven-assembly-plugin</artifactId> in one of the module’s pom files, it inherits most of everything from the project parent, which makes sense to me. The problem arises when we want to do something nifty with the assembly plugin, namely create a jar with dependencies, and THEN include that in the package as described by an assembly descriptor which is different from the parent’s.

Here’s what I had in my project pom:

<artifactId>maven-assembly-plugin</artifactId>
<executions>
  <execution>
    <id>make-assembly</id>
    <phase>package</phase>
    <goals>
      <goal>attached</goal>
    </goals>
  </execution>
  <execution>
    <id>copy-fields-conf-file</id>
    <phase>package</phase>
    <goals>
      <goal>attached</goal>
    </goals>
  </execution>
</executions>
<configuration>
  <descriptors>
    <descriptor>src/main/assembly/kit.xml</descriptor>
  </descriptors>
  <descriptorRefs>
    <descriptorRef>jar-with-dependencies</descriptorRef>
  </descriptorRefs>
  <archive>
    <manifestEntries>
      <build-number>${build-number}</build-number>
    </manifestEntries>
  </archive>
</configuration>

The problem was that the initial make-assembly execution was inheriting its configuration from the parent, which tells it there’s an assembly descriptor in src/main/assembly/kit.xml. That file DOES exist there, but inside it, it has:

<file>
  <source>build/maven/permissioning-auth-module/target/permissioning-auth-module-${project.version}-jar-with-dependencies.jar</source>
  <outputDirectory>/</outputDirectory>
</file>

Now, this is a problem because this jar hasn’t been created yet; that’s what we’re trying to create! So I commented this section out from the parent, only to find that it inherits it from the uber parent, and here’s why:

In the assembly plugin definition you have execution sections and configurations. You can tie a configuration to an execution phase by simply adding it inside the execution scope. If you don’t, and you leave it outside, then it becomes a general definition which is inherited by any child projects whenever they call the assembly plugin and don’t override it explicitly. This is usually fine, but not when you don’t want to declare an assembly descriptor at all. Because the assembly descriptor is defined in the parent(s), the plugin always goes looking for it, even when your execution doesn’t want it (which ours doesn’t). There’s a workaround: create a blank assembly descriptor and point your execution at that, but that’s not very elegant. The trick is to always tie your configurations to an execution phase, so the parent pom(s) end up looking something more like this:

<artifactId>maven-assembly-plugin</artifactId>
<version>2.2-beta-1</version>
<executions>
  <execution>
    <id>make-assembly</id>
    <phase>package</phase>
    <goals>
      <goal>single</goal>
    </goals>
    <configuration>
      <descriptors>
        <descriptor>src/assembly/kit.xml</descriptor>
      </descriptors>
      <finalName>${pom.artifactId}-${pom.version}</finalName>
      <outputDirectory>build/maven/${pom.artifactId}/target</outputDirectory>
      <workDirectory>build/maven/${pom.artifactId}/target/assembly/work</workDirectory>
    </configuration>
  </execution>
</executions>

And the module’s pom can now look like this:

<artifactId>maven-assembly-plugin</artifactId>
<executions>
  <execution>
    <id>pack-assembly</id>
    <phase>prepare-package</phase>
    <goals>
      <goal>single</goal>
    </goals>
    <configuration>
      <descriptors>
      </descriptors>
      <descriptorRefs>
        <descriptorRef>jar-with-dependencies</descriptorRef>
      </descriptorRefs>
      <archive>
        <manifestEntries>
          <build-number>${build-number}</build-number>
        </manifestEntries>
      </archive>
    </configuration>
  </execution>
  <execution>
    <id>copy-fields-conf-file</id>
    <phase>package</phase>
    <goals>
      <goal>attached</goal>
    </goals>
    <configuration>
      <descriptors>
        <descriptor>src/main/assembly/kit.xml</descriptor>
      </descriptors>
    </configuration>
  </execution>
</executions>

Headache over 🙂

Automate Configuration Management Using Tokens!

DevOps engineers are often tasked with the job of managing deployments of code to multiple environments. Each one may have different environmental settings such as server name/IP address, URL and subnet name, and different connection settings such as db connection strings and app layer connections, to name but a few. In all, there’s a truckload of differences. These differences, for convenience’s sake, are usually stored in config and ini files…

Usually they’re a nightmare (sorry, a challenge) to manage. But here’s a solution that has worked well for me…

  • Use “master” config files that have ALL environmental details replaced with tokens
  • Move copies of these files to folders denoting the environments they’ll be deployed to
  • Use a token replacement operation to replace the tokens
  • Deploy over the top of your code deployments, in doing so replacing the default config files

All the above can be automated very easily, and here’s how:
First off, make tokenised copies of your config files, so that environmental values are replaced with tokens, e.g.
change things like:

<add key="DB:Connection" value="Server=TestServer;Initial Catalog=TestDB;User id=Adminuser;password=pa55w0rd"/>

to

<add key="DB:Connection" value="Server=%DB_SERVER%;Initial Catalog=%DB_NAME%;User id=%DB_UID%;password=%DB_PWD%"/>

Then save a copy of these tokens and their associated values in a sed file. Each sed file should contain values specific to one environment, so that you’ll end up with one sed file per environment. These files act as lookups for the tokens and their values.

The syntax for these sed files is:

s/%TOKEN%/TokenValue/i

So here’s the contents of a test environment sed file (testing.sed):

s/%DB_SERVER%/TestServer/i
s/%DB_NAME%/TestDB/i
s/%DB_UID%/Adminuser/i
s/%DB_PWD%/pa55w0rd/i

And here’s live.sed:

s/%DB_SERVER%/LiveServer/i
s/%DB_NAME%/LiveDB/i
s/%DB_UID%/Adminuser/i
s/%DB_PWD%/Livepa55w0rd/i

Next up, we want a section in our build script which renames the web_Master.config files, copies them, and then runs the token replacement task… so here it is:

<target name="moveconfigs" description="renames configs, copies them to respective prep locations">
  <delete file="${channel.dir}\web.config" verbose="true" if="${file::exists(webconfig)}" />
  <move file="${channel.dir}\web_Master.config" tofile="${channel.dir}\web.config" if="${file::exists(webMasterConfig)}" />

  <mkdir dir="${build.ID.dir}\configs\TestArea" />
  <mkdir dir="${build.ID.dir}\configs\Live" />

  <copy todir="${build.ID.dir}\configs\TestArea\${channel.output.name}">
    <fileset basedir="${channel.dir}">
      <include name="**\*.config" />
      <exclude name="*.bak" />
    </fileset>
  </copy>

  <copy todir="${build.ID.dir}\configs\Live\${channel.output.name}">
    <fileset basedir="${channel.dir}">
      <include name="**\*.config" />
      <exclude name="*.bak" />
    </fileset>
  </copy>
</target>

<target name="EditConfigs" description="runs the token replacement by calling the sed script and passing the location of the tokenised configs as a parameter">
  <exec program="D:\compiled\call_testarea.cmd" commandline="${build.ID.dir}" />
  <exec program="D:\compiled\call_Live.cmd" commandline="${build.ID.dir}" />
</target>

As you can see, the last target calls a couple of cmd files, the first of which looks like this:

xfind "%*\TestArea" -iname *.* | xargs sed -i -f "D:\compiled\config\testing.sed"
xfind "%*\TestArea" -iname *.* | xargs sed -i s/$/\r/

The first line finds the config files, pipes them to sed and runs the sed script against them, editing each file in place. The second line handles line endings (appending a carriage return to each line) so that the files end up in a readable state on Windows. Essentially we’re telling sed to work recursively through the config files, replacing the tokens with the relevant values.

The advantage that this method has over using NAnt’s “replacetokens” is that we can process any number of files in any number of subdirectories with just one call, and the tokens and values are kept out of the build script itself. Also, the syntax means that the sed files are a lot smaller than a similarly functioning NAnt script would be.
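
For comparison, here’s roughly what one environment’s substitution would look like using NAnt’s replacetokens filter; note how every token/value pair has to live inside the build script itself (treat this as a sketch from memory rather than gospel):

<copy todir="${build.ID.dir}\configs\TestArea">
  <fileset basedir="${channel.dir}">
    <include name="**\*.config" />
  </fileset>
  <filterchain>
    <!-- Tokens are delimited with % to match the master config files above -->
    <replacetokens begintoken="%" endtoken="%">
      <token key="DB_SERVER" value="TestServer" />
      <token key="DB_NAME" value="TestDB" />
      <token key="DB_UID" value="Adminuser" />
      <token key="DB_PWD" value="pa55w0rd" />
    </replacetokens>
  </filterchain>
</copy>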

And that’s about it.

ClickOnce and Nant – The Plot Thickens

Turns out that these ClickOnce deployment builds aren’t as piss-easy as I once thought.
Turns out that the builds need to be customised for different environments, nothing new there, but (and here comes the catch) all the environmental settings have to be applied at BUILD TIME!!! Why? I hear you ask, and the answer is: because if you edit the config files post-build it changes some checksum jiggery pokery wotnot and then the thingumyjig goes and fails!!! Typical. (Basically the details you need to configure are held in files you cannot edit post-build, because the manifest file will do a checksum evaluation, see that someone has edited the file, and throw errors.)
So what I’ve decided to do is this….

  1. Copy all configurable files to separate environmentally named folders pre-build.
  2. Use SED to replace tokens for each environment in these files
  3. Copy 1 of them back to the build folder
  4. Compile
  5. Copy the output to the environmentally named folder

Repeat steps 3, 4 and 5 for all the environments.
And hey presto, this works.

<target name="changeconfigs">
  <!-- This bit sets up some folders where I'll do the prep work for each environment -->
  <delete dir="${config.dir}\${project.name}" verbose="true" if="${directory::exists(config.dir+'\'+project.name)}" />
  <mkdir dir="${config.dir}\${project.name}\TestArea" />
  <mkdir dir="${config.dir}\${project.name}\DevArea" />
  <mkdir dir="${config.dir}\${project.name}\Staging" />
  <mkdir dir="${config.dir}\${project.name}\UAT" />
  <mkdir dir="${config.dir}\${project.name}\Live" />

  <!-- This bit moves a tokenised config file to these folders -->
  <copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\TestArea\app.config" />
  <copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\DevArea\app.config" />
  <copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\Staging\app.config" />
  <copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\UAT\app.config" />
  <copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\Live\app.config" />

  <!-- This bit calls sed, which replaces the tokens with relevant values for each environment, more on sed another time! -->
  <exec program="${sedUAT.exe}" commandline="${sedParse.dir}" />
  <exec program="${sedTestArea.exe}" commandline="${sedParse.dir}" />
  <exec program="${sedDevArea.exe}" commandline="${sedParse.dir}" />
  <exec program="${sedStaging.exe}" commandline="${sedParse.dir}" />
  <exec program="${sedLive.exe}" commandline="${sedParse.dir}" />
</target>

<!-- This bit copies the edited file back to the build directory -->
<target name="prepTestArea">
  <delete file="${source.dir}\app.config" />
  <copy file="${config.dir}\${project.name}\TestArea\app.config" tofile="${source.dir}\app.config" />
</target>

<!-- This bit builds the ClickOnce project -->
<target name="publishTestArea">
  <msbuild project="${base.dir}\Proj1\ClickOnce.vbproj">
    <arg value="/t:Rebuild" />
    <arg value="/property:Configuration=Release" />
    <arg value="/p:ApplicationVersion=${version.num}" />
    <arg value="/p:InstallUrl=http://testarea/ClickOnce/" />
    <arg value="/t:publish" />
    <arg value="/p:UpdateRequired=true" />
    <arg value="/p:MinimumRequiredVersion=${version.num}" />
  </msbuild>
</target>

<!-- This bit copies the output to an environment-named folder, ready for deployment -->
<target name="copyfilesTestArea">
  <mkdir dir="${versioned.dir}\TestArea" />
  <copy todir="${versioned.dir}\TestArea" includeemptydirs="true">
    <fileset basedir="${base.dir}\Proj1\bin\Release\">
      <include name="**.publish\**\*.*" />
    </fileset>
  </copy>
</target>

REPEAT LAST 3 TARGETS FOR THE OTHER ENVIRONMENTS.

Now that wasn’t too hard, and it doesn’t take up too much extra time.
I suppose I’d better mention some of the arguments I’m passing in the MSBuild calls:

<arg value="/t:Rebuild" /> – I do this because it must rebuild the .deploy files each time, or you get the previous build’s environment settings left in there, because MSBuild decides to skip files that haven’t changed…

<arg value="/property:Configuration=Release" /> – Obvious

<arg value="/p:ApplicationVersion=${version.num}" /> – ClickOnce apps have a version stamped on them for various reasons, one of them being for use in automatic upgrades – people with InstallShield knowledge will know what a joke that can be!

<arg value="/p:InstallUrl=http://testarea/ClickOnce/" /> – A pretty important one this; it stamps the URL for the download onto the manifest or application file.

<arg value="/t:publish" /> – just calls the publish target; I do this because it’s what produces the setup.exe

<arg value="/p:UpdateRequired=true" />
<arg value="/p:MinimumRequiredVersion=${version.num}" /> – These 2 together mean the app will do a forced upgrade when a new version becomes available

So far, so good. My next trick will hopefully be how to get 2 installations working side-by-side. Currently it doesn’t work because one will overwrite the other. I’m working on it okay!!??