Nant NUnit and FxCop Build Script

Here’s a nant script I’ve knocked up which does the following:

  • Compiles the solution in debug
  • Moves the Unit test dlls into a single common folder
  • Applies Unit tests
  • Generates reports
  • Runs an FxCop analysis
  • Outputs the FxCop analysis to an xml file

I've used this build script in a continuous integration system that doesn't actually publish any code. The purpose of this system is to run the unit tests and the code analysis quickly so that feedback can be obtained as early as possible. I've reused most of this script in the nightly build, which actually deploys the resulting code to a test server. The main difference is that the build is done in release mode in that instance. Anyway, here's the script:

<?xml version="1.0" encoding="utf-8"?>
<project name="Unit tests" default="run">

  <property name="namespace" value="MyProjectName" />
  <property name="test.dir" value=".\UNITTESTS" />
  <property name="nant.settings.currentframework" value="net-2.0" overwrite="false" />
  <property name="output.dir" value="c:\output" overwrite="false" />
  <!-- fxcop props -->
  <property name="fxcop.basedir" value="C:\FxCop" />
  <property name="fxcop.executable" value="${fxcop.basedir}\fxcopcmd.exe" overwrite="false" />
  <property name="fxcop.template" value="${fxcop.basedir}\Template.FXCop" overwrite="false" />
  <property name="fxcop.out" value="${output.dir}\${namespace}.fxCop.xml" overwrite="false" />

  <target name="compile">
    <msbuild project="${namespace}.sln">
      <arg value="/property:Configuration=Debug" />
    </msbuild>
  </target>

  <target name="move" description="moves the unit test dlls into a single common folder">
    <mkdir dir="${test.dir}" />
    <copy todir="${test.dir}" includeemptydirs="false" flatten="true">
      <fileset basedir="..\">
        <include name="**\*UnitTests.dll" />
      </fileset>
    </copy>
  </target>

  <target name="test" depends="compile" description="Apply unit tests">
    <property name="nant.onfailure" value="fail.test" />

    <nunit2>
      <formatter type="Xml" usefile="true" extension=".xml" outputdir="${output.dir}" />
      <test>
        <assemblies basedir="${test.dir}">
          <include name="*UnitTests.dll" />
        </assemblies>
      </test>
    </nunit2>

    <nunit2report out="${output.dir}\${namespace}.html">
      <fileset>
        <include name="${output.dir}\*.dll-results.xml" />
      </fileset>
    </nunit2report>
    <property name="nant.onfailure" value="fail" />
  </target>

  <target name="fail.test">
    <nunit2report out="${output.dir}\${namespace}.html">
      <fileset>
        <include name="${output.dir}\*.dll-results.xml" />
      </fileset>
    </nunit2report>
  </target>

  <target name="cop">
    <echo message="Running FxCop Analysis" />
    <exec program="${fxcop.executable}" commandline="/p:${fxcop.template} /f:${test.dir}\*.dll /o:${fxcop.out} /s" failonerror="false" />
    <!--
    <echo message="Logging Analysis Results" />
    <fxcoplogger dbserver="myDbServer" dbname="CodeAnalysis" trustedconnection="true" />
    -->
  </target>

  <target name="run" depends="compile, move, test, cop" />

</project>

One thing you might note is that there appears to be a bit of replication of the same code, namely the nunit2report. But if you follow the logic, you'll see the reason for it. We don't want the build to stop processing if one of the unit tests fails; we want to capture the results, carry on, and present them in the build report. So what you do is add this line:

<property name="nant.onfailure" value="fail.test"/>

This tells the script to go to the fail.test target only if the build fails. This target generates the nunit2report, thus:

<target name="fail.test">
  <nunit2report out="${output.dir}\${namespace}.html">
    <fileset>
      <include name="${output.dir}\*.dll-results.xml" />
    </fileset>
  </nunit2report>
</target>

However, I still want to see the nunit2report even if the build doesn't fail, so I have put a copy of the same report step inside the test target. This means that whether the unit tests pass or fail I get my report, and if they pass, the build process carries on to do an FxCop analysis.

I think the outline of this system is covered in a book called Expert .NET Delivery by Marc Holmes, which is a great resource for getting up and running with NAnt. It was the first book I bought when I started using NAnt, and I recall it covered the fundamentals very well. It's a pity the title doesn't have "Continuous Integration" in it, because it's actually a good place to start when setting up a .NET based CI system too.

Speaking of CI – if you use CruiseControl.NET there are handy transforms for getting your FxCop and NUnit reports published alongside your build on the CI dashboard. Alternatively you can hack the CruiseControl.NET dashboard and provide a customised link to the nunit2report's output html – it all depends on which format you prefer. I can't decide, so I have both!
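If you go down the dashboard route, the trick is to merge the XML that NUnit and FxCop produce into the CCNet build log so the dashboard transforms can pick it up. Here's a minimal sketch of the publishers section of a ccnet.config project (the file paths are assumptions matching the output.dir used above):

<publishers>
  <!-- merge the NUnit results and FxCop output into the CCNet build log -->
  <merge>
    <files>
      <file>c:\output\*-results.xml</file>
      <file>c:\output\MyProjectName.fxCop.xml</file>
    </files>
  </merge>
  <xmllogger />
</publishers>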


ClickOnce and Nant – The Plot Thickens

Turns out that these ClickOnce deployment builds aren't as piss-easy as I once thought.
Turns out that the builds need to be customised for different environments, nothing new there, but (and here comes the catch) all the environmental settings have to be applied at BUILD TIME!!! Why? I hear you ask, and the answer is: because if you edit the config files post-build it changes some checksum jiggery pokery wotnot and then the thingumyjig goes and fails!!! Typical. (Basically the details you need to configure are held in files you cannot edit post-build, because the manifest file will do a checksum evaluation, see that someone has edited the file, and throw errors.)
So what I’ve decided to do is this….

  1. Copy all configurable files to separate, environmentally named folders pre-build.
  2. Use SED to replace tokens for each environment in these files
  3. Copy 1 of them back to the build folder
  4. Compile
  5. Copy the output to the environmentally named folder

Redo steps 3, 4 and 5 for all the environments.
And hey presto, this works.

<target name="changeconfigs">
  <!-- This bit sets up some folders where I'll do the prep work for each environment -->
  <delete dir="${config.dir}\${project.name}" verbose="true" if="${directory::exists(config.dir + '\' + project.name)}" />
  <mkdir dir="${config.dir}\${project.name}\TestArea" />
  <mkdir dir="${config.dir}\${project.name}\DevArea" />
  <mkdir dir="${config.dir}\${project.name}\Staging" />
  <mkdir dir="${config.dir}\${project.name}\UAT" />
  <mkdir dir="${config.dir}\${project.name}\Live" />

  <!-- This bit moves a tokenised config file to these folders -->
  <copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\TestArea\app.config" />
  <copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\DevArea\app.config" />
  <copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\Staging\app.config" />
  <copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\UAT\app.config" />
  <copy file="${source.dir}\App_Master.config" tofile="${config.dir}\${project.name}\Live\app.config" />

  <!-- This bit calls sed, which replaces the tokens with relevant values for each environment, more on sed another time! -->
  <exec program="${sedUAT.exe}" commandline="${sedParse.dir}" />
  <exec program="${sedTestArea.exe}" commandline="${sedParse.dir}" />
  <exec program="${sedDevArea.exe}" commandline="${sedParse.dir}" />
  <exec program="${sedStaging.exe}" commandline="${sedParse.dir}" />
  <exec program="${sedLive.exe}" commandline="${sedParse.dir}" />
</target>

<!-- This bit copies the edited file back to the build directory -->
<target name="prepTestArea">
  <delete file="${source.dir}\app.config" />
  <copy file="${config.dir}\${project.name}\TestArea\app.config" tofile="${source.dir}\app.config" />
</target>

<!-- This bit builds the ClickOnce project -->
<target name="publishTestArea">
  <msbuild project="${base.dir}\Proj1\ClickOnce.vbproj">
    <arg value="/t:Rebuild" />
    <arg value="/property:Configuration=Release" />
    <arg value="/p:ApplicationVersion=${version.num}" />
    <arg value="/p:InstallUrl=http://testarea/ClickOnce/" />
    <arg value="/t:publish" />
    <arg value="/p:UpdateRequired=true" />
    <arg value="/p:MinimumRequiredVersion=${version.num}" />
  </msbuild>
</target>

<!-- This bit copies the output to an environment-named folder, ready for deployment -->
<target name="copyfilesTestArea">
  <mkdir dir="${versioned.dir}\TestArea" />
  <copy todir="${versioned.dir}\TestArea" includeemptydirs="true">
    <fileset basedir="${base.dir}\Proj1\bin\Release\">
      <include name="**.publish\**\*.*" />
    </fileset>
  </copy>
</target>

REPEAT LAST 3 TARGETS FOR THE OTHER ENVIRONMENTS.
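To tie the repetition together you can chain the per-environment targets off a single entry point. A minimal sketch, assuming Live targets called prepLive, publishLive and copyfilesLive that mirror the TestArea ones above:

<target name="publishAll"
        description="preps, builds and stages the ClickOnce output for every environment"
        depends="changeconfigs, prepTestArea, publishTestArea, copyfilesTestArea, prepLive, publishLive, copyfilesLive" />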

Now that wasn’t too hard, and it doesn’t take up too much extra time.
I suppose I’d better mention some of the arguments I’m passing in the MSBuild calls:

<arg value="/t:Rebuild" /> – I do this because it must rebuild the .deploy files each time, or you get the previous build's environment settings left in there because MSBuild decides to skip files that haven't changed…

<arg value="/property:Configuration=Release"/> – Obvious.

<arg value="/p:ApplicationVersion=${version.num}"/> – ClickOnce apps have a version stamped on them for various reasons, one of them being for use in automatic upgrades – people with InstallShield knowledge will know what a joke that can be!

<arg value="/p:InstallUrl=http://testarea/ClickOnce/"/> – A pretty important one this, it stamps the URL for the download onto the manifest/application file.

<arg value="/t:publish"/> – just calls the publish target; I do this because this is what makes the setup.exe.

<arg value="/p:UpdateRequired=true"/>
<arg value="/p:MinimumRequiredVersion=${version.num}"/> – These two together mean the app will do a forced upgrade when a new version becomes available.

So far, so good. My next trick will hopefully be how to get 2 installations working side-by-side. Currently it doesn’t work because one will overwrite the other. I’m working on it okay!!??

Building ClickOnce Applications with NAnt

Since I don’t like to actually do any work, and would much rather automate everything I’m required to do, I decided to automate a ClickOnce application build, because doing it manually was taking me literally, er, seconds, and this is waaaaaay too much like hard work. So, I naturally turned to NAnt, which is so often the answer to all my deployment questions….The answer came in the form of using NAnt to call MSBuild and pass the publish target, along with the version number. So, this is what you need to do:

Add a property to your nant script containing your build number (you can get this from CruiseControl.Net if you’re using CCNet to do your builds)
<property name="version.num" value="1.2.3.4"/>
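If CCNet is kicking off the build, you don't even need to hard-code the number – a minimal sketch, assuming the standard CCNetLabel property that CruiseControl.NET passes into NAnt builds:

<!-- use the CCNet build label as the version number when it's available -->
<property name="version.num" value="${CCNetLabel}" if="${property::exists('CCNetLabel')}" overwrite="true" />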
Then just compile the project using NAnt’s MSBuild task, and call the publish target:

<target name="publish">
  <msbuild project="${base.dir}\ClickOnce.vbproj">
    <arg value="/property:Configuration=Release" />
    <arg value="/p:ApplicationVersion=${version.num}" />
    <arg value="/t:publish" />
  </msbuild>
</target>

The next thing you need to do is create or update the publish.htm file. What I've done for this is to take a copy of a previously generated publish.htm, and replace the occurrences of the application name and version number with tokens. Then in the NAnt script, I replace the tokens with the relevant application name and version number. I do this because the version number will change with each build, and rather than manually update it, which is much too complicated for me, I'd rather just automate it so that I can go back to sleep while it builds. I tokenised the application name for a much darker, more sinister reason that I'll maybe explain at another time, but the world's just not ready for that yet.
Anyway, here’s all that in NAntish:

<copy todir="${config.dir}\${project.name}">
  <fileset basedir=".">
    <include name="publish.htm" />
  </fileset>
  <filterchain>
    <replacetokens>
      <token key="VERSION" value="${version.num}" />
      <token key="APPNAME" value="${appname}" />
    </replacetokens>
  </filterchain>
</copy>

Automate Configuration Management Using Tokens!

Here’s the problem:

Your application has numerous config files, and the values in these config files differ on every server or every environment. You hate manually updating the values every time you deploy your applications to a new environment, because that takes up too much of your time and inevitably leads to costly mistakes.

Here’s the solution:

Automate it.

And here’s one way of doing it:

  • Use “master” config files that have ALL environmental details replaced with tokens
  • Move copies of these files to folders denoting the environments they’ll be deployed to
  • Use a token replacement operation to replace the tokens
  • Deploy over the top of your code deployments, in doing so replacing the default config files

All the above can be automated very easily, and here’s how:
First off, make tokenised copies of your config files, so that environmental values are replaced with tokens, e.g.
change things like:

<add key="DB:Connection" value="Server=TestServer;Initial Catalog=TestDB;User id=Adminuser;password=pa55w0rd"/>

to

<add key="DB:Connection" value="Server=%DB_SERVER%;Initial Catalog=%DB_NAME%;User id=%DB_UID%;password=%DB_PWD%"/>

Then save a copy of these tokens, and their associated values in a sed file. This sed file should contain values specific to one environment, so that you’ll end up with 1 sed file per environment. These files act as lookups for the tokens and their values.

The syntax for these sed files is:

s/%TOKEN%/TokenValue/i

So here's the contents of a test environment sed file (testing.sed):

s/%DB_SERVER%/TestServer/i
s/%DB_NAME%/TestDB/i
s/%DB_UID%/Adminuser/i
s/%DB_PWD%/pa55w0rd/i

And here’s live.sed:

s/%DB_SERVER%/LiveServer/i
s/%DB_NAME%/LiveDB/i
s/%DB_UID%/Adminuser/i
s/%DB_PWD%/Livepa55w0rd/i

Next up, we want a section in our build script which renames the web_master.config files, copies them, and then runs the token replacement task… so here it is:

<target name="moveconfigs" description="renames configs, copies them to respective prep locations">
  <delete file="${channel.dir}\web.config" verbose="true" if="${file::exists(webconfig)}" />
  <move file="${channel.dir}\web_Master.config" tofile="${channel.dir}\web.config" if="${file::exists(webMasterConfig)}" />
  <delete file="${channel.dir}\web.config" verbose="true" if="${file::exists(webconfig)}" />
  <move file="${channel.dir}\web_Master.config" tofile="${channel.dir}\web.config" if="${file::exists(webMasterConfig)}" />
  <mkdir dir="${build.ID.dir}\configs\TestArea" />
  <mkdir dir="${build.ID.dir}\configs\Live" />
  <copy todir="${build.ID.dir}\configs\TestArea\${channel.output.name}">
    <fileset basedir="${channel.dir}">
      <include name="**\*.config" />
      <exclude name="*.bak" />
    </fileset>
  </copy>
  <copy todir="${build.ID.dir}\configs\Live\${channel.output.name}">
    <fileset basedir="${channel.dir}">
      <include name="**\*.config" />
      <exclude name="*.bak" />
    </fileset>
  </copy>
</target>

<target name="EditConfigs" description="runs the token replacement by calling the sed script and passing the location of the tokenised configs as a parameter">
  <exec program="D:\compiled\call_testarea.cmd" commandline="${build.ID.dir}" />
  <exec program="D:\compiled\call_Live.cmd" commandline="${build.ID.dir}" />
</target>

As you can see, the last target calls a couple of cmd files, the first of which looks like this:

xfind "%*\TestArea" -iname *.* | xargs sed -i -f "D:\compiled\config\testing.sed"

xfind "%*\TestArea" -iname *.* | xargs sed -i s/$/\r/

This finds every config file under the folder, pipes the list to sed, runs the script file against each one and edits it in place. The second line appends a carriage return to the end of each line so the file ends up with Windows line endings and stays readable. Essentially we're telling sed to work recursively through the config files and replace the tokens with the relevant values.

The advantage this method has over using NAnt's "replacetokens" is that we can process any number of files in any number of subdirectories with a single call, and that the tokens and values are kept out of the build script. Also, the syntax means the sed files are a lot smaller than a similarly functioning NAnt script would be.

Of course, you could make this whole thing even more elegant by putting the token/value pairs in a database instead of in a sed file: simply pull them out of the db at build/deploy time and then do the substitution.

People sometimes say that this method doesn't work well if there are a large number of config files; they don't like the idea of maintaining a large number of "master" versions as well as standard code versions. To get around this, you can do away with the tokens and instead have the sed/replace look for the XML node and element and simply replace the value there – there are plenty of ways of doing this using XPath. Both approaches have their own advantages; I guess the decision of which one to go for comes down to how numerous your config files are.
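If you want to stay inside NAnt for the XPath route, the xmlpoke task will set whatever a given XPath expression points at. A minimal sketch, assuming the web.config and the DB:Connection appSetting shown earlier:

<target name="setconnstring">
  <!-- locate the DB:Connection appSetting by XPath and overwrite its value attribute -->
  <xmlpoke file="${channel.dir}\web.config"
           xpath="/configuration/appSettings/add[@key='DB:Connection']/@value"
           value="Server=TestServer;Initial Catalog=TestDB;User id=Adminuser;password=pa55w0rd" />
</target>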

Best Practices for Build and Release Management Part 2

Ok, as promised in Part 1, I’ll go into a bit more detail about each of the areas outlined previously, starting with…

The Build Process

This area, perhaps more than any other I'll be covering in this section, has benefited most from the introduction of some ultra handy tools. Back in the day, building/compiling software was fairly manual and could only be automated to a certain degree; makefiles and batch systems were about as good as it got, and even those relied on a LOT of planning and could quite often be a nightmare to manage.

These days though, the build phase is exceedingly well catered for and is now a very simple process, and what’s more, we can now get an awful lot more value out of this single area.

As I mentioned before, one of the aims of release management is to make software builds simple, quick and reliable. Tools such as Ant, NAnt (the .NET version of Ant), Maven, Rake and MSBuild help us on our path towards that goal in many ways. Ant, MSBuild and NAnt are very simple XML-based scripting languages which offer a wide-ranging level of control – for instance, you can build an entire solution with a single line of script, or you can individually compile each project and specify each dependency – it's up to you to decide what level of control you need. I believe that build scripts should be kept simple and easy to manage, so when dealing with NAnt and MSBuild for .NET solutions I like to build each project by calling a .proj file rather than specifically compiling each library. The .proj files should be constructed correctly and stored in source control. Each build should get the latest .proj file (and the rest of the code, including shared libraries – more on that later) and compile the project.
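For the whole-solution approach, a minimal sketch using NAnt's msbuild task (the solution name and configuration here are placeholders):

<target name="compile">
  <!-- one call builds every project in the solution, honouring the dependencies the solution already defines -->
  <msbuild project="MySolution.sln">
    <arg value="/property:Configuration=Release" />
  </msbuild>
</target>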

For Java projects, Ant and Maven are the most popular tools. Ant, like NAnt, gives the user a great deal of control, while Maven has less inherent flexibility and requires users to adhere to its conventions. However, both are equally good at helping us make our build simple, quick and reliable. Maven uses POM files to control how projects are built. Within these POM files a build engineer will define all the goals needed to compile the project. This might sound a little tedious, but the situation is made easier by the fact that POM files can inherit from master/parent POM files, reducing the amount of repetition and keeping your project build files smaller, cleaner and easier to manage. I would always recommend storing as much as possible in parent POM files, and as little as you can get away with in the project POMs.
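A minimal sketch of that inheritance (the groupIds, artifactIds and versions are placeholders): a parent POM holding the shared configuration, and a child POM that declares only what differs:

<!-- parent pom.xml: shared settings live here -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>company-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>
  <properties>
    <maven.compiler.source>1.6</maven.compiler.source>
    <maven.compiler.target>1.6</maven.compiler.target>
  </properties>
</project>

<!-- child pom.xml: inherits everything from the parent and adds only its own artifactId -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.example</groupId>
    <artifactId>company-parent</artifactId>
    <version>1.0.0</version>
  </parent>
  <artifactId>my-project</artifactId>
</project>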

One of the great improvements in software building in recent years has been the introduction of Continuous Integration. The most popular CI tools around are CruiseControl, CruiseControl.NET, Hudson and Bamboo. In their simplest forms, CI tools are basically just schedulers, and they essentially just kick off your build tools. However, that's just the tip of the iceberg, because these tools can do much, MUCH more than that – I'll explain more later, but for now I'll just say that they allow us to do our builds automatically, without the need for any human intervention. CI tools make it very easy for us to set up listeners to poll our source code repositories for changes, automatically kick off a build, and then send us an email to let us know how it went. It's very simple stuff indeed.
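For example, here's a minimal sketch of a CruiseControl.NET project block that polls a Subversion repository and kicks off a NAnt build whenever something changes (the paths, URLs and build file name are all placeholders):

<project name="MyProjectName">
  <triggers>
    <!-- poll source control every 60 seconds; build only if something changed -->
    <intervalTrigger seconds="60" buildCondition="IfModificationExists" />
  </triggers>
  <sourcecontrol type="svn">
    <trunkUrl>http://svnserver/myproject/trunk</trunkUrl>
    <workingDirectory>c:\builds\myproject</workingDirectory>
  </sourcecontrol>
  <tasks>
    <nant>
      <executable>c:\nant\bin\nant.exe</executable>
      <buildFile>unittests.build</buildFile>
      <targetList>
        <target>run</target>
      </targetList>
    </nant>
  </tasks>
  <publishers>
    <xmllogger />
  </publishers>
</project>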

So let’s take a look at what we’ve done with our build process so far:

  • We’ve moved away from manually building projects and started using simple build scripts, making the build process less onerous and not so open to human error. Reliability is on the up!
  • We’ve made our build scripts as simple as possible – no more 1000 line batch files for us! Our troubleshooting time has been significantly reduced.
  • We’ve moved away from using development UIs to make our builds – our builds are now more streamlined and faster.
  • We’ve introduced a Continuous Integration system to trigger our builds whenever a piece of code is committed – our builds are now automated.

So in summary, we’ve implemented some really simple steps and already our first goal is achieved – we’ve now got simple, quick and reliable builds. Time for a cup of tea!

Best Practices for Build and Release Management Part 1

Firstly, Release Management has been around for long enough for it to no longer mean what it used to mean. Release Management used to concentrate on the discipline of "creating a release of software", which generally involved the following key points:

  • How to create or build a reliable “release”
  • How to get that reliable release out into the wild

The sorts of issues that these key points in turn raised were things like:

  • How to reliably and repeatably “build” (compile) software
  • How to make software builds quicker
  • How to make software builds easier
  • How to package software builds (zips, .msi etc)

We used to spend our time working with make files, batch files and countless checklists, running manual builds, and then we’d painstakingly create installers or configure zip files to deploy our releases. And when things went wrong, they usually went seriously wrong, and repeating the build and release process could take days.

Since those bad old days, Release Management has come a long way. Lots of the old issues have been addressed by some exceedingly neat tools which have placed emphasis on automation and quality (I’m thinking Ant/Nant, Cruise Control, the Continuous Integration process, Hudson and loads more). But one other major thing has happened in the world of Release Management, and that’s ITIL.

ITIL has redefined the practice of Release Management as more of a planning and coordinating role, it even goes so far as to say Release Management involves communicating with customers and managing customer expectation. This is a million miles away from writing complex batch files, hundreds of lines long, to compile and deploy software to a QA environment! In an ITIL world, the issues listed earlier either don’t exist, or have been addressed already and are no longer a concern to a Release Manager.

So why does the ITIL version of Release Management differ so much from the real world job of a Release Manager?

Well, I would guess that the "build management" aspect is simply not considered part of release management, and that it should be covered somewhere else – but that's just my guess; I'm seeking some advice from ITIL about that right now.

What we’re left with now is a world where “Release Management” means one thing to one person, and something completely different to another. I’m from the old school of Release Management, I like to actually produce stuff. In a second I’ll outline what I consider to be the main roles and objectives of Release Management, and then later I’ll take each one and explain some ways that I’ve used for tackling them.

So, I like to think of Release Management as a practice which:

  • Helps make software builds simple, quick and reliable. This is achieved by employing the best tools for the job. This means understanding all the various build tools, seeing how they integrate with the systems that already exist in the workplace, and making an informed choice. There’s no way you’re going to make software builds easier, more reliable and repeatable by implementing a manual solution, so get to grips with the various build tools out there and make them work for you.
  • Helps make software deployments simple, quick, reliable and repeatable. Again, this is a bit like the above, but there are fewer tools to choose from. Manually deploying releases is painful and risky, and it also belongs in the dark ages and should be outlawed. There are still plenty of options and combinations of tools to make this task fully automated.
  • Helps take care of configuration management. When I say configuration management, I’m talking about all those issues with how to make a software release look, feel and behave the same from one environment to the next. For me this falls into Release Management because Release Management, unlike development, QA or Operations, has a direct involvement in every environment along the way to releasing into the wild. It’s pointless asking the development team to tackle the issues of configurations between environments when they have very little or no visibility of the production environment, and besides, their time would be much better spent making that button look cooler because that’s what the business has asked for!
  • Helps drive software quality. Thanks to the Continuous Integration process, and the tools that have been built around it, it’s now possible for us to build software every single time a piece of code is checked in, run a suite of unit tests, analyse the code for lazy programming and report on the amount of test coverage a project has. And that’s just the start. There are tools out there for doing much much more than this, and I’ll go into more detail about this later.
  • Helps optimise development and QA time. By giving the dev team feedback on the quality of their code and telling them where they're going right and going wrong, we're helping them target their efforts. Furthermore, if we're busy providing these solutions for them – doing the builds, configurations and releases – the developers can get busy doing the stuff they're skilled at doing. For the QA team, we're finding bugs and failing releases before the releases even get to them! (Of course, if we find too many bugs and fail a release, that release won't even get to QA.)
  • Speeds up time to market. Ok, so we've made builds quicker, easier and more reliable, we've sped up the process of fine-tuning code quality, we've spotted bugs before a round of QA has even begun, and we've made the process of releasing our software out into the wild quicker and simpler. Basically we've saved a heap of time in dev, QA and Operations, and so our new, higher quality software can be released efficiently into the wild. Happy days!

As promised earlier, I’ll spend a while giving a few examples of how to actually implement what I’ve broadly outlined above. I’ll try to be generic where I can, but I’ll include specifics for some examples. All that and more in Part 2!