Ant script to delete .svn dirs

I’ve used this script a good few times, so I’m writing it down here for safekeeping!
It’s just a simple Ant script for deleting the .svn folders from within a folder structure. There’s an example of how to do the same thing on the Ant website, but at the moment it’s wrong (I think they’re missing a slash after the .svn).

So here’s the target:

<target name="delete">
    <delete includeemptydirs="true">
        <fileset dir="C:\James.Betteley\Projects\" defaultexcludes="false" includes="**/.svn/"/>
    </delete>
</target>
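If you find yourself running it against different folders, the hard-coded path can become a property and be overridden from the command line (root.dir is just a name I’ve made up here):

<target name="delete">
    <delete includeemptydirs="true">
        <fileset dir="${root.dir}" defaultexcludes="false" includes="**/.svn/"/>
    </delete>
</target>

Then it’s just ant delete -Droot.dir=C:\some\other\project.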


A great way to corrupt a .war file

So, yesterday I had an interesting issue with a corrupt .war file which was producing all sorts of errors when I tried to extract it, for instance:

BUILD FAILED
java.lang.RuntimeException: data starting at 0 is in unknown format

When I manually extracted the war file, or inspected it (using “unzip -T”), it gave these errors:

error:  invalid compressed data to inflate

It turns out I was corrupting it by running a filter on it at an earlier stage in the build, a bit like this:

<copy todir="${release.dir}/${app.name}/tomcat/" includeEmptyDirs="false">
    <fileset dir="${build.dir}/${app.name}/tomcat/"/>
    <!-- this filterset is the culprit: token filtering mangles binary files like the .war -->
    <filterset>
        <filter token="SERVERNAME" value="${dest}"/>
        <filter token="DBSERVER" value="${db.server}"/>
    </filterset>
</copy>

So basically I had to make sure that the war file was copied separately, and then the other files (which I wanted to run the filter on) were copied afterwards. Interesting…
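For the record, here’s roughly what the fix looked like (the include/exclude patterns are illustrative): copy the binaries over untouched first, then run the filtered copy on everything else:

<!-- 1) copy the war (and any other binaries) verbatim: no filterset -->
<copy todir="${release.dir}/${app.name}/tomcat/" includeEmptyDirs="false">
    <fileset dir="${build.dir}/${app.name}/tomcat/" includes="**/*.war"/>
</copy>

<!-- 2) filtered copy for the text files, with the binaries excluded -->
<copy todir="${release.dir}/${app.name}/tomcat/" includeEmptyDirs="false">
    <fileset dir="${build.dir}/${app.name}/tomcat/" excludes="**/*.war"/>
    <filterset>
        <filter token="SERVERNAME" value="${dest}"/>
        <filter token="DBSERVER" value="${db.server}"/>
    </filterset>
</copy>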

Ant Unzip Issues

Let’s say I had a simple Ant script which looked a little something like this:

<project name="unzipstuff" default="unzip">
    <target name="unzip">
        <unzip src="/home/webapps/edwinstarr.war" dest="/home/webapps/edwin/starr/"/>
    </target>
</project>

Why oh why oh why oh why (etc and so on) would it fail with the following error?

BUILD FAILED
java.lang.RuntimeException: data starting at 0 is in unknown format
at org.apache.tools.zip.ZipEntry.setExtra(ZipEntry.java:272)
at org.apache.tools.zip.ZipFile.resolveLocalFileHeaderData(ZipFile.java:454)
at org.apache.tools.zip.ZipFile.<init>(ZipFile.java:153)

Is the .war file screwed up? If so, why does it extract ok if I do it manually? I wasted approximately 4 hours of my life on this before embarking on a convoluted and silly workaround which is leaving me feeling defeated.

Maven sites for failed builds

I’m using Bamboo and Maven to build our code, and Cobertura as the unit test coverage checking tool. I’ve set up the builds to fail if there’s less than, say, 70% unit test coverage. However, I still want the maven site to be created so that I can go around waving it in people’s faces. The trouble is, if there’s less than 70% coverage, the build fails and therefore no cobertura report is created in the maven site! Gah!
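For context, the coverage gate itself is just the cobertura-maven-plugin’s check goal bound into the build, along these lines (the rates shown here are illustrative):

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>cobertura-maven-plugin</artifactId>
    <configuration>
        <check>
            <!-- fail the build if coverage drops below these rates -->
            <totalLineRate>70</totalLineRate>
            <totalBranchRate>70</totalBranchRate>
        </check>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>clean</goal>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>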

The good thing about the maven sites is that they’re easy for whole project teams to understand, not just devs and other technical people. If a build fails because there’s less than 70% unit test coverage, I currently have to look in the Bamboo build log to see exactly why it failed (I know I also get the emails, but there are way too many of them to be manageable).

In short – is there a way of producing the cobertura code coverage reports even if the build fails?

Build Reports For the Project Team

Reading through Jez Humble and David Farley’s new(ish) book “Continuous Delivery” at the moment. It’s just great to read a book where the authors are really speaking your language. The concepts are simple and correct. There are also a few new ideas in there that I’ve not thought of before. But most of all I like the way it gets me thinking about how to implement some of the concepts in a “challenging” atmosphere. Not all companies understand the value of Continuous Integration, let alone Continuous Deployment/Delivery, but this book lays out the concepts in such a simple, straightforward way that it seems hard to believe that anyone would not see the advantages in them.

I picked up on one idea which I’d like to start implementing ASAP – and that is to provide feedback to project teams about the state of the builds and environments involved in that project, throughout the whole project lifecycle. I think that might help the rest of the project team understand the importance of delivery, by drawing attention to and raising awareness of the builds and the deployment environments. I can well imagine that some project team members won’t even know what they’re looking at when they’re presented with a build report, but there’s no reason for this. Everyone on the project should be interested in the build reports, they should have a vested interest in the quality of the code and the state of the deployment environment. These are real, tangible things, not abstract concepts, so they should be presented as real, tangible things for everyone to see.

Best Practices for Build and Release Management Part 3: Making software deployments rapid, reliable and repeatable

When we do our deployments, it’s usually at that point in the project when everyone in the world is eager for something to be delivered. With all eyes on you, the last thing you want is a complicated, arduous, risk-filled deployment task. What you need is a rapid, reliable and repeatable task that you have the utmost faith in.

The simple solution, in a nutshell, is to automate deployments. This is easier said than done, of course, but it’s also not rocket science.

If I think of some of the deployments I’ve done in the past and break them down into their constituent manual steps, I might come up with something along these lines:

  1. Download a tar ball from a repository
  2. Extract the tar and move some files
  3. Make changes to the config files
  4. Add some third party files (maybe tomcat for instance)
  5. Tar it back up
  6. Prep target servers by removing the existing version
  7. Send the new version to each server
  8. Extract it on each host
  9. Start the application
  10. Test that it works

That’s quite a few steps, and in reality each one of these steps would probably consist of several others.

These tasks might not seem risk-filled and arduous but in actual fact they are. If all these steps were manual, then there’d be ample opportunity for human error. Perhaps the downloading of the tar file was incomplete – we’ve introduced a machine error that might not be caught until step 10!

What about the configurations? We might have different configurations for each locale, and in a manual process we’d be editing each one by hand. The opportunity for human error multiplies, and mistakes in config files can be costly.

And what about the sheer amount of time it would take if we tried to do these tasks ourselves? I know I’d rather be getting on with something else more constructive!

So let’s see what we can do about making this deployment rapid, reliable and repeatable.

In actual fact we can easily automate all these steps.
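At the top level, steps 1 to 5 could hang together as a chain of Ant targets, something like this (get_tar and zip are fleshed out below; extract, configure and add_supportapps are placeholder names):

<!-- skeleton only: the three middle target names are placeholders -->
<target name="prepare_release" depends="get_tar, extract, configure, add_supportapps, zip">
    <echo>Release package ready in ${release.dir}</echo>
</target>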

We can write a script to download the tar file and then check it against an md5 checksum. An Ant script could do this for us nicely, and hey presto, we’ve removed the risk of that machine error.

Here’s an Ant code snippet for getting a tar file from a maven repository (handy if you use Maven ;-)); the md5 check half follows just below:

<target name="get_tar">
    <echo>Getting the distribution from ${sourcehost}</echo>
    <scp file="${sourcefile}" todir="${tars.dir}/${app.name}/${filename}" trust="true" keyfile="${key}" passphrase=" "/>
</target>
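And the md5 half. Here’s a minimal sketch (the verify_md5 target name is mine, and it assumes the .md5 file Maven publishes alongside every artifact has been fetched next to the tar):

<target name="verify_md5" depends="get_tar">
    <!-- compares the tar against the digest in ${filename}.md5 sitting next to it -->
    <checksum file="${tars.dir}/${app.name}/${filename}" algorithm="MD5" fileext=".md5" verifyProperty="md5.ok"/>
    <fail message="md5 check failed for ${filename}">
        <condition>
            <isfalse value="${md5.ok}"/>
        </condition>
    </fail>
</target>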

Next step: extracting the file and moving some files about. Really we should look to minimise the amount of file moving we need to do; if the application isn’t delivered in exactly the right structure for our production system, that should be addressed earlier in the process. Go back to the developer/vendor and tell them how you expect the delivery to look. If that isn’t an option, I’ve again used Ant tasks to get my releases into the state I want them in. Here’s an example:

<!-- copy the third party files (the jre/jdk in this case) into the release -->
<copy todir="${release.dir}/supportapps/java/${jre_jdk.version}" includeEmptyDirs="false">
    <fileset dir="${supportapps.dir}/${jre_jdk.version}"/>
</copy>

<!-- copy the application config files into the right directory -->
<copy todir="${release.dir}/config" overwrite="true" includeEmptyDirs="false">
    <fileset dir="${config.template.dir}/"/>
    <filterset>
        <filter token="VERSION" value="${version.num}"/>
        <filter token="SERVERNAME" value="${destination.name}"/>
        <filter token="DBNAME" value="${dbname}"/>
        <filter token="UID" value="${username}"/>
    </filterset>
</copy>

As you can see, I’ve also made some changes to the config files here while copying them from one directory to another. In theory, the only difference between an application deployed on a test environment and the same application deployed on production should be the config files. In reality we also have databases which can affect the functionality of our system, but we shall leave that to one side for the moment. There are numerous ways of managing changes to config files; one method (using token replacement) is covered here. The important thing is that this step is carefully managed. I would always prefer to automate it and write a test script to check that the configs look like I expect them to, rather than ever get involved in manually updating them; the risk of error is simply too high.

What I like to do is store all the configs in source control (I usually use svn), and then either include them in the build (so that when you actually get a release, the configs are already in there) or get them from the tag/branch. Either way, I like them to be tokenised, and then I use the method shown above to replace the tokens with proper values. I like to do this at deploy time, because at that point you know the destination hosts you’re deploying to, and can therefore retrieve the right corresponding values to replace the tokens. I also like the practice of keeping the values in a db and simply pulling them out at deploy time. There are obviously a million ways you can do this. You can even embed it in the Ant deploy script, like so:

<target name="getDbName">
    <property name="temp_sql" value="${temp.dir}/getdbname.sql"/>

    <!-- write the lookup query out to a temporary sql file -->
    <echo file="${temp_sql}">select dbname from configuration where servername = '${destination.name}'</echo>

    <exec executable="${psql_exec}" failonerror="false" outputproperty="dbname.tmp">
        <env key="PGPASSWORD" value="${dbpasswd}"/>
        <arg line="-h ${dbsrvname}"/>
        <arg line="-U ${dbusr}"/>
        <arg line="-d ${dbname}"/>
        <arg line="-f ${temp_sql} -t"/>
    </exec>

    <!-- the query result is captured in dbname.tmp -->
    <echo>${dbname.tmp}</echo>
</target>

The package is now “ready” to copy over to the destination host: the config files are correct for the destination we’re pushing to, the package structure is correct, and any third party files have been included (see jre/jdk above). I tend to tar or zip up the package, simply because it makes it smaller, and this can be useful if you’re copying the release package to another datacentre and bandwidth is unpredictable. Of course, this step is entirely optional. Anyway, in keeping with the previous examples, here’s an Ant snippet:

<target name="zip">
    <zip destfile="${release.dir}/release.zip" basedir="${release.dir}" update="true"/>
</target>

As simple as that. We’re now ready to move on and automate steps 6-10, which is in the next post!

Fun with FindBugs

We’ve just moved to a new master pom file in an effort to make our lives a bit easier, and to allow certain builds to carry on using the old master pom file, which was basically quite rubbish. You see, the old master pom just defined a load of plugins, mostly in the plugin management section, so they had to be referenced by the application poms anyway. The idea with the new master pom is that it enforces the use of certain plugins, and ALSO enforces certain standards for builds to pass: for instance, we included a cobertura coverage rate of 80% and made the builds fail if there were any findbugs issues. It sounded like a good idea at the time. So, we put these plugin definitions directly into the build section of the pom, like so:

<build>
    <plugins>
        <plugin>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>1.6</source>
                <target>1.6</target>
            </configuration>
        </plugin>

        <plugin>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.4</version>
            <configuration>
                <testClassesDirectory>
                    build/maven/${artifactId}/target/test-classes
                </testClassesDirectory>
            </configuration>
        </plugin>

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>cobertura-maven-plugin</artifactId>
            <version>2.4</version>
            <configuration>
                <formats>
                    <format>html</format>
                    <format>xml</format>
                </formats>
                <check>
                    <totalBranchRate>54</totalBranchRate>
                    <totalLineRate>75</totalLineRate>
                </check>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <goal>clean</goal>
                        <goal>check</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-pmd-plugin</artifactId>
            <version>2.5</version>
            <executions>
                <execution>
                    <goals>
                        <goal>check</goal>
                        <goal>cpd-check</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <linkXref>true</linkXref>
                <targetJdk>1.6</targetJdk>
                <sourceEncoding>utf-8</sourceEncoding>
                <failOnViolation>false</failOnViolation>
                <outputDirectory>build/maven/${pom.artifactId}/target/pmd-reports</outputDirectory>
            </configuration>
        </plugin>

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>findbugs-maven-plugin</artifactId>
            <version>2.3.1</version>
            <!-- NOT USING THIS YET
            <executions>
                <execution>
                    <goals>
                        <goal>check</goal>
                    </goals>
                </execution>
            </executions>
            -->
        </plugin>
    </plugins>
</build>

The first thing I had to do (as you can see) was to disable the check goal, because it was causing findbugs to fail pretty much every build. The next thing was to remove the whole of the following configuration section from the findbugs plugin:

<configuration>
    <threshold>High</threshold>
    <effort>Max</effort>
    <xmlOutput>true</xmlOutput>
    <xmlOutputDirectory>build/maven/${artifactId}/target/site</xmlOutputDirectory>
</configuration>

I’ve kept this section in the reporting section though.
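For reference, here’s roughly how that looks sitting in the reporting section (reconstructed from the fragments above, with effort already dropped to Default, for reasons explained below):

<reporting>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>findbugs-maven-plugin</artifactId>
            <version>2.3.1</version>
            <configuration>
                <threshold>High</threshold>
                <effort>Default</effort>
                <xmlOutput>true</xmlOutput>
                <xmlOutputDirectory>build/maven/${artifactId}/target/site</xmlOutputDirectory>
            </configuration>
        </plugin>
    </plugins>
</reporting>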

The next thing I got was this error:

03-Nov-2010 09:53:03 [java] Java Result: 1
03-Nov-2010 09:53:03 [Fatal Error] findbugsTemp.xml:1:1: Premature end of file.
03-Nov-2010 09:53:03 [INFO] ------------------------------------------------------------------------
03-Nov-2010 09:53:03 [ERROR] FATAL ERROR
03-Nov-2010 09:53:03 [INFO] ------------------------------------------------------------------------
03-Nov-2010 09:53:03 [INFO] Premature end of file.
03-Nov-2010 09:53:03 [INFO] ------------------------------------------------------------------------
03-Nov-2010 09:53:03 [INFO] Trace
03-Nov-2010 09:53:03 org.xml.sax.SAXParseException: Premature end of file.

and that was fixed by changing this bit of the findbugs plugin configuration:

<effort>Max</effort>

to this:

<effort>Default</effort>

So, in short, using “Max” gives us Java out-of-memory exceptions, which is not exactly convenient. Not sure how to fix this though.
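If I ever get back to it, the first thing I’d try (untested, so treat the details as assumptions) is giving the FindBugs process more heap, either by bumping MAVEN_OPTS or via the plugin’s maxHeap setting, which as far as I know takes a value in MB:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>findbugs-maven-plugin</artifactId>
    <version>2.3.1</version>
    <configuration>
        <effort>Max</effort>
        <!-- untested assumption: more heap for the forked FindBugs process -->
        <maxHeap>1024</maxHeap>
    </configuration>
</plugin>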