Archive

Archive for the ‘Uncategorized’ Category

Changes to Scrum

Ken Schwaber and Jeff Sutherland, the original guys who came up with the whole concept of Scrum back in about 1995, have recently posted a video on the interwebs, explaining some changes to the Scrum model based on their experiences over the last few years. The video can be found here.

If you don’t have the time to watch the video, here’s my summary of the bits I found most interesting:

1. We should do more prep before our sprint planning, so that all stories are sufficiently prepared before the sprint planning session. This has come about because sprint planning sessions often drag on for hours. They have suggested having a “ready” status for backlog items that are ready to be discussed in the planning session.

2. We should always have a sprint goal, and during our daily stand-ups we should talk about how we are helping the team progress towards our sprint goal.

3. We should talk about “value” in our sprint reviews. With hindsight, did we deliver as much value as we could have? If not, what could we do next time to ensure we deliver greater value?


Team Transformation for Continuous Delivery with Chris O’Dell – as it happened

March 19, 2013

By James Betteley

Preamble:

No time for that, I’m seriously late. I was leaving the office just as someone said “James, before you go…” and that was the end of any hopes I had of getting here on time.

It’s 6:40pm, I’ve finally got my sh1t together, and we’re off! In fact that’s a lie, they were off ages ago. I’m so late there aren’t even any seats, so I’m sat in the corner at the back, like a proper Billy No-mates.

6:41pm: Chris is talking about big balls of mud and how they’ve gone from that, to a much smaller ball (no mention of mud this time).

6:42: Slides are going by quicker than I can type! Chris is talking about the importance of moving away from a “blame culture”. I personally hate blame culture, I think it was the French who invented it. Arf! (sorry).

6:45pm: Ok, there’s a slide on how to get to Continuous Delivery, I’m going to pay attention to this one…

It says you need:

  • Cross-functional, product-focused teams
  • A focus on technical debt
  • Sitting the team close to their clients
  • Actively removing blame culture
  • A focus on self-improvement
  • Radiating metrics
  • Collecting metrics on work in progress

After a quick coffee break it’s time to interrogate the suspects – It’s Q&A time!

7:01pm: How do you share “commonality of functionality”? asks someone who likes words ending in “ality”. Service-oriented architecture was apparently a big help, responds someone from the 7digital posse (they are a posse, by the way – Chris has been joined by some 7digital reinforcements).

7:02pm: The next question is about metrics and how they collect them. Apparently they’re working on a logging system, but I’m guessing they also use CI and some live reporting tools which I probably missed in the earlier slides! Oops.

7:05pm: “What didn’t work?” asks someone (my favorite question so far – I’m giving it a 7.5 out of ten). Trying to patch things up didn’t work. The dependency chain was a pain. Acceptance Tests were a pain (haha!). Using UI stuff and a shared DB caused Acceptance Test issues, they say. I nod in agreement.

7:07pm: Someone says something about keeping environments the same being a challenge. They’re meant to be the same??? Where’s the fun in that?

7:10pm: A question on blame culture is next up, namely “How do you get rid of it?” By not telling on people! Also, having a dev manager who protects from above is handy.

It’s how you respond [to blame] that’s important, as that’s what sets the tone.

7:11pm: How did you make the culture change? asks someone who wants to know how they made the culture change. It’s another good question, and one I’d really like to learn from. It’s all very well having a great culture, and there’s no denying its importance, but how do you make a culture change if it’s not ideal to start with? Sadly the answer isn’t straightforward. The posse reply with things like “Adopt Agile principles”, “tech manifesto” (which sounds cool), “self-organising teams”, “small steps” and “leading by example”. Followed up with “hire well”, “you need champions!” and “do workshops”. Also, “the CTO is pretty cool”. Hmmmm, so no “click here to change your culture” button then?

7:16pm: The next question is a corker. It went a bit like: “Usually have to change architecture of system to support Continuous Delivery, but also sometimes the architecture of organisation as well. Did this happen?” That’s the winner so far. “No” comes the answer. Damn. At 7digital there’s a focus on lack of hierarchy, so quite a flat structure. Not much change was needed then, obviously. I think the word “culture” came up as well, and not for the first time.

7:18pm: How have they managed to integrate Ops, asks the next person, clearly fresh from devopsdays. “We’re still learning” is the honest-sounding response. Not that they all haven’t been honest-sounding. They’ve started assigning Ops people to “the team”, by which I assume they mean the project team.

7:20pm: Someone wants to know if they had a shared goal between tech/ops and dev? I think the answer is yes. Basically Rob became head of both ops and dev, which helped. He also created a tech manifesto and is toying with the idea of putting up some posters. When I was a kid I had a poster of Airwolf in my bedroom. Not sure if that’s going to help anyone though.

7:21pm: “Is there a QA on the team?” is the next question. Yarp (I’m paraphrasing). But the QA person is more of a coach – everyone is expected to do it, but they’re there to lead. No separate dev manager or QA manager – everyone’s one great big team (aaahhhh).

7:23pm: Somebody has asked how they handle support, and whether there’s a support team. I think the person who responds says there’s a “Systems team”, who get a text or call in the middle of the night. It seems a bit cruel that they wait until the middle of the night to text them but what do I know? Apparently the devs may also get involved, so that’s ok. There’s an on-call team, “but this is an area for improvement” they confess. Mainly it’s a case of “call someone!”, which I personally think is pretty good. But they do stress how there’s a focus on monitoring so that they can catch as many issues as possible before they become, er, issues, if you catch my drift. They said it much better than I can write it.

7:26pm: “How frequently did you deploy your big ball of mud compared to how frequently you do it now?” And that question goes to contestant number 2. It used to be one release every 3 months, but they don’t measure how frequently they can deploy stuff any more because it’s that frequent (that’s just showing off). Improving the deploy mechanism was all-important. And changing the culture to shift to more frequent releases. That word “culture” again.

7:30pm: This question sounds like a plant: “How do you have time to test stuff if you deploy so often?” asks some cheeky 7digital employee hidden in the audience. I’m joking of course, it’s a nice question because it leads to a well executed answer: Chris basically explains that because they deploy so often, their releases are very small. Also, they’ve automated the hell out of everything.

7:31pm: Dave asks a really good question but I’m far too slow to keep up! It included the phrase “separation of concerns” so was probably too complicated for me to understand anyway.

7:40pm: There’s a question about schema changes. I reckon the answer will include the word “culture”. It does. Somehow.

If there was a word cloud for this Q&A session then “culture” would dwarf all the others. Something tells me that “culture shift” is important.

7:45pm: “How do you manage project accounting?” “We don’t” – No mention of culture!

7:46pm: Someone asks “If there’s a production issue, like an outage, who takes ownership?”. Nice one, who indeed does take ownership? “Everyone, we have a culture of shared ownership”. Gah! It’s all about culture!

7:47pm: “How do you decide what projects get green lighted?” asks some poor innocent from the back of the room (and no, it wasn’t me). Apparently this has nothing to do with Continuous Delivery (and all the other questions have?) and there’s nobody from the product team here so that question lands on stony ground.

7:52pm: Banos treads a fine line by asking a question dangerously close to the time when we’re meant to be heading to the pub, but just about gets away with it: “What CI system do you use?” he asks, and the answer is (drum roll…..) TeamCity! Actually there was no drum roll, I made that up. Then interestingly they say that everyone is in charge of looking after TeamCity and that they just trust each other! Craziness!

8:02pm: “I was told we finish at 8,” says Chris, and she’s bloody well right – there’s a pub nearby and some of us are thirsty.

So, in conclusion, 7digital know their Continuous Delivery from their TDD, and “culture” is the word of the evening. I’m off for beer with Banos, and the rest of the London Continuous Delivery gang!

Keep an eye out for #londoncd on Twitter for news of the next London Continuous Delivery meetup, or go to the London CD website. Also follow the likes of @matthewpskelton, @AgileSteveSmith, @banoss and @davenolan for more Continuous Delivery goodness.

Why do we do Continuous Integration?

October 25, 2012

Continuous Integration is now very much a central process of most agile development efforts, but it hasn’t been around all that long. It may be widely regarded as a “development best practice” but some teams are still waiting to adopt C.I. Seriously, they are.

And it’s not just agile teams that can benefit from C.I. The principles behind good C.I. can apply to any development effort.

This article aims to explain where C.I. came from, why it has become so popular, and why you should adopt it on your development project, whether you’re agile or not.

Back in the Day…

Are you sitting comfortably? I want you to close your eyes, relax, and cast your mind back, waaay back, to 2003 or something like that…

You’re in an office somewhere, people are talking about The Matrix way too much, and there’s an alarming amount of corduroy on show… and developers are checking in code to their source control system….

Suddenly a developer swears violently as he checks out the latest code and finds it doesn’t compile. Someone’s check-in has broken the codebase.

He sets about fixing it and checking it back in.

Suddenly another developer swears violently….

Rinse and repeat.

CI started out as a way of minimising code integration headaches. The idea was, “if it’s painful, don’t put it off, do it more often”. It’s much better to do small and frequent code integrations rather than big ugly ones once in a while. Soon tools were invented to help us do these integrations more easily, and to check that our integrations weren’t breaking anything.

Tests!

Fossil of a Primitive C.I. System

Excavations of fossilized C.I. systems from the early 21st Century suggest that these primitive C.I. systems basically just compiled code, and then, when unit tests became more popular, they started running unit tests as well. So every time someone checked in some code, the build would make sure that this integration would still result in a build which would compile, and pass the unit tests. Simple!

C.I. systems then started displaying test results and we started using them to run huuuuge overnight builds which would actually deploy our builds and run integration tests. The C.I. system was the automation centre, it ran all these tasks on a timer, and then provided the feedback – this was usually an email saying what had passed and broken. I think this was an important time in the evolution of C.I. because people started seeing C.I. as more of an information generator, and a communicator, rather than just a techie tool that ran some builds on a regular basis.

Information Generator

Management teams started to get information out of C.I. and so it became an “Enterprise Tool”.

Some processes and “best practices” were identified early on:

  • Builds should never be left in a broken state.
  • You should never check in on a broken build because it makes troubleshooting and fixing even harder.

With this new-found management buy-in, C.I. became a central tenet of modern development practices.

People started having fun with C.I., plugging lava lamps, traffic lights and talking rabbits into the system. These were fun, but they did something very important in the evolution of C.I. – they turned it into an information radiator and a focal point of development efforts.

Automate Everything!

Automation was the big selling point for C.I. Tasks that would previously have been manual, error-prone and time-consuming could now be done automatically, or at night while we were in bed. For me it meant I didn’t have to come in to work on the weekends and do the builds! Whole suites of acceptance, integration and performance tests could automatically be executed on any given build, on a convenient schedule thanks to our C.I. system. This aspect, as much as any other, helped in the widespread adoption of C.I. because people could put a cost-saving value on it. C.I. could save companies money. I, on the other hand, lost out on my weekend overtime.

Code Quality

Static analysis and code coverage tools appeared all over the place, and were ideally suited to be plugged in to C.I. These days, most code coverage tools are designed specifically to be run via C.I. rather than manually. These tools provided a wealth of feedback to the developers and to the project team as a whole. Suddenly we were able to use our C.I. system to get a real feeling for our project’s quality. The unit test results combined with the static analysis could give us information about the code quality, the integration and functional test results gave us verification of our design and ensured we were making the right stuff, and the nightly performance tests told us that what we were making was good enough for the real world. All of this information got presented to us, automatically, via our new best friend the Continuous Integration system.

Linking C.I. With Stories

When our C.I. system runs our acceptance tests, we’re actually testing to make sure that what we’ve intended to do, has in fact been done. I like the saying that our acceptance tests validate that we built the right thing, while our unit and functional tests verify that we built the thing right.

Linking the ATs to the stories is very important, because then we can start seeing, via the C.I. system, how many of the stories have been completed and pass their acceptance criteria. At this point, the C.I. system becomes a barometer of how complete our projects are.

So, it’s time for a brief recap of what our C.I. system is providing for us at this point:

1. It helps us identify our integration problems at the earliest opportunity

2. It runs our unit tests automatically, saving us time and verifying our code.

3. It runs static analysis, giving us a feel for the code quality and potential hotspots, so it’s an early warning system!

4. It’s an information radiator – it gives us all this information automatically

5. It runs our ATs, ensuring we’re building the right thing and it becomes a barometer of how complete our project is.

And we’re not done yet! We haven’t even started talking about deployments.

Deployments

Ok now we’ve started talking about deployments.

C.I. systems have long been used to deploy builds and execute tests. More recently, with the introduction of advanced C.I. tools such as Jenkins (Hudson), Bamboo and TeamCity, we can use the C.I. tool not only to deploy our builds but to manage deployments to multiple environments, including production. It’s now not uncommon to see a Jenkins build pipeline deploying products to all environments. Driving your production deployments via C.I. is the next logical step in the process, which we’re now calling “Continuous Delivery” (or Continuous Deployment if you’re actually deploying every single build which passes all the test stages etc).

Below is a diagram of the stages in a Continuous Delivery system I worked on recently. The build is automatically promoted to the next stage whenever it successfully completes the current stage, right up until the point where it’s available for deployment to production. As you can imagine, this process relies heavily on automation. The tests must be automated, the deployments automated, even the release email and its contents are automated.

So what exactly is the cost saving with having a C.I. system?

Yeah, that’s a good question, well done me. Not sure I can give you a straight answer to that one though. Obviously one of the biggest factors is the time savings. As I mentioned earlier, back when I was a human C.I. machine I had to work weekends to sort out build issues and get working code ready for Monday morning. Also, C.I. sort of forces you to automate everything else, like the tests and the deployments, as well as the code analysis and all that good stuff. Again we’re talking about massive time savings.

But automating the hell out of everything doesn’t just save us time, it also eliminates human error. Consider the scope for human error in a system where some poor overworked person has to manually build every project, some other poor sap has to manually do all the testing and then someone else has to manually deploy this project to production and confidently say “Right, now that’s done, I’m sure it’ll work perfectly”. Of course, that never happened, because we were all making mistakes along the line, and they invariably came to light when the code was already live. How much time and money did we waste fixing live issues that we’d introduced by just not having the right processes and systems in place? And by systems, of course, I’m talking about Continuous Integration. I can’t put a value on it but I can tell you we wasted LOTS of money. We even had bugfix teams dedicated to fixing issues we’d introduced and not caught earlier (due in part to a lack of C.I.).

Conclusion

While for many companies C.I. is old news, there are still plenty of people yet to get on board. It can be hard for people to see how C.I. can really make that much of a difference, so hopefully this blog will help to highlight some of the benefits and explain how C.I. has been adopted as one of the most important and central tenets of modern software delivery.

For me, and for many others, Continuous Integration is a MUST.

 

Installing Artifactory on Ubuntu

July 24, 2012

I was messing around with some permissions settings on our “Live” version of Artifactory, as you do, and then suddenly I thought “No! This feels wrong… I’m sure there’s someone who would be having a fit right now if he knew I was guffing around with the permissions on the live version of Artifactory…”. That someone is of course me. So I duly pressed cancel, and decided to do things “properly”, i.e. install a local version of Artifactory and mess around with that one instead.

Easier said than done.

How NOT to Install Artifactory on Ubuntu

If you want to know how to install Artifactory on Ubuntu, I’d suggest skipping to the “How to Install Artifactory on Ubuntu” section, below. However, if you’d love to know exactly how to NOT install Artifactory on Ubuntu while following the Artifactory installation instructions and getting really annoyed, then read on!

I decided to install it on my Ubuntu VirtualBox VM, because surely that would be easier than installing it on my Windows box, right? My Ubuntu VM is running java 1.7. So I downloaded the artifactory zip and extracted it. Dead easy! Then I followed this particular instruction on the artifactory installation webpage:

Before you install it is recommended you first verify your current environment by running artifactoryctl check under the $ARTIFACTORY_HOME/bin folder

But by default it wasn’t executable, so I had to chmod it, duuuuh! So I did that, ran it, and I got the following error:

ARTIFACTORY_HOME not set

Er, well, no, I haven’t set ARTIFACTORY_HOME, that’s true. I had no idea I had to set it anywhere. So I set it in my profile, sourced it, and all that stuff.

I ran the artifactoryctl check command again and got another message saying:

Cannot find a JRE or JDK. Please set JAVA_HOME to a >=1.5

But that’s rubbish! I totally have 1.7 installed!! I checked my paths and all the rest. All looked fine to me. After a little while I got bored of seeing that exact same message, so I decided to just stop bothering with this stupid “artifactoryctl check” thing, and just moved on with the installation by running the $ARTIFACTORY_HOME/bin/install.sh script

ffs!

Gah! Not logged in as root. Sudo sudo sudo.

That’s better. I got a little message saying “SUCCESS” which was nice, followed by:

Installation of Artifactory completed. You can now check installation by running:

> service artifactory check

Okey dokey then, I’ll go do that!

Gahh!!! SUDO MAKE ME A SANDWICH!!!

Sudo !! is probably my most frequently typed command, along with “ll” (swiftly followed by “ls -a” when I realise the ll alias hasn’t been set up. I’m an idiot, y’see).

Anyway, following a swift sudo !! I got the following message:

Found JAVA= in JAVA_HOME=

Cannot find a JRE or JDK. Please set JAVA_HOME to a >=1.5 JRE

Erm, I’m pretty sure that’s not right. I definitely remember installing Java 1.7 and setting JAVA_HOME. So I echoed JAVA_HOME. Sure enough, it’s pointing to my 1.7 installation path.

Turns out that setting it in my bash profile wasn’t good enough, and I had to also put it in the following file:

/etc/artifactory/default

Oh, and don’t put the symlink path in for JAVA_HOME – that didn’t work, and /usr/bin/java got me nowhere. I had to put the full location in (mine was /usr/lib/jvm/java-7-openjdk-amd64/jre/). Anyway, that just about did it. Piss easy!

How to Install Artifactory on Ubuntu

  • Make sure you’ve got Java 1.5 or above installed, and make a note of the full path, as you’re going to need this later (mine was /usr/lib/jvm/java-7-openjdk-amd64/jre/).
  • Download the artifactory zip and extract it somewhere.
  • Go to your artifactory bin dir and make the install.sh file executable
  • Now run sudo ./install.sh – this will copy some files around the place and set up some paths.
  • Edit the file /etc/artifactory/default and put the FULL Java path in there as JAVA_HOME
  • Make sure JAVA_HOME is also set in /etc/environment
  • run “sudo service artifactory check”
  • If it all looks good run “sudo service artifactory start”
  • Go to http://localhost:8081/artifactory/
  • You’re done!
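
Putting those steps together, here’s roughly what it looks like on the command line – a minimal sketch, where the version number and the Java path are illustrative, so adjust them to your own setup:

# paths and version are illustrative – adjust to wherever you extracted the zip
unzip artifactory-2.6.3.zip -d ~/
cd ~/artifactory-2.6.3/bin

# make the installer executable and run it as root
chmod +x install.sh
sudo ./install.sh

# give Artifactory (and the service environment) the FULL JRE path
echo 'JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/jre' | sudo tee -a /etc/artifactory/default
echo 'JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/jre' | sudo tee -a /etc/environment

# check, then start
sudo service artifactory check
sudo service artifactory start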

Sonar Analysis Using Gradle

June 18, 2012

I’ve been experimenting with Gradle recently, and as part of the experiment, I wanted to get Sonar running and producing code metrics, including test coverage reports. I’m running the first release version of Gradle, so version 1.0.

To get Sonar working in Gradle you need to apply the sonar plugin, like this:

apply plugin: 'sonar'

Then you need to add some sonar connection settings (very much like with Maven):

sonar {
    server {
        url = "http://${sonarBaseName}/"
    }
    database {
        url = "jdbc:mysql://${hostBaseName}:3306/sonar?useUnicode=true&characterEncoding=utf8"
        driverClassName = "com.mysql.jdbc.Driver"
        username = "wibble"
        password = "wobble"
    }
}

To run the Sonar analysis/reports, you just call sonarAnalyze, which is the in-built task that the Sonar plugin gives you. So far, so easy.
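
From the command line that’s simply:

gradle sonarAnalyze

(or ./gradlew sonarAnalyze if you’re using the Gradle wrapper).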

The first problem was with the version of Sonar. My colleague Ed (check out his blog here) was trying to get a Gradle build working with an existing Sonar installation, but wasn’t having much joy. We were using a version of Sonar pre version 2.8, so we had to upgrade. In the end we were forced to upgrade to version 3.0.1. That was the first pain point.

The next problem we stumbled upon was with cobertura. There’s a cobertura plugin for Gradle, and getting it to work is a bit unusual. You need to reference an initialisation script which is hosted on GitHub, like this:

buildscript {
    apply from: 'https://github.com/valkolovos/gradle_cobertura/raw/master/repo/gradle_cobertura/gradle_cobertura/1.2/coberturainit.gradle'
}

We had some problems with this. One day, I could access this script fine, and the next it failed. A week or so later, I could access it, but Ed’s build couldn’t. We still don’t understand why this was the case, but we suspect it was something to do with the GitHub https connection.

To make sure we didn’t get this problem again, we got hold of the initialisation script and saved it locally – unfortunately it has dependencies, so we had to download the whole folder, put it in our Artifactory repository, and make the build reference it from there. This seemed to fix our problem, but it left us with another issue – we were now depending on another build component, which contained hard-coded build configuration information (the initialisation script refers to the Maven central repo). We weren’t happy with this (since we use our own cached repositories in Artifactory), so we had to think of a solution.
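
For reference, that interim setup just meant the buildscript block pointed at our own copy of the script rather than GitHub – something along these lines (the URL here is made up):

buildscript {
    apply from: 'http://artifactory.mycompany.com/gradle-scripts/gradle_cobertura/1.2/coberturainit.gradle'
}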

Ed went away to meditate on our problem. A little while later he came back with a gradle build file which used the Cobertura ant task. It’s pretty much the same way as it’s documented in the gradle cookbook, here.

These are the important parts that you need to include:

def cobSerFile="${project.buildDir}/cobertura.ser"
def srcOriginal="${sourceSets.main.classesDir}"
def srcCopy="${srcOriginal}-copy"
dependencies {
        testRuntime 'net.sourceforge.cobertura:cobertura:1.9.3'
        testCompile 'junit:junit:4.5'
}
test.doFirst  {
    ant {
        // delete data file for cobertura, otherwise coverage would be added
        delete(file:cobSerFile, failonerror:false)
        // delete copy of original classes
        delete(dir: srcCopy, failonerror:false)
        // import cobertura task, so it is available in the script
        taskdef(resource:'tasks.properties', classpath: configurations.testRuntime.asPath)
        // create copy (backup) of original class files
        copy(todir: srcCopy) {
            fileset(dir: srcOriginal)
        }
        // instrument the relevant classes in-place
        'cobertura-instrument'(datafile:cobSerFile) {
            fileset(dir: srcOriginal,
                   includes:"my/classes/**/*.class",
                   excludes:"**/*Test.class")
        }
    }
}
test {
    // pass information on cobertura datafile to your testing framework
    // see information below this code snippet
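    // (my addition, not part of the original cookbook snippet) one way to do
    // this - assuming Cobertura's standard datafile system property - is to
    // pass the datafile location to the test JVM:
    systemProperty 'net.sourceforge.cobertura.datafile', cobSerFile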
}
test.doLast {
    if (new File(srcCopy).exists()) {
        // replace instrumented classes with backup copy again
        ant {
            delete(file: srcOriginal)
            move(file: srcCopy,
                     tofile: srcOriginal)
        }
        // create cobertura reports
        ant.'cobertura-report'(destdir: "${project.buildDir.path}/reports/coverage",
                format: 'xml', srcdir: "src/main/java", datafile: cobSerFile)
        ant.'cobertura-report'(destdir: "${project.buildDir.path}/reports/coverage",
                format: 'html', srcdir: "src/main/java", datafile: cobSerFile)
    }
}

So this is how we’ve got it running at the moment. As you can see, we’re no longer using the Cobertura plugin for gradle. The next thing we need to do is get Sonar to pick up the Cobertura reports. This is configured in the Sonar configuration section. I’ve shown the Sonar configuration section at the top of this page, but now we need to make some changes to it, like this:

sonar {

    project {
        coberturaReportPath = new File(buildDir, "/reports/cobertura/coverage.xml")
        sourceEncoding = "UTF-8"
        dynamicAnalysis = "reuseReports"
        testReportPath = new File(buildDir, "/test-results")
    }

    server {
        url = "http://${sonarBaseName}/"
    }

    database {
        url = "jdbc:mysql://${hostBaseName}:3306/sonar?useUnicode=true&characterEncoding=utf8"
        driverClassName = "com.mysql.jdbc.Driver"
        username = "wibble"
        password = "wobble"
    }
}

Now we need to go back and change the output directory of our Cobertura ant configuration, to make it output to /reports/cobertura/coverage.xml, so we change the last bit of our configuration to look like this:

        // create cobertura reports
        ant.'cobertura-report'(destdir: "${project.buildDir.path}/reports/cobertura",
                format: 'xml', srcdir: "src/main/java", datafile: cobSerFile)
        ant.'cobertura-report'(destdir: "${project.buildDir.path}/reports/coverage",
                format: 'html', srcdir: "src/main/java", datafile: cobSerFile)

What’s Going On?

May 22, 2012

Here’s a bunch of upcoming talks, courses, conferences, things and stuff, which I reckon might be worth checking out.

Managing javascript with Gradle – Free event @ Skills Matter (London) May 22nd 6:30pm

Insight for CI - Webinar May 23rd 11am and again at 2pm EDT

Goto Conference – Amsterdam May 24-26

Thoughtworks Live – Piccadilly, London May 24th (all day)

Configuration Management Conference – (£80) London, May 29th (all day)

Thoughtworks Quarterly Briefing – Liverpool Street, London May 30th 6:30pm

Agile Development West – Las Vegas, June 10 – 15th

Gradle Build Automation Evolved – Free event @ Skills Matter (London) June 12th 6:30pm

Continuous Delivery Workshop – (£695, €695) London July 5th. Berlin June 12th, Dusseldorf June 14th

Devops summit – London, June 20th

Jenkins User Conference – Israel, July 5th (all day)

I will add to this list as and when I find out about any interesting new events.

Beer and Pizza with Facebook

April 19, 2012


Last night I was invited to go along to the Facebook offices in London and attend a tech talk on how Facebook do release engineering and automated testing.

Now, when you go along to meetups & tech talks they often give you free pens, magazines and sometimes free beer. These freebies are bribes to make you enjoy the evening and think favorably of the content. I would never allow myself to be influenced by such things, and as such my blogs are guaranteed to be 100% impartial. Honestly. Right, that’s that done, now on with the tech-talk…

Pint of Spitfire

The first thing I did was go to the bar to collect my free beer. The choice was great, there was wine for the ladies, lager for the men, bitter for the real men, and soft drinks for, er, others. And you get your beer in a proper pint glass too. So an excellent start to the evening.

I took my seat on a very comfortable sofa and sat back, waiting for the talk to begin. Then the snacks started arriving. They were brought round by waitresses in black uniforms, so they sort of looked like ninjas. I’m not sure that was the intention though. Anyway, the snacks were delicious. I started off with a chilli and lemongrass chicken skewer. Yummy.

No sooner had I finished my chicken skewer than Girish Patangay, a Facebook release engineer, started his talk on how they do deployments to Facebook.com.

The first thing I noted was that they don’t do continuous delivery. I think I know why, and I’ll explain about that later.

Girish emphasized how important the culture is at Facebook, and explained that “ownership and impact” are very important there. This means that developers take full ownership of their changes/code and they have to have full awareness of impact of their changes. He described the developers as “shepherds” of the code, in that they look after their changes from the moment they’re checked in, to the moment they’re pushed to production. They are also responsible for testing their changes because Facebook “don’t have a QA team” as such. It sounds like the devs are responsible for coming up with the tests and writing them. I wondered if these included Acceptance Tests, and if so, where are the acceptance criteria coming from?

Being able to shepherd your code into production is made much easier by the quick turnaround time from code commit to production push. The longest anyone would have to wait is 1 week, but mostly it’s a lot quicker than that. There are daily pushes, plus 1 weekly push.

Branching

The next snack to come round was a vegetarian mini pizza, and I mean mini. I could fit the whole thing in my mouth, and it was totally delicious.

Their branching policy was pretty much the same policy as we had when I worked at uSwitch.com. They worked on main until a certain day (I think they said Sunday) when a branch was taken. From then on they work on the branch. Fixes could be deployed at any time from the previous week’s branch if they deemed them fit enough and necessary.

They also used shadow branches, which I think are the same as the latest branch plus any changes in main. The point in this is so that anyone can see the very latest merged code at any given time. I’m not sure how often this shadow branch was updated though (presumably at least daily).

Push Karma

By this point I’d finished my pint of beer, so a ninja came around and offered me another one! How awesome is that?! I also tucked in to another little snack, not sure what this one was but it looked like a mini bhajee and came with a dip. Tasty.

I loved the “push karma” thing they’ve got going on at Facebook. Basically everyone is born with a push karma of 4. If your changes repeatedly turn out to be a disaster or troublesome, your push karma goes down. If it goes down to 2 or below, you can’t get into the daily push and you have to wait for the weekly release. On the other hand, if your changes are consistently smooth, then your push karma goes up, and the better chance you have of getting your changes into the daily push. I really love this concept and I wish I’d thought of it at uSwitch. Back in those days we were basically doing daily pushes as well as biweekly releases, and giving people “push karma” would have been a fantastic weapon for pushing back on the odd push that I knew pretty well wasn’t going to go smoothly!

Pineapple and Chilli

The next treat to come my way via a ninja was a pineapple and peanut *thing* with some chilli on top. Again this was delicious. I had two of them they were so good. I could clearly identify the pineapple, and the bit of chilli on top, but I wasn’t sure what the peanut flavored thing was. I mean, presumably it was peanut, but what kind of peanut? It was more like a peanut relish than a peanut. It certainly didn’t look like a peanut. Anyway, on with the tech talk…

At Facebook, when the staff try to access facebook.com, the staff actually access latest.facebook.com – this is the latest code, deployed onto some beta servers. This way, the staff are acting like testers. What’s particularly useful about this is how easy they have made it for users to report bugs. You can even assign them to individual devs. I think it’s this “usability” which is lacking in most places. Many of us can access demo sites etc but actually capturing and reporting defects really isn’t a click-of-a-button thing, and it’s this barrier which Facebook have tried to overcome. I would love it if I could access my latest system that easily, and report a bug simply by clicking a button on the same site.

How Facebook Do Deployments

As Girish started talking about the actual technical details of how Facebook do their deployments, I tucked into a duck spring roll and my third beer. This time I was drinking becks or something similar, which I swiped from a passing ninja.

About 4 years ago, Facebook did deployments using rsync, and so did I! In fact, I know a few places that still do deployments using rsync. It took about an hour for Facebook to deploy their whole site. These days they’ve got about 100 times more servers to push to, and they can do it in minutes. How??

They wouldn’t say.

Just kidding. I’ll get to that in a sec, first they explained some approaches they considered, and why they discounted them. I should at this point mention that they deploy their entire webserver code, rather than just small parts of it in each push. This, in my opinion, is probably why they aren’t doing continuous deployment or continuous delivery. The release of the site is a 1.5Gb binary. So, they looked at binary diffs, but they just aren’t that quick, and they looked at multicast, which turned out to be very complicated and a cross-datacentre configuration nightmare. They also looked at peer-to-peer rsync or scp, but that wasn’t working for them.

What they settled on, as Girish explained while I had another chilli and lemongrass chicken skewer (definitely my favorite), was a torrent push, and I must confess I love this idea.

It works like this: you install torrent clients on your servers, and create a torrent file. Then you simply deploy your torrent to one peer and sit back and admire your work as the peer-to-peer sharing gathers pace. Absolutely brilliant. I’m so annoyed I didn’t think of this as well.

torrent diagram from http://torrentfreak.com

Their solution was based on opentracker and hrktorrent, and allowed them to push a 418Mb gzip file to 10,000 servers in just 58 seconds, which is roughly equivalent to 563Gbps!!

Testing

Earlier on they said they don’t have a QA team, so when one of their testers, Damien Sereni, came up to give his talk, I got a bit confused. However, they explained that he is the WebDriver guy, and that he’s busy porting their old Watir tests over to WebDriver. I wondered why they were doing this, and obligingly they explained that it was because the Watir code was very separate from the site code and that WebDriver allowed them to keep their code together better. I’ve used Watir and WebDriver and I can understand what he means, even if it might not sound like a compelling reason for such a switch.

Facebook use Selenium Grid and a WebDriver hub to scale their tests and speed them up. This allows them to distribute their tests to multiple environments and parallelize their test execution.

This is all pretty easy when you’re testing on computers but it gets a bit tricky with mobile phones. Back in the day, when the Facebook app was separate from the site, it was a pain to deploy and a pain to test. Also you had to deal with Apple quite a lot, so you couldn’t really take control of when and how you did deployments. Nowadays the Facebook app just renders the website so things are a little different (i.e. easier). That said, automated testing for mobile, and sharing UI tests across platforms, remains one of the biggest challenges at Facebook.

Post-Talk Drinks

It would have been rude to leave without collecting my free T-shirt and Facebook-embossed pint glass, so I stuck around until the end of the talk and took the opportunity to chat with some of the Facebook engineers. One guy explained how they did roll-backs (by keeping the old code on the site and repointing a symlink) and another guy explained how they manage schema changes (by keeping the schema really really simple, and abstracting). Also, I took the opportunity to speak with one of the ninja waitresses and asked her what was in the pineapple and peanut snack. The answer: Pineapple and peanut. I had a halloumi cheese skewer (delicious) and left.
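
(Postscript: I didn’t note down the exact commands, but I’m guessing the symlink trick looks something like the below – the paths are entirely my own invention.)

# each release lives in its own directory, and the web server serves "current"
sudo ln -sfn /var/www/releases/build-1234 /var/www/current   # roll forward
sudo ln -sfn /var/www/releases/build-1233 /var/www/current   # roll back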

Continuous Delivery Using Maven

February 21, 2012

I’m currently working on a continuous delivery system where I work, so I thought I would write something up about what I’m doing. The continuous delivery system, in a nutshell, looks a bit like this:

I started out with a bit of a carte blanche with regards to what tools to use, but here’s a list of what was already in use, in one form or another, when I started my adventure:

  • Ant (the main build tool)
  • Maven (used for dependency management)
  • CruiseControl
  • CruiseControl.Net
  • Go
  • Monit
  • JUnit
  • js-test-driver
  • Selenium
  • Artifactory
  • Perforce

The decision of which of these tools to use for my system was influenced by a number of factors. Firstly I’ll explain why I decided to use Maven as the build tool (shock!!).

I’m a big fan of Ant, I’d usually choose it (or probably Gradle now) over Maven any day of the week, but there was already an existing Ant build system in place, which had grown a bit monolithic (that’s my polite way of saying it was a huge mess), so I didn’t want to go there! And besides, the first project that would be going into the new continuous delivery system was a simple Java project – way too straightforward to justify rewriting the whole ant system from scratch and improving it, so I went for Maven. Furthermore, since the project was (from a build perspective) fairly straightforward, I thought Maven could handle it without too much bother. I’ve used Maven before, so I’ve had my run-ins with it, and I know how hard it can be if you want to do anything outside of “The Maven Way”. But, as I said, the project I was working on seemed pretty simple so Maven got the nod.

GO was the latest and greatest C.I. server in use, and the CruiseControl systems were a bit of a handful already, so I went for GO (also I’d never used it before so I thought that would be cool, and it’s from Thoughtworks Studios, so I thought it might be pretty good). I particularly liked the pipeline feature it has, and the way it manages each of its own agents. A colleague of mine, Andy Berry, had already done quite a bit of work on the GO C.I. system, so there was already something to start from. I would have gone for Jenkins had there not already been a considerable investment in GO by the company prior to my arrival.

I decided to use Artifactory as the artifact repository manager, simply because there was already an instance installed, and it was sort-of already setup. The existing build system didn’t really use it, as most artifacts/dependencies were served from network shares. I would have considered Nexus if Artifactory wasn’t already installed.

I setup Sonar to act as a build analysis/reporting tool, because we were starting with a Java project. I really like what Sonar does, I think the information it presents can be used very effectively. Most of all I just like the way in which it delivers the information. The Maven site plugin can produce pretty much all of the information that Sonar does, but I think the way Sonar presents the information is far superior – more on this later.

Perforce was the incumbent source control system, and so it was a no-brainer to carry on with that. In fact, changing the SC system wasn’t ever in question. That said, I would have chosen Subversion if this was an option, just because it’s so utterly freeeeeeee!!!

That was about it for the tools I wanted to use. It was up to the rest of the project team to determine which tools to use for testing and developing. All that I needed for the system I was setting up was a distinction between the Unit Tests, Acceptance Tests and Integration Tests. In the end, the team went with JUnit, Mockito and a couple of in-house apps to take care of the testing.

The Maven Build, and the Joys of the Release Plugin!

The idea behind my Continuous Delivery system was this:

  • Every check-in runs a load of unit tests
  • If they pass it runs a load of acceptance tests
  • If they pass we run more tests – Integration, scenario and performance tests
  • If they all pass we run a bunch of static analysis and produce pretty reports and eventually deploy the candidate to a “Release Candidate” repository where QA and other like-minded people can look at it, prod it, and eventually give it a seal of approval.

This is the basic outline of a build pipeline:

Maven isn’t exactly fantastic at fitting into the pipeline process. For starters we’re running multiple test phases, and Maven follows a “lifecycle” process, meaning that every time you call a particular pipeline phase, it runs all the preceding phases again. Our pipeline needs to run the Maven Surefire plugin twice, because that’s the plugin we use to execute our different tests. The first time we run it, we want to execute all the unit tests. The second time we run it we want to execute the acceptance tests – but we don’t want it to run the unit tests again, obviously.

You probably need some familiarity with the Maven build lifecycle at this point, because we’re going to be binding the Surefire plugin to two different phases of the Maven lifecycle so that we can run it twice and have it run different tests each time. Here is the Maven lifecycle (for a more detailed description, check out Maven’s own lifecycle page):

Clean Lifecycle

  • pre-clean
  • clean
  • post-clean

Default Lifecycle

  • validate
  • initialize
  • generate-sources
  • process-sources
  • generate-resources
  • process-resources
  • compile
  • process-classes
  • generate-test-sources
  • process-test-sources
  • generate-test-resources
  • process-test-resources
  • test-compile
  • process-test-classes
  • test
  • prepare-package
  • package
  • pre-integration-test
  • integration-test
  • post-integration-test
  • verify
  • install
  • deploy

Site Lifecycle

  • pre-site
  • site
  • post-site
  • site-deploy

So, we want to bind our Surefire plugin to both the test phase to execute the UTs, and the integration-test phase to run the ATs, like this:

<plugin>
<!-- Separates the unit tests from the integration tests. -->
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
  <configuration>
  <argLine>-Xms256m -Xmx512m</argLine>
  <skip>true</skip>
  </configuration>
  <executions>
    <execution>
      <id>unit-tests</id>
      <phase>test</phase>
      <goals>
        <goal>test</goal>
      </goals>
  <configuration>
    <testClassesDirectory>
      target/test-classes
    </testClassesDirectory>
    <skip>false</skip>
    <includes>
      <include>**/*Test.java</include>
    </includes>
    <excludes>
      <exclude>**/acceptance/*.java</exclude>
      <exclude>**/benchmark/*.java</exclude>
      <exclude>**/requestResponses/*Test.java</exclude>
    </excludes>
  </configuration>
</execution>
<execution>
  <id>acceptance-tests</id>
  <phase>integration-test</phase>
  <goals>
    <goal>test</goal>
  </goals>
  <configuration>
    <testClassesDirectory>
      target/test-classes
    </testClassesDirectory>
    <skip>false</skip>
    <includes>
      <include>**/acceptance/*.java</include>
      <include>**/benchmark/*.java</include>
      <include>**/requestResponses/*Test.java</include>
    </includes>
  </configuration>
</execution>
</executions>
</plugin>

Now in the first stage of our pipeline, which polls Perforce for changes, triggers a build and runs the unit tests, we simply call:

mvn clean test

This will run the surefire test phase of the maven lifecycle. As you can see from the Surefire plugin configuration above, during the “test” phase execution of Surefire (i.e. the first time we run it) it’ll run all of the tests except for the acceptance tests – these are explicitly excluded from the execution in the “excludes” section. The other thing we want to do in this phase is quickly check the unit test coverage for our project, and maybe make the build fail if the test coverage is below a certain level. To do this we use the cobertura plugin, and configure it as follows:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
  <version>2.4</version>
  <configuration>
    <instrumentation>
      <excludes><!-- this is why this isn't in the parent -->
        <exclude>**/acceptance/*.class</exclude>
        <exclude>**/benchmark/*.class</exclude>
        <exclude>**/requestResponses/*.class</exclude>
      </excludes>
    </instrumentation>
    <check>
      <haltOnFailure>true</haltOnFailure>
      <branchRate>80</branchRate>
      <lineRate>80</lineRate>
      <packageLineRate>80</packageLineRate>
      <packageBranchRate>80</packageBranchRate>
      <totalBranchRate>80</totalBranchRate>
      <totalLineRate>80</totalLineRate>
    </check>
    <formats>
      <format>html</format>
      <format>xml</format>
    </formats>
  </configuration>
  <executions>
    <execution>
      <phase>test</phase>
      <goals>
        <goal>clean</goal>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>

To get the cobertura plugin to execute, we need to call “mvn cobertura:cobertura”, or run the maven “verify” phase by calling “mvn verify”, because the cobertura plugin by default binds to the verify lifecycle phase. But if we delve a little deeper into what this actually does, we see that it actually runs the whole test phase all over again, and of course the integration-test phase too, because they precede the verify phase, and cobertura:cobertura actually invokes execution of the test phase before executing itself. So what I’ve done is to change the lifecycle phase that cobertura binds to, as you can see above. I’ve made it bind to the test phase only, so that it only executes when the unit tests run. A consequence of this is that we can now change the maven command we run, to something like this:

mvn clean cobertura:cobertura

This will run the Unit Tests implicitly and also check the coverage!

In the second stage of the pipeline, which runs the acceptance tests, we can call:

mvn clean integration-test

This will again run the Surefire plugin, but this time it will run through the test phase (thus executing the unit tests again) and then execute the integration-test phase, which actually runs our acceptance tests.

You’ll notice that we’ve run the unit tests twice now, and this is a problem. Or is it? Well actually no it isn’t, not for me anyway. One of the reasons why the pipeline is broken down into sections is to allow us to separate different tasks according to their purpose. My Unit Tests are meant to run very quickly (less than 3 minutes ideally, they actually take 15 seconds on this particular project) so that if they fail, I know about it asap, and I don’t have to wait around for a lifetime before I can either continue checking in, or start fixing the failed tests. So my unit test pipeline phase needs to be quick, but what difference does an extra few seconds mean for my Acceptance Tests? Not too much to be honest, so I’m actually not too fussed about the unit tests running for a second time. If it was a problem, I would of course have to somehow skip the unit tests, but only in the test phase on the second run. This is doable, but not very easy. The best way I’ve thought of is to skip the tests using skipTests, which actually just skips the execution of the Surefire plugin, and then run your acceptance tests using a different plugin (the Antrun plugin for instance).
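
For what it’s worth, here’s a sketch of another alternative – I haven’t actually wired this into the pipeline, and the property name is my own invention. The idea is to drive the unit-test execution’s skip flag from a property, which the acceptance test stage can then flip without touching the acceptance-test execution:

<properties>
  <!-- hypothetical property: unit tests run by default -->
  <skip.unit.tests>false</skip.unit.tests>
</properties>

<!-- and in the unit-tests execution of the Surefire plugin -->
<configuration>
  <skip>${skip.unit.tests}</skip>
</configuration>

The acceptance test stage would then call something like mvn clean integration-test -Dskip.unit.tests=true.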

The next thing we want to do is create a built artifact (a jar or zip for example) and upload it to our artifact repository. We’ll use 5 artifact repositories in our continuous delivery system, these are:

  1. A cached copy of the maven central repo
  2. A C.I. repository where all builds go
  3. A Release Candidate (RC) repository where all builds under QA go
  4. A Release repository where all builds which have passed QA go
  5. A Downloads repository, from where the downloads to customers are actually served

Once our build has passed all the automated test phases it gets deployed to the C.I. repository. This is done by configuring the C.I. repository in the maven pom file as follows:

<distributionManagement>
<repository>
<id>CI-repo</id>
<url>http://artifactory.mycompany.com/ci-repo</url>
</repository>
</distributionManagement>

and calling:

mvn clean deploy

Now, since Maven follows the lifecycle pattern, it’ll rerun the tests again, and we don’t want to do all that, we just want to deploy the artifacts. In fact, there’s no reason why we shouldn’t just deploy the artifact straight after the Acceptance Test stage is completed, so that’s exactly what we’ll do. This means we need to go back and change our maven command for our Acceptance Test stage as follows:

mvn clean deploy

This does the same as it did before, because the integration-test phase is implicit and is executed on the way to reaching the “deploy” phase as part of the maven lifecycle, but of course it does more than it did before, it actually deploys the artifact to the C.I. repository.

One thing that is worth noting here is that I’m not using the maven release plugin, and that’s because it’s not very well suited to continuous delivery, as I’ve noted here. The main problem is that the release plugin will increment the build number in the pom and check it in, which will in turn kick off another build, and if every build is doing this, then you’ll have an infinitely building loop. Maven declares builds as either a “release build” which uses the release plugin, or a SNAPSHOT build, which is basically anything else. I want to create releases out of SNAPSHOT builds, but I don’t want them to be called SNAPSHOT builds, because they’re releases! So what I need to do is simply remove the word SNAPSHOT from my pom. Get rid of it entirely. This will now build a normal “snapshot” build, but not add the SNAPSHOT label, and since we’re not running the release plugin, that’s fine (WARNING: if you try removing the word snapshot from your pom and then try to run a release build using the release plugin, it’ll fail).

Ok, let’s briefly catch up with what our system can now do:

  • We’ve got a build pipeline with 2 stages
  • It’s executed every time code is checked-in
  • Unit tests are executed in the first stage
  • Code coverage is checked, also in the first stage
  • The second stage runs the acceptance tests
  • The jar/zip is built and deployed to our ci repo, also in the second stage of our pipeline

So we have a jar, and it’s in our “ci” repo, and we have a code coverage report. But where’s the rest of our static analysis? The build should report a lot more than just the code coverage. What about coding styles & standards, rules violations, potential defect hot spots, copy-and-pasted code, and so forth? Thankfully, there’s a great tool which collects all this information for us, and it’s called Sonar.

I won’t go into detail about how to setup and install Sonar, because I’ve already detailed it here.

Installing Sonar is very simple, and to get your builds to produce Sonar reports is as simple as adding a small amount of configuration to your pom, and adding the Sonar plugin to your plugins section. To produce the Sonar reports for your project, you can simply run:

mvn sonar:sonar

So that’s exactly what we’ll do in the next section of our build pipeline.

So we now have 3 pipeline sections and we’re producing Sonar reports with every build. The Sonar reports look something like this:

Sonar report

As you can see, Sonar produces a wealth of useful information which we can pore over and discuss in our daily stand-ups. As a rule we try to fix any “critical” rule violations, and keep the unit test coverage percentage up in the 90s (where appropriate). Some people might argue that unit test coverage isn’t a valuable metric, but bear in mind that Sonar allows you to exclude certain files and directories from your analysis, so that you’re only measuring the unit test coverage of the code you want to have covered by unit tests. For me, this makes it a valuable metric.
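
As an illustration – these particular patterns are made up rather than taken from our real config – exclusions can be declared as a property in the pom, and Sonar will leave the matching paths out of the analysis:

<properties>
  <sonar.exclusions>**/generated/**,**/*Stub.java</sonar.exclusions>
</properties>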

Moving on from Sonar now, we get to the next stage of my pipeline, and here I’m going to run some Integration Tests (finally!!). The ITs have a much wider scope than the Unit Tests, and they also have greater requirements, in that we need an Integration Test Environment to run them in. I’m going to use Ant to control this phase of the pipeline, because it gives me more control than Maven does, and I need to do a couple of funky things, namely:

  • Provision an environment
  • Deploy all the components I need to test with
  • Get my newly built artifact from the ci repository in Artifactory
  • Deploy it to my IT environment
  • Kick off the tests

The Ant script is fairly straightforward, but I’ll just mention that getting our artifact from Artifactory is as simple as using Ant’s own “get” task (you don’t need to use Ivy just to do this):

<get src="${artifactory.url}/${repo.name}/${namespace}/${jarname}-${version}" dest="${temp.dir}/${jarname}-${version}" />

The Integration Test stage takes a little longer than the previous stages, and so to speed things up we can run this stage in parallel with the previous stage. Go allows us to do this by setting up 2 jobs in one pipeline stage. Our Sonar stage now changes to “Reports and ITs”, and includes 2 jobs:

<jobs>
  <job name="sonar">
    <tasks>
      <exec command="mvn" args="sonar:sonar" workingdir="JavaDevelopment" />
    </tasks>
    <resources>
      <resource>windows</resource>
    </resources>
  </job>
  <job name="ITs">
    <tasks>
      <ant buildfile="run_ITs.xml" target="build" workingdir="JavaDevelopment" />
    </tasks>
    <resources>
      <resource>windows</resource>
    </resources>
  </job>
</jobs>

Once this phase completes successfully, we know we’ve got a half decent looking build! At this point I’m going to throw a bit of a spanner into the works. The QA team want to perform some manual exploratory tests on the build. Good idea! But how does that fit in with our Continuous Delivery model? Well, what I did was to create a separate “Release Candidate” (RC) repository, also known as a QA repo. Builds that pass the IT stage get promoted to the RC repo, and from there the QA team can take them and do their exploratory testing.

Does this stop us from practicing "Continuous Delivery"? Well, not really. In my opinion, Continuous Delivery is more about making sure that every build creates a potentially releasable artifact, rather than making every build actually deploy an artifact to production – that's Continuous Deployment.

Our final stage in the deployment pipeline is to deploy our build to a performance test environment and execute some load tests. Once this stage completes we deploy our build to the Release Repository, as it's all signed off and ready to hand over to customers. At this point there's a manual decision gate, which in reality is a button in my CI system. Only the product owner, or some such responsible person, can decide whether or not to actually release this build into the wild. They may decide not to, simply because they don't feel that the changes included in this build are particularly worth deploying. On the other hand, they may decide to release it, and to do this they simply click the button. What does the button do? Well, it simply copies the build to the "downloads" repository, from where a link is served and sent to customers, informing them that a new release is available – that's just the way things are done here. In a hosted environment (like a web-based company), this button press could initiate the deploy script and push this build to the production environment.
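
In Go, that button can be modelled as a stage with a manual approval. Here's a rough sketch of the config – the stage name, job name and build file are all just illustrative, not taken from a real setup:

<stage name="release-to-customers">
  <approval type="manual" />
  <jobs>
    <job name="publish">
      <tasks>
        <!-- copy the signed-off build from the release repo to the downloads repo -->
        <exec command="ant" args="-f publish.xml copy-to-downloads" workingdir="JavaDevelopment" />
      </tasks>
    </job>
  </jobs>
</stage>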

A Word on Version Numbers

This system is actually dependent on each build producing a unique artifact. If a code change is checked in, the resultant build must be uniquely identifiable, so that when we come to release it, we know we're releasing the exact same build that has gone through the whole pipeline, not some older build. To do this, we need to version each build with a unique number. The CI system is very useful for doing this. In Go, as with most other CI systems, you can retrieve a unique "counter" for your build, which is incremented every time there's a build. No two builds of the same name can have the same counter. So we could add this unique number to our artifact's version, something like this (let's say the counter is 33, meaning this is the 33rd build):

myproject.jar-1.0.33

This is good, but it doesn’t tell us much, apart from that this is the 33rd build of “myproject”. A more meaningful version number is the source control revision number, which relates to the code commit which kicked off the build. This is extremely useful. From this we can cross reference every build to the code in our source control system, and this saves us from having to “tag” the source code with every build. I can access the source control revision number via my CI system, because Go sets it as an environment variable at build time, so I simply pass it to my build script in my CI system’s xml, like this:

cobertura:cobertura -Dp4.revision=${env.GO_PIPELINE_LABEL} -Dbuild.counter=${env.GO_PIPELINE_COUNTER}
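
For completeness, that command line sits inside an exec task in the Go config, along the same lines as the earlier pipeline examples (the job name here is made up):

<job name="unit-tests">
  <tasks>
    <!-- Go passes its pipeline label and counter through to the Maven build as properties -->
    <exec command="mvn" args="cobertura:cobertura -Dp4.revision=${env.GO_PIPELINE_LABEL} -Dbuild.counter=${env.GO_PIPELINE_COUNTER}" workingdir="JavaDevelopment" />
  </tasks>
</job>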

p4.revision and build.counter are used in the maven build script, where I set the version number:

<groupId>com.mycompany</groupId>
<artifactId>myproject</artifactId>
<packaging>jar</packaging>
<version>${main.version}-${build.number}-${build.counter}</version>
<name>myproject</name>

<properties>
  <build.number>${p4.revision}</build.number>
  <major.version>1</major.version>
  <minor.version>0</minor.version>
  <patch.version>0</patch.version>
  <main.version>${major.version}.${minor.version}.${patch.version}</main.version>
</properties>

If my Perforce check-in number was 1234, then this build, for example, will produce:

myproject.jar-1.0.0-1234-33

And that just about covers it. I hope this is useful to some people, especially those who are using Maven and are struggling with the release plugin!

Being Agile in Release Management

November 16, 2011 4 comments

http://jamesbetteley.wordpress.com/2011/11/16/being-agile-in-release-management/

2 great things happened in 2005: Wales won the Grand Slam, and I had my first taste of “agile”. And after having worked on a 3-year-long waterfall project (which still wasn’t finished by the time I left) agile came as a breath of fresh air for me. I was hooked from day 1. I was working as a release manager in a fairly large development team, and since then I’ve worked in a number of different departments, such is the broad spectrum of the work involved in release management. I’ve also worked with teams of all sizes, including offshore teams and partners. Each situation poses its own unique set of challenges and I like to think that working in an agile fashion has equipped me well to overcome these challenges.

You might wonder how a "development methodology" can help a release manager overcome so many different challenges, given that release management doesn't necessarily lend itself to working like an agile dev team (mainly due to the number of unplanned interruptions). The answer is simply that agile, for me, goes further than just being a development methodology: it's a culture.

Change the way you look at things - is the model spinning clockwise or anti-clockwise?

One of the things that I really love about agile is how it teaches you to think differently to how you otherwise might. It teaches you to evaluate things using different criteria – or rather it clarifies  which criteria you should be using to evaluate tasks. For instance, I now look at the tasks that I work on in terms of business value and customer demand, rather than my value and my demand!  In the past I have spent months working on complicated build and release solutions, which may well have been ultimately successful, but weren’t delivered on time and on occasion didn’t do everything that the users wanted.

These days, I certainly wouldn’t approach such a large challenge and try to get it right first time, it simply doesn’t make good business sense – it’s likely to be too costly in terms of time and effort, and by the time it eventually gets to the users it may well not be fit for purpose. Adopting an agile approach certainly helps here. But it’s not quite as simple as this in real life…

Thinking Agile

Thinking in an “agile way” doesn’t necessarily come naturally to release management – the solutions we’re tasked to come up with are often very complicated, need to support a multitude of projects and users, and still need to be simple and robust enough for the next person to be able to pick up. Working out a system like this takes some time. There’s also the added problem that we’re often dealing with live systems, and the risk of “getting it wrong” can be very costly and visible! For that reason, the temptation to do a great deal of up-front planning is HUGE! Another problem is that we try to (or are asked to) produce a one-size-fits-all solution to a very disparate system. I’m talking about things like:

  • We only want one CI system, but there are already 3 being used in the dev team.
  • We only want to use one build tool, but we need to support different programming languages, and the developers have already chosen their favourites.
  • Everyone has their favourite code inspection tools but management want stats that can be compared.
  • QA do things one way, dev do it another. And let’s not even start talking about how NetOps do it!
  • Deployments are done differently depending on which team you’re in, which OS you use, and which colour socks you’re wearing that day.

So as you can see, we’re often faced with competing requirements and numerous different “customers”, each with their own opinions and priorities. The temptation to standardise and make things simpler for everybody leads us down a long and windy road to a solution that invariably ends up being more complex than the problem you tried to solve in the first place. The fact is, there has to be complexity somewhere, and it often ends up in the build & deploy system.

How Do I “Think Agile”?

Well, first of all you have to stop looking at the big picture! I know it sounds crazy, but once you’ve got an idea of the big picture, instead of diving straight in and working on your Sistine Chapel, just write down what your big picture is in terms of a goal or mission statement, and then park it. I like to park it on a piece of A4 and stick it to the wall, but that’s just me! Just write it down somewhere for safe keeping, so that you can refer to it when needs be.

Michelangelo (not the ninja turtle) would have needed a few sprints to finish the Sistine Chapel

I once had a goal to standardise and automate the builds and deployments of every application to every environment, a-la continuous delivery. At the time, that was my Sistine Chapel.

User Stories

The next thing is to start gathering requirements in the form of stories. User stories help you get a real feel for what the users want – they give a sense of perspective and “real-life” which traditional requirements specs just don’t give. I honestly believe you’ve got a much better chance of delivering what people are asking for if you use stories rather than use cases or requirements specs to drive your development. Speak to your customers, the developers, testers, managers and netops engineers, and write down their requirements in the form of stories. I literally go around with a pen and paper and do this. Don’t forget to add your own stories as well – the release engineering team has its own set of requirements too!

User Stories Applied, by Mike Cohn

Next up is to turn these stories into tasks. Some stories can be made up of dozens of tasks, and they may take several sprints to complete, but this is the whole point of this exercise. By breaking the stories down into tasks, you’re creating tangible pieces of work which you can then give relatively accurate estimates on. You’ll often find that some stories contradict one another in the sense that your solution to one story will almost definitely be rendered obsolete when you get around to completing another story later on. Don’t be tempted to put one task off, just because you know you’ll end up changing it later!!

Eventually, when the time comes, you will have to change the work you've already done. This is the natural evolution of the process. Obviously it's better to be future-proof, and keeping one eye on the distance is a very useful thing. It would be foolish to write a system that will need to be completely torn down in a matter of a couple of weeks, but there's a constant balancing act to perform – you need to get tasks completed but you don't want to be making hard work for yourself in the future. My tactic is to make each solution (be it a deploy script or a new CI system) self-contained, and only later on will I refactor and pull out the common parts – but the point to realise is that this won't come as a surprise, and you can make sure everyone knows that this work will eventually need doing as a consequence.

Customers and Prioritisation

I’ve learned that all stories must have a sponsor, or “customers”. As I’ve mentioned, these are likely to be developers, testers, management and netops engineers, as well as yourself! Strangely enough the customers are actually a really handy tool in helping you manage your work…

There's never enough time in the day to get everything done, or at least that's the way it often seems when you've got project managers hassling you to do a release of the latest super-duper project, and management asking you to automate the reports, and developers asking you to fix their environments, and then your source control system throws a wobbly. It's organised chaos sometimes. However, when you're working on your stories, and your stories have "customers", you can leave it to your customers to fight it out over which work gets the highest priority! From the examples above there are the following high-level tasks:

  • Automate the builds and deployments for the super-duper project
  • Automate the generation of management reports
  • Stabilise the dev environments
  • Implement failover and disaster recovery for your source control system (why has this not been done already???!!!!)

Each of these tasks has a customer, and they all want them doing yesterday. Simply get all the customers in a room and then get the hell out of there… sorry, I mean work together to sort out the priorities.

How to Deal With Unplanned Work

Probably the single hardest issue to overcome has been how to manage the constant interruptions and unplanned work. A few years back, Rachel Davies came in and gave us some valuable agile coaching, and from those lessons, and my own experiences, I’ve worked out a few ways of dealing with all the unplanned work that comes my way.

First of all, I’ll explain what I mean by unplanned work. I’m essentially talking about anything that crops up which I haven’t included in my sprint plan, which I have to work on at some point during the sprint. Typically these are things like emergency releases, fixing broken environments, sorting out server failures and sometimes emergency secondment into other teams. “Fixing stuff that unexpectedly broke” is probably the most common one!

The way I have come to deal with unplanned work is to start off by pretending it simply doesn’t exist. Plan a sprint as if there will be no unplanned work at all. Then, during the course of the sprint, whenever unplanned work comes your way, take an estimate of how long it took, and more importantly, make an estimate of how much time it has set you back. The two don’t necessarily equate to the same thing, I’ll explain: If I’m working on a particularly complicated scripting task that has taken me a good while to get my head around, and then I’m asked to fix a broken VM or something, the task of fixing the VM may only take an hour or two at most, but getting back to where I was with the script may take me another 2 hours on top of that, especially if someone else has changed it in the meantime! Suddenly I’ve lost half a day’s work due to a one or two hour interruption. The key is to track the time lost, rather than the time taken. I record all of the time lost due to unplanned work in a separate sprint called “Unplanned Work”. I use acunote for this. This allows me to see how much time I lose to unplanned work each sprint. After a while I can see roughly how much time I should expect to “lose” each sprint, and I adjust my sprint planning accordingly.

One way of working, which helps to highlight the amount of unplanned work you’re carrying out, is to plan your sprints as normal, and then say to the customers/sponsors (who should ideally be represented in your planning session) “right, that’s what we could be doing without unplanned work, but now I’m afraid we have to remove x points”. This is a rather crude way of ramming home the reality of working in a department which has a higher than average amount of interruptions and unplanned work (certainly in comparison to dev/qa). It also works as a good way of highlighting the cost of unplanned work to the management team. They hate having work taken out of the sprints and when they realise that unplanned work is costing them in terms of delivery, they are far more likely to act upon it. This could mean investing in better hardware/software, reprioritising that work that you wanted to automate, or hiring more staff.

- If you’re interested to know more about user stories I highly recommend Mike Cohn’s book “User Stories Applied”.

- Rachel Davies is an agile consultant who co-authored the “Agile Coaching” book. She also runs agile coaching courses at skillsmatter

8 Principles of Continuous Delivery

August 4, 2011 10 comments

Dave Farley co-authored "Continuous Delivery", an excellent book in the Martin Fowler signature series, which goes into great detail about the evolution of Continuous Integration, and how to achieve continuous delivery (or continuous deployment) using "build pipelines".

I went along to hear Dave Farley give a talk on Continuous Delivery and how they're doing it where he works, at LMAX. It was a really great session and he managed to cover, in quite a short space of time, a great deal of content from the book. I've put together some highlights of what he covered in the talk, mixed with my own take on things.

Here’s what I learned…

Continuous delivery is basically the logical extension of Continuous Integration  – it’s a more holistic solution than C.I. though, as it encapsulates a lot more than just the development of software.

For instance, continuous delivery focuses a lot more on requirements than C.I. ever did, and involves a great deal more people on the delivery chain than traditional C.I. as well. It also has a greater customer focus than C.I.

Now, here’s something I didn’t know about continuous delivery…

There are 12 principles behind the agile manifesto, the first of which is:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software

Well who’d have thought it? Continuous delivery was mentioned waaaaay back in the days of the agile manifesto, some 2500 years BC* and yet for most of us it seems like a pretty new idea.

Continuous delivery is based on the use of smart automation. This is all about creating a repeatable and reliable process for delivering software. You have to automate pretty much everything in order to be able to achieve continuous delivery. Manual steps will get in the way or become a bottleneck. This goes for everything from requirements authoring to deploying to production.

The focus is on the finished article – again, this is described as being:

Working software in the hands of the user

software in the hands of the user

software in the hands of the user

Because the focus is on the software in the hands of the user, there's less tendency, from a developer's perspective, to simply chuck software over the wall to the QA team, and similarly to the Netops/production team.

Continuous delivery is all about getting that product out there, and getting the feedback from the users. This might mean delivering “unfinished” demo software during your development iterations, and getting your users to give valuable early feedback, or it might mean deploying experimental software to a website cluster and tracking how successful this new site is as compared to the existing system. Either way, it’s all about feedback loops. Essentially you want to have as rapid a feedback loop with your users as possible.

Feedback loops are familiar to everyone who has worked on a Continuous Integration system. In C.I. feedback loops are generally about getting test feedback (unit test, acceptance test, performance test etc) as quickly as possible – “Fail Fast” – as you’ve probably heard.

Continuous Delivery, as described, takes this idea to its logical conclusion, and gets the users involved in the feedback loop. This is a good example of how Continuous Delivery is more holistic than its C.I. predecessor. In Continuous Delivery, the feedback loop provides feedback not only on the quality of your code, but on the quality of your requirements, and the quality of your processes for delivering software.

8 Principles of Continuous Delivery

  1. The process for releasing/deploying software MUST be repeatable and reliable. This leads onto the 2nd principle…
  2. Automate everything! A manual deployment can never be described as repeatable and reliable (not if I’m doing it anyway!). You have to invest seriously in automating all the tasks you do repeatedly, and this tends to lead to reliability.
  3. If something's difficult or painful, do it more often. On the surface this sounds silly, but basically what this means is that doing something painful more often will lead you to improve it, probably automate it, and this will eventually make it painless and easy. Take, for example, deploying a database schema. If this is tricky, you tend to not do it very often; you put it off, maybe you'll do one a month. Really what you should do is improve the process of doing the schema deployments, get good at it, and do it more often, like one a day if needed. If you're doing something every day, you're going to be a lot better at it than if you only do it once a month.
  4. Keep everything in source control – this sounds like a bit of a weird one in this day and age, I mean seriously, who doesn’t keep everything in source control? Apparently quite a few people. Who knew?
  5. Done means "released". This implies ownership of a project right up until it's in the hands of the user, and working properly. There's none of this "I've checked in my code so it's done as far as I'm concerned". I have been fortunate enough to work with some software teams who eagerly made sure their code changes were working right up to the point when their changes were in production, and then monitored the live system to make sure their changes were successful. On the other hand I've worked with teams who thought their responsibility ended when they checked their code in to the VCS.
  6. Build quality in! Take the time to invest in your quality metrics. A project with good, targeted quality metrics (we could be talking about unit test coverage, code styling, rules violations, complexity measurements – or preferably, all of the above) will invariably be better than one without, and easier to maintain in the long run.
  7. Everybody has responsibility for the release process. A program running on a developer's laptop isn't going to make any money for the company. Similarly, a project with no plan for deployment will never get released, and again make no money. Companies make money by getting their products released to customers, therefore this process should be in the interest of everybody. Developers should develop projects with a mind for how to deploy them. Project managers should plan projects with attention to deployment. Testers should test for deployment defects with as much care and attention as they do for code defects (and this should be automated and built into the deployment task itself).
  8. Improve continuously. Don’t sit back and wait for your system to become out of date or impossible to maintain. Continuous improvement means your system will always be evolving and therefore easier to change when needs be.

To go with these principles there are also:

4 Practices of Continuous Delivery

  1. Build binaries only once. You’d be staggered by the number of times I’ve seen people recompile code between one environment and the next. Binaries should be compiled once and once only. The binary should then be stored someplace which is accessible only to your deployment mechanism, and your deployment mechanism should deploy this same binary to each successive environment…
  2. Use precisely the same mechanism to deploy to every environment. It sounds obvious, but I’ve genuinely seen cases where deployments to QA were automated, only for the production deployments to be manual. I’ve also seen cases where deployments to QA and production were both automated, but in 2 entirely different languages. This is obviously the work of mad people.
  3. Smoke test your deployment. Don't leave it to chance that your deployment was a roaring success; write a smoke test and include it in the deployment process. I also like to include a simple diagnostics test: all it does is check that everything is where it's meant to be – it compares a file list of what you expect to see in your deployment against what actually ends up on the server. I call it a diagnostics test because it's a good first port of call when there's a problem (there's a rough sketch of this sort of thing after the list).
  4. If anything fails, stop the line! Throw it away and start the process again, don't patch, don't hack. If a problem arises, no matter where, discard the deployment (i.e. roll back), fix the issue properly, check it in to source control and repeat the deployment process. A lot of people comment that this is impossible, especially if you've got a tiny outage window to deploy things to your live system, or if your production changes are done in the middle of the night while nobody else is around to fix the issue properly. I would say that these arguments rather miss the point. Firstly, if you have only a tiny outage window, hacking your live system should be the last thing you want to do, because this will invalidate all your other environments unless you similarly hack all of them as well. Secondly, the next time you do a deployment, you may reintroduce the original issue. Thirdly, if you're doing your deployments in the middle of the night with nobody around to fix issues, then you're not paying enough attention to the 7th principle of Continuous Delivery – everybody has responsibility for the release process. Unless you can't avoid it, I wouldn't recommend doing releases when there's the least amount of support available; it simply goes against common sense.
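
As promised in point 3 above, here's a rough sketch of what that diagnostics/smoke test could look like, written as an Ant target. The property names, file names and URL are all made up for illustration – this isn't the actual test from any real deployment:

<target name="smoke-test">
  <!-- diagnostics: fail the deployment if an expected file didn't make it to the server -->
  <available file="${deploy.dir}/myproject.jar" property="jar.present" />
  <fail unless="jar.present" message="myproject.jar is missing from ${deploy.dir}" />

  <!-- smoke test: hit the app and make sure it actually responds -->
  <get src="http://${deploy.host}/healthcheck" dest="${temp.dir}/healthcheck.out" />
</target>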

* Approximate date.
