Agile ITOps: Day 4

It’s day 4 of our first agile ITOps sprint, and so far it’s pretty much going to script. Our estimate of how much work we’d be able to get through is looking roughly in the right ballpark (for more details on how we did the estimating, and other stuff, see Agile ITOps: Day 1 below).

Looking at our sprint board a couple of things are fairly apparent. Firstly, that we were right about the number and frequency of interruptions, and secondly that we’re going to need a bigger board:

We’re going to need a bigger board

One of the things we aimed to achieve with the sprint board was to raise the visibility of the constant “interruptions” we have to work on. To do this we decided to note down every “interruption” on a red/pink card. As you can see in the picture above, we’ve worked more on interruptions than we have on our committed tasks. This was exactly what we expected to see.

Another thing we expected to see was a correlation between the number of interruptions and the amount of project work we achieved: on days with a high number of interruptions, the number of project points completed should drop off. That hasn’t quite materialised yet, as you’ll see in the burndown below:

[Image: day 4 burndown chart]

2 Plus 2 Doesn’t Equal 4?

Another thing I’m expecting to see is that the maths just don’t add up. That is, when you add up the points from the committed tasks we completed and the points we did on interruptions, I don’t expect the total to account for 100% of the team’s time – the reality is that people can’t switch from one task to another without losing some productivity along the way. At the end of this sprint I’m hoping to get a rough estimate of this “lost time”. In due course I’ll try to see whether there’s any relationship between the amount of “lost time” and the types of tasks/interruptions we work on, so that we can effectively “cost” some of our regular tasks.
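
To put some (entirely made-up) numbers on it, here’s the sum I have in mind, sketched as a few lines of Groovy (using the 1 point ≈ 1 hour scale described in the Day 1 post below):

// Hypothetical numbers, purely for illustration
def capacityHours  = 4 * 5 * 7    // e.g. 4 people x 5 days x 7 working hours
def committedDone  = 55           // points completed on committed tasks
def interruptsDone = 62           // points completed on interruptions
def lostHours      = capacityHours - (committedDone + interruptsDone)
println "Lost to context switching: ${lostHours}h of ${capacityHours}h capacity"
// prints: Lost to context switching: 23h of 140h capacity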

Agile ITOps: Day 1

Today we started day 1 of our first ever ITOps sprint. This all came about because we needed a way of working out our productivity on “project tasks”, as well as learning how to triage our interruptions a bit better. None of the IT Ops team had tried any form of agile before, so there’s a mix of excitement and apprehension at the moment! I’ve opted for old skool cards and a sprint board (we’re using scrum, by the way – more on this in a bit) rather than an IT-based system like Jira. This is mainly because everyone’s co-located, but also because I think it’s a good way to advertise what we’re doing within the office, and it’s an easy way to introduce a team to scrum.

The team already had a backlog of projects, so it was quite an easy job to break these down into cards. First of all we prioritised the projects and then sized up a bunch of tasks. We’re guessing, for this first sprint, that we’ll probably be around 40% to 50% productive on these project tasks, while the rest of our time will be taken up by interruptions (IT requests, responding to inquiries, dealing with unexpected issues etc).

Sprint 1 – planned, or “committed” tasks are on green cards, while interruptions go on those lovely pinky-reddish ones.

I’m using a points system roughly equivalent to 1 point = 1 hour’s effort. This is finer-grained than you might expect in a dev sprint, but since we’re dealing with a very high number of smaller tasks, I’ve opted for the more granular scale.

As well as project work, there was also an existing list of IT requests in the queue waiting to be done. I wanted to “commit” to getting some of these done, so we planned them into the sprint. I had originally thought of setting a goal of reducing the IT request queue by 30 tickets, and having that as a card, but I decided against this because I don’t feel it really represents a single task in its own right. I might change my mind about this at a later date though! So instead, we just chose the highest priority IT requests and arranged them into logical groupings (for instance, 3 IT requests to rebuild laptops would be grouped together). At the end of our planning session we had committed to achieving approximately 40% productivity.

Managing Interruptions

We expect at least 50% of our time to be taken up by interruptions – this is a best-guess estimate, and will be refined from one sprint to the next as we progress on our agile journey. Whenever we get an interruption – say someone comes over and says his laptop is broken, or we get an email saying a printer has mysteriously stopped working – we scribble a very brief summary of the issue on a red/pink card and make a quick call on whether it’s a higher priority than our current “in progress” tasks. If it is, it goes straight into “in progress”, and we may have to park one of the current tasks to make room. Otherwise, the interruption goes into the backlog or the “to do” list, depending on urgency.
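
The decision itself is simple enough to sketch as code (a toy model – the names and categories here are mine, not part of any real tool we’re using):

// Toy model of the triage call we make for each red card
enum Urgency { BEATS_IN_PROGRESS, NEEDS_DOING_SOON, CAN_WAIT }

String triage(Urgency urgency) {
    switch (urgency) {
        case Urgency.BEATS_IN_PROGRESS: return 'in progress'  // may park a current task
        case Urgency.NEEDS_DOING_SOON:  return 'to do'
        default:                        return 'backlog'
    }
}

assert triage(Urgency.BEATS_IN_PROGRESS) == 'in progress'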

At the end of the sprint we’re expecting to see a whole lot more red cards than green ones in the “Done” column. Indeed, we’re only a couple of hours into the first sprint and already the interruption cards outweigh committed work by more than 4:1 in terms of hours actually worked today!

Interruptions already outweigh “committed” tasks by 4:1 in terms of hours worked – looks like my 40% guess was a bit optimistic!

Triage and Priorities

One of the goals of this agile style of working is to get us to think a bit smarter when it comes to prioritising interruptions. It’s been felt in the past that interruptions were always given higher priority than our project work, when perhaps they shouldn’t have been. I suspect that in reality there’s a combination of reasons why interruptions got done ahead of project work: they’re usually smaller tasks, someone is usually telling you it’s urgent, it’s nice to help people out, and sometimes it’s not easy to tell someone that you aren’t going to do their request because it’s not high enough priority. By writing the interruptions down on cards, and being able to look at the board and see the tasks we’ve committed to deliver, I’m hoping we’ll make better judgement calls on priority, and find it easier to show people that their requests are competing with a large number of other tasks which we’ve promised to get done.

Why Scrum?

I don’t actually see this as the perfect solution by any means, and as you might be able to tell already, there’s a slight leaning towards Kanban. I eventually want to go full-kanban, so to speak, but I feel that our scrum-based approach provides a much easier introduction to agile concepts. Kanban seems much better geared towards an Ops-style environment, with its continuous additions of tasks to the backlog and its rolling process. I think the pull-based system and the focus on “Work In Progress” will provide some good tools for the ITOps team to manage their workflow. For now though, they’re getting down with a bit of scrum to get used to the terms and the ideas, and to understand what the hell the devs are talking about!

Retrospectives, burn-downs and Continuous Improvement

We’ll stick with scrum for a while, and at the end of our sprint we’ll demo what project work we actually got through, as well as advertise the number of IT requests we did and the number of “interruptions” we successfully handled (although I don’t think I’ll keep calling them “interruptions” :-)). I’m also looking forward to our first retrospective, which we’ll be doing at the end of each sprint as a learning exercise – we want to see what we did well and what we did badly, as well as take a good look at our original estimates!

I’ll keep track of our burndown, and as the weeks go by I’ll try to give the team as much supporting information as I can, such as the number of hours since the last interruption, the number of project tasks stuck in “In Progress”, and so on. But most important of all, we’re looking at this as a learning experience: an opportunity to improve the way we manage projects, and a chance for everyone to get better at prioritising and estimating.

I’ll keep the posts coming whenever I get the chance!

Continuous Delivery with a Difference: They’re Using Windows!


Last night I was taken to the London premiere of Warren Miller’s latest film, “Flow State”. Free beers, a free goodie bag and an hour and a half of the best snowboarders and skiers in the world doing tricks which in my head I can also do, but which in reality are about 10000000% better than anything I can manage. Good times. Anyway, on my way home I was thinking about “flow” and how it applies to DevOps – it’s a tricky thing to maintain in an Ops capacity. It reminded me of a talk I went to last week where the speakers talked of the importance of “Flow” in their project, and it’s inspired me to write it up:

Thoughtworkers Pat and Aleksander have been working at a top secret location* for a top secret company** on a top secret mission to implement continuous delivery in a corporate Windows world***
* Ok, I actually forgot to ask where it was located

** It’s probably not a secret, but they weren’t telling me

*** It’s obviously not a secret mission seeing as how: a) it was the title of their talk and b) I just said what it was

Pat and Aleksander put their collective Powerpoint skills to good use and made a presentation on the stuff they’d done and learned during their time working on this top secret project, but rather than call their presentation “Stuff We Learned Doing a Project” (that’s what I would have named it) they decided to call it “Experience Report: Continuous Delivery in a Corporate Windows World”, which is probably why nobody ever asks me to come up with names for their presentations.

This talk was at Skills Matter in London, and on my way over there I compiled a list of questions which I was keen to hear their answers to. The questions were as follows:

  1. What tools did they use for infrastructure deployments? What VM technology did they use?
  2. How did they do db deployments?
  3. What development tools did they use? TFS?? And what were their good/bad points?
  4. Did they use a front-end tool to manage deployments (i.e. did they manage them via a C.I. system)?
  5. Was the company already bought-in to Continuous Delivery before they started?
  6. What breed of agile did they follow? Scrum, Kanban, TDD etc.
  7. What format were the built artifacts? Did they produce .msi installers?
  8. What C.I. system did they use (and why did they decide on that one)?
  9. Did they use a repository tool like Nexus/Artifactory?
  10. If they could do it all again, what would they do differently?

During the evening (mainly in the pub afterwards) Pat and Aleksander answered almost all of the questions above, but before I list them, I’ll give a brief summary of their talk. Disclaimer: I’m probably paraphrasing where I’m quoting them, and most of the content is actually my opinion, sorry about that. And apologies also for the completely unrelated snowboarding pictures.

Cultural Aspects of Continuous Delivery

Although CD is commonly associated with tools and processes, Pat and Aleksander were very keen to push the cultural aspects as well. I couldn’t agree more with this – for me, Continuous Delivery is more than just a development practice, it’s something which fundamentally changes the way we deliver software. We need an extremely efficient and reliable automated deployment system, a very high degree of automated testing, small consumable-sized stories to work from, very reliable environment management, and a simplified release management process which doesn’t create more problems than it solves. Getting these things right is essential to doing Continuous Delivery successfully. As you can imagine, implementing them can be a significant departure from traditional software delivery (which tends to rely heavily on manual deployments and testing, as well as quite restrictive project and release management processes). This is where the cultural change comes into effect: developers, testers, release managers, BAs, PMs and Ops engineers all need to embrace Continuous Delivery and significantly change the traditional software delivery model.

FACT BOMB: Some snowboarders are called Pat. Some are called Aleksander.

Tools

When Aleksander and Pat started out on this project, the dev teams were already using TFS as a build system and for source control. They eventually moved to TeamCity as the Continuous Integration system, with Git-TFS as the devs’ primary interface to source control.

The most important tool for a snowboarder is a snowboard. Here’s the one I’ve got!

  • Builds were done using msbuild
  • They used TeamCity to store the build artifacts
  • They opted for Nunit for unit testing
  • Their build created zip files rather than .msi installers
  • They chose Specflow for stories/spec/acceptance criteria etc
  • Used Powershell to do deployments
  • Sites were hosted on IIS


Practices

“Work in Progress” and “Flow” were the important metrics in this project, by the sounds of things – suffice to say they used Kanban. I neglected to ask whether they actually measured their flow against quality; if I find out, I’ll make a note here. Anyway, back to the project… because of the importance of Work in Progress and Flow they chose Kanban, but they still used iterations and weekly showcases. These were more for show than anything else: they continued to work off a backlog, and any tasks unfinished at the end of one “iteration” simply rolled over to the next.

Another point that Pat and Aleksander stressed was the importance of having good Business Analysts. They were essential in carving up stories into manageable chunks, avoiding “analysis paralysis”, shielding the devs from “fluctuating functionality” and ensuring stories never got stuck for too long. Some other random notes on their processes/practices:

  • Used TDD with pairing
  • Testers were embedded in the team
  • Maintained a single branch of code
  • Regression testing was automated
  • They still had to raise a release request with Ops to get stuff deployed!
  • The same artifact was deployed to every environment*
  • The same deploy script was used on every environment*

* I mention these two points because they’re oh-so important principles of Continuous Delivery.

Obviously I approve of the whole TDD thing, testers embedded in the team, automated regression testing and so on, but I’m not so impressed with the idea of having to raise a release request (manually) with Ops whenever they want to get stuff deployed – it’s not very “devops” 🙂 I’d seek to automate that request/approval process. As for the “single branch of code”, well, it’s nice work if you can get it. I’m sure we’d all like to have a single branch to work from, but in my experience it’s rarely possible. And please don’t say “feature toggling” at me.

One area the guys struggled with was performance testing. Firstly, it kept getting de-prioritised, so by the time they got round to it, it was a little late in the day – I assume any design reconsiderations it might have prompted had effectively been ruled out by then. Secondly, they had trouble actually setting up the load testing in Visual Studio – settings hidden all over the place, etc.

Infrastructure

Speaking with Pat, he was clearly very impressed with the wonders of Powershell scripting! He said they used it very extensively for installing components on top of the OS. I’ve just started using it myself (I’m working with Windows servers again) and I’m very glad it exists! However, Aleksander and Pat did reveal that the procedure for deploying a new test environment wasn’t fully automated – they had to work off a checklist, and in reality every machine in every environment required some degree of manual intervention. I wish I had a bit more detail about this; I’d like to understand what the actual blockers were (before I run into them myself), and I hate to think that Windows can be a real blocker for environment automation.

Anyway, that’s enough of the detail, let’s get to the answers to the questions (I’ve added scores to the answers because I’m silly like that):

  1. What tools did they use for infrastructure deployments? What VM technology did they use? – Powershell! They didn’t provision the actual VMs themselves, the Ops team did that, and they weren’t sure what tools the Ops team used. Score: 1
  2. How did they do db deployments? – Pass. Score: 0
  3. What development tools did they use? TFS?? And what were their good/bad points? – TFS. Source control and C.I. were bad, so they moved to TeamCity and Git-TFS. Score: 2
  4. Did they use a C.I. tool to manage deployments? – Nope. Score: 0
  5. Was the company already bought-in to Continuous Delivery before they started? – They hired ThoughtWorks, so I guess they must have been at least partly sold on the idea! Agile adoption was on their roadmap. Score: 1
  6. What breed of agile did they follow? Scrum, Kanban, TDD etc. – TDD with Kanban. Score: 2
  7. What format were the built artifacts? Did they produce .msi installers? – Negatory, they used zip files like any normal person would. Score: 2
  8. What C.I. system did they use (and why did they decide on that one)? – TeamCity, which is interesting seeing as ThoughtWorks Studios produce their own C.I. system, “Go”. I’ve used Go before: conceptually it’s pretty good and it has some great features, but it’s expensive and hard to manage once you’re running over 50 builds and 100 test agents, the UI is buggy, and the open source competitors are catching up fast. Score: 2
  9. Did they use a repository tool like Nexus/Artifactory? – They used TeamCity’s own internal repo, a bit like Jenkins, where you can store build artifacts. Score: 1
  10. If they could do it all again, what would they do differently? – They wouldn’t push so hard for the Git-TFS integration; at the end of the day it probably wasn’t worth the considerable effort. Score: 1

TOTAL: 12

What does this total mean? Absolutely nothing at all.

What significance do all the snowboard pictures have in this article? None. Absolutely none whatsoever.

A Really $h!t Branching Policy

“As a topic of conversation, I find branching policies to be very interesting”, “Branching is great fun!”, “I wish we could do more branching” are just some of the comments on branching that you will never hear. And with good reason, because branching is boring. Merging is also boring. None of this stuff is fun. But for some strange reason, I still see the occasional branching policy which involves using the largest number of branches you can possibly justify, and of course the most random, highly complex merging process you can think of.

Here’s an example of a really $h!t branching policy:

Look how hateful it is!! I imagine this is the kind of conversation that leads to this sort of branching policy:

“Right, let’s just work on main and then take a release branch when we’re nearly ready to release”

“Waaaaait a second there… that sounds too easy. A better idea would be to have a branch for every environment, maybe one for each developer as well, and we should merge only at the most inconvenient time, and when we’ve merged to the production branch we should make a build and deploy it straight to Live, safe in the knowledge that the huge merge we just did went perfectly and couldn’t possibly have resulted in any integration issues”

“Er, what?”

“You see! Its complexity is beautiful”

Conclusion

Branching is boring. Merging is dull and risky. Don’t have more branches than you need. Work on main, take a branch at the latest possible time, release from there and merge daily. Don’t start conversations about branching with girls you’re trying to impress. Don’t talk about branches as if they have personalities, that’s just weird. Use a source control system that maintains branch history. Floss regularly. Stretch after exercise.


How Do You Do Fixed Bid With Agile?


Last week I went along to an Agile Evangelists Meetup in London. The speaker was Kim Lennard, who works for consultancy firm Equal Experts and is currently working with one of their clients, O2. Her background is in managing Agile transformations, which is what she has been involved with for the last couple of years at O2.

There was no fixed “theme” for this particular meetup; instead, Kim ran a sort-of “open-space” forum where we were all invited to come up with questions/topics, and then we voted for whichever ones we wanted to discuss first. Once we’d voted, it turned out that nobody really wanted to talk about the topic I’d suggested, which was “how do you implement agile in a company which has geographically distributed teams” (even cheating and voting three times for my own topic didn’t work); rather, people wanted to talk about “How Fixed Bid & Agile Go Together”. There were a lot of agile consultants in the crowd, clearly, and I was outvoted 😦

How Do You Do Fixed Bid With Agile?

Not being an agile consultant, I didn’t really know what a “Fixed Bid” contract was, or how it worked. Basically, fixed bid contracts have an agreed up-front cost, and payment isn’t based on the amount of time or resources expended – so you can understand how this might seem incompatible with agile. Interestingly, Kim said it was always a good idea to find out why a contract was being done on a fixed-bid basis rather than, say, cost-plus. I got the general impression that fixed-bid contracts were difficult to deal with, and that if possible it would be favourable to convince the client to alter the contract basis – though I imagine that’s unlikely to be possible in most cases.

Great Expectations

Kim’s first bit of advice for dealing with fixed-bid contracts was to manage expectations appropriately. Fixed Bid is a harder environment, and you have to manage the risks associated with it.

You have to be smart about how you manage expectations


One way of doing this is to get the client involved on a day-to-day basis, so they can clearly see and understand the challenges and issues you experience every day. These challenges and issues represent the risks to the project – if the client sees and experiences them at first hand, just as you do, then their “expectation” will be accordingly adjusted. I liked this bit of advice and I would also encourage this “good-practice” to be extended to all agile projects, not just fixed bid!

Great Expectations are good, Realistic Expectations are better.

When I was at university, I decided to catch up on some of the classic literature I had widely ignored during my school days, and embarked upon a mission to read the likes of Dickens, Dostoyevsky, and that other bloke who did the plays. I started with Oliver Twist by Charles Dickens, which I loved, and then went on to read Great Expectations, which wasn’t as good as I thought it would be. Haha. Aside from being an awful joke, that highlights the need to manage expectations appropriately. With fixed bid contracts it’s essential to highlight areas of risk and make sure you don’t over-promise and under-deliver – you have to be pragmatic and realistic. Let the client know exactly what represents a risk to delivering on time and on target. In other words, don’t give your client Great Expectations unless they’re also Realistic Expectations.

What are the Risks?

Kim identified a number of factors which could contribute to projects not delivering on time or on budget (i.e. risks), and these are:

  • Over-promising. This leads to under-delivering, obvs!
  • Not knowing your team’s velocity. Maybe they’re a new team, and not very well established. This could mean that you under or overestimate the velocity from one sprint to the next, leading your client on a merry dance, and thus risking their trust.
  • Relationship with the client. How well do you know your client? Do the clients understand you and your team? Is there trust? Do you have shared goals and a shared ethos? If the answer is “no” to any of these, then you have yourself a risk!
  • Dodgy requirements! That old chestnut 🙂 If the requirements are unclear or subject to change, then this is certainly going to impact your delivery and as such this must be flagged as a risk.

Continuous Delivery Warts and All

Tom Duckering was back at Skills Matter this week, and this time he brought a friend (and fellow thoughtworker), Marc Hofer. They were there to talk to us about a “real life” continuous delivery project they’ve recently been working on. I sat, listened, took notes, and then I had to leave because I was meeting my girlfriend at the cinema to watch “Snow White and the Huntsman”, which was absolutely AWFUL by the way. Do not waste your time on this movie, it seriously drags on forever and I actually fell asleep before the end. It has Charlize Theron in it (is it me or is she in everything right now?), but don’t let that fool you, it’s still rubbish. Anyway, as I was saying, I took notes, and this is what I learned…

Warts?

The “warts and all” title was meant as a caveat that they don’t claim to have got everything perfectly right, and that there were problems along the way. The client for this particular project was “Springer” (a publishing company), and the job was (basically) to redesign the website. One of the problems they were aiming to fix was the “time to release”, which was in the region of months rather than hours, so they decided to go all Continuous Delivery from the outset. Another thing worth mentioning is that this was a greenfield project, which has its advantages and disadvantages, as outlined here in my incredibly pointless table:

I did that table in Powerpoint, thus highlighting my potential as a senior manager.

Why Continuous Delivery?

The fact that they chose to follow the continuous delivery path right from the outset was an important decision. In my experience, continuous delivery isn’t something you can easily retrofit into an existing system – it’s certainly not as easy as when you set out to follow continuous delivery right from the start. Tom put it like this:

You can’t sell continuous delivery as a bolt-on

Which, as usual, is a much better way of putting it than I just did.

One of the reasons they went for the continuous delivery approach with this client was to sell more of Jez Humble’s Continuous Delivery book (available on Amazon at a very reasonable price). Just kidding! They would never do that. They actually chose continuous delivery because of the good practices (I’m trying to stop using the term “best practices”, as I’ve learned that it’s evil) it enforces on a project. Continuous delivery allows you to have fast, frequent releases, which forces small changes rather than big ones, and also forces you to automate pretty much everything. They even automated the release notes, which is something we’ve also done on a project I’m currently working on! Our release notes are populated from a template, the content is pulled in from Jira, and they’re packaged up in every single build. Neat, no? Well, Tom seemed pretty impressed with the idea, and I’m quite chuffed that we’re doing the same stuff.
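
For the curious, our version is roughly this shape (a heavily simplified Groovy sketch of our own setup, not theirs – the URL, JQL and release number are made up, Jira’s standard REST search endpoint is assumed, and a real build would need authentication and error handling):

import groovy.json.JsonSlurper

// Pull the fixed issues for this release out of Jira...
def jql    = URLEncoder.encode('fixVersion = "1.42" AND status = Done', 'UTF-8')
def search = new URL("https://jira.example.com/rest/api/2/search?jql=${jql}")
def issues = new JsonSlurper().parseText(search.text).issues

// ...and drop them into the release-notes template
new File('release-notes.txt').text = "Release 1.42\n\n" + issues.collect {
    "- ${it.key}: ${it.fields.summary}"
}.join('\n')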

Another reason they opted for a continuous delivery approach was to overcome the IT bottleneck problem.

Look at all the cool stuff I can do with MS paint!!

It would seem that there was an IT black hole which was unable to produce as quickly as the business demanded. I usually hear people say “Agile” is the solution to the IT bottleneck, rather than continuous delivery, but Tom made a point of saying that they were agile as well. I think continuous delivery helps teams to focus on the delivery aspect of agile, and gives us a way of bringing the delivery issues much further back down the line, where they can be addressed more easily, and not at the last minute. As I mentioned earlier, time-to-market was an important driving factor in choosing continuous delivery. I would also add that, in my experience, having a predictable time to market is of great importance to the business. You tend to find that project sponsors don’t mind waiting a couple of weeks, maybe longer, for a change to go live, as long as that estimate is realistic.

The Details

I won’t go into too much technical detail about the project they were working on, so I’ll summarise it like this:

  • Local virtualisation was done using Vagrant and VirtualBox, so devs could easily spin up new environments locally.
  • They used Git, and it wasn’t easy. Steep learning curve etc. Using submodules didn’t help either.
  • They had on-site Git go-to people, which helped with the Git learning curve.
  • Devs could deploy to any environment – this was useful for building up environments, but is scary as hell.
  • They kept the branches to a minimum – only for bugfixes or when doing feature toggle releasing.
  • They do check-in stats analysis to “incentivize” people. Small and frequent commits were rewarded.
  • They used Go (they have my sympathy).
  • They deploy using capistrano
  • They deploy to a versioned directory and use symlinks, which helps with rollbacks – I’d say this is a pretty standard practice (see the sketch just after this list)
  • They use Kickstart and Chef to build workstations, and Chef-Solo for other environments
  • The servers are provisioned with VMWare, the base OS installed with Cobbler/Kickstart, and the “configuration” applied by Chef
  • Even the QA environment was load balanced!
  • This is a long list of bullet points
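
That versioned-directory-plus-symlink trick from the list above is simple enough to sketch in a few lines (hypothetical paths and version, and remember their real deploys used capistrano rather than anything like this):

import java.nio.file.*

def releases = Paths.get('/opt/app/releases')
def current  = Paths.get('/opt/app/current')
def target   = releases.resolve('1.2.3')   // the new artifact gets unpacked in here first

// Flip the 'current' symlink to the new release; rolling back is just
// pointing it at the previous versioned directory again.
Files.deleteIfExists(current)              // removes the old link, not the old release
Files.createSymbolicLink(current, target)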

I was pretty interested in the idea of load balancing the test environment, because it reminded me of a problem I had at a company I was working for a few years ago. We didn’t have a load balanced test environment, but we did have a load balanced live environment, and one night we did a scheduled production release which just wouldn’t work. It was about 4am and things weren’t looking good. Luckily for me, a particularly bright developer by the name of Andy Butterworth was on hand, and he got to the bottom of the problem and dug us out of a hole. The problem was load-balance related, of course: our new code hadn’t been written for a load balanced cluster, but we never picked that up until it was too late. I’m not sure what past experiences drove Tom and Marc to implement a load balanced test environment, but it’s a good job they did, as Tom testified that it has saved their bacon a few times.

Load balancing QA has saved our bacon a few times!

One of the other things I was interested in was the idea of using Vagrant and VirtualBox for local VM work. I was surprised at this because they’re also using VMware – I wondered why, if they’re already using VMware, they didn’t just use VMware Player for their local VMs.

I was also interested in the way they’d configured Go, which, at a glance, looked totally different to how we’ve got ours set up where I’m currently working. I’m hoping Tom will shed some light on this in due course!

I loved the idea of using check-in stats to incentivize the team! I’m really keen on the whole gamification thing at the moment, and I’m trying to think of some cool gamified way of incentivizing teams where I work. The check-in stats approach Tom talked about looked cool: they analyse the number of check-ins per person, look at the devs’ commit comments too, and produce a scoreboard 🙂
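
Something like this would be enough for a crude commit scoreboard (my own toy take on the idea, assuming git is on the PATH – I’ve no idea how theirs actually works):

// Count commits per author from git history and print a scoreboard
def authors = 'git log --pretty=%an'.execute().text.readLines()
authors.countBy { it }
       .sort { -it.value }
       .each { author, commits -> println "${commits.toString().padLeft(4)}  ${author}" }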

More Than Tools

I’ve been to a few talks and conferences recently and one of the underlying messages I’ve got from most of them is that people and relationships are more important than tools, and by that I mean that it’s more important to get relationships right than it is to pick the right tools. Bringing in a new amazing tool isn’t going to fix the big problems if the big problems are down to relationships.

I can think of a few examples: introducing tools like VMware and Chef are great at helping to speed up provisioning and configuring of environments, but if you don’t actually work on the relationships between the development and operations teams, then the tools won’t have any effect, the operations team might not buy into them, or maybe they’ll use them but not in the way the developers want them to. Another example: bringing in a new build tool because your old build system was unreliable. This isn’t going to fix your problem if your problem was that your old system was unreliable because development weren’t communicating clearly with the build engineers.

So relationships are key. But how do we make sure we’ve got good relationships? Well, I think if anyone knew the answer to that one they’d bottle it and sell it for millions. The truth is that it’s different for every situation, but there are things which can make sure you’re all on the same page, which is a start:

  • Have shared goals! I’m often banging on about this. Everyone has to push in the same direction. For me, in reality this often means trying to educate people that we don’t make any money from having reliable builds on developers laptops if the builds are unreliable in the CI/build system. We don’t make money out of finishing all our story points on time. We don’t make money out of writing new features. We make money by delivering quality software to customers! So I think that is exactly what we should all be focused on.
  • Be agile! I know this might seem a bit like it’s the wrong way around, but I actually think that being agile helps to build relationships. It’s a practice and a mindset as much as a process, and so if people share that mindset they’re naturally going to work better together. In my experience, in Operations teams we’ve been quite slow at adopting agile in comparison to other teams. It’s time for this to change. Tom said that on the project he’s working on, the Ops team are agile, and he identified that as one of the success areas.
  • Pair up. There’s nothing quite like sitting next to someone for a couple of days to help you see things from their perspective! On Tom & Marc’s project at Springer they paired the ops guys with dev. I would recommend going further and pairing dev with support engineers, QA (obvs!) and build/release management on a regular basis. Pairing them with users/customers would be even better!
  • Skill up. Tom & Marc talked about cross-pollination of skills, by which they mean different people (possibly from different teams) learning parts of each other’s trades and skills. Increasing your skillset helps you understand other people’s issues and problems better, as well as making you more valuable, of course!

I became a better developer by understanding how things ran in Production – Marc Hofer

Summary

In summary – Tools are important, people and relationships are importanter (new word), you should automate everything, take little steps instead of big ones, stick to the principles of continuous delivery, and the new Snow White movie is bollocks.

Upcoming Agile/DevOps/CI Events

There’s a free talk this evening at Skills Matter (London) about Continuous Delivery by Tom Duckering and Marc Hofer. Tom did a talk on “Coping with Big CI” a few months ago, which was interesting and very well delivered. I’m looking forward to tonight’s talk.

Then tomorrow there’s the DevOps summit (again in London), which is being chaired by Stephen Nelson-Smith, author of “Test-Driven Infrastructure with Chef” (you can find my review of the book here). Atlassian and CollabNet will have speakers/panelists at this event so I’m expecting it to be very worthwhile.

On the 26th June, again in London (it’s all happening in London for a change), there’s the Software Experts Summit, subtitled “Mastering Uncertainty in the Software Industry: Risks, Rewards, and Reality”. I’m expecting a decent amount of DevOps/Continuous Delivery coverage. Speakers include representatives from Microsoft and Groupon.

Next Thursday (June 28th) there’s an Agile Evangelists Meetup in London entitled “Agility within a Client Driven Environment” with talks from experienced agile experts from a range of industries. These are usually pretty good events and the speakers usually have a great deal to offer.

And as I mentioned in an earlier post, there’s the Jenkins User Conference in Israel on July 5th.

Sonar Analysis Using Gradle

I’ve been experimenting with Gradle recently, and as part of the experiment, I wanted to get Sonar running and producing code metrics, including test coverage reports. I’m running the first release version of Gradle, so version 1.0.

To get Sonar working in Gradle you need to apply the sonar plugin, like this:

apply plugin: 'sonar'

Then you need to add some sonar connection settings (very much like with Maven):

sonar {
    server {
        url = "http://${sonarBaseName}/"
    }
    database {
        url = "jdbc:mysql://${hostBaseName}:3306/sonar?useUnicode=true&characterEncoding=utf8"
        driverClassName = "com.mysql.jdbc.Driver"
        username = "wibble"
        password = "wobble"
    }
}

To run the Sonar analysis/reports, you just call sonarAnalyze, which is the in-built task that the Sonar plugin gives you. So far, so easy.

The first problem was with the version of Sonar. My colleague Ed (check out his blog here) was trying to get a gradle build working with an existing Sonar installation, but wasn’t having much joy. We were using a version of Sonar pre-2.8, so we had to upgrade. In the end we were forced to upgrade to version 3.0.1. That was the first pain point.

The next problem we stumbled upon was with cobertura. There’s a cobertura plugin for Gradle, and getting it to work is a bit unusual. You need to reference an initialisation script which is hosted on GitHub, like this:

buildscript {
    apply from: 'https://github.com/valkolovos/gradle_cobertura/raw/master/repo/gradle_cobertura/gradle_cobertura/1.2/coberturainit.gradle'
}

We had some problems with this. One day, I could access this script fine, and the next it failed. A week or so later, I could access it, but Ed’s build couldn’t. We still don’t understand why this was the case, but we suspect it was something to do with the GitHub https connection.

To make sure we didn’t get this problem again, we got hold of the initialisation script and saved it locally. Unfortunately it has dependencies, so we had to download the whole folder, put it in our artifactory repository, and make the build reference it from there. This fixed our problem, but it left us with another issue: we were now depending on another build component which contained hard-coded build configuration (the initialisation script refers to the maven central repo). We weren’t happy with this (since we use our own cached repositories in artifactory), so we had to think of a solution.
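
The interim fix itself was tiny – the buildscript block simply points at our own cached copy instead of GitHub (the URL below is made up):

buildscript {
    // reference the copy in our artifactory repository rather than GitHub,
    // so the build no longer depends on an external https connection
    apply from: 'https://artifactory.example.com/gradle-init/coberturainit.gradle'
}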

Ed went away to meditate on our problem. A little while later he came back with a gradle build file which used the Cobertura ant task. It’s pretty much the same way as it’s documented in the gradle cookbook, here.

These are the important parts that you need to include:

def cobSerFile="${project.buildDir}/cobertura.ser"
def srcOriginal="${sourceSets.main.classesDir}"
def srcCopy="${srcOriginal}-copy"
dependencies {
        testRuntime 'net.sourceforge.cobertura:cobertura:1.9.3'
        testCompile 'junit:junit:4.5'
}
test.doFirst {
    ant {
        // delete data file for cobertura, otherwise coverage would be added
        delete(file:cobSerFile, failonerror:false)
        // delete copy of original classes
        delete(dir: srcCopy, failonerror:false)
        // import cobertura task, so it is available in the script
        taskdef(resource:'tasks.properties', classpath: configurations.testRuntime.asPath)
        // create copy (backup) of original class files
        copy(todir: srcCopy) {
            fileset(dir: srcOriginal)
        }
        // instrument the relevant classes in-place
        'cobertura-instrument'(datafile:cobSerFile) {
            fileset(dir: srcOriginal,
                   includes:"my/classes/**/*.class",
                   excludes:"**/*Test.class")
        }
    }
}
test {
    // pass the cobertura datafile location to the tests – cobertura reads
    // this system property at runtime (property name per cobertura's docs;
    // the original cookbook snippet leaves this line as an exercise)
    systemProperty 'net.sourceforge.cobertura.datafile', cobSerFile
}
test.doLast {
    if (new File(srcCopy).exists()) {
        // replace instrumented classes with backup copy again
        ant {
            delete(file: srcOriginal)
            move(file: srcCopy,
                     tofile: srcOriginal)
        }
        // create cobertura reports
        ant.'cobertura-report'(destdir: "${project.buildDir.path}/reports/coverage",
                format: 'xml', srcdir: "src/main/java", datafile: cobSerFile)
        ant.'cobertura-report'(destdir: "${project.buildDir.path}/reports/coverage",
                format: 'html', srcdir: "src/main/java", datafile: cobSerFile)
    }
}

So this is how we’ve got it running at the moment. As you can see, we’re no longer using the Cobertura plugin for gradle. The next thing we need to do is get Sonar to pick up the Cobertura reports. This is done in the Sonar configuration section, which I showed near the top of this post – we now need to make some changes to it, like this:

sonar {
    project {
        coberturaReportPath = new File(buildDir, "/reports/cobertura/coverage.xml")
        sourceEncoding = "UTF-8"
        dynamicAnalysis = "reuseReports"
        testReportPath = new File(buildDir, "/test-results")
    }
    server {
        url = "http://${sonarBaseName}/"
    }
    database {
        url = "jdbc:mysql://${hostBaseName}:3306/sonar?useUnicode=true&characterEncoding=utf8"
        driverClassName = "com.mysql.jdbc.Driver"
        username = "wibble"
        password = "wobble"
    }
}

Now we need to go back and change the output directory of our Cobertura ant configuration, to make it output to /reports/cobertura/coverage.xml, so we change the last bit of our configuration to look like this:

        // create cobertura reports – the xml report is written as coverage.xml
        // inside destdir, so destdir needs to be reports/cobertura in order to
        // match the coberturaReportPath configured above
        ant.'cobertura-report'(destdir: "${project.buildDir.path}/reports/cobertura",
                format: 'xml', srcdir: "src/main/java", datafile: cobSerFile)
        ant.'cobertura-report'(destdir: "${project.buildDir.path}/reports/coverage",
                format: 'html', srcdir: "src/main/java", datafile: cobSerFile)

What’s Going On?

Here’s a bunch of upcoming talks, courses, conferences, things and stuff, which I reckon might be worth checking out.

Managing javascript with Gradle – Free event @ Skills Matter (London) May 22nd 6:30pm

Insight for CI – Webinar May 23rd 11am and again at 2pm EDT

Goto Conference – Amsterdam May 24-26

Thoughtworks Live – Piccadilly, London, May 24th (all day)

Configuration Management Conference – (£80) London, May 29th (all day)

Thoughtworks Quarterly Briefing – Liverpool Street, London May 30th 6:30pm

Agile Development West – Las Vegas, June 10 – 15th

Gradle Build Automation Evolved – Free event @ Skills Matter (London) June 12th 6:30pm

Continuous Delivery Workshop – (£695, €695) London July 5th, Berlin June 12th, Düsseldorf June 14th

Devops summit – London, June 20th

Jenkins User Conference – Israel, July 5th (all day)

I will add to this list as and when I find out about any interesting new events.

Design Driven Testing – Half a Book Review

Lately, I’ve been reading “Design Driven Testing – Test Smarter, Not Harder”, a book by 2 blokes who don’t like TDD (I won’t name them). Before I start “reviewing” it, I should really point out that I only actually read half the book. I couldn’t finish it. It’s not like I forgot how to read, or someone stole the book from me, nothing like that. It’s just that I got really annoyed and frustrated at it, and I realised that I was actually stabbing myself in the leg with a fountain pen. So I decided to put the book down, apply a plaster to my leg, and retire to the drawing room with a large brandy and a copy of “Learn To Find Inner Peace” (I’ll review that at a later date)*

As you may have already realised, I didn’t really enjoy reading this book.

Now, I’m no TDD evangelist myself (I don’t even follow TDD in my daily work), but the way the authors construct their arguments against TDD, and then try to “prove” that DDT is better, is just really badly done. As I was reading it, I felt like I was being told that if I do things the TDD way then I’m a stupid infidel, because DDT is the One True God!

Their method of argument is also completely flawed. In one of the early examples of how “DDT is better than TDD”, they show how to solve a problem the TDD way, and then how to solve it the DDT way. To start off, they get all pedantic with TDD, intentionally doing insufficient design and testing absolutely bloody everything. Then they go on to show you how to solve the same problem with DDT – but, BUT, here’s the thing: it’s CLEARLY not easier! It’s much, much harder and longer! Yet they still seem to go “there, you see, wasn’t that much better?” (Real answer: No).

Another thing that got on my nerves was EA, some software they use throughout – the book often reads like a sales pitch for it. Then there’s their obsession with UML diagrams. Oh my God, how many UML diagrams??? (Answer: too many.) It’s unsurprising to learn that they actually wrote a book on UML as well. Then there’s ICONIX – they’re obsessed with it. And guess what, they wrote a book on that too. And while I’m on the subject of other books the authors have written, it’s worth pointing out that they have form when it comes to books whose main objective seems to be to complain about popular agile programming practices: they also wrote one called “Extreme Programming Refactored: The Case Against XP”.

Conclusion

Anyway, despite all of this, I do actually believe the concept of DDT – using testing to verify design – is a good one. But I just found this book to be a vitriolic rant against Test-Driven Development. I would have preferred it much more if the authors had spent more time talking about DDT and less time denigrating TDD. I don’t know, maybe there’s not enough content there to fill a whole book and they had to pad it out with the anti-TDD stuff, but I doubt it.

You will probably enjoy this book if you already hate TDD and would just like to read a book written by some people who agree with you.

* I didn’t stab myself in the leg, I don’t have a copy of “Learn to Find Inner Peace”, or for that matter, a drawing room. Also I don’t drink brandy. Unless you’re buying.