Team Transformation for Continuous Delivery with Chris O’Dell – as it happened

By James Betteley

Preamble:

No time for that, I’m seriously late. I was leaving the office just as someone said “James, before you go…” and that was the end of any hopes I had of getting here on time.

It’s 6:40pm, I’ve finally got my sh1t together, and we’re off! In fact that’s a lie, they were off ages ago. I’m so late there aren’t even any seats, so I’m sat in the corner at the back, like a proper Billy No-mates.

6:41pm: Chris is talking about big balls of mud and how they’ve gone from that, to a much smaller ball (no mention of mud this time).

6:42pm: Slides are going by quicker than I can type! Chris is talking about the importance of moving away from a “blame culture”. I personally hate blame culture, I think it was the French who invented it. Arf! (sorry).

6:45pm: Ok, there’s a slide on how to get to Continuous Delivery, I’m going to pay attention to this one…

It says you need:

  • Cross-functional, product-focused teams
  • A focus on technical debt
  • Sit the team close to their clients
  • Actively remove blame culture
  • Focus on self-improvement
  • Radiate metrics
  • Collect metrics on work in progress

After a quick coffee break it’s time to interrogate the suspects – It’s Q&A time!

7:01pm: How do you share “commonality of functionality”? asks someone who likes words ending in “ality”. Service-oriented architecture was apparently a big help, responds someone from the 7digital posse (they are a posse, by the way: Chris has been joined by some 7digital reinforcements).

7:02pm: The next question is about metrics and how they collect them. Apparently they’re working on a logging system, but I’m guessing they also use CI and some live reporting tools which I probably missed in the earlier slides! Oops.

7:05pm: “What didn’t work?” asks someone (my favourite question so far; I’m giving it a 7.5 out of ten). Trying to patch things up didn’t work. The dependency chain caused pain. Acceptance Tests were a pain (haha!). Using UI stuff and a shared DB caused Acceptance Test issues, they say. I nod in agreement.

7:07pm: Someone says something about keeping environments the same being a challenge. They’re meant to be the same??? Where’s the fun in that?

7:10pm: A question on blame culture is next up, namely “How do you get rid of it?” By not telling on people! Also, having a dev manager who protects from above is handy.

It’s how you respond [to blame] that’s important, as that’s what sets the tone.

7:11pm: How did you make the culture change? Asks someone who wants to know how they made the culture change. It’s another good question, and one I’d really like to learn from. It’s all very well having a great culture, and there’s no denying its importance, but how do you make a culture change if it’s not ideal to start with? Sadly the answer isn’t straightforward. The posse reply with things like “Adopt Agile principles”, “tech manifesto” (which sounds cool), “self-organising teams”, “small steps” and “leading by example”. Followed up with “hire well”, “you need champions!” and “do workshops”. Also, “the CTO is pretty cool”. Hmmmm, so no “click here to change your culture” button then?

7:16pm: The next question is a corker. It went a bit like: “Usually you have to change the architecture of the system to support Continuous Delivery, but sometimes the architecture of the organisation as well. Did this happen?” That’s the winner so far. “No” comes the answer. Damn. At 7digital there’s a focus on lack of hierarchy, so quite a flat structure. Not much change was needed then, obviously. I think the word “culture” came up as well, and not for the first time.

7:18pm: How have they managed to integrate Ops, asks the next person, clearly fresh from devopsdays. “We’re still learning” is the honest-sounding response. Not that they haven’t all been honest sounding. They’ve started assigning Ops people to “the team”, by which I assume they mean the project team.

7:20pm: Someone wants to know if they had a shared goal between tech/ops and dev? I think the answer is yes. Basically Rob became head of both ops and dev, which helped. He also created a tech manifesto and is toying with the idea of putting up some posters. When I was a kid I had a poster of Airwolf in my bedroom. Not sure if that’s going to help anyone though.

7:21pm: “Is there a QA on the team?” is the next question. Yarp (I’m paraphrasing). But the QA person is more of a coach – everyone is expected to do it, but they’re there to lead. No separate dev manager or QA manager – everyone’s one great big team (aaahhhh).

7:23pm: Somebody has asked how they handle support, and whether there’s a support team. I think the person who responds says there’s a “Systems team”, who get a text or call in the middle of the night. It seems a bit cruel that they wait until the middle of the night to text them but what do I know? Apparently the devs may also get involved, so that’s ok. There’s an on-call team, “but this is an area for improvement” they confess. Mainly it’s a case of “call someone!”, which I personally think is pretty good. But they do stress how there’s a focus on monitoring so that they can catch as many issues as possible before they become, er, issues, if you catch my drift. They said it much better than I can write it.

7:26pm: “How frequently did you deploy your big ball of mud compared to how frequently you do it now?” And that question goes to contestant number 2. It used to be one release every 3 months, but they don’t measure how frequently they deploy stuff any more because it’s that frequent (that’s just showing off). Improving the deploy mechanism was all-important, as was changing the culture to shift to more frequent releases. That word “culture” again.

7:30pm: This question sounds like a plant: “How do you have time to test stuff if you deploy so often?” asks some cheeky 7digital employee hidden in the audience. I’m joking of course, it’s a nice question because it leads to a well executed answer: Chris basically explains that because they deploy so often, their releases are very small. Also, they’ve automated the hell out of everything.

7:31pm: Dave asks a really good question but I’m far too slow to keep up! It included the phrase “separation of concerns” so was probably too complicated for me to understand anyway.

7:40pm: There’s a question about schema changes. I reckon the answer will include the word “culture”. It does. Somehow.

If there was a word cloud for this Q&A session then “culture” would dwarf all the others. Something tells me that “culture shift” is important.

7:45pm: “How do you manage project accounting?” “We don’t” – No mention of culture!

7:46pm: Someone asks “If there’s a production issue, like an outage, who takes ownership?”. Nice one, who indeed does take ownership? “Everyone, we have a culture of shared ownership”. Gah! It’s all about culture!

7:47pm: “How do you decide what projects get green lighted?” asks some poor innocent from the back of the room (and no, it wasn’t me). Apparently this has nothing to do with Continuous Delivery (and all the other questions have?) and there’s nobody from the product team here so that question lands on stony ground.

7:52pm: Banos treads a fine line by asking a question dangerously close to the time when we’re meant to be heading to the pub, but just about gets away with it. “What CI system do you use?” he asks, and the answer is (drum roll…..) TeamCity! Actually there was no drum roll, I made that up. Then interestingly they say that everyone is in charge of looking after TeamCity and that they just trust each other! Craziness!

8:02pm: “I was told we finish at 8” says Chris, and she’s bloody well right, there’s a pub nearby and some of us are thirsty.

So, in conclusion, 7digital know their Continuous Delivery from their TDD, and “culture” is the word of the evening. I’m off for beer with Banos, and the rest of the London Continuous Delivery gang!

Keep an eye out for #londoncd on twitter for news of the next London Continuous Delivery meetup, or go to the London CD website. Also follow the likes of @matthewpskelton, @AgileSteveSmith, @banoss and @davenolan for more Continuous Delivery goodness.

Continuous Delivery with a Difference: They’re Using Windows!

 

Last night I was taken to the London premiere of Warren Miller’s latest film, called “Flow State”. Free beers, a free goodie bag and an hour and a half of the best snowboarders and skiers in the world doing tricks which in my head I can also do, but in reality are about 10000000% better than anything I can manage. Good times. Anyway, on my way home I was thinking about “flow” and how it applies to DevOps. It’s a tricky thing to maintain in an Ops capacity. It reminded me of a talk I went to last week where the speakers talked of the importance of “Flow” in their project, and it’s inspired me to write it up:

ThoughtWorkers Pat and Aleksander have been working at a top secret location* for a top secret company** on a top secret mission to implement Continuous Delivery in a corporate Windows world***.
* Ok, I actually forgot to ask where it was located

** It’s probably not a secret, but they weren’t telling me

*** It’s obviously not a secret mission seeing as how: a) it was the title of their talk and b) I just said what it was

Pat and Aleksander put their collective Powerpoint skills to good use and made a presentation on the stuff they’d done and learned during their time working on this top secret project, but rather than call their presentation “Stuff We Learned Doing a Project” (that’s what I would have named it) they decided to call it “Experience Report: Continuous Delivery in a Corporate Windows World”, which is probably why nobody ever asks me to come up with names for their presentations.

This talk was at Skills Matter in London, and on my way over there I compiled a list of questions which I was keen to hear their answers to. The questions were as follows:

  1. What tools did they use for infrastructure deployments? What VM technology did they use?
  2. How did they do db deployments?
  3. What development tools did they use? TFS?? And what were their good/bad points?
  4. Did they use a front-end tool to manage deployments (i.e. did they manage them via a C.I. system)?
  5. Was the company already bought-in to Continuous Delivery before they started?
  6. What breed of agile did they follow? Scrum, Kanban, TDD etc.
  7. What format were the built artifacts? Did they produce .msi installers?
  8. What C.I. system did they use (and why did they decide on that one)?
  9. Did they use a repository tool like Nexus/Artifactory?
  10. If they could do it all again, what would they do differently?

During the evening (mainly in the pub afterwards) Pat and Aleksander answered almost all of the questions above, but before I list them, I’ll give a brief summary of their talk. Disclaimer: I’m probably paraphrasing where I’m quoting them, and most of the content is actually my opinion, sorry about that. And apologies also for the completely unrelated snowboarding pictures.

Cultural Aspects of Continuous Delivery

Although CD is commonly associated with tools and processes, Pat and Aleksander were very keen to push the cultural aspects as well. I couldn’t agree more with this – for me, Continuous Delivery is more than just a development practice, it’s something which fundamentally changes the way we deliver software. We need to have an extremely efficient and reliable automated deployment system, a very high degree of automated testing, small, consumable-sized stories to work from, very reliable environment management, and a simplified release management process which doesn’t create more problems than it solves. Getting these things right is essential to doing Continuous Delivery successfully. As you can imagine, implementing these things can be a significant departure from the traditional software delivery systems (which tend to rely heavily on manual deployments and testing, as well as having quite restrictive project and release management processes). This is where the cultural change comes into effect. Developers, testers, release managers, BAs, PMs and Ops engineers all need to embrace Continuous Delivery and significantly change the traditional software delivery model.

FACT BOMB: Some snowboarders are called Pat. Some are called Aleksander.

Tools

When Aleksander and Pat started out on this project, the dev teams were already using TFS as a build system and for source control. They eventually moved to using TeamCity as a Continuous Integration system, and Git-TFS as the devs’ primary interface to the source control system.

The most important tool for a snowboarder is a snowboard. Here’s the one I’ve got!

  • Builds were done using MSBuild
  • They used TeamCity to store the build artifacts
  • They opted for NUnit for unit testing
  • Their build created zip files rather than .msi installers
  • They chose SpecFlow for stories/specs/acceptance criteria etc.
  • PowerShell was used to do deployments (there’s a rough sketch of this just below)
  • Sites were hosted on IIS
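
Just to give a flavour of what that zip-plus-PowerShell-onto-IIS combination might look like, here’s a back-of-an-envelope sketch. To be clear, this is my own guess at the shape of it, not their actual script, and every path, site name and artifact location in it is made up:

    # deploy.ps1 - rough sketch of a zip + PowerShell + IIS deployment (all names and paths are invented)
    param(
        [string]$ArtifactZip = "\\buildserver\drops\MyApp-1.2.3.zip",   # hypothetical artifact drop location
        [string]$SiteName    = "MyApp",
        [string]$SitePath    = "C:\inetpub\MyApp"
    )

    Import-Module WebAdministration

    # Take the site down while the content is swapped over
    Stop-Website -Name $SiteName

    # Unpack the build artifact over the site directory
    # (Expand-Archive needs PowerShell 5+; older scripts tended to shell out to an unzip tool instead)
    Remove-Item -Recurse -Force $SitePath -ErrorAction SilentlyContinue
    Expand-Archive -Path $ArtifactZip -DestinationPath $SitePath

    # Bring the site back up
    Start-Website -Name $SiteName

The nice thing about keeping it this dumb is that the same script and the same zip can be pointed at any environment, which is exactly the “same artifact, same deploy script, every environment” principle flagged with asterisks in the Practices section below.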

 

Practices

“Work in Progress” and “Flow” were the important metrics in this project by the sounds of things (suffice to say they used Kanban). I neglected to ask them if they actually measured their flow against quality. If I find out I’ll make a note here. Anyway, back to the project… because of the importance of Work in Progress and Flow they chose to use Kanban (as mentioned earlier) – but they still used iterations and weekly showcases. These were more for show than anything else: they continued to work off a backlog and any tasks that were unfinished at the end of one “iteration” just rolled over to the next.

Another point that Pat and Aleksander stressed was the importance of having good Business Analysts. They were essential in carving up stories into manageable chunks, avoiding “analysis paralysis”, shielding the devs from “fluctuating functionality” and ensuring stories never got stuck for too long. Some other random notes on their processes/practices:

  • Used TDD with pairing
  • Testers were embedded in the team
  • Maintained a single branch of code
  • Regression testing was automated
  • They still had to raise a release request with Ops to get stuff deployed!
  • The same artifact was deployed to every environment*
  • The same deploy script was used on every environment*

* I mention these two points because they’re oh-so important principles of Continuous Delivery.

Obviously I approve of the whole TDD thing, testers embedded in the team, automated regression testing and so on, but I’m not so impressed with the idea of having to raise a release request (manually) with Ops whenever they want to get stuff deployed; it’s not very “devops” 🙂 I’d seek to automate that request/approval process. As for the “single branch of code”, well, it’s nice work if you can get it. I’m sure we’d all like to have a single branch to work from but in my experience it’s rarely possible. And please don’t say “feature toggling” at me.
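
If I were automating it, I’d probably start with something as crude as a script the pipeline runs to raise the request for you. Everything in this sketch is hypothetical: the endpoint, the field names and the ticketing system itself are all invented, it’s just to show how small a first step it could be:

    # request-release.ps1 - hypothetical sketch of raising the Ops release request automatically
    # The URL and field names are invented; substitute whatever ticketing system Ops actually use.
    param(
        [Parameter(Mandatory=$true)][string]$Version,
        [string]$Environment = "Production"
    )

    $body = @{
        title       = "Release $Version to $Environment"
        description = "Automated release request raised by the deployment pipeline."
        artifact    = "MyApp-$Version.zip"
    } | ConvertTo-Json

    # Invoke-RestMethod needs PowerShell 3+; the endpoint below is a placeholder
    Invoke-RestMethod -Uri "https://ticketing.example.local/api/release-requests" `
                      -Method Post -ContentType "application/json" -Body $body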

One area the guys struggled with was performance testing. Firstly, it kept getting de-prioritised, so by the time they got round to doing it, it was a little late in the day, and I assume this meant that any design re-considerations they might have hoped to make could have been ruled out. Secondly, they had trouble actually setting up the load testing in Visual Studio – settings hidden all over the place etc.

Infrastructure

Speaking with Pat, it was clear he was very impressed with the wonders of PowerShell scripting! He said they used it very extensively for installing components on top of the OS. I’ve just started using it myself (I’m working with Windows servers again) and I’m very glad it exists! However, Aleksander and Pat did reveal that the procedure for deploying a new test environment wasn’t fully automated, and they had to work off a checklist of things to do. Unfortunately, the reality was that every machine in every environment required some degree of manual intervention. I wish I had a bit more detail about this, as I’d like to understand what the actual blockers were (before I run into them myself), and I hate to think that Windows can be a real blocker for environment automation.
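
To show the kind of checklist item that scripts up quite easily, here’s a quick sketch of “install IIS, create the app pool and the site” in PowerShell. Again, this is my own illustration (the feature, site and path names are placeholders), not their provisioning code:

    # setup-webserver.ps1 - my own sketch of scripting a couple of typical environment checklist items
    # Assumes Windows Server; all names and paths are illustrative only.

    # Checklist item: "Install IIS"
    Install-WindowsFeature -Name Web-Server -IncludeManagementTools

    Import-Module WebAdministration

    # Checklist item: "Create the app pool and website for MyApp"
    if (-not (Test-Path "IIS:\AppPools\MyApp")) {
        New-WebAppPool -Name "MyApp"
    }
    if (-not (Test-Path "IIS:\Sites\MyApp")) {
        New-Website -Name "MyApp" -Port 80 -PhysicalPath "C:\inetpub\MyApp" -ApplicationPool "MyApp"
    }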

Anyway, that’s enough of the detail, let’s get to the answers to the questions (I’ve added scores to the answers because I’m silly like that):

  1. What tools did they use for infrastructure deployments? What VM technology did they use? – PowerShell! They didn’t provision the actual VMs themselves, the Ops team did that, and they weren’t sure what tools the Ops team used. 1
  2. How did they do db deployments? – Pass. 0
  3. What development tools did they use? TFS?? And what were their good/bad points? – TFS. Source control and C.I. were bad, so they moved to TeamCity and Git-TFS. 2
  4. Did they use a C.I. tool to manage deployments? – Nope. 0
  5. Was the company already bought-in to Continuous Delivery before they started? – They hired ThoughtWorks so I guess they must have been at least partly sold on the idea! Agile adoption was on their roadmap. 1
  6. What breed of agile did they follow? Scrum, Kanban, TDD etc. – TDD with Kanban. 2
  7. What format were the built artifacts? Did they produce .msi installers? – Negatory, they used zip files like any normal person would. 2
  8. What C.I. system did they use (and why did they decide on that one)? – TeamCity, which is interesting seeing as how ThoughtWorks Studios produce their own C.I. system called “Go”. I’ve used Go before and it’s pretty good conceptually, but it’s also expensive and hard to manage once you’re running over 50 builds and 100 test agents, and the UI is buggy too. It has some great features, but the open source competitors are catching up fast. 2
  9. Did they use a repository tool like Nexus/Artifactory? – They used TeamCity’s own internal repo, a bit like with Jenkins, where you can store build artifacts. 1
  10. If they could do it all again, what would they do differently? – They wouldn’t push so hard for the Git-TFS integration; it probably wasn’t worth the considerable effort at the end of the day. 1

TOTAL: 12

What does this total mean? Absolutely nothing at all.

What significance do all the snowboard pictures have in this article? None. Absolutely none whatsoever.

Why do we do Continuous Integration?

Continuous Integration is now very much a central process of most agile development efforts, but it hasn’t been around all that long. It may be widely regarded as a “development best practice” but some teams are still waiting to adopt C.I. Seriously, they are.

And it’s not just agile teams that can benefit from C.I. The principles behind good C.I. can apply to any development effort.

This article aims to explain where C.I. came from, why it has become so popular, and why you should adopt it on your development project, whether you’re agile or not.

Back in the Day…

Are you sitting comfortably? I want you to close your eyes, relax, and cast your mind back, waaay back, to 2003 or something like that…

You’re in an office somewhere, people are talking about The Matrix way too much, and there’s an alarming amount of corduroy on show… and developers are checking in code to their source control system….

Suddenly a developer swears violently as he checks out the latest code and finds it doesn’t compile. Someone’s check-in has broken the codebase.

He sets about fixing it and checking it back in.

Suddenly another developer swears violently….

Rinse and repeat.

CI started out as a way of minimising code integration headaches. The idea was, “if it’s painful, don’t put it off, do it more often”. It’s much better to do small and frequent code integrations rather than big ugly ones once in a while. Soon tools were invented to help us do these integrations more easily, and to check that our integrations weren’t breaking anything.

Tests!

Fossil of a Primitive C.I. System

Excavations of fossilized C.I. systems from the early 21st Century suggest that these primitive C.I. systems basically just compiled code, and then, when unit tests became more popular, they started running unit tests as well. So every time someone checked in some code, the build would make sure that this integration would still result in a build which would compile, and pass the unit tests. Simple!

C.I. systems then started displaying test results, and we started using them to run huuuuge overnight builds which would actually deploy our builds and run integration tests. The C.I. system was the automation centre: it ran all these tasks on a timer and then provided the feedback, usually an email saying what had passed and broken. I think this was an important time in the evolution of C.I. because people started seeing C.I. as more of an information generator and communicator, rather than just a techie tool that ran some builds on a regular basis.

Information Generator

Management teams started to get information out of C.I. and so it became an “Enterprise Tool”.

Some processes and “best practices” were identified early on:

  • Builds should never be left in a broken state.
  • You should never check in on a broken build because it makes troubleshooting and fixing even harder.

With this new-found management buy-in, C.I. became a central tenet of modern development practices.

People started having fun with C.I., plugging lava lamps, traffic lights and talking rabbits into the system. These were fun, but they did something very important in the evolution of C.I. – they turned it into an information radiator and a focal point of development efforts.

Automate Everything!

Automation was the big selling point for C.I. Tasks that would previously have been manual, error-prone and time-consuming could now be done automatically, or at night while we were in bed. For me it meant I didn’t have to come in to work on the weekends and do the builds! Whole suites of acceptance, integration and performance tests could automatically be executed on any given build, on a convenient schedule thanks to our C.I. system. This aspect, as much as any other, helped in the widespread adoption of C.I. because people could put a cost-saving value on it. C.I. could save companies money. I, on the other hand, lost out on my weekend overtime.

Code Quality

Static analysis and code coverage tools appeared all over the place, and were ideally suited to be plugged in to C.I. These days, most code coverage tools are designed specifically to be run via C.I. rather than manually. These tools provided a wealth of feedback to the developers and to the project team as a whole. Suddenly we were able to use our C.I. system to get a real feeling for our project’s quality. The unit test results combined with the static analysis could give us information about the code quality, the integration and functional test results gave us verification of our design and ensured we were making the right stuff, and the nightly performance tests told us that what we were making was good enough for the real world. All of this information got presented to us, automatically, via our new best friend the Continuous Integration system.

Linking C.I. With Stories

When our C.I. system runs our acceptance tests, we’re actually testing to make sure that what we’ve intended to do, has in fact been done. I like the saying that our acceptance tests validate that we built the right thing, while our unit and functional tests verify that we built the thing right.

Linking the ATs to the stories is very important, because then we can start seeing, via the C.I. system, how many of the stories have been completed and pass their acceptance criteria. At this point, the C.I. system becomes a barometer of how complete our projects are.

So, it’s time for a brief recap of what our C.I. system is providing for us at this point:

1. It helps us identify our integration problems at the earliest opportunity.

2. It runs our unit tests automatically, saving us time and verifying our code.

3. It runs static analysis, giving us a feel for the code quality and potential hotspots, so it’s an early warning system!

4. It’s an information radiator – it gives us all this information automatically.

5. It runs our ATs, ensuring we’re building the right thing, and it becomes a barometer of how complete our project is.

And we’re not done yet! We haven’t even started talking about deployments.

Deployments

Ok now we’ve started talking about deployments.

C.I. systems have long been used to deploy builds and execute tests. More recently, with the introduction of advanced C.I. tools such as Jenkins (Hudson), Bamboo and TeamCity, we can use the C.I. tool not only to deploy our builds but to manage deployments to multiple environments, including production. It’s now not uncommon to see a Jenkins build pipeline deploying products to all environments. Driving your production deployments via C.I. is the next logical step in the process, which we’re now calling “Continuous Delivery” (or Continuous Deployment if you’re actually deploying every single build which passes all the test stages etc).
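
The “pipeline” bit sounds grander than it is; underneath, it’s just “run a stage, and only promote the build to the next stage if it passed”. Here’s a rough sketch of that logic, purely to illustrate the idea. In real life the orchestration lives inside the C.I. tool itself, and the stage names and script names here are mine, not taken from any actual pipeline:

    # pipeline-sketch.ps1 - illustrative only: the "promote on success" idea behind a build pipeline
    # In practice the C.I. tool (Jenkins, TeamCity, etc.) does this orchestration for you.

    $stages = @(
        @{ Name = "Commit build and unit tests"; Script = ".\build.ps1" },
        @{ Name = "Deploy to test environment";  Script = ".\deploy.ps1 -Environment Test" },
        @{ Name = "Automated acceptance tests";  Script = ".\run-acceptance-tests.ps1" },
        @{ Name = "Performance tests";           Script = ".\run-performance-tests.ps1" }
    )

    foreach ($stage in $stages) {
        Write-Host "Running stage: $($stage.Name)"
        & powershell -NoProfile -Command $stage.Script
        if ($LASTEXITCODE -ne 0) {
            Write-Error "Stage '$($stage.Name)' failed - the build is NOT promoted any further."
            exit 1
        }
    }

    Write-Host "All stages passed - the build is available for deployment to production."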

Below is a diagram of the stages in a Continuous Delivery system I worked on recently. The build is automatically promoted to the next stage whenever it successfully completes the current stage, right up until the point where it’s available for deployment to production. As you can imagine, this process relies heavily on automation. The tests must be automated, the deployments automated, even the release email and its contents are automated.

So what exactly is the cost saving of having a C.I. system?

Yeah, that’s a good question, well done me. Not sure I can give you a straight answer to that one though. Obviously one of the biggest factors is the time savings. As I mentioned earlier, back when I was a human C.I. machine I had to work weekends to sort out build issues and get working code ready for Monday morning. Also, C.I. sort of forces you to automate everything else, like the tests and the deployments, as well as the code analysis and all that good stuff. Again we’re talking about massive time savings.

But automating the hell out of everything doesn’t just save us time, it also eliminates human error. Consider the scope for human error in a system where some poor overworked person has to manually build every project, some other poor sap has to manually do all the testing, and then someone else has to manually deploy this project to production and confidently say “Right, now that’s done, I’m sure it’ll work perfectly”. Of course, that never happened, because we were all making mistakes along the line, and they invariably came to light when the code was already live. How much time and money did we waste fixing live issues that we’d introduced by just not having the right processes and systems in place? And by systems, of course, I’m talking about Continuous Integration. I can’t put a value on it but I can tell you we wasted LOTS of money. We even had bugfix teams dedicated to fixing issues we’d introduced and not caught earlier (due in part to a lack of C.I.).

Conclusion

While for many companies C.I. is old news, there are still plenty of people yet to get on board. It can be hard for people to see how C.I. can really make that much of a difference, so hopefully this blog will help to highlight some of the benefits and explain how C.I. has been adopted as one of the most important and central tenets of modern software delivery.

For me, and for many others, Continuous Integration is a MUST.

 

Principles of Continuous Integration

I’m a big fan of CI, and as a simple best practice/process for development teams I think it’s right up there as one of the most important to get right. Most software places now have a Continuous Integration system, but do they actually practice Continuous Integration? In my experience the answer isn’t always “yes”.

For me, practicing CI goes a lot further than simply having a CI system set up, running check-in/nightly builds, and running unit tests/code inspections and so on.

So what is my definition of “Practicing Continuous Integration”? Well, for me it is simply:

  • Having a CI system (obviously)
  • Adopting the principles of Continuous Integration

I did a quick google to see if I could find a simple list of some good CI principles – I know I’ve read many of them before in numerous good books such as Paul M Duvall’s “Continuous Integration” and Jez Humble’s “Continuous Delivery”, but I couldn’t find an easy cut-out-and-use version on the internet, so I’ve decided to knock something up here 🙂

Everyone loves a good list so here’s a list of what I believe to be some principles of CI:

  1. Fix your build failures, immediately. Never leave a build broken. If you do, the build team should be well within their rights to roll back your last commit.
  2. Every check-in should be an improvement on the last. Each check-in should have value, otherwise why do it? An improvement could be defined as a new piece of functionality, a bug fix, an increase in code coverage etc. If the improvement cannot be easily measured, ensure you detail it in your commit log.
  3. Never check in on a broken build (unless you’re fixing the breakage)! If you have multiple commits to a codebase after the original breaking commit, then this leaves a bit of a bad smell in your CI system. Firstly, unless your commits are coming thick and fast, it means you aren’t reacting to your build failures, and this is not acceptable – it basically means you’re not playing ball and you’re letting the rest of the team down. Secondly, the cause of the original breakage can potentially be a lot harder to identify if several other check-ins have happened since the original check-in which broke the build!
  4. Code should be built and tested before checking in to source control. You can either do this by compiling the build locally yourself and checking that the tests pass (there’s a rough sketch of this after the list), or you could invest in a CI/VC system that can perform pre-commit builds, such as Pulse, TeamCity and ElectricCommander (list obtained from Jez Humble’s Continuous Delivery book). Alternatively, you can use pre-commit hooks in svn to achieve something similar, and I’m led to believe this can also be done in Git and buildbot.
  5. Everyone should be interested in the output of the CI system. And that includes project managers and testers. Working on the principle that at some point a build works, and every commit thereafter is an improvement, we should be proud to display the results of every build to the whole project team, and we should react quickly if, for any reason, our last check-in was not an improvement to the state of our software.
  6. Functional automated tests should be run as part of the CI process. If you have a suite of automated regression tests, the chances are it’s no great effort to hook them up to the CI system, and definitely worth it in the long run.
  7. Make the builds fit for purpose. I absolutely hate having to wait ages for a simple check-in build to compile, when all I really want to know is whether I have broken my unit tests or violated some important rule. Make the check-in builds quite lightweight: run your unit tests and code coverage only, perhaps. Do you really need to run all your code inspections and analysis tools, as well as automated functional tests, every time there’s a check-in? Chances are you don’t, so keep the check-in builds light, and leave the heavy-duty work for the nightly builds, while we’re all asleep!
  8. React to the feedback. The CI system is a tool to help drive improvement and quality as much as anything else, so we should never ignore it. However, it’s easy to ignore your CI system if the feedback isn’t relevant. The solution is to work closely with the build team, as well as the rest of the project members, to ensure the build feedback is targeted and relevant.
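
On principle 4, the “build and test it yourself first” step doesn’t need anything clever; a local script that fails loudly will do. Here’s a rough sketch, assuming an MSBuild solution and the NUnit console runner on the path; the solution and assembly names are placeholders, not anything from a real project:

    # pre-checkin.ps1 - rough sketch of principle 4: build and run the unit tests before you check in
    # Assumes msbuild and the NUnit 2.x console runner are on the PATH; names below are placeholders.

    msbuild .\MySolution.sln /p:Configuration=Release /verbosity:minimal
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Build failed - do not check in."
        exit 1
    }

    nunit-console.exe .\MyProject.Tests\bin\Release\MyProject.Tests.dll
    if ($LASTEXITCODE -ne 0) {
        Write-Error "Unit tests failed - do not check in."
        exit 1
    }

    Write-Host "Build and unit tests passed - OK to check in."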

That’s about it for my list so far, I’m sure I’ll add to it as time goes by. Please feel free to add your suggestions!