DevNexus 2016 – Top 5 Reasons Improvements Fail

DevNexus is a developer conference here in Atlanta that’s been around since 2004. Last week I attended the conference, and specifically saw presentations from the Agile, JavaScript, and Architecture tracks. This post is the first of several sharing my impressions of the event and relaying some of the information I learned.

Top 5 Reasons Improvements Fail (Agile)

This session discussed how to identify team performance issues and why they’re probably not in the areas you thought they were. The five reasons mentioned are:

  • Broken Target – identify the real problems at hand, don’t assume implementing best practices will fix things
  • Broken Visibility – measure where time is actually being spent: troubleshooting, learning, or doing rework
  • Broken Clarity – stop making generalizations, identify the specific patterns
  • Broken Awareness – break hindsight bias and stop and think
  • Broken Focus – improve visibility of root problems to management

Broken Target

The lesson here was to make sure you’re trying to solve the right problem.

Situation: A team was spending lots of time doing bug fixes every time they were getting ready to release to production.
First Solution: They added hundreds of unit tests for their code.
Outcome: The team was spending the same amount of time fixing bugs before releasing.

The team thought that their code quality was too low, so their first assumption was that adding automated tests would fix their quality issues. They spent a big chunk of time adding unit tests for all their code, but still ran into the same problem before releasing. Therefore, their initial solution was not the right one.

The speaker then gathered data on the team’s performance and behavior over time and proposed a new solution.

Second Solution: Make smaller releases.
Outcome: Less time was spent fixing bugs before each release and overall.

The real problem the speaker identified was that the team was rushing to complete work toward the end of every sprint, and especially toward the end of a release. Because they rushed, more bugs landed in the code. Even after they moved on to work for the subsequent release, hidden bugs remained, so each pre-release debugging period grew longer and longer, creating an order-of-magnitude cost increase. The real solution, then, was to make releases “as small as possible.” With that cadence, even for the same total number of features, less overall time was spent on bugs and releases shipped faster.
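The speaker didn’t present a formal model, but a toy calculation illustrates why batch size matters so much. The assumption below – that bug-fix cost grows with the square of the unreleased batch size – is mine, chosen only to show the shape of the effect:

```python
# Toy model (my numbers, not the speaker's): if pre-release bug-fixing
# cost grows super-linearly with the amount of unreleased work, many
# small releases beat one big one, even for the same total feature count.

def bugfix_hours(features_in_release):
    """Assume debugging cost grows with the square of the batch size."""
    return 4 * features_in_release ** 2

TOTAL_FEATURES = 20

for releases in (1, 4, 10):
    batch = TOTAL_FEATURES // releases
    total = releases * bugfix_hours(batch)
    print(f"{releases:2d} release(s) of {batch:2d} features each: "
          f"{total:4d} hours of pre-release bug fixing")
```

With these made-up numbers, one release of 20 features costs 1600 hours of debugging, while ten releases of 2 features cost 160 hours total – the same qualitative result the team saw.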

Broken Visibility

Identifying the right problems can be difficult. Even if a team holds a retrospective at the end of every sprint or iteration, the issues raised there may not be the larger, more important ones. They will most likely be whatever the developers feel most strongly about, and that list is skewed by recency bias (whatever problem came up most recently) and guilt bias (whatever problem traces back to something an individual developer caused). Likewise, the solutions proposed in a retrospective are affected by known-solution bias (reaching for a solution you’ve used before without first checking whether it’s the right one) and sunk-cost bias (continuing down an existing route because you’ve already put a lot of effort into it, without considering alternatives).

To aid in identifying the right problems, the speaker created custom tooling that gathered data on how the developers were spending their time. Developers use the tooling to flag when they run into some sort of friction during development, in one of three categories: troubleshooting, learning, and rework. This shows where time is actually being spent, and from there you can drill down to the root causes. For instance, realizing that troubleshooting code written by other teams was costing your team 1,000 hours a month lets you address that specific problem.
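The speaker’s actual tooling wasn’t shown, but the core idea is simple: developers log friction events tagged with one of the three categories, and the log is aggregated to show where the time goes. A minimal sketch, with all names and structure being my own assumptions:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical sketch of the friction-logging idea; the speaker's real
# tooling was not shown. The three categories come from the talk.
CATEGORIES = ("troubleshooting", "learning", "rework")

@dataclass
class FrictionEvent:
    developer: str
    category: str   # one of CATEGORIES
    hours: float
    note: str = ""

def summarize(events):
    """Total hours of friction per category."""
    totals = defaultdict(float)
    for e in events:
        totals[e.category] += e.hours
    return dict(totals)

log = [
    FrictionEvent("alice", "troubleshooting", 3.0, "flaky integration env"),
    FrictionEvent("bob", "rework", 2.5, "spec misunderstanding"),
    FrictionEvent("alice", "learning", 1.0, "new build tool"),
    FrictionEvent("bob", "troubleshooting", 4.0, "other team's API"),
]

print(summarize(log))
# {'troubleshooting': 7.0, 'rework': 2.5, 'learning': 1.0}
```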

Broken Clarity

Shared understanding is key. Generalizations let people communicate without giving specifics, and thus without a guaranteed shared understanding. For instance, if developers and the business consistently walk away from discussions each believing they’re on the same page, yet at the business review of a feature the business is upset because their expectations weren’t met while the developer thought they built it exactly as specified, there’s a communication issue. One solution, which helps reach a shared understanding and removes generalities from conversation, is to create a glossary of the terms your team uses and what they mean. That gives everyone a common language and understanding, and actually reduces the defect rate as a result.

Broken Awareness

Sometimes people make decisions without even being aware of them. You can review a situation with someone and help them understand that a different approach might have been better, but the next time a similar situation comes up, they repeat their original behavior. Certain behaviors are simply on auto-pilot, and unless you can get someone to stop and think before the decision gets made, they’ll choose the same path every time.

Broken Focus

There is always some sort of pressure on development teams from management to accomplish certain goals, and it can be tricky to get management to understand the need to invest time and money in certain things now rather than in new features. This “wall of ignorance” between development teams and management needs to be overcome for both sides to feel heard and for the team to head in the right direction. The speaker encourages teams to use a “risk translator” to make the concepts developers care about transparent to management: quality risks correspond to troubleshooting costs, familiarity risks carry a cost to learn, and assumption risks can force rework. A gambling metaphor was suggested to translate the ROI of a given priority – for instance, hurrying to cut 40 hours off development now increases the chance of a 400-hour time sink later by 20%.
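Running the numbers on that example shows why the translation works. Reading “by 20%” as a 20-percentage-point increase in probability (my interpretation), the shortcut loses more than it saves in expected-value terms:

```python
# Worked version of the talk's gambling example: cutting 40 hours of
# development now adds a 20% chance of a 400-hour time sink later.
hours_saved = 40
risk_probability = 0.20
risk_cost = 400

expected_loss = risk_probability * risk_cost  # 0.20 * 400 = 80 hours
net_outcome = hours_saved - expected_loss     # 40 - 80 = -40 hours

print(f"expected loss: {expected_loss:.0f} hours, net: {net_outcome:+.0f} hours")
# expected loss: 80 hours, net: -40 hours
```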

The speaker suggests a three-month trial of measuring this data to identify the biggest problem areas, and thus the priorities to focus on to reduce friction. Common areas where teams acquire friction-generating technical debt are test data generation, missing diagnostic tools, and environmental problems. Teams routinely spend significant portions of their time in these areas, time that could be reduced by spending some effort up front to address the underlying needs.

Conclusion

Overall it was an interesting and well-presented session. None of the ideas were particularly new, but the heavy focus on gathering data to support problem identification and decision making showed a different approach, one that allows quantitative evaluation of team performance without using velocity. Even the basic message of “stop and think – what is the real problem?” is useful and important to keep in mind so you don’t spend time on the wrong areas.

More reviews to come.