We embarked on the journey of our series by laying out The Problem that large companies face in deciding the methodology they will follow in adapting to the rapid evolution of modern technology. In The Dojo Coach, we discussed the personal qualities of the transformation leaders at Discover® who get up close and personal in their modeling of technical skills to develop product-focused teams. We further explored The Dojo Lifecycle of processes and practices transferred during the program that continue in the team's life long after the coaching period is completed.
It wouldn't be fitting if we didn't conclude our series by talking about expected outcomes. The option is always there, of course, for a team to return to its Dojo after the close of the initial program period for tune-ups and to learn new skills. But let’s look at the ground we gained leading up to that moment. Given the mission that we took on to transform to a product-centric world, it's fair to ask: What can teams expect to know after leaving their first dojo?
The measure of effectiveness
Had Discover chosen to transform product teams through a consultant program or different model, then a major measure of success might be in highlighting the upbeat feelings and hype among participants. In the Dojo approach, though, transformation speaks for itself. Individually and together, the team and its coach have grown, and being deliberately transparent about the experience in attending the Dojo is a key part of the transformation.
At the finish of each Dojo, we become formal in our collection of observations and metrics relating to the product team’s journey. We call this the final readout of the Dojo. This isn’t some activity we do for the first time as we wrap up. Remember that at the outset of the Dojo, we established a regular schedule of communications as standard operating procedure for the team. The intent is to anchor our outcomes around empowered teams that have learned how to function in an agile way of working. We aim for this transformation to leverage a high degree of automation, with all its implications for pipeline adoption, automated testing, and the like.
Yes, I mentioned metrics. We’ve all seen how metrics, like most statistics, can lead us down holes from which there’s no coming back out. The love-hate dynamic that plays out here is this: you can’t improve what you don’t measure, and measuring the wrong things can reinforce bad behaviors that lead to—shall we say—interesting outcomes.
It seems fair, at a minimum, to highlight metrics that we collect at the enterprise level and at the team level. Shocker: our enterprise metrics align with industry best practices of the DevOps Research and Assessment (DORA) team. The goal of DORA is to understand the practices, processes, and capabilities that enable teams to achieve high performance in software and value delivery, including:
- Delivery or Lead Time for Changes tells us the amount of time it takes a commit to get into production.
- Predictability or Time to Restore Service projects how long it takes an organization to recover from a failure in production.
- Deployment Frequency identifies how often an organization successfully releases to production.
- Change Failure Rate reveals the percentage of deployments that cause a failure in production.
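To make these four measurements concrete, here is a minimal sketch of how they could be computed from a team's deployment history. The record structure and field names are hypothetical, for illustration only; they do not reflect any particular tool's actual schema.

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records; the schema is illustrative only.
deployments = [
    {"committed": datetime(2023, 5, 1, 9),  "deployed": datetime(2023, 5, 2, 14), "failed": False},
    {"committed": datetime(2023, 5, 3, 10), "deployed": datetime(2023, 5, 3, 16), "failed": True,
     "restored": datetime(2023, 5, 3, 18)},
    {"committed": datetime(2023, 5, 4, 11), "deployed": datetime(2023, 5, 5, 9),  "failed": False},
]

# Lead Time for Changes: median time from commit to production.
lead_time = median(d["deployed"] - d["committed"] for d in deployments)

# Deployment Frequency: releases per day over the observed window.
window_days = (max(d["deployed"] for d in deployments)
               - min(d["deployed"] for d in deployments)).days
deploy_frequency = len(deployments) / max(window_days, 1)

# Change Failure Rate: share of deployments that caused a production failure.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Time to Restore Service: median recovery time for failed deployments.
time_to_restore = median(d["restored"] - d["deployed"] for d in deployments if d["failed"])
```

In practice these numbers would come straight from pipeline and incident tooling rather than hand-built records, but the arithmetic is the same.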
Much is already written about the metrics that are widely used and effective, so I’ll leave it to you all to flex your search engine skills. Within the Dojo, though, we peer into the data that we’ve collected through a more granular lens that gives us a team perspective.
Our focus differs in that it is aimed at helping the team to learn and grow, and at reinforcing team independence, qualities that we can extrapolate from some of the following metrics:
- Blocked Work Items highlight opportunities for the team to reduce dependencies.
- Story Completion Ratio increases the team’s self-awareness in the tasks of making and meeting commitments.
- Velocity Variation provides support in planning and release management.
- Open Defects help bring awareness to delivering a high-quality product.
Now that we've revealed some of the metrics that we set out to measure, let’s take a deeper dive into why we feel they help teams to develop. Before jumping in, let's not forget that we've explicitly covered how no two teams are the same. Each team starts at a different part of this journey, and each team has a distinct set of experiences and skills.
With that disclaimer out of the way, now we'll dive in.
Elevating the team
Let's discuss how our metrics yield quality insights, starting with Story Development Cycle Time. In simple terms, this measurement reveals how long on average it takes for your team to finish coding a story. You might look at drastic reductions with wonder, but giant strides in improvement can come through automation, particularly in the areas of testing and deployment.
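As a rough illustration, Story Development Cycle Time can be derived from the timestamps at which each story entered In Progress and reached Done. The timestamps below are made up for the sketch; a real calculation would pull them from the team's board.

```python
from datetime import datetime, timedelta

# Hypothetical (started, finished) timestamps per story, illustrative only.
stories = [
    (datetime(2023, 6, 1, 9),  datetime(2023, 6, 3, 17)),  # 2 days 8 hours
    (datetime(2023, 6, 2, 10), datetime(2023, 6, 6, 12)),  # 4 days 2 hours
    (datetime(2023, 6, 5, 8),  datetime(2023, 6, 6, 8)),   # 1 day
]

# Average time from "In Progress" to "Done" across the sample.
cycle_times = [finished - started for started, finished in stories]
avg_cycle_time = sum(cycle_times, timedelta()) / len(cycle_times)
```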
We also want to look at the Story Completion Ratio. This metric helps newer teams to understand root causes and potential solutions with respect to their Definition of Done. It also helps teams to understand their initial estimations of those stories, and to achieve the ultimate goal of arriving at the end of a sprint with more stories in the Done column than in Half Done. The metric can be used in retrospectives to help facilitate conversations on how the team can improve their skills in story estimation.
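A simple way to picture this metric: of the stories a team committed to at sprint planning, what share actually reached Done? The story keys and helper below are hypothetical, just to show the shape of the calculation.

```python
def completion_ratio(committed, done):
    """Share of stories committed at sprint planning that reached Done."""
    if not committed:
        return 0.0
    return len(set(done) & set(committed)) / len(committed)

# Hypothetical sprint: four stories committed, three finished.
sprint_commitment = ["TEAM-101", "TEAM-102", "TEAM-103", "TEAM-104"]
sprint_done = ["TEAM-101", "TEAM-103", "TEAM-104"]

ratio = completion_ratio(sprint_commitment, sprint_done)  # 0.75
```

A ratio that climbs sprint over sprint suggests the team's estimation and Definition of Done conversations are paying off.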
Tracking Velocity Variation is another measurement that new teams can find helpful. There isn’t necessarily a target to hit for this metric; it's more about giving the team a window into how frequently and consistently they deliver. Remember that velocity measures how much work a team can tackle during a single sprint. As you might expect, a new team is likely to be all over the place as they gain greater exposure to the complexities of their product and flex their understanding of that complexity in their story estimation. Small variations increase the likelihood of consistent delivery and the integrity of release planning activities.
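One plausible way to quantify velocity variation is the coefficient of variation of sprint velocities, the standard deviation divided by the mean; teams may track it differently, so treat this sketch (with invented velocity numbers) as one option rather than a standard formula.

```python
from statistics import mean, pstdev

def velocity_variation(velocities):
    """Coefficient of variation of sprint velocity: lower means steadier delivery."""
    return pstdev(velocities) / mean(velocities)

# Hypothetical velocities: a new team swings widely; a seasoned team settles down.
new_team = [12, 31, 8, 25, 40]
seasoned_team = [22, 24, 23, 25, 21]

new_cv = velocity_variation(new_team)            # roughly 0.51
seasoned_cv = velocity_variation(seasoned_team)  # roughly 0.06
```

The absolute number matters less than the trend: a shrinking coefficient is the "small variations" signal that makes release planning trustworthy.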
Reducing the number of Blocked Work Items helps the team to break down dependencies with other teams and leads to less variation in their velocity. The reduction should also help teams to improve their story completion ratio. In the Dojos, coaches treat these outcomes as secondary benefits of tracking blocked work items; the primary reason is to highlight dependencies and to remove them as often as possible. Remember that we want to enable teams to be independent enough to move at their own pace.
This leads us to Test Coverage. I hope that in this series of blogs, you’ve noticed our focus on automation as an outcome. Some teams have come to us with little automated test coverage and have been able to move the needle significantly by the time they exit. Test automation is key to increasing delivery speed and improving quality.
Is the data perfect? With every step we take to sharpen how we collect data and how we leverage it in the Dojo to influence product team delivery, we get closer. Where we see room to grow is in areas like test automation: the more that teams practice these principles, the better they get at them.
Improving the Dojo experience
To complement the interpretable data that we collect (metrics from Jira, data on team performance, and so forth), we also solicit direct engineer feedback as part of a good retrospective to help the Dojo coaches continually improve.
The participant feedback reflects overall optimism about the Dojo experience. We’ll continue to tweak the type of metrics we collect as we strive to provide white glove service to our product teams and help Discover deliver high-value products with high quality at speed.
We started this series to explore the use of technical Dojos as a holistic learning realm that product teams can enter to adopt new practices and build new engineering capabilities. Along the way, we learned what makes a good Dojo Coach and the skills that role requires, how we deliver Dojos and what we expect of the teams entering them, and the kinds of measurements we leverage to help teams learn about themselves and get better. Through this experience, our coaches have imparted a product-centric mindset and a consistent agile way of working, and have reinforced team autonomy and extreme automation while encouraging continuous learning.