Data and Tools Don’t Solve Problems, People Do

By Andrew Fuqua

Originally published on DevOps.com

When developing software, many managers believe that data and tools will solve all their problems: if they just have enough data, or the right analysis tools, development and delivery will run themselves and decisions will be made without human input. In reality, humans are a necessary part of the process. There are a few pieces of data you may want to start gathering, but you often don't need much data to make the right decisions, and sometimes you don't need data at all. You can use the data and tools you already have in place; your most valuable asset is your brain. To illustrate this point, I'll provide a few examples.

If it takes your team too long to fix bugs, you can dig into data from your bug tracker or ALM tool, but reasoning about the process is also a necessary step. It's not sufficient to simply look at some reports from your tooling. In fact, reasoning about your process may be all you need; you may already know where your bottlenecks are and where the waste is. When you do need to dig into the data, though, the point-solution tooling you already use could be the best place to do so.

If your epics are moving too slowly, you can use a report from whatever tool you are using to understand lead time, as well as the cycle time between states. From there, you can drill down to the stories as they flow from inception to completion and look for bottlenecks and wait states. Likewise, for build through deployment to production, you can pull data out of your tooling to understand how long each phase takes.
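As one illustration of that kind of drill-down, here is a minimal sketch, assuming you can export each work item's workflow transitions to a file. The file name, column names and the terminal "Done" state are hypothetical, and most ALM tools will give you equivalent reports out of the box:

```python
# Sketch: compute lead time and time-in-state from exported workflow
# transitions. Assumes a hypothetical CSV "transitions.csv" with columns
# item_id, state, entered_at (ISO 8601), one row per state change.
import csv
from collections import defaultdict
from datetime import datetime

transitions = defaultdict(list)    # item_id -> [(entered_at, state), ...]
with open("transitions.csv") as f:
    for row in csv.DictReader(f):
        transitions[row["item_id"]].append(
            (datetime.fromisoformat(row["entered_at"]), row["state"]))

time_in_state = defaultdict(list)  # state -> hours items spent in it
lead_times = []                    # inception-to-done durations, in days
for events in transitions.values():
    events.sort()
    for (start, state), (end, _) in zip(events, events[1:]):
        time_in_state[state].append((end - start).total_seconds() / 3600)
    if events[-1][1] == "Done":    # assumes "Done" is the terminal state
        lead_times.append((events[-1][0] - events[0][0]).days)

if lead_times:
    print(f"median lead time: {sorted(lead_times)[len(lead_times) // 2]} days")
for state, hours in sorted(time_in_state.items(), key=lambda kv: -sum(kv[1])):
    print(f"{state:>20}: avg {sum(hours) / len(hours):.1f} h over {len(hours)} items")
```

The states where items accumulate the most hours are your candidate bottlenecks and wait states. The report is the easy part; deciding what to change about the process is still a job for people.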

To illustrate this further: relatively few companies today deploy more than once per sprint, and many deploy less often. Until you get your lead time down to two weeks, measuring how much undeployed code inventory has built up is useful only for its shock value, and even then you don't need to update it every month. Measuring lead time is sufficient, and easier.

Whatever tool you are using, however, can only model what you have in the underlying systems. If the underlying systems don't model the wait states, your analytics aren't going to give you any insight into your delays. Include wait-state columns on your scrum or kanban board.
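Once the board does model wait states, even a crude flow-efficiency calculation becomes possible. A minimal sketch, assuming hypothetical column names and hours already summed per column:

```python
# Sketch: flow efficiency = active time / total time. Only possible if the
# board distinguishes wait columns from work columns. Column names and
# hours here are hypothetical.
hours_per_column = {
    "In Dev": 16, "Waiting for Review": 30, "In Review": 4,
    "Waiting for Deploy": 70, "Deploying": 1,
}
wait_columns = {"Waiting for Review", "Waiting for Deploy"}

total = sum(hours_per_column.values())
waiting = sum(h for col, h in hours_per_column.items() if col in wait_columns)
print(f"flow efficiency: {(total - waiting) / total:.0%}")   # ~17% here
```

Without the wait columns, that waiting time gets counted as work time and the delay stays invisible to your analytics.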

A great approach to continuous improvement is the Toyota kata, and there is a great book by that name. You should read it and then implement it in your organization. But know that this can't be tooled up or made automatic. You do need to be able to measure the intended improvements, but it still takes humans to do the thinking. Toyota was successful because its approach ingrained in every employee the notion of improving their processes. Toyota never minded people seeing what it was currently doing, because it knew its real advantage was its ability to change and improve faster than the competition. That requires humans. Brains.

Example: Maybe you are looking at build metrics from a tool. You might see that you have multiple builds per hour overall. But if you don't really understand the metric or what it is you're looking at, you might not realize that some individual components are built only once per day, or less often. Or you might not realize that your dev team's builds only go to QA every other week, slowing down your overall flow of value.
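To make that pitfall concrete, here is a minimal sketch, assuming you can pull build records out of your CI server; the job names, timestamps and data shape are hypothetical:

```python
# Sketch: an aggregate "builds per hour" number can hide components that
# rarely build. Assumes hypothetical (job_name, started_at) records pulled
# from a CI server's API or export.
from collections import Counter
from datetime import datetime

now = datetime(2021, 3, 1, 12, 0)   # fixed "now" so the example is repeatable
builds = [
    ("web-ui",    datetime(2021, 3, 1,  9,  5)),
    ("web-ui",    datetime(2021, 3, 1,  9, 40)),
    ("web-ui",    datetime(2021, 3, 1, 10, 15)),
    ("web-ui",    datetime(2021, 3, 1, 11, 30)),
    ("payments",  datetime(2021, 3, 1,  9, 30)),    # built once today
    ("reporting", datetime(2021, 2, 22, 16,  0)),   # built once last week
]

# The aggregate view: "multiple builds per hour" -- looks busy.
today = [(job, t) for job, t in builds if t.date() == now.date()]
hours = (now - min(t for _, t in today)).total_seconds() / 3600
print(f"aggregate today: {len(today) / hours:.1f} builds/hour")

# The per-job view tells a different story.
counts = Counter(job for job, _ in today)
last_build = {}
for job, t in builds:
    last_build[job] = max(last_build.get(job, t), t)
for job, last in last_build.items():
    print(f"{job:>10}: {counts.get(job, 0)} builds today, last build {now - last} ago")
```

The tool can produce both numbers; it takes a person to notice that the second one is the one that matters.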

Gary Gruver, in his book, “Engineering the Digital Transformation,” says, “Most organizations don’t really think closely about how value flows through their organization” (page 99). I agree. Moreover, they too easily buy into the promise of not having to think closely about how value flows.

Gruver continues: “The biggest change for the industry is to create a culture of continuous improvement.” He doesn’t say “tooling for VSM” or tooling for metrics. He says culture change. The stuff of people and brains.

Gruver also says, “Software flow is very hard to measure. … That is hard with software because the size of each feature is different, which makes it very difficult to have a quantifiable measure of flow.” Another way to say this is that you shouldn’t have unrealistic expectations that your tooling will make the decisions for you. You should expect to have to think. And then maybe dig into the data already provided by various tools in your IT or software dev value stream, and even modify some of those to capture additional data. You should expect to have to analyze each component value stream, each type of work, the mix of work types and the variability of each.

Be realistic. Be human.
