The Hottest Value Stream Metrics That Actually Matter

There is no shortage of metrics related to software delivery. Agile metrics, DevOps and DORA metrics, support, services and operational metrics, Lean, value stream and flow metrics, product and predictability metrics. All are trumpeted by their own champions as the hottest thing since sliced bread.

Research from ConnectALL shows that, because of the sheer volume of measurements, organizations are not confident in selecting metrics. That's a big deal in the success (or failure) of analytics and insights platforms and tools, and it's not cool at all. Techstrong Learning is excited to host ConnectALL's SVP of Products Andrew Fuqua and President and COO Lance Knight to talk about the specific set of metrics organizations should choose to get started, and how to ensure those metrics suit their objectives and challenges. Sounds like some spicy metrics!

Join these industry leaders as they break down the metrics that program managers, product managers, Agile managers and DevOps teams need to kick off their metrics journey—what they are and why you need them. Crank up the heat with some of the hottest metrics (that actually matter)!

Key takeaways:
- Understanding the starter metrics specific to teams
- Why teams need to track specific metrics and what they get
- How understanding and tracking metrics helps within a value stream management framework


Cody: Good morning, good afternoon, or good evening, depending on where you are in the world, and welcome to today's webinar, brought to you by Techstrong and ConnectAll. My name is Cody J Brown. I'm the host of Techstrong Learning. We have an exciting presentation ahead, but first I have just a couple of housekeeping notes. First, today's session is being recorded, so if you miss any of our discussion, or you'd like to rewatch, reshare, or relisten, the on-demand recording will be made available shortly after we conclude our live session. If you have any questions, we want you to send those in using the Q&A tab, which is located on the right side of your screen. That's also where you're going to find the chat tab. This is where we want you to engage with your fellow audience members and help steer the direction of today's conversation.

We also have a couple of handouts ready for you to download. They're there to help you follow along with today's presentation, so feel free to download those at your leisure. And at the conclusion of this webinar, we will be giving away four $25 Amazon gift cards, so be sure to stick around. The topic today is Plan and Protect, A Modern Plan For Open Source Security. Oh, no: The Hottest Value Stream Metrics That Actually Matter. That is the topic of today's webinar. And I'm joined today by Andrew Fuqua, VP of Products at ConnectAll, and Lance Knight, President and COO at ConnectAll. Lance and Andrew, thank you both for being here with me today. I'm going to give you the floor.

Andrew: Thanks. 

Lance: We could talk about plan and protect if you want. 

Andrew: We can try that, but it's being recorded. I didn't know it was going to be recorded, so I'm not sure I want to proceed. I don't know. I'm out. Well, okay. We'll do the hottest value stream metrics that actually matter. There's nothing like enthusiastic marketing to set the bar really high at the start of a webinar. It might've been better to do the other topic, but here we go. So when I think of what's really hot right now... What do you think, Lance? What's the hottest metric? I'll put you on the spot.

Lance: Well, I think it's going to be DevSecOps metrics.

Andrew: Oh, DevSecOps. Yep. That's hot. I think what's really hot right now is flow metrics. Flow metrics are really, really smoking hot. But if you think back just a minute, what was really hot was the DORA metrics. Those DORA metrics, boy, those were all the rage. Everybody was talking about them. And before that, it was the DevSecOps stuff. Yeah, that's hot. But there was also the Kanban stuff, you know, cumulative flow diagrams and lead time. That was the bomb before that. What was before that, Lance? I'll put you on the spot again. Before Kanban, the hot thing?

Okay, I'm going to say Scrum. Those Scrum metrics: the lead time, the burndown chart. Those were really hot. And before that was quality metrics, Six Sigma. That was the hot thing. That was the big thing. Everybody had to know that. So there's all this stuff. There's so much hype, and there's little acknowledgement that one set of metrics might not be all that you need. But where do you start? Well, you go to Google, right? If you know what you're looking for, if you're trying to understand some particular metric, then yeah, you can find a wealth of information that's very helpful. But if you're really not sure where to start, that doesn't seem very helpful to me. So we mentioned all this hot stuff: the flow metrics, the DORA, the DevOps and quality and product metrics, the pirate metrics.

And if you're new to value stream management, or new to this kind of stuff, trying to figure out what you need to do, you might feel like you're drowning in metrics. I'm not going to sugarcoat it for you. If you are a value stream manager, or you aspire to be one, or you're a new dev director or VP of development, you need to understand each of these things and select from them what's valuable for the objectives that your organization has. You have to know what questions to ask. You have to understand the metrics landscape and know how to use it. I kind of think it's like statistics in college. Probably everybody in the audience, or most of you, had to take a statistics class. I think that goes to show that metrics are important in all aspects of life, and this is as well. Lance, did you ever have to take statistics?

Lance: I did, but it was statistics. 

Andrew: I can't either. Neither one of us can spell or pronounce it. But we can talk about hot metrics.

Lance: Well, there were hot portfolio metrics years ago when I was doing that as well. I mean, you're absolutely right. There were always these metrics and how you look at them. So lots of hot metrics out there too.

Andrew: But if I go back to the title slide, you know, the value stream metrics that actually matter, well, I have to ask: actually matter for what? Or for whom? Are you trying to build a DevOps culture? Are you trying to make Scrum better, or use SAFe or Disciplined Agile Delivery? Or maybe you're in Lean and you need to improve flow and lead time and eliminate delays, or improve time to market. But what I see a lot of these larger organizations struggling with is just getting predictable. They're always missing their delivery dates, and they need to do more risk reduction and dependency management and figure out how to get predictable. Maybe it's quality and reliability, product performance, stability, security. There are all kinds of purposes for which metrics can be useful, and all kinds of roles as well. Maybe you're the team-level scrum master or agile coach, or maybe you're the enterprise lean-agile coach or a development director. A dev director has a different set of metrics that they probably need to look at. And then there's the director of support and the director of services, and the VP of product engineering has got a different set than the VP of product management.

So with that, I think we need to think about an approach to metrics and figuring out what it is you need. I'm not going to dwell on this, and there are other approaches, but the gist, or the point I want to make, is that you really need to start with your organization's goals. Take a look at your OKRs, take a look at your improvement objectives with regard to those questions I asked earlier, and think about what improvement direction you need to go in. Once you understand a small number of goals, you might want to ask what questions would lead you to know whether you're on the right track, or lead you to select metrics that would answer those questions and drive you towards the goal. So when I'm talking about questions, for example, it might be around innovation.

You might have a question like: how long does it take for customers to realize value when they deploy? Or maybe: can the Scrum team meet its sprint commitments and its release commitments? Can the support team meet their SLAs? Do the engineering teams and product owner teams have everything they need to do the work? Is throughput stable? Is velocity stable? Can I use that to forecast the future? Can a team control their WIP? Are we managing risk? Can we control scope? And then there's a whole wealth of stuff around coding practices and continuous integration. How long does the developer keep their code checked out? How long before they submit the pull request? How many times does it go through the review cycle? All of these are really good and useful questions to ask, but you can't tackle all of them at the same time; you'll be pulled in too many directions. So that's why it's important to start with an organizational goal and have some focus: identify some of the questions to answer, and then pick some metrics.

Then, of course, this isn't a one-and-done thing. You need to revise and repeat. The metrics that you use will guide you, but they'll need to change over time. As you make improvements, as your organization matures, you will want to retire some questions and retire some metrics. So it's an ongoing process. Now, eventually we're going to actually look at some metrics and discuss some of them and some pitfalls. Some of the stuff, though, when you're looking at metrics dashboards, it looks like the same metric answering different questions. Sometimes that's the case. Other times, a metric looks the same and may be graphed in the same way, but it could be measuring a different part of the value stream. You could be using basically the same metric for one purpose at one end of the value stream and for another purpose at the other end.

So these can be tuned for each audience, for each part of the organization, and for each goal that you're after. For example, you might use cycle time in the upper half of the value stream to tune the product management, product ownership, user experience, and prioritization side of the value stream, and you can use it at the DevOps end to tune the delivery portion of the value stream. You'll use the same metric and the same graph from both perspectives. Either way, what I believe you really ought to do is have that in one tooling, one value stream management platform, so you don't have to teach the organization multiple tools, you don't have to learn multiple tools, you don't have to tune multiple tools, and you can bring data for all those audiences from different places into the one platform that everybody can use. Then you get the benefit of having the data all together in one set of metrics, and you can get end-to-end visibility into what's going on. Now, each ALM tool, each PLM tool, orchestration tools, all of these things have some metrics built in, and those are great, and you'll still need those. In some cases, if your scope is limited, the metrics that you get out of those tools may be exactly what you need. But even then, you need to be situationally aware. You need to know what the overall organization improvement goals are for the next couple of levels up, so that you're not doing a local optimization that's defeating the overall goal of the organization.

I alluded earlier to this concept of data blending. That might not be the technically accurate term. What it really means is data integration, but in the scheme of value stream management the term integration can sometimes be misleading. I'm talking about joining all the data together, concatenating it, so you extend the view of the value stream from one tool, all the way through multiple tools, to the view of the value stream from another. You want to be able to pull that together to really understand what's going on and truly manage your value stream. By composability and extensibility, I'm talking about the ability to tailor your dashboard for different processes, different audiences, and the different questions that pieces of the organization are going to ask in the overall scheme of meeting the organization's goals and managing the value stream. And finally, by accessibility, what I really mean is that you need to make it easy to get to this view, this dashboard, this recording of your goals, questions, and metrics. You don't want to ask your VPs to go log into Rally and VersionOne and click around to get a few different graphs, then have to go to ServiceNow for some other stuff and adjust the graphs to see different teams' views. You want to make it focused, quick, and easy. And focused on the organization's goals.

Nevertheless, even though I say there isn't one single best set of value stream metrics that matter, I'm going to show you some things that I think are useful: some that are not covered by other tools but are important, some that are often misinterpreted, and some that benefit from having data from multiple sources brought into one common data model to make the metric more useful. So there are some common, essential metrics that have a variety of uses, and that's usually where I start the conversation. They're useful in many situations. If you can understand each of these, you can apply them to different goals and different roles and get good use out of them. I'm not going to go through all of these. We have them in our quick start set inside our insights products, but I think this is a great place to start. So without boring you any more with slides, I'm going to switch to a live product view to discuss some of these metrics. And while I switch, Lance, if you want to add any color, feel free to chime in at any time.

Lance: Oh, sure. So, you know, it's interesting, there are a lot of questions about different metrics coming into the chat. And this is just a starter kit we have. What's really nice about some of the metrics we can produce is that, when we go through GQM, we can do metrics by team. We can look at specific views that might be relevant to making you more predictable, and predictability itself is something different in every organization. So when you start getting information, you start to realize that you want different information to help your processes become better aligned with what you're trying to achieve, your outcomes and so on. So there are dev team metrics. We also offer DevSecOps metrics as well. Any of those solutions out there contain data that can turn into analytics and information you use to make better business decisions, and to decide whether you're doing two things: managing your value stream right, and managing what's in the value stream in the right way as well.

Andrew: Thanks, Lance. So here is our quick start set that I usually start the discussion with, to figure out where an organization needs to go and what they need to use. You'll see I have it organized by role or by organization; that typically helps people to focus. So let's take a look at one of these. One thing I'll point out, as I mentioned, is this goal-question-metric, and I'm not going to talk about it any more after this. I've put the lorem ipsum in there just because I didn't want to put in a specific goal and have you think, "Oh, well, that's not my goal. Everything Andrew has to say doesn't have anything to do with that." But I think it's useful on your dashboard to keep that goal front and center, so that you don't just end up doing metrics for metrics' sake and lose sight of what you're really trying to achieve.

So let's take a look at program and project management. A few things here. Again, the setup: I wasn't going to talk about it again, but one final note. The goal-question-metric might be different, or tuned, or lower level here, so you might have a refinement of it in the program management section. Okay. So, some metrics. Everybody knows throughput. Everybody knows velocity, or throughput by sprint, or by week, or by two weeks. We have these here, and again I'll point out that the idea is to have a higher-level view. You want to be able to pull data in from multiple tools and look at the throughput. You don't have to go to multiple tools to see it; you bring it all together to understand how your organization is doing.

Now, this particular graph, I can look at this and say, well, their throughput is all over the place. I shouldn't use this throughput to forecast my future completion date. Sometimes, though, I can look at a throughput chart and I'm not really sure: is that throughput stable, or does it have too much variability? So in my dashboards, I include a measure of throughput variation. In this particular case, it's basically the standard deviation divided by the mean, which takes the scale out of it so that you can compare a couple of teams or a couple of situations and say, "Can I use this data for forecasting?" Now, if this number is 0.2 or smaller, yes, your variation is low and you can use that for forecasting. If it's 0.2 to 0.3, tread carefully. If it's above 0.3, and you need predictability, you need to do forecasting, then that's probably the first thing to fix: understand why your throughput is all over the place. So there's throughput variation. I don't see other folks use this, but it's a critical measure to help you read your throughput charts or your velocity charts. That's why I like to include it, and why I thought I'd point it out today, so that you'd know.
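As a rough sketch of the throughput-variation measure Andrew describes (standard deviation divided by the mean, also known as the coefficient of variation), here's what the calculation and the 0.2/0.3 reading guide might look like in Python. The sprint data is invented for illustration:

```python
from statistics import mean, pstdev

def throughput_variation(throughput_per_sprint):
    """Coefficient of variation: population std dev divided by the mean.

    Dividing by the mean takes the scale out, so teams with very
    different throughput levels can be compared directly.
    """
    return pstdev(throughput_per_sprint) / mean(throughput_per_sprint)

# A steady team vs. a team whose throughput is all over the place.
steady = [20, 22, 19, 21, 20, 21]
erratic = [5, 30, 12, 2, 28, 9]

for name, data in [("steady", steady), ("erratic", erratic)]:
    cv = throughput_variation(data)
    if cv <= 0.2:
        verdict = "low variation: safe to use for forecasting"
    elif cv <= 0.3:
        verdict = "moderate variation: tread carefully"
    else:
        verdict = "high variation: fix the variability before forecasting"
    print(f"{name}: CV={cv:.2f}, {verdict}")
```

The steady team lands well under 0.2, while the erratic team lands well above 0.3, which is exactly the situation where forecasting from throughput would mislead.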

All right, flow load. This is one of those flow metrics that everybody's really hot on. Flow load, man, it's fantastic. And it can be useful. So how do you read this? Somebody might look at this and say, well, the blue bar is stories, here's my key, and the story bar is getting bigger, so we're working on more stories. The defect area, the green area down here at the bottom, is getting smaller, so we're doing fewer and fewer defects. So maybe the quality is getting better and I'm able to put more effort into stories. But then you look and say, well, the risk bar is getting a little more narrow, so maybe I'm not doing risk management like I ought to. And technical debt, that seems kind of thin. Maybe I should be investing in technical debt a little bit more.

That's the default reading of flow load that you would normally hear, and it is valuable. If you're a new product or a new organization, there's a good prescription for how you might want to allocate your effort across those four areas. If you're a more mature product, you might want to allocate a little differently. But the problem with this particular graph is that what may really be going on here is that they just have too many stories. They're working on too many stories, and those are moving really, really slowly, while the defects are moving really, really quickly, so at any given point in the measurement there's not a whole lot of defect work in progress. That's the real problem. So the point I want to make here is: understand the metrics that you're using, really understand how to read them, and challenge somebody if they're giving you their reading of a metric. Make sure they're not using the metric incorrectly or for a bad purpose.

I know no one here would do that, but it does happen. Escaped defects. I'll talk about one more of these in this section, and then we'll move on. An escaped defect is a defect that escaped the process: it should have been caught in the development phase, but it was caught by some later testing phase. And of course, if it escapes that as well and gets all the way into production, we call it a production bug. So this is a useful graph, important to understand. And then evaluate: is this level of escaped defects okay? Is this level of production defects okay? Where do I need to focus my effort first? Should I reduce one of them, or should I focus on the other? These kinds of things are a good place to start, a helpful metric and a useful tool.

After that, if this is the project manager or program manager, the next place they're probably going to go is to the agile manager or the engineering manager. They're going to start asking questions: what do we do? What does this stuff mean? How do we improve the situation? So the agile manager might want to go look for bottlenecks, to understand what is slowing down progress or making things take so long. Now, this is a very common chart. You see it in a lot of tools, and this is how they show bottlenecks: basically the average time spent in each state. So if I was looking at this, where might you think you would need to go look?

I would think I'd go look at the two red bars. Of course, red draws your attention, so these must be where the bottlenecks are: testing deferred, and this approved state, whatever that is. The problem with this is the problem with all averages. There's a great book by Sam Savage called The Flaw of Averages that talks about this problem. Anytime you're averaging data, you're losing information. In this particular case, 2.2 weeks is a little bit more than 14 days. So I prefer this other chart for looking at bottlenecks. I have duration on the x-axis, and there's 15 days, and I have the number of tickets that went through each state on the y-axis. So if I look at this dot right here, testing deferred: only eight tickets. I don't really need to spend time trying to tune my process or improve my value stream for eight tickets. Where I really need to focus is this: 222 tickets that are in the approved state.

That's where I need to focus. Also, anything that's up here at the top, I might want to look at each of those states. What's this one over here? This one is in QA. If I look at the other chart that most people use, QA looks green; it doesn't look bad at all. It's in the middle of the list. But all tickets go through that state. So if I can save an hour for all tickets, or half an hour, or half a day, then I can have a really big benefit.
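The idea above, weighting each state's average dwell time by how many tickets actually pass through it, can be sketched roughly like this. The state names and numbers are made up for illustration, loosely matching the examples in the talk:

```python
# Hypothetical per-state data: (average days in state, tickets that passed through).
states = {
    "testing deferred": (15.4, 8),    # long average, but only 8 tickets
    "approved":         (15.0, 222),  # similar average, hit by 222 tickets
    "in qa":            (2.0, 300),   # short average, but every ticket goes through
}

# Total ticket-days spent in each state: a better bottleneck signal
# than the per-state average on its own.
impact = {name: avg_days * count for name, (avg_days, count) in states.items()}

for name, days in sorted(impact.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:16s} {days:8.1f} ticket-days")
```

Ranked this way, "approved" dwarfs "testing deferred" even though their averages look almost identical, and even the short QA state accumulates enough ticket-days that a small per-ticket saving pays off.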

All right. So this chart I find much more useful than what standard tools usually provide. Also, this box plot is a great supplement. With that, I can see how many outliers I have, I can see how big the range is on these things, and I can understand better where I need to go focus and tune my process. In this particular case, I look and say, "Wow, that's a lot of outliers." Probably what I want to do is see whether these outliers are the same in each one of these states. Do I have a handful of tickets that are causing all of the problems? So that's a little bit about bottlenecks that I wanted to share with you: the average chart, the chart that you normally see, is really kind of misleading and not as useful as some other options that you have. Something else the agile manager would probably want to do is look at actual state transitions. I have, in my dashboard, a list of the tickets sorted by the number of state changes they had. So, 16 state transitions for a regular development ticket. That sounds way high. What's really going on there? I can see here the number of times this ticket has cycled through code review and sprint testing. Now, you can see something like this in JIRA; it's got your state transition graph, but that's in theory. What I can show with this graph is what actually happened. I could click on a different ticket, see its state transitions, and evaluate those as well.

This one went two times through this code-review-and-in-progress loop, and it went through the testing loop three times. What's really going on? Is there something I can do to improve that kind of situation? These are some things that you don't normally see that I thought would be valuable to highlight. So that's what I wanted to show you. All right, product manager. I'm not going to go through all of these, except I'll point out that this distribution pie chart, or ring chart or donut chart, whatever you want to call it, is the analog to flow load. That's the other problem with flow load: it's showing what's in progress at the moment, or through time, but you might want to look back and see, well, what have I done over the last quarter?

How does it compare to the quarter before? Product managers typically are not only trying to balance the prioritization amongst these categories (bugs, stories, technical debt, and risk), but they may also have a target investment allocation. They're trying to hit 25% on UI improvement, 25% on features, 25% on bugs, and another 25% on whatever other corporate goal. So this kind of graph is helpful to a product manager to see what we've actually done, and it's a perfect complement to the flow load graph. Of course, other stuff they may want to look at is the oldest bugs: are there bugs that are being ignored or being missed? You can go look in a tool to do that, but it's helpful to have a focused dashboard for the product manager to use. And particularly when pulling data in from multiple tools, it helps to have the combined set in one graph to focus on. Okay, that's the product manager.

Now, here's the DORA metrics. Everybody understands the DORA four: deployment frequency, change failure rate, mean time to recover, and lead time for changes. But this deployment frequency is usually expressed as a rate, you know, so many per day or so many per week. I like this expanded graph. I want to actually see when the deployments are happening. What I put on the y-axis is the number of items in each deployment, and I can see from this chart that I've got one team that's making big releases infrequently, whereas other teams are doing smaller releases more frequently. And if DORA is the thing we're trying to do, if we're trying to go to more frequent releases, I can identify that team and work to improve the process they're going through, or their approach, or their delivery speed.
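As a rough sketch of that expanded view, here's how you might compute deployment frequency and average batch size per team from a deployment log. The team names and numbers are invented for illustration:

```python
from collections import defaultdict

# Hypothetical deployment log: (team, day_of_quarter, items_in_deployment).
deployments = [
    ("team-a", 5, 3), ("team-a", 12, 2), ("team-a", 19, 4), ("team-a", 26, 3),
    ("team-b", 30, 45),   # a couple of big releases per quarter...
    ("team-b", 85, 52),   # ...with dozens of items in each batch
]

# Group batch sizes by team.
per_team = defaultdict(list)
for team, _day, items in deployments:
    per_team[team].append(items)

# Frequency alone hides the batch-size story, so report both.
for team, batches in sorted(per_team.items()):
    freq = len(batches)
    avg_batch = sum(batches) / freq
    print(f"{team}: {freq} deployments, avg batch size {avg_batch:.1f}")
```

Even this tiny example makes the pattern visible: team-a ships small batches often, while team-b ships rarely and in bulk, which is the team you'd work with if more frequent releases are the goal.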

So that's why I like this kind of graph. And let me point out another problem with averages. The DORA metrics have this mean lead time for changes. If you publish that, your executives are going to look at it and say, "Oh great, we can get improvements in about two weeks. Fantastic." But the problem is, that's the average. Half the time, your executive is going to be disappointed; it's not going to happen in two weeks. So every time I present a mean lead time, I also show the 80th percentile lead time expectation. The way you read this is: 80% of the time, this team gets enhancements done in 4.6 weeks. That's quite a bit different from two weeks, but it sets a truthful expectation, and the business can make plans based on that. 80% of the time.

Okay, you know, I can accept that. And you might want 70% or 85% or whatever, but this kind of graph helps set expectations properly. Let me show you another problem with these DORA metrics, the mean lead time for changes. This is real data that I got from somewhere; I didn't manufacture it. Look at this: production bugs, two weeks to get production bugs out. But the 80th percentile is one week. How in the world can that be? How can this team get production bugs done in a week 80% of the time, when the average is two weeks? The problem is outliers. There's a relatively small number of outliers that took a really long time and threw off the average. So the mean in this case is really not very helpful.

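The outlier effect described above is easy to reproduce. In this made-up sample, most production bugs close within a week, but a couple of outliers drag the mean well past the 80th percentile:

```python
from statistics import mean, quantiles

# Hypothetical lead times in days: most bugs close within a week,
# but two outliers take months and drag the average out.
lead_times = [2, 3, 3, 4, 5, 5, 6, 7, 7, 60, 90]

avg = mean(lead_times)
# quantiles(n=10) returns the nine deciles; index 7 is the 80th percentile.
# The 'inclusive' method treats the data as the whole population.
p80 = quantiles(lead_times, n=10, method="inclusive")[7]

print(f"mean lead time: {avg:.1f} days")   # well past two weeks
print(f"80th percentile: {p80:.1f} days")  # about a week
```

The mean comes out around 17.5 days while the 80th percentile is 7 days, so the mean tells you almost nothing about a typical bug, and trimming the tail with a percentile gives a far more honest expectation.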
The 80th percentile lead time expectation is immune to that problem. It allows you to trim the tail of those outliers and gives a better indication of where you really are and how long things are really going to take. I'll just briefly show two more things, and then we'll jump over to questions. Of course, you have services teams and support teams. They usually want to look at load by person, or load by source, where the problems are coming from. They'll probably have their own 80th percentile lead time expectation for different classes of work. They may want to understand the load on their people, or the top 10 sources of problems, or the top 10 versions that are out in the field. So with all of this stuff, what you really want to be able to do is consolidate the data from multiple tools, pull it together, and have dashboards that are tailored to particular audiences. And never lose sight of your overall goal, so that you understand why you have the metrics on the page.

So, with that, do we want to field some of these questions, Lance? Are you still there? Or Cody, you want to field some of them for me?

Cody: Absolutely. It looks like Lance may have dropped, so I'm going to assume he's jumping back in with us momentarily. Let's get to some questions. This first question reads: which security metrics should we look into?

Andrew: Oh, goodness. I don't know if I'm prepared to answer that question. But I think it's the same thing: you've got to understand what your goals are, and then go learn the metrics and evaluate them. As you can see, I'm kind of straying away from the "you gotta have this one" approach. These things that I've shown are good and useful, but I'm not saying you've got to have them. I think you've got to evaluate your own situation, understand where you are, and then go from there. I don't think there's an easy answer. You have to dig into it yourself.

Cody: So should we have a metric per development process? 

Andrew: I'll go back to the same story: start with your goal. If the problems you're trying to focus on, if the goal, is around improvement of the delivery organizations, and you've got Kanban teams and Scrum teams, then yeah, there probably are some metrics specific to those teams and their operation. But if that's not where your focus is, if you're not in the middle of an agile transformation, if your focus has to do with innovation or security issues, then perhaps that's not the place to focus. So again, be careful about over-optimizing in one particular area of the value stream, which might end up sub-optimizing the whole.

Cody: So how did ConnectAll come up with these particular metrics? 

Andrew: Yeah, some of them I put in there for education purposes; I think there are some important things for people to know and understand, so I've included some of those. Others I've included just because they are very useful when you're able to pull data in from multiple data sources; they're more useful in a dashboard like this, where you can pull in data from multiple sources, than they are in the individual tools. And the others are common starting points. They have a lot of uses, like you saw, and some of the stuff that I showed appears in multiple areas: in program and product management as well as at the team level. So they're common, they're useful, or they have some educational component, like velocity variance. I include the 80th percentile lead time because nobody else does, but it's useful and people need to know about it. So it's a multitude of reasons why we included them. 

Lance: Hey, sorry, my computer died, and I joined back. You know, we went through a whole organizational exercise, and we're constantly looking at the most valuable things to add to the metrics that we offer right out of the box. And there are other dashboards where we offer metrics; like the DevSecOps metrics I was talking about earlier, we offer a hierarchical view. The power of platforms like ours is that we can connect to all those tools, and by creating a really unique common data model, we can then get all kinds of information, allowing you to make great business decisions. And we created the starter kit so that within a couple of days you can connect your tools within ConnectAll and start seeing metrics, start accumulating data for metrics. 

So you can make those business decisions quickly. And then you can expand, thinking about what other things you may have that the starter kit doesn't cover, and grow out from there. We want to get you value really quickly, with maybe just the flow metrics of the portfolio or the DORA metrics, depending on what you're connecting into the platform, and then go from there. And we have a particular point of view on a bunch of different metrics that are really important, depending on the clients we talk to and their needs. 
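The "common data model" Lance describes can be pictured roughly like this: each tool's export is translated into one shared record shape, after which any metric can be computed across tools. This is a minimal sketch under assumed field names; it is illustrative and not ConnectAll's actual schema or API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class WorkItem:
    """Common record shape shared by every source tool."""
    item_id: str
    source_tool: str  # e.g. "jira", "servicenow" (hypothetical names)
    started: date
    completed: date

    @property
    def lead_time_days(self) -> int:
        return (self.completed - self.started).days

# Adapters translate each tool's (assumed) export format into the model.
def from_jira(rec: dict) -> WorkItem:
    return WorkItem(rec["key"], "jira",
                    date.fromisoformat(rec["start"]),
                    date.fromisoformat(rec["done"]))

def from_servicenow(rec: dict) -> WorkItem:
    return WorkItem(rec["number"], "servicenow",
                    date.fromisoformat(rec["opened"]),
                    date.fromisoformat(rec["closed"]))

items = [
    from_jira({"key": "DEV-42", "start": "2024-01-02", "done": "2024-01-09"}),
    from_servicenow({"number": "INC0001", "opened": "2024-01-03",
                     "closed": "2024-01-05"}),
]

# Once normalized, one metric function serves every source tool.
print([i.lead_time_days for i in items])  # prints [7, 2]
```

The design point is that metrics code only ever sees `WorkItem`, so adding a new tool means writing one adapter, not new dashboards.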

Cody: So once you start collecting this data from these metrics, how long until the data really has anything meaningful to provide? 

Andrew: Yeah. With most of the metrics in the starter kit, I can start in an hour: I can have data pulled from multiple tools, and you're starting to get value out of it. There are a couple that need some time to monitor state changes and watch the tickets go through for a while, but that's only a couple of them. So for most of them, data can be pulled and used right away. 

Lance: Real-time data. We don't batch it and then process it; it's got to be real time as things are moving, as you're collecting data and states are changing. 

Cody: So what would you suggest to product managers in order to define and track that first set of metrics? 

Andrew: What I love about these questions is we've kind of heard it three times already: what metrics do you recommend? Where do I need to start? Just give me the answer, man. And I alluded to it earlier: I'm sorry, but you really have to understand your organization's goals and what you're after. Is it innovation? Does your product management organization need to drive more innovation, or are you in a big organization that needs predictability? Each one of those situations is going to call for a different prescription for your metrics. So if you're in charge of a piece of the organization, or if you have a value stream management kind of role and responsibility, you have to start with that goal, with your organizational improvement objectives, have conversations with the rest of leadership, and make sure everybody's pulling in the same direction. 

Cody: So who would you recommend in the organization to own these metrics, and who are the main stakeholders that they would need to consider? 

Andrew: Well, for stakeholders, there's the main person who's driving the initiative, and I'll come back to that. But then you have the people closest to the work, who need to actually implement any improvements to your process or your value stream. So that's one constituency: the people where the focus of the change is. The other important stakeholder is the person with the purse strings, the person who needs to lend support for your overall value stream management initiative, who's funding it and approving it, and who can provide funding if you need a consultant or a contractor to change tooling or to do some integration. Those are the two main constituencies. But then the other one is the person who's facilitating the discussions around the goals and around your value stream management improvement process, who's running your value stream management. That person is a main stakeholder as well, if you will. So, three groups: closest to the work, closest to the funding, and the person providing the leadership or the guidance around your value stream management. I would say a value stream manager, though organizations don't always have a dedicated person with that title, so it might show up as a development director or something like that, or an enterprise agile coach. 

Cody: So say someone just went through the process of creating the value stream for their project or product. What would be the first metric that they should have established? 

Andrew: I think this is the same question. One thing I'll also point out is that value streams already exist. If you have a company that's ever gotten its first bit of revenue, you have a value stream. You might not recognize it, you might not understand it, you might not have identified it, you might not be managing it well, you might be mismanaging it for all I know, but it's there; it exists. So I go back to: understand your goals and objectives, and then put a metrics program in place that supports those. 

Cody: Awesome. So I'm not seeing any more questions at this point in time, so I'm going to ask our audience to send in any last-second questions that you might have. Now would be your chance. And I'm going to go ahead and give away some gift cards while we wait. Does that sound good, guys? 

Cody: Perfect. So the four winners of our four $25 Amazon gift cards: our first winner is Harsha V, our second winner is Pilar D, our third winner is Dave B, and our fourth and final winner is TN yawn L. To the four of you, congratulations; keep an eye on your inbox to claim that gift card, and if you don't find an email, check your spam folder. I'll also take this opportunity to remind our audience that this session was recorded, so following this live session, you'll receive an email with a link to access the recording on demand. You can also find the recording living on the website; be sure to look in the on-demand section. I'm just seeing a couple of things coming into the Q&A box. So, it looks like we are ready to wrap up. Do you have anything you want to leave our attendees with before we close out? 

Andrew: Cody, I noticed one comment in the chat that I thought was very good. The comment was: "I disagree with retiring metrics. The frequency can be adjusted, but even if a metric is deemed useful and has stabilized over time, if you retire it completely, results can return to undesirable over time." I totally agree with that. I probably glossed over, went kind of quickly over the part where I was talking about retiring metrics. I totally agree that you might want to look at a metric a little less frequently, but don't cut it off too soon. You want to make sure that whatever change that question and metric led you to has stabilized in the organization, that it stuck. So whoever made that comment, I appreciate you asking for that clarification; I totally agree. That was my final word. Lance, do you have a final word? 

Lance: Hey, thanks, everybody, for joining, and I apologize for my technical issues. Next webinar, I'll add more value. I promise. 

Andrew: A little value in value stream management, right. 

Cody: Is it value or is it value stream management? I don't know. This is the question. 

Andrew: Yeah, I think that would be a great webinar. When is that going to come? When is that? 

Lance: I believe we will be doing that pretty soon. 

Andrew: Okay. Stay tuned. 

Lance: Thank you so much. 

Cody: Awesome. Well, Andrew and Lance, thank you both for joining me today. I am seeing a question about handouts: if you are not seeing the handouts under the handouts tab, those should be included in the email that gets sent to you after the fact. We will get those handouts over to you, so don't worry about that. But to everyone, thank you all so much for joining us. Andrew and Lance, thank you. I'd like to thank ConnectAll for sponsoring today's webinar. And of course, before we close out, who would I be to end without thanking our audience? We really appreciate you spending time with us today. We ask for one extra moment of your time to fill out a quick post-webinar survey that should pop up on your screen in just a moment. Otherwise, we hope to see you at a future Techstrong learning webinar or workshop. Everyone, have a great rest of your day. 

Lance: Thank you. 
