
Note: My colleague Dan Vacanti has captured, and expanded on in greater detail, many of the topics touched on in this post in his book titled “ActionableAgile.”

“The ability to take data – to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it, it’s going to be a hugely important skill in the next decades, not only at the professional level but even at the educational level for elementary school kids, for high school kids, for college kids. Now we really do have essentially free and ubiquitous data. So the complementary scarce factor is the ability to understand that data and extract value from it.”

“I think statisticians are part of it, but it’s just a part. You also want to be able to visualize the data, communicate the data, and utilize it effectively. But I do think those skills – of being able to access, understand, and communicate the insights you get from data analysis – are going to be extremely important. Managers need to be able to access and understand the data themselves.”

 – Hal Varian, Google’s Chief Economist, Jan 2009 – The McKinsey Quarterly

My previous post discussed how some of my earlier teams used T-shirt sizes for story level work items in their software development planning processes. But T-shirt sizes were only a part of what helped us get effectively predictable. The emphasis on just-in-time (JIT) story creation and story analysis, along with just-enough story and portfolio level backlogs (limiting work-in-progress or WIP), were also significant contributing factors. Two other key factors were de-emphasizing upfront estimating of level-of-effort and duration, and instead placing a greater emphasis on lightweight tracking of real (lead and cycle) times to complete and deliver story level work items. (See my earlier posts here and here for a bit more context and background on push vs. pull scheduling systems.)

I also discussed how some basic analysis of this lead time data and T-shirt sizing helped us develop an internal service level agreement (SLA) for completing story level work items. But this information also guided and shaped the policies we developed to influence the team’s interactions and specific responses (pulled from our toolbox) in a JIT manner as information unfolded about a story’s level of effort and duration. My observation is that all this contributed to us becoming predictable in a context where we never had been before using heavier upfront planning strategies. Based on my study of scheduling systems, this combination reflected key pull scheduling characteristics, where the role of our software development workflow management changed from determining all operational activities upfront to one focused much more on setting the rules for interactions (in turn influencing our work environment structures).
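To make the SLA idea concrete, here is a minimal sketch of how such a number can be derived from raw lead times. The file name, column names, and the 85th percentile threshold are all assumptions for illustration, not the specifics of our actual spreadsheet:

```python
import pandas as pd

# Hypothetical export of story level work items; substitute your own
# file and field names.
df = pd.read_csv("stories.csv", parse_dates=["todo_date", "done_date"])

# Lead time in calendar days, from entering the ToDo state to delivery
# (so weekends, holidays, etc. are included).
df["lead_time"] = (df["done_date"] - df["todo_date"]).dt.days

# One common SLA candidate: "85% of stories complete within N days."
sla_85 = df["lead_time"].quantile(0.85)
print(f"85th percentile lead time: {sla_85:.0f} calendar days")
```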

There’s lots more analysis (mathematical and statistical) you can do using the minimal, easily collected data that produced the Basic Story (Lead Time) Metrics table and T-shirt sizes from my earlier post, but that will have to wait for another time. For this post I want to focus on visualizing the information in this simple table to see whether we might extract a bit more value from the modest analysis investment already expended.

I’ll also build on this initial visualization, using a bit more quick, low-hanging-fruit analysis of the full raw data set represented by the earlier spreadsheet snippet, to produce a basic temporal (time) perspective of story lead times. Can adding this perspective provide other useful insights into the nature of our workflow’s story level work item lead times? (Both the earlier table and spreadsheet snippet are included here.) Then, with a bit more analysis of the existing raw data spreadsheet, I’ll add a work item type perspective, and then overlay the T-shirt size information on this mix.

Finally, again with just a bit more analysis of the existing raw data, I’ll visualize two other separate temporal perspectives using both percentages and frequency counts. Afterwards, let me know what you think. Do one or two of these other ways of visualizing the information in the earlier table and spreadsheet snippets help you and others access, understand, and communicate insights that lead to a more effectively predictable workflow in your software development context?

Visualizing T-Shirt Size Frequency and Percentiles (Histogram and Pareto Charts)

The charts in this section don’t reflect the “textbook” definitions of a histogram and a Pareto chart, respectively. Still, I’ll refer to them as such for now and leave it to your curious side to learn why they’re “pseudo” versions. Enhanced descriptions of the bins appear as the x-axis labels on the histogram, and the counts, pulled directly from the data in the Basic Story (Lead Time) Metrics table above, appear as labels on the individual bars, with a frequency scale plotted along the y-axis.

I used color too, in the same way as done in the earlier table, with green shades indicating the more desirable T-shirt sizes for the story level work items in our context. What do you think? Does this visualization of the same data provide improved access and understanding, or better communicate insights into the expected lead times of story level work items? Note: these story level work item lead times are calendar days that include weekends, holidays, sick days, vacations, etc.

Next, the individual percentages of each T-shirt size bin from the table were added to this histogram, which complements the visual representation of the color bars, gives additional meaning to the absolute frequency counts for each T-shirt size bin, and helps in comparisons. Finally, the cumulative percentage values were used to add the “pseudo Pareto line,” which clearly shows story level work items with a lead time greater than two weeks really are less common relative to the others. All the information in this pseudo Pareto chart exists in the earlier table; there is no new information here. So, does this visualization give better insights, access, or understanding about potential SLAs or outliers?
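For anyone who wants to reproduce this kind of chart, here is a rough sketch using pandas and matplotlib, building on the lead time column from the earlier snippet. The bin edges, T-shirt labels, and colors are illustrative placeholders, not the actual sizes from my table:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Illustrative bins -- replace with your own T-shirt size boundaries.
edges = [0, 7, 14, 21, 28, float("inf")]
labels = ["XS (<=1 wk)", "S (1-2 wks)", "M (2-3 wks)", "L (3-4 wks)", "XL (>4 wks)"]
df["size"] = pd.cut(df["lead_time"], bins=edges, labels=labels, include_lowest=True)

counts = df["size"].value_counts().reindex(labels)   # frequency per bin
cum_pct = counts.cumsum() / counts.sum() * 100       # cumulative percentages

fig, ax = plt.subplots()
bars = ax.bar(labels, counts, color=["green", "limegreen", "gold", "orange", "red"])
ax.bar_label(bars)                                   # counts on each bar
ax.set_ylabel("Frequency (stories)")

ax2 = ax.twinx()                                     # second axis for the
ax2.plot(labels, cum_pct, "k--o")                    # "pseudo Pareto line"
ax2.set_ylim(0, 105)
ax2.set_ylabel("Cumulative %")
plt.show()
```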

Temporal Perspectives of Story Lead Times (Scatter Plot Charts)

For the limited effort required, the histogram and Pareto chart give us a number of useful insights into the distribution of story level work item lead times and have helped us define some meaningful T-shirt sizes (derived from real times and produced from our existing software development workflow process). But we have no insight into the stability of our story level work item lead times over time, or into how specific actions along the way impacted them (or didn’t).

To add a temporal (time) perspective we’ll use a simple scatter plot of the underlying raw data to visualize story level work item lead times over our reporting intervals. As mentioned earlier, I did a bit more analysis of the raw data spreadsheet to partition lead time data by functional story level work items and infrastructure story level work items. This was quick to do as several existing fields in the underlying raw data spreadsheet distinguished between these two work item types. (Note: I scroll time along the x-axis with most recent information on the left as older information “falls” off at the right end, which matches my preference for creating CFDs as well. Again, this is a “preference”, and you should flip the direction as needed to fit your context and audience).
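Here is a hedged sketch of such a scatter plot; the “item_type” and “done_date” field names are assumptions about the raw spreadsheet, so adapt them to your own export:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for item_type, color in [("functional", "tab:blue"), ("infrastructure", "tab:red")]:
    subset = df[df["item_type"] == item_type]
    ax.scatter(subset["done_date"], subset["lead_time"], c=color, s=12, label=item_type)

ax.invert_xaxis()                 # most recent on the left, per my preference
ax.set_xlabel("Completion date")
ax.set_ylabel("Lead time (calendar days)")
ax.legend()
plt.show()
```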

What insights about our story level work items does the new temporal perspective in this scatter plot provide us? How does distinguishing between functional and infrastructure story level work items help? Do you think the value might be greater or less depending on the context of your software development workflow? Overall we see a number of infrastructure story level work items distributed throughout the full timeline. We also see a higher density of story level work items at the latest end of this timeline. What might this mean? It’s also pretty clear we hit a spot in the overall timeline where our story level work item lead times were quite long in our context. What happened here? Can we look back at our story notes (or reporting interval notes, as we kept them) and explain these cases? Clearly this is where our larger T-shirt sizes came from, right? Why?

Now, I’ll add the T-shirt sizes from the earlier table to this new temporal perspective as solid and dashed colored horizontal lines, in effect adding Pareto type information to the scatter plot. Does this help make it clearer just how few story level work items were taking greater than two weeks of lead time to complete? Again, this is lead time, not cycle time, so it includes time starting from when the story was placed into the ToDo process state (and in our context included weekends, holidays, etc.). How about helping to identify outliers? In retrospect, we know from the earlier table there are just under 600 lead times (blue and red dots) plotted on this scatter plot, and we can hand count the eleven that have a lead time well above 28 days. That’s less than 2% of the total, and we can see that for the most part they were concentrated in one area along our 16 months of data. Is that helpful insight when setting SLAs or communicating expectations with others?
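Overlaying those size boundaries takes one line per boundary on the axes (“ax”) from the scatter sketch above; the specific day values here are placeholders, not my table’s actual boundaries:

```python
# Horizontal T-shirt size boundaries over the scatter plot.
for days, color, style in [(7, "green", "-"), (14, "green", "--"),
                           (21, "orange", "--"), (28, "red", "--")]:
    ax.axhline(y=days, color=color, linestyle=style, linewidth=1)
```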

Two More Charts for Fun (Stacked Bar Charts)

The temporal perspective of the scatter plots above does provide some useful insights, and again, the overall effort required is fairly limited. However, along the way I wondered if visualizing both the concrete “counts” of story level work items being completed and the “percentages” of their lead times over time might add some valuable insights. So, I did a bit more analysis on the raw data spreadsheet to partition the months of story level lead time data into four-week intervals (no special reason I picked four weeks; it could have been two or six, depending on context, and one might start with two and expand the intervals as more data is collected).
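Partitioning completions into those intervals is straightforward; a sketch, again assuming the hypothetical “done_date” column from earlier, with week numbering starting at the earliest completion date in the data:

```python
# Label each completed story with a four-week reporting interval.
start = df["done_date"].min()
week = (df["done_date"] - start).dt.days // 7 + 1    # 1-based week number
first = (week - 1) // 4 * 4 + 1                      # first week of each interval
df["interval"] = "Wk " + first.astype(str) + "-" + (first + 3).astype(str)
```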

This first stacked bar chart shows percentages for each T-shirt size bin for a number of reporting intervals displayed on the x-axis. (Again, I’m scrolling the latest date on the left, with earlier dates “falling” off to the right.) Notice the reporting intervals labeled Wk 25-28, Wk 29-32, and Wk 33-36. What insights do these three stacked bars provide you? There are some larger percentages of story level work items with longer lead times during these twelve weeks. How does this time frame match up with our scatter plots above? Do you see how it aligns with the cluster of dots sitting higher on the scatter plot? What new insights or value might the temporal perspective of this stacked bar chart of T-shirt sizes provide you? You certainly see less green for these three reporting intervals.

On the second stacked bar chart I show the counts for each T-shirt size bin for a number of the same reporting intervals, again displayed along the x-axis. So, what stands out here? Obviously the number of story level work items completed per four-week interval is fairly consistent until the last two reporting intervals. What happened in Wk 69-72 and Wk 73-76? I actually have notes on what was happening in our environment, but that is a whole other story for another time. The point here is the stacked bar chart certainly provides some new insights and reflects the context of what was happening in our environment at this time.
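Both stacked bar charts fall out of a single cross-tabulation once the size and interval columns from the earlier sketches exist; a rough version:

```python
import pandas as pd
import matplotlib.pyplot as plt

counts = pd.crosstab(df["interval"], df["size"])     # stories per size per interval

# Sort intervals numerically ("Wk 5-8" before "Wk 13-16"), then reverse
# so the most recent interval lands on the left, as in the charts above.
order = sorted(counts.index, key=lambda s: int(s.split()[1].split("-")[0]))
counts = counts.loc[order[::-1]]
pct = counts.div(counts.sum(axis=1), axis=0) * 100   # row-normalized percentages

pct.plot(kind="bar", stacked=True, ylabel="% of stories")          # first chart
counts.plot(kind="bar", stacked=True, ylabel="Stories completed")  # second chart
plt.show()
```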

Look at reporting intervals Wk 25-28, Wk 29-32, and Wk 33-36 again on this chart. While we don’t have specific percentages, it’s still pretty clear there is less green and more orange and red here, indicating longer story lead times, and story counts took a slight dip before rebounding a bit. Any guesses on what happened? Again, match it up with the scatter plot and notice the time of year. These new insights reflect some of what was happening in our context, according to our notes during this time, beyond just the usual impact of holidays.

Closing Thoughts

While reading this post, it might appear at times that the analysis and visualization discussed above were done only in retrospect, after all the data was collected. I admit, looking back while writing this post, it was hard not to give that impression. Yet, to be clear, the analysis and visualization of the results in these charts occurred and evolved over time, as more and more of the full 16 months of data was collected on this project, and as we learned and observed along the way. I also want to be clear that as we collected actual story level work item lead times, along with other data like story cycle times, stories completed per reporting interval and per feature, and feature level work item lead and cycle times, and analyzed this data in similar ways as above, the models, the information, and the insights were very valuable in helping us get effectively predictable and confident in what we could commit to delivering in a specific time.

A second point I want to be clear on: we didn’t use this data or analysis as part of individual or team performance evaluations. The focus and purpose of this data and analysis was primarily on helping us understand and learn about the nature of our software development workflow as it related to completing and delivering quality story and feature level work items. As mentioned earlier, the information was used primarily to help us guide and shape the policies that influenced the team’s interactions and specific responses in a just-in-time manner as information unfolded about a work item’s (story, feature, epic) level of effort and duration. The role of our software development workflow management became focused much more on setting the rules for these interactions (in turn influencing our broader work environment structures and interactions), and on shaping (conditioning) the workflow inputs and managing them through the well-known constraints and measured capabilities of our software development workflow process.

Take care,

Frank