
Colorado’s fight with COVID-19: A view through a process behavior chart lens

You are only fooling yourself if you don’t think there’s a high degree of uncertainty about the best path forward.

In these times, alternatives and opposing opinions on the problems and solutions surrounding the pandemic need to be heard, not silenced.

While COVID-19 presents us with a particularly thorny case of decision making based on scientific uncertainty, this issue is perennial in science.

– Peter Attia, M.D., “The importance of red teams”, May 24, 2020

My interest in writing this post

Over the years I have spent a fair portion of my “free” time learning about predictability and uncertainty through randomization-based approaches to inference and data analysis, in particular from works by Dr. Sam L. Savage (author of “The Flaw of Averages”), Douglas W. Hubbard (author of “How to Measure Anything” and “The Failure of Risk Management”), and Dr. George W. Cobb (author of several statistical textbooks and papers advocating for these inference methods – see “The Introductory Statistics Course: a Ptolemaic Curriculum?”). A bit more recently, I have been learning about process behavior charts from the works of Dr. Donald J. Wheeler (author of “Understanding Variation: The Key to Managing Chaos” and “Making Sense of Data”).

What draws me to their works is a common theme that aligns with my study of “lean thinking”, which has influenced my learning approach. That is: start with existing insights from what I already know and understand well about a problem or question, add some fundamental analysis concepts and learning tools that I can grasp relatively quickly, and then practice with them every chance I get. That practice develops my skills and capability to effectively and efficiently guide the next most valuable learning efforts, identify economically worthwhile continuous improvement opportunities, and enhance the decision-making process I use to support or supplement my intuitions (experienced recognition).

I have found this emphasis on creating an experiential foundation to build on, one that first provides deeper insights into essential concepts of statistical inference and experimental design, to be very helpful. Here is a recent example.

A simple message found often in Dr. Wheeler’s books and articles is “No data have any meaning apart from their context.”1 The deeper significance for data analysis only truly “hit me” after I used a spreadsheet to duplicate a few simple experiments he suggested, and then extended and explored a bit further based on what I found. That experience provided insights into the significance behind this humble statement that reading alone had never given me. From those insights, I now see why I might want to validate a key assumption prior to using any theoretical probability distribution or any data analysis that relies on one (but that is another story for another time).

As an added bonus that I did not anticipate, I also now better understand why Dr. Savage has for some time been describing the depth of learning from these types of experiences as “Connecting the Seat of Intellect to the Seat of the Pants.”2 That is, a depth of learning that occurs when intellectual (explicit) forms of learning are combined (connected) with experiential (tacit) forms.3 And while that too is another story for another time, it provides the context and motivation behind this post.

Today’s focus and the data used

Today, I want to share a little from my recent practice with a process behavior chart (PBC) to help me look deeper into Colorado’s COVID-19 data, which has obviously been on a lot of people’s minds lately, including mine, and rightly so. Specifically, I was curious to see how the PBC might help me better understand something about the virus’s spread and the effectiveness of some of the broader responses to it here in Colorado, and in particular what I could learn beyond the charts available on the state’s web site.
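For readers new to process behavior charts, here is a minimal Python sketch of how the natural process limits on an XmR (individuals and moving range) chart are commonly computed in Wheeler’s approach. The daily case counts are invented for illustration and are not Colorado’s actual data.

# Minimal sketch of XmR (individuals and moving range) limits for a
# process behavior chart. The daily_cases values below are made up for
# illustration; they are NOT Colorado's actual COVID-19 data.

daily_cases = [212, 198, 251, 243, 227, 260, 301, 295, 288, 310, 276, 265]

# Center line: the average of the individual values.
mean_x = sum(daily_cases) / len(daily_cases)

# Moving ranges: absolute differences between consecutive values.
moving_ranges = [abs(b - a) for a, b in zip(daily_cases, daily_cases[1:])]
mean_mr = sum(moving_ranges) / len(moving_ranges)

# Wheeler's scaling constant 2.66 converts the average moving range
# into "natural process limits" around the center line.
upper_limit = mean_x + 2.66 * mean_mr
lower_limit = max(0.0, mean_x - 2.66 * mean_mr)  # case counts cannot be negative

print(f"center line: {mean_x:.1f}")
print(f"natural process limits: {lower_limit:.1f} to {upper_limit:.1f}")

# Points outside these limits (or unusual runs within them) signal
# "exceptional" variation worth investigating; points inside reflect
# routine variation that a reaction to any single day would misread.

The payoff of a chart like this is separating signal from noise: it tells you which day-to-day movements deserve a search for a cause and which are just the routine variation of the underlying process.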

(more…)

Are You Just an Average CFD User?

“In everyday life, the Flaw of Averages ensures that plans based on average customer demand, average completion time, average interest rate, and other uncertainties are below projection, behind schedule, and beyond budget.” – Sam L. Savage, 2009 – The Flaw of Averages

“Variation is the hard reality, not a set of imperfect measures for a central tendency. Means and medians are the abstractions.”  – Stephen Jay Gould, 1985 – “The Median Isn’t the Message”

“Essentially, all models are wrong, but some are useful” – George E. P. Box, “Empirical Model Building and Response Surfaces”, (1919 – 2013)

“Remember that a model is not the truth. It is a lie to help you get your point across.” – Sam L. Savage, “The Flaw of Averages”, 2009

“You are allowed to lie a little, but you must never mislead.” – Paul Halmos, mathematician, (1916 – 2006)


Last week I read an article about yet another tool showing the ability to produce CFDs (cumulative flow diagrams). Maybe you already use one of these tools that help you visualize your workflow and generate CFDs for you automatically (or “automagically”) as part of the reports they provide. Or, perhaps like me, you still generate them mostly using MS-Excel. Either way, have you ever wondered just a little about how a CFD works?

As most do, this article displayed a line extending vertically between “stages” on the CFD (or workflow processes, as I often call them) and identified this distance as the WIP (work-in-progress) on a specific date for the respective stage (or stages) of interest. It also described another distance, a line extending horizontally between stages of interest on the CFD, identified as the “average lead time” for the requests (work items) arriving on a specific date, that is, the average time for a request (work item) to “flow” through (arrive into and depart out of) one or more stages of interest. Lastly, it described a sloped line as the mean delivery rate of requests (work items) flowing into or out of a stage (workflow process), depending on whether you are viewing the workflow upstream or downstream. Note: for more on the basics of reading a CFD, see this earlier post here.

A Difference Is Not An Average, Right?

It is easy to understand that WIP is simply a “difference” between two counts on the CFD and represents the number of requests (work items) in progress at a point in time. Similarly, seeing the slope as a simple rise-over-run calculation of a number of requests per unit of time (a rate over a period of interest) is not a complex concept to accept. But what about the notion of the “average lead time” derived from the CFD? How is it that a “difference” between two points in time read from the CFD (e.g., calendar dates) can represent an “average” amount of time for a request (work item) to flow through a stage (workflow process)? A “difference” that stands in for N numbers summed up and divided by N? Yes, really! But how can this be?
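To make the question concrete, here is a small Python sketch that reads horizontal distances off a CFD-style pair of cumulative curves and averages them. The cumulative counts are invented, and the sketch assumes items depart in the order they arrive (FIFO); it is a toy illustration of the idea, not the article’s (or any tool’s) actual calculation.

# A toy check of how a CFD's horizontal distances relate to an average
# lead time. The cumulative counts below are invented for illustration
# and assume items depart in the order they arrive (FIFO).

# Cumulative arrivals into and departures out of a stage, per day.
cum_arrivals   = [3, 6, 9, 12, 15, 18]
cum_departures = [0, 2, 5, 8, 12, 18]

def day_reaching(cumulative, n):
    """First day index on which the cumulative count reaches n items."""
    for day, count in enumerate(cumulative):
        if count >= n:
            return day
    raise ValueError("cumulative count never reaches n")

# Under FIFO, the lead time of the Nth item is the horizontal distance
# between the arrival and departure curves at height N.
total_items = cum_departures[-1]
lead_times = [day_reaching(cum_departures, n) - day_reaching(cum_arrivals, n)
              for n in range(1, total_items + 1)]

# The per-item "differences" read off the chart, summed and divided by N,
# are exactly the average lead time the article refers to.
avg_lead_time = sum(lead_times) / total_items
print(f"average lead time: {avg_lead_time:.2f} days")

In other words, the single horizontal distance drawn on the CFD is standing in for a whole collection of per-item horizontal distances, and that is where the “average” comes from.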

(more…)

Kanban Method: Myths and Misconceptions

“The instinctual shortcut…when we have ‘too much information’ is…picking out the parts we like and ignoring the remainder, making allies with those who have made the same choices and enemies of the rest.” – Nate Silver, American statistician, writer. “The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t” (2012)

“If you want to be competent at continually uncovering the unknown, strategic processes need to be based on learning not planning…in the fast paced world of twenty-first-century business, the focus of strategic work is no longer about making decisions – it’s about making discoveries. It is about discovering what you don’t know that you don’t know.” – Rod Collins, author, speaker, consultant; formerly Chief Operating Executive at Blue Cross & Blue Shield Federal Employee Program. “Why Planning Is No Longer a Strategic Activity” (2013)

Over the last several months I’ve greatly appreciated opportunities to present at conferences and user groups in Denver, Chicago, Houston, and Boston, and I will have one more opportunity to close out the year presenting in Pittsburgh this December. While discussing possible topics with the organizers of these events, most felt a presentation discussing some myths and misconceptions about the Kanban Method would be of interest and benefit to their attendees. [Note: you can find additional context, and a bit about the initial motivation for talking more about these myths and misconceptions, in my earlier post titled “The Kanban Method: Is It Just Scrum With Tweaks or Is There More?”]

The latest variation of my presentation on this topic was at the Agile New England user group in Boston, where they capture many, if not all, of their monthly presentations on video and make them available publicly. You’ll find this video below. If you’re in the Boston area and not yet familiar with ANE, I’d encourage you to visit their website here.

To assist with viewing the video, I’ve listed a mini-index below indicating a few logical partitions where you might stop and come back to if you’re unable to get through the presentation in a single sitting. [Note: this particular presentation allowed for a number of questions to be taken throughout, not just at the end.] I hope you find it interesting. As per a quote I heard once somewhere that has really stuck with me, “the greatest learning occurs at the interface of disagreement” (provided, I’d add, there is some constructive dialogue). So, if you feel I’ve misstated something, could articulate something a bit more effectively, or you simply disagree with something, please let me know.

Logical partition points after intro material:

07:23 – A taste of LKNA2013 from Chicago via Boston

07:56 – Getting on the same page with the term “kanban”

10:53 – Why is this important?

13:13 – If you take away “one” thing only

14:40 – First of the “myths and misconceptions”

17:49 – Second of the “myths and misconceptions” (including a number of questions from attendees)

42:57 – Notes on the term “iterate”

43:50 – Notes on decoupling activities

45:36 – Third of the “myths and misconceptions”

57:27 – Closing Q&A

I want to again thank a few of the people from ANE who work hard on a volunteer basis to provide this first-class user group in the Boston area. Thanks to ANE member Shyam Kumar for his persistence; we started talking about this opportunity some time ago and patiently worked through schedules. Thanks also to ANE’s President, David Grabel, ANE’s Vice-President, Tom Woundy, and ANE’s Program Coordinator, Ron Morsicato, for their efforts that evening and prior, all contributing to a first-class user group operation. Lastly, thanks to ANE member Ron Verge, who videotaped, edited, and put together the video.

Take care,

Frank

Does Data Shape Policy Or Does Policy Shape Data? Yes!

“Errors using inadequate data are much less than those using no data at all.” – Charles Babbage, English mathematician, philosopher, inventor, mechanical engineer, invented the first mechanical computer (1791 – 1871)

“I always avoid prophesying beforehand, because it is a much better policy to prophesy after the event has already taken place.” – Winston Churchill, British orator, author, and Prime Minister during WWII (1874 – 1965)

“Policies are many, Principles are few, Policies will change, Principles never do.” – John C. Maxwell, evangelical Christian pastor, speaker, and author of 60+ books primarily about leadership (1947 –   )

“We’re entering a new world in which data may be more important than software.” – Tim O’Reilly, founder of O’Reilly Media, supporter of the free software and open source movements (1954 –  ) 


The question below came up recently on our Agile Denver Kanban SIG (special interest group) LinkedIn discussion list.

“I am working for an organization that has ‘Lab Week’ six times a year. Lab Week encourages innovation by setting aside time for engineers to work on whatever they want. My current conundrum is, what do I do with the work items on my team’s Kanban board, specifically those in To Do or Doing, during Lab Week?

It feels wrong to put them back in the backlog, but keeping them in To Do or Doing will affect lead/cycle time. While this is reality, it is only reality six times a year. In the interest of predictability I think I would want to know what lead/cycle time is without the impact of Lab Week and then factor in Lab Week when we hit a Lab Week.”

I had two immediate responses to this interesting question. First, teams in software development and IT/IS organizations are familiar with the basis for this question, and I’m guessing it comes up in a number of non-IT/IS or non-software development contexts as well.

Second, there is already an upfront discomfort with the idea of moving work items that have been “pulled” into the To Do (or Ready, On Deck, etc.) workflow process states back to the “backlog.” I’m wondering what signal this discomfort is providing us. I’m guessing the discomfort is even greater with the idea of moving work items back to the backlog that made it to some state of Doing in the workflow process. In essence, this would be “resetting the clock” for a number of work items already considered work in progress for some period of time, and, I’m sure, with expectations that they would be completed within some known SLA.

Just what I like: “real world” questions, particularly when they come with some “pain points” clearly identified upfront. As I continued to think about this one, it felt like a “perfect” opportunity to raise what I feel is a related question and discuss the two together in this post. When discussing metrics as part of measuring and managing flow, the question below is one I raise often with others who are applying the Kanban Method to their workflow processes.

“In your workflow process context: Does Data Shape Your Policy or Does Policy Shape Your Data?”

In the case of a “Lab Week,” how might we produce the metrics we want and visualize this information without any of the “discomfort” of post-collection adjustments or of moving (resetting) work items back into the backlog? What can we learn from thinking about and responding to this “Lab Week” question that applies, beyond the original situation itself, to how we develop policies for managing our workflow processes?
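One way a policy can shape the data up front, rather than the data forcing after-the-fact adjustments, is to define the lead-time clock so that it simply pauses on declared Lab Week days. The Python sketch below illustrates that idea only; the dates, Lab Week windows, and helper names are hypothetical, not the original questioner’s calendar or any particular tool’s behavior.

# One possible policy: lead time counts elapsed days but skips days that
# fall inside a declared "Lab Week", so the clock pauses rather than
# resets. All dates and ranges below are hypothetical.

from datetime import date, timedelta

# Lab Week windows declared in advance as a matter of policy.
lab_weeks = [
    (date(2015, 2, 9), date(2015, 2, 13)),
    (date(2015, 4, 13), date(2015, 4, 17)),
]

def in_lab_week(day):
    return any(start <= day <= end for start, end in lab_weeks)

def lead_time_days(committed, delivered):
    """Elapsed days from commitment to delivery, skipping Lab Week days."""
    day, count = committed, 0
    while day < delivered:
        if not in_lab_week(day):
            count += 1
        day += timedelta(days=1)
    return count

# A work item that spans a Lab Week: 13 calendar days elapse, but only
# 8 count toward lead time under this policy.
print(lead_time_days(date(2015, 2, 5), date(2015, 2, 18)))  # -> 8

Whether you prefer this or a simple calendar-day measure is itself a policy choice, which is exactly the point of the question above: the policy you write decides what the data will say.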

(more…)

Business and Technology Peacefully Co-creating

Written in direct collaboration with Richard Hensley (McKesson AVP).

“Knowledge is an unending adventure at the edge of Uncertainty.” – Jacob Bronowski, British mathematician, biologist, historian of science, poet, inventor, author of “The Ascent of Man”

“Maturity of mind is the capacity to endure Uncertainty.” – John Huston Finley, 1938 – New York Times editor-in-chief

“Many high performers would rather do the wrong thing well than do the right thing poorly.” – Thomas J. DeLong & Sara DeLong, “Managing Yourself: The Paradox of Excellence”, June 2011 – Harvard Business Review

It’s common to see an organization (the people in it) focus on building systems with as many features as possible and targeting delivery by a specific due date. Yet often the result is missing the date while also ignoring other important goals demanded by the business, such as high levels of product quality, development productivity, planning reliability, employee satisfaction, and customer loyalty. Retrospectives, if done after such an occurrence, surface the dissatisfaction concerning missed dates, poor quality, technical debt, and more; still, this pattern frequently repeats. Does this scenario sound familiar to you? If so, why do you think that is? In a past or perhaps your current organization, I’m guessing you’ve heard or thought, “We need our business and technology people on the same page.” How might “being on the same page” look in your organization? Does your current software development methodology, with its principles, processes, and practices, contribute effectively to this objective? Is getting on the “same page” with objectives and goals enough?

Over the last three years Richard Hensley, AVP Process at McKesson Health Solutions, has worked to address these challenges within the three business units of his division at McKesson. I’ve been able to catch up with him on several occasions and discuss his efforts over these years. This post is a brief “fly-over” of his experience, capturing some of the key thoughts Richard developed over this time and shared during our conversations.

No Silver Bullets Here!

Anyone reading a blog post on getting business and technology people on the same page probably knows there is no “silver bullet” for this challenge. Chances are good you’ve already been part of such an effort, right? If so, you know it is not a simple task. And even if you saw the two sides start out on the same page in your organization, did issues appear that caused the working relationship to return to its earlier, more challenging state?

In short, Richard suggests “you won’t find quick fixes for this issue,” and his experience indicates it takes a serious effort from both sides. It also requires a continuous commitment over time, because “getting everyone on the same page is a good start but not sufficient to sustain the business.” Yes, getting on the same page upfront can, by itself, be a positive thing: it helps everyone feel good, knowing where others are at and being in the know. If you’re familiar with the practice of daily stand-ups, you can draw a comparison: getting on the same page alone is like a stand-up where team members simply report the “status” of their work. It is a good start, but alone it is ineffective for helping to solve real issues. In a similar fashion, getting on the same page with objectives and goals, if that is all you do, doesn’t help anyone address how things might be done more quickly or more effectively. It is a good start, but more is needed! (more…)

A Few Basic Ways to Visualize Story Lead Time Predictability

Note: My colleague Dan Vacanti has also captured, and expanded on in greater detail, much of what this post touches on in his book titled “ActionableAgile.”

“The ability to take data – to be able to understand it, to process it, to extract value from it, to visualize it, to communicate it, it’s going to be a hugely important skill in the next decades, not only at the professional level but even at the educational level for elementary school kids, for high school kids, for college kids. Now we really do have essentially free and ubiquitous data. So the complementary scarce factor is the ability to understand that data and extract value from it.”

“I think statisticians are part of it, but it’s just a part. You also want to be able to visualize the data, communicate the data, and utilize it effectively. But I do think those skills – of being able to access, understand, and communicate the insights you get from data analysis – are going to be extremely important. Managers need to be able to access and understand the data themselves.”

 – Hal Varian, Google’s Chief Economist, Jan 2009 – The McKinsey Quarterly

My previous post discussed how some of my earlier teams used T-shirt sizes for story level work items in their software development planning processes. But T-shirt sizes were only part of what helped us become effectively predictable. The emphasis on just-in-time (JIT) story creation and story analysis, along with just-enough story and portfolio level backlogs (limiting work-in-progress, or WIP), was also a significant contributing factor. Two other key factors were de-emphasizing upfront estimates of level of effort and duration, and instead placing a greater emphasis on lightweight tracking of the real (lead and cycle) times to complete and deliver story level work items. (See my earlier posts here and here for a bit more context and background on push vs. pull scheduling systems.)

I also discussed how some basic analysis of this lead time data and the T-shirt sizing helped us develop an internal service level agreement (SLA) for completing story level work items. But this information also guided and shaped the policies we developed to influence the team’s interactions and specific responses (pulled from our toolbox) in a JIT manner, as information unfolded about a story’s level of effort and duration. My observation is that all of this contributed to us becoming predictable in a context where we never had been before, when using heavier upfront planning strategies. Based on my study of scheduling systems, this combination reflected key pull scheduling characteristics, where the role of our software development workflow management changed from determining all operational activities upfront to one focused much more on setting the rules for interactions (in turn influencing our work environment structures).
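As a rough illustration of the kind of analysis involved, here is a short Python sketch that turns completed-story lead times, grouped by T-shirt size, into percentile-based SLA statements. The lead times and the 85th-percentile choice are invented for the example; they are not the actual figures from my earlier post.

# Sketch of turning completed-story lead times into percentile-based
# internal SLA statements. The lead times (in days) are invented for
# illustration and grouped by hypothetical T-shirt sizes.

import math

lead_times_by_size = {
    "S": [2, 3, 3, 4, 4, 5, 5, 6, 7, 9],
    "M": [4, 5, 6, 6, 7, 8, 8, 9, 11, 14],
    "L": [7, 9, 10, 11, 12, 13, 15, 18, 21, 26],
}

def percentile(values, pct):
    """Nearest-rank percentile: smallest value covering pct% of the items."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# One SLA statement per size, e.g. "85% of size-M stories finish within N days."
for size, times in lead_times_by_size.items():
    print(f"{size}: 85% of stories completed within {percentile(times, 85)} days")

An SLA phrased this way ("within N days, X% of the time") acknowledges variation directly instead of quoting a single average that most individual stories will not match.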

There’s lots more analysis (mathematical and statistical) you can do using the minimal and easily collected data behind the Basic Story (Lead Time) Metrics table and T-shirt sizes from my earlier post, but that will have to wait for another time. For this post I want to focus on visualizing the information in this simple table, to see if we might extract a bit more value from the modest analysis investment already expended.

I’ll also build on this initial visualization, using a bit more (quick) low-hanging-fruit analysis of the full raw data set represented by the earlier spreadsheet snippet, to produce a basic temporal (time) perspective of story lead times. Can adding a basic temporal perspective provide a number of other useful insights into the nature of our workflow’s story level work item lead times? (Both the earlier table and spreadsheet snippet are included here; click the images to enlarge.) To this basic temporal perspective, with a bit more analysis of the existing raw data spreadsheet, I’ll then add new information to give a work item type perspective as well, and then overlay the T-shirt size information onto this mix.
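To give a feel for what such a temporal view can look like, here is a minimal Python/matplotlib sketch that plots each completed story by completion date against its lead time, colored by work item type, with an 85th-percentile guide line. The data points, type names, and percentile choice are all invented for illustration and are not drawn from the original spreadsheet.

# Minimal sketch of a temporal lead time view: completion date vs. lead
# time, colored by a hypothetical work item type, with a percentile line.
# All data below is randomly generated for illustration only.

import math
import random
from datetime import date, timedelta

import matplotlib.pyplot as plt

random.seed(7)
types = ["feature", "defect", "chore"]
stories = [(date(2015, 1, 5) + timedelta(days=random.randint(0, 120)),  # completion date
            random.randint(1, 20),                                      # lead time (days)
            random.choice(types))                                       # work item type
           for _ in range(60)]

fig, ax = plt.subplots()
for t, color in zip(types, ["tab:blue", "tab:red", "tab:gray"]):
    xs = [d for d, lt, typ in stories if typ == t]
    ys = [lt for d, lt, typ in stories if typ == t]
    ax.scatter(xs, ys, label=t, color=color, alpha=0.7)

# 85th percentile of all lead times (nearest-rank) as a guide line.
ordered = sorted(lt for _, lt, _ in stories)
p85 = ordered[math.ceil(0.85 * len(ordered)) - 1]
ax.axhline(p85, linestyle="--", label=f"85th percentile = {p85} days")

ax.set_xlabel("completion date")
ax.set_ylabel("lead time (days)")
ax.legend()
plt.show()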

Finally, again with just a bit more analysis of the existing raw data, I’ll visualize two other temporal perspectives using percentages and frequency counts. Afterwards, let me know what you think. Do one or two of these other ways of visualizing the information in the earlier table and spreadsheet snippets help you and others access, understand, and communicate insights that lead to a more effectively predictable workflow in your software development context? (more…)