“Errors using inadequate data are much less than those using no data at all.” – Charles Babbage, English mathematician, philosopher, inventor, mechanical engineer, invented the first mechanical computer (1791 – 1871)
“I always avoid prophesying beforehand, because it is a much better policy to prophesy after the event has already taken place.” – Winston Churchill, British orator, author, and Prime Minister during WWII (1874-1965)
“Policies are many, Principles are few, Policies will change, Principles never do.” – John C. Maxwell, evangelical Christian pastor, speaker, and author of 60+ books primarily about leadership (1947 – )
“We’re entering a new world in which data may be more important than software.” – Tim O’Reilly, founder of O’Reilly Media, supporter of the free software and open source movements (1954 – )
The question below came up recently on our Agile Denver Kanban SIG (special interest group) LinkedIn discussion list.
“I am working for an organization that has ‘Lab Week’ six times a year. Lab Week encourages innovation by setting aside time for engineers to work on whatever they want. My current conundrum is, what do I do with the work items on my team’s Kanban board, specifically those in To Do or Doing, during Lab Week?
It feels wrong to put them back in the backlog, but keeping them in To Do or Doing will affect lead/cycle time. While this is reality, it is only reality six times a year. In the interest of predictability I think I would want to know what lead/cycle time is without the impact of Lab Week and then factor in Lab Week when we hit a Lab Week.”
I had two immediate responses to this interesting question. First, teams in software development and IT/IS organizations are familiar with the basis for this question, and I’m guessing it comes up in a number of non-IT/IS or non-software development contexts as well.
Second, there is already an upfront discomfort with the idea of moving work items back to the “backlog” once they have been “pulled” into the ToDo (or Ready, On Deck, etc.) states of the workflow process. I’m wondering what signal this discomfort is providing to us. I’m guessing the discomfort is even greater with the idea of moving back to the backlog work items that made it to some state of Doing in the workflow process. In essence, this would be “resetting the clock” for a number of work items already considered work in progress for some period of time, and, I’m sure, expected to be completed within some known SLA.
Just what I like: “real world” questions, particularly when they come with some “pain points” clearly identified upfront. As I continued to think about this one, it felt like a “perfect” opportunity to raise what I feel is a related question and discuss the two together in this post. When discussing metrics as part of measuring and managing flow, the question below is one I raise often with others who are applying the Kanban Method to their workflow processes.
“In your workflow process context: Does Data Shape Your Policy or Does Policy Shape Your Data?”
How might we, in the case of a “Lab Week”, produce the metrics we want and visualize this information without any “discomfort” of making post-collection adjustments or moving (resetting) work items back into the backlog? What can we learn from thinking about and responding to this “Lab Week” question that applies, beyond the original context, to how we develop policies for managing our workflow processes?
What is Lab Week?
A number of organizations I know are fairly mature in their software development or IT/IS workflow processes and technical practices. Interestingly, many of them practice some form of activity typically called a “hack-a-thon”, “hack-fest”, “hack-day”, or “code-fest”, and it appears some call it a “Lab Week”. These events typically last anywhere from a day to a week, serve fairly similar purposes, and consist of fairly similar activities. These include educational and social purposes, with an intent to create usable software, but they are often more immediately focused on exploring new languages, APIs, operating systems, tools, or platforms, or on new product development and innovation efforts.
Should We Adjust Data to Account for a Lab Week Policy?
In these contexts, I can see how it “feels wrong” to move work items from ToDo or Doing back to the backlog. Yet, if there were a specific and immediate business need (a force, a problem, an opportunity) to be addressed by deriving information without the impact of a Lab Week, I can also see how one might explore ways to do a post-collection adjustment, a “correction”, of the lead times for these work items by one week. However, for the moment, I want to consider doing something different than either of these.
From the original question, the initial concern appears to be that a minimum of seven days of “idle time” is being added to the lead time of any work item that is in the ToDo or Doing states of the workflow process when a Lab Week occurs. I wondered how this concern, due to a Lab Week policy, might appear in any of the metrics and charts that I use often. I’m also inferring a need to know something about these metrics of interest when the additional seven days due to a Lab Week are not a factor. So, how might these metrics influence policies that don’t require us to make “corrections” to collected data, while still showing us the reality of our system with Lab Weeks occurring six times a year?
A Run Chart and Scatter Plot Data Perspective
Assume a team was using a run chart showing the number of work items completed each reporting interval and had this data plotted over several months. Along the x-axis, I’d see the reporting interval end dates plotted, and vertically above each of these labels I’d see a point plotted at the appropriate height relative to the scale of counts along the y-axis. Essentially a simple line chart of sorts.
In my experience with real project data from several software development contexts, the points plotted at each reporting interval, when connected, created a line that oscillated up and down yet produced a fairly consistent pattern that trended neither upward nor downward over time. This consistent alternating pattern gave me a fairly reliable indication of the range of work items I came to expect to be completed per reporting interval. For example, in one context this range was 7 to 17 “XP story sized” work items completed per two-week reporting interval.
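To make the run chart mechanics concrete, here is a minimal sketch of how such a chart could be produced from completion dates. It is illustrative only, not the tooling behind the charts described above; the column names, sample dates, and two-week resampling interval are all assumptions for the example.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: one row per completed work item, with its completion date.
completed = pd.DataFrame({
    "item_id": ["A-101", "A-102", "A-103", "A-104", "A-105"],
    "done_date": pd.to_datetime(
        ["2015-03-02", "2015-03-05", "2015-03-11", "2015-03-12", "2015-03-24"]
    ),
})

# Count completed items per two-week reporting interval
# (the interval end dates become the x-axis labels).
per_interval = (
    completed.set_index("done_date")
    .resample("2W")   # two-week reporting intervals
    .size()
)

# A simple "run chart": a point at each interval end date, connected by a line.
plt.plot(per_interval.index, per_interval.values, marker="o")
plt.xlabel("Reporting interval end date")
plt.ylabel("Work items completed")
plt.title("Throughput per reporting interval")
plt.show()
```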
Assume a team was also using a scatter plot showing lead times for completed work items over time. In a similar fashion, the dates plotted along the x-axis were the end dates of the same two-week reporting intervals used on the run chart. Plotted along the y-axis was, in this case, a scale of days (lead time) required to complete a work item.
Again, based on my experience with real project data from several software development contexts, I’d see a large number of these points plotted densely at the lower end of the y-axis, becoming less dense toward the upper end of the y-axis. That is, this distribution of lead times gave me a fairly reliable indication that 70+ percent of my work items would complete in seven “calendar” days or less. (Note: see my post here for examples of a run chart and scatter plot; the last “bar chart” in that other post is essentially plotting similar info to what is used to create a “run chart”.)
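In the same spirit, here is a minimal sketch of the scatter plot and the percentile check behind a “70+ percent in seven days or less” observation. Again, the column names and dates are made up; the only assumption is that each work item has a recorded pull (start) date and completion date.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: pull (start) and completion dates per work item.
items = pd.DataFrame({
    "start_date": pd.to_datetime(["2015-03-02", "2015-03-03", "2015-03-09", "2015-03-16"]),
    "done_date":  pd.to_datetime(["2015-03-05", "2015-03-10", "2015-03-13", "2015-03-30"]),
})

# Lead time in calendar days from pull to completion.
items["lead_time_days"] = (items["done_date"] - items["start_date"]).dt.days

# Scatter plot: completion date on the x-axis, lead time on the y-axis.
plt.scatter(items["done_date"], items["lead_time_days"])
plt.xlabel("Completion date")
plt.ylabel("Lead time (calendar days)")
plt.title("Lead time scatter plot")
plt.show()

# What fraction of items completed in seven calendar days or less?
pct_within_7 = (items["lead_time_days"] <= 7).mean() * 100
print(f"{pct_within_7:.0f}% of work items completed in 7 days or less")
```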
What might the effects of a Lab Week policy look like on my run charts? Well, with a two-week reporting interval, one result may be no noticeable difference. That is, even if I didn’t have data well established before the Lab Week policy was implemented, the run chart might show reporting intervals at the low end of the range appearing about as often where a Lab Week occurs as where it doesn’t. You might be saying, “A two-week reporting interval that contains a Lab Week has to visually show some ‘dips’ on a run chart.” I wouldn’t be so sure.
That said, for now, let us assume a run chart does show that the two-week reporting intervals containing a Lab Week are the ones consistently on the low end of the range. What problem does this cause for us? Looking at it from a delivery perspective, it certainly doesn’t make anyone want to have a work item in progress during Lab Week. It probably also causes a desire by some to push on, or push for, their work items as a Lab Week approaches.
What might the effects of a Lab Week policy look like on my scatter plots? Well, with a two-week reporting interval, where most work items complete in seven days or less, any work item in ToDo or Doing when Lab Week occurs will likely see its lead time doubled. That is, there would be fewer points plotted in the reporting intervals with a Lab Week (fewer completed work items) and more “scatter” of the points plotted in reporting intervals just after those with a Lab Week (due to the long lead times for work items “idled” during the prior Lab Week).
What Would We Like Our Data to Show?
Would it be preferable for our charts to show that Lab Week has little to no impact? How might we make this happen if we were in fact seeing the “dips” and “scatter” occurring as described above for the run charts and scatter plots respectively? Instead of “correcting” data, is it possible we could make some tweaks and adjustments (“corrections”) to our policies that would “shape” the data into what we’d prefer to see?
A Lab Week six times per year works out to implementing one about every eight or so weeks. Staying with the two-week reporting interval for our discussion, in every fourth reporting interval, the one that contains a Lab Week, consider doing the following (a toy sketch after the list illustrates the intended effect on lead times):
1. Implement the Lab Week as the second week of the reporting interval.
2. During the first week of this interval, if you complete a work item, don’t immediately pull the next work item from ToDo; instead, first try to help complete, if possible before the Lab Week starts, any other work items in progress (Doing).
3. If you can’t do #2, pull a work item from ToDo only if there is a high probability you can complete it before Lab Week starts.
4. If you can’t do #3, start your Lab Week early, and end your Lab Week early.
5. If you finish Lab Week early, don’t immediately pull the next work item from ToDo; instead, first try to help complete, hopefully before the reporting interval ends, any of the work items in progress (Doing) that were idled due to Lab Week.
6. If you finish Lab Week early and can’t do #5, try to pull the work items that have been in ToDo the longest (typically these would be the larger work items passed over just before Lab Week started in favor of completing a few smaller work items).
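To see why steps 2 through 4 matter to the data, here is a deliberately oversimplified toy model: a single worker, made-up work item durations, and a 14-day interval with Lab Week as days 8 through 14. It is not a simulation of any real team; it only sketches how “don’t start what you can’t finish before Lab Week” keeps the seven idle days out of the recorded lead times, at the cost of starting fewer items in that interval.

```python
LAB_WEEK_START = 8   # Lab Week occupies days 8-14 of a 14-day reporting interval
LAB_WEEK_DAYS = 7

def lead_times(durations, hold_before_lab_week):
    """One worker pulls items one at a time, starting on day 1 of the interval.

    Each item needs the given number of work days.  An item still in progress
    when Lab Week starts sits idle for the whole week, so its lead time
    (pull day through completion day) grows by those seven days.
    """
    day, results = 1, []
    for work_days in durations:
        if day >= LAB_WEEK_START:
            break  # Lab Week has started; nothing more is pulled this interval
        if hold_before_lab_week and day + work_days - 1 >= LAB_WEEK_START:
            break  # policy tweak: don't start an item that can't finish in time
        pulled_on = day
        finish = day + work_days - 1      # last day of work if uninterrupted
        if finish >= LAB_WEEK_START:
            finish += LAB_WEEK_DAYS       # item idles through Lab Week
        results.append(finish - pulled_on + 1)
        day = finish + 1                  # worker is free again the next day
    return results

made_up_durations = [3, 2, 4, 3]          # hypothetical work days per item
print("pull freely:         ", lead_times(made_up_durations, hold_before_lab_week=False))
print("hold before Lab Week:", lead_times(made_up_durations, hold_before_lab_week=True))
```

With these made-up numbers, pulling freely yields lead times of [3, 2, 11], because the third item picks up seven idle days; holding yields [3, 2], and the third item’s clock simply never starts. That is the same trade-off the run chart and scatter plot discussion above describes.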
Data Shapes Policy, Policy Shapes Data!
What do you think? Do the “dips” in a run chart guide us in shaping the “corrections” (tweaks) to policies that would in turn smooth the flow of work items completed in each reporting interval that contains a Lab Week? Could tweaks to these policies in turn shape the run chart and “smooth” the dips in reporting intervals that “contained” a Lab Week?
Does seeing the “scatter” in the scatter plot guide us in shaping the “corrections” (tweaks) to policies that minimize the spread of lead times for work items completing just after each reporting interval with a Lab Week? Could tweaks to these policies in turn shape the scatter plot to reduce the spread in reporting intervals that occur just after a reporting interval containing a Lab Week?
Insights from your data should be used to shape (guide changes to) your workflow process policies, and in turn, tweaks to your policies should be used to help shape (create the change in) the workflow process data you wish to see. In particular, use them to smooth the flow (throughput) of completed work items and to minimize the lead time spread of completed work items, with both of these contributing to the overall “predictability, sustainability, and quality” of your workflow process.
Who Said Anything About Lowering WIP Limits?
Okay, now I’ll mention WIP limits. Were you surprised I hadn’t mentioned them until now? Well, what do you think: could lowering the upper end of your ToDo column WIP limits as you enter a reporting interval with a Lab Week help? Would this be a policy tweak worth considering?
You might think this is being a bit “manipulative” by choosing to “start the clock” on fewer work items during these reporting intervals with a Lab Week. But isn’t this a fairer expectation to visualize? That is, wouldn’t it be better to communicate “earlier” that fewer work items will likely “enter” a work-in-progress state in the workflow process during the reporting intervals with a Lab Week?
Isn’t it more appropriate to leave a few more work items in the “backlog” during these intervals than to consider moving them back after they reach the ToDo or Doing states of your workflow process, or adjusting their lead times after they complete? It might be that we simply adjust the replenishment policy to reduce its frequency, putting the ToDo column on a bit of a “diet” during these reporting intervals with a Lab Week. Or a combination of both. Either way, the net effect is to lower “work in progress” as we head into a (known) period of “lower capability.” Does this sound familiar?
Closing Thoughts
Near the top of this post, I suggested that what we learn from our response to the original question could extend beyond the original context. Lab Week represents a time when we know, or at least assume, in advance that our team’s normal capability will be challenged. What other times do we know a similar challenge will occur?
What about when a two-day holiday occurs next to a weekend? Everyone on the team getting a “four-day weekend” can be a big hit to the workflow throughput and work item lead times. I know these are also popular times for some to request an additional three days off and turn a two-day holiday into a full one-week vacation. What about times of the year when vacation requests are heaviest, like spring breaks for those who have children in schools, or the weeks between Thanksgiving and the New Year here in the U.S.? What about those times set aside for “annual or quarterly planning” or even “performance reviews” and the like?
Even if you don’t have “Lab Weeks”, you could very well have something that causes a similar challenge. Think about these policy tweaks in those contexts as well; consider how they might help, and more importantly, why they might help.
Take care,
Frank