Flow-based (Kanban/Pull) Metrics – Quantitative Metrics with the Kanban Method

Predictability and Meaningful Forecasting

Okay, I’ve collected data. Now what?

Maybe you’ve collected lead times for different work types over a number of weeks or reporting intervals. If so, are you asking yourself the question above?

In late 2007 and early 2008, shortly after beginning to use the Kanban Method, I found myself asking similar questions. How is this data going to help me? What’s next?

The outline below captures key learnings from my efforts to answer these questions. It reflects analysis of data collected while working hands-on, on the floor, with software development teams: several data sets spanning 15+ months of real-world work across different environments (contexts, business domains, teams, etc.). It represents the general scope of topics that could be covered in a tutorial introducing teams to quantitative (flow-based) metrics with the Kanban Method. This core set of topics would be customized (emphasizing or adding topics) after gathering more information specific to your context, and after considering the type of coaching and tutorials desired (e.g., a single-day or multi-day classroom-style workshop, or several smaller just-in-time 2-4 hour workshops spread over several weeks as part of a fixed-period embedded coaching engagement).

Why learn about kanban metrics?

  1. Extend the “visualizing” of your workflow beyond the board:
    • Show visually what “normal” looks like for your current context
    • Quantitatively identify starting points for finding problem areas
    • Distinguish potential “outlier” cases (special causes)
  2. Provide a baseline for validating change
  3. Build a foundation for deeper learning (less guessing) about your workflow
  4. Move subjective debates or discussion to objective analysis, experiments


Basics of Little’s Law and Cumulative Flow Diagrams (CFDs)

  1. Again, what are Lead Time and Cycle Time? (Getting on the same page)
  2. Little’s Law
    • What is Little’s Law?
    • The basic algebra
    • Understanding the units
    • The “basic” assumptions
    • Why is it important in managing workflow?
  3. Basics of CFDs
    • What is a CFD?
    • How is it useful?
    • CFD – reading WIP (at a point in time)
    • CFD – reading approximate average lead time (at a point in time)
    • CFD – reading throughput (between two points in time)
    • CFD – reporting intervals (picking a meaningful reporting period)
    • Creating a simple CFD spreadsheet (seeing behind the lines)
  4. Putting Little’s Law and CFDs to Work
    • A necessary deeper look at those “basic” assumptions
    • Why might these assumptions be important to us?
    • Do they influence your policies for pulling work?
    • How do your pull policies impact your workflow?
    • First-In-First-Out (FIFO) – think car wash
    • First-In-First-Serve (not necessarily first out) – think bank queue
    • First-In-Mostly-First-Serve – think waiting for a table at a restaurant
  5. Using CFDs to Recognize Good and Bad Trends
    • Desirable CFD chart line patterns
    • Spotting warning signs
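
To make the CFD and Little’s Law mechanics above concrete, here is a minimal Python sketch. The daily arrival and departure counts are invented for illustration, and the Little’s Law relationship only holds approximately when the “basic” assumptions discussed above (e.g., a roughly stable system) are met.

```python
# Minimal sketch: the cumulative counts behind a CFD, plus a Little's Law check.
# The daily arrival/departure counts below are invented for illustration.

arrivals   = [3, 2, 4, 1, 3, 2, 3]   # items entering the workflow each day
departures = [0, 1, 2, 3, 3, 3, 4]   # items finishing each day

cum_arrived, cum_done = [], []
a = d = 0
for new, done in zip(arrivals, departures):
    a += new
    d += done
    cum_arrived.append(a)  # top line of the CFD
    cum_done.append(d)     # bottom line of the CFD

# Reading the CFD: WIP is the vertical gap between the two lines each day.
wip_per_day = [top - bottom for top, bottom in zip(cum_arrived, cum_done)]
avg_wip = sum(wip_per_day) / len(wip_per_day)

# Throughput between the first and last day (items finished per day).
days = len(departures)
throughput = cum_done[-1] / days

# Little's Law: average lead time is approximately average WIP / throughput.
approx_avg_lead_time = avg_wip / throughput

print(f"average WIP: {avg_wip:.2f} items")
print(f"throughput:  {throughput:.2f} items/day")
print(f"approx. average lead time: {approx_avg_lead_time:.2f} days")
```

Plotting `cum_arrived` and `cum_done` as stacked area lines gives the familiar CFD picture; the same horizontal and vertical readings apply.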


Tracking Small and Large Work Items

  1. Data Tracking – Small Work Items (Story Size)
    • How do I start capturing helpful data? (simple spreadsheet setup)
    • First glimpse at “how long do stories take”? (basic scatter charts of lead and cycle times)
    • First glimpse of “what is unusual” (outliers, a simple chart view)
  2. “Basic Stats” Analysis
    • Second glimpse at “how long do stories take?” (scatter plots and lead time percentiles)
    • Second glimpse of “what is unusual” (outliers, a statistical view)
    • Third glimpse at “how long do stories take?” (one-point estimate; mean lead and cycle times)
    • How confident are you in that mean? (upper and lower confidence limits on your mean)
    • What is the problem with one-point estimates? (means are not the whole picture)
    • Is this thing called “inherent variation” in our system why we often miss? (variance)
    • So, what is a reasonable buffer? (two-point estimate; mean with an 84% upper confidence limit)
    • What is a “safe” safeguard? (two-point estimate; a mean with a 95% upper confidence limit)
    • Fourth glimpse at “how long do stories take”? (basic frequency data distribution tables)
    • Keep it (sizing) simple (deriving meaningful t-shirt sizes from your actual project data)
    • What do you mean my data is not “normal”? (recognizing normal vs log-normal)
    • What can we do when my data is not “normal”? (transforming and back-transforming)
    • What are the benefits or consequences of transforming data and back-transforming?
    • Would the “median” or “percentiles” make more sense to use?
    • Learning why you might want to keep it simple to start and why that is often enough.
    • Do you really want to talk about Control Charts?
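
Several of the “basic stats” questions above can be sketched in a few lines of Python. The lead times below are invented, the `percentile` helper is a simple nearest-rank implementation, and the 84%/95% figures use the normal-distribution reading (mean + 1 and + 1.645 standard deviations), which is only an approximation when the data is skewed.

```python
import math
import statistics

# Hypothetical story lead times in days (invented for illustration).
lead_times = [2, 3, 3, 4, 4, 5, 5, 5, 6, 7, 7, 8, 9, 11, 14]

def percentile(data, p):
    """Nearest-rank percentile: smallest value covering p% of the data."""
    ordered = sorted(data)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

mean = statistics.mean(lead_times)
sd = statistics.stdev(lead_times)   # sample standard deviation

# Two-point estimates: if lead times were normally distributed,
# mean + 1 SD covers ~84% of items, mean + 1.645 SD covers ~95%.
buffer_84 = mean + sd
safeguard_95 = mean + 1.645 * sd

print(f"mean lead time: {mean:.1f} days")
print(f"85th percentile (an SLA candidate): {percentile(lead_times, 85)} days")
print(f"84% upper limit: {buffer_84:.1f} days")
print(f"95% upper limit: {safeguard_95:.1f} days")
```

Note how the 85th percentile and the mean-plus-one-SD buffer land in the same neighborhood here; with heavily skewed data they diverge, which is where the median and percentiles often make more sense.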
  3. Interval Reporting – Visualizing Performance Over Time
    • Making trends easy to see (run charting small work item (story) metrics)
    • How many stories were completed per interval? (story done counts)
    • How many stories should I expect to be completed per interval? (cumulative mean of story done counts)
    • How confident are you in that mean? (upper and lower confidence limits on your mean)
    • How do you convince someone that “we got lucky” completing X stories last reporting interval? (mean with a 95% upper confidence limit)
    • How are we doing over time? (recognizing trends and spikes)
    • What happened here? (tracking what is and isn’t happening each interval)
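
The interval-reporting questions above reduce to a small run-chart calculation. The per-interval counts below are invented for illustration; a real run chart would plot both the raw counts and the cumulative mean over time.

```python
# Hypothetical stories completed per reporting interval (invented).
done_counts = [6, 9, 4, 8, 7, 12, 5, 8]

# Cumulative mean: what to expect per interval, updated as data arrives.
cumulative_means = []
total = 0
for i, count in enumerate(done_counts, start=1):
    total += count
    cumulative_means.append(total / i)

# A simple "did we get lucky?" check: compare the latest interval's count
# against the running mean rather than celebrating a single spike.
latest = done_counts[-1]
expected = cumulative_means[-1]
print(f"latest interval: {latest} stories; running mean: {expected:.1f}")
```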
  4. Data Tracking – Large Work Items (Epics, MMFs, MVUs, etc.)
    • How long did each epic take? (cycle time for each epic)
    • How long should I expect a typical epic to take? (cumulative mean epic cycle time)
    • What do you mean our epic data is not like our story data? (well-curve vs. bell curve)
    • We know how many stories were completed for each epic (story counts for each epic)
    • We know how many stories were completed per day for each epic (story throughput)
    • We know expected story lead and cycle times for each epic (mean story lead and cycle times)
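
When lead-time data is right-skewed (log-normal rather than normal, as raised in the “Basic Stats” list above), the transform-and-back-transform idea can be sketched as follows. The lead times are invented for illustration.

```python
import math
import statistics

# Hypothetical right-skewed lead times (invented): a few long outliers.
lead_times = [2, 2, 3, 3, 4, 5, 6, 8, 13, 21]

# Transform: work in log space, where log-normal data looks normal.
logs = [math.log(t) for t in lead_times]
log_mean = statistics.mean(logs)
log_sd = statistics.stdev(logs)

# Back-transform: exp of the log-space mean is the geometric mean,
# a "typical" value less distorted by the long tail than the plain mean.
typical = math.exp(log_mean)

# Back-transformed upper range (log-space mean + 1.645 SD, roughly 95%).
upper_95 = math.exp(log_mean + 1.645 * log_sd)

print(f"arithmetic mean: {statistics.mean(lead_times):.1f} days")
print(f"back-transformed typical value: {typical:.1f} days")
print(f"back-transformed 95% upper range: {upper_95:.1f} days")
```

The consequence of back-transforming is visible here: the “typical” value sits below the arithmetic mean because the long tail no longer drags it upward.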


Predictability and Forecasting

  1. Putting It All Together to Make Meaningful Forecasts
  2. Using CFDs
  3. Using Historical Data
    • Small work items (stories)
    • Large work items (epics, MMFs, MVUs, etc.; the need for other approaches)
  4. Monte Carlo Simulation
    • What is Monte Carlo simulation?
    • How is it useful?
    • Using your historical data distribution is key
    • Spreadsheet example
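
The spreadsheet-style Monte Carlo mentioned above can be mimicked in a few lines of Python: repeatedly sample your historical daily throughput (with replacement) until a backlog is done, then read forecasts off the resulting distribution. The throughput history and backlog size below are invented for illustration.

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

# Hypothetical historical daily throughput (stories finished per day).
daily_throughput = [0, 1, 1, 2, 2, 2, 3, 3, 4, 5]

remaining_stories = 30
trials = 10_000

def days_to_finish(history, remaining):
    """One trial: sample past days (with replacement) until the backlog is done."""
    done, days = 0, 0
    while done < remaining:
        done += random.choice(history)
        days += 1
    return days

durations = sorted(days_to_finish(daily_throughput, remaining_stories)
                   for _ in range(trials))

# Read forecasts off the simulated distribution rather than a single average.
p50 = durations[trials // 2]
p85 = durations[int(trials * 0.85)]
print(f"50% chance of finishing within {p50} days")
print(f"85% chance of finishing within {p85} days")
```

Using your actual historical distribution is the key: sampling real observed days preserves your system’s inherent variation instead of assuming a convenient theoretical curve.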

Tutorials based on real-world hands-on experience!
If you’re interested in a tutorial for your organization, or would like to “seed” a public offering in your metro area, please contact Frank Vega.

Learning Outcomes

A typical one-day tutorial is loaded with interactive, hands-on exercises that introduce how to begin using a probabilistic approach to obtain predictable workflows and meaningful forecasts with the Kanban Method and flow-based metrics in software development. It targets these key learning outcomes:

  • Basics of Little’s Law and Cumulative Flow Diagrams (CFDs).
  • Understanding the assumptions that make these tools work.
  • Using these assumptions to help create predictable workflows and meaningful forecasting.
  • Defining realistic (probabilistic) SLAs for how long it takes to complete work items.
  • Quickly and efficiently “sizing” work items.
  • Developing risk mitigation strategies and tactics that respond dynamically to real-time workflow feedback.
  • Using trends to analyze past issues for improvement opportunities (run charts).
  • Strategies for forecasting completion times for projects.

Sample One-Day Agenda


  • Introductions
  • Why a tutorial on kanban metrics?
    • “Visualize” workflow, provide insights, assist with optimizing WIP
    • Move subjective to objective, basis for learning, validating changes
  • Are we on the same page with Little’s Law and CFDs?
    • What is Little’s Law? What are its key assumptions? Why are these important?
    • What is a CFD? How is it helpful? What are its key assumptions?
  • How do we learn how long stories take?
    • How do we use scatter plots? What are data distributions? Why are they important?
    • What is normal (expected)? Is there more?
  • How can we see issues in our process? (run charts and trends)
    • Actual stories completed per interval and over time
    • Average story lead time per interval and over time
  • How can we learn where to improve our process next?
    • Identifying the right questions
    • Learning from the right mistakes
  • Putting it all together
    • Monitoring CFDs, trends on lead times, story counts
    • Using metrics to manage pull/JIT workflow

Other Details

After completion of the tutorial, attendees receive a PDF of slides.

Note: The number of attendees is typically limited (usually 18 max) to ensure a quality experience for all participants.

Kanban Coaching Professional

This tutorial is offered by Frank Vega of VISS, Inc., a Lean-Kanban University Kanban Coaching Professional Charter Member. For more information, see the Lean-Kanban University KCP program.