A quick note: I incorrectly referred to Steven J. Spear as “Steven Spears” in the slide deck used for this presentation and in my mention of his work during the presentation. It is Steven J. Spear, as I had it listed with the quotes from him in my earlier blog post “The Kanban Method: Is It Just Scrum With Tweaks or Is There More?”, where I initially referred to his book, “The High Velocity Edge.”

Take care,

Frank

I’m glad to see Arne’s translation effort resulted in more conversation, and I like the essence of what you’ve captured in your comment above too. The post’s title certainly has its faults, and yet it invites questioning and provokes discussion rather than providing “conclusions.” ☺

In Aug of 2011, I started discussing my own questions re: LL and CFDs with Dan Vacanti, and these initial discussions led to others and lots more questions, and the discussions and my learning continue.

The Kanban Metrics tutorials we’ve given at LSSC2012 (Boston) and at LKCE2012 (Vienna), and Dan’s talk at LKCE2012 have amplified the discussion even further, and a number of us engaged in it still more at KLRUS (San Diego) last month. Arne and I knew a single blog post would be insufficient for this topic, and there is definitely more to be discussed, understood, learned, and benefited from re: LL.

Thanks for your comments and tweets and I look forward to seeing you and Arne again, and perhaps over a good meal and glass of wine we’ll continue our conversation.

Take care,

Frank

I love the post, but now with the German translation and a tweet coming with it, I do not like the title any more.

My take is: LL has (as every law does) conditions under which it holds. And under these conditions, it holds. (I just made the tautology explicit.) There are conditions under which it doesn’t hold. Then it does not make any statement. Then the observed effects can be linear or nonlinear. They might be polynomial, exponential. Or … linear. No statement.

So, I think the right title might be ‘Does LL hold under your system conditions?’ Or: ‘Is your system invariant over time? Otherwise, LL makes no statement about it’, etc.

The baseline is: LL in its original form is an integral over time. So, roughly, the system needs to be stable over the observed time; otherwise, no LL. That means if the WIP increase is drastically changing the system itself, or ‘if there is a correlation between the system status and its WIP’: NO LL.
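To make the “no stability over the observed time, no LL” point concrete, here is a small sketch (all parameters are my own illustrative choices, not from the comment): a deterministic single-server queue where arrivals outpace capacity, so WIP keeps growing over the observation window, and average WIP no longer matches throughput times average lead time.

```python
# Sketch: Little's Law needs the system to be stable over the observed
# window. Here arrivals outpace capacity, WIP keeps growing, and the
# time-averaged WIP no longer matches throughput * avg lead time.
# All parameters are illustrative assumptions.

T = 100.0       # observation window [0, T]
SERVICE = 1.0   # deterministic service time, single FIFO server
GAP = 0.5       # inter-arrival time: arrival rate 2 > capacity 1 -> unstable

arrivals = [k * GAP for k in range(int(T / GAP))]
lead_times, completions = [], []
free_at = 0.0
for a in arrivals:
    start = max(a, free_at)      # wait until the server is free
    free_at = start + SERVICE
    if free_at <= T:             # only count items finished inside the window
        completions.append(free_at)
        lead_times.append(free_at - a)

throughput = len(completions) / T
avg_lead_time = sum(lead_times) / len(lead_times)

# time-average WIP: integrate (arrivals so far - completions so far)
events = sorted([(t, +1) for t in arrivals] + [(t, -1) for t in completions])
wip, prev, area = 0, 0.0, 0.0
for t, d in events:
    area += wip * (t - prev)
    wip, prev = wip + d, t
area += wip * (T - prev)
avg_wip = area / T

print(f"throughput * avg lead time = {throughput * avg_lead_time:.2f}")
print(f"time-averaged WIP          = {avg_wip:.2f}")
```

With these numbers the two sides disagree by roughly a factor of two (about 26 vs. 51), because half the WIP at any moment never completes inside the window.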

The title suggests there is some ‘scientific sensation’ here, which there isn’t – it is rather science applied to the real world. It is basically the core of what I referred to with my Pecha Kucha at #LKCE.

But again: It IS a great post; it just puts a wrong conclusion (which you didn’t even make) at the beginning.

I nearly didn’t get the captcha solved

Thanks and all the best

Markus

And, in today’s market, quick response times mean more business!

As per the “seed” you planted last March, and with some additional motivation from recent email exchanges with Hichem Chaibi, I finally got to a follow-up related to bottlenecks and CFDs. See my Nov 2012 post titled “Bottlenecks – Revisiting the Reading of Cumulative Flow Diagrams.”

Take care,

Frank

LL comes from Queueing Theory, which examines the *queues* that pile up in front of a server that uses a certain process with certain statistical behaviour. Though LL is true *regardless* of this statistical behaviour, it no longer holds if the process changes. Applied to Software Kanban this means that LL is a good guideline as long as you manage queues. If the WiP limit starts to affect the process itself, we have dependent variables in LL. Since this interdependence may be non-linear, we probably run into the realms of Complex Systems with all their unpredictable behaviour (e.g. people may start to work on their own projects if they are significantly under-utilized, some controller may run havoc, or whatever). Hence LL is a great tool to manage queues. But it doesn’t mean that a Complex (Adaptive?) System suddenly starts to behave linearly.
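The “true regardless of the statistical behaviour” part can be checked with a small simulation sketch (all parameters are illustrative assumptions): a FIFO single-server queue satisfies avg WIP = arrival rate × avg time in system under two very different service-time distributions, as long as we measure over a window where every item both arrives and departs.

```python
import random

# Sketch: Little's Law holds for a stable FIFO single-server queue
# regardless of the service-time distribution. Parameters are
# illustrative assumptions.

random.seed(7)
LAM = 0.5      # arrival rate; utilization stays below 1 for both runs
N = 50_000     # customers per run

def relative_gap(service):
    """Relative gap between avg WIP and arrival rate * avg time in system."""
    t = free_at = 0.0
    events, total_time = [], 0.0
    for _ in range(N):
        t += random.expovariate(LAM)      # random (Poisson) arrivals
        start = max(t, free_at)           # wait if the server is busy
        free_at = start + service()
        events.append((t, +1))            # arrival: WIP goes up
        events.append((free_at, -1))      # departure: WIP goes down
        total_time += free_at - t         # this customer's time in system
    events.sort()
    horizon = events[-1][0]               # window ends at the last departure
    wip, prev, area = 0, 0.0, 0.0
    for time, delta in events:            # integrate WIP over time
        area += wip * (time - prev)
        wip, prev = wip + delta, time
    avg_wip = area / horizon
    rate, avg_w = N / horizon, total_time / N
    return abs(avg_wip - rate * avg_w) / avg_wip

gap_const = relative_gap(lambda: 1.0)                     # constant service
gap_expo = relative_gap(lambda: random.expovariate(1.0))  # exponential service

print("constant service   : relative gap =", gap_const)
print("exponential service: relative gap =", gap_expo)
```

Both gaps come out essentially zero: over a window that starts and ends empty, Little’s Law is an identity, independent of the service distribution.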

Take care

Jens

The model is a cashier line: a queue where you spend time waiting and a station where you spend time paying.

Consider the station itself. Its utilization is:

U = lambda * S

where lambda is the rate of arrival and S the service time for that station (how long it takes to pay when there’s no one in the queue, let’s say).

Consider the full system:

L = lambda * W

where L is the number of people in the system at a certain time, and W is the total waiting time.

Note that the total waiting time is composed of the time you spend in the queue plus the time you spend at the station:

W = L * S + S

since when you arrive, you have to wait until (on average) L customers are served, and then be served yourself.

So substituting L in this equation, you get:

W = lambda * W * S + S

W (1 – lambda * S) = S

W = S / (1 – lambda * S)

W = S / (1 – U)

so when U goes up, near to 1, the denominator approaches 0 and the total time you spend in the system approaches infinity.
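A quick numeric sketch of this blow-up, plugging arbitrary illustrative values into the derived relation W = S / (1 – U) (the S = 2 minutes is my own made-up number):

```python
# Sketch of W = S / (1 - U) from the derivation above:
# time in the system explodes as utilization approaches 1.
# S = 2.0 minutes is an arbitrary illustrative value.

S = 2.0  # service time in minutes

for U in (0.5, 0.8, 0.9, 0.95, 0.99):
    W = S / (1.0 - U)
    print(f"U = {U:.2f} -> time in system W = {W:6.1f} minutes")
```

Going from 50% to 80% utilization more than doubles the time in the system; going from 95% to 99% multiplies it by five.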

My intuitive explanation: if we were able to perfectly time customers’ arrivals we would be able to get to U = 100%: they would arrive every S seconds, be served, and when they exit another one would be ready. But in this model we are talking about average times, so they arrive randomly: sometimes early, sometimes late. If they arrive early, they wait in the queue for a bit, so L goes up; if they arrive late, at certain times the queue is empty and we lose a bit of utilization because no one can be served during that time.
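That intuition can be checked with a tiny simulation, a sketch under assumed parameters (fixed service time S = 1, arrival rate 0.9, and an arbitrary seed; none of these numbers come from the comment): perfectly timed arrivals at the same average rate produce no queueing wait at all, while random arrivals produce a substantial one.

```python
import random

# Sketch of the intuition above: same average arrival rate, but
# perfectly timed arrivals produce no queueing here, while random
# (exponential) arrivals do. S, lam, N and the seed are illustrative.

random.seed(1)
S = 1.0        # fixed service time
lam = 0.9      # arrival rate, so U = lam * S = 0.9
N = 50_000     # customers per run

def avg_queue_wait(interarrival_gaps):
    """FIFO single server: average time spent waiting before service."""
    t = free_at = total_wait = 0.0
    for gap in interarrival_gaps:
        t += gap                    # this customer's arrival time
        start = max(t, free_at)     # wait if the server is busy
        total_wait += start - t
        free_at = start + S
    return total_wait / N

evenly = avg_queue_wait([1.0 / lam] * N)                              # clockwork
randomly = avg_queue_wait(random.expovariate(lam) for _ in range(N))  # Poisson

print(f"evenly spaced arrivals: avg queue wait = {evenly:.2f}")
print(f"random arrivals:        avg queue wait = {randomly:.2f}")
```

The clockwork case prints 0.00; the random case comes out around several multiples of S, purely because of variability in the arrivals.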

So let me rephrase what I mean: Stable to me means something like predictable. It does not mean “without change”, but rather “without surprising change”.

Let me pick a highway as an analogy: In some countries there are speed limits on highways, e.g. 130 km/h. Cars then don’t all go at the same speed, but the variation around 130 km/h is comparatively small. The speed of a car approaching you from behind won’t surprise you much.

But on German highways (the Autobahn) there is no general speed limit. (There are many local ones due to construction work, though.) That means you essentially do not have a clue how fast a car coming up from behind is going. If you’re cruising along at 130 km/h, then the other guy approaching you can be going 140 km/h or 240 km/h. Believe me, you can be in for many surprises on German highways.

What this surprising variation in speed leads to is congestion. Brake lights from fast cars braking right behind slower cars signal danger. Other cars start braking. This is a wave spreading… and it quickly leads to congestion far behind the original cars with their large speed differences. Those drivers might not even be aware of what’s happening behind them.

Car flow on German highways thus is impeded by large variations in speed. Looking just at the average speed won’t explain it. But variance explains why flow is not continuous.

This of course is no problem at night. Only a few cars are on the road. But if highway utilisation is high during the daytime, and even more so during rush hours… then this variation causes many problems. That’s why there are “on demand speed limits” on some highways. They get switched on only at certain times.

So to me a highway with large variation in speed is not a predictable system. It’s hard to plan a trip. It’s not stable in what I can expect from it. The “highway car delivery performance” (getting me from here to there) is not stable. I cannot even expect linear development of “delivery” depending on an increase/decrease in the number of cars on the highway. Twice as many cars at 6am compared to 4am might have no effect on my travel time. But four times as many cars at 8am might cause a complete standstill – which does not look like a linear increase in travel time.

Does that make it clearer what I meant by “can the work item/packet size be assumed to be pretty much constant? If no, the system is not stable and non-linear effects can be expected”?

Thanks for the follow-up. I think your understanding is fine. That is, I agree with your second comment completely :>)

Still, from your first comment, I have to admit I’m stumbling on the following:

“But doesn’t that mean, any system where work item size is not uniform is not a stable system?

“To me that would mean, the first question to ask about a system is: can the work item/packet size be assumed to be pretty much constant? If no, the system is not stable and non-linear effects can be expected and LT=WIP/TP cannot be readily applied.”

It could be me just missing something here.

Take care,

Frank

Still, though, as a customer waiting in a supermarket cashier line, an average waiting time does not really make me happy if right before me is a guy with his cart filled up to the brim. I know for sure my personal waiting time will be way above the average.

That’s why there are express lanes, I’d say. To lower the average. And to make waiting time more predictable on a personal level.

Coming back to the 80% utilization: If a network is utilized at only 10% or 40%, even a large change in work item size does not affect its ability to take up more work. Maybe one more work item then just leads to 65% utilization.

But above 80%, a possible, even likely, variation in work item size might lead to network overload. That’s when things break down.

So I’d say it’s important to know not only the average work item size, but also the variance in work item size. If it’s small, i.e. work items are of pretty much the same size, then all’s dandy. But if the variance is large… then there is a real danger of exceeding buffer capacity once utilization is high.
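This variance effect can be sketched with the Pollaczek–Khinchine formula for an M/G/1 queue, which gives the average queue wait as Wq = lam · E[S²] / (2 · (1 − U)). The numbers below are illustrative assumptions: two systems with the same average work item size and the same 80% utilization, differing only in variance.

```python
# Sketch via the Pollaczek-Khinchine formula for an M/G/1 queue:
# avg queue wait Wq = lam * E[S^2] / (2 * (1 - U)). Same average item
# size and same 80% utilization, but larger variance in item size
# means much longer waits. All numbers are illustrative.

LAM = 0.8        # arrival rate
MEAN_S = 1.0     # average work item size (service time); U = LAM * MEAN_S = 0.8

def avg_queue_wait(var_s):
    e_s2 = var_s + MEAN_S ** 2         # E[S^2] = Var(S) + E[S]^2
    u = LAM * MEAN_S                   # utilization
    return LAM * e_s2 / (2 * (1 - u))

print(f"uniform item sizes (variance 0): Wq = {avg_queue_wait(0.0):.1f}")
print(f"mixed item sizes   (variance 4): Wq = {avg_queue_wait(4.0):.1f}")
```

With identical averages and utilization, the high-variance system waits five times as long: the average alone tells you very little once utilization is high.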

But maybe I’m misunderstanding a crucial point?
