“The paradox is that trying too hard to create predictability creates the opposite effect.”
- Mary Poppendieck, Aug 2003 – Lean Development and the Predictability Paradox
“If you want to go in one direction, the best route may involve going in the other. Paradoxical as it sounds, goals are more likely to be achieved when pursued indirectly.”
- John Kay, Jan 2004 – Obliquity (article)
“Rationality is not defined by good processes, irrationality lies in persisting with methods and actions that plainly do not work – including the methods and actions that commonly masquerade as rationality.”
- John Kay, Mar 2010 – Think oblique: How our goals are best reached indirectly
If you’ve worked in the software development field for any length of time, I’m sure you’ve been asked, “When will the project be finished?” I’m also sure you’ve found this question challenging to answer more often than not. Why is this? Estimating, forecasting, projecting, call it what you will, is not easy to do well, or even to do passably, especially as a project grows in size, complexity, or both. In a single post, I’m definitely not diving deeply into how one might estimate, forecast, or project how long a software development project might take to complete. However, keeping in mind the “estimating challenges” we’ve encountered in software development provides a real-world context for discussing what I think are a couple of related and intriguing concepts.
The Predictability Paradox
In 2003, Mary Poppendieck published a paper titled “Lean Development and the Predictability Paradox.” The lean principles found in this paper appear in a refined and expanded form in Mary and Tom Poppendieck’s book “Lean Software Development: An Agile Toolkit” (2003). However, unless I missed it, the Predictability Paradox, as either a term or a concept, does not appear explicitly in that book. But the concept definitely resurfaced in their follow-up book, “Implementing Lean Software Development: From Concept to Cash” (2006), relabeled as the chapter sub-heading “Myth: Predictions Create Predictability.”
So, what exactly is the Predictability Paradox? The paradox, as explained by Mary and Tom Poppendieck in this second book, is that “in our zeal to improve the predictability of software development, we have institutionalized practices that have had the opposite effect. We create a plan, and then act on that plan as if it embodies an accurate prediction of the future.” The key point for me here is that developing a plan isn’t itself the issue; failing to recognize that we act as if the plan could never be wrong is. You might say, “Now wait a minute, if the plan is ever found to be wrong, shouldn’t we then work harder upfront to get better at making sure the plan is more often right?” I’d say, “Sure!” But then, I’d ask, “How?”
In discussing the Predictability Paradox further, the Poppendiecks list factors they feel make accurately “predicting” the future difficult, that is, when what we are trying to predict is one or more of the following:
- Occurring in the distant future
- Occurring in an uncertain (changing) environment
While I don’t consider this a complete list, it’s definitely sufficient to illustrate the sources of the challenges we face in any effort to ensure upfront that the plan is more often right. But what if we instead approached the goal of attaining predictable outcomes from another perspective? If we start with the premise that our plans, and in particular our initial and early plans, will not be highly accurate or correct (a high probability in many cases, actually), what would we do differently? What if we put more effort, especially early in the project, into increasing the response options available to us if or when we learn the plan is not right? Could this be a more effective approach to creating predictable outcomes than applying additional upfront planning, or the extra effort needed to adhere to the “original version” of our plan?
The Poppendiecks conclude this section of their book with this thought: “an organization that has a well-developed ability to wait for events to occur and then [is able] to respond quickly and correctly will deliver far more predictable outcomes than an organization that attempts to predict the future.” Is this line of thinking something you can or will buy into? I’m assuming the first response for some reading this is, “This sounds nice in theory, but in my real everyday world I don’t have the luxury of waiting until all hard-to-predict events occur.” In the broader context of the reading I’ve done on lean, and on lean software development in particular, and even in the broader context of the Poppendiecks’ book, I understand their message. But that doesn’t mean it is easy to do, and the ability they speak of doesn’t develop overnight either, does it? Still, thinking about how to increase your options to respond when plans turn out wrong, and working toward that ability, is a desirable goal, and one you can move toward even if just incrementally (and iteratively) to start. What downsides exist for you that would prohibit considering or taking small steps in this direction?
In his book “The Principles of Product Development Flow,” Don Reinertsen suggests that pursuing economic success by focusing on efficiency metrics, such as maximizing capacity utilization, or by focusing on conformance to plan, is fundamentally wrong. Instead, he says economic success is more likely to be achieved by making good economic choices based on the latest, most reliable information available, and by acknowledging that as information changes, the best economic choice may change as well.
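To make Reinertsen’s economic framing concrete, here is a minimal sketch of one decision rule he discusses: ranking work by cost of delay divided by duration (sometimes called CD3). The feature names and dollar figures are entirely hypothetical; the point is that the ranking is cheap to recompute whenever the inputs change, which is exactly the “best choice may change as information changes” idea.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    cost_of_delay: float   # value lost per week of delay (an estimate, revisable)
    duration_weeks: float  # expected effort based on the latest information

def cd3(f: Feature) -> float:
    """Cost of Delay Divided by Duration: higher means schedule sooner."""
    return f.cost_of_delay / f.duration_weeks

# Hypothetical backlog with made-up numbers, for illustration only.
features = [
    Feature("reporting", cost_of_delay=8000, duration_weeks=4),
    Feature("checkout-fix", cost_of_delay=12000, duration_weeks=2),
    Feature("branding", cost_of_delay=2000, duration_weeks=1),
]

# Re-running this ranking as estimates are revised is the point:
# the "best" order is allowed to change when the information changes.
for f in sorted(features, key=cd3, reverse=True):
    print(f"{f.name}: CD3 = {cd3(f):.0f}")
```

Under these made-up numbers the short, high-cost-of-delay fix ranks first, even though it delivers less total value than the larger feature.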
Earlier I suggested that a key point for me was seeing that planning itself is not the issue; acting as if the plan could never be wrong is. Here, the key point is similar: planning itself is not the issue; failing to change the plan when we have later, more reliable information is. It is not just about working on our ability to keep options open longer or to respond more quickly when a glitch occurs. It is also about shifting to an economic perspective to guide our choices in how we plan, how much we plan, and how we respond to glitches when they occur, and about continuing to ask along the way whether conforming to the earlier plan still makes sense.
Just thinking about economic models is probably enough to convince some, if not most, that adding an economic perspective to planning is not a viable option. I’ve actually met Don on several occasions and have followed his work for several years now. Again, I understand the message here too, but that doesn’t mean it’s easy, does it? Still, at the very least, as you plan, is it possible to start asking where in your planning efforts it “feels” like you’re not getting a good return on your investment of time, and, if possible, roughly quantify it? For example, begin by looking for places in your planning today where the effort to estimate upfront how long something will take is observably significant relative to the time you think it would take to simply complete the work “that you’re planning.” From an economic perspective, is this trade-off of time for value returned at least worth questioning? As an experiment, for a period of time do no upfront estimating of these types of work items at all. Instead, simply track the actual times, plot the data over time, and see if this trend information provides all you need for “future planning” of these types of work items. Experiment with other work item types too, and identify others that forecast well using this lightweight form of planning.
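The “track actuals instead of estimating” experiment above needs very little machinery. A minimal sketch, using a hypothetical log of completion times for one work-item type: summarize the history with a median (the typical case) and a high percentile (a more defensible answer to “when will it be done?”).

```python
import math
import statistics

# Hypothetical log of actual completion times (in days) for one type of
# work item, collected during a no-upfront-estimates experiment.
actual_days = [2, 3, 2, 5, 3, 4, 2, 6, 3, 4, 3, 2]
actual_days_sorted = sorted(actual_days)

def percentile(sorted_data, pct):
    """Nearest-rank percentile: the value at or below which pct% of items fall."""
    k = max(0, math.ceil(pct / 100 * len(sorted_data)) - 1)
    return sorted_data[k]

# A typical item takes about the median; quoting the 85th percentile gives a
# "when" answer that, historically, would have held about 85% of the time.
print("median days:", statistics.median(actual_days))
print("85th percentile days:", percentile(actual_days_sorted, 85))
```

As the log grows, these two numbers stabilize and can replace per-item upfront estimates for work types that behave consistently.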
John Kay, a leading British economist and author of several books, began publishing writings about the concept of Obliquity as early as 2004 (though he indicates the term came about earlier). Kay’s early emphasis was on microeconomics, and in particular the economics of businesses. He was especially interested in answering questions like this one: “If two firms face the same basic conditions of supply and demand, and operate within the same market structure, why does one firm do better than another?” In the research he did to answer this question, he studied various businesses and some public agencies, and found that the most successful organizations were those that achieved their goals with an indirect approach rather than a direct one. This was even more the case when the systems and environments were complex and “unpredictable.” You can find the examples he cites in several posts online as well as in his recent book “Obliquity: Why our goals are best achieved indirectly” (2010).
Here is another interesting example, one that demonstrates a twist on obliquity thinking, from a study cited in the popular book by Tom DeMarco and Timothy Lister, “Peopleware: Productive Projects and Teams” (1999). DeMarco and Lister reference a study done in 1985 by two researchers, Michael Lawrence and Ross Jeffery, at the University of New South Wales. The focus of the study was to see how productive programmers were on projects depending on who developed the upfront estimates. They reference much more interesting information from this study, but I’ll limit my comments to our discussion of obliquity for now. In short, the study showed that programmers were more productive when they developed the estimates for the work they were asked to do than when their managers developed the estimates. This is probably not a surprise. However, the study also showed that programmers’ productivity was highest when no estimates were requested at all. In the desire to achieve greater productivity, a direct approach was taken, based on the belief that asking programmers to estimate their work upfront and then having them work toward their estimates would ensure high productivity (by avoiding gold plating or an assumed-to-be-true Parkinson’s Law type effect). Yet this study showed that an indirect approach, asking for no estimates upfront, actually resulted in higher programmer productivity.
How might taking an oblique (indirect) approach change your planning in the context of “estimating challenges” and a desire to produce predictable outcomes? Do you accept that for some problems the full picture and challenge are revealed only as you actually get into the work of solving them? If so, would predictable solutions be possible earlier, or more economically obtained, using an upfront approach focused on learning (experimentation and discovery) rather than one emphasizing upfront planning or estimating (guessing)? How might you communicate when a project will be done with less upfront estimating effort?
One alternative to the upfront (push) estimating and planning approach you might consider is to move toward a pull scheduling system. Rather than “predicting” (anticipating demand or delivery), a pull system reduces the emphasis on upfront planning or estimating by shifting the emphasis to collecting and tracking the actual throughput capability of a system over time, and then using this historical data (production data, not predictions or guesses) to project (forecast) expected delivery going forward. As more data is collected, trends become more pronounced, and normal delivery capacity becomes better known and documented. This information becomes a solid foundation for predictable outcomes as well as a dependable baseline for assessing the impact of any improvement efforts on the software development process in place.
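One common way to turn that throughput history into a forecast is a simple Monte Carlo simulation: repeatedly replay randomly sampled past weeks until the remaining backlog is done, then read a delivery date off the distribution of outcomes. The weekly throughput numbers and backlog size below are hypothetical; the technique itself is a standard way to answer “when?” from production data rather than upfront estimates.

```python
import random

# Hypothetical history: items actually finished in each of the last 10 weeks.
weekly_throughput = [4, 6, 3, 5, 4, 7, 2, 5, 4, 6]
remaining_items = 30  # hypothetical backlog still to deliver

def weeks_to_finish(history, remaining, rng):
    """One simulated future: sample past weeks until the backlog is done."""
    done, weeks = 0, 0
    while done < remaining:
        done += rng.choice(history)
        weeks += 1
    return weeks

rng = random.Random(42)  # fixed seed so the sketch is reproducible
runs = sorted(weeks_to_finish(weekly_throughput, remaining_items, rng)
              for _ in range(10_000))

# "When will it be done?" answered as a range with a confidence level,
# derived from the team's own delivery data.
print("50% chance of finishing by week", runs[len(runs) // 2])
print("85% chance of finishing by week", runs[int(len(runs) * 0.85)])
```

Because the simulation only resamples what actually happened, its answers automatically tighten as more (and more stable) throughput data accumulates.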
So, in closing, do you think the ideas of the Predictability Paradox and Obliquity have some relevance in helping you improve your ability to answer the challenging question “When will the project be finished?” Do these concepts give you ideas for making changes to your planning approach or estimating, forecasting, and projecting practices? Do you think taking an economic perspective will result in you making some different choices in how much upfront planning or estimating you’ll do or cause you to be more responsive and open to changing your plans as new information becomes available? If so, I hope you’ll share some thoughts or insights from your experiences.