Back from a Hiatus
After a brief flurry of writing in the Summer of 2023 outlining some broad themes I wanted to explore in this blog, this publication promptly went on an extended hiatus.
I was wrestling, unsuccessfully, with scope creep while trying to simplify material that is mathematical at its core without making it too technical.
In the end, I decided to put it aside until I felt I could explain things in a way that struck the right balance, and after a year of reading and experimentation with drafts and talks, I’m back to make another pass.
The Broad Themes
The themes explored in this blog remain unchanged: the relationship between value creation and the flow of work in software product development, the connection between economic models and value streams, and the underlying mathematical foundations that help us reason reliably about product development flow with real-time data.
Last summer’s posts can be considered informal, tactical explorations of key themes at a very high level.
In “Stability First,” we discussed the importance of stabilizing flow in product development. In “Stable does not always mean steady,” we discussed what it intuitively means for a process to be stable. In “Flow Signals and Flow Metrics,” we looked at establishing a goal-oriented framework for flow metrics. Finally, “Throughput !=” examined the connections between flow metrics and economic outcomes.
This time, I plan to explore these themes, their underlying theories, and their mathematical foundations more deeply. While the content will be more technical, I aim to keep the discussion informal as much as possible without sacrificing rigor.
A Key Focus: Queueing Theory
As we advance, queueing theory and the importance of probabilistic models for reasoning about software development will be key focus areas for this blog.
The Lean Model
I assume readers are already familiar with concepts such as Little’s Law and the Utilization Law, as they are applied in Lean/Agile software development.
We explored these informally in the posts “The Iron Triangle of Flow” and “A Highway at Capacity is not a Parking Lot.”
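For readers who want the formulas at hand, here is a minimal sketch of both laws in Python, using made-up numbers for a hypothetical team (nothing here comes from real data):

```python
# Little's Law: average WIP = arrival rate x average time in system (L = λW).
# Utilization Law: utilization = arrival rate / service rate (ρ = λ/μ).
# All numbers below are illustrative assumptions.

arrival_rate = 5.0     # work items arriving per week (λ)
avg_cycle_time = 3.0   # average weeks an item spends in process (W)
service_rate = 6.0     # items the team can complete per week (μ)

avg_wip = arrival_rate * avg_cycle_time    # L = λW  -> 15 items in progress
utilization = arrival_rate / service_rate  # ρ = λ/μ -> ~83%

print(f"Average WIP (Little's Law): {avg_wip:.0f} items")
print(f"Utilization (Utilization Law): {utilization:.0%}")
```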
While these concepts stem from queueing theory, their application in software development is indirect, via principles adopted in Lean manufacturing that rely on queueing theory for their validity.
The manufacturing domain allows for several simplifying assumptions, such as predictable demand, repeatable production processes, low variability in task completion times, and deterministic work routing.
These assumptions lead to simpler, quasi-deterministic queueing models from the perspective of general queueing theory. Specifically, as discussed in later posts, they assume that the inputs to a parameterized queueing model are known beforehand with reasonable certainty.
These assumptions enable the design of stable processes optimized to produce outputs efficiently given those inputs. They also allow the operational focus to shift to managing the process’s outputs and exceptions.
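To see concretely what these assumptions buy, consider a minimal sketch contrasting a queue with perfectly deterministic inputs (the idealized manufacturing case) against one with exponentially distributed inter-arrival and service times, at the same utilization. The rates are illustrative assumptions:

```python
# Same demand and capacity, very different delays depending on variability.
# D/D/1: perfectly regular arrivals and service -> no queueing below capacity.
# M/M/1: exponential arrivals and service -> queueing delay even at ρ < 1.

lam = 4.0       # arrival rate (items/week), assumed
mu = 5.0        # service rate (items/week), assumed
rho = lam / mu  # utilization = 0.8

wq_deterministic = 0.0     # D/D/1: items never wait below capacity
wq_mm1 = rho / (mu - lam)  # M/M/1 mean wait in queue: ρ/(μ - λ)

print(f"Utilization: {rho:.0%}")
print(f"Mean queueing delay, deterministic inputs: {wq_deterministic:.1f} weeks")
print(f"Mean queueing delay, stochastic inputs:   {wq_mm1:.1f} weeks")
```

At 80% utilization, the deterministic line never queues, while the stochastic one waits nearly a week on average; this is the gap that the manufacturing assumptions quietly erase.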
However, software development is far more dynamic and unpredictable, and none of these simplifying assumptions, specifically those about deterministic inputs and stable processes, apply.
Lean software development introduces interventions such as WIP Limits, Pull Policies, and Constraint Management to make those assumptions fit within specific bounded contexts in a software development process.
While they have proven valuable in improving the flow of work and stability of development processes in smaller, localized contexts, these techniques become more unwieldy as we try to expand them to broader and more general organizational scopes.¹
More fundamentally, the Lean approach ignores the stochastic nature of software development inputs. Modeling such processes falls comfortably within the scope of general queueing theory.
General Queueing Theory
Queueing theory is a general mathematical framework that explains how and why delays occur in processes under resource constraints. It highlights the importance of concurrency as a constraint on flow and allows us to characterize how variability in demand and development capacity influences delays.
In particular, in the general queueing theory model, stochastic processes are the inputs to a queueing system, and the theory uses probabilistic arguments to explain the relationship between those inputs and the model’s outputs.
This gives us powerful new tools to help us reason probabilistically about system stability, resilience, and the capacity to perform work in contexts where volatility and uncertainty are typical.
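One classical result that makes this concrete is Kingman’s approximation for the G/G/1 queue, which separates the effects of utilization and variability on queueing delay. Here is a minimal sketch; the parameter values are assumptions for illustration:

```python
# Kingman's G/G/1 approximation:
#   Wq ≈ (ρ / (1 - ρ)) * ((ca² + cs²) / 2) * τ
# where ρ is utilization, ca²/cs² are the squared coefficients of variation
# of inter-arrival and service times, and τ is the mean service time.

def kingman_wq(rho: float, ca2: float, cs2: float, tau: float) -> float:
    """Approximate mean wait in queue for a G/G/1 station."""
    return (rho / (1.0 - rho)) * ((ca2 + cs2) / 2.0) * tau

tau = 1.0  # assumed mean service time: one week per item
for ca2 in (0.25, 1.0, 4.0):  # low, moderate, high arrival variability
    wq = kingman_wq(rho=0.8, ca2=ca2, cs2=1.0, tau=tau)
    print(f"arrival variability ca² = {ca2}: mean delay ≈ {wq:.1f} weeks")
```

In this approximation, delay scales linearly with the combined variability term and blows up as utilization approaches one; both effects are invisible if all we track is output.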
The themes we will explore are general queueing theory and its underlying probabilistic reasoning techniques, the representation of product development value streams using queueing networks, and causal models that help us measure and model cause-and-effect relationships between process behavior and economic outcomes, even in complex adaptive environments.
While techniques such as probabilistic forecasting are becoming standard parts of Lean/Agile practice, viewed from the perspective of general queueing theory, they still primarily forecast the future output parameters of a stochastic process from the past output parameters of that same process.
This approach makes forecasts volatile, but more importantly, it does not provide a way to reason about why they are volatile.
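Here is that standard move sketched in code: a Monte Carlo resampling of past weekly throughput. The history is hypothetical, and note that nothing in the model refers to arrivals, capacity, or WIP:

```python
# Output-only forecasting: resample past weekly throughput to project
# completions over the next quarter. Sample history is made up.

import random

random.seed(42)
past_weekly_throughput = [3, 7, 2, 5, 6, 4, 1, 5, 8, 3, 4, 6]

def forecast_totals(weeks: int, trials: int = 10_000) -> list[int]:
    """Monte Carlo projection of items completed over the next `weeks` weeks."""
    return sorted(
        sum(random.choice(past_weekly_throughput) for _ in range(weeks))
        for _ in range(trials)
    )

totals = forecast_totals(weeks=13)
p50 = totals[len(totals) // 2]
p85 = totals[int(len(totals) * 0.85)]
print(f"Next quarter: ~{p50} items (50th percentile), ~{p85} items (85th)")
```

If the underlying process shifts, the forecast shifts with it, and this model has no vocabulary for asking why.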
I’ll argue that this opacity is a consequence of starting with the simpler, output-focused Lean model, and that a model based on general queueing theory is a much stronger foundation for building forecasting models for software development.
Starting from the first principles of queueing theory and modeling software development as a network of inter-connected stochastic processes will help us build a more realistic picture and give us a robust set of tools and techniques to reason rigorously about real-world software development.
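As a first taste of what modeling from the inputs looks like, here is a toy simulation of a two-stage value stream, say Dev followed by Review, treated as tandem queues and built directly from assumed arrival and service rates:

```python
# A toy value stream as two queueing stages in tandem, simulated from first
# principles with Lindley-style recursions. All rates are assumptions.

import random

random.seed(7)
N = 50_000
lam, mu_dev, mu_rev = 4.0, 5.0, 6.0  # items/week: arrivals, Dev, Review

arrival = dev_free = rev_free = 0.0
total_flow_time = 0.0

for _ in range(N):
    arrival += random.expovariate(lam)   # next item arrives
    start_dev = max(arrival, dev_free)   # waits if Dev is busy
    dev_free = start_dev + random.expovariate(mu_dev)
    start_rev = max(dev_free, rev_free)  # waits if Review is busy
    rev_free = start_rev + random.expovariate(mu_rev)
    total_flow_time += rev_free - arrival

print(f"Mean end-to-end flow time: {total_flow_time / N:.2f} weeks")
```

With these rates, the simulated mean converges to about 1.5 weeks, matching the textbook result for tandem M/M/1 stations, and every parameter is something we can inspect, question, and change.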
The Road Ahead
Many of the problems we struggle to model today in software development are surprisingly similar to the ones that initially motivated the development of queueing theory in the early 20th century.
Over the past century, theory and applications have built on each other as practitioners used them to solve problems in various domains, including computer architecture, traffic analysis, healthcare, public services, and computer networking.
Many of these problems are isomorphic to the ones we struggle to describe and model accurately in software. While some aspects of software development are more complex than many of these domains, the generality of queueing theory seems well-equipped to tackle even those complexities.
We will first see how elegant, well-understood techniques from queueing theory and queueing network models applied in other domains can answer many relevant questions in software product development and delineate the bounded contexts in which managers can use them to make critical economic decisions.
We’ll investigate new ways to model product development flow in complex adaptive systems, incorporate uncertainty at a foundational level into our reasoning about software processes, and reason causally about how process improvements can drive economic outcomes in software development.
By examining these concepts from a different perspective, we hope to provide practical insights that can inform your approach to software product development.
Warm-up: The Entrepreneur and The Queueing Theorist
We’ll begin our exploration with a parable set in the near future.
Imagine the following scenario: AI has completely solved all the problems we face today in software delivery.
And yet… many of the debates and challenges we face today in delivering value to customers remain…
Check out the three-episode series first to get oriented around some of the discussions we’ll be delving into this season on The Polaris Flow Dispatch.
Episode 1: The Problem
Episode 2: The Analysis
Episode 3: Scenarios
¹ I don’t mean to discount structured approaches like Enterprise Kanban, TameFlow/Flow@Scale, Flight Levels, etc., in tackling the problem of managing the flow of work for these larger bounded contexts.
These remain challenging and complex problems that we are still exploring and developing solutions for as an industry.
Our concerns here are mainly about creating models of such processes so that we can reason more robustly about these larger contexts without ignoring the stochastic nature of the problem domain.
Our thesis is that starting with models based on general queueing theory provides a stronger foundation for this than starting with simplified models that implicitly adopt the assumptions of lean manufacturing.