Archive

Metrics

What’s Your System Improvement Index?

Most systems operate under some sort of performance metric – service uptime, number of users, needs met, revenue growth, new feature deployment, incident resolution time…that sort of thing.

Whether they’re set by management, agreed upon by the Folks That Matter™, or simply targets for continuous improvement, metrics exist.

Sometimes, they’re overtly stated – written down in strategy documents or OKRs.

And other times they’re not formalised in this way.

Don’t mistake the absence of documented goals for the non-existence of those goals (see also: Your Real Job).

You might think your system has no performance metrics because nothing is in writing and nothing has ever been formally discussed – but in truth, all you lack is clear agreement as to what your system’s performance metrics are.

Whether you’re a founder, product manager, engineer or other contributor, your system can do one of two things – meet expectations or disappoint. The absence of clear, agreed, preferably documented performance metrics merely means you don’t know when the system is underperforming.

If your system lacks clearly defined metrics, stop here – the key takeaway is to discuss and agree metrics and targets, even if just on your own team – so you know when the system is failing to hit the mark.

For most mature systems and products, it’s around this time of year teams analyse performance against goals – 15% improvement in latency, 11% increase in conversion, 7% bump in NPS…that sort of thing.

My question is this:

“To meet your system’s goals, how much do your collective assumptions and beliefs need to improve?”

It’s a difficult question without an obvious answer – 0%? In line with the target metrics? Double digit percentage gains across the board?

I don’t know the answer, and you may not either – but we’d likely both agree your organisation’s mindset and culture can always evolve.

Tools like organisational psychotherapy can help reveal limiting assumptions and facilitate shifts in collective beliefs.

So let me ask plainly:

“To meet your goals this year, how much do you need your organisation’s culture to develop?”

Pinning down an exact number isn’t straightforward, but it certainly isn’t zero.

One suggestion to quantify this:

Conduct regular culture and maturity assessments, and use the year-on-year improvement as an indicative ‘System Improvement Index’ benchmark for collective thinking shifts.
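As a minimal sketch of the arithmetic (the scores and the 0-5 scale here are hypothetical), in Python:

    # Hypothetical culture/maturity assessment scores, on a 0-5 scale
    last_year_score = 2.8
    this_year_score = 3.1

    # System Improvement Index: year-on-year improvement as a percentage
    sii = (this_year_score - last_year_score) / last_year_score * 100
    print(f"System Improvement Index: {sii:.1f}%")  # 10.7%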

Of course, you may already do this, in which case view it as validation you’re tracking evolutions in organisational worldview.

If not, there are many good culture evaluation frameworks out there. Use one aligned to your organisation’s design and purpose. We have one we can share too – just ask!

Let me close by asking once more:

To meet next year’s targets, how much do your collective assumptions and beliefs need to improve? What’s your system’s ‘Improvement Index’?

Measure With Purpose

Why Not Jump In?

Starting off by gathering all sorts of metrics might seem tempting. But wait. Jumping in head first can land you in a pool of data that’s both overwhelming and meaningless. Knowing what you’re going to do with your metrics before you start collecting them isn’t just best practice; it’s a necessity.

What’s the Risk?

Collecting metrics aimlessly not only dilutes focus but also poses several other risks. Data noise, resource waste, and misguided decision-making can plague your organisation if you’re not careful.

What About GQM?

Basili et al.’s Goal-Question-Metric (GQM) approach can be a life-saver here. It’s a structured approach for defining and interpreting metrics. First, set your goals. Next, ask questions that will help you determine whether your goals are being met. Finally, decide on the metrics that will provide the answers to these questions. GQM offers a disciplined way to ensure that you’re gathering metrics that are both meaningful and actionable.

What Should You Consider?

Before embarking on your metrics journey, contemplate:

  1. What are our goals?
  2. What questions do we need to answer to reach those goals?
  3. Which metrics will help answer these questions?

Don’t accept vague or overly general answers about the purpose of a metric. “We look at them” isn’t good enough.
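By way of illustration, here’s a minimal sketch of a GQM breakdown captured as plain data, in Python – the goal, questions and metrics are all hypothetical:

    # A hypothetical Goal-Question-Metric breakdown: the goal begets
    # questions, and each question begets the metrics that answer it.
    gqm = {
        "goal": "Improve the reliability of the checkout service",
        "questions": {
            "Are failures becoming less frequent?": ["incidents per month"],
            "Are we recovering faster when they occur?": ["mean time to recovery (hours)"],
        },
    }

    # Only metrics that answer a question tied to the goal get collected.
    for question, metrics in gqm["questions"].items():
        print(f"{question} -> {', '.join(metrics)}")

Anything that can’t be traced back through a question to a goal doesn’t get collected.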

So, When to Start?

Begin your metrics collection only when you have a well-defined plan in place. Use the GQM approach or a similar framework to give your metrics gathering the focus and purpose it needs.

Will This Approach Really Work?

Yes. Intentionality is key. Knowing what you aim to achieve will help you decide what to measure. That way, you’re not just accumulating data; you’re gathering actionable insights.

The Fallacy of Measuring Developer Productivity: McKinsey’s Misguided Metrics

At least the execrable, and totally misinformed, recent McKinsey article “Yes, you can measure software developer productivity” has us all talking about “developer productivity”. Not that that’s a useful topic for discussion, btw – see “The Systemic Nature of Productivity”, below. Even talking about “development productivity”, i.e. that of the whole development department, would have systems thinkers like Goldratt spinning in their graves.

The Systemic Nature of Productivity

Productivity doesn’t exist in a vacuum; it’s a manifestation of the system in which work occurs. This perspective aligns with W. Edwards Deming’s principle that 95% of the performance of an organisation is attributable to the system, and only 5% to the individual. McKinsey’s article, advocating for specific metrics to measure software developer productivity, overlooks this critical context, invalidating its recommendations from the outset.

Why McKinsey’s Metrics Miss the Mark

Quantitative Tunnel Vision

McKinsey’s emphasis on metrics ignores the complex web of factors that actually contribute to productivity. This narrow focus can lead to counterproductive behaviours.

The Dangers of Misalignment

Metrics should align with what truly matters in software development. By prioritising the wrong metrics, McKinsey’s approach risks incentivising behaviours that don’t necessarily add value to the project or align with organisational goals.

Predicated on Fallacies

McKinsey’s suggestions are riddled with fallacious assumptions, including:

  • Benchmarking – long discredited.
  • Contribution Analysis – focused on individuals. Music to the ears of traditional management but oh so wrong-headed.
  • Talent – see, for example, Deming’s 95/5 for why the belief in “talent” as a concept is fallacious.
  • Measuring productivity (measure it, and productivity will go down).

The Real Measure: Needs Attended To and Needs Met

The Essence of Software Development

The core purpose of business – and thus of software development – is to meet stakeholders’ needs. The most relevant metrics therefore centre on these questions: How many stakeholders’ needs have been identified? How many have been, and are being, attended to? How many have been successfully met? These metrics encapsulate the real value generated by a development team – as an integrated part of the business as a whole. (See also: The Needsscape).

Beyond the Code

Evaluating how well needs are attended to and met requires a focused approach. It includes understanding stakeholders’ requirements, effective collaboration within and across teams and departments, and the delivery of functional, useful solutions. (Maybe not even software – see: #NoSoftware).

Deming’s 95/5 Principle: The Elephant in the Room

The System Sets the Stage

Ignoring the role of the system in productivity is like discussing climate change without mentioning the Sun. Deming’s 95/5 principle suggests that if you want to change productivity, you need to focus on improving the system, not measuring individuals, or even teams, within it.

The Limitations of Non-Systemic Metrics

Individual metrics are the 5% of the iceberg above the water; the system – the culture, processes, and tools that comprise the working environment – is the 95% below. To truly understand productivity, we need metrics that evaluate the system as a whole, not just the tip of the iceberg. And the impact of the work (needs met), not the inputs, outputs or even outcomes.

The Overlooked Contrast: Collaborative Knowledge Work vs Traditional Work

McKinsey’s article advocates for yet more Management Monstrosities, perpetuating the category error of treating CKW – collaborative knowledge work – as indistinguishable from traditional models of work.

The Nature of the Work

Traditional work often involves repetitive, clearly defined tasks that lend themselves to straightforward metrics and assessments. Think of manufacturing jobs, where the number of units produced per time period or per resources committed can be a direct measure of productivity. Collaborative knowledge work, prevalent in fields like software development, is fundamentally different. It involves complex problem-solving, creativity, and the generation of new ideas, often requiring deep collaboration among team members.

Metrics Fall Short

The metrics that work well for traditional jobs are ill-suited for collaborative knowledge work. In software development, such metrics can be misleading. The real value lies in innovation, problem-solving, and above all meeting stakeholders’ needs.

The Role of Team Dynamics

In traditional work settings, an individual often has a clear, isolated set of responsibilities. In contrast, collaborative knowledge work is highly interdependent. This complexity makes individual performance metrics not just inadequate but potentially damaging, as they can undermine the collaborative ethos needed for the team to succeed.

The Importance of Systemic Factors

The system in which work takes place plays a more significant role in collaborative knowledge work than in traditional roles. Factors like communication channels, decision-making processes, and company culture (shared assumptions and beliefs) can profoundly impact productivity. This aligns with Deming’s 95/5 principle, reinforcing the need for a systemic view of productivity.

Beyond Output: The Value of Intellectual Contributions

Collaborative knowledge work often results in intangible assets like intellectual property, improved ways of working, or enhanced team capabilities. These don’t lend themselves to simple metrics like ‘units produced’ but are critical for long-term success. Ignoring these factors, as traditional productivity metrics tend to do, gives an incomplete and potentially misleading picture of productivity.

A Paradigm Shift is Needed

The nature of collaborative knowledge work demands a different lens through which to evaluate productivity. A shift away from traditional metrics towards more needs-based measures is necessary to accurately capture productivity in modern work environments.

Quality and Productivity: Two Sides of the Same Coin

The Inextricable Link

Discussing productivity in isolation misses a crucial aspect of software development: quality. Quality doesn’t just co-exist with productivity; it fundamentally informs it. High-quality work means less rework, fewer bugs, and, ultimately, a quicker and more effective delivery-to-market approach.

Misguided Metrics Undermine Quality

When metrics focus solely on outputs they can inadvertently undermine quality. For example, rushing to complete tasks can lead to poor design choices, technical debt, and an increase in bugs, which will require more time to fix later on. This creates a false sense of productivity while compromising quality.

Quality as a Measure of User Needs Met

If we accept that the ultimate metric for productivity is “needs met,” then quality becomes a key component of that equation. Meeting a user’s needs doesn’t just mean delivering a feature quickly; it means delivering a feature that works reliably, is easy to use, and solves the user’s problem effectively. In other words, quality is a precondition for truly meeting needs.

A Systemic Approach to Quality and Productivity

Returning to Deming’s 95/5 principle, both quality and productivity are largely influenced by the system in which developers work. A system that prioritises quality will naturally lead to higher productivity, as fewer resources are wasted on fixing errors or making revisions. By the same token, systemic issues that hinder quality will have a deleterious effect on productivity.

Summary: A Call for Better Metrics

Metrics aren’t the problem; it’s the choice of metrics that McKinsey advocates that demands reconsideration. By focusing on “needs attended to” and “needs met”, and by acknowledging the vital role of the system, organisations can develop a more accurate, meaningful understanding of holistic productivity, and the role of software development therein. Let’s avoid the honey trap of measuring what’s easy to measure, rather than what matters.

Afterword

As with so much of McKinsey’s tripe, the headline contains a grain of truth – “Yes, you can measure software developer productivity”. But the nitty-gritty of the article is just so much toxic misinformation. Many managers will seize on it anyway. Caveat emptor!

The Deming Way to Measuring Software Developer Productivity

Many software folks pay lip service to Bill Deming and his work. Few if any pay any attention to the implications. Let’s break the mould and dive into how the great man himself might look at software developer productivity (a subset of collaborative knowledge worker productivity more generally).

This isn’t just a thought experiment; it’s an invitation to rethink our existing assumptions and beliefs about productivity.

Why Traditional Metrics Don’t Cut It

If Deming could peer over our shoulders, he’d likely be aghast at our fascination with shallow metrics. Lines of code? Bugs fixed? DORA? SPACE? These are mere surface ripples that fail to delve into the depths of what truly constitutes productivity. Deming was a systems thinker, and he’d want us to look at productivity as an outcome of a complex system. It’s influenced by everything from the quality of management practices to the clarity of project goals, and yes, even the standard of the coffee in the break room.

Aside 1

Let’s not get too hung up on staff productivity and the measurement thereof.

Deming’s First Theorem states that:

“Nobody gives a hoot about profits.”

A corollary might be:

“Nobody gives a hoot about software developer productivity.”

Which, drawing on my 50+ years’ experience in the software business, rings exceedingly true. Despite all the regular hoo-hah about productivity. Cf. Argyris on espoused theory vs theory-in-use.

Aside 2

While we’re on the subject of measurement, let’s recognise that measurements will only be valid and useful when specified and collected by the folks doing the work. I’ve written about this before, for example in my 2012 post “Just Two Questions”.

Aside 3

Let’s remember that the system (the way the work works) accounts for some 95% of an individual’s productivity. Leaving just 5% that’s a consequence of an individual’s talents and efforts. This makes it clear that attempting to measure individual productivity, or even team productivity, is a fool’s errand of the first order.

Here’s the Deming Approach

So, how would the statistician go about this? Hold on to your hats, because we’re diving into an eight-step process that marries statistical rigour with psychology and humanistic care.

1. Understand the System

First things first, get to grips with the holistic view. Understand how a line of code travels from a developer’s brain to the customer. This involves understanding the various elements in the software development lifecycle and how they interact.

2. Define Objectives

Random metrics serve no one. Deming would urge us to link productivity measurements to broader business objectives. What’s the end game? Is it faster delivery, better quality, or increased customer satisfaction?

3. Involve the Team

The people on the ‘shop floor’ have valuable insights. Deming would never neglect the developer’s perspective on productivity. Involving them in defining productivity criteria ensures buy-in and better data accuracy.

4. Data Collection

We’ve got our objectives and our team’s perspective. Now it’s time to roll up our sleeves and get to work on data collection. But this is Deming we’re talking about, so not just any data will do. The focus will be on meaningful metrics that align with the objectives we’ve set.

5. PDSA Cycle

Implementing the Plan-Do-Study-Act (PDSA) cycle, any changes aimed at boosting productivity would be introduced in small, incremental phases. These phases would be assessed for their effectiveness before either full implementation or going back to the drawing board.

6. Feedback Loops

You’ve made changes; now listen. Feedback from developers, who can offer a real-time response to whether the changes are working, is invaluable.

7. Regular Reviews

Productivity isn’t a static entity. It’s a dynamic component of a system that’s always in flux. Regular reviews help recalibrate the process and ensure it aligns with the ever-changing landscape.

8. Leadership Commitment

Finally, if you think increasing productivity is solely a developer’s job, think again. The leadership team must be as committed to this journey as the developers themselves. It’s a collective journey toward a common goal.

The Long Game

Deming never promised a quick fix. His was a long-term commitment to systemic improvement. But the fruits of such a commitment aren’t just increased productivity. You’re looking at more value for your business and greater satisfaction for both your developers and customers. So, let’s stop paying lip service to Deming and start actually embracing his philosophy. After all, a system is only as good as the assumptions and beliefs that shape it.

Reliability and Effectiveness

Many times when presenting either the Rightshifting curve or the Marshall Model, I have been asked to define “Effectiveness” (i.e. the horizontal axis for both of these charts). I have never been entirely happy with my various answers. But I have recently discovered a definition for effectiveness, including a means to measure it, which I shall be using from now on. This definition is by Goldratt, as part of Theory of Constraints, and appears in his audiobook “Beyond the Goal”.

Measurements

Measurements serve us in two ways:

  1. As indicators of where we are, so we know where to go. For example, the dials and gauges on a car’s dashboard.
  2. As means to induce positive behaviours.

We must always remember, though, that we are dealing with humans and human-based organisations:

“Tell me how you measure me and I’ll tell you how I behave.” ~ Goldratt

We must choose measurements to induce the parts to do what’s better for the company as a whole. If a measurement jeopardises the performance of the system as a whole, the measurement is wrong.

Companies already have one set of measurements which measure their performance as a whole: their financial measurements, e.g. net profit (P&L) and investment (balance sheet).

What about when we dive inside the company as a whole, though? We then have two areas in which we have to conduct measurements:

  1. Support for and evaluation of management decisions
  2. Oversight on execution (how well are we executing on the decisions we’ve made?)

We generally don’t have good measurements in terms of decisions, nor good measurements in terms of execution.

We have to remember we’re dealing with human beings. And as long as we’re dealing with human beings, we have to realise that by judging any person on more than five measurements, we’re creating anarchy. Simply because, with more than five measurements, people can basically do whatever they like and likely still score high on one of them. And their bosses can nail them on some measurement they fail to deliver against. More than five measurements is conceptually wrong.

Categories of Measurement

So, how to categorise things so that human beings can grasp the situation? Can we do better than we do now? Theory of Constraints suggests we can.

What resources do we have to help us formulate measurements in each of the above two areas; management decision-making, and execution of those decisions?

  • For decision-related measurements – there are lots of resources available to help e.g. books on Throughput Accounting.
  • For execution-related measurements – there is next to nothing published anywhere.

Continuous Improvement

I’ll not make the case for continuous improvement here. But if we wish to induce people to continuously improve, where should we focus our measurements? On things that are done properly, or on things that are not done properly? Which of these two foci better drives action? Focussing on the things we’re doing properly tends not to drive improvement. So we must concentrate on things that are not done properly.

How many things are not done properly? Kaplan suggests that in most businesses, there are more than twenty categories of things that are not done properly. But for humans to grasp our measures, we have already decided we need at most five categories, categories that completely cover everything that is not done properly, with zero overlap or duplication. Finding a way to categorise things that meets our criteria here is a nontrivial challenge.

Goldratt says there are only two categories:

  1. Things that should have been done but were not.
  2. Things that should not have been done but nevertheless were done.

Just two categories, with zero overlap. Beautifully simple.

And each of the above two categories already have a word defining them:

  1. Things that should have been done but were not – unreliability.
  2. Things that should not have been done but nevertheless were done – ineffectiveness.

Let’s swap these around into positive terms: Reliability, and Effectiveness.

Lovely.

Reliability and Effectiveness

Can we find measures to quantify Reliability and Effectiveness? How can we put numbers on our reliability? How can we put numbers on our effectiveness? Because, without numbers, we’re not measuring.

Let’s consider what is the end result of being reliable, in terms of the system as a whole. And what is the end result of being effective, in terms of the system as a whole? Not in financial terms though, as reliability and effectiveness are not financial things. We know this intuitively.

Reliability

Things that should have been done but were not.

The end result of being unreliable, in terms of the system as a whole, is that the company fails to fulfil its commitments to the external world. In other words, the company fails to ship on time. Do we already measure on-time shipment? Yes. We call it Due Date Performance. That’s a measure of how much we ship on time. “Our company Due Date Performance is 90%”. The unit of measure is almost always “percent”. What behaviour does this unit of measure trigger? Does it trigger behaviour that is good for the company? No. It encourages us to sacrifice on-time shipment of difficult, larger shipments in favour of smaller, easier shipments. So the dollar value of the sale must be part of any reliability measurement. We cannot ignore the dollar value.

Nor is time a factor in percent units. How late is each late shipment? We must include time, too. So, let’s change our “Reliability” units from “percent” to “Throughput dollar days” – the sales dollar value of each order that is late, multiplied by the number of days it is late, summed across all late orders. The sum total is the measurement of our (un)reliability.

This is of course a new unit of measure: Throughput-dollar-days. To infer trends, or to compare the performance of e.g. groups or companies, we will need time to train our intuition in the significance of this new unit of measure. As we begin to get to grips with it, it can help to present it as an indicator (a number in some fixed range, say 1-10, or, as we use in Rightshifting and the Marshall Model, 0-5) until we have adjusted to the Throughput-dollar-days measure.
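A minimal sketch of the arithmetic in Python (the order values and dates are, of course, hypothetical):

    from datetime import date

    # Each late order: (sales dollar value, promised date, actual ship date)
    late_orders = [
        (120_000, date(2024, 3, 1), date(2024, 3, 11)),  # 10 days late
        (8_000, date(2024, 3, 5), date(2024, 3, 7)),     # 2 days late
    ]

    # Throughput-dollar-days: (dollar value x days late), summed over all late orders
    tdd = sum(value * (shipped - promised).days
              for value, promised, shipped in late_orders)
    print(f"Throughput-dollar-days: {tdd:,}")  # 1,216,000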

Effectiveness

Things that should not have been done but nevertheless were done.

If we do things that we should NOT have been doing, what is the end result? Inventory. Do we already measure inventory? Of course we do. But how do we presently measure inventory? Either in terms of a dollar value, for example “$6 million of finished goods inventory”, or in terms of a number of days, for example “60 days of finished goods inventory”. But both dollars AND time are important. Existing units of measurement for inventory drive unhelpful local behaviours like over-production and poor flow. So, how to measure to induce helpful behaviours? For each item of inventory, let’s use the dollar value of the inventory multiplied by the number of days that we’re holding that inventory under our local authority. We’ll call this unit of measure “Inventory-dollar-days”.

And one more measure of effectiveness: local operating expense. (For example, scrap, or salaries – within a given subunit of the company).

Note: We can fold quality into these measures simply by not recognising a sale, or a reduction in inventory, until the customer accepts the items (i.e. until the items meet the customer’s quality standards).
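And a similar sketch for Inventory-dollar-days (again, the figures are hypothetical):

    # Each inventory item: (dollar value, days held under our local authority)
    inventory = [
        (50_000, 60),  # $50k of finished goods held for 60 days
        (10_000, 14),  # $10k of components held for 14 days
    ]

    # Inventory-dollar-days: (dollar value x days held), summed over all items
    idd = sum(value * days for value, days in inventory)
    print(f"Inventory-dollar-days: {idd:,}")  # 3,140,000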

Summary

Now we have a means for defining effectiveness (and reliability) in a way in which we can also measure them. I feel very comfortable with that.

– Bob

Further Reading

Beyond the Goal ~ Eliyahu M. Goldratt (Audiobook only)

Relevance Lost: The Rise and Fall of Management Accounting ~ Johnson & Kaplan

The Goal ~ Eliyahu M. Goldratt

Throughput Accounting ~ Thomas Corbett

The Balanced Scorecard: Translating Strategy into Action ~ Kaplan & Norton

Forecasts, Estimates and Cost Accounting

#NoEstimates?

I’ve tried to avoid getting involved in the ongoing #NoEstimates debate. It seems more like a religious war than a discussion with much prospect of a useful outcome. And a classic case of the Analytic-minded folks butting heads with the Synergistic-minded (and a few Ad-hoc perspectives thrown in for extra confusion).

For me, it also seems like a non-argument. By which I mean that all the knowledge is out there, if only folks would seek it out. For myself, I have several perspectives, drawn from these bodies of knowledge, that I shall continue to apply in the context of estimating and #NoEstimates.

The Theory Of Constraints Perspective

I don’t recall much in Goldratt’s teachings about estimates, per se. But he has written much about the futility of forecasting, e.g. customer demand for products. I suggest his arguments also hold true for forecasting costs (estimating). For more info you might like to take a look at his books, and in particular “It’s Not Luck”.

The Systems Thinking Perspective

Systems Thinking has a relevance to cost estimation, in that systems thinking (cf. Goldratt, Ackoff) observes that a system is a collection of parts, such that improving the performance of the parts of a system taken separately will negatively impact the performance of the whole. In fact, such “local” improvements can entirely destroy an organisation.

Cost Accounting assumes that the cost of each part, each operation, can be known separately (“local costs”). This is a false assumption. I suggest that this means the estimation of costs can, in reality, only produce useful numbers when considered in the context of the system (organisation) as a whole.

See also: “Throughput Accounting” ~ Corbett

The Nonviolent Communications Perspective

From this perspective, we can choose to see folks’ requests for estimates as a means for meeting some of their needs. I’d suggest that some other folks see this means as sub-optimal, in that these other folks believe that there are better means for those folks to get their needs met than through estimates and estimating. And I’d also suggest that for those other folks, having to provide estimates is not meeting their needs. Which is triggering in them various negative feelings, possibly including anger, frustration, hostility and anxiety.

So, applying this knowledge, we might choose to discuss what needs all these folks have, which ones are being met and which not, and some options for effective means for getting everyone’s needs met. Hopefully this might lead to an outcome where folks can agree on a mutually joyful way forward.

The Covalent Perspective

In any non-trivial endeavour, there may be some number of different stakeholders and stakeholding communities, each with their own set of needs. These different needs can and will, at least from time to time, conflict in possibly mutually-exclusive ways. The Covalent approach recognises this and focuses on making folks’ needs explicit and visible, such that these conflicts can be resolved, to the extent that is ever possible.

See also: “Competitive Engineering” ~ Tom Gilb

– Bob

Productivity

For all the angst and discussion around how to make organisations, teams and people more productive, we might be forgiven for thinking that the idea of “productivity” was commonly understood and agreed.

However, this is not so.

For example, classical economics has a markedly different definition than does Theory of Constraints (TOC). And if you ask someone – in particular managers demanding “higher productivity” – for an operational definition, you may get a blank look, or yet more definitions.

“An operational definition is a procedure agreed upon for translation of a concept into measurement of some kind.”

~ W. Edwards Deming

I’m not arguing for one, common, consistent, clear definition. Rather, I’m drawing attention to the confusion over the term – confusion compounded by many folks taking it for granted that they’re all talking about the same things, that they’re all using the same definitions.

“There is no true value of any characteristic, state, or condition that is defined in terms of measurement or observation. Change of procedure for measurement (change of operational definition) or observation produces a new number.”

~ W. Edwards Deming

Here are just some (differing) definitions I found on the Web:

So, what is productivity? I’m confused now. Are you?

My Own Definition

When I’m talking about productivity, for example in my presentations and workshops on organisational effectiveness and Rightshifting, I have a particular definition clearly in mind:

“Productivity is the act of bringing a company closer to its goal”

~ Jonah, in The Goal by Eliyahu M Goldratt

Personally, although fully agreeing with Goldratt on this definition, I find it’s hard to use in an explanation or discussion, especially with folks unfamiliar with Goldratt’s work. As the Jonah character goes on to say:

“Productivity is a meaningless concept unless the folks in an organisation understand what their goal is.”

And it takes the whole of that book (The Goal) to explain how to discover the goal in any given organisation.

Aside: This definition of productivity makes it closely congruent with “organisational effectiveness” – see the Rightshifting chart.

So for the purposes of discussion, I sometimes use another definition, derived from the TOC formula:

Productivity = Throughput / Operating Expenses

where:

Throughput = Sales – Totally Variable Costs

(a.k.a. the rate at which the system generates money through sales).

and

Operating Expenses = all the money the [organisation] spends in order to turn inventory into throughput.
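A quick worked example, with hypothetical figures:

    # A hypothetical quarter for one organisation (all figures in dollars)
    sales = 1_000_000
    totally_variable_costs = 300_000  # e.g. raw materials, bought-in components
    operating_expenses = 500_000      # money spent turning inventory into throughput

    throughput = sales - totally_variable_costs     # 700,000
    productivity = throughput / operating_expenses  # 1.4
    print(f"Productivity (T / OE): {productivity}")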

So, my simpler definition is:

Productivity is how much it costs an organisation to move one unit (measurable step) closer to its goal (whatever its goal may be – for example, getting a particular product to market).

Note: “Cost” here, and below, is in the most general of terms, maybe a composite function of the Five Capitals, and not necessarily just in financial terms e.g. money or cash.

Or, as an (almost) operational definition, where the goal is (improving) organisational effectiveness:

Cost to move the organisation one unit (say, 0.1 of a rightshifting index point) to the right on the effectiveness axis of the Rightshifting chart.

Note: This is much the same as the Rightshifting measure named “Drag”.

Productivity is a Property of the System

By which I mean that productivity is never a property of an individual or team, but of the whole system of work within which individuals and teams do their work. This is often referred to as Deming’s 95% rule.

Productivity in Knowledge Work

Taiichi Ohno said “People don’t go to Toyota to ‘work’, they go there to ‘think’”.

If we take this at face value, then The Goal of Toyota, at least from Ohno’s point of view, was to get its people to ‘think’ (which I take to mean, study the system – the way the work works – and improve it).

Make it Clear

So next time, and every time, the topic – or issue – of productivity comes up, think “Are we all on the same page about what this word actually means?”

Some discussion, to ensure everyone is talking about the same thing, pays major dividends. And, yes, increases productivity.

– Bob

Postscript

Since I first wrote this post, it occurs to me that some readers may infer that I believe productivity is an “unalloyed good thing”. Inasmuch as productivity meets the needs of some folks in an organisation, we might choose to accept this at face value. Personally, I reject worshipping at the altar of productivity, and choose rather to appreciate that a blind pursuit of productivity at the expense of folks’ wider needs can do much more harm than good.

Post-Postscript

In December 2019 I wrote a post titled “Your Real Job” to highlight just how irrelevant productivity is in most organisations. You might like to take a look.

Further Reading

Cost Accounting is Productivity’s Public Enemy Number One – Abonar’s Blog
Theory of Constraints: Bottom Line Measurements – TOC Guide

Quantification vs Measurement

“If you think you know something about a subject, try to put a number on it. If you can, then maybe you know something about the subject. If you cannot then perhaps you should admit to yourself that your knowledge is of a meagre and unsatisfactory kind.”

~ Lord Kelvin, 1893

Some folks seem to mix up the idea of quantification with the idea of measurement.

“Why does it matter?” I suspect you might ask. I’ll leave you to be the arbiter of that.

I just wanted to flag that in my view (and in the dictionary), there’s a difference:

Quantity

“A fundamental, generic term used when referring to the measurement (count, amount) of a scalar, vector, number of items or to some other way of denominating the value of a collection or group of items.”

Quantification

“The act of assigning a quantity to (something).”

Tom Gilb defines quantification thusly:

“Quantification, even without subsequent measurement, is a useful aid to clear thinking (what is this about?) and good communication (this is the goal, gang).”

~ Tom Gilb

Measurement

“To ascertain the quantity of a unit of material via calculated comparison with respect to a standard.”

In A Nutshell

In a nutshell, the two terms differ in that:

  • Quantification is about a way to have more meaningful discussions, less obscured by subjective language, whilst
  • Measurement is about seeing more objectively what’s happening in your world.

In general we can fairly quantify anything; measuring things is often more problematic.

If you have your own preferred definitions, or any other feedback, I’d love to hear from you.

– Bob

Further Reading

Principles of Software Engineering Management ~ Tom Gilb
Competitive Engineering ~ Tom Gilb
Software Metrics ~ Norman E. Fenton
Quantifying Stakeholder Values ~ Tom Gilb (pdf)
Making Metrics More Practical in Systems Engineering ~ Tom Gilb (pdf)