Barking Up the Wrong Tree

Blind Faith

What if we’re wrong? What if we’re all unwitting participants (and victims) in a mass delusion of biblical proportions?

“Not creating delusions is enlightenment.”

~ Bodhidharma

What if the past thirty years of so-called progress in the field of software development has all been one vast waste of time?

What if we’ve been fooled by one huge placebo effect? Or by a combination of the placebo effect and other similarly pernicious delusions and cognitive biases?

“It is only when we forget all our learning that we begin to know.”

~ Henry David Thoreau

What if what we think we’ve learned turns out to have no validity at all?

Scrum, Waterfall, Agile, Kanban, XP, etc. “Process” itself. Could these all in fact be no more than the most egregious of red herrings?

What if it’s really some other factor – or factors in combination – that accounts for some, or indeed all, of the differences we observe from improvement initiatives? Honestly, I don’t think we can discount this possibility. Personally, I am coming round ever more to this belief.

Let’s take a look at some of the pernicious delusions and cognitive biases that may be at play here:

The Hawthorne Effect

The central idea behind the Hawthorne Effect is that changes in participants’ behavior during the course of a study may be “related only to the special social situation and social treatment they receive”.

The Feedback Effect

Improvements in folks’ performance may be a consequence of their receiving feedback on that performance (and not of any improvement in e.g. their skills per se). An “agile adoption” may give folks feedback for the first time in their working lives.

The Observer-Expectancy Effect

The observer-expectancy effect (also called the experimenter-expectancy effect, expectancy bias, observer effect, or experimenter effect) is a form of reactivity in which a researcher’s cognitive bias causes them to unconsciously influence the participants of an experiment.

“Any of a number of subtle cues or signals from an experimenter can affect the performance or response of subjects in the experiment.”

Sounds pretty much like agile coaching or scrum mastering, just about everywhere? Of course, the role of a coach or Scrum Master is indeed to affect their team(s) in such ways (at least, for the better).

The John Henry Effect

The John Henry effect is an experimental bias introduced into social experiments by reactive behavior by the control group (i.e. a group of people, not the subject of the experiment, used as a “control” against which progress in the subject group can be compared.)

As applied to organisations adopting agile, this effect may account, at least in part, for the improvement (if any) in teams, and other departments, not immediately part of the agile adoption (a.k.a. the pilot).

The Pygmalion Effect

The Pygmalion effect, or Rosenthal effect, refers to the phenomenon in which the greater the expectation placed upon a group of people, the better they perform.

In agile adoptions, managers typically place a great deal of expectation on the first agile team(s). According to this effect, these teams may improve simply as a consequence of those expectations (and not, for example, as a consequence of any changes to the way the work works).

The Placebo Effect

The placebo effect refers to the phenomenon in which people receiving a fake or otherwise intentionally ineffective treatment improve to more or less the same extent as people receiving a real, intentionally effective treatment.

“Placebos have been shown to work in about thirty percent of patients. Some researchers believe that placebos simply evoke a psychological response. That the act of taking them gives you an improved sense of well-being. However, recent research indicates that placebos may also bring about a physical response.”

The Subject-Expectancy Effect

The subject-expectancy effect is a form of reactivity that occurs when someone, e.g. a research subject, expects a given result and therefore unconsciously affects the outcome, or reports that expected result.

When people already know what the result of a particular “improvement” is supposed to be, they might unconsciously change their reaction to bring about that result, or simply report that result as the outcome – even if it wasn’t. Some researchers believe that people who experience the placebo effect have become classically conditioned to expect improvement from a change. Remember Dr. Ivan Pavlov and the dog that salivated when it heard a bell? In the case of people and placebos, the stimulus is e.g. the “ceremonies” of the new development method, and the response is real (or perceived) improvement and feelings of well-being and positivity.

“The expectation of pain relief causes the brain’s pain relief system to activate.”

The Novelty Effect

The novelty effect, in the context of human performance, is the tendency for performance to initially improve when a new approach to work is instituted – not because of any actual improvement in learning or achievement, but in response to increased interest in e.g. the new approach.

Self-determination Theory

Self-determination theory is concerned with the motivation behind the choices that people make, absent any external influences. The theory focuses on the degree to which an individual’s behavior is self-motivated and self-determined. Key studies that led to the emergence of this theory include research on intrinsic motivation.

In effective Agile adoptions, for example, increased self-determination (self-managed teams and the like) may be a causal factor in increased motivation, and thus in increases in e.g. productivity, quality, or what have you. Note here I’m saying that the benefits accruing (if any) are not the result of any material changes in the process (the way the work works), but in the social, motivational context for the work.

Summary

Just as in the Hawthorne experiments, we who (merely) observe are part of the system too. Objectivity is delusional. How much else of what we induce and convince ourselves to believe, is delusional too? And how would we know? As part of the “system”, could we ever know?

The Hawthorne experiments – contention over their validity and interpretation notwithstanding – stand as a warning against viewing even simple experiments on human participants as if the people were merely mechanical systems.

“If history repeats itself, and the unexpected always happens, how incapable must Man be of learning from experience?”

~ George Bernard Shaw

Given all the research into how our brains work (and more often, fail to work), should we not be at least open to the possibility that the results we think we have achieved in the world of software development have little or nothing to do with the things we think are important?

What do you think?

– Bob

Further Reading

The Nocebo Effect – A Contributory Factor in Failed Agile Adoptions?
8 comments
  1. This is fantastic Bob. I really like the way you challenge conventional thought. After all great new thinking can only happen when we challenge and re-validate our beliefs rather than accepting the status quo as the reason why things work. Thanks for sharing this.

  2. I’m reminded of a reply made to a critic of environmental policy who complained that there was no evidence that reducing energy consumption, increasing recycling, reducing pollution and so forth would do anything. The reply was “yeah, what if we made a better world and it all turned out to be for nothing?”

    All I know about the changes in the industry that I’ve seen is that I don’t work on death-march projects any more, I don’t do all-nighters, I don’t get stuck in interminable hack/test/bugfix cycles, I don’t spend hours and hours reviewing requirements documents, design documents, blah blah blah.

    If we changed all that and it didn’t matter, well that still sounds like a win to me.

  3. Hi Keith,

    Thanks for joining the conversation. As I understand your comment, you’re saying that things have changed for the better since the introduction of “agile” on projects in which you have been involved (I have no means to dispute your personal experience, nor any wish to do so). But you don’t know why that is.

    If we don’t understand why things have changed for the better, how can we in all conscience advise others or make claims about method or causation?

    – Bob

    • No, Bob, that’s not what I’m saying at all. I think I do know why what I do now has better outcomes for me than what I did in, let’s call it the first half of my career. And I also think that I know why doing what I recommend has very consistently led to improved outcomes for my clients (as evidenced by the fact that they keep asking me back, and referring their friends to me). I just don’t think that knowing this matters very much.

      As a consultant I try as much as I can to be mindful of a few ideas (ideas that I consider true and useful) as I do my work, and some of them are:

      1) People seem naturally inclined to attribute past success to smart decisions they must have made, and therefore to worry about making smart decisions now so as to turn current under-performance back into success. But really, their past success is more likely down to the skewed distributions governing their work, and their under-performance relative to that past success is more likely down to regression to the mean, than to some error they made recently.

      2) people seem naturally, and very strongly, inclined to see patterns where there need not be any, to assign causative relationships where there’s no evidence for them to hold, and to ascribe agency to objects and systems that have none. (As an aside, I understand this tendency to be where “religion” comes from)

      3) most “rational” decision making processes are nothing of the sort, most rationales are rationalisations, people tend to make decisions and then justify them (rather than making justified decisions) and most of the time that’s perfectly fine
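      The regression-to-the-mean point in (1) can be illustrated with a small simulation (a sketch with made-up numbers, not data from any real team): give every team a fixed underlying skill, make each measurement of its performance noisy, select the apparently worst performers, and then measure them again with no intervention at all.

```python
import random

random.seed(1)

# Each team has a fixed underlying skill; any single measurement of its
# performance is that skill plus a large dose of noise. All parameters
# here are illustrative assumptions, not measured values.
TRUE_SKILL_MEAN, TRUE_SKILL_SD, NOISE_SD = 50, 5, 10

def measure(skill):
    """One noisy observation of a team's performance."""
    return skill + random.gauss(0, NOISE_SD)

skills = [random.gauss(TRUE_SKILL_MEAN, TRUE_SKILL_SD) for _ in range(1000)]

# First measurement: pick out the 100 apparently worst teams -- the ones
# most likely to call in a consultant.
first = sorted((measure(s), s) for s in skills)[:100]

# "Intervention": do nothing at all, then measure the same teams again.
second = [measure(s) for _, s in first]

before = sum(obs for obs, _ in first) / len(first)
after = sum(second) / len(second)
print(f"mean before: {before:.1f}  mean after: {after:.1f}")
```

      On a typical run the selected teams “improve” markedly between the two measurements, purely because the extreme bad luck that got them selected does not repeat – no change to the way the work works required.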

      In the face of all this, how not to be a charlatan? You mention the Placebo Effect several times. I infer that this is rhetoric intended to make the reader think “gosh, I don’t know for sure how what I recommend works. Maybe it doesn’t, or like a sugar pill, can’t possibly. Maybe…maybe that means I’m a charlatan?!”. That might work on a certain kind of straight-line analytical thinker, the kind with which our industry indeed abounds. But wait, said straight-line thinker would be making a naive mistake: some placebos do work. That is, some placebos are effective treatments. Maybe even the most effective treatment known. The astonishing thing is that, in the situations where they work, placebos appear to work even if the patient knows that they are placebos. Of course, the homeopath who prescribes sugar pills instead of chloroquine to someone who is headed to a malaria zone is a charlatan and a dangerous quack and should be stopped. But what about a doctor who prescribes a cheery yellow pill to a patient with (let’s say, mild) depression and says “these pills have no active ingredient” (which is true), “but taking them has sometimes been seen to help in cases like yours” (which is also true)?

      Someone once said something like this: medicine is for people who are ill and want to get better, alternative medicine is for people who are not ill but don’t feel as well as they believe they are entitled to feel. Which is engagingly cynical. But alternative medicine has an important lesson for actual medicine, which is that spending time with patients, possibly quite a long time, that being sympathetic, being understanding, listening to them, all these things can have a huge positive effect on outcomes. Can we explain how that works, in mechanistic, reductive, deterministic terms? Not yet. Although I expect we will in future (perhaps going easy on the “deterministic”). I find that in the lengthier consulting engagement, after a few technical fixes for obviously broken stuff are in place, the most valuable thing that I can do is turn up every so often and let people…use me as a sounding board, to put it in terms that won’t alarm an HR department. And that does have a positive outcome in many, many cases.

      Another idea that I try to be mindful of is that as a consultant I only get to see software development teams that believe they have a problematic situation, want to improve that situation, and believe that I can help them. And that I should remember that the world is full of teams that are doing just fine (so far as they believe, which is what matters) that I will never hear from. Who knows what they do and how they work? Certainly not me. They may very well do things that I don’t like, or even disapprove of, and that’s very much my problem, not theirs.

      And another, which I believe I first heard from Alistair Cockburn, is that most published methodologies are a description, written by a team who failed, of what they believe they should do differently next time in order to succeed.

      So where does that leave me?

      Well, you may have heard Scrum described as the “secret sauce for hyper-productive teams” or some such. Jeff Sutherland was punting this idea for a while. I think that this is a dangerous idea, and in the hands of too many Certified this-that-and-the-others it is tosh and nonsense and, yes, quackery. Scrum (and things like it, that “Agile” stuff), I have come to believe, does not deliver hyper-productive teams. It delivers teams about as productive as one should reasonably expect a team to be. Now, those teams that believe they have a problematic situation etc. are usually not nearly as productive as that, and what the agile bag of tricks mainly has to offer is a way to release many of the usual kinds of brakes and relieve the restrictions and remove the impediments to being productive at all that many organisations put in the way of development teams. Once that’s done, once the development part of the organisation has become competent, then the interesting part can begin.

      I’ve come to consider that the agile bag of tricks has been repeatably effective (so long as people actually do what it says) over the last 10 or 20 years because it is the right treatment for the (largely self-inflicted) diseases of IT work that prevailed 20 or so years ago. I expect that it will stop being effective when those diseases fade away, as diseases do. New diseases will arise and new treatments will come, and that will be interesting.

      But for now, when I see a team that (for example) can’t ship a feature more than once a year because they spend nine months hacking bugfixes onto three months’ work—well, I know an approach that’s often been effective in breaking that pattern for the better. When I see a team that works harder and harder but builds fewer and fewer features over time as their code gets into worse and worse shape—well, I know an approach that’s often been effective in breaking that pattern for the better. When I see a team that thrashes from one half-finished feature to another and back again because their product management can’t keep their story straight from week to week—well, I know an approach that’s often been effective in breaking that pattern for the better. And so on.

      And yes, some aspects of some of those approaches probably have no active ingredient.

      Can I prove that my proposed interventions will definitely work because I know the deterministic pattern of effects that they will cause? No. Can I demonstrate, as in a drug trial, that my proposed interventions will definitely work better than a placebo intervention (whatever that might be)? No.

      But I know an approach that’s often been effective in breaking that pattern for the better. Won’t that do?

      • Keith,

        Many thanks for writing at such length. Seems like my original post was less than clear, because I agree with 98% of what you write here. My references to the Placebo effect were intended to underscore that interventions with “no active ingredients” can indeed work (bring relief) – sometimes as well as, or better than, “the real medicine”. It’s precisely because of this phenomenon that we cannot be sure that it is the “active ingredients” of Agile (well-practiced, the real medicine) that are “working” (bringing relief). Or even that Agile is not the Placebo, and other factors (imo, social factors) are the “active ingredients”.

        My one cavil is with your assertion “…teams that are doing just fine (so far as they believe, which is what matters)”. Many (most) teams are ignorant of the positive deviance (deviants?) in the industry, and therefore, I posit, have no right (morally, ethically, cf. William Kingdon Clifford) to believe that they are “just fine”. Put another way, and by analogy, if the people in some country believe they are doing “just fine” with a rate of 100 (infant deaths per 1000 births), should we not voice concerns and seek to show that some areas within their own country fare much better (cf Vietnam, see: http://www.positivedeviance.org/about_pd/Monique%20VIET%20NAM%20CHAPTER%20Oct%2017.pdf)?

        – Bob

  4. Great article. Nice summary of fallacies.
    I wholeheartedly agree.
    We don’t know s* about many things – whether they themselves are responsible for a positive change, or whether the system is suffering (?) from some of the listed “effects”.

    That, for me, raises two questions:

    -Do we need to know?
    We’re talking about the world of humans and social relationships. So it’s not the mechanical world. Can we expect to know much for sure? Are there levers we can pull so that humans start working together like a charm?
    Or should we rather adopt a more philosophical stance along the lines of “the path is the goal”? Instead of expecting to find some final solution, I guess we should switch to constant trial mode. Working in circles, so to speak: try, evaluate, adapt, try again… And don’t be fooled by the hype. It’s not about Scrum or Kanban or whatnot. It’s about fulfilling a purpose.

    -Why do we do what we do? What’s the purpose?
    My guess is, the more we focus on purpose (no, making tons of money is never a purpose 😉 ), the less particular methods are important. The better we know what we really, really want (our needs, purpose, goal – pick the term you like most), the more we understand that whatever we try is just a tool of more or less use. We then understand that clinging to some tool never solves any problem, but rather stands in the way of fulfilling our purpose. That way we become free to change more easily.

    Bottom line: Yes, I guess many positive effects have to be attributed to some fallacy. And many negative effects are a testimony to that.

    But since, with regard to humans, it’s so hard to know whether something is due to the real quality of some method or due to a fallacy… we should constantly be ready to change our course. We need to become researchers ourselves. Researchers who exchange their findings, who experiment with their own and others’ hypotheses, but who also know there will hardly be final answers in terms of methods. It’s much more about a few basic principles – and constant observation and adaptation.

  5. The little voice inside your head said:

    John 18:38: ‘What Is Truth’ – In Pilate’s day it was a kumbaya of like-minded individuals building a new approach to life free of the formal customs common in their day. For agile developers it’s a kumbaya of like-minded individuals building a new approach to software development free of the formal customs common in the industry. What is common in each definition of truth (albeit both a bit contrived)… the kumbaya… bring people together around a common cause and get better results.

    The scrummaster who acts as the bard… telling and retelling the tale… what the goal is… why we’re doing what we’re doing… how each piece fits into the whole (yes, his script is the often maligned detailed design doc) gets better results because he’s not just removing impediments to the tasks each developer faces, but removing the emotional impediments and doubts that naturally enter into developers’ minds.

    For some people (developers in the BDUF era, Hebrews under the Mosaic Law), they drew comfort from knowing that they could compare their work to the ‘master document’ and get satisfaction from knowing they were following ‘the truth’. For other people (agilists, Christians), they draw comfort from knowing that their work has the smile of approval of the one who has the master document in his head.

    I’ve been managing software teams for over a decade in ‘agile’ and ‘waterfall’ environments and I can tell you that there are developers in each that cry out for the other. (Yes, just ask some of your peers if they would have preferred knowing all the details up front instead of ‘iterating’ toward it… it may not be 50%… but there’s always a few). Depending on the behavioral patterns of comfort for each individual, they will swing one way or another, thus the ‘my process is right, yours is wrong’, ‘your religion is wrong, my way is right’. The successful process is the process that fits the behavioral pattern of those who believe in it.

  6. If Matthew Stewart’s examination of the source material is right then the Hawthorne effect wasn’t even down to the management attention. It was down to having a nice little room to work in with colleagues hand-picked for agreeableness, having control over working conditions in the little room, and being paid more to take part in the experiment.
