No Testing

Testing. Checking. Inspection. Exploration. Learning. Everybody has a different understanding of what testing is. And is not. (Hint: AFAIC, it’s NOT “QA”. And it’s NOT “TDD”).

I’m not going to upset people by offering my own definition. I make no claims to be an expert on testing.

When I’m a customer, I know I don’t want to pay extra just for a product that works as advertised. By extension, I’d not want to pay for testing. I want a product that “just works”. And if asked to pay more, I’d have to enquire skeptically “why can’t you people build it right in the first place?”.

Some years ago now, David Anderson wrote a blog post asserting that “All testing is waste”. I concur. But is it necessary or unnecessary waste (Type I or Type II muda)? And does that categorisation depend on the capabilities of the team(s) – the developers – building the software? If the developers can’t deliver software with the intended levels of defects (which could be non-zero, btw) then maybe testing is a necessary waste, to compensate for that inability. And maybe it’s cheaper and more humane to employ less capable developers, bolstered by testers, than to have capable developers who can meet intended defect levels reliably.

So, do we have to test, despite the customer being unkeen to pay for it? Despite it adding little or no value from the customer’s point of view? Or can we find other, more economic and humane ways to meet the needs testing currently addresses?

Needs

“Testing” is one strategy for getting folks’ needs met. Some of their needs, at least. We might imagine there could be other strategies for getting those same needs met.

What needs does testing address? And who has these needs?

  • Testers need to continue earning a living in their chosen profession, to feel belonging in a community, to earn the respect of their peers for a job well done, to continue their self-development and learning, to add value and make a difference.
  • Customers need stuff that works (that meets their needs), for a price they’re willing to pay.
  • Companies making stuff need to safeguard their reputations and revenues.
  • Managers generally need to appear capable of delivering new products which meet the company’s and customers’ needs, whilst also controlling margins (costs vs returns).
  • And of course every individual may have their own particular personal needs, too.

Strategies

My question is: “Is testing the best strategy for meeting all the above needs?”. It may be the best known. The most widespread. The default. But is it the most economic? The most humane? Indeed, what are the dimensions of “best” here? Or even of “reasonably effective”?

“No Testing” attempts to flag up these questions. No soapbox. Just open enquiry.

– Bob
27 comments
  1. Hi Bob!

    Thanks for sharing your thoughts.

    Would you be willing to agree that building software is a process that involves continuous exploration, learning and adaptation? Do you agree that between the moment that somebody has an idea for a piece of software and the moment that the first user interacts with that software, lots of experimentation has taken place?

    At the beginning of a new project you know the least about what is really needed to make the planned product work. When the project is finished you know more. And you know more because you have tinkered, tweaked, failed and learned along the way. In other words, by testing different ideas and ways to make the software work, you got the job done.

    Having a group of testers in the team when creating software may not always be necessary. But creating software without testing is probably not possible.

    • Hi David,

      yes I would agree that “building software is a process that involves continuous exploration, learning and adaptation”. Yes I would agree that “between the moment that somebody has an idea for a piece of software and the moment that the first user interacts with that software, lots of experimentation has taken place”.

      I would not describe experimentation as “testing”. Except in the very broadest sense – not what the industry widely regards as “testing”, nor any of its near-synonyms.

      And I can personally vouch that creating software (“that just works”) without testing is indeed possible.

      – Bob

      • So the software was never run by anyone before it was shipped? It was never compiled? The code was never examined? How was that possible?

      • “Non-sequitur.”

        Why? Looking at the result of code compilation is testing. Examining code is testing. Running the program and considering the behaviour is testing.

        If I compile code and get a compilation error I have just performed some simple testing. I’ve learned that the existing code does not compile, as I have observed the errors and inferred that there is a problem. I might then change the code and try to compile again (exploration and experimentation to learn about the product).

        So if these things are testing, and they were done, then the product was tested. Unless your definition of testing is different, in which case that is important information for the discussion.

      • Rachelle Below said:

        Hi Bob,

        You claim:
        “And I can personally vouch that creating software (“that just works”) without testing is indeed possible.”

        Can you personally vouch for this because you have created such software? If that’s your claim, I’d like to get my hands on said software to personally teach you why you’re wrong about your opinions of testers. I bet it’s not flawless as you claim.

        -Rachelle

      • Hi Rachelle,

        Please allow me to wish you well in your Sisyphean pursuit of perfection.

        – Bob

  2. I’m unsure at this time if No Testing is a viable option. But I’m certain that there are situations where we can do less testing.

    To reduce testing it helps if you know the quality of a product under development – that is, before it is delivered. Based on that knowledge you can manage the risk associated with less testing. My blog post on steering quality in agile teams (http://www.benlinders.com/2012/steering-product-quality-in-agile-teams/) provides suggestions on how to do this.

    There are many ways to prevent defects (you mentioned several in your blog post already) to deliver better quality products. Trusting and supporting people, giving them what they need to do a good job, allowing them to learn from mistakes and from things that went well, and appreciating and rewarding their contributions will certainly help!

    • Hi Ben,

      Viable? In what sense? Economically viable? Other?

      Part of my argument – born of observation – is that one can know the quality of a product under development without having to source that information through testing.

      – Bob

        Economically, or for a company to survive. Many companies do testing to improve the quality of products before release; no testing would result in losing customers and going out of business.

        I fully support your observation that there are other ways than testing to know the quality. And alternative and often better ways than testing to ensure that quality is delivered 🙂

      • I would say rather that inappropriate quality would result in losing customers (or lower margins – works both ways) and possibly going out of business.

        – Bob

        Think we are on the same page, Bob. If testing is the main strategy to ensure quality in a company, no testing would lead to insufficient quality, which would lead to losing customers and going out of business.

        The solution is not to do (more) testing, but to use effective and efficient ways of working that result in quality products being delivered.

      • Same page, looks like. 🙂

        – Bob

  3. It may simply be a matter of when/where testing is performed. In agile development for instance, testing is just one of the tasks carried out during development. So there is no separate part of the invoice that says ‘testing’ (or QA or whatever). It is just a common sense approach to development, making sure the product is produced and delivered with constant quality. This way the learning and making of mistakes is pulled into the development process, and most ‘bugs’ or ‘defects’ are fixed before releasing a product.

    Also, in many cases there are so many external influences on the workings of a product, that it is only possible to deliver a faultless product in an ideal world of perfect documentation and no bugs in dependencies etc.

    To conclude, I have experienced that pulling ‘testing’ into the development process makes all the people involved be(come) better at what they do.

    • Hi Sjoerd,

      Just because it’s not on the invoice doesn’t mean the customer is not paying for it. Or maybe it’s the company writing the software, or selling it, that pays for it (through e.g. reduced margins).

      Would you be willing to define what you mean by “constant quality”?

      And I don’t see what external dependencies have to do with the subject, per se, either. Would you be willing to clarify?

      – Bob

      • Hi Bob,

        Using the term “constant quality” I meant to refer to the principle in agile software development, where only the functionality is flexible, but quality, time & money invested are constant. Having people on board who specialize in keeping that quality constant (whether you call that testing or something else) is helpful in that case. ‘Testing’ is just a way to cope with difficult circumstances.

        External dependencies can also be considered to mean a constantly changing landscape of supported technology. For instance browsers, OSes and devices, as well as standards and best practices when developing for the web. It can be more efficient to offload the specifics to one person who pays extra attention to differences and technology-specific defects, instead of asking each developer to ‘just do it right’ by having them do these checks constantly themselves. Especially when it does just work correctly in most circumstances.

        I hope to have clarified a bit. But it is a tricky subject, and it is very good to have to think about why we do what we do, and how we do it. 🙂

        -Sjoerd

      • From my experience, constant quality is a chimera. Most successful software starts out with some baseline quality or qualities, and raises the bar in some of those qualitative dimensions over time, in response to feedback, etc.

        I don’t believe the case for role specialisation is made; maybe it applied in making matches, but certainly not in knowledge work.

        And I have nowhere in this post suggested that developers “do the testing (checks) themselves”.

        Thanks for your comments.

        – Bob

  4. A lot of this can be broken down for questioning.

    Q. The customer wants a product that “just works”. What does that mean?

    “Just works” for whom? In what way? Explicit ways? Implicit ways? Badly communicated ways? Non-communicated ways? Companies have tried making the requirements for software explicit to deliver what is required, and have failed for many well-known [citation needed] reasons:

    1. Customers don’t know entirely what they want
    2. Customers don’t know entirely what they could have
    3. Customers don’t make explicit what they feel is implicit (communication failure)
    4. Customers are ignorant of contextual differences, such as unexpected environmental/platform differences.

    Q. Regarding “why can’t you people build it right in the first place?”, what does “right” mean? And according to whom?

    The customer could ask “Why can’t you people predict exactly what I want, overcome all restrictions of technology and the skills and knowledge of your team to provide me something that fulfills all of my requirements, written and unwritten, tacit and explicit, most of which exist only in my mental model of the world?”. Then each user could ask the question in turn. How do we answer this question?

    Well, we can try to get it right first time. Many companies are trying to save money doing just that (BDD comes to mind). But how do we balance “right” in this sense? “Right” obviously doesn’t mean that there are no code errors (although it partly means that there are no code errors that impact a test client), so how do we build it “right” without the possibility of fully understanding what “right” is? How do we do it when lots of people with lots of ideas of what “right” is, in terms of development approach, test approach, how much effort they want to put in, and so on are all working on the solution at the same time?

    Q. What is the “intended levels of defects”, and how would we know if we met it?

    Firstly we have to look at what a “defect” is, and how we measure it. I won’t insult you by talking about trying to count them. But how do we measure the impact of a defect? How do we consolidate a subjective relationship into a quantifiable, or at least measurable way? And even if we remove all problems that we consider defects, how do we know that it maps onto the customer’s interpretation of a defect? How COULD we?

    Quality is a relationship between the product and a human. We need to learn things about the product that are more than the sum of its design and implementation – we need to learn the ways it solves problems for the customer (and the user, and our company), and to do that we need to understand the needs of the customer. The problem is that the only people that know what customers want are customers… and even then I’d question that they do. Sometimes they ask for the wrong solution to their problem – if we provide that solution cheaply and of “high quality” the customer will still complain that you have not provided a “solution”.

    Q. So, do we have to test, despite the customer being unkeen to pay for it?

    No, we don’t have to. We don’t have to do anything. I’m sure the customer is unkeen to pay for a lot of things that we consider necessary, though. If only companies and customers weren’t made of people.

    Q. Despite it adding little or no value from the customer’s point of view?

    Why is the customer’s point of view on the importance of testing considered valuable? We could extend that logic to any seemingly inexplicable factor of software development (from the customer’s point of view). Do we need managers? Why can’t the teams manage themselves? Do we need the expense of equipping meeting rooms? Can’t they just meet in the corridor?

    Your question is: “Is testing the best strategy for meeting all the above needs?”
    My question would be: “Are those needs correct, sufficient and possible to meet at all? And if not, what should they be and how do we get close?”

    What sort of needs should we look at? Perhaps Maslow’s hierarchy? And obviously those needs could be at odds with each other, or mutually exclusive.

    … But with all that said if we call a concerted and deliberate effort to find something out about the product “testing” (noting that I’m not equating them), then there will always be testing. In some form, done by some person, but it will always exist. We’re only arguing over the form that it takes… and in what context that form has value.

    • Hi Chris,

      Thanks for joining the conversation. Yes, a short blog post can raise some vexing questions, can’t it?

      In regard to your first few questions, maybe you might like to take a look at Tom Gilb’s work (i.e. quantification of qualitative requirements)?

      And for the latter questions, Marshall Rosenberg (Nonviolent Communication)?

      – Bob

  5. Hi Bob,

    It’s a beautiful question you asked. It sparked a thinking process in my head. Thank you for it!

    The thing I’m struggling with most is the absence of a definition of testing on your side. Formulating the question in such a context, I hear it this way: “Use your own understanding of the testing process”. And the fun begins, as everyone brings their own definition of testing.

    Another difficult thing for me is to draw a line where development ends and testing begins. Taking into consideration that testing might be waste, where should we start cleaning? As with many complex situations there’s no sharp line, only a blurred area. The width of this area differs from team to team, product to product. Should we remove this blurred area? Or should we widen it further?

  6. Bob

    I am not offended by the idea of no testing. I think that what I do provides value and when it doesn’t, I am not owed a thing.

    The market corrects for some of the needs you talk about, including the need of testers to make money. I provide a service for money. I fill a need while my employer fills my need. When one of us stops needing, our relationship changes.

    Suppose that customers don’t care one way or the other about testing; like I don’t care about advertising. I can buy from the company that doesn’t advertise in order to save money. Customers can buy from a #NoTesting vendor to save money. If the product is inferior, they can change their mind. If it is not, the product can be successful. These points should be central to the entire premise of a #NoTesting discussion.

    Now that I’ve given my 2 cents about needs, I’d like to say that you cannot have #NoTesting without testing. I’ve seen some bad stuff thrown over the wall and some good stuff thrown over the wall. The good stuff was tested before I got it. It was tested during discussion of the feature acceptance criteria. It was tested in discussion of the design. It was tested by the unit tests written by the developer. You already knew this. So are you asking about #NoTesters?

    If you are asking about #NoTesters then I have news. The good stuff thrown over the wall had problems too.

  7. Paul Beckford said:

    An interesting discussion. I’m guessing that Bob is questioning the distinct role of Tester more so than the idea of testing (but I could be wrong).

    For example when a Chef is working in his Kitchen and he decides to taste the food to see if it is properly seasoned, isn’t that testing?

    The distinction here, between what I would describe as “the timeless way of making things” and what we tend to call “Testing” in the software industry, is that testing as you go has always been an integral part of the creative process. It is part of building and not something separate.

    I’m all for a more holistic approach to software development. Every time we create a separate role (Architect, BA, Developer, Tester, etc), we create a fission within the creative process and we make it that much harder to hold on to the holistic, integral perspective needed to create stuff that is good/great.

    That’s not to say that certain people don’t have a certain knack for some activities more than others, but I have found it best where roles significantly overlap. So Developers who see testing as part of development and testers who see Development as part of testing (yes developers who can code)….

    In the limit, as these boundaries are blurred and the roles coalesce, you end up with a team of software producers – no distinctions, no separation. If this is what Bob means, then yes I’ve seen this work remarkably well too.

    Paul.

    • Hi Paul,

      Thanks for joining the conversation.

      I’ve just posted a new post “More No Testing” in an attempt to answer some of the questions you – and others – have posed here.

      – Bob

  8. Paul Beckford said:

    Sorry. I said “developers who can code”, I meant to say “testers that can code”.

    P.

  9. Mike Jones said:

    Reminds me of the ‘Test is Dead’ keynote at the GTAC 2011 conference. All test leads, please look it up, along with other presentations by James Whittaker, if you have not already seen them. Very informative, and most companies are still playing catch-up.

    Traditional (manual) software testing is becoming a clear bottleneck in agile methodologies trying to deliver ‘working software’ without a 3-week system or regression test cycle between each development iteration. Testing is still being done; however, the people doing the testing are changing – with greater focus on automated unit testing, and client beta or crowdsource testing of the UI.

    • “Automated testing” and “manual testing” don’t really make any sense. It’s just testers using tools. So if “test is dead” implies “testing should be done by developers using automatic checks”, then “Test is Dead” really means “Testing is done in a very limited way by someone else”. If a company wants to do that, and push the open risk gap onto their customers, then so be it. Some are big enough to absorb those problems.
