Perspectives on Rightshifting

Index

As this is a long post, here’s an index to each slide / dimension in the post:

Background
Introduction
Dimension: The Software Development Life Cycle
Dimension: Flow Mode
Dimension: Feedback Delay
Dimension: Administrative Project Management
Dimension: Perspective on the Individual
Dimension: Measurement
Dimension: Inductive vs Deductive
Dimension: Toolheads
Dimension: Quality and Testing
Dimension: Development Focus
Dimension: Risk Awareness
Dimension: Systematic Learning
Dimension: Design Loopbacks
Dimension: Conformance to Schedules
Dimension: Use of Third Parties
Dimension: Deployment Problems
Dimension: Variability in Project Success
Dimension: Metaphor in Use

Background

Way back in 2008, the first public outing for my ideas about Rightshifting was a forty-five minute presentation at Agile North 2008. The slides for this presentation have been online at AuthorSTREAM ever since (including, incidentally, a Part 2, that was not presented, featuring an introduction to FlowChain).

The presentation was very well received, but one thing that has rankled me since then has been the absence of any narrative to accompany the slides. I can appreciate that this absence limits the usefulness of the slide pack. As a remedy, I have reproduced the slides here, accompanied by a brief commentary, or explanation, for each slide.

[Note: the slides in this first draft, acting as placeholders, are taken from the original presentation. I may update them later, to the more recent 3D-effect format, if there’s any demand for that.]

Introduction

The presentation as a whole attempts to address the question: “Given that there is such a wide range of effectiveness between different knowledge-work organisations out there in the world, how does life – and work – in these organisations differ? What makes Rightshifted organisations so different from (and thus more effective than) their less effective cousins?”

What is a Dimension?

What is a “dimension”, in this context? It’s a slice through – or aspect of – how things look or work in knowledge-work organisations everywhere. We might imagine mapping each organisation in the real world to a (fuzzy) n-dimensional point in an n-dimensional hypercube. This mapping reveals certain clusters, or commonalities, between organisations.
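To make the mapping idea concrete, here is a minimal sketch in Python. The organisation names, dimension order and scores are invented purely for illustration; the point is only that once each organisation is a point in n-dimensional space, simple distance measures can reveal the clusters mentioned above.

```python
# Illustrative sketch only: organisations as (fuzzy) points in an
# n-dimensional effectiveness space. All names and scores are invented.
# Dimension order: SDLC, flow mode, feedback delay, risk awareness.
import math

orgs = {
    "OrgA": [0.8, 0.5, 0.7, 0.4],  # largely ad-hoc
    "OrgB": [1.6, 1.4, 1.5, 1.3],  # batch-and-queue territory
    "OrgC": [3.4, 3.6, 3.5, 3.8],  # highly effective
}

def distance(p, q):
    """Euclidean distance between two organisations' dimension scores."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Pairwise distances hint at the clusters the text describes: OrgA and
# OrgB sit relatively close together, while OrgC is far from both.
for a, b in [("OrgA", "OrgB"), ("OrgA", "OrgC"), ("OrgB", "OrgC")]:
    print(a, b, round(distance(orgs[a], orgs[b]), 2))
```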

The slides, one by one, each illustrate a different dimension of life and attitudes to work in organisations, accompanied by a commentary.

Note: These charts and their accompanying narratives illustrate tendencies, not so much hard and fast delineations.

Dimension: The Software Development Life Cycle

This chart illustrates the kind of software lifecycle prevalent in organisations at various stages of effectiveness:

“Code and fix” refers to the disorganised, seat-of-the-pants approach to developing software systems and products. Some folks refer to this as “cowboy coding”.

“Waterfall” (more accurately described as “batch and queue”) refers to those particular approaches to software development where each stage of transformation (e.g. Analysis, Coding, Testing, etc.) is completed as a single large batch of work, before passing on to the next stage.

“Agile” refers to the various approaches to software development where work is conducted incrementally and iteratively, with early and regular delivery (into production) of increments in e.g. functionality.

“Beyond” alludes to other approaches to software development “beyond agile”.

Note: This slide preceded the Marshall Model by some two years. Even so, one can see the boundaries of the four mindsets emerging.

Dimension: Flow Mode

This chart illustrates the prevailing (collective) mental model (and thus operational practices) with respect to how value flows through the organisation (e.g. from order to delivery, or “from concept to cash”):

“Random” refers to the absence of any understanding about “flow” (of value), and thus the absence of any specific practices to enable flow, meaning that the flow of value within the organisation happens at random. For example, the actual schedule of product releases – or due date performance – will be highly variable and unpredictable, to the point of being essentially random.

“Batch and Queue”: Value flows through these organisations in (often large) batches, with each batch – for example, an entire software product – queueing at various points during its passage through the organisation. These organisations generally have little conscious understanding of the idea of flow, of queueing theory, or of the other issues that contribute to smooth, predictable flow. Consequently, the actual schedule of product releases – or due date performance – will show marked variation and a significant lack of predictability.

“Sprints, etc.”: These organisations have a conscious understanding of the advantages of flow, and structure their operations around improving the flow of value through the organisation. However, these organisations have not yet transcended siloisation to the point where they can optimise flow across the whole organisation as a joined-up system. Thus, we may see the use of agile practices, such as iterative – or even continuous – delivery of user stories, features or use cases. As a consequence, the actual schedule of product releases – or due date performance – will show limited variation and reasonable predictability.

“Systems Thinking” refers to the mindset that embraces the whole organisation as a system, and optimises flow through this system as a whole. In concert with techniques like Statistical Process Control (SPC), this means that the actual schedule of product releases – or due date performance – will show generally predictable and minor variation.

Note: The boundary between the “Sprints, etc.” and “Systems Thinking” segments may lie somewhat further to the left than the 3.2-3.5 position at which it appears on the above chart.
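To illustrate the SPC idea mentioned above, here is a minimal sketch that computes the conventional ±3-sigma control limits for release cycle times. The cycle-time data is invented purely for illustration.

```python
# Minimal SPC-style sketch: control limits for release cycle times.
# The data below is invented purely for illustration.
import statistics

cycle_times_days = [12, 14, 11, 13, 15, 12, 13, 14, 12, 13]

mean = statistics.mean(cycle_times_days)
sigma = statistics.stdev(cycle_times_days)
ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

print(f"mean={mean:.1f} days, LCL={lcl:.1f}, UCL={ucl:.1f}")

# Releases whose cycle time falls outside [LCL, UCL] suggest
# "special cause" variation worth investigating.
print("special-cause candidates:",
      [t for t in cycle_times_days if not lcl <= t <= ucl])
```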

Dimension: Feedback Delay

This chart illustrates organisational thinking on how long the feedback loops in the organisation should be:

“Random” refers to the absence of any conscious attention to feedback and the length of feedback loops. Thus any feedback received from e.g. retail channels or customers about new products, product features, and the like will be acted on (or ignored) essentially at random, with the timescales (delays) for such action also, essentially, random.

“3-6 Months”: These organisations regard a three to six month time frame for acting on e.g. customer feedback as quite normal and acceptable. Product release cycles are typically geared around this timeframe. The concept of “cost of delay” is rarely known here.

“2-4 Weeks”: Here, the concept of “cost of delay” is understood, and these organisations work towards quantifying and tracking these costs, basing their product investment and prioritisation decisions, at least in part, on these factors (see the sketch following these descriptions). This typically sees dramatic reductions in cycle times, bringing the time it takes to incorporate feedback from the market down to less than a month.

“Daily”: Highly-effective organisations tend to have a very clear understanding of their own cost of delay, and of the impact of feedback, and feedback delays, on their effectiveness. Not least because these kinds of organisation tend to be in the web space (cf. Forward, Facebook, Salesforce.com, etc.), where cost of delay can be high, these organisations focus on cycle times and feedback delays of a day or less.
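Here is the sketch referred to above: a minimal, illustrative way of using cost of delay for prioritisation, via CD3 (Cost of Delay Divided by Duration). All feature names and figures are invented.

```python
# Minimal cost-of-delay sketch using CD3 (Cost of Delay Divided by
# Duration). All names and figures below are invented for illustration.

features = [
    # (name, cost of delay per week, estimated duration in weeks)
    ("checkout-redesign", 20_000, 4),
    ("search-tuning", 6_000, 1),
    ("loyalty-scheme", 12_000, 6),
]

# A higher CD3 score means more value lost per week of delay relative to
# the effort required, so schedule the highest-scoring work first.
for name, cod, weeks in sorted(features, key=lambda f: f[1] / f[2],
                               reverse=True):
    print(f"{name}: CD3 = {cod / weeks:,.0f}")
```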

Dimension: Administrative Project Management

This chart illustrates organisational thinking on how work (in particular, product development work) should be structured and managed:

“APM” shows the prevalence of Administrative Project Management as correlated with organisational effectiveness. Least-effective organisations have little or no project management, nor indeed even projects, as such. Moving to the right, some slightly more effective organisations adopt the idea of conducting work within structures or containers called “projects”. Pretty soon after this comes the full panoply of Administrative Project Management, as typified by e.g. PRINCE2, PMBoK, etc. As organisations’ effectiveness continues to improve, these (fewer) organisations come to understand the limitations of both the project concept itself, and the dysfunctions inherent in Administrative Project Management. The role of APM thus tails off.

“Fun” shows that although relatively ineffective, organisations to the left (with little APM) are fairly fun places to work. People have a degree of autonomy, rules are absent or at least lax, and work is not so regimented or controlled. As APM increases, fun goes into a tailspin, reaching its nadir as APM reaches its zenith. This is no mere coincidence. As APM goes into decline, further to the right, fun rises again, and indeed reaches new heights, driven on by the satisfaction inherent in doing good work, delivering real value, and generally making a real difference. Highly-effective organisations tend to provide high levels of job satisfaction (a.k.a. fun).

“Wasted Potential” illustrates the correlation between APM and the waste of people’s innate potential (e.g. for doing good work). The key mechanism here is engagement. As fun drops (in line with rising APM), engagement with the work also drops away, and people have less incentive, motivation and thus inclination to do good work. See Dan Pink’s book “Drive” (and associated videos, etc.) for an in-depth explanation of the role of Autonomy, Mastery and Purpose in the intrinsic motivation of knowledge workers.

Dimension: Perspective on the Individual

This chart illustrates how organisations at different levels of effectiveness have different attitudes towards individuals (e.g. workers):

“Respect” maps the degree of importance which organisations attach to the idea of respect for the individual. The chart illustrates how the least-effective organisations, on the left-hand side, have some level of respect for their staff. This may be patchy, but overall, it’s about what you’d expect to find in wider society. As we consider slightly more effective organisations (progressing to the right), we see respect for the individual decreasing as effectiveness increases. Respect reaches a nadir around 1.5 on the chart (see also the preceding chart on Administrative Project Management) – here organisations tend to treat people as fungible, interchangeable “cogs” in the “machine” of the organisation. As this machine view of organisations begins to wane (further to the right again), the respect accorded to folks in the organisation rapidly rises, easily exceeding the levels seen in wider society.

“Heroism” portrays the way in which highly-ineffective organisations attribute success and e.g. productivity to the heroic acts of individual “rock stars”. As organisations progressively become more effective (rightshift), they likewise progressively tend to realise the role played by the system (the way the work works) relative to the contribution of “heroic” individuals. This realisation has knock-on effects on hiring, remuneration and a host of other organisational policies.

Dimension: Measurement

This chart illustrates the preferred role of measurement a.k.a. metrics in organisations at different levels of effectiveness:

“Metrics Effort” illustrates how highly ineffective organisations place very little emphasis on measuring things, and thus on the place of evidence, facts and data, more generally, in the operation of the organisation. When organisations (eventually) do begin to value measurement, they tend to go overboard on the idea, spending much effort on collecting all kinds of measures, many of which have little relevance or utility. As effectiveness continues to increase, organisations’ focus tends to resolve onto the measures with most relevance to the effectiveness of the organisation, whittling away the less useful measures. Also, these more effective organisations tend to embed measurement – and the use of measures – into daily operations (business as usual), rather than have special (out-of-band) measurement efforts. Cf. Basili et al.’s GQM.
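For readers unfamiliar with Basili et al.’s GQM (Goal-Question-Metric) paradigm: it works top-down, refining each measurement goal into questions, and each question into the metrics that answer it. A minimal sketch follows, with a goal, questions and metrics invented purely for illustration:

```python
# Minimal Goal-Question-Metric (GQM) sketch. The goal, questions and
# metrics are invented for illustration; the top-down structure
# (goal -> questions -> metrics) is the point.

gqm = {
    "goal": "Improve due date performance of product releases",
    "questions": {
        "How predictable are our release dates?": [
            "% of releases delivered on the originally scheduled date",
            "variance of actual vs planned release dates",
        ],
        "Where does schedule slippage originate?": [
            "design loopbacks per release",
            "queue time per development stage",
        ],
    },
}

for question, metrics in gqm["questions"].items():
    print(question)
    for metric in metrics:
        print("  -", metric)
```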

Dimension: Inductive vs Deductive

This chart illustrates the balance of focus on working practices (sometimes called “best practices”) as against principles (the ideas underlying working practices) in organisations at different levels of effectiveness:

Here we see a direct correlation between effectiveness and a focus on principles over practices. That is to say, highly-effective organisations understand the principles underpinning their working practices, whereas ineffective organisations have little or no understanding of the fundamental principles involved. The latter organisations are much more likely to simply copy “best practices” from others. Often this amounts to no more than “cargo-culting”.

See also: “The Inductive Deductive Schism” for more context.

Dimension: Toolheads

This chart illustrates the general disposition towards the buying and using of tools – whether physical tooling (plant), software tools or indeed, methodologies – in organisations along the Rightshifting axis:

Highly ineffective organisations tend to see little value in buying or using tools to e.g. improve productivity or reduce variation. As effectiveness improves, organisations tend to go overboard, buying tools left, right and centre in the belief that tools improve efficiencies, and that tools compensate for a lack of specialists and their know-how. For organisations that continue to improve their effectiveness, however, comes the realisation that a blanket predilection for tools does more harm than good, and these organisations become much more selective about the tools they acquire and use, even to the point of retiring or disposing of much of their existing tooling.

See also: “Watch Out For the Toolheads” article by John Seddon

Dimension: Quality and Testing

This chart illustrates the general attitude towards quality, and incidentally, the role of testing, in organisations distributed along the Rightshifting axis:

“Quality philosophy” speaks to organisations’ general philosophy on the matter of quality. Highly ineffective organisations, if they have any overt philosophy at all regarding quality, tend to believe that quality can be tested into their products and services (despite, incidentally, more than thirty years of TQM, Crosby et al., advising to the contrary). Highly effective organisations come to the realisation that quality is – at least in part – an economic concern, and whereas sometimes it may be cost-effective to retain some testing, more often the effective path to quality lies in reducing or eliminating defects.

“Testing effort” reflects the cost of testing, as seen in organisations at different levels of effectiveness. Highly ineffective organisations have low testing costs, simply because they have little or no testing (or any other quality efforts, for that matter). Organisations of moderate effectiveness tend to spend a great deal of time, money and effort on testing things, primarily because they have little or no focus on reducing or eliminating defects, and thus have to rely on testing (a.k.a. inspections) to prevent defects reaching their customers. Highly effective organisations have discovered that by reducing or eliminating defects at source, the need for testing (a.k.a. inspections) reduces markedly.

“Defects seen by users” illustrates the combined effect of an organisation’s quality philosophy and testing effort. Customers of highly ineffective organisations tend to see many defects and quality problems, whereas customers of highly effective organisations tend to see far fewer quality issues.

Dimension: Development Focus

This chart illustrates the typical stance of developers and product development groups in organisations distributed along the Rightshifting axis:

“CV-centric” refers to the tendency of developers and other technical specialists in e.g. highly ineffective organisations to focus on selecting and using technologies and tools that will enhance their CVs and give them interesting and cool new things to “play” with.

“Code-centric” describes the tendency for technical staff in low-effectiveness organisations to believe that code and code quality are the be-all and end-all with regard to producing successful software products and services.

“Requirements-centric” relates to moderately effective organisations’ belief that ongoing commercial success stems from understanding customers’ requirements and delivering against those requirements. Note: This does not necessarily imply a big-design-up-front or batch-and-queue approach to requirements gathering. Indeed, many requirements-centric (development) organisations quickly learn that iterative approaches to exploring requirements can afford more effective means for understanding.

“Learning-centric” pertains to the focus of highly effective organisations on continual, organisation-wide learning – including learning about customers and markets and their evolving needs and perceptions of value, but more importantly, continually learning more about how best to make the whole organisation work ever more effectively.

Dimension: Risk Awareness

“Greater risk brings greater reward, especially in software development. A company that runs away from risk will soon find itself lagging behind its more adventurous competition. By ignoring the threat of negative outcomes—in the name of positive thinking or a can-do attitude—software managers drive their organisations into the ground.”

This chart illustrates the awareness of, and approach to handling, development risk in organisations across the spectrum of organisational effectiveness:

Highly-ineffective organisations not only remain unaware of risk and risk management disciplines, but often have a pathological fear of even discussing issues from a risk perspective (hence the negative portion of the line on the chart). Risk awareness rises oh-so-slowly as organisational effectiveness increases, with only the reasonably effective organisations achieving significant levels of awareness (and hence, effective ways to handle risk). The line tails off for the highly effective organisations, as these eschew some aspects of risk management in favour of effective and disciplined means of opportunity management.

See also: “Waltzing With Bears” by DeMarco and Lister.

Dimension: Systematic Learning

learning (ˈlɜːnɪŋ)
— n
1. knowledge gained by study; instruction or scholarship
2. the act of gaining knowledge
3. (psychology) any relatively permanent change in behaviour that occurs as a direct result of experience

This chart illustrates the typical attitude of organisations, distributed along the Rightshifting axis, towards systematic (i.e. deliberate, organised and organisation-wide) learning:

Highly ineffective organisations tend to be blind to the value of systematic learning. Moderately effective organisations, once awake to the possible commercial advantages of a systematic approach to learning, begin to institute means to encourage such learning. Highly effective organisations recognise the need for such learning to be integrated with Business as Usual (BAU) and to ensure that what is discovered is actually “learnt” – i.e. new knowledge actually modifies organisational behaviour.

Dimension: Design Loopbacks

“One of the fundamental problems companies have is this practice of continual loopbacks, where they think they made the right decision, but it was the wrong decision and they end up continually in firefighting mode, fixing problems on the back end.”

“If you look at the continual state of loopbacks and lost knowledge in companies, something like 70 percent of engineering talent is used to solve problems that should have been solved early on.”

 ~ Michael Kennedy

This chart illustrates the frequency and impact of “design loopbacks” in organisations at different levels of effectiveness:

See also: “Product Development for the Lean Enterprise” by Michael Kennedy

Dimension: Conformance to Schedules

This chart illustrates the ability of organisations, at different stages of effectiveness, to deliver new products into production on time (i.e. on schedule, or on the due date):

Here we see how well organisations meet their own development schedules. It’s probably no surprise that highly ineffective organisations struggle to deliver anything on time – with high variation and low predictability in their schedule conformance. But most (averagely-effective) organisations do little better. And few of these less-effective organisations realise that the best performers (the highly effective organisations) can have highly reliable and predictable schedule conformance as high as 98%.

Note: It may be apparent that achieving such high levels of schedule conformance requires fundamentally different approaches to product design and development than those more commonly employed. Such approaches can include set-based concurrent engineering (SBCE, a.k.a. set-based design), trade-off curves, and other measures seen in e.g. the Toyota Product Development System (TPDS).

See also: Lean Product and Process Development by Dr Allen C. Ward

Dimension: Use of Third Parties

This chart illustrates the strategic role of third-parties (specialist suppliers, consultants, sub-contracting companies, etc.) as seen by organisations distributed along the Rightshifting axis:

No organisation, however large or diverse, can hope to have all the specialist skills and know-how that might be needed to design and deliver new products and services into ever-changing markets. Thus working with specialist third parties is often a necessity. Highly-ineffective organisations have little or no understanding of, or capability for, finding and working with third parties. Generally, these organisations will treat each such relationship as an entirely novel and unusual situation, discovering how to make it work as they go along, and repeating the whole exercise the next time… ad infinitum. Thus, these organisations, also often victims of NIH (not invented here) syndrome, rarely use third parties.

Moderately effective organisations come to realise that working with third parties is an inevitable part of doing business, and evolve means to make this part of Business As Usual. Thus, these organisations come to use and rely on third parties in many aspects of their business.

Highly-effective organisations, not least because of their fundamentally different approaches to doing things, find it increasingly difficult to find third-parties with the necessary specialist skills and cultural (mindset) “fit”. Hence, these organisations find themselves using third parties less than they perhaps might like.

Dimension: Deployment Problems

This chart illustrates the likelihood that organisations, at different stages of effectiveness, will have significant problems with their new products (or updates) after they’ve “gone live”:

Many highly-ineffective organisations see it as inevitable that their customers, users, etc. will find problems with their new product designs when released (put into live production). Moderately-effective organisations begin to regard this as undesirable, realising the cost involved – both remediation costs and reputational costs, not least. These organisations, however, typically have an uphill struggle to reduce their level of deployment problems, basically because of their piecemeal approach to the “whole product” notion, borne of years or decades of incrementalism and local optimisations. Highly-effective organisations, often by dint of radical overhaul of their approach to “whole product” issues, have minimal deployment problems.

Dimension: Variability in Project Success

This chart illustrates variation in project (i.e. new product or service development) success, along the effectiveness spectrum. More significantly in my view, it also illustrates the different causes to which organisations at different levels of effectiveness attribute such variation:

Here we see that highly-ineffective organisations have high levels of variation (and thus low levels of predictability and certainty) in their new product development efforts. Levels of variation fall in line with increases in organisational effectiveness.

As to causes, highly-ineffective organisations tend to attribute success (and variability thereof) to the heroic (or paltry) efforts of specific individuals. Moderately effective organisations tend to let go of that simplistic notion, but often get lost in their search for the root causes of the variability in their record of success. Highly-effective organisations have discovered that, as Deming suggests, circa 95% of their success at delivering projects is down to their organisational systems – or “the way the work works”.

Dimension: Metaphor in Use

This chart illustrates the prevailing metaphor for knowledge work, as it varies in different organisations along the effectiveness axis:

Highly ineffective knowledge-work organisations have yet to realise even the nature of the work in which they are engaged, choosing, mostly by default, to regard it as just another kind of “office work”. This choice of metaphor leads to certain choices regarding e.g. the layout of the work space (cube farms, segregation of specialists, absence of team spaces, etc.).

Organisations of limited effectiveness choose to adopt the “software factory” metaphor for work, with an abundance of manufacturing/factory related metaphors for all aspects of work, such as “production line”, “batch and queue”, “conformance”, etc.

Reasonably effective organisations eschew these metaphors in favour of work as “product design” or the “design studio”, choosing to regard the workers as “creatives”, and understanding the value of flow (in the Mihály Csíkszentmihályi sense of the word), creativity and innovation.

Highly-effective organisations, whilst appreciating the “design studio” metaphor and values, choose to adopt a “value stream” or “value network” metaphor for work, and place emphasis on the flow of value.

See also: “Principles of Product Development Flow” by Don Reinertsen.

– Bob

What Makes a Mindset?

[See also more recent posts: Perspectives on Rightshifting and What is a Mindset?]

I was recently lecturing at Cass Business School, for City University’s Masters in Information Leadership course. I learned a great deal – and confirmed, in passing, Ackoff’s observation that the folks who learn most in a classroom are the teachers. :}

One of the things I learned was that it’s difficult for folks new to the Marshall Model to locate their own organisations on the “effectiveness” axis, or even to judge which mindset might be the prevailing one in their organisation.

So I’ll be writing some posts explaining the idea of Mindset, hopefully in a way that folks might find helpful in classifying their own organisations’ collective mindset.

What Do We Mean by “Mindset”?

For the purposes of the Marshall Model, at least, I define a Mindset to mean “a self-consistent set of assumptions concerning the way work should be organised, arranged, conducted and controlled”. And I should also mention the role of the collective organisational mindset – the assumption here being that everyone in a given organisation tends to act as if they share a common mindset (over the longer term). Generally, anyone (or any group) seen as a “deviant” with respect to this common mindset causes some degree of cognitive dissonance – both in the deviants and the rest of the organisation. This dissonance, over the longer term – typically nine to eighteen months – almost always resolves itself, one way or another. And often not in a good way.

Questionnaire

Drawing on this AuthorStream presentation, the following questionnaire offers a simple way of getting started with categorising your own organisation’s mindset, in terms of the Marshall Model. Simply identify which statement in each group sounds most like your organisation (as a whole), and keep a running total of the points associated with each selection. At the end, divide the total by <number_of_questions_answered * 10> to give you an approximate location on the Rightshifting horizontal (effectiveness) axis, and thus identify the likely prevailing collective organisational mindset.
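To make the arithmetic concrete, here is a minimal scoring sketch. The mindset boundaries used below are assumptions for illustration only, not definitive Marshall Model thresholds.

```python
# Minimal scoring sketch for the questionnaire below. Most questions
# award a=4, b=12, c=25, d=42 points. The mindset boundaries are
# assumptions for illustration, not definitive Marshall Model thresholds.

POINTS = {"a": 4, "b": 12, "c": 25, "d": 42}

def effectiveness(answers):
    """answers: one 'a'/'b'/'c'/'d' choice per question answered."""
    total = sum(POINTS[choice] for choice in answers)
    return total / (len(answers) * 10)  # location on the effectiveness axis

def likely_mindset(score):
    # Assumed boundaries, purely for illustration.
    if score < 1.0:
        return "Ad-hoc"
    if score < 3.0:
        return "Analytic"
    if score < 4.2:
        return "Synergistic"
    return "Chaordic"

answers = ["b", "b", "c", "b", "c"]  # e.g. five questions answered
score = effectiveness(answers)
print(f"score = {score:.2f} -> likely mindset: {likely_mindset(score)}")
# score = 1.72 -> likely mindset: Analytic
```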

[Please note: this is the first draft of this post, and not all questions are complete as yet.]

1) Waste

How much of everyone’s working week, for folks across the organisation, is eaten up doing stuff that doesn’t add real value – i.e. anything that customers would never want to pay for – or make much difference to the value perceived by your customers (things like meetings, rework, finding defects, dealing with customer complaints, etc.)?

  • a)  Around 90%. (4 points)
  • b)  Something like two-thirds to three-quarters of folks’ time seems wasted. (12 points)
  • c)  In the region of half the working week. (25 points)
  • d)  20% or less of the working week is eaten up by stuff that doesn’t make much difference. (42 points)

2) Product Development Life Cycle

How are new products (including services, and new products for internal use) developed?

  • a) Things are generally thrown together with a lot of design loopbacks (where problems in the design are found during e.g. implementation, and thus require the development folks to go back and change the design, invalidating some of their post-design work). (4 points)
  • b) Most new products, etc. are planned in some detail up front, and then built more or less according to that master plan, over a period of several months or years. (12 points)
  • c) Most new products evolve during their progress from concept to deployment, steered by a guiding vision, but with the design and implementation details emerging as the product progresses. (25 points)
  • d) New products are deployed very early, as very minimal versions, and those that find some traction in the marketplace get more funding to evolve into more sophisticated and fully-featured versions, whereas those that fail to find a market are culled quickly and ruthlessly. (42 points)

3) Flow Mode

How do product ideas “flow” through the organisation and into the market? In other words, what is the “chunk” size of work in the organisation?

  • a) Mostly at random – there is no consistent way in which new products flow through the organisation. (4 points)
  • b) In large batches, with groups of related “product features”, often for a whole product, batched together and passed from queue to queue (for example, using a phase-gate approach). Such “feature-sets” get released (as product iterations) at least 3-4 months apart, or with the first or only release of the “product” coming months or years after its conception. (12 points)
  • c) In small batches, with groups of related “product features” batched together and passed from queue to queue (for example, using an iterative, time-boxed approach). Such “mini-feature-sets” get released (as product iterations) as little as 2-4 weeks apart. (25 points)
  • d) In single features, with each individual new “product feature” being released as soon as it is ready to deploy, often every 2-3 days, or maybe more frequently than that. (42 points)

4) Feedback Delays

How long is it, typically, before the market’s (or customers’) reaction to a new product feature can be incorporated into a subsequent product release?

  • a) We have no way of knowing – but I’d guess that any market feedback we do notice rarely affects subsequent releases at all, directly. (4 points)
  • b) Something in the order of 3-18 months. (12 points)
  • c) Something around two to six weeks. (25 points)
  • d) Less than a week. (42 points)

5) Administrative Project Management

How much emphasis does the organisation place on administering projects “properly”?

  • a) We don’t have projects as such. We just work each day on stuff that looks like it might be useful. (4 points)
  • b) Our organisation is very diligent about projects. Most if not all projects within the organisation have a dedicated project manager. The organisation also has a proper project management process (PRINCE2, CMMI, etc.), Quality Manager or department, Programme Office, and so on. Most or all projects report their status via RAG reporting or some such on a regular basis. Much of what we need to do every day is diligently written in our process manuals and work standards documents. (12 points)
  • c) Work teams manage their own work, acquiring and managing team resources as necessary and themselves organising their interfaces with other parts of the organisation. (25 points)
  • d) The organisation used to have projects but has discovered the disadvantages outweigh the benefits so no longer uses the “project” as a means to organise work. (42 points)

6) Fun

How much fun and enjoyment do people have at work each day?

  • a) The organisation is a great place to work most days, apart from when things go wrong and people have to slip into ‘firefighting’ mode (which is rather too often). People (mostly) treat each other like human beings. (4 points)
  • b) The organisation treats people like drones, and there is a pervading atmosphere of gloom. People are not meant to have fun at work, are they? Fun is not professional. (12 points)
  • c) The joy of working in the organisation comes from knowing what to do, having the resources and support to do it, and knowing that together the people across the organisation are making a sustained, positive difference to the world. Everyone feels well-respected, both as human beings and for the contributions they make. (25 points)
  • d) Every day is different. People get a buzz from knowing what’s going on in the world (outside the business), especially in the world of customers and markets, and from coming up with new ways every day to meet emerging needs and trends. Everyone feels well-respected, both as human beings and for the contributions they make. (42 points)

7) Wasted Potential

People like it when they get to do more good, meaningful work. Organisations benefit from workers being more engaged with their work. How much of everyone’s innate potential gets used on a daily basis?

  • a) People spend a lot of time fighting fires and fixing up things that unexpectedly go wrong. (4 points)
  • b) Even getting the simplest things done takes much coordination, meetings, discussions, referrals “up the chain of command” for decisions, etc. Red tape is the normal state of affairs. People’s skills and special talents are not well-recognised nor often used to the advantage of themselves and the organisation. (12 points)
  • c) People can get on with what they know needs doing, coordinating with others when and wherever necessary. (25 points)
  • d) The organisation has automatic, systemic means to flag new opportunities and high-priority things that need folks’ attention. People then coalesce around these priority items and get them done straight away. (42 points)
[Note: Questions below here are not yet complete]

8) Respect for the individual

Autonomy, mastery, purpose.

  • a)
  • b)
  • c) People have the leeway to make their own decisions about what they do, how much time they spend on things, how busy they are, where they work, and so on. Everyone feels well-respected, both as human beings and for the contributions they make.
  • d) People hold each other to account.

9) Heroism

  • a) The organisation values the contribution of individuals, and encourages folks to work long hours
  • b)
  • c)
  • d)

10) Metrics

Some organisations subscribe to Lord Kelvin’s view that “If you can not measure it, you can not improve it”, others to Deming’s view that “the most important figures that one needs for management are unknown and unknowable”.

  • a)
  • b)
  • c)
  • d)

11) Principles (theory) vs Practices

  • a)
  • b)
  • c)
  • d)

12) Toolheads

  • a)
  • b)
  • c)
  • d)

13) Testing

  • a)
  • b)
  • c)
  • d)

14) Defects Seen By Customers

  • a)
  • b)
  • c)
  • d)

15) Development Focus

  • a)
  • b)
  • c)
  • d)

16) CMMI

  • a)
  • b)
  • c)
  • d)

17) Risk Awareness

  • a)
  • b)
  • c)
  • d)

18) Systematic Learning

  • a)
  • b)
  • c)
  • d)

19) Due Date Performance

How many product releases happen on the dates originally scheduled for them, i.e. at the inception of the product?

  • a) 10% or less. (4 points)
  • b) Circa 25%. (12 points)
  • c) 40% or more. (24 points)
  • d) At least 75%. (42 points)

20) Use of Third Parties

How much of the budget of each new product is allocated to using the experience of third-party specialists (not e.g. individual temporary or contract staff, but people or organisations with highly-relevant specialist skills or know-how)?

  • a) Less than 10%. (4 points)
  • b) More than 20%. (12 points)
  • c) More than 40%. (24 points)
  • d) Our products have such leading-edge technology that we can’t find enough specialist help at any price. (42 points)

21) Post-Deployment Problems

  • a)
  • b)
  • c)
  • d)

22) Variation in Product Success

  • a)
  • b)
  • c)
  • d)

23) Attribution of Causes of Variation in Product Success

  • a) Individuals
  • b) Unsure
  • c) The system

24) Organisational Metaphor In Use

  • a) Office
  • b) Factory
  • c) Design Studio / Lab
  • d) Value Streams

25) Who “Does” the Change

  • a) Anyone who finds themselves “stuck” with doing it. (4 points)
  • b) Management (and/or external consultants). (12 points)
  • c) The workers, collectively, supported by extra resources, etc., provided on demand by management. (24 points)
  • d) The system. (42 points)

Note: Further drafts will add more answers to the above questions, and maybe more questions, too. Please let me know how helpful you find this post in coming to terms with understanding “Mindset”, and how your organisation scores. Thanks!

– Bob