Making Tomorrow’s Big Balls of Mud Today

What is a Big Ball of Mud?

In software development, the term “Big Ball of Mud” refers to a system or codebase that has become so tangled, convoluted, and disorganised over time that it becomes increasingly difficult to maintain, modify, or understand. It’s a metaphor for a software product that started with good intentions but gradually deteriorated into an unstructured mess due to a lack of proper planning, design, and adherence to best practices.

Consequences

The consequences of a Big Ball of Mud can be severe. It hinders productivity, increases technical debt, screws with predictability and schedules, and makes it challenging to introduce new features or fix bugs. Developers often find themselves spending more time trying to understand the existing code than actually writing new code. This can lead to frustration, decreased morale, and a higher risk of introducing further issues.

The Rise of AI-Centric Coding

A paradigm shift is looming on the horizon – a transition towards AI writing code, and writing it primarily for artificial intelligence (AI) readability and maintainability. While human-readable code has long been the desirable approach, the remarkable advances in AI technology necessitate a reevaluation of our coding practices, and of the use of AI to write code, if we are to harness the full potential of these sophisticated tools.

As AI systems become increasingly integrated into software development workflows, the need for code that caters to AIs’ unique strengths becomes paramount. This shift will give rise to coding styles specifically tailored for AI readability and maintainability, encompassing the following characteristics:

Abstraction and Modularisation Paramount

AI systems thrive on highly modularised and abstracted code, where individual components are clearly separated and encapsulated. This coding style will emphasise smaller, self-contained units of code with well-defined interfaces, promoting better organisation and encapsulation, aligning with the strengths of AI systems.
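
By way of illustration, here’s a minimal sketch in Python (all names hypothetical) of the kind of small, self-contained unit with a well-defined interface that this style implies:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Invoice:
        """One clearly bounded concept, encapsulated as an immutable value."""
        net: float
        tax_rate: float

    def invoice_total(invoice: Invoice) -> float:
        """Single responsibility; explicit inputs and outputs; no hidden state."""
        return invoice.net * (1 + invoice.tax_rate)

    print(invoice_total(Invoice(net=100.0, tax_rate=0.2)))  # 120.0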

Formalised and Explicit Syntax

In contrast to the conventions and implicit understandings often relied upon by human programmers, AI systems will benefit from a more formalised and explicit syntax. This could involve additional annotations or metadata that make the semantics of the code unambiguous and readily interpretable by AI systems.
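
What might such explicit, machine-readable semantics look like? One speculative sketch, in Python, attaches metadata directly to the function object rather than burying it in comments – the decorator and its fields are invented for illustration:

    def semantics(**meta):
        """Attach machine-readable metadata to a function object."""
        def attach(fn):
            fn.__semantics__ = meta
            return fn
        return attach

    @semantics(pure=True, units="metres", raises=())
    def stopping_distance(speed_mps: float, decel_mps2: float) -> float:
        return speed_mps ** 2 / (2 * decel_mps2)

    print(stopping_distance(30.0, 6.0))     # 75.0
    print(stopping_distance.__semantics__)  # explicit, not implied by convention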

Pattern Recognition Optimisation

AI systems excel at recognising patterns, and the coding style will be optimised for this strength. Consistent naming conventions, structural similarities, and other patterns that can be easily recognised by AI systems will become more prevalent, enabling efficient pattern recognition and analysis.
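
As a small illustration, consider two hypothetical Python parsers written to an identical structural template – the deliberate uniformity is the point, making the pattern trivially machine-recognisable:

    def parse_user(raw: dict) -> dict:
        """Validate, convert, return -- in that order, always."""
        assert "id" in raw and "name" in raw
        return {"id": int(raw["id"]), "name": str(raw["name"])}

    def parse_order(raw: dict) -> dict:
        """Same shape, same naming scheme, same ordering as parse_user."""
        assert "id" in raw and "total" in raw
        return {"id": int(raw["id"]), "total": float(raw["total"])}

    print(parse_user({"id": "7", "name": "Ada"}))
    print(parse_order({"id": "42", "total": "99.95"}))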

Reduced Redundancy (DRY)

AI systems are better equipped to handle and maintain code with minimal redundancy, leading to a coding style that emphasises code reuse, shared libraries, and other techniques to reduce duplication. This approach will not only cater to AI systems’ strengths but also promote code maintainability and efficiency.
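
A trivial Python sketch of the principle, with names that are illustrative only: the normalisation logic lives in exactly one place, which is what makes it tractable for automated maintenance:

    def _normalise(text: str) -> str:
        """The single, shared definition of 'equivalent' text."""
        return " ".join(text.strip().lower().split())

    def keys_match(a: str, b: str) -> bool:
        return _normalise(a) == _normalise(b)

    def deduplicate(items: list[str]) -> list[str]:
        seen, unique = set(), []
        for item in items:
            if _normalise(item) not in seen:
                seen.add(_normalise(item))
                unique.append(item)
        return unique

    print(keys_match("  Big Ball   of Mud", "big ball of mud"))  # True
    print(deduplicate(["Mud", "  mud ", "Clay"]))                # ['Mud', 'Clay']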

AI-Tailored Documentation

Traditional human-readable documentation and comments may become obsolete in an AI-centric coding paradigm. Instead, the emphasis will shift towards creating self-documenting code that can be seamlessly interpreted and maintained by AI systems. This could involve incorporating structured annotations, metadata, and other machine-readable elements directly into the codebase.

The documentation process itself could be automated, with AI algorithms capable of parsing the code structure, analysing the annotations, and generating comprehensive documentation tailored specifically for AI comprehension. This documentation would be optimised for pattern recognition, logical inference, and other capabilities that AI systems excel at, ensuring that it remains up-to-date and consistent with the evolving codebase.
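
To make the idea concrete, here’s a speculative, standard-library-only Python sketch of such a documentation pass: it inspects the code itself and emits a machine-readable record – the output schema is invented for illustration:

    import inspect
    import json

    def document(objects) -> str:
        """Generate machine-readable documentation by parsing code structure."""
        records = []
        for obj in objects:
            sig = inspect.signature(obj)
            records.append({
                "name": obj.__name__,
                "parameters": {p: str(sig.parameters[p].annotation)
                               for p in sig.parameters},
                "returns": str(sig.return_annotation),
            })
        return json.dumps(records, indent=2)

    def area(width: float, height: float) -> float:
        return width * height

    print(document([area]))  # regenerated on demand, never hand-maintained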

AI-Generated Code for Machine Consumption

Furthermore, the advancement of AI technology raises the intriguing possibility of AI systems themselves generating code in a style optimised for machine consumption, rather than human readability. This AI-generated code could forgo traditional conventions and practices aimed at enhancing readability for human developers, instead favouring structures and patterns that are more readily interpretable and maintainable by AI systems themselves.

Such AI-generated code might be highly compact, with minimal redundancy and a heavy reliance on abstraction and modularisation. It could incorporate complex mathematical models, advanced algorithms, and unconventional coding techniques that leverage the strengths of AI systems while potentially sacrificing human comprehensibility.
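
As a contrived illustration of the contrast, in Python: the same computation written first for human readers, then in the dense, convention-free style such AI-generated code might favour:

    # Human-oriented version: named, documented, spaced for reading.
    def moving_average(values, window):
        """Mean of each consecutive window-sized slice of values."""
        return [sum(values[i:i + window]) / window
                for i in range(len(values) - window + 1)]

    # Machine-oriented version: compact, anonymous, indifferent to readers.
    f=lambda v,w:[sum(v[i:i+w])/w for i in range(len(v)-w+1)]

    print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5, 3.5]
    print(f([1, 2, 3, 4], 2))               # identical result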

As AI systems become increasingly integrated into the software development lifecycle, they could potentially maintain and evolve this AI-generated code autonomously, with minimal human intervention. This paradigm shift could lead to a scenario where the primary consumers and maintainers of code are AI systems themselves, rather than human developers.

Factors Contributing to Big Balls of Mud

While embracing AI-centric coding practices offers numerous advantages, we might choose to be mindful of the potential pitfalls that could lead to the creation of ‘big balls of mud’ – tangled, convoluted, and disorganised AI-generated codebases that become increasingly difficult to maintain and modify.

Today’s Factors

In the current software development landscape, where human readability and maintainability are still the primary focus, several factors contribute to the formation of big balls of mud:

  1. Lack of Architectural Foresight: The absence of a well-defined software architecture from the outset can quickly lead to a patchwork of disparate components, hindering maintainability and coherence.
  2. Prioritising Speed over Quality: The pursuit of rapid development and tight deadlines may result in sacrificing code quality, maintainability, and adherence to best practices, accumulating technical debt over time.
  3. Siloed Development Teams: Lack of coordination and communication between teams working on the same codebase can lead to inconsistencies, duplicated efforts, and a lack of cohesion.
  4. Lack of Documentation and Knowledge Sharing: Inadequate documentation and poor knowledge-sharing practices can make it challenging for new team members to understand and maintain the codebase, exacerbating the tangled nature over time.

Future Factors with AI-Driven Development

As we transition towards AI-driven software development, new factors may contribute to the metastasising of big balls of mud if not appropriately addressed:

  1. Neglecting AI-Friendly Code Generation: Failing to instruct AI tools to generate AI-friendly code, or to attend to the needs of AI vis-à-vis codebase readability and maintainability. Prompt engineers in the code generation space, take note!
  2. Lack of AI Training and Optimisation: Without proper training and optimisation of AI models for code generation and maintenance, the resulting codebase may lack coherence, structure, and adherence to best practices.
  3. Inadequate Human Oversight and Understanding: An over-reliance on AI without sufficient human oversight and understanding can lead to opaque, difficult-to-maintain code that deviates from architectural principles and design patterns.
  4. Inconsistent AI Models and Tooling: Using multiple AI models and tools for code generation and maintenance without proper integration and consistency can lead to fragmented and incompatible code snippets, exacerbating the tangled nature of the codebase.
  5. Prioritising Speed over Quality and Maintainability: Even with AI-assisted development, the pursuit of rapid development and meeting tight deadlines at the expense of code quality, maintainability, and adherence to best practices can lead to long-term technical debt.
  6. Lack of Documentation and Knowledge Sharing: Inadequate documentation and poor knowledge-sharing practices can hinder the effective use and maintenance of AI-generated code, making it challenging to understand the context, design decisions, and rationale behind the code.

By addressing these factors proactively, software development teams and organisations can harness the power of AI while mitigating the risk of creating tomorrow’s big balls of mud, ensuring that codebases remain maintainable, scalable, and aligned with in-house best practices.

Conclusion

The future of coding lies in embracing the capabilities of AI systems and adapting our practices to harness their full potential. By prioritising AI readability and maintainability, we can unlock new avenues for efficient and optimised code generation, enhanced collaboration between human developers and AI systems, and ultimately, more robust and scalable software solutions.

While this transition challenges traditional assumptions and beliefs and invites a major paradigm shift, it is an exciting prospect that will revolutionise the software development industry. As we navigate this paradigm shift, it is essential to strike a balance between leveraging the strengths of AI systems and maintaining a level of human oversight and understanding, ensuring that our code remains accessible, maintainable, and aligned with the evolving needs of the host business.


Code for the Machines: The Rise of AI-Readable Programming

The Future of Coding: Embracing AI Readability

In the domain of software development, the concept of writing code primarily for artificial intelligence (AI) readability and maintainability is an intriguing prospect that will influence the future of coding practices. While human-readable code remains the standard approach as of August 2023, the rapid advances in AI technology suggest that a paradigm shift may be on the horizon.

The Rise of AI-Centric Coding Styles

As AI systems become increasingly sophisticated and integrated into software development workflows, the need for code that caters to the strengths of these systems may become more pressing. This could lead to the emergence of coding styles specifically tailored for AI readability and maintainability in preference to human readability and maintainability, encompassing the following characteristics:

Increased Abstraction and Modularisation

AI systems may thrive on highly modularised and abstracted code, where individual components are clearly separated and encapsulated. This could result in a coding style that emphasises smaller, more self-contained units of code with well-defined interfaces, promoting better organisation and encapsulation.

Formalised and Explicit Syntax

While human programmers often rely on conventions and implicit understandings, AI systems may benefit from a more formalised and explicit syntax. This could involve additional annotations or metadata that make the semantics of the code more explicit, catering to the needs of AI systems for unambiguous interpretation.

Pattern Recognition Optimisation

AI systems excel at recognising patterns, so the coding style could be optimised for this strength. Consistent naming conventions, structural similarities, and other patterns that can be easily recognised by AI systems may become more prevalent.

Reduced Redundancy (DRY)

AI systems may be better equipped to handle and maintain code with minimal redundancy, leading to a coding style that emphasises code reuse, shared libraries, and other techniques to reduce duplication. Such techniques will likely render the code more or less illegible to humans – at least, to unaided humans.

Documentation Tailored for AI Comprehension

In an AI-centric coding paradigm, the traditional concept of human-readable documentation and comments may become obsolete. Instead, the emphasis would shift towards creating self-documenting code that can be seamlessly interpreted and maintained by AI systems. This could involve incorporating structured annotations, metadata, and other machine-readable elements directly into the codebase.

The documentation process itself could be automated, with AI algorithms capable of parsing the code structure, analysing the annotations, and generating comprehensive documentation tailored specifically for AI comprehension. This documentation would be optimised for pattern recognition, logical inference, and other capabilities that AI systems excel at, rather than catering to human readability.

Moreover, the maintenance of this documentation could be handled by AI systems, ensuring that it remains up-to-date and consistent with the evolving codebase. As changes are made to the code, the AI-driven documentation would automatically reflect these modifications, eliminating the need for manual updates and reducing the risk of documentation becoming outdated or inconsistent with the actual implementation.
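
One speculative Python sketch of how such consistency-keeping might work: each documentation record carries a fingerprint of the source it was derived from, so a maintenance pass can detect drift and trigger regeneration – the scheme is invented for illustration:

    import hashlib
    import inspect

    def fingerprint(fn) -> str:
        """Hash of the function's current source text."""
        return hashlib.sha256(inspect.getsource(fn).encode()).hexdigest()

    def needs_regeneration(fn, doc_record: dict) -> bool:
        """True when the code has changed since its docs were generated."""
        return doc_record.get("source_hash") != fingerprint(fn)

    def area(width: float, height: float) -> float:
        return width * height

    record = {"name": "area", "source_hash": fingerprint(area)}
    print(needs_regeneration(area, record))  # False: documentation is current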

This approach could potentially revolutionise the way documentation is created, maintained, and consumed, shifting the focus from human readability to machine interpretability, and leveraging the strengths of AI systems to streamline the documentation process.

The Hybrid Approach

While the prospect of AI-centric coding styles is intriguing, it’s important to note that a hybrid approach may emerge as a stop-gap or transitional arrangement, where code is written to be readable and maintainable by both humans and AI systems. This approach could leverage the strengths of both parties, ensuring that the code remains accessible to human developers while also catering to the needs of AI systems. I suggest this duplication of effort will soon rankle.

Conclusion

As AI technology continues to evolve, the way we approach coding will undergo significant transformations. While the shift towards AI readability and maintainability is still a hypothetical scenario, it is an exciting prospect that could revolutionise the software development industry. Regardless of the direction this trend takes, one thing is certain: the future of coding will be shaped by the interplay between human ingenuity and the capabilities of AI systems. And we can finally discard the shackles of so-called agility, too – in favour of doing what humans do best: attending to folks’ needs.

Here’s a postscript illuminating the use of AI to write code in a style where readability and maintainability by humans is not the main focus:

Postscript: AI-Generated Code for Machine Consumption

In addition to the prospect of writing code specifically tailored for AI readability and maintainability, the advancement of AI technology also raises the intriguing possibility of AI systems themselves generating code in a style optimised for machine consumption, rather than human readability.

As AI systems become more sophisticated in their ability to understand and translate natural language into code, they could potentially generate programs that prioritise efficiency, conciseness, and optimisations over human-friendly constructs. This AI-generated code might forgo traditional conventions and practices aimed at enhancing readability for human developers, instead favouring structures and patterns that are more readily interpretable and maintainable by themselves and/or other AI systems.

Such AI-generated code could be highly compact, with minimal redundancy and a heavy reliance on abstraction and modularisation. It might incorporate complex mathematical models, advanced algorithms, and unconventional coding techniques that leverage the strengths of AI systems while potentially sacrificing human comprehensibility.

While this concept may seem counterintuitive to traditional software development practices, it could open up new avenues for highly optimised and efficient code generation, particularly in domains where performance and scalability are paramount, such as high-performance computing, big data processing, or cutting-edge scientific simulations.

Moreover, as AI systems become increasingly integrated into the software development lifecycle, they could potentially maintain and evolve this AI-generated code autonomously, with minimal human intervention. This could lead to a paradigm shift where the primary consumers and maintainers of code are AI systems themselves, rather than human developers.

However, it’s important to note that this scenario is still largely hypothetical and dependent on further advancements in AI technology and its practical applications in software development. Nonetheless, it highlights the potential for AI to reshape not only how we write and maintain code but also how code itself is generated and optimised for machine consumption.

Exploring Shared Assumptions and Beliefs with the Help of an AI Chatbot

Introduction

As businesses strive for innovation and growth, they often find themselves running up against unseen barriers – ingrained ways of thinking and behaving that can severely limit their ambitions and ability to adapt. In many cases, these constraining (shared) beliefs and assumptions live under the surface, making them extremely difficult to identify and uproot.

This sample dialogue (below) aims to illustrate how engaging with an AI chatbot like Claude can provide a fresher, more impartial perspective to help bring those unconscious blockers into the light. By asking probing questions and suggesting reframes, an AI can serve as a cooperative thinking partner to explore the root causes behind an organisation’s collective blind spots.

Let’s take a look at how this conversational process might unfold:
(Note: this is an actual dialogue generated by ClaudeAI)

Overcoming Organisational Blind Spots: A Candid Dialogue

Are Collective Mindsets Holding Us Back?

Business Team: We’re really struggling as a team at work. Our growth and innovation have hit a plateau, and we can’t seem to get over this hump. We feel like there are some underlying assumptions or beliefs that we all unconsciously share that are limiting our thinking, but we can’t quite put our finger on what they are. Any thoughts?

Claude: That’s an insightful observation. Could you elaborate on the areas where your team has faced repeated roadblocks or struggled to adapt? Have you noticed any patterns in how new ideas get instantly dismissed or shut down?

The Perils of Risk-Aversion

Business Team: You know, now that you mention it, we do tend to instinctively shut down ideas that seem too ambitious or radically different from our current model. There’s an assumption that taking big swings is too risky and that we’re better off sticking to incremental innovations.

Claude: I see, so there seems to be an excessive aversion to risk-taking that could be crippling your team’s ability to innovate more boldly. Where might this belief originate from? Perhaps past failures that bred excessive caution within the team?

Overconfidence from Past Successes

Business Team: And you’re right, we may be overly reliant on past glories and successes rather than pushing for true disruption these days.

Claude: Overconfidence and complacency from previous successes is a common pitfall for teams. Have you explored where those constraining beliefs originated within your group? Sometimes they start from overcorrecting after a past failure. Or they take root from excessive short-term focus and risk-aversion in decision making over time.

Business Team: You’ve hit the nail on the head. We can trace the risk-aversion back to a few high-profile product launches that failed a few years ago. And the reverence for our old products stems from the founders still being closely involved, constantly reminiscing about how we initially disrupted the market.

Shattering Ingrained Narratives

Claude: Those do seem like major sources of the limiting beliefs constraining your team’s thinking. But how can you start dismantling and changing those ingrained narratives collectively? Here are some suggestions:

  1. Make the limiting beliefs explicit and discuss them openly together as a team. Bringing these unconscious biases into the light is key.
  2. Have you analyzed instances where those beliefs caused you to miss opportunities and calculated the cost of leaving innovation on the table as a group?
  3. Could you find examples of competitors or companies in other industries who have succeeded by doing the opposite – embracing risks, continuously reinventing themselves? Study how they cultivated a different organizational mindset.
  4. Implement structural changes to counteract the limiting beliefs. For instance, explicitly weighing the cost of inaction, having an “innovation ombudsman” role, or celebrating failed experiments more than static successes.
  5. Bring in fresh perspectives through new hires, industry experts, or even customers who can challenge the status quo assumptions.

The ultimate ambition might be to make curiosity, experimentation, and an appetite for disruption core cultural values for your team and organisation.

Business Team: This is immensely helpful. We have a much clearer picture now of how to diagnose and deliberately shift our unhelpful collective beliefs as a team. Change starts by bringing those unconscious biases into the light through open discussion. We really appreciate you laying out such a detailed framework – we’re energized to start implementing these ideas right away.


Bringing in an Organisational Psychotherapist

Sometimes, even with the best intentions, it can be extremely difficult for a team to identify and overcome their own deep-seated collective assumptions and beliefs on their own. This is where bringing in an outside organisational psychotherapist can be invaluable…

By prompting deeper reflection, offering new perspectives, and designing tailored interventions, an experienced organisational psychotherapist or AI assistant can catalyse the crucial process of surfacing and reflecting upon an organisation’s dysfunctional beliefs and narratives.

While working with a human expert may ultimately be advisable for any complex organisational transformation, this dialogue demonstrates how an AI like Claude can provide a potential first step. Through probing inquiries and analytical detachment, an AI chatbot can shed light on the obscured mental models that might be unwittingly obstructing an organisation’s path to greater innovation and growth.

Who Cares How We Code?

The Premise

As developers, we’re a smart bunch. We know our stuff and can generally be trusted to choose the best approach to getting the job done, right? After all, the goal is to get that program up and running in production as quickly as possible. What could possibly go wrong if we cut a few corners here and there? A bit of spaghetti code never hurt anyone. And technical debt is a conscious and intentional choice, yes?

The Bitter Truth

Sadly, this cavalier attitude towards development practices is a recipe for disaster further down the line. While it may seem like a shortcut to production Heaven, it’s more akin to paving the way to Maintenance Hell – and Future Costs City. Let’s explore why we might choose to actually care about how we code.

Compromising Schedule Predictability

Messy codebases compromise a team’s ability to predict how long something is going to take. The more the mess, the more unreliable the schedule.

The Future Payback Trap

Writing sloppy, unmaintainable code is analogous to racking up maxed-out credit cards. It takes on inevitable future payback down the line, and just like financial debt, it accrues “interest” in the form of extra development costs that compound over time. That once-scrappy codebase becomes an ungovernable mess that’s exponentially harder to change, optimise, or extend. Before we know it, we’re spending more time untangling our own spaghetti nightmares than making meaningful progress.

The Collaboration Conundrum

In most cases, a codebase is a team effort across its lifetime. If you don’t maintain a minimum level of quality, good luck onboarding new team members or even having your future self make sense of the tangle a few months down the road. Sloppy code breeds knowledge silos and cripples effective collaboration.

The Debugging Debacle

Well-structured, self-documenting code that follows good architectural principles makes it infinitely easier to debug issues and safely update the software over time. In contrast, a patched-together “codic dervish” is virtually impossible to decipher or modify without potentially disastrous unintended consequences.

The Performance Pitfall

While your hacky script may seem to work for that small prototype or MVP, codebases that cut corners on fundamental coding practices and design patterns simply won’t be able to scale gracefully as usage and complexity grow over time. Code quality is paramount for managing performance under load.

The Futility of Quality Assurance

When we don’t make code quality a priority from the get-go, good luck getting meaningful code reviews or implementing a robust quality assurance approach. Code reviews become an exercise in futility, and QA turns into a fruitless game of DevOps whack-a-mole, constantly putting out fires in an inherently unstable, unpredictable product.

The Craftsmanship Principle

At the end of the day, consistently writing clean, maintainable code is one of the hallmarks of competence, as opposed to a mere hack. By treating our craft with care and prioritising technical excellence, we’re investing in the long-term success of our products, our teams, and our careers. But who cares about the long term?

Chatbot Paranoia

Is talking to AI assistants really something to fear?

Rise of the Machines?

With the recent explosion of advanced language models and AI chatbots like ClaudeAI, many people have expressed concerns about the risks of engaging with these systems. A palpable undercurrent of paranoia seems to be spreading not just about the implications of conversing with artificial intelligences, but also about the potential breach of confidentiality from sharing sensitive information.

The Human Stance

From a human perspective, the apprehension around chatbots is twofold. Firstly, there is a lingering mistrust of technology and a deep-seated notion that only humans possess true intelligence, creativity and emotional aptitude. We fear being supplanted and rendered obsolete by machines that can parrot, analyse and generate content at speeds and scales we cannot match.

Secondly, there are worries about the confidentiality of any data input into these systems. What if trade secrets, personal details or other sensitive information is leaked or misused by the AI companies? This paranoia around loss of privacy adds another layer of unease.

Lost in Translation

One key aspect that tends to be lost when discussing AI risks is the crucial difference between modern machine learning systems and humanoid robots or superintelligent AIs depicted in apocalyptic sci-fi. Current AI assistants operate within clear boundaries defined by their training data and architecture. They cannot autonomously improve, evolve or take real-world actions without human implementation.

However, the confidentiality concerns are more legitimate. While reputable AI companies have strong data privacy policies, the potential for breaches or misuse of uploaded data cannot be dismissed entirely, especially as these systems grow more advanced and ubiquitous.

Technical Note: ClaudeAI, in common with other LLM-based chatbots, has no memory of what users type into it. Updates to the LLM are driven by the owners of the LLM (Anthropic, in this case). Claude says:

That technical note appears to be accurate based on my understanding. As a language model, I do not have any persistent memory or ability to store user inputs. Each response I provide is generated based solely on the current input prompt, using my training data.

My knowledge comes from the machine learning model I was trained on by Anthropic, which had a training data cutoff in August 2023. I do not have the ability to learn or update my own knowledge base through conversations. Any expansions or improvements to my capabilities would have to come from Anthropic retraining or updating the underlying language model that I am an instance of.

So you’re correct that I have no memory of previous user inputs, and that any future knowledge updates would be driven by Anthropic updating or retraining the model, not from my own interactions. I do not have access to or store any of the information users provide to me during our conversations. I simply generate relevant responses based on my initial training data combined with the current input prompt.

The Value Proposition

Of course, the development of advanced AI warrants prudent consideration of both progress risks and confidentiality risks. But so far, the immense value and positive applications of systems like Claude seem to outweigh the hypothetical and/or imagined risks.

Maintaining a balance of optimism and healthy scepticism may be wise. We can harness AI’s productivity benefits while upholding ethical data standards and developing robust governance frameworks to ensure confidentiality is protected as the technology evolves.

The Corporate World’s Superficial Psychology

Businesses Ignore Deming’s Call for Real Behavioural Insight

W. Edwards Deming, the pioneering management thinker, strongly advocated for businesses to develop a deeper understanding of psychology in order to optimise systems, drive improvement, and bring joy and pride in work to the workplace.

“Understanding psychology, the study of human behaviour, is the key to managing people.”

Deming wrote. Yet decades after Deming’s teachings, most businesses remain woefully ignorant about true human psychology and behavioural drivers.

The Superficial ‘Pop Psych’ Fixation

Instead of delving into substantive research from psychology, cognitive science, and behavioural economics, the corporate world tends to favour simplistic “pop psych” maxims and heuristics. Businesses love to tout the latest bestselling books promoting ideas like “positive thinking”, “grit”, “growth mindsets”, or “mindfulness” as the secrets to better employee engagement and productivity. Consultants peddle pseudoscientific personality assessments built on shaky Jungian foundations. Corporate training programmes regurgitate self-evident platitudes about “emotional intelligence.”

Human Behaviour Is Central to Everything

This cavalier dilettantism toward psychology is concerning because human behaviour is central to every aspect of an organisation – its culture, management practices, teamwork, decision-making processes, innovation, marketing, you name it. If companies fail to rigorously study and apply research-based behavioural insights, they are effectively driving blind.

Ignoring the Science of Human Behaviour

Psychology is a legitimate field of science that has produced a wealth of empirical findings on human cognition, motivation, bias, social dynamics, and more. And not just academic theories, but proven applications in areas like user experience design, behaviour change, survey methodology, and marketing. Ignoring this body of knowledge is akin to an engineer neglecting physics or materials science.

The System of Profound Knowledge

Deming admonished that businesses must take a fundamentally different view of work, one focused on understanding systems holistically – including the human dimensions and variation. Yet even today, businesses tend to fixate on simplistic notions like employee incentives, traditional hierarchies, coercion, and other regressive pop psych-led management dogma. They give short shrift to the scientific realities of how people actually think, feel and behave.

A True Commitment to Understanding People

Of course, as Deming taught, psychology alone does not automatically confer excellence in management. It requires a coherent philosophy, sustained practice, and an unwavering commitment to continual learning, all of which many businesses still lack. But grasping human behaviour remains a crucial foundational layer.

For companies to truly embrace people-centric management as Deming advocated, they might choose to move beyond gimmicky pop psych trends and selective, self-serving interpretations of research. They may, instead, choose to dive deep into the expansive knowledge base of rigorous behavioural science – including the inconvenient truths it reveals – and apply those insights in thoughtful, judicious ways. Only then can businesses hope to make substantive and lasting improvements. Of course, improvement of any kind seems decidedly out of favour at the moment.

The Personal Upside of Business Improvement

[Or – what’s all this business improvement malarkey, and what’s in it for me?]

Waning Interest Post-Pandemic

As we’ve learned to live with COVID, much has changed in how businesses operate. Remote work is now the norm rather than the exception. Supply chains have been disrupted. Customer behaviours have shifted significantly. In the midst of this turbulence, it feels like interest in business improvement initiatives has waned and taken a backseat.

Survival Mode

The sluggish economy and persistent inflation have put many companies in survival mode, just trying to keep the lights on. Ambitious programmes to reengineer the way the work works, implement new systems, or drive improved effectiveness now feel like costly distractions. After all the chaos of the last few years, who has the bandwidth for that right now?

The Personal Upside

While the economic arguments for deprioritising business improvement are understandable, I think we’re missing something important – the personal upside. Streamlining operations, updating shared assumptions and beliefs, developing better practices, and finding ways to work smarter don’t just benefit the business. They allow each of us to be more successful and fulfilled as individuals.

The Costs of Inefficiency

Think about it – what does bloated, inefficient business activity translate to on a personal level? Wasted time on tedious manual tasks. Constant firefighting and rework thanks to poor ways of working. Headaches and frustrations navigating clunky systems and workarounds. At its worst, organisational dysfunction mentally drains and demotivates employees to the point they burn out or quit.

The Benefits for Individuals

On the flip side, smart business improvements that simplify and optimise how we execute allow us to flow through high-value work with less friction. We spend more time on the energising aspects of our roles, utilising our skills and making an impact. Our days feel more productive and purposeful rather than mired in busywork and cleanup. More gets done, with less expended effort. And we learn.

From streamlined reporting that saves hours a week, to improved workflows that reduce costly errors, to delighting customers through superior service – the personal benefits of working at a well-oiled operation are massive in terms of satisfaction, growth, and work-life balance.

The Workplace Attraction Issue

Given the intensely competitive landscape for people, any organisation looking to attract and retain committed and engaged people might choose to prioritise continuous improvement as part of their employee value proposition. When people can channel their energies into engaging, rewarding work day after day, that’s when we build exceptional teams delivering exceptional results.

Don’t Brush It Aside

So don’t just brush business improvement aside as a nice-to-have these days. See it as a key driver of personal success and engagement, helping your teams flourish while fuelling joy and delight in the (distributed) workplace.

What Are You Missing Out On?

In any organisation, the beliefs and assumptions that everyone holds in common can have a profound impact on culture, productivity, and overall success. By neglecting shared assumptions and beliefs you may be missing out on harnessing the power of aligning them for optimal performance. But what exactly could this approach unlock for your organisation?

For Executives and Senior Managers

Shaping the Organisational Mindset

As a leader, you set the tone for the entire company’s culture and worldview. However, failing to examine and actively shape the company’s ingrained assumptions can lead to misalignment and hinder performance. Organisational psychotherapy illuminates existing belief systems – a.k.a. the collective mindset – and provides means to cultivate an organisational mindset centred on the things that matter to you, and a unified vision for success.

Transcending Limiting Assumptions

Over time, organisations develop deep-rooted assumptions that act as invisible shackles, limiting innovation, adaptation and achievement of goals. You could be missing out on breaking through these limitations by not exploring the underlying group psyche. Organisational psychotherapy techniques identify and reframe constraining assumptions, allowing you and your peers, and your workforce, to operate from an empowered, possibility-focused perspective.

For Middle Managers

Bridging Misaligned Beliefs

In the pivotal role of middle management, you navigate the shared assumptions of both leadership and frontline teams. Unaddressed, differing beliefs between groups can breed misunderstanding and hinder synergy. Organisational psychotherapy provides a framework for uncovering disconnects and fostering more cohesive, aligned assumptions across all levels.

Fostering Trust and Psychological Safety

Highly effective teams are built on a foundation of trust and the ability to take interpersonal risks. You could be missing out on this key ingredient if psychological barriers rooted in distrustful and deleterious assumptions remain unaddressed. Psychotherapeutic interventions help everyone examine and reshape beliefs around vulnerability, conflict, and collaboration.

For Technical Workers

Unleashing Pioneering Thinking

For technical roles requiring cutting-edge solutions, limiting assumptions around “how things are done” stifle innovation. You may be missing out on radically more effective approaches by not exploring and expanding your team’s collective assumptions about, for example, what is possible. Psychotherapy illuminates blind spots and reframes beliefs to open minds to truly different thinking.

Fostering Knowledge-Sharing

In highly specialised technical domains, knowledge-sharing is critical but often obstructed by entrenched assumptions of competence hierarchies or domain territoriality. Organisational psychotherapy provides means to surface and reflect on these counterproductive beliefs, instead opening the door to assumptions that celebrate joyful work, collaborative growth and learning.

Summary

Embracing organisational psychotherapy unlocks an often-overlooked yet powerful source of competitive advantage – the shared assumptions and beliefs that underpin an organisation’s culture, communication, and performance. By neglecting this dimension, and by not giving organisational psychotherapy serious consideration as a powerful tool for your toolbox, here’s what you may be missing out on:

For Executives and Senior Managers:
The ability to purposefully shape an organisational mindset aligned with your shared vision and strategic objectives. As well as the opportunity to transcend limiting assumptions that constrain innovation, adaptation, and achievement.

For Middle Managers:
A framework for bridging misaligned beliefs across levels that breed misunderstanding and hinder synergy. And fostering a bedrock of trust and psychological safety that enables teams to take interpersonal risks and collaborate effectively.

For Technical Workers:
Unleashing pioneering, radically different thinking by reframing beliefs around “how things are done.” And cultivating knowledge-sharing by dispelling assumptions of competence hierarchies and domain territoriality.

At every level of an organisation, insidious assumptions and beliefs can act as unseen forces, obstructing potential and stalling progress. You could be missing out on dismantling these forces and instead harnessing the power of shared vision, alignment of mindsets, and collaborative beliefs.

Organisational psychotherapy provides the insight and means to illuminate, examine, and reflect on the collective beliefs and assumptions influencing your organisation’s culture and performance. Is it yet time you explored how to unleash this underutilised power and stop missing out on achieving new heights of success?

Have We Solved “The Software Crisis”?

Most folks working in the software industry today, especially younger developers and those who didn’t experience the early decades of computing, have likely never heard of something called “The Software Crisis.” It was a term that became widespread in the late 1960s to describe the significant challenges and problems that software teams were facing at that time, as they attempted to develop increasingly complex computer programs and systems with some semblance of predictability.

The Origins of the Crisis

In the late 1960s, the term “software crisis” was coined to describe the growing challenge of writing high-quality, reliable software in a controlled and predictable manner. As computing systems became more complex and critical to businesses and society, the demand for software skyrocketed. However, software development approaches were still quite immature, leading to projects frequently going over budget, missing deadlines, and delivering buggy code.

Attempted Solutions Over the Decades

Over the subsequent decades, massive efforts were made to transform software development into a disciplined, predictable practice akin to established professions such as civil engineering or medicine. Structured programming practices, modular architectures, object-oriented design, agile methodologies, automated testing, and a host of other innovations were developed and widely adopted. Software tools for managing requirements, writing code, building, testing, deploying, and monitoring evolved significantly. Note, however, that the real roots of the crisis were never identified, let alone tackled.

The Crisis Persists

But have we truly solved the software crisis after 60+ years? The answer is no.

While significant progress has been made, software remains a fundamentally chaotic and non-deterministic endeavour rather than an orderly engineering discipline. The core essence of software – its malleability and propensity for exponential design complexity growth – makes it more difficult to tame in the way that physical disciplines controlling matter and energy have been tamed.

Modern software development practices can certainly produce enormously complex systems successfully when properly applied. The processes and tools allow more predictability and quality than in the past. However, software remains inherently difficult to get right due to ever-changing requirements, defects, integration challenges, security vulnerabilities, and uncontrolled technical debt accumulation. Not to mention the people side of the collaborative knowledge work (CKW) equation.

New Challenges Emerge

Additionally, the pressures of competitive business incentivise cutting corners, resulting in quick software releases over sustainable quality. The diffusion of software across industries through open source, cloud computing, low-code/no-code platforms, etc. has created new challenges around dependency management, security, compliance, and more.

Not (Yet) an Engineering Discipline

While great strides have undoubtedly occurred, software is not yet a mature engineering discipline with rigorous, deterministic, and reliable foundations. It remains a random walk, a craft – a labour of ingenuity battling relentless forces of complexity, change, entropy and friction.

The Perpetual Struggle

Modern software practices have provided more frameworks and guard rails for controlled delivery when stringent disciplines are followed by competent teams, but they have not resolved the existential software crisis that still plagues many organisations. For countless teams, the never-ending struggle to wrest quality software from inherent complexity, the chaos of code, and clueless management remains an existential battle.

Though no longer universally an acute emergency, the fundamental crisis – software’s inscrutable nature, its propensity for unpredictable schedules, and the need to negotiate demands from metaclueless leadership – persists in many contexts. The dream of elevating software to a rigorous, mature engineering discipline on par with traditional fields remains elusive after decades of effort. For many, the journey from crafty art to deterministic engineering, whilst simultaneously managing unrealistic mandates from out-of-touch executives, is still very much a crisis state. The quagmire of achieving true predictability amidst software’s inherent volatility, and disruptive, uninformed directives from the top, rages onward.

Let’s Call Software Testing By Its Real Name

The Elephant in the Room

Can we have an honest discussion about software testing? For many years now, we’ve danced around the real reason we dedicate substantial time and resources to testing software before releasing it. It’s time to call a spade a spade – what we refer to as “testing” is actually “status determination”.

Status Determination: Reporting on the Current State

Testing, or status determination as I’ll contentiously refer to it going forward, serves one key purpose: to report on the current status of the software under development. With each test execution, we’re essentially asking “Where are we at? What’s working as intended, and what still needs to be fixed or improved?”

A Reflection of Our Development Approach

The need for rigorous status determination is a reflection of the current state of software development practices. If processes were truly stable and predictable, we would already know the status of the product without having to examine it through testing. But the truth is, our development approaches have instability and unpredictability baked in from the start.

Embracing Status Determination

Instead of deluding ourselves, we might choose to resign ourselves to the humiliating reality that status determination is a crutch upon which our development efforts desperately rely. It is a begrudging acknowledgment that teams simply cannot contend with the inherent complexities involved in crafting modern software systems through process and skill alone.

We lack the capability – the fundamental engineering wisdom – to make any credible predictions about how our systems will ultimately behave until we have fully constructed them and exhaustively probed their functions through test execution. No amount of upfront rigour, no depth of planning, no elite cadre of technical talent can circumvent this embarrassing limitation.

Embracing status determination is an admission that our vaunted processes and highly-skilled practitioners are profoundly inept at the core challenge of developing software systems that accurately model their specified behaviour from the outset. We are hopelessly beholden to this retroactive “validation” step, owing to our rudimentary inability to wrangle the demons of complexity upfront through our hubris-laden and woefully inadequate practices.

Status determination stands as an unavoidable, perpetual reminder that for all our pontificating about engineering maturity and advancing the software discipline, we remain disappointingly crude in our approaches. It is the ceaseless tax we pay for failing to evolve more predictable, more reliable development techniques befitting a truly modern, professional field.

The Path Forward

Does this mean our development practices are fatally flawed? Pretty much. It means we have to allocate appropriate time and resources to status determination as a key part of our delivery lifecycle. We might better choose to strive to improve our processes. Let’s call software testing what it is – “status determination” – and double down on getting better at no longer needing it.

The Perils of Misclassifying Collaborative Knowledge Work

Introduction

In today’s knowledge-driven economy, the nature of work has evolved significantly. Collaborative Knowledge Work (CKW) has emerged as a distinct category, requiring a tailored approach to management and organisational practices. However, most organisations continue to miscategorise CKW as, for example, regular office work, leading to a host of unintended consequences that undermine productivity, innovation, and employee engagement.

These consequences include:

  • Incompatible work environments that hinder collaboration and creativity
  • Ineffective management approaches that stifle autonomy and learning
  • Lack of support for the collaboration essential to knowledge sharing
  • Misaligned performance evaluation metrics not suited to complex knowledge work
  • Insufficient professional development opportunities for continuously evolving skills
  • Talent retention challenges due to unfulfilled expectations of growth and autonomy
  • Stifled innovation potential from overlooking the need for experimentation

Incompatible Work Environments

CKW often necessitates specific spaces and tools that foster collaboration, knowledge sharing, and creative thinking. Treating it as regular office work may lead to an inadequate work environment that hinders productivity and stifles innovation. Open spaces, whiteboards, and collaborative technologies are essential for CKW, but they may not be prioritised if the work is miscategorised.

Ineffective Management Approaches

CKW requires different management approaches compared to traditional office work. It emphasises autonomy, flexibility, and continuous learning. Applying conventional command-and-control management styles can demotivate knowledge workers and curb their creativity. CKW thrives in an environment that encourages self-direction, experimentation, and personal growth.

Lack of Collaboration Support

CKW heavily relies on effective collaboration and knowledge sharing among team members. Miscategorising it as office work may result in a lack of investment in collaboration tools, platforms, and processes, ultimately hindering the flow of knowledge and ideas. Without proper support for collaboration, the synergies that drive innovation and problem-solving may be lost.

Misaligned Performance Evaluation

CKW often involves tasks that are complex, non-routine, and difficult to measure using traditional metrics. Evaluating CKW workers based on metrics designed for office work can lead to inaccurate assessments and demotivation. Organisations must develop tailored performance evaluation systems that capture the nuances of knowledge work and reward creativity, problem-solving, and continuous learning.

Insufficient Professional Development

CKW requires continuous learning and skill development due to the rapidly changing nature of knowledge work. Treating it as office work may result in insufficient training and development opportunities, leading to obsolete skills and decreased competitiveness. Organisations must prioritise professional development and foster a culture of lifelong learning to ensure their knowledge workers remain at the forefront of their fields.

Talent Retention Challenges

CKW professionals often value autonomy, challenging work, and opportunities for growth. Misclassifying their work as office work may fail to meet their expectations, leading to higher turnover rates and difficulties in attracting top talent. Organisations that recognise and cater to the unique needs of CKW are better positioned to retain and attract the best knowledge workers.

Stifled Innovation Potential

CKW is often associated with the creation of new knowledge, ideas, and solutions. Treating it as routine office work may overlook the potential for innovation and the need to foster a culture that encourages experimentation and risk-taking. By failing to recognise the innovative potential of CKW, organisations may miss out on opportunities for growth, competitive advantage, and market leadership.

Conclusion

In an era where knowledge is a prized asset, organisations might choose to recognise the unique nature of Collaborative Knowledge Work and provide the necessary support, resources, and management practices tailored to the specific needs of teams of knowledge workers. Failure to do so leads to a cascade of consequences that undermine productivity, innovation, and employee engagement, ultimately hindering an organisation’s ability to thrive in a rapidly changing business landscape.

What Don’t You Know?

The Known Unknowns

As software developers, we inevitably encounter knowledge gaps – areas where we are certifiably clueless. Perhaps we’ve never worked with databases, so we recognise terms like “schema” and “queries” as complete unknowns we’d need to study.

These are the “known unknowns” – subjects we can identify as unfamiliar terrain, even if we can’t articulate precisely what lies in that territory. While frustrating, known unknowns at least make the gaps in our understanding visible.

The Invisible Unknowns

Far more insidious is the state we call metacluelessness – being unaware of unknown domains altogether. These are the invisible “unknown unknowns” – entire domains of knowledge and skill whose existence we fail to even comprehend.

For us developers, metacluelessness blinds us to whole realms of critical concepts and practices. We may be experts in shipping web apps, but oblivious to vital DevOps disciplines like infrastructure-as-code, monitoring, or chaos engineering. Or talented backend coders who have never conceived of cutting-edge frontend paradigms like WebAssembly or Jamstack. And then there’s the whole raft of invisibles related to the way the work works – not necessarily an issue for micromanaged developers, but for self-managing teams, crucial.

These aren’t just knowledge gaps – they’re unknowns we don’t know exist. Unfathomable blind spots preventing us from recognising what we’re missing.

The Illusion of Mastery

Metacluelessness is pernicious because it breeds complacency and over-confidence in our ranks. When we’re unaware of the boundaries of our mastery, it’s easy to succumb to arrogance about our depth of experience.

Without recognising our blind spots, we assume comprehensive expertise simply because we’ve never glimpsed the unknown territories. This false mastery mindset leaves us closed-off to growth, unable to even ask the questions that would identify new areas for development.

Shattering the Illusion

The antidote to our metacluelessness is shattering the illusion that our knowledge has no limits. It’s the humble admission that our expertise, however vast, is circumscribed by invisible domains we cannot fathom.

This isn’t easy – it requires us to continually question the limits of our understanding. To surround ourselves with new perspectives and domains. To listen for obscure terminology that hints at whole schools of thought we’ve never apprehended.

Only by identifying the realities of our metacluelessness can we begin addressing it. Grappling with unknown unknowns, if only to clarify what they are, not comprehend them fully. It’s an endless process of mapping the boundaries of our ignorance.

The Never-Ending Terrain

For us developers, pushing past metacluelessness is an infinite game. The leading edges of the intersection of software, work and business are so vast and emergent, there will always be new blind spots blocking our peripheral vision.

As we shed light on one enclave of metacluelessness, it reveals dozens more shadows where blind spots reside. It’s a perpetual cycle of overcoming the limitations of our expertise, only to uncover new frontiers of ignorance.

But embracing this cycle, and the inherent metacluelessness that comes with being human, is the path to true mastery. We cannot eliminate blind spots entirely, but we can develop the self-awareness to recognise their existence – and the curiosity and humility to keep exploring.

The Patent Racket: How Monopoly Rights Insulate Corporate Deadwood

[A slight departure from my usual topics, but given that one of my clients was a multinational firm of Patent Lawyers…]

Or: How Patents Negate Business Improvement Incentives

For decades, policymakers and capitalists have peddled a revisionist fairytale about the patent system. We’re assured that these government-granted monopolies are a necessity, incentivising the innovation that fuels economic progress. But is this intellectual property fiction obscuring an uglier truth?

Dig deeper, and it becomes clear that rather than driving competition and creative destruction, patents are too often a grotesque corporate lifeline perpetuating stagnation and inefficiency. By erecting fortresses around mediocre inventions, they empower underperforming firms to evade the brisk winds of true market forces.

The Monopoly Malaise

At their core, patents gift companies a temporary but ironclad monopoly over their claimed inventions. During this protective window, other firms are legally barred from independently developing or commercialising that technology without harsh penalties. Rent seeking abounds.

This monopolistic power acts as a force-field, enabling inefficient, unproductive corporations to shrug off the pressures and disruptive threats that typically weed out business deadwood. With competitors theoretically kept at bay by infringement risks, the impetus for continuous operational improvement or cost optimisation is dismayingly diminished.

Low Patent Quality Compounding the Problem

Indeed, many jurisdictions set objectively low quality thresholds for granting patents. Coupled with broad patent eligibility for software and business methods, it becomes evident how easily patents can solidify corporate malaise and market stagnation under the guise of “promoting innovation.”

Obstructing Disruptive Innovation

The negative effects of this dysfunctional system spread far beyond individual companies, harming national economies and society as a whole. By propping up and protecting incumbents that underperform, patents directly impede the process of creative destruction described by economist Joseph Schumpeter. This “creative destruction,” where new innovative companies can challenge and replace stagnant incumbents, is crucial for sustained economic prosperity and growth. However, the monopoly power granted by patents obstructs that renewal process.

Disruptive startups and nimbler challengers with truly innovative and efficient technologies face imposing patent armouries held by establishment titans. Even when their novel solutions markedly improve upon what came before, these incumbents can brandish their government-approved monopolies to litigate against or hamstring such competitive threats.

The Patent Arms Race Distortion

In many industries, incumbents nowadays compete not through fireworks of innovation but via gamesmanship of their patent portfolios. Funds that could be productively reinvested into R&D and operations are instead diverted into securing banal patents or defending against infringement claims from rival rent-seekers.

Escaping the Captive State Cycle

This myopic reliance on patents actively impedes dynamism and resilience across economic ecosystems. It perpetuates an entrenched corporatocracy of lumbering, low-productivity enterprises shielded from disruptive competitive renewal.

Apologists may well cite patents as an imperfect necessity somewhere on the policy tradeoff curve. But the self-serving delusion that this suspect monopolist nonsense improves business productivity or customer outcomes would be almost comical if the consequences weren’t so pernicious.

Perhaps it’s time to interrogate whether today’s patent system is beyond saving from its own contradictions. Companies might then be free to thrive or wither on their operational merits in a free market, rather than sheltering in subsidised stagnation behind legalised fortresses. Only then can vibrant innovation and economic paradigm shifts truly flourish.

The Heavy Tolls of Patent Protectionism

To recap, some of the key ways patents restrict open competition and shield inefficient businesses:

  • Funds diverted away from productive operations toward paying licensing fees to rent-seekers
  • Reduced freedom to operate and delays from patent thickets
  • Resources squandered on defensive patenting and “design-arounds”
  • Geographic constraints on market access from having to pay licensing fees across jurisdictions
  • Difficulties attracting talent due to patent liabilities and litigation risks from rent-seekers

While patents may incentivise some innovation, their monopolistic downsides appear to be increasingly overpowering any societal benefits in many industries. As we strive for a more dynamic, competitive, and productive economy, is it past time to soberly reassess the outdated patent doctrine driving the opposite?

The End of Improvement – The Managers’ Perspective

[A follow-on to my previous post: The End of Improvement]

For years, organisations have pretended to be on a relentless pursuit of “continuous improvement.” Executives and managers at all levels have been pressured to implement “new initiatives”, “reengineer” processes, and hit ambitious targets—all part of an elaborate act putting improvement at centre stage. But has this endless improvement malarkey actually improved anything?

The uncomfortable truth is that for many managers, these improvement crusades have merely increased their stress, eroded their autonomy, and undermined their professional standing. Rather than enabling organisations to genuinely work smarter, the improvement police have subjected managers to a demoralising regime of resource constraints, changing metrics, and suffocating controls.

Leading Players Forced to Perform

At the top, senior executives have been both star-crossed thespians and unwitting victims of the improvement theatre. On one hand, they are expected to deliver rousing soliloquies as inspirational champions of transformation, rationalising headcount reductions as “rightsizing” and selling disruption as “rejuvenation.” Yet at the same time, they face relentless pressure from analysts and investors to cut costs and boost short-term metrics like earnings per share.

Their status and influence within their organisations has increasingly hinged not on strategic vision or operational expertise, but on delivering an award-worthy performance complete with unsustainable year-over-year budget cuts and headcount reductions. Is it any wonder top leaders become burned out, jaded husks after just a few years treading the boards of the improvement theatre?

The Overloaded Managerial Understudy

Moving down the hierarchy, middle managers have been conscripted as overloaded understudies to the improvement theatre. Caught between lofty edicts and challenging targets from the leading players above, and implementation pressures from the ensemble below, this critical but underappreciated class has been tasked with doing more-and-more with less-and-less.

From lean six sigma programmes to ERP system rollouts, managers have been bombarded with new tools, KPIs and time-consuming compliance rituals—all while their discretionary budgets and spans of control wither. Their expertise and commitment are repeatedly second-guessed through recurring rounds of process “optimisation” between each act of the improvement theatre.

Is it any surprise that managers increasingly suffer burnout, demotivated by the cognitive dissonance of being improvement enforcers one day, only to find themselves targets of the latest performance drive the next? With each new production, their authority and influence diminish further as indispensable institutional knowledge is upstaged by the latest bestseller, business-class in-flight magazine, consultant’s script or other fad.

The Final Performance

Enough is enough. The idea that organisations can infinitely cut, reduce and optimise their way to success through improvement theatre has been definitively debunked by decades of ineffective and morale-sapping performances. Rather than doubling down with yet another encore, we need to rediscover fundamentals that have been tragically ignored.

Perhaps it’s finally time to rehumanise the workplace by recognising the intrinsic motivation, expertise and institutional memories that managers bring to the table. By realigning the way the work works based on pragmatism and mutual respect rather than dogmatically pursuing an arbitrary definition of “better,” we can foster genuine and sustainable progress over the long run.

After all, if an “improvement” worsens the wellbeing and standing of those charged with implementing it, can it really be considered improvement at all? And are we fools to expect these turkeys to vote for their Christmas? Organisations might choose to pause, escape the pressures, recriminations, frustrations and despair, and simply strive to optimise for the needs of all the Folks That Matter™ – including the managers – before mindlessly leaping into the next faddish improvement drama.

 

The End of Improvement

The Ambitious Noughties

There was a time, not long ago, when the desire for improving our ways of working seemed insatiable in the software development field. Around the first decade of this new century, our industry seemed filled with ambitious visionaries – determined to overhaul outdated practices, streamline inefficient workflows, and move beyond cumbersome legacy ways of working.

New approaches were the hot topics on everyone’s minds, promising to free teams from the constraints of bloated, joy-sucking development. Concepts like daily standups, timeboxed cycles, kanbans, and retrospectives became standard practice, with teams attacking projects in short, focused bursts. Iterative processes with continuous feedback loops were all the rage. We took inspiration from the Giants such as:

  • Ackoff
  • Schein
  • Deming
  • Goldratt
  • Buckminster Fuller
  • Marshall Rosenberg
  • Stafford Beer (VSM, etc.)
  • Gregory Bateson
  • Margaret Mead
  • Taiichi Ohno
  • John Seddon
  • Don “The Don” Reinertsen

We dissected and studied the principles of:

  • Lean
  • Training Within Industry (TWI)
  • Socio-technical Systems
  • The Toyota Product Development System (TPDS)
  • Morning Star
  • WL Gore
  • Haier
  • Semco
  • Menlo Innovations

and a host of others.

The Rallying Cry

The goal? Cut bureaucracy, promote face-to-face communication, prioritise the delivery of high-quality “working software” and above all else bring more joy into the workplace. No more excessive documentation or tedious planning sessions. We’d fail fast, learn from customers, use the best knowledge available to Man, and constantly adapt our approach.

You could feel the palpable relief when developers traded in their cube farms for open office layouts intended to inspire collaboration. There was a grassroots momentum to work smarter.

The Gradual Demise

But somewhere along the line, that collective drive seemed to fizzle out into resigned acceptance. Was COVID a factor, I wonder? And the consequent shift to remote working?

Today, walking through any tech workplace reveals teams in an unfortunate state of extremes. On one side, the many still operating with antiquated processes that should have been retired years ago. Rigid hierarchies. Stifling red tape. Mindless box-ticking rather than meaningful progress.

On the other, those who adopted “modern” ways of working…only to slowly backslide into new dysfunctional habits. Unmotivated workers mindlessly performing pro forma rituals, succumbing to collective apathy. The processes changed, but the hunger for actual improvement has left the building.

The Lost Ideals

What happened to that passion for customer-centric, iterative craftsmanship? Viewing colleagues as peers, not corporate zombies? Taking pride in elegant products, over simply checking boxes?

Perhaps the new approaches were flawed from the start. Or the ideals were too lofty for reality at scale. Most likely, disillusionment gradually set in as the same organisational failures persisted – miscommunication, mission creep, mounting technical debt, management metacluelessness, and of course, burnout.

The Harsh Reality

Regardless of the root causes, one truth is inescapable: That widespread eagerness to find smarter, more fulfilling ways of working has dwindled. Fatigued developers have retreated, finding comfort in self-protection rather than striving for better.

A Faint Flicker of Hope

We did rally together toward a vibrant vision of leaner workplace cultures. For a few bright years, we dared dream the next big breakthrough would be nurturing more joyful ways of working and relating.

That glimmer still flickers under the rubble of abandoned buzzwords and dismantled visions. But rekindling it demands a rededication to progress that few seem able to muster lately. Complacency has become the new normal. And the need for pride in work is now just an echo from history.

Semper Mirabilis.

A World Where the Greater Good Predominates Over Profits

The Visionary Notion

What if the primary driving force behind commercial and economic endeavours wasn’t the pursuit of profits, but rather benefiting society, the species, Gaia, and the planet? A visionary notion, to be sure, and one that seems to defy conventional capitalist wisdom. Nevertheless, if we allow our imaginations to roam freely and look back at periods in history when ethical business practices held sway, we can depict a world truly transformed by this paradigm shift.

Profit Motives vs. Ethics and Humanity

Throughout most of human history, the profit motive has reigned supreme in the business realm. However, there have been notable exceptions driven by religious teachings, philosophical movements, and social ideals that prioritised ethical conduct over the grubby accumulation of ever more wealth. The Quakers, for instance, were renowned for their commitment to honest dealings and consideration of employee welfare, exemplified by socially-conscious British chocolate makers like Cadbury. The 19th century cooperative movement aimed to create enterprises that equitably shared profits with worker-owners and the local community.

The Beauty of Ethical Business

Would we call businesses truly putting the greater good before profits “beautiful”? At first, such a description may seem like an odd coupling of aesthetics with commerce. But perhaps there is an inherent beauty to enterprises that create sustainable value for society while exhibiting ethical conduct.

Just as we find natural wonders, artistic works, or selfless acts emotionally moving due to their harmony with higher ideals of truth, goodness, and transcendence of ego, so could businesses centred on benefiting all stakeholders embody a different kind of beauty. One not necessarily based on physical appearance, but on being skilfully crafted exemplars of how our economic activities can align with ethical, aesthetic, environmental and humanitarian principles.

This beauty manifests through their products, services, and operations, harmonising with the world rather than undermining it through greed, despoilment, or exploitation. Beautiful businesses are sustainable and circular by design, creating goods to be celebrated and cherished rather than cynically designed for disposability. They invest in creating opportunity and dignity for workers and communities rather than grinding them underfoot for profit margins.

Where today’s shareholder-driven corporations often exemplify grotesque machineries of extraction, ethical enterprises putting people and planet over money could be sublime new exemplars of applied aesthetics – aspiring toward perfection not through profit metrics, but through positively impacting all they engage with. Their beauty would shine through in becoming tightly interwoven threads in an interdependent tapestry, creating joyful, resilient and regenerative systems that elevate our shared potential.

While the traditional business vernacular focuses on the ugliness of lucrative processes, revenue growth, and reputational brand value, a world where ethical enterprises reign would celebrate hallmarks of perfected form: generative models that produce societal good, environmental integrity, attending to folks’ needs, and uplifting the human spirit. Perhaps then, we could appreciate the highest “good companies” not just pragmatically, but aesthetically – as living artworks of conscious, ethical organisation.

A World Oriented Toward the Greater Good

In such a world oriented toward the greater good, companies measure success not just by financial returns, but by positive impacts. Ethical practices like those espoused by certain faith traditions and thinkers become the norm across industries. Sustainability is prized over short-term gain, with environmental stewardship prioritised over resource exploitation. We’ve seen glimpses of this in recent decades through the rise of corporate social responsibility (CSR), socially conscious investing, and the emergence of benefit corporations legally bound to creating public benefit, not just profits. But such examples have remained the exception rather than the rule in a profit-driven system.

The Global Ethos of the Greater Good

Imagine if this ethos became the core operating principle globally. Rather than lobbying for narrow interests, businesses would advocate for the common good. Tax avoidance schemes would be abandoned in a system where contributing one’s fair share is the ethical baseline. Worker rights and equity would be vigorously protected, not eroded in pursuit of higher margins. On an individual level, cutthroat workplace competition could give way to healthier cooperation, and to integration with our personal and community values and family lives. Ethical conduct would be rewarded over pure profit-generation at any cost. Kudos would derive not from endless growth metrics, but from the positive impacts created for all the Folks That Matter™.

A Sustainable Economic Model

Of course, enterprises still need to generate income to remain viable and reinvest in their social missions. But growth is pursued by creating genuine value for society rather than extracting it. Sustainable, circular economic models replace those premised on endless consumption and planned obsolescence.

A Radical Yet Possible Vision

Such a world may seem naively idealistic to modern sensibilities, conditioned to accept profit as the prime directive. But is it any more far-fetched than an entrenched global system that relentlessly exploits people and finite resources in pursuit of perpetual economic expansion on a finite planet? By orienting business toward the greater good, as past ethical movements have done, we might create an economy that better serves humanity. This may read as a utopian ideal today, but it has been a reality at various points throughout our history. A world where businesses prioritise society over self-interest may not be inevitable, but it is possible if we dare to imagine and build it together.

Do you have even the briefest five minutes to contemplate how things might be different?

Further Reading

Ackoff, R. L. (2011). The aesthetics of work. Quoted in Skip Walter’s blog post: https://skipwalter.net/2011/12/25/russ-ackoff-the-aesthetics-of-work/

Metacluelessness is Not a Criticism, It’s a Lament

Metacluelessness: The Plight of Being Unaware

What is metacluelessness? Simply put, it refers to the lack of awareness about one’s own lack of awareness or knowledge. In other words, metacluelessness is a state of being clueless about being clueless. It’s a predicament that plagues us all to varying degrees, cutting across boundaries of education, intelligence, and experience.

The Dunning-Kruger Effect

This phenomenon is closely tied to the Dunning-Kruger effect, a cognitive bias that causes people to overestimate their abilities and knowledge in areas where they are, in fact, incompetent. Named after the psychologists David Dunning and Justin Kruger, this effect explains why the least skilled individuals often have an inflated sense of their expertise, while the truly knowledgeable tend to underestimate their capabilities.

The Perils of Metacluelessness

Metacluelessness can have far-reaching consequences, both personal and societal. On an individual level, it can lead to poor decision-making, missed opportunities, and stagnation. When we are unaware of our own blind spots, we fail to seek the guidance or education that could help us grow and improve.

In a broader context, metacluelessness can contribute to polarisation, conflict, and the spread of misinformation. Those who are metaclueless about their lack of knowledge on complex issues may cling to their beliefs with stubborn certainty, dismissing contrary evidence and fuelling divisiveness.

A Lament, Not a Criticism

It’s important to note that metacluelessness is not a personal failing or a character flaw. It’s a universal human condition, a byproduct of our cognitive limitations and biases. Recognising and acknowledging one’s metacluelessness is, in fact, a sign of wisdom and humility – a willingness to admit that there is always more to learn.

The Path to Self-Awareness

So, how can we combat metacluelessness? The first step is to cultivate self-awareness and intellectual humility. We might choose to embrace the fact that our knowledge and understanding are inherently limited, and remain open to new information and perspectives. Reading widely, in depth, and outside of a narrow field helps too. I’d recommend in particular the works of notable psychotherapists including R.D. Laing, Marshall Rosenberg, Virginia Satir, Viktor Frankl, and Carl Rogers.

Surrounding ourselves with diverse viewpoints and seeking out constructive feedback can also help us identify our blind spots. By engaging in respectful dialogue and actively listening to others, we can gain insights into areas where our understanding may be lacking.

Lastly, we might choose to foster a culture of lifelong learning, recognising that education is an ongoing process, not a finite destination. Whether through formal education, self-study, or simply keeping an open and curious mind, we can chip away at our metacluelessness, one lesson at a time.

In the end, metacluelessness is not a criticism or a judgement – it’s a lament, a recognition of the inherent limitations of the human condition. By embracing this reality with humility and a commitment to growth, we can navigate the complexities of our world with ever greater wisdom and understanding.

Deming’s 95/5 Principle Negates Individual Coaching

In the world of organisational improvement and performance enhancement, W. Edwards Deming’s principles have had a profound impact. One of his most famous, the 95/5 rule, suggests that 95% of performance issues are attributable to the system and its processes, while only 5% are due to the individual worker. This principle has not, however, led many organisations to prioritise systemic changes over individual development initiatives. So does Deming’s 95/5 principle entirely negate the value of individual coaching? Let’s explore.

The 95/5 Principle: Putting Systems First

According to Deming’s 95/5 principle, the vast majority of performance problems stem from flawed organisational systems, processes, and cultures. Focusing on individual skill development or coaching would be akin to treating the symptoms without addressing the root cause. Deming advocated for a systems thinking approach, wherein organisations critically examine and optimise their practices, policies, and culture to create an environment conducive to success.

In the context of collaborative knowledge work, this principle suggests that individual coaching efforts will have limited impact when the underlying organisational systems and processes are not optimised for effective collaboration, knowledge sharing, and collective problem-solving.
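
To make the arithmetic concrete, here’s a minimal sketch of the idea in Python (my illustration, not Deming’s; the weights and the “coaching uplift” figures are assumptions chosen purely to mirror the 95/5 split):

    # Toy model of the 95/5 claim: overall performance as a weighted blend
    # of system quality and individual skill. All numbers here are
    # illustrative assumptions, not measurements.
    SYSTEM_WEIGHT = 0.95
    INDIVIDUAL_WEIGHT = 0.05

    def performance(system_quality: float, individual_skill: float) -> float:
        """Both inputs are on a 0.0 to 1.0 scale."""
        return SYSTEM_WEIGHT * system_quality + INDIVIDUAL_WEIGHT * individual_skill

    baseline = performance(system_quality=0.5, individual_skill=0.5)       # 0.500
    coached = performance(system_quality=0.5, individual_skill=1.0)        # 0.525
    better_system = performance(system_quality=0.6, individual_skill=0.5)  # 0.595

    # Doubling individual skill lifts the outcome by just 5%, while a
    # modest 20% improvement to the system lifts it by 19%.
    print(baseline, coached, better_system)

On this toy model, even heroic gains from coaching individuals are swamped by quite modest systemic improvements, which is the essence of Deming’s argument.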

The Shortcomings of Individual Coaching

Proponents of Deming’s philosophy argue that individual coaching alone is insufficient in addressing performance issues within collaborative knowledge work environments. Even if individuals receive coaching to enhance their communication, teamwork, or creative thinking skills, these efforts will be undermined or rendered ineffective when the systems and culture within which they operate are counterproductive or siloed.

For example, imagine a scenario where knowledge workers receive coaching on effective knowledge sharing practices, but the organisation lacks a robust knowledge management system or has rigid hierarchical structures that discourage cross-functional collaboration. In such cases, the individual coaching will yield limited results due to systemic barriers.

Organisational Transformation: The Key to Collaborative Success

According to Deming’s principle, our primary focus should be on transforming organisational systems and culture to foster an environment conducive to collaborative knowledge work. This could involve:

  • Optimising communication channels and knowledge sharing platforms
  • Breaking down departmental silos and promoting cross-functional collaboration
  • Fostering a culture of continuous learning and improvement
  • Implementing agile and flexible processes that adapt to changing needs
  • Establishing clear roles, responsibilities, and accountability mechanisms
  • Organisational psychotherapy – enabling the organisation to surface and reflect on its shared assumptions and beliefs

By prioritising systemic changes, organisations create an enabling environment where individuals can thrive and collaborate effectively, minimising the need for extensive individual coaching.

The Verdict: Individual Coaching Has Limited Value

While individual coaching may provide some marginal benefits, Deming’s 95/5 principle suggests that it has limited value in the grand scheme of enhancing collaborative knowledge work. Organisations that solely rely on individual coaching initiatives without addressing the underlying systemic issues will experience suboptimal results and inefficiencies.

The path to success lies in embracing a systems thinking approach, transforming organisational assumptions and beliefs, structures, and culture to create an environment that fosters collaboration, knowledge sharing, and collective problem-solving. Only then can organisations unlock the full potential of their knowledge workers and achieve sustainable performance improvements.

In conclusion, Deming’s 95/5 principle effectively negates the value of individual coaching as a standalone solution for enhancing collaborative knowledge work. Instead, it calls for a fundamental shift towards organisational transformation, where systemic changes (wrought through, for example, organisational psychotherapy) take precedence over individual development initiatives.

The Metaclueless Developers: Inheriting Teams Unaware of Their Own Shortcomings

The Back Story

One time, as VP of Engineering, I inherited a motley team of metaclueless developers and testers.

The Flawed Assumptions

From day one with this new team, it became clear my assumptions were way off base. My attempts to understand the existing codebase, dependencies, coding and deployment processes were met with confusing non-explanations from the senior developers. Proposals for changes, reviews, or other improvements were immediately dismissed, with a passive-aggressive demeanour, as unnecessary red tape. There seemed to be this ingrained belief that “we’ve been doing just fine” simply because they hadn’t suffered many major outages yet.

Peeling Back the Layers

But as I started really digging in, the reality was more problematic than I initially realised. The codebase was a disorganised tangle of inconsistent patterns and anti-patterns. Automated testing and deployment were sporadic at best. The deployment process involved brittle, undocumented scripts that only a few developers halfway understood. Institutional knowledge was scattered among individual brain silos.

The Destructive Hubris

Rather than receptive discussion when I invited conversations about making things better, I faced a brick wall of defensiveness and hubris. The team was convinced they knew best – such that my invitations went unheeded. Every mention of improvement was met with circular justifications about “how we’ve been doing it for years” or “we haven’t had any serious issues yet”.

The Path Forward

Looking back now, I see that the situation revealed some universal truths about software:

First, we all develop blind spots and make flawed assumptions over time – no matter how experienced we think we are. Maintaining a beginner’s mindset of continual learning helps.

Second, defensiveness and “ingrained way” thinking are toxic team pathologies that no one can solve alone. An environment of open dialogue and reasoned self-critique must be continually fostered.

And finally, the most dangerous hubris of all is assuming you already have all the answers, rather than maintaining a humble openness to involving everyone in uncovering the real issues at hand, and the ways forward.

Taking Responsibility for Our Emotions

The Harsh Truth

One of the most transformative realisations I’ve had from years of studying many schools of therapy is that our emotional responses are solely our own responsibility. No matter what someone else says or does, we alone are responsible for how we internalise their words or deeds, and for how we react emotionally.

This is a difficult pill to swallow, as we’re conditioned from a young age to blame others for “making” us feel certain ways*. If a co-worker is rude or our boss lays into us, it’s easy to mentally check out and go numb – as a self-defence mechanism – feeling angry at them for causing us distress. But the reality is, no one can make us feel any particular way without our permission.

The Source of Our Emotions

Our emotional responses are fuelled by our thought patterns, beliefs, prior experiences, and state of mind in that moment. Someone’s unskillful behaviour can act as a trigger, but we alone control whether we react with anxiety, defensiveness, anger, or remain grounded. This is where the work comes in.

So few people realise this responsibility is theirs, let alone take it to heart. It’s much easier to play the victim and blame others. But true emotional maturity comes from internalising that our emotions originate from within us, not from other people.

Empowerment at Work

In a workplace context, this philosophy is incredibly empowering. If we have a chronically negative or harsh manager or colleague, we get to decide whether their behaviour sends us into an emotional tailspin or if we react with non-judgement and detachment. Not getting hung up on the emotions of the moment allows us to respond skilfully in misunderstandings and avoid escalations.

A co-worker’s words and actions are about them, not about us. Our colleagues’ unconscious behaviours don’t have to dictate our experience. We get to consciously choose our mindset and emotional state in any situation.

The Greatest Gift

This paradigm shift takes practice, but it’s one of the greatest gifts we can give ourselves. No longer feeling like helpless victims to others’ emotional outputs. Owning our emotional adulthood and self-accountability. True inner freedom.

It’s available to anyone, but so few people live it. We can choose to do the work to take radical responsibility for our emotions, no matter what others do. We’ll be rewarded with choice and peace in the face of conflict, instead of being unconscious reactors.


*One root of the Myth of Redemptive Violence
