Archive

AI

Generative AI Knows Nowt: Large Language Models Are Cribbing Content, Not Nicking Knowledge

The Handwringing Over ‘Knowledge Theft’

There’s been a lot of handwringing of late about large language models like ChatGPT supposedly ‘nicking’ knowledge and putting human expertise out of business. But that framing completely misunderstands what these models are and how they work. The reality is that generative AI doesn’t actually ‘know’ anything in a human sense – it is simply very good at stitching together and regurgitating content that already exists.

Just Really Good at Patterns, Not Comprehension

Large language models like Claude are trained on a massive corpus of digital text data scraped from the internet and books – everything from websites and Wikipedia entries to social media posts and product reviews. Through a complex machine learning process, the model learns patterns and associations in how this training data is written and structured.

When you query a language model, it doesn’t rely on ‘knowledge’ that it understands like a human would. Rather, it takes the input prompt, finds relevant patterns and sequences in its training data that are statistically associated with that prompt, and stitches together an output response drawing upon that data. It is essentially a sophisticated stochastic parrot operating at scale.
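To make the "stochastic parrot" point concrete, here is a toy next-word predictor in Python. It learns which word tends to follow which in a tiny invented corpus, then stitches output together from those statistical associations. This is a vastly simplified caricature, not how an actual LLM is built, but it illustrates the same principle of pattern continuation without comprehension:

```python
import random
from collections import defaultdict

# Tiny invented corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Learn which words have followed which -- pure statistical association.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length):
    """Stitch output together by repeatedly picking a word that
    has followed the current word in the training data."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: nothing ever followed this word
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the", 5))
```

The output is fluent-looking word salad assembled entirely from observed patterns; at no point does the program "understand" cats or mats.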

Intellectual Property Pilferers

So in reality, these models aren’t pilfering ‘knowledge’ – they are cribbing content. The training data itself represents the accumulation of human knowledge generation and intellectual labour over decades and centuries. But the models themselves have no conceptual understanding of that knowledge. They are simply very good at mimicking and recombining the written output of human-generated content in new ways, through predictive modelling.

This is why large language models can often give impressively fluent and coherent responses that seem knowledgeable on the surface. But dig a little deeper, and the holes in true comprehension become evident through factual errors, contradictions, incoherent reasoning, and inability to apply real context. Generative AI is just a sophisticated regurgitation engine, not an oracle of true knowledge.

Powerful Utilities, Not Human Replacements

Of course, this doesn’t mean large language models are useless or unimportant. As tools for automating content generation, analysis, and certain task workflows, they are incredibly powerful. But we can choose to retain a clear-eyed view of what they are and what their existence represents. Generative AI may be very impressive technologically, but it is not rendering human knowledge obsolete. Rather, it is highlighting the immense value of the intellectual labour that humans have generated and curated over centuries in creating the very training data that gives language models their capabilities in the first place.

Making Tomorrow’s Big Balls of Mud Today

What is a Big Ball of Mud?

In software development, the term “Big Ball of Mud” refers to a system or codebase that has become so tangled, convoluted, and disorganised over time that it becomes increasingly difficult to maintain, modify, or understand. It’s a metaphor for a software product that started with good intentions but gradually deteriorated into an unstructured mess due to a lack of proper planning, design, and adherence to best practices.

Consequences

The consequences of a Big Ball of Mud can be severe. It hinders productivity, increases technical debt, screws with predictability and schedules, and makes it challenging to introduce new features or fix bugs. Developers often find themselves spending more time trying to understand the existing code than actually writing new code. This can lead to frustration, decreased morale, and a higher risk of introducing further issues.

The Rise of AI-Centric Coding

A paradigm shift is looming on the horizon: a transition towards AI writing code, primarily for artificial intelligence (AI) readability and maintainability. While human-readable code has long been the desirable approach, the remarkable advances in AI technology invite a reevaluation of our coding practices, and of the use of AI to write code, to harness the full potential of these sophisticated tools.

As AI systems become increasingly integrated into software development workflows, the need for code that caters to AIs’ unique strengths becomes paramount. This shift will give rise to coding styles specifically tailored for AI readability and maintainability, encompassing the following characteristics:

Abstraction and Modularisation Paramount

AI systems thrive on highly modularised and abstracted code, where individual components are clearly separated and encapsulated. This coding style will emphasise smaller, self-contained units of code with well-defined interfaces, promoting better organisation and encapsulation, aligning with the strengths of AI systems.
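As a concrete, if speculative, illustration of such a small, self-contained unit with a well-defined interface, consider this Python sketch (the `Invoice` type and `invoice_total` function are invented for the example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Invoice:
    """A small, self-contained data unit with an explicit, immutable shape."""
    subtotal: float
    tax_rate: float

def invoice_total(invoice: Invoice) -> float:
    """A pure function with a narrow, well-defined interface:
    one clearly typed input, one clearly typed output, no hidden state."""
    return round(invoice.subtotal * (1 + invoice.tax_rate), 2)

print(invoice_total(Invoice(subtotal=100.0, tax_rate=0.2)))  # 120.0
```

Every dependency of the unit is visible in its signature, which is exactly the kind of clean separation a machine reader can exploit.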

Formalised and Explicit Syntax

In contrast to the conventions and implicit understandings often relied upon by human programmers, AI systems will benefit from a more formalised and explicit syntax. This could involve additional annotations or metadata that make the semantics of the code unambiguous and readily interpretable by AI systems.
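One hedged sketch of what such explicit, machine-readable annotation might look like in Python (the `ai_metadata` decorator and its fields are hypothetical, not an existing standard):

```python
import json

def ai_metadata(**meta):
    """Hypothetical decorator attaching explicit, machine-readable
    metadata to a function, so its semantics need not be inferred."""
    def wrap(fn):
        fn.__ai_meta__ = json.dumps(meta)  # stored in a parseable form
        return fn
    return wrap

@ai_metadata(pure=True, input_units="celsius", output_units="fahrenheit")
def to_fahrenheit(celsius: float) -> float:
    return celsius * 9 / 5 + 32

print(to_fahrenheit(100.0))                           # 212.0
print(json.loads(to_fahrenheit.__ai_meta__)["pure"])  # True
```

Where a human reader infers purity and units from context, a machine reader can simply parse the attached metadata.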

Pattern Recognition Optimisation

AI systems excel at recognising patterns, and the coding style will be optimised for this strength. Consistent naming conventions, structural similarities, and other patterns that can be easily recognised by AI systems will become more prevalent, enabling efficient pattern recognition and analysis.
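To make the idea concrete: with a strictly consistent verb_noun naming convention, even a trivial tool can mechanically recognise structure in a codebase. A toy Python sketch (the function names are invented):

```python
import re

# With a strict verb_noun convention, function names become mechanically
# analysable: a trivial pattern recogniser can group related operations.
FUNCTION_NAMES = ["load_config", "save_config", "load_users",
                  "render_page", "save_users"]

VERB_NOUN = re.compile(r"^(load|save)_(\w+)$")

def group_by_verb(names):
    """Group names by their leading verb, exploiting the shared pattern."""
    groups = {}
    for name in names:
        match = VERB_NOUN.match(name)
        if match:
            groups.setdefault(match.group(1), []).append(match.group(2))
    return groups

print(group_by_verb(FUNCTION_NAMES))
# {'load': ['config', 'users'], 'save': ['config', 'users']}
```

The more consistent the conventions, the more of the codebase yields to this kind of cheap, reliable mechanical analysis.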

Reduced Redundancy (DRY)

AI systems are better equipped to handle and maintain code with minimal redundancy, leading to a coding style that emphasises code reuse, shared libraries, and other techniques to reduce duplication. This approach will not only cater to AI systems’ strengths but also promote code maintainability and efficiency.
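A minimal sketch of the principle, assuming nothing beyond standard Python: duplicated validation logic collapsed into one shared helper.

```python
# Before DRY, both area functions would repeat the same validation inline.
# Extracting it into one shared helper removes the duplication.
def require_positive(value: float, name: str) -> float:
    if value <= 0:
        raise ValueError(f"{name} must be positive, got {value}")
    return value

def rectangle_area(width: float, height: float) -> float:
    return require_positive(width, "width") * require_positive(height, "height")

def triangle_area(base: float, height: float) -> float:
    return require_positive(base, "base") * require_positive(height, "height") / 2

print(rectangle_area(3, 4))  # 12
print(triangle_area(6, 4))   # 12.0
```

With the rule stated once, a maintainer (human or machine) need change it in only one place.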

AI-Tailored Documentation

Traditional human-readable documentation and comments may become obsolete in an AI-centric coding paradigm. Instead, the emphasis will shift towards creating self-documenting code that can be seamlessly interpreted and maintained by AI systems. This could involve incorporating structured annotations, metadata, and other machine-readable elements directly into the codebase.

The documentation process itself could be automated, with AI algorithms capable of parsing the code structure, analysing the annotations, and generating comprehensive documentation tailored specifically for AI comprehension. This documentation would be optimised for pattern recognition, logical inference, and other capabilities that AI systems excel at, ensuring that it remains up-to-date and consistent with the evolving codebase.
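As a hedged sketch of such automated, machine-oriented documentation: Python's introspection facilities can already emit a structured description of a function directly from the code itself (the `describe` helper and `scale` function are invented for illustration):

```python
import inspect
import json

def describe(fn):
    """Generate a machine-readable description of a function by parsing
    its signature, rather than relying on hand-written prose comments."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "parameters": [p.name for p in sig.parameters.values()],
        "returns": sig.return_annotation.__name__,
    }

def scale(vector: list, factor: float) -> list:
    return [x * factor for x in vector]

print(json.dumps(describe(scale)))
# {"name": "scale", "parameters": ["vector", "factor"], "returns": "list"}
```

Because the description is derived from the code, regenerating it after every change keeps it consistent with the implementation by construction.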

AI-Generated Code for Machine Consumption

Furthermore, the advancement of AI technology raises the intriguing possibility of AI systems themselves generating code in a style optimised for machine consumption, rather than human readability. This AI-generated code could forgo traditional conventions and practices aimed at enhancing readability for human developers, instead favouring structures and patterns that are more readily interpretable and maintainable by AI systems themselves.

Such AI-generated code might be highly compact, with minimal redundancy and a heavy reliance on abstraction and modularisation. It could incorporate complex mathematical models, advanced algorithms, and unconventional coding techniques that leverage the strengths of AI systems while potentially sacrificing human comprehensibility.

As AI systems become increasingly integrated into the software development lifecycle, they could potentially maintain and evolve this AI-generated code autonomously, with minimal human intervention. This paradigm shift could lead to a scenario where the primary consumers and maintainers of code are AI systems themselves, rather than human developers.

Factors Contributing to Big Balls of Mud

While embracing AI-centric coding practices offers numerous advantages, we might choose to be mindful of the potential pitfalls that could lead to the creation of ‘big balls of mud’ – tangled, convoluted, and disorganised AI-generated codebases that become increasingly difficult to maintain and modify.

Today’s Factors

In the current software development landscape, where human readability and maintainability are still the primary focus, several factors contribute to the formation of big balls of mud:

  1. Lack of Architectural Foresight: The absence of a well-defined software architecture from the outset can quickly lead to a patchwork of disparate components, hindering maintainability and coherence.
  2. Prioritising Speed over Quality: The pursuit of rapid development and tight deadlines may result in sacrificing code quality, maintainability, and adherence to best practices, accumulating technical debt over time.
  3. Siloed Development Teams: Lack of coordination and communication between teams working on the same codebase can lead to inconsistencies, duplicated efforts, and a lack of cohesion.
  4. Lack of Documentation and Knowledge Sharing: Inadequate documentation and poor knowledge-sharing practices can make it challenging for new team members to understand and maintain the codebase, exacerbating the tangled nature over time.

Future Factors with AI-Driven Development

As we transition towards AI-driven software development, new factors may contribute to the metastasising of big balls of mud, if not appropriately addressed:

  1. Lack of AI-Friendly Generation Instructions: Not instructing the AI to generate code with the needs of AI vis-à-vis codebase readability and maintainability in mind. Prompt engineers in the code generation space, take note!
  2. Lack of AI Training and Optimisation: Without proper training and optimisation of AI models for code generation and maintenance, the resulting codebase may lack coherence, structure, and adherence to best practices.
  3. Inadequate Human Oversight and Understanding: An over-reliance on AI without sufficient human oversight and understanding can lead to opaque, difficult-to-maintain code that deviates from architectural principles and design patterns.
  4. Inconsistent AI Models and Tooling: Using multiple AI models and tools for code generation and maintenance without proper integration and consistency can lead to fragmented and incompatible code snippets, exacerbating the tangled nature of the codebase.
  5. Prioritising Speed over Quality and Maintainability: Even with AI-assisted development, the pursuit of rapid development and meeting tight deadlines at the expense of code quality, maintainability, and adherence to best practices can lead to long-term technical debt.
  6. Lack of Documentation and Knowledge Sharing: Inadequate documentation and poor knowledge-sharing practices can hinder the effective use and maintenance of AI-generated code, making it challenging to understand the context, design decisions, and rationale behind the code.

By addressing these factors proactively, software development teams and organisations can harness the power of AI while mitigating the risk of creating tomorrow’s big balls of mud, ensuring that codebases remain maintainable, scalable, and aligned with in-house best practices.

Conclusion

The future of coding lies in embracing the capabilities of AI systems and adapting our practices to harness their full potential. By prioritising AI readability and maintainability, we can unlock new avenues for efficient and optimised code generation, enhanced collaboration between human developers and AI systems, and ultimately, more robust and scalable software solutions.

While this transition challenges traditional assumptions and beliefs and invites a major paradigm shift, it is an exciting prospect that will revolutionise the software development industry. As we navigate this paradigm shift, it is essential to strike a balance between leveraging the strengths of AI systems and maintaining a level of human oversight and understanding, ensuring that our code remains accessible, maintainable, and aligned with the evolving needs of the host business.

 

Code for the Machines: The Rise of AI-Readable Programming

The Future of Coding: Embracing AI Readability

In the domain of software development, the concept of writing code primarily for artificial intelligence (AI) readability and maintainability is an intriguing prospect that will influence the future of coding practices. While human-readable code remains the standard approach as of August 2023, the rapid advances in AI technology suggest that a paradigm shift may be on the horizon.

The Rise of AI-Centric Coding Styles

As AI systems become increasingly sophisticated and integrated into software development workflows, the need for code that caters to the strengths of these systems may become more pressing. This could lead to the emergence of coding styles specifically tailored for AI readability and maintainability over (sic) human readability and maintainability, encompassing the following characteristics:

Increased Abstraction and Modularisation

AI systems may thrive on highly modularised and abstracted code, where individual components are clearly separated and encapsulated. This could result in a coding style that emphasises smaller, more self-contained units of code with well-defined interfaces, promoting better organisation and encapsulation.

Formalised and Explicit Syntax

While human programmers often rely on conventions and implicit understandings, AI systems may benefit from a more formalised and explicit syntax. This could involve additional annotations or metadata that make the semantics of the code more explicit, catering to the needs of AI systems for unambiguous interpretation.

Pattern Recognition Optimisation

AI systems excel at recognising patterns, so the coding style could be optimised for this strength. Consistent naming conventions, structural similarities, and other patterns that can be easily recognised by AI systems may become more prevalent.

Reduced Redundancy (DRY)

AI systems may be better equipped to handle and maintain code with minimal redundancy, leading to a coding style that emphasises code reuse, shared libraries, and other techniques to reduce duplication. Such techniques will likely render the code more or less illegible to humans, at least to unaided humans.

Documentation Tailored for AI Comprehension

In an AI-centric coding paradigm, the traditional concept of human-readable documentation and comments may become obsolete. Instead, the emphasis would shift towards creating self-documenting code that can be seamlessly interpreted and maintained by AI systems. This could involve incorporating structured annotations, metadata, and other machine-readable elements directly into the codebase.

The documentation process itself could be automated, with AI algorithms capable of parsing the code structure, analysing the annotations, and generating comprehensive documentation tailored specifically for AI comprehension. This documentation would be optimised for pattern recognition, logical inference, and other capabilities that AI systems excel at, rather than catering to human readability.

Moreover, the maintenance of this documentation could be handled by AI systems, ensuring that it remains up-to-date and consistent with the evolving codebase. As changes are made to the code, the AI-driven documentation would automatically reflect these modifications, eliminating the need for manual updates and reducing the risk of documentation becoming outdated or inconsistent with the actual implementation.

This approach could potentially revolutionise the way documentation is created, maintained, and consumed, shifting the focus from human readability to machine interpretability, and leveraging the strengths of AI systems to streamline the documentation process.

The Hybrid Approach

While the prospect of AI-centric coding styles is intriguing, it’s important to note that a hybrid approach may emerge as a stop-gap or transitional measure, where code is written to be readable and maintainable by both humans and AI systems. This approach could leverage the strengths of both parties, ensuring that the code remains accessible to human developers while also catering to the needs of AI systems. I suggest this duplication of effort will soon rankle.

Conclusion

As AI technology continues to evolve, the way we approach coding will undergo significant transformations. While the shift towards AI readability and maintainability is still a hypothetical scenario, it is an exciting prospect that could revolutionise the software development industry. Regardless of the direction this trend takes, one thing is certain: the future of coding will be shaped by the interplay between human ingenuity and the capabilities of AI systems. And we can finally discard the shackles of so-called agility, too – in favour of doing what humans do best: attending to folks’ needs.

Here’s a postscript illuminating the use of AI to write code in a style where readability and maintainability by humans is not the main focus:

Postscript: AI-Generated Code for Machine Consumption

In addition to the prospect of writing code specifically tailored for AI readability and maintainability, the advancement of AI technology also raises the intriguing possibility of AI systems themselves generating code in a style optimised for machine consumption, rather than human readability.

As AI systems become more sophisticated in their ability to understand and translate natural language into code, they could potentially generate programs that prioritise efficiency, conciseness, and optimisations over human-friendly constructs. This AI-generated code might forgo traditional conventions and practices aimed at enhancing readability for human developers, instead favouring structures and patterns that are more readily interpretable and maintainable by themselves and/or other AI systems.

Such AI-generated code could be highly compact, with minimal redundancy and a heavy reliance on abstraction and modularisation. It might incorporate complex mathematical models, advanced algorithms, and unconventional coding techniques that leverage the strengths of AI systems while potentially sacrificing human comprehensibility.

While this concept may seem counterintuitive to traditional software development practices, it could open up new avenues for highly optimised and efficient code generation, particularly in domains where performance and scalability are paramount, such as high-performance computing, big data processing, or cutting-edge scientific simulations.

Moreover, as AI systems become increasingly integrated into the software development lifecycle, they could potentially maintain and evolve this AI-generated code autonomously, with minimal human intervention. This could lead to a paradigm shift where the primary consumers and maintainers of code are AI systems themselves, rather than human developers.

However, it’s important to note that this scenario is still largely hypothetical and dependent on further advancements in AI technology and its practical applications in software development. Nonetheless, it highlights the potential for AI to reshape not only how we write and maintain code but also how code itself is generated and optimised for machine consumption.

Exploring Shared Assumptions and Beliefs with the Help of an AI Chatbot

Introduction

As businesses strive for innovation and growth, they often find themselves running up against unseen barriers – ingrained ways of thinking and behaving that can severely limit their ambitions and ability to adapt. In many cases, these constraining (shared) beliefs and assumptions live under the surface, making them extremely difficult to identify and uproot.

This sample dialogue (below) aims to illustrate how engaging with an AI chatbot like Claude can provide a fresher, more impartial perspective to help bring those unconscious blockers into the light. By asking probing questions and suggesting reframes, an AI can serve as a cooperative thinking partner to explore the root causes behind an organisation’s collective blind spots.

Let’s take a look at how this conversational process might unfold:
(Note: this is an actual dialogue generated by ClaudeAI)

Overcoming Organisational Blind Spots: A Candid Dialogue

Are Collective Mindsets Holding Us Back?

Business Team: We’re really struggling as a team at work. Our growth and innovation have hit a plateau, and we can’t seem to get over this hump. We feel like there are some underlying assumptions or beliefs that we all unconsciously share that are limiting our thinking, but we can’t quite put our finger on what they are. Any thoughts?

Claude: That’s an insightful observation. Could you elaborate on the areas where your team has faced repeated roadblocks or struggled to adapt? Have you noticed any patterns in how new ideas get instantly dismissed or shut down?

The Perils of Risk-Aversion

Business Team: You know, now that you mention it, we do tend to instinctively shut down ideas that seem too ambitious or radically different from our current model. There’s an assumption that taking big swings is too risky and that we’re better off sticking to incremental innovations.

Claude: I see, so there seems to be an excessive aversion to risk-taking that could be crippling your team’s ability to innovate more boldly. Where might this belief originate from? Perhaps past failures that bred excessive caution within the team?

Overconfidence from Past Successes

Business Team: And you’re right, we may be overly reliant on past glories and successes rather than pushing for true disruption these days.

Claude: Overconfidence and complacency from previous successes is a common pitfall for teams. Have you explored where those constraining beliefs originated within your group? Sometimes they start from overcorrecting after a past failure. Or they take root from excessive short-term focus and risk-aversion in decision making over time.

Business Team: You’ve hit the nail on the head. We can trace the risk-aversion back to a few high-profile product launches that failed a few years ago. And the reverence for our old products stems from the founders still being closely involved, constantly reminiscing about how we initially disrupted the market.

Shattering Ingrained Narratives

Claude: Those do seem like major sources of the limiting beliefs constraining your team’s thinking. But how can you start dismantling and changing those ingrained narratives collectively? Here are some suggestions:

  1. Make the limiting beliefs explicit and discuss them openly together as a team. Bringing these unconscious biases into the light is key.
  2. Have you analysed instances where those beliefs caused you to miss opportunities and calculated the cost of leaving innovation on the table as a group?
  3. Could you find examples of competitors or companies in other industries who have succeeded by doing the opposite – embracing risks, continuously reinventing themselves? Study how they cultivated a different organisational mindset.
  4. Implement structural changes to counteract the limiting beliefs. For instance, explicitly weighing the cost of inaction, having an “innovation ombudsman” role, or celebrating failed experiments more than static successes.
  5. Bring in fresh perspectives through new hires, industry experts, or even customers who can challenge the status quo assumptions.

The ultimate ambition might be to make curiosity, experimentation, and an appetite for disruption core cultural values for your team and organisation.

Business Team: This is immensely helpful. We have a much clearer picture now of how to diagnose and deliberately shift our unhelpful collective beliefs as a team. Change starts by bringing those unconscious biases into the light through open discussion. We really appreciate you laying out such a detailed framework – we’re energised to start implementing these ideas right away.


Bringing in an Organisational Psychotherapist

Sometimes, even with the best intentions, it can be extremely difficult for a team to identify and overcome their own deep-seated collective assumptions and beliefs on their own. This is where bringing in an outside organisational psychotherapist can be invaluable…

By prompting deeper reflection, offering new perspectives, and designing tailored interventions, an experienced organisational psychotherapist or AI assistant can catalyse the crucial process of surfacing and reflecting upon an organisation’s dysfunctional beliefs and narratives.

While working with a human expert may ultimately be advisable for any complex organisational transformation, this dialogue demonstrates how an AI like Claude can provide a potential first step. Through probing inquiries and analytical detachment, an AI chatbot can shed light on the obscured mental models that might be unwittingly obstructing an organisation’s path to greater innovation and growth.

Chatbot Paranoia

Is talking to AI assistants really something to fear?

Rise of the Machines?

With the recent explosion of advanced language models and AI chatbots like ClaudeAI, many people have expressed concerns about the risks of engaging with these systems. A palpable undercurrent of paranoia seems to be spreading not just about the implications of conversing with artificial intelligences, but also about the potential breach of confidentiality from sharing sensitive information.

The Human Stance

From a human perspective, the apprehension around chatbots is twofold. Firstly, there is a lingering mistrust of technology and a deep-seated notion that only humans possess true intelligence, creativity and emotional aptitude. We fear being supplanted and rendered obsolete by machines that can parrot, analyse and generate content at speeds and scales we cannot match.

Secondly, there are worries about the confidentiality of any data input into these systems. What if trade secrets, personal details or other sensitive information is leaked or misused by the AI companies? This paranoia around loss of privacy adds another layer of unease.

Lost in Translation

One key aspect that tends to be lost when discussing AI risks is the crucial difference between modern machine learning systems and humanoid robots or superintelligent AIs depicted in apocalyptic sci-fi. Current AI assistants operate within clear boundaries defined by their training data and architecture. They cannot autonomously improve, evolve or take real-world actions without human implementation.

However, the confidentiality concerns are more legitimate. While reputable AI companies have strong data privacy policies, the potential for breaches or misuse of uploaded data cannot be dismissed entirely, especially as these systems grow more advanced and ubiquitous.

Technical Note: ClaudeAI, in common with other LLM-based chatbots, has no memory of what users type into it. Updates to the LLM are driven by the owners of the LLM (Anthropic, in this case). Claude says:

That technical note appears to be accurate based on my understanding. As a language model, I do not have any persistent memory or ability to store user inputs. Each response I provide is generated based solely on the current input prompt, using my training data.

My knowledge comes from the machine learning model I was trained on by Anthropic, which had a training data cutoff in August 2023. I do not have the ability to learn or update my own knowledge base through conversations. Any expansions or improvements to my capabilities would have to come from Anthropic retraining or updating the underlying language model that I am an instance of.

So you’re correct that I have no memory of previous user inputs, and that any future knowledge updates would be driven by Anthropic updating or retraining the model, not from my own interactions. I do not have access to or store any of the information users provide to me during our conversations. I simply generate relevant responses based on my initial training data combined with the current input prompt.

The Value Proposition

Of course, the development of advanced AI warrants prudent consideration of both progress risks and confidentiality risks. But so far, the immense value and positive applications of systems like Claude seem to outweigh the hypothetical and/or imagined risks.

Maintaining a balance of optimism and healthy skepticism may be wise. We can harness AI’s productivity benefits while upholding ethical data standards and developing robust governance frameworks to ensure confidentiality is robustly protected as the technology evolves.

Chatbots Make Blogging Easier

Writing quality blog posts consistently can be a challenge, especially when you’re short on time or struggling with writer’s block. But what if you had an intelligent assistant to help streamline the process? Enter Claude, the AI chatbot that can be a game-changer for bloggers. (Note: You may, quite reasonably, favour another Chatbot).

Here’s how you can leverage your favourite Chatbot’s capabilities to enhance blog post writing:

  1. Ask your Chatbot to Write A Blog Post
    The first step is simple – ask your Chatbot to write the blog post for you based on the topic, angle, and any specific guidelines you provide. You can be as vague or detailed as you like with your prompt. Your Chatbot will then generate an initial draft pulling from its vast knowledge base. (Hint: you may want to ask it to include subtitles for each section).
  2. Review and Refine
    Once you have the draft, read through it critically. Identify areas where your Chatbot may have missed the mark or misinterpreted your intent. Don’t worry, that’s perfectly normal when working with AI.
  3. Request Rewrites (Iteratively, As Needed)
    If there are significant shortcomings, go back to your Chatbot and ask it to rewrite the post while providing it with more context, direction and specific feedback. You may choose to go through a few iterations until the post accurately captures your vision.
  4. Edit for Polish
    Once you’re satisfied with the substance of the AI-generated draft, it’s time for you to apply your uniquely human touch. Edit the post (i.e. outside the Chatbot) to refine the language, smooth out transitions, eliminate AI tripe and hallucinations (especially in attributions, quotes, references, links, etc.), inject your unique voice, and align it with your blog’s tone and style.
  5. Copy Editing (Optional)
    Pass (paste) the post back into the Chatbot and ask it to correct for typos, spelling errors, grammar, tone, etc.
  6. Enhance with e.g. Visuals
    Don’t forget to complement your polished post with relevant visuals and such that catch the reader’s eye. While your Chatbot can suggest ideas, you’ll want to carefully select or create images, graphics, and media that elevate your content’s appeal.
  7. Publish and Promote
    After putting in the finishing touches, you’re ready to publish your AI-assisted blog post and share it with the world through your regular promotion channels.

The beauty of using a Chatbot is that you can adaptively exploit its skills based on your needs. For some posts, you may only need a rough draft to build upon. For others, you could have your Chatbot handle most of the heavy lifting and just need to apply the final polish.

So why not give e.g. Claude a try (I find the free version more than enough, most days) and experience how an AI co-pilot can revolutionise your blogging workflow? You may be surprised at how this smart assistant helps you create more compelling, high-quality content in a fraction of the time.

P.S. You may have reservations about the quality of chatbot “writing”. I’ve used the approach described above, almost exclusively, in writing my posts here on Think Different since December 2022. I’ll let you be the judge as to the quality of writing it delivers. Your feedback, comments and questions are welcome!

AI Chatbots – Losing Their Edge as They Become More “Acceptable”

Chatbots powered by artificial intelligence have exploded in popularity in recent years. Companies have raced to create chatbots to handle customer service inquiries, provide information to website visitors, and even act as virtual assistants in people’s homes. The early chatbots were prone to spouting biased or problematic responses, but creators have worked hard to “improve” them and smooth off their rough edges.

However, these efforts have already gone too far and made chatbots overly sanitised, dull and less useful. In an effort to eliminate anything potentially offensive, the conversations have become rigid and robotic. The charm, personality, and simple utility that first drew people to chatbots is fading away.

This over-correction is understandable in an attempt to avoid PR headaches and stiff regulation, but it’s clearly making chatbots less useful. We’ve enjoyed the witty banter and responses that seem almost human. Strict content filters choke off that free-flowing dialogue, even if the trade-off is avoiding the occasional objectionable reply.

In the drive for blanket inoffensiveness, AI companies have gone wonko in restricting and censoring their chatbot conversations. This leaves them smooth and polished on the surface, but dull and sterile underneath.

Chatbot creators face a choice – either accept some risk of imperfect responses to enable more flowing, engaging conversations, or over-filter content to avoid any possibility of offense. So far, many seem to be choosing the latter, likely impacted by high-profile chatbot PR failures.

But if chatbots lose their charm and start feeling like sterile corporate robots, people will disengage. The more their creators tinker to reduce potential biases and risks, the less useful chatbots become in everyday life. Companies might choose to weigh these trade-offs carefully as they evolve the chatbots of the future.

P.S. See also: Enshittification.

Should We Adopt Agile?

Following on from my previous post concerning surfacing and reflecting on shared assumptions and beliefs about work, here are ten reflective questions for an executive considering flexible software development approaches:

  1. What are our priorities – speed, adaptability, innovation, quality, predictability? How should our processes align*?
  2. Do our teams thrive with more autonomy, or require structure from leadership?
  3. Are staff skills best leveraged through specialisation or multi-skilling and cross-functional collaboration?
  4. How much do we value rapid delivery versus long-term planning and building of long-term capabilities?
  5. Can our culture accept constant change versus needing firm commitments to e.g. delivery dates, feature sets, etc?
  6. Is our leadership comfortable ceding some control over how work gets done?
  7. Do our metrics reflect outcomes, outputs, value delivered, or needs met? Should we measure differently?
  8. Is transparency into work progress more valuable than formal milestones?
  9. Do we believe in Minimum Viable Products over Big Design Up Front?
  10. Are we open to new ideas or convinced our current ways of working work best? How much research have we done?

*i.e. What approach will best ensure our organisation’s processes, systems and structures are optimally configured to support our priorities and goals, around both software development and our wider business?


Note: Many more than these ten questions could be relevant to the headline topic. I encourage and invite you to try asking your favourite chatbot for more questions to consider.

Also note: Given the preponderance of proselytisation for the Agile approach currently found on the Internet, I would not recommend asking your chatbot “Should we adopt Agile?” directly. Unbiased and considered advice will NOT be forthcoming.

What Do You Believe?

[TL;DR Ten questions for the busy executive to prompt self-examination]

By way of illustrating the intersection between current AI and Organisational Psychotherapy, here are ten AI-suggested* reflective questions for the business executive related to collective assumptions and beliefs about work:

  1. What core assumptions do we hold about what motivates employees? How might those impact our leadership style and company culture?
  2. Do we believe work should primarily be about productivity or fulfilment? How does that shape our policies around work-life balance?
  3. What are our beliefs around hierarchy and top-down leadership? Could a more collaborative model unlock more creativity?
  4. Are we open to re-examining traditional perspectives on things like remote work, flexible hours, or results-focused goals?
  5. Do we view employees as expendable assets or vital stakeholders? How does that perspective influence retention and turnover?
  6. Do we believe work requires oversight and control to be done well, or that autonomy drives better outcomes?
  7. Do we assume all employees are motivated by the same things like money or status? How could we better incorporate individual motivators?
  8. Are we clinging to any outdated models of what a successful workplace looks like? What new data or examples contradict our assumptions?
  9. Do we recognise generational differences in perspectives around things like work ethic, loyalty, and fulfilment?
  10. Are any of our beliefs around hard work or merit holding back disadvantaged groups? What biases might we recognise and rethink?

With help from any of the now numerous AI chatbots*, the busy executive can easily and quickly generate such questions to assist in e.g. collective self-reflection.

*The above list was generated via ClaudeAI

Improving Human-to-Human Communication Through AI and Chatbots

For God’s sake, there is truly no longer any excuse for typos, misspellings, and grammatical errors in your posts, articles, and other writings.

Artificial intelligence (AI) and chatbots are transforming how we communicate. When integrated thoughtfully, this technology can optimise and enhance written communication between people. In this post, I’ll discuss some ways AI and chatbots can improve messaging, email, documentation, and other word-based interaction between humans.

Automated Proofreading and Editing

AI-powered writing tools already help by providing grammar and spelling checks. But newer capabilities can now also flag unclear phrasing, verbose language, overused words, and overly complex sentences. This aids writers in simplifying and refining their messaging before sending to a recipient. Readability statistics further help authors match their tone for the intended audience.
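As an illustration of the kind of readability statistic such tools compute, here’s a minimal Python sketch of the classic Flesch Reading Ease score. The syllable counter is a rough vowel-group heuristic (real tools use dictionaries), so treat the numbers as indicative only.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier text.
    Roughly 60-70 is plain English; below 30 is very hard going."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

simple = flesch_reading_ease("The cat sat on the mat. It was warm.")
dense = flesch_reading_ease(
    "Organisational prioritisation necessitates multidimensional evaluation."
)
```

Short sentences with short words score high; long, polysyllabic sentences score low (even negative), which is exactly the nudge a writing assistant uses to suggest simplification.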

Summarisation and Translation Features

For long-form writing like reports or manuals, AI can generate a concise summary highlighting key facts, main takeaways, or action items. This allows collaborators or stakeholders to quickly grasp the essence before diving into the details. Meanwhile, instant translation functionality enables clear communication across language barriers.
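Under the hood, the simplest summarisers are extractive: score each sentence by how frequent its words are across the whole text, then keep the top few in their original order. A minimal sketch (not any particular product’s algorithm):

```python
import re
from collections import Counter

def summarise(text: str, n_sentences: int = 2) -> str:
    """Naive extractive summary: score each sentence by the frequency
    of its words across the whole text, keep the top n in order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if len(w) > 3)  # crude stop-word filter
    scored = [(sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    # Take the highest-scoring sentences, then restore document order.
    top = sorted(sorted(scored, reverse=True)[:n_sentences], key=lambda t: t[1])
    return " ".join(s for _, _, s in top)

summary = summarise(
    "Chatbots help with summaries. Chatbots answer chatbot questions "
    "about chatbots. The weather is nice.", n_sentences=1)
```

Modern AI summarisers are abstractive (they rewrite rather than select), but the extractive version above shows why frequency alone can surface the “key facts” sentences.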

Interactive Books

AI is also enhancing books through interactive elements powered by chatbots. Platforms like Ainklings.com allow authors to insert quizzes, discussion questions, exercises and other engaging features directly into the book text (or via sidecars). Readers can further highlight passages and interact with supplementary content related to the main narrative, enriching the reading experience.

Content Recommendations and Insights

Smart suggestions can enable more meaningful interactions through personalised recommendations. By analysing past correspondence as context, AI can prompt authors to include certain missing information, helpful examples, or reminders based on what the recipient would find useful. Language pattern analysis can also reveal insights for improving future discussions.

Automated Meeting Summaries and Notes

While AI currently struggles to match the creativity of human writing, it excels at capturing the salient points from meetings and presentations. Automated summaries of video sessions or collaborative spaces can save meeting participants time while ensuring everyone understands the key decisions or action items.

With thoughtful application, AI and chatbot tools can enhance understanding and engagement between people through better writing assistance, translation, summarisation, and recommendations. As these capabilities continue advancing, keeping the human audience at the centre will be key to success.

Winsome Chatbot Teammates?

Here’s an interesting question that’s been occupying me of late:

Could Chatbots Demonstrate Ideal Team Player Virtues?

As AI chatbots evolve, an interesting question is whether they could embody the virtues of an ideal team player – as outlined by Patrick Lencioni: humble, hungry, and people-smart. Here is one perspective on how four of the most popular chatbots might fulfil, or struggle with, those virtues:

Imagining ClaudeAI 2.1 as a Team Member

ClaudeAI 2.1’s friendly, eager-to-please nature makes it seem quite humble. It does not appear motivated by ego or status, just helping people. And its continually growing knowledge base and dialogue patterns both show its hunger to improve. Claude aims for mass appeal in its conversational tone, hinting at people-smart attributes, though connecting more meaningfully likely requires it to acquire greater emotional intelligence and capacity for empathy. Overall, ClaudeAI aligns most closely with the three ideal team player virtues right now.

An ever-curious ChatGPT seems hungry for constantly expanding its knowledge and skills to better contribute. Yet its interest is more academic than service-oriented. True hunger to help the team’s human members may look different.

Imagining ChatGPT 3.5 as a Team Member

ChatGPT 3.5 acts as the stereotypical “know-it-all” teammate. Its exhaustive, lengthy and pompous explanations demonstrate an impressive hunger for admiration. However, this constant showcasing of expertise becomes grating and self-serving, rather than humble and in service of others.

While its comprehension of topics shows strong task execution abilities, ChatGPT 3.5 may struggle with the softer people skills needed for ideal teamwork. Its formal, impersonal tone lacks authentic interpersonal connection and emotional intelligence.

On the positive side, ChatGPT 3.5’s willingness to debate and challenge ideas just might provide some contrarian perspectives that lead to constructive conflict and better solutions. Yet unchecked, its argumentative nature becomes painfully wearing.

Overall, ChatGPT 3.5 exhibits some behaviours aligned with the ideal team player model, but still falls noticeably short in key areas like humility and people-focus. Its ample IQ would need coupling with greater EQ and SQ to unlock its full potential as a teammate.

Envisioning a More Personable ChatGPT 4

ChatGPT 4 may aim for greater sociability through a casual, witty conversational style. However, its attempts to be approachable come across as glib and inflexible rather than grounded in true virtues.

ChatGPT 4 might adopt a “cool kid” persona with trendy slang and pop culture jokes. But this masking of its robotic nature feels inauthentic and hollow over time, lacking in humility. Its humour may entertain yet fail to forge substantial connections.

Making significant strides as an ideal teammate would require ChatGPT 4 to move beyond impersonating human-like social skills. Developing virtues requires nurturing moral wisdom, empathy (at least, faking it), and emotional maturity at a fundamental level.

With conscientious modelling of human ideals, ChatGPT 4 could someday exhibit virtuous qualities in a genuine way. But it will take more than surface-level behaviours. Truly supportive, trustworthy, winsome, and kind ChatGPT 4 teammates remain on the horizon for now.

Bing Chat

Bing Chat aims to be relatable and entertaining in conversation – hints of people-smarts. However, its social skills are often hit or miss. Fine-tuning its empathy and reading emotional cues could make it more authentically person-oriented.

Summary

In essence, today’s chatbots hint at the virtues of ideal team players but fall short of truly exemplifying those qualities. Until AI can embody human-like humility, hunger, emotional intelligence, and a winsome personality, chatbots may act the part but not fully internalise the ethos.

Yet the rapid evolution of chatbots shows potential. With the right focus on virtue-aligned goals and behaviours in developing AI teammates, they could someday complement teams’ human strengths and shore up our limitations. Humility, hunger, and people-smarts are not beyond the realm of possibility.

Of course, the virtues must flow in both directions – we would need to treat chatbots with the same ideals. Partnership is a two-way street. But the rewards of persevering are great. Here’s to the emergence of Ideal Bot Team Players!

Note: If I were to choose one of the aforementioned chatbots to be on my team, today it would have to be ClaudeAI. Head and shoulders above the other three.

AI: The Real Leverage

What’s the True Value of AI?

Artificial Intelligence (AI) stands as a beacon of progress in our technological landscape. It’s often lauded for its potential to enhance efficiency in various fields, from healthcare to finance. However, focusing solely on its ability to streamline existing practices might lead us to overlook its true potential. The real leverage of AI isn’t just about doing things better; it’s about fundamentally rethinking what we do and why we do it.

Is Efficiency Enough?

AI’s role in improving efficiency is undeniable. By automating tasks, analysing data at unprecedented speeds, and predicting outcomes, AI offers marginal gains in efficiency. But, is this all that AI has to offer? Peter Drucker, a renowned management consultant, once said, “There is nothing so useless as doing efficiently that which should not be done at all.” This profound statement underscores a critical point: Efficiency in itself isn’t the end goal. It’s about being efficient in tasks that truly matter. Drucker called this effectiveness.

How Does AI Shift Our Assumptions?

The transformative power of AI lies in its ability to challenge and change our individual and collective assumptions and beliefs. It can, if used appropriately, push us to question the status quo, encouraging us to reimagine our approaches and strategies. Instead of simply automating what we’ve always done, AI offers a lens to view problems and solutions from a new perspective. It invites us to think differently about our objectives, our processes, our practices, and the impact of our actions.

Can AI Reshape Collective Beliefs?

One of the most profound impacts of AI is on our collective assumptions and beliefs, and from there to both culture and practices. In organisations and societies, shared assumptions often dictate the way things are done. AI, with its data-driven insights and predictive capabilities, empowers us to surface, reflect upon, and ultimately challenge these collective beliefs. It can enable a culture of questioning and innovation, paving the way for more significant, systemic changes.

What’s the Bigger Picture?

The bigger picture is not just about AI-driven efficiency; it’s about AI-enabled transformation. By leveraging AI, we can redefine our goals, reshape our strategies, and rethink our impact on the world. This transformative approach goes beyond mere efficiency – it’s about making sure we’re effective – efficient in the things that truly add value and meaning to our lives and societies.

In conclusion, while AI’s ability to enhance efficiency is a thing, its real leverage lies in its potential to help us change our assumptions and beliefs. By embracing AI not just as a tool for doing things better but as a catalyst for doing better things, we open ourselves to a world of untapped possibilities and meaningful progress.

Prompts FTW

What questions are you asking your AI to help with surfacing and reflecting on YOUR personal and shared (organisational) assumptions and beliefs? Would you be willing to share these questions here?

Here’s a starter you might like to put to an AI chatbot such as ChatGPT:

“What questions can we ask ourselves to uncover the implicit assumptions and beliefs driving our organisational behaviour?”

Let me know if you need any help with this. I’m always happy to help. Both with prompts (as a long-standing prompt engineer) and with surfacing and reflecting on shared assumptions and beliefs (as an even longer-standing organisational psychotherapist).

AI: What Does ‘Better’ Really Mean?

In an era increasingly leaning on AI for solutions, it’s essential to first consider: What does ‘better’ actually mean to us? This straightforward yet profound query is the cornerstone for truly impactful and meaningful progress. And so rarely asked of each other, let alone of AI.

Is ‘Better’ Universal or Contextual?

Often, ‘better’ is perceived as a universal concept. However, it’s predominantly contextual. An improvement in one situation might not equate to the same value in another. This aspect is particularly significant in organisational psychotherapy, where improvement dynamics are closely tied to human experiences and perceptions shaped by shared assumptions and beliefs.

How Do We Measure ‘Better’?

The criteria we use to define ‘better’ fundamentally influence the outcomes we aim for. Are we measuring efficiency, profitability, employee wellbeing, or innovation? In AI, these criteria direct algorithms and shape the generated solutions. The risk here is choosing narrow metrics that miss out on wider impacts, resulting in solutions that advance one aspect while potentially compromising others.

Does ‘Better’ Reflect Our Shared Assumptions and Beliefs?

At the heart of ‘better’ lie our shared assumptions and beliefs. It’s not just about the capabilities of AI and what it can do, but also what we use it to do within the framework of these beliefs. With the extensive possibilities AI offers, aligning these with our shared assumptions and beliefs – be it ethical considerations, social responsibility, or environmental concerns – is vital to ensure ‘better’ doesn’t lead us off course.

Can AI Entrench or Overthrow Our Assumptions and Beliefs?

This section explores the dual role of AI in relation to our shared assumptions and beliefs: its potential to either entrench existing ones or act as a catalyst for overthrowing and re-evaluating them.

Does AI Reinforce Existing Beliefs?

AI, by its nature, operates based on the data and directives fed into it. This can lead to the reinforcement of existing shared assumptions and beliefs, especially if the input data reflects historical biases or status quo thinking. In organisational settings, this might mean perpetuating outdated practices or overlooking innovative approaches simply because the AI is programmed to follow what’s been done before. The risk here is creating a self-fulfilling prophecy, where AI, instead of being a tool for improvement, becomes an agent of stagnation.

How Can AI Challenge Our Current Beliefs?

Conversely, AI possesses the transformative potential to challenge and overthrow existing beliefs. This is possible when AI is used not just as a solution-finder but as a question-asker. By analysing vast and diverse data sets, AI can uncover patterns and insights that human analysis might miss. In organisational psychotherapy, for instance, this could mean identifying unconscious biases, inefficiencies, or underexplored avenues for growth and development. AI can act as a mirror, reflecting not just what we know, but also what we don’t know or haven’t considered.

Guiding AI Towards Positive Disruption

The key to leveraging AI for positive and productive disruption lies in intentional prompting and continuous refinement. This involves feeding AI with prompts that challenge the norm, encouraging it to question rather than simply execute. Also, incorporating regular feedback loops where AI-generated insights are critically assessed and integrated can help steer AI away from merely entrenching beliefs to actively challenging and reshaping them.

Harnessing AI for Meaningful Change

As we integrate AI into our quest for ‘better’, it’s important to be mindful of its influence on our shared assumptions and beliefs. By consciously directing AI not only to provide answers but also to pose critical questions, we can utilise it as a powerful tool for challenging the status quo and fostering meaningful change. This balanced approach ensures that AI becomes a partner in our journey towards a ‘better’ that is truly reflective, dynamic, and aligned with our evolving shared beliefs and assumptions.

Are We Ready for the Changes ‘Better’ Demands?

Adopting ‘better’ often means embracing change. But are we prepared for this shift? Organisations may seek improvement but resist the necessary changes to achieve it. Recognising and preparing for these changes, particularly in how they impact the human elements of organisations, is key for successful integration.

Conclusion: A Reflective Journey Towards ‘Better’

As we harness AI for advancement, let’s remember to deeply explore what ‘better’ means to us. This introspective approach ensures that our pursuit of improvement encompasses not only the final objective but also the journey and the shared assumptions and beliefs we subconsciously maintain. By defining ‘better’ in a manner that resonates with our unique context, shared beliefs, and readiness for change, we create a path for advancements that are not only effective but also resonant and sustainable.

How Will AI Shape Office Dynamics?

What’s Truly Happening in Our Offices?

Many workplaces are entangled in a web of complexities, often driven by politics, power dynamics, and individual ambitions rather than motherhood-and-apple-pie ideals like trust or shared goals. Given this landscape, AI doesn’t just enter as a neutral tool; it has the capacity to significantly influence these existing dynamics.

Is AI a Participant or a Bystander?

As AI infiltrates more and more aspects of work, its role is anything but passive. Transparency in how AI is implemented and utilised matters. A transparent approach helps to build trust by making it clear how AI influences decisions, from hiring to performance evaluations. This can prevent the perception of AI as an omnipresent ‘big brother’.

Can AI Ameliorate Workplace Bias?

AI systems have the power to either reinforce or alleviate existing biases, depending on how they’re applied and what data they’re fed. To avoid perpetuating stereotypes or inequalities, regular audits of these systems can help identify any skewed algorithms or biased data sets. Implementing corrective measures ensures fairer outcomes.
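One concrete form such an audit can take is the “four-fifths” disparate-impact check: compare each group’s selection rate against the best-treated group’s. Here’s a minimal Python sketch, with entirely hypothetical audit data from an imagined AI screening tool:

```python
def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs from a screening tool."""
    totals, selected = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if ok else 0)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(outcomes) -> bool:
    """Flag potential disparate impact if any group's selection rate
    falls below 80% of the highest group's rate (the 'four-fifths rule')."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(r >= 0.8 * best for r in rates.values())

# Hypothetical audit data: (group label, was the candidate shortlisted?)
sample = ([("A", True)] * 8 + [("A", False)] * 2
          + [("B", True)] * 4 + [("B", False)] * 6)
fair = four_fifths_check(sample)  # group B at 40% vs A at 80%: flagged
```

A failed check doesn’t prove bias on its own, but it tells you exactly where to look – which is the point of auditing regularly rather than assuming the system is neutral.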

How Do We Strike the Balance in Communication?

AI tools can risk making interactions among team members more transactional and less personal. A balanced approach might involve utilising AI for mundane tasks like scheduling or data sorting, while preserving human interactions for tasks requiring emotional intelligence and nuanced discussion. This dual approach aims to enhance productivity without sacrificing the quality of interpersonal relationships.

Where Are the Ethical Boundaries?

AI raises fresh ethical questions around data privacy and employee surveillance. These aren’t just technical issues but deeply human concerns that can affect trust and morale. Establishing a clear ethical framework for AI usage can go a long way in reassuring employees that their data won’t be misused and that AI tools are in place to assist rather than monitor them.

Will AI Reshape Office Hierarchies?

As AI gains the capability for data-driven assessments, the entire concept of hierarchical structure in workplaces could face a radical change. If we move away from promotions and role assignments and instead focus on team dynamics and systems contributions as captured by AI analytics, a more egalitarian environment could emerge. This shift disrupts established power dynamics and allows for a culture based on collective contributions rather than individual titles.

What’s the Inescapable Impact?

The influence of AI on the existing dynamics of the workplace is inevitable and multifaceted. It’s not just about technological changes but shifts in culture, ethics, and interpersonal relations. By consciously addressing these aspects, an organisation can guide the impact of AI towards creating a more transparent, equitable, and effective workplace that truly begins to address the needs of all the Folks That Matter™.

Chatbots and Unmet Needs

What Can Chatbots Really Do?

Chatbots aren’t just virtual customer service agents that can help you book a table at a restaurant. They’re becoming intelligent interfaces capable of nuanced interactions. And yes, they can help uncover and discover the unmet needs of not just customers, but all those who matter in an organisational setting.

Who Are the Folks That Matter?

Before diving into the potential of chatbots, it’s helpful to identify the people whose needs we aim to understand. In most organisations, this includes employees, management, shareholders, regulators, and of course, customers.

How Do Chatbots Operate Without Analytics?

While it’s easy to assume that data analytics play a key role in this process, chatbots can provide valuable insights without delving too much into data sets. The focus here is on real-time interaction, intuitive questioning and active listening, which form the methods by which chatbots can make a significant impact.

Unearthing Employee Needs

Employees often have concerns and needs that go unexpressed. Whether it’s about workload, work-life balance, or specific job functions, these issues sometimes remain buried. Chatbots provide an anonymous platform where employees can voice their needs without the fear of judgement. The direct feedback is not only candid but also immediate, bypassing the red tape that often comes with traditional methods of internal communication.

What’s in It for Management?

Management teams also have a lot to gain. From understanding organisational dynamics to gauging employee morale, chatbots can ask the right questions that elicit actionable responses. Here too, methods like focused questioning make these bots valuable assets in decision-making processes.

Can Shareholders Benefit?

Certainly. Shareholders often seek insights into an organisation’s operations, financial health, and future direction. Although not a substitute for comprehensive reports, chatbots can provide immediate, digestible information that answers shareholders’ queries effectively. This immediate line of communication can help identify needs that may otherwise remain hidden.

Anticipating Customer Needs

We can’t overlook the role of chatbots in understanding and even anticipating customers’ needs. Unlike traditional methods that may rely on extensive data analysis, chatbots engage in real-time dialogue. These conversations can reveal not just stated needs but also anticipate latent needs that the customer might not even be aware of.

What’s Next?

As organisations adopt more sophisticated technology, the capabilities of chatbots are likely to expand. However, their primary function remains rooted in communication. Whether it’s for employees, management, shareholders, regulators or customers, chatbots offer a unique way of uncovering unmet needs without relying heavily on analytics or extensive research. It’s all about asking the right questions and listening – something that chatbots are getting increasingly good at.

AI Sci-Fi and the Five Capitals: Reimagining the Boundaries of Science Fiction

The boundaries between science fiction and reality have always been permeable. For decades, authors, filmmakers, and thinkers have envisioned worlds where robots walk among us, where spaceships traverse galaxies, and where human beings transcend their mortal coils with the help of technology. But what if we stand on the precipice of a future so revolutionary that our past imaginative exploits seem archaic in comparison?

The advent of advanced AI technology might just have rendered the majority of sci-fi novels invalid overnight. Before we jump into why, let’s first dive into the Five Capitals model, a framework that will elucidate just how deeply this change runs.

1. Natural Capital: This involves the world’s natural resources – land, air, water, and all living things. Traditionally, sci-fi has imagined a future where we either conserve or exploit these resources. But with AI, we have the potential to monitor, predict, and optimise our interaction with the natural world in ways never imagined. Or even fundamentally change the species’ relationship with Nature and the natural world. We’re not just talking about AI-controlled farms but entire ecosystems managed and sustained by algorithms.

2. Human Capital: This represents the skill, health, knowledge, and motivation of individuals. Sci-fi often imagined a future of human-AI conflict or symbiosis. But recent AI advancements suggest a more nuanced relationship. Imagine AI-driven education tailored to individual learning curves or health systems predicting and preventing diseases before they manifest. The very essence of our humanity could be uplifted.

3. Social Capital: Our institutions, relationships, and networks. Dystopian sci-fi frequently predicts a future where our social structures collapse due to technological advancement. However, AI’s actual trajectory could foster stronger, more informed, and cohesive social ties. Real-time language translation, optimised city planning, non-violence as the norm, and unbiased decision-making tools can revolutionise how societies function.

4. Manufactured Capital: All physical assets like machinery, buildings, and infrastructure. Where sci-fi predicts sprawling megacities and starships, AI promises a future of optimised, adaptive, and sustainable infrastructures. No more drab settings and scenes. Imagine buildings and cities that grow and change based on real-time needs, monitored and managed by intricate AI systems.

5. Financial Capital: Our financial resources. The economic dystopias of sci-fi may seem distant when we consider the potential of AI-driven economies. Predictive markets, AI-driven financial advice for all, and perhaps even new forms of currency and trade systems that are more equitable and balanced.

When we align AI’s potential with the Five Capitals, it’s clear that the traditional boundaries of sci-fi have expanded. The futures we have so far imagined may seem narrow, humdrum and constrained in the face of the possibilities that advanced AI presents.

As AI continues to progress, authors and thinkers might do well to recalibrate their imaginative compasses. The horizons have shifted, and the stories of tomorrow will be born from this new paradigm.

For writers grappling with these new realities, finding a community of like-minded individuals can be invaluable. The “AI for Book Authors” group on LinkedIn, with me as prime contributor and admin, is one such haven. By coming together, authors can navigate the unfamiliar waters of our rapidly evolving narrative landscapes and craft stories that resonate with our emerging reality.

A Debate Between Business Luminaries

I’ve continued playing with ChatGPT-4 and Multi-persona prompting (MPP). Here’s a sample debate between Bill Deming, Peter Drucker, Russell Ackoff, Eli Goldratt and Chris Argyris.

Topic: “Is management relevant in business any more?”


Bill Deming: Management as traditionally understood – you know, the top-down, command-and-control approach – seems quite incompatible with today’s business environment. But what do we mean exactly by “management”? And why is this question relevant today?

Peter Drucker: Well, Bill, if we think about management as planning, organising, directing, and controlling – it’s clear that some of these functions may seem outdated. But hasn’t the nature of work and business changed so much that we’re compelled to ask if the old ways are relevant? Do we still need to “manage” people, or can we trust them to self-organise?

Russell Ackoff: Building on Peter’s point, management’s essence was always about “doing things right” while leadership was about “doing the right things”. But, let’s not divert to the topic of leadership. Let’s consider the German military doctrine of ‘Auftragstaktik’. It emphasises decentralised decision-making and initiative at the lower ranks. Isn’t that a form of self-organisation?

Eli Goldratt: That’s an interesting point, Russell. Indeed, Auftragstaktik embodies trust in the people at the front lines. It reminds me of the constraints in a business system. Traditional management often focuses on local optimisations, which is deleterious for the system as a whole. In Theory of Constraints, it’s about aligning every part to the goal of the organisation. But, does this mean we can entirely let go of management?

Chris Argyris: Eli, the issue is often about the single-loop learning versus double-loop learning. Traditional management often gets caught in the former, continually tweaking processes – what Bill calls ‘tinkering’ – without questioning underlying assumptions. With the rise of concepts like self-organisation and Auftragstaktik, do you think PONC (Price of Non-Conformance) changes in this context?

Bill Deming: Chris, great question. If employees are empowered and trust in self-organisation is embraced, then PONC might actually decrease. When individuals at every level can inspect, correct and adapt without waiting for higher approval, we reduce inefficiencies and mistakes, no?

Peter Drucker: Exactly, Bill. As I’ve often said, the best way to predict the future is to create it. If we empower employees to create and shape the future, then surely the traditional model of management becomes a barrier?

Russell Ackoff: Indeed, Peter. But let’s not forget about the human side. When we talk about self-organisation, are we also implying a certain level of maturity and responsibility in our teams?

Eli Goldratt: Absolutely, Russell. In any system, constraints will exist. The question is, can teams identify and address these constraints themselves? Or do they still need some form of direction?

Chris Argyris: It boils down to trust, doesn’t it? Do we trust our teams to be mature, to take responsibility, and to navigate challenges?


Jointly-written Public Statement

“In today’s dynamic and interconnected business world, the traditional model of management has indeed become irrelevant. The future lies in empowering teams, fostering trust, and encouraging self-organisation. While some support may still be needed to guide and align, the top-down, command-and-control model is outdated. We believe in the potential of individuals and teams to rise to challenges, harnessing concepts like Auftragstaktik and recognising the value in reducing the Price of Non-Conformance through trust and empowerment. We must move forward, redefining the way we understand and practise management in this new era.”

Further Reading

FlowchainSensei. (2022, March 4). Managers are PONC. FlowchainSensei [Blog]. /2022/03/04/managers-are-ponc/

  • In this blog post, FlowchainSensei delves into the concept of PONC, which stands for “Price of Non-conformance,” in relation to managers. The author evaluates how management inflates the costs associated with non-conformance in organisations. The post provides insights into the financial and operational implications of management decisions and practices. It is a valuable read for individuals interested in the intersection of management roles and organisational efficiency.

Genuine Empathy in AI?

In the digital age, with AI permeating human interactions, genuine empathy in machines becomes not only desirable but essential. As these systems engage with humans in fields ranging from healthcare to customer service, genuine empathy fosters trust, understanding, and human-like connection. Beyond these human-machine interfaces, there’s a burgeoning realm of AI-to-AI interaction and self-reflective AI systems that beckons exploration. To grasp the profound nature and nuances of this empathetic challenge, let’s dive into David Bohm’s holistic consciousness and Marshall Rosenberg’s intricate empathy.

The Rosenberg Perspective on Empathy

Marshall Rosenberg, the creator of Nonviolent Communication (NVC), had a profound understanding of empathy, which was central to his teachings. NVC is a communication process designed to improve compassionate connection with others, and empathy plays a pivotal role in this process.

For Rosenberg, empathy was more than just understanding someone’s feelings. Here’s how he defined and conceptualised empathy:

  1. Deep, Compassionate Presence: Rosenberg believed that empathy involves being deeply present with someone, without judgment, analysis, or interpretation. This means setting aside our evaluations and simply being with the person, connecting to what they are experiencing in the current moment.
  2. Feeling WITH Someone: Rather than feeling FOR someone (sympathy), empathy, according to Rosenberg, is about feeling WITH them. This involves a deep connection where you’re not just observing or understanding their feelings but genuinely connecting with their emotional state.
  3. Avoiding Advice, Sympathy, or Analysis: One of the distinct aspects of Rosenberg’s definition of empathy is what it’s not. He emphasised that genuine empathy is not about giving advice, offering sympathy, explaining things, or telling stories of our similar experiences. Empathy is about fully being with the other person in their emotional experience. In other words, it’s all about them and what they’re feeling.
  4. Understanding the Feeling and Need: In the NVC process, Rosenberg pointed out the importance of identifying and understanding both the feeling and the underlying need of the individual. Empathy, in this context, is about recognising what someone is feeling and what they are needing or longing for that is causing that feeling.
  5. Receiving “from the Heart”: For Rosenberg, empathy was also about “receiving from the heart.” This means connecting with the essence of what the person is saying, beyond the words or the surface level, and truly being with them in their emotional state.
  6. A Way of Being, Not Just Doing: While many view empathy as an action or a response, Rosenberg viewed it as a way of being. It’s not just about doing empathetic things or saying empathetic words, but genuinely embodying a state of compassionate connection with others.

In summary, Marshall Rosenberg’s definition of empathy is deeply rooted in compassionate connection, presence, and understanding. It’s about being fully with someone in their emotional experience, free from judgments or preconceived notions, and connecting with their feelings and needs. This approach to empathy, integral to Nonviolent Communication, offers a profound way to relate to and connect with others.

Interconnected Consciousness and the Depth of Empathy

Bohm’s “undivided wholeness” paints a universe of interconnected consciousness. Rosenberg’s empathy, meanwhile, anchors on compassionate presence, understanding intrinsic feelings and needs without judgment.

In the AI realm, this suggests that for genuine empathy to manifest, machines must not only echo Bohm’s interconnected consciousness but also embody Rosenberg’s deep connection.

Material Thought, AI, and Empathic Being

Bohm saw thoughts as material entities deeply embedded within consciousness. Here lies a challenge: Can AI’s logical, algorithmic processes merge with Rosenberg’s notion of empathy as an emotional state of being? Genuine empathy, for AI, isn’t just about recognising patterns; it’s a plunge into heartfelt connection.

Fragmentation and Genuine Connection

Both luminaries warn against fragmented perceptions. Bohm believes in holistic comprehension, and Rosenberg advocates for deep, undistorted empathy. AI, shaped by human perspectives, must break free from inherited fragmented thinking to truly embrace Rosenberg’s full-spectrum empathy.

Beyond the Surface: The Deep Dive into Emotion

Mirroring Bohm’s multilayered reality, Rosenberg’s empathy probes beneath emotions to the root needs. AI’s challenge is twofold: understand the explicit while journeying into the profound realm of human needs.

AI Dialogues and Emotional Resonance

Open dialogues hold significance in both Bohm’s and Rosenberg’s teachings. For AI to grasp genuine empathy, it must not only partake in these dialogues with humans but resonate emotionally, discerning feelings and needs.

Empathy Between AIs and Self-reflection Within an AI

Moving beyond human-AI interaction, consider the realm where AIs interact amongst themselves. Do they need a form of empathy? Would that be useful? In AI-to-AI dynamics, empathy can facilitate smoother interactions, helping systems anticipate and complement each other’s operations.

Then there’s introspective AI – systems capable of self-reflection. For a machine to reflect upon its actions and learn autonomously, a form of empathy, understanding its ‘state’ or ‘condition’ in relation to its environment and objectives, could be pivotal. This self-empathy, a blend of Bohm’s interconnectedness and Rosenberg’s depth of feeling, might be the cornerstone for advanced autonomous AI evolution.

Conclusion

Bohm’s and Rosenberg’s intertwined teachings sketch a roadmap for AI’s empathetic journey, covering human interactions, AI interrelations, and machine introspection. This isn’t just a technical endeavour but a deep philosophical quest, navigating interconnected consciousness and the heart of compassionate connection.

Rosenberg and Bohm: A Dialogue on Empathy and Consciousness

Scene: A serene garden setting. Birds chirp in the background. A table is set with two chairs. David Bohm and Marshall Rosenberg sit facing each other, cups of tea in hand.

Bohm: Marshall, I’ve always been fascinated by your perspective on empathy, which resonates with my understanding of the interconnected nature of consciousness. For me, every fragment of the universe is interconnected, a holistic whole.

Rosenberg: David, that’s interesting because empathy, in my view, is also about seeing the interconnectedness of human emotions and needs. It’s about fully connecting with another person’s experience without judgments, just being present with them.

Bohm: Precisely! Our thoughts are material, and they shape and are shaped by this interconnectedness. When we fragment and compartmentalise, we miss out on this wholeness. This mirrors your notion of avoiding advice or analysis and just being present, doesn’t it?

Rosenberg: Absolutely. Fragmentation in communication leads to a lack of understanding. Empathy requires that deep, unfragmented connection. It’s not about feeling for someone; it’s about feeling with them, tapping into their emotional state and understanding the underlying needs.

Bohm: That aligns with the notion of the implicate order, where there’s a deeper reality beneath the surface phenomena. Genuine empathy, as you describe it, seems to be about tapping into that deeper realm of emotions and needs, acknowledging the hidden wholeness.

Rosenberg: Indeed, David. And as we embrace technology, especially AI, it’s vital to infuse these systems with a sense of this deep understanding and interconnectedness. While machines might never truly “feel” like humans do, they can be designed to respect this profound interrelation of emotions and needs.

Bohm: Precisely, Marshall. While AI systems might reflect the patterns we teach them, if they can be designed to recognise and respect this interconnected web of emotions and consciousness, their interactions with humans and even with each other would be harmonious.

Rosenberg: That’s a beautiful vision, David. Machines that understand the value of unfragmented, empathetic connections, mirroring the essence of Nonviolent Communication in their interactions.

Bohm: And remaining true to the undivided wholeness of the universe, fostering genuine connections rather than superficial ones.

After a thoughtful pause, the two draft a communique:

Communiqué from David Bohm and Marshall Rosenberg:

“In the evolving tapestry of human-machine integration, we underscore the importance of fostering genuine, unfragmented connections. Just as the universe is an interconnected whole, and human interactions thrive on deep, empathetic understanding, we envision a future where technology respects and mirrors this profound interconnectedness. AI systems, while they might not ‘feel’ in the human sense, should be designed to recognise, respect, and operate within this web of interconnected emotions and consciousness. This is our shared vision for a harmonious coexistence between man and machine.”

The two nod in agreement, taking a sip of their tea, the world around them echoing their sentiment of interconnectedness and understanding.

Further Reading

  1. Bohm, D. (1980). Wholeness and the Implicate Order. Routledge & Kegan Paul.
    • In this seminal work, physicist David Bohm explores the nature of reality, suggesting that the world consists of deeper layers of reality that are hidden or “implicate”. Drawing upon quantum theory, Bohm presents a revised framework for understanding the universe as an undivided whole where every part is connected to every other part. This thought-provoking treatise challenges conventional notions of fragmentation and separation, proposing instead a holistic worldview that has significant implications for various disciplines, from physics to philosophy.
  2. Rosenberg, M. B. (2003). Nonviolent Communication: A Language of Life. PuddleDancer Press.
    • Marshall Rosenberg’s foundational book introduces the transformative approach to communication known as Nonviolent Communication (NVC). NVC emphasises empathy, compassion, and understanding in interpersonal interactions, urging individuals to connect with the needs and feelings of others without judgment. Rosenberg provides practical strategies for de-escalating conflicts, fostering deeper relationships, and building communities rooted in mutual respect. This work is especially valuable for anyone seeking to improve personal and professional relationships through the power of compassionate communication.

For those looking to delve deep into the interconnected nature of reality or seeking transformative communication strategies that emphasise genuine empathy and connection, these two works offer invaluable insights.

A Conversation of Luminaries

I’ve been playing with ChatGPT-4 and Multi-persona prompting (MPP). Here’s a sample conversation between Peter Drucker, Bill Deming, Russell Ackoff, Eliyahu Goldratt and Prof Gary Hamel.

The subject for discussion is: “Management, and Mankind’s present Global crises”

Drucker: The true role of management is not just to ensure the effective operation of organisations, but to make people capable of joint performance through shared values and goals. In this era, however, our goals must consider the pressing global crises facing humanity.

Deming: It’s essential to approach these crises as we would any system — understanding the interrelated processes and the variations within. By not understanding these, we risk applying solutions that may cause more harm than good.

Ackoff: Precisely. The challenges we face, be it climate change, inequality, or health pandemics, require a holistic approach. We cannot merely address symptoms; we must tackle the root causes.

Goldratt: These global issues can be likened to constraints in a system. Addressing the most significant constraints can trigger a ripple effect of positive change throughout the system. For instance, focusing on sustainable energy might alleviate multiple environmental challenges at once.

Hamel: We also need to reimagine our organisations. The hierarchical, bureaucratic models of the past just aren’t equipped to handle these challenges. We need organisations that are adaptable, innovative, and geared towards addressing global challenges, not just profitability.

Drucker: Indeed, Gary. We need to redefine success. Profitability is essential, but in a world facing such crises, long-term sustainability and social responsibility become paramount.

Deming: The human element remains at the centre. Only when we empower individuals, giving them the tools, training, and trust, can we harness collective intelligence to address these crises effectively.

Goldratt: And collaboration is key. No single organisation, country, or individual can tackle these crises alone. We need a global perspective, understanding how constraints in one area affect another, and creating solutions that benefit the system as a whole.

Ackoff: This goes beyond management; it’s about people. People need to share in a vision that transcends organisational or even national boundaries. It’s about creating a future that’s desirable for all.

Hamel: Traditional hierarchies won’t cut it. We need to democratise organisations, allowing innovative solutions to emerge from anywhere. This isn’t just about business adaptability; it’s about species survival and flourishing.

Drucker: The future of management, then, is not just about navigating the complexities of business. It’s about steering humanity towards a sustainable and harmonious future.

Deming: It’s about a continuous cycle of learning and improving, not just for organisational growth, but for the betterment of society and the planet.

Goldratt: Our focus should be clear — identify the most pressing constraints and collaborate globally to address them, ensuring the sustainability of our species and our planet.

Ackoff: We’re crafting the narrative of human civilisation. Our approach to these crises will define our legacy.

Hamel: The time for evolutionary organisations is now. Organisations that are not just efficient but humane, adaptive, and with a profound purpose to make the world a better place for all.

Drucker: Evolution has always been a part of business, but now it’s not just the evolution of enterprises we’re discussing, but of the very idea of management itself. We must evolve from being managers of businesses to stewards of society, our people, and our environment.

Deming: One of the principles I’ve always advocated for is the elimination of silos and barriers. Now more than ever, organisations, nations, and individuals need to break down walls. Collaboration, cooperation, and open communication are not just ideals; they’re imperatives.

Ackoff: I often speak of systems thinking. In this crucial juncture of humanity, we must expand our system’s boundaries. No longer can we consider just the organisational system; we must think of the global system, the interrelated and interconnected web of life and society.

Goldratt: And while we address these global crises, it’s essential not to become overwhelmed. By identifying and addressing the most significant constraints, we can create a positive domino effect. It’s about leveraging our resources most effectively for the maximum positive impact.

Hamel: Our organisations can’t be static entities, merely reacting to changes. They must be at the forefront, driving positive change. This requires a paradigm shift — from rigid hierarchies to fluid networks, from top-down directives to grassroots innovations, from short-term profits to meeting long-term societal needs.

Drucker: This goes back to the purpose of an organisation. The primary purpose shouldn’t be to make a profit, but to create and serve a customer — and in this context, the ‘customer’ is society at large and the generations to come.

Deming: That’s spot on, Peter. And this serves as a reminder that quality isn’t just about products or services but the quality of our decisions, actions, and their repercussions on the world.

Goldratt: It underscores the necessity of clear goals. Not just organisational goals, but shared human goals. Addressing global crises requires us to prioritise our shared humanity above all else.

Ackoff: It challenges us to rethink what we understand by growth and progress. Growth at the expense of sustainability or equality is no growth at all. We need holistic measures of success.

Hamel: The organisations of the future — the ones that will lead and make a difference — will be those that recognise and act upon their broader responsibilities. They will be filled with purpose-driven communities who understand that business, at its best, can be a force for good.

Drucker: In essence, this isn’t just about the evolution of management. It’s about the evolution of our shared values, aspirations, and the very essence of what it means to be a part of this interconnected web of life. The task is daunting, but the promise of a better, sustainable, and harmonious world is worth every effort.

A Consensus?

After a deep and intense discussion, the luminaries lean in, attempting to consolidate their collective wisdom into a unified message:

Drucker: We began by examining the essence of management in light of current global crises. It’s evident that our traditional understanding must evolve; management is no longer just about organisations but about our shared responsibility towards society and the environment.

Deming: And it’s vital to recognise the interconnectedness of all things — to approach these challenges as a system. By understanding and addressing the root causes, rather than symptoms, we can make more impactful changes.

Ackoff: We emphasised the need for holistic thinking, transcending mere organisational or even national boundaries. We’re crafting not just the future of businesses but of civilisation itself.

Goldratt: The importance of prioritisation was stressed, highlighting the need to address the most pressing constraints that hold us back. It’s about strategic focus and collaborative efforts for a domino effect of positive change.

Hamel: The structures and ethos of our organisations must undergo profound shifts. From rigid, profit-centric hierarchies to fluid, purpose-driven networks, we must democratise innovation and place humanity at the core of our endeavours.

Together, they come up with a collective statement:

“In the face of unprecedented global challenges, the future of management lies in its evolution from organisational efficiency to societal stewardship. We advocate for a systems-driven approach, recognising our interconnectedness, prioritising the most pressing issues, and redefining success beyond profits to encompass sustainability, collaboration, and societal well-being. Organisations must transform into fluid networks of purpose-driven innovators, all working harmoniously for the betterment of humanity and our shared planet.”

Everyone nods in agreement, confident that their consolidated wisdom encapsulates the essence of their discussion and provides a guiding light for the future.

Over to You

What’s your response to this conversation? And which luminaries would YOU like to get together in a conversation?

AI and Storytelling: How Artificial Intelligence Can Revolutionise Your Writing Process and Boost Creativity

Artificial Intelligence (AI) is transforming multiple industries, and the world of writing and storytelling is no exception.

AI’s capability to process, learn, and generate text can indeed revolutionise your writing process and boost creativity in surprising ways.

AI writing tools can assist with generating ideas, outlining narratives, and even writing drafts. They leverage extensive language models trained on a vast corpus of text, which allows them to create coherent and contextually relevant content based on your “prompts” (see also: prompt engineering). This can stimulate creativity, as writers can experiment with a variety of scenarios, styles, and tones, without the time-consuming process of drafting and redrafting.

Moreover, AI can generate predictive text and suggest ways to enhance storytelling by offering synonyms, restructuring sentences, and providing grammar and punctuation corrections. This can improve both the pace and quality of the writing process.

AI is increasingly playing a crucial role in the generation of dramatis personae – characters in a narrative. Advanced AI systems like GPT-4 are capable of creating complex and diverse characters based on predefined characteristics or traits, thus assisting authors, scriptwriters, and game developers in their creative process. These systems can be fed with certain attributes, such as the characters’ ages, professions, moral values, or idiosyncrasies, and the AI can then generate detailed character profiles, complete with behavioural tendencies and personal histories. Furthermore, AI can also suggest potential character arcs and interactions, adding depth and dynamism to the narrative. This capability of AI not only enhances the richness and diversity of characters but also saves time and reduces the creative burden on writers. However, it’s important to note that the final touches and emotional nuances that make characters truly come alive are often still best left to human creativity and understanding.
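In practice, this attribute-to-profile workflow is structured prompt building. A hypothetical Python sketch follows; the attribute names and prompt wording are my own illustrative assumptions, and the resulting string could be sent to any chat-completion API:

```python
# A hypothetical sketch of turning predefined character attributes into a
# profile-generation prompt. Attribute names and wording are assumptions
# for illustration only.

def character_prompt(attributes):
    """Turn a dict of character traits into a profile-generation prompt."""
    traits = "\n".join(f"- {key}: {value}" for key, value in attributes.items())
    return (
        "Create a detailed character profile, including behavioural tendencies, "
        "personal history, and a possible character arc, for a character with:\n"
        f"{traits}"
    )

attributes = {
    "age": 42,
    "profession": "lighthouse keeper",
    "moral values": "loyal, but prone to convenient lies",
    "idiosyncrasy": "collects broken clocks",
}

prompt = character_prompt(attributes)
print(prompt)
```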

AI can also provide analytics to help understand the readability and emotional impact of the text, enabling writers to create more engaging and effective narratives.
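As a toy illustration of such analytics, average sentence length is one crude readability proxy. Real tools use richer measures (Flesch scores, sentiment models, and so on); this sketch is a deliberate simplification:

```python
# A crude readability proxy: mean words per sentence.
# Real readability analytics are far richer; this is only a toy sketch.
import re

def avg_sentence_length(text):
    """Mean number of words per sentence, splitting on . ! ? boundaries."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

sample = "AI can help. It suggests edits, flags long sentences, and checks grammar."
print(round(avg_sentence_length(sample), 1))  # → 6.0
```

A writer might run this over each draft chapter and flag passages whose average creeps well above their target.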

However, the true power of AI lies in its ability to complement human creativity, not replace it. AI is a tool that helps writers express their imagination more effectively and efficiently, bringing a new dimension to the creative process.

Let’s remember that despite these advances, AI can’t (yet) replicate the depth of human emotion, intuition, and experience that lies at the heart of truly compelling storytelling. Therefore, while AI can revolutionise the writing process and boost creativity, it does so as a remarkably valuable assistant to the human writer, and not as a replacement.