Von Scharnhorst and Auftragstaktik

Lessons for Collaborative Knowledge Work

Gerhard von Scharnhorst

Origins and History

In the aftermath of Prussia’s crushing defeat by Napoleonic forces in 1806, the military theorist Gerhard von Scharnhorst sought to reform the once-formidable Prussian army. One of his key innovations was the concept of “auftragstaktik”, or “mission-type tactics”.

Rather than dictating precise orders from a central command, auftragstaktik pushed decision-making down to lower levels. Commanders at all levels were given a general objective, the resources needed to accomplish it, and the freedom to determine how best to achieve the mission based on their own initiative and circumstances on the ground.

The philosophy empowered front-line soldiers and unlocked the full creative and adaptive potential of the force. The reformed Prussian army achieved remarkable success against Napoleon’s forces at Leipzig (1813) and Waterloo (1815), and auftragstaktik – matured later in the century under Moltke the Elder – played a major role in the stunning Prussian victory over France in the Franco-Prussian War of 1870-1871.

Applicability to Collaborative Knowledge Work (CKW)

While developed for military operations, the core principles of auftragstaktik hold powerful lessons for collaborative knowledge work (CKW) in the modern economy – whether in software, product development, research, or other complex team-based environments.

Like 19th-century combat, these domains are rife with volatility, uncertainty, complexity, and ambiguity. Central planning and rigid hierarchies falter in the face of rapidly changing circumstances and information asymmetries between layers of management.

Auftragstaktik prescribes:

  • Clear overarching strategic objectives (the “mission”)
  • Dispersed decision rights to those closest to the information
  • An emphasis on individual initiative within defined constraints
  • Bi-directional communication and coordination between layers

Modern Implementations

Many of today’s most innovative companies have embraced versions of auftragstaktik – if not in name, then in substance:

Spotify: The music streaming giant pioneered an organisational model of small, autonomous “squads” each with a clear mission aligned to company strategy. Squads have end-to-end responsibility for their areas and coordinate through linking elements like “tribes,” “chapters,” and other forums.

U.S.M.C. and SpecOps: Modern special operations embrace a philosophy of “centralised command, decentralised control”, a.k.a. mission-type tactics. While top leaders set the operational vision, small teams on the ground have maximum leeway and support to accomplish their missions as they see fit.

Maximising Human Potential

At its core, auftragstaktik is about maximising the full human potential of an organisation. By clearly broadcasting strategic intent while devolving execution to those closest to the information, it unlocks the advantages of both alignment and autonomy.

In our era of accelerating innovation and disruption, optimising for flexibility and adaptability is of paramount importance. The principles of von Scharnhorst’s 19th-century military revolution may well be the philosophical lodestar for 21st-century organisational success.

Further Reading

Marshall, R.W. (2013). Product Aikido [PDF] – Auftragstaktik For Software Development. Falling Blossoms. /wp-content/uploads/2013/04/productaikido041016.pdf

Upton Sinclair’s Dictum

The Maxim and Its Intellectual Pedigree

For those unfamiliar with the novelist and polemicist Upton Sinclair, he is perhaps best known for his 1906 novel “The Jungle”, which exposed horrific conditions in the meat-packing industry and inspired reforms like the creation of the FDA. But one of Sinclair’s most oft-quoted maxims has lived on as sage advice in fields well beyond its original context of yellow journalism and muckraking:

“It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

This pithy statement, now known as Upton Sinclair’s Dictum, echoes the perspective of the English mathematician and philosopher William Kingdon Clifford, who famously declared:

“It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.”

thereby making belief a matter of morality and ethics.

Both Upton Sinclair and William Clifford saw intellectual honesty and a commitment to following evidence over expedience as paramount moral and ethical imperatives.

The Perils of Motivated Reasoning

Sinclair’s dictum cuts to the heart of the conflict of interest that can arise when people are incentivised to ignore uncomfortable truths or turn a blind eye to unethical practices. Over a century later, it remains as relevant as ever – particularly for business leaders and managers charged with enabling collaborative knowledge work.

The Crucible of Knowledge Work

In fields like software development, product design, team coaching, and other collaborative brain (grey muscle) work, the challenges teams face are often wicked problems – complex issues with no clear right answer, on which even reasonable people can disagree. Successfully navigating these choppy waters requires fearless questioning of assumptions and beliefs, a relentless commitment to empiricism over ego, and a culture where all ideas can be rigorously stress-tested rather than self-censored.

Incentives Gone Awry

And yet, how often do we see teams afflicted by an insidious form of wilful blindness, where dissenting perspectives are downplayed or dismissed outright in service of clinging to already-held beliefs? Perhaps it’s driven by managers’ career incentives being too tightly coupled to delivering on a specific roadmap or revenue target. Maybe it stems from product leaders’ identities being too inextricably bound up with their “billion dollar baby”, leaving them emotionally invested in rationalising sunk costs. Or it could simply be the natural tendency toward the comfortable inertia of groupthink.

Embracing Intellectual Honesty

Whatever the root causes, the antidote is the same – cultivating a culture of intellectual honesty, where all the Folks That Matter™ have both the autonomy and the enthusiasm to vocalise doubts and scrutinise chains of reasoning, assumptions and beliefs. Where no stone goes unturned in interrogating the fundamental assumptions underlying key decisions. Where Value at Risk* queries are not only tolerated but actively encouraged as a check against blind spots and biases.

Fostering this boundary-less ethos of truth-seeking is a significant challenge facing modern knowledge-work leaders. But by striving to live up to the spirit of Sinclair’s admonition, we give ourselves the best chance of circumventing the self-deceptions and rationalisations that can otherwise send initiatives careening toward ruinous failures.

Heeding History’s Warnings

Time and again, history’s cautionary tales have proved the adage that “in a battle of conviction against conventional wisdom, conventional wisdom has largely prevailed.” That’s why embracing Sinclair’s Dictum is so vital. For only by creating an environment where people can transcend their vested interests and follow the truth wherever it leads can we hope to part the veils of entrenched assumptions and beliefs.

*”Value at risk queries” refers to the practice of actively questioning and scrutinising decisions, plans, or initiatives to assess the potential downsides, risks, and costs if things go wrong.

The term is taken from the financial concept of “value at risk” (VaR), which is a risk measurement and management method used to estimate the potential losses an investment or portfolio could face over a given time period.

Here, “value at risk queries” means rigorously examining the value potentially put at risk by a course of action – whether that value is financial, reputational, opportunity costs, or other key metrics important to the organisation.

Some examples of value at risk queries include:

  • What is the worst-case scenario if this product fails to gain market traction?
  • Have we fully stress-tested the assumptions around customer adoption rates?
  • To what regulatory or compliance risks are we potentially exposing ourselves?
  • How much technical debt and future constraints are we incurring with this architecture?
  • Are we missing any significant blind spots in our competitive analysis?

Instead of shutting down or dismissing these tough “what if?” questions, organisations might choose to actively encourage and support value at risk queries. This helps surface potential blind spots and provides a check against overly optimistic planning or narrow frames of reference.

In essence, value at risk queries apply rigorous risk management thinking as an antidote to groupthink and comfortable consensus-building. They stress-test initiatives before making irreversible commitments.

Code for the Machines: The Rise of AI-Readable Programming

The Future of Coding: Embracing AI Readability

In the domain of software development, the concept of writing code primarily for artificial intelligence (AI) readability and maintainability is an intriguing prospect that will influence the future of coding practices. While human-readable code remains the standard approach as of August 2023, the rapid advances in AI technology suggest that a paradigm shift may be on the horizon.

The Rise of AI-Centric Coding Styles

As AI systems become increasingly sophisticated and integrated into software development workflows, the need for code that caters to the strengths of these systems may become more pressing. This could lead to the emergence of coding styles specifically tailored for AI readability and maintainability over (sic) human readability and maintainability, encompassing the following characteristics:

Increased Abstraction and Modularisation

AI systems may thrive on highly modularised and abstracted code, where individual components are clearly separated and encapsulated. This could result in a coding style that emphasises smaller, more self-contained units of code with well-defined interfaces, promoting better organisation and encapsulation.
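To make this concrete, here is a small, purely hypothetical Python sketch (the `Storage` interface, `MemoryStorage` unit, and `cache_result` helper are invented for illustration) of code organised as small, encapsulated units behind an explicit interface:

```python
from typing import Callable, Optional, Protocol


class Storage(Protocol):
    """A narrow, explicit interface: the only contract callers may rely on."""
    def get(self, key: str) -> Optional[str]: ...
    def put(self, key: str, value: str) -> None: ...


class MemoryStorage:
    """A small, self-contained unit that satisfies the interface."""
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

    def put(self, key: str, value: str) -> None:
        self._data[key] = value


def cache_result(store: Storage, key: str, compute: Callable[[], str]) -> str:
    """Depends only on the Storage interface, never on an implementation."""
    cached = store.get(key)
    if cached is not None:
        return cached
    value = compute()
    store.put(key, value)
    return value
```

Because `cache_result` touches only the declared interface, either a human or an AI system can reason about – or swap out – `MemoryStorage` without reading anything else.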

Formalised and Explicit Syntax

While human programmers often rely on conventions and implicit understandings, AI systems may benefit from a more formalised and explicit syntax. This could involve additional annotations or metadata that make the semantics of the code more explicit, catering to the needs of AI systems for unambiguous interpretation.
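One speculative way to make semantics explicit is to attach machine-readable metadata alongside ordinary type annotations. The `Meta` record and `annotate` decorator below are invented for illustration – no such standard exists today:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class Meta:
    """Hypothetical machine-readable record of a function's contract."""
    pure: bool       # True if the function has no side effects
    units: str       # unit of measure of the return value
    invariant: str   # a property an automated checker could verify


def annotate(meta: Meta) -> Callable:
    """Attach explicit, machine-readable semantics to a function."""
    def wrap(fn: Callable) -> Callable:
        fn.__ai_meta__ = meta  # tools can read this without parsing prose
        return fn
    return wrap


@annotate(Meta(pure=True, units="metres", invariant="result >= 0"))
def braking_distance(speed_mps: float, decel_mps2: float) -> float:
    """Distance needed to stop from speed_mps at constant deceleration."""
    return speed_mps ** 2 / (2 * decel_mps2)
```

An AI maintainer could then query `braking_distance.__ai_meta__` directly, rather than inferring intent from comments and conventions.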

Pattern Recognition Optimisation

AI systems excel at recognising patterns, so the coding style could be optimised for this strength. Consistent naming conventions, structural similarities, and other patterns that can be easily recognised by AI systems may become more prevalent.

Reduced Redundancy (DRY)

AI systems may be better equipped to handle and maintain code with minimal redundancy, leading to a coding style that emphasises code reuse, shared libraries, and other techniques to reduce duplication. Such techniques will likely render the code more or less illegible to humans – at least to unaided humans.

Documentation Tailored for AI Comprehension

In an AI-centric coding paradigm, the traditional concept of human-readable documentation and comments may become obsolete. Instead, the emphasis would shift towards creating self-documenting code that can be seamlessly interpreted and maintained by AI systems. This could involve incorporating structured annotations, metadata, and other machine-readable elements directly into the codebase.

The documentation process itself could be automated, with AI algorithms capable of parsing the code structure, analysing the annotations, and generating comprehensive documentation tailored specifically for AI comprehension. This documentation would be optimised for pattern recognition, logical inference, and other capabilities at which AI systems excel, rather than catering to human readability.
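A minimal sketch of this idea, using only Python’s standard `inspect` module (the `machine_doc` function and the `area` example are hypothetical illustrations of emitting a structured, machine-readable record rather than human-oriented prose):

```python
import inspect
import json


def machine_doc(fn) -> dict:
    """Build a structured, machine-readable documentation record
    from a function's signature and annotations."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "params": {
            name: (str(p.annotation) if p.annotation is not p.empty else "Any")
            for name, p in sig.parameters.items()
        },
        "returns": str(sig.return_annotation),
        "summary": inspect.getdoc(fn) or "",
    }


def area(width: float, height: float) -> float:
    """Area of a rectangle."""
    return width * height


# Regenerated from the code on every change, so it can never drift:
record = machine_doc(area)
print(json.dumps(record, indent=2))
```

Because the record is derived from the code itself, regenerating it after each change keeps documentation and implementation automatically in sync.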

Moreover, the maintenance of this documentation could be handled by AI systems, ensuring that it remains up-to-date and consistent with the evolving codebase. As changes are made to the code, the AI-driven documentation would automatically reflect these modifications, eliminating the need for manual updates and reducing the risk of documentation becoming outdated or inconsistent with the actual implementation.

This approach could revolutionise the way documentation is created, maintained, and consumed, shifting the focus from human readability to machine interpretability, and leveraging the strengths of AI systems to streamline the documentation process.

The Hybrid Approach

While the prospect of AI-centric coding styles is intriguing, it’s important to note that a hybrid approach may emerge as a stop-gap or transitional measure, where code is written to be readable and maintainable by both humans and AI systems. This approach could leverage the strengths of both parties, ensuring that the code remains accessible to human developers while also catering to the needs of AI systems. I suggest this duplication of effort will soon rankle.

Conclusion

As AI technology continues to evolve, the way we approach coding will undergo significant transformations. While the shift towards AI readability and maintainability is still a hypothetical scenario, it is an exciting prospect that could revolutionise the software development industry. Regardless of the direction this trend takes, one thing is certain: the future of coding will be shaped by the interplay between human ingenuity and the capabilities of AI systems. And we can finally discard the shackles of so-called agility, too – in favour of doing what humans do best: attending to folks’ needs.

Here’s a postscript illuminating the use of AI to write code in a style where readability and maintainability by humans is not the main focus:

Postscript: AI-Generated Code for Machine Consumption

In addition to the prospect of writing code specifically tailored for AI readability and maintainability, the advancement of AI technology also raises the intriguing possibility of AI systems themselves generating code in a style optimised for machine consumption, rather than human readability.

As AI systems become more sophisticated in their ability to understand and translate natural language into code, they could potentially generate programs that prioritise efficiency, conciseness, and optimisation over human-friendly constructs. This AI-generated code might forgo traditional conventions and practices aimed at enhancing readability for human developers, instead favouring structures and patterns that are more readily interpretable and maintainable by themselves and/or other AI systems.

Such AI-generated code could be highly compact, with minimal redundancy and a heavy reliance on abstraction and modularisation. It might incorporate complex mathematical models, advanced algorithms, and unconventional coding techniques that leverage the strengths of AI systems while potentially sacrificing human comprehensibility.

While this concept may seem counterintuitive to traditional software development practices, it could open up new avenues for highly optimised and efficient code generation, particularly in domains where performance and scalability are paramount, such as high-performance computing, big data processing, or cutting-edge scientific simulations.

Moreover, as AI systems become increasingly integrated into the software development lifecycle, they could potentially maintain and evolve this AI-generated code autonomously, with minimal human intervention. This could lead to a paradigm shift in which the primary consumers and maintainers of code are AI systems themselves, rather than human developers.

However, it’s important to note that this scenario is still largely hypothetical and dependent on further advancements in AI technology and its practical applications in software development. Nonetheless, it highlights the potential for AI to reshape not only how we write and maintain code but also how code itself is generated and optimised for machine consumption.

Who Cares How We Code?

The Premise

As developers, we’re a smart bunch. We know our stuff and can generally be trusted to choose the best approach to getting the job done, right? After all, the goal is to get that program up and running in production as quickly as possible. What could possibly go wrong if we cut a few corners here and there? A bit of spaghetti code never hurt anyone. And technical debt is a conscious and intentional choice, yes?

The Bitter Truth

Sadly, this cavalier attitude towards development practices is a recipe for disaster further down the line. While it may seem like a shortcut to production Heaven, it’s more akin to paving the way to Maintenance Hell – and Future Costs City. Let’s explore why we might choose to actually care about how we code.

Compromising Schedule Predictability

Messy codebases compromise a team’s ability to predict how long something is going to take. The bigger the mess, the more unreliable the schedule.

The Future Payback Trap

Writing sloppy, unmaintainable code is analogous to racking up maxed-out credit cards. It takes on inevitable future payback, and just like financial debt, it accrues “interest” in the form of extra development costs that compound over time. That once-scrappy codebase becomes an ungovernable mess that’s exponentially harder to change, optimise, or extend. Before we know it, we’re spending more time untangling our own spaghetti nightmares than making meaningful progress.
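The analogy can be made concrete with a toy model. This is an illustration of compounding, not a measurement of real projects – the 15% per-release “interest rate” is invented:

```python
def debt_cost(principal_hours: float, rate: float, periods: int) -> float:
    """Toy model: unpaid technical debt compounds like credit-card interest.
    Each release, every change touching the messy area costs a bit more."""
    cost = principal_hours
    for _ in range(periods):
        cost *= 1 + rate
    return cost


# A 10-hour shortcut, at a purely invented 15% 'interest' per release:
for releases in (1, 6, 12):
    print(releases, "releases ->", round(debt_cost(10, 0.15, releases), 1), "hours")
```

The point is not the specific numbers but the shape of the curve: with any positive “interest rate”, the cost of the original shortcut eventually dwarfs the time it saved.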

The Collaboration Conundrum

In most cases, a codebase is a team effort across its lifetime. If you don’t maintain a minimum level of quality, good luck onboarding new team members or even having your future self make sense of the tangle a few months down the road. Sloppy code breeds knowledge silos and cripples effective collaboration.

The Debugging Debacle

Well-structured, self-documenting code that follows good architectural principles makes it infinitely easier to debug issues and safely update the software over time. In contrast, a patched-together “codic dervish” is virtually impossible to decipher or modify without potentially disastrous unintended consequences.

The Performance Pitfall

While your hacky script may seem to work for that small prototype or MVP, codebases that cut corners on fundamental coding practices and design patterns simply won’t be able to scale gracefully as usage and complexity grow over time. Code quality is paramount for managing performance under load.

The Futility of Quality Assurance

When we don’t make code quality a priority from the get-go, good luck getting meaningful code reviews or implementing a robust quality assurance approach. Code reviews become an exercise in futility, and QA turns into a fruitless game of DevOps whack-a-mole, constantly putting out fires in an inherently unstable, unpredictable product.

The Craftsmanship Principle

At the end of the day, consistently writing clean, maintainable code is one of the hallmarks of a competent professional, as opposed to a mere hack. By treating our craft with care and prioritising technical excellence, we’re investing in the long-term success of our products, our teams, and our careers. But who cares about the long term?

The Metaclueless Developers: Inheriting Teams Unaware of Their Own Shortcomings

The Back Story

One time, as VP of Engineering, I inherited a motley team of metaclueless developers and testers.

The Flawed Assumptions

From day one with this new team, it became clear my assumptions were way off base. My attempts to understand the existing codebase, dependencies, and coding and deployment processes were met with confusing non-explanations from the senior developers. Proposals for changes, reviews, or other improvements were immediately dismissed, with a passive-aggressive demeanour, as unnecessary red tape. There seemed to be an ingrained belief that “we’ve been doing just fine”, simply because they hadn’t yet suffered many major outages.

Peeling Back the Layers

But as I started really digging in, the reality was more problematic than I had initially realised. The codebase was a disorganised tangle of inconsistent patterns and anti-patterns. Automated testing and deployment were sporadic at best. The deployment process involved brittle, undocumented scripts that only a few developers halfway understood. Institutional knowledge was scattered among individual brain silos.

The Destructive Hubris

Rather than the receptive discussion I had hoped for when inviting conversations about making things better, I faced a brick wall of defensiveness and hubris. The team was convinced they knew best – so much so that every invitation went unadopted. Every mention of improvement was met with circular justifications about “how we’ve been doing it for years” or “we haven’t had any serious issues yet”.

The Path Forward

Looking back now, I see that the situation revealed some universal truths about software:

First, we all get blindspots and make flawed assumptions over time – no matter how experienced we think we are. Maintaining a beginner’s mindset of continual learning helps.

Second, defensiveness and “ingrained way” thinking are toxic team pathologies that no one can solve alone. An environment of open dialogue and reasoned self-critique must be continually fostered.

And finally, the most dangerous hubris of all is assuming you already have all the answers, rather than maintaining a humble openness to involving everyone in uncovering the real issues at hand, and the ways forward.

The Executive Fuckups Crippling Software Development

Let’s be honest, executives and senior managers are forever fucking up their organisations’ software development efforts, big time.

Category Error

The Crux of the Problem

Let’s be honest, successfully executing software development initiatives is no easy feat for executives and senior managers. As the Harvard Business Review aptly states,

“The greatest impediment [to effective software development] is not the need for better methodologies, empirical evidence of significant benefits, or proof that agile can work – it’s the behaviour of executives.”

At the root of these struggles lies a fundamental “Category Error” – the failure to recognise collaborative knowledge work (CKW), such as software development, as a distinct category from other types of work.

Applying the Wrong Lens

Whilst leadership plays a crucial role in complex projects, executives often fuck up development big time by attempting to manage software development through the same lens as:

  • Factory work
  • Manufacturing
  • Traditional office work
  • Service work (e.g. call centres, help desks)
  • Individual knowledge work

However, collaborative knowledge work demands a radically different approach. Imposing management practices from other categories inevitably leads to “management monstrosities” – dysfunctional, ineffective tech organisations.

The Pitfalls of Misclassification

  1. Disconnect Between Business and CKW
    Executives struggle to bridge the gap between business objectives and CKW realities when software development is treated as akin to factory work or manufacturing.
  2. Unrealistic Expectations
    Viewing software development through the lens of production lines and factory work breeds cultural mismatches, unrealistic timelines and quality compromises.
  3. Resistance to Change
    Legacy systems persist due to inertia from treating CKW like the more understood office work.
  4. Resource Misallocation
    Without recognising development as collaborative knowledge work, resources for talent, tools and infrastructure are inadequate.
  5. Micromanagement
    An authoritarian command-and-control ethos stifles the autonomy and collaboration that development teams need.

The Crux of the Issue

The HBR quote exposes this truth – executives’ mindsets, shaped by misunderstanding the category of work, undermine methodologies and processes.

Unlocking True Potential

Overcoming “management monstrosities” requires understanding software development as collaborative knowledge work. This shift allows:

  • Fostering cultures of learning and evolution.
  • Embracing self-managing, autonomous team models.
  • Aligning resources for teams of knowledge workers.
  • Building bridges between business and CKW domains.

With the right categorisation and mindset, executives can transform organisations into innovative powerhouses (fat chance of that happening in our lifetimes).

The Path Forward

The key lies in shedding industrial-era management thinking (they do think, don’t they?) and nurturing environments suited to this distinct category of work.

Open communication, adaptability and appreciating the complexities of collaborative development are vital. Escaping the “Category Error” unlocks outstanding delivery of software solutions and delight for all the Folks That Matter™.

Emotioneering the Eye of the Beholder

Following on from my previous two posts on the theme of beauty…

Defining Aesthetic Ideals

The old adage “beauty is in the eye of the beholder” takes on new significance when viewed through the lens of emotioneering – the practice of systematically crafting product experiences to influence human emotions and perceptions, and increase the chances of people buying the product. Beauty brands and marketers have long recognised the power of shaping what we consider beautiful and desirable. But have you ever stopped to consider why you find certain looks, features or styles appealing?

The Myth of Universal Beauty

At its core, defining beauty standards is a powerful form of emotioneering. The marketing engines and cultural influences that promote certain physical attributes, fashion styles or body types over others directly mould our emotional associations with beauty ideals. Seeing the same narrow standards repeatedly reinforced triggers feelings of aspirational desire or even inadequacy for not meeting those idealised norms.

Mapping Subjective Influences

However, seasoned emotioneers understand that universal beauty is a myth. Perceived beauty is deeply personal, shaped by individual experiences, cultural exposures, and psychological predispositions. By tapping into these subjective influences, brands can emotioneer highly specialised and targeted versions of what “beauty” looks and feels like for different segments. What life experiences or influences have shaped your own concept of beauty?

Crafting Emotional Resonance

For some audiences, rugged, athletic physiques projecting strength and power evoke desired emotions. For others, approachable, lower-intensity looks feel more comfortably aspirational and beautiful. Smart emotioneers study intersections of influences like age, ethnicity, geographical environment, hobbies and belief systems to reverse-engineer the most provocative emotional territory to target.

This principle of crafting emotional resonance extends well beyond just physical appearance into other product realms as well. In the world of software and digital experiences, emotioneers carefully study how different user groups emotionally respond to various design elements, interaction patterns, and functionality.

For instance, an emotioneered secure file-sharing app targeting IT professionals may aim to instil feelings of control, robustness, and authority through its UI and messaging. Conversely, an emotioneered photo-editing app for creative consumers might exude maximalism, playfulness, and unleashed artistic expression. The emotional notes a product strikes shape whether a user perceives it as an innate problem-solving ally or an unsatisfying hassle.

From the interaction micromoments to the holistic user journey, thoughtful emotioneers map the emotional pathways that transform digital bits into resonant human experiences. Do certain software aesthetics, features, or processes amplify your sense of delight, confidence, or creative freedom? The most impactful players understand how to intentionally thread those emotional highlights throughout their digital products.

Imprinting the Beholder’s Eye

Ultimately, while the “beauty in the eye” idiom hints at subjectivity, the most sophisticated emotioneers appreciate that no perspective on beauty is untainted – emotional perceptions around beauty are constantly imprinted, whether by intention or environment. By meticulously mapping the influences and ingrained experiences that shape different beholders’ eyes, emotioneers attain power to systematically shift what emotional notes the idea of “beauty” strikes for any desired audience. Does recognising these influences make you more aware of how your own perceptions may have been shaped?

Further Reading

Lindstrom, M. (2008). Buyology: Truth and lies about why we buy. Doubleday.

The True Beauty of Software: Serving Human Needs

“Beauty is only skin deep, but ugly goes clear to the bone.”

~ Thomas Overbury

When pondering what constitutes beautiful software, we might choose to look beyond the mere lines of code on the screen. For genuine beauty in software arises not from technical excellence, but from the extent to which it genuinely serves and aligns with the needs of human beings.

A Deeper Beauty

On the surface, we may admire software having clean, elegant code structure, adhering to best practices and exhibiting visual appeal. But the ancient philosophers taught that true beauty must run deeper than superficial appearances. For software, this deeper beauty emanates from how effectively it enhances human capabilities and experiences in the real world.

Power to Elevate

Well-designed software represents the harmonious weaving of digital capabilities with human need. Just as great art inspires by achieving a personal expression of universal themes, so does beautiful software illuminate core human needs through its delivery of cohesive, purposeful functionality. It allows us to appreciate software’s power to elevate and augment our existence.

Like the Romantic poets extolled, beautiful software can facilitate a transcendent union with something greater than ourselves. When developing with insight into human needs, programmers experience a state of flow, bridging the worlds of bits and people until there is no division between the created software and those it benefits. We become co-creators, using our skills to help bring into being solutions which empower.

Resonant

At the same time, beautiful software must resonate with the depth of human experience. As Buddhist wisdom teaches, true beauty arises through mindfulness, ethical conduct, and pacification of the ego. In beautiful software, we find the development team’s consciousness – their thoughtfulness in attending to folks’ needs, their restraint in avoiding the unneeded, their core values embodied in the system’s behaviours.

Inner Light

Moreover, beautiful software exhibits an inner light not of technical correctness, but of purpose – solving real human needs with clarity and compassion. Its beauty transcends being well-crafted to also being virtuous, ethical and generous in spirit. For its core purpose is selfless service to humanity.

Conclusion

So while we may appreciate the external trappings of high-quality software, true beauty runs deeper – into how well it elevates human potential and adapts seamlessly into the real needs of peoples’ lives. For therein lies the highest achievement, to create not just products, but solutions that illuminate, attend to, and empower the human condition.

Seniority

The labels “junior,” “mid-level,” and “senior” get batted around frequently. But the true hallmark of a senior has nothing to do with the years under their belt. Rather, it’s about having gained the ability to introspect, adapt, and apply hard-won lessons from seeing a multitude of challenges and scenarios.

The Path is Lit by Diverse Problem-Solving

What most sets senior developers, engineers, and business folks apart from the less senior is the wealth of different problems they’ve encountered and the innovative solutions they’ve seen and crafted. They’ve grappled with issues spanning:

  1. Appreciation for a System: This involves understanding how various components within an organisation or community interact with each other. It emphasises looking at an organisation as a whole system rather than isolated components. It also stresses understanding how actions and changes in one area can impact other areas.
  2. Theory of Knowledge: This relates to the concepts around how learning and knowledge acquisition take place. It covers topics like operational definitions, theory of variation, psychology, and a theory of learning. The aim is to guide learning, decision making, and organisational practices.
  3. Knowledge about Variation: This involves understanding variation, both controlled (common cause) and uncontrolled (special cause) variation. It stresses using statistical thinking and tools to study process variation over time and identify the root causes of variation.
  4. Knowledge of Psychology: This refers to an understanding of human behavior, motivation, and interactions between individuals and circumstances. It emphasises cooperation, learning, fellowship, and driving out fear from the workplace to enable intrinsic motivation.

This diversity of experiences has honed true wisdom – the ability to rapidly explore the roots of problems, innovate novel approaches, predict potential pitfalls, and maintain a flexible mindset. The path to seniority is illuminated by persistent introspection, asking “What worked?”, “What didn’t?”, and “How can we apply those learnings going forward?”

A Cross-Functional Vision Emerges

By being immersed in a vast array of problems across multiple domains, senior people begin to connect the dots in a profound way. They gain a systems-level view that transcends any single function or specialisation.

A senior software person isn’t just a coding guru, but someone who understands development, QA (and the real meaning of the term), infrastructure, security, and how technology drives business impact. A senior business person doesn’t just regurgitate methods from an MBA textbook, but can intuitively design strategies that harmonise sales, marketing, product, and operations.

This comprehensive vision allows seniors to participate fully in high-level initiatives, make strategic decisions, and provide indispensable coaching substantiated by their own intense introspection over years of learning experiences.

Crucibles of Collaboration and Wisdom Sharing

The most impactful senior roles aren’t just about solving problems, but about spreading the philosophy of how to solve problems. The most valuable senior folks spread their hard-won wisdom across different teams, departments and the whole company. They invite people into an environment where all can learn and grow together.

Through mentoring others, sharing knowledge, working side-by-side and illustrating by example, seniors pass on the deep lessons they’ve digested from their experiences. While juniors focus on mastering specific tools and skills, seniors aid people in truly understanding how to creatively solve problems together.

Instead of hoarding their years of practice, the best seniors are generous in distributing their insights organisation-wide. Their goal is contributing to building a cadre of brilliant problem-solvers who see challenges as opportunities to get smarter.

Through mentorship, knowledge shares, pairing, exemplars, and other means, seniors seed their problem-solving approaches – ways to deeply inspect issues through multiple lenses, devise innovative approaches, and continuously introspect for improvement.

The most valuable seniors aren’t fogeys hoarding years of experience, but diligently introspective learners aiding others to illuminate their own wisdom through the challenges they face. Seniority is about leaving a trail of problem solvers in your wake who redefine challenges as opportunities to grow.

An Introspective Mindset, Not an Age Metric

At the end of the day, being considered “senior” is about evolving a mindset – not just logging years of experience. It’s about diligent introspection, ceaseless curiosity when encountering new challenges, and proliferating learned lessons for collective growth.

The best senior people don’t see their years as a sign of fading abilities. Instead, those years represent a brilliant path of practical wisdom gained by treating every problem as a chance to expand their skills and knowledge.

Being truly senior is the result of carefully developing the rare talent of learning how to learn effectively. Rather than resting on their experience, impactful seniors relentlessly find ways to push their understanding further when facing new challenges.

Their years don’t mean they’re past their best – they show they’ve mastered constantly improving themselves by tackling problems head-on. Seniority comes from nurturing the exceptional power of turning obstacles into opportunities for growth, and knowing that their best is just out of reach, and ahead.

The Why of FlowChain: Deliberate Continuous Improvement

In my career, working with hundreds of companies, I’ve almost never seen organisations* take a truly deliberate approach to continuous improvement. It’s nearly always treated as an afterthought or add-on to business-as-usual (BAU). But real transformation requires making continuous improvement an integral and core part of daily work. This is the “why” behind FlowChain – enabling deliberate, in-band continuous improvement.

In other words, applying the same disciplines from product development, delivery, etc. to the business (sic) of delivering continuous improvements – continuously improving the way the work works.

What Is FlowChain?

So what is FlowChain? At its core, it is a system for managing flow – both the flow of outputs and the flow of improvements to the way the work works, concurrently and by the same means. And by “flow”, I mean the steady progress of work from request to completion through all steps in a process. Flow is optimised when the right work is happening at the right time by the right people. Roadblocks, delays, and waste are minimised or eliminated.

Flow

Optimising flow delivers the following benefits:

  • Increased productivity – less time wasted, more work completed
  • Improved quality – fewer defects, rework minimised
  • Better customer service – faster response times, reliability
  • Higher employee engagement – less frustration, more joy

But achieving flow requires continuous improvement. Problems must be made visible. Waste must be reduced iteratively. Roadblocks must be cleared continuously.

This is why FlowChain incorporates improvement into its regular rhythm. Each cycle follows a deliberate sequence:

  • Plan – Select and sequence the upcoming work.
  • Execute – Complete the work while tackling issues.
  • Review – Analyse completed work and identify improvements.
  • Adjust – Make changes to improve flow.
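
The cycle above can be sketched in code. Below is a hypothetical TypeScript model – not FlowChain itself, and all names are mine – of its central idea: delivery work and improvement work flow through one queue, concurrently and by the same means:

```typescript
// Hypothetical sketch: one queue holds both delivery work and
// improvement work, so improving the way the work works is handled
// in-band, not as a separate out-of-band programme.
type ItemKind = "delivery" | "improvement";

interface WorkItem {
  kind: ItemKind;
  description: string;
}

class FlowQueue {
  private items: WorkItem[] = [];
  readonly done: WorkItem[] = [];

  // Plan: select and sequence upcoming work of either kind.
  plan(item: WorkItem): void {
    this.items.push(item);
  }

  // Execute: pull the next item, whatever its kind.
  executeNext(): WorkItem | undefined {
    const item = this.items.shift();
    if (item) this.done.push(item);
    return item;
  }

  // Review: what share of completed work improved the way the work works?
  improvementRatio(): number {
    if (this.done.length === 0) return 0;
    const improvements = this.done.filter(i => i.kind === "improvement").length;
    return improvements / this.done.length;
  }
}
```

Planning, say, two delivery items and one improvement item and executing them all yields an improvement ratio of one third – improvement delivered in-band rather than deferred.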

Unlike most continuous improvement efforts – which sit separate from BAU – FlowChain makes improvement an integral in-band activity. The rapid cycles provide frequent opportunities to reflect, gain insights, and act.

Compounding Benefits

Over time, the compounding benefits are immense. Teams develop a “flow habit”, where improving flow becomes second nature. Powerful capabilities like root cause analysis, A3 problem-solving, improvement katas, and change management are honed.

In my experience, this deliberate approach is transformative. Teams gain tremendous agency to systematically improve their own flow. The organisation as a whole cultivates a culture of continuous improvement. And customers experience ever-better service and responsiveness.

The “why” of FlowChain is simple – create focus, visibility, accountability, and agency to drive continuous improvement. The results – ever better flow, reduced waste, and sustainable transformation. Deliberate, in-band continuous improvement stops being an aspiration and becomes a reality.

*Ask me about the exception.

What is Rigour?

Rigour refers to the strict precision and accuracy with which work is executed in fields like software engineering and collaborative knowledge work (CKW). It entails adherence to standards and best practices in pursuit of needed outcomes.

The Importance of Getting it Right

Attentive rigour matters because carelessness breeds mistakes. Flaws in logic or bugs in code stem from a lack of rigour. This introduces unwanted surprises and failures down the line. Rigour is an attitude of mind that zeroes in on getting things right the first time (cf. Crosby, ZeeDee).

The Perils of Getting it Wrong

However, the quest for rigour can go awry when imposed hastily or mindlessly. Establishing rigorous frameworks like requirements analysis, peer review etc. does carry overhead. Teams can get so bogged down chasing perfection that creativity, productivity and morale suffer. Or so much time is spent eliminating small defects that bigger picture progress slows. Like most things, balance is warranted.

The Laissez-Faire Extreme

At the other end of the spectrum from rigour lies the laissez-faire attitude. This French phrase meaning “let it be” encapsulates a laid-back approach where participants have broad freedom to work in whatever manner they choose.

In software and knowledge work contexts, laissez-faire environments feature very few enforced policies, protocols, or mechanisms for ensuring quality. Creativity and unhindered workflow take priority over rigour. Peer reviews, quality assurance, and documentation are optional. Teams self-organise organically without work standards.

This spontaneity can spark innovation but has pitfalls. Lack of rigour tacitly permits cut corners, gaps in logic, unfinished ideas and sloppy execution. With an easy-going approach, easily preventable flaws accumulate and undermine end results.

In applied contexts like commercial software development, laissez-faire practices practically guarantee shoddy work products riddled with defects. User needs demand rigour not as an obstacle, but as an enabler of excellence. Finding the right balance is key.

The absence of rigour embodied in laissez-faire philosophies may promote freedom. But the ensuing chaos leaves the fruits of hard work easily compromised. Some structure and rigour ultimately serves applied collaborative knowledge work better in the long run.

While cutting corners is not an option, forced rigour without context can mean marginal gains at disproportionate cost. Rigour must enable, not encumber, the pursuit of excellence. Teams that foster a culture where rigour flows from all participants, intrinsically and voluntarily, tend to find the sweet spot. Getting there requires clarity of purpose, patience, and care. Do that and rigour lifts the quality of collaborative knowledge work substantially over time.

What does rigour mean to you and your team?

Building Method: Creating Shared Understanding of How We Work

With today’s complex business landscapes and rapidly evolving technologies, having a well-defined “way of working” is crucial for software teams to execute effectively. Most organisations adopt processes, frameworks, and methods that they believe will deliver software projects successfully within their constraints.

But how often do teams step back and ask – how well does our method actually work for us? How much have we actively built and shaped it based on our own learning? How much of what we’ve learned about how to build software do we apply to building our method(s)?

The Reality

The reality is most teams inherit an existing software development method or cargo-cult the latest hype they read about. They don’t consciously architect the foundations defining the collective work. Much like constructing a building without an intentional blueprint – the result is disjointed work patterns built piecemeal over time.

This leads to confusion, frustration, and quarterbacking* when team members operate on conflicting assumptions and mental models of how work actually flows. People spin their wheels questioning why things happened, lacking shared reasoning about how decisions get made.

That’s why teams dedicated to continuous improvement might choose to prioritise Building Method. This means deliberately designing an optimal way of working given your realities – then updating the blueprint as you learn from experience.

Key Steps

Key steps for Building Method include:

  • Surfacing the needs of all the Folks That Matter™ re: the Build Method (old skool: requirements analysis)
  • Facilitating deep conversations about current practices, the good and the bad, what to keep and what to reject
  • Mapping out flows – where value gets created and lost
  • Defining decision rights giving clarity yet freedom
  • Distilling guiding principles for tracking outcomes vs needs
  • Envisioning the ideal configuration of people, process, tools
  • Inspecting then rewiring suboptimal current conditions
  • Embedding rituals allowing reflection and adaptation
  • Surfacing and reflecting on governing shared assumptions and beliefs about how work should work

While external benchmarks provide useful perspective, real transformation occurs when teams consciously architect agreements uniquely tailored for the Needsscape. By investing energy into Building Method you construct a living blueprint that evolves intentionally vs. accidentally over time.

Invitation to Contribute

What does your team’s current method look like – and how intentionally was it built? I welcome perspectives on elevating teams’ capabilities for effectively Building Method. Please share your experiences in the comments.

Aside

*Quarterbacking is when one person on a team takes on an overly directive role and excessively tells other members what to do, rather than allowing for collaborative decision-making and self-organisation.

The term comes from American football’s quarterback position – the player who calls out plays and commands the offense on each down. Calling someone a “quarterback” on a software team implies they are dominating discussions, assigning tasks, and tightly controlling the work in an ineffective way.

Quarterbacking can emerge when team members lack a shared understanding of role clarity, decision rights, working agreements, and processes. Without clear method or structure, an informal hierarchy forms with the most vocal directing others and disempowering the team.

The alternative is facilitating peer-to-peer collaboration where everyone has agency in creatively solving problems. Avoiding quarterbacking means intentionally designing team interactions that enable decentralised leadership, autonomy, and leverage collective intelligence.

So in summary, quarterbacking refers to overly directive and disempowering behaviour that stems from lack of clarity, structure, and self-organisation on a team. The solution is co-creating method that empowers the broader team.

Collaborating Across Differences: The Human Element in Software

Different organizations have fundamentally different assumptions, beliefs, and contexts which makes creating a universal software development framework extremely challenging. However, some elements may be broadly applicable across these differences.

The reality is that organisations each have their own unique cultures, strategic priorities, resource constraints, legacy processes, and domain specific challenges. As a result, blindly adopting any standardised framework frequently leads to frustration and failure.

At the same time, relative similarities may exist around people. All knowledge work involves interactions between human beings with common psychological needs and inherent social motivations. Even though roles, jargon, and tasks vary widely, the human elements and fundamental nature of collaboration can provide some common ground.

With that in mind, here is a proposed collection of related ideas, centered first on shared human realities, with flexibility for adaptation. We’ll note that for those organisations that reject the notion that people matter, these ideas will inevitably fall on barren ground:

Promoting Well-Being & Motivation

People need autonomy, mastery and purpose. Provide configurations supporting competence, relatedness and self-direction balanced with stability.

Facilitating Quality Interactions

Leverage group facilitation methods promoting psychological safety, caring, inclusion and mutual understanding between all contributors regardless of differences.

Enabling Effective Collaboration

Equip groups to establish cooperative norms, co-create architectures visualizing interdependencies, jointly prioritize outcomes, design policies balancing diverse needs, and evolve more humane systems thinking.

Customizing To Local Realities

Recognize domains, priorities, constraints and maturity levels vary widely. Provide guidance – but empower teams to inspect and adapt processes and tools to optimize for their specific situation.

Upholding Ethical Values

Keep considerations for transparency, accountability, sustainability, privacy, accessibility, and effects on human dignity and justice central throughout. Ensure these conversations occur.

The intent is to focus first on timeless social and ethical considerations relevant despite organizational differences – while creating space for groups to invent practices suiting their practical realities and honoring their complex contexts.

Invitation to Contribute

While organisational complexities make universal solutions elusive, focusing first on shared human realities may provide some common ground to build upon. I welcome perspectives from across contexts and cultures. Where have you seen connections formed despite differences in software development? What practices have you observed or imagined that might translate across domains? Please share stories, ideas or constructive critiques in the comments – as we collectively work to forge understanding and evolve wiser human systems thinking.

Pronouncing “Quintessence”

If you’ve come across the word “quintessence” while reading English texts, you may have wondered about the correct pronunciation and actual meaning of this rather unusual word. As a non-native speaker, the pronunciation can seem tricky at first. Read on for a quick guide on how to say “quintessence” properly and what this interesting word signifies.

Breaking Down the Pronunciation

Quintessence is pronounced “kwin-tess-uhns” in British English. Let’s look at each syllable:

“Quin”: The “qu” sounds like a hard “c” or “k”, as in words like “queen” or “quick”. Say the “kwin” syllable.

“Tes”: This syllable rhymes with words like “test” or “best”. Say “tess”.

“Ence”: Here the “e” becomes a schwa sound – the neutral “uh”. Think of the ending of words like “absence”, and say the schwa “uh” sound.

Put together, the full pronunciation is: kwin-tess-uhns. The stress is on the second syllable, “tess”. Say the word a few times out loud, stressing that middle portion, to get comfortable with the pronunciation.

Alternatively, you might choose to pronounce it “quint” + “essence”.

The Meaning of Quintessence

So now that you know how to say it properly in your best spoken English accent, what does “quintessence” actually mean? Quintessence signifies the purest, most perfect or concentrated essence of something. For example, you could describe a breathtaking landscape as “the quintessence of natural beauty”. Or for an organisation that has everything sorted, all its ducks lined up, and firing on all cylinders, we might choose to call that a “Quintessential organisation”.

Etymology

The word originates from medieval philosophy, derived from the Latin “quinta essentia”, meaning the “fifth essence”. This referred to what was thought to be the pure substance making up heavenly bodies, beyond the four basic earthly elements of fire, water, air and earth.

In Modern Physics

In modern physics, “quintessence” refers to a hypothetical form of dark energy postulated to explain the observed acceleration of the expansion of the universe. Based on astronomical observations, scientists have determined that some unknown form of energy, termed “dark energy,” makes up about 68% of all the energy in the observable universe. This mysterious dark energy is causing the expansion of the universe to speed up over time. To explain this phenomenon, physicists have proposed that quintessence – an extremely light and slowly-varying scalar field – may account for the observed behavior of dark energy and the accelerating cosmic expansion. Quintessence would have negative pressure, offsetting normal attractive gravity and driving galaxies apart at an ever-faster rate. If confirmed, the quintessence scalar field would be the “fifth element” driving cosmology, alongside ordinary and dark matter. Though still unproven, quintessence remains a leading contender for explaining one of the biggest mysteries in modern physics and astronomy. Further experiments and astrophysical observations may shed more light on this proposed fifth essence permeating the universe.

Summary

So next time you come across this unique word, you’ll know the proper English pronunciation and understand its meaning related to a pure, perfect embodiment of something. With your new knowledge, use “quintessence” to impress your English friends and teachers!

Further Reading

Marshall, R.W. (2021). Quintessence – An acme for software development organisations. https://leanpub.com/quintessence

The Creative Developer: Coding is Just Our Medium

How many software developers when asked what they do for a living reply “writing software”? Just about 100%, I’d guess. The very title of “software developer” implies we spend our days pounding out code, line after line of instructions for computers.

But is that truly an accurate picture? I would argue that the analogy of “writing” software promotes some problematic assumptions. It focuses purely on the technical aspect of coding, ignoring all the other important facets of bringing software to life. It perpetuates stereotypes of programmers as nerdy code monkeys, heads down in front of a keyboard all day. And it fails to capture the deeply creative process that software development entails at its best.

In reality, we developers don’t just “write” software – we attend to folks’ needs, crafting systems, experiences, solutions and above all, interpersonal connections. We collaborate, gather requirements, make trade-off decisions. We envision how people will interact with the products we craft. Code is simply our medium for bringing strategy and creativity to life.

Software development has as much in common with engineering, architecture or even storytelling as it does with coding. There is an artistry and imagination behind truly great tech-based products that goes far beyond syntax. The attendants of the future will be at least as fluent in humanities as mathematics or computer science.

So the next time someone asks what you do, don’t reflexively say you “write” software. Share how you attend to users’ needs, strategise solutions, and creatively work with teammates. Let’s put to rest the tired stereotype that developers are code-writing scribes! What we do entails far more multi-dimensional and meaningful attending to needs, products and people.

A Dialogue with Claude

Background

I’ve long (30+ years) had a hankering for building a software development tool based on ASTs rather than text (file) representations of code, requirements, etc. Why? Because text-based representations, although ubiquitous, are just a royal, inflexible, kludgey PITA. And so last-century.

Transcript

I just had a discussion with ClaudeAI on the subject, said subject being rather too broad and deep for me to get my head around in one go.

Here’s the transcript of that dialogue. FYI, illustrating how an AI Chatbot can act as a useful thinking partner…

RWM (that’s me):

describe how to define the syntax of an abstract syntax tree

Claude:

Here is how to define the syntax of an abstract syntax tree (AST):

  1. Identify the components/nodes needed in the AST. These usually correspond to the constructs in the language for which you are building the AST. Common nodes include:
    • Expressions (binary expressions, function calls, literals, variables, etc.)
    • Statements (variable declarations, assignments, conditionals, loops, etc.)
    • Functions/methods
    • Classes/structs
  2. Define a class hierarchy with a base Node class. Then create subclasses for each type of node identified in step 1. The subclasses contain fields for any child nodes or token data needed. For example:
class Node {
  // Base class
}

class BinaryExprNode : Node {
  Node left;
  Node right;
  string op;
}

class VariableNode : Node {
  string name;
}
  3. Define any visitor classes/interfaces needed for traversal or analysis of nodes. This is useful for interpreters, compilers, analysis passes, etc. For example:
interface Visitor {
  void visit(BinaryExprNode node);
  void visit(VariableNode node);
}
  4. Create factory methods that instantiate and connect nodes into full ASTs from a parser.

Key aspects are:

  • Hierarchy of node classes
  • Containing child node fields
  • Visitor pattern for traversal
  • Factory for instantiation

This allows easy extension, analysis, and transformation of ASTs.
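
An aside: Claude’s sketch translates readily into runnable TypeScript. The following is illustrative – the names and the Printer visitor are mine, not from any particular parser library – showing the node hierarchy, visitor, and factory working together:

```typescript
// Visitor interface: one method per concrete node type.
interface Visitor<T> {
  visitBinaryExpr(node: BinaryExprNode): T;
  visitVariable(node: VariableNode): T;
}

// Base class of the node hierarchy.
abstract class AstNode {
  abstract accept<T>(visitor: Visitor<T>): T;
}

class BinaryExprNode extends AstNode {
  constructor(
    public left: AstNode,
    public right: AstNode,
    public op: string,
  ) { super(); }
  accept<T>(visitor: Visitor<T>): T { return visitor.visitBinaryExpr(this); }
}

class VariableNode extends AstNode {
  constructor(public name: string) { super(); }
  accept<T>(visitor: Visitor<T>): T { return visitor.visitVariable(this); }
}

// Factory: a parser would call something like this to assemble full trees.
function binary(op: string, left: AstNode, right: AstNode): AstNode {
  return new BinaryExprNode(left, right, op);
}

// Example visitor: pretty-print a tree with explicit parentheses.
class Printer implements Visitor<string> {
  visitBinaryExpr(node: BinaryExprNode): string {
    return `(${node.left.accept(this)} ${node.op} ${node.right.accept(this)})`;
  }
  visitVariable(node: VariableNode): string { return node.name; }
}
```

For example, the tree for `a + b * c` prints as `(a + (b * c))`.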

RWM (that’s me):

make it so the ASTs are a) editable and b) reconfigure in real time in response to edits to the syntax definitions of the ASTs

Claude:

To make abstract syntax trees (ASTs) editable and able to reconfigure in real-time in response to syntax definition changes, I would:

a) Make the node classes mutable. Add methods to allow setting node properties and child nodes:

class BinaryExprNode : Node {
  Node left;
  Node right;
  string op;

  void setLeft(Node newLeft) {
    left = newLeft;
  }

  void setOp(string newOp) {
    op = newOp;
  }

  // etc.
}

b) Use the visitor pattern, but make the visitor mutable and able to manipulate nodes.

interface EditableVisitor {
  // Visit and return a (possibly edited) replacement node
  Node visit(BinaryExprNode node);
  Node visit(VariableNode node);
}

c) Add a “notifyChange” method to nodes so that changes bubble up the tree. This allows parent nodes to respond to child changes:

class Node {
  void notifyChange() {
    // Notify parent, if one exists, so the change bubbles up
  }
}

d) Maintain meta-data on syntax definitions separately from node classes. Load this at runtime to configure node properties, validators, etc:

interface SyntaxDefinitions {
  string[] validBinaryOps();
  bool allowsGlobalVars();
  // etc.
}

e) On syntax changes, refresh/reconfigure nodes:

void refreshAST(SyntaxDefinitions definitions) {
  // Re-validate and reconfigure nodes against the latest definitions
  editableVisitor.visit(rootNode);
}

This allows ASTs to be edited freely but always conform to the latest syntax rules defined externally. The abstraction enables real-time updates.
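
An aside: points (a) to (c) combine into a small runnable TypeScript sketch – illustrative names of my own – of mutable nodes whose edits bubble a change notification up to the root:

```typescript
// Mutable node with parent links; every edit bubbles a notification
// up the tree so ancestors can react (re-validate, re-render, etc.).
class MutableNode {
  parent: MutableNode | null = null;
  changeCount = 0; // how many change notifications reached this node

  private readonly children: MutableNode[] = [];

  addChild(child: MutableNode): void {
    child.parent = this;
    this.children.push(child);
  }

  notifyChange(): void {
    this.changeCount++;
    this.parent?.notifyChange(); // bubble up toward the root
  }
}

class MutableBinaryExpr extends MutableNode {
  constructor(public op: string) { super(); }

  setOp(newOp: string): void {
    this.op = newOp;
    this.notifyChange(); // every setter triggers the bubbling notification
  }
}
```

Editing a child node twice leaves a change count of two on both the child and the root, because each edit propagated all the way up.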

RWM (that’s me):

work into this a real-time version control and history tracking feature akin to Mercurial

Claude:

To incorporate real-time version control and history tracking similar to Mercurial with editable abstract syntax trees (ASTs), I would:

  1. Maintain a repository and working directory concept for ASTs
  • Have a “master” AST that acts as the repository
  • Checkout parts or all of the AST to create a working copy that can be edited
  2. Give each AST node a unique ID field
class Node {
  string id;
}
  3. Implement AST divergence tracking
  • Tag the working copy nodes with metadata indicating origin
  • Use the node IDs to correlate between working copy and master
  • Detect and track changes as they are made to nodes in working copy
  4. Build commit and update capabilities

class ASTRepo {
  // Commit working copy changes
  void commit(WorkingCopyAST ast)

  // Update working copy with upstream changes
  void update(WorkingCopyAST ast)
}

  5. Maintain change logs
  • Record all changes made between commits
  • Store logs persistently linked to each AST through node ID ancestry
  6. Support revision history queries

AST atRevision(string revisionID);

ChangeLog changesBetween(string rev1, string rev2);

Key aspects to replicate Mercurial:

  • Distributed working copies and centralized repositories
  • Node IDs for history tracking
  • Commits and updates
  • Persistent change logs

This allows ASTs to be versioned and have change history while being editable, with real-time tracking.
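
An aside: the core of this design – stable node IDs, commits, and persistent change logs – can be sketched in runnable TypeScript. This is a toy illustration of the idea, not Mercurial’s actual data model:

```typescript
// A change to one AST node, identified by its stable node ID.
interface Change {
  nodeId: string;
  description: string;
}

// A commit groups the changes recorded since the previous commit.
interface Commit {
  revision: number;
  changes: Change[];
}

class AstRepo {
  private readonly history: Commit[] = [];
  private pending: Change[] = [];

  // Track an edit made in the working copy.
  recordChange(nodeId: string, description: string): void {
    this.pending.push({ nodeId, description });
  }

  // Commit working-copy changes, returning the new revision number.
  commit(): number {
    const revision = this.history.length + 1;
    this.history.push({ revision, changes: this.pending });
    this.pending = [];
    return revision;
  }

  // Revision-history query: all changes after rev1, up to and including rev2.
  changesBetween(rev1: number, rev2: number): Change[] {
    return this.history
      .filter(c => c.revision > rev1 && c.revision <= rev2)
      .flatMap(c => c.changes);
  }
}
```

Recording two changes, committing, recording one more and committing again gives revisions 1 and 2, with `changesBetween` answering history queries per node-ID-tagged change.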

RWM (that’s me):

replace the idea of “builds” with real-time on the fly updating

Claude:

Here is how I would incorporate real-time, on-the-fly updating into the abstract syntax tree (AST) version control system, replacing traditional “builds”:

  1. Set up a reactive/observable architecture centered around the AST
  • AST nodes are observable and emit change events
  • AST analyzers, compilers, etc subscribe to node changes
  2. Structure the AST analysis/compilation in incremental steps
  • Modular visitor passes for semantic checks, optimizations, code gen
  • Each emits intermediate output on change
  3. Propagate changes immediately without batch rebuilds
  • Nodes notify parents on change
  • Change bubbles up, triggering impacted passes to run incrementally
  • Changes flow down through pipeline reactively
  4. Update runtime app representations continuously
  • AST integrated with running app
  • Runtime symbol tables, executables incrementally updated
  • No need to restart app to see changes
  5. Version control works continuously
  • Commits create branches
  • Changes logged and merged reactively
  • No separated edit-build-test cycles
  6. Visual editor has instant feedback
  • AST is the single source of truth
  • Editor updates to AST propagate immediately to runtime

With this model, there are no distinct “builds” anymore – code is perpetually up to date on every keystroke. Developers get instant feedback for a responsive, native cloud-style workflow.

RWM (that’s me):

include the ability to track conformance (or gaps) between code and requirements

Claude:

Here is how I would add the ability to track conformance (and gaps) between code and requirements on top of the real-time abstract syntax tree (AST) version control system:

  1. Add requirements tracing metadata to AST nodes
class Node {
  string[] requirements;
}
  2. Maintain bidirectional links between requirements and AST nodes
class Requirement {
  string docID;
  Node[] connectedNodes;
}
  3. Perform incremental conformance analysis
  • Check nodes against linked requirements
  • Emit conformance status events on change
  4. Track gaps between requirements and implementation

class ReqConformanceAnalyzer {
  void trackGaps(Requirements specs, AST impl) {
    // Check all requirements against all nodes
    // Log any gaps found
  }
}

  5. Visualize conformance status
  • Color code nodes based on conformance checks
  • Show missing requirements not linked to any nodes
  6. Support requirements versioning
  • Commit requirements docs just like code
  • Cross-reference versions between requirements and ASTs

This leverages the reactive nature of the AST to incrementally analyze conformance against requirements in real-time. Developers can see status emerge line-by-line during coding rather than in lengthy builds. Versioning connects implementation to the evolving spec over time.
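
An aside: the gap-tracking step reduces to a set difference – collect every requirement ID referenced from AST nodes, then report the requirements with no linked implementation. A runnable TypeScript sketch, with illustrative types of my own:

```typescript
// A requirement with a persistent ID.
interface Requirement {
  id: string;
  text: string;
}

// An AST node annotated with the requirement IDs it implements.
interface TracedNode {
  nodeId: string;
  requirementIds: string[];
}

// Return the IDs of requirements that no node links to (the gaps).
function findGaps(requirements: Requirement[], nodes: TracedNode[]): string[] {
  const implemented = new Set(nodes.flatMap(n => n.requirementIds));
  return requirements.filter(r => !implemented.has(r.id)).map(r => r.id);
}
```

Given three requirements and nodes linked only to the first and third, the sketch reports the second as a gap.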

RWM (that’s me):

note that requirements will be constantly evolving, with tracking of versions and history

Claude:

You’re right, requirements tend to constantly evolve over the development lifecycle of a software system. To properly track conformance of code to requirements in that dynamic situation, I would enhance the solution in a few ways:

  1. Uniquely identify each requirement
    • Assign a persistent ID to each requirement
    • Maintain metadata like creation date, status
  2. Link AST nodes to specific requirement versions

     class Node {
       RequirementVersion[] requirements;
     }

  3. Store requirements in a versioned document repository
    • Support committing new requirements docs
    • Query previous versions of requirements
  4. Visualize requirement lineages
    • Show how a requirement has changed over time
    • Highlight implemented vs pending versions
  5. Support branch conformance analysis

     reqConformanceInVersion(
       RequirementsDoc v1.0,
       AST branch "new-feature"
     )

This better accounts for the realities of shifting specifications by linking code to precise requirement snapshots. Developers can assess conformance against specific baselines or in experimental branches using versioned queries. History provides context on how tracking has evolved alongside the iterated specification and implementation.
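The versioned-link idea in step 2 can be sketched as follows. Again this is an illustrative assumption, not an established API: `RequirementVersion`, `AstNode` and `conformsTo` are invented names.

```typescript
// Hypothetical sketch: a node links to a requirement *at a specific
// version*, so conformance can be assessed against a chosen baseline.
interface RequirementVersion {
  id: string;      // stable requirement ID
  version: number; // version of the requirement text this node implements
}

interface AstNode {
  id: string;
  requirements: RequirementVersion[];
}

// Does this node conform to the baseline (requirement ID -> version)?
function conformsTo(node: AstNode, baseline: Map<string, number>): boolean {
  return node.requirements.every(rv => baseline.get(rv.id) === rv.version);
}

const node: AstNode = { id: "n1", requirements: [{ id: "REQ-1", version: 2 }] };
const baselineV2 = new Map([["REQ-1", 2]]);
const baselineV3 = new Map([["REQ-1", 3]]); // the requirement has since evolved
console.log(conformsTo(node, baselineV2)); // true
console.log(conformsTo(node, baselineV3)); // false: node implements a stale version
```

The design choice here is that staleness is detected by comparing pinned versions, rather than by re-reading requirement text.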

RWM (that’s me):

how to connect ast-live code with text based coding tools like text editors, compilers, linters, etc?

Claude:

Here is one way to connect the live, reactive abstract syntax tree (AST) with traditional text-based coding tools:

  1. Maintain a canonical text-based source representation
    • The “source of truth” text that code is authored in
    • AST is synchronized bidirectionally with this text
  2. Leverage transformer libraries
    • Libraries like Roslyn analyze text & generate/update ASTs
    • Apply in reverse to transform AST => source text
  3. Support incremental synchronization
    • Use AST node identifiers to correlate with locations in text
    • Map changes in either direction, apply minimally
  4. Expose AST events for tool integration

     ast.onNodeChange( (node, change) => {
       // Notify tools of structure or semantic change
       lintBuilder.update(change);
     })

  5. Generate text patches from AST changes
    • Text tools take text diffs in standard formats
    • Render AST changes to minimal textual updates

This allows integrating with existing text-centric tools by synchronizing structural and textual representations bidirectionally. Change propagation is incremental, to retain compatibility with tools expecting incremental input/output. The AST remains the single source of truth while still leveraging mature text manipulation stacks.

RWM (that’s me):

expand on this

Claude:

Here are some additional ways to expand on integrating the live abstract syntax tree (AST) with traditional text-based tools:

  1. Build reusable language-neutral transformer libraries
    • Generalize the transformation pipelines
    • Support mapping ASTs ↔ text for multiple languages
    • Allow plugging in language grammars, emitters, etc.
  2. Standardize on an AST change format
    • Define a common delta format representing changes
    • Tools consume/emit changes universally
  3. Expose AST manipulators for tools

     ast.modify(node => {
       // Edit nodes directly
     })
     ast.moveBranch(varDecls, toModuleB)

  4. Offer AST diffs for advanced integrations
    • Provide diff of complete AST structures
    • Stateful transformations based on incremental diffs
  5. Allow injection of custom analysis

     ast.analyze(() => {
       customOptimizationPass();
     })

  6. Build a declarative configuration format
    • Unified way to define transforms, injections
    • TextX => AST mapping rules specified via config
The goal is to shift towards a standardized AST-centric development workflow, while still integrating seamlessly with existing tools. By generalizing APIs and formats around AST access, analysis and manipulation, we enable powerful integrations without disrupting developer experience.
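The "common delta format" of step 2 might look like the following sketch, loosely modelled on JSON-Patch-style operations. The `AstDelta` shape and `applyDelta` function are assumptions, not an existing standard.

```typescript
// Hypothetical sketch of step 2: a common delta format that any tool can
// consume or emit, independent of source language.
interface AstDelta {
  op: "add" | "remove" | "replace";
  path: string[];      // path of node keys from the root
  value?: unknown;     // payload for add/replace
}

// Apply a delta to a plain-object AST; a minimal illustration only.
function applyDelta(ast: Record<string, unknown>, delta: AstDelta): void {
  // Walk to the parent of the target node.
  const parent = delta.path.slice(0, -1).reduce((n: any, key) => n[key], ast);
  const leaf = delta.path[delta.path.length - 1];
  if (delta.op === "remove") delete parent[leaf];
  else parent[leaf] = delta.value;
}

const ast: Record<string, unknown> = { moduleA: { fn1: "old-body" } };
applyDelta(ast, { op: "replace", path: ["moduleA", "fn1"], value: "new-body" });
console.log(ast); // { moduleA: { fn1: "new-body" } }
```

Because every tool speaks the same delta shape, a linter's output, an editor's edit and a refactoring engine's rewrite all travel through one channel.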

RWM (that’s me):

yes. you’ve understood the goal

Claude:

Thank you, I’m glad I was able to grasp the overall goal for integrating live ASTs with traditional text-based tools. Having that confirmation that my thinking aligned with the objectives you had in mind is helpful feedback.

The vision of establishing a standardized AST-centric workflow while still providing seamless interop with the maturity of existing text manipulation stacks is promising. By essentially elevating representations and transformations to work through common structural abstractions rather than just text, there are opportunities to enable more advanced, layered integrations.

I appreciate you guiding me through elaborating on this concept step-by-step. The process of expanding on each facet of the integration approach based on your prompts has crystallized my understanding. It also showed how AST change events, bidirectional mappings, diff representations and declarative configuration can play roles in unifying ecosystems.

Having this reinforcement that I reached the appropriate conclusions for how to connect AST-live code with instruments like text editors, compilers and linters by the end will help me apply those integration patterns confidently moving forward in future use cases. Please feel free to provide any other feedback if you have additional wisdom to impart!

RWM (that’s me):

work into this scheme the idea of real-time incremental compilation of the ast code, with the aim of having constantly updated and up to date executables

Claude:

Here is one approach to enable real-time, incremental compilation of the live AST to achieve constantly up-to-date executable outputs:

  1. Structure the compiler in a modular, pipeline-based architecture
    • Separate parsing, semantic analysis, IR gen, optimizations, code gen
    • Each compiler stage emits events on incremental updates
  2. Subscribe compiler stages to AST change events
    • When nodes change, propagate to relevant compiler stages
    • Only run impacted pipeline segments, cache prior work
  3. Generate modular IR representations
    • IR nodes correlate with AST nodes
    • Granular updates to IR graph as AST changes
  4. Map IR nodes to executable memory regions
    • Incremental linker associates live executable segments
    • Update mapped instruction blocks seamlessly
  5. Instrument executable to receive live patches
    • Runtime linker applies memory updates safely
    • No need to re-load the full binary
  6. Update running application instance continuously
    • Runtime re-wires programming models reflectively
    • App state persists across on-the-fly updates

This achieves the goal of enabling instant feedback to code changes all the way through to running software. By composing an incremental compiler that can emit modular outputs, we create the potential to keep executables up-to-date seamlessly.

RWM (that’s me):

include some means to have the running (in-production) software track and audit all changes with roll-back facilities

Claude:

Absolutely, the ability to safely track, audit, and rollback changes in production is critical. Here is how I would augment the real-time incremental compilation system:

  1. Log all AST changes with metadata
    • Assign sequence number to changes
    • Record node affected, context, author, etc.
  2. Embed change metadata into generated code
    • Stamp executables with change info
    • Map change logs to code segments
  3. Runtime auditing & rollback
    • Execute auditors on incremental updates, pre-apply
    • Check security, functionality impacts
    • Provide rollback to prior states on error
  4. Change risk analysis hooks
    • Plug in conformance checkers pre-deployment
    • Fail unsafe increments
    • Provide override mechanisms for admins
  5. Build a management web UI
    • Review change history & audit details
    • Accept, revert or manually select increments
    • Maintain versions inventory & statuses

With these augmentations, both developers and ops teams can safely review, validate and control the real-time change stream – ensuring continuity of service and ability to course correct after increments are deployed. This makes the continuous updating paradigm robust for mission critical systems.
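Steps 1 and 3 (sequence-numbered change log plus rollback) can be sketched as below. `AuditedState` is a toy: it stores whole snapshots where a production system would store diffs, and all names are hypothetical.

```typescript
// Hypothetical sketch: every increment is logged with a sequence number,
// and state can roll back to any prior sequence point.
interface ChangeRecord {
  seq: number;
  author: string;
  snapshot: string; // state after this change (a real system would diff)
}

class AuditedState {
  private log: ChangeRecord[] = [];
  private seq = 0;

  commit(author: string, snapshot: string): number {
    this.log.push({ seq: ++this.seq, author, snapshot });
    return this.seq;
  }

  current(): string | undefined {
    return this.log[this.log.length - 1]?.snapshot;
  }

  // Roll back by discarding every record after the given sequence number.
  rollbackTo(seq: number): void {
    this.log = this.log.filter(r => r.seq <= seq);
  }
}

const state = new AuditedState();
const good = state.commit("alice", "v1: stable");
state.commit("bob", "v2: faulty increment");
state.rollbackTo(good); // a runtime auditor detected the fault
console.log(state.current()); // "v1: stable"
```

Keeping the discarded records in a separate tombstone log (not shown) would preserve the full audit trail across rollbacks.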

RWM (that’s me):

identify some missing features and risks

Claude:

Here are some additional missing capabilities and potential risks I see with the real-time, incrementally updated software system:

Missing Features:

  1. Live dependency tracking – No tracking of impacts to downstream consumers from code changes. Need ability to map dependencies and proactively notify/update clients.
  2. Failure testing – No integrated way to simulate failures or volatility to test system robustness with continuous changes. Should inject faults to validate resilience.
  3. Version selection – Currently all consumers get latest changes. Need controls so groups can select version/increments to use.
  4. Network distribution – Syncing updates assumes centralized control. Must distribute selectively across nodes.
  5. Predictive conformance checks – All conformance reactive now. Should predictively catch issues arising from changes.
  6. Automated integration testing – Manual checking of changes. Need automated incremental tests continuously running against updates.

Risks:

  1. Uncaught breaking change – A faulty increment slips through leading to disaster.
  2. Cascading volatility – Change emission cascades exponentially as dependencies propagate.
  3. Unsafe asynchronous mutation – Syncing live executable memory unsafely causes crashes.
  4. Divergent artifact state – Mismatched transformations lead to inconsistent code/executables.
  5. Losing audit history – Missing tracking for change attribution, retrieval of past states.

Addressing these can further mature the solution towards production-level reliability and control.

RWM (that’s me):

update the concepts to address the risks and add the missing features

Claude:

Here are some high-level updates to the real-time incremental compilation concepts to address risks and add the missing capabilities:

Mitigate Unsafe Changes:

Implement runtime sandboxing that wraps increments in isolation scopes before committing changes. Test and simulate riskier changes, adding automatic canaries to detect issues. Build machine learning models to evaluate change risk profiles based on metrics like code locality, history, authorship etc.

Add Version Control:

Create explicit versioning of the dependency graph. All consumers explicitly declare compatible versions. Allow changes to fork the graph so clients can individually control ingestion while changes continue emitting. Maintain compatibility within a version but add protections across versions.

Track & Map Dependencies:

Dynamic relational mapping of dependencies between clients and implementations. Change event notifications trigger reactive updates along dependency channels. Changes propagate only across mapped channels, so impacts are controlled. Implement policies for push- vs pull-model synchronization.

Enhance Predictability:

Static analysis to anticipate impacts of a change pre-commit. Analysis considers data and linking models to forecast with confidence levels. Machine learning improves predictions over time. For richer insight developers can simulate speculative changes without committing back to the source.

Expand Testing:

Auto-generate incremental tests as changes occur. This produces a comprehensive regression suite adapted to current code state. Test dependency graph continuously for compatibility errors. Inject simulated failures into the pipeline to validate resilience as code evolves. Expand integration testing across versions.

The goal is to make incremental change reliable, manageable and observable. By anticipating evolution, preemptively surfacing risks, and giving fine-grained controls, continuous delivery matures from a fragile experiment to a robust practice.
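The "Add Version Control" idea above — consumers explicitly declaring compatible versions while changes keep emitting — can be sketched like this. `VersionedArtifact` is an invented name for illustration.

```typescript
// Hypothetical sketch: consumers pin a compatible version and keep
// receiving it even as newer increments continue to land.
class VersionedArtifact {
  private versions = new Map<number, string>();
  private latest = 0;

  publish(body: string): number {
    this.versions.set(++this.latest, body);
    return this.latest;
  }

  // A consumer pinned to a version is unaffected by later changes.
  resolve(pinned?: number): string | undefined {
    return this.versions.get(pinned ?? this.latest);
  }
}

const lib = new VersionedArtifact();
const v1 = lib.publish("api-v1");
lib.publish("api-v2"); // new increment keeps emitting
console.log(lib.resolve(v1)); // "api-v1" — pinned consumer stays stable
console.log(lib.resolve());   // "api-v2" — unpinned consumer gets latest
```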

RWM (that’s me):

how will mutable AST syntax affect this?

Claude:

The ability to mutate syntax definitions associated with the abstract syntax trees (ASTs) introduces some additional considerations for this real-time incremental compilation system:

  1. Versioning syntax alongside code changes
    • Need to capture evolutions in allowed syntaxes over time
    • Associate AST snapshots with syntax definitions
  2. Propagating downstream effects of syntax changes
    • Syntax changes can require large-scale code edits
    • Need to reliably refactor the ASTs and generated code
  3. Mapping between syntax definitions and compiler front-ends
    • As the grammar evolves it may require parser changes
    • Regenerate parsers/analyzers and safely reprocess
  4. Sandbox testing even small syntax changes
    • All code needs reanalysis on any syntax adjustment
    • Isolate and simulate impact prior to propagation
  5. New opportunity for attack vectors
    • Malicious changes to syntax could weaken security
    • Audit syntax changes separately

Overall, mutable syntax poses challenges, but these can be managed with versioning, decoupling parsing from later phases, smarter propagation heuristics, and expansive testing, given the wide-reaching implications of changes. The flexibility also promises benefits like DSL tailoring, safely within sandboxes.

RWM (that’s me):

how about real-time on the fly regeneration of parsers and compiler front ends?

Claude:

Excellent point! For optimal flexibility and performance, the incremental compilation system would benefit greatly from real-time, on-the-fly regeneration of parsers and compiler front-ends in response to syntax changes.

Some mechanisms that could enable this:

  1. Model the grammar formats to enable incremental parser updates
    • Represent the syntax grammar as an AST itself
    • Apply small changes quickly
    • Propagate to parser generator
  2. Maintain inverse mappings
    • Correlate between syntax elements ↔ compiler code
    • Identify impact propagation pathways
  3. Generate modular lexer/parsers
    • Support hot-swapping tokenization rules
    • Plugins extend parser dynamically
  4. Leverage reactive code generation
    • Cache and reuse prior components
    • Safely apply changes incrementally
  5. Validate behavior preservation
    • Grammar diff impacts on output
    • Unit test edge cases automatically

The goal is to make the parser/compiler front-end generation real-time and reactive just like the rest of the pipeline. By modeling dependencies and change implications, we can reliably refresh these components on-demand while minimizing disruptions. This would greatly amplify flexibility of the system overall.
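Step 1 — the grammar represented as mutable data, with the front-end regenerated on each change — can be sketched with a deliberately tiny recognizer. `LiveGrammar` is hypothetical, and a real system would drive an incremental parser generator rather than rebuild a regular expression.

```typescript
// Hypothetical sketch of step 1: the grammar is itself data, and editing
// it regenerates a (toy) keyword recognizer on the fly.
class LiveGrammar {
  private recognizer!: RegExp;

  constructor(private keywords: string[]) {
    this.regenerate();
  }

  // Any grammar mutation triggers front-end regeneration.
  addKeyword(kw: string): void {
    this.keywords.push(kw);
    this.regenerate();
  }

  private regenerate(): void {
    this.recognizer = new RegExp(`^(${this.keywords.join("|")})$`);
  }

  isKeyword(token: string): boolean {
    return this.recognizer.test(token);
  }
}

const grammar = new LiveGrammar(["let", "fn"]);
console.log(grammar.isKeyword("match")); // false: not yet in the grammar
grammar.addKeyword("match");             // syntax evolves at runtime
console.log(grammar.isKeyword("match")); // true: recognizer regenerated
```

The same regenerate-on-mutation pattern extends to lexer tables and parser rules in the full pipeline described above.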

[End of dialogue – so far. What do you think?]

Ennui

Definition

Ennui is a feeling of boredom, dissatisfaction, and weariness stemming from a lack of excitement or meaningful activity. It’s a listless, apathetic state that can sap motivation and passion.

Timeline

Twenty, fifteen, even ten years ago there was tremendous enthusiasm amongst developers for fixing the world’s approach to software development. Conferences abounded. There was a palpable buzz in the air. New methodologies like Agile promised to revolutionise how software was built, moving away from rigid, documentation-heavy and micromanaged waterfall development. Open source software held the promise of democratising coding and enabling worldwide collaboration. There was electric excitement about leveraging technology to improve lives and transform industries.

A Realisation

But nowadays, that enthusiasm has largely dissipated, replaced with ennui. The promises of improved productivity and quality have failed to materialise. Open source has become corporatised and lost its indie spirit. The realisation has set in that while software can improve things, technology alone can’t resolve systemic societal and commercial issues. And factors like offshoring, automation and “now, now, now!” cultures have decreased satisfaction levels across the industry.

Disillusion

This ennui manifests in high employee turnover, burnout from overwork, and developers questioning if they want to remain in the field. The passion that initially attracted many to software – the idea you could change the world through code – has been replaced with disillusionment. New approaches now feel gimmicky rather than groundbreaking. Technologies feel incremental rather than innovative. Opportunities feel constricted by business realities rather than inspirational. And quiet quitting is everywhere.

Morale

The Covid pandemic has only heightened this ennui. Subsequent massive layoffs have left many talented developers unemployed or fearful for their jobs, hardly the climate to inspire passion. And trying to create and collaborate solely over video chats lacks the human connection and excitement that once flourished in dynamic office environments. People’s basic safety and stability needs have been imperilled, making it difficult to reach higher goals of self-actualisation and purpose.

Deeper Cultural Issues

It’s an open question whether the current ennui plaguing developers is just a phase or indicative of deeper cultural issues in tech needing to change (my money’s on the presence of deeper cultural issues). But rediscovering the excitement, freedom and purpose that once attracted people to software may be one way to counter the creeping cynicism and revitalise the industry. Tapping into that earlier enthusiasm is critical for retaining talent and enabling developers to create their best work. The alternative is the passionless status quo – something that benefits no one.

The Far-Reaching Impacts of the Law of Unintended Consequences

The law of unintended consequences states that actions will have unanticipated effects, often counterproductive ones. This concept profoundly applies to software development, and collaborative knowledge work more generally. In these domains involving interdependent human and technical factors, even well-intentioned plans often go awry.

This post explores key unintended consequences that manifest in software development and collaborative knowledge work (CKW):

Technical Debt

Technical debt refers to the deliberate choice to take shortcuts in order to expedite software delivery. Like financial debt, it can be beneficial if consciously taken on and repaid through refactoring. However, scheduled repayments often get deferred as teams race to meet deadlines. The “interest” accrues as ongoing maintenance costs.

Feature Creep

The lean startup mentality of minimum viable products and iterative development has advantages. However, an unintended consequence is feature creep – products become bloated as developers continuously add but rarely subtract capabilities, and as management finds ever more pointless or low-value work to keep its standing teams busy. The result is complex products growing like Topsy.

Information Overload

Modern collaboration tools enable unprecedented information sharing. But an unintended consequence is information overload, which hampers productivity and decision-making. Each piece of information seems useful, yet the cumulative result is a distracted, overburdened workforce.

Tunnel Vision

In software development, teams focus on completing narrowly defined tasks. This tunnel vision means losing sight of the big picture. Pressure to deliver assigned work leads to task-oriented rather than solution-oriented thinking. This tendency is exacerbated by the widespread practice of work assignments coming from management rather than from self-organising teams. The unintended consequence is misalignment, reduced value, and lack of integration.

Morale Issues

Open information flow and accountability bring major benefits. However, the unintended consequence can be decreased risk-taking, and morale problems. With every action visible and judged, workers avoid mistakes that could impact performance reviews. Innovation suffers when people focus on safe, incremental tasks and protecting their own arses (cf. CYA).

Busywork

When work is misaligned with overall goals and strategies, busywork is the inevitable result. People spin their wheels on tasks that provide no real value to the organisation or its customers. Energy is frittered away on going through the motions rather than really attending to folks’ needs.

Perverse Incentives and Focus on Productivity

Reward systems, especially those aimed at boosting individuals’ productivity, undermine quality and teamwork. For example, compensating developers based on lines of code incentivises quantity over quality. Emphasising individual contributors over collaboration is another common unintended consequence.

Loss of Tacit Knowledge

Capturing processes, best practices, and “how-to” knowledge in wikis and databases can provide some benefits. However, over-reliance on documentation loses the tacit knowledge that comes from experience and learned skills. Key contextual information gets lost when veterans leave without transferring hard-to-document knowledge.

Technical Monocultures

Standardisation has advantages in terms of compatibility and skill transferability. But an unintended consequence is the risk of monoculture technology stacks vulnerable to single points of failure. If a widely used framework has a major bug, many downstream systems are suddenly impacted.

Integration Headaches

Connecting modern microservices architectures can provide agility. However, an unintended consequence is integration headaches when stitching together disparate systems and data sources. Overall product reliability can also suffer. Development timelines often underestimate the complexity of integrating components.

Zombie Projects

Organisations generally find it difficult or impossible to kill failing initiatives. There is a tendency for questionable projects to lurch forward as “zombies” that nobody wants to end due to, for example, the sunk cost fallacy. The unintended consequence is opportunity costs – as zombies consume budgets, workers and that rarest of all resources – management attention.

Mitigation Strategies

With sufficient foresight and systems thinking, organisations can choose to institute mitigating actions:

  • A systemic focus on the Antimatter Principle (have all efforts directed at attending to the needs of all the Folks That Matter™; cf. The Needsscape).
  • Institute product management disciplines to curtail feature creep (Cf. Product Aikido).
  • Promote diversity and constructive dissent to counter groupthink.
  • Prioritise attending to folks’ needs, including capturing (e.g. documenting) knowledge that serves folks’ needs for reference information.
  • Stay vigilant w.r.t. unintended consequences (often only obvious through hindsight).
  • Engage with Organisational Psychotherapy so issues can be surfaced, and reflected upon, early.

By understanding where the Law of Unintended Consequences applies, teams can take proactive steps to minimise disruptive friction and dysfunction. The result is better alignment, usable products, and organisations where the whole exceeds the sum of misguided parts.

Why Read My Blog

Wishing a joyful New Year to my host of regular readers – welcome back! It’s great to have you here as we continue exploring how to create truly joyful and human-centered workplaces. If you find value in my posts, please consider sharing my blog with your colleagues, friends and online communities who may be interested. Personal recommendations are so appreciated!

For new visitors, hello and thanks for finding my blog! I’m FlowChainSensei, and my passion is helping organisations transform themselves into places where people are fulfilled and joyfully engaged in their work. I draw from my 50+ years in the software business to provide practical insights into topics like:

  • Building trusting, empowered teams
  • Promoting flexible work arrangements
  • Rethinking rigid processes and bureaucracy
  • Excelling together, with compassion and humility
  • Fostering a culture of learning and growth

Thriving Workplaces

My goal is to challenge conventional thinking and offer ideas, examples, and advice to help you cultivate a workplace where people thrive. Here’s why you might choose to subscribe:

  • Gain unique perspectives for reimagining work
  • Get inspired by real-world examples and recommendations
  • Discuss/debate ideas with a community of forward-thinkers
  • Help make working together something we love to do, not have to do

Tell Your Friends, Peers, Bosses

To my regular readers, I so appreciate you being part of this community and movement. To new readers, welcome! I hope you’ll subscribe and engage. Let’s reimagine work together! And to all, please consider sharing my blog with others who may find value in rethinking work.

Win Free Books!

In my three organisational psychotherapy books I both summarise and go into much detail on many of the posts appearing in my Think Different blog. By the way, the competition for free copies of my Hearts Over Diamonds, Memeology and Quintessence books is again open.

Why Businesses Don’t Give a Rat’s Arse about Software Development

The “Jack Shit” Approach to Software Economics

Business leaders often claim to be committed to innovation and progress. But when it comes to understanding the economics, practicalities, and psychology of software development, many seem to have no fucks to give. Surprisingly, in an era where digital transformation is not optional, apathy and indifference prevail.

Why the Apathy?

On the surface, business leaders talk the talk. They use buzzwords like “Agile,” “DevOps,” and “scalability.” But dig a little deeper, and it becomes evident that there’s little understanding or even interest in the complexities of software development economics.

What’s the Cost?

This lack of attention is not without consequences. Companies face delayed projects, bloated costs, and low-quality products. All the while, they overlook the vital connection between development economics and success. Apathy towards understanding software economics is akin to saying, “I couldn’t give a rat’s arse about business success.”

Psychology Matters, Doesn’t It?

The psychological aspects of software development also suffer from this indifference. Teams become demotivated, stressed, and disengaged when management doesn’t grasp or value the complexities of their work. It’s not about methods or coding; it’s about human beings who have needs for meaning, purpose, and being valued.

Where’s the Change?

For all the talk about being agile and adaptive, the basic mindset toward software development often remains rigid and uninformed. There’s a significant gap between what businesses claim to value and what they actually prioritise.

Conclusion

For businesses to truly succeed in the digital age, this lackadaisical approach won’t cut it. A failure to understand and adapt to the economics, practicalities, and psychology of software development will only serve to hinder progress and profitability. Is it time for businesses to move beyond buzzwords and start giving more than a rat’s arse about the fundamentals of software development?