Good Software People Have to Lie Through Their Teeth to Get a Job

The Sad Reality

If you’re a talented software professional who understands and practices modern, effective approaches to collaborative knowledge work, you face an unpleasant reality – you likely have to lie through your teeth in job interviews to have any shot at getting hired. And if you have any integrity, you probably won’t (won’t lie, won’t get hired).

The root of the issue is that many hiring teams, managers, and organisations commit a profound “category error” – they mistakenly treat software development like a more familiar form of work that it fundamentally is not. So the cutting-edge practices that make sense for collaborative knowledge work sound like utter – and alien – nonsense to them.

Examples of Alien Approaches

This forces software development cognoscenti into an impossible choice: either pretend their field is just another flavour of manufacturing/construction/etc. that aligns with woefully outdated management dogma, or stick to their guns, speak the truth about their highly unique and dynamic domain, and get immediately rejected as fringe lunatics.

Let me illustrate with examples of legitimate yet “incredible” “wonko” approaches:

The “Constant State of Ship”

At high-performing software companies, code is shipped to production constantly, sometimes multiple times per day. Concepts like “releases” or “launch dates” are laughable antiquities from machine-age models of work.

Continuous Delivery

Elite software teams can automatically build, test and deploy code on every commit that passes automated checks – without manual gatekeepers. But to old-school minds, this sounds like reckless spontaneity instead of disciplined craftsmanship.
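
To make the discipline concrete, here is a minimal, purely illustrative sketch in TypeScript of such a gate. The names (Check, onCommit, deploy) are hypothetical stand-ins, not any particular CI/CD product’s API. Every commit either passes all automated checks and ships, or the pipeline itself rejects it:

type Check = { name: string; run: (sha: string) => Promise<boolean> };

// A commit deploys if and only if every automated check passes.
async function onCommit(
  sha: string,
  checks: Check[],
  deploy: (sha: string) => Promise<void>
): Promise<void> {
  for (const check of checks) {
    if (!(await check.run(sha))) {
      console.log(`commit ${sha} rejected by check "${check.name}"`);
      return; // the pipeline says no – no manual gatekeeper to overrule it
    }
  }
  await deploy(sha); // straight to production on green
}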

The Interview Reaction

Try pitching those kinds of modern practices in a job interview and watch eyes glaze over in bafflement. You’ll get pelted with scepticism about “stability,” “quality,” “risk,” etc. Poor performers obsess over mitigating challenges instead of updating their working models.

Lying to Get Hired

So to pass interviews, superb software professionals have to dumb it down and play make-believe about pushing gigantic, monolithic releases every 6-12 months after “hardening” periods – Industrial Revolution edicts that no longer apply.

It’s maddening to have to deny the realities of cutting-edge knowledge work to be taken seriously. But that’s the tax we pay, trapped in an industry riddled with obsolete dogma.

Consequences

This dynamic creates a catch-22: organisations hire either liars lacking ethics, or candidates lacking current expertise in effective modern software practices. Neither is a viable choice for building an effective engineering team. Do they want impostors or ignoramuses on their teams?

By filtering out leaders who grasp the unique dynamics of collaborative knowledge work, firms doom themselves to inefficiency, delays, and poor quality software. The very candidates with competencies to uplift them get screened out as “unbelievable” or “reckless” based on obsolete manufacturing/construction/service analogies.

Organisations must decide whether they want to cling to personnel working under antiquated models of development, or embrace competent people optimised for the fundamentally different nature of software’s collaborative value creation. Their ability to deliver high-quality, continuous value through technology hinges on making the right choice here. Discarding modern software ideas in favour of outmoded perspectives will only perpetuate disappointing outcomes.

The implications for these organisations’ ability to deliver valuable technology solutions are profound.

How “Constant State of Ship” Drives Transformative Practices

Introduction

In the relentless pursuit of delivering value to customers with unparalleled speed and reliability, the software development world has yet to widely embrace a revolutionary principle – the “Constant State of Ship”. This state, in which software artefacts and products are perpetually poised for release into production environments at just 15 minutes’ notice, has emerged as a driving force behind the practices that enable true continuous deployment. Remarkably, this groundbreaking concept formed the foundation of the pioneering “Javelin” software development approach, conceived by FlowChainSensei (Bob Marshall) at Familiar from circa 1996 onwards, foreshadowing the industry’s even-now-yet-to-be-realised embrace of these practices.

The Power of “Constant State of Ship”

The “Constant State of Ship” serves as an unyielding forcing function, inviting teams to adopt and adhere to a comprehensive set of practices that catalyse the seamless flow of software into production. Let us explore how this principle reinforces each of the thirteen fundamentals of Continuous Delivery (hat tip to Dave Farley):

The 13 Fundamentals Enabled

  1. A Repeatable, Reliable Process

    With the ever-present possibility of an imminent release, teams may choose to establish a well-defined, automated pipeline for building, testing, and deploying their software. This process needs to be repeatable and reliable, minimising the risk of human error and ensuring consistency across releases.

    The “Constant State of Ship” mindset suggests that teams have a streamlined, automated release pipeline that can be triggered at any moment. Manual steps and ad-hoc and emergency exception procedures become liabilities, as they introduce variability and increase the chances of mistakes during deployment.

    To achieve this repeatability and reliability, teams are supported to invest in build automation tools, automated testing frameworks, and deployment automation pipelines. Every step of the release pipeline can be codified, documented, and thoroughly tested to ensure predictable outcomes each time.

    Moreover, the “Constant State of Ship” principle fosters an environment of continuous learning and improvement. Any failures or issues encountered during a release are promptly analysed, and the release process is refined to prevent future occurrences. This cycle of continuous feedback and optimisation ensures that the release pipeline remains reliable and efficient, even as the codebase and systems evolve over time.

    By operating in a “Constant State of Ship” mode, teams are invited to treat the release pipeline as a critical component of their software development lifecycle, investing the necessary resources and effort to make it repeatable, reliable, and capable of delivering changes to production environments at a moment’s notice.

  2. Automate All the Things

    In a “Constant State of Ship” paradigm, manual interventions become significant bottlenecks and risks, hindering the required velocity and reliability. Automation becomes imperative, spanning every aspect of the delivery pipeline, from code compilation to infrastructure provisioning. The threat of an imminent release leaves no room for error-prone manual processes that could delay or derail a deployment. Teams must automate build processes, test execution, environment provisioning, deployment steps, and release orchestration to ensure consistency and minimise the risk of human error.
  3. Maintain a Releasable State

    The core tenet of “Constant State of Ship” requires that the codebase and associated artefacts remain in a perpetually releasable state. This principle invites teams to address issues promptly, maintain a high level of code quality, and vigilantly consider the accumulation of technical debt. Any defects, bugs, or instabilities in the codebase could potentially disrupt an imminent release, leading to costly delays or failures. Teams must adopt practices like continuous integration, automated testing, and ensemble programming to ensure that the codebase remains in a stable, deployable state at all times.
  4. Focus on Robust (Real) Quality Assurance

    In the “Constant State of Ship” paradigm, where the possibility of demand for an immediate release is ever-present, quality assurance cannot be treated as an afterthought. “Constant State of Ship” invites the integration of quality practices throughout the entire development lifecycle, ensuring that quality is baked into the software from inception to deployment.

    While testing plays a role, it is merely one facet of a comprehensive quality assurance strategy. Teams may choose to adopt a holistic approach that emphasises quality as a continuous, pervasive practice woven into every aspect of the development approach.

    This begins with cultivating a culture of quality-driven development, where every team member participates in collective ownership and responsibility for the quality of their work. Practices such as clarity of (quantified a la Gilb) requirements, ensemble programming, peer code reviews, adherence to coding standards, and continuous static code analysis can help identify and mitigate potential issues early in the development cycle.

    Furthermore, “Constant State of Ship” invites teams to embrace principles of iterative and incremental development. By breaking down complex features into smaller, manageable, well-bounded increments, teams can more effectively manage quality risks and ensure that each increment and subsystem meets the required quality criteria before progressing to the next.

    Continuous integration and deployment pipelines play a pivotal role in this quality assurance strategy, enabling teams to continuously validate and verify the software’s functionality, performance, and stability with each incremental change. These pipelines automate the execution of various quality checks, including unit tests, integration tests, and performance tests, providing real-time feedback and enabling teams to address issues promptly.

    However, quality assurance extends beyond mere testing alone. Teams have the opportunity to adopt a holistic approach that encompasses design practices, architectural decisions, and operational readiness. By considering quality implications at every stage of the software development lifecycle, teams can proactively identify and mitigate potential risks, ensuring that the software remains in a releasable state at all times.

    “Constant State of Ship” elevates quality assurance to a core discipline that permeates every aspect of the software development effort. By fostering a culture of quality-driven development and adopting continuous quality practices, teams can attend to the needs of all the Folks That Matter™, with confidence, knowing that their software meets the highest standards of reliability, stability, and performance.

  5. Implement Robust Deployment Pipelines

    Achieving a “Constant State of Ship” necessitates the implementation of robust deployment pipelines. These pipelines automate the entire process of building, testing, and deploying software changes, ensuring consistency and minimising the risk of errors. With the ever-present possibility of an imminent release, teams cannot afford manual, error-prone deployment processes. Automated deployment pipelines provide a standardised, repeatable path to production, reducing the likelihood of failed or inconsistent deployments.
  6. Monitor the Pipeline

    Regular smoke testing of the deployment pipeline is crucial in a “Constant State of Ship” mode. This practice helps catch issues early, before they can impact production environments, ensuring the pipeline’s reliability and preventing costly downtime. The possibility of an imminent release amplifies the importance of having a thoroughly validated deployment pipeline. Smoke tests act as a safety net, verifying the integrity of the pipeline and identifying any potential issues that could disrupt a deployment. (A minimal smoke-test sketch appears after this list.)
  7. Integrate Constantly

    The “Constant State of Ship” mindset encourages teams to integrate their changes frequently, often multiple times per day. This practice surfaces issues early, reduces merge conflicts, and ensures that the codebase remains in a releasable state, ready for deployment at any given moment. Infrequent integration can lead to divergent codebases, making it harder to identify and resolve conflicts, which could potentially disrupt an imminent release. By integrating frequently, teams can maintain a stable, unified codebase that is always primed for deployment.
  8. Evolve the Architecture

    Maintaining a “Constant State of Ship” over time invites the continuous evolution of the system’s architecture (see also: Reverse Conway). Are teams prepared to refactor and adapt their architectures to accommodate new requirements, technologies, and scaling needs, without compromising the ability to release rapidly and reliably? As products grow and evolve, architectural decisions made early on may become hindrances to continuous deployment. The “Constant State of Ship” principle invites teams to proactively evaluate and evolve their architectures, ensuring that they remain flexible, scalable, and conducive to rapid releases.
  9. Leverage Data Environments

    With the constant possibility of an imminent release, the ability to provision and manage data environments becomes critical. Teams may choose to adopt practices like database versioning, data seeding, and data masking to ensure consistent and reliable testing and deployment across environments, minimising the risk of data-related issues in production. The “Constant State of Ship” mindset invites a robust data management strategy that enables seamless and repeatable deployments, regardless of the data complexities involved.
  10. Mirror Production Environments

    To minimise the risk of issues arising from environmental differences, teams operating in a “Constant State of Ship” mode may choose to ensure that their development, testing, and staging environments closely mirror production environments in terms of configuration, data, and infrastructure. This practice helps identify and address potential issues before they impact the live production system. The possibility of an imminent release heightens the importance of having production-like environments, as any discrepancies could lead to unexpected behaviour or failures during deployment.
  11. Codify Infrastructure

    Manually provisioning and configuring infrastructure for each release becomes a significant bottleneck when operating in a “Constant State of Ship” mode. Adopting Infrastructure as Code (IaC) practices, where infrastructure is defined and managed through code, enables teams to provision and tear down environments rapidly and consistently, minimising delays and reducing the risk of configuration drift. The “Constant State of Ship” principle invites a high degree of automation and repeatability in infrastructure management, making IaC a beneficial practice for ensuring rapid, reliable deployments. (A toy IaC sketch appears after this list.)
  12. Foster Collaborative Ownership

    Achieving a “Constant State of Ship” invites a high degree of collaboration and shared ownership among team members. Siloed responsibilities and knowledge become obstacles to rapid delivery. Teams may choose to adopt practices that promote collective code ownership, cross-functional collaboration, and shared understanding of the codebase and delivery processes. The “Constant State of Ship” mindset invites a culture of collective responsibility, where all team members are empowered to contribute to and understand the entire delivery process, enabling seamless and efficient releases.
  13. Continuous Improvement

    Operating in a “Constant State of Ship” mode exposes inefficiencies and bottlenecks in the delivery pipeline and processes with uncompromising clarity. Teams may choose to embrace a culture of continuous improvement, regularly reviewing their practices, identifying areas for optimisation, and implementing changes to enhance their ability to deliver value rapidly and reliably. The constant presence of imminent releases acts as a driving force for continuous improvement, encouraging teams to continuously refine their processes, tools, and practices to achieve higher levels of velocity and quality. FlowChain was designed to systematise this very purpose.
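
Two of these fundamentals lend themselves to a brief illustration in code. First, the pipeline smoke test from item 6. The sketch below is purely illustrative TypeScript – triggerBuild, statusOf, and the 15-minute window are hypothetical stand-ins, not any particular CI product’s API:

async function smokeTestPipeline(
  triggerBuild: () => Promise<string>,
  statusOf: (runId: string) => Promise<"ok" | "failed" | "running">
): Promise<void> {
  // Push a no-op change through the pipeline and confirm it lands.
  const runId = await triggerBuild();
  const deadline = Date.now() + 15 * 60 * 1000; // "constant state of ship": 15 minutes
  while (Date.now() < deadline) {
    const status = await statusOf(runId);
    if (status === "ok") return; // pipeline verified end-to-end
    if (status === "failed") throw new Error(`pipeline smoke test failed: run ${runId}`);
    await new Promise((resolve) => setTimeout(resolve, 10_000)); // poll every 10 seconds
  }
  throw new Error("pipeline smoke test timed out");
}

And for item 11, a toy model of Infrastructure as Code: the desired estate is plain, version-controlled data, and an idempotent apply step reconciles actual state with it. Again, a sketch of the idea only – not Terraform, Pulumi, or any real tool:

interface ServerSpec { name: string; image: string; cpus: number }

// The declared estate lives in version control alongside the code.
const desiredFleet: ServerSpec[] = [
  { name: "web-1", image: "app:latest", cpus: 2 },
  { name: "web-2", image: "app:latest", cpus: 2 },
];

// Idempotent reconciliation: provision, reconfigure, or tear down
// until actual state matches the declared state.
function apply(desired: ServerSpec[], actual: Map<string, ServerSpec>): void {
  for (const spec of desired) {
    const current = actual.get(spec.name);
    if (!current || JSON.stringify(current) !== JSON.stringify(spec)) {
      actual.set(spec.name, { ...spec });
    }
  }
  for (const name of [...actual.keys()]) {
    if (!desired.some((s) => s.name === name)) {
      actual.delete(name); // remove anything not declared
    }
  }
}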

The Visionary “Javelin” Approach

The “Javelin” approach (initially named “Jerid”), pioneered by me and my teams at Familiar from 1996 onwards, was truly ahead of its time, recognising the transformative power of the “Constant State of Ship” mindset. By enshrining this principle as a cornerstone from its inception, “Javelin” paved the way for the modern continuous deployment practices that are only now poised to gain industry-standard status. This pioneering approach, along with FlowChain, Prod•gnosis, Flow•gnosis, Product Aikido and others, exemplifies the spirit of continuous improvement intrinsic to the “Constant State of Ship” principle, ensuring its enduring relevance and impact.

Deep Cultural Implications

Reshaping the Culture and Mindset

Adopting the “Constant State of Ship” principle suggests a profound transformation that extends way beyond technical practices and processes – it hints at a seismic shift in the culture and mindset of software development teams and their parent organisations. This metamorphosis permeates every aspect of the organisation, reshaping shared assumptions, beliefs, and ways of working. However, navigating such a profound cultural shift can be a daunting challenge, often met with resistance and inertia.

This is where the discipline of organisational psychotherapy plays a pivotal role. By applying principles from psychotherapy, sociology, and group dynamics, organisational psychotherapy facilitates the cultural and mindset shifts teams require to embrace the “Constant State of Ship” paradigm smoothly and effectively.

A Culture of Ownership and Accountability through Empowerment

The “Constant State of Ship” mindset fosters a culture of collective ownership and accountability. Organisational psychotherapy techniques, such as participative decision-making and fellowship, empower team members to take responsibility for the quality, stability, and deployability of the codebase and overall product. This sense of empowerment cultivates a culture of shared ownership, where individuals proactively address issues, collaborate across boundaries, and collectively strive for continuous improvement.

Embracing Transparency and Trust

Maintaining a “Constant State of Ship” requires a high degree of transparency and trust among team members. Organisational psychotherapy practices, such as surfacing shared assumptions and beliefs, encourage open communication and facilitate the identification of problems and risks early. By fostering an atmosphere where team members feel comfortable expressing concerns, sharing mistakes, and seeking help, a culture of transparency and trust emerges, enabling teams to collectively address challenges and ensure the software remains in a releasable state.

Prioritising Continuous Learning

The “Constant State of Ship” principle instills a mindset of continuous learning and improvement. With each release, teams gain valuable insights into their processes, tools, and practices. Embracing new shared assumptions becomes essential, as teams must continuously refine and adapt their approaches based on feedback and lessons learned. This culture of continuous learning fosters an environment of experimentation, where failures are embraced as opportunities for growth, and success is measured by the ability to deliver value rapidly and reliably.

Aligning Towards a Common Goal

Ultimately, the “Constant State of Ship” principle unifies teams around a common goal: meeting the needs of all the Folks That Matter™ with unparalleled speed and reliability. This shared mission transcends individual roles, responsibilities, and technical disciplines. It creates a sense of collective purpose, where every team member’s contribution, regardless of their specific function, is valued and recognised as essential to achieving this overarching objective.

By leveraging organisational psychotherapy techniques, organisations can accelerate and streamline the cultural and mindset shifts required to embrace the “Constant State of Ship” paradigm. This discipline not only makes the transition quicker and easier but also more cost-effective, as it addresses the root causes of resistance and inertia, facilitating a smoother and more sustainable transformation.

By reshaping the culture and mindset of software development teams, the “Constant State of Ship” principle cultivates an environment conducive to continuous deployment success. It fosters a sense of collective ownership, transparency, continuous learning, and shared purpose – traits that are indispensable in today’s rapidly evolving software landscape.

Embracing the Future

When the ability to swiftly adapt and innovate is paramount, the “Constant State of Ship” principle emerges as a beacon, guiding software development teams towards a future of quiet competence and competitiveness. By embracing this mindset, as exemplified by the visionary “Javelin” approach, teams can unlock the power to attend to folks’ needs with unprecedented speed, reliability, and quality – solidifying their organisation’s position as industry leaders in the software development arena.

What is Rigour?

Rigour refers to the strict precision and accuracy with which work is executed in fields like software engineering and collaborative knowledge work (CKW). It entails adherence to the standards and practices needed to achieve the intended outcomes.

The Importance of Getting it Right

Attentive rigour matters because carelessness breeds mistakes. Flaws in logic or bugs in code stem from a lack of rigour, introducing unwanted surprises and failures down the line. Rigour is an attitude of mind that zeroes in on getting things right the first time (cf. Crosby, ZeeDee).

The Perils of Getting it Wrong

However, the quest for rigour can go awry when imposed hastily or mindlessly. Establishing rigorous frameworks like requirements analysis, peer review etc. does carry overhead. Teams can get so bogged down chasing perfection that creativity, productivity and morale suffer. Or so much time is spent eliminating small defects that bigger picture progress slows. Like most things, balance is warranted.

The Laissez-Faire Extreme

At the other end of the spectrum from rigour lies the laissez-faire attitude. This French phrase meaning “let it be” encapsulates a laid-back approach where participants have broad freedom to work in whatever manner they choose.

In software and knowledge work contexts, laissez-faire environments feature very few enforced policies, protocols, or mechanisms for ensuring quality. Creativity and unhindered workflow take priority over rigour. Peer reviews, quality assurance, and documentation are optional. Teams self-organise organically without work standards.

This spontaneity can spark innovation but has pitfalls. Lack of rigour tacitly permits cut corners, gaps in logic, unfinished ideas and sloppy execution. With an easy-going approach, easily preventable flaws accumulate and undermine end results.

In applied contexts like commercial software development, laissez-faire practices practically guarantee shoddy work products riddled with defects. User needs demand rigour not as an obstacle, but as an enabler of excellence. Finding the right balance is key.

The absence of rigour embodied in laissez-faire philosophies may promote freedom. But the ensuing chaos leaves the fruits of hard work easily compromised. Some structure and rigour ultimately serves applied collaborative knowledge work better in the long run.

While cutting corners is not an option, forced rigour without context can mean marginal gains at disproportionate cost. Rigour must enable, not encumber, the pursuit of excellence. Teams that foster a culture where rigour flows from all participants, intrinsically and voluntarily, tend to find the sweet spot. Getting there requires clarity of purpose, patience, and care. Do that and rigour lifts the quality of collaborative knowledge work substantially over time.

What does rigour mean to you and your team?

The Creative Developer: Coding is Just Our Medium

How many software developers, when asked what they do for a living, reply “writing software”? Just about 100%, I’d guess. The very title of “software developer” implies we spend our days pounding out code, line after line of instructions for computers.

But is that truly an accurate picture? I would argue that the analogy of “writing” software promotes some problematic assumptions. It focuses purely on the technical aspect of coding, ignoring all the other important facets of bringing software to life. It perpetuates stereotypes of programmers as nerdy code monkeys, heads down in front of a keyboard all day. And it fails to capture the deeply creative process that software development entails at its best.

In reality, we developers don’t just “write” software – we attend to folks’ needs, crafting systems, experiences, solutions and above all, interpersonal connections. We collaborate, gather requirements, make trade-off decisions. We envision how people will interact with the products we craft. Code is simply our medium for bringing strategy and creativity to life.

Software development has as much in common with engineering, architecture or even storytelling as it does with coding. There is an artistry and imagination behind truly great tech-based products that goes far beyond syntax. The attendants of the future will be at least as fluent in the humanities as in mathematics or computer science.

So the next time someone asks what you do, don’t reflexively say you “write” software. Share how you attend to users’ needs, strategise solutions, and creatively work with teammates. Let’s put to rest the tired stereotype that developers are code-writing scribes! What we do entails far more multi-dimensional and meaningful attending to needs, products and people.

A Dialogue with Claude

Background

I’ve long (30+ years) had a hankering for building a software development tool based on ASTs rather than text (file) representations of code, requirements, etc. Why? Because text-based representations, although ubiquitous, are just a royal, inflexible, kludgey PITA. And so last-century.

Transcript

I just had a discussion with ClaudeAI on the subject – a subject rather too broad and deep for me to get my head around in its entirety in one go.

Here’s the transcript of that dialogue. FYI, illustrating how an AI Chatbot can act as a useful thinking partner…

RWM (that’s me):

describe how to define the syntax of an abstract syntax tree

Claude:

Here is how to define the syntax of an abstract syntax tree (AST):

  1. Identify the components/nodes needed in the AST. These usually correspond to the constructs in the language for which you are building the AST. Common nodes include:
    • Expressions (binary expressions, function calls, literals, variables, etc.)
    • Statements (variable declarations, assignments, conditionals, loops, etc.)
    • Functions/methods
    • Classes/structs
  2. Define a class hierarchy with a base Node class. Then create subclasses for each type of node identified in step 1. The subclasses contain fields for any child nodes or token data needed. For example:
class Node {
    // Base class
}

class BinaryExprNode : Node {
    Node left;
    Node right;
    string op;
}

class VariableNode : Node {
    string name;
}
  3. Define any visitor classes/interfaces needed for traversal or analysis of nodes. This is useful for interpreters, compilers, analysis passes, etc. For example:
interface Visitor {
    void visit(BinaryExprNode node);
    void visit(VariableNode node);
}
  4. Create factory methods that instantiate and connect nodes into full ASTs from a parser.

Key aspects are:

  • Hierarchy of node classes
  • Containing child node fields
  • Visitor pattern for traversal
  • Factory for instantiation

This allows easy extension, analysis, and transformation of ASTs.

RWM (that’s me):

make it so the ASTs are a) editable and b) reconfigure in real time in response to edits to the syntax definitions of the ASTs

Claude:

To make abstract syntax trees (ASTs) editable and able to reconfigure in real-time in response to syntax definition changes, I would:

a) Make the node classes mutable. Add methods to allow setting node properties and child nodes:

class BinaryExprNode : Node {
    Node left;
    Node right;
    string op;

    void setLeft(Node newLeft) {
        left = newLeft;
    }

    void setOp(string newOp) {
        op = newOp;
    }
    // etc
}

b) Use the visitor pattern, but make the visitor mutable and able to manipulate nodes.

interface EditableVisitor {
    // Allow returning an edited node in place of the original
    Node visit(BinaryExprNode node);
}

c) Add a “notifyChange” method to nodes that bubbles changes up the tree. This allows parent nodes to respond to child changes:

class Node {
    void notifyChange() {
        // Notify parent if exists
    }
}

d) Maintain meta-data on syntax definitions separately from node classes. Load this at runtime to configure node properties, validators, etc:

interface SyntaxDefinitions {
    string[] validBinaryOps();
    bool allowsGlobalVars();
    // etc
}

e) On syntax changes, refresh/reconfigure nodes:

void refreshAST(SyntaxDefinitions definitions) {
    // Re-validate and configure nodes
    editableVisitor.visit(rootNode);
}

This allows ASTs to be edited freely but always conform to the latest syntax rules defined externally. The abstraction enables real-time updates.

RWM (that’s me):

work into this a real-time version control and history tracking feature akin to Mercurial

Claude:

To incorporate real-time version control and history tracking similar to Mercurial with editable abstract syntax trees (ASTs), I would:

  1. Maintain a repository and working directory concept for ASTs
  • Have a “master” AST that acts as the repository
  • Checkout parts or all of the AST to create a working copy that can be edited
  2. Give each AST node a unique ID field
class Node {
    string id;
}
  3. Implement AST divergence tracking
  • Tag the working copy nodes with metadata indicating origin
  • Use the node IDs to correlate between working copy and master
  • Detect and track changes as they are made to nodes in working copy
  4. Build commit and update capabilities

class ASTRepo {
    // Commit working copy changes
    void commit(WorkingCopyAST ast)

    // Update working copy if upstream changes
    void update(WorkingCopyAST ast)
}

  5. Maintain change logs
  • Record all changes made between commits
  • Store logs persistently linked to each AST through node ID ancestry
  6. Support revision history queries

AST atRevision(string revisionID);

ChangeLog changesBetween(string rev1, string rev2);

Key aspects to replicate Mercurial:

  • Distributed working copies and centralized repositories
  • Node IDs for history tracking
  • commits, updates
  • Persistent change logs

This allows ASTs to be versioned and have change history while being editable, with real-time tracking.

RWM (that’s me):

replace the idea of “builds” with real-time on the fly updating

Claude:

Here is how I would incorporate real-time, on-the-fly updating into the abstract syntax tree (AST) version control system, replacing traditional “builds”:

  1. Set up a reactive/observable architecture centered around the AST
  • AST nodes are observable and emit change events
  • AST analyzers, compilers, etc subscribe to node changes
  2. Structure the AST analysis/compilation in incremental steps
  • Modular visitor passes for semantic checks, optimizations, code gen
  • Each emits intermediate output on change
  3. Propagate changes immediately without batch rebuilds
  • Nodes notify parents on change
  • Change bubbles up, triggering impacted passes to run incrementally
  • Changes flow down through pipeline reactively
  4. Update runtime app representations continuously
  • AST integrated with running app
  • Runtime symbol tables, executables incrementally updated
  • No need to restart app to see changes
  5. Version control works continuously
  • Commits create branches
  • Changes logged and merged reactively
  • No separated edit-build-test cycles
  6. Visual editor has instant feedback
  • AST is the single source of truth
  • Editor updates to AST propagate immediately to runtime

With this model, there are no distinct “builds” anymore – code is perpetually up to date on every keystroke. Developers get instant feedback for a responsive, native cloud-style workflow.

RWM (that’s me):

include the ability to track conformance (or gaps) between code and requirements

Claude:

Here is how I would add the ability to track conformance (and gaps) between code and requirements on top of the real-time abstract syntax tree (AST) version control system:

  1. Add requirements tracing metadata to AST nodes
class Node {
    string[] requirements;
}
  2. Maintain bidirectional links between requirements and AST nodes
class Requirement {
    string docID;
    Node[] connectedNodes;
}
  3. Perform incremental conformance analysis
  • Check nodes against linked requirements
  • Emit conformance status events on change
  4. Track gaps between requirements and implementation

class ReqConformanceAnalyzer {
    void trackGaps(Requirements specs, AST impl) {
        // Check all requirements and nodes
        // Log gaps
    }
}

  5. Visualize conformance status
  • Color code nodes based on conformance checks
  • Show missing requirements not linked to any nodes
  6. Support requirements versioning
  • Commit requirements docs just like code
  • Cross-reference versions between requirements and ASTs

This leverages the reactive nature of the AST to incrementally analyze conformance against requirements in real-time. Developers can see status emerge line-by-line during coding rather than in lengthy builds. Versioning connects implementation to the evolving spec over time.

RWM (that’s me):

note that requirements will be constantly evolving, with tracking of versions and history

Claude:

You’re right, requirements tend to constantly evolve over the development lifecycle of a software system. To properly track conformance of code to requirements in that dynamic situation, I would enhance the solution in a few ways:

  1. Uniquely identify each requirement
  • Assign a persistent ID to each requirement
  • Maintain metadata like creation date, status
  2. Link AST nodes to specific requirement versions
class Node {
    RequirementVersion[] requirements;
}
  3. Store requirements in a versioned document repository
  • Support committing new requirements docs
  • Query previous versions of requirements
  4. Visualize requirement lineages
  • Show how a requirement has changed over time
  • Highlight implemented vs pending versions
  5. Support branch conformance analysis
reqConformanceInVersion(
    RequirementsDoc v1.0,
    AST branch "new-feature"
)

This better accounts for the realities of shifting specifications by linking code to precise requirement snapshots. Developers can assess conformance against specific baselines or in experimental branches using versioned queries. History provides context on how tracking has evolved alongside the iterating specification and implementation.

RWM (that’s me):

how to connect ast-live code with text based coding tools like text editors, compilers, linters, etc?

Claude:

Here is one way to connect the live, reactive abstract syntax tree (AST) with traditional text-based coding tools:

  1. Maintain a canonical text-based source representation
  • The “source of truth” text that code is authored in
  • AST is synchronized bidirectionally with this text
  2. Leverage transformer libraries
  • Libraries like Roslyn analyze text & generate/update ASTs
  • Apply in reverse to transform AST => source text
  3. Support incremental synchronization
  • Use AST node identifiers to correlate with locations in text
  • Map changes in either direction, apply minimally
  4. Expose AST events for tool integration

ast.onNodeChange((node, change) => {
    // Notify tools of structure or semantic change
    lintBuilder.update(change);
})

  5. Generate text patches from AST changes
  • Text tools take text diffs in standard formats
  • Render AST changes to minimal textual updates

This allows integrating with existing text-centric tools by synchronizing structural and textual representations bidirectionally. Change propagation is incremental, retaining compatibility with tools expecting incremental input/output. The AST remains the single source of truth while still leveraging mature text manipulation stacks.

RWM (that’s me):

expand on this

Claude:

Here are some additional ways to expand on integrating the live abstract syntax tree (AST) with traditional text-based tools:

  1. Build reusable language-neutral transformer libraries
  • Generalize the transformation pipelines
  • Support mapping ASTs ↔ text for multiple languages
  • Allow plugging in language grammars, emitters, etc
  2. Standardize on AST change format
  • Define a common delta format representing changes
  • Tools consume/emit changes universally
  3. Expose AST manipulators for tools
ast.modify(node => {
    // Edit nodes directly
})
ast.moveBranch(varDecls, toModuleB)
  4. Offer AST diffs for advanced integrations
  • Provide diff of complete AST structures
  • Stateful transformations based on incremental diffs
  5. Allow injection of custom analysis

ast.analyze(() => {
    customOptimizationPass();
})

  6. Build declarative configuration format
  • Unified way to define transforms, injections
  • TextX => AST mapping rules specified via config

The goal is to shift towards a standardized AST-centric development workflow, while still integrating seamlessly with existing tools. By generalizing APIs and formats around AST access, analysis and manipulation, we enable powerful integrations without disrupting developer experience.

RWM (that’s me):

yes. you’ve understood the goal

Claude:

Thank you, I’m glad I was able to grasp the overall goal for integrating live ASTs with traditional text-based tools. Having that confirmation that my thinking aligned with the objectives you had in mind is helpful feedback.

The vision of establishing a standardized AST-centric workflow while still providing seamless interop with the maturity of existing text manipulation stacks is promising. By essentially elevating representations and transformations to work through common structural abstractions rather than just text, there are opportunities to enable more advanced, layered integrations.

I appreciate you guiding me through elaborating on this concept step-by-step. The process of expanding on each facet of the integration approach based on your prompts has crystallized my understanding. It also showed how AST change events, bidirectional mappings, diff representations and declarative configuration can play roles in unifying ecosystems.

Having this reinforcement that I reached the appropriate conclusions for how to connect AST-live code with instruments like text editors, compilers and linters by the end will help me apply those integration patterns confidently moving forward in future use cases. Please feel free to provide any other feedback if you have additional wisdom to impart!

RWM (that’s me):

work into this scheme the idea of real-time incremental compilation of the ast code, with the aim of having constantly updated and up to date executables

Claude:

Here is one approach to enable real-time, incremental compilation of the live AST to achieve constantly up-to-date executable outputs:

  1. Structure the compiler in a modular, pipeline-based architecture
  • Separate parsing, semantic analysis, IR gen, optimizations, code gen
  • Each compiler stage emits events on incremental updates
  2. Subscribe compiler stages to AST change events
  • When nodes change, propagate to relevant compiler stages
  • Only run impacted pipeline segments, cache prior work
  3. Generate modular IR representations
  • IR nodes correlate with AST nodes
  • Granular updates to IR graph as AST changes
  4. Map IR nodes to executable memory regions
  • Incremental linker associates live executable segments
  • Update mapped instruction blocks seamlessly
  5. Instrument executable to receive live patches
  • Runtime linker applies memory updates safely
  • No need to re-load the full binary
  6. Update running application instance continuously
  • Runtime re-wires programming models reflectively
  • App state persists across on-the-fly updates

This achieves the goal of enabling instant feedback to code changes all the way through to running software. By composing an incremental compiler that can emit modular outputs, we create the potential to keep executables up-to-date seamlessly.

RWM (that’s me):

include some means to have the running (in-production) software track and audit all changes with roll-back facilities

Claude:

Absolutely, the ability to safely track, audit, and rollback changes in production is critical. Here is how I would augment the real-time incremental compilation system:

  1. Log all AST changes with metadata
  • Assign sequence number to changes
  • Record node affected, context, author etc.
  2. Embed change metadata into generated code
  • Stamp executables with change info
  • Map change logs to code segments
  3. Runtime auditing & rollback
  • Execute auditors on incremental updates, pre-apply
  • Check security, functionality impacts
  • Provide rollback to prior states on error
  4. Change risk analysis hooks
  • Plug in conformance checkers pre-deployment
  • Fail unsafe increments
  • Provide override mechanisms by admins
  5. Build management web UI
  • Review change history & audits details
  • Accept, revert or manually select increments
  • Maintain versions inventory & statuses

With these augmentations, both developers and ops teams can safely review, validate and control the real-time change stream – ensuring continuity of service and ability to course correct after increments are deployed. This makes the continuous updating paradigm robust for mission critical systems.

RWM (that’s me):

identify some missing features and risks

Claude:

Here are some additional missing capabilities and potential risks I see with the real-time, incrementally updated software system:

Missing Features:

  1. Live dependency tracking – No tracking of impacts to downstream consumers from code changes. Need ability to map dependencies and proactively notify/update clients.
  2. Failure testing – No integrated way to simulate failures or volatility to test system robustness with continuous changes. Should inject faults to validate resilience.
  3. Version selection – Currently all consumers get latest changes. Need controls so groups can select version/increments to use.
  4. Network distribution – Syncing updates assumes centralized control. Must distribute selectively across nodes.
  5. Predictive conformance checks – All conformance reactive now. Should predictively catch issues arising from changes.
  6. Automated integration testing – Manual checking of changes. Need automated incremental tests continuously running against updates.

Risks:

  1. Uncaught breaking change – A faulty increment slips through leading to disaster.
  2. Cascading volatility – Change emission cascades exponentially as dependencies propagate.
  3. Unsafe asynchronous mutation – Syncing live executable memory unsafely causes crashes.
  4. Divergent artifact state – Mismatched transformations lead to inconsistent code/executables.
  5. Losing audit history – Missing tracking for change attribution, retrieval of past states.

Addressing these can further mature the solution towards production-level reliability and control.

RWM (that’s me):

update the concepts to address the risks and add the missing features

Claude:

Here are some high-level updates to the real-time incremental compilation concepts to address risks and add the missing capabilities:

Mitigate Unsafe Changes:

Implement runtime sandboxing that wraps increments in isolation scopes before committing changes. Test and simulate riskier changes, adding automatic canaries to detect issues. Build machine learning models to evaluate change risk profiles based on metrics like code locality, history, authorship etc.

Add Version Control:

Create explicit versioning of the dependency graph. All consumers explicitly declare compatible versions. Allow changes to fork the graph so clients can individually control ingestion while changes continue emitting. Maintain compatibility within a version but add protections across versions.

Track & Map Dependencies:

Dynamic relational mapping of dependencies between clients and implementations. Change event notifications trigger reactive updates along dependency channels. Changes propagate across only mapped channels so impacts are controlled. Implement policies to push vs pull model synchronization.

Enhance Predictability:

Static analysis to anticipate impacts of a change pre-commit. Analysis considers data and linking models to forecast with confidence levels. Machine learning improves predictions over time. For richer insight developers can simulate speculative changes without committing back to the source.

Expand Testing:

Auto-generate incremental tests as changes occur. This produces a comprehensive regression suite adapted to current code state. Test dependency graph continuously for compatibility errors. Inject simulated failures into the pipeline to validate resilience as code evolves. Expand integration testing across versions.

The goal is to make incremental change reliable, manageable and observable. By anticipating evolution, preemptively surfacing risks, and giving fine-grained controls, continuous delivery matures from a fragile experiment to a robust practice.

RWM (that’s me):

how will mutable AST syntax affect this?

Claude:

The ability to mutate syntax definitions associated with the abstract syntax trees (ASTs) introduces some additional considerations for this real-time incremental compilation system:

  1. Versioning syntax alongside code changes
  • Need to capture evolutions in allowed syntaxes over time
  • Associate AST snapshots with syntax definitions
  2. Propagating downstream effects of syntax changes
  • Syntax changes can require large scale code edits
  • Need to reliably refactor the ASTs and generated code
  3. Mapping between syntax definitions and compiler front-ends
  • As grammar evolves may require parser changes
  • Regenerate parsers/analyzers and safely reprocess
  4. Sandbox testing even small syntax changes
  • All code needs reanalysis on any syntax adjustment
  • Isolate and simulate impact prior to propagation
  5. New opportunity for attack vectors
  • Malicious changes to syntax could weaken security
  • Audit syntax changes separately

Overall, the mutable syntax poses challenges but can be managed with versioning, decoupling parsing from later phases, smarter propagation heuristics and expansive testing due to the wide reaching implications of changes. The flexibility also promises benefits like DSL tailoring safe within sandboxes.

RWM (that’s me):

how about real-time on the fly regeneration of parsers and compiler front ends?

Claude:

Excellent point! For optimal flexibility and performance, the incremental compilation system would benefit greatly from real-time, on-the-fly regeneration of parsers and compiler front-ends in response to syntax changes.

Some mechanisms that could enable this:

  1. Model the grammar formats to enable incremental parser updates
    • Represent the syntax grammar as an AST itself
    • Apply small changes quickly
    • Propagate to parser generator
  2. Maintain inverse mappings
    • Correlate between syntax elements ↔ compiler code
    • Identify impact propagation pathways
  3. Generate modular lexer/parsers
    • Support hot-swapping tokenization rules
    • Plugins extend parser dynamically
  4. Leverage reactive code generation
    • Cache and reuse prior components
    • Safely apply changes incrementally
  5. Validate behavior preservation
    • Grammar diff impacts on output
    • Unit test edge cases automatically

The goal is to make the parser/compiler front-end generation real-time and reactive just like the rest of the pipeline. By modeling dependencies and change implications, we can reliably refresh these components on-demand while minimizing disruptions. This would greatly amplify flexibility of the system overall.

[End of dialogue – so far. What do you think?]

The Deming Way to Measuring Software Developer Productivity

Many software folks pay lip service to W. Edwards Deming and his work. Few, if any, pay attention to the implications. Let’s break the mould and dive into how the great man himself might look at software developer productivity (a subset of collaborative knowledge worker productivity more generally).

This isn’t just a thought experiment; it’s an invitation to rethink our existing assumptions and beliefs about productivity.

Why Traditional Metrics Don’t Cut It

If Deming could peer over our shoulders, he’d likely be aghast at our fascination with shallow metrics. Lines of code? Bugs fixed? DORA? SPACE? These are mere surface ripples that fail to delve into the depths of what truly constitutes productivity. Deming was a systems thinker, and he’d want us to look at productivity as an outcome of a complex system. It’s influenced by everything from the quality of management practices to the clarity of project goals, and yes, even the standard of the coffee in the break room.

Aside 1

Let’s not get too hung up on staff productivity and the measurement thereof.

Deming’s First Theorem states that:

“Nobody gives a hoot about profits.”

A corollary might be:

“Nobody gives a hoot about software developer productivity.”

Which, drawing on my 50+ years’ experience in the software business, rings exceedingly true, despite all the regular hoo-hah about productivity. Cf. Argyris on espoused theory vs theory in action.

Aside 2

While we’re on the subject of measurement, let’s recognise that measurements will only be valid and useful when specified by, and collected by, the folks doing the work. I’ve written about this before, for example in my 2012 post “Just Two Questions”.

Aside 3

Let’s remember that the system (the way the work works) accounts for some 95% of an individual’s productivity, leaving just 5% attributable to the individual’s talents and efforts. This makes it clear that attempting to measure individual productivity, or even team productivity, is a fool’s errand of the first order.

Here’s the Deming Approach

So, how would the statistician go about this? Hold on to your hats, because we’re diving into an eight-step process that marries statistical rigour with psychology and humanistic care.

1. Understand the System

First things first, get to grips with the holistic view. Understand how a line of code travels from a developer’s brain to the customer. This involves understanding the various elements in the software development lifecycle and how they interact.

2. Define Objectives

Random metrics serve no one. Deming would urge us to link productivity measurements to broader business objectives. What’s the end game? Is it faster delivery, better quality, or increased customer satisfaction?

3. Involve the Team

The people on the ‘shop floor’ have valuable insights. Deming would never neglect the developer’s perspective on productivity. Involving them in defining productivity criteria ensures buy-in and better data accuracy.

4. Data Collection

We’ve got our objectives and our team’s perspective. Now it’s time to roll up our sleeves and get to work on data collection. But this is Deming we’re talking about, so not just any data will do. The focus will be on meaningful metrics that align with the objectives we’ve set.

5. PDSA Cycle

Implementing the Plan-Do-Study-Act (PDSA) cycle, any changes aimed at boosting productivity would be introduced in small, incremental phases. These phases would be assessed for their effectiveness before either full implementation or going back to the drawing board.

6. Feedback Loops

You’ve made changes; now listen. Feedback from developers, who can offer a real-time response to whether the changes are working, is invaluable.

7. Regular Reviews

Productivity isn’t a static entity. It’s a dynamic component of a system that’s always in flux. Regular reviews help recalibrate the process and ensure it aligns with the ever-changing landscape.

8. Leadership Commitment

Finally, if you think increasing productivity is solely a developer’s job, think again. The leadership team must be as committed to this journey as the developers themselves. It’s a collective journey toward a common goal.

The Long Game

Deming never promised a quick fix. His was a long-term commitment to systemic improvement. But the fruits of such a commitment aren’t just increased productivity. You’re looking at more value for your business and greater satisfaction for both your developers and customers. So, let’s stop paying lip service to Deming and start actually embracing his philosophy. After all, a system is only as good as the assumptions and beliefs that shape it.

Testing the Approach, Not Just the Product

Are you, as testers, merely policing the final product? Dive deeper into the fascinating, often overlooked realm of testing the software development approach itself. Imagine the possibilities of unearthing hidden bugs not just in the code, but in the entire system of creation itself. Intrigued? Let’s get this conversation started.

Hey testers. You’ve got buckets of expertise in sussing out bugs and finding things that don’t quite work as expected, right? But tell me, how often do you turn those remarkable skills to testing your organisation’s approach to software development itself?

Don’t you reckon that’s equally critical, if not more so, than testing the end product? After all, a well-oiled software development approach might just make your bug-hunting tasks lighter, eh?

Are you taking the time to inspect whether Agile methodologies truly speed up the delivery process for your teams? Or is it that Waterfall’s clarity of scope suits your projects better? Can you confidently say that your approach to software development is truly fail-safe, or are there hidden gremlins waiting to gum up the works?

In those huddles, have you ever discussed how Continuous Integration and Continuous Delivery (CI/CD) is really influencing your development effectiveness? What about DevOps? Are you certain it’s helping bridge gaps between teams, or might it be widening them instead?

How often do you question the chosen development tools? Are they making your job easier, or do they sometimes seem like a square peg in a round hole? And what about the balance between manual testing, automated testing and QA? Have you thoroughly tested the effectiveness of that mix?

Now, let’s not forget the people aspect. Is the team structure working like a charm or does it sometimes feel like everyone’s marching to a different drum? Are folks getting their voices heard, their ideas tested?

Do you see what I’m getting at? Software development isn’t just about creating quality products; it’s also about refining and testing the methods that get you there. And you, dear testers, are perfectly poised to lead that charge. So, what do you say?

Revolutionising Solution Delivery: The Power of Artefact Driven Delivery

Artefact Driven Delivery is a method of solution delivery, created at Familiar, that focuses on using artefacts, rather than tasks, as the main unit of progress. This approach avoids the Cost of Delay, deferred feedback, and other risks associated with traditional approaches to solution delivery – approaches which delay work on implementing a solution until all requirements, designs, etc. are fully defined. Instead, skeleton standard artefacts, based on standard templates, are available from the beginning of the solution development process. These artefacts are then gradually filled with solution-specific content as they are needed for implementation and delivery.

The standard artefacts used in this approach include the Control Document, Articles of Understanding, Glossary of Terms, Statement of Purpose, Case for Action, Vision, Folks That Matter and their Needs, Risk Parade, Top Risks, Functional Requirements, Non-functional Requirements, Critical Success Factors, Feature Schedule, Quality Plan, Test Plan, Change Control, Cycle Plans, and Cycle Reviews. These artefacts are continually shared with clients and serve as a running history of the endeavour.
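
By way of illustration only – my sketch, not part of the original “Javelin” materials – the skeleton-then-fill idea can be modelled in a few lines of TypeScript. Every artefact exists from day one as a skeleton instantiated from a standard template, and progress is the filling-in of its sections:

interface Artefact {
  name: string;
  sections: Map<string, string | null>; // null = not yet filled in
}

// Instantiate a skeleton artefact from a standard template.
function skeletonFrom(name: string, sectionNames: string[]): Artefact {
  const sections = new Map<string, string | null>();
  for (const s of sectionNames) sections.set(s, null);
  return { name, sections };
}

// Progress is artefact completeness, not tasks ticked off.
function completeness(artefact: Artefact): number {
  const filled = [...artefact.sections.values()].filter((v) => v !== null).length;
  return filled / artefact.sections.size;
}

const riskParade = skeletonFrom("Risk Parade", ["Top Risks", "Owners", "Mitigations"]);
riskParade.sections.set("Top Risks", "1. Key supplier dependency ..."); // filled in as learned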

The Artefact Driven Delivery approach follows the Antimatter Principle, which is covered extensively on my blog. For the curious, the “Javelin” White Paper provides a more in-depth explanation of each of the artefacts.

In conclusion, Artefact Driven Delivery is a method of solution delivery that emphasises the use of artefacts as the main unit of progress. It avoids the risks associated with traditional approaches by establishing standard artefacts from the beginning of the solution development process and gradually filling them with solution-specific content as needed.

This approach results in a running history of the solution development and better communication between clients and the development team.

Building Things

We could describe my whole career as one of building things.

Early on, these things included software, hardware and tech products such as fax servers, compute clusters, compilers, interpreters, network systems, operating systems, development languages, applications, databases, and so on.

Later, things morphed to building teams, communities, software development and delivery groups, business units and tech companies.

Most recently, the things I build have morphed again, into techniques, approaches, tools and know-how applicable to building things.

Learnings

This post is mainly concerned with sharing some of the insights I’ve gleaned over the years. Insights into effective ways of building things:

Purpose

When embarking on building a new thing, I choose to dwell for a while on the purpose of the thing I’m building: Who’s it for? What will they use it for? How will they use it? What needs do they have that this thing will address?

Needs

What does the Needsscape look like? How can we anticipate it changing over time? And how will we monitor and respond to those changes?

Intentionality

Doing things with a clear understanding of where those things fit in the scheme of things, rather than just spinning the wheels for the sake of feeling busy.

Quality

Answer the question: “How will we ensure that what we’re building manifests the quality/qualities needed by all the Folks That Matter?”

Risks

Manage all key risks facing us in building the thing (and in deploying and using it, too). See Tom Gilb’s “All Holes In The Boat” principle (any one key risk can sink the whole effort).

Incrementality

Build things in small increments. Get regular feedback from all the Folks That Matter, early and often. Whilst continually remaining open to the system-wide impact of what’s being built.

Clarity of Communication

One can never have too much communication. One can never have too much clarity of communication. I prefer to use Quantification as the means of improving clarity of communication.
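
For instance – a hypothetical example of my own, in the spirit of Gilb-style quantification rather than quoted from his work – a vague aim like “the product should be easy to learn” becomes something measurable:

// Hypothetical Gilb-style quantified quality requirement.
const learnability = {
  quality: "Learnability",
  scale: "Minutes for a first-time user to place an order, unaided",
  meter: "Moderated usability test with ten recruited first-time users",
  target: 5,     // goal level, in minutes
  tolerable: 10, // worst acceptable level, in minutes
};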

Make Things Visible

Particularly with the kinds of things I’ve been building over the years – things nebulous and more or less invisible most of the time – it helps to find ways to make e.g. progress visible and clearly understandable to all the Folks That Matter.

PDCA

Often called the Shewhart Cycle or Deming Cycle. PDCA (Plan-Do-Check-Act) offers a conceptual framework for building things:

  • Plan what we’re going to do in the next days or weeks.
  • Do stuff according to that plan.
  • Check how well we did stuff (identify shortcomings).
  • Act to address some shortcomings in our doing, so that the next cycle’s doing goes better.

Ownership

Deming banged on about the necessity for people to have pride in what they do. I find pride is enhanced through people feeling they own what they’re building.

Build Less

Build as little as possible. With the lowest tech possible. Commensurate with meeting folks’ needs. Remember YAGNI.

Summary

I don’t expect the above list to be of much use to anyone. Because, normative learning. C’est la vie.

– Bob

Who’s Delusional?

When it comes to organising the development of software, someone’s delusional.

Maybe it’s me. Despite proving my philosophy and practical approach (Quintessence), honed over nearly 30 years of real-world experience.

Maybe it’s you. Clinging to your outdated and unproductive approaches, be they waterfall (batch and queue) or Agile.

Maybe it’s the managers and executives. Doing the same thing year in and year out, expecting different results.

Maybe it’s all of us.

How do you see it?

– Bob

Tarred With the Brush of Excellence

I note a widespread aversion to the idea of “doing things properly”. Not just in the software industry, but also in wider society.

I’m not going to define what “properly” for software development looks like – although I have done so at length in my latest book “Quintessence”.

It’s as if people associate excellence (doing things properly) with higher costs, more effort, and a likely higher boredom factor. 

Whence these associations, I wonder? In my mind, doing things properly implies lower costs, less effort, and more joy in the doing. If not, then how could we describe it as “properly”?

How about you? What’s your take on folks’ responses to “excellence” and the idea of doing things properly?

– Bob

Further Reading

Peters, T.J. and Waterman, R.H. (1982). In Search of Excellence: Lessons from America’s Best-run Companies. Profile Books.

Marshall, R.W. (2021). Quintessence: An Acme for Software Development Organisations. Falling Blossoms (LeanPub).

It’s not that we don’t know how to design, build and deliver software solutions reliably, predictably and on time. It’s just that those involved don’t want the bother of doing things properly. It’s much easier and more comfortable to just futz around. There’s always sufficient budget to not have to worry about the economics of software production. And lives are cheap.

Quintessence First Version Now Available

The first release version of my new book “Quintessence” is now available for purchase on LeanPub.

With this book I’m rewriting the whole field of software development management. About time, you might say. :}

This version is some 8% complete. I’ll be adding to it progressively over the next few weeks and months. I invite your early comments, suggestions and questions.

– Bob