<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://www.domfarr.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://www.domfarr.com/" rel="alternate" type="text/html" /><updated>2026-02-25T18:29:46+00:00</updated><id>https://www.domfarr.com/feed.xml</id><title type="html">Dominic Farr</title><subtitle>Platform Architecture for Regulated and Agentic Systems</subtitle><author><name>Dominic Farr</name></author><entry><title type="html">Delivery Processes Are Trust Mechanisms</title><link href="https://www.domfarr.com/2026/02/25/the-processes-we-follow-for-delivering-create-soft.html" rel="alternate" type="text/html" title="Delivery Processes Are Trust Mechanisms" /><published>2026-02-25T00:00:00+00:00</published><updated>2026-02-25T00:00:00+00:00</updated><id>https://www.domfarr.com/2026/02/25/the-processes-we-follow-for-delivering-create-soft</id><content type="html" xml:base="https://www.domfarr.com/2026/02/25/the-processes-we-follow-for-delivering-create-soft.html"><![CDATA[<p>Every gate in your software pipeline exists because someone once asked: how do I know this is safe to ship?</p>

<p>Code review, test coverage, staging environments, deployment approvals. These are trust infrastructure. They encode what your organisation learned from past failures.</p>

<p>Agent-generated code doesn’t change that question. It increases the volume and velocity at which it needs answering.</p>

<p>A team generating code 10x faster through an agent still needs to validate it. If your review process was already shallow (two-minute glances, no checklist, tests written after the fact), you haven’t changed the process. You’ve increased its throughput.</p>

<p>Consider a team with inconsistent test coverage and informal review norms. They adopt an agent-assisted workflow. Output doubles within a sprint. Review latency spikes. Reviewers feel pressure to keep up. A two-second glance and a merge become the default. Within two months, defect rates climb and the blame lands on the AI.</p>

<p>The AI didn’t introduce the dysfunction. It scaled it.</p>

<p>Teams that adapt well tend to share one characteristic: deliberate validation design before adoption. Structured review checklists. Defined defect classes. Clear ownership of what automated testing covers and what it does not.</p>
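<p>One way to make that ownership explicit is to write it down as data. The sketch below is illustrative only: the defect classes and owners are hypothetical, not a standard taxonomy.</p>

```python
from enum import Enum

# Hypothetical defect taxonomy -- each team defines its own classes.
class DefectClass(Enum):
    LOGIC = "logic error"
    SECURITY = "security issue"
    CONTRACT = "interface contract violation"
    OBSERVABILITY = "missing logging or metrics"

# Structured ownership map: which validation layer covers which class.
# (Assumed assignments, for illustration.)
CHECKLIST = {
    DefectClass.LOGIC: "automated tests",
    DefectClass.SECURITY: "human review",
    DefectClass.CONTRACT: "contract tests",
    DefectClass.OBSERVABILITY: "human review",
}

def uncovered(ownership):
    """Return defect classes with no assigned validation owner."""
    return [d for d in DefectClass if d not in ownership]

print(uncovered(CHECKLIST))  # -> [] : every class has an owner
```

<p>The point isn’t the data structure. It’s that an unowned defect class becomes visible before adoption, not after the defect rate climbs.</p>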

<p>They also expect a J-curve. Early adoption slows throughput before it speeds it up. That dip is where trust infrastructure gets stress-tested.</p>

<p>The practical move is to audit your current pipeline before expanding agent usage. Identify where review is weakest. Fix that first. Then scale.</p>

<p>The sequence matters.</p>]]></content><author><name>Dominic Farr</name></author><summary type="html"><![CDATA[Every gate in your software pipeline exists because someone once asked: how do I know this is safe to ship?]]></summary></entry><entry><title type="html">Your Delivery System Was Designed for a Different Constraint</title><link href="https://www.domfarr.com/2026/02/24/assume-code-becomes-semi-black-box-redesign-delive.html" rel="alternate" type="text/html" title="Your Delivery System Was Designed for a Different Constraint" /><published>2026-02-24T00:00:00+00:00</published><updated>2026-02-24T00:00:00+00:00</updated><id>https://www.domfarr.com/2026/02/24/assume-code-becomes-semi-black-box-redesign-delive</id><content type="html" xml:base="https://www.domfarr.com/2026/02/24/assume-code-becomes-semi-black-box-redesign-delive.html"><![CDATA[<p>Most engineers don’t review transpiled output or minified bundles. They review the source and trust the pipeline. At some point, that output became a black box. Validation moved up a level. Nobody held a ceremony about it.</p>

<p>The same shift is happening with agentic code generation. And most delivery systems haven’t adjusted.</p>

<p>Dave Farley recently explained the Nyquist-Shannon Sampling Theorem in the context of software delivery[1]. To reliably reconstruct a changing signal, you must sample at more than twice its highest frequency. If AI increases code output volume, and humans remain the primary sampling layer, review doesn’t scale. The human becomes the bottleneck by design.</p>
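<p>The arithmetic is simple enough to sketch. The rates below are invented for illustration, not taken from any study.</p>

```python
# Sampling analogy, illustrative numbers only: to reliably detect
# changes arriving at rate f, the review layer must sample at more
# than 2f.
def required_review_rate(change_rate):
    return 2 * change_rate

changes_per_day = 8    # merged changes before agent adoption (assumed)
review_capacity = 20   # reviews the team can do per day (assumed)

for multiplier in (1, 5, 10):
    needed = required_review_rate(changes_per_day * multiplier)
    print(f"{multiplier}x output: need {needed} reviews/day, "
          f"capacity holds: {needed <= review_capacity}")
```

<p>Capacity that comfortably covers today’s output fails quietly at 5x. Nothing about the reviewers changed; the sampling requirement did.</p>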

<p>The METR study[2] found experienced open-source developers using AI tools were 19% slower on average. The code changed. The delivery system didn’t. That mismatch is where the cost accumulates.</p>

<p>So where does validation move?</p>

<p>Three places. Contracts define expected behaviour before anything is written. External test coverage confirms the implementation meets those contracts. Runtime monitoring catches what slips through both. Together, they replace the pull request as the primary quality gate.</p>
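<p>A minimal sketch of the three layers, with hypothetical names and a toy invariant standing in for a real contract:</p>

```python
# 1. Contract: expected behaviour, defined before anything is written.
#    (A toy invariant for illustration: output never shrinks the input.)
CONTRACT = {"invariant": lambda x, y: y >= x}

def generated_impl(x):
    # Stands in for agent-generated code; treated as a black box.
    return x + 1

# 2. External test: confirms the implementation meets the contract,
#    without inspecting how it works.
def contract_test(impl):
    return all(CONTRACT["invariant"](x, impl(x)) for x in range(100))

# 3. Runtime monitor: the same invariant, checked on live calls,
#    catching what slips through the first two layers.
def monitored(impl):
    def wrapper(x):
        y = impl(x)
        if not CONTRACT["invariant"](x, y):
            raise RuntimeError("contract violated at runtime")
        return y
    return wrapper

assert contract_test(generated_impl)
safe_impl = monitored(generated_impl)
assert safe_impl(41) == 42
```

<p>Notice that no layer reads the implementation. Each validates behaviour against the same contract, at a different point in the lifecycle.</p>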

<p>Teams on the frontier are already redesigning around this. They’re not reviewing every line of generated code. They’re investing in interface definitions, automated coverage thresholds, and production observability. The checkpoint moved.</p>

<p>If your team’s primary validation mechanism is still a pull request review, that process was built for a world where implementation speed was the constraint. It probably isn’t anymore.</p>

<p>Audit where your validation actually sits. That’s where to start.</p>

<p>[1] https://www.youtube.com/watch?v=XavrebMKH2A
[2] https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/</p>]]></content><author><name>Dominic Farr</name></author><summary type="html"><![CDATA[Most engineers don’t review transpiled output or minified bundles. They review the source and trust the pipeline. At some point, that output became a black box. Validation moved up a level. Nobody held a ceremony about it.]]></summary></entry><entry><title type="html">The Constraint Has Moved. Most Teams Haven’t.</title><link href="https://www.domfarr.com/2026/02/24/when-code-delivery-is-fully-automated-the-constrai.html" rel="alternate" type="text/html" title="The Constraint Has Moved. Most Teams Haven’t." /><published>2026-02-24T00:00:00+00:00</published><updated>2026-02-24T00:00:00+00:00</updated><id>https://www.domfarr.com/2026/02/24/when-code-delivery-is-fully-automated-the-constrai</id><content type="html" xml:base="https://www.domfarr.com/2026/02/24/when-code-delivery-is-fully-automated-the-constrai.html"><![CDATA[<p>Software delivery has spent thirty years managing human inconsistency. Two developers interpret the same requirement differently. A third forgets an edge case. A fourth ships something untested on Friday. So you built process around it. Standups, sprint reviews, merge gates, retrospectives.</p>

<p>That process was solving a real problem. The problem is changing.</p>

<p>When implementation becomes agentic, you stop managing variance between ten engineers. You start managing a system that produces consistent output at volume, with failure modes that look nothing like what Jira was designed to track.</p>

<p>The compiler analogy holds here. Nobody reviews compiled bytecode in a pull request. You validate the source, trust the compiler, and monitor runtime behaviour. Specification in. Behaviour out. You test the contract, not the intermediate steps. Agentic code delivery is heading toward the same model.</p>
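<p>The distinction is easy to show in miniature. The function and assertions below are illustrative, not from any real codebase.</p>

```python
def slugify(title):
    # Stands in for generated code whose internals we don't review.
    return title.strip().lower().replace(" ", "-")

# Contract-level test: specification in, behaviour out. This survives
# any reimplementation that honours the contract.
assert slugify("  Hello World ") == "hello-world"
assert slugify("One Two Three") == "one-two-three"

# An implementation-level test (asserting that .replace() was called,
# or inspecting the bytecode) would couple the test to the intermediate
# steps and break the moment the generator changes tactics.
```
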

<p>Teams already operating near this boundary aren’t running two-week sprints. They’re investing in three things: prompt discipline (what goes in), evaluation frameworks (how correct output is defined), and observability (what happens at runtime). The constraint has moved from writing code to specifying behaviour and catching drift when it occurs.</p>
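<p>An evaluation framework can start as something this small. The harness and cases below are a hypothetical sketch, not any particular tool.</p>

```python
# "Correct output" defined as named cases, not a line-by-line review.
# All names and cases here are illustrative.
EVAL_CASES = [
    {"name": "empty input", "input": "", "expect": 0},
    {"name": "single word", "input": "hello", "expect": 1},
    {"name": "whitespace", "input": "a  b c", "expect": 3},
]

def word_count(text):
    # Stands in for agent-generated code under evaluation.
    return len(text.split())

def evaluate(impl, cases):
    failures = [c["name"] for c in cases if impl(c["input"]) != c["expect"]]
    return {"passed": len(cases) - len(failures), "failures": failures}

print(evaluate(word_count, EVAL_CASES))
# -> {'passed': 3, 'failures': []}
```

<p>The cases are the specification. When the generator produces a new implementation, the same harness answers the same question: does it still behave correctly?</p>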

<p>The Scrum Master role was an answer to a specific organisational problem. That problem is dissolving. New ones are forming around specification quality, automated validation, and governance of systems you didn’t write line by line.</p>

<p>Audit your ceremonies against the risks they were built to manage. Some of them no longer apply.</p>]]></content><author><name>Dominic Farr</name></author><summary type="html"><![CDATA[Software delivery has spent thirty years managing human inconsistency. Two developers interpret the same requirement differently. A third forgets an edge case. A fourth ships something untested on Friday. So you built process around it. Standups, sprint reviews, merge gates, retrospectives.]]></summary></entry></feed>