🛰️ Signal Boost

A core idea worth amplifying.

The Deep Work Moved

For most of software engineering's history, implementation was the expensive part. Writing code took time. Debugging took longer. Getting a feature from idea to production meant weeks of careful, sequential work: one developer, one problem, one file at a time. And because building was slow and costly, we were forced to think hard about what we built before we built it.

That friction wasn't a bug. It was a strategic filter.

When a feature took six engineer-weeks to ship, you couldn't afford to be wrong about whether it was worth building. Teams held design reviews because rework was painful. Engineers pushed back on vague requirements because ambiguity meant wasted sprints. The sheer cost of implementation created a natural pressure to get clarity before touching the keyboard.

Kent Beck captured this tension brilliantly in his essay "Inefficient Efficiency." He argued that optimizing for throughput (brewing the whole pot of coffee at once) feels productive but often delivers the wrong thing beautifully. The smarter move was to make one cup, taste it, learn, and adjust. The expense of each cup forced you to care about whether it was right.

AI just mass-produced the coffee machine.

Today, a developer with the right AI tools doesn't produce a single stream of output anymore. They parallelize. They magnify. Work that took a team of five now takes one person with clear intent and good tooling. The cost of implementation has collapsed. And with it, the natural friction that once forced us to think carefully about what we were building.

This is the shift most teams haven't fully reckoned with. The bottleneck moved, but most organizations are still optimizing for the old one.

They're celebrating faster cycle times and higher deployment frequency without asking the harder question: are we building the right things better, or just building the wrong things faster?

The Real Cost Didn't Change

Here's the brutal truth: AI made building cheap. It didn't make building the wrong thing cheap. A feature that doesn't solve a real user problem costs just as much in confused customers, wasted support cycles, and eroded trust, whether it took six weeks to build or six hours.

And the danger of cheap implementation is that it removes the natural pause where good thinking used to live. When building was hard, teams had to justify the investment. Now that building is easy, bad ideas don't die from economics anymore. They ship.

This is what the team at The Bushido Collective nailed in their piece "Friction Was a Feature": the cost of implementation used to be a strategic filter. It's not that organizations had fewer bad ideas back then. They had the same number of bad ideas, but the expense of building them killed most before they reached production. Remove that filter without replacing it with something better, and you flood your systems with half-considered work at scale.

Where the Deep Work Lives Now

So if the craft isn't in the typing anymore, where did it go?

It moved upstream. Into the specification. Into the intent.

The developers who are thriving right now aren't the ones generating the most code. They're the ones who've gotten remarkably good at articulating exactly what they want before an AI agent touches a single file. They write clear problem statements. They define success criteria. They think through edge cases, support implications, and production readiness before the first line is generated. Not after.

This is a fundamental inversion of the old workflow. We used to think through the code. Writing was how we discovered what we actually wanted. Now, thinking has to happen before the code, because the machine will faithfully build whatever you describe. Including your mistakes.

Writing has become a core engineering skill. Real writing. The kind where you wrestle with what you actually mean. Not because developers need to author blog posts, but because writing is a proxy for thinking. A well-written specification is a well-thought-out product. A vague prompt produces a vague feature. The quality of your input directly determines the quality of your output, and AI has made that relationship brutally literal.

Building Your Own Friction

The teams that are getting this right aren't moving fast and breaking things. They're moving fast because they've built deliberate friction into the right places.

That means:

  • Frontloading vision, not just requirements. Before anyone writes a prompt or kicks off an agent, the team aligns on what the feature should feel like, not just what it should do. What's the user's experience? What's the support plan? What internal tooling needs to exist? What does production readiness actually look like?

  • Designing backpressure into the system. If AI agents can produce unlimited output, your job as a developer is to build the quality gates that validate that output. What proves the work is correct? Where do humans review? What are the acceptance criteria that an agent can't evaluate for itself?

  • Iterating on customer feedback, not on implementation guesses. The old cycle was: build fast, ship rough, iterate on the code until it's right. The new cycle should be: think deeply, specify clearly, ship something close to right on the first pass, then iterate based on what users actually tell you. Not on what you got wrong because you didn't think it through.

This isn't Big Design Up Front. It's not waterfall dressed in new clothes. It's clarity of vision. The discipline of knowing what you want before you have the power to build anything.
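The backpressure idea above can be made concrete as a gate that AI-generated work must clear before it advances. A minimal sketch, assuming a team tracks work this way: the `ChangeSet` shape, the artifact names, and the criteria here are all hypothetical, not a prescribed tool.

```python
# Hypothetical pre-merge "backpressure" check: refuse to advance
# AI-generated work that lacks the artifacts a human gate needs.
from dataclasses import dataclass, field


@dataclass
class ChangeSet:
    """A unit of AI-generated work awaiting review (illustrative)."""
    description: str
    has_intent_doc: bool = False
    acceptance_criteria: list[str] = field(default_factory=list)
    human_reviewed: bool = False


def quality_gate(change: ChangeSet) -> list[str]:
    """Return the reasons this change may not advance (empty list = pass)."""
    blockers = []
    if not change.has_intent_doc:
        blockers.append("no intent doc: what problem does this solve?")
    if not change.acceptance_criteria:
        blockers.append("no acceptance criteria the agent can't self-grade")
    if not change.human_reviewed:
        blockers.append("no human review recorded")
    return blockers


change = ChangeSet(description="add export-to-CSV feature")
print(quality_gate(change))  # all three blockers: the work stays put
```

The point of the sketch is the shape, not the specific checks: the gate is deliberate friction, placed where unlimited agent output meets finite human judgment.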

The Intelligence Shifted, Not the Work Ethic

Let me be clear: this isn't an argument for slowing down. It's an argument for redirecting where your intensity goes.

Developers today can accomplish in a day what used to take a sprint. That's extraordinary. But that multiplied output only compounds into value if it's pointed in the right direction. Unfocused speed is just expensive chaos with better tooling.

The intelligence to produce software is now commoditized. The intelligence to decide what software should exist, and to specify it with enough clarity that machines can build it right, that's the new craft. That's where the deep work lives.

The developers and teams who internalize this will build better products with fewer false starts, less rework, and more confidence in what they ship. The ones who don't will generate mountains of code that nobody asked for, solving problems that didn't need solving, at unprecedented speed.

The deep work moved. The question is whether you've followed it.

🔗 Lead Link

One standout article from the web that delivers signal, not noise.

"Friction Was a Feature", The Bushido Collective

This sharp piece articulates something most teams are feeling but haven't named: the cost of implementation used to serve as a strategic filter for bad ideas. When building was expensive, only the ideas worth building survived. AI removed that filter and exposed that many organizations' real bottleneck was never engineering capacity. It was leadership clarity.

💡 Why it matters

The article introduces a concept worth internalizing: cognitive debt. This is distinct from technical debt. It's what happens when code gets generated, reviewed superficially, and deployed without anyone on the team truly understanding how it works. When that system breaks (and it will), the diagnostic knowledge doesn't exist. You've shipped a black box into production.

The piece also nails the symmetry problem: AI amplifies whatever your team already is. Motivated, clear-thinking engineers compound their capabilities. Disengaged or unfocused teams just ship hollow solutions faster. The tool is a multiplier, not a corrector.

Try this: Audit the last three features your team shipped. For each one, ask: Could three different engineers on the team explain how this works and why it was built this way? If the answer is no, you're accumulating cognitive debt faster than you realize.

🛠️ Tactical Byte

A quick tip or tactic you can try this week.

Write the intent doc before you write the prompt.

Before kicking off an AI agent on your next feature, spend 30 minutes writing a short document that answers:

  • What problem are we solving, and for whom?

  • What does success look like? (Be specific: metrics, behaviors, user outcomes)

  • What are the edge cases and failure modes we need to handle?

  • What does production readiness require? (Support tooling, monitoring, rollback plan)

  • How will we validate this is correct before it ships?

This isn't a PRD. It's a thinking exercise. The act of writing it forces you to confront ambiguity before it becomes a bug. It gives your AI agents better input. And it gives your team shared context, which, as I've said a hundred times, is king.
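If you want to hold yourself to the exercise, the five questions above can even be turned into a lightweight pre-flight check before any prompt is written. A toy sketch: the section names and the matching logic are illustrative assumptions, not a standard format.

```python
# Hypothetical pre-flight check: scan a draft intent doc for the five
# areas above before kicking off an agent. Section names are
# illustrative; adapt them to whatever your team's docs look like.
REQUIRED_SECTIONS = [
    "Problem",      # what are we solving, and for whom?
    "Success",      # metrics, behaviors, user outcomes
    "Edge cases",   # failure modes we need to handle
    "Production",   # support tooling, monitoring, rollback plan
    "Validation",   # how we prove it's correct before it ships
]


def missing_sections(doc_text: str) -> list[str]:
    """Return the required sections the intent doc doesn't mention."""
    lower = doc_text.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lower]


draft = """
Problem: support agents re-type refund data by hand.
Success: refunds processed in under one minute.
"""
print(missing_sections(draft))  # ['Edge cases', 'Production', 'Validation']
```

A script can only check that the sections exist, not that the thinking inside them is any good; the writing is still the work.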

The teams I've seen do this well don't treat it as bureaucracy. They treat it as the actual work. The coding that follows is just execution.

Bonus for leaders: Make intent docs a first-class artifact in your development process. Review them the way you'd review architecture decisions. The quality of the spec is now the strongest predictor of the quality of the output.

🎙️ From the Field

An insight, quote, or story from an experienced tech leader.

At my company, we've been living this shift in real time. And it pushed us to build something new.

We noticed a pattern. Our engineers were getting dramatically faster at producing code with AI tools, but the quality of outcomes wasn't scaling at the same rate. Features would ship quickly, but edge cases were missed. Support implications weren't considered. Internal tooling gaps would surface weeks later. The speed was real, but the thoughtfulness hadn't kept pace.

The problem wasn't the engineers or the tools. It was that our process was still designed for the old world, the world where implementation speed was the bottleneck. We had rituals for planning sprints and reviewing code, but we didn't have structured rituals for the work that now matters most: defining intent, layering in feedback, and validating that AI-generated work actually meets the bar before it hits production.

So our CTO designed something new. It went through feedback cycles with me, our engineers, our design team, and our product team. What came out the other side is something we call H·AI·K·U: Human + AI Knowledge Unification.

H·AI·K·U is a lifecycle orchestration system built around four phases: Elaboration (define what will be done and why), Execution (do the work through structured workflows), Operation (manage what was delivered), and Reflection (learn from what happened). Reflection feeds forward into the next elaboration cycle, creating a continuous loop of compounding improvement.

The key design principle was stages, not steps. We didn't want a rigid, linear process. We wanted quality gates, places where work gets evaluated against clear criteria before it advances. Places where humans make strategic decisions about when to supervise AI closely, when to observe, and when to trust autonomous execution.

And while we built it for software development, H·AI·K·U is really a system for thinking through problems and arriving at solutions. The same structured loops work for design, product strategy, operations, and beyond. Because the core challenge is the same everywhere: when AI can execute quickly, the value shifts to how clearly you define the problem and how deliberately you validate the result.

What changed almost immediately was where our engineers spent their deep thinking time. It moved upstream, into elaboration. Into writing clear specifications that gave AI agents the context they needed to produce good work on the first pass. Into designing the validation criteria that would catch problems before they shipped. Into considering the full picture: user experience, support plan, internal tooling, production readiness.

The thing that became clear quickly is that strong communication skills translate directly into better, more consistent outcomes. The engineers who could articulate intent precisely, think through implications systematically, and write specifications that left little room for misinterpretation were the ones producing the best results. Clarity of vision, expressed as clear communication, became the highest-leverage capability on the team.

We're still early. The framework evolves constantly as AI capabilities change. But the underlying principle has held up: when implementation is cheap, the discipline of knowing what you want becomes the craft. And that discipline benefits enormously from structure, feedback loops, and deliberate human oversight at the moments that matter most.

The deep work didn't disappear. It just found a new address.

💬 Open Threads

Something to chew on, debate, or share back. Open questions for curious minds.

  • Where does your team's deep thinking happen right now? In the code editor, in planning docs, in conversation, or nowhere consistently? Has that shifted in the last year?

  • When AI generates a working feature in hours instead of weeks, how do you maintain the rigor that used to come from the slow, deliberate process of building it by hand?

  • If writing clear specifications is the new highest-leverage skill, how are you developing that skill in your engineers? Is it part of your growth framework?

  • What quality gates exist between "AI generated this" and "this is in production"? Are they enough?

  • Have you experienced cognitive debt, a system in production that nobody on the team truly understands because it was generated, not built? How did you discover it?
