Moon

March 8, 2026

Engineering

🌻 You're Wasting Your Best Engineering Skill

You're incredible at writing code. But you've stopped doing the thing that actually matters, and you don't even realize it.

Alex Tong · 7 min read

The best engineers I know are spending their days doing something they're actually terrible at.

They're not bad at writing code; they're incredible at it. But they've stopped doing the thing they're best at, and nobody told them. They just... drifted into it. I did the same thing for way too long at work, using Claude Code heavily on a large-scale repo migration, and I didn't even notice until I stepped back and looked at how I was actually spending my time.

The preparation IS the work now. That's it.

Not writing the code. Not debugging the code. Not even reviewing. The thing that determines whether Claude one-shots your ticket or hallucinates straight AI garbage is how well you set up the problem before the AI writes a single line.

If you prepare well, Claude will understand it and probably nail it first try. If you don't, you'll spend three hours fighting an AI that's over-confidently building the wrong thing. And that's three hours of your best engineering skill, your ability to think through problems, completely wasted on cleanup.

So What Did We Actually Become?

Here's what I realized halfway through our migration: we're not writing code anymore; we're curating context.

It's basically air traffic control, right? The controller doesn't fly the planes, but every plane in the sky depends on them to land safely. They're sequencing: which plane goes first, which one holds, which runway is clear. They have a constrained space (the airspace), and if they overload it, things collide. The skill isn't in the flying. It's in the coordination.

That's what we're doing now. The "flying," the implementation, is what Claude handles. But you're the one deciding what context to load, what files to point it at, what order to tackle things in, and what constraints to set. You're managing a constrained space and sequencing work so nothing collides. And if your sequencing is wrong? The whole thing falls apart, no matter how capable the aircraft are.

You're not paid to write every line of code anymore. You're paid to know which 5% of context actually matters right now.

Here's the thing about AI coding tool best practices: they basically all boil down to one constraint. When you ask an AI to solve something complex, it needs to understand the whole picture. Give it fragments and it guesses. Give it noise and it drowns. Fill the context window wrong and Claude won't warn you; it will just confidently build the wrong thing. And you'll spend three hours debugging something that should have taken fifteen minutes.

So yeah, every technique I'm about to share? Managing that constraint. You're not writing prompts anymore; you're deciding what the AI actually needs to see.

Here's Where Most People Go Wrong

Everyone wants to give Claude the whole project at once. They all do. Like, "here's my codebase, migrate the thing, figure it out." Claude will hallucinate you into a brick wall.

What your codebase looks like after giving Claude too much context at once

Claude gets confused when tasks aren't EXTREMELY scoped. And I mean extremely. This is what kills you: hallucinated code that looks right but does absolutely nothing.

One task per prompt. One major change per PR. Build the skeleton first, one bone at a time. Sounds slow. It's way faster, because each change lands clean.

This matters beyond code quality too: PR reviewability, QA testing, debugging. When something breaks, you know exactly which change caused it. Smaller, targeted context means better results. The less noise the AI has to parse, the more accurate its output.
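To make "one task per prompt" concrete, here's the difference in shape. The file names, library, and task below are made up for illustration:

```markdown
<!-- Too broad: Claude has to guess scope, order, and constraints -->
Migrate the repo to structured logging. Figure it out.

<!-- Scoped: one task, explicit files, explicit constraints -->
Migrate src/billing/invoice.py from print-style logging to structlog.
- Touch only this file.
- Match the pattern already used in src/billing/payments.py.
- Keep every log message's text identical; change only the call style.
```

The second prompt leaves almost nothing to interpretation: one file, one reference pattern, one explicit invariant.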

But here's the part nobody talks about: before you systematize anything, do one full task end-to-end manually. Just one. Start to finish. No shortcuts.

That first rep is where you learn everything. Where does Claude struggle? What context does it actually need? Where does it make assumptions? What does the "happy path" look like when it goes right?

For our repo migration, the first task we migrated was messy. It took way longer than it should have. But it taught us exactly what Claude needed to succeed: which files to reference, what order to build dependencies in, what instructions to put in our repo's CLAUDE.md. Every task after that was dramatically faster because we'd done the spike first. Once you've done one successfully, that becomes your reference pattern, and example-driven prompts produce way more consistent results than spec-driven ones. Every. Single. Time.

And here's a subtle one that bites people constantly: tell the AI which layer something belongs in. Explicitly. If your architecture has distinct layers, Claude needs to know which one it's working in, or it'll write duplicate logic across layers. "Should the system check if a user is authenticated in layer A itself, or in layer B?" If you don't specify, Claude will guess. And it will guess wrong.
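Here's a hypothetical version of that layer-pinning instruction, with made-up layer names and paths:

```markdown
Add the "is the user authenticated?" check.
- It belongs in the API middleware layer (api/middleware.py),
  not the service layer.
- The service layer (services/accounts.py) should assume the caller
  is already authenticated. Do not duplicate the check there.
```

One sentence naming the layer, plus one sentence naming where the logic must NOT go, closes off the most common wrong guess.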


The Overlooked Part: Your Mistakes Are Worth More Than Your Wins

Once your first task works, lock it in as a reference pattern. Include the file path with line numbers so Claude can read the actual code rather than relying on your description.

But, and this is super important, don't just track what works. Track what went wrong. If you're doing repetitive work, record mistakes and how to avoid them. Counter-intuitive behaviors, one-off exceptions, anything that goes against the general architecture: document it. If you don't, Claude will follow the wrong pattern and you'll debug the same issue three times.

Here's the move: when Claude makes a mistake, tell it to update its own rules so it can't repeat it. Over time, your documentation becomes institutional memory, a compounding record of every lesson you've learned, locked in. That's the real competitive advantage. Not your prompting skill. Your playbook.
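Here's roughly what that playbook can look like inside a CLAUDE.md. Every path and rule below is invented for illustration; the point is the shape: one reference pattern plus a running mistakes log.

```markdown
## Reference pattern
- For migrated modules, copy the structure of src/billing/payments.py
  (lines 40-95), not the older modules.

## Mistakes to avoid (learned from past sessions)
- Do NOT invent default values for missing config fields; a missing key
  must fail loudly.
- utils/dates.py still uses naive datetimes on purpose. Leave it alone.
- Input validation lives in the API layer only. Never duplicate it in
  the service layer.
```

When Claude slips up, the fix is one prompt: "You violated rule 2. Add a rule to CLAUDE.md so you don't do this again."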

Traditional code-writing ability is becoming a commodity. The scarcity now is in setup, in knowing what to ask for.

Can't build the wrong thing if you prepare so well there's nothing to misinterpret

When to Stop Preparing (And Just Ship It)

These techniques are for complex, multi-session, multi-file work. Don't over-apply them.

Separate thinking from doing: don't ask Claude to plan and implement in the same prompt. Always ask it to plan first, outputting a plan, not code. Review the plan. Then tell it to execute. And when things go sideways during implementation, don't keep pushing forward; go back and re-plan from scratch. A fresh plan consistently beats incremental patching on a broken implementation.
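Separating plan from execution can be as simple as two prompts. The task and paths here are hypothetical:

```markdown
<!-- Prompt 1: thinking only -->
Plan the migration of src/reports/ to the new exporter. Output a
numbered plan listing the files you'd touch and in what order.
Do not write any code yet.

<!-- Prompt 2: doing, sent after you've reviewed and corrected the plan -->
The plan looks right with one change: do step 3 before step 2.
Now implement step 1 only, and stop when the tests pass.
```

The review step in between is where you catch the wrong assumption before it becomes three hours of wrong code.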

Know when to reset: long sessions degrade in quality. When Claude starts doing weird stuff (inserting defaults you didn't ask for, ignoring constraints you set), it's cheaper to start a fresh session with a clean prompt than to keep fighting. This is another reason well-scoped tasks matter: if your prompts are self-contained, starting fresh is painless.

If the task fits in one prompt and one PR (a one-line bug fix, a simple function addition, a config change), just do it directly. Not every task needs a dependency chain, a skeleton, and a reference pattern. The right question is: "Will the AI need more context than fits cleanly in one session?" If yes, prepare. If no, just ask.

The Bottom Line

You used to be measured by how much code you could write. Now you're measured by how well you prepare. And that's not a demotion; it's a promotion you didn't ask for. Most people still haven't realized they got it.

The engineers getting the best results with Claude aren't the ones writing the cleverest prompts. They're the ones who prepare so well that Claude has no room to fail. Break the problem down. Scope the task. Give Claude exactly what it needs: nothing more, nothing less. And when things get messy, kill the session and start fresh.

The code writes itself now. Your job is to make sure it's writing the right thing.


I've been a software engineer for half a decade, previously at Amazon and currently at The New York Times. These are lessons from real production work, not theoretical predictions.
