I used to measure my productivity by how fast I could crank out lines of code. Open the IDE, find the file, start typing. That was the job, right? The more I wrote, the more I felt like I was getting done.
Then something weird happened. I forced myself not to write a single line of code on a task I knew exactly how to solve. And somehow, I ended up less tired.
Let me explain.
The Copilot Trap
Like everyone else, I started with Copilot, then moved to Cursor. They were impressive—sometimes they’d nail exactly what I was going for and autocomplete entire functions. But most of the time? They slowed me down.
The code would look right in the editor but wouldn’t compile. They were basically guessing with too little context, trying to read my mind from a handful of lines.
Cursor had an Agent mode, which helped a bit, but it still kept me in “IDE mode.” I was stuck thinking in terms of “what code do I need to type” instead of “what instructions should I give.”
Maybe the tools weren’t ready. Maybe I wasn’t.
The Experiment
Then I tried Claude Code—an AI agent that lives in the terminal. And something clicked.
I made myself a rule: even if I knew exactly what to change, even if it would be faster to just do it myself, I had to let the agent handle it.
That forced me to think differently. If this agent were a colleague offering to help, how would I explain the task?
So I tried:
“In this file there’s a model that needs to change like this. With that change, update the interface in this file, then find all the classes that implement it and adapt them. Finally, update the unit tests.”
Boom. The agent did a pretty solid job. Sure, it made mistakes, but only because I hadn’t given it enough context. That part was on me.
After about an hour, I’d shipped a couple of changes. It took a few rounds of refining the instructions, but here’s what hit me:
- Was I faster? Maybe.
- Did I feel less tired? Holy shit, yes.
And less tired means I can do more. I realized two things:
- I’ll only get better at this.
- Whatever AI I’m using now is the worst I’ll ever use.
The Shift That Matters
That experiment completely changed how I work.
Before: I’d open the IDE, hunt down the API, check where it was used, then start tweaking code. Fifteen minutes later I’d be lost in a rabbit hole. Git would show me half-broken diffs. The code wouldn’t compile. Tests would fail. Cue more debugging, more fixing.
Now: I ask the agent to map out how the API works, where it’s used, and how it’s implemented. That’s the information I used to burn my own energy digging up.
Then I ask for a plan: “Here’s what I want to change. These are the key files. Pay attention to X. Test it like this.”
I evaluate the plan. If it sucks, I refine it. I’ve learned it’s better to spend extra time up front than wade through pages of nonsense later.
I think of it like delegating to a colleague with no memory, who I won’t see for the rest of the day. If I don’t give them enough context, it won’t get done.
When the agent finishes and tests pass, I hand it off to another agent for review. I tell it what to focus on.
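That loop can be sketched in the terminal. This is a minimal illustration, assuming Claude Code’s `claude` CLI is installed and authenticated; the API name, file paths, and prompt wording are invented for the example, not a prescription.

```shell
# Step 1: ask for a plan, not code. ("billing API" and the file
# paths are made up for illustration.)
claude -p "Map out how the billing API works: where it's used and how
it's implemented. Then propose a plan for adding retry support.
Key files: src/billing/client.ts, src/billing/retry.ts.
Pay attention to timeout handling. Don't change any files yet." > plan.md

# Step 2: refine plan.md by hand, then let the agent execute it.
claude -p "Implement the plan in plan.md and run the test suite when done."

# Step 3: hand the finished diff to a fresh session for review,
# telling it what to focus on.
git diff | claude -p "Review this diff. Focus on error handling and
edge cases around retries."
```

The point of the three separate invocations is the same as in the prose: planning, implementing, and reviewing get their own context, so each step can be evaluated on its own before the next one starts.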
Once the review is done, I take a look. And what I’m looking at is already a solid PR: code that builds and runs. I can focus on the important stuff, not the syntax errors.
What used to take two days now takes half a day. And I’m not exhausted at the end.
It’s Not Just Code
The same pattern works everywhere.
Before I share a design doc, I ask the AI: “Does this land with the right message? Is it structured for this audience?” It catches the confusing bits before my teammates do.
When I prep for my twice-yearly career conversation at Spotify, I dump in my notes, the template, my goals. The AI asks me questions until it has what it needs. Suddenly I’m reflecting on what matters instead of wasting time trying to remember what I did months ago.
It’s always the same loop: I give context. I evaluate. I decide.
The Thing No One Talks About
Here’s what actually matters: I’m still in control.
The agent doesn’t write code and ship it. I set the plan. I refine the approach. I review the output. I decide what merges.
The AI amplifies my thinking—it doesn’t replace it.
At first I worried I’d get lazy or lose my edge. The opposite happened. I’m thinking more strategically. I’m spotting edge cases earlier. I’m catching architectural issues during planning instead of three files deep into debugging.
I’m doing more and better work, with less mental load.
What You Should Try Tomorrow
If you’re reading this thinking “sounds nice, but where do I even start?”—try this:
Pick one small, self-contained task tomorrow. Something you already know how to solve. Now don’t write a single line of code yourself. Don’t even touch git if your tool can handle it. Let the AI do everything.
Will the result be perfect? Nope. But remember: you’re using the worst AI you’ll ever use, and you’re still learning how to shape it with context.
Then take the next document you write and, before sharing it, ask the AI to review it. Is it clear? Does it land with the right audience? Does it actually say what you mean?
That’s it. Two experiments.
Code Is Still the Interface
Here’s what hasn’t changed: I’m still a software engineer. I still need to understand code. I still need to judge quality.
What changed is that code became the interface between me and the AI—not the thing I hammer out character by character.
The AI produces code. I read it. I evaluate it. I decide if it’s maintainable, if it handles edge cases, if it fits the architecture. My judgment is what determines if it ships.
The AI can’t replace that. It just amplifies it.
In a way, my role is clearer than ever. I’m not measured by how fast I type or how many lines I commit. I’m measured by the quality of the systems I build and the decisions I make. The AI just makes me better at executing on those decisions.
I stopped writing code line by line. I started getting more done.
So maybe stop typing for a bit—and start directing instead.