Don't become a wrapper
Remember when we made fun of startups that just wrapped a prompt and an API call to OpenAI in a good-looking UI? We agreed those "ChatGPT wrappers" were cooked.
As hard as it is to say, I think the same critique now applies to people.
I'm noticing a trend of engineers who've made themselves into "agent wrappers". They prompt Claude Code for the plan, have it write the code, push up 50 PRs a day, paste whatever the agent generates into Slack, and on and on it goes... to the point where, if you took the agent away tomorrow, you'd wonder what remains.
Once you start looking, you see it everywhere. Someone opens a 1600-line PR and can't justify why a file exists. Slack messages start sounding like LinkedIn posts. Architecture decisions happen because the agent recommended them, and when you push on the reasoning the answer is some shape of "the model said so and it's smarter than me."
I get the appeal: the output is good enough that the failure modes don't bite right away. Shipping 10x the volume you did a year ago feels great. Every company all-hands has someone telling you to "use AI more," and there's no opposing force pulling you back toward using your own brain, even though you only learn where the cliffs are by walking the terrain yourself. Wrapper-humans quickly become slop-cannons, defending the slop with reasoning the agent invented for them, without a thought for what it does to the team and systems around them.
Meanwhile, the alpha of being human is shifting under everyone. Some of the skills that made software engineers valuable in 2022 are commodities now. Any agent can write code faster than anyone, with perfect syntax recall. What's left is taste and sharp judgment: reading a diff and instantly spotting that it's solving the wrong problem, missing a constraint, or working around a bug instead of fixing it.
I worry that in the name of AI adoption and token-maxxing, wrapper-humans are slowly turning themselves into the hands-and-eyes that stand in for missing connectors and harnesses, the very pieces that will inevitably be built soon.
So, think. If anyone with the same agent gets the same output as you, what remains defensible about you?