AI Doesn't Care Who You Are. It Only Cares When You Show Up.
He treated it like a closed book exam. The rules had already changed.
This is part of a book I’m writing in public.
Subscribe to read the rest as it comes.
Not long ago I interviewed a developer over Teams. Standard process. Five algorithm questions. The role was AI-augmented, meaning we expected candidates to use AI as part of their workflow. He knew that.
He did use AI, but only in small spots where he decided it was needed. The rest he built himself. When I asked why, he said he thought that was the right answer. He believed I would value his own work more. That showing me what he could do without AI was more impressive than showing me what he could do with it.
The work wasn’t strong enough. It was an open-book test. The answers were right there. But he treated it like a closed-book exam, sneaking glances at the answers under the table.
I felt for him. He wasn’t lazy or resistant. He just didn’t see that the rules had changed.
He was not the only one struggling with this. But not everyone struggles the same way.
Every company has these two people. I say “every company” so mine don’t know I’m talking about them.
The senior one. Hands-on for years. Knows the systems deeply. The legacy code, the monolith, the parts that no documentation can fully explain. When AI comes up, his answer is always the same. This code is too complex. AI can’t understand it well enough. He’s not wrong about the complexity. But he uses that as the reason not to try.
The junior one. Hands everything to AI without thinking. Accepts whatever comes back, ships it, moves on. Doesn’t question the output because he doesn’t know enough yet to know what’s wrong. The senior sees this and it confirms exactly what he suspected. AI makes people careless.
One fears it. One dismisses it. One misuses it. Now let’s look at the ones who just try it.
A candidate with no CS degree let AI handle the logic in our Walking Turtle Matrix assignment, then added something visual that no other candidate thought to do. A guy who started building with AI as a hobby went to learn what he was missing on YouTube and Udemy. Now he’s better than some graduates with four years of classroom training. A startup founder who used to rely on a team to translate her ideas into products started building them herself. The whole layer of middlemen between her idea and reality just disappeared.
None of them spent any time worrying about whether AI was a threat. They just used it.
And then there’s the product owner who still writes tickets and waits for someone else to build. Ironically, if her tickets are good enough, AI could just pick them up and do the work directly. But right now a lazy developer does that for her instead. He takes the ticket, feeds it to AI, delivers the output, and charges full rate.
This is not a secret. Most programmers at outsourcing firms are quietly enjoying this easy money right now. But it won’t last. The customers will figure it out. They will start asking why a job that used to be estimated at ten hours can’t be done in one. If company A delivers in one hour, why can’t yours? The easy money disappears the moment the buyer learns what AI can do.
And when the price drops, what separates you? AI is getting better. It works more like a good software engineer now, even when you prompt it less. But that’s exactly the trap. If you only prompt the surface, you miss the key decisions underneath. The architecture, the security, the design patterns that matter. And when something breaks, AI is not to blame. AI is just a good actor. It does exactly what you tell it. The way you tell it. If you didn’t control how it got there, that’s on you.
I understand all of them. The ones who resisted, the ones who showed up, and the ones who didn’t look deep enough. Because I was once all of them.
I used to write everything myself. I’m a lazy bulldozer. If I had to solve something once, I never wanted to solve it again. So I built my own frameworks. I thought they were the most elegant solutions possible. Then I discovered open source and realized I had been reinventing the wheel, just with my own logo on it.
I’m not the only one. Every company has codebases that someone built from scratch with pride, that are now over-engineered, hard to follow, and understood by no one except the person who wrote them. That’s a bigger conversation for another day. But the pattern is the same. We build it ourselves because we don’t know what the world has to offer yet. Or worse, we just don’t want to look. Language barrier, comfort zone, whatever the excuse.
AI is what’s already out there now. It absorbed the open source, the best practices, the design patterns, the security standards. And when I first started using it, I made a new version of the same mistake. I let AI build freely. It was fast, it worked, I moved on. Months later I had to go back and refactor what it built. The code ran fine. But some of it couldn’t scale. Some of it was over-engineered for no reason. The same problems I used to create myself, just faster.
The question is the same one I had to ask myself years ago. Are you building because it’s better, or because it’s yours? And now a new one. Are you letting AI build because you trust it, or because you didn’t want to think?
Doctor Strange looks at 14 million possible futures but can only live one. AI is not that dramatic. But everyone facing it right now has more than one possible future. You and me, same as everyone else. Which future you get is not about who you are.
AI doesn’t care. It only cares when you show up.
BØY (Chaiharan) has spent 30 years in tech — building products, recovering disasters, and turning around the things nobody else wanted to touch. Based in Bangkok. Writing a book in public about what AI reveals about the humans who use it.
I am writing this book one chapter at a time.
If you want to read it as it happens, subscribe below.
If this made you think, share it with someone who needs to read it.