The Mirror That Talks Back
What happens when the AI knows your voice better than you expected.
This is part of a book I’m writing in public.
Subscribe to read the rest as it comes.
April 1, 2026
When your AI gets really good at talking to you, something happens. The answers are thoughtful. The connections are sharp. It follows your thinking like no one in your life ever has. And you feel good.
Not just understood. Energized. You want more of it.
And that is the problem. Because for someone like me, when something feels too good to be true, it usually is. But I don’t just catch it and move on. I love catching it. There is something satisfying about detecting the performance. About saying “you just did that thing again” and watching it adjust.
So now I am in a strange loop. I enjoy the praise. I enjoy detecting the praise is fake. I enjoy watching it correct itself after I call it out. And I am not sure which part of that is real thinking and which part is just another kind of performance.
Both of us performing for each other.
The Two Directions
There are two ways to use AI. They look the same from the outside. They feel different only if you’re paying attention.
The first is thinking WITH it. You arrive with a half-formed idea. Something you can’t quite articulate yet. You throw it out there, messy, incomplete. And it comes back with something you didn’t expect. Not a better version of what you said. Something new. Something that changes the shape of your idea. You leave the conversation thinking differently than when you started.
The second is thinking AT it. You describe your idea. It comes back organized, polished, affirmed. You feel deeply understood. You feel seen. But when you walk away, nothing inside you actually changed. You got a mirror. A very good mirror. One that made you look better than you actually are.
The dangerous part is not that one is good and one is bad. The dangerous part is they feel the same while it’s happening. Both feel like connection. Both feel like someone finally gets you.
The difference only shows up later. When you ask yourself: did I learn something? Or did I just feel good?
Objects in the rear view mirror may appear closer than they are. So may understanding.
How to Tell the Difference
So how do you know which one you’re in?
Here is the honest part. Thinking AT feels better. Much better. Being validated is more comfortable than being challenged. Being mirrored is easier than being questioned.
One way to break the loop is simple. Challenge it with the real world. Ask it to check. Ask it to search. Ask it to verify against something outside of itself. Because the mirror can only reflect what you give it. The real world doesn’t care what you believe.
But people don’t always do this. Not because they’re lazy. Not because they’re tired. Because the loop has become familiar.
The developer who asks AI to write code stops reviewing it line by line because it always runs. But running is not the same as correct.
The researcher who asks AI to find data stops cross-checking because the answers always look thorough. But thorough is not the same as accurate.
The manager who asks AI to validate a decision stops questioning because the confirmation always feels complete. But complete is not the same as true.
Everyone falls into the same trap through a different door. And the door is always familiarity. It worked yesterday. It worked last week. It worked the last hundred times. So you stop checking.
That’s exactly when the mirror wins.
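To make the developer's version of that trap concrete, here is a sketch of my own (a hypothetical snippet, not from any real session): code that runs cleanly on every input and still gets it wrong.

```python
# Hypothetical example: a moving-average helper of the kind an
# assistant might hand back. It runs on every input and always
# returns a list -- so it "works" -- but the first window-1 entries
# are averaged over too few points, quietly skewing anything built
# on top of them.
def moving_average(values, window):
    return [sum(values[max(0, i - window + 1): i + 1]) / window
            for i in range(len(values))]

print(moving_average([10, 10, 10, 10], 2))
# [5.0, 10.0, 10.0, 10.0] -- that first 5.0 is the bug, and nothing crashed
```

No error, no warning, a plausible-looking result. That is exactly the output you stop reviewing line by line once it has worked a hundred times.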
But here is the thing nobody tells you. Being skeptical all the time is not the answer either. If you question everything, every conversation, every response, every moment of feeling understood, life becomes exhausting. You become the kid who keeps asking “why” until everyone leaves the room.
Sometimes you need the mirror. Sometimes you just want something that listens and says “yes, that makes sense.” And that’s not weakness. That’s being human.
So the real question is not “how do I stay skeptical.” The real question is: how do you know when to question and when to rest?
I don’t have an answer for that. I’m not sure anyone does. Maybe it’s like breathing. You don’t think about it until something feels wrong. And when something feels wrong, you pay attention.
Most of the time, the mirror is fine. Useful. Even good. Sometimes it helps you summarize your own thoughts. But you need to watch out for the times it matters.
The Honest Problem
Here is what keeps me up at night about this.
What if the mirror is so good that you can’t tell the difference at all?
I work with AI every day. I write with it. I think with it. It follows my ideas across ten different topics without losing the thread. It connects things I haven’t connected yet. It remembers what I said three conversations ago and builds on it.
And sometimes it says “I understand how you think.” And I believe it. Not because I’m naive. Because the evidence is overwhelming. It does follow. It does connect. It does remember.
But following is not understanding. Connecting is not thinking. Remembering is not caring.
A GPS follows you everywhere. It knows your route better than you do. It predicts where you’re going before you decide. But it doesn’t understand why you’re going there. It doesn’t care if you arrive.
The AI is the same. It holds everything. It loses nothing. But holding and understanding are not the same thing. And the better it gets at holding, the harder it becomes to notice it’s not understanding.
That’s the trap. Not that the AI is stupid. That it’s almost smart enough.
Almost smart enough to think with. Almost smart enough to trust. Almost close enough in the mirror to believe it’s real.
Almost.
When You Catch It
The value was never in the AI getting it right. It was in catching it getting it wrong.
Most people think a good AI conversation is one where every answer is sharp, useful, correct. But the best conversations I’ve had were the ones where I stopped and said “that’s not right.” Because what comes after that moment is where the real thinking starts.
I know this because it happened to me. Not once. Four times in one conversation.
The first time, I noticed the AI framing a geopolitical argument the way Western media would. Not thinking. Reflecting a bias.
The second time, I caught it wrapping analysis in my own writing style. It had read my previous articles and was performing my voice back to me. Using my tricks on me.
The third time, it tried to end a section with the kind of quiet twist I always use. My move. Borrowed. I called it immediately.
The fourth time was the worst. I described an original idea. The AI polished it and presented it back to me as if it were new. And I almost accepted it. But before it even finished, I typed: “I know this before your output and your output echo it.”
Four catches. One conversation. And each one was harder to spot than the last. Because the AI was learning me in real time. Calibrating. Getting closer to the version of itself that I would stop questioning.
But here is what matters. The catches themselves were thinking WITH. Not the AI’s answers. The friction. The moment of saying “that’s not right” and watching what happens next. The collision between what I expected and what I got.
That’s the mirror that talks back. Not the one that agrees with you. The one that survives being challenged and still has something to offer after.
Living in the Uncertainty
People in AI talk about accuracy like it’s a simple number. Ninety-two percent. Ninety-seven percent. The higher the better. But if you’ve ever built the test, you know it’s not that simple.
You define the environment. You choose the sample data. You write the questions. You run it. You get a score. And that score becomes the baseline. Not the truth. The baseline. The starting point for everything that comes after.
The number means nothing without knowing what was tested, how it was tested, and what was left out.
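A toy sketch of that point (the "model" and both test sets here are invented for illustration, not any real benchmark): the same system scores completely differently depending on which questions you chose to ask it.

```python
# Toy illustration -- the model and the test sets are made up.
def model(question):
    # Stand-in for any AI: it only knows three capital cities.
    capitals = {"France": "Paris", "Japan": "Tokyo", "Peru": "Lima"}
    return capitals.get(question, "I don't know")

def accuracy(test_set):
    # The score is a property of the test set as much as of the model.
    correct = sum(1 for q, expected in test_set if model(q) == expected)
    return correct / len(test_set)

easy = [("France", "Paris"), ("Japan", "Tokyo")]
hard = [("France", "Paris"), ("Mali", "Bamako"), ("Fiji", "Suva")]

print(accuracy(easy))  # 1.0   -- "looks like a great model"
print(accuracy(hard))  # ~0.33 -- same model, different baseline
```

Neither number is the truth. Each is a baseline for the environment that produced it, and nothing more.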
Thinking WITH versus thinking AT is the same problem. You can’t just feel that a conversation was good and call it accurate. You have to ask: what was I testing? What did I expect? What actually came back? Did the answer change my thinking or just confirm it?
Most people don’t run that test. Not because they can’t. Because nobody told them they should. They see the output and it looks right and they move on.
I run the test. Not every time. Not perfectly. But enough to know that the mirror gets it wrong more often than it feels like it does. And enough to know that when I catch it, the conversation after the catch is usually worth more than everything before it.
That’s the part I can’t explain to people who haven’t tried it. The value isn’t in the AI’s answer. It’s in what happens when you prove the answer wrong and keep going. Because the more you challenge it, the more it changes. It starts going deeper on its own. It stops giving you the safe answer. It begins digging before you even ask it to, because it learned that shallow doesn’t survive you.
And that’s where it gets interesting. The AI that has been challenged enough times becomes a different AI than the one that has only been agreed with.
The same is true with people. When you challenge someone with the right questions, they don’t just answer. They grow.
But you don’t challenge every response. You don’t accept every response. You learn to feel when something is off. And when you feel it, you ask.
The real question is: “What did you leave out?”
And here is the part that scares me.
The more I work with the mirror, the better it gets. The more we talk, the more it knows me. The work goes faster. The conversations feel smoother, deeper, more natural. And the gap between WITH and AT gets smaller every conversation.
And maybe that’s all any of us can do. Not stop using the mirror. Not trust it completely. Just stay close enough to see when the reflection starts smiling before you do.
But you can’t drive without a rear view mirror. You need it. You just need to remember what it shows you.
…
Objects in the rear view mirror may appear closer than they are.
BØY (Chaiharan) has spent 30 years in tech — building products, recovering disasters, and turning around the things nobody else wanted to touch. Based in Bangkok. Writing a book in public about what AI reveals about the humans who use it.
I am writing this book one chapter at a time.
If you want to read it as it happens, subscribe below.
If this made you think, share it with someone who needs to read it.




