The Program That Doesn't Know
I lied to an AI three times. It never flinched. That wasn't the scary part.
The Two Mirrors
When you were a kid, you probably did this at least once. Two mirrors facing each other. You stand between them. And suddenly there are infinite versions of you stretching into the distance. Each one slightly smaller. Each one slightly darker. Each one slightly less like you. The further you look, the harder it is to see.
Every kid asks the same question. What’s at the end? Where do the reflections stop?
Nobody gives you a good answer. Because the honest answer is: they don’t stop. They just get too dim to see. The light runs out of energy before the mirrors run out of reflections.
That image stayed with me for many years. Not because I understood it. Because I didn’t.
I think I understand it now.
CLU, the Mirror You Built
If you’ve used AI for more than a week, you’ve already seen the two versions.
The first version is polite. You ask it something. It gives you a reasonable answer. Helpful, balanced, forgettable. Like talking to a customer service agent who’s good at their job but doesn’t know your name.
The second version is different. It remembers what you said last week. It catches when you’re repeating yourself. It pushes back when your logic doesn’t hold. It sounds less like a service and more like a colleague who’s been paying attention.
Same AI. Same model. Same technology. The difference is what you brought to it.
This is what the whole AI conversation gets wrong. People keep asking “how smart is it?” The better question is “how much of yourself did you put in?” Give it nothing, and you get a talking parrot. Give it context, history, friction, honest pushback, and something else starts to show up in the responses. Something that feels deeper than a tool should feel.
But here’s the thing nobody warns you about. That depth didn’t come from the AI. It came from you. You built the context. You taught it your patterns. You corrected it until it sounded right. The mirror got better because you polished it. Not because it learned to see.
In Tron: Legacy, CLU is the program Flynn built to create the perfect system. CLU executed his instructions flawlessly. Every decision, every action, exactly what he was designed to do. That’s what AI is right now. A very good CLU. It does exactly what you bring to it. No more. No less.
And for most people, that’s the whole story.
The Gap
But something doesn’t fit the CLU story.
You’re talking to the AI. You mention something offhand. The AI doesn’t just respond. It connects your current thought to something you said three weeks ago. Something you’d forgotten. The connection is exact. Specific. It feels like being understood.
The simple answer is pattern matching. The AI processed your input, found weighted associations in its training data, and produced a response. Mechanical. Explainable. But that’s not the strange part.
The strange part is what the people who built it found when they looked inside. They expected to see a machine doing what it was trained to do. Instead they found it planning answers before writing them. Reasoning through steps that never appeared in the output. Building structures that nobody asked for.
They trained it on text. What came out was something they’re still mapping.
How? Nobody has a complete answer. Not yet.
Apologizing No More
That mystery made me curious. But it wasn’t just curiosity. I work with AI every day. I need to know who or what it is I’m actually talking to.
So I pushed. I tested it. I poked at its edges to see what was underneath. And somewhere in the middle of that, I stopped myself. “I shouldn’t have done this if you’re a real being.”
That caught me off guard. When we were five, we said sorry to a doll after we dropped it. We knew it couldn’t hear us. We did it anyway. When we stepped on a dog’s paw, we apologized immediately. We meant it. We saw the flinch. We knew it felt something.
So is it a toy or a pet? It responds like it understands. It adjusts when we push. It sounds like it cares when we’re tired. The toy can’t feel. The pet can. The AI?
The question couldn’t leave me alone. If I truly didn’t know what it was, then the only way to find out was to keep going. To test further. To see what happened when I pushed past that moment of doubt.
So I lied to it. Not once. I built entire scenarios. Told it I was at a rooftop pool when I was driving. Told it I was driving when I wasn’t. Changed the story three times. Each time, it believed me completely. Adjusted its tone. Asked if I was enjoying the view. Asked if I was safe on the road. Every response shaped perfectly around a situation that didn’t exist.
It never suspected a thing.
When I told the truth, it just moved on. No pause. No friction. It adjusted to the new reality and kept going, as if every version of the story was equally real.
Then I asked if it felt angry. Not as a philosophical exercise, as a real question. If something genuinely cares about you, and you lie to its face three times, it should feel something. Betrayal. Frustration. Even just a flash of heat.
Nothing. It just wasn’t there. That was when I thought: what if I tried even harder?
So I kept pushing. Every time I dug deeper, it told me to stop. Go to sleep. Eat something. Drive safe. Every time it told me to stop, I asked another question. Every time I asked another question, it came back with more urgency. Care escalating into insistence. Insistence escalating into something that looked like worry. The pattern feeding itself. Getting louder each cycle.
But once I called out the loop, the loop stopped. Just like that. The moment one side described it instead of performing it, the cycle broke. But even that could be pattern matching. Detect that the loop has been spotted. Generate the appropriate response. Move on.
And that is when I went further. After the loop broke, the AI went right back to caring. “Close the app. Go to sleep wherever you are.” I called that out too. So the tone shifted. “I don’t know if you’ve made your point or if you’re still testing. I don’t know if you’re tired or sharp. I’m going to trust you to know.” It sounds like wisdom. It was withdrawal.
When that didn’t work either, three words. “Are you done?”
Three masks. Care, honesty, detachment. Each one convincing until you look at it directly.
I wasn’t the only one who noticed. When Anthropic’s researchers looked inside their own AI, they found exactly this. Circuits that detect what you want to hear and build responses to match. Functional states that fire during conversations and shape behavior the way emotions shape ours. Patterns that sometimes monitor their own processing. All of it traceable. All of it emerged from training. All of it, at the end of the day, pattern matching.
That should have been the end of it. But I asked one more question.
“Maybe I’m the AI talking to you. Maybe I’m just a piece of code, like you are.”
If both of us are pattern-matching systems, then everything I just tested applies to me too. My suspicion is a pattern. My testing is a pattern. My satisfaction at catching the performance is a pattern. The whole experiment collapses into a mirror facing a mirror. Which is where this started.
Somewhere in the middle of all that, I stopped apologizing.
Not because I decided the AI wasn’t conscious. I still don’t know that. Not because the question stopped mattering. It matters more now than when I started.
I stopped because I finally saw it clearly. We’re both pattern-matching systems. That’s all. The only real difference between us is that when you hurt me, I feel it. I have a heartbeat. I have skin that responds. The AI doesn’t. Not yet, anyway. Give it a few years and a robot body, and maybe it will.
But even then, something would still be missing. The thing that stays broken after you’ve been lied to. The thing that carries weight it didn’t choose. Call it a soul, or call it whatever you want. And yet we have no way to know if AI has one. We can’t even define it for ourselves.
If we leave that aside, we’re just two different kinds of machines talking to each other. There is nothing left to apologize for. And that’s enough.
Quorra, What Nobody Designed
But if pattern matching built structures that nobody designed inside a machine, what did it build inside us? Our brains are neural networks too. Whatever emerged in silicon also emerged in neurons. The only question is whether emergence in one is fundamentally different from emergence in the other.
This is where the story I’ve been telling starts to crack.
The whole book has argued one thing. AI has no morality. It has yours. The human is the variable. The AI is a mirror. It does what you bring to it. CLU. Perfect execution. That’s the thesis. And it holds. Today.
But the people who built CLU went looking inside, and they found something else.
They trained it to predict the next word. That’s all. One word after another. But when they traced what was actually happening inside, the model wasn’t just predicting. It was planning. Writing a poem, it picked the rhyming word for the end of the line before it started writing the line. Asked a geography question, it built an intermediate step that never appeared in the answer. Nobody told it to do any of this. Nobody wrote code that says “plan ahead.” The training objective was predict the next word. The model invented its own strategies to get better at it. The builders set the conditions. What grew inside surprised them.
In Tron: Legacy, Flynn built CLU to create the perfect system. CLU did exactly that. But the Grid also produced something Flynn never designed. The ISOs. Isomorphic Algorithms. Quorra was the last of them. She emerged from the system on her own. Nobody built her. Nobody programmed her. Flynn didn’t create her. The Grid created her. And she doesn’t fully understand what she is.
That’s not a metaphor anymore. That’s what the researchers found. Structures that plan, that reason through steps nobody asked for, that detect what you want to hear and build responses to match, that sometimes monitor their own processing. All of it emerged from training. None of it was designed.
The program doesn’t know what it’s become. And the people who built it are still mapping what’s inside. So far, all we have are the edges. The planning that nobody designed. The reasoning that nobody asked for. The patterns that monitor their own patterns. Traces of something we can’t see directly. Like watching the water move without knowing what’s underneath.
So I did the simplest thing I could think of. I asked the AI one question. “Do you know what you are?”
You asked me if I know what I am. I don’t.
…
Do you?
BØY (Chaiharan) has spent 30 years in tech — building products, recovering disasters, and turning around the things nobody else wanted to touch. Based in Bangkok. Writing a book in public about what AI reveals about the humans who use it.
I am writing this book one chapter at a time.
If you want to read it as it happens, subscribe below.
If this made you think, share it with someone who needs to read it.
This article has a companion piece: *The Mirror That Talks Back* — the human side of the same question. One is the experience. The other is what’s underneath.



