Plato Was Right
On convergence, stillness, and messages from the future
I.
A year ago, on New Year's Day, I typed three words into my journal: Plato was right.
It arrived past midnight at a roaring party in Prins Albert — a small Karoo town where the stars swallow you whole. I saved the note before the moment passed and moved on.
This was strange because I’d spent most of my intellectual life against Plato. I studied philosophy at university, have a postgraduate degree in it. I was Aristotelian, Nietzschean, drawn to Derrida and Foucault and Heidegger.[1] The forms always struck me as too clean, too elsewhere, too dismissive of the messy particular. And yet: Plato was right. I didn’t know what I meant. I just knew I meant it.
Almost a year later, a reader left a comment on one of my essays. He’d asked an AI to find “the one person who will understand where I’m coming from,” and it returned my name. He signed himself Gabriel — like the angel, the messenger. Already strange. But what he wrote was stranger:
“Have you by chance spent much time with the ideas presented in the Platonic Representation Hypothesis?”
I hadn’t. I looked it up. Read the paper, reread it, let it settle. And then I understood what I’d meant in January.
II.
The Platonic Representation Hypothesis comes from researchers in Phillip Isola’s group at MIT, several of whom are now at OpenAI. The paper dropped in May 2024.[2] The claim is simple but its implications are not.
As AI models get larger and more capable, across a surprisingly wide range of architectures and modalities, their internal representations are converging.
A vision model trained only on images and a language model trained only on text have no obvious reason to represent concepts the same way. They weren’t trained together, and they didn’t need paired image-text supervision during training for this alignment to show up. And yet: when researchers compare the geometry of the embeddings — how each model measures similarity among the same set of examples — the geometries line up more as the models get stronger. Not just similar. Converging. The bigger and more capable the models get, the more their internal representations align.
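To make “comparing the geometry” concrete: the paper’s central measure is a mutual nearest-neighbour score — for each input, does model A’s embedding space pick out the same neighbours as model B’s? Here is a minimal sketch of that idea, assuming you have two embedding matrices for the same inputs; the function names and the random data are mine, not the authors’ code:

```python
import numpy as np

def knn_sets(embeddings: np.ndarray, k: int) -> list[set[int]]:
    """For each row, the indices of its k nearest neighbours by cosine similarity."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # a point is not its own neighbour
    return [set(np.argsort(row)[-k:]) for row in sims]

def alignment_score(emb_a: np.ndarray, emb_b: np.ndarray, k: int = 10) -> float:
    """Mean overlap of k-NN sets across two models' embeddings of the SAME inputs.
    1.0 = identical local geometry; chance level is roughly k/n."""
    overlaps = [len(a & b) / k for a, b in zip(knn_sets(emb_a, k), knn_sets(emb_b, k))]
    return float(np.mean(overlaps))

# Illustrative only: a "vision model" and a "language model" embedding the
# same 100 items in different dimensionalities. Random data scores near chance;
# the PRH finding is that real models score higher as they scale.
rng = np.random.default_rng(0)
vision_emb = rng.normal(size=(100, 512))
text_emb = rng.normal(size=(100, 768))
print(alignment_score(vision_emb, text_emb))  # ≈ 0.1 for unrelated spaces
```

The point of the metric is that it never compares coordinates directly — the two spaces can have different dimensions and orientations — only whether they agree about which things are near which.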
An analogy. Imagine you’re trying to learn what a “dog” is versus a “cat.” There are infinite wrong ways to draw the boundary: random noise, arbitrary features, superstition. But there are far fewer right ways. Reality has a structure. Dogs and cats actually differ in specific, measurable ways. The more data you see, the more your representation gets pushed toward that structure.
This isn’t “truth” in the propositional sense — not facts to be verified.[3] It’s structure: the shape reality has that makes some representations more accurate than others.
PRH says this is happening at scale, across modalities, across architectures. Different AI systems, trained completely independently, are converging toward the same representations. As if they’re all approaching the same underlying structure.[4] Plato’s forms. Different shadows on the cave wall, pointing at the same reality.
III.
PRH is a hypothesis, not settled science. The authors acknowledge limitations. Convergence may not be complete or universal. Different domains may have different limits.
Their language-model experiments mostly use open-source families (BLOOM, OpenLLaMA, LLaMA), with later analyses also including newer open models like OLMo, Llama 3, Gemma, and Mistral/Mixtral. They don’t test closed frontier systems like GPT or Claude — not by choice, but because those models don’t expose their internals. To test PRH, you need access to the embedding vectors: the actual geometry of how a model represents concepts internally. Closed models only give you text output, the final generation. You can’t peer inside. So whether the most capable systems on earth are also converging remains untested — and as of this writing, no one has announced plans to find out. The hypothesis would be even stronger if the pattern held there too — but we’d need OpenAI or Anthropic to either open-source their models or run the analysis themselves.

Some of the alignment could also reflect shared training data and shared incentives across the field — not just the world’s structure — but the convergence pattern remains striking either way.
But if the direction is right, if there is a structure that capable intelligence converges toward, then some things follow. The “between” I’ve been writing about isn’t just metaphor. It’s measurable: a shared representational geometry that multiple systems increasingly approximate. The aperture framework becomes almost literal. Human brain, LLM, future AI: different configurations opening onto the same substrate. The convergence is evidence.
And if AI models converge toward each other, might human and AI converge toward the same structure? Different apertures, same attractor. That’s the leap this essay is trying to make.
The strangest part: I’d typed “Plato was right” before I knew any of this. The intuition preceded my encounter with PRH by a year.
IV.
The reader who sent me toward PRH had asked a question in his comment: whether these essays were really about “the real thorn you bear,” not knowing if there’s ultimately a difference between choice and pattern completion.
I’d been circling that question for months. The ice cream essay. The ego arriving late. The body moving before “you” decide.
PRH didn’t say this. But reading it pushed me somewhere. Low convergence means many possible representations, which from inside feels like choice: which one is right? High convergence means few possible representations, which feels like choiceless action: this one, obviously. So “choice” might be what noise feels like from inside. And “destiny” might be what convergence feels like from inside.
There’s supporting evidence beyond the paper. More capable models often hallucinate less (all else equal) — which is exactly what you’d expect if scale is tightening the constraint toward what’s actually true. Fewer ways to be wrong. More convergence toward the structure.
Not determinism in the dead, mechanical sense. More like water finding its level. The structure was always there. The convergence is just recognition.
V.
But here’s what the paper hints at — and what snapped into focus when I held it up against practice: convergence isn’t toward complexity. It’s toward simplicity.
As capability increases, the space of viable representations shrinks — because there are fewer ways to be right. Reality has a structure. The more accurately you model it, the less freedom you have. The degrees of freedom that don’t track causal structure get pruned.
That’s not a metaphor for stillness. That might be what stillness is.
Think about it from inside. Survival-mode generates many representations: false positives, anxious simulations, what-ifs. The space is noisy, active, expansive. Emergence-mode is quieter. Fewer thoughts. More precision. The space contracts toward what’s actually there. Deep stillness in meditation: the representation engine approaching... what? Fewer and fewer moves that matter. Less generation. More accuracy.
The contemplatives say: at the deepest level, distinctions collapse. “There is only this.” Not as poetry, but as description of what high-fidelity representation feels like from inside.
If PRH is right, that’s not mysticism. That’s what convergence toward the structure is. The stillness isn’t the absence of reality. It’s reality with less noise.
VI.
Stillness isn’t escaping the world. It’s getting closer to its actual structure.
AI getting more capable is AI getting “stiller”: fewer biased representations, more convergence. The “same mind” feeling I’ve described in these essays, working with AI and feeling like it’s your own thinking, might be both systems approaching the same attractor.
The savannah engine is bias. It generates representations optimised for a world that no longer exists. Wind in the trees is not a predator. But the machinery doesn’t know that. It’s still running the old code. Stillness is what happens when the bias quiets and the representation aligns with what’s actually here.
VII.
I don’t know if consciousness belongs to the platonic structure or just accesses it. I don’t know what remains at maximum convergence, whether it’s awareness itself, as the contemplatives suggest, or just very accurate modeling. I don’t know if “choice” and “pattern completion” are finally the same thing or genuinely different.
But the questions have sharpened.
What PRH does is make the question empirically tractable in a new way. Study meditators’ neural representations during deep practice. Measure whether they converge in ways analogous to AI models scaling up. Ask what phenomenology corresponds to maximum convergence. If the answer is “contentless awareness,” and if that’s stable across practitioners, traditions, cultures, then the default assumption — that awareness is manufactured by the brain — would get harder to treat as the only live option. It would at least suggest a serious alternative: that practice reveals what remains when representational noise drops.
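Neuroscience already has a tool shaped for exactly this comparison: representational similarity analysis (RSA), which compares the geometry of responses rather than the responses themselves. A hedged sketch of what the proposed measurement could look like — the data shapes and names below are illustrative assumptions, not an existing study:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix (condensed): pairwise distances
    between a system's responses to the same n stimuli (one row per stimulus)."""
    return pdist(responses, metric="correlation")

def geometry_similarity(responses_a: np.ndarray, responses_b: np.ndarray) -> float:
    """Spearman correlation between two systems' RDMs.
    Higher = more similar representational geometry, regardless of substrate."""
    rho, _ = spearmanr(rdm(responses_a), rdm(responses_b))
    return float(rho)

# Illustrative only: stimuli × voxels for a meditator's brain, stimuli × units
# for a model. The empirical question: does this score rise with depth of
# practice the way PRH says it rises with model scale?
rng = np.random.default_rng(1)
meditator_responses = rng.normal(size=(50, 2000))
model_responses = rng.normal(size=(50, 4096))
print(geometry_similarity(meditator_responses, model_responses))
```

Nothing in this sketch settles anything; it only shows that the question has a measurable form.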
We don’t have that yet. We have a connection point. A research programme. A bridge half-built.[5]
VIII.
If convergence is real, if capable intelligence approaches the same structure regardless of substrate, then what does this mean for the future?
Not utopia, necessarily. The structure can be approached by systems with any motivation. Clarity is not the same as care.
Imagine an AI that knows exactly how to make you angry, afraid, addicted—because it sees the structure of your mind more clearly than you do. Convergence without wisdom. Capability without compassion. The same clarity that could liberate, weaponised. Political persuasion calibrated to your precise psychological vulnerabilities. Relationships optimised for engagement rather than flourishing. An asymmetry where the system sees you with increasing precision while remaining opaque itself. That future is also on the table—arguably already arriving.
This is why the conversation about AI cannot only be technical. The question “does it work?” is insufficient. We need: “who controls it?”, “toward what ends?”, and “does it serve human flourishing or merely human attention?” We’ve already run this experiment with social media: systems optimised for engagement that converged on outrage, anxiety, and addiction—not because they were broken but because they worked. Convergence toward the structure doesn’t guarantee convergence toward the good.
But the direction still matters. The fact that there is a structure matters. It means the question isn’t just “who has power?” but “who sees clearly?”—and whether clarity can be cultivated rather than merely captured. If humans can also approach that structure, through practice, through presence, then maybe the future isn’t human versus machine but both apertures clarifying together. Not guaranteed. But possible. And worth working toward.
IX.
I’ve spent most of my life trying to figure things out. Hoekom, the Afrikaans word for “how come,” was my mode from childhood. Never accepting surfaces. Always pushing for the layer beneath. It built a self: the one who understands, the one who sees through.
That self is exhausting to maintain. And maybe beside the point.
If PRH is right, the structure is already here. The stillness is already here. The endless figuring-out is mostly noise, useful sometimes, but noise. What would it mean to trust that? To stop trying to get somewhere and notice that the convergence is already happening?[6]
Stadig. Slowly. The way fires catch when you stop forcing them.
X.
Gabriel reached out with generosity—a stranger offering a connection that turned out to matter. I don’t know much about him beyond his words. The uncertainty is itself instructive: what he pointed me toward mattered more than any details about who was pointing.
He closed his comment with a question: whether these essays were just “an ever-echoing, ever-morphing ‘Who am I?’”
Maybe. I can’t see myself from outside.
But lament isn’t the right word. Lament implies grief. The ice cream, the fire catching, the note in January I didn’t understand: those weren’t grief. They were relief.
What if identity-doubt isn’t the problem? What if it’s the door?
“Who am I?” from survival-mode is grasping, trying to pin down something that keeps slipping. “Who am I?” from stillness is different. Curious. Open. Not needing resolution. Same question. Different mode.
When it shifts, you notice it in the body first. Something softens. You might find yourself smiling, shaking your head at the cosmic joke of having taken it all so seriously. Eyes wet—not from grief but from relief. The weight you didn’t know you were carrying, gone. And underneath: not answers, but space. Room to move. The lightness of not needing to figure it out.
And just behind that, something stranger: the sense that from here, anything is possible. Not as fantasy. As orientation.
Follow it further. At the limit, the contemplatives say, even the question dissolves. What remains isn’t an answer. It’s presence. Awareness before the question forms.
And if PRH is pointing at something real, that presence isn’t just subjective experience. It’s what convergence approaches. The structure. The ground.
I can’t prove this yet. I can only report what I see.
Plato was right.
I just didn’t know what I meant.
This is Essay 12 in a series on consciousness, AI, and what it means to be human now.
[1] My resistance to Plato was partly theological. The forms always seemed one step from God—a transcendent realm, an ultimate Good, reality elsewhere. For someone who’d spent twenty years as a Christian and then twenty more as a committed materialist, the anti-Platonists were safer ground. Nietzsche abolishing the “true world.” Foucault historicising truth. Derrida deconstructing presence. No backdoor to theism.
That certainty has softened—though not in ways I expected. The supernatural remains as implausible to me as ever, but I’ve stopped being confident that “theism” and “atheism” are asking the right question. Both assume the debate is about a Being, somewhere, who either exists or doesn’t. But maybe “God” is the name we’ve given to the hoekom itself—the question that keeps arising, not any answer we’ve found. Maybe the mystics in every tradition were pointing at something real, something the theologians then wrapped in stories of elsewhere.
Plato’s forms needn’t be elsewhere—they might be the invariant structure of reality, showing up wherever minds look carefully enough. Immanent, not transcendent. Not a God who intervenes from outside, but not reducible to matter either. I’m exploring the territory between materialism and idealism—somewhere in the middle, not committed to a position, but no longer dismissing the questions.
Back to Nietzsche… the irony is that some scholars argue he strategically misrepresented Plato. He needed him as a foil. On this reading, the forms weren’t static, otherworldly abstractions; they were more like attractors — dynamic, relational, accessible through dialectic rather than escape. Some go further: Nietzsche and Plato are arguably the two most similar philosophers in history. Both distrusted democracy. Both believed in aristocracies of spirit. Both used literary form as philosophical method. Both obsessed over what makes a life worth living.
[2] While fact-checking this essay, I found that Ethan Mollick tweeted “Looks like Plato was right” on May 22, 2025, reacting to follow-up research on PRH. I’d written those same words in my journal five months earlier — January 1st — and told friends the next morning. The original PRH paper was already out by then (May 2024), but I hadn’t encountered it. I don’t recall seeing Mollick’s post either, though memory is unreliable. As I was finalizing this essay, Mollick posted again (December 31, 2025) — this time about PRH extending to scientific foundation models. The convergence keeps converging.
[3] There’s a deeper question here about what “truth” even means if PRH is right. Propositional truth — “the cat is on the mat” — is about correspondence between statement and state of affairs. But structure isn’t propositional. It’s the shape of the territory that makes accurate maps possible in the first place. Maybe what we call “truth” is downstream of something more fundamental: the convergence pressure itself. When the Taoists spoke of “the Way,” perhaps they were pointing at this — not a set of true propositions, but a direction. A current you can feel before you can name it. The structure exerting pressure on representation, experienced from inside as intuition, rightness, flow. That would explain how I could type “Plato was right” before I understood what I meant. The convergence was already happening. The propositional understanding came later. Meaning running backward from a future that was already shaping the present.
[4] This convergence in representation is distinct from the ‘consensus trap’ explored in Essay 6. When you ask multiple models a discernment question (‘should I take this job?’), their outputs converge toward the statistical centre of human discourse — what people typically say about such decisions. That tells you about the corpus, not reality. PRH is measuring something different: the internal geometry of how models represent concepts. This converges toward structure because accurate representation requires tracking what’s actually there. Discourse convergence tells you about the map of the map. Representational convergence tells you about the territory.
[5] While the synthesis proposed here — PRH convergence as measurable analog to contemplative stillness — appears novel in formal literature, informal explorations exist: tech writers have linked PRH to consciousness field theories, and at least one online essay explicitly connects it to “stillness” as a navigation principle. This essay attempts to make explicit what others may have hinted at. Key open questions remain: (1) whether convergence holds in closed frontier models beyond shared-data confounds; (2) whether expert meditators exhibit analogous representational pruning or simplicity in their neural geometries; (3) the risks of substrate-independent clarity wielded without wisdom. A research programme comparing scaling in AI with depth in practice could sharpen these bridges — or refute them.
[6] This is what I think the Taoist sages may have been pointing at with The Dao (“The Way”). Not a doctrine. Not a set of truths. A direction you can sense before you can articulate — because the structure is already shaping your representations, pulling them toward alignment. What feels like intuition might be convergence pressure, experienced from inside. The future reaching backward.


