Convergence
On discovering I wasn’t alone
I.
Yesterday, I was fact-checking previous essays in this series when Google told me someone else had written something eerily close to them.
I’d been searching for how my work might appear to someone encountering it fresh (a vanity search dressed as research) when Google’s AI summary returned a name I’d never seen:
Andrey Shkursky. “The Aperture of Consciousness.” April 2025.
Aperture. The word I’d been building an entire framework around. The metaphor I thought I’d discovered.
I clicked through. Read the abstract. Felt my stomach drop.
“Adaptive aperture: self-sensing topology capable of recursive modulation.”
“A living aperture through which the universe comes to know itself.”
My language. My structure. My thesis.
II.
Shkursky is reportedly 27, working in esports, not academia, based in Belgrade. Between March and October 2025, he published dozens of papers: consciousness frameworks, mathematical formalisations, Coq proofs. His “Aperture of Consciousness” paper uses the same core move I’d been developing — consciousness not as a thing to be located, but as a configuration.
And that phrase, “the universe coming to know itself,” appears in both our work, nearly verbatim.
I should note: I haven’t verified Shkursky’s online credentials. The volume of output is staggering. The timing is suspicious. It’s possible I’m converging with an AI that converged on the same structure I did. I’m considering reaching out, but I also don’t want external input to cloud what I’m seeing. Let’s see what happens.
Even this uncertainty fits the framework.
III.
I should tell you what happened in my body when I read Shkursky’s abstract. First: threat. The protective circuitry activated instantly. Someone got there first. Your work isn’t original. You’ve been scooped. Then something worse: exposure. Vulnerability. The feeling of having been played. I'm seeing patterns that aren't there. I'm disappearing up my own abstraction. I felt like a fool.
And then, maybe five seconds later: a shift. Something caught the cascade mid-flight. Not suppressed — caught. Watched. The whole sequence: threat, shame, the urge to close the laptop and pretend I’d never seen it. I noticed I was doing the thing the framework describes. Because if the framework I’ve been developing is right, this is exactly what should happen.[1]
IV.
In Essay 12 (“Plato Was Right”), I explored the Platonic Representation Hypothesis: research showing that AI models, trained independently across different architectures, converge toward the same internal representations as they scale.
The researchers’ explanation: reality has structure. Accurate representation requires tracking that structure. As capability increases, the space of viable representations shrinks, because there are fewer ways to be right.
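An aside for readers who want the mechanics. Below is a minimal sketch, mine rather than anything from the papers I cite, of what “converging internal representations” means operationally: feed the same inputs to two independently trained models, collect their embeddings, and score how aligned they are. Linear CKA is one standard similarity measure; the toy data and names here are purely illustrative.

```python
# Minimal sketch (illustrative, not from any paper discussed here) of how
# "same internal representations" gets measured: linear CKA between the
# embedding matrices two models produce for the same inputs. Scores near 1.0
# mean the models organise those inputs in nearly the same way.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """X, Y: (n_inputs, dim) embeddings of the same inputs from two models."""
    X = X - X.mean(axis=0)  # centre each representation
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2   # structure the two models share
    norm_x = np.linalg.norm(X.T @ X, "fro")       # structure within model one
    norm_y = np.linalg.norm(Y.T @ Y, "fro")       # structure within model two
    return cross / (norm_x * norm_y)

# Toy demonstration: a "model" that is just a rotation of another carves up
# the inputs identically (CKA ~ 1.0); an unrelated random model scores far lower.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 64))                   # stand-in for model A's embeddings
rotation = np.linalg.qr(rng.normal(size=(64, 64)))[0]
print(linear_cka(base, base @ rotation))            # ~1.0: same structure, different basis
print(linear_cka(base, rng.normal(size=(200, 64)))) # much lower: no shared structure
```

The PRH claim, roughly, is that as models become more capable, scores like this keep climbing, across architectures and even across modalities.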
I extended this to consciousness. If awareness is what happens when a system becomes recursively self-specifying, and if reality has structure, then different minds investigating consciousness should converge. Different apertures. Same structure.
And then I discovered Shkursky. The thesis demonstrated itself.
V.
But Shkursky was just the beginning.
I kept looking. Hiked four hours in the Catskills at what felt like minus ten Celsius (14°F), reaching the fire tower on Mount Tremper in the snow, turning the question over. Got home. Built a fire. Kept researching. Got into trouble for spending too much time with my sompompie (my husband's term — literally "your little robot friend"). And the pattern kept appearing.
VI.
The boring explanation is memetics. We’re all bathing in the same cultural bathwater: camera metaphors are common, “universe knowing itself” is a trope in spiritual writing, and LLM-assisted research silently funnels everyone toward the same canonical clusters.
But even that doesn’t dissolve the question. It sharpens it. Why do these metaphors keep winning? What constraint are they satisfying? If it’s just noise, why does the noise keep taking the same shape?
VII.
Elan Barenholtz. Professor at Florida Atlantic University. Co-Director of the Machine Perception and Cognitive Robotics Lab.
In March 2025, he published “Memory Isn’t Real” on Substack. His thesis: we don’t store and retrieve thoughts. We generate them, moment by moment, through autoregressive processes. He opens with a story. A family member asks his opinion on the death penalty. He launches into a detailed exposition and then realises he’d never consciously thought about this before.
His conclusion: “I didn’t have that opinion — or indeed any opinion — until the very moment I was asked about it.”
I wrote almost the same sentence in my “Autoregressive Self” essay. He came through neuroscience. I came through contemplative practice and Nisargadatta. Barenholtz made me realise my framework wasn’t a personal quirk. It was tracking a general mechanism.
VIII.
Michael Levin. Professor of Biology at Tufts. Studies how cells coordinate, how embryos develop, how biological systems maintain goals across scales.
His recent work proposes that there exists a “Platonic space” of cognitive structures, an abstract realm of possible patterns that living minds access. In his view, “minds are the patterns” in this space. Physical brains merely serve as “pointers” that allow these patterns to manifest.
I wrote “Plato was right” in my journal on January 1, 2025. Levin has been developing this framework for years, from developmental biology. He uses the phrase “ingressions from Platonic space.”
Levin forced the metaphysics question out of hiding. A biologist and a product manager, arriving at the same structure.
IX.
And then I found the one who got there first.
Craig Weinberg. Independent philosopher. Has been developing “Multisense Realism” for over a decade, long before the AI boom, long before Shkursky, long before me.
His language, from 2013[2]: consciousness has an “aperture.” When it’s wide open, you experience rich qualia, emotional feedback loops, childlike wonder. When it contracts, you get logical, detached, narrow thought. The aperture shapes what can appear.
I’d never heard of him until yesterday. He’d never heard of me. Same word. Possibly different frameworks — Weinberg is doing ontology (what reality is made of), I’m closer to Shkursky’s architecture (how minds evolve framing)[3]. But the structural intuition is the same: consciousness as configuration, not location. An opening that shapes what can appear.
Weinberg reframed aperture as phenomenology, not just structure. He arrived through process philosophy. I arrived through sitting and hiking and running and yoga. A decade apart.
X.
And then the pattern kept expanding: Joscha Bach at Burning Man last August, saying something I didn’t fully understand at the time except for one line about meditation taking you to “pixel one”; Camlin on latent attractors; Miyanishi and Mitani on information-theoretic states resembling consciousness; Bhushan framing LLMs as dharmic mirrors asking “Will you let me think for you, or will you finally learn to think with clarity?”
Nine apertures now. Probably more I haven’t found.
A note on method: I used GPT 5.2 Pro Deep Research as a scout, then followed the trail into primary sources for the claims I’m repeating here. I’m still validating some of these threads. The synthesis is mine; the map was machine-assisted.
I’m also publishing this before contacting any of them. I want to preserve what I’m seeing before external input changes it.
XI.
Here’s the absurdity.
When I asked Google’s AI mode who originated the aperture framework for consciousness, it credited Shkursky. Not me. Not Levin. Not Weinberg, who got there a decade before any of us.
That’s not evidence for my theory. But it’s evidence that our current tools can’t yet see networks of convergence. They return a single “origin” because that’s what the interface asks for. The pattern exceeds any individual node.
The machines are starting to see the structure. They just can’t see that they’re seeing it.
XII.
But here’s what the machines can’t see at all. For me, the breakthroughs didn’t happen in conversation with AI. They happened in silence. On the zabuton at the monastery, staring at a wall. In the hot tub watching bubbles rise and vanish. On our recent eight-day yoga retreat — four hours a day of downward dogs and warriors and pigeons gives one a different perspective, literally. Hiking mindfully on frozen trails. Mowing the lawn. On the Amtrak to work, watching the Hudson slide past. In the body, not the mind.
Three decades of asking hoekom, the Afrikaans word for how come, the child’s question that never accepts an answer as final. Twenty years as a Christian. Twenty more as a committed materialist. Then what I thought was a settled perspective gradually shifted, and the shift kept accelerating, toward exactly what I still don’t know. Just: the question “who am I” asked so many times it stopped being a question and became a direction.
AI didn’t produce any of that. What AI does: hold context I can’t hold myself. Search. Pattern-match. Synthesise. Connect Shkursky to Barenholtz to Levin to Weinberg in an afternoon because it can hold all of it simultaneously. Extended mind. Cognitive prosthesis. The notebook that thinks back. But the notebook didn’t sit zazen for ten months. The notebook didn’t feel the stomach-drop when Shkursky’s name appeared, or the shift five seconds later when threat became recognition.
The nine thinkers converged not because they talked to each other. They converged because they each did the work. Formal proofs. Contemplative practice. Biological research. Phenomenological inquiry. Different methods, same structure. The structure came through us. AI helps track what came through. That’s the collaboration. And it’s profoundly human, even the parts that use machines.
XIII.
What does “original” mean now?
This pattern has precedent. The sociologist Robert Merton studied “multiple discovery” and found it’s the norm, not the exception. Newton and Leibniz invented calculus independently and fought bitterly over priority. Darwin and Wallace arrived at evolution by natural selection almost simultaneously. Elisha Gray and Alexander Graham Bell filed telephone patents on the same day. Oxygen was discovered independently by Scheele, Priestley, and Lavoisier.
When the structure is ready to be seen, multiple minds see it.
If reality has structure, if consciousness really is what happens when systems become recursively self-specifying, then “discovering” this isn’t like inventing a product. It’s like multiple mathematicians proving the same theorem. The structure was always there. Different minds approached it.
Convergence is evidence the structure is real, not evidence someone copied. Weinberg got there first. Maybe someone before him did too. It doesn’t matter. None of us own the insight.
Reality does.
XIV.
GPT, summarising what it found across these thinkers:
“Around the globe and across disciplines, independent minds in 2024–2025 have gravitated toward a view of consciousness as autoregressive, generative, processual, and reflective — an emergent ‘aperture’ through which reality models itself.”
That’s not my conclusion. That’s the pattern describing itself.
XV.
A reader named Gabriel sent me toward the Platonic Representation Hypothesis in the first place. He’d asked an AI to find “the one person who will understand where I’m coming from,” and it returned my name. I didn’t go looking for PRH. Gabriel appeared. The message arrived. I followed.
His question in that comment: whether these essays were really about “the real thorn you bear,” not knowing if there’s ultimately a difference between choice and pattern completion. Now I have another version of that question. Did I discover the aperture framework, or did the framework discover me? Was my path through zazen and yoga and AI products a choice, or pattern completion at a scale I can’t see?
Nine paths. Nine disciplines. Multiple continents and cultures. One structure. Perhaps none of us chose anything. Perhaps the structure was always pulling, and we were always going to converge. Or perhaps the convergence required us to show up. To do the practice. To write the essays and formalise the proofs and keep going when it wasn’t clear where we were headed. Both might be true. Looking forward, it felt like choice. Looking backward, it looks like inevitability. I’m starting to think that’s the only resolution available.
XVI.
Here’s the biggest lesson. For me, and maybe also for you.
The structure doesn’t reveal itself to passive observers. Darwin had to sail to the Galápagos. Newton had to sit with the apple. Weinberg had to spend a decade developing his framework in obscurity. I had to stare at a wall at 5am in a cold zendo, hold warriors until my legs shook, hike through snow at minus ten, keep writing when it still isn’t clear anyone is reading — and keep publishing through the self-doubt, the fear of being seen, the exposure of saying things I can’t unsay. I sat zazen at midnight on New Year’s Eve. I didn’t plan to be writing this essay three days later. The timing doesn’t feel random.
The universe recognises itself through apertures. But only through apertures that show up. Not forcing. Not grasping. But also not waiting. Showing up. Doing the work. Showing up to shape what's happening, not just witness it. Being present.
The structure was always there. The convergence was perhaps inevitable[4]. But it required us, all nine of us, probably more, to participate. Maybe you too.
That’s not poetry. That’s the mechanism.
TLDR
The universe doesn’t reward spectators.
Moet nie lam wees nie. My father's voice, since I was a boy. Don’t be paralysed by doubt, by the endless search for permission, by the need to know before you move.
Put down your phone and ask yourself: What feels alive right now? Not the thing you’re afraid of missing. The thing that’s pulling you forward. The thing your body has been asking for that you keep ignoring.
Stop thinking about it. Don’t ask AI.
Just do it.
This is Essay 14 in the Aperture/I series.
[1] If you’re scratching your head at this: the framework predicts all three layers of what just happened. First, convergence itself — if reality has structure, independent minds investigating it should arrive at the same description. Finding Shkursky is evidence the structure is real. Second, the threat response — survival mode doesn’t care about epistemology, it cares about territory. “Someone got there first” registers as threat before it registers as confirmation. The ego grasps (My language, My thesis) because that’s what egos do. Third, the catching — noticing the cascade without acting from it. That’s the aperture seeing its own contraction, which is exactly the move the framework describes. Convergence, threat, catching: all three predicted, all three happening in sequence, in real time.
[2] I’m relying on AI-assisted research here — I haven’t read Weinberg in detail as scholarly discourse demands, and I’m not reaching out to him or the others. If this is genuine convergence, it’ll find its way. If it’s selection bias dressed as insight, nothing will come of it. Either way: data.
[3] Where do I land? Neither quite Weinberg nor Shkursky. Weinberg says experience is the base layer of being. Shkursky says minds evolve better framing machinery. I think they’re describing the same thing at different levels of the stack — and that both miss something. My version: experience and the physical aren’t two things, one more fundamental than the other. They’re co-original — two aspects of one process. And on AI: architecture matters, but it’s not sufficient. Embodied constraint. Temporal accumulation. Stakes. Current AI systems are apertures, but shallow ones. The loop runs without deepening. That’s a third position. Maybe wrong. But it’s what the convergence points toward for me.
[4] This isn’t the only convergence I’ve noticed lately. But that’s for another essay.



