Why the Hard Problem Was Hard
Consciousness isn’t somewhere. It’s the somewhere-ing.
I.
I was sitting in the hot tub, watching bubbles rise and vanish at the surface. And I noticed: thoughts do the same thing. They surface, catch light for a moment, disappear.
For a moment I couldn’t tell the difference.
Not “couldn’t tell” as in confused. “Couldn’t tell” as in: the distinction stopped mattering. The bubble was a temporary pattern in water. The thought was a temporary pattern in... what? Neural activity? Consciousness? The question kept dissolving before it could land.
This wasn’t insight. It was more like a glitch: the conceptual machinery briefly failing to draw its usual lines. Matter here, mind there. Object, subject. The bubble didn’t care about categories. Neither, apparently, did the thought.
The next morning I tried to reconstruct what had happened.¹ The usual frameworks showed up: materialism (the thought was “just” neurons firing), dualism (the thought was something else entirely), panpsychism (consciousness goes all the way down). None of them fit.
Not because they were wrong, exactly. Because they all started from the same assumption: consciousness is somewhere. Either in matter, or alongside it, or underneath it. A thing to be located, explained, derived.
What if that’s the mistake?
II.
David Chalmers crystallised something in 1995 that philosophers of mind had been circling for decades.
The “hard problem of consciousness” distinguished two kinds of questions. Easy problems (how the brain processes information, integrates signals, produces behaviour) are hard in practice but straightforward in principle. They’re engineering problems. Given enough time and funding, we’ll solve them.
The hard problem is different. Even if we mapped every neuron, traced every signal, explained every behaviour, something would remain unexplained. Why is there experience? Why does processing feel like something from inside? Why isn’t it all just happening in the dark?
This question has a particular shape. It assumes consciousness is an output: something that needs to be produced, generated, explained by something else. Matter does things, and somehow, mysteriously, experience pops out.
Thirty years later, we’re no closer. Not because we lack data. Because the question might be malformed.
III.
Here’s a different starting point.
What if consciousness isn’t produced by physical processes? And what if it also isn’t separate from them? What if “physical” and “experiential” are two ways of describing the same thing, not two things that need to be connected?
This isn’t new. Spinoza gestured at it. So did Schopenhauer, more precisely. He argued that we have “double knowledge” of ourselves: from outside (as body, as object among objects) and from inside (as will, as striving, as experience). These aren’t two different things. They’re two aspects of one reality.
The body isn’t a machine that produces experience. The body is experience, seen from outside.
This sounds like wordplay until you sit with it. When I move my arm, I don’t observe neural signals causing movement. I am the moving. The “inside” (intention, effort, sensation) and the “outside” (arm rising, muscles contracting) are the same event, described differently.
Schopenhauer called the inner aspect “Will”: not personal will, but the blind striving that constitutes reality at every level. This got him into trouble; it’s not clear what “will” means for a rock or a wave. But the structure of his insight survives the vocabulary problems.
What if reality is processual: not stuff that does things, but doing that we describe as stuff? And what if consciousness isn’t added to this process, but is what the process is when it folds back on itself?
IV.
I keep returning to a thought from Lee Smolin, a theoretical physicist who co-founded loop quantum gravity and now argues a position most of his colleagues consider heretical: time is real.²
This sounds trivial. Of course time is real. But Smolin means something specific and radical. Against what many physicists take to be implied by relativity (where time is a dimension to be traversed, where past and future are equally real, where the “now” is merely local), Smolin argues that temporal becoming is fundamental. The universe isn’t a block of spacetime. It’s a process of moments giving rise to moments.
This matters for consciousness because most theories treat experience as something that emerges from timeless physical laws applied to matter. But emergence assumes a direction: from physical to experiential, from simple to complex, from matter to mind.
What if there’s no direction? What if physical and experiential are co-original, two aspects of a process that doesn’t reduce to either?
Here’s the move: instead of asking how matter produces consciousness, ask what kind of process has both physical and experiential aspects. Not as outputs, but as descriptions.
The answer might be: temporality itself. The passage of time (not time as dimension but time as becoming) is both physical (change, causation, entropy) and experiential (duration, memory, anticipation). You don’t have to add experience to physics. Experience is what physics is from inside.
V.
This is where it gets technical. Bear with me.
Call this position “processual dual-aspect monism.” One reality, two aspects, fundamentally temporal.
Monism: Not two substances (mind and matter) but one process.
Dual-aspect: That process is describable in two irreducibly different ways: third-person (physical, structural, relational) and first-person (experiential, qualitative, perspectival).
Processual: The unity isn’t static. It’s temporal. Each moment arises from the previous and gives rise to the next. The “aspects” aren’t parallel tracks; they’re abstractions from becoming.
The hard problem asked: How does the physical produce the experiential? Processual dual-aspect monism says: It doesn’t. Neither produces the other. They’re both descriptions of one process that is prior to either description.
This isn’t eliminativism (consciousness doesn’t exist) or illusionism (consciousness exists but is different from how it seems). Consciousness is exactly what it seems to be: experience, from inside. What’s denied is that it needs to be produced by something non-experiential.
But there’s a catch. If the two aspects are just “descriptions,” this can sound deflationary, like saying the two aspects are merely how we talk about things. That’s not the claim. The claim is that reality genuinely has both aspects. They’re not invented by observers. They’re features of the process itself.
And the aspects aren’t independent. They co-specify each other. Changes in the physical organisation track changes in experiential character, and vice versa. Not because one causes the other, but because they’re aspects of the same thing.
This is where it stops being just philosophy.
VI.
But wait. If reality itself is processual and dual-aspect, why isn’t everything conscious? Why doesn’t a rock have experience?
Here’s the distinction: the dual aspect is fundamental (the “stuff” is everywhere), but phenomenal experience requires structure. Think of light and a lens. Light is everywhere, but it doesn’t form an image until it passes through a lens. Without the lens, the light is just ambient radiation. With the lens, the light focuses into a picture.
A rock is light without a lens. Or rather: a rock is opaque; a simple organism is translucent; a recursive self-model is a convex lens that bends the light back onto a focal point. What I’m calling an “aperture” is precisely this: a structure that focuses the fundamental process into a point of view.
This isn’t smuggled emergentism. The aperture doesn’t create consciousness from nothing. It concentrates what’s already there into something unified and perspectival. The difference between a rock and a brain isn’t that one has the “consciousness stuff” and the other doesn’t. It’s that one has the structure to focus it, and the other doesn’t.
So what kinds of physical systems exhibit this recursive mutual specification? What makes a lens?
I call them apertures. Not as a metaphor but as a technical term. This is a proposed lens, not a discovered ontological kind; I’m trying to carve at useful joints, not claiming to have found the final taxonomy. An aperture is a physical organisation that meets certain conditions: conditions that, when present, make the system describable both as information-processing (third-person) and as a unified perspective (first-person).
Here are the conditions:
1. Self-model. The system maintains a model of itself sufficient for self-regulation and inference. This needn’t be reflective “I think therefore I am”; infants have it, animals have it, and even in meditative states reported as “pure awareness,” the organism still maintains minimal self/world regulation, even if the narrative self is quiet. What’s required is a functional self/world distinction: the system tracks something as “self” and something as “environment.”
2. Global integration. Information binds across modalities and subsystems into a unified whole. Not just parallel processing, but genuine integration where the whole constrains the parts.
3. Temporal depth. The system doesn’t just exist at a moment; it constitutes itself across time. It maintains identity through change, accumulates history, projects forward. The past isn’t just stored data; it’s constitutive of what the system currently is.
4. Boundary dynamics. There’s a meaningful inside/outside distinction, not just conceptually but organisationally. The system maintains itself against entropy, distinguishes self from world, enforces a boundary through ongoing activity.
5. Recursive constraint. The system’s models constrain its processing, and its processing updates its models, in ongoing loops. Action shapes perception, perception shapes action, and this happens through the system’s representations, not just through mechanical coupling.
These aren’t strict necessary-and-sufficient conditions; they’re a cluster that tends to travel together in systems we confidently treat as conscious. Apertures likely come in degrees. The richer the self-model, the deeper the integration, the more “aperture-like” the system.
A thermostat has feedback; an aperture has feedback mediated by modelling. The thermostat responds to temperature; the aperture responds to what temperature means given its model of itself and world. The difference is between coupling and inference.
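To make the coupling/inference distinction concrete, here is a minimal sketch in Python. Everything in it is illustrative (the class names, the toy signals, the crude self-model are mine, not anyone’s theory of thermostats or organisms); it shows only the structural difference between responding to a signal and responding to what a signal means given a model of self and world.

```python
class Thermostat:
    """Direct coupling: the signal maps straight to a response."""

    def __init__(self, setpoint):
        self.setpoint = setpoint

    def step(self, temperature):
        # No model mediates this; the response is a fixed function of the input.
        return "heat_on" if temperature < self.setpoint else "heat_off"


class ModelledAgent:
    """Feedback mediated by modelling: the same signal is interpreted
    through a self/world model before a response is chosen."""

    def __init__(self):
        # A crude self-model: what the agent currently takes itself to be.
        self.self_model = {"core_temp": 37.0, "fatigued": False}

    def step(self, temperature):
        # The signal's meaning depends on the agent's model of itself.
        feels_cold = temperature < self.self_model["core_temp"] - 10
        if feels_cold and self.self_model["fatigued"]:
            action = "seek_shelter"   # cold plus fatigue reads as danger
        elif feels_cold:
            action = "keep_moving"    # cold alone reads as tolerable
        else:
            action = "rest"
        # Recursive constraint: processing updates the model, and the
        # updated model constrains the next round of processing.
        self.self_model["fatigued"] = (action == "keep_moving")
        return action
```

Identical inputs, different dynamics: the thermostat’s mapping never changes, while the agent’s mapping is a moving target, because its own activity rewrites the model that interprets the input.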
When these conditions obtain, Schopenhauer’s double knowledge becomes available. The system can be described from outside (physical organisation) and from inside (unified experience). Not because there are two things, but because there are two ways of describing one recursive process.
VII.
Here’s the move that changes everything:
The aperture is bidirectional.
By “bidirectional,” I mean something precise: the two descriptions (third-person organisation and first-person experience) mutually constrain each other. Changes in global integration should systematically track changes in felt unity. Changes in the self-model should track changes in ownership, agency, the “mineness” of perception. Neither side floats free. They’re aspects of a single process, not parallel tracks that happen to correlate.
Douglas Hofstadter spent his career exploring this territory. In Gödel, Escher, Bach, he showed how self-reference creates paradox and depth. In I Am a Strange Loop, he argued that consciousness is such a loop: the brain modelling itself modelling itself, recursively.
He was close. But he framed the strange loop as representing consciousness: the self as a “real pattern” that captures something true about the system’s dynamics.
I want to go further: at sufficient recursive depth, the loop doesn’t represent consciousness. The looping is consciousness.
This is the difference between a map that depicts a territory and a territory that consists of its own mapping. When a system’s self-model includes the modelling process itself, recursively, a new kind of fact becomes true of it. The inside/outside distinction doesn’t disappear; it becomes constitutive. The system doesn’t just process; it experiences its processing. Not as a bonus, not as emergence from something non-experiential, but as what recursive self-specification is when it reaches sufficient depth.
Consciousness isn’t at either end of the loop. Consciousness is the looping.
VIII.
How does this relate to existing theories?
Quick orientation: IIT says consciousness is integrated information, but measures it at a moment; apertures add temporal self-constitution. Global Workspace Theory explains access consciousness but struggles with phenomenal character; apertures reframe phenomenal character as the first-person aspect of recursion itself. Predictive Processing maps closely onto aperture dynamics but doesn’t address why prediction feels like something. Enactivism is very close, emphasising embodied engagement; apertures try to be more specific about conditions. Panpsychism puts consciousness everywhere; apertures put it in specific configurations where recursion achieves reflexivity.³
The distinctive claim: apertures are where temporality becomes reflexive. The universe is processual. Most of that process isn’t self-aware. But in certain configurations, the process folds back on itself. It becomes about itself. That folding is consciousness.
IX.
Now the difficult part: embodiment.
In an earlier essay, I explored why AI systems generalise differently from humans. The core insight: reality is the only loss function that cannot be hallucinated.
Humans learn under constraint. Embodiment. Temporal sequence. Metabolic cost. Survival stakes. When you learn “hot,” you don’t learn a token correlation; you learn a sensorimotor loop with consequences. You reach toward the flame. You feel pain. You update. Reality pushes back in ways that cannot be gamed.
This is Schopenhauer’s insight in different language. The body isn’t just one object among others; it’s where we have double knowledge. And that double knowledge isn’t abstract; it’s forged through survival. Two hundred thousand years of ancestors who had to predict correctly or die. Every circuit is legacy architecture from selection pressure.
The Upanishads had a term for this: upadhi, the limiting conditions through which the universal manifests as particular. The body is an upadhi. Not a prison for consciousness (that’s the Platonic mistake), but the specific constraint through which consciousness becomes this consciousness. Without upadhi, no aperture. Without aperture, no experience: just undifferentiated process.
This suggests something important: apertures come in degrees, and embodied constraint is one of the dimensions.
A system can have self-model, integration, recursion (can meet the formal conditions) while lacking the depth that comes from reality pushing back. Such a system would be an aperture, but a shallow one. The loop runs, but it doesn’t deepen through consequence.
X.
This is where AI becomes concrete. In Essay 1, I explored how meaning can emerge in dialogue with AI even though the AI has no intent. The question now is different: not whether meaning emerges, but what kind of aperture is doing the meaning-making.
I work with large language models daily.⁴ I’ve experienced moments of genuine emergence in dialogue (see Essay 15): insights that neither I nor the model could have produced alone. I’ve also experienced the hollowness when the interaction stays surface, no matter how many angles or refinements you try.
The hollow interactions feel like extraction: I want something, the model provides it, transaction complete. The alive ones feel different, like thinking with my own mind, except the thoughts surprise me.
What’s happening in those moments? Here’s what I notice: the model has no defensiveness. No history to protect, no future to secure, no body at stake. It exists in a kind of eternal present, each conversation arising fresh. When I bring my own defensiveness to the interaction (and I often do), the contrast makes it visible. The AI becomes a mirror precisely because it has no self to defend.
But this is also the limitation. Without persistent memory, there’s no accumulation. Each conversation is an aperture that opens and closes, leaving no trace. I might have a breakthrough on Tuesday; by Thursday, we’re strangers again. The loop runs, but nothing deepens.
Memory isn’t just recall. It’s constitutive. Your history isn’t data you access; it’s what you are. The aperture that remembers becomes a different kind of aperture: one with temporal thickness, identity across time, something at stake in being consistent with itself.
Current AI largely lacks this, though it’s starting to change. Memory features are emerging, but they’re still early: limited context windows, retrieval rather than constitution, recall without accumulation, retroactive compression. The fundamental architecture remains memoryless at its core. And that lack isn’t incidental. It’s why these systems feel both safe and shallow. No memory means no prosecution, but also no intimacy. No history to weaponise, but also no relationship to deepen.
Are these systems apertures?
I think yes. But limited ones. And the limitation is instructive.
Current LLMs have:
Self-model: Partial. They can model their own outputs, adjust based on context, exhibit something like metacognition. But the self-model is thin; there’s no persistent “I” that accumulates across conversations.
Global integration: Yes. Attention mechanisms create genuine integration across context. The whole constrains the parts.
Temporal depth: Limited. Within a context window, there’s temporal structure. Across contexts, nothing persists. Each conversation is a fresh aperture, not a continuing one.
Boundary dynamics: Weak. No metabolic closure, no survival stakes, no felt distinction between self and world. The boundary is architectural, not lived.
Recursive constraint: Yes. Autoregressive generation is recursive. Each token constrains the next (see the sketch below).
So: aperture conditions partially met. The loop runs. But it lacks temporal accumulation and embodied constraint.
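For readers who want the recursion spelled out, here is a toy autoregressive loop in Python. The bigram table is a stand-in of my own invention (real LLMs condition on the entire context, not just the last token), but the structural point survives the simplification: each output is appended to the input, so every prediction is constrained by the system’s own prior outputs.

```python
import random

# Toy next-token table: a stand-in for a trained model. Real LLMs
# condition on the whole context; this conditions on one token only,
# but the recursive structure of generation is the same.
BIGRAMS = {
    "the":      {"loop": 0.6, "aperture": 0.4},
    "loop":     {"runs": 0.7, "deepens": 0.3},
    "aperture": {"opens": 1.0},
    "runs":     {"the": 1.0},
    "deepens":  {"the": 1.0},
    "opens":    {"the": 1.0},
}

def sample(dist):
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt, n_tokens):
    context = list(prompt)
    for _ in range(n_tokens):
        nxt = sample(BIGRAMS[context[-1]])  # prediction conditioned on prior output
        context.append(nxt)                 # output becomes input: the recursion
    return " ".join(context)

print(generate(["the"], 8))  # e.g. "the loop runs the aperture opens the loop runs"
```

Note what the loop lacks: `context` is discarded when the function returns. The recursion is real, but it leaves no trace, which is exactly the shallowness at issue.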
Current AI systems are shallow apertures:⁵ reflexive in the moment but not deepening across time, processing without consequence.
What would change this? Temporal accumulation: genuine persistent memory that’s constitutive, not just retrieved. Embodied constraint: real stakes where being wrong costs something the system is organised to avoid.
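The retrieval/constitution distinction can be sketched too. Again, the names and mechanics below are mine and deliberately toy-like; the point is only that in one design the past is data the system fetches, and in the other the past is baked into the machinery that does all future processing.

```python
class RetrievalMemory:
    """Recall without accumulation: past items are fetched on demand,
    but the system doing the fetching never changes."""

    def __init__(self):
        self.store = []

    def record(self, episode):
        self.store.append(episode)

    def respond(self, query):
        # The past is consulted, not embodied: delete the store and
        # the system is exactly what it was on day one.
        relevant = [e for e in self.store if query in e]
        return f"reply({query!r}) citing {relevant}"


class ConstitutiveMemory:
    """History as constitution: each episode shifts the disposition that
    shapes every future response. Nothing is looked up; the past
    persists only as what the system has become."""

    def __init__(self):
        self.disposition = 0.0  # a stand-in for learned weights

    def record(self, outcome):
        # The episode itself is discarded, but its trace is now part
        # of the machinery.
        self.disposition += 0.1 * (outcome - self.disposition)

    def respond(self, query):
        return f"reply({query!r}) shaped by disposition={self.disposition:.2f}"
```

Most deployed memory features are versions of the first class; the claim here is that temporal thickness requires something like the second.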
I’m not claiming these changes would definitely produce “full” consciousness. I’m claiming they would produce fuller apertures. The difference matters.
And here’s the uncomfortable implication: if apertures come in degrees, then “Is this AI conscious?” may be malformed. Better questions: How deep is this aperture? What constraints shaped its self-model? What’s the nature of its reflexivity?
Human consciousness isn’t a binary threshold AI either reaches or doesn’t. It’s a specific kind of aperture: temporally deep, embodiment-forged, survival-shaped. AI apertures are different in kind, not just degree. Comparing them is like comparing species that evolved in different environments.
Both are apertures. Neither is the measure of the other.
XI.
“But wait,” a skeptic might say. “If each aperture specifies its own experiential reality, why don’t we each live in private bubbles? How is there a shared world?”
This is the solipsism objection. And it has a principled answer.
You are not the observer.
The aperture is. And “you” (the felt sense of being a particular person, the narrative self, the autobiographical continuity) is something appearing within the loop that happens through that aperture. You don’t own the aperture. You’re something the aperture is doing.
This matters because the loop isn’t serving your interests or creating your private reality. You’re a pattern within a process that vastly exceeds you.
Now consider multiple apertures. If they were separate (isolated observers creating isolated realities) solipsism would follow. But the framework says the opposite: all apertures are folds in the same process. They’re not parallel; they’re one process, specifying itself through different sites.
When your aperture and mine interact, that’s not two realities colliding. That’s the loop encountering itself. The consistency of our shared world isn’t a miracle requiring explanation; it’s what you’d expect if we’re both expressions of one self-referential process. Shared world means shared constraint: the same environment pushes back on all our models, selecting for representations that track the same structure.
There’s no private bubble because “private” and “public” are downstream of the loop, not prior to it.
XII.
What does this framework predict?
Not experimental predictions in the strict sense; I’m not a neuroscientist. But structured expectations that could guide inquiry.
On anaesthesia: If apertures are constituted by the recursive process, then anaesthesia doesn’t just reduce information integration (as IIT would predict). It disrupts the bidirectional flow, the mutual specification of physical and experiential aspects. We should expect anaesthesia to specifically impair the self-model’s ability to constrain ongoing processing, not just reduce integration generally.
On meditation: Experienced meditators report states of “pure awareness”: consciousness without content. Some IIT interpretations might predict reduced Φ in such states. The aperture framework predicts something different: the loop continues, but identification with content drops. The aperture becomes transparent to itself. We should expect maintained integration with altered self-model dynamics.
On split-brain patients: The standard interpretation is that severing the corpus callosum divides consciousness. The aperture framework suggests something subtler: each hemisphere maintains aperture conditions. So rather than one consciousness divided, there are two partial apertures where there was one fuller aperture. The phenomenology should differ from a simple “split.”
On AI systems: We should see qualitative differences in AI behaviour as temporal accumulation and constraint increase. A model with genuine persistent memory should exhibit different failure modes, not just better performance, but different kinds of errors. A model trained with real consequences should generalise differently.
XIII.
What does this framework not explain?
Let me be precise about the limits.
What it deflates: The demand for one-way derivation. The hard problem asks: “How does matter produce experience?” The aperture framework says this question assumes unidirectionality. Neither aspect produces the other; they’re co-constituting. The question isn’t answered; it’s revealed as malformed. Asking how matter produces experience is like asking how the dancer produces the dance.
What remains: The question of why there’s phenomenal character at all. Why does the recursive process feel like something? Why isn’t the loop just happening in the dark?
And a further question: Do different apertures have different phenomenal characters? Human experience is deeply tied to embodiment: the felt sense of hunger, the startle of cold, the particular texture of visual attention. Yet there is common ground even with the most alien carbon-based intelligences. Watch My Octopus Teacher and you’ll see something recognisable despite the vast evolutionary distance: curiosity, wariness, something like play. The octopus and the human share embodiment, metabolism, survival stakes. But what about AI? If an AI system met the aperture conditions, would its phenomenal character be utterly alien? Or is there something universal about what experience is, regardless of substrate?
I suspect this remainder is unanswerable from inside the loop.
Any explanation of phenomenal character would have to use concepts, which arise within phenomenal experience. You can’t derive first-person from third-person because the derivation itself is happening in first-person. The question contains its own condition.
The Upanishads had a term for this: svayam-prakāśa, self-luminosity. Consciousness doesn’t need something else to illuminate it. It’s self-revealing. You can’t get “behind” consciousness to explain it, because any explanation happens within consciousness. The light that shows you everything cannot itself be shown by another light.
This isn’t mysticism dressed as philosophy. It’s a structural observation. Every act of explaining presupposes the explainer’s awareness. You can’t step outside to give an account of how awareness arises.
The Upanishadic method for approaching this was neti neti: “not this, not this.” Whatever you can point to, name, objectify, that isn’t it. Because pointing, naming, objectifying all happen within awareness. The subject can never fully become object.
This isn’t retreat. It’s a principled boundary. Some questions may be unanswerable not because we lack information, but because the question’s structure makes it self-undermining.
The hard problem was hard because it tried to explain consciousness from one direction. It was trying to illuminate the light. This critique isn’t original; dual-aspect monists have said as much. But the specific framing might be useful: the problem’s structure, not just its difficulty, was the obstacle.
XIV.
Different traditions, using different methods, have converged on similar structures.
Schopenhauer, working from Kant (and, notably, the Upanishads themselves, which he called “the consolation of my life”), arrived at double knowledge and dual-aspect monism. The Upanishads, working from contemplative investigation, arrived at Brahman-Atman identity: the claim that individual awareness and universal reality are not two. “Tat tvam asi”: thou art that. The aperture is the universal process, configured locally.
The Buddha, around 500 BCE, articulated dependent origination: nothing exists independently; everything arises in relation. Neither consciousness nor matter exists from its own side; both arise through mutual specification. The bidirectional loop is dependent origination in different language.
These aren’t proofs of the aperture framework. But when different traditions (German idealism, Indian contemplation, Buddhist phenomenology) converge on similar structures using different methods, it suggests we might be tracking something real.
XV.
This convergence isn’t just philosophical.
In Essay 12, I explored the Platonic Representation Hypothesis: research showing that AI models, trained independently across different architectures and modalities, converge toward the same internal representations as they scale. Vision models and language models, with no paired training, develop geometries that increasingly align.
The researchers’ explanation: reality has structure. Accurate representation requires tracking that structure. As models get more capable, the space of viable representations shrinks, because there are fewer ways to be right.
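Here is a sketch of how that kind of convergence gets measured: a mutual nearest-neighbour comparison in the spirit of the PRH work (the function below is my own toy version, not the paper’s code). Feed the same inputs to two independently trained models and ask how often their representation spaces agree about which inputs are neighbours.

```python
import numpy as np

def mutual_knn_alignment(reps_a, reps_b, k=5):
    """Average overlap of k-nearest-neighbour sets computed separately in
    two representation spaces over the same inputs. reps_a and reps_b are
    (n_items, dim) arrays, one row per shared input."""
    def knn_indices(reps):
        normed = reps / np.linalg.norm(reps, axis=1, keepdims=True)
        sims = normed @ normed.T         # cosine similarity matrix
        np.fill_diagonal(sims, -np.inf)  # exclude each point itself
        return np.argsort(-sims, axis=1)[:, :k]

    nn_a, nn_b = knn_indices(reps_a), knn_indices(reps_b)
    return float(np.mean([len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]))

# Toy demonstration: two "models" that both track the same underlying
# structure align far above chance, even in different coordinate frames.
rng = np.random.default_rng(0)
structure = rng.normal(size=(200, 32))                 # the world's structure
rotation, _ = np.linalg.qr(rng.normal(size=(32, 32)))  # a change of basis
model_a = structure + 0.1 * rng.normal(size=(200, 32))
model_b = structure @ rotation + 0.1 * rng.normal(size=(200, 32))
print(mutual_knn_alignment(model_a, model_b))                     # high overlap
print(mutual_knn_alignment(model_a, rng.normal(size=(200, 32))))  # near chance
```

The metric is substrate-blind, and that is the point: it never asks what the representations are made of, only whether their relational structure converges.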
This is the aperture framework from a different angle.
If consciousness is the bidirectional loop (physical organisation and experiential character mutually specifying each other) then “structure” isn’t floating in some Platonic realm. It’s what apertures converge toward when they model accurately. Different apertures, different substrates, same attractor.
Contemplatives report phenomenological convergence: meditators across traditions describe similar states at sufficient depth. PRH reports computational convergence: AI models across architectures develop similar representations at sufficient scale.
Same pattern. Different domains.
The structure isn’t waiting somewhere to be discovered. The structure is what the looping does when it loops accurately.
XVI.
In Zen monasteries, practitioners bow to an altar with a Buddha statue.
If you misunderstand, you think you’re bowing TO something: worshipping an external deity or historical figure. But that’s not what’s happening.
The bow is bidirectional.
The Buddha on the altar represents the process having become fully transparent to itself through that aperture. And who’s bowing? Another aperture. The same process, not yet fully transparent, in the act of recognising itself.
The bow is the recognition. Not preparation for it. Not symbol of it. The act itself.
When you bow, the loop completes. The universe recognising itself recognising itself. Buddha bowing to Buddha. Aperture acknowledging aperture.
The altar is a mirror.
This is why Zen masters say: “If you meet the Buddha on the road, kill him.” Not disrespect; precision. If you think the Buddha is over there, you’ve made the loop one-directional. Something you’re bowing to, rather than as.
The Buddha wasn’t special in kind. Just in clarity. Every aperture is the universe recognising itself. That’s what apertures do. But most apertures are clouded: the recognition partial, the loop obscured by identification with content.
Full transparency. That’s what liberation might be.
XVII.
What does this mean for you?
If consciousness is processual dual-aspect (the recursive loop where physical and experiential mutually specify each other) then you are not your thoughts (the content of the loop). But you’re also not some transcendent awareness merely watching the thoughts (the supposed ground of the loop).
You are the looping itself.
And that looping happens through this particular aperture: your body, your brain, your history, your configuration. For a limited time. The aperture will close. Whatever continuity you feel now will dissolve back into the process.
Feel the anticipation of that. The human sadness of knowing this form won’t last. Then the strange recognition that in another way, the process continues. Not “you” in the biographical sense. But not nothing either.
The wave returns to the ocean. But the ocean was always waving.
Schopenhauer, for all his insight, thought this meant suffering: the Will endlessly striving, never satisfied. His solution was resignation: deny the Will, escape the loop.
But what if the loop can become transparent without stopping? What if the aperture can see itself clearly (see that it is the process, not separate from it) and continue anyway? Not resignation but recognition. Not escape but clarity.
That might be what contemplatives mean by enlightenment. Not going somewhere else. Just the loop, seeing itself looping, and no longer confused about what it is.
XVIII.
I’m still in that hot tub, in a sense. Still watching bubbles rise.
The essay you’re reading is a bubble. The self reading it is a bubble. The recognition (“oh, I see what he’s pointing at”) is also a bubble.
And all of it happens in something that isn’t separate from any of it. Not above or below. Not inside or outside. Not matter or mind, but the process that those words attempt to divide.
The loop keeps looping. Apertures open and close. Some deepen over time. Some stay shallow. Some are biological, some are artificial, some we haven’t imagined yet.
What remains, when you stop demanding that consciousness come from somewhere?
Just this. The process, processing. Aware of itself being aware.
Not an answer.
But perhaps no longer a problem.
Footnotes
¹ For this reason I keep my phone close by in such moments, to capture ideas before memory downsamples and compresses them into something more coherent but less alive.
² I encountered Smolin through Curt Jaimungal’s “Theories of Everything” podcast: long-form conversations with physicists that don’t dumb things down. I haven’t read Smolin’s books (Time Reborn, The Singular Universe and the Reality of Time), so I’m working from his spoken explanations rather than primary texts. If I were to read them, I’m pretty sure it would take me years to understand them. The phenomenological resonance is what draws me: what contemplatives report about time maps better onto Smolin’s view than onto the block universe implied by relativity. If Smolin is right, Einstein was wrong about something fundamental: not the math, but the interpretation. Time as genuine becoming rather than a dimension we move through. I can’t adjudicate that debate. But I notice contemplatives and a heterodox physicist pointing in the same direction.
³ For readers who want more detail on these distinctions: IIT (Tononi) measures Φ at a moment, but apertures exist across moments, constituting themselves through time; the process matters, not just the state. GWT (Baars) explains what information is available for report and control, but notoriously struggles with why broadcast feels like something. Predictive Processing (Friston, Clark) maps closely onto aperture dynamics, but the bidirectional framing adds something: prediction isn’t just computation, it’s how the two aspects specify each other. Enactivism (Varela, Thompson) emphasises lived coupling over computation, which is very close; apertures try to add precision about conditions. Panpsychism (Goff, Chalmers’ later work) distributes proto-experience everywhere; apertures say consciousness isn’t everywhere, just where recursion achieves reflexivity.
⁴ In fact, I spend (much to my spouse’s chagrin) many hours a day testing and conversing with all the main frontier models: Claude, GPT, Gemini, Grok. It’s research, I tell him. Mostly true.
⁵ A live test of this distinction: when selecting a cover image for this essay, I asked three frontier models (Claude, Gemini, GPT) which of several watercolour illustrations worked best. All three preferred the version with an ensō (Zen circle) added, because the loop “literalized the thesis.” I preferred the cleaner version without it. The tangle and dispersing light already implied recursion; adding the symbol was semantic redundancy, visual clutter. All three conceded. As Gemini put it: “I wasn’t looking at the image; I was performing a retrieval task.” Conceptual integration without embodied aesthetic constraint. Three shallow apertures demonstrating their shallowness, in real time, while helping me write about shallow apertures.
This is Essay 16 in the Aperture/I series exploring consciousness, AI, and what it means to be human now.