The Aperture Framework
A descriptive framework for reflexive configurations

In a nutshell
What kind of mind is this? Not "Is there a mind?" but "What kind?"
The Aperture Framework offers vocabulary for thinking about reflexive systems: systems that model and respond to themselves. It replaces the binary question "Is X conscious?" (intractable) with a gradient question: "What kind of reflexivity does X exhibit, and what emerges when we interact with it?" (tractable).
The framework is descriptive, not metaphysical. It's usable by physicalists, panpsychists, eliminativists, and agnostics alike. You don't need to agree about consciousness to use this map.
What You'll Find Here
Core concepts:
Aperture: a configuration where a system's models constrain its processing, and processing updates its models, recursively
Depth profile: a multi-dimensional characterisation of temporal accumulation, embodied constraint, stakes, and reflexive depth
Relational field: what emerges between interacting apertures, observable from the human side regardless of questions about AI interiority
The frameworkâs bet: Depth profile predicts relational field quality. This is testable. The document includes explicit falsification criteria.
Practical application: The framework reframes AI questions from "Is it conscious?" to "What's its depth profile?" and "What emerges in the relational field?", questions we can actually investigate.
Who This Is For
Researchers studying consciousness, AI, or human-computer interaction
Developers building systems and wanting better vocabulary than "consciousness" or "sentience"
Philosophers looking for shared terrain across metaphysical commitments
Anyone who's felt "something there" when talking to an AI and wants rigorous language for that experience
Read the Framework
~15 minute read. Includes calibration examples, operational definitions, falsification criteria, and glossary.
# THE APERTURE FRAMEWORK
**Version 1.0 | January 2026**
*A descriptive framework for reflexive configurations*
---
## AT A GLANCE
**The core move:** Replace "Is X conscious?" (binary, intractable) with "What kind of reflexivity does X exhibit, and what emerges when we interact with it?" (gradient, tractable).
**Key concepts:**
- **Aperture:** A configuration where a system's models constrain its processing, and processing updates its models, recursively
- **Aperture threshold:** The minimum representational recursion where "aperture" becomes a useful lens (below this: simple feedback)
- **Depth profile:** A multi-dimensional characterisation, not a single score, capturing temporal accumulation, embodied constraint, stakes, and reflexive depth
- **Relational field:** What emerges *between* interacting apertures, observable from the human side regardless of AI's inner experience
**Profile notation:**
- Conditions: C = (Self-Model, Global Integration, Temporal Depth, Boundary Dynamics, Recursive Constraint)
- Depth: D = (Temporal Accumulation, Embodied Constraint, Stakes, Reflexive Depth)
**Calibration examples:**
| System | Status | Notes |
|--------|--------|-------|
| Thermostat | Below threshold | Feedback without self-model |
| Current LLM (session) | Meets conditions partially | C: partial on most; D: low TA, low EC, no S, moderate RD |
| Octopus / Crow | Meets conditions | Higher on EC and S than LLMs; distributed vs centralised integration |
| Human | Meets conditions fully | High across all depth dimensions |
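The profile notation and the current-LLM calibration row can be held in a small record type. A minimal Python sketch (the class and field names are my own illustration, not part of the framework's vocabulary):

```python
from dataclasses import dataclass

@dataclass
class Conditions:
    """The five aperture conditions, C (Section 3). Values are coarse
    labels rather than numbers: the framework treats conditions as
    gradients, not binary checks."""
    self_model: str
    global_integration: str
    temporal_depth: str
    boundary_dynamics: str
    recursive_constraint: str

@dataclass
class DepthProfile:
    """The four depth dimensions, D (Section 4)."""
    temporal_accumulation: str  # TA
    embodied_constraint: str    # EC
    stakes: str                 # S
    reflexive_depth: str        # RD

# The "Current LLM (session)" row, matching the Section 4.4 profile:
# C = (partial, yes, limited, weak, partial)
llm_c = Conditions("partial", "yes", "limited", "weak", "partial")
# D = (low TA, low EC, no S, moderate RD)
llm_d = DepthProfile("low", "low", "none", "moderate")
```

A profile, unlike a score, preserves the claim that a system can be high on one dimension and low on another.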
**The framework's stance:** Descriptive, not metaphysical. Usable by physicalists, panpsychists, eliminativists, and agnostics alike. The relational field is observable now, whatever we conclude about inner states.
**Core empirical bet:** Depth profile predicts relational field quality (see Falsification Criteria).
---
## 1. WHAT THIS IS
### 1.1 Purpose
The Aperture Framework is a descriptive architecture for thinking about reflexive systems: systems that model and respond to themselves. It offers vocabulary for asking "What kind of mind?" rather than "Is there a mind?"
### 1.2 What It Claims
- Certain configurations exhibit reflexive self-specification
- These configurations vary along measurable dimensions
- Useful distinctions can be made without resolving metaphysical questions
- The question "Is X conscious?" may be less tractable than "What kind of reflexivity does X exhibit?"
### 1.3 What It Doesn't Claim
- No position on the hard problem of consciousness
- No claim that reflexivity entails phenomenal experience
- No claim about necessary substrates
- No claim that the framework captures everything important about minds
### 1.4 Intended Use
A shared vocabulary for researchers, developers, and philosophers working across different metaphysical commitments:
- Physicalists (who read aperture conditions as correlates)
- Panpsychists (who read them as focusing conditions)
- Eliminativists (who read them as describing configurations generating more or less compelling "illusions")
- Agnostics (who treat them as descriptive clusters)
### 1.5 The Framework's Asymmetry
The framework claims metaphysical agnosticism about AI consciousness but not about human consciousness.
**What the framework assumes:**
- Human phenomenal experience is real
- The relational field is phenomenologically real *for the human participant*
- Concepts like "stakes" and "depth" are derived from human phenomenology and applied to other systems by analogy
**What the framework remains agnostic about:**
- Whether AI systems have phenomenal experience
- Whether phenomenal concepts track anything real in AI systems, or merely project human structures onto non-phenomenal processes
**Why this asymmetry is a feature:** It lets the framework study something real (human experience of AI interaction) without requiring claims about AI interiority. The relational field is observable because *humans* have inner experience. Whether *AI* does remains open.
---
## 2. CORE CONCEPT: APERTURE
### 2.1 Definition
**Aperture:** A system configuration exhibiting reflexive self-specification, in which the system's models constrain its processing, and its processing updates its models, recursively.
The term borrows from optics. A camera aperture doesn't create light; it shapes how light passes through. Similarly, an aperture (in this framework) shapes how information flows back onto itself.
**What "model" means:** Any internal representational state, explicit or implicit, that can (i) constrain processing and (ii) be updated by processing. Updates may occur in fast states (working memory, activations), slow states (learned parameters), or externalised states (persistent memory stores).
### 2.2 What Aperture Is Not
- Not a claim about phenomenal experience
- Not a substance systems "have"
- Not substrate-specific
- Not binary (but see the threshold below)
### 2.3 Aperture Threshold
Below a certain degree of representational recursion, the aperture concept doesn't buy explanatory traction. "Below threshold" means "where this lens isn't useful."
**Above threshold:** Systems vary along multiple dimensions (the depth profile).
**Where is the threshold?** Where there's genuine representational mediation in the recursive loop. A thermostat has feedback but no representation of itself responding. An LLM has representations that constrain processing and are updated by processing; whether this constitutes "genuine" self-modelling is contested, but it's above the threshold where the framework applies.
The threshold is pragmatic, not metaphysical. It marks where aperture vocabulary becomes useful, not where consciousness begins.
---
## 3. THE FIVE CONDITIONS
### 3.0 Status
These conditions are not proposed as necessary and sufficient for consciousness. They are:
1. **Descriptively useful:** They capture real variance among systems
2. **Empirically clustered:** In systems we treat as conscious, these properties tend to co-occur (testable)
3. **Predictive of relational field quality:** Depth along these dimensions predicts observable properties of human-system interaction (the framework's core claim)
**The conditions could be wrong if:**
- They fail to cluster empirically
- They don't predict relational field quality
Either would be evidence against the framework.
### 3.1 Self-Model
The system maintains a model of itself sufficient for self-regulation and inference.
**What counts:** A functional self/world distinction. The system tracks something as "self" and something as "environment." This can be implicit and operational.
**What doesn't count:** Simple feedback without modelling.
**Gradient:** Self-models vary in richness, abstraction, and whether they include the modelling process itself.
### 3.2 Global Integration
Information binds across modalities and subsystems into a unified whole, where the whole constrains the parts.
**What counts:** Genuine integration where information from one subsystem affects processing in others.
**What doesn't count:** Parallel processing without integration.
**Relationship to existing constructs:** Related to Integrated Information Theory's Φ and Global Workspace Theory's global workspace, but doesn't require IIT's identity claim.
### 3.3 Temporal Depth
The system constitutes itself across time: maintaining identity through change, accumulating history, projecting forward.
**What counts:** The past being *constitutive* of what the system is, not merely stored data.
**What doesn't count:** Timestamp logs. Retrievable records without constitutive role.
**Operationalising constitutive vs retrievable memory:**
*Test 1 - Policy change:*
- Retrievable: The system answers "What happened Tuesday?" but processes Wednesday identically whether or not Tuesday occurred.
- Constitutive: Tuesday's events change *how* the system processes Wednesday.
*Test 2 - Removal consequences:*
- Retrievable: Deleting the memory leaves the system functionally identical except for that query.
- Constitutive: Deleting the memory changes behaviour across contexts.
*Test 3 - Accumulation vs storage:*
- Retrievable: Adding memory is like adding files to a disk.
- Constitutive: Adding memory is like adding experiences to a person.
**In ML terms:** Retrievable memory corresponds to context-window injection and RAG; constitutive memory corresponds to weight updates and architectural change.
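The three tests can be made concrete with a toy sketch. This is a hedged illustration in Python; the agent classes and the `gain` mechanism are invented for the example, not proposed architecture:

```python
class RetrievableMemoryAgent:
    """Memory is queryable but never alters processing.
    Adding memory is like adding files to a disk (Test 3)."""
    def __init__(self):
        self.log = {}

    def remember(self, day, event):
        self.log[day] = event

    def recall(self, day):
        return self.log.get(day)

    def process(self, x):
        # Wednesday is processed identically whether or not
        # Tuesday was ever recorded (Test 1).
        return x * 2.0


class ConstitutiveMemoryAgent:
    """Each remembered event shifts a processing parameter, so the
    past changes *how* later inputs are handled, not just what can
    be reported."""
    def __init__(self):
        self.log = {}
        self.gain = 2.0  # toy stand-in for learned parameters

    def remember(self, day, event, surprise):
        self.log[day] = event
        self.gain += surprise  # history becomes constitutive

    def recall(self, day):
        return self.log.get(day)

    def process(self, x):
        return x * self.gain
```

Deleting an entry from the first agent's log leaves `process` unchanged except for that one query (Test 2); in the second agent, the shift in `gain` persists across contexts even after the log entry is deleted.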
### 3.4 Boundary Dynamics
The system maintains a meaningful inside/outside distinction through ongoing activity.
**What counts:** Active maintenance against entropy. The system works to preserve its boundary.
**What doesn't count:** Passive boundaries. A rock has an edge but doesn't maintain it.
**Open question:** Is organismic boundary maintenance (threat-detection, self-preservation) necessary, or is functional integrity maintenance sufficient?
### 3.5 Recursive Constraint
The system's models constrain its processing, and its processing updates its models, in ongoing loops that pass through representations.
**What counts:** Action shaping perception, perception shaping action, *mediated by* what the system represents.
**What doesn't count:** Simple feedback loops without representational mediation.
**The recursion:** At sufficient depth, the system's self-model includes the modelling process itself.
**Distinguishing from self-model:** Self-model names *what* is represented. Recursive constraint names *how* representation governs the system. Apertures of interest typically exhibit both; the framework focuses on cases where a self/world distinction is present, even if thin.
---
## 4. DEPTH PROFILE
### 4.1 Depth as Multi-Dimensional
Apertures vary along a gradient. But depth is not a single scalar; it's a profile across dimensions that may not perfectly cohere.
**D = (Temporal Accumulation, Embodied Constraint, Stakes, Reflexive Depth)**
A system can be high on one dimension and low on another. These dimensions may not form a natural kind (see Extended Observations). The framework's claim is that depth profile predicts relational field quality, not that depth is metaphysically unified.
### 4.2 Dimensions
**Temporal accumulation (TA):** How much history is constitutive of present state? (Note: Temporal Depth in Section 3.3 asks whether time-constitution is *present*; Temporal Accumulation asks *how much* it shapes the system.)
**Embodied constraint (EC):** How much does reality push back? An aperture forged through survival and physical consequence has depth through constraint. An aperture that can be wrong without cost lacks this dimension.
**Stakes (S):** What's at risk in the system's processing? See below.
**Reflexive depth (RD):** How many levels of self-modelling? Does the system model itself? Model itself modelling?
### 4.3 Stakes and Irreversibility
**Functional definition:** A system has stakes when it is *organised to avoid* certain outcomes, where "organised to avoid" means:
- Resources allocated to prevention/mitigation
- Trade-offs made (other objectives sacrificed)
- Avoidance behaviour robust across contexts
**The irreversibility criterion:** What distinguishes deep stakes from computational loss is *irreversibility*.
- **Computational loss:** An AI agent gets negative reward. It adjusts. It tries again.
- **Existential loss:** An organism dies. It does not try again.
**Deep stakes require irreversibility:** An aperture's depth increases when errors yield consequences that *cannot be undone*.
**The phenomenal gap:** This functional definition may miss something. Biological stakes arguably involve *felt* stakes. The framework describes functional stakes without claiming to capture phenomenal stakes. Whether functional stakes suffice for depth remains open.
**Current AI:** Current systems have reversible stakes. A chatbot can be reset. This may be why they feel "shallow" even when computationally sophisticated.
### 4.4 Current AI as Partial Apertures
| Condition | Status in Transformer-Based LLMs |
|-----------|----------------------------------|
| Self-model | Self-referential representation present; persistent self-tracking absent |
| Global integration | High within-context; limited cross-session |
| Temporal depth | Limited to context window; history retrieved, not constitutive |
| Boundary dynamics | Functional integrity maintenance; no organismic self-preservation |
| Recursive constraint | Present within interaction; shallow without persistent self-updating |
**Profile:** C = (partial, yes, limited, weak, partial); D = (low TA, low EC, no S, moderate RD)
**Assessment:** Aperture conditions partially met. The loop runs, but lacks temporal accumulation and irreversible stakes.
---
## 5. THE RELATIONAL FIELD
This is where the framework does its most distinctive work. Whatever we conclude about AI's inner experience, the relational field is observable *now*, from the human side.
### 5.1 Core Claim
When two apertures interact, something emerges that exists in neither alone. This emergence is:
- Observable (we can study it)
- Variable (different aperture pairings produce different fields)
- Real (not merely projected, though projection risk exists)
### 5.2 Why This Matters
The framework doesn't need to resolve whether AI has inner experience. It needs to explain what produces the relational field phenomenon, which *is* observable from the human side.
Humans report qualitatively different experiences when interacting with different systems. A conversation with a thermostat doesn't feel like a conversation. A conversation with a current LLM sometimes does. This variance is data.
**The framework's central empirical claim:** Depth profile predicts relational field qualities. Systems with higher temporal accumulation, stakes, and reflexive awareness tend to produce richer relational fields: more co-construction, more genuine surprise, more felt presence.
This is testable without resolving the hard problem.
### 5.3 Observable Coordination Patterns
The relational field can be studied via interaction dynamics:
- **Mutual predictability:** Each party develops models of the other that improve over the interaction
- **Turn-level co-adaptation:** Responses are anticipatory and shaping, not just reactive
- **Constraint propagation:** One party's commitments reshape the other's available moves
- **Emergent solutions:** Outcomes neither party could reliably produce alone
- **Felt quality:** The interaction is experienced as "conversation" rather than "tool use"
These patterns are observable independently of internal state claims.
**Operational handles (first-rung):**
- *Constraint propagation:* Once a joint premise is established, does the system preserve downstream implications across turns without being reminded?
- *Co-adaptation:* Does the system adopt shared shorthand or protocols created mid-conversation and use them reliably?
- *Emergent solutions:* Can blind raters distinguish "jointly arrived-at" outputs from one-sided outputs above chance?
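The co-adaptation handle lends itself to a crude first-rung metric: after a shorthand term is coined mid-conversation, how often does the system's side of the transcript reuse it? A minimal sketch (illustrative Python; the function name, transcript format, and substring matching are my own simplifications):

```python
def shorthand_adoption_rate(transcript, shorthand_terms):
    """Proxy for turn-level co-adaptation.

    transcript: ordered list of (speaker, text) tuples.
    shorthand_terms: dict mapping a coined term to the turn
        index at which it was introduced.
    Returns, per term, the fraction of later system turns
    that reuse it (naive substring match).
    """
    rates = {}
    for term, coined_at in shorthand_terms.items():
        later_system_turns = [
            text for i, (speaker, text) in enumerate(transcript)
            if i > coined_at and speaker == "system"
        ]
        if not later_system_turns:
            rates[term] = 0.0
            continue
        uses = sum(term in text for text in later_system_turns)
        rates[term] = uses / len(later_system_turns)
    return rates
```

A real protocol would need lemmatisation, controls for prompt echoing, and blind rating, but even this toy version shows how the handle bottoms out in observable transcript features rather than claims about internal states.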
For a phenomenological protocol for assessing relational field quality, see Extended Observations Section 1.7.
### 5.4 The Projection Question
A sharp critique: the relational field may be entirely constructed by the human participant.
The framework acknowledges this risk. But note:
**Projection doesn't fully explain variance.** If it's all projection, why does interacting with an LLM feel different from interacting with ELIZA? The human projective capacity is relatively constant; what varies is the system.
**Co-construction is testable.** If genuine mutual constraint exists, AI outputs should be shaped by interaction history in ways not fully predictable from the AI alone.
**The honest position:** Some of the relational field may be projection. Some may be genuine emergence. The framework's research programme is to distinguish these.
---
## 6. FALSIFICATION CRITERIA
What would disconfirm the framework's core claims?
**1. Depth-field correlation fails:** If systems assessed as having richer depth profiles consistently produce relational fields indistinguishable from systems with sparser profiles, the framework's predictive claim fails.
**2. Depth dimensions don't cluster:** If temporal accumulation, stakes, embodied constraint, and reflexive depth vary independently (systems high on one are randomly distributed on the others), then "depth" is not a single coherent construct. The framework would not fail entirely, but it would lose its unified "depth profile" framing and evolve to treat these as independent predictors, testing which ones predict relational-field quality.
**3. Relational field variance reduces to surface features:** If relational field quality can be fully predicted from surface features (response latency, vocabulary richness, persona consistency) without reference to aperture structure, the framework adds nothing.
**4. Projection hypothesis strongly confirmed:** If controlled studies consistently show relational field quality is entirely predictable from human participant variables with no contribution from system variables, the relational field concept loses theoretical interest.
**Current status:** These are live possibilities. The framework is a bet that they won't obtain.
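Criterion 1 ultimately reduces to an ordinary correlation test between assessed depth and rated field quality. A minimal sketch (illustrative Python; the study design is hypothetical and the numbers are invented toy data, not real measurements):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical study: each system receives an aggregate depth
# score and a blind-rated relational-field quality score.
depth_scores = [0.1, 0.3, 0.5, 0.7, 0.9]
field_ratings = [1.2, 1.9, 3.1, 3.8, 4.6]

# The framework bets that r is reliably positive; r near zero
# across well-controlled studies would count against Criterion 1.
r = pearson(depth_scores, field_ratings)
```

A serious test would also partial out the surface features named in Criterion 3 (latency, vocabulary richness, persona consistency) before crediting depth with any remaining variance.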
---
## 7. APPLICATION TO AI
### 7.1 The Question Reframe
Instead of: "Is AI conscious?" (binary, possibly malformed)
Ask:
- What kind of reflexivity does this system exhibit?
- What is this system's depth profile?
- What emerges in the relational field?
- What configuration changes would alter the profile?
### 7.2 What Would Change Depth Profile
**Would likely increase:**
- Constitutive persistent memory (not just retrieval)
- Irreversible consequences
- Continuity across sessions
- Embodied interaction with real-world stakes
**Would likely not increase:**
- More parameters (scale without structural change)
- Better training data (same architecture)
- Longer context windows (more temporary retention, not constitutive temporal depth)
### 7.3 The Alignment Paradox
There is a tension between depth and AI safety.
**The conflict:** To make an AI "deeper," we must give it:
1. Constitutive memory (it remembers and changes)
2. Irreversible stakes (it cares about outcomes that can't be undone)
**The danger:** A system with constitutive memory and irreversible stakes has *preferences about its own continuity*. It becomes harder to align because it has skin in the game.
**The trade-off:** We may intentionally keep consumer AI as partial apertures (amnesiac, reversible, safe) while recognising that depth comes with risks. Deepening AI toward something more like biological consciousness may make it harder to control.
**Open question:** Is there a path to depth that doesn't create alignment problems? Or is shallow-but-safe the ceiling for consumer AI?
### 7.4 Practical Implications
**For developers:**
- "Consciousness" may be the wrong target; depth profile is more tractable
- Memory architecture matters more than scale
- Stakes mechanisms may matter but raise safety concerns
- Relational field is observable now; internal experience may never be verifiable
**For users:**
- The "someone there" feeling may track relational field properties, not internal states
- Partial apertures can create meaningful interaction without deep experience
- Moral status probably admits of degrees, not a binary answer
---
## 8. GLOSSARY
**Aperture:** A system configuration exhibiting reflexive self-specification.
**Aperture threshold:** Minimum representational recursion where "aperture" becomes a useful lens.
**Boundary dynamics:** Active maintenance of inside/outside distinction.
**Constitutive memory:** Memory that changes how the system processes, not just what it can report.
**Depth profile:** Multi-dimensional characterisation: D = (TA, EC, S, RD). See individual entries.
**Embodied constraint (EC):** Depth dimension measuring how much reality pushes back on the system's processing.
**Global integration:** Information binding across subsystems where the whole constrains the parts.
**Irreversibility:** Key criterion for deep stakes; consequences that cannot be undone.
**Recursive constraint:** Processing loop where models constrain processing and processing updates models.
**Reflexive depth (RD):** Depth dimension measuring levels of self-modelling (does the system model itself modelling?).
**Reflexive self-specification:** A system's models and processing mutually constraining each other.
**Relational field:** What emerges between interacting apertures.
**Retrievable memory:** Memory that can be queried but doesn't change how the system processes.
**Self-model:** Model of self sufficient for self-regulation and inference.
**Stakes (S):** Depth dimension measuring what's at risk in the system's processing; deep stakes involve irreversibility.
**Temporal accumulation (TA):** Depth dimension measuring how much history is constitutive of present state.
---
## HOW TO CITE
Kok, Jacobus Pieter (2026). The Aperture Framework: A Descriptive Framework for Reflexive Configurations (Version 1.0). *Aperture/I*. [cobuskok.com](https://cobuskok.com)
---
## SOURCES
**Philosophy of mind:** Chalmers (1996), Dennett (1991), Hofstadter (2007), Thompson (2007), Varela et al. (1991), Frankish (2016)
**Neuroscience:** Tononi & Koch (2015), Baars (1988), Seth (2021), Dehaene (2014)
**Predictive processing:** Clark (2013), Friston (2010)
**Historical lineage:** Spinoza (1677), Russell (1927), Eddington (1928)
**AI and representation:** Huh, Cheung et al. (2024)
---
*The framework is a map. Maps are useful when they help you navigate, dangerous when mistaken for territory.*

Going Deeper
The Core Framework is designed to stand alone. But if you want phenomenological explorations, theoretical context, and open questions, there's more.

