Context Rediscovered
Something significant is happening in computing. After decades of prioritizing data structures, logical operations, and control flow, the field is rediscovering the importance of context: the fabric that gives meaning to information. The rapid adoption of large language models has exposed both how essential context is to meaning-making and how brittle our computational representations of it have been.
Language, reasoning, and perception aren’t isolated phenomena; they’re contextual ones. Every signal, symbol, and interpretation is meaningful within a web of historical, situational, social, and intentional relations. Computing has considered context before, but usually as something secondary, a config variable or an environment parameter, not as the constitutive dimension of meaning.

The Great Reduction
Human societies evolved through shared meaning-making. Context enabled collective interpretation: we read gestures, histories, and relationships to understand not only what someone said but what they meant. Every conversation unfolds within nested layers of context: the immediate situation, the ongoing relationship, the cultural backdrop, and the shared purposes that make interaction coherent.
Early computing simplified this complexity by reducing the interpretive richness of human life into discrete, executable representations. This reduction, though not a failure, was a practical necessity that unlocked computing’s universal power. As computational systems spread into organizations, this engineering logic deepened into an architectural paradigm.
The first wave of artificial intelligence, known as Good Old-Fashioned AI (GOFAI), fully embraced this decontextualized model. These systems were powerful within their narrow domains but notoriously brittle, failing catastrophically when faced with situations that fell outside their pre-programmed knowledge. Meaning became implicit, residing in the human operators who knew what the data meant rather than in the systems themselves.
Notable exceptions include expert systems with domain knowledge, early AI research on situated understanding in constrained environments, and context-aware computing research on environmental adaptation. However, these efforts were specialized and didn’t establish fundamental architectural principles. As a result, there was a persistent contextual gap between human meaning and computational representation, with interpretation occurring externally to the machine.
Contextual Computing
Explicit attention to context emerged in the 1990s with research on context-aware and ubiquitous computing. These efforts promised systems that could sense and adapt to their surroundings, conceptualizing context as a set of measurable environmental data: location, device capabilities, time of day, nearby users, and ambient conditions such as lighting and noise levels.
This was valuable but limited. These approaches focused primarily on environmental context—what surrounds the user—rather than interpretive context—what the situation means to them. The question they addressed was “What conditions exist?” rather than “What does this situation signify?”
This distinction matters profoundly. Environmental context involves sensing and classification; interpretive context requires understanding significance and relevance. A smartphone may detect that a user has entered a conference room—an environmental fact. But whether that meeting is routine, decisive, or emotionally charged is a matter of situational meaning—a layer these systems couldn’t access.
Context-aware systems have treated context as an input to behavior rather than as the medium through which behavior becomes meaningful. Context remained something to be detected and reacted to, not something that fundamentally shaped what actions and information meant. The approach captured the physical situation but failed to grasp its social and intentional significance.
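To make the distinction concrete, here is a minimal sketch of that gap. Every name in it is hypothetical rather than drawn from any real framework: the first structure is the kind of flat, sensor-derived record a classic context-aware system could populate; the second names what no sensor can fill in.

```python
from dataclasses import dataclass

# What 1990s-style context-aware systems modeled: flat, measurable facts.
# (Field names are illustrative, not from any particular framework.)
@dataclass
class EnvironmentalContext:
    location: str           # e.g. "conference_room_3"
    time_of_day: str        # e.g. "14:00"
    nearby_users: list[str]
    noise_level_db: float

# What they could not model: the significance of those facts.
# These fields have no sensor; they require interpretation.
@dataclass
class InterpretiveContext:
    meeting_stakes: str     # "routine" | "decisive" | "emotionally charged"
    relevance_to_goals: str
    social_tone: str

env = EnvironmentalContext(
    location="conference_room_3",
    time_of_day="14:00",
    nearby_users=["alice", "bob"],
    noise_level_db=42.0,
)
# A sensor can populate `env`; nothing in `env` answers the question
# "is this meeting routine or decisive?" -- that is the interpretive gap.
```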
Generative AI
The recent rise of large language models (LLMs) has highlighted the significance of context in computational interactions. These systems demonstrate, through their successful performance rather than explicit design, that the same words can have different interpretations depending on the context, and that coherent interaction relies on maintaining contextual continuity across turns.
LLMs don’t genuinely comprehend context. They’re sophisticated pattern-matching systems that simulate context sensitivity through scale, not genuine situational awareness. Yet their success demonstrates that context is the foundation of meaning-making, and their limitations reveal what’s still lacking. Their failures often stem from weak pragmatic understanding, the ability to infer intended meaning beyond literal words. Grice’s Cooperative Principle describes effective communication as relying on four shared maxims:
- Quantity: be as informative as required, and no more
- Quality: be truthful; don’t assert what you lack evidence for
- Relation: be relevant
- Manner: be clear, brief, and orderly
When an LLM hallucinates, it violates Quality; when it pads an answer with irrelevant material, it violates Quantity and Relation. These aren’t just linguistic errors but failures of social reasoning.
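Purely as an illustration, the maxims can be made operational by treating each one as a named failure mode and tagging model outputs against them. The taxonomy below is hypothetical; a real evaluation pipeline would need human or model judges to assign the labels.

```python
from enum import Enum

class Maxim(Enum):
    QUANTITY = "be as informative as required, no more"
    QUALITY = "be truthful; do not assert what you lack evidence for"
    RELATION = "be relevant"
    MANNER = "be clear, brief, and orderly"

# Illustrative mapping from common LLM failure modes to violated maxims.
FAILURE_MODES = {
    "hallucinated_fact":  [Maxim.QUALITY],
    "off_topic_answer":   [Maxim.RELATION],
    "padded_rambling":    [Maxim.QUANTITY, Maxim.MANNER],
    "ambiguous_phrasing": [Maxim.MANNER],
}

def violated_maxims(failure_mode: str) -> list[Maxim]:
    """Look up which Gricean maxims a given failure mode breaks."""
    return FAILURE_MODES.get(failure_mode, [])

print(violated_maxims("hallucinated_fact"))  # [Maxim.QUALITY]
```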
Most LLMs also lack persistent memory: they operate statelessly, discarding context after each interaction, which prevents them from accumulating knowledge over time. Positional bias further hinders their performance, as the “lost in the middle” phenomenon demonstrates: information buried deep in a long context window is used less reliably than information near its beginning or end. These gaps point toward a deeper contextual intelligence, one that requires persistent memory, robust grounding, and genuine situational reasoning.
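A minimal sketch of what persistent memory could mean architecturally: a wrapper that stores each exchange and prepends retrieved history to the next prompt. Everything here is illustrative; `llm_complete` is a stand-in for any chat-completion API, and the word-overlap retrieval is deliberately naive where a real system would use embeddings.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Keeps past exchanges so context survives across stateless calls."""
    entries: list[str] = field(default_factory=list)

    def add(self, text: str) -> None:
        self.entries.append(text)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Naive relevance: rank stored entries by word overlap with the query.
        q = set(query.lower().split())
        scored = sorted(self.entries,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return scored[:k]

def answer(memory: MemoryStore, user_msg: str, llm_complete) -> str:
    """Assemble a prompt from retrieved memory, call the model, store the turn."""
    recalled = memory.retrieve(user_msg)
    prompt = "\n".join(["Relevant history:"] + recalled +
                       ["User: " + user_msg])
    reply = llm_complete(prompt)  # stand-in for a real chat API call
    memory.add(f"User: {user_msg}\nAssistant: {reply}")
    return reply
```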
Beyond Flatness
To move forward, we must reconceptualize context not as a single layer but as a multi-dimensional, hierarchical structure. Contexts are not static frames but dynamic, nested environments of meaning that shape and constrain interpretation.
Building on work in pragmatics (Grice), situated cognition (Clark), and phenomenology (Heidegger), we can outline several key dimensions through which context operates:
Temporal Context: The history and trajectory of events leading to the current moment.
How did we arrive here?
What sequence are we within?
Spatial/Material Context: The configuration of physical entities, artifacts, and affordances in an environment.
What resources and constraints does this setting provide?
Social/Relational Context: The roles, relationships, power dynamics, and shared history among participants.
Who are we to each other?
What trust and obligations exist?
Intentional Context: The goals, motives, and expectations of the agents involved.
Why are we doing this?
What are we trying to achieve?
Procedural Context: The stage or phase of activity within a broader process.
What part of the larger arc are we in?
What comes next?
Interpretive/Cultural Context: The frameworks, assumptions, norms, and narratives through which meaning is constructed.
By what logic or lens is this understood?
What’s appropriate or valued here?
Affective Context: The emotional and evaluative tone that colors interaction.
How does this feel?
What mood or stance prevails?
These dimensions aren’t independent but interrelated and mutually constraining. A decision made in a meeting (procedural) might be part of a strategic initiative (intentional), shaped by organizational culture (interpretive), influenced by prior trust between participants (social), and colored by recent setbacks (affective and temporal). Meaning emerges through this hierarchy of horizons—contexts within contexts, each constraining and enriching the others.
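These dimensions could, in principle, be carried explicitly in a system’s state. The following is a hypothetical sketch of that idea, with the obvious caveat that genuinely interpretive content resists being flattened into strings:

```python
from dataclasses import dataclass

@dataclass
class SituationContext:
    """One record per dimension; names mirror the taxonomy above."""
    temporal: str      # how we got here, what sequence we are in
    spatial: str       # resources and constraints of the setting
    social: str        # roles, trust, obligations among participants
    intentional: str   # goals and expectations in play
    procedural: str    # stage within the larger process
    interpretive: str  # norms and frames through which this is read
    affective: str     # prevailing mood or stance

meeting = SituationContext(
    temporal="third session after two failed negotiations",
    spatial="client's boardroom, limited time slot",
    social="long-standing but strained partnership",
    intentional="salvage the contract renewal",
    procedural="final decision point of the quarter",
    interpretive="formal corporate etiquette expected",
    affective="tense, guarded",
)
# The dimensions constrain one another: the affective reading ("tense")
# only makes sense against the temporal one ("two failed negotiations").
```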
The Holonic Structure of Context
One productive way to conceptualize nested contexts is through holons—a concept from systems theory (Koestler) describing entities that are simultaneously wholes and parts. A holon is a self-reliant unit with a degree of autonomy, but it’s also a component of a larger system. A system of these nested holons is called a holarchy.
This recursive structure mirrors how humans naturally organize meaning:
- A conversation is a context within a relationship
- A relationship is a context within an organization
- An organization is a context within an industry and culture
- A culture is a context within historical and geographical circumstances
Each level maintains internal coherence while being influenced by the levels above and below, creating a flow of meaning across scales. The result is a contextual ecology that transforms context from a parameter into a topology of dynamically nested interpretations. Contextual intelligence requires sensitivity to this structure: awareness not just of immediate surroundings but of multiple scales at once.
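The holarchy maps naturally onto a recursive data structure in which each holon interprets signals locally and defers upward when its own norms are silent. A toy sketch, assuming the simplest possible notion of “interpretation” (a lookup):

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Holon:
    """A context that is both a whole (has local norms) and a part (has a parent)."""
    name: str
    norms: dict[str, str] = field(default_factory=dict)
    parent: Optional[Holon] = None

    def interpret(self, signal: str) -> str:
        # Resolve meaning locally first, then climb the holarchy.
        if signal in self.norms:
            return f"{self.name}: {self.norms[signal]}"
        if self.parent is not None:
            return self.parent.interpret(signal)
        return f"{signal}: no interpretation at any level"

culture = Holon("culture", {"silence": "politeness"})
org = Holon("organization", {"silence": "disagreement"}, parent=culture)
meeting = Holon("meeting", parent=org)

# The same signal means different things depending on where resolution stops.
print(meeting.interpret("silence"))  # organization: disagreement
print(culture.interpret("silence"))  # culture: politeness
```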
From Structure to Situation
Computing has long favored structure: data, process, state, and ontology models. Structure defines entities, relations, and rules in a domain. Context, on the other hand, captures significance and relevance in situations. The rise of generative and agentic systems signals a shift from static objects and flows to understanding situations in motion.
In this emerging paradigm:
- Objects become subjects, embedded in contexts rather than free-floating abstractions.
- Processes become narratives, sequences that derive meaning from progression and purpose.
- Data becomes signs, symbols pointing to situations beyond themselves, requiring interpretation.
This suggests that systems shouldn’t merely model entities but model relationships and relevance. They shouldn’t only react to events but also interpret and anticipate them based on situational understanding. This is the path to contextual intelligence—a synthesis of sensing, understanding, and adaptation grounded in multi-layered awareness of situations.
Beyond Context-Awareness
As AI systems become more embedded in human environments, the central question shifts from “Can the system complete the task?” to “Can it understand the situation in which the task exists?” This points toward what we might call contextual depth: not consciousness in any phenomenological sense, but the capacity to operate meaningfully across multiple nested levels of context simultaneously. A formal framework for this concept comes from Mica Endsley’s model of Situation Awareness (SA), which defines three hierarchical levels of understanding (sketched in code after the list):
- Level 1: Perception of the elements in the environment.
- Level 2: Comprehension of their meaning in relation to goals.
- Level 3: Projection of their status in the near future.
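Endsley’s three levels compose naturally as a pipeline from perception to projection. The sketch below is schematic, not a working SA system; the heuristics inside each function are placeholders.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    elements: list[str]   # Level 1: what is perceived
    meaning: str = ""     # Level 2: what it signifies for current goals
    projection: str = ""  # Level 3: where it is heading

def perceive(raw_events: list[str]) -> Situation:
    """Level 1: register the elements present in the environment."""
    return Situation(elements=raw_events)

def comprehend(s: Situation, goal: str) -> Situation:
    """Level 2: relate perceived elements to the agent's goal."""
    s.meaning = f"{len(s.elements)} events relevant to goal '{goal}'"
    return s

def project(s: Situation) -> Situation:
    """Level 3: anticipate near-future status from the comprehended state."""
    s.projection = "escalating" if len(s.elements) > 2 else "stable"
    return s

sa = project(comprehend(perceive(["alarm", "door_open", "crowd"]),
                        goal="secure the site"))
print(sa.meaning, "->", sa.projection)
```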
Contextually deep systems, unlike merely context-aware ones, grasp meanings beyond immediate surroundings. Reaching that depth requires architectural shifts toward situational models and genuine temporal continuity, not just better algorithms. Current systems fall short of this goal, and the gap between raw capability and contextual depth is widening.
Closing the Contextual Gap
The contextual gap that characterized much of computing’s history is becoming untenable. The rediscovery of context marks not a new invention but a return to the social, interpretive, and ecological foundations of intelligence.
We’re witnessing a transition from systems that execute to systems that must interpret, from rigid architectures to more fluid topologies of meaning. This move from decontextualized processing to contextual interpretation may represent a genuine paradigm shift in the Kuhnian sense: a fundamental change in the core assumptions and methods that guide the field.
The challenge is philosophical and phenomenological: to reconnect mechanical processing with meaningful understanding. Sustained engagement across computer science, cognitive science, linguistics, philosophy, and anthropology is crucial. It requires humility about current systems’ limitations and ambition about what’s possible when the gap closes. The future of AI and computing depends on treating context seriously, not as an input, but as the medium for meaning generation.