Why “Complete” Systems Collapse

Every system has a surface area: the sum of all the places where humans and machines meet. We tend to think of this in purely technical terms, as API endpoints, configuration files, and integration points. But the surface area is also conceptual. It’s every idea a user has to learn, every relationship they have to keep in their head, every place something can go wrong. It’s the size of the interface between the system and the people who use and maintain it. And there is a paradox here, simple to state and far-reaching in its consequences: the more thoroughly you try to model reality inside a system, the more unmanageable that system becomes. Designing for interpretation rather than specification, by contrast, compresses complexity radically without sacrificing effectiveness. The complexity doesn’t vanish; it moves to where it belongs, into the minds of actors and agents.

Ontological design aims for complete specification: every entity, property, and relationship made explicit. The result is uncontrolled growth of the surface area. Each new business need adds object types, attributes, and connections, so the surface expands faster than the system’s actual capability. Palantir Foundry, for example, sets out to build a digital twin of an entire organization, explicitly modeling every process, relationship, and data flow. The initial model starts with a manageable number of object types and properties, but it grows quickly as the business evolves: object types multiply, relationships tangle, and properties proliferate to cover every exception. Before long the ontology demands a specialized workforce of integrators, modelers, and pipeline engineers, each group speaking its own language and none able to hold the whole system in their head. What began as a “single source of truth” becomes a cathedral of logic: magnificent, fragile, and inaccessible to everyone but its custodians.

The semiotic approach takes a different path. It draws on Charles Sanders Peirce’s insight that meaning is not inherent in a system but emerges in the act of interpretation. Instead of encoding every conceivable state of the world, a semiotic system provides a small, defined vocabulary of signs (status indicators, condition modifiers, directional cues) and relies on domain experts to interpret them within their own field. A signal like “DEFECTIVE[TENTATIVE] ↓ DEGRADING” means one thing to a payments team and another to a logistics team, and each knows instinctively how to respond. The richness of the system lies not in its size but in the human knowledge it activates. This is not lossiness; it is compression. Rather than encoding the world in exhaustive detail, the system gives people who already understand the domain just enough information to decide what the signal means right now.
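To make this concrete, here is a minimal sketch of what a bounded sign vocabulary might look like in code. All of the names and enum values are hypothetical, chosen only to mirror the “DEFECTIVE[TENTATIVE] ↓ DEGRADING” example; a real vocabulary would be negotiated with the teams who have to interpret it.

    from dataclasses import dataclass
    from enum import Enum

    # A deliberately small, closed vocabulary: the point is that it does not
    # grow with every new business case. (All names here are illustrative.)
    class StatusIndicator(Enum):
        NOMINAL = "NOMINAL"
        DEFECTIVE = "DEFECTIVE"
        UNKNOWN = "UNKNOWN"

    class ConditionModifier(Enum):
        CONFIRMED = "CONFIRMED"
        TENTATIVE = "TENTATIVE"

    class DirectionalCue(Enum):
        IMPROVING = "↑ IMPROVING"
        STABLE = "→ STABLE"
        DEGRADING = "↓ DEGRADING"

    @dataclass(frozen=True)
    class Signal:
        """A compact emission such as DEFECTIVE[TENTATIVE] ↓ DEGRADING.
        It becomes a sign only once a team interprets it (see Appendix A)."""
        indicator: StatusIndicator
        modifier: ConditionModifier
        cue: DirectionalCue

        def render(self) -> str:
            return f"{self.indicator.value}[{self.modifier.value}] {self.cue.value}"

    emitted = Signal(StatusIndicator.DEFECTIVE, ConditionModifier.TENTATIVE,
                     DirectionalCue.DEGRADING)
    print(emitted.render())  # -> DEFECTIVE[TENTATIVE] ↓ DEGRADING

The vocabulary stays small; what each rendered signal calls for is decided by the team reading it.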

This compression is possible because signs defer meaning rather than encapsulate it. Ontologies try to be universal, but universality is expensive because it forces you to model every edge case. Semiotics accepts that meaning shifts with context and treats that as a feature. Ontologies must keep every entity in sync at all times, obsessively maintaining the present state of the world; semiotics is content to focus on what’s changing, letting state emerge from sequences of events. Ontological vocabularies expand without limit; semiotic vocabularies stay bounded, which makes each term denser and more valuable. The difference matters in practice. People can hold only a handful of concepts in working memory at once (roughly seven, by the classic estimate), so an ontology with hundreds of object types forces extreme specialization. Communication suffers as mental models diverge, development slows because every change drags schemas and documentation along with it, and the attack surface grows with each new interface and dependency. Semiotic systems, by contrast, absorb change locally; they adapt rather than fracture.
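As a rough illustration of letting state emerge from sequences of events, the sketch below derives a current reading by folding over recent signals instead of maintaining a synchronized model of every entity. The cue names, window size, and judgment rule are placeholders, not recommendations.

    from collections import deque
    from typing import Deque

    # Hypothetical bounded vocabulary of directional cues (illustrative names).
    DEGRADING, STABLE, IMPROVING = "DEGRADING", "STABLE", "IMPROVING"

    def current_reading(recent: Deque[str]) -> str:
        """Derive a situational reading from the last few cues rather than
        asserting the true, complete state of the world."""
        if list(recent)[-2:] == [DEGRADING, DEGRADING]:
            return "intervene"   # two degrading cues in a row
        if DEGRADING in recent:
            return "watch"
        return "steady"

    window: Deque[str] = deque(maxlen=5)  # only recent history matters
    for cue in [STABLE, DEGRADING, DEGRADING, IMPROVING]:
        window.append(cue)
        print(cue, "->", current_reading(window))

Nothing here keeps a global picture in sync; the reading is recomputed from whatever recently happened.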

The most resilient architecture blends both philosophies. Ontology makes sense when the stakes demand precision and stability: regulatory compliance, financial ledgers, machine-to-machine protocols, domains where the world changes slowly. Semiotics shines where change is constant: operational dashboards, human interfaces, emerging processes, cross-team boundaries. Many organizations find themselves trapped in ontological sprawl and can migrate gradually toward semiotics by identifying where human interpretation is already happening, introducing sign vocabularies at those boundaries, and phasing out hyper-specific objects over time.
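One hypothetical way to begin that migration is a thin adapter at a team boundary that collapses a detailed ontological record into a compact sign for human consumption, leaving the underlying schema untouched. The record fields and thresholds below are invented for illustration.

    # A detailed record as an ontological system might expose it (fields invented).
    order_record = {
        "id": "ORD-1042",
        "sla_breach_minutes": 37,
        "retry_count": 4,
        "payment_state": "AUTHORIZED",
        "carrier_scan_gap_hours": 9,
    }

    def to_sign(record: dict) -> str:
        """Collapse many attributes into one bounded sign; the receiving team
        supplies the domain-specific interpretation."""
        troubled = record["sla_breach_minutes"] > 0 or record["retry_count"] > 3
        worsening = record["carrier_scan_gap_hours"] > 8
        indicator = "DEFECTIVE" if troubled else "NOMINAL"
        modifier = "CONFIRMED" if record["payment_state"] == "SETTLED" else "TENTATIVE"
        cue = "↓ DEGRADING" if worsening else "→ STABLE"
        return f"{indicator}[{modifier}] {cue}"

    print(to_sign(order_record))  # -> DEFECTIVE[TENTATIVE] ↓ DEGRADING

The ontological record keeps doing its job for machines; the sign is what crosses the team boundary.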

Semiotics has its own challenges. Interpretations can drift, and divergent readings lead to misalignment. But rather than retreating to ontology, it is far more effective to manage interpretation as a collaborative practice: regular calibration sessions, documented examples, feedback loops, and a designated interpretive authority for each domain. Maintaining a shared dictionary is still far cheaper, and far more practical, than maintaining an encyclopedia.

Ultimately, the choice between ontology and semiotics is as much philosophical as it is technical. Ontological architects believe that meaning can be captured, frozen, and verified; they construct grand structures of logic and treat ambiguity as a flaw. Semiotic architects believe that meaning emerges through interaction; they cultivate spaces that evolve with use and treat ambiguity as a form of compression. The most complete system is not the one that encompasses everything; it is the one that knows when to leave something unsaid. In a world that changes faster than any model can keep up with, the future belongs to those who design interpretive spaces where meaning can thrive: gardens, not cathedrals.

Appendix A – Signal→Sign→Status Triad

Signal – A detectable change or emission
A signal is a raw act of communication or disruption. It is emitted by a system or actor—intentionally or unintentionally—as a trace of change. It exists prior to interpretation. In most systems, signals are abundant, but they are not inherently meaningful. Without interpretation, they are noise or raw telemetry.

Sign – A meaningful construal of a signal
A sign emerges when a signal is interpreted by an agent in a particular context. It is a mediated representation, shaped by schema, culture, and situation. The same signal may produce different signs depending on the observer. In other words, a sign is not in the system; it is in the act of interpretation. Semiotic compression comes from shaping the space of signs, not just broadcasting more events.

Status – A judgment about what the signs mean collectively
Status is a higher-order structure: it is a situated judgment, often persistent, that results from interpreting one or more signs over time. It represents how a subject construes the situation: Is the system stable? Diverging? Recovering? The important insight here is that status is not just emitted—it is formed. It is a constructed belief, or a narrative label, placed atop an interpretive stack.

A signal becomes a sign only when it is interpreted.
A sign becomes status only when it is judged.
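Read operationally, the triad can be sketched as three small steps. Everything below (the contexts, thresholds, and the judgment rule) is a placeholder meant only to show where interpretation and judgment enter, not a reference implementation.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Signal:            # a raw emission, prior to interpretation
        source: str
        payload: float

    @dataclass(frozen=True)
    class Sign:              # a construal of a signal in a particular context
        label: str
        context: str

    def interpret(signal: Signal, context: str) -> Sign:
        """The same signal yields different signs in different contexts
        (the thresholds here are invented)."""
        limit = 0.8 if context == "payments" else 0.95
        label = "DEGRADING" if signal.payload > limit else "STABLE"
        return Sign(label, context)

    def judge(signs: list) -> str:
        """Status is a judgment formed over signs, not another emission."""
        degrading = sum(1 for s in signs if s.label == "DEGRADING")
        return "diverging" if degrading >= 2 else "stable"

    raw = [Signal("queue-depth", 0.90), Signal("queue-depth", 0.97)]
    signs = [interpret(s, "payments") for s in raw]
    print(judge(signs))  # -> diverging (for the payments context)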

Ontological systems try to represent objective reality; semiotic systems attend to the unfolding of events. Status is derived from evaluating sequences of signs rather than asserting absolute truth, and it embodies an understanding that is subjective yet actionable. This supports a form of situational awareness that stays adaptable as things change. The semiotic garden does more than tolerate ambiguity; it is designed to harness it as a working interface between humans and systems.

Appendix B – Domain-Driven Design

Domain-Driven Design (DDD) reshaped software engineering by putting conceptual alignment at the center of system development. It holds that software should model its domain, and it supplies the constructs to do so: bounded contexts, aggregates, ubiquitous language, and domain models that provide structure and semantics. But by prioritizing structure over meaning in context, DDD’s ontological focus obscures the flow of sense-making and judgment in real-world systems.

DDD assumes we can capture a domain’s truth through careful modeling. This leads developers to define entities with strict schemas, model every relationship, and build object graphs that represent reality. But the same paradox arises: the more detail we pour into the description, the more surface area we create, and the harder the system becomes to understand. Modeling freezes the world at a moment in time, as if meaning were static, while the world, and meaning with it, is always in motion.

This is not to dismiss DDD; it has proven useful for aligning teams, managing complexity, and organizing codebases. But DDD is a tool of structure, not of sense-making. Semiotic systems ask a different question: “What does this thing mean, right now, in this situational context?” To answer it, you need agents who interpret, signs that mediate, statuses that emerge from interpretation, and a mechanism for feedback and judgment. This is precisely what the signal–sign–status triad provides, and it operates at a different epistemic level than DDD.
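A hedged sketch of the contrast: a DDD-style entity fixes its structure up front, while the semiotic question is answered by an interpretation function that takes the situation, and the asker, as arguments. The shipment fields, team names, and rules are all hypothetical.

    from dataclasses import dataclass

    # DDD-style: structure fixed up front; every new distinction tends to
    # become another field or entity in the model. (Fields are illustrative.)
    @dataclass
    class Shipment:
        id: str
        status: str          # e.g. "IN_TRANSIT"
        carrier: str
        expected_days: int
        elapsed_days: int

    # Semiotic-style: meaning is computed at the moment of asking,
    # relative to whoever is asking.
    def what_does_this_mean(shipment: Shipment, team: str) -> str:
        late = shipment.elapsed_days > shipment.expected_days
        if team == "logistics":
            return "reroute candidate" if late else "on plan"
        if team == "customer-support":
            return "proactive outreach" if late else "no action"
        return "unclassified"

    s = Shipment("SHP-7", "IN_TRANSIT", "ACME", expected_days=3, elapsed_days=5)
    print(what_does_this_mean(s, "logistics"))         # -> reroute candidate
    print(what_does_this_mean(s, "customer-support"))  # -> proactive outreach

The entity does not gain a new field for every team’s concern; the concern lives in the interpretation.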