The Twin That Never Was
The digital twin, a promising enterprise technology, presents a vision of experimenting with the future. Adjust a manufacturing process parameter and watch the effects cascade through supply chains; identify vulnerabilities before they manifest. In this vision, a digital twin is more than a dashboard: it is a sandbox for what-if experimentation, a rehearsal space for exploring resilience, optimization, and risk without real-world consequences. Engineering teams, however, encounter a vastly different reality once they dig into implementation. Instead of living laboratories, they find themselves building intricate data integration projects: interconnected catalogs of assets enriched with IoT telemetry, knowledge graphs, and time-series databases. These systems excel at answering “What happened?” and “What’s happening now?”, but they struggle with hypotheticals such as “What if we shut down Plant B next month?” or “How would a supply chain disruption spread?”
This isn’t a problem of lacking data or computing power. Manufacturing plants generate terabytes of sensor readings daily, and cloud platforms can process and visualize that information at scale. The issue lies deeper, in a persistent disconnect between the marketing promise and the architectural reality. Organizations invest millions in digital twin initiatives expecting transformative outcomes, only to end up with expensive dashboards. The problem isn’t that simulation is impossible; it’s that the design philosophy underlying most digital twins has no place for time, causality, or imagination.

The Great Divide
To understand why most digital twins fail at simulation, we must recognize a crucial architectural distinction that the industry rarely makes explicit: the difference between diagnostic twins and simulative twins.
Diagnostic twins are entity-centric. They begin with the question: “What assets do we have, and what’s their current state?” These systems excel at monitoring equipment health, tracking performance metrics, and conducting root cause analysis when things go wrong. A diagnostic twin for a chemical plant might show real-time temperature readings across reactors, alert operators to anomalies, and provide drill-down capabilities to investigate process deviations. This is valuable for operational awareness and troubleshooting.
Simulative twins, by contrast, are transition-centric. They begin with a fundamentally different question: “How does our system evolve over time, and what futures are possible?” These systems are designed around the dynamics of change—how states transition, how effects propagate, how systems respond to interventions. A simulative twin for the same chemical plant wouldn’t just show current reactor temperatures; it would encode the thermodynamic relationships that govern how temperature changes affect reaction rates, how reaction rates influence product quality, and how quality variations impact downstream processes.
The crucial distinction lies not in the data sources or user interfaces, but in the underlying ontology—the fundamental assumptions about what the system represents. Diagnostic twins model entities through properties, while simulative twins model processes through behaviors.
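To make the contrast concrete, here is a minimal sketch in Python. The reactor fields and coefficients are invented for illustration, not real process chemistry. The entity-centric view is a snapshot of properties; the transition-centric view is a rule for how one state becomes the next.

```python
from dataclasses import dataclass

# Entity-centric (diagnostic): a snapshot of properties -- "what is".
@dataclass
class ReactorState:
    temperature_c: float
    reaction_rate: float
    product_quality: float

# Transition-centric (simulative): a rule for how one state becomes the next --
# "what happens". The coefficients below are illustrative, not real chemistry.
def step(state: ReactorState, heat_input: float, dt: float = 1.0) -> ReactorState:
    temperature = state.temperature_c + heat_input * dt
    rate = state.reaction_rate * (1.0 + 0.01 * (temperature - state.temperature_c))
    quality = state.product_quality - 0.02 * max(0.0, rate - 1.5)
    return ReactorState(temperature, rate, quality)

state = ReactorState(temperature_c=180.0, reaction_rate=1.2, product_quality=0.98)
for _ in range(10):
    state = step(state, heat_input=0.5)
print(state)
```

A diagnostic twin stores and displays snapshots like `state`; a simulative twin owns `step` and can iterate it under hypothetical inputs to ask what the next ten intervals might look like.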
The Relational Trap
The dominance of diagnostic twins stems from a deeper architectural bias that pervades enterprise software: the relational mindset. This approach treats entities as primary constructs and changes as secondary phenomena that happen to those entities. Time becomes metadata—timestamps attached to records, audit trails bolted onto tables, slowly changing dimensions that capture state transitions after the fact. This design philosophy emerged from the success of relational databases in managing business transactions and records.
However, when attempting to model dynamic systems, such as manufacturing processes, supply chains, and infrastructure networks, the relational approach becomes inadequate. The fundamental assumption that entities are stable and changes are exceptional no longer holds true. In these domains, change is continuous, interdependent, and frequently holds greater significance than any transient state.
The result is architectural fragmentation. Organizations end up with three separate systems:
- Past: Data warehouses and analytics platforms for historical analysis
- Present: Real-time dashboards and monitoring systems for awareness
- Future: Simulation engines and forecast tooling for scenario planning
Event Sourcing Is Not Simulation
At this point, architects familiar with modern data patterns might object: “But what about event sourcing? Doesn’t that solve the temporal problem?” Event sourcing has indeed gained popularity as a way to make systems more audit-friendly and temporally aware. Instead of storing the current state in database tables, event-sourced systems store sequences of events that can be replayed to reconstruct any historical state.
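As a rough sketch of the pattern, assuming a hypothetical `SensorEvent` type and invented sensor names, an event-sourced system keeps an append-only log and reconstructs state by folding over it:

```python
from dataclasses import dataclass

# Hypothetical event type: raw facts, recorded in chronological order.
@dataclass
class SensorEvent:
    timestamp: float
    sensor_id: str
    value: float

def replay(events: list[SensorEvent]) -> dict[str, float]:
    """Fold events in order to reconstruct the latest known state."""
    state: dict[str, float] = {}
    for e in sorted(events, key=lambda e: e.timestamp):
        state[e.sensor_id] = e.value
    return state

log = [
    SensorEvent(0.0, "reactor_b_temp", 181.2),
    SensorEvent(1.0, "reactor_b_temp", 183.7),
    SensorEvent(2.0, "reactor_b_temp", 188.9),
]
print(replay(log))      # current state, reconstructed from the full history
print(replay(log[:2]))  # any historical state, by replaying a prefix of the log
```

Replaying a prefix of the log reconstructs any historical state, which is where the audit and debugging value comes from.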
This approach brings real benefits for data integrity, debugging, and historical analysis. If you want to understand exactly how your system reached its current state, event sourcing provides an audit trail. You can replay events to reproduce bugs, analyze conditional paths, or reconstruct the sequence of changes that led to an outcome. But event sourcing, despite its temporal orientation, still falls short of true simulation capability. The crucial distinction lies in what it captures versus what it interprets.
Event sourcing is stimulus capture. It records what happened—every sensor reading, every state change, every user action—in precise chronological order. This is analogous to storing every photon that hits the retina or every sound wave that reaches the ear. The fidelity is perfect, but the data remains raw and uninterpreted.
Simulation requires semantic interpretation. It needs to understand not just that Sensor X reported value Y at time T, but what that reading means in the context of system behavior: Is this normal variation or the beginning of a degradation pattern? Does this indicate an upstream problem or a local anomaly?
The brain demonstrates this difference powerfully. We can imagine, dream, and anticipate without any external sensory input because our nervous systems operate not on raw stimuli but on interpreted patterns—signs and meanings that can be recombined internally. We don’t need to replay actual tiger sightings to imagine a tiger in the grass; we can fire the neural patterns associated with “tiger” directly.
Event sourcing lacks this interpretive layer. It provides recall but no imagination. You can replay historical events to understand what happened, but you can’t generate novel scenarios to explore what might happen. The system remains forever tethered to its stimulus history.
The Missing Mediation Layer
What digital twins need—but rarely include—is a mediation layer: a cognitive substrate that transforms raw signals into meaningful events and encodes the causal relationships that govern system evolution. This mediation layer serves three critical functions:
Semantic Transformation
Raw telemetry streams carry measurements, not meaning. The mediation layer interprets those measurements in context, detects patterns, and extracts what they signify. This transformation matters because simulation operates on meaningful events rather than raw readings.
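A minimal sketch of what such a transformation might look like, with an invented drift rule and sensor name standing in for real domain logic:

```python
from dataclasses import dataclass

# Raw stimulus: a value with no meaning attached.
@dataclass
class Reading:
    timestamp: float
    sensor_id: str
    value: float

# Interpreted sign: something a simulation can reason about.
@dataclass
class SemanticEvent:
    kind: str        # e.g. "normal_variation" or "degradation_onset"
    sensor_id: str
    at: float

def interpret(window: list[Reading], drift_limit: float = 5.0) -> SemanticEvent:
    """Classify a short window of readings instead of passing raw values through."""
    drift = window[-1].value - window[0].value
    kind = "degradation_onset" if drift > drift_limit else "normal_variation"
    return SemanticEvent(kind, window[-1].sensor_id, window[-1].timestamp)

window = [Reading(t, "reactor_b_temp", v) for t, v in [(0, 181.2), (1, 183.7), (2, 188.9)]]
print(interpret(window))  # reads the upward drift as degradation onset, not noise
```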
Causal Encoding
The mediation layer captures not just what events mean individually, but how they influence each other over time: direct causation, delayed effects, threshold dynamics, and feedback loops. These relationships form the “physics” of the digital twin—the rules that govern how the system evolves from one state to another.
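One way to encode these relationships is as explicit, first-class rules. The sketch below is illustrative only: the event kinds, delays, thresholds, and gains are invented, and a real system might derive them from physics models or learned dynamics instead.

```python
from dataclasses import dataclass

@dataclass
class CausalRule:
    cause: str          # triggering event kind
    effect: str         # resulting event kind
    delay_hours: float  # 0 for direct causation, >0 for delayed effects
    threshold: float    # minimum severity before the rule fires (threshold dynamics)
    gain: float         # how strongly severity propagates to the effect

# Hypothetical rules for the chemical plant example; the last rule closes a
# feedback loop by feeding an effect back into an earlier cause.
RULES = [
    CausalRule("fouling_onset", "heat_transfer_loss", delay_hours=0, threshold=0.1, gain=0.9),
    CausalRule("heat_transfer_loss", "temperature_rise", delay_hours=4, threshold=0.2, gain=0.7),
    CausalRule("temperature_rise", "quality_drop", delay_hours=12, threshold=0.3, gain=0.6),
    CausalRule("temperature_rise", "fouling_onset", delay_hours=24, threshold=0.5, gain=0.4),
]
```

However they are represented, the point is that the system’s “physics” becomes an explicit artifact a projection engine can execute, rather than an assumption buried in dashboard logic.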
Temporal Projection
With semantic events and causal relationships encoded, the mediation layer can run the system forward in time. Given a current state and a proposed intervention, it can project the cascade of effects. This projection capability is what transforms a diagnostic mirror into a simulative theater.
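A toy projection loop, reusing a simplified version of the rule encoding above and an invented supply-chain scenario, might look like this:

```python
import heapq
from dataclasses import dataclass

# Simplified version of the causal-rule sketch above (threshold omitted for brevity).
@dataclass
class Rule:
    cause: str
    effect: str
    delay_hours: float
    gain: float

# Invented supply-chain dynamics, for illustration only.
RULES = [
    Rule("plant_b_shutdown", "component_shortage", delay_hours=24, gain=0.9),
    Rule("component_shortage", "line_slowdown", delay_hours=48, gain=0.7),
    Rule("line_slowdown", "delivery_delay", delay_hours=72, gain=0.8),
]

def project(initial: str, severity: float, rules: list[Rule],
            horizon_hours: float = 400.0) -> list[tuple[float, str, float]]:
    """Run the encoded rules forward in time from a hypothetical intervention."""
    queue = [(0.0, initial, severity)]       # (time, event kind, severity)
    timeline = []
    while queue:
        t, kind, sev = heapq.heappop(queue)
        if t > horizon_hours or sev < 0.05:  # stop once effects fade out
            continue
        timeline.append((t, kind, round(sev, 2)))
        for r in rules:
            if r.cause == kind:
                heapq.heappush(queue, (t + r.delay_hours, r.effect, sev * r.gain))
    return timeline

# "What if we shut down Plant B next month?"
for t, kind, sev in project("plant_b_shutdown", severity=1.0, rules=RULES):
    print(f"t+{t:>5.0f}h  {kind:<20} severity {sev}")
```

The output is a timeline of projected effects: not a prediction to be trusted blindly, but a rehearsal of how an intervention could cascade, which is the what-if capability the opening questions call for.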
Conclusion: From Mirrors to Theaters
The digital twin industry stands at a crossroads. Current implementations have delivered real value in monitoring, diagnostics, and historical analysis. But they fall short of their transformative promise because they’re built on architectural foundations that can’t support imagination. The path forward requires more than incremental improvements to existing systems. It demands a fundamental shift in design philosophy:
- From entities to transitions: Modeling how systems evolve rather than cataloging what they contain
- From logs to signs: Interpreting data streams into meaningful patterns that can be manipulated and projected
- From replay to imagination: Building systems that generate novel scenarios, not just reconstruct recorded sequences