Why I’ve Stopped Selling Situational Awareness

The Expert
I have spent thirty years as the expert to experts in performance optimization, monitoring, observability and controllability. I built distributed tracing tools before the term existed. I designed signal architectures, compression frameworks, and semantic models that I still believe are correct. I have walked into organizations of every size and shape, consulted with the sharpest engineering teams in the industry, and tried to show them something beyond the dashboard. The impact I wanted, the one where the field shifts, where the architecture changes, where situational awareness becomes the foundation, never arrived.
This is the essay where I stop expecting it to.
The Conversation
In every technical exchange I have had with observability vendors, the same pattern has emerged. Discussions orbit features, performance, integrations, anything that preserves the existing frame. The foundational questions, what a signal should carry, where judgment should live, what constitutes meaning in a system, are rarely if ever engaged. This is not a matter of disagreement. To engage those questions seriously would require stepping outside the architecture the industry is built on. It would mean confronting the possibility that the data model itself is insufficient, that signals are semantically empty, and that meaning cannot be recovered through accumulation and correlation. There is no winning move for a vendor inside that conversation. If the ideas are wrong, they are irrelevant. If they are right, they invalidate the category. The rational response is not to argue but to avoid the discussion. Which is exactly what happened.
The Condition
I walk into companies thinking: this is a mature organization, they’ve got their stuff together.
What I find instead is constant firefighting. Even in the largest, most experienced engineering teams, everything is in motion. Deployments, configuration changes, integrations, and scaling happen continuously. Humans cannot track the state of the system; coordination is impossible. Teams are just throwing changes against the wall and hoping nothing breaks badly enough to cascade globally.
At this scale, software is inherently unstable. Stability is not a phase to be achieved; firefighting is the steady state. Even the most disciplined, high-performing teams operate within it. The ladder to situational awareness doesn’t exist, because the ground beneath it is always moving. No matter how much you educate teams about awareness or operational intelligence, they cannot reach a level where it is meaningful.
I used to think this was immaturity.
Thirty years of evidence has taught me otherwise. What makes this unavoidable is scale and complexity: distributed authorship, non-local effects, and continuous mutation ensure that no human, however experienced, can hold the system in their head. The symptom, firefighting, is inevitable. The deeper cause is that every signal in the system is semantically thin. Logs, traces, and metrics exist, but none carry meaning; everything must be reconstructed after the fact. At human scale, this reconstruction collapses.
This is the condition I have been trying to change for three decades. It has not changed. It never will.
The Market
Dash0 announced a $110M Series B at a $1B valuation on 23 March 2026. Its mission: evolve from OpenTelemetry-native observability into “the autonomous nervous system for production.”
The language borrows from cybernetics and cognitive science but carries very little of their substance.
The architecture underneath is the same architecture the industry has been building for fifteen years. The data model assumes the signal is in the payload. Situational understanding is treated as an emergent property of accumulation. The gap between raw telemetry and operational judgment is addressed through acceleration: faster ingestion, faster correlation, faster response, now mediated by a language model.
The AI SRE agent that “finds root cause and gives clear guidance” is a natural language interface over the same correlation-and-search loop. The toil agents that “autonomously create dashboards and alerts” automate the production of artifacts that were already the problem.
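The loop being described can be caricatured in a few lines. This is a sketch of the pattern, not any vendor's implementation; every name in it is hypothetical.

```python
# A caricature of the reactive observability loop: accumulate opaque payloads,
# then try to reconstruct meaning after the fact by searching and correlating.
# All identifiers are hypothetical; this illustrates the pattern, not a product.

from collections import defaultdict

def ingest(store, event):
    """Accumulate raw payloads keyed by trace id. The payload is just text:
    no meaning travels with it, only bytes to be mined later."""
    store[event["trace_id"]].append(event["payload"])

def correlate(store, keyword):
    """After an alert fires, search the accumulated payloads and group hits
    by trace. This is reconstruction: the judgment lives in the query,
    not in the signal."""
    return {tid: [p for p in payloads if keyword in p]
            for tid, payloads in store.items()
            if any(keyword in p for p in payloads)}

store = defaultdict(list)
ingest(store, {"trace_id": "t1", "payload": "checkout latency 950ms"})
ingest(store, {"trace_id": "t1", "payload": "db pool exhausted"})
ingest(store, {"trace_id": "t2", "payload": "healthcheck ok"})

suspects = correlate(store, "exhausted")
```

Whether the `correlate` step is driven by a human at 3am or by a language model, the structure is the same: meaning is absent from the signal and must be mined out of the pile.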
And this is exactly why the company is worth a billion dollars.
Dash0 meets the industry where it actually lives. The pitch, “we’ll automate the firefighting for you”, is the correct commercial message for an industry that operates in permanent reactive triage. Six hundred customers are paying for it because the pain is real and immediate. Deployments break. Alerts fire. Someone has to dig. If an AI agent can dig faster, that is a genuine reduction in suffering for the team on call at 3am. I know the founders. I know the space. The commercial logic is sound. The architecture is unchanged.
The Tiers
I have been selling, maybe yelling would be a better word, into this customer landscape for my entire career. It has the same shape it had twenty years ago.
At the top, elite engineering organizations have the capacity for a higher-order operational view; if they wanted one, they would build it internally. Their self-image as top-tier operations means they will not look outside for architectural guidance on operational cognition. They are not buyers.
At the bottom, small operations run low-stakes workloads where uptime is adequate and the consequences of failure are tolerable. Free or cheap tooling covers the requirement. They are not buyers either.
In the middle sits the actual market. These organizations operate at equivalent engineering maturity to the bottom tier. The same patterns produce the same problems. What scales is the operational surface: more services, more nodes, more integrations, more teams needing coordinated access. The purchase decision is driven by aggregation needs, compliance requirements, security integration, and vendor consolidation — features that make the same reactive workflow governable at organizational scale.
The differentiation across the entire observability vendor landscape is infrastructural plumbing. The engineering teams producing the problems, and the problems they produce, remain constant across all three tiers. I have consulted at every tier. The quality of the engineering and the nature of the failures are the same. The scale of the coordination overhead is the only variable.
The Absence
This is why the market for situational awareness doesn’t exist. It is not a matter of refusal or ignorance. It is a structural impossibility. Humans cannot perceive, coordinate, or stabilize the system sufficiently for situational awareness to be directly meaningful. The concept does not map onto any existing purchase category or felt need at any tier. The vocabulary migrates into pitch decks (“nervous system,” “autonomous intelligence,” “operational cognition”) while the architecture underneath continues to run the same loop.
The only way situational awareness could exist is if it is built into the system itself, at the point of expression, rather than reconstructed by human observers. Meaning must be encoded at emission. Signs must circulate where payloads now accumulate. Compression must happen at the source.
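What “meaning encoded at emission” could look like can be sketched minimally. The vocabulary here (`Status`, `Sign`) is hypothetical and illustrative only: the point is that classification and compression happen in the emitting component, so what circulates is a sign, not a payload to be mined.

```python
# A sketch of the alternative: the emitter interprets its own state and emits
# a compressed, typed sign. Interpretation happens at the source, where the
# knowledge lives. All names here are hypothetical, illustrative only.

from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    OK = "ok"
    DEGRADED = "degraded"
    DEFECTIVE = "defective"

@dataclass(frozen=True)
class Sign:
    source: str      # the component speaking
    status: Status   # the judgment, made at the point of expression
    # No free-text payload: meaning was encoded before emission.

def emit(source, pool_in_use, pool_size):
    """The component itself decides what its state means before emitting."""
    if pool_in_use >= pool_size:
        return Sign(source, Status.DEFECTIVE)
    if pool_in_use / pool_size > 0.8:
        return Sign(source, Status.DEGRADED)
    return Sign(source, Status.OK)

sign = emit("db.pool", pool_in_use=10, pool_size=10)
```

The design choice is the inversion: no downstream observer has to reconstruct that an exhausted pool is a defect, because the signal already says so.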
I have been saying this for fifteen years. The industry has been funding its way around it for just as long.
The Research
The problem extends well beyond observability and distributed systems. Thirty years of walking in and out of organizations has taught me that change itself, at code level, people level, organizational and architectural level, is the universal condition that nobody manages well. Everyone is faking comprehension. The scale of the problem makes genuine certainty impossible. Situational awareness is the discipline that addresses this, and it will eventually be needed. The question is where.
Defense seems like the obvious domain. The expectations are highest, the doctrine is explicit, the stakes are existential. Then you look at the reality and discover it is worse than software engineering. The gap between doctrinal aspiration and operational practice is shocking. People write papers on situational awareness in contexts where the awareness itself is absent from the operations it describes. This is likely why Palantir continues to sell software that never delivers what people expect, or at least what they expect when their reference point is a Hollywood film.
So the market may emerge somewhere. Or it may not. Either way, I have not stopped working on this, because situational awareness as a research domain brings together every area I care about: cybernetics, semiotics, bio-semiotics, neuroscience, holonics, and the architecture of software itself.
I like to build while I research. I like to see how ideas change when they meet implementation. The research produces lessons that transpose into other domains, piecemeal, applied to ordinary problems, carried into consulting engagements and conversations. The grand unified vision remains mine. What travels is the knowledge it generates, piece by piece, into whatever context can receive it.
What I have stopped doing is expecting the market to meet me at the destination. The industry will keep funding faster firefighting because that is the problem it can feel. The deeper problem, that the data model discards the judgment, is real, and it is mine to carry.