Observability: Cleanup Crew or Cartel?

Not every solution cleans up the problem — some just rearrange the trash.

Imagine a city struggling with its waste management. Contracts are signed behind closed doors with companies that have “special relationships” with officials. Local crime syndicates have infiltrated the industry, ensuring competitors are kept at bay. Costs escalate without transparency. Mountains of waste pile up, hidden away in obscure landfills. Citizens initially trust that these companies have their best interests at heart, but gradually discover that promises fall short. The problem silently grows, concealed just beneath the surface, while invoices—which nobody dares question—arrive reliably each month. It’s an open secret that refusing to pay might result in other “operational problems” or unexpected service disruptions.

Now, shift to enterprise IT observability. Major vendors promise clarity, visibility, and control over the sprawling digital waste of logs, metrics, and traces. Glossy presentations showcase perfect dashboards that’ll finally make sense of your chaotic systems. Yet after contracts are signed, bills grow increasingly opaque with mysterious line items and unexpected surcharges. Data—much of it useless or irrelevant—piles up endlessly within expensive storage solutions. Teams drown in dashboards nobody maintains. Alert fatigue becomes the norm. Meaningful insights get harder to extract from the noise. All the while costs mount and dependency deepens.

These two worlds—corrupt waste management operations and enterprise IT observability—share disturbing parallels. Both position themselves as essential solutions to fundamental problems you can’t handle yourself. Both operate on long-term contracts with complex, impenetrable pricing models. Both create dependencies that make switching painful and expensive with each passing year. And both industries have developed powerful, entrenched ecosystems that systematically eliminate meaningful competition. But the most troubling similarity might be this: in both cases, the very entities promising protection from your problem benefit from its continued growth—or even actively contribute to it. As with an organized crime syndicate that simultaneously creates and “solves” problems for local businesses, the line between service provider and extortionist blurs.

Much like modern waste management touts “green” initiatives while landfills continue to grow, the observability industry cloaks itself in the language of optimization, insight, and operational excellence. But in both cases, the emphasis remains on collection, not meaningful recirculation. Data is gathered, billed for, and largely forgotten, piling up in silent digital landfills with little practical reuse or impact. There’s no sustainable loop, no self-healing ecosystem—only more accumulation, more invoices, and more dependence on an infrastructure built to grow, not to renew.

Some might argue that initiatives like OpenTelemetry have ushered in a new era of openness, allowing customers to collect telemetry data in standard formats across vendors. And to a degree, that’s true — at the point of collection and ingestion, interoperability has improved. But once your data enters a vendor’s backend, the openness largely ends. There’s no standard today for extracting your historical data in full fidelity, preserving all relationships, timestamps, and context. The data, once inside, is reshaped into proprietary schemas and stored in ways that make egress complex, lossy, and prohibitively expensive. True openness wouldn’t just standardize how data flows into systems but also guarantee that customers could freely and affordably retrieve what they’ve already paid to generate. Without that, the landfill remains — only now it’s branded “open” at the front gate while the exit stays firmly padlocked.
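To see how lopsided that openness is, consider a minimal sketch using the OpenTelemetry Python SDK (the service name and endpoint below are placeholders, not any particular vendor): shipping data in is a handful of vendor-neutral lines, while no comparable standard call exists for pulling your history back out in full fidelity.

```python
# Minimal OpenTelemetry (Python SDK) trace setup. The instrumentation is
# vendor-neutral; the only vendor-specific piece is the OTLP endpoint.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
provider.add_span_processor(
    BatchSpanProcessor(
        # Switching vendors means changing this endpoint (and credentials);
        # the rest of the instrumentation stays exactly the same.
        OTLPSpanExporter(endpoint="https://otlp.vendor-a.example.com:4317")
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout")
with tracer.start_as_current_span("process-order"):
    pass  # application work; the span leaves in the standard OTLP format
```

The ingestion side of that exchange is standardized across vendors; as noted above, there is no symmetric, standardized path for bulk-exporting what accumulates on the other side.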

OpenTelemetry, while standardizing telemetry pipelines, has also significantly lowered the barriers to dumping vast amounts of data without assessing its value. The fear of “unknown unknowns” — the anxiety that some crucial insight might be missed — drives teams to collect and ship everything “just in case.” And vendors, who profit directly from data volume, have little incentive to encourage restraint. As a result, OpenTelemetry hasn’t curbed the landfill economy; it has accelerated it. It has made it easier to produce, transmit, and hoard digital waste at an industrial scale, not to distill intelligence from it. Open standards, when unaccompanied by standards of judgment, simply grease the wheels of waste.
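To be fair, the tooling for restraint already exists in the same SDK; it simply isn’t the posture the ecosystem pushes. A minimal sketch, assuming the OpenTelemetry Python SDK and an illustrative (hypothetical) 10% sampling ratio:

```python
# "Standards of judgment" applied at the source: decide what NOT to ship.
# The 10% ratio is an illustrative placeholder, not a recommendation.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Keep roughly 1 in 10 traces (respecting the parent's sampling decision)
# instead of exporting every span "just in case".
provider = TracerProvider(sampler=ParentBased(TraceIdRatioBased(0.1)))
trace.set_tracer_provider(provider)
```

The point is not that 10% is the right number; it’s that judgment is a configuration decision your team has to make deliberately, because nobody upstream is paid to make it for you.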

Think about it. When was the last time an observability vendor proactively suggested reducing data collection? Have they ever recommended eliminating stale dashboards or deprecating unused alerts? Have they ever told you that you’re over-provisioned and should downsize your contract? Their revenue models are predicated on data volume and retention—the more you ingest and the longer you keep it, the higher the bill. This fundamental misalignment between their incentives and your efficiency is akin to a syndicate’s “protection arrangement,” where the solution and the problem become indistinguishable.
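The arithmetic behind that incentive is simple enough to sketch. The numbers below are hypothetical placeholders, not any vendor’s actual rates; the only point is that the bill scales linearly with both of the knobs a vendor will never suggest turning down.

```python
# Toy cost model: per-GB ingest pricing plus per-GB-month retention pricing.
# All figures are hypothetical placeholders for illustration only.
GB_INGESTED_PER_DAY = 500           # hypothetical daily telemetry volume
RETENTION_DAYS = 30                 # hypothetical retention window
PRICE_PER_GB_INGESTED = 0.10        # hypothetical $/GB ingested
PRICE_PER_GB_MONTH_RETAINED = 0.03  # hypothetical $/GB-month retained

monthly_ingest_cost = GB_INGESTED_PER_DAY * 30 * PRICE_PER_GB_INGESTED
monthly_retention_cost = GB_INGESTED_PER_DAY * RETENTION_DAYS * PRICE_PER_GB_MONTH_RETAINED

# Double the volume or double the retention, and the bill follows suit.
print(monthly_ingest_cost + monthly_retention_cost)
```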

When your account manager calls with “concerns” about your growing infrastructure, and the call mysteriously leads to yet another upsell, does that feel like transparency or something else entirely? When unexpected price increases arrive with vague justifications about “platform enhancements,” do you feel like a valued customer or a captive one? Are the contracts clear and beneficial, or does it sometimes feel closer to paying protection money—locked into a service that promises everything but delivers just enough to keep you paying? “Nice production environment you have there… shame if something happened to it that nobody noticed until it was too late.”

The bitter irony is that the observability industry, born to bring clarity to complex systems, has itself become a complex system that few can fully comprehend. Teams build monitoring solutions so elaborate they require monitoring themselves, creating a strange recursive loop of complexity. And once you’re in certain vendor ecosystems, they make sure leaving becomes nearly impossible—employing tactics that’d make organized crime syndicates nod in appreciation: proprietary formats, exclusive integrations, and complex migration barriers.

Perhaps it’s time to ask harder questions about who benefits from the vast amounts of data we collect. We should also ask whether we’ve inadvertently invited a digital “syndicate” into our infrastructure, one that operates on terms increasingly favorable to itself while maintaining a facade of legitimacy and indispensability.

Are we gaining insights proportional to our investments? Or are we simply feeding an insatiable machine that grows hungrier with each passing quarter? Have we traded one set of problems for another, more expensive set, wrapped in slick marketing and unverifiable promises?

The answer, as any effective monitoring solution would tell you, likely resides in your data—if only you could discern it amid the clutter. If only you could trust that the people helping you find it are not the same ones with a vested interest in perpetuating your confusion, your dependency, and your ongoing bills.

The only real difference between the corrupt waste management syndicate and your observability vendor might just be better marketing—and a sleeker interface.