Observability is Yesteryear’s Monitoring

This article was originally posted in 2020 on the OpenSignals website, which is now defunct.

Unchanging Change

Looking back over 20 years of building application performance monitoring and management tooling, little has changed, though today's tooling does collect more data from far more sources. Effectiveness and efficiency have not improved; it could be argued that both have regressed. Why does observability today look so much like monitoring did in 2000, when our first profiling tool, JDBInsight, was released?

Much of the change touted by product marketing departments relates to engineering efforts to keep tooling applicable in today's container, cloud, and microservices environments. Vendors like IBM Instana, Dynatrace, AppDynamics, DataDog, and NewRelic spend much of their engineering budget simply maintaining instrumentation extensions for hundreds of platforms, products, projects, and programming languages. So when we say that not much has changed, we are referring to the positioning of tooling on a progress map.

All observability vendors seem stuck in quicksand, unable to deliver real breakthroughs to customers, with AIOps remaining a pipe dream.

Linking to Controllability

Cognition, control, and communication are still largely deferred and delegated to humans outside of tooling. Application performance monitoring vendors can keep talking up intelligence without ever having to deliver on what many outside the computing industry consider intelligence to be: (re)action appropriate to the context, stimulus, and goal. Natural, human-like intelligence can never be delivered as a software service without, at minimum, the ability to link past and predicted observation to controllability, that is, an intervention following awareness of and reasoning about a situation. Today, it is next to impossible to automate the linking of observability to controllability because the shared communication model, internal and external to both tooling and humans, does not exist. It cannot be found in the details.
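To make the observability-to-controllability link concrete, here is a minimal sketch of such a closed loop. All names, thresholds, and interventions here are hypothetical illustrations, not taken from any vendor product: an observation feeds an assessment (awareness and reasoning), which selects an intervention, closing the loop inside the software instead of deferring to a human.

```python
from dataclasses import dataclass
from enum import Enum

class Situation(Enum):          # awareness: a coarse reading of the system
    NORMAL = "normal"
    DEGRADED = "degraded"
    CRITICAL = "critical"

@dataclass
class Observation:
    error_rate: float           # fraction of failing requests (hypothetical input)
    latency_ms: float           # recent average latency (hypothetical input)

def assess(obs: Observation) -> Situation:
    """Reasoning: map a raw observation onto a situation (thresholds are illustrative)."""
    if obs.error_rate > 0.25 or obs.latency_ms > 2000:
        return Situation.CRITICAL
    if obs.error_rate > 0.05 or obs.latency_ms > 500:
        return Situation.DEGRADED
    return Situation.NORMAL

def intervene(situation: Situation) -> str:
    """Control: pick an intervention appropriate to the situation."""
    return {
        Situation.NORMAL: "no-op",
        Situation.DEGRADED: "shed-load",     # e.g. partially close a control valve
        Situation.CRITICAL: "circuit-break", # stop calling the failing dependency
    }[situation]

# The closed loop: observe -> assess -> intervene.
action = intervene(assess(Observation(error_rate=0.08, latency_ms=350)))
```

The point of the sketch is not the thresholds but the shape: observation, situation, and intervention share one small model that both humans and machines can read, which is precisely what today's tooling lacks.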

Safer Steering with Signaling

Cognition and control will never emerge from data and details. Traces, metrics, and logs are too low-level and noisy to be used as an effective and efficient model for tracking, predicting, and learning from human and machine interventions within a system.
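By contrast, a signal-oriented design reduces each low-level measurement to a small, nameable vocabulary at the point of emission. A minimal sketch, with an assumed signal set and an illustrative threshold that a real system would calibrate per operation:

```python
from enum import Enum
from typing import Optional

class Signal(Enum):
    SUCCEED = "succeed"
    SLOW = "slow"
    FAIL = "fail"

SLOW_MS = 500  # illustrative threshold, not a recommendation

def emit(latency_ms: Optional[float]) -> Signal:
    """Reduce one raw measurement to a coarse signal.
    None means the call never completed."""
    if latency_ms is None:
        return Signal.FAIL
    return Signal.SLOW if latency_ms > SLOW_MS else Signal.SUCCEED

# A noisy stream of raw measurements becomes a short, legible
# sequence of signals that humans and machines can share.
signals = [emit(ms) for ms in (120, 640, None, 90)]
```

The reduction is lossy by design: what survives is exactly the part of the detail that an intervention could act on.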

Regardless, such yesteryear approaches are not sustainable. Ultimately, observability and controllability must be embedded directly within the application software. Software has not been imbued with self-reflection and self-adaptability because observability instrumentation rarely considers the need for local decision-making and steering through control valves or similar control-theoretic techniques. Instead of thinking about data, pipelines, and sinks, engineers need to refocus on the significance of signals and how they should be scored in inferring status; otherwise, the next 20 years will be much the same.
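Scoring signals to infer status might look like the following sketch. The signal names, weights, and thresholds are assumptions for illustration: each signal in a recent window carries a score expressing how strongly it argues against health, and the averaged score is mapped to a status the software can act on locally.

```python
from enum import Enum

class Signal(Enum):
    SUCCEED = "succeed"
    SLOW = "slow"
    FAIL = "fail"

class Status(Enum):
    OK = "ok"
    DEGRADED = "degraded"
    DOWN = "down"

# Hypothetical scores: how strongly each signal argues against health.
SCORES = {Signal.SUCCEED: 0, Signal.SLOW: 1, Signal.FAIL: 3}

def status(window: "list[Signal]") -> Status:
    """Score a recent window of signals and infer a status (thresholds illustrative)."""
    if not window:
        return Status.OK
    score = sum(SCORES[s] for s in window) / len(window)
    if score >= 2.0:
        return Status.DOWN
    if score >= 0.5:
        return Status.DEGRADED
    return Status.OK
```

Because status is inferred from scored signals rather than read off raw metrics, the same small model serves prediction, learning, and local steering without shipping the underlying detail anywhere.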