Spatial: Local versus Remote

A fundamental difference between Humainary (a modern open-source Observability instrument toolkit) and OpenTelemetry (a vendor-driven standard for Observability based on yesteryear technologies) is where measurements (observations) are processed.

For Humainary, the goal is to encourage, as far as possible, the analytical processing of observations at the source of event emittance and in the moment of the situation (the context under assessment). For OpenTelemetry, by contrast, the focus of the project and codebase is moving the collected data along one or more type-delineated pipelines into a black hole (a remote data sink endpoint) via a collector process.

Humainary: Signaling Circuits

The approach taken with Humainary is to enable the effective and efficient design, development, and deployment of Observability switch-signaling circuitry that operates largely locally by default but can operate in a remote or simulated mode with little effort beyond the configuration (environment) of a context.
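As a minimal sketch only, with hypothetical names that are not drawn from the Humainary libraries, such a local-first circuit might look like this in Java: instruments emit signals to an in-process circuit, subscribers assess them at the point and moment of emittance, and a remote (or simulated) mode is enabled purely through the configuration of the enclosing context.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Hypothetical sketch of a local-first signaling circuit; none of these
// types or names are taken from the Humainary libraries themselves.
enum Signal { SUCCEED, FAIL, ELAPSE, EXCEED }

final class Circuit {
  private final List<Consumer<Signal>> subscribers = new CopyOnWriteArrayList<>();

  // Subscribers run in-process, at the point and moment of emittance.
  void subscribe(Consumer<Signal> subscriber) {
    subscribers.add(subscriber);
  }

  void emit(Signal signal) {
    subscribers.forEach(s -> s.accept(signal));
  }
}

final class Example {
  public static void main(String[] args) {
    Circuit circuit = new Circuit();

    // Local assessment by default: reason about the signal where it happens.
    circuit.subscribe(signal -> {
      if (signal == Signal.EXCEED) {
        // react immediately, e.g. shed load or open a breaker
      }
    });

    // Remote (or simulated) operation is a configuration concern of the
    // context, not a change to the instrumented code.
    if ("remote".equals(System.getenv().getOrDefault("CIRCUIT_MODE", "local"))) {
      circuit.subscribe(signal -> {
        // forward a condensed form of the signal to a remote endpoint
      });
    }

    circuit.emit(Signal.EXCEED);
  }
}
```

The instrumented code is identical in every mode; only the set of subscribers attached by the context changes.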

An important objective here is to enable the growth of a diverse community of professional and experienced observability and controllability engineers.

We envisage engineers being able to rapidly innovate in designing and developing novel instruments for measuring, sensing, observing, analyzing, reasoning about, and reacting to behavioral (activity) and structural (state) changes as they happen, all within the same region of space and time.

Humainary offers out-of-the-box channels of communication and coordination across a diverse set of instrument libraries that operate in near real time and within the local process space of a (micro)service. Remote communication occurs only once local measurement and monitoring have been transformed (compressed, contextualized) into information at higher-order levels and scales.
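Again as a rough sketch with purely illustrative names, that division of labor might look like this: raw observations are recorded and assessed entirely inside the process, and only a compressed, contextualized summary, here a single status value, is handed to whatever remote channel the service uses.

```java
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch: raw observations stay in-process; only a
// higher-order summary leaves the service. Names are hypothetical.
final class LocalMonitor {
  private final LongAdder calls = new LongAdder();
  private final LongAdder failures = new LongAdder();

  // Called on the hot path, entirely within the local process space.
  void record(boolean failed) {
    calls.increment();
    if (failed) failures.increment();
  }

  // Contextualize and compress the raw counts into a single status.
  String summarize() {
    long total = calls.sum();
    long failed = failures.sum();
    if (total == 0) return "UNKNOWN";
    double rate = (double) failed / total;
    return rate > 0.05 ? "DEGRADED" : "STABLE";
  }
}

final class Reporter {
  public static void main(String[] args) {
    LocalMonitor monitor = new LocalMonitor();
    monitor.record(false);
    monitor.record(true);

    // Only this higher-order information need cross the process boundary,
    // e.g. published periodically to a remote endpoint.
    System.out.println("service status: " + monitor.summarize());
  }
}
```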

OpenTelemetry: Data Pipes

The approach taken by OpenTelemetry, on the other hand, is akin to plugging a network cable into an outbound port and letting packets pass along freely and without inspection, when they are not being dropped outright, as happens when the workload becomes too high, which occurs more often than it should and invariably at the moment it should not.
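For comparison, a typical OpenTelemetry SDK pipeline in Java (the endpoint and sizing values below are illustrative) hands spans to a batch processor whose bounded queue is flushed to a collector; the SDK forwards the data without analyzing it, and spans arriving while the queue is full are dropped.

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
import java.time.Duration;

public final class OtelPipeline {
  public static void main(String[] args) {
    // Spans are queued and exported in batches to a collector endpoint;
    // the SDK performs no analysis of the data it forwards.
    BatchSpanProcessor processor = BatchSpanProcessor.builder(
            OtlpGrpcSpanExporter.builder()
                .setEndpoint("http://localhost:4317") // collector, OTLP/gRPC
                .build())
        .setMaxQueueSize(2048)            // spans beyond this are dropped
        .setMaxExportBatchSize(512)
        .setScheduleDelay(Duration.ofSeconds(5))
        .build();

    OpenTelemetrySdk sdk = OpenTelemetrySdk.builder()
        .setTracerProvider(
            SdkTracerProvider.builder().addSpanProcessor(processor).build())
        .build();

    Tracer tracer = sdk.getTracer("example");
    Span span = tracer.spanBuilder("work").startSpan();
    span.end(); // handed to the queue; analysis, if any, happens remotely
  }
}
```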

The approach is understandable given that this is a standard driven by application performance monitoring (APM) vendors and ex-Google engineers who set out to solve the 20-plus-year-old problem of tagging and tracing requests across processes (services), in effect a reinvention of the Application Response Measurement (ARM) standard that IBM probably still supports in its legacy mainframe and middleware products today. Naturally, many cloud vendors have a commercial interest in pushing this big fat pipeline approach.