Datadog launches Observability Pipelines for enterprise data management

Datadog has launched a new service to help organizations aggregate, filter, and route large volumes of observability data.

Dubbed Observability Pipelines, the vendor-agnostic tool gives customers a unified view of infrastructure and application metrics, logs, and traces.

According to Datadog, this granular view helps lower costs, reduce technology lock-in, standardize data quality, improve compliance, and scale observability practices.

Most notably, businesses can gather, enrich, transform, sanitize, and route observability data from any source to any destination at petabyte scale. Rule-based throttles and reactive routing strategies can also be deployed to protect against data spikes.
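To make the idea concrete: a rule-based throttle of the kind described above is often implemented as a token bucket that admits events up to a steady rate, absorbs short bursts, and drops or reroutes the overflow. The sketch below is a generic illustration of that pattern, not Datadog's actual implementation; the class name and parameters are hypothetical.

```python
import time

class TokenBucket:
    """Generic token-bucket throttle (illustrative, not Datadog's code).

    Admits at most `rate` events per second on average, with bursts of up
    to `capacity` events. Rejected events could be dropped or rerouted to
    an overflow destination by the surrounding pipeline.
    """

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # event admitted downstream
        return False      # event throttled

# Example: a bucket with burst capacity 3 and no refill admits exactly
# three events, then throttles the rest.
bucket = TokenBucket(rate=0.0, capacity=3)
decisions = [bucket.allow() for _ in range(5)]
```

A real pipeline would typically key such buckets per source or per rule, so one noisy service cannot starve the rest of its quota.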

If need be, organizations can also choose to redact sensitive data from a data stream before it leaves the company network.
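In-stream redaction of this sort usually boils down to pattern matching over each event before it is shipped. The following Python sketch shows the general technique with hypothetical patterns and placeholder text; it is not Datadog's processor, and real deployments would use vetted patterns for their own data.

```python
import re

# Illustrative patterns only; production redaction rules would be
# organization-specific and more carefully validated.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(event: str) -> str:
    """Replace sensitive substrings before the event leaves the network."""
    for name, pattern in PATTERNS.items():
        event = pattern.sub(f"[REDACTED:{name}]", event)
    return event

# Example: both the email address and the card number are scrubbed.
redact("user bob@example.com paid with 4111 1111 1111 1111")
```

Running redaction at the pipeline layer, rather than in each application, is what lets this happen uniformly before data crosses the network boundary.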

"As the amount of telemetry continues to grow at an organization, teams are often completely overwhelmed—if they're not blind to—the runaway costs, reliability and compliance risks that come from a lack of visibility into infrastructure and application data," said Zach Sherman, senior product manager at Datadog.

"We built Datadog Observability Pipelines to give organizations a powerful way to take back control of their data, without compromising visibility for engineering, security and SRE teams," added Sherman.

Datadog’s Observability Pipelines also ships with an extensive set of built-in integrations and connectors, letting organizations collect data and route it quickly to their existing tools.