MDAI Observers are configurable telemetry processors built on OpenTelemetry Collectors that turn existing telemetry into Prometheus-compatible metrics—enabling detailed, customizable visibility into your system’s data flows. Designed for flexibility and scalability, they integrate easily with standard tools and empower teams to monitor exactly what matters most.
Introducing MDAI Observers – Smart, Configurable Telemetry Monitors
What’s an Observer?
MDAI Observers are a powerful feature of the MDAI Hub, built on top of custom OpenTelemetry Collectors. They process telemetry from your existing pipelines and generate meaningful Prometheus-compatible metrics—giving you visibility into the flow and volume of telemetry data in your system.
Observers let you measure your telemetry volume in bytes, grouped by a set of attributes you control. This gives you insight into your telemetry throughput beyond what the OpenTelemetry count connector or the collector's built-in metrics offer.

Let’s say you want to track how much telemetry each of your services is sending. With an Observer, you can measure data volumes grouped by the service_name attribute and export those metrics to the Prometheus instance bundled with your MDAI Hub.
spec:
  observers:
    - name: service_bytes
      resourceRef: observer-collector
      labelResourceAttributes:
        - service_name
      bytesMetricName: bytes_received_by_service_total
This Observer emits the total bytes received per service_name. Prometheus can scrape that and record the value over time for dashboards and/or automation!
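Because the metric is a cumulative counter, throughput is the rate of change between scrapes, which is what PromQL's rate() function computes for you. A minimal sketch of that arithmetic in Python, using made-up sample values:

```python
# Per-service throughput derived from two scrapes of a cumulative byte counter.
# The scrape values below are hypothetical; in practice Prometheus does this
# for you, e.g. rate(bytes_received_by_service_total[5m]).

def bytes_per_second(prev, curr, interval_s):
    """Rate of change of a counter between two scrapes, per label value."""
    return {svc: (curr[svc] - prev[svc]) / interval_s for svc in curr}

# Two scrapes, 5 seconds apart (values in bytes).
scrape_1 = {"checkout": 1_000_000, "auth": 250_000}
scrape_2 = {"checkout": 1_050_000, "auth": 251_000}

rates = bytes_per_second(scrape_1, scrape_2, interval_s=5)
# checkout is sending 10000.0 bytes/s, auth 200.0 bytes/s
```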
Want to go deeper? How about counting the number of error logs—split by service and region? Just configure an Observer to filter logs by severity and group by the relevant attributes:
spec:
  observers:
    - name: errors_by_region
      resourceRef: observer-collector
      labelResourceAttributes:
        - service_name
        - region
      countMetricName: service_errors_by_region_total
      filter:
        error_mode: ignore
        logs:
          log_record:
            - 'severity_number < SEVERITY_NUMBER_ERROR'
This Observer filters down to just error logs for you and labels the count metric with service_name and region, so you can slice by either attribute, or both!
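Note the filter semantics: in the OpenTelemetry filter processor, log records matching a condition are dropped, so 'severity_number < SEVERITY_NUMBER_ERROR' discards everything below error severity and keeps the rest. A small Python sketch of that keep/drop logic, using the OTel log data model's severity numbers (ERROR is 17):

```python
# OTel filter-processor semantics: records matching the condition are DROPPED.
# SEVERITY_NUMBER_ERROR is 17 in the OpenTelemetry log data model.
SEVERITY_NUMBER_ERROR = 17

def apply_filter(records):
    """Drop records whose severity_number < SEVERITY_NUMBER_ERROR."""
    return [r for r in records
            if not (r["severity_number"] < SEVERITY_NUMBER_ERROR)]

logs = [
    {"severity_number": 9,  "body": "INFO: request handled"},
    {"severity_number": 13, "body": "WARN: slow response"},
    {"severity_number": 17, "body": "ERROR: upstream timeout"},
    {"severity_number": 21, "body": "FATAL: out of memory"},
]

kept = apply_filter(logs)  # only the ERROR and FATAL records survive
```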
Scaling Observers: You’re in Control
Observers run on specialized OpenTelemetry Collectors that you manage. You define how they scale—just like any other Kubernetes resource. Whether you’re running a lightweight dev cluster or a high-throughput production environment, you can tune memory, CPU, and replica counts to meet your performance needs.
spec:
  observerResource:
    name: observer-collector
    image: public.ecr.aws/decisiveai/observer-collector:0.1
    replicas: 3
    resources:
      limits:
        memory: "512Mi"
        cpu: "200m"
      requests:
        memory: "128Mi"
        cpu: "100m"
Standards-Based, Plug-and-Play
The metrics produced by Observers are exposed through a standard Prometheus metrics endpoint, making them easily consumable—not just by the built-in Prometheus instance, but also by any compatible tool, including other OpenTelemetry Collectors.
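Concretely, a scrape of that endpoint returns plain text in the Prometheus exposition format. A rough Python sketch of parsing one counter family from it (the sample payload and label values are made up):

```python
import re

# Hypothetical sample of what an observer's /metrics endpoint might return.
SAMPLE = """\
# HELP bytes_received_by_service_total Total bytes received per service
# TYPE bytes_received_by_service_total counter
bytes_received_by_service_total{service_name="checkout"} 1050000
bytes_received_by_service_total{service_name="auth"} 251000
"""

# Matches simple single-label counter lines: name{service_name="..."} value
LINE_RE = re.compile(r'^(\w+)\{service_name="([^"]+)"\}\s+([\d.eE+-]+)$')

def parse_exposition(text):
    """Parse single-label counter lines into {service: value}."""
    out = {}
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if m:  # comment lines (# HELP / # TYPE) are skipped
            out[m.group(2)] = float(m.group(3))
    return out

metrics = parse_exposition(SAMPLE)
```

In practice you would let Prometheus or an OpenTelemetry Collector's prometheus receiver do this parsing, as shown in the config below.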
# your-otel-config.yaml
receivers:
  prometheus/observer:
    config:
      scrape_configs:
        - job_name: 'observer-collector'
          scrape_interval: 5s
          static_configs:
            - targets: ['mdaihub-sample-observer-collector-service.mdai.svc.cluster.local:8899']

processors:
  resource: {...}

exporters:
  otlp/your_telemetry_destination: {...}

service:
  pipelines:
    metrics:
      receivers: [prometheus/observer]
      processors: [resource]
      exporters: [otlp/your_telemetry_destination]
Whether you're monitoring service volumes, error rates, or something more custom, MDAI Observers help you turn raw telemetry into actionable insight—all using open standards and familiar tools.
Try It Out
Ready to get started? Check out our Observers Documentation to see how to configure your first Observer and integrate it with your telemetry pipeline.