Push or Pull? Spring Boot 4 with OpenTelemetry

With Spring Boot 4, it has become remarkably easy to send observability data in the OpenTelemetry format. Logs, traces, and metrics can now be exported to any OpenTelemetry backend (or a Collector) with just a few lines of configuration. But should you actually do that? Just because you can doesn't mean you should.
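
As a rough sketch of what those few lines can look like, the following application.yaml pushes all three signals over OTLP/HTTP. The property names follow recent Spring Boot releases and may shift slightly in Spring Boot 4; the endpoints are placeholders for your backend or Collector.

```yaml
# application.yaml -- sketch, assuming the OpenTelemetry starter/bridges are on the classpath;
# property names follow recent Spring Boot releases and may differ slightly in Spring Boot 4
management:
  otlp:
    tracing:
      endpoint: http://localhost:4318/v1/traces   # OTLP/HTTP endpoint for spans
    metrics:
      export:
        url: http://localhost:4318/v1/metrics     # OTLP/HTTP endpoint for metrics
    logging:
      endpoint: http://localhost:4318/v1/logs     # OTLP/HTTP endpoint for log records
```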

Let's take a look at the individual signals and how they can be transmitted.

Traces

With traces, the situation is fairly straightforward. Even without OpenTelemetry, the common approach is for your application to take the active role: traces are created automatically or via the Micrometer API and sent to a backend. Whether the spans go to Zipkin via OpenZipkin Brave, to Jaeger or Tempo via OpenTelemetry, or pass through an OpenTelemetry Collector first, the fundamental architecture stays the same.
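
For illustration, a manually created span via the Micrometer Observation API might look like the sketch below. The service and observation names are made up; the configured tracer bridge turns the observation into a span and ships it off.

```java
import io.micrometer.observation.Observation;
import io.micrometer.observation.ObservationRegistry;
import org.springframework.stereotype.Service;

// Sketch: a hypothetical service that wraps business logic in an observation.
// Spring Boot provides the ObservationRegistry bean; the configured tracer
// bridge reports the observation as a span to the tracing backend.
@Service
class CheckoutService {

    private final ObservationRegistry registry;

    CheckoutService(ObservationRegistry registry) {
        this.registry = registry;
    }

    void checkout(String orderId) {
        Observation.createNotStarted("shop.checkout", registry)
                .lowCardinalityKeyValue("channel", "web")
                .highCardinalityKeyValue("order.id", orderId)
                .observe(() -> {
                    // business logic goes here; timing and errors are recorded
                });
    }
}
```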

Depending on your setup, one of two Micrometer tracers is used: the Micrometer Tracing bridge for OpenZipkin Brave (micrometer-tracing-bridge-brave) or the bridge for OpenTelemetry (micrometer-tracing-bridge-otel).

The latter is already included in the Spring starter spring-boot-starter-opentelemetry.

Metrics

For many years, Prometheus has been the go-to way to collect metrics from a Spring Boot application. An external Prometheus server periodically scrapes the /actuator/prometheus endpoint of your application. The application itself only exposes the data; Prometheus does the heavy lifting here (pull).
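
Wired up, that pull setup might look roughly like this sketch. It assumes micrometer-registry-prometheus is on the classpath; the job name and target are placeholders.

```yaml
# application.yaml -- expose the Prometheus endpoint (the app only serves data)
management:
  endpoints:
    web:
      exposure:
        include: prometheus

---
# prometheus.yml -- the Prometheus server does the actual collection (pull)
scrape_configs:
  - job_name: spring-boot-app          # placeholder job name
    metrics_path: /actuator/prometheus
    scrape_interval: 15s
    static_configs:
      - targets: ['app:8080']          # placeholder host:port
```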

With OpenTelemetry, your application can send metrics on its own instead. This way, you can get rid of Prometheus as a separate server. Metrics can go straight to VictoriaMetrics, ClickHouse (ClickStack), or your cloud provider of choice. Here, your application takes on the active responsibility (push).

A third, hybrid option comes from using the OpenTelemetry Collector. Through an OpenTelemetry receiver for Prometheus, the Collector can scrape the /actuator/prometheus endpoint (pull) and forward metrics in the OpenTelemetry format. Even though this receiver is not yet marked as stable (currently beta), it's perfectly viable for production use. With distributed Collector instances, however, you need to make sure they don't scrape the same application twice.
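
A minimal Collector pipeline for this hybrid setup could look like the following sketch; the scrape target, interval, and exporter endpoint are placeholders for your environment.

```yaml
# OpenTelemetry Collector config -- scrape /actuator/prometheus, forward as OTLP
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: spring-boot-app
          metrics_path: /actuator/prometheus
          scrape_interval: 15s
          static_configs:
            - targets: ['app:8080']     # placeholder host:port

exporters:
  otlphttp:
    endpoint: http://backend:4318       # placeholder OTLP/HTTP backend

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlphttp]
```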

The push approach is appealing because there's no detour through another format. It can also eliminate services needed in a pull setup (Prometheus) or simplify them (OpenTelemetry Collector).

At the same time, it puts additional load on your application for tasks it wasn't really designed to handle. Preparing, buffering, and transmitting metrics consumes CPU and memory, resources that might be better spent serving actual business requests.

For short-lived workloads (serverless, batch jobs) or very simple setups, this approach can work just fine. For everything else, pulling metrics through an Actuator endpoint is the safer bet.

Logs

When it comes to logs, the picture looks similar to metrics.

The traditional approach is to log to stdout. These logs are then picked up and processed by tools like Grafana Alloy or the OpenTelemetry Collector.

This method fully decouples your application from the monitoring infrastructure. The app simply writes a stream of text or JSON. The responsibility for transport and persistence lies elsewhere.

On the other side, there's again the push approach via OpenTelemetry. A specialized appender (e.g., for Logback or Log4j2) can be configured to send logs directly in the OpenTelemetry format over the network to a backend or Collector.
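
As a sketch of that push variant with Logback: the OpenTelemetry Logback appender (from opentelemetry-logback-appender-1.0) hands log records to the OpenTelemetry SDK, which exports them over OTLP. Depending on your setup, the appender still has to be wired to an OpenTelemetry instance, either by the starter or by calling OpenTelemetryAppender.install(...) yourself.

```xml
<!-- logback-spring.xml -- sketch: ship logs via the OpenTelemetry appender,
     but keep a console appender so something still lands on stdout -->
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{ISO8601} %-5level [%thread] %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <appender name="OTEL"
            class="io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender"/>

  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
    <appender-ref ref="OTEL"/>
  </root>
</configuration>
```

Keeping the console appender alongside the network one at least preserves stdout as a fallback for the failure scenarios discussed next.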

For most production environments, however, pushing logs carries too much risk when it comes to troubleshooting. In critical failure scenarios — a crash on startup, an OutOfMemoryError, or network issues — the very logs that would explain the root cause never make it to the backend. Logging to stdout is simply more robust.

Again: for short-lived workloads or very simple setups, the push approach can still be a reasonable choice.