In our current blog post series we integrate inspectIT Ocelot with popular observability and monitoring solutions. Ocelot is our state-of-the-art open-source Java agent and uses established open-source standards, allowing us to flexibly integrate the agent with different monitoring backends. In this post we instrument an application with Ocelot agents to collect traces and metrics that we then store and analyze in Elastic APM.
About Elastic APM
Elastic APM is the recently introduced APM (Application Performance Management) solution from Elastic. The Elastic Stack (aka the ELK stack, for Elasticsearch, Logstash, Kibana) is a popular open-source tool set built around Elasticsearch, a document-oriented database and search engine. Chances are high that you know Elastic from use cases such as logging, metrics monitoring, analytics, or simply fast search.
With Elastic APM, Elastic now aims to support APM use cases as well, collecting metrics and traces from applications to enable an integrated monitoring experience in conjunction with Elastic’s proven capabilities for log and infrastructure monitoring. APM is available as part of Elastic’s free base offering and primarily consists of an additional APM server component deployed in front of Elasticsearch, plus agents that collect data from different application technologies.
For this blog post series we use a setup designed around the popular Java-based sample application Spring PetClinic. PetClinic is a small microservices style application that allows pet owners to schedule appointments with vets.
For the Elastic APM scenario, our docker-compose setup includes the Spring PetClinic application, Ocelot agents and config server for instrumentation, a load generator, and all the Elastic components required to collect, process and view the data (Elasticsearch, Kibana, APM Server, Metricbeat).
To spin up the demo locally on your device, just download the assets from GitHub and run “docker-compose up” in the “ocelot-meets-elastic-apm” directory.
Integrating Ocelot through Jaeger
Regarding code instrumentation and data collection from the application components, our setup for Elastic APM is not different from other scenarios where we process the monitoring data in backends such as Datadog, Wavefront, or the combination of Jaeger and Prometheus. On the data processing side, however, we need to find a way to ingest the collected data into Elastic.
Elastic’s own agents use a custom, non-standard format and interface to send trace data to the APM server component. To use this interface to export data from Ocelot to Elastic, we would need to implement a custom trace exporter for Ocelot. While this is absolutely doable, there thankfully is an easier way: in recent versions, Elastic APM supports an experimental Jaeger ingestion endpoint (not yet available on Elastic Cloud). Jaeger is a standardized open-source distributed tracing system and one of the formats natively supported by Ocelot.
To enable the Jaeger HTTP endpoint in our demo setup, we configured two properties in the Elastic APM configuration (ocelot-meets-elastic-apm/apm-server.yml):
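A sketch of those two settings, following the `apm-server.jaeger.http` options of the APM server’s Jaeger integration (surrounding configuration omitted):

```yaml
apm-server:
  jaeger:
    http:
      # accept traces in Jaeger HTTP (Thrift) format
      enabled: true
      host: "apm-server:14268"
```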
This enables the Jaeger HTTP endpoint on port 14268. We set the hostname “apm-server” because the Ocelot agents will access the APM server through its container name. The only thing left for us to do now is to tell Ocelot to ship traces to the Elastic APM server. Because Ocelot agents can fetch their configuration from a central configuration service, we adjusted only a single line in our Ocelot demo configuration:
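A sketch of the relevant excerpt, using Ocelot’s Jaeger exporter settings (the changed line is the endpoint URL pointing at the APM server container):

```yaml
inspectit:
  exporters:
    tracing:
      jaeger:
        # ship traces to the APM server's Jaeger HTTP endpoint
        url: http://apm-server:14268/api/traces
```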
The excerpt above is from ocelot-meets-elastic-apm/configuration-server/files/all/general.yml. This file holds configuration that applies to all the Ocelot agents in the demo, regardless of which PetClinic service they instrument. At runtime, the agents will poll the configuration server and fetch any updated configuration. With the configuration above, Ocelot agents will export traces to the APM server endpoint in Jaeger format. The fact that this endpoint is Elastic APM is fully transparent to Ocelot, as we just configure the usual Jaeger exporter.
Scraping Metrics with Metricbeat
Besides traces, Ocelot agents also collect application performance metrics, think throughput (requests per interval), latency (response time distribution), and error rate (errors per interval). As exposing and collecting metrics is very standardized in the open-source observability space, we did not need to configure much to get Ocelot metrics into Elastic. We let Ocelot agents expose their metrics in Prometheus format (which is our default configuration and recommendation) and collect these with Elastic’s Metricbeat.
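Exposing metrics in Prometheus format is Ocelot’s default behavior; roughly, the corresponding exporter settings look like this (port 8888 is Ocelot’s default Prometheus endpoint):

```yaml
inspectit:
  exporters:
    metrics:
      prometheus:
        # serve metrics on http://<host>:8888/metrics for scraping
        enabled: true
        port: 8888
```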
Metricbeat is a versatile tool that is part of Elastic’s Beat platform and can collect all forms of metric data (time series). As such, it can also scrape Prometheus endpoints. For the demo, we simply added a Metricbeat to our docker-compose setup and annotated the application containers in a way that Metricbeat will automatically scrape them for Prometheus metrics and deliver them to Elasticsearch. This “automagical” integration works through Metricbeat’s hints-based autodiscover mechanism for Docker environments.
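For illustration, the container annotations could look roughly like this in docker-compose; the label keys follow Metricbeat’s hints-based autodiscover convention, and the service name and port are assumptions based on the demo setup:

```yaml
services:
  api-gateway:
    # ...image, agent, and port configuration omitted...
    labels:
      # hint Metricbeat to scrape this container with its Prometheus module
      co.elastic.metrics/module: prometheus
      co.elastic.metrics/metricsets: collector
      co.elastic.metrics/hosts: "${data.host}:8888"
      co.elastic.metrics/metrics_path: /metrics
```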
With trace ingestion through the Elastic APM server and metric scraping through Metricbeat, our overall monitoring architecture now looks as shown below (also check out this landscape on OpenAPM.io):
Working with Ocelot Data in Kibana
Now that we configured our setup to get application traces and metrics from Ocelot into Elasticsearch, we can have a look at the data in Kibana. Kibana is Elastic’s graphical frontend to work with all kinds of Elastic data. In our demo, you can access Kibana at http://<your docker host IP goes here>:5601. In Kibana, we can click the “APM” button on the sidebar on the left to load the APM section of the tool. We are then greeted with an overview of our PetClinic services and APM KPIs for each one on the right.
From there, we can drill down into data from the different services and investigate individual traces captured by Ocelot. The Elastic APM overview for the central “api-gateway” component of the PetClinic shows statistics for this service and the transactions that flow through it.
The example below shows a distributed trace captured by Ocelot. We can see how the request traverses the different components and where precious processing time is spent.
A trace consists of individual spans, for which we can investigate details such as an actual SQL statement that was executed for this request.
Finally, if we click the “Metrics” button on the Kibana sidebar, we get to the metrics view. There we can use the metrics from Ocelot to build a simple visualization of the HTTP response time per service. Note that we expose and scrape metric data in Prometheus format. In this mode, Ocelot exposes the HTTP response time as a monotonically increasing counter. We then use Elastic’s “rate” feature to compute and visualize the response time per interval as a derivative of the counter metric.
Going Beyond Technical Data
So far we saw that we can easily process and visualize APM data from Ocelot agents in Elastic. However, Ocelot’s dynamic data collection capabilities lend themselves well to expand the scope of this demo beyond traditional APM use cases.
With Ocelot, we can collect arbitrary business data from the application without ever touching the application source code or its logging configuration. Through the Ocelot configuration server included in the demo we could even add additional data collection configuration (instrumentation) through the configuration UI at runtime, without ever restarting the application.
The snippet below tells Ocelot to capture the “pet type” as metadata whenever a visit is created during request processing. It also exposes a “visits” metric to report the number of visits by pet type. While this configuration is already included with the demo, we could apply a similar configuration at runtime and see the results in Elastic shortly afterwards. An abridged excerpt (the rule that wires the scope to the metric is omitted):

```yaml
inspectit:
  metrics:
    definitions:
      visits:
        unit: "1"
        description: "The number of visits."
        views:
          '[visits/count]':
            aggregation: COUNT
            tags:
              # split the visits metric by the captured pet type
              pet_type: true
  instrumentation:
    scopes:
      # matches the PetClinic method that creates a visit
      s_visit_create:
        methods:
          - name: create
```
With this configuration, we can investigate the “pet type” metadata on individual traces and chart the visits per minute grouped by pet type.
Combining pet type metadata with performance metrics, we can visualize the application response time grouped by pet type. We discover that requests with pet type “dog” are unusually slow. This is intentional: we changed the demo code to artificially slow down “dog” requests.
While pet type metadata and dogs are cute examples, this demo hints at the potential of collecting arbitrary business data from the application on demand, without invasive code changes. Instead of just pet types, you could implement use cases to measure and track different car models in your connected car system.
Why Use Ocelot?
In conclusion, this demo shows that we can use Ocelot agents with Elastic without any obstacles. However, we are convinced that Ocelot is more than just a drop-in replacement for Elastic’s own Java agent. With its dynamic instrumentation and configuration capabilities, it is superior to Elastic’s Java agent in multiple aspects.
While you need to hardcode configuration for Elastic’s agent before startup, you can adjust Ocelot configuration remotely and at runtime. To extend the basic standard instrumentation of the Elastic agent, you need to adjust your application code to gain more visibility and collect additional data. With Ocelot, all this can be done at runtime and without ever touching the application’s source code.
Especially in combination with Elastic, using Ocelot to extract business data can open up new use cases for you. With Ocelot, you can leverage Elastic’s analytics capabilities without the need to change your application or its logging to get data into Elastic.
Learn more about Ocelot here and on GitHub. If you’re into open-source observability, monitoring, or APM in general, also check out our OpenAPM.io initiative. Let us know what you think in the comments section.
Posts in this Series
- Introductory Post: Ocelot meets Friends – Enhancing Modern Observability Platforms
- Part 1 – Ocelot meets Bits – Enhanced Observability for Datadog
- Part 2 – Ocelot meets Lightstep – Enhanced Tracing with Lightstep
- Part 3 – Ocelot meets Wavefront – Enhanced Tracing with Wavefront
- Part 5 – Ocelot meets SignalFX
- Part 6 – Ocelot meets Instana
- Part 7 – Ocelot meets NewRelic