10 March 2019

Grafana Data Source Provisioning

Configuring data backends in Grafana is an easy thing: just set the endpoint URL and credentials and you're good to go. However, once you find yourself in a dynamic environment with multiple instances of different data sources, doing this manually gets rather tedious very fast. Luckily, Grafana's provisioning features can do this job for you - automagically.

Manual Configuration Doesn’t Scale

In a small, static on-premises environment, configuring your Grafana data sources manually gets the job done. Just configure the connection to your single Elasticsearch cluster, maybe add the same for your APM system and favorite time series database, and you can focus on building useful Grafana dashboards based on the data. Maybe you will never have to touch the data source configuration again.

However, once you grow out of this constrained scenario, things get more complicated. Think about dynamic container environments in the cloud and add enterprise-level staging concepts. Now your data backends come and go. Maybe it’s your Grafana that is getting redeployed all the time by your CI/CD pipeline. At this point, manual configuration of your data sources in Grafana just doesn’t work anymore.

Grafana Provisioning to the Rescue!

Now you don’t need to invest in complicated configuration scripts, because Grafana has the necessary tools already built-in. These provisioning features are designed to solve the challenges that come with the very scenario described above.

With Grafana’s provisioning features, you can manage your data source configuration in your source repository or some other configuration or secret management system. Through clever integration with your deployment pipelines, you then can push the necessary configuration details to your Grafana instance(s) as needed.

For data source provisioning, Grafana comes with two different approaches, file-based and API-based. Let’s have a look at both variants, using our very own data source plugin for CA APM as an example.

File-based Data Source Provisioning

Since version 5.0, you can provision your data sources to Grafana by preparing special YAML configuration files. This works for official Grafana data sources (e.g. Elasticsearch, Prometheus, InfluxDB) as well as for custom data source plugins (such as our CA APM data source), as long as the latter don’t require special configuration mechanisms that are not covered by the Grafana default. At startup, Grafana will look for these files and configure the defined data sources accordingly. As a starting point, take our configuration of a CA APM data source from the screenshot below:

In our simple example, the data source configuration for CA APM mainly consists of a name, an endpoint URL, and basic authentication credentials. You can take all of this and create a YAML file such as the following:
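A minimal sketch of such a file — the plugin id (novatec-apm-datasource), endpoint URL, and credentials here are illustrative placeholders, not the real values from our setup:

```yaml
apiVersion: 1

datasources:
  - name: CA APM
    type: novatec-apm-datasource   # assumed plugin id; use the id your plugin registers
    access: proxy
    url: https://apm.example.com:8444
    basicAuth: true
    basicAuthUser: admin
    secureJsonData:
      basicAuthPassword: secret    # encrypted by Grafana before it is stored
    version: 1
    editable: false
```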

As you can see, we can directly map the configuration properties seen in the Grafana UI to their equivalents in the configuration file. Useful additions when using file-based provisioning are the properties version: 1 and editable: false. These help you with managing the version history of your data source configuration and prevent (accidental) configuration changes from the UI. For a full picture of the possibilities, have a look at the relevant section in Grafana’s official documentation.

Once you place this file in the appropriate directory, Grafana will read and apply the data source configuration at startup. Taking this mechanism as a basis, you can then truly automate data source configuration by integrating it into your deployment pipelines, for example by using systems similar to Kubernetes’ ConfigMaps or by simply integrating the YAML files into your Grafana container image.
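As a sketch of the Kubernetes variant, reusing the hypothetical plugin id and endpoint from the example above, you could ship the provisioning file as a ConfigMap and mount it into Grafana’s provisioning directory (typically /etc/grafana/provisioning/datasources):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
data:
  # mount this ConfigMap at /etc/grafana/provisioning/datasources
  datasources.yaml: |
    apiVersion: 1
    datasources:
      - name: CA APM
        type: novatec-apm-datasource   # assumed plugin id
        access: proxy
        url: https://apm.example.com:8444
```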

A limitation of this solution is the fact that Grafana will only read this configuration once, at startup time. You cannot update your data source configuration without restarting the Grafana process. This might not be a problem in environments with continuous deployment, where you simply re-deploy Grafana as soon as you update the configuration. However, this can be a severe constraint in situations where you have a more static Grafana deployment that you cannot restart at will.

Another possible problem with the file-based provisioning is that you will inevitably have your basic authentication credentials on disk in plain text form. If these issues are problematic for you, have a look at data source provisioning through the Grafana HTTP API instead.

API-Based Data Source Configuration

Using Grafana’s HTTP API for data sources, we can accomplish the same as described above without relying on local configuration files. Take the following JSON data; it’s equivalent to our YAML example:

Given an API key, we can now post this body to Grafana’s API endpoint to create the CA APM data source:

To update an existing data source configuration, simply use PUT instead of POST and include the id property of the existing data source.

The data source API lends itself very well to situations where you have a static Grafana deployment, but want to update data source configurations dynamically. Probably even more than with file-based provisioning, you can automate all of this elegantly by integrating these API features into your CI/CD processes.

In case security is a concern, this can even somewhat mitigate the problem of credentials being stored on disk in plain text form. Before calling the API, you could retrieve credentials on the fly from secret management solutions such as Vault. They will then be inserted into Grafana’s database directly. In case of basic authentication, they will still be unencrypted though.
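A rough sketch of this idea, assuming Vault’s KV secrets engine and a hypothetical secret path — all hosts, paths, and values are placeholders:

```shell
# fetch the password from Vault at deploy time (secret path is an assumption)
APM_PASSWORD=$(vault kv get -field=password secret/grafana/ca-apm)

# inject it into the request body and push it straight to Grafana
curl -X POST https://grafana.example.com/api/datasources \
  -H "Authorization: Bearer $GRAFANA_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"name\": \"CA APM\", \"type\": \"novatec-apm-datasource\", \"access\": \"proxy\", \"url\": \"https://apm.example.com:8444\", \"basicAuth\": true, \"basicAuthUser\": \"admin\", \"secureJsonData\": {\"basicAuthPassword\": \"${APM_PASSWORD}\"}}"
```

This way the plain-text password only exists in the shell environment for the duration of the deployment step, never in a file on disk.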

This was our quick overview of just another useful Grafana feature. If you are interested in solutions for digital experience management, observability or other DevOps topics in general, come talk to us! Comment on this post directly or reach out to us via email at digital-experience@novatec-gmbh.de.

