07. February 2017

Automate Performance Regression Testing with Open Source Tools

Automate your performance regression testing with open-source tools. We automate our JMeter load tests with Jenkins, store the results in InfluxDB, and visualize them in Grafana. Our own inspectIT records system internals during the tests for further analysis.

Performing load tests on a regular basis is vital to catch performance regressions when a system changes. However, manually building the system, deploying it to a test environment, running the load test, and analyzing the results after every change is tiresome. Our colleagues recently described good practices for performance regression testing in Scrum-based software projects. In this blog post, we demonstrate how to integrate and automate this process in a typical CI environment using the following open-source tools:

  • Jenkins
  • Apache JMeter
  • InfluxDB
  • Grafana
  • inspectIT

The resulting setup can automatically point out potential performance implications of your changes. For merely running automated load tests, Jenkins and JMeter are enough, as we already described in a previous blog post. However, we extend this approach and use InfluxDB as a central data repository for all test results. This way we can monitor and explore the system’s performance under load in a rich Grafana dashboard. Additionally, we use inspectIT to record the system’s inner workings and get an in-depth view of what happened during the load test. If the test results indicate a performance regression, we can later use the recording for root cause analysis.

Starting Point

For our implementation we assume the following situation:

  • The source code of the target software project is available in a Git Repository.
  • Jenkins is installed with the plug-ins used below: the Git plug-in, the Maven Integration plug-in, the Performance plug-in, and the Post-Build Script plug-in.
  • The system under test is pre-configured with the inspectIT agent to facilitate instrumentation and monitoring.

In our example scenario we use Ticket Monster as the sample application. For simplicity, we run InfluxDB, Grafana, the inspectIT Server (CMR), and a WildFly 10 application server all on the same host. Although we use a Windows host in this example, the setup can easily be transferred to a distributed environment or to Unix hosts. Regarding monitoring, please refer to the inspectIT documentation on how to configure a system for instrumentation with inspectIT.
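
Attaching the agent to WildFly on Windows essentially means adding the agent options to the server’s JVM arguments, for example in bin\standalone.conf.bat. The following is only a sketch: the paths, ports, and agent name are placeholders, and the authoritative option names and default ports are listed in the inspectIT documentation.

    rem bin\standalone.conf.bat - append the inspectIT agent to WildFly's JVM options.
    rem All paths, ports, and names below are placeholders for this example.
    set "JAVA_OPTS=%JAVA_OPTS% -javaagent:C:\inspectit\agent\inspectit-agent.jar"
    set "JAVA_OPTS=%JAVA_OPTS% -Dinspectit.repository=localhost:9070"
    set "JAVA_OPTS=%JAVA_OPTS% -Dinspectit.agent.name=TicketMonster"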

Preparing the JMeter load test

To comply with our intended setup, we need to prepare our JMeter test plan first. For this post, we don’t focus on configuring load tests with JMeter, as proper design of a load test heavily depends on the system to be tested. However, we still need to adjust the JMeter test plan slightly to implement our approach.

First, we want the test configuration to be parameterizable. This allows us to set the load test’s configuration options directly through Jenkins, namely connection parameters (host and port of the system under test) and general test parameters (e.g. number of users/threads, test duration). For more details on how to parameterize a JMeter test plan via CLI parameters, see our blog post here.
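
As a sketch, the test plan reads its values through JMeter’s __P property function, and the properties are then supplied on the command line; the property and file names below are our own examples:

    rem Inside the test plan, e.g. in the HTTP Request Defaults and the Thread Group:
    rem   Server Name:       ${__P(targetHost,localhost)}
    rem   Port:              ${__P(targetPort,8080)}
    rem   Number of Threads: ${__P(users,10)}
    rem   Duration (s):      ${__P(duration,300)}
    rem A headless run that overrides these defaults:
    jmeter -n -t loadtest.jmx -l results.jtl ^
      -JtargetHost=ticketmonster-host -JtargetPort=8080 -Jusers=50 -Jduration=600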

Secondly, we want JMeter to store the test results in our InfluxDB. Therefore we use our InfluxDB backend listener plug-in for JMeter. To use and configure the plug-in, we simply put the plug-in’s jar-file into /lib/ext/ in JMeter’s installation directory. In our test plan we then add a new Backend Listener node and select the implementation for InfluxDB that we just added. In this example, we use the default configuration for InfluxDB and therefore don’t need to adjust the listener’s configuration any further. The plug-in will create the database “jmeter” automatically if it doesn’t already exist.
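
To check that results actually arrive during a test run, we can query InfluxDB’s HTTP API directly, for instance (assuming a default local InfluxDB 1.x installation listening on port 8086):

    rem List the measurements the backend listener has written into the "jmeter" database.
    curl -G "http://localhost:8086/query" --data-urlencode "db=jmeter" ^
      --data-urlencode "q=SHOW MEASUREMENTS"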

Configuring the InfluxDB backend listener in JMeter

Tying it all together in Jenkins

Now that we have prepared our test plan and the supporting components, we can configure our main process in Jenkins. We already described the idea of automated JMeter load tests in Jenkins in another blog post. For our current scenario, however, we want to build on this and leverage newer functionality of the Performance plug-in to actually indicate performance regressions. To implement our approach, the CI server has to perform the following steps:

  1. Check out the updated code from the repository
  2. Build the system
  3. Start an inspectIT recording and run the load test against the newly built system
  4. Evaluate the test results
  5. Mark the build accordingly, keep the inspectIT recording if appropriate

For this example scenario, we create a Maven project in Jenkins. As our sample project is hosted on GitHub, we configure the repository URL and the branch to be built using the Jenkins Git plug-in. Before configuring the main build job, we also add some parameters that let us adjust the Jenkins job later without changing details in every build step: the connection details of the WildFly server where our sample application will be deployed, and of the inspectIT CMR that we need to control the recording functionality.
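
Jenkins exposes these parameters to the build steps as environment variables, so a Windows batch step can reference them directly. The WildFly parameter names are the ones used in the Maven goals below; the CMR parameter names are placeholders of our own choosing:

    rem Quick sanity check of the connection parameters inside a batch build step.
    echo Deploy target: %WILDFLY_HOSTNAME%:%WILDFLY_PORT%
    echo inspectIT CMR: %CMR_HOST%:%CMR_PORT%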

Configuring the build job

We then configure the main build task by specifying our project’s root POM and the Maven goals needed to build and deploy the system. For our sample project, we use package wildfly:deploy “-Dwildfly.hostname=$WILDFLY_HOSTNAME” “-Dwildfly.port=$WILDFLY_PORT”, reusing the Jenkins parameters we defined earlier. After this build step, we specify a post step using the Execute Windows batch command option, as we are on a Windows platform. This script is called after a successful Maven build and performs the defined load test. First, it deletes a possibly existing JMeter result file from previous runs to prevent unwanted accumulation of results.


Next, the script creates an inspectIT storage and starts the recording by using curl to access the CMR’s REST interface (for curl on Windows, use appropriate binaries or Cygwin). The HTTP response includes the ID of the new storage. We store the ID in a file, because we might need it again later to delete the recording.

Finally, the script calls JMeter to perform the actual load test, see another blog post for details. We pass our Jenkins parameters to JMeter, which allows us to adjust the test plan directly through the Jenkins user interface.
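
A minimal sketch of such a post step is shown below. The file names, the CMR connection parameters, the %USERS% and %DURATION% test parameters, and especially the CMR REST paths are placeholders; the actual storage and recording endpoints, as well as the format of the response, are described in the inspectIT documentation.

    @echo off
    rem 1) Remove results of a previous run so the Performance plug-in only evaluates fresh data.
    if exist results.jtl del results.jtl

    rem 2) Create an inspectIT storage and start recording via the CMR's REST interface.
    rem    The path is a placeholder; the response contains the ID of the new storage,
    rem    which we keep in a file for the post-build steps (extract it from the
    rem    response first if necessary).
    curl -s "http://%CMR_HOST%:%CMR_PORT%/rest/storage/<create-and-start-recording>" > storage_id.txt

    rem 3) Run the load test headless; the backend listener streams results to InfluxDB,
    rem    the .jtl file is evaluated by the Performance plug-in afterwards.
    jmeter -n -t loadtest.jmx -l results.jtl ^
      -JtargetHost=%WILDFLY_HOSTNAME% -JtargetPort=8080 ^
      -Jusers=%USERS% -Jduration=%DURATION%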

Evaluating the load test results

After the main build task, we want to evaluate the obtained load test results. We therefore add Publish Performance test result report, provided by the Performance plug-in, as a new Post-build Action. After adding a new JMeter report, we have to tweak the settings for this report to fit our needs. Be careful not to use the default wildcard expression “**/*.jtl” to specify the report file; configure the actual path to the generated JMeter result file instead. The plug-in will find the files using the wildcard expression, but, for whatever reason, that configuration prevents builds from failing even if the test result doesn’t meet the defined requirements.

For the actual evaluation, we deviate from our previous post and use the newer Relative Threshold mode instead of the Error Threshold mode. This allows us to configure a relative comparison to detect performance regressions: the plug-in compares the current results with those of a previous build, where either the last stable build or a fixed build number can be selected for comparison. The evaluation can be based on different metrics: Average Response Time, Median Response Time, or Percentile Response Time.

Configuring the comparison of load test results based on relative thresholds

To detect possible regressions using the Relative Threshold mode, we define acceptable boundaries by specifying negative and positive percentage thresholds for unstable and failed results. If a comparison reports a relative deviation outside these boundaries, we consider this a regression and the plug-in marks the build as unstable or failed accordingly. For instance, with an unstable threshold of +10 % and a failed threshold of +25 %, an average response time that rises from 200 ms to 230 ms (+15 %) would mark the build unstable, but not failed. Finding suitable values is, however, a difficult task: on the one hand we want to detect potential regressions reliably, on the other hand we have to account for variations in the results caused by outside influences that we can’t control (e.g. the network).

Following this evaluation step, the build is marked according to the performance results. Based on that outcome, we now decide how to proceed with the inspectIT recording: if the build and the load test were successful, we can delete the recording; otherwise we keep it for further analysis. We perform the necessary actions using the Execute a set of scripts option provided by the Post-Build Script plug-in.

Conditional step to delete the inspectIT recording after a successful load test

After adding this additional Post-build action, we configure two Conditional steps:

  1. We use the first script to stop the recording, which we want to happen regardless of the build and load test outcome.
  2. With the second script we want to express that the recording is only to be deleted if the load test result was positive and no regression was detected. Therefore we specify that the script’s execution should depend on the Current build status using the Run? option. We select Success for both Worst status and Best status, because the script should only execute if the build was successful.

For both steps we again use a Windows batch script, similar to our main build task. To delete the recording, we read the storage ID from the file that we created in the main build script.
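
A sketch of the two batch steps, again with placeholder REST paths (the exact stop-recording and delete-storage endpoints are described in the inspectIT documentation):

    rem Conditional step 1 (always runs): stop the inspectIT recording.
    curl -s -X POST "http://%CMR_HOST%:%CMR_PORT%/rest/storage/<stop-recording>"

    rem Conditional step 2 (runs only if the build status is SUCCESS): delete the storage.
    rem Read the storage ID that the main build script wrote to storage_id.txt.
    set /p STORAGE_ID=<storage_id.txt
    curl -s -X DELETE "http://%CMR_HOST%:%CMR_PORT%/rest/storage/%STORAGE_ID%"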

Visualizing the Results in Grafana

As outlined in the introduction, we use Grafana to visualize the load test results as graphs. We simply start the Grafana server and access the web interface on port 3000. To use our prepared dashboard, we first have to configure an InfluxDB data source: via Menu → Data Sources → Add data source we register our local InfluxDB instance, specify “jmeter” as the database, and select the access option “proxy” to prevent errors.
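
Alternatively, the data source can be created through Grafana’s HTTP API. A sketch using the default admin credentials and a local InfluxDB (the data source name is our own choice):

    rem Create the InfluxDB data source via the Grafana API (default admin:admin login).
    curl -s -u admin:admin -H "Content-Type: application/json" ^
      -X POST "http://localhost:3000/api/datasources" ^
      -d "{\"name\":\"jmeter\",\"type\":\"influxdb\",\"url\":\"http://localhost:8086\",\"access\":\"proxy\",\"database\":\"jmeter\"}"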

After setting up the data source, we simply import our prepared dashboard by clicking Menu → Dashboards → Import and entering the dashboard’s Grafana.net URL. A dialog asks for a “jmeter” data source, where we select the one we just created. If everything went fine, we should see an empty dashboard. The images below illustrate what the dashboard can look like once load test data is available.

Load overview in Grafana dashboard

Request details in Grafana dashboard
