28. April 2022

Let's Talk with our Business Processes

Do your processes have user tasks requiring physical activities? Imagine completing these tasks without accessing your task list application. We add Amazon Alexa as another interface to Camunda Platform 7 so that you can interact by voice!

Many processes contain physical activities and have therefore not yet been digitized. There are also environments in which employees do not have access to the “typical” task list UI. This raises the question of whether and how such processes can be digitized using a process engine.

In this blog post, we look at how processes can be controlled by voice using Amazon Alexa. For this, we could implement an Alexa Skill that communicates with the Camunda Platform 7 REST API. However, implementing such a skill by hand for every process means unnecessary repeated effort. We simplified this step by creating an Alexa Skill-Generator, so that we only need to upload our process model and generate the Alexa Skill. The process then talks to us about upcoming tasks, and we can claim and complete tasks by voice.
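
Under the hood, claiming and completing a task boils down to two calls against the Camunda Platform 7 REST API (POST /task/{id}/claim and POST /task/{id}/complete). Here is a minimal sketch of how a generated skill might build these requests; the function name, parameters, and URLs are illustrative, not the generator's actual code:

```javascript
// Sketch: build the Camunda Platform 7 REST requests a generated skill
// would send to claim or complete a user task. The endpoint paths follow
// the official Camunda 7 REST API; the helper name is ours.
function buildTaskRequest(baseUrl, taskId, action, payload = {}) {
  const allowed = ["claim", "complete"];
  if (!allowed.includes(action)) {
    throw new Error(`Unsupported task action: ${action}`);
  }
  return {
    url: `${baseUrl}/task/${taskId}/${action}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    },
  };
}

// Claim task 42 for the (hypothetical) user "martin":
const claim = buildTaskRequest(
  "https://example.ngrok.io/engine-rest", "42", "claim", { userId: "martin" }
);
// claim.url === "https://example.ngrok.io/engine-rest/task/42/claim"
```

The resulting object can then be passed to any HTTP client, e.g. fetch(claim.url, claim.options).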

But what is an Alexa Skill? Roughly speaking, it is a program for Alexa that contains the corresponding program logic and configuration. The exact functionality of a skill is explained in the following video: How Alexa Skills Work

The Skill-Generator

The Alexa Skill Generator

Our generator creates the necessary logic for an Alexa Skill based on a BPMN model. All the user needs is a BPMN 2.0 model (with user tasks), an invocation name, and a running Camunda Platform 7 instance with the REST API enabled.
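
To do this, the generator essentially has to read the user tasks out of the BPMN 2.0 XML. A simplified sketch of that step, assuming a regex-based approach (a real implementation would use a proper XML/BPMN parser; the function name and sample model are illustrative):

```javascript
// Sketch: pull the user tasks out of a BPMN 2.0 XML string.
// Regex-based for brevity; a production generator would use a real
// XML parser instead.
function extractUserTasks(bpmnXml) {
  const tasks = [];
  const pattern = /<(?:\w+:)?userTask\b[^>]*\bid="([^"]+)"[^>]*\bname="([^"]+)"/g;
  let match;
  while ((match = pattern.exec(bpmnXml)) !== null) {
    tasks.push({ id: match[1], name: match[2] });
  }
  return tasks;
}

const xml = `
  <bpmn:process id="pizza">
    <bpmn:userTask id="Task_Prepare" name="Prepare pizza" />
    <bpmn:userTask id="Task_Bake" name="Bake pizza" />
  </bpmn:process>`;
// extractUserTasks(xml)
// → [{ id: "Task_Prepare", name: "Prepare pizza" },
//    { id: "Task_Bake",    name: "Bake pizza" }]
```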

After entering all parameters, the generator creates the necessary files for an Alexa Skill, which then only need to be deployed in the Alexa Developer Console. An Amazon Developer account is required for this.

You can find the code for the generator on GitHub.

The Pizza-Service-Demo

As an example, we use the following pizza ordering process (based on Freund & Rücker (2019), Praxishandbuch BPMN 2.0, p. 98) and consider the supplier side. A pizza order is received. The pizza is then prepared, baked, and packaged. If the pizza is a bit “burned” and therefore “too dark”, we automatically apply a discount. Finally, the invoice is generated and the delivery boy delivers the pizza.

This is the pizza service process created with the Camunda Modeler

Using the Camunda Modeler, we now need to add task descriptions and logic for the “burned” functionality. We can store the description of a user task in the “Element Documentation” of the respective task and also reference process variables (in this case, “${order}” corresponds to the order, e.g. “Pizza Salami”). This keeps things simple, although we are aware that it is not semantically quite correct.
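
Before reading a task description aloud, the skill has to resolve such “${...}” placeholders against the process instance's variables. A minimal sketch of that substitution step (the function name is ours, not the generator's):

```javascript
// Sketch: resolve "${...}" placeholders in a task's Element Documentation
// against the process variables, as the skill would do before reading
// the description aloud. Unknown placeholders are left untouched.
function resolveDescription(template, variables) {
  return template.replace(/\$\{(\w+)\}/g, (whole, name) =>
    name in variables ? String(variables[name]) : whole
  );
}

// resolveDescription("Prepare the ${order}", { order: "Pizza Salami" })
// → "Prepare the Pizza Salami"
```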

Use the Camunda Modeler to provide instructions for a task within the Element Documentation

For the “burned” functionality we need to add a process variable. We can do this via the Forms tab: we create a Boolean variable called “burned” and add “Q1: Is the pizza burned?” in its properties. Alexa will later ask the user this question when they want to complete the task.
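
When the user answers, the skill has to translate the spoken yes/no into the Boolean variable format the Camunda 7 REST API expects in the complete call ({value, type} per variable). A sketch of that mapping; the helper name and the list of accepted answers are our assumptions:

```javascript
// Sketch: map a spoken yes/no answer to the variable payload the
// Camunda 7 REST API expects when completing a task
// (POST /task/{id}/complete). Helper name is illustrative.
function booleanAnswerToVariables(variableName, spokenAnswer) {
  const normalized = spokenAnswer.trim().toLowerCase();
  const value = ["yes", "yeah", "yep"].includes(normalized);
  return {
    variables: {
      [variableName]: { value, type: "Boolean" },
    },
  };
}

// booleanAnswerToVariables("burned", "yes")
// → { variables: { burned: { value: true, type: "Boolean" } } }
```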

Use the Camunda Modeler to set variables and corresponding questions for tasks

After our process model is ready, we can deploy it to Camunda Platform 7. I use Camunda Platform 7 on Micronaut for this purpose. Using Micronaut Launch, I have generated a new project; among the features, “camunda” has to be selected. In the background, the open-source integration micronaut-camunda-bpm is added to the dependencies. The deployment can then be done, e.g., directly from the Camunda Modeler (provided the REST API has been enabled beforehand).

Create a new Alexa Skill

We now use the Alexa Skill-Generator and upload our process model. As invocation name, I use “martins tasklist” (this is how I will address the skill later). In my example, I use ngrok to make my Micronaut-Camunda project reachable from the web, and I provide the corresponding link to the REST API.

We receive the files for the Alexa skill. All that is left to do is to create a new skill with these files.

First, we need the Alexa Skills Kit (ASK) Toolkit for VS Code. With it, we can create a new (Hello World) skill. In the configuration, we select English as the language, Alexa-hosted (Node.js), and Ireland as the hosting region.

Create a new Alexa Skill using Visual Studio Code

Then we copy the previously generated skill files into the “pizza-skill-demo” folder (and replace existing files).

Copy Alexa Skill Files to the target folder

Now open a terminal in the “pizza-skill/lambda” folder and install the necessary dependencies with npm install. After that, all changes have to be committed with Git.

Deploy the Alexa Skill using Visual Studio Code

Now the skill can be deployed via the plugin. Then we can switch to the Alexa Developer Console in the browser and test our skill (in the “Test” tab of the skill).

First, we start a pizza ordering process in the Tasklist application and order a pizza salami.

Use Amazon Alexa to interact with Camunda Platform 7

We change the perspective: I am now the pizza baker and have a task assigned to me.

Use Alexa to claim a task

I complete my task.

Use Alexa to complete a task

Next, I am assigned the task “Prepare Delivery”. When completing this task, I must specify whether the pizza is burned. Here, the property “Q1” from the user task comes into play.

Use Alexa to complete task with variables

If I forget what my current task is, I can easily ask for it:

Use Alexa to get task details

If I have been assigned several tasks at the same time, I can use a task number to choose between the options. Of course, I can also ask what task 88 was again.
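
Behind the scenes, the skill only has to map the spoken number back to one of the assigned tasks and read out its details. A sketch of that lookup; the numbering scheme, helper names, and sample tasks are illustrative, not necessarily what the generator produces:

```javascript
// Sketch: select one of several assigned tasks by the number the user
// spoke, and build the answer Alexa reads out. Names are illustrative.
function pickTaskByNumber(tasks, spokenNumber) {
  return tasks.find((task) => task.number === spokenNumber) || null;
}

function describeTask(task) {
  return task
    ? `Task ${task.number}: ${task.name}.`
    : "I could not find that task.";
}

const myTasks = [
  { number: 87, name: "Prepare pizza" },
  { number: 88, name: "Prepare Delivery" },
];
// describeTask(pickTaskByNumber(myTasks, 88))
// → "Task 88: Prepare Delivery."
```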

Use Alexa to complete a task

Use Cases when Integrating Camunda Platform 7 and Amazon Alexa

There are some exciting new use cases arising from the integration of Alexa. Processes with many physical tasks can benefit from this, since users can interact with them hands-free. It is also conceivable to combine the skill with smart glasses or a display (e.g. an Echo Show), so that information can be shown during the interaction.

For our pizza process, use cases such as features for premium customers arise. For example, orders from these customers could be prioritized over those from other customers and thus be processed earlier. In addition, automated notifications could be sent to customers via the process model (e.g., “Pizza is being prepared”, “Pizza is in the oven”, “Pizza is being delivered”).

Conclusion

Currently, the language model of the generated skill is still very limited, both in the interaction options and in the supported languages. Furthermore, only one process model per Alexa Skill is supported. In general, there is still a lot of potential here: for example, more languages, multiple BPMN models, and other environments, such as Google Dialogflow, could be supported.

Of course, an Alexa Skill is not suitable for every process. Problems also arise concerning data protection and privacy: not every employee wants his or her data to be sent to Amazon, or Alexa to be “listening” all the time. However, these problems could be addressed by using open-source voice assistants, for example.

With our Skill-Generator, we have shown how easily and automatically an Alexa Skill can be created for a process.

Try it out! We welcome your feedback and ideas!

And as always: We are happy to support you with our BPM Technology Consulting.
