NovaTec took part in this year’s W-JAX conference in Munich, and I was glad to follow some of the interesting talks about Apache Kafka. Judging by the evolution of the ecosystem and the active participation at the conference, Kafka has made its way to becoming the mainstream asynchronous messaging platform in many companies. While it can replace RabbitMQ or Apache ActiveMQ, it is far more than that.
Several talks emphasized the important role that Kafka Streams (and the new KSQL feature) can play in the microservice world for implementing event sourcing plus the CQRS pattern, and how it can serve in Big Data environments to get machine-learning models into production.
Camunda’s new microservice orchestration engine zeebe.io has a promising approach that allows BPM techniques to be used in high-throughput scenarios. Read on to see how it relates to the rest of this post. 😉
The presentation from Mike Wiesner (MHP) about data- and event-driven microservices showed what a typical event-driven microservice built on Kafka can look like. On the one hand, Kafka’s basic event log serves as the single source of truth in an event-driven architecture. On the other hand, you can build materialized views on top of this event stream, allowing each service to have its own view of the shared kernel (in Domain-Driven Design terms).
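As a rough sketch of that idea (plain Python, no Kafka API involved; the event shapes and names are invented for illustration), an append-only event log can act as the single source of truth while each service folds it into its own materialized view:

```python
# Minimal event-sourcing sketch: the event log is the single source of
# truth; each service derives its own materialized view from it.
# (Illustrative only -- event names and shapes are made up, not Kafka's API.)

event_log = []  # append-only log, a stand-in for a Kafka topic

def append(event):
    event_log.append(event)

# Write side: services only append facts; they never update state in place.
append({"type": "AccountCreated", "id": "a1", "owner": "alice"})
append({"type": "MoneyDeposited", "id": "a1", "amount": 100})
append({"type": "MoneyWithdrawn", "id": "a1", "amount": 30})

# Read side: e.g. a billing service builds its own view by replaying the log.
def balance_view(events):
    balances = {}
    for e in events:
        if e["type"] == "AccountCreated":
            balances[e["id"]] = 0
        elif e["type"] == "MoneyDeposited":
            balances[e["id"]] += e["amount"]
        elif e["type"] == "MoneyWithdrawn":
            balances[e["id"]] -= e["amount"]
    return balances

print(balance_view(event_log))  # {'a1': 70}
```

Because the view is derived, another service could replay the same log into a completely different structure (say, an audit trail) without coordinating with the billing service.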
The talk from Kai Wähner (Confluent), KSQL – An Open Source Streaming SQL Engine for Apache Kafka, gave an outlook on how these materialized views can be built more easily in the future with KSQL. It allows the first-class citizens of Kafka Streams – tables and streams – to be created in an SQL-like manner, which means that non-developers on a project can define them as well. Consider the example of detecting three illegal login attempts within a tumbling five-second time window.
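A query along those lines might look as follows (a hedged sketch in early-KSQL syntax; the stream name authorization_attempts comes from the talk, while the table name, the user_id column, and the assumption that the stream contains only failed attempts are mine):

```sql
-- Count login attempts per user in non-overlapping 5-second windows
-- and keep only users with 3 or more attempts in a window.
CREATE TABLE suspicious_logins AS
  SELECT user_id, COUNT(*) AS attempts
  FROM authorization_attempts
  WINDOW TUMBLING (SIZE 5 SECONDS)
  GROUP BY user_id
  HAVING COUNT(*) >= 3;
```

The result is not a one-off report but a table that KSQL keeps up to date as new events arrive on the stream.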
The resulting KSQL query operates on a tumbling time window over the authorization_attempts events and returns a continuous stream. With this “continuous query” concept, a whole range of different views on the event stream can be created. Having both writing and reading services built on top of Kafka and Kafka Streams eliminates the need for additional (RDBMS/NoSQL) storage: writes go with high performance into the distributed commit log, while reads are served from the “consumer-friendly” streams and tables defined with KSQL.
Last but not least, there was an interesting talk by Bernd Rücker (Camunda) about Workflow and State Machines at scale. It covered Camunda’s new microservice orchestration engine zeebe.io, which is also a bit “Kafkaesque”: to build the core of their high-throughput BPM engine, they didn’t use Kafka itself, but essentially adopted the same design approach as its distributed commit log.
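To make that “same design approach” concrete, the core idea of a commit log can be sketched in a few lines (plain Python, single node, no partitioning, replication, or persistence – purely illustrative, not Zeebe’s or Kafka’s actual code):

```python
# Minimal commit-log sketch: producers append records to an ordered,
# immutable log; consumers read sequentially and track their own offsets.
# (Single-node toy -- real logs like Kafka's or Zeebe's add partitioning,
# replication, and durable storage on top of this idea.)

class CommitLog:
    def __init__(self):
        self._records = []

    def append(self, record):
        """Append a record to the end of the log and return its offset."""
        self._records.append(record)
        return len(self._records) - 1

    def read(self, offset, max_records=10):
        """Read up to max_records starting at the given offset."""
        return self._records[offset:offset + max_records]

class Consumer:
    """Each consumer owns its offset, so many can read independently."""
    def __init__(self, log):
        self.log = log
        self.offset = 0

    def poll(self):
        batch = self.log.read(self.offset)
        self.offset += len(batch)
        return batch

log = CommitLog()
for command in ["start-workflow", "complete-task", "end-workflow"]:
    log.append(command)

c1, c2 = Consumer(log), Consumer(log)
print(c1.poll())  # ['start-workflow', 'complete-task', 'end-workflow']
print(c2.poll())  # the same records -- reading does not consume them
```

The key property is that reading never mutates the log: any number of consumers (or a restarted engine replaying its state) can process the same ordered record sequence at their own pace, which is exactly what makes this design attractive for a high-throughput workflow engine.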