Kafka Security – Behind the Scenes

First of all, if you do not know Apache Kafka yet, you are welcome to read our other blog post as an introduction.
In this blog post we will talk about:
- What is OAuth 2.0 & OpenID Connect
- How to enable security in Kafka
- Understanding the security protocol flow in the background
OAuth 2.0 & OpenID Connect
OAuth 2.0 (RFC 6749)
OAuth 2.0 is an authorization framework that has established itself as an industry standard. It allows us (the resource owner) to give an application (the client) limited access to resources provided by a third party (the resource server). We must first authenticate ourselves to an authorization server. The abstract protocol flow is structured as follows: we directly or indirectly authorize the client (Authorization Grant); with this permission the client can obtain an access token from the authorization server; using this token, the client gets access to the resource on the resource server. There are four different variants of the OAuth 2.0 protocol, depending on which method is used for the Authorization Grant:
- Authorization Code: the typical use case is a web application. The web client redirects the resource owner via the browser (user agent) to an authorization server. There the resource owner authenticates himself, and an authorization code is handed back through the redirect. With this authorization code the client can then obtain the access token (a small sketch of this exchange follows this list).
- Implicit: the typical use case is a single-page application, where the user agent is identical to the client; the client receives the access token directly after authentication.
- Client Credentials: machine-to-machine communication within a protected space; the resource owner is not involved.
- Resource Owner Password Credentials: the client is trusted and holds the resource owner's username and password. Using these credentials, it can directly obtain an access token.
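To make the Authorization Code exchange more tangible, here is a minimal sketch of the two standardized requests (RFC 6749, sections 4.1.1 and 4.1.3). The host, client and redirect URI are placeholder values, not a real server:

    public class AuthorizationCodeExample {

        public static void main(String[] args) {
            // 1. The user agent is redirected to the authorization endpoint,
            //    where the resource owner logs in.
            String authorizationRequest = "https://auth.example.com/authorize"
                    + "?response_type=code"
                    + "&client_id=my-web-app"
                    + "&redirect_uri=https://app.example.com/callback"
                    + "&state=xyz";

            // 2. The authorization server redirects back with ?code=...; the
            //    client exchanges this code for an access token by POSTing
            //    the following form body to the token endpoint.
            String tokenRequestBody = "grant_type=authorization_code"
                    + "&code=CODE_FROM_REDIRECT"
                    + "&redirect_uri=https://app.example.com/callback"
                    + "&client_id=my-web-app"
                    + "&client_secret=my-secret";

            System.out.println(authorizationRequest);
            System.out.println(tokenRequestBody);
        }
    }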
OAuth 2.0 is a framework and intentionally leaves many questions open. For example, it does not specify what properties and format the tokens have, what methods are used to access a protected resource, or whether the resource server and the authorization server communicate with each other. The following graphic shows the general protocol flow.
OpenID Connect (OIDC)
OpenID Connect adds an authentication layer on top of OAuth 2.0 and answers these open questions. The authentication layer allows users to be identified. For this purpose, so-called ID tokens are introduced. For ID tokens, it uses JSON Web Tokens (JWT), a recognized and widely used token format. JWTs are simple, portable and support many cryptographic signature and encryption algorithms. The structure of the access tokens remains unspecified.
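Because a JWT is just three Base64URL-encoded parts separated by dots (header, payload, signature), its content can be inspected without any special library. A small sketch with a shortened example token; note that this only decodes the token, it does not verify the signature:

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class JwtPeek {

        public static void main(String[] args) {
            // A JWT has the form header.payload.signature.
            String jwt = "eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJrYWZrYS1jbGllbnQifQ.sig";
            String[] parts = jwt.split("\\.");

            Base64.Decoder decoder = Base64.getUrlDecoder();
            String header = new String(decoder.decode(parts[0]), StandardCharsets.UTF_8);
            String payload = new String(decoder.decode(parts[1]), StandardCharsets.UTF_8);

            System.out.println("Header:  " + header);   // {"alg":"RS256"}
            System.out.println("Payload: " + payload);  // {"sub":"kafka-client"}
        }
    }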
Difference OIDC & OAuth 2.0
If you compare OIDC and OAuth 2.0, you could say:
- OAuth is like lending your car key to a friend so they can drive. The car doesn't care who drives; it only cares that the right key fits.
- OIDC is like checking into a hotel with your identity card. The hotelier is only interested in who you are and that you can prove it credibly.
If you want to know more about this topic, you are welcome to join one of our free security webinars.
In the following examination of Kafka's SASL/OAuthBearer mechanism, we will focus on the authentication flow of Kafka and use OAuth 2.0 as the supporting framework for this authentication mechanism.
How to enable Security in Kafka
In this blog post we want to understand what is happening behind the scenes of Kafka Security, which is why we'll only briefly mention how we activated it. Moreover, there are already various articles on the Internet that answer this question; we'll list them at the end of this section.
To support Kafka Security and the OAuth mechanism you have to (a configuration sketch follows this list):
- Implement the Kafka interface org.apache.kafka.common.security.auth.AuthenticateCallbackHandler twice: once for generating tokens (LoginCallbackHandler) and once for validating them (ServerCallbackHandler).
- Create a JAAS configuration file and pass it to the JVM.
- Configure SASL ports and mechanism in the properties.
- Run an OAuth server for authentication and authorization (e.g. Keycloak) and configure its endpoints in your application.
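As a rough sketch, the JAAS file, passed to the JVM via -Djava.security.auth.login.config=..., could look like this:

    KafkaServer {
        org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
    };

and the corresponding broker properties like this; the handler class names under com.example are placeholders for your own implementations, and the listener and port setup may differ in your environment:

    # server.properties (excerpt)
    listeners=SASL_SSL://:9093
    security.inter.broker.protocol=SASL_SSL
    sasl.mechanism.inter.broker.protocol=OAUTHBEARER
    sasl.enabled.mechanisms=OAUTHBEARER
    listener.name.sasl_ssl.oauthbearer.sasl.login.callback.handler.class=com.example.LoginCallbackHandler
    listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class=com.example.ServerCallbackHandler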
If no implementations of LoginCallbackHandler and ServerCallbackHandler are provided, Kafka will create and validate unsecured tokens by default. In that case, the claims of the auto-generated token can be specified directly in the JAAS configuration file. However, this is intended for non-production use only.
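For such a non-production setup, the claims of the unsecured token are set as options of the login module, for example a fixed sub claim as the principal name:

    KafkaServer {
        org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
        unsecuredLoginStringClaim_sub="admin";
    };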
If you want to know how to do this in detail, you can read the following documentation and blog posts.
- Apache Kafka Doc: Security
- Confluent Doc: Security
- KIP-255: OAuth Authentication via SASL/OAUTHBEARER (starting from Kafka 2.0.0)
- Introduction to Apache Kafka Security, Stéphane Maarek
- How to setup OAuth2 mechanism to a Kafka Broker, Jair de Souza Junior
When you implement the interface, consider using a proven framework like Nimbus; that way you reduce the risk of introducing security issues. You can also use a public-key procedure, which means validation does not always have to communicate with the authorization server.
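A minimal sketch of such a check with the Nimbus JOSE+JWT library, assuming the authorization server signs its tokens with RSA and its public key has already been fetched (e.g. from the server's JWKS endpoint):

    import java.security.interfaces.RSAPublicKey;
    import java.util.Date;

    import com.nimbusds.jose.JWSVerifier;
    import com.nimbusds.jose.crypto.RSASSAVerifier;
    import com.nimbusds.jwt.SignedJWT;

    public class TokenVerifier {

        private final RSAPublicKey publicKey; // public key of the authorization server

        public TokenVerifier(RSAPublicKey publicKey) {
            this.publicKey = publicKey;
        }

        // Returns true if the signature is valid and the token has not expired.
        public boolean isValid(String accessToken) throws Exception {
            SignedJWT jwt = SignedJWT.parse(accessToken);
            JWSVerifier verifier = new RSASSAVerifier(publicKey);
            Date expiration = jwt.getJWTClaimsSet().getExpirationTime();
            return jwt.verify(verifier)
                    && expiration != null
                    && expiration.after(new Date());
        }
    }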
Understanding the Authentication Flow in Kafka
We use the SASL/OAuthbearer mechanism of Kafka in combination with a Keycloak server to authenticate our clients and brokers. In a context with no user involved, where all back-end services authenticate and access resources on their own behalf, the Client Credentials flow of OAuth 2.0 is a suitable choice. Please note that our Keycloak uses OpenID Connect as its protocol. In this case the client authenticates itself to Keycloak with a client ID and secret. The authorization server checks the credentials and returns an access token directly to the client. The client can then use this token to access protected resources on the resource server.
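Translated into code, the token request is a single HTTP POST to Keycloak's token endpoint. A minimal sketch using Java's built-in HttpClient; host, realm and credentials are placeholders, and the endpoint path can differ between Keycloak versions:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ClientCredentialsFlow {

        public static void main(String[] args) throws Exception {
            // Keycloak token endpoint of the realm our Kafka clients live in.
            String tokenEndpoint =
                    "https://keycloak.example.com/realms/kafka/protocol/openid-connect/token";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(tokenEndpoint))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "grant_type=client_credentials"
                            + "&client_id=kafka-client"
                            + "&client_secret=CHANGE_ME"))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // On success the JSON body contains access_token, token_type
            // and expires_in.
            System.out.println(response.body());
        }
    }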

Client Credentials Flow exemplified by Kafka Clients
When we map this flow to the Kafka context, the Kafka broker is the protected resource. In order to establish a connection to a broker, Kafka clients and other brokers (in the case of inter-broker connections) must first get an access token from an authorization server and present it to the targeted broker when initiating the connection. If the validation of the token is successful, the connection request is accepted by the broker. You can view the detailed flow of Kafka OAuthbearer in the following graphic. In this example, KafkaBroker1 and KafkaBroker2 are two independent examples of how to establish a connection. If a client wants to read from or write to a partition for which broker 1 is the leader, it only needs to contact this broker and only authenticate there.

Sequence diagram Kafka OAuthbearer
When Kafka broker and client are configured to authenticate using SASL and the OAuthBearerLoginModule, the configured LoginCallbackHandler is invoked before a connection request is sent to a Kafka broker; it handles an instance of OAuthBearerTokenCallback and retrieves an access token from the authorization server.
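A skeleton of such a handler; requestTokenFromKeycloak stands for your own token retrieval logic and is not implemented in this sketch:

    import java.util.List;
    import java.util.Map;

    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.UnsupportedCallbackException;
    import javax.security.auth.login.AppConfigurationEntry;

    import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
    import org.apache.kafka.common.security.oauthbearer.OAuthBearerToken;
    import org.apache.kafka.common.security.oauthbearer.OAuthBearerTokenCallback;

    public class LoginCallbackHandler implements AuthenticateCallbackHandler {

        @Override
        public void configure(Map<String, ?> configs, String saslMechanism,
                              List<AppConfigurationEntry> jaasConfigEntries) {
            // Read client id, secret and token endpoint from the configuration.
        }

        @Override
        public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
            for (Callback callback : callbacks) {
                if (callback instanceof OAuthBearerTokenCallback) {
                    // Fetch a fresh access token from the authorization server
                    // and hand it over to Kafka.
                    OAuthBearerToken token = requestTokenFromKeycloak();
                    ((OAuthBearerTokenCallback) callback).token(token);
                } else {
                    throw new UnsupportedCallbackException(callback);
                }
            }
        }

        private OAuthBearerToken requestTokenFromKeycloak() {
            // Placeholder: call the token endpoint (e.g. with the Client
            // Credentials request shown above) and wrap the response into
            // an OAuthBearerToken.
            throw new UnsupportedOperationException("not implemented in this sketch");
        }

        @Override
        public void close() {
        }
    }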

Log: LoginCallbackHandler
The returned access token should then be wrapped into the OAuthBearerToken interface provided by Kafka so that it can be transparently sent to the Kafka broker for authentication.
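A minimal wrapper could look like this, with the principal name and lifetime taken from the token response:

    import java.util.Collections;
    import java.util.Set;

    import org.apache.kafka.common.security.oauthbearer.OAuthBearerToken;

    // Wraps a token string returned by the authorization server so that
    // Kafka can transport it during the SASL handshake.
    public class SimpleOAuthBearerToken implements OAuthBearerToken {

        private final String value;
        private final String principalName;
        private final long lifetimeMs;

        public SimpleOAuthBearerToken(String value, String principalName, long lifetimeMs) {
            this.value = value;
            this.principalName = principalName;
            this.lifetimeMs = lifetimeMs;
        }

        @Override
        public String value() {
            return value; // the raw access token, e.g. a serialized JWT
        }

        @Override
        public Set<String> scope() {
            return Collections.emptySet();
        }

        @Override
        public long lifetimeMs() {
            return lifetimeMs; // expiration time in ms since the epoch
        }

        @Override
        public String principalName() {
            return principalName; // e.g. the sub claim of the token
        }

        @Override
        public Long startTimeMs() {
            return null; // unknown in this sketch
        }
    }

The client then sends a connection request to the SASL listener of the broker, following the Kafka SASL authentication sequence: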

Wireshark log: Kafka SASL flow
- The Kafka client sends a SaslHandshake request to the Kafka broker.
- The Kafka broker receives the request and checks whether it accepts the requested SASL authentication mechanism, which is OAuthbearer in this case. If the mechanism is enabled on the broker, a SaslHandshake response is sent back to the client.
- The client sends a SaslAuthenticate request containing the previously acquired access token to the broker.
- The broker extracts the access token from the request and validates it by invoking the configured ServerCallbackHandler (a skeleton of such a handler follows this list). Note that the implementation of this handler is completely flexible. In the sequence diagram above, the broker makes an HTTP call to Keycloak to validate the token. However, validation can also be done entirely on the broker side if a certificate of the authorization server is already available for verifying the digital signature of the token.
- If the token is validated successfully, the connection is accepted and the identity of the communication initiator is extracted from the access token. Subsequent packets from the client are handled as normal Kafka API requests. Otherwise, the connection is terminated.
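A skeleton of such a ServerCallbackHandler; isValid and parse are placeholders for your own validation logic (e.g. the Nimbus check sketched earlier, or a token introspection call to Keycloak):

    import java.util.List;
    import java.util.Map;

    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.UnsupportedCallbackException;
    import javax.security.auth.login.AppConfigurationEntry;

    import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
    import org.apache.kafka.common.security.oauthbearer.OAuthBearerToken;
    import org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallback;

    public class ServerCallbackHandler implements AuthenticateCallbackHandler {

        @Override
        public void configure(Map<String, ?> configs, String saslMechanism,
                              List<AppConfigurationEntry> jaasConfigEntries) {
        }

        @Override
        public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
            for (Callback callback : callbacks) {
                if (callback instanceof OAuthBearerValidatorCallback) {
                    OAuthBearerValidatorCallback validation =
                            (OAuthBearerValidatorCallback) callback;
                    String accessToken = validation.tokenValue();
                    if (isValid(accessToken)) {
                        // Accept the token: hand a parsed representation back to Kafka.
                        validation.token(parse(accessToken));
                    } else {
                        // Reject the token: the connection will be terminated.
                        validation.error("invalid_token", null, null);
                    }
                } else {
                    throw new UnsupportedCallbackException(callback);
                }
            }
        }

        private boolean isValid(String accessToken) {
            // Placeholder: verify signature and expiration locally (e.g. with
            // the Nimbus check sketched earlier) or introspect it at Keycloak.
            return false;
        }

        private OAuthBearerToken parse(String accessToken) {
            // Placeholder: extract principal name and lifetime from the token,
            // e.g. into the SimpleOAuthBearerToken wrapper shown earlier.
            throw new UnsupportedOperationException("not implemented in this sketch");
        }

        @Override
        public void close() {
        }
    }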
By default, authentication is done only once, when the connection is established. The connection session then continues until it is terminated; no further re-handshake or re-authentication is required. The token used for authentication is not replaced throughout the entire connection session. If this token expires while the connection is still active, the connection is not affected. If the information in the token is used for authorization, it is therefore possible that authorization decisions are made based on an expired token. This default behavior is documented in Kafka KIP-255. Please note that this type of authentication is only available from Kafka version 2.0.0.
On the client side, the Kafka client also keeps track of the expiration of the current access token in a background thread and periodically renews the token before it expires by invoking its LoginCallbackHandler again.

Log: Expiration of access token
This ensures that the client always has a valid token ready when it wants to open a new connection to another broker. This token renewal does not have any impact on other ongoing connection sessions. As can be seen from the above sequence diagram, after getting a new access token, the client can continue to send Kafka requests to broker 1 normally. In case the client wants to access another partition located on broker 2, it will open a new connection to this broker using the new token for authentication.
Conclusion
As you can see, by implementing the provided interfaces, Kafka makes it relatively easy to implement authentication using OAuth 2.0 and OpenID Connect. This blog post makes the sequence flow and the mechanisms of authentication transparent and thus prepares a basis for your own implementation of authentication with Kafka. It should be noted that it is usually an advantage to use proven or certified frameworks to minimize security risks, and that there are different methods to verify a token.
It is possible to implement authorization in Kafka via ACLs (access control lists). These are managed explicitly with Kafka's built-in command-line tools. Here it is possible to use a claim of the token as the principal name of the client and to grant rights to it. To reuse an existing RBAC system for authorization, you must implement a mapping or service yourself. Note that Confluent already provides such an RBAC authorization.
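For example, read access for a client principal extracted from the token could be granted with the ACL tool shipped with Kafka (topic name and principal are examples; in Kafka versions before 2.1, ACLs are managed via ZooKeeper instead of the --bootstrap-server option):

    bin/kafka-acls.sh --bootstrap-server localhost:9093 \
      --command-config admin.properties \
      --add --allow-principal User:kafka-client \
      --operation Read --topic test-topic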
PS:
What are your ideas or experiences with this topic? Just leave a comment or write us an email – we are looking forward to your input.