Architecture overview

The Kaa platform architecture overview

Microservice abstraction

The architecture of the Kaa platform rests upon the microservice approach and uses it to the fullest. Each Kaa microservice is an independent building block. You can mix and match these blocks to create coherent solutions.

On the scale of the whole platform, Kaa microservices are just a bunch of black boxes doing their job. This means that the internal architecture of any individual microservice is not significant to the architecture of the whole Kaa platform.

To achieve this kind of microservice abstraction, Kaa engineers use a number of techniques.

First, all inter-service communication protocols use HTTP and NATS to transport messages, and JSON and Avro to encode them. All these technologies are well-defined and have multiple implementations for all mainstream programming languages, so we’re not tied to any implementation language. At present, most Kaa microservices are written in Java, but we have a couple of microservices written in JavaScript (NodeJS) and Go.

Second, all Kaa microservices are distributed as Docker images. Docker effectively abstracts away all the microservice setup and runtime dependencies—running a Docker container with Java is no different from running a Go-powered Docker container. This helps operations teams deploy Kaa solutions without having to set up the dependencies.

Below is a diagram of how Kaa components are typically composed.

Service composition

Combined with well-defined and documented interfaces, these techniques allow us to swap the whole microservice implementation without anyone ever noticing.

Service composition and inter-service communication

To make microservices composable, the Kaa platform uses well-defined NATS-based protocols. We use a lightweight change management procedure to develop these protocols, and track them separately from the microservice implementations. This allows multiple implementations of a single protocol to co-exist and cooperate within a single solution.

The main inter-service communication guidelines are defined in the 3/ISM (Inter-Service Messaging) protocol. All other inter-service protocols build on 3/ISM and usually define one or two roles. For example, 4/ESP (Extension Service Protocol) defines “communication service” and “extension service” roles, and 6/CDTP (Configuration Data Transport Protocol) defines “configuration data provider” and “configuration data consumer” roles.

That’s extremely useful as it allows each role to have multiple diverse implementations.
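As a sketch of how role separation decouples implementations, consider the pair of interfaces below. The class and method names are invented for this example (they loosely mirror the 6/CDTP roles) and are not part of any Kaa specification:

```python
from abc import ABC, abstractmethod

# Illustrative only: the role names mirror 6/CDTP, but these interfaces
# are not part of any Kaa protocol specification.
class ConfigurationDataProvider(ABC):
    @abstractmethod
    def get_configuration(self, endpoint_id: str) -> dict:
        """Return the current configuration for an endpoint."""

class ConfigurationDataConsumer(ABC):
    @abstractmethod
    def apply_configuration(self, endpoint_id: str, config: dict) -> None:
        """Deliver configuration data toward an endpoint."""

# Any provider implementation can serve any consumer implementation,
# because both sides only depend on the role, not on each other.
class InMemoryProvider(ConfigurationDataProvider):
    def __init__(self, store: dict):
        self.store = store

    def get_configuration(self, endpoint_id: str) -> dict:
        return self.store.get(endpoint_id, {})

class RecordingConsumer(ConfigurationDataConsumer):
    def __init__(self):
        self.delivered = []

    def apply_configuration(self, endpoint_id: str, config: dict) -> None:
        self.delivered.append((endpoint_id, config))

provider = InMemoryProvider({"ep-1": {"interval": 30}})
consumer = RecordingConsumer()
consumer.apply_configuration("ep-1", provider.get_configuration("ep-1"))
```

Swapping `InMemoryProvider` for a different provider implementation requires no change on the consumer side, which is the point of defining roles in the protocol rather than in the services.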

Service communication

For example, we can have multiple “communication service” implementations, each implementing a different client-facing protocol: MQTT, CoAP, HTTP, or a proprietary UDP-based protocol—the only requirement is that the service implements the “communication service” side of 4/ESP. This allows swapping the client communication layer easily without affecting any other service—it’s completely transparent. Furthermore, you can deploy multiple communication service implementations within a single solution to handle clients that communicate over different protocols.

Another example is the ECR (Endpoint Configuration Repository) and OTAO (Over-the-Air Orchestration) services, both of which implement the “configuration data provider” side of 6/CDTP. Thus, all the microservices down the line—CMX, KPC—work with either implementation.


Before describing the scalability features of the Kaa platform, let’s define some terms.

Kaa service is a microservice packaged in a Docker image.

Service instance is a Kaa service plus its configuration.

To make a service do something useful, you need to deploy at least one service instance replica (or simply replica)—a running Docker container.

Service
  • Source code / assembly
  • Packaged as a container
  • Configurable
  • Generic, versatile, reusable

Service instance
  • Service + configuration
  • Defined specific behavior
  • Zero or more instances of the same service per solution cluster

Service instance replica
  • Running service instance process
  • Unit of scaling
  • One or more replicas per service instance


Each service instance may have as many replicas as needed to handle the load. Most service replicas are independent and do not communicate with each other; there is neither a master-slave nor a master-master relationship between them.

Instance replicas leverage NATS queue groups, so any request directed to a service instance can be handled by any replica. There is no single point of failure.
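A toy model of the queue-group behavior, in plain Python with no NATS client: subscribers that share a queue group split the message stream, so each request is handled by exactly one replica. The real NATS server decides which subscriber receives each message; round-robin stands in for that choice here:

```python
import itertools

class QueueGroup:
    """Toy model of a NATS queue group: each published message is
    delivered to exactly one subscriber in the group."""

    def __init__(self, replicas):
        self.replicas = replicas
        self._next = itertools.cycle(range(len(replicas)))

    def publish(self, msg):
        # NATS picks one group member per message; we simulate that
        # choice with simple round-robin.
        self.replicas[next(self._next)].append(msg)

replica_a, replica_b = [], []
group = QueueGroup([replica_a, replica_b])
for i in range(4):
    group.publish(f"request-{i}")

# Every request lands on exactly one replica, and the load is spread
# across the group—losing one replica loses no messages going forward.
```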

Service replicas

All Kaa services can be scaled horizontally. Many services do not share any data between replicas at all; others share state using Redis or other data storage. In all cases, handling horizontal scalability is internal to each service, so check the service-specific documentation for scaling details. In most cases, it boils down to scaling the data stores, and we have taken care to select data stores that scale well.


The Kaa platform leverages Kubernetes as an enterprise-grade orchestration platform for all solutions. It lets you abstract away container lifecycle management, node failure mitigation, networking, and much more, keeping the focus on the business domain.

Kubernetes is built around declarative descriptors: you define what you need, and Kubernetes figures out how to get it on its own. The declarative approach gives you flexibility in where you run the cluster without changing a single line of code in your application.

This allows you to run Kaa almost anywhere: on a private bare-metal cluster, in a public cloud like Amazon AWS or Google’s GCP, or even on your laptop. You only need a Kubernetes cluster and a Kaa cluster blueprint.

A cluster blueprint, in Kaa terms, is a collection of Kubernetes resource definitions and Kaa microservice configs. Both are text-based and are expected to be versioned with a VCS (e.g., Git).

A blueprint fully defines the cluster state, except for the stored data. In other words, you can restore a cluster or duplicate it (e.g., for testing or development purposes) with minimal effort. Furthermore, service configs can be served directly from the VCS, letting you change the behavior of a running cluster by merely pushing a commit!


All components of the Kaa platform are able to read their configuration from the filesystem. The default path to the configuration file is /srv/<service-name>/service-config.yml (e.g., /srv/kpc/service-config.yml).

Reading configuration files from the filesystem makes configuration easy to manage when running on a Kubernetes cluster. We recommend storing the configuration of each service in a separate Kubernetes ConfigMap object and mounting it into the container’s filesystem. A sample definition of a KPC Pod and its ConfigMap can be found below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kpc-config
data:
  service-config.yml: |
    # Abridged KPC configuration. The key names below are illustrative;
    # see the KPC Configuration documentation for the full schema.
    extension-instances:
      - dcx
      - tsx
    endpoint-aware: true
---
apiVersion: v1
kind: Pod
metadata:
  name: kpc-example-pod
spec:
  containers:
    - name: kpc
      image: kpc-image # placeholder; substitute the actual KPC image reference
      volumeMounts:
        - name: config-volume
          mountPath: /srv/kpc
  volumes:
    - name: config-volume
      configMap:
        name: kpc-config

List of configurable properties and examples of Kubernetes definitions can be found in the documentation for every Kaa microservice in the Configuration and Deployment sections respectively.

Automatic configuration rollout

In order to automatically apply configuration updates, we suggest using a third-party tool called Reloader. When it detects an update of a ConfigMap or a Secret, Reloader uses native Kubernetes functionality to perform a rolling update of the affected services. That gives you the ability to safely roll out configuration changes without worrying about breaking your cluster.

Core concepts

Now that we’ve covered the main non-functional aspects, let’s get down to the subject matter.

While Kaa microservices are diverse and perform a variety of jobs, there are some cross-cutting concepts that all services respect and work with.

Kaa applications and application versions

The Kaa platform is designed to handle different types of devices simultaneously and allow them to co-exist in the scope of a single solution. To do that, we use the concept of Kaa application.

Think of a Kaa application as a container where you put the system configuration that depends on the device type.

Applications diagram showing different types of devices

Each Kaa application is independent and contains one or more application versions. Application versions allow you to evolve your application-specific configuration and deploy new features to the field while keeping the old versions up and running.
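As a mental model (not the platform’s actual data model), applications and versions nest like this, with each version carrying its own configuration:

```python
from dataclasses import dataclass, field

# Illustrative data model only; names and fields are invented for this
# sketch and do not reflect the Kaa platform's internal representation.
@dataclass
class ApplicationVersion:
    name: str
    config: dict = field(default_factory=dict)

@dataclass
class Application:
    name: str
    versions: dict = field(default_factory=dict)

    def add_version(self, version: ApplicationVersion) -> None:
        self.versions[version.name] = version

# Two versions of one application co-exist: v2 rolls out a faster
# reporting interval while v1 devices keep working unchanged.
app = Application("smart-meter")
app.add_version(ApplicationVersion("smart-meter-v1", {"report-interval": 60}))
app.add_version(ApplicationVersion("smart-meter-v2", {"report-interval": 15}))
```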

Client, endpoint

In pre-1.0 versions of Kaa, each connection to the server represented a single device. That allowed associating the device identity and other related data with an active session, saving a couple of bytes in every message.

While implementing dozens of projects, we have found that to be more restricting than helpful. There are many real-life cases when we want to share a single connection among multiple devices.

In Kaa 1.0, we have solved that issue by separating the concerns.

Endpoint (often abbreviated as “EP”) is the primary entity the platform operates with. It’s a thing. All data coming to the platform is associated with endpoints.

“Endpoint” is a cross-cutting concept—all Kaa services from the communication layer to representation one are aware of endpoints.

Client is an entity that is responsible for managing the communication—opening a connection, keeping it intact, sending and receiving messages. Each client operates on behalf of one or multiple endpoints.

While the endpoint is a cross-cutting concept, clients are not. Clients are terminated at the communication layer, and the rest of the platform is entirely unaware of their existence.

Diagram illustrating client-endpoint relations

Note that we do not use terms like “device” or “thing.” This is because an individual physical device can be represented as multiple independent endpoints; conversely, a single device can open multiple connections, representing multiple clients.
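A minimal sketch of the client/endpoint separation (all names invented for illustration): one client connection, such as a gateway device, demultiplexes incoming messages to several endpoints by their tokens:

```python
class Endpoint:
    """A 'thing' the platform tracks; identified by its token."""
    def __init__(self, token):
        self.token = token
        self.inbox = []

class Client:
    """One connection serving many endpoints (e.g., a gateway)."""
    def __init__(self):
        self.endpoints = {}

    def register(self, endpoint):
        self.endpoints[endpoint.token] = endpoint

    def receive(self, token, message):
        # The communication layer routes by endpoint token; the rest of
        # the platform only ever sees endpoints, never the client.
        self.endpoints[token].inbox.append(message)

gateway = Client()
sensor = Endpoint("token-sensor")
valve = Endpoint("token-valve")
gateway.register(sensor)
gateway.register(valve)
gateway.receive("token-sensor", {"temp": 21.5})
```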

Communication protocols

1/KP (Kaa Protocol)

The main communication protocol of Kaa is 1/KP, which is based on MQTT. The protocol is very general and does not impose any additional format constraints on the clients.

It is designed to allow multiple endpoints to communicate via a single connection and traverse through MQTT gateways and brokers.

It is also designed to enable a future CoAP binding implementation.

1/KP does not define all server features that are available to clients. Instead, it defines the MQTT topic format and general guidelines, and relies on protocol extensions to handle the rest.

1/KP extensions include 2/DCP, 7/CMP, and 10/EPMP. Extensions define specific payload formats and how a server should process messages.
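To make the topic-format idea concrete, the sketch below composes and parses MQTT topics following the commonly documented kp1/&lt;appversion name&gt;/&lt;extension instance&gt;/&lt;resource path&gt; layout. The authoritative grammar lives in the 1/KP specification; the segment names and example values here are assumptions for illustration:

```python
def build_topic(appversion, extension, resource_path):
    """Compose an MQTT topic in the 1/KP style (illustrative layout)."""
    return "/".join(["kp1", appversion, extension] + list(resource_path))

def parse_topic(topic):
    """Split a 1/KP-style topic back into its segments."""
    prefix, appversion, extension, *resource = topic.split("/")
    if prefix != "kp1":
        raise ValueError(f"not a 1/KP topic: {topic!r}")
    return appversion, extension, resource

# A hypothetical data-collection publish topic for one endpoint token:
topic = build_topic("smart-meter-v1", "dcx", ["ep-token", "json"])
# topic == "kp1/smart-meter-v1/dcx/ep-token/json"
```

Because the appversion and extension-instance names are carried in the topic itself, a server can route each message without ever inspecting the extension-specific payload.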

4/ESP (Extension Service Protocol)

While 1/KP with its extensions may seem all-encompassing, since it defines all communication with clients, in reality its effect on the platform architecture is very limited. There is only one microservice that knows or cares about 1/KP—KPC (Kaa Protocol Communication service).

This microservice handles client connections and translates messages to and from 4/ESP (Extension Service Protocol)—the protocol the rest of the platform uses to speak to endpoints.

This approach allows isolating all device communications into a separate layer that can easily be extended or replaced.

Main component groups

Identity management

Responsibilities
  • Manage devices and their credentials.
  • Keep record of digital twins.
  • Manage device logical grouping.

Identity management services

EPR: Endpoint Register

Endpoint Register service (EPR) is a component of the Kaa platform that keeps a record of all endpoint registrations within a solution, as well as their associated key/value attributes (metadata). The service provides REST API interfaces to manage endpoints and endpoint metadata. EPR broadcasts endpoint lifecycle events, such as the registration or deletion of an endpoint, as well as any endpoint metadata update.

EPR uses the following protocols:

  • Endpoint Lifecycle and Connectivity Events (9/ELCE)
  • Endpoint Metadata Events (15/EME)
  • Endpoint Filter Events (18/EFE)

EPMX: Endpoint Metadata Extension

Endpoint Metadata Extension service (EPMX) extends the communication capability of Kaa Protocol (1/KP). It implements the 10/EPMP extension protocol to allow endpoints to retrieve and manage their metadata. In addition to implementing 10/EPMP, EPMX supports metadata whitelisting. This feature allows specifying a list of metadata fields that are accessible to endpoints. It also allows forbidding updates to specific fields, rendering them read-only.

EPMX itself does not persist metadata and integrates with Endpoint Register service (EPR) (or other compatible implementation) for that purpose.
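The whitelisting idea can be sketched as a simple filter. The function and parameter names below are invented for illustration and are not the EPMX configuration schema:

```python
def filter_metadata_update(update, whitelist, read_only):
    """Toy EPMX-style whitelisting: keep only fields the endpoint is
    allowed to change, rejecting unknown and read-only fields."""
    allowed = {}
    rejected = []
    for key, value in update.items():
        if key not in whitelist or key in read_only:
            rejected.append(key)
        else:
            allowed[key] = value
    return allowed, rejected

# An endpoint tries to update three fields; only "location" is both
# whitelisted and writable, so the other two are rejected.
allowed, rejected = filter_metadata_update(
    {"location": "plant-2", "serial": "A-42", "secret": "x"},
    whitelist={"location", "serial"},
    read_only={"serial"},
)
```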

CM: Credential Management

Credential Management service (CM) authenticates connecting clients and endpoints, and manages credential states.

CM supports the following protocols:

  • Endpoint Lifecycle and Connectivity Events (9/ELCE)
  • Endpoint and Client Authentication Protocol (16/ECAP)

CM provides a REST-based interface to manage endpoint and client credentials:

  • Provision new credentials.
  • Transition credential states.
  • Delete credentials.


Communication

Responsibilities
  • Handle communications between devices and the Kaa platform over standard protocols, via both secure (encrypted & tamper-proof) and insecure channels.
  • Handle device states.

Communication services

KPC: Kaa Protocol Communication

Kaa Protocol Communication service (KPC) implements 1/KP-based communication with clients and endpoints the client represents. KPC is the first point of contact between a client and the Kaa IoT platform. KPC performs client authentication and endpoint identification.

For clients, authentication can be done using MQTT username/password combination or client SSL certificate. Endpoints are identified using endpoint tokens.

Once a client is connected to KPC, this service manages the client’s further interactions with Kaa extension services. KPC is unaware of the specifics of the extension protocols that are multiplexed on top of 1/KP (2/DCP, 7/CMP, 10/EPMP, etc.). Rather, it uses information available in 1/KP to (de-)multiplex extension protocols and route messages from clients to the appropriate extension service instances and vice versa.
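A toy sketch of this (de-)multiplexing idea, with invented names (KPC’s actual routing is internal to the service): messages are forwarded purely by extension instance name, without ever inspecting the extension payload:

```python
class ExtensionRouter:
    """Toy KPC-style router: forward each client message to the handler
    registered for its extension, treating payloads as opaque bytes."""

    def __init__(self):
        self.routes = {}  # extension instance name -> handler

    def register(self, extension, handler):
        self.routes[extension] = handler

    def route(self, extension, payload):
        # Only the extension name (available in the 1/KP topic) is used
        # for routing; the payload format belongs to the extension.
        if extension not in self.routes:
            raise KeyError(f"no service instance for extension {extension!r}")
        return self.routes[extension](payload)

router = ExtensionRouter()
router.register("dcx", lambda payload: ("dcx", payload))
router.register("cmx", lambda payload: ("cmx", payload))
result = router.route("dcx", b'{"temp": 21.5}')
```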

EPL: Endpoint Lifecycle

Endpoint Lifecycle service (EPL) is a Kaa platform component that monitors endpoint connectivity status and updates an endpoint metadata field with the current connectivity status. It can also send updates to a specific time series, so that you can see when the device was online or offline.

EPL uses the Endpoint Lifecycle and Connectivity Events (9/ELCE) protocol.

Data collection

Responsibilities
  • Reliably collect data at large scale.
  • Configure data processing pipelines.
  • Process structured and unstructured data.
  • Optimize network usage.

Data collection services

DCX: Data Collection Extension

Data Collection Extension service (DCX) extends the communication capability of Kaa Protocol (1/KP) by implementing Data Collection Protocol (2/DCP). DCX supports this extension protocol to receive endpoint data from a communication service and send it to data receiver services for storage and/or processing.

EPTS: Endpoint Time Series

Endpoint Time Series service (EPTS) is a Kaa service that receives endpoint data samples and transforms them into time series. EPTS broadcasts the new data points through the time series transmission interface and provides API for historical time series retrieval.
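The transformation EPTS performs can be pictured with a small sketch. The sample field names below are invented; the actual sample format handled by EPTS is configurable:

```python
from collections import defaultdict

def to_time_series(samples):
    """Toy EPTS-style transform: group endpoint data samples into named
    time series of (timestamp, value) points."""
    series = defaultdict(list)
    for sample in samples:
        ts = sample["timestamp"]
        for name, value in sample["values"].items():
            # Each named value across samples becomes one time series.
            series[name].append((ts, value))
    return dict(series)

# Two data samples from one endpoint yield two time series:
series = to_time_series([
    {"timestamp": 1700000000, "values": {"temp": 21.5, "humidity": 40}},
    {"timestamp": 1700000060, "values": {"temp": 21.7}},
])
```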

Configuration management

Responsibilities
  • Configure device behavior parameters.
  • Deliver configurations via push or pull.
  • Attach configurations to groups of devices.
  • Enqueue configuration updates for offline devices.

Configuration management services

CMX: Configuration Management Extension

Configuration Management Extension service (CMX) extends Kaa Protocol (1/KP) and implements Configuration Management Protocol (7/CMP) to distribute configuration data to endpoints. As with other Kaa extension services, CMX uses Extension Service Protocol (4/ESP) for integration with a communication service.

CMX does not persist endpoint configuration data in any way—instead, configuration is pulled from an endpoint configuration data provider.

CMX implements a proactive configuration data push—endpoint configuration is sent to the endpoint as soon as possible, and an explicit endpoint subscription is not required.

Note that explicit subscription is still recommended.

To detect when a configuration push is required, CMX listens to some of the endpoint connectivity and lifecycle events defined in Endpoint Lifecycle and Connectivity Events (9/ELCE).

CMX acts as the configuration data consumer as per 6/CDTP. This protocol is used to retrieve configuration data, update the latest applied endpoint configuration, and listen for endpoint configuration update events.

ECR: Endpoint Configuration Repository

Endpoint Configuration Repository service (ECR) is used for storing and managing endpoint configuration data via REST API.

ECR supports the following protocols:

  • Configuration Data Transport Protocol (6/CDTP)
  • Endpoint Lifecycle and Connectivity Events (9/ELCE)

Command invocation

Responsibilities
  • Remote execution of commands, in both synchronous and asynchronous ways.
  • Scheduling command delivery (e.g., for offline devices).

Command invocation services

CEX: Command Execution Extension

Command Execution Extension service (CEX) extends the communication capability of Kaa Protocol (1/KP) by implementing Command Execution Protocol (11/CEP). CEX supports this extension protocol to deliver commands to endpoints and to consume endpoint command execution results. As with other Kaa extension services, CEX uses Extension Service Protocol (4/ESP) for integration with a communication service.

Commands are provided by a command invocation caller. CEX implements proactive command push: commands are sent to an endpoint as soon as possible, and an explicit endpoint subscription is not required.

CEX acts as a command invocation agent according to Command Invocation Protocol (12/CIP). This protocol is used to accept command invocation requests and return command invocation results back to the command invocation caller.

RCI: REST Command Invocation

REST Command Invocation service (RCI) is a standard Kaa platform service that exposes REST APIs for invoking commands on endpoints. RCI implements Command Invocation Protocol (12/CIP) to forward commands to endpoints and to consume the invocation results. It acts as a command invocation caller and listens for the results from command invocation agents.

Software updates

Responsibilities
  • Provide devices with updated software.
  • Track device software versions.
  • Manage software and hardware compatibility matrix.
  • Roll back upgrades.

Software update services

OTAO: Over-the-air Orchestrator

Over-the-air Orchestrator service (OTAO) is a Kaa service responsible for endpoint over-the-air updates. OTAO does not persist the software itself, but rather its specification. OTAO operates on the notion of a software update: a description of a particular piece of software that includes the following vital parts:

  • version;
  • upgrade graph from other software versions;
  • specification (free-form JSON).

A software update can describe any software, including firmware, device drivers, operating systems, etc. Software updates are defined per application.
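The upgrade graph idea can be sketched as a shortest-path search over version nodes. This is a toy model with invented names; OTAO’s actual data model and path selection are its own:

```python
from collections import deque

def upgrade_path(graph, installed, target):
    """Find a shortest chain of software updates from the installed
    version to the target via BFS over the upgrade graph."""
    queue = deque([[installed]])
    seen = {installed}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no upgrade chain reaches the target

# Toy graph: "1.0" cannot jump straight to "2.0"; it must pass
# through the intermediate "1.5" update first.
graph = {"1.0": ["1.5"], "1.5": ["2.0"], "2.0": []}
path = upgrade_path(graph, "1.0", "2.0")
```

Encoding upgrades as a graph rather than a flat version list is what lets OTAO express constraints such as mandatory intermediate updates or the absence of a downgrade edge.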

OTAO provides REST APIs that can be used by other services to manage software updates. Endpoint software update transport interface is based on the Configuration Data Transport Protocol (6/CDTP).