Kafka architecture—a deep dive

Kafka partition

Apache Kafka® is an open-source distributed event streaming platform for high-performance data pipelines and streaming analytics. A key component behind Kafka's scalability and performance is its concept of topics and partitions. The messages in Kafka are written into “topics.” Topics are logical groups of events, allowing consumers and producers to categorize messages. These topics are stored in Kafka brokers.

Kafka further divides topics into smaller units called partitions. Partitions are append-only, ordered logs, each holding a subset of a topic's data. A topic can have multiple partitions, which is how Kafka achieves parallelism in message processing.

This article explains partitions and why they are important. It also discusses the major partition strategies for producers and consumers.

Kafka Topic with three partitions and replication factor 3

Summary of Kafka partition concepts

Kafka lets you choose how producers publish messages to partitions and how consumers are assigned to partitions. There are multiple ways to achieve this, each with its own pros and cons. The lists below summarize the important partitioning strategies.

Producer partition strategies

  • Default: A hash of the message key determines the partition, so messages with the same key always map to the same partition.
  • Round-robin: Messages map to partitions in round-robin order; under the default partitioner, messages with a null key are also spread this way.
  • Sticky: Records that specify a partition are routed to that partition as-is; all other records stick to one partition until the batch is full or the wait time elapses, which reduces latency.
  • Custom: Implements the producer Partitioner interface, overriding its partition method with custom key-to-partition routing logic.

Consumer assignment strategies

  • Range (default): Assigns the same partition number of each subscribed topic to the same consumer (the same consumer reads from P0 of Topic X and P0 of Topic Y).
  • Round-robin: Partitions are assigned to consumers one at a time, in order, cycling through the group.
  • Sticky: Works like the round-robin assignor but preserves as many existing assignments as possible across rebalances.
  • Custom: Extends the AbstractPartitionAssignor class so you can override the assign method with custom logic.

Partition basics

Before diving into the strategies, let’s quickly discuss some background information on partitioning.

As you know, producers write messages to a Kafka topic and consumers read messages from the topics they are interested in. However, in a system with many producers and consumers, a single server (broker) that stores all of a topic's messages can become a bottleneck. That's where topic partitions come into play.

You can host each partition on a different broker, scaling a single topic horizontally across multiple brokers to provide performance far beyond the ability of a single server. Multiple consumers (of a consumer group) can read data from the same topic simultaneously.

Additionally, you can replicate partitions so different brokers store copies of the same partition in case one server fails. Kafka thus provides redundancy and scalability with topic partitions.

Choosing the right number of partitions

When you set up Kafka, you choose the number of partitions for each topic you define. This is an important decision, and there is no one right answer. There are several factors to consider while choosing the number of partitions for a topic.

  • Message volume: More partitions allow for higher throughput, but managing a large number becomes complex.
  • Parallelism: The number of partitions should match the number of consumers in a group for optimal load balancing. If there are fewer partitions than consumers, some consumers are left idle.
  • Broker limitations: The broker specifications (RAM and CPU) limit the number of partitions it can host. If a low-spec broker hosts many partitions, performance can be affected.
  • Storage: Each partition is stored as a set of segment files on disk. Having too many partitions can increase storage overhead.

Note that once a topic is operational, you cannot decrease its number of partitions. So, ideally, you should start with a small number of partitions and increase it gradually. As a guideline, you should not exceed 4,000 partitions per broker or 200,000 partitions per cluster (a group of brokers that work together).
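For illustration, here is a minimal sketch of creating a topic with an explicit partition count and replication factor using the Java Admin client. The broker address and the "user-activity" topic name are placeholder assumptions, not values from this article.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        try (Admin admin = Admin.create(props)) {
            // Start conservatively: 3 partitions, replication factor 3.
            NewTopic topic = new NewTopic("user-activity", 3, (short) 3); // assumed topic name
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```

You can later raise the partition count of an existing topic through the Admin API or CLI, but you can never lower it, which is why starting small is the safer choice.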

Kafka partition strategies for producers

To understand partitioning strategies, we must first understand Kafka messages. Conceptually, a Kafka message has a key, value, timestamp, and optional metadata headers. Here's an example message:

  • Key: "Temperature sensor"
  • Value: "Recorded a temperature of 25℃"
  • Timestamp: "April 26, 2024 at 1:00 p.m."

Kafka guarantees that messages within a partition are stored and delivered in the exact order they are written. The producer partitioning strategies determine which partition the producer writes a specific message to.
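As a concrete illustration, the message above could be produced with the Java client roughly as follows. The broker address and the "sensor-readings" topic name are assumptions; the timestamp is attached automatically when the record is sent.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SensorProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The key ("Temperature sensor") decides which partition this record lands on.
            ProducerRecord<String, String> record = new ProducerRecord<>(
                "sensor-readings", "Temperature sensor", "Recorded a temperature of 25C");
            producer.send(record);
        }
    }
}
```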

Default partitioner

When the key is null (or empty), Kafka distributes the record among the available topic partitions. If a key exists, Kafka hashes it and uses the hash to pick a partition, so messages with the same key always map to the same partition.

However, the mapping is consistent only as long as the number of partitions in the topic remains the same. If you add new partitions, Kafka may write new messages with the same key to a different partition than before.
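To make the hashing idea concrete, here is a simplified sketch of the key-to-partition mapping. The real default partitioner does more than this, but it is based on Kafka's murmur2 hash, which the sketch reuses from the client library.

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

public class DefaultPartitioningSketch {
    // Simplified: hash the key bytes and take the result modulo the partition count.
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always lands on the same partition while the partition count is unchanged.
        System.out.println(partitionFor("Temperature sensor", 3));
        System.out.println(partitionFor("Temperature sensor", 3)); // identical result
        System.out.println(partitionFor("Temperature sensor", 6)); // may differ after adding partitions
    }
}
```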

Round robin partitioner

You can use this approach if you want the producer to distribute writes equally among all partitions. Given a series of consecutive messages, the producer sends each message to a different partition until all partitions are covered, then starts over. Messages with the same key can therefore end up in different partitions.

The round-robin strategy is useful when many messages are produced with the same key. The default strategy would create hot, overloaded partitions while others sit nearly idle. In contrast, the round-robin strategy spreads messages evenly across partitions.
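To opt into this behavior, you can point the producer at Kafka's built-in RoundRobinPartitioner. A minimal configuration sketch (broker address assumed) looks like this:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.RoundRobinPartitioner;
import org.apache.kafka.common.serialization.StringSerializer;

public class RoundRobinProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Spread records across partitions in round-robin order, regardless of the record key.
        props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, RoundRobinPartitioner.class.getName());
        return props;
    }
}
```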

Uniform sticky partitioner

A producer defaults to the round-robin strategy if neither a partition nor a key is specified. While this spreads records out evenly among the partitions, it also results in more batches of smaller size. (With three partitions, records are split into three small batches, which is inefficient for a continuous data stream.) The result is more queued requests and higher latency.

The uniform sticky partitioner solves this problem by introducing two rules:

  • Records containing partition information are used as is.
  • Records without partition information are batched to the same partition until the batch is full or the configured wait time (linger.ms) elapses.

The chosen partition only changes between batches. This enables larger batches and reduces latency. Over time, records are still distributed evenly across all partitions. However, records with the same key are not guaranteed to go to the same partition.
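The batch size and wait time that bound a sticky batch are ordinary producer settings. The sketch below shows the two knobs involved; the specific values are assumptions chosen only to illustrate the trade-off between batch size and latency.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class StickyBatchingConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        // The sticky partitioner keeps filling one partition's batch until either
        // limit below is reached, then moves on to another partition.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024); // assumed 32 KB batch size
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);         // assumed 10 ms wait time
        return props;
    }
}
```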

Kafka partition strategies for producers

Custom partitioner

Sometimes, a use case does not fit any of the built-in Kafka partition strategies. For example, consider a social media platform that uses Kafka to process user feed updates and activities. Suppose a power user of this platform (called “PU”) is so active that they generate around 30-40% of all events.

With default hash partitioning, the “PU” user’s records share a partition with other users’ records, and that partition grows much larger than the rest, causing bottlenecks. An ideal solution is to give the user “PU” a dedicated partition and map the remaining users with hash partitioning. You can implement this strategy in code by implementing the org.apache.kafka.clients.producer.Partitioner interface. You can find the complete code in our guide on Kafka partitioning strategy.
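A minimal sketch of such a partitioner is shown below. It is not the complete code from the guide: the reserved partition index, the topic layout, and the use of string keys are assumptions made for illustration.

```java
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Routes the power user "PU" to a dedicated partition and hashes all
// other keys over the remaining partitions.
public class PowerUserPartitioner implements Partitioner {
    private static final int PU_PARTITION = 0; // assumption: partition 0 is reserved for "PU"

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size(); // assumes at least 2 partitions
        if (keyBytes == null) {
            // Assumption: every record in this topic carries a key.
            throw new IllegalArgumentException("This partitioner requires a key");
        }
        if ("PU".equals(key)) {
            return PU_PARTITION;
        }
        // Hash every other key over the non-reserved partitions.
        return 1 + Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - 1);
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}
```

To activate it, set the producer's partitioner.class property to the class name.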

Consumers and Kafka partitions

Consumers read data from Kafka. If multiple producers write messages to a topic, a single consumer reading from that topic may not keep up with the rate of incoming messages and will fall behind.

To scale consumption, Kafka has a concept called consumer groups. As the name suggests, a Kafka consumer group consists of multiple consumers that subscribe to the same topic. Each consumer receives messages from a different set of partitions in the topic, so group members distribute the data among themselves.

The only condition is that each partition is assigned to at most one consumer in the group. If you have more consumers than partitions, some remain idle. An idle consumer only takes on a workload if an existing consumer fails.

Just as you can control how data is written to partitions in a topic, Kafka also allows you to decide how consumers read data from partitions. You can configure the Kafka partition strategy for consumers using the partition.assignment.strategy property.
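For example, the sketch below configures a consumer group to use the cooperative sticky assignor. The broker address and group name are assumptions; if the property is omitted, the range assignor applies.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AssignmentStrategyConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "feed-processors");         // assumed group name
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Choose how partitions are assigned to consumers in this group.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                  CooperativeStickyAssignor.class.getName());
        return props;
    }
}
```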

Partition assignment to consumer groups

Consumer rebalancing

You can add or remove consumers from a consumer group at run time. A new consumer will start consuming messages from partitions according to the configured consumer partition strategies (explained later). Similarly, if you remove a consumer or it fails, Kafka reassigns its partitions to other remaining active consumers. The reassignment of partition ownership between consumers is called rebalancing. There are two types of rebalances.

Eager rebalancing

For every change in membership, all consumers in the group temporarily stop reading data, give up partition ownership, and rejoin the group. Kafka then reassigns partitions to everyone. This causes a short period of unavailability because no consumer reads data during the rebalance.

Cooperative rebalancing

Cooperative rebalancing happens in multiple phases, reassigning only a small subset of partitions between consumers at a time. Because changes are made incrementally, some consumers are always processing messages, and total unavailability is avoided.

Static group membership avoids rebalancing

Rebalances can be undesirable in the ordinary course of events because of the consumer downtime they cause. Real-time event processing use cases cannot afford delays of even a few milliseconds. You can avoid rebalancing with static membership.

Normal rebalancing works on the premise that consumer identity in a group is transient. If a consumer leaves the group and rejoins, it gets a new set of partitions unrelated to the partitions it had before. However, you can override this behavior by configuring a consumer with a unique group.instance.id property. The property gives the consumer a static membership.

When a static member first joins a group, it is assigned partitions according to the configured partition assignment strategy. If it shuts down and rejoins, Kafka ‘recognizes’ its unique static identity and reassigns the same partitions it was consuming before. No rebalancing is triggered.

Static group membership is useful when consumers maintain local state, and recreating that state every time a consumer restarts would be too time-consuming.
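A minimal configuration sketch for a static member follows. The broker address, group name, instance ID scheme, and timeout value are all assumptions.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class StaticMemberConfig {
    public static Properties build(String instanceId) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "feed-processors");         // assumed group name
        // A stable, unique ID per consumer instance (e.g. "consumer-1") makes the
        // member static, so it gets its old partitions back after a restart.
        props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, instanceId);
        // The member must rejoin within the session timeout, or a rebalance is triggered anyway.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 45000); // assumed 45 s
        return props;
    }
}
```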

Kafka partition strategies for consumers

The consumer partition strategies are given below.

Range assignor

The range assignor assigns consecutive partitions to the consumers in a group. It divides the total number of partitions by the number of consumers to calculate how many each consumer should handle; the first few consumers get one extra partition if the division isn't even. For example, with 7 partitions and 3 consumers, the first consumer gets 3 partitions and the other two get 2 each. This strategy is simple and efficient but might not always lead to an even workload.

Round robin assignor

Just as for producers, this strategy aims to distribute partitions more evenly among consumers. It assigns partitions sequentially in a round-robin fashion: the first partition goes to the first consumer, the second to the next consumer, and so on, cycling back to the first consumer after the last group member. The goal is an even partition count for every consumer.

It is useful when dealing with a variable number of partitions per topic or when partitions carry differing amounts of data. However, it does not try to minimize partition movement during rebalancing.

Sticky assignor

The sticky assignor enhances the stability of partition assignments across rebalances. It tries to keep assignments as consistent as possible from one rebalance to the next. If a consumer must receive a different set of partitions, the sticky assignor minimizes the changes, reducing the processing overhead of a rebalance. This results in more stable consumption and is advantageous in systems where consumers pay a high cost to initialize partitions.

Custom assignor

You can extend the AbstractPartitionAssignor class to define your own strategy for partition assignment. You can control how partitions are distributed based on factors like consumer capacity, historical data consumption patterns, or workload characteristics. You can find the complete code in our guide on Kafka partitioning strategy.
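As a rough illustration only, the sketch below spreads each topic's partitions over its subscribers by partition index. It assumes a kafka-clients 3.x release where the abstract method to implement is assign(Map<String, Integer>, Map<String, Subscription>); the exact signature can differ between client versions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerPartitionAssignor.Subscription;
import org.apache.kafka.clients.consumer.internals.AbstractPartitionAssignor;
import org.apache.kafka.common.TopicPartition;

// Hypothetical assignor: partition p of a topic goes to subscriber (p mod subscriber count).
public class ModuloAssignor extends AbstractPartitionAssignor {

    @Override
    public String name() {
        return "modulo";
    }

    @Override
    public Map<String, List<TopicPartition>> assign(Map<String, Integer> partitionsPerTopic,
                                                    Map<String, Subscription> subscriptions) {
        // Start every group member with an empty assignment.
        Map<String, List<TopicPartition>> assignment = new HashMap<>();
        subscriptions.keySet().forEach(member -> assignment.put(member, new ArrayList<>()));

        List<String> members = new ArrayList<>(subscriptions.keySet());
        members.sort(String::compareTo); // deterministic order on every member

        for (Map.Entry<String, Integer> entry : partitionsPerTopic.entrySet()) {
            String topic = entry.getKey();
            // Only members that actually subscribe to this topic are candidates.
            List<String> subscribers = new ArrayList<>();
            for (String member : members) {
                if (subscriptions.get(member).topics().contains(topic)) {
                    subscribers.add(member);
                }
            }
            if (subscribers.isEmpty()) {
                continue;
            }
            for (int p = 0; p < entry.getValue(); p++) {
                assignment.get(subscribers.get(p % subscribers.size())).add(new TopicPartition(topic, p));
            }
        }
        return assignment;
    }
}
```

You register it through the same partition.assignment.strategy property shown earlier.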

Common pitfalls with Kafka partitions

Choosing the optimal partition strategy for your Kafka topics is crucial for ensuring efficient and scalable data processing. Here are some common pitfalls to avoid:

Overpartitioning

Creating a large number of partitions doesn't always lead to better performance. Kafka reassigns partitions across consumers whenever the consumer group membership changes, and with a high partition count, rebalancing becomes time-consuming, impacting availability and performance. Moreover, if the message volume is low relative to the number of partitions, some partitions remain mostly empty, leading to inefficient resource utilization on the brokers.

Underpartitioning

The other extreme is no better. If there are too few partitions, they can become overloaded with messages, leading to backlogs, processing delays, and ultimately lower overall throughput. The broker hosting the under-partitioned topic might struggle to keep pace with the high volume of messages, potentially hitting CPU overload and disk I/O bottlenecks.

Underpartitioning often results in reactive scaling. Suppose you don't factor in potential future increases in message volume or consumer count when choosing the partition count. As volume or consumer numbers rise, you have to react by increasing the partition count, which can mean downtime during rebalancing.

Choosing the wrong partitioning key

Selecting a message key that doesn't distribute messages evenly across partitions leads to uneven data distribution. Messages with the same key are always directed to the same partition, so a few heavily used keys create hot (overloaded) partitions while others sit nearly idle. This skews the workload across consumers and hurts processing efficiency.

Moreover, if in-order processing is important, an inappropriate key can split related messages across partitions and lose their relative order, even though messages are still delivered in order within each individual partition.

Tips to avoid pitfalls

Try the following.

Start low, scale up

Begin with a conservative partition count and increase it gradually based on monitoring data and performance metrics. A partition count between 3 and 10 is generally a good starting point.

Meaningful keys

Avoid using null keys. Use random keys (UUIDs, etc.) only when ordering is not required. Based on your access patterns and desired ordering guarantees, select a key that distributes messages evenly.

Monitoring

Lastly, continuously monitor Kafka metrics like partition lag, consumer offsets, and broker resource utilization. Tools like Kafka Monitor, Burrow, or a Prometheus + Grafana setup help you spot anomalies more quickly. Analyze message distribution across partitions to identify potential issues.

Conclusion

Kafka partition strategies can be tricky to understand and get right, especially at scale. Many teams that scale Kafka struggle with performance issues that may or may not stem from partitions. Diagnosing, troubleshooting, and managing Kafka in production is challenging for developers.

Redpanda is a drop-in Kafka alternative that is developer-friendly, leaner, and faster to operate. Kafka users can migrate to Redpanda without application code changes. To get started with Redpanda in seconds, sign up for Redpanda Serverless!
