
Kafka Optimization — How many partitions are needed?

Updated February 2023

Apache Kafka is a distributed system that runs as a cluster of nodes, referred to as brokers. Kafka topics are partitioned and replicated across those brokers.

Why are Partitions Important?

Partitions allow users to parallelize topics, meaning data for any topic can be divided over multiple brokers. Choosing the right number of partitions is a critical component of Kafka optimization.

Since a topic can be split into partitions over multiple machines, multiple consumers can read a topic in parallel. This organization sets Kafka up for high message throughput.

In other words, the greater the parallelization, the greater the throughput.
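To make the parallelism concrete, here is a minimal sketch using the console consumer that ships with Kafka (the broker address localhost:9092, the topic my-topic, and the group my-group are assumed placeholder values):

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --group my-group

Running this same command in multiple terminals places the consumers in one consumer group, and Kafka assigns each of them a disjoint subset of the topic's partitions, so reads proceed in parallel up to the partition count.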

Partitions also play an important role in guaranteeing message order.  Check out our article on how Kafka guarantees message order to learn more.

Are More Partitions Better?

You don’t necessarily want to use more partitions than needed, because increasing the partition count also increases the number of open file handles on the brokers and leads to increased replication latency.

For most implementations, you want to follow the rule of thumb of 10 partitions per topic and 10,000 partitions per Kafka cluster. Going beyond those counts can require additional monitoring and optimization. (You can learn more about Kafka monitoring here.)

Calculating Kafka Partition Requirements

Here is the calculation we use to optimize the number of partitions for a Kafka implementation.

# Partitions = Desired Throughput / Partition Speed

Conservatively, you can estimate that a single partition for a single Kafka topic can sustain about 10 MB/s.

For example, if your desired throughput is 5 TB per day, that comes out to about 58 MB/s. Using the estimate of 10 MB/s per partition, this example implementation would require 6 partitions (58 / 10 = 5.8, rounded up).
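As a quick sanity check of that arithmetic, here is a small shell sketch (assuming 1 TB ≈ 1,000,000 MB and the example figures above):

awk -v tb_per_day=5 -v mb_per_partition=10 'BEGIN {
  mbps = tb_per_day * 1000000 / 86400                                  # TB/day to MB/s
  partitions = mbps / mb_per_partition
  if (partitions != int(partitions)) partitions = int(partitions) + 1  # round up
  printf "%.1f MB/s -> %d partitions\n", mbps, partitions
}'
# Prints: 57.9 MB/s -> 6 partitions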

Kafka Partition Calculation

For the example above, the number of partitions is set using the following command:

bin/kafka-topics.sh --zookeeper ip_addr_of_zookeeper:2181 --create --topic my-topic --partitions 6 --replication-factor 3 --config max.message.bytes=64000 --config flush.messages=1
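Note that the --zookeeper option applies to older Kafka versions; kafka-topics.sh has accepted --bootstrap-server since Kafka 2.2, and the --zookeeper option was removed in Kafka 3.0. An equivalent command for newer clusters, assuming a broker reachable at localhost:9092, would be:

bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 6 --replication-factor 3 --config max.message.bytes=64000 --config flush.messages=1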

What is the Replication Factor?

In the command above, the replication factor is set to 3, a common production choice. While partitions reflect horizontal scaling of unique data, the replication factor refers to the number of copies of each partition. With 6 partitions and a replication factor of 3, there are 18 partitions in total: the 6 originals plus 2 copies of each of those unique partitions.

As with all collected data, you want to ensure information is not lost if there is a failure. Creating replicated partitions is an important component of preventing data loss.
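To verify how the partitions and their replicas are spread across brokers, you can describe the topic; this sketch assumes the newer --bootstrap-server style with a broker at localhost:9092:

bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic my-topic

For each of the 6 partitions, the output lists the leader broker and the full replica set of 3 brokers, accounting for the 18 partitions in total described above.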

Run Partition Testing

Starting with your partition estimate, the next step is to test actual partition throughput. Setting up Kafka monitoring will enable you to easily run these tests.
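One way to run such a test is with the producer perf-test tool that ships with Kafka. A minimal sketch, assuming a broker at localhost:9092 and the my-topic topic created above:

bin/kafka-producer-perf-test.sh --topic my-topic --num-records 1000000 --record-size 1000 --throughput -1 --producer-props bootstrap.servers=localhost:9092

Setting --throughput -1 removes the client-side throttle, so the tool reports the maximum rate it can sustain; dividing the observed MB/s by the partition count gives a measured per-partition figure to compare against the 10 MB/s estimate.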

Our post on Kafka monitoring with Elasticsearch and Kibana is a good place to start.


Have Kafka Questions?

Managed Kafka in your environment with 24/7 support.

Consulting support to implement, troubleshoot, and optimize Kafka.

Schedule a call with a Kafka solution architect.

Published by

Dattell - Kafka & Elasticsearch Support

Benefit from the experience of our Kafka, Pulsar, Elasticsearch, and OpenSearch expert services to help your team deploy and maintain high-performance platforms that scale. We support Kafka, Elasticsearch, and OpenSearch both on-prem and in the cloud, whether on standalone clusters or running within Kubernetes. We’ve saved our clients $100M+ over the past six years. Without our guidance, companies tend to overspend on hardware or purchase unnecessary licenses. We typically save clients multiples of our fees, in addition to building, optimizing, and supporting fault-tolerant, highly available architectures.