Updated July 2021
ZooKeeper is used in distributed systems for service synchronization and as a naming registry. When working with Apache Kafka, ZooKeeper is primarily used to track the status of nodes in the Kafka cluster and to maintain a list of Kafka topics and partitions. (Messages themselves are stored by the Kafka brokers, not in ZooKeeper.)
ZooKeeper was originally developed by Yahoo to address the bugs that can arise with distributed, big data applications by storing the status of processes running on clusters. Like Kafka, ZooKeeper is an open source technology under the Apache License.
ZooKeeper and Kafka
For now, Kafka services cannot be used in production without first installing ZooKeeper. * This is true even if your use case requires just a single broker, single topic, and single partition.
*Starting with v2.8, Kafka can be run without ZooKeeper. However, this update isn’t ready for use in production. See the section on running Kafka without ZooKeeper below.
Every distributed system needs a way to coordinate tasks. Kafka is a distributed system that was built to use ZooKeeper for this purpose, whereas other technologies like Elasticsearch and MongoDB have their own built-in coordination mechanisms.
ZooKeeper has five primary functions. Specifically, ZooKeeper is used for controller election, cluster membership, topic configuration, access control lists, and quotas.
#1 Controller Election. The controller is the broker responsible for maintaining the leader/follower relationship across all partitions. If a node shuts down, ZooKeeper ensures that replicas on other brokers take over as partition leaders for the partitions that node was leading.
#2 Cluster Membership. ZooKeeper keeps a list of all functioning brokers in the cluster.
#3 Topic Configuration. ZooKeeper maintains the configuration of all topics, including the list of existing topics, number of partitions for each topic, location of the replicas, configuration overrides for topics, preferred leader node, among other details.
#4 Access Control Lists (ACLs). ZooKeeper also maintains the ACLs for all topics. This includes who or what is allowed to read from and write to each topic, the list of consumer groups, the members of each group, and the most recent offset each consumer group received from each partition.
#5 Quotas. ZooKeeper tracks how much data each client is allowed to read/write.
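As a rough illustration (this is a simplified model, not Kafka's actual implementation), functions #1 through #3 above can be sketched in a few lines: the controller consults the cluster membership and replica placement that ZooKeeper tracks, and promotes the next live replica when a leader's broker disappears.

```python
# Illustrative model (not Kafka source code) of controller election fallback:
# when a broker fails, the next in-sync replica on a live broker takes over.

def elect_new_leader(partition_replicas, live_brokers):
    """Return the first replica hosted on a live broker, or None."""
    for broker_id in partition_replicas:
        if broker_id in live_brokers:
            return broker_id
    return None

# Cluster membership (function #2): the set of functioning brokers.
live_brokers = {1, 2, 3}

# Topic configuration (function #3): replica placement for one partition;
# broker 1 is the preferred leader.
partition_replicas = [1, 2, 3]

# Broker 1 shuts down; the controller promotes the next live replica.
live_brokers.discard(1)
print(elect_new_leader(partition_replicas, live_brokers))  # -> 2
```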
ZooKeeper isn’t memory intensive when it’s working solely with Kafka. About 8 GB of RAM will be sufficient for most use cases.
Much like memory, ZooKeeper doesn’t consume CPU resources heavily. However, it is best practice to provide a dedicated CPU core for ZooKeeper to ensure there are no issues with context switching.
Finally, disk performance is critical for ZooKeeper. Because ZooKeeper needs low latency disk writes, we recommend using solid state drives (SSD).
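These recommendations translate into only a handful of settings in ZooKeeper's configuration file. The sketch below is a minimal `zoo.cfg` for a standalone node; the directory paths are placeholders and assume the SSD is mounted at `/ssd`.

```properties
# Minimal zoo.cfg sketch -- paths are placeholders for your environment.
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
# Keep the snapshot and transaction-log directories on a dedicated SSD,
# since ZooKeeper needs low-latency disk writes.
dataDir=/ssd/zookeeper/data
dataLogDir=/ssd/zookeeper/log
```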
Monitoring ZooKeeper with Elasticsearch and Kibana
Just as it’s important to monitor Kafka performance in real-time to diagnose system issues and prevent future problems, it’s critical to monitor ZooKeeper.
We recommend using Elasticsearch because it’s free (open source) and highly versatile. Kibana is part of the same Elastic Stack as Elasticsearch. Kibana works alongside Elasticsearch to provide customized visualizations for tracking real-time performance.
For more information on how to monitor ZooKeeper and Kafka performance in real-time, check out our post Kafka Monitoring With Elasticsearch and Kibana.
ZooKeeper’s Claims to Fame
ZooKeeper is known for its reliability, simplicity, speed, and scalability.
Reliability. ZooKeeper keeps working even if a node fails.
Simplicity. ZooKeeper’s architecture is simple, with a shared hierarchical namespace that assists in coordinating processes.
Speed. ZooKeeper is known for its fast processing of read-dominant workloads, i.e., workloads with many more reads than writes.
Scalability. ZooKeeper is horizontally scalable, which means it can be scaled by simply adding additional nodes.
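The reliability and scalability claims rest on the same piece of arithmetic: a ZooKeeper ensemble stays available as long as a majority (a quorum) of its nodes are up, so an ensemble of 2f + 1 nodes tolerates f failures. A small helper makes the trade-off concrete:

```python
# A ZooKeeper ensemble survives as long as a majority of nodes are up:
# an ensemble of 2f + 1 nodes tolerates f node failures.

def tolerated_failures(ensemble_size):
    """Number of node failures an ensemble can survive."""
    return (ensemble_size - 1) // 2

for size in (1, 3, 5, 7):
    print(size, "nodes ->", tolerated_failures(size), "failures tolerated")
```

This is also why ensembles are usually sized with an odd number of nodes: going from 3 nodes to 4 adds cost without tolerating any additional failures.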
Running Kafka Without ZooKeeper
ZooKeeper’s role is being phased out. Let’s run through some of the most common questions we get about this new way of running Kafka.
Can Kafka be run without ZooKeeper?
Starting with version 2.8, Kafka can be run without ZooKeeper. The release of 2.8.0 in April 2021 gave us all a chance to start using Kafka without ZooKeeper. However, this version is not ready for use in production and is missing some core functionality. One important component not yet available in this version is ACL control.
Why remove ZooKeeper from Kafka implementations?
Using ZooKeeper with Kafka adds complexity for tuning, security, and monitoring. Instead of optimizing and maintaining one tool, users need to optimize and maintain two tools. Building out Kafka functionality to also handle traditional ZooKeeper tasks makes implementing and running Kafka simpler.
How does Kafka work without ZooKeeper?
The latest version of Kafka uses a new quorum controller. This quorum controller enables all of the metadata responsibilities that have traditionally been managed by both the Kafka controller and ZooKeeper to be run internally in the Kafka cluster.
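At the heart of the quorum controller is a Raft-style rule: a metadata record counts as committed once a majority of the controllers have acknowledged it. The snippet below is a deliberately simplified illustration of that rule, not KRaft's actual code:

```python
# Simplified illustration of Raft-style majority commit (not KRaft source).

def is_committed(acks, ensemble_size):
    """A record is committed once a majority of controllers acknowledge it."""
    return acks > ensemble_size // 2

# With 3 controllers, the leader plus one follower ack forms a majority.
print(is_committed(acks=2, ensemble_size=3))  # -> True
print(is_committed(acks=1, ensemble_size=3))  # -> False
```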
What is KIP-500?
KIP-500 is the Kafka Improvement Proposal that enables topic metadata to be stored within Kafka itself using a new internal topic, @metadata. The @metadata topic is replicated to all brokers and is managed with a Raft quorum of controllers. For a detailed Quickstart guide, check out the GitHub page.
What is KRaft for Kafka?
Starting with Kafka v2.8, Kafka can be run without ZooKeeper. This sans-ZooKeeper mode is formally named Kafka Raft Metadata mode. However, the developers shortened it to KRaft mode and pronounce it like the word “craft”.