Running Apache Pulsar on Kubernetes

Apache Pulsar on Kubernetes promises cloud-native scalability—but getting it production-ready requires overcoming several gotchas. After deploying Pulsar across multiple Kubernetes environments, here’s what we’ve learned from the trenches.

Start with the Helm chart (but read the fine print)

The official Pulsar Helm chart is a great starting point, but the defaults are not optimized for production. You must fine-tune BookKeeper and ZooKeeper resources. Additionally, StatefulSets introduce quirks with pod identity and rescheduling.

Customize values.yaml carefully. Use separate node pools for ZooKeeper and BookKeeper.
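
For illustration, a minimal values.yaml override along these lines separates the stateful components onto their own node pools and sets explicit resource requests. This is a sketch only: key names and defaults vary between chart versions, and the node pool label pulsar-pool is an assumption, so verify everything against your chart's own values.yaml.

    # Hypothetical values.yaml override for the official Pulsar Helm chart.
    # Key names differ across chart versions -- verify before applying.
    zookeeper:
      resources:
        requests:
          cpu: "1"
          memory: 2Gi
      nodeSelector:
        pulsar-pool: zookeeper        # assumed node pool label
    bookkeeper:
      resources:
        requests:
          cpu: "2"
          memory: 8Gi
      nodeSelector:
        pulsar-pool: bookkeeper       # assumed node pool label
    broker:
      resources:
        requests:
          cpu: "2"
          memory: 4Gi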

Plan for persistent volumes early

Pulsar’s durability relies on BookKeeper, which in turn relies on fast, persistent disks.

What can go wrong: one cluster was provisioned with low-IOPS disks, causing message write latency to spike.

The fix: Use SSD-backed PersistentVolumes with appropriate StorageClasses. Monitor write latency.
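
As a concrete sketch, an SSD-backed StorageClass on GKE might look like the following; the provisioner and parameters are cloud-specific (AWS and Azure use different CSI drivers and disk types), and the class name pulsar-ssd is just an example. Point BookKeeper's journal and ledger volume claims at a class like this.

    # SSD-backed StorageClass sketch (GKE shown; swap the provisioner and
    # parameters for your cloud). Reference it from BookKeeper's journal and
    # ledger volume claims.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: pulsar-ssd
    provisioner: pd.csi.storage.gke.io
    parameters:
      type: pd-ssd
    volumeBindingMode: WaitForFirstConsumer   # bind after scheduling for topology-aware placement
    allowVolumeExpansion: true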

Understand ZooKeeper’s fragility

ZooKeeper is a single point of coordination. Issues can arise with leader elections during rolling upgrades and with resource starvation under memory pressure.

To address ZooKeeper snags:

  • Set resource requests/limits explicitly
  • Use readiness probes and PodDisruptionBudgets (see the sketch after this list)
  • Pin to isolated nodes if needed
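
For example, a minimal PodDisruptionBudget that keeps a ZooKeeper quorum alive during node drains and rolling upgrades might look like this. The namespace and pod labels are assumptions; match them to whatever labels your Helm chart actually applies to the ZooKeeper pods.

    # PodDisruptionBudget sketch: never evict more than one ZooKeeper pod at a
    # time, so a 3-node ensemble keeps quorum during voluntary disruptions.
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: pulsar-zookeeper-pdb
      namespace: pulsar
    spec:
      maxUnavailable: 1
      selector:
        matchLabels:
          app: pulsar              # assumed labels -- match your chart's
          component: zookeeper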

Handle broker identity carefully

Each Pulsar broker must maintain a unique identity. When pods restart, that identity can break unless they come back with stable hostnames.

Leverage Kubernetes StatefulSets for stable broker naming. Avoid using Deployment-based brokers.
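
To make the idea concrete, here is a stripped-down sketch of a headless Service plus StatefulSet, the combination that gives each broker a stable DNS name such as pulsar-broker-0.pulsar-broker.pulsar.svc.cluster.local. The Helm chart already generates manifests like these for you; the names, labels, and image tag below are illustrative only, and real broker pods need additional configuration that is omitted here.

    # Headless Service: per-pod DNS records, no load-balanced virtual IP.
    apiVersion: v1
    kind: Service
    metadata:
      name: pulsar-broker
      namespace: pulsar
    spec:
      clusterIP: None
      selector:
        app: pulsar
        component: broker
      ports:
        - name: pulsar
          port: 6650
    ---
    # StatefulSet: pods get ordinal, stable hostnames tied to the Service above.
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: pulsar-broker
      namespace: pulsar
    spec:
      serviceName: pulsar-broker     # links pod hostnames to the headless Service
      replicas: 3
      selector:
        matchLabels:
          app: pulsar
          component: broker
      template:
        metadata:
          labels:
            app: pulsar
            component: broker
        spec:
          containers:
            - name: broker
              image: apachepulsar/pulsar:latest   # pin a specific version in production
              command: ["bin/pulsar", "broker"]
              ports:
                - containerPort: 6650   # Pulsar binary protocol
                - containerPort: 8080   # admin/HTTP and metrics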

Use observability from day one

It’s easy to overlook observability until something breaks. We recommend:

  • Enable Prometheus metrics and dashboards for brokers, BookKeeper, and ZooKeeper (a sample scrape config is sketched after this list)
  • Use Jaeger for distributed tracing if you run Pulsar Functions
  • Centralize logs with Fluent Bit or OpenTelemetry
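
If you run the Prometheus Operator, a ServiceMonitor along these lines will scrape the brokers; Pulsar exposes Prometheus metrics on its web service port (8080 by default) at /metrics. The namespace, labels, and port name below are assumptions and must match your broker Service.

    # ServiceMonitor sketch (requires the Prometheus Operator CRDs).
    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: pulsar-broker
      namespace: pulsar
    spec:
      selector:
        matchLabels:
          app: pulsar            # assumed labels on the broker Service
          component: broker
      endpoints:
        - port: http             # assumed name of the Service's 8080 web port
          path: /metrics
          interval: 30s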

Summing it up

Kubernetes + Pulsar is powerful, but you need to approach it with care. From Helm charts to hostnames, the details matter—and they often break at scale.

Need help building or troubleshooting Pulsar on Kubernetes? Talk to our team—we’ve helped teams scale Pulsar clusters that process billions of messages per day.

Learn about Dattell's managed Pulsar service

Dedicated engineer · Flat-fee pricing
24/7 support · 99.99% uptime SLA
In your environment (cloud or on-prem)
