Elasticsearch Shards — Definitions, Sizes, Optimizations, and More

Optimizing shard size and count is an important component of achieving maximum performance from your Elasticsearch cluster. To get started, let’s review a few definitions that are an important part of Elasticsearch jargon. If you are already familiar with Elasticsearch, you can continue straight to the next section.


Defining Elasticsearch Jargon:  Cluster, Replicas, Shards, and More

An Elasticsearch cluster is a collection of three or more nodes, and each cluster has a unique name for accurate identification. Each node is a single Elasticsearch instance, and the minimum number of nodes for a cluster is three because Elasticsearch is a distributed system. [If you’re not familiar with why three (3) is the minimum number of nodes, read this.]

Anytime there is a collection of documents in Elasticsearch, it is referred to as an index, and each index is typically split into sub-elements referred to as shards. These shards are then distributed across multiple nodes in the cluster. Elasticsearch automatically manages and balances how the shards are arranged in the nodes.

By default, Elasticsearch creates one (1) primary shard and one (1) replica for every index. (Versions prior to 7.0 defaulted to five (5) primary shards.) We recommend that our clients have at least one (1) replica in every production cluster as a backup. Beyond serving as a backup, replicas also aid search performance, providing higher throughput and additional capacity. You can add or remove replicas at any time to scale out query processing.
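As a sketch (the index name `logs-example` is hypothetical), both settings can be specified at index creation, and the replica count can be changed later through the index settings API:

```
PUT /logs-example
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}

PUT /logs-example/_settings
{
  "number_of_replicas": 2
}
```

The second request takes effect immediately; Elasticsearch allocates or removes the replica copies in the background.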

Unlike the replica count, the number of primary shards cannot be changed after an index is created without reindexing. So without further ado, let’s dig into shards.

 

Did you know that Dattell offers Elasticsearch as a Service?

Dattell’s Elasticsearch as a Service is a fully managed Elasticsearch solution built on your cloud instances or On-Prem servers, providing enhanced security, reduced latency, and reduced costs.

Learn about Elasticsearch as a Service

Optimizing Elasticsearch Shard Size and Number

Determining shard allocation at the get-go is important because changing the number of shards after the cluster is in production requires reindexing all of the source documents. If you are new to Elasticsearch, just know that reindexing is a long process.
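If you do need to change the shard count later, the usual path is to create a new index with the desired settings and copy the documents over with the reindex API (the index names here are hypothetical):

```
PUT /logs-v2
{
  "settings": {
    "number_of_shards": 4
  }
}

POST /_reindex
{
  "source": { "index": "logs-v1" },
  "dest":   { "index": "logs-v2" }
}
```

On a large index this copy can run for hours, which is why getting the shard count right up front matters.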

You might be thinking, “Just allocate a crazy number of shards so that’s never a problem, right?” Not quite. Like all things in life, there needs to be a balance.

Each new shard comes with a cost, taking up memory and CPU resources. Additionally, keep in mind that each search request must touch every shard in the index. If those shards are competing for the same hardware resources (that is, they live on the same node), performance will suffer.

 

If you are running a recent version of Elasticsearch, aim to keep each shard in the 30-80 GB range. A single shard can hold hundreds of gigabytes and still perform well, but staying within that range keeps operations like recovery and rebalancing manageable. (If running below version 6.0, estimate 30-50 GB.)
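To see how large your shards actually are, the `_cat/shards` API lists every shard with its on-disk store size, which you can sort to find the biggest offenders:

```
GET /_cat/shards?v&h=index,shard,prirep,store&s=store:desc
```

Compare the `store` column against the 30-80 GB guideline to decide whether an index needs more shards.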

Using the 30-80 GB target, you can calculate how many shards you’ll need. For instance, if you think an index will reach 600 GB in the next two years, then you will want to allocate 10 to 20 shards (600 GB divided by a target shard size of 30-60 GB).

Typically it is better for performance to have fewer large shards rather than many small shards, since each shard consumes a fixed amount of overhead regardless of how much data it holds.

In the classic scenario, you would have one (1) shard per index, per node. So if you aren’t sure of your expected growth, a good starting point is to estimate the number of shards as 1.5 to 3 times the number of nodes in your starting configuration.
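For example, on a three-node starting cluster that guideline works out to roughly 5 to 9 primary shards. A middle-of-the-road choice might look like this (the index name `metrics-example` is hypothetical):

```
PUT /metrics-example
{
  "settings": {
    "number_of_shards": 6,
    "number_of_replicas": 1
  }
}
```

With six primaries and one replica, Elasticsearch distributes twelve shard copies across the three nodes, leaving headroom to grow.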

Still have questions about Elasticsearch? Connect with one of our Elasticsearch engineers.

Talk to an Elasticsearch Expert
