Published September 2022
OpenSearch shards enable parallel processing of data both within a single node and across multiple OpenSearch nodes.
OpenSearch automatically manages the allocation of shards across the nodes. However, choosing the number of shards is up to the user.
To follow along with this article, it will be helpful to understand what clusters, shards, and replicas are in OpenSearch. Check out our OpenSearch terms and definitions post for a quick review.
Structure of an OpenSearch Cluster
To start, let’s walk through the structure of an OpenSearch cluster.
Firstly, we use a minimum of three nodes. Distributed systems like OpenSearch are susceptible to the split-brain problem, where two different nodes each claim to be the master. A third node is required to break the tie and elect a single master node.
Secondly, each shard should have at least one replica. If a node crashes, you will want a duplicate of the data from that node’s shard(s) to prevent data loss.
And lastly, the replica should be stored on a different node than its corresponding shard. If a replica lived on the same node as its primary shard, losing that node would take both copies offline.
OpenSearch handles splitting up shards and their respective replicas onto different nodes. This isn’t something you will need to manually configure.
We included a depiction of this OpenSearch cluster structure in the figure above. Node 1 is where shard_0 is stored, and its replica, replica_0, is on Node 2.
Similarly, shard_1 is stored on Node 2, and its replica is stored on Node 3.
A shard can have as many replicas as you want; we recommend a minimum of one. For important or sensitive data, multiple replicas may be warranted.
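If you want to see what this looks like in practice, here is a minimal sketch using the opensearch-py Python client; the host, credentials, and the index name logs-2022-09 are placeholder assumptions for a local test cluster.

```python
from opensearchpy import OpenSearch

# Connect to a local test cluster (host and credentials are placeholders).
client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),
    use_ssl=True,
    verify_certs=False,
)

# Create an index with 2 primary shards and 1 replica per shard.
# OpenSearch decides which nodes each shard and replica lands on.
client.indices.create(
    index="logs-2022-09",
    body={
        "settings": {
            "index": {
                "number_of_shards": 2,
                "number_of_replicas": 1,
            }
        }
    },
)
```

With one replica per shard and at least two data nodes, OpenSearch will place each replica on a different node than its primary, exactly as in the figure above.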
Now that we’ve covered the structure of an OpenSearch cluster, let’s start tackling the question of how many shards are required.
Each Shard Uses Resources
Determining the number of shards up front is important. Adjusting the number of shards after the cluster is in production requires reindexing all of the source documents, and reindexing is a long process.
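To see why, here is a rough sketch of what changing the shard count involves, reusing the client from the sketch above: create a brand-new index with the desired settings, then copy every document into it with the _reindex API (the index names are hypothetical).

```python
# Changing the shard count means creating a brand-new index with the new settings...
client.indices.create(
    index="logs-2022-09-reshard",
    body={
        "settings": {
            "index": {
                "number_of_shards": 10,
                "number_of_replicas": 1,
            }
        }
    },
)

# ...and copying every document into it. On a large index this can run
# for hours, which is why it pays to choose the shard count up front.
client.reindex(
    body={
        "source": {"index": "logs-2022-09"},
        "dest": {"index": "logs-2022-09-reshard"},
    },
    wait_for_completion=True,
)
```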
You might be thinking, “Let’s allocate a crazy high number of shards.” Unfortunately, that’s a costly and inefficient approach.
Each shard uses memory and CPU resources.
Additionally, each search request will require an interaction with every shard in the index. If shards are competing for the same hardware resources, then superfluous shards will impair performance.
Calculating the Number of Shards Needed
You ideally want to limit your maximum shard size to 30-80 GB, but a single shard can hold hundreds of GB and still perform well.
Using the 30-80 GB value, you can calculate how many shards you will need.
For instance, let’s assume you rotate indices monthly and expect around 600 GB of data per month. In this example, you would allocate 8 to 20 shards.
It’s better for overall performance to use a few large shards rather than a large number of small ones. This is because each shard carries a fixed resource overhead, regardless of how much data it holds.
For the example above, we would likely stick to the lower end of the range and use about 10 shards.
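Here is the arithmetic behind that range as a small Python sketch, using the 600 GB monthly volume and the 30-80 GB shard-size bounds from the example above:

```python
import math

monthly_data_gb = 600                # expected data per monthly index
min_shard_gb, max_shard_gb = 30, 80  # target size range for a single shard

# Dividing by the largest acceptable shard size gives the fewest shards;
# dividing by the smallest acceptable size gives the most.
fewest_shards = math.ceil(monthly_data_gb / max_shard_gb)  # 8
most_shards = math.ceil(monthly_data_gb / min_shard_gb)    # 20

print(f"Allocate between {fewest_shards} and {most_shards} shards")

# Sticking to the lower end, 10 shards works out to about 60 GB per shard.
print(f"10 shards -> {monthly_data_gb / 10:.0f} GB per shard")
```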
There are exceptions, of course.
Let’s say your use case calls for low-latency indexing and quick queries, and you have 40 nodes. In that scenario, we would increase the shard count to 40 so that the shards are spread across all of our hardware, at the expense of smaller, less efficient shards.
With a total of 40 shards, each of our nodes will have 1/40th of the data to process, whereas with 8 shards, only 8 nodes would each process 1/8th of the data while the other 32 nodes sit idle.
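To check how shards actually land on your nodes, you can ask the cluster directly. Here is a minimal sketch using the cat shards API, again reusing the client and the hypothetical index name from above:

```python
# List every shard of the index, which node it lives on, whether it is a
# primary (p) or replica (r), and how much disk it uses.
for shard in client.cat.shards(index="logs-2022-09", format="json"):
    print(shard["shard"], shard["prirep"], shard["node"], shard["store"])
```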