Introduction
Published on: November 2022
OpenSearch enhances the power of Lucene by building a distributed system on top of it, and, in doing so, addresses the issues of scalability and fault tolerance. It also exposes a JSON-based REST API, making interoperability with other systems very straightforward.
Distributed systems like OpenSearch can be very complex, with many factors that can affect their performance and stability. Shards and replicas are among the most fundamental concepts in OpenSearch, and understanding how these work will enable you to effectively manage an OpenSearch cluster.
This article explains what shards and replicas are, their impact on an OpenSearch cluster, and what tools exist to tune them to varying demands.
Understanding Shards
Data in an OpenSearch index can grow to massive proportions. To keep it manageable, the index is split into a number of shards. Each OpenSearch shard is an Apache Lucene index, with each individual Lucene index containing a subset of the documents in the OpenSearch index. Splitting indices in this way keeps resource usage under control; it is also a hard requirement at scale, since a single Apache Lucene index can hold at most 2,147,483,519 documents.
Having shards that are too large is simply inefficient. Moving huge indices across machines is a time- and labor-intensive process. First, Lucene merges take longer to complete and require greater resources. Moreover, moving shards across nodes for rebalancing also takes longer, and recovery time is extended. Thus, by splitting the data and spreading it across a number of machines, it can be kept in manageable chunks, minimizing these risks.
Having the right number of shards is important for performance, so it is wise to plan ahead. When queries run across different shards in parallel, they execute faster than they would on a single-shard index, but only if each shard is located on a different node and there are sufficient nodes in the cluster. At the same time, shards consume memory and disk space, both for indexed data and for cluster metadata. Having too many shards can slow down queries, indexing requests, and management operations, so maintaining the right balance is critical.
The number of shards is set when an index is created, and it cannot be changed later without reindexing the data. When creating an index, you can set the number of shards and replicas as properties of the index:
PUT /sensor
{
  "settings": {
    "index": {
      "number_of_shards": 6,
      "number_of_replicas": 2
    }
  }
}
The ideal number of shards should be determined based on the amount of data in an index. Generally, an optimal shard should hold 30-50GB of data. For example, if you expect to accumulate around 300GB of application logs in a day, having around 10 shards in that index would be reasonable.
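Applying that guideline, the daily log index from the example could be created with ten primary shards. A minimal sketch, assuming a hypothetical index name of app-logs and a typical replica count of one:

PUT /app-logs
{
  "settings": {
    "index": {
      "number_of_shards": 10,
      "number_of_replicas": 1
    }
  }
}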
During their lifetime, shards can go through a number of states, including:
Initializing: An initial state before the shard can be used.
Started: A state in which the shard is active and can receive requests.
Relocating: A state that occurs when shards are in the process of being moved to a different node. This may be necessary under certain conditions, for example, when the node they are on is running out of disk space.
Unassigned: The state of a shard that has failed to be assigned. A reason is provided when this happens, for example, if the node hosting the shard is no longer in the cluster (NODE_LEFT) or due to restoring into a closed index (EXISTING_INDEX_RESTORED).
In order to view all shards, their states, and other metadata, use the following request:
GET _cat/shards
To view shards for a specific index, append the name of the index to the URL, for example sensor:
GET _cat/shards/sensor
This command produces output such as in the following example. By default, the columns shown include the name of the index, the name (i.e. number) of the shard, whether it is a primary shard or a replica, its state, the number of documents, the size on disk, the IP address, and the node name.
sensor 5 p STARTED    0 283b  127.0.0.1 ziap
sensor 5 r UNASSIGNED
sensor 2 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 2 r UNASSIGNED
sensor 3 p STARTED    3 7.2kb 127.0.0.1 ziap
sensor 3 r UNASSIGNED
sensor 1 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 1 r UNASSIGNED
sensor 4 p STARTED    2 3.8kb 127.0.0.1 ziap
sensor 4 r UNASSIGNED
sensor 0 p STARTED    0 283b  127.0.0.1 ziap
sensor 0 r UNASSIGNED
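Note that in this single-node example every replica is UNASSIGNED, because a replica is never allocated to the same node as its primary (more on this below). To find out why a particular shard remains unassigned, you can query the cluster allocation explain API, here for the replica of shard 0 of the sensor index:

GET _cluster/allocation/explain
{
  "index": "sensor",
  "shard": 0,
  "primary": false
}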
Understanding Replicas
While each shard contains a single copy of the data, an index can contain multiple copies of each shard. There are thus two types of shard: the primary shard and its copies, known as replicas. Each replica of a shard is always located on a different node, which ensures access to your data in the event of a node failure. In addition to providing redundancy and preventing data loss and downtime, replicas can also boost search performance by processing queries in parallel with the primary shard.
There are some important differences in how primary and replica shards behave. While both are capable of processing queries, indexing requests must first go through primary shards before they can be replicated to the replica shards. As noted above, if a primary shard becomes unavailable—for example, due to a node disconnection or hardware failure—a replica is promoted to take over its role.
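For illustration, a document indexed into the sensor index is first written to its primary shard and only then copied to the replicas. A minimal indexing request (the field names here are purely illustrative):

POST /sensor/_doc
{
  "sensor_id": "s-17",
  "temperature": 21.4,
  "timestamp": "2022-11-03T12:00:00Z"
}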
While replicas help in the case of a node failure, they consume memory and disk space, just as primary shards do. They also use compute power during indexing, so it is important not to have too many. Another difference between primary shards and replicas is that, while the number of primary shards cannot be changed after the index has been created, the number of replicas can be altered at any time.
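Because the replica count is a dynamic setting, it can be updated on a live index. For example, to reduce the sensor index from two replicas to one:

PUT /sensor/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}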
Another factor to consider with replicas is the number of nodes available. Replicas are always placed on different nodes from the primary shard, since two copies of the same data on the same node would add no protection if the node were to fail. As a result, for a system to support n replicas, there need to be at least n + 1 nodes in the cluster. For instance, if there are two nodes in a system and an index is configured with six replicas, only one replica will be allocated. On the other hand, a system with seven nodes is perfectly capable of handling one primary shard and six replicas.
Optimizing Shards and Replicas
Even after an index with the right balance of shards and replicas has been created, these need to be monitored, as the dynamics around an index change over time. For instance, when dealing with time series data, indices with recent data are generally more active than older ones. Without tuning these indices, they would all consume the same amount of resources, despite their very different requirements.
The rollover index API can be used to separate newer and older indices. It can be set to automatically create a new index once a certain threshold is reached: an index's size on disk, number of documents, or age. This API is also useful for keeping shard sizes under control; because the number of shards cannot easily be changed after index creation, shards will continue to accumulate data if no rollover conditions are met.
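As a sketch of a manual rollover, assuming a hypothetical write alias named logs-write that points at the current index, the following request creates a new index only if at least one of the listed conditions has been met:

POST /logs-write/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 100000000,
    "max_size": "150gb"
  }
}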
For older indices that only require infrequent access, shrinking and force merging an index are both ways to reduce their memory and disk footprints. The former reduces the number of shards in an index, while the latter reduces the number of Lucene segments and frees up space used by documents that have been deleted.
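As a sketch of both operations on the sensor index: a shrink requires the index to first be made read-only (and, in practice, all of its shards to be relocated to a single node), after which it can be shrunk into a new index whose primary shard count is a factor of the original; a force merge is a single request. The target name sensor-small is a placeholder:

PUT /sensor/_settings
{
  "index.blocks.write": true
}

POST /sensor/_shrink/sensor-small
{
  "settings": {
    "index.number_of_shards": 1
  }
}

POST /sensor-small/_forcemerge?max_num_segments=1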
Shards and Replicas as the Foundation of OpenSearch
OpenSearch has built a strong reputation as a distributed storage, search, and analytics platform for huge volumes of data. When operating at such scale, however, challenges will inevitably arise. This is why understanding shards and replicas is so fundamental to OpenSearch: it can help to optimize the reliability and performance of the platform.
Knowing how they work and how to optimize them is critical for achieving a more robust and performant OpenSearch cluster. If you are experiencing sluggish query responses or outages on a regular basis, this knowledge may be the key to overcoming these obstacles.
Additional notes
Elasticsearch and OpenSearch are both powerful search and analytics engines, but Elasticsearch has several key advantages. Its longer and more mature development history translates to a better user experience, a richer feature set, and continuous optimizations. Our testing has consistently shown that Elasticsearch delivers faster performance while using fewer compute resources than OpenSearch. Additionally, Elasticsearch's comprehensive documentation and active community forums provide invaluable resources for troubleshooting and further optimization, and Elastic, the company behind Elasticsearch, offers dedicated support, ensuring enterprise-grade reliability and performance. These factors collectively make Elasticsearch a more versatile, efficient, and dependable choice for organizations requiring sophisticated search and analytics capabilities.