Briefly, this log appears when Elasticsearch upgrades the metadata of partial (frozen-tier) searchable snapshot indices so that their shards are counted against the frozen shard limit group rather than the normal shard limit. This is part of optimizing the system for better performance and resource management. If the upgrade surfaces shard-limit issues, you can either increase the frozen shard limit or reduce the number of shards in your cluster. Alternatively, consider upgrading to the latest Elasticsearch version, as it may contain improvements related to shard limit management.
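If the message coincides with shard-limit problems, a first step is to check and, if necessary, raise the frozen-tier shard limit. This is a minimal sketch assuming the standard cluster.max_shards_per_node.frozen setting (default 3000); 3500 is only an example value.

GET _cluster/settings?include_defaults=true&flat_settings=true

PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node.frozen": 3500
  }
}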
This guide will help you check for common problems that cause the log “Upgrading partial searchable snapshots to use frozen shard limit group” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: plugin, upgrade, shard.
Overview
Upgrade refers to migrating your Elasticsearch version to a newer version. The process of updating distributed systems like Elasticsearch can be intricate, given the extensive data quantities, the involvement of numerous nodes, and the diverse configurations that may exist within your cluster.
An upgrade of an existing cluster can be done in two ways: through a rolling upgrade and through a full cluster restart. The benefit of a rolling upgrade is having zero downtime.
Bear in mind that any changes to your system could lead to data loss if the instructions are not adhered to accurately. Thoroughly test and strategize your upgrade, and ensure you create a backup of your data prior to executing any updates.
For guides on how to upgrade specific versions, see:
- How to Upgrade Elasticsearch from Version 5 to Version 6
- How to Upgrade Elasticsearch from Version 6 to Version 7
- How to Upgrade Elasticsearch from Version 7 to Version 8
What should I check before upgrading versions?
Elasticsearch nodes cannot be downgraded after upgrading. Before starting the upgrade process you should:
- Check the deprecation log and resolve any issues.
- Review the breaking changes so you know which functionality may change or disappear. This mainly affects node configuration, index mappings and templates, and cluster settings.
- Check your Elasticsearch plugins to ensure they are compatible with the new version.
- Test the upgrade process in a testing or staging environment before upgrading your production cluster to avoid any issues.
- Take a backup and snapshot of your data, as the only way to “reverse” a failed upgrade is to create a new cluster running the old version and restore the data from snapshots (see the example after this list).
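As a minimal sketch of these pre-upgrade checks, the requests below list pending deprecations and take a snapshot. The deprecation API is available in Elasticsearch 7.x and later, and the repository and snapshot names (my_backup_repo, pre_upgrade_snapshot) are placeholders for a repository you have already registered.

GET /_migration/deprecations

PUT /_snapshot/my_backup_repo/pre_upgrade_snapshot?wait_for_completion=true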
How to perform offline upgrades – full cluster restart upgrades
A full cluster restart upgrade involves stopping all Elasticsearch nodes at the same time, upgrading them, and then restarting them. This approach inevitably requires downtime for your Elasticsearch cluster for the duration of the process.
Generally, offline upgrades are simpler than online ones because there’s no need to handle a cluster with varying node versions concurrently.
The steps are:
- Disable shard allocation
- Stop all Elasticsearch nodes and upgrade them
- Upgrade any plugins
- Start the Elasticsearch cluster
- Re-enable shard allocation
- Upgrade client libraries to new version
- Restart master eligible nodes
- Restart non-master eligible nodes
Keep in mind that during a full cluster restart, the master-eligible nodes need to be started before the non-master nodes. This is essential because the master nodes must form the cluster before other nodes can join it, in contrast to a rolling upgrade, where non-master-eligible nodes should be upgraded before the master-eligible nodes.
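Once the cluster is back up, it is worth confirming that every node has rejoined and is running the expected version. One simple way to do this with the standard _cat API (the column list below is just one useful selection) is:

GET _cat/nodes?v&h=name,version,node.role,master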
How to perform online upgrades – rolling restart upgrades
A rolling restart upgrade allows for updating a cluster without incurring any downtime. In this scenario, every node is sequentially upgraded and rebooted, without ever halting the entire Elasticsearch cluster.
Rolling restart upgrades cannot be performed across MAJOR versions, except in the following specific cases:
- Upgrading Elasticsearch version 5.6.16 to version 6.x.x
- Upgrading Elasticsearch version 6.8.23 to version 7.x.x
- Upgrading Elasticsearch version 7.17.5 to version 8.x.x
For this reason, when performing a rolling restart upgrade between major versions, it is imperative to ALWAYS utilize the most recent minor version as an intermediary step for upgrading to the subsequent major version. For instance, if you are operating Elasticsearch 5.x.x, you can first update to 5.6.16 and then proceed to 6.8.23.
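To confirm which version you are starting from before planning the intermediate step, you can query any node's root endpoint and read version.number from the response:

GET /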
How to upgrade nodes in a rolling upgrade
The process for upgrading your nodes is as follows, upgrading all NON master-eligible nodes first.
- Make sure your cluster status is green and stable
Ensure that all replicas are available so that shutting down the node will not cause data loss.
- Disable unnecessary indexing
Wherever possible, you should stop all indexing processes to increase the cluster’s stability.
- Disable shard allocation
It is important to disable shard allocation so that when you stop a node for upgrade the cluster does not reallocate shards to another node. (See command below).
- Stop Elasticsearch
Stop Elasticsearch before moving on to the next step.
- Upgrade Elasticsearch
The upgrade procedure depends on your installation method.
- Upgrade plugins
Elasticsearch will not start if an installed plugin is not the same version as Elasticsearch.
- Start Elasticsearch
Start Elasticsearch before moving on to the next step.
- Re-enable shard allocation
Using the command given below.
- Check that the upgraded node has rejoined the cluster
Using the command below, you can check how many nodes are in the cluster.
- Wait for cluster status to turn green
The command provided below will also show you the progress of the shard recovery process on the upgraded node, until the cluster reaches a green state.
- Repeat
Repeat the full process above for each node.
To disable shard allocation, run:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
To re-enable shard allocation, run:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
Get cluster status and see how many nodes are in the cluster using:
GET _cluster/health
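As a complementary sketch, the first request below blocks until the cluster reaches green status (or the timeout expires), and the second lists only the shard recoveries still in progress; both are standard APIs, and the 60s timeout is just an example value.

GET _cluster/health?wait_for_status=green&timeout=60s

GET _cat/recovery?active_only=true&v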
Common problems and important points
- The major problem with upgrades is version incompatibility. Elasticsearch supports rolling upgrades only between minor versions. You need to make sure to go through the official documentation to see if your cluster can support a rolling upgrade, otherwise a complete reindexing is required.
- Once you upgrade an Elasticsearch node, it cannot be rolled back. Make sure to back up your data before an upgrade.
- Elasticsearch continuously removes or deprecates some of its features with every release, so keep an eye on the change logs of each version before planning an upgrade.
- While doing a rolling upgrade, it is important to disable shard allocation before stopping a node and to re-enable shard allocation once the node is upgraded and restarted. This helps avoid unnecessary IO load in the cluster.
Overview
Data in an Elasticsearch index can grow to massive proportions. In order to keep it manageable, it is split into a number of shards. Each Elasticsearch shard is an Apache Lucene index, with each individual Lucene index containing a subset of the documents in the Elasticsearch index. Splitting indices in this way keeps resource usage under control. An Apache Lucene index has a limit of 2,147,483,519 documents.
Examples
The number of shards is set when an index is created, and this number cannot be changed later without reindexing the data. When creating an index, you can set the number of shards and replicas as properties of the index using:
PUT /sensor
{
  "settings" : {
    "index" : {
      "number_of_shards" : 6,
      "number_of_replicas" : 2
    }
  }
}
The ideal number of shards should be determined based on the amount of data in an index. Generally, an optimal shard should hold 30-50GB of data. For example, if you expect to accumulate around 300GB of application logs in a day, having around 10 shards in that index would be reasonable.
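For example, a daily log index sized this way could get its shard count from an index template, so every new index picks it up automatically. The template name and index pattern below (app-logs-template, app-logs-*) are illustrative, and the composable template API shown is available from Elasticsearch 7.8 onwards:

PUT _index_template/app-logs-template
{
  "index_patterns": ["app-logs-*"],
  "template": {
    "settings": {
      "number_of_shards": 10,
      "number_of_replicas": 1
    }
  }
}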
During their lifetime, shards can go through a number of states, including:
- Initializing: An initial state before the shard can be used.
- Started: A state in which the shard is active and can receive requests.
- Relocating: A state that occurs when shards are in the process of being moved to a different node. This may be necessary under certain conditions, such as when the node they are on is running out of disk space.
- Unassigned: The state of a shard that has failed to be assigned. A reason is provided when this happens. For example, the node hosting the shard may have left the cluster (NODE_LEFT), or the shard may have been restored into a closed index (EXISTING_INDEX_RESTORED).
In order to view all shards, their states, and other metadata, use the following request:
GET _cat/shards
To view shards for a specific index, append the name of the index to the URL, for example:
GET _cat/shards/sensor
This command produces output, such as in the following example. By default, the columns shown include the name of the index, the name (i.e. number) of the shard, whether it is a primary shard or a replica, its state, the number of documents, the size on disk, the IP address, and the node ID.
sensor 5 p STARTED    0 283b  127.0.0.1 ziap
sensor 5 r UNASSIGNED
sensor 2 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 2 r UNASSIGNED
sensor 3 p STARTED    3 7.2kb 127.0.0.1 ziap
sensor 3 r UNASSIGNED
sensor 1 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 1 r UNASSIGNED
sensor 4 p STARTED    2 3.8kb 127.0.0.1 ziap
sensor 4 r UNASSIGNED
sensor 0 p STARTED    0 283b  127.0.0.1 ziap
sensor 0 r UNASSIGNED
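In the sample output above, the replica shards are UNASSIGNED because there is only one node for them to be allocated to. To see the exact reason why a particular shard is unassigned, you can use the cluster allocation explain API; the index name and shard number below are simply taken from this example:

GET _cluster/allocation/explain
{
  "index": "sensor",
  "shard": 0,
  "primary": false
}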
Notes and good things to know
- Having shards that are too large is simply inefficient. Moving huge indices across machines is both time- and labor-intensive. First, the Lucene merges take longer to complete and require more resources. Moreover, moving the shards across nodes for rebalancing takes longer, and recovery time is extended. By splitting the data and spreading it across a number of machines, it can be kept in manageable chunks and risks are minimized.
- Having the right number of shards is important for performance. It is thus wise to plan in advance. When queries are run across different shards in parallel, they execute faster than an index composed of a single shard, but only if each shard is located on a different node and there are sufficient nodes in the cluster. At the same time, however, shards consume memory and disk space, both in terms of indexed data and cluster metadata. Having too many shards can slow down queries, indexing requests, and management operations, and so maintaining the right balance is critical.
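If an existing index ends up with more shards than it needs, one option is the shrink API. The sketch below assumes the six-shard sensor index from the earlier example and a node name of your choosing in index.routing.allocation.require._name; the index must be made read-only with all of its shards co-located on one node before shrinking, and the target shard count must be a factor of the original.

PUT /sensor/_settings
{
  "settings": {
    "index.routing.allocation.require._name": "shrink-node-1",
    "index.blocks.write": true
  }
}

POST /sensor/_shrink/sensor-shrunk
{
  "settings": {
    "index.number_of_shards": 2,
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}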
Log Context
The log “Upgrading partial searchable snapshots to use frozen shard limit group” is generated by the class SearchableSnapshotIndexMetadataUpgrader.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
}

private void maybeUpgradeIndices(ClusterState state) {
    // 99% of the time, this will be a noop, so precheck that before adding a cluster state update.
    if (needsUpgrade(state)) {
        logger.info("Upgrading partial searchable snapshots to use frozen shard limit group");
        submitUnbatchedTask("searchable-snapshot-index-upgrader", new ClusterStateUpdateTask() {
            @Override
            public ClusterState execute(ClusterState currentState) throws Exception {
                return upgradeIndices(currentState);
            }