Briefly, this log message appears when the concurrent rebalance limit in Elasticsearch’s cluster settings is changed. This limit determines how many shard rebalances can occur simultaneously. If set too high, it can overload the system; too low, and rebalancing may be slow. To avoid problems, ensure the limit is set to a reasonable value. You can adjust this setting using the Cluster Update Settings API. Also, monitor your cluster’s performance to ensure it’s not being overloaded by too many concurrent rebalances.
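As a minimal sketch of adjusting the limit with the Cluster Update Settings API: the value 2 used here is the default, and setting the key to null removes the override and reverts to the default.

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.cluster_concurrent_rebalance": 2
  }
}

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.cluster_concurrent_rebalance": null
  }
}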
This guide will help you check for common problems that cause the log “updating [cluster.routing.allocation.cluster_concurrent_rebalance] from [{}], to [{}]” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: allocation, cluster, rebalance and routing.
Overview
An Elasticsearch cluster consists of a number of servers (nodes) working together as one. Clustering is a technology which enables Elasticsearch to scale up to hundreds of nodes that together are able to store many terabytes of data and respond coherently to large numbers of requests at the same time.
Search or indexing requests will usually be load-balanced across the Elasticsearch data nodes, and the node that receives the request will relay requests to other nodes as necessary and coordinate the response back to the user.
Notes and good things to know
The key elements to clustering are:
Cluster State – Refers to information about which indices are in the cluster, their data mappings and other information that must be shared between all the nodes to ensure that all operations across the cluster are coherent.
Master Node – Each cluster must elect a single master node responsible for coordinating the cluster and ensuring that each node contains an up-to-date copy of the cluster state.
Cluster Formation – Elasticsearch requires a set of configurations to determine how the cluster is formed, which nodes can join the cluster, and how the nodes collectively elect a master node responsible for controlling the cluster state. These configurations are usually held in the elasticsearch.yml config file, environment variables on the node, or within the cluster state (see the example configuration after this list).
Node Roles – In small clusters it is common for all nodes to fill all roles; all nodes can store data, become master nodes or process ingestion pipelines. However as the cluster grows, it is common to allocate specific roles to specific nodes in order to simplify configuration and to make operation more efficient. In particular, it is common to define a limited number of dedicated master nodes.
Replication – Data may be replicated across a number of data nodes. This means that if one node goes down, data is not lost. It also means that a search request can be dealt with by more than one node.
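As a minimal sketch of the cluster formation and node role settings described above, a node’s elasticsearch.yml might look like the following (the cluster name, node names and addresses are assumptions for illustration, and the settings apply to Elasticsearch 7.x and later):

# elasticsearch.yml – illustrative values only
cluster.name: my-cluster
node.name: node-1
node.roles: [ master, data ]                              # restrict roles as the cluster grows; omit to keep all default roles
discovery.seed_hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]   # only required when bootstrapping a brand-new cluster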
Common problems
Many Elasticsearch problems are caused by operations which place an excessive burden on the cluster because they require an excessive amount of information to be held and transmitted between the nodes as part of the cluster state. For example:
- Shards too small
- Too many fields (field explosion)
Problems may also be caused by inadequate configurations that leave the Elasticsearch cluster unable to safely elect a master node.
Backups
Because Elasticsearch is a clustered technology, it is not sufficient to have backups of each node’s data directory. This is because the backups will have been made at different times and so there may not be complete coherency between them. As such, the only way to back up an Elasticsearch cluster is through the use of snapshots, which contain the complete state of an index at a given point in time.
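As a hedged sketch of the snapshot workflow, the example below registers a shared filesystem repository and then takes a snapshot. The repository name my_backup and the location are assumptions, and the location must be listed under path.repo in elasticsearch.yml on every node:

PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/my_backup"
  }
}

PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=false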
Cluster resilience
When designing an Elasticsearch cluster, it is important to think about cluster resilience. In particular – what happens when a single node goes down? And for larger clusters where several nodes may share common services such as a network or power supply – what happens if that network or power supply goes down? This is where it is useful to ensure that the master eligible nodes are spread across availability zones, and to use shard allocation awareness to ensure that shards are spread across different racks or availability zones in your data center.
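As an illustrative sketch, shard allocation awareness is driven by a node attribute (set, for example, as node.attr.zone in each node’s elasticsearch.yml; the attribute name zone is an assumption) and is then enabled through the cluster settings:

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone"
  }
}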
Overview
Cluster rebalancing is the process by which an Elasticsearch cluster distributes data across the nodes. Specifically, it refers to the movement of existing data shards to another node to improve the balance across the nodes (as opposed to the allocation of new shards to nodes). Usually, it is a completely automatic process that requires no outside intervention. However, there are a number of parameters Elasticsearch uses to regulate this process.
Examples
The command below will establish the cluster settings to enable automatic cluster rebalancing. It is not necessary to run the command (the values used are in fact the defaults).
PUT /_cluster/settings?flat_settings=true
{
  "transient": {
    "cluster.routing.rebalance.enable": "all",
    "cluster.routing.allocation.allow_rebalance": "indices_all_active",
    "cluster.routing.allocation.cluster_concurrent_rebalance": "2"
  }
}
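To verify the values currently in effect (including defaults), the settings can be read back with the same API:

GET /_cluster/settings?include_defaults=true&flat_settings=true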
Notes and good things to know
In general, the cluster rebalance settings have sensible defaults. It is generally not advisable to disable cluster rebalancing. It is usually most sensible to wait until indices are all active before rebalancing, since recovering the indices is a higher priority than moving them around. Finally, it is recommended to limit the number of concurrent rebalances to 2 (the default), since having a large number of shards moving around at any given time can use a lot of resources and cause instability. Increasing this number only makes sense on large clusters.
You can think of the “rebalance” process as a tendency to spread the total number of shards evenly across all nodes in the cluster, and also to spread the shards of any given index as evenly as possible across the cluster. The rebalance is a “soft” algorithm, and will be overruled by other “hard” factors such as disk-based shard allocation or shard allocation awareness.
If you think your cluster is not rebalancing as it should, first check the “hard” limits you have on shard allocation awareness or disk-based shard allocation before tweaking the rebalance parameters.
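One way to investigate this is the cluster allocation explain API, which reports which deciders are preventing a shard from being allocated or moved. This is a sketch; the index name test and the shard number are placeholders:

GET /_cluster/allocation/explain
{
  "index": "test",
  "shard": 0,
  "primary": false
}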
Manual rebalancing
It is also possible to rebalance manually using a command like this:
POST /_cluster/reroute?dry_run=true
{
  "commands": [
    {
      "move": {
        "index": "test",
        "shard": 0,
        "from_node": "node1",
        "to_node": "node2"
      }
    }
  ]
}
It is advisable to include the dry_run parameter to check the result of your action, and if everything is in order then repeat the command with dry_run=false.
Bear in mind that if you rebalance manually, Elasticsearch may automatically move the same shard (or another one) back, compensating for your previous action. Similarly, there may be constraints that prevent your reallocation from being accepted by the cluster.
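If you want Elasticsearch to leave your manual placement alone while you work, one option (a sketch using the same rebalance setting shown earlier) is to temporarily disable automatic rebalancing and then re-enable it by clearing the transient setting:

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.rebalance.enable": "none"
  }
}

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.rebalance.enable": null
  }
}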
Log Context
Log “updating [cluster.routing.allocation.cluster_concurrent_rebalance] from [{}], to [{}]” class name is ConcurrentRebalanceAllocationDecider.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
class ApplySettings implements NodeSettingsService.Listener {
    @Override
    public void onRefreshSettings(Settings settings) {
        int clusterConcurrentRebalance = settings.getAsInt(CLUSTER_ROUTING_ALLOCATION_CLUSTER_CONCURRENT_REBALANCE,
                ConcurrentRebalanceAllocationDecider.this.clusterConcurrentRebalance);
        if (clusterConcurrentRebalance != ConcurrentRebalanceAllocationDecider.this.clusterConcurrentRebalance) {
            logger.info("updating [cluster.routing.allocation.cluster_concurrent_rebalance] from [{}], to [{}]",
                    ConcurrentRebalanceAllocationDecider.this.clusterConcurrentRebalance, clusterConcurrentRebalance);
            ConcurrentRebalanceAllocationDecider.this.clusterConcurrentRebalance = clusterConcurrentRebalance;
        }
    }
}