Briefly, this error occurs when disk usage on a node in an Elasticsearch cluster exceeds the “low disk watermark” threshold. Elasticsearch stops assigning new shards to that node to prevent further disk space consumption. To resolve the issue, you can increase the disk space available on the node, delete unnecessary data to free up space, or adjust the “cluster.routing.allocation.disk.watermark.low” setting to permit more disk usage (a higher used-disk percentage, or a lower absolute amount of required free space). Be cautious with the last option, as it can lead to disk space issues if not monitored closely.
We recommend running the Elasticsearch Error Check-Up, which can resolve issues that cause many errors.
Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyzes your Elasticsearch deployment to pinpoint the cause of many errors and provides actionable recommendations on how to resolve them (a free tool that requires no installation).
Quick Summary
The cause of this error is low disk space on a data node. Elasticsearch logs this message and takes the preventive measures explained below.
Explanation
Elasticsearch considers the available disk space before deciding whether to allocate new shards to a node, relocate shards away from it, or put all indices into read-only mode, with each action triggered by a different threshold. The reason is that Elasticsearch indices consist of shards that are persisted on data nodes, and low disk space can cause serious issues.
Settings relevant to this log:
cluster.routing.allocation.disk.watermark – has three thresholds: low, high, and flood_stage. Each can be changed dynamically and accepts absolute values as well as percentage values.
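To see how close each node is to the watermarks, you can check per-node disk usage and the currently configured thresholds. A minimal sketch using standard APIs (the filter_path expression is just one way to narrow the output):
GET _cat/allocation?v
GET _cluster/settings?include_defaults=true&filter_path=**.disk.watermark*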
Permanent fixes
a) Delete unused indices.
b) Merge segments to reduce the size of the shard on the affected node (see the example below the list); more info in Opster’s Elasticsearch expert’s Stack Overflow answer.
c) Attach an external disk or increase the disk available to the data node.
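For illustration, here is a sketch of the first two fixes; old-index and my-index are placeholder index names. Note that force merge is an expensive operation and is best run on indices that no longer receive writes:
DELETE old-index
POST my-index/_forcemerge?max_num_segments=1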
Temporary hacks/fixes
a) Change the threshold values dynamically using the cluster update settings API:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
Adjust these values according to your situation.
b) Disable the disk threshold check entirely using the cluster update settings API:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": false
  }
}
c) Even after all these fixes, Elasticsearch will not put the affected indices back into write mode on its own; to remove the write block, the following API needs to be called:
PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
Overview
An Elasticsearch cluster consists of a number of servers (nodes) working together as one. Clustering is a technology which enables Elasticsearch to scale up to hundreds of nodes that together are able to store many terabytes of data and respond coherently to large numbers of requests at the same time.
Search or indexing requests will usually be load-balanced across the Elasticsearch data nodes, and the node that receives the request will relay requests to other nodes as necessary and coordinate the response back to the user.
Notes and good things to know
The key elements to clustering are:
Cluster State – Refers to information about which indices are in the cluster, their data mappings and other information that must be shared between all the nodes to ensure that all operations across the cluster are coherent.
Master Node – Each cluster must elect a single master node responsible for coordinating the cluster and ensuring that each node contains an up-to-date copy of the cluster state.
Cluster Formation – Elasticsearch requires a set of configurations to determine how the cluster is formed, which nodes can join the cluster, and how the nodes collectively elect a master node responsible for controlling the cluster state. These configurations are usually held in the elasticsearch.yml config file, environment variables on the node, or within the cluster state.
Node Roles – In small clusters it is common for all nodes to fill all roles; all nodes can store data, become master nodes or process ingestion pipelines. However, as the cluster grows, it is common to allocate specific roles to specific nodes in order to simplify configuration and to make operation more efficient. In particular, it is common to define a limited number of dedicated master nodes.
Replication – Data may be replicated across a number of data nodes. This means that if one node goes down, data is not lost. It also means that a search request can be dealt with by more than one node (see the example after this list).
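As a quick illustration of replication settings, the sketch below creates an index with one replica per primary shard; my_index is a placeholder name:
PUT my_index
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}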
Common problems
Many Elasticsearch problems are caused by operations which place an excessive burden on the cluster because they require an excessive amount of information to be held and transmitted between the nodes as part of the cluster state. For example:
- Shards too small
- Too many fields (field explosion)
Problems may also be caused by inadequate configurations that leave the Elasticsearch cluster unable to safely elect a master node.
Backups
Because Elasticsearch is a clustered technology, it is not sufficient to have backups of each node’s data directory. This is because the backups will have been made at different times, and so there may not be complete coherency between them. As such, the only way to back up an Elasticsearch cluster is through the use of snapshots, which contain the full picture of an index at any one time.
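As a minimal sketch of the snapshot workflow (my_backup and the filesystem location are placeholders; the location must also be registered in path.repo on every node):
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/my_backup"
  }
}
PUT _snapshot/my_backup/snapshot_1?wait_for_completion=true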
Cluster resilience
When designing an Elasticsearch cluster, it is important to think about cluster resilience. In particular – what happens when a single node goes down? And for larger clusters where several nodes may share common services such as a network or power supply – what happens if that network or power supply goes down? This is where it is useful to ensure that the master eligible nodes are spread across availability zones, and to use shard allocation awareness to ensure that shards are spread across different racks or availability zones in your data center.
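A sketch of shard allocation awareness, assuming a custom node attribute named zone (the attribute name and zone values are placeholders; node.attr.* goes in each node’s elasticsearch.yml, while the awareness attribute can be set dynamically):
# elasticsearch.yml on a node in zone_a
node.attr.zone: zone_a
# enable awareness cluster-wide
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone"
  }
}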
Overview
In Elasticsearch, routing refers to document routing. When you index a document, Elasticsearch will determine which shard the document should be routed to for indexing.
The shard is selected based on the following formula:
shard = hash(_routing) % number_of_primary_shards
Where the default value of _routing is _id.
It is important to know which shard the document is routed to, because Elasticsearch will need to determine where to find that document later on for document retrieval requests.
Examples
In a twitter index with 2 primary shards, the document with _id equal to “440” gets routed to the shard number:
shard = hash("440") % 2
PUT twitter/_doc/440
{ ... }
Notes and good things to know
- To improve search speed, you can use custom routing. For example, custom routing can ensure that only a single shard is queried (the shard that contains your data).
- To create custom routing in Elasticsearch, you need to define in the mapping that routing is required. For version 5.0 and earlier, this is done per type:
PUT my_index/_mapping/customer
{
  "_routing": {
    "required": true
  }
}
- This will ensure that every document in the “customer” type must specify a custom routing. For Elasticsearch version 6 or above, you will need to update the same mapping as:
PUT my_index/_mapping
{
  "_routing": {
    "required": true
  }
}
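Once routing is required, each indexing and retrieval request must pass a routing value, and searches that pass the same value only touch the matching shard. A minimal sketch (user1 is a placeholder routing key):
PUT my_index/_doc/1?routing=user1
{
  "title": "some document"
}
GET my_index/_search?routing=user1
{
  "query": {
    "match": { "title": "document" }
  }
}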
Overview
To put it simply, a node is a single server that is part of a cluster. Each node is assigned one or more roles, which describe the node’s responsibility and operations. Data nodes store the data, and participate in the cluster’s indexing and search capabilities, while master nodes are responsible for managing the cluster’s activities and storing the cluster state, including the metadata.
While it is possible to run several node instances of Elasticsearch on the same hardware, it’s considered a best practice to limit a server to a single running instance of Elasticsearch.
Nodes connect to each other and form a cluster by using a discovery method.
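For reference, a sketch of the discovery settings used to form a cluster in Elasticsearch 7.x and later (host and node names are placeholders; these go in elasticsearch.yml on each node):
discovery.seed_hosts: ["node-1.example.com", "node-2.example.com"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]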
Roles
Master node
Master nodes are in charge of cluster-wide settings and changes – deleting or creating indices and fields, adding or removing nodes and allocating shards to nodes. Each cluster has a single master node that is elected from the master eligible nodes using a distributed consensus algorithm and is reelected if the current master node fails.
Coordinating (client) node
There is some confusion in the use of coordinating node terminology. Client nodes were removed from Elasticsearch after version 2.4 and became coordinating nodes.
Coordinating nodes are nodes that do not hold any configured role. They don’t hold data, are not part of the master-eligible group, and do not execute ingest pipelines. Coordinating nodes serve incoming search requests and act as the query coordinator, running the query and fetch phases and sending requests to every node that holds a queried shard. A coordinating node also distributes bulk indexing operations and routes queries to shard copies based on the nodes’ responsiveness.
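As a sketch of how roles are assigned, using the node.roles syntax available from Elasticsearch 7.9 onward (pick one line per node in its elasticsearch.yml):
node.roles: [ master ]    # dedicated master-eligible node
node.roles: [ data ]      # dedicated data node
node.roles: [ ]           # coordinating-only node (no other roles)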
Log Context
The log “low disk watermark [{}] exceeded on {}; replicas will not be assigned to this node” comes from the class DiskThresholdMonitor.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
final boolean wasUnderLowThreshold = nodesOverLowThreshold.add(node);
final boolean wasOverHighThreshold = nodesOverHighThreshold.remove(node);
assert (wasUnderLowThreshold && wasOverHighThreshold) == false;

if (wasUnderLowThreshold) {
    logger.info("low disk watermark [{}] exceeded on {}; replicas will not be assigned to this node",
        diskThresholdSettings.describeLowThreshold(), usage);
} else if (wasOverHighThreshold) {
    logger.info("high disk watermark [{}] no longer exceeded on {}; but low disk watermark [{}] is still exceeded",
        diskThresholdSettings.describeHighThreshold(), usage, diskThresholdSettings.describeLowThreshold());
}