Briefly, this error occurs when disk usage exceeds the “flood stage” watermark level, which is 95% by default in Elasticsearch. This is a protective measure to prevent the node from crashing due to lack of disk space. To resolve this issue, you can increase your disk space, delete unnecessary data, or adjust the “flood stage” watermark level. However, raising the watermark level should be done cautiously, as it only postpones the problem: the node can still run out of disk space.
Crossing high disk watermarks can be avoided if detected early. Before you read this guide, we strongly recommend you run the Elasticsearch Error Check-Up, which detects issues that cause ES errors, and specifically problems that cause disk space to run out quickly. The tool can prevent flood stage disk watermark [95%] from being exceeded again. It’s a free tool that requires no installation and takes 2 minutes to complete. You can run the Check-Up here.
Quick summary
This error is caused by low disk space on a data node. As a preventive measure, Elasticsearch throws this log message and takes protective measures, as explained below.
To pinpoint how to resolve issues causing flood stage disk watermark [95%] to be breached, run Opster’s free Elasticsearch Health Check-Up. The tool has several checks on disk watermarks and can provide actionable recommendations on how to resolve and prevent this from occurring (even without increasing disk space).
Explanation
Elasticsearch checks the available disk space on a data node before deciding whether to allocate new shards to it, relocate shards away from it, or block all index write operations, with each action triggered at a different watermark threshold. This is because Elasticsearch indices consist of shards that are persisted on data nodes, and low disk space can cause serious issues.
Relevant settings related to log:
cluster.routing.allocation.disk.watermark – There are three thresholds: low, high, and flood_stage. They can be changed dynamically and accept either percentage values or absolute byte values. Percentages are generally more flexible and easier to maintain when different nodes have different disk sizes (as in hot/warm deployments). Note that all three watermarks must use the same style: either all percentages or all byte values.
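To see how close each data node is to these watermarks, you can check per-node disk usage with the cat allocation API (the disk.percent column shows used disk space per node):

```
GET _cat/allocation?v
```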
Permanent fixes
- Delete unused indices.
- Attach external disk or increase the disk used by the data node.
- Manually move shards away from the node using cluster reroute API.
- Reduce the replica count to 1 (if the number of replicas is greater than 1).
- Add new data nodes.
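For example, shards can be moved off a crowded node with the cluster reroute API, and the replica count of an index can be lowered dynamically. The index and node names below are placeholders; substitute your own:

```
POST _cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "my_index",
        "shard": 0,
        "from_node": "node-1",
        "to_node": "node-2"
      }
    }
  ]
}

PUT my_index/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}
```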
Temporary hacks/fixes
1. Raise the watermark thresholds by dynamically updating the settings via the cluster settings API:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
2. Disable the disk check entirely with the following cluster settings update:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": false
  }
}
3. Even after freeing disk space, Elasticsearch versions before 7.4 will not remove the write block on indices automatically. To remove it, the following API needs to be called:

PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
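Once enough disk space has been freed, the temporary overrides above should be reverted. Setting a transient setting to null restores its default value:

```
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": null,
    "cluster.routing.allocation.disk.watermark.high": null,
    "cluster.routing.allocation.disk.watermark.flood_stage": null,
    "cluster.routing.allocation.disk.threshold_enabled": null
  }
}
```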
Overview
To put it simply, a node is a single server that is part of a cluster. Each node is assigned one or more roles, which describe the node’s responsibility and operations. Data nodes store the data, and participate in the cluster’s indexing and search capabilities, while master nodes are responsible for managing the cluster’s activities and storing the cluster state, including the metadata.
While it is possible to run several node instances of Elasticsearch on the same hardware, it’s considered a best practice to limit a server to a single running instance of Elasticsearch.
Nodes connect to each other and form a cluster by using a discovery method.
Roles
Master node
Master nodes are in charge of cluster-wide settings and changes – deleting or creating indices and fields, adding or removing nodes and allocating shards to nodes. Each cluster has a single master node that is elected from the master eligible nodes using a distributed consensus algorithm and is reelected if the current master node fails.
Coordinating (client) node
There is some confusion in the use of coordinating node terminology. Client nodes were removed from Elasticsearch after version 2.4 and became coordinating nodes.
Coordinating nodes are nodes that have no other role configured. They don’t hold data, are not part of the master-eligible group, and don’t execute ingest pipelines. Coordinating nodes serve incoming search requests and act as the query coordinator, running the query and fetch phases and sending requests to every node that holds a shard being queried. A coordinating node also distributes bulk indexing operations and routes queries to shards based on the nodes’ responsiveness.
Overview
In Elasticsearch, routing refers to document routing. When you index a document, Elasticsearch will determine which shard the document should be routed to for indexing.
The shard is selected based on the following formula:
shard = hash(_routing) % number_of_primary_shards
Where the default value of _routing is _id.
It is important to know which shard the document is routed to, because Elasticsearch will need to determine where to find that document later on for document retrieval requests.
Examples
In a twitter index with 2 primary shards, the document with _id equal to “440” gets routed to the shard number:

shard = hash( 440 ) % 2

PUT twitter/_doc/440
{ ... }
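The modulo step in the formula can be illustrated with a short Python sketch. Note that Elasticsearch actually computes a Murmur3 hash of the _routing value; the simple polynomial hash below is only a stand-in to show how the result maps onto a shard number:

```python
def pick_shard(routing_value: str, number_of_primary_shards: int) -> int:
    """Illustrative stand-in for shard = hash(_routing) % number_of_primary_shards.

    Elasticsearch uses Murmur3 internally; a simple 32-bit polynomial
    hash is used here purely for demonstration.
    """
    h = 0
    for ch in routing_value:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF  # keep it a 32-bit value
    return h % number_of_primary_shards

# The same routing value always maps to the same shard,
# which is why the document can be found again at retrieval time.
print(pick_shard("440", 2))
```

Because the shard is a pure function of the routing value and the primary shard count, changing the number of primary shards would change where documents land, which is why that number is fixed at index creation.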
Notes and good things to know
- To improve search speed, you can use custom routing. It ensures that only a single shard is queried (the shard that contains your data).
- To enforce custom routing, you need to make the _routing field mandatory in the mapping so documents cannot fall back to default routing. For Elasticsearch 5.x and earlier (where mapping types exist, e.g. a customer type):

PUT my_index/_mapping/customer
{
  "_routing": {
    "required": true
  }
}
- This will ensure that every document in the “customer” type must specify a custom routing value. For newer Elasticsearch versions (7.x and above, where mapping types are removed), the equivalent typeless mapping update is:

PUT my_index/_mapping
{
  "_routing": {
    "required": true
  }
}
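Once required routing is in place, the routing value must be supplied both at index time and at search time, otherwise the request is rejected. The index name and routing value below are placeholders:

```
PUT my_index/_doc/1?routing=user1
{
  "title": "Some document"
}

GET my_index/_search?routing=user1
{
  "query": {
    "match": { "title": "document" }
  }
}
```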
Log Context
Log “flood stage disk watermark [{}] exceeded on {}; all indices on this node will be marked read-only” classname is DiskThresholdMonitor.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

indicesToMarkReadOnly.add(indexName);
indicesNotToAutoRelease.add(indexName);
        }
    }
logger.warn("flood stage disk watermark [{}] exceeded on {}; all indices on this node will be marked read-only",
    diskThresholdSettings.describeFloodStageThreshold(), usage);
continue;
}