Briefly, this error occurs when disk usage on a node exceeds the high disk watermark, which defaults to 90% of the node’s capacity. Elasticsearch uses watermarks to ensure that each node has enough disk space to operate normally. Once the low watermark is passed, Elasticsearch stops allocating new shards to the node; once the high watermark is passed, it also tries to relocate existing shards away from the node. To resolve this issue, you can increase the disk space, delete unnecessary indices or data, or adjust the disk watermark thresholds in the Elasticsearch settings. Be cautious when adjusting the thresholds, as doing so can leave Elasticsearch with too little disk space to operate.
Overview
There are various “watermark” thresholds on your Elasticsearch cluster. As the disk fills up on a node, the first threshold to be crossed is the “low disk watermark”, followed by the “high disk watermark” and, finally, the “disk flood stage”. Once the flood stage threshold is passed, the cluster blocks writes to ALL indices that have at least one shard (primary or replica) on the node which has exceeded the watermark. Reads (searches) will still be possible.
How to resolve this issue
Passing this threshold is a warning, and you should not delay taking action before the higher flood_stage threshold is reached. Here are possible actions you can take to resolve the issue:
- Delete old indices
- Remove documents from existing indices
- Reduce the number of replicas (on older indices)
- Increase disk space on all nodes
- Add new nodes to the cluster
Although you may be reluctant to delete data, in a logging system it is often better to delete old indices (which you may be able to restore from a snapshot later if available) than to lose new data. However, this decision will depend upon the architecture of your system and the queueing mechanisms you have available.
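For example, assuming a hypothetical date-based index named logs-2023.01.01, you could delete it outright or reduce its replica count to reclaim space:

# Delete an old index (hypothetical index name)
DELETE /logs-2023.01.01

# Or reduce the number of replicas on an older index
PUT /logs-2023.01.01/_settings
{
  "index.number_of_replicas": 0
}

Reducing replicas frees disk space immediately but lowers resilience, so it is usually reserved for older, less critical indices.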
Check the disk space on each node
You can see the space you have available on each node by running:
GET _nodes/stats/fs
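If you prefer a more compact per-node summary of shard counts and disk usage, the cat allocation API is also useful; the disk.percent column shows how close each node is to the watermarks:

GET _cat/allocation?v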
Check if the cluster is rebalancing
If the high disk watermark has been passed, then Elasticsearch should start rebalancing shards from that node to other nodes which are still below the low watermark. You can check whether any rebalancing is going on by calling:
GET _cluster/health
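In the response, a non-zero relocating_shards value indicates that shards are currently being moved. To see which shards are relocating, you can also list them with the cat shards API and look for the RELOCATING state:

GET _cat/shards?v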
If you think that your cluster should be rebalancing shards to other nodes but it is not, there are probably some other cluster allocation rules which are preventing this from happening. The most likely causes are:
- The other nodes are already above the low disk watermark
- There are cluster allocation rules which govern the distribution of shards between nodes and conflict with the rebalancing requirements (e.g. zone awareness allocation).
- There are already too many rebalancing operations in progress
- The other nodes already contain the primary or replica shards of the shards that could be rebalanced.
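If it is not obvious which of these rules is blocking the move, the cluster allocation explain API (available from Elasticsearch 5.0 onwards) reports why a particular shard cannot be allocated or relocated. A minimal sketch, using a hypothetical index name:

GET _cluster/allocation/explain
{
  "index": "my_index",
  "shard": 0,
  "primary": false
}

Calling it with an empty body will explain the first unassigned shard that Elasticsearch finds.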
Check the cluster settings
You can see the settings you have applied with this command:
GET _cluster/settings
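Note that this only returns settings that have been explicitly set. On recent Elasticsearch versions you can also include the default values (including the watermark thresholds) with the include_defaults flag:

GET _cluster/settings?include_defaults=true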
If they are not appropriate, you can modify them using a command such as the one below:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%",
    "cluster.info.update.interval": "1m"
  }
}
Note: Thresholds can be specified as either percentage or byte values, but the former is more flexible and easier to maintain (in case different nodes have different disk sizes, as in hot/warm deployments).
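Percentage values refer to used disk space, while absolute byte values refer to the minimum free space that must remain on the node. A sketch of the byte-based form (the specific sizes are illustrative only):

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb"
  }
}

Because byte values denote free space remaining, low must be the largest value and flood_stage the smallest.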
How to prevent
There are various mechanisms that allow you to automatically delete stale data:
- Apply ILM (Index Lifecycle Management)
Using ILM you can get Elasticsearch to automatically delete an index once it reaches a given age (see the policy sketch after this list).
- Use date based indices
If your application uses date-based indices, then it is easy to delete old indices using either a script, ILM or a tool such as Elasticsearch Curator.
- Use snapshots to store data offline
It may be appropriate to store snapshotted data offline and restore it in the event that the archived data needs to be reviewed or studied.
- Automate / simplify process to add new data nodes
Use automation tools such as Terraform to automate the addition of new nodes to the cluster. If this is not possible, at the very least ensure you have a clearly documented process to create new nodes, add TLS certificates and configuration, and bring them into the Elasticsearch cluster within a short and predictable time frame.
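As referenced in the ILM point above, a minimal ILM policy that simply deletes an index once it reaches a given age might look like the sketch below. The policy name and the 30-day retention are hypothetical, and the policy still needs to be attached to your indices (for example via index.lifecycle.name in an index template):

PUT _ilm/policy/delete-after-30d
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}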
Overview
Elasticsearch uses several parameters to enable it to manage hard disk storage across the cluster.
What it’s used for
- Elasticsearch will actively try to relocate shards away from nodes which exceed the disk watermark high threshold.
- Elasticsearch will NOT allocate new shards or relocate shards onto nodes which exceed the disk watermark low threshold.
- Elasticsearch will prevent all writes to an index which has any shard on a node that exceeds the disk.watermark.flood_stage threshold.
- The info update interval (cluster.info.update.interval) determines how often Elasticsearch re-checks disk usage on the nodes.
Examples
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%",
    "cluster.info.update.interval": "1m"
  }
}
Notes and good things to know
- You can use absolute values (100gb) or percentages (90%), but you cannot mix the two on the same cluster.
- In general, it is recommended to use percentages, since this will work in case the disks are resized.
- You can put the cluster settings in the elasticsearch.yml of each node (see the sketch after this list), but it is recommended to use the PUT _cluster/settings API because it is easier to manage and ensures that the settings are consistent across the cluster.
- Elasticsearch comes with sensible defaults for these settings, so think twice before modifying them. If you find you are spending a lot of time fine-tuning these settings, then it is probably time to invest in new disk space.
- In the event of the flood_stage threshold being exceeded, once you delete data, Elasticsearch should detect automatically that the block can be released (bearing in mind the update interval, which could be, for instance, a minute). However, if you want to accelerate this process, you can unblock an index manually with the following call:
PUT /my_index/_settings
{
  "index.blocks.read_only_allow_delete": null
}
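As mentioned in the notes above, the watermark settings can also be defined statically in the elasticsearch.yml of each node, for example:

cluster.routing.allocation.disk.watermark.low: "85%"
cluster.routing.allocation.disk.watermark.high: "90%"
cluster.routing.allocation.disk.watermark.flood_stage: "95%"
cluster.info.update.interval: "1m"

Bear in mind that values applied through the cluster settings API take precedence over those in elasticsearch.yml.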
Common problems
Inappropriate cluster settings (for example, if disk.watermark.low is set too low) can make it impossible for Elasticsearch to allocate shards on the cluster. In particular, bear in mind that these parameters work in combination with other cluster settings (for example, shard allocation awareness) which impose further constraints on how Elasticsearch can allocate shards.
Log Context
The log “Elasticsearch high disk watermark [90%] exceeded on” is generated by the class DiskThresholdDecider.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
                entry, DiskThresholdDecider.this.rerouteInterval);
            }
        }
    }
    if (reroute) {
        logger.info("high disk watermark exceeded on one or more nodes, rerouting shards");
        // Execute an empty reroute, but don't block on the response
        client.admin().cluster().prepareReroute().execute();
    }
}