Briefly, this error occurs when Elasticsearch takes too long to update its index mappings, possibly due to heavy indexing load or insufficient resources. To resolve this, you can increase the timeout value for mapping updates, reduce the indexing load, or scale up your Elasticsearch cluster to provide more resources. Additionally, ensure that your mappings are not overly complex, as this can also slow down updates.
This guide will help you check for common problems that cause the log "timed out waiting for mapping updates" to appear. To understand the issues related to this log, read the explanations below of the following Elasticsearch concepts: mapping and recovery.
Overview
Mapping is similar to database schemas that define the properties of each field in the index. These properties may contain the data type of each field and how fields are going to be tokenized and indexed. In addition, the mapping may also contain various advanced level properties for each field to define the options exposed by Lucene and Elasticsearch.
You can create the mapping of an index using the _mapping REST endpoint. The very first time Elasticsearch encounters a new field whose mapping is not pre-defined in the index, it automatically tries to guess the field's data type and analyzer and sets the mapping accordingly. For example, if you index an integer field without pre-defining the mapping, Elasticsearch maps that field as long (see the dynamic mapping example at the end of the Examples section below).
Examples
Create an index with predefined mapping:
PUT /my_index?pretty
{
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "properties": {
      "name": { "type": "text" },
      "age": { "type": "integer" }
    }
  }
}
Create mapping in an existing index:
PUT /my_index/_mapping?pretty
{
  "properties": {
    "email": { "type": "keyword" }
  }
}
View the mapping of an existing index:
GET my_index/_mapping?pretty
View the mapping of an existing field:
GET /my_index/_mapping/field/name?pretty
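Let Elasticsearch create a mapping dynamically, as described in the overview above. A minimal sketch, assuming an index named my_dynamic_index that does not exist yet: index a document containing an integer field, then inspect the mapping that was generated for it:
PUT /my_dynamic_index/_doc/1
{
  "age": 30
}

GET /my_dynamic_index/_mapping?pretty
The returned mapping should show the age field mapped as long, since Elasticsearch uses the widest integer type when it guesses.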
Notes
- It is not possible to update the mapping of an existing field. If the mapping is set to the wrong type, the only option is to re-create the index with the corrected mapping and re-index the data into it (see the re-index sketch after these notes).
- In version 7.0, Elasticsearch deprecated document types and set the default document type to _doc. Document types were removed completely in version 8.0.
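For the first note above, here is a minimal sketch of such a re-index, assuming the corrected mapping has already been created on a new index (the name my_index_v2 is illustrative):
POST /_reindex
{
  "source": { "index": "my_index" },
  "dest": { "index": "my_index_v2" }
}
Once the re-index has completed and been verified, you can delete the old index and, if needed, point an alias with the old name at the new index.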
Common problems
- The most common problem in Elasticsearch is an incorrectly defined mapping, which limits the functionality of a field. For example, if a string field is mapped as text, you cannot use that field for aggregations, sorting, or exact-match filters. Similarly, if a string field is dynamically indexed without a predefined mapping, Elasticsearch automatically creates two fields internally: one of type text for full-text search and another of type keyword, which in most cases is a waste of space.
- Older versions of Elasticsearch automatically created an _all field inside the mapping and copied the values of every field of a document into it. This field made it possible to search text without specifying a field name. Make sure to disable the _all field in production environments (on versions that still support it) to avoid wasting space. Please note that support for the _all field was removed in version 7.0.
- In versions prior to 6.0, it was possible to create multiple document types inside an index, similar to creating multiple tables inside a database. In those versions, there was a higher chance of data type conflicts across different document types if they contained the same field name with different data types.
- The mapping of each index is part of the cluster state and is managed by master nodes. If the mapping is too big, i.e. the index contains thousands of fields, the cluster state grows too large to be handled efficiently. This is known as mapping explosion and results in a slow cluster (see the field-limit example after this list).
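To guard against mapping explosion, you can cap the number of fields an index will accept; documents that would push the mapping past the limit are then rejected instead of silently growing the cluster state. A sketch, assuming a limit of 1000 fields (the default) is appropriate for your indices:
PUT /my_index/_settings
{
  "index.mapping.total_fields.limit": 1000
}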
Overview
In Elasticsearch, recovery refers to the process of recovering a shard when something goes wrong. Shard recoveries can take place in various circumstances, such as when a node fails and a replica shard needs to be recreated from a primary shard, when the cluster needs to relocate shards to different nodes due to a rebalancing or a change in shard allocation settings, or when restoring an index from an Elasticsearch snapshot. In addition, Elasticsearch can perform recoveries automatically, such as when a node restarts or disconnects and reconnects. In summary, recovery can happen in the following scenarios:
- Node startup or failure (local store recovery)
- Replication of primary shards to replica shards
- Relocation of a shard to a different node in the same cluster
- Restoration of a snapshot
Planned node restart
If you are planning to restart a node, there are some actions that you can take to speed up the shard recoveries when the node has restarted. For optimal recovery speed, you should stop any indexing to the shards that are hosted on the node that is about to be restarted. Once you’ve stopped your indexing process, you can perform the following actions:
1. Restrict shard allocation to primaries so that replica shards are not reallocated to other nodes while the node is restarting, using the following command:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
It is worth noting that, by default, the reallocation of unassigned replica shards only starts after one minute, and that delay can be configured with the `index.unassigned.node_left.delayed_timeout` index setting.
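For example, if you expect the restart to take a few minutes, you could raise that delay on the affected indices so that Elasticsearch does not start reallocating their replicas in the meantime. A sketch; the index name and the five-minute value are illustrative:
PUT /my_index/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "5m"
  }
}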
2. Once shard allocation has been restricted, you need to flush the transaction logs (using the command below), which ensures that all operations currently stored in the transaction log are safely committed to the Lucene index on disk. That will save you time during the restart since no operations will need to be replayed, meaning that the recovery of your shards will be faster.
POST /_flush
Note that prior to ES 8.0, a separate synced-flush operation was recommended for this purpose; it was deprecated in 7.6 and removed in 8.0, and a regular flush now has the same effect.
3. At this point, you can restart your node.
4. When the node has properly restarted, you can re-enable shard allocation using the following command:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
If you have several nodes to restart or you are performing a full cluster restart, you can use the same procedure. The key points to remember for speeding up the recovery process are to stop any indexing and to flush your transaction log.
While the recovery process is in progress, there are a few API calls that allow you to monitor the status of the shard recoveries:
# Check the recovery status of a specific index
GET /<index>/_recovery
# Check the recovery status of all indices
GET /_recovery
# Check the recovery status of all indices (more concise format)
GET _cat/recovery
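If you only want to see the recoveries that are currently running, the cat API also accepts an active_only flag and a column selection; a sketch with a few commonly useful columns:
# Check only ongoing recoveries, with selected columns
GET _cat/recovery?v&active_only=true&h=index,shard,time,type,stage,source_node,target_node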
Tweaking recovery speed
If you cannot stop your indexing process for whatever reason, you can still perform the same procedure. However, since new data will keep flowing in while the node is restarting, all the indexing operations will need to be replayed, which will slow down the recovery process. That said, there are a few knobs you can tune to speed this up, provided you have sufficient hardware resources (CPU, RAM, network).
By default, the total inbound and outbound recovery traffic on each hot and warm data node is limited to 40 MB/s. For dedicated cold and frozen nodes, that limit ranges from 40 MB/s to 250 MB/s, depending on the total amount of memory available on those nodes. These default values were determined empirically based on the assumption that the hardware consists of standard SSD disks and a network interface with 1 Gbps throughput. If you have superior hardware (e.g., a 10 Gbps network and 100K IOPS disks), you can increase the recovery traffic limit using the following command:
PUT /_cluster/settings
{
  "transient": {
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}
You should be very careful when changing this setting, as a value that is too high can harm your cluster's performance. There are also a few other expert settings that you can tweak to optimize the recovery process, but changing the defaults on those expert settings is strongly discouraged unless you know exactly what you're doing.
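Once the recoveries have completed, it is usually a good idea to remove the transient override so the cluster falls back to its default recovery throttling; a sketch:
PUT /_cluster/settings
{
  "transient": {
    "indices.recovery.max_bytes_per_sec": null
  }
}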
Conclusion
In this guide, we have explained what the shard recovery process is and under which circumstances it kicks in. We have also reviewed a few techniques to speed up the recovery process and highlighted what you need to pay attention to when you start tweaking the default recovery settings values.
Log Context
The log "timed out waiting for mapping updates" is generated by the class PeerRecoveryTargetService.java. We extracted the following from the Elasticsearch source code to provide in-depth context:
}

@Override
public void onTimeout(TimeValue timeout) {
    // note that we do not use a timeout (see comment above)
    listener.onFailure(new ElasticsearchTimeoutException(
        "timed out waiting for mapping updates " + "(timeout [" + timeout + "])"));
}
});
};
final IndexMetadata indexMetadata = clusterService.state().metadata().index(request.shardId().getIndex());