Briefly, this error occurs when Elasticsearch is unable to read shard metadata from the local store during a restore operation, typically because the metadata files are corrupted or Elasticsearch lacks the permissions needed to read them. When this happens the restore does not fail; Elasticsearch simply ignores the local files and copies everything from the snapshot repository instead. To resolve the underlying issue, you can try the following: 1) Check and repair any corrupted metadata files. 2) Ensure Elasticsearch has the necessary permissions to access the local store. 3) If the issue persists, consider deleting the existing local files and reindexing from the source. Always ensure you have a backup before performing any operation that could potentially lead to data loss.
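Once the underlying store problem is fixed, the restore can simply be retried; Elasticsearch will then copy all files from the snapshot repository rather than reusing the local ones. Below is a minimal sketch of such a retry, assuming hypothetical names my_repo (repository), my_snapshot (snapshot) and my_index (index):

POST /my_index/_close

POST /_snapshot/my_repo/my_snapshot/_restore
{
  "indices": "my_index"
}

GET /_cat/recovery/my_index?v

The index is closed first because a restore cannot write into an open index; the _cat/recovery call lets you monitor the restored shards.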
This guide will help you check for common problems that cause the log “[{}] [{}] Can’t read metadata from store; will not reuse local files during restore” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: metadata, repositories, blobstore, restore.
Overview
Metadata in Elasticsearch refers to additional information stored for each document. This is achieved using the specific metadata fields available in Elasticsearch. The default behavior of some of these metadata fields can be customized during mapping creation.
Examples
Using the _meta field to store application-specific information in the mapping:
PUT /my_index?pretty
{
  "mappings": {
    "_meta": {
      "domain": "security",
      "release_information": {
        "date": "18-01-2020",
        "version": "7.5"
      }
    }
  }
}
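To confirm that the _meta section was stored, the mapping can be read back with the standard get-mapping API against the index created above:

GET /my_index/_mapping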
Notes
- In version 2.x, Elasticsearch had a total of 13 meta fields available: _index, _uid, _type, _id, _source, _size, _all, _field_names, _timestamp, _ttl, _parent, _routing and _meta.
- In version 5.x, the _timestamp and _ttl meta fields were removed.
- In version 6.x, the _parent meta field was removed.
- In version 7.x, the _uid and _all meta fields were removed.
Overview
An Elasticsearch snapshot provides a backup mechanism that takes the current state and data in the cluster and saves it to a repository. The backup process requires a repository to be created first. The repository must be registered using the _snapshot endpoint, and multiple repositories can be created per cluster. The following repository types are supported:
Repository types
| Repository type | Configuration type |
|---|---|
| Shared file system | Type: “fs” |
| S3 | Type: “s3” |
| HDFS | Type: “hdfs” |
| Azure | Type: “azure” |
| Google Cloud Storage | Type: “gcs” |
Examples
To register an “fs” repository:
PUT _snapshot/my_repo_01
{
  "type": "fs",
  "settings": {
    "location": "/mnt/my_repo_dir"
  }
}
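Once registered, the repository can be used to take and inspect a snapshot. A minimal sketch, reusing the repository from the example above and an arbitrary snapshot name snapshot_1:

PUT /_snapshot/my_repo_01/snapshot_1?wait_for_completion=true

GET /_snapshot/my_repo_01/snapshot_1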
Notes and good things to know
- S3, HDFS, Azure and Google Cloud Storage repositories require the relevant repository plugin to be installed before they can be used for snapshots.
- The setting path.repo: /mnt/my_repo_dir needs to be added to elasticsearch.yml on all nodes if you plan to use a shared file system (“fs”) repository; otherwise, registering the repository will fail. (A verification sketch follows this list.)
- When using remote repositories, the network bandwidth and repository storage throughput should be high enough for snapshot operations to complete normally; otherwise you will end up with partial snapshots.
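To check the points above, the repository can be verified and its snapshots inspected. The _verify API confirms that every node in the cluster can write to the repository location, which helps catch nodes that cannot access the shared file system, and the snapshot listing shows the state of each snapshot (e.g. PARTIAL). A minimal sketch, again using the my_repo_01 repository from the earlier example:

POST /_snapshot/my_repo_01/_verify

GET /_snapshot/my_repo_01/_all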
Log Context
The log “[{}] [{}] Can’t read metadata from store; will not reuse local files during restore” is generated by the class FileRestoreContext.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
} catch (org.apache.lucene.index.IndexNotFoundException e) {
    // happens when restore to an empty shard; not a big deal
    logger.trace("[{}] [{}] restoring from to an empty shard", shardId, snapshotId);
    recoveryTargetMetadata = Store.MetadataSnapshot.EMPTY;
} catch (IOException e) {
    logger.warn(new ParameterizedMessage(
        "[{}] [{}] Can't read metadata from store; will not reuse local files during restore", shardId, snapshotId), e);
    recoveryTargetMetadata = Store.MetadataSnapshot.EMPTY;
}

final List filesToRecover = new ArrayList();
final Map snapshotMetadata = new HashMap();