Briefly, this error occurs when Elasticsearch is unable to retrieve metadata for a store file. This can be caused by issues such as file corruption, a shortage of disk space, or permission problems. To resolve it, try the following: 1) Check disk space and free some up if it is running low (a quick way to check is shown below). 2) Verify the file permissions and ensure Elasticsearch has the necessary access. 3) If the file is corrupted, consider restoring it from a backup. 4) Restart the Elasticsearch service, as this helps in some cases.
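For the first step, one way to check disk usage per node is the cat allocation API; a minimal example (no request body required):

GET _cat/allocation?v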
This guide will help you check for common problems that cause the log "Failed to get store file metadata" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: blobstore, metadata, repositories.
Overview
Metadata in Elasticsearch refers to additional information stored for each document. This is achieved using the specific metadata fields available in Elasticsearch. The default behavior of some of these metadata fields can be customized during mapping creation.
Examples
Using the _meta field to store application-specific information with the mapping:
PUT /my_index?pretty
{
  "mappings": {
    "_meta": {
      "domain": "security",
      "release_information": {
        "date": "18-01-2020",
        "version": "7.5"
      }
    }
  }
}
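The stored _meta information is returned as part of the index mapping, so one way to read it back is the get mapping API:

GET /my_index/_mapping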
Notes
- In version 2.x, Elasticsearch had a total of 13 meta fields available: _index, _uid, _type, _id, _source, _size, _all, _field_names, _timestamp, _ttl, _parent, _routing, _meta
- In version 5.x, _timestamp and _ttl meta fields were removed.
- In version 6.x, the _parent meta field was removed.
- In version 7.x, _uid and _all meta fields were removed.
Overview
An Elasticsearch snapshot provides a backup mechanism that takes the current state and data in the cluster and saves it to a repository (read snapshot for more information). The backup process requires a repository to be created first. The repository needs to be registered using the _snapshot endpoint, and multiple repositories can be created per cluster. The following repository types are supported:
Repository types
| Repository type | Configuration type |
|---|---|
| Shared file system | Type: "fs" |
| S3 | Type: "s3" |
| HDFS | Type: "hdfs" |
| Azure | Type: "azure" |
| Google Cloud Storage | Type: "gcs" |
Examples
To register an “fs” repository:
PUT _snapshot/my_repo_01
{
  "type": "fs",
  "settings": {
    "location": "/mnt/my_repo_dir"
  }
}
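After registering, it is worth confirming that all nodes can access the repository. A minimal sketch using the verify repository and get repository APIs (the repository name my_repo_01 matches the example above):

POST _snapshot/my_repo_01/_verify

GET _snapshot/_all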
Notes and good things to know
- S3, HDFS, Azure and Google Cloud Storage repositories require the relevant repository plugin to be installed before they can be used for a snapshot (see the sketch after this list).
- The setting path.repo: /mnt/my_repo_dir needs to be added to elasticsearch.yml on all nodes if you plan to use the shared file system repository type; otherwise, registering the repository will fail (see the configuration sketch after this list).
- When using remote repositories, the network bandwidth and repository storage throughput should be high enough for snapshot operations to complete normally; otherwise you will end up with partial snapshots.
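A minimal sketch of the two prerequisites above, using S3 as the remote repository example (exact paths depend on your installation, and in recent Elasticsearch versions the S3 repository type is built in and needs no plugin):

# elasticsearch.yml on every node, required for the "fs" repository type
path.repo: ["/mnt/my_repo_dir"]

# install the S3 repository plugin (only on versions where it is not bundled)
sudo bin/elasticsearch-plugin install repository-s3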
Log Context
The log "Failed to get store file metadata" is generated by the class BlobStoreRepository.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:
    final IndexCommit snapshotIndexCommit = context.indexCommit();
    logger.trace("[{}] [{}] Loading store metadata using index commit [{}]", shardId, snapshotId, snapshotIndexCommit);
    metadataFromStore = store.getMetadata(snapshotIndexCommit);
    fileNames = snapshotIndexCommit.getFileNames();
} catch (IOException e) {
    throw new IndexShardSnapshotFailedException(shardId, "Failed to get store file metadata", e);
}
}
for (String fileName : fileNames) {
    if (snapshotStatus.isAborted()) {
        logger.debug("[{}] [{}] Aborted on the file [{}]; exiting", shardId, snapshotId, fileName);
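When this exception is raised during a snapshot, one complementary check is the shard stores API, which reports store-level information and any store exceptions per shard; a minimal example (the index name is only illustrative):

GET /my_index/_shard_stores?status=all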