Briefly, this error occurs when Elasticsearch is unable to update the snapshot state after the shards have started. This can be caused by insufficient permissions, network connectivity issues, or a problem with the underlying storage system. To resolve the issue, check and adjust permissions, ensure network connectivity is stable, and verify the health of the storage system. You can also try restarting the Elasticsearch cluster or re-creating the snapshot.
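As a first diagnostic step, you can ask Elasticsearch to confirm that all nodes can access the snapshot repository and check whether any snapshot is currently running. A minimal sketch, assuming a repository named backup (replace with your repository name):

POST /_snapshot/backup/_verify

GET /_snapshot/backup/_current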
This guide will help you check for common problems that cause the log “failed to update snapshot state after shards started from [{}]” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: shards and snapshots.
Overview
Data in an Elasticsearch index can grow to massive proportions. In order to keep it manageable, it is split into a number of shards. Each Elasticsearch shard is an Apache Lucene index, with each individual Lucene index containing a subset of the documents in the Elasticsearch index. Splitting indices in this way keeps resource usage under control. An Apache Lucene index has a limit of 2,147,483,519 documents.
Examples
The number of shards is set when an index is created, and this number cannot be changed later without reindexing the data. When creating an index, you can set the number of shards and replicas as properties of the index using:
PUT /sensor
{
  "settings": {
    "index": {
      "number_of_shards": 6,
      "number_of_replicas": 2
    }
  }
}
The ideal number of shards should be determined based on the amount of data in an index. Generally, an optimal shard should hold 30-50GB of data. For example, if you expect to accumulate around 300GB of application logs in a day, having around 10 shards in that index would be reasonable.
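For example, such sizing can be applied up front through an index template so that every new daily index is created with the desired shard count. This is a minimal sketch using the composable index template API (Elasticsearch 7.8 and later); the template name app_logs and the pattern app-logs-* are illustrative assumptions:

PUT _index_template/app_logs
{
  "index_patterns": ["app-logs-*"],
  "template": {
    "settings": {
      "index": {
        "number_of_shards": 10,
        "number_of_replicas": 1
      }
    }
  }
}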
During their lifetime, shards can go through a number of states, including:
- Initializing: An initial state before the shard can be used.
- Started: A state in which the shard is active and can receive requests.
- Relocating: A state that occurs when shards are in the process of being moved to a different node. This may be necessary under certain conditions, such as when the node they are on is running out of disk space.
- Unassigned: The state of a shard that has failed to be assigned. A reason is provided when this happens. For example, if the node hosting the shard is no longer in the cluster (NODE_LEFT) or due to restoring into a closed index (EXISTING_INDEX_RESTORED).
In order to view all shards, their states, and other metadata, use the following request:
GET _cat/shards
To view shards for a specific index, append the name of the index to the URL, for example:
GET _cat/shards/sensor
This command produces output, such as in the following example. By default, the columns shown include the name of the index, the name (i.e. number) of the shard, whether it is a primary shard or a replica, its state, the number of documents, the size on disk, the IP address, and the node ID.
sensor 5 p STARTED    0 283b  127.0.0.1 ziap
sensor 5 r UNASSIGNED
sensor 2 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 2 r UNASSIGNED
sensor 3 p STARTED    3 7.2kb 127.0.0.1 ziap
sensor 3 r UNASSIGNED
sensor 1 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 1 r UNASSIGNED
sensor 4 p STARTED    2 3.8kb 127.0.0.1 ziap
sensor 4 r UNASSIGNED
sensor 0 p STARTED    0 283b  127.0.0.1 ziap
sensor 0 r UNASSIGNED
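In the example output above, the replica shards are UNASSIGNED because the example cluster has only one node, so there is nowhere to place a second copy. To see the exact reason why a particular shard is unassigned, you can use the cluster allocation explain API; a minimal sketch for replica 0 of the sensor index:

GET _cluster/allocation/explain
{
  "index": "sensor",
  "shard": 0,
  "primary": false
}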
Notes and good things to know
- Having shards that are too large is simply inefficient. Moving huge indices across machines is both time- and labor-intensive: Lucene merges take longer to complete and require more resources, and moving shards across nodes for rebalancing also takes longer, which extends recovery time. By splitting the data and spreading it across a number of machines, it can be kept in manageable chunks, minimizing these risks.
- Having the right number of shards is important for performance. It is thus wise to plan in advance. When queries are run across different shards in parallel, they execute faster than an index composed of a single shard, but only if each shard is located on a different node and there are sufficient nodes in the cluster. At the same time, however, shards consume memory and disk space, both in terms of indexed data and cluster metadata. Having too many shards can slow down queries, indexing requests, and management operations, and so maintaining the right balance is critical.
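To judge whether shards are spread evenly and whether nodes are running low on disk, the cat allocation API gives a quick per-node summary of shard counts and disk usage:

GET _cat/allocation?v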
Overview
An Elasticsearch snapshot is a backup of an index taken from a running cluster. Snapshots are taken incrementally. This means that when Elasticsearch creates a snapshot of an index, it will not copy any data that was already backed up in an earlier snapshot of the index (unless it was changed). Therefore, it is recommended to take snapshots often.
You can restore snapshots into a running cluster via the restore API. Snapshots can only be restored to versions of Elasticsearch that can read the indices, so check version compatibility before you restore. You can’t restore an index into a cluster that is more than one major version newer than the version the index was created in.
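Before restoring, you can check which Elasticsearch version a snapshot was taken with by retrieving its metadata. A sketch assuming the repository and snapshot names used in the examples below:

GET /_snapshot/backup/my_snapshot-01-10-2019

The response includes, among other fields, the version of Elasticsearch that created the snapshot and the list of indices it contains.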
The following repository types are supported:
- File system location
- S3 object storage
- HDFS
- Azure and Google Cloud storage
Examples
An example of using an S3 repository for Elasticsearch:
PUT _snapshot/backups
{
  "type": "s3",
  "settings": {
    "bucket": "elastic",
    "endpoint": "10.3.10.10:9000",
    "protocol": "http"
  }
}
You will also need to set the S3 access key and secret key in the Elasticsearch keystore:
bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key
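On recent Elasticsearch versions the S3 client credentials are reloadable secure settings, so a running cluster can pick them up without a restart; a minimal sketch:

POST _nodes/reload_secure_settings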
Taking a snapshot
Once the repository is set up, taking a snapshot is just an API call:
PUT /_snapshot/backup/my_snapshot-01-10-2019
Here, backup is the name of the snapshot repository and my_snapshot-01-10-2019 is the name of the snapshot. The example above takes a snapshot of all the indices. To take a snapshot of specific indices, provide their names in the request body:
PUT /_snapshot/backup/my_snapshot-01-10-2019
{
  "indices": "my_index_1,my_index_2"
}
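Snapshots run in the background, so you may want to monitor their progress. A sketch that returns the state and per-shard progress of the snapshot created above:

GET /_snapshot/backup/my_snapshot-01-10-2019/_status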
Restoring a snapshot
Restoring from a snapshot is also an API call:
POST /_snapshot/backup/my_snapshot-01-10-2019/_restore
{
  "indices": "index_1,index_2"
}
This will restore index_1 and index_2 from the snapshot my_snapshot-01-10-2019 in the backup repository.
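If the target indices already exist and are open, the restore will fail (see the notes below). One option is to restore them under different names using the rename_pattern and rename_replacement parameters; a minimal sketch:

POST /_snapshot/backup/my_snapshot-01-10-2019/_restore
{
  "indices": "index_1,index_2",
  "rename_pattern": "index_(.+)",
  "rename_replacement": "restored_index_$1"
}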
Notes and good things to know
- A snapshot repository needs to be set up before you can take a snapshot, and you will also need to install the S3 repository plugin if you plan to use a repository with S3 as the backend storage:
sudo bin/elasticsearch-plugin install repository-s3
- You can use the curator_cli tool to automate taking snapshots via a scheduler such as cron, Jenkins, or a Kubernetes CronJob (see the sketch after these notes).
- It is better to use Elasticsearch snapshots than disk-level backups/snapshots. Note that an index must be closed in order to be restored.
- Another option is to delete the index before restoring it.
- The snapshot and restore mechanism can also be used to copy data from one cluster to another.
- If you don’t have S3 storage, you can run MinIO with an NFS backend to create an S3 equivalent for your cluster snapshots.
- When the operation is retried, it will only try to snapshot any shards that failed on the initial operation, until the snapshot succeeds.
- It is better to keep the snapshot repository on the same local network as Elasticsearch, or to configure/design the repository for high write throughput, so that you don’t have to deal with partial snapshots.
- The snapshot operation will fail if there is a missing index. Setting the ignore_unavailable option to true will cause indices that do not exist to be ignored during snapshot operation.
- If you are using an open-source security tool such as Search Guard, you will need to configure the Elasticsearch snapshot restore settings on the cluster before you can restore any snapshot.
- In elasticsearch.yml:
searchguard.enable_snapshot_restore_privilege: true
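As mentioned in the notes above, snapshot creation can be automated with a scheduler. The following is a minimal sketch using plain curl, which can be placed in a cron job, a Jenkins job, or a Kubernetes CronJob; the host localhost:9200 and the repository name backup are assumptions:

# take a date-stamped snapshot of all indices, skipping indices that do not exist
curl -s -X PUT "http://localhost:9200/_snapshot/backup/snapshot-$(date +%F)?wait_for_completion=false" \
  -H 'Content-Type: application/json' \
  -d '{"ignore_unavailable": true}'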
Create data backups automatically without using snapshots
If having backups of your data is important to you and your operations, snapshots may not be ideal for you. Firstly, there are the problems mentioned above, but you also run the risk of losing any data generated in the time elapsed since the last snapshot was stored.
If, for example, you designate a snapshot and restore process to occur every 5 minutes, the data being backed up is always 5 minutes behind. If a cluster fails 4 minutes after the last snapshot was taken, 4 minutes of data will be completely lost.
Opster’s Multi-Cluster Load Balancer mirrors data to multiple clusters in real time to ensure complete data recovery, meaning there are zero time gaps and you’ll never run the risk of losing valuable data.
Log Context
The log “failed to update snapshot state after shards started from [{}]” is generated by the class SnapshotsService.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:
            return currentState;
        }

        @Override
        public void onFailure(String source, Throwable t) {
            logger.warn("failed to update snapshot state after shards started from [{}] ", t, source);
        }
    });
}