Briefly, this error occurs when Elasticsearch is unable to remove cache files associated with a specific shard, possibly due to insufficient permissions or a locked file. To resolve this issue, you can try to manually delete the cache files, ensure that Elasticsearch has the necessary permissions to modify files in the data directory, or check whether any processes are locking the files and terminate them if necessary. Restarting the Elasticsearch service may also help resolve the issue.
This guide will help you check for common problems that cause the log "failed to evict cache files associated with shard %s" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: cache, shard.
Overview
Elasticsearch uses three types of caches to improve the efficiency of its operations.
- Node query cache
- Shard request cache
- Field data cache
How they work
The node query cache maintains the results of queries used in a filter context. The results are evicted on a least-recently-used basis.
The shard request cache maintains the results of frequently used search requests where size=0, particularly the results of aggregations. This cache is especially relevant for logging use cases, where data on old indices is no longer updated, so the results of regular aggregations can be kept in the cache and reused.
The field data cache is used for sorting and aggregations. To keep these operations fast, Elasticsearch loads the required field values into memory.
Examples
Elasticsearch usually manages caches behind the scenes, without the need for any specific settings. However, it is possible to monitor cache usage (see below) and to limit the amount of memory used on each node for a given cache type by putting the following in elasticsearch.yml:
indices.queries.cache.size: 10%
indices.fielddata.cache.size: 30%
Note that 10% is in fact the default for the query cache and does not need to be set explicitly, while the field data cache is unbounded unless a limit such as the one above is configured. The default behavior is good for most use cases and should rarely be modified.
You can monitor the use of caches on each node like this:
GET /_nodes/stats/indices/fielddata
GET /_nodes/stats/indices/query_cache
GET /_nodes/stats/indices/request_cache
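If a particular cache is consuming too much memory, it can also be cleared manually. As a quick sketch (my-index is just a placeholder index name), the clear cache API can target a single cache type, a single index, or the whole cluster:
POST /my-index/_cache/clear?fielddata=true
POST /my-index/_cache/clear
POST /_cache/clear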
Notes and good things to know
Construct your queries with reusable filters. Certain parts of your query are good candidates to be reused across a large number of queries, and you should design your queries with this in mind. Anything that does not need to be scored should go in the filter section of a bool query. For example, time ranges, language selectors, or clauses that exclude inactive documents are all likely to be repeated across a large number of queries, and should be placed in the filter part of the query so that they can be cached and reused.
In particular, take care with time filters. "now-15m" cannot be reused, because "now" continually changes as the time window moves on. On the other hand, "now-15m/m" rounds down to the nearest minute and can be reused (via the cache) for 60 seconds before rolling over to the next minute.
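As a small sketch of the difference, a cache-friendly version of such a time filter might look like this (the index and field names simply follow the example further below):
POST results/_search
{
  "query": {
    "bool": {
      "filter": [
        { "range": { "@timestamp": { "gte": "now-15m/m" } } }
      ]
    }
  }
}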
For example, when a user enters the search term "brexit", we may also want to filter on language and time period to return relevant articles. The query below leaves only the query term "brexit" in the "must" part of the query, because this is the only part that should affect the relevance score. The time filter and language filter can then be cached and reused across many different searches.
POST results/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "message": { "query": "brexit" } } }
      ],
      "filter": [
        { "range": { "@timestamp": { "gte": "now-10d/d" } } },
        { "term": { "lang.keyword": { "value": "en", "boost": 1 } } }
      ]
    }
  }
}
Limit the use of field data. Be careful about setting fielddata=true in your mapping on fields with a large number of distinct terms, i.e. high cardinality. If you must use fielddata=true, you can also reduce the amount of field data cache required for a given index by applying a field data frequency filter, which loads only the terms whose document frequency falls within a specified range, as in the sketch below.
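A minimal sketch of such a mapping, assuming a text field named message on the results index used above (the frequency thresholds are purely illustrative):
PUT results
{
  "mappings": {
    "properties": {
      "message": {
        "type": "text",
        "fielddata": true,
        "fielddata_frequency_filter": {
          "min": 0.001,
          "max": 0.1,
          "min_segment_size": 500
        }
      }
    }
  }
}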
Overview
Data in an Elasticsearch index can grow to massive proportions. In order to keep it manageable, it is split into a number of shards. Each Elasticsearch shard is an Apache Lucene index, with each individual Lucene index containing a subset of the documents in the Elasticsearch index. Splitting indices in this way keeps resource usage under control. An Apache Lucene index has a limit of 2,147,483,519 documents.
Examples
The number of shards is set when an index is created, and this number cannot be changed later without reindexing the data. When creating an index, you can set the number of shards and replicas as properties of the index using:
PUT /sensor
{
  "settings": {
    "index": {
      "number_of_shards": 6,
      "number_of_replicas": 2
    }
  }
}
The ideal number of shards should be determined based on the amount of data in an index. Generally, an optimal shard should hold 30-50GB of data. For example, if you expect to accumulate around 300GB of application logs in a day, having around 10 shards in that index would be reasonable.
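To check how much data an existing index holds before settling on a shard count, you can look at its size on disk, for example for the sensor index created above (the column selection via h= is optional):
GET _cat/indices/sensor?v&h=index,pri,rep,docs.count,pri.store.size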
During their lifetime, shards can go through a number of states, including:
- Initializing: An initial state before the shard can be used.
- Started: A state in which the shard is active and can receive requests.
- Relocating: A state that occurs when shards are in the process of being moved to a different node. This may be necessary under certain conditions, such as when the node they are on is running out of disk space.
- Unassigned: The state of a shard that could not be assigned to a node. A reason is provided when this happens, for example if the node hosting the shard is no longer in the cluster (NODE_LEFT) or as a result of restoring into a closed index (EXISTING_INDEX_RESTORED). The allocation explain request shown after this list can help diagnose the cause.
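When a shard stays unassigned, the cluster allocation explain API reports the reason in detail. Called without a request body, it explains an arbitrary unassigned shard:
GET _cluster/allocation/explain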
In order to view all shards, their states, and other metadata, use the following request:
GET _cat/shards
To view shards for a specific index, append the name of the index to the URL, for example sensor:
GET _cat/shards/sensor
This command produces output such as in the following example. By default, the columns shown include the name of the index, the number of the shard, whether it is a primary shard or a replica, its state, the number of documents, the size on disk, the IP address, and the node ID.
sensor 5 p STARTED    0 283b  127.0.0.1 ziap
sensor 5 r UNASSIGNED
sensor 2 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 2 r UNASSIGNED
sensor 3 p STARTED    3 7.2kb 127.0.0.1 ziap
sensor 3 r UNASSIGNED
sensor 1 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 1 r UNASSIGNED
sensor 4 p STARTED    2 3.8kb 127.0.0.1 ziap
sensor 4 r UNASSIGNED
sensor 0 p STARTED    0 283b  127.0.0.1 ziap
sensor 0 r UNASSIGNED
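By default, the _cat/shards output has no column headers. Appending ?v prints the headers, and the h parameter selects which columns to show, for example:
GET _cat/shards/sensor?v&h=index,shard,prirep,state,docs,store,node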
Notes and good things to know
- Having shards that are too large is simply inefficient. Moving huge indices across machines is both time- and labor-intensive: Lucene merges take longer to complete and require greater resources, and moving oversized shards across nodes for rebalancing also takes longer, which extends recovery time. Splitting the data and spreading it across a number of machines keeps it in manageable chunks and minimizes these risks.
- Having the right number of shards is important for performance, so it is wise to plan in advance. Queries run across different shards in parallel execute faster than on an index composed of a single shard, but only if each shard is located on a different node and there are sufficient nodes in the cluster. At the same time, shards consume memory and disk space, both in terms of indexed data and cluster metadata. Having too many shards can slow down queries, indexing requests, and management operations, so maintaining the right balance is critical.
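To keep an eye on that balance, you can check how many shards each node currently holds and how much disk space they consume:
GET _cat/allocation?v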
How to reduce your Elasticsearch costs by optimizing your shards
Watch the video below to learn how to save money on your deployment by optimizing your shards.
Log Context
The log "failed to evict cache files associated with shard %s" is generated in the class CacheService.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
            processShardEviction(shard);
        }

        @Override
        public void onFailure(Exception e) {
            logger.warn(() -> format("failed to evict cache files associated with shard %s", shard), e);
            assert false : e;
        }
    });
    return future;
});