Briefly, this error occurs when Elasticsearch is unable to clear a scroll identified by a specific scrollId. This could be due to a timeout, a server issue, or the scrollId not existing. To resolve this, you can try increasing the timeout period, ensuring the server is functioning properly, or verifying the scrollId. If the issue persists, consider checking the Elasticsearch logs for more detailed error information.
This guide will help you check for common problems that cause the log "Failed to clear scroll [" + scrollId + "]" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: scroll, index, reindex.
Overview
In Elasticsearch, the concept of scroll comes into play when you have a large set of search results. Returning a very large result set in a single response is taxing for both the Elasticsearch cluster and the requesting client in terms of memory and processing. The scroll API enables you to take a snapshot of the results of a single search request and retrieve them in batches.
Examples
To perform a scroll search, you need to add the scroll parameter to a search query and specify how long Elasticsearch should keep the search context viable.
GET mydocs-2019/_search?scroll=40s
{
  "size": 5000,
  "query": {
    "match_all": {}
  },
  "sort": [
    {
      "_doc": {
        "order": "asc"
      }
    }
  ]
}
This query will return a maximum of 5000 hits per batch. If the scroll is idle for more than 40 seconds, its search context will be deleted. The response returns the first page of results together with a scroll ID, which you can use to retrieve the next batch of documents, as shown below. You can keep retrieving documents until you have all of them.
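For example, a follow-up request for the next batch might look like this (the scroll_id shown is only a placeholder for the ID returned in the previous response):

GET _search/scroll
{
  "scroll": "40s",
  "scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}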
Notes
- Changes made to documents after the scroll was created will not show up in its results.
- When you are done with the scroll, you can delete it manually using the scroll ID.
DELETE _search/scroll/<scroll_id>
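If you want to release every open search context at once, you can also clear all scrolls in a single request:

DELETE _search/scroll/_all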
Overview
In Elasticsearch, an index (plural: indices) contains a schema and can have one or more shards and replicas. An Elasticsearch index is divided into shards and each shard is an instance of a Lucene index.
Indices are used to store the documents in dedicated data structures corresponding to the data type of fields. For example, text fields are stored inside an inverted index whereas numeric and geo fields are stored inside BKD trees.
Examples
Create index
The following example applies to Elasticsearch version 7.x onwards (earlier versions expect a mapping type inside the mappings section). It creates an index named test_index1 with two shards, each having one replica:
PUT /test_index1?pretty
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "tags": { "type": "keyword" },
      "updated_at": { "type": "date" }
    }
  }
}
List indices
All the index names and their basic information can be retrieved using the following command:
GET _cat/indices?v
Index a document
Let's add a document to the index with the command below:
PUT test_index1/_doc/1
{
  "tags": ["opster", "elasticsearch"],
  "updated_at": "2020-01-01"
}
Query an index
GET test_index1/_search
{
  "query": {
    "match_all": {}
  }
}
Query multiple indices
It is possible to search multiple indices with a single request. In a raw HTTP request, the index names are sent in comma-separated format, as shown in the example below; when querying via a programming language client such as Python or Java, the index names are passed as a list.
GET test_index1,test_index2/_search
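Wildcard patterns in index names are also accepted here (see the note under Common problems below), for example:

GET test_index*/_search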
Delete indices
DELETE test_index1
Common problems
- It is good practice to define the settings and mappings of an index explicitly wherever possible, because otherwise Elasticsearch tries to guess the data type of each field at indexing time. This automatic process can have disadvantages, such as mapping conflicts, duplicate data and incorrect data types being set in the index. If the fields are not known in advance, it's better to use dynamic templates (see the sketch after this list).
- Elasticsearch supports wildcard patterns in index names, which sometimes helps with querying multiple indices but can also be very destructive. For example, it is possible to delete all the indices with a single command:
DELETE /*
To disable this behavior, you can add the following line to elasticsearch.yml:
action.destructive_requires_name: true
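As a minimal sketch of a dynamic template (the index name and the field-name pattern here are purely illustrative), the following mapping tells Elasticsearch to index any string field whose name ends in _id as a keyword instead of analyzed text:

PUT /test_index3
{
  "mappings": {
    "dynamic_templates": [
      {
        "ids_as_keywords": {
          "match": "*_id",
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ]
  }
}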
Overview
Reindex is the concept of copying existing data from a source index to a destination index, which can be inside the same or a different cluster. Elasticsearch has a dedicated endpoint, _reindex, for this purpose. Reindexing is mostly required when updating mappings or settings.
Examples
Reindex data from a source index to a destination index in the same cluster:
POST /_reindex?pretty
{
  "source": {
    "index": "news"
  },
  "dest": {
    "index": "news_v2"
  }
}
Notes
- The reindex API does not copy the settings and mappings of the source index to the destination index. You need to create the destination index with the desired settings and mappings before you begin the reindexing process.
- The API exposes an extensive list of configuration options for fetching data from the source, such as query-based reindexing and selecting multiple indices as the source (see the example after these notes).
- In some scenarios the reindex API is not useful, for example when reindexing requires complex data processing or data modification based on application logic. In such cases, you can write a custom script that uses the Elasticsearch scroll API to fetch the data from the source index and the bulk API to index it into the destination index.
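As an illustration of query-based reindexing with multiple source indices, a request might look like the sketch below (the index names and the category field are only examples):

POST /_reindex
{
  "source": {
    "index": ["news", "blogs"],
    "query": {
      "term": {
        "category": "sports"
      }
    }
  },
  "dest": {
    "index": "news_v2"
  }
}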
Log Context
Log "Failed to clear scroll [" + scrollId + "]" classname is ClientScrollableHitSource.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
        onCompletion.run();
    }

    @Override
    public void onFailure(Exception e) {
        logger.warn(() -> "Failed to clear scroll [" + scrollId + "]", e);
        onCompletion.run();
    }
});
}