Briefly, this error occurs when the ‘search_quote_analyzer’ setting is defined on a field without both ‘analyzer’ and ‘search_analyzer’ also being set. Elasticsearch uses ‘analyzer’ at index time, ‘search_analyzer’ at search time and ‘search_quote_analyzer’ for quoted phrases, and the latter is only accepted when the first two are explicitly defined. To resolve this, either set both ‘analyzer’ and ‘search_analyzer’ on the field alongside ‘search_quote_analyzer’, or remove the ‘search_quote_analyzer’ setting.
Before you dig into reading this guide, have you tried asking OpsGPT what this log means? You’ll receive a customized analysis of your log.
Try OpsGPT now for step-by-step guidance and tailored insights into your Elasticsearch/OpenSearch operation.
In addition, we recommend running AutoOps for Elasticsearch, which can resolve issues that cause many errors.
This guide will help you understand why the log “analyzer and search_analyzer on field” appears. It’s important to understand the relevant background as well, so a concept definition of an index is included below.
Background
Analysis is the process that Elasticsearch performs on the body of a document before the document is sent off to be added to the inverted index. Elasticsearch goes through a number of steps for every analyzed field before the document is added to the index. These steps are:
- Character filtering
- Breaking text into tokens
- Token filtering
An analyzer is a combination of these three components. By default, queries use the analyzer defined in the field mapping, but this can be overridden with the search_analyzer setting, which you define when you want to use a different analyzer at search time.
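To see these steps in action before wiring an analyzer into a mapping, you can run the _analyze API; the sketch below (the sample text is arbitrary) applies the standard tokenizer followed by the lowercase token filter:
POST _analyze
{
  "tokenizer": "standard",
  "filter": [ "lowercase" ],
  "text": "The QUICK Brown Foxes"
}
The response lists the tokens that would be stored in the inverted index for that text.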
Note that this behaviour changed in Elasticsearch 7.10: Elasticsearch no longer requires you to set both analyzer and search_analyzer when you use search_quote_analyzer in the mapping, so this error is only relevant for Elasticsearch versions below 7.10.
How to reproduce this exception
To recreate this exception, create an index with the following mapping:
PUT /my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase" ]
        },
        "my_stop_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "english_stop" ]
        }
      },
      "filter": {
        "english_stop": {
          "type": "stop",
          "stopwords": "_english_"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "search_quote_analyzer": "my_analyzer"
      }
    }
  }
}
The response will be:
{ "error": { "root_cause": [ { "type": "mapper_parsing_exception", "reason": "analyzer and search_analyzer on field [title] must be set when search_quote_analyzer is set" } ], "type": "mapper_parsing_exception", "reason": "Failed to parse mapping [_doc]: analyzer and search_analyzer on field [title] must be set when search_quote_analyzer is set", "caused_by": { "type": "mapper_parsing_exception", "reason": "analyzer and search_analyzer on field [title] must be set when search_quote_analyzer is set" } }, "status": 400 }
How to fix this exception
The exception clearly states that both the analyzer and search_analyzer must be set when search_quote_analyzer is set. In the mapping above, search_quote_analyzer points to the my_analyzer analyzer; this setting allows you to specify a separate analyzer for phrases.
To fix this exception, modify the index mapping:
PUT /my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase" ]
        },
        "my_stop_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [ "lowercase", "english_stop" ]
        }
      },
      "filter": {
        "english_stop": {
          "type": "stop",
          "stopwords": "_english_"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "my_analyzer",
        "search_analyzer": "my_stop_analyzer",
        "search_quote_analyzer": "my_analyzer"
      }
    }
  }
}
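As documented by Elasticsearch, search_quote_analyzer is applied to quoted phrases, while unquoted terms go through the search_analyzer. With the mapping above, a query such as the following sketch (the search text is illustrative) analyzes the quoted phrase with my_analyzer, so stop words are kept for the phrase match:
GET /my-index/_search
{
  "query": {
    "query_string": {
      "default_field": "title",
      "query": "\"the quick brown fox\""
    }
  }
}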
Overview
In Elasticsearch, an index (plural: indices) contains a schema and can have one or more shards and replicas. An Elasticsearch index is divided into shards and each shard is an instance of a Lucene index.
Indices store documents in dedicated data structures that correspond to each field’s data type. For example, text fields are stored inside an inverted index, whereas numeric and geo fields are stored inside BKD trees.
Examples
Create index
The following example is based on Elasticsearch version 7.x onwards. An index named test_index1 will be created with two shards, each having one replica:
PUT /test_index1?pretty
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "tags": {
        "type": "keyword"
      },
      "updated_at": {
        "type": "date"
      }
    }
  }
}
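As an optional check, you can retrieve the new index’s settings and mappings to confirm it was created as intended:
GET test_index1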
List indices
All the index names and their basic information can be retrieved using the following command:
GET _cat/indices?v
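The _cat/indices API also accepts an index pattern and a selection of columns, which helps on clusters with many indices; for example (the columns shown are standard _cat headers):
GET _cat/indices/test_index*?v&h=index,health,status,docs.count,store.size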
Index a document
Let’s add a document to the index with the command below:
PUT test_index1/_doc/1
{
  "tags": [ "opster", "elasticsearch" ],
  "updated_at": "2020-01-01"
}
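To verify that the document was indexed, it can be fetched back by its ID:
GET test_index1/_doc/1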
Query an index
GET test_index1/_search
{
  "query": {
    "match_all": {}
  }
}
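Beyond match_all, you can query specific fields from the mapping above; for example, this sketch runs a term query against the tags keyword field and will match the sample document indexed earlier:
GET test_index1/_search
{
  "query": {
    "term": {
      "tags": "opster"
    }
  }
}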
Query multiple indices
It is possible to search multiple indices with a single request. In a raw HTTP request, index names are sent in comma-separated format, as shown in the example below; when querying via a programming language client such as Python or Java, index names are sent as a list.
GET test_index1,test_index2/_search
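Wildcard patterns in index names also work here; for example, the following sketch searches every index whose name starts with test_index (assuming such indices exist):
GET test_index*/_search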
Delete indices
DELETE test_index1
Common problems
- It is good practice to define the settings and mapping of an index wherever possible, because if this is not done, Elasticsearch tries to automatically guess the data type of fields at indexing time. This automatic process may have disadvantages, such as mapping conflicts, duplicate data and incorrect data types being set in the index. If the fields are not known in advance, it’s better to use dynamic templates (see the sketch after this list).
- Elasticsearch supports wildcard patterns in index names, which sometimes helps with querying multiple indices but can also be very destructive. For example, it is possible to delete all indices with a single command:
DELETE /*
To prevent this, you can add the following line to elasticsearch.yml:
action.destructive_requires_name: true
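The same setting can also be applied dynamically through the cluster settings API, without restarting nodes (a sketch; whether to use persistent or transient depends on your needs):
PUT _cluster/settings
{
  "persistent": {
    "action.destructive_requires_name": true
  }
}
As mentioned in the first point above, dynamic templates can control how fields that are not known in advance get mapped. A minimal sketch, in which the index name, template name and matching rule are purely illustrative, maps every new string field as a keyword:
PUT test_index_dynamic
{
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ]
  }
}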
Log Context
Log “analyzer and search_analyzer on field [” is raised from the class TypeParsers.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:
if (indexAnalyzer == null && searchAnalyzer != null) {
    throw new MapperParsingException("analyzer on field [" + name + "] must be set when search_analyzer is set");
}
if (searchAnalyzer == null && searchQuoteAnalyzer != null) {
    throw new MapperParsingException("analyzer and search_analyzer on field [" + name + "] must be set when search_quote_analyzer is set");
}
if (searchAnalyzer == null) {
    searchAnalyzer = indexAnalyzer;