Briefly, this error occurs when Elasticsearch is unable to parse the watch status because it expects a field to hold a long (a numeric data type) but receives a different data type. To resolve this issue, check the data type of the field in question and ensure it holds a long value; if it does not, convert it. Also make sure the field is not null or empty, as this can cause the same error. Finally, check your Elasticsearch version, as older versions may contain bugs that produce this error.
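One way to check the field in question is to fetch the watch and inspect its raw status JSON before the high-level client tries to parse it. The sketch below is a minimal example using the Java low-level REST client; the localhost address and the watch ID "my_watch" are placeholders for your own cluster and watch.

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class InspectWatchStatus {
    public static void main(String[] args) throws Exception {
        // Placeholder host and watch ID -- adjust for your cluster.
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Response response = client.performRequest(new Request("GET", "/_watcher/watch/my_watch"));
            // Print the raw watch body; numeric status fields such as status.version must
            // deserialize as longs, which is what WatchStatus.java expects during parsing.
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}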
This guide will help you check for common problems that cause the log “could not parse watch status. expecting field [{}] to hold a long” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: rest-high-level, client.
Overview
Rest-high-level is built on top of the low-level rest-client and is a way of communicating with Elasticsearch over its HTTP REST endpoints. The concept is most relevant in the context of the Java Elasticsearch client. From day one, Elasticsearch supported the transport client for Java to communicate with the cluster. In version 5.0, a low-level rest-client was released with several advantages over the existing transport client, such as version independence, increased stability, and a lightweight JAR library.
What it is used for
It is used for communicating with Elasticsearch HTTP REST endpoints, with the marshalling of request objects and unmarshalling of response objects handled by the client library itself.
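As an illustration, here is a minimal Java sketch of the high-level client wrapping the low-level rest-client builder. The localhost address is an assumption; the info() call simply hits the cluster root endpoint and unmarshals the JSON into a typed response object.

import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.core.MainResponse;

public class HighLevelClientExample {
    public static void main(String[] args) throws Exception {
        // The high-level client is constructed from a low-level RestClient builder.
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
            // info() calls GET / and returns a MainResponse object instead of raw JSON.
            MainResponse info = client.info(RequestOptions.DEFAULT);
            System.out.println("Cluster version: " + info.getVersion().getNumber());
        }
    }
}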
Overview
Any application that interfaces with Elasticsearch to index, update or search data, or to monitor and maintain Elasticsearch using various APIs, can be considered a client.
It is very important to configure clients properly in order to ensure optimum use of Elasticsearch resources.
Examples
There are many open-source client applications for monitoring, alerting and visualization, such as ElasticHQ, ElastAlert and Grafana, to name a few. On top of these are Elastic's own client applications, such as Filebeat, Metricbeat, Logstash and Kibana, which have all been designed to integrate with Elasticsearch.
However, it is frequently necessary to create your own client application to interface with Elasticsearch. Below is a simple example using the Python client (taken from the client documentation):
from datetime import datetime
from elasticsearch import Elasticsearch

es = Elasticsearch()

doc = {
    'author': 'Testing',
    'text': 'Elasticsearch: cool. bonsai cool.',
    'timestamp': datetime.now(),
}
res = es.index(index="test-index", doc_type='tweet', id=1, body=doc)
print(res['result'])

res = es.get(index="test-index", doc_type='tweet', id=1)
print(res['_source'])

es.indices.refresh(index="test-index")

res = es.search(index="test-index", body={"query": {"match_all": {}}})
print("Got %d Hits:" % res['hits']['total']['value'])
for hit in res['hits']['hits']:
    print("%(timestamp)s %(author)s: %(text)s" % hit["_source"])
All of the official Elasticsearch clients follow a similar structure, working as light wrappers around the Elasticsearch rest API, so if you are familiar with Elasticsearch query structure they are usually quite straightforward to implement.
Notes and Good Things to Know
Use official Elasticsearch libraries.
Although it is possible to connect to Elasticsearch with any HTTP client, such as a curl request, the official Elasticsearch libraries have been designed to properly implement connection pooling and keep-alives.
Official Elasticsearch clients are available for Java, JavaScript, Perl, PHP, Python, Ruby and .NET. Many other programming languages are supported by community versions.
Keep your Elasticsearch version and client versions in sync.
To avoid surprises, always keep your client versions in line with the Elasticsearch version you are using. Always test clients with Elasticsearch since even minor version upgrades can cause issues due to dependencies or a need for code changes.
Load balance across appropriate nodes.
Make sure that the client properly load balances across all of the appropriate nodes in the cluster. In small clusters this will normally mean all data nodes (never dedicated master nodes); in larger clusters it will mean all dedicated coordinating nodes, if implemented.
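With the Java clients, for example, load balancing is achieved simply by listing the appropriate nodes in the builder; the low-level client rotates requests across the configured hosts. The host names below are placeholders for your own coordinating or data nodes.

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class LoadBalancedClient {
    public static void main(String[] args) {
        // Placeholder host names -- list your coordinating (or data) nodes here,
        // never dedicated master nodes. Requests are rotated across these hosts.
        RestHighLevelClient client = new RestHighLevelClient(
            RestClient.builder(
                new HttpHost("coord-node-1", 9200, "http"),
                new HttpHost("coord-node-2", 9200, "http"),
                new HttpHost("coord-node-3", 9200, "http")));
        // ... use the client, then close it when the application shuts down.
    }
}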
Ensure that the Elasticsearch application properly handles exceptions.
In the case of Elasticsearch being unable to cope with the volume of requests, designing a client application to handle this gracefully (such as through some sort of queueing mechanism) will be better than simply inundating a struggling cluster with repeated requests.
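As a sketch of the kind of handling meant here, the following Java snippet retries an index request with a simple backoff when the cluster responds with 429 Too Many Requests. The retry limit and sleep interval are arbitrary placeholder values, and a production application would more likely push the request onto a queue instead of sleeping.

import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.rest.RestStatus;

public class ResilientIndexer {
    // Retry an index request a few times when the cluster is overloaded (HTTP 429).
    static void indexWithBackoff(RestHighLevelClient client, IndexRequest request) throws Exception {
        int attempts = 0;
        while (true) {
            try {
                client.index(request, RequestOptions.DEFAULT);
                return;
            } catch (ElasticsearchStatusException e) {
                if (e.status() == RestStatus.TOO_MANY_REQUESTS && attempts < 5) {
                    attempts++;
                    Thread.sleep(1000L * attempts); // crude linear backoff; a real queue is preferable
                } else {
                    throw e; // not an overload condition, or retries exhausted
                }
            }
        }
    }
}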
Log Context
The log “could not parse watch status. expecting field [{}] to hold a long” originates in the class WatchStatus.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:
    }
} else if (Field.VERSION.match(currentFieldName, parser.getDeprecationHandler())) {
    if (token.isValue()) {
        version = parser.longValue();
    } else {
        throw new ElasticsearchParseException("could not parse watch status. expecting field [{}] to hold a long "
            + "value, found [{}] instead", currentFieldName, token);
    }
} else if (Field.LAST_CHECKED.match(currentFieldName, parser.getDeprecationHandler())) {
    if (token.isValue()) {
        lastChecked = parseDate(currentFieldName, parser);