Elasticsearch Queue

By Opster Team

Updated: Mar 10, 2024 | 1 min read

Overview

In Elasticsearch, the term queue is used in the context of thread pools. Each node in the cluster maintains several thread pools that control how different types of requests (search, write, get, and so on) consume CPU and memory on that node. Each thread pool has a queue whose default size depends on the node's resources; queue sizes can be changed, but from version 5.x onward only through static node-level settings rather than a dynamic REST call (see the Notes section below).

What it is used for

Queues hold pending requests for the corresponding thread pool instead of rejecting them outright. For example, if more search requests arrive on a node than its search threads can process at the same time, the excess requests are placed in the search thread pool queue until a thread becomes free; only when that queue is also full are requests rejected.

Examples

Monitor the thread pools using the _cat API:

GET /_cat/thread_pool?v
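The same API accepts an h parameter to select columns and a path filter to narrow the output to specific pools. The columns below are standard _cat/thread_pool columns; the particular selection is only an illustration:

GET /_cat/thread_pool/search,write?v&h=node_name,name,active,queue,rejected,completed

A non-zero rejected count means that pool's queue has filled up at some point.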

Get configuration details about each thread pool on every node, including its type and configured queue size:

GET /_nodes/thread_pool
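For runtime counters rather than configuration, the node stats API exposes a thread_pool section per node with the number of active threads, the current queue depth and the rejection count for each pool (the exact fields vary slightly between versions):

GET /_nodes/stats/thread_pool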

Notes

  • Thread pool queues are among the most important statistics to monitor in Elasticsearch, as they have a direct impact on cluster performance and, when full, can halt indexing and search requests.
  • Each thread pool's queue size can be changed through its own queue_size setting (for example, thread_pool.search.queue_size), as sketched after this list.
  • In version 2.x it was possible to update thread pool queue sizes dynamically using the cluster settings API.
  • From Elasticsearch version 5.x onward, thread pool settings can no longer be updated dynamically via the cluster settings API. They are node-level settings that must be configured in elasticsearch.yml on each node, and a node restart is required for the changes to take effect.
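A minimal sketch of such a node-level change, assuming the search and write pools and purely illustrative values; it goes into elasticsearch.yml on each node, followed by a restart:

# elasticsearch.yml - static, node-level settings (restart required)
thread_pool.search.queue_size: 2000    # illustrative value only
thread_pool.write.queue_size: 1000     # illustrative value only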

Common problems

  • The most common queue-related problem in Elasticsearch is the EsRejectedExecutionException, which occurs when a queue is full and the node cannot keep up with the rate of incoming requests. This can also cause nodes to stop responding. To deal with this issue, monitor the thread pools continuously; depending on queue utilization, either review and throttle the indexing/search load or increase the cluster's resources.
  • In the case of bulk indexing queue rejections, increasing the queue size causes the node to keep more requests in memory, which can make requests take longer to complete and consume more heap space, ultimately affecting cluster performance and stability (see the example after this list for checking rejection counts).
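To check whether write (bulk) rejections are actually occurring, a focused _cat query can be used; the pool is named write in recent versions, while older releases called it bulk:

GET /_cat/thread_pool/write?v&h=node_name,name,queue,rejected

If the rejected counter keeps growing, throttling client-side bulk concurrency is usually a safer first step than enlarging the queue.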

Log errors related to this ES concept

  • Failed add ILM history item to queue for index
  • Failed to queue ILM history item in index
  • Unexpectedly failed to process queue item
  • Failed to queue ILM history item in index %s %s
  • Pending task queue has been nonempty for ms which is longer than the warn threshold of ms
  • Unexpected exception executing queue entry
  • Queue processor found no items
  • Received a cluster state uuid v from a different master than the current one rejecting received current