Elasticsearch: How to Ensure Slow Logs Don’t Get Cut Off (Applicable before ES 8.0)

By Opster Team - Amit Khandelwal

Updated: Mar 22, 2023 | 1 min read

Overview

Elasticsearch provides a way to activate slow logs on search and index requests so that users can review them and debug the root cause of slow search and indexing operations. Elasticsearch also provides an API to activate these slow logs on a particular index with user-defined thresholds.
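As a sketch, slow logs can be enabled on a particular index with an index settings request like the one below (the index name and threshold values are illustrative; pick thresholds that suit your workload):

```
PUT /my-index/_settings
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.fetch.warn": "1s",
  "index.indexing.slowlog.threshold.index.warn": "10s"
}
```

Any query or indexing operation that exceeds a threshold is then written to the corresponding slow log at that log level.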

Opster’s Search Log Analyzer, a free tool, analyzes the output of search slow logs generated by Elasticsearch. It provides users with advanced insights such as the number of costly queries and the reasons why they were costly (e.g., heavy aggregations or a huge size param), helping locate and resolve issues affecting searches (such as hotspots and more).

Sometimes, search slow log lines are cut off midway through. This happens when the search query is very long. By default, Elasticsearch logs only 1k characters of each search slow log entry; if this 1k-character threshold (or a threshold set by the user) is crossed, Elasticsearch will truncate the slow log line at the limit defined in the log4j2.properties file.

How to fix this issue 

Option 1 (preferred)

If you can move to JSON-based logging, or if your ES version already uses it, switch to the JSON search slow log. JSON-based logging has no character-limit threshold to configure, so log lines will never be cut off midway. Note that beginning with version 8.0, Elasticsearch no longer supports plaintext logging for slow logs and provides these logs in JSON format only.
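For reference, the JSON slow log appender in log4j2.properties looks roughly like the ES 7.x default below (abridged; file paths and rolling policy omitted). The ESJsonLayout has no message-length modifier in its pattern, which is why JSON slow log entries are not truncated:

```properties
# JSON search slow log appender (ES 7.x style, abridged).
# ESJsonLayout carries no character-limit modifier, so the
# message is written in full.
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.layout.type = ESJsonLayout
appender.index_search_slowlog_rolling.layout.type_value = index_search_slowlog
```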

Option 2

You can change the character limit of the search slow log line in log4j2.properties by updating the following line, replacing the placeholder with your own limit. In log4j2 pattern syntax, the limit is written as a precision modifier on the message conversion (`%.-Nm` keeps the first N characters of the message).

appender.index_search_slowlog_rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %.-&lt;your-char-limit&gt;m%n
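For example, to allow up to 10,000 characters per slow log line (a value chosen here purely for illustration), the line would read:

```properties
# Keep the first 10,000 characters of each slow log message;
# log4j2's "%.-10000m" precision modifier truncates from the end.
appender.index_search_slowlog_rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %.-10000m%n
```

After editing log4j2.properties, restart the node for the new pattern to take effect.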