Briefly, this error occurs when Elasticsearch is unable to lock the JVM memory, due either to insufficient system memory or incorrect configuration settings. This can result in parts of the JVM heap being swapped out and can lead to performance issues. To resolve it, increase the system's memory or adjust the Elasticsearch configuration: ensure that bootstrap.memory_lock is set to true and that the heap size (set in jvm.options) is appropriate for your node. Also check the system's ulimit settings, in particular the memlock limit, and increase them if necessary. Lastly, ensure that the user running Elasticsearch has the necessary permissions to lock memory.
This guide will help you check for common problems that cause the log “Unable to lock JVM memory (ENOMEM).” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: bootstrap and memory.
Overview
Elasticsearch has many settings that can cause significant performance problems if not set correctly. To prevent this from happening, Elasticsearch carries out “bootstrap checks” to ensure that these important settings have been covered. If any of the checks fail, Elasticsearch will write an error to the logs and will not start. In this guide we cover common bootstrap checks you should know and how to configure your settings correctly to pass the checks successfully.
Bootstrap checks are carried out when the network.host setting is configured in elasticsearch.yml, for example:
network.host: 0.0.0.0
If network.host is not set and you use the default localhost, then Elasticsearch will consider the node to be in development mode, and bootstrap checks will not be enforced.
Common issues with bootstrap checks
If you install Elasticsearch using RPM or Debian packages (strongly recommended), then most of the important configuration is already done for you, and the only bootstrap checks you are likely to run up against are the following.
Heap size check
The minimum and maximum heap sizes specified in jvm.options (or via environment variables) must be equal to one another.
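For example, in jvm.options you might set both values to the same size (the 4g value here is illustrative; choose a size appropriate to your node’s RAM):
-Xms4g
-Xmx4g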
File descriptors check
The maximum number of file descriptors must be set to at least 65535.
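For example, assuming Elasticsearch runs as the user elasticsearch (adjust to your setup), you could add the following line to /etc/security/limits.conf:
elasticsearch  -  nofile  65535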
Memory lock check
There are various methods used to prevent memory swapping, and you must use one of them. The most common is to set the following in elasticsearch.yml:
bootstrap.memory_lock: true
For this to be effective, you must give the user running Elasticsearch permission to lock memory. There are various ways to do so, depending upon your operating system.
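As a rough sketch, on a systemd-based installation this is typically done with a service override, while for a manual installation you can raise the memlock limit in /etc/security/limits.conf (the elasticsearch user name below is an assumption):
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity

# or, in /etc/security/limits.conf
elasticsearch  -  memlock  unlimited
After restarting the node, you can verify that memory locking took effect with GET _nodes?filter_path=**.mlockall, as shown later in this guide.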
Discovery configuration checks
At least one of the following properties must be configured in elasticsearch.yml to ensure that your node can form a cluster properly:
- discovery.seed_hosts
- discovery.seed_providers
- cluster.initial_master_nodes
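A minimal elasticsearch.yml example (the host addresses and node name below are placeholders):
discovery.seed_hosts: ["10.0.0.1", "10.0.0.2"]
cluster.initial_master_nodes: ["node-1"]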
Less common bootstrap check issues
If you are not using RPM or Debian packages, you may come across the following issues:
Max number of threads check
You must allow your system to create at least 4096 threads. In Linux this is done by editing /etc/security/limits.conf and adjusting the nproc setting.
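For example, assuming Elasticsearch runs as the user elasticsearch, the line in /etc/security/limits.conf could look like this:
elasticsearch  -  nproc  4096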
Max file size check
Your system should be able to create unlimited file sizes. In Linux this is done by editing /etc/security/limits.conf and adjusting the fsize setting.
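Again assuming the elasticsearch user, the corresponding line could be:
elasticsearch  -  fsize  unlimited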
Max virtual memory check
The system should be able to create unlimited virtual memory for the elasticsearch user.
This is done by editing /etc/security/limits.conf and adding the following line (“as” is the address space limit):
<user> - as unlimited
Max map count check
The system must be able to use mmap effectively. This is done by running the command:
sysctl -w vm.max_map_count=262144
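To make this setting persist across reboots, you can also add the value to /etc/sysctl.conf (or a file under /etc/sysctl.d/):
vm.max_map_count=262144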
Other checks
The following checks are also carried out, but issues with them are rarely encountered:
- OsX File Descriptor Check
- Client JVM Check
- Use Serial GC Check
- System Call Filter Check
- Might Fork Check
- OnError Check
- OnOutOfMemoryError Check
- Early Access Check
- G1GC Check
- All Permission Check
Elasticsearch memory requirements
The Elasticsearch process is very memory intensive. Elasticsearch runs on a JVM (Java Virtual Machine), and close to 50% of the memory available on a node should be allocated to the JVM heap. The JVM uses this memory because Lucene needs to know where to look for index values on disk. The other 50% is required for the file system cache, which keeps regularly accessed data in memory.
For a full explanation of JVM management, please see: Heap Size Usage and JVM Garbage Collection in ES – A Detailed Guide.
It is also common to receive warnings from the different types of circuit breakers. This is discussed here: How to Handle Circuit Breaker Exceptions in Elasticsearch.
jvm.mem
The most important memory section is the JVM heap.
GET _nodes/stats/jvm
The output could look like this:
"jvm" : { "timestamp" : 1603351829573, "uptime_in_millis" : 150932107, "mem" : { "heap_used_in_bytes" : 258714272, "heap_used_percent" : 24, "heap_committed_in_bytes" : 1073741824, "heap_max_in_bytes" : 1073741824, "non_heap_used_in_bytes" : 192365488, "non_heap_committed_in_bytes" : 209186816,
Note that the heap_used_in_bytes of a healthy JVM will follow a sawtooth pattern due to the garbage collection process, increasing steadily up to around 70%, then reducing sharply to 30% when garbage collection occurs.
The JVM heap_max will depend on the value set in the jvm.options file, and you should set it to be 50% of the RAM available for your container or server.
For more information on JVM heap issues, please see: Heap Size Usage and JVM Garbage Collection in ES – A Detailed Guide.
Explaining the different types of memory statistics
When looking at the memory statistics, we need to be aware that many Elasticsearch applications are running inside containers on much larger machines. This is typical if you are using a hosted service such as AWS Elasticsearch Service or Elastic Cloud, or if you are running Elasticsearch on Docker or Kubernetes. In such cases, it’s important to be careful when interpreting the memory statistics available to us.
There are various memory statistics available from the Elasticsearch monitoring APIs, as explained below:
GET _nodes/stats/os

"os" : {
  "timestamp" : 1603350306857,
  "cpu" : {
    "percent" : 13,
    "load_average" : {
      "1m" : 3.37,
      "5m" : 3.18,
      "15m" : 2.8
    }
  },
  "mem" : {
    "total_in_bytes" : 16703369216,
    "free_in_bytes" : 361205760,
    "used_in_bytes" : 16342163456,
    "free_percent" : 2,
    "used_percent" : 98
  },
  "swap" : {
    "total_in_bytes" : 1023406080,
    "free_in_bytes" : 1302528,
    "used_in_bytes" : 1022103552
  },
  "cgroup" : {
    "cpuacct" : {
      "control_group" : "/",
      "usage_nanos" : 2669636420088
    },
    "cpu" : {
      "control_group" : "/",
      "cfs_period_micros" : 100000,
      "cfs_quota_micros" : -1,
      "stat" : {
        "number_of_elapsed_periods" : 0,
        "number_of_times_throttled" : 0,
        "time_throttled_nanos" : 0
      }
    },
    "memory" : {
      "control_group" : "/",
      "limit_in_bytes" : "9223372036854771712",
      "usage_in_bytes" : "4525641728"
    }
  }
}
The above statistics are for a development node running on Docker. Let’s interpret the sections we received here:
os.mem
The first section, “mem”, refers to the host machine where the node is running. In this case we are running on Docker, so the 16GB refers to the memory of the host machine where the container is running. Note that it is quite normal for a machine to be using close to 100% of its memory, and this does not indicate a problem.
os.swap
The swap section again refers to the host machine. In this case, we can see that the host machine allows swapping. This is quite normal when we are running a container inside a host with swapping still enabled. We can double check that memory locking is enabled inside the Docker container (so that its memory cannot be swapped out) by running:
GET _nodes?filter_path=**.mlockall
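If memory locking is active, each node in the response should report mlockall as true (the node ID below is illustrative):
{
  "nodes" : {
    "xxxxxxxxxxxxxxxxxxxxxx" : {
      "process" : {
        "mlockall" : true
      }
    }
  }
}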
os.cgroup
Finally, we can see the cgroup section, which refers to the container itself. In this case the container is using around 4GB of memory.
process.mem
We also have access to a statistic for the process’s virtual memory:
GET _nodes/stats/process

"process" : {
  "timestamp" : 1603352751181,
  "open_file_descriptors" : 436,
  "max_file_descriptors" : 1048576,
  "cpu" : {
    "percent" : 0,
    "total_in_millis" : 1964850
  },
  "mem" : {
    "total_virtual_in_bytes" : 5035376640
  }
}
Note that total_virtual_in_bytes includes the entire memory available to the process including virtual memory available via mmap.
Log Context
The log “Unable to lock JVM memory (ENOMEM).” is generated in the class JNANatives.java.
We extracted the following from the Elasticsearch source code for those seeking an in-depth context:
    return;
}
if (errno != Integer.MIN_VALUE) {
    if (errno == JNACLibrary.ENOMEM
            && System.getProperty("os.name").toLowerCase(Locale.ROOT).contains("linux")) {
        logger.warn("Unable to lock JVM memory (ENOMEM)."
            + " This can result in part of the JVM being swapped out."
            + " Increase RLIMIT_MEMLOCK (ulimit).");
    } else if (!System.getProperty("os.name").toLowerCase(Locale.ROOT).contains("mac")) {
        // OS X allows mlockall to be called, but always returns an error
        logger.warn("Unknown mlockall error " + errno);