Heap size compressed ordinary object pointers – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 6.8-8.9

Briefly, this error occurs when the Java Virtual Machine (JVM) running Elasticsearch is configured with a heap too large for the compressed ordinary object pointers (OOPs) feature: the JVM can only use compressed OOPs when the heap is below roughly 32GB, and beyond that threshold every object reference consumes more memory. To resolve this issue, either reduce the JVM heap size in the Elasticsearch JVM options so it stays below the threshold (26GB is a safe value on most systems), or disable the compressed OOPs feature by adding the JVM option -XX:-UseCompressedOops. Both methods may impact Elasticsearch performance, so it's important to monitor the system after making these changes.
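
As a minimal sketch of the first fix, assuming Elasticsearch 7.7 or later, where custom JVM options can be dropped into the jvm.options.d directory (on older versions, edit jvm.options directly; the file name heap.options below is illustrative):

    # /etc/elasticsearch/jvm.options.d/heap.options
    # Pin min and max heap to the same value, below the compressed OOPs
    # threshold of ~32GB; 26GB is a safe choice on most systems.
    -Xms26g
    -Xmx26g

After a restart, the startup log should report compressed ordinary object pointers [true]. You can also confirm it through the nodes info API:

    GET _nodes/_all/jvm?filter_path=nodes.*.jvm.using_compressed_ordinary_object_pointers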

This guide will help you check for common problems that cause the log "heap size [{}]; compressed ordinary object pointers [{}]" to appear. To understand the issues related to this log, read the explanation below.

Log Context

The log "heap size [{}]; compressed ordinary object pointers [{}]" is emitted from the class NodeEnvironment.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

    private void maybeLogHeapDetails() {
        JvmInfo jvmInfo = JvmInfo.jvmInfo();
        ByteSizeValue maxHeapSize = jvmInfo.getMem().getHeapMax();
        String useCompressedOops = jvmInfo.useCompressedOops();
        logger.info("heap size [{}]; compressed ordinary object pointers [{}]", maxHeapSize, useCompressedOops);
    }

    /**
     * scans the node paths and loads the existing metadata file. If not found, new metadata will be generated
     */
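
The same details can be reproduced outside Elasticsearch with standard JVM management APIs. Below is a minimal standalone sketch (not Elasticsearch code; the class name HeapDetails is ours) that reads the maximum heap size and the HotSpot UseCompressedOops flag:

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HeapDetails {
        public static void main(String[] args) {
            // Maximum heap size in bytes (-1 if undefined), analogous to
            // jvmInfo.getMem().getHeapMax() in the extract above.
            long maxHeapBytes = ManagementFactory.getMemoryMXBean()
                .getHeapMemoryUsage().getMax();

            // Read the UseCompressedOops flag via the HotSpot diagnostic MBean;
            // this is HotSpot-specific and may fail on other JVM implementations.
            HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            String useCompressedOops = hotspot.getVMOption("UseCompressedOops").getValue();

            System.out.printf(
                "heap size [%d bytes]; compressed ordinary object pointers [%s]%n",
                maxHeapBytes, useCompressedOops);
        }
    }

Running it with -Xmx31g and then -Xmx33g shows the flag flipping from true to false as the heap crosses the compressed OOPs threshold.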

 
