Briefly, this error occurs when the hash of a consistent secure setting published in the Elasticsearch cluster state differs from the hash computed locally from the node's own keystore. The discrepancy is usually caused by changing a secure setting on some nodes but not others, or by a node joining the cluster with different keystore contents. To resolve it, make sure every node's Elasticsearch keystore holds the same value for the affected secure setting, then restart the nodes (or reload secure settings where the setting supports it). Also, check for any remaining inconsistencies in your cluster state and rectify them.
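As a first check, you can compare what each node's keystore contains and, for settings that are reloadable, push updated values without a full restart. A minimal sketch, assuming a default installation layout:
bin/elasticsearch-keystore list
POST /_nodes/reload_secure_settings
Run the keystore listing on every node and confirm that the same secure settings are present everywhere; the reload API only helps for settings documented as reloadable, otherwise a node restart is required.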
This guide will help you check for common problems that cause the log “the published hash [{}] of the consistent secure setting [{}] differs from the locally computed one [{}]” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concept: settings.
Settings in Elasticsearch
In Elasticsearch, you can configure cluster-level settings, node-level settings, and index-level settings. Here is a quick rundown of each level.
A. Cluster settings
These settings can either be:
- Persistent, meaning they apply across restarts, or
- Transient, meaning they won’t survive a full cluster restart.
If a transient setting is reset, the first one of these values that is defined is applied (see the example after this list):
- The persistent setting
- The setting in the configuration file
- The default value
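A transient setting is reset by assigning it null through the Cluster Settings API; for example, to reset the recovery throttle used in the examples further below:
PUT /_cluster/settings
{
  "transient": {
    "indices.recovery.max_bytes_per_sec": null
  }
}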
The order of precedence for cluster settings is:
- Transient cluster settings
- Persistent cluster settings
- Settings in the elasticsearch.yml configuration file
Examples
An example of persistent cluster settings update:
PUT /_cluster/settings
{
  "persistent": {
    "indices.recovery.max_bytes_per_sec": "500mb"
  }
}
An example of a transient update:
PUT /_cluster/settings
{
  "transient": {
    "indices.recovery.max_bytes_per_sec": "40mb"
  }
}
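To check which values are currently applied at the persistent and transient levels, you can retrieve the cluster settings (the flat_settings parameter is optional and only flattens the keys in the response):
GET /_cluster/settings?flat_settings=true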
B. Index settings
These are the settings that are applied to individual indices. There is a dedicated API for updating index-level settings.
Examples
The following API call will set the number of replica shards to 5 for the my_index index.
PUT /my_index/_settings
{
  "index": {
    "number_of_replicas": 5
  }
}
To revert a setting to the default value, use null.
PUT /my_index/_settings
{
  "index": {
    "refresh_interval": null
  }
}
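To verify the result, you can read the settings back; adding include_defaults also shows values that were never set explicitly:
GET /my_index/_settings
GET /my_index/_settings?include_defaults=true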
C. Node settings
These settings apply to individual nodes. Nodes can fulfill different roles, including the master, data, and coordinating roles. Node settings are configured through the elasticsearch.yml file on each node.
Examples
Setting a node to be a data node (in the elasticsearch.yml file):
node.data: true
Disabling the ingest role for the node (which is enabled by default):
node.ingest: false
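Note that node.data and node.ingest are the legacy role flags; on recent Elasticsearch versions (7.9 and later) node roles are usually declared with a single node.roles list in elasticsearch.yml instead, for example:
node.roles: [ data, master ]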
For production clusters, it is recommended to run each node role on dedicated machines, with at least two instances of each role for high availability (and at least three master-eligible nodes).
Notes and good things to know
- Learning more about cluster settings and index settings is important, as it can spare you a lot of trouble. For example, if you are about to ingest a huge amount of data into an index and the number of replica shards is set to, say, 5, indexing will be very slow because the data is replicated while it is being indexed. To speed up indexing, you can set the number of replica shards to 0 via the settings API and restore the original value once indexing is done, as shown in the sketch below.
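A minimal sketch of that workflow, assuming an index named my_index whose original replica count is 1:
PUT /my_index/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}
After the bulk indexing completes, restore the original value:
PUT /my_index/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}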
- Another useful example of using cluster-level settings is when a node has just joined the cluster but the cluster is not assigning any shards to it. Although shard allocation is enabled by default on all nodes, someone may have disabled it at some point (for example, in order to perform a rolling restart) and forgotten to re-enable it later. To enable shard allocation, you can use the Cluster Settings API:
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}
- It’s better to set cluster-wide settings with the Cluster Settings API rather than in the elasticsearch.yml file, and to use the file only for settings that are local to a particular node. The API keeps the setting identical on all nodes, whereas if you accidentally define different values on different nodes via elasticsearch.yml, the discrepancies are hard to notice. One way to compare what each node has actually loaded from its configuration is shown below.
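For example, the nodes info API returns the settings each node has loaded, which makes it easier to spot values that differ between nodes:
GET /_nodes/settings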
- See also: Recovery
Log Context
Log “the published hash [{}] of the consistent secure setting [{}] differs from the locally computed one [{}]” is emitted from the class ConsistentSettingsService.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
final String publishedSalt = parts[0];
final String publishedHash = parts[1];
final byte[] computedSaltedHashBytes = computeSaltedPBKDF2Hash(localHash, publishedSalt.getBytes(StandardCharsets.UTF_8));
final String computedSaltedHash = new String(Base64.getEncoder().encode(computedSaltedHashBytes), StandardCharsets.UTF_8);
if (false == publishedHash.equals(computedSaltedHash)) {
    logger.warn(
        "the published hash [{}] of the consistent secure setting [{}] differs from the locally computed one [{}]",
        publishedHash,
        concreteSecureSetting.getKey(),
        computedSaltedHash);
    if (state.nodes().isLocalNodeElectedMaster()) {
        throw new IllegalStateException(
            "Master node cannot validate consistent setting. The published hash ["
                + publishedHash
                + "] of the consistent secure setting ["
                + concreteSecureSetting.getKey()
                + "] differs from the locally computed one ["
                + computedSaltedHash
                + "].");