Briefly, this error occurs when Elasticsearch tries to access a file that is already in use or locked by another process. This could be due to multiple instances of Elasticsearch running simultaneously or a sudden shutdown of the system. To resolve this issue, you can try the following: 1) Ensure only one instance of Elasticsearch is running at a time. 2) Restart the Elasticsearch service. 3) If the error persists, you may need to delete the lock file manually from the data directory, but be cautious as this could lead to data loss.
We recommend running the Elasticsearch Error Check-Up, which can help resolve the issues that cause many errors.
Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyses Elasticsearch to pinpoint the cause of many errors and provides actionable recommendations on how to resolve them (a free tool that requires no installation).
Overview:
Elasticsearch stores its data (shards, cluster state, etc.) on the file system and uses the path.data setting to determine its data store location. There can be more than one Elasticsearch installation on the same node (host), and this error is thrown at startup time as part of the bootstrap checks to make sure there is no data corruption.
Potential causes for this error:
- Multiple Elasticsearch installations using the same data location.
- The node.max_local_storage_nodes setting is not set properly when multiple Elasticsearch installations share the same data location (see the configuration sketch after this list).
- An orphaned Elasticsearch process that uses the same data location is already running when you try to start a new process.
- Elasticsearch doesn’t have write access to the data folder.
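For reference, both the data location and the number of nodes allowed to share it are controlled from elasticsearch.yml. The following is a minimal, hypothetical sketch; the path is a placeholder and the exact values depend on your installation:

    path.data: /var/lib/elasticsearch-node1     # each installation should normally point at its own data directory
    node.max_local_storage_nodes: 2             # development only: allow two nodes to share one data path (deprecated in 7.x, removed in 8.0)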
Exception message:
maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])
Troubleshooting steps:
- If you do not have multiple installations on the same machine, check whether an Elasticsearch process is already running, for example with ps aux | grep elastic on a *nix-based system, and kill the process if it is not needed (see the command sketch after this list).
- To allow more than one node to share the same data path (e.g., on your development machine; not recommended in production), set node.max_local_storage_nodes to a positive integer according to your requirements, as shown in the configuration sketch above. You can find more information in the official documentation.
- Ensure the Elasticsearch process has write permission for the path.data location.
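The following is a rough command sketch, on a *nix system, for the first and third steps above. The data path /var/lib/elasticsearch and the elasticsearch user and group are assumptions; substitute the values from your own path.data and service configuration:

    # Check whether an Elasticsearch process is already running, and stop it if it is not needed
    ps aux | grep elastic
    kill <pid>                           # replace <pid> with the process id printed above

    # Locate the node lock file and inspect ownership/permissions of the data directory
    find /var/lib/elasticsearch -name node.lock
    ls -ld /var/lib/elasticsearch

    # If the directory is not writable by the Elasticsearch user, fix the ownership (user/group are assumed)
    sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch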
Disclaimer:
The node.max_local_storage_nodes setting is deprecated in 7.x and will be removed in version 8.0.
Log Context:
The log “lock assertion failed” is generated in the class NodeEnvironment.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
    if (closed.get()) return; // raced with close() - we lost
    for (Lock lock : locks) {
        try {
            lock.ensureValid();
        } catch (IOException e) {
            logger.warn("lock assertion failed", e);
            throw new IllegalStateException("environment is not locked", e);
        }
    }