Briefly, this error occurs when an operation is attempted on a non-primary shard in Elasticsearch. Primary shards hold the original data and non-primary (replica) shards hold copies of this data. Certain operations can only be performed on primary shards. To resolve this issue, you can either reroute the operation to the primary shard or promote the replica shard to primary. However, promoting a replica to primary should be done cautiously as it can lead to data inconsistency. Another solution could be to ensure that the cluster health is green, indicating all primary shards are active.
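A quick way to confirm the cluster state is the cluster health API; a minimal check (the filter_path parameter is optional and simply trims the response):

GET _cluster/health?filter_path=status,unassigned_shards

A status of green means every primary and replica shard is allocated; yellow means one or more replicas are unassigned.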
This guide will help you check for common problems that cause the log "actual shard is not a primary" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: shard, replication.
Overview
Data in an Elasticsearch index can grow to massive proportions. In order to keep it manageable, it is split into a number of shards. Each Elasticsearch shard is an Apache Lucene index, with each individual Lucene index containing a subset of the documents in the Elasticsearch index. Splitting indices in this way keeps resource usage under control. An Apache Lucene index has a limit of 2,147,483,519 documents.
Examples
The number of shards is set when an index is created, and this number cannot be changed later without reindexing the data. When creating an index, you can set the number of shards and replicas as properties of the index using:
PUT /sensor
{
  "settings" : {
    "index" : {
      "number_of_shards" : 6,
      "number_of_replicas" : 2
    }
  }
}
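To verify that the index was created with these settings, you can read them back; this is a routine check using the get index settings API:

GET /sensor/_settings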
The ideal number of shards should be determined based on the amount of data in an index. Generally, an optimal shard should hold 30-50GB of data. For example, if you expect to accumulate around 300GB of application logs in a day, having around 10 shards in that index would be reasonable.
During their lifetime, shards can go through a number of states, including:
- Initializing: An initial state before the shard can be used.
- Started: A state in which the shard is active and can receive requests.
- Relocating: A state that occurs when shards are in the process of being moved to a different node. This may be necessary under certain conditions, such as when the node they are on is running out of disk space.
- Unassigned: The state of a shard that has failed to be assigned, together with a reason code. For example, the node hosting the shard may have left the cluster (NODE_LEFT), or the shard may have been restored into a closed index (EXISTING_INDEX_RESTORED).
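To find out why a particular shard is unassigned, the cluster allocation explain API returns the reason code together with a per-node explanation. A minimal sketch, reusing the sensor index from the example above (the shard number is illustrative):

GET _cluster/allocation/explain
{
  "index": "sensor",
  "shard": 0,
  "primary": false
}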
In order to view all shards, their states, and other metadata, use the following request:
GET _cat/shards
To view shards for a specific index, append the name of the index to the URL, for example, for the sensor index:
GET _cat/shards/sensor
This command produces output, such as in the following example. By default, the columns shown include the name of the index, the name (i.e. number) of the shard, whether it is a primary shard or a replica, its state, the number of documents, the size on disk, the IP address, and the node ID.
sensor 5 p STARTED    0 283b  127.0.0.1 ziap
sensor 5 r UNASSIGNED
sensor 2 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 2 r UNASSIGNED
sensor 3 p STARTED    3 7.2kb 127.0.0.1 ziap
sensor 3 r UNASSIGNED
sensor 1 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 1 r UNASSIGNED
sensor 4 p STARTED    2 3.8kb 127.0.0.1 ziap
sensor 4 r UNASSIGNED
sensor 0 p STARTED    0 283b  127.0.0.1 ziap
sensor 0 r UNASSIGNED
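The same endpoint can print a header row and a custom column list; for example, adding the unassigned.reason column surfaces the reason codes described earlier (column names follow the _cat/shards documentation):

GET _cat/shards/sensor?v&h=index,shard,prirep,state,docs,store,node,unassigned.reason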
Notes and good things to know
- Having shards that are too large is simply inefficient. Moving huge indices across machines is both time- and labor-intensive. First, Lucene merges take longer to complete and require greater resources. Moreover, moving the shards across the nodes for rebalancing also takes longer, and recovery time is extended. Thus, by splitting the data and spreading it across a number of machines, it can be kept in manageable chunks, minimizing risks.
- Having the right number of shards is important for performance. It is thus wise to plan in advance. When queries are run across different shards in parallel, they execute faster than an index composed of a single shard, but only if each shard is located on a different node and there are sufficient nodes in the cluster. At the same time, however, shards consume memory and disk space, both in terms of indexed data and cluster metadata. Having too many shards can slow down queries, indexing requests, and management operations, and so maintaining the right balance is critical.
Overview
Replication refers to storing a redundant copy of the data. Starting from version 7.x, Elasticsearch creates indices with one primary shard and one replica by default. A replica is never allocated to the same node as its primary shard, which means you need at least two nodes in the cluster for replicas to be assigned. If a primary shard goes down, a replica is automatically promoted to primary.
What it is used for
Replicas are used to provide high availability and failover. A higher number of replicas can also speed up searches, since search requests can be served by replicas in parallel.
Examples
Update replica count
PUT /api-logs/_settings?pretty
{
  "index" : {
    "number_of_replicas" : 2
  }
}
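After applying the change, you can confirm that the new replicas exist and see where they were allocated (api-logs is the example index used above):

GET _cat/shards/api-logs?v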
Common problems
- By default, Elasticsearch will not allocate new replica shards to a node whose disk usage exceeds 85% (the low disk watermark); instead, it logs a warning (see the sketch after this list).
- Creating too many replicas may cause a problem if there are not enough resources available in the cluster.
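The 85% threshold mentioned above is the low disk watermark, which is adjustable through the cluster settings API. A minimal sketch (the 90% value is illustrative; raising watermarks on genuinely full disks only postpones the problem):

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%"
  }
}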
Log Context
The log "actual shard is not a primary" is emitted from the class TransportReplicationAction.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:
final ShardRouting shardRouting = indexShard.routingEntry();
// we may end up here if the cluster state used to route the primary is so stale that the underlying
// index shard was replaced with a replica. For example - in a two node cluster, if the primary fails
// the replica will take over and a replica will be assigned to the first node.
if (shardRouting.primary() == false) {
    throw new ReplicationOperation.RetryOnPrimaryException(shardId,
        "actual shard is not a primary " + shardRouting);
}
final String actualAllocationId = shardRouting.allocationId().getId();
if (actualAllocationId.equals(primaryRequest.getTargetAllocationID()) == false) {
    throw new ShardNotFoundException(
        shardId,