“By setting max_primary_shard_size, the target index will contain [x] shards” – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 7.12-7.15

Briefly, this message is logged when the shrink (resize) API is called with a max_primary_shard_size that is small relative to the source index’s total store size, so the computed target shard count would exceed the number of primary shards in the source index. It is informational rather than fatal: Elasticsearch simply caps the target index at the source index’s shard count and proceeds. To avoid it, raise max_primary_shard_size so that fewer target shards are needed, or set the target shard count explicitly via index.number_of_shards instead of using max_primary_shard_size.
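As an illustrative sketch (the index names here are hypothetical), a shrink request that sets max_primary_shard_size looks like the following. Elasticsearch then derives the target shard count from the source index’s store size; note that this parameter cannot be combined with an explicit index.number_of_shards setting in the same request:

    POST /my-source-index/_shrink/my-target-index
    {
      "max_primary_shard_size": "50gb"
    }

With a request like this, Elasticsearch computes the minimum number of target shards needed so that no primary shard exceeds 50gb; if that number is greater than the source index’s shard count, the log above is emitted and the source shard count is used instead.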

This guide will help you check for common problems that cause the log “By setting max_primary_shard_size to [{}], the target index [{}] will contain [{}] shards” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: index, indices, admin.

Log Context

The log “By setting max_primary_shard_size to [{}], the target index [{}] will contain [{}] shards” is emitted from the class TransportResizeAction.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:

                    long minShardsNum = sourceIndexStorageBytes / maxPrimaryShardSizeBytes;
                    if (minShardsNum * maxPrimaryShardSizeBytes < sourceIndexStorageBytes) {
                        minShardsNum = minShardsNum + 1;
                    }
                    if (minShardsNum > sourceIndexShardsNum) {
                        logger.info("By setting max_primary_shard_size to [{}], the target index [{}] will contain [{}] shards," +
                                " which will be greater than [{}] shards in the source index [{}]," +
                                " using [{}] for the shard count of the target index [{}]",
                            maxPrimaryShardSize.toString(), targetIndexName, minShardsNum, sourceIndexShardsNum,
                            sourceMetadata.getIndex().getName(), sourceIndexShardsNum, targetIndexName);
                        numShards = sourceIndexShardsNum;
                    }
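To make the arithmetic concrete, here is a minimal, self-contained sketch of the same check with made-up numbers (a ~300gb source store, 10 source primary shards, and max_primary_shard_size=25gb — these values are illustrative assumptions, not taken from any real cluster):

    public class ShardCountSketch {
        public static void main(String[] args) {
            long gb = 1024L * 1024 * 1024;
            long sourceIndexStorageBytes = 300 * gb;  // total primary store of the source index
            long maxPrimaryShardSizeBytes = 25 * gb;  // requested max_primary_shard_size
            long sourceIndexShardsNum = 10;           // primary shards in the source index

            long minShardsNum = sourceIndexStorageBytes / maxPrimaryShardSizeBytes; // 300 / 25 = 12
            if (minShardsNum * maxPrimaryShardSizeBytes < sourceIndexStorageBytes) {
                minShardsNum += 1; // round up when the division leaves a remainder
            }

            long numShards = minShardsNum;
            if (minShardsNum > sourceIndexShardsNum) {
                // The condition that triggers the log: 12 > 10, so the source
                // shard count (10) is used for the target index instead.
                numShards = sourceIndexShardsNum;
            }
            System.out.println("target shard count: " + numShards); // prints 10
        }
    }

In this example the message would report that the target index would contain 12 shards, more than the 10 shards in the source index, and 10 would be used as the target shard count.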

 
