Elasticsearch Aggregation

By Opster Team

Updated: Jan 28, 2024 | 4 min read

Aggregations in Elasticsearch

Definition

What is an Elasticsearch aggregation?

The aggregations framework is a powerful tool built into every Elasticsearch deployment. In Elasticsearch, an aggregation is a way of gathering and summarizing related data. The aggregation framework collects data from the documents that match a search request, which helps in building summaries of that data. With aggregations you can not only search your data, but also take it a step further and extract analytical information from it.

Aggregations are used all over the place in Kibana: dashboards, the APM app, the Machine Learning app and so on. Aggregations are also heavily used in common search use cases, such as an e-Commerce website. In those use cases, search results usually come with a set of filters that take into account only the result set of your search. The user is then given the option to filter even further by, for example, product category, color or price range. Those filter options usually come with a metric to give the user an idea of, for example, how many items per category their search results contain.

This kind of feature is only possible by using the aggregations framework.

Other examples of uses of the aggregations framework include the following:

  • Average load time of a website
  • Most valuable customers based on transaction volume
  • Histogram showing some metric (quantity, average, sum, …) for events that occurred in dynamically generated time periods
  • Quantity of products in each product category

The different types of aggregations are described below.

Types of aggregations

  • Bucket aggregations: Aggregations that group documents into buckets, also called bins, based on field values, ranges or other criteria in the document. When the aggregation is performed, each document is placed in the matching bucket(s). This way you can divide a set of invoices into several buckets, one for each customer, divide system logs into “error”, “warning” and “info”, or divide CPU performance data into hourly buckets. The output consists of a list of buckets, each with a key and a count of documents. Examples of bucket aggregations include the Histogram, Range, Terms, Filter(s), Geo Distance and IP Range aggregations.
  • Metric aggregations: Aggregations that calculate metrics, such as a sum or average, from field values. They mainly refer to mathematical calculations performed across a set of documents, usually based on the values of a numerical field present in the document, such as count, sum, min, max and average. Metrics may be calculated at the top level, but are often more useful as a sub-aggregation that computes values for each bucket of a bucket aggregation.
  • Pipeline aggregations: Aggregations that take their input from other aggregations instead of from documents or fields. These aggregations allow you to aggregate based on the result of another aggregation rather than on document sets. Typically a pipeline aggregation is used to compute something like the average number of documents per bucket, or to sort buckets based on a metric produced by a metric aggregation (see the sketch after this list).
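
As a quick illustration of the pipeline type, the sketch below assumes a hypothetical sales index with date and price fields: a date_histogram builds monthly buckets, a sum computes the sales in each month, and an avg_bucket pipeline aggregation then averages those monthly sums.

POST sales/_search
{
  "size": 0,
  "aggs": {
    "sales-per-month": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "month"
      },
      "aggs": {
        "monthly-sales": {
          "sum": {
            "field": "price"
          }
        }
      }
    },
    "avg-monthly-sales": {
      "avg_bucket": {
        "buckets_path": "sales-per-month>monthly-sales"
      }
    }
  }
}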

Aggregation syntax

You request the cluster to run aggregations by adding an aggregations (or aggs for short) parameter to your search request. You can ask for more than one aggregation per request, and you can even ask for sub-aggregations of a bucket aggregation. The following example shows a request that asks for the sum of the quantities of products, grouped by country.

In the example below, let’s say the use case is an e-Commerce website that acts as a marketplace, meaning it allows third-party vendors to advertise products on the website. We want to know how many units are in stock in each country, and we do that by summing the stock held by each third-party vendor, which gives us a global stock figure per country.

POST products/_search
{
  "size": 0,
  "aggs": {
    "by-country": {
      "terms": {
        "field": "country"
      },
      "aggs": {
        "stock": {
          "sum": {
            "field": "qty"
          }
        }
      }
    }
  }
}

Some things to notice in the example above:

  • You can use aggregations and aggs interchangeably. Every aggregation (or sub-aggregation) has a name (by-country and stock, in this case).
  • We have set the size of the results to 0, which means we don’t get any hits back in the response. That is common practice, and even recommended, when you only need the aggregation results.
  • In the example we only used the terms (bucket aggregation) and sum (metric aggregation) aggregation types, but the aggregations framework offers many more.
  • We made use of a sub-aggregation. Notice that the by-country aggregation creates buckets (groups) of results and the stock aggregation then produces a metric for each bucket. You can nest as many bucket aggregations as you want before finally (and optionally) running a metric aggregation on the innermost level. An example of what the response might look like is shown below.
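
For reference, a trimmed-down response to the request above might look roughly like the following (the country keys and counts here are purely illustrative). Note that hits is empty because size is 0, and each bucket carries a key, a doc_count and the result of the stock sub-aggregation:

{
  "hits": {
    "total": { "value": 215, "relation": "eq" },
    "hits": []
  },
  "aggregations": {
    "by-country": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        { "key": "france", "doc_count": 120, "stock": { "value": 3450.0 } },
        { "key": "germany", "doc_count": 95, "stock": { "value": 2780.0 } }
      ]
    }
  }
}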

Nesting aggregations

It is possible to nest aggregations inside one another (this has nothing to do with nested fields), so as to divide the buckets into sub-buckets, or to calculate metrics for each bucket. The aggregation below separates all exam results by the gender of the pupil and then calculates the average grade for each gender. The important thing to understand is that the second aggregation is calculated on the document set of each individual bucket rather than on the document set as a whole.

POST exam_results*/_search
{
  "size": 0,
  "aggs": {
    "genders": {
      "terms": {
        "field": "gender"
      },
      "aggs": {
        "avg_grade": {
          "avg": {
            "field": "grades"
          }
        }
      }
    }
  }
}

Aggregation performance

Aggregations are typically carried out in memory, and they access documents through a different structure than the inverted index used by a search query, so it is important to consider the performance implications when constructing your aggregations. The most important considerations are:

Number of buckets

This is controlled by the “size” parameter in a terms aggregation, or the “calendar_interval” in a date histogram. Bear in mind that where you have bucket aggregations nested at more than one level, the total number of buckets is multiplied at each level of aggregation, as illustrated in the sketch below.
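
For example, a request like the following sketch (the timestamp field name is an assumption) can produce up to 100 country buckets, each split into one bucket per day, so the total bucket count multiplies quickly:

POST products/_search
{
  "size": 0,
  "aggs": {
    "by-country": {
      "terms": {
        "field": "country",
        "size": 100
      },
      "aggs": {
        "per-day": {
          "date_histogram": {
            "field": "timestamp",
            "calendar_interval": "day"
          }
        }
      }
    }
  }
}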

Number of documents

When running an aggregation, it is preferable (if possible) to adjust the query so that your aggregation is only performed on the restricted set of documents that you are interested in, instead of using a match_all query. This reduces the memory required to run the aggregation.
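
As a sketch, restricting the earlier products example with a query (the product_line field and its value are hypothetical) means the aggregation only visits the matching documents rather than the whole index:

POST products/_search
{
  "size": 0,
  "query": {
    "term": {
      "product_line": "laptops"
    }
  },
  "aggs": {
    "by-country": {
      "terms": {
        "field": "country"
      },
      "aggs": {
        "stock": {
          "sum": {
            "field": "qty"
          }
        }
      }
    }
  }
}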

Fielddata

As a rule, aggregations should always be run on keyword-type fields, not analyzed text. It is possible to run them on analyzed text by using the mapping setting “fielddata”: true, but this is highly memory intensive and should be avoided if possible.
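
Instead of enabling fielddata, a common pattern is to map the text field with a keyword sub-field and aggregate on that. Below is a minimal sketch, assuming a hypothetical articles index with a category field:

PUT articles
{
  "mappings": {
    "properties": {
      "category": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      }
    }
  }
}

POST articles/_search
{
  "size": 0,
  "aggs": {
    "by-category": {
      "terms": {
        "field": "category.keyword"
      }
    }
  }
}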


Log errors related to this ES concept


  • Could not initialize aggregators
  • Failed to execute global aggregators
  • Failed to build aggregation aggregator name
  • No found for value
  • Trying to create too many buckets. Must be less than or equal to
  • Invalid aggregation order path path
  • Invalid aggregation name
  • Aggregation definition for aggregationName starts with a
  • Found two aggregation type definitions in
  • Expected XContentParser.Token START_OBJECT under
