
Range Filters on doc_count on a Terms Aggregation

{
    "size": 0,
    "aggs": {
        "categories_agg": {
            "terms": {
                "field": "categories",
                "order": {
                    "_count": "desc"
                }
            }
        }
    }
}

To get aggregations on a particular field, I used the query given above. It works fine and gives a result like:

{
  "took": 10,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 77445,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "categories_agg": {
      "doc_count_error_upper_bound": 794,
      "sum_other_doc_count": 148316,
      "buckets": [
        {
          "key": "Restaurants",
          "doc_count": 25071
        },
        {
          "key": "Shopping",
          "doc_count": 11233
        },
        {
          "key": "Food",
          "doc_count": 9250
        },
        {
          "key": "Beauty & Spas",
          "doc_count": 6583
        },
        {
          "key": "Health & Medical",
          "doc_count": 5121
        },
        {
          "key": "Nightlife",
          "doc_count": 5088
        },
        {
          "key": "Home Services",
          "doc_count": 4785
        },
        {
          "key": "Bars",
          "doc_count": 4328
        },
        {
          "key": "Automotive",
          "doc_count": 4208
        },
        {
          "key": "Local Services",
          "doc_count": 3468
        }
      ]
    }
  }
}

Is there a way to filter the aggregation so that I only get the buckets whose doc_count falls within a particular range?

e.g. applying a range filter on doc_count with a max of 25000 and a min of 5000 should give me:

{
  "took": 10,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 77445,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "categories_agg": {
      "doc_count_error_upper_bound": 794,
      "sum_other_doc_count": 148316,
      "buckets": [
        {
          "key": "Shopping",
          "doc_count": 11233
        },
        {
          "key": "Food",
          "doc_count": 9250
        },
        {
          "key": "Beauty & Spas",
          "doc_count": 6583
        },
        {
          "key": "Health & Medical",
          "doc_count": 5121
        },
        {
          "key": "Nightlife",
          "doc_count": 5088
        }
      ]
    }
  }
}

I solved the problem with a bucket_selector pipeline aggregation. You can filter on the count in the script:

```
"aggs": {
    "categories_agg": {
      "terms": {
        "field": "cel_num",
        "size": 5000,
        "min_doc_count":1
      },
      "aggs": {
        "count_bucket_selector": {
          "bucket_selector": {
            "buckets_path": {
              "count": "_count"
            },
            "script": {
              "lang":"expression",
              "inline": "count>5000 && count <10000"
            }
          }
        }
      }
    }
  }
```
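To match the range in the original question (doc_count between 5000 and 25000 on the categories field), the same bucket_selector approach can be adapted. This is a minimal sketch, not the answer's exact query: it assumes a recent Elasticsearch version where the default Painless script language exposes buckets_path values via params, and the aggregation name doc_count_range is made up for illustration:

```
{
  "size": 0,
  "aggs": {
    "categories_agg": {
      "terms": {
        "field": "categories",
        "order": { "_count": "desc" }
      },
      "aggs": {
        "doc_count_range": {
          "bucket_selector": {
            "buckets_path": { "count": "_count" },
            "script": "params.count >= 5000 && params.count <= 25000"
          }
        }
      }
    }
  }
}
```

Note that bucket_selector only filters the buckets the terms aggregation has already returned, so you may need to raise the size on the terms aggregation (as the answer above does) to avoid missing buckets that would otherwise fall outside the top N.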

A simple solution for filtering by minimum doc_count (only), from Elasticsearch filter aggregations on minimal doc count. To save you looking it up:

 aggs: {
    field1: {
        terms: {
            field: 'field1',
            min_doc_count: 1000
        }
    }
}
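Applied to the question's aggregation, a minimal sketch would be the following (the field name and the 5000 lower bound are taken from the question; min_doc_count only enforces a minimum, not a maximum):

```
{
  "size": 0,
  "aggs": {
    "categories_agg": {
      "terms": {
        "field": "categories",
        "min_doc_count": 5000,
        "order": { "_count": "desc" }
      }
    }
  }
}
```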
