
Can the value of the enrich.fetch_size Elasticsearch parameter be increased somehow?

enrich.fetch_size - Maximum batch size when reindexing a source index into an enrich index. Defaults to 10000.

When the value in elasticsearch.yml is changed to, e.g., 20000, the following error appears when the enrich policy is executed:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "Batch size is too large, size must be less than or equal to: [10000] but was [20000]. Scroll batch sizes cost as much memory as result windows so they are controlled by the [index.max_result_window] index level setting."
      }
    ],
    "type" : "search_phase_execution_exception",
    "reason" : "Partial shards failure",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [
      {
        "shard" : 0,
        "index" : "name-of-index",
        "node" : "node-id",
        "reason" : {
          "type" : "illegal_argument_exception",
          "reason" : "Batch size is too large, size must be less than or equal to: [10000] but was [20000]. Scroll batch sizes cost as much memory as result windows so they are controlled by the [index.max_result_window] index level setting."
        }
      }
    ]
  },
  "status" : 400
}

config file:

...
discovery:
  seed_hosts:
    - "127.0.0.1"
    - "[::1]"
    - elasticsearch

script:
  context:
    template:
      max_compilations_rate: 400/5m
      cache_max_size: 400

enrich:
  fetch_size: 20000
...

This is a pretty common mistake: I think you have not restarted your Elasticsearch server, so the new changes in elasticsearch.yml have not been loaded.

If it is not resolved after a restart, then share your config file. I will have to take a look at it.
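
If a restart does not help, the error message itself says the scroll batch size is capped by the index.max_result_window setting of the source index, so that setting would also need to be at least as large as enrich.fetch_size. Below is a minimal sketch, assuming an unsecured cluster on localhost:9200 and using the source index name name-of-index from the error above, of raising it before re-running the policy:

    import requests

    ES = "http://localhost:9200"   # assumed local, unsecured cluster
    INDEX = "name-of-index"        # source index name taken from the error above

    # Raise index.max_result_window so it is >= enrich.fetch_size (20000 here).
    resp = requests.put(
        f"{ES}/{INDEX}/_settings",
        json={"index": {"max_result_window": 20000}},
    )
    resp.raise_for_status()
    print(resp.json())  # expect {"acknowledged": true}

Keep in mind that max_result_window is low by default because, as the error notes, scroll batch sizes cost as much memory as result windows on every shard, so leaving enrich.fetch_size at 10000 may be the safer option.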
