"unknown key [tagindex_v2] for create index" error when creating an index in Elasticsearch

Hi, I'm trying to copy some indices from one Elasticsearch instance to another. I'm trying to copy this index called "tagindex_v2". I've sent a request to http://localhost:9400/tagindex_v2 and copied the following JSON into the request body:

{
    "tagindex_v2": {
        "aliases": {},
        "mappings": {
            "properties": {
                "deprecated": {
                    "type": "boolean"
                },
                "description": {
                    "type": "keyword",
                    "normalizer": "keyword_normalizer",
                    "fields": {
                        "delimited": {
                            "type": "text",
                            "analyzer": "word_delimited"
                        },
                        "keyword": {
                            "type": "keyword"
                        }
                    }
                },
                "hasOwners": {
                    "type": "boolean"
                },
                "id": {
                    "type": "keyword",
                    "normalizer": "keyword_normalizer",
                    "fields": {
                        "delimited": {
                            "type": "text",
                            "analyzer": "word_delimited"
                        },
                        "keyword": {
                            "type": "keyword"
                        },
                        "ngram": {
                            "type": "text",
                            "analyzer": "partial"
                        }
                    }
                },
                "name": {
                    "type": "keyword",
                    "normalizer": "keyword_normalizer",
                    "fields": {
                        "delimited": {
                            "type": "text",
                            "analyzer": "word_delimited"
                        },
                        "keyword": {
                            "type": "keyword"
                        },
                        "ngram": {
                            "type": "text",
                            "analyzer": "partial"
                        }
                    }
                },
                "owners": {
                    "type": "text",
                    "fields": {
                        "keyword": {
                            "type": "keyword"
                        }
                    },
                    "analyzer": "urn_component"
                },
                "removed": {
                    "type": "boolean"
                },
                "urn": {
                    "type": "keyword"
                }
            }
        },
        "settings": {
            "index": {
                "max_ngram_diff": "17",
                "routing": {
                    "allocation": {
                        "include": {
                            "_tier_preference": "data_content"
                        }
                    }
                },
                "number_of_shards": "1",
                "provided_name": "tagindex_v2",
                "creation_date": "1660141415133",
                "analysis": {
                    "filter": {
                        "partial_filter": {
                            "type": "edge_ngram",
                            "min_gram": "3",
                            "max_gram": "20"
                        },
                        "custom_delimiter": {
                            "type": "word_delimiter",
                            "preserve_original": "true",
                            "split_on_numerics": "false"
                        },
                        "urn_stop_filter": {
                            "type": "stop",
                            "stopwords": [
                                "urn",
                                "li",
                                "container",
                                "datahubpolicy",
                                "datahubaccesstoken",
                                "datahubupgrade",
                                "corpgroup",
                                "dataprocess",
                                "mlfeaturetable",
                                "mlmodelgroup",
                                "datahubexecutionrequest",
                                "invitetoken",
                                "datajob",
                                "assertion",
                                "dataplatforminstance",
                                "schemafield",
                                "tag",
                                "glossaryterm",
                                "mlprimarykey",
                                "dashboard",
                                "notebook",
                                "mlmodeldeployment",
                                "datahubretention",
                                "dataplatform",
                                "corpuser",
                                "test",
                                "mlmodel",
                                "glossarynode",
                                "mlfeature",
                                "dataflow",
                                "datahubingestionsource",
                                "domain",
                                "telemetry",
                                "datahubsecret",
                                "dataset",
                                "chart",
                                "dataprocessinstance"
                            ]
                        }
                    },
                    "normalizer": {
                        "keyword_normalizer": {
                            "filter": [
                                "lowercase",
                                "asciifolding"
                            ]
                        }
                    },
                    "analyzer": {
                        "browse_path_hierarchy": {
                            "tokenizer": "path_hierarchy"
                        },
                        "slash_pattern": {
                            "filter": [
                                "lowercase"
                            ],
                            "tokenizer": "slash_tokenizer"
                        },
                        "partial_urn_component": {
                            "filter": [
                                "lowercase",
                                "urn_stop_filter",
                                "custom_delimiter",
                                "partial_filter"
                            ],
                            "tokenizer": "urn_char_group"
                        },
                        "word_delimited": {
                            "filter": [
                                "custom_delimiter",
                                "lowercase",
                                "stop"
                            ],
                            "tokenizer": "main_tokenizer"
                        },
                        "partial": {
                            "filter": [
                                "custom_delimiter",
                                "lowercase",
                                "partial_filter"
                            ],
                            "tokenizer": "main_tokenizer"
                        },
                        "urn_component": {
                            "filter": [
                                "lowercase",
                                "urn_stop_filter",
                                "custom_delimiter"
                            ],
                            "tokenizer": "urn_char_group"
                        },
                        "custom_keyword": {
                            "filter": [
                                "lowercase",
                                "asciifolding"
                            ],
                            "tokenizer": "keyword"
                        }
                    },
                    "tokenizer": {
                        "main_tokenizer": {
                            "pattern": "[ ./]",
                            "type": "pattern"
                        },
                        "slash_tokenizer": {
                            "pattern": "[/]",
                            "type": "pattern"
                        },
                        "urn_char_group": {
                            "pattern": "[:\\s(),]",
                            "type": "pattern"
                        }
                    }
                },
                "number_of_replicas": "1",
                "uuid": "AoFgpzTXRHyyTL7cuLsS1A",
                "version": {
                    "created": "7160299"
                }
            }
        }
    }
}

I get this error:

{
    "error": {
        "root_cause": [
            {
                "type": "parse_exception",
                "reason": "unknown key [tagindex_v2] for create index"
            }
        ],
        "type": "parse_exception",
        "reason": "unknown key [tagindex_v2] for create index"
    },
    "status": 400
}

Can someone please tell me how I should fix this? I'm simply copying the JSON of an existing index from a different instance and creating it here.

Thanks in advance.

You need to take only what's located inside the tagindex_v2 key, i.e.

{
    "aliases": {},
    "mappings": {
        "properties": {
            "deprecated": {
                "type": "boolean"
            },
            ...
        }
    },
    "settings": {
        "index": {
            "max_ngram_diff": "17",
            ...
        }
    }
}

You also need to remove the following properties from the settings section, because Elasticsearch generates these internal values itself and rejects them in a create-index request:

  • provided_name
  • creation_date
  • uuid
  • version
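If you're scripting the copy, the two fixes above (unwrap the index-name key, drop the internal settings) can be sketched like this. This is just an illustration: `source_response` stands in for the abbreviated JSON you copied from the source instance, and `to_create_body` is a hypothetical helper name.

```python
import json

# Abbreviated stand-in for the response of GET /tagindex_v2 on the source
# cluster; in practice, load the full JSON you copied.
source_response = {
    "tagindex_v2": {
        "aliases": {},
        "mappings": {"properties": {"deprecated": {"type": "boolean"}}},
        "settings": {
            "index": {
                "max_ngram_diff": "17",
                "number_of_shards": "1",
                "provided_name": "tagindex_v2",
                "creation_date": "1660141415133",
                "number_of_replicas": "1",
                "uuid": "AoFgpzTXRHyyTL7cuLsS1A",
                "version": {"created": "7160299"},
            }
        },
    }
}

def to_create_body(get_index_response: dict, index_name: str) -> dict:
    """Unwrap the index-name key and drop the read-only settings that
    Elasticsearch generates itself and rejects on index creation."""
    body = get_index_response[index_name]
    index_settings = body.get("settings", {}).get("index", {})
    for key in ("provided_name", "creation_date", "uuid", "version"):
        index_settings.pop(key, None)
    return body

create_body = to_create_body(source_response, "tagindex_v2")
print(json.dumps(create_body, indent=2))
```

The printed JSON is what should go in the body of the PUT request to http://localhost:9400/tagindex_v2 on the target instance.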
