
Elasticsearch Analyzer first 4 and last 4 characters

With Elasticsearch, I would like to specify a search analyzer where the first 4 characters and last 4 characters are tokenized.

For example: supercalifragilisticexpialidocious => ["supe", "ious"]

I have had a go with an `ngram` tokenizer, as follows:

PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "ngram",
          "min_gram": 4,
          "max_gram": 4
        }
      }
    }
  }
}

I am testing the analyzer as follows:

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "supercalifragilisticexpialidocious."
}

I get back `supe`, then loads of tokens I don't want, and finally `ous.`. The problem for me is: how can I keep only the first and last tokens produced by the ngram tokenizer specified above?

{
  "tokens": [
    {
      "token": "supe",
      "start_offset": 0,
      "end_offset": 4,
      "type": "word",
      "position": 0
    },
    {
      "token": "uper",
      "start_offset": 1,
      "end_offset": 5,
      "type": "word",
      "position": 1
    },
...
    {
      "token": "ciou",
      "start_offset": 29,
      "end_offset": 33,
      "type": "word",
      "position": 29
    },
    {
      "token": "ious",
      "start_offset": 30,
      "end_offset": 34,
      "type": "word",
      "position": 30
    },
    {
      "token": "ous.",
      "start_offset": 31,
      "end_offset": 35,
      "type": "word",
      "position": 31
    }
  ]
}
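The flood of unwanted tokens is inherent to how an ngram tokenizer with `min_gram` = `max_gram` = 4 works: it slides a 4-character window across the whole input. A minimal Python sketch of that behavior (the `four_grams` helper is hypothetical, not an Elasticsearch API):

```python
# Hypothetical sketch of an ngram tokenizer with min_gram = max_gram = 4:
# a sliding 4-character window over the input string.
def four_grams(text):
    return [text[i:i + 4] for i in range(len(text) - 3)]

tokens = four_grams("supercalifragilisticexpialidocious.")
print(tokens[0])    # → supe
print(tokens[-1])   # → ous.  (the trailing period is part of the last window)
print(len(tokens))  # → 32 tokens for a 35-character input
```

So for an input of length n you get n − 3 tokens, and the tokenizer itself offers no way to keep only the first and last ones.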

One way to achieve this is to leverage the `pattern_capture` token filter, whose capture groups extract the first 4 and last 4 characters of the (single) token emitted by the `keyword` tokenizer.

First, define your index like this:

PUT my_index
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "my_analyzer": {
            "type": "custom",
            "tokenizer": "keyword",
            "filter": [
              "lowercase",
              "first_last_four"
            ]
          }
        },
        "filter": {
          "first_last_four": {
            "type": "pattern_capture",
            "preserve_original": false,
            "patterns": [
              """(\w{4}).*(\w{4})"""
            ]
          }
        }
      }
    }
  }
}
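To see why the pattern `(\w{4}).*(\w{4})` yields exactly those two tokens, here is a rough Python sketch of what the filter does with it (the `first_last_four` helper is illustrative, not Elasticsearch code; the no-match fallback to the original token is my assumption about the filter's behavior for inputs shorter than 8 word characters, so verify it against your Elasticsearch version):

```python
import re

# The same pattern used in the first_last_four filter above. The greedy
# .* pushes the second group to the last 4 word characters of the token.
pattern = re.compile(r"(\w{4}).*(\w{4})")

def first_last_four(token):
    # The lowercase filter runs before pattern_capture in the analyzer chain.
    m = pattern.search(token.lower())
    # Assumption: when the pattern does not match, the original token
    # passes through unchanged.
    return list(m.groups()) if m else [token.lower()]

print(first_last_four("supercalifragilisticexpialidocious"))
# → ['supe', 'ious']
```

Note that `preserve_original` is set to `false` so the full 34-character token is not also indexed alongside the two captures.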

Then, you can test your new custom analyzer:

POST my_index/_analyze
{
  "text": "supercalifragilisticexpialidocious",
  "analyzer": "my_analyzer"
}

And see that the tokens you expect are there:

{
  "tokens" : [
    {
      "token" : "supe",
      "start_offset" : 0,
      "end_offset" : 34,
      "type" : "word",
      "position" : 0
    },
    {
      "token" : "ious",
      "start_offset" : 0,
      "end_offset" : 34,
      "type" : "word",
      "position" : 0
    }
  ]
}
