
python elasticsearch bulk index datatype

I am using the following code to create an index and load data into Elasticsearch:

from elasticsearch import helpers, Elasticsearch
import csv
es = Elasticsearch('localhost:9200')
index_name='wordcloud_data'
with open('./csv-data/' + index_name +'.csv') as f:
    reader = csv.DictReader(f)
    helpers.bulk(es, reader, index=index_name, doc_type='my-type')

print("done")

My CSV data is as follows:

date,word_data,word_count
2017-06-17,luxury vehicle,11
2017-06-17,signifies acceptance,17
2017-06-17,agency imposed,16
2017-06-17,customer appreciation,11

The data loads fine, but the datatype is not accurate. How do I force it to say that word_count is an integer and not text? See how it figures out the date type? Is there a way it can figure out the int datatype automatically, or by passing some parameter?

Also, what do I do to increase ignore_above, or remove it for some of the fields if I wanted to? Basically, no limit on the number of characters?

{
  "wordcloud_data" : {
    "mappings" : {
      "my-type" : {
        "properties" : {
          "date" : {
            "type" : "date"
          },
          "word_count" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          },
          "word_data" : {
            "type" : "text",
            "fields" : {
              "keyword" : {
                "type" : "keyword",
                "ignore_above" : 256
              }
            }
          }
        }
      }
    }
  }
}

You need to create a mapping that describes the field types.

With the elasticsearch-py client this can be done using the es.indices.put_mapping or es.indices.create methods, by passing them a JSON document that describes the mappings, as shown in this SO answer. It would be something like this:

es.indices.put_mapping(
    index="wordcloud_data",
    doc_type="my-type",
    body={
        "properties": {  
            "date": {"type":"date"},
            "word_data": {"type": "text"},
            "word_count": {"type": "integer"}
        }
    }
)
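
Note that Elasticsearch cannot change the type of a field that already exists, so if wordcloud_data was already created with word_count as text, the index has to be deleted and recreated (or the data reindexed). The sketch below (untested, reusing the index and type names from the question) creates the index with explicit mappings via es.indices.create. It also turns on numeric_detection, which makes dynamic mapping coerce numeric strings like "11" automatically (date detection, which is on by default, is why your date column was picked up), and shows how to raise or drop ignore_above, since omitting it effectively removes the character limit:

# Untested sketch: recreate the index with explicit mappings.
# Existing field types cannot be changed in place, so the old
# index is dropped first -- this deletes its data.
es.indices.delete(index="wordcloud_data", ignore=[404])
es.indices.create(
    index="wordcloud_data",
    body={
        "mappings": {
            "my-type": {
                # Off by default; makes dynamic mapping coerce numeric
                # strings such as "11" into numbers for any fields not
                # listed explicitly below.
                "numeric_detection": True,
                "properties": {
                    "date": {"type": "date"},
                    "word_count": {"type": "integer"},
                    "word_data": {
                        "type": "text",
                        "fields": {
                            "keyword": {
                                "type": "keyword",
                                # Raise the limit, or omit ignore_above
                                # entirely for no character cap.
                                "ignore_above": 1024
                            }
                        }
                    }
                }
            }
        }
    }
)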

However, I'd suggest taking a look at the elasticsearch-dsl package, which provides a much nicer declarative API for describing things. It would be something along these lines (untested):

import csv

from elasticsearch_dsl import DocType, Date, Integer, Text
from elasticsearch_dsl.connections import connections
from elasticsearch.helpers import bulk

connections.create_connection(hosts=["localhost"])

class WordCloud(DocType):
    word_data = Text()
    word_count = Integer()
    date = Date()

    class Index:
        name = "wordcloud_data"
        doc_type = "my_type"  # only needed if you want a custom type name

WordCloud.init()
with open("./csv-data/wordcloud_data.csv") as f:
    reader = csv.DictReader(f)
    bulk(
        connections.get_connection(),
        (WordCloud(**row).to_dict(True) for row in reader)
    )

Please note, I haven't tested the code I've posted; I don't have an ES server at hand, so there could be some small mistakes or typos (please point them out if there are), but the general idea should be correct.
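
If you want a quick sanity check once the class is defined and WordCloud.init() has run, something along these lines should work (again untested; the values come from the question's CSV):

# Untested sketch: index one document and read it back.
doc = WordCloud(word_data="luxury vehicle", word_count=11, date="2017-06-17")
doc.save(refresh=True)  # refresh so the document is searchable immediately

resp = WordCloud.search().query("match", word_data="luxury").execute()
for hit in resp:
    print(hit.word_data, hit.word_count)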

Thanks, @drdaeman's solution worked for me. Although I think it's worth mentioning that in elasticsearch-dsl 6+, this:

class Meta:
    index = "wordcloud_data"
    doc_type = "my-type"

will raise a "cannot write to wildcard index" exception. Change it to:

class Index:
    name = 'wordcloud_data'
    doc_type = 'my_type'
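
With the Index inner class in place, WordCloud.init() creates the index with the mapping derived from the field definitions. You can verify the result with a plain client call, e.g. (untested):

from elasticsearch import Elasticsearch

es = Elasticsearch('localhost:9200')

# init() must run before any documents are indexed; otherwise dynamic
# mapping creates the index and word_count ends up as text again.
WordCloud.init()
print(es.indices.get_mapping(index="wordcloud_data"))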
