
How to migrate MySQL data to Elasticsearch using Logstash

I need a brief explanation of how I can migrate MySQL data to Elasticsearch using Logstash. Can anyone explain the step-by-step process for this?

You can do it using the jdbc input plugin for Logstash.

Here is a config example.
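A minimal sketch of such a pipeline, assuming the MySQL Connector/J driver jar is available locally and using placeholder connection details, table name, and index name:

input {
    jdbc {
        # Placeholder path to the MySQL Connector/J jar
        jdbc_driver_library => "/path/to/mysql-connector-java.jar"
        jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
        jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
        jdbc_user => "user"
        jdbc_password => "pwd"

        # Pull the whole table; switch to a WHERE ... > :sql_last_value clause for incremental loads
        statement => "SELECT * FROM users"

        # Run once a minute
        schedule => "* * * * *"
    }
}

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "users"
        # Reuse the primary key as the document id so re-runs do not create duplicates
        document_id => "%{id}"
    }
}

The driver class name depends on your connector version (older 5.x drivers use com.mysql.jdbc.Driver), and %{id} assumes the table has an id column.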

This is a broad question, and I don't know how familiar you are with MySQL and ES. Let's say you have a table user. You could simply dump it as CSV and load it into your ES, and that would be good enough. But if you have dynamic data, where MySQL acts more like a pipeline, you need to write a script to do that work. Anyway, you can check the links below to build your basic knowledge before asking how.

How to dump MySQL?

How to load data to ES

Also, you will probably want to know how to convert your CSV into JSON, which is the format that ES understands best; see the pipeline sketch after the link below.

How to convert CSV to JSON
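As a rough illustration of that CSV-to-ES path (this sketch is an assumption, not taken from the links above), Logstash itself can read the dumped CSV, turn each row into a JSON document, and index it; the file path, column names, and index name are placeholders:

input {
    file {
        # Placeholder path to the CSV dump of the user table
        path => "/tmp/user.csv"
        start_position => "beginning"
        # Do not remember read position between runs (useful while testing)
        sincedb_path => "/dev/null"
    }
}

filter {
    csv {
        # Placeholder column names matching the dump
        columns => ["id", "name", "email"]
        separator => ","
    }
}

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "user"
    }
}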

Let me provide you with a high-level instruction set.

  • Install Logstash and Elasticsearch.
  • Copy the JDBC driver jar into the Logstash bin folder (ojdbc7.jar here, since this example connects to Oracle; for MySQL you would use the Connector/J jar instead).
  • For Logstash, create a config file, e.g. config.yml:
# 
input {
    # Get the data from database, configure fields to get data incrementally
    jdbc {
        jdbc_driver_library => "./ojdbc7.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        jdbc_connection_string => "jdbc:oracle:thin:@db:1521:instance"
        jdbc_user => "user"
        jdbc_password => "pwd"

        id => "some_id"

        jdbc_validate_connection => true
        jdbc_validation_timeout => 1800
        connection_retry_attempts => 10
        connection_retry_attempts_wait_time => 10

        #fetch the db logs using logid
        statement => "select * from customer.table where logid > :sql_last_value order by logid asc"

        #limit how many results are pre-fetched at a time from the cursor into the client’s cache before retrieving more results from the result-set
        jdbc_fetch_size => 500
        jdbc_default_timezone => "America/New_York"

        use_column_value => true
        tracking_column => "logid"
        tracking_column_type => "numeric"
        record_last_run => true

        schedule => "*/2 * * * *"

        type => "log.customer.table"
        add_field => {"source" => "customer.table"}
        add_field => {"tags" => "customer.table" } 
        add_field => {"logLevel" => "ERROR" }

        last_run_metadata_path => "last_run_metadata_path_table.txt"
    }

}

# Massage the data to store in index
filter {
    if [type] == 'log.customer.table' {
        #assign values from db column to custom fields of index
        ruby{
            code => "event.set( 'errorid', event.get('ssoerrorid') );
                    event.set( 'msg', event.get('errormessage') );
                    event.set( 'logTimeStamp', event.get('date_created'));
                    event.set( '@timestamp', event.get('date_created'));
                    "
        }
        #remove the db columns that were mapped to custom fields of index
        mutate {
            remove_field => ["ssoerrorid","errormessage","date_created" ]
        }
    }#end of [type] == 'log.customer.table' 
} #end of filter

# Insert into index
output {
    if [type] == 'log.customer.table' {
        amazon_es {
            hosts => ["vpc-xxx-es-yyyyyyyyyyyy.us-east-1.es.amazonaws.com"]
            region => "us-east-1"
            aws_access_key_id => '<access key>'
            aws_secret_access_key => '<secret password>'
            index => "production-logs-table-%{+YYYY.MM.dd}"
        }
    }
}
  • Go to the bin folder and run: logstash -f config.yml
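If you want to see what the events look like before they reach the cluster, one option (not part of the original steps) is to temporarily add a stdout output next to the index output:

output {
    # Print each event to the console for debugging; remove once verified
    stdout { codec => rubydebug }
}

You can also validate the config syntax first with logstash -f config.yml --config.test_and_exit.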
