
How to continuously write to InfluxDB using the Golang client

I'm using InfluxDB to store my time series data.

I wrote a simple golang application to read lines from a file called time.log.

The documentation at https://github.com/influxdata/influxdb/blob/master/client/README.md#inserting-data says:

Inserting Data

Time series data aka points are written to the database using batch inserts. The mechanism is to create one or more points and then create a batch aka batch points and write these to a given database and series. A series is a combination of a measurement (time/values) and a set of tags.

In this sample we will create a batch of 1,000 points. Each point has a time and a single value as well as 2 tags indicating a shape and color. We write these points to a database called square_holes using a measurement named shapes.

NOTE: You can specify a RetentionPolicy as part of the batch points. If not provided InfluxDB will use the database default retention policy.

func writePoints(clnt client.Client) {
    sampleSize := 1000
    rand.Seed(42)

    bp, _ := client.NewBatchPoints(client.BatchPointsConfig{
        Database:  "systemstats",
        Precision: "us",
    })

    for i := 0; i < sampleSize; i++ {
        regions := []string{"us-west1", "us-west2", "us-west3", "us-east1"}
        tags := map[string]string{
            "cpu":    "cpu-total",
            "host":   fmt.Sprintf("host%d", rand.Intn(1000)),
            "region": regions[rand.Intn(len(regions))],
        }
        idle := rand.Float64() * 100.0
        fields := map[string]interface{}{
            "idle": idle,
            "busy": 100.0 - idle,
        }
        bp.AddPoint(client.NewPoint(
            "cpu_usage",
            tags,
            fields,
            time.Now(),
        ))
    }

    err := clnt.Write(bp)
    if err != nil {
        log.Fatal(err)
    }
}

But because I'm continuously reading data from the log, I'm never done reading it. So what is the best way for me to write the points to the InfluxDB server?

Here is my current code:

cmdBP, _ := client.NewBatchPoints(...)
for line := range logFile.Lines {
    pt := parseLine(line.Text)
    cmdBP.AddPoint(pt)
}

influxClient.Write(cmdBP)

Basically range logFile.Lines never terminates because it is based on a channel.

Use a combination of batch points and a timeout (this runs as a goroutine):

func (h *InfluxDBHook) loop() {
    var coll []*client.Point
    tick := time.NewTicker(h._batchInterval)
    defer tick.Stop()

    for {
        timeout := false

        select {
        case pt := <-h._points:
            coll = append(coll, pt)
        case <-tick.C:
            timeout = true
        }

        // Flush when the interval has elapsed or the batch is full,
        // but never send an empty batch.
        if (timeout || len(coll) >= h._batchSize) && len(coll) > 0 {
            bp, err := client.NewBatchPoints(h._batchPointsConfig)
            if err != nil {
                // TODO: handle the error (the batch points config is invalid)
                continue
            }
            bp.AddPoints(coll)
            err = h._client.Write(bp)
            if err != nil {
                // TODO: handle the error; coll is kept so the points
                // are retried on the next flush
            } else {
                coll = nil
            }
        }
    }
}

BTW, you can use a hook with the logrus logging package to send logs into InfluxDB (the sample code is from a logrus InfluxDB hook).
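For orientation (this is not part of the original answer): a logrus hook implements Levels() []logrus.Level and Fire(*logrus.Entry) error, and its Fire method typically just hands the entry to the batching goroutine. Below is a stdlib-only sketch of that hand-off; pointHook and the string payload are invented for illustration, and a real hook would build a *client.Point from the entry's message, level, and fields.

```go
package main

import "fmt"

// pointHook mimics the shape of a logrus hook that forwards log entries to
// the batching goroutine: fire sends a point onto the channel drained by
// loop(). string stands in for *client.Point to keep the sketch stdlib-only.
type pointHook struct {
	points chan string
}

// fire forwards one entry, dropping it when the buffer is full so that
// logging never blocks the application.
func (h *pointHook) fire(entry string) error {
	select {
	case h.points <- entry:
		return nil
	default:
		return fmt.Errorf("point buffer full, dropped: %s", entry)
	}
}

func main() {
	h := &pointHook{points: make(chan string, 2)}
	fmt.Println(h.fire("a")) // <nil>
	fmt.Println(h.fire("b")) // <nil>
	fmt.Println(h.fire("c")) // point buffer full, dropped: c
}
```

The non-blocking send is a deliberate trade-off: a slow or unreachable InfluxDB drops log points instead of stalling every call site that logs.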
