

Converting a DataStream[ObjectNode] containing JSON key-value pairs to a Map in Scala

I am trying to read JSON data from Kafka and process it in Scala. I am new to Flink and Kafka streams, so please try to answer with solution code. I want to be able to convert the stream into a Map containing all the key-value pairs.

map1.get("FC196") should give me Dormant, where map1 is the map containing the key-value pairs.

The challenge I am facing is converting the DataStream[ObjectNode] (the st variable in the code) into a map of key-value pairs. I am using JSONDeserializationSchema. If I use SimpleStringSchema instead, I get a DataStream[String]. I am open to other suggestions.

Input format coming from Kafka:

{"FC196":"Dormant","FC174":"A262210940","FC195":"","FC176":"40","FC198":"BANKING","FC175":"AHMED","FC197":"2017/04/04","FC178":"1","FC177":"CBS","FC199":"INDIVIDUAL","FC179":"SYSTEM","FC190":"OK","FC192":"osName","FC191":"Completed","FC194":"125","FC193":"7","FC203":"A10SBPUB000000000004439900053575","FC205":"1","FC185":"20","FC184":"Transfer","FC187":"2","FC186":"2121","FC189":"abcdef","FC200":"afs","FC188":"BR08","FC202":"INDIVIDUAL","FC201":"","FC181":"7:00PM","FC180":"2007/04/01","FC183":"11000000","FC182":"INR"}

Code:

import java.util.Properties
import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09
import org.apache.flink.streaming.util.serialization.JSONDeserializationSchema



object WordCount {
  def main(args: Array[String]) {

    // kafka properties
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "***.**.*.*:9092")
    properties.setProperty("zookeeper.connect", "***.**.*.*:2181")
    properties.setProperty("group.id", "afs")
    properties.setProperty("auto.offset.reset", "latest")

    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // consume the topic as a stream of Jackson ObjectNode, i.e. DataStream[ObjectNode]
    val st = env
      .addSource(new FlinkKafkaConsumer09("new", new JSONDeserializationSchema(), properties))

    st.print()

    env.execute()
  }
}

My code after the changes:

import java.util.Properties

import com.fasterxml.jackson.databind.{JsonNode, ObjectMapper}
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import org.apache.flink.api.scala._
import org.apache.flink.runtime.state.filesystem.FsStateBackend
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09
import org.apache.flink.streaming.util.serialization.SimpleStringSchema
import org.json4s.DefaultFormats
import org.json4s._
import org.json4s.native.JsonMethods
import scala.util.Try



object WordCount{
  def main(args: Array[String]) {

    case class CC(key:String)

    implicit val formats = org.json4s.DefaultFormats
    // kafka properties
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "***.**.*.***:9093")
    properties.setProperty("zookeeper.connect", "***.**.*.***:2181")
    properties.setProperty("group.id", "afs")
    properties.setProperty("auto.offset.reset", "earliest")
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val st = env
      .addSource(new FlinkKafkaConsumer09("new", new SimpleStringSchema(), properties))
      .flatMap(raw => JsonMethods.parse(raw).toOption)
      .map(_.extract[CC])

    st.print()

    env.execute()
  }
}

For some reason I am not able to put the Try inside the flatMap the way you described.

Error:

INFO [main] (TypeExtractor.java:1804) - No fields detected for class org.json4s.JsonAST$JValue. Cannot be used as a PojoType. Will be handled as GenericType
Exception in thread "main" org.apache.flink.api.common.InvalidProgramException: Task not serializable
    at org.apache.flink.api.scala.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:172)
    at org.apache.flink.api.scala.ClosureCleaner$.clean(ClosureCleaner.scala:164)
    at org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.scalaClean(StreamExecutionEnvironment.scala:666)
    at org.apache.flink.streaming.api.scala.DataStream.clean(DataStream.scala:994)
    at org.apache.flink.streaming.api.scala.DataStream.map(DataStream.scala:519)
    at org.apache.flink.quickstart.WordCount$.main(WordCount.scala:36)
    at org.apache.flink.quickstart.WordCount.main(WordCount.scala)
Caused by: java.io.NotSerializableException: org.json4s.DefaultFormats$$anon$4
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:317)
    at org.apache.flink.api.scala.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:170)
    ... 6 more

Process finished with exit code 1

There are two tasks to handle here:

  1. Parse the raw JSON payload into some form of AST.
  2. Convert the AST into a format you can work with.

If you use the SimpleStringSchema, you can pick a JSON parser of your choice and unmarshal the JSON payload in a simple flatMap operator.

Some dependencies for your build.sbt:

"org.json4s" %% "json4s-core" % "3.5.1",
"org.json4s" %% "json4s-native" % "3.5.1"

There are a dozen JSON libraries to choose from in Scala; a good overview can be found here: https://manuel.bernhardt.io/2015/11/06/a-quick-tour-of-json-libraries-in-scala/

Then some parsing:

scala> import org.json4s.native.JsonMethods._
import org.json4s.native.JsonMethods._

scala> val raw = """{"key":"value"}"""
raw: String = {"key":"value"}

scala> parse(raw)
res0: org.json4s.JValue = JObject(List((key,JString(value))))

At this stage the AST can be worked with. It can be converted to a Map as follows:

scala> res0.values
res1: res0.Values = Map(key -> value)
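
Applied to the payload from the question, this gives exactly the lookup that was asked for. A minimal sketch, assuming the raw payload string is bound to payload (note that values is declared with a path-dependent type, so a cast to Map[String, Any] is needed before calling get):

scala> val map1 = parse(payload).values.asInstanceOf[Map[String, Any]]
map1: Map[String,Any] = Map(FC196 -> Dormant, FC174 -> A262210940, ...)

scala> map1.get("FC196")
res2: Option[Any] = Some(Dormant)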

Keep in mind that Json4s does not do exception handling, so it can throw (something you want to avoid when pulling data from Kafka, as an uncaught exception will eventually kill your job).

In Flink it would look like this:

env
  .addSource(new FlinkKafkaConsumer09("new", new SimpleStringSchema(), properties))
  .flatMap(raw => Try(JsonMethods.parse(raw)).toOption) // this discards failed records; you should handle them better, i.e. log them
  .map(_.values)
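
(Returning an Option from flatMap works because Flink's Scala API accepts a function producing a TraversableOnce, and Scala implicitly converts the Option to an Iterable, so a None simply emits nothing downstream.)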

However, I would recommend representing your data as a case class. That requires one more step to map the AST onto the case class.

Continuing the example above:

scala> implicit val formats = org.json4s.DefaultFormats
formats: org.json4s.DefaultFormats.type = org.json4s.DefaultFormats$@341621da

scala> case class CC(key: String)
defined class CC

scala> parse(raw).extract[CC]
res20: CC = CC(value)

Or, in other words:

env
  .addSource(new FlinkKafkaConsumer09("new", new SimpleStringSchema(), properties))
  .flatMap(raw => Try(JsonMethods.parse(raw)).toOption)
  .map(_.extract[CC])
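
Since the payload in the question is a flat JSON object whose values are all strings, json4s can also extract it directly into a Map[String, String], which is the exact shape the question asks for. A minimal sketch under that assumption:

env
  .addSource(new FlinkKafkaConsumer09("new", new SimpleStringSchema(), properties))
  .flatMap(raw => Try(JsonMethods.parse(raw)).toOption) // drop unparsable records
  .map(_.extract[Map[String, String]])                  // the whole object as a Map

Calling get("FC196") on an element of this stream then yields Some(Dormant).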

Update:

Just move the implicit formats outside of the main method:

object WordCount {
    implicit val formats = org.json4s.DefaultFormats
    def main(args: Array[String]) = {...}
}
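
Putting all of the pieces together, here is a minimal sketch of the complete job with the formats moved out of main (assuming the same topic name, properties, and Flink/Kafka versions as in the question):

import java.util.Properties

import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09
import org.apache.flink.streaming.util.serialization.SimpleStringSchema
import org.json4s.native.JsonMethods

import scala.util.Try

object WordCount {
  // object-level, so the closures below do not capture a non-serializable instance
  implicit val formats = org.json4s.DefaultFormats

  def main(args: Array[String]): Unit = {
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "***.**.*.***:9093")
    properties.setProperty("zookeeper.connect", "***.**.*.***:2181")
    properties.setProperty("group.id", "afs")
    properties.setProperty("auto.offset.reset", "earliest")

    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val st: DataStream[Map[String, String]] = env
      .addSource(new FlinkKafkaConsumer09("new", new SimpleStringSchema(), properties))
      .flatMap(raw => Try(JsonMethods.parse(raw)).toOption) // skip records that fail to parse
      .map(_.extract[Map[String, String]])                  // m.get("FC196") would yield Some(Dormant)

    st.print()
    env.execute()
  }
}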
