json to case class using multiple rows in spark scala

I have a JSON file with logs:

{"a": "cat1", "b": "name", "c": "Caesar", "d": "2016-10-01"}
{"a": "cat1", "b": "legs", "c": "4", "d": "2016-10-01"}
{"a": "cat1", "b": "color", "c": "black", "d": "2016-10-01"}
{"a": "cat1", "b": "tail", "c": "20cm", "d": "2016-10-01"}

{"a": "cat2", "b": "name", "c": "Dickens", "d": "2016-10-02"}
{"a": "cat2", "b": "legs", "c": "4", "d": "2016-10-02"}
{"a": "cat2", "b": "color", "c": "red", "d": "2016-10-02"}
{"a": "cat2", "b": "tail", "c": "15cm", "d": "2016-10-02"}
{"a": "cat2", "b": "ears", "c": "5cm", "d": "2016-10-02"}

{"a": "cat1", "b": "tail", "c": "10cm", "d": "2016-10-10"}

Desired output:

("id": "cat1", "name": "Caesar", "legs": "4", "color": "black", "tail": "10cm", "day": "2016-10-10")
("id": "cat2", "name": "Dickens", "legs": "4", "color": "red", "tail": "10cm", "ears": "5cm", "day": "2016-10-02")

I can do it step by step using for loops and collect, but I want to do it the right way, with map, flatMap, aggregateByKey and other Spark magic:

case class cat_input(a: String, b:String, c:String, d: String)
case class cat_output(id: String, name: String, legs: String, color: String, tail: String, day: String, ears: String, claws: String)
object CatLog {

  def main(args: Array[String]) {

    val sconf = new SparkConf().setAppName("Cat log")
    val sc = new SparkContext(sconf)
    sc.setLogLevel("WARN")
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.implicits._


    val df = sqlContext.read.json("cats1.txt").as[cat_input]
    val step1 = df.rdd.groupBy(_.a) 

//step1 = (String, Iterator[cat_input]) = (cat1, CompactBuffer(cat_input( "cat1", "name", "Caesar", "2016-10-01"), ... ) )

    val step2 = step1.map(x => x._2)
//step2 = Iterator[cat_input]

    val step3 = step2.map(y => (y.b,y.c)) 
//step3 = ("name", "Caesar")

    val step4 = step3.map( case(x,y) => { cat_output(x) = y }) 
// it should return cat_output(id: "cat1", name: "Caesar", legs: "4", color: "black", tail: "10cm", day: NULL, ears: NULL, claws: NULL)
  1. Step 4 obviously does not work.
  2. How can I at least return cat_output(id: "cat1", name: "Caesar", legs: "4", color: "black", tail: "10cm", day: NULL, ears: NULL, claws: NULL)?
  3. How can I compare the values by column d, keep only the latest one among duplicates, and put the latest date into the day field of cat_output? (For example, cat1's tail is logged as "20cm" on 2016-10-01 and "10cm" on 2016-10-10, so the output should keep "10cm" with day "2016-10-10".)

This assumes the data has unique attributes for each cat (cat1, cat2); apply some logic of your own for duplicates. You can try the below for your case class:

// Method to reduce two cat_output objects into one, keeping the
// non-empty value for each field
def makeFinalRec(a: cat_output, b: cat_output): cat_output = cat_output(
  a.id,
  if (a.name == "" && b.name != "") b.name else a.name,
  if (a.legs == "" && b.legs != "") b.legs else a.legs,
  if (a.color == "" && b.color != "") b.color else a.color,
  if (a.tail == "" && b.tail != "") b.tail else a.tail,
  if (a.day == "" && b.day != "") b.day else a.day,
  if (a.ears == "" && b.ears != "") b.ears else a.ears,
  if (a.claws == "" && b.claws != "") b.claws else a.claws
)

// dt: the DataFrame read from the JSON log (columns a, b, c, d).
// Each row becomes a sparse cat_output keyed by cat id; records with
// the same id are then merged with makeFinalRec.
dt.map(x => (x(0), x(1), x(2))).map(x => (x._1.toString,
  cat_output(x._1.toString,
    (x._2.toString match { case "name"  => x._3.toString case _ => "" }),
    (x._2.toString match { case "legs"  => x._3.toString case _ => "" }),
    (x._2.toString match { case "color" => x._3.toString case _ => "" }),
    (x._2.toString match { case "tail"  => x._3.toString case _ => "" }),
    (x._2.toString match { case "day"   => x._3.toString case _ => "" }),
    (x._2.toString match { case "ears"  => x._3.toString case _ => "" }),
    (x._2.toString match { case "claws" => x._3.toString case _ => "" })
  ))).reduceByKey((a, b) => makeFinalRec(a, b)).map(x => x._2).toDF().toJSON.foreach(println)

Output:
{"id":"cat2","name":"Dickens","legs":"4","color":"red","tail":"15cm","day":"","ears":"5cm","claws":""}
{"id":"cat1","name":"Caesar","legs":"4","color":"black","tail":"20cm","day":"","ears":"","claws":""}

Also note that I did not fill in the actual "day", because there are duplicates. It needs another map() with max logic to get the max date per key, and then a join of the two datasets.
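
A minimal sketch of that extra step, assuming dt is the DataFrame above and merged is the keyed pair RDD just before the final .map(x => x._2), i.e. the result of reduceByKey((a, b) => makeFinalRec(a, b)) (the names latestDay, withDay, merged are illustrative, not from the original answer):

// Latest date per cat id; ISO dates like "2016-10-01" compare correctly as plain strings
val latestDay = dt.map(r => (r.getAs[String]("a"), r.getAs[String]("d")))
                  .reduceByKey((d1, d2) => if (d1 > d2) d1 else d2)

// Join each merged record with its latest date and fill the day field
val withDay = merged.join(latestDay)
                    .map { case (_, (rec, day)) => rec.copy(day = day) }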

One way is to use the aggregateByKey function and store the answer in a mutable Map.

//case class defined outside main()
case class cat_input(a: String, b:String, c:String, d: String)

val df = sqlContext.read.json("cats1.txt").as[cat_input]
val add_to_map = (a: scala.collection.mutable.Map[String,String], x: cat_input) => {
  val ts = x.d
  if (a contains "date") {
    // Overwrite an existing attribute only when this record is at least as new
    if ((a contains x.b) && (ts >= a("date"))) {
      a(x.b) = x.c
      a("date") = ts
    }
    // A new attribute is always added; the tracked date only moves forward
    else if (!(a contains x.b)) {
      a(x.b) = x.c
      if (a("date") < ts) {
        a("date") = ts
      }
    }
  } else {
    // First record for this cat: take the attribute and the date as-is
    a(x.b) = x.c
    a("date") = ts
  }
  a
}

val merge_maps = (a: scala.collection.mutable.Map[String,String], b: scala.collection.mutable.Map[String,String]) => {
  // Keep the map with the newer date, backfilling any attributes
  // that only the older map has
  if (a("date") > b("date")) {
    b.keys.foreach(k => if (!(a contains k)) a(k) = b(k))
    a
  } else {
    a.keys.foreach(k => if (!(b contains k)) b(k) = a(k))
    b
  }
}

val step3 = df.rdd.map(x => (x.a, x)).aggregateByKey( scala.collection.mutable.Map[String,String]() )(add_to_map, merge_maps)
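
step3 is now an RDD of (id, Map) pairs. As a hypothetical final step, not part of the original answer, each aggregated map could be turned back into a cat_output, with getOrElse covering attributes a cat never logged (e.g. cat1 has no ears):

val step4 = step3.map { case (id, m) =>
  cat_output(
    id,
    m.getOrElse("name", ""),
    m.getOrElse("legs", ""),
    m.getOrElse("color", ""),
    m.getOrElse("tail", ""),
    m.getOrElse("date", ""),  // the "date" key maintained by add_to_map fills the day field
    m.getOrElse("ears", ""),
    m.getOrElse("claws", "")
  )
}
step4.toDF().toJSON.foreach(println)  // needs import sqlContext.implicits._ as in the question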
