
Spark RDD not fetching all source fields from Elasticsearch

I have the following data in Elasticsearch (a local single-node server).

Search command: curl -XPOST 'localhost:9200/sparkdemo/_search?pretty' -d '{ "query": { "match_all": {} } }'

OUTPUT:

{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 10,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "sparkdemo",
      "_type" : "hrinfo",
      "_id" : "AVNAY_H0lYe0cQl--Bin",
      "_score" : 1.0,
      "_source" : {
        "date" : "9/Mar/2016",
        "pid" : "1",
        "propName" : "HEARTRATE",
        "var" : null,
        "propValue" : 86,
        "avg" : 86,
        "stage" : "S1"
      }
    }, {
      "_index" : "sparkdemo",
      "_type" : "hrinfo",
      "_id" : "AVNAY_KklYe0cQl--Bir",
      "_score" : 1.0,
      "_source" : {
        "date" : "13/Mar/2016",
        "pid" : "1",
        "propName" : "HEARTRATE",
        "var" : null,
        "propValue" : 86,
        "avg" : 87,
        "stage" : "S1"
      }
    }, {
      "_index" : "sparkdemo",
      "_type" : "hrinfo",
      "_id" : "AVNAY-TolYe0cQl--Bii",
      "_score" : 1.0,
      "_source" : {
        "date" : "4/Mar/2016",
        "pid" : "1",
        "propName" : "HEARTRATE",
        "var" : null,
        "propValue" : 82,
        "avg" : 82,
        "stage" : "S0"
      }
    }, 
.......
... Few more records
..........
    }, {
      "_index" : "sparkdemo",
      "_type" : "hrinfo",
      "_id" : "AVNAY_KklYe0cQl--Biq",
      "_score" : 1.0,
      "_source" : {
        "date" : "12/Mar/2016",
        "pid" : "1",
        "propName" : "HEARTRATE",
        "var" : null,
        "propValue" : 91,
        "avg" : 89,
        "stage" : "S1"
      }
    } ]
  }
}

I am trying to fetch all of this data in a Spark program (standalone, run locally from Eclipse).

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.elasticsearch.spark._
import scala.collection.mutable.Map;

object Test1 {

  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local[2]").setAppName("HRInfo");
    val sc = new SparkContext(conf);

    val esRdd = sc.esRDD("sparkdemo/hrinfo", "?q=*");

    val searchResultRDD = esRdd.map(t => {
      println("id:" + t._1 + ", map:" + t._2);
      t._2;
    });

    val infoRDD = searchResultRDD.collect().foreach(map => {
      var stage = map.get("stage");
      var pid = map.get("pid");
      var date = map.get("date");
      var propName = map.get("propName");
      var propValue = map.get("propValue");
      var avg = map.get("avg");
      var variation = map.get("var");

      println("Info(" + stage + "," + pid + "," + date + "," + propName + "," + propValue + "," + avg + "," + variation + ")");

    });

  }
}

But the program does not fetch all the fields of the records stored in Elasticsearch.

Program output:

id:AVNAY_H0lYe0cQl--Bin, map:Map(date -> 9/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY_KklYe0cQl--Bir, map:Map(date -> 13/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-TolYe0cQl--Bii, map:Map(date -> 4/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY_H0lYe0cQl--Bio, map:Map(date -> 10/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY_KklYe0cQl--Bip, map:Map(date -> 11/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-TolYe0cQl--Bij, map:Map(date -> 5/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-Y9lYe0cQl--Bil, map:Map(date -> 7/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-Y9lYe0cQl--Bim, map:Map(date -> 8/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-Y9lYe0cQl--Bik, map:Map(date -> 6/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY_KklYe0cQl--Biq, map:Map(date -> 12/Mar/2016, pid -> 1, propName -> HEARTRATE)
Info(None,Some(1),Some(9/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(13/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(4/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(10/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(11/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(5/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(7/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(8/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(6/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(12/Mar/2016),Some(HEARTRATE),None,None,None)

The program fetches all the records, but within each record it does not fetch the remaining fields (i.e. stage, propValue, avg and variation). Why? Many thanks.

This happens because of the "var": null value in your documents. "var" is null in every document, and it and all of the fields that follow it fail to make it into the Scala Map.

You can demonstrate this by replacing one of the "var": null values with a non-null value (e.g. "var": "test"); all of that document's values are then returned correctly, as expected. Alternatively, put a null value at the beginning of a document, e.g.

curl -X POST 'http://localhost:9200/sparkdemo/hrinfo/5' -d '{"test":null,"date": "9/Mar/2016","pid": "1","propName": "HEARTRATE","propValue": 86,"avg": 86,"stage": "S1"}'

and the map for that document will be empty:

id:5, map:Map()
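The None values in the question's output follow directly from Scala's Map#get, which returns an Option: Some(value) when the key survived into the map, None when the connector dropped it. A minimal plain-Scala sketch of the same access pattern (no Spark or Elasticsearch needed; the truncated map below is copied from the question's output, and the object name NullFieldDemo is just for illustration):

```scala
object NullFieldDemo {
  def main(args: Array[String]): Unit = {
    // Simulates a document map truncated at the null "var" field:
    // only the fields stored before "var" made it into the map.
    val truncated = Map[String, Any](
      "date" -> "9/Mar/2016",
      "pid" -> "1",
      "propName" -> "HEARTRATE")

    // get returns an Option, so dropped fields print as None
    println("propName: " + truncated.get("propName")) // Some(HEARTRATE)
    println("avg: " + truncated.get("avg"))            // None

    // getOrElse supplies a default value for a missing key
    val avg = truncated.getOrElse("avg", 0)
    println("avg with default: " + avg)                // 0
  }
}
```

Until the null handling is addressed (e.g. by removing the null field from the documents), getOrElse at least keeps downstream code from having to pattern-match on Option everywhere.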

Try this:

import org.apache.spark.sql.SQLContext
import org.elasticsearch.spark.sql._

val sql = new SQLContext(sc)
val index1 = sql.esDF("index/type")
println(index1.schema.treeString)

Since esDF derives the schema from the index mapping, the null var field should show up as a nullable column instead of being silently dropped.
