
Apache Spark XML into JavaRDD

I have tried to read an XML file with Spark and turn it into a JavaRDD array. I have read about how to turn it into a Dataset, but I wanted to know whether it is possible with a JavaRDD. I should mention that in my XML file I have a list which is not always the same size. Here is an example of my XML file.

 <?xml version="1.0" encoding="UTF-8" standalone="no"?>
<logs>
    <log>
        <id>1</id>
        <clientId>1</clientId>
        <date>Wed Apr 03 21:16:18 EEST 2019</date>
        <itemList>
            <item>2</item>
        </itemList>
    </log>
    <log>
        <id>2</id>
        <clientId>2</clientId>
        <date>Wed Apr 03 21:16:19 EEST 2019</date>
        <itemList>
            <item>1</item>
            <item>2</item>
            <item>3</item>
        </itemList>
    </log>
</logs>

Thanks!

Here is a possible solution: https://github.com/databricks/spark-xml/issues/213

Here is what you need:

import com.databricks.spark.xml.XmlReader

// Each string in the RDD is parsed as one XML record;
// repeated child elements are inferred as an array column.
val rdd = sc.parallelize(Seq("<books><book>book1</book><book>book2</book></books>"))
val df = new XmlReader().xmlRdd(spark.sqlContext, rdd)
df.show

+--------------+
|          book|
+--------------+
|[book1, book2]|
+--------------+

df.printSchema

root
 |-- book: array (nullable = true)
 |    |-- element: string (containsNull = true)
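Applied to the log format from the question, the same pattern handles the variable-length itemList, because spark-xml infers repeated <item> elements as an array column. A minimal sketch, assuming each <log> fragment is fed in as one RDD element (the schema comment reflects spark-xml's usual array inference, not output verified against this exact file):

import com.databricks.spark.xml.XmlReader

// One <log> fragment per RDD element, mirroring the <books> example above.
val logsRdd = sc.parallelize(Seq(
  "<log><id>1</id><clientId>1</clientId><itemList><item>2</item></itemList></log>",
  "<log><id>2</id><clientId>2</clientId><itemList><item>1</item><item>2</item><item>3</item></itemList></log>"))
val logsDf = new XmlReader().xmlRdd(spark.sqlContext, logsRdd)
// itemList.item should be inferred as an array, so a log with one item
// and a log with three items fit the same schema.
logsDf.printSchema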

Going from an RDD to a JavaRDD is fairly simple (see wrapRDD in the documentation).
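A minimal sketch of that last step, using the standard Spark conversions (df here is the DataFrame built above):

import org.apache.spark.api.java.JavaRDD
import org.apache.spark.sql.Row

// A DataFrame exposes its rows as a JavaRDD directly ...
val javaRdd: JavaRDD[Row] = df.toJavaRDD
// ... or wrap the underlying RDD[Row] yourself.
val wrapped: JavaRDD[Row] = JavaRDD.fromRDD(df.rdd)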

I hope it answered your question.
