
How can I loop over every item in a row of a PySpark RDD and turn them into keys? With the map function?

So, firstly I have some inputs like this:

A:<phone1,phone2>,<location1>,<email1>
B:<phone1>,<location2>,<email1,email2>

I'd like to use the PySpark rdd.map() function to loop over every item in each row and turn them into key-value pairs like this:

phone1: A:<phone1,phone2>,<location1>,<email1>
phone1: B:<phone1>,<location2>,<email1,email2>
phone2: A:<phone1,phone2>,<location1>,<email1>
location1: A:<phone1,phone2>,<location1>,<email1>
location2: B:<phone1>,<location2>,<email1,email2>
email1: A:<phone1,phone2>,<location1>,<email1>
email1: B:<phone1>,<location2>,<email1,email2>
email2: B:<phone1>,<location2>,<email1,email2>

In my previous attempts, I tried to put a loop inside the lambda function passed to map, but that didn't work. Is there any other way?

    scala> val rdd =  sc.parallelize(Seq("A:<phone1,phone2>,<location1>,<email1>", "B:<phone1>,<location2>,<email1,email2>"))

    scala> rdd.foreach(println)
    A:<phone1,phone2>,<location1>,<email1>
    B:<phone1>,<location2>,<email1,email2>

    scala> case class dataclass(c0:String, c1:String)

    scala> val df = rdd.map(x => x.split(":")).map(y => dataclass(y(0), y(1))).toDF

    scala> df.show(false)
    +---+------------------------------------+
    |c0 |c1                                  |
    +---+------------------------------------+
    |A  |<phone1,phone2>,<location1>,<email1>|
    |B  |<phone1>,<location2>,<email1,email2>|
    +---+------------------------------------+


    scala> import org.apache.spark.sql.functions._

    scala> val df1 = df.withColumn("tempCol", regexp_replace(regexp_replace(col("c1"), "<", ""), ">", ""))
                       .withColumn("tempCol", explode(split(col("tempCol"), ",")))
                       .withColumn("out", concat(col("tempCol"), lit(":"), col("c0"), lit(":"), col("c1")))
                       .drop("c0", "c1", "tempCol")

    scala> df1.show(false)
    +------------------------------------------------+
    |out                                             |
    +------------------------------------------------+
    |phone1:A:<phone1,phone2>,<location1>,<email1>   |
    |phone2:A:<phone1,phone2>,<location1>,<email1>   |
    |location1:A:<phone1,phone2>,<location1>,<email1>|
    |email1:A:<phone1,phone2>,<location1>,<email1>   |
    |phone1:B:<phone1>,<location2>,<email1,email2>   |
    |location2:B:<phone1>,<location2>,<email1,email2>|
    |email1:B:<phone1>,<location2>,<email1,email2>   |
    |email2:B:<phone1>,<location2>,<email1,email2>   |
    +------------------------------------------------+

    scala> val rdd2 = df1.rdd.map(_(0))
    scala> rdd2.foreach(println)
    phone1:A:<phone1,phone2>,<location1>,<email1>
    phone2:A:<phone1,phone2>,<location1>,<email1>
    location1:A:<phone1,phone2>,<location1>,<email1>
    email1:A:<phone1,phone2>,<location1>,<email1>
    phone1:B:<phone1>,<location2>,<email1,email2>
    location2:B:<phone1>,<location2>,<email1,email2>
    email1:B:<phone1>,<location2>,<email1,email2>
    email2:B:<phone1>,<location2>,<email1,email2>
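The same result can be obtained directly in PySpark without DataFrames. The reason the original attempt failed is that map() must return exactly one value per input record, so a loop inside its lambda cannot emit several pairs; flatMap() flattens a returned list into multiple records. A minimal sketch (the helper name and parsing are illustrative, assuming the input format shown above):

```python
def to_pairs(line):
    # "A:<phone1,phone2>,<location1>,<email1>" ->
    # [("phone1", line), ("phone2", line), ("location1", line), ("email1", line)]
    _, values = line.split(":", 1)
    items = values.replace("<", "").replace(">", "").split(",")
    return [(item, line) for item in items]

# With a live SparkContext `sc`, apply it with flatMap instead of map:
#   rdd = sc.parallelize(["A:<phone1,phone2>,<location1>,<email1>",
#                         "B:<phone1>,<location2>,<email1,email2>"])
#   pairs = rdd.flatMap(to_pairs)   # one (key, line) pair per item
print(to_pairs("B:<phone1>,<location2>,<email1,email2>"))
```

Because flatMap yields real (key, value) tuples rather than concatenated strings, the result can feed straight into pair-RDD operations such as groupByKey or reduceByKey.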
