Create an array column from other columns after processing the column values

Suppose I have a Spark dataframe containing the categorical columns (School, Type, Group):

------------------------------------------------------------
StudentID  |  School |   Type        |  Group               
------------------------------------------------------------
1          |  ABC    |   Elementary  |  Music-Arts          
2          |  ABC    |   Elementary  |  Football            
3          |  DEF    |   Secondary   |  Basketball-Cricket  
4          |  DEF    |   Secondary   |  Cricket             
------------------------------------------------------------

I need to add another column to the dataframe, as below:

--------------------------------------------------------------------------------------
StudentID  |  School |   Type        |  Group               |  Combined Array
---------------------------------------------------------------------------------------
1          |  ABC    |   Elementary  |  Music-Arts          | ["School: ABC", "Type: Elementary", "Group: Music", "Group: Arts"]
2          |  ABC    |   Elementary  |  Football            | ["School: ABC", "Type: Elementary", "Group: Football"]
3          |  DEF    |   Secondary   |  Basketball-Cricket  | ["School: DEF", "Type: Secondary", "Group: Basketball", "Group: Cricket"]
4          |  DEF    |   Secondary   |  Cricket             | ["School: DEF", "Type: Secondary", "Group: Cricket"]
----------------------------------------------------------------------------------------

The extra column is a combination of all the categorical columns, but with different processing on the "Group" column: the values of the "Group" column need to be split on "-".

All the categorical columns, including "Group", are given in a list. The "Group" column is also passed in as a String, as the column to be split. The dataframe has other columns which are not used.

I am looking for the best-performing solution.

If it were a simple array, it could be done with a single "withColumn" transformation:

import org.apache.spark.sql.functions.array

val columns = List("School", "Type", "Group")
val df2 = df1.withColumn("CombinedArray", array(columns.map(df1(_)): _*))

However, since additional processing is needed on the "Group" column, the solution does not seem straightforward.
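
A minimal sketch of what such a single withColumn pass could look like, assuming Spark 3.x higher-order functions (transform, split, flatten) and that "-" only ever appears in the "Group" values:

import org.apache.spark.sql.functions._

val columns = List("School", "Type", "Group")
// Split every column value on "-" (a no-op when there is no dash),
// prefix each piece with its column name, then flatten into one array.
val df2 = df1.withColumn("CombinedArray",
  flatten(array(columns.map { colName =>
    transform(split(df1(colName), "-"), v => concat(lit(s"$colName: "), v))
  }: _*)))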

Using spark.sql(), check this out:

Seq(("ABC","Elementary","Music-Arts"),("ABC","Elementary","Football"),("DEF","Secondary","Basketball-Cricket"),("DEF","Secondary","Cricket"))
  .toDF("School","Type","Group").createOrReplaceTempView("taba")
spark.sql( """ select school, type, group, array(concat('School:',school),concat('type:',type),concat('group:',group)) as combined_array from taba """).show(false)

Output:

+------+----------+------------------+------------------------------------------------------+
|school|type      |group             |combined_array                                        |
+------+----------+------------------+------------------------------------------------------+
|ABC   |Elementary|Music-Arts        |[School:ABC, type:Elementary, group:Music-Arts]       |
|ABC   |Elementary|Football          |[School:ABC, type:Elementary, group:Football]         |
|DEF   |Secondary |Basketball-Cricket|[School:DEF, type:Secondary, group:Basketball-Cricket]|
|DEF   |Secondary |Cricket           |[School:DEF, type:Secondary, group:Cricket]           |
+------+----------+------------------+------------------------------------------------------+

If you need to use it as a dataframe:

val df = spark.sql( """ select school, type, group, array(concat('School:',school),concat('type:',type),concat('group:',group)) as combined_array from taba """)
df.printSchema()

root
 |-- school: string (nullable = true)
 |-- type: string (nullable = true)
 |-- group: string (nullable = true)
 |-- combined_array: array (nullable = false)
 |    |-- element: string (containsNull = true)

UPDATE:

Constructing the SQL columns dynamically:

scala> val df = Seq(("ABC","Elementary","Music-Arts"),("ABC","Elementary","Football"),("DEF","Secondary","Basketball-Cricket"),("DEF","Secondary","Cricket")).toDF("School","Type","Group")
df: org.apache.spark.sql.DataFrame = [School: string, Type: string ... 1 more field]

scala> val columns = df.columns.mkString("select ", ",", "")
columns: String = select School,Type,Group

scala> val arr = df.columns.map( x=> s"concat('"+x+"',"+x+")" ).mkString("array(",",",") as combined_array ")
arr: String = "array(concat('School',School),concat('Type',Type),concat('Group',Group)) as combined_array "

scala> val sql_string = columns + " , " + arr + " from taba "
sql_string: String = "select School,Type,Group , array(concat('School',School),concat('Type',Type),concat('Group',Group)) as combined_array  from taba "

scala> df.createOrReplaceTempView("taba")

scala> spark.sql(sql_string).show(false)
+------+----------+------------------+---------------------------------------------------+
|School|Type      |Group             |combined_array                                     |
+------+----------+------------------+---------------------------------------------------+
|ABC   |Elementary|Music-Arts        |[SchoolABC, TypeElementary, GroupMusic-Arts]       |
|ABC   |Elementary|Football          |[SchoolABC, TypeElementary, GroupFootball]         |
|DEF   |Secondary |Basketball-Cricket|[SchoolDEF, TypeSecondary, GroupBasketball-Cricket]|
|DEF   |Secondary |Cricket           |[SchoolDEF, TypeSecondary, GroupCricket]           |
+------+----------+------------------+---------------------------------------------------+


scala>
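
Note that the generated concat('School',School) expressions above drop the separator, which is why the output reads SchoolABC rather than School: ABC as in the desired output. The prefix can be restored when building the expression:

scala> val arr = df.columns.map( x => s"concat('$x: ', $x)" ).mkString("array(",",",") as combined_array")
arr: String = array(concat('School: ', School),concat('Type: ', Type),concat('Group: ', Group)) as combined_array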

UPDATE2:

scala>  val df = Seq((1,"ABC","Elementary","Music-Arts"),(2,"ABC","Elementary","Football"),(3,"DEF","Secondary","Basketball-Cricket"),(4,"DEF","Secondary","Cricket")).toDF("StudentID","School","Type","Group")
df: org.apache.spark.sql.DataFrame = [StudentID: int, School: string ... 2 more fields]

scala> df.createOrReplaceTempView("student")

scala>  val df2 = spark.sql(""" select studentid, collect_list(concat('Group:', t.sp1)) as sp2 from (select StudentID,School,Type,explode((split(group,'-'))) as sp1 from student where size(split(group,'-')) > 1 ) t group by studentid """)
df2: org.apache.spark.sql.DataFrame = [studentid: int, sp2: array<string>]

scala> val df3 = df.alias("t1").join(df2.alias("t2"),Seq("studentid"),"LeftOuter")
df3: org.apache.spark.sql.DataFrame = [StudentID: int, School: string ... 3 more fields]

scala> df3.createOrReplaceTempView("student2")

scala> spark.sql(""" select studentid, school,group, type, array(concat('School:',school),concat('type:',type),concat_ws(',',temp_arr)) from (select studentid,school,group,type, case when sp2 is null then array(concat("Group:",group)) else sp2 end as temp_arr from student2) t """).show(false)
+---------+------+------------------+----------+---------------------------------------------------------------------------+
|studentid|school|group             |type      |array(concat(School:, school), concat(type:, type), concat_ws(,, temp_arr))|
+---------+------+------------------+----------+---------------------------------------------------------------------------+
|1        |ABC   |Music-Arts        |Elementary|[School:ABC, type:Elementary, Group:Music,Group:Arts]                      |
|2        |ABC   |Football          |Elementary|[School:ABC, type:Elementary, Group:Football]                              |
|3        |DEF   |Basketball-Cricket|Secondary |[School:DEF, type:Secondary, Group:Basketball,Group:Cricket]               |
|4        |DEF   |Cricket           |Secondary |[School:DEF, type:Secondary, Group:Cricket]                                |
+---------+------+------------------+----------+---------------------------------------------------------------------------+


scala>
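
One caveat: concat_ws joins the group entries into a single string, so Group:Music,Group:Arts above is one array element rather than two. If each value must be its own element, as in the desired output, concat() can append the array itself (a sketch assuming Spark 2.4+, where concat() also accepts array arguments):

scala> spark.sql(""" select studentid, school, group, type, concat(array(concat('School:',school),concat('type:',type)), temp_arr) as combined_array from (select studentid, school, group, type, case when sp2 is null then array(concat('Group:',group)) else sp2 end as temp_arr from student2) t """).show(false)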

You need to first add an empty column and then map over it as follows (in Java):

StructType newSchema = df1.schema().add("Combined Array", DataTypes.StringType);

df1 = df1.withColumn("Combined Array", lit(null))
        .map((MapFunction<Row, Row>) row ->
            RowFactory.create(...values...) // add existing values and new value here
        , RowEncoder.apply(newSchema));    // map() takes an Encoder, not a bare schema

It should be very similar in Scala.
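
A minimal Scala sketch of the same row-mapping idea, assuming Spark 3.x before 3.5 (where RowEncoder.apply was replaced by Encoders.row) and hardcoding the three category columns:

import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.encoders.RowEncoder
import org.apache.spark.sql.types.{ArrayType, StringType}

val newSchema = df1.schema.add("Combined Array", ArrayType(StringType))
val df2 = df1.map { row =>
  // Prefix School/Type as-is; split Group on "-" into separate entries.
  val combined =
    Seq(s"School: ${row.getAs[String]("School")}", s"Type: ${row.getAs[String]("Type")}") ++
      row.getAs[String]("Group").split("-").map(g => s"Group: $g")
  Row.fromSeq(row.toSeq :+ combined)
}(RowEncoder(newSchema))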

Using regex replacement at the start of each field, and on the "-" in between:

import org.apache.spark.sql.functions.{array, regexp_replace}

val df1 = spark.read.option("header","true").csv(filePath)
val columns = List("School", "Type", "Group")
val df2 = df1.withColumn("CombinedArray", array(columns.map{
   colName => regexp_replace(regexp_replace(df1(colName),"(^)",s"$colName: "),"(-)",s", $colName: ")
}:_*))
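
Note that this keeps the split group values inside a single array element: "Music-Arts" becomes the one string "Group: Music, Group: Arts" rather than two elements. If the desired output strictly requires separate elements, the group column still has to go through split(), as in the sketches above.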
