
How to loop through a DataFrame with Array-type columns and append the values to a final DataFrame using Scala

Could you please help me with a solution to the questions below?

Question 01: Is there a way I can loop over only the Array-type columns? Exploding the string column throws an error, and I cannot drop the string column (VIN) because I need its data in the final df.

df.printSchema

returns:

root
  |-- APP: array (nullable = true)
  |    |-- element: struct (containsNull = true)
  |    |    |-- E: long (nullable = true)
  |    |    |-- V: double (nullable = true)
  |-- B1X: array (nullable = true)
  |    |-- element: struct (containsNull = true)
  |    |    |-- E: long (nullable = true)
  |    |    |-- V: long (nullable = true)
  |-- B2X: array (nullable = true)
  |    |-- element: struct (containsNull = true)
  |    |    |-- E: long (nullable = true)
  |    |    |-- V: long (nullable = true)
  |-- B3X: array (nullable = true)
  |    |-- element: struct (containsNull = true)
  |    |    |-- E: long (nullable = true)
  |    |    |-- V: long (nullable = true)
  |-- VIN: string (nullable = true)

Question 02: After running the for loop below, the DataFrame jsonDF2 holds only the last E, V pair (as stime, can_value) of the last signal, B3X. Is there a way to append the values of all the signals {APP, B1X, B2X, B3X, VIN} to the DataFrame jsonDF2 once it comes out of the loop?

val columns: Array[String] = df.columns

// df must be declared as a var for the reassignment below to compile
for (col_name <- columns) {
  df = df
    .withColumn("element", explode(col(col_name)))  // throws when col_name is the string column VIN
    .withColumn("stime", col("element.E"))
    .withColumn("can_value", col("element.V"))
    .withColumn("SIGNAL", lit(col_name))
    .drop(col("element"))
    .drop(col(col_name))
}

You can use the schema member and filter the non-array columns out beforehand with a filter and a map, then run your for loop over the result.

import org.apache.spark.sql.types._
val schema = df.schema.filter{ case StructField(_, dataType, _, _) => dataType.isInstanceOf[ArrayType] }
val columns = schema.map{ case StructField(columnName, _ , _, _) => columnName }
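For completeness, here is a minimal sketch of what the follow-up loop could look like, assuming the goal from Question 02: each array column is exploded into its own small DataFrame, and the pieces are combined with union instead of repeatedly reassigning df (the reassignment is why only the last signal survived). The cast on V is needed because it is a double for APP but a long for the other signals:

import org.apache.spark.sql.functions._

// Hypothetical follow-up: build one small DataFrame per signal, then
// union them; VIN (the string column) is carried along on every row
// instead of being exploded.
val perSignal = columns.map { colName =>
  df.withColumn("element", explode(col(colName)))
    .select(
      lit(colName).as("SIGNAL"),
      col("element.E").as("stime"),
      col("element.V").cast("double").as("can_value"), // V is long or double depending on the signal
      col("VIN")
    )
}
val jsonDF2 = perSignal.reduce(_ union _)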

Here's one approach illustrated using the following example:

import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions._
import spark.implicits._

case class Elem(e: Long, v: Double)

val df = Seq(
  (Seq(Elem(1, 1.0)), Seq(Elem(2, 2.0), Elem(3, 3.0)), Seq(Elem(4, 4.0)), Seq(Elem(5, 5.0)), "a"),
  (Seq(Elem(6, 6.0)), Seq(Elem(7, 7.0), Elem(8, 8.0)), Seq(Elem(9, 9.0)), Seq(Elem(10, 10.0)), "b")
).toDF("APP", "B1X", "B2X", "B3X", "VIN")

Question #1: Is there a way I can loop only Array types?

You can simply collect all the top-level field names of ArrayType as follows:

val arrCols = df.schema.fields.collect{
  case StructField(name, dtype: ArrayType, _, _) => name
}
// arrCols: Array[String] = Array(APP, B1X, B2X, B3X)
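Equivalently, the same names can be read off df.dtypes, which pairs each column name with the string rendering of its type. This is a quick alternative sketch; note that matching on the type string is less robust than the typed pattern match above:

// df.dtypes: Array[(String, String)], e.g. ("APP", "ArrayType(StructType(...),true)")
val arrCols2 = df.dtypes.collect {
  case (name, dtype) if dtype.startsWith("ArrayType") => name
}
// arrCols2: Array[String] = Array(APP, B1X, B2X, B3X)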

Question #2: Is there a way to append all the signal values {APP, B1X, B2X, B3X, VIN}?

Not sure I completely understand your requirement without sample output. Based on your code snippet, I'm assuming your goal is to flatten all array columns of struct-typed elements into separate top-level columns. Below are the steps:

Step 1: Group all the array columns into a single array column of struct(colName, colValue); then, for each row, use foldLeft to generate a combined array of struct(colName, Elem-e, Elem-v):

case class ColElem(c: String, e: Long, v: Double)

val df2 = df.
  select(array(arrCols.map(c => struct(lit(c).as("_1"), col(c).as("_2"))): _*)).
  map{ case Row(rs: Seq[Row] @unchecked) => rs.foldLeft(Seq[ColElem]()){  
    (acc, r) => r match { case Row(c: String, s: Seq[Row] @unchecked) =>
      acc ++ s.map(el => ColElem(c, el.getAs[Long](0), el.getAs[Double](1)))
    }
  }}.toDF("combined_array")

df2.show(false)
// +-----------------------------------------------------------------------------+
// |combined_array                                                               |
// +-----------------------------------------------------------------------------+
// |[[APP, 1, 1.0], [B1X, 2, 2.0], [B1X, 3, 3.0], [B2X, 4, 4.0], [B3X, 5, 5.0]]  |
// |[[APP, 6, 6.0], [B1X, 7, 7.0], [B1X, 8, 8.0], [B2X, 9, 9.0], [B3X, 10, 10.0]]|
// +-----------------------------------------------------------------------------+

Step 2: Flatten the combined array of struct-typed elements into top-level columns:

df2.
  select(explode($"combined_array").as("flattened")).
  select($"flattened.c".as("signal"), $"flattened.e".as("stime"), $"flattened.v".as("can_value")).
  orderBy("signal", "stime").
  show
// +------+-----+---------+
// |signal|stime|can_value|
// +------+-----+---------+
// |   APP|    1|      1.0|
// |   APP|    6|      6.0|
// |   B1X|    2|      2.0|
// |   B1X|    3|      3.0|
// |   B1X|    7|      7.0|
// |   B1X|    8|      8.0|
// |   B2X|    4|      4.0|
// |   B2X|    9|      9.0|
// |   B3X|    5|      5.0|
// |   B3X|   10|     10.0|
// +------+-----+---------+
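As a side note, on Spark 2.4+ the same result can be obtained without a typed map, using the built-in higher-order functions transform, flatten and explode. A sketch under that version assumption:

// Assumes Spark 2.4+ for transform/flatten. Each array column is mapped
// to structs tagged with its column name, the per-column arrays are
// concatenated with flatten, and explode turns them into rows.
val flattened = df.select(
  explode(flatten(array(arrCols.map(c =>
    expr(s"transform($c, x -> struct('$c' AS signal, x.e AS stime, x.v AS can_value))")
  ): _*))).as("s")
).select("s.*")

flattened.orderBy("signal", "stime").show()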
