Convert dataframe into Spark mllib matrix in Scala
I have a Spark dataframe named df as input:
+---------------+---+---+---+---+
|Main_CustomerID| A1| A2| A3| A4|
+---------------+---+---+---+---+
| 101| 1| 0| 2| 1|
| 102| 0| 3| 1| 1|
| 103| 2| 1| 0| 0|
+---------------+---+---+---+---+
I need to collect the values of A1, A2, A3 and A4 into an mllib matrix such as:
dm: org.apache.spark.mllib.linalg.Matrix =
1.0 0.0 2.0 1.0
0.0 3.0 1.0 1.0
2.0 1.0 0.0 0.0
How can I achieve this in Scala?
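For reference, a dataframe like the one above can be built with this sketch (it assumes an existing SparkSession named spark):

import spark.implicits._  // assumes a SparkSession named spark

val df = Seq(
  (101, 1, 0, 2, 1),
  (102, 0, 3, 1, 1),
  (103, 2, 1, 0, 0)
).toDF("Main_CustomerID", "A1", "A2", "A3", "A4")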
You can do it as follows. First, get all the columns that should be included in the matrix:
import org.apache.spark.sql.functions._

// Pick every column whose name starts with "A" (A1..A4 in the example)
val matrixColumns = df.columns.filter(_.startsWith("A")).map(col(_))
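If the relevant columns do not share a common prefix, the same array can be built from an explicit list instead (a variant of the line above):

val matrixColumns = Seq("A1", "A2", "A3", "A4").map(col)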
Then convert the dataframe to an RDD[Vector]. Since the vectors need to contain doubles, the conversion needs to be done here too.
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.{IndexedRow, IndexedRowMatrix}
import spark.implicits._  // needed for the Array[Int] encoder used by .as[...]

val rdd = df.select(array(matrixColumns:_*).as("arr")).as[Array[Int]].rdd
  .zipWithIndex()
  .map { case (arr, index) => IndexedRow(index, Vectors.dense(arr.map(_.toDouble))) }
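An alternative, if the Dataset encoder route is inconvenient, is to work on the Row objects directly (a sketch, not part of the original approach; getInt assumes the A columns are integer-typed):

val rowRdd = df.select(matrixColumns: _*).rdd
  .zipWithIndex()
  .map { case (row, index) =>
    // Read each selected column positionally and widen to Double
    val values = Array.tabulate(row.length)(i => row.getInt(i).toDouble)
    IndexedRow(index, Vectors.dense(values))
  }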
Then convert the rdd to an IndexedRowMatrix, which can be converted, if required, to a local Matrix:
val dm = new IndexedRowMatrix(rdd).toBlockMatrix().toLocalMatrix()
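A quick sanity check on the driver; for the example data this prints the matrix from the question. Note that zipWithIndex assigns indices by partition order, so the row order matches the dataframe as long as nothing has shuffled it in between.

println(dm)
// 1.0  0.0  2.0  1.0
// 0.0  3.0  1.0  1.0
// 2.0  1.0  0.0  0.0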
For smaller matrices that can be collected to the driver, there is an easier alternative:
import org.apache.spark.mllib.linalg.Matrices

val matrixColumns = df.columns.filter(_.startsWith("A")).map(col(_))

val arr = df.select(array(matrixColumns:_*).as("arr")).as[Array[Int]]
  .collect()
  .flatten
  .map(_.toDouble)

val rows = df.count().toInt
val cols = matrixColumns.length

// Matrices.dense expects column-major values, but arr is row-major,
// so build the matrix with rows and cols swapped and then transpose
val dm = Matrices.dense(cols, rows, arr).transpose
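The swap works because Matrices.dense reads its value array in column-major order, while the collected array is row-major. A standalone sketch with the example's values makes this concrete:

import org.apache.spark.mllib.linalg.Matrices

// Row-major values of the 3 x 4 matrix from the question
val rowMajor = Array(1.0, 0.0, 2.0, 1.0,
                     0.0, 3.0, 1.0, 1.0,
                     2.0, 1.0, 0.0, 0.0)

// Read them as a 4 x 3 column-major matrix, then transpose back to 3 x 4
val m = Matrices.dense(4, 3, rowMajor).transpose
println(m)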