Map in a spark dataframe
Using Spark 2.x I'm making use of dataframes.
val proposals = spark.read
  .option("header", true)
  .option("inferSchema", true)
  .option("delimiter", ";")
  .csv("/proposals.txt.gz")
proposals.printSchema()
which works fine and gives:
root
 |-- MARKETCODE: string (nullable = true)
 |-- REFDATE: string (nullable = true)
 |-- UPDTIME: string (nullable = true)
 |-- UPDTIMEMSEC: integer (nullable = true)
 |-- ENDTIME: string (nullable = true)
 |-- ENDTIMEMSEC: integer (nullable = true)
 |-- BONDCODE: string (nullable = true)
Now I'd like to calculate a time in milliseconds and thus have written a function:
def time2usecs(time: String, msec: Int) = {
  val Array(hour, minute, seconds) = time.split(":").map(_.toInt)
  msec + seconds*1000 + minute*60*1000 + hour*60*60*1000
}
time2usecs( "08:13:44", 111 )
time2usecs: (time: String, msec: Int)Int
res90: Int = 29624111
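(Sanity check: 8*60*60*1000 + 13*60*1000 + 44*1000 + 111 = 28800000 + 780000 + 44000 + 111 = 29624111, matching the REPL output.)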
The last piece of the puzzle would be something like:
proposals.withColumn("utime",
  proposals.select("UPDTIME", "UPDTIMEMSEC")
    .map((t, tms) => time2usecs(t, tms)))
But I can't figure out how to do the df.select(column1, column2).map(...) part.
The common approach to using a method on dataframe columns in Spark is to define a UDF (User-Defined Function; see here for more information). For your case:
import org.apache.spark.sql.functions.udf
import spark.implicits._
val time2usecs = udf((time: String, msec: Int) => {
  val Array(hour, minute, seconds) = time.split(":").map(_.toInt)
  msec + seconds*1000 + minute*60*1000 + hour*60*60*1000
})
val df2 = df.withColumn("utime", time2usecs($"UPDTIME", $"UPDTIMEMSEC"))
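To sanity-check the result you can have a quick look at the new column; a minimal sketch, assuming df2 from above:

df2.select("UPDTIME", "UPDTIMEMSEC", "utime").show(5, false)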
spark.implicits._ is imported here to allow the use of the $ shorthand for the col() function.
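For comparison, the same call written with col() directly, without the $ shorthand (a minimal sketch; df as above):

import org.apache.spark.sql.functions.col

val df2 = df.withColumn("utime", time2usecs(col("UPDTIME"), col("UPDTIMEMSEC")))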
Why not use SQL all the way?
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._
def time2usecs(time: Column, msec: Column) = {
  val bits = split(time, ":")
  msec + bits(2).cast("int") * 1000 +
    bits(1).cast("int") * 60 * 1000 +
    bits(0).cast("int") * 60 * 60 * 1000
}
df.withColumn("ts", time2usecs(col("UPDTIME"), col("UPDTIMEMSEC")))
With your code you'd have to:
import spark.implicits._  // needed for the tuple encoder used by .as[(String, Int)]

proposals
  .select("UPDTIME", "UPDTIMEMSEC")
  .as[(String, Int)]
  .map { case (t, s) => time2usecs(t, s) }
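Note that this map-based variant yields a Dataset[Int] containing only the computed values; unlike withColumn it does not keep the remaining columns, which is one reason to prefer the UDF or Column-based approaches above.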