Spark Scala MLlib assignment syntax
I've been going through the guide at https://spark.apache.org/docs/latest/ml-statistics.html and I've noticed that they're using this syntax for val assignment:
val Row(coeff1: Matrix) = Correlation.corr(df, "features").head
Can someone elaborate on what this means? It seems similar to how Scala handles regex group extraction...
It is nothing more than pattern matching. To make it more obvious, you can rewrite it as:
val coeff1 = Correlation.corr(df, "features").head match {
  case Row(coeff1: Matrix) => coeff1
}
In other words, it just tries to match the object returned from the .head call and, on a successful match, creates a reference (coeff1) to the Matrix object contained in the returned Row.