Scala Spark type mismatch: found Unit, required rdd.RDD
I am reading a table from a MySQL database in a Spark project written in Scala. It is my first week on it, so I am really not yet up to speed. When I try to run
val clusters = KMeans.train(parsedData, numClusters, numIterations)
I am getting an error for parsedData that says: "type mismatch; found: org.apache.spark.rdd.RDD[Map[String,Any]] required: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector]"
My parsed data is created earlier like this:
val parsedData = dataframe_mysql.map(_.getValuesMap[Any](List("name", "event","execution","info"))).collect().foreach(println)
where dataframe_mysql is whatever is returned from the sqlcontext.read.format("jdbc").option(....) call.
How am I supposed to convert my Unit to fit the requirements to pass it into the train function?
According to the documentation I am supposed to use something like this:
data.map(s => Vectors.dense(s.split(' ').map(_.toDouble))).cache()
Am I supposed to transform my values to Double? Because when I try to run the command above, my project crashes.

Thank you!
Remove the trailing .collect().foreach(println). After calling collect, you no longer have an RDD; it just turns into a local collection.
Subsequently, when you call foreach, it returns Unit, because foreach is meant for side effects such as printing each element of a collection. Note that once the trailing calls are removed, parsedData is still an RDD[Map[String,Any]], so you will also need to map each row to an org.apache.spark.mllib.linalg.Vector (e.g. with Vectors.dense, as in the documentation snippet you quoted) before passing it to KMeans.train.
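The Unit problem above can be reproduced without Spark at all: on any Scala collection, map returns a transformed collection, while foreach returns Unit and exists only for side effects. A minimal plain-Scala sketch (hypothetical values; no Spark required) illustrating the difference:

```scala
object ForeachVsMap {
  def main(args: Array[String]): Unit = {
    val data = Seq("1.0 2.0", "3.0 4.0")

    // map returns a new collection of parsed values --
    // this is the shape of pipeline KMeans-style code needs
    val parsed: Seq[Array[Double]] = data.map(_.split(' ').map(_.toDouble))

    // foreach returns Unit -- assigning its result gives you nothing usable,
    // which is exactly why the trailing .foreach(println) caused the error
    val printed: Unit = data.foreach(println)

    val total = parsed.map(_.sum).sum
    println(total)  // prints 10.0
  }
}
```

The same reasoning applies to Spark's RDD API: RDD.map keeps you inside an RDD, while RDD.collect and RDD.foreach terminate the chain.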