
Scala Spark type mismatch: found Unit, required rdd.RDD

I am reading a table from a MySQL database in a Spark project written in Scala. It's my first week on it, so I am not very familiar with it yet. When I try to run

  val clusters = KMeans.train(parsedData, numClusters, numIterations)

I am getting an error for parsedData that says: "type mismatch; found : org.apache.spark.rdd.RDD[Map[String,Any]] required: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector]"

My parsedData is created like this:

 val parsedData = dataframe_mysql.map(_.getValuesMap[Any](List("name", "event","execution","info"))).collect().foreach(println)

where dataframe_mysql is whatever is returned from the sqlcontext.read.format("jdbc").option(....) call.
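For reference, a JDBC read of this shape typically looks like the sketch below; the connection values (url, dbtable, user, password) are hypothetical placeholders, not the ones from my project:

 import org.apache.spark.sql.DataFrame

 // Hypothetical JDBC read; all option values are placeholders.
 val dataframe_mysql: DataFrame = sqlcontext.read
   .format("jdbc")
   .option("url", "jdbc:mysql://localhost:3306/mydb")
   .option("dbtable", "my_table")
   .option("user", "user")
   .option("password", "password")
   .load()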

How am I supposed to convert my Unit into something that fits the requirements, so I can pass it to the train function?

According to the documentation, I am supposed to use something like this:

data.map(s => Vectors.dense(s.split(' ').map(_.toDouble))).cache()

Am I supposed to transform my values to Double? Because when I try to run the command above, my project crashes.

Thank you!

Remove the trailing .collect().foreach(println). After calling collect, you no longer have an RDD - it becomes a local collection.

Then, when you call foreach on that collection, it returns Unit - foreach exists for performing side effects, such as printing each element of a collection.
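Putting that together, a minimal sketch of the corrected pipeline might look like the following. The choice of columns, the getDouble calls, and the KMeans parameters are assumptions about your data, not something taken from the question; if a column is stored as a string, convert it with something like row.getString(0).toDouble instead:

 import org.apache.spark.mllib.clustering.KMeans
 import org.apache.spark.mllib.linalg.Vectors

 // Keep parsedData as an RDD[Vector]: no collect(), no foreach().
 // Assumes "execution" and "info" are numeric columns; string columns
 // such as "name" and "event" cannot be fed to KMeans as-is.
 val parsedData = dataframe_mysql
   .select("execution", "info")
   .rdd
   .map(row => Vectors.dense(row.getDouble(0), row.getDouble(1)))
   .cache()

 val numClusters = 2     // hypothetical parameters
 val numIterations = 20
 val clusters = KMeans.train(parsedData, numClusters, numIterations)

 // If you still want to inspect the data, print it separately:
 // parsedData.take(10).foreach(println)

The key point is that printing is kept out of the transformation chain, so the value you pass to train is still an RDD[Vector] rather than the Unit returned by foreach.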
