
Spark ML - KMeans - org.apache.spark.sql.AnalysisException: cannot resolve '`features`' given input columns

I am trying to analyze and cluster the Chicago crime dataset using Spark ML KMeans. Here is the code snippet:

import org.apache.spark.ml.clustering.KMeans

case class ChicCase(ID: Long, Case_Number: String, Date: String, Block: String, IUCR: String, Primary_Type: String, Description: String, Location_description: String, Arrest: Boolean, Domestic: Boolean, Beat: Int, District: Int, Ward: Int, Community_Area: Int, FBI_Code: String, X_Coordinate: Int, Y_Coordinate: Int, Year: Int, Updated_On: String, Latitude: Double, Longitude: Double, Location: String)

val city = spark.read.option("header", true).option("inferSchema", true).csv("/chicago_city/Crimes_2001_to_present_2").as[ChicCase]

// Keep only the numeric columns for clustering
val data = city.drop("ID", "Case_Number", "Date", "Block", "IUCR", "Primary_Type", "Description", "Location_description", "Arrest", "Domestic", "FBI_Code", "Year", "Location", "Updated_On")

val kmeans = new KMeans()
kmeans.setK(10).setSeed(1L)
val model = kmeans.fit(data)

But this throws the following exception:

org.apache.spark.sql.AnalysisException: cannot resolve '`features`' given input columns: [Ward, Longitude, X_Coordinate, Beat, Latitude, District, Y_Coordinate, Community_Area];
  at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:77)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:74)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:301)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:301)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:300)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionUp$1(QueryPlan.scala:190)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:200)
  at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2$1.apply(QueryPlan.scala:204)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:204)
  at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$5.apply(QueryPlan.scala:209)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:179)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:209)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:74)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:67)
  at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:126)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:67)
  at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:58)
  at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
  at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$withPlan(Dataset.scala:2589)
  at org.apache.spark.sql.Dataset.select(Dataset.scala:969)
  at org.apache.spark.ml.clustering.KMeans.fit(KMeans.scala:307)
  ... 90 elided

The data types are all Int or Double. What could be the problem?
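(For reference, a quick way to confirm which columns and types remain after the drop; this snippet is a minimal sketch added for illustration and was not in the original post.)

data.printSchema()
// Should list only the eight numeric columns, for example:
// root
//  |-- Beat: integer (nullable = true)
//  |-- District: integer (nullable = true)
//  ...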

In the Spark ML DataFrame API, you need to use a VectorAssembler to collect all of the feature columns into a single vector column, conventionally named features. When you fit the model, it looks for a features column; no such column exists in your Dataset, which is why you get the exception: cannot resolve '`features`' given input columns.

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.clustering.KMeans

// Assembler to collect all of the interesting columns into a single "features" column
val assembler = new VectorAssembler()
  .setInputCols(Array("Ward", "Longitude", "X_Coordinate", "Beat",
                      "Latitude", "District", "Y_Coordinate",
                      "Community_Area"))
  .setOutputCol("features")

val data = assembler.transform(city)
val kmeans = new KMeans()
val model = kmeans.fit(data)

model.getK
// res28: Int = 2 (the default k, since setK was not called in this example)
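Following on from the getK check above, here is a minimal sketch (assuming the same Spark 2.x setup; the names kmeans10, model10, and predictions are illustrative) that restores k = 10 and the seed from the question and inspects the fitted model:

val kmeans10 = new KMeans().setK(10).setSeed(1L)
val model10 = kmeans10.fit(data)

// The learned cluster centers, one vector per cluster
model10.clusterCenters.foreach(println)

// transform() appends a "prediction" column with each row's assigned cluster index
val predictions = model10.transform(data)
predictions.select("features", "prediction").show(5)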

