How to fix "java.lang.Integer cannot be cast to java.lang.Double" Error in Spark (Scala)?
I have a two-column DataFrame, "fit_comparison" (one Int column and one Double column), containing predicted values and linear regression results.
I have used the following code to get regression metrics:
val rm = new RegressionMetrics(
fit_comparison.rdd.map(x =>
(x(0).asInstanceOf[Double], x(1).asInstanceOf[Double])))
When I try to get specific regression metrics as below, I get the "java.lang.Integer cannot be cast to java.lang.Double" error.
println("MSE: " + rm.meanSquaredError)
Do I first have to convert the first column of "fit_comparison" to a Double?
Any help is appreciated, thank you.
You can try this:
val rm = new RegressionMetrics(
  fit_comparison.rdd.map(x =>
    // Column 0 holds a boxed Integer: unbox it as Int, then widen to Double.
    // Column 1 is already a Double, so a plain cast is enough.
    (x(0).asInstanceOf[Int].toDouble,
     x(1).asInstanceOf[Double])))
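The root cause is that Spark `Row` fields come back as `Any`: a boxed `java.lang.Integer` cannot be cast to `Double` with `asInstanceOf[Double]`, so the cast has to go through `Int` first. If you are not sure which numeric type each column actually holds, a pattern match is a safer sketch than hard-coded casts (`anyToDouble` below is a hypothetical helper, not part of Spark):

```scala
// Convert an Any value (as returned by Row.apply) to Double, whatever
// numeric type the column actually holds. Unsupported types fail loudly.
def anyToDouble(v: Any): Double = v match {
  case i: Int    => i.toDouble // boxed Integer column
  case l: Long   => l.toDouble
  case f: Float  => f.toDouble
  case d: Double => d          // already a Double
  case other     => sys.error(s"Unsupported type: ${other.getClass}")
}
```

Inside the map you would then write `(anyToDouble(x(0)), anyToDouble(x(1)))`. Alternatively, you can cast the column in the DataFrame itself before converting to an RDD, e.g. with `withColumn` and `Column.cast("double")`, so every field already arrives as a `Double`.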