Calculate confidence interval over the mean for all rows of a DataFrame in Spark / Scala
I need to compute the confidence interval over the mean of column value3 (the interval itself plus its upper and lower bounds) and attach the result to every row of the dataframe. This is my dataframe:
+-------+-------+-------+
| value1| value2| value3|
+-------+-------+-------+
|      a|      2|      3|
|      b|      5|      4|
|      b|      5|      4|
|      c|      3|      4|
+-------+-------+-------+
So my output should look like this (where x is the computed value):
+-------+-------+-------+--------+--------+------+
| value1| value2| value3| max_int| min_int|   int|
+-------+-------+-------+--------+--------+------+
|      a|      2|      3|       x|       x|     x|
|      b|      5|      4|       x|       x|     x|
|      b|      5|      4|       x|       x|     x|
|      c|      3|      4|       x|       x|     x|
+-------+-------+-------+--------+--------+------+
Since I couldn't find a suitable built-in function, I found the following code to compute it:
import org.apache.commons.math3.distribution.TDistribution
import org.apache.commons.math3.exception.MathIllegalArgumentException
import org.apache.commons.math3.stat.descriptive.SummaryStatistics

object ConfidenceIntervalApp {

  def main(args: Array[String]): Unit = {
    // my dataframe name is df; `stats` would need to be fed the values of value3
    val stats = new SummaryStatistics()
    // Calculate 95% confidence interval
    val ci: Double = calcMeanCI(stats, 0.95)
    println(f"Mean: ${stats.getMean}%f")
    val lower: Double = stats.getMean - ci
    val upper: Double = stats.getMean + ci
  }

  def calcMeanCI(stats: SummaryStatistics, level: Double): Double =
    try {
      // Create a t-distribution with N - 1 degrees of freedom
      val tDist: TDistribution = new TDistribution(stats.getN - 1)
      // Calculate the critical value
      val critVal: Double =
        tDist.inverseCumulativeProbability(1.0 - (1 - level) / 2)
      // Half-width of the confidence interval
      critVal * stats.getStandardDeviation / Math.sqrt(stats.getN)
    } catch {
      case e: MathIllegalArgumentException => java.lang.Double.NaN
    }
}
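To make the arithmetic in calcMeanCI concrete, here is a minimal, dependency-free sketch of the same computation: mean, sample standard deviation, and a 95% interval half-width. It substitutes the normal-approximation critical value z ≈ 1.96 for the commons-math3 t critical value (my simplification, reasonable only for larger samples), and uses the value3 column from the question as sample data:

```scala
// Sketch of the confidence-interval arithmetic, with a normal approximation
// (z ≈ 1.96) standing in for TDistribution's critical value.
object MeanCISketch {
  def meanCI(xs: Seq[Double], z: Double = 1.96): (Double, Double, Double) = {
    val n = xs.length
    val mean = xs.sum / n
    // sample variance: N - 1 in the denominator, matching SummaryStatistics
    val variance = xs.map(x => math.pow(x - mean, 2)).sum / (n - 1)
    // half-width of the interval: z * s / sqrt(n)
    val half = z * math.sqrt(variance) / math.sqrt(n)
    (mean - half, mean, mean + half)
  }

  def main(args: Array[String]): Unit = {
    // the value3 column from the question
    val (lower, mean, upper) = meanCI(Seq(3.0, 4.0, 4.0, 4.0))
    println(f"mean=$mean%.2f interval=[$lower%.2f, $upper%.2f]")
  }
}
```

For this sample the mean is 3.75, the sample standard deviation is 0.5, and the half-width is 1.96 * 0.5 / 2 = 0.49.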
Could you help me, or at least point me toward how to apply this to a column? Thanks in advance.
You can do something like this:
val cntInterval = df.select("value3").rdd.countApprox(timeout = 1000L, confidence = 0.95)
val (lowCnt, highCnt) = (cntInterval.getFinalValue().low, cntInterval.getFinalValue().high)
df.withColumn("max_int", lit(highCnt))
  .withColumn("min_int", lit(lowCnt))
  .withColumn("int", lit(cntInterval.getFinalValue().toString()))
  .show(false)
I adapted this from "In spark, how to estimate the number of elements in a dataframe quickly".
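Note that countApprox yields a confidence interval on the row count, not on the mean of value3. Since the question asks for the mean's interval attached to every row, here is a hedged sketch of one way to do that (assuming a local Spark session; the column names come from the question, and the normal-approximation z ≈ 1.96 is my substitution for the t critical value):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{avg, count, lit, stddev}

object MeanCIOnDataFrame {
  // Half-width of the interval: z * s / sqrt(n). z = 1.96 approximates the
  // 95% critical value; for small n, a t critical value would be more exact.
  def ciHalfWidth(sd: Double, n: Long, z: Double = 1.96): Double =
    z * sd / math.sqrt(n.toDouble)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.master("local[*]").appName("mean-ci").getOrCreate()
    import spark.implicits._

    val df = Seq(("a", 2, 3), ("b", 5, 4), ("b", 5, 4), ("c", 3, 4))
      .toDF("value1", "value2", "value3")

    // One aggregation pass: mean, sample standard deviation, row count.
    val stats = df.agg(avg("value3"), stddev("value3"), count("value3")).first()
    val (m, sd, n) = (stats.getDouble(0), stats.getDouble(1), stats.getLong(2))
    val half = ciHalfWidth(sd, n)

    // The interval is one value for the whole column, so lit() attaches it
    // to every row, as in the countApprox answer above.
    df.withColumn("max_int", lit(m + half))
      .withColumn("min_int", lit(m - half))
      .withColumn("int", lit(half))
      .show(false)

    spark.stop()
  }
}
```

The aggregation runs once over the column; attaching constants with lit() avoids a join and keeps the original rows untouched.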