
Getting java.lang.IllegalArgumentException: requirement failed while calling Spark's MLlib StreamingKMeans from a Java application

I am new to Spark and MLlib, and I am trying to call StreamingKMeans from my Java application, but I get an exception that I don't seem to understand. Here is the code where I transform my training data:

import org.apache.spark.api.java.function.Function;
import org.apache.spark.mllib.linalg.DenseVector;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.streaming.api.java.JavaDStream;

// Parse each comma-separated line into a dense vector,
// substituting 0 for null or empty fields.
JavaDStream<Vector> trainingData = sjsc.textFileStream("/training")
        .map(new Function<String, Vector>() {
            public Vector call(String line) throws Exception {
                String[] lineSplit = line.split(",");

                double[] doubleValues = new double[lineSplit.length];
                for (int i = 0; i < lineSplit.length; i++) {
                    String field = lineSplit[i];
                    doubleValues[i] = Double.parseDouble(
                            field != null && !"".equals(field) ? field : "0");
                }
                DenseVector denseV = new DenseVector(doubleValues);
                if (denseV.size() != 16) {
                    throw new Exception("All vectors are not the same size!");
                }
                System.out.println("Vector length is: " + denseV.size());
                return denseV;
            }
        });

And here is the code where I call the trainOn method:

int numDimensions = 18;
int numClusters = 2;
StreamingKMeans model = new StreamingKMeans();
model.setK(numClusters);
model.setDecayFactor(.5);
model.setRandomCenters(numDimensions, 0.0, Utils.random().nextLong());

model.trainOn(trainingData.dstream());

Here is the exception I am getting:

java.lang.IllegalArgumentException: requirement failed
    at scala.Predef$.require(Predef.scala:221)
    at org.apache.spark.mllib.util.MLUtils$.fastSquaredDistance(MLUtils.scala:292)
    at org.apache.spark.mllib.clustering.KMeans$.fastSquaredDistance(KMeans.scala:485)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$findClosest$1.apply(KMeans.scala:459)
    at org.apache.spark.mllib.clustering.KMeans$$anonfun$findClosest$1.apply(KMeans.scala:453)
    at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:73)
    at org.apache.spark.mllib.clustering.KMeans$.findClosest(KMeans.scala:453)
    at org.apache.spark.mllib.clustering.KMeansModel.predict(KMeansModel.scala:35)
    at org.apache.spark.mllib.clustering.StreamingKMeans$$anonfun$predictOnValues$1.apply(StreamingKMeans.scala:258)
    at org.apache.spark.mllib.clustering.StreamingKMeans$$anonfun$predictOnValues$1.apply(StreamingKMeans.scala:258)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$mapValues$1$$anonfun$apply$15.apply(PairRDDFunctions.scala:674)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$mapValues$1$$anonfun$apply$15.apply(PairRDDFunctions.scala:674)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$33.apply(RDD.scala:1177)
    at org.apache.spark.rdd.RDD$$anonfun$33.apply(RDD.scala:1177)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1498)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1498)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:64)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    at java.lang.Thread.run(Thread.java:662)

As you can see in the code above, I am checking to make sure that my vectors are all the same size, and they appear to be, even though the error suggests they are not. Any help would be greatly appreciated!

Vectors of different dimensions can cause this exception: fastSquaredDistance requires both vectors to have the same size. Note that in the code above the random centers are initialized with numDimensions = 18, while each training vector is checked to be of size 16, so the initial cluster centers and the training data do not match. A sketch of the fix follows.
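A minimal sketch of that fix, assuming the 16-element vectors produced by the map above are what the model should be trained on: initialize the centers with a matching dimension.

    int numDimensions = 16; // must match the size of the training vectors (was 18)
    int numClusters = 2;
    StreamingKMeans model = new StreamingKMeans();
    model.setK(numClusters);
    model.setDecayFactor(.5);
    model.setRandomCenters(numDimensions, 0.0, Utils.random().nextLong());

    model.trainOn(trainingData.dstream());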

In my experience, another possible cause is a Vector that contains NaN values.

None of the values in the vector may be NaN.
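As a precaution against both causes, here is a hedged sketch (the name cleanData is illustrative) that drops malformed vectors before training, assuming the same 16-dimensional data as above:

    // Keep only vectors that have the expected dimension and contain no NaN,
    // so fastSquaredDistance never sees a malformed vector.
    JavaDStream<Vector> cleanData = trainingData.filter(new Function<Vector, Boolean>() {
        public Boolean call(Vector v) {
            if (v.size() != 16) {
                return false; // enforce a consistent dimension
            }
            for (double d : v.toArray()) {
                if (Double.isNaN(d)) {
                    return false; // reject vectors containing NaN
                }
            }
            return true;
        }
    });
    model.trainOn(cleanData.dstream());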

