
Java Apache Spark MLlib


I have started learning MLlib with Apache Spark in Java. I am following the Spark 2.1.1 documentation from the official website. I have installed spark-2.1.1-bin-hadoop2.7 on Ubuntu 14.04 LTS, and I am trying to run this code:

import org.apache.spark.ml.classification.LogisticRegression;
import org.apache.spark.ml.classification.LogisticRegressionModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class JavaLogisticRegressionWithElasticNetExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
            .appName("JavaLogisticRegressionWithElasticNetExample")
            .master("local[*]")
            .getOrCreate();
  // $example on$
    // Load training data
    Dataset<Row> training = spark.read().format("libsvm")
            .load("data/mllib/sample_libsvm_data.txt");

    LogisticRegression lr = new LogisticRegression()
            .setMaxIter(10)
            .setRegParam(0.3)
            .setElasticNetParam(0.8);

    // Fit the model
    LogisticRegressionModel lrModel = lr.fit(training);

    // Print the coefficients and intercept for logistic regression
    System.out.println("Coefficients: "
            + lrModel.coefficients() + " Intercept: " + lrModel.intercept());

    // We can also use the multinomial family for binary classification
    LogisticRegression mlr = new LogisticRegression()
            .setMaxIter(10)
            .setRegParam(0.3)
            .setElasticNetParam(0.8)
            .setFamily("multinomial");

    // Fit the model
    LogisticRegressionModel mlrModel = mlr.fit(training);

    // Print the coefficients and intercepts for logistic regression with multinomial family
    System.out.println("Multinomial coefficients: " + lrModel.coefficientMatrix()
            + "\nMultinomial intercepts: " + mlrModel.interceptVector());
    // $example off$

    spark.stop();
  }
}

I have spark-2.1.1-bin-hadoop2.7 installed on my system, and my pom.xml file contains:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.1.1</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib_2.10</artifactId>
    <version>2.1.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-mllib-local_2.10 -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib-local_2.10</artifactId>
    <version>2.1.1</version>
</dependency>

But I get this exception:

17/09/08 16:42:19 INFO SparkEnv: Registering OutputCommitCoordinator
Exception in thread "main" java.lang.NoSuchMethodError: scala.Predef$.$scope()Lscala/xml/TopScope$;
    at org.apache.spark.ui.jobs.AllJobsPage.<init>(AllJobsPage.scala:39)
    at org.apache.spark.ui.jobs.JobsTab.<init>(JobsTab.scala:38)
    at org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:65)
    at org.apache.spark.ui.SparkUI.<init>(SparkUI.scala:82)
    at org.apache.spark.ui.SparkUI$.create(SparkUI.scala:220)
    at org.apache.spark.ui.SparkUI$.createLiveUI(SparkUI.scala:162)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:452)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2320)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
    at JavaLogisticRegressionWithElasticNetExample.main(JavaLogisticRegressionWithElasticNetExample.java:12)
17/09/08 16:42:19 INFO DiskBlockManager: Shutdown hook called
17/09/08 16:42:19 INFO ShutdownHookManager: Shutdown hook called
17/09/08 16:42:19 INFO ShutdownHookManager: Deleting directory /tmp/spark-8460a189-3039-47ec-8d75-9e0ca8b4ee5d
17/09/08 16:42:19 INFO ShutdownHookManager: Deleting directory /tmp/spark-8460a189-3039-47ec-8d75-9e0ca8b4ee5d/userFiles-9b6994eb-1376-47a3-929e-e415e1fdb0c0

This kind of error happens when you mix different versions of Scala in the same program. Indeed, among your dependencies (in pom.xml) you have some libraries built against Scala 2.10 and others built against Scala 2.11.

Use spark-sql_2.10 instead of spark-sql_2.11 and you will be fine (or change the mllib artifacts to 2.11).
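For reference, here is a minimal sketch of the dependency block with everything aligned on Scala 2.11; the suffix after the underscore in each artifactId is the Scala version the artifact was built against, so all of them must match:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.1.1</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib_2.11</artifactId>
    <version>2.1.1</version>
</dependency>
<!-- spark-mllib already depends on spark-mllib-local, so this explicit
     entry is optional; if kept, it must use the same _2.11 suffix -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib-local_2.11</artifactId>
    <version>2.1.1</version>
</dependency>

After updating, running mvn dependency:tree is a quick way to check that no artifact with a _2.10 suffix remains on the classpath.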

