
Java 8 Maven raises "reference to filter is ambiguous" error

I'm running a Spark quick start application:

/* SimpleApp.java */
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;

public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "/data/software/spark-2.4.4-bin-without-hadoop/README.md"; // Should be some file on your system
    SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
    Dataset<String> logData = spark.read().textFile(logFile).cache();

    long numAs = logData.filter(s -> s.contains("a")).count();
    long numBs = logData.filter(s -> s.contains("b")).count();

    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);

    spark.stop();
  }
}

As the official documentation says,

# Package a JAR containing your application
$ mvn package

When I run mvn package, it raises the following error:

[WARNING] File encoding has not been set, using platform encoding UTF-8, i.e. build is platform dependent!
[INFO] Compiling 1 source file to /home/dennis/java/spark_quick_start/target/classes
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR : 
[INFO] -------------------------------------------------------------
[ERROR] /home/dennis/java/spark_quick_start/src/main/java/SimpleApp.java:[11,25] reference to filter is ambiguous
  both method filter(scala.Function1<T,java.lang.Object>) in org.apache.spark.sql.Dataset and method filter(org.apache.spark.api.java.function.FilterFunction<T>) in org.apache.spark.sql.Dataset match
[ERROR] /home/dennis/java/spark_quick_start/src/main/java/SimpleApp.java:[12,25] reference to filter is ambiguous
  both method filter(scala.Function1<T,java.lang.Object>) in org.apache.spark.sql.Dataset and method filter(org.apache.spark.api.java.function.FilterFunction<T>) in org.apache.spark.sql.Dataset match
[INFO] 2 errors 
[INFO] -------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:00 min
[INFO] Finished at: 2020-01-13T15:04:55+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.3:compile (default-compile) on project simple-project: Compilation failure: Compilation failure: 
[ERROR] /home/dennis/java/spark_quick_start/src/main/java/SimpleApp.java:[11,25] reference to filter is ambiguous
[ERROR]   both method filter(scala.Function1<T,java.lang.Object>) in org.apache.spark.sql.Dataset and method filter(org.apache.spark.api.java.function.FilterFunction<T>) in org.apache.spark.sql.Dataset match
[ERROR] /home/dennis/java/spark_quick_start/src/main/java/SimpleApp.java:[12,25] reference to filter is ambiguous
[ERROR]   both method filter(scala.Function1<T,java.lang.Object>) in org.apache.spark.sql.Dataset and method filter(org.apache.spark.api.java.function.FilterFunction<T>) in org.apache.spark.sql.Dataset match
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException

This is my pom.xml:

<project>
  <groupId>edu.berkeley</groupId>
  <artifactId>simple-project</artifactId>
  <modelVersion>4.0.0</modelVersion>
  <name>Simple Project</name>
  <packaging>jar</packaging>
  <version>1.0</version>
  <dependencies>
    <dependency> <!-- Spark dependency -->
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.12</artifactId>
      <version>2.4.4</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>


<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.3</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>
    </plugins>
</build>

</project>

This means that your lambda expression can be converted to either a scala.Function1<T,java.lang.Object> or an org.apache.spark.api.java.function.FilterFunction<T>.

I don't know if this would be ambiguous in Scala as well, but in Java it is. You need to explicitly state the target type in this case:

long numAs = logData.filter((org.apache.spark.api.java.function.FilterFunction<String>)s -> s.contains("a")).count();
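Equivalently, importing FilterFunction keeps the casts short; a minimal sketch of the same fix applied to both calls:

import org.apache.spark.api.java.function.FilterFunction;

// The cast tells the compiler which filter overload to use
long numAs = logData.filter((FilterFunction<String>) s -> s.contains("a")).count();
long numBs = logData.filter((FilterFunction<String>) s -> s.contains("b")).count();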

Or write the code in Scala.

It seems to be a compatibility issue, as Spark 2.4.4 is built with Scala 2.11 (I'm not sure); I saw this on the official website. A likely explanation: in the Scala 2.12 artifacts, scala.Function1 is effectively a functional interface (only apply is abstract), so a Java lambda matches both filter overloads; in the 2.11 artifacts it is not, so only the FilterFunction overload applies.

After changing to 2.11, everything works fine!

    <dependency> <!-- Spark dependency -->
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>2.4.4</version>
      <scope>provided</scope>
    </dependency>
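
If you're unsure which Scala version your Spark distribution was built with, the spark-submit version banner reports it (the exact wording may vary by version):

# Print version info, including the Scala version Spark was built with
$ spark-submit --version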

Alternatively, you can convert the lambda expression into an anonymous class, as shown in the example below:


import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.SparkSession;

import scala.Function1;

public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "/data/software/spark-2.4.4-bin-without-hadoop/README.md"; // Should be some file on your system
    SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
    Dataset<String> logData = spark.read().textFile(logFile).cache();

    // An anonymous class implements exactly one of the two overloads, so the
    // call to filter is no longer ambiguous. The boolean result is autoboxed
    // to Boolean, which satisfies the Object return type. Note this compiles
    // against the Scala 2.12 artifacts, where Function1's other methods have
    // default implementations.
    long numAs = logData.filter(new Function1<String, Object>() {
        public Object apply(String s) {
            return s.contains("a");
        }
    }).count();

    long numBs = logData.filter(new Function1<String, Object>() {
        public Object apply(String s) {
            return s.contains("b");
        }
    }).count();

    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);

    spark.stop();
  }
}
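
A sketch of an alternative using Spark's Java-friendly API: FilterFunction serves the same purpose, picks the Java overload unambiguously, and extends java.io.Serializable, which Spark needs when it ships the function to executors.

import org.apache.spark.api.java.function.FilterFunction;

// Anonymous FilterFunction selects the Java overload of filter unambiguously
long numAs = logData.filter(new FilterFunction<String>() {
    @Override
    public boolean call(String s) {
        return s.contains("a");
    }
}).count();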
