Remote Flink job with query to Hive on yarn-cluster error: NoClassDefFoundError: org/apache/hadoop/mapred/JobConf

Environment: HDP 3.1.5 (Hadoop 3.1.1, Hive 3.1.0), Flink 1.12.2. Java code:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public static void main(String[] args) {
    // Blink planner (the default table planner in Flink 1.12)
    EnvironmentSettings settings = EnvironmentSettings.newInstance().useBlinkPlanner().build();
    TableEnvironment tblEnv = TableEnvironment.create(settings);

    // Register a HiveCatalog backed by the cluster's hive-site.xml
    String name = "myhive";
    String defaultDatabase = "default";
    String hiveConfDir = "/etc/hive/conf";
    HiveCatalog hive = new HiveCatalog(name, defaultDatabase, hiveConfDir);
    tblEnv.registerCatalog("myhive", hive);
    tblEnv.useCatalog("myhive");

    //tblEnv.getConfig().setSqlDialect(SqlDialect.HIVE);
    tblEnv.sqlQuery("SELECT * FROM users").execute().print();
}

Dependencies:

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-api-java-bridge_2.12</artifactId>
  <version>${flink.version}</version>
</dependency>

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-hive_2.12</artifactId>
  <version>${flink.version}</version>
</dependency>

Error 1:

org.apache.flink.util.FlinkException: JobMaster for job 35afe414e1dd861c86130ddd031312f2 failed.
    at org.apache.flink.runtime.dispatcher.Dispatcher.jobMasterFailed(Dispatcher.java:887) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
    at org.apache.flink.runtime.dispatcher.Dispatcher.dispatcherJobFailed(Dispatcher.java:465) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
    at org.apache.flink.runtime.dispatcher.Dispatcher.handleDispatcherJobResult(Dispatcher.java:444) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
    ...
Caused by: org.apache.flink.runtime.client.JobInitializationException: Could not instantiate JobManager.
    at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:494) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604) ~[?:1.8.0_292]
    ...
Caused by: org.apache.flink.runtime.JobException: Cannot instantiate the coordinator for operator Source: HiveSource-zjdev_xiangliang.users -> SinkConversionToTuple2
    at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:231) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
    at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:866) ~[flink-dist_2.12-1.12.2.jar:1.12.2]
    ...
Caused by: java.lang.NoClassDefFoundError: Lorg/apache/hadoop/mapred/JobConf;
    at java.lang.Class.getDeclaredFields0(Native Method) ~[?:1.8.0_292]
    at java.lang.Class.privateGetDeclaredFields(Class.java:2583) ~[?:1.8.0_292]
    at java.lang.Class.getDeclaredField(Class.java:2068) ~[?:1.8.0_292]
    at java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1871) ~[?:1.8.0_292]
    ...
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.JobConf
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[?:1.8.0_292]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_292]
    ...

Trying to add this dependency:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-mapreduce-client-core</artifactId>
      <version>${hadoop.version}</version>
      <scope>provided</scope>
    </dependency>

yields another error:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;
    at org.apache.flink.runtime.entrypoint.parser.CommandLineOptions.<clinit>(CommandLineOptions.java:27)
    at org.apache.flink.runtime.entrypoint.DynamicParametersConfigurationParserFactory.options(DynamicParametersConfigurationParserFactory.java:43)
    at org.apache.flink.runtime.entrypoint.DynamicParametersConfigurationParserFactory.getOptions(DynamicParametersConfigurationParserFactory.java:50)
    at org.apache.flink.runtime.entrypoint.parser.CommandLineParser.parse(CommandLineParser.java:42)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypointUtils.parseParametersOrExit(ClusterEntrypointUtils.java:63)
    at org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint.main(YarnJobClusterEntrypoint.java:89)

I tried to resolve the conflict between commons-cli 1.3.1 and 1.2: choosing 1.3.1 brings back error 1; choosing 1.2 produces error 2 (Option.builder(String) was only added in commons-cli 1.3, so 1.2 cannot satisfy Flink); adding commons-cli 1.4 as an explicit dependency brings back error 1 again.
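For reference, this is roughly how a single commons-cli version would be pinned in Maven (a sketch only; as noted above, pinning by itself did not make the problem go away here):

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-cli</groupId>
      <artifactId>commons-cli</artifactId>
      <version>1.4</version>
    </dependency>
  </dependencies>
</dependencyManagement>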

1. For commons-cli, choose 1.3.1 or 1.4.
2. Add $hadoop_home/../hadoop_mapreduce/* to yarn.application.classpath (see the sketch below).
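A minimal sketch of step 2 as a yarn-site.xml property (hypothetical snippet: keep whatever entries your cluster already has and append the mapreduce wildcard, whose exact path depends on your layout):

<property>
  <name>yarn.application.classpath</name>
  <!-- existing entries kept as-is; the mapreduce wildcard is appended -->
  <value>...existing entries...,$HADOOP_HOME/../hadoop_mapreduce/*</value>
</property>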

I got this error too. I first suspected a version conflict, because my Hive 3.1.2 was newer than my Hadoop 2.7.6 and I kept hitting errors such as Guava version conflicts. When I used Flink 1.15.0 and connected to Hive with flink-sql-connector-hive-3.1.2_2.12-1.15.0.jar, I got Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.JobConf.

Resolution: this error means the jars were simply not found. Move hadoop-common, hadoop-mapreduce-client-core, hadoop-mapreduce-client-common, hadoop-mapreduce-client-jobclient, and hive-exec-3.1.2.jar into Flink's lib directory.
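A sketch of that jar copy, assuming a vanilla Apache Hadoop layout under $HADOOP_HOME and Hive under $HIVE_HOME (the paths are assumptions; on HDP the jars live elsewhere, e.g. under /usr/hdp/current):

# Illustrative paths; adjust to your distribution's layout.
cp $HADOOP_HOME/share/hadoop/common/hadoop-common-*.jar                        $FLINK_HOME/lib/
cp $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core-*.jar      $FLINK_HOME/lib/
cp $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-common-*.jar    $FLINK_HOME/lib/
cp $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*.jar $FLINK_HOME/lib/
cp $HIVE_HOME/lib/hive-exec-3.1.2.jar                                          $FLINK_HOME/lib/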
