
SparkContext: Error initializing SparkContext on MapR Sandbox

I am trying to run this MapR example project. I tried to execute the ml.Flight class in the sandbox, and at the following line of code,

val spark: SparkSession = SparkSession.builder().appName("churn").getOrCreate()

I get this error:

[user01@maprdemo ~]$ spark-submit --class ml.Flight --master local[2] spark-ml-flightdelay-1.0.jar
Warning: Unable to determine $DRILL_HOME
18/12/19 05:39:09 WARN Utils: Your hostname, maprdemo.local resolves to a loopback address: 127.0.0.1; using 10.0.3.1 instead (on interface enp0s3)
18/12/19 05:39:09 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
18/12/19 05:39:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/12/19 05:39:28 ERROR SparkContext: Error initializing SparkContext.
java.io.IOException: Could not create FileClient
    at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:656)
    at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:709)
    at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:1419)
    at com.mapr.fs.MapRFileSystem.getFileStatus(MapRFileSystem.java:1093)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:522)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:933)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:924)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:924)
    at ml.Flight$.main(Flight.scala:37)
    at ml.Flight.main(Flight.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:899)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: Could not create FileClient
    at com.mapr.fs.MapRClientImpl.<init>(MapRClientImpl.java:137)
    at com.mapr.fs.MapRFileSystem.lookupClient(MapRFileSystem.java:650)
    ... 22 more

I am new to Scala/Spark; any help is welcome. Thanks in advance.

I think you are using or exporting a different version of spark-submit than the one bundled with MapR.

For example:

/opt/mapr/spark/spark-2.3.1/bin/spark-submit
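Putting that together with the failing invocation from the question, the suggestion amounts to calling the MapR-bundled spark-submit by its full path instead of relying on whichever spark-submit resolves first on the PATH. This is a sketch: the spark-2.3.1 directory comes from the answer above, and the version installed under /opt/mapr/spark on your sandbox may differ.

```shell
# Invoke the Spark build that ships with MapR by its full path.
# Adjust the version directory to match what is installed under /opt/mapr/spark.
/opt/mapr/spark/spark-2.3.1/bin/spark-submit \
  --class ml.Flight \
  --master local[2] \
  spark-ml-flightdelay-1.0.jar
```

The --class, --master, and jar arguments are copied unchanged from the failing command in the question; only the spark-submit binary being invoked changes.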

