
pyspark: ship jar dependency with spark-submit

I wrote a pyspark script that reads two json files, coGroups them together, and sends the result to an elasticsearch cluster; everything works (mostly) as expected when I run it locally. I downloaded the elasticsearch-hadoop jar, which provides the org.elasticsearch.hadoop.mr.EsOutputFormat and org.elasticsearch.hadoop.mr.LinkedMapWritable classes, run my job with pyspark using the --jars argument, and I can see documents appearing in my elasticsearch cluster.
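The failing call in the traceback below is `saveAsNewAPIHadoopFile` with an `es_write_conf` dictionary. A minimal sketch of that write path, for context (the host, port, and `myindex/mytype` resource name here are hypothetical illustrations, not values from the original post):

```python
def make_es_write_conf(host, port, resource):
    """Build the Hadoop job configuration read by elasticsearch-hadoop's
    EsOutputFormat. Keys are standard es-hadoop settings."""
    return {
        "es.nodes": host,
        "es.port": str(port),          # es-hadoop expects string values
        "es.resource": resource,       # "<index>/<type>"
        "es.input.json": "yes",        # values are already JSON strings
    }

# The save call itself requires a live SparkContext and the
# elasticsearch-hadoop jar on the worker classpath -- it is the call
# that raises the ClassNotFoundException below when the jar is missing:
#
# rdd.saveAsNewAPIHadoopFile(
#     path="-",  # ignored by EsOutputFormat
#     outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
#     keyClass="org.apache.hadoop.io.NullWritable",
#     valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
#     conf=make_es_write_conf("localhost", 9200, "myindex/mytype"),
# )
```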

However, when I try to run it on the Spark cluster, I get this error:

Traceback (most recent call last):
  File "/root/spark/spark_test.py", line 141, in <module>
    conf=es_write_conf
  File "/root/spark/python/pyspark/rdd.py", line 1302, in saveAsNewAPIHadoopFile
    keyConverter, valueConverter, jconf)
  File "/root/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/root/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile.
: java.lang.ClassNotFoundException: org.elasticsearch.hadoop.mr.LinkedMapWritable
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:274)
    at org.apache.spark.util.Utils$.classForName(Utils.scala:157)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1$$anonfun$apply$9.apply(PythonRDD.scala:611)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1$$anonfun$apply$9.apply(PythonRDD.scala:610)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1.apply(PythonRDD.scala:610)
    at org.apache.spark.api.python.PythonRDD$$anonfun$getKeyValueTypes$1.apply(PythonRDD.scala:609)
    at scala.Option.flatMap(Option.scala:170)
    at org.apache.spark.api.python.PythonRDD$.getKeyValueTypes(PythonRDD.scala:609)
    at org.apache.spark.api.python.PythonRDD$.saveAsNewAPIHadoopFile(PythonRDD.scala:701)
    at org.apache.spark.api.python.PythonRDD.saveAsNewAPIHadoopFile(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:745)

This seems pretty clear to me: the elasticsearch-hadoop jar is not available on the workers. The question is: how do I ship it along with my application? I could use sc.addPyFile for a Python dependency, but that won't work with jars, and using the --jars parameter of spark-submit doesn't help.

It turns out --jars just works; the problem was how I was launching the spark-submit job in the first place. The correct way to execute it is:

./bin/spark-submit <options> scriptname

so the --jars option must be placed before the script:

./bin/spark-submit --jars /path/to/my.jar myscript.py

This is obvious once you realize that this is the only way to pass arguments to the script itself, since everything after the script name is treated as an input argument for the script:

./bin/spark-submit --jars /path/to/my.jar myscript.py --do-magic=true
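To illustrate the split: spark-submit consumes --jars itself, while everything after the script name is forwarded to the script untouched. A hypothetical argument handler for myscript.py (the --do-magic flag is just the example from the command above, not part of any real script):

```python
import argparse

def parse_script_args(argv):
    """Parse only the arguments spark-submit forwards to the script,
    i.e. everything after 'myscript.py' on the command line."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--do-magic", default="false")
    return parser.parse_args(argv)

# ./bin/spark-submit --jars /path/to/my.jar myscript.py --do-magic=true
# -> the script itself sees only ["--do-magic=true"]; --jars was
#    already consumed by spark-submit.
args = parse_script_args(["--do-magic=true"])
```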


