
UnsatisfiedLinkError in spark EMR job with native library

I'm trying to run a Spark job that uses a native shared library (.so). I'm using --jars to copy my .so to all executors (and the file seems to be there, alongside the Spark app .jar), but somehow I'm failing to set up the environment so the library is found and used. I tried --conf spark.executor.extraLibraryPath and -Djava.library.path, but I'm not really sure what paths to use. Is there an easy way to make this work? (Using AWS EMR 4.5.0, Spark 1.6.x.)

My spark-submit:

spark-submit \
--deploy-mode cluster \
--driver-java-options \
--jars s3://at/emr-test/asb_UT/libSplineFitWrapperJava.so \
--class com.SplineFittingDummy \
s3://at/emr-test/asb_UT/asb-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
s3://at/emr-test/asb_UT/testPoints01.xml \
s3://at/emr-test/asb_UT/output
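
For reference, one commonly suggested setup is to ship the library with --files (so it lands in each YARN container's working directory) and point the library path at that directory. This is a sketch only, not verified on this cluster; the extraLibraryPath value of ./ and the --files distribution are assumptions:

spark-submit \
--deploy-mode cluster \
--files s3://at/emr-test/asb_UT/libSplineFitWrapperJava.so \
--conf spark.executor.extraLibraryPath=./ \
--conf spark.driver.extraLibraryPath=./ \
--class com.SplineFittingDummy \
s3://at/emr-test/asb_UT/asb-0.0.1-SNAPSHOT-jar-with-dependencies.jar \
s3://at/emr-test/asb_UT/testPoints01.xml \
s3://at/emr-test/asb_UT/output

The JVM code would then call System.loadLibrary("SplineFitWrapperJava"), letting the loader resolve the file via java.library.path / LD_LIBRARY_PATH rather than a hard-coded path.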

The problem was with the way the .so was built. After trying different build environments that failed (Solaris with SFW, Debian with g++ 4.6, ...), I compiled the .so on EMR itself and everything works now. It would be helpful if Amazon provided a Docker image matching their setup, so we could compile without copying all the source code to EMR.
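
Until such an image exists, a rough approximation is to build inside the public amazonlinux container, whose toolchain and glibc are close to what runs on the EMR hosts. This is a sketch only: the amazonlinux:1 tag, the JNI include paths under /usr/lib/jvm/java, and the source file name SplineFitWrapper.cpp are all assumptions, not part of the original setup.

# Build the shared library inside an Amazon Linux container
# instead of compiling on the EMR cluster itself.
docker run --rm -v "$PWD":/src -w /src amazonlinux:1 bash -c '
yum install -y gcc-c++ make java-1.8.0-openjdk-devel &&
g++ -shared -fPIC \
-I/usr/lib/jvm/java/include \
-I/usr/lib/jvm/java/include/linux \
-o libSplineFitWrapperJava.so SplineFitWrapper.cpp'

The resulting libSplineFitWrapperJava.so can then be uploaded to S3 and distributed to the executors as shown above.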
