java.lang.UnsatisfiedLinkError when writing using crunch MemPipeline
I am using com.cloudera.crunch version '0.3.0-3-cdh-5.2.1'.
I have a small program that reads some Avro files and filters out invalid data based on some criteria. I am using pipeline.write(PCollection, AvroFileTarget) to write the invalid data output. It works fine in the production run.
For unit testing this piece of code, I use a MemPipeline instance. But in that case, it fails while writing the output.
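For reference, the failing unit test is roughly of this shape (a minimal sketch; the class name, test data, and output path are illustrative and not from the original, and it assumes the old com.cloudera.crunch package layout of this Crunch version):

```java
// Hypothetical sketch of a MemPipeline-based unit test that triggers
// the UnsatisfiedLinkError on write. Requires the Crunch and Hadoop
// jars on the classpath; names below are illustrative.
import com.cloudera.crunch.PCollection;
import com.cloudera.crunch.Pipeline;
import com.cloudera.crunch.impl.mem.MemPipeline;

public class InvalidRecordWriteTest {
    public static void main(String[] args) {
        // In-memory pipeline intended for unit tests
        Pipeline pipeline = MemPipeline.getInstance();

        // Illustrative in-memory test data standing in for the
        // filtered "invalid" records
        PCollection<String> invalid =
                MemPipeline.collectionOf("bad-record-1", "bad-record-2");

        // Writing the collection is where NativeCrc32 is invoked and
        // the native hadoop library lookup fails
        pipeline.write(invalid,
                new com.cloudera.crunch.io.avro.AvroFileTarget("target/test-output"));
        pipeline.done();
    }
}
```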
I get the error:
java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
at org.apache.hadoop.util.NativeCrc32.calculateChunkedSumsByteArray(NativeCrc32.java:86)
at org.apache.hadoop.util.DataChecksum.calculateChunkedSums(DataChecksum.java:428)
at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:197)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:78)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:50)
at java.io.DataOutputStream.writeBytes(DataOutputStream.java:276)
at com.cloudera.crunch.impl.mem.MemPipeline.write(MemPipeline.java:159)
Any idea what's wrong?
The Hadoop environment variables should be configured properly, along with hadoop.dll and winutils.exe.
Also pass the JVM argument -Djava.library.path=HADOOP_HOME/lib/native when executing the MR job/application.
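On Windows, the setup described above might look like the following (a sketch; the HADOOP_HOME path and classpath placeholder are illustrative and must be adapted to your installation):

```shell
:: Point HADOOP_HOME at an installation whose bin folder contains
:: hadoop.dll and winutils.exe, and put that folder on PATH
set HADOOP_HOME=C:\hadoop
set PATH=%HADOOP_HOME%\bin;%PATH%

:: Pass the native library path to the JVM when running the job or tests
java -Djava.library.path=%HADOOP_HOME%\lib\native -cp <your-classpath> com.example.YourJob
```

If the tests run under Maven, the same -Djava.library.path argument can instead be passed to the surefire plugin's argLine so the forked test JVM picks it up.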