
UnsatisfiedLinkError when running Hadoop MapReduce Java program

I am trying to run this MapReduce program with Hadoop on Windows 8.1. After much effort, I have gotten it pretty close to working. I've got Java 1.8.0_45 and Hadoop-2.7.0. I also have the winutils.exe and hadoop.dll which have caused issues for a lot of people.

Here is the code:

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.log4j.BasicConfigurator;

public class OSProject {

    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {

        @Override
        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {

            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);

            while (tokenizer.hasMoreTokens()) {
                value.set(tokenizer.nextToken());
                output.collect(value, new IntWritable(1));
            }
        }
    }

    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {

        BasicConfigurator.configure();
        JobConf conf = new JobConf(OSProject.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path("c:/hwork/input"));
        FileOutputFormat.setOutputPath(conf, new Path("c:/hwork/output"));

        //FileInputFormat.setInputPaths(conf, new Path(args[0]));
        //FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}

The problem is that the program throws this error when I run it:

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode(NativeIO.java:524)
at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:473)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:526)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:504)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:305)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:147)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:833)
at osproject.OSProject.main(OSProject.java:86)

The line that it errors on is:

JobClient.runJob(conf);

What it looks like is that it cannot create the output file for some reason. Any answer as to why this is happening and how to fix it would be greatly appreciated.

UnsatisfiedLinkError is thrown when a library can't be found (or is found but doesn't contain the implementation of some function), so the native method can't be resolved. In my opinion your app can't find the proper lib.
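The link failure can be reproduced in isolation. Hadoop's NativeCodeLoader calls System.loadLibrary("hadoop"), which on Windows searches the directories listed in java.library.path for hadoop.dll; a minimal sketch of that check (diagnostic only, class name is my own):

```java
// Diagnostic sketch: shows how the JVM resolves the hadoop native library.
public class NativeLoadCheck {
    public static void main(String[] args) {
        // These are the directories the JVM will search for hadoop.dll on Windows.
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
        try {
            // Same call Hadoop's NativeCodeLoader makes internally.
            System.loadLibrary("hadoop");
            System.out.println("hadoop native library loaded");
        } catch (UnsatisfiedLinkError e) {
            System.out.println("hadoop native library NOT found: " + e.getMessage());
        }
    }
}
```

If this prints "NOT found", the MapReduce job will hit the same UnsatisfiedLinkError as soon as it touches native code.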

Your problem is similar to this one: hadoop mapreduce: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z

So I would suggest running your app with the native library search path pointing to the directory which contains hadoop.dll. On Windows that means java.library.path (or the PATH environment variable) rather than LD_LIBRARY_PATH, which only applies on Linux.
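Hadoop also looks for winutils.exe under hadoop.home.dir, so both properties usually need to point at the install. A minimal sketch, assuming a placeholder install path of C:/hadoop-2.7.0 (adjust to your machine; the class name is my own):

```java
// Sketch: properties Hadoop-on-Windows depends on. Paths are placeholders.
public class HadoopEnvSetup {
    public static void main(String[] args) {
        // hadoop.home.dir tells Hadoop where to find bin\winutils.exe.
        // Must be set before any Hadoop class initializes.
        System.setProperty("hadoop.home.dir", "C:/hadoop-2.7.0");

        // java.library.path is where the JVM searches for hadoop.dll. It is
        // cached at startup, so prefer passing it on the command line:
        //   java -Djava.library.path=C:/hadoop-2.7.0/bin osproject.OSProject
        System.out.println("hadoop.home.dir   = " + System.getProperty("hadoop.home.dir"));
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
    }
}
```

Alternatively, adding the directory containing hadoop.dll to PATH (or copying the dll next to winutils.exe in %HADOOP_HOME%\bin and putting that on PATH) achieves the same effect without JVM flags.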
