
Error while running Map reduce on Hadoop 2.6.0 on Windows

I've set up a single-node Hadoop 2.6.0 cluster on my Windows 8.1 machine using this tutorial - https://wiki.apache.org/hadoop/Hadoop2OnWindows

All daemons are up and running. I'm able to access HDFS using hadoop fs -ls /, but I haven't loaded anything yet, so there is nothing to show as of now.

But when I run a simple map reduce program, I get the error below:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
at org.apache.hadoop.util.NativeCrc32.calculateChunkedSumsByteArray(NativeCrc32.java:86)
at org.apache.hadoop.util.DataChecksum.calculateChunkedSums(DataChecksum.java:430)
at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:202)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163)
at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.close(ChecksumFileSystem.java:400)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.hadoop.mapreduce.split.JobSplitWriter.createSplitFiles(JobSplitWriter.java:80)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:603)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at wordcount.Wordcount.main(Wordcount.java:62)
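
For reference, the driver is the stock WordCount pattern. Stripped of the mapper and reducer wiring, it is roughly the sketch below (class and argument names are illustrative, not my exact code); the waitForCompletion call at the end is the Wordcount.java:62 frame in the trace:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Wordcount {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(Wordcount.class);
    // Mapper/Reducer setup omitted -- the job dies before they ever run,
    // while the client is writing the input-split files into HDFS.
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    // The UnsatisfiedLinkError is thrown from inside this call
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}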

Error from the hadoop fs -put command:

(screenshot of the error, not reproduced here)

Any advice would be of great help.

org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray is part of hadoop.dll:

JNIEXPORT void JNICALL Java_org_apache_hadoop_util_NativeCrc32_nativeComputeChunkedSumsByteArray
  (JNIEnv *env, jclass clazz,
    jint bytes_per_checksum, jint j_crc_type,
    jarray j_sums, jint sums_offset,
    jarray j_data, jint data_offset, jint data_len,
    jstring j_filename, jlong base_pos, jboolean verify)
{
  ...
}

The UnsatisfiedLinkError indicates that you did not deploy hadoop.dll to %HADOOP_HOME%\bin, or that the process loaded a wrong dll from somewhere else. Make sure the correct dll is placed in %HADOOP_HOME%\bin, and make sure it is the one actually loaded (use Process Explorer).
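
A quick way to see what the JVM actually resolves is a small standalone check (a diagnostic sketch, not part of Hadoop): print java.library.path and attempt the same System.loadLibrary call that Hadoop makes. Note that the load can succeed and you can still get the method-level UnsatisfiedLinkError, if the hadoop.dll that won is from an older Hadoop build that predates nativeComputeChunkedSumsByteArray.

public class NativeLibCheck {
  public static void main(String[] args) {
    // Directories the JVM searches, in order, when resolving hadoop.dll
    System.out.println("java.library.path = "
        + System.getProperty("java.library.path"));
    try {
      // Same call Hadoop's NativeCodeLoader makes (see the static block below)
      System.loadLibrary("hadoop");
      System.out.println("hadoop.dll loaded");
    } catch (Throwable t) {
      System.out.println("failed to load hadoop.dll: " + t);
    }
  }
}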

You should also see the NativeCodeLoader output in the log:

 private static boolean nativeCodeLoaded = false;

  static {
    // Try to load native hadoop library and set fallback flag appropriately
    if(LOG.isDebugEnabled()) {
      LOG.debug("Trying to load the custom-built native-hadoop library...");
    }
    try {
      System.loadLibrary("hadoop");
      LOG.debug("Loaded the native-hadoop library");
      nativeCodeLoaded = true;
    } catch (Throwable t) {
      // Ignore failure to load
      if(LOG.isDebugEnabled()) {
        LOG.debug("Failed to load native-hadoop with error: " + t);
        LOG.debug("java.library.path=" +
            System.getProperty("java.library.path"));
      }
    }

    if (!nativeCodeLoaded) {
      LOG.warn("Unable to load native-hadoop library for your platform... " +
               "using builtin-java classes where applicable");
    }
  }

Enable DEBUG level for this component and you should see "Loaded the native-hadoop library" (since your code acts as if hadoop.dll was loaded). The most likely problem is that the wrong DLL is loaded because it is found first on the PATH.
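
The log4j:WARN "No appenders" lines at the top of your output mean no log4j configuration was picked up at all, so these messages are currently invisible to you. A minimal log4j.properties on the job's classpath, with NativeCodeLoader raised to DEBUG, would look something like this:

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n
# Surface the "Trying to load..." / "Loaded the native-hadoop library" lines
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=DEBUG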

hadoop.dll should also be added to C:\Windows\System32. I got it working with the help of this link - http://cnblogs.com/marost/p/4372778.html (use an online translator to translate it into your native language).
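
Concretely, that boils down to something like the following from an elevated command prompt (the copy into System32 is the workaround from the link above; where simply lists every hadoop.dll visible on the PATH, so you can spot stale copies):

copy %HADOOP_HOME%\bin\hadoop.dll C:\Windows\System32\
where hadoop.dll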
