
Amazon EMR: running Custom Jar with input and output from S3

I am trying to run an EMR cluster with a custom JAR step. The program takes input from S3 and outputs to S3 (or at least this is what I want to accomplish). In the step configuration, I have the following in the arguments field (a CLI sketch of the equivalent step is included after the driver code below):

v3.MaxTemperatureDriver
s3n://hadoopbook/ncdc/all
s3n://hadoop-szhu/max-temp

where hadoopbook/ncdc/all is the path to the bucket containing the input data (as a side note, the example I'm running is from this book), and hadoop-szhu is my own bucket where I want to store the output. Following this post, my MapReduce driver looks like this:

package v3;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

import v1.MaxTemperatureReducer;

public class MaxTemperatureDriver extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    if (args.length != 2) {
      System.err.printf("Usage: %s [generic options] <input> <output>\n",
          getClass().getSimpleName());
      ToolRunner.printGenericCommandUsage(System.err);
      return -1;
    }

    Job job = new Job(getConf(), "Max temperature");
    job.setJarByClass(getClass());

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    job.setMapperClass(MaxTemperatureMapper.class);
    job.setCombinerClass(MaxTemperatureReducer.class);
    job.setReducerClass(MaxTemperatureReducer.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    int exitCode = ToolRunner.run(new MaxTemperatureDriver(), args);
    System.exit(exitCode);
  }
}
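
For reference, the step configuration described above corresponds roughly to adding a custom JAR step with the AWS CLI. This is only a sketch: the cluster ID is a placeholder, and the JAR location (s3://hadoop-szhu/max-temperature.jar) is an assumption about where the built jar was uploaded; neither value is taken from the question.

# Sketch only: the cluster ID and jar path are placeholders, not values from the question.
aws emr add-steps --cluster-id j-XXXXXXXXXXXXX \
  --steps 'Type=CUSTOM_JAR,Name=MaxTemperature,ActionOnFailure=CONTINUE,Jar=s3://hadoop-szhu/max-temperature.jar,Args=[v3.MaxTemperatureDriver,s3n://hadoopbook/ncdc/all,s3n://hadoop-szhu/max-temp]'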

However, when I try to run this, I get the following error:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: s3n

I've also tried to copy the data from S3 to the cluster using the following command (run after SSHing into the master node):

hadoop distcp \
  -Dfs.s3n.awsAccessKeyId='...' \
  -Dfs.s3n.awsSecretAccessKey='...' \
  s3n://hadoopbook/ncdc/all input/ncdc/all

But I get a bunch of errors; an excerpt is included below:

2016-09-03 07:07:11,858 FATAL [IPC Server handler 6 on 43495] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1472884232220_0001_m_000000_0 - exited : java.io.IOException: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException: java.io.FileNotFoundException: No such file or directory 's3n://hadoopbook/ncdc/all/1901.gz'
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:224)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:796)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.tools.mapred.RetriableFileCopyCommand$CopyReadException: java.io.FileNotFoundException: No such file or directory 's3n://hadoopbook/ncdc/all/1901.gz'
    ... 10 more
Caused by: java.io.FileNotFoundException: No such file or directory 's3n://hadoopbook/ncdc/all/1901.gz'
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.getFileStatus(S3NativeFileSystem.java:818)
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.getFileStatus(EmrFileSystem.java:511)
    at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:219)
    ... 9 more

I'm not sure where the issue lies, but I would be happy to include more details (please comment below). Thanks!

s3n:// is the old protocol; you should be using s3:// instead.

Reference: http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/emr-plan-file-systems.html
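
Applying that to the question, the step arguments become (same driver class and buckets, only the scheme changed):

v3.MaxTemperatureDriver
s3://hadoopbook/ncdc/all
s3://hadoop-szhu/max-temp

The same scheme change applies to the distcp attempt. On an EMR cluster, EMRFS normally obtains credentials from the cluster's IAM role, so the explicit access-key properties are usually unnecessary; this sketch assumes the cluster was launched with a suitable role:

hadoop distcp s3://hadoopbook/ncdc/all input/ncdc/all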
