
Using HDFS with Apache Spark on Amazon EC2

I have a Spark cluster set up using the spark-ec2 script. The cluster is running, and I am now trying to put a file on HDFS so that the cluster can work on it.

On my master I have a file data.txt. I added it to HDFS by running ephemeral-hdfs/bin/hadoop fs -put data.txt /data.txt.

Now, in my code, I have:

JavaRDD<String> rdd = sc.textFile("hdfs://data.txt",8);

I get an exception when doing this:

Exception in thread "main" java.net.UnknownHostException: unknown host: data.txt
    at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:214)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1196)
    at org.apache.hadoop.ipc.Client.call(Client.java:1050)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at com.sun.proxy.$Proxy6.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:176)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:203)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:123)
    at org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:62)
    at org.apache.spark.rdd.RDD.sortBy(RDD.scala:488)
    at org.apache.spark.api.java.JavaRDD.sortBy(JavaRDD.scala:188)
    at SimpleApp.sortBy(SimpleApp.java:118)
    at SimpleApp.main(SimpleApp.java:30)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

How do I properly put this file into HDFS so that my cluster can start working on the dataset? I also tried just using the local file path instead, such as:

JavaRDD<String> rdd = sc.textFile("/home/ec2-user/data.txt",8);

When I do this and submit the job as:

./spark/bin/spark-submit --class SimpleApp --master spark://ec2-xxx.amazonaws.com:7077 --total-executor-cores 8 /home/ec2-user/simple-project-1.0.jar

I only get one executor, and the worker nodes in the cluster don't seem to be getting involved. I assume this is because I am using a local file and EC2 does not have an NFS.

The first part you provide after hdfs:// needs to be the hostname, so the URI would be hdfs://{active_master}:9000/data.txt (in case it helps in the future, the default port for persistent-hdfs with the spark-ec2 script is 9010).
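
For reference, a minimal sketch of what the corrected read could look like; ec2-xxx.amazonaws.com below is a placeholder for the master's public DNS (the same host passed to --master), and 9000 is the default ephemeral-hdfs NameNode port:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SimpleApp {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("SimpleApp");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // The URI now names the NameNode host and port before the file path
        // (use 9010 instead of 9000 if you are on persistent-hdfs).
        JavaRDD<String> rdd = sc.textFile("hdfs://ec2-xxx.amazonaws.com:9000/data.txt", 8);
        System.out.println("Line count: " + rdd.count());
        sc.stop();
    }
}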

AWS Elastic MapReduce (EMR) now supports Spark natively and includes HDFS out of the box.

See http://aws.amazon.com/elasticmapreduce/details/spark/ for more detail, and a walkthrough in the introductory blog post.

Spark in EMR uses EMRFS to directly access data in S3 without needing to copy it into HDFS first.

The walkthrough includes an example of loading data from S3.
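
As an illustration, reading from S3 on EMR looks just like reading any other path. The bucket name below is hypothetical, and the snippet reuses the JavaSparkContext sc from the sketch above:

// EMRFS resolves s3:// URIs directly, so no prior copy into HDFS is needed;
// the cluster's IAM role must grant read access to the bucket.
JavaRDD<String> s3Rdd = sc.textFile("s3://my-example-bucket/data.txt", 8);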
