
How to run code on the AWS cluster using Apache-Spark?

I've written a Python script that sums up all the numbers in the first column of each csv file, as follows:

import os, sys, inspect, csv

### Current directory path.
curr_dir = os.path.split(inspect.getfile(inspect.currentframe()))[0]

### Setup the environment variables
spark_home_dir = os.path.realpath(os.path.abspath(os.path.join(curr_dir, "../spark")))
python_dir = os.path.realpath(os.path.abspath(os.path.join(spark_home_dir, "./python")))
os.environ["SPARK_HOME"] = spark_home_dir
os.environ["PYTHONPATH"] = python_dir

### Setup pyspark directory path
pyspark_dir = python_dir
sys.path.append(pyspark_dir)

### Import the pyspark
from pyspark import SparkConf, SparkContext

### Specify the data file directory, and load the data files
data_path = os.path.realpath(os.path.abspath(os.path.join(curr_dir, "./test_dir")))

### myfunc adds up all the numbers in the first column of a csv file.
def myfunc(s):
    total = 0
    if s.endswith(".csv"):
        cr = csv.reader(open(s, "rb"))
        for row in cr:
            total += int(row[0])
    return total

def main():
    ### Initialize the SparkConf and SparkContext
    conf = SparkConf().setAppName("ruofan").setMaster("spark://ec2-52-26-177-197.us-west-2.compute.amazonaws.com:7077")
    sc = SparkContext(conf = conf)
    datafile = sc.wholeTextFiles(data_path)

    ### Run myfunc on each file; wholeTextFiles yields (path, content) pairs,
    ### and the path carries a "file:" prefix that has to be stripped off.
    temp = datafile.map(lambda (path, content): myfunc(str(path).strip('file:')))

    ### Collect the results and print them out.
    for x in temp.collect():
        print x

if __name__ == "__main__":
    main()

I would like to use Apache-Spark to parallelize this summation over several csv files, using the same Python code. I've already done the following steps:

  1. I've created one master and two slave nodes on AWS.
  2. I've used the bash command $ scp -r -i my-key-pair.pem my_dir root@ec2-52-27-82-124.us-west-2.compute.amazonaws.com to upload the directory my_dir, containing my Python code and the csv files, to the cluster's master node.
  3. I've logged into the master node, and from there used the bash command $ ./spark/copy-dir my_dir to send my Python code as well as the csv files to all slave nodes.
  4. I've set up the environment variables on the master node (a quick sanity check is sketched after these steps):

    $ export SPARK_HOME=~/spark

    $ export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
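
To sanity-check that those exports actually took effect before launching the script, here is a minimal sketch (run on the master node; the ~/spark layout is assumed from the steps above):

import os

### Verify the exported variables point at real directories before running sum.py.
for var in ("SPARK_HOME", "PYTHONPATH"):
    value = os.environ.get(var, "")
    print("%s=%s" % (var, value))
    for path in value.split(os.pathsep):
        if path and not os.path.exists(path):
            print("  warning: %s does not exist" % path)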

However, when I run the Python code on the master node with $ python sum.py, it shows the following error:

Traceback (most recent call last):
  File "sum.py", line 18, in <module>
    from pyspark import SparkConf, SparkContext
  File "/root/spark/python/pyspark/__init__.py", line 41, in <module>
    from pyspark.context import SparkContext
  File "/root/spark/python/pyspark/context.py", line 31, in <module>
    from pyspark.java_gateway import launch_gateway
  File "/root/spark/python/pyspark/java_gateway.py", line 31, in <module>
    from py4j.java_gateway import java_import, JavaGateway, GatewayClient
ImportError: No module named py4j.java_gateway

I have no idea what causes this error. Also, I am wondering whether the master node automatically calls all the slave nodes to run in parallel. I would really appreciate it if anyone could help me.

Here is how I would debug this particular import error.

  1. ssh to your master node
  2. Run the Python REPL with $ python
  3. Try the failing import line >>> from py4j.java_gateway import java_import, JavaGateway, GatewayClient
  4. If it fails, try simply running >>> import py4j
  5. If that fails, it means that your system either does not have py4j installed or cannot find it.
  6. Exit the REPL with >>> exit()
  7. Try installing py4j with $ pip install py4j (you'll need to have pip installed; a pip-free alternative is sketched after this list)
  8. Open the REPL again with $ python
  9. Try the import again >>> from py4j.java_gateway import java_import, JavaGateway, GatewayClient
  10. If that works, then >>> exit() and try running $ python sum.py again
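
If pip is not available on the node, Spark distributions normally bundle py4j as a source zip under $SPARK_HOME/python/lib. Here is a minimal sketch of putting that bundled copy on sys.path before the pyspark import (the exact zip name varies by Spark version, so it is globbed here; the ~/spark path is assumed from the question):

import glob, os, sys

### SPARK_HOME as exported in the question; adjust if your layout differs.
spark_home = os.environ.get("SPARK_HOME", os.path.expanduser("~/spark"))
sys.path.append(os.path.join(spark_home, "python"))

### Spark ships py4j as a source zip under python/lib; add any matching zip to sys.path.
for zip_path in glob.glob(os.path.join(spark_home, "python", "lib", "py4j-*-src.zip")):
    sys.path.append(zip_path)

### If this import still fails, py4j is genuinely missing from the Spark install.
from py4j.java_gateway import java_import, JavaGateway, GatewayClient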

I think you are asking two separate questions. It looks like you have an import error. Is it possible that you have a version of the py4j package installed on your local computer that you haven't installed on your master node?
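
One quick way to compare the two machines is to query the installed py4j version from Python on each of them (this relies on pkg_resources from setuptools, which is assumed to be present on both machines):

### Print the installed py4j version, or report that it is missing, so the
### local machine and the master node can be compared side by side.
import pkg_resources

try:
    print(pkg_resources.get_distribution("py4j").version)
except pkg_resources.DistributionNotFound:
    print("py4j is not installed for this Python interpreter")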

I can't help with running this in parallel.
