How to run code on the AWS cluster using Apache-Spark?
I've written a Python script that sums up all the numbers in the first column of each csv file, as follows:
import os, sys, inspect, csv

### Current directory path.
curr_dir = os.path.split(inspect.getfile(inspect.currentframe()))[0]

### Set up the environment variables.
spark_home_dir = os.path.realpath(os.path.abspath(os.path.join(curr_dir, "../spark")))
python_dir = os.path.realpath(os.path.abspath(os.path.join(spark_home_dir, "./python")))
os.environ["SPARK_HOME"] = spark_home_dir
os.environ["PYTHONPATH"] = python_dir

### Set up the pyspark directory path.
pyspark_dir = python_dir
sys.path.append(pyspark_dir)

### Import pyspark.
from pyspark import SparkConf, SparkContext

### Specify the data file directory, and load the data files.
data_path = os.path.realpath(os.path.abspath(os.path.join(curr_dir, "./test_dir")))

### myfunc adds up all the numbers in the first column.
def myfunc(s):
    total = 0
    if s.endswith(".csv"):
        cr = csv.reader(open(s, "rb"))
        for row in cr:
            total += int(row[0])
    return total

def main():
    ### Initialize the SparkConf and SparkContext.
    conf = SparkConf().setAppName("ruofan").setMaster("spark://ec2-52-26-177-197.us-west-2.compute.amazonaws.com:7077")
    sc = SparkContext(conf=conf)
    datafile = sc.wholeTextFiles(data_path)

    ### Run the application on each of the slave nodes.
    temp = datafile.map(lambda (path, content): myfunc(str(path).strip('file:')))

    ### Collect the results and print them out.
    for x in temp.collect():
        print x

if __name__ == "__main__":
    main()
I would like to use Apache-Spark to parallelize the summation across several csv files using this same Python code. I've already done the following steps:
$ scp -r -i my-key-pair.pem my_dir root@ec2-52-27-82-124.us-west-2.compute.amazonaws.com
to upload my directory my_dir, including my Python code and the csv files, onto the cluster master node, and
$ ./spark/copy-dir my_dir
to send my Python code as well as the csv files to all slave nodes. I've also set up the environment variables on the master node:
$ export SPARK_HOME=~/spark
$ export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
However, when I run the Python code on the master node:
$ python sum.py
it shows the following error:
Traceback (most recent call last):
File "sum.py", line 18, in <module>
from pyspark import SparkConf, SparkContext
File "/root/spark/python/pyspark/__init__.py", line 41, in <module>
from pyspark.context import SparkContext
File "/root/spark/python/pyspark/context.py", line 31, in <module>
from pyspark.java_gateway import launch_gateway
File "/root/spark/python/pyspark/java_gateway.py", line 31, in <module>
from py4j.java_gateway import java_import, JavaGateway, GatewayClient
ImportError: No module named py4j.java_gateway
I have no idea what causes this error. Also, I am wondering whether the master node automatically calls all the slave nodes to run in parallel. I would really appreciate it if anyone can help me.
Here is how I would debug this particular import error.
Run the Python REPL:
$ python
>>> from py4j.java_gateway import java_import, JavaGateway, GatewayClient
>>> import py4j
>>> exit()
If the import fails there too, install the package (you'll need to have pip installed):
$ pip install py4j
Then confirm the import works:
$ python
>>> from py4j.java_gateway import java_import, JavaGateway, GatewayClient
>>> exit()
and try running your $ python sum.py again.
I think you are asking two separate questions. It looks like you have an import error. Is it possible that you have a version of the py4j package installed on your local computer that you haven't installed on your master node?
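One quick way to compare the two machines is to ask each interpreter whether (and from where) it can import py4j at all. This sketch uses Python 3's `importlib.util.find_spec`; on the Python 2 installs common on Spark AMIs of this era, `imp.find_module` plays a similar role:

```python
import importlib.util

def locate(module_name):
    """Report where a module would be imported from, or None if absent."""
    spec = importlib.util.find_spec(module_name)
    if spec is None:
        return None
    return spec.origin

# Running this on both your laptop and the master node shows whether
# py4j is missing on one of them, or resolves to different installs.
print(locate("py4j"))
```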
I can't help with running this in parallel.