
Hadoop streaming job fails in Python

I'm trying to implement an algorithm in Hadoop. I tried to execute part of the code in Hadoop, but the streaming job fails:

$ /home/hadoop/hadoop/bin/hadoop jar contrib/streaming/hadoop-*-streaming.jar -file /home/hadoop/hadoop/PR/mapper.py -mapper mapper.py -file /home/hadoop/hadoop/PR/reducer.py -reducer reducer.py -input pagerank/* -output PRoutput6

packageJobJar: [/home/hadoop/hadoop/PR/mapper.py, /home/hadoop/hadoop/PR/reducer.py, /home/hadoop/hadoop/tmp/dir/hadoop-hadoop/hadoop-unjar7101759175212283428/] [] /tmp/streamjob6286075675343269479.jar tmpDir=null

11/04/23 01:03:24 INFO mapred.FileInputFormat: Total input paths to process : 1

11/04/23 01:03:24 INFO streaming.StreamJob: getLocalDirs(): [/home/hadoop/hadoop/tmp/dir/hadoop-hadoop/mapred/local]

11/04/23 01:03:24 INFO streaming.StreamJob: Running job: job_201104222325_0021

11/04/23 01:03:24 INFO streaming.StreamJob: To kill this job, run:

11/04/23 01:03:24 INFO streaming.StreamJob: /home/hadoop/hadoop/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201104222325_0021

11/04/23 01:03:24 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201104222325_0021

11/04/23 01:03:25 INFO streaming.StreamJob:  map 0%  reduce 0%

11/04/23 01:03:31 INFO streaming.StreamJob:  map 50%  reduce 0%

11/04/23 01:03:41 INFO streaming.StreamJob:  map 50%  reduce 17%

11/04/23 01:03:56 INFO streaming.StreamJob:  map 100%  reduce 100%

11/04/23 01:03:56 INFO streaming.StreamJob: To kill this job, run:

11/04/23 01:03:56 INFO streaming.StreamJob: /home/hadoop/hadoop/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201104222325_0021

11/04/23 01:03:56 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201104222325_0021

11/04/23 01:03:56 ERROR streaming.StreamJob: Job not Successful!

11/04/23 01:03:56 INFO streaming.StreamJob: killJob...

Streaming Job Failed!

mapper.py

#!/usr/bin/env python
import sys
import itertools

def ipsum(input_key,input_value_list):
   return sum(input_value_list)

n= 20 # works up to about 1000000 pages
i = {}
for j in xrange(n): i[j] = [1.0/n,0,[]]
j=0
u=0
for line in sys.stdin:
  if j<n:
    i[j][1]=int(line)
  j=j+1

  if j > n: 
    if line != "-1\n":
      i[u][2] = line.split(',')
    else: 
      i[u][2]=[]
    u=u+1
for j in xrange(n):
  if i[j][1] != 0:
    i[j][2] = map(int,i[j][2])    

intermediate=[]
for (input_key,input_value) in i.items():
  if input_value[1] == 0: intermediate.extend([(1,input_value[0])])
  else: intermediate.extend([])
grp = {}
for key, group in itertools.groupby(sorted(intermediate),lambda x: x[0]):
  grp[key] = list([y for x, y in group])
iplist = [ipsum(intermediate_key,grp[intermediate_key]) for intermediate_key in grp]
inter=[]
for (input_key,input_value) in i.items():
  if input_value[1] == 0: inter.extend([(input_key,0.0)]+[(outlink,input_value[0]/input_value[1]) for outlink in input_value[2]])
  else: inter.extend([])

for value in inter:
  value1 = value[0]
  value2 = value[1]
  print '%s %s' % (value1,value2)

reducer.py

#!/usr/bin/env python
import sys
import itertools
for line in sys.stdin:
  input_key, input_value=line.split(' ',1)
  input_key = input_key.strip()
  input_value = input_value.strip()
  input_key = int(input_key)
  input_value = float(input_value)
  print str(input_key)+' '+str(input_value)

I don't know whether the error is in my code or in the Hadoop config, because I was able to execute the code locally using:

$ cat /home/hadoop/hadoop/PR/pagerank/input.txt | python /home/hadoop/hadoop/PR/mapper.py | sort | python /home/hadoop/hadoop/PR/reducer.py

I would appreciate any help. Thank you.

Take a look at the job info page URL from the output. In your case: localhost:50030/jobdetails.jsp?jobid=job_201104222325_0021

Click on the number in the "failed mappers" column and then the "last 8KB" (or whatever) log link, and you will see the (most likely) Python exception you're hitting.
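
If the exception is hard to spot there, one option (just a sketch, not part of the original answer) is to surface it yourself: anything a streaming task writes to stderr ends up in that task's log, so wrapping the script body as below would make the traceback visible under that log link.

#!/usr/bin/env python
# Sketch only: write any unhandled exception to stderr
# (Hadoop Streaming keeps a task's stderr in its task log)
# and exit non-zero so the attempt is marked as failed.
import sys
import traceback

def run():
  # the existing mapper/reducer logic would go here;
  # this placeholder just echoes stdin to stdout
  for line in sys.stdin:
    sys.stdout.write(line)

if __name__ == '__main__':
  try:
    run()
  except Exception:
    traceback.print_exc(file=sys.stderr)
    sys.exit(1)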

My guess is that your data may be the key. Casting a float from a string, or a similar issue, may be hitting a bump in your real data that does not appear in your local test data. Perhaps you could address it with exception handling or assertions.
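
For example, here is a minimal sketch (not the asker's code) of a more defensive parsing loop for reducer.py, which reports a malformed record instead of letting a ValueError kill the task:

#!/usr/bin/env python
# Sketch only: defensive version of the reducer's parsing loop.
# A record that cannot be split or cast is reported on stderr
# (visible in the task log) and skipped, instead of failing the task.
import sys

for line in sys.stdin:
  try:
    input_key, input_value = line.split(' ', 1)
    input_key = int(input_key.strip())
    input_value = float(input_value.strip())
  except ValueError:
    sys.stderr.write('bad record: %r\n' % line)
    continue
  print '%s %s' % (input_key, input_value)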
