Hadoop Streaming Job Failed (Not Successful) in Python

I'm trying to run a Map-Reduce job on Hadoop Streaming with Python scripts, and I'm getting the same errors as in Hadoop Streaming Job failed error in python, but those solutions didn't work for me.

My scripts work fine when I run "cat sample.txt | ./p1mapper.py | sort | ./p1reducer.py"

But when I run the following:

./bin/hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
    -input "p1input/*" \
    -output p1output \
    -mapper "python p1mapper.py" \
    -reducer "python p1reducer.py" \
    -file /Users/Tish/Desktop/HW1/p1mapper.py \
    -file /Users/Tish/Desktop/HW1/p1reducer.py

(NB: Even if I remove the "python" or type the full pathname for -mapper and -reducer, the result is the same)

This is the output I get:

packageJobJar: [/Users/Tish/Desktop/HW1/p1mapper.py, /Users/Tish/Desktop/CS246/HW1/p1reducer.py, /Users/Tish/Documents/workspace/hadoop-0.20.2/tmp/hadoop-unjar4363616744311424878/] [] /var/folders/Mk/MkDxFxURFZmLg+gkCGdO9U+++TM/-Tmp-/streamjob3714058030803466665.jar tmpDir=null
11/01/18 03:02:52 INFO mapred.FileInputFormat: Total input paths to process : 1
11/01/18 03:02:52 INFO streaming.StreamJob: getLocalDirs(): [tmp/mapred/local]
11/01/18 03:02:52 INFO streaming.StreamJob: Running job: job_201101180237_0005
11/01/18 03:02:52 INFO streaming.StreamJob: To kill this job, run:
11/01/18 03:02:52 INFO streaming.StreamJob: /Users/Tish/Documents/workspace/hadoop-0.20.2/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201101180237_0005
11/01/18 03:02:52 INFO streaming.StreamJob: Tracking URL: http://www.glassdoor.com:50030/jobdetails.jsp?jobid=job_201101180237_0005
11/01/18 03:02:53 INFO streaming.StreamJob:  map 0%  reduce 0%
11/01/18 03:03:05 INFO streaming.StreamJob:  map 100%  reduce 0%
11/01/18 03:03:44 INFO streaming.StreamJob:  map 50%  reduce 0%
11/01/18 03:03:47 INFO streaming.StreamJob:  map 100%  reduce 100%
11/01/18 03:03:47 INFO streaming.StreamJob: To kill this job, run:
11/01/18 03:03:47 INFO streaming.StreamJob: /Users/Tish/Documents/workspace/hadoop-0.20.2/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201101180237_0005
11/01/18 03:03:47 INFO streaming.StreamJob: Tracking URL: http://www.glassdoor.com:50030/jobdetails.jsp?jobid=job_201101180237_0005
11/01/18 03:03:47 ERROR streaming.StreamJob: Job not Successful!
11/01/18 03:03:47 INFO streaming.StreamJob: killJob...
Streaming Job Failed!

For each Failed/Killed Task Attempt:

Map output lost, rescheduling: getMapOutput(attempt_201101181225_0001_m_000000_0,0) failed :
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/jobcache/job_201101181225_0001/attempt_201101181225_0001_m_000000_0/output/file.out.index in any of the configured local directories
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:389)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:138)
    at org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.java:2887)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
    at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
    at org.mortbay.jetty.Server.handle(Server.java:324)
    at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
    at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
    at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)

Here are my Python scripts: p1mapper.py

#!/usr/bin/env python

import sys
import re

SEQ_LEN = 4

eos = re.compile('(?<=[a-zA-Z])\.')   # period preceded by a letter
ignore = re.compile('[\W\d]')         # whitespace, punctuation, and digits

for line in sys.stdin:
    # split the line into sentences at each period that follows a letter
    array = re.split(eos, line)
    for sent in array:
        sent = ignore.sub('', sent)   # strip whitespace, punctuation, and digits
        sent = sent.lower()
        if len(sent) >= SEQ_LEN:
            # emit every SEQ_LEN-character window with a count of 1
            for i in range(len(sent) - SEQ_LEN + 1):
                print '%s 1' % sent[i:i+SEQ_LEN]
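
For example, running the mapper by hand on a single line emits a count of 1 for every 4-character window (the output below is what the code above produces):

$ echo "Hello there." | ./p1mapper.py
hell 1
ello 1
llot 1
loth 1
othe 1
ther 1
here 1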

p1reducer.py

#!/usr/bin/env python

from operator import itemgetter
import sys

word2count = {}

for line in sys.stdin:
    word, count = line.split(' ', 1)
    try:
        count = int(count)
        word2count[word] = word2count.get(word, 0) + count
    except ValueError:    # count was not a number
        pass

# sort
sorted_word2count = sorted(word2count.items(), key=itemgetter(1), reverse=True)

# write the top 3 sequences
for word, count in sorted_word2count[0:3]:
    print '%s\t%s'% (word, count)
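
The reducer can be checked the same way; for instance, feeding it some hand-made mapper output (the tab-separated results are what the code prints):

$ printf 'hell 1\nhell 1\nhell 1\nello 1\nello 1\nhere 1\n' | ./p1reducer.py
hell    3
ello    2
here    1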

Would really appreciate any help, thanks!

UPDATE:

hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>

You are missing a lot of configuration, and you need to define directories and such. See here:

http://wiki.apache.org/hadoop/QuickStart

Distributed operation is just like the pseudo-distributed operation described above, except:

  1. Specify the hostname or IP address of the master server in the values for fs.default.name and mapred.job.tracker in conf/hadoop-site.xml. These are specified as host:port pairs.
  2. Specify directories for dfs.name.dir and dfs.data.dir in conf/hadoop-site.xml. These are used to hold distributed filesystem data on the master node and slave nodes respectively. Note that dfs.data.dir may contain a space- or comma-separated list of directory names, so that data may be stored on multiple devices.
  3. Specify mapred.local.dir in conf/hadoop-site.xml. This determines where temporary MapReduce data is written. It also may be a list of directories (see the example after this list).
  4. Specify mapred.map.tasks and mapred.reduce.tasks in conf/mapred-default.xml. As a rule of thumb, use 10x the number of slave processors for mapred.map.tasks, and 2x the number of slave processors for mapred.reduce.tasks.
  5. List all slave hostnames or IP addresses in your conf/slaves file, one per line, and make sure jobtracker is in your /etc/hosts file pointing to your jobtracker node.
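
For point 3 in particular, note that your job log shows getLocalDirs(): [tmp/mapred/local], a relative path. Setting an explicit absolute mapred.local.dir (for example in your mapred-site.xml) avoids that; the path below is only a placeholder, adjust it to your machine:

<property>
  <name>mapred.local.dir</name>
  <value>/Users/Tish/hadoop-tmp/mapred/local</value>
</property>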

Well, I was stuck on the same problem for 2 days. The solution that Joe provided in his other post worked well for me.

As a solution to your problem I suggest:

1) Follow blindly, and only blindly, the instructions on how to set up a single-node cluster here (I assume you have already done so)

2) If anywhere you face a java.io.IOException: Incompatible namespaceIDs error (you will find it if you examine the logs), have a look here (a sketch of the usual fix is at the end of this answer)

3) REMOVE ALL THE DOUBLE QUOTES FROM YOUR COMMAND; in your example, run:

./bin/hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
    -input "p1input/*" \
    -output p1output \
    -mapper p1mapper.py \
    -reducer p1reducer.py \
    -file /Users/Tish/Desktop/HW1/p1mapper.py \
    -file /Users/Tish/Desktop/HW1/p1reducer.py

This is ridiculous, but it was the point at which I was stuck for 2 whole days.
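
One more thing worth checking: with the "python" prefix removed, Hadoop executes the scripts directly, so both files must be executable and keep their #!/usr/bin/env python shebang lines:

chmod +x /Users/Tish/Desktop/HW1/p1mapper.py /Users/Tish/Desktop/HW1/p1reducer.py

As for point 2 above, the usual fix for Incompatible namespaceIDs boils down to something like the following (this assumes default paths on a throwaway single-node setup, and note that it wipes all HDFS data):

bin/stop-all.sh
rm -rf /tmp/hadoop-${USER}/dfs/data    # default dfs.data.dir; adjust if you changed hadoop.tmp.dir
bin/hadoop namenode -format
bin/start-all.sh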
