
Trouble running Apache Giraph on YARN cluster (Hadoop 2.5.2)

I'm trying to run the basic ShortestPaths example using Giraph 1.1 on Hadoop 2.5.2. I'm running on an actual cluster (i.e., not pseudo-distributed), and I can run standard MapReduce jobs fine. But when I try to run the Giraph example, it seems to hang unless I set

-ca giraph.SplitMasterWorker=false

and correspondingly set the number of workers to 1. But that rather defeats the point of running on a cluster, no? On the other hand, if I run without disabling SplitMasterWorker, I get this error:

When using LocalJobRunner, you cannot run in split master / worker mode 
since there is only 1 task at a time!

That error suggests that Giraph is defaulting to local mode. One report I read suggested fixing this by adding

-ca mapred.job.tracker=10.0.0.12:5431

to the Giraph command line. But on Hadoop 2.5.2 with YARN there is no JobTracker listening on port 5431, if I understand correctly. In any case, if I do add that option, the job tries to run but seems to hang without ever finishing. Here's the complete command line, and the job output follows:

[prhodes@ip-10-0-0-12 conf]$ hadoop jar /home/prhodes/giraph/giraph-examples/target/giraph-examples-1.2.0-SNAPSHOT-for-hadoop-2.5.2-jar-with-dependencies.jar \
    org.apache.giraph.GiraphRunner \
    org.apache.giraph.examples.SimpleShortestPathsComputation \
    -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
    -vip /user/prhodes/input/tiny_graph.txt \
    -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
    -op /user/prhodes/giraph_output/shortestpaths \
    -w 3 \
    -ca mapred.job.tracker=10.0.0.12:5431

15/03/10 03:18:59 INFO utils.ConfigurationUtils: No edge input format specified. Ensure your InputFormat does not require one.
15/03/10 03:19:02 INFO server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:22181
15/03/10 03:19:02 INFO server.PrepRequestProcessor: zookeeper.skipACL=="yes", ACL checks will be skipped
15/03/10 03:19:05 INFO zk.ZooKeeperManager: onlineZooKeeperServers: Connect attempt 1 of 10 max trying to connect to ip-10-0-0-12.ec2.internal:22181 with poll msecs = 3000
15/03/10 03:19:05 INFO zk.ZooKeeperManager: onlineZooKeeperServers: Connected to ip-10-0-0-12.ec2.internal/10.0.0.12:22181!
15/03/10 03:19:05 INFO zk.ZooKeeperManager: onlineZooKeeperServers: Creating my filestamp _bsp/_defaultZkManagerDir/job_local1346154675_0001/_zkServer/ip-10-0-0-12.ec2.internal 0
15/03/10 03:19:05 INFO server.NIOServerCnxnFactory: Accepted socket connection from /10.0.0.12:45182
15/03/10 03:19:05 INFO graph.GraphTaskManager: setup: Chosen to run ZooKeeper...
15/03/10 03:19:05 INFO graph.GraphTaskManager: setup: Starting up BspServiceMaster (master thread)...
15/03/10 03:19:05 INFO bsp.BspService: BspService: Path to create to halt is /_hadoopBsp/job_local1346154675_0001/_haltComputation
15/03/10 03:19:05 INFO bsp.BspService: BspService: Connecting to ZooKeeper with job job_local1346154675_0001, 0 on ip-10-0-0-12.ec2.internal:22181
15/03/10 03:19:05 INFO zookeeper.ClientCnxn: Opening socket connection to server ip-10-0-0-12.ec2.internal/10.0.0.12:22181. Will not attempt to authenticate using SASL (unknown error)
15/03/10 03:19:05 INFO server.NIOServerCnxnFactory: Accepted socket connection from /10.0.0.12:45183
15/03/10 03:19:05 INFO zookeeper.ClientCnxn: Socket connection established to ip-10-0-0-12.ec2.internal/10.0.0.12:22181, initiating session
15/03/10 03:19:05 INFO server.ZooKeeperServer: Client attempting to establish new session at /10.0.0.12:45183
15/03/10 03:19:05 INFO persistence.FileTxnLog: Creating new log file: log.1
15/03/10 03:19:05 INFO server.ZooKeeperServer: Established session 0x14c01b158f00000 with negotiated timeout 600000 for client /10.0.0.12:45183
15/03/10 03:19:05 INFO zookeeper.ClientCnxn: Session establishment complete on server ip-10-0-0-12.ec2.internal/10.0.0.12:22181, sessionid = 0x14c01b158f00000, negotiated timeout = 600000
15/03/10 03:19:05 INFO bsp.BspService: process: Asynchronous connection complete.
15/03/10 03:19:05 INFO graph.GraphTaskManager: map: No need to do anything when not a worker
15/03/10 03:19:05 INFO graph.GraphTaskManager: cleanup: Starting for MASTER_ZOOKEEPER_ONLY
15/03/10 03:19:05 INFO server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x14c01b158f00000 type:create cxid:0x1 zxid:0x2 txntype:-1 reqpath:n/a Error Path:/_hadoopBsp/job_local1346154675_0001/_masterElectionDir Error:KeeperErrorCode = NoNode for /_hadoopBsp/job_local1346154675_0001/_masterElectionDir
15/03/10 03:19:05 INFO master.BspServiceMaster: becomeMaster: First child is '/_hadoopBsp/job_local1346154675_0001/_masterElectionDir/ip-10-0-0-12.ec2.internal_00000000000' and my bid is '/_hadoopBsp/job_local1346154675_0001/_masterElectionDir/ip-10-0-0-12.ec2.internal_00000000000'
15/03/10 03:19:05 INFO netty.NettyServer: NettyServer: Using execution group with 8 threads for requestFrameDecoder.
15/03/10 03:19:05 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/03/10 03:19:05 INFO netty.NettyServer: start: Started server communication server: ip-10-0-0-12.ec2.internal/10.0.0.12:30000 with up to 16 threads on bind attempt 0 with sendBufferSize = 32768 receiveBufferSize = 524288
15/03/10 03:19:05 INFO netty.NettyClient: NettyClient: Using execution handler with 8 threads after request-encoder.
15/03/10 03:19:05 INFO master.BspServiceMaster: becomeMaster: I am now the master!
15/03/10 03:19:05 INFO server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x14c01b158f00000 type:create cxid:0xe zxid:0x9 txntype:-1 reqpath:n/a Error Path:/_hadoopBsp/job_local1346154675_0001/_applicationAttemptsDir/0 Error:KeeperErrorCode = NoNode for /_hadoopBsp/job_local1346154675_0001/_applicationAttemptsDir/0
15/03/10 03:19:05 INFO bsp.BspService: process: applicationAttemptChanged signaled
15/03/10 03:19:05 INFO server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x14c01b158f00000 type:create cxid:0x16 zxid:0xc txntype:-1 reqpath:n/a Error Path:/_hadoopBsp/job_local1346154675_0001/_applicationAttemptsDir/0/_superstepDir/-1 Error:KeeperErrorCode = NoNode for /_hadoopBsp/job_local1346154675_0001/_applicationAttemptsDir/0/_superstepDir/-1
15/03/10 03:19:05 WARN bsp.BspService: process: Unknown and unprocessed event (path=/_hadoopBsp/job_local1346154675_0001/_applicationAttemptsDir/0/_superstepDir, type=NodeChildrenChanged, state=SyncConnected)
15/03/10 03:19:07 INFO mapred.LocalJobRunner: MASTER_ZOOKEEPER_ONLY checkWorkers: Only found 0 responses of 3 needed to start superstep -1 > map
15/03/10 03:19:07 INFO job.HaltApplicationUtils$DefaultHaltInstructionsWriter: writeHaltInstructions: To halt after next superstep execute: 'bin/halt-application --zkServer ip-10-0-0-12.ec2.internal:22181 --zkNode /_hadoopBsp/job_local1346154675_0001/_haltComputation'
15/03/10 03:19:07 INFO mapreduce.Job: Running job: job_local1346154675_0001
15/03/10 03:19:08 INFO mapreduce.Job: Job job_local1346154675_0001 running in uber mode : false
15/03/10 03:19:08 INFO mapreduce.Job:  map 25% reduce 0%
15/03/10 03:19:10 INFO mapred.LocalJobRunner: MASTER_ZOOKEEPER_ONLY checkWorkers: Only found 0 responses of 3 needed to start superstep -1 > map
15/03/10 03:19:19 INFO mapred.LocalJobRunner: MASTER_ZOOKEEPER_ONLY checkWorkers: Only found 0 responses of 3 needed to start superstep -1 > map
15/03/10 03:19:28 INFO mapred.LocalJobRunner: MASTER_ZOOKEEPER_ONLY checkWorkers: Only found 0 responses of 3 needed to start superstep -1 > map
15/03/10 03:19:35 INFO master.BspServiceMaster: checkWorkers: Only found 0 responses of 3 needed to start superstep -1.  Reporting every 30000 msecs, 569976 more msecs left before giving up.
15/03/10 03:19:35 INFO server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x14c01b158f00000 type:create cxid:0x22 zxid:0x10 txntype:-1 reqpath:n/a Error Path:/_hadoopBsp/job_local1346154675_0001/_applicationAttemptsDir/0/_superstepDir/-1/_workerHealthyDir Error:KeeperErrorCode = NodeExists for /_hadoopBsp/job_local1346154675_0001/_applicationAttemptsDir/0/_superstepDir/-1/_workerHealthyDir
15/03/10 03:19:35 INFO server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x14c01b158f00000 type:create cxid:0x23 zxid:0x11 txntype:-1 reqpath:n/a Error Path:/_hadoopBsp/job_local1346154675_0001/_applicationAttemptsDir/0/_superstepDir/-1/_workerUnhealthyDir Error:KeeperErrorCode = NodeExists for /_hadoopBsp/job_local1346154675_0001/_applicationAttemptsDir/0/_superstepDir/-1/_workerUnhealthyDir
15/03/10 03:19:40 INFO mapred.LocalJobRunner: MASTER_ZOOKEEPER_ONLY checkWorkers: Only found 0 responses of 3 needed to start superstep -1 > map
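For completeness, the only invocation that actually ran to completion for me was the single-worker fallback mentioned above, roughly like this (a sketch using my own jar name and HDFS paths):

```shell
# Single-machine fallback: one worker, with master and worker sharing the task.
# This runs, but it serializes the whole computation onto a single mapper.
hadoop jar giraph-examples/target/giraph-examples-1.2.0-SNAPSHOT-for-hadoop-2.5.2-jar-with-dependencies.jar \
    org.apache.giraph.GiraphRunner \
    org.apache.giraph.examples.SimpleShortestPathsComputation \
    -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
    -vip /user/prhodes/input/tiny_graph.txt \
    -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
    -op /user/prhodes/giraph_output/shortestpaths \
    -w 1 \
    -ca giraph.SplitMasterWorker=false
```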

OK, this turned out to be fairly simple. I had built Giraph using the hadoop_2 profile rather than hadoop_yarn. When I build with the yarn profile, this no longer happens. I don't understand the entire mechanism of how this works, but apparently building with that profile changes some defaults that put Giraph into pure YARN mode at runtime.

So, if you get this error, rebuild using

mvn -Phadoop_yarn clean package

and that will probably fix it.
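Concretely, the recovery steps look roughly like this (a sketch; the `-Dhadoop.version` property and the jar/input paths are from my setup and may need adjusting for yours):

```shell
# From the Giraph source root: rebuild against the pure-YARN profile.
# (The hadoop_2 profile produces a MapReduce-flavored build that can fall
# back to LocalJobRunner, which is what caused the hang above.)
mvn -Phadoop_yarn -Dhadoop.version=2.5.2 clean package -DskipTests

# Re-run the example with the freshly built jar. No SplitMasterWorker=false
# and no mapred.job.tracker override should be needed now.
hadoop jar giraph-examples/target/giraph-examples-1.2.0-SNAPSHOT-for-hadoop-2.5.2-jar-with-dependencies.jar \
    org.apache.giraph.GiraphRunner \
    org.apache.giraph.examples.SimpleShortestPathsComputation \
    -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
    -vip /user/prhodes/input/tiny_graph.txt \
    -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
    -op /user/prhodes/giraph_output/shortestpaths \
    -w 3
```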
