
Spark streaming from Kafka returns result on local but not working on YARN

I am using Cloudera's VM CDH 5.12, Spark v1.6, Kafka (installed via yum) v0.10, Python 2.6.6, and Scala 2.10.

Below is the simple Spark application I am running. It takes events from Kafka and prints them after a map-reduce step.

from __future__ import print_function
import sys
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: kafka_wordcount.py <zk> <topic>", file=sys.stderr)
        exit(-1)
    sc = SparkContext(appName="PythonStreamingKafkaWordCount")
    ssc = StreamingContext(sc, 1)
    zkQuorum, topic = sys.argv[1:]
    kvs = KafkaUtils.createStream(ssc, zkQuorum, "spark-streaming-consumer", {topic: 1})
    lines = kvs.map(lambda x: x[1])
    counts = lines.flatMap(lambda line: line.split(" ")) \
        .map(lambda word: (word, 1)) \
        .reduceByKey(lambda a, b: a+b)
    counts.pprint()
    ssc.start()
    ssc.awaitTermination()

When I submit the above code using the following command (local), it runs fine:

spark-submit --master local[2] --jars /usr/lib/spark/lib/spark-examples.jar testfile.py <ZKhostname>:2181 <kafka-topic>

But when I submit the same code using the following command (YARN), it doesn't work:

spark-submit --master yarn --deploy-mode client --jars /usr/lib/spark/lib/spark-examples.jar testfile.py <ZKhostname>:2181 <kafka-topic>

Here is the log generated when running on YARN (truncated; these logs may differ from the Spark settings mentioned above):

INFO Client: 
client token: N/A
diagnostics: N/A
ApplicationMaster host: 192.168.134.143
ApplicationMaster RPC port: 0
queue: root.cloudera
start time: 1515766709025
final status: UNDEFINED
tracking URL: http://quickstart.cloudera:8088/proxy/application_1515761416282_0010/
user: cloudera

40 INFO YarnClientSchedulerBackend: Application application_1515761416282_0010 has started running.
40 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 53694.
40 INFO NettyBlockTransferService: Server created on 53694
53 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
54 INFO BlockManagerMasterEndpoint: Registering block manager quickstart.cloudera:56220 with 534.5 MB RAM, BlockManagerId(1, quickstart.cloudera, 56220)
07 INFO ReceiverTracker: Starting 1 receivers
07 INFO ReceiverTracker: ReceiverTracker started
07 INFO PythonTransformedDStream: metadataCleanupDelay = -1
07 INFO KafkaInputDStream: metadataCleanupDelay = -1
07 INFO KafkaInputDStream: Slide time = 10000 ms
07 INFO KafkaInputDStream: Storage level = StorageLevel(false, false, false, false, 1)
07 INFO KafkaInputDStream: Checkpoint interval = null
07 INFO KafkaInputDStream: Remember duration = 10000 ms
07 INFO KafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka.KafkaInputDStream@7137ea0e
07 INFO PythonTransformedDStream: Slide time = 10000 ms
07 INFO PythonTransformedDStream: Storage level = StorageLevel(false, false, false, false, 1)
07 INFO PythonTransformedDStream: Checkpoint interval = null
07 INFO PythonTransformedDStream: Remember duration = 10000 ms
07 INFO PythonTransformedDStream: Initialized and validated org.apache.spark.streaming.api.python.PythonTransformedDStream@de77734

10 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 5.8 KB, free 534.5 MB)
10 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 3.5 KB, free 534.5 MB)
20 INFO JobScheduler: Added jobs for time 1515766760000 ms
30 INFO JobScheduler: Added jobs for time 1515766770000 ms
40 INFO JobScheduler: Added jobs for time 1515766780000 ms

After this, the job just starts repeating the following lines (at the interval set by the streaming context) and doesn't print out Kafka's stream, whereas the job on master local with the exact same code does.

Interestingly, it prints the following line every time a Kafka event occurs (the screenshot was taken with increased Spark memory settings).

Note that:

The data is in Kafka and I can see it in the consumer console. I have also tried increasing the executor's memory (3g) and the network timeout (800s), but with no success.

Can you see the application stdout logs through the YARN Resource Manager UI?

  1. Follow your YARN Resource Manager link (http://localhost:8088).
  2. Find your application in the running applications list and follow the application's link (http://localhost:8088/application_1396885203337_0003/).
  3. Open the "stdout : Total file length is xxxx bytes" link to view the log file in the browser.

Hope this helps.

In local mode the application runs on a single machine, so you can see every print in your code. Try fetching the logs generated by Spark with the command yarn logs -applicationId <applicationId>.
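As a minimal sketch of that step, using the application ID that appears in the YARN output earlier in the question (and assuming log aggregation is enabled on the cluster):

```shell
# Pull the aggregated container logs for the run shown above,
# then scan them for receiver errors or exceptions.
APP_ID=application_1515761416282_0010
yarn logs -applicationId "$APP_ID" > "${APP_ID}.log"
grep -in "error\|exception" "${APP_ID}.log" | head
```

The executor-side output (including anything the Kafka receiver logs) only shows up here, not in the driver console, which is why the YARN run can look silent compared to local mode.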

It's possible that you are using a hostname alias that is not defined on the YARN nodes, or that for some other reason does not resolve on the YARN nodes.
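One quick way to check the alias theory is to run a small resolution test on the driver machine and on each YARN node. The hostname below is the one from the logs above and stands in for whatever alias you pass to spark-submit:

```python
import socket

def can_resolve(host):
    """Return True if `host` resolves to an IP address from this machine."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# Run this on the driver and on every YARN node; a False on a node means
# that node cannot resolve the alias you passed on the command line.
print(can_resolve("quickstart.cloudera"))
```

If it prints True on the driver but False on a worker node, the receiver running on that node cannot reach ZooKeeper/Kafka, which matches the "jobs added but nothing printed" symptom.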
