
How to run a Spark Streaming job as a daemon

What is the best way to daemonize a Spark Streaming job while logging any exceptions to a log file with log rotation?

This is one way to run two jobs as background (daemon) processes with nohup; you can add more instances as your requirements grow:

nohup ./mysparkstreamingjob.sh one > ../../logs/nohup-one.out 2> ../../logs/nohup-one.err < /dev/null &

nohup ./mysparkstreamingjob.sh two > ../../logs/nohup-two.out 2> ../../logs/nohup-two.err < /dev/null &
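To confirm that both instances are still running (a quick check, not part of the original answer; the grep pattern simply matches the script name above):

ps -ef | grep '[m]ysparkstreamingjob.sh'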

mysparkstreamingjob.sh will look like this (the first argument, "one" or "two", tags each instance so they can be told apart):

#!/bin/sh
echo $CLASSPATH
spark-submit --verbose --jars $(echo /dirofjars/*.jar | tr ' ' ','),$SPARK_STREAMING_JAR --class com.xx.xx.StreamingJob \
    --master yarn-client \
    --num-executors 12 \
    --executor-cores 4 \
    --driver-memory 4G \
    --executor-memory 4G \
    --driver-class-path ../../config/properties \
    --conf "spark.driver.extraJavaOptions=-XX:PermSize=256M -XX:MaxPermSize=512M" \
    --conf "spark.shuffle.memoryFraction=0.5" \
    --conf "spark.storage.memoryFraction=0.75" \
    --conf "spark.storage.unrollFraction=0.2" \
    --conf "spark.memory.fraction=0.75" \
    --conf "spark.worker.cleanup.enabled=true" \
    --conf "spark.worker.cleanup.interval=14400" \
    --conf "spark.shuffle.io.numConnectionsPerPeer=5" \
    --conf "spark.eventlog.enabled=true" \
    --conf "spark.driver.extraLibrayPath=$HADOOP_HOME/*:$HBASE_HOME/*:$HADOOP_HOME/lib/*:$HBASE_HOME/lib/htrace-core-3.1.0-incubating.jar:$HDFS_PATH/*:$SOLR_HOME/*:$SOLR_HOME/lib/*" \
    --conf "spark.executor.extraLibraryPath=$HADOOP_HOME/*:$HBASE_HOME/*:$HADOOP_HOME/lib/*:$HBASE_HOME/lib/htrace-core-3.1.0-incubating.jar:$HDFS_PATH/*:$SOLR_HOME/*:$SOLR_HOME/lib/*" \
    --conf "spark.executor.extraClassPath=$(echo /dirofjars/*.jar | tr ' ' ',')" \
    --conf "spark.yarn.executor.memoryOverhead=2048" \
    --conf "spark.yarn.driver.memoryOverhead=1024" \
    --conf "spark.eventLog.overwrite=true" \
    --conf "spark.shuffle.consolidateFiles=true" \
    --conf "spark.akka.frameSize=1024" \
    --files xxxx.properties,xxxx.properties \
    $SPARK_STREAMING_JAR \
    -DprocMySpark$1

For custom log4j file rotation, you need to configure a log4j properties file and pass that setting to your spark-submit command. Depending on the appender you use, rotation happens automatically, just as it does for any Java application using log4j.

For example:

--conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=/tmp/log4j.properties"

In addition, the Spark web UI (available by default) exposes all logs, both high level and low level.

You should use Oozie to schedule your Spark Streaming job: https://oozie.apache.org/docs/4.2.0/DG_SparkActionExtension.html
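As a rough sketch (the Oozie server URL and file names here are assumptions), once a workflow.xml containing a spark action and a matching job.properties are in place, the job is submitted with the Oozie CLI:

# submit and start the workflow that wraps the spark action
oozie job -oozie http://oozie-server:11000/oozie -config job.properties -run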

This gives a good overview of scheduling, managing, and monitoring Spark jobs: http://blog.cloudera.com/blog/2014/02/new-hue-demos-spark-ui-job-browser-oozie-scheduling-and-yarn-support/
