
Spark Yarn Remote Submit

Currently I am working on a streaming project. I have just started, and I am still new to spark-kafka-yarn-cloudera. To try out (or see) the result of a program, right now I have to build the project's jar, upload it to the cluster, and then run spark-submit, which I don't think is an efficient way to work.

Can I run the program programmatically from my IDE, remotely? I use Scala IDE. I am looking for some code to follow, but I still haven't found anything suitable.

My environment: - Cloudera 5.8.2 [OS RedHat 7.2, Kerberos 5, Spark 2.1, Scala 2.11] - Windows 7

Follow the steps below to unit test the application:

  1. Download winutils for Windows and set the HADOOP_HOME environment variable
  2. Provide the exact Kafka broker URL and topic name for Spark Streaming
  3. Make sure the proper offset management properties are set
  4. Use the IntelliJ IDE (Scala IDE works as well). Just run it as a Scala application:

    val kafkaParams = Map(
      "metadata.broker.list" -> "168.172.72.128:9092",
      ConsumerConfig.AUTO_OFFSET_RESET_CONFIG -> "smallest",
      "group.id" -> UUID.randomUUID().toString())

    val topicSet = Set("test") // topic name

    val kafkaStream = KafkaUtils
      .createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topicSet)

    // Create the BSON data structure and load the data into a MongoDB collection
    kafkaStream.foreachRDD { rdd =>
      // business logic code
    }
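That snippet is only a fragment: the StreamingContext `ssc` and the imports are missing. A self-contained version might look like the sketch below. It is a minimal sketch, not the answerer's exact code: the broker address, topic name, and winutils path are placeholder assumptions, and it targets the spark-streaming-kafka-0-8 artifact, which provides the `createDirectStream` signature used above.

import java.util.UUID

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaStreamingLocalTest {
  def main(args: Array[String]): Unit = {
    // Step 1: point Hadoop at winutils.exe; the path is an assumption,
    // use the directory that contains bin\winutils.exe on your machine
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    // local[*] runs everything inside the IDE process, no cluster needed
    val conf = new SparkConf().setAppName("KafkaStreamingLocalTest").setMaster("local[*]")
    val ssc = new StreamingContext(conf, Seconds(10))

    val kafkaParams = Map(
      "metadata.broker.list" -> "168.172.72.128:9092", // exact broker URL (step 2)
      "auto.offset.reset" -> "smallest",               // offset management (step 3)
      "group.id" -> UUID.randomUUID().toString)

    val topicSet = Set("test")                         // topic name (step 2)

    val kafkaStream = KafkaUtils
      .createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topicSet)

    kafkaStream.foreachRDD { rdd =>
      // business logic goes here; counting the batch keeps the sketch observable
      println(s"batch size: ${rdd.count()}")
    }

    ssc.start()
    ssc.awaitTermination()
  }
}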

I followed this tutorial: http://blog.antlypls.com/blog/2017/10/15/using-spark-sql-and-spark-streaming-together/

Below is my code:

import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

import scala.collection.mutable.ListBuffer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.Seconds
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.count

object SparkKafkaExample {

  def main(args: Array[String]): Unit =
  {

  val brokers = "broker1.com:9092,broker2.com:9092," +
    "broker3.com:9092,broker4.com:9092,broker5.com:9092"
  // Create Spark Session
  val spark = SparkSession
    .builder()
    .appName("KafkaSparkDemo")
    .master("local[*]")
    .getOrCreate()

  import spark.implicits._

  // Create Streaming Context and Kafka Direct Stream with provided settings and 10 seconds batches
  val ssc = new StreamingContext(spark.sparkContext, Seconds(10))

  // Kerberos: SASL_PLAINTEXT and the kafka service name require the JAAS file
  // passed via -Djava.security.auth.login.config (see below)
  val kafkaParams = Map(
    "bootstrap.servers" -> brokers,
    "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
    "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
    "group.id" -> "test",
    "security.protocol" -> "SASL_PLAINTEXT",
    "sasl.kerberos.service.name" -> "kafka",
    "auto.offset.reset" -> "earliest")

  val topics = Array("sparkstreaming")
  val stream = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](topics, kafkaParams))

  // Define a schema for JSON data
  val schema = new StructType()
    .add("action", StringType)
    .add("timestamp", TimestampType)

  // Process batches:
  // Parse JSON and create Data Frame
  // Execute computation on that Data Frame and print result
  stream.foreachRDD { (rdd, time) =>
    val data = rdd.map(record => record.value)
    val json = spark.read.schema(schema).json(data)
    val result = json.groupBy($"action").agg(count("*").alias("count"))
    result.show
  }

  ssc.start
  ssc.awaitTermination

}
}
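For completeness, the imports above map to the following build dependencies. This is a sketch, assuming sbt and Spark 2.1.0 on Scala 2.11 (the versions named in the question):

// build.sbt
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"                  % "2.1.0",
  "org.apache.spark" %% "spark-streaming"            % "2.1.0",
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.1.0"
)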

Because my cluster uses Kerberos, I pass this configuration file (kafka-jaas.conf) to my IDE (in Eclipse: Run Configurations -> VM Arguments):

-Djava.security.auth.login.config=kafka-jaas.conf
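The same property can also be set programmatically, before the first Kafka call, if you would rather not depend on the IDE run configuration. A minimal sketch, assuming kafka-jaas.conf sits in the working directory:

// Equivalent to the -D VM argument above; must run before the Kafka consumer
// is created, and the path assumes kafka-jaas.conf is in the working directory
System.setProperty("java.security.auth.login.config", "kafka-jaas.conf")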

Contents of kafka-jaas.conf:

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="user.keytab"
    serviceName="kafka"
    principal="user@HOST.COM";
};
Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="user.keytab"
   storeKey=true
   useTicketCache=false
   serviceName="zookeeper"
   principal="user@HOST.COM";
};
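On a Windows client the JVM also has to be able to locate the Kerberos configuration itself. If it is not picked up automatically, it can be passed the same way; the krb5.conf path here is a placeholder assumption:

-Djava.security.auth.login.config=kafka-jaas.conf
-Djava.security.krb5.conf=C:\path\to\krb5.conf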
