
Spark Yarn Remote Submit

Currently I'm working on a Spark Streaming project. I'm just starting out, and I am still a newbie with Spark, Kafka, YARN, and Cloudera. To try out (or see) the result of the program, I currently have to build a jar of the project, upload it to the cluster, and then spark-submit it, which I think is not efficient.

Can I run this program programmatically from my IDE [remotely]? I use Scala IDE. I have looked for some code to follow, but still haven't found a suitable example.

My environment:

  - Cloudera 5.8.2 [OS RedHat 7.2, Kerberos 5, Spark 2.1, Scala 2.11]
  - Windows 7

Follow the steps below to unit-test your application:

  1. Download winutils for Windows and set the HADOOP_HOME environment variable (see the sketch right after this list).
  2. Give the exact Kafka broker URLs and topic names for Spark Streaming.
  3. Make sure that proper offset-management properties are set (see the commit sketch after the example code below).
  4. Use IntelliJ IDEA (Scala IDE is also fine). Just running it as a Scala application will work.
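
For step 1, a minimal sketch of wiring up winutils from code, assuming it was extracted to C:\hadoop (a hypothetical path) with winutils.exe inside its bin folder; setting HADOOP_HOME as a system environment variable works just as well:

    // Hypothetical path: hadoop.home.dir must point at the folder whose bin\
    // subfolder contains winutils.exe (equivalent to setting HADOOP_HOME).
    System.setProperty("hadoop.home.dir", "C:\\hadoop")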

    val kafkaParams = Map(
      "metadata.broker.list" -> "168.172.72.128:9092",
      ConsumerConfig.AUTO_OFFSET_RESET_CONFIG -> "smallest",
      "group.id" -> UUID.randomUUID().toString())

    val topicSet = Set("test") // Topic name
    val kafkaStream = KafkaUtils
      .createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topicSet)

    // Creating BSON data structure and loading data into a MongoDB collection
    kafkaStream.foreachRDD(rdd => {
      // code for business logic
    })
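
For step 3, a minimal sketch of manual offset management, assuming the spark-streaming-kafka-0-10 direct stream used in the code further below (with enable.auto.commit=false, offsets are committed back to Kafka only after the batch has been processed):

    import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges}

    stream.foreachRDD { rdd =>
      // Capture this batch's offset ranges before any transformations
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // ... business logic for the batch ...
      // Commit asynchronously only after the batch has succeeded
      stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
    }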

I followed this tutorial: http://blog.antlypls.com/blog/2017/10/15/using-spark-sql-and-spark-streaming-together/

Below is my code:

import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.Seconds
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.count

object SparkKafkaExample {

  def main(args: Array[String]): Unit =
  {

  val brokers = "broker1.com:9092,broker2.com:9092," +
    "broker3.com:9092,broker4.com:9092,broker5.com:9092"
  // Create Spark Session
  val spark = SparkSession
    .builder()
    .appName("KafkaSparkDemo")
    .master("local[*]")
    .getOrCreate()

  import spark.implicits._

  // Create Streaming Context and Kafka Direct Stream with provided settings and 10 seconds batches
  val ssc = new StreamingContext(spark.sparkContext, Seconds(10))

  val kafkaParams = Map(
    "bootstrap.servers" -> brokers,
    "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
    "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
    "group.id" -> "test",
    "security.protocol" -> "SASL_PLAINTEXT",
    "sasl.kerberos.service.name" -> "kafka",
    "auto.offset.reset" -> "earliest")

  val topics = Array("sparkstreaming")
  val stream = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](topics, kafkaParams))

  // Define a schema for JSON data
  val schema = new StructType()
    .add("action", StringType)
    .add("timestamp", TimestampType)

  // Process batches:
  // Parse JSON and create Data Frame
  // Execute computation on that Data Frame and print result
  stream.foreachRDD { (rdd, time) =>
    val data = rdd.map(record => record.value)
    val json = spark.read.schema(schema).json(data)
    val result = json.groupBy($"action").agg(count("*").alias("count"))
    result.show
  }

  ssc.start
  ssc.awaitTermination

}
}
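
To run this from the IDE, the matching libraries must be on the classpath. A sketch of the sbt dependencies, assuming Spark 2.1.0 with Scala 2.11 from the environment above (adjust the versions to match your cluster):

    // build.sbt -- versions are assumptions based on the environment above
    scalaVersion := "2.11.8"

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core"                 % "2.1.0",
      "org.apache.spark" %% "spark-sql"                  % "2.1.0",
      "org.apache.spark" %% "spark-streaming"            % "2.1.0",
      "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.1.0"
    )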

Because my cluster uses Kerberos, I pass this config file (kafka-jaas.conf) to my IDE (Eclipse -> VM Arguments):

-Djava.security.auth.login.config=kafka-jaas.conf

kafka-jaas.conf content:

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="user.keytab"
    serviceName="kafka"
    principal="user@HOST.COM";
};
Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="user.keytab"
   storeKey=true
   useTicketCache=false
   serviceName="zookeeper"
   principal="user@HOST.COM";
};
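
In addition to the JAAS file, the JVM also has to find the Kerberos configuration itself. A sketch of the full set of VM arguments, assuming krb5.conf was copied from the cluster (e.g. from /etc/krb5.conf) into the project's working directory:

    -Djava.security.auth.login.config=kafka-jaas.conf
    -Djava.security.krb5.conf=krb5.conf

Note that the keyTab path in the JAAS file is relative, so user.keytab must also be resolvable from the same working directory.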
