I am working on an ETL pipeline in Spark, and I find that pushing a release is time- and bandwidth-intensive. My release script (pseudocode):
sbt assembly
openstack object create spark target/scala-2.11/etl-$VERSION-super.jar
spark-submit \
--class comapplications.WindowsETLElastic \
--master spark://spark-submit.cloud \
--deploy-mode cluster \
--verbose \
--conf "spark.executor.memory=16g" \
"$JAR_URL"
This works, but it can take over 4 minutes to assemble the jar and another minute to push it. My build.sbt:
name := "secmon_etl"
version := "1.2"
scalaVersion := "2.11.8"
exportJars := true
assemblyJarName in assembly := s"${name.value}-${version.value}-super.jar"
libraryDependencies ++= Seq (
"org.apache.spark" %% "spark-core" % "2.1.0" % "provided",
"org.apache.spark" %% "spark-streaming" % "2.1.0" % "provided",
"org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.1.0",
"io.spray" %% "spray-json" % "1.3.3",
// "commons-net" % "commons-net" % "3.5",
// "org.apache.httpcomponents" % "httpclient" % "4.5.2",
"org.elasticsearch" % "elasticsearch-spark-20_2.11" % "5.3.1"
)
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}
The issue appears to be the sheer size of elasticsearch-spark-20_2.11, which adds about 90MB to my uberjar. I would be happy to turn it into a provided dependency, installed on the Spark hosts, so it no longer needs to be packaged. The question is: what's the best way to do that? Should I just manually copy over jars, or is there a foolproof way of specifying a dependency and having a tool resolve all of its transitive dependencies?
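For reference, one tool that resolves a dependency's full transitive closure is the Coursier CLI. This is just a sketch assuming `cs` is installed; it is not part of my build:

```shell
# Resolve elasticsearch-spark plus all of its transitive dependencies
# and print the paths of the fetched jars; these could then be copied
# into the jars/ directory on each Spark host.
cs fetch org.elasticsearch:elasticsearch-spark-20_2.11:5.3.1
```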
I now have my Spark jobs running, and deploys are much faster. I ran
sbt assemblyPackageDependency
which generated a huge jar (110MB!) that is easily placed in the 'jars' folder of the Spark working directory, so my Dockerfile for a Spark cluster now looks like this:
FROM openjdk:8-jre
ENV SPARK_VERSION 2.1.0
ENV HADOOP_VERSION hadoop2.7
ENV SPARK_MASTER_OPTS="-Djava.net.preferIPv4Stack=true"
RUN apt-get update && apt-get install -y python
RUN curl -sSLO http://mirrors.ocf.berkeley.edu/apache/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-$HADOOP_VERSION.tgz \
    && tar xzf /spark-$SPARK_VERSION-bin-$HADOOP_VERSION.tgz -C /usr/share \
    && rm /spark-$SPARK_VERSION-bin-$HADOOP_VERSION.tgz
# master's or worker's web UI port
EXPOSE 8080
# master's submission (spark://) port
EXPOSE 7077
ADD deps.jar /usr/share/spark-$SPARK_VERSION-bin-$HADOOP_VERSION/jars/
WORKDIR /usr/share/spark-$SPARK_VERSION-bin-$HADOOP_VERSION
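For completeness, here is roughly how an image like this gets built and run; the image tag and container names below are placeholders, not taken from my actual deploy scripts:

```shell
# Build the image; deps.jar (from assemblyPackageDependency) must sit
# next to the Dockerfile so the ADD instruction can pick it up.
docker build -t spark-cluster .

# Start a standalone master, then a worker that registers with it.
docker run -d --name spark-master -p 8080:8080 -p 7077:7077 spark-cluster \
  bin/spark-class org.apache.spark.deploy.master.Master
docker run -d --name spark-worker --link spark-master spark-cluster \
  bin/spark-class org.apache.spark.deploy.worker.Worker spark://spark-master:7077
```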
After deploying that configuration, I changed my build.sbt so the kafka-streaming / elasticsearch-spark jars and their dependencies are marked as provided:
name := "secmon_etl"
version := "1.2"
scalaVersion := "2.11.8"
exportJars := true
assemblyJarName in assembly := s"${name.value}-${version.value}-super.jar"
libraryDependencies ++= Seq (
"org.apache.spark" %% "spark-core" % "2.1.0" % "provided",
"org.apache.spark" %% "spark-streaming" % "2.1.0" % "provided",
"org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.1.0" % "provided",
"io.spray" %% "spray-json" % "1.3.3" % "provided",
"org.elasticsearch" % "elasticsearch-spark-20_2.11" % "5.3.1" % "provided"
)
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}
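To check that the exclusion actually worked, one can list the assembly's contents; the jar path below just follows the assemblyJarName setting above:

```shell
# After `sbt assembly`, the uberjar should contain only application
# classes; elasticsearch packages should no longer show up here.
unzip -l target/scala-2.11/secmon_etl-1.2-super.jar | grep -i elasticsearch
```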
Now my deploys go through in 20 seconds!