
How to run spark 3.2.0 on google dataproc?

Currently, Google Dataproc does not offer an image with Spark 3.2.0; the latest available is 3.1.2. I want to use the pandas-on-Spark functionality, which was released with Spark 3.2.0.

I am following these steps to use Spark 3.2.0:

  1. Created a local conda environment "pyspark" that contains pyspark 3.2.0
  2. Exported the environment YAML with conda env export > environment.yaml
  3. Created a Dataproc cluster using this environment.yaml. The cluster was created correctly, and the environment is available on the master and on all workers (see the sketch after this list)
  4. Then I changed the environment variables: export SPARK_HOME=/opt/conda/miniconda3/envs/pyspark/lib/python3.9/site-packages/pyspark (to point at pyspark 3.2.0), export SPARK_CONF_DIR=/usr/lib/spark/conf (to use Dataproc's configuration files), and export PYSPARK_PYTHON=/opt/conda/miniconda3/envs/pyspark/bin/python (to make the environment's packages available)
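
A minimal sketch of steps 1–3, assuming the environment is handed to the cluster through the dataproc:conda.env.config.uri cluster property (the bucket, region, and cluster name are placeholders):

# Steps 1–2: build a local conda env with pyspark 3.2.0 and export it
conda create -n pyspark -c conda-forge python=3.9 pyspark=3.2.0
conda activate pyspark
conda env export > environment.yaml
gsutil cp environment.yaml gs://your-bucket/environment.yaml

# Step 3: create the cluster and point it at the exported environment file
# (dataproc:conda.env.config.uri is assumed here to be the relevant cluster property)
gcloud dataproc clusters create your-cluster \
    --region=us-central1 \
    --image-version=2.0 \
    --properties=dataproc:conda.env.config.uri=gs://your-bucket/environment.yaml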

Now, if I try to run the pyspark shell, I get:

21/12/07 01:25:16 ERROR org.apache.spark.scheduler.AsyncEventQueue: Listener AppStatusListener threw an exception
java.lang.NumberFormatException: For input string: "null"
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
        at java.lang.Integer.parseInt(Integer.java:580)
        at java.lang.Integer.parseInt(Integer.java:615)
        at scala.collection.immutable.StringLike.toInt(StringLike.scala:304)
        at scala.collection.immutable.StringLike.toInt$(StringLike.scala:304)
        at scala.collection.immutable.StringOps.toInt(StringOps.scala:33)
        at org.apache.spark.util.Utils$.parseHostPort(Utils.scala:1126)
        at org.apache.spark.status.ProcessSummaryWrapper.<init>(storeTypes.scala:527)
        at org.apache.spark.status.LiveMiscellaneousProcess.doUpdate(LiveEntity.scala:924)
        at org.apache.spark.status.LiveEntity.write(LiveEntity.scala:50)
        at org.apache.spark.status.AppStatusListener.update(AppStatusListener.scala:1213)
        at org.apache.spark.status.AppStatusListener.onMiscellaneousProcessAdded(AppStatusListener.scala:1427)
        at org.apache.spark.status.AppStatusListener.onOtherEvent(AppStatusListener.scala:113)
        at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:100)
        at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)
        at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
        at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
        at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:117)
        at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:101)
        at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:105)
        at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:105)
        at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
        at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:100)
        at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:96)
        at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1404)
        at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:96)

然而,shell 確實在這之后開始。 但是,它不執行代碼。 拋出異常:我嘗試運行: set(sc.parallelize(range(10),10).map(lambda x: socket.gethostname()).collect())但是,我得到:

21/12/07 01:32:15 WARN org.apache.spark.deploy.yarn.YarnAllocator: Container from a bad node: container_1638782400702_0003_01_000001 on host: monsoon-test1-w-2.us-central1-c.c.monsoon-credittech.internal. Exit status: 1. Diagnostics: [2021-12-07 
01:32:13.672]Exception from container-launch.
Container id: container_1638782400702_0003_01_000001
Exit code: 1
[2021-12-07 01:32:13.717]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
ltChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.lang.Thread.run(Thread.java:748)
21/12/07 01:31:43 ERROR org.apache.spark.executor.YarnCoarseGrainedExecutorBackend: Executor self-exiting due to : Driver monsoon-test1-m.us-central1-c.c.monsoon-credittech.internal:44367 disassociated! Shutting down.
21/12/07 01:32:13 WARN org.apache.hadoop.util.ShutdownHookManager: ShutdownHook '$anon$2' timeout, java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException
        at java.util.concurrent.FutureTask.get(FutureTask.java:205)
        at org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
        at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
21/12/07 01:32:13 ERROR org.apache.spark.util.Utils: Uncaught exception in thread shutdown-hook-0
java.lang.InterruptedException
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
        at java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1475)
        at java.util.concurrent.Executors$DelegatedExecutorService.awaitTermination(Executors.java:675)
        at org.apache.spark.rpc.netty.MessageLoop.stop(MessageLoop.scala:60)
        at org.apache.spark.rpc.netty.Dispatcher.$anonfun$stop$1(Dispatcher.scala:197)
        at org.apache.spark.rpc.netty.Dispatcher.$anonfun$stop$1$adapted(Dispatcher.scala:194)
        at scala.collection.Iterator.foreach(Iterator.scala:943)
        at scala.collection.Iterator.foreach$(Iterator.scala:943)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
        at scala.collection.IterableLike.foreach(IterableLike.scala:74)
        at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
        at org.apache.spark.rpc.netty.Dispatcher.stop(Dispatcher.scala:194)
        at org.apache.spark.rpc.netty.NettyRpcEnv.cleanup(NettyRpcEnv.scala:331)
        at org.apache.spark.rpc.netty.NettyRpcEnv.shutdown(NettyRpcEnv.scala:309)
        at org.apache.spark.SparkEnv.stop(SparkEnv.scala:96)
        at org.apache.spark.executor.Executor.stop(Executor.scala:335)
        at org.apache.spark.executor.Executor.$anonfun$new$2(Executor.scala:76)
        at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:214)
        at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$2(ShutdownHookManager.scala:188)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1996)
        at org.apache.spark.util.SparkShutdownHookManager.$anonfun$runAll$1(ShutdownHookManager.scala:188)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at scala.util.Try$.apply(Try.scala:213)
        at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
        at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

The same error is repeated many times before it stops.

我做錯了什么以及如何在 google dataproc 上使用 python 3.2.0?

This can be achieved as follows:

  1. Create a Dataproc cluster with an environment (your_sample_env) that includes pyspark 3.2 as a package
  2. Modify /usr/lib/spark/conf/spark-env.sh by adding
SPARK_HOME="/opt/conda/miniconda3/envs/your_sample_env/lib/python/site-packages/pyspark"
SPARK_CONF="/usr/lib/spark/conf"

and finally

  3. Modify /usr/lib/spark/conf/spark-defaults.conf by commenting out the following configurations:
spark.yarn.jars=local:/usr/lib/spark/jars/*
spark.yarn.unmanagedAM.enabled=true

Now, your Spark jobs will use pyspark 3.2.
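
A minimal sketch of applying those edits on the master node, assuming an environment named your_sample_env (the exact site-packages path depends on the Python version inside the environment):

# Point Spark at the pyspark package installed inside the conda environment
cat <<'EOF' >> /usr/lib/spark/conf/spark-env.sh
SPARK_HOME="/opt/conda/miniconda3/envs/your_sample_env/lib/python/site-packages/pyspark"
SPARK_CONF="/usr/lib/spark/conf"
EOF

# Comment out the two properties so Spark no longer picks up the bundled 3.1 jars
sed -i 's/^spark\.yarn\.jars=/#&/' /usr/lib/spark/conf/spark-defaults.conf
sed -i 's/^spark\.yarn\.unmanagedAM\.enabled=/#&/' /usr/lib/spark/conf/spark-defaults.conf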

Dataproc Serverless for Spark has just been released and supports Spark 3.2.0: https://cloud.google.com/dataproc-serverless
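
For example, a PySpark job can be submitted as a serverless batch roughly like this (the script path and region are placeholders):

# Submits the job to Dataproc Serverless, which runs Spark 3.2
gcloud dataproc batches submit pyspark gs://your-bucket/your_job.py \
    --region=us-central1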

@milominderbinder's answer did not work for me in notebooks. I used the pip install script provided by Google and added the code below to its main function.

function main() {
  install_pip
  pip install pyspark==3.2.0
  sed -i '4d;27d' /usr/lib/spark/conf/spark-defaults.conf
  cat << EOF | tee -a /etc/profile.d/custom_env.sh /etc/*bashrc >/dev/null
export SPARK_HOME=/opt/conda/miniconda3/lib/python3.8/site-packages/pyspark/
export SPARK_CONF=/usr/lib/spark/conf
EOF
  sed -i 's/\/usr\/lib\/spark/\/opt\/conda\/miniconda3\/lib\/python3.8\/site-packages\/pyspark\//g' /opt/conda/miniconda3/share/jupyter/kernels/python3/kernel.json
  if [[ -z "${PACKAGES}" ]]; then
    echo "WARNING: requirements empty"
    exit 0
  fi
  run_with_retry pip install --upgrade ${PACKAGES}
}

This made it work with the Python3 kernel in JupyterLab.

快速而骯臟的腳本,在 Dataproc 映像 2.0 的初始化操作中完成:

#!/usr/bin/env bash

spark_version="3.3.0"

cd /opt

if [[ ! -L /opt/spark ]]; then
    archive_filename="spark-${spark_version}-bin-without-hadoop.tgz"
    rm -rf spark*
    wget "https://dlcdn.apache.org/spark/spark-${spark_version}/${archive_filename}"
    tar xvfz "${archive_filename}"
    rm -f spark*.tgz*
    ln -s spark-* spark
fi

# This will cause spark to fallback to defaults. There's probably a better way.
sed -i '/^spark\.yarn\.jars/d' /usr/lib/spark/conf/spark-defaults.conf

# By default, Dataproc uses Hive. For unknown reasons, this doesn't work, so we replace it with 'in-memory'.
sed -i '/^spark\.sql\.catalogImplementation/d' /usr/lib/spark/conf/spark-defaults.conf
echo "spark.sql.catalogImplementation=in-memory" >>/usr/lib/spark/conf/spark-defaults.conf

# note: weird filename to ensure this runs after all the other profile.d scripts...
{
    # shellcheck disable=SC2016
    echo 'export PATH=/opt/spark/bin:$PATH'
    echo "export SPARK_CONF_DIR=/usr/lib/spark/conf"
    echo "export SPARK_HOME=/opt/spark"
    # shellcheck disable=SC2016
    echo 'export PYTHONPATH=$(ZIPS=("$SPARK_HOME"/python/lib/*.zip); IFS=:; echo "${ZIPS[*]}"):$PYTHONPATH'
    # shellcheck disable=SC2016
    echo 'export SPARK_DIST_CLASSPATH=$(hadoop classpath)'
} >/etc/profile.d/zzzzzzzzzzzzz-custom-spark.sh
chmod +x /etc/profile.d/zzzzzzzzzzzzz-custom-spark.sh
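
A sketch of how such a script might be wired in when creating the cluster (the script name, bucket, and cluster name are placeholders):

# Stage the script in GCS and reference it as an initialization action
gsutil cp install-spark.sh gs://your-bucket/init/install-spark.sh
gcloud dataproc clusters create your-cluster \
    --region=us-central1 \
    --image-version=2.0-debian10 \
    --initialization-actions=gs://your-bucket/init/install-spark.sh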

