
Spark-operator on EKS: Apache Spark failed to create temp directory

I am trying to deploy a simple spark-pi.yaml onto AWS EKS using spark-operator. I have already deployed the spark-operator itself successfully.

For the deployment YAML, refer to the spark-operator example.
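For context, the upstream spark-pi example manifest looks roughly like this (a minimal sketch based on the spark-operator repository; the image tag, jar path, and the "spark" service account name are taken from that example and may differ in your setup):

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: "gcr.io/spark-operator/spark:v3.1.1"   # example image from the spark-operator repo
  imagePullPolicy: Always
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar"
  sparkVersion: "3.1.1"
  restartPolicy:
    type: Never
  driver:
    cores: 1
    memory: "512m"
    serviceAccount: spark   # service account used by the driver pod in the example
  executor:
    cores: 1
    instances: 1
    memory: "512m"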

I am getting the following error when I do helm install:

Events:
  Type     Reason                            Age   From            Message
  ----     ------                            ----  ----            -------
  Normal   SparkApplicationAdded             8s    spark-operator  SparkApplication spark-pi was added, enqueuing it for submission
  Warning  SparkApplicationSubmissionFailed  5s    spark-operator  failed to submit SparkApplication spark-pi: failed to run spark-submit for SparkApplication spark-operator/spark-pi: WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/opt/spark/jars/spark-unsafe_2.12-3.1.1.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Exception in thread "main" java.io.IOException: Failed to create a temp directory (under /tmp) after 10 attempts!
  at org.apache.spark.util.Utils$.createDirectory(Utils.scala:305)
  at org.apache.spark.util.Utils$.createTempDir(Utils.scala:325)
  at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:343)
  at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
  at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
  at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
  at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
  at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1030)
  at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1039)
  at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

How can I resolve this issue?

This is going to be hard to debug, but based on my experience, a couple of things could be going on here:

  1. I see your executor doesn't have its service account defined. You may want to define it explicitly (see the sketch after this list).
  2. There may not be enough space in your volume to create the /tmp directory. You may want to double-check your volume size.
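As a rough illustration of both points, the SparkApplication CRD (v1beta2) lets you set serviceAccount on the executor spec and mount volumes into the driver and executor pods. The fragment below is a sketch to merge into your spec, not a complete manifest; the "spark" service account name and the sizeLimit are placeholders for whatever exists in your cluster:

spec:
  volumes:
    - name: spark-tmp
      emptyDir:
        sizeLimit: "1Gi"        # placeholder size; adjust to your workload
  driver:
    serviceAccount: spark        # assumes a "spark" service account with the RBAC the operator expects
    volumeMounts:
      - name: spark-tmp
        mountPath: /tmp          # gives the pod a dedicated scratch space for /tmp
  executor:
    serviceAccount: spark        # explicitly set for executors as well
    volumeMounts:
      - name: spark-tmp
        mountPath: /tmp

If the pods still fail to create the temp directory after something like this, checking the free space on the node volumes backing /tmp would be the next step.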
