
Pending Spark pod on Google Kubernetes cluster: insufficient CPU

I am trying to submit a Spark job to a Google Kubernetes cluster via spark-submit.

The Docker image is built from the official Spark Dockerfile shipped with version 2.3.0.

Here is the submit script:

#! /bin/bash
spark-submit \
--master k8s://https://<master url> \
--deploy-mode cluster \
--conf spark.executor.instances=1 \
--conf spark.kubernetes.container.image=<official image> \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.app.name=app-name \
--class ExpletivePI \
--name spark-pi \
local:///opt/spark/examples/spark-demo.jar

I can run it perfectly on a local minikube.

However, when I try to submit it to my Google Kubernetes cluster, the driver pod always stays in Pending and is never scheduled, due to insufficient CPU:

0/3 nodes are available: 3 Insufficient cpu. 

The kubectl describe nodes output looks fine. Here is the describe output for the problematic pod:

Name:         spark-pi-e890cd00394b3b20942f22d0a9173c1c-driver
Namespace:    default
Node:         <none>
Labels:       spark-app-selector=spark-3e8ff877bebd46be9fc8d956531ba186
              spark-role=driver
Annotations:  spark-app-name=spark-pi
Status:       Pending
IP:           
Containers:
  spark-kubernetes-driver:
    Image:      geekbeta/spark:v2
    Port:       <none>
    Host Port:  <none>
    Args:
      driver
    Limits:
      memory:  1408Mi
    Requests:
      cpu:     1
      memory:  1Gi
    Environment:
      SPARK_DRIVER_MEMORY:        1g
      SPARK_DRIVER_CLASS:         ExpletivePI
      SPARK_DRIVER_ARGS:          
      SPARK_DRIVER_BIND_ADDRESS:   (v1:status.podIP)
      SPARK_MOUNTED_CLASSPATH:    /opt/spark/tang_stuff/spark-demo.jar:/opt/spark/tang_stuff/spark-demo.jar
      SPARK_JAVA_OPT_0:           -Dspark.app.name=spark-pi
      SPARK_JAVA_OPT_1:           -Dspark.app.id=spark-3e8ff877bebd46be9fc8d956531ba186
      SPARK_JAVA_OPT_2:           -Dspark.driver.host=spark-pi-e890cd00394b3b20942f22d0a9173c1c-driver-svc.default.svc
      SPARK_JAVA_OPT_3:           -Dspark.submit.deployMode=cluster
      SPARK_JAVA_OPT_4:           -Dspark.driver.blockManager.port=7079
      SPARK_JAVA_OPT_5:           -Dspark.kubernetes.executor.podNamePrefix=spark-pi-e890cd00394b3b20942f22d0a9173c1c
      SPARK_JAVA_OPT_6:           -Dspark.master=k8s://https://35.229.152.59
      SPARK_JAVA_OPT_7:           -Dspark.kubernetes.authenticate.driver.serviceAccountName=spark
      SPARK_JAVA_OPT_8:           -Dspark.executor.instances=1
      SPARK_JAVA_OPT_9:           -Dspark.kubernetes.container.image=geekbeta/spark:v2
      SPARK_JAVA_OPT_10:          -Dspark.kubernetes.driver.pod.name=spark-pi-e890cd00394b3b20942f22d0a9173c1c-driver
      SPARK_JAVA_OPT_11:          -Dspark.jars=/opt/spark/tang_stuff/spark-demo.jar,/opt/spark/tang_stuff/spark-demo.jar
      SPARK_JAVA_OPT_12:          -Dspark.driver.port=7078
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from spark-token-9gdsb (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  spark-token-9gdsb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  spark-token-9gdsb
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  3m (x125 over 38m)  default-scheduler  0/3 nodes are available: 3 Insufficient cpu.

My cluster has 3 CPUs and 11G of RAM in total. I'm really confused and have no idea what to do. Any suggestions or comments would be greatly appreciated, thanks!
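A quick way to investigate this kind of failure is to compare each node's allocatable CPU with what is already requested on it. A minimal diagnostic sketch using standard kubectl commands:

#! /bin/bash
# "Allocated resources" in the describe output shows, per node, how much
# CPU/memory is already requested versus the node's allocatable capacity.
kubectl describe nodes | grep -A 8 "Allocated resources"

# Print just the allocatable CPU of each node.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.cpu}{"\n"}{end}'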

Problem solved: it seems that by default the driver pod requests 1 full CPU, which can never be satisfied on my GCP cluster, because each node has only one CPU and the kube-system pods already reserve part of it, so the allocatable CPU on every node is below 1.

After changing the driver pod's CPU request to a lower value, it runs on GCP; see the sketch below.
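A minimal sketch of the adjusted submission, assuming the Spark 2.3 on Kubernetes behavior where spark.driver.cores is forwarded as the driver pod's CPU request and spark.kubernetes.driver.limit.cores caps its limit; the 0.5 value is illustrative and should be tuned to your nodes:

#! /bin/bash
# Same submission as before, with the driver's CPU request lowered so the
# pod fits on a 1-vCPU node. Kubernetes quantities such as 0.5 or 500m
# should be accepted, but verify against the Spark docs for your version.
spark-submit \
--master k8s://https://<master url> \
--deploy-mode cluster \
--conf spark.executor.instances=1 \
--conf spark.driver.cores=0.5 \
--conf spark.kubernetes.driver.limit.cores=1 \
--conf spark.kubernetes.container.image=<official image> \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.app.name=app-name \
--class ExpletivePI \
--name spark-pi \
local:///opt/spark/examples/spark-demo.jar

The executor pods' CPU request can be tuned the same way via spark.executor.cores if they also fail to schedule.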
