
How to fix: pods "" is forbidden: User "system:anonymous" cannot watch resource "pods" in API group "" in the namespace "default"

I am trying to run Spark on Kubernetes (k8s), and I have set up my RBAC with the following commands:

kubectl create serviceaccount spark

kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
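To double-check that the binding above actually grants the permission the error complains about, the service account can be tested with `kubectl auth can-i` (a quick sanity check, not part of the original question):

```shell
# Sanity check: can the "spark" service account watch pods in "default"?
# This should print "yes" if the clusterrolebinding above took effect.
kubectl auth can-i watch pods \
  --as=system:serviceaccount:default:spark \
  --namespace=default
```

Note that this only verifies the in-cluster service account; the error message names `system:anonymous`, which points at the submission-side credentials instead.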

Spark command run from outside the k8s cluster:

bin/spark-submit \
  --master k8s://https://<master_ip>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.authenticate.submission.caCertFile=/usr/local/spark/spark-2.4.5-bin-hadoop2.7/ca.crt \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.container.image=bitnami/spark:latest \
  test.py

Error:

   Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: pods "test-py-1590306482639-driver" is forbidden: User "system:anonymous" cannot watch resource "pods" in API group "" in the namespace "default"
    at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1.onFailure(WatchConnectionManager.java:206)
    at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
    at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
    at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
    at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
    Suppressed: java.lang.Throwable: waiting here
        at io.fabric8.kubernetes.client.utils.Utils.waitUntilReady(Utils.java:134)
        at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.waitUntilReady(WatchConnectionManager.java:350)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:759)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:738)
        at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:69)
        at org.apache.spark.deploy.k8s.submit.Client$$anonfun$run$1.apply(KubernetesClientApplication.scala:140)
        at org.apache.spark.deploy.k8s.submit.Client$$anonfun$run$1.apply(KubernetesClientApplication.scala:140)
        at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2542)
        at org.apache.spark.deploy.k8s.submit.Client.run(KubernetesClientApplication.scala:140)
        at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication$$anonfun$run$5.apply(KubernetesClientApplication.scala:250)
        at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication$$anonfun$run$5.apply(KubernetesClientApplication.scala:241)
        at org.apache.spark.util.Utils$.tryWithResource(Utils.scala:2543)
        at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.run(KubernetesClientApplication.scala:241)
        at org.apache.spark.deploy.k8s.submit.KubernetesClientApplication.start(KubernetesClientApplication.scala:204)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
20/05/24 07:48:04 INFO ShutdownHookManager: Shutdown hook called
20/05/24 07:48:04 INFO ShutdownHookManager: Deleting directory /tmp/spark-f0eeb957-a02e-458f-8778-21fb2307cf42

Spark Docker image source --> docker pull bitnami/spark

I have also placed my crt file on the k8s cluster's master server. I am trying to run the spark-submit command from another GCP instance.

Can someone please help me here? I have been stuck on this for the past few days.

EDIT

I created another cluster role with cluster-admin permissions, but it still does not work.

spark.kubernetes.authenticate only applies in deploy mode client, and you are running with deploy mode cluster.

Depending on how you authenticate to your Kubernetes cluster, you may need to provide different configuration parameters starting with spark.kubernetes.authenticate.submission (these are the ones that apply when running with deploy mode cluster). Look into your ~/.kube/config file and search for the user entry. For example, if the user section specifies

access-token: XXXX

then pass it via spark.kubernetes.authenticate.submission.oauthToken
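Putting that together, a sketch of the adjusted submission command (assumptions: the kubeconfig's first user entry carries a plain `token` field — the actual field name and location vary by auth provider, e.g. GKE stores it under auth-provider config — and the paths are taken from the question above):

```shell
# Read the bearer token of the first kubeconfig user (assumes a plain "token"
# field; adjust the jsonpath to match your own ~/.kube/config layout).
TOKEN=$(kubectl config view --raw -o jsonpath='{.users[0].user.token}')

# Same command as before, plus submission-side credentials so the watch
# request is no longer made as system:anonymous.
bin/spark-submit \
  --master k8s://https://<master_ip>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.authenticate.submission.caCertFile=/usr/local/spark/spark-2.4.5-bin-hadoop2.7/ca.crt \
  --conf spark.kubernetes.authenticate.submission.oauthToken="$TOKEN" \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.container.image=bitnami/spark:latest \
  test.py
```

If the kubeconfig uses client certificates instead of a token, the analogous options are spark.kubernetes.authenticate.submission.clientCertFile and spark.kubernetes.authenticate.submission.clientKeyFile.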

