Kubernetes 'watch' not receiving events from inside pod
I'm using an EKS cluster. I have a script which calls the 'watch' API on a custom resource. When I run this script from my laptop using my cluster-admin credentials, events arrive as expected. However, whenever I run the script inside a pod using the in-cluster security credentials, no events ever arrive, yet there are no authentication or other errors.
It doesn't appear to be a namespace problem: I see the same behaviour whether or not the resources are created in the namespace where the pod runs and its service account lives.
What could be causing this?
The API request I'm making is:
GET /apis/mydomain.com/v1/mycustomresource?watch=1
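For reference, here is a minimal stdlib-only sketch of how that watch request can be built and issued from inside a pod. The group/version/plural come from the request above; the token and CA paths are the standard in-cluster service-account mounts; the streaming loop is illustrative, not the asker's actual script:

```python
import json
import ssl
from http.client import HTTPSConnection

# Standard in-cluster service-account mounts.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

def watch_path(group: str, version: str, plural: str) -> str:
    """Build the watch URL path for a cluster-scoped custom resource."""
    return f"/apis/{group}/{version}/{plural}?watch=1"

def stream_events(host: str = "kubernetes.default.svc") -> None:
    """Open the watch and print one event per line (illustrative)."""
    with open(TOKEN_PATH) as f:
        token = f.read().strip()
    ctx = ssl.create_default_context(cafile=CA_PATH)
    conn = HTTPSConnection(host, context=ctx)
    conn.request("GET", watch_path("mydomain.com", "v1", "mycustomresource"),
                 headers={"Authorization": f"Bearer {token}"})
    resp = conn.getresponse()
    for line in resp:  # the watch response is one JSON object per line
        event = json.loads(line)
        print(event["type"], event["object"]["metadata"].get("name"))
```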
Any help gratefully received.
Here's the ClusterRole:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: manage-mycustomresource
  namespace: kube-system
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
...and here's the ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    meta.helm.sh/release-name: mycustomresource-operator
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2020-07-01T13:23:08Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: mycustomresource-operator
  resourceVersion: "12976069"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/mycustomresource-operator
  uid: 41e6ef6d-cc96-43ec-a58e-48299290f1bc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: mycustomresource-operator
  namespace: kube-system
...and the ServiceAccount for the pod:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::043180741939:role/k8s-mycustomresource-operator
    meta.helm.sh/release-name: mycustomresource-operator
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2020-07-01T13:23:08Z"
  labels:
    app.kubernetes.io/instance: mycustomresource-operator
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mycustomresource-operator
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: mycustomresource-operator-0.1.0
  name: mycustomresource-operator
  namespace: kube-system
  resourceVersion: "12976060"
  selfLink: /api/v1/namespaces/kube-system/serviceaccounts/mycustomresource-operator
  uid: 4f30b10b-1deb-429e-95e4-2ff2a91a32c3
secrets:
- name: mycustomresource-operator-token-qz9xz
...and the Deployment in which the script runs:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: mycustomresource-operator
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2020-07-01T13:23:08Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: mycustomresource-operator
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mycustomresource-operator
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: mycustomresource-operator-0.1.0
  name: mycustomresource-operator
  namespace: kube-system
  resourceVersion: "12992297"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/mycustomresource-operator
  uid: 7b118d47-e467-48f9-b497-f9e4592e6baf
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: mycustomresource-operator
      app.kubernetes.io/name: mycustomresource-operator
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: mycustomresource-operator
        app.kubernetes.io/name: mycustomresource-operator
    spec:
      containers:
      - image: myrepo.com/myrepo/k8s-mycustomresource-operator:master
        imagePullPolicy: Always
        name: mycustomresource-operator
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: mycustomresource-operator
      serviceAccountName: mycustomresource-operator
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-07-01T13:23:08Z"
    lastUpdateTime: "2020-07-01T13:23:10Z"
    message: ReplicaSet "mycustomresource-operator-5dc74765cd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-07-01T15:13:31Z"
    lastUpdateTime: "2020-07-01T15:13:31Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Check the permissions of the service account using:
kubectl auth can-i watch mycustomresource --as=system:serviceaccount:kube-system:mycustomresource-operator -n kube-system
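If that check answers "yes", it's also worth confirming the script consumes the watch stream correctly: the API server sends one JSON object per line, each carrying a type (ADDED, MODIFIED, DELETED) and the affected object. A small sketch of the decoding step, using hypothetical sample payloads for illustration:

```python
import json

def decode_watch_lines(lines):
    """Decode newline-delimited watch events into (type, name) pairs."""
    events = []
    for raw in lines:
        if not raw.strip():
            continue  # tolerate blank keep-alive lines in the stream
        event = json.loads(raw)
        events.append((event["type"], event["object"]["metadata"]["name"]))
    return events

# Hypothetical sample stream, for illustration only.
sample = [
    '{"type": "ADDED", "object": {"metadata": {"name": "foo"}}}',
    '{"type": "MODIFIED", "object": {"metadata": {"name": "foo"}}}',
]
print(decode_watch_lines(sample))  # → [('ADDED', 'foo'), ('MODIFIED', 'foo')]
```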