Pod assigned node role instead of service account role on AWS EKS
Before I get started: I have seen questions this and this, and they did not help.
I have a k8s cluster on AWS EKS on which I am deploying a custom k8s controller for my application. Following the instructions from eksworkshop.com, I created my service account with the appropriate IAM role using eksctl. I assign the role in my deployment.yaml as seen below. I also set the securityContext, as that seemed to solve the problem in another case, as described here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tel-controller
  namespace: tel
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tel-controller
  strategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: tel-controller
    spec:
      serviceAccountName: tel-controller-serviceaccount
      securityContext:
        fsGroup: 65534
      containers:
        - image: <image name>
          imagePullPolicy: Always
          name: tel-controller
          args:
            - --metrics-bind-address=:8080
            - --health-probe-bind-address=:8081
            - --leader-elect=true
          ports:
            - name: webhook-server
              containerPort: 9443
              protocol: TCP
            - name: metrics-port
              containerPort: 8080
              protocol: TCP
            - name: health-port
              containerPort: 8081
              protocol: TCP
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            allowPrivilegeEscalation: false
But this does not seem to be working. If I describe the pod, I see the correct role:
AWS_DEFAULT_REGION: us-east-1
AWS_REGION: us-east-1
AWS_ROLE_ARN: arn:aws:iam::xxxxxxxxx:role/eksctl-eks-tel-addon-iamserviceaccount-tel-t-Role1-3APV5KCV33U8
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
/var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6ngsr (ro)
But if I do a sts.GetCallerIdentityInput() from inside the controller application, I see the node role. And obviously I get an access denied error:

caller identity: (go string) { Account: "xxxxxxxxxxxx", Arn: "arn:aws:sts::xxxxxxxxxxx:assumed-role/eksctl-eks-tel-nodegroup-voice-NodeInstanceRole-BJNYF5YC2CE3/i-0694a2766c5d70901", UserId: "AROAZUYK7F2GRLKRGGNXZ:i-0694a2766c5d70901" }
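For context, the identity check above can be sketched roughly as follows (the helper name, logging, and session setup are my assumptions, not the asker's code; the SDK call is `sts.GetCallerIdentity` from aws-sdk-go v1):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

// printCallerIdentity reports which IAM principal the session's credentials
// resolve to; handy for telling the IRSA role apart from the node role.
func printCallerIdentity(sess *session.Session) {
	out, err := sts.New(sess).GetCallerIdentity(&sts.GetCallerIdentityInput{})
	if err != nil {
		log.Fatalf("get caller identity: %v", err)
	}
	fmt.Printf("caller identity: %s\n", out)
}

func main() {
	// session.Must panics if session creation fails.
	printCallerIdentity(session.Must(session.NewSession()))
}
```

If this prints the NodeInstanceRole ARN instead of the IAM service account role, the pod's credentials were resolved from the instance metadata rather than the injected web identity token.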
This is how I created my service account:
eksctl create iamserviceaccount --cluster ${EKS_CLUSTER_NAME} \
--namespace tel \
--name tel-controller-serviceaccount \
--attach-policy-arn arn:aws:iam::xxxxxxxxxx:policy/telcontrollerRoute53Policy \
--override-existing-serviceaccounts --approve
I have done this successfully in the past. The difference this time is that I also have a role and role binding attached to this service account. My rbac.yaml for this SA:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tel-controller-role
  labels:
    app: tel-controller
rules:
  - apiGroups: [""]
    resources: [events]
    verbs: [create, delete, get, list, update, watch]
  - apiGroups: ["networking.k8s.io"]
    resources: [ingressclasses]
    verbs: [get, list]
  - apiGroups: ["", "networking.k8s.io"]
    resources: [services, ingresses]
    verbs: [create, get, list, patch, update, delete, watch]
  - apiGroups: [""]
    resources: [configmaps]
    verbs: [create, delete, get, update]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: [get, create, update]
  - apiGroups: [""]
    resources: [pods]
    verbs: [get, list, watch, update]
  - apiGroups: ["", "networking.k8s.io"]
    resources: [services/status, ingresses/status]
    verbs: [update, patch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tel-controller-rolebinding
  labels:
    app: tel-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tel-controller-role
subjects:
  - kind: ServiceAccount
    name: tel-controller-serviceaccount
    namespace: tel
What am I doing wrong here? Thanks.
PS: I am deploying using kubectl.
PPS: From go.mod, I am using github.com/aws/aws-sdk-go v1.44.28.
So the Go SDK has two methods to create a session. Apparently I was using the deprecated method, session.New, to create my session. Switching to the recommended method, session.NewSession, solved my problem.
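A minimal sketch of the fix (error handling is my addition; the key point is that, per the v1 SDK docs, `session.New` is deprecated and does not perform the fuller session/credential resolution that `session.NewSession` does, which is what picks up the IRSA web identity environment variables):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	// Before (deprecated; swallows setup errors and, in this case,
	// ended up resolving the node instance role):
	//   sess := session.New()

	// After: NewSession returns an error and resolves the web identity
	// credentials injected via AWS_ROLE_ARN / AWS_WEB_IDENTITY_TOKEN_FILE,
	// so STS sees the service account's IAM role.
	sess, err := session.NewSession()
	if err != nil {
		log.Fatalf("creating session: %v", err)
	}

	out, err := sts.New(sess).GetCallerIdentity(&sts.GetCallerIdentityInput{})
	if err != nil {
		log.Fatalf("get caller identity: %v", err)
	}
	fmt.Println("caller identity:", out.String())
}
```

With this change, the `Arn` in the output should be the assumed-role ARN of the eksctl-created service account role rather than the NodeInstanceRole.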