
Authenticate to K8s cluster through AWS Lambda

We are using the following to authenticate:

import base64

import boto3
from botocore.signers import RequestSigner


class EKSAuth(object):
    METHOD = 'GET'
    EXPIRES = 60
    EKS_HEADER = 'x-k8s-aws-id'
    EKS_PREFIX = 'k8s-aws-v1.'
    STS_URL = 'sts.amazonaws.com'
    STS_ACTION = 'Action=GetCallerIdentity&Version=2011-06-15'

    def __init__(self, cluster_id, region='us-east-1'):
        self.cluster_id = cluster_id
        self.region = region

    def get_token(self):
        """
        Return bearer token
        """
        session = boto3.session.Session()
        # Get ServiceID required by class RequestSigner
        client = session.client("sts", region_name=self.region)
        service_id = client.meta.service_model.service_id

        signer = RequestSigner(
            service_id,
            session.region_name,
            'sts',
            'v4',
            session.get_credentials(),
            session.events
        )

        # Presign an STS GetCallerIdentity request, binding the target cluster
        # name via the x-k8s-aws-id header (which EKS validates server-side).
        params = {
            'method': self.METHOD,
            'url': 'https://' + self.STS_URL + '/?' + self.STS_ACTION,
            'body': {},
            'headers': {
                self.EKS_HEADER: self.cluster_id
            },
            'context': {}
        }

        signed_url = signer.generate_presigned_url(
            params,
            region_name=session.region_name,
            expires_in=self.EXPIRES,
            operation_name=''
        )

        # The bearer token is the presigned URL, base64url-encoded and
        # prefixed with "k8s-aws-v1."
        return (
                self.EKS_PREFIX +
                base64.urlsafe_b64encode(
                    signed_url.encode('utf-8')
                ).decode('utf-8')
        )

And then we call it like this:

import os

import boto3
import yaml
from kubernetes import client, config

import auth  # the module containing the EKSAuth class above

KUBE_FILEPATH = '/tmp/kubeconfig'
CLUSTER_NAME = 'cluster'
REGION = 'us-east-2'

if not os.path.exists(KUBE_FILEPATH):
    # Get the cluster endpoint and CA data from the EKS API
    eks_api = boto3.client('eks', region_name=REGION)
    cluster_info = eks_api.describe_cluster(name=CLUSTER_NAME)
    certificate = cluster_info['cluster']['certificateAuthority']['data']
    endpoint = cluster_info['cluster']['endpoint']

    # Build a minimal kubeconfig
    kube_content = dict()

    kube_content['apiVersion'] = 'v1'
    kube_content['clusters'] = [
        {
            'cluster':
                {
                    'server': endpoint,
                    'certificate-authority-data': certificate
                },
            'name': 'kubernetes'

        }]

    kube_content['contexts'] = [
        {
            'context':
                {
                    'cluster': 'kubernetes',
                    'user': 'aws'
                },
            'name': 'aws'
        }]

    kube_content['current-context'] = 'aws'
    kube_content['kind'] = 'Config'
    kube_content['users'] = [
        {
            'name': 'aws',
            'user': 'lambda'
        }]

    # Write kubeconfig
    with open(KUBE_FILEPATH, 'w') as outfile:
        yaml.dump(kube_content, outfile, default_flow_style=False)

    # Get Token
    eks = auth.EKSAuth(CLUSTER_NAME)
    token = eks.get_token()
    print("Token here:")
    print(token)
    # Configure
    config.load_kube_config(KUBE_FILEPATH)
    configuration = client.Configuration()
    configuration.api_key['authorization'] = token
    configuration.api_key_prefix['authorization'] = 'Bearer'
    # API
    api = client.ApiClient(configuration)
    v1 = client.CoreV1Api(api)

    print("THIS IS GETTING 401!!")
    ret = v1.list_namespaced_pod(namespace='default')

However, this gets the following error in the Lambda:

[ERROR] ApiException: (401) Reason: Unauthorized

Is there some way I have to generate ~/.aws/credentials or a config file for this to work? I believe this might be why it is not able to authenticate.

Your EKSAuth class works. Just checked it with my cluster.

Here is a working (and simpler) snippet to use instead of your second one.

import base64
import tempfile
import kubernetes
import boto3
from auth import EKSAuth


cluster_name = "my-cluster"

# Details from EKS
eks_client = boto3.client('eks')
eks_details = eks_client.describe_cluster(name=cluster_name)['cluster']

# Saving the CA cert to a temp file (working around the Kubernetes client limitations)
fp = tempfile.NamedTemporaryFile(delete=False)
ca_filename = fp.name
cert_bs = base64.urlsafe_b64decode(eks_details['certificateAuthority']['data'].encode('utf-8'))
fp.write(cert_bs)
fp.close()

# Token for the EKS cluster
eks_auth = EKSAuth(cluster_name)
token = eks_auth.get_token()

# Kubernetes client config
conf = kubernetes.client.Configuration()
conf.host = eks_details['endpoint']
conf.api_key['authorization'] = token
conf.api_key_prefix['authorization'] = 'Bearer'
conf.ssl_ca_cert = ca_filename
k8s_client = kubernetes.client.ApiClient(conf)

# Doing something with the client
v1 = kubernetes.client.CoreV1Api(k8s_client)
print(v1.list_pod_for_all_namespaces())

* Most of the code is taken from here.
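
For completeness, here is a minimal sketch of how the snippet could be wired into a Lambda handler. The handler name and return shape are my own illustration, not part of the original code, and it assumes the snippet above runs at module level so the EKS lookup and token generation happen once per container start.

# Note: the presigned token from EKSAuth is short-lived (EXPIRES = 60 seconds),
# so a warm container that keeps reusing the module-level token can start
# getting 401s again; refreshing the token inside the handler avoids that.
def lambda_handler(event, context):
    pods = v1.list_pod_for_all_namespaces()
    return {"pod_count": len(pods.items)}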

You also have to make sure that the IAM role your Lambda runs with has been granted access inside the EKS cluster. For that, run:

kubectl edit -n kube-system configmap/aws-auth

Add these lines under mapRoles. rolearn is the ARN of your Lambda's role; username is the name you want to give that role inside the k8s cluster (if you are unsure which role ARN to use, see the sketch after the YAML below).

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Add this #######################################
    - rolearn: arn:aws:iam::111122223333:role/myLambda-role-z71amo5y
      username: my-lambda-mapped-user
    ####################################################
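
If you are not sure which role ARN your Lambda actually runs with, you can print the caller identity from inside the function. This is just a small sketch (sts:GetCallerIdentity requires no extra permissions); the ARNs in the comments are placeholders matching the example above.

import boto3

# Prints something like:
#   arn:aws:sts::111122223333:assumed-role/myLambda-role-z71amo5y/<session-name>
# The aws-auth entry should reference the underlying IAM role, i.e.
#   arn:aws:iam::111122223333:role/myLambda-role-z71amo5y
print(boto3.client('sts').get_caller_identity()['Arn'])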

Then create a clusterrolebinding or rolebinding to grant this user permissions inside the cluster:

kubectl create clusterrolebinding my-clusterrolebinding --clusterrole cluster-admin --user my-lambda-mapped-user
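
If you do not want to hand out cluster-admin, a namespaced rolebinding with the built-in view ClusterRole is enough for the list_namespaced_pod(namespace='default') call from the question (the binding name here is just an example):

kubectl create rolebinding my-lambda-view --clusterrole view --user my-lambda-mapped-user -n default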
