Ingress Resource getting address from wrong Ingress Controller when using multiple ingress-nginx Controllers

We have a Kubernetes cluster in AWS (EKS). In our setup we need two ingress-nginx controllers so that we can enforce different security policies. To accomplish that, I am leveraging:

kubernetes.io/ingress.class and --ingress-class

As advised here, I created one standard ingress controller with the default 'mandatory.yaml' from the ingress-nginx repository.
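
For reference, the relevant container args in the stock 'mandatory.yaml' for this controller version look roughly like this (a sketch, not copied verbatim); note that no --ingress-class flag is passed, so this controller answers to the default class 'nginx':

          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io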

To create the second ingress controller, I customized the ingress Deployment from 'mandatory.yaml' a little bit. I basically added the label:

'env: internal'

to the Deployment definition.

I have also created another Service accordingly, specifying the 'env: internal' label in order to bind this new Service to my new ingress controller. Please take a look at my yaml definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller-internal
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
      env: internal
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        env: internal
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller-internal
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --ingress-class=nginx-internal
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-internal
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https

After applying this definition, my Ingress Controller is created along with a new LoadBalancer Service:

$ kubectl get deployments -n ingress-nginx
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress-controller            1/1     1            1           10d
nginx-ingress-controller-internal   1/1     1            1           95m

$ kubectl get service -n ingress-nginx
NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP               PORT(S)                      AGE
ingress-nginx            LoadBalancer   172.20.6.67      xxxx.elb.amazonaws.com    80:30857/TCP,443:31863/TCP   10d
ingress-nginx-internal   LoadBalancer   172.20.115.244   yyyyy.elb.amazonaws.com   80:30036/TCP,443:30495/TCP   97m

So far so good, everything is working fine.

However, when I create two Ingress resources, each of them bound to a different ingress controller (notice 'kubernetes.io/ingress.class'):

External ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: accounting-ingress
  annotations:  
    kubernetes.io/ingress.class: nginx
spec: ...

Internal ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:  
    kubernetes.io/ingress.class: nginx-internal    
spec: ...

I see that they both contain the same ADDRESS, the address of the first Ingress Controller:

$ kg ingress
NAME               HOSTS          ADDRESS                  PORTS     AGE
external-ingress   bbb.aaaa.com   xxxx.elb.amazonaws.com   80, 443   10d
internal-ingress   ccc.aaaa.com   xxxx.elb.amazonaws.com   80        88m

I would expect the Ingress bound to 'ingress-class=nginx-internal' to contain this address: 'yyyyy.elb.amazonaws.com'. Everything seems to be working fine, but this is bothering me; I have the impression something is wrong.

Where should I start troubleshooting it to understand what is happening behind the scenes?

####---UPDATE---####

Besides what is described above, I have added the line '"ingress-controller-leader-nginx-internal"' inside mandatory.yaml, as can be seen below. I did that based on a comment I found inside the mandatory.yaml file:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
      - "ingress-controller-leader-nginx-internal"

Unfortunately, the nginx documentation only mentions 'kubernetes.io/ingress.class' and '--ingress-class' for defining a new controller. There is a chance I am messing up some minor detail.

Try changing this line:

- --configmap=$(POD_NAMESPACE)/nginx-configuration

In your code it should be something like this:

- --configmap=$(POD_NAMESPACE)/internal-nginx-configuration

This way you will have a different configuration for each nginx controller; otherwise they will share the same configuration. It may seem to work, but you will hit some bugs when updating... (been there already...)
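
A minimal sketch of the extra object this implies, reusing the 'internal-nginx-configuration' name from above and the labels already used in the question (everything else in the internal Deployment stays as posted):

kind: ConfigMap
apiVersion: v1
metadata:
  # dedicated configuration ConfigMap, read only by the internal controller
  name: internal-nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    env: internal

The stock 'nginx-configuration' ConfigMap from 'mandatory.yaml' then keeps serving only the default controller.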
