HTTPS redirect not working for default backend of nginx-ingress-controller

I'm having trouble getting an automatic redirect from HTTP -> HTTPS to occur for the default backend of the NGINX ingress controller for Kubernetes, where the controller is behind an AWS Classic ELB; is it possible?

According to the guide, it seems like HSTS is enabled by default:

HTTP Strict Transport Security
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.

HSTS is enabled by default.

And redirecting HTTP -> HTTPS is enabled:

Server-side HTTPS enforcement through redirect
By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.

However, when I deploy the controller as configured below and navigate to http://<ELB>.elb.amazonaws.com, I am unable to get any response (curl reports Empty reply from server). What I would expect instead is a 308 redirect to HTTPS followed by a 404.

This question is similar: Redirection from http to https not working for custom backend service in Kubernetes Nginx Ingress Controller, but they resolved it by deploying a custom backend and specifying TLS on the Ingress resource. I am trying to avoid deploying a custom backend and simply want to use the default, so that solution is not applicable in my case.
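
For reference, that approach amounts to something like the Ingress below (a rough sketch only; the host, secret name, and backend service names are placeholders rather than values from my setup). Declaring a tls section on the Ingress is what makes the controller issue the 308 redirect for that host:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: custom-backend            # hypothetical name
  namespace: ingress-nginx-sit
spec:
  tls:
    - hosts:
        - example.com             # placeholder host
      secretName: example-tls     # placeholder certificate secret
  rules:
    - host: example.com           # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: custom-backend   # the custom backend Service from that answer
              servicePort: 80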

I've shared my deployment files on gist and have copied them here as well:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx-sit
  labels:
    app.kubernetes.io/name: ingress-nginx-sit
    app.kubernetes.io/part-of: ingress-nginx-sit
spec:
  minReadySeconds: 2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: '50%'
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-sit
      app.kubernetes.io/part-of: ingress-nginx-sit
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-sit
        app.kubernetes.io/part-of: ingress-nginx-sit
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --ingress-class=$(POD_NAMESPACE)
            - --election-id=leader
            - --watch-namespace=$(POD_NAMESPACE)
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx-sit
  labels:
    app.kubernetes.io/name: ingress-nginx-sit
    app.kubernetes.io/part-of: ingress-nginx-sit
data:
  hsts: "true"
  ssl-redirect: "true"
  use-proxy-protocol: "false"
  use-forwarded-headers: "true"
  enable-access-log-for-default-backend: "true"
  enable-owasp-modsecurity-crs: "true"
  proxy-real-ip-cidr: "10.0.0.0/24,10.0.1.0/24" # restrict this to the IP addresses of ELB

---

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx-sit
  labels:
    app.kubernetes.io/name: ingress-nginx-sit
    app.kubernetes.io/part-of: ingress-nginx-sit
  annotations:
    # replace with the correct value of the generated certificate in the AWS console
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<region>:<account>:certificate/<id>"
    # Specify the ssl policy to apply to the ELB
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-2-2017-01"
    # the backend instances are HTTP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # Terminate ssl on https port
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "*"
    # Ensure the ELB idle timeout is less than nginx keep-alive timeout. By default,
    # NGINX keep-alive is set to 75s. If using WebSockets, the value will need to be
    # increased to '3600' to avoid any potential issues.
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    # Security group used for the load balancer.
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-xxxxx"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx-sit
    app.kubernetes.io/part-of: ingress-nginx-sit
  loadBalancerSourceRanges:
    # Restrict allowed source IP ranges
    - "192.168.1.1/16"
  ports:
    - name: http
      port: 80
      targetPort: http
      # The range of valid ports is 30000-32767
      nodePort: 30080
    - name: https
      port: 443
      targetPort: http
      # The range of valid ports is 30000-32767
      nodePort: 30443

I think I found the problem.

For some reason the default server has force_ssl_redirect set to false when determining if it should redirect the incoming request to HTTPS:

In cat /etc/nginx/nginx.conf, notice that the rewrite_by_lua_block passes force_ssl_redirect = false:

...
    ## start server _
    server {
        server_name _ ;

        listen 80 default_server reuseport backlog=511;

        set $proxy_upstream_name "-";
        set $pass_access_scheme $scheme;
        set $pass_server_port $server_port;
        set $best_http_host $http_host;
        set $pass_port $pass_server_port;

        listen 443  default_server reuseport backlog=511 ssl http2;

        # PEM sha: 601213c2dd57a30b689e1ccdfaa291bf9cc264c3
        ssl_certificate                         /etc/ingress-controller/ssl/default-fake-certificate.pem;
        ssl_certificate_key                     /etc/ingress-controller/ssl/default-fake-certificate.pem;

        ssl_certificate_by_lua_block {
            certificate.call()
        }

        location / {

            set $namespace      "";
            set $ingress_name   "";
            set $service_name   "";
            set $service_port   "0";
            set $location_path  "/";

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    use_port_in_redirects = false,
                })
                balancer.rewrite()
                plugins.run()
            }
...

Then, the Lua code requires both force_ssl_redirect and redirect_to_https() to be true before issuing the redirect:

cat /etc/nginx/lua/lua_ingress.lua

...
  if location_config.force_ssl_redirect and redirect_to_https() then
    local uri = string_format("https://%s%s", redirect_host(), ngx.var.request_uri)

    if location_config.use_port_in_redirects then
      uri = string_format("https://%s:%s%s", redirect_host(), config.listen_ports.https, ngx.var.request_uri)
    end

    ngx_redirect(uri, config.http_redirect_code)
  end
...

From what I can tell, the force_ssl_redirect setting is only controlled at the Ingress resource level, through the annotation nginx.ingress.kubernetes.io/force-ssl-redirect: "true". Because I don't have an Ingress rule set up (this is meant to be the default server for requests that don't match any Ingress), I have no way of changing this setting.
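
For ingresses that do have rules, this is roughly how that annotation would be applied (a sketch with placeholder names; it only affects requests that match this Ingress, which is exactly why it cannot help with the default server):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-app               # hypothetical Ingress
  namespace: ingress-nginx-sit
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
    - host: example.com           # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: example-app      # placeholder backend Service
              servicePort: 80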

So what I determined I had to do was define my own custom server snippet on a different port with force_ssl_redirect set to true, and then point the Service load balancer at that custom server instead of the default. Specifically:

Added to the ConfigMap:

...
  http-snippet: |
    server {
      server_name _ ;
      listen 8080 default_server reuseport backlog=511;

      set $proxy_upstream_name "-";
      set $pass_access_scheme $scheme;
      set $pass_server_port $server_port;
      set $best_http_host $http_host;
      set $pass_port $pass_server_port;

      server_tokens off;
      location / {
        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = true,
                use_port_in_redirects = false,
            })
            balancer.rewrite()
            plugins.run()
        }
      }
      location /healthz {
        access_log off;
        return 200;
      }
    }
  server-snippet: |
    more_set_headers "Strict-Transport-Security: max-age=31536000; includeSubDomains; preload";

Note I also added the server-snippet to enable HSTS correctly. I think because the traffic from the ELB to NGINX is HTTP, not HTTPS, the HSTS headers were not being added by default.

Added to the DaemonSet:

...
        ports:
          - name: http
            containerPort: 80
          - name: http-redirect
            containerPort: 8080
...

Modified the Service:

...
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
...
  ports:
    - name: http
      port: 80
      targetPort: http-redirect
      # The range of valid ports is 30000-32767
      nodePort: 30080
    - name: https
      port: 443
      targetPort: http
      # The range of valid ports is 30000-32767
      nodePort: 30443
...

And now things seem to be working. I've updated the Gist so it includes the full configuration that I am using.
