
http -> https redirect in Google Kubernetes Engine

I'm looking to redirect all traffic from

http://example.com -> https://example.com, like how nearly all websites do.

I've looked at this link with no success: Kubernetes HTTPS Ingress in Google Container Engine

And I have tried the following annotations in my ingress.yaml file.

nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($http_x_forwarded_proto != 'https') {
    return 301 https://$host$request_uri;
  }
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/ingress.allow-http: "false"

All without any success. To be clear, I can access https://example.com and http://example.com without any errors; I need the http call to redirect to https.

Thanks

GKE uses GCE L7. The rules that you referenced in the example are not supported, and the HTTP to HTTPS redirect should be controlled at the application level.

L7 inserts the x-forwarded-proto header, which you can use to tell whether the frontend traffic arrived over HTTP or HTTPS. Take a look here: Redirecting HTTP to HTTPS

There is also an example in that link for Nginx (copied here for convenience):

# Replace '_' with your hostname.
server_name _;
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}

Currently, the documentation for how to do this properly (annotations, SSL/HTTPS, health checks, etc.) is severely lacking, and has been for far too long. I suspect it's because they prefer you to use App Engine, which is magical but stupidly expensive. For GKE, here are two options:

  • an Ingress with a Google-managed SSL cert and additional NGINX server configuration in front of your app/site
  • the NGINX ingress controller with self-managed/third-party SSL certs

The following are the steps to a working setup using the former.

1 The door to your app

nginx.conf: (ellipses represent other non-relevant, non-compulsory settings)

user  nginx;
worker_processes  auto;

events {
    worker_connections  1024;
}

http {
    ...

    keepalive_timeout  620s;

    ## Logging ##
    ...
    ## MIME Types ##
    ...
    ## Caching ##
    ...
    ## Security Headers ##
    ...
    ## Compression ##
    ....

    server {
        listen 80;

        ## HTTP Redirect ##
        if ($http_x_forwarded_proto = "http") {
            return 301 https://[YOUR DOMAIN]$request_uri;
        }

        location /health/liveness {
            access_log off;
            default_type text/plain;
            return 200 'Server is LIVE!';
        }

        location /health/readiness {
            access_log off;
            default_type text/plain;
            return 200 'Server is READY!';
        }

        root /usr/src/app/www;
        index index.html index.htm;
        server_name [YOUR DOMAIN] www.[YOUR DOMAIN];

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}

NOTE: One serving port only. The global forwarding rule adds the http_x_forwarded_proto header to all traffic that passes through it. Because ALL traffic to your domain now passes through this rule (remember: one port on the container, service and ingress), this header will (crucially!) always be set. Note the check and redirect above: it only continues serving if the header value is 'https'. The root, index and location values may differ depending on your project (this is an Angular project). keepalive_timeout is set to the value recommended by Google. I prefer using the main nginx.conf file, but most people add a custom.conf file to /etc/nginx/conf.d; if you do this, just make sure the file is imported into the main nginx.conf http block using an include statement (e.g. include /etc/nginx/conf.d/*.conf;). The comments highlight where other settings you may want to add once everything is working will go, like gzip/brotli, security headers, where logs are saved, and so on.

Dockerfile:

...
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]

NOTE: only the final two lines are relevant here. Specifying an EXPOSE port is unnecessary. COPY replaces the default nginx.conf with the modified one. CMD starts a light server.

2 Create a deployment manifest and apply/create

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: uber-dp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uber
  template:
    metadata:
      labels:
        app: uber
    spec:
      containers:
        - name: uber-ctr
          image: gcr.io/uber/beta:v1 # or some other registry
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 60
            httpGet:
              path: /health/liveness
              port: 80
              scheme: HTTP
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            httpGet:
              path: /health/readiness
              port: 80
              scheme: HTTP
          ports:
            - containerPort: 80
          imagePullPolicy: Always

NOTE: only one specified port is necessary, as we're going to point all (HTTP and HTTPS) traffic to it. For simplicity, the liveness and readiness probes are handled by the NGINX server itself (the /health/liveness and /health/readiness locations above), but you can and should add checks that probe the health of your app itself (e.g. a dedicated page that returns a 200 if healthy). The readiness probe will also be picked up by GCE, which by default has its own irremovable health check.

3 Create a service manifest and apply/create

service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: uber-svc
  labels:
    app: uber
spec:
  ports:
    - name: default-port
      port: 80
  selector:
    app: uber
  sessionAffinity: None
  type: NodePort

NOTE: default-port specifies port 80 on the container; when targetPort is omitted, it defaults to the same value as port.
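
Purely for illustration (the port numbers below are hypothetical, not part of this walkthrough): if the container listened on a port other than the one the Service exposes, you would set targetPort explicitly:

apiVersion: v1
kind: Service
metadata:
  name: uber-svc
  labels:
    app: uber
spec:
  ports:
    - name: default-port
      port: 80           # port exposed by the Service
      targetPort: 8080   # hypothetical port the container actually listens on
  selector:
    app: uber
  sessionAffinity: None
  type: NodePort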

4 Get a static IP address

On GCP, in the hamburger menu: VPC Network -> External IP Addresses. Convert your auto-generated ephemeral IP or create a new one. Take note of the name and address.

5 Create an SSL cert and default zone

In the hamburger menu: Network Services -> Load Balancing -> click 'advanced menu' -> Certificates -> Create SSL Certificate. Follow the instructions, create or upload a certificate, and take note of the name. Then, from the menu: Cloud DNS -> Create Zone. Following the instructions, create a default zone for your domain. Add a CNAME record with www as the DNS name and your domain as the canonical name. Add an A record with an empty DNS name value and your static IP as the IPv4 address. Save.

6 Create an ingress manifest and apply/create

ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mypt-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: [NAME OF YOUR STATIC IP ADDRESS]
    kubernetes.io/ingress.allow-http: "true"
    ingress.gcp.kubernetes.io/pre-shared-cert: [NAME OF YOUR GOOGLE-MANAGED SSL]
spec:
  backend:
    serviceName: uber-svc # must match the Service created in step 3
    servicePort: 80

NOTE: the backend property points to the service, which points to the container, which contains your app 'protected' by a server. The annotations connect your app with SSL and force-allow http for the health checks. Combined, the service and ingress configure the GCE L7 load balancer (combined global forwarding rule, backend and frontend 'services', SSL certs, target proxies, etc.).

7 Make a cup of tea or something

Everything needs ~10 minutes to configure. Clear your cache and test your domain with various browsers (Tor, Opera, Safari, IE, etc.). Everything will be served over https.

What about the NGINX Ingress Controller? I've seen discussion of it being better because it's cheaper/uses fewer resources and is more flexible. It isn't cheaper: it requires an additional deployment/workload and service (GCE L4). And you need to do more configuration. Is it more flexible? Yes. But in taking care of most of the work, the first option gives you a more important kind of flexibility: namely, allowing you to get on with more pressing matters.

For everyone like me who searches this question about once a month: Google has responded to our requests and is testing HTTP -> HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.

Their comment:

Thank you for your patience on this issue. The feature is currently in testing and we expect to enter the Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.

Update: HTTP to HTTPS redirect is now Generally Available: https://cloud.google.com/load-balancing/docs/features#routing_and_traffic_management

GKE uses its own Ingress Controller, which does not support forcing https.

That's why you will have to manage the NGINX Ingress Controller yourself.
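
For reference, a minimal sketch of what such an Ingress can look like once the NGINX ingress controller is installed (the host, secret and service names below are placeholders, not from the question); the controller redirects HTTP to HTTPS by default whenever a TLS section is configured, and the ssl-redirect annotation makes that explicit:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress                                # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"   # explicit; on by default when TLS is set
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
      secretName: example-tls                          # self-managed/third-party cert stored as a TLS secret
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc                      # placeholder service name
                port:
                  number: 80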

See this post on how to do it on GKE.

Hope it helps.

For what it's worth, I ended up using a reverse proxy in NGINX.

  1. You need to create secrets and sync them into your containers
  2. You need to create a configmap with your nginx config, as well as a default config that references this additional config file (a sketch of both resources follows the nginx configuration below)

Here is my configuration:

worker_processes  1;

events {
    worker_connections  1024;
}


http {

    default_type  application/octet-stream;

    # Logging Configs
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    keepalive_timeout  65;

    # Puntdoctor Proxy Config
    include /path/to/config-file.conf;

    # PubSub allows 10MB Files. lets allow 11 to give some space
    client_max_body_size 11M;

}

Then, the config.conf:

server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {

    listen 443;
    server_name example.com;

    ssl_certificate           /certs/tls.crt;
    ssl_certificate_key       /certs/tls.key;

    ssl on;
    ssl_session_cache  builtin:1000  shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-RC4-SHA:AES128-GCM-SHA256:HIGH:!RC4:!MD5:!aNULL:!EDH:!CAMELLIA;
    ssl_prefer_server_ciphers on;

    location / {

        proxy_set_header        Host $host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto $scheme;
        proxy_set_header        X-Forwarded-Host $http_host;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass          http://deployment-name:8080/;
        proxy_read_timeout  90;

        proxy_redirect      http://deployment-name:8080/ https://example.com/;
    }
}
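
Steps 1 and 2 above correspond to Kubernetes resources that the deployment in step 3 mounts as volumes. A minimal sketch, assuming the resource names referenced by that deployment (nginxsecret, nginx-config, default); the certificate data and embedded file contents are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: nginxsecret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>    # placeholder
  tls.key: <base64-encoded private key>    # placeholder
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  config-file.conf: |
    # the server blocks shown above go here
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: default
data:
  nginx.conf: |
    # the main http/events configuration shown above goes here
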
  3. Create a deployment:

Here are the .yaml files:

---
apiVersion: v1
kind: Service
metadata:
  name: puntdoctor-lb
spec:
  ports:
    - name: https
      port: 443
      targetPort: 443
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: puntdoctor-nginx-deployment
  type: LoadBalancer
  loadBalancerIP: 35.195.214.7
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: puntdoctor-nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: puntdoctor-nginx-deployment
    spec:
      containers:
        - name: adcelerate-nginx-proxy
          image: nginx:1.13
          volumeMounts:
            - name: certs
              mountPath: /certs/
            - name: site-config
              mountPath: /etc/site-config/
            - name: default-config
              mountPath: /etc/nginx/
          ports:
            - containerPort: 80
              name: http
            - containerPort: 443
              name: https
      volumes:
        - name: certs
          secret:
            secretName: nginxsecret
        - name: site-config
          configMap:
            name: nginx-config
        - name: default-config
          configMap:
            name: default

Hope this helps someone solve this issue. Thanks for the other 2 answers; they both gave me valuable insight.

As of GKE version 1.18.10-gke.600, you can use FrontendConfig to create an HTTP -> HTTPS redirection in Google Kubernetes Engine.

HTTP to HTTPS redirects are configured using the redirectToHttps field in a FrontendConfig custom resource. Redirects are enabled for the entire Ingress resource, so all services referenced by the Ingress will have HTTPS redirects enabled.

The following FrontendConfig manifest enables HTTP to HTTPS redirects. Set the spec.redirectToHttps.enabled field to true to enable HTTPS redirects. The responseCodeName field is optional; if it's omitted, a 301 Moved Permanently redirect is used.

For example:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: your-frontend-config-name
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT

MOVED_PERMANENTLY_DEFAULT is one of the available RESPONSE_CODE field values; it returns a 301 redirect response code (the default if responseCodeName is unspecified).

You can find more options here: HTTP to HTTPS redirects

Then you have to link your FrontendConfig to the Ingress, like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: your-ingress-name
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: your-frontend-config-name
spec:
  tls:
    ...
