Google cloud CDN, storage and container engine issue with backend-service
I have a specific use case that I cannot seem to solve.
A typical gcloud setup:
A K8S cluster
A gcloud storage bucket
A gcloud load balancer
I managed to get my domain https://cdn.foobar.com/uploads/ to point to a Google storage backend without any issue: I can access files. It's the backend-service one that fails.
I would like the CDN to act as a cache: when an HTTP request such as https://cdn.foobar.com/assets/x.jpg hits it and it does not have a copy of the asset, it should query another domain, https://foobar.com/assets/x.jpg.
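That cache-fill behaviour can be checked from the command line. This is only a sketch, assuming the CDN is already serving cdn.foobar.com; on a Cloud CDN cache hit the response typically carries an Age header, while a miss is filled from the origin:

```shell
# Inspect the response headers of an asset served through the CDN.
# An "Age" header indicates the object was served from the CDN cache.
curl -sI https://cdn.foobar.com/assets/x.jpg | grep -iE '^(age|cache-control):'
```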
I understood that this is what a load balancer's backend-service is for. (Right?)
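As a sketch only (the backend-service, health-check, and instance-group names here are all hypothetical), a backend service with Cloud CDN enabled can be wired to the cluster's instance group like this:

```shell
# Create a global HTTP backend service with Cloud CDN enabled, so cache
# misses fall through to the origin backends.
gcloud compute backend-services create web-backend-service \
    --protocol=HTTP \
    --port-name=http \
    --health-checks=web-health-check \
    --enable-cdn \
    --global

# Attach the GKE cluster's instance group as the backend
# (zone and group name are placeholders).
gcloud compute backend-services add-backend web-backend-service \
    --instance-group=k8s-instance-group \
    --instance-group-zone=us-central1-a \
    --global
```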
The backend-service is pointing to the instance group of the k8s cluster and requires a port. I guessed that I needed to allow the firewall to expose the NodePort of my web application service for the load balancer to be able to query it.
Failing health-checks.
The backend service is pointing to the instance group of the k8s cluster and requires some ports (default 80?); port 80 failed. I guessed that I needed to allow the firewall to expose the 32231 NodePort of my web application service for the load balancer to be able to query it. That still failed with a 502.
?> kubectl describe svc
Name:              backoffice-service
Namespace:         default
Labels:            app=backoffice
Selector:          app=backoffice
Type:              NodePort
IP:                10.7.xxx.xxx
Port:              http 80/TCP
NodePort:          http 32231/TCP
Endpoints:         10.4.xx:8500,10.4.xx:8500
Session Affinity:  None
No events.
I ran out of ideas at this point. Any hints in the right direction would be much appreciated.
When deploying your service as type 'NodePort', you are exposing the service on each node's IP, but the service is not reachable from the exterior, so you need to expose your service as 'LoadBalancer'.
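A minimal sketch of that change, reusing the service name shown in the question (normally you would edit the manifest instead):

```shell
# Switch the existing NodePort service to type LoadBalancer so it gets an
# external IP; Kubernetes keeps the selector, ports, and endpoints as-is.
kubectl patch svc backoffice-service -p '{"spec": {"type": "LoadBalancer"}}'
```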
Since you're looking to use an HTTP(S) load balancer, I'd recommend using a Kubernetes Ingress resource. This resource will be in charge of configuring the HTTP(S) load balancer and the required ports that your service is using, as well as the health checks on the specified port.
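A minimal sketch of such an Ingress for the backoffice-service from the question (the Ingress name is an assumption, and the apiVersion depends on your cluster version; on GKE the ingress controller then provisions the HTTP(S) load balancer and its health checks):

```shell
# Route all traffic on the provisioned load balancer to backoffice-service:80.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backoffice-ingress
spec:
  defaultBackend:
    service:
      name: backoffice-service
      port:
        number: 80
EOF
```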
Since you're securing your application, you will need to configure a Secret object for securing the Ingress.
This example will help you get started on an Ingress with TLS termination.
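A sketch of the TLS setup, assuming you already have a certificate and key for cdn.foobar.com (the secret and Ingress names, and the certificate file paths, are placeholders):

```shell
# Store the certificate/key pair as a TLS secret.
kubectl create secret tls foobar-tls --cert=tls.crt --key=tls.key

# Reference the secret from the Ingress so the load balancer terminates TLS.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backoffice-ingress
spec:
  tls:
  - hosts:
    - cdn.foobar.com
    secretName: foobar-tls
  defaultBackend:
    service:
      name: backoffice-service
      port:
        number: 80
EOF
```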