GCP external HTTP Cloud Load Balancer with nginx-ingress on GKE

My goal is to have an external HTTP Cloud Load Balancer in front of NGINX ingress in our GCP GKE cluster.

I'm trying the solution that Rami H proposed and that Google developer Garry Singh confirmed here: Global load balancer (HTTPS Loadbalancer) in front of GKE Nginx Ingress Controller

You can create the Nginx as a service of type LoadBalancer and give it a NEG annotation as per this Google documentation: https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing Then you can use this NEG as a backend service (target) for HTTP(S) load balancing. You can use the gcloud commands from this article: https://hodo.dev/posts/post-27-gcp-using-neg/

I have followed the hodo.dev tutorial mentioned above and successfully deployed an HTTP LB with NEGs as the backend service. Then I found this script to attach nginx-ingress to the NEGs, but it is probably obsolete and fails during deployment: https://gist.github.com/halvards/dc854f16d76bcc86ec59d846aa2011a0

Can somebody please help me adapt the hodo.dev config to deploy nginx-ingress there? Here is the repo with my config script: https://github.com/robinpecha/hododev_gke-negs-httplb

# First let's define some variables:
PROJECT_ID=$(gcloud config list project --format='value(core.project)') ; echo $PROJECT_ID
ZONE=europe-west2-b ; echo $ZONE
CLUSTER_NAME=negs-lb ; echo $CLUSTER_NAME

# and we need a cluster
gcloud container clusters create $CLUSTER_NAME --zone $ZONE --machine-type "e2-medium" --enable-ip-alias --num-nodes=2

# --enable-ip-alias enables VPC-native traffic routing for the cluster. This option creates and attaches additional subnets to the VPC; the pods get IP addresses allocated from those subnets and can therefore be addressed directly by the load balancer. This is known as container-native load balancing.

# Next we need a simple deployment, we will use nginx
cat << EOF > app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
kubectl apply -f app-deployment.yaml

# and the service
cat << EOF > app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "app-service-80-neg"}}}'
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
EOF
kubectl apply -f app-service.yaml

# the cloud.google.com/neg annotation tells GKE to create a NEG for this service and to add and remove endpoints (pods) from the group.
# Notice that the type is ClusterIP. Yes, it is possible to expose the service to the internet even if the type is ClusterIP. This is part of the magic of NEGs.
# You can check that the NEG was created with the next command
gcloud compute network-endpoint-groups list
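To see which pod IPs GKE actually registered as endpoints, you can also describe the NEG directly (a quick sanity check; the NEG name comes from the annotation above and the zone matches the cluster zone):

```shell
# List the endpoints (pod IP:port pairs) GKE attached to the NEG created
# by the cloud.google.com/neg annotation on app-service.
gcloud compute network-endpoint-groups list-network-endpoints app-service-80-neg \
    --zone=$ZONE
```

You should see one endpoint per nginx replica; the list shrinks and grows as pods are removed or added.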

# Next let’s create the load balancer and all the required components.
# We need a firewall rule that will allow the traffic from the load balancer

# find the network tags used by our cluster
NETWORK_TAGS=$(gcloud compute instances describe \
    $(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') \
    --zone=$ZONE --format="value(tags.items[0])")
echo $NETWORK_TAGS

# create the firewall rule
gcloud compute firewall-rules create $CLUSTER_NAME-lb-fw \
    --allow tcp:80 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --target-tags $NETWORK_TAGS

# and a health check configuration
gcloud compute health-checks create http app-service-80-health-check \
  --request-path / \
  --port 80 \
  --check-interval 60 \
  --unhealthy-threshold 3 \
  --healthy-threshold 1 \
  --timeout 5

# and a backend service
gcloud compute backend-services create $CLUSTER_NAME-lb-backend \
  --health-checks app-service-80-health-check \
  --port-name http \
  --global \
  --enable-cdn \
  --connection-draining-timeout 300

# next we need to add our NEG to the backend service
gcloud compute backend-services add-backend $CLUSTER_NAME-lb-backend \
  --network-endpoint-group=app-service-80-neg \
  --network-endpoint-group-zone=$ZONE \
  --balancing-mode=RATE \
  --capacity-scaler=1.0 \
  --max-rate-per-endpoint=1.0 \
  --global

# That was the backend configuration; let's also set up the frontend.
# First the url map
gcloud compute url-maps create $CLUSTER_NAME-url-map --default-service $CLUSTER_NAME-lb-backend

# and then the http proxy
gcloud compute target-http-proxies create $CLUSTER_NAME-http-proxy --url-map $CLUSTER_NAME-url-map

# and finally the global forwarding rule

gcloud compute forwarding-rules create $CLUSTER_NAME-forwarding-rule \
  --global \
  --ports 80 \
  --target-http-proxy $CLUSTER_NAME-http-proxy

# Done! Give the load balancer some time to set up all the components, then you can test whether your setup works as expected.

# get the public ip address
IP_ADDRESS=$(gcloud compute forwarding-rules describe $CLUSTER_NAME-forwarding-rule --global --format="value(IPAddress)")
# print the public ip address
echo $IP_ADDRESS
# make a request to the service
curl -s -I http://$IP_ADDRESS/
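Since provisioning can take 5-10 minutes, a single curl may return an error at first. A small polling sketch (the `is_http_ok` helper is hypothetical, not part of the original script; it only checks the status line for a 200):

```shell
#!/bin/sh
# Hypothetical helper: succeeds if a `curl -I` response reports HTTP status 200.
is_http_ok() {
  printf '%s\n' "$1" | head -n 1 | grep -q ' 200'
}

# Poll instead of testing once; skips itself when IP_ADDRESS is not set.
if [ -n "${IP_ADDRESS:-}" ]; then
  for i in $(seq 1 20); do
    if is_http_ok "$(curl -s -I "http://$IP_ADDRESS/")"; then
      echo "Load balancer is serving traffic."
      break
    fi
    echo "Attempt $i: not ready yet, retrying in 30s..."
    sleep 30
  done
fi
```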

The trick is to deploy the ingress-nginx service as ClusterIP and not as LoadBalancer, and then expose the ingress-nginx-controller service using a NEG and the GCP external load balancer feature.

First you need to update the helm repo

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

The default installation of ingress-nginx is configured to use the LoadBalancer option, which automatically creates a load balancer for you, but in this case that is not the desired behavior. If I understood correctly, you want to create and manually configure your own GCP load balancer outside GKE, and route its traffic to your ingress-nginx. For this you need to change the service type to ClusterIP and add the NEG annotation.

Create a file values.yaml

cat << EOF > values.yaml
controller:
  service:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "ingress-nginx-80-neg"}}}'
EOF

And install ingress-nginx

helm install -f values.yaml ingress-nginx ingress-nginx/ingress-nginx
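To confirm the override took effect, you can check that the controller service is ClusterIP (so no cloud load balancer was provisioned for it) and that GKE created the NEG named in the annotation. The service name `ingress-nginx-controller` is the chart's default for this release name:

```shell
# Should print "ClusterIP", not "LoadBalancer".
kubectl get svc ingress-nginx-controller -o jsonpath='{.spec.type}'

# The NEG from the values.yaml annotation should appear here.
gcloud compute network-endpoint-groups list --filter="name=ingress-nginx-80-neg"
```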

After that you need to configure the load balancer to point to your ingress-nginx controller using the NEG.
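A minimal sketch of that wiring, mirroring the hodo.dev steps above but pointing at the ingress-nginx NEG. The resource names (ingress-lb-backend and so on) are illustrative, and the health check uses /healthz, the path the linked gist relies on, rather than /, since the controller answers 404 on / for unknown hosts:

```shell
# health check against the controller's /healthz endpoint on port 80
gcloud compute health-checks create http ingress-80-health-check \
  --request-path /healthz --port 80

# backend service wired to the ingress-nginx NEG
gcloud compute backend-services create ingress-lb-backend \
  --health-checks ingress-80-health-check --port-name http --global
gcloud compute backend-services add-backend ingress-lb-backend \
  --network-endpoint-group=ingress-nginx-80-neg \
  --network-endpoint-group-zone=$ZONE \
  --balancing-mode=RATE --max-rate-per-endpoint=100 --global

# frontend: url map -> http proxy -> global forwarding rule
gcloud compute url-maps create ingress-url-map --default-service ingress-lb-backend
gcloud compute target-http-proxies create ingress-http-proxy --url-map ingress-url-map
gcloud compute forwarding-rules create ingress-fw-rule \
  --global --ports 80 --target-http-proxy ingress-http-proxy
```

The firewall rule from earlier (allowing 130.211.0.0/22 and 35.191.0.0/16) already covers health-check and LB traffic to the nodes, so it does not need to be recreated.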

I added the complete steps to follow in this gist: https://gist.github.com/gabihodoroaga/1289122db3c5d4b6c59a43b8fd659496
