
How to expose k8s pods to the public internet?

I'm currently learning Docker and Kubernetes. One of the issues I'm having trouble with is exposing my nginx pod to the public internet. I would like to visit my server's IP from my web browser and see the nginx page, as if nginx were installed directly on the server.

pod-nginx.yml from the Kubernetes website:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80

I can port-forward from the pod and then access the default nginx page via curl:

sudo kubectl port-forward nginx 80:80

curl http://localhost returns the nginx page, while curl http://<serverIP> returns failed to connect to <serverIP> port 80: Connection refused.

Do I need to forward ports between my public network interface and my cluster network interface by modifying iptables and firewall rules? I feel like I'm missing something really obvious here.

I have tried using the NodePort type and have read the documentation on Ingress and load balancers, but my cloud provider doesn't have those back-end functionalities, so those commands just end up pending indefinitely.

There are different ways to expose your services:

  • Using NodePort: this opens a port on the host through which you can access your service. For example, something like 192.168.100.99:37843, where 192.168.100.99 is one of the host systems the cluster is installed on.

  • Using LoadBalancer: if your cluster runs in a cloud such as Google, you can use the underlying infrastructure to generate an external IP for your service. I insist on the fact that the underlying cloud must support it.

  • Using Ingress rules: a proper alternative to LoadBalancers is a reverse proxy. Kubernetes allows you to have this reverse proxy listening on ports 80 and 443 and, using Ingress rules, forward traffic to your different services.
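As a concrete sketch of the NodePort option for the pod above: the Service name, the app: nginx label, and the fixed nodePort value 30080 are illustrative assumptions, not from the original post (the pod in the question has no labels, so you would need to add one).

```yaml
# NodePort Service selecting pods by label (assumed label: app: nginx).
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80          # Service (cluster) port
    targetPort: 80    # containerPort on the pod
    nodePort: 30080   # port opened on every node (default range 30000-32767)
```

With this in place, http://<any-node-IP>:30080 should reach nginx without any port-forwarding.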

Looking at your case, I think the Ingress rules option would suit your needs. If your cluster does not have an Ingress controller installed, you can install one based on nginx.
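Assuming an Ingress controller is already installed and the pod is fronted by a Service (the Service name nginx below is an assumption), a minimal Ingress rule might look like:

```yaml
# Routes all HTTP traffic on / to the (assumed) nginx Service, port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx   # hypothetical Service in front of the pod
            port:
              number: 80
```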

To expose pods or deployments you must do the following.

  1. Use the NodePort service type to assign the same port across all nodes to the application. Kubernetes will create a Service IP where your application will be exposed.

kubectl expose deployment <deployment> --type=NodePort --port=<container-port>

After creating the exposing service, you can get its Service IP with kubectl get services

  2. Use nginx or another load balancer to reverse-proxy into your nodes. I configured nginx to proxy_pass to my application by creating a default file in /etc/nginx/sites-enabled

    server {
        listen 80;

        location / {
            proxy_pass http://ServiceIP:ApplicationPort;
        }
    }

This method allows for unique routing and even round robin load balancing.
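For the round-robin case, the same idea extends with an nginx upstream block; the node IPs and port 30080 below are placeholders, assuming a NodePort service is listening on each node:

```nginx
# /etc/nginx/sites-enabled/default (sketch)
upstream k8s_nodes {
    # nginx balances across these servers round-robin by default
    server 192.168.100.99:30080;
    server 192.168.100.100:30080;
}

server {
    listen 80;

    location / {
        proxy_pass http://k8s_nodes;
    }
}
```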
