
How to expose kubernetes service to public without hardcoding to minion IP?

I have a kubernetes cluster running with 2 minions. Currently I make my service accessible in 2 steps:

  1. Start the replication controller & pod
  2. Get the minion IPs (using kubectl get minions) and set them as publicIPs for the Service.

What is the suggested practice for exposing a service to the public? My approach seems wrong because I hard-code the IPs of individual minions. It also seems to bypass the load-balancing capabilities of Kubernetes services, because clients would have to access services running on individual minions directly.

To set up the replication controller & pod I use:

id: frontend-controller
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 2
  replicaSelector:
    name: frontend-pod
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: frontend-pod
        containers:
          - name: sinatra-docker-demo
            image: madisn/sinatra_docker_demo
            ports:
              - name: http-server
                containerPort: 4567
    labels:
      name: frontend-pod

To set up the service (after getting the minion IPs):

kind: Service
id: frontend-service
apiVersion: v1beta1
port: 8000
containerPort: http-server
selector:
  name: frontend-pod
labels:
  name: frontend
publicIPs: [10.245.1.3, 10.245.1.4]

As I mentioned in the comment above, createExternalLoadBalancer is the appropriate abstraction you are looking for, but unfortunately it isn't yet implemented for all cloud providers, and in particular not for vagrant, which you are using locally.

One option would be to use the public IPs of all minions in your cluster for all of the services you want to externalize. Traffic destined for the service will end up on one of the minions, where it will be intercepted by the kube-proxy process and redirected to a pod that matches the service's label selector. This could result in an extra hop across the network (if you land on a node that isn't running the pod locally), but for applications that aren't extremely sensitive to network latency this will probably not be noticeable.

As Robert said in his reply, this is something that is coming, but unfortunately isn't available yet.

I am currently running a Kubernetes cluster on our datacenter network. I have 1 master and 3 minions, all running on CentOS 7 virtuals (vCenter). The way I handled this was to create a dedicated "kube-proxy" server. I basically just run the kube-proxy service (along with Flannel for networking) and assign "public" IP addresses to the network adapter attached to this server. When I say public I mean addresses on our local datacenter network. Then, when I create a service that I would like to access from outside the cluster, I just set the publicIPs value to one of the available IP addresses on the kube-proxy server. When someone or something attempts to connect to this service from outside the cluster, it will hit the kube-proxy and then be redirected to the proper minion.

While this might seem like a workaround, it is actually similar to what I would expect to happen once a built-in solution to this issue arrives.

If you're running a cluster locally, a solution I used was to expose the service on your Kubernetes nodes using the NodePort directive in your service definition, and then round-robin to every node in your cluster with HAProxy.

Here's what exposing the NodePort looks like:

apiVersion: v1
kind: Service
metadata:
  name: nginx-s
  labels:
    name: nginx-s
spec:
  type: NodePort
  ports:
    # must match the port your container is on in your replication controller
    - port: 80
      nodePort: 30000
  selector:
    name: nginx-s

Note: the value you specify must be within the configured range for node ports (default: 30000-32767).
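If the default range doesn't suit you, it is controlled by a flag on the API server. A sketch only — how the flag is set depends entirely on how your cluster was deployed (systemd unit, manifest, etc.), and the surrounding arguments here are placeholders:

```shell
# Excerpt from a kube-apiserver invocation: widen the allowed NodePort range.
# (--service-node-port-range is the real flag; everything else is elided.)
kube-apiserver ... --service-node-port-range=30000-32767
```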

This exposes the service on the given NodePort on every node in your cluster. I then set up a separate machine on the internal network running HAProxy, with a firewall that's reachable externally on the specified NodePort(s) you want to expose.

If you look at the NAT table on one of your hosts, you can see what it's doing.

root@kube01:~# kubectl create -f nginx-s.yaml
You have exposed your service on an external port on all nodes in your
cluster.  If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30000) to serve traffic.

See http://releases.k8s.io/HEAD/docs/user-guide/services-firewalls.md for more details.
services/nginx-s
root@kube01:~# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-PORTALS-CONTAINER  all  --  anywhere             anywhere             /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL
KUBE-NODEPORT-CONTAINER  all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-PORTALS-HOST  all  --  anywhere             anywhere             /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL
KUBE-NODEPORT-HOST  all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        anywhere

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Chain KUBE-NODEPORT-CONTAINER (1 references)
target     prot opt source               destination
REDIRECT   tcp  --  anywhere             anywhere             /* default/nginx-s: */ tcp dpt:30000 redir ports 42422

Chain KUBE-NODEPORT-HOST (1 references)
target     prot opt source               destination
DNAT       tcp  --  anywhere             anywhere             /* default/nginx-s: */ tcp dpt:30000 to:169.55.21.75:42422

Chain KUBE-PORTALS-CONTAINER (1 references)
target     prot opt source               destination
REDIRECT   tcp  --  anywhere             192.168.3.1          /* default/kubernetes: */ tcp dpt:https redir ports 51751
REDIRECT   tcp  --  anywhere             192.168.3.192        /* default/nginx-s: */ tcp dpt:http redir ports 42422

Chain KUBE-PORTALS-HOST (1 references)
target     prot opt source               destination
DNAT       tcp  --  anywhere             192.168.3.1          /* default/kubernetes: */ tcp dpt:https to:169.55.21.75:51751
DNAT       tcp  --  anywhere             192.168.3.192        /* default/nginx-s: */ tcp dpt:http to:169.55.21.75:42422
root@kube01:~#

Note this line in particular:

DNAT       tcp  --  anywhere             anywhere             /* default/nginx-s: */ tcp dpt:30000 to:169.55.21.75:42422

And finally, if you look at netstat, you can see kube-proxy is listening and waiting for that service on that port.

root@kube01:~# netstat -tupan | grep 42422
tcp6       0      0 :::42422                :::*                    LISTEN      20748/kube-proxy
root@kube01:~#

kube-proxy will listen on a port for each service and do network address translation into the virtual subnet your containers reside in. (I think?) I used Flannel.


For a two-node cluster, that HAProxy configuration might look similar to this:

listen sampleservice 0.0.0.0:80
    mode http
    stats enable
    balance roundrobin
    option httpclose
    option forwardfor
    server noname 10.120.216.196:30000 check
    server noname 10.155.236.122:30000 check
    option httpchk HEAD /index.html HTTP/1.0

Your service is now reachable on port 80 via HAProxy. If any of your nodes go down, the containers will be moved to another node thanks to the replication controllers, and HAProxy will only route to the nodes that are alive.
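Before reloading HAProxy with a configuration like the one above, you can check it for syntax errors first; the -c flag only validates the file, it doesn't start the proxy (the path below is the conventional one and may differ on your system):

```shell
haproxy -c -f /etc/haproxy/haproxy.cfg
```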

I'm curious what methods others have used, though; that's just what I came up with. I don't usually post on Stack Overflow, so apologies if I'm not following conventions or proper formatting.

This is for MrE. I did not have enough space in the comments area to post this answer, so I had to create another one. Hope this helps:

We have actually moved away from Kubernetes since posting this reply. If I remember correctly, though, all I really had to do was run the kube-proxy executable on a dedicated CentOS VM. Here is what I did:

First I removed firewalld and put iptables in place. kube-proxy relies on iptables to handle its NAT and redirections.

Second, you need to install flanneld so you have a bridge adapter on the same network as the Docker services running on your minions.

Then I assigned multiple IP addresses to the local network adapter installed on the machine. These are the IP addresses you can use when setting up a service; they will be the addresses available OUTSIDE your cluster.

Once that is all taken care of, you can start the proxy service. It will connect to the master and grab an IP address for the flannel bridge network. Then it will sync up all the iptables rules and you should be set. Every time a new service is added, it will create the proxy rules and replicate those rules across all minions (and your proxy). As long as you specified an IP address that is available on your proxy server, that proxy server will forward all traffic for that IP address over to the proper minion.
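The steps above could be sketched roughly as follows. This is a hedged outline rather than a tested recipe: the exact flags, the etcd port, and the example addresses are assumptions that depend on your Kubernetes and flannel versions and on your network:

```shell
# 1. Replace firewalld with plain iptables (kube-proxy programs iptables directly)
systemctl disable --now firewalld
yum install -y iptables-services
systemctl enable --now iptables

# 2. Run flanneld so this VM joins the same overlay network as the minions
#    (assumes flannel's network config is already stored in etcd on the master)
flanneld -etcd-endpoints=http://<master_ip>:4001 &

# 3. Add the "public" addresses that services will reference as publicIPs
#    (example address on the local datacenter LAN)
ip addr add 10.1.1.50/24 dev eth0

# 4. Start kube-proxy pointed at the master; it syncs the iptables rules
kube-proxy --master=http://<master_ip>:8080
```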

Hope this is a little clearer. Remember, though, that I have not been part of the Kubernetes project for about 6 months now, so I am not sure what changes have been made since I left. They might even have a feature in place that handles this sort of thing. If not, hopefully this helps you get it taken care of.

You can use an Ingress resource to allow external connections from outside a Kubernetes cluster to reach cluster services.

Assuming that you already have a Pod deployed, you now need a Service resource, e.g.:

apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  labels:
    tier: frontend
spec:
  type: ClusterIP
  selector:
    name: frontend-pod
  ports:
    # the port that will be exposed by this service
    - name: http
      protocol: TCP
      port: 8000
      # port in a docker container; defaults to what "port" has set
      targetPort: 8000

And you need an Ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /
            backend:
              serviceName: frontend-service
              # the targetPort from the service (the port inside a container)
              servicePort: 8000

In order to be able to use Ingress resources, you need an ingress controller deployed.

Now, provided that you know your Kubernetes master IP, you can access your application from outside the Kubernetes cluster with: curl http://<master_ip>:80/ -H 'Host: foo.bar.com'


If you use a DNS server, you can add this record: foo.bar.com IN A <master_ip>, or add this line to your /etc/hosts file: <master_ip> foo.bar.com — and now you can just run: curl foo.bar.com


Notice that this way you will always access foo.bar.com using port 80. If you want to use some other port, I recommend using a Service of type NodePort, just for that one non-80 port. It will make that port resolvable no matter which Kubernetes VM IP you use (any master or any minion IP is fine). Example of such a Service:

apiVersion: v1
kind: Service
metadata:
  name: frontend-service-ssh
  labels:
    tier: frontend
spec:
  type: NodePort
  selector:
    name: frontend-pod
  ports:
    - name: ssh
      targetPort: 22
      port: 22
      nodePort: 2222
      protocol: TCP

And if you have <master_ip> foo.bar.com in your /etc/hosts file, then you can access: foo.bar.com:2222


 