
Kubernetes, Flannel and exposing services

I have a kubernetes setup running nicely, but I can't seem to expose services externally. I'm thinking my networking is not set up correctly:

kubernetes service address range: --service-cluster-ip-range=172.16.0.1/16

flannel network config: etcdctl get /test.lan/network/config {"Network":"172.17.0.0/16"}

docker subnet setting: --bip=10.0.0.1/24

Hostnode IP: 192.168.4.57

I've got the nginx service running and I've tried to expose it like so:

[root@kubemaster ~]# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-px6uy   1/1       Running   0          4m
[root@kubemaster ~]# kubectl get services
NAME         LABELS                                    SELECTOR    IP(S)           PORT(S)    AGE
kubernetes   component=apiserver,provider=kubernetes   <none>      172.16.0.1      443/TCP    31m
nginx        run=nginx                                 run=nginx   172.16.84.166   9000/TCP   3m

and then I exposed the service like this:

kubectl expose rc nginx --port=9000 --target-port=9000 --type=NodePort
NAME      LABELS      SELECTOR    IP(S)     PORT(S)    AGE
nginx     run=nginx   run=nginx             9000/TCP   292y

I'm expecting now to be able to reach the nginx container on the host node's IP (192.168.4.57) - have I misunderstood the networking? If I have, an explanation would be appreciated :(

Note: This is on physical hardware with no cloud-provider load balancer, so NodePort is the only option I have, I think?

So the issue here was that there's a missing piece of the puzzle when you use NodePort.

I was also making a mistake with the commands.

Firstly, you need to make sure you expose the right port, in this case 80 for nginx:

kubectl expose rc nginx --port=80 --type=NodePort

Secondly, you need to use kubectl describe svc nginx and it'll show you the NodePort it's assigned on each node:

[root@kubemaster ~]# kubectl describe svc nginx
Name:           nginx
Namespace:      default
Labels:         run=nginx
Selector:       run=nginx
Type:           NodePort
IP:         172.16.92.8
Port:           <unnamed>   80/TCP
NodePort:       <unnamed>   32033/TCP
Endpoints:      10.0.0.126:80,10.0.0.127:80,10.0.0.128:80
Session Affinity:   None
No events.

You can of course assign one when you deploy, but I was missing this info when using randomly assigned ports.
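For reference, a sketch of assigning the port at deploy time, by declaring it in a service manifest instead of letting the API server pick one at random (the 30080 value here is an arbitrary choice of mine, and it must fall inside the cluster's node-port range, 30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx          # matches the pods created by the nginx rc
  ports:
  - port: 80            # the cluster-IP port of the service
    targetPort: 80      # the container port the traffic is forwarded to
    nodePort: 30080     # fixed port opened on every node (must be in the node-port range)
```

With this, the service is reachable on every node at a port you know in advance, e.g. 192.168.4.57:30080, without having to look it up with kubectl describe.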

Yes, you would need to use NodePort. When you hit the service, the destPort should be equal to the NodePort. The destIP for the service should be considered local by the nodes - e.g. you could use the hostIP of one of the nodes.

A load balancer helps because it would handle situations where your node went down, while other nodes could still serve the service.

If you're running a cluster on bare metal, or at a provider that doesn't supply a load balancer, you can also define the port as a hostPort on your pod.

You define your container and its ports:

containers:
- name: nginx
  image: nginx
  ports:
  - containerPort: 80
    hostPort: 80
    name: http

This will bind the container to the host's networking and use the port defined.
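For completeness, a sketch of that fragment dropped into a full pod manifest (the pod name here is made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostport   # hypothetical name
  labels:
    run: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80  # port the container listens on
      hostPort: 80       # same port opened directly on the node's IP
      name: http
```

Once scheduled, the pod answers on port 80 of whichever node it landed on, with no Service or NodePort involved.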

The two obvious limitations here are: 1) you can have at most one of these pods on each host; 2) the IP is the host IP of the node it binds to.

This is essentially how the cloud-provider load balancers work, in a way.

Using the new DaemonSet features, it's possible to define which node the pod will land on and fix its IP. However, that necessarily impairs the high-availability aspect; at some point there is not much choice, as DNS load balancing will not avoid forwarding to dead nodes.
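A sketch of that approach, assuming you have labelled a node yourself first (e.g. with kubectl label node; the role=edge label is made up here, and extensions/v1beta1 was the DaemonSet API group at the time this was written):

```yaml
apiVersion: extensions/v1beta1   # DaemonSet API group in early Kubernetes releases
kind: DaemonSet
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        run: nginx
    spec:
      nodeSelector:
        role: edge               # hypothetical label; pods only land on nodes carrying it
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 80           # combined with the known node IP, this fixes the address
```

Because the nodeSelector restricts scheduling to the labelled node(s), the pod's address is the known host IP of that node - which is exactly what fixes the IP, and also what ties availability to that node.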
