Exposing a bare-metal Kubernetes cluster to the internet

I am trying to set up my own single-node Kubernetes cluster on a bare-metal dedicated server. I am not that experienced in DevOps, but I need some services deployed for my own project. I already did a cluster setup with juju and conjure-up kubernetes over LXD, and the cluster runs fine.
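For context, the setup amounted to roughly the following; conjure-up asks its configuration questions interactively, so this is only a sketch:

sudo snap install conjure-up --classic
conjure-up kubernetes   # then pick the localhost (LXD) cloud in the interactive installer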

$ juju status

Model                         Controller                Cloud/Region         Version  SLA          Timestamp
conjure-canonical-kubern-3b3  conjure-up-localhost-db9  localhost/localhost  2.4.3    unsupported  23:49:09Z

App                    Version  Status  Scale  Charm                  Store       Rev  OS      Notes
easyrsa                3.0.1    active      1  easyrsa                jujucharms  195  ubuntu
etcd                   3.2.10   active      3  etcd                   jujucharms  338  ubuntu
flannel                0.10.0   active      2  flannel                jujucharms  351  ubuntu
kubeapi-load-balancer  1.14.0   active      1  kubeapi-load-balancer  jujucharms  525  ubuntu  exposed
kubernetes-master      1.13.1   active      1  kubernetes-master      jujucharms  542  ubuntu
kubernetes-worker      1.13.1   active      1  kubernetes-worker      jujucharms  398  ubuntu  exposed

Unit                      Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*                active    idle   0        10.213.117.66                   Certificate Authority connected.
etcd/0*                   active    idle   1        10.213.117.171  2379/tcp        Healthy with 3 known peers
etcd/1                    active    idle   2        10.213.117.10   2379/tcp        Healthy with 3 known peers
etcd/2                    active    idle   3        10.213.117.238  2379/tcp        Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle   4        10.213.117.123  443/tcp         Loadbalancer ready.
kubernetes-master/0*      active    idle   5        10.213.117.172  6443/tcp        Kubernetes master running.
  flannel/1*              active    idle            10.213.117.172                  Flannel subnet 10.1.83.1/24
kubernetes-worker/0*      active    idle   7        10.213.117.136  80/tcp,443/tcp  Kubernetes worker running.
  flannel/4               active    idle            10.213.117.136                  Flannel subnet 10.1.27.1/24

Entity  Meter status  Message
model   amber         user verification pending

Machine  State    DNS             Inst id        Series  AZ  Message
0        started  10.213.117.66   juju-b03445-0  bionic      Running
1        started  10.213.117.171  juju-b03445-1  bionic      Running
2        started  10.213.117.10   juju-b03445-2  bionic      Running
3        started  10.213.117.238  juju-b03445-3  bionic      Running
4        started  10.213.117.123  juju-b03445-4  bionic      Running
5        started  10.213.117.172  juju-b03445-5  bionic      Running
7        started  10.213.117.136  juju-b03445-7  bionic      Running

I also deployed a Hello World application that prints a greeting on port 8080 inside the pod, plus nginx-ingress to route traffic to this service based on a host rule.

NAME                               READY   STATUS    RESTARTS   AGE
pod/hello-world-696b6b59bd-fznwr   1/1     Running   1          176m

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/example-service   NodePort    10.152.183.53   <none>        8080:30450/TCP   176m
service/kubernetes        ClusterIP   10.152.183.1    <none>        443/TCP          10h

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-world   1/1     1            1           176m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/hello-world-696b6b59bd   1         1         1       176m
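For reference, the Ingress for this setup looks roughly like the sketch below (the ingress name here is illustrative; the host and backend match the curl tests further down, and extensions/v1beta1 was the Ingress API on Kubernetes 1.13):

kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress   # illustrative name
spec:
  rules:
  - host: testhost.com        # host rule used in the curl test below
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service   # the NodePort service listed above
          servicePort: 8080
EOF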

When I do curl localhost I get connection refused, as expected, which still looks fine since the service is not exposed outside the cluster. But when I curl kubernetes-worker/0 at its public address 10.213.117.136 on port 30450 (which I got from kubectl get all):

$ curl 10.213.117.136:30450
Hello Kubernetes!

Everything works like a charm (which is obvious). When I do:

curl -H "Host: testhost.com" 10.213.117.136
Hello Kubernetes!

It works like a charm again! That means the ingress controller successfully routes port 80, based on the host rule, to the correct service. At this point I am 100% sure that the cluster works as it should.

Now I am trying to access this service externally, over the internet. When I load <server_ip>, obviously nothing loads, since it lives inside its own lxd subnet. So I thought about forwarding port 80 from the server's eth0 to this IP, and added this rule to iptables:

sudo iptables -t nat -A PREROUTING -p tcp -j DNAT --to-destination 10.213.117.136

(For the sake of the example, let's route everything, not only port 80.) Now when I open http://<server_ip> on my computer, it loads!
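As an aside, a slightly tighter sketch of that rule set would limit the DNAT to ports 80/443 arriving on eth0 instead of every TCP port, accept the forwarded traffic explicitly, and masquerade so replies are routed back out (eth0 and the worker IP are the ones from above):

WORKER_IP=10.213.117.136

sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
     -j DNAT --to-destination "$WORKER_IP":80
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
     -j DNAT --to-destination "$WORKER_IP":443

# Accept the forwarded packets and NAT the replies back out.
sudo iptables -A FORWARD -d "$WORKER_IP" -p tcp -m multiport --dports 80,443 -j ACCEPT
sudo iptables -t nat -A POSTROUTING -d "$WORKER_IP" -j MASQUERADE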

So the real question is: how do I do this in production? Should I set up this forwarding rule in iptables? Is that a normal approach or a hacky solution, and is there something "standard" that I am missing? The thing is, adding this rule for a static worker node makes the cluster completely static: the IP will eventually change, and I can remove/add worker units, at which point it stops working. I was thinking about writing a script that obtains this IP address from juju, like this:

$ juju status kubernetes-worker/0 --format=json | jq '.machines["7"]."dns-name"'
"10.213.117.136"

and adds it to iptables, which is a more okay-ish solution than a hardcoded IP, but it still feels tricky, and there must be a better way.
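For concreteness, a minimal sketch of such a script, run from cron or a juju hook (the machine number "7" comes from the juju status output above; the dedicated chain and the eth0 interface are assumptions of mine):

#!/usr/bin/env bash
set -euo pipefail

# Look up the current address of the worker's machine (number 7 above).
WORKER_IP=$(juju status kubernetes-worker/0 --format=json \
            | jq -r '.machines["7"]."dns-name"')

# Keep the forwarding rules in a dedicated chain so they can be replaced
# wholesale whenever the address changes.
sudo iptables -t nat -N K8S-INGRESS 2>/dev/null || true
sudo iptables -t nat -F K8S-INGRESS
sudo iptables -t nat -A K8S-INGRESS -p tcp --dport 80  -j DNAT --to-destination "$WORKER_IP":80
sudo iptables -t nat -A K8S-INGRESS -p tcp --dport 443 -j DNAT --to-destination "$WORKER_IP":443

# Hook the chain into PREROUTING exactly once.
sudo iptables -t nat -C PREROUTING -i eth0 -j K8S-INGRESS 2>/dev/null \
  || sudo iptables -t nat -A PREROUTING -i eth0 -j K8S-INGRESS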

As a last idea, I could run HAProxy outside of the cluster, directly on the machine, and just forward traffic to all available workers; a sketch of such a config is below. That might eventually work too. But I still don't know what the correct solution is, and what is usually used in this case. Thank you!
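The haproxy.cfg fragment for that idea could look roughly like this (pure TCP forwarding of port 80; the single server line matches the one worker above, and more workers would just mean more server lines, with a second frontend/backend pair for 443):

frontend http_in
    bind *:80
    mode tcp
    default_backend k8s_workers_http

backend k8s_workers_http
    mode tcp
    balance roundrobin
    server worker0 10.213.117.136:80 check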

So the real question is: how do I do this in production?

The normal way to do this in a production system is to use a Service.

The simplest case is when you just want your application to be accessible from outside on your node(s). In that case you can use a Service of type NodePort. That creates the iptables rules necessary to forward traffic from the host IP address to the pod(s) providing the service.
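A minimal sketch of such a Service, matching the hello-world deployment from the question (the selector label is an assumption; the nodePort value is the one visible in the question's kubectl get all output):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: hello-world        # assumed pod label
  ports:
  - port: 8080              # cluster-internal port
    targetPort: 8080        # container port
    nodePort: 30450         # opened on every node (30000-32767 by default)
EOF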

If you have a single node (which is not recommended in production!), you're ready at this point.

If you have multiple nodes in your Kubernetes cluster, all of them are configured by Kubernetes to provide access to the service (your clients can use any of them to reach it). You would still have to solve the problem of how the clients find out which nodes are currently available to be contacted...

There are several ways to handle this:

  • use a protocol understood by the client to publish the currently available IP addresses (for example DNS),

  • use a floating (failover, virtual, HA) IP address, managed by some software on your Kubernetes nodes (for example pacemaker/corosync), and direct the clients to this address,

  • use an external load balancer, configured separately, to forward traffic to some of the operating nodes,

  • use an external load balancer, configured automatically by Kubernetes using a cloud-provider integration (a Service of type LoadBalancer), to forward traffic to some of the operating nodes; see the sketch after this list.
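That last variant would look like the sketch below. Note that on bare metal nothing implements the LoadBalancer type out of the box, so the external IP stays pending unless you add a component that does (MetalLB is one such project):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: LoadBalancer      # asks the cloud integration for an external LB
  selector:
    app: hello-world      # assumed pod label, as in the NodePort sketch
  ports:
  - port: 80              # port the load balancer listens on
    targetPort: 8080      # container port
EOF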
