Connect to private ip address - graphql - kubernetes
How do I connect to a GraphQL API which is on a private network and accessible through a private IP address? My frontend server and the API are on the same VNET.
import { ApolloClient } from 'apollo-client'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { createUploadLink } from 'apollo-upload-client'
const uploadLink = createUploadLink({
uri: 'http://10.0.0.10:3000/api'
})
const client = new ApolloClient({
link: uploadLink,
cache: new InMemoryCache()
})
export default client
Both applications are running on Kubernetes, in the same cluster but in different pods. Private services are accessible within the cluster, and when I exec into the frontend pod I am able to access the GraphQL endpoint via its private IP address.
But in the browser it does not connect, and gives this error: ERR_CONNECTION_REFUSED
frontend (public ip) --> graphql (private ip)
The 3 main methods for accessing an internal Kubernetes service from outside are: NodePort, LoadBalancer, and Ingress.
You can read about some of the main differences between them here: https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
Either allow Kubernetes to randomly select a high port, or manually define a high port from a predefined range (by default 30000-32767, but this can be changed), and map it to an internal service port on a 1-to-1 basis.
Warning: Although it is possible to manually define a NodePort port number per service, it is generally not recommended due to possible issues such as port conflicts. So in most cases, you should let the cluster randomly select a NodePort port number for you.
From the official docs: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
If you set the type field to NodePort, the Kubernetes master will allocate a port from a range specified by the --service-node-port-range flag (default: 30000-32767), and each Node will proxy that port (the same port number on every Node) into your Service.
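For example, a NodePort Service for the GraphQL backend could look like the following sketch. The service name, the `app: graphql` label, and port 3000 are assumptions based on the question's setup, not values from the original post:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: graphql          # hypothetical name
spec:
  type: NodePort
  selector:
    app: graphql         # must match the labels on your GraphQL pods
  ports:
    - port: 3000         # port the Service exposes inside the cluster
      targetPort: 3000   # port the GraphQL container listens on
      # nodePort is omitted so the cluster picks one from 30000-32767
```

The browser would then connect to http://<node-public-ip>:<allocated-node-port>/api rather than the pod's private IP.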
The functionality of this service type depends on external drivers/plugins. Most modern clouds offer support to supply public IPs for LoadBalancer definitions. But if you are spinning up a custom cluster with no means to assign public IPs (such as Rancher with no IP provider plugins), the best you can probably do with this is assign an IP of a host machine to a single service.
From the official docs: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
On cloud providers which support external load balancers, setting the type field to LoadBalancer will provision a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer will be published in the Service's .status.loadBalancer field.
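A hedged sketch of the LoadBalancer variant for the same backend (again, the name, label, and ports are placeholders assumed from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: graphql          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: graphql         # must match the labels on your GraphQL pods
  ports:
    - port: 80           # external port on the provisioned load balancer
      targetPort: 3000   # port the GraphQL container listens on
```

Once the cloud provider assigns an address, `kubectl get service graphql` shows it in the EXTERNAL-IP column, and the frontend can point at http://<external-ip>/api.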
To install it you must create an application router service (such as nginx) which runs in your cluster and analyzes every new resource of type Ingress that is created. Then you create Ingress resources that define the routing rules you would like, such as which DNS request to listen to and which service to forward the request to.
Although multiple solutions exist for this purpose, I recommend Nginx Ingress:
https://github.com/helm/charts/tree/master/stable/nginx-ingress
https://github.com/kubernetes/ingress-nginx
From the official docs:
What is Ingress? Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere. Conceptually, this might look like:

    internet
        |
  ------------
  [ Services ]

An Ingress is a collection of rules that allow inbound connections to reach the cluster services.

     internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]

It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name based virtual hosting, and more. Users request ingress by POSTing the Ingress resource to the API server. An Ingress controller is responsible for fulfilling the Ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic in an HA manner.
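Applied to the question's setup, an Ingress rule could look like the sketch below. It assumes an nginx ingress controller is already installed, a ClusterIP Service named `graphql` fronts the GraphQL pods on port 3000, and `api.example.com` is a placeholder DNS name pointing at the ingress controller; none of these names come from the original post:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: graphql-ingress        # hypothetical name
spec:
  ingressClassName: nginx      # assumes the nginx ingress controller
  rules:
    - host: api.example.com    # placeholder DNS name for the controller
      http:
        paths:
          - path: /api         # matches the path the Apollo client calls
            pathType: Prefix
            backend:
              service:
                name: graphql  # hypothetical ClusterIP Service
                port:
                  number: 3000
```

The frontend would then use http://api.example.com/api as the upload link's uri instead of the pod's private IP.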
You seem to answer your own question: that IP address is private.
You'll want to set up a Service definition in order to expose it to the public.