
Tunnel or proxy from an app in one Kubernetes cluster (local/minikube) to a database inside a different Kubernetes cluster (on Google Container Engine)

I have a large read-only Elasticsearch database running in a Kubernetes cluster on Google Container Engine, and I am using minikube to run a local dev instance of my app. Is there a way I can have my app connect to the cloud Elasticsearch instance so that I don't have to create a local test database with a subset of the data?

The database contains sensitive information, so it can't be visible outside its own cluster or VPC.

My fallback is to run kubectl port-forward inside the local pod:

kubectl --cluster=<gke-database-cluster-name> --token='<token from ~/.kube/config>' port-forward elasticsearch-pod 9200

but this seems suboptimal.
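Running the forward as a sidecar in the dev pod would look roughly like the sketch below (the kubectl image, the names, and the kubeconfig Secret are assumptions); containers in a pod share localhost, so the app would talk to the forwarded port at localhost:9200.

kind: Pod
apiVersion: v1
metadata:
  name: myapp-dev                        # hypothetical local dev pod
spec:
  containers:
  - name: app
    image: myapp:dev                     # hypothetical app image
    env:
    - name: ELASTICSEARCH_URL            # hypothetical app setting
      value: http://localhost:9200       # reaches the sidecar's forward
  - name: es-port-forward
    image: bitnami/kubectl:latest        # any image that ships kubectl
    command:
    - kubectl
    - --kubeconfig=/kube/config          # kubeconfig with the GKE cluster credentials
    - --cluster=<gke-database-cluster-name>
    - port-forward
    - elasticsearch-pod
    - "9200"
    volumeMounts:
    - name: kubeconfig
      mountPath: /kube
      readOnly: true
  volumes:
  - name: kubeconfig
    secret:
      secretName: gke-kubeconfig         # hypothetical Secret holding a 'config' key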

I'd use an ExternalName Service, like this:

kind: Service
apiVersion: v1
metadata:
  name: elastic-db
  namespace: prod
spec:
  type: ExternalName
  externalName: your.elastic.endpoint.com
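With that Service in the local cluster, the app can address the database by a stable cluster-DNS name instead of the raw endpoint; for example, from a pod in the prod namespace (port 9200 is the Elasticsearch default and an assumption here):

curl http://elastic-db.prod.svc.cluster.local:9200/_cluster/health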

According to the docs:

An ExternalName service is a special case of service that does not have selectors. It does not define any ports or endpoints. Rather, it serves as a way to return an alias to an external service residing outside the cluster.

If you need to expose the elastic database, there are two ways of exposing applications outside the cluster:

  1. Creating a Service of type LoadBalancer that load-balances traffic across all instances of your elastic database (see the sketch after this list). Once the load balancer is created on GKE, add its DNS name as the externalName value of the elastic-db Service created above.
  2. Using an Ingress controller. The Ingress controller has an IP that is reachable from outside the cluster; use that IP as the externalName of the elastic-db Service created above.
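A minimal sketch of option 1 (the Service name, selector label, and port are assumptions; the annotation is optional and GKE-specific, keeping the load balancer internal to the VPC):

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch-lb                 # hypothetical name
  annotations:
    # optional on GKE: provision an internal (VPC-private) load balancer
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch                   # assumed label on the elasticsearch pods
  ports:
  - port: 9200
    targetPort: 9200

Once GKE provisions it, kubectl get service elasticsearch-lb shows the assigned address to plug into the elastic-db ExternalName Service above.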
