
Expose an external service to be accessible from within the cluster

I am trying to set up connections from within my GKE cluster to databases that reside outside of it.

I have read various tutorials, including https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services, and multiple SO questions, but the problem persists.

Here is an example configuration with which I am trying to set up Kafka connectivity:

---
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
subsets:
  - addresses:
      - ip: 10.132.0.5
    ports:
      - port: 9092

---
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  type: ClusterIP
  ports:
  - port: 9092
    targetPort: 9092

I am able to get some sort of response by connecting directly via nc 10.132.0.5 9092 from the node VM itself, but if I create a pod, say with kubectl run -it --rm --restart=Never alpine --image=alpine sh, then I am unable to connect from within the pod using nc kafka 9092. All libraries in my code fail by timing out, so it seems to be some kind of routing issue.
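For reference, one way to confirm that the Service and Endpoints objects are wired together and that the name resolves inside the cluster (object names are taken from the manifests above; the busybox image and pod name are just placeholders for a throwaway pod with nslookup):

# the ENDPOINTS column should show 10.132.0.5:9092; if it shows <none>, the Service and Endpoints names do not match
kubectl get svc,endpoints kafka

# DNS check from inside a temporary pod; "kafka" should resolve to the Service's ClusterIP
kubectl run -it --rm --restart=Never dns-test --image=busybox -- nslookup kafka

In my case both checks looked fine, which is why I suspected routing rather than the Service definition itself.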

Kafka is given as an example; I am having the same issues connecting to other databases as well.

Solved it; the issue was with my understanding of how GCP networking operates.

To solve the issue I had to add a firewall rule that allows incoming traffic from the internal GKE network. In my case that was the 10.52.0.0/24 address range.
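A minimal sketch of such a rule with gcloud, assuming the default VPC network, TCP port 9092, and the 10.52.0.0/24 pod range from above (the cluster name, zone, and rule name are placeholders to adapt):

# look up the cluster's pod address range, i.e. the source of traffic reaching the external VM
gcloud container clusters describe my-cluster --zone=europe-west1-b --format='value(clusterIpv4Cidr)'

# allow ingress from the pod range to port 9092 on instances in the default network
gcloud compute firewall-rules create allow-gke-pods-to-kafka \
    --network=default \
    --direction=INGRESS \
    --allow=tcp:9092 \
    --source-ranges=10.52.0.0/24

You can narrow the rule further with --target-tags if the database VM carries a network tag, rather than opening the port on every instance in the network.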

Hope it helps someone.
