
Is there a way to access Google Cloud SQL via proxy inside a Docker container

I have multiple Docker machines (dev, staging) running on Google Compute Engine that host Django servers (these need access to Google Cloud SQL). I have multiple Google Cloud SQL instances running, and each instance is used by its respective Docker machine on my Compute Engine instance.

Currently I'm accessing Cloud SQL by whitelisting my Compute Engine IP, but I don't want to rely on IPs for obvious reasons, i.e., I don't use a static IP for my dev machines.

Now I want to use the Cloud SQL Proxy to gain access instead, but how do I do that? GCP offers multiple ways to access Cloud SQL instances, but none of them fit my use case.

There is this option: https://cloud.google.com/sql/docs/mysql/connect-compute-engine ; but it

  1. only gives my Compute Engine instance access to the SQL instance, whereas I have to access it from inside Docker, and
  2. doesn't let me proxy multiple SQL instances on the same Compute Engine machine. I was hoping to run the proxy inside Docker if possible.

So, how do I gain access to Cloud SQL from inside Docker? If Docker Compose is a better way to start, how easy is it to carry over to Kubernetes (I use Google Container Engine for production)?

I was able to figure out how to use cloudsql-proxy in my local Docker environment by using docker-compose. You will need to pull down your Cloud SQL instance credentials and have them ready. I keep them in my project root as credentials.json and add that file to the project's .gitignore.

The key part I found was appending =tcp:0.0.0.0:5432 after the GCP instance connection name so that the port is forwarded beyond the proxy container's own loopback interface. Then, in your application, use cloudsql-proxy (the Compose service name) instead of localhost as the hostname. Make sure the rest of your DB credentials in your application secrets are valid, so that the application can connect through the local proxy supplied by the cloudsql-proxy container.

Note: keep in mind that I'm writing a Tomcat Java application and my docker-compose.yml reflects that.

docker-compose.yml:

version: '3'
services:
  cloudsql-proxy:
    container_name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.11
    command: /cloud_sql_proxy -dir=/cloudsql -instances=<YOUR INSTANCE ID HERE>=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
    ports:
      - 5432:5432
    volumes:
      - ./credentials.json:/secrets/cloudsql/credentials.json
    restart: always

  tomcatapp-api:
    container_name: tomcatapp-api
    build: .
    volumes:
      - ./build/libs:/usr/local/tomcat/webapps
    ports:
      - 8080:8080
      - 8000:8000
    env_file:
      - ./secrets.env
    restart: always
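For the Django setup the question describes, the application config would point HOST at the Compose service name rather than localhost. A minimal sketch, assuming a PostgreSQL database; the database name, user, and password are placeholders for your own secrets:

```python
# Sketch of Django DATABASES settings matching the compose file above.
# "cloudsql-proxy" is the Compose service name; NAME, USER, and
# PASSWORD below are placeholders, not values from the original post.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",            # placeholder
        "HOST": "cloudsql-proxy",  # the Compose service name, not localhost
        "PORT": "5432",            # matches the =tcp:0.0.0.0:5432 suffix
        "USER": "myuser",          # placeholder
        "PASSWORD": "secret",      # placeholder
    },
}
```

Docker Compose's internal DNS resolves the service name cloudsql-proxy to the proxy container, so no IP whitelisting is involved.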

You can refer to the Google documentation here: https://cloud.google.com/sql/docs/postgres/connect-admin-proxy#connecting-docker

It shows how to run the proxy in a container. Then you can use docker-compose as in the answer @Dan suggested here: https://stackoverflow.com/a/48431559/14305096

docker run -d \
  -v PATH_TO_KEY_FILE:/config \
  -p 127.0.0.1:5432:5432 \
  gcr.io/cloudsql-docker/gce-proxy:1.19.1 /cloud_sql_proxy \
  -instances=INSTANCE_CONNECTION_NAME=tcp:0.0.0.0:5432 \
  -credential_file=/config
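Once the proxy container is up, you can sanity-check that the forwarded port is reachable before pointing your application at it. A small hypothetical helper using plain Python sockets (nothing Cloud SQL-specific):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the proxy container running as above, the forwarded port
# should accept connections:
#   port_open("127.0.0.1", 5432)
```

This only verifies that the proxy is listening; authentication against the database itself still happens with your normal DB credentials.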

For macOS users, you can use the following as POSTGRES_HOST:

host.docker.internal

For example:

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "<DB-NAME>",
        "HOST": "host.docker.internal",
        "PORT": "<YOUR-PORT>",
        "USER": "<DB-USER>",
        "PASSWORD": "<DB-USER-PASSWORD>",
    },
}

From inside the container, this hostname resolves to your host machine, so the container can reach the proxy listening on your host's localhost.
