Could not translate host name "postgres" to address: Name or service not known - DOCKER
I have a Dash app that I'm trying to host on port 8050, so I'm serving it with gunicorn. I ran my Dockerfile, which contains:
FROM airflow-update
CMD gunicorn -b 0.0.0.0:8050 /py_scripts.index:server
and then I ran
docker run -p 8050:8050 airflow
and I got the error below:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "postgres" to address: Name or service not known
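This error means psycopg2 is failing to resolve the host portion of the SQLAlchemy connection URL; the name "postgres" only resolves via Docker's embedded DNS when the container is attached to the Compose network. A minimal sketch (using only the standard library, not SQLAlchemy itself) of where that hostname comes from:

```python
from urllib.parse import urlsplit

# The AIRFLOW__CORE__SQL_ALCHEMY_CONN value from the Compose file in the question.
conn = "postgresql+psycopg2://airflow:airflow@postgres/airflow"

parts = urlsplit(conn)
print(parts.hostname)  # "postgres" -- resolvable only inside the Compose network
print(parts.username)  # "airflow"
print(parts.path)      # "/airflow" (the database name)
```

A container started with a bare docker run has no DNS entry for "postgres", so resolution fails before a connection is ever attempted.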
My docker-compose.yaml file looks like this:
version: '3'
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-airflow:latest}
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: CeleryExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__RESULT_BACKEND: db+postgresql://airflow:airflow@postgres/airflow
    AIRFLOW__CELERY__BROKER_URL: redis://:@redis:6379/0
    AIRFLOW__CORE__FERNET_KEY: ''
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
    AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
    AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
  volumes:
    - ./dags:/opt/airflow/dags
    - ./logs:/opt/airflow/logs
    - ./logs:/opt/airflow/logs/pipeline-logs
    - ./py_scripts:/opt/airflow/py_scripts
    - ./data:/opt/airflow/data
    - ./dbt-redshift:/opt/airflow/dbt-redshift
    - ./output:/opt/airflow/output
  user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-50000}"
  depends_on:
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy

services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
    volumes:
      - postgres-db-volume:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "airflow"]
      interval: 5s
      retries: 5
    restart: always

  redis:
    image: redis:latest
    ports:
      - 6379:6379
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 30s
      retries: 50
    restart: always

  airflow-webserver:
    <<: *airflow-common
    command: webserver
    ports:
      - 8080:8080
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always

  airflow-scheduler:
    <<: *airflow-common
    command: scheduler
    healthcheck:
      test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always

  airflow-worker:
    <<: *airflow-common
    command: celery worker
    healthcheck:
      test:
        - "CMD-SHELL"
        - 'celery --app airflow.executors.celery_executor.app inspect ping -d "celery@$${HOSTNAME}"'
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always

  airflow-init:
    <<: *airflow-common
    command: version
    environment:
      <<: *airflow-common-env
      _AIRFLOW_DB_UPGRADE: 'true'
      _AIRFLOW_WWW_USER_CREATE: 'true'
      _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
      _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}

  flower:
    <<: *airflow-common
    command: celery flower
    ports:
      - 5555:5555
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:5555/"]
      interval: 10s
      timeout: 10s
      retries: 5
    restart: always

volumes:
  postgres-db-volume:
What am I doing wrong? Should I update my docker-compose file with regard to postgres?
Compose creates a network named default, but your docker run command isn't attached to that network.
The best approach is to move this container into your Compose setup. If the only thing you're replacing is the command being run, you don't even need a custom image. This is the same pattern you already use for the several other containers in this setup.
services:
  postgres: { ... }
  airflow-server:
    <<: *airflow-common
    command: gunicorn -b 0.0.0.0:8050 /py_scripts.index:server
    ports:
      - '8050:8050'
    restart: always
If you really want to run it with docker run, you need to find the network Compose creates by default and attach to it explicitly. docker network ls will show it; its name will end with ..._default. Note that there's a huge amount of additional setup in the airflow-common block, and docker run won't see any of it.
docker run --net airflow_default -p 8050:8050 airflow
I fixed it by adjusting @davidmaze's answer as below:
  airflow-server:
    <<: *airflow-common
    ports:
      - '8050:8050'
    restart: always
    entrypoint: gunicorn --chdir /opt/airflow/py_scripts -b 0.0.0.0:8050 index:server