
Django channels unable to connect (find) websocket after docker-compose of project using redis

I have currently implemented websocket connections via django channels using a redis layer.

I'm new to docker and not sure where I might have made a mistake. After `docker-compose up -d --build`, the static files, media, database and gunicorn WSGI server all work, but redis won't connect, even though it is running in the background.

Before trying to containerize the application with docker, it worked well with:

python manage.py runserver

with the following settings.py section for the redis layer:

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("0.0.0.0", 6379)],
        },
    },
}
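(As an aside, `0.0.0.0` is normally a bind address, not a connect address; on Linux, connecting to it happens to reach loopback, which is why this configuration worked with `runserver` plus a port-published Redis container but cannot work between containers. A minimal stdlib sketch for checking whether a host/port pair is reachable; the host names and ports below are placeholders:)

```python
import socket

def try_connect(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds,
    i.e. roughly what `nc -z host port` reports."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. try_connect("127.0.0.1", 6379) tells you whether a Redis
# instance published on the host's port 6379 is reachable from here.
```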

and by running a docker container for the redis layer:

docker run -p 6379:6379 -d redis:5

But after containerizing the entire application, it was unable to find the websocket.

The new docker-compose setup is as follows:

version: '3.10'

services:
  web:
    container_name: web
    build: 
      context: ./app
      dockerfile: Dockerfile
    command: bash -c "gunicorn core.wsgi:application --bind 0.0.0.0:8000"
    volumes:
      - ./app/:/usr/src/app/
      - static_volume:/usr/src/app/staticfiles/
      - media_volume:/usr/src/app/media/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
    networks:
      - app_network


  redis:
    container_name: redis
    image: redis:5
    ports:
      - 6379:6379
    networks:
      - app_network
    restart: on-failure


  db:
    container_name: db
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - ./.env.psql
    ports:
      - 5432:5432
    networks:
      - app_network


volumes:
  postgres_data:
  static_volume:
  media_volume:

networks:
  app_network:

with this settings.py:

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("redis", 6379)],
        },
    },
}
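(The host name `redis` only resolves inside the compose network, where Docker's embedded DNS maps each service name to its container's IP; from the host, or from a container outside that network, the name does not resolve at all. A stdlib sketch of that resolution step; the service name is whatever your compose file declares:)

```python
import socket

def resolve_service(name: str):
    """Resolve a service name the way the Redis client will.
    Inside the compose network, Docker's DNS answers for service
    names like "redis"; elsewhere, resolution simply fails."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None
```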

After building the containers successfully and running `docker-compose logs -f`:

Attaching to web, db, redis
db       | The files belonging to this database system will be owned by user "postgres".
db       | This user must also own the server process.
db       | 
db       | The database cluster will be initialized with locale "en_US.utf8".
db       | The default database encoding has accordingly been set to "UTF8".
db       | The default text search configuration will be set to "english".
db       | 
db       | Data page checksums are disabled.
db       | 
db       | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db       | creating subdirectories ... ok
db       | selecting dynamic shared memory implementation ... posix
db       | selecting default max_connections ... 100
db       | selecting default shared_buffers ... 128MB
db       | selecting default time zone ... Etc/UTC
db       | creating configuration files ... ok
db       | running bootstrap script ... ok
db       | performing post-bootstrap initialization ... ok
db       | initdb: warning: enabling "trust" authentication for local connections
db       | You can change this by editing pg_hba.conf or using the option -A, or
db       | --auth-local and --auth-host, the next time you run initdb.
db       | syncing data to disk ... ok
db       | 
db       | 
db       | Success. You can now start the database server using:
db       | 
db       |     pg_ctl -D /var/lib/postgresql/data -l logfile start
db       | 
db       | waiting for server to start....2022-06-27 16:18:30.303 UTC [48] LOG:  starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db       | 2022-06-27 16:18:30.310 UTC [48] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db       | 2022-06-27 16:18:30.334 UTC [49] LOG:  database system was shut down at 2022-06-27 16:18:29 UTC
db       | 2022-06-27 16:18:30.350 UTC [48] LOG:  database system is ready to accept connections
db       |  done
db       | server started
db       | CREATE DATABASE
db       | 
db       | 
db       | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db       | 
db       | 2022-06-27 16:18:31.587 UTC [48] LOG:  received fast shutdown request
db       | waiting for server to shut down....2022-06-27 16:18:31.596 UTC [48] LOG:  aborting any active transactions
db       | 2022-06-27 16:18:31.601 UTC [48] LOG:  background worker "logical replication launcher" (PID 55) exited with exit code 1
db       | 2022-06-27 16:18:31.602 UTC [50] LOG:  shutting down
db       | 2022-06-27 16:18:31.650 UTC [48] LOG:  database system is shut down
db       |  done
db       | server stopped
db       | 
db       | PostgreSQL init process complete; ready for start up.
db       | 
db       | 2022-06-27 16:18:31.800 UTC [1] LOG:  starting PostgreSQL 14.4 (Debian 14.4-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
db       | 2022-06-27 16:18:31.804 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db       | 2022-06-27 16:18:31.804 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db       | 2022-06-27 16:18:31.810 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db       | 2022-06-27 16:18:31.818 UTC [62] LOG:  database system was shut down at 2022-06-27 16:18:31 UTC
db       | 2022-06-27 16:18:31.825 UTC [1] LOG:  database system is ready to accept connections
redis    | 1:C 27 Jun 2022 16:18:29.080 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis    | 1:C 27 Jun 2022 16:18:29.080 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=1, just started
redis    | 1:C 27 Jun 2022 16:18:29.080 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis    | 1:M 27 Jun 2022 16:18:29.082 * Running mode=standalone, port=6379.
redis    | 1:M 27 Jun 2022 16:18:29.082 # Server initialized
redis    | 1:M 27 Jun 2022 16:18:29.082 * Ready to accept connections
web      | Waiting for postgres...
web      | PostgreSQL started
web      | Waiting for redis...
web      | redis started
web      | [2022-06-27 16:18:33 +0000] [1] [INFO] Starting gunicorn 20.1.0
web      | [2022-06-27 16:18:33 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web      | [2022-06-27 16:18:33 +0000] [1] [INFO] Using worker: sync
web      | [2022-06-27 16:18:33 +0000] [8] [INFO] Booting worker with pid: 8
web      | [2022-06-27 16:19:18 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:8)
web      | [2022-06-27 18:19:18 +0200] [8] [INFO] Worker exiting (pid: 8)
web      | [2022-06-27 16:19:18 +0000] [9] [INFO] Booting worker with pid: 9
web      | Not Found: /ws/user_consumer/1/
web      | Not Found: /ws/accueil/accueil/
web      | Not Found: /ws/user_consumer/1/
web      | Not Found: /ws/accueil/accueil/

And `docker ps`:

CONTAINER ID   IMAGE                     COMMAND                  CREATED          STATUS          PORTS                                       NAMES
cb3e489e0831   dermatology-project_web   "/usr/src/app/entryp…"   35 minutes ago   Up 35 minutes   0.0.0.0:8000->8000/tcp, :::8000->8000/tcp   web
aee14c8665d0   postgres                  "docker-entrypoint.s…"   35 minutes ago   Up 35 minutes   0.0.0.0:5432->5432/tcp, :::5432->5432/tcp   db
94c29591b352   redis:5                   "docker-entrypoint.s…"   35 minutes ago   Up 35 minutes   0.0.0.0:6379->6379/tcp, :::6379->6379/tcp   redis

The build Dockerfile:

# base image (assumed: the FROM line is missing from the original post)
FROM python:3.10

# set work directory
WORKDIR /usr/src/app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install psycopg2 dependencies
RUN apt-get update
RUN apt-get install -y libpq-dev python3-pip python-dev postgresql postgresql-contrib netcat

# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt

# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh


# create the appropriate directories for staticfiles

# copy project
COPY . .

# staticfiles
RUN python manage.py collectstatic --no-input --clear


# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]

and the entrypoint that checks the connections:

#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done

    echo "PostgreSQL started"
fi

if [ "$CHANNEL" = "redis" ]
then
    echo "Waiting for redis..."

    while ! nc -z $REDIS_HOST $REDIS_PORT; do
        sleep 0.1
    done

    echo "redis started"
fi
#python manage.py flush --no-input
#python manage.py migrate

exec "$@"
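(The `nc -z` loops above can also be written in pure Python, which is handy when netcat is not installed in the image. A sketch under the same assumptions, with the host and port taken from the same environment variables the entrypoint uses:)

```python
import socket
import time

def wait_for_port(host: str, port: int, interval: float = 0.1,
                  attempts: int = 100) -> bool:
    """Poll (host, port) until a TCP connection succeeds,
    like the entrypoint's `while ! nc -z ...` loop."""
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(interval)
    return False

# In the container you would call something like:
#   wait_for_port(os.environ["REDIS_HOST"], int(os.environ["REDIS_PORT"]))
```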

I have also tried running the redis container separately as before while keeping the working containers, but that doesn't work either. I have also tried running daphne on a different port and passing it the asgi application (`daphne -p 8001 myproject.asgi:application`), and that didn't work either.

Thank you

Managed a solution eventually

To make it work I needed to run the WSGI and ASGI servers separately, each in its own container. The previous "web" service that exposed the ports also had to be split into two services, one per server, with nginx proxies upstreaming to each respective port.

This was all thanks to this genius of a man:

https://github.com/pplonski/simple-tasks

Here he explains what I needed and more. He also uses celery workers to manage an asynchronous task queue/job queue based on distributed message passing, which was a bit overkill for my project but beautiful.

New docker-compose:

version: '2'

services:

    nginx:
        container_name: nginx
        restart: always
        build: ./nginx
        ports:
            - 1337:80
        volumes:
            - static_volume:/usr/src/app/staticfiles/
            - media_volume:/usr/src/app/media/
        depends_on:
            - wsgiserver
            - asgiserver

    postgres:
        container_name: postgres
        restart: always
        image: postgres
        volumes:
            - postgres_data:/var/lib/postgresql/data/
        ports:
            - 5433:5432
        expose:
            - 5432
        env_file:
            - ./.env.db

    redis:
        container_name: redis
        image: redis:5
        restart: unless-stopped
        ports:
            - 6378:6379


    wsgiserver:
        build:            
            context: ./app
            dockerfile: Dockerfile
        container_name: wsgiserver
        command: gunicorn core.wsgi:application --bind 0.0.0.0:8000 
        env_file:
            - ./.env.dev
        volumes:
            - ./app/:/usr/src/app/
            - static_volume:/usr/src/app/staticfiles/
            - media_volume:/usr/src/app/media/
        links:
            - postgres
            - redis
        expose:
            - 8000


    asgiserver:
        build:            
            context: ./app
            dockerfile: Dockerfile
        container_name: asgiserver
        command: daphne core.asgi:application -b 0.0.0.0 -p 9000
        env_file:
            - ./.env.dev
        volumes:
            - ./app/:/usr/src/app/
        links:
            - postgres
            - redis
        expose:
            - 9000


volumes:
    static_volume: 
    media_volume:
    postgres_data:

New entrypoint.sh:

#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done

    echo "PostgreSQL started"
fi

#python manage.py flush --no-input
#python manage.py migrate

exec "$@"

New nginx

nginx.conf:

server {
    listen 80;


    # gunicorn WSGI server
    location / {
        try_files $uri @proxy_api;
    }

    location @proxy_api {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass   http://wsgiserver:8000;
    }


    # ASGI
    # map websocket connection to daphne
    location /ws {
        try_files $uri @proxy_to_ws;
    }

    location @proxy_to_ws {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;

        proxy_pass   http://asgiserver:9000;
    }
    
    # static and media files 
    location /static/ {
        alias /usr/src/app/staticfiles/;
    }
    location /media/ {
        alias /usr/src/app/media/;
    }
}

Dockerfile for nginx:

FROM nginx:1.21

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d

Note

If anyone is using this as a reference: this is not a production setup; further steps are needed.

This article explains the other steps: https://testdriven.io/blog/dockerizing-django-with-postgres-gunicorn-and-nginx/#conclusion, as well as (linked in its conclusion) securing the application on AWS with Docker and Let's Encrypt.
