
How to allow one container's entire volume to be accessed by another container using docker-compose

I have 4 services running using docker-compose:

1) python-api 2) python-model 3) python-celery 4) redis-server

Flow:

1) python-api gets hit via Postman, with images and some text as parameters, on port 8000.

2) python-api passes the image and data to python-model on port 8001 for some ML predictions.

3) The modified image and the response data (in JSON format) are then passed to python-celery for triggering mails; a sketch of this hand-off follows below.
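For context, the hand-off in step 3 happens through a Celery task, so only the image path (a string) travels through the broker; the worker then has to be able to open that same path inside its own container. A hypothetical sketch of such a call, reusing the task name that appears in the error log below (the real argument list is not shown in this post):

    # Hypothetical sketch -- the actual signature of queue_task_v3 is not shown here.
    from classify_crack.tasks import queue_task_v3

    # Only the path string is serialized onto the broker; the image file itself
    # must exist at this exact path inside the python-celery container as well.
    queue_task_v3.delay(img_path, response_data)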

Error: python-celery is able to grab hold of the images and responses being sent by python-model in step 3, but it is currently not able to read the image.

Error log:

========================
python-celery_1      | Received task: classify_crack.tasks.queue_task_v3[d71f976f-b2e7-4b29-9147-35996668de17]
python-celery_1      | == unique_file_index
python-celery_1      | AANJkaNIJSDHURHQEYRQ(*R
python-celery_1      | /python-model/server/classify_crack/inference/images/202003251237371/202003251237371_0.jpg 64
python-celery_1      | Task classify_crack.tasks.queue_task_v3[d71f976f-b2e7-4b29-9147-35996668de17] raised unexpected: AttributeError("'NoneType' object has no attribute 'shape'",)
python-celery_1      | Traceback (most recent call last):
python-celery_1      |   File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 385, in trace_task
python-celery_1      |     R = retval = fun(*args, **kwargs)
python-celery_1      |   File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 648, in __protected_call__
python-celery_1      |     return self.run(*args, **kwargs)
python-celery_1      |   File "/python-api/classify_crack/tasks.py", line 690, in queue_task_v3
python-celery_1      |     save_heat_map_v2(predictions, img_path, _dir, unique_file_index, original_image_index, i, grid_size=grid_size, metadata=metadata, StoredFileLinks=StoredFileLinks, row_stride=row_stride, col_stride=col_stride)
python-celery_1      |   File "/python-api/classify_crack/tasks.py", line 128, in save_heat_map_v2
python-celery_1      |     num_row_splits = int(np.ceil(img.shape[0]/row_stride))
python-celery_1      | AttributeError: 'NoneType' object has no attribute 'shape'

The lines inside the python-celery code where I'm getting the error:

  img = cv2.imread(img_path)  # cv2.imread() returns None (no exception) when the file cannot be read
  print(img_path)
  print(img)
  # print(img_path,grid_size)
  splits = int(np.ceil(img.shape[0]/row_stride))  # fails with AttributeError when img is None

Here, img_path is a valid path inside the container, and it is printed correctly. But I'm not able to read the image: img comes back as None, and the splits line gives me the error above.
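Since cv2.imread() never raises on a bad path, a defensive check makes this failure mode explicit. This is just an illustrative sketch, not part of the original tasks.py:

    import os
    import cv2

    img = cv2.imread(img_path)
    if img is None:
        # cv2.imread() silently returns None when the file is missing or unreadable
        raise FileNotFoundError(
            "cannot read {!r} (exists={}, readable={})".format(
                img_path, os.path.exists(img_path), os.access(img_path, os.R_OK)))

With a check like this, the worker fails with a clear message about the unreadable file instead of the confusing 'NoneType' AttributeError.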

Reason why I'm getting the error:

I'm receiving this error because the task is trying to access the path /python-model/server/classify_crack/inference/images/202003251237371/202003251237371.jpg, but python-celery is not able to access the folder named 202003251237371.

Proof:

I tried using the following command:

command: >
      sh -c "ls '/python-model/server/classify_crack/inference/images' &&

inside the docker-compose entries of both the python-model and python-celery services, and when I run all the containers again I get the following outcome:

python-model_1       | 201801151543500
python-model_1       | 201801151543500.jpg
python-model_1       | IMG_20190307_184100
python-model_1       | IMG_20190307_184100.jpg
python-model_1       | extracted_input_0_0 (15)
python-model_1       | extracted_input_0_0 (15).jpg
python-model_1       | extracted_input_0_0 (16)
python-model_1       | extracted_input_0_0 (16).jpg
python-model_1       | extracted_input_0_0 (18)
python-model_1       | extracted_input_0_0 (18).jpg
python-model_1       | extracted_input_0_0 (19)
python-model_1       | extracted_input_0_0 (19).jpg
python-model_1       | extracted_input_0_0 (9)
python-model_1       | extracted_input_0_0 (9).jpg
python-model_1       | file
python-model_1       | image (2)
python-model_1       | image (2).png
python-model_1       | 202003251237371
python-model_1       | 202003251237371.jpg
python-model_1       | image_X
python-model_1       | image_X.png
python-model_1       | original
python-model_1       | original.jpg
python-model_1       | original_image_0
python-model_1       | original_image_0.jpg



python-celery_1      | 20013V_Y.JPG
python-celery_1      | extracted_input_0_0 (15).jpg
python-celery_1      | extracted_input_0_0 (16).jpg
python-celery_1      | extracted_input_0_0 (18).jpg
python-celery_1      | extracted_input_0_0 (19).jpg
python-celery_1      | extracted_input_0_0 (4).jpg
python-celery_1      | 202003251237371.jpg

Now clearly, python-celery cannot list the folder 202003251237371 (which contains the image 202003251237371.jpg), even though I can see it in python-model.

How can I tackle this scenario and allow python-celery to access such image folders?

docker-compose:

version: "3"
networks:
  app-tier:
    driver: bridge

volumes:
  app-volume: {}

services:
  python-api-celery: &python-api-celery
    build:
      context: /Users/AjayB/Desktop/python-api/
    networks:
      - app-tier
    volumes:
      - app-volume:/python-model/server/classify_crack/:rw
    environment:
      - PYTHON_API_ENV=development

    command: >
      sh -c "python manage.py makemigrations &&
             python manage.py migrate"

  python-api: &python-api
    <<: *python-api-celery
    ports:
      - "8000:8000"
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"

  python-celery: &python-celery
    <<: *python-api-celery
    depends_on:
      - redis
    links:
      - python-model

    command: >
      sh -c "ls '/python-model/server/classify_crack/inference/images' &&
             celery -A server worker -l info"

  redis:
    image: redis:5.0.8-alpine
    hostname: redis
    networks:
          - app-tier
    expose:
      - "6379"
    volumes:
      - app-volume:/python-model/server/classify_crack/:rw
    ports:
      - "6379:6379"
    command: ["redis-server"]

  python-model: &python-model
    build:
      context: /Users/AjayB/Desktop/Python/python/
    ports:
      - "8001:8001"
    networks:
      - app-tier
    environment:
      - PYTHON_API_ENV=development
    volumes:
      - app-volume
    depends_on:
      - python-api
    command: >
      sh -c "ls '/python-model/server/classify_crack/inference/images' &&
             cd /python-model/server/ &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8001"

Container instances:

CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS                    NAMES
7374a7b0b051        integrated_python-celery       "sh -c 'celery -A se…"   13 minutes ago      Up 5 seconds        8000/tcp                 integrated_python-celery_1
8eb9a754996a        integrated_python-model        "sh -c 'cd /python-m…"   20 minutes ago      Up 5 seconds        0.0.0.0:8001->8001/tcp   integrated_python-model_1
b268b7bd1ac4        integrated_python-api-celery   "sh -c 'python manag…"   20 minutes ago      Up 6 seconds        8000/tcp                 integrated_python-api-celery_1
869bb5fc21b2        integrated_python-api          "sh -c 'python manag…"   20 minutes ago      Up 6 seconds        0.0.0.0:8000->8000/tcp   integrated_python-api_1
c85a1becea34        redis:5.0.8-alpine             "docker-entrypoint.s…"   About an hour ago   Up 6 seconds        0.0.0.0:6379->6379/tcp   integrated_redis_1

In my opinion, you should follow an in-depth tutorial on how to use Celery and a Python API in Docker and Kubernetes :)

From the OpenCV-Python docs:

Even if the image path is wrong, it won't throw any error, but print img will give you None

Which seems to be exactly what happens here. Maybe your image path looks right but is actually wrong; possible causes:

  • You are using an incorrect relative or absolute path
  • The image file exists but is not valid
  • The permissions on the folder or file /python-model/server/classify_crack/inference/images/202003251237371/202003251237371.jpg do not allow the user running python-celery to read it (a quick way to check this is sketched below)
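One quick way to verify the path and the permissions from inside the worker container (an illustrative command, assuming the Compose service is named python-celery as in the file above):

    docker-compose exec python-celery sh -c \
      "ls -l /python-model/server/classify_crack/inference/images/202003251237371/"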

Your Compose volume definition looks fine otherwise.

The error got solved at last:

It worked after including the following lines at the start of python-api's views.py:

    self.cur_file_dir_path = '/data/'  # mount point of the shared named volume
    self.cur_file_folder_path = '/'.join(file_path.split('/')[:-1])  # directory part of file_path
    if not os.path.exists('{}/inference'.format(self.cur_file_folder_path)):
      os.makedirs('{}/inference'.format(self.cur_file_folder_path))  # make sure the inference/ subfolder exists

I also modified the volume mount in docker-compose, just to avoid some confusion. Can't believe it took almost 1 week to get this solved.
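The essential change, sketched minimally here (the full file follows), is that every service that needs the image files now mounts the same named volume at the same container path, so whatever python-model writes under /data/ is visible to python-celery at the identical path:

    volumes:
      app-volume: {}

    services:
      python-model:
        volumes:
          - app-volume:/data/:rw   # writes images under /data/
      python-celery:
        volumes:
          - app-volume:/data/:rw   # sees the same files at the same path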

docker-compose:

version: "3"

networks:
  app-tier:
    driver: bridge

volumes:
  app-volume: {}

services:
  python-api-celery: &python-api-celery
    build:
      context: /Users/AjayB/Desktop/python-api/
    networks:
      - app-tier
    volumes:
      - app-volume:/data/:rw
    environment:
      - PYTHON_API_ENV=development

    command: >
      sh -c "python manage.py makemigrations &&
             python manage.py migrate"

  python-api: &python-api
    <<: *python-api-celery
    ports:
      - "8000:8000"
    command: >
      sh -c "python manage.py runserver 0.0.0.0:8000"

  python-celery: &python-celery
    <<: *python-api-celery
    depends_on:
      - redis
    command: >
      sh -c "ls '/data/' &&
             celery -A server worker -l info"

  redis:
    image: redis:5.0.8-alpine
    hostname: redis
    networks:
          - app-tier
    expose:
      - "6379"
    volumes:
      - app-volume:/data/:rw
    ports:
      - "6379:6379"
    command: ["redis-server"]

  python-model: &python-model
    build:
      context: /Users/AjayB/Desktop/Python/python/
    ports:
      - "8001:8001"
    networks:
      - app-tier
    environment:
      - PYTHON_API_ENV=development
    volumes:
      - app-volume:/data/:rw
    depends_on:
      - python-api

    command: >
      sh -c "ls '/data/' &&
             cd /python-model/server/ &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8001"
