
ECS Container and Docker

I have been stuck on this for the longest time now: I am deploying Python code as a Dockerized container.

I am doing all of this with the Python CDK.

Here is how I am creating the cluster:

vpc_test = _ec2.Vpc.from_lookup(self, "VPC",
                                vpc_id="vpc-6****"
                                )

# Setting up the container to run the job
cluster = _ecs.Cluster(self, "ClusterToGetFile",
                       vpc=vpc_test
                       )
task_definition = _ecs.FargateTaskDefinition(self, "TaskDefinition",
                                             cpu=2048,
                                             memory_limit_mib=4096
                                             )
task_definition.add_container("getFileTask",
                              image=_ecs.ContainerImage.from_asset(directory="assets",
                                                                   file="Dockerfile-ecs-file-download"))
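
As a side note, wiring the container up to CloudWatch Logs makes it much easier to see what the task actually did before it stopped. A minimal sketch of the same add_container call with an awslogs driver (the stream_prefix value is an arbitrary choice):

task_definition.add_container("getFileTask",
                              image=_ecs.ContainerImage.from_asset(directory="assets",
                                                                   file="Dockerfile-ecs-file-download"),
                              # Send the container's stdout/stderr to CloudWatch Logs
                              logging=_ecs.LogDrivers.aws_logs(stream_prefix="getFileTask"))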

Here is the Dockerfile-ecs-file-download:

FROM python:3.9
WORKDIR /usr/app/src
COPY marketo-ecs-get-file/get_ecs_file_marketo.py ./
COPY marketo-ecs-get-file/requirements.txt ./
COPY common_functions ./
RUN pip3 install -r requirements.txt --no-cache
CMD ["python", "./get_ecs_file_marketo.py"]

To begin with, all I am trying to do is run the task (deploy) manually.

All I have in the get_ecs_file_marketo.py file is:

import logging
logging.info("ECS Container has started.")
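
Worth noting: with Python's default logging configuration the root logger only emits WARNING and above, so this logging.info call produces no output at all. A minimal sketch that would actually emit the message:

import logging

# The default root logger level is WARNING, so INFO messages are dropped
# unless the level is lowered explicitly.
logging.basicConfig(level=logging.INFO)
logging.info("ECS Container has started.")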

However, when I deploy the task, I get this error:

Stopped reason
Essential container in task exited

My plan is to use ECS RunTask as part of a Step Function. When the Lambda is done processing some data and needs to do a data pull, it will call RunTask for this container.

Ideally, I would want the Lambda in the Step Function to do it all; however, the time limit causes the Lambda to die before the complete file is downloaded and pushed to S3. This will be a regular exercise, but the Lambda up front is still required, because the job needs to enqueue a request and keep checking the file status before the ECS container can go and start downloading the file.
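
For reference, this RunTask integration can be expressed directly in the CDK. A minimal sketch, assuming a hypothetical enqueue_fn Lambda and reusing the cluster and task_definition from above; the RUN_JOB integration pattern makes the state machine wait until the task stops:

from aws_cdk import (aws_stepfunctions as _sfn,
                     aws_stepfunctions_tasks as _sfn_tasks)

# Lambda step that enqueues the request and checks file status (hypothetical function)
enqueue_step = _sfn_tasks.LambdaInvoke(self, "EnqueueAndCheck",
                                       lambda_function=enqueue_fn)

# ECS RunTask step; RUN_JOB makes Step Functions wait for the task to finish
run_task_step = _sfn_tasks.EcsRunTask(self, "DownloadFile",
                                      integration_pattern=_sfn.IntegrationPattern.RUN_JOB,
                                      cluster=cluster,
                                      task_definition=task_definition,
                                      launch_target=_sfn_tasks.EcsFargateLaunchTarget(
                                          platform_version=_ecs.FargatePlatformVersion.LATEST))

_sfn.StateMachine(self, "FileDownloadStateMachine",
                  definition=enqueue_step.next(run_task_step))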

Any feedback is appreciated. Thanks.

This appears to be working exactly how I would expect it to. All your script is doing is printing a log message and exiting.

You aren't really getting an error message, you're just being notified that your script stopped running (because it finished).
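
Put differently, the task stays RUNNING only for as long as the script is doing work, and a non-zero exit code is what would surface as a task failure. A minimal sketch of what the entrypoint could look like once it performs the actual download and upload (the URL, bucket, and key are hypothetical placeholders):

import logging
import urllib.request

import boto3

logging.basicConfig(level=logging.INFO)

def main():
    logging.info("ECS container started, downloading file.")
    # Hypothetical source URL and destination bucket/key
    urllib.request.urlretrieve("https://example.com/export.csv", "/tmp/export.csv")
    boto3.client("s3").upload_file("/tmp/export.csv", "my-bucket", "exports/export.csv")
    logging.info("Upload complete, exiting.")

if __name__ == "__main__":
    main()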
