
Boto3 requiring AWS region in docker but not in local env

I have a script set up for a Lambda that uses Python and will eventually connect to a DynamoDB table. I set everything up locally (a virtual environment using pipenv) using the docker image AWS provides for DynamoDB, and it all worked without a hitch. Then I tried to dockerize the Python side. When I run my table-creation script in my local virtual environment, it runs without a problem. When I run the same script from within my docker container, I get an error. I'm not sure what the difference is. Right now, I use this line to connect:

dynamodb = boto3.resource('dynamodb', endpoint_url="http://dynamodb:8000")

dynamodb is the name of the service in docker-compose, so this should work. When I replace dynamodb:8000 with localhost:8000, it works fine in my local venv. When I run it in docker, I get

botocore.exceptions.NoRegionError: You must specify a region.

The big question is why it's looking for a region in docker, but not locally. Here's my docker-compose for good measure:

version: '3'
services:
    dynamodb:
        command: "-jar DynamoDBLocal.jar -sharedDb -optimizeDbBeforeStartup -dbPath ./data"
        image: "amazon/dynamodb-local:latest"
        container_name: dynamodb
        ports:
            - "8000:8000"
        volumes:
            - "./database_data:/home/dynamodblocal/data"
        working_dir: /home/dynamodblocal
    lambda:
        build: .
        container_name: user-rekognition-lambda
        volumes:
            - ./:/usr/src/app

In one of the AWS blogs, on local AWS Glue development, they share the ~/.aws/ directory in read-only mode with the docker container using the volume option:

-v ~/.aws:/root/.aws:ro

This would be the easiest way for you to reuse your credentials from the host workstation inside docker.
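Applied to the asker's compose file, this could be sketched as one extra volume entry on the lambda service (assuming the AWS config lives in the default ~/.aws location on the host):

```yaml
    lambda:
        build: .
        container_name: user-rekognition-lambda
        volumes:
            - ./:/usr/src/app
            # mount host credentials/config read-only so boto3 inside
            # the container can resolve region and credentials
            - ~/.aws:/root/.aws:ro
```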

boto3 has a bug: even if you explicitly give it an endpoint_url, it still wants to know a region_name. Why? I don't know, and as far as I can tell it doesn't seem to use this name for anything.

In most people's setups, $HOME/.aws/config contains some default choice of region, so boto3 picks up the region_name from that configuration and then just ignores it...
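For reference, a minimal $HOME/.aws/config that satisfies this lookup looks something like the following (the region value here is only an example; any valid region name works for a local endpoint):

```ini
[default]
region = us-east-1
```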

Since your docker image probably doesn't have that file, the trivial solution is just to add region_name='us-east-1' (for example) explicitly to your boto3.resource() call. Again, the specific region name you choose won't matter - boto3 will connect to the URL you give it, not to that region.

So the full command becomes:

dynamodb = boto3.resource('dynamodb',
    endpoint_url="http://dynamodb:8000",
    region_name="us-east-1")
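An alternative sketch, if you'd rather not hard-code the region in Python, is to set it through the container environment in docker-compose: boto3 reads AWS_DEFAULT_REGION, and DynamoDB Local accepts any credential values, so dummy ones are fine. (The service layout below just mirrors the asker's compose file.)

```yaml
    lambda:
        build: .
        container_name: user-rekognition-lambda
        environment:
            # boto3 picks the region up from here; no code change needed
            - AWS_DEFAULT_REGION=us-east-1
            # DynamoDB Local does not validate credentials, so any values work
            - AWS_ACCESS_KEY_ID=dummy
            - AWS_SECRET_ACCESS_KEY=dummy
        volumes:
            - ./:/usr/src/app
```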
