
boto3 gives InvalidBucketName error for valid bucket names on S3 with custom url

I am trying to write a Python script for basic get/put/delete/list operations on S3. I am using a Cloudian S3 object store, not AWS. To set up the boto3 resource, I set the endpoint and keys like this:

URL = 'http://ip:80'

s3_resource = boto3.resource('s3', endpoint_url=URL,
   aws_access_key_id=ACCESS_KEY,
   aws_secret_access_key=SECRET_KEY,
   region_name='region1')

I have created some test buckets MANUALLY with the following names, all of which satisfy the S3 bucket-naming constraints:

  • test-bucket-0
  • test-bucket-1
  • sample-bucket
  • testbucket

However, when I try to create a bucket from Python code, I repeatedly get the following error:

# >>> client.list_buckets()
# Traceback (most recent call last):
#   File "<stdin>", line 1, in <module>
#   File "/usr/local/lib/python3.8/site-packages/botocore/client.py", line 357, in _api_call
#     return self._make_api_call(operation_name, kwargs)
#   File "/usr/local/lib/python3.8/site-packages/botocore/client.py", line 676, in _make_api_call
#     raise error_class(parsed_response, operation_name)
# botocore.exceptions.ClientError: An error occurred (InvalidBucketName) when calling the ListBuckets operation: The specified bucket is not valid.
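Since botocore wraps every service failure in a generic `ClientError`, the actual reason lives in the parsed response attached to the exception. A small sketch of pulling the error code out; the dict shape below mirrors botocore's parsed response (an assumption you can verify by printing `e.response` in the `except` block):

```python
# Sketch: extract the service error code from a botocore-style error response.

def error_code(error_response):
    """Return the service error code, e.g. 'InvalidBucketName'."""
    return error_response.get("Error", {}).get("Code", "Unknown")

# In a real script this would be used like:
#   try:
#       client.list_buckets()
#   except botocore.exceptions.ClientError as e:
#       print(error_code(e.response))
```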

Being very new to boto3, I am really not sure what boto3 is expecting. I have tried various combinations for creating connections to the S3 service, such as using a client instead of a resource, but the problem persists.

A few other S3 connections I tried are these:

s3 = boto3.resource('s3',
        endpoint_url='http://10.43.235.193:80',
        aws_access_key_id='aaa',
        aws_secret_access_key='sss',
        config=Config(signature_version='s3v4'),
        region_name='region1')
# Note: connect_s3 is the legacy boto 2 API (boto.connect_s3); boto3 has no
# such function, so this attempt raises an AttributeError.
conn = boto3.connect_s3(
    aws_access_key_id='aaa',
    aws_secret_access_key='sss',
    host='10.43.235.193',
    port=80,
    is_secure=False,
)
from boto3.session import Session
session = Session(
    aws_access_key_id='aaa',
    aws_secret_access_key='sss',
    region_name='region1'
)

s3 = session.resource('s3')
client = session.client('s3', endpoint_url='http://10.43.235.193:80') # s3-region1.example.com
s3_client = boto3.client('s3',
   endpoint_url=s3_endpoint,
   aws_access_key_id='aaa',
   aws_secret_access_key='sss',
   region_name='region1')

The Python script runs in a container in the same pod as the S3 container, so the IP is reachable from one container to the other. How should I solve this problem?

My finding is very weird. Reporting the error as InvalidBucketName is super misleading, and I found many threads about it on the boto3 GitHub. But as it turns out, most of those users are on AWS rather than an on-prem private S3 cloud, so that did not help much.

For me, using a raw IP address (e.g. 10.50.32.5) as the S3 endpoint when creating s3_client does not work. That is, an endpoint set like this:

s3_client = boto3.client('s3',
   endpoint_url='http://10.50.32.5:80',
   aws_access_key_id='AAA',
   aws_secret_access_key='SSS',
   region_name='region1')

is failing.

How did I fix this?

I added a DNS entry to /etc/hosts, i.e. a mapping of the IP to the S3 endpoint hostname, like this:

10.50.32.5   s3-region1.example.com
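Before pointing boto3 at the new hostname, it can help to confirm that the container actually resolves it. A minimal stdlib check (the hostname is the one mapped above):

```python
import socket

def resolves(hostname):
    """Return the IPv4 address a hostname resolves to, or None if it doesn't resolve."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# With the /etc/hosts entry in place, this should return '10.50.32.5':
# resolves("s3-region1.example.com")
```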

And then created an S3 client using boto3 like this:

s3_endpoint = 'http://s3-region1.example.com:80'  # the hostname mapped in /etc/hosts above

s3_client = boto3.client('s3',
   endpoint_url=s3_endpoint,
   aws_access_key_id='AAA',
   aws_secret_access_key='BBB',
   region_name='region1')

And it worked.
