Aloha,
TL;DR:
I am trying to create an S3 bucket in LocalStack using Terraform instead of the AWS CLI or awslocal, and I am running into errors. I am wondering whether this approach is even supported by LocalStack. I am not sure what I did wrong here, but I suspect I may need the AWS CLI to create S3 buckets. Does anyone have an idea why the bucket name is not forwarded?
Long Version:
I am using a docker-compose.yaml to define the LocalStack Docker container:
version: '3'
services:
  localstack:
    image: localstack/localstack:0.10.5
    ports:
      - "4572:4572"
      - "4584:4584"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - DEFAULT_REGION=eu-central-1
      - SERVICES=s3,secretsmanager
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - DOCKER_HOST=${LOCALSTACK_DOCKER_HOST-unix:///var/run/docker.sock}
      - TF_VAR_localstack_host=localhost
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
I use this Terraform main.tf to define what I want to create inside the Docker container:
variable "localstack_host" {
  default = "localhost"
}

provider "aws" {
  version = "~> 2.39.0"
  alias   = "local"
  region  = "eu-central-1"

  access_key = "This is not an actual access key."
  secret_key = "This is not an actual secret key."

  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    secretsmanager = "http://${var.localstack_host}:4584"
    s3             = "http://${var.localstack_host}:4572"
  }
}

resource "aws_s3_bucket" "s3_encryption_test_bucket" {
  bucket   = "s3-encryption-test-bucket"
  provider = "aws.local"
}
After starting the Docker container, I apply the Terraform configuration against the locally running LocalStack instance:
terraform plan
terraform apply
The error I get from Terraform is:
aws_s3_bucket.s3_encryption_test_bucket: Creating...
acceleration_status: "" => "<computed>"
acl: "" => "private"
arn: "" => "<computed>"
bucket: "" => "s3-encryption-test-bucket"
bucket_domain_name: "" => "<computed>"
bucket_regional_domain_name: "" => "<computed>"
force_destroy: "" => "false"
hosted_zone_id: "" => "<computed>"
region: "" => "<computed>"
request_payer: "" => "<computed>"
versioning.#: "" => "<computed>"
website_domain: "" => "<computed>"
website_endpoint: "" => "<computed>"
aws_s3_bucket.s3_encryption_test_bucket: Still creating... (10s elapsed)
aws_s3_bucket.s3_encryption_test_bucket: Still creating... (20s elapsed)
.....
aws_s3_bucket.s3_encryption_test_bucket: Still creating... (2m10s elapsed)
aws_s3_bucket.s3_encryption_test_bucket: Still creating... (2m20s elapsed)
Error: Error applying plan:
1 error(s) occurred:
* aws_s3_bucket.s3_encryption_test_bucket: 1 error(s) occurred:
* aws_s3_bucket.s3_encryption_test_bucket: error getting S3 Bucket CORS configuration: timeout while waiting for state to become 'success' (timeout: 2m0s)
I also looked into the container's logs and found this error message:
2019-12-12T13:24:45:ERROR:localstack.services.generic_proxy: Error forwarding request: Parameter validation failed:
Invalid bucket name "": Bucket name must match the regex "^[a-zA-Z0-9.\-_]{1,255}$" Traceback (most recent call last):
File "/opt/code/localstack/localstack/services/generic_proxy.py", line 240, in forward
path=path, data=data, headers=forward_headers).......
I had the same problem. The solution for me was adding s3_force_path_style = true to the provider "aws" section:

provider "aws" {
  ...
  s3_force_path_style = true
  ...
}

By default, the AWS provider uses virtual-hosted-style requests, where the bucket name is part of the hostname; with a localhost endpoint that hostname cannot be resolved and the bucket name never reaches LocalStack, which matches the Invalid bucket name "" error in the container logs. Path-style requests put the bucket name in the URL path instead.
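Merging this fix into the provider block from the question gives a configuration along these lines (a sketch based on the question's setup; version, region, and ports are taken from the original main.tf):

```hcl
provider "aws" {
  version = "~> 2.39.0"
  alias   = "local"
  region  = "eu-central-1"

  access_key = "This is not an actual access key."
  secret_key = "This is not an actual secret key."

  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  # Force path-style addressing (http://host:4572/bucket-name) instead of
  # virtual-hosted style (http://bucket-name.host:4572/), which does not
  # resolve against a localhost endpoint.
  s3_force_path_style = true

  endpoints {
    secretsmanager = "http://${var.localstack_host}:4584"
    s3             = "http://${var.localstack_host}:4572"
  }
}
```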
I encountered the same issue. Simply defining the ACL in the resource block solved it for me:

resource "aws_s3_bucket" "s3_encryption_test_bucket" {
  bucket   = "s3-encryption-test-bucket"
  provider = "aws.local"
  acl      = "private"
}
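Either way, once terraform apply succeeds, you can verify the bucket without awslocal by pointing the plain AWS CLI at the LocalStack S3 endpoint (a sketch assuming the port mapping 4572 from the docker-compose file; LocalStack does not validate the dummy credentials):

```shell
# Dummy credentials; LocalStack accepts any values.
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test

# List buckets on the local S3 endpoint; the new bucket should appear.
aws --endpoint-url=http://localhost:4572 s3 ls
```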