
AWS disable network access between tenants

I am fairly new to AWS, and I'm trying to build an application that allows customers to spin up machines for setting up database clusters.

Users are free to SSH into their machines; however, there should be no connectivity between m1 and m2, where m1 is the cluster of machines tenant t1 owns and m2 is the cluster of machines tenant t2 owns.

I did figure out that security groups are the answer to this; however, their quota is limited, which made me wonder whether my approach is even right. Is there an alternative?
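
For reference, here is a minimal boto3 sketch of the per-tenant security-group approach I had in mind (the VPC ID and tenant name are placeholders). The self-referencing ingress rule only allows traffic from members of the same group, which is why each tenant needs its own group and why the per-VPC quota becomes a concern:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One security group per tenant; placeholder VPC ID and group name.
resp = ec2.create_security_group(
    GroupName="tenant-t1-cluster",
    Description="Intra-cluster traffic for tenant t1 only",
    VpcId="vpc-0abc1234",
)
sg_id = resp["GroupId"]

# Self-referencing rule: allow all traffic, but only from other
# members of this same security group.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "-1",
        "UserIdGroupPairs": [{"GroupId": sg_id}],
    }],
)
```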

Depending on what you're trying to do, you probably want to separate your clients by giving each one its own AWS account (using AWS Organizations), or at the very least by creating a separate VPC for each client.
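
If you go the separate-account route, a minimal boto3 sketch of creating a member account per customer (the email and account name are placeholders, and Organizations must already be enabled in your management account):

```python
import boto3

# Assumes this runs with credentials from the Organizations
# management account.
org = boto3.client("organizations")

# One member account per customer; email and name are placeholders.
resp = org.create_account(
    Email="tenant-t1-admin@example.com",
    AccountName="tenant-t1",
)

# Account creation is asynchronous; poll DescribeCreateAccountStatus
# if you need to wait for it to finish.
print(resp["CreateAccountStatus"]["State"])  # e.g. IN_PROGRESS
```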

If the database clusters you intend to build are supported by RDS, this might be a better approach to managing DB instances at scale. You can then create IAM roles specific to customers and their clusters, and they can remotely change the configuration of their instances without needing to SSH.
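
A rough sketch of that pattern with boto3 (all identifiers, the instance size, the account ID and the set of allowed actions are placeholders; resource-level permission support also varies by RDS action, so check the actions you actually need):

```python
import json
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# A managed PostgreSQL instance for tenant t1 (placeholder values).
rds.create_db_instance(
    DBInstanceIdentifier="tenant-t1-db",
    DBInstanceClass="db.t3.medium",
    Engine="postgres",
    MasterUsername="t1admin",
    MasterUserPassword="change-me-please",
    AllocatedStorage=20,
    Tags=[{"Key": "tenant", "Value": "t1"}],
)

# An IAM policy that lets tenant t1 manage only its own instance,
# so configuration changes go through the RDS API instead of SSH.
iam = boto3.client("iam")
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["rds:ModifyDBInstance", "rds:RebootDBInstance"],
        "Resource": "arn:aws:rds:us-east-1:123456789012:db:tenant-t1-db",
    }],
}
iam.create_policy(
    PolicyName="tenant-t1-rds-access",
    PolicyDocument=json.dumps(policy_doc),
)
```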

Another good approach would be to have a VPC for each client and either create a VPN tunnel back to their on-premises network (where they'll SSH from) or set up a public jump box and whitelist source IPs. This creates a more secure boundary for SSH, and arguably for other areas as well. You'll likely need to request an increase above the default limit of 5 VPCs per region.
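
A minimal boto3 sketch of the jump-box variant (the CIDR block and the client's source IP are placeholders; the VPN variant would instead involve a customer gateway and a site-to-site VPN connection):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A dedicated VPC for one client; pick a non-overlapping CIDR per client.
vpc = ec2.create_vpc(CidrBlock="10.2.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Security group for the jump box: SSH only from the client's
# known egress IP (placeholder address).
sg = ec2.create_security_group(
    GroupName="tenant-t2-jumpbox",
    Description="SSH access to tenant t2 jump box",
    VpcId=vpc_id,
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32"}],
    }],
)
```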

I'd also strongly advise engaging a cloud network/security specialist before implementing any option; there are bound to be nuances here and there.
