
Deploying AWS global infrastructure with API Gateway, Lambda, Cognito, S3, DynamoDB

Let's say I need an API Gateway that is going to invoke Lambdas, and I want to build the best-performing globally distributed infrastructure. I will also use Cognito for authentication, and DynamoDB and S3 for user data and static frontend assets.

My app is located at myapp.com

First, the user gets the static frontend from the nearest location:

user ===> edge location at CloudFront <--- S3 at any region (with static front end)

After that, we need to communicate with API Gateway.

user ===> API Gateway ---> Lambda ---> S3 || Cognito || Dynamodb

API Gateway can be located in several regions, and even though it is distributed with CloudFront, each endpoint points to a Lambda located in a given region. Let's say I deploy an API at eu-west-1. If a request is sent from the USA, even though my API is on CloudFront, the Lambda it runs is still located at eu-west-1, so latency will be high anyway.

To avoid that, I need to deploy another API at us-east-1, together with all my Lambdas. That API will point to those Lambdas.
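Deploying the same function to every region amounts to repeating the same creation call with a different regional client. A minimal sketch, assuming hypothetical function/role names (the real call would be `boto3.client("lambda", region_name=...).create_function(**params)`):

```python
# Sketch: build per-region Lambda deployment parameters so an identical
# copy of the function can be created in each region. The function name,
# role ARN, and zip bytes are hypothetical placeholders.
REGIONS = ["eu-west-1", "us-east-1"]

def build_lambda_params(region, code_bytes):
    """Return kwargs for lambda_client.create_function in one region."""
    return {
        "FunctionName": "myapp-api-handler",  # hypothetical name
        "Runtime": "python3.12",
        "Role": "arn:aws:iam::123456789012:role/myapp-lambda-role",  # hypothetical
        "Handler": "app.handler",
        "Code": {"ZipFile": code_bytes},
        "Description": f"myapp API handler ({region})",
    }

# One parameter set per region; the deployment loop would call
# boto3.client("lambda", region_name=region).create_function(**params).
params_by_region = {r: build_lambda_params(r, b"<zip bytes>") for r in REGIONS}
```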

If I deploy one API in every region, I would need one endpoint for each of them, and the frontend would have to decide which one to request. But how could we know which one is the nearest location?

The ideal scenario is a single global endpoint at api.myapp.com, which routes to the nearest API Gateway, which in turn runs the Lambdas located in that same region. Can I configure that using Route 53 latency-based routing with multiple A records, one pointing to each API Gateway?
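The Route 53 side of that idea can be sketched as a change batch with one latency-routed alias A record per region, all sharing the name api.myapp.com. The regional domain names and hosted-zone IDs below are hypothetical placeholders for the values the API Gateway custom-domain setup returns:

```python
# Sketch: Route 53 change batch for latency-based routing to two regional
# API Gateway custom domains. DNS names and zone IDs are hypothetical.
def latency_record(region, target_domain, target_zone_id):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.myapp.com",
            "Type": "A",
            "SetIdentifier": f"api-{region}",  # must be unique per record
            "Region": region,                  # enables latency-based routing
            "AliasTarget": {
                "DNSName": target_domain,
                "HostedZoneId": target_zone_id,
                "EvaluateTargetHealth": True,  # needed for failover
            },
        },
    }

change_batch = {"Changes": [
    latency_record("eu-west-1",
                   "d-abc123.execute-api.eu-west-1.amazonaws.com", "Z_EU_ZONE"),
    latency_record("us-east-1",
                   "d-def456.execute-api.us-east-1.amazonaws.com", "Z_US_ZONE"),
]}
# Applied with: boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z_MYAPP_ZONE", ChangeBatch=change_batch)
```

Route 53 then answers each DNS query with the record whose region has the lowest measured latency to the resolver.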

If this is not the right way to do this, can you point me in the right direction?

AWS recently announced support for regional API endpoints, which you can use to achieve this.

Below is an AWS blog post which explains how to achieve this:

Building a Multi-region Serverless Application with Amazon API Gateway and AWS Lambda

Excerpt from the blog:

The default API endpoint type in API Gateway is the edge-optimized API endpoint, which enables clients to access an API through an Amazon CloudFront distribution. This typically improves connection time for geographically diverse clients. By default, a custom domain name is globally unique and the edge-optimized API endpoint would invoke a Lambda function in a single region in the case of Lambda integration. You can't use this type of endpoint with a Route 53 active-active setup and fail-over.

The new regional API endpoint in API Gateway moves the API endpoint into the region and the custom domain name is unique per region. This makes it possible to run a full copy of an API in each region and then use Route 53 to use an active-active setup and failover.
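Creating such a regional custom domain boils down to one API call per region. A minimal sketch of the parameters, assuming a hypothetical ACM certificate ARN (a regional domain's certificate must live in the same region):

```python
# Sketch: parameters for creating a REGIONAL API Gateway custom domain
# in one region, to be repeated per region. The certificate ARN is a
# hypothetical placeholder.
def regional_domain_params(cert_arn):
    return {
        "domainName": "api.myapp.com",
        "regionalCertificateArn": cert_arn,
        "endpointConfiguration": {"types": ["REGIONAL"]},
    }

params = regional_domain_params(
    "arn:aws:acm:us-east-1:123456789012:certificate/hypothetical-id")
# Real call: boto3.client("apigateway", region_name="us-east-1")
#                 .create_domain_name(**params)
# The response's regionalDomainName / regionalHostedZoneId are what the
# Route 53 latency records point at.
```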

Unfortunately, this is not currently possible. The primary blocker here is CloudFront. MikeD@AWS provides the info on their forums:

When you create a custom domain name it creates an associated CloudFront distribution for the domain name and CloudFront enforces global uniqueness on the domain name.

If a CloudFront distribution with the domain name already exists, then the CreateCloudFrontDistribution will fail and API Gateway will return an error without saving the domain name or allowing you to define its associated API(s).

Thus, there is currently (Jun 29, 2016) no way to get API Gateway in multiple regions to handle the same domain name.

AWS has provided no update since confirming the existence of an open feature request on July 4, 2016. See the AWS Forum thread for updates.

Check out Lambda@Edge

Q: What is Lambda@Edge? Lambda@Edge allows you to run code across AWS locations globally without provisioning or managing servers, responding to end users at the lowest network latency. You just upload your Node.js code to AWS Lambda and configure your function to be triggered in response to Amazon CloudFront requests (ie, when a viewer request lands, when a request is forwarded to or received back from the origin, and right before responding back to the end user). The code is then ready to execute across AWS locations globally when a request for content is received, and scales with the volume of CloudFront requests globally. Learn more in our documentation.

Use case: minimizing latency for globally distributed users

Q: When should I use Lambda@Edge? Lambda@Edge is optimized for latency sensitive use cases where your end viewers are distributed globally. Ideally, all the information you need to make a decision is available at the CloudFront edge, within the function and the request. This means that use cases where you are looking to make decisions on how to serve content based on user characteristics (eg, location, client device, etc) can now be executed and served right from the edge in Node.js-6.10 without having to be routed back to a centralized server.
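As an illustration of serving location-based decisions from the edge, here is a minimal handler sketch for a CloudFront origin-request trigger (the stage where CloudFront's added CloudFront-Viewer-Country header is available). The FAQ above mentions Node.js; Lambda@Edge has since added Python runtimes, which this sketch assumes, along with a hypothetical redirect target:

```python
# Sketch of a Lambda@Edge origin-request handler: redirect German viewers
# from the edge without an origin round-trip. The de.myapp.com target is
# a hypothetical placeholder.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    # CloudFront lower-cases header keys and wraps values in a list.
    country = headers.get("cloudfront-viewer-country", [{}])[0].get("value")
    if country == "DE":
        # Return a response directly; CloudFront never contacts the origin.
        return {
            "status": "302",
            "statusDescription": "Found",
            "headers": {"location": [{"key": "Location",
                                      "value": "https://de.myapp.com/"}]},
        }
    return request  # pass the request through to the origin unchanged
```

Returning the request object forwards it to the origin; returning a response object short-circuits the request at the edge location.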
