
Fargate tasks for nginx?

I am trying to estimate the cost of Fargate.

Goal:

My goal is to set up a reverse proxy using nginx on Fargate or EKS.
I am trying to evaluate the cost difference, but I am having a hard time doing that because I don't really understand how to use the pricing tool. ( https://calculator.aws/#/addService/Fargate )

Questions:

Here are the questions / mental blocks that I am stuck on:

  1. How do I determine how many tasks are running per minute (or day)? For context: currently we have a load balancer that routes traffic to EC2 instances with nginx on them, and this NLB sees roughly 200,000 active flows per day. Does each individual request count as a task or a flow?
  2. If this nginx server will hypothetically need to run 24/7 (but mostly during business hours), does Fargate make sense to use from a cost perspective?
  3. I am new to allocating resources -- how does one go about determining how much vCPU, memory, and ephemeral storage to allocate? I did some googling, of course, and found a Stack Overflow post ( https://stackoverflow.com/questions/63077100/how-much-memory-and-cpu-nginx-and-nodejs-in-each-container-needs#:~:text=You%20should%20not%20exceed%20128MB,should%20be%20more%20than%20enough. ) and a Quora post https://www.quora.com/How-much-disk-space-and-how-much-RAM-does-nginx-ncache-need-when-running-as-a-reverse-HTTP-proxy that both suggest you would need a minimum of 128 MB of RAM, plus 100-300 MB per worker. But then how do you know how many workers you need? I am just trying to understand, strategy-wise, how one would evaluate how much memory is needed for something specific like nginx.

My goal is to set up a reverse proxy using nginx on Fargate or EKS.

Please note that Fargate is a compute platform that serves as a deployment target (an alternative to EC2) for both AWS ECS and AWS EKS. You never use Fargate directly; you always use it through either ECS or EKS.

How do I determine how many tasks are running per minute (or day)? For context: currently we have a load balancer that routes traffic to EC2 instances with nginx on them, and this NLB sees roughly 200,000 active flows per day. Does each individual request count as a task or a flow?

No. In ECS a task is a running container (or group of containers). Just take the number of EC2 instances you currently have running; that is roughly the number of ECS Fargate tasks you would need.
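For example, if three EC2 nginx instances sit behind the NLB today, an ECS service with a desired count of 3 is the rough equivalent. Here is a minimal boto3 sketch of that mapping; the cluster name, task definition, subnets, and security group are placeholders you would replace with your own:

```python
import boto3

ecs = boto3.client("ecs")

# Run the same number of nginx tasks as you currently run EC2 instances.
# All names/IDs below are placeholders.
ecs.create_service(
    cluster="my-cluster",
    serviceName="nginx-proxy",
    taskDefinition="nginx-proxy:1",   # an existing Fargate task definition
    desiredCount=3,                   # e.g. matches three EC2 nginx instances behind the NLB
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
            "securityGroups": ["sg-cccc3333"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```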

If this nginx server will hypothetically need to run 24/7 (but mostly during business hours), does Fargate make sense to use from a cost perspective?

ECS services can run 24/7, just like your EC2 instances, and they can be auto-scaled, just like your EC2 instances. You'll have to do the cost analysis yourself to see if it makes sense in your use case. Moving from EC2 to ECS/Fargate is usually not done for cost reasons, but to eliminate the server maintenance you have to do. If you factor in the cost of the time you spend managing the EC2 servers, it may make sense even from a purely cost perspective.
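To get a feel for the arithmetic behind the pricing calculator, here is a back-of-the-envelope sketch. The per-hour rates are assumed approximate us-east-1 Linux/x86 on-demand prices; plug in the current rates for your region before drawing any conclusions:

```python
# Rough Fargate on-demand cost estimate. The rates are assumptions -- verify
# them against https://calculator.aws/#/addService/Fargate for your region.
VCPU_PER_HOUR = 0.04048    # USD per vCPU-hour (assumed us-east-1 rate)
GB_PER_HOUR = 0.004445     # USD per GB of memory per hour (assumed us-east-1 rate)

def monthly_cost(tasks: int, vcpu: float, memory_gb: float, hours: float = 730) -> float:
    """Cost of running `tasks` identical Fargate tasks for `hours` hours a month."""
    per_task_hour = vcpu * VCPU_PER_HOUR + memory_gb * GB_PER_HOUR
    return tasks * per_task_hour * hours

# Example: 3 tasks at 1 vCPU / 2 GB running 24/7 (~730 hours/month) -> roughly $108
print(f"${monthly_cost(3, 1, 2):.2f} per month")
```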

I am new to allocating resources -- how does one go about determining how much vCPU, memory, and ephemeral storage to allocate?

If you already have it running on EC2, look at how much CPU/RAM/Storage you are using there, and translate that to the Fargate settings.

Otherwise you'll need to look into spinning it up in a test environment and running some performance tests to determine what you need.
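One way to run that measurement against the existing fleet is to pull CloudWatch CPU statistics for an EC2 nginx instance over a couple of weeks (memory metrics require the CloudWatch agent, so only CPU is shown here). A minimal boto3 sketch, with a placeholder instance ID:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

# Hourly average/peak CPU for one of the existing nginx EC2 instances (placeholder ID).
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=end - timedelta(days=14),
    EndTime=end,
    Period=3600,
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"avg={point['Average']:.1f}%", f"max={point['Maximum']:.1f}%")
```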

If you aren't doing any caching in Nginx, then you most likely won't need much storage at all; just go with the default.

But then how do you know how many workers you need?

One worker per CPU (vCPU). The Nginx default is 1. So you could start by leaving the default setting alone and spinning up ECS Fargate services that use a single vCPU per task. Note that the minimum RAM for 1 vCPU on Fargate is 2 GB, which should be plenty of RAM for Nginx.
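If it helps to see that concretely, here is a minimal boto3 sketch that registers a 1 vCPU / 2 GB Fargate task definition for nginx. The family name and image are placeholders, and in practice you would usually also attach an executionRoleArn for log shipping or private image pulls:

```python
import boto3

ecs = boto3.client("ecs")

# Smallest sensible Fargate task for nginx: 1 vCPU with 2 GB, the minimum
# memory allowed at that CPU size. Names and image are placeholders.
ecs.register_task_definition(
    family="nginx-proxy",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",      # 1 vCPU, expressed in CPU units
    memory="2048",   # 2 GB
    containerDefinitions=[
        {
            "name": "nginx",
            "image": "nginx:latest",   # placeholder; point this at your own image/registry
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
)
```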
