AWS Fargate Pricing Explained

by Vantage Team


AWS Fargate is a serverless compute engine for running containerized workloads—without managing the underlying servers or infrastructure. You can use either Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS) as the orchestration engine for containers on Fargate. While not having to manage the underlying infrastructure simplifies your life, that convenience comes at a premium. We’ll discuss the pricing impact of using Fargate relative to a self-managed EC2 cluster.

How Does AWS Fargate Pricing Work?

Fargate works on a pay-as-you-go pricing model, where you pay only for the resources that you consume. Fargate charges customers on two different compute dimensions: vCPU and GB of RAM. In addition, Fargate pricing is based on operating system and CPU architecture as well as storage resources consumed for a Task or Pod. When you run containers on Fargate, you configure the amount of vCPU and GB of RAM you’d like available to your application in a Task Definition, which is the configuration file AWS uses to run services on Fargate.
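As an illustration of where those knobs live, here is a minimal boto3 sketch of registering a Fargate Task Definition; the family name and image are hypothetical, and the cpu/memory strings must be one of the size combinations Fargate supports.

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical Task Definition: 2 vCPU (2048 CPU units) and 4 GB of RAM.
# Fargate only accepts specific cpu/memory combinations, so adjust as needed.
response = ecs.register_task_definition(
    family="example-web-app",             # hypothetical family name
    requiresCompatibilities=["FARGATE"],   # run on Fargate rather than EC2
    networkMode="awsvpc",                  # required for Fargate Tasks
    cpu="2048",                            # 2 vCPU, billed per vCPU-hour
    memory="4096",                         # 4 GB, billed per GB-hour
    containerDefinitions=[
        {
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",  # placeholder image
            "essential": True,
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```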

When you run a service in Fargate, you can set the number of Tasks that run—so pricing is simple arithmetic: the number of Tasks you run multiplied by the hourly cost of the vCPU and GB of RAM assigned to each Task.

For simplicity’s sake, we’ll use us-east-1 for the pricing presented here. Pricing is calculated per second, and there is a one-minute minimum. The corresponding Fargate rates by operating system/architecture for each dimension follow.

Architecture Cost per vCPU per hour Cost per GB per hour
Linux/x86 $0.04048 $0.004445
Linux/ARM $0.03238 $0.00356
Windows/x86 $0.09148 $0.01005

Note that Windows/x86 configurations also carry an OS license fee of $0.046 per vCPU per hour.

Breakdown of Fargate Pricing by Architecture

In addition, you get 20 GB of ephemeral storage included for all Tasks and Pods. You pay $0.000111 per storage GB per hour for additional storage over the 20 GB limit.
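To make the arithmetic concrete, here is a small Python sketch of the on-demand cost model described above, using the us-east-1 Linux/x86 rates; the function name and structure are our own.

```python
# us-east-1 on-demand Fargate rates for Linux/x86
VCPU_RATE = 0.04048       # per vCPU per hour
GB_RATE = 0.004445        # per GB of RAM per hour
STORAGE_RATE = 0.000111   # per GB of ephemeral storage per hour (beyond 20 GB)
INCLUDED_STORAGE_GB = 20  # ephemeral storage included with every Task

def task_hourly_cost(vcpu: float, memory_gb: float, storage_gb: float = 20) -> float:
    """On-demand hourly cost of a single Fargate Task (Linux/x86, us-east-1)."""
    extra_storage = max(0, storage_gb - INCLUDED_STORAGE_GB)
    return vcpu * VCPU_RATE + memory_gb * GB_RATE + extra_storage * STORAGE_RATE
```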

AWS Fargate Pricing Example 1

You have a Task configured with 2 vCPUs and 4 GB of RAM on Linux/x86 (using 20 GB of ephemeral storage). The cost of running one Task per hour would be (2 x $0.04048) + (4 x $0.004445) or $0.09874 per Task per hour.

You have a Fargate service that runs 5 of these Tasks, so your cost for the Fargate service would be 5 Tasks x $0.09874 per Task per hour, or $0.4937 per hour.
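Using the sketch above, the same numbers fall out directly:

```python
per_task = task_hourly_cost(vcpu=2, memory_gb=4)  # 0.09874
service = 5 * per_task                            # 0.4937
print(f"${per_task:.5f} per Task per hour, ${service:.4f} per hour for the service")
```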

AWS Fargate Pricing Example 2

You have a service that runs 10 ECS Tasks on Windows. The service runs for 2 hours every day for 1 month. Each Task consumes 4 vCPU and 8 GB of memory. This configuration’s total monthly cost is as follows.

For each calculation, we’ll use 304.17 Task runs per month: 10 Tasks per day x (730 hours per month ÷ 24 hours per day ≈ 30.42 days per month).

Variable Calculation Total
vCPU hours 304.17 tasks x 4 vCPU x 2 hours x $0.09148 per vCPU per hour $222.60
OS license (Windows) 304.17 tasks x 4 vCPU x 2 hours x $0.046 per vCPU per hour $111.93
GB hours 304.17 tasks x 8.00 GB x 2 hours x $0.01005 per GB per hour $48.91
Total Cost $383.44 per month

Fargate Windows Pricing Example

Note that the same setup on Linux is $120.13 on x86 and $96.12 on ARM.
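As a check on this example, here is a short sketch that reproduces the monthly totals for all three architectures, using the on-demand rates from the table above.

```python
# On-demand us-east-1 rates: (per vCPU-hour, per GB-hour, OS license per vCPU-hour)
RATES = {
    "Windows/x86": (0.09148, 0.01005, 0.046),
    "Linux/x86":   (0.04048, 0.004445, 0.0),
    "Linux/ARM":   (0.03238, 0.00356, 0.0),
}

task_runs_per_month = 10 * (730 / 24)  # ~304.17 Task runs per month
vcpu, memory_gb, hours_per_run = 4, 8, 2

for arch, (vcpu_rate, gb_rate, license_rate) in RATES.items():
    vcpu_hours = task_runs_per_month * vcpu * hours_per_run
    gb_hours = task_runs_per_month * memory_gb * hours_per_run
    total = vcpu_hours * (vcpu_rate + license_rate) + gb_hours * gb_rate
    print(f"{arch}: ${total:,.2f} per month")
# Windows/x86: $383.44, Linux/x86: $120.13, Linux/ARM: $96.12
```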

Fargate Spot Pricing

Fargate Spot is another pricing option that can save you up to 70% off regular Fargate pricing. Spot pricing is determined by market supply and demand and changes over time. The pricing at the time of this post (January 2024) for us-east-1 is listed below:

Unit Price
per vCPU per hour $0.01246287
per GB per hour $0.00136851

Spot Pricing for Fargate
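A quick check of the Spot discount against the Linux/x86 on-demand rates quoted earlier, at the January 2024 prices shown:

```python
on_demand_vcpu, spot_vcpu = 0.04048, 0.01246287
on_demand_gb, spot_gb = 0.004445, 0.00136851

print(f"vCPU savings: {1 - spot_vcpu / on_demand_vcpu:.0%}")  # ~69%
print(f"GB savings:   {1 - spot_gb / on_demand_gb:.0%}")      # ~69%
```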

Like EC2 Spot instances, Fargate Spot is not suitable for every workload, and the savings come with some caveats.

  • At times, Spot capacity may be unavailable. ECS services will continue to try restarting Tasks until the capacity becomes available again.
  • There is a two-minute warning before a Spot Task is terminated. You can, however, configure the Task to exit gracefully with the stopTimeout container parameter (see the sketch after this list).

As a best practice, use Spot for fault-tolerant applications that can handle a two-minute shutdown warning.
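As a rough illustration of that stopTimeout setting, here is a fragment of a containerDefinitions entry from an ECS Task Definition; the name and image are placeholders, and our understanding is that Fargate caps stopTimeout at 120 seconds.

```python
# Fragment of a containerDefinitions entry in an ECS Task Definition.
# stopTimeout is the time the container gets after SIGTERM to drain work
# before ECS sends SIGKILL; on Fargate the maximum is 120 seconds.
container_definition = {
    "name": "spot-worker",      # placeholder container name
    "image": "example/worker",  # placeholder image
    "essential": True,
    "stopTimeout": 110,         # seconds of graceful-shutdown time
}
```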

Fargate vs. Self-Managed EC2 Pricing

In ECS and EKS, you can manage your own underlying compute resources without using Fargate. With this in mind, you may ask, “What is the premium I’m paying for Fargate relative to managing EC2 instances myself?” To help answer this question, we reviewed a few popular EC2 instance types and priced out equivalent on-demand Fargate rates with the same amounts of vCPU and RAM. Again, these examples are based on pricing in us-east-1.

c5.xlarge

c5.xlarge is a compute-intensive EC2 instance with 4 vCPUs and 8 GB of RAM. Its hourly on-demand price is $0.17. A Fargate configuration with 4 vCPUs and 8GB of RAM is ~$0.19748 per hour. Fargate charges a ~16% premium relative to c5.xlarge instances for this configuration.

m5.xlarge

m5.xlarge is a general purpose EC2 instance with 4 vCPUs and 16 GB of RAM. Its hourly on-demand price is $0.192. A Fargate configuration with 4 vCPUs and 16GB of RAM is ~$0.23304 per hour. Fargate charges a ~21% premium relative to m5.xlarge instances for this configuration.

t3.xlarge

t3.xlarge is a low-cost burstable EC2 instance with 4 vCPUs and 16 GB of RAM. Its hourly on-demand price is $0.1664. A Fargate configuration with 4 vCPUs and 16 GB of RAM is ~$0.23304 per hour. Fargate charges a ~40% premium relative to t3.xlarge instances for this configuration.
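The premiums above are simply the ratio of the equivalent Fargate configuration to the EC2 on-demand rate. A quick sketch, using the us-east-1 prices quoted in this section:

```python
FARGATE_VCPU, FARGATE_GB = 0.04048, 0.004445  # Linux/x86 on-demand, us-east-1

# (vCPU, GB of RAM, EC2 on-demand hourly price in us-east-1)
INSTANCES = {
    "c5.xlarge": (4, 8, 0.17),
    "m5.xlarge": (4, 16, 0.192),
    "t3.xlarge": (4, 16, 0.1664),
}

for name, (vcpu, gb, ec2_price) in INSTANCES.items():
    fargate_price = vcpu * FARGATE_VCPU + gb * FARGATE_GB
    premium = fargate_price / ec2_price - 1
    print(f"{name}: EC2 ${ec2_price}/hr vs Fargate ${fargate_price:.5f}/hr ({premium:.0%} premium)")
# c5.xlarge ~16%, m5.xlarge ~21%, t3.xlarge ~40%
```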

Not All Fargate vCPUs Are Created Equal

The other thing to note is that when you run a container on Fargate, you don’t actually know what kind of underlying EC2 instance you’ll be placed on. The pool of underlying compute that makes up Fargate appears to be a mix of different EC2 instance types, which we can infer from spinning up many containers and observing different performance characteristics. So not only are you paying a premium for Fargate, but you may also be paying that premium for significantly worse performance, depending on your scheduling luck.

Conclusion

The operational time you save by using Fargate can sometimes be worth the premium you pay; however, we see transitioning from Fargate to EC2 as being not only a chance to potentially save money but also a chance to boost performance. Especially for more normalized compute patterns, managing your own cluster can be relatively easy.