EKS Node Managed vs. EKS Fargate
Introduction
Are you unsure which option is right for running Kubernetes on AWS? In this blog post I will show you a few differences between EKS Node Managed and EKS Fargate to help you decide.
Containers are not a new concept in the IT world: they have been widely used both in the public cloud and on-premises, and most public cloud providers offer different approaches to running them. Let’s talk about EKS on AWS. You may ask yourself: what does EKS stand for?
EKS stands for Elastic Kubernetes Service. An EKS cluster is a container orchestration platform that consists of a set of virtual machines called worker nodes and is designed to manage the lifecycle of containerized applications. The EKS control plane manages the nodes and the pods in the cluster. A “pod” is a group of one or more application containers. The platform also provides availability, scaling, and reliability for the pods.
AWS supports two EKS models:
EKS Fargate: Container as a Service (CaaS), also called “serverless for containers”.
EKS Node Managed: Infrastructure as a Service (IaaS).
EKS Fargate. This model is designed to let developers concentrate only on their workload, pod configurations, and application logic, without worrying about infrastructure management, availability, fault tolerance, scaling, or patching of host machines. Both the control plane and the worker nodes are managed by AWS.
EKS Node Managed. This model gives developers the freedom to manage not only the workload but also the worker nodes, which consist of a group of virtual machines. The control plane, however, is always managed by AWS.
The following drawing shows a high-level difference between EKS Fargate and Node Managed.
Let me show you a few differences between them:
1. #SECURITY
Nowadays, security is a fundamental component. The importance of data protection grows with the amount of data, so it is important to have a sound approach in place to protect data and systems from evolving threats. A good way to start is by understanding the shared responsibility model: what AWS’s and the customer’s responsibilities are, and who is responsible for securing the pods, the network, incident response, and compliance. Both models meet the relevant security and compliance standards, and the EKS control plane is backed by a 99.95% SLA.
2. #SCALABILITY
Your resources increase or decrease according to the system’s workload demands. To manage the workload well, it’s good to have the right scaling strategy, allowing systems to scale without interrupting their operation and ensuring that everything works as expected.
- In EKS Fargate, AWS is responsible for managing the scaling of the worker nodes in the cluster, so you don’t need to worry about scalability.
- In Node Managed, the customer is responsible for managing the scalability of the worker nodes.
AWS provides a service that helps you manage different scaling approaches, called an Auto Scaling Group (ASG). Broadly, there are two approaches: vertical scaling (scale-up) and horizontal scaling (scale-out). Both aim to deliver the performance and capacity that the workloads require, but they work differently. You may ask: which one is the better solution? Let me briefly explain how each handles resources based on the workload.
Scale-out: add or remove nodes in the cluster; a group of virtual machines can be deployed as independent pieces, spread across availability zones.
Scale-up: add or remove capacity on a single node; you can add more CPU, RAM, or storage to it. This is not highly recommended on its own, as you have no redundancy or high availability: the machine resides in a single availability zone.
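To make the difference concrete, here is a minimal Python sketch (illustrative only; the node sizes are made-up numbers, not tied to any specific EC2 instance type) comparing how total capacity grows under each approach:

```python
# Illustrative comparison of scale-out vs. scale-up.
# All numbers are hypothetical examples.

def scale_out(node_vcpu: int, node_count: int) -> int:
    """Horizontal scaling: total capacity from many identical nodes."""
    return node_vcpu * node_count

def scale_up(node_vcpu: int, factor: int) -> int:
    """Vertical scaling: grow a single node's capacity."""
    return node_vcpu * factor

# Both strategies reach 16 vCPU of total capacity...
horizontal = scale_out(node_vcpu=2, node_count=8)   # 8 small nodes
vertical = scale_up(node_vcpu=2, factor=8)          # 1 big node

# ...but scale-out spreads the workload over 8 machines (redundancy),
# while scale-up leaves a single point of failure in one availability zone.
print(horizontal, vertical)
```

The capacity math is identical; the difference is entirely in fault tolerance, which is why scale-out is usually preferred for worker nodes.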
3. #NETWORK
Running an EKS cluster on AWS requires a pre-existing networking configuration, so the first step is to set up the virtual private cloud (VPC). Keep in mind what workload you may have before deciding the Classless Inter-Domain Routing (CIDR) block of the VPC. As your cluster grows and your applications become more complex, you may need other resources running on the same network, such as load balancers and databases. It’s important to have a scalable solution that fits your requirements, so that you don’t end up recreating the resources and network from scratch because your CIDR block was too small in the first place. For both Node Managed and Fargate, the control plane is part of an internal network managed by AWS, so you don’t have to worry about it. In the case of Fargate, AWS also manages the worker node network, so you’ll only need to account for the pod IPs.
Let me give you two examples of how many IP addresses you need available to run a simple application in EKS, using some combinations of subnet size and machine type:
- One cluster with 5 nodes and 40 pods running on each node.
- Two clusters with 30 nodes and 40 pods running on each node.
How many available IPs will we have for different subnet CIDR ranges? The illustration below shows three scenarios, each with the number of available IPs for a different CIDR range.
Now let’s imagine we want to use different worker node types: m5.large and m5.4xlarge. Can we leverage all available IPs in both scenarios? As you can see in the illustration below, there are limits on pod capacity for different instance types.
In example #2 above, you will need at least a /21 CIDR block per cluster to fit the requirements.
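The arithmetic above can be sketched in a few lines of Python. This is a simplified model assuming the AWS VPC CNI, where each pod consumes one VPC IP, AWS reserves five addresses per subnet, and a node’s pod capacity is bounded by its ENI limits; real clusters also need IPs for load balancers, databases, and other resources:

```python
import ipaddress

AWS_RESERVED_PER_SUBNET = 5  # AWS reserves 5 IPs in every VPC subnet

def usable_ips(cidr: str) -> int:
    """Usable addresses in a subnet after AWS's reservations."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

def max_pods(enis: int, ips_per_eni: int) -> int:
    """VPC CNI pod limit per node: ENIs * (IPs per ENI - 1) + 2."""
    return enis * (ips_per_eni - 1) + 2

# Example #2 from the text, per cluster: 30 nodes, 40 pods on each node.
# Each node needs 1 IP for itself plus 1 IP per pod.
ips_needed = 30 * (1 + 40)            # 1230 IPs per cluster

print(usable_ips("10.0.0.0/21"))      # 2043 -> a /21 fits one such cluster
print(usable_ips("10.0.0.0/24"))      # 251  -> far too small

# Per-instance-type limits (published ENI/IP figures for these types):
print(max_pods(enis=3, ips_per_eni=10))   # m5.large   -> 29 pods max
print(max_pods(enis=8, ips_per_eni=30))   # m5.4xlarge -> 234 pods max
```

This also shows why a 40-pods-per-node plan cannot run on m5.large workers at all: the CNI caps them at 29 pods each, regardless of how large the subnet is.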
4. #COSTS
Cost optimization is an important concern in any organization, and it requires a good understanding of costs in the short and long term. In AWS, you pay for compute capacity per second, depending on what you run in the cluster. Let me show you how much Fargate and Managed Node will cost over one, three, and six months. To illustrate this, I will walk you through two different cases.
Managed Node case: 15 nodes, each using 2 vCPU and 16 GB of memory (running 3 hours a day versus 24 hours a day).
Fargate case: 15 pods, each using 2 vCPU and 16 GB of memory (running 3 hours a day versus 24 hours a day).
Total vCPU charges = # of Pods x # vCPUs x price per CPU-second x CPU duration per day (seconds) x # of days
Total memory charges = # of Pods x memory in GB x price per GB x memory duration per day (seconds) x # of days
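Plugging the Fargate case into these formulas gives the small Python sketch below. The per-hour rates are assumed example figures (in the style of published us-east-1 Fargate pricing); always check the current AWS pricing page before relying on them:

```python
# Assumed example Fargate rates (us-east-1-style figures; verify current pricing).
PRICE_PER_VCPU_HOUR = 0.04048    # USD per vCPU-hour
PRICE_PER_GB_HOUR = 0.004445     # USD per GB-hour

def fargate_cost(pods: int, vcpus: float, mem_gb: float,
                 hours_per_day: float, days: int) -> float:
    """Apply the vCPU and memory formulas above, using per-second rates."""
    seconds_per_day = hours_per_day * 3600
    vcpu_charges = pods * vcpus * (PRICE_PER_VCPU_HOUR / 3600) * seconds_per_day * days
    memory_charges = pods * mem_gb * (PRICE_PER_GB_HOUR / 3600) * seconds_per_day * days
    return vcpu_charges + memory_charges

# Fargate case from the text: 15 pods, 2 vCPU and 16 GB memory each, over 30 days.
part_time = fargate_cost(pods=15, vcpus=2, mem_gb=16, hours_per_day=3, days=30)
full_time = fargate_cost(pods=15, vcpus=2, mem_gb=16, hours_per_day=24, days=30)

print(round(part_time, 2))   # ~205.31 USD at 3 hours/day
print(round(full_time, 2))   # 8x the part-time figure, since billing is linear
```

Because Fargate billing is linear in time, running 24 hours a day costs exactly eight times the 3-hours-a-day figure; scale the `days` argument for three- and six-month estimates.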
Conclusion
Customers may run complex container architectures in their own data centers and face challenges doing so. Taking advantage of the cloud gives you the flexibility to focus on what you need, and to quickly address scalability, reliability, and security.