Serverless Containers vs Kubernetes
Are we still in the age of Kubernetes?
Yes and no. Kubernetes is probably the best-of-breed container orchestration tool out there, but it comes with a huge operational overhead… so it might make more sense to spend your time developing your code and use a more serverless solution, such as AWS Fargate on ECS. It will make both your Developers and SREs happy.
What are Fargate and Kubernetes?
While Fargate and Kubernetes are both container orchestration tools, they operate differently. Fargate is a serverless compute engine for containers, meaning you don’t manage the underlying infrastructure. Kubernetes, on the other hand, requires you to manage the cluster’s nodes.
Fargate on ECS is a serverless way to run containers. Instead of managing EC2 instances yourself, AWS handles the underlying infrastructure. You just focus on your containers. Here’s a simplified workflow:
- Package your application: Build a Docker image with your application and its dependencies.
- Define task definition: In ECS, create a task definition specifying your Docker image, CPU/memory needs, and other configurations.
- Create a cluster: Set up an ECS cluster, choosing Fargate as the launch type.
- Run your task: Launch your task, and ECS will run your container on Fargate.
Key benefits:
- No server management: No need to provision, patch, or scale EC2 instances.
- Simplified container deployment: Focus on your application, not infrastructure.
- Cost-effective: Pay only for the resources your containers consume.
Keep in mind:
- Less control: Compared to EC2, you have less control over the environment.
- Pricing: Fargate pricing is per-second, based on vCPU and memory.
While AWS Fargate offers a lot of convenience, it does have some limitations in customization compared to running ECS on EC2.
Key differences to keep in mind:
- No Node Management: With Fargate, you don’t manage nodes or operating systems.
- Limited Customization: Fargate offers less customization compared to running Kubernetes on EC2 instances.
- Pricing: Fargate pricing is based on the vCPU and memory each task (or, on EKS, each pod) consumes.
Some of the more relevant Fargate limitations:
- Operating System: You can’t choose a custom OS. Fargate provides a managed container environment with a limited set of supported OS versions.
- Kernel Modifications: Tweaking kernel parameters or installing kernel modules isn’t possible.
- Custom Security Agents: Installing third-party security agents or monitoring tools directly on the underlying host is not supported.
- Hardware Access: You have limited access to the underlying hardware, making specialized use cases like GPU processing or high-performance networking more challenging.
- Networking: While you have control over network ports and security groups, advanced network configurations might be restricted.
When to consider EC2: If your workloads require deep OS-level customization, specialized hardware access, or very specific networking configurations, then running ECS on EC2 instances might be a better fit. This gives you more control, but you’ll also be responsible for managing the instances.
How do I deploy containers using Fargate?
In ECS, the equivalent of Kubernetes manifests (YAML files) is the task definition. Think of a task definition as a blueprint for your containerized application. It specifies:
- Container image: The Docker image to use.
- Resource allocation: CPU and memory requirements.
- Networking: Port mappings, security groups.
- Environment variables: Configuration values.
- Logging and monitoring: Log drivers, monitoring configurations.
- IAM roles: Permissions for your containers to access AWS services.
Key differences from Kubernetes manifests:
- Format: Task definitions are typically defined in JSON format within the ECS console or using the AWS CLI/SDKs.
- Scope: Task definitions are more focused on individual containers or small groups of containers working together, whereas Kubernetes manifests can define more complex deployments and relationships between components.
How it works: You create a task definition, and then you use it to launch tasks within an ECS cluster. Each task represents one or more containers running your application.
{
  "family": "my-fargate-app",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "my-web-app",
      "image": "nginx:latest",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp"
        }
      ],
      "essential": true,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "my-fargate-app-log-group",
          "awslogs-region": "your-aws-region",
          "awslogs-stream-prefix": "my-web-app"
        }
      }
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "256",
  "memory": "512"
}
Explanation:
- family: A name for your task definition.
- networkMode: awsvpc is required for Fargate, enabling your containers to have their own elastic network interfaces.
- containerDefinitions: An array defining your containers.
- name: A name for your container.
- image: The Docker image to use.
- portMappings: Exposes container ports to the host.
- essential: If true, the task will fail if this container fails.
- logConfiguration: Configures logging to AWS CloudWatch Logs.
- requiresCompatibilities: Specifies that this task definition is compatible with Fargate.
- cpu: CPU units to allocate (256 = 0.25 vCPU).
- memory: Memory in MB to allocate.
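One detail worth noting: Fargate only accepts certain cpu/memory pairings, so the "256" / "512" values above must be a supported combination. Here is an illustrative Python check; the table covers the classic task sizes only (larger sizes exist, so treat it as a sketch, not an exhaustive list):

```python
# Sketch: validate that a task definition's cpu (units) and memory (MB)
# form a supported Fargate combination. Table is a subset of the sizes
# documented by AWS; newer, larger sizes are omitted here.

VALID_FARGATE_COMBOS = {
    "256": [512, 1024, 2048],               # .25 vCPU
    "512": list(range(1024, 4097, 1024)),   # .5 vCPU, 1-4 GB
    "1024": list(range(2048, 8193, 1024)),  # 1 vCPU, 2-8 GB
    "2048": list(range(4096, 16385, 1024)), # 2 vCPU, 4-16 GB
    "4096": list(range(8192, 30721, 1024)), # 4 vCPU, 8-30 GB
}

def is_valid_fargate_size(cpu: str, memory: str) -> bool:
    """Return True if the cpu/memory pair is a supported Fargate size."""
    return int(memory) in VALID_FARGATE_COMBOS.get(cpu, [])

# The task definition above uses cpu="256", memory="512":
print(is_valid_fargate_size("256", "512"))   # supported pair
print(is_valid_fargate_size("256", "4096"))  # too much memory for .25 vCPU
```

If you pick an unsupported pair, ECS rejects the task definition at registration time, so a check like this is handy in CI pipelines that template task definitions.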
To use this:
- Save: Save this JSON as a file (e.g., task-definition.json).
- Create: Use the AWS CLI or ECS console to create a new task definition using this file.
- Launch: Launch a Fargate task using this task definition in your ECS cluster.
This code will run an NGINX web server on Fargate, accessible on port 80. Remember to configure your security groups and load balancer to allow traffic to your task.
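The Save → Create → Launch steps can also be scripted with boto3. The sketch below only builds the parameters and leaves the actual AWS calls commented out, since they require credentials; the cluster name and the subnet/security-group IDs are placeholders you would replace with your own:

```python
# Sketch of registering and launching the Fargate task with boto3.
# "demo-cluster", subnet and security-group IDs are hypothetical placeholders.
import json

# Step 1 (Save): the task definition JSON saved earlier, e.g.:
# with open("task-definition.json") as f:
#     task_definition = json.load(f)

# Step 3 (Launch): parameters for running the task on Fargate.
run_task_params = {
    "cluster": "demo-cluster",
    "launchType": "FARGATE",
    "taskDefinition": "my-fargate-app",
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-REPLACE_ME"],
            "securityGroups": ["sg-REPLACE_ME"],
            "assignPublicIp": "ENABLED",
        }
    },
}

# With AWS credentials configured, the actual calls would be:
# import boto3
# ecs = boto3.client("ecs")
# ecs.register_task_definition(**task_definition)  # Step 2 (Create)
# ecs.run_task(**run_task_params)                  # Step 3 (Launch)

print(json.dumps(run_task_params["networkConfiguration"], indent=2))
```

Note that `networkConfiguration` with an `awsvpcConfiguration` block is mandatory for Fargate tasks, because `awsvpc` networking gives each task its own elastic network interface.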
What about the “vendor lock-in?”
Finding a “common denominator”, as in “a service that runs exactly the same on any cloud”, is impossible. There are much smarter strategies, based on building your own “abstraction layer”, which allows you to run a pre-configured abstraction of the native cloud services and still use the best of each cloud in the “underlay”. I highly recommend the following 2 videos about “Platform Engineering”, which go deeper into “autonomy + standardization” and the “platform orchestration layer”:
- https://youtu.be/YvD2jH_Lz_A?si=C3aBxWPizqCvSj97
- https://www.youtube.com/live/VvMIyzlZaso?si=0jbqsLVfAUej4Kws
Converting an ECS task definition to a Kubernetes YAML manifest isn’t a straightforward one-to-one mapping, but it’s definitely doable, and there are tools that can help you. This way, you can create similar “reference architectures” for your Data Centers & Cloud Providers, and not worry about the lock-in:
- specctl: An AWS Labs tool that helps convert ECS task definitions and services to Kubernetes YAML and vice versa.
- Kompose: A tool that can convert Docker Compose files to Kubernetes manifests. While not directly for task definitions, it can be helpful for parts of the conversion.
Keep in mind:
- Testing: Always test the converted manifests thoroughly in a non-production environment.
- Manual adjustments: Some manual adjustments might be needed to fine-tune the Kubernetes configuration.
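To see why the conversion is doable but not one-to-one, here is a minimal, illustrative Python sketch of the mapping for a single container (function and variable names are ours, and real tools like specctl handle far more fields). One subtlety it highlights: ECS CPU units and Kubernetes millicores differ, since 1024 ECS units equal 1 vCPU, i.e. 1000m:

```python
# Illustrative ECS container definition -> Kubernetes container spec mapping.
# Covers only image, ports, and resources; logging, IAM, etc. need extra work.

def ecs_container_to_k8s(container: dict, cpu_units: str, memory_mb: str) -> dict:
    """Map one ECS container definition to a Kubernetes container spec dict."""
    return {
        "name": container["name"],
        "image": container["image"],
        "ports": [
            {"containerPort": p["containerPort"]}
            for p in container.get("portMappings", [])
        ],
        "resources": {
            "requests": {
                # 1024 ECS CPU units == 1 vCPU == 1000 millicores
                "cpu": f"{int(cpu_units) * 1000 // 1024}m",
                "memory": f"{memory_mb}Mi",
            }
        },
    }

ecs_container = {
    "name": "my-web-app",
    "image": "nginx:latest",
    "portMappings": [{"containerPort": 80, "hostPort": 80, "protocol": "tcp"}],
}
spec = ecs_container_to_k8s(ecs_container, cpu_units="256", memory_mb="512")
print(spec["resources"]["requests"]["cpu"])  # 250m
```

Strictly speaking, 256 ECS CPU units is 250 millicores; manifests written by hand often round this to 256m, which Kubernetes also accepts.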
Let’s say we now want to recreate the same task definition we saw above for Fargate, but as a Kubernetes manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: my-web-app
          image: nginx:latest
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 256m
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: my-web-app-service
spec:
  selector:
    app: my-web-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
Explanation:
- Deployment: This defines how your application is deployed and scaled.
- replicas: Number of pods to run (similar to tasks in ECS).
- selector: Matches pods with the app: my-web-app label.
- template: Defines the Pod template.
- metadata.labels: Labels the pod.
- spec.containers: Defines the container.
- image: The Docker image.
- ports: Exposes the container port.
- resources: Requests CPU and memory (matching the task definition).
- Service: This exposes your application to the network.
- selector: Matches pods with the app: my-web-app label.
- ports: Defines the service port and the target port on the pods.
- type: LoadBalancer: Creates a load balancer to expose the service.
Important notes:
- Load Balancer: This YAML creates a Kubernetes LoadBalancer Service. On AWS, the in-tree controller provisions a Classic Load Balancer by default; an Application or Network Load Balancer requires the AWS Load Balancer Controller. You’ll need to configure the load balancer and its security groups to route traffic to your service.
- Logging: This YAML doesn’t include logging configuration. You’ll need to set up logging in Kubernetes separately, for example, using a sidecar container or a logging agent.
- AWS specifics: The original task definition used awslogs log driver. In Kubernetes, you’d need to configure logging differently (e.g., Fluentd).
Recommended reading
- https://github.com/yaya2devops/CloudBlogs
- https://medium.com/@mobigaurav/embarking-on-the-containerization-journey-exploring-docker-and-kubernetes-fundamentals-192fbb6c3fc4