Cloud-native practice relies on automating cloud infrastructure to achieve scalability and efficiency. Amazon Web Services’ (AWS) Elastic Kubernetes Service (EKS) manages containerised applications by handling the complexities of control plane operations, while Terraform automates the surrounding infrastructure that would otherwise require manual deployment and management. With it, you can create and administer resources such as VPCs, subnets, and EC2 worker nodes consistently.
By running Terraform, teams can automate cluster setup, provision IAM roles and users, control access to the Kubernetes API server, and configure networking components such as kube-proxy. Terraform also helps build fault-tolerant infrastructure, for example by load balancing across instances, simplifying resource management for every team member.
Hence, Terraform streamlines deployment, improves reliability, and reduces manual effort, unlocking the full potential of EKS clusters.
In this article, we’ll explore how to simplify EKS deployments with Terraform, unlocking the power of automation and infrastructure as code (IaC). By the end of this article, you will understand how to leverage Terraform for provisioning EKS clusters efficiently, improving scalability, and reducing manual work.
Why Use Terraform for AWS EKS?
1. Infrastructure as Code (IaC)
Terraform enables the definition of cloud infrastructure in a declarative manner. This ensures consistency, scalability, and ease of management when deploying complex Kubernetes environments like AWS EKS.
2. Reusability
With modules, Terraform lets you create reusable components. You can build a module for provisioning EKS clusters and reuse it across various environments (e.g., dev, staging, production).
3. Version Control
Terraform configurations can be stored in Git repositories, allowing for version control, collaboration, and rollbacks when needed.
4. Automation
Terraform’s automation capabilities eliminate human error, ensuring that all environments are consistent and reproducible.
Pre-Requisites
Before diving into Terraform and EKS, ensure the following are set up:
- AWS Account: You’ll need an AWS account with appropriate permissions to create EKS clusters.
- AWS CLI: Used to authenticate Terraform with your AWS account.
- Terraform: Installed on your local machine or CI/CD environment.
- kubectl: Installed for managing the Kubernetes cluster once it’s deployed.
Step 1: Configuring Terraform for AWS
Start by creating the Terraform configuration files. Terraform’s declarative approach requires you to define provider settings, resources, and modules for provisioning the required AWS infrastructure.
Provider Configuration
Create a provider.tf file that contains the AWS provider configuration. Here’s an example:
provider "aws" {
  region = "us-west-2"
}
This specifies that Terraform will use AWS resources in the us-west-2 region. Ensure your AWS credentials are configured correctly using the AWS CLI (aws configure).
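It is also common practice to pin the Terraform version and declare the provider source so runs are reproducible. A minimal sketch (the file name and version constraint below are illustrative choices, not requirements):

```hcl
# versions.tf -- illustrative version pinning
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Choose a version constraint compatible with the EKS module
      # release you use, e.g. version = "~> 3.0" for older releases.
    }
  }
}
```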
Step 2: Define VPC and Subnets
For EKS to function properly, it needs to be deployed within a VPC (Virtual Private Cloud) with public and private subnets, and the EKS control plane requires subnets in at least two Availability Zones. Let’s create these resources.
VPC Definition
In the vpc.tf file, define a VPC:
resource "aws_vpc" "eks_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "eks-vpc"
  }
}
Subnet Definition
Now, define the public and private subnets, placing them in different Availability Zones:
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.eks_vpc.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
  availability_zone       = "us-west-2a"

  tags = {
    Name = "public-subnet"
  }
}

resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.eks_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-west-2b"

  tags = {
    Name = "private-subnet"
  }
}
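Note that a subnet is only “public” if it routes to an internet gateway, so the public subnet also needs resources along these lines (the resource names here are illustrative):

```hcl
# Internet gateway and route table so the public subnet can reach the internet.
resource "aws_internet_gateway" "eks_igw" {
  vpc_id = aws_vpc.eks_vpc.id

  tags = {
    Name = "eks-igw"
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.eks_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.eks_igw.id
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}
```

Worker nodes in the private subnet additionally need a NAT gateway (or VPC endpoints) to pull container images and reach AWS APIs.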
Step 3: Security Groups and IAM Roles
Next, define the security groups and IAM roles that will control access to the EKS cluster and its nodes.
Security Groups
resource "aws_security_group" "eks_sg" {
  name   = "eks-security-group"
  vpc_id = aws_vpc.eks_vpc.id

  # HTTPS access to the cluster API; 0.0.0.0/0 is for demonstration only.
  # Restrict this to trusted CIDR ranges in production.
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
IAM Roles
EKS requires several IAM roles for the control plane and node groups. These roles will allow AWS services to interact with each other.
resource "aws_iam_role" "eks_role" {
  name               = "eks-role"
  assume_role_policy = data.aws_iam_policy_document.eks_assume_role_policy.json
}
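The assume_role_policy above references a data source that must be defined as well. A minimal sketch, assuming this role is for the EKS control plane:

```hcl
# Trust policy allowing the EKS service to assume the role.
data "aws_iam_policy_document" "eks_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["eks.amazonaws.com"]
    }
  }
}

# The cluster role also needs the AWS-managed EKS cluster policy attached.
resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  role       = aws_iam_role.eks_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}
```

A similar role and policy set (worker node, CNI, and ECR read-only policies) is needed for the node groups, though the EKS module shown in the next step can create these roles for you.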
Step 4: EKS Cluster Definition
Once your VPC, subnets, security groups, and IAM roles are defined, you can now provision the EKS cluster.
EKS Cluster Resource
In the eks.tf file, define the EKS cluster:
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 17.0" # the worker_groups syntax below applies to v17.x of this module

  cluster_name    = "my-eks-cluster"
  cluster_version = "1.21"
  subnets         = [aws_subnet.public.id, aws_subnet.private.id]
  vpc_id          = aws_vpc.eks_vpc.id

  worker_groups = [
    {
      instance_type = "t3.medium"
      asg_max_size  = 3
      asg_min_size  = 1
    }
  ]
}
This configuration uses the official Terraform AWS EKS module to simplify provisioning. You define the cluster version, subnets, VPC ID, and worker node groups. Note that the worker_groups and subnets arguments belong to v17.x of the module; newer releases (v18 and later) renamed them to eks_managed_node_groups and subnet_ids and support newer Kubernetes versions, so check the module’s documentation for the release you pin.
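To make the cluster easier to consume from scripts and CI pipelines, you can also expose a few module outputs. The output names below match v17.x of the module and may differ in newer releases:

```hcl
# outputs.tf -- convenience outputs (names follow the v17.x EKS module)
output "cluster_name" {
  value = module.eks.cluster_id
}

output "cluster_endpoint" {
  value = module.eks.cluster_endpoint
}
```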
Step 5: Apply Terraform Configurations
With everything defined, run the following commands to provision your EKS cluster.
terraform init
terraform plan
terraform apply
terraform init downloads the providers and modules, terraform plan previews the changes, and terraform apply provisions all the resources defined. This process can take 10-15 minutes, as it involves creating the Kubernetes control plane and worker nodes and configuring networking.
Step 6: Configuring kubectl
Once the cluster is created, configure kubectl to manage it. The AWS CLI can generate the required kubeconfig entry for you:
aws eks --region us-west-2 update-kubeconfig --name my-eks-cluster
Verify connectivity with kubectl get nodes; once the worker nodes report Ready, you can deploy your Kubernetes workloads.
Step 7: Scaling and Managing the EKS Cluster
Scaling your EKS cluster is as simple as modifying the asg_min_size and asg_max_size in your Terraform configurations. If your workloads require more resources, you can add more nodes or change instance types.
worker_groups = [
  {
    instance_type = "t3.large"
    asg_max_size  = 6
    asg_min_size  = 2
  }
]
Running terraform apply again will adjust the cluster accordingly.
Step 8: Monitoring and Cost Management
Once your EKS cluster is up and running, you can use Amazon CloudWatch to monitor its performance. Additionally, regularly review your AWS billing to optimize costs. For large-scale environments, consider using Amazon EC2 Spot Instances for worker nodes to save costs.
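For example, v17.x of the EKS module lets a worker group request Spot capacity via a maximum bid price. The price below is purely illustrative; check current Spot pricing for your region and instance type:

```hcl
# A hypothetical Spot-backed worker group (v17.x module syntax).
worker_groups = [
  {
    instance_type = "t3.medium"
    spot_price    = "0.02" # maximum hourly bid in USD, illustrative only
    asg_max_size  = 6
    asg_min_size  = 1
  }
]
```

Because Spot instances can be reclaimed by AWS with short notice, reserve them for fault-tolerant workloads and keep at least some on-demand capacity for critical services.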
Conclusion
By leveraging Terraform, the deployment and management of AWS EKS clusters can be streamlined and automated. Terraform’s infrastructure-as-code approach not only simplifies the deployment process but also ensures scalability and consistency across environments. Whether you’re managing a single cluster or deploying multiple clusters across different regions, Terraform provides the flexibility and power to automate the entire process. The code snippets in this article are illustrative examples; adapt them to your own requirements and apply security and operational best practices before using them in production.