
Maximizing Cloud Efficiency with Terraform and AWS Automation

Leverage IaC with Terraform and AWS for Enhanced Cloud Automation and Management

Bruns Michael

Cloud automation is changing the way businesses manage their infrastructure by reducing manual work, improving scalability, and driving efficiency. This guide explores how automation with tools such as AWS, Terraform, and serverless computing streamlines operations and optimizes resource usage. Whether you are adopting Infrastructure as Code (IaC), running Kubernetes, or integrating CI/CD pipelines, automation is key to keeping your systems scalable, reliable, and cost-effective. Discover how to harness cloud automation to gain a competitive edge.

Amazon Web Services (AWS) provides a wide range of services that can be automated with Infrastructure as Code (IaC) tools like Terraform. As Bruns Michael explains, developers no longer need to click their way through a web UI manually. Instead, they can use Terraform to automate, version, and replicate their cloud environments. By adopting IaC practices, businesses can optimize cloud performance and scalability while reducing operational overhead.


Today, the barrier to building your own cloud infrastructure has never been lower. Providers such as AWS, Microsoft Azure, or Google Cloud Platform offer a variety of flexible, highly available services, and their usage is usually explained very clearly through an easy-to-understand web interface, tutorials, and blog articles. Furthermore, free or at least heavily discounted trial phases are a practical springboard into the cloud, making it possible to run initial experiments without financial risk.

If someone wants to use one of these platforms productively after a successful experiment, the realization quickly sets in that both the UI and the trial phase are more of a gateway drug than an actual key to success. Fortunately, there are now various tools that make life easier in this regard. In this author's daily work, the combination of AWS and Terraform has proven to be well suited to solving this problem.

What is Terraform?

Terraform is an open source tool by HashiCorp. It is written in Go and available on GitHub. It enables the declarative configuration of an infrastructure in structured text files, so the configuration can be managed like any other source code in a version control system. This configuration can be used to plan, set up, change, and even dismantle an environment.

Listing 1 gives a rough idea of what a Terraform configuration looks like: it sets up a virtual machine of type t2.micro, based on an Amazon Machine Image (AMI) with Ubuntu 16.04 LTS, in the Frankfurt data center of AWS. More examples for different applications are available here.

Listing 1
provider "aws" {
  region = "eu-central-1"
}

resource "aws_instance" "kaeptn-eichhorn" {
  ami = "ami-13b8337c"
  instance_type = "t2.micro"
}

The use of Terraform is not limited to AWS. In theory, it can work with any provider and can even be used to manage your own data center. For the most popular providers, ready-made plugins are actively maintained and adapted to changing conditions and new features. You can also contribute your own plugins and add them to the community repository.

Tools such as Chef or Puppet assume the existence of a machine that needs to be set up, whereas Terraform is able to provision the virtual machines themselves. In addition, Terraform can set up services such as databases or object storage with various IaaS providers, as opposed to proprietary tools such as CloudFormation, which are always tied to a single provider.

Working with Terraform is divided into several phases. First, its use must be prepared by calling terraform init, similar to initializing a new Git repository. In this and all subsequent steps, all files with the extension .tf in the current directory are read in and processed in no particular order.

All plugins (e.g. for AWS) that are referenced in the files found are then downloaded and initialized. With terraform plan you can now see which changes would be made to the infrastructure. This includes things like the provisioning of new resources, the renaming of DNS entries, or the expansion of the storage of a database.

Optionally, this execution plan can also be saved so that a subsequent terraform apply does not create a new plan but uses the one already created. In this step, the previously planned changes are actually executed, i.e. Terraform calls the AWS APIs and sets up the services in the respective AWS account.

A detailed description of how to use your AWS account with Terraform can be found here. If you want to shut down and dismantle all configured resources, this is possible with terraform destroy. Caution: this will actually destroy all resources contained in the .tf files! To remove individual resources, you only need to remove them from the configuration before the next apply.
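
As a quick reference, a typical command sequence for this workflow might look like the following; the plan file name plan.tfplan is only an example:

# Download and initialize the referenced provider plugins (e.g. AWS)
terraform init

# Compute the execution plan and optionally save it to a file
terraform plan -out=plan.tfplan

# Apply exactly the previously saved plan (omit the file to plan and apply in one step)
terraform apply plan.tfplan

# Tear down every resource defined in the .tf files -- use with care
terraform destroy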

Resources and Dependencies

One of Terraform's core features is the resource graph. As the infrastructure grows, resources start to depend on one another. A good example is a firewall rule (in AWS: a Security Group) that only allows incoming traffic from the internet to port 80 and is referenced by one or more virtual machines (in AWS: EC2 instances).

The EC2 instances therefore depend on the Security Group, which Terraform recognizes when creating an execution plan and stores in the graph. This makes it possible to create or change the configured resources in AWS in the correct order: in this case, first the Security Group, then the EC2 instances that reference it.
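
A minimal sketch of such an implicit dependency might look like the following; the resource names and the rule itself are illustrative and not part of the listings above:

# Security Group that only allows incoming HTTP traffic from the internet
resource "aws_security_group" "http-inbound" {
  name = "http-inbound"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Referencing the Security Group's ID creates the implicit dependency in the graph
resource "aws_instance" "web" {
  ami = "ami-13b8337c"
  instance_type = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.http-inbound.id}"]
}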

There is also the possibility to specify dependencies explicitly, for example if they result from the application logic rather than the infrastructure level. If an object store (in AWS: an S3 bucket) is used for file storage by an application on EC2 instances, these instances should only be set up after the S3 bucket is available. Such a link can be specified with the keyword depends_on, whose value refers to another resource.

Terraform's resource graph is able to recognize cyclic dependencies and point out such an error already during the execution plan. In Listing 2, a bidirectional dependency was created artificially. This leads to the message Cycle: aws_instance.example, aws_s3_bucket.example when calling plan or apply, and the execution is terminated, so the error cannot even occur while setting up the infrastructure.

Listing 2
resource "aws_s3_bucket" "example" {
  bucket = "kaeptn-eichhorn"
  depends_on = ["aws_instance.example"] # Dependency to EC2-Instance
}

resource "aws_instance" "example" {
  ami = "ami-13b8337c"
  instance_type = "t2.micro"
  depends_on = ["aws_s3_bucket.example"] # Dependency to S3-Bucket
}

Variables

In most projects, you want to create your infrastructure in multiple environments, such as dev and prod. These environments should of course be as similar as possible, but they will differ from time to time, especially during the development of new features. Names of S3 buckets (e.g. kaeptn-eichhorn-dev vs. kaeptn-eichhorn-prod) or endpoints also differ, so it must be possible to replace variables within the Terraform configuration.

Listing 3 shows the definition of a variable env. It can be set when Terraform is called, either via environment variables or via terraform plan -var "env=dev". This results in the bucket name kaeptn-eichhorn-dev.

Listing 3
variable "env" {
  type = "string"
}

resource "aws_s3_bucket" "example" {
  bucket = "kaeptn-eichhorn-${var.env}"
}
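
Both ways of setting the variable can be sketched as follows; Terraform picks up environment variables that use the TF_VAR_ prefix:

# Set the variable via an environment variable ...
TF_VAR_env=dev terraform plan

# ... or pass it directly on the command line
terraform plan -var "env=dev"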

Variables can also be combined by using the value of one variable as the key for a lookup in another. Such a variable has the type map and is accessed via a lookup, as shown in Listing 4. Called with env=dev, the bucket name tmp-kaeptn-eichhorn-dev is created; a call with env=prod leads to kaeptn-eichhorn-prod.

Listing 4
variable "env" {
  type = "string"
}

variable "bucket-prefix" {
  type = "map"
  default = {
    "dev" = "tmp-"
    "prod" = ""
  }
}

resource "aws_s3_bucket" "example" {
  bucket = "${lookup(var.bucket-prefix, var.env)}kaeptn-eichhorn-${var.env}"
}

By using variables, developers can reproduce an infrastructure in different environments and thus in different AWS accounts. A common setup with AWS is a superordinate account with several sub-accounts, where each sub-account represents an environment such as dev or prod. In this way, the often hard-to-achieve dev-prod parity can be reached with very simple means.
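
As a sketch, assuming that credentials for the dev and prod sub-accounts are available as AWS profiles of the same name and that the AWS provider resolves them via the standard AWS_PROFILE environment variable, the same configuration could be planned against both environments like this:

# Plan against the dev sub-account
AWS_PROFILE=dev terraform plan -var "env=dev"

# Plan against the prod sub-account
AWS_PROFILE=prod terraform plan -var "env=prod"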

In addition, it is possible to define so-called outputs. The following listing shows how to create an output variable named kaeptn-eichhorn-ip. Its value is the dynamic public IP address of the generated EC2 instance, which was assigned during execution:


output "kaeptn-eichhorn-ip" {
  value = "${aws_instance.kaeptn-eichhorn.ip.public_ip}"
}

After the execution, the IP address is printed on the console and can also be retrieved again later with terraform output kaeptn-eichhorn-ip. It is also possible to use the values of outputs in other configurations – see below for more details.

Teamwork with states and backends

In order not to have to determine the current state of the provisioned infrastructure completely via APIs on every call, Terraform remembers it in a state. During execution, a JSON file called terraform.tfstate is created on the local computer. In it, Terraform records the assignment of an abstract resource like "aws_instance" "kaeptn-eichhorn" to a concrete AWS instance with the ID i-123abc456, so it knows which concrete instances should be modified or destroyed.

The dependency paths are also stored in the state, so that Terraform can determine whether a resource that has been removed from the configuration can simply be destroyed and, if so, in what order. In the example above, the S3 bucket should not be destroyed as long as the EC2 instance still depends on it. In addition, the stored state can be compared with the actual state of the infrastructure to determine any differences between plan and reality.

Until now, it was assumed that only one person works with Terraform and has the entire code under their control. Since this rarely occurs in real projects, Terraform offers the possibility to collaborate using remote states that are stored in a backend. After each apply, the resulting state is stored in the configured backend and made available to other users of the same backend. In the context of AWS, S3 can be used as a backend, but services such as Artifactory or Consul can also be used.

If a backend is configured, the local state is synchronized with the remote state before each plan and each change, so that work is always based on the current status. Some backends also support locking, which prevents two users from making changes to the infrastructure at the same time. If remote states are used, it is important to ensure that the local file terraform.tfstate does not end up in the repository, otherwise unsightly conflicts may occur.
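
A minimal sketch of such a backend configuration using S3, with illustrative bucket and key names matching Listing 5 below, might look like this:

# Store the state in an S3 bucket instead of a local terraform.tfstate
terraform {
  backend "s3" {
    bucket = "kaeptn-eichhorn-example"
    key = "base-project.json"
    region = "eu-central-1"
  }
}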

Since resources in a large, distributed infrastructure share commonalities, they can be shared using data sources. One example is the Security Group mentioned above, which only allows incoming traffic from the internet to port 80. Access to these data sources is read-only, which protects shared configurations from unwanted changes. It is also possible to access the outputs described above, so their values can be used in other configurations.

Listing 5 shows how the remote state of a project named base-project can be included in another configuration in order to access the ID of the Security Group named http-inbound defined there, exposed as an output of base-project, and assign it to an EC2 instance.

Listing 5
data "terraform_remote_state" "base-project" {
  backend = "s3"
  config {
    bucket = "kaeptn-eichhorn-example"
    key = "base-project.json"
  }
}

resource "aws_instance" "kaeptn-eichhorn" {
  ami = "ami-13b8337c"
  instance_type = "t2.micro"
  security_groups = ["${data.terraform_remote_state.base -project.aws_security_group.http-inbound.id}"]
}
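
For this to work, base-project itself has to expose the Security Group's ID as an output; a sketch of that counterpart (the output name http-inbound-id is an assumption made here) could look like this:

# In base-project: expose the Security Group ID so other configurations can read it
output "http-inbound-id" {
  value = "${aws_security_group.http-inbound.id}"
}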

What about everyday life?

In everyday life, Terraform can be integrated wonderfully into an existing deployment pipeline or one that is still under development. Using tools like GitLab CI or GoCD, the variables described above can be used to define a clean pipeline that uses the same configuration for different environments. With the help of parameters, it can be adapted to the respective deployment target, for example to set DNS entries or instance types.

This means that building, deploying, and testing an application are so closely interlinked that each team member can carry out the entire chain: in just a few steps, they can independently take code from a push into the repository all the way to deployment into the production system. Terraform's detailed documentation helps considerably here. DevOps becomes more than just a buzzword (or worse, a role in a team): everyone can pull together.

Terraform has also been used successfully in training sessions for taking the first steps in the cloud. As a rule, participants find their way around very quickly and are able to write initial configurations themselves or adapt existing ones with little help. New team members, too, usually find working with Terraform pleasant and are quick to understand the structure.

A great advantage of Terraform is that the configuration of the infrastructure lives in the same repository as the source code of the application. Developers can quickly see in which environment the application is running, on which machines, with which database, and so on.

For a large application with many small services that are distributed across various repositories, this advantage can also turn into a disadvantage. For example, if the syntax changes or a new major version is released, you have to adjust and check many places instead of touching just a single configuration.

However, since such massive conversions are rare and it is no problem to execute Terraform locally instead of in a tool like GitLab CI, the advantages of distributing the configuration across different repositories usually outweigh the disadvantages. After all, there are tools like find and grep, with which finding the places that need to be changed should be manageable for every developer.

Conclusion: Embrace Automation for Cloud Efficiency

Even though the current version at the time of writing is only 0.11.2, Terraform has matured into very stable and reliable software in recent years. Current developments, especially in the AWS context, are always kept up to date, and bugs are usually addressed quickly. Thanks to its understandable syntax, the initial hurdle is low.

Developers quickly get used to declaring their infrastructure in code instead of clicking it together in a web UI. As a result, Terraform is a tool that exemplifies DevOps culture: it considerably simplifies both experiments and the productive use of cloud solutions.
