Terraform: AWS ECS

ECS stands for Elastic Container Service. It is a proprietary Amazon Web Services container management platform for running Docker containers.

This article guides the reader on how to provision an ECS cluster using Terraform. We can provision 2 types of ECS clusters on the AWS Cloud Platform.

  • ECS cluster with Fargate: a serverless cluster with no EC2 instances to provision or manage.
  • ECS cluster with EC2 instances: a cluster backed by provisioned EC2 instances.

Requirements/Prerequisites

  • An AWS account.
  • An IAM user with a generated AWS access key ID and secret access key, and with permissions to create AWS resources.
  • Terraform installed on your PC.
  • An EC2 key pair manually generated on your AWS account.
  • This article assumes the reader has already created a Terraform module to set up the network infrastructure and security groups. A guide for this can be found at the link below.

https://wordpress.com/post/cloudanddevopstech.com/158

N/B: All the Terraform code can be found in the GitHub repository below.

https://github.com/MaureenBarasa/IaC-Terraform

Terraform: Provision ECS Fargate Cluster

Image Source: https://medium.com/@gmusumeci/how-to-deploy-aws-ecs-fargate-containers-step-by-step-using-terraform-545eeac743be

Step 1: Create the Terraform ECS-Fargate Module

Create a folder on your PC for Terraform modules. Within it, you can create other folders to better organize templates. In our case, we created a services directory within the modules directory, and an ecs-fargate directory within the services directory. Then, add the below files to the folder.

  • ecs-fargate.tf
  • outputs.tf

Use the below commands to execute this task:

sudo mkdir modules
cd modules
sudo mkdir services
cd services
sudo mkdir ecs-fargate
cd ecs-fargate
touch ecs-fargate.tf outputs.tf
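
Equivalently, the nested directory tree can be created in one step with mkdir -p:

mkdir -p modules/services/ecs-fargate
cd modules/services/ecs-fargate
touch ecs-fargate.tf outputs.tf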

On the ecs-fargate.tf file, add the below code:

resource "aws_ecs_cluster" "test-fargate" {
    name = "test-fargate-ecs"
    setting {
        name = "containerInsights"
        value = "enabled"
    }
    tags = {
        Name = "test-fargate-ecs"
        createdBy = "MaureenBarasa"
        Owner = "DevSecOps" 
        Project = "test-terraform"
        environment = "test"
    }
}

The reader should customize the template properties to reflect their specific requirements. Properties to customize include:

  • Cluster Name.
  • Cluster Tags.
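
Optionally, on recent versions of the AWS provider, the cluster can also be associated with the Fargate capacity providers so that services default to Fargate. This is a sketch, not part of the original template:

resource "aws_ecs_cluster_capacity_providers" "test-fargate" {
  cluster_name       = aws_ecs_cluster.test-fargate.name
  capacity_providers = ["FARGATE", "FARGATE_SPOT"]

  # New services default to on-demand Fargate capacity
  default_capacity_provider_strategy {
    capacity_provider = "FARGATE"
    weight            = 1
  }
}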

On the outputs.tf file add the below code:

output "ecs-fargate" {
  description = "The ECS fargate cluster"
  value       = "${aws_ecs_cluster.test-fargate.id}"
}

Step 2: Create the Terraform Root Module

Create another folder called Terraform-ec2 and within that folder, add 3 files:

  • main.tf
  • version.tf
  • vars.tf

On the vars.tf file, add the below code. The file contains variables that describe the user's AWS credentials and the AWS region in which the user would like to launch their resources.

variable "AWS_ACCESS_KEY" {
  default = "**************"
}

variable "AWS_SECRET_KEY" {
  default = "***************"
}

variable "AWS_REGION" {
  default = "eu-central-1"
}

Replace the AWS access and secret key defaults with the user's generated AWS access key ID and AWS secret access key, respectively. The user should also change the region default value to the specific AWS region in which they would like to provision the resources.
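
N/B: Hardcoding credentials as variable defaults risks committing them to version control. A safer alternative is to leave the defaults out and supply the values at runtime through TF_VAR_-prefixed environment variables, which Terraform reads automatically:

export TF_VAR_AWS_ACCESS_KEY="<your-access-key-id>"
export TF_VAR_AWS_SECRET_KEY="<your-secret-access-key>"
export TF_VAR_AWS_REGION="eu-central-1"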

Add the below code to the version.tf file:

terraform {
  required_version = ">= 0.12" 
}
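
Optionally, the AWS provider requirement from main.tf below can live here alongside the version constraint instead; keep the declaration in a single place to avoid duplicate provider requirement errors. The version constraint below is an example:

terraform {
  required_version = ">= 0.12"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}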

Finally, on the main.tf file, add the below code. The main.tf file calls the Terraform modules that will be invoked to create the AWS resources.

N/B: The module source directory should reflect the reader's actual directory path where the module files are stored.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  access_key = var.AWS_ACCESS_KEY
  secret_key = var.AWS_SECRET_KEY
  region     = var.AWS_REGION
}

module "ecs-fargate" {
  source = "/Users/maureenbarasa/modules/services/ecs-fargate"
}
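
Optionally, to surface the cluster ID at the root level (for example, to feed it into other stacks), the module output can be re-exported. The output name here is our own choice:

output "fargate_cluster_id" {
  description = "ID of the provisioned ECS Fargate cluster"
  value       = module.ecs-fargate.ecs-fargate
}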

To provision this resource, kindly have a look at the “Provision the Resources” section of this article.

Terraform: Provision ECS Cluster with EC2 Instances

Image Source: https://medium.com/swlh/creating-an-aws-ecs-cluster-of-ec2-instances-with-terraform-85a10b5cfbe3

Step 1: Create the Terraform ECS-EC2 Module

Create a folder on your PC for Terraform modules. Within it, you can create other folders to better organize templates. In our case, we created a services directory within the modules directory, and an ecs-ec2 directory within the services directory. Then, add the below files to the folder.

  • ecs-ec2.tf
  • outputs.tf
  • container-agent.sh

Use the below commands to execute this task:

sudo mkdir modules
cd modules
sudo mkdir services
cd services
sudo mkdir ecs-ec2
cd ecs-ec2
touch ecs-ec2.tf outputs.tf container-agent.sh

On the ecs-ec2.tf file, add the below code:

module "vpc" {
  source  = "/Users/maureenbarasa/modules/services/vpc"
}

#Instance Role
resource "aws_iam_role" "test_role" {
  name = "ecs_Instance_role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
  tags = {
    Name = "ecs_Instance_role"
    createdBy = "MaureenBarasa"
    Owner = "DevSecOps"
    Project = "test-terraform"
    environment = "test"
  }
}

#Instance Profile
resource "aws_iam_instance_profile" "ecs_Instance_Profile" {
  name = "ecs_Instance_role"
  role = "${aws_iam_role.test_role.name}"
}

#Attach Policies to Instance Role
resource "aws_iam_policy_attachment" "test_attach1" {
  name       = "test-attachment"
  roles      = [aws_iam_role.test_role.id]
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_policy_attachment" "test_attach2" {
  name       = "test-attachment"
  roles      = [aws_iam_role.test_role.id]
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
}

resource "aws_iam_policy_attachment" "test_attach3" {
  name       = "test-attachment"
  roles      = [aws_iam_role.test_role.id]
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}

#ECS Cluster
resource "aws_ecs_cluster" "test-ecs-ec2" {
    name = "test-ecs-ec2"
    setting {
        name = "containerInsights"
        value = "enabled"
    }
    tags = {
        Name = "test-ecs-ec2"
        createdBy = "MaureenBarasa"
        Owner = "DevSecOps" 
        Project = "test-terraform"
        environment = "test"
    }
}

#The Autoscaling Group
resource "aws_autoscaling_group" "test-ecs-asg-group" {
    name = "test-ecs-asg-group"
    max_size = 2
    min_size = 1
    desired_capacity = 1
    health_check_grace_period = 300
    launch_configuration = "${aws_launch_configuration.container-instance.id}"
    vpc_zone_identifier = [module.vpc.vpc_private_subnet1_id, module.vpc.vpc_private_subnet1_id]
}

#The launch Configuration
resource "aws_launch_configuration" "container-instance" {
    name = "web_config"
    image_id = "ami-06c755ec615b860ac"
    instance_type = "t2.micro"
    key_name = "test-key"
    user_data = "${file("/Users/maureenbarasa/modules/services/ecs-ec2/container-agent.sh")}"
    iam_instance_profile = "${aws_iam_instance_profile.ecs_Instance_Profile.id}"
    security_groups = [module.vpc.vpc_ecs_security_group_id]
    ebs_block_device {
        device_name = "/dev/xvda"
        volume_size = 20
        volume_type = "gp2"
    }
}

Ensure the vpc module source directory reflects the user's actual module path.
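
For reference, the template above assumes the vpc module exposes outputs along these lines. The output names are inferred from how they are referenced above; the resource names inside this sketch are hypothetical placeholders and your own module will differ:

# In the vpc module's outputs.tf (sketch; resource names are placeholders)
output "vpc_private_subnet1_id" {
  value = aws_subnet.private1.id
}

output "vpc_private_subnet2_id" {
  value = aws_subnet.private2.id
}

output "vpc_ecs_security_group_id" {
  value = aws_security_group.ecs.id
}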

Also, the reader should customize the template properties to reflect their specific requirements. Properties to customize include:

  • Resource Name.
  • Resource Tags.
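
One note on the IAM section above: aws_iam_policy_attachment manages a policy's attachments exclusively across the entire account, so it can detach the same managed policy from roles created elsewhere. If that is a concern, aws_iam_role_policy_attachment is a narrower alternative, one resource per policy:

# Sketch: attaches a single managed policy to this role only
resource "aws_iam_role_policy_attachment" "ecs_for_ec2" {
  role       = aws_iam_role.test_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}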

N/B: Also, for the launch configuration AMI ID, the user should use an ECS-optimized AMI from their AWS account.
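
Rather than hardcoding the AMI ID, one option is to look it up through the public SSM parameter that AWS maintains for the current ECS-optimized Amazon Linux 2 AMI:

data "aws_ssm_parameter" "ecs_ami" {
  name = "/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"
}

# Then, in the launch configuration:
#   image_id = data.aws_ssm_parameter.ecs_ami.value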

On the outputs.tf file add the below code:

output "ecs-ec2" {
  description = "The ECS ec2 cluster"
  value       = "${aws_ecs_cluster.test-ec2.id}"
}

We will use EC2 user data to configure the ECS container agent so that each instance registers with the cluster. Add the below code to your container-agent.sh file.

#!/bin/bash
echo ECS_CLUSTER=test-ecs-ec2 >> /etc/ecs/ecs.config
echo ECS_BACKEND_HOST= >> /etc/ecs/ecs.config

N/B: Replace the ECS_CLUSTER name with your respective cluster name.
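
Once an instance boots, you can confirm it registered with the cluster using the AWS CLI (assuming it is installed and configured):

aws ecs list-container-instances --cluster test-ecs-ec2 --region eu-central-1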

Step 2: Create the Terraform Root Module

To create the root module, follow the steps in Step 2 of the ECS Fargate cluster. This time, in the main.tf file, replace the ecs-fargate module with your ecs-ec2 module, as shown below.
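
In other words, the module block in main.tf becomes (adjust the source path to your own module directory):

module "ecs-ec2" {
  source = "/Users/maureenbarasa/modules/services/ecs-ec2"
}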

Provision the Resources

To create the infrastructure on your account, run the below commands from your root module directory:

terraform init
terraform plan
terraform apply

The terraform init command initializes our working directory. We should run this command whenever we write a new Terraform configuration.

The terraform plan command creates a preview of the actions Terraform will perform, i.e. the resources it will create.

After reviewing the actions that Terraform will perform, we can now run terraform apply to create the resources. You will be prompted to approve by typing yes.
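
For non-interactive runs (for example, in CI), the plan can also be saved to a file and applied exactly as reviewed:

terraform plan -out=tfplan
terraform apply tfplan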

To destroy everything created with the template, run the below command. It will preview all the resources to be destroyed and prompt the user to approve by typing yes.

terraform destroy

Important Links

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/launch_configuration

https://stackoverflow.com/questions/42610807/terraform-ebs-volume

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_group

Happy Building!!!
