Automation of Machine Learning Infrastructure Setup in Cloud Environment using Terraform

Setting up the complete workflow of a Machine Learning project involves several steps such as data ingestion, processing, analysis, and prediction. Major cloud providers like AWS, GCP, and Azure offer a wide variety of services that can be used to build the infrastructure for all of these steps. However, in large organizations, provisioning services for development, testing, and production environments can be cumbersome because of the manual work involved at each step. So in this blog post, I will give you a glimpse of a tool called Terraform that can automate the setup of your infrastructure on AWS. You will understand how Terraform works, how to install it, and you will also get a hands-on tutorial on creating basic AWS services with Terraform in an automated way.

The same approach can also be used to set up the infrastructure for other Machine Learning services and for other cloud providers.

Let's dive in.

Prerequisites

  1. An AWS account – Free Tier is sufficient.
  2. An AWS access key and secret key.
  3. Basic knowledge of AWS services, including VPC, subnets, EC2, and S3 buckets.

Introduction

Terraform is an open-source Infrastructure as Code tool developed by HashiCorp. It can be used to deploy and manage your infrastructure seamlessly through code.

For example, in the case of AWS, creating an EC2 instance through the console requires going through a number of wizards. Deploying enterprise infrastructure typically involves separate Development, Test, and Production environments, and repeating these wizards manually for each environment is slow and can easily lead to human error.

If you are already familiar with AWS services, you might ask why we can't use AWS CloudFormation for this task instead of Terraform. With CloudFormation you are restricted to AWS, whereas Terraform can connect to multiple cloud providers, so the same workflow extends to all the major cloud environments.

How Terraform works

Terraform tracks the state of your existing infrastructure in a state file. There are three major steps involved in deploying infrastructure with Terraform: INIT, PLAN, and APPLY.

  1. The INIT command sets up all the configuration needed to connect Terraform to the provider (AWS in this case).
  2. The PLAN command compares your code with the state file and gives you a detailed execution plan of which resources will be added and which will be removed, but it does not apply anything.
  3. If you are happy with the changes, you apply them using the APPLY command, as shown in the example after this list.
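
A typical run from the Windows command prompt therefore looks like this (assuming the folder you are in contains your .tf configuration files):

C:\Terraform>terraform init
C:\Terraform>terraform plan
C:\Terraform>terraform apply
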
Terraform functionality

Terraform uses a declarative style of programming, which is quite different from imperative programming. There is no need to give Terraform instructions in a sequential way: you only describe what the target infrastructure should look like (the desired state), not how to achieve it step by step. For example, in AWS we logically create a VPC first, then a subnet inside it, and then the EC2 instances; however, in the code you can write these blocks in any order, and Terraform works out what to create first. Awesome, right?
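
As a minimal sketch (the resource names demo_vpc and demo_subnet below are only illustrative and are not part of the code used later in this post), the subnet is declared before the VPC it belongs to, yet Terraform still creates the VPC first because the subnet references aws_vpc.demo_vpc.id:

#The subnet is declared first, but Terraform creates the VPC before it,
#because the subnet refers to the VPC through aws_vpc.demo_vpc.id
resource "aws_subnet" "demo_subnet" {
  vpc_id     = aws_vpc.demo_vpc.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_vpc" "demo_vpc" {
  cidr_block = "10.0.0.0/16"
}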

Let's get started with the practical part now.

Installation of Terraform

In this section, I will show you how to install Terraform on your Windows machine. It can also be installed on macOS, Linux, FreeBSD, OpenBSD, and Solaris.

Go to the Terraform downloads page and download the installation file for your platform. As I am working on a 64-bit Windows machine, I downloaded that file.

The downloaded file is a zip archive containing the Terraform executable.

Unzip this file and store it in a folder, let's say C:\Terraform.


Now, go to System Properties -> Environment Variables -> System Variables -> Path. Add the path C:\Terraform and save.


Now check the Terraform version from the Windows command prompt.
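
For example, running the command below should print the installed version:

C:\Terraform>terraform -version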

If the version is displayed, our installation of Terraform was successful.

Installation of Code Editor [Optional]

As I am a big fan of the VS Code editor, I will show you how to set it up. You can also run Terraform commands directly from the Windows command prompt, so this installation step is optional.

You can install the VS Code editor using this link.

Once it is installed, make sure you also add the Terraform extension as explained below.

Go to the Extensions view in VS Code, type Terraform, and make sure you pick the extension published by HashiCorp. Install it.


Hurray! Now you have everything on your system to go ahead and try cool stuff in Terraform!

Deploy resources automatically in AWS from Terraform

In this section, I will show how to deploy resources directly from Terraform, including a VPC, a subnet, an EC2 instance with its security group, and an S3 bucket.

First, we need to create a main.tf file containing the code below. Every resource is specified in a simple HCL block, as shown.

#Accomplish below tasks in this code
#1.Configuration of the AWS Provider
#2.Creating a VPC
#3.Creating a subnet inside VPC
#4.Creating an EC2 instance inside subnet
#5.Setting up inbound and outbound traffic using security groups for EC2
#6.Creating an S3 bucket


#1.Configuration of the AWS Provider
provider "aws" {
   region  = "ap-southeast-1"
   access_key = "XXXXXXXXXXXX"
   secret_key = "XXXXXXXXXXXX"
}
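#Note: hardcoding keys here is only for simplicity. The AWS provider can also
#read credentials from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
#environment variables or from the shared AWS credentials file.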

#2.Creating a VPC starts here
resource "aws_vpc" "sample_vpc" {
  cidr_block       = "10.0.0.0/16"
  tags = {
    Name = "Development_VPC"
  }
}
#Code for creating a VPC ends here

#3.Code for creating a subnet inside VPC starts here. 
#We can refer the vpc created in the above block using .id

resource "aws_subnet" "sample_subnet" {
  vpc_id     = aws_vpc.sample_vpc.id
  cidr_block = "10.0.1.0/24"

  tags = {
    Name = "Development"
  }
}

#Creation of subnet inside VPC ends here

#4.Creation of EC2 instance inside the subnet

resource "aws_instance" "sample_instance" {
  ami           = "ami-03ca998611da0fe12"
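  #Note: AMI IDs are region-specific; this one matches the ap-southeast-1
  #region configured in the provider, so replace it for any other region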
  instance_type = "t2.micro"
  subnet_id   = aws_subnet.sample_subnet.id
  security_groups = [aws_security_group.sample_instance_port.id]
  tags = {
    Name = "dev_instance"
  }
}

#Creation of EC2 instance inside the subnet ends here

#5.Setting up Security for EC2 instance

resource "aws_security_group" "sample_instance_port" {
  name        = "sample_instance_port"
  description = "security group for EC2"
  vpc_id      = aws_vpc.sample_vpc.id
  

#Inbound Security for EC2 instance #allow web traffic
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

#Outbound Security for EC2 instance
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "sample_instance_port"
  }
}


#6.Create S3 bucket starts here

resource "aws_s3_bucket" "b" {
  bucket = "samplebuckect9881"
  acl    = "private"
  force_destroy = true

  tags = {
    Name        = "My bucket"
    Environment = "Dev"
  }
}

#Create S3 bucket ends here
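
Optionally, you can also append output blocks to the end of main.tf to print useful attributes once the apply finishes. The output names below are my own illustrative additions and are not part of the original code:

#Optional outputs: print a few attributes of the created resources
output "instance_private_ip" {
  value = aws_instance.sample_instance.private_ip
}

output "s3_bucket_arn" {
  value = aws_s3_bucket.b.arn
}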

Copy this code into VS Code (or any editor), and replace the access key and secret key with your own, as mentioned in the Prerequisites section above.

We need to initialize Terraform first with terraform init, as shown below.

Output: 
C:\Terraform>terraform init 

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v3.35.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Then run terraform plan to check that it will create the 5 resources as expected.

Output:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.sample_instance will be created
  + resource "aws_instance" "sample_instance" {
      + ami                          = "ami-03ca998611da0fe12"
      + arn                          = (known after apply)
      + associate_public_ip_address  = (known after apply)
      + availability_zone            = (known after apply)
      + cpu_core_count               = (known after apply)
      + cpu_threads_per_core         = (known after apply)
      + get_password_data            = false
      + host_id                      = (known after apply)
      + id                           = (known after apply)
      + instance_state               = (known after apply)
      + instance_type                = "t2.micro"
      + ipv6_address_count           = (known after apply)
      + ipv6_addresses               = (known after apply)
      + key_name                     = (known after apply)
      + outpost_arn                  = (known after apply)
      + password_data                = (known after apply)
      + placement_group              = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns                  = (known after apply)
      + private_ip                   = (known after apply)
      + public_dns                   = (known after apply)
      + public_ip                    = (known after apply)
      + secondary_private_ips        = (known after apply)
      + security_groups              = (known after apply)
      + source_dest_check            = true
      + subnet_id                    = (known after apply)
      + tags                         = {
          + "Name" = "dev_instance"
        }
      + tenancy                      = (known after apply)
      + vpc_security_group_ids       = (known after apply)

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + metadata_options {
          + http_endpoint               = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens                 = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_interface_id  = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

  # aws_s3_bucket.b will be created
  + resource "aws_s3_bucket" "b" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = "samplebuckect9881"
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = true
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags                        = {
          + "Environment" = "Dev"
          + "Name"        = "My bucket"
        }
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + versioning {
          + enabled    = (known after apply)
          + mfa_delete = (known after apply)
        }
    }

  # aws_security_group.sample_instance_port will be created
  + resource "aws_security_group" "sample_instance_port" {
      + arn                    = (known after apply)
      + description            = "security group for jenkins"
      + egress                 = [
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = ""
              + from_port        = 0
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "-1"
              + security_groups  = []
              + self             = false
              + to_port          = 0
            },
        ]
      + id                     = (known after apply)
      + ingress                = [
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = ""
              + from_port        = 443
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 443
            },
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = ""
              + from_port        = 80
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 80
            },
        ]
      + name                   = "sample_instance_port"
      + name_prefix            = (known after apply)
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + tags                   = {
          + "Name" = "sample_instance_port"
        }
      + vpc_id                 = (known after apply)
    }

  # aws_subnet.sample_subnet will be created
  + resource "aws_subnet" "sample_subnet" {
      + arn                             = (known after apply)
      + assign_ipv6_address_on_creation = false
      + availability_zone               = (known after apply)
      + availability_zone_id            = (known after apply)
      + cidr_block                      = "10.0.1.0/24"
      + id                              = (known after apply)
      + ipv6_cidr_block_association_id  = (known after apply)
      + map_public_ip_on_launch         = false
      + owner_id                        = (known after apply)
      + tags                            = {
          + "Name" = "Development"
        }
      + tags_all                        = {
          + "Name" = "Development"
        }
      + vpc_id                          = (known after apply)
    }

  # aws_vpc.sample_vpc will be created
  + resource "aws_vpc" "sample_vpc" {
      + arn                              = (known after apply)
      + assign_generated_ipv6_cidr_block = false
      + cidr_block                       = "10.0.0.0/16"
      + default_network_acl_id           = (known after apply)
      + default_route_table_id           = (known after apply)
      + default_security_group_id        = (known after apply)
      + dhcp_options_id                  = (known after apply)
      + enable_classiclink               = (known after apply)
      + enable_classiclink_dns_support   = (known after apply)
      + enable_dns_hostnames             = (known after apply)
      + enable_dns_support               = true
      + id                               = (known after apply)
      + instance_tenancy                 = "default"
      + ipv6_association_id              = (known after apply)
      + ipv6_cidr_block                  = (known after apply)
      + main_route_table_id              = (known after apply)
      + owner_id                         = (known after apply)
      + tags                             = {
          + "Name" = "Development_VPC"
        }
      + tags_all                         = {
          + "Name" = "Development_VPC"
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Next, run terraform apply. This will again generate the execution plan and ask you for confirmation before proceeding with the creation of the resources.

Output:
An execution plan has been generated and is shown below.  
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.sample_instance will be created
  + resource "aws_instance" "sample_instance" {
      + ami                          = "ami-03ca998611da0fe12"
      + arn                          = (known after apply)
      + associate_public_ip_address  = (known after apply)
      + availability_zone            = (known after apply)
      + cpu_core_count               = (known after apply)
      + cpu_threads_per_core         = (known after apply)
      + get_password_data            = false
      + host_id                      = (known after apply)
      + id                           = (known after apply)
      + instance_state               = (known after apply)
      + instance_type                = "t2.micro"
      + ipv6_address_count           = (known after apply)
      + ipv6_addresses               = (known after apply)
      + key_name                     = (known after apply)
      + outpost_arn                  = (known after apply)
      + password_data                = (known after apply)
      + placement_group              = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns                  = (known after apply)
      + private_ip                   = (known after apply)
      + public_dns                   = (known after apply)
      + public_ip                    = (known after apply)
      + secondary_private_ips        = (known after apply)
      + security_groups              = (known after apply)
      + source_dest_check            = true
      + subnet_id                    = (known after apply)
      + tags                         = {
          + "Name" = "dev_instance"
        }
      + tenancy                      = (known after apply)
      + vpc_security_group_ids       = (known after apply)

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + metadata_options {
          + http_endpoint               = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens                 = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_interface_id  = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

  # aws_s3_bucket.b will be created
  + resource "aws_s3_bucket" "b" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = "samplebuckect9881"
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = true
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags                        = {
          + "Environment" = "Dev"
          + "Name"        = "My bucket"
        }
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + versioning {
          + enabled    = (known after apply)
          + mfa_delete = (known after apply)
        }
    }

  # aws_security_group.sample_instance_port will be created
  + resource "aws_security_group" "sample_instance_port" {
      + arn                    = (known after apply)
      + description            = "security group for jenkins"
      + egress                 = [
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = ""
              + from_port        = 0
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "-1"
              + security_groups  = []
              + self             = false
              + to_port          = 0
            },
        ]
      + id                     = (known after apply)
      + ingress                = [
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = ""
              + from_port        = 443
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 443
            },
          + {
              + cidr_blocks      = [
                  + "0.0.0.0/0",
                ]
              + description      = ""
              + from_port        = 80
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "tcp"
              + security_groups  = []
              + self             = false
              + to_port          = 80
            },
        ]
      + name                   = "sample_instance_port"
      + name_prefix            = (known after apply)
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + tags                   = {
          + "Name" = "sample_instance_port"
        }
      + vpc_id                 = (known after apply)
    }

  # aws_subnet.sample_subnet will be created
  + resource "aws_subnet" "sample_subnet" {
      + arn                             = (known after apply)
      + assign_ipv6_address_on_creation = false
      + availability_zone               = (known after apply)
      + availability_zone_id            = (known after apply)
      + cidr_block                      = "10.0.1.0/24"
      + id                              = (known after apply)
      + ipv6_cidr_block_association_id  = (known after apply)
      + map_public_ip_on_launch         = false
      + owner_id                        = (known after apply)
      + tags                            = {
          + "Name" = "Development"
        }
      + tags_all                        = {
          + "Name" = "Development"
        }
      + vpc_id                          = (known after apply)
    }

  # aws_vpc.sample_vpc will be created
  + resource "aws_vpc" "sample_vpc" {
      + arn                              = (known after apply)
      + assign_generated_ipv6_cidr_block = false
      + cidr_block                       = "10.0.0.0/16"
      + default_network_acl_id           = (known after apply)
      + default_route_table_id           = (known after apply)
      + default_security_group_id        = (known after apply)
      + dhcp_options_id                  = (known after apply)
      + enable_classiclink               = (known after apply)
      + enable_classiclink_dns_support   = (known after apply)
      + enable_dns_hostnames             = (known after apply)
      + enable_dns_support               = true
      + id                               = (known after apply)
      + instance_tenancy                 = "default"
      + ipv6_association_id              = (known after apply)
      + ipv6_cidr_block                  = (known after apply)
      + main_route_table_id              = (known after apply)
      + owner_id                         = (known after apply)
      + tags                             = {
          + "Name" = "Development_VPC"
        }
      + tags_all                         = {
          + "Name" = "Development_VPC"
        }
    }

Plan: 5 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: 

You need to enter 'yes' to proceed further.

Output:

aws_vpc.sample_vpc: Creating...
aws_s3_bucket.b: Creating...
aws_vpc.sample_vpc: Creation complete after 3s [id=vpc-0c9f13d8496ccc2c3]
aws_subnet.sample_subnet: Creating...
aws_security_group.sample_instance_port: Creating...
aws_subnet.sample_subnet: Creation complete after 1s [id=subnet-01ebc55bbe0c2ebb4]
aws_security_group.sample_instance_port: Creation complete after 2s [id=sg-08704a687086cc39f]
aws_instance.sample_instance: Creating...
aws_s3_bucket.b: Creation complete after 6s [id=samplebuckect9881]
aws_instance.sample_instance: Still creating... [10s elapsed]
aws_instance.sample_instance: Still creating... [20s elapsed]
aws_instance.sample_instance: Creation complete after 23s [id=i-06f115834b8017203]
Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

Now let us check in the AWS console whether these resources were created.

VPC


SUBNET


EC2 INSTANCE


EC2 SECURITY PORTS


S3 BUCKET

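You can also verify from Terraform itself: terraform state list prints the addresses of the resources tracked in the state file, which for this configuration should look roughly like this:

C:\Terraform>terraform state list
aws_instance.sample_instance
aws_s3_bucket.b
aws_security_group.sample_instance_port
aws_subnet.sample_subnet
aws_vpc.sample_vpc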

Oh wait, don't leave yet! The most important thing is to terminate the resources you created; otherwise you will end up incurring charges. With Terraform, we can also automate the termination of all the resources using the terraform destroy command.

This will again generate an execution plan, after which we need to confirm manually before the resources are deleted.

Output: 
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_instance.sample_instance will be destroyed
  - resource "aws_instance" "sample_instance" {
      - ami                          = "ami-03ca998611da0fe12" -> null
      - arn                          = "arn:aws:ec2:ap-southeast-1:169869717766:instance/i-06f115834b8017203" -> null
      - associate_public_ip_address  = false -> null
      - availability_zone            = "ap-southeast-1a" -> null
      - cpu_core_count               = 1 -> null
      - cpu_threads_per_core         = 1 -> null
      - disable_api_termination      = false -> null
      - ebs_optimized                = false -> null
      - get_password_data            = false -> null
      - hibernation                  = false -> null
      - id                           = "i-06f115834b8017203" -> null
      - instance_state               = "running" -> null
      - instance_type                = "t2.micro" -> null
      - ipv6_address_count           = 0 -> null
      - ipv6_addresses               = [] -> null
      - monitoring                   = false -> null
      - primary_network_interface_id = "eni-00310353b2b0e2b59" -> null
      - private_dns                  = "ip-10-0-1-116.ap-southeast-1.compute.internal" -> null
      - private_ip                   = "10.0.1.116" -> null
      - secondary_private_ips        = [] -> null
      - security_groups              = [
          - "sg-08704a687086cc39f",
        ] -> null
      - source_dest_check            = true -> null
      - subnet_id                    = "subnet-01ebc55bbe0c2ebb4" -> null
      - tags                         = {
          - "Name" = "dev_instance"
        } -> null
      - tenancy                      = "default" -> null
      - vpc_security_group_ids       = [
          - "sg-08704a687086cc39f",
        ] -> null

      - credit_specification {
          - cpu_credits = "standard" -> null
        }

      - enclave_options {
          - enabled = false -> null
        }

      - metadata_options {
          - http_endpoint               = "enabled" -> null
          - http_put_response_hop_limit = 1 -> null
          - http_tokens                 = "optional" -> null
        }

      - root_block_device {
          - delete_on_termination = true -> null
          - device_name           = "/dev/xvda" -> null
          - encrypted             = false -> null
          - iops                  = 100 -> null
          - tags                  = {} -> null
          - throughput            = 0 -> null
          - volume_id             = "vol-02f473f29134dd3ca" -> null
          - volume_size           = 8 -> null
          - volume_type           = "gp2" -> null
        }
    }

  # aws_s3_bucket.b will be destroyed
  - resource "aws_s3_bucket" "b" {
      - acl                         = "private" -> null
      - arn                         = "arn:aws:s3:::samplebuckect9881" -> null
      - bucket                      = "samplebuckect9881" -> null
      - bucket_domain_name          = "samplebuckect9881.s3.amazonaws.com" -> null
      - bucket_regional_domain_name = "samplebuckect9881.s3.ap-southeast-1.amazonaws.com" -> null
      - force_destroy               = true -> null
      - hosted_zone_id              = "Z3O0J2DXBE1FTB" -> null
      - id                          = "samplebuckect9881" -> null
      - region                      = "ap-southeast-1" -> null
      - request_payer               = "BucketOwner" -> null
      - tags                        = {
          - "Environment" = "Dev"
          - "Name"        = "My bucket"
        } -> null

      - versioning {
          - enabled    = false -> null
          - mfa_delete = false -> null
        }
    }

  # aws_security_group.sample_instance_port will be destroyed
  - resource "aws_security_group" "sample_instance_port" {
      - arn                    = "arn:aws:ec2:ap-southeast-1:169869717766:security-group/sg-08704a687086cc39f" -> null
      - description            = "security group for jenkins" -> null
      - egress                 = [
          - {
              - cidr_blocks      = [
                  - "0.0.0.0/0",
                ]
              - description      = ""
              - from_port        = 0
              - ipv6_cidr_blocks = []
              - prefix_list_ids  = []
              - protocol         = "-1"
              - security_groups  = []
              - self             = false
              - to_port          = 0
            },
        ] -> null
      - id                     = "sg-08704a687086cc39f" -> null
      - ingress                = [
          - {
              - cidr_blocks      = [
                  - "0.0.0.0/0",
                ]
              - description      = ""
              - from_port        = 443
              - ipv6_cidr_blocks = []
              - prefix_list_ids  = []
              - protocol         = "tcp"
              - security_groups  = []
              - self             = false
              - to_port          = 443
            },
          - {
              - cidr_blocks      = [
                  - "0.0.0.0/0",
                ]
              - description      = ""
              - from_port        = 80
              - ipv6_cidr_blocks = []
              - prefix_list_ids  = []
              - protocol         = "tcp"
              - security_groups  = []
              - self             = false
              - to_port          = 80
            },
        ] -> null
      - name                   = "sample_instance_port" -> null
      - owner_id               = "169869717766" -> null
      - revoke_rules_on_delete = false -> null
      - tags                   = {
          - "Name" = "sample_instance_port"
        } -> null
      - vpc_id                 = "vpc-0c9f13d8496ccc2c3" -> null
    }

  # aws_subnet.sample_subnet will be destroyed
  - resource "aws_subnet" "sample_subnet" {
      - arn                             = "arn:aws:ec2:ap-southeast-1:169869717766:subnet/subnet-01ebc55bbe0c2ebb4" -> null
      - assign_ipv6_address_on_creation = false -> null
      - availability_zone               = "ap-southeast-1a" -> null
      - availability_zone_id            = "apse1-az2" -> null
      - cidr_block                      = "10.0.1.0/24" -> null
      - id                              = "subnet-01ebc55bbe0c2ebb4" -> null
      - map_customer_owned_ip_on_launch = false -> null
      - map_public_ip_on_launch         = false -> null
      - owner_id                        = "169869717766" -> null
      - tags                            = {
          - "Name" = "Development"
        } -> null
      - tags_all                        = {
          - "Name" = "Development"
        } -> null
      - vpc_id                          = "vpc-0c9f13d8496ccc2c3" -> null
    }

  # aws_vpc.sample_vpc will be destroyed
  - resource "aws_vpc" "sample_vpc" {
      - arn                              = "arn:aws:ec2:ap-southeast-1:169869717766:vpc/vpc-0c9f13d8496ccc2c3" -> null
      - assign_generated_ipv6_cidr_block = false -> null
      - cidr_block                       = "10.0.0.0/16" -> null
      - default_network_acl_id           = "acl-0b96a0fc1313b8824" -> null
      - default_route_table_id           = "rtb-0c45254b0d3021eaa" -> null
      - default_security_group_id        = "sg-04b6f3ca74660a7db" -> null
      - dhcp_options_id                  = "dopt-e7aa5581" -> null
      - enable_classiclink               = false -> null
      - enable_classiclink_dns_support   = false -> null
      - enable_dns_hostnames             = false -> null
      - enable_dns_support               = true -> null
      - id                               = "vpc-0c9f13d8496ccc2c3" -> null
      - instance_tenancy                 = "default" -> null
      - main_route_table_id              = "rtb-0c45254b0d3021eaa" -> null
      - owner_id                         = "169869717766" -> null
      - tags                             = {
          - "Name" = "Development_VPC"
        } -> null
      - tags_all                         = {
          - "Name" = "Development_VPC"
        } -> null
    }

Plan: 0 to add, 0 to change, 5 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: 

Once you enter 'yes', the resources are destroyed:

Output:
aws_s3_bucket.b: Destroying... [id=samplebuckect9881]
aws_instance.sample_instance: Destroying... [id=i-06f115834b8017203]
aws_s3_bucket.b: Destruction complete after 1s
aws_instance.sample_instance: Still destroying... [id=i-06f115834b8017203, 10s elapsed]
aws_instance.sample_instance: Still destroying... [id=i-06f115834b8017203, 20s elapsed]
aws_instance.sample_instance: Still destroying... [id=i-06f115834b8017203, 30s elapsed]
aws_instance.sample_instance: Still destroying... [id=i-06f115834b8017203, 40s elapsed]
aws_instance.sample_instance: Destruction complete after 41s
aws_subnet.sample_subnet: Destroying... [id=subnet-01ebc55bbe0c2ebb4]
aws_security_group.sample_instance_port: Destroying... [id=sg-08704a687086cc39f]
aws_subnet.sample_subnet: Destruction complete after 0s
aws_security_group.sample_instance_port: Destruction complete after 0s
aws_vpc.sample_vpc: Destroying... [id=vpc-0c9f13d8496ccc2c3]
aws_vpc.sample_vpc: Destruction complete after 1s

Destroy complete! Resources: 5 destroyed.

Conclusion

In this post, you got an overview of Terraform and also learned how to create resources in AWS using Terraform. I hope you liked this post. If so, feel free to comment below and subscribe. Let me know if you need more posts like this.

Happy Learning!
