TASK 1: Creating and Automating Infrastructure on AWS Cloud Using Terraform.

So, what is Terraform?

Terraform is an open-source tool created by HashiCorp for building, changing, and versioning infrastructure safely and efficiently. It can manage existing, popular service providers as well as custom in-house solutions.


Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, and so on.

Task: Create/launch an application using Terraform:

1. Create a key pair and a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one EBS volume and mount it on /var/www/html.

5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the bucket, and make them publicly readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Tools Required:

1. AWS Command Line Interface

2. Terraform

3. GitHub

Configuration:

1. AWS Command Line Interface:

Step 1: Configure a user profile with the credentials of an IAM user.

We create this profile for better security: instead of putting the credentials directly in the Terraform file, we pass the profile name, so the secret key never appears in the code.
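A minimal sketch of how the profile is set up (the profile name `myprofile` matches the one used later in the Terraform file; the prompted values are placeholders for the IAM user's own keys):

```shell
aws configure --profile myprofile
# prompts for:
#   AWS Access Key ID [None]:     <IAM user's access key>
#   AWS Secret Access Key [None]: <IAM user's secret key>
#   Default region name [None]:   ap-south-1
#   Default output format [None]: json
```

The CLI stores these under ~/.aws/credentials, so the Terraform code only ever needs the profile name.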

2. Creation of the Terraform file

Step 1: Write the code for our infrastructure.
Step 2: Initialize Terraform (download the required plugins) using "terraform init".

terraform init
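After init, the usual cycle is as follows (standard Terraform CLI commands; -auto-approve is optional and skips the confirmation prompt):

```shell
terraform validate              # check the configuration for syntax errors
terraform plan                  # preview what will be created or changed
terraform apply -auto-approve   # build the infrastructure
```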


Let’s begin…

Terraform Providers

Terraform is platform-agnostic; you can use it to manage bare-metal servers or cloud servers on AWS, Google Cloud Platform, OpenStack, and Azure. In Terraform lingo, these are called providers. You can get a sense of the scale by reading the full list of supported providers. Multiple providers can be used at the same time. We are using the AWS provider to provision resources in AWS. There are a few ways of supplying the provider with your AWS credentials; a standard way is as follows:

The most important and basic step: first, declare the provider (i.e., the cloud) we want to work with. In this configuration we supply our user profile and region.

*****************************************

provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"
}

****************************************

Step 1: Creating the key and security group

  • In this step we create a security group for our instance so that we can control inbound and outbound rules (inbound: who can connect to our instance, on which port and with which protocol; outbound: which outside requests the instance will respond to).
  • The key is created for secure login to our instance.

In cloud computing, inbound traffic is known as ingress and outbound traffic as egress.

*********************************************************

resource "aws_security_group" "sgbyterra" {
  name        = "sgbyterra"
  description = "Allow TCP inbound traffic"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "security_group_terra"
  }
}

resource "tls_private_key" "weboskey12" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "local_file" "mytaskkey_access" {
  content  = tls_private_key.weboskey12.private_key_pem
  filename = "weboskey12.pem"
}

resource "aws_key_pair" "generated_key" {
  key_name   = "weboskey12"
  public_key = tls_private_key.weboskey12.public_key_openssh
}

**** Creating a variable for the key name, to be used when launching the EC2 instance ****

variable "mykey1" {
  default = "weboskey12"
}

*****************************************************************
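Once apply finishes, the locally saved key can also be used for a manual login to the instance (the IP below is a placeholder for the instance's public IP):

```shell
chmod 400 weboskey12.pem      # SSH refuses keys that are world-readable
ssh -i weboskey12.pem ec2-user@<instance-public-ip>
```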

2. Launching the EC2 instance

To launch the instance we use an AMI image, and the key pair and security group created in the previous step.

**************************

resource "aws_instance" "my1stterra1" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  key_name      = var.mykey1
  # referencing the resource (rather than a hard-coded name) lets
  # Terraform order the creation correctly
  security_groups = [aws_security_group.sgbyterra.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.weboskey12.private_key_pem
    # "self" avoids the cycle of a resource referencing itself by name
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd -y",
      "sudo yum install php -y",
      "sudo yum install git -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd"
    ]
  }

  tags = {
    Name = "terraoslaunching"
  }
}

output "myav_zone" {
  value = aws_instance.my1stterra1.availability_zone
}

3. Launch one EBS volume and mount it on /var/www/html

****** Now we will create an EBS volume (like a pen drive). It retains its stored data even after the instance is terminated. ******

4. The developer has uploaded the code to a GitHub repo; the repo also contains some images.

5. Copy the GitHub repo code into /var/www/html

resource "aws_ebs_volume" "ebsbyterra" {
  availability_zone = aws_instance.my1stterra1.availability_zone
  size              = 1

  tags = {
    Name = "myebs"
  }
}

output "myebs" {
  value = aws_ebs_volume.ebsbyterra.id
}

resource "aws_volume_attachment" "ebs_attach" {
  device_name  = "/dev/sdp"
  volume_id    = aws_ebs_volume.ebsbyterra.id
  instance_id  = aws_instance.my1stterra1.id
  force_detach = true
}

resource "null_resource" "remote" {

  depends_on = [
    aws_volume_attachment.ebs_attach
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.weboskey12.private_key_pem
    host        = aws_instance.my1stterra1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # on this AMI, the volume attached as /dev/sdp shows up as /dev/xvdp
      "sudo mkfs.ext4 /dev/xvdp",
      "sudo mount /dev/xvdp /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/MIglaniAman/awstest1.git /var/www/html/"
    ]
  }
}

resource "null_resource" "local" {

  depends_on = [
    null_resource.remote
  ]

  # opens the deployed site in the Chrome browser
  # (assumes "chrome" is available on the local PATH)
  provisioner "local-exec" {
    command = "chrome ${aws_instance.my1stterra1.public_ip}"
  }
}

6. Create an S3 bucket, copy/deploy the images from the GitHub repo into the bucket, and make them publicly readable.

Creation of an AWS S3 Bucket

To upload our data (images, videos, documents, etc.) to Amazon S3, you must first create an S3 bucket in one of the AWS Regions. You can then upload any number of objects to the bucket. We use aws_s3_bucket_object to upload the image to S3.

*****************************************************************

resource "aws_s3_bucket" "bucketbyterraMIG" {
  bucket = "my-tf-test-bucket-aman-miglani-a"
  acl    = "public-read"

  tags = {
    Name        = "MybucketterraA"
    Environment = "Devv"
  }
}

resource "aws_s3_bucket_object" "amanmiglaniobject" {
  depends_on = [
    aws_s3_bucket.bucketbyterraMIG
  ]
  bucket = "my-tf-test-bucket-aman-miglani-a"
  key    = "myimage.jpg"
  # NOTE: source must be a local file path, not a URL; clone
  # https://github.com/MIglaniAman/awstest.git locally first so that
  # ours.jpg is available on disk (path here is illustrative)
  source = "ours.jpg"
  acl    = "public-read"
}

output "mys31" {
  value = aws_s3_bucket.bucketbyterraMIG.bucket_regional_domain_name
}

*****************************************************************

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Creation of an AWS CloudFront Distribution

CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you’re serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.

*****************************************************************

locals {
  s3_origin_id = "S3-my-tf-test-bucket-aman-miglani-a"
}

resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "my_origin_access_identity"
}

resource "aws_cloudfront_distribution" "s3_distribution_terra" {
  depends_on = [
    aws_s3_bucket_object.amanmiglaniobject,
  ]

  origin {
    domain_name = aws_s3_bucket.bucketbyterraMIG.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }

  enabled         = true
  is_ipv6_enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  price_class = "PriceClass_All"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.weboskey12.private_key_pem
    host        = aws_instance.my1stterra1.public_ip
  }

  # "self" must be used here: referencing the distribution by its own
  # resource name from inside the resource would create a cycle.
  # "sudo tee -a" appends with root privileges (replaces the fragile
  # 'sudo su <<END' heredoc from the draft).
  provisioner "remote-exec" {
    inline = [
      "echo \"<img src='http://${self.domain_name}/${aws_s3_bucket_object.amanmiglaniobject.key}' height='200' width='200'>\" | sudo tee -a /var/www/html/index.html"
    ]
  }

}

resource "null_resource" "locally" {

  depends_on = [
    aws_cloudfront_distribution.s3_distribution_terra
  ]

  provisioner "local-exec" {
    command = "chrome ${aws_cloudfront_distribution.s3_distribution_terra.domain_name}/${aws_s3_bucket_object.amanmiglaniobject.key}"
  }
}

*****************************************************************

*** This is the output; the image will be cached at a nearby edge location for up to one day (max_ttl = 86400 seconds) ***
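Because the whole stack is Terraform-managed, it can be torn down with a single command when you are done:

```shell
terraform destroy -auto-approve   # removes every resource this configuration created
```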

So, here I am done with my first task assigned by Mr. Vimal Daga in the Hybrid Cloud and Computing training. Thank you so much, Sir, for making me a tech enthusiast.
