Deploying a Webserver on an AWS EC2 Instance with S3 and CloudFront Using Terraform (Infrastructure as Code)

Hirendra Kumar
7 min read · Sep 12, 2020

Below is the problem statement of a task given to me during multi-cloud training.

→ We have to create/launch an application using Terraform:

1. Create a key pair and a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one EBS volume and mount it onto /var/www/html.

5. The developer has uploaded the code to a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change their permission to publicly readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Let’s discuss some terminology first.

Terraform —

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can stand up large infrastructure across different service providers such as AWS, GCP, and Azure.

Need for Terraform —

To set up a large infrastructure we usually touch almost every service provider, and remembering all the commands of every provider is not easy; a hand-rolled setup also leaves no documentation behind. With Terraform we don't need to remember each provider's commands: we declare what we want, and Terraform goes to the service providers and performs the setup automatically. The code itself also works as documentation. This is Infrastructure as Code.
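For instance, instead of remembering console clicks or CLI flags, a single resource block is enough to describe what we want, and Terraform works out the API calls. A minimal sketch (the AMI ID below is just a placeholder, not a real image):

resource "aws_instance" "demo" {
  ami           = "ami-00000000000000000"   # placeholder AMI ID
  instance_type = "t2.micro"
}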

Let's start with the solution.

First we have to log in to AWS using the AWS CLI:

>> aws configure

provider "aws" {
region = "ap-south-1"
profile = "hirendra"
}

Using this, Terraform logs in to AWS with the named profile.
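For reference, aws configure simply writes the access keys into the local credentials file ~/.aws/credentials under the profile name (the values below are placeholders):

[hirendra]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx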

>> Creating the key pair

variable "keyname" {
default = "key2211"
}
resource "tls_private_key" "mykey" {
algorithm = "RSA"
}
module "key_pair" {
source = "terraform-aws-modules/key-pair/aws"
key_name = "key2211"
public_key = tls_private_key.mykey.public_key_openssh
}

The above code generates an RSA key pair and registers it with AWS under the name key2211, which is kept in the variable keyname for later use. The private key works as the password for logging in to the EC2 instance.


A key pair named key2211 is created.
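If you also want the private key saved on disk for manual SSH logins, a small addition can write it out. A sketch using the hashicorp/local provider (the filename is my choice, not part of the original code):

resource "local_file" "private_key" {
  content         = tls_private_key.mykey.private_key_pem
  filename        = "key2211.pem"
  file_permission = "0600"   # keep the private key readable only by the owner
}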

>> Creating the security group

resource "aws_security_group" "secure" {
name = "myossecure"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "Security"
}
}

With the above code we create a security group allowing SSH, since we want to log in to the instance, and HTTP, since we are about to launch a webserver that must be reachable by end users.

The security group is created.

>> Now launching the EC2 instance

resource "aws_instance" "myOS" {
ami = "ami-09a7bbd08886aafdf"
instance_type = "t2.micro"
key_name = var.keyname
security_groups = ["myossecure"]
connection {
type = "ssh"
user = "ec2-user"
private_key = "${tls_private_key.mykey.private_key_pem}"
host = aws_instance.myOS.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum update -y",
"sudo yum install httpd -y",
"sudo yum install git -y",
"sudo yum install php -y",
"sudo systemctl restart httpd",
"sudo systemctl enable httpd",
]
}
tags = {
Name = "first_OS"
}
}

We launch the instance with the key pair and security group we created before.

Here Terraform logs in to the OS over SSH and installs the desired software: httpd, git, and the PHP interpreter, because we want to run an Apache httpd webserver whose webpage is written in PHP.

The instance is launched, tagged first_OS.
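To see the instance's public IP right after apply, instead of opening the AWS console, an optional output block (not part of the original code) can print it:

output "instance_public_ip" {
  value = aws_instance.myOS.public_ip
}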

>> Now creating an EBS (Elastic Block Store) volume and attaching it to the OS

resource "aws_ebs_volume" "ebs1" {
availability_zone = aws_instance.myOS.availability_zone
size = 1
tags = {
Name = "ebsstroage"
}
}
resource "aws_volume_attachment" "ebs_attachment" {
device_name = "/dev/sdh"
volume_id = aws_ebs_volume.ebs1.id
instance_id = aws_instance.myOS.id
force_detach = true
}

With the above code we create a 1 GiB volume named ebsstorage in the same availability zone as the instance and attach it to the instance as /dev/sdh.


>> Creating the S3 bucket

resource "aws_s3_bucket" "my_bucket" {
bucket = "hirendra-bucket"
acl = "public-read"
provisioner "local-exec" {
command = "git clone https://github.com/hirendrakumar/cloud_task.git images"
}
provisioner "local-exec" {
when = destroy
command = "echo Y | rmdir /s images"
}
tags = {
Name = "hirendra-bucket1234"
}
}
locals {
s3_origin_id = "s3_origin"
}

This code creates the S3 storage; in AWS S3, the storage container (folder) is called a bucket. Using local-exec we copy the images from GitHub to the local machine, and with the when = destroy argument we tell Terraform to delete that local copy when we run the destroy command.
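One caveat: echo Y | rmdir /s images is Windows-specific syntax. If you run Terraform from Linux or macOS, the destroy-time provisioner would look like this instead (a sketch with the same effect):

provisioner "local-exec" {
  when    = destroy
  command = "rm -rf images"   # remove the local clone on terraform destroy
}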

The bucket is created with the name hirendra-bucket. A bucket name must be globally unique, because AWS has to derive a unique URL from it to access the bucket.

>> Putting the image into the S3 bucket as an object

resource "aws_s3_bucket_object" "object"{
bucket = "hirendra-bucket"
acl = "public-read"
key = "vimal.jpeg"
source = "C:/Users/hirendra/Downloads/images/vimal.jpeg"
}

With the above code we upload the image from the local machine to the S3 bucket.

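As a quick sanity check, the object's direct S3 URL can be exposed through an optional output, built from attributes the resources already provide:

output "image_s3_url" {
  value = "https://${aws_s3_bucket.my_bucket.bucket_regional_domain_name}/${aws_s3_bucket_object.object.key}"
}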

>> Now creating the CloudFront distribution

resource "aws_cloudfront_distribution" "cloudfront" {
origin {
domain_name = aws_s3_bucket.my_bucket.bucket_regional_domain_name
origin_id = local.s3_origin_id
}

enabled = true
default_root_object = "vimal.jpeg"
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}

restrictions {
geo_restriction {
restriction_type = "none"
}
}
# SSL certificate for the service.
viewer_certificate {
cloudfront_default_certificate = true
}
}

CloudFront is now created, and we have provided the S3 origin ID so that the distribution fetches the images from S3 on behalf of clients.

CloudFront caches the object at the edge location nearest to each client, so it is served with low latency. CloudFront is mainly used for static content.


We will use the CloudFront URL to update the code.
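Since we need that URL, it is handy to print the distribution's domain name as an output as well (optional, not in the original code):

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.cloudfront.domain_name
}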

>> Now mounting the volume onto the folder

resource "null_resource" "nullremote" {
depends_on = [aws_volume_attachment.ebs_attachment]
connection {
type = "ssh"
user = "ec2-user"
private_key = "${tls_private_key.mykey.private_key_pem}"
host = aws_instance.myOS.public_ip
}

provisioner "remote-exec" {
inline = [
"sudo mkfs.ext4 /dev/xvdh",
"sudo mount /dev/xvdh /var/www/html",
"sudo rm -rf /var/www/html/*",
"sudo git clone https://github.com/hirendrakumar/cloud_task.git /var/www/html",
"sudo su << EOF",
"echo \"<img src=\"https://\"${aws_cloudfront_distribution.cloudfront.domain_name}\"/vimal.jpeg\">\" >> /var/www/html/index.html",
"EOF",
"sudo systemctl restart httpd",
]
}
}

Before mounting, we have to format the EBS volume; that is done with the mkfs.ext4 command. Note that although we attached the disk as /dev/sdh, Xen-based instance types expose it to the kernel as /dev/xvdh, which is the name we use here.

Then we log in to the OS and copy the code from GitHub into the directory /var/www/html.

Using the CloudFront URL, we put the image into the code. All the code now lives on the EBS volume, so it stays safe even if the OS goes down.
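One caveat: a mount made with the mount command does not survive a reboot. If that matters, an extra line in the same remote-exec inline list could append an /etc/fstab entry (a sketch, not in the original code):

"sudo sh -c 'echo /dev/xvdh /var/www/html ext4 defaults,nofail 0 2 >> /etc/fstab'",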

>> Starting Chrome

resource "null_resource" "local1"  {
depends_on = [
null_resource.nullremote,
]
provisioner "local-exec" {
command = "start chrome ${aws_instance.myOS.public_ip}"
}
}

With this code, Chrome opens automatically and loads the website.

All of the above setup is created with only a few Terraform commands.

>> terraform init

This installs the necessary provider plugins for the tasks we have declared.
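Optionally, you can preview the execution plan before creating anything:

>> terraform plan

This lists every resource Terraform is about to create, without changing anything.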

>> terraform apply -auto-approve

With the above command the whole infrastructure is created.

Terraform first prints what is to be done, then starts creating all the desired resources. After the remote login it installs the required packages (httpd, git, and php); it also creates the EBS volume and the CloudFront distribution, formats the volume, and finally launches Chrome.

The site is now launched automatically.

>> terraform destroy -auto-approve

This command automatically destroys the entire infrastructure.

The code now also works as documentation: with one command the entire infrastructure is created automatically.

THANK YOU !!

For the code, please visit —

https://github.com/hirendrakumar/cloud_task
