Deploying a Load Balancer with Web Servers Using Ansible

Hirendra kumar
9 min read · Jan 28, 2021

Task description:

✍️ Provision EC2 instances through Ansible.

✍️ Retrieve the IP addresses of the instances using the dynamic inventory concept.

✍️ Configure the web servers through an Ansible role.

✍️ Configure the load balancer through an Ansible role.

✍️ The target nodes of the load balancer should auto-update as per the status of the web servers.

HAProxy:

HAProxy is a high-performance, open-source load balancer and reverse proxy for TCP and HTTP applications. Users can make use of HAProxy to improve the performance of websites and applications by distributing their workloads. Performance improvements include minimized response times and increased throughput.

Let’s start.

First we have to create the EC2 instances. Here I am launching three: one will be the load balancer and the other two will be backend nodes for it.

To launch these instances on AWS, I wrote an Ansible playbook using the ec2 module provided by Ansible, along with the important attributes needed to launch the instances.

---
- hosts: localhost
  vars_files:
    - secure.yml

  tasks:
    - name: launching lb-server
      ec2:
        key_name: mykey11
        instance_type: t2.micro
        region: ap-south-1
        image: ami-0a9d27a9f4f5c0efc
        vpc_subnet_id: subnet-03a951776444f8ab7
        wait: yes
        count: 1
        instance_tags:
          name: lb
        assign_public_ip: yes
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        group_id: sg-0040e7dcb8558adf4
      register: lbb

    - name: launching web-servers
      ec2:
        key_name: mykey11
        instance_type: t2.micro
        region: ap-south-1
        image: ami-0a9d27a9f4f5c0efc
        vpc_subnet_id: subnet-03a951776444f8ab7
        wait: yes
        count: 2
        instance_tags:
          name: web
        assign_public_ip: yes
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        group_id: sg-0040e7dcb8558adf4
      register: webb

These tasks run on my Ansible controller node. I have given the instance type, region, image ID, subnet ID, and so on.

I have given count: 1 for the load balancer and count: 2 for the managed nodes, so that we can watch how the load balancer balances the traffic between them.

Ansible Vault:

We also have to give the playbook the credentials of the AWS user that I created earlier. Putting the credentials directly in a playbook is not at all good practice, so here I used Ansible Vault. It is like a box in which we put our credentials, and they are encrypted inside it. I created a file called secure.yml with Ansible Vault:

ansible-vault create secure.yml

Then we put the credentials inside the secure.yml file, and ansible-vault stores them encrypted.
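For reference, before encryption the contents of secure.yml are just two variables; the names must match the ones referenced in the playbook (the values below are placeholders):

# secure.yml (placeholder values; encrypted at rest by ansible-vault)
access_key: AKIAXXXXXXXXXXXXXXXX
secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx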

Encrypted keys in the vault (screenshot)

Dynamic Inventory:

Now we have to retrieve the IPs of the instances with a dynamic inventory. For this, first download two files provided by the AWS community: ec2.py and ec2.ini. ec2.py is a Python script whose code makes Ansible go to AWS and fetch the IPs dynamically for any ansible command; ec2.ini is its configuration file.

The links to download these files:

https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py
https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini
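A typical setup looks like this; the script uses boto, so the AWS credentials are exported as environment variables (the values are placeholders):

wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini
chmod +x ec2.py

# boto reads the credentials from the environment
export AWS_ACCESS_KEY_ID='XXXX'
export AWS_SECRET_ACCESS_KEY='XXXX'

# print every instance the script discovers, grouped by region, tags, etc.
./ec2.py --list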

As you can see, with this command we get the IPs of all the instances.

Adding hosts:

- meta: refresh_inventory

- name: adding hostnames
  add_host:
    hostname: "{{ item.public_ip }}"
    groups: web
  loop: "{{ webb.instances }}"

- name: wait for ssh
  wait_for:
    host: "{{ item.public_dns_name }}"
    port: 22
    state: started
  loop: "{{ webb.instances }}"

Next, I used the meta module; it refreshes the inventory because we now have more IPs, which we fetched dynamically.

Now we add the hostnames to a group. For this we use the add_host module and give a group name called web. For the hostname it retrieves the public IP of each instance from webb.instances, the registered output of the launch task. We loop here so that if new managed nodes come up in the future, they are automatically added to the web group.

Next we wait for SSH with the wait_for module. With this we wait for the instances to come up properly, so that Ansible can SSH into them and do the further configuration.

- hosts: tag_name_web
  tasks:
    - name: roles executing
      include_role:
        name: httpdrole

- hosts: tag_name_lb
  tasks:
    - name: roles
      include_role:
        name: mylb

Now we have the instances ready to be configured. For this we use their AWS tags as the hosts pattern and run the roles on them.

Ansible roles:

Ansible roles let you automatically load related vars_files, tasks, handlers, and other Ansible artifacts based on a known file structure. Once you group your content into roles, you can easily reuse them and share them with other users. With roles we can manage our code much more easily.

Creating a role for the web server:

To create a role, the command is:

ansible-galaxy init role_name

For the web server I created a role called httpdrole. In it I wrote tasks to install the httpd software along with PHP and to copy the web page to the servers. I also used a handler so that after every change to the web page, the web server's service is restarted. A sketch of the role is shown below.
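The original post showed the role's files in a screenshot; here is a minimal sketch of what they could look like, with the web page inlined for brevity (the PHP snippet and file paths are illustrative, not the author's exact code):

---
# roles/httpdrole/tasks/main.yml (illustrative sketch)
- name: installing httpd and php
  package:
    name:
      - httpd
      - php
    state: present

- name: copying the web page
  copy:
    content: |
      <pre>
      <?php print `/usr/sbin/ifconfig`; ?>
      </pre>
    dest: /var/www/html/index.php
  notify: restart web server

- name: starting the web server
  service:
    name: httpd
    state: started

---
# roles/httpdrole/handlers/main.yml (illustrative sketch)
- name: restart web server
  service:
    name: httpd
    state: restarted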

Running this role results in a fully configured web server.

Role for the load balancer:

We will go through this code piece by piece.

---
# tasks file for mylb
- name: installing the haproxy
  package:
    name: "haproxy"
    state: present

- name: changing port no
  replace:
    path: /etc/haproxy/haproxy.cfg
    regexp: "5000"
    replace: "8080"
  notify: hello

Above are the first tasks of the load balancer role.

First I install the load balancer software, HAProxy, with the package module. After installing HAProxy we have to change the port number, so here I changed it from 5000 (pre-written) to 8080, so that our load balancer will listen on that particular port. This task has a notify that triggers the next task, which is written in the handlers section of the role.
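For context, the stock haproxy.cfg on RedHat-family systems binds the front end to port 5000 (the exact layout varies by HAProxy version), so the replace task effectively turns

frontend main
    bind *:5000

into

frontend main
    bind *:8080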

---
# handlers file for mylb
- name: hello
  replace:
    path: /etc/haproxy/haproxy.cfg
    regexp: '(.*app.*)'
    replace: '#\1'

Above is the handler for the “hello” notification. It uses the replace module to comment out all the pre-written lines containing the word app, because they are dummy lines in the configuration file. In place of those lines we will put the IPs of our managed nodes.

The regexp ‘(.*app.*)’ means that wherever the word app appears anywhere in a line, that whole line is replaced with a commented-out version of itself via ‘#\1’.

Example:

server app1 127.0.0.1:80 check

This line contains the word app, so it is replaced with

# server app1 127.0.0.1:80 check

In this way we comment out those lines.

- name: handlers run fast
  meta: flush_handlers

Normally, handlers in a playbook run at the end of all the tasks, so I used the meta module with flush_handlers, which makes the handler run immediately after it is notified.

- name: comment removing
  replace:
    path: /etc/haproxy/haproxy.cfg
    regexp: '#backend app'
    replace: 'backend app'

- name: comment removing again
  replace:
    path: /etc/haproxy/haproxy.cfg
    regexp: "# default_backend app"
    replace: " default_backend app"

While commenting out the lines containing the word app, we also commented out some important lines that the load balancer needs to be fully functional. So here I uncomment the “#backend app” line and the “# default_backend app” line by replacing them with their uncommented versions.

- name: managed nodes
  blockinfile:
    path: /etc/haproxy/haproxy.cfg
    # insertafter takes a single-line regex: match the "balance roundrobin"
    # line inside the backend app section
    insertafter: 'balance\s+roundrobin'
    block: |
      {% for hosts in groups['web'] %}
      server app{{ loop.index }} {{ hosts }}:80 check
      {% endfor %}

- name: service
  service:
    name: haproxy
    state: started

The next task puts the IPs of the managed nodes into the load balancer's configuration file; in other words, it registers the backend nodes. For this I used the blockinfile module, which can insert a whole block of data into a file. The block is built with a Jinja for loop over the web group we created earlier, writing one server line per backend IP into the conf file. This way, the IPs of any future nodes will be registered automatically.
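After the role runs, the backend section of /etc/haproxy/haproxy.cfg should look roughly like this (the backend IPs are placeholders; the marker comments are added by blockinfile):

backend app
    balance     roundrobin
    # BEGIN ANSIBLE MANAGED BLOCK
    server app1 13.233.xx.xx:80 check
    server app2 65.0.xx.xx:80 check
    # END ANSIBLE MANAGED BLOCK
    #server app1 127.0.0.1:5001 check
    #server app2 127.0.0.1:5002 check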

After that we start the haproxy service.

Our playbook is now complete. The only thing remaining is to run it; after that, everything is set up automatically.

ansible-playbook test.yml --ask-vault-pass

Here we have to give the password for the vault file that we created.

As you can see, everything is now set up with Ansible.

In the AWS console we can see that all three instances have launched: one load balancer and two web servers.

Launched load balancer (screenshot)

Logging in to the HAProxy load balancer node and verifying everything.
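A manual check would look like this, assuming an Amazon Linux AMI (whose default user is ec2-user) and the key pair from the playbook:

ssh -i mykey11.pem ec2-user@<lb-public-ip>
sudo grep -A 8 'backend app' /etc/haproxy/haproxy.cfg
sudo systemctl status haproxy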

The IPs have come up (screenshot)

In the picture above you can see that the IPs of the backend servers are registered in the HAProxy configuration file, and that the demo lines in the file have been commented out. The load balancer is now ready to balance the load. If more backend servers come up in the future, they will be registered here automatically.

Now open the URL, http://13.233.81.86:8080, and we get the page.

In the code of the web page I intentionally put the ifconfig command, which prints the web server's IP, so that we can tell the load balancer is distributing the load between both of them.
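A quick way to watch the round robin from the terminal, using the load balancer address from above (grep 'inet ' pulls the IP lines out of the page's ifconfig output):

for i in 1 2 3 4; do
  curl -s http://13.233.81.86:8080/ | grep 'inet '
done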

server 1
server 2

Here you can see that the first time we get the content from server 1 and, on refreshing, we get the content from server 2, which means the load is now being balanced.

That’s all: we have successfully deployed the web servers and load balancer with Ansible.

Thanks for reading the blog…
