Launching A WordPress Application With MySQL Database in K8S Cluster On AWS Using Ansible!
Hello guys, as we all know, Kubernetes and Ansible play a vital role in the industry nowadays: Kubernetes for managing container technology like Docker, and Ansible for automating the configuration of any environment. So this article is about configuring a Kubernetes (K8s) multi-node cluster on AWS and launching a WordPress application with a MySQL database using Ansible. So let’s start.
What is Kubernetes multinode cluster?
A multi-node cluster is one where we use multiple worker nodes to avoid a single point of failure. Here the master node is connected to several worker nodes, and this arrangement is called a multi-node cluster. In this article, I will show how to deploy a K8s multi-node cluster on AWS using Ansible roles.
What is WordPress?
WordPress is a free and open-source content management system written in PHP and paired with a MySQL or MariaDB database. Its features include a plugin architecture and a template system, referred to in WordPress as themes.
Now to start this project we need to look at the required steps:
- First, we will launch four t2.micro EC2 instances of type Amazon Linux using Ansible dynamic inventory. One instance will be used for the master node and the other three for the worker nodes.
- Configure the master node.
- Configure the worker nodes.
- Create a MySQL database deployment in the K8s cluster.
- Create a WordPress application deployment in the K8s cluster.
- Expose the application to the outside world and use it.
So let’s describe each step.
Launching ec2 instances using ansible dynamic inventory
We don’t always know in advance how many instances we will need. When the instance count is high, manually launching each instance, noting its public IP, and storing it in the hosts file costs a lot of time. To avoid this kind of problem, I use Ansible dynamic inventory, where every step happens automatically.
Using a dynamic inventory is not a big deal; we just need to know how to use it.
I have already published an article on using dynamic inventory to launch AWS EC2 instances, so here I have only run my playbook for provisioning. If you want the details, go through that article.
playbook for launching ec2 instances
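A minimal sketch of such a provisioning playbook, assuming the amazon.aws collection is installed; the AMI ID, key pair, and security group names below are hypothetical placeholders:

```yaml
# Sketch only: launch 4 Amazon Linux t2.micro instances (1 master + 3 workers).
- hosts: localhost
  tasks:
    - name: Launch EC2 instances
      amazon.aws.ec2_instance:
        name: "k8s-node"
        instance_type: t2.micro
        image_id: ami-0123456789abcdef0   # placeholder Amazon Linux AMI
        key_name: "mykey"                 # placeholder key pair
        security_group: "k8s-sg"          # placeholder security group
        region: ap-south-1
        count: 4
```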
Let’s run the playbook.
Here you can see that my EC2 instances have been launched successfully.
Configure the master node:
Configuring the Kubernetes master node is not a single-click job; we have to do a lot of operations. If you try to configure it manually, you will find it is a bit more complicated than configuring, say, HAProxy or a web server. Let’s move forward one step at a time.
Steps for the configuration of master in Kubernetes cluster
Install Docker:
Start Docker and enable it:
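As Ansible tasks, the Docker steps might look like this (Amazon Linux provides the docker package in its default repository):

```yaml
- name: Install Docker
  package:
    name: docker
    state: present

- name: Start and enable Docker
  service:
    name: docker
    state: started
    enabled: yes
```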
Configure the Kubernetes repo:
To use Kubernetes, we need some packages that are not present in our default yum repository. That’s why we need to create a repo for the Kubernetes packages.
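A sketch of the repo task; the Google-hosted baseurl shown here was the standard one when this setup was common (it has since been deprecated in favour of pkgs.k8s.io, so verify the current URL):

```yaml
- name: Configure the Kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg
```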
Install Kubeadm:
Now we need to enable kubelet, so that after a reboot we don’t need to start it again and again. Kubelet is the node agent that connects the master node to the slave nodes: when a client hits the service port, the request is routed to the pods running on the slave nodes via the kubelet program.
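These two steps can be sketched as:

```yaml
- name: Install kubeadm, kubelet and kubectl
  yum:
    name:
      - kubeadm
      - kubelet
      - kubectl
    state: present

- name: Enable kubelet so it starts on every boot
  service:
    name: kubelet
    enabled: yes
```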
Pull docker images using kubeadm:
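The image pull is a single kubeadm command, run here through Ansible:

```yaml
- name: Pull the Kubernetes control-plane images
  command: kubeadm config images pull
```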
Change the Docker cgroup driver from cgroupfs to systemd and restart Docker:
Write the lines below in the /etc/docker/daemon.json file.
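The daemon.json content in question is the standard systemd cgroup-driver setting; as an Ansible task it might look like:

```yaml
- name: Set the Docker cgroup driver to systemd
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }
```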
After changing any daemon setting we need to restart Docker for the change to take effect. Here I have used the service module to restart Docker.
Installing iproute-tc:
Setting bridge-nf-call-iptables to 1:
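A sketch of these two networking prerequisites:

```yaml
- name: Install iproute-tc
  package:
    name: iproute-tc
    state: present

- name: Set bridge-nf-call-iptables to 1
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: "1"
    state: present
```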
Initializing Master:
.kube is the main folder where Kubernetes keeps the client configuration files. kubeadm prints these commands when the master is initialized, and they are run on the master node only to prepare it. We need to copy the content of /etc/kubernetes/admin.conf to $HOME/.kube/config, and then change the ownership of the config file.
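A sketch of the initialization tasks; the pod CIDR matches flannel’s default, and the preflight-error flags are commonly needed on 1-CPU/1-GB t2.micro instances:

```yaml
- name: Initialize the master
  command: >
    kubeadm init --pod-network-cidr=10.244.0.0/16
    --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

- name: Create the .kube directory
  file:
    path: "{{ ansible_env.HOME }}/.kube"
    state: directory

- name: Copy admin.conf as the kubectl config
  copy:
    src: /etc/kubernetes/admin.conf
    dest: "{{ ansible_env.HOME }}/.kube/config"
    remote_src: yes
    owner: "{{ ansible_user_id }}"
```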
Creating Flannel:
Flannel is a network plugin that connects the slave and master nodes internally. In the master-initialization step we already specified the range of IPs to be assigned to the pods on the respective slave nodes; managing the assignment of addresses from that range to the pods is done by flannel.
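Applying flannel is a single kubectl command; the manifest URL below was the commonly used one at the time (the project has since moved to the flannel-io organization, so check for the current URL):

```yaml
- name: Install the flannel network plugin
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```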
Generating Token:
Now we need to create the token that is provided while initializing the master, so that the workers can join. The output of the command is stored/registered in the tokens variable.
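The token generation and registration can be sketched as:

```yaml
- name: Generate the join command for the workers
  command: kubeadm token create --print-join-command
  register: tokens
```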
We have completed all the required steps to achieve configuration on the master node. Now we need to do configuration on Slave nodes. The Configuration on Slave nodes is much simpler as most of the steps are repeated.
Let’s run the playbook.
Configure the worker nodes
Steps for the configuration of Slaves in the Kubernetes cluster:
- Start Docker.
- Enable Docker.
- Configure the Kubernetes repo.
- Install kubeadm (it will automatically install kubectl and kubelet).
- Enable kubelet.
- Pull Docker images using kubeadm.
- Change the Docker cgroup driver from cgroupfs to systemd.
- Restart Docker.
- Install iproute-tc.
- Set bridge-nf-call-iptables to 1.
- Join the slave with the master.
The steps up to the join have already been explained above; we only need to join the slave with the master.
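Assuming the join command was registered on the master in a variable named tokens, and the master sits in an inventory group hypothetically named master, the join task might look like:

```yaml
- name: Join the worker to the cluster
  command: "{{ hostvars[groups['master'][0]]['tokens']['stdout'] }}"
```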
Let’s run the playbook.
Create a MySQL database deployment in the K8s cluster:
The headline shows that we need to launch WordPress with a backing MySQL database. So I’m launching WordPress and MySQL as pods, but you can also launch them using a YAML file. I created a role to do the same; you can create your own roles.
Now we need to create pods for WordPress and MySQL to launch them respectively.
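A sketch of the pod-launch tasks; the pod names, image tags, and credentials here are hypothetical placeholders:

```yaml
- name: Launch the MySQL pod
  command: >
    kubectl run mysql-db --image=mysql:5.7
    --env=MYSQL_ROOT_PASSWORD=rootpass
    --env=MYSQL_DATABASE=wpdb
    --env=MYSQL_USER=wpuser
    --env=MYSQL_PASSWORD=wppass

- name: Launch the WordPress pod
  command: kubectl run wordpress --image=wordpress
```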
Expose the application to the outside world:
Expose the pod to the outside world so that anyone can access the application through <public_ip:port>.
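Exposing the pod as a NodePort service can be sketched as (the pod name wordpress is a placeholder):

```yaml
- name: Expose the WordPress pod to the outside world
  command: kubectl expose pod wordpress --port=80 --type=NodePort
```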
Everything is done… Let’s check the output:
Note down the port number and access the application from a browser using the master’s public IP.