Ansible script to turn on TcpForwarding

First, the problem statement: imagine you are going to add a new node to your cluster. You have to make some changes on the new node, and in the meantime the existing nodes need to change as well. Writing an Ansible script makes this easy.

I'm going to break this into sections.

Introduction

Install and play with it

First, create about four instances in AWS (in this case I used Amazon Linux).

We will use one of them as the Ansible master and the others as nodes. While creating the instances you get .pem key files; keep them on your local machine, as we will use them later.

First, let's connect to the Ansible master from our local machine by executing the command below (##.126.255.### is the public IPv4 address and ansibleserver.pem is the key file for that instance):

ssh -i ansibleserver.pem ec2-user@##.126.255.###

Sometimes you may get an error like 'Permissions 0664 for ansibleserver.pem are too open'. This means your private key file must not be accessible by others, so execute the command below (with the correct path):

sudo chmod 600 /path/to/my/key.pem

Next, copy the key file to the Ansible master so it can later authenticate to the nodes:

scp -i ansibleserver.pem ansibleserver.pem ec2-user@ip-172-31-13-60:/home/ec2-user

Note: if you need to restrict the .pem file from other users, you can also use chmod 400 {keyfile}.pem (not mandatory).

Once you are connected, update the instance with sudo yum update. We need to enable the EPEL repository; for that, use the command below:

sudo yum-config-manager --enable epel

FYI:

yum-config-manager is a program that can manage the main yum configuration options, toggle which repositories are enabled or disabled, and add new repositories. (EPEL is the name of the repo; it provides lots of open-source packages to install via Yum and DNF.)

When you go to the /etc/ansible directory, the files we need are already there. :)

Create an Inventory

Why is it not working?

The other servers are configured to accept SSH connections, but the Ansible server doesn't know which key file to pass. Once the key is referenced in the inventory, we can ping the other nodes:
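One way to point Ansible at the key is to reference it per host in the inventory file itself. A minimal sketch of such a hosts file; the node names and IPs below are hypothetical placeholders (the key path matches where we copied the .pem earlier):

```shell
# Write a local inventory file; node-1/node-2 and the IPs are placeholders.
# ansible_ssh_private_key_file tells Ansible which key to use for each node.
cat > hosts <<'EOF'
[nodes]
node-1 ansible_ssh_host=172.31.13.61 ansible_ssh_user=ec2-user ansible_ssh_private_key_file=/home/ec2-user/ansibleserver.pem
node-2 ansible_ssh_host=172.31.13.62 ansible_ssh_user=ec2-user ansible_ssh_private_key_file=/home/ec2-user/ansibleserver.pem
EOF
```

With this in place, the ping command below can reach the nodes without any extra SSH flags.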

ansible all -i hosts -m ping

With that we can check connectivity to the other nodes.

Let's take another approach to solve the same issue.

Setting Up the Inventory File

First we override the configuration file and the hosts file:

mkdir ansible-kubernetes
cd ansible-kubernetes

cat ansible.cfg
[defaults]
inventory = ./dev

cat dev
[test]
test-1 ansible_ssh_host=x.x.x.x ansible_ssh_user=ubuntu

(replace x.x.x.x with the IP address of your Ansible master)
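The two listings above can be created in one go; a sketch of the same setup (x.x.x.x is left as a placeholder you must replace before running Ansible):

```shell
mkdir -p ansible-kubernetes

# Point Ansible at our own inventory file instead of /etc/ansible/hosts.
cat > ansible-kubernetes/ansible.cfg <<'EOF'
[defaults]
inventory = ./dev
EOF

# Inventory: replace x.x.x.x with your instance's IP address.
cat > ansible-kubernetes/dev <<'EOF'
[test]
test-1 ansible_ssh_host=x.x.x.x ansible_ssh_user=ubuntu
EOF

# Quick connectivity check, only if ansible is installed.
cd ansible-kubernetes
if command -v ansible >/dev/null 2>&1; then
  ansible test -m ping
fi
cd ..
```

Because ansible.cfg in the working directory takes precedence, any ansible command run from ansible-kubernetes/ will use the dev inventory automatically.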

Creating Kubernetes Manifests

Next, create the Kubernetes manifest files for a deployment and a pod in a separate namespace. We create a folder called k8s where we will store the Kubernetes manifest files.

mkdir k8s
cd k8s
cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: test
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: shashwot/nginx-more:latest
        ports:
        - containerPort: 80

cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: test
  labels:
    app: nginx
    tier: frontend
spec:
  containers:
  - name: nginx
    image: shashwot/nginx-more:latest

We should now have two files, deployment.yaml and pod.yaml, with the above configurations.
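The manual steps above can also be scripted; a sketch that writes both manifests non-interactively (same content as the listings above, with the namespace under metadata):

```shell
mkdir -p k8s

# Deployment manifest: three nginx replicas in the test namespace.
cat > k8s/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: test
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: shashwot/nginx-more:latest
        ports:
        - containerPort: 80
EOF

# Pod manifest in the same namespace.
cat > k8s/pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: test
  labels:
    app: nginx
    tier: frontend
spec:
  containers:
  - name: nginx
    image: shashwot/nginx-more:latest
EOF
```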

Creating the Ansible playbook

mkdir playbooks
cd playbooks
cat kubernetes.yaml
---
- hosts: '{{ host }}'
  tasks:

    # This will be installed on the remote host.
    - name: "Install kubernetes python package"
      pip:
        name: kubernetes
        state: present

    # Create a test namespace on the cluster without any manifest files.
    # This is an added advantage of Ansible.
    - name: "Create a k8s namespace"
      k8s:
        name: test
        api_version: v1
        kind: Namespace
        state: present

    # Copy pod.yaml and deployment.yaml to the remote node.
    - name: "Copy pod.yaml with playbook"
      copy:
        src: ../k8s/pod.yaml
        dest: /tmp/pod.yaml

    - name: "Copy deployment.yaml with playbook"
      copy:
        src: ../k8s/deployment.yaml
        dest: /tmp/deployment.yaml

    # Create a Kubernetes pod in test using the copied file.
    - name: "Create a pod"
      k8s:
        state: present
        namespace: test
        src: /tmp/pod.yaml

    # Check whether the pod is running on the cluster.
    - name: "Status of the pod"
      k8s_info:
        api_version: v1
        kind: Pod
        name: nginx
        namespace: test
      register: web_service

    # Create a Kubernetes deployment in test using the copied file.
    - name: "Create a deployment"
      k8s:
        state: present
        namespace: test
        src: /tmp/deployment.yaml

    # Clean up all the applied configurations.
    - name: "Ansible file module to delete multiple files"
      file:
        path: "{{ item }}"
        state: absent  # delete the files
      with_items:
        - /tmp/deployment.yaml
        - /tmp/pod.yaml

    # Clear the namespace on the cluster.
    - name: "Delete a k8s namespace"
      k8s:
        name: test
        api_version: v1
        kind: Namespace
        state: absent
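If you want to see what the registered web_service variable actually contains, a debug task can be appended to the play; a minimal sketch:

```yaml
    - name: "Show the registered pod status"
      debug:
        var: web_service
```

This prints the full result structure returned by the status task, which is useful when deciding which fields to assert on.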

Execution

With the above configuration, our folder structure should look like this:

ansible-kubernetes/
├── ansible.cfg
├── dev
├── k8s/
│   ├── deployment.yaml
│   └── pod.yaml
└── playbooks/
    └── kubernetes.yaml

Run the playbook from ansible-kubernetes/, passing the target host as an extra variable:

ansible-playbook playbooks/kubernetes.yaml -e host=test-1

For further reference I would recommend the following resource, which I found interesting:

https://spacelift.io/blog/ansible-playbooks

--

Gayan Sanjeewa
Software Engineer | Data Engineer | AI Enthusiast
