Used to test Ansible Engine/Tower and playbook refactoring - locally and in the cloud
This Repo's Purpose
The idea of this repo is for ME to walk through the process of creating a playbook and then refactoring it - tidying up the format, using different techniques, etc. - to learn more. There will be a branch for each phase of refactoring.
Most of the parameters are set either in the playbook OR in role/<role-name>/defaults/main.yml - THIS WILL BE CHANGED AS I GO THROUGH REFACTORING
The main branch will be a merge of the most recent working refactoring branch.
THE PRESENT MERGED BRANCH IS INITIAL
This repo uses the following from MY environment
- Ansible Engine / Ansible Tower VM (Local)
- K8s Master / Worker (Local)
- DigitalOcean Droplets - Ubuntu 20.04/20.10 and Centos 8.3 (Cloud)
The playbooks' aim is to deploy a K8s environment - 1 master and X workers - either locally (on existing VirtualBox VMs) or in the cloud on DigitalOcean (provisioning the droplets in the process).
There are 6 playbooks in the repo:
- module-role.yml - uses a role to deploy a droplet on DigitalOcean (ansible-engine digitalocean module)
  - used for basic testing of deploying droplets on DigitalOcean
- collection-tower.yml - uses a role to deploy a droplet on DigitalOcean (community.digitalocean collection)
  - used for basic testing of deploying droplets on DigitalOcean
- do-teardown.yml - used to teardown droplets on DigitalOcean (community.digitalocean collection)
- do-inventory.yml - used to play with the dynamic inventory (script) to query DigitalOcean (not plugin inventory)
- do-deploy-k8s.yml - uses a role to deploy X number of droplets on DigitalOcean and provision K8s (Centos/Redhat/Ubuntu) (community.digitalocean collection)
  - used for deploying droplets on DigitalOcean and then provisioning K8s master/worker nodes
- deploy-k8s.yml - uses a role to provision K8s on pre-provisioned VMs (Centos/Redhat/Ubuntu) (community.digitalocean collection)
  - used for provisioning K8s master/worker nodes on existing VMs
The focus will be on the last 2 playbooks
How To Use
Set your DigitalOcean Token as an environment variable
- export OAUTH_TOKEN=XXXXXXxxxxxxxxxxXXXXXXXXxx <-- I set this in a file which I source
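One way to keep the token out of your shell history is the sourced file mentioned above; a minimal sketch (do-token.env is a hypothetical filename - replace the placeholder with your real token):

```shell
# Hypothetical helper file holding the DigitalOcean token.
cat > do-token.env <<'EOF'
export OAUTH_TOKEN=XXXXXXxxxxxxxxxxXXXXXXXXxx
EOF

# Source it before running the playbooks so the DigitalOcean
# modules can pick the token up from the environment.
. ./do-token.env
echo "$OAUTH_TOKEN"
```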
Clone this repo - see installing section
Install DigitalOcean Collection - see installing section
Run the do-deploy-k8s playbook, which deploys DigitalOcean droplet(s) and then provisions them as a K8s master or as worker nodes, using 2 roles - 1 to provision the DigitalOcean droplet(s) and 1 to provision K8s on the droplet(s):
$ more k8s-hosts
[master]
k8s-master
[worker]
k8s-node1
k8s-node2
$ ansible-playbook do-deploy-k8s.yml -i k8s-hosts <--- Builds VMs with these names in the specified groups
The main.yml in the digitalocean-deploy role will, in addition to deploying the droplets, add them to the correct group - master or worker (code snippet below):
- name: Add host to group 'master'
  add_host:
    name: '{{ item.data.droplet.name }}'
    ansible_ssh_host: '{{ item.data.ip_address }}'
    groups: master
  when: item.data.droplet.name.find("master") != -1
  loop: "{{ droplet_info.results }}"

- name: Add host to group 'worker'
  add_host:
    name: '{{ item.data.droplet.name }}'
    ansible_ssh_host: '{{ item.data.ip_address }}'
    groups: worker
  when: item.data.droplet.name.find("master") == -1
  loop: "{{ droplet_info.results }}"
In the main.yml for the provision-k8s role, the droplets are provisioned either as a master or as a worker depending on which group they have been added to. Once the master and worker(s) are configured, the worker(s) are joined to the master (code snippet below).
- name: Join K8s workers to Master
  block:
    - name: Join worker to K8s-master
      command: "{{ hostvars[groups['master'][0]]['kube_join_command']['stdout'] }}"
      when: (ansible_hostname != "k8s-master") and (st.stat.exists == false)
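The kube_join_command fact read via hostvars above has to be registered on the master first; the snippet does not show that step. A minimal sketch of how the role might do it, using kubeadm's --print-join-command option (the task wording is hypothetical):

```yaml
# Hypothetical task, run against the master, that the worker-join
# task depends on - registers the kubeadm join command as a fact.
- name: Capture the join command on the K8s master
  command: kubeadm token create --print-join-command
  register: kube_join_command
  when: ansible_hostname == "k8s-master"
```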
Run the deploy-k8s playbook, which provisions the required number of K8s master and worker nodes on existing VMs, using 1 role to provision K8s on the VMs. It uses the same role as do-deploy-k8s.yml - provision-k8s.
$ more k8s-local
[master]
k8s-master ansible_host=192.168.0.104
[worker]
k8s-node1 ansible_host=192.168.0.105
$ ansible-playbook deploy-k8s.yml -i k8s-local <--- Provisions K8s on the existing VMs named in these groups
Additional Comments
- K8s environment is provisioned with CRI-O (not docker)
- K8s environment is provisioned with CILIUM CNI (you can choose something else, e.g. CALICO)
- DigitalOcean community collection is used - collections are the future
- ansible.cfg in the project will define collection path - please install collections (see below) before running playbooks
- presently doesn't use become in the playbooks - connects as root. Refactoring will create/use a user with sudo
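The CNI choice noted above could be surfaced as a role variable; a hypothetical sketch (the actual variable name and mechanism in the role may differ):

```yaml
# Hypothetical default in roles/provision-k8s/defaults/main.yml;
# the real variable name used by the role may differ.
k8s_cni: cilium   # switch to calico if preferred
```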
Ansible Tower
These playbooks will work in Tower if it has been configured correctly:
- project - this repo pulled into Ansible Tower (Tower will automatically pull in the DigitalOcean Collection via the requirements file)
- credential - new credential type for the DigitalOcean Token - set as env (injector configuration)
- credential - machine credential to connect to the new machines - ssh key
- inventory - inventory configured with groups/hosts - use source from project (k8s-hosts or k8s-local) or add the dynamic inventory script - digital_ocean.py
More will be added to this section later.
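The custom credential type mentioned above can be sketched as a Tower custom credential type definition; a minimal example, assuming the input field is named do_token (the field name is hypothetical):

```yaml
# Input configuration (Tower custom credential type)
fields:
  - id: do_token
    label: DigitalOcean OAuth Token
    type: string
    secret: true
required:
  - do_token

# Injector configuration (entered as a separate field in Tower) -
# exposes the token as the OAUTH_TOKEN env var the playbooks expect:
# env:
#   OAUTH_TOKEN: '{{ do_token }}'
```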
- Ansible Engine - Configuration Management and more
- DigitalOcean Ansible Collection - DigitalOcean Collection used for Ansible to interact with DigitalOcean Platform
- Kubernetes Production Environment - Knowledge of Kubernetes for container orchestration
- Cilium CNI - network connectivity between apps and containers - https://docs.cilium.io/en/stable/intro/
- Hubble - networking and security observability - https://docs.cilium.io/en/stable/intro/
For Ansible - use your package manager of choice OR pip:
$ pip install ansible
Clone this Repo
$ git clone [email protected]:unixdaddy/do_ansible.git
$ cd do_ansible
For the DigitalOcean Collection - use the requirements file ./collections/requirements.yml and the ansible-galaxy command. Note: Tower will do this automatically.
$ ansible-galaxy collection install -r ./collections/requirements.yml -p ./collections
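For reference, a minimal ./collections/requirements.yml for this repo might look like the following (a sketch, assuming only the community.digitalocean collection is required):

```yaml
# ./collections/requirements.yml - pulls in the DigitalOcean collection
collections:
  - name: community.digitalocean
```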
You will also need to have set up the following:
- user on target system (or use root)
- privilege escalation will be required if not using root
- ssh key authentication - public keys are inserted as part of deploying droplets on DigitalOcean
- set required variables in the playbook or in the role's defaults/main.yml, e.g. set region to nyc1, image to centos-7-x64, size to s-2vcpu-4gb, etc.
- for existing VMs you will need to update k8s-local to point to the IPs/DNS names of those machines.
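The variables mentioned above might be set like this in the role defaults (the variable names here are hypothetical - check the role's defaults/main.yml for the real ones):

```yaml
# Hypothetical role/<role-name>/defaults/main.yml values
droplet_region: nyc1
droplet_image: centos-7-x64
droplet_size: s-2vcpu-4gb
```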
This section provides insight into the general tools and environment I use for development.
My local environment consists of
- Ansible Engine 2.9 / Tower 2.8.1 Centos Stream 8.3 (VirtualBox - 2vCPU, 5GB Memory and 30GB Disk Space)
- K8s 1.20 Master RHEL 8.3 (VirtualBox - 2vCPU, 2GB Memory and 30GB Disk Space)
- K8s 1.20 Worker RHEL 8.3 (VirtualBox - 2vCPU, 2GB Memory and 30GB Disk Space)
- Ansible Automation Hub 1.2.1 RHEL 8.3 (VirtualBox - 2vCPU, 5GB Memory and 30GB Disk Space)
- Ansible Automation Hub 1.2.1 Centos Stream 8.3 (VirtualBox - 2vCPU, 5GB Memory and 30GB Disk Space)
- OpenSuSe Leap 15.2 (WSL)
- Ubuntu 20.04 LTS (WSL)
My Cloud environment consists of
- DigitalOcean Droplets - varied number, varied sizes of Ubuntu 20.04/20.10 and Centos 8.3
To interact with these environments I am using
- Visual Studio Code
- mobaxterm
- notepad++
- Github
End of Environment Details (Not all are used for this Repo)
Name | Role | |
---|---|---|
daddy, unix | withheld | learning |
Gratitude for assistance:
- All, Who-Came-Before-Me - everything
This project is licensed under the Apache License 2.0.
Company, Inc. or its affiliates. All Rights Reserved.