Kubernetes Lab: Quick and Easy

Aug 01, 2022

Introduction

At the end of 2017, Kubernetes had won the container orchestration wars, and rival container orchestration platforms transitioned to providing managed Kubernetes offerings instead. There were, and still are, multiple powerful enterprise offerings for this functionality. However, sometimes we really only need a sandbox for a development environment. In these situations, it is better to build and manage a lightweight, robust, and free Kubernetes cluster. Alongside the powerful enterprise offerings, several lightweight and “local” distributions arose from various companies. These can be readily utilized for a local Kubernetes lab environment.

In this article we will explain how to easily and quickly provision a lightweight Kubernetes cluster as a local development sandbox.

Providers

There are essentially five well-known provider options for a local Kubernetes lab environment:

  • minikube
  • k3s
  • kind
  • microk8s
  • k0s

We will not perform an in-depth comparison between these five options, because others have already performed this analysis sufficiently well, and also because it would be an article unto itself. However, I will briefly explain the decision here.

minikube: My first local Kubernetes labs in 2016-2018 used this because it was the only real option. However, it was somewhat cumbersome to use, and it was also prone to backwards-incompatible changes between versions.

k3s: This seemed rather promising when it was first released in early 2019. However, I and others quickly noticed that it had issues with bridged networking on a local device. This prevented k3s from provisioning an actual multi-node cluster inside virtual machines, which was its primary advantage over minikube. While it has improved over the years (and k3sup by Alex Ellis now exists), if this product interests you, you are better off forgoing the local setup and using Rancher in a dedicated environment.

kind: I am only somewhat familiar with this offering. It sounds promising and has good feedback, but I am unable to really determine how well it performs as a local lab environment.

microk8s: Despite the enormous red flag that it installs as a snap, this is the fastest and easiest to set up, and therefore we choose it for this demonstration. It also tends to “just work”, which is always something I appreciate. Since we are using the obvious candidate of Vagrant for an easy, portable, and disposable local lab environment, the snap stays isolated from our local system, which completely mitigates that concern.

k0s: This is supposed to be the new hotness, and based on current feedback it could definitely dethrone microk8s as the best option for a local Kubernetes sandbox, but I have not yet tried this offering. Look forward to it potentially appearing in a Part Two article.

Vagrant Configuration

The following Vagrantfile will provision a Kubernetes cluster with one control plane node and two worker nodes. Each node uses the official Ubuntu 22.04 box (ubuntu/jammy64) with the VirtualBox provider. The comments below explain the Ruby code within the Vagrantfile. The Ansible tasks are found and explained in the next section.

 # specify number of worker nodes and control plane ip as constants
 WORKERS = 2
 CONTROL_IP = '192.168.56.8'.freeze
 
 Vagrant.configure('2') do |config|
   config.vm.box = 'ubuntu/jammy64'
 
   # configure the kubernetes control plane node
   config.vm.define 'controller' do |controller|
     # assign the private network and hostname to enable connectivity
     controller.vm.network 'private_network', ip: CONTROL_IP
     controller.vm.hostname = 'microk8s.local'
 
     # specify the resources
     controller.vm.provider 'virtualbox' do |vb|
       vb.cpus = '2'
       vb.memory = '2048'
     end
 
     # software provision the control plane node with ansible
     controller.vm.provision :ansible do |ansible|
       ansible.playbook = 'microk8s.yml'
       # pass the number of workers and the control plane ip to the ansible tasks
       ansible.extra_vars = {
         workers:    WORKERS,
         control_ip: CONTROL_IP
       }
     end
   end
 
   # configure the kubernetes worker nodes
   (1..WORKERS).each do |i|
     config.vm.define "worker#{i}" do |worker|
       # dynamically assign the private network ip (e.g. 192.168.56.81) and the hostname to enable connectivity
       worker.vm.network 'private_network', ip: "#{CONTROL_IP}#{i}"
       worker.vm.hostname = "microk8s#{i}.local"
       # folder syncing is unnecessary for the worker nodes
       worker.vm.synced_folder '.', '/vagrant', disabled: true
 
       # specify the resources
       worker.vm.provider 'virtualbox' do |vb|
         vb.cpus = '1'
         vb.memory = '1024'
       end
 
       # software provision the worker node with ansible
       worker.vm.provision :ansible do |ansible|
         ansible.playbook = 'worker.yml'
         ansible.extra_vars = {
           # pass the zero-indexed worker number to the ansible tasks
           worker: i - 1
         }
       end
     end
   end
 end
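
For reference, the working directory simply contains this Vagrantfile alongside the Ansible files described in the next section (microk8s.yml, worker.yml, cluster_join_gen.yml, and vars.yml). Bringing the lab up is then a single command; Vagrant provisions the controller first, which generates the join information the workers later consume:

 # from the directory containing the Vagrantfile and the Ansible files
 vagrant up

 # re-run only the software provisioning later, if desired
 vagrant provision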

Ansible Provisioner

We use the following vars.yml, placed in the same directory as the Vagrantfile and the Ansible task files, so that values for the tasks can be easily updated from a single interface.

 kube_version: '1.23'
 extensions:
 - dns
 - dashboard
 - helm3
 - ingress
 - storage
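
If you would rather target a different Kubernetes release before editing kube_version, the available microk8s snap channels can be listed first; a quick check (run locally if snap is installed, or inside one of the boxes) might look like:

 # list the available microk8s snap channels (e.g. 1.23/stable)
 snap info microk8s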

The following is the set of tasks for provisioning a Kubernetes control plane node with microk8s as the provider. A moderate level of experience with Ansible, Kubernetes, and Linux may be required to fully understand the functionality in these tasks, but each task's name explains what it performs at a high level. All command module tasks are effectively idempotent (safe to re-run) unless otherwise specified. A version of Ansible >= 2.9 is assumed.

 ---
 - name: bootstrap an ubuntu microk8s control plane vagrant box
   hosts: all, localhost
   become: true
   vars_files:
   - vars.yml
 
   tasks:
   - name: install microk8s snap
     community.general.snap:
       name: microk8s
       state: present
       channel: "{{ kube_version }}/stable"
       classic: true
 
   - name: start microk8s
     ansible.builtin.command: microk8s.start
 
   - name: wait until microk8s is truly up
     ansible.builtin.command: microk8s.status --wait-ready
 
   - name: enable microk8s extensions
     ansible.builtin.command: microk8s.enable {{ item }}
     with_items:
     - "{{ extensions }}"
 
   - name: alias the microk8s kubectl and helm3
     ansible.builtin.command: snap alias microk8s.{{ item.key }} {{ item.value }}
     with_dict:
       kubectl: kubectl
       helm3: helm
 
   - name: label the dashboard as a cluster service so it appears under cluster-info
     ansible.builtin.command: kubectl -n kube-system label --overwrite=true svc kubernetes-dashboard kubernetes.io/cluster-service=true
 
   - name: capture kubeconfig information
     ansible.builtin.command: microk8s.config
     register: config_contents
 
   - name: display kubeconfig information
     ansible.builtin.debug:
       msg: "{{ config_contents.stdout_lines | regex_replace('10.0.2.15', control_ip, multiline=True) }}"
 
   - name: generate cluster join commands for workers
     ansible.builtin.include_tasks: cluster_join_gen.yml
     loop: "{{ range(workers) }}"
 
   - name: proxy the kubernetes dashboard in the background
     ansible.builtin.command: nohup microk8s dashboard-proxy &
     async: 65535
     poll: 0
 
   - name: explain what needs to be performed manually
     ansible.builtin.debug:
       msg: manually update the admin user token and the microk8s-cluster certificate-authority-data in the kubeconfig after each provision
 # cluster_join_gen.yml
 ---
 - name: capture cluster join command
   ansible.builtin.command: microk8s add-node
   register: cluster_join_contents
 
 - name: save cluster join information
   ansible.builtin.copy:
     dest: /vagrant/cluster_join{{ item }}
     content: "{{ cluster_join_contents.stdout | regex_search('microk8s join ' + control_ip + ':25000/(\\w+)', multiline=True) }}"
     mode: '0440'
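
For clarity, each generated cluster_join file lands in the synced project directory and contains a single join command matching the regex_search above, along the lines of the following (the token shown is only a placeholder):

 # hypothetical contents of cluster_join0
 microk8s join 192.168.56.8:25000/<token>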

The following is the set of tasks for provisioning a Kubernetes worker node with microk8s as the provider. As before, each task's name explains what it performs at a high level. The second command task, which joins the cluster, is unlikely to be idempotent, but it is also extremely unlikely that it will ever need to be re-executed with vagrant provision. A version of Ansible >= 2.9 is assumed.

 ---
 - name: bootstrap an ubuntu microk8s worker vagrant box
   hosts: all, localhost
   become: true
   vars_files:
   - vars.yml
 
   tasks:
   - name: install microk8s snap
     community.general.snap:
       name: microk8s
       state: present
       channel: "{{ kube_version }}/stable"
       classic: true
 
   - name: start microk8s
     ansible.builtin.command: microk8s.start
 
   - name: join kubernetes cluster
     ansible.builtin.command: "{{ lookup('ansible.builtin.file', 'cluster_join' + worker | string) }} --worker"

Results

After the normal vagrant up we will have everything we need to connect to our Kubernetes cluster. After updating the KUBECONFIG on our local device with the connection information captured by Ansible, we can begin interacting with the cluster via kubectl, helm, the SDKs, the API, etc.
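
As a rough sketch (the local file name below is an assumption, not something the provisioning creates for you), merging the captured kubeconfig on the local device might look like this:

 # save the kubeconfig printed by the ansible debug task to a file of your choosing
 # (the server address should already point at 192.168.56.8)
 vim ~/.kube/microk8s-config

 # point kubectl at both the new and the existing kubeconfig files
 export KUBECONFIG="$HOME/.kube/microk8s-config:$HOME/.kube/config"

 # optionally flatten everything into a single file and switch to the microk8s context
 kubectl config view --flatten > /tmp/kubeconfig-merged
 mv /tmp/kubeconfig-merged ~/.kube/config
 kubectl config use-context microk8s   # context name assumed from the microk8s defaults

With kubectl at version 1.24.3 on my local device, I observed the following output for kubectl version: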

Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.6-2+2a84a218e3cd52", GitCommit:"2a84a218e3cd52ee62c7c5aeb40c7281d5c5b0a0", GitTreeState:"clean", BuildDate:"2022-04-28T11:13:13Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}

Our initial test for connectivity passed with flying colors. We have a functioning connection to a local Kubernetes cluster. Now, let us take the next step and confirm our Kubernetes cluster is completely ready with the expected three nodes. This is straightforward with a simple kubectl get node:

NAME        STATUS   ROLES    AGE    VERSION
microk8s    Ready    <none>   164m   v1.23.6-2+2a84a218e3cd52
microk8s2   Ready    <none>   66s    v1.23.6-2+2a84a218e3cd52
microk8s1   Ready    <none>   7m2s   v1.23.6-2+2a84a218e3cd52

Now we have confirmed that we have a functioning multi-node cluster with the expected number of nodes. At this point we are fairly confident about the results, but just for fun we can check more exhaustively with a kubectl -n kube-system get all:

NAME                                             READY   STATUS    RESTARTS        AGE
pod/dashboard-metrics-scraper-69d9497b54-8hq78   1/1     Running   0               163m
pod/kubernetes-dashboard-585bdb5648-jhsll        1/1     Running   0               163m
pod/hostpath-provisioner-7764447d7c-xtzjt        1/1     Running   1 (8m18s ago)   163m
pod/coredns-64c6478b6c-x28md                     1/1     Running   0               165m
pod/calico-kube-controllers-564fb98b44-rj6jn     1/1     Running   0               165m
pod/metrics-server-679c5f986d-d2srb              1/1     Running   0               163m
pod/calico-node-gzphr                            1/1     Running   0               6m58s
pod/calico-node-8mt44                            1/1     Running   0               6m53s
pod/calico-node-s8nqc                            1/1     Running   0               110s

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns                    ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   165m
service/metrics-server              ClusterIP   10.152.183.139   <none>        443/TCP                  164m
service/dashboard-metrics-scraper   ClusterIP   10.152.183.94    <none>        8000/TCP                 164m
service/kubernetes-dashboard        ClusterIP   10.152.183.71    <none>        443/TCP                  164m

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/calico-node   3         3         3       3            3           kubernetes.io/os=linux   165m

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/calico-kube-controllers     1/1     1            1           165m
deployment.apps/coredns                     1/1     1            1           165m
deployment.apps/metrics-server              1/1     1            1           164m
deployment.apps/dashboard-metrics-scraper   1/1     1            1           164m
deployment.apps/hostpath-provisioner        1/1     1            1           164m
deployment.apps/kubernetes-dashboard        1/1     1            1           164m

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/calico-kube-controllers-6966456d6b     0         0         0       165m
replicaset.apps/calico-kube-controllers-564fb98b44     1         1         1       165m
replicaset.apps/coredns-64c6478b6c                     1         1         1       165m
replicaset.apps/metrics-server-679c5f986d              1         1         1       163m
replicaset.apps/dashboard-metrics-scraper-69d9497b54   1         1         1       163m
replicaset.apps/hostpath-provisioner-7764447d7c        1         1         1       163m
replicaset.apps/kubernetes-dashboard-585bdb5648        1         1         1       163m

Now we can be completely confident that we have a fully ready local Kubernetes cluster to use as a development and testing sandbox.

Conclusion

In this article, we explained how to quickly and easily provision a multi-node Kubernetes cluster in a local lab environment for sandbox and development purposes. You now have the freedom to experiment with Kubernetes without incurring costs, in a disposable and portable environment that does not interfere with anything else on your system. With this in your arsenal, you should be able to expedite progress on your Kubernetes development.
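
For instance, because the entire lab lives in Vagrant, it can be paused or thrown away at any time with the usual commands:

 # pause the lab and free up local resources
 vagrant halt

 # discard the lab entirely; a later vagrant up rebuilds it from scratch
 vagrant destroy -f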

If your organization is interested in accessible and portable sandbox and development environments, enhancing your Kubernetes presence, or improving your Vagrant environment and Ansible software provisioning usage, then please set up a free 30-minute coaching session with our Enterprise Architect, Derrick Sutherland, by clicking on any Coaching button on this website.