Container Management – How to Build a Kubernetes Development Cluster
Introduction
Navisite has recently started using Kubernetes and containers to increase efficiency with a continuous integration and continuous delivery (CI/CD) model for updates to internal service and support systems.
As a Principal Cloud Engineer here at Navisite, I thought it would be interesting to put together a test cluster to get an understanding of Kubernetes cluster management. It is very handy to be able to spin up a cluster as needed on any desktop machine I might be using, to test various aspects of cluster management before applying them to a production cluster.
This article focuses on that experience – building a platform for learning container management using a Kubernetes development cluster running on a local machine.
I am sharing that experience here for any of our clients who have been thinking about how to get started with understanding containerization. Please reach out to us if you are interested in exploring beyond experimenting with this development cluster, or even if you have questions/suggestions related to this project.
Future articles will focus on how to deploy Kubernetes production clusters in private and public cloud environments. Read on and have fun building a Kubernetes cluster on your local machine.
Project Overview
Containers have had a major impact on how cloud-native applications are developed and offer some benefits over the traditional “hypervisor” approach to application deployment. Containers provide a more reliable way to move software from one computing environment to another.
A container bundles the application, its entire runtime environment, and just the resources needed to run it into a single package. This is unlike an application built with standard virtualization practices, where each application typically lives in its own VM running a full operating system. Here is a visual comparison:
Containerized applications are much lighter weight (MB rather than GB in size) compared to virtual machines (VMs). A single server can host many more containers than virtual machines.
Alternatively, running containers inside VMs can also be beneficial, as it may reduce the number of VMs an application requires, potentially driving down virtualization licensing costs. It also makes it possible to run containers in cloud environments. In either scenario, container proliferation can quickly become difficult to manage. Kubernetes provides a container-centric management environment to simplify deployment and management.
Before Getting Started
This project is an automated deployment of a multi-node Kubernetes cluster on VMs, for those wanting to get familiar with container orchestration and with developing and deploying containerized applications from a local desktop or laptop.
The project is designed to run on macOS, Linux or Windows. Some basic Linux system administration skills are required. On any of these platforms, a user with local administrative privileges is needed to install the required software packages on the local machine where the VMs reside.
The supplied Vagrantfile handles provisioning of the VMs and uses embedded shell scripts to configure the guest OS and deploy Kubernetes and Docker. While shell scripting is not the most elegant approach, it keeps the project easy to read and modify without requiring tools like Ansible, SaltStack, Chef or Puppet.
Variables have been defined so the VM configuration parameters can be easily modified, for example to expand the number of worker nodes in the cluster. Note that while it is possible to create a multi-master Kubernetes deployment, it is unnecessary overhead for a development cluster.
A note on security: in a production environment there are a number of security considerations that should be understood before deploying a container environment. These considerations are outside the scope of this project and were not applied here. Security in this environment is only as good as the perimeter of the laptop or desktop the VMs run on, and is the responsibility of the user.
A high-level architecture drawing is provided in the following figure:
Fig. 1 Architecture Diagram
Software Prerequisites
On the local machine (Mac OS, Windows or Linux) install the following applications in the order listed below, following the instructions from the respective websites (a quick version check to confirm the installs is shown after the list):
- Vagrant (Deployment tool for building the environment)
- VirtualBox (Virtual Machine provider)
- VirtualBox Extensions (add-on software needed for VirtualBox guests)
- Git (utility needed for downloading this project from GitHub)
- Minikube (Used for generating a unique token for multi-node cluster build)
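Once everything is installed, a quick check from a terminal confirms the tools are on the PATH (version numbers will differ on your machine):
[LocalMachine]$ vagrant --version      # Vagrant deployment tool
[LocalMachine]$ VBoxManage --version   # VirtualBox command-line interface
[LocalMachine]$ git --version          # Git client
[LocalMachine]$ minikube version       # Minikube, used only for token generation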
Cluster Installation Overview
This project is intended as a learning tool and should not be considered a production-level deployment of Kubernetes.
The cluster consists of a single master node and a user-defined number of worker nodes. All nodes run Ubuntu 18.04 (ubuntu/bionic64) in VirtualBox virtual machines.
By default, the “Addon” features Kubernetes Dashboard and MetalLB load balancer are installed, along with an NGINX web server to demonstrate that the cluster is working properly after installation.
The Kubernetes Dashboard is deployed with role-based access control (RBAC) and token authentication. The installation instructions provide commands for accessing the dashboard from the local system the cluster is installed on.
The default private internal network 172.16.35.0/24 will be created and nodes are assigned static addresses, starting at 172.16.35.100 for the master. The nodes can be accessed using the following command when run from the same directory the `vagrant up` command was executed from during installation. Replace NodeName with a VM hostname from Table 1.
[LocalMachine]$ vagrant ssh NodeName
Table 1. List of nodes and IP addresses
| VM Hostname | IP Address |
| --- | --- |
| master | 172.16.35.100 |
| node1 | 172.16.35.101 |
| node2 | 172.16.35.102 |
If more than two worker nodes are created, the pattern continues: node3 with IP 172.16.35.103, node4 with 172.16.35.104, and so forth. Note that each node's /etc/ssh/sshd_config file has been modified to allow SSH login via the “private network” (the 172.16.35.0 network).
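The exact edit is made by the embedded provisioning scripts; a common adjustment for Vagrant boxes, and my assumption of what is happening here, is re-enabling password authentication so the vagrant user can SSH in directly over that network:
[master]$ sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config   # assumption: allow password logins
[master]$ sudo systemctl restart ssh                                                                     # apply the sshd_config change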
Cluster provisioning scripts for the master and worker nodes are embedded in the Vagrantfile as $masterscript and $workerscript. These are fairly straightforward bash shell scripts; check the echo statements in the code to understand the operations.
Edit the variables as needed. Note: a unique token value must be provided for KUBETOKEN. Do not skip the Minikube prerequisite, as it is required for generating the token.
Currently Flannel is the only network overlay the provisioning script provides. If a different network overlay is desired, the embedded $masterscript can be edited.
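For orientation, the master provisioning logic boils down to a standard kubeadm bootstrap followed by applying the Flannel manifest. The sketch below illustrates that general flow with the Vagrantfile variable values shown as shell-style placeholders; it is not the literal contents of $masterscript, and the Flannel manifest URL is the commonly used one from that era:
[master]$ sudo kubeadm init --token ${KUBETOKEN} --apiserver-advertise-address=${MASTER_IP} --pod-network-cidr=${POD_NET_CIDR}
[master]$ mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
[master]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[master]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The worker nodes then join with the same token, roughly:
[node1]$ sudo kubeadm join ${MASTER_IP}:6443 --token ${KUBETOKEN} --discovery-token-unsafe-skip-ca-verification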
Vagrantfile Customization
“Table 2. Variable Defaults” displays the default values for the variables defined in the Vagrantfile. These should be edited as described in “Table 3. Variable Definitions”. For Linux or Mac OS, use a command-line text editor like vi; for Windows, try Notepad++.
IMPORTANT: KUBETOKEN should be a uniquely generated value created with Minikube; instructions are provided in the “Cluster Installation Procedure” section below.
If changing the VM_SUBNET and NODE_OCTET values, check the “Add-ons” section for other required edits (or the Add-ons may not work properly).
Table 2. Variable Defaults
| Variable | Default Value |
| --- | --- |
| KUBETOKEN | "03fe0c.e57e7831b69b2687" (replace with unique token from Minikube) |
| VM_SUBNET | "172.16.35." |
| NODE_OCTET | 100 |
| MASTER_IP | #{VM_SUBNET}#{NODE_OCTET} |
| POD_NET_CIDR | "10.244.0.0/16" |
| BOX_IMAGE | "ubuntu/bionic64" |
| NODE_COUNT | 2 |
| CPU | 1 |
| MEMORY | 1024 |
Table 3. Variable Definitions
| Variable | Definition |
| --- | --- |
| KUBETOKEN | Generate a unique token with Minikube per the Cluster Installation Procedure, then copy and paste the value into the Vagrantfile, replacing the default. |
| VM_SUBNET | Default is "172.16.35.". Change accordingly if the default creates an IP conflict with the local machine. Do not overlap with POD_NET_CIDR. |
| NODE_OCTET | Default is 100. The master will get 100, node1 101, node2 102, etc. |
| MASTER_IP | Default is VM_SUBNET + NODE_OCTET. |
| POD_NET_CIDR | Default is "10.244.0.0/16". This value is required for Flannel to run. |
| BOX_IMAGE | Default is "ubuntu/bionic64". Changing the OS value may require script changes. |
| NODE_COUNT | Default is 2. Set the desired number of worker nodes. |
| CPU | Default is 1. Recommend at least 2 if the system has the resources. |
| MEMORY | Default is 1024 (MB). Recommend at least 2048 if the system has the resources. |
NOTE: If changing VM_SUBNET or NODE_OCTET, be sure to check the “Add-ons” section, as IP changes in the layer2.config-yaml configuration file for MetalLB will require edits.
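After editing, a quick syntax check of the Vagrantfile catches typos before provisioning (assuming a Vagrant version that includes the validate subcommand):
[LocalMachine]$ vagrant validate
Vagrantfile validated successfully.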
Cluster Installation Procedure
Step 1
KUBETOKEN
Generate a unique token from the Minikube VM using the following command (the output is the token value to paste into the Vagrantfile):
[Minikube]$ kubeadm token generate
04ff0b.e57e683ec69b2587
Open a terminal session on the LocalMachine and download the repository https://github.com/ecorbett135/kubernetes-dev-cluster.git:
[LocalMachine]$ git clone https://github.com/ecorbett135/kubernetes-dev-cluster.git
Step 2
Ensure all variables have been edited to the desired values and KUBETOKEN is a uniquely generated token value.
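A quick way to confirm the token was updated before provisioning (assuming the Vagrantfile sits at the root of the cloned repository):
[LocalMachine]$ grep KUBETOKEN kubernetes-dev-cluster/Vagrantfile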
Install and configure the cluster from the LocalMachine:
[LocalMachine]$ cd kubernetes-dev-cluster
[LocalMachine]$ vagrant up
Once installation completes, the final line of output will look something like:
node2: Run 'kubectl get nodes' on the master to see this node join the cluster.
Step 3
Log in using SSH with a port forward, check node status, start the kubectl proxy, and get the dashboard token (copy it to paste into the web browser).
Windows Hint: Git for Windows includes a Bash shell that can be used for these commands.
General Hint: the vagrant user password is vagrant. Change it using the passwd command from within the VM.
[LocalMachine]$ ssh -L 8001:127.0.0.1:8001 vagrant@172.16.35.100
[master]$ kubectl -n kube-system get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   28m   v1.11.2
node1    Ready    <none>   27m   v1.11.2
node2    Ready    <none>   26m   v1.11.2
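The “start the kubectl proxy” step is run on the master; kubectl proxy binds to 127.0.0.1:8001 by default, which is the port the SSH port forward above exposes on the local machine:
[master]$ kubectl proxy &
Starting to serve on 127.0.0.1:8001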
[master]$ kubectl -n kube-system describe secret $(ks get secret | awk '/^admin-user/{print $1}') | awk '$1=="token:"{print $2}'
eyJhbGciOiJSUzI1NiIsImwpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXg2OTR2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3Y2F1MjJjZi1iNDZkLTExZTgtOWZkMS0wMjJmNjJjZDllMjIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.PCGqgoVvJSFk8hP447cAi6VsLtvbQa_UxhdijdBK6P6i2TOfSzmTShI2gIyUGVOIiLp8RhbjbiZ_m9Cpi404dw5zKhjGcgUOUj-KpgpDgIDiO1GFeE6EHkrmni_ig0vbMF5AEemvtCdp6VS8sNqP6t-LatV-AL4S-K1i_N79wcpOCiIzdtD0itoXspz63hDt4zvRhGmLhAGIDPqT_8H79eOdxEkIjb-LmHJg6yvp0ApSCBGDJJRgDLRaP_xS0m913EbPIK6O6gGB2zER0JB7nMdYxHByDJwKZwoZZjHp6h42f53CjKp9pjTXcufjMLyIcV80ui76PPrrB3VoWHlLQ
Step 4
Copy the kubectl describe secret output from the previous step to the clipboard.
From the local machine the VMs are running on, enter the following URL or click below:
https://127.0.0.1:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Now select the TOKEN radio button and paste the token copied from the terminal session into the provided field.
Cluster Administration Tips
Note that the master node's vagrant user .bashrc is configured with some aliases:
alias kc='kubectl'
alias kcw='kubectl -o wide'
alias ks='kubectl -n kube-system'
alias ksw='kubectl -n kube-system -o wide'
These are shortcuts to increase efficiency. For example, instead of the following command:
[master]$ kubectl -n kube-system get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   28m   v1.11.2
node1    Ready    <none>   27m   v1.11.2
node2    Ready    <none>   26m   v1.11.2
node3    Ready    <none>   24m   v1.11.2
use the ks alias:
[master]$ ks get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   28m   v1.11.2
node1    Ready    <none>   27m   v1.11.2
node2    Ready    <none>   26m   v1.11.2
node3    Ready    <none>   24m   v1.11.2
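The other aliases follow the same pattern, for example:
[master]$ kc get pods --all-namespaces   # same as: kubectl get pods --all-namespaces
[master]$ ks get pods -o wide            # same as: kubectl -n kube-system get pods -o wide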
Stopping and Restarting the Cluster
It is important to note that VM management should be done with Vagrant. This is because a vm.synced_folder option is used in the Vagrantfile, and this folder will not mount properly to the VMs if they are managed directly from the VirtualBox console.
1. To stop the cluster, run vagrant halt from the local machine, from within the directory the cluster was deployed:
[LocalMachine]$ vagrant halt
==> node2: Attempting graceful shutdown of VM...
==> node1: Attempting graceful shutdown of VM...
==> master: Attempting graceful shutdown of VM...
2. To start the cluster, run vagrant up from the local machine, from within the directory the cluster was deployed:
[LocalMachine]$ vagrant up
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'node3' up with 'virtualbox' provider...
==> master: Checking if box 'ubuntu/bionic64' is up to date...
==> master: Clearing any previously set forwarded ports...
==> master: Clearing any previously set network interfaces...
==> master: Preparing network interfaces based on configuration...
    master: Adapter 1: nat
    master: Adapter 2: hostonly
==> master: Forwarding ports...
    master: 22 (guest) => 2222 (host) (adapter 1)
==> master: Running 'pre-boot' VM customizations...
==> master: Booting VM...
==> master: Waiting for machine to boot. This may take a few minutes...
    master: SSH address: 127.0.0.1:2222
    master: SSH username: vagrant
    master: SSH auth method: private key
    master: Warning: Connection reset. Retrying...
    master: Warning: Remote connection disconnect. Retrying...
    master: Warning: Connection reset. Retrying...
==> master: Machine booted and ready!
==> master: Checking for guest additions in VM...
==> master: Setting hostname...
==> master: Configuring and enabling network interfaces...
==> master: Mounting shared folders...
    master: /vagrant => /Users/ecorbett/Documents/repos/k8s-ubuntu-vagrant
==> master: Machine already provisioned. Run `vagrant provision` or use the `--provision` ...
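A few other Vagrant commands are handy for day-to-day management of the cluster VMs (run from the same directory as the Vagrantfile):
[LocalMachine]$ vagrant status       # show the current state of all VMs in this project
[LocalMachine]$ vagrant ssh master   # open a shell on a node by hostname
[LocalMachine]$ vagrant destroy -f   # delete all cluster VMs; a new vagrant up rebuilds the cluster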
Add-ons
Load Balancer
MetalLB is a load balancer implementation for Kubernetes, primarily designed for bare-metal K8s installs. MetalLB installs by default; if installation is not desired, it can be commented out in the Vagrantfile by putting a # at the beginning of these four lines:
#echo "Installing addon: Metallb (Loadbalancer)"
#kubectl apply -f https://raw.githubusercontent.com/ecorbett135/k8s-ubuntu-vagrant/master/addon/metallb/metallb-install.yaml
#kubectl apply -f https://raw.githubusercontent.com/ecorbett135/k8s-ubuntu-vagrant/master/addon/metallb/layer2.config-yaml
#kubectl apply -f https://raw.githubusercontent.com/ecorbett135/k8s-ubuntu-vagrant/master/addon/metallb/nginx-loadbalancer-test-deployment.yaml
For this project a simple Layer 2 configuration is sufficient. By default the load balancer has a defined IP pool range of 172.16.35.240-172.16.35.250. Changing the default VM_SUBNET will require editing the addresses range in the configuration below to match the new subnet value. The range will also need to change if NODE_OCTET places node addresses within 240-250.
Contents of ~/kubernetes-dev-cluster/addon/metallb/layer2.config-yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 172.16.35.240-172.16.35.250
Web Server Deployment:
An NGINX web server is deployed with a “LoadBalancer” service configuration by default. It can typically be accessed at the first IP address in the range defined in layer2.config-yaml, which by default is https://172.16.35.240.
The index.html is loaded from the ~/kubernetes-dev-cluster/addon/metallb/ directory on the local machine. This file can be customized by the user.
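A couple of quick checks from the master confirm the add-ons are up and show the external IP MetalLB assigned to the NGINX test service (a sketch; exact pod and service names depend on the deployment manifests):
[master]$ kubectl -n metallb-system get pods   # MetalLB controller and speaker pods should be Running
[master]$ kubectl get svc                      # the NGINX test service should list an EXTERNAL-IP from the 172.16.35.240-250 pool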
Thank You
Thanks to Steve Carlton, Chris Moore and Gary Pratt from the Navisite OSS team for their input.
Thanks to Javid Azadzoi from Navisite Messaging team for his objective view and coaching.
Thanks to Scott Hyslep from Navisite ACS team for validating the process on the Windows Platform.