{"id":535,"date":"2018-10-08T13:34:22","date_gmt":"2018-10-08T13:34:22","guid":{"rendered":"https:\/\/cloudmanagement.navisite.com\/?p=535"},"modified":"2020-01-08T04:28:09","modified_gmt":"2020-01-08T04:28:09","slug":"container-management-how-to-build-a-kubernetes-development-cluster","status":"publish","type":"post","link":"https:\/\/www.navisite.com\/blog\/container-management-how-to-build-a-kubernetes-development-cluster\/","title":{"rendered":"Container Management – How to Build a Kubernetes Development Cluster"},"content":{"rendered":"
Navisite has recently started using Kubernetes and containers to increase efficiency with a continuous integration and continuous delivery (CI\/CD) model for updates to internal service and support systems. Containers have had a major impact on how cloud-native applications are developed, and they offer several benefits over the traditional \u201chypervisor\u201d approach to application deployment, providing a more reliable way to move software from one computing environment to another. This project is an automated deployment of a multi-node Kubernetes cluster on local VMs, for those wanting to get familiar with container orchestration and with developing and deploying containerized applications from a local desktop or laptop. <\/p>\n On the local machine (macOS, Windows or Linux), install the following applications in the order listed below, following the instructions from the respective websites:<\/p>\n This project is intended as a learning tool and should not be considered a production-level deployment of Kubernetes<\/em><\/strong>. <\/p>\n
\nAs a Principal Cloud Engineer here at Navisite, I thought it would be interesting to put together a test cluster to gain an understanding of Kubernetes cluster management. It is very handy to be able to spin up a cluster as needed on any desktop machine I might be using, to test various aspects of cluster management before applying them to a production cluster.
\nThis article focuses on that experience – building a platform for learning container management using a Kubernetes development<\/u><\/em> cluster running on a local machine.
\nI am sharing that experience here for any of our clients who have been thinking about how to get started with understanding containerization. Please reach out to us if you are interested in exploring beyond experimenting with this development cluster, or even if you have questions\/suggestions related to this project.
\nFuture articles will focus on how to deploy Kubernetes production clusters in private and public cloud environments. Read on and have fun building a Kubernetes cluster on your local machine.<\/p>\nProject Overview<\/h3>\n
\nA container bundles an application, its entire runtime environment and just the resources needed to run it into a single package. This is unlike an application built with standard virtualization practices, where each application typically lives in its own VM and runs its own operating system. Here is a visual comparison:
\n
\n
\nContainerized applications are much lighter weight (megabytes rather than gigabytes in size) than virtual machines (VMs), so a single server can host many more containers than VMs.
\nAlternatively, running containers inside VMs can also be beneficial, as it may reduce the number of VMs an application requires, potentially driving down virtualization licensing costs. It also makes it possible to run containers in cloud environments. In either scenario, container proliferation can quickly become difficult to manage. Kubernetes provides a container-centric management environment that simplifies deployment and management.<\/p>\nBefore Getting Started<\/h3>\n
\nThe project is designed to run on macOS, Linux or Windows, and some basic Linux system administration skills are required. On any platform, you will need a user with local administrative privileges to install the required software packages on the local machine where the VMs will reside.
\nThe supplied Vagrantfile handles the provisioning of the VMs and uses embedded shell scripts to provision the guest OS and the Kubernetes\/Docker deployment. While shell scripting is not the most efficient approach, it keeps the project easy to modify without requiring tools such as Ansible, SaltStack, Chef or Puppet.
\nVariables have been defined for easy modification of the VM configuration parameters, including expanding the number of worker nodes in the cluster. Note that while it is possible to create a Kubernetes deployment with multiple master nodes, that is unnecessary overhead for a development cluster.
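The variable-driven approach can be sketched in plain shell. This is a hypothetical illustration, not the project's actual Vagrantfile code: the variable names and the `k8s-master`\/`k8s-worker` hostnames are assumptions, though the 172.16.35.0\/24 subnet matches the project default.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: how a worker-count variable might expand into node
# hostnames and static IPs. Names are assumptions; the subnet is the default.
WORKER_COUNT=2      # raise this to add worker nodes to the cluster
SUBNET="172.16.35"  # private internal network created by the Vagrantfile
BASE_HOST=100       # the master gets .100; workers get .101, .102, ...

echo "k8s-master ${SUBNET}.${BASE_HOST}"
for i in $(seq 1 "${WORKER_COUNT}"); do
  echo "k8s-worker${i} ${SUBNET}.$((BASE_HOST + i))"
done
```

Raising `WORKER_COUNT` is all that is needed to grow the cluster; each new worker simply takes the next address in the subnet.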
\nA note on security \u2013 in a production environment there are a number of security considerations that should be understood before deploying a container environment. These considerations are outside the scope of this project and were not applied here. Security in this environment is only as good as the perimeter of the laptop or desktop the VMs run on, and is the responsibility of the user.
\nA high-level architecture drawing is provided in the following figure:<\/p>\n<\/a>Fig. 1 Architecture Diagram<\/h6>\n
Software Prerequisites<\/h3>\n
\n
Cluster Installation Overview<\/h3>\n
\nThe cluster consists of a single master node and a user-defined number of worker nodes. All nodes run the Linux distribution Ubuntu 18.04 (ubuntu\/bionic64) in VirtualBox virtual machines.
\nBy default, the \u201caddon\u201d features Kubernetes Dashboard and MetalLB load balancer are deployed, along with an NGINX web server, to demonstrate that the cluster is working properly after installation.
\nThe Kubernetes Dashboard is deployed with role-based access control (RBAC) token authentication. The installation instructions provide commands for accessing the dashboard from the local system the cluster is installed on.
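The dashboard-access steps look roughly like the following sketch. The `kube-system` namespace and the `admin-user` service-account name are assumptions here \u2013 substitute whatever the installation instructions actually create.

```shell
#!/usr/bin/env bash
# Sketch of fetching an RBAC login token for the Kubernetes Dashboard.
# Namespace and service-account name are assumptions; adjust to your install.
{ command -v kubectl >/dev/null 2>&1 && kubectl get ns >/dev/null 2>&1; } \
  || { echo "no reachable cluster; run this where the kubeconfig lives"; exit 0; }

# 1. Find the token secret for the dashboard service account and print the token.
SECRET=$(kubectl -n kube-system get secret | awk '/admin-user/ {print $1; exit}')
kubectl -n kube-system describe secret "${SECRET}" | awk '/^token:/ {print $2}'

# 2. Start a local proxy (long-running; stop with Ctrl+C):
#      kubectl proxy
# The dashboard is then reachable in a browser via localhost:8001 under the
# API-server proxy path for the kubernetes-dashboard service.
```

Paste the printed token into the dashboard's token login prompt once the proxy is running.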
\nThe default private internal network 172.16.35.0\/24<\/code> will be created and nodes are assigned static addresses starting at 172.16.35.100<\/code> for the master. The nodes can be accessed using the following command, run from the same directory the vagrant up<\/code> command was executed from during installation.
\nReplace NodeName<\/code> with a VM hostname from <\/a>Table 1. List of nodes and IP Addresses<\/p>\n
[LocalMachine]$ vagrant ssh NodeName<\/pre>\n
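Once the nodes are reachable, a couple of quick checks confirm the cluster converged. A sketch, assuming a master hostname of `k8s-master` \u2013 substitute the actual name from Table 1:

```shell
#!/usr/bin/env bash
# Post-install sanity checks, run from the directory containing the Vagrantfile.
# The hostname "k8s-master" is an assumption; use a name from Table 1.
{ command -v vagrant >/dev/null 2>&1 && [ -f Vagrantfile ]; } \
  || { echo "vagrant or Vagrantfile not found; run from the project directory"; exit 0; }

# All nodes should report STATUS "Ready" once the cluster has converged.
vagrant ssh k8s-master -c "kubectl get nodes -o wide"

# The demo NGINX service should show an external IP assigned by MetalLB.
vagrant ssh k8s-master -c "kubectl get svc --all-namespaces"
```

If a node stays `NotReady`, ssh into it and inspect the kubelet logs before retrying.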
<\/a>Table 1. List of nodes and IP addresses<\/h3>\n