Kubernetes is a widely used orchestration tool for deploying, auto-scaling, and managing containerized applications.
Kubernetes can be set up on on-premises, cloud, or hybrid (a combination of on-premises and cloud) infrastructure.
A Kubernetes cluster is somewhat complex to set up and requires substantial resources, which makes it difficult to spin up on a local machine for learning or development.
Here, we will use K3s, a lightweight distribution of Kubernetes developed and maintained by Rancher. It can be set up on any Linux desktop or server with ease and requires far fewer resources. It has all the features of Kubernetes and works with the kubectl command, which makes it ideal for learning and development.
You can also find a convenience script in my GitLab/GitHub repo to set up this cluster.
https://gitlab.com/linuxshots/spinup-k8s/-/tree/master
Kubernetes (K8s) and K3s
Contents
Pre-requisites
Create cluster
Install MetalLB load balancer
Install Nginx Ingress Controller
Stop, start and delete a cluster
Set up using convenience script
Pre-requisites
This demo has been performed on/with:
Ubuntu 20.04 Linux Desktop (Windows users can use a Linux virtual machine in VirtualBox/VMware)
2 CPUs and 2 GB RAM (1 GB memory for each node)
Docker 20.10.9
Create cluster
Make sure the above pre-requisites are met. This is not a production-grade setup and must only be used for education or development purposes.
1. Install K3d, which is used to deploy K3s clusters. (OPTIONAL: You can review the script before executing it on your system.)
curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v5.0.1 bash
2. Create a K3s cluster
k3d cluster create k3s-demo-cluster --api-port 6550 --agents 1 --k3s-arg "--disable=traefik@server:0" --k3s-arg "--disable=servicelb@server:0" --no-lb --wait
The above command creates a K3s cluster with one master/control-plane node (server) and one worker/agent node. We have disabled the built-in service load balancer and the Traefik ingress controller; instead, we will set up the MetalLB load balancer and the Nginx ingress controller, which I find better (my personal opinion).
3. Install kubectl tool
curl -LO https://dl.k8s.io/release/v1.22.2/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
4. Check cluster info and nodes
kubectl cluster-info
It should return cluster information.
kubectl get nodes
It should list two nodes, one server and one agent, both in Ready status.
Install MetalLB Load Balancer
Kubernetes deployed on a bare-metal cluster doesn't offer a network load balancer, so a Kubernetes Service can be of type ClusterIP or NodePort, but not LoadBalancer.
MetalLB enables the LoadBalancer service type in bare-metal Kubernetes clusters.
1. Install MetalLB
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.3/manifests/metallb.yaml
The above commands create a new namespace, metallb-system, on the Kubernetes cluster and deploy the resources MetalLB requires (cluster roles and bindings, pod security policies, and a deployment).
2. Configure MetalLB
sudo apt install jq -y
# Get the subnet of the Docker network created by k3d (e.g. 172.18.0.0/16)
cidr_block=$(docker network inspect k3d-k3s-demo-cluster | jq '.[0].IPAM.Config[0].Subnet' | tr -d '"')
# Strip the last three characters (the "/16" suffix) to get the network address
base_addr=${cidr_block%???}
# Replace the last octet with 240 to reserve the top of the subnet for MetalLB
first_addr=$(echo $base_addr | awk -F'.' '{print $1,$2,$3,240}' OFS='.')
range=$first_addr/29
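As a worked example of that string manipulation (assuming docker network inspect returned 172.18.0.0/16, a typical default; your subnet may differ):

```shell
# Hypothetical subnet value for illustration only.
cidr_block="172.18.0.0/16"

# Strip the last three characters ("/16") to get the network address.
# Note: this trick assumes a two-digit prefix length such as /16 or /24.
base_addr=${cidr_block%???}          # 172.18.0.0

# Replace the last octet with 240.
first_addr=$(echo $base_addr | awk -F'.' '{print $1,$2,$3,240}' OFS='.')

range=$first_addr/29
echo $range                          # prints 172.18.0.240/29
```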
Create a ConfigMap:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - $range
EOF
The above configuration can create up to 8 load balancers, since a /29 range contains 8 addresses.
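The address count follows directly from the prefix length: a /29 leaves 32 − 29 = 3 host bits, i.e. 2³ = 8 addresses. A quick sketch of the arithmetic:

```shell
# Number of addresses in a CIDR block = 2^(32 - prefix_length)
prefix=29
echo $((1 << (32 - prefix)))   # prints 8
```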
3. Deploy Nginx and create a Service of type LoadBalancer to test the load balancer
kubectl create deploy nginx --image=nginx
kubectl expose deploy nginx --port=80 --target-port=80 --type=LoadBalancer
kubectl get svc nginx
The LoadBalancer service should be assigned an external IP from the MetalLB address pool. Use the external IP of the nginx service to access Nginx.
4. Delete the nginx deployment and service after testing, if no longer required.
kubectl delete deploy nginx
kubectl delete svc nginx
Install Nginx Ingress Controller
1. Install Nginx Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/aws/deploy.yaml
2. Wait for the controller pod to come up. You can watch its status with:
kubectl -n ingress-nginx get pods --watch
3. Get the external IP of the ingress controller and access it through a browser
kubectl -n ingress-nginx get svc ingress-nginx-controller
With no Ingress resources created yet, the browser should show the ingress controller's default 404 page.
The ingress controller's load balancer IP can be used to access any other application deployed on Kubernetes through an Ingress resource.
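As a sketch, an Ingress resource routing a host to the nginx service created in the MetalLB test step might look like the following (the host demo.example.com is a placeholder; point it at the ingress controller's IP via /etc/hosts to test):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com     # placeholder host for illustration
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx        # the Service created in the MetalLB test step
            port:
              number: 80
```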
Stop, start and delete a cluster
A K3s cluster can be stopped when not in use and started again later.
To list K3s clusters:
k3d cluster list
To stop a running cluster:
k3d cluster stop CLUSTERNAME
To start a stopped cluster:
k3d cluster start CLUSTERNAME
To delete a cluster:
k3d cluster delete CLUSTERNAME
A lightweight Kubernetes cluster is ready to use now.
Set up using convenience script
Run the commands below to set up the cluster using the convenience script.
curl -L https://gitlab.com/linuxshots/spinup-k8s/-/raw/master/spinup.sh -o spinup.sh
sudo bash spinup.sh
Provide the details requested by the prompts.
To learn more about K3s and K3d, visit their official documentation pages.
https://rancher.com/docs/k3s/latest/en/
Thanks
Navratan Lal Gupta
Linux Shots