Certified Kubernetes Administrator (CKA) Study Notes

By: Anas Aboureada, November 11, 2021

What is Kubernetes

  • Kubernetes is "a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation."

  • Kubernetes makes it easier to build reliable, self-healing, and scalable applications.

K8s Cluster

K8s Control Plane

The control plane

  • is a collection of multiple components responsible for managing the cluster itself globally.
  • Essentially, the control plane controls the cluster.
  • Individual control plane components can run on any machine in the cluster, but usually are run on dedicated controller machines.

kube-api-server

  • serves the Kubernetes API, the primary interface to the cluster. Clients (including kubectl, the kubelet, and the other control plane components) interact with the cluster through this API.

Etcd

  • is the backend data store for the Kubernetes cluster. It provides high-availability storage for all data relating to the state of the cluster.

kube-scheduler

  • handles scheduling, the process of selecting an available node in the cluster on which to run containers.

kube-controller-manager

  • runs a collection of multiple controller utilities in a single process. These controllers carry out a variety of automation-related tasks within the Kubernetes cluster.

cloud-controller-manager

  • provides an interface between Kubernetes and various cloud platforms. It is only used when using cloud-based resources alongside Kubernetes.

K8s Nodes

  • are the machines where the containers managed by the cluster run. A cluster can have any number of nodes.
  • Various node components manage containers on the machine and communicate with the control plane.

Kubelet

  • is the Kubernetes agent that runs on each node.
  • It communicates with the control plane and ensures that containers are run on its node as instructed by the control plane.
  • Kubelet also handles the process of reporting container status and other data about containers back to the control plane.

The container runtime

  • is not built into Kubernetes.
  • It is a separate piece of software that is responsible for actually running containers on the machine.
  • Kubernetes supports multiple container runtime implementations.
  • Some popular container runtimes are Docker and containerd.

kube-proxy

  • is a network proxy.
  • It runs on each node and handles some tasks related to providing networking between containers and services in the cluster.

Building a Kubernetes Cluster with kubeadm

![[labdiagram_1030-_S02-LAB01_Building_a_Kubernetes_1.20_Cluster_with_Kubeadm.png]]

  • Kubeadm is a cluster setup tool.

Install Packages

  1. Log into the Control Plane Node (Note: The following steps must be performed on all three nodes).
  2. Create the configuration file for containerd:
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
  3. Load the modules:
sudo modprobe overlay
sudo modprobe br_netfilter
  4. Set system configurations for Kubernetes networking:
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
  5. Apply the new settings:
sudo sysctl --system
  6. Install containerd:
sudo apt-get update && sudo apt-get install -y containerd
  7. Create the default configuration directory for containerd:
sudo mkdir -p /etc/containerd
  8. Generate the default containerd configuration and save it to the newly created directory:
sudo containerd config default | sudo tee /etc/containerd/config.toml
  9. Restart containerd to ensure it uses the new configuration file:
sudo systemctl restart containerd
  10. Verify that containerd is running:
sudo systemctl status containerd
  11. Disable swap:
sudo swapoff -a
  12. Disable swap on startup in /etc/fstab:
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
  13. Install dependency packages:
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
  14. Download and add the GPG key:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  15. Add Kubernetes to the repository list:
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
  16. Update the package listings:
sudo apt-get update
  17. Install the Kubernetes packages (Note: If you get a dpkg lock message, just wait a minute or two before trying the command again):
sudo apt-get install -y kubelet=1.22.0-00 kubeadm=1.22.0-00 kubectl=1.22.0-00
  18. Turn off automatic updates:
sudo apt-mark hold kubelet kubeadm kubectl
  19. Log into both Worker Nodes and perform the previous steps.
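The sed command in the swap step simply comments out any /etc/fstab line containing " swap ". You can see exactly what it does on a throwaway copy (the two fstab entries below are made up for illustration):

```shell
# Write a demo fstab with a hypothetical root entry and a hypothetical swap entry
printf '%s\n' 'UUID=1234-abcd / ext4 defaults 0 1' '/dev/sda2 none swap sw 0 0' > /tmp/fstab.demo

# Same sed as in the steps above, pointed at the demo file instead of /etc/fstab
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo

# The root filesystem line is untouched; the swap line is now commented out
cat /tmp/fstab.demo
```

This matters because the kubelet refuses to run with swap enabled by default, and commenting out the fstab entry keeps swap disabled across reboots.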

Initialize the Cluster

  1. Initialize the Kubernetes cluster on the control plane node using kubeadm (Note: This is only performed on the Control Plane Node):
sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.22.0
  2. Set kubectl access:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  3. Test access to the cluster:
kubectl get nodes

Install the Calico Network Add-On

  1. On the Control Plane Node, install Calico Networking:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
  2. Check the status of the control plane node:
kubectl get nodes

Join the Worker Nodes to the Cluster

  1. In the Control Plane Node, create the token and copy the kubeadm join command (Note: The join command can also be found in the output of the kubeadm init command):
kubeadm token create --print-join-command
  2. In both Worker Nodes, paste the kubeadm join command to join the cluster. Use sudo to run it as root:
sudo kubeadm join ...
  3. In the Control Plane Node, view the cluster status (Note: You may have to wait a few moments for all nodes to become ready):
kubectl get nodes

Using Namespaces in K8s

What Is a Namespace?

  • Namespaces are virtual clusters backed by the same physical cluster.
  • Kubernetes objects, such as pods and containers, live in namespaces.
  • Namespaces are a way to separate and organize objects in your cluster.

Namespaces commands

# List the current namespaces:
$ kubectl get namespaces
# Get the pods in a specific namespace:
$ kubectl get pods --namespace my-namespace
# Create a namespace:
$ kubectl create namespace my-namespace
# Get all the pods in the cluster:
$ kubectl get pods --all-namespaces
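Namespaces can also be created declaratively from a manifest instead of with kubectl create namespace. A minimal sketch (the name my-namespace matches the commands above; the file path is arbitrary):

```shell
# Write a minimal Namespace manifest (name matches the examples above)
cat <<EOF > /tmp/my-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
EOF

# With a running cluster, you would then apply it:
# kubectl apply -f /tmp/my-namespace.yaml
cat /tmp/my-namespace.yaml
```

The declarative form is handy when namespaces need to be version-controlled alongside the objects that live in them.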

High Availability in K8s

  • K8s facilitates high-availability applications, but you can also design the cluster itself to be highly available.
  • To do this, you need multiple control plane nodes.

High Availability Control Plane

h8s-cluster-high-availability.jpg

When using multiple control planes for high availability, you will likely need to communicate with the Kubernetes API through a load balancer. This includes clients such as kubelet instances running on worker nodes.
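One way to point all clients at such a load balancer is kubeadm's controlPlaneEndpoint setting. A minimal sketch of a kubeadm configuration, assuming a hypothetical load balancer address k8s-lb.example.com:6443:

```shell
# Minimal kubeadm ClusterConfiguration for an HA control plane
# (k8s-lb.example.com is a hypothetical load balancer address)
cat <<EOF > /tmp/kubeadm-ha.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.22.0
controlPlaneEndpoint: "k8s-lb.example.com:6443"
EOF

# On the first control plane node, you would then run something like:
# sudo kubeadm init --config /tmp/kubeadm-ha.yaml --upload-certs
cat /tmp/kubeadm-ha.yaml
```

Because every kubelet and kubectl client talks to the load balancer address rather than a single node, any one control plane node can fail without taking down the API.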

Stacked etcd

stacked-etcd.jpg

stacked-etcd-2.jpg

External Etcd

external-etcd.jpg

external-etcd-2.jpg

K8s Management Tools

kubectl

kubectl is the official command line interface for Kubernetes.

kubeadm

kubeadm is a tool for quickly and easily creating Kubernetes clusters.

Minikube

Minikube allows you to automatically set up a local single-node Kubernetes cluster. It is great for getting Kubernetes up and running quickly for development purposes.

Helm

Helm provides templating and package management for Kubernetes objects. You can use it to manage your own templates (known as charts). You can also download and use shared templates.

Kompose

Kompose helps you translate Docker compose files into Kubernetes objects. If you are using Docker compose for some part of your workflow, you can move your application to Kubernetes easily with Kompose.

Kustomize

Kustomize is a configuration management tool for managing Kubernetes object configurations. It allows you to share and re-use templated configurations for Kubernetes applications.
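As a rough sketch of how Kustomize is used: a directory holds a kustomization.yaml that lists the base resource files to combine (deployment.yaml below is a hypothetical resource file):

```shell
# Minimal kustomization.yaml sketch (deployment.yaml is a hypothetical resource)
mkdir -p /tmp/kustomize-demo
cat <<EOF > /tmp/kustomize-demo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
EOF

# kubectl has Kustomize built in; with real resource files you would run:
# kubectl apply -k /tmp/kustomize-demo
cat /tmp/kustomize-demo/kustomization.yaml
```

Overlays can then patch the same base resources per environment (dev, staging, production) without duplicating the manifests.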

Safely Draining a K8s Node

What Is Draining?

When performing maintenance, you may sometimes need to remove a Kubernetes node from service. To do this, you can drain the node. Containers running on the node will be gracefully terminated (and potentially rescheduled on another node).

Draining a Node

To drain a node, use the kubectl drain command.

$ kubectl drain <node name>

When draining a node, you may need to ignore DaemonSets (pods that are tied to each node). If you have any DaemonSet pods running on the node, you will likely need to use the --ignore-daemonsets flag.

$ kubectl drain <node name> --ignore-daemonsets

Uncordoning a Node

If the node remains part of the cluster, you can allow pods to run on the node again when maintenance is complete using the kubectl uncordon command.

$ kubectl uncordon <node name>

Upgrading K8s with kubeadm

When using Kubernetes, you will likely want to periodically upgrade Kubernetes to keep your cluster up to date.

Control Plane Upgrade Steps

  • Upgrade kubeadm on the control plane node.
  • Drain the control plane node.
  • Plan the upgrade (kubeadm upgrade plan).
  • Apply the upgrade (kubeadm upgrade apply).
  • Uncordon the control plane node.
  • Upgrade kubelet and kubectl on the control plane node.

Worker Node Upgrade Steps

  • Upgrade kubeadm.
  • Drain the node.
  • Upgrade the kubelet configuration (kubeadm upgrade node).
  • Upgrade kubelet and kubectl.
  • Uncordon the node.

Performing a Kubernetes Upgrade with kubeadm

Upgrade the Control Plane

  1. Upgrade kubeadm:
sudo apt-get update && \
  sudo apt-get install -y --allow-change-held-packages kubeadm=1.22.2-00
  2. Make sure it upgraded correctly:
kubeadm version
  3. Drain the control plane node:
kubectl drain k8s-control --ignore-daemonsets
  4. Plan the upgrade:
sudo kubeadm upgrade plan v1.22.2
  5. Upgrade the control plane components:
sudo kubeadm upgrade apply v1.22.2
  6. Upgrade kubelet and kubectl on the control plane node:
sudo apt-get update && \
  sudo apt-get install -y --allow-change-held-packages kubelet=1.22.2-00 kubectl=1.22.2-00
  7. Restart kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
  8. Uncordon the control plane node:
kubectl uncordon k8s-control
  9. Verify the control plane is working:
kubectl get nodes

If it shows a NotReady status, run the command again after a minute or so. It should become Ready.

Upgrade the Worker Nodes

Note: In a real-world scenario, you should not perform upgrades on all worker nodes at the same time. Make sure enough nodes are available at any given time to provide uninterrupted service.

Worker Node 1
  1. Run the following on the control plane node to drain worker node 1:
kubectl drain k8s-worker1 --ignore-daemonsets --force

You may get an error message that certain pods couldn't be deleted, which is fine.

  2. In a new terminal window, log in to worker node 1:
ssh <USER>@<WORKER_1_PUBLIC_IP_ADDRESS>
  3. Upgrade kubeadm on worker node 1:
sudo apt-get update && \
  sudo apt-get install -y --allow-change-held-packages kubeadm=1.22.2-00
kubeadm version
  4. Still on worker node 1, upgrade the kubelet configuration:
sudo kubeadm upgrade node
  5. Upgrade kubelet and kubectl on worker node 1:
sudo apt-get update && \
  sudo apt-get install -y --allow-change-held-packages kubelet=1.22.2-00 kubectl=1.22.2-00
  6. Restart kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
  7. From the control plane node, uncordon worker node 1:
kubectl uncordon k8s-worker1
Worker Node 2
  1. From the control plane node, drain worker node 2:
kubectl drain k8s-worker2 --ignore-daemonsets --force
  2. In a new terminal window, log in to worker node 2:
ssh <USER>@<WORKER_2_PUBLIC_IP_ADDRESS>
  3. Upgrade kubeadm:
sudo apt-get update && \
  sudo apt-get install -y --allow-change-held-packages kubeadm=1.22.2-00
kubeadm version
  4. Back on worker node 2, perform the upgrade:
sudo kubeadm upgrade node
sudo apt-get update && \
  sudo apt-get install -y --allow-change-held-packages kubelet=1.22.2-00 kubectl=1.22.2-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet
  5. From the control plane node, uncordon worker node 2:
kubectl uncordon k8s-worker2
  6. Still on the control plane node, verify the cluster is upgraded and working:
kubectl get nodes

If they show a NotReady status, run the command again after a minute or so. They should become Ready.