Recently I decided to pursue the Cloud Native stack and start developing on Kubernetes clusters for my projects. So I started to learn how to set up a Kubernetes cluster for development purposes on Oracle Cloud Infrastructure (OCI) from scratch. I chose OCI because it has a free tier, and I only need a cluster for development and testing.

I also want to learn Kubernetes in a more hands-on way, so I chose to use kubeadm to set up the cluster from scratch.

Here are the steps.

📝 Note: This series of articles is intended to give hands-on experience with Kubernetes, so I will not go into the details of Kubernetes concepts. If you want to know more about Kubernetes, please refer to the Kubernetes official documentation.


Part 1: Set Up a Kubernetes Cluster Control Plane on OCI

Plan and preparations

According to the Kubernetes official documentation, the minimum requirements for a Kubernetes cluster are:

  • A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, and those distributions without a package manager.
  • 2 GB or more of RAM per machine (any less will leave little room for your apps).
  • 2 CPUs or more.
  • Full network connectivity between all machines in the cluster (public or private network is fine).
  • Unique hostname, MAC address, and product_uuid for every node. See here for more details.
  • Certain ports are open on your machines. See here for more details.
  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.

So we will use a VM instance with 2 OCPUs and 8 GB of RAM on OCI to set up the Kubernetes cluster's control plane (master node).
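
Before going further, it is worth confirming that the instance actually meets these requirements, especially that swap is off, since the kubelet will not work properly with swap enabled. A minimal sketch of the checks on Ubuntu (the sed line simply comments out any swap entries in /etc/fstab; OCI Ubuntu images typically have none):

nproc                                          # should report 2 or more CPUs
free -h                                        # should report roughly 8 GB of RAM and no swap in use
sudo swapoff -a                                # disable swap immediately
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab     # keep swap disabled across reboots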


Open Ports in iptables

For a virtual machine instance on OCI, all of these requirements are met except the network connectivity part: on OCI the virtual machine controls network traffic with iptables rules rather than the usual ufw firewall, so we need to adjust the iptables rules for the cluster traffic to flow properly.

According to the Kubernetes official documentation, the following ports are required to be open for the control plane:

Protocol   Direction   Port Range   Purpose                    Used By
TCP        Inbound     6443         Kubernetes API server      All
TCP        Inbound     2379-2380    etcd server client API     kube-apiserver, etcd
TCP        Inbound     10250        Kubelet API                Self, Control plane
TCP        Inbound     10259        kube-scheduler             Self
TCP        Inbound     10257        kube-controller-manager    Self

After we have created a virtual machine instance on OCI, we can use the following commands to open the required ports:

sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 6443 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 2379:2380 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 10250 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 10259 -j ACCEPT
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 10257 -j ACCEPT

To make the changes permanent:

sudo netfilter-persistent save
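
The netfilter-persistent command comes from the iptables-persistent package on Ubuntu. OCI's Ubuntu images usually ship with it, but if the command is missing it can be installed first (the installer may prompt to save the current rules):

sudo apt-get install -y iptables-persistent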

We can also use the following command to check the status of the iptables rules:

sudo iptables -L

The lines of the output related to our changes look like this:

ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:10257
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:10259
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:10250
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpts:2379:2380
ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:6443

Add Ingress Rules in the Security List of VCN

The last step is to configure the ingress rules in the security list of the virtual machine's VCN. We can follow the OCI official documentation to do that. We need to add the following rule:

Stateless   Source      IP Protocol   Source Port Range   Destination Port Range   Allows                        Description
No          0.0.0.0/0   TCP           All                 6443                     TCP traffic for ports: 6443   Kubernetes API server
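
Once this ingress rule is in place and the API server is running (after kubeadm init later in this post), reachability of port 6443 can be checked from a local machine. A quick sketch, assuming netcat is installed locally and <public-ip> stands for the instance's public IP address:

nc -vz <public-ip> 6443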

Install Container Runtime

According to the Kubernetes official documentation, there are several container runtimes that can be used with Kubernetes, including Docker, containerd, and CRI-O. We choose containerd as the container runtime.


Install and configure prerequisites

Forwarding IPv4 and letting iptables see bridged traffic

Execute the following commands to load the overlay and br_netfilter kernel modules and ensure they are loaded on boot:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

Verify that the br_netfilter and overlay modules are loaded by running the following commands:

lsmod | grep br_netfilter
lsmod | grep overlay

Verify that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 by running the following command:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
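
If the settings have taken effect, each of the three parameters should report 1:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1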

Install containerd

The installation steps are from the containerd official documentation.

First, download the containerd-<VERSION>-<OS>-<ARCH>.tar.gz archive from here.

For example, if we are using Ubuntu 20.04 on arm64, we need to download the containerd-1.6.15-linux-arm64.tar.gz archive.

sudo wget https://github.com/containerd/containerd/releases/download/v1.6.15/containerd-1.6.15-linux-arm64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.6.15-linux-arm64.tar.gz
sudo wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/local/lib/systemd/system
sudo cp containerd.service /usr/local/lib/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now containerd

Confirm the service via:

sudo systemctl status containerd
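
We can also check that the bundled ctr client can talk to the daemon and that the client and server versions match:

sudo ctr version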

Install runc

Download the runc.<ARCH> binary from here.

For example, since we are using Ubuntu 20.04 on arm64, we need to download the runc.arm64 binary.

sudo wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.arm64
sudo install -m 755 runc.arm64 /usr/local/sbin/runc
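
A quick sanity check that the binary is installed and on the PATH:

runc --version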

Install CNI plugins

sudo wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-arm64-v1.2.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.2.0.tgz
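
The standard plugins (bridge, host-local, loopback, and so on) should now be present in /opt/cni/bin:

ls /opt/cni/bin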

Set up containerd for Kubernetes

# Create the containerd config file directory
sudo mkdir -p /etc/containerd

# Generate the default containerd config file
containerd config default | sudo tee /etc/containerd/config.toml

Edit the /etc/containerd/config.toml file and set the SystemdCgroup to true:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
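
Instead of editing the file by hand, the flag can be flipped with a one-line sed, assuming the generated config still contains the default SystemdCgroup = false (which the containerd 1.6 default does), and then verified with grep:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo grep SystemdCgroup /etc/containerd/config.toml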

Restart the containerd service:

sudo systemctl restart containerd

Install kubeadm

Following the Kubernetes official documentation, install kubeadm, kubelet, and kubectl:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p /etc/apt/keyrings
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
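
Confirm that the three tools are installed and that the packages are pinned:

kubeadm version
kubelet --version
kubectl version --client
apt-mark showhold    # should list kubeadm, kubectl and kubelet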

📝 Note: According to the Kubernetes official documentation, the kubelet and the container runtime must use the same cgroup driver. In Kubernetes v1.22 and later, if the cgroupDriver field is not set under KubeletConfiguration, kubeadm defaults it to systemd, so there is no need to edit kubeadm-config.yaml for this. This can be verified in /var/lib/kubelet/config.yaml (a file generated during kubeadm init), where cgroupDriver is already set to systemd.

Once the master node has been initialized with kubeadm (next section), the cgroup driver actually used by the kubelet can be checked with:

sudo cat /var/lib/kubelet/config.yaml

We can see that cgroupDriver is already set to systemd:

apiVersion: kubelet.config.k8s.io/v1beta1
...
cgroupDriver: systemd

Initialize the master node with config file

Some configuration needs to be set before initializing the master node:

  • clusterName: the name of the cluster
  • podSubnet: the subnet used by Pods
  • certSANs: the extra IP addresses or DNS names to add to the API server certificate

So let us create an init-config.yaml file with the following content:

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "192.168.0.0/16"
apiServer:
  certSANs:
    - "myapiserverip or myapiserverdns"
clusterName: "myclustername"

Then use the kubeadm init command with the --config flag to initialize the master node:

sudo kubeadm init --config init-config.yaml

After the initialization, the output will show the command to join the worker nodes to the cluster:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  /docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash> 

As instructed in the output, run the following commands to set up the kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now we can use kubectl to manage the cluster from the master node:

kubectl get nodes

Install pod network add-on

According to the Kubernetes official documentation, we must deploy a Container Network Interface (CNI) based Pod network add-on so that Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.

See a list of add-ons that implement the Kubernetes networking model.

This time we choose Calico as the Pod network add-on. Execute the following commands to install Calico:

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
kubectl apply -f calico.yaml

After installing the pod network add-on, we can verify the status of the pods:

kubectl get pods --all-namespaces
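
The Calico and CoreDNS pods take a little while to become Ready. We can block until the Calico node pods are up with kubectl wait (a sketch, assuming the manifest's default k8s-app=calico-node label), after which the master node itself should report Ready:

kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=180s
kubectl get nodes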

Use kubectl on a local machine to connect to the cluster

Since we initialized the master node with certSANs, we can use the master node's kubeconfig file to connect to the cluster from a local machine.

Use the following command to get the kubeconfig content from the master node:

kubectl config view --minify --flatten

Copy the output and save it to a file named config in the ~/.kube directory on your local machine (assuming the local machine runs Linux or macOS):

mkdir -p $HOME/.kube
nano $HOME/.kube/config
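
Alternatively, the file can be copied over SSH. Either way, make sure the server: field in the copied config points to the address we added to certSANs (the public IP or DNS name), not an address that is only reachable from inside the VCN. A sketch, assuming the default ubuntu user on the instance and <master-public-ip> as a placeholder:

scp ubuntu@<master-public-ip>:~/.kube/config $HOME/.kube/config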

Then we can use kubectl to connect to the cluster from the local machine:

kubectl get nodes

Next step

After all the steps above, we have a Kubernetes cluster with one master node. The next step is to add worker nodes to the cluster.

We will continue this part in the next post.