Bootstrapping Kubernetes in an Offline/Air-Gapped VM Cluster

Harsh Dhillon
Jun 3, 2020


I work for a VFX house as a software developer. In the entertainment industry, our data is our valuable intellectual property, so the machines in our architecture are never fully connected to the Internet. In such an environment, bootstrapping Kubernetes takes some workarounds, because many of its components assume Internet connectivity and make various API calls to bring in images and configurations.

Hopefully this will help you build up your Kubernetes platform quickly and save you a hefty amount of googling. This post presumes that you already have Docker installed and running on your nodes, plus one online machine to download the resources and transfer them to your cluster machines.

Prerequisite Settings for Docker

To begin, make sure you add these settings to Docker's daemon configuration file. The default location of the configuration file on Linux is /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "group": "rnd",
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
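After editing daemon.json, Docker needs a restart to pick up the new drivers. A quick sketch of applying and verifying the change:

```shell
# Restart Docker so the new daemon.json takes effect
sudo systemctl daemon-reload
sudo systemctl restart docker

# Confirm the cgroup and storage drivers were picked up
docker info --format 'cgroup driver: {{.CgroupDriver}}, storage: {{.Driver}}'
```

You should see `systemd` and `overlay2` in the output; if Docker fails to start, a typo in daemon.json is the usual suspect.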

Configuring Prerequisites for Kubernetes (K8s)

Add these Kernel parameters which are required by the Kubernetes cluster.

cat <<EOF > /etc/sysctl.d/kubernetes.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Reload these parameters by calling these commands.

modprobe br_netfilter
sysctl --system

Basically, we are enabling IP forwarding and letting iptables see bridged traffic, both of which are essential for cluster communication and networking.
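You can verify that the module and parameters actually took effect before moving on:

```shell
# br_netfilter must be loaded for the bridge-nf-call sysctls to exist
lsmod | grep br_netfilter

# All three values should print as 1
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```

If you want br_netfilter to survive reboots, dropping its name into a file under /etc/modules-load.d/ is the usual approach.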

Turn Swap Off for Installation

swapoff -a
sed -e '/swap/s/^/#/g' -i /etc/fstab

Switch SELinux to Permissive mode

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Install K8s Packages

I am installing Kubernetes 1.17.3 on my nodes. This stack of RPMs may change depending on the version. These packages are trimmed down for a base Kubernetes install without any dependency issues; adding more or fewer can lead to a dependency hell, which is often hard to get out of. The RPM stack will look like this.

libmnl-1.0.3-7.el7.i686.rpm 
libnfnetlink-1.0.1-4.el7.i686.rpm
libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm
glibc-2.17-292.el7.x86_64.rpm
glibc-common-2.17-292.el7.x86_64.rpm
nspr-4.21.0-1.el7.x86_64.rpm
glibc-2.17-292.el7.i686.rpm
yum-utils-1.1.31-52.el7.noarch.rpm
kubeadm-1.17.3-0.x86_64.rpm
kubernetes-cni-0.7.5-0.x86_64.rpm
kubelet-1.17.3-0.x86_64.rpm
kubectl-1.17.3-0.x86_64.rpm
cri-tools-1.13.0-0.x86_64.rpm
conntrack-tools-1.4.4-5.el7_7.2.x86_64.rpm
libnetfilter_cthelper-1.0.0-10.el7_7.1.x86_64.rpm
libnetfilter_cttimeout-1.0.0-6.el7_7.1.x86_64.rpm
libnetfilter_cthelper-1.0.0-10.el7_7.1.i686.rpm
libnetfilter_cttimeout-1.0.0-6.el7_7.1.i686.rpm
nss-softokn-freebl-3.44.0-8.el7_7.x86_64.rpm
nss-util-3.44.0-4.el7_7.x86_64.rpm
nss-softokn-freebl-3.44.0-8.el7_7.i686.rpm

As my civic duty, I have placed all the RPMs and images we will use throughout this bootstrap here.

Run this command to install all the RPM packages in one go.

rpm -ivh --replacefiles --replacepkgs /location/to/rpms/*.rpm

You will, of course, require sudo access on the machine to install these RPMs.
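Once the RPMs are in, one step that is easy to miss offline: kubeadm's preflight checks expect the kubelet service to be enabled. A quick sketch:

```shell
# Enable kubelet so it starts on boot; kubeadm will configure and restart it during init
sudo systemctl enable --now kubelet

# Verify the installed versions match the RPM stack
kubeadm version -o short
kubectl version --client --short
```

Both version commands should report v1.17.3 if the install went cleanly.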

Image Setup

To start the Kubernetes control plane, we will run the kubeadm tool, which provides the kubeadm init and kubeadm join commands to start a cluster and later join worker nodes to it.

By default kubeadm is configured to pull required component images from the k8s.gcr.io image registry.

Because we are working on an air-gapped server with no external communication, we need to set up a local registry to which we can redirect this pull. It's best practice to set up a registry accessible to every node in the cluster; for now, I'll set up a localhost:5000 registry to save and pull all our images.
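A minimal sketch of standing up that registry, assuming you saved the official registry:2 image as registry.tar on your online machine (the filename is my own; match it to whatever you transferred):

```shell
# Load the registry image that was transferred from the online machine
docker load -i registry.tar

# Run a local registry on port 5000; --restart=always keeps it up across reboots
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Sanity check: the catalog should return an (initially empty) repository list
curl http://localhost:5000/v2/_catalog
```

Note that Docker treats localhost:5000 as an insecure registry by default, which is exactly what we want here.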

Run this command to get a list of images for initialization.

kubeadm config images list

You will get a list with these images.

k8s.gcr.io/kube-apiserver:v1.17.3
k8s.gcr.io/kube-controller-manager:v1.17.3
k8s.gcr.io/kube-scheduler:v1.17.3
k8s.gcr.io/kube-proxy:v1.17.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5

I have stored all these images as tarballs in the drive link.

Run the docker load command to add them to the local image list, then push them to the locally hosted registry.
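The load/tag/push cycle can be scripted. This sketch assumes the tarballs were created with `docker save` and keep their original k8s.gcr.io names, so the retagged names line up with the `imageRepository: localhost:5000/k8s.gcr.io` setting used below:

```shell
# Load every image tarball that was transferred from the online machine
for tarball in /path/to/images/*.tar; do
    docker load -i "$tarball"
done

# Retag each required image for the local registry and push it
for image in $(kubeadm config images list); do
    local_image="localhost:5000/${image}"   # e.g. localhost:5000/k8s.gcr.io/pause:3.1
    docker tag "$image" "$local_image"
    docker push "$local_image"
done
```

The /path/to/images directory is a placeholder for wherever you staged the tarballs.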

Configure the Kubeadm Initialization

To configure kubeadm init, we will feed it our own configuration file.

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.108.3.250
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: mxtkpw-dev01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: localhost:5000/k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.3
networking:
  podSubnet: 10.244.0.0/16 # --pod-network-cidr
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Among all the settings, the noteworthy ones are:

imageRepository: localhost:5000/k8s.gcr.io ← must be specified, otherwise kubeadm calls external registries.

advertiseAddress: 10.108.3.250 ← the control plane VM's IP.
bindPort: 6443 ← the Kubernetes API server will listen on this port.

criSocket: /var/run/dockershim.sock ← sets the container runtime socket.

podSubnet: 10.244.0.0/16 ← essential subnet setting for flannel, which is our chosen CNI provider.

Save this as kubeadm_init.config

Initiate the Control Plane Node

With all required settings in place, we can bootstrap our cluster using the above config file.

kubeadm init --config /location/to/config/file/kubeadm_init.config

If no pre-flight errors come up, this initialization will print a token which can be used on any VM with the RPMs installed to join the cluster as a worker node.

Note: you can add --ignore-preflight-errors=all to the kubeadm init command to override all the preflight errors.

To view current tokens, run the following on the master node.

kubeadm token list

To generate a new token you can run the following command.

sudo kubeadm token create
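Rather than assembling the join command by hand, kubeadm can also print a ready-made one:

```shell
# Creates a fresh token and prints the full `kubeadm join` command for worker nodes
sudo kubeadm token create --print-join-command
```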

To set up the environment properly, we will copy the admin configuration file generated on initialization into our home directory.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Worker Node Configuration

We need kubelet installed and running on all machines in the cluster.

To add a worker node to the cluster,

sudo kubeadm join --token abcdef.0123456789abcdef controlPlaneIP:6443 --discovery-token-unsafe-skip-ca-verification

Flannel: the glue that holds everything together

Flannel is a network fabric for containers, designed for Kubernetes. It provisions a subnet to each host to use with container runtimes.

There are many other networking models to choose from; flannel is simple and easy to set up.

The flannel yml file and tar file are placed here

Apply the flannel yaml file, and make sure the flannel image is present in the local registry.
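A sketch of that step, assuming the flannel tarball and manifest are named flannel.tar and kube-flannel.yml, and the image tag is quay.io/coreos/flannel:v0.12.0-amd64 (check yours against the files you downloaded):

```shell
# Load, retag, and push the flannel image so air-gapped nodes can pull it locally
docker load -i flannel.tar
docker tag quay.io/coreos/flannel:v0.12.0-amd64 localhost:5000/coreos/flannel:v0.12.0-amd64
docker push localhost:5000/coreos/flannel:v0.12.0-amd64

# Point the DaemonSet at the local registry, then apply the manifest
sed -i 's|quay.io/coreos|localhost:5000/coreos|g' kube-flannel.yml
kubectl apply -f kube-flannel.yml
```

Once the flannel pods are running, the nodes should flip from NotReady to Ready.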

Base Infrastructure

Our base infrastructure, which consists of 4 virtual machines, is complete.

We have one control plane VM as master-node-01; the rest of the VMs are worker nodes.

~ $ kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
master-node-01   Ready    master   18h   v1.17.3
worker-node-01   Ready    worker   18h   v1.17.3
worker-node-02   Ready    worker   18h   v1.17.3
worker-node-03   Ready    worker   18h   v1.17.3

To add a role label to a node, use this command,

kubectl label node worker-node-01 node-role.kubernetes.io/worker=worker

CONCLUSION

Hopefully this post made your task of bootstrapping your Kubernetes cluster, if not complete, at least a bit easier. If you hit any blocks which googling can't remedy, you can contact me on Twitter @hershdhillon

I'll try to help to the best of my abilities. Good luck and have fun!
