On Prem setup - Ubuntu


Installation on Ubuntu 18.04 (Bionic Beaver)

Prerequisites

Install Docker

Install Docker following the instructions from the official Docker documentation:

https://docs.docker.com/install/linux/docker-ce/ubuntu/

Static IP for master and node machines

All machines should be assigned static IPs, either via reservations in the router config or on the machines themselves.
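On Ubuntu 18.04 the network is configured with netplan, so one way to set a static IP on the machine itself is a file like the following. This is a sketch only; the interface name enp3s0, the addresses, the gateway, and the DNS servers are assumptions for an example 192.168.0.0/24 home network.

```yaml
# /etc/netplan/01-static.yaml -- example values; adjust to your network.
network:
  version: 2
  ethernets:
    enp3s0:                        # assumed interface name; check with `ip link`
      dhcp4: no
      addresses: [192.168.0.21/24] # static IP for this machine
      gateway4: 192.168.0.1        # assumed router address
      nameservers:
        addresses: [192.168.0.1, 8.8.8.8]
```

Apply the configuration with `sudo netplan apply`.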

Disable swap

Kubernetes won’t install while swap is enabled. One way to disable swap on Ubuntu machines, if it is enabled:

  • Check whether swap is enabled. If the command below returns empty output we are good to proceed; otherwise disable swap.
sudo swapon --summary
  • Disable swap
sudo swapoff -a
  • Remove any swap entry from /etc/fstab so swap stays disabled after a reboot

  • Check again that swap is disabled

sudo swapon --summary
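The /etc/fstab edit in the steps above can be scripted; a minimal sketch, assuming a standard whitespace-separated fstab layout (sed keeps a backup of the original file as /etc/fstab.bak):

```shell
# Comment out any swap entries in /etc/fstab so swap stays off after reboot;
# a backup of the original file is written to /etc/fstab.bak.
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```

Review the file afterwards to confirm only the swap lines were commented out.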

Install and setup Kubernetes on master and slave nodes

With recent releases of Kubernetes the installation is pretty straightforward if you follow the official documentation.

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

The same steps are pasted below. We mark the installed packages on hold because an automatic upgrade of Kubernetes can break the cluster due to API version changes; it is better to follow the manual upgrade sequence to upgrade a Kubernetes cluster.

sudo su -
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

Init Master Node

Pre-pull all images. This may take ~4-5 minutes to complete depending on the network.

sudo kubeadm config images pull

Initialize the cluster and generate the join token. By default the token TTL is set to 24h. This can be overridden with --ttl=0 for a non-expiring token, but that is not recommended for production use. You can also customize the network and other options if required, as documented in the official Kubernetes documentation. In my case I am proceeding with the defaults.

sudo kubeadm init

If the command is successful we should see a join command for the nodes and other instructions for the kubeconfig files, similar to the sample output below.

Regenerating the token after init

If you want to generate a new join token after the TTL expires:

sudo kubeadm token create --print-join-command

This is useful for joining more nodes to the cluster later.

Sample output

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.21:6443 --token 88utc2.th8efhl66wrylpsy --discovery-token-ca-cert-hash sha256:a7767824d5d724a0efafea997874aae9dd9c03ad08a09a1ee3c0ca0f23c02d76

As a regular user, copy these files on the master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join other nodes

On all node machines, execute the join command we got after init.

sudo kubeadm join 192.168.0.21:6443 --token 88utc2.th8efhl66wrylpsy --discovery-token-ca-cert-hash sha256:a7767824d5d724a0efafea997874aae9dd9c03ad08a09a1ee3c0ca0f23c02d76

On the master, check the nodes:

kubectl get nodes

Output:

NAME        STATUS     ROLES    AGE   VERSION
host2      NotReady   <none>   22s   v1.14.0
host1      NotReady   master   33m   v1.14.0

The “NotReady” status is expected at this point, as the CNI networking layer is not enabled yet.

Installing a Container Network Interface (CNI) on the master node only

We need to install a CNI such as Weave, Calico, or Flannel to enable networking. I chose Weave in this example as it is quite stable and easy to install.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.NO_MASQ_LOCAL=1"

For Services with the spec field externalTrafficPolicy set to Local, we can get the real client IP address inside pods if we set the NO_MASQ_LOCAL environment variable.
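For example, a Service preserving client source IPs could look like the following sketch; the name, selector, and ports are placeholders, and externalTrafficPolicy: Local applies to NodePort and LoadBalancer Services:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                     # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # preserve the real client source IP
  selector:
    app: my-app                    # placeholder selector
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
```

Note that with Local policy, traffic is only served by pods on the node that received it, so nodes without a matching pod will fail health checks for this Service.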

Once Weave is applied successfully, its required pods are automatically rolled out to all worker nodes.

At this point the nodes should be ready:

kubectl get nodes

NAME        STATUS   ROLES    AGE   VERSION
host2       Ready    <none>   10m   v1.14.0
host1       Ready    master   43m   v1.14.0

Running pods on the master node as well

By default the master node does not run any workload pods, and this is recommended for production use. If we have limited resources and want to use the master for scheduling and running pods and other resources too, we can remove the master taint:

kubectl taint nodes --all node-role.kubernetes.io/master-

Installing Kubernetes dashboard

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Service account to access dashboard with cluster-admin privileges

kubectl create serviceaccount dashboard -n default
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard

kubectl create clusterrolebinding kubernetes-dashboard -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

The dashboard requires a token or kubeconfig to log in. Access to the dashboard should also be protected and safeguarded, as this account has full cluster-admin privileges. To get the login token use:

kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode

Accessing dashboard using kubectl proxy

kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='.*'

From a browser on the machine running the proxy, use the URL below to access the dashboard.

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default

Alternatively, you can expose the dashboard as a service on a NodePort or behind a load balancer instead of the kubectl proxy method.

NodePort sample yaml

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  labels:
    app: kubernetes-dashboard
    k8s-app: kubernetes-dashboard
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 9090
    protocol: TCP
  - name: https
    port: 443
    targetPort: https-int
    protocol: TCP
  selector:
    app: kubernetes-dashboard
    k8s-app: kubernetes-dashboard

Sample dashboard screenshot

[Screenshot: Kubernetes dashboard overview]

Installing a Load balancer for home network

This step is optional and enables us to provision services on actual host network IPs. The example below assumes your network is 192.168.0.0/24 and that we want our load balancer to expose services on 192.168.0.240-192.168.0.254.

MetalLB is a popular lightweight option we can use.

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml

If the installation is successful we need to activate the configuration, which we can do with a YAML file. Make sure it is modified to match the IP range of your network.

Create the YAML file on the master, or wherever we have kubectl access to the cluster.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.240-192.168.0.254

Apply the configuration, assuming the file is saved as metallb-conf.yaml:

kubectl apply -f metallb-conf.yaml

This should create all required resources. In the Service YAML for our pods we can then specify type LoadBalancer and an IP, to expose services on static IPs in our network.

Sample for exposing the Kubernetes dashboard with a LoadBalancer instead of a NodePort:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  labels:
    app: kubernetes-dashboard
    k8s-app: kubernetes-dashboard
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.0.250
  ports:
  - name: http
    port: 80
    targetPort: 9090
    protocol: TCP
  - name: https
    port: 443
    targetPort: https-int
    protocol: TCP
  selector:
    app: kubernetes-dashboard
    k8s-app: kubernetes-dashboard

Upgrading the Kubernetes cluster to a new version

The officially recommended way is to follow https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-13/
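As a rough sketch of that manual sequence, run as root on the master first (the version numbers below are illustrative only; substitute the actual target release shown by `kubeadm upgrade plan`):

```shell
# Unhold and upgrade kubeadm to the target version, then re-hold it.
apt-mark unhold kubeadm
apt-get update && apt-get install -y kubeadm=1.14.1-00
apt-mark hold kubeadm

# Review the available upgrade and apply it to the control plane.
kubeadm upgrade plan
kubeadm upgrade apply v1.14.1

# Then upgrade kubelet and kubectl on every node and restart the kubelet.
apt-mark unhold kubelet kubectl
apt-get install -y kubelet=1.14.1-00 kubectl=1.14.1-00
apt-mark hold kubelet kubectl
systemctl restart kubelet
```

Worker nodes should be drained with `kubectl drain <node> --ignore-daemonsets` before their kubelet upgrade and uncordoned afterwards; only skip minor versions one at a time.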