How to set up a HA Kubernetes cluster: preparing the nodes

The second part of this blog post series deals with the preparation of the Kubernetes and service nodes. It describes the installation of HAProxy, the Flannel daemon (flanneld), the Docker engine (docker) and the kubelet.

Installing the service node

The service node, which is located outside the cluster, takes on the role of a load balancer. Since the open source version of nginx does not support TCP load balancing, we use HAProxy. HAProxy distributes packets between the hosts on the transport layer of the OSI reference model. The following configuration is set up after the installation of HAProxy.

root@service:~$ cat > /etc/haproxy/haproxy.cfg << EOF
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
        log     global
        mode    http
        option  dontlognull
        option  redispatch
        retries 3
        maxconn 2000
        timeout connect         10s
        timeout client          1m
        timeout server          1m

listen https :443
        mode tcp
        balance roundrobin

        server apiserver01 master01.example.com:443 check
        server apiserver02 master02.example.com:443 check
EOF
root@service:~$ systemctl restart haproxy

After restarting the service it should be possible to reach the master nodes via the load balancer URL (e.g. loadbalancer.example.com), although at this point there is no service listening on port 443 of the masters yet.
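
Whether HAProxy itself is listening can be verified directly on the service node, for example with ss; the output should show haproxy bound to *:443:

root@service:~$ ss -tlnp | grep :443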

Flanneld

Flannel enables the cluster-wide routing of the pods' packets. Since the description of the network is stored in etcd, all Kubernetes nodes (master and worker) need a connection to the etcd cluster. The Docker daemon on each Kubernetes node is started with special parameters provided by flanneld. Cross-node communication of containers is realized by iptables rules that route the packets to the correct target servers. For this purpose flanneld creates its own virtual network interface and must therefore run with root rights.

Create etcd client certificates

The Kubernetes nodes need a client certificate because every user of the etcd cluster has to authenticate itself. It can be created as described in the first part of this blog post series, but the openssl.cnf differs: it has a simpler structure, because flanneld only has to identify itself as a client.

root@kubernetes:~$ mkdir -p /etc/ssl/etcd
root@kubernetes:~$ cd /etc/ssl/etcd
root@kubernetes:/etc/ssl/etcd$ cat > openssl.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
EOF
root@kubernetes:/etc/ssl/etcd$ openssl genrsa -out etcd-client.key 2048
root@kubernetes:/etc/ssl/etcd$ chmod 600 etcd-client.key
root@kubernetes:/etc/ssl/etcd$ openssl \
  req -new \
  -key etcd-client.key \
  -out etcd-client.csr \
  -subj "/CN=$(hostname -s)" \
  -extensions v3_req \
  -config openssl.cnf \
  -sha256
root@kubernetes:/etc/ssl/etcd$ scp openssl.cnf etcd-client.csr root@root-ca-host:
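
Before transferring the request it can be worth double-checking its subject and the requested extensions:

root@kubernetes:/etc/ssl/etcd$ openssl req -in etcd-client.csr -noout -subject
root@kubernetes:/etc/ssl/etcd$ openssl req -in etcd-client.csr -noout -text | grep -A 1 "Requested Extensions"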

This certificate request is again signed with the root CA; the resulting certificate is stored as /etc/ssl/etcd/etcd-client.crt on the Kubernetes nodes.

root@root-ca-host:~$ openssl x509 \
  -req \
  -CA ca.crt \
  -CAkey ca.key \
  -CAcreateserial \
  -in etcd-client.csr \
  -out etcd-client.crt \
  -days 365 \
  -extensions v3_req \
  -extfile openssl.cnf \
  -sha256
root@root-ca-host:~$ scp etcd-client.crt ca.crt root@kubernetes:/etc/ssl/etcd/
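
Back on the Kubernetes node the freshly signed certificate can be validated against the CA:

root@kubernetes:/etc/ssl/etcd$ openssl verify -CAfile ca.crt etcd-client.crt
etcd-client.crt: OK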

Altogether, the following files should be available on all Kubernetes nodes:

root@kubernetes:~$ cd /etc/ssl/etcd/
root@kubernetes:/etc/ssl/etcd$ rm -f openssl.cnf etcd-client.csr
root@kubernetes:/etc/ssl/etcd$ ls -la
-rw-r--r--  1 root root 4711 Jun 13 08:15 ca.crt
-rw-r--r--  1 root root 4712 Jun 13 08:15 etcd-client.crt
-rw-------  1 root root 4713 Jun 13 08:15 etcd-client.key
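
With these files in place the connection to the etcd cluster can be tested against the health endpoint of one of the etcd nodes; it should return something like {"health": "true"}:

root@kubernetes:/etc/ssl/etcd$ curl \
  --cacert ca.crt \
  --cert etcd-client.crt \
  --key etcd-client.key \
  https://192.168.0.1:2379/health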

Installing the daemon

The various settings are kept in a separate file, options.env. Besides the endpoints of all etcd nodes, it contains the paths to the certificates. Finally, the systemd service is created.

root@kubernetes:~$ cd /tmp
root@kubernetes:/tmp$ mkdir -p /opt/flanneld
root@kubernetes:/tmp$ curl -L -O https://github.com/coreos/flannel/releases/download/v0.5.5/flannel-0.5.5-linux-amd64.tar.gz
root@kubernetes:/tmp$ tar -xzf flannel-0.5.5-linux-amd64.tar.gz
root@kubernetes:/tmp$ cp flannel-0.5.5/flanneld /usr/local/bin/flanneld
root@kubernetes:/tmp$ cp flannel-0.5.5/mk-docker-opts.sh /opt/flanneld/mk-docker-opts.sh
root@kubernetes:/tmp$ chmod 755 /usr/local/bin/flanneld
root@kubernetes:/tmp$ chmod 755 /opt/flanneld/mk-docker-opts.sh
root@kubernetes:/tmp$ mkdir -p /etc/flanneld
root@kubernetes:/tmp$ export PRIMARY_HOST_IP=192.168.1.1 # the node's own IP: 192.168.1.[1|2] on the masters, 192.168.2.[1|2|3] on the workers
root@kubernetes:/tmp$ cat > /etc/flanneld/options.env << EOF
FLANNELD_ETCD_ENDPOINTS=https://192.168.0.1:2379,https://192.168.0.2:2379,https://192.168.0.3:2379
FLANNELD_ETCD_CAFILE=/etc/ssl/etcd/ca.crt
FLANNELD_ETCD_CERTFILE=/etc/ssl/etcd/etcd-client.crt
FLANNELD_ETCD_KEYFILE=/etc/ssl/etcd/etcd-client.key
FLANNELD_IFACE=$PRIMARY_HOST_IP
FLANNELD_PUBLIC_IP=$PRIMARY_HOST_IP
EOF
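
Flanneld reads its network configuration from the etcd key /coreos.com/network/config and will not start without it. If this key has not been created yet, it can be set from any Kubernetes node. The pod network 10.1.0.0/16 below is only an assumption, adjust it to your environment; the flags correspond to the etcdctl v2 tooling of that time and may differ in other versions:

root@kubernetes:/tmp$ etcdctl \
  --endpoints https://192.168.0.1:2379 \
  --ca-file /etc/ssl/etcd/ca.crt \
  --cert-file /etc/ssl/etcd/etcd-client.crt \
  --key-file /etc/ssl/etcd/etcd-client.key \
  set /coreos.com/network/config '{"Network":"10.1.0.0/16"}'
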
root@kubernetes:/tmp$ cat > /etc/systemd/system/flanneld.service << EOF
[Unit]
Description=Network fabric for containers
Documentation=https://github.com/coreos/flannel
Requires=networking.service
Before=docker.service
After=networking.service

[Service]
Type=notify
Restart=always
RestartSec=5
EnvironmentFile=/etc/flanneld/options.env
LimitNOFILE=40000
LimitNPROC=1048576
ExecStartPre=/sbin/modprobe ip_tables
ExecStartPre=/bin/mkdir -p /run/flanneld

ExecStart=/usr/local/bin/flanneld --ip-masq=true

## Updating Docker options
ExecStartPost=/opt/flanneld/mk-docker-opts.sh -d /run/flanneld/docker_opts.env -i

[Install]
WantedBy=multi-user.target
EOF
root@kubernetes:/tmp$ systemctl enable flanneld
root@kubernetes:/tmp$ systemctl start flanneld
root@kubernetes:/tmp$ rm -rf flannel-0.5.5
root@kubernetes:/tmp$ rm -f flannel-0.5.5-linux-amd64.tar.gz
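
If everything worked, flanneld has created its virtual network interface (flannel0 with the default UDP backend) and written the options file for Docker:

root@kubernetes:/tmp$ systemctl status flanneld
root@kubernetes:/tmp$ ip addr show flannel0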

Docker engine

As you can see in /etc/systemd/system/flanneld.service, the script mk-docker-opts.sh writes environment variables to the file /run/flanneld/docker_opts.env. These variables are used by the Docker daemon. We use the drop-in feature of systemd, so there is no need to change the standard service definition of Docker.
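
With the -i flag used above, the generated file contains the Docker options individually. Its contents look roughly like this; the subnet is assigned by flanneld and differs per node and backend:

DOCKER_OPT_BIP="--bip=10.1.42.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1472"

The installation of the Docker engine is done in the standard way.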

root@kubernetes:~$ apt-get purge lxc-docker*
root@kubernetes:~$ apt-get purge docker.io*
root@kubernetes:~$ apt-get install -y apt-transport-https ca-certificates
root@kubernetes:~$ apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
root@kubernetes:~$ cat > /etc/apt/sources.list.d/docker.list << EOF
deb https://apt.dockerproject.org/repo debian-jessie main
EOF
root@kubernetes:~$ apt-get update
root@kubernetes:~$ apt-get install -y docker-engine
root@kubernetes:~$ systemctl enable docker
root@kubernetes:~$ systemctl stop docker
root@kubernetes:~$ mkdir -p /etc/systemd/system/docker.service.d
root@kubernetes:~$ cat > /etc/systemd/system/docker.service.d/docker.conf << "EOF"
[Service]
EnvironmentFile=/run/flanneld/docker_opts.env
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// $DOCKER_OPTS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
EOF
root@kubernetes:~$ systemctl daemon-reload
root@kubernetes:~$ systemctl enable docker
root@kubernetes:~$ systemctl restart docker

The drop-in overrides the start command: the empty ExecStart= clears the original entry, and the new one contains the variables from the file /run/flanneld/docker_opts.env, which is generated by flanneld and loaded via the EnvironmentFile option.
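
Whether Docker actually picked up the flannel parameters can be verified on the bridge interface; the address of docker0 has to lie within the flannel subnet assigned to this node:

root@kubernetes:~$ ip addr show docker0 | grep inet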

Kubelet

The kubelet watches the directory /etc/kubernetes/manifests and creates Docker containers from the pod definitions located there. Therefore it also needs root rights. The kubelet version has to be compatible with the one of the API server.

Installation

The kubelet is part of the official Kubernetes release. It is simply downloaded to every Kubernetes node and unpacked there.

root@kubernetes:/tmp$ curl -L -O https://github.com/kubernetes/kubernetes/releases/download/v1.2.2/kubernetes.tar.gz
root@kubernetes:/tmp$ tar -xzf kubernetes.tar.gz
root@kubernetes:/tmp$ cd kubernetes/server
root@kubernetes:/tmp/kubernetes/server$ tar -xzf kubernetes-server-linux-amd64.tar.gz
root@kubernetes:/tmp/kubernetes/server$ cp kubernetes/server/bin/kubelet /usr/local/bin/kubelet
root@kubernetes:/tmp/kubernetes/server$ cd /tmp
root@kubernetes:/tmp$ rm -rf kubernetes
root@kubernetes:/tmp$ chmod 755 /usr/local/bin/kubelet
root@kubernetes:/tmp$ mkdir -p /etc/kubernetes/manifests
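
The manifests directory is still empty at this point. Just to illustrate the mechanism: as soon as a pod definition is placed there and the kubelet is running, the corresponding containers are created. A minimal, purely illustrative example (not needed for this setup) could look like this:

root@kubernetes:/tmp$ cat > /etc/kubernetes/manifests/nginx-example.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
spec:
  containers:
  - name: nginx
    image: nginx
EOF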

In addition we have to add a kernel boot parameter (cgroup_enable=memory). The following code example shows how to do this using sed.

root@kubernetes:~$ sed -i "s/^GRUB_CMDLINE_LINUX_DEFAULT=\"\(.*\)\"$/GRUB_CMDLINE_LINUX_DEFAULT=\"\1\ cgroup_enable=memory\"/g" /etc/default/grub
root@kubernetes:~$ update-grub
root@kubernetes:~$ reboot
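
After the reboot it can be checked whether the parameter is active:

root@kubernetes:~$ grep cgroup_enable /proc/cmdline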

Summary

The external service node is used as a load balancer. Since it works on the transport layer of the OSI reference model, it does not terminate SSL but simply routes the traffic.

Various services run on all Kubernetes nodes. Flanneld has to run before the Docker engine; it stores the data for its networks in the etcd cluster and therefore needs a client certificate. The Docker engine is a standard installation that uses a systemd drop-in to pick up the parameters provided by flanneld.

About the author

Karsten Peskova is a qualified civil engineer and has held a variety of different jobs since joining the software industry many years ago. He enjoys working directly with our customers, but also solving technical problems of all kinds.