This is the first post of a four-part blog series about how we install a high-availability (HA) Kubernetes cluster for testing purposes on Debian 8. As part of this process, all communication between the cluster services is secured via TLS.
Outline
Kubernetes offers a wide range of settings. Therefore, in this series we focus on version 1.2.2 and its characteristics.
In order not to spell out every command per host, we use role names for groups of computers. Including the user and the working directory, a command could look as follows:
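The prompt notation below is only an illustration of this convention; the exact format is an assumption:

```
# Prompt format (assumed): <role> (<user>) <working directory> $ <command>
common (root) ~ $ ls -al /etc
```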
This example shows the root user listing all files in the directory /etc on the computers with the role common.
In many cases we will not explain commands in detail, as they are self-explanatory. Any specific characteristics will be mentioned at the corresponding points.
In order to ensure that all content is well understood, some frequently used terms are explained in the Glossary at the end of this post. These terms are linked the first time they appear in the post.
Prerequisites
Root certification authority (CA)
In order to use TLS, a variety of certificates are required. In this post, we assume that a root CA is already available. If not, it can be easily created.
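A minimal sketch of creating one is shown here; the file names, key size and validity period are assumptions:

```
# Create a private key and a self-signed root CA certificate (illustrative values)
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -key ca.key -days 3650 \
  -subj "/CN=kubernetes-root-ca" -out ca.crt
```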
Remote access
To execute the commands we use the Bash shell that Debian 8 ships by default. It can be reached via a remote console (ssh). The connection is established with an additional user who is able to gain root rights.
Additional packages
In this series, we assume that the commands we use are installed via the package manager. Ensure the following programs are installed on all computers:
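A sketch of the installation, based on the tools used later in this post (the exact package selection is an assumption):

```
# Install the tools used throughout this series (package list is an assumption)
apt-get update
apt-get install -y openssl curl openssh-client ca-certificates
```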
Load balancer URL
During installation a load balancer URL is used in several places and has to be resolvable within your own network. In this simple configuration it is sufficient to use the FQDN of the service node. In our example, the load balancer URL therefore always refers to the service node.
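For a test setup, one simple way to achieve this is an /etc/hosts entry on every node; the IP and FQDN below are placeholders:

```
# Illustrative only: resolve the load balancer URL to the service node
echo "192.168.0.10  k8s.example.com" >> /etc/hosts
```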
Hardware assembly
Overview of nodes with their IPs and roles:
For this simple HA setup, we use 9 computers in total, which are listed in the image above according to their role and affiliation. Each computer has just one network card. The RAM size and the number of CPUs were chosen according to the computer’s function:
- Etcd
  - CPU: 2
  - RAM: 1 GB
  - Disk: 15 GB
- Master
  - CPU: 2
  - RAM: 2 GB
  - Disk: 15 GB
- Worker
  - CPU: 2
  - RAM: 4 GB
  - Disk: 20 GB
- Service
  - CPU: 2
  - RAM: 2 GB
  - Disk: 30 GB
The service node outside of the cluster has a special function. To use the cluster productively, a [load balancer](https://en.wikipedia.org/wiki/Load_balancing_(computing)) is required, which is provided by this computer. It is not part of the Kubernetes cluster, i.e. it does not run any Kubernetes services.
Etcd installation
Let’s start our installation by configuring the etcd nodes. As an etcd cluster they serve as the central storage location for all important cluster information.
Creating etcd peer certificates
First, create one certificate signing request for the so-called peer certificate for each etcd node. You will need the IP of the primary network card, on which the etcd service will be available later. In this example it is 192.168.0.[1|2|3], depending on the node.
This process will be repeated several times during the installation; we will point that out at the corresponding places.
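A possible sketch of this step is shown below; the key size, the subject and the exact layout of openssl.cnf are assumptions, while the IP, the file name etcd.csr and the extendedKeyUsage setting follow this post. Adjust IP and FQDN per node; the FQDN shown is a placeholder.

```
# Node-specific values, used via $ENV:: in openssl.cnf (adjust per etcd node)
export ETCD_IP=192.168.0.1
export ETCD_FQDN=etcd1.example.com   # placeholder FQDN

# openssl.cnf (sketch) -- referenced again later when issuing the certificate
cat > openssl.cnf <<'EOF'
[req]
req_extensions     = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]

[v3_req]
basicConstraints   = CA:FALSE
keyUsage           = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage   = clientAuth,serverAuth
subjectAltName     = @alt_names

[alt_names]
DNS.1 = $ENV::ETCD_FQDN
IP.1  = $ENV::ETCD_IP
IP.2  = 127.0.0.1
EOF

# Generate the private key and the certificate signing request
openssl genrsa -out etcd.key 2048
openssl req -new -key etcd.key -out etcd.csr \
  -subj "/CN=${ETCD_FQDN}" -config openssl.cnf
```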
These commands generate a new file etcd.csr, which contains the information needed to create a peer certificate. To issue the certificate you additionally need the openssl.cnf file. With the following scp command both files are copied to the computer on which the root CA certificate is available:
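For example (the CA host name and user are placeholders):

```
# Copy the signing request and the openssl configuration to the CA host
scp etcd.csr openssl.cnf user@ca-host:~/
```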
To subsequently issue the certificate we use openssl again. Here, the environment variables have to match the certificate signing request: depending on the etcd node, the IP and the FQDN have to be set correctly.
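A sketch of issuing the certificate on the CA host; the CA file names, the serial handling and the validity period are assumptions:

```
# Node-specific values -- must match the values used for the CSR
export ETCD_IP=192.168.0.1
export ETCD_FQDN=etcd1.example.com   # placeholder FQDN

# Issue the peer certificate, re-applying the v3_req extensions from openssl.cnf
openssl x509 -req -in etcd.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -extensions v3_req -extfile openssl.cnf \
  -days 365 -out etcd.crt
```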
When using a self-created root CA, the root CA certificate has to be appended to the generated certificate so that the etcd nodes can validate the certificate of their communication partner.
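For example, with the file names from the sketch above:

```
# Append the root CA certificate to the peer certificate (certificate chain)
cat ca.crt >> etcd.crt
```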
Then copy the peer certificate as well as the root CA certificate to the corresponding etcd node, to /etc/ssl/etcd/etcd.crt and /etc/ssl/etcd/ca.crt respectively.
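For example, for the first etcd node (adjust the IP per node):

```
# Copy the peer certificate and the root CA certificate to the etcd node
scp etcd.crt root@192.168.0.1:/etc/ssl/etcd/etcd.crt
scp ca.crt   root@192.168.0.1:/etc/ssl/etcd/ca.crt
# If the private key was not generated on the node itself, copy it as well (assumption)
scp etcd.key root@192.168.0.1:/etc/ssl/etcd/etcd.key
```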
The following files should now be available on all etcd nodes with the correct permissions:
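A sketch of the expected layout; the exact file modes are assumptions, and the files must be readable by the user the etcd daemon runs as:

```
# ca.crt, etcd.crt and etcd.key should be present; restrict access to the key
chmod 644 /etc/ssl/etcd/ca.crt /etc/ssl/etcd/etcd.crt
chmod 600 /etc/ssl/etcd/etcd.key
ls -l /etc/ssl/etcd/
```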
Installing the etcd daemon
In the next step we install the etcd daemon. It will be managed via systemd. By using the EnvironmentFile parameter in the service description, the configuration can easily be moved to a separate file (options.env). As the option ETCD_ADVERTISE_CLIENT_URLS is node-specific, you have to enter the IP of the primary network card for each node. To simplify this, we use the variable PRIMARY_HOST_IP in the following commands:
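A minimal sketch of the unit file and of generating options.env is shown below. The binary path, the file locations, the node names and the peer port 2380 (the etcd default) are assumptions; the IPs, the certificate paths and the client port 2379 follow this post.

```
# /etc/systemd/system/etcd.service (sketch; the binary path is an assumption)
cat > /etc/systemd/system/etcd.service <<'EOF'
[Unit]
Description=etcd key-value store
After=network.target

[Service]
EnvironmentFile=/etc/etcd/options.env
ExecStart=/usr/bin/etcd
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# Node-specific IP of the primary network card (adjust per etcd node)
PRIMARY_HOST_IP=192.168.0.1

# /etc/etcd/options.env (sketch; adjust ETCD_NAME per node, etcd1/etcd2/etcd3 are assumptions)
mkdir -p /etc/etcd /var/lib/etcd
cat > /etc/etcd/options.env <<EOF
ETCD_NAME=etcd1
ETCD_DATA_DIR=/var/lib/etcd
ETCD_LISTEN_CLIENT_URLS=https://${PRIMARY_HOST_IP}:2379,http://127.0.0.1:2379
ETCD_ADVERTISE_CLIENT_URLS=https://${PRIMARY_HOST_IP}:2379
ETCD_LISTEN_PEER_URLS=https://${PRIMARY_HOST_IP}:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=https://${PRIMARY_HOST_IP}:2380
ETCD_INITIAL_CLUSTER=etcd1=https://192.168.0.1:2380,etcd2=https://192.168.0.2:2380,etcd3=https://192.168.0.3:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_CERT_FILE=/etc/ssl/etcd/etcd.crt
ETCD_KEY_FILE=/etc/ssl/etcd/etcd.key
ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.crt
ETCD_PEER_CERT_FILE=/etc/ssl/etcd/etcd.crt
ETCD_PEER_KEY_FILE=/etc/ssl/etcd/etcd.key
ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.crt
ETCD_PEER_CLIENT_CERT_AUTH=true
EOF

# Activate the service
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
```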
After a few seconds the etcd cluster should be initialized and ready to store data. Use the following command to verify this:
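For example, on one of the etcd nodes (the expected output is illustrative):

```
# Query the cluster state via the local, unencrypted endpoint
etcdctl cluster-health
# Illustrative output: one "member ... is healthy" line per node, then "cluster is healthy"
```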
Creating flannel networks
The network information for the flanneld daemon used later is the first data to be stored in etcd. It can easily be created with a curl command, executed on one of the etcd nodes, since there the etcd service is, in addition to the node’s IP, also available on localhost:2379.
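A sketch of this step; /coreos.com/network is flanneld’s default key prefix and must match the prefix flanneld is started with later:

```
# Store the flannel network configuration in etcd (v2 keys API)
curl -X PUT http://localhost:2379/v2/keys/coreos.com/network/config \
  -d value='{"Network":"10.2.0.0/16"}'
```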
The network range used here (10.2.0.0/16) allows flanneld to operate 256 worker nodes with 256 pods each. Depending on your network infrastructure, this range has to be changed to avoid collisions with other networks. If, for example, the network 10.2.0.0/16 already exists in your own infrastructure, the applications within the cluster would not be able to reach that network, as the iptables rules created by flanneld would block the redirection.
Summary
The task of etcd in the Kubernetes cluster is to store the complete cluster information. To make this storage fail-safe, you should operate several etcd nodes. The setup described here tolerates the failure of one etcd node.
If you want to improve the reliability further, you have to add more nodes.

Encryption is ensured via TLS: by communicating over https, the data exchanged between the etcd nodes is transferred in encrypted form. A distinctive feature of our setup is that we use one peer certificate per etcd node. An etcd node uses the same certificate to encrypt its communication with the other cluster members regardless of whether it acts as client or as server. This simplifies the structure, as we don’t need both a client and a server certificate per node. The required settings are made in openssl.cnf; here it is important that extendedKeyUsage = clientAuth,serverAuth is set.

For local administration you can also use an unencrypted connection via localhost. With the etcdctl tool you can manage the content of the key-value store. The option --no-sync always has to be used if you want to limit the interaction to the local etcd node.
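For example, to inspect the store on the local node only:

```
# --no-sync prevents etcdctl from syncing the member list first,
# so the request stays on the local etcd node
etcdctl --no-sync ls --recursive /
```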
Glossary
Cluster
During operation, the whole computer network is called the Kubernetes cluster, or just the cluster.
Nodes
A node is a computer that is part of the cluster, independent of its function. When we refer to a node that belongs to a specific function group, we use a prefix, e.g. master node.
Backend/Infrastructure
Computers that run services that are mandatory for operating the cluster. These nodes are critical infrastructure and should be operated redundantly.
Frontend/Worker nodes
Computers that execute the majority of the pods. The number of these nodes can easily be scaled, as these nodes do not execute services that are mandatory from a cluster point of view.
Pod
Within Kubernetes, this is the smallest unit in which containers can be managed. A pod can consist of one or more containers. The containers within a pod are always executed on the same Kubernetes node and can easily communicate with each other via their Docker network connection.
API-Server
The API server is a Kubernetes core component. This pod is executed on the master nodes and provides the cluster API via https.