The fourth part of this blog post series deals with the worker nodes.
Because of their simpler role, worker nodes are easier to install than master nodes. Like the master nodes, however, they need client certificates to connect to the API server.
Certificates
Worker certificates are also created per node. They are structured as simply as the Etcd client certificates, and they are generated in the same way as those.
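As a minimal sketch of this step, the following commands create a client certificate for one worker node with openssl. The node name worker-1 and all file names are placeholders; in the real setup you would reuse the root CA from part one of this series instead of creating a throwaway one:

```shell
# For illustration only: create a throwaway root CA. In the real cluster
# this CA already exists from part one of the series.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 365 -out ca.crt -subj "/CN=kube-ca"

# Client key and certificate signing request for the worker node
# (the node name "worker-1" is a placeholder).
openssl genrsa -out worker-1.key 2048
openssl req -new -key worker-1.key -out worker-1.csr -subj "/CN=worker-1"

# Sign the request with the cluster's root CA.
openssl x509 -req -in worker-1.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out worker-1.crt -days 365

# Verify the resulting certificate against the CA.
openssl verify -CAfile ca.crt worker-1.crt
```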
Since the worker nodes already received the root CA certificate (for flanneld), they should have the following certificates at this stage:
kubeconfig
Next, the kubeconfig YAML file is created, which will be used by both kubelet and kube-proxy. It specifies where the previously created certificates are located.
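A sketch of such a kubeconfig is shown below. All paths, the node name worker-1, and the load balancer URL are placeholders; in the real setup the file would live at a path such as /var/lib/kubelet/kubeconfig:

```shell
# Sketch of a kubeconfig for kubelet and kube-proxy (paths, node name and
# load balancer URL are placeholders). Written to the current directory
# here; in the real setup it would go to e.g. /var/lib/kubelet/kubeconfig.
cat > kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.crt
    server: https://loadbalancer.example.com:6443
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker-1.crt
    client-key: /etc/kubernetes/ssl/worker-1.key
contexts:
- name: kubelet-context
  context:
    cluster: local
    user: kubelet
current-context: kubelet-context
EOF
```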
Kubelet
kubelet runs on the worker nodes as well. Its configuration differs from that of the master nodes, but the installation itself follows the same steps described in the second part of this blog post series.
As described in the setup of the master components, kubelet and kube-proxy are affected by a bug. Therefore, we do not pass a comma-separated list of API servers to the api-servers parameter, but only the load balancer’s URL.
Unlike the master nodes, the worker nodes are meant to run the actual application pods. Consequently, kubelet registers its own node with the API as “schedulable”.
Kube-proxy
The load balancer’s URL is also used in the pod definition of the kube-proxy service on the worker nodes.
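A static pod definition for kube-proxy could look like the following sketch. The image tag, the load balancer URL, and the file paths are assumptions and must be adapted to your environment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    # Image and tag are placeholders for the Kubernetes version in use.
    image: gcr.io/google_containers/hyperkube:v1.2.4
    command:
    - /hyperkube
    - proxy
    # Only the load balancer's URL, not a list of API servers (see above).
    - --master=https://loadbalancer.example.com:6443
    - --kubeconfig=/etc/kubernetes/kubeconfig
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/kubernetes
      name: kubernetes
      readOnly: true
  volumes:
  - name: kubernetes
    hostPath:
      path: /etc/kubernetes
```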
Here we see a weak point of the HA cluster: ideally, the load balancer would have to be failure-resistant as well, because it is the only connection from the worker nodes to the master nodes. Since setting this up would go beyond the scope of this article, we simply refer to this older documentation.
Now that the kube-proxy pod has been created, the worker nodes are ready for operation.
With flanneld and the Docker engine in place, kubelet is started with the following command:
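As a sketch, the start command could look as follows. The load balancer URL, the cluster DNS IP, and the paths are placeholders, and the flag names correspond to the Kubernetes 1.x releases this series is based on:

```shell
# Sketch of the kubelet start command on a worker node.
# URL, IP addresses and paths are placeholders for your environment.
kubelet \
  --api-servers=https://loadbalancer.example.com:6443 \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --config=/etc/kubernetes/manifests \
  --register-schedulable=true \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local \
  --allow-privileged=true
```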
As on the master nodes, kubelet now starts turning the pod definitions in /etc/kubernetes/manifests into running containers.
kubectl
kubectl is the administration tool for the cluster.
It runs on your local computer and interacts with the cluster via the Kubernetes API.
Among other things, it lets you create new pods from YAML files on your local machine.
Note that pods defined by YAML files on the Kubernetes nodes themselves (so-called “static pods”) cannot be managed via kubectl.
The kubectl communication is TLS-encrypted.
The client certificate for the API is created in the familiar manner (see the worker certificates above).
The cluster’s root CA is copied to the folder ~/.kube as ca.crt.
After that, kubectl is configured.
The setup can then be verified with the following command:
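As a sketch, the configuration and a simple functional test could look like this. The load balancer URL and the certificate file names are placeholders; the final command requires a running cluster:

```shell
# Sketch: point kubectl at the load balancer and authenticate with the
# client certificate (URL, context and file names are placeholders).
kubectl config set-cluster default-cluster \
  --server=https://loadbalancer.example.com:6443 \
  --certificate-authority=$HOME/.kube/ca.crt
kubectl config set-credentials default-admin \
  --client-certificate=$HOME/.kube/admin.crt \
  --client-key=$HOME/.kube/admin.key
kubectl config set-context default-system \
  --cluster=default-cluster --user=default-admin
kubectl config use-context default-system

# If everything works, this lists all nodes of the cluster.
kubectl get nodes
```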
SkyDNS
Now that the cluster is set up, we can install the cluster DNS.
We use the add-on SkyDNS.
This way, applications inside the cluster can address Kubernetes services by their DNS names.
The SkyDNS pod runs on one of the worker nodes, making SkyDNS the first application on the new cluster (represented by a Replication Controller).
In addition, SkyDNS needs another Kubernetes object: the Service.
The example demonstrates the creation of both objects directly from the shell using kubectl.
But of course they can be created via YAML files, too.
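A sketch of the shell variant could look as follows. The image, the generator, and the cluster IP 10.3.0.10 are assumptions; the IP must match the cluster-dns parameter configured in kubelet, and the commands require a running cluster:

```shell
# Sketch: create the SkyDNS Replication Controller directly from the shell.
# Image and tag are placeholders; --generator=run/v1 creates a
# Replication Controller (rather than a Deployment) on the 1.x releases
# this series is based on.
kubectl run kube-dns \
  --image=gcr.io/google_containers/skydns:2015-10-13-8c72f8c \
  --namespace=kube-system \
  --generator=run/v1

# Expose it as a Service with a fixed cluster IP; this IP must be the same
# one that was passed to kubelet via --cluster-dns.
kubectl expose rc kube-dns \
  --namespace=kube-system \
  --name=kube-dns \
  --port=53 --protocol=UDP \
  --cluster-ip=10.3.0.10
```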
The code shows that the cluster-dns IP configured in kubelet is assigned to the DNS service.
Applications in other namespaces can then reach the corresponding services simply by calling the service name.
SkyDNS continuously watches the API for new services and creates matching entries in the Etcd storage.
For the communication with the API it uses a secret, which is provided via the secured connections of all cluster components.
Summary
The installation of worker nodes also requires certificates to authenticate against the Kubernetes API.
To distribute the worker connections to both master nodes we use the load balancer.
This workaround is necessary, because there is an implementation error in kube-proxy.
The downside of this solution is the fact that the load balancer is not failure-resistant.
Outlook
In order to reduce the risk of a cluster outage, a comprehensive monitoring environment covering all nodes should be established, so that problems can be detected as early as possible.
For this purpose, Kubernetes provides additional add-ons.
Fluentd, combined with Elasticsearch and Kibana, as well as Heapster, combined with InfluxDB and Grafana, make it possible to analyze log files and the utilization of the nodes.
Karsten Peskova is a qualified civil engineer and has held a variety of different jobs since joining the software industry many years ago. He enjoys working directly with our customers, but also solving technical problems of all kinds.