Kubernetes Architecture

Nodes —> Nodes are the machines, virtual or physical, in the cluster that host our workloads. Nodes can fail at any given moment, and this is something that Kubernetes is designed to handle. We may want to be able to check the state of any particular node.
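For example, node state can be inspected with kubectl (this assumes a configured kubeconfig; the node name is a placeholder):

```shell
# List all nodes with their status, roles, and versions
kubectl get nodes -o wide

# Inspect a single node in detail -- conditions such as
# Ready, MemoryPressure, and DiskPressure appear here
kubectl describe node <node-name>
```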

Pods —> Pods are the smallest deployable units of computing that we can use in Kubernetes. When they cannot enter the ready state, it is worth checking out what the problem might be. The reasons may be internal to the pod as well as external to it. Some pods may have a problem with invalid access credentials. Others won't be able to create a container due to misconfiguration, and others yet will be stuck waiting for an appropriate node to host them. By looking at the logs and events presented by pods and containers, we can understand the root cause of the problem and remediate it.
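When a pod is stuck, describing it and reading its container logs usually reveals the cause (pod, container, and namespace names here are placeholders):

```shell
# Show pod status, restart counts, and recent events,
# e.g. ImagePullBackOff or FailedScheduling
kubectl describe pod <pod-name> -n <namespace>

# Read the logs of a container in the pod; --previous shows
# the logs of the last crashed instance, if any
kubectl logs <pod-name> -c <container-name> --previous
```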

Containers —> Containers are responsible for handling the actual workload inside a pod.
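To look inside a running container directly, we can open an interactive shell in it (names are placeholders):

```shell
# Run a shell inside a specific container of a pod
kubectl exec -it <pod-name> -c <container-name> -- sh
```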

Control Plane —> The control plane is the part we communicate with each time we use the Kubernetes API, for example via the kubectl command. Through the control plane, we can query all of the events happening within a cluster to understand its state.
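Cluster-wide events can be queried through the API like this:

```shell
# List recent events across all namespaces, oldest first
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp
```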

Kubernetes installation with kubespray

git clone https://github.com/kubernetes-sigs/kubespray.git --branch release-2.16
cd kubespray/
cp -rf inventory/sample inventory/mycluster
vi inventory/mycluster/inventory.ini

---
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for the etcd cluster. Nodes that are not etcd members do not need to set the value, or can set it to an empty string.
[all]
master ansible_host=13.41.... ansible_private_host=172.31....  # ip=10.3.0.1 etcd_member_name=etcd1
worker ansible_host=18.169.... ansible_private_host=172.31.... # etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14  # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15  # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16  # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17  # ip=10.3.0.6 etcd_member_name=etcd6

# ## configure a bastion host if your nodes are not directly reachable
# [bastion]
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube_control_plane]
master
# node2
# node3

[etcd]
master
# node2
# node3

[kube_node]
worker
# node3
# node4
# node5
# node6

[calico_rr]

[k8s_cluster:children]
kube_control_plane
kube_node
calico_rr
---

sudo docker run --rm -it \
 --mount type=bind,source=/home/cloud_user/kubernetes_installation/kubespray/inventory/mycluster,dst=/inventory \
 --mount type=bind,source=/home/cloud_user/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
 --mount type=bind,source=/home/cloud_user/.ssh/id_rsa,dst=/home/cloud_user/.ssh/id_rsa \
 quay.io/kubespray/kubespray:v2.16.0 bash

Inside the container:

ansible-playbook -i /inventory/inventory.ini cluster.yml --user=cloud_user --ask-pass --become --ask-become-pass
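Once the playbook completes, the cluster can be verified from the master node (the admin.conf path is the kubeadm default that kubespray produces):

```shell
# On the master node: check that all nodes joined and are Ready
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
```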

Rancher installation on the master node

docker run --name rancher-server -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.5.7
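Once the container is up, Rancher's startup can be followed via its logs, and the UI becomes reachable on the node's address (the URL is illustrative):

```shell
# Follow Rancher server startup logs
docker logs -f rancher-server

# Then browse to https://<master-public-ip> to finish the admin setup
```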