Deploying Kubernetes with RKE on Red Hat: A Guide for Medium-Sized Enterprises

Si Thu Ye Aung
Oct 15, 2023 · 5 min read


Rancher Kubernetes Engine (RKE) is a lightweight Kubernetes installer that simplifies the deployment and management of Kubernetes clusters. To install RKE on Red Hat Linux for a medium-sized cluster, you can follow these steps:

Note: This guide assumes that you have a Red Hat Linux server set up and ready to use.

Prerequisites (all nodes)

OS requirements and a root-privileged account:

  1. Red Hat Enterprise Linux 8.5 (five nodes in this guide; scale to your requirements).
  2. Root or sudo access on every server.

Add name mappings to /etc/hosts on all Kubernetes hosts.

cat <<EOF | sudo tee -a /etc/hosts
192.168.200.10 master-01 master-01.example.com
192.168.200.11 master-02 master-02.example.com
192.168.200.12 master-03 master-03.example.com
192.168.200.30 worker-01 worker-01.example.com
192.168.200.31 worker-02 worker-02.example.com
EOF

Open the required firewall ports on each node.

[root@master01 ~]# firewall-cmd --add-port={22/tcp,80/tcp,2376/tcp,2379/tcp,2380/tcp,8472/tcp,9099/tcp,10250/tcp,6443/tcp,10254/tcp,30000-32767/tcp,30000-32767/udp} --permanent
success
[root@master01 ~]# firewall-cmd --reload
success
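The same rule set can be applied one port at a time with a loop over the port list. This sketch prints each command as a dry run; drop the leading `echo` to actually apply the rules:

```shell
# Dry-run sketch: print one firewall-cmd invocation per port from the list above.
# Remove the leading "echo" on each line to apply the rules for real.
ports=(22/tcp 80/tcp 2376/tcp 2379/tcp 2380/tcp 8472/tcp 9099/tcp \
       10250/tcp 6443/tcp 10254/tcp 30000-32767/tcp 30000-32767/udp)
for p in "${ports[@]}"; do
  echo sudo firewall-cmd --permanent --add-port="$p"
done
echo sudo firewall-cmd --reload
```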

Set SELinux to permissive mode on each node

sudo sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
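The edit to /etc/selinux/config only takes effect after a reboot; `sudo setenforce 0` switches the running system to permissive immediately. To see what the sed rewrite does, here is a quick demonstration against a throwaway file rather than the real config:

```shell
# Demonstrate the SELINUX rewrite on a temporary file (not /etc/selinux/config)
tmp=$(mktemp)
echo 'SELINUX=enforcing' > "$tmp"
sed -i 's/SELINUX=enforcing/SELINUX=permissive/' "$tmp"
cat "$tmp"    # prints: SELINUX=permissive
rm -f "$tmp"
```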

Configure the required kernel modules on each node

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay

sudo modprobe br_netfilter

sudo sed -i 's/^net.ipv4.ip_forward/#&/' /etc/sysctl.d/*

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF

sudo sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf

Disable swap on each node

sudo swapoff -a

sudo sed -i 's/^[^#]*swap/#&/' /etc/fstab
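The sed above comments out any uncommented fstab line containing "swap", so swap stays off after a reboot. A sketch of its effect on a sample fstab fragment (a temporary file, not the real /etc/fstab):

```shell
# Show the swap-commenting sed against a throwaway fstab fragment
tmp=$(mktemp)
printf '%s\n' '/dev/mapper/rhel-root /    xfs  defaults 0 0' \
              '/dev/mapper/rhel-swap swap swap defaults 0 0' > "$tmp"
sed -i 's/^[^#]*swap/#&/' "$tmp"
cat "$tmp"    # the swap line now starts with '#'; the root line is untouched
rm -f "$tmp"
```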

Install Docker Engine (CE) on each node

sudo dnf install -y yum-utils device-mapper-persistent-data lvm2 iscsi-initiator-utils vim wget chrony

sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

sudo dnf remove -y podman buildah

sudo dnf install -y docker-ce docker-ce-cli containerd.io

sudo usermod -a -G docker $USER

sudo systemctl enable docker --now

docker --version

sudo mkdir -p /etc/containerd

sudo containerd config default | sudo tee /etc/containerd/config.toml

sudo systemctl enable containerd --now

SSH settings required by RKE on each node

cat <<EOF | sudo tee -a /etc/ssh/sshd_config
AllowStreamLocalForwarding yes
PermitTunnel yes
EOF

sudo systemctl restart sshd

Generate an SSH key and copy it to every node

- Log in to the first master node with an SSH client.

ssh-keygen

ssh-copy-id k8s@master-01.example.com

ssh-copy-id k8s@master-02.example.com

ssh-copy-id k8s@master-03.example.com

ssh-copy-id k8s@worker-01.example.com

ssh-copy-id k8s@worker-02.example.com
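The five ssh-copy-id calls can be collapsed into a single loop over the hostnames. Shown here as a dry run that only prints the commands; remove the leading `echo` to actually copy the key:

```shell
# Dry run: print one ssh-copy-id per node; drop "echo" to run for real
for host in master-01 master-02 master-03 worker-01 worker-02; do
  echo ssh-copy-id "k8s@${host}.example.com"
done
```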

Download RKE on the master node

wget https://github.com/rancher/rke/releases/download/v1.0.8/rke_linux-amd64

sudo install -o root -g root -m 0755 rke_linux-amd64 /usr/bin/rke

rke --version

Download kubectl on the master node

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

sudo install -o root -g root -m 0755 kubectl /usr/bin/kubectl

kubectl version --client

Test the SSH tunnels to the Kubernetes nodes

ssh -i ~/.ssh/id_rsa k8s@master-01 docker version

ssh -i ~/.ssh/id_rsa k8s@master-02 docker version

ssh -i ~/.ssh/id_rsa k8s@master-03 docker version

ssh -i ~/.ssh/id_rsa k8s@worker-01 docker version

ssh -i ~/.ssh/id_rsa k8s@worker-02 docker version

Building the RKE cluster

rke config

[rke@master01 ~]$ rke config 
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: /home/k8s/.ssh/id_rsa
[+] Number of Hosts [1]: 5
[+] SSH Address of host (1) [none]: master01
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (master01) [none]: /home/k8s/.ssh/id_rsa
[+] SSH User of host (master01) [redhat]: k8s
[+] Is host (master01) a Control Plane host (y/n)? [y]:
[+] Is host (master01) a Worker host (y/n)? [n]:
[+] Is host (master01) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (master01) [none]:
[+] Internal IP of host (master01) [none]:
[+] Docker socket path on host (master01) [/var/run/docker.sock]:
[+] SSH Address of host (2) [none]: master02
[+] SSH Port of host (2) [22]:
[+] SSH Private Key Path of host (master02) [none]: /home/k8s/.ssh/id_rsa
[+] SSH User of host (master02) [redhat]: k8s
[+] Is host (master02) a Control Plane host (y/n)? [y]: y
[+] Is host (master02) a Worker host (y/n)? [n]: n
[+] Is host (master02) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (master02) [none]:
[+] Internal IP of host (master02) [none]:
[+] Docker socket path on host (master02) [/var/run/docker.sock]:
[+] SSH Address of host (3) [none]: master03
[+] SSH Port of host (3) [22]:
[+] SSH Private Key Path of host (master03) [none]: /home/k8s/.ssh/id_rsa
[+] SSH User of host (master03) [redhat]: k8s
[+] Is host (master03) a Control Plane host (y/n)? [y]: y
[+] Is host (master03) a Worker host (y/n)? [n]: n
[+] Is host (master03) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (master03) [none]:
[+] Internal IP of host (master03) [none]:
[+] Docker socket path on host (master03) [/var/run/docker.sock]:
[+] SSH Address of host (4) [none]: worker01
[+] SSH Port of host (4) [22]:
[+] SSH Private Key Path of host (worker01) [none]: /home/k8s/.ssh/id_rsa
[+] SSH User of host (worker01) [redhat]: k8s
[+] Is host (worker01) a Control Plane host (y/n)? [y]: n
[+] Is host (worker01) a Worker host (y/n)? [n]: y
[+] Is host (worker01) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (worker01) [none]:
[+] Internal IP of host (worker01) [none]:
[+] Docker socket path on host (worker01) [/var/run/docker.sock]:
[+] SSH Address of host (5) [none]: worker02
[+] SSH Port of host (5) [22]:
[+] SSH Private Key Path of host (worker02) [none]: /home/k8s/.ssh/id_rsa
[+] SSH User of host (worker02) [redhat]: k8s
[+] Is host (worker02) a Control Plane host (y/n)? [y]: n
[+] Is host (worker02) a Worker host (y/n)? [n]: y
[+] Is host (worker02) an etcd host (y/n)? [n]: n
[+] Override Hostname of host (worker02) [none]:
[+] Internal IP of host (worker02) [none]:
[+] Docker socket path on host (worker02) [/var/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal) [calico]:
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.17.5-rancher1]:
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:
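The answers above produce a cluster.yml in the working directory. A trimmed sketch of what it contains for this topology (node names, user, key path, and CIDRs are the values entered at the prompts; the full file has many more defaulted fields):

```yaml
nodes:
  - address: master01
    user: k8s
    role: [controlplane, etcd]
    ssh_key_path: /home/k8s/.ssh/id_rsa
  - address: master02
    user: k8s
    role: [controlplane, etcd]
    ssh_key_path: /home/k8s/.ssh/id_rsa
  - address: master03
    user: k8s
    role: [controlplane, etcd]
    ssh_key_path: /home/k8s/.ssh/id_rsa
  - address: worker01
    user: k8s
    role: [worker]
    ssh_key_path: /home/k8s/.ssh/id_rsa
  - address: worker02
    user: k8s
    role: [worker]
    ssh_key_path: /home/k8s/.ssh/id_rsa
network:
  plugin: calico
services:
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
```

Review the file before running `rke up`; any prompt answer can also be changed by editing cluster.yml directly.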

rke up

[rke@master01 ~]$ rke up
INFO[0000] Running RKE version: v1.0.8
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [master01]
INFO[0000] [dialer] Setup tunnel for host [master02]
INFO[0000] [dialer] Setup tunnel for host [master03]
INFO[0000] [dialer] Setup tunnel for host [worker01]
INFO[0000] [dialer] Setup tunnel for host [worker02]
...
INFO[0379] [ingress] Setting up nginx ingress controller
INFO[0379] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0379] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0379] [addons] Executing deploy job rke-ingress-controller
INFO[0384] [ingress] ingress controller nginx deployed successfully
INFO[0384] [addons] Setting up user addons
INFO[0384] [addons] no user addons defined
INFO[0384] Finished building Kubernetes cluster successfully

Set up the kubeconfig

mkdir ~/.kube

cp kube_config_cluster.yml ~/.kube/config

Label the nodes

kubectl label nodes master-01.example.com node=master-01
kubectl label nodes master-02.example.com node=master-02
kubectl label nodes master-03.example.com node=master-03
kubectl label nodes worker-01.example.com node=worker-01
kubectl label nodes worker-02.example.com node=worker-02
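Since the label commands all follow one pattern, a loop works too. Printed as a dry run here; drop the leading `echo` to apply the labels:

```shell
# Dry run: print the kubectl label command for each node
for n in master-01 master-02 master-03 worker-01 worker-02; do
  echo kubectl label nodes "${n}.example.com" "node=${n}"
done
```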

kubectl get nodes

NAME       STATUS   ROLES               AGE     VERSION
master01   Ready    controlplane,etcd   5m34s   v1.17.5
master02   Ready    controlplane,etcd   5m34s   v1.17.5
master03   Ready    controlplane,etcd   5m34s   v1.17.5
worker01   Ready    worker              5m34s   v1.17.5
worker02   Ready    worker              5m34s   v1.17.5

Kubernetes components will be covered in the next part of this series.
