Introduction
This post uses Kubernetes v1.22 on CentOS 9 Stream as the example, and records how the various packages are installed along with the small details involved. The steps may stop working after version updates, so use them with care. Also, since this is a home lab, firewall configuration is not covered in much depth; please look into that further on your own.
Installing a CRI
For the CRI I recommend containerd. CRI-O only shows better performance on OpenShift; in other scenarios containerd should be the faster choice.
CRI-O
OS=CentOS_9
VERSION=1.22
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
dnf install -y cri-o vim wget bash-completion
systemctl enable crio
systemctl start crio
ContainerD
dnf install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
dnf install -y containerd.io vim wget bash-completion
containerd config default > /etc/containerd/config.toml
# Edit /etc/containerd/config.toml and set:
#[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
# SystemdCgroup = true
vim /etc/containerd/config.toml
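If you would rather script this than open vim, a minimal sketch that flips the flag in the stock config generated above:
# Set SystemdCgroup = true under the runc options section
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml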
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
echo "alias docker='crictl" >> ./.bashrc
systemctl restart containerd
systemctl enable containerd
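With the endpoint configured, a quick sanity check that crictl can reach containerd:
# Both should answer without connection errors
crictl version
crictl info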
Installing kubeadm
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
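The net.bridge.* keys above only take effect once the br_netfilter module is loaded (and containerd relies on overlay), so it is worth loading both now and on every boot; a small sketch:
cat <<EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter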
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
dnf install -y kubelet-1.22.9 kubeadm-1.22.9 kubectl-1.22.9 --disableexcludes=kubernetes
# Pick one of the two lines below, depending on your CRI.
# This one points kubelet at CRI-O:
echo "KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint='unix:///var/run/crio/crio.sock'" > /etc/sysconfig/kubelet
# This one points kubelet at containerd:
echo "KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint='unix:///run/containerd/containerd.sock'" > /etc/sysconfig/kubelet
Modifying the kubelet unit file
The main goal is to make kubelet start after the CRI. Taking CentOS as an example, edit /usr/lib/systemd/system/kubelet.service:
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
# modify here
Wants=network-online.target crio.service containerd.service
After=network-online.target crio.service containerd.service
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
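If you prefer not to edit the packaged unit file directly, the same ordering can also be added as a systemd drop-in; a sketch (the drop-in file name is just an example, and you can list crio.service instead if you use CRI-O):
mkdir -p /etc/systemd/system/kubelet.service.d
cat <<EOF > /etc/systemd/system/kubelet.service.d/10-after-cri.conf
[Unit]
Wants=network-online.target containerd.service
After=network-online.target containerd.service
EOF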
systemctl daemon-reload
systemctl enable --now kubelet
systemctl stop firewalld
systemctl disable firewalld
sysctl --system
reboot
Bootstrapping Kubernetes
Master
If you want to use Cilium without kube-proxy, you can additionally pass
--skip-phases=addon/kube-proxy
If you are deploying highly available masters with a stacked (internal) etcd cluster, also pass
--control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
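For reference, a sketch combining both options (LOAD_BALANCER_DNS:LOAD_BALANCER_PORT is a placeholder for your own load balancer):
kubeadm init --pod-network-cidr=10.0.0.0/8 \
    --skip-phases=addon/kube-proxy \
    --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" \
    --upload-certs
The plain single-master bootstrap used here is: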
kubeadm init --pod-network-cidr=10.0.0.0/8
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
echo "source <(kubectl completion bash)" >> $HOME/.bashrc
alias k="kubectl"
source <(k completion bash)
complete -o default -F __start_kubectl k
TIPS:
If you want to avoid running metrics-server with --kubelet-insecure-tls, you can do the following once the first master is up: add serverTLSBootstrap: true to /var/lib/kubelet/config.yaml and to the ConfigMap edited via kubectl edit cm -n kube-system kubelet-config-1.22, then systemctl restart kubelet. After the other nodes have joined, list and approve their serving certificate requests:
kubectl get csr
kubectl certificate approve csr-xxxxx
Slave
Join the node using the command that kubeadm printed after bootstrapping:
kubeadm join 192.168.50.132:8443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
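If the token from kubeadm init has already expired, a fresh join command can be printed on a master:
kubeadm token create --print-join-command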
# Optionally, use kubectl to give the node a worker role label
kubectl label nodes <node-name> node-role.kubernetes.io/worker=
Installing a CNI
calico
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
wget https://docs.projectcalico.org/manifests/custom-resources.yaml
# First change cidr: in this file to 10.0.0.0/8 (matching --pod-network-cidr above)
kubectl apply -f custom-resources.yaml
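To watch the rollout (the Tigera operator creates the calico-system namespace for the Calico pods):
watch kubectl get pods -n calico-system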
weave
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
cilium
# Systemd 245 and above (systemctl --version) overrides rp_filter setting of Cilium network interfaces
echo 'net.ipv4.conf.lxc*.rp_filter = 0' > /etc/sysctl.d/99-override_cilium_rp_filter.conf
systemctl restart systemd-sysctl
helm repo add cilium https://helm.cilium.io/
helm repo update
# kubeProxyReplacement=strict replaces kube-proxy entirely
# k8sServiceHost / k8sServicePort point at the kube-apiserver
# hubble.* enable Hubble Relay and the Hubble UI
helm install cilium cilium/cilium --version 1.11.5 \
  --namespace kube-system \
  --set kubeProxyReplacement=strict \
  --set k8sServiceHost="192.168.50.132" \
  --set k8sServicePort=8443 \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
output:
NAME: cilium
LAST DEPLOYED: Tue May 31 22:53:19 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.
Your release version is 1.11.5.
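To confirm the agents are up and kube-proxy replacement is really active, one way is to ask the cilium CLI inside the agent DaemonSet (names here follow the chart defaults):
kubectl -n kube-system get pods -l k8s-app=cilium
kubectl -n kube-system exec ds/cilium -- cilium status | grep KubeProxyReplacement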
firewall
Here I record a few firewall rules for reference.
# for kubelet
firewall-cmd --new-service=kubelet --permanent
firewall-cmd --service=kubelet --add-port=10250/tcp --permanent
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.50.136/32" service name="kubelet" accept' --permanent
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.50.137/32" service name="kubelet" accept' --permanent
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.50.138/32" service name="kubelet" accept' --permanent
# for etcd
firewall-cmd --new-service=etcd --permanent
firewall-cmd --service=etcd --add-port=2379/tcp --permanent
firewall-cmd --service=etcd --add-port=2380/tcp --permanent
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.50.136/32" service name="etcd" accept' --permanent
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.50.137/32" service name="etcd" accept' --permanent
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.50.138/32" service name="etcd" accept' --permanent
# for kube-apiserver
firewall-cmd --new-service=kube_api --permanent
firewall-cmd --service=kube_api --add-port=6443/tcp --permanent
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.50.133/32" service name="kube_api" accept' --permanent
firewall-cmd --add-rich-rule 'rule family="ipv4" source address="192.168.50.134/32" service name="kube_api" accept' --permanent
firewall-cmd --reload
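After the reload, you can double-check what actually landed in the runtime configuration:
firewall-cmd --info-service=kubelet
firewall-cmd --list-rich-rules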