Refactoring and Automation of Kind

KinD Cluster, Version 2

All exercises will use a new version of the cluster built with Kind.

Edit the new version of the script:

cd
vim std.sh
#!/bin/sh
set -o errexit

# create the registry container unless it already exists
reg_name='kind-registry'
reg_port='5000'
if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then
  docker run \
    -d --restart=always -p "127.0.0.1:${reg_port}:5000" --name "${reg_name}" \
    -v "${PWD}/registry":/var/lib/registry registry:2
fi

# create a cluster with the local registry enabled in containerd
cat <<EOF | kind create cluster --image kindest/node:v1.24.0 --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
    endpoint = ["http://${reg_name}:5000"]
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /data
      containerPath: /data
- role: worker
  extraMounts:
    - hostPath: /data
      containerPath: /data
- role: worker
  extraMounts:
    - hostPath: /data
      containerPath: /data
EOF

# connect the registry to the cluster network if not already connected
if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${reg_name}")" = 'null' ]; then
  docker network connect "kind" "${reg_name}"
fi

# Document the local registry
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF

Make it executable:

chmod +x ~/std.sh
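
As a quick smoke test (a sketch, assuming ~/std.sh has already been run, directly or via setup.sh below, and that Docker and curl are available), a small image can be pushed to the local registry on localhost:5000; the busybox image and the :test tag are only examples:

docker pull busybox
docker tag busybox localhost:5000/busybox:test
docker push localhost:5000/busybox:test
# the registry catalog should now list the pushed repository
curl -s http://localhost:5000/v2/_catalog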

The section that changed is:

.....
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /data
      containerPath: /data
- role: worker
  extraMounts:
    - hostPath: /data
      containerPath: /data
- role: worker
  extraMounts:
    - hostPath: /data
      containerPath: /data
.....

Each node maps its /data directory to the /data directory of the host machine.

We will need this for managing data volumes.
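
As a minimal sketch of how this mapping can be consumed (assuming a subdirectory such as /data/pv1 exists on the host; the name pv-data and the 1Gi size are illustrative, not part of the exercises), a hostPath PersistentVolume could be declared like this:

# /data on each node is backed by the host's /data via extraMounts
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/pv1
EOF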

Shell Scripts for the Cluster

Cluster creation and deletion will be handled by two shell scripts.

Creation script ~/setup.sh:

vim ~/setup.sh
#! /bin/sh
echo "Creating cluster ..."
echo
echo "~/std.sh"
~/std.sh
echo "Cluster info follows ..."
kubectl cluster-info --context kind-kind

echo
echo "---> Loading and configuring the Metallb load balancer ..."
echo
echo "-- Namespace"
echo "kubectl apply -f scripts/metallb-ns.yml"
kubectl apply -f scripts/metallb-ns.yml
echo
echo "-- Deployment and service"
echo "kubectl apply -f scripts/metallb-svc.yml"
kubectl apply -f scripts/metallb-svc.yml
echo
echo "-- Configmap"
echo "kubectl apply -f scripts/metallb-configmap.yml"
kubectl apply -f scripts/metallb-configmap.yml
echo
echo "Wait up to 120s for Metallb controller deployment to be ready ..."
kubectl wait deployment -n metallb-system controller --for condition=Available=True --timeout=120s
echo
echo "    CLUSTER NOW READY"
echo " ===> All resources in namespace 'default'"
kubectl get all
echo
echo " ===> All resources in namespace 'metallb-system'"
kubectl get all -n metallb-system
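
To verify that MetalLB is actually assigning addresses once setup.sh has finished, a throwaway LoadBalancer service can be created. This is only a sketch: the nginx image and the lb-test name are placeholders, and the EXTERNAL-IP will come from whatever pool scripts/metallb-configmap.yml defines:

kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
# EXTERNAL-IP should be an address from the MetalLB pool, not <pending>
kubectl get svc lb-test
# clean up the test resources
kubectl delete service lb-test
kubectl delete deployment lb-test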

Deletion script ~/teardown.sh:

vim ~/teardown.sh
#! /bin/sh
trap 'echo "Interrupted. Cluster NOT deleted"; exit 100' SIGINT
echo "About to delete cluster 'std'"
echo "Press Control-C within 10 seconds to interrupt"
sleep 10
kind delete cluster
echo "Cluster deleted"

Make both scripts executable:

chmod +x ~/setup.sh
chmod +x ~/teardown.sh
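
A typical working session (a sketch of intended usage) then looks like the following. Note that teardown.sh deletes only the kind cluster: the kind-registry container, and the image data it stores under $PWD/registry, are left in place for the next run:

~/setup.sh
# the default cluster is named "kind"
kind get clusters
~/teardown.sh
# no clusters should be listed now
kind get clusters
# the local registry container keeps running across cluster rebuilds
docker ps -f name=kind-registry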