Fuzzball Documentation

Installing RKE2 on the Server Node

After the initial configuration has been completed, you are ready to install RKE2.

The RKE2 quick start guide suggests using a curl | sh approach to download and run the appropriate script for installation.

Piping curl to sh is usually considered bad practice, since you can never be completely sure what code you are running. A bad actor could compromise the server hosting the URL and change the script. You may want to download and inspect the script before running it.
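
If you would rather review the script first, a minimal alternative is to download it to a file, inspect it, and then run it locally. The file name rke2-install.sh below is arbitrary:

# curl -sfL https://get.rke2.io -o rke2-install.sh
# less rke2-install.sh # review the script contents
# sh rke2-install.sh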

The following command downloads and runs the installation script, which sets up a YUM repository and installs RKE2 along with compatible OS packages.

# curl -sfL https://get.rke2.io | sh -

If a node has more than one IP address, you should explicitly specify which address RKE2 should use before starting the server. You can determine which IP address you want to use with ip addr or a similar command.
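
For example, you can list the IPv4 addresses assigned to each interface and pick the one on your internal network:

# ip -4 addr show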

# INTERNAL_IP_ADDR="" # set this value to the server internal IP address you want to use (e.g. 10.0.0.3)

# cat > /etc/rancher/rke2/config.yaml <<EOF
node-ip: ${INTERNAL_IP_ADDR}
disable:
  - rke2-ingress-nginx
EOF

Be aware of the following issue from the RKE2 GitHub repo.

“Servers must have static IP addresses. If you must change the address, you should delete the node from the cluster and re-add it with the new address. In the case of a single-node cluster, you can stop the rke2-server service, run rke2 server --cluster-reset to reset the etcd cluster membership back to a single member with the current node IP address, then start the rke2-server service again.”
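
For reference, the single-node reset procedure described in that note would look roughly like the following sketch (this assumes the rke2 binary is on your PATH; adjust the path to the binary if it is not):

# systemctl stop rke2-server.service
# rke2 server --cluster-reset
# systemctl start rke2-server.service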


Instructions for a 3-node HA setup

If you are installing Fuzzball across a pool of 3 server nodes, follow the additional instructions in this section.

By default, the Flannel service that configures the network fabric on your RKE2 cluster uses the external interface on your server nodes. This should be reconfigured to use the internal interface; otherwise, you will likely see webhook validation timeouts when you later install MetalLB, Longhorn, and other components.

These instructions assume that the interface name is deterministic and is the same across all of your server nodes.
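
You can quickly confirm this by listing the link names on each node and checking that the internal interface carries the same name everywhere:

# ip -o link show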

Run the following commands to write a configuration file ensuring that flannel will use the internal interface.

# mkdir -p /var/lib/rancher/rke2/server/manifests

# INTERNAL_INTERFACE="" # fill in the name of your internal interface (e.g. enp8s0)

# cat >>/var/lib/rancher/rke2/server/manifests/rke2-canal-config.yaml <<EOF
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-canal
  namespace: kube-system
spec:
  valuesContent: |-
    flannel:
      iface: "${INTERNAL_INTERFACE}"
EOF
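
Once the server is running (next step) and kubectl is configured as described below, you can confirm that this override was picked up. HelmChartConfig is a standard RKE2 resource type, so a quick check looks like this:

# kubectl -n kube-system get helmchartconfig rke2-canal -o yaml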

With the local configuration in place, you are ready to start your RKE2 server.

# systemctl enable --now rke2-server.service

Once you have started the service, you can use kubectl to monitor the progress of pods as they are initialized.

# export PATH=/var/lib/rancher/rke2/bin:$PATH

# export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

# kubectl get pods -A
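
If you would rather watch the pods continuously instead of re-running the command, kubectl supports a --watch flag:

# kubectl get pods -A --watch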

The updated PATH and KUBECONFIG variables are necessary for the kubectl command to operate properly and should therefore be added to your ~/.bashrc or otherwise set automatically.
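
One way to make these settings persistent for the root user is to append them to ~/.bashrc. Note the quoted 'EOF', which keeps $PATH from being expanded when the file is written:

# cat >> ~/.bashrc <<'EOF'
export PATH=/var/lib/rancher/rke2/bin:$PATH
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
EOF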

When all pods are either “Completed” or “Running”, your basic RKE2 cluster is ready to go!

Instructions for a 3-node HA setup

If you are installing Fuzzball across a pool of 3 server nodes, follow the additional instructions in this section.

If you want to run Orchestrate on 3 nodes instead of one, you will need to install RKE2 on all three nodes and use an enrollment token in the configuration files. More information is available in the RKE2 high availability documentation.

Start by finding the enrollment token on the first server with the following command. Your token will resemble the output shown here, but the actual content will differ.

# cat /var/lib/rancher/rke2/server/node-token
K503435bda0bd651f90b9e49734346942875d427268a5a7f9f8cb4dfc12638eb893::server:5ed4a7bad837f97b855ea7015e173547

Begin the RKE2 installation on the next server by setting the environment variables and running the same commands listed in this guide up to this point. Stop before the step that creates the /etc/rancher/rke2/config.yaml file and continue with the instructions below.

The following procedure does not create a fully redundant, fault-tolerant cluster, because the IP address of the first node is used to route traffic between nodes. If the first node fails, the cluster will become inoperable. To create a fully redundant, production-ready installation, admins should explore options like HAProxy or kube-vip. Instructions for a fully redundant setup are currently outside the scope of this document.

Set the following environment variables and run the cat command below to generate an appropriate configuration file that points to the first server node:

# FIRST_SERVER_IP="" # set this to the internal server IP address where you obtained the enrollment token

# NEW_SERVER_IP="" # set this to the internal IP address of the server you are currently installing

# TOKEN="" # set this to the literal value of the enrollment token that you noted above

# cat > /etc/rancher/rke2/config.yaml <<EOF
node-ip: ${NEW_SERVER_IP}
server: https://${FIRST_SERVER_IP}:9345
token: ${TOKEN}
disable:
  - rke2-ingress-nginx
EOF

Then you can finish the installation like so:

# systemctl enable --now rke2-server.service

Repeat the same procedure on the third node. For convenience, you can set the KUBECONFIG and PATH environment variables and add them to your ~/.bashrc as detailed above so that you can run the kubectl command from any of your 3 nodes. Once you have completed these steps, you can check that your HA 3-node cluster is healthy with the following command:

# kubectl get nodes
NAME            STATUS   ROLES                       AGE     VERSION
godloved-ctl1   Ready    control-plane,etcd,master   26m     v1.33.6+rke2r1
godloved-ctl2   Ready    control-plane,etcd,master   10m     v1.33.6+rke2r1
godloved-ctl3   Ready    control-plane,etcd,master   9m32s   v1.33.6+rke2r1

Now check to make sure that the cluster is healthy. The following command should return all pods in either the Running or Completed state.

# kubectl get pods -A