Storage Configuration
Configuring Fuzzball to use storage is a two-stage process. First, you need to install a Container Storage Interface (CSI) driver that allows Fuzzball to understand what type of storage it is expected to access. Second, you need to add storage classes that allow Fuzzball to understand how it should allocate storage volumes on behalf of users.
The procedure for both of these stages is to create (or obtain) an appropriate YAML file and then apply the file using the Fuzzball CLI.
For the purposes of this guide, we will create two new NFS shares on the Fuzzball server node and use those to back ephemeral and persistent storage classes.
This is a very basic setup suitable for getting up and running quickly and for testing purposes. For more detailed information, see the storage configuration section of the guide.
First, create two new directories that will be served as NFS shares. On the server node (or the node serving your NFS shares), execute the following commands:
# PRIVATE_SUBNET="" # populate this with the proper value for your environment (e.g. 10.0.0.0/20)
# mkdir -pv /srv/fuzzball/{ephemeral,persistent}
# for i in {ephemeral,persistent}; \
do echo "/srv/fuzzball/${i} ${PRIVATE_SUBNET}(rw,sync,no_subtree_check,no_root_squash)"; done \
>>/etc/exports
# exportfs -a
# exportfs
It is not necessary to mount the newly created NFS shares on any compute nodes. Fuzzball will automatically handle this through the NFS CSI driver that we will create in the next step.
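If you want to preview what the loop above appends to /etc/exports before touching the file, you can run the same echo loop without the redirection. This is a dry run with an example subnet value; substitute your own:

```shell
# Dry run: print the export entries the loop generates without
# modifying /etc/exports. The subnet below is an example value.
PRIVATE_SUBNET="10.0.0.0/20"
for i in ephemeral persistent; do
  echo "/srv/fuzzball/${i} ${PRIVATE_SUBNET}(rw,sync,no_subtree_check,no_root_squash)"
done
```

Each printed line corresponds to one entry that the redirected version appends to /etc/exports.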
Now you can tell Fuzzball that you will use NFS to back cluster storage by installing an appropriate driver. Execute the following to create an appropriate YAML file.
Please note that the single quotes around 'EOF' are necessary. The YAML file is intended to include the literal strings ${CSI_NODE_ID} and ${CSI_ENDPOINT} rather than the current (and likely non-existent) values of those variables. In a heredoc, as in the example below, this is achieved by surrounding the EOF marker with single quotes, which prevents variable expansion.
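The difference is easy to see with a short, self-contained comparison (the GREETING variable here is purely illustrative):

```shell
# A quoted heredoc delimiter ('EOF') suppresses variable expansion;
# an unquoted one (EOF) does not.
GREETING="hello"

cat <<'EOF'
quoted: ${GREETING}
EOF

cat <<EOF
unquoted: ${GREETING}
EOF
```

The first cat prints the literal string ${GREETING}; the second prints hello. The driver definition below relies on the quoted behavior.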
# cat >nfs_driver_definition.yaml<< 'EOF'
version: v1
name: nfs.csi.k8s.io
description: NFS CSI Driver
image:
  uri: docker://registry.k8s.io/sig-storage/nfsplugin:v4.2.0
  args:
    - --nodeid=${CSI_NODE_ID}
    - --endpoint=${CSI_ENDPOINT}
    - -v=10
EOF
You can install the NFS CSI driver like so:
# fuzzball admin storage driver install nfs_driver_definition.yaml
Driver "9c35fed2-26a1-3313-97f6-f25dc2366dd9" installed
# fuzzball admin storage driver list
ID | NAME | DESCRIPTION | CREATED TIME | LAST UPDATED | CLUSTER
432b9f9e-efed-34b5-bd83-4db6549955c9 | nfs.csi.k8s.io | NFS CSI Driver | 2026-01-05 11:40:45PM | 2026-01-05 11:40:45PM | unset-cluster
Storage drivers are applied at the cluster level. Once you install a storage driver, you may use it to create storage classes within any organization on the cluster.
Now you can create the YAML files for your ephemeral and persistent storage classes and apply them.
Many of the CIQ-created templates in the Workflow Catalog assume there will be a storage class called ephemeral and a storage class called persistent. So it is a good idea to keep those names unless you have good reason to change them.
You can start with ephemeral storage. Execute the following to create an appropriate configuration file.
# NFS_SERVER_IP="" # fill in the value of your NFS server IP address (e.g. 10.0.0.4)
# cat >ephemeral_data_class.yaml<< EOF
version: v1
name: ephemeral
description: Ephemeral Scratch Volumes
driver: nfs.csi.k8s.io
properties:
  persistent: false
  retainOnDelete: false
parameters:
  server: ${NFS_SERVER_IP}
  share: /srv/fuzzball/ephemeral
capacity:
  size: 100
  unit: GiB
access:
  type: filesystem
  mode: multi_node_multi_writer
mount:
  options:
    - nfsvers=4
  user: user
  group: user
  permissions: 770
scope: user
volumes:
  nameArgs:
    - WORKFLOW_ID
  nameFormat: "{{workflow_id}}"
EOF
Now you can apply this configuration and create an ephemeral storage class backed by NFS with the following commands. The -w flag causes the command to wait for the operation to succeed before exiting.
# fuzzball admin storage class create -w ephemeral_data_class.yaml
# fuzzball admin storage class list
ID | NAME | STATUS | CREATED TIME | LAST UPDATED | PERSISTENT | RESTRICTED | CLUSTER
94e14bb0-2235-3373-a404-b1c78e0a411e | ephemeral | Ready | 2026-01-05 11:42:24PM | 2026-01-05 11:42:24PM | No | No | unset-cluster
The second command above shows you that your new ephemeral storage class has been created and is ready for use.
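Because the storage class definitions use an unquoted EOF delimiter, NFS_SERVER_IP is expanded when the file is written. If a class fails to come up, a quick grep on the generated file confirms whether a concrete address was substituted or the literal variable slipped through (for example, because NFS_SERVER_IP was unset). This sketch recreates a minimal file with example values; the filename and IP are illustrative:

```shell
# Sanity check: verify the heredoc expanded NFS_SERVER_IP.
# Example values; substitute your own address and filename.
NFS_SERVER_IP="10.0.0.4"
cat >ephemeral_data_class_check.yaml <<EOF
  server: ${NFS_SERVER_IP}
EOF

if grep -q 'server: \${NFS_SERVER_IP}' ephemeral_data_class_check.yaml; then
  echo "unexpanded variable found; check your heredoc quoting"
else
  grep 'server:' ephemeral_data_class_check.yaml
fi
```

Run the same grep against your real ephemeral_data_class.yaml before applying it.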
Now you can create a persistent storage class. Start by creating the appropriate configuration in a YAML file.
# cat >persistent_data_class.yaml<< EOF
version: v1
name: persistent
description: Persistent data
driver: nfs.csi.k8s.io
properties:
  persistent: true
  retainOnDelete: true
parameters:
  server: ${NFS_SERVER_IP}
  share: /srv/fuzzball/persistent
capacity:
  size: 100
  unit: GiB
access:
  type: filesystem
  mode: multi_node_multi_writer
mount:
  options:
    - nfsvers=4
  user: user
  group: user
  permissions: 770
scope: user
volumes:
  nameArgs:
    - USERNAME
  nameFormat: "{{username}}"
  maxByAccount: 1
EOF
Apply this configuration using the same command as above.
# fuzzball admin storage class create -w persistent_data_class.yaml
# fuzzball admin storage class list
ID | NAME | STATUS | CREATED TIME | LAST UPDATED | PERSISTENT | RESTRICTED | CLUSTER
94e14bb0-2235-3373-a404-b1c78e0a411e | ephemeral | Ready | 2026-01-05 11:42:24PM | 2026-01-05 11:42:24PM | No | No | unset-cluster
726b008d-b298-33a9-be0c-efe89ddf95a3 | persistent | Ready | 2026-01-05 11:43:49PM | 2026-01-05 11:46:34PM | Yes | No | unset-cluster
At this point, you have successfully configured ephemeral and persistent storage classes and you can move on to configuring Keycloak.