Fuzzball Documentation

Storage Configuration

Configuring Fuzzball to use storage is a two-stage process. First, you need to install a Container Storage Interface (CSI) driver that allows Fuzzball to understand what type of storage it is expected to access. Second, you need to add storage classes that allow Fuzzball to understand how it should allocate storage volumes on behalf of users.

The procedure for both of these stages is to create (or obtain) an appropriate YAML file and then apply the file using the Fuzzball CLI. The specific steps vary depending on your deployment type.

This is a basic setup suitable for getting up and running quickly and for testing purposes. For more detailed information, see the storage configuration section of the guide.
The sections below give storage configuration instructions for each deployment type.

NFS Storage Configuration

For on-prem deployments, we will create two new NFS shares on the Fuzzball server node and use them to back the ephemeral and persistent storage classes.

Prerequisite: Create Two New NFS Shares

First, create two new directories that will be served as NFS shares. On the server node (or the node serving your NFS shares), execute the following commands:

# PRIVATE_SUBNET="" # populate this with the proper value for your environment (e.g. 10.0.0.0/20)
# mkdir -pv /srv/fuzzball/{ephemeral,persistent}

# for i in ephemeral persistent; \
  do echo "/srv/fuzzball/${i} ${PRIVATE_SUBNET}(rw,sync,no_subtree_check,no_root_squash)"; \
  done >>/etc/exports

# exportfs -a

# exportfs
It is not necessary to mount the newly created NFS shares on any compute nodes. Fuzzball will automatically handle this through the NFS CSI driver that we will create in the next step.
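The export-entry loop above can be dry-run without root privileges by writing to a temporary file instead of /etc/exports. The subnet below is an example value:

```shell
# Dry-run sketch of the exports loop: write the entries to a temp file
# instead of /etc/exports so no root privileges are needed.
PRIVATE_SUBNET="10.0.0.0/20"  # example value; use your environment's subnet
EXPORTS_FILE="$(mktemp)"
for i in ephemeral persistent; do
  echo "/srv/fuzzball/${i} ${PRIVATE_SUBNET}(rw,sync,no_subtree_check,no_root_squash)"
done >> "${EXPORTS_FILE}"
cat "${EXPORTS_FILE}"
```

Inspecting the temp file lets you confirm the entries are formatted correctly before appending them to the real /etc/exports.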

Install the NFS CSI Driver

Now you can tell Fuzzball that you will use NFS to back cluster storage by installing an appropriate driver. Execute the following to create an appropriate YAML file.

Please note that the single quotes around 'EOF' are necessary. The YAML file is intended to contain the literal strings ${CSI_NODE_ID} and ${CSI_ENDPOINT} rather than the current (likely nonexistent) values of those variables. In a heredoc like the one below, surrounding the EOF marker with single quotes prevents variable expansion.

# cat >nfs_driver_definition.yaml<< 'EOF'
version: v1
name: nfs.csi.k8s.io
description: NFS CSI Driver
image:
  uri: docker://registry.k8s.io/sig-storage/nfsplugin:v4.2.0
args:
  - --nodeid=${CSI_NODE_ID}
  - --endpoint=${CSI_ENDPOINT}
  - -v=10
EOF
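The quoting behavior described above is easy to demonstrate: the same heredoc body produces different results depending on whether the delimiter is quoted. The file names quoted.txt and unquoted.txt are throwaway names for this demo:

```shell
# Quoted delimiter: ${CSI_NODE_ID} stays literal in the output file.
# Unquoted delimiter: the shell expands it to the variable's current value.
CSI_NODE_ID="node-1"  # demo value only

cat > quoted.txt <<'EOF'
--nodeid=${CSI_NODE_ID}
EOF

cat > unquoted.txt <<EOF
--nodeid=${CSI_NODE_ID}
EOF

cat quoted.txt    # --nodeid=${CSI_NODE_ID}
cat unquoted.txt  # --nodeid=node-1
```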

You can install the NFS CSI driver like so:

# fuzzball admin storage driver install nfs_driver_definition.yaml
Driver "432b9f9e-efed-34b5-bd83-4db6549955c9" installed

# fuzzball admin storage driver list
ID                                   | NAME           | DESCRIPTION    | CREATED TIME          | LAST UPDATED          | CLUSTER
432b9f9e-efed-34b5-bd83-4db6549955c9 | nfs.csi.k8s.io | NFS CSI Driver | 2026-01-05 11:40:45PM | 2026-01-05 11:40:45PM | unset-cluster
Storage drivers are applied at the cluster level. Once you install a storage driver, you may use it to create storage classes within any organization on the cluster.

Create Storage Classes

Now you can create the YAML files for your ephemeral and persistent storage classes and apply them.

Many of the CIQ-created templates in the Workflow Catalog assume there will be a storage class called ephemeral and a storage class called persistent, so it is a good idea to keep those names unless you have good reason to change them.

You can start with ephemeral storage. Execute the following to create an appropriate configuration file.

# NFS_SERVER_IP="" # fill in the value of your NFS server IP address (e.g. 10.0.0.4)
# cat >ephemeral_data_class.yaml<< EOF
version: v1
name: ephemeral
description: Ephemeral Scratch Volumes
driver: nfs.csi.k8s.io
properties:
  persistent: false
  retainOnDelete: false
parameters:
  server: ${NFS_SERVER_IP}
  share: /srv/fuzzball/ephemeral
capacity:
  size: 100
  unit: GiB
access:
  type: filesystem
  mode: multi_node_multi_writer
mount:
  options:
    - nfsvers=4
  user: user
  group: user
  permissions: 770
scope: user
volumes:
  nameArgs:
    - WORKFLOW_ID
  nameFormat: "{{workflow_id}}"
EOF

Now you can apply this configuration and create an ephemeral storage class backed by NFS with the following command. The -w flag causes the command to wait for the operation to succeed before exiting.

# fuzzball admin storage class create -w ephemeral_data_class.yaml

# fuzzball admin storage class list
ID                                   | NAME      | STATUS | CREATED TIME          | LAST UPDATED          | PERSISTENT | RESTRICTED | CLUSTER
94e14bb0-2235-3373-a404-b1c78e0a411e | ephemeral | Ready  | 2026-01-05 11:42:24PM | 2026-01-05 11:42:24PM | No         | No         | unset-cluster

The second command above shows you that your new ephemeral storage class has been created and is ready for use.
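To illustrate how a workflow might consume this class: Fuzzball workflow files reference storage volumes by storage class name. The fragment below is a hypothetical sketch, not guaranteed syntax for your Fuzzball release; the `volume://user/ephemeral` reference form, the `data` volume name, and the job layout are all assumptions, so check the workflow reference documentation for your version.

```yaml
# Hypothetical workflow fragment: request a volume from the "ephemeral"
# storage class and mount it into a job. Syntax is an assumption; verify
# against your version's workflow reference.
volumes:
  data:
    reference: "volume://user/ephemeral"
jobs:
  hello:
    image:
      uri: docker://docker.io/library/alpine:latest
    mounts:
      data:
        location: /data
    command: ["/bin/sh", "-c", "echo hello > /data/out.txt"]
```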

Now you can create a persistent storage class. Start by creating the appropriate configuration in a YAML file.

# cat >persistent_data_class.yaml<< EOF
version: v1
name: persistent
description: Persistent data
driver: nfs.csi.k8s.io
properties:
  persistent: true
  retainOnDelete: true
parameters:
  server: ${NFS_SERVER_IP}
  share: /srv/fuzzball/persistent
capacity:
  size: 100
  unit: GiB
access:
  type: filesystem
  mode: multi_node_multi_writer
mount:
  options:
    - nfsvers=4
  user: user
  group: user
  permissions: 770
scope: user
volumes:
  nameArgs:
    - USERNAME
  nameFormat: "{{username}}"
  maxByAccount: 1
EOF

Apply this configuration using the same command as above.

# fuzzball admin storage class create -w persistent_data_class.yaml

# fuzzball admin storage class list
ID                                   | NAME       | STATUS | CREATED TIME          | LAST UPDATED          | PERSISTENT | RESTRICTED | CLUSTER
94e14bb0-2235-3373-a404-b1c78e0a411e | ephemeral  | Ready  | 2026-01-05 11:42:24PM | 2026-01-05 11:42:24PM | No         | No         | unset-cluster
726b008d-b298-33a9-be0c-efe89ddf95a3 | persistent | Ready  | 2026-01-05 11:43:49PM | 2026-01-05 11:46:34PM | Yes        | No         | unset-cluster

AWS Storage Configuration

The AWS deployment method includes a shortcut allowing you to quickly and easily configure EFS-backed ephemeral and persistent storage classes. After deployment, you can run the following command to apply the appropriate YAML files for an EFS driver and two new storage classes:

$ fuzzball admin storage setup

The command runs asynchronously and the actual process may take several minutes to complete. Once finished, you can check the success of storage class creation like so:

$ fuzzball admin storage driver list
ID                                   | NAME            | DESCRIPTION        | CREATED TIME          | LAST UPDATED          | CLUSTER
0b8d8c99-fa77-34f4-b87b-8df0f4a00e59 | efs.csi.aws.com | Default CSI Driver | 2026-03-12 03:42:28PM | 2026-03-12 03:42:28PM | fuzzball-aws-aws-pulumi-runner

$ fuzzball admin storage class list
ID                                   | NAME       | STATUS | CREATED TIME          | LAST UPDATED          | PERSISTENT | RESTRICTED | CLUSTER
532b9716-b87d-3dbb-9999-3901ebef2054 | ephemeral  | Ready  | 2026-03-12 03:42:29PM | 2026-03-12 03:42:29PM | No         | No         | fuzzball-aws-aws-pulumi-runner
ee8267d2-ff29-38e4-8868-cfd8506f8fdf | persistent | Ready  | 2026-03-12 03:42:28PM | 2026-03-12 03:42:28PM | Yes        | No         | fuzzball-aws-aws-pulumi-runner

The setup command produces and applies the following CSI driver:

args:
    - --endpoint=${CSI_ENDPOINT}
    - -v=10
    - --logtostderr
    - --delete-access-point-root-dir=false
description: Default CSI Driver
files:
    - content: |-
        [default]
        region = us-east-1
        aws_access_key_id = ${secret.AWS_ACCESS_KEY_ID}
        aws_secret_access_key = ${secret.AWS_SECRET_ACCESS_KEY}
      path: /root/.aws/credentials
      secret: secret://cluster/CSI_STORAGE_SECRET
image:
    uri: docker://amazon/aws-efs-csi-driver:v1.7.6
name: efs.csi.aws.com
version: v1

It also produces the following YAML files to create the ephemeral and persistent storage classes. First, the ephemeral class:

access:
    mode: MULTI_NODE_MULTI_WRITER
    type: FILESYSTEM
capacity:
    size: "100"
    unit: GIB
description: ephemeral Volumes
driver: efs.csi.aws.com
mount:
    group: ACCOUNT
    options:
        - iam
        - tls
    permissions: "770"
    user: ROOT
name: ephemeral
parameters:
    basePath: /ephemeral
    csi.storage.k8s.io/pvc/name: '{{volume_id}}'
    directoryPerms: "700"
    fileSystemId: <efs-filesystem-id>
    gid: '{{gid}}'
    provisioningMode: efs-ap
    uid: '{{uid}}'
properties: {}
scope: ALL
version: v1
volumes:
    maxByAccount: 1
    nameArgs:
        - WORKFLOW_ID
    nameFormat: '{{workflow_id}}'

The persistent class:

access:
    mode: MULTI_NODE_MULTI_WRITER
    type: FILESYSTEM
capacity:
    size: "100"
    unit: GIB
description: persistent Volumes
driver: efs.csi.aws.com
mount:
    group: ACCOUNT
    options:
        - iam
        - tls
    permissions: "770"
    user: ROOT
name: persistent
parameters:
    basePath: /persistent
    csi.storage.k8s.io/pvc/name: '{{volume_id}}'
    directoryPerms: "700"
    fileSystemId: <efs-filesystem-id>
    gid: '{{gid}}'
    provisioningMode: efs-ap
    reuseAccessPoint: "true"
    uid: '{{uid}}'
properties:
    persistent: true
    retainOnDelete: true
scope: ALL
version: v1
volumes:
    maxByAccount: 1
    nameArgs:
        - ACCOUNT_ID
    nameFormat: '{{account_id}}'

Save the driver manifest above as efs-driver.yaml, the ephemeral storage class manifest as ephemeral.yaml, and the persistent storage class manifest as persistent.yaml.

If you prefer to create and apply these files manually, you can do so with the following commands:

You will need to replace the <efs-filesystem-id> placeholders in the storage class files above with actual values if you set them up manually.
$ fuzzball admin storage driver install efs-driver.yaml

$ fuzzball admin storage class create ephemeral.yaml

$ fuzzball admin storage class create persistent.yaml
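One way to fill in the `<efs-filesystem-id>` placeholder before running the create commands is a simple sed substitution. The filesystem ID below is a made-up example value:

```shell
# Replace the <efs-filesystem-id> placeholder in both class files.
EFS_ID="fs-0123456789abcdef0"  # example only; use your EFS filesystem's ID
for f in ephemeral.yaml persistent.yaml; do
  if [ -f "$f" ]; then
    sed -i "s|<efs-filesystem-id>|${EFS_ID}|g" "$f"
  fi
done
```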

GCP Storage Configuration

The GCP deployment method includes a shortcut allowing you to quickly and easily configure ephemeral and persistent storage classes. After deployment, you can run the following command as a cluster admin to apply the appropriate configuration:

$ fuzzball storage setup

The command runs asynchronously and the actual process may take several minutes to complete. Once finished, you can check the success of storage class creation like so:

$ fuzzball storage driver list
ID                                   | NAME           | DESCRIPTION        | CREATED TIME          | LAST UPDATED          | CLUSTER
96864eee-c785-3de2-959a-a21ee5879f11 | nfs.csi.k8s.io | Default CSI Driver | 2026-04-08 09:49:22AM | 2026-04-08 09:49:22AM | DEPLOYMENT_NAME

$ fuzzball storage class list
ID                                   | NAME       | STATUS | CREATED TIME          | LAST UPDATED          | PERSISTENT | RESTRICTED | CLUSTER
7cf2432f-e612-3fae-a529-dd53bdee5dfa | persistent | Ready  | 2026-04-08 10:15:53AM | 2026-04-08 10:15:53AM | Yes        | No         | DEPLOYMENT_NAME
fc67d787-3677-3d04-9b2d-57dc7b81494f | ephemeral  | Ready  | 2026-04-08 10:15:53AM | 2026-04-08 10:15:53AM | No         | No         | DEPLOYMENT_NAME

The setup command produces and applies the following CSI driver:

$ fuzzball storage driver export 96864eee-c785-3de2-959a-a21ee5879f11
args:
    - --endpoint=${CSI_ENDPOINT}
    - --v=5
    - --nodeid=substrate
description: Default CSI Driver
image:
    uri: docker://registry.k8s.io/sig-storage/nfsplugin:v4.9.0
name: nfs.csi.k8s.io
version: v1

It also produces the following YAML files to create the ephemeral and persistent storage classes.

$ fuzzball storage class export 7cf2432f-e612-3fae-a529-dd53bdee5dfa
access:
    mode: MULTI_NODE_MULTI_WRITER
    type: FILESYSTEM
capacity:
    size: "100"
    unit: GIB
description: persistent Volumes
driver: nfs.csi.k8s.io
mount:
    group: ACCOUNT
    options:
        - nfsvers=3
    permissions: "770"
    user: ROOT
name: persistent
parameters:
    server: 10.171.0.2
    share: /workflowio
    subDir: persistent
properties:
    persistent: true
    retainOnDelete: true
scope: ALL
version: v1
volumes:
    nameArgs:
        - ACCOUNT_ID
    nameFormat: '{{account_id}}'

$ fuzzball storage class export fc67d787-3677-3d04-9b2d-57dc7b81494f
access:
    mode: MULTI_NODE_MULTI_WRITER
    type: FILESYSTEM
capacity:
    size: "100"
    unit: GIB
description: ephemeral Volumes
driver: nfs.csi.k8s.io
mount:
    group: ACCOUNT
    options:
        - nfsvers=3
    permissions: "770"
    user: ROOT
name: ephemeral
parameters:
    server: 10.171.0.2
    share: /workflowio
    subDir: ephemeral
properties: {}
scope: ALL
version: v1
volumes:
    nameArgs:
        - WORKFLOW_ID
    nameFormat: '{{workflow_id}}'

Note that the UUIDs used in the export commands above are from the example output of the list commands and may be different in your deployment.

Support for CoreWeave within Fuzzball is in preview status and is currently subject to more rapid change to address customer requirements than other features of Fuzzball. If you are interested in using Fuzzball on CoreWeave, we recommend contacting CIQ as part of your deployment planning process.

CoreWeave Shared-Vast Storage

For Fuzzball on CoreWeave, Kubernetes Persistent Volume Claims (PVCs) are created using CoreWeave’s shared-vast storage class. To expose this storage to workflows, Fuzzball uses a HostPath CSI driver that mounts the PVCs on substrate nodes and makes them available as workflow storage volumes.

Fuzzball uses two types of PVCs on CoreWeave:

  1. Workflow Data PVC (fuzzball-shared-storage) - Mounted at /mnt/shared-storage on substrate nodes and exposed to workflows via the HostPath CSI driver. Workflows access this storage through the fuzzball-shared-vast storage class created below.
  2. Image Cache PVC (fuzzball-sharedfs) - Mounted at /mnt/fuzzball-sharedfs on substrate nodes for internal container image caching. This is not exposed to workflows.
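A quick way to confirm that both PVC-backed mounts are present on a substrate node is to check each expected mountpoint. This sketch uses `findmnt` from util-linux and assumes the default mount paths listed above:

```shell
# Report whether each expected PVC mountpoint is actually mounted.
status=""
for m in /mnt/shared-storage /mnt/fuzzball-sharedfs; do
  if findmnt --mountpoint "$m" >/dev/null 2>&1; then
    status="${status}${m}: mounted\n"
  else
    status="${status}${m}: NOT mounted\n"
  fi
done
printf '%b' "$status"
```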

The following steps install the HostPath CSI driver and create the storage class for workflow data access.

Install the shared-vast Storage Driver

Create a storage driver configuration file for CoreWeave’s shared-vast storage:

# coreweave-vast-shared-driver.yaml
version: v1
name: csi.hostpath.shared-vast
description: HostPath CSI Driver backed by CoreWeave shared-vast PVC
image:
  uri: docker://ghcr.io/ctrliq/hostpathplugin:v1.1.24
cmd: /hostpathplugin
args:
  - --drivername=csi.hostpath.shared-vast
  - --endpoint=${CSI_ENDPOINT}
  - --nodeid=${CSI_NODE_ID}
  - --v=5
mounts:
  - source: /mnt/shared-storage/volumes
    destination: /csi-data-dir
    options:
      - rbind
      - rslave

Install the driver:

$ fuzzball admin storage driver install coreweave-vast-shared-driver.yaml

Create the shared-vast Storage Class

Create a storage class configuration:

# coreweave-shared-vast-class.yaml
version: v1
name: fuzzball-shared-vast
description: Fuzzball storage backed by CoreWeave shared-vast
driver: csi.hostpath.shared-vast
mount:
  user: root
  group: account
  permissions: 770
capacity:
  size: 100
  unit: GiB
access:
  type: filesystem
  mode: multi_node_multi_writer
scope: all
volumes:
  nameFormat: "{{custom_name}}"
  nameArgs:
    - CUSTOM_NAME
  maxByAccount: 0
properties:
  persistent: true
  retainOnDelete: true
parameters:
  storageType: "Directory"

Create the storage class:

$ fuzzball admin storage class create coreweave-shared-vast-class.yaml

Verify the storage class is ready:

$ fuzzball admin storage class list
CoreWeave’s shared-vast storage requires the Native Protocol Limit view policy. Ensure this is configured before deployment.
For additional CoreWeave storage options and LOTA object storage configuration, see the CoreWeave Configuration Guide.

Next Steps

At this point, you have successfully configured storage for your Fuzzball cluster. You can move on to configuring some initial entities.