Fuzzball Documentation

Basic Provisioner Configuration

You can set a provisioner configuration to add one or more compute nodes running Substrate to your cluster. The configuration process varies depending on your deployment type.

Select your deployment type to see provisioner configuration instructions.

Static Provisioner

The most basic provisioner configuration simply matches nodes based on hostnames. To add a single node, obtain the hostname of the node that you want to add and run the following commands on your server node:

# COMPUTE_HOSTNAME="" # populate with the hostname for your compute node.
# cat >provisioner.yaml<<EOF
definitions:
  - id: compute1
    provisioner: static
    provisionerSpec:
      condition: |-
        hostname() matches "${COMPUTE_HOSTNAME}"
EOF

# fuzzball admin config set provisioner.yaml

The hostname() matches condition supports pattern matching, so a single definition can match more than one node when the pattern matches multiple hostnames.
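For example, a single definition could cover a set of numbered compute nodes. The hostnames and definition id below are illustrative placeholders, assuming your nodes follow a compute<N> naming scheme:

```yaml
# provisioner.yaml -- hypothetical example matching compute1, compute2, ...
definitions:
  - id: compute-pool
    provisioner: static
    provisionerSpec:
      condition: |-
        hostname() matches "compute[0-9]+"
```

After setting this configuration, restart Substrate on each matching node as shown below.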

Once you’ve run the set command, you can restart Substrate on the compute node like so:

# systemctl restart fuzzball-substrate.service
This is a bare-bones configuration suitable for testing. For complete instructions, see the Provisioner Configuration Guide.
Support for CoreWeave within Fuzzball is in preview and may change more rapidly than other Fuzzball features as customer requirements evolve. If you are interested in using Fuzzball on CoreWeave, we recommend contacting CIQ as part of your deployment planning process.

CoreWeave Provisioning

CoreWeave supports dynamic and static provisioning:

  1. Dynamic Provisioning: Fuzzball creates and destroys nodes on-demand based on workflow requirements.
  2. Static Provisioning: Use pre-existing CoreWeave NodePool resources after installing the coreweave-substrate-static Helm chart (see Static Provisioning Guide).

Dynamic Provisioning Configuration

Dynamic provisioning is the default approach where Fuzzball automatically manages the CoreWeave NodePool lifecycle.

Define CoreWeave Instance Types

Create provisioner definitions specifying which CoreWeave instance types you want to make available to workflows:

# coreweave-provisioner-definitions.yaml
definitions:
  - id: cpu-gp-genoa
    provisioner: coreweave
    provisionerSpec:
      instanceType: "cd-gp-a192-genoa"
      costPerHour: 7.78
  - id: gpu-h100
    provisioner: coreweave
    provisionerSpec:
      instanceType: "gd-8xh100ib-i128"
      costPerHour: 49.24

Apply the configuration:

$ fuzzball admin config set coreweave-provisioner-definitions.yaml

Initialize Provisioner Definitions

Before workflows can use a CoreWeave instance type, you must create at least one instance of that type to initialize and discover its available resources:

$ fuzzball admin provisioner instance create cpu-gp-genoa
$ fuzzball admin provisioner instance create gpu-h100

Monitor node provisioning:

$ fuzzball admin scheduler node list

Static Provisioning Configuration

For static provisioning with pre-existing NodePool resources, you define provisioner definitions with provisioner: static and deploy Substrate using the coreweave-substrate-static Helm chart.

Define Static Provisioner Definitions

Create provisioner definitions for your existing CoreWeave NodePools:

# static-provisioner-definitions.yaml
definitions:
  - id: coreweave-cpu-small
    provisioner: static
    provisionerSpec:
      condition: "true"
      costPerHour: 7.78
  - id: coreweave-gpu-large
    provisioner: static
    provisionerSpec:
      condition: "true"
      costPerHour: 49.24

The condition and nodeSelector fields work together for CoreWeave static provisioning:

  • nodeSelector (in the Helm values below): controls which Kubernetes nodes receive Substrate pods via the DaemonSet
  • condition (in the provisioner definition): controls which Substrate pods register with this definition

Setting condition: "true" on a definition allows all Substrate pods deployed by the DaemonSet to register with that definition. You can use more specific conditions (e.g., hostname() matches "gpu-.*") to further filter which Substrate pods use this definition beyond the DaemonSet’s node selection.
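As a sketch, a definition restricted to Substrate pods running on GPU-named hosts might look like the following. The hostname pattern is illustrative and assumes the nodes in your GPU NodePool have hostnames beginning with "gpu-":

```yaml
# static-provisioner-definitions.yaml (excerpt) -- hypothetical GPU-only filter
definitions:
  - id: coreweave-gpu-large
    provisioner: static
    provisionerSpec:
      condition: |-
        hostname() matches "gpu-.*"
      costPerHour: 49.24
```

With this condition, Substrate pods on hosts that do not match the pattern will not register under this definition, even if the DaemonSet's nodeSelector schedules pods onto them.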

Apply the definitions:

$ fuzzball admin config set static-provisioner-definitions.yaml

Create Values File

Create a values file mapping your targets to node pools:

# static-provisioner-values.yaml
namespace: fuzzball

# Must match your Fuzzball version
imageTag: "v3.2.0"

# Image pull secret name
imagePullSecretName: "registry-image-credentials"

# Automatically patch CoreWeave NodePools with substrate taint
autopatchNodePools: true

# Define targets - one DaemonSet per target
targets:
  - name: fuzzball-cpu
    definitionId: "coreweave-cpu-small"
    nodeSelector:
      compute.coreweave.com/node-pool: "fuzzball-cpu-node-pool"

  - name: fuzzball-gpu
    definitionId: "coreweave-gpu-large"
    nodeSelector:
      compute.coreweave.com/node-pool: "fuzzball-gpu-node-pool"
    tolerations:
      - key: nvidia.com/gpu
        operator: Exists
        effect: NoSchedule

Install Static Provisioner Helm Chart

Install the coreweave-substrate-static Helm chart to deploy Substrate as DaemonSets on your NodePools:

$ helm upgrade --install fuzzball-substrate-static \
  oci://depot.ciq.com/fuzzball/fuzzball-images/helm/coreweave-substrate-static \
  --namespace fuzzball \
  --version v3.2.0 \
  --values static-provisioner-values.yaml

Verification

You can verify that nodes have been properly added to your cluster by running the node list subcommand:

# fuzzball node list
NODE ID        | HOSTNAME | CPU TYPE    | TOTAL CORES | TOTAL MEMORY (GB) | TOTAL DEVICES | RUNNING JOBS
10.1.96.4/7331 | compute1 | cpu/x86/avx | 2           | 8                 | 0             | 0

Now that your Fuzzball cluster can provision compute nodes, you are ready to configure storage.