Fuzzball Documentation

CoreWeave Static Node Pool Provisioning

Support for CoreWeave within Fuzzball is in preview status and is currently subject to more rapid change than other Fuzzball features as customer requirements evolve. If you are interested in using Fuzzball on CoreWeave, we recommend contacting CIQ as part of your deployment planning process.

Static provisioning allows you to deploy Fuzzball Substrate on pre-existing CoreWeave NodePool resources, rather than using dynamic provisioning where Fuzzball creates nodes on-demand. This approach is useful when you want more control over node lifecycle, need predictable capacity, or want to optimize costs by managing node pools directly.

Static provisioning is an alternative to the dynamic provisioning approach described in the main Deployment guide. You can use either approach, or a combination of both.

Prerequisites

Before deploying the CoreWeave static provisioner, ensure you have completed:

  1. Requirements - All prerequisites for deploying Fuzzball on CoreWeave
  2. Deployment - Operator and FuzzballOrchestrate deployment
  3. Initial Configuration - Provisioner and storage configuration
  4. Pre-existing CoreWeave NodePool resources in your cluster

Architecture Overview

Static provisioning deploys Fuzzball substrate as DaemonSets on existing CoreWeave nodes:

  • One DaemonSet per target: Each target corresponds to a CoreWeave NodePool or node, depending on your node selector
  • Automatic substrate deployment: Substrate pods run on all nodes matching the node selector
  • Shared storage: Uses the same CoreWeave shared-vast storage configured during the Fuzzball deployment
  • Definition mapping: Each target maps to a Fuzzball provisioner definition

Step 1: Create CoreWeave NodePool

Create a CoreWeave NodePool resource for your compute nodes. For example, to create an H100 GPU node pool:

# coreweave-nodepool-gpu.yaml
apiVersion: compute.coreweave.com/v1alpha1
kind: NodePool
metadata:
  name: fuzzball-h100-node-pool
spec:
  computeClass: default
  autoscaling: false
  instanceType: gd-8xh100ib-i128
  targetNodes: 2

Apply the node pool configuration:

$ kubectl apply -f coreweave-nodepool-gpu.yaml

Repeat this process for each node pool type you want to provision.
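For example, a CPU node pool can be created the same way. The manifest below is a sketch that assumes the cd-gp-a192-genoa instance type used elsewhere in this guide; the pool name is illustrative:

```yaml
# coreweave-nodepool-cpu.yaml
apiVersion: compute.coreweave.com/v1alpha1
kind: NodePool
metadata:
  name: fuzzball-cpu-node-pool
spec:
  computeClass: default
  autoscaling: false       # static provisioning manages capacity via targetNodes
  instanceType: cd-gp-a192-genoa
  targetNodes: 2
```

Apply it with kubectl apply -f coreweave-nodepool-cpu.yaml as above.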

Step 2: Create Provisioner Definitions

Define the provisioner definitions that map to your node pools:

# static-provisioner-definitions.yaml
definitions:
  - id: coreweave-gpu-large
    provisioner: static
    provisionerSpec:
      condition: "true"
      costPerHour: 49.24

The condition and nodeSelector fields work together for CoreWeave static provisioning:

  • nodeSelector (in Helm values, Step 3): Controls which Kubernetes nodes receive substrate pods via DaemonSet
  • condition (in provisioner definition): Controls which substrate pods register with this definition

Using condition: "true" allows all substrate pods deployed by the DaemonSet to register with this definition. You can use more specific conditions (e.g., hostname() matches "gpu-.*") to further filter which substrate pods use this definition beyond the DaemonSet’s node selection.
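As a sketch, a definition using such a hostname condition might look like the following (the id and cost values here are illustrative):

```yaml
definitions:
  - id: coreweave-gpu-filtered        # illustrative id
    provisioner: static
    provisionerSpec:
      condition: 'hostname() matches "gpu-.*"'   # only substrate pods with matching hostnames register
      costPerHour: 49.24
```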

Apply the definitions:

$ fuzzball admin config set static-provisioner-definitions.yaml

For more general information about configuring provisioners for Fuzzball, see the Provisioner Administration section.

Step 3: Install Static Provisioner Helm Chart

The coreweave-substrate-static Helm chart deploys substrate as DaemonSets on your node pools.

Create Values File

Create a values file mapping your targets to NodePool resources:

# static-provisioner-values.yaml
namespace: fuzzball

# Must match your Fuzzball version
imageTag: "v3.2.0"

# Image pull secret name
imagePullSecretName: "registry-image-credentials"

# Automatically patches CoreWeave NodePools with substrate taint
autopatchNodePools: true

# Define targets - one DaemonSet per target
targets:
  - name: fuzzball-gpu-large
    definitionId: "coreweave-gpu-large"
    nodeSelector:
      compute.coreweave.com/node-pool: "fuzzball-h100-node-pool"

Install the Chart

Install the static provisioner chart:

$ helm upgrade --install fuzzball-substrate-static \
  oci://depot.ciq.com/fuzzball/fuzzball-images/helm/coreweave-substrate-static \
  --namespace fuzzball \
  --version v3.2.0 \
  --values static-provisioner-values.yaml

Step 4: Verify Static Provisioning Deployment

Check DaemonSets

Verify that DaemonSets were created for each target:

$ kubectl get daemonsets -n fuzzball -l app=fuzzball-substrate-static

Expected output:

NAME                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR
substrate-static-fuzzball-gpu-large   2         2         2       2            2           compute.coreweave.com/node-pool=fuzzball-h100-node-pool

Check Substrate Pods

Verify substrate pods are running on your nodes:

$ kubectl get pods -n fuzzball -l app=fuzzball-substrate-static -o wide

Check Fuzzball Scheduler Nodes

Check that Fuzzball recognizes the substrate nodes:

$ fuzzball admin scheduler node list

Configuration Reference

Values File Parameters

Parameter             Type      Required   Description
namespace             string    Yes        Namespace where Fuzzball is deployed (typically fuzzball)
imageTag              string    Yes        Substrate image version matching your Fuzzball deployment
imagePullSecretName   string    No         Image pull secret name. Default: registry-image-credentials
autopatchNodePools    boolean   No         Automatically adds the Substrate taint to CoreWeave NodePools. Default: true
targets               array     Yes        List of node pool targets for Substrate deployment

Target Configuration

Each target in the targets array supports:

Field          Type     Required   Description
name           string   Yes        Unique name for this target (used in the DaemonSet name)
definitionId   string   Yes        Fuzzball provisioner definition ID matching this target
nodeSelector   map      Yes        Kubernetes node selector to match nodes in this pool
annotations    map      No         Additional annotations for substrate pods
tolerations    array    No         Additional tolerations beyond the substrate taint
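A target using the optional annotations and tolerations fields might look like this sketch (the annotation key and toleration are illustrative assumptions, not values the chart requires):

```yaml
targets:
  - name: fuzzball-gpu-large
    definitionId: "coreweave-gpu-large"
    nodeSelector:
      compute.coreweave.com/node-pool: "fuzzball-h100-node-pool"
    annotations:
      example.com/team: "ml-platform"     # illustrative annotation
    tolerations:                          # applied in addition to the substrate taint toleration
      - key: "example.com/dedicated"      # illustrative taint key
        operator: "Equal"
        value: "fuzzball"
        effect: "NoSchedule"
```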

Node Selector Examples

CoreWeave node pool selector:

nodeSelector:
  compute.coreweave.com/node-pool: "your-pool-name"

Instance type selector:

nodeSelector:
  compute.coreweave.com/instance-type: "cd-gp-a192-genoa"
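Multiple entries in a Kubernetes nodeSelector are ANDed, so labels can be combined to narrow the match:

```yaml
nodeSelector:
  compute.coreweave.com/node-pool: "your-pool-name"
  compute.coreweave.com/instance-type: "cd-gp-a192-genoa"
```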

Combining Static and Dynamic Provisioning

You can use both static and dynamic provisioning in the same deployment by defining both types of provisioners:

definitions:
  # Static provisioner
  - id: coreweave-gpu-large
    provisioner: static
    provisionerSpec:
      condition: "true"
      costPerHour: 49.24

  # Dynamic provisioner
  - id: coreweave-cpu-dynamic
    provisioner: coreweave
    provisionerSpec:
      instanceType: "cd-gp-a192-genoa"
      costPerHour: 7.78

Workflows can then choose which provisioner to use based on their requirements.
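Note that only the static definition needs a corresponding target in the coreweave-substrate-static Helm values, since dynamically provisioned nodes are created on demand by the coreweave provisioner rather than by a DaemonSet. With the definitions above, the targets section from Step 3 would be unchanged:

```yaml
targets:
  - name: fuzzball-gpu-large
    definitionId: "coreweave-gpu-large"
    nodeSelector:
      compute.coreweave.com/node-pool: "fuzzball-h100-node-pool"
```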

Upgrading Static Substrate

To upgrade Substrate on static nodes:

  1. Update the imageTag in your values file
  2. Run helm upgrade with the new values
  3. The DaemonSet will perform a rolling update

$ helm upgrade fuzzball-substrate-static \
  oci://depot.ciq.com/fuzzball/fuzzball-images/helm/coreweave-substrate-static \
  --namespace fuzzball \
  --version v3.2.0 \
  --values static-provisioner-values.yaml

During substrate upgrades, running workflows on affected nodes will be terminated. Plan upgrades during maintenance windows or ensure workflows can handle interruptions.

Scaling Static NodePools

Scale CoreWeave NodePool resources by updating the targetNodes field:

$ kubectl patch nodepool fuzzball-cpu-node-pool -n tenant-your-tenant-id \
  --type merge -p '{"spec":{"targetNodes":5}}'

This scales the node pool to 5 nodes. The Substrate DaemonSet will automatically deploy or remove pods as nodes are added or removed.
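Alternatively, you can keep the NodePool manifest as the source of truth by editing targetNodes there and re-applying it. For example, scaling the GPU pool from Step 1:

```yaml
# coreweave-nodepool-gpu.yaml
apiVersion: compute.coreweave.com/v1alpha1
kind: NodePool
metadata:
  name: fuzzball-h100-node-pool
spec:
  computeClass: default
  autoscaling: false
  instanceType: gd-8xh100ib-i128
  targetNodes: 5   # scaled up from 2
```

Then run kubectl apply -f coreweave-nodepool-gpu.yaml.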