Provisioner Configuration Reference
This document is a reference for all configuration parameters available in the Fuzzball central configuration system. The central configuration is written in YAML and supports multiple provisioner types, each with its own set of parameters.
```yaml
# Global cluster settings
nodeAnnotations:
  # Map of global annotations applied to all nodes
  # For example:
  global.annotation: "cluster-wide-value"
  environment: "production"

softwareTokens:
  # Map of software license token limits
  # For example:
  matlab: 20
  ansys: 10

scheduler:
  queueDepth: 64
  # Recognized annotations which could be passed by workflow
  # jobs to the scheduler to match against provisioner definition
  # annotations.
  # By default the recognized annotations are:
  #   - nvidia.com/gpu.arch
  #   - nvidia.com/gpu.model
  recognizedAnnotations:
    - custom.annotation/one
    - custom.annotation/two

definitions:
  # Array of provisioner definitions
  # For example:
  - id: compute-nodes
    provisioner: static
    # and more provisioner-specific configuration ...
```
Global annotations applied to all cluster nodes.
| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
nodeAnnotations | map[string]string | No | Key-value pairs of annotations applied globally to all nodes | cluster.name: "production" |
Example:
```yaml
nodeAnnotations:
  cluster.name: "hpc-cluster-01"
  datacenter: "us-west-2"
  environment: "production"
  cost.center: "research"
```
Software license token limits for concurrent usage control.
Software tokens are currently on the roadmap but not yet implemented.
| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
softwareTokens | map[string]uint32 | No | Software name to maximum concurrent license count mapping | matlab: 25 |
Example:
```yaml
softwareTokens:
  matlab: 25
  ansys: 15
  comsol: 8
  abaqus: 10
```
Scheduler settings control the request queue depth and which job annotations the scheduler matches against provisioner definition annotations.

```yaml
scheduler:
  # Maximum number of requests in queue processed by scheduling iteration
  queueDepth: 64
  # Recognized annotations which could be passed by workflow
  # jobs to the scheduler to match against provisioner definition
  # annotations.
  # By default the recognized annotations are:
  #   - nvidia.com/gpu.arch
  #   - nvidia.com/gpu.model
  recognizedAnnotations:
    - custom.annotation/one
    - custom.annotation/two
```
| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
queueDepth | uint32 | No | Maximum number of queued requests processed per scheduling iteration (default: 64) | 128
recognizedAnnotations | []string | No | Annotations that jobs may pass to the scheduler to match against provisioner definition annotations | ["custom.annotation/one"]
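As an illustration, the sketch below pairs scheduler-level recognizedAnnotations with definition-level annotations so that jobs passing a recognized annotation can be matched to a specific definition. The custom key example.com/node.pool and the values "a100" and "ml-training" are illustrative, not required names; the default GPU annotations are repeated explicitly for clarity.

```yaml
scheduler:
  # Include the default GPU annotations plus an illustrative custom key
  recognizedAnnotations:
    - nvidia.com/gpu.arch
    - nvidia.com/gpu.model
    - example.com/node.pool   # illustrative custom annotation

definitions:
  - id: a100-nodes
    provisioner: static
    # Jobs passing recognized annotations with matching values
    # can be matched against this definition
    annotations:
      nvidia.com/gpu.model: "a100"          # illustrative value
      example.com/node.pool: "ml-training"  # illustrative value
    provisionerSpec:
      condition: hostname() matches "gpu-[0-9]+"
```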
Each definition in the definitions array represents a compute resource provisioner.
These parameters are available for all provisioner types:
| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
id | string | Yes | Unique identifier for the provisioner definition | "compute-nodes" |
annotations | map[string]string | No | Key-value pairs of annotations specific to this definition | node.type: "compute" |
provisioner | string | Yes | Provisioner backend type: static, aws, slurm, pbs | "static" |
policy | string | No | Expression-based policy controlling access to this definition | request.owner.organization_id == "research" |
ttl | uint32 | No | Defines node lifetime in seconds (ignored for static provisioner) | 86400 |
exclusive | string | No | Node exclusive level: empty or none (default, shared), job (exclusive to one job), or workflow (exclusive to one workflow) | "job" |
provisionerSpec | object | Yes | Provisioner-specific configuration (see sections below) | - |
The exclusive parameter controls how nodes provisioned by this definition are shared among jobs:
- (empty) or none: Nodes are shared and can run multiple jobs simultaneously. Multiple jobs from the same or different workflows can be scheduled on the same node based on available resources. This is the default when the parameter is not specified.
- job: Nodes are exclusive to a single job allocation. Once a job is assigned to the node, no other jobs can use it until the job completes and the node is cleaned up. This ensures complete isolation at the job level.
- workflow: Nodes are exclusive to a single workflow. All jobs within the same workflow can share the node, but jobs from other workflows cannot use it. This is useful for workflows that need dedicated resources but want to share nodes across their jobs.
Example:
```yaml
definitions:
  # Shared nodes for general workloads
  - id: shared-compute
    provisioner: static
    exclusive: none
    provisionerSpec:
      condition: hostname() matches "shared-[0-9]+"

  # Job-exclusive nodes for sensitive workloads
  - id: exclusive-compute
    provisioner: pbs
    exclusive: job
    ttl: 3600
    provisionerSpec:
      cpu: 8
      memory: "32GiB"
      queue: "workq"
```
Static provisioners manage physical or pre-allocated compute resources.
| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
condition | string | Yes | Expression-based condition for node matching | hostname() matches "compute-[0-9]+" |
costPerHour | float64 | No | Cost per hour for resource usage (must be ≥ 0) | 0.25 |
The condition field supports these built-in variables and functions:
uname system information:

| Variable | Type | Description | Example Value |
|---|---|---|---|
uname.sysname | string | Operating system name | "Linux" |
uname.nodename | string | Network node hostname | "compute-001" |
uname.release | string | Operating system release | "5.4.0-74-generic" |
uname.version | string | Operating system version | "#83-Ubuntu SMP" |
uname.machine | string | Hardware machine type | "x86_64", "aarch64" |
uname.domainname | string | Network domain name | "cluster.local" |
OS release information:

| Variable | Type | Description | Example Value |
|---|---|---|---|
osrelease.name | string | OS name | "Ubuntu" |
osrelease.id | string | OS identifier | "ubuntu" |
osrelease.id_like | string | Similar OS identifiers | "debian" |
osrelease.version | string | OS version string | "20.04.3 LTS (Focal Fossa)" |
osrelease.version_id | string | OS version identifier | "20.04" |
osrelease.version_codename | string | OS version codename | "focal" |
CPU information:

| Variable | Type | Description | Example Value |
|---|---|---|---|
cpuinfo.vendor_id | string | CPU vendor | "GenuineIntel", "AuthenticAMD" |
cpuinfo.cpu_family | uint | CPU family number | 6 |
cpuinfo.model | uint | CPU model number | 158 |
cpuinfo.model_name | string | CPU model name string | "Intel(R) Xeon(R) CPU E5-2680 v4" |
cpuinfo.microcode | uint | Microcode version | 240 |
cpuinfo.cpu_cores | uint | Number of physical CPU cores | 16 |
Built-in functions:

| Function | Return Type | Description | Example |
|---|---|---|---|
hostname() | string | Returns current hostname | "compute-001" |
modalias.match(pattern) | bool | Matches hardware modalias patterns | modalias.match("pci:v000010DEd*") |
Example modalias patterns:

```
# NVIDIA GPU (any model)
modalias.match("pci:v000010DEd*sv*sd*bc03sc*i*")

# Specific NVIDIA GPU models
modalias.match("pci:v000010DEd00001B06sv*sd*bc03sc*i*") # GTX 1080 Ti
modalias.match("pci:v000010DEd00001E07sv*sd*bc03sc*i*") # RTX 2080 Ti

# Intel Ethernet controllers
modalias.match("pci:v00008086d*sv*sd*bc02sc00i*")

# Mellanox InfiniBand adapters
modalias.match("pci:v000015B3d*sv*sd*bc0Csc06i*")
```
You can also list the modalias of every PCI device on a node, which helps identify the pattern for a specific device, with the following one-liner:
```
$ IFS=$'\n'; for d in $(lspci); do modalias=$(cat /sys/bus/pci/devices/0000\:${d%% *}/modalias); echo "$modalias -> ${d#* }"; done
pci:v00008086d00004641sv00001D05sd00001174bc06sc00i00 -> Host bridge: Intel Corporation 12th Gen Core Processor Host Bridge/DRAM Registers (rev 02)
pci:v00008086d0000460Dsv00000000sd00000000bc06sc04i00 -> PCI bridge: Intel Corporation 12th Gen Core Processor PCI Express x16 Controller #1 (rev 02)
pci:v00008086d000046A6sv00001D05sd00001174bc03sc00i00 -> VGA compatible controller: Intel Corporation Alder Lake-P GT2 [Iris Xe Graphics] (rev 0c)
[snip...]
```

Example:
```yaml
definitions:
  # Basic compute nodes
  - id: compute-standard
    provisioner: static
    provisionerSpec:
      condition: |-
        hostname() matches "compute-[0-9]{3}" &&
        cpuinfo.vendor_id == "GenuineIntel" &&
        cpuinfo.cpu_cores >= 16
      costPerHour: 0.40

  # GPU nodes
  - id: gpu-nodes
    provisioner: static
    provisionerSpec:
      condition: |-
        hostname() matches "gpu-[0-9]+" &&
        modalias.match("pci:v000010DEd*sv*sd*bc03sc*i*")
      costPerHour: 2.50

  # High-memory nodes
  - id: highmem-nodes
    provisioner: static
    provisionerSpec:
      condition: |-
        hostname() matches "mem-[0-9]+" &&
        cpuinfo.cpu_cores >= 64
      costPerHour: 1.75
```
Additional condition examples:

Match by operating system release:

```yaml
condition: |-
  osrelease.id == "ubuntu" &&
  osrelease.version_id >= "20.04"
```

Match by architecture and CPU:

```yaml
condition: |-
  uname.machine == "x86_64" &&
  cpuinfo.vendor_id == "GenuineIntel" &&
  cpuinfo.cpu_cores >= 16
```

Use let bindings for hostname patterns:

```yaml
condition: |-
  let compute_regex = "compute-[0-9]{3}";
  let gpu_regex = "gpu-[0-9]{2}";
  hostname() matches compute_regex || hostname() matches gpu_regex
```

Match nodes with an NVIDIA GPU and enough CPU cores:

```yaml
condition: |-
  // Match NVIDIA GPU devices
  modalias.match("pci:v000010DEd*sv*sd*bc03sc*i*") &&
  cpuinfo.cpu_cores >= 8
```

Use named boolean variables for readability:

```yaml
condition: |-
  let is_compute_node = hostname() matches "compute-[0-9]+";
  let is_intel_cpu = cpuinfo.vendor_id == "GenuineIntel";
  let is_ubuntu = osrelease.id == "ubuntu";
  let has_enough_cores = cpuinfo.cpu_cores >= 16;
  is_compute_node && is_intel_cpu && is_ubuntu && has_enough_cores
```
AWS provisioners support dynamic EC2 instance provisioning.
| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
instanceType | string | Yes | EC2 instance type or wildcard pattern | "t3.large", "c5.*" |
spot | bool | No | Use spot instances | true, false |
AWS provisioners support wildcard patterns that automatically expand to individual instance types:
- `t3.*` expands to `t3.nano`, `t3.micro`, `t3.small`, etc.
- `c5.*` expands to `c5.large`, `c5.xlarge`, `c5.2xlarge`, etc.
- `p3.*` expands to `p3.2xlarge`, `p3.8xlarge`, `p3.16xlarge`
When using wildcards, the ${spec.instanceType} placeholder in the definition ID is replaced with the actual instance type.
```yaml
definitions:
  # Spot instances for cost optimization
  - id: aws-${spec.instanceType}-spot
    provisioner: aws
    provisionerSpec:
      instanceType: t3.*
      spot: true
    policy: |-
      request.job_ttl <= 3600

  # On-demand compute instances
  - id: aws-${spec.instanceType}
    provisioner: aws
    provisionerSpec:
      instanceType: c5.*
      spot: false
    policy: |-
      request.job_kind == "service"

  # GPU instances for ML workloads
  - id: aws-${spec.instanceType}-gpu
    provisioner: aws
    provisionerSpec:
      instanceType: p3.*
      spot: false
    policy: |-
      request.job_resource.devices["nvidia.com/gpu"] > 0
```
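As an illustration of the placeholder substitution described above, the first (spot) definition with instanceType: t3.* would yield one definition per concrete instance type, with IDs along the lines of the following sketch (the actual set depends on which t3 instance types are available):

```
aws-t3.nano-spot
aws-t3.micro-spot
aws-t3.small-spot
...
```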
Slurm provisioners integrate with existing Slurm clusters.
| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
costPerHour | float64 | No | Cost per hour for resource usage (must be ≥ 0) | 0.30 |
cpu | int | Yes | Number of CPU cores (must be > 0) | 16 |
memory | string | Yes | Memory specification | "64GiB" |
partition | string | Yes | Slurm partition name | "compute" |
```yaml
definitions:
  # Standard compute partition
  - id: slurm-compute
    provisioner: slurm
    provisionerSpec:
      costPerHour: 0.30
      cpu: 16
      memory: "64GiB"
      partition: "compute"
    policy: |-
      request.job_resource.cpu.cores <= 16

  # GPU partition
  - id: slurm-gpu
    provisioner: slurm
    ttl: 43200 # node lifetime set to 12h
    provisionerSpec:
      costPerHour: 1.80
      cpu: 8
      memory: "32GiB"
      partition: "gpu"
    policy: |-
      request.job_resource.devices["nvidia.com/gpu"] > 0
```
PBS provisioners integrate with OpenPBS/PBS Pro clusters.
| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
cpu | int | Yes | Number of CPU cores (must be > 0) | 8 |
memory | string | Yes | Memory specification | "32GiB" |
gpus | int | No | Number of GPUs (must be ≥ 0) | 1 |
queue | string | Yes | PBS queue name | "workq" |
costPerHour | float64 | No | Cost per hour for resource usage (must be ≥ 0) | 0.30 |
```yaml
definitions:
  # Standard PBS queue
  - id: pbs-compute
    provisioner: pbs
    provisionerSpec:
      cpu: 8
      memory: "32GiB"
      gpus: 0
      queue: "workq"
      costPerHour: 0.30

  # GPU queue
  - id: pbs-gpu
    provisioner: pbs
    provisionerSpec:
      cpu: 4
      memory: "16GiB"
      gpus: 1
      queue: "gpu"
      costPerHour: 1.00
```
Policy expressions control access to provisioner definitions and use the same expression language as static conditions.
Request owner variables:

| Variable | Type | Description | Example |
|---|---|---|---|
request.owner.id | string | User ID | "user-123" |
request.owner.organization_id | string | Organization ID | "org-research" |
request.owner.email | string | User email address | "user@example.com" |
request.owner.cluster_id | string | Cluster ID | "cluster-01" |
request.owner.account_id | string | Group ID | "account-456" |
Job variables:

| Variable | Type | Description | Example |
|---|---|---|---|
request.job_kind | string | Job type | "job", "service", "internal" |
request.job_ttl | int | Job time-to-live in seconds | 3600 |
request.job_annotations | map[string]string | Job annotation key-value pairs | request.job_annotations["tier"] |
request.multinode_job | bool | True for multi-node jobs | true |
request.task_array_job | bool | True for task array jobs | false |
Job resource variables:

| Variable | Type | Description | Example |
|---|---|---|---|
request.job_resource.cpu.affinity | string | CPU affinity | "none", "core", "socket", "numa" |
request.job_resource.cpu.cores | int | Number of CPU cores requested | 4 |
request.job_resource.cpu.threads | bool | Hyperthreading enabled | true |
request.job_resource.cpu.sockets | int | Number of CPU sockets | 1 |
request.job_resource.mem.bytes | int | Memory in bytes | 4294967296 |
request.job_resource.mem.by_core | bool | Memory allocation per core | false |
request.job_resource.devices | map[string]uint32 | Device requests | request.job_resource.devices["nvidia.com/gpu"] |
request.job_resource.exclusive | bool | Exclusive node access | true |
Example policy expressions:

Restrict access to an organization and specific accounts:

```yaml
policy: |-
  request.owner.organization_id == "research" &&
  request.owner.account_id in ["2f0a8f4e-0a16-47d5-b541-05d3f9f44910", "c602cf05-7604-4f11-a690-79552b1fdbdd"]
```

Limit resource requests:

```yaml
policy: |-
  request.job_resource.cpu.cores <= 32 &&
  request.job_resource.mem.bytes <= (256 * 1024 * 1024 * 1024) &&
  !request.job_resource.exclusive
```

Constrain job kind and time-to-live:

```yaml
policy: |-
  request.job_kind == "job" &&
  request.job_ttl >= 300 &&
  request.job_ttl <= 86400
```

Limit GPU requests to a single organization:

```yaml
policy: |-
  let gpu_count = request.job_resource.devices["nvidia.com/gpu"];
  gpu_count > 0 && gpu_count <= 4 &&
  request.owner.organization_id == "280abb59-b765-4cdd-a538-6ab8f9b7927c"
```

Match on job annotations and user email:

```yaml
policy: |-
  request.job_annotations["priority"] == "high" &&
  request.job_annotations["project"] in ["proj-a", "proj-b"] &&
  request.owner.email matches "*@ciq.com"
```

Apply different rules to multi-node jobs:

```yaml
policy: |-
  request.multinode_job ?
    request.job_resource.cpu.cores >= 4 &&
    request.owner.account_id == "092403fe-12ef-4465-bce4-18292fec13c8"
  :
    request.job_resource.cpu.cores <= 16
```

Restrict low-priority jobs to outside business hours:

```yaml
policy: |-
  let current_hour = time.Now().Hour();
  let is_business_hours = current_hour >= 9 && current_hour <= 17;
  request.job_annotations["priority"] == "low" ? !is_business_hours : true
```
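A further sketch, with illustrative logic, exercising request variables from the tables above that the earlier examples do not use (task-array jobs, per-core memory, CPU affinity):

```yaml
policy: |-
  // Allow task array jobs only when memory is requested per core
  // and CPU affinity is not pinned to sockets (illustrative rule)
  request.task_array_job ?
    request.job_resource.mem.by_core &&
    request.job_resource.cpu.affinity in ["none", "core", "numa"]
  :
    true
```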