FuzzballOrchestrate CRD Reference
The FuzzballOrchestrate Custom Resource Definition (CRD) is the primary way to deploy and
configure a Fuzzball cluster on Kubernetes. This CRD provides comprehensive control over all aspects
of a Fuzzball deployment, from basic infrastructure (database, ingress, authentication) to advanced
features (multi-cloud provisioning, billing integration, autoscaling).
The minimal configuration requires image credentials, ingress configuration, database setup, and authentication. This example is suitable for local or on-premises deployments:
```yaml
apiVersion: deployment.ciq.com/v1alpha1
kind: FuzzballOrchestrate
metadata:
  name: fuzzball-orchestrate
spec:
  image:
    username: ${DEPOT_USER}
    password: ${ACCESS_KEY}
    exclusive: false
  ingress:
    create:
      domain: "10.0.0.99.nip.io"
      proxy:
        type: LoadBalancer
        annotations:
          metallb.io/loadBalancerIPs: 10.0.0.99
  database:
    create:
      storage:
        class: longhorn
  keycloak:
    create:
      ingress:
        hostname: auth.10.0.0.99.nip.io
      realmName: Fuzzball
      realmId: 550e8400-e29b-41d4-a716-446655440000
      username: keycloak
      ownerEmail: "admin@example.com"
      createDatabase: true
  tls:
    certManager:
      create: {}
    trustManager:
      create: {}
  fuzzball:
    substrate:
      nfs:
        destination: "/fuzzball/shared"
        server: 10.0.0.10
        path: "/srv/fuzzball/shared"
    jetstream:
      replicas: 3
      storage:
        class: longhorn
        size: 10Gi
```
Controls where container images are pulled from and how the registry is authenticated:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| repository | string | No | depot.ciq.com/fuzzball/fuzzball-images | Container image registry |
| username | string | Yes* | - | Registry authentication username |
| password | string | Yes* | - | Registry authentication password |
| exclusive | boolean | No | true | If true, all images must come from the specified repository |
* Required for private registries like CIQ Depot
Example:
```yaml
spec:
  image:
    repository: depot.ciq.com/fuzzball/fuzzball-images
    username: my-depot-user
    password: my-depot-token
    exclusive: false # Allow pulling some images from public registries
```
Defines how the cluster is exposed to the network. Choose either create (new ingress) or
external (existing ingress controller).
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| domain | string | Yes | - | Base domain for all Fuzzball services |
| proxy.type | string | Yes | LoadBalancer | Service type: LoadBalancer or NodePort |
| proxy.annotations | map | No | {} | Annotations for the proxy service (e.g., MetalLB config) |
| proxy.http.nodePort | integer | No | - | NodePort for HTTP (if type is NodePort) |
| proxy.tls.nodePort | integer | No | - | NodePort for HTTPS (if type is NodePort) |
Example with LoadBalancer:
```yaml
spec:
  ingress:
    create:
      domain: fuzzball.example.com
      proxy:
        type: LoadBalancer
        annotations:
          metallb.io/loadBalancerIPs: 10.0.0.99
          metallb.io/allow-shared-ip: ingress-and-fuzzball
```
CoreWeave requires specific annotations on the LoadBalancer service for automatic DNS configuration:
| Annotation | Value | Description |
|---|---|---|
| service.beta.kubernetes.io/coreweave-load-balancer-type | public | Creates an internet-accessible load balancer |
| service.beta.kubernetes.io/external-hostname | *.<domain> | Wildcard domain for automatic DNS resolution |
CoreWeave example:
```yaml
spec:
  ingress:
    create:
      domain: a1b2c3-my-cluster.coreweave.app
      proxy:
        type: LoadBalancer
        annotations:
          service.beta.kubernetes.io/coreweave-load-balancer-type: public
          service.beta.kubernetes.io/external-hostname: '*.a1b2c3-my-cluster.coreweave.app'
```
Example with NodePort:
```yaml
spec:
  ingress:
    create:
      domain: fuzzball.example.com
      proxy:
        type: NodePort
        http:
          nodePort: 30080
        tls:
          nodePort: 30443
```
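When picking NodePort values, keep in mind that Kubernetes allocates NodePorts from a fixed range, 30000-32767 by default (configurable via the kube-apiserver `--service-node-port-range` flag). A small sketch of the sanity check, assuming the default range:

```python
# Sanity-check hand-picked nodePort values against the Kubernetes
# default NodePort range (30000-32767). If your cluster runs the
# apiserver with a custom --service-node-port-range, adjust the range.
DEFAULT_NODE_PORT_RANGE = range(30000, 32768)

def valid_node_port(port: int, allowed: range = DEFAULT_NODE_PORT_RANGE) -> bool:
    return port in allowed

for port in (30080, 30443):
    assert valid_node_port(port), f"{port} is outside the NodePort range"
print("nodePort values OK")
```

Values outside this range are rejected by the apiserver when the proxy Service is created, so checking up front saves a failed reconcile.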
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| domain | string | Yes | - | Base domain for all Fuzzball services |
| className | string | Yes | - | Ingress class name (e.g., nginx, traefik) |
| annotations | map | No | {} | Annotations for ingress resources |
Example:
```yaml
spec:
  ingress:
    external:
      domain: fuzzball.example.com
      className: nginx
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
```
Fuzzball requires a PostgreSQL database. Choose either create (deploys a dedicated PostgreSQL alongside Orchestrate) or external.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| enableDebugPod | boolean | No | false | Deploy a debug pod with database tools |
| annotations | map | No | {} | Annotations for database resources |
| storage.class | string | No | - | StorageClass for database PVC |
| storage.size | string | No | - | Size of database storage |
| storage.accessMode | string | No | ReadWriteOnce | Access mode for PVC |
| storage.annotations | map | No | {} | Annotations for PVC |
Example:
```yaml
spec:
  database:
    create:
      enableDebugPod: true
      storage:
        class: longhorn
        size: 200Gi
        accessMode: ReadWriteOnce
```
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| host | string | Yes | - | Database hostname or IP |
| port | string | No | 5432 | Database port |
| driver | string | No | postgres | Database driver |
| credentials.user | string | Yes | - | Database username |
| credentials.password | string | Yes | - | Database password |
| sslMode | string | No | verify-full | SSL mode: disable, require, verify-ca, verify-full |
| rdsSecretId | string | No | - | AWS Secrets Manager ARN for RDS credentials |
| certificate.caCert | string | No | - | CA certificate for SSL verification |
| certificate.caCertURL | string | No | - | URL to download CA certificate |
| certificate.clientCert | string | No | - | Client certificate for mTLS |
| certificate.clientKey | string | No | - | Client key for mTLS |
Example with external PostgreSQL:
```yaml
spec:
  database:
    external:
      host: postgres.example.com
      port: "5432"
      driver: postgres
      credentials:
        user: fuzzball
        password: secure-password
      sslMode: verify-full
      certificate:
        caCert: |
          <full CA certificate>
```
Example with AWS RDS:
```yaml
spec:
  database:
    external:
      host: mydb.abc123.us-east-1.rds.amazonaws.com
      rdsSecretId: arn:aws:secretsmanager:us-east-1:123456789012:secret:rds-db-credentials-abc123
      sslMode: verify-full
      certificate:
        caCertURL: https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
```
Fuzzball uses Keycloak for authentication. Choose either create (deploy Keycloak alongside Orchestrate) or external.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| ownerEmail | string | Yes | - | Email address for the Fuzzball organization owner |
| realmId | string | No | (generated) | Keycloak realm ID (must be UUID v4, lower case) |
| realmName | string | No | Fuzzball | Keycloak realm display name |
| username | string | No | keycloak | Keycloak admin username |
| password | string | No | (generated) | Keycloak admin password |
| defaultUserPassword | string | No | - | Default password for new users |
| createDatabase | boolean | No | true | Create dedicated database for Keycloak |
| replicas | integer | No | 1 | Number of Keycloak replicas |
| ingress.hostname | string | No | auth.{domain} | Hostname for Keycloak UI |
| ingress.className | string | No | - | Ingress class name |
| ingress.tls.cert | string | No | - | TLS certificate |
| ingress.tls.key | string | No | - | TLS private key |
| ingress.annotations | map | No | {} | Ingress annotations |
Basic example:
```yaml
spec:
  keycloak:
    create:
      ownerEmail: admin@example.com
      realmName: MyOrganization
      realmId: 550e8400-e29b-41d4-a716-446655440000
      username: keycloak
      password: secure-keycloak-password
      createDatabase: true
      ingress:
        hostname: auth.fuzzball.example.com
```
Generate a UUID v4 for the realm ID:
```shell
$ uuidgen
```

Use the output in your configuration:

```yaml
realmId: 550e8400-e29b-41d4-a716-446655440000
```
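Note that `uuidgen` on some platforms (macOS, for example) prints uppercase hex, while `realmId` must be a lower-case UUID v4. A small sketch that always yields a compliant value:

```python
import uuid

# uuid.uuid4() generates a random (version 4) UUID, and str() renders
# it in lower case -- the exact format the realmId field expects.
realm_id = str(uuid.uuid4())
print(realm_id)
assert realm_id == realm_id.lower()
```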
The ownerEmail and defaultUserPassword fields create the initial administrator account with full
cluster administration privileges. Change the default password after first login.
```yaml
ownerEmail: admin@example.com
defaultUserPassword: initial-secure-password
```
Keycloak can be configured to federate users from an LDAP/Active Directory server:
| Parameter | Type | Required | Description |
|---|---|---|---|
| ldap.url | string | Yes | LDAP server URL (ldap:// or ldaps://) |
| ldap.startTLS | boolean | No | Use StartTLS for ldap:// connections |
| ldap.insecure | boolean | No | Skip TLS certificate verification |
| ldap.vendor | string | No | LDAP vendor: ad, rhds, tivoli, edirectory, other |
| ldap.bindDN | string | No | Bind DN for LDAP authentication |
| ldap.bindPassword | string | No | Password for bind DN |
| ldap.searchScope | string | No | Search scope: single or subtree |
| ldap.users.dn | string | Yes | Base DN for user search |
| ldap.users.attributes.* | string | Yes | User attribute mappings |
| ldap.users.objectClasses | []string | Yes | User object classes |
| ldap.users.filter | string | No | LDAP filter for users |
| ldap.groups.dn | string | Yes | Base DN for group search |
| ldap.groups.membershipAttributeType | string | Yes | Membership type: dn or uid |
| ldap.groups.userGroupsStrategy | string | Yes | Strategy for loading groups |
| ldap.groups.attributes.* | string | Yes | Group attribute mappings |
| ldap.groups.objectClasses | []string | Yes | Group object classes |
| ldap.groups.filter | string | No | LDAP filter for groups |
LDAP example:
```yaml
spec:
  keycloak:
    create:
      ownerEmail: admin@example.com
      ldap:
        url: ldaps://ldap.example.com
        vendor: ad
        bindDN: cn=fuzzball,ou=service,dc=example,dc=com
        bindPassword: ldap-bind-password
        searchScope: subtree
        users:
          dn: ou=users,dc=example,dc=com
          attributes:
            username: sAMAccountName
            rdn: sAMAccountName
            uuid: objectGUID
            uidNumber: uidNumber
            gidNumber: gidNumber
          objectClasses:
            - person
            - organizationalPerson
          filter: "(memberOf=cn=fuzzball-users,ou=groups,dc=example,dc=com)"
        groups:
          dn: ou=groups,dc=example,dc=com
          membershipAttributeType: dn
          userGroupsStrategy: get_groups_from_user_memberof_attribute
          attributes:
            groupName: cn
            groupMembership: member
            userMembership: sAMAccountName
            memberOf: memberOf
            gidNumber: gidNumber
          objectClasses:
            - group
          filter: "(cn=fuzzball-*)"
```
| Parameter | Type | Required | Description |
|---|---|---|---|
| url | string | Yes | Keycloak server URL |
| realmId | string | Yes | Existing realm ID (UUID v4) |
| realmName | string | Yes | Existing realm name |
| username | string | Yes | Admin username |
| password | string | Yes | Admin password |
| ownerEmail | string | Yes | Organization owner email |
| useLDAPFederation | boolean | No | Whether LDAP is configured |
Example:
```yaml
spec:
  keycloak:
    external:
      url: https://keycloak.example.com
      realmId: 550e8400-e29b-41d4-a716-446655440000
      realmName: ExistingRealm
      username: admin
      password: keycloak-admin-password
      ownerEmail: admin@example.com
```
Optional configuration for certificate management and Let’s Encrypt:
| Parameter | Type | Required | Description |
|---|---|---|---|
| certManager.create | object | No | Deploy cert-manager (empty object for defaults) |
| certManager.serviceAccount.annotations | map | No | Service account annotations (for IRSA) |
| trustManager.create | object | No | Deploy trust-manager (empty object for defaults) |

The default behavior (empty objects for certManager and trustManager) is to issue self-signed certificates.
| Parameter | Type | Required | Description |
|---|---|---|---|
| internalIssuer | object | No | Internal certificate issuer configuration |
| ingressIssuer.create.letsEncrypt.email | string | No | Email for Let’s Encrypt notifications |
| ingressIssuer.create.letsEncrypt.issuer | string | No | Issuer name (e.g., letsencrypt-prod, letsencrypt-staging) |
| ingressIssuer.create.letsEncrypt.solvers | []object | No | ACME challenge solvers (dns01 or http01) |
| ingressIssuer.external.internalCAIssuerName | string | No | Name of existing CA issuer to use |
Example with Let’s Encrypt (DNS-01):
```yaml
spec:
  tls:
    certManager:
      create: {}
    trustManager:
      create: {}
    ingressIssuer:
      create:
        letsEncrypt:
          email: admin@example.com
          issuer: letsencrypt-prod
          solvers:
            - dns01:
                route53:
                  region: us-east-1
                  hostedZoneID: Z1234567890ABC
```
Example with Let’s Encrypt (HTTP-01):
```yaml
spec:
  tls:
    certManager:
      create: {}
    ingressIssuer:
      create:
        letsEncrypt:
          email: admin@example.com
          issuer: letsencrypt-prod
          solvers:
            - http01:
                ingress:
                  class: nginx
```
The fuzzball section controls all Fuzzball services and their configurations.
| Parameter | Type | Description |
|---|---|---|
| version | string | Fuzzball version (defaults to operator version) |
| cluster.name | string | Cluster name (default: unset-cluster) |
| cluster.kind | string | Cluster kind/type |
Controls how substrate nodes connect and operate:
| Parameter | Type | Description |
|---|---|---|
| substrate.nfs.destination | string | Mount point on substrate nodes |
| substrate.nfs.server | string | NFS server IP or hostname |
| substrate.nfs.path | string | NFS export path |
| substrate.secureRegistries | []string | Private registries requiring authentication |
| substrate.imageProxy | string | HTTP(S) proxy for image pulling |
| substrate.imageNoProxy | []string | Hosts to exclude from proxy |
| substrate.mtls.* | object | mTLS configuration for substrate |
Example:
```yaml
spec:
  fuzzball:
    cluster:
      name: my-fuzzball-cluster
      kind: on-premises
    substrate:
      nfs:
        destination: /fuzzball/shared
        server: 10.0.0.10
        path: /srv/fuzzball/shared
      secureRegistries:
        - depot.ciq.com
      imageProxy: http://proxy.example.com:3128
      imageNoProxy:
        - localhost
        - 127.0.0.1
        - .example.com
```
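The `imageNoProxy` entries above typically follow the usual no-proxy conventions: an exact hostname match, or a domain-suffix match for entries written with a leading dot. This is a sketch of those common semantics, not a statement of Fuzzball's exact matching rules:

```python
def bypasses_proxy(host: str, no_proxy: list[str]) -> bool:
    """Illustrative no-proxy matching: an exact host match, or a
    domain-suffix match for entries with a leading dot (".example.com"
    matches any subdomain of example.com)."""
    for entry in no_proxy:
        if entry.startswith("."):
            if host.endswith(entry) or host == entry[1:]:
                return True
        elif host == entry:
            return True
    return False

no_proxy = ["localhost", "127.0.0.1", ".example.com"]
print(bypasses_proxy("registry.example.com", no_proxy))  # True: suffix match
print(bypasses_proxy("depot.ciq.com", no_proxy))         # False: goes via proxy
```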
Most Fuzzball services support the following common parameters:
| Parameter | Type | Description |
|---|---|---|
| replicas | integer | Number of pod replicas |
| autoscaling.enabled | boolean | Enable horizontal pod autoscaling |
| autoscaling.minReplicas | integer | Minimum replicas when autoscaling |
| autoscaling.maxReplicas | integer | Maximum replicas when autoscaling |
| autoscaling.targetCPUUtilization | integer | Target CPU percentage (default: 80) |
| autoscaling.targetMemoryUtilization | integer | Target memory percentage (default: 80) |
| resources.requests | map | Resource requests (cpu, memory) as key-value pairs |
| resources.limits | map | Resource limits (cpu, memory) as key-value pairs |
| serviceAccount.annotations | map | Service account annotations (for IRSA) |
Example for orchestrator service:
```yaml
spec:
  fuzzball:
    orchestrator:
      replicas: 2
      autoscaling:
        enabled: true
        minReplicas: 1
        maxReplicas: 5
        targetCPUUtilization: 70
        targetMemoryUtilization: 75
      resources:
        requests:
          cpu: 500m
          memory: 512Mi
        limits:
          cpu: 2000m
          memory: 2Gi
      serviceAccount:
        annotations:
          eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/fuzzball-orchestrator
```
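The utilization targets above drive Kubernetes' standard HorizontalPodAutoscaler rule, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), clamped to the min/max bounds. A sketch of that arithmetic:

```python
import math

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_r: int, max_r: int) -> int:
    # Standard HPA rule: scale proportionally to how far the observed
    # utilization is from the target, then clamp to the configured bounds.
    desired = math.ceil(current * current_util / target_util)
    return max(min_r, min(max_r, desired))

# 2 replicas running at 140% of a 70% CPU target -> scale out to 4
print(desired_replicas(2, 140, 70, 1, 5))  # 4
```

This is why `replicas` is ignored once autoscaling is enabled: the HPA continuously recomputes the replica count within `minReplicas`/`maxReplicas`.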
The following configuration sections are available under spec.fuzzball:
Top-Level Configuration:
- `version` - Fuzzball version (defaults to operator version)
- `config` - Global configuration (shared storage, etc.)
- `cluster` - Cluster name and metadata
- `substrate` - Substrate node configuration (NFS, registries, proxies, mTLS)
Active Services:
- `agent` - Agent service (workflow execution)
- `audit` - Audit logging service
- `auth` - Authentication service (SpiceDB)
- `billing` - Billing and marketplace integration
- `clusterAdmin` - Admin UI and cluster setup
- `jetstream` - NATS Jetstream message broker
- `openapi` - OpenAPI documentation service
- `orchestrator` - Orchestrator service
- `storage` - Storage service
- `substrateBridge` - Substrate bridge service (DNS, logging, Kubernetes integration)
- `ui` - Web UI service
- `workflow` - Workflow service
- `workflowCatalog` - Workflow catalog configuration
The following services have been deprecated as of version v3.0 and will be removed in a future version (use recommended replacements):
- `account` - Deprecated (use `agent`)
- `dns` - Deprecated (use `substrateBridge`)
- `kube` - Deprecated (use `substrateBridge`)
- `log` - Deprecated (use `substrateBridge`)
- `organization` - Deprecated (use `agent`)
- `provision` - Deprecated (use `orchestrator`)
- `schedule` - Deprecated (use `orchestrator`)
- `secret` - Deprecated (use `agent`)
- `user` - Deprecated (use `agent`)
- `workflowEngine` - Deprecated (use `jetstream`)
NATS Jetstream has specific configuration requirements:
| Parameter | Type | Default | Description |
|---|---|---|---|
| replicas | integer | 3 | Number of Jetstream replicas (recommended: 3) |
| storage.class | string | - | StorageClass for Jetstream PVCs |
| storage.size | string | 10Gi | Storage size per replica |
| externalService.type | string | - | External service type (NodePort, LoadBalancer) |
Example:
```yaml
spec:
  fuzzball:
    jetstream:
      replicas: 3
      storage:
        class: longhorn
        size: 20Gi
      externalService:
        type: NodePort
```
This storage is necessary for internal services. Admins can configure storage for computational jobs later.
| Parameter | Type | Description |
|---|---|---|
| storage.storage.class | string | StorageClass for storage service |
Example:
```yaml
spec:
  fuzzball:
    storage:
      storage:
        class: longhorn
```
| Parameter | Type | Description |
|---|---|---|
| audit.storage.class | string | StorageClass for audit logs |
Example:
```yaml
spec:
  fuzzball:
    audit:
      storage:
        class: longhorn
```
| Parameter | Type | Description |
|---|---|---|
| substrateBridge.log.storage.class | string | StorageClass for substrate logs |
| substrateBridge.dns.externalService.type | string | DNS service type |
NodePort is recommended for local deployments.
Example:
```yaml
spec:
  fuzzball:
    substrateBridge:
      log:
        storage:
          class: longhorn
      dns:
        externalService:
          type: NodePort
```
The orchestrator provisioner enables multi-cloud and HPC integration:
| Parameter | Type | Description |
|---|---|---|
| enabled | boolean | Enable provisioner |
| substrateComputeDirectory | string | Working directory for substrate operations |
| aws | object | AWS provisioner configuration |
| coreweave | object | CoreWeave provisioner configuration |
| slurm | object | Slurm provisioner configuration |
| pbs | object | PBS provisioner configuration |
| Parameter | Type | Description |
|---|---|---|
| aws.enabled | boolean | Enable AWS provisioner |
| aws.region | string | AWS region |
| aws.subnetIDs | []string | VPC subnet IDs |
| aws.securityGroupIDs | []string | Security group IDs |
| aws.instanceProfileARN | string | IAM instance profile ARN |
| aws.usePublicIP | boolean | Assign public IPs |
| aws.sshEnabled | boolean | Enable SSH access |
| aws.sshKeyPairName | string | EC2 key pair name |
| aws.sshPrivateKeyPem | string | SSH private key |
| aws.depotUser | string | CIQ Depot username for substrate nodes |
| aws.depotAccessToken | string | CIQ Depot access token for substrate nodes |
| aws.cloudInitScripts | []string | Custom cloud-init scripts to run on instances |
| aws.sharedFs | map | Shared filesystem configuration |
Example:
```yaml
spec:
  fuzzball:
    orchestrator:
      provisioner:
        enabled: true
        aws:
          enabled: true
          region: us-east-1
          subnetIDs:
            - subnet-0123456789abcdef0
          securityGroupIDs:
            - sg-0123456789abcdef0
          instanceProfileARN: arn:aws:iam::123456789012:instance-profile/FuzzballSubstrate
          usePublicIP: false
```
The CoreWeave provisioner enables dynamic provisioning of compute nodes on CoreWeave infrastructure.
When enabled, Fuzzball will automatically create and manage CoreWeave NodePool resources and deploy
substrate pods as DaemonSets on provisioned nodes.
CoreWeave deployments use two types of shared storage PVCs:
- Workflow Data PVC (`fuzzball-shared-storage`): Mounted at `/mnt/shared-storage` on substrate nodes and exposed to workflows via the HostPath CSI driver
- Image Cache PVC (`fuzzball-sharedfs`): Mounted at `/mnt/fuzzball-sharedfs` on substrate nodes for internal container image caching (not exposed to workflows)

CoreWeave’s `shared-vast` storage requires the Native Protocol Limit view policy. Ensure this is configured before deployment.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| coreweave.enabled | boolean | Yes | - | Enable CoreWeave provisioner |
| coreweave.storage.class | string | No | - | StorageClass for CoreWeave shared storage PVCs (typically shared-vast) |
| coreweave.storage.size | string | No | - | Size of shared storage per substrate node |
| coreweave.storage.accessMode | string | No | ReadWriteOnce | Access mode for shared storage PVC (use ReadWriteMany for multi-node access) |
| coreweave.storage.annotations | map | No | {} | Additional annotations for storage resources |
Example:
```yaml
spec:
  fuzzball:
    orchestrator:
      provisioner:
        enabled: true
        coreweave:
          enabled: true
          storage:
            class: shared-vast
            size: 100Gi
            accessMode: ReadWriteMany
```
| Workload Type | Recommended Size | Rationale |
|---|---|---|
| Light workflows | 50Gi | Minimal data processing |
| Standard workflows | 100Gi | Typical data processing needs |
| Heavy workflows | 250Gi+ | Large datasets, intermediate files |
| Data-intensive | 500Gi+ | Big data processing, ML training |
The shared PVC (fuzzball-sharedfs) is mounted at /mnt/fuzzball-sharedfs on substrate nodes for
internal container image caching across compute nodes. This improves workflow startup times by
avoiding repeated image downloads. This storage is not exposed to workflows.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| fuzzball.config.sharedPVC.accessMode | string | Yes | - | Volume access mode (must be ReadWriteMany) |
| fuzzball.config.sharedPVC.class | string | Yes | - | Storage class name (must be shared-vast) |
| fuzzball.config.sharedPVC.size | string | Yes | - | Total shared cache size |
Example:
```yaml
spec:
  fuzzball:
    config:
      sharedPVC:
        accessMode: ReadWriteMany
        class: shared-vast
        size: 10Gi
```
| Environment Type | Recommended Size | Rationale |
|---|---|---|
| Testing/Development | 10Gi | Few container images |
| Small Production | 25Gi | Limited image variety |
| Large Production | 50Gi+ | Many different images |
See Slurm Integration Documentation for detailed configuration.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| slurm.enabled | boolean | Yes | - | Enable Slurm provisioner |
| slurm.sshHost | string | Yes | - | SSH host for remote Slurm instance |
| slurm.sshPort | integer | No | 22 | SSH host port for remote Slurm instance |
| slurm.username | string | Yes | - | SSH login username for remote Slurm instance |
| slurm.password | string | No | - | SSH login password for remote Slurm instance |
| slurm.sshHostPublicKey | string | No | - | SSH host public key for remote Slurm instance |
| slurm.sshPrivateKeyPem | string | No | - | SSH private key PEM for remote Slurm instance |
| slurm.sshPrivateKeyPassPhrase | string | No | - | Passphrase for encrypted SSH private key |
| slurm.binaryPath | string | No | - | Custom path to Slurm binaries (if not in $PATH) |
| slurm.connectionTimeout | integer | No | 30 | SSH connection timeout in seconds |
| slurm.sudoPath | string | No | - | Path to sudo binary on compute nodes |
| slurm.options | map | No | {} | Additional Slurm sbatch options |
| slurm.skipHostKeyVerification | boolean | No | false | Skip SSH host key verification (not recommended) |
Basic example:
```yaml
spec:
  fuzzball:
    orchestrator:
      provisioner:
        enabled: true
        slurm:
          enabled: true
          sshHost: slurm-head.example.com
          sshPort: 22
          username: fuzzball-service
          sshPrivateKeyPem: |
            <full private key in PEM format>
          sshHostPublicKey: "slurm-head.example.com ecdsa-sha2-nistp256 AAAAE2..."
```
See PBS Integration Documentation for detailed configuration.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| pbs.enabled | boolean | Yes | - | Enable PBS provisioner |
| pbs.sshHost | string | Yes | - | SSH host for remote PBS instance |
| pbs.sshPort | integer | No | 22 | SSH host port for remote PBS instance |
| pbs.username | string | Yes | - | SSH login username for remote PBS instance |
| pbs.password | string | No | - | SSH login password for remote PBS instance |
| pbs.sshHostPublicKey | string | No | - | SSH host public key for remote PBS instance |
| pbs.sshPrivateKeyPem | string | No | - | SSH private key PEM for remote PBS instance |
| pbs.sshPrivateKeyPassPhrase | string | No | - | Passphrase for encrypted SSH private key |
| pbs.binaryPath | string | No | - | Custom path to PBS binaries (if not in $PATH) |
| pbs.validateSubstrate | boolean | No | false | Validate substrate before use |
| pbs.defaultQueue | string | No | - | Default PBS queue name |
| pbs.pbsServer | string | No | - | PBS server hostname |
| pbs.connectionTimeout | integer | No | 30 | SSH connection timeout in seconds |
| pbs.options | map | No | {} | Additional PBS qsub options |
| pbs.sudoPath | string | No | - | Path to sudo binary on compute nodes |
| pbs.skipHostKeyVerification | boolean | No | false | Skip SSH host key verification (not recommended) |
Basic example:
```yaml
spec:
  fuzzball:
    orchestrator:
      provisioner:
        enabled: true
        pbs:
          enabled: true
          sshHost: pbs-head.example.com
          sshPort: 22
          username: fuzzball-service
          password: secure-password
```
Support for CoreWeave LOTA object storage is in preview status and is currently subject to more rapid change to address customer requirements than other features of Fuzzball. If you are interested in using LOTA with Fuzzball on CoreWeave, we recommend contacting CIQ as part of your deployment planning process.
CoreWeave’s LOTA provides S3-compatible object storage for workflow data ingress and egress. Configure LOTA credentials to enable workflows to read from and write to LOTA buckets.
| Parameter | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | Must be s3 for LOTA |
| secret.access-key-id | string | Yes | LOTA access key ID |
| secret.secret-access-key | string | Yes | LOTA secret access key |
| secret.endpoint | string | Yes | LOTA endpoint URL (https://cwlota.com) |
| secret.region | string | Yes | LOTA bucket region (e.g., ord1, lga1) |
Create a YAML file with your LOTA credentials:
```yaml
# lota-credentials.yaml
type: s3
secret:
  access-key-id: AKIAIOSFODNN7EXAMPLE
  secret-access-key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  endpoint: https://cwlota.com
  region: us-east-02a
```
Replace placeholder values with your actual LOTA credentials from CoreWeave.
```shell
$ fuzzball secret create lota-credentials \
    --from-file lota-credentials.yaml \
    -s user
```

The `-s user` flag makes the secret available to all users in your organization.
Reference the LOTA credentials in workflow volume definitions:
```yaml
volumes:
  shared-storage:
    reference: volume://user/fuzzball-shared-vast/shared-storage
    ingress:
      - source:
          uri: "s3://my-bucket/input/data.txt"
          secret: secret://user/lota-credentials
        destination:
          uri: "file://data.txt"
    egress:
      - source:
          uri: "file://results.tar.gz"
        destination:
          uri: "s3://my-bucket/output/results.tar.gz"
          secret: secret://user/lota-credentials
```
Replace my-bucket with your LOTA bucket name.
For testing and development on a local Kubernetes cluster. This deploys the Fuzzball control plane.
To run workflows, you’ll need to add shared storage (fuzzball.substrate.nfs or
fuzzball.config.sharedPVC) and configure compute nodes.
```yaml
apiVersion: deployment.ciq.com/v1alpha1
kind: FuzzballOrchestrate
metadata:
  name: fuzzball-orchestrate
spec:
  image:
    username: depot-user
    password: depot-token
    exclusive: false
  ingress:
    create:
      domain: localhost.nip.io
      proxy:
        type: NodePort
  database:
    create:
      storage:
        class: local-path
  keycloak:
    create:
      ownerEmail: admin@localhost
      createDatabase: true
  tls:
    certManager:
      create: {}
    trustManager:
      create: {}
  fuzzball:
    jetstream:
      replicas: 1
      externalService:
        type: NodePort
      storage:
        class: local-path
    substrateBridge:
      log:
        storage:
          class: "local-path"
      dns:
        externalService:
          type: NodePort
```
For production deployments with high availability:
```yaml
apiVersion: deployment.ciq.com/v1alpha1
kind: FuzzballOrchestrate
metadata:
  name: fuzzball-orchestrate
spec:
  image:
    username: depot-user
    password: depot-token
    exclusive: true
  ingress:
    create:
      domain: fuzzball.company.com
      proxy:
        type: LoadBalancer
        annotations:
          metallb.io/loadBalancerIPs: 10.0.100.50
  database:
    create:
      storage:
        class: longhorn
        size: 500Gi
  keycloak:
    create:
      ownerEmail: admin@company.com
      replicas: 2
      createDatabase: true
  tls:
    certManager:
      create: {}
    trustManager:
      create: {}
    ingressIssuer:
      create:
        letsEncrypt:
          email: admin@company.com
          issuer: letsencrypt-prod
          solvers:
            - dns01:
                route53:
                  region: us-east-1
                  hostedZoneID: Z1234567890ABC
  fuzzball:
    cluster:
      name: production-cluster
    substrate:
      nfs:
        server: nfs.company.com
        path: /fuzzball/shared
        destination: /fuzzball/shared
    orchestrator:
      replicas: 3
      autoscaling:
        enabled: true
        minReplicas: 2
        maxReplicas: 10
    agent:
      replicas: 5
      autoscaling:
        enabled: true
        minReplicas: 3
        maxReplicas: 20
    jetstream:
      replicas: 3
      storage:
        class: longhorn
        size: 50Gi
```
Using external database and authentication:
```yaml
apiVersion: deployment.ciq.com/v1alpha1
kind: FuzzballOrchestrate
metadata:
  name: fuzzball-orchestrate
spec:
  image:
    username: depot-user
    password: depot-token
  ingress:
    external:
      domain: fuzzball.cloud.com
      className: nginx
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
  database:
    external:
      host: postgres.abc123.us-east-1.rds.amazonaws.com
      rdsSecretId: arn:aws:secretsmanager:us-east-1:123456789012:secret:fuzzball-db
      credentials:
        user: "" # Leave empty when using rdsSecretId
        password: "" # Leave empty when using rdsSecretId
      sslMode: verify-full
      certificate:
        caCertURL: https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
  keycloak:
    external:
      url: https://keycloak.cloud.com
      realmId: 550e8400-e29b-41d4-a716-446655440000
      realmName: FuzzballProd
      username: admin
      password: keycloak-password
      ownerEmail: admin@cloud.com
  fuzzball:
    config:
      sharedPVC:
        accessMode: ReadWriteMany
        class: efs-sc
        size: 100Gi
    orchestrator:
      provisioner:
        enabled: true
        aws:
          enabled: true
          region: us-east-1
          subnetIDs:
            - subnet-0123456789abcdef0
          securityGroupIDs:
            - sg-0123456789abcdef0
          instanceProfileARN: arn:aws:iam::123456789012:instance-profile/FuzzballSubstrate
      serviceAccount:
        annotations:
          eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/fuzzball-orchestrator
```
Check deployment status:
```shell
$ kubectl get fuzzballorchestrate
NAME                   STATUS    AGE
fuzzball-orchestrate   Running   10m
```

View the current configuration:

```shell
$ kubectl get fuzzballorchestrate fuzzball-orchestrate -o yaml
```

View status and events:

```shell
$ kubectl describe fuzzballorchestrate fuzzball-orchestrate
```

Edit the configuration in place (changes are applied immediately):

```shell
$ kubectl edit fuzzballorchestrate fuzzball-orchestrate
```

Apply configuration from a file:

```shell
$ kubectl apply -f fuzzball.yaml
```

Watch operator logs during deployment:

```shell
$ kubectl logs -l app.kubernetes.io/name=fuzzball-operator -n fuzzball-system -f
```

- Resource Scope: `FuzzballOrchestrate` is a cluster-scoped resource (not namespace-scoped)
- Short Names: Can use `fb` or `fuzz` as shortcuts in kubectl commands
- Mutual Exclusivity:
  - Database: Use either `create` or `external`, not both
  - Ingress: Use either `create` or `external`, not both
  - Keycloak: Use either `create` or `external`, not both
- Autoscaling: When enabled, `replicas` is ignored in favor of `minReplicas`/`maxReplicas`
- Storage Classes: Ensure specified storage classes exist in your cluster
- UUIDs: `realmId` must be a valid UUID v4 format
- NFS Requirements: NFS server must be accessible from all nodes
Check operator logs:
```shell
$ kubectl logs -l app.kubernetes.io/name=fuzzball-operator -n fuzzball-system --tail=100
```

Check FuzzballOrchestrate status:

```shell
$ kubectl describe fuzzballorchestrate fuzzball-orchestrate
```

Check pod events:

```shell
$ kubectl get pods -n fuzzball
$ kubectl describe pod <pod-name> -n fuzzball
```

Check PVC status:

```shell
$ kubectl get pvc -A
```

Check ingress configuration:

```shell
$ kubectl get ingress -A
$ kubectl describe ingress <ingress-name> -n fuzzball
```

Verify domain DNS resolution and load balancer IP assignment.