Deployment
After fulfilling the prerequisites listed in the requirements doc, you are ready to perform a deployment in your cloud environment.
If there are any left-over stacks or resources from a previous deployment they may interfere with your ability to deploy a fresh Fuzzball cluster. Please make sure that all stacks, nested stacks, and resources from previous deployments have been fully and successfully deleted before initiating a new one. You can use the following command to forcibly remove old Fuzzball resources after destroying your CloudFormation stack if necessary:
$ fuzzball cluster aws cleanup
Make sure you are logged into your AWS account as the fuzzballAdmin IAM user and navigate to the Fuzzball Cloud Marketplace listing. Review the information, paying special attention to the pricing options. When you are ready to subscribe, press the “View Purchase Options” button.

On the subscription page, you can accept the public or a private offer.

After subscribing, you can press the “Launch your Software” button.

The launch page will include a link to the AWS CloudFormation page as well as the link to the CloudFormation template for your selected version of Fuzzball. Note down the highlighted URI of the template.

With the template URI in hand, you can head over to the CloudFormation page either by following the link on the launch page or by going directly to CloudFormation in the AWS console. On the CloudFormation page, select “Create stack” and on the following page add the Fuzzball CloudFormation template URI.
Make sure you are in the region where you want to install Fuzzball and where you have the appropriate service quotas configured. Use the dropdown on the top right to change to the correct region if necessary.

After clicking “Next” you will be taken to the configuration form. Here, you can supply the values that are specific to your Fuzzball cluster.

Most of these values are self-explanatory or are adequately explained by the accompanying text in the form itself. There are a few notes below to better explain some of the values.
ClusterAdminUsers: These are the IAM usernames that will have access to the Kubernetes cluster through the AWS UI or kubectl. This value is technically optional, but you may have trouble accessing your cluster (because you won’t be able to retrieve login information) if you leave it blank. Note that the IAM users specified in this parameter do not need to exist before the deployment process; they can be added to the AWS account (with appropriate permissions) after deployment is completed. We suggest using the fuzzballAdmin IAM user created in the previous section.
SSOAdminRoleArn: This is the ARN that corresponds to the role your users assume when they log in via AWS SSO (IAM Identity Center). If you plan to connect to the cluster as an individual IAM user, you can ignore this parameter and follow the instructions for ClusterAdminUsers above.
Domain: This is the domain name of the Hosted Zone that you should have configured in Route53 by either purchasing a new domain, or creating a new Hosted Zone and adding it to the DNS records of your existing top-level domain. See the section on setting up a hosted zone with Route53 in the appendices for more details.
DomainHostedZoneID: This is the actual ID of the Hosted Zone that you should have configured in Route53. Once again, see the section on setting up a hosted zone with Route53 in the appendices for more details.
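If you want to retrieve the Hosted Zone ID from the command line rather than the console, the sketch below shows one way. The `aws route53 list-hosted-zones-by-name` call and the `example.com` domain are illustrative; substitute your own domain. The runnable part demonstrates only the string handling, using an example value in place of real command output:

```shell
# Look up the Hosted Zone ID for a domain (example.com is a placeholder):
#   aws route53 list-hosted-zones-by-name --dns-name "example.com." \
#       --query 'HostedZones[0].Id' --output text
# The call returns a path like the example below; DomainHostedZoneID is the
# final path component.
ZONE_ID_PATH="/hostedzone/Z0123456789ABC"   # example value returned by the call
echo "${ZONE_ID_PATH##*/}"                  # strips the /hostedzone/ prefix
```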
PostgresEngineVersion: Fuzzball supports the v16.x series of the PostgreSQL engine. New point versions within the v16.x series are regularly made available, and older versions are automatically deprecated when newer versions are released. This means that the version pre-filled in the CloudFormation form may be deprecated. To avoid this issue, it is recommended that you run the following command and locate the latest version in the v16.x series. In this example, you would place 16.6 in the PostgresEngineVersion field.

$ aws rds describe-db-engine-versions --engine postgres \
    --query 'DBEngineVersions[].EngineVersion' --output json | \
    grep "^\s*\"16\.[0-9]*\""
"16.4",
"16.5",
"16.6",

EnableRDSDeletionProtection: When this parameter is set to true, it prevents the AWS RDS instance deployed by the CloudFormation template from being deleted. By default, this parameter is set to false.
EnableRDSDRSnapshot: When this parameter is set to true, it enables automated RDS backup replication in a secondary region. By default, this parameter is set to true.
StandardComputeNodeInstanceType, AdvancedComputeNodeInstanceType: Instance types for which you can automatically create a node description for the provisioner once your Fuzzball cluster is deployed. The defaults we provide are a CPU node and a single-GPU node, but you can select any two instance types appropriate for your workloads. If you choose other instance types, make sure your account has sufficient quotas for them. You can also add additional node types manually once Fuzzball has been deployed.
In addition, there are parameters that relate to third-party components that Fuzzball utilizes within its stack. Below are these parameters with more context about the dependencies they correspond to.
Keycloak: Fuzzball utilizes Keycloak as its main authentication provider. A Keycloak instance is deployed by the operator and torn down with the cluster. Keycloak provides Fuzzball with SSO, user management, and secure authentication/authorization for Fuzzball users and services.
KeycloakRealmName: The name of the Keycloak realm to be created for your deployment. The realm name will also be used as the organization name by Fuzzball.
KeycloakOwnerEmail: The email address of the owner or administrator for the Keycloak realm.
KeycloakUsername: The username for the Keycloak admin account. This account will have administrative privileges within the master realm. By default, the username will be “keycloak”.
KeycloakPassword: The password for the Keycloak admin account (KeycloakUsername).
KeycloakDefaultUserPassword: The default password assigned to new users created in the Keycloak realm, including the organization owner.
Let’s Encrypt: Fuzzball utilizes Let’s Encrypt to automatically provision and manage TLS certificates, enabling secure HTTPS connections for its services. Let’s Encrypt is deployed by the operator and torn down with the cluster.
LetsEncryptEmail: The email address that will be associated with the Let’s Encrypt account for your deployment.
Once you’ve filled in the appropriate values and pressed next, you will be presented with several optional configuration parameters. You can fill these in as your use-case dictates, and then check the two acknowledgement boxes at the bottom of the page if you understand the permissions you are giving Fuzzball.

After clicking “Next” you will be presented with one more screen to review all of the parameters you provided and create your cluster.

Once you are satisfied with the options you have chosen you can select “Submit”. You will be presented with a screen to watch your cluster as it is deployed. There are several options available for watching the various services in your stacks and nested stacks as they are deployed.

During the deployment process, CloudFormation establishes all of the resources necessary to host the Fuzzball stack, and then the Fuzzball Kubernetes operator deploys the Kubernetes Orchestrate platform. These steps take about 90 minutes and 30 minutes respectively, so the entire deployment usually takes around 2 hours to complete.

Once the main stack reports the CREATE_COMPLETE status, you can begin configuring and using Fuzzball.
In order to run the deployment, a few IAM roles must be created to establish the required permissions. Each of these roles has its AssumeRolePolicy scoped to the relevant AWS service and therefore cannot be used by a user. In addition, each policy is scoped either to a specific set of resources or requires explicit tagging in order to apply, meaning the roles can only affect Fuzzball resources in your account.
ECSRunnerLambdaExecutionRole: This role defines the permissions the Lambda function needs to trigger the Pulumi runner. It can only be assumed by the Lambda service. It focuses on granting permission to start and manage ECS tasks in the ECS cluster.
FuzzballPulumiRunnerTaskExecutionRole: This role defines the permissions needed to create the container the Pulumi runner will use. It is used by the FuzzballPulumiRunnerTaskDefinition and can only be assumed by the ECS service. It grants access to pull the image from our marketplace repository and attaches some EC2 resources to the container.
FuzzballPulumiRunnerTaskRole: This role defines the permissions needed to run the Pulumi program that deploys Fuzzball and its dependencies. Because of the varied resource footprint, it mentions many actions. The permissions themselves are broken up into multiple managed policies also included in the template (FuzzballPulumiRunnerTaskRolePolicyECRS3KMS, FuzzballPulumiRunnerTaskRolePolicySTSRDSEFS, etc.). It can only be assumed by the ECS service.
As an alternative to the CloudFormation console workflow above, you can deploy and manage Fuzzball
on AWS directly from the Fuzzball CLI using the fuzzball cluster aws subcommands.
Before deploying, run the preflight check to verify that required AWS service-linked IAM roles exist and review quota guidance:
$ fuzzball cluster aws preflight

Use --provision-roles to automatically create any missing roles:

$ fuzzball cluster aws preflight --provision-roles

Deploy a new cluster interactively:

$ fuzzball cluster aws deploy

For non-interactive deployments, supply all parameters as flags. Use --instance-types to specify a comma-separated list of instance types to configure during initial cluster setup:

$ fuzzball cluster aws deploy \
    --domain "$DOMAIN" \
    --organization-admin "admin@example.com" \
    --instance-types "t3a.2xlarge,p3.2xlarge" \
    --non-interactive

Preflight checks run automatically before deploy. Pass --skip-preflight to bypass them.
Once deployed, the following subcommands let you manage the cluster:
| Subcommand | Description |
|---|---|
| update | Update an existing deployment to a new version |
| delete | Delete a deployment and all associated AWS resources |
| status | Show the CloudFormation stack status and recent events |
| info | Show deployment details including cluster URLs and kubectl context commands |
| list | List all Fuzzball deployments in the account |
| logs | Stream pod logs from the EKS cluster |
| cleanup | Remove orphaned AWS resources using tag-based discovery |
Use fuzzball cluster aws <subcommand> --help for the full list of options for each command.
If there are any left-over resources from a previous deployment in your GCP project, they may interfere with your ability to deploy a fresh Fuzzball cluster. Please make sure that all resources from previous deployments have been fully and successfully deleted before initiating a new one. You can use the following command to forcibly remove old Fuzzball resources after destroying your GCP deployment if necessary:
$ fuzzball cluster gcp cleanup
To deploy Fuzzball in GCP, you must log in via Helm to the OCI registry that you set up during the Requirements section.
To create an access token and use it to log into the OCI registry via Helm, execute the following commands:
$ gcloud auth configure-docker ${REGION}-docker.pkg.dev
$ gcloud auth print-access-token | helm registry login ${REGION}-docker.pkg.dev \
    --username oauth2accesstoken \
    --password-stdin

This token is short-lived (~1 hour). If a deployment takes longer than that, re-run helm registry login with a fresh token. The gcloud auth configure-docker step configures the OCI credential helper and only needs to be run once.
Once you are logged in, you are ready to deploy Fuzzball!
The simplest way to deploy is using interactive mode.
Include the --dry-run flag to get an idea of what will happen before you actually execute the command.

$ fuzzball cluster gcp deploy

The CLI will prompt you for all required parameters, including the following:
- GCP project
- Region and zone
- Fuzzball version (defaults to the CLI version if omitted)
- Domain name
- Organization owner email
For non-interactive deployments, you can use the following. (See the Requirements section for information on setting these environment variables.)
$ fuzzball cluster gcp deploy \
--project "$PROJECT_ID" \
--region "$REGION" \
--zone "$ZONE" \
--version "$VERSION" \
--domain "$SELECTED_DOMAIN" \
--dns-zone-name "$MANAGED_ZONE" \
--dns-zone-project "$PROJECT_ID" \
--keycloak-owner-email "admin@example.com" \
--deployment-name "unique-name" \
--instance-types=n1-standard-4,n2-standard-8,g2-standard-8 \
    --non-interactive

For a full list of the options and arguments that can be specified during deployment, use the
fuzzball cluster gcp deploy --help command.
A successful deployment will create a Fuzzball CLI context for the cluster in
${XDG_CONFIG_HOME:-$HOME/.config}/fuzzball/fuzzball.yaml and print out a basic summary of
the cluster properties including commands to set up kubectl access to the underlying
Kubernetes cluster (see below). You can also obtain similar information later by running
$ fuzzball cluster gcp info --project $PROJECT_ID --region $REGION
SUCCESS GCP authentication verified for project 'PROJECT'
INFO Searching for Fuzzball deployments to view info for...
SUCCESS Found 1 Fuzzball deployment(s):
1. DEPLOYMENT_NAME (Version: v3-3-0, Region: us-central1, Status: ACTIVE)
INFO Using deployment: DEPLOYMENT_NAME
INFO Deployment Information
Deployment: DEPLOYMENT_NAME
Project: PROJECT
Region: REGION
Status: ACTIVE
Version: v3.3.0
Domain: DOMAIN
Created: 2026-04-08t09-20-19z
INFO Cluster URLs
API: https://api.DOMAIN
UI: https://ui.DOMAIN
Keycloak: https://auth.DOMAIN
INFO Context Configuration
To connect the Fuzzball CLI to this deployment, run:
fuzzball context create DEPLOYMENT_NAME \
--api-url https://api.DOMAIN \
--auth-url https://auth.DOMAIN
fuzzball context use DEPLOYMENT_NAME

You can use Kubernetes through the kubectl command to monitor your cluster as it deploys and to manage the underlying pods and resources. First you need to configure your local kubectl installation to use your GCP deployment. Issue the following commands:
$ gcloud container clusters list --project=$PROJECT_ID --region=$REGION

This will give you your cluster name. Now you can use it to execute the following:
$ gcloud container clusters get-credentials <cluster-name> --project $PROJECT_ID --region $REGION

You may need to install the gke-gcloud-auth-plugin package and rerun the command above if it fails.
You now have enough information to proceed to the Initial Configuration section and log in and finish setting your cluster up.
At this point you can also run commands like the following to monitor your deployment and check the health of your cluster:
$ kubectl logs -l app.kubernetes.io/name=fuzzball-operator -n fuzzball-system -f --tail=-1
$ kubectl get pods -n fuzzball

The fuzzball cluster gcp info command or the deployment-info-wait.sh script referenced in the initial login section displays a command to create the appropriate context to add and log into your cluster, along with the credentials needed for the automatically provisioned users, including the cluster admin user. Once you have done so, you can use the fuzzball command directly to monitor and manage many aspects of your deployment.
For instance, the list, status, info, and logs commands allow you to view information about
your running deployment(s).
The update, destroy, and cleanup commands allow you to manage your cluster directly.
Use the --help flag in the CLI to list these commands and see information about running each of
them.
Support for CoreWeave within Fuzzball is in preview status and is currently subject to more rapid change to address customer requirements than other features of Fuzzball. If you are interested in using Fuzzball on CoreWeave, we recommend contacting CIQ as part of your deployment planning process.
After fulfilling the prerequisites listed in the requirements and discovering your cluster’s domain using the domain discovery procedure, you are ready to deploy Fuzzball on CoreWeave.
The Fuzzball operator manages the deployment and lifecycle of Fuzzball on your Kubernetes cluster.
First, use your Depot credentials to authenticate with the Helm registry:
$ DEPOT_USER="your-depot-username"
$ ACCESS_KEY="your-depot-access-key"

Replace your-depot-username and your-depot-access-key with the credentials provided by CIQ.

$ helm registry login depot.ciq.com --username "${DEPOT_USER}" --password "${ACCESS_KEY}"

Install the Fuzzball operator using the Helm chart from the depot:
$ VERSION="v3.3.0"
$ CHART="oci://depot.ciq.com/fuzzball/fuzzball-images/helm/fuzzball-operator"
$ STORAGE_CLASS="shared-vast"

$ helm upgrade --install fuzzball-operator "${CHART}" \
--namespace fuzzball-system \
--create-namespace \
--version "${VERSION}" \
--set "image.tag=${VERSION}" \
--set "imagePullSecrets.name=repository-ciq-com" \
--set "imagePullSecrets.inline.registry=depot.ciq.com" \
--set "imagePullSecrets.inline.username=${DEPOT_USER}" \
--set "imagePullSecrets.inline.password=${ACCESS_KEY}" \
    --set "storageClassName=${STORAGE_CLASS}"

Check that the operator pod is running:
$ kubectl get pods -n fuzzball-system
NAME READY STATUS RESTARTS AGE
fuzzball-operator-controller-manager-xxxxx-xxxxx   2/2     Running   0          2m

The expected output shows the operator pod in the Running state with 2/2 containers ready.
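If you want to script this readiness check, a small awk filter over the `kubectl get pods` columns works; a sketch that parses sample output here (pipe real `kubectl get pods -n fuzzball-system` output in its place — the column layout is READY in field 2 and STATUS in field 3):

```shell
# Parse (sample) 'kubectl get pods' output: column 2 is READY, column 3 is STATUS
printf 'fuzzball-operator-controller-manager-abc12-xyz34 2/2 Running 0 2m\n' | \
    awk '$2 == "2/2" && $3 == "Running" { print "operator ready" }'
# prints "operator ready"
```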
Create a FuzzballOrchestrate custom resource to deploy Fuzzball on CoreWeave. Here’s a complete
example configuration:
apiVersion: deployment.ciq.com/v1alpha1
kind: FuzzballOrchestrate
metadata:
  name: fuzzball-coreweave
  namespace: fuzzball-system
spec:
  # Image registry configuration
  image:
    username: <depot-username>
    password: <depot-password>
    exclusive: false
  # Database configuration
  database:
    create:
      enableDebugPod: false
      storage:
        class: shared-vast
  # Fuzzball version and cluster configuration
  fuzzball:
    version: v3.3.0
    cluster:
      name: fuzzball-coreweave
  # CoreWeave provisioner configuration
  orchestrator:
    provisioner:
      enabled: true
      coreweave:
        enabled: true
    storage:
      accessMode: ReadWriteMany
      class: shared-vast
      size: 100Gi # Adjust based on workflow needs
  # Shared container image cache
  config:
    sharedPVC:
      accessMode: ReadWriteMany
      class: shared-vast
      size: 10Gi # Adjust based on caching needs
  # Ingress and networking configuration
  ingress:
    create:
      # Use your discovered CoreWeave domain
      domain: <YOUR_COREWEAVE_DOMAIN>
      proxy:
        type: LoadBalancer
        annotations:
          # Public LoadBalancer for internet access
          service.beta.kubernetes.io/coreweave-load-balancer-type: public
          # Wildcard DNS for all services
          service.beta.kubernetes.io/external-hostname: '*.<YOUR_COREWEAVE_DOMAIN>'
  # Keycloak identity management
  keycloak:
    create:
      createDatabase: true
      # Generate with: uuidgen
      realmId: <uuid-v4-realm-id>
      ownerEmail: <owner-email>
      # Change this password after first login!
      defaultUserPassword: <initial-user-password>
  # TLS certificate configuration
  tls:
    # cert-manager for certificate management
    certManager:
      create: {}
    # trust-manager for CA certificate distribution
    trustManager:
      create: {}
    # Let's Encrypt certificate issuer
    ingressIssuer:
      create:
        letsEncrypt:
          email: <letsencrypt-email>
          issuer: letsencrypt-prod
Replace the placeholder values with your configuration. Use the domain discovered in the domain discovery procedure.
For detailed explanations of these configuration options and additional settings, see the CRD reference material.
If your cluster already has cert-manager installed, you can configure Fuzzball to use it instead of deploying a new instance. See Deploying with External cert-manager for details.
Save the configuration to a file and apply it with kubectl:
$ kubectl apply -f fuzzball-orchestrate-coreweave.yaml

Watch the deployment status:

$ kubectl get fuzzballorchestrate -A -w

Wait until the status shows Ready. This typically takes several minutes for a full deployment.
Check the deployed resources:
$ kubectl get fuzzballorchestrate -A
$ kubectl get pvc -n fuzzball
$ kubectl get pods -n fuzzball

For detailed deployment status and troubleshooting, check the operator logs:

$ kubectl logs -l control-plane=fuzzball-operator-controller-manager -n fuzzball-system -f

The deployment steps above configure dynamic provisioning, where Fuzzball creates and destroys CoreWeave nodes on demand. If you prefer to manage node pools yourself for predictable capacity or faster startup times, see Static Node Pool Provisioning for an alternative deployment approach.
After you complete your deployment, you can proceed to the Initial Configuration section to get your cluster ready to run workflows!