Alpha testing tutorials
Introduction: Why this matters
In most engineering orgs today, developers are blocked by ticket-based provisioning or overly rigid CI/CD flows. The new Orchestrator helps platform teams enable dynamic, policy-enforced environments that still respect governance, identity, and existing infra setups.
In this tutorial, you’ll step into the shoes of a developer or platform engineer at a fictional company, Acme Corp, learning how to:
- Create projects and environments
- Understand how dynamic configuration works
- Deploy workloads with environment-specific values
Preparation: local test setup
You’re testing the first alpha release of the Canyon Orchestrator. Everything runs locally using Kubernetes-in-Docker (kind), simulating a real platform experience.
Prerequisites
Ensure you have:
- Docker installed (Linux | Mac)
- kind installed (brew install kind or Linux setup)
- (Optional but helpful) kubectl: Install guide
- Terminal access
📺 Need a walkthrough? Watch: Video walkthrough and demo
Installation instructions
- Download the latest release-X.tar archive: Download link
- 🔐 Password: deploywithlove<3
- Extract the archive into a new directory (e.g., ~/Downloads/release-X)
- Open a terminal and navigate into the extracted directory:
cd ~/Downloads/release-X
- Run the install script:
./install.sh
- Set your environment variables:
export PATH=$(pwd):${PATH}
canyon config set-org alpha
canyon config set-url http://localhost:30080
- Verify everything is working:
canyon list projects
✅ If no error occurs, you’re good to go!
Teardown
Done with your testing, or want to update to newer software versions?
- In the terminal, run:
kind delete cluster --name canyon
- Download the latest release-X.tar and follow the instructions again
Documentation, support and feedback
Got stuck? Head over to our Slack space to get support.
Curious how it all works? Take a quick glance at the core concepts of the Orchestrator.
Got feedback? Please share it here.
Tutorial 1 - How developers get dynamic configuration
Learn how the Orchestrator empowers self-service and policy-driven workload management.
Context: Why this matters
It is common for software development projects to use a sequence of stages, or environments, for the continuous release and testing of their artefacts. The Orchestrator meets project teams in this development lifecycle by offering a configurable project/environment structure. A core capability of the Orchestrator is then to perform deployments into environments and to dynamically adjust the parts of each deployment that are environment-specific.
This tutorial shows how the Orchestrator:
- Lets users maintain a project/environment structure
- Performs deployments into an environment
- Applies environment-specific, dynamic configuration of each deployment
- Promotes a deployment from one environment to the next
Step 1: Create a project
Projects represent application or team boundaries. Let’s create one:
canyon create project my-project
View the project:
canyon get project my-project
canyon list projects
Check environments:
canyon list environments my-project
🧩 Challenge: There’s another project in the system. Can you find and list its environments?
Step 2: Add a development environment
Projects use environment types (like development, staging, prod) to define shared policies.
canyon create environment my-project development --set env_type_id=development
🧩 Challenge: What other environment types are available?
Step 3: Understand the manifest
Manifests define the desired state of an environment: workloads + shared resources.
Let’s try:
canyon get manifest my-project development
❌ You’ll get an error - there’s no deployment yet.
Step 4: Create and deploy an empty manifest
Create a file named manifest.yaml:
workloads: {}
Deploy it:
canyon deploy my-project development ./manifest.yaml
This will succeed, but deploy nothing (empty manifest).
🧩 Challenge: Can you inspect the manifest of another project’s environment?
Step 5: Add a workload with dynamic config
List available resource types for this environment:
canyon list available-resource-types my-project development
Look for environment - this resource provides context variables like color and animal.
Update your manifest.yaml:
workloads:
  main:
    resources:
      env:
        type: environment
    variables:
      KEY: value
      COLOR: ${resources.env.outputs.values.color}
      ANIMAL: ${resources.env.outputs.values.animal}
Deploy it:
canyon deploy my-project development ./manifest.yaml
By default, outputs are not returned (for security). Use:
canyon deploy my-project development ./manifest.yaml --output -
Outputs are securely passed via ephemeral encryption and never stored by the Orchestrator.
🧩 Challenge: Try adding these flags to see how behavior changes:
--dry-run
--no-prompt
--plan-only
Step 6: Promote to production
Let’s simulate promoting your workload from development to production.
Create a prod environment:
canyon create environment my-project production --set env_type_id=production
Promote:
canyon deploy my-project production environment://development
To fetch outputs:
canyon deploy my-project production deployment://HEAD --output -
You’ll now see that outputs differ between dev and prod - a core feature of dynamic config by context.
Step 7: Understand why this worked
You can create projects and environments and deploy to them. But why did you see different values in production, even though the deployment was promoted from development without any changes?
The key is that Orchestrator objects may contain dynamic elements, such as a reference to “the current environment”. These elements resolve to an environment-specific value, producing a different result without requiring any code changes in the application. In this tutorial, the install script created a set of Kubernetes secrets named after the environments, and each deployment reads its values from the secret for “the current environment”. In a real-world setup, dynamic configuration can change anything from a simple value to the size or even the implementation of a provisioned resource.
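If you want to see this for yourself, you can inspect those secrets with kubectl. This is only a sketch: the exact secret names and namespace depend on what the install script actually created.
# List secrets across namespaces and look for ones named after your environments
kubectl get secrets --all-namespaces
# Inspect the secret backing the development environment (name and namespace are assumptions)
kubectl get secret development -n <namespace> -o yaml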
Final Challenge: Inspect what matched
Use the CLI to inspect how the environment
resource was resolved.
# Find matching module and rule
canyon get manifest my-project production
# Identify the resource → get its module ID
canyon get module <module_id>
# Inspect the rule that caused this match
canyon list rules
You’ll see how modules and rules enable dynamic provisioning logic.
Recap
You’ve:
- Created a project and environments
- Understood the purpose of environment types
- Deployed a manifest with dynamic variables
- Promoted a manifest across environments
- Seen how environment context drives configuration
Submit Feedback
Submit your feedback - especially if something didn’t work or felt confusing.
Tutorial 2 - Deploying a Score application with real infrastructure dependencies
Learn how the Orchestrator turns a declarative workload spec into real provisioned infrastructure.
Context: Why this matters
In Tutorial 1, you saw how dynamic configuration is injected based on the environment context. But real workloads don’t run in a vacuum - they depend on infrastructure: storage buckets, secrets, access keys, and compute runtimes.
This tutorial shows how the Orchestrator:
- Converts a Score specification into a deployable workload
- Provisions dependent infrastructure resources in the right order
- Injects outputs dynamically into your running container
System setup recap
The installation script (install.sh) from Tutorial 1 already set up:
- A local Kubernetes cluster via Kind
- An in-cluster MinIO object store (S3-compatible)
- A module to convert Score specs into Kubernetes deployments
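Before continuing, you can verify that setup is still in place. A minimal check, assuming the cluster name from the teardown instructions and that MinIO runs somewhere in the cluster:
# The kind cluster created by install.sh (expected name: canyon)
kind get clusters
# Look for the in-cluster MinIO pods (namespace unknown here, so search everywhere)
kubectl get pods --all-namespaces | grep -i minio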
Step 1: Understand what you’re about to deploy
In this tutorial, you’ll:
- Deploy a Score-defined app that lists files in an S3 bucket
- Use the Orchestrator to provision:
- The S3 bucket
- An access key for that bucket
- The application container, with the key injected as variables
The runtime will be your local Kind cluster. The MinIO bucket acts as our S3 implementation.
Step 2: Verify modules are set
Check that the score-k8s module is in place:
canyon get module score-k8s
This points to a module that knows how to take a Score workload and deploy it to Kubernetes using Terraform: https://github.com/humanitec/module-definition-library/tree/preview/score-workload/kubernetes
Step 3: Write your Score file
Create a new file named score.yaml:
apiVersion: score.dev/v1b1
metadata:
  name: my-app
containers:
  main:
    image: minio/mc
    command: ["/bin/sh", "-c", "mc alias set minio http://minio:9000 $BUCKET_ACCESS_KEY_ID $BUCKET_SECRET_KEY; while true; do date; mc ls --summarize minio/$BUCKET; sleep 5; done"]
    variables:
      BUCKET_ACCESS_KEY_ID: ${resources.bucket-key.access_key_id}
      BUCKET_SECRET_KEY: ${resources.bucket-key.secret_key}
      BUCKET: ${resources.bucket.bucket}
resources:
  bucket:
    type: s3
  bucket-key:
    type: s3-access-key
    params:
      for: ["${resources.bucket.bucket}"]
What this app does: Runs an infinite loop in a container that lists contents of a provisioned bucket using credentials injected at runtime.
Step 4: Deploy the Score app
We’ll deploy into the same project and environment from Tutorial 1:
canyon score deploy my-project development ./score.yaml
This command:
- Patches the manifest
- Adds a new workload my-app
- Links it to s3 and s3-access-key resources
Step 5: Verify deployment
Check that the Kubernetes deployment exists:
kubectl get deployment my-app
Check container logs:
kubectl logs deployment/my-app
You should see lines like:
Total Size: 0 B
Total Objects: 0
Challenge: Inspect the deployment internals
Where do those environment variables come from?
kubectl get deployment my-app -o yaml
kubectl exec deployment/my-app -- env | grep BUCKET
Then, inspect the resulting manifest:
canyon get manifest my-project development
You should now see:
- A bucket resource (MinIO)
- A bucket-key (access credentials)
- A score-workload (your application)
Step 6: Understand why this worked
Let’s trace what just happened:
Why did the s3 resource create a MinIO bucket?
canyon list available-resource-types my-project development
The s3 type is mapped to the in-cluster-minio module.
Check the module:
canyon get module in-cluster-minio
It uses: https://github.com/humanitec/module-definition-library/tree/preview/s3/minio
How is it connected to the real MinIO backend?
canyon get provider minio default
The provider is configured to connect to the in-cluster MinIO service, with auth values injected from Kubernetes secrets.
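If you want to trace those auth values, you can look for the backing Kubernetes secrets yourself. A rough sketch - the secret names and namespaces are assumptions, not guaranteed by the install script:
# Search all namespaces for MinIO-related secrets
kubectl get secrets --all-namespaces | grep -i minio
# Inspect the keys of a candidate secret (fill in what you found above)
kubectl get secret <secret-name> -n <namespace> -o jsonpath='{.data}'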
Why did the Score workload get deployed to Kubernetes?
Score supports multiple runtimes, but in your current config:
canyon get module score-k8s
…is enforced by a rule for the score-workload resource type.
It uses Terraform code that translates Score into a valid Kubernetes Deployment: https://github.com/humanitec/module-definition-library/blob/preview/score-workload/kubernetes/main.tf
Recap
You’ve:
- Written a declarative Score spec
- Let the Orchestrator provision infra dependencies
- Verified a working deployment to Kubernetes
- Traced exactly how this happened via modules and providers
Final Challenge
Use get and list commands to:
- Explore how the right modules were matched
- Inspect the modules and rule that triggered the S3 bucket deployment
- Find how secrets were injected into the workload
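If you need a starting point, the commands you have already used in this tutorial cover all three questions. One possible sequence (not the only one):
# See which module backs the s3 type in this environment
canyon list available-resource-types my-project development
canyon get module in-cluster-minio
# Find the rule that matched it
canyon list rules
# Check how the credentials ended up in the running container
kubectl get deployment my-app -o yaml
kubectl exec deployment/my-app -- env | grep BUCKET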
Submit Feedback
Submit your feedback - especially if something didn’t work or felt confusing.
Next Up
Tutorial 3 - Working with the active resource graph
Learn how to view and query the resource graph of an environment.
You’ll explore how the Orchestrator represents workloads and dependencies as a graph, and how to navigate and debug resource relationships.
Tutorial 4 - Writing modules
Understand how platform engineers can build reusable Terraform-based modules.
You’ll create and share modules that can provision infrastructure or deploy workloads across any environment that supports a Terraform provider - cloud or on-prem.