Get started
On this page
- What is the Platform Orchestrator?
- Prerequisites
- Create an organization
- Install the hctl CLI
- Set up demo infrastructure
- Choose your tool to manage the Orchestrator
- Set up the TF Orchestrator provider
- Create a runner
- Create a project and environment
- Create a module for a workload
- Create a manifest
- Perform a deployment
- Inspect the running workload
- Create a module for a database
- Add the database to the manifest
- Re-deploy
- Clean up
- Recap
- Next steps
Get started using the Platform Orchestrator by following this step-by-step guide to explore all essential concepts and perform your first hands-on tasks.
At the end of this tutorial you will know how to:
- Set up your infrastructure to run Orchestrator deployments
- Describe an application and its resource requirements
- Use the Orchestrator to deploy a workload
- Use the Orchestrator to deploy dependent infrastructure resources like a database
- Observe the results and status of deployments
- Create a link from the Orchestrator to real-world resources
Duration: about 45min
What is the Platform Orchestrator?
Humanitec’s Platform Orchestrator unifies infrastructure management for humans and AI agents, streamlining operations, enforcing standards, and boosting developer productivity in complex environments.
Prerequisites
The tutorial currently supports using these cloud providers:
- Amazon Web Services (AWS)
To walk through this tutorial, you’ll need:
- An account in one of the supported cloud providers to provision some temporary infrastructure to run and deploy resources
- Either the Terraform CLI or the OpenTofu CLI installed locally
- Git installed locally
- (optional) kubectl and helm installed locally
Amazon Web Services (AWS)
When using AWS, you’ll also need:
- Access to an AWS project that has sufficient quota to run a Kubernetes cluster and a small RDS database
- The AWS CLI installed and authenticated. The tutorial will use this authenticated user to set up the base infrastructure on AWS
Create an organization
Organizations are the top level grouping of all your Orchestrator objects, similar to “tenants” in other tools. Your real-world organization or company will usually have one Orchestrator organization.
If you already have an organization, you’re good. Otherwise, register for a free trial organization now.
Once done, you will have access to the Orchestrator console at https://console.humanitec.dev.
Install the hctl CLI
Follow the instructions on the CLI page to install the hctl CLI if it is not already present on your system.
Ensure you are logged in. Execute this command and follow the instructions:
hctl login
Ensure that the CLI is configured to use your Orchestrator organization:
hctl config show
If the default_org_id is empty, set the organization to your value like this:
hctl config set-org your-org-id
All subsequent hctl commands will now be targeted at your organization.
Set up demo infrastructure
This tutorial requires some demo infrastructure to run things. Most notably, it creates a publicly accessible Kubernetes cluster to host both a runner component and to deploy a workload to.
We have prepared a public repository with Terraform/OpenTofu (“TF”) code to set it up in the cloud of your choice.
Clone the repository and navigate to the infra directory:
git clone https://github.com/humanitec-tutorials/get-started
cd get-started/infra
The repository comes with a terraform.tfvars.template file. Copy this file and remove the template extension:
cp terraform.tfvars.template terraform.tfvars
Edit the new terraform.tfvars file and provide all necessary values as instructed by the inline comments. This also includes your choice of infrastructure or cloud provider.
Initialize your Terraform or OpenTofu environment and run apply:
Terraform
terraform init
terraform apply
OpenTofu
tofu init
tofu apply
Check the output to see which resources will be created, and confirm. This may take five minutes or more.
To continue with the tutorial while the apply is being executed, open a new shell and navigate to the ./orchestrator directory:
cd <your-repo-checkout-location>/get-started/orchestrator
You now have:
- ✅ Created a generic infrastructure which will serve to run deployments and spin up resources
Choose your tool to manage the Orchestrator
Throughout the next steps, you will set up some configuration objects in the Platform Orchestrator.
You generally have a choice of tools for maintaining Orchestrator configuration:
- Using Terraform or OpenTofu code and the corresponding Platform Orchestrator provider
- Using the hctl CLI commands
We recommend option 1 because maintaining your Orchestrator estate via TF has all the advantages of infrastructure-as-code (IaC). It is repeatable, sustainable, and easy to maintain, version, and evolve. If you choose this option, you will gradually expand the demo repository code, simulating the real-life scenario of adding the Orchestrator setup to your IaC estate.
Option 2 is generally useful for urgent fixes and quick results, e.g., if you want to proceed through all steps as quickly as possible.
Option 2 will also require kubectl and helm to be available on your system.
Make your choice and follow it for the remainder of the tutorial.
All read operations will generally use the CLI.
Set up the TF Orchestrator provider
Note: For simplicity, the TF code uses a local backend to store the state. You can clean up the entire setup at the end of this tutorial to safely remove it again.
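For reference: the local backend is what Terraform/OpenTofu uses by default when no backend is declared. If you prefer to make this explicit, an optional declaration (not required by the tutorial code) could look like this:
terraform {
  # Optional: declare the default local backend explicitly
  backend "local" {
    path = "terraform.tfstate"
  }
}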
Copy variable values from the demo infrastructure
Copy the existing terraform.tfvars file from the infra directory into the present directory to re-use its values:
cp ../infra/terraform.tfvars .
Prepare a service user and token
All requests to the Orchestrator must be authenticated. To this end, the Orchestrator lets you create “Service users” with an associated token that you can supply to code-based interactions such as the Orchestrator TF providers.
- Open the Orchestrator web console at https://console.humanitec.dev/
- Select the Service users view
- Click Create service user
- Enter “get-started-tutorial” as the name and keep the default expiry
- Click Create to display the token and leave it open
- Open the terraform.tfvars file (the one in the current orchestrator directory) and append the following configuration, replacing all values with yours. Obtain the token value from the open Orchestrator console:
# ===================================
# Platform Orchestrator configuration
# ===================================
# Your Orchestrator organization ID
orchestrator_org = "replace-with-your-orchestrator-organization-id"
# Your Orchestrator API token
orchestrator_auth_token = "replace-with-your-token-value"
- Save the file. Click OK in the console to close the token display
Prepare the TF core setup
You will now set up the Platform Orchestrator provider for Terraform/OpenTofu to maintain Orchestrator resources in your organization.
Create a new file named providers.tf in the current orchestrator directory, containing this content:
terraform {
required_providers {
# Provider for managing Orchestrator objects
platform-orchestrator = {
source = "humanitec/platform-orchestrator"
version = "~> 2"
}
}
}
Create a new file named orchestrator.tf in the current orchestrator directory, containing this configuration:
# Configure the Platform Orchestrator provider
provider "platform-orchestrator" {
org_id = var.orchestrator_org
auth_token = var.orchestrator_auth_token
api_url = "https://api.humanitec.dev"
}
This configuration sets up the Platform Orchestrator provider for TF, using the service user's token to authenticate against the Orchestrator.
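The provider block references var.orchestrator_org and var.orchestrator_auth_token. The repository's orchestrator directory should already declare the variables used throughout this tutorial; if your checkout does not, a minimal sketch for the two referenced here could look like this (illustrative only, matching the tfvars keys above):
variable "orchestrator_org" {
  description = "Orchestrator organization ID"
  type        = string
}

variable "orchestrator_auth_token" {
  description = "Orchestrator API token of the service user"
  type        = string
  sensitive   = true
}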
You now have:
- ✅ Configured the Orchestrator TF provider locally so that you can now manage Orchestrator objects via IaC
Create a runner
Runners are used by the Orchestrator to execute Terraform/OpenTofu modules securely inside your own infrastructure.
You will now define a runner that uses the demo infrastructure created for this tutorial.
In a later step, you will also define a runner rule, which tells the Orchestrator when to use this runner.
There are different runner types. This tutorial uses the kubernetes-agent runner, which works on all kinds of Kubernetes clusters. It is installed into the demo cluster via a Helm chart and, for simplicity, will maintain TF state in Kubernetes secrets.
Note: this step requires the apply for the demo infrastructure to be finished. Verify now that this is the case by checking your shell, which may be in another window.
Make sure you are in the orchestrator directory in your shell.
Create a public/private key pair for the runner to authenticate against the Orchestrator:
openssl genpkey -algorithm ed25519 -out runner_private_key.pem
openssl pkey -in runner_private_key.pem -pubout -out runner_public_key.pem
Then continue with the tool of your choice:
Prepare additional variables. This command will create a new variables file orchestrator.tfvars, using values from the outputs of the prior demo infrastructure setup:
Terraform
cat << EOF > orchestrator.tfvars
k8s_cluster_name = "$(terraform output -state "../infra/terraform.tfstate" -json | jq -r ".k8s_cluster_name.value")"
cluster_ca_certificate = "$(terraform output -state "../infra/terraform.tfstate" -json | jq -r ".cluster_ca_certificate.value")"
k8s_cluster_endpoint = "$(terraform output -state "../infra/terraform.tfstate" -json | jq -r ".k8s_cluster_endpoint.value")"
agent_runner_irsa_role_arn = "$(terraform output -state "../infra/terraform.tfstate" -json | jq -r ".agent_runner_irsa_role_arn.value")"
EOF
OpenTofu
cat << EOF > orchestrator.tfvars
k8s_cluster_name = "$(tofu output -state "../infra/terraform.tfstate" -json | jq -r ".k8s_cluster_name.value")"
cluster_ca_certificate = "$(tofu output -state "../infra/terraform.tfstate" -json | jq -r ".cluster_ca_certificate.value")"
k8s_cluster_endpoint = "$(tofu output -state "../infra/terraform.tfstate" -json | jq -r ".k8s_cluster_endpoint.value")"
agent_runner_irsa_role_arn = "$(tofu output -state "../infra/terraform.tfstate" -json | jq -r ".agent_runner_irsa_role_arn.value")"
EOF
Verify all values are set. If they are not, the apply run for the demo infrastructure may not be finished yet. Wait for this to be done, and repeat the previous command.
cat orchestrator.tfvars
Open the file providers.tf and add this configuration to the required_providers block:
# Provider for installing the runner Helm chart
helm = {
source = "hashicorp/helm"
version = "~> 3"
}
# Provider for installing K8s objects for the runner
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2"
}
Open the file orchestrator.tf and append this content:
data "local_file" "agent_runner_public_key" {
filename = "./runner_public_key.pem"
}
data "local_file" "agent_runner_private_key" {
filename = "./runner_private_key.pem"
}
# Configure the kubernetes-agent runner
resource "platform-orchestrator_kubernetes_agent_runner" "get_started" {
id = "get-started"
description = "kubernetes-agent runner for the Get started tutorial"
runner_configuration = {
key = data.local_file.agent_runner_public_key.content
job = {
namespace = "humanitec-kubernetes-agent-runner"
service_account = "humanitec-kubernetes-agent-runner"
}
}
state_storage_configuration = {
type = "kubernetes"
kubernetes_configuration = {
namespace = "humanitec-kubernetes-agent-runner"
}
}
}
# Role to allow the runner to manage secrets for state storage
resource "kubernetes_role" "humanitec_runner_kubernetes_stage_storage" {
metadata {
name = "humanitec-runner-kubernetes-stage-storage"
namespace = kubernetes_namespace.humanitec_kubernetes_agent_runner.metadata[0].name
}
rule {
api_groups = [""]
resources = ["secrets"]
verbs = ["create", "get", "list", "watch", "update", "delete"]
}
rule {
api_groups = ["coordination.k8s.io"]
resources = ["leases"]
verbs = ["create", "get", "update"]
}
}
# Bind the role to the service account used by the runner
resource "kubernetes_role_binding" "humanitec_runner_kubernetes_stage_storage" {
metadata {
name = "humanitec-runner-kubernetes-stage-storage"
namespace = kubernetes_namespace.humanitec_kubernetes_agent_runner.metadata[0].name
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "Role"
name = kubernetes_role.humanitec_runner_kubernetes_stage_storage.metadata[0].name
}
subject {
kind = "ServiceAccount"
name = "humanitec-kubernetes-agent-runner"
namespace = kubernetes_namespace.humanitec_kubernetes_agent_runner.metadata[0].name
}
}
# Configure the Kubernetes provider for accessing the demo cluster
provider "kubernetes" {
host = var.k8s_cluster_endpoint
cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
args = ["eks", "get-token", "--output", "json", "--cluster-name", var.k8s_cluster_name, "--region", var.aws_region]
command = "aws"
}
}
# The namespace for the kubernetes-agent runner
resource "kubernetes_namespace" "humanitec_kubernetes_agent_runner" {
metadata {
name = "humanitec-kubernetes-agent-runner"
}
}
# A Secret for the agent runner private key
resource "kubernetes_secret" "agent_runner_key" {
metadata {
name = "humanitec-kubernetes-agent-runner"
namespace = kubernetes_namespace.humanitec_kubernetes_agent_runner.metadata[0].name
}
type = "Opaque"
data = {
"private_key" = data.local_file.agent_runner_private_key.content
}
}
# Configure the Helm provider to use the aws CLI for K8s auth
provider "helm" {
kubernetes = {
host = var.k8s_cluster_endpoint
cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
exec = {
api_version = "client.authentication.k8s.io/v1beta1"
args = ["eks", "get-token", "--output", "json", "--cluster-name", var.k8s_cluster_name, "--region", var.aws_region]
command = "aws"
}
}
}
# Install the Kubernetes agent runner Helm chart
resource "helm_release" "humanitec_kubernetes_agent_runner" {
name = "humanitec-kubernetes-agent-runner"
namespace = kubernetes_namespace.humanitec_kubernetes_agent_runner.metadata[0].name
create_namespace = false
repository = "oci://ghcr.io/humanitec/charts"
chart = "humanitec-kubernetes-agent-runner"
set = [
{
name : "humanitec.orgId"
value : var.orchestrator_org
},
{
name : "humanitec.runnerId"
value : platform-orchestrator_kubernetes_agent_runner.get_started.id
},
{
name : "humanitec.existingSecret"
value : kubernetes_secret.agent_runner_key.metadata[0].name
},
# Annotate the service account for IRSA using a role prepared via the base setup
{
name : "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
value : var.agent_runner_irsa_role_arn
}
]
}
# Assign a pre-existing ClusterRole to the service account used by the runner
# to enable the runner to create deployments in other namespaces
resource "kubernetes_cluster_role_binding" "runner_inner_cluster_admin" {
metadata {
name = "humanitec-kubernetes-agent-runner-cluster-edit"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "edit"
}
subject {
kind = "ServiceAccount"
name = "humanitec-kubernetes-agent-runner"
namespace = kubernetes_namespace.humanitec_kubernetes_agent_runner.metadata[0].name
}
}
Re-initialize your Terraform or OpenTofu environment and run another apply:
Terraform
terraform init
terraform apply -var-file=orchestrator.tfvars
OpenTofu
tofu init
tofu apply -var-file=orchestrator.tfvars
While the apply executes, inspect the resources now being added. Refer to the inline comments for an explanation of each resource.
Register the runner with the Orchestrator:
hctl create runner get-started \
--set=runner_configuration="$(jq -nc --arg key "$(cat runner_public_key.pem)" '{"type": "kubernetes-agent","key":$key,"job":{"namespace":"humanitec-kubernetes-agent-runner","service_account":"humanitec-kubernetes-agent-runner"}}')" \
--set=state_storage_configuration='{"type":"kubernetes","namespace":"humanitec-kubernetes-agent-runner"}'
Install the agent runner onto the cluster using its Helm chart. Prior to the install, obtain a kubectl context to the tutorial demo cluster. Use this convenience command to display the required command:
Terraform
terraform output -state "../infra/terraform.tfstate" -json \
| jq -r ".k8s_connect_command.value"
OpenTofu
tofu output -state "../infra/terraform.tfstate" -json \
| jq -r ".k8s_connect_command.value"
Copy and execute the command that is displayed. kubectl will now target the demo cluster. Verify the context is set correctly:
kubectl config current-context
Create the namespace for the runner and a secret holding the runner private key:
kubectl create namespace humanitec-kubernetes-agent-runner
kubectl create secret generic humanitec-kubernetes-agent-runner \
-n humanitec-kubernetes-agent-runner \
--from-literal=private_key="$(cat runner_private_key.pem)"
Create a Role and RoleBinding allowing the runner to manage secrets in its namespace for state storage:
kubectl apply -f - <<EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: humanitec-runner-kubernetes-stage-storage
namespace: humanitec-kubernetes-agent-runner
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "get", "list", "update", "patch", "delete"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create", "get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: humanitec-runner-kubernetes-stage-storage
namespace: humanitec-kubernetes-agent-runner
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: humanitec-runner-kubernetes-stage-storage
subjects:
- kind: ServiceAccount
  name: humanitec-kubernetes-agent-runner
  namespace: humanitec-kubernetes-agent-runner
EOF
Install the runner Helm chart onto the cluster, providing required values:
helm install humanitec-kubernetes-agent-runner \
oci://ghcr.io/humanitec/charts/humanitec-kubernetes-agent-runner \
-n humanitec-kubernetes-agent-runner \
--set humanitec.orgId=$(hctl config show -o json | jq -r ".default_org_id") \
--set humanitec.runnerId=get-started \
--set humanitec.existingSecret=humanitec-kubernetes-agent-runner \
--set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn=$(aws iam get-role --role-name get-started-humanitec-kubernetes-agent-runner | jq -r .Role.Arn)"
Note:
- The value humanitec.existingSecret references the Kubernetes secret created just previously, containing the runner private key
- The value serviceAccount.annotations.eks... annotates the Kubernetes service account to leverage the EKS workload identity solution IRSA for the runner to authenticate against AWS, using a role pre-configured as part of the initial base setup
Assign a pre-existing ClusterRole to the service account used by the runner to enable the runner to create deployments in other namespaces:
kubectl create clusterrolebinding humanitec-kubernetes-agent-runner-cluster-edit \
--clusterrole=edit \
--serviceaccount=humanitec-kubernetes-agent-runner:humanitec-kubernetes-agent-runner
You now have:
- ✅ Installed a runner component into the demo infrastructure
- ✅ Configured the runner with a key pair
- ✅ Enabled the runner to maintain TF state
- ✅ Registered the runner with the Orchestrator
The runner is now ready to execute deployments, polling the Orchestrator for deployment requests.
Create a project and environment
Prepare the Orchestrator to receive deployments by creating a project with one environment which will be the deployment target.
Projects are the home for a project or application team of your organization. A project is a collection of one or more environments.
Environments receive deployments. They are usually isolated instantiations of the same project representing a stage in its development lifecycle, e.g. from “development” through to “production”.
Environments are classified by environment types, so you need to create one of these as well.
An environment needs to have a runner assigned to it at all times, so you also create a runner rule. Runner rules are attached to runners and define under which circumstances to use which runner. The runner rule will assign the runner you just created to the new environment.
Open the file orchestrator.tf and append this configuration:
# Create a project
resource "platform-orchestrator_project" "get_started" {
id = "get-started"
}
# Create a runner rule
resource "platform-orchestrator_runner_rule" "get_started" {
runner_id = platform-orchestrator_kubernetes_agent_runner.get_started.id
project_id = platform-orchestrator_project.get_started.id
}
# Create an environment type
resource "platform-orchestrator_environment_type" "get_started_development" {
id = "get-started-development"
}
# Create an environment "development" in the project "get-started"
resource "platform-orchestrator_environment" "development" {
id = "development"
project_id = platform-orchestrator_project.get_started.id
env_type_id = platform-orchestrator_environment_type.get_started_development.id
# Ensure the runner rule is in place so that the Orchestrator may assign a runner to the environment
depends_on = [platform-orchestrator_runner_rule.get_started]
}
Perform another apply:
Terraform
terraform apply -var-file=orchestrator.tfvars
OpenTofu
tofu apply -var-file=orchestrator.tfvars
The plan will now output the new Orchestrator objects to be added. Confirm the apply to do so.
# Create a project
hctl create project get-started
# Create a runner rule so that the runner defined earlier is used for all deployments into the project
hctl create runner-rule --set=runner_id=get-started --set=project_id=get-started
# Create an environment type
hctl create environment-type get-started-development
# Create an environment "development" in the project "get-started"
hctl create environment get-started development --set env_type_id=get-started-development
You now have:
- ✅ Created a logical project and environment to serve as a deployment target
Create a module for a workload
You will now set up the Orchestrator for deploying a simple workload as a Kubernetes Deployment to the demo cluster.
Enabling the Orchestrator to deploy things is done through a module. Modules describe how to provision a real-world resource of a resource type by referencing a Terraform/OpenTofu module.
Every module is of a resource type. Resource types define the formal structure for a kind of real-world resource such as an Amazon S3 bucket or PostgreSQL database.
You also need a simple module rule. Module rules are attached to modules and define under which circumstances to use which module.
You will now create one object of each kind. They are all rather simple. The module itself wraps an existing TF module from a public Git source.
Open the file orchestrator.tf and add this configuration to define the Orchestrator objects previously mentioned:
# Create a resource type "k8s-workload-get-started" with an empty output schema
resource "platform-orchestrator_resource_type" "k8s_workload_get_started" {
id = "k8s-workload-get-started"
description = "Kubernetes workload for the Get started tutorial"
output_schema = jsonencode({
type = "object"
properties = {}
})
is_developer_accessible = true
}
# Create a module, setting values for the module variables
resource "platform-orchestrator_module" "k8s_workload_get_started" {
id = "k8s-workload-get-started"
description = "Simple Kubernetes Deployment in default namespace"
resource_type = platform-orchestrator_resource_type.k8s_workload_get_started.id
module_source = "git::https://github.com/humanitec-tutorials/get-started//modules/workload/kubernetes"
module_params = {
image = {
type = "string"
description = "The image to use for the container"
}
variables = {
type = "map"
is_optional = true
description = "Container environment variables"
}
}
module_inputs = jsonencode({
name = "get-started"
namespace = "default"
})
}
# Create a module rule making the module applicable to the demo project
resource "platform-orchestrator_module_rule" "k8s_workload_get_started" {
module_id = platform-orchestrator_module.k8s_workload_get_started.id
project_id = platform-orchestrator_project.get_started.id
}
Perform another apply:
Terraform
terraform apply -var-file=orchestrator.tfvars
OpenTofu
tofu apply -var-file=orchestrator.tfvars
# Create a resource type "k8s-workload-get-started" with an empty output schema
hctl create resource-type k8s-workload-get-started --set-yaml=- <<EOF
description: "Kubernetes workload for the Get started tutorial"
output_schema:
type: object
properties: {}
EOF
# Create a module, setting values for the module variables
hctl create module k8s-workload-get-started --set-yaml=- <<EOF
resource_type: k8s-workload-get-started
description: Simple Kubernetes Deployment in default namespace
module_source: git::https://github.com/humanitec-tutorials/get-started//modules/workload/kubernetes
module_params:
image:
type: string
description: The image to use for the container
variables:
type: map
is_optional: true
description: Container environment variables
module_inputs:
name: get-started
namespace: default
EOF
# Create a module rule making the module applicable to the demo project
hctl create module-rule --set=module_id=k8s-workload-get-started --set=project_id=get-started
To inspect the TF module code referenced by the module, go to the GitHub source.
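As an orientation only (this is not the repository's actual code), a simplified sketch of such a workload module could look like the following. It assumes the variable names image, variables, name, and namespace from the module_params and module_inputs above, and a Kubernetes Deployment resource named simple as referenced by the outputs.tf shown later in this tutorial; the real module may differ in detail:
variable "name" { type = string }
variable "namespace" { type = string }
variable "image" { type = string }
variable "variables" {
  type    = map(string)
  default = {}
}

# A minimal Deployment running the requested image
resource "kubernetes_deployment" "simple" {
  metadata {
    name      = var.name
    namespace = var.namespace
  }
  spec {
    replicas = 1
    selector {
      match_labels = { app = var.name }
    }
    template {
      metadata {
        labels = { app = var.name }
      }
      spec {
        container {
          name  = var.name
          image = var.image
          # Map the optional "variables" parameter onto container environment variables
          dynamic "env" {
            for_each = var.variables
            content {
              name  = env.key
              value = env.value
            }
          }
        }
      }
    }
  }
}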
The Orchestrator is now configured to deploy resources of the new type k8s-workload-get-started. Verify it by querying the available resource types for the target project environment:
hctl get available-resource-types get-started development
You now have:
- ✅ Instructed the Orchestrator how to deploy a containerized workload via a module
- ✅ Instructed the Orchestrator when to use the module via a module rule
Create a manifest
You can now utilize the Orchestrator configuration prepared in the previous steps and perform an actual deployment using a manifest file. Manifests allow developers to submit the description of the desired state for an environment. A manifest is the main input for performing a deployment.
Create a new manifests directory and change into it:
cd ..
mkdir manifests
cd manifests
Create a new file manifest-1.yaml in the manifests directory, containing this configuration:
workloads:
# The name you assign to the workload in the context of the manifest
demo-app:
resources:
      # The name you assign to this resource in the context of the manifest
demo-workload:
# The resource type of the resource you wish to provision
type: k8s-workload-get-started
# The resource parameters. They are mapped to the module_params of the module
params:
image: ghcr.io/astromechza/demo-app:latest
The manifest is a high-level abstraction of the demo-app, made up of a resource of type k8s-workload-get-started.
Manifests are generally owned and maintained by application developers as a part of their code base, and used to self-serve all kinds of resources. You do not install them into the Orchestrator using hctl or TF like the previous configuration objects.
You now have:
- ✅ Described an application and its resource requirements in an abstracted, environment-agnostic way called a manifest
Perform a deployment
Verify that the demo infrastructure provisioning, which may be running in another shell, is complete before proceeding.
Perform a deployment of the manifest into the development environment of the get-started project:
hctl deploy get-started development manifest-1.yaml
Confirm the deployment and wait for the execution to finish. It should take less than a minute.
Upon completion, the CLI outputs an hctl logs command to access the deployment logs. Use it to see the execution details.
Run this command to see all prior deployments:
hctl get deployments
Copy the ID from the top of the list and inspect the details by using the ID in this command:
hctl get deployment the-id-you-copied
Inspect the result in the Orchestrator console:
- Open the Orchestrator web console at https://console.humanitec.dev/
- In the Projects view, select the get-started project
- Select the development environment
You will see a “Successful” status and a graph containing two nodes. This is the resource graph for the latest deployment. The resource graph shows the dependencies of all active resources in an environment.
Click on the resources to explore their details. In particular, inspect the “Metadata” section of the demo-workload resource. By convention, the Orchestrator will display the content of a regular output named humanitec_metadata of the TF module being used for a resource. This is useful for conveying insights about the real-life object that has been provisioned via the Orchestrator.
Defining metadata output in the module source outputs.tf (view on GitHub):
output "humanitec_metadata" {
description = "Metadata for the Orchestrator"
value = merge(
{
"Namespace" = var.namespace
},
{
"Image" = var.image
},
{
"Deployment" = kubernetes_deployment.simple.metadata[0].name
}
)
}
You now have:
- ✅ Used the manifest to request a deployment from the Orchestrator into a specific target environment
Inspect the running workload
If you have kubectl available, you can inspect the demo cluster to see the running workload. Otherwise, skip to the next section.
If you have not already set the kubectl context, use this convenience command to display the required command:
Terraform
terraform output -state "../infra/terraform.tfstate" -json \
| jq -r ".k8s_connect_command.value"
OpenTofu
tofu output -state "../infra/terraform.tfstate" -json \
| jq -r ".k8s_connect_command.value"
Copy and execute the command that is displayed. kubectl will now target the demo cluster. Verify the context is set correctly:
kubectl config current-context
The module created a Kubernetes Deployment in the default namespace. Verify it exists:
kubectl get deployments
The demo image being used has a simple web interface. Create a port-forward to the Pod to access it:
kubectl port-forward pod/$(kubectl get pods -l app=get-started -o json \
| jq -r ".items[0].metadata.name") 8080:8080
Open http://localhost:8080 to see the running workload.
In the output you see, note the message at the bottom about no postgres being configured. As the final step in this tutorial, you will add a simple PostgreSQL database to the deployment and connect the workload to it.
Quit the port-forward and continue.
You now have:
- ✅ Verified your workload is indeed running on the target environment
Create a module for a database
You will now extend the capabilities of the Orchestrator and enable it to provision a database to be used by the workload. The tutorial uses PostgreSQL as an example.
Just like for the workload previously, that involves creating a resource type, a module encapsulating the TF code, and a module rule.
The PostgreSQL database will use a managed service offering from your cloud provider and the appropriate TF provider:
- AWS: RDS and the hashicorp/aws provider, using workload identity (container credentials, IRSA) for provider authentication
Remember that the TF code for provisioning this database will be executed by the runner. The provider configuration must receive appropriate authentication data just like in any other TF execution. To achieve this, you will also define a provider in the Orchestrator. Providers are the reusable direct equivalent of Terraform/OpenTofu providers that may be injected into the Terraform/OpenTofu code referenced in modules.
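To make this concrete: because the runner's Kubernetes service account already carries cloud credentials via workload identity, the provider configuration the Orchestrator injects can stay minimal. Conceptually, and only as an illustration (the exact injected configuration is managed by the Orchestrator, this is not code you need to write), what the executed module ends up with is equivalent to an empty provider block that falls back to the ambient credential chain:
# Illustrative equivalent of the injected provider configuration (AWS example).
# No static credentials are set; the AWS provider picks up the runner's
# workload identity (IRSA) credentials from its pod environment.
provider "aws" {}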
Return to the orchestrator folder:
cd ../orchestrator
Open the file orchestrator.tf and append this configuration to define the Orchestrator objects previously mentioned:
# Create a provider leveraging EKS workload identity
# Because workload identity has already been configured on the cluster,
# the provider configuration can be effectively empty
resource "platform-orchestrator_provider" "aws_get_started" {
id = "get-started"
description = "aws provider for the Get started tutorial"
provider_type = "aws"
source = "hashicorp/aws"
version_constraint = "~> 6"
}
# Create a resource type "postgres-get-started"
resource "platform-orchestrator_resource_type" "postgres_get_started" {
id = "postgres-get-started"
description = "Postgres instance for the Get started tutorial"
is_developer_accessible = true
output_schema = jsonencode({
type = "object"
properties = {
host = {
type = "string"
}
port = {
type = "integer"
}
database = {
type = "string"
}
username = {
type = "string"
}
password = {
type = "string"
}
}
})
}
# Create a module "postgres-get-started"
resource "platform-orchestrator_module" "postgres_get_started" {
id = "postgres-get-started"
description = "Simple cloud-based Postgres instance"
resource_type = platform-orchestrator_resource_type.postgres_get_started.id
module_source = "git::https://github.com/humanitec-tutorials/get-started//modules/postgres/${var.enabled_cloud_provider}"
provider_mapping = {
aws = "${platform-orchestrator_provider.aws_get_started.provider_type}.${platform-orchestrator_provider.aws_get_started.id}"
}
}
# Create a module rule making the module applicable to the demo project
resource "platform-orchestrator_module_rule" "postgres_get_started" {
module_id = platform-orchestrator_module.postgres_get_started.id
project_id = platform-orchestrator_project.get_started.id
}
Perform another apply:
Terraform
terraform apply -var-file=orchestrator.tfvars
OpenTofu
tofu apply -var-file=orchestrator.tfvars
Read the chosen cloud provider from the output:
Terraform
export ENABLED_CLOUD_PROVIDER=$(terraform output -state "../infra/terraform.tfstate" -json | jq -r ".enabled_cloud_provider.value")
OpenTofu
export ENABLED_CLOUD_PROVIDER=$(tofu output -state "../infra/terraform.tfstate" -json | jq -r ".enabled_cloud_provider.value")
Then create the Orchestrator objects:
# Create a provider
# Because workload identity has already been configured on the cluster,
# the provider configuration can be effectively empty
hctl create provider ${ENABLED_CLOUD_PROVIDER} get-started --set-yaml=- <<EOF
description: ${ENABLED_CLOUD_PROVIDER} provider for the Get started tutorial
source: hashicorp/${ENABLED_CLOUD_PROVIDER}
version_constraint: ~> 6
EOF
# Create a resource type "postgres-get-started"
hctl create resource-type postgres-get-started --set-yaml=- <<EOF
description: Postgres instance for the Get started tutorial
is_developer_accessible: true
output_schema:
type: object
properties:
host:
type: string
port:
type: integer
database:
type: string
username:
type: string
password:
type: string
EOF
# Create a module "postgres-get-started"
# Re-use a TF output to select the module for your chosen cloud provider
hctl create module postgres-get-started --set-yaml=- <<EOF
resource_type: postgres-get-started
description: Simple cloud-based Postgres instance
module_source: git::https://github.com/humanitec-tutorials/get-started//modules/postgres/${ENABLED_CLOUD_PROVIDER}
provider_mapping:
${ENABLED_CLOUD_PROVIDER}: ${ENABLED_CLOUD_PROVIDER}.get-started
EOF
# Create a module rule making the module applicable to the demo project
hctl create module-rule --set=module_id=postgres-get-started --set=project_id=get-started
Verify that the new resource type is available:
hctl get available-resource-types get-started development
You now have:
- ✅ Instructed the Orchestrator how to provision a PostgreSQL database by providing another module and module rule
- ✅ Configured a TF provider which can be injected into upcoming executions to access the target infrastructure
Add the database to the manifest
Create another manifest that, in addition to the workload, requests a PostgreSQL database and connects the workload to it.
Return to the manifests directory:
cd ../manifests
Create a new file manifest-2.yaml containing this configuration. It expands the previous manifest to request a database resource and set a container environment variable, using outputs from the database resource to construct a connection string. Note the elements marked NEW:
workloads:
# The name you assign to the workload in the context of the manifest
demo-app:
resources:
# The name of the workload resource in the context of the manifest
demo-workload:
# The resource type of the workload resource you wish to provision
type: k8s-workload-get-started
# The resource parameters. They are mapped to the module_params of the module
params:
image: ghcr.io/astromechza/demo-app:latest
variables:
# NEW: This environment variable is used by the demo image to create a connection to a postgres database
OVERRIDE_POSTGRES: postgres://${resources.db.outputs.username}:${resources.db.outputs.password}@${resources.db.outputs.host}:${resources.db.outputs.port}/${resources.db.outputs.database}
# NEW: The name of the database resource in the context of the manifest
db:
# NEW: The resource type of the database resource you wish to provision
type: postgres-get-started
Requesting and connecting the database resource is literally a matter of a few lines added to the manifest file. The Orchestrator configuration you created previously (resource type, module, module rule, provider) will execute this request.
You now have:
- ✅ Extended the manifest to request an additional database resource
- ✅ Extended the manifest to inject outputs from the new database resource into the workload
Re-deploy
Perform another deployment into the same development environment of the get-started project using the new manifest file:
hctl deploy get-started development manifest-2.yaml
This new deployment will take a few minutes as it now creates a managed PostgreSQL instance. Use that time to inspect the TF code behind the database module. Expand the section below to see it.
Note that the module has a required_providers block but it does not include a provider block for configuring providers. Instead, the Orchestrator will inject the provider configuration at deploy time based on the provider object you created earlier and mapped into the module via the provider_mapping.
Database module main code main.tf (view on GitHub):
# Module for creating a simple default RDS PostgreSQL database
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 6"
}
random = {
source = "hashicorp/random"
version = "~> 3.0"
}
}
}
locals {
db_name = "get_started"
db_username = "get_started"
}
data "aws_region" "current" {}
resource "random_string" "db_password" {
length = 12
lower = true
upper = true
numeric = true
special = false
}
resource "aws_db_instance" "get_started_rds_postgres_instance" {
allocated_storage = 5
engine = "postgres"
identifier = "get-started-rds-db-instance"
instance_class = "db.t4g.micro"
storage_encrypted = true
publicly_accessible = true
delete_automated_backups = true
skip_final_snapshot = true
db_name = local.db_name
username = local.db_username
password = random_string.db_password.result
apply_immediately = true
multi_az = false
dedicated_log_volume = false
}
# Need to open access to the database port via a security rule despite "publicly_accessible"
resource "aws_security_group_rule" "get_started_postgres_access" {
type = "ingress"
from_port = aws_db_instance.get_started_rds_postgres_instance.port
to_port = aws_db_instance.get_started_rds_postgres_instance.port
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
security_group_id = tolist(aws_db_instance.get_started_rds_postgres_instance.vpc_security_group_ids)[0]
}
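The module's outputs are not shown above, but they must match the output_schema of the postgres-get-started resource type (host, port, database, username, password) so that the manifest placeholders can resolve. A minimal sketch of what such an outputs.tf could look like, assuming the resource and local names from the main.tf above (the repository's actual outputs may differ):
output "host" {
  value = aws_db_instance.get_started_rds_postgres_instance.address
}

output "port" {
  value = aws_db_instance.get_started_rds_postgres_instance.port
}

output "database" {
  value = local.db_name
}

output "username" {
  value = local.db_username
}

output "password" {
  value     = random_string.db_password.result
  sensitive = true
}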
Control over the providers and the authentication mechanisms they use therefore lies with the Orchestrator and is governed by the platform engineers.
Once the deployment has finished, return to the Orchestrator console at https://console.humanitec.dev and open the development environment of the get-started project.
The resource graph now contains an additional active resource node for the PostgreSQL database. Click on the resource to explore its details. It exposes a set of metadata on the real-world database resource, including a link to the cloud console or portal. Click the link to view it.
If you have kubectl available and your context set, you may again connect to the running pod:
kubectl port-forward pod/$(kubectl get pods -l app=get-started -o json \
| jq -r ".items[0].metadata.name") 8080:8080
Open http://localhost:8080 to see the status output. The demo workload now displays the message Database table count result: 0, which means that it could connect to the PostgreSQL database and found it to be empty, as expected.
Quit the port-forward and continue.
You now have:
- ✅ Expanded the running deployment by a PostgreSQL database instance
Clean up
Once you are done exploring, clean up the objects you created in the process.
- Delete the Orchestrator environment first. This will make the Orchestrator execute a destroy on all the resources in the environment, and may therefore take several minutes.
Navigate to the orchestrator directory:
cd ../orchestrator
Terraform
terraform destroy -target="platform-orchestrator_environment.development" \
-var-file=orchestrator.tfvars
OpenTofu
tofu destroy -target="platform-orchestrator_environment.development" \
-var-file=orchestrator.tfvars
hctl delete environment get-started development
Once complete, if you now inspect the cluster and your cloud console again, you will find both the workload and the database resource gone.
- Destroy the remaining Orchestrator configuration objects.
Terraform
terraform destroy -var-file=orchestrator.tfvars
OpenTofu
tofu destroy -var-file=orchestrator.tfvars
# Objects around the database
hctl delete module-rule $(hctl get module-rules -o json | jq -r '.[] | select((.project_id == "get-started") and (.module_id == "postgres-get-started")) | .id')
hctl delete module postgres-get-started
hctl delete resource-type postgres-get-started
hctl delete provider ${ENABLED_CLOUD_PROVIDER} get-started
# Objects around the workload
hctl delete module-rule $(hctl get module-rules -o json | jq -r '.[] | select((.project_id == "get-started") and (.module_id == "k8s-workload-get-started")) | .id')
hctl delete module k8s-workload-get-started
hctl delete resource-type k8s-workload-get-started
# Project and environment
hctl delete environment-type get-started-development
hctl delete runner-rule $(hctl get runner-rules -o json | jq -r '.[] | select((.project_id == "get-started") and (.runner_id == "get-started")) | .id')
hctl delete runner get-started
hctl delete project get-started
- Destroy all demo infrastructure created via TF. This will take several minutes.
Navigate to the infra directory:
cd ../infra
Then execute the destroy:
Terraform
terraform destroy
OpenTofu
tofu destroy
- Delete the service user in the Orchestrator.
- Open the Orchestrator console at https://console.humanitec.dev
- Select the Service users view
- On the get-started-tutorial user, select Delete from the item menu
- Confirm with Yes
Nothing to do. The service user was only needed for the TF track.
- Finally, remove the local directory containing the cloned repository and your additions. If you want to keep them for later review, skip this step.
cd ../../
rm -rf get-started
Recap
This concludes the tutorial. You have learned how to:
- ✅ Set up your infrastructure to run Orchestrator deployments
- ✅ Describe an application and its resource requirements
- ✅ Use the Orchestrator to deploy a workload
- ✅ Use the Orchestrator to deploy dependent infrastructure resources like a database
- ✅ Observe the results and status of deployments
- ✅ Create a link from the Orchestrator to real-world resources
The tutorial made some choices we would like to highlight, along with the options you have.
Runner compute vs. target compute: the tutorial uses the same compute (here: a Kubernetes cluster) to run both your runners and an actual application workload. In a real-life setting, you may want to use a dedicated runner compute instance to reduce the risk of interference with applications.
Containerized and non-containerized workloads: the tutorial uses a containerized workload on Kubernetes as an example, but a “workload” can really have any shape, fitting its execution environment. You may deploy any flavor of workload artefacts by providing the proper module and TF code.
Workload and/or infrastructure: the tutorial deploys both a workload and a piece of infrastructure, but it can be just one of the two as well. You describe what is to be deployed via the manifest.
Next steps
- Go here to follow an advanced showcase deploying workloads to VMs