Deploy your Workload
Introduction
This tutorial guides you through the process of deploying your first Workload using the Humanitec Platform Orchestrator.
You will bring your own container, deploy it as a Workload using Score and the Humanitec Platform Orchestrator, and it will be started in a Pod on your target Kubernetes cluster.
You will learn how to:
- Deploy a Workload to your cluster
- Inspect your Deployment
- Use the CLI and the Humanitec Portal
- Add a variable
- Add a dependent resource to your Workload
This tutorial will use the humctl CLI as the primary means of interfacing with the Humanitec Platform Orchestrator. You will execute all commands locally at first to become familiar with them. You will then find it easy to integrate them into your CI/CD setup later on for automation.
Prerequisites
To get started you’ll need:
- The humctl CLI installed locally
- (Optional) Access to your target Kubernetes cluster via kubectl or another cluster management tool
- (Recommended) Your own container image of the Workload you wish to deploy, built and pushed to your container registry. If you do not have one, you will be able to use a public sample image as a fallback

The Humanitec tools do not build or push your container images to your own registry. You continue to do this normally as part of your own CI pipeline. Check with your Platform Engineering team if the Kubernetes cluster where your Workload will be deployed can pull this container image.
- A set of data provided to you by your Platform Engineering team:
  - The Organization ID of your Humanitec Organization, e.g. my-org
  - The Application ID of your target Humanitec Application in the Platform Orchestrator, e.g. my-app (an Application contains Environments to deploy Workloads into)
  - The Environment ID of the target Environment created in that Application, e.g. development (a Workload Deployment always targets one specific Environment)
- Your personal Humanitec user account (“Organization member”) in that Organization. This user needs permission to at least view the target Application
- A Resource Type your Application can use, e.g. redis
- A Kubernetes cluster connected to the Platform Orchestrator for Workload deployments
Prepare your local environment
Configure your environment:
humctl config set org <your-humanitec-org-id> # Always lowercase, e.g. "my-org"
humctl config set app <your-humanitec-application-id> # Always lowercase, e.g. "my-app"
humctl config set env <your-humanitec-app-environment-id> # Always lowercase, e.g. "development"
These commands set the context for all upcoming humctl commands. You can always query the current context using humctl config context. To override the values for a single command, use the --org, --app, and --env flags.
Log in to the Platform Orchestrator:
humctl login
Inspect your Application
Your Platform Engineering team has created a target “Application” and an “Environment” inside that Application for you in the Platform Orchestrator. Take a look:
- Open the Humanitec Portal at https://app.humanitec.io
- Log in using the proper method as instructed by your Platform Engineering team. This will log you in with your personal Organization Member user
- From the main navigation, select Applications
- Locate your Application name and select the target Environment, e.g. development
The Environment is still empty, awaiting your first deployment. Proceed to prepare it by creating a Score file.
Create a Score file
A Score file describes your containerized Workload and its resource dependencies.
Create this simple Score file to start:
cat <<EOF > score.yaml
apiVersion: score.dev/v1b1
metadata:
  name: my-workload
containers:
  my-container:
    image: .
EOF
This Score file does not declare any resource dependencies yet.
The line image: . is a placeholder for the actual image. In practice, the image and especially the image tag will vary depending on the context. You may want to deploy latest, a particular version tag, or a build number tag created by your CI pipeline. You will therefore provide the image name and tag as a parameter for every deployment call, allowing the Score file to remain environment agnostic.
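To make this concrete, here is a hypothetical sketch of how a CI step might compute that parameter. The registry, image name, and the sha-&lt;short-sha&gt; tag scheme are all assumptions of this example, not something the tutorial prescribes:

```python
def image_ref(registry: str, name: str, git_sha: str) -> str:
    """Build an image reference such as 'ghcr.io/acme/my-app:sha-1a2b3c4'.

    Tagging with a short commit SHA is one common CI convention; adjust
    the scheme to whatever your pipeline actually produces.
    """
    return f"{registry}/{name}:sha-{git_sha[:7]}"

# The resulting string is what you would pass as the image parameter on deploy.
print(image_ref("ghcr.io", "acme/my-app", "1a2b3c4d5e6f7890"))
# → ghcr.io/acme/my-app:sha-1a2b3c4
```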
Deploy your Workload
Set the container image name and tag for your Workload:
export CONTAINER_IMAGE_NAME_AND_TAG=<my-image-name:my-tag>
Provide the image name and tag of your own Workload. As a fallback, you can use this demo image: ghcr.io/astromechza/demo-app:latest.
Then perform the Workload Deployment using the score.yaml
file, targeting the proper Humanitec Organization, Application, and Environment.
humctl score deploy \
-f score.yaml \
--image ${CONTAINER_IMAGE_NAME_AND_TAG} \
--wait \
--message "Initial deployment"
Here’s what the flags mean:
- The -f flag specifies the Score file to use
- The --image flag supplies the image name and tag to fill in the image: . placeholder in the Score file
- The --wait flag outputs the logs of the Platform Orchestrator’s deployment progress
- The --message flag lets you provide a comment which will be displayed in the Platform Orchestrator for this Deployment
The Deployment should not take more than a minute. You should eventually see a Status: succeeded output.
Inspect the Deployment
Get information on the latest Deployment via the CLI.
humctl get deploy .
Note the following:
- The created_by property will show the ID of the service user associated with the API token you used, recognizable by the s- prefix
- The comment property will show the message you provided with the humctl score deploy call, if any
- You can query the Deployment status via the detailed output by adding the -o json or -o yaml flag
Now inspect the Deployment. Go back to the Humanitec Portal at https://app.humanitec.io and navigate to the same Application Environment as before.
The Environment Status page is populated with an overview of the last Deployment, its status, Resource Graph, Workloads, and Shared Resources that were automatically provisioned along with them.
The Resource Graph is still quite simple at this point. It will grow when you add a Resource further along.
At the bottom, you will find Shared Resources of the types k8s-cluster
and k8s-namespace
. Expand them to see the details of your target Kubernetes cluster and namespace.
The Workload my-workload
has been named after the name: my-workload
property in the Score file. Click on it to see its details.
You will see 1 Pod running and a container my-container
. It has been named after this section in the Score file:
containers:
my-container:
Inspect your workload’s output. Click on the container named my-container
. You will see the container log output at the top of the page.
If you have access to your target Kubernetes cluster, you may inspect the objects created in the target namespace on your own. You can obtain the namespace by querying the Platform Orchestrator:
export NAMESPACE_DEVELOPMENT=$(humctl get active-resource \
/orgs/${HUMANITEC_ORG}/apps/${HUMANITEC_APP}/envs/${HUMANITEC_ENV}/resources \
-oyaml \
| yq -r '.[] | select (.metadata.type == "k8s-namespace") | .status.resource.namespace')
echo $NAMESPACE_DEVELOPMENT
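If you prefer -o json over yq, the same lookup can be sketched in Python. The payload shape below (a list of resources carrying metadata.type and status.resource.namespace) mirrors the yq filter above; treat it as an assumption and verify it against your actual humctl output:

```python
import json
from typing import Optional

def find_namespace(active_resources_json: str) -> Optional[str]:
    """Return the namespace of the first k8s-namespace resource, if any."""
    for res in json.loads(active_resources_json):
        if res.get("metadata", {}).get("type") == "k8s-namespace":
            return res.get("status", {}).get("resource", {}).get("namespace")
    return None

# Example with a minimal, made-up payload of the assumed shape:
sample = json.dumps([
    {"metadata": {"type": "k8s-cluster"}, "status": {"resource": {}}},
    {"metadata": {"type": "k8s-namespace"},
     "status": {"resource": {"namespace": "my-app-development"}}},
])
print(find_namespace(sample))  # → my-app-development
```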
Obtain more deployment details
You can obtain additional information on your Deployment using these CLI commands.
Get latest Deployment errors, if any:
humctl get deploy-error
Get the Active Resources of the latest Deployment:
humctl get active-resources
Add a variable
You can add container environment variables to configure your Workload via Score.
The Humanitec Platform Orchestrator provides a flexible way to maintain Shared Values on an Application or Environment level, and reference them through your Score file.
Create a Shared Value now on the Application level:
humctl create value YOUR_KEY your-value \
  -d "The value of your-value stored in YOUR_KEY" \
  --env ""

Use --env "" to override the value of env set in the context and not target the Environment defined therein, but the Application level.

Then modify your Score file to look like this:
apiVersion: score.dev/v1b1
metadata:
  name: my-workload
containers:
  my-container:
    image: .
    variables:
      # Variable declaration reading a Shared Value
      MY_CONTAINER_VAR: ${resources.env.YOUR_KEY}
resources:
  # Referencing the Workload environment in the Platform Orchestrator
  env:
    type: environment

The type: environment resource uses a reserved resource type from the Score specification which in the Platform Orchestrator gives you access to Shared Values.

Now re-deploy your Workload:
humctl score deploy \
-f score.yaml \
--image ${CONTAINER_IMAGE_NAME_AND_TAG} \
--wait \
--message "Added a container variable"
Verifying the actual value injected requires access to the target Pod created on the cluster. You can use these commands:
# See the Pods in the target namespace
kubectl get pods -n $NAMESPACE_DEVELOPMENT
# Output a Pod's environment variables
kubectl exec -n $NAMESPACE_DEVELOPMENT <your-pod-name> -- env
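Inside the Pod, MY_CONTAINER_VAR arrives as an ordinary environment variable. A minimal sketch of how application code might read it (the fallback default is an assumption of this example, useful when running outside the cluster):

```python
import os

def read_config_value() -> str:
    # MY_CONTAINER_VAR is injected into the container by the Deployment;
    # fall back to a local default when the variable is not set.
    return os.environ.get("MY_CONTAINER_VAR", "local-default")

print(read_config_value())
```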
Add a Resource
Given that your Platform Engineering team has provided you with a Resource Type you can use, e.g. redis, you can now enrich your Workload to request and use such a resource.
Modify your Score file to look like this:
apiVersion: score.dev/v1b1
metadata:
  name: my-workload
containers:
  my-container:
    image: .
    variables:
      # Variable declaration reading a Shared Value
      MY_CONTAINER_VAR: ${resources.env.YOUR_KEY}
      # Variable declarations using outputs of the redis resource
      REDIS_HOST: ${resources.my-cache.host}
      REDIS_PORT: ${resources.my-cache.port}
      REDIS_USERNAME: ${resources.my-cache.username}
      REDIS_PASSWORD: ${resources.my-cache.password}
resources:
  # Referencing the Workload environment in the Platform Orchestrator
  env:
    type: environment
  # Requesting a resource of type redis
  my-cache:
    type: redis
This example requests a Redis cache and injects its properties as environment variables into the container. The container code could now pick them up to connect to the Redis instance.
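As an illustration, application code might assemble a connection URL from the four injected variables. This is a hedged sketch: the redis:// URL form is a common client convention, not something the Platform Orchestrator mandates:

```python
import os

def redis_url() -> str:
    """Assemble a redis:// URL from the variables injected by the Deployment."""
    host = os.environ["REDIS_HOST"]
    port = os.environ["REDIS_PORT"]
    user = os.environ["REDIS_USERNAME"]
    password = os.environ["REDIS_PASSWORD"]
    return f"redis://{user}:{password}@{host}:{port}"
```

A Redis client library would then consume this URL directly when opening the connection.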
If your Resource Type is named differently, adjust the type accordingly. You can use the outputs defined for that Resource Type to inject variables.

Re-deploy your Workload:
humctl score deploy \
-f score.yaml \
--image ${CONTAINER_IMAGE_NAME_AND_TAG} \
--wait \
--message "Adding a Resource"
Get the difference between the latest Deployment and the previous version:
humctl diff sets deploy/+0 deploy/+1
Technically, the output shows the difference between the two Deployment Sets of the current and the previous Deployment. The Deployment Set is the internal data structure of the Platform Orchestrator Deployment. You will normally not have to work with it, but it’s useful to expose the details of what’s happening.
Go back to the Humanitec Portal at https://app.humanitec.io and navigate to the same Application Environment as before. Inspect the Resource Graph. It now contains a resource of type redis (or whichever other type you used) as a dependency of your Workload.
Selecting the resource will display its non-secret output values.
Finally, repeat the query to show the Active Resources of your target Environment. It will now include the new Resource:
humctl get active-resources
Recap
Congratulations! You have successfully completed the tutorial and learned how to deploy your Workload. Specifically, you learned how to:
- ✅ Use Score
- ✅ Deploy your Workload with Score, humctl, and the Platform Orchestrator as your backend
- ✅ View and analyze your deployed Workload using humctl and the Humanitec Portal
- ✅ Find the Resource Graph in the Portal
- ✅ Further inspect your Deployment to see Active Resources and possible errors
- ✅ Add a dependent Resource and see it deployed
Next steps
Using what you already have, add Score to your source code and deploy continuously. Ask your Platform Engineering team for a service user API token to be made available for your pipeline.
Set up your CI/CD system to manage ephemeral Environments. The setup will provide you with isolated, fully functional Application deployments for each of your PRs to test your code, at zero developer effort.
Explore the configuration possibilities through Score by browsing our Score example library.
See how to deploy multiple Workloads at once.