Deploy to a private cluster using the Humanitec Agent

This article describes how to deploy Workloads to a Kubernetes cluster on a private network (“private cluster”) using the Humanitec Agent.


Prerequisites

To get started you’ll need:

  • A Kubernetes cluster as the target for Workload deployments.
    • The cluster’s API server endpoint is not reachable from the Platform Orchestrator.
    • The cluster is configured as a Resource Definition in your Humanitec Organization.
  • The Humanitec Agent installed and registered in your infrastructure.
    • The cluster’s API server endpoint is reachable from the Humanitec Agent.
  • The score-humanitec CLI installed.
  • The humctl Humanitec CLI installed.
  • (Recommended) Access to the log output of the Humanitec Agent. If the Agent is running on the same (private) target cluster, the access method could be running kubectl from a cloud shell. Otherwise it could be the cloud UI providing access to logs, or a tunneling option of the cloud provider’s CLI.
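If the Agent runs on the target cluster itself, a minimal sketch for reading its logs could look like this. The namespace and Deployment name humanitec-agent are assumptions based on a typical Helm installation; adjust them to your setup:

```shell
# Sketch: stream the Agent's log output when it runs on the target cluster.
# The namespace and Deployment name "humanitec-agent" are assumptions from
# a typical Helm install; adjust them to your environment.
agent_logs() {
  kubectl logs -n "${1:-humanitec-agent}" deploy/humanitec-agent --tail=50
}
```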

Prepare your environment

Set these variables to connect to your Humanitec Organization and the installed Agent:

export HUMANITEC_ORG=<your-humanitec-org-id>
export HUMANITEC_TOKEN=<your-humanitec-api-token>
export AGENT_ID=<your-agent-id>
export K8S_DEFINITION_ID=<your-cluster-resource-definition-id>
export K8S_DEFINITION_NAME=<your-cluster-resource-definition-name>

The AGENT_ID must be set to the ID you used when installing and registering your Humanitec Agent.

The K8S_DEFINITION_ID is the ID of the Resource Definition for the target Kubernetes cluster. K8S_DEFINITION_NAME is the display name in that Definition.
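As a convenience (not part of the official procedure), a small shell check can confirm that all variables are set before you continue. The function check_env is a hypothetical helper of ours, not part of humctl:

```shell
# Verify that all required environment variables are non-empty.
# check_env is a hypothetical helper, not part of humctl.
check_env() {
  local var missing=0
  for var in HUMANITEC_ORG HUMANITEC_TOKEN AGENT_ID K8S_DEFINITION_ID K8S_DEFINITION_NAME; do
    if [ -z "${!var:-}" ]; then
      echo "Missing: $var"
      missing=1
    fi
  done
  return "$missing"
}

check_env || echo "Set the variables above before continuing"
```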

Configure the cluster to use the Agent

As part of the Agent Installation process, you already created a Resource Definition of type: agent with the ID AGENT_ID.

You can now reference an Agent Resource from the Resource Definition of the target cluster. Specify the driver input property agent_url as ${resources['agent#agent'].outputs.url}. The reference to resources['agent#agent'] will use the matching Resource Definition to provide an actual active Agent Resource for the Deployment.

The agent_url property is supported in all Drivers of type k8s-cluster. Note that it is a secret property and must be placed in the “secrets” section of the Resource Definition structure.

Update the Resource Definition like this:

  • From the left navigation menu, select “Resource Management”.
  • Select the Resource Definition of your cluster.
  • Select the “Configuration” tab.
  • Select “Edit configuration”.
  • Enter this value into the “Agent URL” field: ${resources['agent#agent'].outputs.url}
  • Select “Save”.

To patch the Resource Definition through a one-time action using the humctl CLI, use this command:

humctl api PATCH /orgs/${HUMANITEC_ORG}/resources/defs/${K8S_DEFINITION_ID} -d '{
  "name": "'${K8S_DEFINITION_NAME}'",
  "driver_inputs": {
    "secrets": {
      "agent_url": "${resources['\''agent#agent'\''].outputs.url}"
    }
  }
}'
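The nested quoting in this command is easy to get wrong, because the placeholder itself contains single quotes. A sketch of building the payload in a variable first, using the '\'' sequence to embed a literal single quote inside a single-quoted shell string:

```shell
# Build the PATCH payload separately, then inspect it before sending.
# '\'' closes the single-quoted string, emits a literal ', and reopens it.
# K8S_DEFINITION_NAME is expected from the "Prepare your environment" step.
payload='{
  "name": "'"${K8S_DEFINITION_NAME:-}"'",
  "driver_inputs": {
    "secrets": {
      "agent_url": "${resources['\''agent#agent'\''].outputs.url}"
    }
  }
}'
echo "$payload"
```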

To add the secret value to a YAML representation of the Resource Definition, nest it in the secrets section under driver_inputs:

driver_inputs:
  secrets:
    agent_url: "${resources['agent#agent'].outputs.url}"

Then (re-)apply the Resource Definition:

humctl apply -f resource-definition-k8s.yaml
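For orientation, the core of such a cluster Resource Definition might look like the following sketch. The driver type humanitec/k8s-cluster-gke, the names, values, and criteria are illustrative assumptions; only the secrets.agent_url element is the addition described here:

```yaml
# Illustrative sketch only; driver type, names, values, and criteria
# are assumptions for a GKE cluster and will differ in your setup.
name: my-cluster
type: k8s-cluster
driver_type: humanitec/k8s-cluster-gke
driver_inputs:
  values:
    name: my-cluster
    project_id: my-project
    zone: europe-west2-a
  secrets:
    agent_url: ${resources['agent#agent'].outputs.url}
    credentials: {}
criteria:
  - env_type: development
```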

To patch the Resource Definition through a one-time action using the Humanitec API directly, use this command:

curl -s https://api.humanitec.io/orgs/${HUMANITEC_ORG}/resources/defs/${K8S_DEFINITION_ID} \
  -X PATCH \
  -H "Authorization: Bearer ${HUMANITEC_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
  "name": "'${K8S_DEFINITION_NAME}'",
  "driver_inputs": {
    "secrets": {
      "agent_url": "${resources['\''agent#agent'\''].outputs.url}"
    }
  }
}'

Add these elements to the humanitec_resource_definition resource for your cluster:

resource "humanitec_resource_definition" "my-cluster" {
  driver_inputs = {
    secrets_string = jsonencode({
      "agent_url"   = "$${resources['agent#agent'].outputs.url}"
      "credentials" = ...
    })
  }
}

Then use terraform apply to apply the change.

Align matching criteria of the cluster and Agent

Because the Resource Definition of the target cluster references the Shared Resource of the Humanitec Agent, make sure an Agent Resource will always be available in your Deployments involving the cluster. In practice, this means aligning the matching criteria of both Resource Definitions.

Check both matching criteria specifications and make sure this is the case. Deployments using a cluster Resource Definition with an agent_url configured will fail if no Humanitec Agent Resource Definition was matched as well.
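For example, if the cluster Resource Definition matches on a particular environment type, the Agent Resource Definition should carry criteria at least as broad, so that both always match together. The values below are illustrative:

```yaml
# Cluster Resource Definition (excerpt) - illustrative criteria
criteria:
  - env_type: development

# Agent Resource Definition (excerpt) - same scope or broader,
# so an Agent Resource is matched whenever the cluster is
criteria:
  - env_type: development
```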

Perform a test Deployment

Check whether you can now deploy to your cluster via the Agent. You can use any of your existing Applications, or follow the example below.

  1. To follow the example, create this Score file:
cat <<EOF > score.yaml
apiVersion: score.dev/v1b1
metadata:
  name: agent-app
containers:
  main:
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do printenv && sleep 60; done"]
EOF

You may need to adjust the image to reference any image registry and image that is available to your cluster. That image may or may not require specifying command and/or args. It’s recommended to configure a container that creates some log output for later testing.

  2. Create an Application to deploy into, unless you’re using an existing one:
humctl create application agent-app
  3. Before deploying, make sure that the matching criteria of both the cluster Resource Definition and the Humanitec Agent Resource Definition will match the target Application and Environment.
  4. Deploy the Score file:
score-humanitec delta --deploy -f score.yaml \
  --org ${HUMANITEC_ORG} \
  --app agent-app \
  --env development \
  --token ${HUMANITEC_TOKEN}
  5. Examine the most recent Deployment:
humctl get deploy . --app agent-app --env development -o yaml

You should see status: succeeded in the output.

  6. Check the Active Resources of the current Deployment in the target Environment:
humctl get active-resources --app agent-app --env development

You should see a Resource with a type of agent. This confirms an Agent was matched for the Deployment, and because it is referenced by the cluster Resource Definition, an active Shared Resource of type agent was created.

  7. Check the container logs of the deployed container (remember to adjust parameters if not following the example):
humctl api get \
  "<logs-endpoint-path>?deployment_id=$(humctl get deploy . --app agent-app --env development -o yaml | yq '')"

You should see the log output generated by the Application container. This confirms the Agent can transport this content back to the Platform Orchestrator via the secure tunnel.

  8. (Recommended) Check the logs of the Humanitec Agent container using an access method you have available.

The procedure to read logs from the container differs depending on the Agent’s execution environment.

If you can access logs, you should see an output similar to this:

time=2030-12-21T17:00:47.392Z level=INFO msg="new request from wstunnel" method=CONNECT uri=//
time=2030-12-21T17:00:47.395Z level=INFO msg="copying data"
time=2030-12-21T17:00:47.395Z level=INFO msg="copying from tunnel to target connection"
time=2030-12-21T17:00:47.395Z level=INFO msg="copying from target connection to tunnel"
time=2030-12-21T17:01:17.307Z level=INFO msg="new request from wstunnel" method=CONNECT uri=//
time=2030-12-21T17:01:17.312Z level=INFO msg="copying data"
time=2030-12-21T17:01:17.312Z level=INFO msg="copying from tunnel to target connection"
time=2030-12-21T17:01:17.312Z level=INFO msg="copying from target connection to tunnel"

This confirms that the Humanitec Agent has indeed been forwarding data from and to the secure tunnel.


Troubleshooting

Connectivity issues

If you suspect connectivity issues, check the Troubleshooting section of the Humanitec Agent installation guide.

Deployment error

If your Application Deployment fails with an error, the Resource Definition of your cluster might be referencing a Shared Agent Resource in its agent_url property while no such Resource is present in the Deployment.

Check whether an Agent is present in the Shared Resources of the Deployment. If not, check the matching criteria of the Humanitec Agent Resource Definition and adjust them to match the target Application and Environment.

Next steps

If you haven’t already, consider using the Humanitec Operator on your private cluster to connect your internal secret stores.