Deploy to a private cluster using the Humanitec Agent
This article describes how to deploy Workloads to a Kubernetes cluster on a private network (“private cluster”) using the Humanitec Agent.
Prerequisites
To get started you’ll need:
- A Kubernetes cluster as the target for Workload deployments.
  - The cluster’s API server endpoint is not reachable from the Platform Orchestrator.
  - The cluster is configured as a Resource Definition in your Humanitec Organization.
- The Humanitec Agent installed and registered in your infrastructure.
  - The cluster’s API server endpoint is reachable from the Humanitec Agent.
- The humctl Humanitec CLI installed.
- (Recommended) Access to the log output of the Humanitec Agent. If the Agent is running on the same (private) target cluster, the access method could be kubectl run from a cloud shell. Otherwise it could be the cloud UI providing access to logs, or a tunneling option of the cloud provider’s CLI.
Prepare your environment
Set these variables to connect to your Humanitec Organization and the installed Agent:
export HUMANITEC_ORG=<your-humanitec-org-id>
export HUMANITEC_TOKEN=<your-humanitec-api-token>
export AGENT_ID=<your-agent-id>
export K8S_DEFINITION_ID=<your-cluster-resource-definition-id>
export K8S_DEFINITION_NAME=<your-cluster-resource-definition-name>
The AGENT_ID must be set to the ID you used when installing and registering your Humanitec Agent. The K8S_DEFINITION_ID is the ID of the Resource Definition for the target Kubernetes cluster, and K8S_DEFINITION_NAME is the display name set in that Definition.
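If you need to look up the ID of a registered Agent, you can list the Agents in your Organization through the API. This is a sketch; it assumes your API token is allowed to read Agent registrations and that the response is a JSON array of Agent objects with an id field:
# List registered Agents and print their IDs (assumes a JSON array response)
humctl api get /orgs/${HUMANITEC_ORG}/agents | yq '.[].id'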
Configure the cluster to use the Agent
As part of the Agent installation process, you already created a Resource Definition of type: agent with the ID AGENT_ID.
You can now reference an Agent Resource from the Resource Definition of the target cluster. Specify the driver input property agent_url as ${resources['agent#agent'].outputs.url}. The reference to resources['agent#agent'] will use the matching Resource Definition to provide an actual active Agent Resource for the Deployment.
The agent_url property is supported in all Drivers of type k8s-cluster. Note that it is a secret property and must be placed in the “secrets” section of the Resource Definition structure.
Update the Resource Definition like this:
- From the left navigation menu, select “Resource Management”.
- Select the Resource Definition of your cluster.
- Select the “Configuration” tab.
- Select “Edit configuration”.
- Enter this value into the “Agent URL” field:
${resources['agent#agent'].outputs.url}
- Select “Save”.
To patch the Resource Definition through a one-time action, use this command:
humctl api PATCH /orgs/${HUMANITEC_ORG}/resources/defs/${K8S_DEFINITION_ID} -d '{
  "name": "'"${K8S_DEFINITION_NAME}"'",
  "driver_inputs": {
    "secrets": {
      "agent_url": "${resources['\''agent#agent'\''].outputs.url}"
    }
  }
}'
To add the secret value to a YAML representation of the Resource Definition, add these elements:
...
entity:
  driver_inputs:
    secrets:
      agent_url: "${resources['agent#agent'].outputs.url}"
...
Then (re-)apply the Resource Definition:
humctl apply -f resource-definition-k8s.yaml
To patch the Resource Definition through a one-time action, use this command:
curl -s https://api.humanitec.io/orgs/${HUMANITEC_ORG}/resources/defs/${K8S_DEFINITION_ID} \
  -X PATCH \
  -H "Authorization: Bearer ${HUMANITEC_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "'"${K8S_DEFINITION_NAME}"'",
    "driver_inputs": {
      "secrets": {
        "agent_url": "${resources['\''agent#agent'\''].outputs.url}"
      }
    }
  }'
Add these elements to the humanitec_resource_definition resource for your cluster:
resource "humanitec_resource_definition" "my-cluster" {
  ...
  driver_inputs = {
    secrets_string = jsonencode({
      "agent_url"   = "$${resources['agent#agent'].outputs.url}"
      "credentials" = ...
    })
  }
  ...
}
Then use terraform apply to apply the change.
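To verify the change, you can read back the Resource Definition and inspect its driver inputs. Note that secret inputs are generally not returned in plaintext, so expect a reference or placeholder rather than the raw value:
# Fetch the cluster Resource Definition and show its driver inputs
humctl api get /orgs/${HUMANITEC_ORG}/resources/defs/${K8S_DEFINITION_ID} | yq '.driver_inputs'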
Align matching criteria of the cluster and Agent
Because the Resource Definition of the target cluster references the Shared Resource of the Humanitec Agent, make sure this Agent Resource will always be available in your Deployments involving the cluster. In practice, this means aligning the matching criteria of both Resource Definitions.
Check both matching criteria specifications and make sure that this is the case. Deployments using a cluster Resource Definition with an agent_url configured will fail if no Humanitec Agent Resource Definition was matched as well.
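For example, if the cluster Resource Definition is matched to all Environments of type development, the Agent Resource Definition should carry a criterion that is at least as broad. A minimal sketch of the criteria element in the YAML representation of both Definitions; the env_type value is only an example:
# Cluster Resource Definition (type: k8s-cluster)
entity:
  criteria:
    - env_type: development

# Agent Resource Definition (type: agent)
entity:
  criteria:
    - env_type: development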
Perform a test Deployment
Check whether you can now deploy to your cluster via the Agent. You can use any of your existing Applications, or follow the example below.
- To follow the example, create this Score file:
cat <<EOF > score.yaml
apiVersion: score.dev/v1b1
metadata:
  name: agent-app
containers:
  agent-app:
    image: busybox:latest
    command: ["/bin/sh"]
    args: ["-c", "while true; do printenv && sleep 60; done"]
EOF
You may need to adjust the image to reference any image registry and image that is available to your cluster. That image may or may not require specifying command and/or args. It’s recommended to configure a container that creates some log output for later testing.
- Create an Application to deploy into unless you’re using an existing one:
humctl create application agent-app
- Before deploying, make sure that the matching criteria of both the cluster Resource Definition and the Humanitec Agent Resource Definition will match the target Application and Environment.
For all subsequent commands, adjust --app and --env as needed if you are using your own Application. The example uses the Application agent-app and the Environment development.
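One way to verify this before deploying is to read the matching criteria of both Definitions from the API. This is a sketch; <your-agent-resource-definition-id> is a placeholder for the ID of your Agent Resource Definition:
# Show the matching criteria of the cluster and the Agent Resource Definitions
humctl api get /orgs/${HUMANITEC_ORG}/resources/defs/${K8S_DEFINITION_ID} | yq '.criteria'
humctl api get /orgs/${HUMANITEC_ORG}/resources/defs/<your-agent-resource-definition-id> | yq '.criteria'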
- Deploy the Score file:
humctl score deploy -f score.yaml \
--app agent-app \
--env development
- Examine the most recent Deployment:
humctl get deploy . --app agent-app --env development -o yaml
You should see status: succeeded in the output.
- Check the Active Resources of the current Deployment in the target Environment:
humctl get active-resources --app agent-app --env development
You should see a Resource with a type of agent. This confirms an Agent was matched for the Deployment and, because it is referenced by the cluster Resource Definition, an active Shared Resource of type: agent was created.
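If the list is long, you can filter the YAML output for the Agent entry. This sketch assumes the humctl YAML output is a list of items with a .metadata.type field; adjust the path if your humctl version renders it differently:
# Show only the active Resource of type "agent" (field path is an assumption)
humctl get active-resources --app agent-app --env development -o yaml \
  | yq '.[] | select(.metadata.type == "agent")'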
- Check the container logs of the deployed container (remember to adjust parameters if not following the example):
humctl api get \
  "/orgs/${HUMANITEC_ORG}/apps/agent-app/envs/development/logs?\
workload_id=agent-app&\
container_id=agent-app&\
deployment_id=$(humctl get deploy . --app agent-app --env development -o yaml | yq '.metadata.id')&\
limit=10"
You should see the log output generated by the Application container. This confirms the Agent can transport this content back to the Platform Orchestrator via the secure tunnel.
- (Recommended) Check the logs of the Humanitec Agent container using an access method you have available.
The procedure to read logs from the container differs depending on the Agent’s execution environment.
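For example, if the Agent runs as a Kubernetes Deployment on the target cluster, reading its logs from a cloud shell might look like the sketch below. The namespace and Deployment name are assumptions based on a default Helm installation; adjust them to your setup:
# Tail the Humanitec Agent logs (namespace and Deployment name are assumptions)
kubectl logs --namespace humanitec-agent deployment/humanitec-agent --tail=50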
If you can access logs, you should see an output similar to this:
tunnel"time=2030-12-21T17:00:47.392Z level=INFO msg="new request from wstunnel" method=CONNECT uri=//mycluster.myprovider.com:443 urlHost=mycluster.myprovider.com:443
time=2030-12-21T17:00:47.395Z level=INFO msg="copying data"
time=2030-12-21T17:00:47.395Z level=INFO msg="copying from tunnel to target connection"
time=2030-12-21T17:00:47.395Z level=INFO msg="copying from target connection to tunnel"
time=2030-12-21T17:01:17.307Z level=INFO msg="new request from wstunnel" method=CONNECT uri=//mycluster.myprovider.com:443 urlHost=mycluster.myprovider.com:443
time=2030-12-21T17:01:17.312Z level=INFO msg="copying data"
time=2030-12-21T17:01:17.312Z level=INFO msg="copying from tunnel to target connection"
time=2030-12-21T17:01:17.312Z level=INFO msg="copying from target connection to tunnel"
This confirms that the Humanitec Agent has indeed been forwarding data from and to the secure tunnel.
Troubleshooting
Connectivity issues
For perceived connectivity issues, check the Troubleshooting section of the Humanitec Agent installation guide.
Deployment error
If your Application Deployment produces this error:
no matching resource definition found for the resource type 'agent.default'
It likely means that the Resource Definition of your cluster references a Shared Agent Resource in its agent_url property, but no such Resource is present in the Deployment.
Check whether an Agent is present in the Shared Resources of the Deployment. If not, check the matching criteria of the Humanitec Agent Resource Definition and adjust them to match the target Application and Environment.
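If a criterion for the target Environment is missing on the Agent Resource Definition, you can add one through the criteria API. A sketch; the Definition ID and the env_type value are placeholders to adjust:
# Add a matching criterion to the Agent Resource Definition (ID is a placeholder)
humctl api post /orgs/${HUMANITEC_ORG}/resources/defs/<your-agent-resource-definition-id>/criteria \
  -d '{"env_type": "development"}'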
Next steps
If you haven’t already, consider using the Humanitec Operator on your private cluster to connect your internal secret stores.