Installation

This document will guide you through the installation of the Humanitec Operator into your infrastructure.

Prerequisites

To get started you’ll need:

  • Helm installed, version 3 or later.
  • A Kubernetes (K8s) cluster version 1.23 or later.
  • Access to the cluster via kubectl.
  • Unless you are using the Humanitec Agent, the Kubernetes cluster firewall must allow incoming connections from the Humanitec source IPs to the cluster API server endpoint.
  • If you intend to use any Humanitec-hosted resource drivers, the execution environment firewall must allow TCP connections to drivers.humanitec.io on port 443.
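As a quick preflight for the last prerequisite, you can probe the Drivers endpoint from your execution environment (a sketch; any tool that can open a TCP connection on port 443 will do):

```shell
# Probe drivers.humanitec.io over HTTPS; any HTTP status code (even 4xx)
# proves that outbound TCP 443 is open. "000" indicates no connection.
status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 https://drivers.humanitec.io || true)
echo "drivers.humanitec.io returned HTTP ${status:-000}"
```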

Install with Helm

Install the latest version of the Helm chart:

helm install humanitec-operator \
  oci://ghcr.io/humanitec/charts/humanitec-operator \
  --namespace humanitec-operator-system \
  --create-namespace

To install a particular version, add the --version parameter to the command.

You can obtain the available Helm values by inspecting the values.yaml and template files of the chart. Download and unpack it using helm pull --untar oci://ghcr.io/humanitec/charts/humanitec-operator.

When using a Helm version prior to 3.8.0, you must manually enable OCI registry support. See this article for instructions.
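For reference, on Helm versions before 3.8.0, OCI registry support is gated behind an environment variable:

```shell
# Helm < 3.8.0 only: OCI registry support must be explicitly enabled
export HELM_EXPERIMENTAL_OCI=1
```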

Verify your installation

Verify that the Humanitec Operator’s controller-manager Pod is running in the target namespace:

kubectl get pods -n humanitec-operator-system

You should see this Pod in a Running state:

NAME                                                    READY   STATUS    RESTARTS   AGE
humanitec-operator-controller-manager-000000000-00000   2/2     Running   0          3m

Beyond this basic verification, we provide test cases for more extensive functional testing of the Humanitec Operator.

Upgrade your installation

To upgrade to a particular version, run this command, replacing <version> with the target version:

helm upgrade humanitec-operator \
  oci://ghcr.io/humanitec/charts/humanitec-operator \
  --namespace humanitec-operator-system \
  --version <version>

Configure authentication for Drivers

To ensure secure access to Humanitec-hosted Drivers, the Humanitec Operator must establish its identity against the Humanitec Driver API. This is done using a private key to sign a token, which the Humanitec Drivers can then validate using the public key.

This step is required only if you’ll be using Humanitec-hosted Drivers. This excludes the Echo and Template Drivers, whose logic is executed inside the Operator itself.

In the Custom Drivers page you can find more details about authentication of Operator requests in the Platform Orchestrator.

Generate public/private key pair

The following commands generate two files, humanitec_operator_private_key.pem and humanitec_operator_public_key.pem, containing a private key and its corresponding public key.

# Generate a new private key
openssl genpkey -algorithm RSA -out humanitec_operator_private_key.pem -pkeyopt rsa_keygen_bits:4096

# Extract the public key from the private key generated in the previous command
openssl rsa -in humanitec_operator_private_key.pem -outform PEM -pubout -out humanitec_operator_public_key.pem
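As an optional sanity check (a sketch using the filenames above), you can confirm the two files form a matching pair by re-deriving the public key from the private key and comparing; the guards regenerate the files if they are missing:

```shell
priv=humanitec_operator_private_key.pem
pub=humanitec_operator_public_key.pem

# Recreate the key pair if the files are not present (same commands as above)
[ -f "$priv" ] || openssl genpkey -algorithm RSA -out "$priv" -pkeyopt rsa_keygen_bits:4096
[ -f "$pub" ]  || openssl rsa -in "$priv" -outform PEM -pubout -out "$pub"

# Re-derive the public key and compare it byte-for-byte with the saved file
openssl rsa -in "$priv" -pubout 2>/dev/null > /tmp/derived_public_key.pem
diff /tmp/derived_public_key.pem "$pub" && echo "key pair matches"
```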

Alternatively, generate the private key with Terraform:

resource "tls_private_key" "operator_private_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

Add a Secret to the Humanitec Operator

The public/private key pair must be made available to the Operator through a K8s Secret.

Set these environment variables:

export HUMANITEC_ORG=<your-humanitec-org-id>
export HUMANITEC_TOKEN=<your-humanitec-api-token>

where:

  • HUMANITEC_ORG holds the Humanitec Organization ID (all lowercase)
  • HUMANITEC_TOKEN holds a valid Humanitec API Token with the Administrator role in the Organization.
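A small sanity check can catch a mixed-case Organization ID before it reaches the Secret (my-example-org is a hypothetical placeholder):

```shell
# Hypothetical placeholder value; use your real Organization ID instead
export HUMANITEC_ORG=my-example-org

# Humanitec Organization IDs are all lowercase; flag anything else early
case "$HUMANITEC_ORG" in
  *[A-Z]*) echo "HUMANITEC_ORG must be all lowercase" >&2 ;;
  *)       echo "HUMANITEC_ORG looks valid" ;;
esac
```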

Create the Secret:

kubectl --namespace humanitec-operator-system create secret generic humanitec-operator-private-key \
     --from-file=privateKey=humanitec_operator_private_key.pem \
     --from-literal=humanitecOrganisationID=$HUMANITEC_ORG

Register the public key with the Humanitec Platform Orchestrator

Register the public key in the Humanitec Organization:

curl -s https://api.humanitec.io/orgs/${HUMANITEC_ORG}/keys \
  -X POST \
  -H "Authorization: Bearer $HUMANITEC_TOKEN" \
  -H "Content-Type: application/json" \
  -d "$(cat humanitec_operator_public_key.pem | jq -sR)" \
  | jq

Note the key id for future identification.

Alternatively, register the key using the humctl CLI:

humctl api post /orgs/${HUMANITEC_ORG}/keys \
  -d "$(cat humanitec_operator_public_key.pem | jq -sR)"

Or manage the key with Terraform:

resource "humanitec_key" "operator_public_key" {
  key = var.public_key
  # Alternatively, reference the tls_private_key resource shown above
  # (not recommended for production use):
  # key = tls_private_key.operator_private_key.public_key_pem
}

Note: the jq command is used to encode the public key file as a JSON string.
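To illustrate what jq -sR produces, here is a small offline example (/tmp/demo.txt stands in for the key file):

```shell
# -R reads raw text, -s slurps the whole input into one string, and the
# identity filter prints it as a single JSON-encoded string.
printf 'line1\nline2\n' > /tmp/demo.txt
jq -sR . /tmp/demo.txt
# → "line1\nline2\n"
```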

This step completes setting up the Humanitec Operator authentication for calling Drivers. Note that this does not yet configure a specific secret store; that is done via SecretStore custom resources in the Operator namespace. Follow one of the guides for your secret store type, e.g. HashiCorp Vault.

Clean up

It’s best to store the public/private key pair in a safe place for future use, e.g. in a secret store your organization is using. Refer to the product documentation of that store for details.

Then, remove the generated key pair from your local environment:

rm humanitec_operator_private_key.pem humanitec_operator_public_key.pem

Troubleshoot connectivity

When facing connectivity issues, always ensure that the version of the Humanitec Operator is updated to the latest version. Improvements and bug fixes released over time may resolve the issue or make it easier to troubleshoot.

To check whether your Humanitec Operator installation has the required access to the Platform Orchestrator, you must verify that a secure connection can be made to the https://drivers.humanitec.io API. This can be checked in three ways:

  1. Using chart version 0.3.6 or later, or image version 0.16.3 or later, verify that the controller-manager Deployment in Kubernetes is ready. The readiness probe for this application periodically checks connectivity to Humanitec.
kubectl get pods -l control-plane=controller-manager -n humanitec-operator-system
  2. Or, execute the check-connectivity probe from within one of the Pods:
pod=$(kubectl get pods -l control-plane=controller-manager -n humanitec-operator-system -o name)
kubectl exec $pod -n humanitec-operator-system -c manager -- /manager --check-connectivity
  3. Or, if using an older version of the Operator, manually deploy a Job containing the curl binary to the Kubernetes cluster and verify that the following command succeeds:
job=$(kubectl create job --image curlimages/curl connectivity-test-$(date +%s) -o name -- curl -I --fail https://drivers.humanitec.io)
kubectl logs $job

Disabling the driver readiness probe

The Humanitec Operator readiness probe checks connectivity to the https://drivers.humanitec.io endpoint. However, this may not be desirable when running in a fully disconnected network environment or when no Humanitec-hosted drivers are needed. To disable this probe, set the --driver-readiness-url argument to an empty value by overriding the Helm chart values. Create a values-override.yaml file, copy the controllerManager.manager.args entries from the chart’s default values.yaml, and set the additional flag:

controllerManager:
  manager:
    args:
    - --health-probe-bind-address=:8081
    - --metrics-bind-address=127.0.0.1:8080
    - --leader-elect
    - --driver-readiness-url=
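For convenience, the override file above can be written in one step and then applied with helm upgrade (cluster access is required for the upgrade itself):

```shell
# Write the override file shown above
cat > values-override.yaml <<'EOF'
controllerManager:
  manager:
    args:
    - --health-probe-bind-address=:8081
    - --metrics-bind-address=127.0.0.1:8080
    - --leader-elect
    - --driver-readiness-url=
EOF

# Then apply it, e.g.:
#   helm upgrade humanitec-operator \
#     oci://ghcr.io/humanitec/charts/humanitec-operator \
#     --namespace humanitec-operator-system \
#     --values values-override.yaml
```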

Using HTTP or HTTPS proxy to connect Humanitec-hosted drivers

(Optional) The Humanitec Operator can be configured to use an HTTP or HTTPS proxy in order to connect to Humanitec-hosted drivers.

For this, you can override the arguments for the Operator controller manager using Helm chart values. Create a values-override.yaml file, copy the controllerManager.manager.args entries from the chart’s default values.yaml, and add the following arguments:

  • --driver-proxy: The proxy to use for requests to Drivers, supplied as a URL, e.g. https://user:password@httpproxy:3128.
  • --driver-proxy-ca-cert-file (optional): Path to a file containing one or more Certificate Authority (CA) certificates, as PEM-encoded X.509 certificates, to use for connections to the HTTPS proxy. The file can be mounted into the Operator controller container as a Kubernetes volume.
  • --driver-proxy-tls-insecure-skip-verify (optional): If set, the HTTPS proxy’s certificate is not verified.

Example:

controllerManager:
  manager:
    args:
    - --health-probe-bind-address=:8081
    - --metrics-bind-address=127.0.0.1:8080
    - --leader-elect
    - --driver-proxy=https://user:password@httpproxy:3128
    - --driver-proxy-ca-cert-file=/keys/httpproxy.crt

Upgrade your operator installation:

helm upgrade humanitec-operator \
  oci://ghcr.io/humanitec/charts/humanitec-operator \
  --namespace humanitec-operator-system \
  --values values-override.yaml

Uninstalling

Before continuing, ensure that all custom resources created while using the Humanitec Operator have been deleted. You can check for any remaining resources with this command:

kubectl get Resources,SecretMappings,SecretStores,WorkloadPatches,Workloads -A

Once all these resources have been deleted, you’re ready to uninstall the Operator:

helm uninstall humanitec-operator -n humanitec-operator-system

Delete the Humanitec Operator namespace:

kubectl delete namespace humanitec-operator-system

Delete the Operator public key from the Platform Orchestrator. If you are managing the key via a Terraform humanitec_key resource, use the Terraform mechanisms to remove that resource. Otherwise execute this command using the key id that was generated during key registration:

humctl api delete /orgs/${HUMANITEC_ORG}/keys/<id>

To see all registered keys, use this command:

humctl api get /orgs/${HUMANITEC_ORG}/keys

Alternatively, using curl:

curl -s https://api.humanitec.io/orgs/${HUMANITEC_ORG}/keys/<id> \
  -X DELETE \
  -H "Authorization: Bearer $HUMANITEC_TOKEN"

To see all registered keys:

curl -s https://api.humanitec.io/orgs/${HUMANITEC_ORG}/keys \
  -H "Authorization: Bearer $HUMANITEC_TOKEN" \
  | jq

CRD Handling

The Humanitec Operator bundles all CRDs along with the other templates in the Helm chart. This improves ease of use and makes upgrading CRDs possible with Helm alone. The separate CRD handling described in the Helm team’s best practices does not apply here.

This means that if you uninstall the Helm release, the CRDs will also be uninstalled. You’ll then lose all instances of those CRDs, e.g. all SecretStore resources in the cluster. Consider preparing a mitigation against accidental deletion so you have a means to reapply resources e.g. from an Infrastructure as Code (IaC) pattern.
