kubernetes

The kubernetes runner type lets you execute runners on any Kubernetes cluster via configuration from a kube config.

The Orchestrator will create Kubernetes Jobs directly. This runner type does not require a runner agent.

To access the cluster, the Orchestrator uses values from a kube config file.

Follow the steps below to configure a runner of type kubernetes.

Example configuration

runner-config.yaml:

runner_configuration:
  type: kubernetes
  cluster:
    cluster_data:
      certificate_authority_data: LS0tLS1CRUdJTiBDRVJUS...
      server: https://kubernetes.default.svc.cluster.local
    auth:
      client_certificate_data: LS0tLS1CRUdJTiBDRVJUS...
      client_key_data: LS0tLS1CRUdJTiBSU0EgUFJJ...
  job:
    namespace: humanitec-runner
    service_account: humanitec-runner
state_storage_configuration:
  ...

Before you begin

You will need the following resources and permissions:

  • A Kubernetes cluster with a publicly available API server endpoint
  • The kubectl CLI installed and the current context set to target the Kubernetes cluster
  • The hctl CLI installed and authenticated against your Orchestrator organization
  • A project in your Orchestrator organization

The runner supports only client certificate authentication to the cluster. Use this command to check whether the user of the current context uses a client certificate:

kubectl config view --minify -o jsonpath='{.users[0].user.client-certificate-data}'

The command should output DATA+OMITTED. Empty output means the current context does not use a client certificate.
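Clusters whose kube config relies on an exec credential plugin (common on managed cloud clusters) do not use a client certificate and will not work with this runner type. As an additional check, you can inspect the current context for such a plugin; this is a sketch using the standard kube config structure:

```shell
# Print the exec credential plugin command of the current context's user, if any.
# Non-empty output (e.g. gke-gcloud-auth-plugin) means the context authenticates
# via a plugin rather than a client certificate and cannot be used here.
kubectl config view --minify -o jsonpath='{.users[0].user.exec.command}'
```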

Prepare your environment

Perform the following setup to define the target cluster.

  1. Define the cluster name
export CLUSTER_NAME=<my-cluster-name>
  2. Define the runner name
export RUNNER_ID=kubernetes-${CLUSTER_NAME}

Prepare the kubernetes cluster

Perform the following setup on the target cluster for the runner.

  1. Create a Kubernetes namespace where your runner will run
export RUNNER_K8S_NAMESPACE=humanitec-runner

kubectl create namespace ${RUNNER_K8S_NAMESPACE}
  2. Create a Kubernetes service account for the runner
export RUNNER_K8S_SA=humanitec-runner

kubectl create serviceaccount -n ${RUNNER_K8S_NAMESPACE} ${RUNNER_K8S_SA}
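Before proceeding, you can verify that both resources exist:

```shell
# Confirm the namespace and the service account were created
kubectl get namespace ${RUNNER_K8S_NAMESPACE}
kubectl get serviceaccount ${RUNNER_K8S_SA} -n ${RUNNER_K8S_NAMESPACE}
```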

Assign permissions to the Orchestrator

Create and assign a Kubernetes Role

Prepare a Role and RoleBinding. These permissions effectively enable the Orchestrator to create Jobs in the runner namespace.

cat << EOF > kubernetes-runner-k8s-role-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: humanitec-runner-orchestrator-access
  namespace: ${RUNNER_K8S_NAMESPACE}
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: humanitec-runner-orchestrator-access
  namespace: ${RUNNER_K8S_NAMESPACE}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: humanitec-runner-orchestrator-access
subjects:
- kind: ServiceAccount
  name: ${RUNNER_K8S_SA}
EOF

Apply the file:

kubectl apply -f kubernetes-runner-k8s-role-rolebinding.yaml
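To confirm the Role and RoleBinding are effective, you can impersonate the service account and check the permission with kubectl auth can-i:

```shell
# Should print "yes" if the RoleBinding grants Job creation to the service account
kubectl auth can-i create jobs -n ${RUNNER_K8S_NAMESPACE} \
  --as=system:serviceaccount:${RUNNER_K8S_NAMESPACE}:${RUNNER_K8S_SA}
```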

Configure a state storage

Decide which state storage type the runner should use. Check the compatibility matrix and perform the setup according to the documentation of the chosen state storage type.

Prepare a local file state-storage-config.yaml containing the properties for the state storage configuration underneath the top-level property state_storage_configuration.

state_storage_configuration:
  ...

Create a runner

In this example, we will use the cluster you are currently connected to via your local kube config. The command kubectl config view --minify reads your current kube config context, and the jsonpath parameter extracts the required values.

  1. Prepare the runner configuration

Connect to the Kubernetes cluster you would like to register for the runner.

Prepare the runner_configuration in a local file:

cat <<EOF > runner-config.yaml
runner_configuration:
  type: kubernetes
  cluster:
    cluster_data:
      certificate_authority_data: $(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' --raw)
      server: $(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}' --raw)
    auth:
      client_certificate_data: $(kubectl config view --minify -o jsonpath='{.users[0].user.client-certificate-data}' --raw)
      client_key_data: $(kubectl config view --minify -o jsonpath='{.users[0].user.client-key-data}' --raw)
  job:
    namespace: ${RUNNER_K8S_NAMESPACE}
    service_account: ${RUNNER_K8S_SA}
EOF
  2. Append the state storage configuration you created earlier

Append the state storage config file prepared in the previous section to the existing runner configuration:

cat state-storage-config.yaml >> runner-config.yaml

If you manage the runner with the Terraform provider instead, add the state_storage_configuration block you prepared to the platform-orchestrator_kubernetes_runner resource:

resource "platform-orchestrator_kubernetes_runner" "example" {

  # ...

  state_storage_configuration = {
    # Your prepared state storage configuration
    # ...
  }
}
  3. Verify the runner configuration

Verify the structure of the configuration file. It needs to have the top-level properties as shown:

cat runner-config.yaml
runner_configuration:
  ...
state_storage_configuration:
  ...

If you use the Terraform provider, the resource needs the same top-level attributes:

resource "platform-orchestrator_kubernetes_runner" "example" {

  runner_configuration = {
    # ...
  }
  state_storage_configuration = {
    # ...
  }
}
  4. Create the runner

Create the runner using the configuration prepared previously:

hctl create runner ${RUNNER_ID} \
  [email protected]

If you use the Terraform provider, apply the Terraform configuration you created instead.

Create runner rules

Add any runner rules for the newly created runner.

Setting sensitive environment variables

The runner_configuration.job.pod_template field contains a Kubernetes pod template you can set to extend the runtime configuration of the runner. The pod template expects a pod spec structure with a container named main. You can set secret environment variables by referencing existing secrets within the same target namespace of the runner pod. For example, to mount the value of the key field within a secret named my-secret into the environment variable TF_EXAMPLE, set the pod template as follows.

Using the Terraform provider:

  runner_configuration = {
    job = {
      pod_template = jsonencode({
        spec = {
          containers = [
            {
              name = "main"
              env = [
                {
                  name = "TF_EXAMPLE"
                  valueFrom = {
                    secretKeyRef = {
                      name = "my-secret"
                      key  = "key"
                    }
                  }
                }
              ]
            }
          ]
        }
      })
    }
  }

Using the runner configuration YAML:

runner_configuration:
  job:
    pod_template:
      spec:
        containers:
          - name: main
            env:
              - name: TF_EXAMPLE
                valueFrom:
                  secretKeyRef:
                    name: my-secret
                    key: key

The service account used by the runner must have permissions to get the secret.
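For example, you can grant this with a Role scoped to the specific secret and a RoleBinding for the runner service account. The resource name humanitec-runner-secret-reader is illustrative, and my-secret is the example secret from above:

```shell
# Allow the runner service account to read only the referenced secret
kubectl create role humanitec-runner-secret-reader \
  -n ${RUNNER_K8S_NAMESPACE} \
  --verb=get --resource=secrets --resource-name=my-secret

kubectl create rolebinding humanitec-runner-secret-reader \
  -n ${RUNNER_K8S_NAMESPACE} \
  --role=humanitec-runner-secret-reader \
  --serviceaccount=${RUNNER_K8S_NAMESPACE}:${RUNNER_K8S_SA}
```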

Environment variables that are not secret or sensitive can be set directly in the env structure.
