kubernetes-gke

The kubernetes-gke runner type lets you execute runners on Google Kubernetes Engine (GKE) clusters that have a publicly accessible API server.

The Orchestrator creates Kubernetes Jobs on the cluster directly. This runner type does not require a runner agent.

To access the cluster, the Orchestrator uses temporary credentials obtained via service account impersonation in conjunction with OIDC-based Workload Identity Federation.

Follow the steps below to configure a runner of type kubernetes-gke.

Example configuration

runner-config.yaml:

runner_configuration:
  type: kubernetes-gke
  cluster:
    name: my-cluster
    project_id: my-gcp-project
    location: europe-west3
    auth:
      gcp_audience: //iam.googleapis.com/projects/123456789012/locations/global/workloadIdentityPools/my-wif-pool/providers/humanitec-runner
      gcp_service_account: humanitec-orchestrator-runner@my-gcp-project.iam.gserviceaccount.com
  job:
    namespace: humanitec-runner
    service_account: humanitec-runner
state_storage_configuration:
  ...

See all configuration options further down.

Before you begin

You will need the following resources and permissions:

  • The gcloud CLI installed and authenticated with a principal that has permission to manage workload identity pools, service accounts, and IAM policies in the Google Cloud project of the target GKE cluster
  • A GKE cluster with a publicly available API server endpoint and Workload Identity Federation enabled
  • The kubectl CLI installed and the current context set to target the GKE cluster (see the command after this list), using a principal with permission to create namespaces, service accounts, Roles, and RoleBindings on it
  • The hctl CLI installed and authenticated against your Orchestrator organization
  • A project in your Orchestrator organization
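If kubectl is not yet pointing at the cluster, one way to set the current context is the standard gcloud command below; the placeholder values are illustrative:

gcloud container clusters get-credentials <my-cluster-name> \
    --location <my-cluster-region>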

Prepare the cloud environment

Perform the following setup in the GCP project of the target GKE cluster.

  1. Set Google Cloud and GKE values

Set the proper values for your Orchestrator organization and Google Cloud project:

export HUMANITEC_ORG=<my-org>
export GCP_PROJECT_ID=<my-gcp-project-id>

Set the gcloud CLI to your project:

gcloud config set project $GCP_PROJECT_ID

Set the proper values for your GKE cluster:

export CLUSTER_NAME=<my-cluster-name>
export CLUSTER_LOCATION=<my-cluster-region>
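You can optionally confirm that the cluster exposes a public endpoint (this prints the endpoint address of the cluster):

gcloud container clusters describe ${CLUSTER_NAME} \
    --location ${CLUSTER_LOCATION} \
    --format='get(endpoint)'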
  2. Create a workload identity pool

You may use an existing workload identity pool instead of creating a new one. To do so, set WIP_NAME to the name of your existing pool and skip the create command.

export WIP_NAME=humanitec-runner-pool

gcloud iam workload-identity-pools create ${WIP_NAME} \
--location="global" \
--project ${GCP_PROJECT_ID}
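You can optionally verify that the pool exists:

gcloud iam workload-identity-pools describe ${WIP_NAME} \
    --location=global \
    --project=${GCP_PROJECT_ID}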
  3. Create a workload identity provider in the pool
gcloud iam workload-identity-pools providers create-oidc humanitec-runner \
    --location="global" \
    --workload-identity-pool=${WIP_NAME} \
    --issuer-uri="https://oidc.humanitec.dev" \
    --attribute-mapping="google.subject=assertion.sub" \
    --project=${GCP_PROJECT_ID}
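The attribute mapping above sets the sub claim of the Orchestrator's OIDC token as the Google subject, which is why the IAM policy binding below targets a subject of the form <org>+<runner-id>. You can optionally print the provider's full resource name; the gcp_audience used later is this name prefixed with //iam.googleapis.com/:

gcloud iam workload-identity-pools providers describe humanitec-runner \
    --workload-identity-pool=${WIP_NAME} \
    --location=global \
    --format='value(name)'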
  4. Create a Google Cloud service account
export GCP_SERVICE_ACCOUNT_NAME=humanitec-runner-k8s-gke

gcloud iam service-accounts create ${GCP_SERVICE_ACCOUNT_NAME} \
    --description="Used by Humanitec Orchestrator to access GKE clusters for launching runners" \
    --display-name=${GCP_SERVICE_ACCOUNT_NAME} \
    --project=${GCP_PROJECT_ID}
  5. Define the runner name
export RUNNER_ID=kubernetes-gke-${CLUSTER_NAME}-${CLUSTER_LOCATION}
  6. Add an IAM policy binding that allows the workload identity federation principal to impersonate the service account
export GCP_SERVICE_ACCOUNT_EMAIL=${GCP_SERVICE_ACCOUNT_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com
export GCP_PROJECT_NUMBER=$(gcloud projects describe ${GCP_PROJECT_ID} --format='get(projectNumber)')

gcloud iam service-accounts add-iam-policy-binding ${GCP_SERVICE_ACCOUNT_EMAIL} \
    --member=principal://iam.googleapis.com/projects/${GCP_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${WIP_NAME}/subject/${HUMANITEC_ORG}+${RUNNER_ID} \
    --role=roles/iam.workloadIdentityUser \
    --format=json
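To double-check the binding, list the IAM policy of the service account; the workload identity federation principal should appear with the roles/iam.workloadIdentityUser role:

gcloud iam service-accounts get-iam-policy ${GCP_SERVICE_ACCOUNT_EMAIL}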

Prepare the GKE cluster

Perform the following setup on the target GKE cluster for the runner.

  1. Create a Kubernetes namespace where your runner will run
export RUNNER_K8S_NAMESPACE=humanitec-runner

kubectl create namespace ${RUNNER_K8S_NAMESPACE}
  2. Create a Kubernetes service account for the runner
export RUNNER_K8S_SA=humanitec-runner

kubectl create serviceaccount -n ${RUNNER_K8S_NAMESPACE} ${RUNNER_K8S_SA}

kubectl annotate serviceaccount ${RUNNER_K8S_SA} \
    --namespace ${RUNNER_K8S_NAMESPACE} \
    iam.gke.io/gcp-service-account=${GCP_SERVICE_ACCOUNT_EMAIL}
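You can verify that the annotation linking the Kubernetes service account to the Google Cloud service account is in place:

kubectl get serviceaccount ${RUNNER_K8S_SA} \
    --namespace ${RUNNER_K8S_NAMESPACE} \
    -o jsonpath='{.metadata.annotations}'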

Assign permissions to the Orchestrator

Prepare and assign the required permissions to the GCP service account used by the Orchestrator.

  1. Create an IAM custom role

The permissions on this role only enable the Orchestrator to access the GKE cluster in general. Fine-grained permissions on the cluster itself will be defined via a Kubernetes Role further down.

Prepare a YAML file for the role:

export IAM_ROLE_ID=humanitec_runner_kubernetes_gke

cat <<EOF > kubernetes-gke-runner-iam-custom-role.yaml
title: ${IAM_ROLE_ID}
description: Access for the Humanitec Orchestrator to GKE clusters for launching runners
includedPermissions:
- "container.clusters.get"
EOF

See Create a custom role for further configuration options.

Create the custom role:

gcloud iam roles create ${IAM_ROLE_ID} --project=${GCP_PROJECT_ID} \
  --file=kubernetes-gke-runner-iam-custom-role.yaml
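You can optionally verify the role and its permissions:

gcloud iam roles describe ${IAM_ROLE_ID} --project=${GCP_PROJECT_ID}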
  2. Grant the custom role to the GCP service account used by the Orchestrator
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
    --member="serviceAccount:${GCP_SERVICE_ACCOUNT_EMAIL}" \
    --role="projects/${GCP_PROJECT_ID}/roles/${IAM_ROLE_ID}"

This command will grant the role on the Google Cloud project level. See Grant a role for further configuration options.
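You can optionally confirm the grant by filtering the project IAM policy for the service account:

gcloud projects get-iam-policy ${GCP_PROJECT_ID} \
    --flatten="bindings[].members" \
    --filter="bindings.members:${GCP_SERVICE_ACCOUNT_EMAIL}" \
    --format="table(bindings.role)"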

  3. Create and assign a Kubernetes Role

Prepare a Role and RoleBinding. These permissions effectively enable the Orchestrator to create Jobs in the runner namespace.

cat << EOF > kubernetes-gke-runner-k8s-role-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: humanitec-runner-orchestrator-access
  namespace: ${RUNNER_K8S_NAMESPACE}
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs    : ["create", "get"]
  - apiGroups: [""]
    resources: ["pods", "events"]
    verbs    : ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: humanitec-runner-orchestrator-access
  namespace: ${RUNNER_K8S_NAMESPACE}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: humanitec-runner-orchestrator-access
subjects:
- kind: User
  name: ${GCP_SERVICE_ACCOUNT_EMAIL}
EOF

Apply the file:

kubectl apply -f kubernetes-gke-runner-k8s-role-rolebinding.yaml
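You can check the effective permissions by impersonating the Google Cloud service account as a Kubernetes user, assuming your own kubectl principal is allowed to impersonate users:

kubectl auth can-i create jobs.batch \
    --namespace ${RUNNER_K8S_NAMESPACE} \
    --as ${GCP_SERVICE_ACCOUNT_EMAIL}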

Configure state storage

Decide which state storage type the runner should use. Check the compatibility matrix and perform the setup according to the documentation for the chosen state storage type.

If you are using the Terraform provider, prepare a TF code snippet containing the properties for the state storage configuration underneath a state_storage_configuration property:

  state_storage_configuration = {
    ...
  }

If you are using the hctl CLI, prepare a local file state-storage-config.yaml containing the properties for the state storage configuration underneath the top-level property state_storage_configuration:

state_storage_configuration:
  ...

Create a runner

  1. Prepare the runner configuration

Prepare the GCP audience for Workload Identity:

export GCP_AUDIENCE=//iam.googleapis.com/projects/${GCP_PROJECT_NUMBER}/locations/global/workloadIdentityPools/${WIP_NAME}/providers/humanitec-runner

If you are using the Terraform provider, set these values in the runner_configuration block of your runner resource (shown in the next step). If you are using the hctl CLI, prepare the runner_configuration in a local file:

cat <<EOF > runner-config.yaml
runner_configuration:
  type: kubernetes-gke
  cluster:
    name: ${CLUSTER_NAME}
    project_id: ${GCP_PROJECT_ID}
    location: ${CLUSTER_LOCATION}
    auth:
      gcp_audience: ${GCP_AUDIENCE}
      gcp_service_account: ${GCP_SERVICE_ACCOUNT_EMAIL}
  job:
    namespace: ${RUNNER_K8S_NAMESPACE}
    service_account: ${RUNNER_K8S_SA}
EOF
  2. Append the state storage configuration you created earlier

If you are using the Terraform provider, add the state_storage_configuration block you prepared to the platform-orchestrator_kubernetes_gke_runner resource:

resource "platform-orchestrator_kubernetes_gke_runner" "example" {

  # ...

  state_storage_configuration = {
    # Your prepared state storage configuration
    # ...
  }
}

If you are using the hctl CLI, append the state storage config file prepared in the previous section to the existing runner configuration:

cat state-storage-config.yaml >> runner-config.yaml
  3. Verify the runner configuration

Verify the structure of the runner configuration. It needs to have the top-level properties as shown.

If you are using the Terraform provider:

resource "platform-orchestrator_kubernetes_gke_runner" "example" {

  runner_configuration = {
    # ...
  }
  state_storage_configuration = {
    # ...
  }
}

If you are using the hctl CLI, check the configuration file:

cat runner-config.yaml
runner_configuration:
  ...
state_storage_configuration:
  ...
  4. Create the runner

Create the runner using the configuration prepared previously.

If you are using the Terraform provider, apply the TF configuration you created.

If you are using the hctl CLI:

hctl create runner ${RUNNER_ID} \
  [email protected]

Create runner rules

Add any runner rules for the newly created runner.

Configuration options

The following YAML configuration shows all available options for the kubernetes-gke runner type:

runner_configuration:
  # Runner type
  type: kubernetes-gke
  
  # GKE cluster configuration
  cluster:
    # Name of the GKE cluster
    name: my-gke-cluster
    
    # GCP project ID
    project_id: my-gcp-project

    # GCP cluster location
    location: europe-west3

    # (optional) Cluster proxy URL
    proxy_url: https://some-proxy-url.com

    # (optional) Whether to use the private endpoint address of the cluster. Defaults to false
    internal_ip: false

    # Authentication options
    auth:
      # The full resource name of the workload identity pool provider, used as the audience for the OIDC token
      gcp_audience: //iam.googleapis.com/projects/123456789012/locations/global/workloadIdentityPools/my-wif-pool/providers/humanitec-runner

      # The Google Cloud service account to impersonate
      gcp_service_account: humanitec-runner-k8s-gke@my-gcp-project.iam.gserviceaccount.com

  # Kubernetes job configuration
  job:
    # Namespace where runner jobs will be created
    namespace: humanitec-runner
    
    # Service account to use for the runner jobs
    service_account: humanitec-runner

    # (optional) Pod template for customizing runner job pods
    pod_template: 
      # Add custom pod specifications here
      # See Kubernetes PodSpec documentation for available options
      ...

# State storage configuration
state_storage_configuration:
  # Add state storage configuration here
  # See State storage types documentation for available options
  ...

Setting sensitive environment variables

The runner_configuration.job.pod_template field contains a Kubernetes pod template you can set to extend the runtime configuration of the runner. The pod template expects a pod spec structure with a container named main. You can set secret environment variables by referencing existing secrets within the same target namespace of the runner pod. For example, to mount the value of the key field within a secret named my-secret into the environment variable TF_EXAMPLE, set the pod template as follows.

If you are using the Terraform provider:

  runner_configuration = {
    job = {
      pod_template = jsonencode({
        spec = {
          containers = [
            {
              name = "main"
              env = [
                {
                  name = "TF_EXAMPLE"
                  valueFrom = {
                    secretKeyRef = {
                      name = "my-secret"
                      key  = "key"
                    }
                  }
                }
              ]
            }
          ]
        }
      })
    }
  }

If you are using the hctl CLI with a YAML configuration file:

runner_configuration:
  job:
    pod_template:
      spec:
        containers:
          - name: main
            env:
              - name: TF_EXAMPLE
                valueFrom:
                  secretKeyRef:
                    name: my-secret
                    key: key

The service account used by the runner must have permissions to get the secret.
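As a minimal sketch, assuming the runner namespace and service account created earlier and a secret named my-secret, a Role and RoleBinding like the following grant that permission (the humanitec-runner-secret-reader name is illustrative):

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: humanitec-runner-secret-reader
  namespace: ${RUNNER_K8S_NAMESPACE}
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["my-secret"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: humanitec-runner-secret-reader
  namespace: ${RUNNER_K8S_NAMESPACE}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: humanitec-runner-secret-reader
subjects:
  - kind: ServiceAccount
    name: ${RUNNER_K8S_SA}
    namespace: ${RUNNER_K8S_NAMESPACE}
EOF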

Environment variables that are not secret or sensitive can be set directly in the env structure.
