kubernetes-agent
The kubernetes-agent runner type lets you execute runners on Kubernetes clusters which do not have a publicly accessible API server.
The kubernetes-agent is a runner agent which runs in your cluster and establishes a secure connection to the Platform Orchestrator. Once connected, the kubernetes-agent runner polls the Orchestrator for events and triggers Kubernetes Jobs in the same cluster where it’s running in response to them.
Follow the steps below to configure a runner of type kubernetes-agent.
Example configuration
resource.tf (view on GitHub):
resource "platform-orchestrator_kubernetes_agent_runner" "my_runner" {
id = "my-runner"
description = "runner for all the envs"
runner_configuration = {
key = <<EOT
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAc5dgCx4ano39JT0XgTsHnts3jej+5xl7ZAwSIrKpef0=
-----END PUBLIC KEY-----
EOT
job = {
namespace = "default"
service_account = "canyon-runner"
pod_template = jsonencode({
metadata = {
labels = {
"app.kubernetes.io/name" = "humanitec-runner"
}
}
})
}
}
state_storage_configuration = {
type = "kubernetes"
kubernetes_configuration = {
namespace = "humanitec"
}
}
}
hctl create runner my-runner [email protected]
where runner-config.yaml is:
runner_configuration:
  type: kubernetes-agent
  key: |
    -----BEGIN PUBLIC KEY-----
    MCowBQYDK2VwAyEAc5dgCx4ano39JT0XgTsHnts3jej+5xl7ZAwSIrKpef0=
    -----END PUBLIC KEY-----
  job:
    namespace: default
    service_account: humanitec-runner
    pod_template:
      metadata:
        labels:
          app.kubernetes.io/name: humanitec-runner
state_storage_configuration:
  type: kubernetes
  namespace: humanitec
...
See all configuration options further down.
Configuration options
The following configuration shows all available options for the kubernetes-agent runner type:
runner_configuration:
  # Runner type
  type: kubernetes-agent
  # Public key used to verify the runner agent's identity
  key: |
    -----BEGIN PUBLIC KEY-----
    ...
    -----END PUBLIC KEY-----
  # Kubernetes job configuration
  job:
    # Namespace where runner jobs run
    namespace: humanitec-runner
    # Service account to use for the runner jobs
    service_account: humanitec-runner
    # (optional) Pod template for customizing runner job pods
    pod_template:
      # Add custom pod specifications here
      # See Kubernetes PodSpec documentation for available options
      ...
# State storage configuration
state_storage_configuration:
  # Add state storage configuration here
  # See State storage types documentation for available options
  ...
Setup guide
The steps below serve as a general guide to configuring the runner using either Terraform/OpenTofu (“TF”) code or the CLI. Adjust them as needed for your environment and requirements.
The setup guide is divided into two sections: one for EKS-specific setup and another for GKE-specific setup. Perform the steps in the section that corresponds to your Kubernetes cluster type.
EKS-specific setup
To have the kubernetes-agent runner in your EKS cluster, you need:
- An existing AWS EKS cluster
- The aws CLI installed and authenticated with a principal having permission to manage IAM roles and policies in the AWS account of the EKS cluster
- The hctl CLI installed and authenticated against your Orchestrator organization
- kubectl installed locally and the current context set to the target cluster
- Depending on your choice of tooling:
  - TF: either the Terraform CLI or the OpenTofu CLI installed
  - hctl CLI: helm installed locally
The setup utilizes IAM roles for service accounts (IRSA) for the EKS cluster to provide credentials to the runner for TF execution.
Why IRSA and not EKS Pod Identity?
The hashicorp/aws TF provider as of version 6.x only supports IRSA to provide credentials to the runner.
- Declare the required providers:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6"
    }
    # Providers needed to install the kubernetes-agent runner Helm chart and register it with the Orchestrator
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2"
    }
    platform-orchestrator = {
      source  = "humanitec/platform-orchestrator"
      version = "~> 2"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 3"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.0"
    }
  }
}
- Set values according to your setup. Change the default values at your discretion:
locals {
  aws_account_id = "<my-aws-account-id>"
  aws_region     = "<my-aws-region>"
  cluster_name   = "<my-eks-cluster-name>"
  humanitec_org  = "<my-org>"

  # The Kubernetes namespace where the kubernetes-agent runner should run
  cluster_runner_namespace = "humanitec-kubernetes-agent-runner-ns"
  # The name of the Kubernetes service account to be assumed by the kubernetes-agent runner
  cluster_runner_serviceaccount = "humanitec-kubernetes-agent-runner"
  # The Kubernetes namespace where the deployment jobs should run
  cluster_runner_job_namespace = "humanitec-kubernetes-agent-runner-job-ns"
  # The name of the Kubernetes service account to be assumed by the deployment jobs created by the kubernetes-agent runner
  cluster_runner_job_serviceaccount = "humanitec-kubernetes-agent-runner-job"
}
You may use TF variables in your module instead of placing raw values in the locals.
- Create a data source from the existing cluster and extract required cluster properties:
# Data source for the existing EKS cluster
data "aws_eks_cluster" "cluster" {
  name = local.cluster_name
}

# Extract cluster properties
locals {
  oidc_provider_url      = data.aws_eks_cluster.cluster.identity[0].oidc[0].issuer
  oidc_provider          = trimprefix(data.aws_eks_cluster.cluster.identity[0].oidc[0].issuer, "https://")
  cluster_endpoint       = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = data.aws_eks_cluster.cluster.certificate_authority[0].data
}
- Configure providers:
# AWS provider. Authentication taken from local "aws" CLI
provider "aws" {
  region = local.aws_region
}

# Platform Orchestrator provider. Authentication taken from local "hctl" CLI
provider "platform-orchestrator" {
  org_id = local.humanitec_org
}

# Kubernetes provider. Authentication taken from local "aws" CLI
provider "kubernetes" {
  host                   = local.cluster_endpoint
  cluster_ca_certificate = base64decode(local.cluster_ca_certificate)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--output", "json", "--cluster-name", local.cluster_name, "--region", local.aws_region]
    command     = "aws"
  }
}

# Helm provider. Authentication taken from local "aws" CLI
provider "helm" {
  kubernetes = {
    host                   = local.cluster_endpoint
    cluster_ca_certificate = base64decode(local.cluster_ca_certificate)
    exec = {
      api_version = "client.authentication.k8s.io/v1beta1"
      args        = ["eks", "get-token", "--output", "json", "--cluster-name", local.cluster_name, "--region", local.aws_region]
      command     = "aws"
    }
  }
}
- Create an IAM role for IRSA that allows the Kubernetes service account used by the runner jobs to assume it:
data "aws_iam_policy_document" "assume_role" {
version = "2012-10-17"
statement {
effect = "Allow"
principals {
type = "Federated"
identifiers = ["arn:aws:iam::${local.aws_account_id}:oidc-provider/${local.oidc_provider}"]
}
actions = [
"sts:AssumeRoleWithWebIdentity"
]
condition {
test = "StringEquals"
variable = "${local.oidc_provider}:aud"
values = [
"sts.amazonaws.com"
]
}
condition {
test = "StringEquals"
variable = "${local.oidc_provider}:sub"
values = [
"system:serviceaccount:${local.cluster_runner_job_namespace}:${local.cluster_runner_job_serviceaccount}"
]
}
}
}
# This role will be assumed by the runner jobs
resource "aws_iam_role" "agent_runner_irsa_role" {
name = "humanitec-kubernetes-agent-runner-${local.cluster_name}"
assume_role_policy = data.aws_iam_policy_document.assume_role.json
}
- Create and associate an IAM OIDC provider for your cluster:
Skip this step if you already have an IAM OIDC provider associated with your EKS cluster. To determine whether you do, refer to the AWS documentation.
resource "aws_iam_openid_connect_provider" "agent_runner" {
url = local.oidc_provider_url
client_id_list = [
"sts.amazonaws.com"
]
}
- Set values:
export AWS_ACCOUNT_ID=<my-aws-account-id>
export AWS_REGION=<my-aws-region>
export CLUSTER_NAME=<my-eks-cluster-name>
# The Kubernetes namespace where the kubernetes-agent runner should run
export CLUSTER_RUNNER_NAMESPACE="humanitec-kubernetes-agent-runner-ns"
# The name of the Kubernetes service account to be assumed by the kubernetes-agent runner
export CLUSTER_RUNNER_SERVICEACCOUNT="humanitec-kubernetes-agent-runner"
# The Kubernetes namespace where the deployment jobs should run
export CLUSTER_RUNNER_JOB_NAMESPACE="humanitec-kubernetes-agent-runner-job-ns"
# The name of the Kubernetes service account to be assumed by the deployment jobs created by the kubernetes-agent runner
export CLUSTER_RUNNER_JOB_SERVICEACCOUNT="humanitec-kubernetes-agent-runner-job"
- Obtain and output the EKS cluster’s OIDC issuer:
export CLUSTER_OIDC_ISSUER=$(aws eks describe-cluster --name $CLUSTER_NAME --region $AWS_REGION --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
echo $CLUSTER_OIDC_ISSUER
- Create an IAM role for IRSA that allows the Kubernetes service account used by the runner jobs to assume it:
export RUNNER_ROLE_NAME=humanitec-kubernetes-agent-runner-${CLUSTER_NAME}

aws iam create-role \
  --role-name ${RUNNER_ROLE_NAME} \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "Federated": "arn:aws:iam::'${AWS_ACCOUNT_ID}':oidc-provider/'${CLUSTER_OIDC_ISSUER}'"
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
          "StringEquals": {
            "'${CLUSTER_OIDC_ISSUER}':aud": "sts.amazonaws.com",
            "'${CLUSTER_OIDC_ISSUER}':sub": "system:serviceaccount:'${CLUSTER_RUNNER_JOB_NAMESPACE}':'${CLUSTER_RUNNER_JOB_SERVICEACCOUNT}'"
          }
        }
      }
    ]
  }'
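You can optionally inspect the new role to confirm the trust policy was stored as intended. This is a verification sketch, not a required setup step:
# Print the trust policy of the newly created runner role
aws iam get-role \
  --role-name ${RUNNER_ROLE_NAME} \
  --query 'Role.AssumeRolePolicyDocument' \
  --output json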
- Create and associate an IAM OIDC provider for your cluster:
Skip this step if you already have an IAM OIDC provider associated with your EKS cluster. To determine whether you do, refer to the AWS documentation.
eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve
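To check whether the provider association exists (or was just created), you can list the IAM OIDC providers in the account and compare them against the cluster's issuer; this is an optional verification only:
# The cluster's OIDC issuer host should appear among the provider ARNs
aws iam list-open-id-connect-providers
echo $CLUSTER_OIDC_ISSUER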
Both the TF and the CLI setup require a private/public key pair for the runner to authenticate against the Orchestrator. Generate one:
openssl genpkey -algorithm ed25519 -out runner_private_key.pem
openssl pkey -in runner_private_key.pem -pubout -out runner_public_key.pem
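You can optionally inspect the generated key pair; the public key is the value registered with the Orchestrator, while the private key stays in the cluster:
# Print the public key that will be registered as runner_configuration.key
cat runner_public_key.pem
# Confirm the private key parses as an Ed25519 key
openssl pkey -in runner_private_key.pem -noout -text | head -n 1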
- Define the runner resource using the public Humanitec module and the key pair to install the runner into the cluster and register it with the Orchestrator:
module "kubernetes_agent_runner" {
source = "github.com/humanitec/platform-orchestrator-tf-modules//orchestrator-configuration/runner/kubernetes-agent"
humanitec_org_id = local.humanitec_org
private_key_path = "./runner_private_key.pem"
public_key_path = "./runner_public_key.pem"
k8s_namespace = local.cluster_runner_namespace
k8s_service_account_name = local.cluster_runner_serviceaccount
k8s_job_namespace = local.cluster_runner_job_namespace
k8s_job_service_account_name = local.cluster_runner_job_serviceaccount
# EKS IRSA configuration - link to the IAM role created above
service_account_annotations = {
"eks.amazonaws.com/role-arn" = aws_iam_role.agent_runner_irsa_role.arn
}
}
Visit the module documentation to see further configuration options.
- Assign to the IAM role any AWS permissions the runner will need to provision AWS resources. For example, to enable the runner to manage Amazon RDS instances, attach a policy like this:
resource "aws_iam_role_policy_attachment" "agent_runner_manage_rds" {
role = aws_iam_role.agent_runner_irsa_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonRDSFullAccess"
}
The exact permissions, and whether you use built-in or custom policies, are at your discretion and depend on your resource requirements.
- Verify the kubectl context is set to your target cluster:
kubectl config current-context
- Initialize and apply the Terraform configuration:
Terraform
terraform init
terraform apply
OpenTofu
tofu init
tofu apply
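Optionally, confirm the runner was installed and is running. This verification sketch assumes the module installs the runner Helm chart into the runner namespace configured in the locals above (humanitec-kubernetes-agent-runner-ns by default); adjust the namespace if you changed it:
# Check the Helm release installed by the module and the runner pod status
helm list -n humanitec-kubernetes-agent-runner-ns
kubectl get pods -n humanitec-kubernetes-agent-runner-ns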
- Verify the kubectl context is set to your target cluster:
kubectl config current-context
- Set values:
export HUMANITEC_ORG=<my-org>
export RUNNER_ID=kubernetes-runner-agent-${CLUSTER_NAME}
- Create the namespace for the runner and a secret holding the runner private key:
kubectl create namespace ${CLUSTER_RUNNER_NAMESPACE}
kubectl create secret generic humanitec-kubernetes-agent-runner \
-n ${CLUSTER_RUNNER_NAMESPACE} \
--from-literal=private_key="$(cat runner_private_key.pem)"
- Create the namespace for the runner jobs:
kubectl create namespace ${CLUSTER_RUNNER_JOB_NAMESPACE}
- Install the runner Helm chart onto the cluster, providing required values:
helm install humanitec-kubernetes-agent-runner \
oci://ghcr.io/humanitec/charts/humanitec-kubernetes-agent-runner \
-n ${CLUSTER_RUNNER_NAMESPACE} \
--set humanitec.orgId=${HUMANITEC_ORG} \
--set humanitec.runnerId=${RUNNER_ID} \
--set humanitec.existingSecret=humanitec-kubernetes-agent-runner \
--set namespaceOverride=${CLUSTER_RUNNER_NAMESPACE} \
--set serviceAccount.name=${CLUSTER_RUNNER_SERVICEACCOUNT} \
--set jobsRbac.namespace=${CLUSTER_RUNNER_JOB_NAMESPACE} \
--set jobsRbac.serviceAccountName=${CLUSTER_RUNNER_JOB_SERVICEACCOUNT} \
--set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn=$(aws iam get-role --role-name ${RUNNER_ROLE_NAME} | jq -r .Role.Arn)"
The serviceAccount.annotations.eks... value annotates the Kubernetes service account so the runner can authenticate against AWS via IRSA, using the IAM role created earlier in this setup.
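To verify the installation, you can optionally check that the runner pod is up and that the IRSA annotation was applied to the runner service account (a verification sketch, not a required step):
# Runner pod should be in Running state
kubectl get pods -n ${CLUSTER_RUNNER_NAMESPACE}
# The annotations should include eks.amazonaws.com/role-arn pointing at the runner role
kubectl get serviceaccount ${CLUSTER_RUNNER_SERVICEACCOUNT} -n ${CLUSTER_RUNNER_NAMESPACE} -o jsonpath='{.metadata.annotations}'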
- Register the kubernetes-agent runner with the Orchestrator:
hctl create runner ${RUNNER_ID} \
--set=runner_configuration="$(jq -nc --arg key "$(cat runner_public_key.pem)" '{"type": "kubernetes-agent","key":$key,"job":{"namespace":"'${CLUSTER_RUNNER_JOB_NAMESPACE}'","service_account":"'${CLUSTER_RUNNER_JOB_SERVICEACCOUNT}'"}}')" \
--set=state_storage_configuration='{"type":"kubernetes","namespace":"'${CLUSTER_RUNNER_JOB_NAMESPACE}'"}'
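If the registration fails, it can help to inspect the JSON that the embedded jq expression produces before it is passed to hctl; this is a debugging aid only:
# Pretty-print the runner_configuration payload built by the jq expression above
jq -nc --arg key "$(cat runner_public_key.pem)" \
  '{"type":"kubernetes-agent","key":$key,"job":{"namespace":"'${CLUSTER_RUNNER_JOB_NAMESPACE}'","service_account":"'${CLUSTER_RUNNER_JOB_SERVICEACCOUNT}'"}}' \
  | jq .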
- Assign to the IAM role any AWS permissions the runner will need to provision AWS resources. For example, to enable the runner to manage Amazon RDS instances, attach a policy like this:
aws iam attach-role-policy \
--role-name ${RUNNER_ROLE_NAME} \
--policy-arn "arn:aws:iam::aws:policy/AmazonRDSFullAccess"
The exact permissions, and whether you use built-in or custom policies, are at your discretion and depend on your resource requirements.
Your kubernetes-agent runner is now ready to be used. Continue by defining runner rules to assign the runner to environments and execute deployments.
GKE-specific setup
To have the kubernetes-agent runner in your GKE cluster, you need:
- An existing GCP GKE cluster
- The gcloud CLI installed and authenticated with a principal having permission to manage IAM roles and policies in the GCP project of the GKE cluster
- The hctl CLI installed and authenticated against your Orchestrator organization
- kubectl installed locally and the current context set to the target cluster
- Depending on your choice of tooling:
  - TF: either the Terraform CLI or the OpenTofu CLI installed
  - hctl CLI: helm installed locally
- Declare the required providers:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 7"
    }
    # Providers needed to install the kubernetes-agent runner Helm chart and register it with the Orchestrator
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2"
    }
    platform-orchestrator = {
      source  = "humanitec/platform-orchestrator"
      version = "~> 2"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 3"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.0"
    }
  }
}
- Set values according to your setup. Change the default values at your discretion:
locals {
  gcp_project_id = "<my-gcp-project-id>"
  gcp_region     = "<my-gcp-region>"
  cluster_name   = "<my-gke-cluster-name>"
  humanitec_org  = "<my-org>"

  # The Kubernetes namespace where the kubernetes-agent runner should run
  cluster_runner_namespace = "humanitec-kubernetes-agent-runner-ns"
  # The name of the Kubernetes service account to be assumed by the kubernetes-agent runner
  cluster_runner_serviceaccount = "humanitec-kubernetes-agent-runner"
  # The Kubernetes namespace where the deployment jobs should run
  cluster_runner_job_namespace = "humanitec-kubernetes-agent-runner-job-ns"
  # The name of the Kubernetes service account to be assumed by the deployment jobs created by the kubernetes-agent runner
  cluster_runner_job_serviceaccount = "humanitec-kubernetes-agent-runner-job"
}
You may use TF variables in your module instead of placing raw values in the locals.
- Create a data source from the existing cluster and extract required cluster properties:
# Data source for the existing GKE cluster
data "google_container_cluster" "cluster" {
  name     = local.cluster_name
  location = local.gcp_region
  project  = local.gcp_project_id
}

# Extract cluster properties
locals {
  cluster_ca_certificate = data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate
  k8s_cluster_endpoint   = data.google_container_cluster.cluster.endpoint
}
- Configure providers:
# GCP provider. Authentication taken from local "gcloud" CLI
provider "google" {
  project = local.gcp_project_id
  region  = local.gcp_region
}

# Platform Orchestrator provider. Authentication taken from local "hctl" CLI
provider "platform-orchestrator" {
  org_id = local.humanitec_org
}

# Kubernetes provider. Authentication taken from local "gcloud" CLI
provider "kubernetes" {
  host                   = "https://${local.k8s_cluster_endpoint}"
  cluster_ca_certificate = base64decode(local.cluster_ca_certificate)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "gke-gcloud-auth-plugin"
  }
}

# Helm provider. Authentication taken from local "gcloud" CLI
provider "helm" {
  kubernetes = {
    host                   = "https://${local.k8s_cluster_endpoint}"
    cluster_ca_certificate = base64decode(local.cluster_ca_certificate)
    exec = {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "gke-gcloud-auth-plugin"
    }
  }
}
- Create a GCP IAM service account and allow the Kubernetes service account to impersonate it using GKE Workload Identity:
# This GCP service account will be used by the runner
resource "google_service_account" "humanitec_kubernetes_agent" {
  account_id   = "runner-${local.cluster_name}"
  display_name = "Humanitec Kubernetes agent runner on cluster ${local.cluster_name}"
}

resource "google_service_account_iam_member" "workload_identity_binding" {
  service_account_id = google_service_account.humanitec_kubernetes_agent.id
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:${local.gcp_project_id}.svc.id.goog[${local.cluster_runner_job_namespace}/${local.cluster_runner_job_serviceaccount}]"
}
- Set values:
export GCP_PROJECT_ID=<my-gcp-project-id>
export CLUSTER_LOCATION=<my-cluster-region>
export CLUSTER_NAME=<my-cluster-name>
# The Kubernetes namespace where the kubernetes-agent runner should run
export CLUSTER_RUNNER_NAMESPACE="humanitec-kubernetes-agent-runner-ns"
# The name of the Kubernetes service account to be assumed by the kubernetes-agent runner
export CLUSTER_RUNNER_SERVICEACCOUNT="humanitec-kubernetes-agent-runner"
# The Kubernetes namespace where the deployment jobs should run
export CLUSTER_RUNNER_JOB_NAMESPACE="humanitec-kubernetes-agent-runner-job-ns"
# The name of the Kubernetes service account to be assumed by the deployment jobs created by the kubernetes-agent runner
export CLUSTER_RUNNER_JOB_SERVICEACCOUNT="humanitec-kubernetes-agent-runner-job"
- Set the gcloud CLI to your project:
gcloud config set project $GCP_PROJECT_ID
- Create a Google Cloud service account:
export GCP_SERVICE_ACCOUNT_NAME=runner-${CLUSTER_NAME}
gcloud iam service-accounts create ${GCP_SERVICE_ACCOUNT_NAME} \
--description="Used by Kubernetes Agent Runner" \
--display-name="Humanitec Kubernetes agent runner on cluster ${CLUSTER_NAME}" \
--project=${GCP_PROJECT_ID}
- Allow the Kubernetes service account to impersonate the Google Cloud service account using GKE Workload Identity:
gcloud iam service-accounts add-iam-policy-binding ${GCP_SERVICE_ACCOUNT_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com \
--role="roles/iam.workloadIdentityUser" \
--member="serviceAccount:${GCP_PROJECT_ID}.svc.id.goog[${CLUSTER_RUNNER_JOB_NAMESPACE}/${CLUSTER_RUNNER_JOB_SERVICEACCOUNT}]" \
--project=${GCP_PROJECT_ID}
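Optionally verify the binding and confirm that Workload Identity is enabled on the cluster, since the binding only takes effect if the cluster has a workload pool configured. This is a verification sketch, not a required step:
# The policy should contain a roles/iam.workloadIdentityUser binding for the job service account
gcloud iam service-accounts get-iam-policy \
  ${GCP_SERVICE_ACCOUNT_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com \
  --project=${GCP_PROJECT_ID}
# Should print "<project-id>.svc.id.goog" if Workload Identity is enabled on the cluster
gcloud container clusters describe ${CLUSTER_NAME} \
  --location=${CLUSTER_LOCATION} \
  --format="value(workloadIdentityConfig.workloadPool)"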
Both the TF and the CLI setup require a private/public key pair for the runner to authenticate against the Orchestrator. Generate one:
openssl genpkey -algorithm ed25519 -out runner_private_key.pem
openssl pkey -in runner_private_key.pem -pubout -out runner_public_key.pem
- Define the runner resource using the public Humanitec module and the key pair to install the runner into the cluster and register it with the Orchestrator:
module "kubernetes_agent_runner" {
source = "github.com/humanitec/platform-orchestrator-tf-modules//orchestrator-configuration/runner/kubernetes-agent"
humanitec_org_id = local.humanitec_org
private_key_path = "./runner_private_key.pem"
public_key_path = "./runner_public_key.pem"
k8s_namespace = local.cluster_runner_namespace
k8s_service_account_name = local.cluster_runner_serviceaccount
k8s_job_namespace = local.cluster_runner_job_namespace
k8s_job_service_account_name = local.cluster_runner_job_serviceaccount
# GKE Workload Identity configuration - link to the GCP service account created above
service_account_annotations = {
"iam.gke.io/gcp-service-account" = google_service_account.humanitec_kubernetes_agent.email
}
}
Visit the module documentation to see further configuration options.
- Assign to the IAM service account any GCP permissions the runner will need to provision GCP resources. For example, to enable the runner to manage Cloud SQL instances, add a binding like this:
resource "google_project_iam_member" "agent_runner_cloudsql_admin" {
project = local.gcp_project_id
role = "roles/cloudsql.admin"
member = "serviceAccount:${google_service_account.humanitec_kubernetes_agent.email}"
}
- Verify the kubectl context is set to your target cluster:
kubectl config current-context
- Initialize and apply the Terraform configuration:
Terraform
terraform init
terraform apply
OpenTofu
tofu init
tofu apply
- Verify the kubectl context is set to your target cluster:
kubectl config current-context
- Set values:
export HUMANITEC_ORG=<my-org>
export RUNNER_ID=kubernetes-runner-agent-${CLUSTER_NAME}
- Create the namespace for the runner and a secret holding the runner private key:
kubectl create namespace ${CLUSTER_RUNNER_NAMESPACE}
kubectl create secret generic humanitec-kubernetes-agent-runner \
-n ${CLUSTER_RUNNER_NAMESPACE} \
--from-literal=private_key="$(cat runner_private_key.pem)"
- Create the namespace for the runner jobs:
kubectl create namespace ${CLUSTER_RUNNER_JOB_NAMESPACE}
- Install the runner Helm chart onto the cluster, providing required values:
helm install humanitec-kubernetes-agent-runner \
oci://ghcr.io/humanitec/charts/humanitec-kubernetes-agent-runner \
-n ${CLUSTER_RUNNER_NAMESPACE} \
--set humanitec.orgId=${HUMANITEC_ORG} \
--set humanitec.runnerId=${RUNNER_ID} \
--set humanitec.existingSecret=humanitec-kubernetes-agent-runner \
--set namespaceOverride=${CLUSTER_RUNNER_NAMESPACE} \
--set serviceAccount.name=${CLUSTER_RUNNER_SERVICEACCOUNT} \
--set jobsRbac.namespace=${CLUSTER_RUNNER_JOB_NAMESPACE} \
--set jobsRbac.serviceAccountName=${CLUSTER_RUNNER_JOB_SERVICEACCOUNT} \
--set "serviceAccount.annotations.iam\.gke\.io/gcp-service-account=${GCP_SERVICE_ACCOUNT_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com"
The serviceAccount.annotations.iam.gke.io/gcp-service-account value annotates the Kubernetes service account so the runner can authenticate against GCP via GKE Workload Identity.
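As on EKS, you can optionally check that the runner pod is up and that the Workload Identity annotation was applied to the runner service account (a verification sketch, not a required step):
# Runner pod should be in Running state
kubectl get pods -n ${CLUSTER_RUNNER_NAMESPACE}
# The annotations should include iam.gke.io/gcp-service-account pointing at the GCP service account
kubectl get serviceaccount ${CLUSTER_RUNNER_SERVICEACCOUNT} -n ${CLUSTER_RUNNER_NAMESPACE} -o jsonpath='{.metadata.annotations}'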
- Register the kubernetes-agent runner with the Orchestrator:
hctl create runner ${RUNNER_ID} \
--set=runner_configuration="$(jq -nc --arg key "$(cat runner_public_key.pem)" '{"type": "kubernetes-agent","key":$key,"job":{"namespace":"'${CLUSTER_RUNNER_JOB_NAMESPACE}'","service_account":"'${CLUSTER_RUNNER_JOB_SERVICEACCOUNT}'"}}')" \
--set=state_storage_configuration='{"type":"kubernetes","namespace":"'${CLUSTER_RUNNER_JOB_NAMESPACE}'"}'
- Assign to the IAM service account any GCP permissions the runner will need to provision GCP resources. For example, to enable the runner to manage Cloud SQL instances, add a binding like this:
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member "serviceAccount:${GCP_SERVICE_ACCOUNT_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com" \
--role "roles/cloudsql.admin"
Your kubernetes-agent runner is now ready to be used. Continue by defining runner rules to assign the runner to environments and execute deployments.