kubernetes-eks

The kubernetes-eks runner type lets you execute runners on Amazon Elastic Kubernetes Service (EKS) clusters with a publicly accessible API server.

The Orchestrator will create Kubernetes Jobs directly. This runner type does not require a runner agent.

To access the cluster, the Orchestrator uses temporary credentials obtained via the AWS Security Token Service (STS) by assuming an IAM role with the minimal required permissions.
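Under the hood this is the standard web identity federation flow: the Orchestrator presents an OIDC token issued by oidc.humanitec.dev and exchanges it for short-lived AWS credentials. As a rough illustration only (the OIDC token is never exposed to you, and ROLE_ARN and OIDC_TOKEN are placeholder values), the equivalent manual exchange would look like this:

# Illustration only: the Orchestrator performs this exchange internally.
aws sts assume-role-with-web-identity \
  --role-arn "${ROLE_ARN}" \
  --role-session-name "<my-org>-<my-runner-id>" \
  --web-identity-token "${OIDC_TOKEN}" \
  --duration-seconds 3600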

Follow the steps below to configure a runner of type kubernetes-eks.

Example configuration

runner-config.yaml:

runner_configuration:
  type: kubernetes-eks
  cluster:
    name: my-eks-cluster
    region: us-west-2
    auth:
      role_arn: arn:aws:iam::123456789012:role/humanitec-runner-eks-role
  job:
    namespace: humanitec-runner
    service_account: humanitec-runner
state_storage_configuration:
  ...

See all configuration options further down.

Before you begin

You will need the following resources and permissions:

  • The aws CLI installed and authenticated with a principal that has permission to manage IAM roles and policies in the AWS account of the target EKS cluster
  • An EKS cluster with a publicly available API server endpoint
  • The kubectl CLI installed and the current context set to target the EKS cluster using a principal with permission to create namespaces, service accounts, Roles, and RoleBindings
  • The hctl CLI installed and authenticated against your Orchestrator organization
  • A project in your Orchestrator organization
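Optionally, you can confirm these prerequisites from your shell before you start. A minimal check, assuming the aws and kubectl CLIs are already configured for the target account and cluster:

# Show the AWS principal you are currently authenticated as
aws sts get-caller-identity

# Show the kubectl context that will be used for the cluster setup
kubectl config current-context

# Check that this context may create namespaces on the cluster
kubectl auth can-i create namespaces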

Prepare the cloud environment

Perform the following setup in the AWS account of the target EKS cluster.

  1. Set values
export AWS_ACCOUNT_ID=<my-aws-account-id>
export CLUSTER_NAME=<my-eks-cluster-name>
export CLUSTER_REGION=<my-eks-cluster-region>
export HUMANITEC_ORG=<my-org>
  2. Create an OIDC identity provider
aws iam create-open-id-connect-provider \
  --url https://oidc.humanitec.dev \
  --client-id-list sts.amazonaws.com
  3. Create an IAM role for the Orchestrator
export RUNNER_ID=kubernetes-eks-${CLUSTER_NAME}-${CLUSTER_REGION}
export IAM_ROLE_NAME=humanitec-runner-eks-role

cat <<EOF > humanitec-runner-eks-trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/oidc.humanitec.dev"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.humanitec.dev:aud": "sts.amazonaws.com",
          "oidc.humanitec.dev:sub": "${HUMANITEC_ORG}+${RUNNER_ID}"
        }
      }
    }
  ]
}
EOF

aws iam create-role \
  --role-name ${IAM_ROLE_NAME} \
  --assume-role-policy-document file://humanitec-runner-eks-trust-policy.json \
  --description "Role for Humanitec Orchestrator to access EKS clusters for launching runners"
  4. Create an IAM policy with minimal EKS permissions
export IAM_POLICY_NAME=humanitec-runner-eks-policy

cat <<EOF > humanitec-runner-eks-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster"
      ],
      "Resource": "arn:aws:eks:${CLUSTER_REGION}:${AWS_ACCOUNT_ID}:cluster/${CLUSTER_NAME}"
    }
  ]
}
EOF

aws iam create-policy \
  --policy-name ${IAM_POLICY_NAME} \
  --policy-document file://humanitec-runner-eks-policy.json \
  --description "Minimal permissions for Humanitec Orchestrator to access EKS clusters"
  5. Attach the policy to the role
aws iam attach-role-policy \
  --role-name ${IAM_ROLE_NAME} \
  --policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/${IAM_POLICY_NAME}
  6. Add an IAM access entry for the role
aws eks create-access-entry \
  --cluster-name ${CLUSTER_NAME} \
  --principal-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/${IAM_ROLE_NAME} \
  --type STANDARD
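Optionally, verify the resources created above before moving on. A minimal check, assuming the environment variables from the previous steps are still set in your shell:

# The OIDC provider for oidc.humanitec.dev should be listed
aws iam list-open-id-connect-providers

# The IAM role should exist and carry the trust policy
aws iam get-role --role-name ${IAM_ROLE_NAME}

# The minimal EKS policy should be attached to the role
aws iam list-attached-role-policies --role-name ${IAM_ROLE_NAME}

# The access entry for the role should be present on the cluster
aws eks list-access-entries --cluster-name ${CLUSTER_NAME}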

Prepare the EKS cluster

Perform the following setup on the target EKS cluster for the runner.

  1. Create a Kubernetes namespace where your runner will run
export RUNNER_K8S_NAMESPACE=humanitec-runner

kubectl create namespace ${RUNNER_K8S_NAMESPACE}
  2. Create a Kubernetes service account for the runner
export RUNNER_K8S_SA=humanitec-runner

kubectl create serviceaccount -n ${RUNNER_K8S_NAMESPACE} ${RUNNER_K8S_SA}
  3. Create and assign a Kubernetes Role

Prepare a Role and RoleBinding. These permissions enable the Orchestrator to create jobs in the runner namespace with minimal required access.

export SESSION_NAME=${HUMANITEC_ORG}-${RUNNER_ID}
# Ensure session name doesn't exceed 64 characters
export SESSION_NAME=${SESSION_NAME:0:64}

cat << EOF > kubernetes-eks-runner-k8s-role-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: humanitec-runner-orchestrator-access
  namespace: ${RUNNER_K8S_NAMESPACE}
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get"]
  - apiGroups: [""]
    resources: ["pods", "events"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: humanitec-runner-orchestrator-access
  namespace: ${RUNNER_K8S_NAMESPACE}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: humanitec-runner-orchestrator-access
subjects:
- kind: User
  name: arn:aws:sts::${AWS_ACCOUNT_ID}:assumed-role/${IAM_ROLE_NAME}/${SESSION_NAME}
EOF

kubectl apply -f kubernetes-eks-runner-k8s-role-rolebinding.yaml
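Optionally, verify the RBAC setup by impersonating the user name the RoleBinding refers to. This is a minimal check, assuming the environment variables from the previous steps are still set and your current kubectl principal is allowed to impersonate users:

# Both commands should print "yes"
kubectl auth can-i create jobs \
  --namespace ${RUNNER_K8S_NAMESPACE} \
  --as arn:aws:sts::${AWS_ACCOUNT_ID}:assumed-role/${IAM_ROLE_NAME}/${SESSION_NAME}

kubectl auth can-i list pods \
  --namespace ${RUNNER_K8S_NAMESPACE} \
  --as arn:aws:sts::${AWS_ACCOUNT_ID}:assumed-role/${IAM_ROLE_NAME}/${SESSION_NAME}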

Configure a state storage

Decide which state storage type the runner should use. Check the compatibility matrix and perform the setup according to the documentation for the chosen state storage type.

If you are using the Terraform provider, prepare a TF code snippet containing the properties for the state storage configuration underneath a state_storage_configuration property:

  state_storage_configuration = {
    ...
  }

If you are using the hctl CLI, prepare a local file state-storage-config.yaml containing the properties for the state storage configuration underneath the top level property state_storage_configuration:

state_storage_configuration:
  ...

Create a runner

  1. Prepare the runner configuration

Prepare the runner_configuration in a local file:

cat <<EOF > runner-config.yaml
runner_configuration:
  type: kubernetes-eks
  cluster:
    name: ${CLUSTER_NAME}
    region: ${CLUSTER_REGION}
    auth:
      role_arn: arn:aws:iam::${AWS_ACCOUNT_ID}:role/${IAM_ROLE_NAME}
  job:
    namespace: ${RUNNER_K8S_NAMESPACE}
    service_account: ${RUNNER_K8S_SA}
EOF
  2. Append the state storage configuration you created earlier

If you are using the Terraform provider, add the state_storage_configuration block you prepared to the platform-orchestrator_kubernetes_eks_runner resource:

resource "platform-orchestrator_kubernetes_eks_runner" "example" {

  # ...

  state_storage_configuration = {
    # Your prepared state storage configuration
    # ...
  }
}

If you are using the hctl CLI, append the state storage config file prepared in the previous section to the existing runner configuration:

cat state-storage-config.yaml >> runner-config.yaml
  3. Verify the runner configuration

If you are using the Terraform provider, verify the structure of the runner configuration. It needs to have the top level properties as shown:

resource "platform-orchestrator_kubernetes_eks_runner" "example" {

  runner_configuration = {
    # ...
  }
  state_storage_configuration = {
    # ...
  }
}

If you are using the hctl CLI, verify the structure of the configuration file. It needs to have the top level properties as shown:

cat runner-config.yaml
runner_configuration:
  ...
state_storage_configuration:
  ...
  4. Create the runner

If you are using the Terraform provider, apply the TF configuration you created.

If you are using the hctl CLI, create the runner using the configuration prepared previously:

hctl create runner ${RUNNER_ID} \
  [email protected]

Create runner rules

Add any runner rules for the newly created runner.

Configuration options

The following YAML configuration shows all available options for the kubernetes-eks runner type:

runner_configuration:
  # Runner type
  type: kubernetes-eks
  
  # EKS cluster configuration
  cluster:
    # Name of the EKS cluster
    name: my-eks-cluster
    
    # AWS region where the cluster is located
    region: us-west-2
    
    # Authentication configuration
    auth:
      # ARN of the IAM role that the Orchestrator will assume to describe and access cluster
      role_arn: arn:aws:iam::123456789012:role/humanitec-runner-eks-role
      
      # (optional, defaults to cluster region) AWS region for STS token requests
      sts_region: eu-central-1

      # (optional, defaults to "{humanitec-org}-{runner-id}") Custom session name for the assumed role
      session_name: my-session
  
  # Kubernetes job configuration
  job:
    # Namespace where runner jobs will be created
    namespace: humanitec-runner
    
    # Service account to use for the runner jobs
    service_account: humanitec-runner

    # (optional) Pod template for customizing runner job pods
    pod_template: 
      # Add custom pod specifications here
      # See Kubernetes PodSpec documentation for available options
      ...

# State storage configuration
state_storage_configuration:
  # Add state storage configuration here
  # See State storage types documentation for available options
  ...

Setting sensitive environment variables

The runner_configuration.job.pod_template field contains a Kubernetes pod template that you can set to extend the runtime configuration of the runner. The pod template expects a pod spec structure with a container named main. You can set secret environment variables by referencing existing secrets within the same target namespace as the runner pod. For example, to mount the value of the key field within a secret named my-secret into the environment variable TF_EXAMPLE, set the pod template as follows (shown as both a Terraform snippet and a YAML configuration):

  runner_configuration = {
    job = {
      pod_template = jsonencode({
        spec = {
          containers = [
            {
              name = "main"
              env = [
                {
                  name = "TF_EXAMPLE"
                  valueFrom = {
                    secretKeyRef = {
                      name = "my-secret"
                      key  = "key"
                    }
                  }
                }
              ]
            }
          ]
        }
      })
    }
  }

runner_configuration:
  job:
    pod_template:
      spec:
        containers:
          - name: main
            env:
              - name: TF_EXAMPLE
                valueFrom:
                  secretKeyRef:
                    name: my-secret
                    key: key

The service account used by the runner must have permissions to get the secret.
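For example, you can grant that permission with a namespaced Role and RoleBinding scoped to the single secret. The following is a minimal sketch using kubectl; the secret value and the Role name humanitec-runner-secret-reader are illustrative, and the namespace and service account variables are the ones from the cluster preparation steps:

# Create the secret referenced by the pod template (value is a placeholder)
kubectl create secret generic my-secret \
  --namespace ${RUNNER_K8S_NAMESPACE} \
  --from-literal=key=example-value

# Allow the runner service account to read exactly this secret
kubectl create role humanitec-runner-secret-reader \
  --namespace ${RUNNER_K8S_NAMESPACE} \
  --verb=get \
  --resource=secrets \
  --resource-name=my-secret

kubectl create rolebinding humanitec-runner-secret-reader \
  --namespace ${RUNNER_K8S_NAMESPACE} \
  --role=humanitec-runner-secret-reader \
  --serviceaccount=${RUNNER_K8S_NAMESPACE}:${RUNNER_K8S_SA}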

Environment variables that are not secret or sensitive can be set directly in the env structure.
