Kubernetes

Overview

The Humanitec Platform Orchestrator is designed to integrate with your existing Kubernetes clusters wherever they’re hosted. You can configure the Orchestrator to run your Application in a single Kubernetes cluster or across different clusters in a multi-cloud setup, giving you an all-in-one solution for managing what is running where. The Orchestrator has integrated support for all major public cloud providers and vanilla Kubernetes out of the box.

Kubernetes Clusters are represented in the Platform Orchestrator by Resource Definitions of the Resource Type k8s-cluster.

Platform Engineers maintain the cluster Resource Definitions as shown on this page. By attaching the proper Matching Criteria, they configure which Deployments will be directed at which cluster.

Application developers do not have to do anything to request the cluster Resource. The Platform Orchestrator automatically matches a target cluster for every Deployment. Technically speaking, that is because k8s-cluster is a so-called implicit Resource Type which is automatically referenced for every Deployment.

Integrating your Kubernetes cluster follows these steps:

  1. Preparing credentials via a Cloud Account
  2. Configuring cluster access
  3. Creating a Resource Definition for your cluster, using the previously created credentials
  4. Creating Matching Criteria for the Resource Definition
  5. (optional) If your cluster is private, i.e. its API server endpoint is not accessible from public networks, installing the Humanitec Agent
  6. Deploying to your cluster

AKS

Azure Kubernetes Service (AKS) is natively supported by the Platform Orchestrator and should take at most 30 minutes to integrate, provided you have met all of the prerequisites.

Prerequisites

To integrate your AKS cluster with the Platform Orchestrator, you will need the following:

  • An AKS cluster
  • The Azure CLI installed
  • (optional) The humctl CLI installed
    • Authentication performed against the Platform Orchestrator via humctl login
    • The environment variable HUMANITEC_ORG set to your Organization id
  • The ability to assign roles on the scope of the cluster

1. Prepare AKS credentials

Prepare the credentials to access your AKS cluster by setting up an Azure Cloud Account in the Platform Orchestrator.

2. Configure AKS cluster access

The principal used with the Cloud Account needs appropriate access to the target AKS cluster on both the control plane and the data plane.

  • Control plane (Azure Resource Manager level): the permission Microsoft.ContainerService/managedClusters/read
  • Data plane (Kubernetes level): cluster administrative access. This is due to the requirement to create new Kubernetes namespaces on the cluster.

Depending on how Authentication and Authorization are configured for your cluster, this can be achieved through different sets of role assignments. Refer to the AKS Access and Identity documentation to find the relevant roles for your setup.

When using AKS-managed Microsoft Entra integration, assign a combination of roles that grants both the control plane read permission and data plane administrative access.
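As a sketch with the Azure CLI: the role names below are a commonly used combination for this setup (an assumption — verify the right roles for your configuration in the AKS documentation), and the principal object ID and cluster resource ID are placeholders.

```shell
# Control plane: read access to the managed cluster
az role assignment create \
  --assignee "<principal-object-id>" \
  --role "Azure Kubernetes Service Cluster User Role" \
  --scope "<cluster-resource-id>"

# Data plane: Kubernetes administrative access, e.g. for creating namespaces
az role assignment create \
  --assignee "<principal-object-id>" \
  --role "Azure Kubernetes Service RBAC Cluster Admin" \
  --scope "<cluster-resource-id>"
```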

3. Create an AKS Resource Definition

Now that your cluster is ready, you can connect it to the Platform Orchestrator by providing a Resource Definition.

  1. Start on the Resources Management screen and select Add resource definition
  2. In the modal dialog, select Kubernetes Cluster
  3. Select k8s-cluster-aks
  4. Next, you’ll need to provide the following details:
    1. A unique Resource Definition ID
    2. For Credentials, select the Cloud Account you created earlier
    3. (optional) The Agent URL if you are using the Humanitec Agent. Go here to see the required format
    4. (optional) The IP address or DNS name of the Ingress Controller’s load balancer
    5. The Cluster Name as it appears in your Azure Portal
    6. (optional) A Proxy URL. It represents the kubeconfig value proxy-url
    7. The Azure Resource Group the cluster is deployed in
    8. If your cluster has been configured to use AKS-managed Microsoft Entra integration, the AAD Server Application ID must be set to the value 6dae42f8-4368-4678-94ff-3960e28e3630 (see the AKS documentation)
    9. The Azure Subscription ID for the cluster

Create a Resource Definition like the one shown in the example below.

Set the driver_account to the ID of the Cloud Account you created earlier.

Install it into your Organization using this command:

humctl apply -f aks-dynamic-credentials.yaml

aks-dynamic-credentials.yaml (view on GitHub) :

# Connect to an AKS cluster using dynamic credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aks-dynamic-credentials
entity:
  name: aks-dynamic-credentials
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "azure-identity"
  # which needs to be configured for your Organization.
  driver_account: azure-dynamic-creds
  driver_type: humanitec/k8s-cluster-aks
  driver_inputs:
    values:
      loadbalancer: 20.10.10.10
      name: demo-123
      resource_group: my-resources
      subscription_id: 12345678-aaaa-bbbb-cccc-0987654321ba
      # Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
      # server_app_id: 6dae42f8-4368-4678-94ff-3960e28e3630

Create a humanitec_resource_definition resource using the Humanitec Terraform provider like the one shown in the example below.

Set the driver_account to the ID of the Cloud Account you created earlier.


aks-dynamic-credentials.tf (view on GitHub) :

# Connect to an AKS cluster using dynamic credentials defined via a Cloud Account
resource "humanitec_resource_definition" "aks-dynamic-credentials" {
  id          = "aks-dynamic-credentials"
  name        = "aks-dynamic-credentials"
  type        = "k8s-cluster"
  driver_type = "humanitec/k8s-cluster-aks"
  # The driver_account is referring to a Cloud Account resource
  driver_account = humanitec_resource_account.azure-dynamic.id

  driver_inputs = {
    values_string = jsonencode({
      "name"            = var.azure_aks_private_cluster_name
      "loadbalancer"    = var.azure_aks_private_cluster_loadbalancer
      "resource_group"  = var.azure_aks_resource_group
      "subscription_id" = var.azure_subscription_id
      # Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
      # "server_app_id" = "6dae42f8-4368-4678-94ff-3960e28e3630"
    })
  }
}

4. Configure AKS Resource Matching

Now that you’ve registered the cluster, you will need to define Matching Criteria so that the Platform Orchestrator knows when to use it.

  1. Click on the relevant row in the Resource Definition table
  2. Then switch to the Matching Criteria tab
  3. Click + Add new Criteria
  4. Configure the matching rules as needed
  5. Click Save

This example configures matching on an Environment type development.

humctl api post /orgs/$HUMANITEC_ORG/resources/defs/aks-dynamic-credentials/criteria \
-d '{
"env_type": "development"
}'

This example configures the same matching on an Environment type development using the Terraform provider.

resource "humanitec_resource_definition_criteria" "aks-dynamic-credentials-matching" {
  resource_definition_id = humanitec_resource_definition.aks-dynamic-credentials.id
  env_type               = "development"
}
5. Install the Agent for a private AKS cluster

If your AKS cluster is private, i.e. its API server endpoint is not accessible from public networks, install the Humanitec Agent.

6. Start deploying to your AKS cluster

Any Deployment fitting the matching criteria you configured will now be directed at your AKS cluster.

EKS

Amazon Elastic Kubernetes Service (EKS) is natively supported by the Platform Orchestrator and should take at most 30 minutes to integrate, provided you have met all of the prerequisites.

Prerequisites

To integrate your EKS cluster with the Platform Orchestrator, you will need the following:

  • An EKS cluster with a node group configured
  • The aws CLI installed
  • (optional) The humctl CLI installed
    • Authentication performed against the Platform Orchestrator via humctl login
    • The environment variable HUMANITEC_ORG set to your Organization id
  • The ability to create IAM policies and attach them to an IAM user

1. Prepare EKS credentials

Prepare the credentials to access your EKS cluster by setting up an AWS Cloud Account in the Platform Orchestrator.

2. Configure EKS cluster access

The IAM principal used with the Cloud Account needs appropriate access on the scope of the target EKS cluster through an IAM policy with these permissions:

  • eks:DescribeNodegroup
  • eks:ListNodegroups
  • eks:AccessKubernetesApi
  • eks:DescribeCluster
  • eks:ListClusters

A policy containing these permissions for your target cluster ARN will look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "eks:DescribeNodegroup",
                "eks:ListNodegroups",
                "eks:AccessKubernetesApi",
                "eks:DescribeCluster",
                "eks:ListClusters"
            ],
            "Resource": "arn:aws:eks:<region>:<accountid>:cluster/<clustername>"
        }
    ]
}

Attach the policy to the principal used with the Cloud Account. See the IAM Tutorial: Create and attach your first customer managed policy for step-by-step instructions.
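For example, using the aws CLI (the policy name HumanitecEKSAccess and the file name are illustrative assumptions):

```shell
# Create the customer managed policy from the JSON document above
aws iam create-policy \
  --policy-name HumanitecEKSAccess \
  --policy-document file://eks-cluster-access.json

# Attach the policy to the IAM user used with the Cloud Account
aws iam attach-user-policy \
  --user-name <iam-user-name> \
  --policy-arn arn:aws:iam::<accountid>:policy/HumanitecEKSAccess
```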

The IAM principal from the Cloud Account must then be associated with Kubernetes permissions. See Associate IAM Identities with Kubernetes Permissions in the AWS documentation for detailed instructions depending on your chosen method.

When using access entries, create an access entry of type STANDARD for the principal, and add the access policy arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy to the entry.
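Sketched with the aws CLI (cluster name and principal ARN are placeholders):

```shell
# Create a STANDARD access entry for the principal
aws eks create-access-entry \
  --cluster-name <clustername> \
  --principal-arn <principal-arn> \
  --type STANDARD

# Associate the cluster admin access policy at cluster scope
aws eks associate-access-policy \
  --cluster-name <clustername> \
  --principal-arn <principal-arn> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```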

When using the aws-auth ConfigMap, add the principal to:

  groups:
  - system:masters

3. Create an EKS Resource Definition

Now that your cluster is ready, you can connect it to the Platform Orchestrator by providing a Resource Definition.

  1. Start on the Resources Management screen and select Add resource definition
  2. In the modal dialog, select Kubernetes Cluster
  3. Then select k8s-cluster-eks
  4. Next, you’ll need to provide the following details:
    1. A unique Resource Definition ID
    2. For Credentials, select the Cloud Account you created earlier
    3. (optional) The Agent URL if you are using the Humanitec Agent. Go here to see the required format
    4. (optional) The IP address or DNS name of the Ingress Controller’s load balancer.
    5. (optional) The Hosted Zone the Load Balancer is hosted in
    6. The Cluster Name as it appears in your AWS console
    7. (optional) A Proxy URL. It represents the kubeconfig value proxy-url
    8. The AWS Region that the cluster is deployed in

Create a Resource Definition like the one shown in the example below.

Set the driver_account to the ID of the Cloud Account you created earlier.

Install it into your Organization using this command:

humctl apply -f eks-dynamic-credentials.yaml

eks-dynamic-credentials.yaml (view on GitHub) :

# Connect to an EKS cluster using dynamic credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: eks-dynamic-credentials
entity:
  name: eks-dynamic-credentials
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "aws-role"
  # which needs to be configured for your Organization.
  driver_account: aws-temp-creds
  # The driver_type k8s-cluster-eks automatically handles the dynamic credentials
  # injected via the driver_account.
  driver_type: humanitec/k8s-cluster-eks
  driver_inputs:
    values:
      region: eu-central-1
      name: demo-123
      loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
      loadbalancer_hosted_zone: ABC0DEF5WYYZ00

Create a humanitec_resource_definition resource using the Humanitec Terraform provider like the one shown in the example below.

Set the driver_account to the ID of the Cloud Account you created earlier.


eks-dynamic-credentials.tf (view on GitHub) :

# Connect to an EKS cluster using dynamic credentials defined via a Cloud Account
resource "humanitec_resource_definition" "eks-dynamic-credentials" {
  id          = "eks-dynamic-credentials"
  name        = "eks-dynamic-credentials"
  type        = "k8s-cluster"
  driver_type = "humanitec/k8s-cluster-eks"
  # The driver_account is referring to a Cloud Account resource
  driver_account = humanitec_resource_account.aws-dynamic.id

  driver_inputs = {
    values_string = jsonencode({
      "name"                     = var.eks_cluster_name
      "region"                   = var.aws_region
      "loadbalancer"             = var.eks_loadbalancer
      "loadbalancer_hosted_zone" = var.eks_loadbalancer_hostedzone
    })
  }
}

4. Configure EKS Resource Matching

Now that you’ve registered the cluster, you will need to define Matching Criteria so that the Platform Orchestrator knows when to use it.

  1. Click on the relevant row in the Resource Definition table
  2. Then switch to the Matching Criteria tab
  3. Click + Add new Criteria
  4. Configure the matching rules as needed
  5. Click Save

This example configures matching on an Environment type development.

humctl api post /orgs/$HUMANITEC_ORG/resources/defs/eks-dynamic-credentials/criteria \
-d '{
"env_type": "development"
}'

This example configures the same matching on an Environment type development using the Terraform provider.

resource "humanitec_resource_definition_criteria" "eks-dynamic-credentials-matching" {
  resource_definition_id = humanitec_resource_definition.eks-dynamic-credentials.id
  env_type               = "development"
}
5. Install the Agent for a private EKS cluster

If your EKS cluster is private, i.e. its API server endpoint is not accessible from public networks, install the Humanitec Agent.

6. Start deploying to your EKS cluster

Any Deployment fitting the matching criteria you configured will now be directed at your EKS cluster.

GKE

Google Kubernetes Engine (GKE) is natively supported by the Platform Orchestrator and should take at most 30 minutes to integrate, provided you have met all the prerequisites.

Prerequisites

To integrate your GKE cluster with the Platform Orchestrator, you will need the following:

  • A GKE cluster
  • The gcloud CLI installed
  • (optional) The humctl CLI installed
    • Authentication performed against the Platform Orchestrator via humctl login
    • The environment variable HUMANITEC_ORG set to your Organization id
  • The ability to create Service Accounts with the Kubernetes Engine Admin (roles/container.admin) role (or a role with the equivalent set of permissions)
    • The Google Cloud Resource Manager API enabled

1. Prepare GKE credentials

Prepare the credentials to access your GKE cluster by setting up a GCP Cloud Account in the Platform Orchestrator.

2. Configure GKE cluster access

The IAM principal used with the Cloud Account needs appropriate access to the target GKE cluster by being assigned the role “Kubernetes Engine Admin” (roles/container.admin) in the project the cluster belongs to.

See the Google Cloud documentation for details on granting roles.
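For example, when the Cloud Account is based on a GCP service account, the role can be granted with the gcloud CLI (project ID and service account email are placeholders):

```shell
gcloud projects add-iam-policy-binding <project-id> \
  --member "serviceAccount:<service-account-email>" \
  --role "roles/container.admin"
```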

3. Create a GKE Resource Definition

Now that your cluster is ready, you can connect it to the Platform Orchestrator by providing a Resource Definition.

  1. Start on the Resources Management screen and select Add resource definition
  2. In the modal dialog, select Kubernetes Cluster
  3. Select k8s-cluster-gke
  4. Next, you’ll need to provide the following details:
    1. A unique Resource Definition ID
    2. For Credentials, select the Cloud Account you created earlier
    3. (optional) The Agent URL if you are using the Humanitec Agent. Go here to see the required format
    4. (optional) The IP address or DNS name of the Ingress Controller’s load balancer
    5. The Cluster Name as it appears in the Google Cloud console
    6. The Google Cloud Project ID
    7. (optional) A Proxy URL. It represents the kubeconfig value proxy-url
    8. The Google Cloud Zone that the cluster is deployed in

Create a Resource Definition like the one shown in the example below.

Set the driver_account to the ID of the Cloud Account you created earlier.

Install it into your Organization using this command:

humctl apply -f gke-dynamic-credentials.yaml

gke-dynamic-credentials.yaml (view on GitHub) :

# Connect to a GKE cluster using dynamic credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: gke-dynamic-credentials
entity:
  name: gke-dynamic-credentials
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "gcp-identity"
  # which needs to be configured for your Organization.
  driver_account: gcp-dynamic-creds
  driver_type: humanitec/k8s-cluster-gke
  driver_inputs:
    values:
      loadbalancer: 35.10.10.10
      name: demo-123
      zone: europe-west2-a
      project_id: my-gcp-project

Create a humanitec_resource_definition resource using the Humanitec Terraform provider like the one shown in the example below.

Set the driver_account to the ID of the Cloud Account you created earlier.


gke-dynamic-credentials.tf (view on GitHub) :

# Connect to a GKE cluster using dynamic credentials defined via a Cloud Account
resource "humanitec_resource_definition" "gke-dynamic" {
  id          = "gke-dynamic"
  name        = "gke-dynamic"
  type        = "k8s-cluster"
  driver_type = "humanitec/k8s-cluster-gke"
  # The driver_account references a Cloud Account of type "gcp-identity"
  driver_account = humanitec_resource_account.gcp-dynamic.id

  driver_inputs = {
    values_string = jsonencode({
      "name"         = var.gke_cluster_name
      "loadbalancer" = var.gke_loadbalancer
      "project_id"   = var.gcp_project_id
      "zone"         = var.gcp_region
    })
  }
}

4. Configure GKE Resource Matching

Now that you’ve registered the cluster, you will need to define Matching Criteria so that the Platform Orchestrator knows when to use it.

  1. Click on the relevant row in the Resource Definition table
  2. Then switch to the Matching Criteria tab
  3. Click + Add new Criteria
  4. Configure the matching rules as needed
  5. Click Save

This example configures matching on an Environment type development.

humctl api post /orgs/$HUMANITEC_ORG/resources/defs/gke-dynamic-credentials/criteria \
-d '{
"env_type": "development"
}'

This example configures the same matching on an Environment type development using the Terraform provider.

resource "humanitec_resource_definition_criteria" "gke-dynamic-credentials-matching" {
  resource_definition_id = humanitec_resource_definition.gke-dynamic.id
  env_type               = "development"
}
5. Install the Agent for a private GKE cluster

If your GKE cluster is private, i.e. its API server endpoint is not accessible from public networks, install the Humanitec Agent.

6. Start deploying to your GKE cluster

Any Deployment fitting the matching criteria you configured will now be directed at your GKE cluster.

Vanilla Kubernetes

Kubernetes is natively supported by the Platform Orchestrator and should take at most 30 minutes to integrate, provided you have met all the prerequisites.

Prerequisites

To integrate your vanilla cluster with the Platform Orchestrator, you will need the following:

  • A Kubernetes cluster
  • The ability to create a kubeconfig that authenticates a user with the cluster using client certificates
  • The ability to make the cluster’s API endpoint accessible to the Platform Orchestrator, either by:
    • Ensuring the cluster API endpoint is accessible on the public internet
    • If required, whitelisting the Humanitec public IPs for the API endpoint
    • Using the Humanitec Agent
    • Configuring a VPN allowing the Platform Orchestrator access to the cluster API endpoint

1. Prepare credentials

To integrate with your cluster, the Platform Orchestrator requires a client certificate and private key.

Follow the Kubernetes instructions to create a normal user. Save the certificate, private key, and certificate authority for later use.
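As a sketch of that flow (the user name humanitec-deployer is an illustrative assumption):

```shell
# Generate a private key and a certificate signing request (CSR) for the user
openssl genrsa -out humanitec-deployer.key 2048
openssl req -new -key humanitec-deployer.key \
  -subj "/CN=humanitec-deployer" -out humanitec-deployer.csr

# Submit the CSR to the cluster and approve it
kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: humanitec-deployer
spec:
  request: $(base64 < humanitec-deployer.csr | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
kubectl certificate approve humanitec-deployer

# Retrieve the signed client certificate
kubectl get csr humanitec-deployer \
  -o jsonpath='{.status.certificate}' | base64 -d > humanitec-deployer.crt
```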

2. Configure cluster access

Create a RoleBinding for the user to the role clusterrole/admin.
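One way to create such a binding with kubectl (binding name, user name, and namespace are illustrative assumptions; repeat per namespace or widen the scope as your setup requires):

```shell
kubectl create rolebinding humanitec-deployer-admin \
  --clusterrole=admin \
  --user=humanitec-deployer \
  --namespace <target-namespace>
```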

3. Create a Resource Definition

Now that your cluster is ready, you can connect it to the Platform Orchestrator by providing a Resource Definition.

  1. Start on the Resources Management screen and select Add resource definition
  2. In the modal dialog, select Kubernetes Cluster
  3. Select k8s-cluster
  4. Next, you’ll need to provide the following details:
    1. A unique Resource Definition ID
    2. (optional) The Agent URL if you are using the Humanitec Agent. Go here to see the required format
    3. The Client Certificate, Client Private Key, and Cluster Certificate Authority for the account you created earlier
    4. (optional) A Proxy URL. It represents the kubeconfig value proxy-url
    5. The Cluster API Server URL that the Platform Orchestrator can use to access the cluster
    6. (optional) The IP address or DNS name of the Ingress Controller’s load balancer

Create a Resource Definition like the one shown in the example below.

Install it into your Organization using this command:

humctl apply -f generic-k8s-client-certificate.yaml

generic-k8s-client-certificate.yaml (view on GitHub) :

# Resource Definition for a generic Kubernetes cluster
# Make sure all ${ENVIRONMENT_VARIABLES} are set when applying this Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: generic-k8s-static-credentials
entity:
  name: generic-k8s-static-credentials
  type: k8s-cluster
  driver_type: humanitec/k8s-cluster
  driver_inputs:
    values:
      name: my-generic-k8s-cluster
      loadbalancer: 35.10.10.10
      cluster_data:
        server: https://35.11.11.11:6443
        # Single line base64-encoded cluster CA data in the format "LS0t...ca-data....=="
        certificate-authority-data: ${CLUSTER_CERTIFICATE_CA_DATA}
    secrets:
      credentials:
        # Single line base64-encoded client certificate data in the format "LS0t...cert-data...=="
        client-certificate-data: ${USER_CLIENT_CERTIFICATE_DATA}
        # Single line base64-encoded client key data in the format "LS0t...key-data...=="
        client-key-data: ${USER_CLIENT_KEY_DATA}
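The base64-encoded values can be taken from an existing kubeconfig, for example (the index 0 assumes the target cluster and user are the first entries):

```shell
# Cluster CA data
kubectl config view --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'
# Client certificate and key data
kubectl config view --raw \
  -o jsonpath='{.users[0].user.client-certificate-data}'
kubectl config view --raw \
  -o jsonpath='{.users[0].user.client-key-data}'
```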

Create a humanitec_resource_definition resource using the Humanitec Terraform provider like the one shown in the example below.


generic-k8s-client-certificate.tf (view on GitHub) :

# Provide access to the kubeconfig file
locals {
  parsed_kubeconfig = yamldecode(file("/path/to/kubeconfig"))
}

# Resource Definition for a generic Kubernetes cluster
resource "humanitec_resource_definition" "generic_cluster" {
  id          = "generic-k8s-static-credentials"
  name        = "generic-k8s-static-credentials"
  type        = "k8s-cluster"
  driver_type = "humanitec/k8s-cluster"

  driver_inputs = {
    values_string = jsonencode({
      loadbalancer = "35.10.10.10"
      # The index [0] assumes the target cluster is the first cluster definition
      cluster_data = local.parsed_kubeconfig["clusters"][0]["cluster"]
    })
    secrets_string = jsonencode({
      # Setting the URL for the Humanitec Agent. Remove the line if not used
      agent_url   = "$${resources['agent#agent'].outputs.url}"
      # The index [0] assumes the target user is the first user definition
      credentials = local.parsed_kubeconfig["users"][0]["user"]
    })
  }
}

4. Configure K8s Resource Matching

Now that you’ve registered the cluster, you will need to define Matching Criteria so that the Platform Orchestrator knows when to use it.

  1. Click on the relevant row in the Resource Definition table
  2. Then switch to the Matching Criteria tab
  3. Click + Add new Criteria
  4. Configure the matching rules as needed
  5. Click Save

This example configures matching on an Environment type development.

humctl api post /orgs/$HUMANITEC_ORG/resources/defs/generic-k8s-static-credentials/criteria \
-d '{
"env_type": "development"
}'

This example configures the same matching on an Environment type development using the Terraform provider.

resource "humanitec_resource_definition_criteria" "generic-k8s-static-credentials-matching" {
  resource_definition_id = humanitec_resource_definition.generic_cluster.id
  env_type               = "development"
}
5. Install the Agent for a private K8s cluster

If your Kubernetes cluster is private, i.e. its API server endpoint is not accessible from public networks, install the Humanitec Agent.

6. Start deploying to your K8s cluster

Any Deployment fitting the matching criteria you configured will now be directed at your Kubernetes cluster.
