Kubernetes

Humanitec is designed to integrate with your existing Kubernetes clusters wherever they’re hosted. You can configure Humanitec to run your Application in a single Kubernetes cluster or across different clusters in a multi-cloud setup while having an all-in-one solution for managing what is running where. Humanitec has integrated support for all major public cloud providers and vanilla Kubernetes out-of-the-box.

AKS

Azure Kubernetes Service (AKS) is natively supported by Humanitec and should take at most 30 minutes to integrate, provided you have all of the prerequisites met.

Prerequisites

To integrate your AKS cluster with Humanitec, you will need an AKS cluster and the ability to create and authorize an Azure Service Principal, as described in the following sections.

Create a service principal

To integrate with your cluster, Humanitec requires a Service Principal.

  1. Follow Microsoft’s instructions to create a Service Principal for access to the Kubernetes cluster using the Azure CLI.
  2. Save the generated JSON object for later use. It adheres to this format:
{
  "appId": "myAppId",
  "displayName": "myServicePrincipalName",
  "password": "myServicePrincipalPassword",
  "tenant": "myTenantId"
}
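
If you don't yet have a Service Principal, one way to create it is with the Azure CLI (a sketch; myServicePrincipalName is a placeholder name):

```shell
# Create a Service Principal; the command prints the JSON object shown above
# (appId, displayName, password, tenant). Save the output securely.
az ad sp create-for-rbac \
  --name myServicePrincipalName \
  --output json
```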

If you wish to reuse an existing service principal, you need to assemble the same JSON object yourself. Follow Microsoft’s instructions to get an existing service principal. This command outputs the base object for a principal named myPrincipal:

az ad sp list --filter "displayname eq 'myPrincipal'" \
  --query "[0].{appId:appId, displayName:displayName, tenant:appOwnerOrganizationId}" \
  --output json

You will still have to add the "password" element yourself. If you cannot recover the password, consider resetting the service principal’s credentials.
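
If the password cannot be recovered, a new one can be generated with the Azure CLI (a sketch; note that by default this rotates the principal’s existing secret):

```shell
# Reset the principal's credentials and print a fresh "password" value
az ad sp credential reset --id "myAppId" --output json
```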

Configure cluster access

The newly created service principal needs appropriate access to the target AKS cluster on both the control plane and the data plane.

  • Control plane (Azure Resource Manager level): the permission Microsoft.ContainerService/managedClusters/read
  • Data plane (Kubernetes level): cluster administrative access. This is due to the requirement to create new Kubernetes namespaces on the cluster.

Depending on how authentication and authorization are configured for the cluster, this can be achieved by different sets of role assignments. Refer to the AKS Access and Identity documentation to find the relevant roles for your setup.

For example, when using AKS-managed Microsoft Entra integration, a specific combination of control plane and data plane roles is required. Refer to the AKS documentation for the exact role names.
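
When using AKS-managed Microsoft Entra integration with Azure RBAC, the role assignments for the Service Principal might look like the following sketch (the role names and scope here are assumptions to verify against the AKS documentation):

```shell
SP_APP_ID="myAppId"
CLUSTER_ID=$(az aks show --resource-group myResourceGroup \
  --name myAKSCluster --query id --output tsv)

# Control plane: allows reading the managed cluster resource
az role assignment create --assignee "$SP_APP_ID" \
  --role "Azure Kubernetes Service Cluster User Role" \
  --scope "$CLUSTER_ID"

# Data plane: cluster administrative access inside Kubernetes
# (needed to create new namespaces)
az role assignment create --assignee "$SP_APP_ID" \
  --role "Azure Kubernetes Service RBAC Cluster Admin" \
  --scope "$CLUSTER_ID"
```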

Connect to your cluster

Now that your cluster is ready, you need to connect it to Humanitec. You can do so on the Resources Management screen.

  1. Start on the Resources Management screen and select Add resource definition.
  2. In the modal dialog, select Kubernetes Cluster.
  3. Then select k8s-cluster-aks.
  4. Next, you’ll need to provide the following details:
    1. A unique resource ID.
    2. The Provider’s Credentials JSON object saved earlier when creating or reusing the service principal.
    3. The IP address or DNS name of the Ingress Controller’s Load-Balancer.
    4. The Cluster Name as it appears in your Azure Portal.
    5. A Proxy URL (if used).
    6. The Azure Resource Group that the cluster is deployed in.
    7. The Azure Subscription ID for the account.
    8. If your cluster has been configured to use AKS-managed Microsoft Entra integration, the AAD Server Application ID must be set to the value 6dae42f8-4368-4678-94ff-3960e28e3630 (see the AKS documentation).
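
The Cluster Name, Resource Group, and Subscription ID can be looked up with the Azure CLI, for example:

```shell
# List AKS clusters with their resource groups
az aks list --query "[].{name:name, resourceGroup:resourceGroup}" --output table

# Show the subscription ID of the currently active account
az account show --query id --output tsv
```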

Resource Matching

Now that you’ve registered the cluster, you will need to define a Matching Rule so that Humanitec knows when to use it.

  1. Click on the relevant row in the Resource Definition table.
  2. Then switch to the Matching Criteria tab.
  3. Click + Add new Criteria.
  4. Configure the matching rules as needed.
  5. Click Save.

EKS

Amazon Elastic Kubernetes Service (EKS) is natively supported by Humanitec and should take at most 30 minutes to integrate, provided you have all of the prerequisites met.

You have these choices to connect your EKS cluster:

  • Configure a Cloud Account using dynamic credentials (recommended). Follow the Cloud Account instructions.
  • Configure an IAM user using static credentials. Continue with the instructions below.

Prerequisites

  • Be able to create IAM policies and attach them to an IAM user
  • Have an EKS cluster with a node group configured
  • kubectl access

Configuring an IAM user

To integrate with your cluster, Humanitec requires an IAM User with a policy granting the following permissions.

  • eks:DescribeNodegroup
  • eks:ListNodegroups
  • eks:AccessKubernetesApi
  • eks:DescribeCluster
  • eks:ListClusters

A policy containing these permissions will look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "eks:DescribeNodegroup",
                "eks:ListNodegroups",
                "eks:AccessKubernetesApi",
                "eks:DescribeCluster",
                "eks:ListClusters"
            ],
            "Resource": "*"
        }
    ]
}
  1. Create an IAM User with the Policy attached.
  2. Generate an access key for the IAM user.
  3. Save the access key ID and the secret access key in the following format:
{
  "aws_access_key_id": "AAABBBCCCDDDEEEFFFGGG",
  "aws_secret_access_key": "zZxXyY123456789aAbBcCdD"
}
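
The steps above can be sketched with the AWS CLI (humanitec-eks-user and humanitec-eks-policy are placeholder names; the policy JSON is assumed to be saved as policy.json):

```shell
# 1. Create the policy from the JSON document above
aws iam create-policy \
  --policy-name humanitec-eks-policy \
  --policy-document file://policy.json

# 2. Create the user and attach the policy (replace the account ID)
aws iam create-user --user-name humanitec-eks-user
aws iam attach-user-policy \
  --user-name humanitec-eks-user \
  --policy-arn arn:aws:iam::XXXXXXXXXXXX:policy/humanitec-eks-policy

# 3. Generate an access key; note the AccessKeyId and SecretAccessKey
#    in the output and save them in the format shown above
aws iam create-access-key --user-name humanitec-eks-user
```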

Mapping the IAM user into the cluster

This IAM user must be mapped into your cluster. The mapping is done by adding an entry to the aws-auth ConfigMap in the cluster. The AWS documentation provides more detailed information on how to manage users and IAM roles for your cluster.

  1. Edit the aws-auth ConfigMap via kubectl edit configmap aws-auth -n kube-system.
  2. Add the following configuration to the mapUsers array, replacing XXXXXXXXXXXX and <username> as appropriate.
- userarn: arn:aws:iam::XXXXXXXXXXXX:user/<username>
  username: <username>
  groups:
    - system:masters
  3. (Optional) If the IAM User is assuming a role that has the policy described above, the role also needs to be mapped. Add the following configuration to the mapRoles array, replacing XXXXXXXXXXXX and <rolename> as appropriate.
- rolearn: arn:aws:iam::XXXXXXXXXXXX:role/<rolename>
  username: <rolename>
  groups:
    - system:masters

If you are using eksctl, you can create the mapping with the following command:

eksctl create iamidentitymapping \
    --cluster "${CLUSTER_NAME}" \
    --region "${REGION}" \
    --arn "${IAM_ARN}" \
    --group system:masters \
    --no-duplicate-arns \
    --username "${K8S_USER}"

Where the following environment variables are set:

Variable      Example                                  Description
CLUSTER_NAME  my-eks-cluster                           The name of the EKS cluster.
REGION        us-east-1                                The AWS region the cluster is in.
IAM_ARN       arn:aws:iam::12345678901:role/eks-role   The ARN for the user or role to map.
K8S_USER      my-k8s-user                              The name of the user to create / map to inside the cluster.
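
You can verify the mapping afterwards, for example with:

```shell
# List the identity mappings currently present in the aws-auth ConfigMap
eksctl get iamidentitymapping \
    --cluster "${CLUSTER_NAME}" \
    --region "${REGION}"
```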

Connect the cluster to Humanitec

Now that your cluster is ready, you need to connect it to Humanitec. You can do so on the Resources Management screen.

  1. Start on the Resources Management screen and select Add resource definition.
  2. In the modal dialog, select Kubernetes Cluster.
  3. Then select k8s-cluster-eks.
  4. Next, you’ll need to provide the following details:
    1. A unique resource ID.
    2. The Provider’s Credentials object created earlier.
    3. The IP address or DNS name of the Ingress Controller’s Load-Balancer.
      1. For more information, see Ingress Controllers.
    4. The Hosted Zone the Load Balancer is hosted in.
    5. The Cluster Name as it appears in AWS.
    6. A Proxy URL (if used).
    7. The AWS Region that the cluster is deployed in.

Resource Matching

Now that you’ve registered the cluster, you will need to define a Matching Rule so that Humanitec knows when to use it.

  1. Click on the relevant row in the Resource Definition table.
  2. Then switch to the Matching Criteria tab.
  3. Click + Add new Criteria.
  4. Configure the matching rules as needed.
  5. Click Save.

GKE

Google Kubernetes Engine (GKE) is natively supported by Humanitec and should take at most 30 minutes to integrate, provided you have all the prerequisites met.

Prerequisites

To integrate your GKE cluster with Humanitec, you will need the following:

  • A GKE cluster
  • The ability to create Service Accounts with the Kubernetes Engine Admin (roles/container.admin) role (or a role with the equivalent set of permissions)
  • The following APIs enabled on your Google Cloud project:
    • Cloud Resource Manager API
    • Stackdriver API

Create a service account

To integrate with your cluster, Humanitec requires a Google Cloud Service Account with the Kubernetes Engine Admin (roles/container.admin) role (or equivalent permissions).

  1. Follow Google’s instructions to create the Service Account and grant the appropriate role.
  2. Create and save an Account Key for this Service Account in JSON format.
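
As a sketch, the Service Account and key can be created with the gcloud CLI (humanitec-sa and my-project are placeholder names):

```shell
PROJECT_ID="my-project"

# Create the Service Account
gcloud iam service-accounts create humanitec-sa \
  --project "$PROJECT_ID" \
  --display-name "Humanitec integration"

# Grant the Kubernetes Engine Admin role
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:humanitec-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "roles/container.admin"

# Create and download a JSON key for the Service Account
gcloud iam service-accounts keys create humanitec-sa-key.json \
  --iam-account "humanitec-sa@${PROJECT_ID}.iam.gserviceaccount.com"
```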

Connect your cluster

  1. Start on the Resources Management screen and select Add resource definition.
  2. In the modal dialog, select Kubernetes Cluster.
  3. Then select k8s-cluster-gke.
  4. Next, you’ll need to provide the following details:
    1. A unique resource ID.
    2. The Account Key for the Service account you created earlier.
    3. The IP address or DNS name of the Ingress Controller’s Load-Balancer.
      1. For more information, see Ingress Controllers.
    4. The Cluster Name as it appears in your Google Cloud.
    5. The Google Cloud Project ID.
    6. A Proxy URL (if used).
    7. The Google Cloud Zone that the cluster is deployed in.
  5. Finally, click Add Kubernetes Cluster to register the cluster in Humanitec.
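
The Cluster Name, Project ID, and Zone can be looked up with gcloud, for example:

```shell
# List GKE clusters with their locations in the current project
gcloud container clusters list --format "table(name, location)"

# Show the active project ID
gcloud config get-value project
```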

Resource Matching

Now that you’ve registered the cluster, you will need to define a Matching Rule so that Humanitec knows when to use it.

  1. Click on the relevant row in the Resource Definition table.
  2. Then switch to the Matching Criteria tab.
  3. Click + Add new Criteria.
  4. Configure the matching rules as needed.
  5. Click Save.

Vanilla Kubernetes

Kubernetes is natively supported by Humanitec and should take at most 30 minutes to integrate, provided you have all the prerequisites met.

Prerequisites

To integrate your vanilla cluster with Humanitec, you will need the following:

  • A Kubernetes cluster
  • The ability to create a kubeconfig that authenticates with the cluster using Client Certificates
  • Network access from Humanitec to the cluster’s API endpoint, by either:
    • ensuring the cluster API endpoint is accessible on the public internet
    • whitelisting the Humanitec public IPs for the API endpoint
    • configuring a VPN allowing Humanitec access to the cluster API endpoint
  • As an alternative to exposing the API endpoint, setting up a pull-based approach using the Humanitec Operator GitOps mode

Create a user

To integrate with your cluster, Humanitec requires a client certificate and private key with a role binding for the clusterrole/admin role (or equivalent permissions).

  1. Follow Kubernetes’ instructions to create the account and grant the appropriate permissions.
  2. Save the certificate, private key, and certificate authority for later use.
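
A sketch of creating such a user with client certificates (humanitec is a placeholder common name, and the CA file paths are assumptions; Kubernetes’ CertificateSigningRequest flow is an alternative way to get the certificate signed):

```shell
# Generate a private key and a certificate signing request for user "humanitec"
openssl genrsa -out humanitec.key 2048
openssl req -new -key humanitec.key -out humanitec.csr -subj "/CN=humanitec"

# Sign the CSR with the cluster's certificate authority
openssl x509 -req -in humanitec.csr \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out humanitec.crt -days 365

# Bind the user to the admin ClusterRole across the cluster
kubectl create clusterrolebinding humanitec-admin \
  --clusterrole=admin --user=humanitec
```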

Connect your Cluster

  1. Start on the Resources Management screen and select Add resource definition.
  2. In the modal dialog, select Kubernetes Cluster.
  3. Then select k8s-cluster.
  4. Next, you’ll need to provide the following details:
    1. A unique resource ID.
    2. The Client Certificate, Client Private Key, and Cluster Certificate Authority for the account you created earlier.
    3. A Proxy URL (if used).
    4. The Cluster API Server URL that Humanitec can use to access the cluster.
    5. The IP address or DNS name of the Ingress Controller’s Load-Balancer.
  5. Finally, click Add Kubernetes Cluster to register the cluster in Humanitec.
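
Connectivity can be checked locally with the same credentials before registering them (a sketch; the file paths and server URL are placeholders):

```shell
# Verify that the client certificate authenticates against the API server
kubectl --server "https://my-cluster.example.com:6443" \
  --certificate-authority ca.crt \
  --client-certificate humanitec.crt \
  --client-key humanitec.key \
  get namespaces
```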

Resource Matching

Now that you’ve registered the cluster, you will need to define a Matching Rule so that Humanitec knows when to use it.

  1. Click on the relevant row in the Resource Definition table.
  2. Then switch to the Matching Criteria tab.
  3. Click + Add new Criteria.
  4. Configure the matching rules as needed.
  5. Click Save.