Kubernetes
Overview
The Humanitec Platform Orchestrator is designed to integrate with your existing Kubernetes clusters wherever they’re hosted. You can configure the Orchestrator to run your Application in a single Kubernetes cluster or across different clusters in a multi-cloud setup while having an all-in-one solution for managing what is running where. The Orchestrator has integrated support for all major public cloud providers and vanilla Kubernetes out-of-the-box.
Kubernetes Clusters are represented in the Platform Orchestrator by Resource Definitions of the Resource Type k8s-cluster.
Platform Engineers maintain the cluster Resource Definitions as shown on this page. By attaching the proper Matching Criteria, they configure which Deployments will be directed at which cluster.
Application developers do not have to do anything to request the cluster Resource. The Platform Orchestrator automatically matches a target cluster for every Deployment. Technically speaking, that is because k8s-cluster is a so-called implicit Resource Type which is automatically referenced for every Deployment.
Integrating your Kubernetes cluster involves these steps:
- Preparing credentials via a Cloud Account
- Configuring cluster access
- Creating a Resource Definition for your cluster, using the previously created credentials
- Creating Matching Criteria for the Resource Definition
- (optional) If your cluster is private, i.e. its API server endpoint is not accessible from public networks, installing the Humanitec Agent
- Start deploying to your cluster
AKS
Azure Kubernetes Service (AKS) is natively supported by the Platform Orchestrator and should take at most 30 minutes to integrate, provided you have all of the prerequisites met.
Prerequisites
To integrate your AKS cluster with the Platform Orchestrator, you will need the following:
- An AKS cluster
- The Azure CLI installed
- (optional) The humctl CLI installed
  - Authentication performed against the Platform Orchestrator via humctl login
  - The environment variable HUMANITEC_ORG set to your Organization ID
- The ability to assign roles on the scope of the cluster
1. Prepare AKS credentials
Prepare the credentials to access your AKS cluster by setting up an Azure Cloud Account in the Platform Orchestrator.
2. Configure AKS cluster access
The principal used with the Cloud Account needs appropriate access on the scope of the target AKS cluster on the control plane and the data plane.
- Control plane (Azure Resource Manager level): the permission Microsoft.ContainerService/managedClusters/read
- Data plane (Kubernetes level): permission to create new Kubernetes namespaces on the cluster, as well as permission to manage a range of namespaced Kubernetes resources (details below)
Depending on how authentication and authorization are configured for the cluster, this can be achieved with different sets of role assignments.
The following instructions refer to a cluster using AKS-managed Microsoft Entra integration (recommended).
Refer to the AKS Access and Identity documentation to find the relevant roles for your setup when using a different access method.
You have several options available to configure access.
- Cluster admin access
This option does not require you to maintain any custom role, but grants more permissions than necessary.
For the Control plane, assign the role Azure Kubernetes Service Cluster User Role to the identity used by the Cloud Account.
For the Data plane:
- Assign the role Azure Kubernetes Service RBAC Cluster Admin to the identity used by the Cloud Account, OR
- Add the identity used by the Cloud Account to one of the groups configured as an admin group of the target cluster
- Least privilege role
This option works with a Kubernetes ClusterRole containing just the required permissions (“least privilege”). To access the cluster API server in Azure, a lightweight control plane role is needed as well.
All custom role definitions shown below are designed for running in Direct mode .
Running in Legacy mode requires additional permissions for creating Kubernetes objects; deployments will fail until those permissions are configured.
Create an Entra ID security group and add the identity used by the Cloud Account as a member.
For the Control plane, assign the role Azure Kubernetes Service Cluster User Role to the Entra ID group you created.
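If you are scripting this setup, the following sketch shows one way to do it with the Azure CLI. The group display name, identity object ID, resource group, and cluster name are placeholders and assumptions, not fixed values:
# Create the security group and add the Cloud Account identity as a member
az ad group create --display-name humanitec-deployers --mail-nickname humanitec-deployers
az ad group member add --group humanitec-deployers --member-id <identity-object-id>

# Control plane: allow the group to obtain user credentials for the cluster
az role assignment create \
  --assignee-object-id $(az ad group show --group humanitec-deployers --query id -o tsv) \
  --assignee-principal-type Group \
  --role "Azure Kubernetes Service Cluster User Role" \
  --scope $(az aks show -g <resource-group> -n <cluster-name> --query id -o tsv)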
For the Data plane:
- Create a Kubernetes ClusterRole on the target cluster. See the permissions list below:
Kubernetes ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: humanitec-deploy-access
rules:
  # Namespaces management
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["create", "get", "list", "update", "patch", "delete"]
  # Humanitec's CRs management
  - apiGroups: ["humanitec.io"]
    resources: ["resources", "secretmappings", "workloadpatches", "workloads"]
    verbs: ["create", "get", "list", "update", "patch", "delete", "deletecollection", "watch"]
  # Deployment / Workload Status in UI
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "replicasets", "daemonsets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  # Container's logs in the UI
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list"]
  # Pause Environments
  - apiGroups: ["apps"]
    resources: ["deployments/scale"]
    verbs: ["update"]
  # To get the active resources (resources outputs)
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
- Create a Kubernetes ClusterRoleBinding for this ClusterRole onto the Entra ID group you created. Use the Object ID of the group as the subject’s name:
Kubernetes ClusterRoleBinding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: humanitec-deploy-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: humanitec-deploy-access
subjects:
  - kind: Group
    name: ${GROUP_OBJECT_ID}
    apiGroup: rbac.authorization.k8s.io
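After creating both manifests, apply them and optionally verify the access, for example (a sketch; the file names are assumptions and GROUP_OBJECT_ID must hold the group’s Object ID):
kubectl apply -f humanitec-clusterrole.yaml -f humanitec-clusterrolebinding.yaml

# Optional check: impersonate the group and confirm namespace creation is allowed
kubectl auth can-i create namespaces --as=any-user --as-group="${GROUP_OBJECT_ID}"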
If you are running the Terraform Runner in the target cluster, you additionally need to:
- Create a Kubernetes Role in the Runner namespace:
TF Runner Kubernetes Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: terraform-runner-role
  namespace: tf-runner-namespace
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "delete"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete", "deletecollection"]
- Create a Kubernetes RoleBinding for this Role in the Runner namespace onto the Entra ID group you created. Use the Object ID of the group as the subject’s name:
TF Runner Kubernetes RoleBinding
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tf-runner-deploy-access
  namespace: tf-runner-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: terraform-runner-role
subjects:
  - kind: Group
    name: ${GROUP_OBJECT_ID}
    apiGroup: rbac.authorization.k8s.io
3. Create an AKS Resource Definition
Now that your cluster is ready, you connect it to the Platform Orchestrator by providing a Resource Definition.
- Start on the Resources Management screen and select Add resource definition
- In the modal dialog, select Kubernetes Cluster
- Select k8s-cluster-aks
- Next, you’ll need to provide the following details:
- A unique Resource Definition ID
- For Credentials, select the Cloud Account you created earlier
- (optional) The Agent URL if you are using the Humanitec Agent. Go here to see the required format
- (optional) The IP address or DNS name of the Ingress Controller’s load balancer
- For more information, see Ingress Controllers
- The Cluster Name as it appears in your Azure Portal
- (optional) A Proxy URL. It represents the kubeconfig value proxy-url
- The Azure Resource Group the cluster is deployed in
- If your cluster has been configured to use AKS-managed Microsoft Entra integration, the AAD Server Application ID must be set to the value 6dae42f8-4368-4678-94ff-3960e28e3630 (see the AKS documentation)
- The Azure Subscription ID for the cluster
Create a Resource Definition like the one shown in the example below. Set the driver_account to the ID of the Cloud Account you created earlier. Install it into your Organization using this command:
humctl apply -f aks-temporary-credentials.yaml
aks-temporary-credentials.yaml (view on GitHub):
# Connect to an AKS cluster using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aks-temporary-credentials
entity:
  name: aks-temporary-credentials
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "azure-identity"
  # which needs to be configured for your Organization.
  driver_account: azure-temporary-creds
  driver_type: humanitec/k8s-cluster-aks
  driver_inputs:
    values:
      loadbalancer: 20.10.10.10
      name: demo-123
      resource_group: my-resources
      subscription_id: 12345678-aaaa-bbbb-cccc-0987654321ba
      # Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
      server_app_id: 6dae42f8-4368-4678-94ff-3960e28e3630
Create a humanitec_resource_definition resource using the Humanitec Terraform provider like the one shown in the example below. Set the driver_account to the ID of the Cloud Account you created earlier.
aks-temporary-credentials.tf (view on GitHub):
resource "humanitec_resource_definition" "aks-temporary-credentials" {
  driver_type    = "humanitec/k8s-cluster-aks"
  id             = "aks-temporary-credentials"
  name           = "aks-temporary-credentials"
  type           = "k8s-cluster"
  driver_account = "azure-temporary-creds"
  driver_inputs = {
    values_string = jsonencode({
      "loadbalancer"    = "20.10.10.10"
      "name"            = "demo-123"
      "resource_group"  = "my-resources"
      "subscription_id" = "12345678-aaaa-bbbb-cccc-0987654321ba"
      "server_app_id"   = "6dae42f8-4368-4678-94ff-3960e28e3630"
    })
  }
}
4. Configure AKS Resource Matching
Now that you’ve registered the cluster you will need to define Matching Criteria so that the Platform Orchestrator knows when to use it.
- Click on the relevant row in the Resource Definition table
- Then switch to the Matching Criteria tab
- Click + Add new Criteria
- Configure the matching rules as needed
- Click Save
This example configures matching on an Environment type development.
humctl api post /orgs/$HUMANITEC_ORG/resources/defs/aks-temporary-credentials/criteria \
-d '{
"env_type": "development"
}'
This example configures matching on an Environment type development.
resource "humanitec_resource_definition_criteria" "aks-temporary-credentials-matching" {
  resource_definition_id = humanitec_resource_definition.aks-temporary-credentials.id
  env_type               = "development"
}
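Regardless of the method used, you can verify the criteria afterwards via the same API path, for example (a sketch using humctl):
humctl api get /orgs/$HUMANITEC_ORG/resources/defs/aks-temporary-credentials/criteria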
5. Install the Agent for a private AKS cluster
If your AKS cluster is private, i.e. its API server endpoint is not accessible from public networks, install the Humanitec Agent.
6. Start deploying to your AKS cluster
Any Deployment fitting the matching criteria you configured will now be directed at your AKS cluster.
EKS
AWS Elastic Kubernetes Service (EKS) is natively supported by the Platform Orchestrator and should take at most 30 minutes to integrate, provided you have all of the prerequisites met.
Prerequisites
To integrate your EKS cluster with the Platform Orchestrator, you will need the following:
- An EKS cluster with a NodePool configured
- The aws CLI installed
- (optional) The humctl CLI installed
  - Authentication performed against the Platform Orchestrator via humctl login
  - The environment variable HUMANITEC_ORG set to your Organization ID
- The ability to create IAM policies and attach them to an IAM user
1. Prepare EKS credentials
Prepare the credentials to access your EKS cluster by setting up an AWS Cloud Account in the Platform Orchestrator.
2. Configure EKS cluster access
The IAM principal used with the Cloud Account needs appropriate access on the scope of the target EKS cluster through an IAM policy.
- Prepare an IAM policy defining the required permissions. Set the value of "Resource" to the ARN of your target cluster:
cat <<EOF > role-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeNodegroup",
        "eks:ListNodegroups",
        "eks:AccessKubernetesApi",
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "arn:aws:eks:us-west-2:111122223333:cluster/my-cluster"
    }
  ]
}
EOF
Define the name of the new IAM policy according to your own naming schema:
export POLICY_NAME=Humanitec_Access_EKS_MyCluster
Create the IAM policy and capture its ARN:
export POLICY_ARN=$(aws iam create-policy \
  --policy-name ${POLICY_NAME} \
  --policy-document file://role-policy.json \
  | jq .Policy.Arn | tr -d "\"")
echo ${POLICY_ARN}
- Attach the IAM policy to the IAM principal (role or user) used in the Cloud Account:
# When using an IAM role
aws iam attach-role-policy \
  --role-name <iam-role-name> \
  --policy-arn ${POLICY_ARN}
# When using an IAM user
aws iam attach-user-policy \
  --user-name <iam-user-name> \
  --policy-arn ${POLICY_ARN}
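You can confirm the attachment afterwards, for example (a sketch for the IAM role variant; use list-attached-user-policies for an IAM user):
aws iam list-attached-role-policies --role-name <iam-role-name>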
3. Configure Kubernetes level access
The IAM principal from the Cloud Account must be associated with the required Kubernetes Permissions. See Associate IAM Identities with Kubernetes Permissions in the AWS documentation for detailed instructions depending on your chosen method.
You have several options available to configure access.
- Cluster admin access
This option does not require you to maintain any custom objects, but grants more permissions than necessary.
- When using access entries, create an access entry of type STANDARD for the IAM principal used in the Cloud Account, and add the access policy arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy to the entry (see the sketch below this list).
- When using the aws-auth ConfigMap, add the principal to:
groups:
  - system:masters
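For the access entries method, this is a sketch of the corresponding AWS CLI calls; the cluster name and principal ARN are placeholders:
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/humanitec-cloud-account \
  --type STANDARD

aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/humanitec-cloud-account \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster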
- Least privilege
We describe a least privilege approach for the recommended method of access entries only.
All custom role definitions shown below are designed for running in Direct mode .
Running in Legacy mode requires additional permissions for creating Kubernetes objects; deployments will fail until those permissions are configured.
- Create an access entry of type STANDARD for the IAM principal used in the Cloud Account, and add the group name humanitec-platform-orchestrator (you are free to choose another name). Do not add an access policy.
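A sketch of this step using the AWS CLI; the cluster name and principal ARN are placeholders:
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/humanitec-cloud-account \
  --type STANDARD \
  --kubernetes-groups humanitec-platform-orchestrator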
- Create a Kubernetes ClusterRole on the target cluster:
Kubernetes ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: humanitec-deploy-access
rules:
  # Namespaces management
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["create", "get", "list", "update", "patch", "delete"]
  # Humanitec's CRs management
  - apiGroups: ["humanitec.io"]
    resources: ["resources", "secretmappings", "workloadpatches", "workloads"]
    verbs: ["create", "get", "list", "update", "patch", "delete", "deletecollection", "watch"]
  # Deployment / Workload Status in UI
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "replicasets", "daemonsets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  # Container's logs in the UI
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list"]
  # Pause Environments
  - apiGroups: ["apps"]
    resources: ["deployments/scale"]
    verbs: ["update"]
  # To get the active resources (resources outputs)
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
- Create a Kubernetes ClusterRoleBinding for this ClusterRole onto the group you named in the access entry:
Kubernetes ClusterRoleBinding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: humanitec-deploy-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: humanitec-deploy-access
subjects:
  - kind: Group
    name: humanitec-platform-orchestrator
    apiGroup: rbac.authorization.k8s.io
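After applying the ClusterRole and ClusterRoleBinding, you can optionally verify the group’s access by impersonation (a sketch; any user name works for the check):
kubectl auth can-i create namespaces --as=any-user --as-group=humanitec-platform-orchestrator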
If you are running the Terraform Runner in the target cluster, you additionally need to:
- Create a Kubernetes Role in the Runner namespace:
TF Runner Kubernetes Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: terraform-runner-role
  namespace: tf-runner-namespace
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "delete"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete", "deletecollection"]
- Create a Kubernetes RoleBinding for this Role in the Runner namespace onto the group you named in the access entry:
TF Runner Kubernetes RoleBinding
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tf-runner-deploy-access
  namespace: tf-runner-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: terraform-runner-role
subjects:
  - kind: Group
    name: humanitec-platform-orchestrator
    apiGroup: rbac.authorization.k8s.io
4. Create an EKS Resource Definition
Now that your cluster is ready, you need to connect it to the Platform Orchestrator by providing a Resource Definition.
- Start on the Resources Management screen and select Add resource definition
- In the modal dialog, select Kubernetes Cluster
- Then select k8s-cluster-eks
- Next, you’ll need to provide the following details:
- A unique Resource Definition ID
- For Credentials, select the Cloud Account you created earlier
- (optional) The Agent URL if you are using the Humanitec Agent. Go here to see the required format
- (optional) The IP address or DNS name of the Ingress Controller’s load balancer.
- For more information, see Ingress Controllers
- (optional) The Hosted Zone the Load Balancer is hosted in
- The Cluster Name as it appears in your AWS console
- (optional) A Proxy URL. It represents the kubeconfig value proxy-url
- The AWS Region that the cluster is deployed in
Create a Resource Definition like the one shown in the example below. Set the driver_account to the ID of the Cloud Account you created earlier. Install it into your Organization using this command:
humctl apply -f eks-temporary-credentials.yaml
eks-temporary-credentials.yaml (view on GitHub):
# Connect to an EKS cluster using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: eks-temporary-credentials
entity:
  name: eks-temporary-credentials
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "aws-role"
  # which needs to be configured for your Organization.
  driver_account: aws-temp-creds
  # The driver_type k8s-cluster-eks automatically handles the temporary credentials
  # injected via the driver_account.
  driver_type: humanitec/k8s-cluster-eks
  driver_inputs:
    values:
      region: eu-central-1
      name: demo-123
      loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
      loadbalancer_hosted_zone: ABC0DEF5WYYZ00
Create a humanitec_resource_definition resource using the Humanitec Terraform provider like the one shown in the example below. Set the driver_account to the ID of the Cloud Account you created earlier.
eks-temporary-credentials.tf (view on GitHub):
resource "humanitec_resource_definition" "eks-temporary-credentials" {
  driver_type    = "humanitec/k8s-cluster-eks"
  id             = "eks-temporary-credentials"
  name           = "eks-temporary-credentials"
  type           = "k8s-cluster"
  driver_account = "aws-temp-creds"
  driver_inputs = {
    values_string = jsonencode({
      "region"                   = "eu-central-1"
      "name"                     = "demo-123"
      "loadbalancer"             = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
      "loadbalancer_hosted_zone" = "ABC0DEF5WYYZ00"
    })
  }
}
5. Configure EKS Resource Matching
Now that you’ve registered the cluster you will need to define Matching Criteria so that the Platform Orchestrator knows when to use it.
- Click on the relevant row in the Resource Definition table
- Then switch to the Matching Criteria tab
- Click + Add new Criteria
- Configure the matching rules as needed
- Click Save
This example configures matching on an Environment type development.
humctl api post /orgs/$HUMANITEC_ORG/resources/defs/eks-temporary-credentials/criteria \
-d '{
"env_type": "development"
}'
This example configures matching on an Environment type development.
resource "humanitec_resource_definition_criteria" "eks-temporary-credentials-matching" {
  resource_definition_id = humanitec_resource_definition.eks-temporary-credentials.id
  env_type               = "development"
}
6. Install the Agent for a private EKS cluster
If your EKS cluster is private, i.e. its API server endpoint is not accessible from public networks, install the Humanitec Agent.
7. Start deploying to your EKS cluster
Any Deployment fitting the matching criteria you configured will now be directed at your EKS cluster.
GKE
Google Kubernetes Engine (GKE) is natively supported by the Platform Orchestrator and should take at most 30 minutes to integrate, provided you have all the prerequisites met.
Prerequisites
To integrate your GKE cluster with the Platform Orchestrator, you will need the following:
- A GKE cluster
- The gcloud CLI installed
- (optional) The humctl CLI installed
  - Authentication performed against the Platform Orchestrator via humctl login
  - The environment variable HUMANITEC_ORG set to your Organization ID
- The ability to create IAM service accounts and role bindings, and optionally custom roles
- The Google Cloud Resource Manager API enabled
1. Prepare GKE credentials
Prepare the credentials to access your GKE cluster by setting up a GCP Cloud Account in the Platform Orchestrator.
2. Configure GKE cluster access
The IAM principal used with the Cloud Account needs appropriate access on the scope of the target GKE cluster.
The GKE access control mechanisms generally provide a choice of using Kubernetes RBAC, IAM, or a combination of both. Depending on your strategy, choose from one of the options below.
- IAM container admin role
This option does not require you to maintain any custom role, but grants more permissions than necessary.
- Assign the IAM role “Kubernetes Engine Admin” (roles/container.admin) to the service account used by the Cloud Account
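A sketch of this role assignment using gcloud; the project ID and service account email are placeholders:
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:humanitec-deployer@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/container.admin"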
All custom role definitions shown below are designed for running in Direct mode .
Running in Legacy mode requires additional permissions for creating Kubernetes objects; deployments will fail until those permissions are configured.
- IAM least privilege custom role
This option works with an IAM custom role containing just the required permissions.
- Create an IAM custom role named “GKE access least privilege” or similar with these permissions:
IAM custom role permissions
# GKE cluster credentials
"container.clusters.get",
# Namespaces management
"container.namespaces.get",
"container.namespaces.create",
"container.namespaces.update",
"container.namespaces.delete",
# Humanitec's CRs management
"container.thirdPartyObjects.get",
"container.thirdPartyObjects.list",
"container.thirdPartyObjects.create",
"container.thirdPartyObjects.update",
"container.thirdPartyObjects.delete",
# Deployment / Workload Status in UI
"container.daemonSets.get",
"container.daemonSets.list",
"container.deployments.get",
"container.deployments.list",
"container.jobs.get",
"container.jobs.list",
"container.pods.get",
"container.pods.list",
"container.replicaSets.get",
"container.replicaSets.list",
"container.statefulSets.get",
"container.statefulSets.list",
# Container's logs in the UI
"container.pods.getLogs",
# Pause Environments
"container.deployments.updateScale",
# To get the active resources (resources outputs)
"container.configMaps.get",
# For private TF runner if deployed by Humanitec in the same cluster as your workloads
"container.jobs.create",
"container.jobs.delete",
"container.secrets.create",
"container.secrets.delete",
"container.secrets.get"
- Assign the IAM custom role to the service account used by the Cloud Account
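A sketch of creating and assigning such a custom role with gcloud; the role ID, project ID, and service account email are placeholders, and the --permissions list is abbreviated here (use the full list above):
gcloud iam roles create humanitecGkeLeastPrivilege \
  --project my-gcp-project \
  --title "GKE access least privilege" \
  --permissions "container.clusters.get,container.namespaces.get,container.namespaces.create,container.namespaces.update,container.namespaces.delete"

gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:humanitec-deployer@my-gcp-project.iam.gserviceaccount.com" \
  --role="projects/my-gcp-project/roles/humanitecGkeLeastPrivilege"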
- Kubernetes ClusterRole + IAM cluster access custom role
This option works with a Kubernetes ClusterRole containing just the required permissions (“least privilege”). To access the cluster API server in Google Cloud, a minimal IAM custom role is needed as well.
- Create an IAM custom role named “GKE access” or similar with these permissions:
IAM custom role permissions
"container.clusters.get"
- Assign the IAM custom role to the service account used by the Cloud Account
- Create a Kubernetes ClusterRole on the target cluster:
Kubernetes ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: humanitec-deploy-access
rules:
  # Namespaces management
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["create", "get", "list", "update", "patch", "delete"]
  # Humanitec's CRs management
  - apiGroups: ["humanitec.io"]
    resources: ["resources", "secretmappings", "workloadpatches", "workloads"]
    verbs: ["create", "get", "list", "update", "patch", "delete", "deletecollection", "watch"]
  # Deployment / Workload Status in UI
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "replicasets", "daemonsets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
  # Container's logs in the UI
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list"]
  # Pause Environments
  - apiGroups: ["apps"]
    resources: ["deployments/scale"]
    verbs: ["update"]
  # To get the active resources (resources outputs)
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
- Create a Kubernetes ClusterRoleBinding for this ClusterRole onto the service account used by the Cloud Account:
Kubernetes ClusterRoleBinding
The User to name in the ClusterRoleBinding depends on the type of Cloud Account being used to access the cluster.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: humanitec-deploy-access
subjects:
  - kind: User
    name: ${SERVICE_ACCOUNT_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com
roleRef:
  kind: ClusterRole
  name: humanitec-deploy-access
  apiGroup: rbac.authorization.k8s.io
Obtain the OAuth client ID for the service account:
export SERVICE_ACCOUNT_CLIENT_ID=$(gcloud iam service-accounts \
describe ${SERVICE_ACCOUNT_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com \
--format json | jq .oauth2ClientId | tr -d "\"")
Then use the client ID in the ClusterRoleBinding:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: humanitec-deploy-access
subjects:
  - kind: User
    # Make sure to "quote" the ID
    name: "${SERVICE_ACCOUNT_CLIENT_ID}"
roleRef:
  kind: ClusterRole
  name: humanitec-deploy-access
  apiGroup: rbac.authorization.k8s.io
If you are running the Terraform Runner in the target cluster, you additionally need to:
- Create a Kubernetes Role in the Runner namespace:
TF Runner Kubernetes Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: terraform-runner-role
  namespace: tf-runner-namespace
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "delete"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete", "deletecollection"]
- Create a Kubernetes RoleBinding for this Role in the Runner namespace onto the service account used by the Cloud Account:
TF Runner Kubernetes RoleBinding
The User to name in the RoleBinding depends on the type of Cloud Account being used to access the cluster.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tf-runner-deploy-access
  namespace: tf-runner-namespace
subjects:
  - kind: User
    name: ${SERVICE_ACCOUNT_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: terraform-runner-role
Obtain the OAuth client ID for the service account:
export SERVICE_ACCOUNT_CLIENT_ID=$(gcloud iam service-accounts \
describe ${SERVICE_ACCOUNT_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com \
--format json | jq .oauth2ClientId | tr -d "\"")
Then use the client ID in the RoleBinding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tf-runner-deploy-access
  namespace: tf-runner-namespace
subjects:
  - kind: User
    # Make sure to "quote" the ID
    name: "${SERVICE_ACCOUNT_CLIENT_ID}"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: terraform-runner-role
3. Create a GKE Resource Definition
Now that your cluster is ready, you connect it to the Platform Orchestrator by providing a Resource Definition.
- Start on the Resources Management screen and select Add resource definition
- In the modal dialog, select Kubernetes Cluster
- Select k8s-cluster-gke
- Next, you’ll need to provide the following details:
- A unique Resource Definition ID
- For Credentials, select the Cloud Account you created earlier
- (optional) The Agent URL if you are using the Humanitec Agent. Go here to see the required format
- (optional) The IP address or DNS name of the Ingress Controller’s load balancer
- For more information, see Ingress Controllers
- The Cluster Name as it appears in the Google Cloud console
- The Google Cloud Project ID
- (optional) A Proxy URL. It represents the kubeconfig value proxy-url
- The Google Cloud Zone that the cluster is deployed in
Create a Resource Definition like the one shown in the example below. Set the driver_account to the ID of the Cloud Account you created earlier. Install it into your Organization using this command:
humctl apply -f gke-temporary-credentials.yaml
gke-temporary-credentials.yaml (view on GitHub):
# Connect to a GKE cluster using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: gke-temporary-credentials
entity:
  name: gke-temporary-credentials
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "gcp-identity"
  # which needs to be configured for your Organization.
  driver_account: gcp-temporary-creds
  driver_type: humanitec/k8s-cluster-gke
  driver_inputs:
    values:
      loadbalancer: 35.10.10.10
      name: demo-123
      zone: europe-west2-a
      project_id: my-gcp-project
Create a humanitec_resource_definition resource using the Humanitec Terraform provider like the one shown in the example below. Set the driver_account to the ID of the Cloud Account you created earlier.
gke-temporary-credentials.tf (view on GitHub):
resource "humanitec_resource_definition" "gke-temporary-credentials" {
  driver_type    = "humanitec/k8s-cluster-gke"
  id             = "gke-temporary-credentials"
  name           = "gke-temporary-credentials"
  type           = "k8s-cluster"
  driver_account = "gcp-temporary-creds"
  driver_inputs = {
    values_string = jsonencode({
      "loadbalancer" = "35.10.10.10"
      "name"         = "demo-123"
      "zone"         = "europe-west2-a"
      "project_id"   = "my-gcp-project"
    })
  }
}
4. Configure GKE Resource Matching
Now that you’ve registered the cluster you will need to define Matching Criteria so that the Platform Orchestrator knows when to use it.
- Click on the relevant row in the Resource Definition table
- Then switch to the Matching Criteria tab
- Click + Add new Criteria
- Configure the matching rules as needed
- Click Save
This example configures matching on an Environment type development.
humctl api post /orgs/$HUMANITEC_ORG/resources/defs/gke-temporary-credentials/criteria \
-d '{
"env_type": "development"
}'
This example configures matching on an Environment type development.
resource "humanitec_resource_definition_criteria" "gke-temporary-credentials-matching" {
  resource_definition_id = humanitec_resource_definition.gke-temporary-credentials.id
  env_type               = "development"
}
5. Install the Agent for a private GKE cluster
If your GKE cluster is private, i.e. its API server endpoint is not accessible from public networks, install the Humanitec Agent.
6. Start deploying to your GKE cluster
Any Deployment fitting the matching criteria you configured will now be directed at your GKE cluster.
Vanilla Kubernetes
Kubernetes is natively supported by the Platform Orchestrator and should take at most 30 minutes to integrate, provided you have all the prerequisites met.
Prerequisites
To integrate your vanilla cluster with the Platform Orchestrator, you will need the following:
- A Kubernetes cluster
- The ability to create a kubeconfig that authenticates a user with the cluster using client certificates
- Making the cluster’s API endpoint accessible to the Platform Orchestrator, either by:
  - Ensuring the cluster API endpoint is accessible on the public internet
    - If required, whitelisting the Humanitec public IPs for the API endpoint
  - Using the Humanitec Agent
  - Configuring a VPN allowing the Platform Orchestrator access to the cluster API endpoint
1. Prepare credentials
To integrate with your cluster, the Platform Orchestrator requires a client certificate and private key.
Follow the Kubernetes instructions to create a normal user . Save the certificate, private key, and certificate authority for later use.
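A minimal sketch of that flow using openssl and the Kubernetes CSR API; the user name humanitec-deployer and the one-year expiration are assumptions:
# Generate a private key and a certificate signing request
openssl genrsa -out humanitec-deployer.key 2048
openssl req -new -key humanitec-deployer.key -out humanitec-deployer.csr -subj "/CN=humanitec-deployer"

# Submit the CSR to the cluster and approve it
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: humanitec-deployer
spec:
  request: $(cat humanitec-deployer.csr | base64 | tr -d "\n")
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 31536000
  usages:
  - client auth
EOF
kubectl certificate approve humanitec-deployer

# Retrieve the signed client certificate
kubectl get csr humanitec-deployer -o jsonpath='{.status.certificate}' | base64 -d > humanitec-deployer.crt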
2. Configure cluster access
Create a RoleBinding for the user to the role clusterrole/admin.
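One way to create such a binding with kubectl (a sketch; a cluster-wide binding is shown here so the permissions apply in every namespace, and the binding name and user CN, matching the certificate you created, are assumptions):
kubectl create clusterrolebinding humanitec-deployer-admin \
  --clusterrole=admin \
  --user=humanitec-deployer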
3. Create a Resource Definition
Now that your cluster is ready, you connect it to the Platform Orchestrator by providing a Resource Definition.
- Start on the Resources Management screen and select Add resource definition
- In the modal dialog, select Kubernetes Cluster
- Select k8s-cluster
- Next, you’ll need to provide the following details:
- A unique Resource Definition ID
- (optional) The Agent URL if you are using the Humanitec Agent. Go here to see the required format
- The Client Certificate, Client Private Key, and Cluster Certificate Authority for the account you created earlier
- (optional) A Proxy URL. It represents the kubeconfig value proxy-url
- The Cluster API Server URL that the Platform Orchestrator can use to access the cluster
- (optional) The IP address or DNS name of the Ingress Controller’s load balancer
Create a Resource Definition like the one shown in the example below. Install it into your Organization using this command:
humctl apply -f generic-k8s-client-certificate.yaml
generic-k8s-client-certificate.yaml (view on GitHub):
# Resource Definition for a generic Kubernetes cluster
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: generic-k8s-static-credentials
entity:
  name: generic-k8s-static-credentials
  type: k8s-cluster
  driver_type: humanitec/k8s-cluster
  driver_inputs:
    values:
      name: my-generic-k8s-cluster
      loadbalancer: 35.10.10.10
      cluster_data:
        server: https://35.11.11.11:6443
        # Single line base64-encoded cluster CA data in the format "LS0t...ca-data....=="
        certificate-authority-data: "LS0t...ca-data....=="
    secrets:
      credentials:
        # Single line base64-encoded client certificate data in the format "LS0t...cert-data...=="
        client-certificate-data: "LS0t...cert-data...=="
        # Single line base64-encoded client key data in the format "LS0t...key-data...=="
        client-key-data: "LS0t...key-data...=="
Create a humanitec_resource_definition resource using the Humanitec Terraform provider like the one shown in the example below.
generic-k8s-client-certificate.tf (view on GitHub):
resource "humanitec_resource_definition" "generic-k8s-static-credentials" {
  driver_type = "humanitec/k8s-cluster"
  id          = "generic-k8s-static-credentials"
  name        = "generic-k8s-static-credentials"
  type        = "k8s-cluster"
  driver_inputs = {
    values_string = jsonencode({
      "name"         = "my-generic-k8s-cluster"
      "loadbalancer" = "35.10.10.10"
      "cluster_data" = {
        "server"                     = "https://35.11.11.11:6443"
        "certificate-authority-data" = "LS0t...ca-data....=="
      }
    })
    secrets_string = jsonencode({
      "credentials" = {
        "client-certificate-data" = "LS0t...cert-data...=="
        "client-key-data"         = "LS0t...key-data...=="
      }
    })
  }
}
4. Configure K8s Resource Matching
Now that you’ve registered the cluster you will need to define Matching Criteria so that the Platform Orchestrator knows when to use it.
- Click on the relevant row in the Resource Definition table
- Then switch to the Matching Criteria tab
- Click + Add new Criteria
- Configure the matching rules as needed
- Click Save
This example configures matching on an Environment type development.
humctl api post /orgs/$HUMANITEC_ORG/resources/defs/generic-k8s-static-credentials/criteria \
-d '{
"env_type": "development"
}'
This example configures matching on an Environment type development.
resource "humanitec_resource_definition_criteria" "generic-k8s-static-credentials-matching" {
  resource_definition_id = humanitec_resource_definition.generic-k8s-static-credentials.id
  env_type               = "development"
}
5. Install the Agent for a private K8s cluster
If your Kubernetes cluster is private, i.e. its API server endpoint is not accessible from public networks, install the Humanitec Agent.
6. Start deploying to your K8s cluster
Any Deployment fitting the matching criteria you configured will now be directed at your Kubernetes cluster.