Manage workload identity
Overview
This guide describes a step-by-step blueprint showing how to manage workload identity for Kubernetes-based workloads, followed by a complete executable example.
Once set up by the platform team, developers will enjoy fully automated, transparent provisioning of cloud permissions for the resources they request via Score, without needing to know how such permissions are managed in your cloud.
Developers will be able to choose the kind of permission their workload requires on a particular resource, e.g. “admin” or “read-only”. The solution supports access to the same resource by one or many workloads, each using any access level.
The guide leverages the workload identity solutions of the major cloud providers. While the details vary greatly, they all work by associating permissions on cloud resources with the Kubernetes service account a workload runs as:
graph LR
service_account_1 o-.-o permissions_1[Permissions] o-.-o cloud_resource_a
subgraph k8s[K8s cluster]
direction LR
service_account_1[ServiceAccount 1] o--o workload_1[Workload 1]
service_account_2[ServiceAccount 2] o--o workload_2[Workload 2]
end
permissions_1 o-.-o cloud_resource_b
service_account_2 o-.-o permissions_2[Permissions] o-.-o cloud_resource_b
workload_1 -->|read-only| cloud_resource_a[Cloud resource A]
workload_1 -->|admin| cloud_resource_b
workload_2 -->|read-only| cloud_resource_b[Cloud resource B]
The guide on Modeling identities in the Resource Graph elaborates on the different patterns in greater detail.
Supported services
Currently supported providers and clusters are:
- Google Cloud (GCP) using Workload Identity Federation for GKE
We are currently preparing to expand coverage to these providers:
- AWS using EKS Pod Identity or IAM roles for service accounts
- Azure using Microsoft Entra Workload ID for AKS
This guide provides step-by-step instructions on how to build the workload identity solution in your Humanitec Organization, letting you learn the mechanics in the process.
You may also skip straight to deploying the complete example in your Humanitec Organization.
Provision a Kubernetes service account
The cloud providers’ workload identity solutions require the use of a Kubernetes service account for your workload. A cloud identity is then associated with that service account to assign permissions to cloud resources.
The first step is therefore to extend the Resource Graph so that every workload depends on a dedicated service account with all the required annotations, labels, or other attributes required for the workload identity mechanism of the cloud provider.
A recommended practice is to source any values the service account requires from a separate config Resource.
---
title: Creating a service account and sourcing values from a config
---
graph LR
workload-1("workload-1<br/>type: workload<br/>class: default<br/>id: modules.hello-gcp-1") -->|references| k8s_service_account_1("K8S ServiceAccount 1<br/>type: k8s-service-account<br/>class: default<br/>id: modules.hello-gcp-1") -->|references| cluster_config_gke("GKE config<br/>type: config<br/>class: default<br/>id: gke-config")
class k8s_service_account_1,cluster_config_gke highlight
Follow these steps to achieve this Graph setup:
- Have the Platform Orchestrator automatically provision a Kubernetes ServiceAccount for every workload. Provide a Resource Definition of type workload that requests a Resource of type k8s-service-account and adds it to the workload specification.
Sample workload Resource Definition custom-workload.yaml (view on GitHub):
# This Resource Definition uses the Template Driver to assign a custom Kubernetes service account to a workload
# to facilitate workload identity
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: wi-gcp-custom-workload
entity:
name: wi-gcp-custom-workload
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
# Request a resource of type 'k8s-service-account' in the Resource Graph
# and add its name to the workload spec
outputs: |
update:
- op: add
path: /spec/serviceAccountName
value: ${resources.k8s-service-account.outputs.name}
# Adjust matching criteria as required
criteria:
- app_id: workload-identity-test-gcp
A Resource of type workload is always provisioned for each Workload being deployed because workload is an implicit Resource Type. No resource reference via Score is required.
- Provide a Resource Definition to provision the k8s-service-account requested by the workload. The service account represents the identity of the Workload. It must provide the required output(s) for identifying this identity, depending on the cloud provider. Refer to the outputs of the sample Resource Definition to see the implementation details; the rendered result is illustrated after the sample.
Sample service account Resource Definition custom-service-account.yaml (view on GitHub):
# This Resource Definition uses the Template Driver to create a Kubernetes service account.
# It pulls central configuration parameters from `config` Resources.
# It returns the "principal" string for the ServiceAccount to configure GKE workload identity.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: wi-gcp-custom-service-account
entity:
driver_type: humanitec/template
name: wi-gcp-custom-service-account
type: k8s-service-account
driver_inputs:
values:
res_id: ${context.res.id}
gke_project_id: ${resources['config.default#gke-config'].outputs.gke_project_id}
gke_project_number: ${resources['config.default#gke-config'].outputs.gke_project_number}
# Using the id `k8s-namespace`, this reference targets the existing, implicitly created namespace resource
k8s_namespace: "${resources['k8s-namespace.default#k8s-namespace'].outputs.namespace}"
templates:
init: |
res_id: {{ .driver.values.res_id }}
{{- $res_name := index (splitList "." .driver.values.res_id) 1 }}
name: {{ $res_name | toRawJson }}
principal: principal://iam.googleapis.com/projects/{{ .driver.values.gke_project_number }}/locations/global/workloadIdentityPools/{{ .driver.values.gke_project_id }}.svc.id.goog/subject/ns/{{ .driver.values.k8s_namespace }}/sa/{{ $res_name }}
manifests: |
service-account.yaml:
location: namespace
data:
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .init.name }}
annotations:
hum-res: {{ .init.res_id }}
outputs: |
name: {{ .init.name }}
principal: {{ .init.principal }}
# Adjust matching criteria as required
criteria:
- app_id: workload-identity-test-gcp
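For illustration, here is what the outputs rendered by this template could look like, assuming a hypothetical project my-project with project number 123456789012, a namespace my-app-development, and the workload hello-gcp-1 (all values are examples, not taken from the sample repository):

name: hello-gcp-1
# GKE Workload Identity Federation principal format:
# principal://iam.googleapis.com/projects/<PROJECT_NUMBER>/locations/global/workloadIdentityPools/<PROJECT_ID>.svc.id.goog/subject/ns/<NAMESPACE>/sa/<SERVICE_ACCOUNT_NAME>
principal: principal://iam.googleapis.com/projects/123456789012/locations/global/workloadIdentityPools/my-project.svc.id.goog/subject/ns/my-app-development/sa/hello-gcp-1

This principal string is what the permissions/policy Resource will later bind cloud roles to.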
- We recommend you provide cluster configuration values via a centralized config resource to reduce redundancy. The service account Resource Definition shown in the previous step contains references to such a resource via ${resources['config.default...} placeholders. By naming a Resource ID in the reference (the part after the # sign), the Resource will be a Shared Resource and unique in the Graph. All other Resources using the same reference will use the same Resource.
Sample cluster config Resource Definition gke-config.yaml (view on GitHub):
# This Resource Definition of type "config" uses the Template Driver to provide configuration values
# to other Resource Definitions for accessing the GKE cluster
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: wi-gcp-gke-config
entity:
name: wi-gcp-gke-config
type: config
driver_type: humanitec/template
# The identity configured in this Cloud Account needs permission
# to deploy Workloads to your GKE cluster
driver_account: YOURORG/YOURACCOUNT
driver_inputs:
values:
templates:
outputs: |
gke_project_id: YOURVALUE
# Keep the "" around the value of gke_project_number
gke_project_number: "YOURVALUE"
criteria:
- class: default
res_id: gke-config
env_type: development
app_id: workload-identity-test-gcp
Now extend the Graph to provision the actual cloud service.
Provision cloud service and permissions
Next, enable the Platform Orchestrator to provision a cloud service of the desired type, which your workload will eventually access using workload identity.
We describe a basic and advanced setup for you to choose from going forward. Please read through both setups to find which one suits your needs. The example section further down features sample implementations for both setups.
This guide uses these cloud services as examples:
- GCP: Cloud Storage buckets via the Resource Type gcs, and Spanner instances via the Resource Type spanner-instance
Basic setup
Overview
The basic setup for accessing a cloud service using workload identity has these features:
- All workloads requesting a resource via Score receive the same level of access, e.g. a “reader” or “admin” access, as defined by the platform team
- Developers therefore do not need to, and cannot, request a particular level of access on a resource they request via Score
- Resources may be either private to a single Workload or shared between Workloads
- The resulting Resource Graph is simpler than the advanced setup
If you require flexible access levels per workload for developers to choose from, the advanced setup is for you. Please read through the basic setup in any case as they share the same foundation.
Add cloud service and permissions Resources
Add these Resources to the Graph to establish the basic setup:
- The cloud service Resource requested via Score
- A permissions/policy Resource provisioning the required cloud permissions for all workloads using that cloud service instance
- (recommended) Another config to centrally maintain parameters for the cloud service
Target Resource Graph (basic)
Adding those elements extends the Resource Graph like this:
---
title: Adding gcs bucket, bucket policy, and app config
---
graph LR
workload-1("workload-1<br/>type: workload<br/>class: default<br/>id: modules.hello-gcp-1") --->|depends on| bucket-1("bucket-1<br/>type: gcs<br/>class: default<br/>id: modules.hello-gcp-1.externals.bucket-1")
policy-bucket-1("Policy for bucket-1<br/>type: gcp-iam-policy-binding<br/>class: gcs-default<br/>id: modules.hello-gcp-1.externals.bucket-1") -.->|resource selector| k8s_service_account_1
policy-bucket-1 -->|co-provisioned by /<br/>depends on| bucket-1
app_config_gke("App config<br/>type: config<br/>class: default<br/>id: app-config")
bucket-1 -->|references| app_config_gke
workload-1 -->|references| k8s_service_account_1("K8S ServiceAccount 1<br/>type: k8s-service-account<br/>class: default<br/>id: modules.hello-gcp-1") -->|references| cluster_config_gke("GKE config<br/>type: config<br/>class: default<br/>id: gke-config")
policy-bucket-1 -->|references| app_config_gke
class bucket-1,policy-bucket-1,app_config_gke highlight
Corresponding Score file
This Score file will produce that Graph:
apiVersion: score.dev/v1b1
metadata:
name: hello-gcp-1
containers:
hello-world:
image: .
variables:
GCS_BUCKET_1_NAME: ${resources.bucket-1.name}
resources:
bucket-1:
type: gcs
These mechanisms create the Graph:
- The Score file requests a resource of the type of your cloud service, adding it to the Graph as a dependent Resource of the workload.
Sample cloud service Resource Definition gcs.yaml (view on GitHub):
# This Resource Definition of type `gcs` uses the Terraform Driver to provision a gcs bucket
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: wi-gcp-gcs
entity:
driver_type: humanitec/terraform-runner
name: wi-gcp-gcs
type: gcs
# The identity used in this Cloud Account is the one executing the Terraform code
# It needs permissions to manage Cloud Storage buckets
driver_account: ${resources['config.default#app-config'].account}
driver_inputs:
values:
append_logs_to_error: true
variables:
app_id: "${context.app.id}"
env_id: "${context.env.id}"
res_id: "${context.res.id}"
project_id: ${resources['config.default#app-config'].outputs.project_id}
region: ${resources['config.default#app-config'].outputs.region}
credentials_config:
variables:
access_token: access_token
files:
providers.tf: |
terraform {
required_providers {
google = {
source = "hashicorp/google"
}
random = {
source = "hashicorp/random"
}
}
}
variables.tf: |
locals {
workload_id = split(".", var.res_id)[1]
}
variable "access_token" {
type = string
sensitive = true
}
variable "app_id" {}
variable "env_id" {}
variable "res_id" {}
variable "project_id" {}
variable "region" {}
main.tf: |
provider "google" {
access_token = var.access_token
default_labels = {
"humanitec" = "true"
"hum-app" = var.app_id
"hum-env" = var.env_id
"hum-res" = replace(var.res_id, ".", "-")
"managed-by" = "terraform"
}
}
resource "random_string" "bucket_name" {
length = 10
special = false
lower = true
upper = false
}
resource "google_storage_bucket" "bucket" {
project = var.project_id
name = random_string.bucket_name.result
location = var.region
uniform_bucket_level_access = true
force_destroy = true
}
outputs.tf: |
output "name" {
value = google_storage_bucket.bucket.name
}
provision:
# Co-provision an IAM Policy resource of class "gcs-default" for this default gcs
# By not specifying an ID, the co-provisioned gcp-iam-policy-binding will have the same ID as the present resource
gcp-iam-policy-binding.gcs-default:
is_dependent: true
# Adjust matching criteria as required
criteria:
- app_id: workload-identity-test-gcp
- That cloud service Resource co-provisions a Resource to create the cloud permissions/policy. Look for the provision: statement in the sample Resource Definition above. The Resource Type to use for this Definition depends on the cloud provider; the GCP example uses gcp-iam-policy-binding.
The permissions/policy Resource performs a lookup into the Graph to find the identities of all workloads depending on the same resource, using a Resource selector. Note that this may be more than one workload for a Shared Resource (the example below shows such a setup).
It then provisions the required permission or policy resources, using the level of access that is defined in the Resource Definition IaC code.
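For reference, the Resource selector used in the sample below can be read piece by piece; the placeholder is quoted verbatim from the Resource Definition, with comments added here for explanation:

# Selector used in the driver inputs of the sample below
principals: ${resources['gcs.default<workload>k8s-service-account'].outputs.principal}
# gcs.default         -> start at the gcs Resource of class "default" with the same ID as this Resource
# <workload>          -> traverse to all workloads that depend on that gcs
# k8s-service-account -> select the k8s-service-account each of those workloads depends on
# .outputs.principal  -> collect the "principal" output of every matched service account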
Sample permissions/policy Resource Definition gcs-iam-member.yaml (view on GitHub):
# This Resource Definition uses the Terraform Driver to create IAM policies for gcs resources
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: wi-gcp-gcs-iam-member
entity:
driver_type: humanitec/terraform-runner
name: wi-gcp-gcs-iam-member
type: gcp-iam-policy-binding
# The identity used in this Cloud Account is the one executing the Terraform code
# It needs permissions to manage IAM members
driver_account: ${resources['config.default#app-config'].account}
driver_inputs:
values:
variables:
res_class: ${context.res.class}
# Obtain the bucket name from the bucket resource this resource depends on
gcs_bucket_name: ${resources['gcs.default'].outputs.name}
# This Resource selector traverses the Graph to get the "principal" output
# from all k8s-service-account resources that require access
# 1. Start with the gcs resource of class "default" and the same ID as the present resource
# This effectively matches the gcs that co-provisioned the present resource
# 2. Find all the workloads that depend on that gcs
# 3. Find all the k8s-service-accounts that these workloads depend on
# 4. For each k8s-service-account, read its "principal" output
# Notes:
# - The present gcp-iam-policy-binding depends on the gcs because the gcs co-provisions it with is_dependent = true
# - The selector may return more than one element because more than one workload may depend on the gcs
# if the gcs is a Shared Resource
principals: ${resources['gcs.default<workload>k8s-service-account'].outputs.principal}
credentials_config:
variables:
access_token: access_token
files:
providers.tf: |
terraform {
required_providers {
google = {
source = "hashicorp/google"
}
}
}
variables.tf: |
variable "res_class" {}
variable "access_token" {
type = string
sensitive = true
}
variable "gcs_bucket_name" {}
# This variable is a set because a resource selector, which is used to retrieve it, always returns an array
variable "principals" {
type = set(string)
}
main.tf: |
provider "google" {
access_token = var.access_token
}
# Create one IAM member for each principal using a role of your choice
resource "google_storage_bucket_iam_member" "iam_member" {
for_each = var.principals
bucket = var.gcs_bucket_name
role = "roles/storage.objectAdmin"
member = each.key
}
outputs.tf: |
# This output is for debugging and illustration purposes only
output "principals" {
value = join(",", var.principals)
}
# Match the class that is used in the "provision" statement in the gcs Resource Definition
criteria:
- class: gcs-default
- Similar to the externalized cluster configuration, we recommend providing app configuration values centrally via another config Resource. The cloud service Resource Definition can reference that Resource via ${resources['config.default#app-config']...} placeholders.
Sample app config Resource Definition app-config.yaml (view on GitHub):
# This Resource Definition of type "config" uses the Template Driver to provide configuration values
# to other Resource Definitions for accessing the Google Cloud Project
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: wi-gcp-app-config
entity:
name: wi-gcp-app-config
type: config
driver_type: humanitec/template
# The identity used in this Cloud Account needs permission to manage
# the kinds of cloud resources used in the example
driver_account: YOURVALUE
driver_inputs:
values:
templates:
outputs: |
project_id: YOURVALUE
region: YOURVALUE
criteria:
- class: default
res_id: app-config
env_type: development
app_id: workload-identity-test-gcp
All workloads requesting a resource of the proper type via Score will now automatically receive permissions via workload identity according to the defined access level.
Advanced setup
Overview
The advanced setup extends the basic setup for greater flexibility:
- Developers select a particular access class via Score, e.g. “reader” or “admin”, for each resource
- For a Shared Resource, different Workloads may choose different access levels
- An additional layer is added to the resulting Resource Graph, increasing its complexity compared to the basic setup
To provide this flexibility, the advanced setup introduces an intermediate Resource to the Graph called a Delegator Resource. This Resource is located in front of the concrete Resource that represents the actual cloud service instance.
Developers request an access class as a Resource Class via Score:
resources:
my-resource:
type: some-resource-type
class: type-read-only
That class is used in the Matching Criteria of the Delegator Resource Definition. Whenever a Resource of a certain access class is requested via Score, that request provisions a Delegator of the requested class, which then performs these functions:
- Request, and therefore provision, the actual concrete cloud service Resource
- Pass through any outputs of the concrete Resource to the upstream elements in the Graph
- Co-provision a cloud permissions Resource for that particular access level, expressed through the Resource class
Sample Delegator Resource Definition gcs-delegator.yaml (view on GitHub):
# This Resource Definition of type `gcs` implements a "Delegator" resource
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: wi-gcp-gcs-delegator
entity:
driver_type: humanitec/echo
name: wi-gcp-gcs-delegator
type: gcs
driver_inputs:
values:
# Referencing the "concrete" Resource and passing through all of its outputs
name: ${resources['gcs.default'].outputs.name}
provision:
# Co-provision an IAM Policy resource for this class of gcs
# The gcp-iam-policy-binding resource will have the same class and ID as the present resource
gcp-iam-policy-binding:
is_dependent: true
# Adjust matching criteria as required, but must match all access classes
criteria:
- class: gcs-admin
app_id: workload-identity-test-gcp
- class: gcs-read-only
app_id: workload-identity-test-gcp
Target Resource Graph (advanced)
The extended Resource Graph will then look like this for a workload requesting “read-only” access to a private resource:
---
title: Adding a "gcs" Delegator Resource
---
graph LR
workload-1("workload-1<br/>type: workload<br/>class: default<br/>id: modules.hello-gcp-1") --->|depends on| bucket-1-delegator-read-only("bucket-1 Delegator for read-only access<br/>type: gcs<br/>class: gcs-read-only<br/>id: modules.hello-gcp-1.externals.bucket-1")
bucket-1-delegator-read-only -->|references| bucket-1-concrete("bucket-1 concrete<br/>type: gcs<br/>class: default<br/>id: modules.hello-gcp-1.externals.bucket-1")
policy-bucket-1-read-only("Policy for bucket-1 read-only access<br/>type: gcp-iam-policy-binding<br/>class: gcs-read-only<br/>id: modules.hello-gcp-1.externals.bucket-1") -.->|resource selector| k8s_service_account_1
policy-bucket-1-read-only -->|co-provisioned by /<br/>depends on| bucket-1-delegator-read-only
app_config_gke("App config<br/>type: config<br/>class: default<br/>id: app-config")
bucket-1-concrete -->|references| app_config_gke
workload-1 -->|references| k8s_service_account_1("K8S ServiceAccount 1<br/>type: k8s-service-account<br/>class: default<br/>id: modules.hello-gcp-1") --->|references| cluster_config_gke("GKE config<br/>type: config<br/>class: default<br/>id: gke-config")
policy-bucket-1-read-only -->|references| app_config_gke
class bucket-1-delegator-read-only highlight
Corresponding Score file
This Score file will produce the Graph. Note that it now requests a class for the gcs resource.
apiVersion: score.dev/v1b1
metadata:
name: hello-gcp-1
containers:
hello-world:
image: .
variables:
GCS_BUCKET_1_NAME: ${resources.bucket-1.name}
resources:
bucket-1:
type: gcs
# Adding the class to request a particular access
class: gcs-read-only
The permissions/policy resource now becomes dynamic. In the basic setup, it always provisioned the same permissions to all associated workload identities. Now it uses its own Resource class to determine the corresponding permissions. The exact implementation depends on the IaC tooling. Our example shows a Terraform implementation.
The Resource Definitions for both the Delegator and the permissions/policy Resource must match all access classes you define.
Sample permissions/policy Resource Definition gcs-iam-member.yaml (view on GitHub):
# This Resource Definition uses the Terraform Driver to create IAM policies for gcs resources
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: wi-gcp-gcs-iam-member
entity:
driver_type: humanitec/terraform-runner
name: wi-gcp-gcs-iam-member
type: gcp-iam-policy-binding
# The identity used in this Cloud Account is the one executing the Terraform code
# It needs permissions to manage IAM members
driver_account: ${resources['config.default#app-config'].account}
driver_inputs:
values:
variables:
res_class: ${context.res.class}
# Obtain the bucket name from the bucket resource this resource depends on (same class and ID)
gcs_bucket_name: ${resources.gcs.outputs.name}
# This Resource selector traverses the Graph to get the "principal" output
# from all k8s-service-account resources that require access
# 1. Start with the gcs resource of the same class and ID as the present resource
# This effectively matches the gcs Delegator that co-provisioned the present resource
# 2. Find all the workloads that depend on that gcs Delegator
# 3. Find all the k8s-service-accounts that these workloads depend on
# 4. For each k8s-service-account, read its "principal" output
# Notes:
# - The present gcp-iam-policy-binding depends on the gcs Delegator because the gcs Delegator co-provisions it with is_dependent = true
# - The selector may return more than one element because more than one workload may depend on the gcs Delegator
# if the gcs Delegator is a Shared Resource
principals: ${resources['gcs<workload>k8s-service-account'].outputs.principal}
credentials_config:
variables:
access_token: access_token
files:
providers.tf: |
terraform {
required_providers {
google = {
source = "hashicorp/google"
}
}
}
variables.tf: |
variable "res_class" {}
variable "access_token" {
type = string
sensitive = true
}
variable "gcs_bucket_name" {}
# This variable is a set because a resource selector, which is used to retrieve it, always returns an array
variable "principals" {
type = set(string)
}
# Define the roles for each access class
locals {
roles = {
"gcs-read-only" = "roles/storage.objectViewer"
"gcs-admin" = "roles/storage.objectAdmin"
}
}
main.tf: |
provider "google" {
access_token = var.access_token
}
# Create one IAM member for each principal using the role for the current access class
resource "google_storage_bucket_iam_member" "iam_member" {
for_each = var.principals
bucket = var.gcs_bucket_name
role = local.roles[var.res_class]
member = each.key
}
outputs.tf: |
# This output is for debugging and illustration purposes only
output "principals" {
value = join(",", var.principals)
}
# Match the class that is used in the "provision" statement in the gcs Delegator Resource Definition
criteria:
- class: gcs-read-only
- class: gcs-admin
Providing an access class
Follow these steps to provide an access class:
- Decide on the access class you wish to support, e.g. “gcs-read-only” or “s3-admin”. The class name must be unique across Resource Types to ensure proper matching of all Resource Definitions.
- Determine the set of permissions, defined e.g. by a cloud role or policy, to be assigned for this access class. The details depend on your cloud provider. Any role or policy objects to be assigned must be maintained outside of the Platform Orchestrator because their lifecycle is not bound to any Deployment or Application Environment.
- Expand the cloud permissions/policy Resource Definition (see the sketch after these steps):
  - Add the new access class to its matching criteria
  - Cover the cloud role or policy to be used for the access class in the IaC code used in the Resource Definition
- Add the new access class to the matching criteria of the Delegator Resource Definition.
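As a sketch of these steps for the GCP example, assume you want to add a hypothetical gcs-writer access class backed by the roles/storage.objectCreator role (both names are illustrative choices, not part of the sample repository). The additions to the permissions/policy Resource Definition would then look like this:

# In the embedded Terraform of gcs-iam-member.yaml: map the new class to its role
locals {
  roles = {
    "gcs-read-only" = "roles/storage.objectViewer"
    "gcs-admin"     = "roles/storage.objectAdmin"
    "gcs-writer"    = "roles/storage.objectCreator"  # new access class (illustrative)
  }
}
# In the matching criteria of both gcs-iam-member.yaml and gcs-delegator.yaml
# (plus any other criteria such as app_id that you already use):
criteria:
  - class: gcs-read-only
  - class: gcs-admin
  - class: gcs-writer  # new access class (illustrative)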
Register the Resource Classes
We generally recommend that you register all Resource Classes with the Platform Orchestrator to support developers with better validation. See Creating a Resource Class for details.
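As a hedged sketch of how such a registration could look using the humctl api command (the API path shown here is an assumption; verify it against the Creating a Resource Class documentation):

# Register the "gcs-read-only" class for the "gcs" Resource Type (path is an assumption)
humctl api post /orgs/my-org/resources/types/gcs/classes \
  -d '{"id": "gcs-read-only", "description": "Read-only access to a gcs bucket"}'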
Supporting more Resource Types
To support another Resource Type with workload identity, you need to:
- Provide the concrete Resource Definition for the Resource Type to perform the actual provisioning using IaC tooling
- Provide the permissions/policy Resource Definition for the Resource Type
- Follow the examples for the basic or advanced setup
- (Advanced setup only) Provide all access classes you wish to support
No change to the workload and k8s-service-account Resource Definitions is required.
Request Resources via Score
These principles apply for requesting resources via Score:
- Developers may specify an id for any resource in Score to make it a Shared Resource
- (Advanced setup only) Developers must specify a class out of the supported set of access classes for that type
Examples show the resources section only.
Score examples (basic setup)
Each example pairs the resources sections of two Score files (Score file 1, Score file 2) with the resulting resource access.
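As an illustration, distilled from the sample Score files in the Example section below: two Workloads requesting the same type and id share a single resource, while a request without an id yields a private resource for that Workload only. In the basic setup, the access level is always the one fixed in the permissions/policy Resource Definition.

# Score file 1, resources section only
resources:
  private-bucket-1:
    type: gcs          # no "id": private bucket for this Workload only
  bucket-2:
    type: gcs
    id: bucket-2       # Shared Resource

# Score file 2, resources section only
resources:
  bucket-2:
    type: gcs
    id: bucket-2       # same id: both Workloads access the same bucket
                       # with the platform-defined access level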
Score examples (advanced setup)
Each example again pairs the resources sections of two Score files with the resulting resource access per Workload.
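Correspondingly for the advanced setup, again distilled from the sample Score files below: different Workloads may request different access classes on the same Shared Resource.

# Score file 1, resources section only
resources:
  bucket-2:
    type: gcs
    class: gcs-read-only   # this Workload receives read-only access
    id: bucket-2

# Score file 2, resources section only
resources:
  bucket-2:
    type: gcs
    class: gcs-admin       # this Workload receives admin access to the same bucket
    id: bucket-2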
Example
Sample implementations of both the basic and the advanced setup in this guide are available in this public GitHub repository. It showcases a number of access level variations and two types of cloud resources for each cloud provider.
Prerequisites
To run the example, you’ll need:
- A Humanitec Organization and a user with permission to manage Resource Definitions, create Applications, and perform Deployments
- The humctl CLI installed
- A Kubernetes cluster for Workload deployments of one of the types listed above, with the respective workload identity solution enabled
- This Kubernetes cluster connected to the Platform Orchestrator. If you do not have the cluster connected yet, these are your options:
Options to connect your Kubernetes cluster
| Five-minute-IDP | Bring your own cluster | Reference architecture |
|---|---|---|
| Set up a local demo cluster following the Five-minute IDP. Duration: 5 min. No need to deploy the demo Workload, just perform the setup. Ephemeral (throw-away) setup for demo purposes. | Connect an existing cluster by following the Quickstart up to “Connect your cluster” (guided), or using the instructions in Kubernetes (self-guided). Duration: 15-30 min. One-time effort per cluster, can be re-used going forward. | Set up the reference architecture. Duration: 15-30 min. Creates a new cluster plus supporting cloud infrastructure, and connects it to the Platform Orchestrator. |
- A Cloud Account with permission to manage these kinds of cloud resources. It may be the same or a separate Cloud Account from the one you use to connect to your cluster:
  - GCP: Cloud Storage and Spanner
- A Kubernetes cluster set up for executing the example Terraform code using the Terraform Runner Driver . It can be the same or a different cluster than the one used for Workload deployments. See the Driver page for setup details.
Installation
Clone the repository:
git clone https://github.com/humanitec-tutorials/k8s-workload-identity.git
Login to the Platform Orchestrator:
humctl login
Follow the instructions for your cloud provider:
- Navigate into the gcp/common directory:
cd gcp/common
- Edit the config Resource Definitions at ./resource-definitions/app-config.yaml, ./resource-definitions/gke-config.yaml, and ./resource-definitions/runner-config.yaml. Replace all values marked YOURVALUE to match your setup
- Create the demo Application and add common Resource Definitions to your Organization:
humctl apply -f ./app.yaml
humctl apply -f ./resource-definitions
- Create the matching criteria app_id: workload-identity-test-gcp on the existing Resource Definition of your target GKE cluster for Workload deployments so that the upcoming Deployment will use that cluster
- Create the same matching criteria on the Humanitec Agent Resource Definition used to access that cluster
- Navigate to either the basic or advanced directory to execute the setup of your choice:
# To execute the "basic" setup
cd ../basic
# To execute the "advanced" setup
cd ../advanced
- Create the Resource Definitions for the setup:
humctl apply -f ./resource-definitions
- Deploy the set of demo workloads into the demo Application all at once:
humctl score deploy --deploy-config deploy-config.yaml \
--app workload-identity-test-gcp --env development
- Open the Humanitec Portal and navigate to the development Environment of the Application workload-identity-test-gcp. You should see a Deployment in progress.
- While the Deployment is ongoing, inspect the Score files used for it. Note how the resources sections request resources with varying id and class settings:
Basic setup
score-1.yaml (view on GitHub):
apiVersion: score.dev/v1b1
metadata:
name: hello-gcp-1
containers:
hello-world:
image: .
command: ["sleep","infinity"]
variables:
GCS_PRIVATE_BUCKET_1_NAME: ${resources.private-bucket-1.name}
GCS_BUCKET_1_NAME: ${resources.bucket-1.name}
GCS_BUCKET_2_NAME: ${resources.bucket-2.name}
SPANNER_1_INSTANCE: ${resources.spanner-1.instance}
resources:
private-bucket-1:
type: gcs
bucket-1:
type: gcs
id: bucket-1
bucket-2:
type: gcs
id: bucket-2
spanner-1:
type: spanner-instance
id: spanner-1
score-2.yaml (view on GitHub):
apiVersion: score.dev/v1b1
metadata:
name: hello-gcp-2
containers:
hello-world:
image: .
command: ["sleep","infinity"]
variables:
GCP_BUCKET_2_NAME: ${resources.bucket-2.name}
resources:
bucket-2:
type: gcs
id: bucket-2
score-3.yaml (view on GitHub):
apiVersion: score.dev/v1b1
metadata:
name: hello-gcp-3
containers:
hello-world:
image: .
command: ["sleep","infinity"]
variables:
GCP_BUCKET_2_NAME: ${resources.bucket-2.name}
SPANNER_1_INSTANCE: ${resources.spanner-1.instance}
resources:
bucket-2:
type: gcs
id: bucket-2
spanner-1:
type: spanner-instance
id: spanner-1
Advanced setup
score-1.yaml (view on GitHub):
apiVersion: score.dev/v1b1
metadata:
name: hello-gcp-1
containers:
hello-world:
image: .
command: ["sleep","infinity"]
variables:
GCS_PRIVATE_BUCKET_1_NAME: ${resources.private-bucket-1.name}
GCS_BUCKET_1_NAME: ${resources.bucket-1.name}
GCS_BUCKET_2_NAME: ${resources.bucket-2.name}
SPANNER_1_INSTANCE: ${resources.spanner-1.instance}
resources:
private-bucket-1:
type: gcs
class: gcs-read-only
# no "id" for a private resource
bucket-1:
type: gcs
class: gcs-admin
id: bucket-1
bucket-2:
type: gcs
class: gcs-read-only
id: bucket-2
spanner-1:
type: spanner-instance
class: spanner-instance-reader
id: spanner-1
score-2.yaml (view on GitHub):
apiVersion: score.dev/v1b1
metadata:
name: hello-gcp-2
containers:
hello-world:
image: .
command: ["sleep","infinity"]
variables:
GCP_BUCKET_2_NAME: ${resources.bucket-2.name}
resources:
bucket-2:
type: gcs
class: gcs-read-only
id: bucket-2
score-3.yaml (view on GitHub):
apiVersion: score.dev/v1b1
metadata:
name: hello-gcp-3
containers:
hello-world:
image: .
command: ["sleep","infinity"]
variables:
GCP_BUCKET_2_NAME: ${resources.bucket-2.name}
SPANNER_1_INSTANCE: ${resources.spanner-1.instance}
resources:
bucket-2:
type: gcs
class: gcs-admin
id: bucket-2
spanner-1:
type: spanner-instance
class: spanner-instance-admin
id: spanner-1
- Once the deployment has finished, the Portal should display the Resource Graph showing all workloads, cloud resources, and supporting elements. Navigate through the Graph to see the connections
- Open the Storage Browser and the Spanner instances in the Google Cloud console. Observe the instances created and check their permissions. You should see the appropriate permissions for each service account principal on each resource
Testing resource access
You can test for real whether the workloads deployed through the example have the proper access level. The example uses SDK images maintained by the cloud providers for the workloads (check the deploy-config.yaml file for the specific image). You can open a shell in the running containers and run CLI commands against the cloud resources. Doing so requires kubectl access to the target cluster.
Obtain the Kubernetes namespace name for the example deployment (fill in the <placeholders>
):
humctl get active-resource \
/orgs/<my-org>/apps/workload-identity-test-<cloud>/envs/development/resources \
-oyaml \
| yq -r '.[] | select (.metadata.type == "k8s-namespace") | .status.resource.namespace'
Locate the workload Pods running in that namespace. Choose one, open a shell using kubectl exec, and issue test commands as shown below.
To locate the target resources, it is easiest to use the
Humanitec Portal
UI. Find the Application named workload-identity-test-<cloud>
and open its development
Environment. Navigate the Resource Graph and click on any resource to see its details.
# Open a shell into a workload Pod
kubectl exec -it -n <namespace> <pod> -- /bin/bash
# Test read access to a gcs bucket
gcloud storage ls gs://my-bucketname
# Test write access to a gcs bucket
echo foo > foo.txt
gcloud storage cp foo.txt gs://my-bucketname
# Test admin access by creating a database in a spanner instance
gcloud spanner databases create testdb --instance=my-instance-id
# Test read access by listing databases in a spanner instance
gcloud spanner databases list --instance=my-instance-id
Switching from basic to advanced setup
The sample implementations support switching from the basic to the advanced setup, even for existing deployments, with some restrictions.
The cloud service resources themselves are retained, preserving all data stored in them. However, existing workloads may end up with the combined set of the previous (basic) and new (advanced) access permissions.
To make the switch, execute these commands:
# Navigate to the **/advanced directory
cd ./gcp/advanced
# Install the advanced set of Resource Definitions
# This will override some existing Definitions from the basic setup
humctl apply -f resource-definitions/
# Deploy the workloads
humctl score deploy --deploy-config deploy-config.yaml --app workload-identity-test-gcp --env development
Cleaning up
Once finished observing the example setup, clean up:
- Delete the Application. This will, with a few minutes delay, also remove the cloud resources:
# Move to your cloud directory
cd <cloud>
# Delete the Application
humctl delete -f ./common/app.yaml
- Using the target cloud tooling, console, or portal, ensure that all cloud resources have been deleted
- You will then be able to remove the remaining Orchestrator objects:
humctl delete -f ./common/resource-definitions
# Remove basic or advanced objects depending on what you used
humctl delete -f ./basic/resource-definitions
humctl delete -f ./advanced/resource-definitions
Recap
You have seen how to:
- ✅ Set up the Platform Orchestrator to automatically leverage the cloud provider’s workload identity solution for Kubernetes-based workloads
- ✅ Work with Private and Shared Resources
- ✅ Optionally define and offer access classes on cloud resources to developers, and use those access classes via Score
Troubleshooting
Verify your workload identity setup
Some cloud providers publish instructions for testing the workload identity setup on your cluster. You may need to adjust the commands to fit your type of target cloud resource.
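As a minimal sketch for GKE, assuming the Pod already runs as the provisioned Kubernetes ServiceAccount, you can check that the GKE metadata server hands out credentials for the workload identity:

# Open a shell in a workload Pod (see "Testing resource access" above)
kubectl exec -it -n <namespace> <pod> -- /bin/bash
# Request a token from the metadata server; a JSON response containing an
# access_token indicates that workload identity is wired up for this Pod
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"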
Permissions not provisioned for a Shared Resource
If your deployment succeeded, but permissions were not provisioned for a workload on a particular Shared Resource in your cloud, verify your setup:
- Does the Score file request the proper Resource Type/access class combination in its resources? See the Score examples above for guidance
  - If not, correct the entry
  - If yes: Does the workload depend on the requested resource (basic) or on the Delegator Resource for the Shared Resource/access class combination (advanced) that is requested via Score?
    - If yes: verify the structure of the remaining Graph, comparing it with the example for the basic and advanced setup. Check proper matching criteria for all relevant Resource Definitions
    - If no: there might not be a reference to the resource (e.g. ${resources.my-resource.some-output}) in the Score file. Add one, e.g. by populating a container variable as sketched below
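A minimal sketch of such a fix, based on the sample Score files in this guide: reference an output of the resource in a container variable so that the Workload actually depends on it in the Graph.

containers:
  hello-world:
    variables:
      # Referencing an output creates the dependency in the Resource Graph
      GCS_BUCKET_2_NAME: ${resources.bucket-2.name}
resources:
  bucket-2:
    type: gcs
    id: bucket-2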
Re-deploy the Environment to apply your changes.