Manage workload identity

Overview

This guide describes a step-by-step blueprint showing how to manage workload identity for Kubernetes-based workloads, followed by a complete executable example.

Once set up by the platform team, developers benefit from fully automated, transparent provisioning of cloud permissions for the resources they request via Score, without having to know how such permissions are managed in your cloud.

Developers will be able to choose the kind of permission their workload requires on a particular resource, e.g. “admin” or “read-only”. The solution supports access to the same resource by one or many workloads using any access level.
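In the advanced setup described later, for example, a developer selects the access level by adding a Resource class to the resource request in their Score file:

resources:
  my-resource:
    type: some-resource-type
    class: type-read-only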

The guide leverages the workload identity solutions of the major cloud providers. While the details vary greatly between providers, they all work by associating a set of permissions on cloud resources with the Kubernetes service account a workload runs as:

graph LR
  service_account_1 o-.-o permissions_1[Permissions] o-.-o cloud_resource_a
  subgraph k8s[K8s cluster]
    direction LR
    service_account_1[ServiceAccount 1] o--o workload_1[Workload 1]
    service_account_2[ServiceAccount 2] o--o workload_2[Workload 2]
  end
  permissions_1 o-.-o cloud_resource_b
  service_account_2 o-.-o permissions_2[Permissions] o-.-o cloud_resource_b

  workload_1 -->|read-only| cloud_resource_a[Cloud resource A]
  workload_1 -->|admin| cloud_resource_b
  workload_2 -->|read-only| cloud_resource_b[Cloud resource B]

The guide on Modeling identities in the Resource Graph elaborates on the different patterns in greater detail.

Supported services

Currently supported providers and clusters are:

We are currently preparing to expand coverage to these providers:

Provision a Kubernetes service account

The cloud providers’ workload identity solutions require the use of a Kubernetes service account for your workload. A cloud identity is then associated with that service account to assign permissions to cloud resources.

The first step is therefore to extend the Resource Graph so that every workload depends on a dedicated service account carrying all the annotations, labels, or other attributes required by the workload identity mechanism of the cloud provider.

A recommended practice is to source any values the Resources require from a separate config Resource.
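For example, the Resource Definitions shown below read such values through Resource reference placeholders like these (excerpt from the GKE service account sample further down):

gke_project_id: ${resources['config.default#gke-config'].outputs.gke_project_id}
gke_project_number: ${resources['config.default#gke-config'].outputs.gke_project_number}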

---
title: Creating a service account and an associated IAM role, sourcing values from a config
---
graph LR
  workload-1("workload-1<br/>type: workload<br/>class: default<br/>id: modules.hello-aws-1") -->|references| k8s_service_account_1("K8S ServiceAccount 1<br/>type: k8s-service-account<br/>class: default<br/>id: modules.hello-aws-1")
  aws_role_1("IAM Role for workload-1<br/>type: aws-role<br/>class: default<br/>id: modules.hello-aws-1") -->|co-provisioned by /<br/>depends on| k8s_service_account_1
  aws_role_1 -->|references| app_config("App config<br/>type: config<br/>class: default<br/>id: app-config")
  workload-1 -->|match dependents<br/>from aws-role co-provisioning| aws_role_1

  class k8s_service_account_1,aws_role_1,app_config highlight

---
title: Creating a service account and sourcing values from a config
---
graph LR
  workload-1("workload-1<br/>type: workload<br/>class: default<br/>id: modules.hello-gcp-1") -->|references| k8s_service_account_1("K8S ServiceAccount 1<br/>type: k8s-service-account<br/>class: default<br/>id: modules.hello-gcp-1") -->|references| cluster_config_gke("GKE config<br/>type: config<br/>class: default<br/>id: gke-config")

  class k8s_service_account_1,cluster_config_gke highlight

Follow these steps to achieve this Graph setup:

  1. Have the Platform Orchestrator automatically provision a Kubernetes ServiceAccount for every workload. Provide a Resource Definition of type workload that requests a Resource of type k8s-service-account and adds its name to the workload specification.
Sample workload Resource Definition


custom-workload.yaml (view on GitHub):

# This Resource Definition uses the Template Driver to assign a custom Kubernetes service account to a workload
# to facilitate workload identity
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-custom-workload
entity:
  name: wi-aws-custom-workload
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        # Request a resource of type 'k8s-service-account' in the Resource Graph
        # and add its name to the workload spec
        outputs: |
          update:
            - op: add
              path: /spec/serviceAccountName
              value: ${resources.k8s-service-account.outputs.name}
  # Adjust matching criteria as required
  criteria:
  - app_id: workload-identity-test-aws


custom-workload.yaml (view on GitHub):

# This Resource Definition uses the Template Driver to assign a custom Kubernetes service account to a workload
# to facilitate workload identity
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-gcp-custom-workload
entity:
  name: wi-gcp-custom-workload
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        # Request a resource of type 'k8s-service-account' in the Resource Graph
        # and add its name to the workload spec
        outputs: |
          update:
            - op: add
              path: /spec/serviceAccountName
              value: ${resources.k8s-service-account.outputs.name}
  # Adjust matching criteria as required
  criteria:
  - app_id: workload-identity-test-gcp
  2. Provide a Resource Definition to provision the k8s-service-account requested by the workload.

    The service account represents the identity of the Workload on the cluster. It must provide the output(s) that identify this identity to the cloud provider; which outputs are required depends on the provider. Refer to the outputs of the sample Resource Definitions to see the implementation details.
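For GKE, for example, the outputs template of the sample below returns both the service account name and the workload identity principal (excerpt):

outputs: |
  name: {{ .init.name }}
  principal: {{ .init.principal }}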

Sample service account Resource Definition


custom-service-account.yaml (view on GitHub):

# This Resource Definition uses the Template Driver to create a Kubernetes service account.
# Unlike IAM roles for service accounts, EKS Pod Identity doesn’t use an annotation on the service account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-custom-service-account
entity:
  driver_type: humanitec/template
  name: wi-aws-custom-service-account
  type: k8s-service-account
  driver_inputs:
    values:
      res_id: ${context.res.id}
      templates:
        init: |
          res_id: {{ .driver.values.res_id }}
          {{- $res_name := index (splitList "." .driver.values.res_id) 1 }}
          name: {{ $res_name | toRawJson }}
        manifests: |
          service-account.yaml:
            location: namespace
            data:
              apiVersion: v1
              kind: ServiceAccount
              metadata:
                name: {{ .init.name }}
                annotations:
                  hum-res: {{ .init.res_id }}
        outputs: |
          name: {{ .init.name }}
  # Co-provision an aws-role to represent this service account in IAM through workload identity
  # By specifying neither class nor ID, the co-provisioned aws-role will have the same class and ID as the present resource
  provision:
    aws-role:
      is_dependent: true
      match_dependents: true
  # Adjust matching criteria as required
  criteria:
  - app_id: workload-identity-test-aws


custom-service-account.yaml (view on GitHub):

# This Resource Definition uses the Template Driver to create a Kubernetes service account.
# It pulls central configuration parameters from `config` Resources.
# It returns the "principal" string for the ServiceAccount to configure GKE workload identity.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-gcp-custom-service-account
entity:
  driver_type: humanitec/template
  name: wi-gcp-custom-service-account
  type: k8s-service-account
  driver_inputs:
    values:
      res_id: ${context.res.id}
      gke_project_id: ${resources['config.default#gke-config'].outputs.gke_project_id}
      gke_project_number: ${resources['config.default#gke-config'].outputs.gke_project_number}
      # Using the id `k8s-namespace`, this reference targets the existing, implicitly created namespace resource
      k8s_namespace: "${resources['k8s-namespace.default#k8s-namespace'].outputs.namespace}"
      templates:
        init: |
          res_id: {{ .driver.values.res_id }}
          {{- $res_name := index (splitList "." .driver.values.res_id) 1 }}
          name: {{ $res_name | toRawJson }}
          principal: principal://iam.googleapis.com/projects/{{ .driver.values.gke_project_number }}/locations/global/workloadIdentityPools/{{ .driver.values.gke_project_id }}.svc.id.goog/subject/ns/{{ .driver.values.k8s_namespace }}/sa/{{ $res_name }}
        manifests: |
          service-account.yaml:
            location: namespace
            data:
              apiVersion: v1
              kind: ServiceAccount
              metadata:
                name: {{ .init.name }}
                annotations:
                  hum-res: {{ .init.res_id }}
        outputs: |
          name: {{ .init.name }}
          principal: {{ .init.principal }}
  # Adjust matching criteria as required
  criteria:
  - app_id: workload-identity-test-gcp
  3. (AWS only) The service account automatically co-provisions an AWS role to represent the workload identity in IAM (see the provision: statement in the service account Resource Definition). The co-provisioning also makes the workload resource depend on the AWS role, which supports a later lookup in the Graph.

    Provide a Resource Definition of type aws-role to provision that role.
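The co-provisioning is declared in the k8s-service-account Resource Definition shown in the previous step (excerpt):

provision:
  aws-role:
    is_dependent: true
    match_dependents: true

Setting match_dependents: true is what makes the workload depend on the co-provisioned aws-role as well, enabling the Resource selector lookups used below.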

Sample AWS role Resource Definition


aws-role.yaml (view on GitHub):

# This Resource Definition of type "aws-role" provisions an IAM Role representing a workload identity
# and associates it with the Kubernetes service account of that workload
# It expects to depend on the k8s-service-account resource
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-role
entity:
  driver_type: humanitec/terraform-runner
  name: wi-aws-role
  type: aws-role
  driver_account: ${resources['config.default#app-config'].account}
  driver_inputs:
    values:
      append_logs_to_error: true
      variables:
        app_id: "${context.app.id}"
        env_id: "${context.env.id}"
        res_id: "${context.res.id}"
        region: "${resources['config.default#app-config'].outputs.region}"
        # The Globally Unique Resource ID (GUResID) is useful for ensuring uniqueness
        guresid: ${context.res.guresid}
        # Using the id `k8s-cluster`, this reference targets the existing, implicitly created k8s-cluster resource
        k8s_cluster: "${resources['k8s-cluster.default#k8s-cluster'].outputs.name}"
        # Using the id `k8s-namespace`, this reference targets the existing, implicitly created namespace resource
        k8s_namespace: "${resources['k8s-namespace.default#k8s-namespace'].outputs.namespace}"
        # Obtain the name of the service account with the same class and id as the current resource
        service_account: "${resources.k8s-service-account.outputs.name}"
        # This Resource selector traverses the Graph to get the "arn" output
        # from all aws-policy resources that need to be attached to the role
        # 1. Start at this aws-role (same class and ID)
        # 2. Find all the workloads that depend on that aws-role
        # 3. Find all the aws-policies that these workloads depend on (direct and indirect)
        # 4. For each aws-policy, read its "arn" output
        # Notes:
        # - The workload depends on the present aws-role because the k8s-service-account co-provisions it with match_dependents: true
        # - The selector may return more than one element because the workload may depend on more than one aws-policy
        iam_policy_arns: ${resources["aws-role<workload>aws-policy"].outputs.arn}
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: AccessKeyId
          AWS_SECRET_ACCESS_KEY: SecretAccessKey
          AWS_SESSION_TOKEN: SessionToken
      files:
        providers.tf: | 
          terraform {
            required_providers {
              aws = {
                source  = "hashicorp/aws"
              }
            }
          }
        variables.tf: |
          variable "app_id" {}
          variable "env_id" {}
          variable "res_id" {}
          variable "region" {}
          variable "guresid" {}
          variable "k8s_cluster" {}
          variable "k8s_namespace" {}
          variable "service_account" {}
          variable "iam_policy_arns" {
            type = set(string)
          }
        main.tf: |
          # Provider credentials are being injected through environment variables
          provider "aws" {
            region = var.region
            default_tags {
              tags = {
                humanitec                                     = "true"
                hum-app                                       = var.app_id
                hum-env                                       = var.env_id
                hum-res                                       = replace(var.res_id, ".", "-")
                "alpha.eksctl.io/cluster-name"                = var.k8s_cluster
                "eksctl.cluster.k8s.io/v1alpha1/cluster-name" = var.k8s_cluster
                managed-by                                    = "terraform"
              }
            }
          }
          locals {
            # res_id will have the form: modules.<workload_name>
            workload_name = split(".", var.res_id)[1]
          }
          data "aws_iam_policy_document" "assume_role" {
            statement {
              sid    = "AllowEksAuthToAssumeRoleForPodIdentity"
              effect = "Allow"
              principals {
                type        = "Service"
                identifiers = ["pods.eks.amazonaws.com"]
              }
              actions = [
                "sts:AssumeRole",
                "sts:TagSession"
              ]
            }
          }
          # Create the role for the service account
          resource "aws_iam_role" "this" {
            name               = var.guresid
            description        = "IAM Role for workload $\{local.workload_name} in env $\{var.env_id} of app $\{var.app_id}"
            assume_role_policy = data.aws_iam_policy_document.assume_role.json
          }
          # Create the EKS Pod Identity Association
          resource "aws_eks_pod_identity_association" "this" {
            cluster_name    = var.k8s_cluster
            namespace       = var.k8s_namespace
            service_account = var.service_account
            role_arn        = aws_iam_role.this.arn
          }
          # Attach all required IAM policies
          resource "aws_iam_role_policy_attachment" "workload_role_attached_policies" {
            for_each   = var.iam_policy_arns
            role       = aws_iam_role.this.name
            policy_arn = each.value
          }
        outputs.tf: |
          output "arn" {
            value = aws_iam_role.this.arn
          }
  criteria:
  - app_id: workload-identity-test-aws
  4. We recommend you provide configuration values via centralized config Resources to reduce redundancy. The Resource Definitions shown in the previous steps contain references to such Resources via ${resources['config.default...} placeholders.

    By naming a Resource ID in the reference (the part after the # sign), the referenced Resource becomes a Shared Resource that exists only once in the Graph. All other Resources using the same reference will use that same Resource.
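For example, the aws-role Resource Definition above reads the AWS region from the shared app-config Resource like this (excerpt):

region: "${resources['config.default#app-config'].outputs.region}"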

Sample config Resource Definition


app-config.yaml (view on GitHub):

# This Resource Definition of type "config" uses the Template Driver to provide configuration values
# to other Resource Definitions for accessing the AWS Project
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-app-config
entity:
  name: wi-aws-app-config
  type: config
  driver_type: humanitec/template
  # The identity used in this Cloud Account needs permission to manage
  # the kinds of cloud resources used in the example
  driver_account: YOURACCOUNT
  driver_inputs:
    values:
      templates:
        outputs: |
          region: YOURVALUE
  criteria:
    - class: default
      res_id: app-config
      env_type: development
      app_id: workload-identity-test-aws


gke-config.yaml (view on GitHub):

# This Resource Definition of type "config" uses the Template Driver to provide configuration values
# to other Resource Definitions for accessing the GKE cluster
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-gcp-gke-config
entity:
  name: wi-gcp-gke-config
  type: config
  driver_type: humanitec/template
  # The identity configured in this Cloud Account needs permission
  # to deploy Workloads to your GKE cluster
  driver_account: YOURACCOUNT
  driver_inputs:
    values:
      templates:
        outputs: |
          gke_project_id: YOURVALUE
          # Keep the quotes ("") around the value of gke_project_number so it is treated as a string
          gke_project_number: "YOURVALUE"
  criteria:
    - class: default
      res_id: gke-config
      env_type: development
      app_id: workload-identity-test-gcp

Now extend the Graph to provision the actual cloud service.

Provision cloud service and permissions

Now enable the Platform Orchestrator to provision a cloud service of the desired type which your workload will eventually access using workload identity.

We describe a basic and an advanced setup for you to choose from. Please read through both setups to find the one that suits your needs. The example section further down features sample implementations for both setups.

This guide uses these cloud services as examples:

Basic setup

Overview

The basic setup for accessing a cloud service using workload identity has these features:

  • All workloads requesting a resource via Score receive the same level of access, e.g. “reader” or “admin”, as defined by the platform team
  • Developers therefore do not need to, and cannot, request a particular level of access on a resource they request via Score
  • Resources may be either private to a single Workload or shared between Workloads
  • The resulting Resource Graph is simpler than the advanced setup

If you require flexible access levels per workload for developers to choose from, the advanced setup is for you. Please read through the basic setup in any case, as both setups share the same foundation.

Add cloud service and permissions Resources

Add these Resources to the Graph to establish the basic setup:

  1. The cloud service Resource requested via Score
  2. A permissions/policy Resource provisioning the required cloud permissions for all workloads using that cloud service instance
  3. (recommended) A config to centrally maintain parameters for the cloud service

Target Resource Graph (basic)

Adding those elements extends the Resource Graph like this:

---
title: Adding a sqs Resource requested via Score, and an IAM policy
---
graph LR
  workload-1("workload-1<br/>type: workload<br/>class: default<br/>id: modules.hello-aws-1") --> |depends on| sqs-1("sqs-1<br/>type: sqs<br/>class: default<br/>id: modules.hello-aws-1.externals.sqs-1")

  iam_role_1("IAM Role for workload-1<br/>type: aws-role<br/>class: default<br/>id: modules.hello-aws-1") -->|co-provisioned by /<br/>depends on| k8s_service_account_1
  workload-1 -->|references| k8s_service_account_1("K8S ServiceAccount 1<br/>type: k8s-service-account<br/>class: default<br/>id: modules.hello-aws-1")
  policy-sqs-1("IAM policy for sqs-1<br/>type: aws-policy<br/>class: sqs-default<br/>id: modules.hello-aws-1.externals.sqs-1") -->|co-provisioned by /<br/>depends on| sqs-1
  workload-1 -->|match dependents<br/>from aws-role co-provisioning| iam_role_1
  iam_role_1 -->|resource selector| policy-sqs-1
  workload-1 -->|match dependents<br/>from aws-role co-provisioning| policy-sqs-1

  class sqs-1,policy-sqs-1 highlight

The diagram omits the config Resource for brevity.

Corresponding Score file

This Score file will produce the Graph:

apiVersion: score.dev/v1b1
metadata:
  name: hello-aws-1
containers:
  hello-world:
    image: .
    variables:
      SQS_1_NAME: ${resources.sqs-1.name}
resources:
  sqs-1:
    type: sqs

---
title: Adding a s3 Resource requested via Score, and its bucket policy
---
graph LR
  policy-bucket-1("Bucket policy for bucket-1<br/>type: aws-policy<br/>class: s3-bucket-policy<br/>id: modules.hello-aws-1.externals.bucket-1") -->|co-provisioned by /<br/>depends on| bucket-1
  workload-1("workload-1<br/>type: workload<br/>class: default<br/>id: modules.hello-aws-1") --> |depends on| bucket-1("bucket-1<br/>type: s3<br/>class: default<br/>id: modules.hello-aws-1.externals.bucket-1")
  bucket-1 -->|references| app_config("App config<br/>type: config<br/>class: default<br/>id: app-config")
  iam_role_1("IAM Role for workload-1<br/>type: aws-role<br/>class: default<br/>id: modules.hello-aws-1") -->|co-provisioned by /<br/>depends on| k8s_service_account_1
  workload-1 -->|references| k8s_service_account_1("K8S ServiceAccount 1<br/>type: k8s-service-account<br/>class: default<br/>id: modules.hello-aws-1")
  workload-1 -->|match dependents<br/>from aws-role co-provisioning| iam_role_1
  iam_role_1 -->|references| app_config

  class bucket-1,policy-bucket-1 highlight
Corresponding Score file

This Score file will produce the Graph:

apiVersion: score.dev/v1b1
metadata:
  name: hello-aws-1
containers:
  hello-world:
    image: .
    variables:
      BUCKET_1_NAME: ${resources.bucket-1.name}
resources:
  bucket-1:
    type: s3

---
title: Adding a gcs bucket requested via Score, bucket policy, and app config
---
graph LR
  workload-1("workload-1<br/>type: workload<br/>class: default<br/>id: modules.hello-gcp-1") --->|depends on| bucket-1("bucket-1<br/>type: gcs<br/>class: default<br/>id: modules.hello-gcp-1.externals.bucket-1")
  policy-bucket-1("Policy for bucket-1<br/>type: gcp-iam-policy-binding<br/>class: gcs-default<br/>id: modules.hello-gcp-1.externals.bucket-1") -.->|resource selector| k8s_service_account_1
  policy-bucket-1 -->|co-provisioned by /<br/>depends on| bucket-1
  
  app_config_gke("App config<br/>type: config<br/>class: default<br/>id: app-config")
  bucket-1 -->|references| app_config_gke
  workload-1 -->|references| k8s_service_account_1("K8S ServiceAccount 1<br/>type: k8s-service-account<br/>class: default<br/>id: modules.hello-gcp-1") -->|references| cluster_config_gke("GKE config<br/>type: config<br/>class: default<br/>id: gke-config")
  policy-bucket-1 -->|references| app_config_gke

  class bucket-1,policy-bucket-1,app_config_gke highlight
Corresponding Score file

This Score file will produce the Graph:

apiVersion: score.dev/v1b1
metadata:
  name: hello-gcp-1
containers:
  hello-world:
    image: .
    variables:
      GCS_BUCKET_1_NAME: ${resources.bucket-1.name}
resources:
  bucket-1:
    type: gcs

These mechanisms create the Graph:

  1. The Score file requests a resource of the type of your cloud service, adding it to the Graph as a dependent Resource of the workload.
Sample cloud service Resource Definition


sqs.yaml (view on GitHub):

# This Resource Definition of type `sqs` uses the Terraform Runner Driver
# to provision an Amazon SQS service
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-sqs
entity:
  driver_type: humanitec/terraform-runner
  name: wi-aws-sqs
  type: sqs
  # The identity used in this Cloud Account is the one executing the Terraform code
  # It needs permissions to manage SQS services
  driver_account: ${resources['config.default#app-config'].account}
  driver_inputs:
    values:
      append_logs_to_error: true
      variables:
        app_id: "${context.app.id}"
        env_id: "${context.env.id}"
        res_id: "${context.res.id}"
        region: ${resources['config.default#app-config'].outputs.region}
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: AccessKeyId
          AWS_SECRET_ACCESS_KEY: SecretAccessKey
          AWS_SESSION_TOKEN: SessionToken
      files:
        providers.tf: |
          terraform {
            required_providers {
              aws = {
                source  = "hashicorp/aws"
              }
              random = {
                source  = "hashicorp/random"
              }
            }
          }
        variables.tf: |
          locals {
            workload_id = split(".", var.res_id)[1]
          }
          variable "app_id" {}
          variable "env_id" {}
          variable "res_id" {}
          variable "region" {}
        main.tf: |
          # Provider credentials are being injected through environment variables
          provider "aws" {
            region = var.region
            default_tags {
              tags = {
                humanitec  = "true"
                hum-app    = var.app_id
                hum-env    = var.env_id
                hum-res    = replace(var.res_id, ".", "-")
                managed-by = "terraform"
              }
            }
          }
          resource "random_string" "sqs_name" {
            length  = 10
            special = false
            lower   = true
            upper   = false
          }
          resource "aws_sqs_queue" "queue" {
            name = random_string.sqs_name.result
          }
        outputs.tf: |
          output "arn" {
            value = aws_sqs_queue.queue.arn
          }
          output "region" {
            value = var.region
          }
          output "url" {
            value = aws_sqs_queue.queue.url
          }
  provision:
    # Co-provision an IAM Policy resource of class "sqs-default"
    # By not specifying an ID, the co-provisioned aws-policy will have the same ID as the present resource
    aws-policy.sqs-default:
      match_dependents: true
  # Adjust matching criteria as required
  criteria:
  - app_id: workload-identity-test-aws


s3.yaml (view on GitHub):

# This Resource Definition of type `s3` uses the Terraform Runner Driver to provision an Amazon S3 bucket
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-s3
entity:
  driver_type: humanitec/terraform-runner
  name: wi-aws-s3
  type: s3
  # The identity used in this Cloud Account is the one executing the Terraform code
  # It needs permissions to manage S3 buckets
  driver_account: ${resources['config.default#app-config'].account}
  driver_inputs:
    values:
      append_logs_to_error: true
      variables:
        app_id: "${context.app.id}"
        env_id: "${context.env.id}"
        res_id: "${context.res.id}"
        region: ${resources['config.default#app-config'].outputs.region}
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: AccessKeyId
          AWS_SECRET_ACCESS_KEY: SecretAccessKey
          AWS_SESSION_TOKEN: SessionToken
      files:
        providers.tf: |
          terraform {
            required_providers {
              aws = {
                source  = "hashicorp/aws"
              }
              random = {
                source  = "hashicorp/random"
              }
            }
          }
        variables.tf: |
          locals {
            workload_id = split(".", var.res_id)[1]
          }
          variable "app_id" {}
          variable "env_id" {}
          variable "res_id" {}
          variable "region" {}
        main.tf: |
          # Provider credentials are being injected through environment variables
          provider "aws" {
            region = var.region
            default_tags {
              tags = {
                humanitec  = "true"
                hum-app    = var.app_id
                hum-env    = var.env_id
                hum-res    = replace(var.res_id, ".", "-")
                managed-by = "terraform"
              }
            }
          }
          resource "random_string" "bucket_name" {
            length  = 10
            special = false
            lower   = true
            upper   = false
          }
          resource "aws_s3_bucket" "bucket" {
            bucket        = random_string.bucket_name.result
            force_destroy = true
          }
        outputs.tf: |
          output "arn" {
            value = aws_s3_bucket.bucket.arn
          }
          output "bucket" {
            value = aws_s3_bucket.bucket.bucket
          }
          output "region" {
            value = aws_s3_bucket.bucket.region
          }
  provision:
    # Co-provision an IAM Policy resource of class "s3-bucket-policy"
    # By not specifying an ID, the co-provisioned aws-policy will have the same ID as the present resource
    aws-policy.s3-bucket-policy:
      is_dependent: true
  # Adjust matching criteria as required
  criteria:
  - app_id: workload-identity-test-aws


gcs.yaml (view on GitHub):

# This Resource Definition of type `gcs` uses the Terraform Runner Driver to provision a GCS bucket
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-gcp-gcs
entity:
  driver_type: humanitec/terraform-runner
  name: wi-gcp-gcs
  type: gcs
  # The identity used in this Cloud Account is the one executing the Terraform code
  # It needs permissions to manage Cloud Storage buckets
  driver_account: ${resources['config.default#app-config'].account}
  driver_inputs:
    values:
      append_logs_to_error: true
      variables:
        app_id: "${context.app.id}"
        env_id: "${context.env.id}"
        res_id: "${context.res.id}"
        project_id: ${resources['config.default#app-config'].outputs.project_id}
        region: ${resources['config.default#app-config'].outputs.region}
      credentials_config:
        variables:
          access_token: access_token
      files:
        providers.tf: |
          terraform {
            required_providers {
              google = {
                source  = "hashicorp/google"
              }
              random = {
                source  = "hashicorp/random"
              }
            }
          }
        variables.tf: |
          locals {
            workload_id = split(".", var.res_id)[1]
          }
          variable "access_token" {
            type      = string
            sensitive = true
          }
          variable "app_id" {}
          variable "env_id" {}
          variable "res_id" {}
          variable "project_id" {}
          variable "region" {}
        main.tf: |
          provider "google" {
            access_token = var.access_token
            default_labels = {
              "humanitec"  = "true"
              "hum-app"    = var.app_id
              "hum-env"    = var.env_id
              "hum-res"    = replace(var.res_id, ".", "-")
              "managed-by" = "terraform"
            }
          }
          resource "random_string" "bucket_name" {
            length  = 10
            special = false
            lower   = true
            upper   = false
          }
          resource "google_storage_bucket" "bucket" {
            project                     = var.project_id
            name                        = random_string.bucket_name.result
            location                    = var.region
            uniform_bucket_level_access = true
            force_destroy               = true
          }
        outputs.tf: |
          output "name" {
            value = google_storage_bucket.bucket.name
          }
  provision:
    # Co-provision an IAM Policy resource of class "gcs-default" for this default gcs
    # By not specifying an ID, the co-provisioned gcp-iam-policy-binding will have the same ID as the present resource
    gcp-iam-policy-binding.gcs-default:
      is_dependent: true
  # Adjust matching criteria as required
  criteria:
  - app_id: workload-identity-test-gcp
  2. That cloud service Resource co-provisions a Resource to create the cloud permissions/policy. Look for the provision: statement in the sample Resource Definition above.

    The Resource Type to use for this Definition depends on the cloud provider:

    The permissions/policy Resource performs a lookup in the Graph to find the identities of all workloads depending on the same resource, using a Resource selector. Note that this may be more than one workload in the case of a Shared Resource (the example below shows such a setup).

    It then provisions the required permission or policy resources, using the level of access that is defined in the Resource Definition IaC code.
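For example, the S3 bucket policy sample below collects the IAM role ARNs of all workloads depending on the same bucket with this Resource selector (excerpt):

role_arns: ${resources['s3.default<workload>aws-role'].outputs.arn}

Because a Resource selector always returns an array, the corresponding Terraform variable is declared as set(string).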

Sample permissions/policy Resource Definition


sqs-iam-policy.yaml (view on GitHub):

# This Resource Definition of type "aws-policy" uses the Terraform Runner Driver to provision
# an IAM policy for the access class it matched to
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-sqs-iam-policy
entity:
  driver_type: humanitec/terraform-runner
  name: wi-aws-sqs-iam-policy
  type: aws-policy
  # The identity used in this Cloud Account is the one executing the Terraform code
  # It needs permissions to manage IAM policies
  driver_account: ${resources['config.default#app-config'].account}
  driver_inputs:
    values:
      append_logs_to_error: true
      variables:
        app_id: "${context.app.id}"
        env_id: "${context.env.id}"
        res_id: "${context.res.id}"
        res_class: "${context.res.class}"
        region: ${resources['config.default#app-config'].outputs.region}
        # The Globally Unique Resource ID (GUResID) is useful for ensuring uniqueness
        guresid: ${context.res.guresid}
        # Get the sqs resource with the class "default" and the same ID as the current resource
        sqs_arn: ${resources['sqs.default'].outputs.arn}
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: AccessKeyId
          AWS_SECRET_ACCESS_KEY: SecretAccessKey
          AWS_SESSION_TOKEN: SessionToken
      files:
        providers.tf: |
          terraform {
            required_providers {
              aws = {
                source  = "hashicorp/aws"
              }
            }
          }
        variables.tf: |
          variable "app_id" {}
          variable "env_id" {}
          variable "res_id" {}
          variable "res_class" {}
          variable "region" {}
          variable "guresid" {}
          variable "sqs_arn" {}
        main.tf: |
          # Provider credentials are being injected through environment variables
          provider "aws" {
            region = var.region
            default_tags {
              tags = {
                humanitec  = "true"
                hum-app    = var.app_id
                hum-env    = var.env_id
                hum-res    = replace(var.res_id, ".", "-")
                managed-by = "terraform"
              }
            }
          }
          # Prepare the policy document. Adjust the policy at your discretion to fit your needs
          data "aws_iam_policy_document" "sqs_policy" {
            statement {
              # Statement IDs must be alphanumeric
              sid = replace(var.res_class, "-", "")
              effect = "Allow"
              actions = [
                "sqs:*"
              ]
              resources = [
                var.sqs_arn
              ]
            }
          }
          # Create the IAM policy
          resource "aws_iam_policy" "policy" {
            name        = var.guresid
            path        = "/"
            description = "IAM Policy $\{var.res_class} for SQS $\{var.sqs_arn}"
            policy      = data.aws_iam_policy_document.sqs_policy.json
          }
        outputs.tf: |
          output "arn" {
            value = aws_iam_policy.policy.arn
          }
  # Match the class that is used in the "provision" statement in the sqs Resource Definition
  criteria:
  - class: sqs-default


s3-bucket-policy.yaml (view on GitHub):

# This Resource Definition of type "aws-policy" uses the Terraform Runner Driver to provision
# a bucket policy for an S3 bucket it depends on
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-s3-bucket-policy
entity:
  driver_type: humanitec/terraform-runner
  name: wi-aws-s3-bucket-policy
  type: aws-policy
  driver_account: ${resources['config.default#app-config'].account}
  driver_inputs:
    values:
      append_logs_to_error: true
      variables:
        app_id: ${context.app.id}
        env_id: ${context.env.id}
        res_id: ${context.res.id}
        region: ${resources['config.default#app-config'].outputs.region}
        # Reference the bucket for which to provision the bucket policy
        s3_bucket_name: ${resources['s3.default'].outputs.bucket}
        s3_bucket_arn: ${resources['s3.default'].outputs.arn}
        # This Resource selector traverses the Graph to get the "arn" output
        # from all aws-role resources that require access
        # 1. Start with the s3 resource of class "default" and the same ID as the present resource
        #    This effectively matches the s3 that co-provisioned the present resource
        # 2. Find all the workloads that depend on that s3 (direct and indirect)
        # 3. Find all the aws-roles that these workloads depend on
        # 4. For each aws-role, read its "arn" output
        # Notes:
        # - The workload depends on the aws-role because the k8s-service-account co-provisions the aws-role with match_dependents: true
        # - The selector may return more than one element because more than one workload may depend on the s3
        #   if the s3 is a Shared Resource
        role_arns: ${resources['s3.default<workload>aws-role'].outputs.arn}
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: AccessKeyId
          AWS_SECRET_ACCESS_KEY: SecretAccessKey
          AWS_SESSION_TOKEN: SessionToken
      files:
        providers.tf: | 
          terraform {
            required_providers {
              aws = {
                source  = "hashicorp/aws"
              }
            }
          }
        variables.tf: |
          variable "app_id" {}
          variable "env_id" {}
          variable "res_id" {}
          variable "region" {}
          variable "s3_bucket_arn" {}
          variable "s3_bucket_name" {}
          variable "role_arns" {
            type = set(string)
          }
        main.tf: |
          # Provider credentials are being injected through environment variables
          provider "aws" {
            region = var.region
            default_tags {
              tags = {
                humanitec  = "true"
                hum-app    = var.app_id
                hum-env    = var.env_id
                hum-res    = replace(var.res_id, ".", "-")
                managed-by = "terraform"
              }
            }
          }
          # Prepare the policy document. Adjust the policy to your scenario
          data "aws_iam_policy_document" "bucket_access" {
            statement {
              sid = "Admin"
              dynamic "principals" {
                for_each = var.role_arns
                content {
                  type        = "AWS"
                  identifiers = ["$\{principals.value}"]
                }
              }
              actions = [
                "s3:*"
              ]
              resources = [
                var.s3_bucket_arn,
                "$\{var.s3_bucket_arn}/*",
              ]
            }
          }
          # Create the bucket policy
          resource "aws_s3_bucket_policy" "bucket_policy" {
            bucket = var.s3_bucket_name
            policy = data.aws_iam_policy_document.bucket_access.json
          }
        outputs.tf: |
          # This output is for debugging and illustration purposes only
          output "role_arns" {
            value = join(",", var.role_arns)
          }
  # Match the class that is used in the "provision" statement in the s3 Resource Definition
  criteria:
  - class: s3-bucket-policy


gcs-iam-member.yaml (view on GitHub):

# This Resource Definition uses the Terraform Runner Driver to create IAM members for gcs resources
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-gcp-gcs-iam-member
entity:
  driver_type: humanitec/terraform-runner
  name: wi-gcp-gcs-iam-member
  type: gcp-iam-policy-binding
  # The identity used in this Cloud Account is the one executing the Terraform code
  # It needs permissions to manage IAM members
  driver_account: ${resources['config.default#app-config'].account}
  driver_inputs:
    values:
      variables:
        res_class: ${context.res.class}
        # Obtain the bucket name from the bucket resource this resource depends on
        gcs_bucket_name: ${resources['gcs.default'].outputs.name}
        # This Resource selector traverses the Graph to get the "principal" output
        # from all k8s-service-account resources that require access
        # 1. Start with the gcs resource of class "default" and the same ID as the present resource
        #    This effectively matches the gcs that co-provisioned the present resource
        # 2. Find all the workloads that depend on that gcs
        # 3. Find all the k8s-service-accounts that these workloads depend on
        # 4. For each k8s-service-account, read its "principal" output
        # Notes:
        # - The present gcp-iam-policy-binding depends on the gcs because the gcs co-provisions it with is_dependent = true
        # - The selector may return more than one element because more than one workload may depend on the gcs
        #   if the gcs is a Shared Resource
        principals: ${resources['gcs.default<workload>k8s-service-account'].outputs.principal}
      credentials_config:
        variables:
          access_token: access_token
      files:
        providers.tf: |
          terraform {
            required_providers {
              google = {
                source  = "hashicorp/google"
              }
            }
          }
        variables.tf: |
          variable "res_class" {}
          variable "access_token" {
            type      = string
            sensitive = true
          }
          variable "gcs_bucket_name" {}
          # This variable is a set because a resource selector, which is used to retrieve it, always returns an array
          variable "principals" {
            type = set(string)
          }
        main.tf: |
          provider "google" {
            access_token = var.access_token
          }
          # Create one IAM member for each principal using a role of your choice
          resource "google_storage_bucket_iam_member" "iam_member" {
            for_each = var.principals
            bucket  = var.gcs_bucket_name
            role    = "roles/storage.objectAdmin"
            member  = each.key
          }
        outputs.tf: |
          # This output is for debugging and illustration purposes only
          output "principals" {
            value = join(",", var.principals)
          }   
  # Match the class that is used in the "provision" statement in the gcs Resource Definition
  criteria:
  - class: gcs-default
  3. Similar to the externalized cluster configuration, we recommend centrally providing app configuration values via a config Resource. The cloud service Resource Definitions can reference that Resource via ${resources['config.default#app-config']...} placeholders.
Sample app config Resource Definition


app-config.yaml (view on GitHub):

# This Resource Definition of type "config" uses the Template Driver to provide configuration values
# to other Resource Definitions for accessing the AWS Project
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-app-config
entity:
  name: wi-aws-app-config
  type: config
  driver_type: humanitec/template
  # The identity used in this Cloud Account needs permission to manage
  # the kinds of cloud resources used in the example
  driver_account: YOURACCOUNT
  driver_inputs:
    values:
      templates:
        outputs: |
          region: YOURVALUE
  criteria:
    - class: default
      res_id: app-config
      env_type: development
      app_id: workload-identity-test-aws


app-config.yaml (view on GitHub):

# This Resource Definition of type "config" uses the Template Driver to provide configuration values
# to other Resource Definitions for accessing the Google Cloud Project
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-gcp-app-config
entity:
  name: wi-gcp-app-config
  type: config
  driver_type: humanitec/template
  # The identity used in this Cloud Account needs permission to manage
  # the kinds of cloud resources used in the example
  driver_account: YOURACCOUNT
  driver_inputs:
    values:
      templates:
        outputs: |
          project_id: YOURVALUE
          region: YOURVALUE
  criteria:
    - class: default
      res_id: app-config
      env_type: development
      app_id: workload-identity-test-gcp

All workloads requesting a resource of the proper type via Score will now automatically receive permissions via workload identity according to the defined access level.

Advanced setup

Overview

The advanced setup extends the basic setup for greater flexibility:

  • Developers select a particular access class via Score, e.g. “reader” or “admin”, for each resource
  • For a Shared Resource, different Workloads may choose different access levels
  • An additional layer is added to the resulting Resource Graph, increasing its complexity compared to the basic setup

To provide this flexibility, the advanced setup introduces an intermediate Resource into the Graph, called a Delegator Resource. It sits in front of the concrete Resource that represents the actual cloud service instance.

Developers request an access class as a Resource Class via Score:

resources:
  my-resource:
    type: some-resource-type
    class: type-read-only

That class is used in the Matching Criteria of the Delegator Resource Definition. Whenever a Resource of a certain access class is requested via Score, that request provisions a Delegator of the requested class, which then performs these functions:

  1. Request, and therefore provision, the actual concrete cloud service Resource
  2. Pass through any outputs of the concrete Resource to the upstream elements in the Graph
  3. Co-provision a cloud permissions Resource for that particular access level, expressed through the Resource class
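As an excerpt from the sqs Delegator sample below shows, the pass-through uses the Echo Driver, while the permissions Resource is co-provisioned for the matched class:

driver_type: humanitec/echo
driver_inputs:
  values:
    arn: ${resources['sqs.default'].outputs.arn}
    region: ${resources['sqs.default'].outputs.region}
    url: ${resources['sqs.default'].outputs.url}
provision:
  aws-policy:
    match_dependents: true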
Sample Delegator Resource Definition


sqs-delegator.yaml (view on GitHub):

# This Resource Definition of type `sqs` implements a "Delegator" resource
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-sqs-delegator
entity:
  driver_type: humanitec/echo
  name: wi-aws-sqs-delegator
  type: sqs
  driver_inputs:
    values:
      # Referencing the "concrete" Resource and passing through all of its outputs
      arn: ${resources['sqs.default'].outputs.arn}
      region: ${resources['sqs.default'].outputs.region}
      url: ${resources['sqs.default'].outputs.url}
  provision:
    # Co-provision an IAM Policy resource for this class of sqs
    # By specifying neither class nor ID, the co-provisioned aws-policy will have the same class and ID as the present resource
    aws-policy:
      match_dependents: true
  # Adjust matching criteria as required, but must match all classes
  criteria:
  - class: sqs-admin
    app_id: workload-identity-test-aws
  - class: sqs-read-only
    app_id: workload-identity-test-aws


s3-delegator.yaml (view on GitHub):

# This Resource Definition of type `s3` implements a "Delegator" resource 
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-s3-delegator
entity:
  driver_type: humanitec/echo
  name: wi-aws-s3-delegator
  type: s3
  driver_inputs:
    values:
      # Referencing the "concrete" Resource and passing through all of its outputs
      arn: ${resources['s3.default'].outputs.arn}
      name: ${resources['s3.default'].outputs.bucket}
      region: ${resources['s3.default'].outputs.region}
  # Adjust matching criteria as required, but must match all classes
  criteria:
  - class: s3-admin
    app_id: workload-identity-test-aws
  - class: s3-read-only
    app_id: workload-identity-test-aws


gcs-delegator.yaml (view on GitHub):

# This Resource Definition of type `gcs` implements a "Delegator" resource
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-gcp-gcs-delegator
entity:
  driver_type: humanitec/echo
  name: wi-gcp-gcs-delegator
  type: gcs
  driver_inputs:
    values:
      # Referencing the "concrete" Resource and passing through all of its outputs
      name: ${resources['gcs.default'].outputs.name}
  provision:
    # Co-provision an IAM Policy resource for this class of gcs
    # The gcp-iam-policy-binding resource will have the same class and ID as the present resource
    gcp-iam-policy-binding:
      is_dependent: true
  # Adjust matching criteria as required, but must match all access classes
  criteria:
  - class: gcs-admin
    app_id: workload-identity-test-gcp
  - class: gcs-read-only
    app_id: workload-identity-test-gcp

Target Resource Graph (advanced)

The extended Resource Graph will then look like this for a workload requesting “read-only” access to a private resource:

---
title: Adding a sqs Delegator Resource
---
graph LR
  workload-1("workload-1<br/>type: workload<br/>class: default<br/>id: modules.hello-aws-1") --> |depends on| sqs-1-delegator-read-only("sqs-1 Delegator<br/>type: sqs<br/>class: sqs-read-only<br/>id: modules.hello-aws-1.externals.sqs-1")
  sqs-1-delegator-read-only -->|references| sqs-1("sqs-1 concrete<br/>type: sqs<br/>class: default<br/>id: modules.hello-aws-1.externals.sqs-1")

  iam_role_1("IAM Role for workload-1<br/>type: aws-role<br/>class: default<br/>id: modules.hello-aws-1") -->|co-provisioned by /<br/>depends on| k8s_service_account_1
  workload-1 -->|references| k8s_service_account_1("K8S ServiceAccount 1<br/>type: k8s-service-account<br/>class: default<br/>id: modules.hello-aws-1")
  policy-sqs-1("IAM policy for sqs-1<br/>type: aws-policy<br/>class: sqs-read-only<br/>id: modules.hello-aws-1.externals.sqs-1") -->|co-provisioned by /<br/>depends on| sqs-1-delegator-read-only
  workload-1 -->|match dependents<br/>from aws-role co-provisioning| iam_role_1
  iam_role_1 -->|resource selector| policy-sqs-1
  workload-1 -->|match dependents<br/>from aws-role co-provisioning| policy-sqs-1

  class sqs-1-delegator-read-only highlight

The diagram omits the config Resource for brevity.

Corresponding Score file

This Score file will produce the Graph. Note that it now requests a class for the sqs resource.

apiVersion: score.dev/v1b1
metadata:
  name: hello-aws-1
containers:
  hello-world:
    image: .
    variables:
      SQS_1_NAME: ${resources.sqs-1.name}
resources:
  sqs-1:
    type: sqs
    # Adding the class to request a particular access
    class: sqs-read-only

---
title: Adding a s3 Delegator Resource
---
graph LR
  workload-1("workload-1<br/>type: workload<br/>class: default<br/>id: modules.hello-aws-1") --> |depends on| bucket-1-delegator("bucket-1 Delegator<br/>type: s3<br/>class: s3-read-only<br/>id: modules.hello-aws-1.externals.bucket-1")
  bucket-1-delegator -->|references| bucket-1("bucket-1<br/>type: s3<br/>class: default<br/>id: modules.hello-aws-1.externals.bucket-1")
  policy-bucket-1("Bucket policy for bucket-1<br/>type: aws-policy<br/>class: s3-bucket-policy<br/>id: modules.hello-aws-1.externals.bucket-1") -->|co-provisioned by /<br/>depends on| bucket-1
  bucket-1 -->|references| app_config("App config<br/>type: config<br/>class: default<br/>id: app-config")
  iam_role_1("IAM Role for workload-1<br/>type: aws-role<br/>class: default<br/>id: modules.hello-aws-1") -->|co-provisioned by /<br/>depends on| k8s_service_account_1
  workload-1 -->|references| k8s_service_account_1("K8S ServiceAccount 1<br/>type: k8s-service-account<br/>class: default<br/>id: modules.hello-aws-1")
  policy-bucket-1 -->|resource selector| iam_role_1
  workload-1 -->|match dependents<br/>from aws-role co-provisioning| iam_role_1
  iam_role_1 -->|references| app_config

  class bucket-1-delegator highlight
Corresponding Score file

This Score file will produce the Graph. Note that it now requests a class for the s3 resource.

apiVersion: score.dev/v1b1
metadata:
  name: hello-aws-1
containers:
  hello-world:
    image: .
    variables:
      BUCKET_1_NAME: ${resources.bucket-1.name}
resources:
  bucket-1:
    type: s3
    class: s3-read-only

---
title: Adding a gcs Delegator Resource
---
graph LR
  workload-1("workload-1<br/>type: workload<br/>class: default<br/>id: modules.hello-gcp-1") --->|depends on| bucket-1-delegator-read-only("bucket-1 Delegator for read-only access<br/>type: gcs<br/>class: gcs-read-only<br/>id: modules.hello-gcp-1.externals.bucket-1")
  bucket-1-delegator-read-only -->|references| bucket-1-concrete("bucket-1 concrete<br/>type: gcs<br/>class: default<br/>id: modules.hello-gcp-1.externals.bucket-1")
  policy-bucket-1-admin("Policy for bucket-1 read-only access<br/>type: gcp-iam-policy-binding<br/>class: gcs-read-only<br/>id: modules.hello-gcp-1.externals.bucket-1") -.->|resource selector| k8s_service_account_1
  policy-bucket-1-admin -->|co-provisioned by /<br/>depends on| bucket-1-delegator-read-only
  
  app_config_gke("App config<br/>type: config<br/>class: default<br/>id: app-config")
  bucket-1-concrete -->|references| app_config_gke
  workload-1 -->|references| k8s_service_account_1("K8S ServiceAccount 1<br/>type: k8s-service-account<br/>class: default<br/>id: modules.hello-gcp-1") --->|references| cluster_config_gke("GKE config<br/>type: config<br/>class: default<br/>id: gke-config")
  policy-bucket-1-admin -->|references| app_config_gke

  class bucket-1-delegator-read-only highlight
Corresponding Score file

This Score file will produce the Graph. Note that it now requests a class for the gcs resource.

apiVersion: score.dev/v1b1
metadata:
  name: hello-gcp-1
containers:
  hello-world:
    image: .
    variables:
      GCS_BUCKET_1_NAME: ${resources.bucket-1.name}
resources:
  bucket-1:
    type: gcs
    # Adding the class to request a particular access
    class: gcs-read-only

The permissions/policy resource now becomes dynamic. In the basic setup, it always provisioned the same permissions to all associated workload identities. Now it uses its own Resource class to determine the corresponding permissions. The exact implementation depends on the IaC tooling. Our example shows a Terraform implementation.

The Resource Definitions for both the Delegator and the permissions/policy Resource must match all access classes you define.
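For the sqs example, the permissions Definition therefore lists every supported access class in its matching criteria (excerpt from the sample below):

criteria:
- class: sqs-admin
- class: sqs-read-only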

Sample permissions/policy Resource Definition


sqs-iam-policy.yaml (view on GitHub):

# This Resource Definition creates an IAM policy for the access class it matched to
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-sqs-iam-policy
entity:
  driver_type: humanitec/terraform-runner
  name: wi-aws-sqs-iam-policy
  type: aws-policy
  # The identity used in this Cloud Account is the one executing the Terraform code
  # It needs permissions to manage IAM policies
  driver_account: ${resources['config.default#app-config'].account}
  driver_inputs:
    values:
      append_logs_to_error: true
      variables:
        app_id: "${context.app.id}"
        env_id: "${context.env.id}"
        res_id: "${context.res.id}"
        res_class: "${context.res.class}"
        region: ${resources['config.default#app-config'].outputs.region}
        # The Globally Unique Resource ID (GUResID) is useful for ensuring uniqueness
        guresid: ${context.res.guresid}
        # Get the sqs resource with the same class and ID as the current resource
        sqs_arn: ${resources.sqs.outputs.arn}
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: AccessKeyId
          AWS_SECRET_ACCESS_KEY: SecretAccessKey
          AWS_SESSION_TOKEN: SessionToken
      files:
        providers.tf: |
          terraform {
            required_providers {
              aws = {
                source  = "hashicorp/aws"
              }
            }
          }
        variables.tf: |
          variable "app_id" {}
          variable "env_id" {}
          variable "res_id" {}
          variable "res_class" {}
          variable "region" {}
          variable "guresid" {}
          variable "sqs_arn" {}
          # Define the actions for each access class
          locals {
            actions = {
              "sqs-admin" = [
                "sqs:*"
              ]
              "sqs-read-only" = [
                "sqs:GetQueueAttributes",
                "sqs:GetQueueUrl",
                "sqs:ListDeadLetterSourceQueues",
                "sqs:ListQueues",
                "sqs:ListMessageMoveTasks",
                "sqs:ListQueueTags",
                "sqs:ReceiveMessage"
              ]
            }
          }
        main.tf: |
          # Provider credentials are being injected through environment variables
          provider "aws" {
            region = var.region
            default_tags {
              tags = {
                humanitec  = "true"
                hum-app    = var.app_id
                hum-env    = var.env_id
                hum-res    = replace(var.res_id, ".", "-")
                managed-by = "terraform"
              }
            }
          }
          # Prepare the policy document
          data "aws_iam_policy_document" "sqs_policy" {
            statement {
              # Statement IDs must be alphanumeric
              sid = replace(var.res_class, "-", "")
              effect = "Allow"
              actions = local.actions[var.res_class]
              resources = [
                var.sqs_arn
              ]
            }
          }
          # Create the IAM policy
          resource "aws_iam_policy" "policy" {
            name        = var.guresid
            path        = "/"
            description = "IAM Policy $\{var.res_class} for SQS $\{var.sqs_arn}"
            policy      = data.aws_iam_policy_document.sqs_policy.json
          }
        outputs.tf: |
          output "arn" {
            value = aws_iam_policy.policy.arn
          }
  # Match all the supported classes
  criteria:
  - class: sqs-admin
  - class: sqs-read-only


s3-bucket-policy.yaml (view on GitHub ) :

# This Resource Definition of type "aws-policy" uses the Terraform Driver to provision a bucket policy
# for an S3 bucket it depends on
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-aws-s3-bucket-policy
entity:
  driver_type: humanitec/terraform-runner
  name: wi-aws-s3-bucket-policy
  type: aws-policy
  driver_account: ${resources['config.default#app-config'].account}
  driver_inputs:
    values:
      append_logs_to_error: true
      variables:
        app_id: ${context.app.id}
        env_id: ${context.env.id}
        res_id: ${context.res.id}
        region: ${resources['config.default#app-config'].outputs.region}
        # Reference the "concrete" bucket for which to provision the bucket policy
        s3_bucket_name: ${resources['s3.default'].outputs.bucket}
        s3_bucket_arn: ${resources['s3.default'].outputs.arn}
        # This Resource selector traverses the Graph to get the "arn" output
        # from all aws-role resources that require "read-only" access to an s3
        # 1. Start with the s3 resource of class "default" and the same ID as the present resource
        #    This effectively matches the "concrete" s3 resource
        # 2. Find all the s3 resources of class "s3-read-only" that depend on that s3
        #    This effectively matches the s3 Delegator resource for this access class
        # 3. Find all the workloads that depend on that s3 Delegator
        # 4. Find all the aws-roles that these workloads depend on
        # 5. For each aws-role, read its "arn" output
        # Notes:
        # - The workload depends on the aws-role because the k8s-service-account co-provisions the aws-role with match_dependents: true
        # - The selector may return more than one element because more than one workload may depend on the s3 Delegator
        #   if the s3 is a Shared Resource
        role_arns_readonly: ${resources['s3.default<s3.s3-read-only<workload>aws-role'].outputs.arn}
        # Same as the previous selector, but for access class "s3-admin"
        role_arns_admin: ${resources['s3.default<s3.s3-admin<workload>aws-role'].outputs.arn}
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: AccessKeyId
          AWS_SECRET_ACCESS_KEY: SecretAccessKey
          AWS_SESSION_TOKEN: SessionToken
      files:
        providers.tf: | 
          terraform {
            required_providers {
              aws = {
                source  = "hashicorp/aws"
              }
            }
          }
        variables.tf: |
          variable "app_id" {}
          variable "env_id" {}
          variable "res_id" {}
          variable "region" {}
          variable "s3_bucket_arn" {}
          variable "s3_bucket_name" {}
          variable "role_arns_readonly" {
            type = set(string)
          }
          variable "role_arns_admin" {
            type = set(string)
          }
        main.tf: |
          # Provider credentials are being injected through environment variables
          provider "aws" {
            region = var.region
            default_tags {
              tags = {
                humanitec  = "true"
                hum-app    = var.app_id
                hum-env    = var.env_id
                hum-res    = replace(var.res_id, ".", "-")
                managed-by = "terraform"
              }
            }
          }
          # Prepare the policy document for read-only access
          data "aws_iam_policy_document" "bucket_readonly_access" {
            count = length(var.role_arns_readonly) > 0 ? 1 : 0
            statement {
              sid = "ReadOnly"
              dynamic "principals" {
                for_each = var.role_arns_readonly
                content {
                  type        = "AWS"
                  identifiers = ["$\{principals.value}"]
                }
              }
              actions = [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:ListBucket",
              ]
              resources = [
                var.s3_bucket_arn,
                "$\{var.s3_bucket_arn}/*",
              ]
            }
          }
          # Prepare the policy document for admin access
          data "aws_iam_policy_document" "bucket_admin_access" {
            count = length(var.role_arns_admin) > 0 ? 1 : 0
            statement {
              sid = "Admin"
              dynamic "principals" {
                for_each = var.role_arns_admin
                content {
                  type        = "AWS"
                  identifiers = ["$\{principals.value}"]
                }
              }
              actions = [
                "s3:*"
              ]
              resources = [
                var.s3_bucket_arn,
                "$\{var.s3_bucket_arn}/*",
              ]
            }
          }
          # Merge policy documents into one
          data "aws_iam_policy_document" "combined" {
            source_policy_documents = [
              length(data.aws_iam_policy_document.bucket_readonly_access) > 0 ? data.aws_iam_policy_document.bucket_readonly_access[0].json : "",
              length(data.aws_iam_policy_document.bucket_admin_access) > 0 ? data.aws_iam_policy_document.bucket_admin_access[0].json : "",
              ""]
          }
          # Create the bucket policy
          resource "aws_s3_bucket_policy" "bucket_policy" {
            bucket = var.s3_bucket_name
            policy = data.aws_iam_policy_document.combined.json
          }
        outputs.tf: |
          # This output is for debugging and illustration purposes only
          output "role_arns_readonly" {
            value = join(",", var.role_arns_readonly)
          }
          output "role_arns_admin" {
            value = join(",", var.role_arns_admin)
          }
  # Match the class that is used in the "provision" statement in the Delegator s3 Resource Definition
  criteria:
  - class: s3-bucket-policy


gcs-iam-member.yaml (view on GitHub ) :

# This Resource Definition uses the Terraform Driver to create IAM policies for gcs resources
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: wi-gcp-gcs-iam-member
entity:
  driver_type: humanitec/terraform-runner
  name: wi-gcp-gcs-iam-member
  type: gcp-iam-policy-binding
  # The identity used in this Cloud Account is the one executing the Terraform code
  # It needs permissions to manage IAM members
  driver_account: ${resources['config.default#app-config'].account}
  driver_inputs:
    values:
      variables:
        res_class: ${context.res.class}
        # Obtain the bucket name from the bucket resource this resource depends on (same class and ID)
        gcs_bucket_name: ${resources.gcs.outputs.name}
        # This Resource selector traverses the Graph to get the "principal" output
        # from all k8s-service-account resources that require access
        # 1. Start with the gcs resource of the same class and ID as the present resource
        #    This effectively matches the gcs Delegator that co-provisioned the present resource
        # 2. Find all the workloads that depend on that gcs Delegator
        # 3. Find all the k8s-service-accounts that these workloads depend on
        # 4. For each k8s-service-account, read its "principal" output
        # Notes:
        # - The present gcp-iam-policy-binding depends on the gcs Delegator because the gcs Delegator co-provisions it with is_dependent = true
        # - The selector may return more than one element because more than one workload may depend on the gcs Delegator
        #   if the gcs Delegator is a Shared Resource
        principals: ${resources['gcs<workload>k8s-service-account'].outputs.principal}
      credentials_config:
        variables:
          access_token: access_token
      files:
        providers.tf: |
          terraform {
            required_providers {
              google = {
                source  = "hashicorp/google"
              }
            }
          }
        variables.tf: |
          variable "res_class" {}
          variable "access_token" {
            type      = string
            sensitive = true
          }
          variable "gcs_bucket_name" {}
          # This variable is a set because a resource selector, which is used to retrieve it, always returns an array
          variable "principals" {
            type = set(string)
          }
          # Define the roles for each access class
          locals {
            roles = {
              "gcs-read-only" = "roles/storage.objectViewer"
              "gcs-admin"     = "roles/storage.objectAdmin"
            }
          }
        main.tf: |
          provider "google" {
            access_token = var.access_token
          }
          # Create one IAM member for each principal using the role for the current access class
          resource "google_storage_bucket_iam_member" "iam_member" {
            for_each = var.principals
            bucket  = var.gcs_bucket_name
            role    = local.roles[var.res_class]
            member  = each.key
          }
        outputs.tf: |
          # This output is for debugging and illustration purposes only
          output "principals" {
            value = join(",", var.principals)
          }   
  # Match the class that is used in the "provision" statement in the gcs Delegator Resource Definition
  criteria:
  - class: gcs-read-only
  - class: gcs-admin

Providing an access class

Follow these steps to provide an access class:

  1. Decide on the access class you wish to support, e.g. “gcs-read-only” or “s3-admin”. The class name must be unique across Resource Types to ensure proper matching of all Resource Definitions.

  2. Determine the set of permissions, defined by e.g. a cloud role or policy, to be assigned for this access class. The details depend on your cloud provider. Any role or policy objects to be assigned must be maintained outside of the Platform Orchestrator because their lifecycle is not bound to any Deployment or Application Environment.

  3. Expand the cloud permissions/policy Resource Definition:

  • Add the new access class to its matching criteria
  • Cover the cloud role or policy to be used for the access class in the IaC code used in the Resource Definition
  4. Add the new access class to the matching criteria of the Delegator Resource Definition, as sketched below.
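For instance, a condensed sketch of offering a hypothetical new access class sqs-send-only on top of the SQS example above. Only the criteria fragments are shown; the corresponding permission set also needs to be added to the locals.actions map in the Terraform code of the permissions/policy Definition:

# Permissions/policy Resource Definition (wi-aws-sqs-iam-policy): extended matching criteria
criteria:
- class: sqs-admin
- class: sqs-read-only
- class: sqs-send-only   # new access class

# Delegator sqs Resource Definition: extend its matching criteria in the same way
criteria:
- class: sqs-admin
- class: sqs-read-only
- class: sqs-send-only   # new access class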

Register the Resource Classes

We generally recommend that you register all Resource Classes with the Platform Orchestrator to support developers with better validation. See Creating a Resource Class for details.

Supporting more Resource Types

To support another Resource Type with workload identity, you need to:

  1. Provide the concrete Resource Definition for the Resource Type to perform the actual provisioning using IaC tooling
  2. Provide the permissions/policy Resource Definition for the Resource Type
    • Follow the examples for basic or advanced setup
  3. (Advanced setup only) Provide all access classes you wish to support

No change to the workload and k8s-service-account Resource Definitions is required.

Request Resources via Score

These principles apply for requesting resources via Score:

  • Developers may specify an id for any resource in Score to make it a Shared Resource
  • (Advanced setup only) Developers must specify a class out of the supported set of access classes for that type

Examples show the resources section only.

Score examples (basic setup)
Example 1

Score file 1:

resources:
  some-resource:
    type: some-resource-type

Score file 2:

resources:
  some-resource:
    type: some-resource-type

Resource access:

  • Separate Resources because both are Private (no id)
  • Both workloads have the standard access to their Private Resource

Example 2

Score file 1:

resources:
  some-resource:
    type: some-resource-type
    id: resource-1

Score file 2:

resources:
  some-resource:
    type: some-resource-type
    id: resource-1

Resource access:

  • Same Shared Resource due to same type and id
  • Both workloads have the standard access to the Shared Resource

Score examples (advanced setup)
Example 1

Score file 1:

resources:
  some-resource:
    type: some-resource-type
    class: type-read-only
    id: resource-1

Score file 2:

resources:
  some-resource:
    type: some-resource-type
    class: type-read-only

Resource access:

  • Separate Resources because one is Shared, one is Private
  • Both workloads have read-only access to their Resource

Example 2

Score file 1:

resources:
  some-resource:
    type: some-resource-type
    class: type-read-only
    id: resource-1

Score file 2:

resources:
  some-resource:
    type: some-resource-type
    class: type-admin
    id: resource-1

Resource access:

  • Same Shared Resource due to same type and id
  • The two workloads have read-only and admin access to the Resource, respectively

Example 3

Score file 1:

resources:
  some-resource:
    type: some-resource-type
    class: type-read-only
    id: resource-1

Score file 2:

resources:
  some-resource:
    type: some-resource-type
    class: type-admin
    id: resource-2

Resource access:

  • Separate Shared Resources due to different id
  • The two workloads have read-only and admin access to the Resources, respectively

Example

Sample implementations of both the basic and the advanced setup in this guide are available in this public GitHub repository . It showcases a number of access level variations and two types of cloud resources for each cloud provider.

Prerequisites

To run the example, you’ll need:

  • A Humanitec Organization and a user with permission to manage Resource Definitions, create Applications, and perform Deployments
  • The humctl CLI installed
  • A Kubernetes cluster for Workload deployments of one of the types listed above, with the respective workload identity solution enabled
  • This Kubernetes cluster connected to the Platform Orchestrator. If you do not have the cluster connected yet, these are your options:
Options to connect your Kubernetes cluster
Five-minute IDP

  • Set up a local demo cluster following the Five-minute IDP
  • Duration: 5 min
  • No need to deploy the demo Workload, just perform the setup
  • Ephemeral (throw-away) setup for demo purposes

Bring your own cluster

  • Connect an existing cluster by following the Quickstart up to “Connect your cluster” (guided), or using the instructions in Kubernetes (self-guided)
  • Duration: 15-30 min
  • One-time effort per cluster, can be re-used going forward

Reference architecture

  • Set up the reference architecture
  • Duration: 15-30 min
  • Creates a new cluster plus supporting cloud infrastructure, and connects it to the Platform Orchestrator
  • A Cloud Account with permission to manage the kinds of cloud resources used in the example (S3 and SQS on AWS, GCS and Spanner on GCP) as well as the associated IAM objects. It may be the same or a separate Cloud Account from the one you use to connect to your cluster
  • A Kubernetes cluster set up for executing the example Terraform code using the Terraform Runner Driver. It can be the same or a different cluster than the one used for Workload deployments. See the Driver page for setup details.

Installation

Clone the repository:

git clone https://github.com/humanitec-tutorials/k8s-workload-identity.git

Login to the Platform Orchestrator:

humctl login

Follow the instructions for your cloud provider:

  • Navigate into the aws/common directory:
cd aws/common
  • Edit the config Resource Definitions at ./resource-definitions/app-config.yaml and ./resource-definitions/tf-runner-config.yaml. Replace all values marked YOURVALUE to match your setup

  • Create the demo Application and add common Resource Definitions to your Organization:

humctl apply -f ./app.yaml
humctl apply -f ./resource-definitions
  • Create the matching criteria app_id: workload-identity-test-aws on the existing Resource Definition of your target EKS cluster for Workload deployments so that the upcoming Deployment will use that cluster
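For example, if that cluster Resource Definition is managed as a file in your setup, the addition is a single criteria entry (sketch; only the relevant fragment is shown):
entity:
  criteria:
  - app_id: workload-identity-test-aws   # routes this demo Application to your EKS cluster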

  • Create the same matching criteria on the Humanitec Agent Resource Definition used to access that cluster

  • Navigate to either the basic or advanced directory to execute the setup of your choice:

# To execute the "basic" setup
cd ../basic
# To execute the "advanced" setup
cd ../advanced
  • Create the Resource Definitions for the setup:
humctl apply -f ./resource-definitions
  • Deploy the set of demo workloads into the demo Application all at once:
humctl score deploy --deploy-config deploy-config.yaml \
  --app workload-identity-test-aws --env development
  • Open the Humanitec Portal  and navigate to the development Environment of Application workload-identity-test-aws. You should see a Deployment in progress.

  • While the Deployment is ongoing, inspect the Score files used for it. Note how the resources sections request resources with varying id and class settings:

Basic setup

score-1.yaml (view on GitHub ) :

apiVersion: score.dev/v1b1
metadata:
  name: hello-aws-1
containers:
  hello-world:
    image: .
    command: ["sleep","infinity"]
    variables:
      S3_PRIVATE_BUCKET_1_NAME: ${resources.private-bucket-1.name}
      S3_BUCKET_1_NAME: ${resources.bucket-1.name}
      S3_BUCKET_2_NAME: ${resources.bucket-2.name}
      SQS_1_URL: ${resources.sqs-1.url}
resources:
  private-bucket-1:
    type: s3
    # no "id" for a private resource
  bucket-1:
    type: s3
    id: bucket-1
  bucket-2:
    type: s3
    id: bucket-2
  sqs-1:
    type: sqs
    id: sqs-1

score-2.yaml (view on GitHub ) :

apiVersion: score.dev/v1b1
metadata:
  name: hello-aws-2
containers:
  hello-world:
    image: .
    command: ["sleep","infinity"]
    variables:
      S3_BUCKET_2_NAME: ${resources.bucket-2.name}
      SQS_1_URL: ${resources.sqs-1.url}
resources:
  bucket-2:
    type: s3
    id: bucket-2
  sqs-1:
    type: sqs
    id: sqs-1

score-3.yaml (view on GitHub ) :

apiVersion: score.dev/v1b1
metadata:
  name: hello-aws-3
containers:
  hello-world:
    image: .
    command: ["sleep","infinity"]
    variables:
      S3_BUCKET_2_NAME: ${resources.bucket-2.name}
      SQS_1_URL: ${resources.sqs-1.url}
resources:
  bucket-2:
    type: s3
    id: bucket-2
  sqs-1:
    type: sqs
    id: sqs-1
Advanced setup

score-1.yaml (view on GitHub ) :

apiVersion: score.dev/v1b1
metadata:
  name: hello-aws-1
containers:
  hello-world:
    image: .
    command: ["sleep","infinity"]
    variables:
      S3_PRIVATE_BUCKET_1_NAME: ${resources.private-bucket-1.name}
      S3_BUCKET_1_NAME: ${resources.bucket-1.name}
      S3_BUCKET_2_NAME: ${resources.bucket-2.name}
      SQS_1_URL: ${resources.sqs-1.url}
resources:
  private-bucket-1:
    type: s3
    class: s3-read-only
    # no "id" for a private resource
  bucket-1:
    type: s3
    class: s3-admin
    id: bucket-1
  bucket-2:
    type: s3
    class: s3-read-only
    id: bucket-2
  sqs-1:
    type: sqs
    class: sqs-read-only
    id: sqs-1

score-2.yaml (view on GitHub ) :

apiVersion: score.dev/v1b1
metadata:
  name: hello-aws-2
containers:
  hello-world:
    image: .
    command: ["sleep","infinity"]
    variables:
      S3_BUCKET_2_NAME: ${resources.bucket-2.name}
      SQS_1_URL: ${resources.sqs-1.url}
resources:
  bucket-2:
    type: s3
    class: s3-read-only
    id: bucket-2
  sqs-1:
    type: sqs
    class: sqs-read-only
    id: sqs-1

score-3.yaml (view on GitHub ) :

apiVersion: score.dev/v1b1
metadata:
  name: hello-aws-3
containers:
  hello-world:
    image: .
    command: ["sleep","infinity"]
    variables:
      S3_BUCKET_2_NAME: ${resources.bucket-2.name}
      SQS_1_URL: ${resources.sqs-1.url}
resources:
  bucket-2:
    type: s3
    class: s3-admin
    id: bucket-2
  sqs-1:
    type: sqs
    class: sqs-admin
    id: sqs-1
  • Once the deployment has finished, the Portal should display the Resource Graph showing all workloads, cloud resources, and supporting elements. Navigate through the Graph to see the connections

  • Open Amazon S3 in the AWS Console. Observe the instances created and check their permissions. You should see the appropriate bucket policy for each role principal on each individual resource

  • Open Amazon SQS  in the AWS Console. Observe the instances and verify in IAM Policies  that there is a Permissions policy for each access class with the proper role entities attached

  • Navigate into the gcp/common directory:
cd gcp/common
  • Edit the config Resource Definitions at ./resource-definitions/app-config.yaml, ./resource-definitions/gke-config.yaml, and ./resource-definitions/tf-runner-config.yaml. Replace all values marked YOURVALUE to match your setup

  • Create the demo Application and add common Resource Definitions to your Organization:

humctl apply -f ./app.yaml
humctl apply -f ./resource-definitions
  • Create the matching criteria app_id: workload-identity-test-gcp on the existing Resource Definition of your target GKE cluster for Workload deployments so that the upcoming Deployment will use that cluster
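For example, if that cluster Resource Definition is managed as a file in your setup, the addition is a single criteria entry (sketch; only the relevant fragment is shown):
entity:
  criteria:
  - app_id: workload-identity-test-gcp   # routes this demo Application to your GKE cluster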

  • Create the same matching criteria on the Humanitec Agent Resource Definition used to access that cluster

  • Navigate to either the basic or advanced directory to execute the setup of your choice:

# To execute the "basic" setup
cd ../basic
# To execute the "advanced" setup
cd ../advanced
  • Create the Resource Definitions for the setup:
humctl apply -f ./resource-definitions
  • Deploy the set of demo workloads into the demo Application all at once:
humctl score deploy --deploy-config deploy-config.yaml \
  --app workload-identity-test-gcp --env development
  • Open the Humanitec Portal  and navigate to the development Environment of Application workload-identity-test-gcp. You should see a Deployment in progress.

  • While the Deployment is ongoing, inspect the Score files used for it. Note how the resources sections request resources with varying id and class settings:

Basic setup

score-1.yaml (view on GitHub ) :

apiVersion: score.dev/v1b1
metadata:
  name: hello-gcp-1
containers:
  hello-world:
    image: .
    command: ["sleep","infinity"]
    variables:
      GCS_PRIVATE_BUCKET_1_NAME: ${resources.private-bucket-1.name}
      GCS_BUCKET_1_NAME: ${resources.bucket-1.name}
      GCS_BUCKET_2_NAME: ${resources.bucket-2.name}
      SPANNER_1_INSTANCE: ${resources.spanner-1.instance}
resources:
  private-bucket-1:
    type: gcs
  bucket-1:
    type: gcs
    id: bucket-1
  bucket-2:
    type: gcs
    id: bucket-2
  spanner-1:
    type: spanner-instance
    id: spanner-1

score-2.yaml (view on GitHub ) :

apiVersion: score.dev/v1b1
metadata:
  name: hello-gcp-2
containers:
  hello-world:
    image: .
    command: ["sleep","infinity"]
    variables:
      GCP_BUCKET_2_NAME: ${resources.bucket-2.name}
resources:
  bucket-2:
    type: gcs
    id: bucket-2

score-3.yaml (view on GitHub ) :

apiVersion: score.dev/v1b1
metadata:
  name: hello-gcp-3
containers:
  hello-world:
    image: .
    command: ["sleep","infinity"]
    variables:
      GCP_BUCKET_2_NAME: ${resources.bucket-2.name}
      SPANNER_1_INSTANCE: ${resources.spanner-1.instance}
resources:
  bucket-2:
    type: gcs
    id: bucket-2
  spanner-1:
    type: spanner-instance
    id: spanner-1
Advanced setup

score-1.yaml (view on GitHub ) :

apiVersion: score.dev/v1b1
metadata:
  name: hello-gcp-1
containers:
  hello-world:
    image: .
    command: ["sleep","infinity"]
    variables:
      GCS_PRIVATE_BUCKET_1_NAME: ${resources.private-bucket-1.name}
      GCS_BUCKET_1_NAME: ${resources.bucket-1.name}
      GCS_BUCKET_2_NAME: ${resources.bucket-2.name}
      SPANNER_1_INSTANCE: ${resources.spanner-1.instance}
resources:
  private-bucket-1:
    type: gcs
    class: gcs-read-only
    # no "id" for a private resource
  bucket-1:
    type: gcs
    class: gcs-admin
    id: bucket-1
  bucket-2:
    type: gcs
    class: gcs-read-only
    id: bucket-2
  spanner-1:
    type: spanner-instance
    class: spanner-instance-reader
    id: spanner-1

score-2.yaml (view on GitHub ) :

apiVersion: score.dev/v1b1
metadata:
  name: hello-gcp-2
containers:
  hello-world:
    image: .
    command: ["sleep","infinity"]
    variables:
      GCP_BUCKET_2_NAME: ${resources.bucket-2.name}
resources:
  bucket-2:
    type: gcs
    class: gcs-read-only
    id: bucket-2

score-3.yaml (view on GitHub ) :

apiVersion: score.dev/v1b1
metadata:
  name: hello-gcp-3
containers:
  hello-world:
    image: .
    command: ["sleep","infinity"]
    variables:
      GCP_BUCKET_2_NAME: ${resources.bucket-2.name}
      SPANNER_1_INSTANCE: ${resources.spanner-1.instance}
resources:
  bucket-2:
    type: gcs
    class: gcs-admin
    id: bucket-2
  spanner-1:
    type: spanner-instance
    class: spanner-instance-admin
    id: spanner-1
  • Once the deployment has finished, the Portal should display the Resource Graph showing all workloads, cloud resources, and supporting elements. Navigate through the Graph to see the connections

  • Open the Storage Browser and the Spanner instances in the Google Cloud console. Observe the instances created and check their permissions. You should see the appropriate permissions for each service account principal on each individual resource

Testing resource access

You can verify hands-on whether the workloads deployed through the example have the proper access level. The example uses SDK images maintained by the cloud providers for the workloads (check the deploy-config.yaml file for the specific image). You can open a shell in the running containers and run CLI commands against the cloud resources. Doing so requires kubectl access to the target cluster.

Obtain the Kubernetes namespace name for the example deployment (fill in the <placeholders>):

humctl get active-resource \
  /orgs/<my-org>/apps/workload-identity-test-<cloud>/envs/development/resources \
  -oyaml \
  | yq -r '.[] | select (.metadata.type == "k8s-namespace") | .status.resource.namespace'

Locate the workload Pods running in that namespace. Choose one, open a shell using kubectl exec, and issue test commands like those shown below.
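To list the Pods, a standard kubectl command is sufficient (assuming your kubectl context points at the Workload cluster):

kubectl get pods -n <namespace>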

To locate the target resources, it is easiest to use the Humanitec Portal  UI. Find the Application named workload-identity-test-<cloud> and open its development Environment. Navigate the Resource Graph and click on any resource to see its details.

# Open a shell into a workload Pod
kubectl exec -it -n <namespace> <pod> -- /bin/bash

# Test read access to an S3 bucket
aws s3 ls s3://my-bucketname
# Test write access to an S3 bucket
echo foo > foo.txt
aws s3 cp foo.txt s3://my-bucketname

# Test write access to an SQS queue
aws sqs send-message --queue-url <value> --message-body <value>
# Test read access to an SQS queue
aws sqs receive-message --queue-url <value>

# Open a shell into a workload Pod
kubectl exec -it -n <namespace> <pod> -- /bin/bash

# Test read access to a gcs bucket
gcloud storage ls gs://my-bucketname
# Test write access to a gcs bucket
echo foo > foo.txt
gcloud storage cp foo.txt gs://my-bucketname

# Test admin access by creating a database in a spanner instance
gcloud spanner databases create testdb --instance=my-instance-id
# Test read access by listing databases in a spanner instance
gcloud spanner databases list --instance=my-instance-id

Switching from basic to advanced setup

The sample implementations support switching from the basic to the advanced setup, even for existing Deployments, with some restrictions.

The cloud service resources themselves are kept, preserving all data stored in them. However, existing workloads may retain the sum of the previous (basic) and the new (advanced) access permissions.

To make the switch, execute these commands:

# Navigate to the **/advanced directory
cd ./aws/advanced
# Install the advanced set of Resource Definitions
# This will override some existing Definitions from the basic setup
humctl apply -f resource-definitions/
# Deploy the workloads
humctl score deploy --deploy-config deploy-config.yaml --app workload-identity-test-aws --env development

# Navigate to the **/advanced directory
cd ./gcp/advanced
# Install the advanced set of Resource Definitions
# This will override some existing Definitions from the basic setup
humctl apply -f resource-definitions/
# Deploy the workloads
humctl score deploy --deploy-config deploy-config.yaml --app workload-identity-test-gcp --env development

Cleaning up

Once finished observing the example setup, clean up:

  • Delete the Application. This will, with a few minutes' delay, also remove the cloud resources:

    # Move to your cloud directory
    cd <cloud>
    # Delete the Application
    humctl delete -f ./common/app.yaml
    
  • Using the target cloud tooling, console or portal, ensure that all cloud resources have been deleted
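    For example, a few spot checks with the cloud CLIs (sketch; adjust to the provider and resource types you deployed):

    # AWS: list remaining S3 buckets and SQS queues
    aws s3 ls
    aws sqs list-queues
    # GCP: list remaining GCS buckets and Spanner instances
    gcloud storage ls
    gcloud spanner instances list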

  • You will then be able to remove the remaining Orchestrator objects:

    humctl delete -f ./common/resource-definitions
    # Remove basic or advanced objects depending on what you used
    humctl delete -f ./basic/resource-definitions
    humctl delete -f ./advanced/resource-definitions
    

Recap

You have seen how to:

  • ✅ Set up the Platform Orchestrator to automatically leverage the cloud provider’s workload identity solution for Kubernetes-based workloads
  • ✅ Work with Private and Shared Resources
  • ✅ Optionally define and offer access classes on cloud resources to developers, and use those access classes via Score

Troubleshooting

Verify your workload identity setup

Some cloud providers publish instructions for testing the workload identity setup on your cluster. You may need to adjust the commands to fit your type of target cloud resource.

Permissions not provisioned for a Shared Resource

If your deployment succeeded, but permissions were not provisioned for a workload on a particular Shared Resource in your cloud, verify your setup:

  • Does the Score file request the proper Resource Type/access class combination in its resources? See the Score examples above for guidance
    • If not, correct the entry
    • If yes: Does the workload depend on the requested resource (basic) or on the Delegator Resource for the Shared Resource/access class combination (advanced) that is requested via Score?
      • If yes: verify the structure of the remaining Graph, comparing it with the example for the basic and advanced setup. Check proper matching criteria for all relevant Resource Definitions
      • If no: there might not be a reference to the resource (e.g. ${resources.my-resource.some-output}) in the Score file. Add one, e.g. by populating a container variable
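For example, a minimal sketch of such a reference in the Score file, using placeholder names my-resource and some-output:

containers:
  hello-world:
    image: .
    variables:
      # Referencing an output creates the workload's dependency on the resource in the Graph
      MY_RESOURCE_OUTPUT: ${resources.my-resource.some-output}
resources:
  my-resource:
    type: some-resource-type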

Re-deploy the Environment to apply your changes.
