Resource Graph Patterns

This repo contains a set of example patterns that can be used when building Resource Graphs. Each pattern is explained along with the use cases where it is useful.

Running the examples

Prerequisites

In order to try out these examples, the following is necessary:

  • A Humanitec Organization.
  • The Humanitec CLI.
  • An API Token for a Service User with the Administrator role on the Humanitec Organization.
  • A k8s-cluster Resource Definition that matches the development environment of the application defined in each example.

The environment must be configured with the following environment variables:

Variable          Description
HUMANITEC_ORG     The ID of the Humanitec Organization.
HUMANITEC_TOKEN   The API Token for the Service User with the Administrator role on the Humanitec Organization.
HUMANITEC_ENV     This should be set to development.

NOTE:

It is usually necessary to export the environment variables into the shell’s environment if you wish to use the CLI interactively.

For example:

export HUMANITEC_ORG="my-org"
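The remaining two variables are exported the same way. A minimal sketch, with a placeholder token value:

```shell
export HUMANITEC_TOKEN="my-api-token"   # placeholder; use your Service User API token
export HUMANITEC_ENV="development"
```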

Break a loop with an additional resource

This example demonstrates how to break a loop where two resources each depend on the other’s outputs.

How the example works

There is often a mutual loop assigning a principal to a Kubernetes service account to enable Workload Identity. The Kubernetes service account often needs to be annotated with the principal and the principal needs some policy to allow it to be used by the Kubernetes Service Account.

In this example, we will simulate the graph using a k8s-service-account and a fake aws-role resource, both implemented with the humanitec/template driver.

graph LR
    workload --> k8s-service-account
    k8s-service-account --> aws-role
    aws-role --> k8s-service-account

The loop arises because it is necessary to generate both the Kubernetes service account name and the aws-role dynamically. This is because these need to be unique for each Workload in the Application. Essentially, both resources require the same two pieces of information.

There are two ways to break the loop:

  1. Convention

    Decide that each resource “knows” how to generate both pieces of information. This can be achieved by using the context to provide the unique element.

    This has the downside that it is inflexible and limiting. For example, if a 3rd party system is used to issue principals, then this technique will not work.

  2. Add an additional resource

    Both the k8s-service-account and aws-role resources get both the service account name and role ID from a 3rd resource.

    This approach ensures consistency, does not rely on convention and allows for complex scenarios like getting IDs from a 3rd party system.

graph LR
    workload --> k8s-service-account
    k8s-service-account --> aws-role
    k8s-service-account --> config
    aws-role --> config
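The derivation performed by the additional config resource (as in def-config-name-id.yaml below) can be simulated in Python. This is a sketch of the template logic only, not Humanitec code; the account ID and naming scheme mirror the example templates:

```python
def derive_identity(res_id: str, app_id: str) -> dict:
    # Mirror the template's `splitList "." | last`: take the segment after
    # the final dot, e.g. "modules.my-workload" -> "my-workload".
    workload_id = res_id.split(".")[-1]
    return {
        "sa_name": f"{workload_id}-sa",
        "role_arn": f"arn:aws:iam::123456789012:role/{app_id}/sa-role-{workload_id}",
    }

# Both the k8s-service-account and the aws-role resource read these two
# outputs from the same config resource, so they always agree.
outputs = derive_identity("modules.my-workload", "example-break-a-loop-additional-resource")
```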

def-aws-role.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: example-aws-role
entity:
  criteria:
    - app_id: example-break-a-loop-additional-resource
  driver_inputs:
    values:
      role_arn: ${resources["config.sa-name-role-id"].outputs.role_arn}
      sa_name: ${resources["config.sa-name-role-id"].outputs.sa_name}
      templates:
        outputs: |
          arn: {{ .driver.values.role_arn }}
  driver_type: humanitec/template
  name: example-aws-role
  type: aws-role


def-config-name-id.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: example-config-name-id
entity:
  criteria:
    # We only match to class and not res_id because the res_id changes for
    # each workload
    - class: sa-name-role-id
      app_id: example-break-a-loop-additional-resource
  driver_inputs:
    values:
      res_id: ${context.res.id}
      app_id: ${context.app.id}
      templates:
        init: |
          workload_id: {{ .driver.values.res_id | splitList "." | last }}
        outputs: |
          role_arn: "arn:aws:iam::123456789012:role/{{ .driver.values.app_id }}/sa-role-{{ .init.workload_id }}"
          sa_name: {{ .init.workload_id }}-sa
  driver_type: humanitec/template
  name: example-config-name-id
  type: config


def-k8s-service-account.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: example-k8s-service-account
entity:
  criteria:
    - app_id: example-break-a-loop-additional-resource
  driver_inputs:
    values:
      role_arn: ${resources["config.sa-name-role-id"].outputs.role_arn}
      sa_name: ${resources["config.sa-name-role-id"].outputs.sa_name}
      templates:
        outputs: |
          name: {{ .driver.values.sa_name }}
        manifests: |
          service-account.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: ServiceAccount
              metadata:
                name: {{ .driver.values.sa_name }}
                annotations:
                  eks.amazonaws.com/role-arn: {{ .driver.values.role_arn }}

  driver_type: humanitec/template
  name: example-k8s-service-account
  type: k8s-service-account


def-workload.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: example-workload
entity:
  criteria:
    - app_id: example-break-a-loop-additional-resource
  driver_inputs:
    values:
      templates:
        outputs: |
          update: 
          - op: add
            path: /spec/serviceAccountName
            {{/*
              The resource reference does not specify ID or class so the ID and
              class of the workload being provisioned will be used.
            */}}
            value: ${resources.k8s-service-account.outputs.name}
  driver_type: humanitec/template
  name: example-workload
  type: workload


score.yaml (view on GitHub) :

apiVersion: score.dev/v1b1

metadata:
  name: example-workload

containers:
  busybox:
    image: busybox:latest

    variables:
      BUCKET_NAME: ${resources.my-s3.bucket}

    command:
      - /bin/sh
    args:
      - "-c"
      # This will output all of the environment variables in the container to
      # STDOUT every 15 seconds. This can be seen in the container logs in the
      # Humanitec UI.
      - "while true; do set; sleep 15; done"

resources:
  my-s3:
    type: s3
    # Change the class to "two" 
    class: one

Config Pattern

This example demonstrates how config resources can be used to parameterize general-purpose Resource Definitions. A config resource can parameterize a Resource Definition for different contexts, such as environment type, and can even be used by development teams to further specialize a resource for their purposes.

How the example works

This example is made up of:

  • A single s3 Resource Definition (implemented using the Echo Driver for simplicity in this example).
  • A config resource that provides default configuration as specified by the platform team.
  • A config resource that can be used by developers to override some configuration values.

The Resource Graph for production with developer overrides would look like:

flowchart LR
    WL[Workload] -->|score resource dependency| S3(type: s3, id: s3-bucket, class: default)
    S3 --> CONF_S3(type: config, id: s3-bucket, class: default)
    CONF_S3 --> CONF_S3_DEV_OVERRIDE(type: config, id: s3-bucket, class: developer-overrides)

The example demonstrates how:

  • different configurations can be used in different environments while using the same Terraform Resource Definition
  • developers can optionally provide overrides that can be controlled by the platform team.

There are 3 resource definitions:

  1. The s3 Resource Definition def-s3-base.yaml defines the underlying “base” resource. In this case it is very simple, implemented using the Echo Driver. It takes two parameters, region and bucket, and returns both of them.

  2. The first config Resource Definition def-config-platform-defaults.yaml does two things:

    • Provide default configuration values supplied by the platform team.
    • Reference the overrides that developers can supply via their own config Resource Definition.

    This config resource also provides guardrails in that only certain values can be overridden. In this example, developers can override the prefix and name properties but not tags or region.

  3. The last config Resource Definition def-config-developer-overrides.yaml allows developers to provide their overrides that can tune the resource that they request.

In practice, you may choose to maintain the Resource Definitions for the platform team and the developers in different git repositories to separate out access permissions.
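The override merge performed in the outputs template of def-config-platform-defaults.yaml can be sketched in Python. This is a simulation of the template logic under simplified defaults, not Humanitec code:

```python
def merge_with_guardrails(defaults: dict, overrides: dict, protected: set) -> dict:
    # Iterate over the default keys only, so developers cannot introduce
    # new keys; protected keys ignore any override.
    return {
        key: default if key in protected else overrides.get(key, default)
        for key, default in defaults.items()
    }

defaults = {"prefix": "", "name": "s3-bucket-app-org", "region": "eu-central-1"}
developer_overrides = {"prefix": "overridden-", "region": "us-east-1"}

# "prefix" is overridable; "region" is protected, so the override is dropped.
merged = merge_with_guardrails(defaults, developer_overrides, protected={"region", "tags"})
```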

Run the demo

Prerequisites

See the prerequisites section in the README at the root of this repository.

In addition, the environment variable HUMANITEC_APP should be set to example-config-pattern.

Cost

This example will result in one Pod being deployed.

Deploy the example

  1. Create a new app:

    humctl create app "${HUMANITEC_APP}"
    
  2. Register the Resource Definitions:

    mkdir resource-definitions
    cp def-*.yaml ./resource-definitions
    humctl apply -f ./resource-definitions
    
  3. Deploy the Score workload:

    humctl score deploy
    
  4. Inspect the effective environment variables of your workload:

  • Open the portal at https://app.humanitec.io/orgs/${HUMANITEC_ORG}/apps/example-config-pattern/envs/development
  • Select the example-config-pattern-workload and inspect the log output of the busybox container.
  • Check the values of the BUCKET_NAME and/or BUCKET_REGION variables.

Explore the example

  1. Change the name and/or prefix properties in def-config-developer-overrides.yaml. Try adding region.

  2. Redeploy:

    humctl score deploy
    
  3. Observe whether the BUCKET_NAME and/or BUCKET_REGION variables have changed.

Clean up the example

  1. Delete the Application:

    humctl delete app "${HUMANITEC_APP}"
    
  2. Delete the Resource Definitions:

    humctl delete -f ./resource-definitions
    rm -rf ./resource-definitions
    

def-config-developer-overrides.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: config-pattern-example-config-developer-overrides
entity:
  name: config-pattern-example-config-developer-overrides
  type: config
  criteria:
  - app_id: example-config-pattern
    class: developer-overrides
  driver_type: humanitec/echo
  driver_inputs:
    values:
      # Here we only override prefix. But try also overriding "name" and "region".
      # "name" will be used, "region" will be ignored.
      prefix: overridden-


def-config-platform-defaults.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: config-pattern-example-config-platform-defaults
entity:
  name: config-pattern-example-config-platform-defaults
  type: config
  criteria:
  - app_id: example-config-pattern
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          defaults:
            # These are values defined by the platform team to be used by the terraform module
            prefix: ""
            region: eu-central-1
            name: {{ "${context.res.id}" | splitList "." | last }}-${context.app.id}-${context.org.id}
            tags:
              example: config-pattern-example
              env: ${context.env.id}
        outputs: |
          {{- $overrides := .driver.values.overrides }}
          # Loop through all the default keys - this way we don't introduce additional keys from
          # the developer overrides.
          {{- range $key, $val := .init.defaults }}

            # Don't allow overrides of some keys
            {{- if (list "region" "tags") | has $key }}

          {{ $key }}: {{ $val | toRawJson }}

            {{- else }}

          {{ $key }}: {{ $overrides | dig $key $val | toRawJson }}

            {{- end}}

          {{- end }}

          # Generate some additional keys
          bucket_name: {{ $overrides.prefix | default .init.defaults.prefix }}{{ $overrides.name | default .init.defaults.name }}

      overrides: ${resources["config.developer-overrides"].values}


def-s3-base.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: config-pattern-example-s3-base
entity:
  name: config-pattern-example-s3-base
  type: s3
  criteria:
  - app_id: example-config-pattern

  driver_type: humanitec/echo
  driver_inputs:
    values:
      # Placeholders of the form "${resources.config.outputs.<output>}" will
      # automatically provision a config resource with res_id and class equal
      # to those of the s3 resource instance being provisioned.
      #
      # See github.com/humanitec-architecture/example-library//propagate-id and github.com/humanitec-architecture/example-library//propagate-class for more details.

      region: ${resources.config.outputs.region}
      bucket: ${resources.config.outputs.bucket_name}

def-s3-terraform.exclude (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: config-pattern-example-s3-terraform
entity:
  name: config-pattern-example-s3-terraform
  type: s3
  criteria:
  - app_id: example-config-pattern
  driver_type: humanitec/terraform
  # The driver_account references a Cloud Account of type "aws-role"
  # which needs to be configured for your Organization.
  driver_account: aws-example-role
  driver_inputs:
    values:
      variables:
        # Placeholders of the form "${resources.config.outputs.<output>}" will
        # automatically provision a config resource with res_id and class equal
        # to those of the s3 resource instance being provisioned.
        #
        # See github.com/humanitec-architecture/example-library//propagate-id and github.com/humanitec-architecture/example-library//propagate-class for more details.

        region: ${resources.config.outputs.region}
        bucket_name: ${resources.config.outputs.bucket_name}
        tags: ${resources.config.outputs.tags}
      # Use the credentials injected via the driver_account
      # to set variables as expected by your Terraform code
      credentials_config:
        variables:
          access_key: AccessKeyId
          secret_key: SecretAccessKey
          session_token: SessionToken
      script: |-
        # The script resolves to the overrides.tf file. It is the expected place
        # to configure the provider
        provider "aws" {
          region     = var.region
          access_key = var.access_key
          secret_key = var.secret_key
          token      = var.session_token
        }

        resource "aws_s3_bucket" "example" {
          bucket = var.bucket_name
          tags = jsondecode(var.tags)
        }



score.yaml (view on GitHub) :

apiVersion: score.dev/v1b1
metadata:
  name: example-config-pattern-workload

containers:
  busybox:
    image: busybox

    command:
      - /bin/sh
    args:
      - "-c"
      # This will output all of the environment variables in the container to
      # STDOUT every 15 seconds. This can be seen in the container logs in the
      # Humanitec UI.
      - "while true; do set; sleep 15; done"
    variables:
      BUCKET_NAME: ${resources.s3-bucket.bucket}
      BUCKET_REGION: ${resources.s3-bucket.region}

resources:
  s3-bucket:
    type: s3

Delegator Resource

This example demonstrates how delegator Resource Definitions can be used to expose a shared base resource with different access policies.

How the example works

This example is made up of:

  • Two delegator s3 Resource Definitions
  • One base s3 Resource Definition
  • Two aws-policy Resource Definitions

and the resulting graph will look like:

flowchart LR
    WL_A[Workload A] -->|score resource dependency| DELEGATOR_RES_ADMIN(type: s3, id: shared.main-s3, class: admin)
    WL_B[Workload B] -->|score resource dependency| DELEGATOR_RES_READ_ONLY(type: s3, id: shared.main-s3, class: read-only)
    DELEGATOR_RES_ADMIN -->|co-provision| POL_ADMIN(type: aws-policy, id: shared.main-s3, class: s3-admin)
    DELEGATOR_RES_ADMIN -->|Resource Reference| BASE_RES(type: s3, id: shared.main-s3, class: concrete)
    DELEGATOR_RES_READ_ONLY -->|co-provision| POL_READ_ONLY(type: aws-policy, id: shared.main-s3, class: s3-read-only)
    DELEGATOR_RES_READ_ONLY -->|Resource Reference| BASE_RES

To keep the examples as simple as possible, the humanitec/echo driver is used throughout. Check out the Resource Packs if you are interested in examples with Resource Definitions that also include provisioning.

The s3 Resource Definition def-s3-concrete.yaml defines the underlying “base” resource and is matched as class: concrete.

In a real-world setup, this Resource Definition is the only one that would actually provision the s3 bucket using a Driver other than Echo, e.g. the Terraform Driver. The delegator resources will not actually provision anything. Their purpose is the co-provisioning of the appropriate aws-policy resource based on their class.

The aws-policy Resource Definitions def-aws-policy-s3-admin.yaml and def-aws-policy-s3-read-only.yaml contain the different policies we want to make available. They are matched as class s3-admin and s3-read-only, respectively.

The s3 Resource Definitions def-s3-admin.yaml and def-s3-read-only.yaml are delegator resources that have two functions:

  • Co-provision the respective aws-policy Resource Definition.
  • Forward the outputs of the “base” resource using a Resource Reference.

When the workload defined in score-a.yaml requests an s3 resource with class: admin, the Humanitec Platform Orchestrator creates the “delegator” s3 resource with class: admin, the “base” s3 resource with class: concrete, and co-provisions the aws-policy resource with class: s3-admin.

Similar to the first workload, score-b.yaml requests an s3 resource, but this time with class: read-only. Here the Humanitec Platform Orchestrator creates the “delegator” s3 resource with class: read-only, the “base” s3 resource with class: concrete, and co-provisions the aws-policy resource with class: s3-read-only.

As both workloads use the same s3 resource ID main-s3 via the id property on the resource objects in their Score files, they share the same resource and thus the same underlying s3 bucket, but each workload uses a different access policy. The Score schema reference has details on this property.
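The resolution described above can be sketched as a toy in-memory model in Python. This is illustrative only (the keys and values are taken from this example; it is not how the Orchestrator is implemented):

```python
# Each resource is keyed by (type, id, class). Both delegator s3 resources
# reference the single "concrete" base resource, so both workloads share
# one bucket while co-provisioning different aws-policy classes.
base = {("s3", "main-s3", "concrete"): {"bucket": "example-bucket"}}

def resolve(requested_class: str) -> dict:
    bucket = base[("s3", "main-s3", "concrete")]["bucket"]
    policy = {"admin": "s3-admin", "read-only": "s3-read-only"}[requested_class]
    return {"bucket": bucket, "aws_policy_class": policy}

a = resolve("admin")       # workload A: same bucket, admin policy
b = resolve("read-only")   # workload B: same bucket, read-only policy
```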

Run the demo

Prerequisites

See the prerequisites section in the README at the root of this repository.

In addition, set these environment variables:

export HUMANITEC_APP=example-delegator
export HUMANITEC_ENV=development
export HUMANITEC_ORG=<your-org-id>

Cost

This example will result in two Pods being deployed to a Kubernetes cluster.

Deploy the example

  1. Login to the Platform Orchestrator:

    humctl login
    
  2. Create a new app:

    humctl create app "${HUMANITEC_APP}"
    
  3. Register the Resource Definitions:

    mkdir resource-definitions
    cp def-*.yaml ./resource-definitions
    humctl apply -f ./resource-definitions
    
  4. Deploy the Score workload A:

    humctl score deploy --org "${HUMANITEC_ORG}" --app "${HUMANITEC_APP}" --env "${HUMANITEC_ENV}" --file score-a.yaml
    
  5. Deploy the Score workload B:

    humctl score deploy --org "${HUMANITEC_ORG}" --app "${HUMANITEC_APP}" --env "${HUMANITEC_ENV}" --file score-b.yaml
    
  6. Export the Resource Graph:

    humctl resources graph --org "${HUMANITEC_ORG}" --app "${HUMANITEC_APP}" --env "${HUMANITEC_ENV}" > graph.dot
    

Clean up the example

  1. Delete the Application:

    humctl delete app "${HUMANITEC_APP}"
    
  2. Delete the Resource Definitions:

    humctl delete -f ./resource-definitions
    rm -rf ./resource-definitions
    

def-aws-policy-s3-admin.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aws-policy-s3-admin
entity:
  criteria:
    - app_id: example-delegator
      class: s3-admin
  driver_inputs:
    values:
      arn: arn:aws:iam::aws:policy/AmazonS3FullAccess
  # In a real-world scenario, a different Driver would be used for this resource, e.g. Terraform
  driver_type: humanitec/echo
  name: aws-policy-s3-admin
  type: aws-policy


def-aws-policy-s3-read-only.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aws-policy-s3-read-only
entity:
  criteria:
    - app_id: example-delegator
      class: s3-read-only
  driver_inputs:
    values:
      arn: arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
  # In a real-world scenario, a different Driver would be used for this resource, e.g. Terraform
  driver_type: humanitec/echo
  name: aws-policy-s3-read-only
  type: aws-policy


def-s3-admin.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: s3-admin
entity:
  criteria:
    - app_id: example-delegator
      class: admin
  driver_inputs:
    values:
      # This Resource reference to 's3.concrete' creates the dependency to the base resource
      bucket: ${resources['s3.concrete'].outputs.bucket}
  provision:
    # Co-provision a Resource of type "aws-policy" and class "s3-admin"
    aws-policy.s3-admin:
      is_dependent: false
      match_dependents: true
  driver_type: humanitec/echo
  name: s3-admin
  type: s3


def-s3-concrete.yaml (view on GitHub) :

# This Resource Definition represents the concrete base resource of the Delegator pattern
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: s3-concrete
entity:
  criteria:
    - app_id: example-delegator
      class: concrete
  driver_inputs:
    values:
      bucket: example-bucket
  # In a real-world scenario, a different Driver would be used for the base resource, e.g. Terraform
  driver_type: humanitec/echo
  name: s3-concrete
  type: s3


def-s3-read-only.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: s3-read-only
entity:
  criteria:
    - app_id: example-delegator
      class: read-only
  driver_inputs:
    values:
      # This Resource reference to 's3.concrete' creates the dependency to the base resource
      bucket: ${resources['s3.concrete'].outputs.bucket}
  provision:
    # Co-provision a Resource of type "aws-policy" and class "s3-read-only"
    aws-policy.s3-read-only:
      is_dependent: false
      match_dependents: true
  driver_type: humanitec/echo
  name: s3-read-only
  type: s3


score-a.yaml (view on GitHub) :

apiVersion: score.dev/v1b1
metadata:
  name: example-a

containers:
  busybox:
    image: busybox:latest

    command:
      - /bin/sh
    args:
      - "-c"
      # This will output all of the environment variables in the container to
      # STDOUT every 15 seconds. This can be seen in the container logs in the
      # Humanitec UI.
      - "while true; do set; sleep 15; done"
    variables:
      BUCKET_NAME: ${resources.s3.bucket}

resources:
  s3:
    type: s3
    class: admin
    id: main-s3


score-b.yaml (view on GitHub) :

apiVersion: score.dev/v1b1
metadata:
  name: example-b

containers:
  busybox:
    image: busybox:latest

    command:
      - /bin/sh
    args:
      - "-c"
      # This will output all of the environment variables in the container to
      # STDOUT every 15 seconds. This can be seen in the container logs in the
      # Humanitec UI.
      - "while true; do set; sleep 15; done"
    variables:
      BUCKET_NAME: ${resources.s3.bucket}

resources:
  s3:
    type: s3
    class: read-only
    id: main-s3

Propagate Class

This example demonstrates how Resource classes can be propagated via Resource References. It uses a single Resource Definition that can be parameterized by referencing another Resource.

How the example works

This example is made up of 3 Resource Definitions: one s3 and two config Resource Definitions. To keep the examples as simple as possible, the humanitec/echo driver is used.

The s3 Resource Definition def-s3.yaml is configured to match two classes, one and two. It generates its bucket output based on the Resource Reference ${resources.config#example.outputs.name}. This Resource Reference causes a new resource to be provisioned and is replaced with the name output of that newly provisioned resource. Because the Resource Reference only specifies the type of the resource (config) and the ID of the resource (example), the new resource will be provisioned with the same class as the s3 resource.

The one config Resource Definition (def-config-one.yaml) is configured to match the class one and the other def-config-two.yaml matches the class two.

The score.yaml file depends on a resource of type s3 with a class of one. It outputs the bucket name in the environment variable BUCKET_NAME.

This means that an S3 bucket will be provisioned via the def-s3.yaml Resource Definition. The bucket name will be pulled from the Resource Definition def-config-one.yaml, which is configured to match the class one, and so will be name-01.

If the class on the s3 resource is changed to two, the bucket name will instead be pulled from the Resource Definition def-config-two.yaml, which is configured to match the class two, and so will be name-02.
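The class propagation can be sketched in Python as a lookup keyed by the requested class. This is an illustrative simulation of the matching behavior, not the Orchestrator's implementation:

```python
# Each config Resource Definition matches one class and echoes a name output.
config_definitions = {"one": {"name": "name-01"}, "two": {"name": "name-02"}}

def provision_s3_bucket(requested_class: str) -> str:
    # The reference ${resources.config#example.outputs.name} specifies only
    # type "config" and ID "example", so the provisioned config resource
    # inherits the s3 resource's class and selects the matching definition.
    return config_definitions[requested_class]["name"]
```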

Run the demo

Prerequisites

See the prerequisites section in the README at the root of this repository.

In addition, the environment variable HUMANITEC_APP should be set to example-propagate-class.

Cost

This example will result in a single pod being deployed to a Kubernetes Cluster.

Deploy the example

  1. Create a new app:

    humctl create app "${HUMANITEC_APP}"
    
  2. Register the resource definitions:

    mkdir resource-definitions
    cp def-*.yaml ./resource-definitions
    humctl apply -f ./resource-definitions
    
  3. Deploy the score workload:

    humctl score deploy --org "${HUMANITEC_ORG}" --app "${HUMANITEC_APP}" --env "${HUMANITEC_ENV}" --token "${HUMANITEC_TOKEN}"
    

Play with the demo

  1. In the Humanitec UI, visit the running deployment and look in the container logs for the line starting with BUCKET_NAME. It should have a value of name-01

  2. Change the class of the s3 resource in the score.yaml file from one to two.

  3. Redeploy the Score file:

    humctl score deploy --org "${HUMANITEC_ORG}" --app "${HUMANITEC_APP}" --env "${HUMANITEC_ENV}" --token "${HUMANITEC_TOKEN}"
    
  4. In the Humanitec UI, visit the running deployment and look in the container logs for the line starting with BUCKET_NAME. It should now have a value of name-02

Clean up the example

  1. Delete the Application:

    humctl delete app "${HUMANITEC_APP}"
    
  2. Delete the Resource Definitions:

    humctl delete -f ./resource-definitions
    rm -rf ./resource-definitions
    

def-config-one.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: example-config-one
entity:
  criteria:
    - class: one
      res_id: example
      app_id: example-propagate-class
  driver_inputs:
    values:
      name: name-01
  driver_type: humanitec/echo
  name: example-config-one
  type: config


def-config-two.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: example-config-two
entity:
  criteria:
    - class: two
      res_id: example
      app_id: example-propagate-class
  driver_inputs:
    values:
      name: name-02
  driver_type: humanitec/echo
  name: example-config-two
  type: config


def-s3.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: example-s3
entity:
  criteria:
    - class: one
      app_id: example-propagate-class
    - class: two
      app_id: example-propagate-class
  driver_inputs:
    values:
      bucket: ${resources.config#example.outputs.name}
  driver_type: humanitec/echo
  name: example-s3
  type: s3


score.yaml (view on GitHub) :

apiVersion: score.dev/v1b1

metadata:
  name: example-workload

containers:
  busybox:
    image: busybox:latest

    variables:
      BUCKET_NAME: ${resources.my-s3.bucket}

    command:
      - /bin/sh
    args:
      - "-c"
      # This will output all of the environment variables in the container to
      # STDOUT every 15 seconds. This can be seen in the container logs in the
      # Humanitec UI.
      - "while true; do set; sleep 15; done"

resources:
  my-s3:
    type: s3
    # Change the class to "two" 
    class: one

Propagate ID

This example demonstrates how ID propagation through Resource References can be used to create a new instance of a resource for another resource. It involves provisioning a k8s-service-account resource for every workload resource provisioned.

How the example works

This example is made up of two Resource Definitions: one workload and one k8s-service-account. Both Resource Definitions use the humanitec/template driver.

A resource of type workload is automatically provisioned for each Workload in an Application in Humanitec. The workload resource will have an ID of modules.<workload id> and a class of default. This means that if an Application contains two Workloads called workload-one and workload-two, two workload resources will be provisioned, one with ID modules.workload-one and the other with ID modules.workload-two.

The def-workload.yaml Resource Definition has a reference to a k8s-service-account resource. The Resource Reference does not specify either the class or the ID of the resource. This means that the k8s-service-account resource is provisioned with the same ID and class as the Workload.
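The service account name derivation in the init template of def-k8s-service-account.yaml can be simulated in Python (an illustrative sketch of the template logic only):

```python
def service_account_name(res_id: str) -> str:
    # Mirrors the init template: `splitList "." | last` takes the segment
    # after the final dot, then the template appends the "-sa" suffix.
    return res_id.split(".")[-1] + "-sa"
```

Because the propagated ID differs per Workload, each Workload gets its own uniquely named service account.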

Run the example

Prerequisites

See the prerequisites section in the README at the root of this repository.

In addition, the environment variable HUMANITEC_APP should be set to example-propagate-id.

Cost

This example will result in a single Pod being deployed to a Kubernetes Cluster.

Deploy the example

  1. Create a new Application:

    humctl create app "${HUMANITEC_APP}"
    
  2. Register the Resource Definitions:

    mkdir resource-definitions
    cp def-*.yaml ./resource-definitions
    humctl apply -f ./resource-definitions
    
  3. Deploy the Score Workload:

    humctl score deploy --org "${HUMANITEC_ORG}" --app "${HUMANITEC_APP}" --env "${HUMANITEC_ENV}" --token "${HUMANITEC_TOKEN}"
    

Clean up the example

  1. Delete the Application

    humctl delete app "${HUMANITEC_APP}"
    
  2. Delete the Resource Definitions

    humctl delete -f ./resource-definitions
    rm -rf ./resource-definitions
    

def-k8s-service-account.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: example-k8s-service-account
entity:
  criteria:
    - app_id: example-propagate-id
  driver_inputs:
    values:
      res_id: ${context.res.id}
      templates:
        init: |
          name: {{ .driver.values.res_id | splitList "." | last }}-sa
        outputs: |
          name: {{ .init.name }}
        manifests: |
          service-account.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: ServiceAccount
              metadata:
                name: {{ .init.name }}

  driver_type: humanitec/template
  name: example-k8s-service-account
  type: k8s-service-account


def-workload.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: example-workload
entity:
  criteria:
    - app_id: example-propagate-id
  driver_inputs:
    values:
      templates:
        outputs: |
          update: 
          - op: add
            path: /spec/serviceAccountName
            {{/*
              The resource reference does not specify ID or class so the ID and
              class of the workload being provisioned will be used.
            */}}
            value: ${resources.k8s-service-account.outputs.name}
  driver_type: humanitec/template
  name: example-workload
  type: workload


score.yaml (view on GitHub) :

apiVersion: score.dev/v1b1

metadata:
  name: example-workload

containers:
  busybox:
    image: busybox:latest

    command:
      - /bin/sh
    args:
      - "-c"
      # This will output all of the environment variables in the container to
      # STDOUT every 15 seconds. This can be seen in the container logs in the
      # Humanitec UI.
      - "while true; do set; sleep 15; done"
