Resource Definitions

This section contains example Resource Definitions.

Install any Resource Definition into your Humanitec Organization using the CLI and this command:

humctl create -f resource-definition-file.yaml

Echo driver

Resource Definitions using the Echo Driver

This section contains example Resource Definitions using the Echo Driver.

Namespace

This section contains example Resource Definitions using the Echo Driver for managing Kubernetes namespaces.

  • custom-namespace.yaml: Shows how to use the Echo Driver to return the name of an externally managed namespace. This format is for use with the Humanitec CLI.
  • custom-namespace.tf: Shows how to use the Echo Driver to return the name of an externally managed namespace. This format is for use with the Humanitec Terraform provider.

custom-namespace.tf (view on GitHub) :

resource "humanitec_resource_definition" "k8s_namespace" {
  driver_type = "humanitec/echo"
  id          = "default-namespace"
  name        = "default-namespace"
  type        = "k8s-namespace"

  driver_inputs = {
    values_string = jsonencode({
      "namespace" = "$${context.app.id}-$${context.env.id}"
    })
  }
}

resource "humanitec_resource_definition_criteria" "k8s_namespace" {
  resource_definition_id = humanitec_resource_definition.k8s_namespace.id
}

custom-namespace.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: namespace-echo
entity:
  name: namespace-echo
  type: k8s-namespace
  driver_type: humanitec/echo
  driver_inputs:
    values:
      namespace: "${context.app.id}-${context.env.id}"
  criteria:
    - {}

Postgres

This section contains example Resource Definitions using the Echo Driver for PostgreSQL.

  • postgres-secretstore.yaml: Shows how to use the Echo Driver and secret references to fetch database credentials from an external secret store. This format is for use with the Humanitec CLI.

postgres-secretstore.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: postgres-echo
entity:
  name: postgres-echo
  type: postgres
  driver_type: humanitec/echo
  driver_inputs:
    values:
      name: my-database
      host: products.postgres.dev.example.com
      port: 5432
    secret_refs:
      username:
        store: my-gsm
        ref: cloudsql-username
      password:
        store: my-gsm
        ref: cloudsql-password
  criteria:
    - {}

Redis

This section contains example Resource Definitions using the Echo Driver for Redis.

  • redis-secret-refs.yaml: Shows how to use the Echo Driver and secret references to provision a Redis resource. This format is for use with the Humanitec CLI.

redis-secret-refs.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: redis-echo
entity:
  name: redis-echo
  type: redis
  driver_type: humanitec/echo
  driver_inputs:
    values:
      host: 0.0.0.0
      port: 6379
    secret_refs:
      password:
        store: my-gsm
        ref: redis-password
      username:
        store: my-gsm
        ref: redis-user
  criteria:
    - {}

K8s cluster

Connecting to generic Kubernetes clusters

This section contains example Resource Definitions for connecting to generic Kubernetes clusters of any kind beyond the managed solutions of the major cloud providers.

Static credentials

Using static credentials

This section contains example Resource Definitions using static credentials for connecting to generic Kubernetes clusters.

  • generic-k8s-client-certificate.yaml: use a client certificate and key defined via environment variables. This format is for use with the Humanitec CLI.

generic-k8s-client-certificate.yaml (view on GitHub) :

# Make sure all ${ENVIRONMENT_VARIABLES} are set when applying this Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: generic-k8s-static-credentials
entity:
  name: generic-k8s-static-credentials
  type: k8s-cluster
  driver_type: humanitec/k8s-cluster
  driver_inputs:
    values:
      name: my-generic-k8s-cluster
      loadbalancer: 35.10.10.10
      cluster_data:
        server: https://35.11.11.11:6443
        # Single line base64-encoded cluster CA data in the format "LS0t...ca-data....=="
        certificate-authority-data: ${CLUSTER_CERTIFICATE_CA_DATA}
    secrets:
      credentials:
        # Single line base64-encoded client certificate data in the format "LS0t...cert-data...=="
        client-certificate-data: ${USER_CLIENT_CERTIFICATE_DATA}
        # Single line base64-encoded client key data in the format "LS0t...key-data...=="
        client-key-data: ${USER_CLIENT_KEY_DATA}

K8s cluster aks

Connecting to AKS clusters

This section contains example Resource Definitions for connecting to AKS clusters.

Static credentials

Using static credentials

This section contains example Resource Definitions using static credentials for connecting to AKS clusters.

  • aks-static-credentials.yaml: use static credentials of a service principal defined via environment variables. This format is for use with the Humanitec CLI.
  • aks-static-credentials-cloudaccount.yaml: use static credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.

aks-static-credentials-cloudaccount.yaml (view on GitHub) :

# Connect to an AKS cluster using static credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aks-static-credentials-cloudaccount
entity:
  name: aks-static-credentials-cloudaccount
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "azure"
  # which needs to be configured for your Organization.
  driver_account: azure-static-creds
  driver_type: humanitec/k8s-cluster-aks
  driver_inputs:
    values:
      loadbalancer: 20.10.10.10
      name: demo-123
      resource_group: my-resources
      subscription_id: 12345678-aaaa-bbbb-cccc-0987654321ba
      # Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
      # server_app_id: 6dae42f8-4368-4678-94ff-3960e28e3630

aks-static-credentials.yaml (view on GitHub) :


# Make sure all ${ENVIRONMENT_VARIABLES} are set when applying this Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aks-static-credentials
entity:
  name: aks-static-credentials
  type: k8s-cluster
  driver_type: humanitec/k8s-cluster-aks
  driver_inputs: 
    values: 
      loadbalancer: 20.10.10.10
      name: demo-123
      resource_group: my-resources
      subscription_id: 12345678-aaaa-bbbb-cccc-0987654321ba
      # Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
      # server_app_id: 6dae42f8-4368-4678-94ff-3960e28e3630
    secrets:
      # The "credentials" data correspond to the content of the output
      # that Azure generates for a service principal
      credentials:
        appId: b520e4a8-6cb4-49dc-8f42-f3281dc2efe9
        displayName: my-cluster-sp
        password: ${SERVICE_PRINCIPAL_PASSWORD}
        tenant: 9b8c7b62-aaaa-4444-ffff-0987654321fd

K8s cluster eks

Connecting to EKS clusters

This section contains example Resource Definitions for connecting to EKS clusters.

Dynamic credentials

Using dynamic credentials

This section contains example Resource Definitions using dynamic credentials for connecting to EKS clusters.

  • eks-dynamic-credentials.yaml: use dynamic credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.

eks-dynamic-credentials.yaml (view on GitHub) :

# Connect to an EKS cluster using dynamic credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: eks-dynamic-credentials
entity:
  name: eks-dynamic-credentials
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "aws-role"
  # which needs to be configured for your Organization.
  driver_account: aws-temp-creds
  # The driver_type k8s-cluster-eks automatically handles the dynamic credentials
  # injected via the driver_account.
  driver_type: humanitec/k8s-cluster-eks
  driver_inputs:
    values:
      region: eu-central-1
      name: demo-123
      loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
      loadbalancer_hosted_zone: ABC0DEF5WYYZ00

Static credentials

Using static credentials

This section contains example Resource Definitions using static credentials for connecting to EKS clusters.

  • eks-static-credentials.yaml: use static credentials defined via environment variables. This format is for use with the Humanitec CLI.
  • eks-static-credentials-cloudaccount.yaml: use static credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.

eks-static-credentials-cloudaccount.yaml (view on GitHub) :

# Connect to an EKS cluster using static credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: eks-static-credentials-cloudaccount
entity:
  name: eks-static-credentials-cloudaccount
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "aws"
  # which needs to be configured for your Organization.
  driver_account: aws-static-creds
  # The driver_type k8s-cluster-eks automatically handles the static credentials
  # injected via the driver_account.
  driver_type: humanitec/k8s-cluster-eks
  driver_inputs:
    values:
      region: eu-central-1
      name: demo-123
      loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
      loadbalancer_hosted_zone: ABC0DEF5WYYZ00

eks-static-credentials.yaml (view on GitHub) :

# Make sure all ${ENVIRONMENT_VARIABLES} are set when applying this Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: eks-static-credentials
entity:
  name: eks-static-credentials
  type: k8s-cluster
  driver_type: humanitec/k8s-cluster-eks
  driver_inputs:
    values:
      region: eu-central-1
      name: demo-123
      loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
      loadbalancer_hosted_zone: ABC0DEF5WYYZ00
    secrets:
      credentials: 
        aws_access_key_id: ${AWS_ACCESS_KEY_ID}
        aws_secret_access_key: ${AWS_SECRET_ACCESS_KEY}

K8s cluster git

Connecting to a Git repository (GitOps mode)

This section contains example Resource Definitions for connecting to a Git repository to push application CRs (GitOps mode).

Static credentials

Using static credentials

This section contains example Resource Definitions using static credentials for connecting to a Git repository (GitOps mode).

  • github-for-gitops.yaml: use static credentials defined via environment variables. This format is for use with the Humanitec CLI.

github-for-gitops.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: github-for-gitops
entity:
  name: github-for-gitops
  driver_type: humanitec/k8s-cluster-git
  type: k8s-cluster
  driver_inputs:
    values:
      # Git repository for pushing manifests
      url: git@github.com:example-org/gitops-repo.git
      # Branch in the git repository, optional. If not specified, the default branch is used.
      branch: development
      # Path in the git repository, optional. If not specified, the root is used.
      path: "${context.app.id}/${context.env.id}"
      # Load Balancer, optional. Though it's not related to the git, it's used to create ingress in the target K8s cluster.
      loadbalancer: 35.10.10.10
    secrets:
      credentials:
        ssh_key: ${GIT_SSH_KEY}
        # Alternative to ssh_key: password or Personal Access Token
        # password: ${GIT_PAT}

K8s cluster gke

Connecting to GKE clusters

This section contains example Resource Definitions for connecting to GKE clusters.

Static credentials

Using static credentials

This section contains example Resource Definitions using static credentials for connecting to GKE clusters.

  • gke-static-credentials.yaml: use static credentials defined via environment variables. This format is for use with the Humanitec CLI.
  • gke-static-credentials-cloudaccount.yaml: use static credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.

gke-static-credentials-cloudaccount.yaml (view on GitHub) :


# Connect to a GKE cluster using static credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: gke-static-credentials-cloudaccount
entity:
  name: gke-static-credentials-cloudaccount
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "gcp"
  # which needs to be configured for your Organization.
  driver_account: gcp-static-creds
  driver_type: humanitec/k8s-cluster-gke
  driver_inputs: 
    values: 
      loadbalancer: 35.10.10.10
      name: demo-123
      zone: europe-west2-a
      project_id: my-gcp-project

gke-static-credentials.yaml (view on GitHub) :


# Make sure all ${ENVIRONMENT_VARIABLES} are set when applying this Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: gke-static-credentials
entity:
  name: gke-static-credentials
  type: k8s-cluster
  driver_type: humanitec/k8s-cluster-gke
  driver_inputs: 
    values: 
      loadbalancer: 35.10.10.10
      name: demo-123
      zone: europe-west2-a
      project_id: my-gcp-project
    secrets: 
      # The "credentials" data correspond to the content of the credentials.json
      # that Google Cloud generates for a service account key
      credentials:
        type: service_account
        project_id: my-gcp-project
        # Example private_key_id: 48b483fbf1d6e80fb4ac1a4626eb5d8036e3520f
        private_key_id: ${PRIVATE_KEY_ID}
        # Example private_key in one line: -----BEGIN PRIVATE KEY-----\\n...key...data...\\n...key...data...\\n...\\n-----END PRIVATE KEY-----\\n
        private_key: ${PRIVATE_KEY}
        # Example client_id: 206964217359046819490
        client_id: ${CLIENT_ID}
        client_email: my-service-account@my-gcp-project.iam.gserviceaccount.com
        auth_uri: https://accounts.google.com/o/oauth2/auth
        token_uri: https://oauth2.googleapis.com/token
        auth_provider_x509_cert_url: https://www.googleapis.com/oauth2/v1/certs
        client_x509_cert_url: https://www.googleapis.com/robot/v1/metadata/x509/my-service-account%40my-gcp-project.iam.gserviceaccount.com

Template driver

Resource Definitions using the Template Driver

This section contains example Resource Definitions using the Template Driver.

Add sidecar

Add a sidecar to workloads using the workload resource

The workload Resource Type can be used to make updates to resources before they are deployed into the cluster. In this example, a Resource Definition implementing the workload Resource Type is used to inject the Open Telemetry agent as a sidecar into every workload. In addition to adding the sidecar, it also adds an environment variable called OTEL_EXPORTER_OTLP_ENDPOINT to each container running in the workload.


otel-sidecar.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: otel-sidecar
entity:
  name: otel-sidecar
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          {{- /*
              The "update" output is passed into the corresponding "update" output of the "workload" Resource Type.
          */ -}}
          update:
            {{- /*
              Add the variable OTEL_EXPORTER_OTLP_ENDPOINT to all containers
            */ -}}
            {{- range $containerId, $value := .resource.spec.containers }}
            - op: add
              path: /spec/containers/{{ $containerId }}/variables/OTEL_EXPORTER_OTLP_ENDPOINT
              value: http://localhost:4317
            {{- end }}
        manifests:
          sidecar.yaml:
            location: containers
            data: |
              {{- /*
                The Open Telemetry container as a sidecar in the workload
              */ -}}
              command:
                - "/otelcol"
                - "--config=/conf/otel-agent-config.yaml"
              image: otel/opentelemetry-collector:0.94.0
              name: otel-agent
              resources:
                limits:
                  cpu: 500m
                  memory: 500Mi
                requests:
                  cpu: 100m
                  memory: 100Mi
              ports:
              - containerPort: 55679 # ZPages endpoint.
              - containerPort: 4317 # Default OpenTelemetry receiver port.
              - containerPort: 8888  # Metrics.
              env:
                - name: GOMEMLIMIT
                  value: 400MiB
              volumeMounts:
              - name: otel-agent-config-vol
                mountPath: /conf
          sidecar-volume.yaml:
            location: volumes
            data: |
              {{- /*
                A volume that is used to surface the config file
              */ -}}
              configMap:
                name: otel-agent-conf-{{ .id }}
                items:
                  - key: otel-agent-config
                    path: otel-agent-config.yaml
              name: otel-agent-config-vol
          otel-config-map.yaml:
            location: namespace
            data: |
              {{- /*
                The config file for the Open Telemetry agent. Notice that its name includes the GUResID
              */ -}}
              apiVersion: v1
              kind: ConfigMap
              metadata:
                name: otel-agent-conf-{{ .id }}
                labels:
                  app: opentelemetry
                  component: otel-agent-conf
              data:
                otel-agent-config: |
                  receivers:
                    otlp:
                      protocols:
                        grpc:
                          endpoint: localhost:4317
                        http:
                          endpoint: localhost:4318
                  exporters:
                    otlp:
                      endpoint: "otel-collector.default:4317"
                      tls:
                        insecure: true
                      sending_queue:
                        num_consumers: 4
                        queue_size: 100
                      retry_on_failure:
                        enabled: true
                  processors:
                    batch:
                    memory_limiter:
                      # 80% of maximum memory up to 2G
                      limit_mib: 400
                      # 25% of limit up to 2G
                      spike_limit_mib: 100
                      check_interval: 5s
                  extensions:
                    zpages: {}
                  service:
                    extensions: [zpages]
                    pipelines:
                      traces:
                        receivers: [otlp]
                        processors: [memory_limiter, batch]
                        exporters: [otlp]
  criteria: []

Affinity

This section contains example Resource Definitions using the Template Driver for the affinity of Kubernetes Pods.

  • affinity.yaml: Add affinity rules to the Workload. This format is for use with the Humanitec CLI.

affinity.yaml (view on GitHub) :

# Add affinity rules to the Workload by adding a value to the manifest at .spec.affinity 
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: workload-affinity
entity:
  name: workload-affinity
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          update:
          - op: add
            path: /spec/affinity
            value: 
              nodeAffinity:
                preferredDuringSchedulingIgnoredDuringExecution:
                - weight: 1
                  preference:
                    matchExpressions:
                    - key: another-node-label-key
                      operator: In
                      values:
                      - another-node-label-value
  criteria: []

Labels

This section shows how to use the Template Driver for managing labels on Kubernetes objects.

While it is also possible to set labels via Score, the approach shown here shifts the management of labels down to the Platform, ensuring consistency and relieving developers of the need to repeat common labels for each Workload in the Score extension file.

  • config-labels.yaml: Resource Definition of type config which defines the value for a sample label in a central place. This format is for use with the Humanitec CLI.
  • custom-workload-with-dynamic-labels.yaml: Add dynamic labels to your Workload. This format is for use with the Humanitec CLI.
  • custom-namespace-with-dynamic-labels.yaml: Add dynamic labels to your Namespace. This format is for use with the Humanitec CLI.

config-labels.yaml (view on GitHub) :

# This "config" type Resource Definition provides the value for the sample label
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: app-config
entity:
  name: app-config
  type: config
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        # Returns a sample output named "cost_center_id" to be used as a label
        outputs: |
          cost_center_id: my-example-id
  # Match the resource ID "app-config" so that it can be requested via that ID
  criteria:
    - res_id: app-config

custom-namespace-with-dynamic-labels.yaml (view on GitHub) :

# This Resource Definition references the "config" resource to use its output as a label
# and adds another label taken from the Deployment context
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: custom-namespace-with-label
entity:
  name: custom-namespace-with-label
  type: k8s-namespace
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          name: ${context.app.id}-${context.env.id}
        manifests: |-
          namespace.yaml:
            location: cluster
            data:
              apiVersion: v1
              kind: Namespace
              metadata:
                labels:
                  env_id: ${context.env.id}
                  cost_center_id: ${resources['config.default#app-config'].outputs.cost_center_id}
                name: {{ .init.name }}
        outputs: |
          namespace: {{ .init.name }}
  # Set matching criteria as required
  criteria:
    - {}

custom-workload-with-dynamic-labels.yaml (view on GitHub) :

# This Resource Definition references the "config" resource to use its output as a label
# and adds another label taken from the Deployment context
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: custom-workload-with-label
entity:
  name: custom-workload-with-label
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        # Remove the /spec/service/labels part if there is no "service" in your Score file.
        outputs: |
          update:
            - op: add
              path: /spec/labels
              value:
                {{- range $key, $val := .resource.spec.labels }}
                {{ $key }}: {{ $val | quote }}
                {{- end }}
                env_id: ${context.env.id}
                cost_center_id: ${resources['config.default#app-config'].outputs.cost_center_id}
            - op: add
              path: /spec/service/labels
              value:
                {{- range $key, $val := .resource.spec.service.labels }}
                {{ $key }}: {{ $val | quote }}
                {{- end }}
                env_id: ${context.env.id}
                cost_center_id: ${resources['config.default#app-config'].outputs.cost_center_id}
  # Set matching criteria as required
  criteria:
    - {}

Namespace

This section contains example Resource Definitions using the Template Driver for managing Kubernetes namespaces.

  • custom-namespace.yaml: Create Kubernetes namespaces with your own custom naming scheme. This format is for use with the Humanitec CLI.
  • custom-namespace.tf: Create Kubernetes namespaces with your own custom naming scheme. This format is for use with the Humanitec Terraform provider.

custom-namespace.tf (view on GitHub) :

resource "humanitec_resource_definition" "namespace" {
  id          = "custom-namespace"
  name        = "custom-namespace"
  type        = "k8s-namespace"
  driver_type = "humanitec/template"

  driver_inputs = {
    values_string = jsonencode({
      templates = {
        init      = "name: $${context.env.id}-$${context.app.id}"
        manifests = <<EOL
namespace.yaml:
  location: cluster
  data:
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        pod-security.kubernetes.io/enforce: restricted
      name: {{ .init.name }}
EOL
        outputs   = "namespace: {{ .init.name }}"
      }
    })
  }
}

resource "humanitec_resource_definition_criteria" "namespace" {
  resource_definition_id  = humanitec_resource_definition.namespace.id
  # ... add any matching criteria as required.
}

custom-namespace.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: custom-namespace
entity:
  name: custom-namespace
  type: k8s-namespace
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        # Use any combination of placeholders and characters to configure your naming scheme
        init: |
          name: ${context.env.id}-${context.app.id}
        manifests: |-
          namespace.yaml:
            location: cluster
            data:
              apiVersion: v1
              kind: Namespace
              metadata:
                labels:
                  pod-security.kubernetes.io/enforce: restricted
                name: {{ .init.name }}
        outputs: |
          namespace: {{ .init.name }}
  criteria:
    - {}

Node selector

This section contains example Resource Definitions using the Template Driver for setting nodeSelectors on your Pods.

  • aci-workload.yaml: Add the required node selector and tolerations to the Workload so it can be scheduled on an Azure AKS virtual node. This format is for use with the Humanitec CLI.

aci-workload.yaml (view on GitHub) :

# Add tolerations and nodeSelector to the Workload to make it runnable on AKS virtual nodes
# served through Azure Container Instances (ACI).
# See https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aci-workload
entity:
  name: aci-workload
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          update: 
          - op: add
            path: /spec/tolerations
            value:
            - key: "virtual-kubelet.io/provider"
              operator: "Exists"
            - key: "azure.com/aci"
              effect: "NoSchedule"
          - op: add
            path: /spec/nodeSelector
            value:
              kubernetes.io/role: agent
              beta.kubernetes.io/os: linux
              type: virtual-kubelet
  criteria: []

Security context

This section contains example Resource Definitions using the Template Driver for adding a Security Context to your Kubernetes Deployments.

  • custom-workload-with-security-context.yaml: Add a Security Context to your Workload. This format is for use with the Humanitec CLI.
  • custom-workload-with-security-context.tf: Add a Security Context to your Workload. This format is for use with the Humanitec Terraform provider.

custom-workload-with-security-context.tf (view on GitHub) :

resource "humanitec_resource_definition" "workload" {
  driver_type = "humanitec/template"
  id          = "custom-workload"
  name        = "custom-workload"
  type        = "workload"

  driver_inputs = {
    values_string = jsonencode({
      templates = {
        init      = ""
        manifests = ""
        outputs   = <<EOL
update:
  - op: add
    path: /spec/securityContext
    value:
      fsGroup: 1000
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
      seccompProfile:
        type: RuntimeDefault
  {{- range $containerId, $value := .resource.spec.containers }}
  - op: add
    path: /spec/containers/{{ $containerId }}/securityContext
    value:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      privileged: false
      readOnlyRootFilesystem: true
  {{- end }}
EOL
      }
    })
  }
}

resource "humanitec_resource_definition_criteria" "workload" {
  resource_definition_id = humanitec_resource_definition.workload.id
  # ... add any matching criteria as required.
}

custom-workload-with-security-context.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: custom-workload
entity:
  name: custom-workload
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          update:
            - op: add
              path: /spec/securityContext
              value:
                fsGroup: 1000
                runAsGroup: 1000
                runAsNonRoot: true
                runAsUser: 1000
                seccompProfile:
                  type: RuntimeDefault
            {{- range $containerId, $value := .resource.spec.containers }}
            - op: add
              path: /spec/containers/{{ $containerId }}/securityContext
              value:
                allowPrivilegeEscalation: false
                capabilities:
                  drop:
                    - ALL
                privileged: false
                readOnlyRootFilesystem: true
            {{- end }}
  criteria:
    - {}

Tolerations

This section contains example Resource Definitions using the Template Driver for managing tolerations on your Pods.

  • tolerations.yaml: Add tolerations to the Workload. This format is for use with the Humanitec CLI.

tolerations.yaml (view on GitHub) :

# Add tolerations to the Workload by adding a value to the manifest at .spec.tolerations 
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: workload-toleration
entity:
  name: workload-toleration
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          update: 
          - op: add
            path: /spec/tolerations
            value:
            - key: "example-key"
              operator: "Exists"
              effect: "NoSchedule"
  criteria: []

Volumes static provisioning

This example will let participating Workloads share a common persistent storage service through the Kubernetes volumes system.

It is possible to use the volume-nfs or volume-pvc Drivers to create a PersistentVolume for your application. If you have special requirements for your PersistentVolume, you can also use the Template Driver to create it as shown here.

The example setup will perform static provisioning for a Kubernetes PersistentVolume of type nfs and a corresponding PersistentVolumeClaim. The volume points to an existing NFS server endpoint. The endpoint shown is an in-cluster NFS service which can be set up using this Kubernetes example. Modify the endpoint to use your own NFS server, or substitute the data completely for a different volume type.

flowchart TB
  subgraph pod1[Pod]
    direction TB
    subgraph container1[Container]
      volumeMount1(volumeMount\n/tmp/data):::codeComponent
    end
    volumeMount1 --> volume1(volume):::codeComponent
  end
  subgraph pod2[Pod]
    direction TB
    subgraph container2[Container]
      volumeMount2(volumeMount\n/tmp/data):::codeComponent
    end
    volumeMount2 --> volume2(volume):::codeComponent
  end
  pvc1(PersistentVolumeClaim) --> pv1(PersistentVolume)
  volume1 --> pvc1
  pvc2(PersistentVolumeClaim) --> pv2(PersistentVolume)
  volume2 --> pvc2
  nfsServer[NFS Server]
  pv1 --> nfsServer
  pv2 --> nfsServer

  classDef codeComponent font-family:Courier

To use the example, apply both Resource Definitions to your Organization and add the required matching criteria to both so they are matched to your target Deployments.
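
As a minimal sketch, matching criteria that restrict both Resource Definitions to a particular Application could look like this (the app_id value "my-app" is illustrative; any of the supported matching criteria fields can be used instead):

  # Added to the "entity" section of both Resource Definitions shown below
  criteria:
    - app_id: my-app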

Note that this setup does not require any resource to be requested via Score. The implicit workload Resource, when matched to the Resource Definition of type workload of this example, will trigger the provisioning of the volume Resource through its own Resource reference.

These files make up the example:

  • workload-volume-nfs.yaml: Resource Definition of type workload. It references a Resource of type volume through Resource References, thus adding such a Resource to the Resource Graph and effectively triggering the provisioning of that Resource. It uses the Resource outputs to set a label for a fictitious backup solution, and to add the PersistentVolumeClaim to the Workload container.
  • volume-nfs.yaml: Resource Definition of type volume. It creates the PersistentVolume and PersistentVolumeClaim manifests and adds the volumes element to the Workload’s Pod. The ID generated in the init section will be different for each active Resource, i.e. for each Workload, so that each Workload will get its own PersistentVolume and PersistentVolumeClaim objects. Still, through the common NFS server endpoint, they will effectively share access to the data.

The resulting Resource Graph portion will look like this:

flowchart LR
  subgraph resource-graph[Resource Graph]
    direction TB
    W1((Workload)) --->|implicit reference| W2(Workload)
    W2 --->|"resource reference\n${resources.volume...}"| V1(Volume)
  end
  subgraph key [Key]
    VN((Virtual\nNodes))
    AN(Active\nResources)
  end
  resource-graph ~~~ key

volume-nfs.yaml (view on GitHub) :

# Using the Template Driver for the static provisioning of
# a Kubernetes PersistentVolume and PersistentVolumeClaim combination,
# then adding the volume into the Pod of the Workload.
# The volumeMount in the container is defined in the "workload" type Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-nfs
entity:
  name: volume-nfs
  type: volume
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          # Generate a unique id for each pv/pvc combination.
          # Every Workload will have a separate pv and pvc created for it,
          # but pointing to the same NFS server endpoint.
          volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
          pvBaseName: pv-tmpl-
          pvcBaseName: pvc-tmpl-
          volBaseName: vol-tmpl-
        manifests:
          ####################################################################
          # This template creates the PersistentVolume in the target namespace
          # Modify the nfs server and path to address your NFS server
          ####################################################################
          app-pv-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolume
              metadata:
                name: {{ .init.pvBaseName }}{{ .init.volumeUid }}
              spec:
                capacity:
                  storage: 1Mi
                accessModes:
                  - ReadWriteMany
                nfs:
                  server: nfs-server.default.svc.cluster.local
                  path: "/"
                mountOptions:
                  - nfsvers=4.2
          
          #########################################################################
          # This template creates the PersistentVolumeClaim in the target namespace
          #########################################################################
          app-pvc-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolumeClaim
              metadata:
                name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
              spec:
                accessModes:
                  - ReadWriteMany
                storageClassName: ""
                resources:
                  requests:
                    storage: 1Mi
                volumeName: {{ .init.pvBaseName }}{{ .init.volumeUid }}

          ########################################################
          # This template creates the volume in the Workload's Pod
          ########################################################
          app-vol-tmpl.yaml:
            location: volumes
            data: |
              name: {{ .init.volBaseName }}{{ .init.volumeUid }}
              persistentVolumeClaim:
                claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}

        # Make the volume name and pvc name available for other Resources
        outputs: |
          volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}
          pvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}

workload-volume-nfs.yaml (view on GitHub) :

# This workload Resource Definition uses the output of the "volume" type Resource
# to add a label for a backup solution
# and to create the volumeMount for the container.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: workload-volume-nfs
entity:
  name: workload-volume-nfs
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          pvcName: ${resources.volume.outputs.pvcName}
          volumeName: ${resources.volume.outputs.volumeName}
        outputs: |
          update:
            - op: add
              path: /spec/annotations/backup.org-name.io
              value: {{ .init.pvcName }}
            {{- range $containerId, $value := .resource.spec.containers }}
            - op: add
              path: /spec/containers/{{ $containerId }}/volumeMounts
              value:
              - name: {{ $.init.volumeName }}
                mountPath: /tmp/data
            {{- end }}

Terraform driver

Resource Definitions using the Terraform Driver

This section contains example Resource Definitions using the Terraform Driver.

Azure blob

Use the Terraform Driver to provision Azure Blob storage resources.

  • ssh-secret-refs.tf: uses secret references to obtain an SSH key from a secret store to connect to the Git repo providing the Terraform code.

ssh-secret-refs.tf (view on GitHub) :

resource "humanitec_resource_definition" "azure-blob" {
  driver_type = "humanitec/terraform"
  id          = "azure-blob"
  name        = "azure-blob"
  type        = "azure-blob"

  driver_inputs = {
    # All secrets are read from a secret store using secret references
    secret_refs = jsonencode({
      variables = {
        client_id = {
          ref   = var.client_id_secret_reference_key
          store = var.secret_store
        }
        client_secret = {
          ref   = var.client_secret_secret_reference_key
          store = var.secret_store
        }
      }

      source = {
        # Using an SSH key to authenticate against the Git repo providing the Terraform module
        ssh_key = {
          ref   = var.ssh_key_secret_reference_key
          store = var.secret_store
        }
      }
    })

    values_string = jsonencode({
      source = {
        path = "azure-blob/terraform/"
        rev  = "refs/heads/main"
        url  = "[email protected]:my-org/my-repo.git"
      }

      variables = {
        # Variables for the Terraform module located in the Git repo
        tenant_id                = var.tenant_id
        subscription_id          = var.subscription_id
        resource_group_name      = var.resource_group_name
        name                     = var.name
        prefix                   = var.prefix
        account_tier             = var.account_tier
        account_replication_type = var.account_replication_type
        container_name           = var.container_name
        container_access_type    = var.container_access_type
      }
    })
  }
}

Backends

Backends

Humanitec manages the state file for the local backend. This is the backend that is used if no backend is specified.

In order to manage your own state, you will need to define your own backend. We recommend defining the backend configuration in the script part of the Resource Definition, i.e. as an override.tf file (see the Inputs of the Terraform Driver). This allows the backend to be tuned per resource instance.

In order to centralize configuration, it is also recommended to create a config resource that can be used to centrally manage the backend configuration.

In this example, there are two config resources defined. Both are using the Template Driver to generate outputs for use in the example Resource Definition:

  • backend-config.yaml which provides shared backend configuration that can be used across Resource Definitions.
  • account-config-aws.yaml which provides credentials used by the provider.

The example Resource Definition s3-backend.yaml does the following:

  • Configures a backend using the backend-config.yaml.
  • Configures the provider using a different set of credentials from account-config-aws.yaml.
  • Provisions an s3 bucket.

account-config-aws.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: account-config-aws
entity:
  criteria:
    # This res_id is used in the resource reference in the s3-backend Resource Definition.
    - res_id: aws-account

  # The driver_account references a Cloud Account configured in the Platform Orchestrator.
  # Replace with the name of your AWS Cloud Account.
  driver_account: aws-credentials

  driver_inputs:
    values:
      templates:
        secrets: |
          aws_access_key_id: {{ .driver.secrets.account.aws_access_key_id }}
          aws_secret_access_key: {{ .driver.secrets.account.aws_secret_access_key }}
          credentials_file: |
            [default]
            aws_access_key_id = {{ .driver.secrets.account.aws_access_key_id }}
            aws_secret_access_key = {{ .driver.secrets.account.aws_secret_access_key }}
  driver_type: humanitec/template
  name: account-config-aws
  type: config


backend-config.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: tf-backend-config
entity:
  criteria:
    # This res_id is used in the resource reference in the s3-backend Resource Definition.
    - res_id: tf-backend

  # The driver_account references a Cloud Account configured in the Platform Orchestrator.
  # Replace with the name of your AWS Cloud Account.
  driver_account: aws-credentials

  driver_inputs:
    values:
      templates:
        outputs: |
          bucket: my-terraform-state-bucket
          key_prefix:  "tf-state/"
          region: us-east-1
        secrets: |
          credentials_file: |
            [default]
            aws_access_key_id = {{ .driver.secrets.account.aws_access_key_id }}
            aws_secret_access_key = {{ .driver.secrets.account.aws_secret_access_key }}
  driver_type: humanitec/template
  name: tf-backend-config
  type: config


s3-backend.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: s3-backend-example
entity:
  driver_inputs:
    # We are using secret references to write the credentials using their "value" element.
    # Using "secrets" instead would work too, but due to the placeholders in the values, the
    # Platform Orchestrator will resolve them to the exact secret references used here
    # in the resulting Resource Definition.
    # This structure therefore represents the way the Platform Orchestrator manages the Resource Definition
    # and is better suited to any round-trip engineering, if needed.
    secret_refs:
      files:
        # Credentials for the AWS provider
        aws_creds:
          # Using the resource ID "#aws-account" to fulfill the matching criteria of the "account-config-aws" config resource.  
          value: ${resources.config#aws-account.outputs.credentials_file}
        
        # In general, the credentials for the backend should be different from those of the provider
        backend_creds:
          # Using the resource ID "#tf-backend" to fulfill the matching criteria of the "tf-backend-config" config resource.
          value: ${resources.config#tf-backend.outputs.credentials_file}
    values:
      script: |-
        variable "region" {}

        terraform {
          backend "s3" {
            bucket = "${resources.config#tf-backend.outputs.bucket}"
            key    = "${resources.config#tf-backend.outputs.prefix}${context.app.id}/${context.env.id}/${context.res.type}.${context.res.class}/${context.res.id}"
            region = "${resources.config#tf-backend.outputs.region}"
            shared_credentials_files = ["backend_creds"]
          }

          required_providers {
            aws = {
              source = "hashicorp/aws"
            }
          }
        }

        provider "aws" {
          region     = var.region

          # The file is defined above. The provider will read the credentials from this file.
          shared_credentials_files = ["aws_creds"]
        }
        
        output "bucket" {
          value = aws_s3_bucket.bucket.bucket
        }

        output "region" {
          value = var.region
        }

        resource "aws_s3_bucket" "bucket" {
          bucket = "$\{replace("${context.res.id}", "^.*\.", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
          tags = {
            Humanitec = true
          }
        }

      variables:
        region: us-east-1
  driver_type: humanitec/terraform
  name: s3-backend-example
  type: s3

  # Supply matching criteria
  criteria: []

Co provision

Resource co-provisioning

This section contains an example of Resource Definitions using the Terraform Driver and illustrating the co-provisioning concept.

Scenario: For each AWS S3 bucket resource, an AWS IAM policy resource must be created. The bucket properties (region, ARN) should be passed to the policy resource. In other words, an IAM Policy resource depends on an S3 resource, but it needs to be created automatically.
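
As an illustration, a Workload could request the S3 resource through its Score file like this (a minimal sketch; the workload name, container, and resource name "my-bucket" are illustrative):

apiVersion: score.dev/v1b1
metadata:
  name: my-workload
containers:
  main:
    image: registry.example.com/my-image:latest
resources:
  # Requesting an s3 resource. With matching criteria in place, this resolves to the
  # s3-co-provision Resource Definition below and co-provisions the aws-policy resource.
  my-bucket:
    type: s3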

Any time a Workload references an S3 resource using this Resource Definition, an IAM Policy resource will be co-provisioned and reference the S3 resource. The resulting Resource Graph will look like this:

flowchart LR
  R1(Workload) --->|references| R2(S3)
  N1(AWS Policy) --->|references| R2
  classDef pClass stroke-width:1px
  classDef rClass stroke-width:2px
  classDef nClass stroke-width:2px,stroke-dasharray: 5 5
  class R1 pClass
  class R2 rClass
  class N1 nClass

aws-policy-co-provision.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aws-policy-co-provision
entity:
  name: aws-policy-co-provision
  type: aws-policy
  driver_type: humanitec/terraform
  # Use the credentials injected via the driver_account to set variables as expected by your Terraform code
  driver_account: aws
  driver_inputs:
    values:
      variables:
        REGION: ${resources.s3.outputs.region}
        BUCKET: ${resources.s3.outputs.bucket}
        BUCKET_ARN: ${resources.s3.outputs.arn}
      credentials_config:
        variables:
          ACCESS_KEY_ID: AccessKeyId
          ACCESS_KEY_VALUE: SecretAccessKey
      script: |-
        # This provider block is using the Terraform variables
        # set through the credentials_config.
        # Variable declarations omitted for brevity.
        provider "aws" {
          region     = var.REGION
          access_key = var.ACCESS_KEY_ID
          secret_key = var.ACCESS_KEY_VALUE
        }

        # ... Terraform code reduced for brevity

        resource "aws_iam_policy" "bucket" {
          name        = "${var.BUCKET}-policy"
          policy      = data.aws_iam_policy_document.main.json
        }
  
        data "aws_iam_policy_document" "main" {
          statement {
            effect = "Allow"
  
            actions = [
              "s3:GetObject",
              "s3:ListBucket",
            ]
  
            resources = [
              var.BUCKET_ARN,
            ]
          }
        }


s3-co-provision.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: s3-co-provision
entity:
  name: s3-co-provision
  type: s3
  driver_type: humanitec/terraform
  # Use the credentials injected via the driver_account to set variables as expected by your Terraform code
  driver_account: aws
  driver_inputs:
    values:
      variables:
        REGION: eu-central-1
      credentials_config:
        variables:
          ACCESS_KEY_ID: AccessKeyId
          ACCESS_KEY_VALUE: SecretAccessKey
      script: |-
        # This provider block is using the Terraform variables
        # set through the credentials_config.
        # Variable declarations omitted for brevity.
        provider "aws" {
          region     = var.REGION
          access_key = var.ACCESS_KEY_ID
          secret_key = var.ACCESS_KEY_VALUE
        }

        # ... Terraform code reduced for brevity

        resource "aws_s3_bucket" "bucket" {
          bucket = "my-bucket"
        }
        
        output "bucket" {
          value = aws_s3_bucket.main.id
        }
  
        output "arn" {
          value = aws_s3_bucket.main.arn
        }
          
        output "region" {
          value = aws_s3_bucket.main.region
        }
  # Co-provision aws-policy resource
  provision:
    aws-policy:
      is_dependent: false

Credentials

Credentials

Different Terraform providers have different ways of being configured. Generally, there are three ways that providers can be configured, as sketched below:

  • Directly using parameters on the provider. We call this “provider” credentials.
  • Using a credentials file. The filename is supplied to the provider. We call this “file” credentials.
  • Via environment variables that the provider reads. We call this “environment” credentials.
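
These three modes correspond to the credentials_config input of the Terraform Driver as used in the examples below. A rough sketch of the three shapes, using the credential field names of an AWS Cloud Account:

# Sketch only. In a Resource Definition, credentials_config sits under driver_inputs.values.
# A Resource Definition uses exactly one of the three shapes.

# "provider" credentials: injected as Terraform variables
credentials_config:
  variables:
    access_key_id: AccessKeyId
    secret_access_key: SecretAccessKey

# "file" credentials: written to a file with the given name
credentials_config:
  file: credentials.json

# "environment" credentials: exposed to the Terraform run as environment variables
credentials_config:
  environment:
    AWS_ACCESS_KEY_ID: AccessKeyId
    AWS_SECRET_ACCESS_KEY: SecretAccessKey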

A powerful approach for working with different cloud accounts for the same Resource Definition is to reference the credentials from a config resource. By using matching criteria on the config resource, it is possible to specialize the account used in the Terraform code for different contexts. For example, there might be different AWS accounts for staging and production environments. The same Resource Definition can be used to manage the Terraform code, and two config resources can be created matching the staging and production environments respectively.
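
For illustration, a hedged sketch of that pattern, assuming your Organization defines two AWS Cloud Accounts named aws-credentials-staging and aws-credentials-prod and the Environment Types staging and production:

# Sketch only: two "config" Resource Definitions selecting different Cloud Accounts per Environment Type.
# Only the relevant fields are shown.
entity:
  type: config
  driver_type: humanitec/echo
  driver_account: aws-credentials-staging
  criteria:
    - res_id: aws-account
      env_type: staging
---
entity:
  type: config
  driver_type: humanitec/echo
  driver_account: aws-credentials-prod
  criteria:
    - res_id: aws-account
      env_type: production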

In this set of examples, we provide two config Resource Definitions, one for AWS and one for GCP.

AWS

  • Account config (account-config-aws.yaml)
  • Provider Credentials (aws-provider-credentials.yaml)
  • Environment Credentials (aws-environment-credentials.yaml)

GCP

  • Account config (account-config-gcp.yaml)
  • File Credentials (gcp-file-credentials.yaml)

account-config-aws.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: account-config-aws
entity:
  criteria:
    # This res_id is used in the resource reference in the s3-backend Resource Definition.
    - res_id: aws-account

  # The driver_account references a Cloud Account configured in the Platform Orchestrator.
  # Replace with the name of your AWS Cloud Account.
  driver_account: aws-credentials

  driver_inputs:
    values:
      region: us-east-1

  driver_type: humanitec/echo
  name: account-config-aws
  type: config


account-config-gcp.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: account-config-gcp
entity:
  criteria:
    # This res_id is used in the resource reference in the gcp-file-credentials Resource Definition.
    - res_id: gcp-account

  # The driver_account references a Cloud Account configured in the Platform Orchestrator.
  # Replace with the name of your GCP Cloud Account.
  driver_account: gcp-credentials

  driver_inputs:
    values:
      location: US
      project_id: my-gcp-project
  driver_type: humanitec/echo
  name: account-config-gcp
  type: config


aws-environment-credentials.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aws-environment-credentials
entity:
  # Use the same account as the referenced config resource
  driver_account: ${resources['config.default#aws-account'].account}
  driver_inputs:
    values:
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: "AccessKeyId"
          AWS_SECRET_ACCESS_KEY: "SecretAccessKey"
          AWS_SESSION_TOKEN: "SessionToken"
      script: |-

        variable "region" {}

        terraform {
          required_providers {
            aws = {
              source = "hashicorp/aws"
            }
          }
        }

        provider "aws" {
          region = var.region
        }

        output "bucket" {
          value = aws_s3_bucket.bucket.bucket
        }

        output "region" {
          value = var.region
        }

        resource "aws_s3_bucket" "bucket" {
          bucket = "$\{replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
          tags = {
            Humanitec = true
          }
        }

      variables:
        region: ${resources['config.default#aws-account'].outputs.region}
  driver_type: humanitec/terraform
  name: aws-environment-credentials
  type: s3

  # Supply matching criteria
  criteria: []

aws-provider-credentials.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aws-provider-credentials
entity:
  # Use the same account as the referenced config resource
  driver_account: ${resources['config.default#aws-account'].account}
  driver_inputs:
    values:
      credentials_config:
        variables:
          access_key_id: "AccessKeyId"
          secret_access_key: "SecretAccessKey"
          session_token: "SessionToken"
      script: |-

        variable "access_key_id" {
          sensitive = true
        }
        variable "secret_access_key" {
          sensitive = true
        }
        variable "session_token" {
          sensitive = true
        }
        variable "region" {}

        terraform {
          required_providers {
            aws = {
              source = "hashicorp/aws"
            }
          }
        }

        provider "aws" {
          region     = var.region
          access_key = var.access_key_id
          secret_key = var.secret_access_key
          token      = var.session_token
        }

        output "bucket" {
          value = aws_s3_bucket.bucket.bucket
        }

        output "region" {
          value = var.region
        }

        resource "aws_s3_bucket" "bucket" {
          bucket = "$\{replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
          tags = {
            Humanitec = true
          }
        }

      variables:
        region: ${resources['config.default#aws-account'].outputs.region}
  driver_type: humanitec/terraform
  name: aws-provider-credentials
  type: s3

  # Supply matching criteria
  criteria: []

gcp-file-credentials.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: gcp-file-credentials
entity:
  driver_account: ${resources['config.default#gcp-account'].account}
  driver_inputs:
    values:
      credentials_config:
        file: credentials.json
      script: |-

        variable "project_id" {}
        variable "location" {}

        terraform {
          required_providers {
            google = {
              source = "hashicorp/google"
            }
          }
        }
        provider "google" {
          project     = var.project_id

          # The file is defined above. The provider will read a service account token from this file.
          credentials = "credentials.json"
        }
        
        output "name" {
          value = google_storage_bucket.bucket.name
        }

        resource "google_storage_bucket" "bucket" {
          name          = "$\{replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
          location      = var.location
          force_destroy = true
        }

      variables:
        location: ${resources.config#gcp-account.outputs.location}
        project_id: ${resources.config#gcp-account.outputs.project_id}

  driver_type: humanitec/terraform
  name: gcp-file-credentials
  type: gcs

  # Supply matching criteria
  criteria: []

Dynamic credentials

Dynamic Credentials

Using a Cloud Account type that supports dynamic credentials, those credentials can easily be injected into a Resource Definition using the Terraform Driver. Use a driver_account referencing the Cloud Account in the Resource Definition, and access its credentials through the supplied values as shown in the examples.

AWS

  • S3 bucket (s3-dynamic-credentials.yaml)

s3-dynamic-credentials.yaml (view on GitHub) :

# Provision an S3 bucket using dynamic credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: s3-dynamic-credentials
entity:
  name: s3-dynamic-credentials
  type: s3
  driver_type: humanitec/terraform
  # The driver_account references a Cloud Account of type "aws-role"
  # which needs to be configured for your Organization.
  driver_account: aws-temp-creds
  driver_inputs:
    values:
      variables:
        REGION: eu-central-1
      # Use the credentials injected via the driver_account
      # to set variables as expected by your Terraform code
      credentials_config:
        variables:
          ACCESS_KEY_ID: AccessKeyId
          ACCESS_KEY_VALUE: SecretAccessKey
          SESSION_TOKEN: SessionToken
      script: |-
        # This provider block is using the Terraform variables
        # set through the credentials_config.
        # Variable declarations omitted for brevity.
        provider "aws" {
          region     = var.REGION
          access_key = var.ACCESS_KEY_ID
          secret_key = var.ACCESS_KEY_VALUE
          token      = var.SESSION_TOKEN
        }

        # ... Terraform code reduced for brevity

        resource "aws_s3_bucket" "bucket" {
          bucket = "my-bucket"
        }
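
The variable declarations omitted from the script above would follow the same pattern as in aws-provider-credentials.yaml; a minimal sketch of what the script would additionally need to declare, with the credential variables marked sensitive:

# Sketch of the variable declarations omitted above
variable "REGION" {}

variable "ACCESS_KEY_ID" {
  sensitive = true
}

variable "ACCESS_KEY_VALUE" {
  sensitive = true
}

variable "SESSION_TOKEN" {
  sensitive = true
}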

Private git repo

The Terraform Driver can access Terraform definitions stored in a Git repository. If that repository requires authentication, you must supply credentials to the Driver. The examples in this section show how to provide them as part of the secrets of a Resource Definition based on the Terraform Driver.

  • ssh-secret-refs.tf: uses secret references to obtain an SSH key from a secret store to connect to the Git repo providing the Terraform code.
  • https-secret-refs.tf: uses secret references to obtain an HTTPS password from a secret store to connect to the Git repo providing the Terraform code.
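
Both examples below use the Terraform provider format. As a rough equivalent in the CLI YAML format, the SSH variant could reference the key like this (a sketch only; the store name my-store and secret name git-ssh-key are placeholders):

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: example-resource
entity:
  name: example-resource
  type: some-resource-type
  driver_type: humanitec/terraform
  driver_inputs:
    secret_refs:
      source:
        # SSH key for the connection to the Git repo, read from a secret store
        ssh_key:
          store: my-store
          ref: git-ssh-key
    values:
      # Connection information to the target Git repo
      source:
        path: some-resource-type/terraform
        rev: refs/heads/main
        url: git@my-domain.com:my-org/my-repo.git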

https-secret-refs.tf (view on GitHub) :

resource "humanitec_resource_definition" "example-resource" {
  driver_type = "humanitec/terraform"
  id          = "example-resource"
  name        = "example-resource"
  type        = "some-resource-type"

  driver_inputs = {
    # This example uses secret references, pointing at a secret store
    # to obtain the actual values
    secret_refs = jsonencode({

      source = {
        # Using the password for a connection to the Git repo via HTTPS
        password = {
          ref   = var.password_secret_reference_key
          store = var.secret_store
        }
      }

      variables = {
        # ...
      }
    })

    values_string = jsonencode({
      # Connection information to the target Git repo
      source = {
        path = "some-resource-type/terraform"
        rev  = "refs/heads/main"
        url  = "https://my-domain.com/my-org/my-repo.git"
      }
      # ...
    })
  }
}


ssh-secret-refs.tf (view on GitHub) :

resource "humanitec_resource_definition" "example-resource" {
  driver_type = "humanitec/terraform"
  id          = "example-resource"
  name        = "example-resource"
  type        = "some-resource-type"

  driver_inputs = {
    # This example uses secret references, pointing at a secret store
    # to obtain the actual values
    secret_refs = jsonencode({

      source = {
        # Using the ssh_key for a connection to the Git repo via SSH
        ssh_key = {
          ref   = var.ssh_key_secret_reference_key
          store = var.secret_store
        }
      }

      variables = {
        # ...
      }
    })

    values_string = jsonencode({
      # Connection information to the target Git repo
      source = {
        path = "some-resource-type/terraform"
        rev  = "refs/heads/main"
        url  = "[email protected]:my-org/my-repo.git"
      }
      # ...
    })
  }
}

Runner

The Terraform Driver can be configured to execute the Terraform scripts as part of a Kubernetes Job execution in a target Kubernetes cluster, instead of in the Humanitec infrastructure. In this case, you must supply access data to the cluster to the Humanitec Platform Orchestrator.

The examples in this section show how to provide that data by referencing a k8s-cluster Resource Definition as part of the non-secret and secret fields of the runner object in the s3 Resource Definition based on the Terraform Driver.

  • k8s-cluster-refs.tf: provides a connection to an EKS cluster.
  • s3-ext-runner-refs.tf: uses the runner configuration to run the Terraform Runner in the external cluster specified by k8s-cluster-refs.tf and provision an S3 bucket. It configures the Runner to run Terraform scripts from a private Git repository whose code initializes a Terraform s3 backend via Environment Variables (see the backend sketch after this list).
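
For context, the Terraform code in that repository would declare an s3 backend without hardcoded credentials; the Runner picks them up from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables set through credentials_config. A minimal sketch, with bucket, key, and region values as placeholders:

terraform {
  backend "s3" {
    # Credentials are supplied via the AWS_ACCESS_KEY_ID and
    # AWS_SECRET_ACCESS_KEY environment variables, so none appear here
    bucket = "my-terraform-state-bucket"
    key    = "humanitec/s3-bucket.tfstate"
    region = "eu-central-1"
  }
}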

k8s-cluster-refs.tf (view on GitHub) :

resource "humanitec_resource_definition" "eks_resource_cluster" {
  id          = "eks-cluster"
  name        = "eks-cluster"
  type        = "k8s-cluster"
  driver_type = "humanitec/k8s-cluster-eks"

  driver_inputs = {
    secrets = {
      credentials = {
        aws_access_key_id     = var.aws_access_key_id
        aws_secret_access_key = var.aws_secret_access_key
      }
    }

    values = {
      name                     = "my-cluster"
      region                   = "eu-central-1"
      loadbalancer             = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
      loadbalancer_hosted_zone = "ABC0DEF5WYYZ00"
    }
  }
}


resource "humanitec_resource_definition_criteria" "eks_resource_cluster" {
  resource_definition_id = humanitec_resource_definition.eks_resource_cluster.id
  class                  = "runner"
}


s3-ext-runner-refs.tf (view on GitHub) :

resource "humanitec_resource_definition" "aws_terraform_external_runner_resource_s3_bucket" {
  id          = "aws-terrafom-ext-runner-s3-bucket"
  name        = "aws-terrafom-ext-runner-s3-bucket"
  type        = "s3"
  driver_type = "humanitec/terraform"
  # The driver_account references a Cloud Account configured in the Platform Orchestrator.
  # Replace with the name of your AWS Cloud Account.
  # The account is used to provide credentials to the Terraform script via environment variables to access the TF state.
  driver_account = "my-aws-account"

  driver_inputs = {
    secrets = {
      # Secret info of the cluster where the Terraform Runner should run.
      # This references a k8s-cluster resource that will be matched by class `runner`.
      runner = jsonencode({
        credentials = "$${resources['k8s-cluster.runner'].outputs.credentials}"
      })

      source = jsonencode({
        ssh_key = var.ssh_key
      })
    }

    values = {
      # This instructs the driver that the Runner must run in an external cluster.
      runner_mode = "custom-kubernetes"
      # Non-secret info of the cluster where the Terraform Runner should run.
      # This references a k8s-cluster resource that will be matched by class `runner`.
      runner = jsonencode({
        cluster_type = "eks"
        cluster = {
          region                   = "$${resources['k8s-cluster.runner'].outputs.region}"
          name                     = "$${resources['k8s-cluster.runner'].outputs.name}"
          loadbalancer             = "$${resources['k8s-cluster.runner'].outputs.loadbalancer}"
          loadbalancer_hosted_zone = "$${resources['k8s-cluster.runner'].outputs.loadbalancer_hosted_zone}"
        }
        # Service Account created following: https://developer.humanitec.com/integration-and-extensions/drivers/generic-drivers/terraform/#runner-object
        service_account = "humanitec-tf-runner-sa"
        namespace       = "humanitec-tf-runner"
      })

      # Configure the way we provide account credentials to the Terraform scripts in the referenced repository.
      # These credentials are related to the `driver_account` configured above.
      credentials_config = jsonencode({

        # Terraform script Variables. 
        variables = {
          ACCESS_KEY_ID     = "AccessKeyId"
          SECRET_ACCESS_KEY = "SecretAccessKey"
        }

        # Environment Variables.
        environment = {
          AWS_ACCESS_KEY_ID     = "AccessKeyId"
          AWS_SECRET_ACCESS_KEY = "SecretAccessKey"
        }
      })

      # Connection information to the Git repo containing the Terraform code.
      # It will provide a backend configuration initialized via Environment Variables.
      source = jsonencode({
        path = "s3/terraform/bucket/"
        rev  = "refs/heads/main"
        url  = "my-domain.com:my-org/my-repo.git"
      })


      variables = jsonencode({
        # Provide a separate bucket per Application and Environment
        bucket = "my-company-my-app-$${context.app.id}-$${context.env.id}"
        region = var.region
      })

    }
  }
}

Runner pod configuration

As in the previous section, the Terraform Driver is configured to execute the Terraform scripts as part of a Kubernetes Job execution in a target Kubernetes cluster, instead of in the Humanitec infrastructure. This requires supplying access data for that cluster to the Humanitec Platform Orchestrator.

The examples in this section show how to provide that data by referencing a k8s-cluster Resource Definition as part of the non-secret and secret fields of the runner object in the azure-blob-account Resource Definition based on the Terraform Driver. They also show how to apply labels to the Runner Pod so that it can run with an Azure Workload Identity, removing the need to explicitly set Azure credentials in the Resource Definition or use a Driver Account.

  • k8s-cluster-refs.tf: provides a connection to an AKS cluster.
  • azure-blob-account.tf: uses the runner configuration to run the Terraform Runner in the external cluster specified by k8s-cluster-refs.tf and provision an Azure Blob Storage account. It configures the Runner to run Terraform scripts from a private Git repository whose code initializes a Terraform azurerm backend. Neither a driver account nor secret credentials are used here, because the Runner pod is configured to run with a Workload Identity associated with the specified Service Account via the runner.runner_pod_template property (see the Service Account sketch after this list).
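
For reference, the Service Account referenced by the runner object would carry the Workload Identity annotation mentioned in the example comments. A minimal sketch, where the client ID is a placeholder for your Microsoft Entra application (client) ID:

# Sketch of the Kubernetes Service Account used by the Runner pod
apiVersion: v1
kind: ServiceAccount
metadata:
  name: humanitec-tf-runner-sa
  namespace: humanitec-tf-runner
  annotations:
    # Placeholder: the Microsoft Entra application (client) ID to federate with
    azure.workload.identity/client-id: "00000000-0000-0000-0000-000000000000"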

azure-blob-account.tf (view on GitHub) :

resource "humanitec_resource_definition" "azure_blob_account" {
  driver_type = "humanitec/terraform"
  id          = "azure-blob-account-basic"
  name        = "azure-blob-account-basic"
  type        = "azure-blob-account"

  driver_inputs = {
    secrets_string = jsonencode({
      # Secret info of the cluster where the Terraform Runner should run.
      # This references a k8s-cluster resource that will be matched by class `runner`.
      runner = {
        credentials = "$${resources['k8s-cluster.runner'].outputs.credentials}"
      }

      source = {
        ssh_key = var.ssh_key
      }
    })

    values_string = jsonencode({
      append_logs_to_error = true

      # This instructs the driver that the Runner must be run in an external cluster.
      runner_mode = "custom-kubernetes"
      # Non-secret info of the cluster where the Terraform Runner should run.
      # This references a k8s-cluster resource that will be matched by class `runner`.
      runner = {
        cluster_type = "aks"
        cluster = {
          region                   = "$${resources['k8s-cluster.runner'].outputs.region}"
          name                     = "$${resources['k8s-cluster.runner'].outputs.name}"
          loadbalancer             = "$${resources['k8s-cluster.runner'].outputs.loadbalancer}"
          loadbalancer_hosted_zone = "$${resources['k8s-cluster.runner'].outputs.loadbalancer_hosted_zone}"
        }
        # Service Account created following: https://developer.humanitec.com/integration-and-extensions/drivers/generic-drivers/terraform/#runner-object
        # In this example, the Service Account needs to be annotated to specify the Microsoft Entra application client ID to be used with the pod: https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet#service-account-labels-and-annotations
        service_account = "humanitec-tf-runner-sa"
        namespace       = "humanitec-tf-runner"
        # This instructs the driver that the Runner pod must run with a workload identity.
        runner_pod_template = <<EOT
metadata:
  labels:
    azure.workload.identity/use: "true"
EOT
      }

      # Connection information to the Git repo containing the Terraform code.
      # It will provide a Terraform azurerm backend configuration.
      source = {
        path = "modules/azure-blob-account/basic"
        rev  = var.resource_packs_azure_rev
        url  = var.resource_packs_azure_url
      }
      variables = {
        res_id = "$${context.res.id}"
        app_id = "$${context.app.id}"
        env_id = "$${context.env.id}"

        subscription_id          = var.subscription_id
        resource_group_name      = var.resource_group_name
        name                     = var.name
        prefix                   = var.prefix
        account_tier             = var.account_tier
        account_replication_type = var.account_replication_type
      }
    })
  }
}


k8s-cluster-refs.tf (view on GitHub) :

resource "humanitec_resource_definition" "aks_aad_resource_cluster" {
  id          = "aad-enabled-cluster"
  name        = "aad-enabled-cluster"
  type        = "k8s-cluster"
  driver_type = "humanitec/k8s-cluster-aks"

  driver_inputs = {
    secrets = {
      credentials = {
        appId       = var.app_id
        displayName = var.display_name
        password    = var.password
        tenant      = var.tenant
      }
    }

    values = {
      name            = "my-cluster"
      resource_group  = "my-azure-resource-group"
      subscription_id = "123456-1234-1234-1234-123456789"
      server_app_id   = "6dae42f8-4368-4678-94ff-3960e28e3630"
    }
  }
}


resource "humanitec_resource_definition_criteria" "aks_aad_resource_cluster" {
  resource_definition_id = humanitec_resource_definition.aks_aad_resource_cluster.id
  class                  = "runner"
}

S3

Use the Terraform Driver to provision Amazon S3 bucket resources.

  • public-git-repo.tf: uses a publicly accessible Git repo to find the Terraform code.
  • private-git-repo.tf: uses a private Git repo requiring authentication to find the Terraform code.
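
Both examples expect Terraform code under s3/terraform/bucket/ in the Git repository. A minimal sketch of what that code might look like, consuming the variables the definitions pass in; the assume_role wiring is an assumption for illustration:

# Sketch of the Terraform code the examples expect in the Git repository
variable "access_key" {
  sensitive = true
}
variable "secret_key" {
  sensitive = true
}
variable "region" {}
variable "bucket" {}
variable "assume_role_arn" {}

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region     = var.region
  access_key = var.access_key
  secret_key = var.secret_key

  # Assumption: the role to assume is passed in by the definitions above
  assume_role {
    role_arn = var.assume_role_arn
  }
}

resource "aws_s3_bucket" "bucket" {
  bucket = var.bucket
}

output "bucket" {
  value = aws_s3_bucket.bucket.bucket
}

output "region" {
  value = var.region
}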

private-git-repo.tf (view on GitHub) :

resource "humanitec_resource_definition" "aws_terraform_resource_s3_bucket" {
  id          = "aws-terrafom-s3-bucket"
  name        = "aws-terrafom-s3-bucket"
  type        = "s3"
  driver_type = "humanitec/terraform"

  driver_inputs = {

    secrets = {
      variables = jsonencode({
        access_key = var.access_key
        secret_key = var.secret_key

      })
      source = jsonencode({
        # Provide either an SSH key (for SSH connection) or password (for HTTPS).
        ssh_key  = var.ssh_key
        password = var.password
      })
    }
    
    values = {
      # Connection information to the Git repo containing the Terraform code
      "source" = jsonencode(
        {
          path = "s3/terraform/bucket/"
          rev  = "refs/heads/main"
          url  = "https://my-domain.com/my-org/my-repo.git"
          # url  = "[email protected]:my-org/my-repo.git" # For SSH access instead of HTTPS
        }
      )
      "variables" = jsonencode(
        {
          # Provide a separate bucket per Application and Environment
          bucket          = "my-company-my-app-$${context.app.id}-$${context.env.id}"
          region          = var.region
          assume_role_arn = var.assume_role_arn
        }
      )
    }
  }
}


public-git-repo.tf (view on GitHub) :

resource "humanitec_resource_definition" "aws_terraform_resource_s3_bucket" {
  id          = "aws-terrafom-s3-bucket"
  name        = "aws-terrafom-s3-bucket"
  type        = "s3"
  driver_type = "humanitec/terraform"

  driver_inputs = {

    secrets = {
      variables = jsonencode({
        access_key = var.access_key
        secret_key = var.secret_key
      })
    }

    values = {
      # Connection information to the Git repo containing the Terraform code
      # The repo must not require authentication
      "source" = jsonencode(
        {
          path = "s3/terraform/bucket/"
          rev  = "refs/heads/main"
          url  = "https://my-domain.com/my-org/my-repo.git"
        }
      )
      "variables" = jsonencode(
        {
          # Provide a separate bucket per Application and Environment
          bucket          = "my-company-my-app-$${context.app.id}-$${context.env.id}"
          region          = var.region
          assume_role_arn = var.assume_role_arn
        }
      )
    }
  }
}
