Terraform Driver

Resource Definitions using the Terraform Driver

This section contains example Resource Definitions using the Terraform Driver.

Azure blob

Use the Terraform Driver to provision Azure Blob storage resources.

  • ssh-secret-refs.tf: uses secret references to obtain an SSH key from a secret store to connect to the Git repo providing the Terraform code.

ssh-secret-refs.tf (view on GitHub) :

resource "humanitec_resource_definition" "azure-blob" {
  driver_type = "humanitec/terraform"
  id          = "azure-blob"
  name        = "azure-blob"
  type        = "azure-blob"

  driver_inputs = {
    # All secrets are read from a secret store using secret references
    secret_refs = jsonencode({
      variables = {
        client_id = {
          ref   = var.client_id_secret_reference_key
          store = var.secret_store
        }
        client_secret = {
          ref   = var.client_secret_secret_reference_key
          store = var.secret_store
        }
      }

      source = {
        # Using an SSH key to authenticate against the Git repo providing the Terraform module
        ssh_key = {
          ref   = var.ssh_key_secret_reference_key
          store = var.secret_store
        }
      }
    })

    values_string = jsonencode({
      source = {
        path = "azure-blob/terraform/"
        rev  = "refs/heads/main"
        url  = "[email protected]:my-org/my-repo.git"
      }

      variables = {
        # Variables for the Terraform module located in the Git repo
        tenant_id                = var.tenant_id
        subscription_id          = var.subscription_id
        resource_group_name      = var.resource_group_name
        name                     = var.name
        prefix                   = var.prefix
        account_tier             = var.account_tier
        account_replication_type = var.account_replication_type
        container_name           = var.container_name
        container_access_type    = var.container_access_type
      }
    })
  }
}

Backends

Humanitec manages the state file for the local backend. This is the backend that is used if no backend is specified.

To manage your own state, you need to define your own backend. We recommend defining the backend configuration in the script part of the Resource Definition, i.e. as an override.tf file (see the Inputs of the Terraform Driver). This allows the backend to be tuned per resource instance.

To centralize configuration, it is also recommended to create a config resource that manages the backend configuration in one place.
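
As a minimal sketch (not a complete Resource Definition), a backend defined in the script input could look like this. The bucket, key, and region values are illustrative; the full s3-backend.yaml example below shows how to pull them from a config resource instead:

driver_inputs:
  values:
    script: |-
      # Acts like an override.tf file: replaces the default local backend
      terraform {
        backend "s3" {
          bucket = "my-terraform-state-bucket"
          key    = "tf-state/${context.app.id}/${context.env.id}/${context.res.id}"
          region = "us-east-1"
        }
      }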

In this example, two config resources are defined. Both use the Template Driver to generate outputs for use in the example Resource Definition:

  • backend-config.yaml which provides shared backend configuration that can be used across Resource Definitions.
  • account-config-aws.yaml which provides credentials used by the provider.

The example Resource Definition s3-backend.yaml does the following:

  • Configures a backend using the backend-config.yaml.
  • Configures the provider using a different set of credentials from account-config-aws.yaml.
  • Provisions an s3 bucket.

account-config-aws.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: account-config-aws
entity:
  criteria:
    # This res_id is used in the resource reference in the s3-backend Resource Definition.
    - res_id: aws-account

  # The driver_account references a Cloud Account configured in the Platform Orchestrator.
  # Replace with the name of your AWS Cloud Account.
  driver_account: aws-credentials

  driver_inputs:
    values:
      templates:
        secrets: |
          aws_access_key_id: {{ .driver.secrets.account.aws_access_key_id }}
          aws_secret_access_key: {{ .driver.secrets.account.aws_secret_access_key }}
          credentials_file: |
            [default]
            aws_access_key_id = {{ .driver.secrets.account.aws_access_key_id }}
            aws_secret_access_key = {{ .driver.secrets.account.aws_secret_access_key }}
  driver_type: humanitec/template
  name: account-config-aws
  type: config


backend-config.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: tf-backend-config
entity:
  criteria:
    # This res_id is used in the resource reference in the s3-backend Resource Definition.
    - res_id: tf-backend

  # The driver_account references a Cloud Account configured in the Platform Orchestrator.
  # Replace with the name of your AWS Cloud Account.
  driver_account: aws-credentials

  driver_inputs:
    values:
      templates:
        outputs: |
          bucket: my-terraform-state-bucket
          key_prefix:  "tf-state/"
          region: us-east-1
        secrets: |
          credentials_file: |
            [default]
            aws_access_key_id = {{ .driver.secrets.account.aws_access_key_id }}
            aws_secret_access_key = {{ .driver.secrets.account.aws_secret_access_key }}
  driver_type: humanitec/template
  name: tf-backend-config
  type: config


s3-backend.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: s3-backend-example
entity:
  driver_inputs:
    # We are using secret references to write the credentials using their "value" element.
    # Using "secrets" instead would work too, but due to the placeholders in the values, the
    # Platform Orchestrator will resolve them to the exact secret references used here
    # in the resulting Resource Definition.
    # This structure therefore represents the way the Platform Orchestrator manages the Resource Definition
    # and is better suited to any round-trip engineering, if needed.
    secret_refs:
      files:
        # Credentials for the AWS provider
        aws_creds:
          # Using the resource ID "#aws-account" to fulfill the matching criteria of the "account-config-aws" config resource.  
          value: ${resources.config#aws-account.outputs.credentials_file}
        
        # In general, the credentials for the backend should be different from those of the provider
        backend_creds:
          # Using the resource ID "#tf-backend" to fulfill the matching criteria of the "tf-backend-config" config resource.
          value: ${resources.config#tf-backend.outputs.credentials_file}
    values:
      script: |-
        variable "region" {}

        terraform {
          backend "s3" {
            bucket = "${resources.config#tf-backend.outputs.bucket}"
            key    = "${resources.config#tf-backend.outputs.prefix}${context.app.id}/${context.env.id}/${context.res.type}.${context.res.class}/${context.res.id}"
            region = "${resources.config#tf-backend.outputs.region}"
            shared_credentials_files = ["backend_creds"]
          }

          required_providers {
            aws = {
              source = "hashicorp/aws"
            }
          }
        }

        provider "aws" {
          region     = var.region

          # The file is defined above. The provider will read the credentials from this file.
          shared_credentials_files = ["aws_creds"]
        }
        
        output "bucket" {
          value = aws_s3_bucket.bucket.bucket
        }

        output "region" {
          value = var.region
        }

        resource "aws_s3_bucket" "bucket" {
          bucket = "$\{replace("${context.res.id}", "^.*\.", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
          tags = {
            Humanitec = true
          }
        }

      variables:
        region: us-east-1
  driver_type: humanitec/terraform
  name: s3-backend-example
  type: s3

  # Supply matching criteria
  criteria: []

Resource co-provisioning

This section contains example Resource Definitions using the Terraform Driver that illustrate the co-provisioning concept.

Scenario: For each AWS S3 bucket resource, an AWS IAM policy resource must be created. The bucket properties (region, ARN) should be passed to the policy resource. In other words, an IAM Policy resource depends on an S3 resource, but it needs to be created automatically.

Any time a Workload references an S3 resource using this Resource Definition, an IAM Policy resource will be co-provisioned and will reference the S3 resource (see the sketch of the mechanism after the diagram). The resulting Resource Graph will look like this:

flowchart LR
  R1(Workload) --->|references| R2(S3)
  N1(AWS Policy) --->|references| R2
  classDef pClass stroke-width:1px
  classDef rClass stroke-width:2px
  classDef nClass stroke-width:2px,stroke-dasharray: 5 5
  class R1 pClass
  class R2 rClass
  class N1 nClass
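
The following minimal sketch isolates the two pieces involved; the complete definitions follow below. The s3 Resource Definition declares which resource type to co-provision, and the aws-policy Resource Definition reads the outputs of the s3 resource that triggered it:

# In the s3 Resource Definition: request co-provisioning of an aws-policy resource
provision:
  aws-policy:
    is_dependent: false

# In the aws-policy Resource Definition: reference outputs of the triggering s3 resource
driver_inputs:
  values:
    variables:
      BUCKET:     ${resources.s3.outputs.bucket}
      BUCKET_ARN: ${resources.s3.outputs.arn}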

aws-policy-co-provision.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aws-policy-co-provision
entity:
  type: aws-policy
  driver_type: humanitec/terraform
  # Use the credentials injected via the driver_account to set variables as expected by your Terraform code
  driver_account: aws
  driver_inputs:
    values:
      variables:
        REGION: ${resources.s3.outputs.region}
        BUCKET: ${resources.s3.outputs.bucket}
        BUCKET_ARN: ${resources.s3.outputs.arn}
      credentials_config:
        variables:
          ACCESS_KEY_ID: AccessKeyId
          ACCESS_KEY_VALUE: SecretAccessKey
      script: |-
        # This provider block is using the Terraform variables
        # set through the credentials_config.
        # Variable declarations omitted for brevity.
        provider "aws" {
          region     = var.REGION
          access_key = var.ACCESS_KEY_ID
          secret_key = var.ACCESS_KEY_VALUE
        }

        # ... Terraform code reduced for brevity

        resource "aws_iam_policy" "bucket" {
          name        = "${var.BUCKET}-policy"
          policy      = data.aws_iam_policy_document.main.json
        }
  
        data "aws_iam_policy_document" "main" {
          statement {
            effect = "Allow"
  
            actions = [
              "s3:GetObject",
              "s3:ListBucket",
            ]
  
            resources = [
              var.BUCKET_ARN,
            ]
          }
        }


s3-co-provision.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: s3-co-provision
entity:
  name: s3-co-provision
  type: s3
  driver_type: humanitec/terraform
  # Use the credentials injected via the driver_account to set variables as expected by your Terraform code
  driver_account: aws
  driver_inputs:
    values:
      variables:
        REGION: eu-central-1
      credentials_config:
        variables:
          ACCESS_KEY_ID: AccessKeyId
          ACCESS_KEY_VALUE: SecretAccessKey
      script: |-
        # This provider block is using the Terraform variables
        # set through the credentials_config.
        # Variable declarations omitted for brevity.
        provider "aws" {
          region     = var.REGION
          access_key = var.ACCESS_KEY_ID
          secret_key = var.ACCESS_KEY_VALUE
        }

        # ... Terraform code reduced for brevity

        resource "aws_s3_bucket" "bucket" {
          bucket = my-bucket
        }
        
        output "bucket" {
          value = aws_s3_bucket.main.id
        }
  
        output "arn" {
          value = aws_s3_bucket.main.arn
        }
          
        output "region" {
          value = aws_s3_bucket.main.region
        }
  # Co-provision aws-policy resource
  provision:
    aws-policy:
      is_dependent: false

Credentials

Different Terraform providers have different ways of being configured. Generally, there are three ways that providers can be configured, which map onto the Driver's credentials_config input as sketched after this list:

  • Directly using parameters on the provider. We call this “provider” credentials.
  • Using a credentials file. The filename is supplied to the provider. We call this “file” credentials.
  • Via environment variables that the provider reads. We call this “environment” credentials.
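
The condensed sketch below shows the corresponding credentials_config form for each method (illustrative only; the complete Resource Definitions further down show each one in context):

# "provider" credentials: injected as Terraform variables
credentials_config:
  variables:
    access_key_id: AccessKeyId
    secret_access_key: SecretAccessKey

# "environment" credentials: injected as environment variables for the Terraform run
credentials_config:
  environment:
    AWS_ACCESS_KEY_ID: AccessKeyId
    AWS_SECRET_ACCESS_KEY: SecretAccessKey

# "file" credentials: written to a file whose name is passed to the provider
credentials_config:
  file: credentials.json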

A powerful approach for working with different cloud accounts for the same Resource Definition is to reference the credentials from a config resource. By using matching criteria on the config resource, it is possible to specialize the account used in the Terraform code for different contexts. For example, there might be different AWS accounts for staging and production environments. The same Resource Definition can then manage the Terraform code, while two config resources are created matching the staging and production environments respectively, as sketched below.
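
As a sketch, an environment-specific config Resource Definition could look like the following, assuming the env_type matching criteria field and illustrative account and resource names. A second definition differing only in its criteria and driver_account would cover production:

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: account-config-aws-staging
entity:
  criteria:
    # Matched only for staging-type Environments
    - res_id: aws-account
      env_type: staging
  # Cloud Account holding the staging credentials (illustrative name)
  driver_account: aws-credentials-staging
  driver_inputs:
    values:
      region: us-east-1
  driver_type: humanitec/echo
  name: account-config-aws-staging
  type: config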

In this set of examples, we provide two config Resource Definitions for AWS and GCP.

AWS

  • Account config (account-config-aws.yaml)
  • Provider Credentials (aws-provider-credentials.yaml)
  • Environment Credentials (aws-environment-credentials.yaml)

GCP

  • Account config (account-config-gcp.yaml)
  • File Credentials (gcp-file-credentials.yaml)

account-config-aws.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: account-config-aws
entity:
  criteria:
    # This res_id is used in the resource reference in the s3-backend Resource Definition.
    - res_id: aws-account

  # The driver_account references a Cloud Account configured in the Platform Orchestrator.
  # Replace with the name of your AWS Cloud Account.
  driver_account: aws-credentials

  driver_inputs:
    values:
      region: us-east-1

  driver_type: humanitec/echo
  name: account-config-aws
  type: config


account-config-gcp.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: account-config-gcp
entity:
  criteria:
    # This res_id is used in the resource reference in the gcp-file-credentials Resource Definition.
    - res_id: gcp-account

  # The driver_account references a Cloud Account configured in the Platform Orchestrator.
  # Replace with the name of your GCP Cloud Account.
  driver_account: gcp-credentials

  driver_inputs:
    values:
      location: US
      project_id: my-gcp-project
  driver_type: humanitec/echo
  name: account-config-gcp
  type: config


aws-environment-credentials.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aws-environment-credentials
entity:
  # Use the same driver account as the one used by the referenced config resource.
  driver_account: ${resources['config.default#aws-account'].account}
  driver_inputs:
    values:
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: "AccessKeyId"
          AWS_SECRET_ACCESS_KEY: "SecretAccessKey"
          AWS_SESSION_TOKEN: "SessionToken"
      script: |-

        variable "region" {}

        terraform {
          required_providers {
            aws = {
              source = "hashicorp/aws"
            }
          }
        }

        provider "aws" {
          region = var.region
        }

        output "bucket" {
          value = aws_s3_bucket.bucket.bucket
        }

        output "region" {
          value = var.region
        }

        resource "aws_s3_bucket" "bucket" {
          bucket = "$\{replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
          tags = {
            Humanitec = true
          }
        }

      variables:
        region: ${resources['config.default#aws-account'].outputs.region}
  driver_type: humanitec/terraform
  name: aws-environment-credentials
  type: s3

  # Supply matching criteria
  criteria: []

aws-provider-credentials.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aws-provider-credentials
entity:
  # Use the same driver account as the one used by the referenced config resource.
  driver_account: ${resources['config.default#aws-account'].account}
  driver_inputs:
    values:
      credentials_config:
        variables:
          access_key_id: "AccessKeyId"
          secret_access_key: "SecretAccessKey"
          session_token: "SessionToken"
      script: |-

        variable "access_key_id" {
          sensitive = true
        }
        variable "secret_access_key" {
          sensitive = true
        }
        variable "session_token" {
          sensitive = true
        }
        variable "region" {}

        terraform {
          required_providers {
            aws = {
              source = "hashicorp/aws"
            }
          }
        }

        provider "aws" {
          region     = var.region
          access_key = var.access_key_id
          secret_key = var.secret_access_key
          token      = var.session_token
        }

        output "bucket" {
          value = aws_s3_bucket.bucket.bucket
        }

        output "region" {
          value = var.region
        }

        resource "aws_s3_bucket" "bucket" {
          bucket = "$\{replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
          tags = {
            Humanitec = true
          }
        }

      variables:
        region: ${resources['config.default#aws-account'].outputs.region}
  driver_type: humanitec/terraform
  name: aws-provider-credentials
  type: s3

  # Supply matching criteria
  criteria: []

gcp-file-credentials.yaml (view on GitHub) :

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: gcp-file-credentials
entity:
  driver_account: ${resources['config.default#gcp-account'].account}
  driver_inputs:
    values:
      credentials_config:
        file: credentials.json
      script: |-

        variable "project_id" {}
        variable "location" {}

        terraform {
          required_providers {
            google = {
              source = "hashicorp/google"
            }
          }
        }
        provider "google" {
          project     = var.project_id

          # The file is defined above. The provider will read a service account token from this file.
          credentials = "credentials.json"
        }
        
        output "name" {
          value = google_storage_bucket.bucket.name
        }

        resource "google_storage_bucket" "bucket" {
          name          = "$\{replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
          location      = var.location
          force_destroy = true
        }

      variables:
        location: ${resources.config#gcp-account.outputs.location}
        project_id: ${resources.config#gcp-account.outputs.project_id}

  driver_type: humanitec/terraform
  name: gcp-file-credentials
  type: gcs

  # Supply matching criteria
  criteria: []

Dynamic credentials

Using a Cloud Account type that supports dynamic credentials, those credentials can be easily injected into a Resource Definition using the Terraform Driver. Use a driver_account referencing the Cloud Account in the Resource Definition, and access its credentials through the supplied values as shown in the examples.

AWS

  • S3 bucket (s3-dynamic-credentials.yaml)

s3-dynamic-credentials.yaml (view on GitHub) :

# Provision an S3 bucket using dynamic credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: s3-dynamic-credentials
entity:
  name: s3-dynamic-credentials
  type: s3
  driver_type: humanitec/terraform
  # The driver_account references a Cloud Account of type "aws-role"
  # which needs to be configured for your Organization.
  driver_account: aws-temp-creds
  driver_inputs:
    values:
      variables:
        REGION: eu-central-1
      # Use the credentials injected via the driver_account
      # to set variables as expected by your Terraform code
      credentials_config:
        variables:
          ACCESS_KEY_ID: AccessKeyId
          ACCESS_KEY_VALUE: SecretAccessKey
          SESSION_TOKEN: SessionToken
      script: |-
        # This provider block is using the Terraform variables
        # set through the credentials_config.
        # Variable declarations omitted for brevity.
        provider "aws" {
          region     = var.REGION
          access_key = var.ACCESS_KEY_ID
          secret_key = var.ACCESS_KEY_VALUE
          token      = var.SESSION_TOKEN
        }

        # ... Terraform code reduced for brevity

        resource "aws_s3_bucket" "bucket" {
          bucket = "my-bucket"
        }

Private git repo

The Terraform Driver can access Terraform definitions stored in a Git repository. If the repository requires authentication, you must supply credentials to the Driver. The examples in this section show how to provide those as part of the secrets in the Resource Definition based on the Terraform Driver.

  • ssh-secret-refs.tf: uses secret references to obtain an SSH key from a secret store to connect to the Git repo providing the Terraform code.
  • https-secret-refs.tf: uses secret references to obtain an HTTPS password from a secret store to connect to the Git repo providing the Terraform code.

https-secret-refs.tf (view on GitHub) :

resource "humanitec_resource_definition" "example-resource" {
  driver_type = "humanitec/terraform"
  id          = "example-resource"
  name        = "example-resource"
  type        = "some-resource-type"

  driver_inputs = {
    # This example uses secret references, pointing at a secret store
    # to obtain the actual values
    secret_refs = jsonencode({

      source = {
        # Using the password for a connection to the Git repo via HTTPS
        password = {
          ref   = var.password_secret_reference_key
          store = var.secret_store
        }
      }

      variables = {
        # ...
      }
    })

    values_string = jsonencode({
      # Connection information to the target Git repo
      source = {
        path = "some-resource-type/terraform"
        rev  = "refs/heads/main"
        url  = "https://my-domain.com/my-org/my-repo.git"
      }
      # ...
    })
  }
}


ssh-secret-refs.tf (view on GitHub) :

resource "humanitec_resource_definition" "example-resource" {
  driver_type = "humanitec/terraform"
  id          = "example-resource"
  name        = "example-resource"
  type        = "some-resource-type"

  driver_inputs = {
    # This example uses secret references, pointing at a secret store
    # to obtain the actual values
    secret_refs = jsonencode({

      source = {
        # Using the ssh_key for a connection to the Git repo via SSH
        ssh_key = {
          ref   = var.ssh_key_secret_reference_key
          store = var.secret_store
        }
      }

      variables = {
        # ...
      }
    })

    values_string = jsonencode({
      # Connection information to the target Git repo
      source = {
        path = "some-resource-type/terraform"
        rev  = "refs/heads/main"
        url  = "[email protected]:my-org/my-repo.git"
      }
      # ...
    })
  }
}

Runner

The Terraform Driver can be configured to execute the Terraform scripts as part of a Kubernetes Job in a target Kubernetes cluster instead of in the Humanitec infrastructure. In this case, you must supply the cluster access data to the Humanitec Platform Orchestrator.

The examples in this section show how to provide that data by referencing a k8s-cluster Resource Definition in the non-secret and secret fields of the runner object of the s3 Resource Definition based on the Terraform Driver.

  • k8s-cluster-refs.tf: provides a connection to an EKS cluster.
  • s3-ext-runner-refs.tf: uses runner configuration to run the Terraform Runner in the external cluster specified by k8s-cluster-refs.tf and provision an S3 bucket. It configures the Runner to run Terraform scripts from a private Git repository whose code initializes a Terraform S3 backend via environment variables.

k8s-cluster-refs.tf (view on GitHub) :

resource "humanitec_resource_definition" "eks_resource_cluster" {
  id          = "eks-cluster"
  name        = "eks-cluster"
  type        = "k8s-cluster"
  driver_type = "humanitec/k8s-cluster-eks"

  driver_inputs = {
    secrets = {
      credentials = {
        aws_access_key_id     = var.aws_access_key_id
        aws_secret_access_key = var.aws_secret_access_key
      }
    }

    values = {
      loadbalancer             = "10.10.10.10"
      name                     = "my-cluster"
      region                   = "eu-central-1"
      loadbalancer             = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
      loadbalancer_hosted_zone = "ABC0DEF5WYYZ00"
    }
  }
}


resource "humanitec_resource_definition_criteria" "eks_resource_cluster" {
  resource_definition_id = humanitec_resource_definition.eks_resource_cluster.id
  class                  = "runner"
}


s3-ext-runner-refs.tf (view on GitHub) :

resource "humanitec_resource_definition" "aws_terraform_external_runner_resource_s3_bucket" {
  id          = "aws-terrafom-ext-runner-s3-bucket"
  name        = "aws-terrafom-ext-runner-s3-bucket"
  type        = "s3"
  driver_type = "humanitec/terraform"
  # The driver_account references a Cloud Account configured in the Platform Orchestrator.
  # Replace with the name of your AWS Cloud Account.
  # The account is used to provide credentials to the Terraform script via environment variables to access the TF state.
  driver_account = "my-aws-account"

  driver_inputs = {
    secrets = {
      # Secret info of the cluster where the Terraform Runner should run.
      # This references a k8s-cluster resource that will be matched by class `runner`.
      runner = jsonencode({
        credentials = "$${resources['k8s-cluster.runner'].outputs.credentials}"
      })

      source = jsonencode({
        ssh_key = var.ssh_key
      })
    }

    values = {
      # This instructs the driver that the Runner must run in an external cluster.
      runner_mode = "custom-kubernetes"
      # Non-secret info of the cluster where the Terraform Runner should run.
      # This references a k8s-cluster resource that will be matched by class `runner`.
      runner = jsonencode({
        cluster_type = "eks"
        cluster = {
          region                   = "$${resources['k8s-cluster.runner'].outputs.region}"
          name                     = "$${resources['k8s-cluster.runner'].outputs.name}"
          loadbalancer             = "$${resources['k8s-cluster.runner'].outputs.loadbalancer}"
          loadbalancer_hosted_zone = "$${resources['k8s-cluster.runner'].outputs.loadbalancer_hosted_zone}"
        }
        # Service Account created following: https://developer.humanitec.com/integration-and-extensions/drivers/generic-drivers/terraform/#runner-object
        service_account = "humanitec-tf-runner-sa"
        namespace       = "humanitec-tf-runner"
      })

      # Configure the way we provide account credentials to the Terraform scripts in the referenced repository.
      # These credentials are related to the `driver_account` configured above.
      credentials_config = jsonencode({

        # Terraform script Variables. 
        variables = {
          ACCESS_KEY_ID     = "AccessKeyId"
          SECRET_ACCESS_KEY = "SecretAccessKey"
        }

        # Environment Variables.
        environment = {
          AWS_ACCESS_KEY_ID     = "AccessKeyId"
          AWS_SECRET_ACCESS_KEY = "SecretAccessKey"
        }
      })

      # Connection information to the Git repo containing the Terraform code.
      # It will provide a backend configuration initialized via Environment Variables.
      source = jsonencode({
        path = "s3/terraform/bucket/"
        rev  = "refs/heads/main"
        url  = "my-domain.com:my-org/my-repo.git"
      })


      variables = jsonencode({
        # Provide a separate bucket per Application and Environment
        bucket = "my-company-my-app-$${context.app.id}-$${context.env.id}"
        region = var.region
      })

    }
  }
}

Runner pod configuration

The Terraform Driver can be configured to execute the Terraform scripts as part of a Kubernetes Job in a target Kubernetes cluster instead of in the Humanitec infrastructure. In this case, you must supply the cluster access data to the Humanitec Platform Orchestrator.

The examples in this section show how to provide that data by referencing a k8s-cluster Resource Definition in the non-secret and secret fields of the runner object of the azure-blob-account Resource Definition based on the Terraform Driver. They also show how to apply labels to the Runner pod so that it can run with an Azure Workload Identity, removing the need to explicitly set Azure credentials in the Resource Definition or use a Driver Account.

  • k8s-cluster-refs.tf: provides a connection to an AKS cluster.
  • azure-blob-account.tf: uses runner configuration to run the Terraform Runner in the external cluster specified by k8s-cluster-refs.tf and provision an Azure Blob Storage account. It configures the Runner to run Terraform scripts from a private Git repository that defines a Terraform azurerm backend. Neither a Driver Account nor secret credentials are used here, because the Runner pod is configured to run with a workload identity associated with the specified service account via the runner.runner_pod_template property.

azure-blob-account.tf (view on GitHub) :

resource "humanitec_resource_definition" "azure_blob_account" {
  driver_type = "humanitec/terraform"
  id          = "azure-blob-account-basic"
  name        = "azure-blob-account-basic"
  type        = "azure-blob-account"

  driver_inputs = {
    secrets_string = jsonencode({
      # Secret info of the cluster where the Terraform Runner should run.
      # This references a k8s-cluster resource that will be matched by class `runner`.
      runner = jsonencode({
        credentials = "$${resources['k8s-cluster.runner'].outputs.credentials}"
      })

      source = {
        ssh_key = var.ssh_key
      }
    })

    values_string = jsonencode({
      append_logs_to_error = true

      # This instructs the driver that the Runner must be run in an external cluster.
      runner_mode = "custom-kubernetes"
      # Non-secret info of the cluster where the Terraform Runner should run.
      # This references a k8s-cluster resource that will be matched by class `runner`.
      runner = {
        cluster_type = "aks"
        cluster = {
          region                   = "$${resources['k8s-cluster.runner'].outputs.region}"
          name                     = "$${resources['k8s-cluster.runner'].outputs.name}"
          loadbalancer             = "$${resources['k8s-cluster.runner'].outputs.loadbalancer}"
          loadbalancer_hosted_zone = "$${resources['k8s-cluster.runner'].outputs.loadbalancer_hosted_zone}"
        }
        # Service Account created following: https://developer.humanitec.com/integration-and-extensions/drivers/generic-drivers/terraform/#runner-object
        # In this example, the Service Account needs to be annotated to specify the Microsoft Entra application client ID to be used with the pod: https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet#service-account-labels-and-annotations
        service_account = "humanitec-tf-runner-sa"
        namespace       = "humanitec-tf-runner"
        # This instructs the driver that the Runner pod must run with a workload identity.
        runner_pod_template = <<EOT
metadata:
  labels:
    azure.workload.identity/use: "true"
EOT
      }

      # Connection information to the Git repo containing the Terraform code.
      # It will provide a backend configuration initialized via Environment Variables.
      source = {
        path = "modules/azure-blob-account/basic"
        rev  = var.resource_packs_azure_rev
        url  = var.resource_packs_azure_url
      }
      variables = {
        res_id = "$${context.res.id}"
        app_id = "$${context.app.id}"
        env_id = "$${context.env.id}"

        subscription_id          = var.subscription_id
        resource_group_name      = var.resource_group_name
        name                     = var.name
        prefix                   = var.prefix
        account_tier             = var.account_tier
        account_replication_type = var.account_replication_type
      }
    })
  }
}


k8s-cluster-refs.tf (view on GitHub) :

resource "humanitec_resource_definition" "aks_aad_resource_cluster" {
  id          = "aad-enabled-cluster"
  name        = "aad-enabled-cluster"
  type        = "k8s-cluster"
  driver_type = "humanitec/k8s-cluster-aks"

  driver_inputs = {
    secrets = {
      credentials = {
        appId       = var.app_id
        displayName = var.display_name
        password    = var.password
        tenant      = var.tenant
      }
    }

    values = {
      name            = "my-cluster"
      resource_group  = "my-azure-resource-group"
      subscription_id = "123456-1234-1234-1234-123456789"
      server_app_id   = "6dae42f8-4368-4678-94ff-3960e28e3630"
    }
  }
}


resource "humanitec_resource_definition_criteria" "aks_aad_resource_cluster" {
  resource_definition_id = humanitec_resource_definition.aks_aad_resource_cluster.id
  class                  = "runner"
}

S3

Use the Terraform Driver to provision Amazon S3 bucket resources.

  • public-git-repo.tf: uses a publicly accessible Git repo to find the Terraform code.
  • private-git-repo.tf: uses a private Git repo requiring authentication to find the Terraform code.

private-git-repo.tf (view on GitHub) :

resource "humanitec_resource_definition" "aws_terraform_resource_s3_bucket" {
  id          = "aws-terrafom-s3-bucket"
  name        = "aws-terrafom-s3-bucket"
  type        = "s3"
  driver_type = "humanitec/terraform"

  driver_inputs = {

    secrets = {
      variables = jsonencode({
        access_key = var.access_key
        secret_key = var.secret_key

      })
      source = jsonencode({
        # Provide either an SSH key (for SSH connection) or password (for HTTPS).
        ssh_key  = var.ssh_key
        password = var.password
      })
    }
    
    values = {
      # Connection information to the Git repo containing the Terraform code
      "source" = jsonencode(
        {
          path = "s3/terraform/bucket/"
          rev  = "refs/heads/main"
          url  = "https://my-domain.com/my-org/my-repo.git"
          # url  = "[email protected]:my-org/my-repo.git" # For SSH access instead of HTTPS
        }
      )
      "variables" = jsonencode(
        {
          # Provide a separate bucket per Application and Environment
          bucket          = "my-company-my-app-$${context.app.id}-$${context.env.id}"
          region          = var.region
          assume_role_arn = var.assume_role_arn
        }
      )
    }
  }
}


public-git-repo.tf (view on GitHub) :

resource "humanitec_resource_definition" "aws_terraform_resource_s3_bucket" {
  id          = "aws-terrafom-s3-bucket"
  name        = "aws-terrafom-s3-bucket"
  type        = "s3"
  driver_type = "humanitec/terraform"

  driver_inputs = {

    secrets = {
      variables = jsonencode({
        access_key = var.access_key
        secret_key = var.secret_key
      })
    }

    values = {
      # Connection information to the Git repo containing the Terraform code
      # The repo must not require authentication
      "source" = jsonencode(
        {
          path = "s3/terraform/bucket/"
          rev  = "refs/heads/main"
          url  = "https://my-domain.com/my-org/my-repo.git"
        }
      )
      "variables" = jsonencode(
        {
          # Provide a separate bucket per Application and Environment
          bucket          = "my-company-my-app-$${context.app.id}-$${context.env.id}"
          region          = var.region
          assume_role_arn = var.assume_role_arn
        }
      )
    }
  }
}
