Resource Definitions
This section contains example Resource Definitions.
Install any Resource Definition into your Humanitec Organization using the CLI and this command:
humctl create -f resource-definition-file.yaml
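For example, to install the namespace Resource Definition from the Echo Driver section below:
humctl create -f custom-namespace.yaml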
Echo driver
Resource Definitions using the Echo Driver
This section contains example Resource Definitions using the Echo Driver.
Namespace
This section contains example Resource Definitions using the Echo Driver for managing Kubernetes namespaces.
- custom-namespace.yaml: Shows how to use the Echo Driver to return the name of an externally managed namespace. This format is for use with the Humanitec CLI.
- custom-namespace.tf: Shows how to use the Echo Driver to return the name of an externally managed namespace. This format is for use with the Humanitec Terraform provider.
custom-namespace.tf (view on GitHub):
resource "humanitec_resource_definition" "namespace-echo" {
driver_type = "humanitec/echo"
id = "namespace-echo"
name = "namespace-echo"
type = "k8s-namespace"
driver_inputs = {
values_string = jsonencode({
"namespace" = "$${context.app.id}-$${context.env.id}"
})
}
}
resource "humanitec_resource_definition_criteria" "namespace-echo_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.namespace-echo.id
}
custom-namespace.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: namespace-echo
entity:
name: namespace-echo
type: k8s-namespace
driver_type: humanitec/echo
driver_inputs:
values:
namespace: "${context.app.id}-${context.env.id}"
criteria:
- {}
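As a minimal illustration of how the placeholders resolve: for a (hypothetical) Application with id my-app deployed to an Environment with id development, the Driver would return:
namespace: my-app-development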
Postgres
This section contains example Resource Definitions using the Echo Driver for PostgreSQL.
- postgres-secretstore.yaml: Shows how to use the Echo Driver and secret references to fetch database credentials from an external secret store. This format is for use with the Humanitec CLI.
postgres-secretstore.tf (view on GitHub):
resource "humanitec_resource_definition" "postgres-echo" {
driver_type = "humanitec/echo"
id = "postgres-echo"
name = "postgres-echo"
type = "postgres"
driver_inputs = {
values_string = jsonencode({
"name" = "my-database"
"host" = "products.postgres.dev.example.com"
"port" = 5432
})
secret_refs = jsonencode({
"username" = {
"store" = "my-gsm"
"ref" = "cloudsql-username"
}
"password" = {
"store" = "my-gsm"
"ref" = "cloudsql-password"
}
})
}
}
resource "humanitec_resource_definition_criteria" "postgres-echo_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.postgres-echo.id
}
postgres-secretstore.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: postgres-echo
entity:
name: postgres-echo
type: postgres
driver_type: humanitec/echo
driver_inputs:
values:
name: my-database
host: products.postgres.dev.example.com
port: 5432
secret_refs:
username:
store: my-gsm
ref: cloudsql-username
password:
store: my-gsm
ref: cloudsql-password
criteria:
- {}
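A workload can then consume these outputs through a resource dependency. The following Score snippet is a minimal sketch: the resource name db and the variable names are illustrative assumptions, while the placeholder syntax follows the Score specification:
apiVersion: score.dev/v1b1
metadata:
  name: my-workload
containers:
  app:
    image: my-image:latest
    variables:
      # Resolved from the postgres resource outputs at deployment time
      DB_HOST: ${resources.db.host}
      DB_PORT: ${resources.db.port}
      DB_NAME: ${resources.db.name}
resources:
  db:
    type: postgres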
Redis
This section contains example Resource Definitions using the Echo Driver for Redis.
- redis-secret-refs.yaml: Shows how to use the Echo Driver and secret references to provision a Redis resource. This format is for use with the Humanitec CLI.
redis-secret-refs.tf (view on GitHub):
resource "humanitec_resource_definition" "redis-echo" {
driver_type = "humanitec/echo"
id = "redis-echo"
name = "redis-echo"
type = "redis"
driver_inputs = {
values_string = jsonencode({
"host" = "0.0.0.0"
"port" = 6379
})
secret_refs = jsonencode({
"password" = {
"store" = "my-gsm"
"ref" = "redis-password"
}
"username" = {
"store" = "my-gsm"
"ref" = "redis-user"
}
})
}
}
resource "humanitec_resource_definition_criteria" "redis-echo_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.redis-echo.id
}
redis-secret-refs.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: redis-echo
entity:
name: redis-echo
type: redis
driver_type: humanitec/echo
driver_inputs:
values:
host: 0.0.0.0
port: 6379
secret_refs:
password:
store: my-gsm
ref: redis-password
username:
store: my-gsm
ref: redis-user
criteria:
- {}
Generic async driver
Resource Definitions using the Generic Async Driver
This section contains example Resource Definitions using the Generic Async Driver.
The requirements to make these Resource Definitions work with the Orchestrator are:
- The image supplied in the Generic Async Driver Definitions in values.job.image should adhere to the interface between Driver and Runner Image.
- The cluster chosen to run the Kubernetes Job should be properly configured.
Inline terraform
The Generic Async Driver executes a container supplied as input as part of a Kubernetes Job execution in a target Kubernetes cluster.
The example in this section shows:
- How to reference a config Resource Definition to provide the data needed to create a Kubernetes Job in the desired cluster.
- How to reference a config Resource Definition to create the job with the proper configuration.
- How to make the Kubernetes Job able to pull an image from a private registry.
- How to inject the cloud account credentials into the IaC code running in the container via the credentials_config object.
The example is made up of these files:
- k8s-cluster-runner-config.yaml: provides a connection to a GKE cluster.
- agent-runner.yaml: provides the configuration to access a private cluster via the Humanitec Agent.
- s3.yaml: in addition to referencing the config Resource Definition, it defines the Terraform scripts to run to provision an S3 bucket whose name is produced by appending a random suffix to the application and environment names. The supplied scripts use an AWS S3 bucket as the place to store the resource state.
agent-runner.yaml (view on GitHub):
# This Resource Definition specifies the Humanitec Agent to use for the Runner.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: agent-runner
entity:
driver_type: humanitec/agent
name: agent-runner
type: agent
driver_inputs:
values:
id: my-agent
criteria:
# Change to match the name of the environment type you want this to apply to
- env_type: development
class: runner
k8s-cluster-runner-config.yaml (view on GitHub):
# This Resource Definition provides configuration values for the Generic Async Driver.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: qa-testing-ground-generic-async
entity:
name: qa-testing-ground-generic-async
type: config
driver_type: humanitec/echo
driver_inputs:
values:
job:
# Change to match the image you built to run the IaC of your choice
image: ghcr.io/my-registry/generic-async-driver-runner:1.0.1
# Change to match the command to run your image or remove it if you want to use the image entrypoint
command: ["/opt/container"]
# Change to match the mount point of your shared directory
shared_directory: /home/runneruser/workspace
# Change to the namespace name you created to host the Kubernetes Job created by the Driver.
namespace: humanitec-runner
# Change to the service account name with permissions to create secrets/configmaps in the Kubernetes Job namespace you created.
service_account: humanitec-runner-job
# This assumes a secret with the given name exists in the desired namespace and it contains the credentials to pull the job image from the private registry.
pod_template: |
spec:
imagePullSecrets:
- name: ghcr-private-registry
# Change to match the configuration of your target cluster
cluster:
cluster_type: gke
account: my-org/my-gcp-cloud-account
cluster:
loadbalancer: 10.10.10.10
name: my-cluster
project_id: my-project
zone: europe-west2
internal_ip: true
# Change to match the desired agent (if any)
secret_refs:
agent_url:
value: ${resources['agent.default#agent'].outputs.url}
criteria:
# Change to match the name of the environment type you want this to apply to
- env_type: development
class: runner
s3.yaml (view on GitHub):
# This Resource Definition specifies an `s3` Resource to be provisioned through inline Terraform code.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aws-s3
entity:
name: aws-s3
type: s3
driver_type: humanitec/generic-async
driver_account: my-aws-cloud-account
driver_inputs:
values:
job: ${resources['config.runner'].outputs.job}
cluster:
cluster_type: ${resources['config.runner'].outputs.cluster.cluster_type}
account: ${resources['config.runner'].outputs.cluster.account}
cluster: ${resources['config.runner'].outputs.cluster.cluster}
# Needed to authenticate to the AWS Terraform provider in the TF code passed via the files inputs
credentials_config:
environment:
AWS_ACCESS_KEY_ID: AccessKeyId
AWS_SECRET_ACCESS_KEY: SecretAccessKey
files:
terraform.tfvars.json: |
{"REGION": "eu-west-3", "BUCKET": "${context.app.id}-${context.env.id}"}
# Change to match the backend of your choice.
backend.tf: |
terraform {
backend "s3" {
bucket = "my-s3-to-store-tf-state"
key = "${context.res.guresid}/state/terraform.tfstate"
region = "eu-west-3"
}
}
providers.tf: |
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.72.0"
}
}
}
vars.tf: |
variable "REGION" {
type = string
}
variable "BUCKET" {
type = string
}
main.tf: |
provider "aws" {
region = var.REGION
default_tags {
tags = {
CreatedBy = "Humanitec"
}
}
}
resource "random_string" "bucket_suffix" {
length = 5
special = false
upper = false
}
module "aws_s3" {
source = "terraform-aws-modules/s3-bucket/aws"
bucket = format("%s-%s", var.BUCKET, random_string.bucket_suffix.result)
acl = "private"
force_destroy = true
control_object_ownership = true
object_ownership = "BucketOwnerPreferred"
}
output "region" {
value = module.aws_s3.s3_bucket_region
}
output "bucket" {
value = module.aws_s3.s3_bucket_id
}
secret_refs:
cluster:
agent_url:
value: ${resources['config.runner'].outputs.agent_url}
criteria:
# Change to match the name of the environment type you want this to apply to
- env_type: development
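Once provisioned, the bucket and region outputs declared in main.tf can be consumed by a workload. A minimal Score sketch, where the resource name storage and the variable names are illustrative assumptions:
containers:
  app:
    image: my-image:latest
    variables:
      # Resolved from the s3 resource outputs at deployment time
      BUCKET_NAME: ${resources.storage.bucket}
      BUCKET_REGION: ${resources.storage.region}
resources:
  storage:
    type: s3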
Private git repo
The Generic Async Driver executes a container supplied as input as part of a Kubernetes Job execution in a target Kubernetes cluster.
The example in this section shows:
- How to reference a config Resource Definition to provide the data needed to create a Kubernetes Job in the desired cluster.
- How to reference a config Resource Definition to create the job with the proper configuration.
- How to make the Kubernetes Job able to pull an image from a private registry.
- How to inject the cloud account credentials into the IaC code running in the container via the credentials_config object.
- How to fetch the IaC scripts from a private repository, via non-secret and secret fields.
The example is made up of these files:
- k8s-cluster-runner-config.yaml: provides a connection to a GKE cluster.
- agent-runner.yaml: provides the configuration to access a private cluster via the Humanitec Agent.
- s3.yaml: in addition to referencing the config Resource Definition, it defines how to fetch the Terraform scripts from a private GitHub repository to provision an S3 bucket. It also configures, via a file input, an AWS S3 bucket as the place to store the resource state.
agent-runner.yaml (view on GitHub):
# This Resource Definition specifies the Humanitec Agent to use for the Runner.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: agent-runner
entity:
driver_type: humanitec/agent
name: agent-runner
type: agent
driver_inputs:
values:
id: my-agent
criteria:
# Change to match the name of the environment type you want this to apply to
- env_type: development
class: runner
k8s-cluster-runner-config.yaml (view on GitHub):
# This Resource Definition provides configuration values for the Generic Async Driver.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: qa-testing-ground-generic-async
entity:
name: qa-testing-ground-generic-async
type: config
driver_type: humanitec/echo
driver_inputs:
values:
job:
# Change to match the image you built to run the IaC of your choice
image: ghcr.io/my-registry/generic-async-driver-runner:1.0.1
# Change to match the command to run your image or remove it if you want to use the image entrypoint
command: ["/opt/container"]
# Change to match the mount point of your shared directory
shared_directory: /home/runneruser/workspace
# Change to the namespace name you created to host the Kubernetes Job created by the Driver.
namespace: humanitec-runner
# Change to the service account name with permissions to create secrets/configmaps in the Kubernetes Job namespace you created.
service_account: humanitec-runner-job
# This assumes a secret with the given name exists in the desired namespace and it contains the credentials to pull the job image from the private registry.
pod_template: |
spec:
imagePullSecrets:
- name: ghcr-private-registry
# Change to match the configuration of your target cluster
cluster:
cluster_type: gke
account: my-org/my-gcp-cloud-account
cluster:
loadbalancer: 10.10.10.10
name: my-cluster
project_id: my-project
zone: europe-west2
internal_ip: true
# Change to match the desired agent (if any)
secret_refs:
agent_url:
value: ${resources['agent.default#agent'].outputs.url}
criteria:
# Change to match the name of the environment type you want this to apply to
- env_type: development
class: runner
s3.yaml (view on GitHub):
# This Resource Definition specifies an `s3` Resource to be provisioned through Terraform code read from a private Git repository accessed via an SSH key.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aws-s3
entity:
name: aws-s3
type: s3
driver_type: humanitec/generic-async
driver_account: my-aws-cloud-account
driver_inputs:
values:
job: ${resources['config.runner'].outputs.job}
cluster:
cluster_type: ${resources['config.runner'].outputs.cluster.cluster_type}
account: ${resources['config.runner'].outputs.cluster.account}
cluster: ${resources['config.runner'].outputs.cluster.cluster}
# Needed to authenticate to the AWS Terraform provider in the TF code passed via the files inputs
credentials_config:
environment:
AWS_ACCESS_KEY_ID: AccessKeyId
AWS_SECRET_ACCESS_KEY: SecretAccessKey
# Change to match your repository
source:
path: path/to/my/iac/scripts
ref: refs/heads/main
url: [email protected]:my-org/my-repo.git
files:
terraform.tfvars.json: |
{"REGION": "eu-west-3", "BUCKET": "${context.app.id}-${context.env.id}"}
# Change to match the backend of your choice.
backend.tf: |
terraform {
backend "s3" {
bucket = "my-s3-to-store-tf-state"
key = "${context.res.guresid}/state/terraform.tfstate"
region = "eu-west-3"
}
}
secret_refs:
cluster:
agent_url:
value: ${resources['config.runner'].outputs.agent_url}
# Change to match where your ssh key is stored
source:
ssh_key:
store: my-secret-store
ref: my-path-to-git-ssh-key
criteria:
# Change to match the name of the environment type you want this to apply to
- env_type: development
Hpa
Horizontal pod autoscaler
This section contains a Resource Definition example for handling Kubernetes HorizontalPodAutoscaler objects by using the hpa Driver. If you have special requirements for your HorizontalPodAutoscaler implementation, you can see this other example using the template Driver.
You can find a Score file example using the horizontal-pod-autoscaler resource type here.
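For orientation, a minimal sketch of the Score resources section that requests such a scaler (the resource name scaler is an assumption):
resources:
  scaler:
    type: horizontal-pod-autoscaler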
hpa.tf (view on GitHub):
resource "humanitec_resource_definition" "hpa" {
driver_type = "humanitec/hpa"
id = "hpa"
name = "hpa"
type = "horizontal-pod-autoscaler"
driver_inputs = {
values_string = jsonencode({
"minReplicas" = 2
"maxReplicas" = 5
"targetCPUUtilizationPercentage" = 80
})
}
}
resource "humanitec_resource_definition_criteria" "hpa_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.hpa.id
}
hpa.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: hpa
entity:
driver_type: humanitec/hpa
name: hpa
type: horizontal-pod-autoscaler
driver_inputs:
values:
minReplicas: 2
maxReplicas: 5
targetCPUUtilizationPercentage: 80
criteria:
- {}
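With these inputs, the Driver renders a Kubernetes object roughly equivalent to the following sketch, assuming the autoscaling/v2 API and a Deployment named my-workload (both names are illustrative):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-workload
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-workload
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80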
Ingress
Creating Ingress objects
This section contains example Resource Definitions for creating Kubernetes Ingress objects using the Ingress Driver.
External dns cert manager
This section contains example Resource Definitions for using External DNS and Cert Manager by setting annotations in the Ingress.
- dns.yaml generates a DNS subdomain that can then be used by External DNS to create the DNS record.
- ingress.yaml creates a Kubernetes Ingress resource with the following annotations:
  - cert-manager.io/cluster-issuer - set to the cluster issuer already defined in the cluster. (Cert Manager Ingress Annotations)
  - external-dns.alpha.kubernetes.io/hostname - set to a resource reference to the DNS that the ingress is for. (External DNS Ingress Annotation Hostname)
Before using the above examples, ensure that:
- the External DNS and Cert Manager operators are installed and configured in the cluster (see the check below),
- the matching criteria in both Resource Definitions are updated,
- the Cluster Issuer annotation in the Ingress Resource Definition is updated, and
- the superdomain in the dns Resource Definition is updated.
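A quick cluster-side sanity check might look like this. This is a sketch: it assumes Cert Manager's CRDs are installed and that External DNS runs as a Deployment named external-dns in the external-dns namespace:
kubectl get clusterissuers
kubectl -n external-dns get deployment external-dns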
dns.tf (view on GitHub):
resource "humanitec_resource_definition" "external-dns-cert-manager-dns" {
driver_type = "humanitec/dns-wildcard"
id = "external-dns-cert-manager-dns"
name = "external-dns-cert-manager-dns"
type = "dns"
driver_inputs = {
values_string = jsonencode({
"domain" = "staging.hosted-domain.com"
"template" = "$${context.env.id}-$${context.app.id}"
})
}
provision = {
"ingress" = {
}
}
}
resource "humanitec_resource_definition_criteria" "external-dns-cert-manager-dns_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.external-dns-cert-manager-dns.id
app_id = "external-dns-cert-manager-example-app"
}
dns.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: external-dns-cert-manager-dns
entity:
driver_type: humanitec/dns-wildcard
driver_inputs:
values:
# Update to your DNS superdomain
domain: staging.hosted-domain.com
# Update to your preferred template for the subdomain
template: "${context.env.id}-${context.app.id}"
name: external-dns-cert-manager-dns
type: dns
provision:
ingress: {}
criteria:
# Change to match the name of the app you want this to apply to
- app_id: external-dns-cert-manager-example-app
ingress.tf (view on GitHub):
resource "humanitec_resource_definition" "external-dns-cert-manager-ingress" {
driver_type = "humanitec/ingress"
id = "external-dns-cert-manager-ingress"
name = "external-dns-cert-manager-ingress"
type = "ingress"
driver_inputs = {
values_string = jsonencode({
"annotations" = {
"cert-manager.io/cluster-issuer" = "my-cluster-certificate-issue"
"external-dns.alpha.kubernetes.io/hostname" = "$${resources.dns.outputs.host}"
}
"class" = "nginx"
"tls_secret_name" = "tls-cert-$${resources.dns.guresid}"
})
}
}
resource "humanitec_resource_definition_criteria" "external-dns-cert-manager-ingress_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.external-dns-cert-manager-ingress.id
app_id = "external-dns-cert-manager-example-app"
}
ingress.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: external-dns-cert-manager-ingress
entity:
driver_type: humanitec/ingress
driver_inputs:
values:
annotations:
# Replace with your Cert Manager Cluster Issuer
cert-manager.io/cluster-issuer: my-cluster-certificate-issue
external-dns.alpha.kubernetes.io/hostname: ${resources.dns.outputs.host}
class: nginx
# Use the Globally Unique RESource ID of the DNS resource in order to
# have a secret name that is unique to the DNS. Cert Manager will create
# a secret with this name.
tls_secret_name: tls-cert-${resources.dns.guresid}
name: external-dns-cert-manager-ingress
type: ingress
criteria:
# Change to match the name of the app you want this to apply to
- app_id: external-dns-cert-manager-example-app
Ingress
This section contains example Resource Definitions for handling Kubernetes ingress traffic using the Ingress Driver.
- ingress-alb.tf: defines an Ingress annotated for an internet-facing Amazon Application Load Balancer (ALB). This format is for use with the Humanitec Terraform provider.
- ingress-kong.yaml: defines an Ingress object annotated for the Kong Ingress Controller. This format is for use with the Humanitec CLI.
- ingress-openshift-operator.yaml: defines an Ingress object annotated for the OpenShift Container Platform Ingress Operator. This format is for use with the Humanitec CLI.
ingress-alb.tf (view on GitHub):
resource "humanitec_resource_definition" "alb-ingress" {
driver_type = "humanitec/ingress"
id = "alb-ingress"
name = "alb-ingress"
type = "ingress"
driver_inputs = {
values_string = jsonencode({
"annotations" = {
"alb.ingress.kubernetes.io/certificate-arn" = "arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx"
"alb.ingress.kubernetes.io/group.name" = "my-team.my-group"
"alb.ingress.kubernetes.io/listen-ports" = "[{\"HTTP\":80},{\"HTTPS\":443}]"
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/ssl-redirect" = "443"
"alb.ingress.kubernetes.io/target-type" = "ip"
}
"class" = "alb"
"no_tls" = true
})
}
}
ingress-alb.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
entity:
driver_inputs:
values:
annotations:
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx
alb.ingress.kubernetes.io/group.name: my-team.my-group
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80},{"HTTPS":443}]'
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/ssl-redirect: "443"
alb.ingress.kubernetes.io/target-type: ip
class: alb
no_tls: true
driver_type: humanitec/ingress
name: alb-ingress
type: ingress
kind: Definition
metadata:
id: alb-ingress
ingress-kong.tf (view on GitHub):
resource "humanitec_resource_definition" "kong-ingress" {
driver_type = "humanitec/ingress"
id = "kong-ingress"
name = "kong-ingress"
type = "ingress"
driver_inputs = {
values_string = jsonencode({
"annotations" = {
"konghq.com/preserve-host" = "false"
"konghq.com/strip-path" = "true"
}
"api_version" = "v1"
"class" = "kong"
})
}
}
ingress-kong.yaml (view on GitHub):
# This Resource Definition provisions an Ingress object for the Kong Ingress Controller
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: kong-ingress
entity:
name: kong-ingress
type: ingress
driver_type: humanitec/ingress
driver_inputs:
values:
annotations:
konghq.com/preserve-host: "false"
konghq.com/strip-path: "true"
api_version: v1
class: kong
ingress-openshift-operator.tf (view on GitHub):
resource "humanitec_resource_definition" "openshift-ingress" {
driver_type = "humanitec/ingress"
id = "openshift-ingress"
name = "openshift-ingress"
type = "ingress"
driver_inputs = {
values_string = jsonencode({
"class" = "openshift-default"
})
}
}
ingress-openshift-operator.yaml (view on GitHub):
# This Resource Definition provisions an Ingress object for the OpenShift Container Platform Ingress Operator
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: openshift-ingress
entity:
name: openshift-ingress
type: ingress
driver_type: humanitec/ingress
driver_inputs:
values:
class: openshift-default
K8s cluster
Connecting to generic Kubernetes clusters
This section contains example Resource Definitions for connecting to generic Kubernetes clusters of any kind beyond the managed solutions of the major cloud providers.
Credentials
Using static credentials
This section contains example Resource Definitions using static credentials for connecting to generic Kubernetes clusters.
- generic-k8s-client-certificate.tf: use a client certificate to connect to the cluster. This format is for use with the Humanitec Terraform provider.
- generic-k8s-client-certificate.yaml: use a client certificate to connect to the cluster. This format is for use with the Humanitec CLI.
generic-k8s-client-certificate.tf (view on GitHub):
resource "humanitec_resource_definition" "generic-k8s-static-credentials" {
driver_type = "humanitec/k8s-cluster"
id = "generic-k8s-static-credentials"
name = "generic-k8s-static-credentials"
type = "k8s-cluster"
driver_inputs = {
values_string = jsonencode({
"name" = "my-generic-k8s-cluster"
"loadbalancer" = "35.10.10.10"
"cluster_data" = {
"server" = "https://35.11.11.11:6443"
"certificate-authority-data" = "LS0t...ca-data....=="
}
})
secrets_string = jsonencode({
"credentials" = {
"client-certificate-data" = "LS0t...cert-data...=="
"client-key-data" = "LS0t...key-data...=="
}
})
}
}
generic-k8s-client-certificate.yaml (view on GitHub):
# Resource Definition for a generic Kubernetes cluster
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: generic-k8s-static-credentials
entity:
name: generic-k8s-static-credentials
type: k8s-cluster
driver_type: humanitec/k8s-cluster
driver_inputs:
values:
name: my-generic-k8s-cluster
loadbalancer: 35.10.10.10
cluster_data:
server: https://35.11.11.11:6443
# Single line base64-encoded cluster CA data in the format "LS0t...ca-data....=="
certificate-authority-data: "LS0t...ca-data....=="
secrets:
credentials:
# Single line base64-encoded client certificate data in the format "LS0t...cert-data...=="
client-certificate-data: "LS0t...cert-data...=="
# Single line base64-encoded client key data in the format "LS0t...key-data...=="
client-key-data: "LS0t...key-data...=="
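If the cluster credentials already exist in a local kubeconfig, the single-line base64 values can be extracted as in this sketch (the index 0 assumes the target cluster and user are the first entries in the kubeconfig):
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}'
kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}'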
K8s cluster aks
Connecting to AKS clusters
This section contains example Resource Definitions for connecting to AKS clusters.
Agent
Using the Humanitec Agent
This section contains example Resource Definitions using the Humanitec Agent for connecting to AKS clusters.
- aks-agent.yaml: uses a Cloud Account as well as the Humanitec Agent to access this private cluster. This format is for use with the Humanitec CLI.
- aks-agent.tf: uses a Cloud Account as well as the Humanitec Agent to access this private cluster. This format is for use with the Humanitec Terraform provider.
aks-agent.tf (view on GitHub):
resource "humanitec_resource_definition" "aks-agent" {
driver_type = "humanitec/k8s-cluster-aks"
id = "aks-agent"
name = "aks-agent"
type = "k8s-cluster"
driver_account = "azure-temporary"
driver_inputs = {
values_string = jsonencode({
"loadbalancer" = "20.10.10.10"
"name" = "demo-123"
"resource_group" = "my-resources"
"subscription_id" = "12345678-aaaa-bbbb-cccc-0987654321ba"
"server_app_id" = "6dae42f8-4368-4678-94ff-3960e28e3630"
})
secrets_string = jsonencode({
"agent_url" = "$${resources['agent#agent'].outputs.url}"
})
}
}
aks-agent.yaml (view on GitHub):
# AKS private cluster. It is to be accessed via the Humanitec Agent
# It is using a Cloud Account to obtain credentials
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aks-agent
entity:
name: aks-agent
type: k8s-cluster
# The driver_account is referring to a Cloud Account configured in your Organization
driver_account: azure-temporary
driver_type: humanitec/k8s-cluster-aks
driver_inputs:
secrets:
# Setting the URL for the Humanitec Agent
agent_url: "${resources['agent#agent'].outputs.url}"
values:
loadbalancer: 20.10.10.10
name: demo-123
resource_group: my-resources
subscription_id: 12345678-aaaa-bbbb-cccc-0987654321ba
# Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
server_app_id: 6dae42f8-4368-4678-94ff-3960e28e3630
Credentials
Using static credentials
This section contains example Resource Definitions using static credentials for connecting to AKS clusters.
- aks-static-credentials.yaml: use static credentials of a service principal defined via environment variables. This format is for use with the Humanitec CLI.
- aks-static-credentials-cloudaccount.yaml: use static credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.
Using temporary credentials
This section contains example Resource Definitions using temporary credentials for connecting to AKS clusters.
- aks-temporary-credentials.yaml: use temporary credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.
- aks-temporary-credentials.tf: uses temporary credentials defined via a Cloud Account. This format is for use with the Humanitec Terraform provider.
aks-static-credentials-cloudaccount.tf (view on GitHub):
resource "humanitec_resource_definition" "aks-static-credentials-cloudaccount" {
driver_type = "humanitec/k8s-cluster-aks"
id = "aks-static-credentials-cloudaccount"
name = "aks-static-credentials-cloudaccount"
type = "k8s-cluster"
driver_account = "azure-static-creds"
driver_inputs = {
values_string = jsonencode({
"loadbalancer" = "20.10.10.10"
"name" = "demo-123"
"resource_group" = "my-resources"
"subscription_id" = "12345678-aaaa-bbbb-cccc-0987654321ba"
"server_app_id" = "6dae42f8-4368-4678-94ff-3960e28e3630"
})
}
}
aks-static-credentials-cloudaccount.yaml (view on GitHub):
# Connect to an AKS cluster using static credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aks-static-credentials-cloudaccount
entity:
name: aks-static-credentials-cloudaccount
type: k8s-cluster
# The driver_account references a Cloud Account of type "azure"
# which needs to be configured for your Organization.
driver_account: azure-static-creds
driver_type: humanitec/k8s-cluster-aks
driver_inputs:
values:
loadbalancer: 20.10.10.10
name: demo-123
resource_group: my-resources
subscription_id: 12345678-aaaa-bbbb-cccc-0987654321ba
# Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
server_app_id: 6dae42f8-4368-4678-94ff-3960e28e3630
aks-static-credentials.tf (view on GitHub):
resource "humanitec_resource_definition" "aks-static-credentials" {
driver_type = "humanitec/k8s-cluster-aks"
id = "aks-static-credentials"
name = "aks-static-credentials"
type = "k8s-cluster"
driver_inputs = {
values_string = jsonencode({
"loadbalancer" = "20.10.10.10"
"name" = "demo-123"
"resource_group" = "my-resources"
"subscription_id" = "12345678-aaaa-bbbb-cccc-0987654321ba"
"server_app_id" = "6dae42f8-4368-4678-94ff-3960e28e3630"
})
secrets_string = jsonencode({
"credentials" = {
"appId" = "b520e4a8-6cb4-49dc-8f42-f3281dc2efe9"
"displayName" = "my-cluster-sp"
"password" = "my-cluster-sp-pw"
"tenant" = "9b8c7b62-aaaa-4444-ffff-0987654321fd"
}
})
}
}
aks-static-credentials.yaml (view on GitHub):
# NOTE: Providing inline credentials as shown in this example is discouraged and will be deprecated.
# Using a Cloud Account is the recommended approach instead.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aks-static-credentials
entity:
name: aks-static-credentials
type: k8s-cluster
driver_type: humanitec/k8s-cluster-aks
driver_inputs:
values:
loadbalancer: 20.10.10.10
name: demo-123
resource_group: my-resources
subscription_id: 12345678-aaaa-bbbb-cccc-0987654321ba
# Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
server_app_id: 6dae42f8-4368-4678-94ff-3960e28e3630
secrets:
# The "credentials" data correspond to the content of the output
# that Azure generates for a service principal
credentials:
appId: b520e4a8-6cb4-49dc-8f42-f3281dc2efe9
displayName: my-cluster-sp
password: my-cluster-sp-pw
tenant: 9b8c7b62-aaaa-4444-ffff-0987654321fd
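The credentials keys match the JSON that the Azure CLI prints when creating a service principal, for example (the service principal name is illustrative; assign a role and scope as appropriate):
az ad sp create-for-rbac --name my-cluster-sp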
aks-temporary-credentials.tf (view on GitHub):
resource "humanitec_resource_definition" "aks-temporary-credentials" {
driver_type = "humanitec/k8s-cluster-aks"
id = "aks-temporary-credentials"
name = "aks-temporary-credentials"
type = "k8s-cluster"
driver_account = "azure-temporary-creds"
driver_inputs = {
values_string = jsonencode({
"loadbalancer" = "20.10.10.10"
"name" = "demo-123"
"resource_group" = "my-resources"
"subscription_id" = "12345678-aaaa-bbbb-cccc-0987654321ba"
"server_app_id" = "6dae42f8-4368-4678-94ff-3960e28e3630"
})
}
}
aks-temporary-credentials.yaml (view on GitHub):
# Connect to an AKS cluster using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aks-temporary-credentials
entity:
name: aks-temporary-credentials
type: k8s-cluster
# The driver_account references a Cloud Account of type "azure-identity"
# which needs to be configured for your Organization.
driver_account: azure-temporary-creds
driver_type: humanitec/k8s-cluster-aks
driver_inputs:
values:
loadbalancer: 20.10.10.10
name: demo-123
resource_group: my-resources
subscription_id: 12345678-aaaa-bbbb-cccc-0987654321ba
# Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
server_app_id: 6dae42f8-4368-4678-94ff-3960e28e3630
K8s cluster eks
Connecting to EKS clusters
This section contains example Resource Definitions for connecting to EKS clusters.
Agent
Using the Humanitec Agent
This section contains example Resource Definitions using the Humanitec Agent for connecting to EKS clusters.
- eks-agent.yaml: uses a Cloud Account as well as the Humanitec Agent to access this private cluster. This format is for use with the Humanitec CLI.
- eks-agent.tf: uses a Cloud Account as well as the Humanitec Agent to access this private cluster. This format is for use with the Humanitec Terraform provider.
eks-agent.tf (view on GitHub):
resource "humanitec_resource_definition" "eks-agent" {
driver_type = "humanitec/k8s-cluster-eks"
id = "eks-agent"
name = "eks-agent"
type = "k8s-cluster"
driver_account = "aws-temp-creds"
driver_inputs = {
values_string = jsonencode({
"region" = "eu-central-1"
"name" = "demo-123"
"loadbalancer" = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
"loadbalancer_hosted_zone" = "ABC0DEF5WYYZ00"
})
secrets_string = jsonencode({
"agent_url" = "$${resources['agent#agent'].outputs.url}"
})
}
}
eks-agent.yaml (view on GitHub):
# EKS private cluster. It is to be accessed via the Humanitec Agent
# It is using a Cloud Account with temporary credentials
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: eks-agent
entity:
name: eks-agent
type: k8s-cluster
# The driver_account is referring to a Cloud Account configured in your Organization
driver_account: aws-temp-creds
driver_type: humanitec/k8s-cluster-eks
driver_inputs:
secrets:
# Setting the URL for the Humanitec Agent
agent_url: "${resources['agent#agent'].outputs.url}"
values:
region: eu-central-1
name: demo-123
loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
loadbalancer_hosted_zone: ABC0DEF5WYYZ00
Credentials
Using static credentials
This section contains example Resource Definitions using static credentials for connecting to EKS clusters.
- eks-static-credentials.yaml: use static credentials defined via environment variables. This format is for use with the Humanitec CLI.
- eks-static-credentials-cloudaccount.yaml: use static credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.
Using temporary credentials
This section contains example Resource Definitions using temporary credentials for connecting to EKS clusters.
- eks-temporary-credentials.yaml: use temporary credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.
- eks-temporary-credentials.tf: uses temporary credentials defined via a Cloud Account. This format is for use with the Humanitec Terraform provider.
eks-static-credentials-cloudaccount.tf (view on GitHub):
resource "humanitec_resource_definition" "eks-static-credentials-cloudaccount" {
driver_type = "humanitec/k8s-cluster-eks"
id = "eks-static-credentials-cloudaccount"
name = "eks-static-credentials-cloudaccount"
type = "k8s-cluster"
driver_account = "aws-static-creds"
driver_inputs = {
values_string = jsonencode({
"region" = "eu-central-1"
"name" = "demo-123"
"loadbalancer" = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
"loadbalancer_hosted_zone" = "ABC0DEF5WYYZ00"
})
}
}
eks-static-credentials-cloudaccount.yaml (view on GitHub):
# Connect to an EKS cluster using static credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: eks-static-credentials-cloudaccount
entity:
name: eks-static-credentials-cloudaccount
type: k8s-cluster
# The driver_account references a Cloud Account of type "aws"
# which needs to be configured for your Organization.
driver_account: aws-static-creds
# The driver_type k8s-cluster-eks automatically handles the static credentials
# injected via the driver_account.
driver_type: humanitec/k8s-cluster-eks
driver_inputs:
values:
region: eu-central-1
name: demo-123
loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
loadbalancer_hosted_zone: ABC0DEF5WYYZ00
eks-static-credentials.tf (view on GitHub):
resource "humanitec_resource_definition" "eks-static-credentials" {
driver_type = "humanitec/k8s-cluster-eks"
id = "eks-static-credentials"
name = "eks-static-credentials"
type = "k8s-cluster"
driver_inputs = {
values_string = jsonencode({
"region" = "eu-central-1"
"name" = "demo-123"
"loadbalancer" = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
"loadbalancer_hosted_zone" = "ABC0DEF5WYYZ00"
})
secrets_string = jsonencode({
"credentials" = {
"aws_access_key_id" = "my-access-key-id"
"aws_secret_access_key" = "my-secret-access-key"
}
})
}
}
eks-static-credentials.yaml (view on GitHub):
# NOTE: Providing inline credentials as shown in this example is discouraged and will be deprecated.
# Using a Cloud Account is the recommended approach instead.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: eks-static-credentials
entity:
name: eks-static-credentials
type: k8s-cluster
driver_type: humanitec/k8s-cluster-eks
driver_inputs:
values:
region: eu-central-1
name: demo-123
loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
loadbalancer_hosted_zone: ABC0DEF5WYYZ00
secrets:
credentials:
aws_access_key_id: my-access-key-id
aws_secret_access_key: my-secret-access-key
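The credentials correspond to an IAM access key pair, such as one created with the AWS CLI (the user name is illustrative):
aws iam create-access-key --user-name my-humanitec-user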
eks-temporary-credentials.tf (view on GitHub):
resource "humanitec_resource_definition" "eks-temporary-credentials" {
driver_type = "humanitec/k8s-cluster-eks"
id = "eks-temporary-credentials"
name = "eks-temporary-credentials"
type = "k8s-cluster"
driver_account = "aws-temp-creds"
driver_inputs = {
values_string = jsonencode({
"region" = "eu-central-1"
"name" = "demo-123"
"loadbalancer" = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
"loadbalancer_hosted_zone" = "ABC0DEF5WYYZ00"
})
}
}
eks-temporary-credentials.yaml (view on GitHub):
# Connect to an EKS cluster using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: eks-temporary-credentials
entity:
name: eks-temporary-credentials
type: k8s-cluster
# The driver_account references a Cloud Account of type "aws-role"
# which needs to be configured for your Organization.
driver_account: aws-temp-creds
# The driver_type k8s-cluster-eks automatically handles the temporary credentials
# injected via the driver_account.
driver_type: humanitec/k8s-cluster-eks
driver_inputs:
values:
region: eu-central-1
name: demo-123
loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
loadbalancer_hosted_zone: ABC0DEF5WYYZ00
K8s cluster git
Connecting to a Git repository (GitOps mode)
This section contains example Resource Definitions for connecting to a Git repository to push application CRs (GitOps mode).
Credentials
Using static credentials
This section contains example Resource Definitions using static credentials for connecting to a Git repository in GitOps mode.
- github-for-gitops.yaml: use static credentials defined via GitHub variables. This format is for use with the Humanitec CLI.
github-for-gitops.tf (view on GitHub):
resource "humanitec_resource_definition" "github-for-gitops" {
driver_type = "humanitec/k8s-cluster-git"
id = "github-for-gitops"
name = "github-for-gitops"
type = "k8s-cluster"
driver_inputs = {
values_string = jsonencode({
"url" = "[email protected]:example-org/gitops-repo.git"
"branch" = "development"
"path" = "$${context.app.id}/$${context.env.id}"
"loadbalancer" = "35.10.10.10"
})
secrets_string = jsonencode({
"credentials" = {
"ssh_key" = "my-git-ssh-key"
}
})
}
}
github-for-gitops.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: github-for-gitops
entity:
name: github-for-gitops
driver_type: humanitec/k8s-cluster-git
type: k8s-cluster
driver_inputs:
values:
# Git repository for pushing manifests
url: [email protected]:example-org/gitops-repo.git
# Branch in the git repository, optional. If not specified, the default branch is used.
branch: development
# Path in the git repository, optional. If not specified, the root is used.
path: "${context.app.id}/${context.env.id}"
# Load Balancer, optional. Though it's not related to the git, it's used to create ingress in the target K8s cluster.
loadbalancer: 35.10.10.10
secrets:
credentials:
ssh_key: my-git-ssh-key
# Alternative to ssh_key: password or Personal Account Token
# password: my-git-ssh-pat
Runtime
Connecting to a Git repository (GitOps mode)
This section contains example Resource Definitions for connecting to a Git repository to push application CRs in GitOps mode.
It also shows how to specify the non-GitOps cluster to which the GitOps operator deploys workloads.
To retrieve the status of deployed workloads, the Orchestrator searches for a k8s-cluster Resource with the Id k8s-cluster-runtime. If it doesn’t find this resource, it defaults to the k8s-cluster Resource with the Id k8s-cluster. When the default cluster is a GitOps-managed cluster, an additional non-GitOps cluster is required to gather runtime information for workloads deployed by the GitOps operator. This example demonstrates that process.
The namespace name where the Orchestrator will look for the Kubernetes objects to gather runtime information is retrieved from the Active Resource of type k8s-namespace. Note that in GitOps mode, the namespace is an externally managed Resource, i.e. the Platform Orchestrator does not create the namespace. It is recommended to use a namespace Resource Definition based on the Echo Driver to reflect this fact. This also means that any customization of the namespace, such as adding specific labels, must be managed externally.
The following chart illustrates the setup. Because k8s-cluster is an implicit Resource Type, one such Resource is always matched for any Deployment. This creates the Resource representing the GitOps cluster, using the k8s-cluster-git Driver. That Resource co-provisions another Resource of type k8s-cluster with the Id k8s-cluster-runtime and the k8s-cluster-gke Driver, representing the runtime cluster.
flowchart LR
subgraph implicitResources[Implicit Resources]
k8s-namespace(id: k8s-namespace<br/>type: k8s-namespace<br/>Driver: echo)
workload(id: workload<br/>type: workload) ~~~ k8s-cluster-gitops(id: k8s-cluster<br/>type: k8s-cluster<br/>Driver: k8s-cluster-git)
end
k8s-cluster-gitops --o|co-provision| k8s-cluster-runtime(id: k8s-cluster-runtime<br/>type: k8s-cluster<br/>Driver: k8s-cluster-gke)
platformOrchestrator{Platform<br/>Orchestrator} -.->|Deploy K8s CRs to<br/>Git repo through| k8s-cluster-gitops
platformOrchestrator -.->|Determine<br/>namespace from| k8s-namespace
platformOrchestrator -.->|Obtain runtime information from| k8s-cluster-runtime
These files make up the example:
- github-for-gitops.yaml: contains configuration for connecting to a Git repository. This Resource Definition co-provisions a GKE cluster to be used to fetch runtime information; the k8s-cluster-runtime Id is used in the co-provision key. This format is for use with the Humanitec CLI.
- gke-temporary-credentials-runtime.yaml: uses temporary credentials defined via a Cloud Account. The Resource Id specified in the matching criteria is k8s-cluster-runtime, which ensures that this Definition will be matched to provision the k8s-cluster Resource co-provisioned by the GitOps cluster Resource Definition. This format is for use with the Humanitec CLI. This runtime Resource Definition can optionally use the Humanitec Agent to access runtime information on a private cluster. This requires the Agent to be configured to access the cluster, and the corresponding Agent Resource Definition (next item) to be matched in the agent_url property. See the documentation for details.
- gke-agent.yaml: defines the Resource for the Humanitec Agent. Relevant only if the Agent is being used to access the runtime cluster.
- custom-namespace.yaml: shows how to use the Echo Driver to return the name of an externally managed namespace that must match the namespace where the GitOps operator creates the resources. This format is for use with the Humanitec CLI.
custom-namespace.tf (view on GitHub):
resource "humanitec_resource_definition" "namespace-echo" {
driver_type = "humanitec/echo"
id = "namespace-echo"
name = "namespace-echo"
type = "k8s-namespace"
driver_inputs = {
values_string = jsonencode({
"namespace" = "$${context.app.id}-$${context.env.id}"
})
}
}
resource "humanitec_resource_definition_criteria" "namespace-echo_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.namespace-echo.id
}
custom-namespace.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: namespace-echo
entity:
name: namespace-echo
type: k8s-namespace
driver_type: humanitec/echo
driver_inputs:
values:
namespace: "${context.app.id}-${context.env.id}"
criteria:
- {}
github-for-gitops.tf (view on GitHub):
resource "humanitec_resource_definition" "github-for-gitops" {
driver_type = "humanitec/k8s-cluster-git"
id = "github-for-gitops"
name = "github-for-gitops"
type = "k8s-cluster"
driver_inputs = {
values_string = jsonencode({
"url" = "[email protected]:example-org/gitops-repo.git"
"branch" = "development"
"path" = "$${context.app.id}/$${context.env.id}"
"loadbalancer" = "35.10.10.10"
})
secrets_string = jsonencode({
"credentials" = {
"ssh_key" = "my-git-ssh-key"
}
})
}
provision = {
"k8s-cluster#k8s-cluster-runtime" = {
is_dependent = false
match_dependents = false
}
}
}
github-for-gitops.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: github-for-gitops
entity:
name: github-for-gitops
driver_type: humanitec/k8s-cluster-git
type: k8s-cluster
driver_inputs:
values:
# Git repository for pushing manifests
url: [email protected]:example-org/gitops-repo.git
# When using a GitHub personal access token, use the HTTPS URL:
# url: https://github.com/example-org/gitops-repo.git
# Branch in the git repository, optional. If not specified, the default branch is used.
branch: development
# Path in the git repository, optional. If not specified, the root is used.
path: "${context.app.id}/${context.env.id}"
# Load Balancer, optional. Though it's not related to the GitOps setup, it's used to create ingress in the target K8s cluster if such resources are part of the Resource Graph, just like with a non-GitOps cluster.
loadbalancer: 35.10.10.10
secrets:
credentials:
ssh_key: my-git-ssh-key
# Alternative to ssh_key: password or Personal Account Token
# password: my-git-ssh-pat
# To co-provision a non-GitOps cluster resource from which the Orchestrator will fetch runtime info.
# The provision key specifies `k8s-cluster-runtime` as Resource Id and it must be used in the non-GitOps cluster resource definition Matching Criteria.
provision:
k8s-cluster#k8s-cluster-runtime:
is_dependent: false
match_dependents: false
gke-agent.tf (view on GitHub):
resource "humanitec_resource_definition" "gke-agent" {
driver_type = "humanitec/agent"
id = "gke-agent"
name = "gke-agent"
type = "agent"
driver_inputs = {
values_string = jsonencode({
"id" = "gke-agent"
})
}
}
resource "humanitec_resource_definition_criteria" "gke-agent_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.gke-agent.id
env_type = "development"
res_id = "agent"
}
gke-agent.yaml (view on GitHub):
# This Resource Definition describes the Humanitec Agent to match for the runtime cluster
# if the Agent is being used
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: gke-agent
entity:
type: agent
name: gke-agent
driver_type: humanitec/agent
driver_inputs:
values:
# This property must match the Agent id as it is configured in the Platform Orchestrator
id: gke-agent
# Set matching criteria so that it is matched along with the runtime cluster Resource Definition
criteria:
- env_type: development
res_id: agent
gke-temporary-credentials-runtime.tf (view on GitHub):
resource "humanitec_resource_definition" "gke-temporary-credentials" {
driver_type = "humanitec/k8s-cluster-gke"
id = "gke-temporary-credentials"
name = "gke-temporary-credentials"
type = "k8s-cluster"
driver_account = "gcp-temporary-creds"
driver_inputs = {
values_string = jsonencode({
"loadbalancer" = "35.10.10.10"
"name" = "demo-123"
"zone" = "europe-west2-a"
"project_id" = "my-gcp-project"
})
secrets_string = jsonencode({
"agent_url" = "$${resources['agent#agent'].outputs.url}"
})
}
}
resource "humanitec_resource_definition_criteria" "gke-temporary-credentials_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.gke-temporary-credentials.id
res_id = "k8s-cluster-runtime"
}
gke-temporary-credentials-runtime.yaml (view on GitHub):
# Connect to a GKE cluster using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: gke-temporary-credentials
entity:
name: gke-temporary-credentials
type: k8s-cluster
# The driver_account references a Cloud Account of type "gcp-identity"
# which needs to be configured for your Organization.
driver_account: gcp-temporary-creds
driver_type: humanitec/k8s-cluster-gke
driver_inputs:
values:
loadbalancer: 35.10.10.10
name: demo-123
zone: europe-west2-a
project_id: my-gcp-project
secrets:
# Optional: set this property to use the Humanitec Agent for accessing runtime information
# if the target cluster is private. This requires the Agent to be configured
# for the cluster and the proper Agent Resource Definition to be matched.
agent_url: "${resources['agent#agent'].outputs.url}"
criteria:
- res_id: k8s-cluster-runtime
K8s cluster gke
Connecting to GKE clusters
This section contains example Resource Definitions for connecting to GKE clusters.
Agent
Using the Humanitec Agent
This section contains example Resource Definitions using the Humanitec Agent for connecting to GKE clusters.
- gke-agent.yaml: uses a Cloud Account as well as the Humanitec Agent to access this private cluster. This format is for use with the Humanitec CLI.
- gke-agent.tf: uses a Cloud Account as well as the Humanitec Agent to access this private cluster. This format is for use with the Humanitec Terraform provider.
gke-agent.tf (view on GitHub):
resource "humanitec_resource_definition" "gke-agent" {
driver_type = "humanitec/k8s-cluster-gke"
id = "gke-agent"
name = "gke-agent"
type = "k8s-cluster"
driver_account = "gcp-temporary-creds"
driver_inputs = {
values_string = jsonencode({
"loadbalancer" = "35.10.10.10"
"name" = "demo-123"
"zone" = "europe-west2-a"
"project_id" = "my-gcp-project"
})
secrets_string = jsonencode({
"agent_url" = "$${resources['agent#agent'].outputs.url}"
})
}
}
gke-agent.yaml (view on GitHub):
# GKE private cluster. It is to be accessed via the Humanitec Agent
# It is using a Cloud Account with temporary credentials
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: gke-agent
entity:
name: gke-agent
type: k8s-cluster
# The driver_account is referring to a Cloud Account configured in your Organization
driver_account: gcp-temporary-creds
driver_type: humanitec/k8s-cluster-gke
driver_inputs:
secrets:
# Setting the URL for the Humanitec Agent
agent_url: "${resources['agent#agent'].outputs.url}"
values:
loadbalancer: 35.10.10.10
name: demo-123
zone: europe-west2-a
project_id: my-gcp-project
Credentials
Using static credentials
This section contains example Resource Definitions using static credentials for connecting to GKE clusters.
- gke-static-credentials.yaml: use static credentials defined via environment variables. This format is for use with the Humanitec CLI.
- gke-static-credentials-cloudaccount.yaml: use static credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.
Using temporary credentials
This section contains example Resource Definitions using temporary credentials for connecting to GKE clusters.
- gke-temporary-credentials.yaml: use temporary credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.
- gke-temporary-credentials.tf: uses temporary credentials defined via a Cloud Account. This format is for use with the Humanitec Terraform provider.
gke-static-credentials-cloudaccount.tf (view on GitHub):
resource "humanitec_resource_definition" "gke-static-credentials-cloudaccount" {
driver_type = "humanitec/k8s-cluster-gke"
id = "gke-static-credentials-cloudaccount"
name = "gke-static-credentials-cloudaccount"
type = "k8s-cluster"
driver_account = "gcp-static-creds"
driver_inputs = {
values_string = jsonencode({
"loadbalancer" = "35.10.10.10"
"name" = "demo-123"
"zone" = "europe-west2-a"
"project_id" = "my-gcp-project"
})
}
}
gke-static-credentials-cloudaccount.yaml (view on GitHub):
# Connect to a GKE cluster using static credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: gke-static-credentials-cloudaccount
entity:
name: gke-static-credentials-cloudaccount
type: k8s-cluster
# The driver_account references a Cloud Account of type "gcp"
# which needs to be configured for your Organization.
driver_account: gcp-static-creds
driver_type: humanitec/k8s-cluster-gke
driver_inputs:
values:
loadbalancer: 35.10.10.10
name: demo-123
zone: europe-west2-a
project_id: my-gcp-project
gke-static-credentials.tf (view on GitHub):
resource "humanitec_resource_definition" "gke-static-credentials" {
driver_type = "humanitec/k8s-cluster-gke"
id = "gke-static-credentials"
name = "gke-static-credentials"
type = "k8s-cluster"
driver_inputs = {
values_string = jsonencode({
"loadbalancer" = "35.10.10.10"
"name" = "demo-123"
"zone" = "europe-west2-a"
"project_id" = "my-gcp-project"
})
secrets_string = jsonencode({
"credentials" = {
"type" = "service_account"
"project_id" = "my-gcp-project"
"private_key_id" = "48b483fbf1d6e80fb4ac1a4626eb5d8036e3520f"
"private_key" = "my-private-key"
"client_id" = "206964217359046819490"
"client_email" = "[email protected]"
"auth_uri" = "https://accounts.google.com/o/oauth2/auth"
"token_uri" = "https://oauth2.googleapis.com/token"
"auth_provider_x509_cert_url" = "https://www.googleapis.com/oauth2/v1/certs"
"client_x509_cert_url" = "https://www.googleapis.com/robot/v1/metadata/x509/my-service-account%40my-gcp-project.iam.gserviceaccount.com"
}
})
}
}
gke-static-credentials.yaml (view on GitHub):
# NOTE: Providing inline credentials as shown in this example is discouraged and will be deprecated.
# Using a Cloud Account is the recommended approach instead.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: gke-static-credentials
entity:
name: gke-static-credentials
type: k8s-cluster
driver_type: humanitec/k8s-cluster-gke
driver_inputs:
values:
loadbalancer: 35.10.10.10
name: demo-123
zone: europe-west2-a
project_id: my-gcp-project
secrets:
# The "credentials" data correspond to the content of the credentials.json
# that Google Cloud generates for a service account key
credentials:
type: service_account
project_id: my-gcp-project
# Example private_key_id: 48b483fbf1d6e80fb4ac1a4626eb5d8036e3520f
private_key_id: 48b483fbf1d6e80fb4ac1a4626eb5d8036e3520f
# Example private_key in one line: -----BEGIN PRIVATE KEY-----\\n...key...data...\\n...key...data...\\n...\\n-----END PRIVATE KEY-----\\n
private_key: my-private-key
# Example client_id: 206964217359046819490
client_id: "206964217359046819490"
client_email: [email protected]
auth_uri: https://accounts.google.com/o/oauth2/auth
token_uri: https://oauth2.googleapis.com/token
auth_provider_x509_cert_url: https://www.googleapis.com/oauth2/v1/certs
client_x509_cert_url: https://www.googleapis.com/robot/v1/metadata/x509/my-service-account%40my-gcp-project.iam.gserviceaccount.com
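The credentials block mirrors the credentials.json that Google Cloud generates for a service account key, for example (the service account name is illustrative):
gcloud iam service-accounts keys create credentials.json \
  --iam-account=my-service-account@my-gcp-project.iam.gserviceaccount.com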
gke-temporary-credentials.tf (view on GitHub):
resource "humanitec_resource_definition" "gke-temporary-credentials" {
driver_type = "humanitec/k8s-cluster-gke"
id = "gke-temporary-credentials"
name = "gke-temporary-credentials"
type = "k8s-cluster"
driver_account = "gcp-temporary-creds"
driver_inputs = {
values_string = jsonencode({
"loadbalancer" = "35.10.10.10"
"name" = "demo-123"
"zone" = "europe-west2-a"
"project_id" = "my-gcp-project"
})
}
}
gke-temporary-credentials.yaml (view on GitHub):
# Connect to a GKE cluster using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: gke-temporary-credentials
entity:
name: gke-temporary-credentials
type: k8s-cluster
# The driver_account references a Cloud Account of type "gcp-identity"
# which needs to be configured for your Organization.
driver_account: gcp-temporary-creds
driver_type: humanitec/k8s-cluster-gke
driver_inputs:
values:
loadbalancer: 35.10.10.10
name: demo-123
zone: europe-west2-a
project_id: my-gcp-project
Template driver
Resource Definitions using the Template Driver
This section contains example Resource Definitions using the Template Driver.
Add sidecar
Add a sidecar to workloads using the workload resource
The workload Resource Type can be used to make updates to resources before they are deployed into the cluster. In this example, a Resource Definition implementing the workload Resource Type is used to inject the Open Telemetry agent as a sidecar into every workload. In addition to adding the sidecar, it also adds an environment variable called OTEL_EXPORTER_OTLP_ENDPOINT to each container running in the workload.
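As a sketch: for a Workload with a single container using the hypothetical container ID main, the outputs template below would render this update for the workload resource:

update:
- op: add
  path: /spec/containers/main/variables/OTEL_EXPORTER_OTLP_ENDPOINT
  value: http://localhost:4317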
otel-sidecar.tf (view on GitHub):
resource "humanitec_resource_definition" "otel-sidecar" {
driver_type = "humanitec/template"
id = "otel-sidecar"
name = "otel-sidecar"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
{{- /*
The "update" output is passed into the corresponding "update" output of the "workload" Resource Type.
*/ -}}
update:
{{- /*
Add the variable OTEL_EXPORTER_OTLP_ENDPOINT to all containers
*/ -}}
{{- range $containerId, $value := .resource.spec.containers }}
- op: add
path: /spec/containers/{{ $containerId }}/variables/OTEL_EXPORTER_OTLP_ENDPOINT
value: http://localhost:4317
{{- end }}
END_OF_TEXT
"manifests" = {
"sidecar.yaml" = {
"location" = "containers"
"data" = <<END_OF_TEXT
{{- /*
The Open Telemetry container as a sidecar in the workload
*/ -}}
command:
- "/otelcol"
- "--config=/conf/otel-agent-config.yaml"
image: otel/opentelemetry-collector:0.94.0
name: otel-agent
resources:
limits:
cpu: 500m
memory: 500Mi
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 55679 # ZPages endpoint.
- containerPort: 4317 # Default OpenTelemetry receiver port.
- containerPort: 8888 # Metrics.
env:
- name: GOMEMLIMIT
value: 400MiB
volumeMounts:
- name: otel-agent-config-vol
mountPath: /conf
END_OF_TEXT
}
"sidecar-volume.yaml" = {
"location" = "volumes"
"data" = <<END_OF_TEXT
{{- /*
A volume that is used to surface the config file
*/ -}}
configMap:
name: otel-agent-conf-{{ .id }}
items:
- key: otel-agent-config
path: otel-agent-config.yaml
name: otel-agent-config-vol
END_OF_TEXT
}
"otel-config-map.yaml" = {
"location" = "namespace"
"data" = <<END_OF_TEXT
{{- /*
The config file for the Open Telemetry agent. Notice that its name includes the GUResID
*/ -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: otel-agent-conf-{{ .id }}
labels:
app: opentelemetry
component: otel-agent-conf
data:
otel-agent-config: |
receivers:
otlp:
protocols:
grpc:
endpoint: localhost:4317
http:
endpoint: localhost:4318
exporters:
otlp:
endpoint: "otel-collector.default:4317"
tls:
insecure: true
sending_queue:
num_consumers: 4
queue_size: 100
retry_on_failure:
enabled: true
processors:
batch:
memory_limiter:
# 80% of maximum memory up to 2G
limit_mib: 400
# 25% of limit up to 2G
spike_limit_mib: 100
check_interval: 5s
extensions:
zpages: {}
service:
extensions: [zpages]
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [otlp]
END_OF_TEXT
}
}
}
})
}
}
otel-sidecar.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: otel-sidecar
entity:
name: otel-sidecar
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
{{- /*
The "update" output is passed into the corresponding "update" output of the "workload" Resource Type.
*/ -}}
update:
{{- /*
Add the variable OTEL_EXPORTER_OTLP_ENDPOINT to all containers
*/ -}}
{{- range $containerId, $value := .resource.spec.containers }}
- op: add
path: /spec/containers/{{ $containerId }}/variables/OTEL_EXPORTER_OTLP_ENDPOINT
value: http://localhost:4317
{{- end }}
manifests:
sidecar.yaml:
location: containers
data: |
{{- /*
The Open Telemetry container as a sidecar in the workload
*/ -}}
command:
- "/otelcol"
- "--config=/conf/otel-agent-config.yaml"
image: otel/opentelemetry-collector:0.94.0
name: otel-agent
resources:
limits:
cpu: 500m
memory: 500Mi
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 55679 # ZPages endpoint.
- containerPort: 4317 # Default OpenTelemetry receiver port.
- containerPort: 8888 # Metrics.
env:
- name: GOMEMLIMIT
value: 400MiB
volumeMounts:
- name: otel-agent-config-vol
mountPath: /conf
sidecar-volume.yaml:
location: volumes
data: |
{{- /*
A volume that is used to surface the config file
*/ -}}
configMap:
name: otel-agent-conf-{{ .id }}
items:
- key: otel-agent-config
path: otel-agent-config.yaml
name: otel-agent-config-vol
otel-config-map.yaml:
location: namespace
data: |
{{- /*
The config file for the Open Telemetry agent. Notice that its name includes the GUResID
*/ -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: otel-agent-conf-{{ .id }}
labels:
app: opentelemetry
component: otel-agent-conf
data:
otel-agent-config: |
receivers:
otlp:
protocols:
grpc:
endpoint: localhost:4317
http:
endpoint: localhost:4318
exporters:
otlp:
endpoint: "otel-collector.default:4317"
tls:
insecure: true
sending_queue:
num_consumers: 4
queue_size: 100
retry_on_failure:
enabled: true
processors:
batch:
memory_limiter:
# 80% of maximum memory up to 2G
limit_mib: 400
# 25% of limit up to 2G
spike_limit_mib: 100
check_interval: 5s
extensions:
zpages: {}
service:
extensions: [zpages]
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [otlp]
criteria: []
Affinity
This section contains example Resource Definitions using the Template Driver for the affinity of Kubernetes Pods.
affinity.yaml: Add affinity rules to the Workload. This format is for use with the Humanitec CLI.
affinity.tf (view on GitHub):
resource "humanitec_resource_definition" "workload-affinity" {
driver_type = "humanitec/template"
id = "workload-affinity"
name = "workload-affinity"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/affinity
value:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
END_OF_TEXT
}
})
}
}
affinity.yaml (view on GitHub):
# Add affinity rules to the Workload by adding a value to the manifest at .spec.affinity
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: workload-affinity
entity:
name: workload-affinity
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/affinity
value:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
criteria: []
Annotations
This section shows how to use the Template Driver for managing annotations on Kubernetes objects.
While it is also possible to set annotations via Score, the approach shown here shifts the management of annotations down to the Platform, ensuring consistency and relieving developers of having to repeat common annotations for each Workload in the Score extension file.
The example illustrates an annotation on the Kubernetes Service object (specific to Google Kubernetes Engine in this case). If you want to see a more generic approach that also covers annotations on Workloads, you can follow the same approach as described in the example with labels.
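For illustration: assuming the Score file defines a service whose first port uses the hypothetical service_port 80, the template below would render the NEG annotation as:

cloud.google.com/neg: '{"ingress":true,"exposed_ports":{"80":{}}}'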
annotations.tf (view on GitHub):
resource "humanitec_resource_definition" "annotations" {
driver_type = "humanitec/template"
id = "annotations"
name = "annotations"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/deployment/annotations
value:
{{- range $key, $val := .resource.spec.deployment.annotations }}
{{ $key }}: {{ $val | quote }}
{{- end }}
- op: add
path: /spec/pod/annotations
value:
{{- range $key, $val := .resource.spec.pod.annotations }}
{{ $key }}: {{ $val | quote }}
{{- end }}
# If the Score file also defines a service, add annotations to the service object
{{- if .resource.spec.service }}
- op: add
path: /spec/service/annotations
value:
{{- range $key, $val := .resource.spec.service.annotations }}
{{ $key }}: {{ $val | quote }}
{{- end }}
{{- $port := values .resource.spec.service.ports | first }}
- op: add
path: /spec/service/annotations/cloud.google.com~1neg
value: '{"ingress":true,"exposed_ports":{ {{- $port.service_port | quote -}} :{}}}'
{{- end }}
END_OF_TEXT
}
})
}
}
resource "humanitec_resource_definition_criteria" "annotations_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.annotations.id
}
annotations.yaml (view on GitHub):
# This Resource Definition shows how to add annotations to the Kubernetes service object using the Template Driver
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: annotations
entity:
name: annotations
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/deployment/annotations
value:
{{- range $key, $val := .resource.spec.deployment.annotations }}
{{ $key }}: {{ $val | quote }}
{{- end }}
- op: add
path: /spec/pod/annotations
value:
{{- range $key, $val := .resource.spec.pod.annotations }}
{{ $key }}: {{ $val | quote }}
{{- end }}
# If the Score file also defines a service, add annotations to the service object
{{- if .resource.spec.service }}
- op: add
path: /spec/service/annotations
value:
{{- range $key, $val := .resource.spec.service.annotations }}
{{ $key }}: {{ $val | quote }}
{{- end }}
{{- $port := values .resource.spec.service.ports | first }}
- op: add
path: /spec/service/annotations/cloud.google.com~1neg
value: '{"ingress":true,"exposed_ports":{ {{- $port.service_port | quote -}} :{}}}'
{{- end }}
criteria:
- {}
Endpoint
This section contains example Resource Definitions using the Template Driver for creating an endpoint resource.
The endpoint resource is used to represent an endpoint that a workload needs to interact with. This could, for example, be a shared service or an external API.
Here, the Template Driver is used to provide optional outputs, making use of the default function as well as merge to construct dictionaries with default values.
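As a sketch of the rendered result: with the host and port values below and no credentials supplied, the outputs would resolve to scheme: http, host: example.com, port: 8080, and empty path, query, and fragment, while the secret outputs would resolve to:

username: ""
password: ""
url: http://example.com:8080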
endpoint-def.tf (view on GitHub):
resource "humanitec_resource_definition" "endpoint-example-endpoint" {
driver_type = "humanitec/template"
id = "endpoint-example-endpoint"
name = "endpoint-example-endpoint"
type = "endpoint"
driver_inputs = {
values_string = jsonencode({
"host" = "example.com"
"port" = 8080
"templates" = {
"init" = <<END_OF_TEXT
{{- $username := .driver.secrets.username | default "" }}
{{- $password := .driver.secrets.password | default "" }}
username: {{ $username | toRawJson }}
password: {{ $password | toRawJson }}
userinfo: {{ if $username }}
{{- $username }}:
{{- end }}
{{- $password }}
hostport: {{ .driver.values.host }}
{{- if .driver.values.port -}}
:{{ .driver.values.port }}
{{- end }}
END_OF_TEXT
"outputs" = <<END_OF_TEXT
scheme: {{ .driver.values.scheme | default "http" }}
host: {{ .driver.values.host }}
port: {{ .driver.values.port }}
path: {{ .driver.values.path | default "" | toRawJson }}
query: {{ .driver.values.query | default "" | toRawJson }}
fragment: {{ .driver.values.fragment | default "" | toRawJson }}
END_OF_TEXT
"secrets" = <<END_OF_TEXT
username: {{ .init.username | toRawJson }}
password: {{ .init.password | toRawJson }}
url: {{ .outputs.values | merge (dict "userinfo" (.init.userinfo | default "") "host" .init.hostport) | urlJoin | toRawJson }}
END_OF_TEXT
}
})
}
}
endpoint-def.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: endpoint-example-endpoint
entity:
driver_inputs:
values:
# Commented out properties are optional
# scheme: http
host: example.com
port: 8080
# path: ""
# query: ""
# fragment: ""
templates:
init: |
{{- $username := .driver.secrets.username | default "" }}
{{- $password := .driver.secrets.password | default "" }}
username: {{ $username | toRawJson }}
password: {{ $password | toRawJson }}
userinfo: {{ if $username }}
{{- $username }}:
{{- end }}
{{- $password }}
hostport: {{ .driver.values.host }}
{{- if .driver.values.port -}}
:{{ .driver.values.port }}
{{- end }}
outputs: |
scheme: {{ .driver.values.scheme | default "http" }}
host: {{ .driver.values.host }}
port: {{ .driver.values.port }}
path: {{ .driver.values.path | default "" | toRawJson }}
query: {{ .driver.values.query | default "" | toRawJson }}
fragment: {{ .driver.values.fragment | default "" | toRawJson }}
secrets: |
username: {{ .init.username | toRawJson }}
password: {{ .init.password | toRawJson }}
url: {{ .outputs.values | merge (dict "userinfo" (.init.userinfo | default "") "host" .init.hostport) | urlJoin | toRawJson }}
# Both username and password are optional
# If supplied, they should be secret references to keys in the
# secret manager configured for the Humanitec Operator
# secret_refs:
# username:
# store:
# ref:
# password:
# store:
# ref:
driver_type: humanitec/template
type: endpoint
name: endpoint-example-endpoint
Horizontal pod autoscaler
This section contains a Resource Definition example for handling Kubernetes HorizontalPodAutoscalers by using the template Driver to configure your own HorizontalPodAutoscaler implementation. You can see this other example if you want to use the hpa Driver.
You can find a Score file example using the horizontal-pod-autoscaler resource type here.
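As a minimal sketch of the consuming side (the workload name, container, and params shown are hypothetical), a Score file could request the resource like this:

apiVersion: score.dev/v1b1
metadata:
  name: my-workload
containers:
  main:
    image: my-image
resources:
  scaler:
    type: horizontal-pod-autoscaler
    params:
      minReplicas: 2
      maxReplicas: 5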
hpa.tf (view on GitHub):
resource "humanitec_resource_definition" "hpa" {
driver_type = "humanitec/template"
id = "hpa"
name = "hpa"
type = "horizontal-pod-autoscaler"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = <<END_OF_TEXT
{{ $defaultMaxReplicas := 3 }}
{{ $defaultMinReplicas := 2 }}
{{ $absoluteMaxReplicas := 10 }}
{{ $defaultTargetUtilizationPercent := 80 }}
workload: {{ index (splitList "." "$${context.res.id}") 1 }}
maxReplicas: {{ .resource.maxReplicas | default $defaultMaxReplicas | min $absoluteMaxReplicas }}
minReplicas: {{ .resource.minReplicas | default $defaultMinReplicas }}
targetCPUUtilizationPercentage: {{ .resource.targetCPUUtilizationPercentage | default $defaultTargetUtilizationPercent }}
END_OF_TEXT
"manifests" = <<END_OF_TEXT
hpa.yaml:
location: namespace
data:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ .init.workload }}-hpa
spec:
maxReplicas: {{ .init.maxReplicas }}
metrics:
- resource:
name: cpu
target:
averageUtilization: {{ .init.targetCPUUtilizationPercentage }}
type: Utilization
type: Resource
minReplicas: {{ .init.minReplicas }}
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ .init.workload }}
END_OF_TEXT
}
})
}
}
resource "humanitec_resource_definition_criteria" "hpa_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.hpa.id
}
hpa.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: hpa
entity:
driver_type: humanitec/template
name: hpa
type: horizontal-pod-autoscaler
driver_inputs:
values:
templates:
init: |
{{ $defaultMaxReplicas := 3 }}
{{ $defaultMinReplicas := 2 }}
{{ $absoluteMaxReplicas := 10 }}
{{ $defaultTargetUtilizationPercent := 80 }}
workload: {{ index (splitList "." "${context.res.id}") 1 }}
maxReplicas: {{ .resource.maxReplicas | default $defaultMaxReplicas | min $absoluteMaxReplicas }}
minReplicas: {{ .resource.minReplicas | default $defaultMinReplicas }}
targetCPUUtilizationPercentage: {{ .resource.targetCPUUtilizationPercentage | default $defaultTargetUtilizationPercent }}
manifests: |
hpa.yaml:
location: namespace
data:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: {{ .init.workload }}-hpa
spec:
maxReplicas: {{ .init.maxReplicas }}
metrics:
- resource:
name: cpu
target:
averageUtilization: {{ .init.targetCPUUtilizationPercentage }}
type: Utilization
type: Resource
minReplicas: {{ .init.minReplicas }}
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ .init.workload }}
criteria:
- {}
Imagepullsecrets
This section shows how to use the Template Driver for configuring cluster access to a private container image registry.
The example implements the Kubernetes standard mechanism to Pull an Image from a Private Registry. It creates a Kubernetes Secret of kubernetes.io/dockerconfigjson type, reading the credentials from a secret store. It then configures the secret as the imagePullSecret for a Workload’s Pod.
The example is applicable only when using the Humanitec Operator on the cluster. With the Operator, using the Registries feature of the Platform Orchestrator is not supported.
To use this mechanism, install the Resource Definitions of this example into your Organization, replacing the placeholder values with the actual values of your setup. Add the appropriate matching criteria to the workload Definition to match the Workloads you want to have access to the private registry.
Note: workload is an implicit Resource Type, so it is automatically referenced for every Deployment.
config.yaml: Resource Definition of type: config that reads the credentials for the private registry from a secret store and creates the Kubernetes Secret
workload.yaml: Resource Definition of type: workload that adds the imagePullSecrets element to the Pod spec, referencing the config Resource
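For example, to grant registry access to all Workloads of a hypothetical Application with ID my-app, matching criteria like the following could be added to the workload Definition:

criteria:
- app_id: my-app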
config.tf (view on GitHub):
resource "humanitec_resource_definition" "regcred-config" {
driver_type = "humanitec/template"
id = "regcred-config"
name = "regcred-config"
type = "config"
driver_inputs = {
values_string = jsonencode({
"secret_name" = "regcred"
"server" = "FIXME"
"templates" = {
"init" = <<END_OF_TEXT
dockerConfigJson:
auths:
{{ .driver.values.server | quote }}:
username: {{ .driver.secrets.username | toRawJson }}
password: {{ .driver.secrets.password | toRawJson }}
END_OF_TEXT
"manifests" = {
"regcred-secret.yaml" = {
"data" = <<END_OF_TEXT
apiVersion: v1
kind: Secret
metadata:
name: {{ .driver.values.secret_name }}
data:
.dockerconfigjson: {{ .init.dockerConfigJson | toRawJson | b64enc }}
type: kubernetes.io/dockerconfigjson
END_OF_TEXT
"location" = "namespace"
}
}
"outputs" = "secret_name: {{ .driver.values.secret_name }}"
}
})
secret_refs = jsonencode({
"password" = {
"ref" = "regcred-password"
"store" = "FIXME"
}
"username" = {
"ref" = "regcred-username"
"store" = "FIXME"
}
})
}
}
resource "humanitec_resource_definition_criteria" "regcred-config_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.regcred-config.id
class = "default"
res_id = "regcred"
}
config.yaml (view on GitHub):
# This Resource Definition pulls credentials for a container image registry from a secret store
# and creates a Kubernetes Secret of kubernetes.io/dockerconfigjson type
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: regcred-config
entity:
driver_type: humanitec/template
name: regcred-config
type: config
criteria:
- class: default
# This res_id must be used from a referencing Resource Definition to request this config Resource
res_id: regcred
driver_inputs:
# These secret references read the credentials from a secret store
secret_refs:
password:
ref: regcred-password
# Replace this value with the secret store id that's supplying the password
store: FIXME
username:
ref: regcred-username
# Replace this value with the secret store id that's supplying the username
store: FIXME
values:
secret_name: regcred
# Replace this value with the servername of your registry
server: FIXME
templates:
# The init template is used to prepare the "dockerConfigJson" content
init: |
dockerConfigJson:
auths:
{{ .driver.values.server | quote }}:
username: {{ .driver.secrets.username | toRawJson }}
password: {{ .driver.secrets.password | toRawJson }}
manifests:
# The manifests template creates the Kubernetes Secret
# which can then be used in the workload "imagePullSecrets"
regcred-secret.yaml:
data: |
apiVersion: v1
kind: Secret
metadata:
name: {{ .driver.values.secret_name }}
data:
.dockerconfigjson: {{ .init.dockerConfigJson | toRawJson | b64enc }}
type: kubernetes.io/dockerconfigjson
location: namespace
outputs: |
secret_name: {{ .driver.values.secret_name }}
workload.tf (view on GitHub):
resource "humanitec_resource_definition" "custom-workload" {
driver_type = "humanitec/template"
id = "custom-workload"
name = "custom-workload"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/imagePullSecrets
value:
- name: $${resources['config.default#regcred'].outputs.secret_name}
END_OF_TEXT
}
})
}
}
workload.yaml (view on GitHub):
# This workload Resource Definition adds an "imagePullSecrets" element to the Pod spec
# It references a "config" type Resource to obtain the secret name
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: custom-workload
entity:
name: custom-workload
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/imagePullSecrets
value:
- name: ${resources['config.default#regcred'].outputs.secret_name}
Ingress
This section contains example Resource Definitions for handling Kubernetes ingress traffic. Instead of the Ingress Driver type, we are using the Template Driver, which allows us to render any Kubernetes YAML object.
ingress-traefik.yaml: defines an IngressRoute object for the Traefik Ingress Controller using the IngressRoute custom resource definition. This format is for use with the Humanitec CLI.
ingress-traefik-multiple-routes.yaml: defines an IngressRoute object for the Traefik Ingress Controller using the IngressRoute custom resource definition. It dynamically extracts the routes from the route resource in the Resource Graph to provide multiple routes. This format is for use with the Humanitec CLI.
ingress-ambassador.yaml: defines a Mapping object for the Ambassador Ingress Controller using the Mapping custom resource definition. This format is for use with the Humanitec CLI.
ingress-ambassador.tf (view on GitHub):
resource "humanitec_resource_definition" "ambassador-ingress" {
driver_type = "template"
id = "ambassador-ingress"
name = "ambassador-ingress"
type = "ingress"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = <<END_OF_TEXT
name: {{ .id }}-ingress
secretname: $${resources.tls-cert.outputs.tls_secret_name}
host: $${resources.dns.outputs.host}
namespace: $${resources['k8s-namespace#k8s-namespace'].outputs.namespace}
END_OF_TEXT
"manifests" = <<END_OF_TEXT
ambassador-mapping.yaml:
data:
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
name: {{ .init.name }}-mapping
spec:
host: {{ .init.host }}
prefix: /
service: my-service-name:8080
location: namespace
ambassador-tlscontext.yaml:
data:
apiVersion: getambassador.io/v3alpha1
kind: TLSContext
metadata:
name: {{ .init.name }}-tlscontext
spec:
hosts:
- {{ .init.host }}
secret: {{ .init.secretname }}
location: namespace
END_OF_TEXT
}
})
}
}
ingress-ambassador.yaml (view on GitHub):
# This Resource Definition provisions Mapping and TLSContext objects for the Ambassador Ingress Controller
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: ambassador-ingress
entity:
name: ambassador-ingress
type: ingress
driver_type: humanitec/template
driver_inputs:
values:
templates:
init: |
name: {{ .id }}-ingress
secretname: ${resources.tls-cert.outputs.tls_secret_name}
host: ${resources.dns.outputs.host}
namespace: ${resources['k8s-namespace#k8s-namespace'].outputs.namespace}
manifests: |
ambassador-mapping.yaml:
data:
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
name: {{ .init.name }}-mapping
spec:
host: {{ .init.host }}
prefix: /
service: my-service-name:8080
location: namespace
ambassador-tlscontext.yaml:
data:
apiVersion: getambassador.io/v3alpha1
kind: TLSContext
metadata:
name: {{ .init.name }}-tlscontext
spec:
hosts:
- {{ .init.host }}
secret: {{ .init.secretname }}
location: namespace
ingress-traefik-multiple-routes.tf (view on GitHub):
resource "humanitec_resource_definition" "traefik-ingress-eg" {
driver_type = "humanitec/template"
id = "traefik-ingress-eg"
name = "traefik-ingress-eg"
type = "ingress"
driver_inputs = {
values_string = jsonencode({
"routeHosts" = "$${resources.dns<route.outputs.host}"
"routePaths" = "$${resources.dns<route.outputs.path}"
"routePorts" = "$${resources.dns<route.outputs.port}"
"routeServices" = "$${resources.dns<route.outputs.service}"
"templates" = {
"init" = <<END_OF_TEXT
host: {{ .resource.host | quote }}
# ingress paths are added implicitly to our ingress resource based on the contents of our workload. These are an older
# alternative to route resources. Consider this deprecated in the future!
ingressPaths: {{ dig "rules" "http" (list) .resource | toRawJson }}
# The tls secret name could be generated by Humanitec or injected as an input parameter to our ingress.
tlsSecretName: {{ .driver.values.tls_secret_name | default .resource.tls_secret_name | default .driver.values.automatic_tls_secret_name | quote }}
{{- if eq (lower ( .driver.values.path_type | default "Prefix")) "exact" -}}
defaultMatchRule: Path
{{- else }}
defaultMatchRule: PathPrefix
{{- end }}
END_OF_TEXT
"manifests" = <<END_OF_TEXT
# Create our single manifest with many routes in it. Alternative configurations could create a manifest per route with unique file names if required.
ingressroute.yaml:
location: namespace
data:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
# id is the unique resource uuid for this ingress
name: {{ .id }}-ingressroute
annotations:
{{- range $k, $v := .driver.values.annotations }}
{{ $k | toRawJson }}: {{ $v | toRawJson }}
{{- end }}
labels:
{{- range $k, $v := .driver.values.labels }}
{{ $k | toRawJson }}: {{ $v | toRawJson }}
{{- end }}
spec:
entryPoints:
- websecure
routes:
# Add all the paths from the dependent route resources. Route resources can have different hostnames but will all obey the path type set out in the resource inputs.
{{- range $index, $path := .driver.values.routePaths }}
- match: Host(`{{ index $.driver.values.routeHosts $index }}`) && {{ $.init.defaultMatchRule }}(`{{ $path }}`)
kind: Rule
services:
- kind: Service
name: {{ index $.driver.values.routeServices $index | toRawJson }}
port: {{ index $.driver.values.routePorts $index }}
{{- end }}
# Add all the support ingress paths. The old style ingress rules use a single hostname coming from the resource configuration but support different path types per rule.
# As mentioned further up, consider these deprecated in the future!
{{- range $path, $rule := .init.ingressPaths }}
{{ $lcType := lower $rule.type -}}
{{- if eq $lcType "implementationspecific" -}}
- match: Host(`{{ $.init.host }}`) && Path(`{{ $path }}`)
{{- else if eq $lcType "exact" -}}
- match: Host(`{{ $.init.host }}`) && Path(`{{ $path }}`)
{{ else }}
- match: Host(`{{ $.init.host }}`) && PathPrefix(`{{ $path }}`)
{{- end }}
kind: Rule
services:
- kind: Service
name: {{ $rule.name | quote }}
port: {{ $rule.port }}
{{- end }}
{{- if not (or .driver.values.no_tls (eq .init.tlsSecretName "")) }}
tls:
secretName: {{ .init.tlsSecretName | toRawJson }}
{{- end }}
END_OF_TEXT
}
})
}
}
ingress-traefik-multiple-routes.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: traefik-ingress-eg
entity:
name: traefik-ingress-eg
driver_type: humanitec/template
type: ingress
driver_inputs:
values:
# Find all the route resources that are dependent on any dns resources used in this workload.
# We extract arrays of their host, path, port, and service resource.
# These will become new entries in the .drivers.values table.
routeHosts: ${resources.dns<route.outputs.host}
routePaths: ${resources.dns<route.outputs.path}
routePorts: ${resources.dns<route.outputs.port}
routeServices: ${resources.dns<route.outputs.service}
templates:
# The init template gives us a place to precompute some fields that we'll use in the manifests template.
init: |
host: {{ .resource.host | quote }}
# ingress paths are added implicitly to our ingress resource based on the contents of our workload. These are an older
# alternative to route resources. Consider this deprecated in the future!
ingressPaths: {{ dig "rules" "http" (list) .resource | toRawJson }}
# The tls secret name could be generated by Humanitec or injected as an input parameter to our ingress.
tlsSecretName: {{ .driver.values.tls_secret_name | default .resource.tls_secret_name | default .driver.values.automatic_tls_secret_name | quote }}
{{- if eq (lower ( .driver.values.path_type | default "Prefix")) "exact" -}}
defaultMatchRule: Path
{{- else }}
defaultMatchRule: PathPrefix
{{- end }}
manifests: |
# Create our single manifest with many routes in it. Alternative configurations could create a manifest per route with unique file names if required.
ingressroute.yaml:
location: namespace
data:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
# id is the unique resource uuid for this ingress
name: {{ .id }}-ingressroute
annotations:
{{- range $k, $v := .driver.values.annotations }}
{{ $k | toRawJson }}: {{ $v | toRawJson }}
{{- end }}
labels:
{{- range $k, $v := .driver.values.labels }}
{{ $k | toRawJson }}: {{ $v | toRawJson }}
{{- end }}
spec:
entryPoints:
- websecure
routes:
# Add all the paths from the dependent route resources. Route resources can have different hostnames but will all obey the path type set out in the resource inputs.
{{- range $index, $path := .driver.values.routePaths }}
- match: Host(`{{ index $.driver.values.routeHosts $index }}`) && {{ $.init.defaultMatchRule }}(`{{ $path }}`)
kind: Rule
services:
- kind: Service
name: {{ index $.driver.values.routeServices $index | toRawJson }}
port: {{ index $.driver.values.routePorts $index }}
{{- end }}
# Add all the support ingress paths. The old style ingress rules use a single hostname coming from the resource configuration but support different path types per rule.
# As mentioned further up, consider these deprecated in the future!
{{- range $path, $rule := .init.ingressPaths }}
{{ $lcType := lower $rule.type -}}
{{- if eq $lcType "implementationspecific" -}}
- match: Host(`{{ $.init.host }}`) && Path(`{{ $path }}`)
{{- else if eq $lcType "exact" -}}
- match: Host(`{{ $.init.host }}`) && Path(`{{ $path }}`)
{{ else }}
- match: Host(`{{ $.init.host }}`) && PathPrefix(`{{ $path }}`)
{{- end }}
kind: Rule
services:
- kind: Service
name: {{ $rule.name | quote }}
port: {{ $rule.port }}
{{- end }}
{{- if not (or .driver.values.no_tls (eq .init.tlsSecretName "")) }}
tls:
secretName: {{ .init.tlsSecretName | toRawJson }}
{{- end }}
ingress-traefik.tf (view on GitHub):
resource "humanitec_resource_definition" "traefik-ingress" {
driver_type = "template"
id = "traefik-ingress"
name = "traefik-ingress"
type = "ingress"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = <<END_OF_TEXT
name: {{ .id }}-ir
secretname: $${resources.tls-cert.outputs.tls_secret_name}
host: $${resources.dns.outputs.host}
namespace: $${resources['k8s-namespace#k8s-namespace'].outputs.namespace}
END_OF_TEXT
"manifests" = <<END_OF_TEXT
traefik-ingressroute.yaml:
data:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: {{ .init.name }}
spec:
routes:
- match: Host(`{{ .init.host }}`) && PathPrefix(`/`)
kind: Rule
services:
- name: my-service-name
kind: Service
port: 8080
namespace: {{ .init.namespace }}
tls:
secretName: {{ .init.secretname }}
location: namespace
END_OF_TEXT
}
})
}
}
ingress-traefik.yaml (view on GitHub):
# This Resource Definition provisions an IngressRoute object for the Traefik Ingress Controller
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: traefik-ingress
entity:
name: traefik-ingress
type: ingress
driver_type: humanitec/template
driver_inputs:
values:
templates:
init: |
name: {{ .id }}-ir
secretname: ${resources.tls-cert.outputs.tls_secret_name}
host: ${resources.dns.outputs.host}
namespace: ${resources['k8s-namespace#k8s-namespace'].outputs.namespace}
manifests: |
traefik-ingressroute.yaml:
data:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: {{ .init.name }}
spec:
routes:
- match: Host(`{{ .init.host }}`) && PathPrefix(`/`)
kind: Rule
services:
- name: my-service-name
kind: Service
port: 8080
namespace: {{ .init.namespace }}
tls:
secretName: {{ .init.secretname }}
location: namespace
Labels
This section shows how to use the Template Driver for managing labels on Kubernetes objects.
While it is also possible to set labels via Score, the approach shown here shifts the management of labels down to the Platform, ensuring consistency and relieving developers of having to repeat common labels for each Workload in the Score extension file.
config-labels.yaml: Resource Definition of type config which defines the value for a sample label at a central place.
custom-workload-with-dynamic-labels.yaml: Add dynamic labels to your Workload. This format is for use with the Humanitec CLI.
custom-namespace-with-dynamic-labels.yaml: Add dynamic labels to your Namespace. This format is for use with the Humanitec CLI.
config-labels.tf (view on GitHub):
resource "humanitec_resource_definition" "app-config" {
driver_type = "humanitec/template"
id = "app-config"
name = "app-config"
type = "config"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = "cost_center_id: my-example-id\n"
}
})
}
}
resource "humanitec_resource_definition_criteria" "app-config_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.app-config.id
res_id = "app-config"
}
config-labels.yaml (view on GitHub):
# This "config" type Resource Definition provides the value for the sample label
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: app-config
entity:
name: app-config
type: config
driver_type: humanitec/template
driver_inputs:
values:
templates:
# Returns a sample output named "cost_center_id" to be used as a label
outputs: |
cost_center_id: my-example-id
# Match the resource ID "app-config" so that it can be requested via that ID
criteria:
- res_id: app-config
custom-namespace-with-dynamic-labels.tf (view on GitHub):
resource "humanitec_resource_definition" "custom-namespace-with-label" {
driver_type = "humanitec/template"
id = "custom-namespace-with-label"
name = "custom-namespace-with-label"
type = "k8s-namespace"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = "name: $${context.app.id}-$${context.env.id}\n"
"manifests" = <<END_OF_TEXT
namespace.yaml:
location: cluster
data:
apiVersion: v1
kind: Namespace
metadata:
labels:
env_id: $${context.env.id}
cost_center_id: $${resources['config.default#app-config'].outputs.cost_center_id}
name: {{ .init.name }}
END_OF_TEXT
"outputs" = "namespace: {{ .init.name }}\n"
}
})
}
}
resource "humanitec_resource_definition_criteria" "custom-namespace-with-label_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.custom-namespace-with-label.id
}
custom-namespace-with-dynamic-labels.yaml (view on GitHub):
# This Resource Definition references the "config" resource to use its output as a label
# and adds another label taken from the Deployment context
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: custom-namespace-with-label
entity:
name: custom-namespace-with-label
type: k8s-namespace
driver_type: humanitec/template
driver_inputs:
values:
templates:
init: |
name: ${context.app.id}-${context.env.id}
manifests: |
namespace.yaml:
location: cluster
data:
apiVersion: v1
kind: Namespace
metadata:
labels:
env_id: ${context.env.id}
cost_center_id: ${resources['config.default#app-config'].outputs.cost_center_id}
name: {{ .init.name }}
outputs: |
namespace: {{ .init.name }}
# Set matching criteria as required
criteria:
- {}
custom-workload-with-dynamic-labels.tf (view on GitHub):
resource "humanitec_resource_definition" "custom-workload-with-label" {
driver_type = "humanitec/template"
id = "custom-workload-with-label"
name = "custom-workload-with-label"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/deployment/labels
value:
{{- range $key, $val := .resource.spec.deployment.labels }}
{{ $key }}: {{ $val | quote }}
{{- end }}
env_id: $${context.env.id}
cost_center_id: $${resources['config.default#app-config'].outputs.cost_center_id}
- op: add
path: /spec/pod/labels
value:
{{- range $key, $val := .resource.spec.pod.labels }}
{{ $key }}: {{ $val | quote }}
{{- end }}
# If the Score file also defines a service, add labels to the service object
{{- if .resource.spec.service }}
- op: add
path: /spec/service/labels
value:
{{- range $key, $val := .resource.spec.service.labels }}
{{ $key }}: {{ $val | quote }}
{{- end }}
env_id: $${context.env.id}
cost_center_id: $${resources['config.default#app-config'].outputs.cost_center_id}
{{- end }}
END_OF_TEXT
}
})
}
}
resource "humanitec_resource_definition_criteria" "custom-workload-with-label_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.custom-workload-with-label.id
}
custom-workload-with-dynamic-labels.yaml (view on GitHub):
# This Resource Definition references the "config" resource to use its output as a label
# and adds another label taken from the Deployment context
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: custom-workload-with-label
entity:
name: custom-workload-with-label
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/deployment/labels
value:
{{- range $key, $val := .resource.spec.deployment.labels }}
{{ $key }}: {{ $val | quote }}
{{- end }}
env_id: ${context.env.id}
cost_center_id: ${resources['config.default#app-config'].outputs.cost_center_id}
- op: add
path: /spec/pod/labels
value:
{{- range $key, $val := .resource.spec.pod.labels }}
{{ $key }}: {{ $val | quote }}
{{- end }}
# If the Score file also defines a service, add labels to the service object
{{- if .resource.spec.service }}
- op: add
path: /spec/service/labels
value:
{{- range $key, $val := .resource.spec.service.labels }}
{{ $key }}: {{ $val | quote }}
{{- end }}
env_id: ${context.env.id}
cost_center_id: ${resources['config.default#app-config'].outputs.cost_center_id}
{{- end }}
# Set matching criteria as required
criteria:
- {}
Namespace
This section contains example Resource Definitions using the Template Driver for managing Kubernetes namespaces.
custom-namespace.yaml: Create Kubernetes namespaces with your own custom naming scheme. This format is for use with the Humanitec CLI.
custom-namespace.tf: Create Kubernetes namespaces with your own custom naming scheme. This format is for use with the Humanitec Terraform provider.
short-namespace.yaml: Create Kubernetes namespaces with your own custom naming scheme of defined length. This format is for use with the Humanitec CLI.
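As a quick sketch, assuming a hypothetical Application ID backend and Environment ID development, the two naming schemes below render as:

custom-namespace: development-backend
short-namespace: developm-backend (each placeholder truncated to 8 characters)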
custom-namespace.tf (view on GitHub):
resource "humanitec_resource_definition" "custom-namespace" {
driver_type = "humanitec/template"
id = "custom-namespace"
name = "custom-namespace2"
type = "k8s-namespace"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = "name: $${context.env.id}-$${context.app.id}\n"
"manifests" = <<END_OF_TEXT
namespace.yaml:
location: cluster
data:
apiVersion: v1
kind: Namespace
metadata:
labels:
pod-security.kubernetes.io/enforce: restricted
name: {{ .init.name }}
END_OF_TEXT
"outputs" = "namespace: {{ .init.name }}\n"
}
})
}
}
resource "humanitec_resource_definition_criteria" "custom-namespace_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.custom-namespace.id
}
custom-namespace.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: custom-namespace
entity:
name: custom-namespace2
type: k8s-namespace
driver_type: humanitec/template
driver_inputs:
values:
templates:
# Use any combination of placeholders and characters to configure your naming scheme
init: |
name: ${context.env.id}-${context.app.id}
manifests: |
namespace.yaml:
location: cluster
data:
apiVersion: v1
kind: Namespace
metadata:
labels:
pod-security.kubernetes.io/enforce: restricted
name: {{ .init.name }}
outputs: |
namespace: {{ .init.name }}
criteria:
- {}
short-namespace.tf (view on GitHub):
resource "humanitec_resource_definition" "custom-namespace" {
driver_type = "humanitec/template"
id = "custom-namespace"
name = "custom-namespace"
type = "k8s-namespace"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = "name: {{ trunc 8 \"$${context.env.id}\" }}-{{ trunc 8 \"$${context.app.id}\" }}\n"
"manifests" = <<END_OF_TEXT
namespace.yaml:
location: cluster
data:
apiVersion: v1
kind: Namespace
metadata:
labels:
pod-security.kubernetes.io/enforce: restricted
name: {{ .init.name }}
END_OF_TEXT
"outputs" = "namespace: {{ .init.name }}\n"
}
})
}
}
resource "humanitec_resource_definition_criteria" "custom-namespace_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.custom-namespace.id
}
short-namespace.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: custom-namespace
entity:
name: custom-namespace
type: k8s-namespace
driver_type: humanitec/template
driver_inputs:
values:
templates:
# Here the namespace name is shortened to be a maximum of 17 characters,
# no matter how long the app and env name might be.
init: |
name: {{ trunc 8 "${context.env.id}" }}-{{ trunc 8 "${context.app.id}" }}
manifests: |
namespace.yaml:
location: cluster
data:
apiVersion: v1
kind: Namespace
metadata:
labels:
pod-security.kubernetes.io/enforce: restricted
name: {{ .init.name }}
outputs: |
namespace: {{ .init.name }}
criteria:
- {}
Namespaced resources
This example shows a sample usage of the base-env Resource Type. It is one of the implicit Resource Types that always get provisioned for a Deployment.
In this example, it will be used to provision multiple Kubernetes resources scoped to the Namespace: ResourceQuota and NetworkPolicies:
- The Resource Definition base-env-resource-quota.yaml uses the template driver to provision a Kubernetes manifest describing a ResourceQuota in the target namespace.
- The Resource Definition base-env-network-policies.yaml uses the template driver to provision a Kubernetes manifest describing a NetworkPolicy in the target namespace.
Splitting the provisioning of the two Kubernetes Resources into two different Resource Definitions makes it possible to:
- Keep modularity: the same base-env-resource-quota (or base-env-network-policies) Resource Definition can be used by different base-env-default Resource Definitions.
- Allow flexibility: every base-env can use a different Resource Driver (e.g. template, terraform).
The base-env-default.yaml Resource Definition creates a dependency on the other two base-env Resource Definitions using a Resource Reference. The reference specifies different Resource IDs (resource-quota and network-policies) so that the proper base-env Resource Definitions will be matched based on their matching criteria.
Three base-env Resource Definitions are provided:
base-env-default.yaml adds the base-env Resources that provision the Kubernetes manifests to the Resource Graph
base-env-resource-quota.yaml will be matched for all references of res_id: resource-quota
base-env-network-policies.yaml will be matched for all references of res_id: network-policies
base-env-default.tf (view on GitHub):
resource "humanitec_resource_definition" "base-env-default" {
driver_type = "humanitec/echo"
id = "base-env-default"
name = "base-env-default"
type = "base-env"
driver_inputs = {
values_string = jsonencode({
"namespaced-resources" = {
"resource-quota" = "$${resources[\"base-env.default#resource-quota\"].guresid}"
"network-policies" = "$${resources[\"base-env.default#network-policies\"].guresid}"
}
})
}
}
base-env-default.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: base-env-default
entity:
name: base-env-default
type: base-env
driver_type: humanitec/echo
driver_inputs:
values:
namespaced-resources:
resource-quota: ${resources["base-env.default#resource-quota"].guresid}
network-policies: ${resources["base-env.default#network-policies"].guresid}
base-env-network-policies.tf (view on GitHub):
resource "humanitec_resource_definition" "base-env-network-policies" {
driver_type = "humanitec/template"
id = "base-env-network-policies"
name = "base-env-network-policies"
type = "base-env"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"manifests" = "network-policies.yaml:\n location: namespace\n data:\n apiVersion: networking.k8s.io/v1\n kind: NetworkPolicy\n metadata:\n name: default-deny-egress\n spec:\n podSelector: {}\n policyTypes:\n - Egress"
}
})
}
}
resource "humanitec_resource_definition_criteria" "base-env-network-policies_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.base-env-network-policies.id
res_id = "network-policies"
}
base-env-network-policies.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: base-env-network-policies
entity:
name: base-env-network-policies
type: base-env
driver_type: humanitec/template
driver_inputs:
values:
templates:
manifests: |-
network-policies.yaml:
location: namespace
data:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-egress
spec:
podSelector: {}
policyTypes:
- Egress
criteria:
- res_id: network-policies
base-env-resource-quota.tf (view on GitHub):
resource "humanitec_resource_definition" "base-env-resource-quota" {
driver_type = "humanitec/template"
id = "base-env-resource-quota"
name = "base-env-resource-quota"
type = "base-env"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"manifests" = "quota.yaml:\n location: namespace\n data:\n apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: compute-resources\n spec:\n hard:\n limits.cpu: 1\n limits.memory: 256Mi"
}
})
}
}
resource "humanitec_resource_definition_criteria" "base-env-resource-quota_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.base-env-resource-quota.id
res_id = "resource-quota"
}
base-env-resource-quota.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: base-env-resource-quota
entity:
name: base-env-resource-quota
type: base-env
driver_type: humanitec/template
driver_inputs:
values:
templates:
manifests: |-
quota.yaml:
location: namespace
data:
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-resources
spec:
hard:
limits.cpu: 1
limits.memory: 256Mi
criteria:
- res_id: resource-quota
Node selector
This section contains example Resource Definitions using the Template Driver for setting nodeSelectors on your Pods.
aci-workload.yaml: Add the required node selector and tolerations to the Workload so it can be scheduled on an Azure AKS virtual node. This format is for use with the Humanitec CLI.
aci-workload.tf (view on GitHub):
resource "humanitec_resource_definition" "aci-workload" {
driver_type = "humanitec/template"
id = "aci-workload"
name = "aci-workload"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/tolerations
value:
- key: "virtual-kubelet.io/provider"
operator: "Exists"
- key: "azure.com/aci"
effect: "NoSchedule"
- op: add
path: /spec/nodeSelector
value:
kubernetes.io/role: agent
beta.kubernetes.io/os: linux
type: virtual-kubelet
END_OF_TEXT
}
})
}
}
aci-workload.yaml (view on GitHub):
# Add tolerations and nodeSelector to the Workload to make it runnable on AKS virtual nodes
# served through Azure Container Instances (ACI).
# See https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aci-workload
entity:
name: aci-workload
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/tolerations
value:
- key: "virtual-kubelet.io/provider"
operator: "Exists"
- key: "azure.com/aci"
effect: "NoSchedule"
- op: add
path: /spec/nodeSelector
value:
kubernetes.io/role: agent
beta.kubernetes.io/os: linux
type: virtual-kubelet
criteria: []
Resourcequota
This example shows a sample usage of the base-env Resource Type. It is one of the implicit Resource Types that always get provisioned for a Deployment.
The Resource Definition base-env-resourcequota.yaml uses it to provision a Kubernetes manifest describing a ResourceQuota in the target namespace.
The base-env Resource Definition reads the configuration values from another Resource of type config using a Resource Reference. The reference specifies a Resource ID (config#quota) so that the proper config Resource Definition will be matched based on its matching criteria.
Two config Resource Definitions are provided:
config-quota.yaml will be matched for all references of res_id: quota
config-quota-override.yaml will additionally be matched for a particular app_id: my-app only, effectively providing an override for the configuration values for this particular Application ID
The Resource Graphs for two Applications, one of which matches the “override” criteria, will look like this:
flowchart LR
subgraph app2[Resource Graph "my-app"]
direction LR
workload2[Workload] --> baseEnv2(type: base-env\nid: base-env) --> config2("type: config\nid:quota")
end
subgraph app1[Resource Graph "some-app"]
direction LR
workload1[Workload] --> baseEnv1(type: base-env\nid: base-env) --> config1("type: config\nid: quota")
end
resDefBaseEnv[base-env\nResource Definition]
resDefBaseEnv -.-> baseEnv1
resDefBaseEnv -.-> baseEnv2
resDefQuotaConfig[config-quota\nResource Definition] -.->|criteria:\n- res_id: quota| config1
resDefQuotaConfigOverride[config-quota-override\nResource Definition] -.->|criteria:\n- res_id: quota\n app_id: my-app| config2
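For illustration, when the default config-quota values apply (limits-cpu: 500m, limits-memory: 500Mi), the quota.yaml manifest below renders as:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    limits.cpu: 500m
    limits.memory: 500Mi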
base-env-resourcequota.tf (view on GitHub):
resource "humanitec_resource_definition" "base-env" {
driver_type = "humanitec/template"
id = "base-env"
name = "base-env"
type = "base-env"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"manifests" = "quota.yaml:\n location: namespace\n data:\n apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: compute-resources\n spec:\n hard:\n limits.cpu: $${resources['config#quota'].outputs.limits-cpu}\n limits.memory: $${resources['config#quota'].outputs.limits-memory}"
}
})
}
}
resource "humanitec_resource_definition_criteria" "base-env_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.base-env.id
}
base-env-resourcequota.yaml (view on GitHub):
# This Resource Definition uses the base-env Resource type to create
# a ResourceQuota manifest in the target namespace.
# The actual values are read from a referenced config resource.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: base-env
entity:
name: base-env
type: base-env
driver_type: humanitec/template
driver_inputs:
values:
templates:
manifests: |-
quota.yaml:
location: namespace
data:
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-resources
spec:
hard:
limits.cpu: ${resources['config#quota'].outputs.limits-cpu}
limits.memory: ${resources['config#quota'].outputs.limits-memory}
criteria:
- {}
config-quota-override.tf (view on GitHub):
resource "humanitec_resource_definition" "quota-config-override" {
driver_type = "humanitec/echo"
id = "quota-config-override"
name = "quota-config-override"
type = "config"
driver_inputs = {
values_string = jsonencode({
"limits-cpu" = "750m"
"limits-memory" = "750Mi"
})
}
}
resource "humanitec_resource_definition_criteria" "quota-config-override_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.quota-config-override.id
res_id = "quota"
app_id = "my-app"
}
config-quota-override.yaml (view on GitHub):
# This Resource Definition uses the Echo Driver to provide configuration values
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: quota-config-override
entity:
name: quota-config-override
type: config
driver_type: humanitec/echo
driver_inputs:
# Any Driver inputs will be returned as outputs by the Echo Driver
values:
limits-cpu: "750m"
limits-memory: "750Mi"
# The matching criteria make this Resource Definition match for a particular app_id only
criteria:
- res_id: quota
app_id: my-app
config-quota.tf (view on GitHub):
resource "humanitec_resource_definition" "quota-config" {
driver_type = "humanitec/echo"
id = "quota-config"
name = "quota-config"
type = "config"
driver_inputs = {
values_string = jsonencode({
"limits-cpu" = "500m"
"limits-memory" = "500Mi"
})
}
}
resource "humanitec_resource_definition_criteria" "quota-config_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.quota-config.id
res_id = "quota"
}
config-quota.yaml (view on GitHub):
# This Resource Definition uses the Echo Driver to provide configuration values
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: quota-config
entity:
name: quota-config
type: config
driver_type: humanitec/echo
driver_inputs:
# Any Driver inputs will be returned as outputs by the Echo Driver
values:
limits-cpu: "500m"
limits-memory: "500Mi"
criteria:
- res_id: quota
Security context
This section contains example Resource Definitions using the Template Driver for adding a Security Context to a Kubernetes Deployment.
custom-workload-with-security-context.yaml: Add a Security Context to your Workload. This format is for use with the Humanitec CLI.
custom-workload-with-security-context.tf: Add a Security Context to your Workload. This format is for use with the Humanitec Terraform provider.
custom-workload-with-security-context.tf (view on GitHub):
resource "humanitec_resource_definition" "custom-workload" {
driver_type = "humanitec/template"
id = "custom-workload"
name = "custom-workload"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/securityContext
value:
fsGroup: 1000
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
{{- range $containerId, $value := .resource.spec.containers }}
- op: add
path: /spec/containers/{{ $containerId }}/securityContext
value:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
{{- end }}
END_OF_TEXT
}
})
}
}
resource "humanitec_resource_definition_criteria" "custom-workload_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.custom-workload.id
}
custom-workload-with-security-context.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: custom-workload
entity:
name: custom-workload
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/securityContext
value:
fsGroup: 1000
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
{{- range $containerId, $value := .resource.spec.containers }}
- op: add
path: /spec/containers/{{ $containerId }}/securityContext
value:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
{{- end }}
criteria:
- {}
Serviceaccount
This section contains example Resource Definitions using the Template Driver for provisioning Kubernetes ServiceAccounts for your Workloads.
The solution consists of a combination of two Resource Definitions of type workload and k8s-service-account.
The workload Resource Type is an implicit Type which is automatically referenced for any Deployment. This workload Resource Definition adds the serviceAccountName item to the Pod spec and references a k8s-service-account type Resource, causing it to be provisioned. The k8s-service-account Resource Definition generates the Kubernetes manifest for the actual ServiceAccount.
A Resource Graph for a Workload using those Resource Definitions will look like this:
flowchart LR
workloadVirtual[Workload "my-workload"] --> workload(id: modules.my-workload\ntype: workload\nclass: default)
workload --> serviceAccount(id: modules.my-workload\ntype: k8s-service-account\nclass: default)
Note that the resource id is used in the k8s-service-account Resource Definition to derive the name of the actual Kubernetes ServiceAccount. For example, a resource id of modules.my-workload yields the ServiceAccount name my-workload. Check the code for details.
serviceaccount-k8ssa-def.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "serviceaccount-k8s-service-account" {
driver_type = "humanitec/template"
id = "serviceaccount-k8s-service-account"
name = "serviceaccount-k8s-service-account"
type = "k8s-service-account"
driver_inputs = {
values_string = jsonencode({
"res_id" = "$${context.res.id}"
"templates" = {
"init" = "name: {{ index ( .driver.values.res_id | splitList \".\" ) 1 }}\n"
"outputs" = "name: {{ .init.name }}\n"
"manifests" = <<END_OF_TEXT
service-account.yaml:
location: namespace
data:
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .init.name }}
END_OF_TEXT
}
})
}
}
serviceaccount-k8ssa-def.yaml
(
view on GitHub
)
:
# This Resource Definition provisions a Kubernetes ServiceAccount
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: serviceaccount-k8s-service-account
entity:
driver_type: humanitec/template
name: serviceaccount-k8s-service-account
type: k8s-service-account
driver_inputs:
values:
res_id: ${context.res.id}
templates:
init: |
name: {{ index ( .driver.values.res_id | splitList "." ) 1 }}
outputs: |
name: {{ .init.name }}
manifests: |
service-account.yaml:
location: namespace
data:
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .init.name }}
serviceaccount-workload-def.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "serviceaccount-workload" {
driver_type = "humanitec/template"
id = "serviceaccount-workload"
name = "serviceaccount-workload"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/serviceAccountName
value: $${resources.k8s-service-account.outputs.name}
END_OF_TEXT
}
})
}
}
serviceaccount-workload-def.yaml
(
view on GitHub
)
:
# This Resource Definition adds a Kubernetes ServiceAccount to a Workload
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: serviceaccount-workload
entity:
driver_type: humanitec/template
name: serviceaccount-workload
type: workload
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/serviceAccountName
value: ${resources.k8s-service-account.outputs.name}
Tls cert
This section contains example Resource Definitions using the Template Driver for managing TLS Certificates in your cluster.
certificate-crd.yaml
: Add a certificate custom resource definition in the namespace of your deployment. This format is for use with the Humanitec CLI .
certificate-crd.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "certificate-crd" {
driver_type = "humanitec/template"
id = "certificate-crd"
name = "certificate-crd"
type = "tls-cert"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = <<END_OF_TEXT
tlsSecretName: {{ .id }}-tls
hostName: $${resources.dns.outputs.host}
certificateName: {{ .id }}-cert
END_OF_TEXT
"manifests" = <<END_OF_TEXT
certificate-crd.yml:
data:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ .init.certificateName }}
spec:
secretName: {{ .init.tlsSecretName }}
duration: 2160h # 90d
renewBefore: 720h # 30d
isCA: false
privateKey:
algorithm: RSA
encoding: PKCS1
size: 2048
usages:
- server auth
- client auth
dnsNames:
- {{ .init.hostName | toString | toRawJson }}
# The name of the issuerRef must point to the issuer / clusterIssuer in your cluster
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
location: namespace
END_OF_TEXT
"outputs" = "tls_secret_name: {{ .init.tlsSecretName }}\n"
}
})
}
}
resource "humanitec_resource_definition_criteria" "certificate-crd_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.certificate-crd.id
class = "default"
}
certificate-crd.yaml
(
view on GitHub
)
:
# This Resource Definition creates a certificate custom resource definition,
# which will instruct cert-manager to create a TLS certificate
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: certificate-crd
entity:
driver_type: humanitec/template
name: certificate-crd
type: tls-cert
criteria:
- class: default
driver_inputs:
values:
templates:
init: |
tlsSecretName: {{ .id }}-tls
hostName: ${resources.dns.outputs.host}
certificateName: {{ .id }}-cert
manifests: |
certificate-crd.yml:
data:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ .init.certificateName }}
spec:
secretName: {{ .init.tlsSecretName }}
duration: 2160h # 90d
renewBefore: 720h # 30d
isCA: false
privateKey:
algorithm: RSA
encoding: PKCS1
size: 2048
usages:
- server auth
- client auth
dnsNames:
- {{ .init.hostName | toString | toRawJson }}
# The name of the issuerRef must point to the issuer / clusterIssuer in your cluster
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
location: namespace
outputs: |
tls_secret_name: {{ .init.tlsSecretName }}
Tolerations
This section contains example Resource Definitions using the Template Driver for managing tolerations on your Pods.
tolerations.yaml
: Add tolerations to the Workload. This format is for use with the Humanitec CLI .
tolerations.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "workload-toleration" {
driver_type = "humanitec/template"
id = "workload-toleration"
name = "workload-toleration"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/tolerations
value:
- key: "example-key"
operator: "Exists"
effect: "NoSchedule"
END_OF_TEXT
}
})
}
}
tolerations.yaml
(
view on GitHub
)
:
# Add tolerations to the Workload by adding a value to the manifest at .spec.tolerations
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: workload-toleration
entity:
name: workload-toleration
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/tolerations
value:
- key: "example-key"
operator: "Exists"
effect: "NoSchedule"
criteria: []
Volumes
This section contains Resource Definition examples for handling Kubernetes Volumes by using the template Driver to configure your own PersistentVolume implementation. You can see this other example if you want to use the volume-pvc Driver.
You will find two examples:
volume-emptydir - in order to inject an emptyDir volume in a Workload for any request of a volume resource with the class ephemeral .
volume-nfs - in order to create the associated PersistentVolumeClaim , PersistentVolume and volume in a Workload for any request of a volume resource with the class nfs .
You can find a Score file example using the volume resource type here .
volume-emptydir.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "volume-emptydir" {
driver_type = "humanitec/template"
id = "volume-emptydir"
name = "volume-emptydir"
type = "volume"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"manifests" = {
"emptydir.yaml" = {
"location" = "volumes"
"data" = <<END_OF_TEXT
name: $${context.res.guresid}-emptydir
emptyDir:
sizeLimit: 1024Mi
END_OF_TEXT
}
}
}
})
}
}
resource "humanitec_resource_definition_criteria" "volume-emptydir_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.volume-emptydir.id
class = "ephemeral"
}
volume-emptydir.yaml
(
view on GitHub
)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: volume-emptydir
entity:
name: volume-emptydir
type: volume
driver_type: humanitec/template
driver_inputs:
values:
templates:
manifests:
emptydir.yaml:
location: volumes
data: |
name: ${context.res.guresid}-emptydir
emptyDir:
sizeLimit: 1024Mi
criteria:
- class: ephemeral
volume-nfs.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "volume-nfs" {
driver_type = "humanitec/template"
id = "volume-nfs"
name = "volume-nfs"
type = "volume"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = <<END_OF_TEXT
# Generate a unique id for each pv/pvc combination.
# Every Workload will have a separate pv and pvc created for it,
# but pointing to the same NFS server endpoint.
volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
pvBaseName: pv-tmpl-
pvcBaseName: pvc-tmpl-
volBaseName: vol-tmpl-
END_OF_TEXT
"manifests" = {
"app-pv-tmpl.yaml" = {
"location" = "namespace"
"data" = <<END_OF_TEXT
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ .init.pvBaseName }}{{ .init.volumeUid }}
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
nfs:
server: nfs-server.default.svc.cluster.local
path: "/"
mountOptions:
- nfsvers=4.2
END_OF_TEXT
}
"app-pvc-tmpl.yaml" = {
"location" = "namespace"
"data" = <<END_OF_TEXT
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Mi
volumeName: {{ .init.pvBaseName }}{{ .init.volumeUid }}
END_OF_TEXT
}
"app-vol-tmpl.yaml" = {
"location" = "volumes"
"data" = <<END_OF_TEXT
name: {{ .init.volBaseName }}{{ .init.volumeUid }}
persistentVolumeClaim:
claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
END_OF_TEXT
}
}
"outputs" = <<END_OF_TEXT
volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}
pvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
END_OF_TEXT
}
})
}
}
resource "humanitec_resource_definition_criteria" "volume-nfs_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.volume-nfs.id
class = "nfs"
}
volume-nfs.yaml
(
view on GitHub
)
:
# Using the Template Driver for the static provisioning of
# a Kubernetes PersistentVolume and PersistentVolumeClaim combination,
# then adding the volume into the Pod of the Workload.
# The volumeMount in the container is defined in the "workload" type Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: volume-nfs
entity:
name: volume-nfs
type: volume
driver_type: humanitec/template
driver_inputs:
values:
templates:
init: |
# Generate a unique id for each pv/pvc combination.
# Every Workload will have a separate pv and pvc created for it,
# but pointing to the same NFS server endpoint.
volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
pvBaseName: pv-tmpl-
pvcBaseName: pvc-tmpl-
volBaseName: vol-tmpl-
manifests:
####################################################################
# This template creates the PersistentVolume in the target namespace
# Modify the nfs server and path to address your NFS server
####################################################################
app-pv-tmpl.yaml:
location: namespace
data: |
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ .init.pvBaseName }}{{ .init.volumeUid }}
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
nfs:
server: nfs-server.default.svc.cluster.local
path: "/"
mountOptions:
- nfsvers=4.2
#########################################################################
# This template creates the PersistentVolumeClaim in the target namespace
#########################################################################
app-pvc-tmpl.yaml:
location: namespace
data: |
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Mi
volumeName: {{ .init.pvBaseName }}{{ .init.volumeUid }}
########################################################
# This template creates the volume in the Workload's Pod
########################################################
app-vol-tmpl.yaml:
location: volumes
data: |
name: {{ .init.volBaseName }}{{ .init.volumeUid }}
persistentVolumeClaim:
claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
# Make the volume name and pvc name available for other Resources
outputs: |
volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}
pvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
criteria:
- class: nfs
Terraform driver
Resource Definitions using the Terraform Driver
This section contains example Resource Definitions using the Terraform Driver .
Azure blob
Use the Terraform Driver to provision Azure Blob storage resources.
ssh-secret-refs.tf
: uses secret references to obtain an SSH key from a secret store to connect to the Git repo providing the Terraform code.
ssh-secret-refs.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "azure-blob" {
driver_type = "humanitec/terraform"
id = "azure-blob"
name = "azure-blob"
type = "azure-blob"
driver_inputs = {
# All secrets are read from a secret store using secret references
secret_refs = jsonencode({
variables = {
client_id = {
ref = var.client_id_secret_reference_key
store = var.secret_store
}
client_secret = {
ref = var.client_secret_secret_reference_key
store = var.secret_store
}
}
source = {
# Using an SSH key to authenticate against the Git repo providing the Terraform module
ssh_key = {
ref = var.ssh_key_secret_reference_key
store = var.secret_store
}
}
})
values_string = jsonencode({
source = {
path = "azure-blob/terraform/"
rev = "refs/heads/main"
url = "[email protected]:my-org/my-repo.git"
}
variables = {
# Variables for the Terraform module located in the Git repo
tenant_id = var.tenant_id
subscription_id = var.subscription_id
resource_group_name = var.resource_group_name
name = var.name
prefix = var.prefix
account_tier = var.account_tier
account_replication_type = var.account_replication_type
container_name = var.container_name
container_access_type = var.container_access_type
}
})
}
}
Backends
Backends
When a terraform apply is executed, data and metadata are generated that must be stored in order to keep track of the resources that have been created. For example, the ID of an S3 bucket might be generated. This ID is needed in order to update or destroy the bucket on future invocations. This data is stored in a Terraform State file . The state file should be considered sensitive data as it can contain secrets such as credentials or keys.
Terraform provides various ways of storing the state file in different places. These are called Backends .
Humanitec recommends that you configure a backend for all Resource Definitions using the Terraform Driver. That way you maintain control over the state file and any sensitive data it may contain.
Configuring a backend
Terraform block
Backend configuration can be defined in the terraform block . Terraform imposes limitations on the terraform block. For example, Terraform variables cannot be used to parameterize the terraform block. This means that the terraform block must be generated ahead of time.
Backends can be configured via a backend block inside the terraform block. An example block would be:
terraform {
backend "s3" {
bucket = "mybucket"
key = "path/to/my/key"
region = "us-east-1"
}
}
Unique state key
The key used to identify the Terraform state needs to be unique across all instances of a resource being created. A resource is uniquely described by its context: the Application ID, Environment ID, resource Type and Class, and the Resource ID.
Here is an example string of placeholders that will uniquely define a resource.
${context.app.id}_${context.env.id}_${context.res.type}_${context.res.class}_${context.res.id}
A less descriptive, but equally unique key would be the Globally Unique RESource ID (GUResID). This is available for the current resource via this placeholder:
${context.res.guresid}
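For illustration, here is a minimal sketch of an S3 backend block using these placeholders as the state key; the bucket and region values are assumptions:
terraform {
  backend "s3" {
    # Hypothetical bucket and region
    bucket = "my-terraform-state-bucket"
    key    = "${context.app.id}_${context.env.id}_${context.res.type}_${context.res.class}_${context.res.id}"
    region = "us-east-1"
  }
}
The Platform Orchestrator resolves the placeholders before the Terraform code runs, so every resource instance gets its own state entry.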
Credentials
State files often contain sensitive data. Therefore they should be stored on a backend that supports authorization. This means that the backend should be configured with credentials in Terraform.
There are 2 broad types of credentials that could be used:
- Temporary credentials
- Long-lived credentials
The Platform Orchestrator supports Temporary Credentials via Cloud Accounts . All backends can be configured via environment variables, so it is recommended to use the credentials_config object to specify credentials via the appropriate environment variable for the backend.
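For example, a minimal sketch of a credentials_config (taken from the S3 backend example below) that maps the temporary credentials of an AWS Cloud Account onto the environment variables the S3 backend reads:
values_string = jsonencode({
  credentials_config = {
    environment = {
      AWS_ACCESS_KEY_ID     = "AccessKeyId"
      AWS_SECRET_ACCESS_KEY = "SecretAccessKey"
      AWS_SESSION_TOKEN     = "SessionToken"
    }
  }
})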
For long-lived credentials, it is recommended to store the credentials in a config resource definition and inject them using a placeholder.
Examples
The following examples of configuring backends are provided:
S3 backend using temporary credentials
GitLab HTTP backend using long-lived credentials
Gitlab
GitLab HTTP Backend using Long Lived credentials
GitLab implements the Terraform HTTP backend . In order to use the Terraform backend in GitLab, the following is needed:
- A Personal Access Token with api scope
- A GitLab project that the token has access to.
This example has a simple resource definition using the Terraform Driver. The backend configuration is generated via a config
resource and then injected as a file in the terraform resource definition using a placeholder.
The following needs to be defined in the config for this example to work:
.entity.driver_inputs.values.gitlab_project_id - Should be the numerical ID of the GitLab project being used to store the state.
.entity.driver_inputs.secret_refs.username - The username that the Personal Access Token is associated with.
.entity.driver_inputs.secret_refs.password - The value of the Personal Access Token
gitlab-backend.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "example-terraform-gitlab-backend-s3" {
driver_type = "humanitec/terraform"
id = "example-terraform-gitlab-backend-s3"
name = "example-terraform-gitlab-backend-s3"
type = "s3"
driver_inputs = {
values_string = jsonencode({
"files" = {
"main.tf" = <<END_OF_TEXT
resource "random_id" "thing" {
byte_length = 8
}
output "bucket" {
value = random_id.thing.hex
}
END_OF_TEXT
}
})
secrets_string = jsonencode({
"files" = {
"backend.tf" = "$${resources['config.tf-runner'].outputs.backend_tf}"
}
})
}
}
gitlab-backend.yaml
(
view on GitHub
)
:
# This Resource Definition uses GitLab as the Terraform backend to store Terraform state
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: example-terraform-gitlab-backend-s3
entity:
driver_inputs:
values:
files:
main.tf: |
resource "random_id" "thing" {
byte_length = 8
}
output "bucket" {
value = random_id.thing.hex
}
secrets:
files:
# We don't supply the res_id so that it can be passed through to build the state key
backend.tf: ${resources['config.tf-runner'].outputs.backend_tf}
driver_type: humanitec/terraform
name: example-terraform-gitlab-backend-s3
type: s3
# Supply matching criteria
criteria: []
tf-be-config.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "example-terraform-gitlab-backend-config" {
driver_type = "humanitec/template"
id = "example-terraform-gitlab-backend-config"
name = "example-terraform-gitlab-backend-config"
type = "config"
driver_inputs = {
values_string = jsonencode({
"gitlab_project_id" = ""
"state_name" = "$${context.app.id}_$${context.env.id}_$${context.res.id}"
"templates" = {
"init" = "address: https://gitlab.com/api/v4/projects/{{ .driver.values.gitlab_project_id }}/terraform/state/{{ .driver.values.state_name | replace \".\" \"_\" }}\n"
"outputs" = <<END_OF_TEXT
# Useful for debugging to output the address as an output
address: {{ .init.address }}
END_OF_TEXT
"secrets" = <<END_OF_TEXT
backend_tf: |
terraform {
# https://developer.hashicorp.com/terraform/language/v1.5.x/settings/backends/configuration
# https://developer.hashicorp.com/terraform/language/v1.5.x/settings/backends/http
backend "http" {
address = "{{ .init.address }}"
lock_address = "{{ .init.address }}/lock"
lock_method = "POST"
unlock_address = "{{ .init.address }}/lock"
unlock_method = "DELETE"
username = "{{ .driver.secrets.username }}"
password = "{{ .driver.secrets.password }}"
retry_wait_min = 5
}
}
END_OF_TEXT
}
})
secret_refs = jsonencode({
"username" = {
"store" = null
"ref" = null
}
"password" = {
"store" = null
"ref" = null
}
})
}
}
resource "humanitec_resource_definition_criteria" "example-terraform-gitlab-backend-config_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.example-terraform-gitlab-backend-config.id
class = "tf-runner"
}
tf-be-config.yaml
(
view on GitHub
)
:
# This Resource Definition provides backend configuration for using GitLab to store the Terraform state.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: example-terraform-gitlab-backend-config
entity:
criteria:
- class: tf-runner
driver_type: humanitec/template
driver_inputs:
values:
# Provide the ID of the GitLab Project - it should be a long number as a string
gitlab_project_id: ""
state_name: ${context.app.id}_${context.env.id}_${context.res.id}
templates:
init: |
address: https://gitlab.com/api/v4/projects/{{ .driver.values.gitlab_project_id }}/terraform/state/{{ .driver.values.state_name | replace "." "_" }}
outputs: |
# Useful for debugging to output the address as an output
address: {{ .init.address }}
secrets: |
backend_tf: |
terraform {
# https://developer.hashicorp.com/terraform/language/v1.5.x/settings/backends/configuration
# https://developer.hashicorp.com/terraform/language/v1.5.x/settings/backends/http
backend "http" {
address = "{{ .init.address }}"
lock_address = "{{ .init.address }}/lock"
lock_method = "POST"
unlock_address = "{{ .init.address }}/lock"
unlock_method = "DELETE"
username = "{{ .driver.secrets.username }}"
password = "{{ .driver.secrets.password }}"
retry_wait_min = 5
}
}
secret_refs:
# The Username associated with your Personal Access Token
username:
store:
ref:
# The Personal Access Token
password:
store:
ref:
type: config
name: example-terraform-gitlab-backend-config
S3
S3 Backend using Temporary Credentials
The Backend is configured using the backend
block. A config
resource holds the key configuration for the backend.
The Credentials for the backend are automatically read in via the AWS_
environment variables defined in credentials_config
.
The following needs to be defined in the tf-be-config.yaml resource definition:
.entity.driver_account - Should be the ID of the Cloud Account that was configured.
.entity.driver_inputs.values.bucket - Should be the ID of the S3 bucket.
.entity.driver_inputs.values.prefix - Should be the prefix for the state path.
.entity.driver_inputs.values.region - The region that the bucket is in.
It is critical that the Identity defined in the driver account has access to the S3 bucket.
For example, using this policy document, replacing my-terraform-state-bucket
with your bucket ID:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::my-terraform-state-bucket",
"arn:aws:s3:::my-terraform-state-bucket/*"
]
}
]
}
s3-backend.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "example-terraform-s3-backend-s3" {
driver_type = "humanitec/terraform"
id = "example-terraform-s3-backend-s3"
name = "s3-backend-example"
type = "s3"
driver_account = "$${resources.config#tf-backend.account}"
driver_inputs = {
values_string = jsonencode({
"credentials_config" = {
"environment" = {
"AWS_ACCESS_KEY_ID" = "AccessKeyId"
"AWS_SECRET_ACCESS_KEY" = "SecretAccessKey"
"AWS_SESSION_TOKEN" = "SessionToken"
}
}
"script" = <<END_OF_TEXT
terraform {
backend "s3" {
bucket = "$${resources.config#tf-backend.outputs.bucket}"
key = "$${resources.config#tf-backend.outputs.prefix}$${context.app.id}/$${context.env.id}/$${context.res.type}.$${context.res.class}/$${context.res.id}"
region = "$${resources.config#tf-backend.outputs.region}"
}
}
resource "random_id" "thing" {
byte_length = 8
}
output "bucket" {
value = "$\{random_id.thing.hex}"
}
END_OF_TEXT
})
}
}
s3-backend.yaml
(
view on GitHub
)
:
# This Resource Definition uses an S3 bucket as the Terraform backend to store Terraform state
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: example-terraform-s3-backend-s3
entity:
driver_account: ${resources.config#tf-backend.account}
driver_inputs:
values:
credentials_config:
environment:
AWS_ACCESS_KEY_ID: "AccessKeyId"
AWS_SECRET_ACCESS_KEY: "SecretAccessKey"
AWS_SESSION_TOKEN: "SessionToken"
script: |
terraform {
backend "s3" {
bucket = "${resources.config#tf-backend.outputs.bucket}"
key = "${resources.config#tf-backend.outputs.prefix}${context.app.id}/${context.env.id}/${context.res.type}.${context.res.class}/${context.res.id}"
region = "${resources.config#tf-backend.outputs.region}"
}
}
resource "random_id" "thing" {
byte_length = 8
}
output "bucket" {
value = "$\{random_id.thing.hex}"
}
driver_type: humanitec/terraform
name: s3-backend-example
type: s3
# Supply matching criteria
criteria: []
tf-be-config.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "tf-backend-config" {
driver_type = "humanitec/echo"
id = "tf-backend-config"
name = "tf-backend-config"
type = "config"
driver_account = "aws-ref-arch"
driver_inputs = {
values_string = jsonencode({
"bucket" = "my-terraform-state-bucket"
"prefix" = "tf-state/"
"region" = "us-east-1"
})
}
}
resource "humanitec_resource_definition_criteria" "tf-backend-config_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.tf-backend-config.id
res_id = "tf-backend"
}
tf-be-config.yaml
(
view on GitHub
)
:
# This Resource Definition provides configuration for using an S3 bucket to store the Terraform state.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: tf-backend-config
entity:
criteria:
# This res_id is used in the resource reference in the s3-backend Resource Definition.
- res_id: tf-backend
driver_account: aws-ref-arch
driver_inputs:
values:
bucket: my-terraform-state-bucket
prefix: tf-state/
region: us-east-1
driver_type: humanitec/echo
name: tf-backend-config
type: config
Co provision
Resource co-provisioning
This section contains an example of Resource Definitions using the Terraform Driver and illustrating the co-provisioning concept.
Scenario: For each AWS S3 bucket resource an AWS IAM policy resource must be created. The bucket properties (region, ARN) should be passed to the policy resource. In other words, an IAM Policy resource depends on an S3 resource, but it needs to be created automatically.
Any time a Workload references an S3 resource using this Resource Definition, an IAM Policy resource will be co-provisioned and reference the S3 resource. The resulting Resource Graph will look like this:
flowchart LR
R1(Workload) --->|references| R2(S3)
N1(AWS Policy) --->|references| R2
classDef pClass stroke-width:1px
classDef rClass stroke-width:2px
classDef nClass stroke-width:2px,stroke-dasharray: 5 5
class R1 pClass
class R2 rClass
class N1 nClass
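The co-provisioning itself is declared through the provision key on the s3 Resource Definition, sketched here in isolation (the full definitions follow below):
provision = {
  "aws-policy" = {
    # The aws-policy resource is not a dependency of the s3 resource;
    # instead, it references the s3 outputs itself
    is_dependent = false
  }
}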
aws-policy-co-provision.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "aws-policy-co-provision" {
driver_type = "humanitec/terraform"
id = "aws-policy-co-provision"
name = "aws-policy-co-provision"
type = "aws-policy"
driver_account = "aws"
driver_inputs = {
values_string = jsonencode({
"variables" = {
"REGION" = "$${resources.s3.outputs.region}"
"BUCKET" = "$${resources.s3.outputs.bucket}"
"BUCKET_ARN" = "$${resources.s3.outputs.arn}"
}
"credentials_config" = {
"variables" = {
"ACCESS_KEY_ID" = "AccessKeyId"
"ACCESS_KEY_VALUE" = "SecretAccessKey"
}
}
"script" = <<END_OF_TEXT
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "aws" {
region = var.REGION
access_key = var.ACCESS_KEY_ID
secret_key = var.ACCESS_KEY_VALUE
}
# ... Terraform code reduced for brevity
resource "aws_iam_policy" "bucket" {
name = "$\{var.BUCKET}-policy"
policy = data.aws_iam_policy_document.main.json
}
data "aws_iam_policy_document" "main" {
statement {
effect = "Allow"
actions = [
"s3:GetObject",
"s3:ListBucket",
]
resources = [
var.BUCKET_ARN,
]
}
}
END_OF_TEXT
})
}
}
aws-policy-co-provision.yaml
(
view on GitHub
)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aws-policy-co-provision
entity:
name: aws-policy-co-provision
type: aws-policy
driver_type: humanitec/terraform
# Use the credentials injected via the driver_account to set variables as expected by your Terraform code
driver_account: aws
driver_inputs:
values:
variables:
REGION: ${resources.s3.outputs.region}
BUCKET: ${resources.s3.outputs.bucket}
BUCKET_ARN: ${resources.s3.outputs.arn}
credentials_config:
variables:
ACCESS_KEY_ID: AccessKeyId
ACCESS_KEY_VALUE: SecretAccessKey
script: |
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "aws" {
region = var.REGION
access_key = var.ACCESS_KEY_ID
secret_key = var.ACCESS_KEY_VALUE
}
# ... Terraform code reduced for brevity
resource "aws_iam_policy" "bucket" {
name = "$\{var.BUCKET}-policy"
policy = data.aws_iam_policy_document.main.json
}
data "aws_iam_policy_document" "main" {
statement {
effect = "Allow"
actions = [
"s3:GetObject",
"s3:ListBucket",
]
resources = [
var.BUCKET_ARN,
]
}
}
s3-co-provision.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "s3-co-provision" {
driver_type = "humanitec/terraform"
id = "s3-co-provision"
name = "s3-co-provision"
type = "s3"
driver_account = "aws"
driver_inputs = {
values_string = jsonencode({
"variables" = {
"REGION" = "eu-central-1"
}
"credentials_config" = {
"variables" = {
"ACCESS_KEY_ID" = "AccessKeyId"
"ACCESS_KEY_VALUE" = "SecretAccessKey"
}
}
"script" = <<END_OF_TEXT
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "aws" {
region = var.REGION
access_key = var.ACCESS_KEY_ID
secret_key = var.ACCESS_KEY_VALUE
}
# ... Terraform code reduced for brevity
resource "aws_s3_bucket" "bucket" {
bucket = my-bucket
}
output "bucket" {
value = aws_s3_bucket.main.id
}
output "arn" {
value = aws_s3_bucket.main.arn
}
output "region" {
value = aws_s3_bucket.main.region
}
END_OF_TEXT
})
}
provision = {
"aws-policy" = {
is_dependent = false
}
}
}
s3-co-provision.yaml
(
view on GitHub
)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: s3-co-provision
entity:
name: s3-co-provision
type: s3
driver_type: humanitec/terraform
# Use the credentials injected via the driver_account to set variables as expected by your Terraform code
driver_account: aws
driver_inputs:
values:
variables:
REGION: eu-central-1
credentials_config:
variables:
ACCESS_KEY_ID: AccessKeyId
ACCESS_KEY_VALUE: SecretAccessKey
script: |
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "aws" {
region = var.REGION
access_key = var.ACCESS_KEY_ID
secret_key = var.ACCESS_KEY_VALUE
}
# ... Terraform code reduced for brevity
resource "aws_s3_bucket" "bucket" {
bucket = my-bucket
}
output "bucket" {
value = aws_s3_bucket.main.id
}
output "arn" {
value = aws_s3_bucket.main.arn
}
output "region" {
value = aws_s3_bucket.main.region
}
# Co-provision aws-policy resource
provision:
aws-policy:
is_dependent: false
Credentials
Credentials
General credentials configuration
Different Terraform providers have different ways of being configured. Generally, there are 3 ways that providers can be configured:
- Directly using parameters on the provider. We call this “provider” credentials.
- Using a credentials file. The filename is supplied to the provider. We call this “file” credentials.
- Via environment variables that the provider reads. We call this “environment” credentials.
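These three styles correspond to the variables, environment, and file keys of the credentials_config object used by the examples below. A combined sketch for illustration only; a real Resource Definition uses just the style its provider expects:
credentials_config = {
  variables   = { access_token = "access_token" }     # "provider" credentials passed as Terraform variables
  environment = { AWS_ACCESS_KEY_ID = "AccessKeyId" } # "environment" credentials exposed as environment variables
  file        = "credentials.json"                    # "file" credentials written to the named file
}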
A powerful approach for working with different cloud accounts for the same resource definition is to reference the credentials from a config
resource. By using matching criteria on the config
resource, it is possible to specialize the account used in the Terraform to different contexts. For example, there might be different AWS Accounts for test and production environments. The same resource definition can be used to manage the Terraform, and two config resources can be created matching the test and production environments respectively.
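As a sketch of this pattern, matching criteria could assign one of two hypothetical config Resource Definitions (account-config-aws-test and account-config-aws-prod, each referencing its own Cloud Account) per environment type. The definition names and env_type values are assumptions:
resource "humanitec_resource_definition_criteria" "account-config-aws-test_criteria_0" {
  resource_definition_id = resource.humanitec_resource_definition.account-config-aws-test.id
  res_id                 = "aws-account"
  env_type               = "test"       # matched for test environments
}
resource "humanitec_resource_definition_criteria" "account-config-aws-prod_criteria_0" {
  resource_definition_id = resource.humanitec_resource_definition.account-config-aws-prod.id
  res_id                 = "aws-account"
  env_type               = "production" # matched for production environments
}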
In this set of examples, we provide two config Resource Definitions for AWS and GCP.
AWS
Account config ( account-config-aws.yaml )
Provider Credentials ( aws-provider-credentials.yaml )
Environment Credentials ( aws-environment-credentials.yaml )
GCP
Account config ( account-config-gcp.yaml )
File Credentials ( gcp-file-credentials.yaml )
Temporary credentials
Using a Cloud Account type that supports temporary credentials, those credentials can be easily injected into a Resource Definition using the Terraform Driver. Use a driver_account referencing the Cloud Account in the Resource Definition, and access its credentials through the supplied values as shown in the examples.
AWS
S3 bucket ( s3-temporary-credentials.yaml )
GCP
Cloud Storage bucket ( gcs-temporary-credentials.yaml )
Azure
Blob Storage container ( azure-blob-storage-temporary-credentials.yaml )
account-config-aws.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "account-config-aws" {
driver_type = "humanitec/echo"
id = "account-config-aws"
name = "account-config-aws"
type = "config"
driver_account = "aws-credentials"
driver_inputs = {
values_string = jsonencode({
"region" = "us-east-1"
})
}
}
resource "humanitec_resource_definition_criteria" "account-config-aws_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.account-config-aws.id
res_id = "aws-account"
}
account-config-aws.yaml
(
view on GitHub
)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: account-config-aws
entity:
criteria:
# This res_id is used in the resource reference in the s3-backend Resource Definition.
- res_id: aws-account
# The driver_account references a Cloud Account configured in the Platform Orchestrator.
# Replace with the name your AWS Cloud Account.
driver_account: aws-credentials
driver_inputs:
values:
region: us-east-1
driver_type: humanitec/echo
name: account-config-aws
type: config
account-config-gcp.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "account-config-gcp" {
driver_type = "humanitec/echo"
id = "account-config-gcp"
name = "account-config-gcp"
type = "config"
driver_account = "gcp-credentials"
driver_inputs = {
values_string = jsonencode({
"location" = "US"
"project_id" = "my-gcp-prject"
})
}
}
resource "humanitec_resource_definition_criteria" "account-config-gcp_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.account-config-gcp.id
res_id = "gcp-account"
}
account-config-gcp.yaml
(
view on GitHub
)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: account-config-gcp
entity:
criteria:
# This res_id is used in the resource reference in the gcp-file-credentials Resource Definition.
- res_id: gcp-account
# The driver_account references a Cloud Account configured in the Platform Orchestrator.
# Replace with the name your GCP Cloud Account.
driver_account: gcp-credentials
driver_inputs:
values:
location: US
project_id: my-gcp-project
driver_type: humanitec/echo
name: account-config-gcp
type: config
aws-environment-credentials.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "aws-environment-credentials" {
driver_type = "humanitec/terraform"
id = "aws-environment-credentials"
name = "aws-environment-credentials"
type = "s3"
driver_account = "$${resources['config.default#aws-account'].account}"
driver_inputs = {
values_string = jsonencode({
"credentials_config" = {
"environment" = {
"AWS_ACCESS_KEY_ID" = "AccessKeyId"
"AWS_SECRET_ACCESS_KEY" = "SecretAccessKey"
"AWS_SESSION_TOKEN" = "SessionToken"
}
}
"script" = <<END_OF_TEXT
variable "region" {}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = var.region
}
output "bucket" {
value = aws_s3_bucket.bucket.bucket
}
output "region" {
value = var.region
}
resource "aws_s3_bucket" "bucket" {
bucket = "$\{replace("$${context.res.id}", "/^.*\\./", "")}-standard-$${context.env.id}-$${context.app.id}-$${context.org.id}"
tags = {
Humanitec = true
}
}
END_OF_TEXT
"variables" = {
"region" = "$${resources['config.default#aws-account'].outputs.region}"
}
})
}
}
aws-environment-credentials.yaml
(
view on GitHub
)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aws-environment-credentials
entity:
# Use the account provided by the config resource
driver_account: ${resources['config.default#aws-account'].account}
driver_inputs:
values:
credentials_config:
environment:
AWS_ACCESS_KEY_ID: "AccessKeyId"
AWS_SECRET_ACCESS_KEY: "SecretAccessKey"
AWS_SESSION_TOKEN: "SessionToken"
script: |
variable "region" {}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = var.region
}
output "bucket" {
value = aws_s3_bucket.bucket.bucket
}
output "region" {
value = var.region
}
resource "aws_s3_bucket" "bucket" {
bucket = "$\{replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
tags = {
Humanitec = true
}
}
variables:
region: ${resources['config.default#aws-account'].outputs.region}
driver_type: humanitec/terraform
name: aws-environment-credentials
type: s3
# Supply matching criteria
criteria: []
aws-provider-credentials.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "aws-provider-credentials" {
driver_type = "humanitec/terraform"
id = "aws-provider-credentials"
name = "aws-provider-credentials"
type = "s3"
driver_account = "$${resources['config.default#aws-account'].account}"
driver_inputs = {
values_string = jsonencode({
"credentials_config" = {
"variables" = {
"access_key_id" = "AccessKeyId"
"secret_access_key" = "SecretAccessKey"
"session_token" = "SessionToken"
}
}
"script" = <<END_OF_TEXT
variable "access_key_id" {
sensitive = true
}
variable "secret_access_key" {
sensitive = true
}
variable "session_token" {
sensitive = true
}
variable "region" {}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = var.region
access_key = var.access_key_id
secret_key = var.secret_access_key
token = var.session_token
}
output "bucket" {
value = aws_s3_bucket.bucket.bucket
}
output "region" {
value = var.region
}
resource "aws_s3_bucket" "bucket" {
bucket = "$\{replace("$${context.res.id}", "/^.*\\./", "")}-standard-$${context.env.id}-$${context.app.id}-$${context.org.id}"
tags = {
Humanitec = true
}
}
END_OF_TEXT
"variables" = {
"region" = "$${resources['config.default#aws-account'].outputs.region}"
}
})
}
}
aws-provider-credentials.yaml
(
view on GitHub
)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aws-provider-credentials
entity:
# Use the account provided by the config resource
driver_account: ${resources['config.default#aws-account'].account}
driver_inputs:
values:
credentials_config:
variables:
access_key_id: "AccessKeyId"
secret_access_key: "SecretAccessKey"
session_token: "SessionToken"
script: |
variable "access_key_id" {
sensitive = true
}
variable "secret_access_key" {
sensitive = true
}
variable "session_token" {
sensitive = true
}
variable "region" {}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = var.region
access_key = var.access_key_id
secret_key = var.secret_access_key
token = var.session_token
}
output "bucket" {
value = aws_s3_bucket.bucket.bucket
}
output "region" {
value = var.region
}
resource "aws_s3_bucket" "bucket" {
bucket = "$\{replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
tags = {
Humanitec = true
}
}
variables:
region: ${resources['config.default#aws-account'].outputs.region}
driver_type: humanitec/terraform
name: aws-provider-credentials
type: s3
# Supply matching criteria
criteria: []
azure-blob-storage-temporary-credentials.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "blob-storage-temporary-credentials" {
driver_type = "humanitec/terraform"
id = "blob-storage-temporary-credentials"
name = "blob-storage-temporary-credentials"
type = "azure-blob"
driver_account = "azure-temporary-creds"
driver_inputs = {
values_string = jsonencode({
"variables" = {
"location" = "eastus"
"resource_group_name" = "my-test-resources"
"tenant_id" = "3987ae5f-008f-4265-a6ee-e9dcedce4742"
"subscription_id" = "742f6d8b-1b7b-4c6a-9f37-90bdd5aeb996"
"client_id" = "c977c44d-3003-464c-b163-03920d4a390b"
}
"credentials_config" = {
"variables" = {
"oidc_token" = "oidc_token"
}
}
"script" = <<END_OF_TEXT
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "azurerm" {
features {}
subscription_id = var.subscription_id
tenant_id = var.tenant_id
client_id = var.client_id
use_oidc = true
oidc_token = var.oidc_token
}
# ... Terraform code reduced for brevity
resource "azurerm_storage_account" "example" {
name = "mystorageaccount"
resource_group_name = var.resource_group_name
location = var.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "example" {
name = "mystorage"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
END_OF_TEXT
})
}
}
azure-blob-storage-temporary-credentials.yaml
(
view on GitHub
)
:
# Create Azure Blob Storage container using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: blob-storage-temporary-credentials
entity:
name: blob-storage-temporary-credentials
type: azure-blob
driver_type: humanitec/terraform
# The driver_account references a Cloud Account of type "azure-identity"
# which needs to be configured for your Organization.
driver_account: azure-temporary-creds
driver_inputs:
values:
variables:
location: eastus
resource_group_name: my-test-resources
tenant_id: 3987ae5f-008f-4265-a6ee-e9dcedce4742
subscription_id: 742f6d8b-1b7b-4c6a-9f37-90bdd5aeb996
# Managed Identity Client ID used in the Cloud Account
client_id: c977c44d-3003-464c-b163-03920d4a390b
# Use the credentials injected via the driver_account
# to set `oidc_token` variable as expected by your Terraform code
credentials_config:
variables:
oidc_token: oidc_token
script: |
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "azurerm" {
features {}
subscription_id = var.subscription_id
tenant_id = var.tenant_id
client_id = var.client_id
use_oidc = true
oidc_token = var.oidc_token
}
# ... Terraform code reduced for brevity
resource "azurerm_storage_account" "example" {
name = "mystorageaccount"
resource_group_name = var.resource_group_name
location = var.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "example" {
name = "mystorage"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
gcp-file-credentials.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "gcp-file-credentials" {
driver_type = "humanitec/terraform"
id = "gcp-file-credentials"
name = "gcp-file-credentials"
type = "gcs"
driver_account = "$${resources['config.default#gcp-account'].account}"
driver_inputs = {
values_string = jsonencode({
"credentials_config" = {
"file" = "credentials.json"
}
"script" = <<END_OF_TEXT
variable "project_id" {}
variable "location" {}
terraform {
required_providers {
google = {
source = "hashicorp/google"
}
}
}
provider "google" {
project = var.project_id
# The file is defined above. The provider will read a service account token from this file.
credentials = "credentials.json"
}
output "name" {
value = google_storage_bucket.bucket.name
}
resource "google_storage_bucket" "bucket" {
name = "$\{replace("$${context.res.id}", "/^.*\\./", "")}-standard-$${context.env.id}-$${context.app.id}-$${context.org.id}"
location = var.location
force_destroy = true
}
END_OF_TEXT
"variables" = {
"location" = "$${resources.config#gcp-account.outputs.location}"
"project_id" = "$${resources.config#gcp-account.outputs.project_id}"
}
})
}
}
gcp-file-credentials.yaml
(
view on GitHub
)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: gcp-file-credentials
entity:
driver_account: ${resources['config.default#gcp-account'].account}
driver_inputs:
values:
credentials_config:
file: credentials.json
script: |
variable "project_id" {}
variable "location" {}
terraform {
required_providers {
google = {
source = "hashicorp/google"
}
}
}
provider "google" {
project = var.project_id
# The file is defined above. The provider will read a service account token from this file.
credentials = "credentials.json"
}
output "name" {
value = google_storage_bucket.bucket.name
}
resource "google_storage_bucket" "bucket" {
name = "$\{replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
location = var.location
force_destroy = true
}
variables:
location: ${resources.config#gcp-account.outputs.location}
project_id: ${resources.config#gcp-account.outputs.project_id}
driver_type: humanitec/terraform
name: gcp-file-credentials
type: gcs
# Supply matching criteria
criteria: []
gcs-temporary-credentials.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "gcs-temporary-credentials" {
driver_type = "humanitec/terraform"
id = "gcs-temporary-credentials"
name = "gcs-temporary-credentials"
type = "gcs"
driver_account = "gcp-temporary-creds"
driver_inputs = {
values_string = jsonencode({
"variables" = {
"location" = "europe-west3"
"project_id" = "my-gcp-project"
}
"credentials_config" = {
"variables" = {
"access_token" = "access_token"
}
}
"script" = <<END_OF_TEXT
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "google" {
project = var.project_id
access_token = var.access_token
}
# ... Terraform code reduced for brevity
resource "google_storage_bucket" "bucket" {
name = "my-bucket"
location = var.location
}
END_OF_TEXT
})
}
}
gcs-temporary-credentials.yaml
(
view on GitHub
)
:
# Create Google Cloud Storage bucket using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: gcs-temporary-credentials
entity:
name: gcs-temporary-credentials
type: gcs
driver_type: humanitec/terraform
# The driver_account references a Cloud Account of type "gcp-identity"
# which needs to be configured for your Organization.
driver_account: gcp-temporary-creds
driver_inputs:
values:
variables:
location: europe-west3
project_id: my-gcp-project
# Use the credentials injected via the driver_account
# to set variables as expected by your Terraform code
credentials_config:
variables:
access_token: access_token
script: |
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "google" {
project = var.project_id
access_token = var.access_token
}
# ... Terraform code reduced for brevity
resource "google_storage_bucket" "bucket" {
name = "my-bucket"
location = var.location
}
s3-temporary-credentials.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "s3-temporary-credentials" {
driver_type = "humanitec/terraform"
id = "s3-temporary-credentials"
name = "s3-temporary-credentials"
type = "s3"
driver_account = "aws-temp-creds"
driver_inputs = {
values_string = jsonencode({
"variables" = {
"REGION" = "eu-central-1"
}
"credentials_config" = {
"variables" = {
"ACCESS_KEY_ID" = "AccessKeyId"
"ACCESS_KEY_VALUE" = "SecretAccessKey"
"SESSION_TOKEN" = "SessionToken"
}
}
"script" = <<END_OF_TEXT
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "aws" {
region = var.REGION
access_key = var.ACCESS_KEY_ID
secret_key = var.ACCESS_KEY_VALUE
token = var.SESSION_TOKEN
}
# ... Terraform code reduced for brevity
resource "aws_s3_bucket" "bucket" {
bucket = "my-bucket"
}
END_OF_TEXT
})
}
}
s3-temporary-credentials.yaml
(
view on GitHub
)
:
# Create S3 bucket using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: s3-temporary-credentials
entity:
name: s3-temporary-credentials
type: s3
driver_type: humanitec/terraform
# The driver_account references a Cloud Account of type "aws-role"
# which needs to be configured for your Organization.
driver_account: aws-temp-creds
driver_inputs:
values:
variables:
REGION: eu-central-1
# Use the credentials injected via the driver_account
# to set variables as expected by your Terraform code
credentials_config:
variables:
ACCESS_KEY_ID: AccessKeyId
ACCESS_KEY_VALUE: SecretAccessKey
SESSION_TOKEN: SessionToken
script: |
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "aws" {
region = var.REGION
access_key = var.ACCESS_KEY_ID
secret_key = var.ACCESS_KEY_VALUE
token = var.SESSION_TOKEN
}
# ... Terraform code reduced for brevity
resource "aws_s3_bucket" "bucket" {
bucket = "my-bucket"
}
Custom git config
Custom git-config for sourcing Terraform modules
This section contains an example of providing a custom git-config to be used by Terraform when accessing module sources from private Git repositories.
Terraform can use modules from various sources including git . The documentation states : "Terraform installs modules from Git repositories by running git clone, and so it will respect any local Git configuration set on your system, including credentials. To access a non-public Git repository, configure Git with suitable credentials for that repository."
Custom git configuration can be provided by including a file with the name .gitconfig in the files input. This file can be either a value or a secret depending on whether it contains sensitive credentials or not.
In this example we add a git-config that re-writes URLs.
example-def.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "example-git-config" {
driver_type = "humanitec/terraform"
id = "example-git-config"
name = "example-git-config"
type = "s3"
driver_inputs = {
values_string = jsonencode({
"files" = {
".gitconfig" = <<END_OF_TEXT
[url "https://github.com/Invicton-Labs/"]
insteadOf = https://example.com/replace-with-git-config/
END_OF_TEXT
}
"script" = <<END_OF_TEXT
module "uuid" {
# We rely on the git-config above to rewrite this URL into one that will work
source = "git::https://example.com/replace-with-git-config/terraform-random-uuid.git?ref=v0.2.0"
}
output "bucket" {
value = module.uuid.uuid
}
END_OF_TEXT
})
}
}
example-def.yaml
(
view on GitHub
)
:
apiVersion: entity.humanitec.io/v1b1
metadata:
id: example-git-config
entity:
criteria: []
driver_inputs:
values:
files:
.gitconfig: |
[url "https://github.com/Invicton-Labs/"]
insteadOf = https://example.com/replace-with-git-config/
script: |
module "uuid" {
# We rely on the git-config above to rewrite this URL into one that will work
source = "git::https://example.com/replace-with-git-config/terraform-random-uuid.git?ref=v0.2.0"
}
output "bucket" {
value = module.uuid.uuid
}
driver_type: humanitec/terraform
name: example-git-config
type: s3
kind: Definition
Private git repo
The Terraform Driver can access Terraform definitions stored in a Git repository. In the case that this repository requires authentication, you must supply credentials to the Driver. The examples in this section show how to provide those as part of the secrets in the Resource Definition based on the Terraform Driver.
ssh-secret-refs.tf : uses secret references to obtain an SSH key from a secret store to connect to the Git repo providing the Terraform code.
https-secret-refs.tf : uses secret references to obtain an HTTPS password from a secret store to connect to the Git repo providing the Terraform code.
https-secret-refs.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "example-resource" {
driver_type = "humanitec/terraform"
id = "example-resource"
name = "example-resource"
type = "some-resource-type"
driver_inputs = {
# This example uses secret references, pointing at a secret store
# to obtain the actual values
secret_refs = jsonencode({
source = {
# Using the password for a connection to the Git repo via HTTPS
password = {
ref = var.password_secret_reference_key
store = var.secret_store
}
}
variables = {
# ...
}
})
values_string = jsonencode({
# Connection information to the target Git repo
source = {
path = "some-resource-type/terraform"
rev = "refs/heads/main"
url = "https://my-domain.com/my-org/my-repo.git"
}
# ...
})
}
}
ssh-secret-refs.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "example-resource" {
driver_type = "humanitec/terraform"
id = "example-resource"
name = "example-resource"
type = "some-resource-type"
driver_inputs = {
# This example uses secret references, pointing at a secret store
# to obtain the actual values
secret_refs = jsonencode({
source = {
# Using the ssh_key for a connection to the Git repo via SSH
ssh_key = {
ref = var.ssh_key_secret_reference_key
store = var.secret_store
}
}
variables = {
# ...
}
})
values_string = jsonencode({
# Connection information to the target Git repo
source = {
path = "some-resource-type/terraform"
rev = "refs/heads/main"
url = "[email protected]:my-org/my-repo.git"
}
# ...
})
}
}
Runner
The Terraform Driver can be configured to execute the Terraform scripts as part of a Kubernetes Job execution in a target Kubernetes cluster, instead of in the Humanitec infrastructure. In this case, you must supply access data for the cluster to the Humanitec Platform Orchestrator.
The examples in this section show how to provide that data by referencing a k8s-cluster Resource Definition as part of the non-secret and secret fields of the runner object in the s3 Resource Definition based on the Terraform Driver.
k8s-cluster-refs.tf : provides a connection to an EKS cluster .
s3-ext-runner-refs.tf : uses runner configuration to run the Terraform Runner in the external cluster specified by k8s-cluster-refs.tf and provision an S3 bucket. It configures the Runner to run Terraform scripts from a private Git repository which initializes a Terraform s3 backend via Environment Variables.
k8s-cluster-refs.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "eks_resource_cluster" {
id = "eks-cluster"
name = "eks-cluster"
type = "k8s-cluster"
driver_type = "humanitec/k8s-cluster-eks"
driver_inputs = {
secrets_string = jsonencode({
credentials = {
aws_access_key_id = var.aws_access_key_id
aws_secret_access_key = var.aws_secret_access_key
}
}
)
values_string = jsonencode({
loadbalancer = "10.10.10.10"
name = "my-cluster"
region = "eu-central-1"
loadbalancer = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
loadbalancer_hosted_zone = "ABC0DEF5WYYZ00"
})
}
}
resource "humanitec_resource_definition_criteria" "eks_resource_cluster" {
resource_definition_id = humanitec_resource_definition.eks_resource_cluster.id
class = "runner"
}
s3-ext-runner-refs.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "aws_terraform_external_runner_resource_s3_bucket" {
id = "aws-terrafom-ext-runner-s3-bucket"
name = "aws-terrafom-ext-runner-s3-bucket"
type = "s3"
driver_type = "humanitec/terraform"
# The driver_account references a Cloud Account configured in the Platform Orchestrator.
# Replace with the name of your AWS Cloud Account.
# The account is used to provide credentials to the Terraform script via environment variables to access the TF state.
driver_account = "my-aws-account"
driver_inputs = {
secrets_string = jsonencode({
# Secret info of the cluster where the Terraform Runner should run.
# This references a k8s-cluster resource that will be matched by class `runner`.
runner = {
credentials = "$${resources['k8s-cluster.runner'].outputs.credentials}"
}
source = {
ssh_key = var.ssh_key
}
}
)
values_string = jsonencode({
# This instructs the driver that the Runner must run in an external cluster.
runner_mode = "custom-kubernetes"
# Non-secret info of the cluster where the Terraform Runner should run.
# This references a k8s-cluster resource that will be matched by class `runner`.
runner = {
cluster_type = "eks"
cluster = {
region = "$${resources['k8s-cluster.runner'].outputs.region}"
name = "$${resources['k8s-cluster.runner'].outputs.name}"
loadbalancer = "$${resources['k8s-cluster.runner'].outputs.loadbalancer}"
loadbalancer_hosted_zone = "$${resources['k8s-cluster.runner'].outputs.loadbalancer_hosted_zone}"
}
# Service Account created following: https://developer.humanitec.com/integration-and-extensions/drivers/generic-drivers/terraform/#runner-object
service_account = "humanitec-tf-runner-sa"
namespace = "humanitec-tf-runner"
}
# Configure the way we provide account credentials to the Terraform scripts in the referenced repository.
# These credentials are related to the `driver_account` configured above.
credentials_config = {
# Terraform script Variables.
variables = {
ACCESS_KEY_ID = "AccessKeyId"
SECRET_ACCESS_KEY = "SecretAccessKey"
}
# Environment Variables.
environment = {
AWS_ACCESS_KEY_ID = "AccessKeyId"
AWS_SECRET_ACCESS_KEY = "SecretAccessKey"
}
}
# Connection information to the Git repo containing the Terraform code.
# It will provide a backend configuration initialized via Environment Variables.
source = {
path = "s3/terraform/bucket/"
rev = "refs/heads/main"
url = "my-domain.com:my-org/my-repo.git"
}
variables = {
# Provide a separate bucket per Application and Environment
bucket = "my-company-my-app-$${context.app.id}-$${context.env.id}"
region = var.region
}
})
}
}
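Note that the target cluster must already contain the namespace and Service Account referenced by the runner object (see the Runner object documentation linked in the comments above for the full setup). A minimal sketch of these prerequisite manifests, assuming the names used in the example:
# Sketch of the prerequisite objects in the target cluster; adjust names and RBAC to your setup.
apiVersion: v1
kind: Namespace
metadata:
  name: humanitec-tf-runner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: humanitec-tf-runner-sa
  namespace: humanitec-tf-runner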
Runner pod configuration
As in the Runner section above, these examples execute the Terraform scripts as a Kubernetes Job in a target Kubernetes cluster instead of in the Humanitec infrastructure, supplying the cluster access data to the Humanitec Platform Orchestrator by referencing a k8s-cluster Resource Definition in the non-secret and secret fields of the runner object, this time in an azure-blob-account Resource Definition based on the Terraform Driver.
They additionally show how to apply labels to the Runner Pod so that it can run with an Azure Workload Identity, removing the need to explicitly set Azure credentials in the Resource Definition or to use a Driver Account.
k8s-cluster-refs.tf: provides a connection to an AKS cluster.
azure-blob-account.tf: uses the runner configuration to run the Terraform Runner in the external cluster specified by k8s-cluster-refs.tf and provision an Azure Blob Storage account. It configures the Runner to run Terraform scripts from a private Git repository which initializes a Terraform azurerm backend. Neither a Driver Account nor secret credentials are used here, because the Runner Pod is configured to run with a Workload Identity associated with the specified Service Account via the runner.runner_pod_template property.
azure-blob-account.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "azure_blob_account" {
driver_type = "humanitec/terraform"
id = "azure-blob-account-basic"
name = "azure-blob-account-basic"
type = "azure-blob-account"
driver_inputs = {
secrets_string = jsonencode({
# Secret info of the cluster where the Terraform Runner should run.
# This references a k8s-cluster resource that will be matched by class `runner`.
runner = jsonencode({
credentials = "$${resources['k8s-cluster.runner'].outputs.credentials}"
})
source = {
ssh_key = var.ssh_key
}
})
values_string = jsonencode({
append_logs_to_error = true
# This instructs the driver that the Runner must be run in an external cluster.
runner_mode = "custom-kubernetes"
# Non-secret info of the cluster where the Terraform Runner should run.
# This references a k8s-cluster resource that will be matched by class `runner`.
runner = {
cluster_type = "aks"
cluster = {
region = "$${resources['k8s-cluster.runner'].outputs.region}"
name = "$${resources['k8s-cluster.runner'].outputs.name}"
loadbalancer = "$${resources['k8s-cluster.runner'].outputs.loadbalancer}"
loadbalancer_hosted_zone = "$${resources['k8s-cluster.runner'].outputs.loadbalancer_hosted_zone}"
}
# Service Account created following: https://developer.humanitec.com/integration-and-extensions/drivers/generic-drivers/terraform/#runner-object
# In this example, the Service Account needs to be annotated to specify the Microsoft Entra application client ID to be used with the pod: https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet#service-account-labels-and-annotations
service_account = "humanitec-tf-runner-sa"
namespace = "humanitec-tf-runner"
# This instructs the driver that the Runner pod must run with a workload identity.
runner_pod_template = <<EOT
metadata:
labels:
azure.workload.identity/use: "true"
EOT
}
# Connection information to the Git repo containing the Terraform code.
# It will provide a backend configuration initialized via Environment Variables.
source = {
path = "modules/azure-blob-account/basic"
rev = var.resource_packs_azure_rev
url = var.resource_packs_azure_url
}
variables = {
res_id = "$${context.res.id}"
app_id = "$${context.app.id}"
env_id = "$${context.env.id}"
subscription_id = var.subscription_id
resource_group_name = var.resource_group_name
name = var.name
prefix = var.prefix
account_tier = var.account_tier
account_replication_type = var.account_replication_type
}
})
}
}
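For the Workload Identity to take effect, the referenced Service Account must also carry the client ID annotation described in the Microsoft documentation linked above. A minimal sketch, using a placeholder for the Microsoft Entra application client ID:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: humanitec-tf-runner-sa
  namespace: humanitec-tf-runner
  annotations:
    # Placeholder: client ID of the Microsoft Entra application to federate with.
    azure.workload.identity/client-id: 00000000-0000-0000-0000-000000000000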
k8s-cluster-refs.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "aks_aad_resource_cluster" {
id = "aad-enabled-cluster"
name = "aad-enabled-cluster"
type = "k8s-cluster"
driver_type = "humanitec/k8s-cluster-aks"
driver_inputs = {
secrets_string = jsonencode({
credentials = {
appId = var.app_id
displayName = var.display_name
password = var.password
tenant = var.tenant
}
}
)
values_string = jsonencode({
name = "my-cluster"
resource_group = "my-azure-resource-group"
subscription_id = "123456-1234-1234-1234-123456789"
server_app_id = "6dae42f8-4368-4678-94ff-3960e28e3630"
})
}
}
resource "humanitec_resource_definition_criteria" "aks_aad_resource_cluster" {
resource_definition_id = humanitec_resource_definition.aks_aad_resource_cluster.id
class = "runner"
}
S3
Use the Terraform Driver to provision Amazon S3 bucket resources.
public-git-repo.tf: uses a publicly accessible Git repo to find the Terraform code.
private-git-repo.tf: uses a private Git repo requiring authentication to find the Terraform code.
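Both examples expect the repository to contain a standard Terraform module at the configured source.path, whose outputs become the outputs of the provisioned s3 resource. A hypothetical repository layout matching the examples:
my-repo/
└── s3/
    └── terraform/
        └── bucket/
            ├── main.tf       # the aws_s3_bucket resource
            ├── variables.tf  # bucket, region, access_key, secret_key, assume_role_arn
            └── outputs.tf    # e.g. bucket and region, consumed by Workloads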
private-git-repo.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "aws_terraform_resource_s3_bucket" {
id = "aws-terrafom-s3-bucket"
name = "aws-terrafom-s3-bucket"
type = "s3"
driver_type = "humanitec/terraform"
driver_inputs = {
secrets_string = jsonencode({
variables = {
access_key = var.access_key
secret_key = var.secret_key
}
source = {
# Provide either an SSH key (for SSH connection) or password (for HTTPS).
ssh_key = var.ssh_key
password = var.password
}
}
)
values_string = jsonencode({
# Connection information to the Git repo containing the Terraform code
source = {
path = "s3/terraform/bucket/"
rev = "refs/heads/main"
url = "https://my-domain.com/my-org/my-repo.git"
# url = "[email protected]:my-org/my-repo.git" # For SSH access instead of HTTPS
}
variables = {
# Provide a separate bucket per Application and Environment
bucket = "my-company-my-app-$${context.app.id}-$${context.env.id}"
region = var.region
assume_role_arn = var.assume_role_arn
}
})
}
}
public-git-repo.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "aws_terraform_resource_s3_bucket" {
id = "aws-terrafom-s3-bucket"
name = "aws-terrafom-s3-bucket"
type = "s3"
driver_type = "humanitec/terraform"
driver_inputs = {
secrets_string = jsonencode({
variables = {
access_key = var.access_key
secret_key = var.secret_key
}
}
)
values_string = jsonencode({
# Connection information to the Git repo containing the Terraform code
# The repo must not require authentication
source = {
path = "s3/terraform/bucket/"
rev = "refs/heads/main"
url = "https://my-domain.com/my-org/my-repo.git"
}
variables = {
# Provide a separate bucket per Application and Environment
bucket = "my-company-my-app-$${context.app.id}-$${context.env.id}"
region = var.region
assume_role_arn = var.assume_role_arn
}
})
}
}
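This section provides the Terraform provider format only. For use with the Humanitec CLI, an equivalent YAML version of public-git-repo.tf might look like the following sketch (the secret values, region, and role ARN are placeholders to replace with your own):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aws-terrafom-s3-bucket
entity:
  name: aws-terrafom-s3-bucket
  type: s3
  driver_type: humanitec/terraform
  driver_inputs:
    secrets:
      variables:
        access_key: my-access-key
        secret_key: my-secret-key
    values:
      source:
        path: s3/terraform/bucket/
        rev: refs/heads/main
        url: https://my-domain.com/my-org/my-repo.git
      variables:
        bucket: my-company-my-app-${context.app.id}-${context.env.id}
        region: eu-central-1
        assume_role_arn: arn:aws:iam::123456789012:role/my-role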
Volumes
This section contains example Resource Definitions for handling Kubernetes Volumes using the volume-pvc Driver. If you have special requirements for your PersistentVolume implementation, see this other example using the template Driver.
You can find a Score file example using the volume resource type here.
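A minimal sketch of such a Score file, requesting a volume resource and mounting it into a container (names are illustrative):
apiVersion: score.dev/v1b1
metadata:
  name: my-workload
containers:
  demo:
    image: nginx:latest
    volumes:
      # Mount the volume resource declared under "resources" below.
      - source: ${resources.data}
        target: /var/lib/data
resources:
  data:
    type: volume
    # class: ebs   # uncomment to match the volume-ebs Resource Definition below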
volume-ebs.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "volume-ebs" {
driver_type = "humanitec/volume-pvc"
id = "volume-ebs"
name = "volume-ebs"
type = "volume"
driver_inputs = {
values_string = jsonencode({
"access_modes" = "ReadWriteOnce"
"capacity" = "5Gi"
"storage_class_name" = "ebs-sc"
})
}
}
resource "humanitec_resource_definition_criteria" "volume-ebs_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.volume-ebs.id
class = "ebs"
}
volume-ebs.yaml
(
view on GitHub
)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-ebs
entity:
  type: volume
  driver_type: humanitec/volume-pvc
  name: volume-ebs
  driver_inputs:
    values:
      access_modes: ReadWriteOnce
      capacity: 5Gi
      storage_class_name: ebs-sc
  criteria:
    - class: ebs
volume-pvc.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "volume-pvc" {
driver_type = "humanitec/volume-pvc"
id = "volume-pvc"
name = "volume-pvc"
type = "volume"
driver_inputs = {
values_string = jsonencode({
"access_modes" = "ReadWriteOnce"
"capacity" = "10Gi"
})
}
}
resource "humanitec_resource_definition_criteria" "volume-pvc_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.volume-pvc.id
}
volume-pvc.yaml
(
view on GitHub
)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-pvc
entity:
  type: volume
  driver_type: humanitec/volume-pvc
  name: volume-pvc
  driver_inputs:
    values:
      access_modes: ReadWriteOnce
      capacity: 10Gi
  criteria:
    - {}
Wildcard dns
This section contains example Resource Definitions using the Wildcard DNS Driver to return an externally managed DNS record for routing and ingress inside the cluster.
The provision section co-provisions an ingress resource. See Routes to learn how the networking Resource Types work together.
dns-template.tf
(
view on GitHub
)
:
resource "humanitec_resource_definition" "dns-template" {
driver_type = "humanitec/dns-wildcard"
id = "dns-template"
name = "dns-template"
type = "dns"
driver_inputs = {
values_string = jsonencode({
"domain" = "my-domain.com"
"template" = "{{ index (splitList \".\" \"$${context.res.id}\") 1 }}-$${context.env.id}-$${context.app.id}"
})
}
provision = {
"ingress" = {
is_dependent = false
}
}
}
resource "humanitec_resource_definition_criteria" "dns-template_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.dns-template.id
}
dns-template.yaml
(
view on GitHub
)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: dns-template
entity:
  name: dns-template
  type: dns
  driver_type: humanitec/dns-wildcard
  driver_inputs:
    values:
      domain: "my-domain.com"
      template: '{{ index (splitList "." "${context.res.id}") 1 }}-${context.env.id}-${context.app.id}'
  provision:
    ingress:
      is_dependent: false
  criteria:
    - {}
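For illustration, assuming the usual context.res.id format of modules.<workload-id>.externals.<resource-id>: a Workload named my-workload requesting a dns resource with ID dns in the development Environment of the my-app Application yields context.res.id = modules.my-workload.externals.dns. The template then evaluates to my-workload-development-my-app, producing a record under the configured domain such as my-workload-development-my-app.my-domain.com.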