Resource Definitions
This section contains example Resource Definitions.
Install any Resource Definition into your Humanitec Organization using the CLI and this command:
humctl create -f resource-definition-file.yaml
Echo driver
Resource Definitions using the Echo Driver
This section contains example Resource Definitions using the Echo Driver.
Namespace
This section contains example Resource Definitions using the Echo Driver for managing Kubernetes namespaces.
custom-namespace.yaml
: Shows how to use the Echo Driver to return the name of an externally managed namespace. This format is for use with the Humanitec CLI.
custom-namespace.tf
: Shows how to use the Echo Driver to return the name of an externally managed namespace. This format is for use with the Humanitec Terraform provider.
custom-namespace.tf (view on GitHub):

resource "humanitec_resource_definition" "namespace-echo" {
  driver_type = "humanitec/echo"
  id          = "namespace-echo"
  name        = "namespace-echo"
  type        = "k8s-namespace"
  driver_inputs = {
    values_string = jsonencode({
      "namespace" = "$${context.app.id}-$${context.env.id}"
    })
  }
}

resource "humanitec_resource_definition_criteria" "namespace-echo_criteria_0" {
  resource_definition_id = resource.humanitec_resource_definition.namespace-echo.id
}
custom-namespace.yaml (view on GitHub):

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: namespace-echo
entity:
  name: namespace-echo
  type: k8s-namespace
  driver_type: humanitec/echo
  driver_inputs:
    values:
      namespace: "${context.app.id}-${context.env.id}"
  criteria:
  - {}
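The `${context.app.id}-${context.env.id}` placeholder above is resolved against the deployment context before the Echo Driver returns it. As a minimal illustration of that substitution (not Humanitec's actual resolver), the following sketch resolves dotted placeholders against a nested context, using hypothetical app and environment IDs:

```python
import re

def resolve_placeholders(template: str, context: dict) -> str:
    """Resolve ${a.b.c} placeholders against a nested dict.
    Illustrative only -- not the actual Humanitec resolver."""
    def lookup(match):
        value = context
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", lookup, template)

# For a hypothetical app "my-app" deployed to its "dev" environment,
# the Resource Definition above would yield the namespace "my-app-dev".
context = {"context": {"app": {"id": "my-app"}, "env": {"id": "dev"}}}
print(resolve_placeholders("${context.app.id}-${context.env.id}", context))
```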
Postgres
This section contains example Resource Definitions using the Echo Driver for PostgreSQL.
postgres-secretstore.yaml
: Shows how to use the Echo Driver and secret references to fetch database credentials from an external secret store. This format is for use with the Humanitec CLI.
postgres-secretstore.tf
: Shows how to use the Echo Driver and secret references to fetch database credentials from an external secret store. This format is for use with the Humanitec Terraform provider.
postgres-secretstore.tf (view on GitHub):

resource "humanitec_resource_definition" "postgres-echo" {
  driver_type = "humanitec/echo"
  id          = "postgres-echo"
  name        = "postgres-echo"
  type        = "postgres"
  driver_inputs = {
    values_string = jsonencode({
      "name" = "my-database"
      "host" = "products.postgres.dev.example.com"
      "port" = 5432
    })
    secret_refs = jsonencode({
      "username" = {
        "store" = "my-gsm"
        "ref" = "cloudsql-username"
      }
      "password" = {
        "store" = "my-gsm"
        "ref" = "cloudsql-password"
      }
    })
  }
}

resource "humanitec_resource_definition_criteria" "postgres-echo_criteria_0" {
  resource_definition_id = resource.humanitec_resource_definition.postgres-echo.id
}
postgres-secretstore.yaml (view on GitHub):

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: postgres-echo
entity:
  name: postgres-echo
  type: postgres
  driver_type: humanitec/echo
  driver_inputs:
    values:
      name: my-database
      host: products.postgres.dev.example.com
      port: 5432
    secret_refs:
      username:
        store: my-gsm
        ref: cloudsql-username
      password:
        store: my-gsm
        ref: cloudsql-password
  criteria:
  - {}
Redis
This section contains example Resource Definitions using the Echo Driver for Redis.
redis-secret-refs.yaml
: Shows how to use the Echo Driver and secret references to provision a Redis resource. This format is for use with the Humanitec CLI.
redis-secret-refs.tf
: Shows how to use the Echo Driver and secret references to provision a Redis resource. This format is for use with the Humanitec Terraform provider.
redis-secret-refs.tf (view on GitHub):

resource "humanitec_resource_definition" "redis-echo" {
  driver_type = "humanitec/echo"
  id          = "redis-echo"
  name        = "redis-echo"
  type        = "redis"
  driver_inputs = {
    values_string = jsonencode({
      "host" = "0.0.0.0"
      "port" = 6379
    })
    secret_refs = jsonencode({
      "password" = {
        "store" = "my-gsm"
        "ref" = "redis-password"
      }
      "username" = {
        "store" = "my-gsm"
        "ref" = "redis-user"
      }
    })
  }
}

resource "humanitec_resource_definition_criteria" "redis-echo_criteria_0" {
  resource_definition_id = resource.humanitec_resource_definition.redis-echo.id
}
redis-secret-refs.yaml (view on GitHub):

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: redis-echo
entity:
  name: redis-echo
  type: redis
  driver_type: humanitec/echo
  driver_inputs:
    values:
      host: 0.0.0.0
      port: 6379
    secret_refs:
      password:
        store: my-gsm
        ref: redis-password
      username:
        store: my-gsm
        ref: redis-user
  criteria:
  - {}
Ingress
Creating Ingress objects
This section contains example Resource Definitions for creating Kubernetes Ingress objects using the Ingress Driver.
Ingress
This section contains example Resource Definitions for handling Kubernetes ingress traffic using the Ingress Driver.
ingress-alb.tf
: defines an Ingress object annotated for an internet-facing Amazon Application Load Balancer (ALB). This format is for use with the Humanitec Terraform provider.
ingress-kong.yaml
: defines an Ingress object annotated for the Kong Ingress Controller. This format is for use with the Humanitec CLI.
ingress-openshift-operator.yaml
: defines an Ingress object annotated for the OpenShift Container Platform Ingress Operator. This format is for use with the Humanitec CLI.
ingress-alb.tf (view on GitHub):

resource "humanitec_resource_definition" "alb-ingress" {
  driver_type = "humanitec/ingress"
  id          = "alb-ingress"
  name        = "alb-ingress"
  type        = "ingress"
  driver_inputs = {
    values_string = jsonencode({
      "annotations" = {
        "alb.ingress.kubernetes.io/certificate-arn" = "arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx"
        "alb.ingress.kubernetes.io/group.name" = "my-team.my-group"
        "alb.ingress.kubernetes.io/listen-ports" = "[{\"HTTP\":80},{\"HTTPS\":443}]"
        "alb.ingress.kubernetes.io/scheme" = "internet-facing"
        "alb.ingress.kubernetes.io/ssl-redirect" = "443"
        "alb.ingress.kubernetes.io/target-type" = "ip"
      }
      "class" = "alb"
      "no_tls" = true
    })
  }
}
ingress-alb.yaml (view on GitHub):

apiVersion: entity.humanitec.io/v1b1
entity:
  driver_inputs:
    values:
      annotations:
        alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx:certificate/xxxxxxx
        alb.ingress.kubernetes.io/group.name: my-team.my-group
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80},{"HTTPS":443}]'
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/ssl-redirect: "443"
        alb.ingress.kubernetes.io/target-type: ip
      class: alb
      no_tls: true
  driver_type: humanitec/ingress
  name: alb-ingress
  type: ingress
kind: Definition
metadata:
  id: alb-ingress
ingress-kong.tf (view on GitHub):

resource "humanitec_resource_definition" "kong-ingress" {
  driver_type = "humanitec/ingress"
  id          = "kong-ingress"
  name        = "kong-ingress"
  type        = "ingress"
  driver_inputs = {
    values_string = jsonencode({
      "annotations" = {
        "konghq.com/preserve-host" = "false"
        "konghq.com/strip-path" = "true"
      }
      "api_version" = "v1"
      "class" = "kong"
    })
  }
}
ingress-kong.yaml (view on GitHub):

# This Resource Definition provisions an Ingress object for the Kong Ingress Controller
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: kong-ingress
entity:
  name: kong-ingress
  type: ingress
  driver_type: humanitec/ingress
  driver_inputs:
    values:
      annotations:
        konghq.com/preserve-host: "false"
        konghq.com/strip-path: "true"
      api_version: v1
      class: kong
ingress-openshift-operator.tf (view on GitHub):

resource "humanitec_resource_definition" "openshift-ingress" {
  driver_type = "humanitec/ingress"
  id          = "openshift-ingress"
  name        = "openshift-ingress"
  type        = "ingress"
  driver_inputs = {
    values_string = jsonencode({
      "class" = "openshift-default"
    })
  }
}
ingress-openshift-operator.yaml (view on GitHub):

# This Resource Definition provisions an Ingress object for the OpenShift Container Platform Ingress Operator
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: openshift-ingress
entity:
  name: openshift-ingress
  type: ingress
  driver_type: humanitec/ingress
  driver_inputs:
    values:
      class: openshift-default
K8s cluster
Connecting to generic Kubernetes clusters
This section contains example Resource Definitions for connecting to generic Kubernetes clusters of any kind beyond the managed solutions of the major cloud providers.
Credentials
Using static credentials
This section contains example Resource Definitions using static credentials for connecting to generic Kubernetes clusters.
generic-k8s-client-certificate.tf
: use a client certificate to connect to the cluster. This format is for use with the Humanitec Terraform provider.
generic-k8s-client-certificate.yaml
: use a client certificate to connect to the cluster. This format is for use with the Humanitec CLI.
generic-k8s-client-certificate.tf (view on GitHub):

resource "humanitec_resource_definition" "generic-k8s-static-credentials" {
  driver_type = "humanitec/k8s-cluster"
  id          = "generic-k8s-static-credentials"
  name        = "generic-k8s-static-credentials"
  type        = "k8s-cluster"
  driver_inputs = {
    values_string = jsonencode({
      "name" = "my-generic-k8s-cluster"
      "loadbalancer" = "35.10.10.10"
      "cluster_data" = {
        "server" = "https://35.11.11.11:6443"
        "certificate-authority-data" = "LS0t...ca-data....=="
      }
    })
    secrets_string = jsonencode({
      "credentials" = {
        "client-certificate-data" = "LS0t...cert-data...=="
        "client-key-data" = "LS0t...key-data...=="
      }
    })
  }
}
generic-k8s-client-certificate.yaml (view on GitHub):

# Resource Definition for a generic Kubernetes cluster
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: generic-k8s-static-credentials
entity:
  name: generic-k8s-static-credentials
  type: k8s-cluster
  driver_type: humanitec/k8s-cluster
  driver_inputs:
    values:
      name: my-generic-k8s-cluster
      loadbalancer: 35.10.10.10
      cluster_data:
        server: https://35.11.11.11:6443
        # Single line base64-encoded cluster CA data in the format "LS0t...ca-data....=="
        certificate-authority-data: "LS0t...ca-data....=="
    secrets:
      credentials:
        # Single line base64-encoded client certificate data in the format "LS0t...cert-data...=="
        client-certificate-data: "LS0t...cert-data...=="
        # Single line base64-encoded client key data in the format "LS0t...key-data...=="
        client-key-data: "LS0t...key-data...=="
K8s cluster aks
Connecting to AKS clusters
This section contains example Resource Definitions for connecting to AKS clusters.
Agent
Using the Humanitec Agent
This section contains example Resource Definitions using the Humanitec Agent for connecting to AKS clusters.
aks-agent.yaml
: uses a Cloud Account as well as the Humanitec Agent to access this private cluster. This format is for use with the Humanitec CLI.
aks-agent.tf
: uses a Cloud Account as well as the Humanitec Agent to access this private cluster. This format is for use with the Humanitec Terraform provider.
aks-agent.tf (view on GitHub):

resource "humanitec_resource_definition" "aks-agent" {
  driver_type = "humanitec/k8s-cluster-aks"
  id          = "aks-agent"
  name        = "aks-agent"
  type        = "k8s-cluster"
  driver_account = "azure-temporary"
  driver_inputs = {
    values_string = jsonencode({
      "loadbalancer" = "20.10.10.10"
      "name" = "demo-123"
      "resource_group" = "my-resources"
      "subscription_id" = "12345678-aaaa-bbbb-cccc-0987654321ba"
      "server_app_id" = "6dae42f8-4368-4678-94ff-3960e28e3630"
    })
    secrets_string = jsonencode({
      "agent_url" = "$${resources['agent#agent'].outputs.url}"
    })
  }
}
aks-agent.yaml (view on GitHub):

# AKS private cluster. It is to be accessed via the Humanitec Agent
# It is using a Cloud Account to obtain credentials
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aks-agent
entity:
  name: aks-agent
  type: k8s-cluster
  # The driver_account is referring to a Cloud Account configured in your Organization
  driver_account: azure-temporary
  driver_type: humanitec/k8s-cluster-aks
  driver_inputs:
    secrets:
      # Setting the URL for the Humanitec Agent
      agent_url: "${resources['agent#agent'].outputs.url}"
    values:
      loadbalancer: 20.10.10.10
      name: demo-123
      resource_group: my-resources
      subscription_id: 12345678-aaaa-bbbb-cccc-0987654321ba
      # Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
      server_app_id: 6dae42f8-4368-4678-94ff-3960e28e3630
Credentials
Using static credentials
This section contains example Resource Definitions using static credentials for connecting to AKS clusters.
aks-static-credentials.yaml
: use static credentials of a service principal defined via environment variables. This format is for use with the Humanitec CLI.
aks-static-credentials-cloudaccount.yaml
: use static credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.
Using temporary credentials
This section contains example Resource Definitions using temporary credentials for connecting to AKS clusters.
aks-temporary-credentials.yaml
: use temporary credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.
aks-temporary-credentials.tf
: use temporary credentials defined via a Cloud Account. This format is for use with the Humanitec Terraform provider.
aks-static-credentials-cloudaccount.tf (view on GitHub):

resource "humanitec_resource_definition" "aks-static-credentials-cloudaccount" {
  driver_type = "humanitec/k8s-cluster-aks"
  id          = "aks-static-credentials-cloudaccount"
  name        = "aks-static-credentials-cloudaccount"
  type        = "k8s-cluster"
  driver_account = "azure-static-creds"
  driver_inputs = {
    values_string = jsonencode({
      "loadbalancer" = "20.10.10.10"
      "name" = "demo-123"
      "resource_group" = "my-resources"
      "subscription_id" = "12345678-aaaa-bbbb-cccc-0987654321ba"
      "server_app_id" = "6dae42f8-4368-4678-94ff-3960e28e3630"
    })
  }
}
aks-static-credentials-cloudaccount.yaml (view on GitHub):

# Connect to an AKS cluster using static credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aks-static-credentials-cloudaccount
entity:
  name: aks-static-credentials-cloudaccount
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "azure"
  # which needs to be configured for your Organization.
  driver_account: azure-static-creds
  driver_type: humanitec/k8s-cluster-aks
  driver_inputs:
    values:
      loadbalancer: 20.10.10.10
      name: demo-123
      resource_group: my-resources
      subscription_id: 12345678-aaaa-bbbb-cccc-0987654321ba
      # Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
      server_app_id: 6dae42f8-4368-4678-94ff-3960e28e3630
aks-static-credentials.tf (view on GitHub):

resource "humanitec_resource_definition" "aks-static-credentials" {
  driver_type = "humanitec/k8s-cluster-aks"
  id          = "aks-static-credentials"
  name        = "aks-static-credentials"
  type        = "k8s-cluster"
  driver_inputs = {
    values_string = jsonencode({
      "loadbalancer" = "20.10.10.10"
      "name" = "demo-123"
      "resource_group" = "my-resources"
      "subscription_id" = "12345678-aaaa-bbbb-cccc-0987654321ba"
      "server_app_id" = "6dae42f8-4368-4678-94ff-3960e28e3630"
    })
    secrets_string = jsonencode({
      "credentials" = {
        "appId" = "b520e4a8-6cb4-49dc-8f42-f3281dc2efe9"
        "displayName" = "my-cluster-sp"
        "password" = "my-cluster-sp-pw"
        "tenant" = "9b8c7b62-aaaa-4444-ffff-0987654321fd"
      }
    })
  }
}
aks-static-credentials.yaml (view on GitHub):

# NOTE: Providing inline credentials as shown in this example is discouraged and will be deprecated.
# Using a Cloud Account is the recommended approach instead.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aks-static-credentials
entity:
  name: aks-static-credentials
  type: k8s-cluster
  driver_type: humanitec/k8s-cluster-aks
  driver_inputs:
    values:
      loadbalancer: 20.10.10.10
      name: demo-123
      resource_group: my-resources
      subscription_id: 12345678-aaaa-bbbb-cccc-0987654321ba
      # Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
      server_app_id: 6dae42f8-4368-4678-94ff-3960e28e3630
    secrets:
      # The "credentials" data correspond to the content of the output
      # that Azure generates for a service principal
      credentials:
        appId: b520e4a8-6cb4-49dc-8f42-f3281dc2efe9
        displayName: my-cluster-sp
        password: my-cluster-sp-pw
        tenant: 9b8c7b62-aaaa-4444-ffff-0987654321fd
aks-temporary-credentials.tf (view on GitHub):

resource "humanitec_resource_definition" "aks-temporary-credentials" {
  driver_type = "humanitec/k8s-cluster-aks"
  id          = "aks-temporary-credentials"
  name        = "aks-temporary-credentials"
  type        = "k8s-cluster"
  driver_account = "azure-temporary-creds"
  driver_inputs = {
    values_string = jsonencode({
      "loadbalancer" = "20.10.10.10"
      "name" = "demo-123"
      "resource_group" = "my-resources"
      "subscription_id" = "12345678-aaaa-bbbb-cccc-0987654321ba"
      "server_app_id" = "6dae42f8-4368-4678-94ff-3960e28e3630"
    })
  }
}
aks-temporary-credentials.yaml (view on GitHub):

# Connect to an AKS cluster using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aks-temporary-credentials
entity:
  name: aks-temporary-credentials
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "azure-identity"
  # which needs to be configured for your Organization.
  driver_account: azure-temporary-creds
  driver_type: humanitec/k8s-cluster-aks
  driver_inputs:
    values:
      loadbalancer: 20.10.10.10
      name: demo-123
      resource_group: my-resources
      subscription_id: 12345678-aaaa-bbbb-cccc-0987654321ba
      # Add this exact server_app_id for a cluster using AKS-managed Entra ID integration
      server_app_id: 6dae42f8-4368-4678-94ff-3960e28e3630
K8s cluster eks
Connecting to EKS clusters
This section contains example Resource Definitions for connecting to EKS clusters.
Agent
Using the Humanitec Agent
This section contains example Resource Definitions using the Humanitec Agent for connecting to EKS clusters.
eks-agent.yaml
: uses a Cloud Account as well as the Humanitec Agent to access this private cluster. This format is for use with the Humanitec CLI.
eks-agent.tf
: uses a Cloud Account as well as the Humanitec Agent to access this private cluster. This format is for use with the Humanitec Terraform provider.
eks-agent.tf (view on GitHub):

resource "humanitec_resource_definition" "eks-agent" {
  driver_type = "humanitec/k8s-cluster-eks"
  id          = "eks-agent"
  name        = "eks-agent"
  type        = "k8s-cluster"
  driver_account = "aws-temp-creds"
  driver_inputs = {
    values_string = jsonencode({
      "region" = "eu-central-1"
      "name" = "demo-123"
      "loadbalancer" = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
      "loadbalancer_hosted_zone" = "ABC0DEF5WYYZ00"
    })
    secrets_string = jsonencode({
      "agent_url" = "$${resources['agent#agent'].outputs.url}"
    })
  }
}
eks-agent.yaml (view on GitHub):

# EKS private cluster. It is to be accessed via the Humanitec Agent
# It is using a Cloud Account with temporary credentials
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: eks-agent
entity:
  name: eks-agent
  type: k8s-cluster
  # The driver_account is referring to a Cloud Account configured in your Organization
  driver_account: aws-temp-creds
  driver_type: humanitec/k8s-cluster-eks
  driver_inputs:
    secrets:
      # Setting the URL for the Humanitec Agent
      agent_url: "${resources['agent#agent'].outputs.url}"
    values:
      region: eu-central-1
      name: demo-123
      loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
      loadbalancer_hosted_zone: ABC0DEF5WYYZ00
Credentials
Using static credentials
This section contains example Resource Definitions using static credentials for connecting to EKS clusters.
eks-static-credentials.yaml
: use static credentials defined via environment variables. This format is for use with the Humanitec CLI.
eks-static-credentials-cloudaccount.yaml
: use static credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.
Using temporary credentials
This section contains example Resource Definitions using temporary credentials for connecting to EKS clusters.
eks-temporary-credentials.yaml
: use temporary credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.
eks-temporary-credentials.tf
: use temporary credentials defined via a Cloud Account. This format is for use with the Humanitec Terraform provider.
eks-static-credentials-cloudaccount.tf (view on GitHub):

resource "humanitec_resource_definition" "eks-static-credentials-cloudaccount" {
  driver_type = "humanitec/k8s-cluster-eks"
  id          = "eks-static-credentials-cloudaccount"
  name        = "eks-static-credentials-cloudaccount"
  type        = "k8s-cluster"
  driver_account = "aws-static-creds"
  driver_inputs = {
    values_string = jsonencode({
      "region" = "eu-central-1"
      "name" = "demo-123"
      "loadbalancer" = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
      "loadbalancer_hosted_zone" = "ABC0DEF5WYYZ00"
    })
  }
}
eks-static-credentials-cloudaccount.yaml (view on GitHub):

# Connect to an EKS cluster using static credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: eks-static-credentials-cloudaccount
entity:
  name: eks-static-credentials-cloudaccount
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "aws"
  # which needs to be configured for your Organization.
  driver_account: aws-static-creds
  # The driver_type k8s-cluster-eks automatically handles the static credentials
  # injected via the driver_account.
  driver_type: humanitec/k8s-cluster-eks
  driver_inputs:
    values:
      region: eu-central-1
      name: demo-123
      loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
      loadbalancer_hosted_zone: ABC0DEF5WYYZ00
eks-static-credentials.tf (view on GitHub):

resource "humanitec_resource_definition" "eks-static-credentials" {
  driver_type = "humanitec/k8s-cluster-eks"
  id          = "eks-static-credentials"
  name        = "eks-static-credentials"
  type        = "k8s-cluster"
  driver_inputs = {
    values_string = jsonencode({
      "region" = "eu-central-1"
      "name" = "demo-123"
      "loadbalancer" = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
      "loadbalancer_hosted_zone" = "ABC0DEF5WYYZ00"
    })
    secrets_string = jsonencode({
      "credentials" = {
        "aws_access_key_id" = "my-access-key-id"
        "aws_secret_access_key" = "my-secret-access-key"
      }
    })
  }
}
eks-static-credentials.yaml (view on GitHub):

# NOTE: Providing inline credentials as shown in this example is discouraged and will be deprecated.
# Using a Cloud Account is the recommended approach instead.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: eks-static-credentials
entity:
  name: eks-static-credentials
  type: k8s-cluster
  driver_type: humanitec/k8s-cluster-eks
  driver_inputs:
    values:
      region: eu-central-1
      name: demo-123
      loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
      loadbalancer_hosted_zone: ABC0DEF5WYYZ00
    secrets:
      credentials:
        aws_access_key_id: my-access-key-id
        aws_secret_access_key: my-secret-access-key
eks-temporary-credentials.tf (view on GitHub):

resource "humanitec_resource_definition" "eks-temporary-credentials" {
  driver_type = "humanitec/k8s-cluster-eks"
  id          = "eks-temporary-credentials"
  name        = "eks-temporary-credentials"
  type        = "k8s-cluster"
  driver_account = "aws-temp-creds"
  driver_inputs = {
    values_string = jsonencode({
      "region" = "eu-central-1"
      "name" = "demo-123"
      "loadbalancer" = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
      "loadbalancer_hosted_zone" = "ABC0DEF5WYYZ00"
    })
  }
}
eks-temporary-credentials.yaml (view on GitHub):

# Connect to an EKS cluster using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: eks-temporary-credentials
entity:
  name: eks-temporary-credentials
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "aws-role"
  # which needs to be configured for your Organization.
  driver_account: aws-temp-creds
  # The driver_type k8s-cluster-eks automatically handles the temporary credentials
  # injected via the driver_account.
  driver_type: humanitec/k8s-cluster-eks
  driver_inputs:
    values:
      region: eu-central-1
      name: demo-123
      loadbalancer: x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com
      loadbalancer_hosted_zone: ABC0DEF5WYYZ00
K8s cluster git
Connecting to a Git repository (GitOps mode)
This section contains example Resource Definitions for connecting to a Git repository to push application CRs (GitOps mode).
Credentials
Using static credentials
This section contains example Resource Definitions using static credentials for connecting to a Git repository in GitOps mode.
github-for-gitops.yaml
: use static credentials to connect to a GitHub repository. This format is for use with the Humanitec CLI.
github-for-gitops.tf
: use static credentials to connect to a GitHub repository. This format is for use with the Humanitec Terraform provider.
github-for-gitops.tf (view on GitHub):

resource "humanitec_resource_definition" "github-for-gitops" {
  driver_type = "humanitec/k8s-cluster-git"
  id          = "github-for-gitops"
  name        = "github-for-gitops"
  type        = "k8s-cluster"
  driver_inputs = {
    values_string = jsonencode({
      "url" = "git@github.com:example-org/gitops-repo.git"
      "branch" = "development"
      "path" = "$${context.app.id}/$${context.env.id}"
      "loadbalancer" = "35.10.10.10"
    })
    secrets_string = jsonencode({
      "credentials" = {
        "ssh_key" = "my-git-ssh-key"
      }
    })
  }
}
github-for-gitops.yaml (view on GitHub):

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: github-for-gitops
entity:
  name: github-for-gitops
  driver_type: humanitec/k8s-cluster-git
  type: k8s-cluster
  driver_inputs:
    values:
      # Git repository for pushing manifests
      url: git@github.com:example-org/gitops-repo.git
      # Branch in the git repository, optional. If not specified, the default branch is used.
      branch: development
      # Path in the git repository, optional. If not specified, the root is used.
      path: "${context.app.id}/${context.env.id}"
      # Load balancer, optional. It is not related to Git, but is used to create Ingress objects in the target K8s cluster.
      loadbalancer: 35.10.10.10
    secrets:
      credentials:
        ssh_key: my-git-ssh-key
        # Alternative to ssh_key: password or Personal Access Token
        # password: my-git-ssh-pat
K8s cluster gke
Connecting to GKE clusters
This section contains example Resource Definitions for connecting to GKE clusters.
Agent
Using the Humanitec Agent
This section contains example Resource Definitions using the Humanitec Agent for connecting to GKE clusters.
gke-agent.yaml
: uses a Cloud Account as well as the Humanitec Agent to access this private cluster. This format is for use with the Humanitec CLI.
gke-agent.tf
: uses a Cloud Account as well as the Humanitec Agent to access this private cluster. This format is for use with the Humanitec Terraform provider.
gke-agent.tf (view on GitHub):

resource "humanitec_resource_definition" "gke-agent" {
  driver_type = "humanitec/k8s-cluster-gke"
  id          = "gke-agent"
  name        = "gke-agent"
  type        = "k8s-cluster"
  driver_account = "gcp-temporary-creds"
  driver_inputs = {
    values_string = jsonencode({
      "loadbalancer" = "35.10.10.10"
      "name" = "demo-123"
      "zone" = "europe-west2-a"
      "project_id" = "my-gcp-project"
    })
    secrets_string = jsonencode({
      "agent_url" = "$${resources['agent#agent'].outputs.url}"
    })
  }
}
gke-agent.yaml (view on GitHub):

# GKE private cluster. It is to be accessed via the Humanitec Agent
# It is using a Cloud Account with temporary credentials
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: gke-agent
entity:
  name: gke-agent
  type: k8s-cluster
  # The driver_account is referring to a Cloud Account configured in your Organization
  driver_account: gcp-temporary-creds
  driver_type: humanitec/k8s-cluster-gke
  driver_inputs:
    secrets:
      # Setting the URL for the Humanitec Agent
      agent_url: "${resources['agent#agent'].outputs.url}"
    values:
      loadbalancer: 35.10.10.10
      name: demo-123
      zone: europe-west2-a
      project_id: my-gcp-project
Credentials
Using static credentials
This section contains example Resource Definitions using static credentials for connecting to GKE clusters.
gke-static-credentials.yaml
: use static credentials defined via environment variables. This format is for use with the Humanitec CLI.
gke-static-credentials-cloudaccount.yaml
: use static credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.
Using temporary credentials
This section contains example Resource Definitions using temporary credentials for connecting to GKE clusters.
gke-temporary-credentials.yaml
: use temporary credentials defined via a Cloud Account. This format is for use with the Humanitec CLI.
gke-temporary-credentials.tf
: use temporary credentials defined via a Cloud Account. This format is for use with the Humanitec Terraform provider.
gke-static-credentials-cloudaccount.tf (view on GitHub):

resource "humanitec_resource_definition" "gke-static-credentials-cloudaccount" {
  driver_type = "humanitec/k8s-cluster-gke"
  id          = "gke-static-credentials-cloudaccount"
  name        = "gke-static-credentials-cloudaccount"
  type        = "k8s-cluster"
  driver_account = "gcp-static-creds"
  driver_inputs = {
    values_string = jsonencode({
      "loadbalancer" = "35.10.10.10"
      "name" = "demo-123"
      "zone" = "europe-west2-a"
      "project_id" = "my-gcp-project"
    })
  }
}
gke-static-credentials-cloudaccount.yaml (view on GitHub):

# Connect to a GKE cluster using static credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: gke-static-credentials-cloudaccount
entity:
  name: gke-static-credentials-cloudaccount
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "gcp"
  # which needs to be configured for your Organization.
  driver_account: gcp-static-creds
  driver_type: humanitec/k8s-cluster-gke
  driver_inputs:
    values:
      loadbalancer: 35.10.10.10
      name: demo-123
      zone: europe-west2-a
      project_id: my-gcp-project
gke-static-credentials.tf (view on GitHub):

resource "humanitec_resource_definition" "gke-static-credentials" {
  driver_type = "humanitec/k8s-cluster-gke"
  id          = "gke-static-credentials"
  name        = "gke-static-credentials"
  type        = "k8s-cluster"
  driver_inputs = {
    values_string = jsonencode({
      "loadbalancer" = "35.10.10.10"
      "name" = "demo-123"
      "zone" = "europe-west2-a"
      "project_id" = "my-gcp-project"
    })
    secrets_string = jsonencode({
      "credentials" = {
        "type" = "service_account"
        "project_id" = "my-gcp-project"
        "private_key_id" = "48b483fbf1d6e80fb4ac1a4626eb5d8036e3520f"
        "private_key" = "my-private-key"
        "client_id" = "206964217359046819490"
        "client_email" = "my-service-account@my-gcp-project.iam.gserviceaccount.com"
        "auth_uri" = "https://accounts.google.com/o/oauth2/auth"
        "token_uri" = "https://oauth2.googleapis.com/token"
        "auth_provider_x509_cert_url" = "https://www.googleapis.com/oauth2/v1/certs"
        "client_x509_cert_url" = "https://www.googleapis.com/robot/v1/metadata/x509/my-service-account%40my-gcp-project.iam.gserviceaccount.com"
      }
    })
  }
}
gke-static-credentials.yaml (view on GitHub):

# NOTE: Providing inline credentials as shown in this example is discouraged and will be deprecated.
# Using a Cloud Account is the recommended approach instead.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: gke-static-credentials
entity:
  name: gke-static-credentials
  type: k8s-cluster
  driver_type: humanitec/k8s-cluster-gke
  driver_inputs:
    values:
      loadbalancer: 35.10.10.10
      name: demo-123
      zone: europe-west2-a
      project_id: my-gcp-project
    secrets:
      # The "credentials" data correspond to the content of the credentials.json
      # that Google Cloud generates for a service account key
      credentials:
        type: service_account
        project_id: my-gcp-project
        # Example private_key_id: 48b483fbf1d6e80fb4ac1a4626eb5d8036e3520f
        private_key_id: 48b483fbf1d6e80fb4ac1a4626eb5d8036e3520f
        # Example private_key in one line: -----BEGIN PRIVATE KEY-----\\n...key...data...\\n...key...data...\\n...\\n-----END PRIVATE KEY-----\\n
        private_key: my-private-key
        # Example client_id: 206964217359046819490
        client_id: "206964217359046819490"
        client_email: my-service-account@my-gcp-project.iam.gserviceaccount.com
        auth_uri: https://accounts.google.com/o/oauth2/auth
        token_uri: https://oauth2.googleapis.com/token
        auth_provider_x509_cert_url: https://www.googleapis.com/oauth2/v1/certs
        client_x509_cert_url: https://www.googleapis.com/robot/v1/metadata/x509/my-service-account%40my-gcp-project.iam.gserviceaccount.com
gke-temporary-credentials.tf (view on GitHub):

resource "humanitec_resource_definition" "gke-temporary-credentials" {
  driver_type = "humanitec/k8s-cluster-gke"
  id          = "gke-temporary-credentials"
  name        = "gke-temporary-credentials"
  type        = "k8s-cluster"
  driver_account = "gcp-temporary-creds"
  driver_inputs = {
    values_string = jsonencode({
      "loadbalancer" = "35.10.10.10"
      "name" = "demo-123"
      "zone" = "europe-west2-a"
      "project_id" = "my-gcp-project"
    })
  }
}
gke-temporary-credentials.yaml
(view on GitHub)
:
# Connect to a GKE cluster using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: gke-temporary-credentials
entity:
name: gke-temporary-credentials
type: k8s-cluster
# The driver_account references a Cloud Account of type "gcp-identity"
# which needs to be configured for your Organization.
driver_account: gcp-temporary-creds
driver_type: humanitec/k8s-cluster-gke
driver_inputs:
values:
loadbalancer: 35.10.10.10
name: demo-123
zone: europe-west2-a
project_id: my-gcp-project
Template driver
Resource Definitions using the Template Driver
This section contains example Resource Definitions using the Template Driver.
Add sidecar
Add a sidecar to workloads using the workload resource
The workload Resource Type can be used to modify Workload resources before they are deployed into the cluster. In this example, a Resource Definition implementing the workload
Resource Type injects the Open Telemetry agent as a sidecar into every workload. In addition to adding the sidecar, it also adds an environment variable called OTEL_EXPORTER_OTLP_ENDPOINT
to each container running in the workload.
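To illustrate the effect, here is a hedged sketch of what the rendered Pod spec of a Workload could roughly look like after this Resource Definition is applied. The container name my-app is hypothetical; the sidecar, environment variable, and volume come from the definition itself.

```yaml
# Illustrative excerpt of a rendered Pod spec after the otel-sidecar
# Resource Definition has been applied ("my-app" is a hypothetical container):
spec:
  containers:
    - name: my-app                  # original Workload container
      env:
        - name: OTEL_EXPORTER_OTLP_ENDPOINT   # added by the "outputs" update
          value: http://localhost:4317
    - name: otel-agent              # sidecar added via the "containers" manifest location
      image: otel/opentelemetry-collector:0.94.0
      volumeMounts:
        - name: otel-agent-config-vol
          mountPath: /conf
  volumes:
    - name: otel-agent-config-vol   # added via the "volumes" manifest location
      configMap:
        name: otel-agent-conf-<GUResID>
```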
otel-sidecar.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "otel-sidecar" {
driver_type = "humanitec/template"
id = "otel-sidecar"
name = "otel-sidecar"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
{{- /*
The "update" output is passed into the corresponding "update" output of the "workload" Resource Type.
*/ -}}
update:
{{- /*
Add the variable OTEL_EXPORTER_OTLP_ENDPOINT to all containers
*/ -}}
{{- range $containerId, $value := .resource.spec.containers }}
- op: add
path: /spec/containers/{{ $containerId }}/variables/OTEL_EXPORTER_OTLP_ENDPOINT
value: http://localhost:4317
{{- end }}
END_OF_TEXT
"manifests" = {
"sidecar.yaml" = {
"location" = "containers"
"data" = <<END_OF_TEXT
{{- /*
The Open Telemetry container as a sidecar in the workload
*/ -}}
command:
- "/otelcol"
- "--config=/conf/otel-agent-config.yaml"
image: otel/opentelemetry-collector:0.94.0
name: otel-agent
resources:
limits:
cpu: 500m
memory: 500Mi
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 55679 # ZPages endpoint.
- containerPort: 4317 # Default OpenTelemetry receiver port.
- containerPort: 8888 # Metrics.
env:
- name: GOMEMLIMIT
value: 400MiB
volumeMounts:
- name: otel-agent-config-vol
mountPath: /conf
END_OF_TEXT
}
"sidecar-volume.yaml" = {
"location" = "volumes"
"data" = <<END_OF_TEXT
{{- /*
A volume that is used to surface the config file
*/ -}}
configMap:
name: otel-agent-conf-{{ .id }}
items:
- key: otel-agent-config
path: otel-agent-config.yaml
name: otel-agent-config-vol
END_OF_TEXT
}
"otel-config-map.yaml" = {
"location" = "namespace"
"data" = <<END_OF_TEXT
{{- /*
The config file for the Open Telemetry agent. Notice that its name includes the GUResID
*/ -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: otel-agent-conf-{{ .id }}
labels:
app: opentelemetry
component: otel-agent-conf
data:
otel-agent-config: |
receivers:
otlp:
protocols:
grpc:
endpoint: localhost:4317
http:
endpoint: localhost:4318
exporters:
otlp:
endpoint: "otel-collector.default:4317"
tls:
insecure: true
sending_queue:
num_consumers: 4
queue_size: 100
retry_on_failure:
enabled: true
processors:
batch:
memory_limiter:
# 80% of maximum memory up to 2G
limit_mib: 400
# 25% of limit up to 2G
spike_limit_mib: 100
check_interval: 5s
extensions:
zpages: {}
service:
extensions: [zpages]
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [otlp]
END_OF_TEXT
}
}
}
})
}
}
otel-sidecar.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: otel-sidecar
entity:
name: otel-sidecar
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
{{- /*
The "update" output is passed into the corresponding "update" output of the "workload" Resource Type.
*/ -}}
update:
{{- /*
Add the variable OTEL_EXPORTER_OTLP_ENDPOINT to all containers
*/ -}}
{{- range $containerId, $value := .resource.spec.containers }}
- op: add
path: /spec/containers/{{ $containerId }}/variables/OTEL_EXPORTER_OTLP_ENDPOINT
value: http://localhost:4317
{{- end }}
manifests:
sidecar.yaml:
location: containers
data: |
{{- /*
The Open Telemetry container as a sidecar in the workload
*/ -}}
command:
- "/otelcol"
- "--config=/conf/otel-agent-config.yaml"
image: otel/opentelemetry-collector:0.94.0
name: otel-agent
resources:
limits:
cpu: 500m
memory: 500Mi
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 55679 # ZPages endpoint.
- containerPort: 4317 # Default OpenTelemetry receiver port.
- containerPort: 8888 # Metrics.
env:
- name: GOMEMLIMIT
value: 400MiB
volumeMounts:
- name: otel-agent-config-vol
mountPath: /conf
sidecar-volume.yaml:
location: volumes
data: |
{{- /*
A volume that is used to surface the config file
*/ -}}
configMap:
name: otel-agent-conf-{{ .id }}
items:
- key: otel-agent-config
path: otel-agent-config.yaml
name: otel-agent-config-vol
otel-config-map.yaml:
location: namespace
data: |
{{- /*
The config file for the Open Telemetry agent. Notice that its name includes the GUResID
*/ -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: otel-agent-conf-{{ .id }}
labels:
app: opentelemetry
component: otel-agent-conf
data:
otel-agent-config: |
receivers:
otlp:
protocols:
grpc:
endpoint: localhost:4317
http:
endpoint: localhost:4318
exporters:
otlp:
endpoint: "otel-collector.default:4317"
tls:
insecure: true
sending_queue:
num_consumers: 4
queue_size: 100
retry_on_failure:
enabled: true
processors:
batch:
memory_limiter:
# 80% of maximum memory up to 2G
limit_mib: 400
# 25% of limit up to 2G
spike_limit_mib: 100
check_interval: 5s
extensions:
zpages: {}
service:
extensions: [zpages]
pipelines:
traces:
receivers: [otlp]
processors: [memory_limiter, batch]
exporters: [otlp]
criteria: []
Affinity
This section contains example Resource Definitions using the Template Driver for the affinity of Kubernetes Pods.
affinity.yaml: Add affinity rules to the Workload. This format is for use with the Humanitec CLI.
affinity.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "workload-affinity" {
driver_type = "humanitec/template"
id = "workload-affinity"
name = "workload-affinity"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/affinity
value:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
END_OF_TEXT
}
})
}
}
affinity.yaml
(view on GitHub)
:
# Add affinity rules to the Workload by adding a value to the manifest at .spec.affinity
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: workload-affinity
entity:
name: workload-affinity
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/affinity
value:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
criteria: []
Imagepullsecrets
This section shows how to use the Template Driver for configuring cluster access to a private container image registry.
The example implements the Kubernetes standard mechanism to Pull an Image from a Private Registry. It creates a Kubernetes Secret of kubernetes.io/dockerconfigjson
type, reading the credentials from a secret store. It then configures the secret as the imagePullSecret
for a Workload’s Pod.
This example applies only when using the Humanitec Operator on the cluster; with the Operator, the Registries feature of the Platform Orchestrator is not supported.
To use this mechanism, install the Resource Definitions of this example into your Organization, replacing the placeholder values with the actual values of your setup. Add the appropriate matching criteria to the workload
Definition to match the Workloads that should have access to the private registry.
Note: workload is an implicit Resource Type, so it is automatically referenced for every Deployment.
config.yaml: Resource Definition of type: config that reads the credentials for the private registry from a secret store and creates the Kubernetes Secret
workload.yaml: Resource Definition of type: workload that adds the imagePullSecrets element to the Pod spec, referencing the config Resource
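For illustration, the Secret produced by the config Resource Definition carries a standard Docker config JSON in its .dockerconfigjson field. Decoded, its content looks roughly like this; the registry host and credentials shown here are placeholders, not values from the example files:

```yaml
# Decoded .dockerconfigjson content of the generated Secret
# (server, username, and password are placeholder values):
{
  "auths": {
    "registry.example.com": {
      "username": "my-username",
      "password": "my-password"
    }
  }
}
```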
config.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "regcred-config" {
driver_type = "humanitec/template"
id = "regcred-config"
name = "regcred-config"
type = "config"
driver_inputs = {
values_string = jsonencode({
"secret_name" = "regcred"
"server" = "FIXME"
"templates" = {
"init" = <<END_OF_TEXT
dockerConfigJson:
auths:
{{ .driver.values.server | quote }}:
username: {{ .driver.secrets.username | toRawJson }}
password: {{ .driver.secrets.password | toRawJson }}
END_OF_TEXT
"manifests" = {
"regcred-secret.yaml" = {
"data" = <<END_OF_TEXT
apiVersion: v1
kind: Secret
metadata:
name: {{ .driver.values.secret_name }}
data:
.dockerconfigjson: {{ .init.dockerConfigJson | toRawJson | b64enc }}
type: kubernetes.io/dockerconfigjson
END_OF_TEXT
"location" = "namespace"
}
}
"outputs" = "secret_name: {{ .driver.values.secret_name }}"
}
})
secret_refs = jsonencode({
"password" = {
"ref" = "regcred-password"
"store" = "FIXME"
}
"username" = {
"ref" = "regcred-username"
"store" = "FIXME"
}
})
}
}
resource "humanitec_resource_definition_criteria" "regcred-config_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.regcred-config.id
class = "default"
res_id = "regcred"
}
config.yaml
(view on GitHub)
:
# This Resource Definition pulls credentials for a container image registry from a secret store
# and creates a Kubernetes Secret of kubernetes.io/dockerconfigjson type
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: regcred-config
entity:
driver_type: humanitec/template
name: regcred-config
type: config
criteria:
- class: default
# This res_id must be used from a referencing Resource Definition to request this config Resource
res_id: regcred
driver_inputs:
# These secret references read the credentials from a secret store
secret_refs:
password:
ref: regcred-password
# Replace this value with the secret store id that's supplying the password
store: FIXME
username:
ref: regcred-username
# Replace this value with the secret store id that's supplying the username
store: FIXME
values:
secret_name: regcred
# Replace this value with the servername of your registry
server: FIXME
templates:
# The init template is used to prepare the "dockerConfigJson" content
init: |
dockerConfigJson:
auths:
{{ .driver.values.server | quote }}:
username: {{ .driver.secrets.username | toRawJson }}
password: {{ .driver.secrets.password | toRawJson }}
manifests:
# The manifests template creates the Kubernetes Secret
# which can then be used in the workload "imagePullSecrets"
regcred-secret.yaml:
data: |
apiVersion: v1
kind: Secret
metadata:
name: {{ .driver.values.secret_name }}
data:
.dockerconfigjson: {{ .init.dockerConfigJson | toRawJson | b64enc }}
type: kubernetes.io/dockerconfigjson
location: namespace
outputs: |
secret_name: {{ .driver.values.secret_name }}
workload.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "custom-workload" {
driver_type = "humanitec/template"
id = "custom-workload"
name = "custom-workload"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/imagePullSecrets
value:
- name: $${resources['config.default#regcred'].outputs.secret_name}
END_OF_TEXT
}
})
}
}
workload.yaml
(view on GitHub)
:
# This workload Resource Definition adds an "imagePullSecrets" element to the Pod spec
# It references a "config" type Resource to obtain the secret name
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: custom-workload
entity:
name: custom-workload
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/imagePullSecrets
value:
- name: ${resources['config.default#regcred'].outputs.secret_name}
Ingress
This section contains example Resource Definitions for handling Kubernetes ingress traffic. Instead of the Ingress Driver, these examples use the Template Driver, which can render any Kubernetes YAML object.
ingress-traefik.yaml: defines an IngressRoute object for the Traefik Ingress Controller using the IngressRoute custom resource definition. This format is for use with the Humanitec CLI.
ingress-traefik-multiple-routes.yaml: defines an IngressRoute object for the Traefik Ingress Controller using the IngressRoute custom resource definition. It dynamically extracts the routes from the route resources in the Resource Graph to provide multiple routes. This format is for use with the Humanitec CLI.
ingress-ambassador.yaml: defines a Mapping object for the Ambassador Ingress Controller using the Mapping custom resource definition. This format is for use with the Humanitec CLI.
ingress-ambassador.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "ambassador-ingress" {
driver_type = "template"
id = "ambassador-ingress"
name = "ambassador-ingress"
type = "ingress"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = <<END_OF_TEXT
name: {{ .id }}-ingress
secretname: $${resources.tls-cert.outputs.tls_secret_name}
host: $${resources.dns.outputs.host}
namespace: $${resources['k8s-namespace#k8s-namespace'].outputs.namespace}
END_OF_TEXT
"manifests" = <<END_OF_TEXT
ambassador-mapping.yaml:
data:
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
name: {{ .init.name }}-mapping
spec:
host: {{ .init.host }}
prefix: /
service: my-service-name:8080
location: namespace
ambassador-tlscontext.yaml:
data:
apiVersion: getambassador.io/v3alpha1
kind: TLSContext
metadata:
name: {{ .init.name }}-tlscontext
spec:
hosts:
- {{ .init.host }}
secret: {{ .init.secretname }}
location: namespace
END_OF_TEXT
}
})
}
}
ingress-ambassador.yaml
(view on GitHub)
:
# This Resource Definition provisions Mapping and TLSContext objects for the Ambassador Ingress Controller
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: ambassador-ingress
entity:
name: ambassador-ingress
type: ingress
driver_type: template
driver_inputs:
values:
templates:
init: |
name: {{ .id }}-ingress
secretname: ${resources.tls-cert.outputs.tls_secret_name}
host: ${resources.dns.outputs.host}
namespace: ${resources['k8s-namespace#k8s-namespace'].outputs.namespace}
manifests: |
ambassador-mapping.yaml:
data:
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
name: {{ .init.name }}-mapping
spec:
host: {{ .init.host }}
prefix: /
service: my-service-name:8080
location: namespace
ambassador-tlscontext.yaml:
data:
apiVersion: getambassador.io/v3alpha1
kind: TLSContext
metadata:
name: {{ .init.name }}-tlscontext
spec:
hosts:
- {{ .init.host }}
secret: {{ .init.secretname }}
location: namespace
ingress-traefik-multiple-routes.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "traefik-ingress-eg" {
driver_type = "humanitec/template"
id = "traefik-ingress-eg"
name = "traefik-ingress-eg"
type = "ingress"
driver_inputs = {
values_string = jsonencode({
"routeHosts" = "$${resources.dns<route.outputs.host}"
"routePaths" = "$${resources.dns<route.outputs.path}"
"routePorts" = "$${resources.dns<route.outputs.port}"
"routeServices" = "$${resources.dns<route.outputs.service}"
"templates" = {
"init" = <<END_OF_TEXT
host: {{ .resource.host | quote }}
# ingress paths are added implicitly to our ingress resource based on the contents of our workload. These are an older
# alternative to route resources. Consider this deprecated in the future!
ingressPaths: {{ dig "rules" "http" (list) .resource | toRawJson }}
# The tls secret name could be generated by Humanitec or injected as an input parameter to our ingress.
tlsSecretName: {{ .driver.values.tls_secret_name | default .resource.tls_secret_name | default .driver.values.automatic_tls_secret_name | quote }}
{{- if eq (lower ( .driver.values.path_type | default "Prefix")) "exact" -}}
defaultMatchRule: Path
{{- else }}
defaultMatchRule: PathPrefix
{{- end }}
END_OF_TEXT
"manifests" = <<END_OF_TEXT
# Create our single manifest with many routes in it. Alternative configurations could create a manifest per route with unique file names if required.
ingressroute.yaml:
location: namespace
data:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
# id is the unique resource uuid for this ingress
name: {{ .id }}-ingressroute
annotations:
{{- range $k, $v := .driver.values.annotations }}
{{ $k | toRawJson }}: {{ $v | toRawJson }}
{{- end }}
labels:
{{- range $k, $v := .driver.values.labels }}
{{ $k | toRawJson }}: {{ $v | toRawJson }}
{{- end }}
spec:
entryPoints:
- websecure
routes:
# Add all the paths from the dependent route resources. Route resources can have different hostnames but will all obey the path type set out in the resource inputs.
{{- range $index, $path := .driver.values.routePaths }}
- match: Host(`{{ index $.driver.values.routeHosts $index }}`) && {{ $.init.defaultMatchRule }}(`{{ $path }}`)
kind: Rule
services:
- kind: Service
name: {{ index $.driver.values.routeServices $index | toRawJson }}
port: {{ index $.driver.values.routePorts $index }}
{{- end }}
# Add all the supported ingress paths. The old style ingress rules use a single hostname coming from the resource configuration but support different path types per rule.
# As mentioned further up, consider these deprecated in the future!
{{- range $path, $rule := .init.ingressPaths }}
{{ $lcType := lower $rule.type -}}
{{- if eq $lcType "implementationspecific" -}}
- match: Host(`{{ $.init.host }}`) && Path(`{{ $path }}`)
{{- else if eq $lcType "exact" -}}
- match: Host(`{{ $.init.host }}`) && Path(`{{ $path }}`)
{{ else }}
- match: Host(`{{ $.init.host }}`) && PathPrefix(`{{ $path }}`)
{{- end }}
kind: Rule
services:
- kind: Service
name: {{ $rule.name | quote }}
port: {{ $rule.port }}
{{- end }}
{{- if not (or .driver.values.no_tls (eq .init.tlsSecretName "")) }}
tls:
secretName: {{ .init.tlsSecretName | toRawJson }}
{{- end }}
END_OF_TEXT
}
})
}
}
ingress-traefik-multiple-routes.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: traefik-ingress-eg
entity:
name: traefik-ingress-eg
driver_type: humanitec/template
type: ingress
driver_inputs:
values:
# Find all the route resources that are dependent on any dns resources used in this workload.
# We extract arrays of their host, path, port, and service resource.
# These will become new entries in the .driver.values table.
routeHosts: ${resources.dns<route.outputs.host}
routePaths: ${resources.dns<route.outputs.path}
routePorts: ${resources.dns<route.outputs.port}
routeServices: ${resources.dns<route.outputs.service}
templates:
# The init template gives us a place to precompute some fields that we'll use in the manifests template.
init: |
host: {{ .resource.host | quote }}
# ingress paths are added implicitly to our ingress resource based on the contents of our workload. These are an older
# alternative to route resources. Consider this deprecated in the future!
ingressPaths: {{ dig "rules" "http" (list) .resource | toRawJson }}
# The tls secret name could be generated by Humanitec or injected as an input parameter to our ingress.
tlsSecretName: {{ .driver.values.tls_secret_name | default .resource.tls_secret_name | default .driver.values.automatic_tls_secret_name | quote }}
{{- if eq (lower ( .driver.values.path_type | default "Prefix")) "exact" -}}
defaultMatchRule: Path
{{- else }}
defaultMatchRule: PathPrefix
{{- end }}
manifests: |
# Create our single manifest with many routes in it. Alternative configurations could create a manifest per route with unique file names if required.
ingressroute.yaml:
location: namespace
data:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
# id is the unique resource uuid for this ingress
name: {{ .id }}-ingressroute
annotations:
{{- range $k, $v := .driver.values.annotations }}
{{ $k | toRawJson }}: {{ $v | toRawJson }}
{{- end }}
labels:
{{- range $k, $v := .driver.values.labels }}
{{ $k | toRawJson }}: {{ $v | toRawJson }}
{{- end }}
spec:
entryPoints:
- websecure
routes:
# Add all the paths from the dependent route resources. Route resources can have different hostnames but will all obey the path type set out in the resource inputs.
{{- range $index, $path := .driver.values.routePaths }}
- match: Host(`{{ index $.driver.values.routeHosts $index }}`) && {{ $.init.defaultMatchRule }}(`{{ $path }}`)
kind: Rule
services:
- kind: Service
name: {{ index $.driver.values.routeServices $index | toRawJson }}
port: {{ index $.driver.values.routePorts $index }}
{{- end }}
# Add all the supported ingress paths. The old style ingress rules use a single hostname coming from the resource configuration but support different path types per rule.
# As mentioned further up, consider these deprecated in the future!
{{- range $path, $rule := .init.ingressPaths }}
{{ $lcType := lower $rule.type -}}
{{- if eq $lcType "implementationspecific" -}}
- match: Host(`{{ $.init.host }}`) && Path(`{{ $path }}`)
{{- else if eq $lcType "exact" -}}
- match: Host(`{{ $.init.host }}`) && Path(`{{ $path }}`)
{{ else }}
- match: Host(`{{ $.init.host }}`) && PathPrefix(`{{ $path }}`)
{{- end }}
kind: Rule
services:
- kind: Service
name: {{ $rule.name | quote }}
port: {{ $rule.port }}
{{- end }}
{{- if not (or .driver.values.no_tls (eq .init.tlsSecretName "")) }}
tls:
secretName: {{ .init.tlsSecretName | toRawJson }}
{{- end }}
ingress-traefik.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "traefik-ingress" {
driver_type = "template"
id = "traefik-ingress"
name = "traefik-ingress"
type = "ingress"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = <<END_OF_TEXT
name: {{ .id }}-ir
secretname: $${resources.tls-cert.outputs.tls_secret_name}
host: $${resources.dns.outputs.host}
namespace: $${resources['k8s-namespace#k8s-namespace'].outputs.namespace}
END_OF_TEXT
"manifests" = <<END_OF_TEXT
traefik-ingressroute.yaml:
data:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: {{ .init.name }}
spec:
routes:
- match: Host(`{{ .init.host }}`) && PathPrefix(`/`)
kind: Rule
services:
- name: my-service-name
kind: Service
port: 8080
namespace: {{ .init.namespace }}
tls:
secretName: {{ .init.secretname }}
location: namespace
END_OF_TEXT
}
})
}
}
ingress-traefik.yaml
(view on GitHub)
:
# This Resource Definition provisions an IngressRoute object for the Traefik Ingress Controller
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: traefik-ingress
entity:
name: traefik-ingress
type: ingress
driver_type: template
driver_inputs:
values:
templates:
init: |
name: {{ .id }}-ir
secretname: ${resources.tls-cert.outputs.tls_secret_name}
host: ${resources.dns.outputs.host}
namespace: ${resources['k8s-namespace#k8s-namespace'].outputs.namespace}
manifests: |
traefik-ingressroute.yaml:
data:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: {{ .init.name }}
spec:
routes:
- match: Host(`{{ .init.host }}`) && PathPrefix(`/`)
kind: Rule
services:
- name: my-service-name
kind: Service
port: 8080
namespace: {{ .init.namespace }}
tls:
secretName: {{ .init.secretname }}
location: namespace
Labels
This section shows how to use the Template Driver for managing labels on Kubernetes objects.
While it is also possible to set labels via Score, the approach shown here shifts the management of labels down to the Platform, ensuring consistency and relieving developers of repeating common labels for each Workload in the Score extension file.
config-labels.yaml: Resource Definition of type config which defines the value for a sample label in a central place.
custom-workload-with-dynamic-labels.yaml: Add dynamic labels to your Workload. This format is for use with the Humanitec CLI.
custom-namespace-with-dynamic-labels.yaml: Add dynamic labels to your Namespace. This format is for use with the Humanitec CLI.
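Taken together, these definitions would yield labels like the following on the rendered Kubernetes objects. The values shown are illustrative, assuming an Environment ID of development and the sample cost_center_id output from the config Resource Definition:

```yaml
# Illustrative labels on a rendered object (e.g. the Namespace),
# assuming env id "development" and the sample config output:
metadata:
  labels:
    env_id: development
    cost_center_id: my-example-id   # provided by the "app-config" config Resource
```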
config-labels.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "app-config" {
driver_type = "humanitec/template"
id = "app-config"
name = "app-config"
type = "config"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = "cost_center_id: my-example-id\n"
}
})
}
}
resource "humanitec_resource_definition_criteria" "app-config_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.app-config.id
res_id = "app-config"
}
config-labels.yaml
(view on GitHub)
:
# This "config" type Resource Definition provides the value for the sample label
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: app-config
entity:
name: app-config
type: config
driver_type: humanitec/template
driver_inputs:
values:
templates:
# Returns a sample output named "cost_center_id" to be used as a label
outputs: |
cost_center_id: my-example-id
# Match the resource ID "app-config" so that it can be requested via that ID
criteria:
- res_id: app-config
custom-namespace-with-dynamic-labels.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "custom-namespace-with-label" {
driver_type = "humanitec/template"
id = "custom-namespace-with-label"
name = "custom-namespace-with-label"
type = "k8s-namespace"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = "name: $${context.app.id}-$${context.env.id}\n"
"manifests" = <<END_OF_TEXT
namespace.yaml:
location: cluster
data:
apiVersion: v1
kind: Namespace
metadata:
labels:
env_id: $${context.env.id}
cost_center_id: $${resources['config.default#app-config'].outputs.cost_center_id}
name: {{ .init.name }}
END_OF_TEXT
"outputs" = "namespace: {{ .init.name }}\n"
}
})
}
}
resource "humanitec_resource_definition_criteria" "custom-namespace-with-label_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.custom-namespace-with-label.id
}
custom-namespace-with-dynamic-labels.yaml
(view on GitHub)
:
# This Resource Definition references the "config" resource to use its output as a label
# and adds another label taken from the Deployment context
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: custom-namespace-with-label
entity:
name: custom-namespace-with-label
type: k8s-namespace
driver_type: humanitec/template
driver_inputs:
values:
templates:
init: |
name: ${context.app.id}-${context.env.id}
manifests: |
namespace.yaml:
location: cluster
data:
apiVersion: v1
kind: Namespace
metadata:
labels:
env_id: ${context.env.id}
cost_center_id: ${resources['config.default#app-config'].outputs.cost_center_id}
name: {{ .init.name }}
outputs: |
namespace: {{ .init.name }}
# Set matching criteria as required
criteria:
- {}
custom-workload-with-dynamic-labels.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "custom-workload-with-label" {
driver_type = "humanitec/template"
id = "custom-workload-with-label"
name = "custom-workload-with-label"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/labels
value:
{{- range $key, $val := .resource.spec.labels }}
{{ $key }}: {{ $val | quote }}
{{- end }}
env_id: $${context.env.id}
cost_center_id: $${resources['config.default#app-config'].outputs.cost_center_id}
- op: add
path: /spec/service/labels
value:
{{- range $key, $val := .resource.spec.service.labels }}
{{ $key }}: {{ $val | quote }}
{{- end }}
env_id: $${context.env.id}
cost_center_id: $${resources['config.default#app-config'].outputs.cost_center_id}
END_OF_TEXT
}
})
}
}
resource "humanitec_resource_definition_criteria" "custom-workload-with-label_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.custom-workload-with-label.id
}
custom-workload-with-dynamic-labels.yaml
(view on GitHub)
:
# This Resource Definition references the "config" resource to use its output as a label
# and adds another label taken from the Deployment context
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: custom-workload-with-label
entity:
name: custom-workload-with-label
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
# Remove the /spec/service/labels part if there is no "service" in your Score file.
outputs: |
update:
- op: add
path: /spec/labels
value:
{{- range $key, $val := .resource.spec.labels }}
{{ $key }}: {{ $val | quote }}
{{- end }}
env_id: ${context.env.id}
cost_center_id: ${resources['config.default#app-config'].outputs.cost_center_id}
- op: add
path: /spec/service/labels
value:
{{- range $key, $val := .resource.spec.service.labels }}
{{ $key }}: {{ $val | quote }}
{{- end }}
env_id: ${context.env.id}
cost_center_id: ${resources['config.default#app-config'].outputs.cost_center_id}
# Set matching criteria as required
criteria:
- {}
Namespace
This section contains example Resource Definitions using the Template Driver for managing Kubernetes namespaces.
custom-namespace.yaml: Create Kubernetes namespaces with your own custom naming scheme. This format is for use with the Humanitec CLI.
custom-namespace.tf: Create Kubernetes namespaces with your own custom naming scheme. This format is for use with the Humanitec Terraform provider.
short-namespace.yaml: Create Kubernetes namespaces with your own custom naming scheme of defined length. This format is for use with the Humanitec CLI.
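As a worked example, for a hypothetical Application my-online-store deployed to an Environment development, the two naming schemes would resolve roughly as follows (the app and env IDs are illustrative):

```yaml
# custom-namespace: name: ${context.env.id}-${context.app.id}
namespace: development-my-online-store

# short-namespace: name: {{ trunc 8 "${context.env.id}" }}-{{ trunc 8 "${context.app.id}" }}
# trunc 8 keeps the first 8 characters: "development" -> "developm",
# "my-online-store" -> "my-onlin"
namespace: developm-my-onlin
```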
custom-namespace.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "custom-namespace" {
driver_type = "humanitec/template"
id = "custom-namespace"
name = "custom-namespace2"
type = "k8s-namespace"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = "name: $${context.env.id}-$${context.app.id}\n"
"manifests" = <<END_OF_TEXT
namespace.yaml:
location: cluster
data:
apiVersion: v1
kind: Namespace
metadata:
labels:
pod-security.kubernetes.io/enforce: restricted
name: {{ .init.name }}
END_OF_TEXT
"outputs" = "namespace: {{ .init.name }}\n"
}
})
}
}
resource "humanitec_resource_definition_criteria" "custom-namespace_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.custom-namespace.id
}
custom-namespace.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: custom-namespace
entity:
name: custom-namespace2
type: k8s-namespace
driver_type: humanitec/template
driver_inputs:
values:
templates:
# Use any combination of placeholders and characters to configure your naming scheme
init: |
name: ${context.env.id}-${context.app.id}
manifests: |
namespace.yaml:
location: cluster
data:
apiVersion: v1
kind: Namespace
metadata:
labels:
pod-security.kubernetes.io/enforce: restricted
name: {{ .init.name }}
outputs: |
namespace: {{ .init.name }}
criteria:
- {}
short-namespace.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "custom-namespace" {
driver_type = "humanitec/template"
id = "custom-namespace"
name = "custom-namespace"
type = "k8s-namespace"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = "name: {{ trunc 8 \"$${context.env.id}\" }}-{{ trunc 8 \"$${context.app.id}\" }}\n"
"manifests" = <<END_OF_TEXT
namespace.yaml:
location: cluster
data:
apiVersion: v1
kind: Namespace
metadata:
labels:
pod-security.kubernetes.io/enforce: restricted
name: {{ .init.name }}
END_OF_TEXT
"outputs" = "namespace: {{ .init.name }}\n"
}
})
}
}
resource "humanitec_resource_definition_criteria" "custom-namespace_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.custom-namespace.id
}
short-namespace.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: custom-namespace
entity:
name: custom-namespace
type: k8s-namespace
driver_type: humanitec/template
driver_inputs:
values:
templates:
# Here the namespace name is shortened to be a maximum of 17 characters,
# no matter how long the app and env name might be.
init: |
name: {{ trunc 8 "${context.env.id}" }}-{{ trunc 8 "${context.app.id}" }}
manifests: |
namespace.yaml:
location: cluster
data:
apiVersion: v1
kind: Namespace
metadata:
labels:
pod-security.kubernetes.io/enforce: restricted
name: {{ .init.name }}
outputs: |
namespace: {{ .init.name }}
criteria:
- {}
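The trunc 8 calls cap each id segment at eight characters. A minimal Python sketch of the resulting name (the app and env ids are illustrative):

```python
# Mirrors: {{ trunc 8 "${context.env.id}" }}-{{ trunc 8 "${context.app.id}" }}
env_id = "development"     # illustrative environment id
app_id = "my-application"  # illustrative application id

# Sprig's `trunc 8` keeps the first 8 characters, like Python slicing
name = f"{env_id[:8]}-{app_id[:8]}"
print(name)  # developm-my-appli (17 characters maximum)
```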
Node selector
This section contains example Resource Definitions using the Template Driver for setting nodeSelectors on your Pods.
aci-workload.yaml
: Add the required node selector and tolerations to the Workload so it can be scheduled on an Azure AKS virtual node. This format is for use with the Humanitec CLI.
aci-workload.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "aci-workload" {
driver_type = "humanitec/template"
id = "aci-workload"
name = "aci-workload"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/tolerations
value:
- key: "virtual-kubelet.io/provider"
operator: "Exists"
- key: "azure.com/aci"
effect: "NoSchedule"
- op: add
path: /spec/nodeSelector
value:
kubernetes.io/role: agent
beta.kubernetes.io/os: linux
type: virtual-kubelet
END_OF_TEXT
}
})
}
}
aci-workload.yaml
(view on GitHub)
:
# Add tolerations and nodeSelector to the Workload to make it runnable on AKS virtual nodes
# served through Azure Container Instances (ACI).
# See https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aci-workload
entity:
name: aci-workload
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/tolerations
value:
- key: "virtual-kubelet.io/provider"
operator: "Exists"
- key: "azure.com/aci"
effect: "NoSchedule"
- op: add
path: /spec/nodeSelector
value:
kubernetes.io/role: agent
beta.kubernetes.io/os: linux
type: virtual-kubelet
criteria: []
Resourcequota
This example shows a sample usage of the base-env
Resource Type. It is one of the implicit Resource Types that always gets provisioned for a Deployment.
The Resource Definition base-env-resourcequota.yaml
uses it to provision a Kubernetes manifest describing a ResourceQuota in the target namespace.
The base-env
Resource Definition reads the configuration values from another Resource of type config
using a Resource Reference. The reference specifies a resource ID (config#quota
) so that the proper config
Resource Definition will be matched based on its matching criteria.
Two config
Resource Definitions are provided:
- config-quota.yaml will be matched for all references of res_id: quota
- config-quota-override.yaml will additionally be matched for a particular app_id: my-app only, effectively providing an override for the configuration values for this particular Application id
The Resource Graphs for two Applications, one of which matches the “override” criteria, will look like this:
flowchart LR
subgraph app2[Resource Graph "my-app"]
direction LR
workload2[Workload] --> baseEnv2(type: base-env\nid: base-env) --> config2("type: config\nid: quota")
end
subgraph app1[Resource Graph "some-app"]
direction LR
workload1[Workload] --> baseEnv1(type: base-env\nid: base-env) --> config1("type: config\nid: quota")
end
resDefBaseEnv[base-env\nResource Definition]
resDefBaseEnv -.-> baseEnv1
resDefBaseEnv -.-> baseEnv2
resDefQuotaConfig[config-quota\nResource Definition] -.->|criteria:\n- res_id: quota| config1
resDefQuotaConfigOverride[config-quota-override\nResource Definition] -.->|criteria:\n- res_id: quota\n app_id: my-app| config2
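Conceptually, the Orchestrator selects the Resource Definition whose matching criteria are the most specific match for the resolved context. A rough Python sketch of that selection for the two config Definitions below (not Orchestrator code; the selection rule is simplified for illustration):

```python
# Matching criteria of the two config Resource Definitions (see files below)
definitions = [
    {"id": "quota-config", "criteria": {"res_id": "quota"}},
    {"id": "quota-config-override", "criteria": {"res_id": "quota", "app_id": "my-app"}},
]

def match(res_id: str, app_id: str) -> str:
    """Return the id of the most specific matching Definition (simplified)."""
    context = {"res_id": res_id, "app_id": app_id}
    # A Definition is a candidate if every criterion matches the context
    candidates = [
        d for d in definitions
        if all(context.get(k) == v for k, v in d["criteria"].items())
    ]
    # The Definition with the most matching criteria wins
    return max(candidates, key=lambda d: len(d["criteria"]))["id"]

print(match("quota", "some-app"))  # quota-config
print(match("quota", "my-app"))    # quota-config-override
```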
base-env-resourcequota.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "base-env" {
driver_type = "humanitec/template"
id = "base-env"
name = "base-env"
type = "base-env"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"manifests" = "quota.yaml:\n location: namespace\n data:\n apiVersion: v1\n kind: ResourceQuota\n metadata:\n name: compute-resources\n spec:\n hard:\n limits.cpu: $${resources['config#quota'].outputs.limits-cpu}\n limits.memory: $${resources['config#quota'].outputs.limits-memory}"
}
})
}
}
resource "humanitec_resource_definition_criteria" "base-env_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.base-env.id
}
base-env-resourcequota.yaml
(view on GitHub)
:
# This Resource Definition uses the base-env Resource type to create
# a ResourceQuota manifest in the target namespace.
# The actual values are read from a referenced config resource.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: base-env
entity:
name: base-env
type: base-env
driver_type: humanitec/template
driver_inputs:
values:
templates:
manifests: |-
quota.yaml:
location: namespace
data:
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-resources
spec:
hard:
limits.cpu: ${resources['config#quota'].outputs.limits-cpu}
limits.memory: ${resources['config#quota'].outputs.limits-memory}
criteria:
- {}
config-quota-override.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "quota-config-override" {
driver_type = "humanitec/echo"
id = "quota-config-override"
name = "quota-config-override"
type = "config"
driver_inputs = {
values_string = jsonencode({
"limits-cpu" = "750m"
"limits-memory" = "750Mi"
})
}
}
resource "humanitec_resource_definition_criteria" "quota-config-override_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.quota-config-override.id
res_id = "quota"
app_id = "my-app"
}
config-quota-override.yaml
(view on GitHub)
:
# This Resource Definition uses the Echo Driver to provide configuration values
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: quota-config-override
entity:
name: quota-config-override
type: config
driver_type: humanitec/echo
driver_inputs:
# Any Driver inputs will be returned as outputs by the Echo Driver
values:
limits-cpu: "750m"
limits-memory: "750Mi"
# The matching criteria make this Resource Definition match for a particular app_id only
criteria:
- res_id: quota
app_id: my-app
config-quota.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "quota-config" {
driver_type = "humanitec/echo"
id = "quota-config"
name = "quota-config"
type = "config"
driver_inputs = {
values_string = jsonencode({
"limits-cpu" = "500m"
"limits-memory" = "500Mi"
})
}
}
resource "humanitec_resource_definition_criteria" "quota-config_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.quota-config.id
res_id = "quota"
}
config-quota.yaml
(view on GitHub)
:
# This Resource Definition uses the Echo Driver to provide configuration values
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: quota-config
entity:
name: quota-config
type: config
driver_type: humanitec/echo
driver_inputs:
# Any Driver inputs will be returned as outputs by the Echo Driver
values:
limits-cpu: "500m"
limits-memory: "500Mi"
criteria:
- res_id: quota
Security context
This section contains example Resource Definitions using the Template Driver for adding a security context to your Kubernetes Deployments.
- custom-workload-with-security-context.yaml: Add a security context to your Workload. This format is for use with the Humanitec CLI.
- custom-workload-with-security-context.tf: Add a security context to your Workload. This format is for use with the Humanitec Terraform provider.
custom-workload-with-security-context.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "custom-workload" {
driver_type = "humanitec/template"
id = "custom-workload"
name = "custom-workload"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/securityContext
value:
fsGroup: 1000
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
{{- range $containerId, $value := .resource.spec.containers }}
- op: add
path: /spec/containers/{{ $containerId }}/securityContext
value:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
{{- end }}
END_OF_TEXT
}
})
}
}
resource "humanitec_resource_definition_criteria" "custom-workload_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.custom-workload.id
}
custom-workload-with-security-context.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: custom-workload
entity:
name: custom-workload
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/securityContext
value:
fsGroup: 1000
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
{{- range $containerId, $value := .resource.spec.containers }}
- op: add
path: /spec/containers/{{ $containerId }}/securityContext
value:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
{{- end }}
criteria:
- {}
Serviceaccount
This section contains example Resource Definitions using the Template Driver for provisioning Kubernetes ServiceAccounts for your Workloads.
The solution consists of a combination of two Resource Definitions of type workload
and k8s-service-account
.
The workload
Resource Type is an implicit Type which is automatically referenced for any Deployment.
This workload
Resource Definition adds the serviceAccountName
item to the Pod spec and references a k8s-service-account
type Resource, causing it to be provisioned. The k8s-service-account
Resource Definition generates the Kubernetes manifest for the actual ServiceAccount.
A Resource Graph for a Workload using those Resource Definitions will look like this:
flowchart LR
workloadVirtual[Workload "my-workload"] --> workload(id: modules.my-workload\ntype: workload\nclass: default)
workload --> serviceAccount(id: modules.my-workload\ntype: k8s-service-account\nclass: default)
Note that the resource id
is used in the k8s-service-account
Resource Definition to derive the name of the actual Kubernetes ServiceAccount. Check the code for details.
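The Go template expression {{ index (splitList "." "${context.res.id}") 1 }} takes the second dot-separated segment of the resource id. In Python terms (the Workload name is illustrative):

```python
# For a Workload named "my-workload", context.res.id is "modules.my-workload"
res_id = "modules.my-workload"

# splitList "." splits on dots; index ... 1 picks the second element
name = res_id.split(".")[1]
print(name)  # my-workload
```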
Example files:
- cli-serviceaccount-workload-def.yaml and cli-serviceaccount-k8ssa-def.yaml: Resource Definition combination for Workload/ServiceAccount. This format is for use with the Humanitec CLI.
- tf-serviceaccount-workload-def.tf and tf-serviceaccount-k8ssa-def.tf: Resource Definition combination for Workload/ServiceAccount. This format is for use with the Humanitec Terraform provider.
cli-serviceaccount-k8ssa-def.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "serviceaccount-k8s-service-account" {
driver_type = "humanitec/template"
id = "serviceaccount-k8s-service-account"
name = "serviceaccount-k8s-service-account"
type = "k8s-service-account"
driver_inputs = {
values_string = jsonencode({
"res_id" = "$${context.res.id}"
"templates" = {
"init" = "name: {{ index (splitList \".\" \"$${context.res.id}\") 1 }}\n"
"outputs" = "name: {{ .init.name }}\n"
"manifests" = <<END_OF_TEXT
service-account.yaml:
location: namespace
data:
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .init.name }}
END_OF_TEXT
}
})
}
}
cli-serviceaccount-k8ssa-def.yaml
(view on GitHub)
:
# This Resource Definition provisions a Kubernetes ServiceAccount
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: serviceaccount-k8s-service-account
entity:
driver_type: humanitec/template
name: serviceaccount-k8s-service-account
type: k8s-service-account
driver_inputs:
values:
res_id: ${context.res.id}
templates:
# Name the ServiceAccount after the Resource
init: |
name: {{ index (splitList "." "${context.res.id}") 1 }}
outputs: |
name: {{ .init.name }}
manifests: |
service-account.yaml:
location: namespace
data:
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .init.name }}
cli-serviceaccount-workload-def.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "serviceaccount-workload" {
driver_type = "humanitec/template"
id = "serviceaccount-workload"
name = "serviceaccount-workload"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/serviceAccountName
value: $${resources.k8s-service-account.outputs.name}
END_OF_TEXT
}
})
}
}
cli-serviceaccount-workload-def.yaml
(view on GitHub)
:
# This Resource Definition adds a Kubernetes ServiceAccount to a Workload
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: serviceaccount-workload
entity:
driver_type: humanitec/template
name: serviceaccount-workload
type: workload
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/serviceAccountName
value: ${resources.k8s-service-account.outputs.name}
tf-serviceaccount-k8ssa-def.tf
(view on GitHub)
:
# This Resource Definition provisions a Kubernetes ServiceAccount
resource "humanitec_resource_definition" "k8s_service_account" {
driver_type = "humanitec/template"
id = "${var.prefix}k8s-service-account"
name = "${var.prefix}k8s-service-account"
type = "k8s-service-account"
driver_inputs = {
values_string = jsonencode({
templates = {
# Name the ServiceAccount after the Resource
init = <<EOL
name: {{ index (splitList "." "$${context.res.id}") 1 }}
EOL
manifests = <<EOL
service-account.yaml:
location: namespace
data:
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .init.name }}
EOL
outputs = <<EOL
name: {{ .init.name }}
EOL
}
})
}
}
tf-serviceaccount-workload-def.tf
(view on GitHub)
:
# This Resource Definition adds a Kubernetes ServiceAccount to a Workload
resource "humanitec_resource_definition" "workload" {
driver_type = "humanitec/template"
id = "${var.prefix}workload"
name = "${var.prefix}workload"
type = "workload"
driver_inputs = {
values_string = jsonencode({
templates = {
init = ""
manifests = ""
outputs = <<EOL
update:
- op: add
path: /spec/serviceAccountName
value: $${resources.k8s-service-account.outputs.name}
EOL
}
})
}
}
Tls cert
This section contains example Resource Definitions using the Template Driver for managing TLS Certificates in your cluster.
certificate-crd.yaml
: Add a certificate custom resource definition in the namespace of your deployment. This format is for use with the Humanitec CLI.
certificate-crd.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "certificate-crd" {
driver_type = "humanitec/template"
id = "certificate-crd"
name = "certificate-crd"
type = "tls-cert"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = <<END_OF_TEXT
tlsSecretName: {{ .id }}-tls
hostName: $${resources.dns.outputs.host}
certificateName: {{ .id }}-cert
END_OF_TEXT
"manifests" = <<END_OF_TEXT
certificate-crd.yml:
data:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ .init.certificateName }}
spec:
secretName: {{ .init.tlsSecretName }}
duration: 2160h # 90d
renewBefore: 720h # 30d
isCA: false
privateKey:
algorithm: RSA
encoding: PKCS1
size: 2048
usages:
- server auth
- client auth
dnsNames:
- {{ .init.hostName | toString | toRawJson }}
# The name of the issuerRef must point to the issuer / clusterIssuer in your cluster
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
location: namespace
END_OF_TEXT
"outputs" = "tls_secret_name: {{ .init.tlsSecretName }}\n"
}
})
}
}
resource "humanitec_resource_definition_criteria" "certificate-crd_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.certificate-crd.id
class = "default"
}
certificate-crd.yaml
(view on GitHub)
:
# This Resource Definition creates a certificate custom resource definition,
# which will instruct cert-manager to create a TLS certificate
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: certificate-crd
entity:
driver_type: humanitec/template
name: certificate-crd
type: tls-cert
criteria:
- class: default
driver_inputs:
values:
templates:
init: |
tlsSecretName: {{ .id }}-tls
hostName: ${resources.dns.outputs.host}
certificateName: {{ .id }}-cert
manifests: |
certificate-crd.yml:
data:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ .init.certificateName }}
spec:
secretName: {{ .init.tlsSecretName }}
duration: 2160h # 90d
renewBefore: 720h # 30d
isCA: false
privateKey:
algorithm: RSA
encoding: PKCS1
size: 2048
usages:
- server auth
- client auth
dnsNames:
- {{ .init.hostName | toString | toRawJson }}
# The name of the issuerRef must point to the issuer / clusterIssuer in your cluster
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
location: namespace
outputs: |
tls_secret_name: {{ .init.tlsSecretName }}
Tolerations
This section contains example Resource Definitions using the Template Driver for managing tolerations on your Pods.
tolerations.yaml
: Add tolerations to the Workload. This format is for use with the Humanitec CLI.
tolerations.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "workload-toleration" {
driver_type = "humanitec/template"
id = "workload-toleration"
name = "workload-toleration"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/tolerations
value:
- key: "example-key"
operator: "Exists"
effect: "NoSchedule"
END_OF_TEXT
}
})
}
}
tolerations.yaml
(view on GitHub)
:
# Add tolerations to the Workload by adding a value to the manifest at .spec.tolerations
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: workload-toleration
entity:
name: workload-toleration
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
outputs: |
update:
- op: add
path: /spec/tolerations
value:
- key: "example-key"
operator: "Exists"
effect: "NoSchedule"
criteria: []
Volumes static provisioning
This example will let participating Workloads share a common persistent storage service through the Kubernetes volumes system.
It is possible to use the Drivers volume-nfs
or volume-pvc
to create a PersistentVolume for your application. If you have special requirements for your PersistentVolume, you can also use the Template Driver to create it as shown here.
The example setup will perform static provisioning for a Kubernetes PersistentVolume of type nfs
and a corresponding PersistentVolumeClaim. The volume points to an existing NFS server endpoint. The endpoint shown is an in-cluster NFS service which can be set up using this Kubernetes example. Modify the endpoint to use your own NFS server, or substitute the data completely for a different volume type.
flowchart TB
subgraph pod1[Pod]
direction TB
subgraph container1[Container]
volumeMount1(volumeMount\n/tmp/data):::codeComponent
end
volumeMount1 --> volume1(volume):::codeComponent
end
subgraph pod2[Pod]
direction TB
subgraph container2[Container]
volumeMount2(volumeMount\n/tmp/data):::codeComponent
end
volumeMount2 --> volume2(volume):::codeComponent
end
pvc1(PersistentVolumeClaim) --> pv1(PersistentVolume)
volume1 --> pvc1
pvc2(PersistentVolumeClaim) --> pv2(PersistentVolume)
volume2 --> pvc2
nfsServer[NFS Server]
pv1 --> nfsServer
pv2 --> nfsServer
classDef codeComponent font-family:Courier
To use the example, apply both Resource Definitions to your Organization and add the required matching criteria to both so they are matched to your target Deployments.
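For example, restricting both Resource Definitions to a single Application could look like this in each file (the app_id is illustrative):

```yaml
criteria:
- app_id: my-app
```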
Note that this setup does not require any resource
to be requested via Score. The implicit workload
Resource, when matched to the Resource Definition of type workload
of this example, will trigger the provisioning of the volume
Resource through its own Resource reference.
Those files make up the example:
- workload-volume-nfs.yaml: Resource Definition of type workload. It references a Resource of type volume through Resource References, thus adding such a Resource to the Resource Graph and effectively triggering the provisioning of that Resource. It uses the Resource outputs to set an annotation for a fictitious backup solution, and to add the PersistentVolumeClaim to the Workload container.
- volume-nfs.yaml: Resource Definition of type volume. It creates the PersistentVolume and PersistentVolumeClaim manifests and adds the volumes element to the Workload's Pod. The ID generated in the init section will be different for each active Resource, i.e. for each Workload, so that each Workload gets its own PersistentVolume and PersistentVolumeClaim objects created for it. Still, through the common NFS server endpoint, they will effectively share access to the data.
The resulting Resource Graph portion will look like this:
flowchart LR
subgraph resource-graph[Resource Graph]
direction TB
W1((Workload)) --->|implicit reference| W2(Workload)
W2 --->|"resource reference\n${resources.volume...}"| V1(Volume)
end
subgraph key [Key]
VN((Virtual\nNodes))
AN(Active\nResources)
end
resource-graph ~~~ key
volume-nfs.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "volume-nfs" {
driver_type = "humanitec/template"
id = "volume-nfs"
name = "volume-nfs"
type = "volume"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = <<END_OF_TEXT
# Generate a unique id for each pv/pvc combination.
# Every Workload will have a separate pv and pvc created for it,
# but pointing to the same NFS server endpoint.
volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
pvBaseName: pv-tmpl-
pvcBaseName: pvc-tmpl-
volBaseName: vol-tmpl-
END_OF_TEXT
"manifests" = {
"app-pv-tmpl.yaml" = {
"location" = "namespace"
"data" = <<END_OF_TEXT
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ .init.pvBaseName }}{{ .init.volumeUid }}
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
nfs:
server: nfs-server.default.svc.cluster.local
path: "/"
mountOptions:
- nfsvers=4.2
END_OF_TEXT
}
"app-pvc-tmpl.yaml" = {
"location" = "namespace"
"data" = <<END_OF_TEXT
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Mi
volumeName: {{ .init.pvBaseName }}{{ .init.volumeUid }}
END_OF_TEXT
}
"app-vol-tmpl.yaml" = {
"location" = "volumes"
"data" = <<END_OF_TEXT
name: {{ .init.volBaseName }}{{ .init.volumeUid }}
persistentVolumeClaim:
claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
END_OF_TEXT
}
}
"outputs" = "volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}\npvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}"
}
})
}
}
volume-nfs.yaml
(view on GitHub)
:
# Using the Template Driver for the static provisioning of
# a Kubernetes PersistentVolume and PersistentVolumeClaim combination,
# then adding the volume into the Pod of the Workload.
# The volumeMount in the container is defined in the "workload" type Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: volume-nfs
entity:
name: volume-nfs
type: volume
driver_type: humanitec/template
driver_inputs:
values:
templates:
init: |
# Generate a unique id for each pv/pvc combination.
# Every Workload will have a separate pv and pvc created for it,
# but pointing to the same NFS server endpoint.
volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
pvBaseName: pv-tmpl-
pvcBaseName: pvc-tmpl-
volBaseName: vol-tmpl-
manifests:
####################################################################
# This template creates the PersistentVolume in the target namespace
# Modify the nfs server and path to address your NFS server
####################################################################
app-pv-tmpl.yaml:
location: namespace
data: |
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ .init.pvBaseName }}{{ .init.volumeUid }}
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
nfs:
server: nfs-server.default.svc.cluster.local
path: "/"
mountOptions:
- nfsvers=4.2
#########################################################################
# This template creates the PersistentVolumeClaim in the target namespace
#########################################################################
app-pvc-tmpl.yaml:
location: namespace
data: |
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Mi
volumeName: {{ .init.pvBaseName }}{{ .init.volumeUid }}
########################################################
# This template creates the volume in the Workload's Pod
########################################################
app-vol-tmpl.yaml:
location: volumes
data: |
name: {{ .init.volBaseName }}{{ .init.volumeUid }}
persistentVolumeClaim:
claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
# Make the volume name and pvc name available for other Resources
outputs: |
volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}
pvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
workload-volume-nfs.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "workload-volume-nfs" {
driver_type = "humanitec/template"
id = "workload-volume-nfs"
name = "workload-volume-nfs"
type = "workload"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = <<END_OF_TEXT
pvcName: $${resources.volume.outputs.pvcName}
volumeName: $${resources.volume.outputs.volumeName}
END_OF_TEXT
"outputs" = <<END_OF_TEXT
update:
- op: add
path: /spec/annotations/backup.org-name.io
value: {{ .init.pvcName }}
{{- range $containerId, $value := .resource.spec.containers }}
- op: add
path: /spec/containers/{{ $containerId }}/volumeMounts
value:
- name: {{ $.init.volumeName }}
mountPath: /tmp/data
{{- end }}
END_OF_TEXT
}
})
}
}
workload-volume-nfs.yaml
(view on GitHub)
:
# This workload Resource Definition uses the output of the "volume" type Resource
# to add an annotation for a backup solution
# and to create the volumeMount for the container.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: workload-volume-nfs
entity:
name: workload-volume-nfs
type: workload
driver_type: humanitec/template
driver_inputs:
values:
templates:
init: |
pvcName: ${resources.volume.outputs.pvcName}
volumeName: ${resources.volume.outputs.volumeName}
outputs: |
update:
- op: add
path: /spec/annotations/backup.org-name.io
value: {{ .init.pvcName }}
{{- range $containerId, $value := .resource.spec.containers }}
- op: add
path: /spec/containers/{{ $containerId }}/volumeMounts
value:
- name: {{ $.init.volumeName }}
mountPath: /tmp/data
{{- end }}
Terraform driver
Resource Definitions using the Terraform Driver
This section contains example Resource Definitions using the Terraform Driver.
Azure blob
Use the Terraform Driver to provision Azure Blob storage resources.
ssh-secret-refs.tf
: uses secret references to obtain an SSH key from a secret store to connect to the Git repo providing the Terraform code.
ssh-secret-refs.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "azure-blob" {
driver_type = "humanitec/terraform"
id = "azure-blob"
name = "azure-blob"
type = "azure-blob"
driver_inputs = {
# All secrets are read from a secret store using secret references
secret_refs = jsonencode({
variables = {
client_id = {
ref = var.client_id_secret_reference_key
store = var.secret_store
}
client_secret = {
ref = var.client_secret_secret_reference_key
store = var.secret_store
}
}
source = {
# Using an SSH key to authenticate against the Git repo providing the Terraform module
ssh_key = {
ref = var.ssh_key_secret_reference_key
store = var.secret_store
}
}
})
values_string = jsonencode({
source = {
path = "azure-blob/terraform/"
rev = "refs/heads/main"
url = "[email protected]:my-org/my-repo.git"
}
variables = {
# Variables for the Terraform module located in the Git repo
tenant_id = var.tenant_id
subscription_id = var.subscription_id
resource_group_name = var.resource_group_name
name = var.name
prefix = var.prefix
account_tier = var.account_tier
account_replication_type = var.account_replication_type
container_name = var.container_name
container_access_type = var.container_access_type
}
})
}
}
Backends
Backends
Humanitec manages the state file for the local
backend. This is the backend that is used if no backend is specified.
To manage your own state, you need to define your own backend. We recommend defining the backend configuration in the script part of the Resource Definition, i.e. as an override.tf file (see the Inputs of the Terraform Driver). This allows the backend to be tuned per Resource instance.
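A minimal override.tf supplied through the script input might look like this (the bucket name, key, and region are placeholders; the s3-backend.tf example further down shows a parameterized version):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "tf-state/my-app/my-env"
    region = "us-east-1"
  }
}
```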
To centralize configuration, it is also recommended to create a config resource that manages the backend configuration in one place.
In this example, there are two config
resources defined. Both use the Template Driver to generate outputs for use in the example Resource Definition:
- backend-config.yaml, which provides shared backend configuration that can be used across Resource Definitions.
- account-config-aws.yaml, which provides credentials used by the provider.
The example Resource Definition s3-backend.yaml
does the following:
- Configures a backend using the backend-config.yaml.
- Configures the provider using a different set of credentials from account-config-aws.yaml.
- Provisions an S3 bucket.
account-config-aws.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "account-config-aws" {
driver_type = "humanitec/template"
id = "account-config-aws"
name = "account-config-aws"
type = "config"
driver_account = "aws-credentials"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"secrets" = <<END_OF_TEXT
aws_access_key_id: {{ .driver.secrets.account.aws_access_key_id }}
aws_secret_access_key: {{ .driver.secrets.account.aws_secret_access_key }}
credentials_file: |
[default]
aws_access_key_id = {{ .driver.secrets.account.aws_access_key_id }}
aws_secret_access_key = {{ .driver.secrets.account.aws_secret_access_key }}
END_OF_TEXT
}
})
}
}
resource "humanitec_resource_definition_criteria" "account-config-aws_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.account-config-aws.id
res_id = "aws-account"
}
account-config-aws.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: account-config-aws
entity:
criteria:
# This res_id is used in the resource reference in the s3-backend Resource Definition.
- res_id: aws-account
# The driver_account references a Cloud Account configured in the Platform Orchestrator.
# Replace with the name of your AWS Cloud Account.
driver_account: aws-credentials
driver_inputs:
values:
templates:
secrets: |
aws_access_key_id: {{ .driver.secrets.account.aws_access_key_id }}
aws_secret_access_key: {{ .driver.secrets.account.aws_secret_access_key }}
credentials_file: |
[default]
aws_access_key_id = {{ .driver.secrets.account.aws_access_key_id }}
aws_secret_access_key = {{ .driver.secrets.account.aws_secret_access_key }}
driver_type: humanitec/template
name: account-config-aws
type: config
backend-config.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "tf-backend-config" {
driver_type = "humanitec/template"
id = "tf-backend-config"
name = "tf-backend-config"
type = "config"
driver_account = "aws-credentials"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"outputs" = <<END_OF_TEXT
bucket: my-terraform-state-bucket
key_prefix: "tf-state/"
region: us-east-1
END_OF_TEXT
"secrets" = <<END_OF_TEXT
credentials_file: |
[default]
aws_access_key_id = {{ .driver.secrets.account.aws_access_key_id }}
aws_secret_access_key = {{ .driver.secrets.account.aws_secret_access_key }}
END_OF_TEXT
}
})
}
}
resource "humanitec_resource_definition_criteria" "tf-backend-config_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.tf-backend-config.id
res_id = "tf-backend"
}
backend-config.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: tf-backend-config
entity:
criteria:
# This res_id is used in the resource reference in the s3-backend Resource Definition.
- res_id: tf-backend
# The driver_account references a Cloud Account configured in the Platform Orchestrator.
# Replace with the name of your AWS Cloud Account.
driver_account: aws-credentials
driver_inputs:
values:
templates:
outputs: |
bucket: my-terraform-state-bucket
key_prefix: "tf-state/"
region: us-east-1
secrets: |
credentials_file: |
[default]
aws_access_key_id = {{ .driver.secrets.account.aws_access_key_id }}
aws_secret_access_key = {{ .driver.secrets.account.aws_secret_access_key }}
driver_type: humanitec/template
name: tf-backend-config
type: config
s3-backend.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "s3-backend-example" {
driver_type = "humanitec/terraform"
id = "s3-backend-example"
name = "s3-backend-example"
type = "s3"
driver_inputs = {
values_string = jsonencode({
"script" = <<END_OF_TEXT
variable "region" {}
terraform {
backend "s3" {
bucket = "$${resources.config#tf-backend.outputs.bucket}"
key = "$${resources.config#tf-backend.outputs.key_prefix}$${context.app.id}/$${context.env.id}/$${context.res.type}.$${context.res.class}/$${context.res.id}"
region = "$${resources.config#tf-backend.outputs.region}"
shared_credentials_files = ["backend_creds"]
}
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = var.region
# The file is defined above. The provider will read the creds from this file.
shared_credentials_files = ["aws_creds"]
}
output "bucket" {
value = aws_s3_bucket.bucket.bucket
}
output "region" {
value = var.region
}
resource "aws_s3_bucket" "bucket" {
bucket = "$${replace("$${context.res.id}", "/^.*\\./", "")}-standard-$${context.env.id}-$${context.app.id}-$${context.org.id}"
tags = {
Humanitec = true
}
}
END_OF_TEXT
"variables" = {
"region" = "us-east-1"
}
})
secret_refs = jsonencode({
"files" = {
"aws_creds" = {
"value" = "$${resources.config#aws-account.outputs.credentials_file}"
}
"backend_creds" = {
"value" = "$${resources.config#tf-backend.outputs.credentials_file}"
}
}
})
}
}
s3-backend.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: s3-backend-example
entity:
driver_inputs:
# We are using secret references to write the credentials using their "value" element.
# Using "secrets" instead would work too, but due to the placeholders in the values, the
# Platform Orchestrator will resolve them to the exact secret references used here
# in the resulting Resource Definition.
# This structure therefore represents the way the Platform Orchestrator manages the Resource Definition
# and is better suited to any round-trip engineering, if needed.
secret_refs:
files:
# Credentials for the AWS provider
aws_creds:
# Using the resource ID "#aws-account" to fulfill the matching criteria of the "account-config-aws" config resource.
value: ${resources.config#aws-account.outputs.credentials_file}
# In general, the credentials for the backend should be different from those of the provider
backend_creds:
# Using the resource ID "#tf-backend" to fulfill the matching criteria of the "tf-backend-config" config resource.
value: ${resources.config#tf-backend.outputs.credentials_file}
values:
script: |
variable "region" {}
terraform {
backend "s3" {
bucket = "${resources.config#tf-backend.outputs.bucket}"
key = "${resources.config#tf-backend.outputs.key_prefix}${context.app.id}/${context.env.id}/${context.res.type}.${context.res.class}/${context.res.id}"
region = "${resources.config#tf-backend.outputs.region}"
shared_credentials_files = ["backend_creds"]
}
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = var.region
# The file is defined above. The provider will read the creds from this file.
shared_credentials_files = ["aws_creds"]
}
output "bucket" {
value = aws_s3_bucket.bucket.bucket
}
output "region" {
value = var.region
}
resource "aws_s3_bucket" "bucket" {
bucket = "$${replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
tags = {
Humanitec = true
}
}
variables:
region: us-east-1
driver_type: humanitec/terraform
name: s3-backend-example
type: s3
# Supply matching criteria
criteria: []
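The bucket name in the scripts above is built by Terraform's regex-based replace() function, which strips everything up to and including the last dot of the resource ID before appending the context IDs. The same transformation expressed in Python, using made-up example values for the context placeholders:

```python
import re

# Made-up example values standing in for the Humanitec context placeholders.
res_id = "modules.my-workload.externals.my-bucket"
env_id, app_id, org_id = "development", "my-app", "my-org"

# Equivalent of Terraform's replace(res_id, "/^.*\\./", ""):
# the greedy regex removes everything up to and including the last dot.
short_name = re.sub(r"^.*\.", "", res_id)

bucket = f"{short_name}-standard-{env_id}-{app_id}-{org_id}"
print(bucket)  # my-bucket-standard-development-my-app-my-org
```

Note that Terraform's replace() only treats its second argument as a regular expression when it is wrapped in forward slashes, as in the definitions above.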
Co provision
Resource co-provisioning
This section contains an example of Resource Definitions using the Terraform Driver that illustrates the co-provisioning concept.
Scenario: For each AWS S3 bucket resource, an AWS IAM policy resource must be created. The bucket properties (region, ARN) should be passed to the policy resource. In other words, an IAM Policy resource depends on an S3 resource, but it needs to be created automatically.
Any time a Workload references an S3 resource using this Resource Definition, an IAM Policy resource will be co-provisioned and reference the S3 resource. The resulting Resource Graph will look like this:
flowchart LR
R1(Workload) --->|references| R2(S3)
N1(AWS Policy) --->|references| R2
classDef pClass stroke-width:1px
classDef rClass stroke-width:2px
classDef nClass stroke-width:2px,stroke-dasharray: 5 5
class R1 pClass
class R2 rClass
class N1 nClass
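The wiring behind this graph comes down to two pieces, condensed here from the full definitions below: a provision key on the s3 Resource Definition, and resource references from the aws-policy definition back to the s3 resource that triggered it.

```yaml
# Sketch only; see the complete definitions below.
# In the s3 Resource Definition: co-provision an aws-policy resource.
provision:
  aws-policy:
    is_dependent: false   # the Workload itself does not depend on the policy

# In the aws-policy Resource Definition: consume outputs of the s3 resource.
driver_inputs:
  values:
    variables:
      BUCKET_ARN: ${resources.s3.outputs.arn}
```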
aws-policy-co-provision.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "aws-policy-co-provision" {
driver_type = "humanitec/terraform"
id = "aws-policy-co-provision"
name = "aws-policy-co-provision"
type = "aws-policy"
driver_account = "aws"
driver_inputs = {
values_string = jsonencode({
"variables" = {
"REGION" = "$${resources.s3.outputs.region}"
"BUCKET" = "$${resources.s3.outputs.bucket}"
"BUCKET_ARN" = "$${resources.s3.outputs.arn}"
}
"credentials_config" = {
"variables" = {
"ACCESS_KEY_ID" = "AccessKeyId"
"ACCESS_KEY_VALUE" = "SecretAccessKey"
}
}
"script" = <<END_OF_TEXT
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "aws" {
region = var.REGION
access_key = var.ACCESS_KEY_ID
secret_key = var.ACCESS_KEY_VALUE
}
# ... Terraform code reduced for brevity
resource "aws_iam_policy" "bucket" {
name = "$${var.BUCKET}-policy"
policy = data.aws_iam_policy_document.main.json
}
data "aws_iam_policy_document" "main" {
statement {
effect = "Allow"
actions = [
"s3:GetObject",
"s3:ListBucket",
]
resources = [
var.BUCKET_ARN,
]
}
}
END_OF_TEXT
})
}
}
aws-policy-co-provision.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aws-policy-co-provision
entity:
name: aws-policy-co-provision
type: aws-policy
driver_type: humanitec/terraform
# Use the credentials injected via the driver_account to set variables as expected by your Terraform code
driver_account: aws
driver_inputs:
values:
variables:
REGION: ${resources.s3.outputs.region}
BUCKET: ${resources.s3.outputs.bucket}
BUCKET_ARN: ${resources.s3.outputs.arn}
credentials_config:
variables:
ACCESS_KEY_ID: AccessKeyId
ACCESS_KEY_VALUE: SecretAccessKey
script: |
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "aws" {
region = var.REGION
access_key = var.ACCESS_KEY_ID
secret_key = var.ACCESS_KEY_VALUE
}
# ... Terraform code reduced for brevity
resource "aws_iam_policy" "bucket" {
name = "$${var.BUCKET}-policy"
policy = data.aws_iam_policy_document.main.json
}
data "aws_iam_policy_document" "main" {
statement {
effect = "Allow"
actions = [
"s3:GetObject",
"s3:ListBucket",
]
resources = [
var.BUCKET_ARN,
]
}
}
s3-co-provision.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "s3-co-provision" {
driver_type = "humanitec/terraform"
id = "s3-co-provision"
name = "s3-co-provision"
type = "s3"
driver_account = "aws"
driver_inputs = {
values_string = jsonencode({
"variables" = {
"REGION" = "eu-central-1"
}
"credentials_config" = {
"variables" = {
"ACCESS_KEY_ID" = "AccessKeyId"
"ACCESS_KEY_VALUE" = "SecretAccessKey"
}
}
"script" = <<END_OF_TEXT
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "aws" {
region = var.REGION
access_key = var.ACCESS_KEY_ID
secret_key = var.ACCESS_KEY_VALUE
}
# ... Terraform code reduced for brevity
resource "aws_s3_bucket" "bucket" {
bucket = "my-bucket"
}
output "bucket" {
value = aws_s3_bucket.bucket.id
}
output "arn" {
value = aws_s3_bucket.bucket.arn
}
output "region" {
value = aws_s3_bucket.bucket.region
}
END_OF_TEXT
})
}
provision = {
"aws-policy" = {
is_dependent = false
}
}
}
s3-co-provision.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: s3-co-provision
entity:
name: s3-co-provision
type: s3
driver_type: humanitec/terraform
# Use the credentials injected via the driver_account to set variables as expected by your Terraform code
driver_account: aws
driver_inputs:
values:
variables:
REGION: eu-central-1
credentials_config:
variables:
ACCESS_KEY_ID: AccessKeyId
ACCESS_KEY_VALUE: SecretAccessKey
script: |
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "aws" {
region = var.REGION
access_key = var.ACCESS_KEY_ID
secret_key = var.ACCESS_KEY_VALUE
}
# ... Terraform code reduced for brevity
resource "aws_s3_bucket" "bucket" {
bucket = "my-bucket"
}
output "bucket" {
value = aws_s3_bucket.bucket.id
}
output "arn" {
value = aws_s3_bucket.bucket.arn
}
output "region" {
value = aws_s3_bucket.bucket.region
}
# Co-provision aws-policy resource
provision:
aws-policy:
is_dependent: false
Credentials
Credentials
General credentials configuration
Different Terraform providers have different ways of being configured. Generally, there are three ways that providers can be configured:
- Directly using parameters on the provider. We call this “provider” credentials.
- Using a credentials file. The filename is supplied to the provider. We call this “file” credentials.
- Via environment variables that the provider reads. We call this “environment” credentials.
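These three styles map onto the credentials_config object of the Terraform Driver inputs. A condensed sketch of the three shapes, using the key names that appear in the examples below (a real Resource Definition uses exactly one of them):

```yaml
# "provider" credentials: injected as Terraform variables
credentials_config:
  variables:
    access_key_id: AccessKeyId
    secret_access_key: SecretAccessKey

# "file" credentials: written to a file with the given name
credentials_config:
  file: credentials.json

# "environment" credentials: exported as environment variables
credentials_config:
  environment:
    AWS_ACCESS_KEY_ID: AccessKeyId
    AWS_SECRET_ACCESS_KEY: SecretAccessKey
```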
A powerful approach for working with different cloud accounts for the same Resource Definition is to reference the credentials from a config
resource. By using matching criteria on the config
resource, it is possible to specialize the account used in the Terraform code to different contexts. For example, there might be different AWS accounts for test
and production
environments. The same Resource Definition can be used to manage the Terraform code, and two config
resources can be created matching the test
and production
environments respectively.
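The specialization then lives entirely in the matching criteria of the two config Resource Definitions. An illustrative sketch (the env_id values are made-up examples):

```yaml
# config Definition backed by the test account
criteria:
- res_id: aws-account
  env_id: test

# a second config Definition, backed by the production account
criteria:
- res_id: aws-account
  env_id: production
```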
In this set of examples, we provide two config
Resource Definitions for AWS and GCP.
AWS
Account config (
account-config-aws.yaml)
Provider Credentials (
aws-provider-credentials.yaml)
Environment Credentials (
aws-environment-credentials.yaml)
GCP
Account config (
account-config-gcp.yaml)
File Credentials (
gcp-file-credentials.yaml)
Temporary credentials
Using a Cloud Account type that supports temporary credentials, those credentials can be easily injected into a Resource Definition using the Terraform Driver. Use a driver_account
referencing the Cloud Account in the Resource Definition, and access its credentials through the supplied values as shown in the examples.
AWS
S3 bucket (
s3-temporary-credentials.yaml)
GCP
Cloud Storage bucket (
gcs-temporary-credentials.yaml)
Azure
Blob Storage container (
azure-blob-storage-temporary-credentials.yaml)
account-config-aws.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "account-config-aws" {
driver_type = "humanitec/echo"
id = "account-config-aws"
name = "account-config-aws"
type = "config"
driver_account = "aws-credentials"
driver_inputs = {
values_string = jsonencode({
"region" = "us-east-1"
})
}
}
resource "humanitec_resource_definition_criteria" "account-config-aws_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.account-config-aws.id
res_id = "aws-account"
}
account-config-aws.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: account-config-aws
entity:
criteria:
# This res_id is used in the resource reference in the s3-backend Resource Definition.
- res_id: aws-account
# The driver_account references a Cloud Account configured in the Platform Orchestrator.
# Replace with the name of your AWS Cloud Account.
driver_account: aws-credentials
driver_inputs:
values:
region: us-east-1
driver_type: humanitec/echo
name: account-config-aws
type: config
account-config-gcp.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "account-config-gcp" {
driver_type = "humanitec/echo"
id = "account-config-gcp"
name = "account-config-gcp"
type = "config"
driver_account = "gcp-credentials"
driver_inputs = {
values_string = jsonencode({
"location" = "US"
"project_id" = "my-gcp-project"
})
}
}
resource "humanitec_resource_definition_criteria" "account-config-gcp_criteria_0" {
resource_definition_id = resource.humanitec_resource_definition.account-config-gcp.id
res_id = "gcp-account"
}
account-config-gcp.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: account-config-gcp
entity:
criteria:
# This res_id is used in the resource reference in the gcp-file-credentials Resource Definition.
- res_id: gcp-account
# The driver_account references a Cloud Account configured in the Platform Orchestrator.
# Replace with the name of your GCP Cloud Account.
driver_account: gcp-credentials
driver_inputs:
values:
location: US
project_id: my-gcp-project
driver_type: humanitec/echo
name: account-config-gcp
type: config
aws-environment-credentials.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "aws-environment-credentials" {
driver_type = "humanitec/terraform"
id = "aws-environment-credentials"
name = "aws-environment-credentials"
type = "s3"
driver_account = "$${resources['config.default#aws-account'].account}"
driver_inputs = {
values_string = jsonencode({
"credentials_config" = {
"environment" = {
"AWS_ACCESS_KEY_ID" = "AccessKeyId"
"AWS_SECRET_ACCESS_KEY" = "SecretAccessKey"
"AWS_SESSION_TOKEN" = "SessionToken"
}
}
"script" = <<END_OF_TEXT
variable "region" {}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = var.region
}
output "bucket" {
value = aws_s3_bucket.bucket.bucket
}
output "region" {
value = var.region
}
resource "aws_s3_bucket" "bucket" {
bucket = "$${replace("$${context.res.id}", "/^.*\\./", "")}-standard-$${context.env.id}-$${context.app.id}-$${context.org.id}"
tags = {
Humanitec = true
}
}
END_OF_TEXT
"variables" = {
"region" = "$${resources['config.default#aws-account'].outputs.region}"
}
})
}
}
aws-environment-credentials.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aws-environment-credentials
entity:
# Use the account provided by the config resource
driver_account: ${resources['config.default#aws-account'].account}
driver_inputs:
values:
credentials_config:
environment:
AWS_ACCESS_KEY_ID: "AccessKeyId"
AWS_SECRET_ACCESS_KEY: "SecretAccessKey"
AWS_SESSION_TOKEN: "SessionToken"
script: |
variable "region" {}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = var.region
}
output "bucket" {
value = aws_s3_bucket.bucket.bucket
}
output "region" {
value = var.region
}
resource "aws_s3_bucket" "bucket" {
bucket = "$${replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
tags = {
Humanitec = true
}
}
variables:
region: ${resources['config.default#aws-account'].outputs.region}
driver_type: humanitec/terraform
name: aws-environment-credentials
type: s3
# Supply matching criteria
criteria: []
aws-provider-credentials.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "aws-provider-credentials" {
driver_type = "humanitec/terraform"
id = "aws-provider-credentials"
name = "aws-provider-credentials"
type = "s3"
driver_account = "$${resources['config.default#aws-account'].account}"
driver_inputs = {
values_string = jsonencode({
"credentials_config" = {
"variables" = {
"access_key_id" = "AccessKeyId"
"secret_access_key" = "SecretAccessKey"
"session_token" = "SessionToken"
}
}
"script" = <<END_OF_TEXT
variable "access_key_id" {
sensitive = true
}
variable "secret_access_key" {
sensitive = true
}
variable "session_token" {
sensitive = true
}
variable "region" {}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = var.region
access_key = var.access_key_id
secret_key = var.secret_access_key
token = var.session_token
}
output "bucket" {
value = aws_s3_bucket.bucket.bucket
}
output "region" {
value = var.region
}
resource "aws_s3_bucket" "bucket" {
bucket = "$${replace("$${context.res.id}", "/^.*\\./", "")}-standard-$${context.env.id}-$${context.app.id}-$${context.org.id}"
tags = {
Humanitec = true
}
}
END_OF_TEXT
"variables" = {
"region" = "$${resources['config.default#aws-account'].outputs.region}"
}
})
}
}
aws-provider-credentials.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: aws-provider-credentials
entity:
# Use the account provided by the config resource
driver_account: ${resources['config.default#aws-account'].account}
driver_inputs:
values:
credentials_config:
variables:
access_key_id: "AccessKeyId"
secret_access_key: "SecretAccessKey"
session_token: "SessionToken"
script: |
variable "access_key_id" {
sensitive = true
}
variable "secret_access_key" {
sensitive = true
}
variable "session_token" {
sensitive = true
}
variable "region" {}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = var.region
access_key = var.access_key_id
secret_key = var.secret_access_key
token = var.session_token
}
output "bucket" {
value = aws_s3_bucket.bucket.bucket
}
output "region" {
value = var.region
}
resource "aws_s3_bucket" "bucket" {
bucket = "$${replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
tags = {
Humanitec = true
}
}
variables:
region: ${resources['config.default#aws-account'].outputs.region}
driver_type: humanitec/terraform
name: aws-provider-credentials
type: s3
# Supply matching criteria
criteria: []
azure-blob-storage-temporary-credentials.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "blob-storage-temporary-credentials" {
driver_type = "humanitec/terraform"
id = "blob-storage-temporary-credentials"
name = "blob-storage-temporary-credentials"
type = "azure-blob"
driver_account = "azure-temporary-creds"
driver_inputs = {
values_string = jsonencode({
"variables" = {
"location" = "eastus"
"resource_group_name" = "my-test-resources"
"tenant_id" = "3987ae5f-008f-4265-a6ee-e9dcedce4742"
"subscription_id" = "742f6d8b-1b7b-4c6a-9f37-90bdd5aeb996"
"client_id" = "c977c44d-3003-464c-b163-03920d4a390b"
}
"credentials_config" = {
"variables" = {
"oidc_token" = "oidc_token"
}
}
"script" = <<END_OF_TEXT
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "azurerm" {
features {}
subscription_id = var.subscription_id
tenant_id = var.tenant_id
client_id = var.client_id
use_oidc = true
oidc_token = var.oidc_token
}
# ... Terraform code reduced for brevity
resource "azurerm_storage_account" "example" {
name = "mystorageaccount"
resource_group_name = var.resource_group_name
location = var.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "example" {
name = "mystorage"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
END_OF_TEXT
})
}
}
azure-blob-storage-temporary-credentials.yaml
(view on GitHub)
:
# Create Azure Blob Storage container using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: blob-storage-temporary-credentials
entity:
name: blob-storage-temporary-credentials
type: azure-blob
driver_type: humanitec/terraform
# The driver_account references a Cloud Account of type "azure-identity"
# which needs to be configured for your Organization.
driver_account: azure-temporary-creds
driver_inputs:
values:
variables:
location: eastus
resource_group_name: my-test-resources
tenant_id: 3987ae5f-008f-4265-a6ee-e9dcedce4742
subscription_id: 742f6d8b-1b7b-4c6a-9f37-90bdd5aeb996
# Managed Identity Client ID used in the Cloud Account
client_id: c977c44d-3003-464c-b163-03920d4a390b
# Use the credentials injected via the driver_account
# to set `oidc_token` variable as expected by your Terraform code
credentials_config:
variables:
oidc_token: oidc_token
script: |
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "azurerm" {
features {}
subscription_id = var.subscription_id
tenant_id = var.tenant_id
client_id = var.client_id
use_oidc = true
oidc_token = var.oidc_token
}
# ... Terraform code reduced for brevity
resource "azurerm_storage_account" "example" {
name = "mystorageaccount"
resource_group_name = var.resource_group_name
location = var.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "example" {
name = "mystorage"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
gcp-file-credentials.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "gcp-file-credentials" {
driver_type = "humanitec/terraform"
id = "gcp-file-credentials"
name = "gcp-file-credentials"
type = "gcs"
driver_account = "$${resources['config.default#gcp-account'].account}"
driver_inputs = {
values_string = jsonencode({
"credentials_config" = {
"file" = "credentials.json"
}
"script" = <<END_OF_TEXT
variable "project_id" {}
variable "location" {}
terraform {
required_providers {
google = {
source = "hashicorp/google"
}
}
}
provider "google" {
project = var.project_id
# The file is defined above. The provider will read a service account token from this file.
credentials = "credentials.json"
}
output "name" {
value = google_storage_bucket.bucket.name
}
resource "google_storage_bucket" "bucket" {
name = "$${replace("$${context.res.id}", "/^.*\\./", "")}-standard-$${context.env.id}-$${context.app.id}-$${context.org.id}"
location = var.location
force_destroy = true
}
END_OF_TEXT
"variables" = {
"location" = "$${resources.config#gcp-account.outputs.location}"
"project_id" = "$${resources.config#gcp-account.outputs.project_id}"
}
})
}
}
gcp-file-credentials.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: gcp-file-credentials
entity:
driver_account: ${resources['config.default#gcp-account'].account}
driver_inputs:
values:
credentials_config:
file: credentials.json
script: |
variable "project_id" {}
variable "location" {}
terraform {
required_providers {
google = {
source = "hashicorp/google"
}
}
}
provider "google" {
project = var.project_id
# The file is defined above. The provider will read a service account token from this file.
credentials = "credentials.json"
}
output "name" {
value = google_storage_bucket.bucket.name
}
resource "google_storage_bucket" "bucket" {
name = "$${replace("${context.res.id}", "/^.*\\./", "")}-standard-${context.env.id}-${context.app.id}-${context.org.id}"
location = var.location
force_destroy = true
}
variables:
location: ${resources.config#gcp-account.outputs.location}
project_id: ${resources.config#gcp-account.outputs.project_id}
driver_type: humanitec/terraform
name: gcp-file-credentials
type: gcs
# Supply matching criteria
criteria: []
gcs-temporary-credentials.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "gcs-temporary-credentials" {
driver_type = "humanitec/terraform"
id = "gcs-temporary-credentials"
name = "gcs-temporary-credentials"
type = "gcs"
driver_account = "gcp-temporary-creds"
driver_inputs = {
values_string = jsonencode({
"variables" = {
"location" = "europe-west3"
"project_id" = "my-gcp-project"
}
"credentials_config" = {
"variables" = {
"access_token" = "access_token"
}
}
"script" = <<END_OF_TEXT
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "google" {
project = var.project_id
access_token = var.access_token
}
# ... Terraform code reduced for brevity
resource "google_storage_bucket" "bucket" {
name = "my-bucket"
location = var.location
}
END_OF_TEXT
})
}
}
gcs-temporary-credentials.yaml
(view on GitHub)
:
# Create Google Cloud Storage bucket using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: gcs-temporary-credentials
entity:
name: gcs-temporary-credentials
type: gcs
driver_type: humanitec/terraform
# The driver_account references a Cloud Account of type "gcp-identity"
# which needs to be configured for your Organization.
driver_account: gcp-temporary-creds
driver_inputs:
values:
variables:
location: europe-west3
project_id: my-gcp-project
# Use the credentials injected via the driver_account
# to set variables as expected by your Terraform code
credentials_config:
variables:
access_token: access_token
script: |
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "google" {
project = var.project_id
access_token = var.access_token
}
# ... Terraform code reduced for brevity
resource "google_storage_bucket" "bucket" {
name = "my-bucket"
location = var.location
}
s3-temporary-credentials.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "s3-temporary-credentials" {
driver_type = "humanitec/terraform"
id = "s3-temporary-credentials"
name = "s3-temporary-credentials"
type = "s3"
driver_account = "aws-temp-creds"
driver_inputs = {
values_string = jsonencode({
"variables" = {
"REGION" = "eu-central-1"
}
"credentials_config" = {
"variables" = {
"ACCESS_KEY_ID" = "AccessKeyId"
"ACCESS_KEY_VALUE" = "SecretAccessKey"
"SESSION_TOKEN" = "SessionToken"
}
}
"script" = <<END_OF_TEXT
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "aws" {
region = var.REGION
access_key = var.ACCESS_KEY_ID
secret_key = var.ACCESS_KEY_VALUE
token = var.SESSION_TOKEN
}
# ... Terraform code reduced for brevity
resource "aws_s3_bucket" "bucket" {
bucket = "my-bucket"
}
END_OF_TEXT
})
}
}
s3-temporary-credentials.yaml
(view on GitHub)
:
# Create S3 bucket using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: s3-temporary-credentials
entity:
name: s3-temporary-credentials
type: s3
driver_type: humanitec/terraform
# The driver_account references a Cloud Account of type "aws-role"
# which needs to be configured for your Organization.
driver_account: aws-temp-creds
driver_inputs:
values:
variables:
REGION: eu-central-1
# Use the credentials injected via the driver_account
# to set variables as expected by your Terraform code
credentials_config:
variables:
ACCESS_KEY_ID: AccessKeyId
ACCESS_KEY_VALUE: SecretAccessKey
SESSION_TOKEN: SessionToken
script: |
# This provider block is using the Terraform variables
# set through the credentials_config.
# Variable declarations omitted for brevity.
provider "aws" {
region = var.REGION
access_key = var.ACCESS_KEY_ID
secret_key = var.ACCESS_KEY_VALUE
token = var.SESSION_TOKEN
}
# ... Terraform code reduced for brevity
resource "aws_s3_bucket" "bucket" {
bucket = "my-bucket"
}
Custom git config
Custom git-config for sourcing Terraform modules
This section contains an example of providing a custom git-config to be used by Terraform when accessing modules sources from private git repositories.
Terraform can use modules from various sources, including git. The documentation states: “Terraform installs modules from Git repositories by running git clone, and so it will respect any local Git configuration set on your system, including credentials. To access a non-public Git repository, configure Git with suitable credentials for that repository.”
Custom git configuration can be provided by including a file with name .gitconfig
in the files
input. This file can be either a value or a secret depending on whether it contains sensitive credentials or not.
In this example we add a git-config
that re-writes URLs.
example-def.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "example-git-config" {
driver_type = "humanitec/terraform"
id = "example-git-config"
name = "example-git-config"
type = "s3"
driver_inputs = {
values_string = jsonencode({
"files" = {
".gitconfig" = <<END_OF_TEXT
[url "https://github.com/Invicton-Labs/"]
insteadOf = https://example.com/replace-with-git-config/
END_OF_TEXT
}
"script" = <<END_OF_TEXT
module "uuid" {
# We rely on the git-config above to rewrite this URL into one that will work
source = "git::https://example.com/replace-with-git-config/terraform-random-uuid.git?ref=v0.2.0"
}
output "bucket" {
value = module.uuid.uuid
}
END_OF_TEXT
})
}
}
example-def.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
metadata:
id: example-git-config
entity:
criteria: []
driver_inputs:
values:
files:
.gitconfig: |
[url "https://github.com/Invicton-Labs/"]
insteadOf = https://example.com/replace-with-git-config/
script: |
module "uuid" {
# We rely on the git-config above to rewrite this URL into one that will work
source = "git::https://example.com/replace-with-git-config/terraform-random-uuid.git?ref=v0.2.0"
}
output "bucket" {
value = module.uuid.uuid
}
driver_type: humanitec/terraform
name: example-git-config
type: s3
kind: Definition
Private git repo
The Terraform Driver can access Terraform definitions stored in a Git repository. In the case that this repository requires authentication, you must supply credentials to the Driver. The examples in this section show how to provide those as part of the secrets in the Resource Definition based on the Terraform Driver.
ssh-secret-refs.tf
: uses secret references to obtain an SSH key from a secret store to connect to the Git repo providing the Terraform code.
https-secret-refs.tf
: uses secret references to obtain an HTTPS password from a secret store to connect to the Git repo providing the Terraform code.
https-secret-refs.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "example-resource" {
driver_type = "humanitec/terraform"
id = "example-resource"
name = "example-resource"
type = "some-resource-type"
driver_inputs = {
# This example uses secret references, pointing at a secret store
# to obtain the actual values
secret_refs = jsonencode({
source = {
# Using the password for a connection to the Git repo via HTTPS
password = {
ref = var.password_secret_reference_key
store = var.secret_store
}
}
variables = {
# ...
}
})
values_string = jsonencode({
# Connection information to the target Git repo
source = {
path = "some-resource-type/terraform"
rev = "refs/heads/main"
url = "https://my-domain.com/my-org/my-repo.git"
}
# ...
})
}
}
ssh-secret-refs.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "example-resource" {
driver_type = "humanitec/terraform"
id = "example-resource"
name = "example-resource"
type = "some-resource-type"
driver_inputs = {
# This example uses secret references, pointing at a secret store
# to obtain the actual values
secret_refs = jsonencode({
source = {
# Using the ssh_key for a connection to the Git repo via SSH
ssh_key = {
ref = var.ssh_key_secret_reference_key
store = var.secret_store
}
}
variables = {
# ...
}
})
values_string = jsonencode({
# Connection information to the target Git repo
source = {
path = "some-resource-type/terraform"
rev = "refs/heads/main"
url = "git@my-domain.com:my-org/my-repo.git"
}
# ...
})
}
}
Runner
The Terraform Driver can be configured to execute the Terraform scripts as part of a Kubernetes Job in a target Kubernetes cluster, instead of in the Humanitec infrastructure. In this case, you must supply the cluster access data to the Humanitec Platform Orchestrator.
The examples in this section show how to provide this data by referencing a k8s-cluster
Resource Definition as part of the non-secret and secret fields of the runner
object in the s3
Resource Definition based on the Terraform Driver.
k8s-cluster-refs.tf
: provides a connection to an EKS cluster.
s3-ext-runner-refs.tf
: uses runner configuration to run the Terraform Runner in the external cluster specified by k8s-cluster-refs.tf
and provision an S3 bucket. It configures the Runner to run Terraform scripts from a private Git repository which initializes a Terraform s3 backend via Environment Variables.
k8s-cluster-refs.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "eks_resource_cluster" {
id = "eks-cluster"
name = "eks-cluster"
type = "k8s-cluster"
driver_type = "humanitec/k8s-cluster-eks"
driver_inputs = {
secrets_string = jsonencode({
credentials = {
aws_access_key_id = var.aws_access_key_id
aws_secret_access_key = var.aws_secret_access_key
}
})
values_string = jsonencode({
name = "my-cluster"
region = "eu-central-1"
loadbalancer = "x111111xxx111111111x1xx1x111111x-x111x1x11xx111x1.elb.eu-central-1.amazonaws.com"
loadbalancer_hosted_zone = "ABC0DEF5WYYZ00"
})
}
}
resource "humanitec_resource_definition_criteria" "eks_resource_cluster" {
resource_definition_id = humanitec_resource_definition.eks_resource_cluster.id
class = "runner"
}
s3-ext-runner-refs.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "aws_terraform_external_runner_resource_s3_bucket" {
id = "aws-terrafom-ext-runner-s3-bucket"
name = "aws-terrafom-ext-runner-s3-bucket"
type = "s3"
driver_type = "humanitec/terraform"
# The driver_account references a Cloud Account configured in the Platform Orchestrator.
# Replace with the name of your AWS Cloud Account.
# The account is used to provide credentials to the Terraform script via environment variables to access the TF state.
driver_account = "my-aws-account"
driver_inputs = {
secrets_string = jsonencode({
# Secret info of the cluster where the Terraform Runner should run.
# This references a k8s-cluster resource that will be matched by class `runner`.
runner = {
credentials = "$${resources['k8s-cluster.runner'].outputs.credentials}"
}
source = {
ssh_key = var.ssh_key
}
})
values_string = jsonencode({
# This instructs the driver that the Runner must run in an external cluster.
runner_mode = "custom-kubernetes"
# Non-secret info of the cluster where the Terraform Runner should run.
# This references a k8s-cluster resource that will be matched by class `runner`.
runner = {
cluster_type = "eks"
cluster = {
region = "$${resources['k8s-cluster.runner'].outputs.region}"
name = "$${resources['k8s-cluster.runner'].outputs.name}"
loadbalancer = "$${resources['k8s-cluster.runner'].outputs.loadbalancer}"
loadbalancer_hosted_zone = "$${resources['k8s-cluster.runner'].outputs.loadbalancer_hosted_zone}"
}
# Service Account created following: https://developer.humanitec.com/integration-and-extensions/drivers/generic-drivers/terraform/#runner-object
service_account = "humanitec-tf-runner-sa"
namespace = "humanitec-tf-runner"
}
# Configure the way we provide account credentials to the Terraform scripts in the referenced repository.
# These credentials are related to the `driver_account` configured above.
credentials_config = {
# Terraform script Variables.
variables = {
ACCESS_KEY_ID = "AccessKeyId"
SECRET_ACCESS_KEY = "SecretAccessKey"
}
# Environment Variables.
environment = {
AWS_ACCESS_KEY_ID = "AccessKeyId"
AWS_SECRET_ACCESS_KEY = "SecretAccessKey"
}
}
# Connection information to the Git repo containing the Terraform code.
# It will provide a backend configuration initialized via Environment Variables.
source = {
path = "s3/terraform/bucket/"
rev = "refs/heads/main"
url = "[email protected]:my-org/my-repo.git"
}
variables = {
# Provide a separate bucket per Application and Environment
bucket = "my-company-my-app-$${context.app.id}-$${context.env.id}"
region = var.region
}
})
}
}
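The source comments above state that the referenced repository initializes a Terraform s3 backend via Environment Variables. As a sketch of what such a script could contain (the repository contents are not part of this example, so this is an assumption), a partial backend configuration works well here:

```hcl
terraform {
  # Partial backend configuration (sketch): bucket, key, and region can
  # be supplied at `terraform init` time, while credentials are read
  # from the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment
  # variables the Runner injects via `credentials_config.environment`.
  backend "s3" {}
}
```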
Runner pod configuration
The Terraform Driver can be configured to execute the Terraform scripts as a Kubernetes Job in a target Kubernetes cluster instead of in the Humanitec infrastructure. In this case, you must supply the cluster access data to the Humanitec Platform Orchestrator.
The examples in this section show how to provide this data by referencing a k8s-cluster
Resource Definition as part of the non-secret and secret fields of the runner
object in the azure-blob-account
Resource Definition based on the Terraform Driver.
They also show how to apply labels to the Runner pod and enable it to run with an Azure Workload Identity, removing the need to explicitly set Azure credentials in the Resource Definition or use a Driver Account.
k8s-cluster-refs.tf
: provides a connection to an AKS cluster.
azure-blob-account.tf
: uses runner configuration to run the Terraform Runner in the external cluster specified by k8s-cluster-refs.tf
and provision an Azure Blob Storage account. It configures the Runner to run Terraform scripts from a private Git repository which initializes a Terraform azurerm backend. Neither a Driver Account nor secret credentials are used here, because the Runner pod is configured to run with a workload identity associated with the specified service account via the runner.runner_pod_template
property.
azure-blob-account.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "azure_blob_account" {
driver_type = "humanitec/terraform"
id = "azure-blob-account-basic"
name = "azure-blob-account-basic"
type = "azure-blob-account"
driver_inputs = {
secrets_string = jsonencode({
# Secret info of the cluster where the Terraform Runner should run.
# This references a k8s-cluster resource that will be matched by class `runner`.
runner = {
credentials = "$${resources['k8s-cluster.runner'].outputs.credentials}"
}
source = {
ssh_key = var.ssh_key
}
})
values_string = jsonencode({
append_logs_to_error = true
# This instructs the driver that the Runner must be run in an external cluster.
runner_mode = "custom-kubernetes"
# Non-secret info of the cluster where the Terraform Runner should run.
# This references a k8s-cluster resource that will be matched by class `runner`.
runner = {
cluster_type = "aks"
cluster = {
region = "$${resources['k8s-cluster.runner'].outputs.region}"
name = "$${resources['k8s-cluster.runner'].outputs.name}"
loadbalancer = "$${resources['k8s-cluster.runner'].outputs.loadbalancer}"
loadbalancer_hosted_zone = "$${resources['k8s-cluster.runner'].outputs.loadbalancer_hosted_zone}"
}
# Service Account created following: https://developer.humanitec.com/integration-and-extensions/drivers/generic-drivers/terraform/#runner-object
# In this example, the Service Account needs to be annotated to specify the Microsoft Entra application client ID to be used with the pod: https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet#service-account-labels-and-annotations
service_account = "humanitec-tf-runner-sa"
namespace = "humanitec-tf-runner"
# This instructs the driver that the Runner pod must run with a workload identity.
runner_pod_template = <<EOT
metadata:
labels:
azure.workload.identity/use: "true"
EOT
}
# Connection information to the Git repo containing the Terraform code.
# It provides a Terraform azurerm backend configuration.
source = {
path = "modules/azure-blob-account/basic"
rev = var.resource_packs_azure_rev
url = var.resource_packs_azure_url
}
variables = {
res_id = "$${context.res.id}"
app_id = "$${context.app.id}"
env_id = "$${context.env.id}"
subscription_id = var.subscription_id
resource_group_name = var.resource_group_name
name = var.name
prefix = var.prefix
account_tier = var.account_tier
account_replication_type = var.account_replication_type
}
})
}
}
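The comments in the Definition above note that the humanitec-tf-runner-sa Service Account must be annotated with the Microsoft Entra application client ID for workload identity to work. A sketch of such a Service Account manifest (the client ID value is a placeholder; replace it with your Entra application's client ID):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: humanitec-tf-runner-sa
  namespace: humanitec-tf-runner
  annotations:
    # Placeholder: set this to the client ID of your Microsoft Entra application
    azure.workload.identity/client-id: "00000000-0000-0000-0000-000000000000"
```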
k8s-cluster-refs.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "aks_aad_resource_cluster" {
id = "aad-enabled-cluster"
name = "aad-enabled-cluster"
type = "k8s-cluster"
driver_type = "humanitec/k8s-cluster-aks"
driver_inputs = {
secrets_string = jsonencode({
credentials = {
appId = var.app_id
displayName = var.display_name
password = var.password
tenant = var.tenant
}
})
values_string = jsonencode({
name = "my-cluster"
resource_group = "my-azure-resource-group"
subscription_id = "123456-1234-1234-1234-123456789"
server_app_id = "6dae42f8-4368-4678-94ff-3960e28e3630"
})
}
}
resource "humanitec_resource_definition_criteria" "aks_aad_resource_cluster" {
resource_definition_id = humanitec_resource_definition.aks_aad_resource_cluster.id
class = "runner"
}
S3
Use the Terraform Driver to provision Amazon S3 bucket resources.
public-git-repo.tf
: uses a publicly accessible Git repo to find the Terraform code.private-git-repo.tf
: uses a private Git repo requiring authentication to find the Terraform code.
private-git-repo.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "aws_terraform_resource_s3_bucket" {
id = "aws-terrafom-s3-bucket"
name = "aws-terrafom-s3-bucket"
type = "s3"
driver_type = "humanitec/terraform"
driver_inputs = {
secrets_string = jsonencode({
variables = {
access_key = var.access_key
secret_key = var.secret_key
}
source = {
# Provide either an SSH key (for SSH connection) or password (for HTTPS).
ssh_key = var.ssh_key
password = var.password
}
})
values_string = jsonencode({
# Connection information to the Git repo containing the Terraform code
source = {
path = "s3/terraform/bucket/"
rev = "refs/heads/main"
url = "https://my-domain.com/my-org/my-repo.git"
# url = "[email protected]:my-org/my-repo.git" # For SSH access instead of HTTPS
}
variables = {
# Provide a separate bucket per Application and Environment
bucket = "my-company-my-app-$${context.app.id}-$${context.env.id}"
region = var.region
assume_role_arn = var.assume_role_arn
}
})
}
}
public-git-repo.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "aws_terraform_resource_s3_bucket" {
id = "aws-terrafom-s3-bucket"
name = "aws-terrafom-s3-bucket"
type = "s3"
driver_type = "humanitec/terraform"
driver_inputs = {
secrets_string = jsonencode({
variables = {
access_key = var.access_key
secret_key = var.secret_key
}
})
values_string = jsonencode({
# Connection information to the Git repo containing the Terraform code
# The repo must not require authentication
source = {
path = "s3/terraform/bucket/"
rev = "refs/heads/main"
url = "https://my-domain.com/my-org/my-repo.git"
}
variables = {
# Provide a separate bucket per Application and Environment
bucket = "my-company-my-app-$${context.app.id}-$${context.env.id}"
region = var.region
assume_role_arn = var.assume_role_arn
}
})
}
}
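Neither S3 Resource Definition above includes matching criteria, so they would not be matched to any Deployment as-is. Following the pattern used for other Definitions in this section, a minimal criteria resource could look like this (a sketch; an empty criteria set matches all s3 resource requests, so narrow it as needed):

```hcl
# Sketch: match this Definition for all `s3` resource requests.
# Add fields such as `app_id` or `env_type` to narrow the match.
resource "humanitec_resource_definition_criteria" "aws_terraform_resource_s3_bucket_criteria" {
  resource_definition_id = humanitec_resource_definition.aws_terraform_resource_s3_bucket.id
}
```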
Wildcard DNS
This section contains example Resource Definitions using the Wildcard DNS Driver for managing DNS records for routing and ingress inside the cluster.
dns-template.yaml
: Shows how to use the Wildcard DNS Driver to return the name of an externally managed DNS record. This format is for use with the Humanitec CLI.
dns-template.tf
(view on GitHub)
:
resource "humanitec_resource_definition" "dns-template" {
driver_type = "humanitec/dns-wildcard"
id = "dns-template"
name = "dns-template"
type = "dns"
driver_inputs = {
values_string = jsonencode({
"domain" = "my-test-domain.com"
"template" = "preview-$${context.app.id}-$${context.env.id}"
})
}
}
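The Terraform flavor above omits matching criteria. Mirroring the criteria resources shown for other Definitions in this guide, a minimal sketch could be (an empty criteria set matches all dns resource requests):

```hcl
# Sketch: match this Definition for all `dns` resource requests.
resource "humanitec_resource_definition_criteria" "dns-template_criteria_0" {
  resource_definition_id = humanitec_resource_definition.dns-template.id
}
```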
dns-template.yaml
(view on GitHub)
:
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
id: dns-template
entity:
name: dns-template
type: dns
driver_type: humanitec/dns-wildcard
driver_inputs:
values:
domain: "my-test-domain.com"
template: "preview-${context.app.id}-${context.env.id}"