Use existing Terraform modules
Introduction
Many Platform Teams adopting the Humanitec Platform Orchestrator already have a comprehensive set of custom Terraform modules. These modules incorporate the security policies, conventions, and best practices of the Platform Team’s organization.
It is common for these modules to be centrally managed, so they cannot easily be updated for use with the Platform Orchestrator. Even where updates are possible, avoiding changes to the Terraform modules is often advantageous because it encourages re-use.
This tutorial walks through how to use an existing Terraform module to provision a resource using the Platform Orchestrator. Specifically:
- Referencing existing Terraform modules
- Mapping outputs from the module to resource type outputs
- Creating a pass through module to allow the configuration to be reused across Resource Definitions
Prerequisites
To get started you’ll need:
- A Humanitec Platform Orchestrator organization with the Resource Definitions to connect the infrastructure (e.g. the Kubernetes cluster)
- Administrator role on the organization
- The Humanitec CLI `humctl` installed
- The `jq` tool installed
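To confirm the tooling is available before starting, you can check the installed versions (this assumes your `humctl` release provides a version subcommand):
humctl version
jq --version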
Planning the work
We will need to construct a Resource Definition that does the following:
- Configure providers
- Reference the Terraform Module
- Map the module outputs to Resource Type outputs
We will be using the public module `terraform-aws-modules/s3-bucket/aws` to provision an S3 bucket. The `s3` Resource Type requires a `bucket` and a `region` output to be defined. The public module has 10 outputs. The outputs need to be mapped as follows:
| Terraform Module | s3 Resource Type |
|---|---|
| s3_bucket_id | bucket |
| s3_bucket_region | region |
Creating the Resource Definition
We start off with a basic Resource Definition using the Terraform Driver. For this we assume that we have an AWS Cloud Account of type `aws-role` configured with ID `aws-test-account` (update with an appropriate ID). This cloud account will be used to configure the AWS provider:
step-01.yaml
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: use-existing-tf-module-tutorial
entity:
  driver_account: aws-test-account # Update if necessary
  driver_inputs:
    values:
      # Used to configure the Terraform provider.
      # See: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#environment-variables
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: "AccessKeyId"
          AWS_SECRET_ACCESS_KEY: "SecretAccessKey"
          AWS_SESSION_TOKEN: "SessionToken"
      script: |
        terraform {
          required_providers {
            aws = {
              source = "hashicorp/aws"
            }
          }
        }
        provider "aws" {
          region = var.region
        }
        variable "region" {
          type = string
        }
      variables:
        region: us-east-1
  driver_type: humanitec/terraform
  name: use-existing-tf-module-tutorial
  type: s3
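If you want to try out this intermediate definition right away, you could register it with the CLI and read it back via the Platform Orchestrator API. The commands below assume you saved the file as step-01.yaml and that HUMANITEC_ORG holds your org ID:
humctl apply -f step-01.yaml
humctl api get /orgs/${HUMANITEC_ORG}/resources/defs/use-existing-tf-module-tutorial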
Referencing the module
We use the standard Terraform module block mechanism to reference the module. We will use the following module spec:
module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"
  acl = "private"
  bucket_prefix = "use-tf-module-example-"
  control_object_ownership = true
  object_ownership = "ObjectWriter"
  versioning = {
    enabled = true
  }
}
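Depending on your organization’s conventions, you may also want to pin the module to a released version. For registry modules this is done with the version argument; the constraint below is illustrative and not part of the tutorial:
module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 4.0" # Illustrative constraint; pick the version your organization has validated
  # ... remaining arguments as above
}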
We will then need to map the module outputs to the `s3` Resource Type outputs:
output "bucket" {
  value = module.s3_bucket.s3_bucket_id
}
output "region" {
  value = module.s3_bucket.s3_bucket_region
}
Here we simply reference the module outputs directly in the Resource Definition outputs.
These can be combined into the Resource Definition as two additional files:
step-02.yaml
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: use-existing-tf-module-tutorial
entity:
  driver_account: aws-test-account # Update if necessary
  driver_inputs:
    values:
      # Used to configure the Terraform provider.
      # See: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#environment-variables
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: "AccessKeyId"
          AWS_SECRET_ACCESS_KEY: "SecretAccessKey"
          AWS_SESSION_TOKEN: "SessionToken"
      script: |
        terraform {
          required_providers {
            aws = {
              source = "hashicorp/aws"
            }
          }
        }
        provider "aws" {
          region = var.region
        }
        variable "region" {
          type = string
        }
      variables:
        region: us-east-1
      #######################
      # Added for this step #
      #######################
      files:
        module.tf: |
          module "s3_bucket" {
            source = "terraform-aws-modules/s3-bucket/aws"
            acl = "private"
            bucket_prefix = "use-tf-module-example-"
            control_object_ownership = true
            object_ownership = "ObjectWriter"
            versioning = {
              enabled = true
            }
          }
        outputs.tf: |
          output "bucket" {
            value = module.s3_bucket.s3_bucket_id
          }
          output "region" {
            value = module.s3_bucket.s3_bucket_region
          }
  driver_type: humanitec/terraform
  name: use-existing-tf-module-tutorial
  type: s3
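If the module inputs need to vary with the deployment context, the driver values can use Platform Orchestrator placeholders. The fragment below is a hedged sketch, not part of the tutorial: the `bucket_prefix` variable is a hypothetical addition, and it assumes placeholder resolution (e.g. `${context.app.id}`) is available in your Resource Definition driver inputs:
      variables:
        region: us-east-1
        # Hypothetical extra variable resolved per Application
        bucket_prefix: use-tf-module-${context.app.id}-
      files:
        module.tf: |
          variable "bucket_prefix" {
            type = string
          }
          module "s3_bucket" {
            source        = "terraform-aws-modules/s3-bucket/aws"
            bucket_prefix = var.bucket_prefix
            # ... remaining arguments as in step-02
          }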
Optional: Dealing with private modules
If the module is only available from a private registry or a private Git repository, additional authentication needs to be provided.
Private Terraform repositories
The Terraform Driver supports specifying access tokens per domain. This works by writing out the domain name with every `.` replaced with `_` and all non-ASCII characters converted to their punycode equivalent. See Terraform CLI: Environment Variable Credentials.
For example, if the private Terraform module is in Hashicorp Terraform Cloud, the following snippet could be added to the Resource Definition, using a secret reference to read credentials from a secret store:
...
entity:
  ...
  driver_inputs:
    ...
    secrets:
      terraform:
        tokens:
          app_terraform_io:
            store: my-secret-store
            ref: my-personal-access-token-for-terraform-cloud
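With the token in place, the module block can reference the private registry using the registry-style source address. The organization and module names below are illustrative, not taken from the tutorial:
module "s3_bucket" {
  # Hypothetical private module hosted in Terraform Cloud
  source  = "app.terraform.io/my-org/s3-bucket/aws"
  version = "1.0.0"
  # ... remaining arguments as above
}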
Private git repositories
Terraform can use modules from various sources, including Git. The documentation states: “Terraform installs modules from Git repositories by running git clone, and so it will respect any local Git configuration set on your system, including credentials. To access a non-public Git repository, configure Git with suitable credentials for that repository.”
Custom Git configuration can be provided by including a file named `.gitconfig` in the `files` input. This file can be either a value or a secret, depending on whether it contains sensitive credentials or not.
For example, if the private Terraform module is in a private GitHub repository, the following snippet could be added to the Resource Definition, using a secret reference to read credentials from a secret store:
...
entity:
  ...
  driver_inputs:
    ...
    secrets:
      files:
        .gitconfig:
          store: my-secret-store
          ref: gitconfig-for-private-modules
And the key `gitconfig-for-private-modules` contains:
[url "https://my-github-user:my-github-token@github.com"]
  insteadOf = "https://github.com"
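With these credentials configured, the module block in module.tf can reference the private repository using Terraform’s generic Git source syntax. The repository URL and tag below are illustrative:
module "s3_bucket" {
  # Hypothetical private module in a private GitHub repository
  source = "git::https://github.com/my-github-org/terraform-s3-bucket.git?ref=v1.0.0"
  # ... remaining arguments as above
}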
Optional: Creating a “passthrough” module
In some cases, a single Terraform module may be reused in many different Resource Definitions. One way this can happen is if a module is used across different Environment Types with different configurations. For example, a Redis in AWS may have different machine sizes for development, staging and production Environments.
If this is the case, it can be useful to package up the mappings discussed in `step-02` into its own Terraform module, which can itself be stored in Git and referenced from the Resource Definition. For our example, the following three Terraform files could be saved in a Git repository:
inputs.tf
variable "region" {
  type = string
}
module.tf
module "s3_bucket" {
  source = "terraform-aws-modules/s3-bucket/aws"
  acl = "private"
  bucket_prefix = "use-tf-module-example-"
  control_object_ownership = true
  object_ownership = "ObjectWriter"
  versioning = {
    enabled = true
  }
}
outputs.tf
output "bucket" {
  value = module.s3_bucket.s3_bucket_id
}
output "region" {
  value = module.s3_bucket.s3_bucket_region
}
Assuming that the module is stored in GitHub, the Resource Definition could look like:
passthrough-def.yaml
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: use-existing-tf-module-tutorial
entity:
  driver_account: aws-test-account # Update if necessary
  driver_inputs:
    values:
      # Used to configure the Terraform provider.
      # See: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#environment-variables
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: "AccessKeyId"
          AWS_SECRET_ACCESS_KEY: "SecretAccessKey"
          AWS_SESSION_TOKEN: "SessionToken"
      script: |
        terraform {
          required_providers {
            aws = {
              source = "hashicorp/aws"
            }
          }
        }
        provider "aws" {
          region = var.region
        }
      source:
        url: https://github.com/my-github-org/my-passthrough-module.git
        rev: refs/tags/v1.0.1
      variables:
        region: us-east-1
  driver_type: humanitec/terraform
  name: use-existing-tf-module-tutorial
  type: s3
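Because the module wiring now lives in the shared Git repository, several Resource Definitions can reference the same source while only varying their variables and matching criteria, which is the reuse scenario described above. The fragment below is an illustrative sketch of a second Definition targeting production Environments; the env_type criteria and alternate region are assumptions, not part of the tutorial:
# Fragment of a second Definition reusing the same passthrough module
entity:
  criteria:
  - env_type: production
  driver_inputs:
    values:
      source:
        url: https://github.com/my-github-org/my-passthrough-module.git
        rev: refs/tags/v1.0.1
      variables:
        region: eu-west-1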
Testing the Resource Definition
Prerequisites
The testing flow is based on AWS and has these additional prerequisites:
- An AWS Cloud Account configured in your Organization
- A Kubernetes cluster connected to the Platform Orchestrator for Workload deployments. If you do not have one yet, these are your options:
Options to connect your Kubernetes cluster
| Five-minute IDP | Bring your own cluster | Reference architecture |
|---|---|---|
| Set up a local demo cluster following the Five-minute IDP. Duration: 5 min. No need to deploy the demo Workload, just perform the setup. Ephemeral (throw-away) setup for demo purposes | Connect an existing cluster by following the Quickstart up to “Connect your cluster” (guided), or using the instructions in Kubernetes (self-guided). Duration: 15-30 min. One-time effort per cluster, can be re-used going forward | Set up the reference architecture. Duration: 15-30 min. Creates a new cluster plus supporting cloud infrastructure, and connects it to the Platform Orchestrator |
Setup
- Log in using `humctl login`.
- Set up your environment by setting the following environment variables. Replace `...` with your org and an app name of your choice.
export HUMANITEC_ORG=...
export HUMANITEC_APP=...
export HUMANITEC_ENV=development
- Register the Resource Definition and match it to the current app, updating the `driver_account` field as required. The `driver_account` needs to have permission to create S3 buckets.
cat << "EOF" > use-existing-tf-module-tutorial-def.yaml
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: use-existing-tf-module-tutorial
entity:
  ###########################
  # Added for the code copy #
  ###########################
  criteria:
  - res_id: modules.use-existing-tf-module-workload.externals.my-s3
  driver_account: aws-test-account # Update if necessary
  driver_inputs:
    values:
      # Used to configure the Terraform provider.
      # See: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#environment-variables
      credentials_config:
        environment:
          AWS_ACCESS_KEY_ID: "AccessKeyId"
          AWS_SECRET_ACCESS_KEY: "SecretAccessKey"
          AWS_SESSION_TOKEN: "SessionToken"
      script: |
        terraform {
          required_providers {
            aws = {
              source = "hashicorp/aws"
            }
          }
        }
        provider "aws" {
          region = var.region
        }
        variable "region" {
          type = string
        }
      variables:
        region: us-east-1
      files:
        module.tf: |
          module "s3_bucket" {
            source = "terraform-aws-modules/s3-bucket/aws"
            acl = "private"
            bucket_prefix = "use-tf-module-example-"
            control_object_ownership = true
            object_ownership = "ObjectWriter"
            versioning = {
              enabled = true
            }
          }
        outputs.tf: |
          output "bucket" {
            value = module.s3_bucket.s3_bucket_id
          }
          output "region" {
            value = module.s3_bucket.s3_bucket_region
          }
  driver_type: humanitec/terraform
  name: use-existing-tf-module-tutorial
  type: s3
EOF
humctl apply -f use-existing-tf-module-tutorial-def.yaml
- Create a new app:
humctl create app "${HUMANITEC_APP}"
- Create matching criteria on the target Kubernetes cluster for your Application. This will make the Platform Orchestrator pick that cluster for the subsequent deployment.
Set the Resource Definition defining your target cluster:
export K8S_RESDEF=...
Create the matching criteria:
export CRITERIA_ID=$(humctl api post /orgs/$HUMANITEC_ORG/resources/defs/$K8S_RESDEF/criteria \
  -d '{ "app_id": "'${HUMANITEC_APP}'" }' | jq --raw-output '.id')
echo $CRITERIA_ID
We capture the `CRITERIA_ID` for cleaning up again later.
- Create a Score file and deploy it:
cat << "EOF" > score.yaml
apiVersion: score.dev/v1b1
metadata:
  name: use-existing-tf-module-workload
containers:
  test:
    image: busybox:latest
    command:
    - /bin/sh
    - "-c"
    args:
    - |
      while true
      do
        echo "Bucket name: $BUCKET_NAME"
        echo "Bucket region: $BUCKET_REGION"
        sleep 60
      done
    variables:
      BUCKET_NAME: ${resources.my-s3.bucket}
      BUCKET_REGION: ${resources.my-s3.region}
resources:
  my-s3:
    type: s3
EOF
humctl score deploy --wait
- Check that the deployed workload’s S3 bucket was generated via the Terraform module:
humctl get active-resources -o json | jq '.[] | select(.metadata.res_id == "modules.use-existing-tf-module-workload.externals.my-s3") | {"bucket": .status.resource.bucket, "region": .status.resource.region}'
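If the resource was provisioned via the module, the command should print output similar to the following; the exact generated bucket suffix will differ:
{
  "bucket": "use-tf-module-example-20240101120000000000000001",
  "region": "us-east-1"
}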
The `bucket` property should start with `use-tf-module-example-` and the `region` should be `us-east-1`.
Teardown
The above example can be cleaned up with the following commands, assuming the environment variables set above have not changed:
humctl delete app "${HUMANITEC_APP}"
humctl delete -f use-existing-tf-module-tutorial-def.yaml
Note: it may take a few minutes before the deletion of the Resource Definition succeeds. The associated active resource has to be taken down first following the app deletion.
humctl api delete /orgs/${HUMANITEC_ORG}/resources/defs/${K8S_RESDEF}/criteria/${CRITERIA_ID}
Recap
Congratulations! You successfully completed the tutorial and saw how to use existing Terraform modules with the Platform Orchestrator. You learned how to:
- Create a Resource Definition referencing your module
- Map the module outputs to the required Resource Type outputs
- Use private modules
- Create a “passthrough” module as a wrapper
- Use the Resource Definition to provision an actual Resource for a deployment via Score