Container
What is the Container Driver?
The Container Driver is a fully customizable Driver offered by the Humanitec Platform Orchestrator, allowing you to provision any Resource Type your Workloads depend on through the execution of a container image of your choice. A common use case is executing your favorite IaC framework by providing an image containing the required tooling.
You can choose to execute, for example, OpenTofu, CloudFormation, Pulumi, or any other IaC tooling you use daily.
The Container Driver expects a container image as one of its inputs. It always runs in your infrastructure as part of the Driver execution, requiring a Kubernetes host cluster.
Why use the Container Driver?
Using IaC and the Container Driver to provision resources has a number of advantages over other Humanitec built-in drivers:
- Declarative: It is declarative, allowing true Infrastructure-as-Code (IaC) with all its associated benefits, like diffs, version control, and familiar developer workflows.
- Adaptive: Because you choose the tooling to execute, you can rely on widely adopted frameworks with a vast range of features, covering all major public clouds as well as many 3rd party offerings.
- Integrate: It provides an easy way to run custom code in Humanitec.
- Control: Since you control the IaC code, you have full control of how it operates. You can test the code independently before handing it to the Driver for execution.
How it works
The Container Driver creates a Kubernetes Job in a target cluster whose coordinates are specified via the Driver inputs .
The container specified in the Driver inputs represents the main container of the Job. This container is expected to execute the IaC code and provision (or destroy) the desired resources. Depending on the provisioned Resource Type, the container should produce sensitive and non-sensitive outputs, as described in the notes.
These outputs must be stored in compliance with the Container Driver Contract to make them available to the Driver.
Once the Kubernetes Job completes its execution, the Container Driver retrieves and passes the Resource outputs to the Orchestrator.
The structure of the Kubernetes Job created by the Driver depends on the inputs it receives:
- The app container, named `runner`, is the one supplied in the Driver inputs. The container image is provided by you.
- A sidecar container named `outputs-fetcher` parses outputs and errors from the `runner` container and makes them available to the Container Driver.
- An optional init container named `pre-checks` performs some pre-checks before running the main container. This container can be disabled via the Driver inputs.
- If the IaC scripts to execute are stored in a Git repository, an additional init container is created. The `git-init` init container takes care of checking out the repository at the desired ref, making the scripts available to the `runner` container.
- All containers exchange data through a common shared directory implemented as an emptyDir volume.
This architecture is described by the diagram below.
%%{ init: { 'flowchart': { 'curve': 'basis' } } }%%
flowchart TB
start(["Start"]) --> StartSidecarContainer>"Start outputs-fetcher<br/>sidecar container"]
StartSidecarContainer -- Start waiting loop --> wait(("Wait for<br/>runner container<br/>to stop or<br/>init containers<br/>to fail"))
StartSidecarContainer -- Start other containers --> ifchecks
subgraph preChecksContainer["Pre checks"]
ifchecks{"Pre-checks<br/>enabled?"}
ifchecks -- Yes --> StartPreChecksInitContainer>"Start pre-checks<br/>init container"]
StartPreChecksInitContainer --> ifchecksok{"Pre-checks<br/>succeeded?"}
end
subgraph gitInit["Git init"]
ifchecksok -->|Yes|ifgit
ifgit -- Yes --> StartGitInitContainer>"Start git-init<br/>init container"]
StartGitInitContainer --> CheckOutRepo["Check out repo"]
CheckOutRepo --> CopyFilesIntoSharedDirectory["Copy files into<br/>the shared directory"]
CopyFilesIntoSharedDirectory --> ifgitok{"git-init<br/>succeeded?"}
end
ifchecksok -.->|No|wait
subgraph runner["Runner"]
ifgitok -->|Yes|StartRunner
ifgit -- No --> StartRunner>"Start Runner container"]
StartRunner --> dostuff["Do stuff"]
dostuff --> SaveOutputs["Save outputs or errors<br/>to file"]
SaveOutputs --> EndRunner>"Runner Container<br/>finished"]
end
ifchecks -- No --> ifgit{"Git Repo<br/>configured?"}
ifgitok -.->|No|wait
EndRunner -.-> wait
wait --> outputs["Parse outputs or errors"]
outputs --> configsecret["Create Outputs<br/>ConfigMap and Secrets"]
configsecret --> EndSidecarContainer>"outputs-fetcher<br/>sidecar container finished"]
EndSidecarContainer --> e(["End"])
StartSidecarContainer:::sidecar
wait:::sidecar
ifchecks:::initPrechecks
StartPreChecksInitContainer:::initPrechecks
ifgit:::initGit
ifchecksok:::initPrechecks
StartGitInitContainer:::initGit
StartRunner:::runner
CheckOutRepo:::initGit
CopyFilesIntoSharedDirectory:::initGit
ifgitok:::initGit
dostuff:::runner
SaveOutputs:::runner
EndRunner:::runner
outputs:::sidecar
configsecret:::sidecar
EndSidecarContainer:::sidecar
classDef sidecar fill:#7584ff
classDef initPrechecks fill:#80b880
classDef initGit fill:#bababa
classDef runner fill:#e84444
The Platform Orchestrator performs these steps when executing a Deployment into an Application Environment that involves the Container Driver.
- The Orchestrator performs resource matching using all relevant Resource Definitions. The matched Resource Definition uses the Container Driver.
- The Orchestrator passes the values of all the inputs to the Container as configured in the Resource Definition.
- The Driver instantiates a Kubernetes Job according to the received inputs .
- An optional init container is created. The `git-init` init container takes care of checking out the repository at the desired ref and path, making the content available to the `runner` container.
- Once the init containers have completed their execution, the `runner` container runs, provisions the Resources, and produces the expected outputs.
- The sidecar container `outputs-fetcher` waits for the main container to complete, parses the outputs it produced, and makes them ready to be fetched by the Container Driver.
- The Container Driver passes the outputs on to the Orchestrator.
- Further processing of the response inside the Orchestrator occurs like with any other Driver.
Contract between Container Driver and Runner Image
The `runner` container image supplied in the Container Driver inputs must respect the following requirements so that the Driver and the sidecar container can retrieve the Resource outputs and any errors and make them available to the Orchestrator:
- The container should execute its command, e.g. the IaC tool, in the directory specified by the Driver via the `SCRIPTS_DIRECTORY` environment variable. It matches the value of `job.shared_directory`.
- The container receives the Resource Inputs as a JSON object stored in the file at the path specified by the Driver in the `RESOURCE_INPUTS_FILE` environment variable.
- In case of error, the container should exit with a non-zero code to allow the Driver to mark the Job as failed and return the error to the Orchestrator.
- The container should be able to provision and de-provision Resources. An environment variable `ACTION`, which can assume the value `create` or `destroy`, is set on the `runner` container by the Driver to drive the execution of the container.
- The container should produce non-sensitive outputs and store them as a JSON object in a file whose path is specified by the Driver in the `OUTPUTS_FILE` environment variable.
- The container should produce sensitive outputs and store them as a JSON object in a file whose path is specified by the Driver in the `SECRET_OUTPUTS_FILE` environment variable.
- The container should store errors in a file whose path is specified by the Driver in the `ERROR_FILE` environment variable. No particular structure is expected for the content of this file. The content will be included in the deployment error message in the Platform Orchestrator if the container exits with a non-zero exit code.
- If the container runs as a non-root user, as recommended by Docker best practices, the user should be specified via UID and GID to allow the Kubernetes Job defined by the Driver to run without knowing the user and the group.
Properties
Property | Description |
---|---|
Resource type | Any |
Account type | Any |
Inputs
Values
Name | Type | Description |
---|---|---|
`job` | object | An object describing how to configure the Kubernetes Job which will run the supplied container. |
`cluster` | object | An object providing all the information about how the Container Driver should connect to the target cluster. |
`credentials_config` | object | [Optional] An object describing how credentials should be made available to the runner container. |
`files` | object | [Optional] A map of file paths to their content, created according to their path in the directory where the IaC executes (if the path is relative) or at a specific mount point (if the path is absolute). |
`source` | object | [Optional] A Git repository to use to fetch content such as IaC scripts. |
`skip_permission_checks` | boolean | [Optional] If set to `true`, the Driver and the Job skip the checks that ensure the K8s cluster client and service account have the permissions to complete successfully. Defaults to `false`. |
`manifests_output` | string | [Optional] The name of a property in the `OUTPUTS_FILE` or in the `SECRET_OUTPUTS_FILE` that contains a list of Manifest Location objects in JSON format. If the property is present in both files, the contents will be appended. These will then be returned as the manifests property in the Driver outputs. See Generating manifests for more details. |
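As a sketch, the values side of a Resource Definition using the Container Driver might combine these inputs as follows. Every concrete name below (image, paths, service account, cloud account ID) is a placeholder, and the exact wrapping structure depends on how you manage your Resource Definitions:

```yaml
# Illustrative Container Driver input values; all concrete names are placeholders.
values:
  job:
    image: ghcr.io/my-org/my-iac-runner:1.0.0    # hypothetical runner image
    shared_directory: /home/runner/shared
    service_account: humanitec-container-runner
    command: ["/opt/runner/entrypoint.sh"]
  cluster:
    account: my-org/my-cloud-account             # <orgId>/<accountId>
    cluster:
      cluster_type: eks
      # ...plus the non-secret cluster values for your cluster type
```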
Placeholders are resolved by the Platform Orchestrator before the Container Driver is called. This means that the content of the placeholder will already be resolved in the Driver inputs that the Driver receives. (This includes any placeholders in comments.)
Placeholders will not be resolved in content pulled from a Git source.
Job object
The `job` object provides all the information to properly build the Kubernetes Job that will run the `runner` container.
Property | Type | Description |
---|---|---|
`image` | string | The image the runner container will run. |
`shared_directory` | string | The mount point of the directory shared among the main and the sidecar containers for exchanging inputs and outputs. Must not conflict with an existing folder of your runner image filesystem. |
`service_account` | string | The Service Account the Kubernetes Job should run with. |
`command` | []string | Entrypoint array. The provided container image's ENTRYPOINT is used if this is not provided. |
`pod_template` | string | [Optional] The Pod Template Spec manifest which defines the Kubernetes Job Pod in the target cluster. |
`namespace` | string | [Optional] The Namespace where the Runner should run. If empty, the Driver assumes the Namespace is `humanitec-runner`. |
`variables` | object | [Optional] A map of environment variables to their non-secret content the Kubernetes Job will run with. |
A Service Account must be created in the target Namespace and must be configured with the permissions needed by the K8s Job to complete its execution.
Kubernetes Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
name: humanitec-container-runner
namespace: humanitec-runner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: humanitec-runner
name: humanitec-container-runner
rules:
- apiGroups: [""]
resources: ["secrets", "configmaps"]
verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: humanitec-container-runner
namespace: humanitec-runner
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: humanitec-container-runner
subjects:
- kind: ServiceAccount
name: humanitec-container-runner
namespace: humanitec-runner
Pod Template
The Pod Template Spec defined in the `pod_template` field will be merged into the default one defined by the Container Driver, as specified below:
Default Container runner Pod template
spec:
restartPolicy: Never
serviceAccountName: <job.service_account>
containers:
- name: runner
image: <job.image>
imagePullPolicy: IfNotPresent
command:
- <job.command>
resources:
requests:
memory: 1Gi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsGroup: 1000
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
env:
- name: ACTION
value: create
...
volumeMounts:
- mountPath: <job.shared_directory>
name: shared-directory
...
initContainers:
- name: outputs-fetcher
image: registry.humanitec.io/public/humanitec-container-driver-sidecars:<version>
command: ["/opt/container-driver/outputsfetcher"]
restartPolicy: Always
resources:
requests:
memory: 64Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsGroup: 1000
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
env:
...
volumeMounts:
- mountPath: <job.shared_directory>
name: shared-directory
...
- name: pre-checks
image: registry.humanitec.io/public/humanitec-container-driver-sidecars:<version>
command: ["/opt/container-driver/prechecks"]
restartPolicy: Never
resources:
requests:
memory: 64Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsGroup: 1000
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
env:
...
volumeMounts:
- mountPath: <job.shared_directory>
name: shared-directory
...
- name: git-init
image: registry.humanitec.io/public/humanitec-container-driver-sidecars:<version>
command: ["/opt/container-driver/inigit"]
restartPolicy: Never
resources:
requests:
memory: 1Gi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsGroup: 1000
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
env:
...
volumeMounts:
- mountPath: <job.shared_directory>
name: shared-directory
...
A common use case is specifying a secret to use to pull the container `runner` image.
spec:
imagePullSecrets:
- name: regcred
This assumes the existence of a Kubernetes Secret named `regcred` that hosts the registry credentials in the namespace where the K8s Job runs, as documented in the Kubernetes documentation.
The target cluster hosting the Job needs network access to pull images from the registry at `ghcr.io`.
If you want to host the Container sidecar image in your own registry, use `ghcr.io/humanitec/container-driver-sidecars:1.0.0` as the upstream source for greatest stability. We will continuously update the available version information in this place.
As the Pod defined via `job.pod_template` is still supposed to work in combination with the Humanitec Container Driver and run the image built by Humanitec for the sidecar containers, not all the fields in the manifest can be overridden:
- `metadata.namespace` can't be overridden as it is already defined at the Job level through the `job.namespace` value.
- Some properties of the `runner` container can't be overridden: `args`, `env`, `envFrom`, and `volumeMounts`, as defined by the Humanitec Container Driver, and `image` and `command`, as already handled by `job.image` and `job.command`.
- Some properties of the init containers can't be overridden as they are defined by the Humanitec Container Driver: `args`, `command`, `env`, `envFrom`, `volumeMounts`, `workingDir`.
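For instance, a `pod_template` that only touches permitted fields, such as scheduling constraints and the `runner` resource requests, might look like the following sketch. The node selector label is hypothetical; adapt it to your cluster:

```yaml
# Hypothetical pod_template fragment; it avoids all fields the Driver
# manages (env, envFrom, volumeMounts, image, command, ...).
spec:
  nodeSelector:
    workload-type: iac-runner   # example label, not a required value
  containers:
    - name: runner
      resources:
        requests:
          memory: 2Gi
```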
Cluster object
The `cluster` object defines the properties the Driver uses to authenticate to the target cluster where the Kubernetes Job Runner should run.
Property | Type | Description |
---|---|---|
`cluster` | object | Object which contains the non-sensitive data needed to access the target cluster. |
`account` | string | A Fully Qualified Cloud Account ID in the format `<orgId>/<accountId>` (not including `<>`). |
To be able to successfully access the target K8s cluster, the `cluster` object should include the property `cluster_type` as well as specific properties according to the cluster type:
- The `aks` cluster type expects the non-secret AKS Cluster values
- The `eks` cluster type expects the non-secret EKS Cluster values
- The `gke` cluster type expects the non-secret GKE Cluster values
The identity assumed by the Driver to authenticate to the cluster must be configured to have the permissions needed to create the Kubernetes Job and handle inputs and outputs. See the instructions for each `cluster_type`.
credentials_config object
The `credentials_config` object describes how the provider credentials should be made available to the `runner` container.
Property | Type | Description |
---|---|---|
`environment` | object | Map whose keys are the environment variable names expected by the runner and whose values can be flattened credential keys (with `.` as delimiter) or a credentials file path. If the value is `*`, the whole credentials object will be available at the specified environment variable. If the value at the specified key is an object, the environment variable assumes its stringified value. Example: `AWS_ACCESS_KEY_ID: aws.accessKeyId` |
`file` | string | File path for the file that will be built from credentials. The file path can't be absolute or use dots. Example: `credentials.json` |
`script_variables` | object | Object that allows manipulating provider credentials and supplying them as a JSON file containing script variables. |
script_variables object
Property | Type | Description |
---|---|---|
`file` | string | The name of the JSON file that will host the script variables. If it's relative, it will be mounted to the scripts directory; otherwise it will be treated as a mount point. |
`variables` | object | Map whose keys are variables expected by the runner and whose values can be flattened credential keys (with `.` as delimiter) or a credentials file path. If the value is `*`, the whole credentials object will be available at the specified variable. If the key contains `.`, it is considered a sub-field of a variable of type object (e.g. `SECRET.ID: aws_access_key_id` generates a variable `SECRET` of type object with a field `ID` whose value is fetched from the credentials at path `aws_access_key_id`). If the value at the specified key is an object, the variable is supplied to the scripts as an object, otherwise as a string. Example: `access_key: aws.accessKeyId` |
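Combining the tables above, a hedged example of a `credentials_config` could look as follows. The credential key paths (`aws.accessKeyId`, `aws.secretAccessKey`) are illustrative and depend on the credential structure of your Cloud Account type:

```yaml
# Illustrative credentials_config; key paths depend on your Cloud Account.
credentials_config:
  environment:
    AWS_ACCESS_KEY_ID: aws.accessKeyId         # flattened credential key
    AWS_SECRET_ACCESS_KEY: aws.secretAccessKey # assumed key name
  script_variables:
    file: variables.json   # relative path: placed in the scripts directory
    variables:
      access_key: aws.accessKeyId
```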
Source object
The `source` object defines how the Container Driver uses IaC scripts that are stored in Git. The supplied repository must be accessible to the Container K8s Job, and credentials must be supplied if necessary.
Property | Type | Description |
---|---|---|
`ref` | string | [Optional] Full branch name or tag, or commit SHA, for example, `/refs/heads/main`. |
`url` | object | Repository URL, for example, `github.com:my-org/project.git` for SSH or `https://github.com/my-org/project.git` for HTTPS. |
`username` | object | [Optional] Username to authenticate. The default is `git`. |
According to the value specified in the `ref` property, the `git-init` init container fetches the content of the repository at a specific reference:
- If `ref` is a git reference (e.g. `refs/heads/my-branch`, `refs/tags/my-tag`), that branch or tag is checked out.
- If `ref` is a valid commit SHA, that commit is checked out.
- If `ref` is empty, the HEAD of the repository is checked out (generally `main` or `master`).
This means that if `ref` is not a commit SHA, we can't ensure that the scripts used to provision a resource exactly match the ones used to delete the same resource, as a branch (or a tag) might have been updated since the resource was provisioned.
Secrets
Name | Type | Description |
---|---|---|
`job` | object | [Optional] Object which describes the sensitive data to configure the Kubernetes Job which will run in the target cluster. |
`cluster` | object | [Optional] An object providing credentials and other sensitive information about how the Container Driver should connect to the target cluster. |
`files` | object | [Optional] A map of file paths to their sensitive content, created according to their path in the directory where the runner executes (if the path is relative) or at a specific mount point (if the path is absolute). |
`source` | object | [Optional] Credentials to access the Git repository. |
job object
The `job` object defines the environment variables with sensitive values that the `runner` container runs with.
Property | Type | Description |
---|---|---|
`variables` | object | [Optional] A map of environment variables to their secret content the target container will run with. |
cluster object
The `cluster` object defines the sensitive data the Driver needs to reach the target cluster where the Kubernetes Job should run.
Property | Type | Description |
---|---|---|
`agent_url` | object | [Optional] The signed URL produced by the humanitec/agent driver. It is expected to be a reference to the `url` output of an Agent resource. |
Source object
Credentials to be used to access the Git repository. The choice of credentials depends on the `url` format.
Property | Type | Description |
---|---|---|
`password` | string | [Optional] Password or Personal Access Token, for HTTPS. |
`ssh_key` | string | [Optional] SSH private key, for connections over SSH. |
Notes
Interaction with Humanitec Resources
Resource Types in Humanitec have a specified Resource Output Schema. In order for a resource to be usable by the Platform Orchestrator, the Driver must generate outputs complying with this schema. For the Container Driver, the outputs must be placed in the files named in the `OUTPUTS_FILE` and `SECRET_OUTPUTS_FILE` variables as per the contract.
Assuming that the IaC code executes some OpenTofu scripts, those scripts should specify `output` variables that ideally exactly match this schema. If they do not, the image implementation will have to map the names for the output files.
For example, the `s3` Resource Type has the following properties in its output schema:
Name | Type | Description |
---|---|---|
`bucket` | string | The bucket name. |
`region` | string | The region the bucket is hosted in. |
Therefore, the OpenTofu module should have outputs defined similar to:
output "region" {
value = module.aws_s3.s3_bucket_region
}
output "bucket" {
value = module.aws_s3.s3_bucket_bucket_domain_name
}
The image implementation must read the OpenTofu outputs to create an `OUTPUTS_FILE` like this:
{
"region": "eu-north-1",
"bucket": "my-company-my-app-dev-bucket"
}
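One possible way to produce such a file is to flatten the JSON emitted by `tofu output -json`, which nests each output under a `value` key, using `jq`. This is an illustrative sketch, not part of the Driver; in a real image the sample document below would come from actually running `tofu output -json`:

```shell
# Flatten OpenTofu's output JSON into the plain object the OUTPUTS_FILE
# contract expects. In a real runner this would be:
#   tofu output -json | jq 'map_values(.value)' > "$OUTPUTS_FILE"
# A literal sample of the tofu output format stands in here.
OUTPUTS_FILE="${OUTPUTS_FILE:-outputs.json}"
cat <<'EOF' | jq 'map_values(.value)' > "$OUTPUTS_FILE"
{
  "region": {"sensitive": false, "type": "string", "value": "eu-north-1"},
  "bucket": {"sensitive": false, "type": "string", "value": "my-company-my-app-dev-bucket"}
}
EOF
```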
Examples
See the Container Driver examples page for a collection of examples.