Template Driver
Resource Definitions using the Template Driver
This section contains example Resource Definitions using the Template Driver.
Add sidecar
Add a sidecar to workloads using the workload resource
The workload Resource Type can be used to modify resources before they are deployed into the cluster. In this example, a Resource Definition implementing the workload Resource Type injects the OpenTelemetry agent as a sidecar into every Workload. In addition to adding the sidecar, it also adds an environment variable named OTEL_EXPORTER_OTLP_ENDPOINT to each container running in the Workload.
otel-sidecar.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: otel-sidecar
entity:
  name: otel-sidecar
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          {{- /*
            The "update" output is passed into the corresponding "update" input of the "workload" Resource Type.
          */ -}}
          update:
          {{- /*
            Add the variable OTEL_EXPORTER_OTLP_ENDPOINT to all containers
          */ -}}
          {{- range $containerId, $value := .resource.spec.containers }}
            - op: add
              path: /spec/containers/{{ $containerId }}/variables/OTEL_EXPORTER_OTLP_ENDPOINT
              value: http://localhost:4317
          {{- end }}
        manifests:
          sidecar.yaml:
            location: containers
            data: |
              {{- /*
                The OpenTelemetry container as a sidecar in the workload
              */ -}}
              command:
                - "/otelcol"
                - "--config=/conf/otel-agent-config.yaml"
              image: otel/opentelemetry-collector:0.94.0
              name: otel-agent
              resources:
                limits:
                  cpu: 500m
                  memory: 500Mi
                requests:
                  cpu: 100m
                  memory: 100Mi
              ports:
                - containerPort: 55679 # ZPages endpoint.
                - containerPort: 4317 # Default OpenTelemetry receiver port.
                - containerPort: 8888 # Metrics.
              env:
                - name: GOMEMLIMIT
                  value: 400MiB
              volumeMounts:
                - name: otel-agent-config-vol
                  mountPath: /conf
          sidecar-volume.yaml:
            location: volumes
            data: |
              {{- /*
                A volume that is used to surface the config file
              */ -}}
              configMap:
                name: otel-agent-conf-{{ .id }}
                items:
                  - key: otel-agent-config
                    path: otel-agent-config.yaml
              name: otel-agent-config-vol
          otel-config-map.yaml:
            location: namespace
            data: |
              {{- /*
                The config file for the OpenTelemetry agent. Notice that its name includes the GUResID
              */ -}}
              apiVersion: v1
              kind: ConfigMap
              metadata:
                name: otel-agent-conf-{{ .id }}
                labels:
                  app: opentelemetry
                  component: otel-agent-conf
              data:
                otel-agent-config: |
                  receivers:
                    otlp:
                      protocols:
                        grpc:
                          endpoint: localhost:4317
                        http:
                          endpoint: localhost:4318
                  exporters:
                    otlp:
                      endpoint: "otel-collector.default:4317"
                      tls:
                        insecure: true
                      sending_queue:
                        num_consumers: 4
                        queue_size: 100
                      retry_on_failure:
                        enabled: true
                  processors:
                    batch:
                    memory_limiter:
                      # 80% of maximum memory up to 2G
                      limit_mib: 400
                      # 25% of limit up to 2G
                      spike_limit_mib: 100
                      check_interval: 5s
                  extensions:
                    zpages: {}
                  service:
                    extensions: [zpages]
                    pipelines:
                      traces:
                        receivers: [otlp]
                        processors: [memory_limiter, batch]
                        exporters: [otlp]
  criteria: []
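With criteria: [], the Definition carries no matching criteria and is not yet matched to any Deployments. As a minimal sketch, hypothetical criteria (the app and env ids are assumptions) could scope it to one Application and Environment:

criteria:
  - app_id: my-app
    env_id: development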
Affinity
This section contains example Resource Definitions using the Template Driver for the affinity of Kubernetes Pods.
affinity.yaml: Add affinity rules to the Workload. This format is for use with the Humanitec CLI.
affinity.yaml (view on GitHub):
# Add affinity rules to the Workload by adding a value to the manifest at .spec.affinity
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: workload-affinity
entity:
  name: workload-affinity
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          update:
            - op: add
              path: /spec/affinity
              value:
                nodeAffinity:
                  preferredDuringSchedulingIgnoredDuringExecution:
                    - weight: 1
                      preference:
                        matchExpressions:
                          - key: another-node-label-key
                            operator: In
                            values:
                              - another-node-label-value
  criteria: []
Image pull secrets
This section shows how to use the Template Driver for configuring cluster access to a private container image registry.
The example implements the Kubernetes standard mechanism to Pull an Image from a Private Registry. It creates a Kubernetes Secret of kubernetes.io/dockerconfigjson type, reading the credentials from a secret store. It then configures the Secret as the imagePullSecret for a Workload’s Pod.
The example is applicable only when using the Humanitec Operator on the cluster, because the Registries feature of the Platform Orchestrator is not supported when using the Operator.
To use this mechanism, install the Resource Definitions of this example into your Organization, replacing some placeholder values with the actual values of your setup. Add the appropriate matching criteria to the workload Definition to match the Workloads you want to have access to the private registry.
Note: workload is an implicit Resource Type, so it is automatically referenced for every Deployment.
config.yaml: Resource Definition of type config that reads the credentials for the private registry from a secret store and creates the Kubernetes Secret
workload.yaml: Resource Definition of type workload that adds the imagePullSecrets element to the Pod spec, referencing the config Resource
config.yaml (view on GitHub):
# This Resource Definition pulls credentials for a container image registry from a secret store
# and creates a Kubernetes Secret of kubernetes.io/dockerconfigjson type
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: regcred-config
entity:
  driver_type: humanitec/template
  name: regcred-config
  type: config
  criteria:
    - class: default
      # This res_id must be used from a referencing Resource Definition to request this config Resource
      res_id: regcred
  driver_inputs:
    # These secret references read the credentials from a secret store
    secret_refs:
      password:
        ref: regcred-password
        # Replace this value with the secret store id that's supplying the password
        store: FIXME
      username:
        ref: regcred-username
        # Replace this value with the secret store id that's supplying the username
        store: FIXME
    values:
      secret_name: regcred
      # Replace this value with the server name of your registry
      server: FIXME
      templates:
        # The init template is used to prepare the "dockerConfigJson" content
        init: |
          dockerConfigJson:
            auths:
              {{ .driver.values.server | quote }}:
                username: {{ .driver.secrets.username | toRawJson }}
                password: {{ .driver.secrets.password | toRawJson }}
        manifests:
          # The manifests template creates the Kubernetes Secret
          # which can then be used in the workload "imagePullSecrets"
          regcred-secret.yaml:
            data: |
              apiVersion: v1
              kind: Secret
              metadata:
                name: {{ .driver.values.secret_name }}
              data:
                .dockerconfigjson: {{ .init.dockerConfigJson | toRawJson | b64enc }}
              type: kubernetes.io/dockerconfigjson
            location: namespace
        outputs: |
          secret_name: {{ .driver.values.secret_name }}
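For illustration, assuming the registry server registry.example.com and example credentials (all values below are assumptions), the rendered Secret would look roughly like this sketch:

apiVersion: v1
kind: Secret
metadata:
  name: regcred
data:
  # base64 encoding of {"auths":{"registry.example.com":{"username":"...","password":"..."}}}
  .dockerconfigjson: <base64-encoded dockerConfigJson content>
type: kubernetes.io/dockerconfigjson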
workload.yaml (view on GitHub):
# This workload Resource Definition adds an "imagePullSecrets" element to the Pod spec
# It references a "config" type Resource to obtain the secret name
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: custom-workload
entity:
  name: custom-workload
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          update:
            - op: add
              path: /spec/imagePullSecrets
              value:
                - name: ${resources['config.default#regcred'].outputs.secret_name}
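Once both Definitions are matched to a Deployment, the rendered patch adds the Secret name to the Workload’s Pod spec, roughly like this sketch:

spec:
  imagePullSecrets:
    - name: regcred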
Ingress
This section contains example Resource Definitions for handling Kubernetes ingress traffic. Instead of the Ingress Driver, these examples use the Template Driver, which can render any Kubernetes YAML object.
ingress-traefik.yaml: defines an IngressRoute object for the Traefik Ingress Controller using the IngressRoute custom resource definition. This format is for use with the Humanitec CLI.
ingress-ambassador.yaml: defines a Mapping object for the Ambassador Ingress Controller using the Mapping custom resource definition. This format is for use with the Humanitec CLI.
ingress-ambassador.yaml (view on GitHub):
# This Resource Definition provisions a Mapping and a TLSContext object for the Ambassador Ingress Controller
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: ambassador-ingress
entity:
  name: ambassador-ingress
  type: ingress
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          name: {{ .id }}-ingress
          secretname: ${resources.tls-cert.outputs.tls_secret_name}
          host: ${resources.dns.outputs.host}
          namespace: ${resources['k8s-namespace#k8s-namespace'].outputs.namespace}
        manifests: |
          ambassador-mapping.yaml:
            data:
              apiVersion: getambassador.io/v3alpha1
              kind: Mapping
              metadata:
                name: {{ .init.name }}-mapping
              spec:
                host: {{ .init.host }}
                prefix: /
                service: my-service-name:8080
            location: namespace
          ambassador-tlscontext.yaml:
            data:
              apiVersion: getambassador.io/v3alpha1
              kind: TLSContext
              metadata:
                name: {{ .init.name }}-tlscontext
              spec:
                hosts:
                  - {{ .init.host }}
                secret: {{ .init.secretname }}
            location: namespace
ingress-traefik.yaml (view on GitHub):
# This Resource Definition provisions an IngressRoute object for the Traefik Ingress Controller
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: traefik-ingress
entity:
  name: traefik-ingress
  type: ingress
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          name: {{ .id }}-ir
          secretname: ${resources.tls-cert.outputs.tls_secret_name}
          host: ${resources.dns.outputs.host}
          namespace: ${resources['k8s-namespace#k8s-namespace'].outputs.namespace}
        manifests: |
          traefik-ingressroute.yaml:
            data:
              apiVersion: traefik.io/v1alpha1
              kind: IngressRoute
              metadata:
                name: {{ .init.name }}
              spec:
                routes:
                  - match: Host(`{{ .init.host }}`) && PathPrefix(`/`)
                    kind: Rule
                    services:
                      - name: my-service-name
                        kind: Service
                        port: 8080
                        namespace: {{ .init.namespace }}
                tls:
                  secretName: {{ .init.secretname }}
            location: namespace
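Either Definition is provisioned when a Workload requests a Resource of type ingress. A minimal sketch of such a request in a Score file (the resource name my-ingress is an assumption):

resources:
  my-ingress:
    type: ingress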
Labels
This section shows how to use the Template Driver for managing labels on Kubernetes objects.
While it is also possible to set labels via Score, the approach shown here shifts the management of labels down to the Platform, ensuring consistency and relieving developers of repeating common labels for each Workload in the Score extension file.
config-labels.yaml: Resource Definition of type config which defines the value for a sample label at a central place.
custom-workload-with-dynamic-labels.yaml: Add dynamic labels to your Workload. This format is for use with the Humanitec CLI.
custom-namespace-with-dynamic-labels.yaml: Add dynamic labels to your Namespace. This format is for use with the Humanitec CLI.
config-labels.yaml (view on GitHub):
# This "config" type Resource Definition provides the value for the sample label
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: app-config
entity:
  name: app-config
  type: config
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        # Returns a sample output named "cost_center_id" to be used as a label
        outputs: |
          cost_center_id: my-example-id
  # Match the resource ID "app-config" so that it can be requested via that ID
  criteria:
    - res_id: app-config
custom-namespace-with-dynamic-labels.yaml (view on GitHub):
# This Resource Definition references the "config" resource to use its output as a label
# and adds another label taken from the Deployment context
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: custom-namespace-with-label
entity:
  name: custom-namespace-with-label
  type: k8s-namespace
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          name: ${context.app.id}-${context.env.id}
        manifests: |-
          namespace.yaml:
            location: cluster
            data:
              apiVersion: v1
              kind: Namespace
              metadata:
                labels:
                  env_id: ${context.env.id}
                  cost_center_id: ${resources['config.default#app-config'].outputs.cost_center_id}
                name: {{ .init.name }}
        outputs: |
          namespace: {{ .init.name }}
  # Set matching criteria as required
  criteria:
    - {}
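As a sketch, for an Application my-app deployed to an Environment development (ids assumed for illustration), the rendered manifest would be:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    env_id: development
    cost_center_id: my-example-id
  name: my-app-development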
custom-workload-with-dynamic-labels.yaml (view on GitHub):
# This Resource Definition references the "config" resource to use its output as a label
# and adds another label taken from the Deployment context
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: custom-workload-with-label
entity:
  name: custom-workload-with-label
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        # Remove the /spec/service/labels part if there is no "service" in your Score file.
        outputs: |
          update:
            - op: add
              path: /spec/labels
              value:
                {{- range $key, $val := .resource.spec.labels }}
                {{ $key }}: {{ $val | quote }}
                {{- end }}
                env_id: ${context.env.id}
                cost_center_id: ${resources['config.default#app-config'].outputs.cost_center_id}
            - op: add
              path: /spec/service/labels
              value:
                {{- range $key, $val := .resource.spec.service.labels }}
                {{ $key }}: {{ $val | quote }}
                {{- end }}
                env_id: ${context.env.id}
                cost_center_id: ${resources['config.default#app-config'].outputs.cost_center_id}
  # Set matching criteria as required
  criteria:
    - {}
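As a sketch, assuming the Score file sets a single Workload label team: payments and the Environment id is development (both assumptions), the first patch renders as follows (the /spec/service/labels patch is analogous):

update:
  - op: add
    path: /spec/labels
    value:
      team: "payments"
      env_id: development
      cost_center_id: my-example-id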
Namespace
This section contains example Resource Definitions using the Template Driver for managing Kubernetes namespaces.
custom-namespace.yaml: Create Kubernetes namespaces with your own custom naming scheme. This format is for use with the Humanitec CLI.
custom-namespace.tf: Create Kubernetes namespaces with your own custom naming scheme. This format is for use with the Humanitec Terraform provider.
custom-namespace.tf (view on GitHub):
resource "humanitec_resource_definition" "namespace" {
  id          = "custom-namespace"
  name        = "custom-namespace"
  type        = "k8s-namespace"
  driver_type = "humanitec/template"
  driver_inputs = {
    values_string = jsonencode({
      templates = {
        init      = "name: $${context.env.id}-$${context.app.id}"
        manifests = <<EOL
namespace.yaml:
  location: cluster
  data:
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        pod-security.kubernetes.io/enforce: restricted
      name: {{ .init.name }}
EOL
        outputs   = "namespace: {{ .init.name }}"
      }
    })
  }
}

resource "humanitec_resource_definition_criteria" "namespace" {
  resource_definition_id = humanitec_resource_definition.namespace.id
  # ... add any matching criteria as required.
}
custom-namespace.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: custom-namespace
entity:
  name: custom-namespace
  type: k8s-namespace
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        # Use any combination of placeholders and characters to configure your naming scheme
        init: |
          name: ${context.env.id}-${context.app.id}
        manifests: |-
          namespace.yaml:
            location: cluster
            data:
              apiVersion: v1
              kind: Namespace
              metadata:
                labels:
                  pod-security.kubernetes.io/enforce: restricted
                name: {{ .init.name }}
        outputs: |
          namespace: {{ .init.name }}
  criteria:
    - {}
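With this naming scheme, an Environment development and an Application my-app (ids assumed for illustration) yield a Namespace named development-my-app:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    pod-security.kubernetes.io/enforce: restricted
  name: development-my-app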
Node selector
This section contains example Resource Definitions using the Template Driver for setting nodeSelectors on your Pods.
aci-workload.yaml: Add the required node selector and tolerations to the Workload so it can be scheduled on an Azure AKS virtual node. This format is for use with the Humanitec CLI.
aci-workload.yaml (view on GitHub):
# Add tolerations and nodeSelector to the Workload to make it runnable on AKS virtual nodes
# served through Azure Container Instances (ACI).
# See https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aci-workload
entity:
  name: aci-workload
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          update:
            - op: add
              path: /spec/tolerations
              value:
                - key: "virtual-kubelet.io/provider"
                  operator: "Exists"
                - key: "azure.com/aci"
                  effect: "NoSchedule"
            - op: add
              path: /spec/nodeSelector
              value:
                kubernetes.io/role: agent
                beta.kubernetes.io/os: linux
                type: virtual-kubelet
  criteria: []
Security context
This section contains example Resource Definitions using the Template Driver for adding a security context to Kubernetes Deployments.
custom-workload-with-security-context.yaml: Add a security context to your Workload. This format is for use with the Humanitec CLI.
custom-workload-with-security-context.tf: Add a security context to your Workload. This format is for use with the Humanitec Terraform provider.
custom-workload-with-security-context.tf (view on GitHub):
resource "humanitec_resource_definition" "workload" {
  driver_type = "humanitec/template"
  id          = "custom-workload"
  name        = "custom-workload"
  type        = "workload"
  driver_inputs = {
    values_string = jsonencode({
      templates = {
        init      = ""
        manifests = ""
        outputs   = <<EOL
update:
  - op: add
    path: /spec/securityContext
    value:
      fsGroup: 1000
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
      seccompProfile:
        type: RuntimeDefault
{{- range $containerId, $value := .resource.spec.containers }}
  - op: add
    path: /spec/containers/{{ $containerId }}/securityContext
    value:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      privileged: false
      readOnlyRootFilesystem: true
{{- end }}
EOL
      }
    })
  }
}

resource "humanitec_resource_definition_criteria" "workload" {
  resource_definition_id = humanitec_resource_definition.workload.id
  # ... add any matching criteria as required.
}
custom-workload-with-security-context.yaml (view on GitHub):
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: custom-workload
entity:
  name: custom-workload
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          update:
            - op: add
              path: /spec/securityContext
              value:
                fsGroup: 1000
                runAsGroup: 1000
                runAsNonRoot: true
                runAsUser: 1000
                seccompProfile:
                  type: RuntimeDefault
          {{- range $containerId, $value := .resource.spec.containers }}
            - op: add
              path: /spec/containers/{{ $containerId }}/securityContext
              value:
                allowPrivilegeEscalation: false
                capabilities:
                  drop:
                    - ALL
                privileged: false
                readOnlyRootFilesystem: true
          {{- end }}
  criteria:
    - {}
Service account
This section contains example Resource Definitions using the Template Driver for provisioning Kubernetes ServiceAccounts for your Workloads.
The solution consists of a combination of two Resource Definitions of type workload and k8s-service-account.
The workload Resource Type is an implicit Type which is automatically referenced for any Deployment.
The workload Resource Definition adds the serviceAccountName item to the Pod spec and references a k8s-service-account type Resource, causing it to be provisioned. The k8s-service-account Resource Definition generates the Kubernetes manifest for the actual ServiceAccount.
A Resource Graph for a Workload using those Resource Definitions will look like this:
flowchart LR
workloadVirtual[Workload "my-workload"] --> workload(id: modules.my-workload\ntype: workload\nclass: default)
workload --> serviceAccount(id: modules.my-workload\ntype: k8s-service-account\nclass: default)
Note that the Resource id is used in the k8s-service-account Resource Definition to derive the name of the actual Kubernetes ServiceAccount. Check the code for details.
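As a sketch, for a Workload named my-workload (name assumed), the Resource ID is modules.my-workload and the init template resolves like this:

# splitList "." "modules.my-workload" yields ["modules" "my-workload"],
# and "index ... 1" selects the second element:
name: my-workload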
Example files:
cli-serviceaccount-workload-def.yaml and cli-serviceaccount-k8ssa-def.yaml: Resource Definition combination for Workload/ServiceAccount. This format is for use with the Humanitec CLI.
tf-serviceaccount-workload-def.tf and tf-serviceaccount-k8ssa-def.tf: Resource Definition combination for Workload/ServiceAccount. This format is for use with the Humanitec Terraform provider.
cli-serviceaccount-k8ssa-def.yaml (view on GitHub):
# This Resource Definition provisions a Kubernetes ServiceAccount
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: serviceaccount-k8s-service-account
entity:
  driver_type: humanitec/template
  name: serviceaccount-k8s-service-account
  type: k8s-service-account
  driver_inputs:
    values:
      res_id: ${context.res.id}
      templates:
        # Name the ServiceAccount after the Resource
        init: |
          name: {{ index (splitList "." "${context.res.id}") 1 }}
        outputs: |
          name: {{ .init.name }}
        manifests: |-
          service-account.yaml:
            location: namespace
            data:
              apiVersion: v1
              kind: ServiceAccount
              metadata:
                name: {{ .init.name }}
cli-serviceaccount-workload-def.yaml (view on GitHub):
# This Resource Definition adds a Kubernetes ServiceAccount to a Workload
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: serviceaccount-workload
entity:
  driver_type: humanitec/template
  name: serviceaccount-workload
  type: workload
  driver_inputs:
    values:
      templates:
        outputs: |
          update:
            - op: add
              path: /spec/serviceAccountName
              value: ${resources.k8s-service-account.outputs.name}
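Rendered for the same assumed Workload my-workload, the patch produces this Pod spec fragment:

spec:
  serviceAccountName: my-workload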
tf-serviceaccount-k8ssa-def.tf (view on GitHub):
# This Resource Definition provisions a Kubernetes ServiceAccount
resource "humanitec_resource_definition" "k8s_service_account" {
  driver_type = "humanitec/template"
  id          = "${var.prefix}k8s-service-account"
  name        = "${var.prefix}k8s-service-account"
  type        = "k8s-service-account"
  driver_inputs = {
    values_string = jsonencode({
      templates = {
        # Name the ServiceAccount after the Resource
        init      = <<EOL
name: {{ index (splitList "." "$${context.res.id}") 1 }}
EOL
        manifests = <<EOL
service-account.yaml:
  location: namespace
  data:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: {{ .init.name }}
EOL
        outputs   = <<EOL
name: {{ .init.name }}
EOL
      }
    })
  }
}
tf-serviceaccount-workload-def.tf (view on GitHub):
# This Resource Definition adds a Kubernetes ServiceAccount to a Workload
resource "humanitec_resource_definition" "workload" {
  driver_type = "humanitec/template"
  id          = "${var.prefix}workload"
  name        = "${var.prefix}workload"
  type        = "workload"
  driver_inputs = {
    values_string = jsonencode({
      templates = {
        init      = ""
        manifests = ""
        outputs   = <<EOL
update:
  - op: add
    path: /spec/serviceAccountName
    value: $${resources.k8s-service-account.outputs.name}
EOL
      }
    })
  }
}
Tolerations
This section contains example Resource Definitions using the Template Driver for managing tolerations on your Pods.
tolerations.yaml: Add tolerations to the Workload. This format is for use with the Humanitec CLI.
tolerations.yaml (view on GitHub):
# Add tolerations to the Workload by adding a value to the manifest at .spec.tolerations
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: workload-toleration
entity:
  name: workload-toleration
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          update:
            - op: add
              path: /spec/tolerations
              value:
                - key: "example-key"
                  operator: "Exists"
                  effect: "NoSchedule"
  criteria: []
Volumes static provisioning
This example will let participating Workloads share a common persistent storage service through the Kubernetes volumes system.
It is possible to use the volume-nfs or volume-pvc Drivers to create a PersistentVolume for your application. If you have special requirements for your PersistentVolume, you can also use the Template Driver to create it, as shown here.
The example setup will perform static provisioning for a Kubernetes PersistentVolume of type nfs and a corresponding PersistentVolumeClaim. The volume points to an existing NFS server endpoint. The endpoint shown is an in-cluster NFS service which can be set up using this Kubernetes example. Modify the endpoint to use your own NFS server, or substitute the data completely for a different volume type.
flowchart TB
subgraph pod1[Pod]
direction TB
subgraph container1[Container]
volumeMount1(volumeMount\n/tmp/data):::codeComponent
end
volumeMount1 --> volume1(volume):::codeComponent
end
subgraph pod2[Pod]
direction TB
subgraph container2[Container]
volumeMount2(volumeMount\n/tmp/data):::codeComponent
end
volumeMount2 --> volume2(volume):::codeComponent
end
pvc1(PersistentVolumeClaim) --> pv1(PersistentVolume)
volume1 --> pvc1
pvc2(PersistentVolumeClaim) --> pv2(PersistentVolume)
volume2 --> pvc2
nfsServer[NFS Server]
pv1 --> nfsServer
pv2 --> nfsServer
classDef codeComponent font-family:Courier
To use the example, apply both Resource Definitions to your Organization and add the required matching criteria to both so they are matched to your target Deployments.
Note that this setup does not require any resource to be requested via Score. The implicit workload Resource, when matched to the Resource Definition of type workload of this example, will trigger the provisioning of the volume Resource through its own Resource reference.
These files make up the example:
workload-volume-nfs.yaml: Resource Definition of type workload. It references a Resource of type volume through Resource References, thus adding such a Resource to the Resource Graph and effectively triggering the provisioning of that Resource. It uses the Resource outputs to set an annotation for a fictitious backup solution, and to add the PersistentVolumeClaim to the Workload container.
volume-nfs.yaml: Resource Definition of type volume. It creates the PersistentVolume and PersistentVolumeClaim manifests and adds the volumes element to the Workload’s Pod. The ID generated in the init section will be different for each active Resource, i.e. for each Workload, so that each Workload gets its own PersistentVolume and PersistentVolumeClaim objects created for it. Still, through the common NFS server endpoint, they will effectively share access to the data.
The resulting Resource Graph portion will look like this:
flowchart LR
subgraph resource-graph[Resource Graph]
direction TB
W1((Workload)) --->|implicit reference| W2(Workload)
W2 --->|"resource reference\n${resources.volume...}"| V1(Volume)
end
subgraph key [Key]
VN((Virtual\nNodes))
AN(Active\nResources)
end
resource-graph ~~~ key
volume-nfs.yaml (view on GitHub):
# Using the Template Driver for the static provisioning of
# a Kubernetes PersistentVolume and PersistentVolumeClaim combination,
# then adding the volume into the Pod of the Workload.
# The volumeMount in the container is defined in the "workload" type Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-nfs
entity:
  name: volume-nfs
  type: volume
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          # Generate a unique id for each pv/pvc combination.
          # Every Workload will have a separate pv and pvc created for it,
          # but pointing to the same NFS server endpoint.
          volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
          pvBaseName: pv-tmpl-
          pvcBaseName: pvc-tmpl-
          volBaseName: vol-tmpl-
        manifests:
          ####################################################################
          # This template creates the PersistentVolume in the target namespace
          # Modify the nfs server and path to address your NFS server
          ####################################################################
          app-pv-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolume
              metadata:
                name: {{ .init.pvBaseName }}{{ .init.volumeUid }}
              spec:
                capacity:
                  storage: 1Mi
                accessModes:
                  - ReadWriteMany
                nfs:
                  server: nfs-server.default.svc.cluster.local
                  path: "/"
                mountOptions:
                  - nfsvers=4.2
          #########################################################################
          # This template creates the PersistentVolumeClaim in the target namespace
          #########################################################################
          app-pvc-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolumeClaim
              metadata:
                name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
              spec:
                accessModes:
                  - ReadWriteMany
                storageClassName: ""
                resources:
                  requests:
                    storage: 1Mi
                volumeName: {{ .init.pvBaseName }}{{ .init.volumeUid }}
          ########################################################
          # This template creates the volume in the Workload's Pod
          ########################################################
          app-vol-tmpl.yaml:
            location: volumes
            data: |
              name: {{ .init.volBaseName }}{{ .init.volumeUid }}
              persistentVolumeClaim:
                claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
        # Make the volume name and pvc name available for other Resources
        outputs: |
          volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}
          pvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
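As a sketch, assuming the random volumeUid renders to 1234-5678, one Workload gets this set of generated names:

# volumeUid:             1234-5678
# PersistentVolume:      pv-tmpl-1234-5678
# PersistentVolumeClaim: pvc-tmpl-1234-5678
# Pod volume:            vol-tmpl-1234-5678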
workload-volume-nfs.yaml (view on GitHub):
# This workload Resource Definition uses the output of the "volume" type Resource
# to add an annotation for a backup solution
# and to create the volumeMount for the container.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: workload-volume-nfs
entity:
  name: workload-volume-nfs
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          pvcName: ${resources.volume.outputs.pvcName}
          volumeName: ${resources.volume.outputs.volumeName}
        outputs: |
          update:
            - op: add
              path: /spec/annotations/backup.org-name.io
              value: {{ .init.pvcName }}
          {{- range $containerId, $value := .resource.spec.containers }}
            - op: add
              path: /spec/containers/{{ $containerId }}/volumeMounts
              value:
                - name: {{ $.init.volumeName }}
                  mountPath: /tmp/data
          {{- end }}
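As a sketch, assuming a single container with the id main and the volume outputs from the example above (the 1234-5678 suffix is assumed), the rendered update output would be:

update:
  - op: add
    path: /spec/annotations/backup.org-name.io
    value: pvc-tmpl-1234-5678
  - op: add
    path: /spec/containers/main/volumeMounts
    value:
      - name: vol-tmpl-1234-5678
        mountPath: /tmp/data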