Volumes Static Provisioning
This example will let participating Workloads share a common persistent storage service through the Kubernetes volumes system.
It is possible to use the Drivers volume-nfs or volume-pvc to create a PersistentVolume for your application. If you have special requirements for your PersistentVolume, you can also use the Template Driver to create it as shown here.
The example setup will perform static provisioning for a Kubernetes PersistentVolume of type nfs and a corresponding PersistentVolumeClaim. The volume points to an existing NFS server endpoint. The endpoint shown is an in-cluster NFS service which can be set up using this Kubernetes example. Modify the endpoint to use your own NFS server, or substitute the data completely for a different volume type.
flowchart TB
subgraph pod1[Pod]
direction TB
subgraph container1[Container]
volumeMount1(volumeMount\n/tmp/data):::codeComponent
end
volumeMount1 --> volume1(volume):::codeComponent
end
subgraph pod2[Pod]
direction TB
subgraph container2[Container]
volumeMount2(volumeMount\n/tmp/data):::codeComponent
end
volumeMount2 --> volume2(volume):::codeComponent
end
pvc1(PersistentVolumeClaim) --> pv1(PersistentVolume)
volume1 --> pvc1
pvc2(PersistentVolumeClaim) --> pv2(PersistentVolume)
volume2 --> pvc2
nfsServer[NFS Server]
pv1 --> nfsServer
pv2 --> nfsServer
classDef codeComponent font-family:Courier
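If you bring your own NFS server instead of the in-cluster example service, the endpoint to modify is the nfs block of the PersistentVolume template shown further below. A minimal sketch, assuming a hypothetical server reachable at nfs.example.com that exports /exports/data:

nfs:
  server: nfs.example.com   # replaces nfs-server.default.svc.cluster.local
  path: "/exports/data"     # export path on your NFS server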
To use the example, apply both Resource Definitions to your Organization and add the required matching criteria to both so they are matched to your target Deployments.
Note that this setup does not require any resource to be requested via Score. The implicit workload Resource, when matched to the workload type Resource Definition of this example, will trigger the provisioning of the volume Resource through its own Resource Reference.
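Matching criteria can be added directly to the Resource Definition files before applying them. A minimal sketch, assuming you want to match all Environments of type development and that the criteria are embedded under entity.criteria as in other Humanitec Resource Definition examples (adjust the criteria to your own target Deployments):

entity:
  # ...
  criteria:
    - env_type: development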
These files make up the example:

workload-volume-nfs.yaml: Resource Definition of type workload. It references a Resource of type volume through Resource References, thus adding such a Resource to the Resource Graph and effectively triggering the provisioning of that Resource. It uses the Resource outputs to set a label for a fictitious backup solution, and to add the PersistentVolumeClaim to the Workload container.

volume-nfs.yaml: Resource Definition of type volume. It creates the PersistentVolume and PersistentVolumeClaim manifests and adds the volumes element to the Workload's Pod. The ID generated in the init section will be different for each active Resource, i.e. for each Workload, so that each Workload gets its own PersistentVolume and PersistentVolumeClaim objects created for it. Still, through the common NFS server endpoint, they will effectively share access to the data.
The resulting Resource Graph portion will look like this:
flowchart LR
subgraph resource-graph[Resource Graph]
direction TB
W1((Workload)) --->|implicit reference| W2(Workload)
W2 --->|"resource reference\n${resources.volume...}"| V1(Volume)
end
subgraph key [Key]
VN((Virtual\nNodes))
AN(Active\nResources)
end
resource-graph ~~~ key
The NFS volume can then be used in Score by following the Score volumes example.
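For reference, a Workload could also request such a volume explicitly from its Score file. A minimal sketch following the generic Score volumes pattern (workload name, container name, and image are placeholders):

apiVersion: score.dev/v1b1
metadata:
  name: my-workload
containers:
  main:
    image: busybox
    volumes:
      # Mount the requested volume resource into the container
      - source: ${resources.data}
        target: /tmp/data
resources:
  data:
    type: volume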
Resource Definitions
volume-nfs.yaml (view on GitHub):
# Using the Template Driver for the static provisioning of
# a Kubernetes PersistentVolume and PersistentVolumeClaim combination,
# then adding the volume into the Pod of the Workload.
# The volumeMount in the container is defined in the "workload" type Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-nfs
entity:
  name: volume-nfs
  type: volume
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          # Generate a unique id for each pv/pvc combination.
          # Every Workload will have a separate pv and pvc created for it,
          # but pointing to the same NFS server endpoint.
          volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
          pvBaseName: pv-tmpl-
          pvcBaseName: pvc-tmpl-
          volBaseName: vol-tmpl-
        manifests:
          ####################################################################
          # This template creates the PersistentVolume in the target namespace
          # Modify the nfs server and path to address your NFS server
          ####################################################################
          app-pv-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolume
              metadata:
                name: {{ .init.pvBaseName }}{{ .init.volumeUid }}
              spec:
                capacity:
                  storage: 1Mi
                accessModes:
                  - ReadWriteMany
                nfs:
                  server: nfs-server.default.svc.cluster.local
                  path: "/"
                mountOptions:
                  - nfsvers=4.2
          #########################################################################
          # This template creates the PersistentVolumeClaim in the target namespace
          #########################################################################
          app-pvc-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolumeClaim
              metadata:
                name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
              spec:
                accessModes:
                  - ReadWriteMany
                storageClassName: ""
                resources:
                  requests:
                    storage: 1Mi
                volumeName: {{ .init.pvBaseName }}{{ .init.volumeUid }}
          ########################################################
          # This template creates the volume in the Workload's Pod
          ########################################################
          app-vol-tmpl.yaml:
            location: volumes
            data: |
              name: {{ .init.volBaseName }}{{ .init.volumeUid }}
              persistentVolumeClaim:
                claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
        # Make the volume name and pvc name available for other Resources
        outputs: |
          volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}
          pvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
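For illustration, if the init template generated a volumeUid of 1234-5678 (a hypothetical value), the Pod volume entry rendered from app-vol-tmpl.yaml would look like this, with the outputs volumeName and pvcName resolving to vol-tmpl-1234-5678 and pvc-tmpl-1234-5678 respectively:

name: vol-tmpl-1234-5678
persistentVolumeClaim:
  claimName: pvc-tmpl-1234-5678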
volume-nfs.tf (view on GitHub):
resource "humanitec_resource_definition" "volume-nfs" {
driver_type = "humanitec/template"
id = "volume-nfs"
name = "volume-nfs"
type = "volume"
driver_inputs = {
values_string = jsonencode({
"templates" = {
"init" = <<END_OF_TEXT
# Generate a unique id for each pv/pvc combination.
# Every Workload will have a separate pv and pvc created for it,
# but pointing to the same NFS server endpoint.
volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
pvBaseName: pv-tmpl-
pvcBaseName: pvc-tmpl-
volBaseName: vol-tmpl-
END_OF_TEXT
"manifests" = {
"app-pv-tmpl.yaml" = {
"location" = "namespace"
"data" = <<END_OF_TEXT
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ .init.pvBaseName }}{{ .init.volumeUid }}
spec:
capacity:
storage: 1Mi
accessModes:
- ReadWriteMany
nfs:
server: nfs-server.default.svc.cluster.local
path: "/"
mountOptions:
- nfsvers=4.2
END_OF_TEXT
}
"app-pvc-tmpl.yaml" = {
"location" = "namespace"
"data" = <<END_OF_TEXT
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Mi
volumeName: {{ .init.pvBaseName }}{{ .init.volumeUid }}
END_OF_TEXT
}
"app-vol-tmpl.yaml" = {
"location" = "volumes"
"data" = <<END_OF_TEXT
name: {{ .init.volBaseName }}{{ .init.volumeUid }}
persistentVolumeClaim:
claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
END_OF_TEXT
}
}
"outputs" = "volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}\npvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}"
}
})
}
}