Volumes
How to work with volumes
Kubernetes-based workloads can use volumes for containers in a pod to access and share data via the filesystem.
Developers request volumes for a Workload via Score as resources of type `volume` and map them into a container filesystem via the `volumes` property for a container:
```yaml
apiVersion: score.dev/v1b1
metadata:
  name: my-workload
containers:
  my-container:
    image: .
    volumes:
      - source: ${resources.my-volume}
        target: /target/dir
        # Optionally set the volume to read-only
        readOnly: true
        # Optionally choose a sub-path inside the referenced volume
        path: html
resources:
  my-volume:
    type: volume
```
Note that the resource reference `${resources.my-volume}` does not include a Resource output, unlike references to other Resource Types. In the generated Kubernetes manifests, all `volumes` items from Score will be transformed into container `volumeMounts` items by the Platform Orchestrator.
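For illustration, the Score snippet above might render into a container spec fragment roughly like the following sketch (the volume name is generated by the Platform Orchestrator, and the Score `path` is assumed here to map to a Kubernetes `subPath`):

```yaml
# Illustrative sketch of the generated volumeMounts entry
volumeMounts:
  - name: my-volume-0       # generated name, illustrative
    mountPath: /target/dir
    subPath: html
    readOnly: true
```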
Platform engineers provide the implementations for supported types of volumes through Resource Definitions. They may offer a selection of different volume types through Resource Classes which developers can choose from via Score, as shown in this extended example.
Score file example using volumes of different classes (score.yaml):
```yaml
apiVersion: score.dev/v1b1
metadata:
  name: my-workload
containers:
  my-container:
    image: .
    volumes:
      - source: ${resources.my-ephemeral-volume}
        target: /tmp/ephemeral-dir
      - source: ${resources.my-config-volume}
        target: /var/config
      - source: ${resources.my-projected-volume}
        target: /var/all-in-one
        readOnly: true
      - source: ${resources.my-dynamic-provisioning-volume}
        target: /var/dynamic
      - source: ${resources.my-nfs-volume}
        target: /var/nfs
resources:
  my-ephemeral-volume:
    type: volume
    class: ephemeral
  my-config-volume:
    type: volume
    class: config
  my-projected-volume:
    type: volume
    class: projected
  my-dynamic-provisioning-volume:
    type: volume
    class: standard-rwo
  my-nfs-volume:
    type: volume
    class: nfs
```
Providing Resource Definitions for volumes
A Resource Definition for the Resource Type `volume` needs to provision these elements on the Kubernetes side:
- The `volumes` section in the generated Pods
- Additionally, for persistent volumes: `PersistentVolume` (PV) and/or `PersistentVolumeClaim` (PVC) objects

Whether to create both PV and PVC objects or just the PVC depends on your choice of static vs. dynamic provisioning for the respective volume type. We have included examples for both cases further down in this section.
The common Driver to use for `volume` Resource Definitions is the Template Driver, which works in any situation. There are also convenience Drivers which offer a simplified experience for select volume types.
With the Template Driver, construct your Resource Definition like this:
- Use a `manifests` section with a `location: volumes` setting to create the `volumes` section in the Pod, specifying the data required by the Kubernetes volume type:

Resource Definition specifying the Pod volume section (volume-emptydir.yaml):
```yaml
# This Resource Definition uses the Template Driver to inject an emptyDir volume into the workload
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-emptydir
entity:
  name: volume-emptydir
  type: volume
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        manifests:
          emptydir.yaml:
            location: volumes
            data: |
              name: ${context.res.guresid}-emptydir
              emptyDir:
                sizeLimit: 1024Mi
  criteria:
    - class: ephemeral
```
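For a Workload matching this Definition, the rendered template adds an entry like the following to the Pod `volumes` section (the `guresid` value below is illustrative):

```yaml
# Illustrative rendering of the emptydir.yaml template above
volumes:
  - name: a1b2c3d4-emptydir   # ${context.res.guresid}-emptydir
    emptyDir:
      sizeLimit: 1024Mi
```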
- (For persistent volumes only) Use additional `manifests` sections with a `location: namespace` setting to create `PersistentVolume` and/or `PersistentVolumeClaim` objects as required:

Resource Definition specifying PV and PVC objects for static provisioning (volume-nfs.yaml):
```yaml
# Using the Template Driver for the static provisioning of
# a Kubernetes PersistentVolume and PersistentVolumeClaim combination,
# then adding the volume into the Pod of the Workload.
# The volumeMount in the container is defined in the "workload" type Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-nfs
entity:
  name: volume-nfs
  type: volume
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        cookie: |
          # Store the volumeUid in a cookie to be reused for subsequent deployments
          volumeUid: {{ .init.volumeUid }}
        init: |
          # Generate a unique id for each pv/pvc combination.
          # Every Workload will have a separate pv and pvc created for it,
          # but pointing to the same NFS server endpoint.
          {{- if and .cookie .cookie.volumeUid }}
          volumeUid: {{ .cookie.volumeUid }}
          {{- else }}
          volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
          {{- end }}
          pvBaseName: pv-tmpl-
          pvcBaseName: pvc-tmpl-
          volBaseName: vol-tmpl-
        manifests:
          ####################################################################
          # This template creates the PersistentVolume in the target namespace
          # Modify the nfs server and path to address your NFS server
          ####################################################################
          app-pv-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolume
              metadata:
                name: {{ .init.pvBaseName }}{{ .init.volumeUid }}
              spec:
                capacity:
                  storage: 1Mi
                accessModes:
                  - ReadWriteMany
                nfs:
                  server: nfs-server.default.svc.cluster.local
                  path: "/"
                mountOptions:
                  - nfsvers=4.2
          #########################################################################
          # This template creates the PersistentVolumeClaim in the target namespace
          #########################################################################
          app-pvc-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolumeClaim
              metadata:
                name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
              spec:
                accessModes:
                  - ReadWriteMany
                storageClassName: ""
                resources:
                  requests:
                    storage: 1Mi
                volumeName: {{ .init.pvBaseName }}{{ .init.volumeUid }}
          ########################################################
          # This template creates the volume in the Workload's Pod
          ########################################################
          app-vol-tmpl.yaml:
            location: volumes
            data: |
              name: {{ .init.volBaseName }}{{ .init.volumeUid }}
              persistentVolumeClaim:
                claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
        # Make the volume name and pvc name available for other Resources
        outputs: |
          volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}
          pvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
  criteria:
    - class: nfs
```
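As an illustration: if the `init` template generated the `volumeUid` `1234-5678`, this Definition would create a `PersistentVolume` named `pv-tmpl-1234-5678`, a `PersistentVolumeClaim` named `pvc-tmpl-1234-5678`, and a Pod volume named `vol-tmpl-1234-5678`. The `cookie` template then pins this `volumeUid` so that subsequent deployments reuse the same objects.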
Resource Definition specifying the PVC object for dynamic provisioning (volume-dynamic-provisioning.yaml):
```yaml
# Using the Template Driver for the dynamic provisioning of
# a Kubernetes PersistentVolumeClaim,
# then adding the volume into the Pod of the Workload.
# The PVC requests a storageClass "standard-rwo".
# The volumeMount in the container is defined in the "workload" type Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-standard-dynamic
entity:
  name: volume-standard-dynamic
  type: volume
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          # Generate a unique id for each pvc/volume combination.
          # Every Workload will have a separate pvc created for it.
          volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
          pvcBaseName: pvc-tmpl-
          volBaseName: vol-tmpl-
        manifests:
          ##########################################################################
          # This template creates the PersistentVolumeClaim in the target namespace.
          # spec.volumeName is left unset so that the storage class can
          # dynamically provision the backing PersistentVolume.
          ##########################################################################
          app-pvc-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolumeClaim
              metadata:
                name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
              spec:
                accessModes:
                  - ReadWriteOnce
                storageClassName: "standard-rwo"
                resources:
                  requests:
                    storage: 10Gi
          ########################################################
          # This template creates the volume in the Workload's Pod
          ########################################################
          app-vol-tmpl.yaml:
            location: volumes
            data: |
              name: {{ .init.volBaseName }}{{ .init.volumeUid }}
              persistentVolumeClaim:
                claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
        # Make the volume name and pvc name available for other Resources
        outputs: |
          volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}
          pvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
  criteria:
    - class: standard-rwo
```
- Add matching criteria to provide a particular Resource Class:

```yaml
criteria:
  - class: nfs
```
Convenience Drivers
While the Template Driver works for any type of volume, these Drivers offer simplified handling for select types. See the Driver pages for usage details and examples:

- `volume-nfs` Driver for the volume type `nfs` (static provisioning)
- `volume-pvc` Driver for the volume type `persistentVolumeClaim` (dynamic provisioning)
Ephemeral volumes
Platform engineers can use the Template Driver to provide Resource Definitions for ephemeral volumes.

Resource Definition for an ephemeral volume (volume-emptydir.yaml):
```yaml
# This Resource Definition uses the Template Driver to inject an emptyDir volume into the workload
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-emptydir
entity:
  name: volume-emptydir
  type: volume
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        manifests:
          emptydir.yaml:
            location: volumes
            data: |
              name: ${context.res.guresid}-emptydir
              emptyDir:
                sizeLimit: 1024Mi
  criteria:
    - class: ephemeral
```
Projected volumes
Platform engineers can use the Template Driver to provide Resource Definitions for projected volumes.

Resource Definition for a projected volume (volume-projected.yaml):
```yaml
# This Resource Definition uses the Template Driver to create a projected volume
# accessing a ConfigMap and the downwardAPI
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-projected
entity:
  name: volume-projected
  type: volume
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        manifests:
          projected.yaml:
            location: volumes
            data: |
              name: ${context.res.guresid}-projected
              projected:
                sources:
                  - downwardAPI:
                      items:
                        - path: "labels"
                          fieldRef:
                            fieldPath: metadata.labels
                  - configMap:
                      # The ConfigMap named here needs to exist. The Resource Definition does not create it
                      name: log-config
                      items:
                        - key: log_level
                          path: log_level.conf
  criteria:
    - class: projected
```
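The referenced ConfigMap must be created separately, for example through another manifest or deployment process. A minimal sketch of a matching ConfigMap (only the name `log-config` and the key `log_level` are prescribed by the Definition above; the value is illustrative):

```yaml
# Hypothetical ConfigMap matching the projected volume source above;
# deploy it to the Workload's target namespace
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-config
data:
  log_level: "INFO"
```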
Persistent volumes
Static provisioning
For a persistent volume with static provisioning, the `volume` Resource Definition needs to provision:

- the `volumes` section in the Pod
- the `PersistentVolume` and `PersistentVolumeClaim` objects in the target namespace

You can use either the Template Driver (for any type of volume) or the convenience `volume-nfs` Driver (for the volume type `nfs` only).
Resource Definition for a persistent volume (static provisioning) using the Template Driver (volume-nfs.yaml):
```yaml
# Using the Template Driver for the static provisioning of
# a Kubernetes PersistentVolume and PersistentVolumeClaim combination,
# then adding the volume into the Pod of the Workload.
# The volumeMount in the container is defined in the "workload" type Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-nfs
entity:
  name: volume-nfs
  type: volume
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        cookie: |
          # Store the volumeUid in a cookie to be reused for subsequent deployments
          volumeUid: {{ .init.volumeUid }}
        init: |
          # Generate a unique id for each pv/pvc combination.
          # Every Workload will have a separate pv and pvc created for it,
          # but pointing to the same NFS server endpoint.
          {{- if and .cookie .cookie.volumeUid }}
          volumeUid: {{ .cookie.volumeUid }}
          {{- else }}
          volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
          {{- end }}
          pvBaseName: pv-tmpl-
          pvcBaseName: pvc-tmpl-
          volBaseName: vol-tmpl-
        manifests:
          ####################################################################
          # This template creates the PersistentVolume in the target namespace
          # Modify the nfs server and path to address your NFS server
          ####################################################################
          app-pv-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolume
              metadata:
                name: {{ .init.pvBaseName }}{{ .init.volumeUid }}
              spec:
                capacity:
                  storage: 1Mi
                accessModes:
                  - ReadWriteMany
                nfs:
                  server: nfs-server.default.svc.cluster.local
                  path: "/"
                mountOptions:
                  - nfsvers=4.2
          #########################################################################
          # This template creates the PersistentVolumeClaim in the target namespace
          #########################################################################
          app-pvc-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolumeClaim
              metadata:
                name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
              spec:
                accessModes:
                  - ReadWriteMany
                storageClassName: ""
                resources:
                  requests:
                    storage: 1Mi
                volumeName: {{ .init.pvBaseName }}{{ .init.volumeUid }}
          ########################################################
          # This template creates the volume in the Workload's Pod
          ########################################################
          app-vol-tmpl.yaml:
            location: volumes
            data: |
              name: {{ .init.volBaseName }}{{ .init.volumeUid }}
              persistentVolumeClaim:
                claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
        # Make the volume name and pvc name available for other Resources
        outputs: |
          volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}
          pvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
  criteria:
    - class: nfs
```
Note that the `PersistentVolumeClaim` template sets `storageClassName` to `""` to effectively disable dynamic provisioning, as shown in the Kubernetes documentation.

Resource Definition for a persistent nfs volume (static provisioning) using the volume-nfs Driver (volume-nfs.yaml):
```yaml
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-nfs
entity:
  type: volume
  name: volume-nfs
  driver_type: humanitec/volume-nfs
  driver_inputs:
    values:
      path: "/"
      server: nfs-server.default.svc.cluster.local
  criteria:
    - class: nfs
```
Dynamic provisioning
For a persistent volume with dynamic provisioning, the `volume` Resource Definition needs to provision:

- the `volumes` section in the Pod
- the `PersistentVolumeClaim` object in the target namespace

You can use either the Template Driver (for any type of volume) or the convenience `volume-pvc` Driver (for the volume type `persistentVolumeClaim` only).
Resource Definition for a persistent volume (dynamic provisioning) using the Template Driver (volume-dynamic-provisioning.yaml):
```yaml
# Using the Template Driver for the dynamic provisioning of
# a Kubernetes PersistentVolumeClaim,
# then adding the volume into the Pod of the Workload.
# The PVC requests a storageClass "standard-rwo".
# The volumeMount in the container is defined in the "workload" type Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-standard-dynamic
entity:
  name: volume-standard-dynamic
  type: volume
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        init: |
          # Generate a unique id for each pvc/volume combination.
          # Every Workload will have a separate pvc created for it.
          volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
          pvcBaseName: pvc-tmpl-
          volBaseName: vol-tmpl-
        manifests:
          ##########################################################################
          # This template creates the PersistentVolumeClaim in the target namespace.
          # spec.volumeName is left unset so that the storage class can
          # dynamically provision the backing PersistentVolume.
          ##########################################################################
          app-pvc-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolumeClaim
              metadata:
                name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
              spec:
                accessModes:
                  - ReadWriteOnce
                storageClassName: "standard-rwo"
                resources:
                  requests:
                    storage: 10Gi
          ########################################################
          # This template creates the volume in the Workload's Pod
          ########################################################
          app-vol-tmpl.yaml:
            location: volumes
            data: |
              name: {{ .init.volBaseName }}{{ .init.volumeUid }}
              persistentVolumeClaim:
                claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
        # Make the volume name and pvc name available for other Resources
        outputs: |
          volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}
          pvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
  criteria:
    - class: standard-rwo
```
Resource Definition for a persistent volume (dynamic provisioning) using the volume-pvc Driver (volume-pvc.yaml):
```yaml
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-pvc
entity:
  type: volume
  driver_type: humanitec/volume-pvc
  name: volume-pvc
  driver_inputs:
    values:
      access_modes: ReadWriteOnce
      capacity: 10Gi
  criteria:
    - {}
```
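The empty matching criteria `- {}` let this Definition match any `volume` resource request that is not claimed by more specific criteria, so it can serve as a catch-all default. A Score file can then request a volume without naming a class:

```yaml
# Score resource matched by the catch-all criteria above
resources:
  my-volume:
    type: volume
```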
Shared volumes
A “shared volume” can mean different things depending on where access to a volume is shared. The scenarios below show the different options and how they map to Platform Orchestrator concepts.
When implementing any scenario, you will need to ensure that the access modes supported by the resource provider of a persistent volume facilitate the kind of shared access for your workloads.
Sharing a volume between containers in a workload
Several containers of the same workload may share a volume by referencing the same resource of type `volume` in Score.
```mermaid
flowchart LR
  subgraph k8s[Kubernetes Namespace]
    direction LR
    subgraph pod1[Pod]
      direction LR
      container1_1(Container 1) -->|volumeMount| volume1(volume)
      container1_2(Container 2) -->|volumeMount| volume1
    end
    volume1 --> pvc(PVC) --> pv(PV)
  end
  storage[Storage]
  pv --> storage
```
To realize this setup, reference the same `volume` resource in several containers in Score.

Score file for sharing a volume between containers:
```yaml
apiVersion: score.dev/v1b1
metadata:
  name: my-shared-emptydir-workload
containers:
  my-container:
    image: .
    volumes:
      - source: ${resources.my-ephemeral-volume}
        target: /tmp/ephemeral-dir
  my-other-container:
    image: .
    volumes:
      - source: ${resources.my-ephemeral-volume}
        target: /tmp/ephemeral-dir
resources:
  my-ephemeral-volume:
    type: volume
    class: ephemeral
```
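Since both containers reference the same `volume` resource, the Pod contains a single volume instance, so files written by one container are visible to the other (for the `ephemeral` class shown here, that is one shared `emptyDir`).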
Accessing shared storage through separate volumes
Several Workloads may access the same persistent storage through separate `volume` Resources. This can be achieved by having the `PersistentVolume` objects created for the volumes point at the same external storage endpoint.

In this setup, each workload will have its own `capacity` allotment due to the separate `PersistentVolume` objects.
```mermaid
flowchart LR
  subgraph k8s[Kubernetes Namespace]
    direction LR
    subgraph pod1[Pod 1]
      direction LR
      container1_1(Container) -->|volumeMount| volume1(volume)
    end
    volume1 --> pvc1(PVC) --> pv1(PV)
    subgraph pod2[Pod 2]
      direction TB
      container2_1(Container) -->|volumeMount| volume2(volume)
    end
    volume2 --> pvc2(PVC) --> pv2(PV)
  end
  storage[Storage]
  pv1 --> storage
  pv2 --> storage
```
This example `volume` Resource Definition will create separate Pod `volumes`, `PersistentVolume`, and `PersistentVolumeClaim` objects for each workload requesting a `volume` Resource matching the Definition. Since all `PersistentVolume` objects refer to the same endpoint, they will effectively access the same storage. The storage provider used for the type of volume needs to support this shared access.
Resource Definition for sharing storage through separate volumes (volume-nfs.yaml):
```yaml
# Using the Template Driver for the static provisioning of
# a Kubernetes PersistentVolume and PersistentVolumeClaim combination,
# then adding the volume into the Pod of the Workload.
# The volumeMount in the container is defined in the "workload" type Resource Definition.
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: volume-nfs
entity:
  name: volume-nfs
  type: volume
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        cookie: |
          # Store the volumeUid in a cookie to be reused for subsequent deployments
          volumeUid: {{ .init.volumeUid }}
        init: |
          # Generate a unique id for each pv/pvc combination.
          # Every Workload will have a separate pv and pvc created for it,
          # but pointing to the same NFS server endpoint.
          {{- if and .cookie .cookie.volumeUid }}
          volumeUid: {{ .cookie.volumeUid }}
          {{- else }}
          volumeUid: {{ randNumeric 4 }}-{{ randNumeric 4 }}
          {{- end }}
          pvBaseName: pv-tmpl-
          pvcBaseName: pvc-tmpl-
          volBaseName: vol-tmpl-
        manifests:
          ####################################################################
          # This template creates the PersistentVolume in the target namespace
          # Modify the nfs server and path to address your NFS server
          ####################################################################
          app-pv-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolume
              metadata:
                name: {{ .init.pvBaseName }}{{ .init.volumeUid }}
              spec:
                capacity:
                  storage: 1Mi
                accessModes:
                  - ReadWriteMany
                nfs:
                  server: nfs-server.default.svc.cluster.local
                  path: "/"
                mountOptions:
                  - nfsvers=4.2
          #########################################################################
          # This template creates the PersistentVolumeClaim in the target namespace
          #########################################################################
          app-pvc-tmpl.yaml:
            location: namespace
            data: |
              apiVersion: v1
              kind: PersistentVolumeClaim
              metadata:
                name: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
              spec:
                accessModes:
                  - ReadWriteMany
                storageClassName: ""
                resources:
                  requests:
                    storage: 1Mi
                volumeName: {{ .init.pvBaseName }}{{ .init.volumeUid }}
          ########################################################
          # This template creates the volume in the Workload's Pod
          ########################################################
          app-vol-tmpl.yaml:
            location: volumes
            data: |
              name: {{ .init.volBaseName }}{{ .init.volumeUid }}
              persistentVolumeClaim:
                claimName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
        # Make the volume name and pvc name available for other Resources
        outputs: |
          volumeName: {{ .init.volBaseName }}{{ .init.volumeUid }}
          pvcName: {{ .init.pvcBaseName }}{{ .init.volumeUid }}
  criteria:
    - class: nfs
```
The resources requested via Score must be Private Resources in the Platform Orchestrator, i.e. they must be requested without an `id`.
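For example, each Score file can declare its own private resource like this (a sketch; the resource and class names are illustrative):

```yaml
# Score file 1: private resource, no "id" set
resources:
  my-volume:
    type: volume
    class: nfs
```

```yaml
# Score file 2: private resource, no "id" set
resources:
  my-volume:
    type: volume
    class: nfs
```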
Accessing shared storage through shared volumes
Several Workloads may access the same persistent storage through a shared `volume` Resource. They will then also share the `PersistentVolumeClaim` and `PersistentVolume` objects.

Compared to the previous scenario using separate volumes, only one `PersistentVolume` object accesses the external storage endpoint. The workloads will share the `capacity` allotment of the shared `PersistentVolume` object instead of having separate capacities.
```mermaid
flowchart LR
  subgraph k8s[Kubernetes Namespace]
    direction LR
    subgraph pod1[Pod 1]
      direction LR
      container1_1(Container) -->|volumeMount| volume1(volume)
    end
    subgraph pod2[Pod 2]
      direction LR
      container2_1(Container) -->|volumeMount| volume2(volume)
    end
    volume1 --> pvc(PVC) --> pv(PV)
    volume2 --> pvc
  end
  storage[Storage]
  pv --> storage
```
The resources requested via Score must now be Shared Resources in the Platform Orchestrator, i.e. they must be requested with a common `id`.
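For example, both Score files can reference the same resource through a common `id` (a sketch; the `id` value is illustrative):

```yaml
# Score file 1: shared resource via a common "id"
resources:
  my-volume:
    type: volume
    class: nfs
    id: shared-storage
```

```yaml
# Score file 2: the same "id" resolves to the same resource
resources:
  my-volume:
    type: volume
    class: nfs
    id: shared-storage
```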
Examples
The example library provides a range of examples on volumes, including those shown on this page.