# Serverless
## Kubernetes API
The Humanitec Platform Orchestrator currently focuses exclusively on containerized workloads running on Kubernetes.
Generally, to deploy workloads, the Orchestrator either needs a Kubernetes API server endpoint to deploy Kubernetes manifests directly (in Direct Cluster Mode), or it pre-generates Kubernetes manifests to be finalized on the cluster (in any of the Humanitec Operator modes).
Either way, a Kubernetes API is required to process the resulting manifests.
## Serverless
“Serverless” is an umbrella term for managed, usually cloud-based services that abstract away parts of the underlying infrastructure to simplify operations and management. Many “serverless” offerings exist for running containerized workloads. However, even where these services are themselves based on Kubernetes under the hood, they provide their own way of deploying and managing workloads and, by design, do not expose a Kubernetes API.
Find the list of supported and unsupported services below.
### Supported services
Beyond what is covered in Kubernetes, the Platform Orchestrator supports deploying workloads into the services shown below. These are managed Kubernetes offerings backed by a serverless technology.
- AKS virtual nodes
  - A number of limitations apply
  - To configure your workload for AKS virtual nodes, the workload Resource Definition `aci-workload.yaml` below shows how to set the required properties as per the AKS Virtual Nodes documentation:
```yaml
# Add tolerations and a nodeSelector to the Workload to make it runnable on AKS virtual nodes
# served through Azure Container Instances (ACI).
# See https://learn.microsoft.com/en-us/azure/aks/virtual-nodes-cli
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: aci-workload
entity:
  name: aci-workload
  type: workload
  driver_type: humanitec/template
  driver_inputs:
    values:
      templates:
        outputs: |
          update:
            - op: add
              path: /spec/tolerations
              value:
                - key: "virtual-kubelet.io/provider"
                  operator: "Exists"
                - key: "azure.com/aci"
                  effect: "NoSchedule"
            - op: add
              path: /spec/nodeSelector
              value:
                kubernetes.io/role: agent
                beta.kubernetes.io/os: linux
                type: virtual-kubelet
  criteria: []
```
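For context, a minimal Score file for a workload that such a Definition could target might look like the following sketch. The workload name and container image are illustrative assumptions, not taken from the Humanitec documentation:

```yaml
# Hypothetical minimal Score workload; the Deployment generated for it would
# receive the tolerations and nodeSelector injected by the aci-workload
# Definition above. Name and image are illustrative.
apiVersion: score.dev/v1b1
metadata:
  name: hello-aci
containers:
  hello:
    image: nginx:1.25
```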
- AWS Fargate with Amazon EKS
  - A number of limitations apply
  - To control the target namespace for your Humanitec Platform Orchestrator Deployments to match an EKS Fargate profile, see Namespaces
  - To set labels on your workloads to match an EKS Fargate profile, see the Score Examples or Resource Definition Examples
- GKE Autopilot clusters
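As background on the Fargate profile matching mentioned above: EKS schedules a pod onto Fargate only when the pod's namespace, and optionally its labels, match a selector of a Fargate profile. A hedged sketch of such a profile as an `eksctl` ClusterConfig fragment follows; the cluster name, region, profile name, namespace, and label are illustrative assumptions:

```yaml
# Hypothetical eksctl ClusterConfig defining a Fargate profile. Pods deployed
# into namespace "development" carrying the label below would be scheduled
# onto Fargate; all names here are illustrative.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: eu-west-1
fargateProfiles:
  - name: humanitec-workloads
    selectors:
      - namespace: development
        labels:
          workload-target: fargate
```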
### Unsupported services
The Platform Orchestrator currently does not support deploying workloads into these services:
- Amazon Elastic Container Service (ECS)
- Amazon ECS on AWS Fargate (as distinct from AWS Fargate with Amazon EKS, see above)
- AWS Lambda
- Azure Functions with containers
- Azure Container Apps
- Azure Container Instances (as distinct from AKS virtual nodes, see above)
- Google Cloud Run