Resource management theory

Introduction

Humanitec’s Platform Orchestrator is central to Dynamic Configuration Management (DCM), a methodology in which developers create Workload specifications describing everything their Workloads need to run successfully. The specification is then used to dynamically create the configuration with every deployment. With DCM, developers do not need to define or maintain any environment-specific configuration for their Workloads. The Platform Orchestrator’s approach to resource management and infrastructure orchestration is what makes this possible. Let’s take a look at how.

First, the Platform Orchestrator takes an abstract request from developers like “my Workload requires a db of type Postgres” and creates/updates the correct Resource depending on the context. This approach is the foundation for golden paths, separation of concerns, and the setup of scalable Internal Developer Platforms.

Resource management overview

This approach has an array of positive implications:

  • Rather than maintaining hundreds of vaguely different configurations of the same Resource across your Organization, you only need to focus on two or three options per Resource type (e.g. one Postgres configuration for production Environments and another for development). This allows for a very high degree of standardization, improving security, ease of maintenance, and efficiency.
  • Because Resources are created/updated with every deployment, you can roll out changes centrally and have all active Resources updated automatically, keeping config drift to a minimum.
  • It’s easy for developers to consume. They just describe in general terms what their Workload requires (“I need a database of type Postgres”) and where they are deploying (environment = production). The Platform Orchestrator figures out what Resource to use, how, and when. This spares your platform team repetitive requests from overwhelmed developers.

Following a deployment to connect the dots

A good way to understand how the Platform Orchestrator manages Resources is to follow what happens when a developer deploys “something”, because, as discussed, infrastructure orchestration happens with every deployment.

With every git-push, the developer also pushes a Workload specification, for instance Score, which describes the Workload and its dependencies in general terms. At this point the Resource is only specified in terms of the “type” of Resource required (a db of type Postgres). The Score file runs through the CI pipeline and hits the Platform Orchestrator. The Platform Orchestrator now executes the RMCD pattern: Read, Match, Create, Deploy.

In the Read phase the Platform Orchestrator simply reads the Score file. With every deployment it starts from scratch and never assumes anything already exists. Let’s say in this example the Score file indicates that the Workload requires Postgres.

In the Match phase the Platform Orchestrator reads the context from the metadata of the deployment (the user is deploying to an Environment of type staging) and now matches the correct Resource Definition for this context. A Resource Definition tells the Platform Orchestrator what Resource to use, how, and when. We’ll learn more about this in the next section.

In the Create phase the Platform Orchestrator creates the app configs and creates/updates the Resources. It pulls the credentials so they can be injected as secrets at runtime.

In the Deploy phase the Platform Orchestrator then deploys the image, uses the app configs to configure the cluster, and orchestrates the infrastructure. It registers the secrets and finally injects them into the container at runtime.

Resource Definitions define the what, when, and how

Now you know that the Platform Orchestrator creates/updates Resources with every single deployment. It takes the abstract request and then does something, somehow, at some point. But what does it do? When? How? The Platform Orchestrator doesn’t know either. It’s a sophisticated API layer. The thing that tells it how to resolve the abstract request to the actual implementation is what’s called a Resource Definition. It’s a way to tell the Platform Orchestrator what Resource to use, when to use it, and how to configure it in detail.

Let’s deep-dive into the example outlined above. The developer’s abstract request targets an Environment of type staging, and the Workload spec looks as follows:
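
What follows is a minimal sketch of such a Score file; the app name, container name, and image are illustrative placeholders:

```yaml
apiVersion: score.dev/v1b1
metadata:
  name: test-app           # placeholder app name
containers:
  main:
    image: registry.example.com/test-app:latest   # placeholder image
resources:
  # The abstract request: "my Workload requires a db of type Postgres".
  # Note that no environment-specific configuration appears here.
  db:
    type: postgres
```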

How exactly does the Platform Orchestrator interpret this? It does that by identifying the correct Resource Definition. Here’s an example written using the Humanitec Terraform provider:
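
The sketch below assumes the humanitec_resource_definition resource and the humanitec/postgres-cloudsql-static Driver; IDs and Driver Input values are placeholders, and exact attribute names may vary by provider version:

```hcl
resource "humanitec_resource_definition" "db_staging" {
  id          = "db-staging"
  name        = "db-staging"
  type        = "postgres"                            # the "what"
  driver_type = "humanitec/postgres-cloudsql-static"  # the "how": the Driver

  # Driver Inputs: the schema depends on the Driver; values are placeholders.
  driver_inputs = {
    values_string = jsonencode({
      instance = "my-project:us-central1:my-instance"
      name     = "staging-db"
    })
  }

  # The "when": Matching Criteria.
  criteria = [
    {
      app_id = "test-app"
    }
  ]
}
```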

The what is specified by the type=postgres line.

The when is specified in the criteria section. It defines that this Resource Definition is used whenever the deployment context matches the criteria, here app_id = "test-app", which makes for a particularly narrow Definition.

The how is specified by referencing the Driver. The Driver is the thing that will actually create or update the Resource. Drivers are little services that are called by the Resource Definition, can receive input from it, create or update the Resource, and return the credentials as output.

A really helpful way to understand how the Platform Orchestrator is choosing the right Resource Definition is to look at the following question/answer sequence for the abstract request of the developer “I need a db of type Postgres”:

(Figure: Orchestrator question-and-answer sequence)

Resource Definitions can be configured as code using the Terraform Provider or through the user interface. You usually have one Resource Definition for many running Resources. The wider you set the criteria, the higher the degree of standardization. If your criteria is app_id = "test-app", the Platform Orchestrator will use this Resource Definition only for the app with ID test-app. But if you set it to env_type = "staging", you’ll suddenly use the same Resource Definition for every single Resource associated with a Workload running in staging. The effect is enormous.
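
To illustrate, here is the same sketch as above with the wide criterion; as before, names and Driver Input values are placeholders:

```hcl
resource "humanitec_resource_definition" "db_all_staging" {
  id          = "db-all-staging"
  name        = "db-all-staging"
  type        = "postgres"
  driver_type = "humanitec/postgres-cloudsql-static"

  driver_inputs = {
    values_string = jsonencode({
      instance = "my-project:us-central1:my-instance"  # placeholder
    })
  }

  # Wide criterion: one Definition now serves every Workload deploying
  # to any Environment of type "staging", instead of a single app.
  criteria = [
    {
      env_type = "staging"
    }
  ]
}
```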

The purpose of Drivers: To create/update Resources

We learned that once the right Resource Definition is matched, it will in turn call a Driver to do the actual job of putting the Resource into the right state, optionally forwarding additional inputs to the Driver.

A Driver is the code that runs to actually create or update a Resource based on a unique ID. It’s an HTTP web service that implements the Humanitec Driver API, an extension point of the Platform Orchestrator. The Resource Definition defines Driver Inputs that are passed to the Resource Driver, which then performs the provisioning of a Resource. The schema of the Driver Inputs depends on the Driver. For example, provisioning a dns Resource with the humanitec/dns-cloudflare Driver requires the Cloudflare Zone ID and the parent domain to be specified, while the humanitec/dns-wildcard Driver only requires the parent domain.
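
As a sketch of the simpler case, a dns Resource Definition using the humanitec/dns-wildcard Driver might look like this; the input key name and domain value are assumptions, so check the Driver’s documented schema:

```hcl
resource "humanitec_resource_definition" "dns" {
  id          = "dns-wildcard"
  name        = "dns-wildcard"
  type        = "dns"
  driver_type = "humanitec/dns-wildcard"

  driver_inputs = {
    values_string = jsonencode({
      # The parent domain under which records are created (placeholder).
      domain = "staging.example.com"
    })
  }
}
```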

Driver Inputs can also specify, for instance, exactly how a namespace should be named, following the conventions of your organization:
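
Here is a sketch using the humanitec/template Driver for a k8s-namespace Resource; the templating context placeholders (.app.id, .env.id) are assumptions:

```hcl
resource "humanitec_resource_definition" "namespace" {
  id          = "org-namespace"
  name        = "org-namespace"
  type        = "k8s-namespace"
  driver_type = "humanitec/template"

  driver_inputs = {
    values_string = jsonencode({
      templates = {
        # Derive the namespace name from the app and Environment IDs,
        # e.g. "test-app-staging".
        init    = "name: {{ .app.id }}-{{ .env.id }}"
        outputs = "namespace: {{ .init.name }}"
      }
    })
  }
}
```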

The Platform Orchestrator does not care how exactly a Driver executes its task. Drivers can work on their own or in combination with any IaC approach. You can use existing open source Drivers or write your own. Here’s a sequence of how a Driver would work:

  1. The Driver is called by the Resource Definition, which can make a PUT request to the Driver with the Driver Inputs.
  2. The Driver creates/updates the Resource.
  3. The Driver fetches credentials from the created Resource and returns them as outputs to the Platform Orchestrator.

There are all sorts of Drivers, but in reality three Drivers cover 90% of the use cases:

  • The Echo Driver returns what it gets as input, ideal for just “wiring up existing Resources” without actually touching them (see the sketch after this list).
  • The Terraform Driver helps you provision Resources outside of the cluster using Terraform.
  • The Template Driver uses the Go templating language (much like Helm) and allows you to produce outputs programmatically. This is great for things in-cluster, e.g. creating Kubernetes manifests. For instance, you can create an Istio VirtualService.
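
To make the Echo Driver case concrete, here is a sketch that registers an already-existing production database without managing its life cycle; the hostname and values are placeholders, and real credentials would go into the secrets input rather than plain values:

```hcl
resource "humanitec_resource_definition" "db_prod" {
  id          = "db-prod-existing"
  name        = "db-prod-existing"
  type        = "postgres"
  driver_type = "humanitec/echo"  # returns its inputs as outputs, touches nothing

  driver_inputs = {
    values_string = jsonencode({
      host = "prod-db.internal.example.com"  # placeholder
      port = 5432
      name = "prod-db"
    })
    # Credentials would be passed via the secrets input (omitted here).
  }

  criteria = [
    {
      env_type = "production"
    }
  ]
}
```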

There are also Resource-specific Drivers (e.g. the cloudsql Driver) or you can build your own. Below is a decision tree explaining what Driver to use and when:

(Figure: Drivers decision tree)

How to migrate your existing setup with little disruption

You’ve probably read all of this and thought “nice concept, but I have an incredibly complex setup, so how do I move this over?” Well, the answer is: gradually. And it’s a lot easier than you think.

  • First, if you already have IaC, you can just reuse those modules. The association is done using Drivers. If you’re using Terraform, for instance, the Terraform Driver allows you to use Terraform to create/update Resources (see the sketch after this list).
  • And do you need to write Resource Definitions for all possible Resource combinations? Not really. Most Resource Definitions are available as open source, and you can just apply them to your cloud. You only have to alter the Definitions that are important and individual for your specific situation. For instance, you might be OK using a default Resource Definition for Redis in staging but want to add your own in production.
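
Here is a sketch of a Resource Definition that reuses an existing Terraform module via the Terraform Driver; the repository URL, module path, and variable names are placeholders, and the exact input schema depends on the Driver version:

```hcl
resource "humanitec_resource_definition" "s3" {
  id          = "s3-from-module"
  name        = "s3-from-module"
  type        = "s3"
  driver_type = "humanitec/terraform"

  driver_inputs = {
    values_string = jsonencode({
      # Point the Driver at an existing module (placeholder repo and path).
      source = {
        url  = "https://github.com/my-org/terraform-modules.git"
        path = "modules/s3"
        rev  = "refs/heads/main"
      }
      variables = {
        bucket_prefix = "test-app"
      }
    })
  }

  criteria = [
    {
      env_type = "staging"
    }
  ]
}
```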

What most people do is take an approach that lets them reap the benefits ultra-fast yet keep their production setup constant. Here’s how:

  • Transform your app configs to Score (which takes minutes).
  • Deploy them using default, open source Resource Definitions for all deployments up to production.
  • For production Resources, keep them out of the life-cycle management of the Platform Orchestrator and just register them with the Echo Driver (as sketched above). You can later replace the production Resource Definitions bit by bit.

With this approach you can literally have everything up and running in days, the developer experience is already clean and neat, and you don’t have to touch the production infrastructure at all for now.

Next steps

This document was a high-level introduction to Resource management and infrastructure orchestration with Humanitec. But now you may be wondering whether this approach can deal with the immensely complex, nested infrastructure setup you’re running. The answer is yes: it’s already running some of the most complex infrastructure setups out there.

If you want to get your hands dirty and try it, fork one of our reference architectures and check out the accompanying tutorials.
