Dynamic Configuration Management

This section explains how the platform introduces abstraction, how these abstractions are resolved into executable configurations, what happens when you hit “deploy”, and how the request flows through the Internal Developer Platform (IDP) step by step. If you’d like to start with the first tutorial from a developer perspective, head to Scaffolding a new Workload. For tutorials from a platform engineer perspective, head to Provisioning a Redis cluster.

How developers describe things abstractly and how platform engineers generalize

The reference architecture introduces a Workload-centered abstraction. This means the defining entity is the “Workload”. This Workload can express dependencies such as Resources, along with metadata and inputs. We’re using Score as the Workload specification to express the abstraction as code. You can fine-tune the level of abstraction on the Score layer depending on your organization’s preferences, but in this case, we’re going with the default abstractions.
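A minimal Score file along these lines might look as follows. This is a hedged sketch: the Workload name `python-service`, the Resource name `db`, and the placeholder values are illustrative, not prescribed by the reference architecture.

```yaml
# score.yaml - hypothetical minimal Workload specification
apiVersion: score.dev/v1b1
metadata:
  name: python-service
containers:
  app:
    image: .   # image is resolved at deploy time
    variables:
      # placeholders are resolved per Environment by the Platform Orchestrator
      CONNECTION_STRING: "postgresql://${resources.db.user}:${resources.db.password}@${resources.db.host}:${resources.db.port}/${resources.db.name}"
resources:
  db:
    type: postgres   # an abstract dependency; no provider details here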



This abstraction replaces all other configuration at the Workload source-code level and is Environment-agnostic. That means this one file can be used across local, dev, staging, production, and preview Environments. This leaves the repository of the Workload looking as follows:
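As a hypothetical example (file and folder names are illustrative), the Workload repository reduces to the application source plus a single specification:

```
python-service/
├── src/           # application source code
├── Dockerfile
└── score.yaml     # Environment-agnostic Workload specification
```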



This code alone is not enough to actually provision the Environment. It’s solely the basis for the Platform Orchestrator to create the actual app and infrastructure configurations.

The Platform Orchestrator requires three additional inputs:

  1. The context: This will be pulled from the deployment metadata and provides the specific application ID, Environment ID, and Environment type the deployment is targeting. The combined information from the Score file and the context allow the Platform Orchestrator to match the right Resource Definition for any Resources requested.
  2. The Resource Definition: This defines what, when, and how the Platform Orchestrator should update/create any given Resource. In the example below, we can see that this particular Resource Definition should be used if the what (resource=postgres) matches the request in the Score file AND if the context (app-id=python-service) matches the deployment metadata. Once those conditions are met, the Platform Orchestrator picks the correct driver to update/create the Resource, and forwards any additional inputs into the driver to customize the Resource to the specific situation.

Resource Definition

  3. The IaC code: This is used by the driver to create/update the Resource. Our example is using Terraform. Note that this code can be shared across many different Workloads, and applying changes to it will result in an eventual update across all Resources that match the Resource Definition.

Resource Definition Driver
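Putting these pieces together, a Resource Definition might be sketched as follows. This is a hypothetical illustration: the exact field names depend on the Orchestrator’s API version, and the IDs, module path, and criteria values are assumptions.

```yaml
# Hypothetical Resource Definition sketch
id: staging-postgres
type: postgres                    # the "what": matches `type: postgres` in the Score file
driver_type: humanitec/terraform  # the "how": driver that applies the IaC code
driver_inputs:
  values:
    source:
      path: terraform/postgres    # shared Terraform module; changes here roll out
                                  # to all Resources matching this definition
    variables:
      instance_size: small
criteria:
  - app_id: python-service        # the "when": matched against deployment metadata
    env_type: staging
```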

This means that the final repository structure separates the app source code from the platform source code, and usually looks like this:

Repository structure

This allows for the highest degree of standardization and abstraction, which lowers cognitive load.

Continue to the next section to understand how the Platform Orchestrator interprets the repository to create the executable configurations.

What happens when you hit “deploy”?

The heartbeat of this reference architecture is the deployment. Several systems are involved when you deploy your Workload, and each of them has its own “job to be done”. Here’s how the request runs through them:


Let’s assume we’re a developer and have applied a change to our source code. We have also added a new Resource dependency to our Score file; in other words, we’ve requested a new Resource of type postgres. We now git-push these changes to our version control system, which is GitHub in the reference architecture example. Using tags, we indicate that the change is going to an Environment of type staging.

GitHub Actions is configured to run on commit, and will now build the image and push it to the image registry. It will also run the score-humanitec CLI, which converts the generic Score file into a Humanitec Deployment Delta that can be interpreted by the Platform Orchestrator.
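A workflow wiring this up might look roughly as follows. This is a hedged sketch only: the tag pattern, registry URL, secret names, and CLI flags are assumptions and would need to be checked against the score-humanitec CLI and your own setup.

```yaml
# .github/workflows/deploy.yaml - hypothetical sketch
name: deploy
on:
  push:
    tags: ["staging-*"]   # tag indicates the target Environment type
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      IMAGE: registry.example.com/python-service:${{ github.sha }}
    steps:
      - uses: actions/checkout@v4
      # build and push the Workload image to the image registry
      - run: |
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
      # convert the Score file into a Deployment Delta and hand it
      # to the Platform Orchestrator (flags are illustrative)
      - run: |
          score-humanitec delta --file score.yaml \
            --app python-service --env staging \
            --token ${{ secrets.HUMANITEC_TOKEN }} --deploy
```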

The deployment metadata and the Deployment Delta will now be pushed to the Platform Orchestrator, which executes an RMCD pattern. RMCD stands for the following phases:

  1. Read: The Platform Orchestrator reads the metadata and the changes in the Workload specification / deployment delta (“Resource of type postgres”).
  2. Match: The Platform Orchestrator matches the context (“env type=staging, app id=my-app”) and looks up the correct Resource Definition from the context and the Resource type requested.
  3. Create: In this phase, the Platform Orchestrator creates app configurations and updates/creates infrastructure using the approach configured on the Resource Definition level, e.g. through Terraform.
  4. Deploy: The Platform Orchestrator then deploys the image, uses the app configs to configure the cluster, and orchestrates the infrastructure. It registers the secrets and finally injects them into the container at runtime.

This procedure of dynamically generating app and infra configs at deployment time is called Dynamic Configuration Management.

Dynamic Configuration Management

We’ve now reached the desired state where all Resources are updated and the application is deployed and running.

Variations to the theme

There are variations to the process. You could use the Platform Orchestrator’s built-in deployment pipeline functionality instead of GitHub Actions, or run checks and get sign-offs before deploying.

You could also use the Platform Orchestrator in tandem with GitOps operators. In this case, the Platform Orchestrator would simply create config files and push them to a Git repo. A tool like ArgoCD would pull this into the cluster from there.
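In the GitOps variation, the operator watches the config repository the Orchestrator pushes to. A hypothetical Argo CD Application for this could be sketched as follows; the repository URL, paths, and namespaces are illustrative assumptions.

```yaml
# Hypothetical Argo CD Application watching Orchestrator-generated configs
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: python-service-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/app-configs  # repo the Orchestrator pushes rendered configs to
    targetRevision: main
    path: python-service/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  syncPolicy:
    automated: {}   # Argo CD pulls changes into the cluster automatically
```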
