First Deployment

Welcome

Welcome to your first deployment tutorial!

In this tutorial you will learn how to create your organization and configure the orchestrator to be able to deploy into your infrastructure. This is the platform engineer perspective.

You will also execute several deployments to understand how the orchestrator works in general and how the configuration you created comes into play specifically. This is the developer perspective.

To make sure you understand all of the moving pieces, you will review the object system of the orchestrator before we get going and assemble the parts.

This tutorial focuses on Google Cloud as the deployment target and demonstrates deployments onto Kubernetes and VMs. The Platform Orchestrator itself is fully cloud- and compute-agnostic, and support for additional technologies and runtimes is continuously expanding. If you’d like to see the orchestrator in action with technologies not covered here, you can book a demo with us.

You should be proficient in the following technologies to enjoy this tutorial without additional learning:

  • Terraform / OpenTofu
  • YAML / JSON
  • Kubernetes (at the API level)
  • Google Cloud (administration)
  • Git

Prerequisites

Install kubectl

Please refer to the official documentation of the Kubernetes project on how to achieve this for your system: https://kubernetes.io/docs/tasks/tools/

Install hctl CLI

Please refer to the official Humanitec documentation on how to achieve this for your system. You can find it here: CLI installation.

Register your org (if that hasn’t happened yet)

Please go to https://console.humanitec.dev/auth/register and follow the process.

Prepare your cloud environment

The tutorial expects you to have access to a GCP project with sufficient quota to run a Kubernetes cluster and deploy workloads on top of it. It should also be possible to create additional resources, e.g. additional VMs. To be able to execute all parts of the tutorial, your user needs the Owner role in the project.

General intro

Before we dive into the “how” part - let’s quickly recap the foundation together.

What is a platform orchestrator?

In most engineering orgs today, developers are blocked by ticket-based provisioning or overly rigid CI/CD flows. The Platform Orchestrator helps platform teams enable dynamic, policy-enforced environments that still respect governance, identity, and existing infra setups.

Overview of the object system of the orchestrator

To achieve this, the orchestrator makes use of an internal representation of the world that captures all necessary details but exposes only the relevant parts to each persona taking part in the software development lifecycle. This abstraction enables developers to make quicker decisions and shed cognitive load while being able to drive the process in full self-service autonomy.

Projects

From the developer’s perspective it all starts with a project. This is where e.g. all services of a microservice architecture would be bundled as workloads to later create a cohesive deployment for all of them.

A project is also the top-level container that holds environments and binds them together.

Environment Types

Environments are always of a certain type. This helps people understand their purpose beyond the name. A very popular scheme is to categorize environments by their stage in the software development lifecycle to show how far changes have progressed, e.g.: development -> staging -> production. This helps as soon as more than one environment of a type is present. Think of two environments named “apac” and “europe” which are both of type “development”. Each organization has different requirements, however, and you could instead group by geolocation, which would lead to e.g. two environments both named “staging”, but one being of type “apac” and one being of type “europe”. There are no strict rules and you can create as many environment types as you need to fulfill your requirements.
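
To make this concrete, here is a minimal sketch of how environment types could be declared in Terraform. The resource name platform-orchestrator_environment_type and its attributes are assumptions for illustration; the infrastructure repository you will clone later (humanitec.tf) contains the authoritative definitions.

# Sketch only - resource name and attributes are assumptions, see humanitec.tf for the real definition
resource "platform-orchestrator_environment_type" "development" {
  id           = "development"
  display_name = "Development"
}

resource "platform-orchestrator_environment_type" "staging" {
  id           = "staging"
  display_name = "Staging"
}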

Environments

Environments receive deployments. An environment is the logical container for a set of physical systems that receive the deployment and the associated backing infrastructure resources like databases, message queues, etc.

Projects & Environments

Deployments

Deployments describe a state. Each environment can only have one current state but many historical states. Tracking the states enables several capabilities - diffing two states, rolling back to a historical state, or building a new state by applying changes to an existing one. Knowing the state in which an environment “should” be also allows us to compare reality against it and reason if we reached the desired state or if we are observing drift. Knowing this allows us to make decisions on how to handle the situation.

Deployment Manifests

Deployment manifests allow developers to submit the description of a desired state for an environment. The key distinction here is that deployment manifests allow for a declarative description of the “what” and leave out the imperative description of the “how” to reach that state - as such they provide an abstraction for developers.

Deployments & Manifests

Resource Types

To leave the “how” out and get to the “what” in a deployment manifest, we need a mapping between the name of a certain resource and the actual implementation. This mapping also describes the interface, so both sides know what they can expect or need to implement. Developers order resources of a certain type and can expect the defined outputs to be delivered back to them - e.g. a resource of type Postgres will deliver back all the details needed to create a connection string. Platform engineers provide one or many implementations behind the type and know exactly what their implementations need to pass back to the orchestrator. Resource types allow for a clear separation of concerns and responsibilities that is facilitated by the orchestrator.
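
As an illustration, a Postgres resource type could be defined roughly like the sketch below. This is a sketch under assumptions: the resource name platform-orchestrator_resource_type and the output_schema attribute may differ from the actual provider schema, and the output fields simply mirror the ones used by the manifests later in this tutorial.

# Sketch only - resource name and attributes are assumptions, the modules.tf you review later holds the real definitions
resource "platform-orchestrator_resource_type" "postgres" {
  id          = "postgres"
  description = "A PostgreSQL database"

  # Outputs the type promises to deliver back to the developer
  output_schema = jsonencode({
    type = "object"
    properties = {
      hostname = { type = "string" }
      port     = { type = "number" }
      database = { type = "string" }
      username = { type = "string" }
      password = { type = "string" }
    }
  })
}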

Modules

A module is an implementation of a resource type - there can be as many implementations as are needed. You could e.g. have different implementations for a Postgres database depending on the cloud provider that your deployment is targeting. In the end, the relevant part for the developer is that the provided resource conforms to the interface they expect. Modules for the orchestrator are implemented in Terraform / OpenTofu - this lets them interact with a wide variety of infrastructure out of the box, but also allows for the execution of other IaC dialects or API calls, offering total customizability for the platform engineer.
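
A module definition that wraps an existing Terraform module could look roughly like the sketch below. Only module_source is taken from this tutorial; the resource name platform-orchestrator_module, the remaining attributes and the repository URL are illustrative assumptions - compare with modules.tf later for the exact shape.

# Sketch only - attribute names besides module_source are assumptions
resource "platform-orchestrator_module" "postgres_k8s" {
  id            = "postgres-k8s"
  resource_type = "postgres"

  # Points to the repository hosting the Terraform code that implements the type
  module_source = "git::https://github.com/example-org/terraform-postgres-k8s"  # hypothetical repo
}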

Providers

Providers are the direct equivalent of Terraform / OpenTofu providers. They enable platform engineers to supply correctly configured providers for different contexts, which can then be utilized by modules.

Active Resources

As soon as an instance of a certain resource type has been created by a module (through a provider) as part of a deployment, we call it an active resource.

Resources & Modules

Rules

Whenever you’ve read about a certain context or certain dependencies / conditions in the previous explanations, it boils down to the fact that platform engineers can provide rules that describe when a concrete object of a type should be used. Most objects can be matched by this system either via their ID or their type. For example, you can use a different module to deploy a Postgres database for different environment types. This leads to differently sized and configured databases in development and production - not only highly desired, but also another abstraction that lets developers shed cognitive load. The platform will dynamically manifest not only the correct type but also a correctly configured resource of that type with each request. This enables the creation of environments in full self-service autonomy, as developers no longer need to supply a bespoke set of configuration for an environment up front.
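
Sticking with the Postgres example, a pair of rules selecting a different module per environment type could look roughly like this. The resource name platform-orchestrator_module_rule appears later in this tutorial’s modules.tf; the attribute names and module IDs here are assumptions for illustration.

# Sketch only - attribute names and module IDs are illustrative
resource "platform-orchestrator_module_rule" "postgres_dev" {
  resource_type = "postgres"
  module_id     = "postgres-k8s"        # small in-cluster database
  env_type_id   = "development"
}

resource "platform-orchestrator_module_rule" "postgres_prod" {
  resource_type = "postgres"
  module_id     = "postgres-cloudsql"   # hypothetical managed database module
  env_type_id   = "production"
}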

Runners

The final piece of the puzzle is the runner. A runner actually executes the deployment: after the orchestrator has created the deployment schematics, it schedules a runner that executes the compiled Terraform / OpenTofu code. Runners enable many things. They enable custom tooling, because platform engineers can build or enhance runner images. They enable security and compliance, as they are executed in your network. They limit the blast radius, as you can define as many runners as needed and have e.g. different runners for different network segments or permission sets. Rules once again make it possible to dynamically select the correct runner with each request.

Landscape

If you put all of the pieces together and show their relations, you will get to this landscape of the object system of the orchestrator.

Orchestrator object system landscape

Prepare your cloud resources

This tutorial comes with completely terraformed cloud infrastructure. We cannot predict how much infrastructure you already have in place, so you can either execute the full creation (including e.g. networks and a Kubernetes cluster) or just import the relevant infrastructure into your state and (re)use it.

Clone

Please clone the infrastructure project from: https://github.com/humanitec-tutorials/first-deployment 

Decide what to run

All following filenames and references point to the infra directory.

In google.tf you will find the baseline setup including IAM, networking and a raw GKE cluster. Please decide if you need the networking and cluster - if not, you can simply remove them. The IAM setup is needed for the orchestrator to connect to a service principal and access your target GKE cluster to schedule runners. cluster.tf contains the configuration of the cluster. If you want to use your own GKE cluster, please update the configuration of the Kubernetes provider to target it.
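
If you bring your own cluster, a common way to point the Kubernetes provider at an existing GKE cluster looks like the following sketch; the cluster name and location are placeholders for your own values, and the exact provider block in cluster.tf may be wired differently.

# Common pattern for targeting an existing GKE cluster - adapt names/locations to your setup
data "google_client_config" "default" {}

data "google_container_cluster" "target" {
  name     = "my-existing-cluster"
  location = "europe-west1"
}

provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.target.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.target.master_auth[0].cluster_ca_certificate)
}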

Create your Runner

With the cloud infrastructure in place, we can now build the orchestrator’s runner configuration so that the orchestrator can schedule runners in your cloud infrastructure. The good part is that this configuration is also part of the infrastructure repository you already cloned. Let’s review the relevant file, humanitec.tf, in case you need any adaptations. Please take note of how an additional secret is injected into the cluster and provided to the runner by mounting it - this is needed in the next step to configure the Google provider. The tutorial project and the two required environments are also created here, and the same goes for the development environment type. Feel free to rename it if that suits your naming conventions better!
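
For orientation, the injected secret could look roughly like the sketch below - the name, namespace and the referenced service account key are illustrative assumptions; humanitec.tf contains the actual definition and how the secret is mounted into the runner.

# Sketch only - name, namespace and key reference are illustrative, see humanitec.tf for the real resource
resource "kubernetes_secret" "runner_gcp_credentials" {
  metadata {
    name      = "gcp-credentials"
    namespace = "humanitec-runner"
  }

  data = {
    # google_service_account_key returns a base64-encoded key; the provider re-encodes plaintext values
    "credentials.json" = base64decode(google_service_account_key.runner.private_key)
  }
}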

The baseline resources

The final puzzle piece to review is modules.tf, which includes all infrastructure components that you want the orchestrator to create. You will always find triplets of resources in the Terraform code - a resource type, a module and a rule - please refer back to the orchestrator’s object system above if you want to revisit how they interact to achieve an outcome. Each set of modules is headed by the provider for that type of infrastructure, e.g. the storage bucket and queue follow the Google provider that is needed to provision them. For modules, there is always a module_source configured, which specifies where the Terraform code that makes up the module is hosted. This makes it trivial to write wrapping module definitions for your own Terraform modules. For the moment, the relevant part for you is to check whether the targeted repository is reachable by a scheduled runner. If not, you need to fork the repository to a location where the runner can access it and modify the module_source entries.

Executing the infrastructure Terraform

Please review variables.tf to understand what parameters need to be supplied. As soon as you have provided them, you can finally execute the Terraform and create the baseline for your first deployment with the orchestrator. If you don’t know where to find your org, a simple

hctl login
hctl config show

will get you to it really fast.

For the token, you will have to go to https://console.humanitec.dev and create a service user. Navigate to “Service Users” → “Create Service User” → “Create” and copy the token.
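
A terraform.tfvars file for the infra directory could then look roughly like this - the variable names below are assumptions for illustration, the authoritative list lives in variables.tf. With the values in place, a regular terraform init followed by terraform apply in the infra directory creates the baseline.

# terraform.tfvars - variable names are assumptions, check variables.tf for the actual ones
humanitec_org   = "my-org"         # from hctl config show
humanitec_token = "..."            # the service user token you just created
gcp_project_id  = "my-gcp-project"
gcp_region      = "europe-west1"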

The first deployment

Now that the baseline is in place, we can start to deploy things and see how the orchestrator manifests them in reality. Our first deployment is going to be really boring 😅 Let’s create a manifest.yaml file with the following contents:

workloads: {}

This deployment manifest format is purpose-built to enable deploying basically any application or resource. It provides the right abstraction level to create infrastructure with the “what” and leave the “how” to the platform engineer who encoded this concern into the module that you’re calling. Let’s do that now by executing

hctl deploy tutorial dev ./manifest.yaml

To see the outcome, you can browse to https://console.humanitec.dev/orgs/###yourOrg###/projects/tutorial/envs/dev/status after replacing ###yourOrg### with your org in this URL. You can see the resource graph - which is quite bland because we deployed an empty manifest.

More resources

Let’s deploy something for real and also add some infrastructure. For the first shot we’re going to deploy an application directly to the Kubernetes cluster where the runner is scheduled and add a database to it. For that, you can create a manifest2.yaml with the following content:

workloads:
  demo-app:
    resources:
      score-workload:
        params:
          containers:
            main:
              image: ghcr.io/astromechza/demo-app:latest
          metadata:
            name: demo-app
          service:
            ports:
              web:
                port: 80
                targetPort: 8080
        type: score-workload
      db:
        type: postgres
    variables:
      OVERRIDE_POSTGRES: postgres://${resources.db.outputs.username}:${resources.db.outputs.password}@${resources.db.outputs.hostname}:${resources.db.outputs.port}/${resources.db.outputs.database}

and deploy it with

hctl deploy tutorial dev ./manifest2.yaml

You can observe how the deployment command prompts you to confirm the changes that are being applied to the so far empty environment. Accepting this will deploy the application to the cluster and connect it to the database, which is deployed into the cluster as well. Go back to the UI to observe how the graph has changed to reflect the now existing reality.

Cloud resources and alternative runtimes

For the last deployment, we’re going to leave the cluster behind and deploy the same workload onto a fleet of VMs. The fleet is going to be fronted by a load balancer, so we can directly access the application without any port-forwarding. This time we’re going to drop the deployment manifest in favour of a Score file. This format allows for a higher-level abstraction that developers prefer, as it aligns well with more of their tooling and requirements. You can create a score.yaml file with the following contents:

apiVersion: score.dev/v1b1
metadata:
  name: demo-app
containers:
  main:
    image: ghcr.io/astromechza/demo-app:latest
service:
  ports:
    web:
      port: 80
      targetPort: 8080

and deploy it like this

hctl score deploy tutorial score ./score.yaml

After the deployment is complete (which can take around 5 minutes) you should be able to observe three VMs in your GCP project sitting behind a load balancer. Calling this load balancer should immediately show you the application.

The reason why this was so simple is that the whole “how” is encoded into the score-workload module. It knows that you want to create those VMs and internally uses Ansible to configure them, installing a Docker instance that can serve the container specified in the image part of your Score file.

This is different from how the previous deployment performed the conversion of the score-workload - the reason being the matching according to the defined rules. Locate the platform-orchestrator_module_rule.ansible_score_workload in modules.tf. The rule specifies the env_id as matching criterion.
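
For orientation, that rule could look roughly like the sketch below. The resource address and the env_id criterion come from this tutorial; the module ID and the remaining attribute names are assumptions - the real definition is in modules.tf.

# Sketch only - attribute names and the module ID are assumptions, see modules.tf for the real rule
resource "platform-orchestrator_module_rule" "ansible_score_workload" {
  resource_type = "score-workload"
  module_id     = "score-workload-ansible"   # hypothetical ID of the Ansible/VM-based module
  env_id        = "score"                    # only the "score" environment gets the VM-based implementation
}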

Feel free to browse the inside of the modules - you can find them by opening the module_source and browsing the code to understand what is really happening.

Recap

We’ve made it to the end of this tutorial, which means that you have successfully created a configuration for the orchestrator to deploy to your own cloud infrastructure.

Along the way, you have deployed different applications and sets of infrastructure.

Exploring those outcomes should have reinforced your understanding of the orchestrator’s object system by providing practical examples of how the different objects work together.

You should now be ready to create your own modules that allow your developers to deploy any kind of custom infrastructure through the orchestrator with just the “what” and without the need to understand the “how”.

Welcome aboard, fellow platform engineer!

Feedback

We’re always looking to improve this tutorial. If you found something unclear, ran into issues, or have suggestions for making it better, please let us know through this feedback form.
