The five-minute IDP

Introduction

Getting an Internal Developer Platform (IDP) running with the Humanitec Platform Orchestrator usually requires some preparatory work, such as providing a Kubernetes cluster and setting up tooling. This tutorial lets you set up a fully functional platform on your local machine in five minutes, with minimal prerequisites, using a dev container in Docker. When you’re done, you can tear everything down again.

Prerequisites

To get started with this tutorial, you will need:

  • A Humanitec Organization you can log in to
  • The Humanitec CLI humctl installed on your machine
  • Docker installed and running on your local machine

Configure your environment

Log in to the CLI:

humctl login

Prepare this environment variable. Use the all-lowercase spelling of your Humanitec Organization ID:

export HUMANITEC_ORG=<my-org-id>
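
If you are unsure whether your Organization ID is already all lowercase, you can normalize it in the shell before exporting. A small sketch (the ID shown is a placeholder; substitute your own):

```shell
# Normalize the Organization ID to lowercase before exporting it.
# "My-Org-ID" is a placeholder; substitute your own Organization ID.
HUMANITEC_ORG=$(echo "My-Org-ID" | tr '[:upper:]' '[:lower:]')
export HUMANITEC_ORG
echo "$HUMANITEC_ORG"
```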

Start the toolbox

Run the dev container in Docker and attach a terminal:

docker run --rm -it -h 5min-idp --name 5min-idp --pull always \
  -e "HUMANITEC_ORG=$HUMANITEC_ORG" \
  -v hum-5min-idp:/state \
  -v $HOME/.humctl:/root/.humctl \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --network bridge \
  ghcr.io/humanitec-tutorials/5min-idp
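
The flags mount a state volume, your humctl credentials, and the Docker socket so the container can launch a kind cluster on your host. From a second terminal on the host, you can verify that the toolbox container is up. A sketch (the check_toolbox helper is hypothetical, not part of the tutorial):

```shell
# Sanity check from a second host terminal: is the toolbox container running?
# (check_toolbox is a hypothetical helper, not part of the tutorial scripts.)
check_toolbox() {
  if docker ps --filter "name=5min-idp" --format '{{.Names}}' | grep -qx 5min-idp; then
    echo "toolbox running"
  else
    echo "toolbox not found"
  fi
}
# check_toolbox
```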

Install IDP components

Inside the container terminal, run the install script. It creates a kind cluster inside the container and installs a number of objects into your Humanitec Organization. We’ll look at those objects in more detail later in the recap, but you can proceed for now.

./0_install.sh

Deploy a demo Workload

Run the demo script to create an Application in the Platform Orchestrator and deploy a Workload specified by a Score file into it. Again, we’ll look at what happens in more detail in the recap.

./1_demo.sh

Following the deployment, the script keeps checking for the URL endpoint to become reachable, printing curl ... HTTP error messages until it does.
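
That readiness check can be sketched roughly like this (the wait_for_url helper is ours, and the real script's logic may differ):

```shell
# Sketch of the readiness check the demo script performs. Polls the
# endpoint until it answers successfully. (wait_for_url is a hypothetical
# helper; the actual script's logic may differ.)
wait_for_url() {
  url="$1"
  until curl -fsS "$url" >/dev/null 2>&1; do
    echo "Endpoint not reachable yet, retrying..."
    sleep 2
  done
  echo "Workload available at: $url"
}
# Example: wait_for_url "http://5min-idp-abcd.localhost:30080"
```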

You’ll eventually see a message like:

Workload available at: http://5min-idp-abcd.localhost:30080

Access your Workload

In your browser, open the URL printed as the last output of the previous step. You should see a message like this:

Hello World!

Clean up

Run the cleanup script to remove all objects in the Platform Orchestrator and delete the kind cluster:

./2_cleanup.sh

Then exit the dev container terminal:

exit

Clean up your Docker environment:

docker image rm ghcr.io/humanitec-tutorials/5min-idp
docker volume rm hum-5min-idp
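
To confirm the cleanup worked, you can check whether the tutorial's volume is still present. A sketch (the check_volume_removed helper is hypothetical):

```shell
# Hypothetical helper: reports whether the tutorial's Docker volume remains.
check_volume_removed() {
  if docker volume ls -q --filter "name=hum-5min-idp" | grep -q .; then
    echo "volume still present"
  else
    echo "volume removed"
  fi
}
# check_volume_removed
```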

Troubleshooting

If you encounter this error message:

$ ./0_install.sh
ERROR: could not locate any control plane nodes for cluster named '5min-idp'. Use the --name option to select a different cluster

Data left in the Docker volume from a previous run is preventing the kind cluster from being set up properly. Execute these steps to remediate the problem:

  • Type exit to exit the container
  • Remove the volume via docker volume rm hum-5min-idp
  • Start over from the Start the toolbox step

Recap

So what just happened? A number of things, though none of them is rocket science. You just:

  • Created and connected to a dev container providing all the required tooling in your local Docker environment
  • Launched a Kubernetes cluster using kind in that container, complete with an ingress controller
  • Created an Application in the Platform Orchestrator as the target for the upcoming deployment, using Terraform
  • Created a number of Resource Definitions in the Platform Orchestrator, instructing the Orchestrator how to reach your cluster and how to set up an in-cluster PostgreSQL database based on our Resource Pack, also using Terraform
  • Configured an instance of the Humanitec Agent to establish a secure tunnel between your local cluster and the Platform Orchestrator

At this point, your setup looked like this:

%%{ init: { 'flowchart': { 'curve': 'stepAfter' } } }%%
flowchart BT
    subgraph platformOrchestrator[Platform Orchestrator]
        subgraph application[Application]
            subgraph environment[Environment &quot;development&quot;]
            end
        end
        subgraph resourceDefinitions[Resource Definitions]
            direction LR
            resdefK8scluster(k8s-cluster):::highlight ~~~ resdefPostgres(postgres):::highlight ~~~ resdefAgent(agent):::highlight ~~~ resdefEtc(...):::highlight
        end
    end
    subgraph workstation[Workstation]
        subgraph devContainer[Dev container]
            subgraph kindCluster[kind cluster]
                agentPod(Agent Pod)
            end
            installScript(Install script):::highlight
        end
    end
    installScript -->|terraform apply| application
    installScript -->|terraform apply| resourceDefinitions
    installScript --->|create| kindCluster
    agentPod -.-|Secure tunnel| platformOrchestrator

    class environment highlight
    class platformOrchestrator,workstation,kindCluster nested

  • You then deployed a Workload by posting a workload specification based on Score to the Orchestrator, specifically into the development Environment of the new Application:

%%{ init: { 'flowchart': { 'curve': 'stepAfter' } } }%%
flowchart BT
    subgraph platformOrchestrator[Platform Orchestrator]
        subgraph application[Application]
            subgraph environment[Environment &quot;development&quot;]
            end
        end
        subgraph resourceDefinitions[Resource Definitions]
            direction LR
            resdefK8scluster(k8s-cluster) ~~~ resdefPostgres(postgres) ~~~ resdefAgent(agent) ~~~ resdefEtc(...)
        end
    end
    subgraph workstation[Workstation]
        subgraph devContainer[Dev container]
            direction LR
            subgraph kindCluster[kind cluster]
                agentPod(Agent Pod)
            end
            demoScript(Demo script):::highlight
        end
    end
    agentPod -..-|Secure tunnel| platformOrchestrator
    demoScript -->|Deploy Score file| environment

    class environment highlight
    class platformOrchestrator,workstation,kindCluster nested

  • The deployment request triggered a run of the default Deployment Pipeline within the new Application
  • You watched as the Orchestrator connected to your cluster using the Humanitec Agent tunnel, and deployed both the demo workload and an in-cluster PostgreSQL database onto it
  • You accessed the web frontend of the demo Workload and saw that it connected to the database

Now your setup looked like this:

%%{ init: { 'flowchart': { 'curve': 'basis' } } }%%
flowchart BT
    subgraph platformOrchestrator[Platform Orchestrator]
        subgraph application[Application]
            subgraph environment[Environment &quot;development&quot;]
                workload(Workload):::highlight ~~~ activeResourcePostgres(Postgres Resource):::highlight
            end
        end
        subgraph resourceDefinitions[Resource Definitions]
            direction LR
            resdefK8scluster(k8s-cluster) ~~~ resdefPostgres(postgres) ~~~ resdefAgent(agent) ~~~ resdefEtc(...)
        end
    end
    subgraph workstation[Workstation]
        browser(Browser)
        subgraph devContainer[Dev container]
            subgraph kindCluster[kind cluster]
                direction RL
                helloWorldPod(Hello World Pod):::highlight
                postgresPod(PostgreSQL Pod):::highlight
                agentPod(Agent Pod)
            end
        end
        browser --> helloWorldPod
    end
    agentPod -.-|Secure tunnel| platformOrchestrator

    class platformOrchestrator,environment,workstation,kindCluster nested

Finally:

  • You cleaned up the Platform Orchestrator objects using Terraform, stopped the cluster, and exited the dev container

Next Steps

  • Repeat the setup and before cleaning up, take a moment to inspect the objects created during the process
    • In your browser, open the “Resource Management” section of the Platform Orchestrator and filter for the “5min” objects. Those are all of the Resource Definitions instructing the Orchestrator how to provision the different infrastructure components.
    • Open the “Applications” section and navigate into the “development” environment of the “5min-*” Application. Observe the Shared Resources, and navigate further into the “hello-world” Workload to see the running container and inspect its container log stream.
    • Use kubectl to inspect the Kubernetes objects created on your kind cluster. Find the namespace named “5min-*”, and see the two Pods running in that namespace for the demo Workload and the PostgreSQL database.
  • Inspect the Score file by typing cat score.yaml in the dev container
  • Find out what the k8s-cluster, k8s-namespace, postgres and related objects mean by reading more about Resources and Resource Types.
  • Deepen your knowledge of platform engineering and the Humanitec products by following our learning path “Master your Internal Developer Platform”.
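
The kubectl inspection mentioned above can look like this sketch. The namespace name is generated per run, so we look it up by its “5min” prefix (inspect_workload is a hypothetical helper, not part of the tutorial):

```shell
# Sketch: find the tutorial's generated namespace by its "5min" prefix
# and list the Pods in it. (inspect_workload is a hypothetical helper.)
inspect_workload() {
  ns=$(kubectl get namespaces -o name | grep 5min | head -n 1)
  ns="${ns#namespace/}"
  echo "Inspecting namespace: $ns"
  kubectl get pods -n "$ns"
}
# inspect_workload
```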