Add a database resource

Resource Packs

To jump-start your journey into building a platform with the Orchestrator, Humanitec offers collections of Resource Definitions called “Resource Packs”. They come pre-packaged with the IaC needed to create them using Terraform.

The “in-cluster” Resource Pack lets you work with resources that are created as containers right next to your workload on the target cluster. It is well suited for experimentation or development environments where you do not need the added safety of backups, but want your resources provisioned quickly and cost-efficiently.

You will use the “in-cluster” Resource Pack to create a Resource Definition for a PostgreSQL database. You can then add the database to the Score file and configure your quickstart app to use it.

Clone the repository:

git clone https://github.com/humanitec-architecture/resource-packs-in-cluster.git

Move to the PostgreSQL example directory and prepare the Terraform variables file. You can keep the preset values as they are:

cd resource-packs-in-cluster/examples/postgres
cp terraform.tfvars.example terraform.tfvars
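
If you want to review what the pack lets you configure, print the variables file you just copied:

cat terraform.tfvars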

Modify ./resource-packs-in-cluster/examples/postgres/main.tf:

  • Remove the resource named "example". We already have an Application and do not want to create another one.
  • Make the new Resource Definition match your Application by adjusting the matching criteria defined via the resource "humanitec_resource_definition_criteria". Change the line starting with app_id to:
app_id                 = "quickstart"
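
To double-check this edit, you can search for the matching criteria in the file (a quick sanity check, assuming you are still in the examples/postgres directory):

grep -n 'app_id' main.tf

The output should include the line setting app_id to "quickstart".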

Apply the pack to create the Resource Definition, and return to the root directory:

terraform init
terraform apply -auto-approve

cd ../../..

The connection from the Terraform CLI to the Orchestrator works right away because your environment is already configured with the appropriate environment variables.
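
If you want to confirm the variables are set, you can list their names. This is a quick check that assumes they use the usual HUMANITEC_ prefix:

env | grep HUMANITEC_ | cut -d= -f1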

Verify that the new Resource Definition has been created:

humctl get resource-definition | grep hum-rp-postgres

The Workload specification

Edit your score.yaml so that your Workload requests and uses a PostgreSQL database.

  • Add this section at the end of the file:
# Defines dependencies needed by the Workload
resources:
  db:
    type: postgres
  • Add this line to the variables section of the container right under the OVERRIDE_MOTD variable (make sure to apply the same indentation):
OVERRIDE_POSTGRES: "postgres://${resources.db.username}:${resources.db.password}@${resources.db.host}:${resources.db.port}/${resources.db.name}"

Note how the variable value uses outputs from the db resource to construct the connection string. Their concrete values will only be known at runtime. As a developer, you do not need to specify them.

If you want to check your edits for correctness or skip editing the file yourself, you can find a preconfigured file in the solutions subdirectory. To use it, execute:

cp solutions/database_deployment_score.yaml score.yaml
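
In either case, you can sanity-check the Score file before deploying. This step is optional and assumes your humctl version provides the score validate subcommand:

humctl score validate score.yaml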

The Deployment

To actually provision the new database, all you need to do is deploy once more with:

humctl score deploy --wait

Check the result in the Platform Orchestrator UI. Navigate to the development Environment of the quickstart Application and select the quickstart Workload. The “Resource Dependencies” section shows a db Resource of type postgres.
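
If you prefer the CLI for this check, you may be able to list the Active Resources of the Environment as well. This is a sketch that assumes your humctl version supports the active-resources object and the --app/--env flags:

humctl get active-resources --app quickstart --env development | grep postgres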

Check the Kubernetes objects in the development namespace:

kubectl get all -n $NAMESPACE_DEVELOPMENT

You can see that there is a PostgreSQL container running inside your development namespace. It may take a minute or two for the required PersistentVolume to be created so the container can actually start.
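
To watch the database come up, you can follow the Pod status until it is Running and, if the pack requests its storage through a PersistentVolumeClaim, see that claim get bound as well. Stop the watch with Ctrl-C:

kubectl get pvc -n $NAMESPACE_DEVELOPMENT
kubectl get pods -n $NAMESPACE_DEVELOPMENT -w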

Revisit the application UI. Create the local port forwarding once more:

kubectl port-forward service/${HUMANITEC_APP} 8080:8080 \
  -n ${NAMESPACE_DEVELOPMENT}

Open this URL: http://localhost:8080/

The output will now contain a status line for the newly connected, still empty PostgreSQL database:

Postgres table count result: 0
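
If you prefer the terminal over the browser, you can fetch the same page with curl from a second shell while the port forwarding is running. As an extra check, you can also print the injected connection string straight from the running container; this is a sketch that assumes the Deployment object carries the same name as the Workload, i.e. ${HUMANITEC_APP}:

curl -s http://localhost:8080/

kubectl exec -n ${NAMESPACE_DEVELOPMENT} deploy/${HUMANITEC_APP} -- printenv OVERRIDE_POSTGRES

Note that the second command prints the generated database credentials to your terminal.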

Quit the port forwarding with Ctrl-C.

Recap

And that concludes this chapter. You have:

  • ✅ Created a new Resource Definition for PostgreSQL databases using the Humanitec in-cluster Resource Pack and Terraform
  • ✅ Expanded your Score file to request a PostgreSQL database and connect to it
  • ✅ Re-deployed your Workload using Score and humctl
  • ✅ Verified the provisioning of the new Resource

Your setup now looks like this (omitting some connections for simplicity):

%%{ init: { 'flowchart': { 'curve': 'linear' } } }%%
flowchart LR
  subgraph scoreFile[Score file]
    direction TB
    scoreWorkload(Workload) ~~~ scoreDb(Resource\npostgres)
  end
  subgraph platformOrchestrator[Platform Orchestrator]
    cloudAccount(Cloud Account)
    subgraph application[Application]
        envDevelopment(Environment\n"development")
        envStaging(Environment\n"staging")
    end
    resDefCluster(Resource Definition\nCluster)
    resDefNamespace(Resource Definition\nNamespace)
    resDefDb(Resource Definition\nPostgreSQL)
  end
  subgraph cloudInfrastructure[Cloud Infrastructure]
    subgraph k8sCluster[Kubernetes Cluster]
      subgraph namespaceDev[Namespace development]
        workloadDev(Workload) --> dbDev(PostgreSQL)
      end
      subgraph namespaceStaging[Namespace staging]
        workloadStaging(Workload)
      end
    end
  end
  scoreFile -->|humctl score deploy| envDevelopment
  envDevelopment --> namespaceDev
  envStaging --> namespaceStaging
  resDefCluster -.- k8sCluster

  %% Using the predefined styles
  class scoreDb,resDefDb,dbDev highlight
  class application,k8sCluster nested

Note that the staging namespace does not have a PostgreSQL database yet because you have not deployed to that Environment.

You will do that in the chapter where you also apply some Environment-specific configuration.
