Deployments

Deployments drive the provisioning of infrastructure and applications within each environment. Deployments are executed within a runner.

Basic examples

You create deployments using the CLI. Deployments may be executed directly by your developers, by platform engineers testing a new configuration, or, most commonly, by your automated CI/CD processes.

If you maintain a single manifest file as the source of truth for an environment, you may execute deployments directly against it:

hctl deploy my-project my-environment ./manifest.yaml

If the environment is a combination of multiple partial manifests, you will use the --merge flag to avoid removing resources declared by other partial manifests:

hctl deploy my-project my-environment ./acme-app-manifest.yaml --merge

When deploying a Score workload, you will use a slightly different deploy command:

hctl score deploy my-project my-environment ./score.yaml

See the help output from hctl deploy for details of other supported flags and options to adjust the deployment behavior.

Validating deployments and manifests before execution

You may wish to validate or test a deployment without applying any changes or modifying any state. The Platform Orchestrator supports multiple levels of detail for this.

The --dry-run flag

When the --dry-run flag is specified, the Orchestrator will validate the manifest, construct and validate the full resource graph, and attempt to resolve placeholders where possible before returning a success or failure result. Dry-run deployments do not record any deployment history or logs and can be used as a lightweight check for commit hooks, pull request tests, or other validation steps.
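
For example, combining the deploy command shown above with the flag:

hctl deploy my-project my-environment ./manifest.yaml --dry-run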

The --plan-only flag

When the --plan-only flag is set, the Orchestrator performs a deployment as normal but stops it after the Plan step and before Apply. The deployment history is captured, and the caller can examine the logs of the deployment to see the planned changes. The planned deployment will fail if modules, providers, or runners are misconfigured. This may also be used as a pull request validation step during CI/CD.
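
For example, to capture a plan for review without applying it:

hctl deploy my-project my-environment ./manifest.yaml --plan-only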

What happens during a deployment

When you create a deployment against an environment, the Platform Orchestrator performs the following high-level steps (an example of tracing them from the CLI follows the list):

  1. The Platform Orchestrator receives the manifest as an input to the deployment request.

    • If the deployment is a Score deployment, the Orchestrator creates a manifest from it by converting the Score workload into a new resource and combining it with the manifest from the previous deployment.
  2. Once the manifest is finalized, the Platform Orchestrator creates a resource graph using the resources requested in the manifest as the initial nodes.

  3. The Platform Orchestrator uses the configured resource-types and modules to expand the resource graph and follow any dependency or co-provisioning rules.

  4. Once the resource graph is complete and valid, the Platform Orchestrator then converts this internally into an OpenTofu/Terraform file describing the desired state of the environment. The matched modules become the module sources used to provision each node in the resource graph.

  5. The Platform Orchestrator launches a new runner instance which executes a Plan and Apply of the OpenTofu/Terraform file against the state storage for the environment. The runner is chosen based on the runner-rules. The runner returns metadata and status information to the Orchestrator.

  6. Finally, any requested output variables are returned in an encrypted form to the client.
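
The manifest, status information, and resolved resource graph metadata produced by these steps are recorded in the deployment history, so you can trace them afterwards from the CLI (see Deployment history below). A minimal sketch using only the commands documented on this page; replace the placeholder with the deployment ID reported for your deployment:

hctl deploy my-project my-environment ./manifest.yaml
hctl get deployments my-project my-environment
hctl get deployment <deployment-id> -o yaml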

Returning resource outputs securely

In some deployment scenarios, you may rely on the Platform Orchestrator to provision infrastructure only and return to your CI/CD pipeline to perform the final application deployment or configuration using the outputs of the Orchestrator deployment, for example by passing values into a Helm deployment or an Ansible playbook.

When a manifest defines workload variables outputs, these values are returned by the runner to the Orchestrator. Because they may include sensitive data such as database credentials, the outputs are stored in an end-to-end encrypted form. Only the client that initiated the deployment may decrypt the outputs.

To obtain the variables outputs, use the --result flag to specify a file or ‘-’ for standard output:

hctl deploy my-project my-environment ./manifest.yaml --result=-
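
As a sketch of the CI/CD hand-off described above, the result can also be written to a file and consumed by a follow-on tool. The file name, the assumption that the decrypted result is JSON, the output name db_connection_string, and the Helm release and chart names are all illustrative rather than part of the Orchestrator's contract:

# Write the decrypted variables outputs to a local file instead of standard output.
hctl deploy my-project my-environment ./manifest.yaml --result=outputs.json

# Pass one of the values into a Helm deployment (output name and JSON shape are assumed for illustration).
helm upgrade --install my-app ./chart \
  --set database.url="$(jq -r '.db_connection_string' outputs.json)"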

Returning runner logs securely

Each deployment records logs of the runner setup, Plan, and Apply steps. Again, for security reasons, these logs are stored in an end-to-end encrypted form and you will need the decryption key, which is printed during deployment execution. The logs can be retrieved after the fact using the hctl logs CLI command, or shown during deployment with the --show-logs flag. The decryption key can be shared to allow trusted parties such as platform engineers to view the logs.
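
For example, to show the runner logs as part of the deployment itself:

hctl deploy my-project my-environment ./manifest.yaml --show-logs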

The format of the logs is generally JSON, but may vary due to the providers used, the value of the TF_LOG environment variable within the runner, or other factors.

These logs do not include output from the workload runtimes themselves, such as pod logs. If you wish to see runtime logs, you will need to use other observability tools in combination with the Platform Orchestrator.

Cloning and promoting between environments

When you manage multiple environments of the same project, you may wish to promote the manifest from one environment to another. This may be to promote from staging to production, or to clone development to an ephemeral environment. You can do this by specifying the environment name as the manifest source:

hctl deploy my-project my-new-environment environment://my-source-environment

Note that this clones the manifest but may result in different modules being used to provision the resources since the new environment may be subject to different module rules than the source environment.

Deployment history

Each environment maintains a history of deployments executed against it. This includes the manifest, status information, and metadata regarding the resolved resource graph.

You can list all past deployments into an environment:

hctl get deployments my-project my-environment

Using the deployment ID of any deployment, query its details:

hctl get deployment 12345678-abcd-dcba-1234-ba0987654321 -o yaml

If you wish to redeploy or roll back to the manifest used in a previous deployment, you can specify the deployment ID as the manifest source:

hctl deploy my-project my-environment deployment://12345678-abcd-dcba-1234-ba0987654321

Destroying environments

Apart from the normal deployment mode and the plan-only mode, the third type of deployment is a “destroy”. This deployment type is triggered when you delete an environment from a project. During a destroy deployment, the Platform Orchestrator executes an OpenTofu/Terraform destroy against the current state of the environment to tear down all provisioned infrastructure and applications. If the destroy fails, the environment status will be set to “delete_failed” and the delete can be retried. This may be necessary if the runner or provider configurations need to be updated.
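
For example, deleting the environment via the CLI triggers the destroy deployment. The non-forced form shown here is inferred from the force-delete command documented below:

hctl delete env my-project my-environment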

In some situations, you may need to force delete the environment without waiting for a fully successful destroy deployment. This will leave any resources and state behind and simply forget the environment in the Orchestrator. You can force delete an environment using the CLI:

hctl delete env my-project my-env --force

This should be considered a manual “break-glass” operation only and not performed as part of regular CI/CD.
