Set up GitOps

How does GitOps work with the Platform Orchestrator?

“GitOps” is one of the modes of operation when using the Orchestrator in tandem with the Humanitec Operator. It utilizes a third-party GitOps operator like ArgoCD or FluxCD to pull the Kubernetes custom resources (CRs) created by the Platform Orchestrator from a Git repository into the cluster, where they will be processed normally by the Humanitec Operator.

The Orchestrator thus does not require access to your cluster to deploy CRs as it does in Direct mode.

GitOps mode is set up via these steps:

  1. Provide a Resource Definition for a Kubernetes cluster Resource (of type k8s-cluster) using the k8s-cluster-git Driver

    The Resource Definition defines a branch in a Git repository as the deployment target instead of an actual cluster.

  2. Create matching criteria on this Resource Definition as you would for a regular cluster

  3. Install the Humanitec Operator on your target cluster

  4. Configure your GitOps operator to synchronize the contents of the target Git branch onto your target cluster

The way developers deploy their workloads via the Platform Orchestrator does not change.

On deployment, the Platform Orchestrator will now write the Kubernetes CRs representing the Resource Graph to the Git repository and branch configured in the cluster Resource Definition. From here, they can be picked up by the GitOps operator for synchronization into the cluster. Once synchronized, they are processed by the Humanitec Operator just like in Direct mode.

We will use ArgoCD as an example GitOps operator, though other tools like FluxCD will work just as well.

(Diagram: Modes of operation, GitOps mode)

Before you begin

To create this setup, you will need the following resources and permissions:

  • A Humanitec Organization and Administrator access for creating Resource Definitions
  • A target Kubernetes cluster to run your workloads
  • A Git hosting service to house your repositories
  • A GitOps operator able to sync objects from Git to your target cluster
  • Permissions to configure the GitOps operator
  • The Humanitec Operator installed on your cluster
  • The humctl CLI installed locally
  • (optional) The argocd CLI installed locally
  • (optional) Access to the designated namespace on your cluster, via kubectl or another Kubernetes management tool

Select your Git hosting service

You need a Git hosting service such as GitHub, GitLab, or Bitbucket to house the repository serving as the deployment target for the Platform Orchestrator. Your Git service may be cloud-based or self-hosted.

Whichever service you choose, the Humanitec Platform Orchestrator requires network access to reach it on one of the available protocols (SSH or HTTPS). If your service is self-hosted, provide access for the Humanitec public IPs.

Define your Git repository layout

Each Application Environment in the Platform Orchestrator to be covered by the GitOps approach requires its own Git target, which is a combination of repository, branch, and directory path.

You therefore need to map Applications and Environments to these Git targets.

It is up to you how to organize your repository structure. Consider these aspects when choosing your layout:

  • Security: Depending on your Git hosting tool, separate repositories and/or branches provide better options for fine-grained access control than directories within the same repository/branch
  • Workflows: Do you require review workflows inside the repositories to greenlight changes coming in from the Platform Orchestrator before syncing to the target clusters? If so, you may want to use a PR process inside your Git tool

Some possible layouts are shown below. The list is not exhaustive, but may help guide you towards the fitting setup for your requirements.

  1. Condensed: one Git repository for the entire Organization, one single branch, directories for Applications and sub-directories for Environments

Layout overview

Git repo "Humanitec GitOps", main branch:

├ app-1
│ ├ development
│ ├ staging
│ └ production
└ app-2
  ├ development
  ├ pr-441
  ├ qa
  └ production

Each sub-directory serves as the Git target for the Platform Orchestrator Environment of the same id, e.g. app-2/qa for the qa Environment of App 2.

  2. Fine grained by Environment using branches: one Git repository per Application, one branch per Environment, no subdirectories (all files in the root directory /)

Layout overview

Git repo "App 1": branches main (production), development, staging
Git repo "App 2": branches main (production), development, pr-441, qa

Each branch serves as the Git target for the Platform Orchestrator Environment of the same id, with the main branch serving the production Environment.

  3. Fine grained by Environment using repos: same as previous, but using a separate Git repository instead of branches for each Environment for higher security

  4. Fine grained by Environment using directories: One Git repository per Application, one branch, directories for Environments

Layout overview

Git repo "App 1", main branch:

├ development
├ staging
└ production

Git repo "App 2", main branch:

├ development
├ pr-441
├ qa
└ production

Each directory serves as the Git target for the Platform Orchestrator Environment of the same id.

  5. Use a Git PR process for approvals: As a variation on any of the previous setups, use a PR process inside the Git repository for all or select Environments. This process is external to the Platform Orchestrator, using functionality of your Git hosting tool

Layout overview

This layout shows using a PR process for the production Environment only. The Platform Orchestrator writes the CRs for the development and staging Environments directly into the corresponding directories of the main branch. For the production Environment, it writes the CRs into a separate Orchestrator target branch, which is merged into main via the PR process. The directories in the main branch are used as the Git source for all Environments by the GitOps operator.

Git repo "App 1":

Orchestrator target branch
├ production

main branch
├ development
├ staging
└ production

Configure the GitOps Operator

Configure the GitOps Operator (ArgoCD, FluxCD, or other) to synchronize the contents of the relevant Git repository branches and directories to namespaces on your target cluster(s). Each Application Environment needs to be synced to its own namespace.

Note that in GitOps mode, the Platform Orchestrator does not create the Kubernetes namespace for an Environment. You need to either configure the GitOps operator to create it or do so by other means.

The exact setup depends on your choice of GitOps operator. For ArgoCD you will need at least:

  • A Repository definition for each Git repository to be accessed
  • An Application definition for each Humanitec Application Environment to be covered
Sample ArgoCD Repository

An ArgoCD Repository is maintained as a Kubernetes secret. These commands configure a repository using HTTPS access and a password or token. Replace all values according to your own setup.

# Create the secret
kubectl create secret generic gitops-app-development \
  -n argocd \
  --from-literal=url=https://github.com/my-org/gitops-app.git \
  --from-literal=type=git \
  --from-literal=username=git \
  --from-literal=project=default \
  --from-literal=password='sometoken'
# Add the required ArgoCD annotation
kubectl annotate secret gitops-app-development \
  -n argocd \
  managed-by=argocd.argoproj.io
# Add the required ArgoCD label
kubectl label secret gitops-app-development \
  -n argocd \
  argocd.argoproj.io/secret-type=repository
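
Alternatively, the same Repository can be declared as a single Kubernetes manifest and applied in one step. A minimal sketch, assuming the same example repository and token as above:

apiVersion: v1
kind: Secret
metadata:
  name: gitops-app-development
  namespace: argocd
  labels:
    # This label marks the secret as an ArgoCD repository definition
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  project: default
  url: https://github.com/my-org/gitops-app.git
  username: git
  password: sometoken
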
Sample ArgoCD Application

This is a bare minimum example manifest for an ArgoCD Application. The configuration matches layout #4 shown above: one Git repository per Humanitec Application, a single main branch, and separate directories per Environment.

This specific Application defines the development Environment for the Application gitops-app. Replace all values according to your setup.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  # Ensure the name is unique across all Application Environments served by this ArgoCD installation
  name: gitops-app-development
  namespace: argocd
spec:
  destination:
    namespace: gitops-app-development
    # In this setup, ArgoCD is running on the same cluster as the workloads
    server: https://kubernetes.default.svc
  project: default
  source:
    # The directory for the "development" Environment
    path: development
    # Configure an ArgoCD Repository for this URL
    repoURL: https://github.com/my-org/gitops-app.git
    targetRevision: refs/heads/main
  syncPolicy:
    # This property enables auto-sync
    automated: {}
    syncOptions:
    # This option lets ArgoCD create the destination namespace
    - CreateNamespace=true
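
If you run many Application Environments, maintaining an individual Application manifest for each can become tedious. ArgoCD can also generate them via an ApplicationSet. This is a minimal sketch, not part of the required setup, assuming layout #4 with one directory per Environment in the main branch of the example repository:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: gitops-app
  namespace: argocd
spec:
  generators:
  # Generates one Application per top-level directory (= Environment) in the repo
  - git:
      repoURL: https://github.com/my-org/gitops-app.git
      revision: main
      directories:
      - path: '*'
  template:
    metadata:
      name: 'gitops-app-{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/my-org/gitops-app.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: 'gitops-app-{{path.basename}}'
      syncPolicy:
        automated: {}
        syncOptions:
        - CreateNamespace=true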

Create a GitOps cluster Resource Definition

To have the Platform Orchestrator write the Kubernetes custom resources (CRs) representing the Resource Graph to a Git repository instead of a cluster, prepare a Resource Definition for the Resource Type k8s-cluster that uses the Driver k8s-cluster-git.

  1. Prepare a Resource Definition similar to this example, adjusting all values to your setup:
Sample GitOps cluster Resource Definition

github-for-gitops.yaml (view on GitHub):

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: github-for-gitops
entity:
  name: github-for-gitops
  driver_type: humanitec/k8s-cluster-git
  type: k8s-cluster
  driver_inputs:
    values:
      # Git repository for pushing manifests
      url: git@github.com:example-org/gitops-repo.git
      # Branch in the git repository, optional. If not specified, the default branch is used.
      branch: development
      # Path in the git repository, optional. If not specified, the root is used.
      path: "${context.app.id}/${context.env.id}"
      # Load balancer, optional. Though not related to Git, it is used to create ingress in the target K8s cluster.
      loadbalancer: 35.10.10.10
    secrets:
      credentials:
        ssh_key: my-git-ssh-key
        # Alternative to ssh_key: password or Personal Access Token
        # password: my-git-ssh-pat

Notes:

  • The k8s-cluster-git Resource Definition can serve more than one Application and/or Environment. Use these Placeholders as dynamic elements to comply with your chosen repository layout:
    • ${context.app.id}
    • ${context.env.id}
  • The value of the credentials for accessing the Git repository must be passed in on creation of the Resource Definition. The value will be stored in the Humanitec secret store and retrieved from there by the Platform Orchestrator for a deployment. Using a secret reference to your own secret store is not possible as that reference could only be resolved by a Humanitec Operator from within your infrastructure, not externally by the Orchestrator.
  2. Add matching criteria to comply with your chosen repository layout. For example, to use this Resource Definition, and thus the Git targets it represents, for the development and staging Environments of the Application gitops-app, you would add:
criteria:
- app_id: gitops-app
  env_type: development
- app_id: gitops-app
  env_type: staging
  3. Install the Resource Definition:
humctl apply -f github-for-gitops.yaml
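
You can optionally verify the result, including the matching criteria, by reading the Resource Definition back through the Orchestrator API. A sketch, assuming the Organization ID my-org used elsewhere in this guide:

# Show the Resource Definition just created
humctl api get /orgs/my-org/resources/defs/github-for-gitops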

Perform a Deployment

You are now ready to perform a GitOps-based deployment. This requires an Application in the Orchestrator and a workload specified via Score.

  1. If you do not have these ready, create them now:
# Create an Application
humctl create app gitops-app
# Create a demo Score file
cat <<EOF > score.yaml
apiVersion: score.dev/v1b1
metadata:
  name: hello-world
containers:
  hello-world:
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do printenv && sleep 30; done"]
EOF
  2. Then deploy the Score file to your Application, waiting for deployment completion:
humctl score deploy --app gitops-app --env development --wait
  3. Check the contents of your remote Git repository. The Deployment will have pushed a set of Humanitec custom resources (CRs) to the Git target specified in the GitOps cluster Resource Definition.

    If you used the basic demo Score file shown above, there will be two CRs of kind: Resource named resource-<guresid>.yaml as well as a CR of kind: Workload named workload-<name>.yaml.

    Depending on your Git service, you may use its web interface, tooling CLI, or the git client if you have cloned the target repository:

# Show the contents of the "gitops-app/development" folder in the "main" branch
$ git ls-tree --name-only origin/main:gitops-app/development
resource-2cc109ddc1014e0c325d0afdf1eea87151f0a0d9.yaml
resource-c2fb2be3bd92f85bc755b6ebf8409d2dd469964b.yaml
workload-hello-world.yaml
  4. The CRs prefixed with resource- represent the Resource Graph of the Deployment. Open the Graph by using this command, or via the Humanitec Portal. For the basic demo Score file shown above, the Graph will have just two elements:
humctl resources graph deploy . --app gitops-app --env development

(Graph output: two nodes, workload and base-env)

Perform a GitOps Sync

Pushing the CR files into the target repository completes the Deployment for the Platform Orchestrator. It is now up to the GitOps operator to synchronize these CRs into your target cluster and namespace.

For ArgoCD, if you have auto-sync enabled on the ArgoCD Application, then the new CR files in the repository will have triggered a new sync automatically. Otherwise, you need to manually trigger a sync via the web UI or using the argocd CLI:

argocd app sync gitops-app-development
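
Either way, you can optionally follow the sync on the CLI:

# Show sync and health status of the ArgoCD Application
argocd app get gitops-app-development
# Block until the Application reports a healthy state
argocd app wait gitops-app-development --health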

Once the Humanitec CRs are synced into the target namespace, the Humanitec Operator will act on them and provision all required real-world resources, including all objects on the Kubernetes cluster.

If you used the basic demo Score file shown above, the application details tree in the ArgoCD UI will look like this:

(Screenshot: ArgoCD gitops-app application tree)

Note that the cadence of object creation works as follows:

  1. The GitOps operator (here: ArgoCD) synchronizes the Humanitec CRs onto the cluster. These are the objects of kind Resource and Workload shown on the second level of the tree underneath the gitops-app.
  2. Based on the CRs, the Humanitec Operator creates all the objects on the third level of the tree.
  3. Based on those objects, the Kubernetes cluster mechanisms create all the objects on the fourth and fifth levels of the tree.

The workload is now running in your cluster. Future Humanitec Deployments into the same Environment will now trigger the same series of deployment steps.

Resource deletion

When a resource is no longer present in the Resource Graph, the Platform Orchestrator will delete it from the set of CR objects in the GitOps repository. What happens after that depends on the sync setup of your GitOps operator.

If the GitOps operator is configured to always delete cluster objects upon their deletion from the Git source, then real-world resources will also be deleted once they are no longer present in the Graph. This may result in data loss and prevent rollbacks. We therefore recommend configuring your GitOps operator so that objects deleted from Git are retained on the cluster.

In ArgoCD, this aspect of a sync is called resource pruning. You can configure pruning behavior through the sync options on the Application.
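
For example, ArgoCD's automated sync does not prune unless explicitly enabled. A sketch of the relevant syncPolicy fragment on the Application, keeping the recommended behavior:

syncPolicy:
  automated:
    # false is the default: objects deleted from Git are retained on the cluster.
    # Setting this to true would delete them on the next sync.
    prune: false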

Provide access to runtime data

In the GitOps setup, the target Kubernetes cluster is disconnected from the Platform Orchestrator by design. The Orchestrator therefore cannot provide runtime information about the workloads, such as Pod status or container logs, as it does in a Direct mode setup.

You may optionally extend your GitOps setup to allow the Orchestrator least privilege read access to the target cluster so that it may provide live runtime information about workloads running on it.

This works by adding a secondary Resource of type k8s-cluster with the specific Resource ID k8s-cluster-runtime to the Resource Graph. It provides connection information for the cluster just like a regular, non-GitOps cluster Resource, but with reduced permissions. The Platform Orchestrator will identify it and use it for querying runtime data.

Follow these steps to configure runtime access:

  1. (Recommended) Configure a dedicated Cloud Account for obtaining runtime information.

    Depending on your cluster and namespace topology, you may choose to create separate Cloud Accounts per cluster, per Application, or even per Environment.

    Perform the required permission setup on the Kubernetes cluster following the instructions for your cluster type.

  2. Create a Resource Definition for the runtime cluster access. It is of type k8s-cluster and uses the Driver for your cluster type (e.g. humanitec/k8s-cluster-gke). Each of its matching criteria must always include the exact Resource ID k8s-cluster-runtime, plus potentially other criteria as per your cluster topology.

    Configure the Resource Definition to use the Cloud Account you just created in the previous step in its driver_account property.

    Adjust the matching criteria as required.

Sample GitOps runtime cluster Resource Definition

gke-temporary-credentials-runtime.yaml (view on GitHub):

# Connect to a GKE cluster to obtain runtime data in a GitOps setup
# using temporary credentials defined via a Cloud Account
apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: gke-temporary-credentials-runtime
entity:
  name: gke-temporary-credentials-runtime
  type: k8s-cluster
  # The driver_account references a Cloud Account of type "gcp-identity"
  # which needs to be configured for your Organization
  driver_account: gcp-temporary-creds
  driver_type: humanitec/k8s-cluster-gke
  driver_inputs:
    values:
      loadbalancer: 35.10.10.10
      name: demo-123
      zone: europe-west2-a
      project_id: my-gcp-project
    secrets:
      # Optional: set this property to use the Humanitec Agent for accessing runtime information
      # if the target cluster is private. This requires the Agent to be configured
      # for the cluster and the proper Agent Resource Definition to be matched.
      agent_url: "${resources['agent#agent'].outputs.url}"
  criteria:
  # The runtime cluster Resource Definition needs to always match this exact res_id
  # Specify other matching criteria according to your setup
  - res_id: k8s-cluster-runtime

  3. (Recommended/required) Use the Humanitec Agent to securely route runtime data requests from the Platform Orchestrator to your cluster. This is a hard requirement for any cluster that is not accessible from outside of your own infrastructure, which will be the case for most clusters in a GitOps setup. Use the secret agent_url in the Resource Definition to configure Agent use as shown in the example.

  4. Have the GitOps cluster Resource Definition co-provision a Resource of the runtime cluster type by adding a provision section as shown in the example.

Sample GitOps cluster Resource Definition with co-provisioning of runtime cluster

github-for-gitops.yaml (view on GitHub):

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: github-for-gitops
entity:
  name: github-for-gitops
  driver_type: humanitec/k8s-cluster-git
  type: k8s-cluster
  driver_inputs:
    values:
      # Git repository for pushing manifests
      url: git@github.com:example-org/gitops-repo.git
      # When using a GitHub personal access token, use the HTTPS URL:
      # url: https://github.com/example-org/gitops-repo.git
      # Branch in the git repository, optional. If not specified, the default branch is used.
      branch: development
      # Path in the git repository, optional. If not specified, the root is used.
      path: "${context.app.id}/${context.env.id}"
      # Load Balancer, optional. Though it's not related to the GitOps setup, it's used to create ingress in the target K8s cluster if such resources are part of the Resource Graph, just like with a non-GitOps cluster.
      loadbalancer: 35.10.10.10
    secrets:
      credentials:
        ssh_key: my-git-ssh-key
        # Alternative to ssh_key: password or Personal Access Token
        # password: my-git-ssh-pat
  # Co-provision a non-GitOps cluster Resource from which the Orchestrator will fetch runtime info.
  # The provision key specifies `k8s-cluster-runtime` as the Resource ID, which must be matched
  # by the matching criteria of the non-GitOps cluster Resource Definition.
  provision:
    k8s-cluster#k8s-cluster-runtime:
      is_dependent: false
      match_dependents: false
  5. Define a naming pattern for your namespaces, and implement it in both your GitOps operator (e.g. ArgoCD) as well as in the Platform Orchestrator.

    For any Application Environment, the GitOps Operator needs to synchronize all objects into the same namespace where the runtime cluster setup expects to find the workloads. The Orchestrator queries the Resource of type k8s-namespace that is implicitly added to any Deployment.

    You can adjust the namespace name provided by that Resource as described in namespaces. The GitOps runtime example also contains a sample implementation; a sketch is shown after this step.
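
As an illustration, a naming pattern of <app-id>-<env-id>, matching the destination namespaces used in the ArgoCD examples above, might be implemented on the Orchestrator side like this. This is a minimal sketch using the echo Driver; the id custom-namespace and the pattern are assumptions to adapt to your own topology:

apiVersion: entity.humanitec.io/v1b1
kind: Definition
metadata:
  id: custom-namespace
entity:
  name: custom-namespace
  type: k8s-namespace
  driver_type: humanitec/echo
  driver_inputs:
    values:
      # Produces e.g. "gitops-app-development"
      namespace: ${context.app.id}-${context.env.id}
  criteria:
  # Match all Deployments; narrow down as required
  - {}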

  6. Start deploying. Any Deployment matching the Resource Definitions you created will now have runtime data available through the Humanitec Portal or by using the CLI:

# Get runtime information about the Environment "development" in app "gitops-app"
humctl api get \
  "/orgs/my-org/apps/gitops-app/envs/development/runtime"

# Get container logs of the container "my-container" of workload "my-workload"
humctl api get \
  "/orgs/my-org/apps/gitops-app/envs/development/logs?workload_id=my-workload&container_id=my-container"

Recap

You have seen how to:

  • Design your Git repository layout to match your Application and Environment topology
  • Configure the Platform Orchestrator to deploy CRs to a Git target through a specific cluster Resource Definition
  • Configure your GitOps operator to sync objects from that Git target into the proper namespace
  • Perform an end-to-end deployment via the Orchestrator and Score
  • Optionally provide access to workload runtime data via the Orchestrator