Overview
Humanitec is used by companies of many sizes operating in many industries, each with its own security and compliance requirements. This page outlines the ways in which Humanitec can be configured to meet those requirements.
Connectivity
To deploy Workloads and provision infrastructure, Humanitec requires network connectivity to various systems. This includes the Kubernetes API of clusters being deployed into and some resource instances such as databases. Since these systems generally reside in private networks, network connectivity between the Humanitec SaaS system and those systems needs to be established.
Humanitec provides two primary ways of configuring access: direct and Agent. A third option, using a bastion host, exists specifically for working with Relational Database instances.
Direct
Control plane endpoints can be directly exposed to the public internet with an allow list including Humanitec’s source IPs.
The following are the current source IPs for Humanitec:
34.159.97.57
35.198.74.96
34.141.77.162
34.89.188.214
34.159.140.35
34.89.165.141
34.32.134.107
34.91.7.12
34.91.109.253
34.141.184.227
34.147.1.204
35.204.216.33
If you are using the Humanitec Terraform Provider, you can extract these IP ranges from a Data Source.
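For example, the published IPs can be assembled into a comma-separated CIDR list suitable for a control plane allow list. A minimal shell sketch, assuming a GKE cluster with master authorized networks (the cluster name is hypothetical; adapt the commented command to your own provider's allow-list mechanism):

```shell
# Humanitec source IPs from the list above, rendered as /32 CIDRs
CIDRS=$(printf '%s/32,' \
  34.159.97.57 35.198.74.96 34.141.77.162 34.89.188.214 \
  34.159.140.35 34.89.165.141 34.32.134.107 34.91.7.12 \
  34.91.109.253 34.141.184.227 34.147.1.204 35.204.216.33)
CIDRS=${CIDRS%,}   # drop the trailing comma
echo "$CIDRS"

# Hypothetical example of applying the list on GKE
# (cluster name "my-cluster" is an illustration):
#   gcloud container clusters update my-cluster \
#     --enable-master-authorized-networks \
#     --master-authorized-networks="$CIDRS"
```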
These are not the public IPs your Workloads will use for egress; those depend on the cluster you are deploying to.
For the Humanitec default (demo) clusters, IPs may also be different.
Hint: To see the public IP of any deployed workload, have it log the output of this statement:
wget -qO- https://ipecho.net/plain | xargs echo
Agent
The Humanitec Agent is a small containerized Workload that runs within the client’s private network. The Agent establishes a secure encrypted tunnel with the Humanitec SaaS system. Using the Agent allows fine-grained control over which endpoints Humanitec can access. This also means that the private network does not have to be exposed to the public network.
At least one instance of the Agent should be run in each disconnected network.
Secret management
Secret management is an important part of any secure Internal Developer Platform (IDP). Secrets can be generated as part of infrastructure provisioning such as database credentials, or be supplied by developers such as API tokens for third-party services.
Managing secrets securely is best achieved by using a dedicated secret store solution providing access control, versioning, and advanced features like key rotation and notifications.
The Platform Orchestrator comes integrated with a secret store managed by Humanitec. You can choose to use that store or work with secret stores running in your own private network.
Regardless of the secret store in use, the Humanitec API never returns secrets. Secrets are only fetched from the secret store at deployment time and are used in the provisioning of Resources or for creating Kubernetes (K8s) Secret objects. How that’s done depends on the secret store you use.
Humanitec-hosted secret store
By default, the Platform Orchestrator acts as the IDP’s secret manager. This is useful for organizations that seek a fully managed experience.
All secrets submitted into the Platform Orchestrator via any interface (API, UI, CLI, or Terraform provider) are securely stored in an internal secret vault.
Because the Platform Orchestrator can access all secrets, it can deploy K8s manifests directly into the target cluster including K8s Secret objects, and call any Drivers requiring secrets for Resource provisioning.
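As an illustration, a Secret object created this way can be inspected like any other K8s Secret. The secret and key names below are hypothetical, and K8s stores Secret data base64-encoded:

```shell
# Hypothetical secret and key names, for illustration only:
#   kubectl get secret my-workload-secrets -n my-namespace \
#     -o jsonpath='{.data.DB_PASSWORD}' | base64 -d
# The base64 encoding K8s applies to Secret data looks like this:
printf 'example-value' | base64
```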
Your own secret store
Using your own secret store, or stores, is the preferred mode for organizations who want to ensure that secrets are always stored within their own networks.
In this mode, the Platform Orchestrator does not store any secrets except for the access credentials it requires to directly access systems. These are secrets related to Resources of type k8s-cluster, logging, and k8s-namespace. All other Workload and Resource secrets are provided through the configured secret store and maintained by your own processes.
To use this mode, it’s necessary to run the Humanitec Operator in the K8s cluster being deployed to. The Platform Orchestrator writes instructions for the Humanitec Operator through K8s custom resources (CRs) which are free from any secrets. Instead, they contain secret references identifying particular secrets in your own stores.
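To illustrate the idea, a secret reference inside such a CR might look roughly like the sketch below. The kind and field names are assumptions for illustration, not the actual Operator CRD schema; the point is that only a store ID and a path appear, never the secret value itself:

```shell
# Print a sketch of a secret reference as it might appear in a CR.
# All kinds and field names here are hypothetical illustrations.
CR_SKETCH=$(cat <<'EOF'
apiVersion: humanitec.io/v1alpha1   # assumed API group/version
kind: Resource                      # assumed kind
spec:
  secret_refs:
    password:
      store: my-vault-store         # ID of your configured secret store
      ref: path/to/db-credentials   # location of the secret in that store
EOF
)
echo "$CR_SKETCH"
```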
The Humanitec Operator can extract secrets from the secret store, inject them into Workloads via K8s Secret objects, and call Drivers as part of Resource provisioning. Optionally, you can still write secrets via the Platform Orchestrator.
See the Humanitec Operator architecture guide for details on this setup, and for further options not requiring any cluster access.
Supported secret stores
At this time, the following external secret stores are supported:
- AWS Secrets Manager
- Azure Key Vault
- Google Cloud Secret Manager (see how to connect it)
- HashiCorp Vault (see how to connect it)
Secrets lifecycle
The lifecycle of a secret depends on who is managing it.
If a secret was introduced via the Platform Orchestrator (regardless of whether it’s been written to the Humanitec-hosted secret store or your own secret store), then the Platform Orchestrator manages it. Its lifecycle is tied to the lifecycle of its associated object:
- If a secret is related to Resources, it is tied to the Lifecycle of its Resource.
- If a secret is a secret Shared Value, it is tied to the Lifecycle of a Shared Value.
If a secret is used via a secret reference, then you manage it according to your own processes. Neither the Platform Orchestrator nor the Humanitec Operator will delete it, and they do not require write access to it.