# ⛵ Helm Chart

## Overview
This guide explains how to deploy enVector services on an existing Kubernetes cluster using Helm. It focuses on configuring external dependencies (database, storage) and license management, plus options to manage sensitive values safely.
Running enVector on Kubernetes provides:

- **Parity**: closer to production-grade behavior than Docker Compose
- **Scalability**: Horizontal Pod Autoscaling (HPA) and node pools
- **Security**: namespaces, RBAC, and network policies
- **Extensibility**: integration with Ingress controllers, TLS (e.g., cert-manager), storage classes, and monitoring stacks (installed separately)
## Prerequisites

Prepare the following before deploying:

- A running Kubernetes cluster (local or managed, e.g., GKE, OKE, EKS)
- kubectl v1.28+ and Helm v3.12+ installed
- Administrative privileges on the cluster
- 64 GB+ RAM and 50 GB+ free disk space available in the cluster
- Docker Hub access token (for pulling private enVector images)
- A PostgreSQL service reachable from the cluster (e.g., OCI Database with PostgreSQL, GCP Cloud SQL, or equivalent)
- An S3-compatible object storage service (e.g., OCI Object Storage, AWS S3, MinIO)
- (Optional) Ingress, TLS, monitoring, StorageClass, NetworkPolicy: not included in the Helm chart; install separately if required
## Step 1: Get the Helm Chart from GitHub

### Release naming & ServiceAccount (IRSA / Workload Identity)
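A typical fetch clones the chart repository and works from the local path; the repository URL below is a placeholder, so substitute the actual enVector chart location:

```shell
# Clone the repository that contains the chart (placeholder URL)
git clone https://github.com/your-org/envector-helm.git
cd envector-helm

# Inspect the chart's default configuration before editing
helm show values .
```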
Before editing `values.yaml`, decide how the release name and ServiceAccount should look:

- `nameOverride`/`fullnameOverride` let you pin predictable resource names if your platform enforces naming rules.
- `labels` adds custom labels to every Kubernetes object (helpful for cost allocation or policy engines).
- `serviceAccount.create`, `serviceAccount.name`, and `serviceAccount.annotations` control the ServiceAccount shared by all Pods. Add IAM, IRSA, or Workload Identity annotations here instead of editing templates manually.
These knobs live near the top of `values.yaml` so you can align the deployment with your organization's naming and IAM standards before moving on to the rest of the configuration.
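A sketch of these top-of-file knobs, assuming an EKS/IRSA setup (the role ARN, names, and labels are placeholders):

```yaml
nameOverride: ""
fullnameOverride: "envector"   # pin predictable resource names

labels:
  team: platform               # added to every Kubernetes object

serviceAccount:
  create: true
  name: envector
  annotations:
    # IRSA (EKS) example; use iam.gke.io/gcp-service-account for GKE Workload Identity
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/envector
```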
## Step 2: Configure `values.yaml` (DB, Storage, License)

The chart ships with a default `values.yaml`. Update the following keys to match your environment.
### 2.1 Metadata DB (PostgreSQL)

Notes:

- Use a PostgreSQL endpoint reachable from the cluster.
- Adjust `sslmode` to your security posture (`require`, `verify-ca`, `verify-full`, `disable`).
- `schema` is passed to the services as `ES2_DB_SERVICE_SCHEMA`.
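A hedged example of the DB block; the exact key names and nesting under `externalServices` may differ in your chart version, but the fields mirror the notes above:

```yaml
externalServices:
  db:
    host: postgres.internal.example.com
    port: 5432
    user: envector
    password: ""          # prefer ESO (see 2.4) over plain text here
    database: envector
    schema: es2           # exported as ES2_DB_SERVICE_SCHEMA
    sslmode: require      # require | verify-ca | verify-full | disable
```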
### 2.2 Object Storage (S3-compatible, e.g., MinIO/S3/OCI)

Notes:

- es2b waits for the storage endpoint (`host:port`) before starting.
- For AWS S3, set `secure: true`, `port: 443`, and `hasHealthcheckEndpoint: false`.
- For MinIO, set the MinIO service DNS name and port; you may set `hasHealthcheckEndpoint: true`.
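A sketch of the storage block for AWS S3, using the field names from the notes above (host, region, and bucket are placeholders; for MinIO, swap in the service DNS/port and flip the flags as described):

```yaml
externalServices:
  storage:
    host: s3.ap-northeast-2.amazonaws.com
    port: 443
    secure: true                   # TLS to the endpoint
    region: ap-northeast-2
    bucket_name: envector-data
    hasHealthcheckEndpoint: false  # true for MinIO, false for AWS/OCI
```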
### 2.3 License (es2c)

The es2c component reads a license token from `/es2/license/token.jwt`. You have three ways to provide it:

**Option 1: Chart-managed Secret (simple & secure by default)**
First install (creates the Secret using your token file):
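One possible shape of the first install; the `--set-file` key for the token is an assumption, so check your chart's `es2c.license` values for the exact path (release and chart names are examples):

```shell
helm install envector ./envector -n envector --create-namespace \
  -f values.yaml \
  --set-file es2c.license.token=./token.jwt
```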
Upgrades reuse the existing Secret automatically. To rotate the token:
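Rotation can be done by recreating the Secret in place and restarting es2c; the Secret and Deployment names below are placeholders for whatever the chart created in your release:

```shell
# Recreate the Secret with the new token (idempotent apply)
kubectl create secret generic envector-license -n envector \
  --from-file=token.jwt=./token.jwt \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart es2c so it remounts the rotated token
kubectl rollout restart deployment/envector-es2c -n envector
```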
**Option 2: Use an existing Secret you manage**
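If you manage the Secret yourself, point the chart at it in `values.yaml`; `secret.name` is an assumed key alongside the documented `secret.key`:

```yaml
es2c:
  license:
    secret:
      name: my-license-secret   # Secret you created out of band
      key: token.jwt            # must match the key inside the Secret
```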
Or via CLI:
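The same override can be passed on the command line (release/chart names are examples, and the `secret.name` key is the same assumption as above):

```shell
helm upgrade --install envector ./envector -n envector \
  --set es2c.license.secret.name=my-license-secret \
  --set es2c.license.secret.key=token.jwt
```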
**Option 3: External Secrets Operator (ESO), recommended for centralized secret management**
See 2.4 below to enable ESO and map the token.
### 2.4 Managing sensitive values via External Secrets Operator (optional)

Set `externalSecrets.enabled=true` when you want ESO to write Kubernetes Secrets for the chart. Each credential type uses its own sub-block so you can mix and match:
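A sketch of the ESO block; the `remoteRef` keys and the SecretStore name are placeholders for your own secret backend:

```yaml
externalSecrets:
  enabled: true
  secretStoreRef:                  # global default store
    name: aws-secrets-manager
    kind: ClusterSecretStore
  dbSecret:
    data:
      ES2_DB_SERVICE_URL:
        remoteRef: { key: prod/envector/db-url }
  storageSecret:
    data:
      ES2_STORAGE_SERVICE_USER:
        remoteRef: { key: prod/envector/storage-user }
      ES2_STORAGE_SERVICE_PASSWORD:
        remoteRef: { key: prod/envector/storage-password }
  license:
    remoteRef:
      key: prod/envector/license   # required for the license ExternalSecret to render
```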
Notes:

- `secretStoreRef` is mandatory whenever any ESO block is enabled.
- `dbSecret.data` and `storageSecret.data` mirror the env keys that Pods expect (`ES2_DB_SERVICE_URL`, `ES2_STORAGE_SERVICE_USER`, `ES2_STORAGE_SERVICE_PASSWORD`).
- The license ExternalSecret only renders when both `externalSecrets.enabled=true` and `externalSecrets.license.remoteRef.key` are set, matching the template logic.
- Need to mix AWS Parameter Store (for DB/storage) with Secrets Manager (for license)? Set `externalSecrets.<name>.secretStoreRef` under each block to point at different SecretStores; if omitted, each block falls back to the global `secretStoreRef`.
- Non-sensitive values such as storage `host`, `port`, `secure`, `region`, `bucket_name`, and DB `schema` still live under `externalServices`.
### 2.5 Image pull secret

Create the image pull secret (example name: `regcred`):
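With a Docker Hub access token, the Secret can be created like this (username, token, and namespace are placeholders):

```shell
kubectl create secret docker-registry regcred -n envector \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<dockerhub-username> \
  --docker-password=<dockerhub-access-token>
```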
Required: set this to the Docker registry Secret you created with your Docker Hub access token. There is no default; configure it explicitly. Example using the Secret name `regcred`:
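Assuming the chart uses the conventional `imagePullSecrets` list in `values.yaml`:

```yaml
imagePullSecrets:
  - name: regcred
```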
### 2.6 Image tags (version alignment)

Set the image tags in `values.yaml` to match the ES2 version you plan to deploy. Avoid `latest` in production. Example (v1.1.0):
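A hedged sketch; the component names below follow the es2b/es2c/es2e naming used elsewhere in this guide, but check your `values.yaml` for the full list and exact structure:

```yaml
es2b:
  image:
    tag: v1.1.0
es2c:
  image:
    tag: v1.1.0
es2e:
  image:
    tag: v1.1.0
```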
## Step 3: Dry-run (render & validate without installing)

Check for:

- No template errors
- Expected env vars, including the DB/storage configuration
- The image pull secret reference present (e.g., `regcred`)
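Render the chart locally, or validate it against the API server without creating resources (release, chart path, and namespace are examples):

```shell
# Render templates locally for inspection
helm template envector ./envector -f values.yaml > rendered.yaml

# Or validate against the cluster without installing
helm install envector ./envector -n envector -f values.yaml \
  --dry-run --debug
```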
## Step 4: Install the chart
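A minimal install sketch (names are examples; reuse the `values.yaml` you prepared in Step 2):

```shell
helm install envector ./envector -n envector --create-namespace \
  -f values.yaml
```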
Verify resources:
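Typical verification commands (the Deployment name is an example of what the chart might render):

```shell
kubectl get pods,svc,secrets -n envector
helm status envector -n envector

# Tail a component's logs if a Pod is not becoming Ready
kubectl logs deploy/envector-es2c -n envector
```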
## TLS/HTTPS (Ingress)
Optional: TLS/HTTPS is not required. The chart works over plain HTTP by default. Enable one of the options below only if you want HTTPS via Ingress, or terminate TLS at an external load balancer instead.
### Prerequisites

1. Install an Ingress controller (e.g., NGINX) and note the `ingress.className`.
2. Point your domain (DNS) at the Ingress Service `EXTERNAL-IP`.
3. Optional: install cert-manager and create a ClusterIssuer (recommended).
### Quick Setup: cert-manager + ClusterIssuer (if not installed)

#### Option A) cert-manager automated issuance (recommended)
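A common setup path, following the upstream cert-manager Helm install (the email and issuer name are placeholders; on older cert-manager releases the CRD flag is `installCRDs=true`):

```shell
# Install cert-manager with its CRDs
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  -n cert-manager --create-namespace \
  --set crds.enabled=true

# Create a Let's Encrypt ClusterIssuer using the HTTP-01 solver
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
EOF
```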
Enable Ingress in `values.yaml` and set the ClusterIssuer.
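A sketch following common chart conventions (hostname, issuer, and Secret names are placeholders; verify the exact structure against your chart's `ingress` values):

```yaml
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: envector.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: envector-tls    # cert-manager writes the certificate here
      hosts:
        - envector.example.com
```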
Deploy/upgrade (namespace optional):
#### Option B) Manual TLS Secret (private CA or internal networks)

Create a TLS Secret from your PEM files, then reference it in `values.yaml`.
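For example (Secret name and file paths are placeholders):

```shell
kubectl create secret tls envector-tls -n envector \
  --cert=./tls.crt --key=./tls.key
```

Then reference `envector-tls` as the Ingress TLS Secret in `values.yaml` and omit the cert-manager annotation.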
### Testing
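A few quick checks once DNS resolves (the hostname is a placeholder):

```shell
# Expect 200 OK and a valid certificate chain
curl -v https://envector.example.com/healthz

# Check the HTTP -> HTTPS redirect, if enabled
curl -I http://envector.example.com/healthz

# For a private CA (Option B), pass the CA bundle explicitly
curl --cacert ./ca.crt https://envector.example.com/healthz
```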
### Troubleshooting

- Port 80 blocked or pre-redirected: HTTP-01 fails → allow port 80 or use DNS-01.
- Ingress class mismatch: `ingress.className` must match your controller.
- Hostname mismatch: `spec.rules.host` must match the TLS hosts.
- DNS propagation delay: wait for new records to propagate.
- Secret format: must be `kubernetes.io/tls` with `tls.crt`/`tls.key`.
- Private cluster: public ACME unreachable → issue via DNS-01.
- Node clock skew: ACME errors → sync nodes with NTP.
### Ingress health checks

If your Ingress or external load balancer requires an HTTP health-check path (e.g., an AWS ALB Target Group), the services expose lightweight health endpoints:
- `GET /healthz`: always 200 OK (liveness)
- `GET /livez`: alias of `/healthz`
- `GET /readyz`: 200 OK when the backend is healthy, 503 otherwise
- `GET /version`: returns the version as plain text
- `GET /`: 200 OK with body "ok"
Example (AWS Load Balancer Controller with ALB): set the annotations and route `/healthz` to the backend Service you expose via Ingress (commonly es2e).
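A sketch using the AWS Load Balancer Controller's Ingress annotations (host and Service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: envector
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
    alb.ingress.kubernetes.io/success-codes: "200"
spec:
  ingressClassName: alb
  rules:
    - host: envector.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: envector-es2e   # example backend Service name
                port:
                  number: 80
```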
Notes:

- If you terminate TLS at the Ingress, keep the health-check path under the same host and scheme.
- For NGINX Ingress, backend Pod readiness typically controls endpoint health; you can still expose `/healthz` if your upstream L7 health checker needs a path.
## Notes & tips

- License: es2c mounts the license at `/es2/license` and reads the `ES2_LICENSE_TOKEN` path from the ConfigMap. Keep the Secret key name aligned with `es2c.license.secret.key` (default: `token.jwt`).
- Storage: for MinIO, you may enable `hasHealthcheckEndpoint: true`. For AWS/OCI, keep it `false`.
- Networking: ensure Pods can reach the DB and storage endpoints (Security Groups/NetworkPolicies/firewalls).
- Upgrades: when using an external Secret for the license, you can force a rollout by bumping `es2c.license.secret.checksum` (e.g., `sha256sum token.jwt | cut -d' ' -f1`).
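The checksum bump can be scripted; a minimal runnable sketch (the `printf` line only creates a sample file so the commands run anywhere; point `sha256sum` at your real `token.jwt`, and the release/chart names in the commented upgrade are examples):

```shell
# Create a sample token file (substitute your real token.jwt)
printf 'sample-license-token' > token.jwt

# Derive the checksum value to pass to the chart
sha256sum token.jwt | cut -d' ' -f1

# Then force a rollout on upgrade (requires a cluster):
# helm upgrade envector ./envector -n envector --reuse-values \
#   --set es2c.license.secret.checksum=$(sha256sum token.jwt | cut -d' ' -f1)
```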