⛵ Helm Chart
Overview
This guide explains how to deploy enVector services on an existing Kubernetes cluster using Helm. It focuses on configuring external dependencies (database, storage) and license management, plus options to manage sensitive values safely.
Running enVector on Kubernetes provides:
Parity: Closer to production-grade behavior than Docker Compose
Scalability: Horizontal Pod Autoscaling (HPA) and node pools
Security: Namespaces, RBAC, and network policies
Extensibility: Ability to integrate with Ingress controllers, TLS (e.g., cert-manager), storage classes, and monitoring stacks (to be installed separately)
Prerequisites
Prepare the following before deploying:
Kubernetes cluster running (local or managed, e.g. GKE, OKE, EKS)
kubectl v1.28+ and Helm v3.12+ installed
Administrative privileges on the cluster
64GB+ RAM and 50GB+ free disk space available in the cluster
Docker Hub access token (for pulling private enVector images)
PostgreSQL service available (e.g., OCI Database with PostgreSQL, GCP Cloud SQL, or equivalent)
S3-compatible object storage service available (e.g., OCI Object Storage, AWS S3, MinIO, etc.)
(Optional) Ingress, TLS, Monitoring, StorageClass, NetworkPolicy — not included in the Helm chart; install separately if required
Step 1: Get the Helm Chart from GitHub
mkdir helm_chart
# Clone the repository
git clone https://github.com/CryptoLabInc/envector-deployment.git
# Copy Helm chart directory
cp -r envector-deployment/kubernetes-manifests/helm ./helm_chart
cd helm_chart
Release naming & ServiceAccount (IRSA / Workload Identity)
Before editing values.yaml, decide how the release name and ServiceAccount should look:
nameOverride / fullnameOverride let you pin predictable resource names if your platform enforces naming rules.
labels adds custom labels to every Kubernetes object (helpful for cost allocation or policy engines).
serviceAccount.create, serviceAccount.name, and serviceAccount.annotations control the ServiceAccount shared by all Pods. Add IAM, IRSA, or Workload Identity annotations here instead of editing templates manually.
These knobs live near the top of values.yaml so you can align the deployment with your organization’s naming and IAM standards before moving on to the rest of the configuration.
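For example, on EKS with IRSA you might pin the ServiceAccount name and attach a role annotation through an override file. A minimal sketch; the role ARN and account ID below are hypothetical placeholders:
cat > sa-overrides.yaml << 'EOF'
serviceAccount:
  create: true
  name: envector-sa
  annotations:
    # Hypothetical IRSA role ARN; replace with your own
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/envector-irsa
EOF
helm upgrade --install envector . -f sa-overrides.yaml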
Step 2: Configure values.yaml (DB, Storage, License)
The chart ships with a values.yaml. Update these keys to match your environment.
2.1 Metadata DB (PostgreSQL)
externalServices:
  metadatadb:
    # libpq DSN format
    url: "user=<user> password=<password> host=<host> port=<port> dbname=<database> sslmode=require options='-c search_path=public'"
    schema: "public" # default schema
Notes:
Use a PostgreSQL endpoint reachable from the cluster.
Adjust sslmode to your security posture (require, verify-ca, verify-full, disable).
schema is passed as ES2_DB_SERVICE_SCHEMA.
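Before installing, you can sanity-check connectivity from inside the cluster. A quick sketch using the public postgres image; substitute your real DSN values:
# Run a one-off Pod and issue a trivial query against the DSN
kubectl run pg-check --rm -it --restart=Never --image=postgres:16 -- \
  psql "host=<host> port=<port> dbname=<database> user=<user> password=<password> sslmode=require" -c 'SELECT 1;'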
2.2 Object Storage (S3-compatible, e.g., MinIO/S3/OCI)
externalServices:
  storage:
    host: <storage_host>          # e.g., minio.minio.svc, s3.amazonaws.com, <gateway>
    port: <storage_port>          # e.g., 9000 for MinIO, 443 for AWS S3
    user: <access_key>
    password: <secret_key>
    secure: true                  # true=https, false=http
    region: <region>              # e.g., us-east-1, ap-seoul-1
    create: false                 # true=create bucket on startup
    bucket_name: <bucket_name>
    hasHealthcheckEndpoint: false # true only for MinIO; false for AWS/OCI/etc.
Notes:
es2b waits for the storage endpoint (host:port) before starting.
For AWS S3, set secure: true, port: 443, and hasHealthcheckEndpoint: false.
For MinIO, set the MinIO service DNS/port; you may set hasHealthcheckEndpoint: true.
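To confirm the endpoint is reachable from the cluster before es2b starts waiting on it, a quick probe (a sketch; the health path shown applies to MinIO only, and the scheme must match your secure setting):
# Expect HTTP 200 from MinIO's liveness endpoint
kubectl run s3-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sS -o /dev/null -w '%{http_code}\n' http://<storage_host>:<storage_port>/minio/health/live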
2.3 License (es2c)
The es2c component reads a license token from /es2/license/token.jwt. You have three ways to provide it:
Chart-managed Secret (simple & secure by default)
es2c:
  license:
    enabled: true
    createSecret: true
    tokenPath: "/es2/license/token.jwt"
    token: "" # leave empty in values; provide at install time via --set-file
First install (creates the Secret using your token file):
helm install envector . \
--set es2c.license.enabled=true \
--set es2c.license.createSecret=true \
--set-file es2c.license.token=./token.jwt
Upgrades reuse the existing Secret automatically. To rotate the token:
helm upgrade envector . --set-file es2c.license.token=./new-token.jwt
Use an existing Secret you manage
es2c:
  license:
    enabled: true
    createSecret: false
    existingSecret: my-license # your Secret name
    secret:
      key: token.jwt # key inside the Secret
Or via CLI:
helm upgrade --install envector . \
--set es2c.license.enabled=true \
--set es2c.license.createSecret=false \
--set es2c.license.existingSecret=my-license
External Secrets Operator (ESO) – recommended for centralized secret management
See 2.4 below to enable ESO and map the token.
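Whichever option you choose, you can confirm the token actually reached the Pod. A sketch; the Secret and Deployment names below depend on your release name and values, so adjust them:
# Inspect the first bytes of the stored token (Secret name shown is the existingSecret example)
kubectl get secret my-license -o jsonpath='{.data.token\.jwt}' | base64 -d | head -c 40; echo
# Confirm the file is mounted where es2c expects it (Deployment name is hypothetical)
kubectl exec deploy/envector-es2c -- ls -l /es2/license/token.jwt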
2.4 Managing sensitive values via External Secrets Operator (optional)
Set externalSecrets.enabled=true when you want ESO to write Kubernetes Secrets for the chart. Each credential type uses its own sub-block so you can mix and match:
externalSecrets:
  enabled: true
  secretStoreRef:
    kind: ClusterSecretStore
    name: my-secret-store
  dbSecret:
    enabled: true
    data:
      - secretKey: ES2_DB_SERVICE_URL
        remoteRef:
          key: prod/app/db
          property: db_url
  storageSecret:
    enabled: true
    data:
      - secretKey: ES2_STORAGE_SERVICE_USER
        remoteRef: { key: prod/minio, property: accessKey }
      - secretKey: ES2_STORAGE_SERVICE_PASSWORD
        remoteRef: { key: prod/minio, property: secretKey }
  license:
    enabled: true
    remoteRef:
      key: prod/es2/license
      property: token.jwt
Notes:
secretStoreRef is mandatory whenever any ESO block is enabled.
dbSecret.data and storageSecret.data mirror the env keys that Pods expect (ES2_DB_SERVICE_URL, ES2_STORAGE_SERVICE_USER, ES2_STORAGE_SERVICE_PASSWORD).
The license ExternalSecret only renders when both externalSecrets.enabled=true and externalSecrets.license.remoteRef.key are set, matching the template logic.
Need to mix AWS Parameter Store (for DB/storage) with Secrets Manager (for license)? Set externalSecrets.<name>.secretStoreRef under each block to point at different SecretStores; if omitted, each block falls back to the global secretStoreRef.
Non-sensitive values such as storage host, port, secure, region, bucket_name, and DB schema still live under externalServices.
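Once applied, verify that ESO has actually synced the Secrets (a quick sketch; resource names are generated by the chart, so list them first):
# List ExternalSecrets and check their Ready condition
kubectl get externalsecrets.external-secrets.io
# Drill into one that is not syncing
kubectl describe externalsecret <name>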
2.5 Image pull secret
Create image pull secret (example name: regcred):
kubectl create secret docker-registry regcred \
--docker-server=https://index.docker.io/v1/ \
--docker-username="<YOUR_DOCKERHUB_USERNAME>" \
--docker-password="<YOUR_DOCKERHUB_ACCESS_TOKEN>" \
--docker-email="<YOUR_EMAIL>"
Required: point imagePullSecrets at the Docker registry Secret you created with your Docker token. There is no default; configure it explicitly. Example using the Secret name regcred:
imagePullSecrets: [regcred]
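You can confirm the Secret is present and of the right type before installing (a quick check):
# Expect type kubernetes.io/dockerconfigjson
kubectl get secret regcred -o jsonpath='{.type}'; echo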
2.6 Image tags (version alignment)
Set the image tags in values.yaml to match the ES2 version you plan to deploy. Avoid using latest in production. Example (v1.1.0):
es2e:
  image:
    tag: v1.1.0
es2b:
  image:
    tag: v1.1.0
es2o:
  image:
    tag: v1.1.0
es2c:
  # cpu or gpu; when set to gpu, the chart appends "-gpu" to the tag automatically
  variant: cpu
  image:
    tag: v1.1.0
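If you prefer not to edit values.yaml, the same keys can be set via CLI flags (a sketch using the keys shown above):
helm upgrade --install envector . \
  --set es2e.image.tag=v1.1.0 \
  --set es2b.image.tag=v1.1.0 \
  --set es2o.image.tag=v1.1.0 \
  --set es2c.image.tag=v1.1.0 \
  --set es2c.variant=cpu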
Step 3: Dry-run (render & validate without installing)
helm upgrade --install envector . --dry-run --debug
Check for:
No template errors
Expected env vars including DB/Storage config
Image pull secret reference present (e.g., regcred)
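To spot-check the rendered manifests for the values you configured, you can grep the dry-run output (a sketch):
helm upgrade --install envector . --dry-run --debug 2>/dev/null \
  | grep -E -A1 'ES2_DB_SERVICE|ES2_STORAGE_SERVICE|imagePullSecrets'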
Step 4: Install the chart
helm upgrade --install envector .
Verify resources:
kubectl get pods,svc
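To block until the release is actually up, you can wait on Pod readiness. A sketch; the instance label assumes the standard Helm labels that chart templates typically apply:
# Wait for all Pods of the release to become Ready
kubectl wait --for=condition=Ready pod \
  -l app.kubernetes.io/instance=envector --timeout=300s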
TLS/HTTPS (Ingress)
Optional: TLS/HTTPS is not required. The chart works over plain HTTP by default. Enable one of the options below only if you want HTTPS via Ingress, or terminate TLS at an external load balancer instead.
Prerequisites
Install an Ingress Controller (e.g., NGINX) and note the ingress.className.
Point your domain (DNS) to the Ingress Service EXTERNAL-IP.
Optional: Install cert-manager and create a ClusterIssuer (recommended).
Quick Setup: cert-manager + ClusterIssuer (if not installed)
# Install cert-manager with CRDs
helm repo add jetstack https://charts.jetstack.io && helm repo update
helm install cert-manager jetstack/cert-manager -n cert-manager --create-namespace --set installCRDs=true
# Create a staging ClusterIssuer (validate here first)
cat > letsencrypt-staging.yaml << 'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: [email protected]
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
kubectl apply -f letsencrypt-staging.yaml
# Verify
kubectl get clusterissuer
# After validation, create a production issuer similarly
# server: https://acme-v02.api.letsencrypt.org/directory
Option A) cert-manager automated issuance (recommended)
Enable Ingress in values.yaml and set the ClusterIssuer.
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  hosts:
    - host: app.example.com
      paths:
        - path: /
          pathType: Prefix
          service:
            name: es2e
            port: 50050
  tls:
    - secretName: app-tls
      hosts:
        - app.example.com
Deploy/upgrade (namespace optional):
helm upgrade --install envector . -n <namespace> --create-namespace
# Note: when using --set, escape dots in annotation keys
helm upgrade --install envector . -n <namespace> \
--set ingress.enabled=true \
--set ingress.className=nginx \
--set ingress.annotations."cert-manager\.io/cluster-issuer"=letsencrypt-prod \
--set ingress.tls[0].secretName=app-tls \
--set ingress.tls[0].hosts[0]=app.example.com \
--set ingress.hosts[0].host=app.example.com
Option B) Manual TLS Secret (private CA or internal networks)
Create a TLS Secret from your PEM files, then reference it in values.yaml.
kubectl create secret tls app-tls \
--cert=fullchain.pem --key=privkey.pem -n <namespace>
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: app.example.com
      paths:
        - path: /
          pathType: Prefix
          service:
            name: es2e
            port: 50050
  tls:
    - secretName: app-tls
      hosts:
        - app.example.com
Testing
# DNS
nslookup app.example.com
# Pre-propagation HTTPS check
curl --resolve app.example.com:443:<LB_IP> https://app.example.com -vk
# Ingress status
kubectl get ingress -n <namespace>
# HTTPS response
curl -I https://app.example.com
# Certificate details
openssl s_client -connect app.example.com:443 -servername app.example.com -showcerts
# With cert-manager
kubectl get certificate,challenge,order -n <namespace>
kubectl logs -n cert-manager deploy/cert-manager
Troubleshooting
Port 80 blocked or pre-redirected: HTTP-01 fails → allow port 80 or use DNS-01.
Ingress class mismatch: ingress.className must match your controller.
Hostname mismatch: spec.rules.host must match the TLS hosts.
DNS propagation delay: wait for new records to propagate.
Secret format: must be kubernetes.io/tls with tls.crt/tls.key.
Private cluster: public ACME unreachable → issue via DNS-01.
Node clock skew: ACME errors → sync nodes with NTP.
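When issuance stalls, the fastest signal is usually in the challenge objects and Ingress events (a sketch):
# Inspect stuck ACME challenges and recent Ingress events
kubectl describe challenges.acme.cert-manager.io -n <namespace>
kubectl describe ingress -n <namespace>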
Ingress health checks
If your Ingress or external load balancer requires an HTTP health‑check path (e.g., AWS ALB Target Group), the services expose lightweight health endpoints:
GET /healthz: always 200 OK (liveness)
GET /livez: alias of /healthz
GET /readyz: 200 OK when backend is healthy, 503 otherwise
GET /version: returns plain-text version
GET /: 200 OK with body "ok"
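You can probe these locally before wiring up a load balancer. A sketch; svc/es2e and port 50050 follow the service used in the Ingress examples, and assume plain HTTP on that port:
kubectl port-forward svc/es2e 50050:50050 &
curl -s http://localhost:50050/healthz                              # always 200
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:50050/readyz  # 200 or 503
kill %1  # stop the port-forward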
Example (AWS Load Balancer Controller with ALB): set annotations and route /healthz to the backend service you expose via Ingress (commonly es2e).
ingress:
  enabled: true
  className: alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: "15"
    alb.ingress.kubernetes.io/success-codes: "200"
  hosts:
    - host: es2.example.com
      paths:
        - path: /
          pathType: Prefix
          service:
            name: es2e
            port: 50050
        - path: /healthz
          pathType: Exact
          service:
            name: es2e
            port: 50050
Notes:
If you terminate TLS at the Ingress, keep the health‑check path under the same host and scheme.
For NGINX Ingress, backend Pod readiness typically controls endpoint health; you can still expose /healthz if your upstream L7 health checker needs a path.
Notes & tips
License: es2c mounts the license at /es2/license and reads the ES2_LICENSE_TOKEN path from the ConfigMap. Keep the Secret key name aligned with es2c.license.secret.key (default: token.jwt).
Storage: for MinIO, you may enable hasHealthcheckEndpoint: true. For AWS/OCI, keep it false.
Networking: ensure Pods can reach DB and storage endpoints (Security Groups/NetworkPolicies/Firewalls).
Upgrades: when using an external Secret for the license, you can force a rollout by bumping es2c.license.secret.checksum (e.g., sha256sum token.jwt | cut -d' ' -f1).
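For example, after rotating the token out-of-band, the checksum bump might look like this (a sketch using the chart key named above):
# Bump the checksum so the es2c Pods restart and re-read the rotated token
CHECKSUM=$(sha256sum token.jwt | cut -d' ' -f1)
helm upgrade envector . --set es2c.license.secret.checksum="$CHECKSUM"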