# Kubernetes Helm Chart

Running Ente Photos on Kubernetes? There is a Helm chart that makes it easy.

If you're not familiar with them: Kubernetes (K8s) handles container orchestration, and Helm is a package manager for Kubernetes that saves you from writing large amounts of YAML by hand.
This guide walks you through deploying Ente Photos using a community-maintained Helm chart.
Chart resources:
- ArtifactHub - helm-chart package reference
- GitHub - helm-chart source code
## Prerequisites

Before proceeding, ensure you have:

- **Kubernetes cluster:** A running Kubernetes cluster (v1.23+) with `kubectl` configured
- **Helm:** Helm 3.8+ installed on your local machine
- **PostgreSQL database:** An external PostgreSQL database (v14+ recommended). Options include:
  - CloudNativePG (recommended for Kubernetes)
  - Zalando PostgreSQL Operator
  - Managed PostgreSQL (AWS RDS, Google Cloud SQL, Azure Database, etc.)
- **S3-compatible storage:** Object storage for photos and files. Options include:
  - AWS S3
  - Wasabi
  - Backblaze B2
  - Scaleway Object Storage
  - Garage (self-hosted, lightweight)
  - Any S3-compatible provider
- **Ingress controller (optional):** For external access, you'll need an ingress controller such as NGINX Ingress or Traefik
- **TLS certificates (recommended):** cert-manager or pre-provisioned certificates for HTTPS
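You can quickly confirm that your tooling meets the minimum versions from your workstation:

```sh
# Print client and cluster versions, and the installed Helm version
kubectl version
helm version
```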
## Step 1: Add the Helm repository

Add the Helm chart repository and update:

```sh
helm repo add l4g https://l4gdev.github.io/helm-charts
helm repo update
```

Verify the repository is available:

```sh
helm search repo l4g/ente-photos
```

## Step 2: Create a values file
Visit ArtifactHub to view the chart documentation and default values. You can copy the default values.yaml from there and customize it for your deployment.
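Alternatively, you can dump the chart's default values to a local file with Helm and trim them down (a convenience step, not required; `l4g/ente-photos` assumes the repository alias added in Step 1):

```sh
# Write the chart's default values to a local file you can edit
helm show values l4g/ente-photos > values.yaml
```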
At minimum, you need to configure the database and S3 storage.
### Minimal configuration

```yaml
# External PostgreSQL database (required)
externalDatabase:
  host: "your-postgres-host"
  port: 5432
  database: "ente_db"
  user: "ente"
  password: "your-secure-password"

# S3 storage configuration (required)
credentials:
  s3:
    primary:
      key: "your-s3-access-key"
      secret: "your-s3-secret-key"
      endpoint: "https://s3.your-region.amazonaws.com"
      region: "your-region"
      bucket: "your-bucket-name"
```
### Self-hosted S3 configuration

**MinIO**

MinIO has dropped open-source support and is no longer recommended for new deployments. Consider using Garage or a managed S3-compatible service instead.

If you're using self-hosted S3-compatible storage (MinIO, Garage, etc.), enable path-style URLs:
```yaml
museum:
  config:
    s3:
      areLocalBuckets: true
      usePathStyleUrls: true

credentials:
  s3:
    primary:
      key: "your-access-key"
      secret: "your-secret-key"
      endpoint: "https://s3.example.com"
      region: "us-east-1"
      bucket: "ente-photos"
    areLocalBuckets: true
```

### CloudNativePG database
If using CloudNativePG, you can reference the generated secret:

```yaml
externalDatabase:
  host: "ente-db-rw.database.svc.cluster.local"
  port: 5432
  database: "ente_db"
  user: "ente"
  existingSecret:
    enabled: true
    secretName: "ente-db-app"
    passwordKey: "password"
```
## Step 3: Configure ingress (optional)

For external access, configure ingress for each component.

**Automate certificate and DNS management**

If you have cert-manager installed, it can automatically provision TLS certificates from Let's Encrypt (or other issuers) using ingress annotations. Similarly, external-dns can automatically create DNS records for your ingress hosts, so no manual DNS configuration is needed. Both are highly recommended for production Kubernetes deployments.
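As an illustration, a ClusterIssuer like the following would back the `cert-manager.io/cluster-issuer: letsencrypt-prod` annotation used below (a sketch; the email address and the nginx HTTP-01 solver are assumptions for your environment):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```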
```yaml
# Museum API server
museum:
  ingress:
    enabled: true
    className: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      nginx.ingress.kubernetes.io/proxy-body-size: "50g"
    hosts:
      - host: api.photos.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: ente-api-tls
        hosts:
          - api.photos.example.com
  # Configure app endpoints to match your ingress
  config:
    apps:
      publicAlbums: "https://albums.photos.example.com"
      accounts: "https://accounts.photos.example.com"
      cast: "https://cast.photos.example.com"

web:
  # Photos web app
  photos:
    ingress:
      enabled: true
      className: nginx
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
      hosts:
        - host: photos.example.com
          paths:
            - path: /
              pathType: Prefix
      tls:
        - secretName: ente-photos-tls
          hosts:
            - photos.example.com

  # Auth web app
  auth:
    ingress:
      enabled: true
      className: nginx
      hosts:
        - host: auth.photos.example.com
          paths:
            - path: /
              pathType: Prefix
      tls:
        - secretName: ente-auth-tls
          hosts:
            - auth.photos.example.com

  # Accounts web app
  accounts:
    ingress:
      enabled: true
      className: nginx
      hosts:
        - host: accounts.photos.example.com
          paths:
            - path: /
              pathType: Prefix
      tls:
        - secretName: ente-accounts-tls
          hosts:
            - accounts.photos.example.com
```

## Step 4: Configure email (optional)
For sending verification codes via email instead of checking logs:

```yaml
credentials:
  smtp:
    enabled: true
    host: "smtp.example.com"
    port: 587
    username: "your-smtp-username"
    password: "your-smtp-password"
    from: "noreply@example.com"
```
## Step 5: Install the chart

Create a namespace and install the chart:

```sh
kubectl create namespace ente-photos

helm install ente-photos l4g/ente-photos \
  --namespace ente-photos \
  --values values.yaml
```

Monitor the deployment:

```sh
kubectl get pods -n ente-photos -w
```

Wait for all pods to be in the `Running` state.
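You can also block until the API server deployment has finished rolling out (the deployment name `ente-photos-museum` corresponds to a release named `ente-photos`, as used throughout this guide):

```sh
kubectl rollout status deploy/ente-photos-museum -n ente-photos --timeout=5m
```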
## Step 6: Verify the installation

Check that Museum (the API server) is healthy:

```sh
kubectl exec -it deploy/ente-photos-museum -n ente-photos -- wget -qO- http://localhost:8080/ping
```

If ingress is configured, verify external access:

```sh
curl https://api.photos.example.com/ping
```

## Step 7: Create your first user
Open the Photos web app in your browser (e.g., https://photos.example.com).
Select **Don't have an account?** to create a new user and follow the prompts.

**TIP**

If you haven't configured SMTP, retrieve the verification code from the Museum logs:

```sh
kubectl logs deploy/ente-photos-museum -n ente-photos | grep -i "ott"
```

## Configuration reference
For a complete list of all configuration options, see the default values on ArtifactHub.
### Encryption keys

The chart automatically generates encryption keys if not provided. For production use, you should generate and store these securely:

```sh
# Generate keys using openssl

# Encryption key (32 bytes, base64 encoded)
openssl rand 32 | base64

# Hash key (64 bytes, base64 encoded)
openssl rand 64 | base64

# JWT secret (32 bytes, base64 encoded)
openssl rand 32 | base64
```

Configure in your values file:
```yaml
credentials:
  encryption:
    key: "your-generated-encryption-key"
    hash: "your-generated-hash-key"
  jwt:
    secret: "your-generated-jwt-secret"
```

**WARNING**

If you don't provide these keys, they will be regenerated on each Helm upgrade, invalidating the previous keys and making existing data inaccessible.

Always set explicit keys for production deployments, and store them somewhere safe.
### Using existing secrets

For production deployments, store sensitive values in Kubernetes secrets:

```yaml
credentials:
  existingSecret: "my-ente-credentials"

externalDatabase:
  host: "your-postgres-host"
  existingSecret:
    enabled: true
    secretName: "my-postgres-credentials"
    passwordKey: "password"
```

The credentials secret should contain a `credentials.yaml` key with the complete credentials configuration.
### Disabling web frontends

If you only need the API server (e.g., for mobile apps only):

```yaml
web:
  photos:
    enabled: false
  auth:
    enabled: false
  accounts:
    enabled: false
  share:
    enabled: false
```
### Resource limits

Configure resource requests and limits for production:

```yaml
museum:
  resources:
    requests:
      cpu: 200m
      memory: 256Mi
    limits:
      cpu: 1000m
      memory: 1Gi

web:
  photos:
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 200m
        memory: 256Mi
```

## Upgrading
To upgrade to a new chart version:

```sh
helm repo update
helm upgrade ente-photos l4g/ente-photos \
  --namespace ente-photos \
  --values values.yaml
```
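If an upgrade misbehaves, Helm keeps a revision history you can roll back to (standard Helm commands; pick the revision number from the history output):

```sh
# List release revisions, then roll back to a known-good one
helm history ente-photos -n ente-photos
helm rollback ente-photos <REVISION> -n ente-photos
```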
## Uninstalling

To remove the deployment:

```sh
helm uninstall ente-photos --namespace ente-photos
```

**WARNING**

This does not delete persistent data in your database or S3 storage. Clean up those resources manually if needed.
## Troubleshooting

### Database connection issues

Check PostgreSQL connectivity:

```sh
kubectl exec -it deploy/ente-photos-museum -n ente-photos -- \
  sh -c 'wget -qO- "http://localhost:8080/ping"'
```

View Museum logs for database errors:
```sh
kubectl logs deploy/ente-photos-museum -n ente-photos | grep -i "database\|postgres"
```
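If the logs show connection errors, you can also test connectivity to PostgreSQL directly from inside the cluster with a throwaway `psql` pod (a sketch; substitute your actual host, user, and database, and enter the password when prompted):

```sh
kubectl run -it --rm psql-test --image=postgres:16 -n ente-photos -- \
  psql -h your-postgres-host -U ente -d ente_db
```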
### S3 connection issues

Verify S3 credentials and endpoint:

```sh
kubectl logs deploy/ente-photos-museum -n ente-photos | grep -i "s3\|bucket"
```
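To rule out credential or endpoint problems independently of Ente, you can list the bucket with the AWS CLI from a temporary pod (a sketch; the credentials, region, endpoint, and bucket name are placeholders to replace with your own):

```sh
kubectl run -it --rm s3-test --image=amazon/aws-cli -n ente-photos \
  --env=AWS_ACCESS_KEY_ID=your-s3-access-key \
  --env=AWS_SECRET_ACCESS_KEY=your-s3-secret-key \
  --env=AWS_DEFAULT_REGION=us-east-1 \
  -- s3 ls s3://your-bucket-name --endpoint-url https://s3.example.com
```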
### Pod startup failures

Check pod events and logs:

```sh
kubectl describe pod -l app.kubernetes.io/name=ente-photos -n ente-photos
kubectl logs -l app.kubernetes.io/name=ente-photos -n ente-photos --all-containers
```

## What next?
After installation, you may want to:
- Configure apps to connect mobile apps to your server
- Configure object storage for advanced S3 settings and CORS configuration
- Manage users to configure admin access and user management
