Kubernetes

This guide covers deploying Goiabada to a Kubernetes cluster.

Regardless of your Kubernetes setup, Goiabada requires:

| Requirement | Details |
| --- | --- |
| HTTPS access | Both auth server and admin console must be accessible via HTTPS |
| Two hostnames | Separate domains/subdomains for auth server and admin console |
| Shared database | MySQL, PostgreSQL, or SQL Server (SQLite not supported; pods need a shared database) |
| Empty database | For fresh deployments, the database must be empty. Goiabada seeds it with OAuth clients configured for your specific URLs |
| Proxy headers | The generated manifests set TRUST_PROXY_HEADERS=true for proper client IP detection behind an ingress |
| Large proxy buffers | OAuth responses have large headers; nginx needs proxy-buffer-size: 128k (the setup wizard configures this automatically) |
The overall traffic flow:

```
Internet → Gateway/Ingress (HTTPS/TLS) → {
  auth.example.com → goiabada-authserver Service (9090) → Pods
  admin.example.com → goiabada-adminconsole Service (9091) → Pods
}
```
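As a sketch of the routing layer above, the ClusterIP Service in front of the auth server pods might look like the following (the `app` selector label is an assumption for illustration; the wizard-generated manifests are authoritative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: goiabada-authserver
  namespace: goiabada
spec:
  type: ClusterIP            # internal-only; the Ingress routes external traffic here
  selector:
    app: goiabada-authserver # illustrative label; match your Deployment's pod labels
  ports:
    - port: 9090
      targetPort: 9090
```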

The setup wizard generates:

  • Namespace: Isolated environment for Goiabada resources
  • Secret: Database password, session keys, OAuth client secret (base64 encoded)
  • ConfigMap: URLs, database connection details, app configuration
  • Deployments: Auth server and admin console with health checks and resource limits
  • Services: ClusterIP services for internal routing
  • Ingress: External access with TLS termination
You will also need:

  • A Kubernetes cluster
  • kubectl configured to access your cluster
  • A database server (MySQL, PostgreSQL, or SQL Server), either in-cluster or external
  • Two domain names, one for the auth server and one for the admin console
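A note on the generated Secret: Kubernetes Secrets are base64 encoded rather than encrypted, so you can inspect a value or hand-craft one with the `base64` tool (the password below is a placeholder):

```shell
# Encode a placeholder password for a Secret manifest's data field
echo -n 's3cr3t' | base64
# Decode a value pulled from an existing Secret
echo -n 'czNjcjN0' | base64 --decode
```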

Example setup: ingress-nginx + cert-manager


This is the recipe we tested. Adapt as needed for your environment.

Step 1: Install ingress-nginx

The install command below modifies externalTrafficPolicy from Local to Cluster for better compatibility with cloud LoadBalancers.

```sh
curl -s https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.14.0/deploy/static/provider/cloud/deploy.yaml \
  | sed 's/externalTrafficPolicy: Local/externalTrafficPolicy: Cluster/' \
  | kubectl apply -f -
```

Wait for it to be ready:

```sh
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```
Step 2: Install cert-manager

```sh
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.1/cert-manager.yaml
```

Wait for it to be ready:

```sh
kubectl wait --namespace cert-manager \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/instance=cert-manager \
  --timeout=120s
```

Step 3: Create ClusterIssuer for Let’s Encrypt


Save as letsencrypt-issuer.yaml:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected] # Replace with your email
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
```

Apply it:

```sh
kubectl apply -f letsencrypt-issuer.yaml
```

Goiabada needs a shared database (SQLite is not supported for Kubernetes). Options include:

  • Managed services: AWS RDS, Google Cloud SQL, Azure Database, Supabase, PlanetScale, Neon
  • Self-hosted: Deploy MySQL/PostgreSQL in your cluster or on a VM

Example (Supabase): Use the connection pooler (not direct connection):

```
Host: aws-0-us-east-1.pooler.supabase.com
Port: 5432
User: postgres.your-project-ref
```
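If you want to sanity-check those credentials before running the wizard, the same details expressed as a single PostgreSQL connection URI would look roughly like this (the password is a placeholder, and the `postgres` database name is an assumption based on Supabase defaults):

```
postgresql://postgres.your-project-ref:<PASSWORD>@aws-0-us-east-1.pooler.supabase.com:5432/postgres
```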

Use the setup wizard:

```sh
./goiabada-setup
```
  1. Select “3. Kubernetes cluster”
  2. Choose your database type
  3. Enter your domain names (e.g., https://auth.example.com)
  4. Enter your Kubernetes namespace (default: goiabada)
  5. Configure admin credentials
  6. Enter your database connection details

The wizard will:

  • Test database connectivity
  • Check if the database is empty (warns you if it contains existing Goiabada data)
  • Generate all Kubernetes manifests in a single file

Deploy:

```sh
kubectl apply -f goiabada-k8s.yaml
```

After deploying, get the LoadBalancer IP:

```sh
kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

Create DNS A records pointing to this IP:

| Record | Value |
| --- | --- |
| auth.example.com | <ADDRESS> |
| admin.example.com | <ADDRESS> |
Verify the deployment:

```sh
# Check pods
kubectl get pods -n goiabada

# Check Ingress status (should show ADDRESS)
kubectl get ingress -n goiabada

# Check certificates (should show READY=True after DNS propagates)
kubectl get certificates -n goiabada
```

Expected output:

```
NAME                               READY   STATUS    RESTARTS   AGE
goiabada-authserver-xxxxx-xxxxx    1/1     Running   0          1m
goiabada-adminconsole-xxxxx-xxxxx  1/1     Running   0          1m
```

If certificates are not issued, dig deeper:

```sh
# Check certificate status
kubectl describe certificate -n goiabada

# Check challenges
kubectl get challenges -n goiabada
kubectl describe challenges -n goiabada
```
| Problem | Solution |
| --- | --- |
| DNS not resolving | Verify DNS A records point to the ingress-nginx LoadBalancer IP |
| Port 80 not accessible | Check firewall rules; ACME HTTP-01 needs port 80 |
| Challenge timeout | See “Connectivity issues” below |

Connectivity issues (port 80 not responding)


Some cloud providers have issues with externalTrafficPolicy: Local on LoadBalancer services. The install command in Step 1 changes this to Cluster to avoid the issue.

If you installed ingress-nginx without the sed modification, patch it:

```sh
kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'
```

Then delete any failed certificate challenges to trigger a retry:

```sh
kubectl delete challenges -n goiabada --all
```

If you see a 502 error after entering credentials (typically at /auth/completed), this is caused by nginx’s proxy buffer being too small for OAuth’s large response headers.

Solution: Ensure your Ingress has the proxy buffer annotation:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
```

The setup wizard adds this automatically. If you created Ingress resources manually, add this annotation and reapply.
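For reference, a minimal Ingress carrying that annotation might look like the following. The resource name and TLS secret name are illustrative; the service name, port, and cluster issuer come from the setup described above, and the wizard-generated manifest remains authoritative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: goiabada-authserver           # illustrative name
  namespace: goiabada
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - auth.example.com
      secretName: goiabada-authserver-tls   # illustrative secret name
  rules:
    - host: auth.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: goiabada-authserver
                port:
                  number: 9090
```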

If you see a redirect URI mismatch error, the database contains OAuth clients configured for different URLs than the ones you are using.

Solutions:

  1. Use a fresh/empty database
  2. Use the same URLs as the previous deployment
  3. Manually update redirect_uris in the clients table

To debug database connectivity:

```sh
# Check pod logs
kubectl logs -n goiabada deployment/goiabada-authserver

# Test connectivity from within the cluster
kubectl run -it --rm debug --image=busybox --restart=Never -- \
  nc -zv your-db-host 5432
```
If pods crash on startup, check the previous container's logs:

```sh
kubectl logs -n goiabada deployment/goiabada-authserver --previous
```

Common causes:

  • Database connection failed
  • Invalid configuration values
  • Database not empty with mismatched URLs

For production deployments:

```sh
kubectl scale deployment goiabada-authserver -n goiabada --replicas=3
kubectl scale deployment goiabada-adminconsole -n goiabada --replicas=2
```
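If you prefer scaling to happen automatically, a HorizontalPodAutoscaler can manage the replica count instead of manual `kubectl scale`. A sketch, with an illustrative CPU target (this resource is not part of the wizard-generated manifests):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: goiabada-authserver
  namespace: goiabada
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: goiabada-authserver
  minReplicas: 3
  maxReplicas: 6          # illustrative ceiling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # illustrative CPU target
```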

The generated manifests include default limits. Adjust based on your load:

```yaml
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
    cpu: "500m"
```

Both services expose health checks used by Kubernetes probes:

  • Auth server: http://goiabada-authserver:9090/health
  • Admin console: http://goiabada-adminconsole:9091/health
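
Inside a Deployment's container spec, probes against that endpoint might look like this (the timing values are illustrative; the wizard-generated manifests are authoritative):

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 9090
  initialDelaySeconds: 5   # illustrative timing
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health
    port: 9090
  periodSeconds: 5         # illustrative timing
```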
To upgrade Goiabada:

  1. Update image tags in your manifest, or regenerate it with the setup wizard
  2. Apply the changes:

     ```sh
     kubectl apply -f goiabada-k8s.yaml
     ```

  3. Monitor the rollout:

     ```sh
     kubectl rollout status deployment/goiabada-authserver -n goiabada
     ```