# GarageLab Quadlet PoC

## Philosophy

- No software installation on the host system except for Podman
- Applications run via Podman and systemd quadlet units; configuration via `Exec=`/`Environment=` whenever possible (see the unit sketch below)
- Caddy as reverse proxy: good enough performance, low resource usage, simple configuration, automatic TLS via Let's Encrypt
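
To illustrate the approach, a quadlet `.container` unit configured entirely through `Exec=`, `Environment=`, `Volume=` and friends might look roughly like this (a hedged sketch; image tag, volume name and file layout are assumptions, not taken from this repo):

```ini
# caddy-svc-app.container -- illustrative only; the actual unit files in this
# repo may differ (image tag, volume and network names are assumptions)
[Unit]
Description=Caddy reverse proxy

[Container]
Image=docker.io/library/caddy:latest
ContainerName=caddy-svc-app
# everything is configured declaratively in the unit, nothing installed on the host
PublishPort=80:80
PublishPort=443:443
PublishPort=443:443/udp
Environment=TZ=UTC
Volume=caddy-vol-config:/etc/caddy
Network=gateway.network
Exec=caddy run --config /etc/caddy/Caddyfile

[Service]
Restart=always

[Install]
WantedBy=default.target
```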

Since this is just a proof of concept, there are some TODOs left:

- Rootless containers
- Backup jobs for Keycloak, Nextcloud, and Discourse with encryption and offsite backups
- Public outbound SMTP with SPF + DKIM
- Basic system usage monitoring (node-exporter/podman-exporter)

## Overview

| Name | Purpose | Memory Limit |
|---|---|---|
| Caddy | Reverse proxy, SSL termination | 64M |
| Keycloak | Identity provider (SSO/OIDC) | 1,152M |
| Nextcloud | File storage & collaboration | 1,200M |
| Discourse | Community forum | 2,864M |

## Caddy

Only Caddy publishes external ports (80, 443, and 443/UDP) for HTTP/1.1, HTTP/2, and HTTP/3.
Applications are reachable via TLS only; Caddy uses port 80 solely for HTTP-to-HTTPS redirects.
The gateway network connects the Caddy container with all application containers for reverse-proxy traffic.
Each application stack additionally has its own isolated network for internal communication, as sketched below.
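
These networks can themselves be declared as quadlet `.network` units. A minimal sketch (file names and the `Internal=` flag are assumptions; the keys follow podman-systemd.unit(5)):

```ini
# gateway.network -- shared reverse-proxy network
[Network]
NetworkName=gateway-net

# keycloak.network -- one isolated network per application stack
[Network]
NetworkName=keycloak-net
# no outbound route; only reachable by containers that join this network
Internal=true
```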
For simplicity we avoid mTLS; all internal reverse-proxy traffic is plain HTTP.
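
A minimal Caddyfile sketch for this layout (host names and upstream ports are taken from the diagram below; the actual Caddyfile in this repo may differ):

```
nextcloud.garage-lab.de {
    reverse_proxy nextcloud-svc-app:80
}

forum.garage-lab.de {
    reverse_proxy discourse-svc-app:80
}

# Keycloak serves both the login and the admin host name
login.garage-lab.de, admin.garage-lab.de {
    reverse_proxy keycloak-svc-app:8080
}
```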

- Logs: `podman logs -f caddy-svc-app`
- Stats: `podman logs -f caddy-svc-app 2>&1 | goaccess --log-format=caddy`

```mermaid
graph LR
    subgraph Internet
        EXT[External Traffic<br/>80/443/443-UDP]
    end
    subgraph gateway-net
        KC_APP[keycloak-svc-app<br/>:8080]
        DC_APP[discourse-svc-app<br/>:80]
        NC_APP[nextcloud-svc-app<br/>:80]
        CADDY[caddy-svc-app<br/>:80, :443<br/>LetsEncrypt TLS]
    end
    EXT --> CADDY
    CADDY -->|nextcloud.garage-lab.de| NC_APP
    CADDY -->|forum.garage-lab.de| DC_APP
    CADDY -->|login.garage-lab.de<br/>admin.garage-lab.de| KC_APP
```

## Keycloak

For production, Keycloak requires a PostgreSQL database. We use Postgres 17, as it is the latest version supported by Keycloak.
Keycloak uses a connection pool with 8 connections by default. We tune Postgres to 20 max connections and 64 MB shared buffers, which should be more than enough for all Keycloak data to fit in memory.
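
A sketch of how that tuning could be passed to the database container, assuming the official postgres image (which accepts server settings as `-c` flags on the command line); database name, user and mount path are assumptions:

```ini
# keycloak-svc-postgres.container (excerpt, sketch)
[Container]
Image=docker.io/library/postgres:17
ContainerName=keycloak-svc-postgres
Network=keycloak.network
Volume=keycloak-vol-postgres:/var/lib/postgresql/data
Environment=POSTGRES_DB=keycloak
Environment=POSTGRES_USER=keycloak
# POSTGRES_PASSWORD should come from Secret= or EnvironmentFile=, not be inlined here
# Keycloak's ~8 pooled connections fit comfortably into 20 max_connections,
# and 64MB shared_buffers keeps the small Keycloak dataset in memory
Exec=postgres -c max_connections=20 -c shared_buffers=64MB
```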
Keycloak also requires a volume for application-internal data and config files.

- Logs: `podman logs -f keycloak-svc-app`

```mermaid
graph LR
    subgraph Gateway
        GW_NET[gateway-net]
    end
    subgraph Keycloak Stack
        KC_NET[keycloak-net]
        KC_APP[keycloak-svc-app<br/>Memory: 768M]
        KC_PG[keycloak-svc-postgres<br/>Memory: 384M]
        VOL_DATA[(keycloak-vol-data)]
        VOL_PG[(keycloak-vol-postgres)]
    end
    GW_NET -->|:8080| KC_APP
    KC_APP --> KC_NET
    KC_PG --> KC_NET
    VOL_DATA --> KC_APP
    VOL_PG --> KC_PG
```

## Nextcloud

For production, Nextcloud requires a PostgreSQL database and a Redis-compatible cache. We use the Valkey fork because of Redis licensing issues.
Nextcloud does not use a connection pool. Apache runs with 10 prefork processes, each limited to at most 2 connections and 1 persistent connection, and we tune Postgres to 20 max connections and 64 MB shared buffers. PgBouncer could be added later if necessary.
Nextcloud consists of a web server (Apache/PHP) and a cron job. Both need access to the data volume, and both connect to the database and Valkey via the internal network.
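
A sketch of the cron sidecar unit, assuming the official nextcloud image, whose bundled `/cron.sh` script runs `cron.php` every 5 minutes; image tag and mount path are assumptions:

```ini
# nextcloud-svc-cron.container (excerpt, sketch)
[Container]
Image=docker.io/library/nextcloud:latest
ContainerName=nextcloud-svc-cron
Network=nextcloud.network
# shares the application data volume with nextcloud-svc-app
Volume=nextcloud-vol-data:/var/www/html
# run the image's cron entrypoint instead of the Apache web server
Exec=/cron.sh
```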

- Logs: `podman logs -f nextcloud-svc-app`
- Nextcloud CLI: `podman exec -it nextcloud-svc-app php occ <args>`
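
For example, checking the instance status or toggling maintenance mode (standard `occ` subcommands):

```sh
podman exec -it nextcloud-svc-app php occ status
podman exec -it nextcloud-svc-app php occ maintenance:mode --on
```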

```mermaid
graph LR
    subgraph Gateway
        GW_NET[gateway-net]
    end
    subgraph Nextcloud Stack
        NC_NET[nextcloud-net]
        NC_APP[nextcloud-svc-app<br/>Memory: 512M]
        NC_CRON[nextcloud-svc-cron<br/>Memory: 256M]
        NC_PG[nextcloud-svc-postgres<br/>Memory: 384M]
        NC_VK[nextcloud-svc-valkey<br/>Memory: 48M]
        VOL_VK[(nextcloud-vol-valkey)]
        VOL_PG[(nextcloud-vol-postgres)]
        VOL_DATA[(nextcloud-vol-data)]
    end
    GW_NET -->|:80| NC_APP
    NC_APP --> NC_NET
    NC_PG --> NC_NET
    NC_VK --> NC_NET
    NC_CRON --> NC_NET
    VOL_DATA --> NC_APP
    VOL_DATA --> NC_CRON
    VOL_PG --> NC_PG
    VOL_VK --> NC_VK
```

## Discourse

For production, Discourse requires a PostgreSQL database with pgvector and a Redis-compatible cache. We use the Valkey fork because of Redis licensing issues.
The latest Discourse image currently ships only PG 15 client libraries, and Discourse backups run via Sidekiq inside the container. For backups, the Postgres client library version must match the Postgres server version, so we use the Discourse-provided PostgreSQL image with pgvector, pinned to PG 15 until Discourse upgrades its client libraries to PG 17.
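
A quick way to verify that the client and server versions line up (assuming `pg_dump` is on the PATH inside the app container):

```sh
# client tooling used for backups (inside the app container)
podman exec discourse-svc-app pg_dump --version
# database server version
podman exec discourse-svc-postgres postgres --version
```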
The Discourse container needs quite a few volumes for Discourse data, and it runs both the Unicorn web server (Ruby) and the Sidekiq background queue. Both connect to Postgres and Valkey via the internal network.

- Logs: `podman exec -it discourse-svc-app tail -f /log/rails/production.log` (also production_errors.log, sidekiq.log, unicorn.stderr.log, unicorn.stdout.log)
- Discourse CLI: `podman exec -it discourse-svc-app discourse <command>`
- Rails console: `podman exec -it discourse-svc-app rails c`

```mermaid
graph LR
    subgraph Gateway
        GW_NET[gateway-net]
    end
    subgraph Discourse Stack
        DC_NET[discourse-net]
        DC_APP[discourse-svc-app<br/>Memory: 2048M]
        DC_PG[discourse-svc-postgres<br/>Memory: 768M]
        DC_VK[discourse-svc-valkey<br/>Memory: 48M]
        VOL_DATA[(discourse-vol-app-data)]
        VOL_UP[(discourse-vol-app-uploads)]
        VOL_BK[(discourse-vol-app-backups)]
        VOL_PG[(discourse-vol-postgres)]
        VOL_PGD[(discourse-vol-postgres-data)]
        VOL_VK[(discourse-vol-valkey)]
    end
    GW_NET --> DC_APP
    DC_APP --> DC_NET
    DC_PG --> DC_NET
    DC_VK --> DC_NET
    VOL_DATA --> DC_APP
    VOL_UP --> DC_APP
    VOL_BK --> DC_APP
    VOL_PG --> DC_PG
    VOL_PGD --> DC_PG
    VOL_VK --> DC_VK
```