Clemens Hering 61e0819853
Sorted headers
2026-03-01 11:56:26 +01:00

Quadlet Templates (AlmaLinux + Podman)

Files:

  • kubeviz.container: system-level Quadlet unit template
  • kubeviz-traefik.container: direct Traefik-label variant (shared Podman network)
  • traefik.network: optional shared network Quadlet
  • kubeviz.env.example: optional external environment file
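For orientation, a system-level Quadlet unit for this setup roughly follows the shape below. This is an illustrative sketch, not the shipped template: the container port (8080) and the Description are assumptions, while the image placeholder, host port 18080, the env-file path, and Pull=always are taken from the steps in this document.

```ini
# Illustrative sketch of a Quadlet .container unit (not the shipped kubeviz.container).
[Unit]
Description=KubeViz web service

[Container]
Image=ghcr.io/REPLACE_ME/kubeviz:v0.1.0
# Container port 8080 is an assumption; adjust to the app's listen port.
PublishPort=127.0.0.1:18080:8080
# EnvironmentFile=/etc/kubeviz/kubeviz.env
Pull=always

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```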

1. Install template

sudo mkdir -p /etc/containers/systemd
sudo cp deploy/quadlet/kubeviz.container /etc/containers/systemd/kubeviz.container

Alternative (Traefik-label mode):

sudo cp deploy/quadlet/traefik.network /etc/containers/systemd/traefik.network
sudo cp deploy/quadlet/kubeviz-traefik.container /etc/containers/systemd/kubeviz.container

Optional env file:

sudo mkdir -p /etc/kubeviz
sudo cp deploy/quadlet/kubeviz.env.example /etc/kubeviz/kubeviz.env
# then uncomment EnvironmentFile in kubeviz.container
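For reference, a minimal env file could look like this. The keys shown are assumptions drawn from settings mentioned elsewhere in this document; the authoritative list is in kubeviz.env.example.

```ini
# Hypothetical contents; see kubeviz.env.example for the real keys.
COOKIE_SECURE=true
APP_CSP_ENABLED=false
```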

2. Set real image

Edit /etc/containers/systemd/kubeviz.container and replace:

  • ghcr.io/REPLACE_ME/kubeviz:v0.1.0

For Gitea CI/CD without external registry, use a stable local tag:

Image=localhost/kubeviz:prod
Pull=never
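The edit can also be scripted. The sketch below demonstrates the substitution on a scratch copy so it runs without privileges; on a real host, point sed at /etc/containers/systemd/kubeviz.container instead.

```shell
# Demonstrate the Image/Pull rewrite on a scratch copy of the unit file.
unit=$(mktemp)
printf 'Image=ghcr.io/REPLACE_ME/kubeviz:v0.1.0\nPull=always\n' > "$unit"
sed -i 's|^Image=.*|Image=localhost/kubeviz:prod|; s|^Pull=.*|Pull=never|' "$unit"
cat "$unit"
rm -f "$unit"
```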

3. Start service

sudo systemctl daemon-reload
sudo systemctl enable --now kubeviz.service
sudo systemctl status kubeviz.service
sudo journalctl -u kubeviz.service -f

4. Update rollout

sudo systemctl restart kubeviz.service

If the unit keeps Pull=always (the registry-image variant), Podman pulls the latest image for the configured tag on each restart. With Pull=never (local-tag mode), rebuild and retag localhost/kubeviz:prod before restarting.

5. Traefik integration

Route kubeviz.valtrix.systems to http://127.0.0.1:18080. Keep COOKIE_SECURE=true in production.
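If Traefik runs with the file provider, that route might be declared roughly as below. This is a hedged sketch: the router/service names, entrypoint, and certificate resolver are assumptions; only the hostname and upstream URL come from this document.

```yaml
# Hypothetical Traefik dynamic config; adjust names and certResolver to your setup.
http:
  routers:
    kubeviz:
      rule: "Host(`kubeviz.valtrix.systems`)"
      entryPoints: [websecure]
      service: kubeviz
      tls:
        certResolver: letsencrypt
  services:
    kubeviz:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:18080"
```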

If you use kubeviz-traefik.container, Traefik discovers KubeViz via labels and the shared traefik network instead of localhost port mapping.
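In label mode, the container unit carries routing hints along the lines of the fragment below. This is illustrative, not the shipped kubeviz-traefik.container: the router name and container port (8080) are assumptions.

```ini
# Illustrative Label= lines for the Traefik-label variant (not the shipped file).
[Container]
Network=traefik.network
Label=traefik.enable=true
Label=traefik.http.routers.kubeviz.rule=Host(`kubeviz.valtrix.systems`)
Label=traefik.http.services.kubeviz.loadbalancer.server.port=8080
```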

6. Gitea pipeline (direct deploy on server)

Workflow template is included at:

  • .gitea/workflows/deploy-kubeviz.yml
  • scripts/deploy-with-podman.sh

The deploy script builds with Podman, tags localhost/kubeviz:prod, and restarts kubeviz.service. The workflow checks the repository out with plain git (no Node runtime dependency); for private repos, set the Gitea secret CI_REPO_TOKEN.

For private base images (for example dhi.io/golang:*), ensure the runner user is logged in with Podman and has an auth file at either:

  • $XDG_RUNTIME_DIR/containers/auth.json or
  • $HOME/.config/containers/auth.json

The deploy script forwards REGISTRY_AUTH_FILE to sudo podman automatically.
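The lookup order can be sketched as below. This is assumed logic for illustration, not the actual deploy script.

```shell
# Resolve a Podman auth file using the precedence described above:
# explicit REGISTRY_AUTH_FILE, then the runtime dir, then the home config dir.
auth="${REGISTRY_AUTH_FILE:-}"
if [ -z "$auth" ] && [ -f "${XDG_RUNTIME_DIR:-/nonexistent}/containers/auth.json" ]; then
  auth="$XDG_RUNTIME_DIR/containers/auth.json"
fi
if [ -z "$auth" ] && [ -f "$HOME/.config/containers/auth.json" ]; then
  auth="$HOME/.config/containers/auth.json"
fi
echo "using auth file: ${auth:-<none>}"
```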

Default workflow mode uses user services (systemctl --user) and rootless Podman:

  • SYSTEMD_SCOPE=user
  • PODMAN_USE_SUDO=false
  • quadlet target: ~/.config/containers/systemd/kubeviz.container
  • in the user unit, the Quadlet [Install] section should use WantedBy=default.target

With this mode, no sudo or root access is required for normal deploy runs.
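For the rootless mode, the user-level unit differs from the system one mainly in its install target (sketch below). Note that a user service typically needs lingering enabled (`loginctl enable-linger <user>`) so it keeps running without an active login session.

```ini
# User-level Quadlet fragment (~/.config/containers/systemd/kubeviz.container).
[Install]
WantedBy=default.target
```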

CSP hardening recommendation:

  • Keep a single CSP source to avoid policy conflicts.
  • In these templates, Traefik sets CSP and app-level CSP is disabled via APP_CSP_ENABLED=false.

Required sudo permissions for the Gitea runner user (example):

gitea-runner ALL=(root) NOPASSWD:/usr/bin/podman build *,/usr/bin/podman tag *,/usr/bin/systemctl restart kubeviz.service,/usr/bin/systemctl is-active kubeviz.service

These sudo rules are only needed if you switch to SYSTEMD_SCOPE=system or PODMAN_USE_SUDO=true.

The user must be the one that executes the Gitea Actions runner service (often gitea-runner). Check it with:

systemctl cat gitea-runner | grep -E '^User='

For BasicAuth labels, use htpasswd hashes (not plain passwords), for example:

htpasswd -nB smb

Then set the generated value in:

  • traefik.http.middlewares.kubeviz-auth.basicauth.users=smb:<hash>

After updating sudoers:

sudo systemctl daemon-reload
sudo systemctl restart gitea-runner