# Quadlet Templates (AlmaLinux + Podman)

Files:

- `kubeviz.container`: system-level Quadlet unit template
- `kubeviz-traefik.container`: direct Traefik-label variant (shared Podman network)
- `traefik.network`: optional shared network Quadlet
- `kubeviz.env.example`: optional external environment file

## 1. Install template

```bash
sudo mkdir -p /etc/containers/systemd
sudo cp deploy/quadlet/kubeviz.container /etc/containers/systemd/kubeviz.container
```

Alternative (Traefik-label mode):

```bash
sudo cp deploy/quadlet/traefik.network /etc/containers/systemd/traefik.network
sudo cp deploy/quadlet/kubeviz-traefik.container /etc/containers/systemd/kubeviz.container
```

Optional env file:

```bash
sudo mkdir -p /etc/kubeviz
sudo cp deploy/quadlet/kubeviz.env.example /etc/kubeviz/kubeviz.env
# then uncomment EnvironmentFile in kubeviz.container
```

## 2. Set the real image

Edit `/etc/containers/systemd/kubeviz.container` and replace:

- `ghcr.io/REPLACE_ME/kubeviz:v0.1.0`

For Gitea CI/CD without an external registry, use a stable local tag:

```ini
Image=localhost/kubeviz:prod
Pull=never
```

## 3. Start service

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now kubeviz.service
sudo systemctl status kubeviz.service
sudo journalctl -u kubeviz.service -f
```

## 4. Update rollout

```bash
sudo systemctl restart kubeviz.service
```

With `Pull=always` set, Podman pulls the latest image for the configured tag on each restart. (With the `Pull=never` local-tag setup from section 2, a restart simply reuses the locally built image.)

## 5. Traefik integration

Route `kubeviz.valtrix.systems` to `http://127.0.0.1:18080`. Keep `COOKIE_SECURE=true` in production.

If you use `kubeviz-traefik.container`, Traefik discovers KubeViz via labels and the shared `traefik` network instead of a localhost port mapping.

## 6. Gitea pipeline (direct deploy on server)

A workflow template is included at:

- `.gitea/workflows/deploy-kubeviz.yml`
- `scripts/deploy-with-podman.sh`

The deploy script builds with Podman, tags `localhost/kubeviz:prod`, and restarts `kubeviz.service`.
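For orientation, a system-level Quadlet unit for this setup might look roughly like the sketch below. The container port, description, and environment file path are illustrative assumptions, not the contents of the shipped template; only the host port `127.0.0.1:18080` and the placeholder image tag come from this document:

```ini
# Hypothetical sketch of a Quadlet .container unit; values marked as
# assumptions are not taken from the actual template file.
[Unit]
Description=KubeViz

[Container]
Image=ghcr.io/REPLACE_ME/kubeviz:v0.1.0
Pull=always
# Host port from section 5; the container-side port 8080 is an assumption.
PublishPort=127.0.0.1:18080:8080
# Uncomment after installing the optional env file:
# EnvironmentFile=/etc/kubeviz/kubeviz.env

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

Quadlet generates a `kubeviz.service` unit from this file on `systemctl daemon-reload`, which is why the service commands in section 3 work without a hand-written unit.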
The workflow uses `git` checkout (no Node runtime dependency). For private repos, set the Gitea secret `CI_REPO_TOKEN`.

For private base images (for example `dhi.io/golang:*`), ensure the runner user is logged in with Podman and has an auth file at either:

- `$XDG_RUNTIME_DIR/containers/auth.json` or
- `$HOME/.config/containers/auth.json`

The deploy script forwards `REGISTRY_AUTH_FILE` to `sudo podman` automatically.

The default workflow mode uses user services (`systemctl --user`) and rootless Podman:

- `SYSTEMD_SCOPE=user`
- `PODMAN_USE_SUDO=false`
- quadlet target: `~/.config/containers/systemd/kubeviz.container`

So no root sudo is required for normal deploy runs.

Required sudo permissions for the Gitea runner user (example):

```text
gitea-runner ALL=(root) NOPASSWD:/usr/bin/podman build *,/usr/bin/podman tag *,/usr/bin/systemctl restart kubeviz.service,/usr/bin/systemctl is-active kubeviz.service
```

These permissions are only needed when you switch to `SYSTEMD_SCOPE=system` or `PODMAN_USE_SUDO=true`. The user must be the one that executes the Gitea Actions runner service (often `gitea-runner`). Check it with:

```bash
systemctl cat gitea-runner | grep -E '^User='
```

For BasicAuth labels, use htpasswd hashes (not plain passwords), for example:

```bash
htpasswd -nB smb
```

Then set the generated value in:

- `traefik.http.middlewares.kubeviz-auth.basicauth.users=smb:`

After updating sudoers:

```bash
sudo systemctl daemon-reload
sudo systemctl restart gitea-runner
```
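To illustrate the user/system scope switch described above, here is a minimal sketch of how a deploy script might pick its `systemctl` invocation and quadlet directory from `SYSTEMD_SCOPE`. The variable names follow the workflow settings listed in this section, but the logic itself is an assumption, not the actual contents of `scripts/deploy-with-podman.sh`:

```shell
#!/bin/sh
# Hypothetical sketch: choose systemctl form and quadlet directory
# from SYSTEMD_SCOPE (defaults to "user", matching the workflow default).
SYSTEMD_SCOPE="${SYSTEMD_SCOPE:-user}"

if [ "$SYSTEMD_SCOPE" = "user" ]; then
  # Rootless mode: user service, per-user quadlet directory, no sudo.
  SYSTEMCTL="systemctl --user"
  QUADLET_DIR="$HOME/.config/containers/systemd"
else
  # System scope: requires the sudoers entries shown above.
  SYSTEMCTL="sudo systemctl"
  QUADLET_DIR="/etc/containers/systemd"
fi

echo "scope=$SYSTEMD_SCOPE systemctl='$SYSTEMCTL' quadlet_dir=$QUADLET_DIR"
```

With `SYSTEMD_SCOPE` unset this resolves to `systemctl --user` and `~/.config/containers/systemd`, which is why normal deploy runs need no root sudo.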