Compare commits

...

10 Commits

Author SHA1 Message Date
e624885bb9 chore: add repo variables configuration 2026-03-03 14:18:03 -06:00
b799cb7970 feat: add phases 10-11, enhance phase 8 direct-check mode, and update Caddy migration
- Phase 10: local repo cutover (rename origin→github, add Gitea remote, push branches/tags)
- Phase 11: custom runner infrastructure with toolchain-based naming
  (go-node-runner, jvm-android-runner) and repo variables via Gitea API
- Add container_options support to manage_runner.sh for KVM passthrough
- Phase 8: add --allow-direct-checks flag for LAN/split-DNS staging
- Phase 7.5: add Cloudflare TLS block, retry logic for probes, multi-upstream support
- Add toggle_dns.sh helper and update orchestration scripts for phases 10-11

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-03 14:14:11 -06:00
63f5bf6ea7 feat: add support for public DNS target IP and private DNS allowance in Cloudflare setup 2026-03-02 23:27:04 -06:00
14a5773a2d feat: add phase 8.5 Nginx to Caddy migration wrapper and enhance post-check script for direct access handling 2026-03-02 22:45:49 -06:00
9224b91374 feat: enhance canary mode to support domain-aware upsert behavior for site blocks 2026-03-02 22:23:52 -06:00
b52d3187d9 feat: enhance canary mode in Nginx to Caddy migration script to preserve existing routes 2026-03-02 22:22:07 -06:00
78376f0137 feat: add phase 7.5 Nginx to Caddy migration script and update usage guide 2026-03-02 22:20:36 -06:00
96214654d0 feat: add recommended Caddyfile and update usage guide for production configuration 2026-03-02 22:06:27 -06:00
3c86890983 feat: update Caddy image to slothcroissant/caddy-cloudflaredns:latest and enhance Docker support in validation script 2026-03-02 20:25:38 -06:00
d9fb5254cd feat: add post-migration check section to usage guide for infrastructure validation 2026-03-02 20:25:29 -06:00
30 changed files with 3248 additions and 81 deletions


@@ -96,6 +96,11 @@ LOCAL_REGISTRY= # Local registry prefix (e.g. registry.local:5
# AUTO-POPULATED by phase3 scripts — do not fill manually:
GITEA_RUNNER_REGISTRATION_TOKEN= # Retrieved from Gitea admin panel via API
# Custom runner image build contexts (phase 11)
# Absolute paths to directories containing Dockerfiles for custom runner images.
GO_NODE_RUNNER_CONTEXT= # Path to Go + Node toolchain Dockerfile (e.g. /path/to/augur/infra/runners)
JVM_ANDROID_RUNNER_CONTEXT= # Path to JDK + Android SDK toolchain Dockerfile (e.g. /path/to/periodvault/infra/runners)
# -----------------------------------------------------------------------------
# REPOSITORIES
@@ -124,6 +129,8 @@ TLS_MODE=cloudflare # TLS mode: "cloudflare" (DNS-01 via CF API) o
CADDY_DOMAIN= # Wildcard cert base domain (e.g. privacyindesign.com → cert for *.privacyindesign.com)
CADDY_DATA_PATH= # Absolute path on host for Caddy data (e.g. /mnt/nvme/caddy)
CLOUDFLARE_API_TOKEN= # Cloudflare API token with Zone:DNS:Edit (only if TLS_MODE=cloudflare)
PUBLIC_DNS_TARGET_IP= # Phase 8 Cloudflare A-record target for GITEA_DOMAIN (public ingress IP recommended)
PHASE8_ALLOW_PRIVATE_DNS_TARGET=false # true only for LAN-only/split-DNS setups using private RFC1918 target IPs
SSL_CERT_PATH= # Absolute path to SSL cert (only if TLS_MODE=existing)
SSL_KEY_PATH= # Absolute path to SSL key (only if TLS_MODE=existing)


@@ -55,6 +55,7 @@ The entire process is driven from a MacBook over SSH. Nothing is installed on th
| 6 | `phase6_github_mirrors.sh` | Configure push mirrors from Gitea to GitHub, disable GitHub Actions |
| 7 | `phase7_branch_protection.sh` | Apply branch protection rules to all repos |
| 8 | `phase8_cutover.sh` | Deploy Caddy HTTPS reverse proxy (Cloudflare DNS-01 or existing certs), mark GitHub repos as mirrors |
| 7.5 (optional) | `phase7_5_nginx_to_caddy.sh` | One-time multi-domain Nginx -> Caddy migration helper (canary/full), supports `sintheus.com` + `privacyindesign.com` in one Caddy |
| 9 | `phase9_security.sh` | Deploy Semgrep + Trivy + Gitleaks security scanning workflows | | 9 | `phase9_security.sh` | Deploy Semgrep + Trivy + Gitleaks security scanning workflows |
Each phase has three scripts: the main script, a `_post_check.sh` that independently verifies success, and a `_teardown.sh` that cleanly reverses the phase.
@@ -96,6 +97,8 @@ gitea-migration/
├── run_all.sh # Full pipeline orchestration
├── post-migration-check.sh # Read-only infrastructure state check
├── teardown_all.sh # Reverse teardown (9 to 1)
├── phase7_5_nginx_to_caddy.sh # Optional one-time Nginx -> Caddy consolidation step
├── TODO.md # Phase 7.5 migration context, backlog, and DoD
├── manage_runner.sh # Dynamic runner add/remove/list
├── phase{1-9}_*.sh # Main phase scripts
├── phase{1-9}_post_check.sh # Verification scripts
@@ -228,7 +231,7 @@ When `TLS_MODE=cloudflare`, Caddy handles certificate renewal automatically via
| MacBook | macOS, Homebrew, jq >= 1.6, curl >= 7.70, git >= 2.30, shellcheck >= 0.8, gh >= 2.0, bw >= 2.0 |
| Unraid | Linux, Docker >= 20.0, docker-compose >= 2.0, jq >= 1.6, passwordless sudo for SSH user |
| Fedora | Linux with dnf, Docker CE >= 20.0, docker-compose >= 2.0, jq >= 1.6, passwordless sudo for SSH user |
| Network | MacBook can SSH to both servers; for `TLS_MODE=cloudflare`, provide `CLOUDFLARE_API_TOKEN` plus `PUBLIC_DNS_TARGET_IP` (public ingress IP recommended; private IP requires `PHASE8_ALLOW_PRIVATE_DNS_TARGET=true`) |
## Quick Start

TODO.md Normal file

@@ -0,0 +1,118 @@
# TODO — Phase 7.5 Nginx -> Caddy Consolidation
## Why this exists
This file captures the decisions and migration context for the one-time "phase 7.5"
work so we do not lose reasoning between sessions.
## What happened so far
1. The original `phase8_cutover.sh` was designed for one wildcard zone
(`*.${CADDY_DOMAIN}`), mainly for Gitea cutover.
2. The homelab currently has two active DNS zones in scope:
- `sintheus.com` (legacy services behind Nginx)
- `privacyindesign.com` (new Gitea public endpoint)
3. Decision made: run a one-time migration where a single Caddy instance serves
both zones, then gradually retire Nginx.
4. Implemented: `phase7_5_nginx_to_caddy.sh` to generate/deploy a multi-domain
Caddyfile and run canary/full rollout modes.
## Current design decisions
1. Public ingress should be HTTPS-only for all migrated hostnames.
2. Backend scheme is mixed for now:
- Keep `http://` upstream where service does not yet have TLS.
- Keep `https://` where already available.
3. End-to-end HTTPS is a target state, not an immediate requirement.
4. A strict toggle exists in phase 7.5:
- `--strict-backend-https` fails if any upstream is `http://`.
5. Canary-first rollout:
- first migration target is `tower.sintheus.com`.
6. Canary mode is additive:
- preserves existing Caddy routes
- updates only a managed canary block for `tower.sintheus.com`.
## Host map and backend TLS status
### Canary scope (default mode)
- `tower.sintheus.com -> https://192.168.1.82:443` (TLS backend; cert verify skipped)
- `${GITEA_DOMAIN} -> http://${UNRAID_GITEA_IP}:3000` (HTTP backend for now)
### Full migration scope
- `ai.sintheus.com -> http://192.168.1.82:8181`
- `photos.sintheus.com -> http://192.168.1.222:2283`
- `fin.sintheus.com -> http://192.168.1.233:8096`
- `disk.sintheus.com -> http://192.168.1.52:80`
- `pi.sintheus.com -> http://192.168.1.4:80`
- `plex.sintheus.com -> http://192.168.1.111:32400`
- `sync.sintheus.com -> http://192.168.1.119:8384`
- `syno.sintheus.com -> https://100.108.182.16:5001` (verify skipped)
- `tower.sintheus.com -> https://192.168.1.82:443` (verify skipped)
- `${GITEA_DOMAIN} -> http://${UNRAID_GITEA_IP}:3000`
## Definition of done (phase 7.5)
Phase 7.5 is done only when all are true:
1. Caddy is running on Unraid with generated multi-domain config.
2. Canary host `tower.sintheus.com` is reachable over HTTPS through Caddy.
3. Canary routing is proven by at least one path:
- `curl --resolve` tests, or
- split-DNS/hosts override, or
- intentional DNS cutover.
4. Legacy Nginx remains available for non-migrated hosts during canary.
5. No critical regressions observed for at least 24 hours on canary traffic.
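The `curl --resolve` path above can be scripted without touching DNS. A small helper that only prints the probe command keeps it reviewable before any traffic is sent (host and IP are the canary defaults from this file):

```shell
# Emit (not run) the zero-DNS-change canary probe: --resolve pins the
# hostname to the Caddy ingress IP for this one request only, so legacy
# Nginx traffic for every other client is unaffected.
probe_cmd() {
  local host="$1" ip="$2"
  echo "curl -sSI --resolve ${host}:443:${ip} https://${host}/"
}

# Pipe to sh when you actually want to send the probe:
#   probe_cmd tower.sintheus.com 192.168.1.82 | sh
probe_cmd tower.sintheus.com 192.168.1.82
```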
## Definition of done (final state after full migration)
1. All selected domains route to Caddy through the intended ingress path:
- LAN-only: split-DNS/private resolution to Caddy, or
- public: DNS to WAN ingress that forwards 443 to Caddy.
2. Caddy serves valid certificates for both zones.
3. Functional checks pass for each service (UI load, API, websocket/streaming where relevant).
4. Nginx is no longer on the request path for migrated domains.
5. Long-term target: all backends upgraded to `https://` and strict mode passes.
## What remains to happen
1. Run canary:
- `./phase7_5_nginx_to_caddy.sh --mode=canary`
2. Route canary traffic to Caddy using one method:
- `curl --resolve` for zero-DNS-change testing, or
- split-DNS/private DNS, or
- explicit DNS cutover if desired.
3. Observe errors/latency/app behavior for at least 24 hours.
4. If canary is clean, run full:
- `./phase7_5_nginx_to_caddy.sh --mode=full`
5. Move remaining routes in batches (DNS or split-DNS, depending on ingress model).
6. Validate each app after each batch.
7. After everything is stable, plan Nginx retirement.
8. Later hardening pass:
- enable TLS on each backend service one by one
- flip each corresponding upstream to `https://`
- finally run `--strict-backend-https` and require it to pass.
## Risks and why mixed backend HTTP is acceptable short-term
1. Risk: backend HTTP is unencrypted on LAN.
- Mitigation: traffic stays on trusted local network, temporary state only.
2. Risk: if strict mode is enabled too early, rollout blocks.
- Mitigation: keep strict mode off until backend TLS coverage improves.
3. Risk: moving all DNS at once can create broad outage.
- Mitigation: canary-first and batch DNS cutover.
## Operational notes
1. If Caddyfile already exists, phase 7.5 backs it up as:
- `${CADDY_DATA_PATH}/Caddyfile.pre_phase7_5.<timestamp>`
2. Compose stack path for Caddy:
- `${UNRAID_COMPOSE_DIR}/caddy/docker-compose.yml`
3. Script does not change Cloudflare DNS records automatically.
- DNS updates are intentional/manual to keep blast radius controlled.
4. Do not set public Cloudflare proxied records to private `192.168.x.x` addresses.
5. Canary upsert behavior is domain-aware:
- if site block for the canary domain does not exist, it is added
- if site block exists, it is replaced in-place
- previous block content is printed in logs before replacement
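A minimal sketch of that domain-aware upsert (the function name is hypothetical; the real phase 7.5 script additionally logs the old block content and is more careful about brace matching):

```shell
# Drop any existing top-level site block for the domain, then append the
# replacement. Assumes flat (non-nested) site blocks closed by "}" at column 0.
upsert_site_block() {
  local file="$1" domain="$2" body="$3"
  awk -v d="$domain" '
    $1 == d && /\{[[:space:]]*$/ { skip = 1 }   # managed block starts
    skip && /^\}/ { skip = 0; next }            # managed block ends
    !skip { print }
  ' "$file" > "${file}.tmp"
  printf '%s {\n%s\n}\n' "$domain" "$body" >> "${file}.tmp"
  mv "${file}.tmp" "$file"
}
```

All non-matching site blocks pass through untouched, which is what makes canary mode additive.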


@@ -31,8 +31,9 @@ Before running anything, confirm:
DNS and TLS are only needed for Phase 8 (Caddy reverse proxy). You can set these up later:
- A DNS A record for your Gitea domain pointing to `UNRAID_IP`
- If using `TLS_MODE=cloudflare`: a Cloudflare API token with Zone:DNS:Edit permission
- `PUBLIC_DNS_TARGET_IP` set to your ingress IP for `GITEA_DOMAIN` (public IP recommended)
- If you intentionally use LAN-only split DNS with a private IP target, set `PHASE8_ALLOW_PRIVATE_DNS_TARGET=true`
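The guard these two settings imply can be sketched as follows (function names are hypothetical; the real check lives in the phase 8 scripts):

```shell
# Refuse a private (RFC1918) DNS target unless the override flag is set.
is_rfc1918() {
  case "$1" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;
    *) return 1 ;;
  esac
}

check_dns_target() {
  local ip="$1" allow_private="$2"
  if is_rfc1918 "$ip" && [ "$allow_private" != "true" ]; then
    echo "refusing private DNS target ${ip}; set PHASE8_ALLOW_PRIVATE_DNS_TARGET=true for split-DNS" >&2
    return 1
  fi
  echo "ok: ${ip}"
}
```

This keeps the common mistake (a public Cloudflare record pointing at `192.168.x.x`) from ever reaching DNS.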
### 2. Passwordless sudo on remote hosts
@@ -154,6 +155,23 @@ If you prefer to run each phase individually and inspect results:
./phase9_security.sh && ./phase9_post_check.sh
```
### Optional Phase 7.5 (one-time Nginx -> Caddy migration)
Use this only if you want one Caddy instance to serve both legacy and new domains.
```bash
# Canary first (default): tower.sintheus.com + Gitea domain
./phase7_5_nginx_to_caddy.sh --mode=canary
# Full host map cutover
./phase7_5_nginx_to_caddy.sh --mode=full
# Enforce strict end-to-end TLS for all upstreams
./phase7_5_nginx_to_caddy.sh --mode=full --strict-backend-https
```
Detailed migration context, rationale, and next actions are tracked in `TODO.md`.
### Skip setup (already done)
```bash
@@ -162,7 +180,17 @@ If you prefer to run each phase individually and inspect results:
### What to verify when it's done
After the full migration completes, run the post-migration check:
```bash
./post-migration-check.sh
# or equivalently:
./run_all.sh --dry-run
```
This probes all live infrastructure and reports the state of every phase — what's done, what's pending, and any errors. See [Post-Migration Check](#post-migration-check) below for details.
You can also verify manually:
1. **HTTPS access**: Open `https://YOUR_DOMAIN` in a browser — you should see the Gitea login page with a valid SSL certificate.
2. **Repository content**: Log in as admin, navigate to your org, confirm all repos have commits, branches, and (if enabled) issues/labels.
@@ -215,6 +243,44 @@ When resuming from a later phase, Gitea is already running on ports 3000. Use:
---
## Post-Migration Check
A standalone read-only script that probes live infrastructure and reports the state of every migration phase. No mutations — safe to run at any time, before, during, or after migration.
```bash
./post-migration-check.sh
# or:
./run_all.sh --dry-run
```
### What it checks
- **Connectivity**: SSH to Unraid/Fedora, Docker daemons, GitHub API token validity
- **Phase 1-2**: Docker networks, compose files, app.ini, container health, admin auth, API tokens, organization
- **Phase 3**: runners.conf, registration token, per-runner online/offline status
- **Phase 4**: GitHub source repos accessible, Gitea repos migrated, Fedora mirrors active
- **Phase 5**: Workflow directories present in Gitea repos
- **Phase 6**: Push mirrors configured, GitHub Actions disabled
- **Phase 7**: Branch protection rules with approval counts
- **Phase 8**: DNS resolution, Caddy container, HTTPS end-to-end, TLS cert, GitHub `[MIRROR]` marking
- **Phase 9**: Security scan workflows deployed
### Output format
Three states:
| State | Meaning |
|-------|---------|
| `[DONE]` | Already exists/running — phase would skip this step |
| `[TODO]` | Not done yet — phase would execute this step |
| `[ERROR]` | Something is broken — needs attention |
`[TODO]` is normal for phases you haven't run yet. Only `[ERROR]` indicates a problem.
The script exits 0 if no errors, 1 if any `[ERROR]` found. A summary at the end shows per-phase counts.
---
## Edge Cases
### GitHub API rate limit hit during migration
@@ -251,7 +317,7 @@ Then re-run Phase 4. Already-migrated repos will be skipped.
**Symptom**: Preflight check 14 fails.
**Fix**: Phase 8 can auto-upsert the Cloudflare A record for `GITEA_DOMAIN` when `TLS_MODE=cloudflare`. Set `PUBLIC_DNS_TARGET_IP` first. Use a public ingress IP for public access. For LAN-only split DNS, set `PHASE8_ALLOW_PRIVATE_DNS_TARGET=true`.
### Caddy fails to start or obtain TLS certificate in Phase 8


@@ -269,17 +269,17 @@ _ENV_VAR_TYPES=(
)
# Conditional variables — validated only when TLS_MODE matches.
_ENV_CONDITIONAL_TLS_NAMES=(CLOUDFLARE_API_TOKEN PUBLIC_DNS_TARGET_IP PHASE8_ALLOW_PRIVATE_DNS_TARGET SSL_CERT_PATH SSL_KEY_PATH)
_ENV_CONDITIONAL_TLS_TYPES=(nonempty ip bool path path)
_ENV_CONDITIONAL_TLS_WHEN=( cloudflare cloudflare cloudflare existing existing)
# Conditional variables — validated only when GITEA_DB_TYPE is NOT sqlite3.
_ENV_CONDITIONAL_DB_NAMES=(GITEA_DB_PORT GITEA_DB_NAME GITEA_DB_USER GITEA_DB_PASSWD)
_ENV_CONDITIONAL_DB_TYPES=(port nonempty nonempty password)
# Optional variables — validated only when non-empty (never required).
_ENV_OPTIONAL_NAMES=(UNRAID_SSH_KEY FEDORA_SSH_KEY LOCAL_REGISTRY GO_NODE_RUNNER_CONTEXT JVM_ANDROID_RUNNER_CONTEXT)
_ENV_OPTIONAL_TYPES=(optional_path optional_path nonempty optional_path optional_path)
# Human-readable format hints for error messages.
_validator_hint() {

lib/phase10_common.sh Normal file

@@ -0,0 +1,223 @@
#!/usr/bin/env bash
# =============================================================================
# lib/phase10_common.sh — Shared helpers for phase 10 local repo cutover
# =============================================================================
# Shared discovery results (parallel arrays; bash 3.2 compatible).
PHASE10_REPO_NAMES=()
PHASE10_REPO_PATHS=()
PHASE10_GITHUB_URLS=()
PHASE10_DUPLICATES=()
# Parse common git remote URL formats into: host|owner|repo
# Supports:
# - https://host/owner/repo(.git)
# - ssh://git@host/owner/repo(.git)
# - git@host:owner/repo(.git)
phase10_parse_git_url() {
local url="$1"
local rest host path owner repo
if [[ "$url" =~ ^[a-zA-Z][a-zA-Z0-9+.-]*:// ]]; then
rest="${url#*://}"
# Drop optional userinfo component.
rest="${rest#*@}"
host="${rest%%/*}"
path="${rest#*/}"
elif [[ "$url" == *@*:* ]]; then
rest="${url#*@}"
host="${rest%%:*}"
path="${rest#*:}"
else
return 1
fi
path="${path#/}"
path="${path%.git}"
owner="${path%%/*}"
repo="${path#*/}"
repo="${repo%%/*}"
if [[ -z "$host" ]] || [[ -z "$owner" ]] || [[ -z "$repo" ]] || [[ "$owner" == "$path" ]]; then
return 1
fi
printf '%s|%s|%s\n' "$host" "$owner" "$repo"
}
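For reference, the parser's contract can be exercised standalone. The snippet below is a self-contained copy of `phase10_parse_git_url` (same logic as above, condensed) driven with illustrative URLs:

```shell
# Self-contained copy of the URL parser for demonstration purposes only;
# the canonical version lives in lib/phase10_common.sh.
phase10_parse_git_url() {
  local url="$1"
  local rest host path owner repo
  if [[ "$url" =~ ^[a-zA-Z][a-zA-Z0-9+.-]*:// ]]; then
    rest="${url#*://}"
    rest="${rest#*@}"                      # drop optional userinfo
    host="${rest%%/*}"
    path="${rest#*/}"
  elif [[ "$url" == *@*:* ]]; then
    rest="${url#*@}"
    host="${rest%%:*}"
    path="${rest#*:}"
  else
    return 1
  fi
  path="${path#/}"; path="${path%.git}"
  owner="${path%%/*}"; repo="${path#*/}"; repo="${repo%%/*}"
  if [[ -z "$host" || -z "$owner" || -z "$repo" || "$owner" == "$path" ]]; then
    return 1
  fi
  printf '%s|%s|%s\n' "$host" "$owner" "$repo"
}

# All three supported remote formats normalize to host|owner|repo:
phase10_parse_git_url "https://github.com/alice/widget.git"    # github.com|alice|widget
phase10_parse_git_url "ssh://git@github.com/alice/widget.git"  # github.com|alice|widget
phase10_parse_git_url "git@github.com:alice/widget.git"        # github.com|alice|widget
```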
phase10_host_matches() {
local host="$1" expected="$2"
[[ "$host" == "$expected" ]] || [[ "$host" == "${expected}:"* ]]
}
# Return 0 when URL matches github.com/<owner>/<repo>.
# If <repo> is omitted, only owner is checked.
phase10_url_is_github_repo() {
local url="$1" owner_expected="$2" repo_expected="${3:-}"
local parsed host owner repo
parsed=$(phase10_parse_git_url "$url" 2>/dev/null) || return 1
IFS='|' read -r host owner repo <<< "$parsed"
phase10_host_matches "$host" "github.com" || return 1
[[ "$owner" == "$owner_expected" ]] || return 1
if [[ -n "$repo_expected" ]] && [[ "$repo" != "$repo_expected" ]]; then
return 1
fi
return 0
}
phase10_url_is_gitea_repo() {
local url="$1" domain="$2" org="$3" repo_expected="$4"
local parsed host owner repo
parsed=$(phase10_parse_git_url "$url" 2>/dev/null) || return 1
IFS='|' read -r host owner repo <<< "$parsed"
phase10_host_matches "$host" "$domain" || return 1
[[ "$owner" == "$org" ]] || return 1
[[ "$repo" == "$repo_expected" ]] || return 1
}
phase10_canonical_github_url() {
local owner="$1" repo="$2"
printf 'https://github.com/%s/%s.git' "$owner" "$repo"
}
phase10_canonical_gitea_url() {
local domain="$1" org="$2" repo="$3"
printf 'https://%s/%s/%s.git' "$domain" "$org" "$repo"
}
# Stable in-place sort by repo name (keeps arrays aligned).
phase10_sort_repo_arrays() {
local i j tmp
for ((i = 0; i < ${#PHASE10_REPO_NAMES[@]}; i++)); do
for ((j = i + 1; j < ${#PHASE10_REPO_NAMES[@]}; j++)); do
if [[ "${PHASE10_REPO_NAMES[$i]}" > "${PHASE10_REPO_NAMES[$j]}" ]]; then
tmp="${PHASE10_REPO_NAMES[$i]}"
PHASE10_REPO_NAMES[i]="${PHASE10_REPO_NAMES[j]}"
PHASE10_REPO_NAMES[j]="$tmp"
tmp="${PHASE10_REPO_PATHS[i]}"
PHASE10_REPO_PATHS[i]="${PHASE10_REPO_PATHS[j]}"
PHASE10_REPO_PATHS[j]="$tmp"
tmp="${PHASE10_GITHUB_URLS[i]}"
PHASE10_GITHUB_URLS[i]="${PHASE10_GITHUB_URLS[j]}"
PHASE10_GITHUB_URLS[j]="$tmp"
fi
done
done
}
# Discover local repos under root that map to github.com/<github_owner>.
# Discovery rules:
# - Only direct children of root are considered.
# - Excludes exclude_path (typically this toolkit repo).
# - Accepts a repo if either "github" or "origin" points at GitHub owner.
# - Deduplicates by repo slug, preferring directory basename == slug.
#
# Args:
# $1 root dir (e.g., /Users/s/development)
# $2 github owner (from GITHUB_USERNAME)
# $3 exclude absolute path (optional; pass "" for none)
# $4 expected count (0 = don't enforce)
phase10_discover_local_repos() {
local root="$1"
local github_owner="$2"
local exclude_path="${3:-}"
local expected_count="${4:-0}"
PHASE10_REPO_NAMES=()
PHASE10_REPO_PATHS=()
PHASE10_GITHUB_URLS=()
PHASE10_DUPLICATES=()
if [[ ! -d "$root" ]]; then
log_error "Local repo root not found: ${root}"
return 1
fi
local dir top github_url parsed host owner repo canonical
local i idx existing existing_base new_base duplicate
for dir in "$root"/*; do
[[ -d "$dir" ]] || continue
if [[ -n "$exclude_path" ]] && [[ "$dir" == "$exclude_path" ]]; then
continue
fi
if ! git -C "$dir" rev-parse --is-inside-work-tree >/dev/null 2>&1; then
continue
fi
top=$(git -C "$dir" rev-parse --show-toplevel 2>/dev/null || true)
[[ "$top" == "$dir" ]] || continue
github_url=""
if github_url=$(git -C "$dir" remote get-url github 2>/dev/null); then
if ! phase10_url_is_github_repo "$github_url" "$github_owner"; then
github_url=""
fi
fi
if [[ -z "$github_url" ]] && github_url=$(git -C "$dir" remote get-url origin 2>/dev/null); then
if ! phase10_url_is_github_repo "$github_url" "$github_owner"; then
github_url=""
fi
fi
[[ -n "$github_url" ]] || continue
parsed=$(phase10_parse_git_url "$github_url" 2>/dev/null) || continue
IFS='|' read -r host owner repo <<< "$parsed"
canonical=$(phase10_canonical_github_url "$owner" "$repo")
idx=-1
for i in "${!PHASE10_REPO_NAMES[@]}"; do
if [[ "${PHASE10_REPO_NAMES[$i]}" == "$repo" ]]; then
idx="$i"
break
fi
done
if [[ "$idx" -ge 0 ]]; then
existing="${PHASE10_REPO_PATHS[$idx]}"
existing_base="$(basename "$existing")"
new_base="$(basename "$dir")"
if [[ "$new_base" == "$repo" ]] && [[ "$existing_base" != "$repo" ]]; then
PHASE10_REPO_PATHS[idx]="$dir"
PHASE10_GITHUB_URLS[idx]="$canonical"
PHASE10_DUPLICATES+=("${repo}: preferred ${dir} over ${existing}")
else
PHASE10_DUPLICATES+=("${repo}: ignored duplicate ${dir} (using ${existing})")
fi
continue
fi
PHASE10_REPO_NAMES+=("$repo")
PHASE10_REPO_PATHS+=("$dir")
PHASE10_GITHUB_URLS+=("$canonical")
done
phase10_sort_repo_arrays
# ${arr[@]+...} guard: expanding an empty array is an unbound-variable error
# under bash 3.2 with set -u.
for duplicate in ${PHASE10_DUPLICATES[@]+"${PHASE10_DUPLICATES[@]}"}; do
log_info "$duplicate"
done
if [[ "${#PHASE10_REPO_NAMES[@]}" -eq 0 ]]; then
log_error "No local GitHub repos found under ${root} for owner '${github_owner}'"
return 1
fi
if [[ "$expected_count" -gt 0 ]] && [[ "${#PHASE10_REPO_NAMES[@]}" -ne "$expected_count" ]]; then
log_error "Expected ${expected_count} local repos under ${root}; found ${#PHASE10_REPO_NAMES[@]}"
for i in "${!PHASE10_REPO_NAMES[@]}"; do
log_error " - ${PHASE10_REPO_NAMES[$i]} -> ${PHASE10_REPO_PATHS[$i]}"
done
return 1
fi
return 0
}


@@ -73,6 +73,9 @@ parse_runner_entry() {
# "true" → /Library/LaunchDaemons/ (starts at boot, requires sudo)
# "false" (default) → ~/Library/LaunchAgents/ (starts at login)
RUNNER_BOOT=$(ini_get "$RUNNERS_CONF" "$target_name" "boot" "false")
# container_options: extra Docker flags for act_runner job containers.
# e.g. "--device=/dev/kvm" for KVM passthrough. Ignored for native runners.
RUNNER_CONTAINER_OPTIONS=$(ini_get "$RUNNERS_CONF" "$target_name" "container_options" "")
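With that key in place, a runners.conf entry enabling KVM passthrough might look like this (the section/runner name is illustrative; `container_options` is the only key this diff documents, so treat the rest of any real entry as per your existing config):

```ini
# Hypothetical runners.conf fragment — section name is illustrative.
# container_options is passed through to act_runner job containers
# as extra Docker flags.
[android-emulator-runner]
container_options = --device=/dev/kvm
```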
# --- Host resolution ---
# Also resolves RUNNER_COMPOSE_DIR: centralized compose dir on unraid/fedora,
@@ -354,8 +357,9 @@ add_docker_runner() {
# shellcheck disable=SC2090 # intentional — RUNNER_LABELS_YAML rendered via envsubst
export RUNNER_LABELS_YAML
export RUNNER_CAPACITY
export RUNNER_CONTAINER_OPTIONS
render_template "${SCRIPT_DIR}/templates/runner-config.yaml.tpl" "$tmpfile" \
"\${RUNNER_NAME} \${RUNNER_LABELS_YAML} \${RUNNER_CAPACITY} \${RUNNER_CONTAINER_OPTIONS}"
runner_scp "$tmpfile" "${RUNNER_DATA_PATH}/config.yaml"
rm -f "$tmpfile"
@@ -422,9 +426,9 @@ add_native_runner() {
local tmpfile
tmpfile=$(mktemp)
# shellcheck disable=SC2090 # intentional — RUNNER_LABELS_YAML rendered via envsubst
export RUNNER_NAME RUNNER_DATA_PATH RUNNER_LABELS_YAML RUNNER_CAPACITY RUNNER_CONTAINER_OPTIONS
render_template "${SCRIPT_DIR}/templates/runner-config.yaml.tpl" "$tmpfile" \
"\${RUNNER_NAME} \${RUNNER_LABELS_YAML} \${RUNNER_CAPACITY} \${RUNNER_CONTAINER_OPTIONS}"
cp "$tmpfile" "${RUNNER_DATA_PATH}/config.yaml"
rm -f "$tmpfile"

phase10_local_repo_cutover.sh Executable file

@@ -0,0 +1,511 @@
#!/usr/bin/env bash
set -euo pipefail
# =============================================================================
# phase10_local_repo_cutover.sh — Re-point local repos from GitHub to Gitea
# Depends on: Phase 8 complete (Gitea publicly reachable) + Phase 4 migrated
#
# For each discovered local repo under /Users/s/development:
# 1. Rename origin -> github (if needed)
# 2. Ensure repo exists on Gitea (create if missing)
# 3. Add/update origin to point at Gitea
# 4. Push all branches and tags to Gitea origin
# 5. Ensure every local branch tracks origin/<branch> (Gitea)
#
# Discovery is based on local git remotes:
# - repo root is a direct child of PHASE10_LOCAL_ROOT (default /Users/s/development)
# - repo has origin/github pointing to github.com/${GITHUB_USERNAME}/<repo>
# - duplicate clones are deduped by repo slug
# =============================================================================
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
source "${SCRIPT_DIR}/lib/phase10_common.sh"
load_env
require_vars GITEA_ADMIN_TOKEN GITEA_ADMIN_USER GITEA_ORG_NAME GITEA_DOMAIN GITEA_INTERNAL_URL GITHUB_USERNAME
phase_header 10 "Local Repo Remote Cutover"
LOCAL_REPO_ROOT="${PHASE10_LOCAL_ROOT:-/Users/s/development}"
EXPECTED_REPO_COUNT="${PHASE10_EXPECTED_REPO_COUNT:-3}"
DRY_RUN=false
ASKPASS_SCRIPT=""
PHASE10_GITEA_REPO_EXISTS=false
PHASE10_REMOTE_BRANCHES=""
PHASE10_REMOTE_TAGS=""
for arg in "$@"; do
case "$arg" in
--local-root=*) LOCAL_REPO_ROOT="${arg#*=}" ;;
--expected-count=*) EXPECTED_REPO_COUNT="${arg#*=}" ;;
--dry-run) DRY_RUN=true ;;
--help|-h)
cat <<EOF
Usage: $(basename "$0") [options]
Options:
--local-root=PATH Root folder containing local repos (default: /Users/s/development)
--expected-count=N Require exactly N discovered repos (default: 3, 0 disables)
--dry-run Print planned actions only (no mutations)
--help Show this help
EOF
exit 0
;;
*)
log_error "Unknown argument: $arg"
exit 1
;;
esac
done
if ! [[ "$EXPECTED_REPO_COUNT" =~ ^[0-9]+$ ]]; then
log_error "--expected-count must be a non-negative integer"
exit 1
fi
cleanup() {
if [[ -n "$ASKPASS_SCRIPT" ]]; then
rm -f "$ASKPASS_SCRIPT"
fi
}
trap cleanup EXIT
setup_git_auth() {
ASKPASS_SCRIPT=$(mktemp)
cat > "$ASKPASS_SCRIPT" <<'EOF'
#!/usr/bin/env sh
case "$1" in
*sername*) printf '%s\n' "$GITEA_GIT_USERNAME" ;;
*assword*) printf '%s\n' "$GITEA_GIT_TOKEN" ;;
*) printf '\n' ;;
esac
EOF
chmod 700 "$ASKPASS_SCRIPT"
}
git_with_auth() {
GIT_TERMINAL_PROMPT=0 \
GIT_ASKPASS="$ASKPASS_SCRIPT" \
GITEA_GIT_USERNAME="$GITEA_ADMIN_USER" \
GITEA_GIT_TOKEN="$GITEA_ADMIN_TOKEN" \
"$@"
}
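The askpass mechanism above can be exercised standalone: git invokes the `GIT_ASKPASS` program once per prompt and reads one line back, so credentials travel only through the environment, never on a command line or on disk. A sketch with dummy values:

```shell
# Build the same one-shot askpass helper and drive it the way git would:
# the helper is called with a prompt string and answers from env vars.
askpass=$(mktemp)
cat > "$askpass" <<'EOF'
#!/usr/bin/env sh
case "$1" in
  *sername*) printf '%s\n' "$GITEA_GIT_USERNAME" ;;
  *assword*) printf '%s\n' "$GITEA_GIT_TOKEN" ;;
  *) printf '\n' ;;
esac
EOF
chmod 700 "$askpass"

u=$(GITEA_GIT_USERNAME=admin "$askpass" "Username for 'https://gitea.example'")
p=$(GITEA_GIT_TOKEN=s3cret "$askpass" "Password for 'https://gitea.example'")
echo "user=${u} pass=${p}"   # user=admin pass=s3cret
rm -f "$askpass"
```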
ensure_github_remote() {
local repo_path="$1" repo_name="$2" github_url="$3"
local existing origin_existing has_bad_github
has_bad_github=false
if existing=$(git -C "$repo_path" remote get-url github 2>/dev/null); then
if phase10_url_is_github_repo "$existing" "$GITHUB_USERNAME" "$repo_name"; then
if [[ "$existing" != "$github_url" ]]; then
if [[ "$DRY_RUN" == "true" ]]; then
log_info "${repo_name}: would set github URL -> ${github_url}"
else
git -C "$repo_path" remote set-url github "$github_url"
fi
fi
return 0
fi
has_bad_github=true
fi
if origin_existing=$(git -C "$repo_path" remote get-url origin 2>/dev/null); then
if phase10_url_is_github_repo "$origin_existing" "$GITHUB_USERNAME" "$repo_name"; then
if [[ "$has_bad_github" == "true" ]]; then
if [[ "$DRY_RUN" == "true" ]]; then
log_warn "${repo_name}: would remove misconfigured 'github' remote and rebuild it from origin"
else
git -C "$repo_path" remote remove github
log_warn "${repo_name}: removed misconfigured 'github' remote and rebuilt it from origin"
fi
fi
if [[ "$DRY_RUN" == "true" ]]; then
log_info "${repo_name}: would rename origin -> github"
log_info "${repo_name}: would set github URL -> ${github_url}"
else
git -C "$repo_path" remote rename origin github
git -C "$repo_path" remote set-url github "$github_url"
log_success "${repo_name}: renamed origin -> github"
fi
return 0
fi
fi
if [[ "$has_bad_github" == "true" ]]; then
log_error "${repo_name}: existing 'github' remote does not point to GitHub repo ${GITHUB_USERNAME}/${repo_name}"
return 1
fi
log_error "${repo_name}: could not find GitHub remote in 'origin' or 'github'"
return 1
}
ensure_gitea_origin() {
local repo_path="$1" repo_name="$2" gitea_url="$3"
local existing
if existing=$(git -C "$repo_path" remote get-url origin 2>/dev/null); then
if phase10_url_is_gitea_repo "$existing" "$GITEA_DOMAIN" "$GITEA_ORG_NAME" "$repo_name"; then
if [[ "$existing" != "$gitea_url" ]]; then
if [[ "$DRY_RUN" == "true" ]]; then
log_info "${repo_name}: would normalize origin URL -> ${gitea_url}"
else
git -C "$repo_path" remote set-url origin "$gitea_url"
fi
fi
return 0
fi
# origin exists but points somewhere else; force it to Gitea.
if [[ "$DRY_RUN" == "true" ]]; then
log_info "${repo_name}: would set origin URL -> ${gitea_url}"
else
git -C "$repo_path" remote set-url origin "$gitea_url"
fi
return 0
fi
if [[ "$DRY_RUN" == "true" ]]; then
log_info "${repo_name}: would add origin -> ${gitea_url}"
else
git -C "$repo_path" remote add origin "$gitea_url"
fi
return 0
}
ensure_gitea_repo_exists() {
local repo_name="$1"
local create_payload http_code
get_gitea_repo_http_code() {
local target_repo="$1"
# Only the HTTP status code matters here; discard the response body.
curl \
-s \
-o /dev/null \
-w "%{http_code}" \
-H "Authorization: token ${GITEA_ADMIN_TOKEN}" \
-H "Accept: application/json" \
"${GITEA_INTERNAL_URL}/api/v1/repos/${GITEA_ORG_NAME}/${target_repo}"
}
PHASE10_GITEA_REPO_EXISTS=false
if ! http_code="$(get_gitea_repo_http_code "$repo_name")"; then
log_error "${repo_name}: failed to query Gitea API for repo existence"
return 1
fi
if [[ "$http_code" == "200" ]]; then
PHASE10_GITEA_REPO_EXISTS=true
if [[ "$DRY_RUN" == "true" ]]; then
log_info "${repo_name}: Gitea repo already exists (${GITEA_ORG_NAME}/${repo_name})"
fi
return 0
fi
if [[ "$http_code" != "404" ]]; then
log_error "${repo_name}: unexpected Gitea API status while checking repo (${http_code})"
return 1
fi
create_payload=$(jq -n \
--arg name "$repo_name" \
'{name: $name, auto_init: false}')
if [[ "$DRY_RUN" == "true" ]]; then
log_info "${repo_name}: would create missing Gitea repo ${GITEA_ORG_NAME}/${repo_name}"
return 0
fi
if gitea_api POST "/orgs/${GITEA_ORG_NAME}/repos" "$create_payload" >/dev/null 2>&1; then
log_success "${repo_name}: created missing Gitea repo ${GITEA_ORG_NAME}/${repo_name}"
return 0
fi
log_error "${repo_name}: failed to create Gitea repo ${GITEA_ORG_NAME}/${repo_name}"
return 1
}
count_items() {
local list="$1"
if [[ -z "$list" ]]; then
printf '0'
return
fi
printf '%s\n' "$list" | sed '/^$/d' | wc -l | tr -d '[:space:]'
}
list_contains() {
local list="$1" needle="$2"
[[ -n "$list" ]] && printf '%s\n' "$list" | grep -Fxq "$needle"
}
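# Usage sketch for the two helpers above (lists are newline-separated):
#   branches=$'main\ndev'
#   count_items "$branches"          # prints 2
#   list_contains "$branches" main   # exit 0 (found)
#   list_contains "$branches" feat   # exit 1 (not found)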
fetch_remote_refs() {
local url="$1"
local refs ref short
PHASE10_REMOTE_BRANCHES=""
PHASE10_REMOTE_TAGS=""
refs=$(git_with_auth git ls-remote --heads --tags "$url" 2>/dev/null) || return 1
[[ -n "$refs" ]] || return 0
while IFS= read -r line; do
[[ -z "$line" ]] && continue
# ls-remote lines look like "<sha><TAB><ref>": drop the SHA, then any leftover whitespace.
ref="${line#*[[:space:]]}"
ref="${ref#"${ref%%[![:space:]]*}"}"
[[ -n "$ref" ]] || continue
case "$ref" in
refs/heads/*)
short="${ref#refs/heads/}"
PHASE10_REMOTE_BRANCHES="${PHASE10_REMOTE_BRANCHES}${short}"$'\n'
;;
refs/tags/*)
short="${ref#refs/tags/}"
[[ "$short" == *"^{}" ]] && continue
PHASE10_REMOTE_TAGS="${PHASE10_REMOTE_TAGS}${short}"$'\n'
;;
esac
done <<< "$refs"
PHASE10_REMOTE_BRANCHES="$(printf '%s' "$PHASE10_REMOTE_BRANCHES" | sed '/^$/d' | LC_ALL=C sort -u)"
PHASE10_REMOTE_TAGS="$(printf '%s' "$PHASE10_REMOTE_TAGS" | sed '/^$/d' | LC_ALL=C sort -u)"
}
print_diff_summary() {
local repo_name="$1" kind="$2" local_list="$3" remote_list="$4"
local missing_count extra_count item
local missing_preview="" extra_preview=""
local preview_limit=5
missing_count=0
while IFS= read -r item; do
[[ -z "$item" ]] && continue
if ! list_contains "$remote_list" "$item"; then
missing_count=$((missing_count + 1))
if [[ "$missing_count" -le "$preview_limit" ]]; then
missing_preview="${missing_preview}${item}, "
fi
fi
done <<< "$local_list"
extra_count=0
while IFS= read -r item; do
[[ -z "$item" ]] && continue
if ! list_contains "$local_list" "$item"; then
extra_count=$((extra_count + 1))
if [[ "$extra_count" -le "$preview_limit" ]]; then
extra_preview="${extra_preview}${item}, "
fi
fi
done <<< "$remote_list"
if [[ "$missing_count" -eq 0 ]] && [[ "$extra_count" -eq 0 ]]; then
log_success "${repo_name}: local ${kind}s match Gitea"
return 0
fi
if [[ "$missing_count" -gt 0 ]]; then
missing_preview="${missing_preview%, }"
log_info "${repo_name}: ${missing_count} ${kind}(s) missing on Gitea"
if [[ -n "$missing_preview" ]]; then
log_info " missing ${kind} sample: ${missing_preview}"
fi
fi
if [[ "$extra_count" -gt 0 ]]; then
extra_preview="${extra_preview%, }"
log_info "${repo_name}: ${extra_count} ${kind}(s) exist on Gitea but not locally"
if [[ -n "$extra_preview" ]]; then
log_info " remote-only ${kind} sample: ${extra_preview}"
fi
fi
}
dry_run_compare_local_and_remote() {
local repo_path="$1" repo_name="$2" gitea_url="$3"
local local_branches local_tags
local local_branch_count local_tag_count remote_branch_count remote_tag_count
local_branches="$(git -C "$repo_path" for-each-ref --format='%(refname:short)' refs/heads | LC_ALL=C sort -u)"
local_tags="$(git -C "$repo_path" tag -l | LC_ALL=C sort -u)"
local_branch_count="$(count_items "$local_branches")"
local_tag_count="$(count_items "$local_tags")"
log_info "${repo_name}: local state = ${local_branch_count} branch(es), ${local_tag_count} tag(s)"
if [[ "$PHASE10_GITEA_REPO_EXISTS" != "true" ]]; then
log_info "${repo_name}: remote state = repo missing (would be created)"
if [[ "$local_branch_count" -gt 0 ]]; then
log_info "${repo_name}: all local branches would be pushed to new Gitea repo"
fi
if [[ "$local_tag_count" -gt 0 ]]; then
log_info "${repo_name}: all local tags would be pushed to new Gitea repo"
fi
return 0
fi
if ! fetch_remote_refs "$gitea_url"; then
log_warn "${repo_name}: could not read Gitea refs via ls-remote; skipping diff"
return 0
fi
remote_branch_count="$(count_items "$PHASE10_REMOTE_BRANCHES")"
remote_tag_count="$(count_items "$PHASE10_REMOTE_TAGS")"
log_info "${repo_name}: remote Gitea state = ${remote_branch_count} branch(es), ${remote_tag_count} tag(s)"
print_diff_summary "$repo_name" "branch" "$local_branches" "$PHASE10_REMOTE_BRANCHES"
print_diff_summary "$repo_name" "tag" "$local_tags" "$PHASE10_REMOTE_TAGS"
}
push_all_refs_to_origin() {
local repo_path="$1" repo_name="$2"
if [[ "$DRY_RUN" == "true" ]]; then
log_info "${repo_name}: would push all branches to origin"
log_info "${repo_name}: would push all tags to origin"
return 0
fi
if ! git_with_auth git -C "$repo_path" push --all origin >/dev/null; then
log_error "${repo_name}: failed pushing branches to Gitea origin"
return 1
fi
if ! git_with_auth git -C "$repo_path" push --tags origin >/dev/null; then
log_error "${repo_name}: failed pushing tags to Gitea origin"
return 1
fi
return 0
}
retarget_tracking_to_origin() {
local repo_path="$1" repo_name="$2"
local branch upstream_remote upstream_short branch_count
branch_count=0
while IFS= read -r branch; do
[[ -z "$branch" ]] && continue
branch_count=$((branch_count + 1))
if ! git -C "$repo_path" show-ref --verify --quiet "refs/remotes/origin/${branch}"; then
# A local branch can exist without an origin ref if it never got pushed.
if [[ "$DRY_RUN" == "true" ]]; then
log_info "${repo_name}: would create origin/${branch} by pushing local ${branch}"
else
if ! git_with_auth git -C "$repo_path" push origin "refs/heads/${branch}:refs/heads/${branch}" >/dev/null; then
log_error "${repo_name}: could not create origin/${branch} while setting tracking"
return 1
fi
fi
fi
if [[ "$DRY_RUN" == "true" ]]; then
log_info "${repo_name}: would set upstream ${branch} -> origin/${branch}"
continue
else
if ! git -C "$repo_path" branch --set-upstream-to="origin/${branch}" "$branch" >/dev/null 2>&1; then
log_error "${repo_name}: failed to set upstream for branch '${branch}' to origin/${branch}"
return 1
fi
upstream_remote=$(git -C "$repo_path" for-each-ref --format='%(upstream:remotename)' "refs/heads/${branch}")
upstream_short=$(git -C "$repo_path" for-each-ref --format='%(upstream:short)' "refs/heads/${branch}")
if [[ "$upstream_remote" != "origin" ]] || [[ "$upstream_short" != "origin/${branch}" ]]; then
log_error "${repo_name}: branch '${branch}' upstream is '${upstream_short:-<none>}' (expected origin/${branch})"
return 1
fi
fi
done < <(git -C "$repo_path" for-each-ref --format='%(refname:short)' refs/heads)
if [[ "$branch_count" -eq 0 ]]; then
log_warn "${repo_name}: no local branches found"
fi
return 0
}
if ! phase10_discover_local_repos "$LOCAL_REPO_ROOT" "$GITHUB_USERNAME" "$SCRIPT_DIR" "$EXPECTED_REPO_COUNT"; then
exit 1
fi
log_info "Discovered ${#PHASE10_REPO_NAMES[@]} local repos in ${LOCAL_REPO_ROOT}"
for i in "${!PHASE10_REPO_NAMES[@]}"; do
log_info " - ${PHASE10_REPO_NAMES[$i]} -> ${PHASE10_REPO_PATHS[$i]}"
done
setup_git_auth
SUCCESS=0
FAILED=0
for i in "${!PHASE10_REPO_NAMES[@]}"; do
repo_name="${PHASE10_REPO_NAMES[$i]}"
repo_path="${PHASE10_REPO_PATHS[$i]}"
github_url="${PHASE10_GITHUB_URLS[$i]}"
gitea_url="$(phase10_canonical_gitea_url "$GITEA_DOMAIN" "$GITEA_ORG_NAME" "$repo_name")"
log_info "--- Processing repo: ${repo_name} (${repo_path}) ---"
if ! ensure_github_remote "$repo_path" "$repo_name" "$github_url"; then
FAILED=$((FAILED + 1))
continue
fi
if ! ensure_gitea_repo_exists "$repo_name"; then
FAILED=$((FAILED + 1))
continue
fi
if [[ "$DRY_RUN" == "true" ]]; then
dry_run_compare_local_and_remote "$repo_path" "$repo_name" "$gitea_url"
fi
if ! ensure_gitea_origin "$repo_path" "$repo_name" "$gitea_url"; then
log_error "${repo_name}: failed to set origin to ${gitea_url}"
FAILED=$((FAILED + 1))
continue
fi
if ! push_all_refs_to_origin "$repo_path" "$repo_name"; then
FAILED=$((FAILED + 1))
continue
fi
if ! retarget_tracking_to_origin "$repo_path" "$repo_name"; then
FAILED=$((FAILED + 1))
continue
fi
if [[ "$DRY_RUN" == "true" ]]; then
log_success "${repo_name}: dry-run plan complete"
else
log_success "${repo_name}: origin now points to Gitea and tracking updated"
fi
SUCCESS=$((SUCCESS + 1))
done
printf '\n'
TOTAL=${#PHASE10_REPO_NAMES[@]}
log_info "Results: ${SUCCESS} succeeded, ${FAILED} failed (out of ${TOTAL})"
if [[ "$DRY_RUN" == "true" ]]; then
if [[ "$FAILED" -gt 0 ]]; then
log_error "Phase 10 dry-run found ${FAILED} error(s); no changes were made"
exit 1
fi
log_success "Phase 10 dry-run complete — no changes were made"
exit 0
fi
if [[ "$FAILED" -gt 0 ]]; then
log_error "Phase 10 failed for one or more repos"
exit 1
fi
log_success "Phase 10 complete — local repos now push/track via Gitea origin"

phase10_post_check.sh (new executable file, +112 lines)
#!/usr/bin/env bash
set -euo pipefail
# =============================================================================
# phase10_post_check.sh — Verify local repo remote cutover to Gitea
# Checks for each discovered local repo:
# 1. origin points to Gitea org/repo
# 2. github points to GitHub owner/repo
# 3. every local branch tracks origin/<branch>
# =============================================================================
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
source "${SCRIPT_DIR}/lib/phase10_common.sh"
load_env
require_vars GITEA_ORG_NAME GITEA_DOMAIN GITHUB_USERNAME
phase_header 10 "Local Repo Remote Cutover — Post-Check"
LOCAL_REPO_ROOT="${PHASE10_LOCAL_ROOT:-/Users/s/development}"
EXPECTED_REPO_COUNT="${PHASE10_EXPECTED_REPO_COUNT:-3}"
for arg in "$@"; do
case "$arg" in
--local-root=*) LOCAL_REPO_ROOT="${arg#*=}" ;;
--expected-count=*) EXPECTED_REPO_COUNT="${arg#*=}" ;;
--help|-h)
cat <<EOF
Usage: $(basename "$0") [options]
Options:
--local-root=PATH Root folder containing local repos (default: /Users/s/development)
--expected-count=N Require exactly N discovered repos (default: 3, 0 disables)
--help Show this help
EOF
exit 0
;;
*)
log_error "Unknown argument: $arg"
exit 1
;;
esac
done
if ! [[ "$EXPECTED_REPO_COUNT" =~ ^[0-9]+$ ]]; then
log_error "--expected-count must be a non-negative integer"
exit 1
fi
if ! phase10_discover_local_repos "$LOCAL_REPO_ROOT" "$GITHUB_USERNAME" "$SCRIPT_DIR" "$EXPECTED_REPO_COUNT"; then
exit 1
fi
PASS=0
FAIL=0
for i in "${!PHASE10_REPO_NAMES[@]}"; do
repo_name="${PHASE10_REPO_NAMES[$i]}"
repo_path="${PHASE10_REPO_PATHS[$i]}"
github_url="${PHASE10_GITHUB_URLS[$i]}"
gitea_url="$(phase10_canonical_gitea_url "$GITEA_DOMAIN" "$GITEA_ORG_NAME" "$repo_name")"
log_info "--- Checking repo: ${repo_name} (${repo_path}) ---"
origin_url="$(git -C "$repo_path" remote get-url origin 2>/dev/null || true)"
if [[ -n "$origin_url" ]] && phase10_url_is_gitea_repo "$origin_url" "$GITEA_DOMAIN" "$GITEA_ORG_NAME" "$repo_name"; then
log_success "origin points to Gitea (${gitea_url})"
PASS=$((PASS + 1))
else
log_error "FAIL: origin does not point to ${gitea_url} (found: ${origin_url:-<missing>})"
FAIL=$((FAIL + 1))
fi
github_remote_url="$(git -C "$repo_path" remote get-url github 2>/dev/null || true)"
if [[ -n "$github_remote_url" ]] && phase10_url_is_github_repo "$github_remote_url" "$GITHUB_USERNAME" "$repo_name"; then
log_success "github points to GitHub (${github_url})"
PASS=$((PASS + 1))
else
log_error "FAIL: github does not point to ${github_url} (found: ${github_remote_url:-<missing>})"
FAIL=$((FAIL + 1))
fi
branch_count=0
while IFS= read -r branch; do
[[ -z "$branch" ]] && continue
branch_count=$((branch_count + 1))
upstream_remote=$(git -C "$repo_path" for-each-ref --format='%(upstream:remotename)' "refs/heads/${branch}")
upstream_short=$(git -C "$repo_path" for-each-ref --format='%(upstream:short)' "refs/heads/${branch}")
if [[ "$upstream_remote" == "origin" ]] && [[ "$upstream_short" == "origin/${branch}" ]]; then
log_success "branch ${branch} tracks origin/${branch}"
PASS=$((PASS + 1))
else
log_error "FAIL: branch ${branch} tracks ${upstream_short:-<none>} (expected origin/${branch})"
FAIL=$((FAIL + 1))
fi
done < <(git -C "$repo_path" for-each-ref --format='%(refname:short)' refs/heads)
if [[ "$branch_count" -eq 0 ]]; then
log_warn "No local branches found in ${repo_name}"
fi
done
printf '\n'
log_info "Results: ${PASS} passed, ${FAIL} failed"
if [[ "$FAIL" -gt 0 ]]; then
log_error "Phase 10 post-check FAILED"
exit 1
fi
log_success "Phase 10 post-check PASSED — local repos track Gitea origin"

phase10_teardown.sh (new executable file, +172 lines)
#!/usr/bin/env bash
set -euo pipefail
# =============================================================================
# phase10_teardown.sh — Reverse local repo remote cutover from phase 10
# Reverts local repos so GitHub is origin again:
# 1. Move Gitea origin -> gitea (if present)
# 2. Move github -> origin
# 3. Set local branch upstreams to origin/<branch> where available
# =============================================================================
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
source "${SCRIPT_DIR}/lib/phase10_common.sh"
load_env
require_vars GITEA_ORG_NAME GITEA_DOMAIN GITHUB_USERNAME
phase_header 10 "Local Repo Remote Cutover — Teardown"
LOCAL_REPO_ROOT="${PHASE10_LOCAL_ROOT:-/Users/s/development}"
EXPECTED_REPO_COUNT="${PHASE10_EXPECTED_REPO_COUNT:-3}"
AUTO_YES=false
for arg in "$@"; do
case "$arg" in
--local-root=*) LOCAL_REPO_ROOT="${arg#*=}" ;;
--expected-count=*) EXPECTED_REPO_COUNT="${arg#*=}" ;;
--yes|-y) AUTO_YES=true ;;
--help|-h)
cat <<EOF
Usage: $(basename "$0") [options]
Options:
--local-root=PATH Root folder containing local repos (default: /Users/s/development)
--expected-count=N Require exactly N discovered repos (default: 3, 0 disables)
--yes, -y Skip confirmation prompt
--help Show this help
EOF
exit 0
;;
*)
log_error "Unknown argument: $arg"
exit 1
;;
esac
done
if ! [[ "$EXPECTED_REPO_COUNT" =~ ^[0-9]+$ ]]; then
log_error "--expected-count must be a non-negative integer"
exit 1
fi
if [[ "$AUTO_YES" != "true" ]]; then
log_warn "This will revert local repo remotes so GitHub is origin again."
printf 'Continue? [y/N] ' >&2
read -r confirm
if [[ "$confirm" != "y" ]] && [[ "$confirm" != "Y" ]]; then
log_info "Teardown cancelled"
exit 0
fi
fi
if ! phase10_discover_local_repos "$LOCAL_REPO_ROOT" "$GITHUB_USERNAME" "$SCRIPT_DIR" "$EXPECTED_REPO_COUNT"; then
exit 1
fi
set_tracking_to_origin_where_available() {
local repo_path="$1" repo_name="$2"
local branch branch_count
branch_count=0
while IFS= read -r branch; do
[[ -z "$branch" ]] && continue
branch_count=$((branch_count + 1))
if git -C "$repo_path" show-ref --verify --quiet "refs/remotes/origin/${branch}"; then
if git -C "$repo_path" branch --set-upstream-to="origin/${branch}" "$branch" >/dev/null 2>&1; then
log_success "${repo_name}: branch ${branch} now tracks origin/${branch}"
else
log_warn "${repo_name}: could not set upstream for ${branch}"
fi
else
log_warn "${repo_name}: origin/${branch} not found (upstream unchanged)"
fi
done < <(git -C "$repo_path" for-each-ref --format='%(refname:short)' refs/heads)
if [[ "$branch_count" -eq 0 ]]; then
log_warn "${repo_name}: no local branches found"
fi
}
ensure_origin_is_github() {
local repo_path="$1" repo_name="$2" github_url="$3" gitea_url="$4"
local origin_url github_url_existing gitea_url_existing
origin_url="$(git -C "$repo_path" remote get-url origin 2>/dev/null || true)"
github_url_existing="$(git -C "$repo_path" remote get-url github 2>/dev/null || true)"
gitea_url_existing="$(git -C "$repo_path" remote get-url gitea 2>/dev/null || true)"
if [[ -n "$origin_url" ]]; then
if phase10_url_is_github_repo "$origin_url" "$GITHUB_USERNAME" "$repo_name"; then
git -C "$repo_path" remote set-url origin "$github_url"
if [[ -n "$github_url_existing" ]]; then
git -C "$repo_path" remote remove github
fi
return 0
fi
if phase10_url_is_gitea_repo "$origin_url" "$GITEA_DOMAIN" "$GITEA_ORG_NAME" "$repo_name"; then
if [[ -z "$gitea_url_existing" ]]; then
git -C "$repo_path" remote rename origin gitea
else
git -C "$repo_path" remote set-url gitea "$gitea_url"
git -C "$repo_path" remote remove origin
fi
else
log_error "${repo_name}: origin remote is unexpected (${origin_url})"
return 1
fi
fi
if git -C "$repo_path" remote get-url origin >/dev/null 2>&1; then
:
elif [[ -n "$github_url_existing" ]]; then
git -C "$repo_path" remote rename github origin
else
git -C "$repo_path" remote add origin "$github_url"
fi
git -C "$repo_path" remote set-url origin "$github_url"
if git -C "$repo_path" remote get-url github >/dev/null 2>&1; then
git -C "$repo_path" remote remove github
fi
if git -C "$repo_path" remote get-url gitea >/dev/null 2>&1; then
git -C "$repo_path" remote set-url gitea "$gitea_url"
fi
return 0
}
SUCCESS=0
FAILED=0
for i in "${!PHASE10_REPO_NAMES[@]}"; do
repo_name="${PHASE10_REPO_NAMES[$i]}"
repo_path="${PHASE10_REPO_PATHS[$i]}"
github_url="${PHASE10_GITHUB_URLS[$i]}"
gitea_url="$(phase10_canonical_gitea_url "$GITEA_DOMAIN" "$GITEA_ORG_NAME" "$repo_name")"
log_info "--- Reverting repo: ${repo_name} (${repo_path}) ---"
if ! ensure_origin_is_github "$repo_path" "$repo_name" "$github_url" "$gitea_url"; then
FAILED=$((FAILED + 1))
continue
fi
set_tracking_to_origin_where_available "$repo_path" "$repo_name"
SUCCESS=$((SUCCESS + 1))
done
printf '\n'
TOTAL=${#PHASE10_REPO_NAMES[@]}
log_info "Results: ${SUCCESS} reverted, ${FAILED} failed (out of ${TOTAL})"
if [[ "$FAILED" -gt 0 ]]; then
log_error "Phase 10 teardown completed with failures"
exit 1
fi
log_success "Phase 10 teardown complete"

phase11_custom_runners.sh (new executable file, +288 lines)
#!/usr/bin/env bash
set -euo pipefail
# =============================================================================
# phase11_custom_runners.sh — Deploy per-repo runner infrastructure & variables
# Depends on: Phase 3 complete (runner infra), Phase 4 complete (repos on Gitea)
#
# Steps:
# 1. Build custom toolchain images on Unraid (go-node-runner, jvm-android-runner)
# 2. Consolidate macOS runners into a shared instance-level runner
# 3. Deploy per-repo Docker runners via manage_runner.sh
# 4. Set Gitea repository variables from repo_variables.conf
#
# Runner strategy:
# - Linux runners: repo-scoped, separate toolchain images per repo
# - Android emulator: shared (repos=all) — any repo can use it
# - macOS runner: shared (repos=all) — any repo can use it
#
# Idempotent: skips images that already exist, runners already running,
# and variables that already match.
# =============================================================================
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
load_env
require_vars GITEA_ADMIN_TOKEN GITEA_INTERNAL_URL GITEA_ORG_NAME \
UNRAID_IP UNRAID_SSH_USER UNRAID_SSH_PORT \
GO_NODE_RUNNER_CONTEXT JVM_ANDROID_RUNNER_CONTEXT \
ACT_RUNNER_VERSION
phase_header 11 "Custom Runner Infrastructure"
REPO_VARS_CONF="${SCRIPT_DIR}/repo_variables.conf"
REBUILD_IMAGES=false
for arg in "$@"; do
case "$arg" in
--rebuild-images) REBUILD_IMAGES=true ;;
*) ;;
esac
done
SUCCESS=0
FAILED=0
# ---------------------------------------------------------------------------
# Helper: rsync a build context directory to Unraid
# ---------------------------------------------------------------------------
rsync_to_unraid() {
local src="$1" dest="$2"
local ssh_key="${UNRAID_SSH_KEY:-}"
local ssh_opts="ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=accept-new -p ${UNRAID_SSH_PORT}"
if [[ -n "$ssh_key" ]]; then
ssh_opts="${ssh_opts} -i ${ssh_key}"
fi
rsync -az --delete \
--exclude='.env' \
--exclude='.env.*' \
--exclude='envs/' \
--exclude='.git' \
--exclude='.gitignore' \
-e "$ssh_opts" \
"${src}/" "${UNRAID_SSH_USER}@${UNRAID_IP}:${dest}/"
}
# ---------------------------------------------------------------------------
# Helper: check if a Docker image exists on Unraid
# ---------------------------------------------------------------------------
image_exists_on_unraid() {
local tag="$1"
ssh_exec "UNRAID" "docker image inspect '${tag}' >/dev/null 2>&1"
}
# ---------------------------------------------------------------------------
# Helper: list all keys in an INI section (for repo_variables.conf)
# ---------------------------------------------------------------------------
ini_list_keys() {
local file="$1" section="$2"
local in_section=false
local line k
while IFS= read -r line; do
line="${line#"${line%%[![:space:]]*}"}"
line="${line%"${line##*[![:space:]]}"}"
[[ -z "$line" ]] && continue
[[ "$line" == \#* ]] && continue
if [[ "$line" =~ ^\[([^]]+)\] ]]; then
if [[ "${BASH_REMATCH[1]}" == "$section" ]]; then
in_section=true
elif $in_section; then
break
fi
continue
fi
if $in_section && [[ "$line" =~ ^([^=]+)= ]]; then
k="${BASH_REMATCH[1]}"
k="${k#"${k%%[![:space:]]*}"}"
k="${k%"${k##*[![:space:]]}"}"
printf '%s\n' "$k"
fi
done < "$file"
}
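# Illustrative repo_variables.conf layout ini_list_keys expects
# ('[section]' per repo, KEY=VALUE lines; blank lines and '#' comments skipped):
#   [periodvault]
#   LOCAL_REGISTRY=registry.local:5000
#   SOME_TOOL_VERSION=1.2.3
# The section and key names above are placeholders.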
# ---------------------------------------------------------------------------
# Helper: upsert a Gitea repo variable (create or update)
# ---------------------------------------------------------------------------
upsert_repo_variable() {
local repo="$1" var_name="$2" var_value="$3"
local owner="${GITEA_ORG_NAME}"
# Check if variable already exists with correct value
local existing
if existing=$(gitea_api GET "/repos/${owner}/${repo}/actions/variables/${var_name}" 2>/dev/null); then
local current_value
current_value=$(printf '%s' "$existing" | jq -r '.value // .data // empty' 2>/dev/null)
if [[ "$current_value" == "$var_value" ]]; then
log_info " ${var_name} already set correctly — skipping"
return 0
fi
# Update existing variable
if gitea_api PUT "/repos/${owner}/${repo}/actions/variables/${var_name}" \
"$(jq -n --arg v "$var_value" '{value: $v}')" >/dev/null 2>&1; then
log_success " Updated ${var_name}"
return 0
else
log_error " Failed to update ${var_name}"
return 1
fi
fi
# Create new variable
if gitea_api POST "/repos/${owner}/${repo}/actions/variables" \
"$(jq -n --arg n "$var_name" --arg v "$var_value" '{name: $n, value: $v}')" >/dev/null 2>&1; then
log_success " Created ${var_name}"
return 0
else
log_error " Failed to create ${var_name}"
return 1
fi
}
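# Sketch of the API traffic upsert_repo_variable generates (paths as used above):
#   GET  /repos/ORG/REPO/actions/variables/NAME   -> read current value (skip if equal)
#   PUT  /repos/ORG/REPO/actions/variables/NAME   -> {"value": "..."}  (update)
#   POST /repos/ORG/REPO/actions/variables        -> {"name": "...", "value": "..."}  (create)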
# =========================================================================
# Step 1: Build toolchain images on Unraid
# =========================================================================
log_step 1 "Building toolchain images on Unraid"
REMOTE_BUILD_BASE="/tmp/gitea-runner-builds"
# Image build definitions: TAG|LOCAL_CONTEXT|DOCKER_TARGET
IMAGE_BUILDS=(
"go-node-runner:latest|${GO_NODE_RUNNER_CONTEXT}|"
"jvm-android-runner:slim|${JVM_ANDROID_RUNNER_CONTEXT}|slim"
"jvm-android-runner:full|${JVM_ANDROID_RUNNER_CONTEXT}|full"
)
for build_entry in "${IMAGE_BUILDS[@]}"; do
IFS='|' read -r img_tag build_context docker_target <<< "$build_entry"
if [[ "$REBUILD_IMAGES" != "true" ]] && image_exists_on_unraid "$img_tag"; then
log_info "Image ${img_tag} already exists on Unraid — skipping"
continue
fi
# Remote build dir is keyed by image name; tags that share a build context (e.g. the jvm slim/full pair) reuse the same dir
remote_dir="${REMOTE_BUILD_BASE}/${img_tag%%:*}"
log_info "Syncing build context for ${img_tag}..."
ssh_exec "UNRAID" "mkdir -p '${remote_dir}'"
rsync_to_unraid "$build_context" "$remote_dir"
log_info "Building ${img_tag} on Unraid (this may take a while)..."
build_args=""
if [[ -n "$docker_target" ]]; then
build_args="--target ${docker_target}"
fi
# shellcheck disable=SC2086
if ssh_exec "UNRAID" "cd '${remote_dir}' && docker build ${build_args} -t '${img_tag}' ."; then
log_success "Built ${img_tag}"
else
log_error "Failed to build ${img_tag}"
FAILED=$((FAILED + 1))
fi
done
# Clean up build contexts on Unraid
ssh_exec "UNRAID" "rm -rf '${REMOTE_BUILD_BASE}'" 2>/dev/null || true
# =========================================================================
# Step 2: Consolidate macOS runners into shared instance-level runner
# =========================================================================
log_step 2 "Consolidating macOS runners"
# Old per-repo macOS runners to remove
OLD_MAC_RUNNERS=(
macbook-runner-periodvault
macbook-runner-intermittent-fasting-tracker
)
for old_runner in "${OLD_MAC_RUNNERS[@]}"; do
if ini_list_sections "${SCRIPT_DIR}/runners.conf" | grep -qx "$old_runner" 2>/dev/null; then
log_warn "Old runner section '${old_runner}' is still present in runners.conf; phase 11 expects it removed"
log_info " (If still registered in Gitea, run: manage_runner.sh remove --name ${old_runner})"
fi
# Remove from Gitea if still registered (launchd service)
if launchctl list 2>/dev/null | grep -q "com.gitea.runner.${old_runner}"; then
log_info "Removing old macOS runner '${old_runner}'..."
"${SCRIPT_DIR}/manage_runner.sh" remove --name "$old_runner" 2>/dev/null || true
fi
done
# Deploy the new shared macOS runner
if launchctl list 2>/dev/null | grep -q "com.gitea.runner.macbook-runner"; then
log_info "Shared macOS runner 'macbook-runner' already registered — skipping"
else
log_info "Deploying shared macOS runner 'macbook-runner'..."
if "${SCRIPT_DIR}/manage_runner.sh" add --name macbook-runner; then
log_success "Shared macOS runner deployed"
else
log_error "Failed to deploy shared macOS runner"
FAILED=$((FAILED + 1))
fi
fi
# =========================================================================
# Step 3: Deploy per-repo and shared Docker runners
# =========================================================================
log_step 3 "Deploying Docker runners"
# Phase 11 Docker runners (defined in runners.conf)
PHASE11_DOCKER_RUNNERS=(
unraid-go-node-1
unraid-go-node-2
unraid-go-node-3
unraid-jvm-slim-1
unraid-jvm-slim-2
unraid-android-emulator
)
for runner_name in "${PHASE11_DOCKER_RUNNERS[@]}"; do
log_info "--- Deploying runner: ${runner_name} ---"
if "${SCRIPT_DIR}/manage_runner.sh" add --name "$runner_name"; then
SUCCESS=$((SUCCESS + 1))
else
log_error "Failed to deploy runner '${runner_name}'"
FAILED=$((FAILED + 1))
fi
done
# =========================================================================
# Step 4: Set repository variables from repo_variables.conf
# =========================================================================
log_step 4 "Setting Gitea repository variables"
if [[ ! -f "$REPO_VARS_CONF" ]]; then
log_warn "repo_variables.conf not found — skipping variable setup"
else
# Iterate all sections (repos) in repo_variables.conf
while IFS= read -r repo; do
[[ -z "$repo" ]] && continue
log_info "--- Setting variables for repo: ${repo} ---"
# Iterate all keys in this section
while IFS= read -r var_name; do
[[ -z "$var_name" ]] && continue
var_value=$(ini_get "$REPO_VARS_CONF" "$repo" "$var_name" "")
if [[ -z "$var_value" ]]; then
log_warn " ${var_name} has empty value — skipping"
continue
fi
upsert_repo_variable "$repo" "$var_name" "$var_value" || FAILED=$((FAILED + 1))
done < <(ini_list_keys "$REPO_VARS_CONF" "$repo")
done < <(ini_list_sections "$REPO_VARS_CONF")
fi
# ---------------------------------------------------------------------------
# Summary
# ---------------------------------------------------------------------------
printf '\n'
log_info "Results: ${SUCCESS} runners deployed, ${FAILED} failures"
if [[ $FAILED -gt 0 ]]; then
log_error "Some operations failed — check logs above"
exit 1
fi
log_success "Phase 11 complete — custom runner infrastructure deployed"

phase11_post_check.sh (new executable file, +204 lines)
#!/usr/bin/env bash
set -euo pipefail
# =============================================================================
# phase11_post_check.sh — Verify custom runner infrastructure deployment
# Checks:
# 1. Toolchain images exist on Unraid
# 2. All phase 11 runners registered and online in Gitea
# 3. Shared macOS runner has correct labels
# 4. Repository variables set correctly
# 5. KVM available on Unraid (warning only)
# =============================================================================
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
load_env
require_vars GITEA_ADMIN_TOKEN GITEA_INTERNAL_URL GITEA_ORG_NAME \
UNRAID_IP UNRAID_SSH_USER UNRAID_SSH_PORT
phase_header 11 "Custom Runners — Post-Check"
REPO_VARS_CONF="${SCRIPT_DIR}/repo_variables.conf"
PASS=0
FAIL=0
WARN=0
run_check() {
local desc="$1"
shift
if "$@"; then
log_success "$desc"
PASS=$((PASS + 1))
else
log_error "FAIL: $desc"
FAIL=$((FAIL + 1))
fi
}
run_warn_check() {
local desc="$1"
shift
if "$@"; then
log_success "$desc"
PASS=$((PASS + 1))
else
log_warn "WARN: $desc"
WARN=$((WARN + 1))
fi
}
# =========================================================================
# Check 1: Toolchain images exist on Unraid
# =========================================================================
log_info "--- Checking toolchain images ---"
check_image() {
local tag="$1"
ssh_exec "UNRAID" "docker image inspect '${tag}' >/dev/null 2>&1"
}
run_check "Image go-node-runner:latest exists on Unraid" check_image "go-node-runner:latest"
run_check "Image jvm-android-runner:slim exists on Unraid" check_image "jvm-android-runner:slim"
run_check "Image jvm-android-runner:full exists on Unraid" check_image "jvm-android-runner:full"
# =========================================================================
# Check 2: All phase 11 runners registered and online
# =========================================================================
log_info "--- Checking runner status ---"
# Fetch all runners from Gitea admin API (single call)
ALL_RUNNERS=$(gitea_api GET "/admin/runners" 2>/dev/null || echo "[]")
check_runner_online() {
local name="$1"
local status
status=$(printf '%s' "$ALL_RUNNERS" | jq -r --arg n "$name" \
'[.[] | select(.name == $n)] | .[0].status // "not-found"' 2>/dev/null)
if [[ "$status" == "not-found" ]] || [[ -z "$status" ]]; then
log_error " Runner '${name}' not found in Gitea"
return 1
fi
if [[ "$status" == "offline" ]] || [[ "$status" == "2" ]]; then
log_error " Runner '${name}' is offline"
return 1
fi
return 0
}
PHASE11_RUNNERS=(
macbook-runner
unraid-go-node-1
unraid-go-node-2
unraid-go-node-3
unraid-jvm-slim-1
unraid-jvm-slim-2
unraid-android-emulator
)
for runner in "${PHASE11_RUNNERS[@]}"; do
run_check "Runner '${runner}' registered and online" check_runner_online "$runner"
done
# =========================================================================
# Check 3: Shared macOS runner has correct labels
# =========================================================================
log_info "--- Checking macOS runner labels ---"
check_mac_labels() {
local labels
labels=$(printf '%s' "$ALL_RUNNERS" | jq -r \
'[.[] | select(.name == "macbook-runner")] | .[0].labels // [] | .[].name' 2>/dev/null)
local missing=0
for expected in "self-hosted" "macOS" "ARM64"; do
if ! printf '%s' "$labels" | grep -qx "$expected" 2>/dev/null; then
log_error " macbook-runner missing label: ${expected}"
missing=1
fi
done
return "$missing"
}
run_check "macbook-runner has labels: self-hosted, macOS, ARM64" check_mac_labels
# =========================================================================
# Check 4: Repository variables set correctly
# =========================================================================
log_info "--- Checking repository variables ---"
check_repo_variable() {
local repo="$1" var_name="$2" expected="$3"
local owner="${GITEA_ORG_NAME}"
local response
if ! response=$(gitea_api GET "/repos/${owner}/${repo}/actions/variables/${var_name}" 2>/dev/null); then
log_error " Variable ${var_name} not found on ${repo}"
return 1
fi
local actual
actual=$(printf '%s' "$response" | jq -r '.value // .data // empty' 2>/dev/null)
if [[ "$actual" != "$expected" ]]; then
log_error " Variable ${var_name} on ${repo}: expected '${expected}', got '${actual}'"
return 1
fi
return 0
}
if [[ -f "$REPO_VARS_CONF" ]]; then
while IFS= read -r repo; do
[[ -z "$repo" ]] && continue
# Read all keys from the section using inline parsing
local_in_section=false
while IFS= read -r line; do
line="${line#"${line%%[![:space:]]*}"}"
line="${line%"${line##*[![:space:]]}"}"
[[ -z "$line" ]] && continue
[[ "$line" == \#* ]] && continue
if [[ "$line" =~ ^\[([^]]+)\] ]]; then
if [[ "${BASH_REMATCH[1]}" == "$repo" ]]; then
local_in_section=true
elif $local_in_section; then
break
fi
continue
fi
if $local_in_section && [[ "$line" =~ ^([^=]+)=(.*) ]]; then
k="${BASH_REMATCH[1]}"
v="${BASH_REMATCH[2]}"
k="${k#"${k%%[![:space:]]*}"}"
k="${k%"${k##*[![:space:]]}"}"
v="${v#"${v%%[![:space:]]*}"}"
v="${v%"${v##*[![:space:]]}"}"
run_check "Variable ${k} on ${repo}" check_repo_variable "$repo" "$k" "$v"
fi
done < "$REPO_VARS_CONF"
done < <(ini_list_sections "$REPO_VARS_CONF")
else
log_warn "repo_variables.conf not found — skipping variable checks"
WARN=$((WARN + 1))
fi
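# Illustrative shape of repo_variables.conf as parsed above (one INI section
# per repo; key=value pairs become Actions variables). Names/values here are
# examples, not taken from this repo:
#   [my-app]
#   RUNNER_TOOLCHAIN=go-node
#   LOCAL_REGISTRY=registry.local:5000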
# =========================================================================
# Check 5: KVM available on Unraid
# =========================================================================
log_info "--- Checking KVM availability ---"
check_kvm() {
ssh_exec "UNRAID" "test -c /dev/kvm"
}
run_warn_check "KVM device available on Unraid (/dev/kvm)" check_kvm
# ---------------------------------------------------------------------------
# Summary
# ---------------------------------------------------------------------------
printf '\n'
TOTAL=$((PASS + FAIL + WARN))
log_info "Results: ${PASS} passed, ${FAIL} failed, ${WARN} warnings (out of ${TOTAL})"
if [[ $FAIL -gt 0 ]]; then
log_error "Some checks failed — review above"
exit 1
fi
log_success "Phase 11 post-check complete"

phase11_teardown.sh Executable file


@@ -0,0 +1,185 @@
#!/usr/bin/env bash
set -euo pipefail
# =============================================================================
# phase11_teardown.sh — Remove custom runner infrastructure deployed by phase 11
# Reverses:
# 1. Repository variables
# 2. Docker runners (per-repo + shared emulator)
# 3. Shared macOS runner → restore original per-repo macOS runners
# 4. Toolchain images on Unraid
# =============================================================================
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
load_env
require_vars GITEA_ADMIN_TOKEN GITEA_INTERNAL_URL GITEA_ORG_NAME
phase_header 11 "Custom Runners — Teardown"
REPO_VARS_CONF="${SCRIPT_DIR}/repo_variables.conf"
AUTO_YES=false
for arg in "$@"; do
case "$arg" in
--yes|-y) AUTO_YES=true ;;
*) ;;
esac
done
if [[ "$AUTO_YES" != "true" ]]; then
log_warn "This will remove all phase 11 custom runners and repo variables."
printf 'Continue? [y/N] ' >&2
read -r confirm
if [[ "$confirm" != "y" ]] && [[ "$confirm" != "Y" ]]; then
log_info "Aborted"
exit 0
fi
fi
REMOVED=0
FAILED=0
# =========================================================================
# Step 1: Delete repository variables
# =========================================================================
log_step 1 "Removing repository variables"
if [[ -f "$REPO_VARS_CONF" ]]; then
while IFS= read -r repo; do
[[ -z "$repo" ]] && continue
log_info "--- Removing variables for repo: ${repo} ---"
# Parse keys from section
in_section=false
while IFS= read -r line; do
line="${line#"${line%%[![:space:]]*}"}"
line="${line%"${line##*[![:space:]]}"}"
[[ -z "$line" ]] && continue
[[ "$line" == \#* ]] && continue
if [[ "$line" =~ ^\[([^]]+)\] ]]; then
if [[ "${BASH_REMATCH[1]}" == "$repo" ]]; then
in_section=true
elif $in_section; then
break
fi
continue
fi
if $in_section && [[ "$line" =~ ^([^=]+)= ]]; then
k="${BASH_REMATCH[1]}"
k="${k#"${k%%[![:space:]]*}"}"
k="${k%"${k##*[![:space:]]}"}"
if gitea_api DELETE "/repos/${GITEA_ORG_NAME}/${repo}/actions/variables/${k}" >/dev/null 2>&1; then
log_success " Deleted ${k} from ${repo}"
REMOVED=$((REMOVED + 1))
else
log_warn " Could not delete ${k} from ${repo} (may not exist)"
fi
fi
done < "$REPO_VARS_CONF"
done < <(ini_list_sections "$REPO_VARS_CONF")
else
log_info "repo_variables.conf not found — skipping"
fi
# =========================================================================
# Step 2: Remove Docker runners
# =========================================================================
log_step 2 "Removing Docker runners"
PHASE11_DOCKER_RUNNERS=(
unraid-go-node-1
unraid-go-node-2
unraid-go-node-3
unraid-jvm-slim-1
unraid-jvm-slim-2
unraid-android-emulator
)
for runner_name in "${PHASE11_DOCKER_RUNNERS[@]}"; do
log_info "Removing runner '${runner_name}'..."
if "${SCRIPT_DIR}/manage_runner.sh" remove --name "$runner_name" 2>/dev/null; then
log_success "Removed ${runner_name}"
REMOVED=$((REMOVED + 1))
else
log_warn "Could not remove ${runner_name} (may not exist)"
fi
done
# =========================================================================
# Step 3: Remove shared macOS runner, restore original per-repo runners
# =========================================================================
log_step 3 "Restoring original macOS runner configuration"
# Remove shared runner
if launchctl list 2>/dev/null | grep -q "com.gitea.runner.macbook-runner"; then
log_info "Removing shared macOS runner 'macbook-runner'..."
"${SCRIPT_DIR}/manage_runner.sh" remove --name macbook-runner 2>/dev/null || true
REMOVED=$((REMOVED + 1))
fi
# Note: original per-repo macOS runner sections were replaced in runners.conf
# during phase 11. They need to be re-added manually or by re-running
# configure_runners.sh. This teardown only cleans up deployed resources.
log_info "Note: original macOS runner sections (macbook-runner-periodvault,"
log_info " macbook-runner-intermittent-fasting-tracker) must be restored in"
log_info " runners.conf manually or via git checkout."
# =========================================================================
# Step 4: Remove toolchain images from Unraid
# =========================================================================
log_step 4 "Removing toolchain images from Unraid"
IMAGES_TO_REMOVE=(
"go-node-runner:latest"
"jvm-android-runner:slim"
"jvm-android-runner:full"
)
for img in "${IMAGES_TO_REMOVE[@]}"; do
if ssh_exec "UNRAID" "docker rmi '${img}' 2>/dev/null"; then
log_success "Removed image ${img}"
REMOVED=$((REMOVED + 1))
else
log_warn "Could not remove image ${img} (may not exist or may be in use)"
fi
done
# =========================================================================
# Step 5: Remove phase 11 runner sections from runners.conf
# =========================================================================
log_step 5 "Cleaning runners.conf"
RUNNERS_CONF="${SCRIPT_DIR}/runners.conf"
PHASE11_SECTIONS=(
unraid-go-node-1
unraid-go-node-2
unraid-go-node-3
unraid-jvm-slim-1
unraid-jvm-slim-2
unraid-android-emulator
macbook-runner
)
for section in "${PHASE11_SECTIONS[@]}"; do
if ini_list_sections "$RUNNERS_CONF" | grep -qx "$section" 2>/dev/null; then
ini_remove_section "$RUNNERS_CONF" "$section"
log_success "Removed [${section}] from runners.conf"
REMOVED=$((REMOVED + 1))
fi
done
# ---------------------------------------------------------------------------
# Summary
# ---------------------------------------------------------------------------
printf '\n'
log_info "Results: ${REMOVED} items removed, ${FAILED} failures"
if [[ $FAILED -gt 0 ]]; then
log_error "Some removals failed — check logs above"
exit 1
fi
log_success "Phase 11 teardown complete"

phase7_5_nginx_to_caddy.sh Executable file

@@ -0,0 +1,608 @@
#!/usr/bin/env bash
set -euo pipefail
# =============================================================================
# phase7_5_nginx_to_caddy.sh — One-time Nginx -> Caddy migration cutover helper
#
# Goals:
# - Serve both sintheus.com and privacyindesign.com hostnames from one Caddy
# - Keep public ingress HTTPS-only
# - Support canary-first rollout (default: tower.sintheus.com only)
# - Preserve current mixed backend schemes (http/https) unless strict mode is enabled
#
# Usage examples:
# ./phase7_5_nginx_to_caddy.sh
# ./phase7_5_nginx_to_caddy.sh --mode=full
# ./phase7_5_nginx_to_caddy.sh --mode=full --strict-backend-https
# ./phase7_5_nginx_to_caddy.sh --mode=canary --yes
# =============================================================================
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
AUTO_YES=false
MODE="canary" # canary|full
STRICT_BACKEND_HTTPS=false
# Reuse Unraid's existing Docker network.
UNRAID_DOCKER_NETWORK_NAME="br0"
usage() {
cat <<EOF
Usage: $(basename "$0") [options]
Options:
--mode=canary|full Rollout scope (default: canary)
--strict-backend-https Require all upstream backends to be https://
--yes, -y Skip confirmation prompts
--help, -h Show this help
EOF
}
for arg in "$@"; do
case "$arg" in
--mode=*) MODE="${arg#*=}" ;;
--strict-backend-https) STRICT_BACKEND_HTTPS=true ;;
--yes|-y) AUTO_YES=true ;;
--help|-h) usage; exit 0 ;;
*)
log_error "Unknown argument: $arg"
usage
exit 1
;;
esac
done
if [[ "$MODE" != "canary" && "$MODE" != "full" ]]; then
log_error "Invalid --mode '$MODE' (use: canary|full)"
exit 1
fi
confirm_action() {
local prompt="$1"
if [[ "$AUTO_YES" == "true" ]]; then
log_info "Auto-confirmed (--yes): ${prompt}"
return 0
fi
printf '%s' "$prompt"
read -r confirm
[[ "$confirm" =~ ^[Yy]$ ]]
}
load_env
require_vars UNRAID_IP UNRAID_SSH_USER UNRAID_COMPOSE_DIR \
UNRAID_CADDY_IP UNRAID_GITEA_IP \
GITEA_DOMAIN CADDY_DATA_PATH TLS_MODE
if [[ "$TLS_MODE" == "cloudflare" ]]; then
require_vars CLOUDFLARE_API_TOKEN
elif [[ "$TLS_MODE" == "existing" ]]; then
require_vars SSL_CERT_PATH SSL_KEY_PATH
else
log_error "Invalid TLS_MODE='${TLS_MODE}' — must be 'cloudflare' or 'existing'"
exit 1
fi
phase_header "7.5" "Nginx to Caddy Migration (Multi-domain)"
# host|upstream|streaming(true/false)|body_limit|insecure_skip_verify(true/false)
FULL_HOST_MAP=(
"ai.sintheus.com|http://192.168.1.82:8181 http://192.168.1.83:8181|true|50MB|false"
"photos.sintheus.com|http://192.168.1.222:2283|false|50GB|false"
"fin.sintheus.com|http://192.168.1.233:8096|true||false"
"disk.sintheus.com|http://192.168.1.52:80|false|20GB|false"
"pi.sintheus.com|http://192.168.1.4:80|false||false"
"plex.sintheus.com|http://192.168.1.111:32400|true||false"
"sync.sintheus.com|http://192.168.1.119:8384|false||false"
"syno.sintheus.com|https://100.108.182.16:5001|false||true"
"tower.sintheus.com|https://192.168.1.82:443 https://192.168.1.83:443|false||true"
)
CANARY_HOST_MAP=(
"tower.sintheus.com|https://192.168.1.82:443 https://192.168.1.83:443|false||true"
)
GITEA_ENTRY="${GITEA_DOMAIN}|http://${UNRAID_GITEA_IP}:3000|false||false"
CADDY_COMPOSE_DIR="${UNRAID_COMPOSE_DIR}/caddy"
SELECTED_HOST_MAP=()
if [[ "$MODE" == "canary" ]]; then
SELECTED_HOST_MAP=( "${CANARY_HOST_MAP[@]}" )
else
SELECTED_HOST_MAP=( "${FULL_HOST_MAP[@]}" "$GITEA_ENTRY" )
fi
validate_backend_tls_policy() {
local -a non_tls_entries=()
local entry host upstream
for entry in "${SELECTED_HOST_MAP[@]}"; do
IFS='|' read -r host upstream _ <<< "$entry"
if [[ "$upstream" != https://* ]]; then
non_tls_entries+=( "${host} -> ${upstream}" )
fi
done
if [[ "${#non_tls_entries[@]}" -eq 0 ]]; then
log_success "All selected backends are HTTPS"
return 0
fi
if [[ "$STRICT_BACKEND_HTTPS" == "true" ]]; then
log_error "Strict backend HTTPS is enabled, but these entries are not HTTPS:"
printf '%s\n' "${non_tls_entries[@]}" | sed 's/^/ - /' >&2
return 1
fi
log_warn "Using mixed backend schemes (allowed):"
printf '%s\n' "${non_tls_entries[@]}" | sed 's/^/ - /' >&2
}
emit_site_block() {
local outfile="$1" host="$2" upstream="$3" streaming="$4" body_limit="$5" skip_verify="$6"
{
echo "${host} {"
if [[ "$TLS_MODE" == "existing" ]]; then
echo " tls ${SSL_CERT_PATH} ${SSL_KEY_PATH}"
fi
echo " import common_security"
echo
if [[ -n "$body_limit" ]]; then
echo " request_body {"
echo " max_size ${body_limit}"
echo " }"
echo
fi
echo " reverse_proxy ${upstream} {"
if [[ "$streaming" == "true" ]]; then
echo " import proxy_streaming"
else
echo " import proxy_headers"
fi
if [[ "$skip_verify" == "true" && "$upstream" == https://* ]]; then
echo " transport http {"
echo " tls_insecure_skip_verify"
echo " }"
fi
echo " }"
echo "}"
echo
} >> "$outfile"
}
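# Example block emitted for the photos.sintheus.com entry above
# (with TLS_MODE=cloudflare, so no per-site tls line is written):
#   photos.sintheus.com {
#       import common_security
#
#       request_body {
#           max_size 50GB
#       }
#
#       reverse_proxy http://192.168.1.222:2283 {
#           import proxy_headers
#       }
#   }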
emit_site_block_standalone() {
local outfile="$1" host="$2" upstream="$3" streaming="$4" body_limit="$5" skip_verify="$6"
{
echo "${host} {"
if [[ "$TLS_MODE" == "cloudflare" ]]; then
echo " tls {"
echo " dns cloudflare {env.CF_API_TOKEN}"
echo " }"
elif [[ "$TLS_MODE" == "existing" ]]; then
echo " tls ${SSL_CERT_PATH} ${SSL_KEY_PATH}"
fi
echo " encode zstd gzip"
echo " header {"
echo " Strict-Transport-Security \"max-age=31536000; includeSubDomains; preload\""
echo " X-Content-Type-Options \"nosniff\""
echo " X-Frame-Options \"SAMEORIGIN\""
echo " Referrer-Policy \"strict-origin-when-cross-origin\""
echo " -Server"
echo " }"
echo
if [[ -n "$body_limit" ]]; then
echo " request_body {"
echo " max_size ${body_limit}"
echo " }"
echo
fi
echo " reverse_proxy ${upstream} {"
echo " header_up Host {host}"
echo " header_up X-Real-IP {remote_host}"
if [[ "$streaming" == "true" ]]; then
echo " flush_interval -1"
fi
if [[ "$skip_verify" == "true" && "$upstream" == https://* ]]; then
echo " transport http {"
echo " tls_insecure_skip_verify"
echo " }"
fi
echo " }"
echo "}"
echo
} >> "$outfile"
}
caddy_block_extract_for_host() {
local infile="$1" host="$2" outfile="$3"
awk -v host="$host" '
function trim(s) {
sub(/^[[:space:]]+/, "", s)
sub(/[[:space:]]+$/, "", s)
return s
}
function brace_delta(s, tmp, opens, closes) {
tmp = s
opens = gsub(/\{/, "{", tmp)
closes = gsub(/\}/, "}", tmp)
return opens - closes
}
function has_host(labels, i, n, parts, token) {
labels = trim(labels)
gsub(/[[:space:]]+/, "", labels)
n = split(labels, parts, ",")
for (i = 1; i <= n; i++) {
token = parts[i]
if (token == host) {
return 1
}
}
return 0
}
BEGIN {
depth = 0
in_target = 0
target_depth = 0
found = 0
}
{
line = $0
if (!in_target) {
if (depth == 0) {
pos = index(line, "{")
if (pos > 0) {
labels = substr(line, 1, pos - 1)
if (trim(labels) != "" && labels !~ /^[[:space:]]*\(/ && has_host(labels)) {
in_target = 1
target_depth = brace_delta(line)
found = 1
print line
next
}
}
}
depth += brace_delta(line)
} else {
target_depth += brace_delta(line)
print line
if (target_depth <= 0) {
in_target = 0
}
}
}
END {
if (!found) {
exit 1
}
}
' "$infile" > "$outfile"
}
caddy_block_remove_for_host() {
local infile="$1" host="$2" outfile="$3"
awk -v host="$host" '
function trim(s) {
sub(/^[[:space:]]+/, "", s)
sub(/[[:space:]]+$/, "", s)
return s
}
function brace_delta(s, tmp, opens, closes) {
tmp = s
opens = gsub(/\{/, "{", tmp)
closes = gsub(/\}/, "}", tmp)
return opens - closes
}
function has_host(labels, i, n, parts, token) {
labels = trim(labels)
gsub(/[[:space:]]+/, "", labels)
n = split(labels, parts, ",")
for (i = 1; i <= n; i++) {
token = parts[i]
if (token == host) {
return 1
}
}
return 0
}
BEGIN {
depth = 0
in_target = 0
target_depth = 0
removed = 0
}
{
line = $0
if (in_target) {
target_depth += brace_delta(line)
if (target_depth <= 0) {
in_target = 0
}
next
}
if (depth == 0) {
pos = index(line, "{")
if (pos > 0) {
labels = substr(line, 1, pos - 1)
if (trim(labels) != "" && labels !~ /^[[:space:]]*\(/ && has_host(labels)) {
in_target = 1
target_depth = brace_delta(line)
removed = 1
next
}
}
}
print line
depth += brace_delta(line)
}
END {
if (!removed) {
exit 1
}
}
' "$infile" > "$outfile"
}
upsert_site_block_by_host() {
local infile="$1" entry="$2" outfile="$3"
local host upstream streaming body_limit skip_verify
IFS='|' read -r host upstream streaming body_limit skip_verify <<< "$entry"
local tmp_new_block tmp_old_block tmp_without_old tmp_combined
tmp_new_block=$(mktemp)
tmp_old_block=$(mktemp)
tmp_without_old=$(mktemp)
tmp_combined=$(mktemp)
: > "$tmp_new_block"
emit_site_block_standalone "$tmp_new_block" "$host" "$upstream" "$streaming" "$body_limit" "$skip_verify"
if caddy_block_extract_for_host "$infile" "$host" "$tmp_old_block"; then
log_info "Domain '${host}' already exists; replacing existing site block"
log_info "Previous block for '${host}':"
sed 's/^/ | /' "$tmp_old_block" >&2
caddy_block_remove_for_host "$infile" "$host" "$tmp_without_old"
cat "$tmp_without_old" "$tmp_new_block" > "$tmp_combined"
else
log_info "Domain '${host}' not present; adding new site block"
cat "$infile" "$tmp_new_block" > "$tmp_combined"
fi
mv "$tmp_combined" "$outfile"
rm -f "$tmp_new_block" "$tmp_old_block" "$tmp_without_old"
}
build_caddyfile() {
local outfile="$1"
local entry host upstream streaming body_limit skip_verify
: > "$outfile"
{
echo "# Generated by phase7_5_nginx_to_caddy.sh"
echo "# Mode: ${MODE}"
echo
echo "{"
if [[ "$TLS_MODE" == "cloudflare" ]]; then
echo " acme_dns cloudflare {env.CF_API_TOKEN}"
fi
echo " servers {"
echo " trusted_proxies static private_ranges"
echo " protocols h1 h2 h3"
echo " }"
echo "}"
echo
echo "(common_security) {"
echo " encode zstd gzip"
echo " header {"
echo " Strict-Transport-Security \"max-age=31536000; includeSubDomains; preload\""
echo " X-Content-Type-Options \"nosniff\""
echo " X-Frame-Options \"SAMEORIGIN\""
echo " Referrer-Policy \"strict-origin-when-cross-origin\""
echo " -Server"
echo " }"
echo "}"
echo
echo "(proxy_headers) {"
echo " header_up Host {host}"
echo " header_up X-Real-IP {remote_host}"
echo "}"
echo
echo "(proxy_streaming) {"
echo " import proxy_headers"
echo " flush_interval -1"
echo "}"
echo
} >> "$outfile"
for entry in "${SELECTED_HOST_MAP[@]}"; do
IFS='|' read -r host upstream streaming body_limit skip_verify <<< "$entry"
emit_site_block "$outfile" "$host" "$upstream" "$streaming" "$body_limit" "$skip_verify"
done
}
if ! validate_backend_tls_policy; then
exit 1
fi
log_step 1 "Creating Caddy data directories on Unraid..."
ssh_exec UNRAID "mkdir -p '${CADDY_DATA_PATH}/data' '${CADDY_DATA_PATH}/config'"
log_success "Caddy data directories ensured"
log_step 2 "Deploying Caddy docker-compose on Unraid..."
if ! ssh_exec UNRAID "docker network inspect '${UNRAID_DOCKER_NETWORK_NAME}'" &>/dev/null; then
log_error "Required Docker network '${UNRAID_DOCKER_NETWORK_NAME}' not found on Unraid"
exit 1
fi
ssh_exec UNRAID "mkdir -p '${CADDY_COMPOSE_DIR}'"
TMP_COMPOSE=$(mktemp)
CADDY_CONTAINER_IP="${UNRAID_CADDY_IP}"
GITEA_NETWORK_NAME="${UNRAID_DOCKER_NETWORK_NAME}"
export CADDY_CONTAINER_IP CADDY_DATA_PATH GITEA_NETWORK_NAME
if [[ "$TLS_MODE" == "cloudflare" ]]; then
CADDY_ENV_VARS=" - CF_API_TOKEN=${CLOUDFLARE_API_TOKEN}"
CADDY_EXTRA_VOLUMES=""
else
CADDY_ENV_VARS=""
CADDY_EXTRA_VOLUMES=" - ${SSL_CERT_PATH}:${SSL_CERT_PATH}:ro
- ${SSL_KEY_PATH}:${SSL_KEY_PATH}:ro"
fi
export CADDY_ENV_VARS CADDY_EXTRA_VOLUMES
render_template "${SCRIPT_DIR}/templates/docker-compose-caddy.yml.tpl" "$TMP_COMPOSE" \
"\${CADDY_DATA_PATH} \${CADDY_CONTAINER_IP} \${CADDY_ENV_VARS} \${CADDY_EXTRA_VOLUMES} \${GITEA_NETWORK_NAME}"
if [[ -z "$CADDY_ENV_VARS" ]]; then
sed -i.bak '/^[[:space:]]*environment:$/d' "$TMP_COMPOSE"
rm -f "${TMP_COMPOSE}.bak"
fi
if [[ -z "$CADDY_EXTRA_VOLUMES" ]]; then
sed -i.bak -e :a -e '/^\n*$/{$d;N;ba' -e '}' "$TMP_COMPOSE"
rm -f "${TMP_COMPOSE}.bak"
fi
scp_to UNRAID "$TMP_COMPOSE" "${CADDY_COMPOSE_DIR}/docker-compose.yml"
rm -f "$TMP_COMPOSE"
log_success "Caddy compose deployed to ${CADDY_COMPOSE_DIR}"
log_step 3 "Generating and deploying multi-domain Caddyfile..."
TMP_CADDYFILE=$(mktemp)
HAS_EXISTING_CADDYFILE=false
if ssh_exec UNRAID "test -f '${CADDY_DATA_PATH}/Caddyfile'" 2>/dev/null; then
HAS_EXISTING_CADDYFILE=true
BACKUP_PATH="${CADDY_DATA_PATH}/Caddyfile.pre_phase7_5.$(date +%Y%m%d%H%M%S)"
ssh_exec UNRAID "cp '${CADDY_DATA_PATH}/Caddyfile' '${BACKUP_PATH}'"
log_info "Backed up previous Caddyfile to ${BACKUP_PATH}"
fi
if [[ "$MODE" == "canary" && "$HAS_EXISTING_CADDYFILE" == "true" ]]; then
TMP_WORK=$(mktemp)
TMP_NEXT=$(mktemp)
cp /dev/null "$TMP_NEXT"
ssh_exec UNRAID "cat '${CADDY_DATA_PATH}/Caddyfile'" > "$TMP_WORK"
for entry in "${CANARY_HOST_MAP[@]}"; do
upsert_site_block_by_host "$TMP_WORK" "$entry" "$TMP_NEXT"
mv "$TMP_NEXT" "$TMP_WORK"
TMP_NEXT=$(mktemp)
done
cp "$TMP_WORK" "$TMP_CADDYFILE"
rm -f "$TMP_WORK" "$TMP_NEXT"
log_info "Canary mode: existing routes preserved; canary domains upserted"
else
build_caddyfile "$TMP_CADDYFILE"
fi
scp_to UNRAID "$TMP_CADDYFILE" "${CADDY_DATA_PATH}/Caddyfile"
rm -f "$TMP_CADDYFILE"
log_success "Caddyfile deployed"
log_step 4 "Starting/reloading Caddy container..."
ssh_exec UNRAID "cd '${CADDY_COMPOSE_DIR}' && docker compose up -d 2>/dev/null || docker-compose up -d"
if ! ssh_exec UNRAID "docker exec caddy caddy reload --config /etc/caddy/Caddyfile --adapter caddyfile" &>/dev/null; then
log_warn "Hot reload failed; restarting caddy container"
ssh_exec UNRAID "docker restart caddy" >/dev/null
fi
log_success "Caddy container is running with new config"
probe_http_code_ok() {
local code="$1" role="$2"
if [[ "$role" == "gitea_api" ]]; then
[[ "$code" == "200" ]]
return
fi
[[ "$code" =~ ^(2|3)[0-9][0-9]$ || "$code" == "401" || "$code" == "403" ]]
}
probe_host_via_caddy() {
local host="$1" upstream="$2" role="$3"
local max_attempts="${4:-5}" wait_secs="${5:-5}"
local path="/"
if [[ "$role" == "gitea_api" ]]; then
path="/api/v1/version"
fi
local tmp_body http_code attempt
tmp_body=$(mktemp)
for (( attempt=1; attempt<=max_attempts; attempt++ )); do
http_code=$(curl -sk --resolve "${host}:443:${UNRAID_CADDY_IP}" \
-o "$tmp_body" -w "%{http_code}" "https://${host}${path}" 2>/dev/null) || true
[[ -z "$http_code" ]] && http_code="000"
if probe_http_code_ok "$http_code" "$role"; then
log_success "Probe passed: ${host} (HTTP ${http_code})"
rm -f "$tmp_body"
return 0
fi
if [[ $attempt -lt $max_attempts ]]; then
log_info "Probe attempt ${attempt}/${max_attempts} for ${host} (HTTP ${http_code}) — retrying in ${wait_secs}s..."
sleep "$wait_secs"
fi
done
log_error "Probe failed: ${host} (HTTP ${http_code}) after ${max_attempts} attempts"
if [[ "$http_code" == "502" || "$http_code" == "503" || "$http_code" == "504" || "$http_code" == "000" ]]; then
local upstream_probe_raw upstream_code
# Diagnose via the first upstream only; multi-upstream entries are space-separated
# and would otherwise be passed to curl as a single malformed URL.
upstream_probe_raw=$(ssh_exec UNRAID "curl -sk -o /dev/null -w '%{http_code}' '${upstream%% *}' || true" 2>/dev/null || true)
upstream_code=$(printf '%s' "$upstream_probe_raw" | tr -cd '0-9')
if [[ -z "$upstream_code" ]]; then
upstream_code="000"
elif [[ ${#upstream_code} -gt 3 ]]; then
upstream_code="${upstream_code:$((${#upstream_code} - 3))}"
fi
log_warn "Upstream check from Unraid: ${upstream} -> HTTP ${upstream_code}"
fi
rm -f "$tmp_body"
return 1
}
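# Manual equivalent of one probe attempt (no DNS change needed; the IP comes
# from UNRAID_CADDY_IP in .env):
#   curl -sk --resolve "tower.sintheus.com:443:${UNRAID_CADDY_IP}" \
#       -o /dev/null -w '%{http_code}\n' https://tower.sintheus.com/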
if [[ "$MODE" == "canary" ]]; then
if confirm_action "Run canary HTTPS probe for tower.sintheus.com via Caddy IP now? [y/N] "; then
if ! probe_host_via_caddy "tower.sintheus.com" "https://192.168.1.82:443" "generic"; then
log_error "Canary probe failed for tower.sintheus.com via ${UNRAID_CADDY_IP}"
exit 1
fi
fi
else
log_step 5 "Probing all configured hosts via Caddy IP..."
PROBE_FAILS=0
for entry in "${SELECTED_HOST_MAP[@]}"; do
IFS='|' read -r host upstream _ <<< "$entry"
role="generic"
if [[ "$host" == "$GITEA_DOMAIN" ]]; then
role="gitea_api"
fi
if ! probe_host_via_caddy "$host" "$upstream" "$role"; then
PROBE_FAILS=$((PROBE_FAILS + 1))
fi
done
if [[ "$PROBE_FAILS" -gt 0 ]]; then
log_error "One or more probes failed (${PROBE_FAILS})"
exit 1
fi
fi
printf '\n'
log_success "Phase 7.5 complete (${MODE} mode)"
log_info "Next (no DNS change required): verify via curl --resolve and browser checks"
log_info "LAN-only routing option: split-DNS/hosts override to ${UNRAID_CADDY_IP}"
log_info "Public routing option: point public DNS to WAN ingress (not 192.168.x.x) and forward 443 to Caddy"
if [[ "$MODE" == "canary" ]]; then
log_info "Canary host is tower.sintheus.com; existing routes were preserved"
else
log_info "Full host map is now active in Caddy"
fi

phase8_5_nginx_to_caddy.sh Executable file

@@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail
# Backward-compat wrapper: phase 8.5 was renamed to phase 7.5.
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
echo "[WARN] phase8_5_nginx_to_caddy.sh was renamed to phase7_5_nginx_to_caddy.sh" >&2
exec "${SCRIPT_DIR}/phase7_5_nginx_to_caddy.sh" "$@"


@@ -16,6 +16,31 @@ set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
ALLOW_DIRECT_CHECKS=false
usage() {
cat <<EOF
Usage: $(basename "$0") [options]
Options:
--allow-direct-checks Allow fallback to direct Caddy-IP checks via --resolve
(LAN/split-DNS staging mode; not a full public cutover)
--help, -h Show this help
EOF
}
for arg in "$@"; do
case "$arg" in
--allow-direct-checks) ALLOW_DIRECT_CHECKS=true ;;
--help|-h) usage; exit 0 ;;
*)
log_error "Unknown argument: $arg"
usage
exit 1
;;
esac
done
load_env
require_vars UNRAID_IP UNRAID_SSH_USER UNRAID_GITEA_IP UNRAID_CADDY_IP \
UNRAID_COMPOSE_DIR \
@@ -25,7 +50,7 @@ require_vars UNRAID_IP UNRAID_SSH_USER UNRAID_GITEA_IP UNRAID_CADDY_IP \
REPO_NAMES
if [[ "$TLS_MODE" == "cloudflare" ]]; then
require_vars CLOUDFLARE_API_TOKEN PUBLIC_DNS_TARGET_IP
elif [[ "$TLS_MODE" == "existing" ]]; then
require_vars SSL_CERT_PATH SSL_KEY_PATH
else
@@ -42,6 +67,232 @@ PHASE8_STATE_FILE="${PHASE8_STATE_DIR}/phase8_github_repo_state.json"
UNRAID_DOCKER_NETWORK_NAME="br0"
# Compose files live in a centralized project directory.
CADDY_COMPOSE_DIR="${UNRAID_COMPOSE_DIR}/caddy"
PHASE8_GITEA_ROUTE_BEGIN="# BEGIN_PHASE8_GITEA_ROUTE"
PHASE8_GITEA_ROUTE_END="# END_PHASE8_GITEA_ROUTE"
PUBLIC_DNS_TARGET_IP="${PUBLIC_DNS_TARGET_IP:-}"
PHASE8_ALLOW_PRIVATE_DNS_TARGET="${PHASE8_ALLOW_PRIVATE_DNS_TARGET:-false}"
if ! validate_bool "${PHASE8_ALLOW_PRIVATE_DNS_TARGET}"; then
log_error "Invalid PHASE8_ALLOW_PRIVATE_DNS_TARGET='${PHASE8_ALLOW_PRIVATE_DNS_TARGET}' (must be true or false)"
exit 1
fi
wait_for_https_public() {
local host="$1" max_secs="${2:-30}"
local elapsed=0
while [[ $elapsed -lt $max_secs ]]; do
if curl -sf -o /dev/null "https://${host}/api/v1/version" 2>/dev/null; then
return 0
fi
sleep 2
elapsed=$((elapsed + 2))
done
return 1
}
wait_for_https_via_resolve() {
local host="$1" ip="$2" max_secs="${3:-300}"
local elapsed=0
log_info "Waiting for HTTPS via direct Caddy path (--resolve ${host}:443:${ip})..."
while [[ $elapsed -lt $max_secs ]]; do
if curl -skf --resolve "${host}:443:${ip}" "https://${host}/api/v1/version" >/dev/null 2>&1; then
log_success "HTTPS reachable via Caddy IP (after ${elapsed}s)"
return 0
fi
sleep 2
elapsed=$((elapsed + 2))
done
log_error "Timeout waiting for HTTPS via --resolve (${host} -> ${ip}) after ${max_secs}s"
if ssh_exec UNRAID "docker ps --format '{{.Names}}' | grep -qx 'caddy'" >/dev/null 2>&1; then
log_warn "Recent Caddy logs (tail 80):"
ssh_exec UNRAID "docker logs --tail 80 caddy 2>&1" || true
fi
return 1
}
check_unraid_gitea_backend() {
local raw code
raw=$(ssh_exec UNRAID "curl -sS -o /dev/null -w '%{http_code}' 'http://${UNRAID_GITEA_IP}:3000/api/v1/version' || true" 2>/dev/null || true)
code=$(printf '%s' "$raw" | tr -cd '0-9')
if [[ -z "$code" ]]; then
code="000"
elif [[ ${#code} -gt 3 ]]; then
code="${code:$((${#code} - 3))}"
fi
if [[ "$code" == "200" ]]; then
log_success "Unraid -> Gitea backend API reachable (HTTP 200)"
return 0
fi
log_error "Unraid -> Gitea backend API check failed (HTTP ${code}) at http://${UNRAID_GITEA_IP}:3000/api/v1/version"
return 1
}
is_private_ipv4() {
local ip="$1"
[[ "$ip" =~ ^10\. ]] || \
[[ "$ip" =~ ^192\.168\. ]] || \
[[ "$ip" =~ ^172\.(1[6-9]|2[0-9]|3[0-1])\. ]]
}
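# Examples (RFC 1918 ranges only; note that CGNAT/Tailscale 100.x addresses,
# like the syno.sintheus.com upstream, do not match):
#   is_private_ipv4 192.168.1.82   # true
#   is_private_ipv4 172.20.0.1     # true
#   is_private_ipv4 8.8.8.8        # false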
cloudflare_api_call() {
local method="$1" path="$2" data="${3:-}"
local -a args=(
curl -sS
-X "$method"
-H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}"
-H "Content-Type: application/json"
"https://api.cloudflare.com/client/v4${path}"
)
if [[ -n "$data" ]]; then
args+=(-d "$data")
fi
"${args[@]}"
}
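# Example call (mirrors the zone lookup below; example.com is a placeholder):
#   resp=$(cloudflare_api_call GET "/zones?name=example.com&status=active")
#   jq -r '.result[0].id // empty' <<< "$resp"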
ensure_cloudflare_dns_for_gitea() {
local host="$1" target_ip="$2" zone_id zone_name
local allow_private="${PHASE8_ALLOW_PRIVATE_DNS_TARGET}"
if [[ -z "$target_ip" ]]; then
log_error "PUBLIC_DNS_TARGET_IP is not set"
log_error "Set PUBLIC_DNS_TARGET_IP to your public ingress IP for ${host}"
log_error "For LAN-only/split-DNS use, also set PHASE8_ALLOW_PRIVATE_DNS_TARGET=true"
return 1
fi
if ! validate_ip "$target_ip"; then
log_error "Invalid PUBLIC_DNS_TARGET_IP='${target_ip}'"
log_error "Set PUBLIC_DNS_TARGET_IP in .env to the IP that should answer ${host}"
return 1
fi
zone_name="${host#*.}"
if [[ "$zone_name" == "$host" ]]; then
log_error "GITEA_DOMAIN='${host}' is not a valid FQDN for Cloudflare zone detection"
return 1
fi
if is_private_ipv4 "$target_ip"; then
if [[ "$allow_private" != "true" ]]; then
log_error "Refusing private DNS target ${target_ip} for Cloudflare public DNS"
log_error "Set PUBLIC_DNS_TARGET_IP to public ingress IP, or set PHASE8_ALLOW_PRIVATE_DNS_TARGET=true for LAN-only split-DNS"
return 1
fi
log_warn "Using private DNS target ${target_ip} because PHASE8_ALLOW_PRIVATE_DNS_TARGET=true"
fi
local zone_resp zone_err
zone_resp=$(cloudflare_api_call GET "/zones?name=${zone_name}&status=active")
if [[ "$(jq -r '.success // false' <<< "$zone_resp")" != "true" ]]; then
zone_err=$(jq -r '(.errors // []) | map(.message // tostring) | join("; ")' <<< "$zone_resp")
log_error "Cloudflare zone lookup failed for ${zone_name}: ${zone_err:-unknown error}"
return 1
fi
zone_id=$(jq -r '.result[0].id // empty' <<< "$zone_resp")
if [[ -z "$zone_id" ]]; then
log_error "Cloudflare zone not found or not accessible for ${zone_name}"
return 1
fi
local record_resp record_err record_count record_id old_ip
record_resp=$(cloudflare_api_call GET "/zones/${zone_id}/dns_records?type=A&name=${host}")
if [[ "$(jq -r '.success // false' <<< "$record_resp")" != "true" ]]; then
record_err=$(jq -r '(.errors // []) | map(.message // tostring) | join("; ")' <<< "$record_resp")
log_error "Cloudflare DNS query failed for ${host}: ${record_err:-unknown error}"
return 1
fi
record_count=$(jq -r '.result | length' <<< "$record_resp")
if [[ "$record_count" -eq 0 ]]; then
local create_payload create_resp create_err
create_payload=$(jq -n \
--arg type "A" \
--arg name "$host" \
--arg content "$target_ip" \
--argjson ttl 120 \
--argjson proxied false \
'{type:$type, name:$name, content:$content, ttl:$ttl, proxied:$proxied}')
create_resp=$(cloudflare_api_call POST "/zones/${zone_id}/dns_records" "$create_payload")
if [[ "$(jq -r '.success // false' <<< "$create_resp")" != "true" ]]; then
create_err=$(jq -r '(.errors // []) | map(.message // tostring) | join("; ")' <<< "$create_resp")
log_error "Failed to create Cloudflare A record ${host} -> ${target_ip}: ${create_err:-unknown error}"
return 1
fi
log_success "Created Cloudflare A record: ${host} -> ${target_ip}"
return 0
fi
record_id=$(jq -r '.result[0].id // empty' <<< "$record_resp")
old_ip=$(jq -r '.result[0].content // empty' <<< "$record_resp")
if [[ -n "$old_ip" && "$old_ip" == "$target_ip" ]]; then
log_info "Cloudflare A record already correct: ${host} -> ${target_ip}"
return 0
fi
local update_payload update_resp update_err
update_payload=$(jq -n \
--arg type "A" \
--arg name "$host" \
--arg content "$target_ip" \
--argjson ttl 120 \
--argjson proxied false \
'{type:$type, name:$name, content:$content, ttl:$ttl, proxied:$proxied}')
update_resp=$(cloudflare_api_call PUT "/zones/${zone_id}/dns_records/${record_id}" "$update_payload")
if [[ "$(jq -r '.success // false' <<< "$update_resp")" != "true" ]]; then
update_err=$(jq -r '(.errors // []) | map(.message // tostring) | join("; ")' <<< "$update_resp")
log_error "Failed to update Cloudflare A record ${host}: ${update_err:-unknown error}"
return 1
fi
log_info "Updated Cloudflare A record: ${host}"
log_info " old: ${old_ip:-<empty>}"
log_info " new: ${target_ip}"
return 0
}
caddyfile_has_domain_block() {
local file="$1" domain="$2"
awk -v domain="$domain" '
function trim(s) {
sub(/^[[:space:]]+/, "", s)
sub(/[[:space:]]+$/, "", s)
return s
}
function matches_domain(label, dom, wild_suffix, dot_pos) {
if (label == dom) return 1
# Wildcard match: *.example.com covers sub.example.com
if (substr(label, 1, 2) == "*.") {
wild_suffix = substr(label, 2)
dot_pos = index(dom, ".")
if (dot_pos > 0 && substr(dom, dot_pos) == wild_suffix) return 1
}
return 0
}
{
line = $0
if (line ~ /^[[:space:]]*#/) next
pos = index(line, "{")
if (pos <= 0) next
labels = trim(substr(line, 1, pos - 1))
if (labels == "" || labels ~ /^\(/) next
gsub(/[[:space:]]+/, "", labels)
n = split(labels, parts, ",")
for (i = 1; i <= n; i++) {
if (matches_domain(parts[i], domain)) {
found = 1
}
}
}
END {
exit(found ? 0 : 1)
}
' "$file"
}
# ---------------------------------------------------------------------------
# Helper: persist original GitHub repo settings for teardown symmetry
@@ -145,22 +396,53 @@ fi
# Step 2: Render + deploy Caddyfile
# ---------------------------------------------------------------------------
log_step 2 "Deploying Caddyfile..."

GITEA_CONTAINER_IP="${UNRAID_GITEA_IP}"
export GITEA_CONTAINER_IP GITEA_DOMAIN CADDY_DOMAIN

# Build TLS block based on TLS_MODE
if [[ "$TLS_MODE" == "cloudflare" ]]; then
    TLS_BLOCK="    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }"
else
    TLS_BLOCK="    tls ${SSL_CERT_PATH} ${SSL_KEY_PATH}"
fi
export TLS_BLOCK

if ssh_exec UNRAID "test -f '${CADDY_DATA_PATH}/Caddyfile'" 2>/dev/null; then
    TMP_EXISTING=$(mktemp)
    TMP_UPDATED=$(mktemp)
    TMP_ROUTE_BLOCK=$(mktemp)
    ssh_exec UNRAID "cat '${CADDY_DATA_PATH}/Caddyfile'" > "$TMP_EXISTING"

    if caddyfile_has_domain_block "$TMP_EXISTING" "$GITEA_DOMAIN"; then
        log_info "Caddyfile already has a route for ${GITEA_DOMAIN} — preserving existing file"
    else
        log_warn "Caddyfile exists but has no explicit route for ${GITEA_DOMAIN}"
        log_info "Appending managed Gitea route block"
        {
            echo
            echo "${PHASE8_GITEA_ROUTE_BEGIN}"
            echo "${GITEA_DOMAIN} {"
            printf '%s\n' "$TLS_BLOCK"
            echo
            echo "    reverse_proxy ${GITEA_CONTAINER_IP}:3000"
            echo "}"
            echo "${PHASE8_GITEA_ROUTE_END}"
            echo
        } > "$TMP_ROUTE_BLOCK"
        # Remove a stale managed block (if present), then append the refreshed block.
        sed "/^${PHASE8_GITEA_ROUTE_BEGIN}\$/,/^${PHASE8_GITEA_ROUTE_END}\$/d" "$TMP_EXISTING" > "$TMP_UPDATED"
        cat "$TMP_UPDATED" "$TMP_ROUTE_BLOCK" > "${TMP_UPDATED}.final"
        scp_to UNRAID "${TMP_UPDATED}.final" "${CADDY_DATA_PATH}/Caddyfile"
        log_success "Appended managed Gitea route to existing Caddyfile"
    fi
    rm -f "$TMP_EXISTING" "$TMP_UPDATED" "$TMP_ROUTE_BLOCK" "${TMP_UPDATED}.final"
else
    TMPFILE=$(mktemp)
    render_template "${SCRIPT_DIR}/templates/Caddyfile.tpl" "$TMPFILE" \
        "\${CADDY_DOMAIN} \${GITEA_DOMAIN} \${TLS_BLOCK} \${GITEA_CONTAINER_IP}"
@@ -221,23 +503,56 @@ fi
log_step 4 "Starting Caddy container..."
CONTAINER_STATUS=$(ssh_exec UNRAID "docker ps --filter name=caddy --format '{{.Status}}'" 2>/dev/null || true)
if [[ "$CONTAINER_STATUS" == *"Up"* ]]; then
    log_info "Caddy container already running"
    log_info "Reloading Caddy config from /etc/caddy/Caddyfile"
    if ssh_exec UNRAID "docker exec caddy caddy reload --config /etc/caddy/Caddyfile --adapter caddyfile" >/dev/null 2>&1; then
        log_success "Caddy config reloaded"
    else
        log_warn "Caddy reload failed; restarting caddy container"
        ssh_exec UNRAID "docker restart caddy >/dev/null"
        log_success "Caddy container restarted"
    fi
else
    ssh_exec UNRAID "cd '${CADDY_COMPOSE_DIR}' && docker compose up -d 2>/dev/null || docker-compose up -d"
    if ssh_exec UNRAID "docker exec caddy caddy reload --config /etc/caddy/Caddyfile --adapter caddyfile" >/dev/null 2>&1; then
        log_success "Caddy container started and config loaded"
    else
        log_success "Caddy container started"
    fi
fi
# ---------------------------------------------------------------------------
# Step 5: Ensure DNS points Gitea domain to target ingress IP
# ---------------------------------------------------------------------------
log_step 5 "Ensuring DNS for ${GITEA_DOMAIN}..."
if [[ "$TLS_MODE" == "cloudflare" ]]; then
    ensure_cloudflare_dns_for_gitea "${GITEA_DOMAIN}" "${PUBLIC_DNS_TARGET_IP}"
else
    log_info "TLS_MODE=${TLS_MODE}; skipping Cloudflare DNS automation"
fi

# ---------------------------------------------------------------------------
# Step 6: Wait for HTTPS to work
# Caddy auto-obtains certs — poll until HTTPS responds.
# ---------------------------------------------------------------------------
log_step 6 "Waiting for HTTPS (Caddy auto-provisions cert)..."
check_unraid_gitea_backend
if wait_for_https_public "${GITEA_DOMAIN}" 60; then
    log_success "HTTPS verified through current domain routing — https://${GITEA_DOMAIN} works"
else
    log_warn "Public-domain routing to Caddy is not ready yet"
    if [[ "$ALLOW_DIRECT_CHECKS" == "true" ]]; then
        wait_for_https_via_resolve "${GITEA_DOMAIN}" "${UNRAID_CADDY_IP}" 300
        log_warn "Proceeding with direct-only HTTPS validation (--allow-direct-checks)"
    else
        log_error "Refusing to continue cutover without public HTTPS reachability"
        log_error "Fix DNS/ingress routing and rerun Phase 8, or use --allow-direct-checks for staging only"
        exit 1
    fi
fi

# ---------------------------------------------------------------------------
# Step 7: Mark GitHub repos as offsite backup only
# Updates description + homepage to indicate Gitea is primary.
# Disables wiki and Pages to avoid unnecessary resource usage.
# Does NOT archive — archived repos reject pushes, which would break
@@ -245,7 +560,7 @@ log_success "HTTPS verified — https://${GITEA_DOMAIN} works"
# Persists original mutable settings to a local state file for teardown.
# GitHub Actions already disabled in Phase 6 Step D.
# ---------------------------------------------------------------------------
log_step 7 "Marking GitHub repos as offsite backup..."
init_phase8_state_store

GITHUB_REPO_UPDATE_FAILURES=0
@@ -279,10 +594,11 @@ for repo in "${REPOS[@]}"; do
        --arg homepage "https://${GITEA_DOMAIN}/${GITEA_ORG_NAME}/${repo}" \
        '{description: $description, homepage: $homepage, has_wiki: false, has_projects: false}')

    if PATCH_OUT=$(github_api PATCH "/repos/${GITHUB_USERNAME}/${repo}" "$UPDATE_PAYLOAD" 2>&1); then
        log_success "Marked GitHub repo as mirror: ${repo}"
    else
        log_error "Failed to update GitHub repo: ${repo}"
        log_error "GitHub API: $(printf '%s' "$PATCH_OUT" | tail -n 1)"
        GITHUB_REPO_UPDATE_FAILURES=$((GITHUB_REPO_UPDATE_FAILURES + 1))
    fi


@@ -15,8 +15,33 @@ set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
ALLOW_DIRECT_CHECKS=false
usage() {
cat <<EOF
Usage: $(basename "$0") [options]
Options:
--allow-direct-checks Allow fallback to direct Caddy-IP checks via --resolve
(LAN/split-DNS staging mode; not a full public cutover check)
--help, -h Show this help
EOF
}
for arg in "$@"; do
case "$arg" in
--allow-direct-checks) ALLOW_DIRECT_CHECKS=true ;;
--help|-h) usage; exit 0 ;;
*)
log_error "Unknown argument: $arg"
usage
exit 1
;;
esac
done
load_env
require_vars GITEA_DOMAIN UNRAID_CADDY_IP GITEA_ADMIN_TOKEN GITEA_ORG_NAME \
    GITHUB_USERNAME GITHUB_TOKEN \
    REPO_NAMES
@@ -37,16 +62,50 @@ run_check() {
    fi
}
ACCESS_MODE="public"
if ! curl -sf -o /dev/null "https://${GITEA_DOMAIN}/api/v1/version" 2>/dev/null; then
log_warn "Public routing to ${GITEA_DOMAIN} not reachable from control plane"
if [[ "$ALLOW_DIRECT_CHECKS" == "true" ]]; then
ACCESS_MODE="direct"
log_warn "Using direct Caddy-IP checks via --resolve (${UNRAID_CADDY_IP})"
else
log_error "Public HTTPS check failed; this is not a complete Phase 8 validation"
log_error "Fix DNS/ingress routing and rerun, or use --allow-direct-checks for staging-only checks"
exit 1
fi
else
log_info "Using public-domain checks for ${GITEA_DOMAIN}"
fi
curl_https() {
if [[ "$ACCESS_MODE" == "direct" ]]; then
curl -sk --resolve "${GITEA_DOMAIN}:443:${UNRAID_CADDY_IP}" "$@"
else
curl -s "$@"
fi
}
curl_http() {
if [[ "$ACCESS_MODE" == "direct" ]]; then
curl -s --resolve "${GITEA_DOMAIN}:80:${UNRAID_CADDY_IP}" "$@"
else
curl -s "$@"
fi
}
# Check 1: HTTPS works
# shellcheck disable=SC2329
check_https_version() {
    curl_https -f -o /dev/null "https://${GITEA_DOMAIN}/api/v1/version"
}
run_check "HTTPS returns 200 at https://${GITEA_DOMAIN}" check_https_version

# Check 2: HTTP redirects to HTTPS (301 or 308)
# shellcheck disable=SC2329
check_redirect() {
    local http_code
    http_code=$(curl_http -I -o /dev/null -w "%{http_code}" "http://${GITEA_DOMAIN}/")
    [[ "$http_code" == "301" || "$http_code" == "308" ]]
}
run_check "HTTP → HTTPS redirect (301/308)" check_redirect
@@ -54,17 +113,29 @@ run_check "HTTP → HTTPS redirect (301)" check_redirect
# shellcheck disable=SC2329
check_ssl_cert() {
    # Verify openssl can connect and a certificate with a parseable issuer is presented
    local connect_target
    if [[ "$ACCESS_MODE" == "direct" ]]; then
        connect_target="${UNRAID_CADDY_IP}:443"
    else
        connect_target="${GITEA_DOMAIN}:443"
    fi
    local issuer
    issuer=$(echo | openssl s_client -connect "${connect_target}" -servername "${GITEA_DOMAIN}" 2>/dev/null | openssl x509 -noout -issuer 2>/dev/null || echo "")
    # A non-empty issuer means the TLS handshake succeeded and the cert parsed
    [[ -n "$issuer" ]]
}
run_check "SSL certificate is valid" check_ssl_cert
# Check 4: All repos accessible via HTTPS
# shellcheck disable=SC2329
check_repo_access() {
    local repo="$1"
    curl_https -f -o /dev/null -H "Authorization: token ${GITEA_ADMIN_TOKEN}" \
        "https://${GITEA_DOMAIN}/api/v1/repos/${GITEA_ORG_NAME}/${repo}"
}

for repo in "${REPOS[@]}"; do
    run_check "Repo ${repo} accessible at https://${GITEA_DOMAIN}/${GITEA_ORG_NAME}/${repo}" \
        check_repo_access "$repo"
done

# Check 5: GitHub repos are marked as offsite backup

repo_variables.conf Normal file

@@ -0,0 +1,14 @@
# =============================================================================
# repo_variables.conf — Gitea Actions Repository Variables (INI format)
# Generated from GitHub repo settings. Edit as needed.
# Used by phase11_custom_runners.sh to set per-repo CI dispatch variables.
# See repo_variables.conf.example for field reference.
# =============================================================================
[augur]
CI_RUNS_ON = ["self-hosted","Linux","X64"]
[periodvault]
CI_RUNS_ON = ["self-hosted","Linux","X64"]
CI_RUNS_ON_MACOS = ["self-hosted","macOS","ARM64"]
CI_RUNS_ON_ANDROID = ["self-hosted","Linux","X64","android-emulator"]


@@ -0,0 +1,20 @@
# =============================================================================
# repo_variables.conf — Gitea Actions Repository Variables (INI format)
# Copy to repo_variables.conf and edit.
# Used by phase11_custom_runners.sh to set per-repo CI dispatch variables.
# =============================================================================
#
# Each [section] = Gitea repository name (must exist in GITEA_ORG_NAME).
# Keys = variable names. Values = literal string set via Gitea API.
# Workflows access these as ${{ vars.VARIABLE_NAME }}.
#
# Common pattern: repos use fromJSON(vars.CI_RUNS_ON || '["ubuntu-latest"]')
# in runs-on to dynamically select runners.
#[my-go-repo]
#CI_RUNS_ON = ["self-hosted","Linux","X64"]
#[my-mobile-repo]
#CI_RUNS_ON = ["self-hosted","Linux","X64"]
#CI_RUNS_ON_MACOS = ["self-hosted","macOS","ARM64"]
#CI_RUNS_ON_ANDROID = ["self-hosted","Linux","X64","android-emulator"]


@@ -3,11 +3,11 @@ set -euo pipefail
# =============================================================================
# run_all.sh — Orchestrate the full Gitea migration pipeline
# Runs: setup → preflight → phase 1-11 (each with post-check) sequentially.
# Stops on first failure, prints summary of what completed.
#
# Usage:
#   ./run_all.sh                    # Full run: setup + preflight + phases 1-11
#   ./run_all.sh --skip-setup       # Skip setup scripts, start at preflight
#   ./run_all.sh --start-from=3     # Run preflight, then start at phase 3
#   ./run_all.sh --skip-setup --start-from=5
@@ -28,10 +28,12 @@ require_local_os "Darwin" "run_all.sh must run from macOS (the control plane)"
SKIP_SETUP=false
START_FROM=0
START_FROM_SET=false
ALLOW_DIRECT_CHECKS=false

for arg in "$@"; do
    case "$arg" in
        --skip-setup) SKIP_SETUP=true ;;
        --allow-direct-checks) ALLOW_DIRECT_CHECKS=true ;;
        --dry-run)
            exec "${SCRIPT_DIR}/post-migration-check.sh"
            ;;
@@ -39,11 +41,11 @@ for arg in "$@"; do
            START_FROM="${arg#*=}"
            START_FROM_SET=true
            if ! [[ "$START_FROM" =~ ^[0-9]+$ ]]; then
                log_error "--start-from must be a number (1-11)"
                exit 1
            fi
            if [[ "$START_FROM" -lt 1 ]] || [[ "$START_FROM" -gt 11 ]]; then
                log_error "--start-from must be between 1 and 11"
                exit 1
            fi
            ;;
@@ -54,13 +56,16 @@ Usage: $(basename "$0") [options]
Options:
  --skip-setup            Skip configure_env + machine setup, start at preflight
  --start-from=N          Skip phases before N (still runs preflight)
  --allow-direct-checks   Pass --allow-direct-checks to Phase 8 scripts
                          (LAN/split-DNS staging mode)
  --dry-run               Run read-only infrastructure check (no mutations)
  --help                  Show this help

Examples:
  $(basename "$0")                        Full run
  $(basename "$0") --skip-setup           Skip setup, start at preflight
  $(basename "$0") --start-from=3         Run preflight, then phases 3-11
  $(basename "$0") --allow-direct-checks  LAN mode: use direct Caddy-IP checks
  $(basename "$0") --dry-run              Check current state without changing anything
EOF
        exit 0 ;;
@@ -157,7 +162,7 @@ else
fi

# ---------------------------------------------------------------------------
# Phases 1-11 — run sequentially, each followed by its post-check
# The phase scripts are the "do" step, post-checks verify success.
# ---------------------------------------------------------------------------
PHASES=(
@@ -170,6 +175,8 @@ PHASES=(
    "7|Phase 7: Branch Protection|phase7_branch_protection.sh|phase7_post_check.sh"
    "8|Phase 8: Cutover|phase8_cutover.sh|phase8_post_check.sh"
    "9|Phase 9: Security|phase9_security.sh|phase9_post_check.sh"
    "10|Phase 10: Local Repo Cutover|phase10_local_repo_cutover.sh|phase10_post_check.sh"
    "11|Phase 11: Custom Runners|phase11_custom_runners.sh|phase11_post_check.sh"
)
for phase_entry in "${PHASES[@]}"; do
@@ -181,8 +188,14 @@ for phase_entry in "${PHASES[@]}"; do
        continue
    fi

    # Phase 8 scripts accept --allow-direct-checks for LAN/split-DNS setups.
    if [[ "$phase_num" -eq 8 ]] && [[ "$ALLOW_DIRECT_CHECKS" == "true" ]]; then
        run_step "$phase_name" "$phase_script" --allow-direct-checks
        run_step "${phase_name} — post-check" "$post_check" --allow-direct-checks
    else
        run_step "$phase_name" "$phase_script"
        run_step "${phase_name} — post-check" "$post_check"
    fi
done
# ---------------------------------------------------------------------------


@@ -55,6 +55,11 @@
#   (starts at login, no sudo needed).
#   Ignored for docker runners.
#
# container_options — Extra Docker flags for act_runner job containers.
#   Passed to the container.options field in act_runner config.
#   e.g. "--device=/dev/kvm" for KVM passthrough.
#   Empty = no extra flags. Ignored for native runners.
#
# STARTER ENTRIES (uncomment and edit):

#[unraid-runner]


@@ -65,7 +65,7 @@ get_env_val() {
# Prompt function
# ---------------------------------------------------------------------------
# Base prompt count (56 fixed + 3 TLS conditional slots — repo/DB prompts added dynamically) # Base prompt count (56 fixed + 3 TLS conditional slots — repo/DB prompts added dynamically)
TOTAL_PROMPTS=61
CURRENT_PROMPT=0
LAST_SECTION=""
@@ -374,11 +374,13 @@ prompt_var "CADDY_DATA_PATH" "Absolute path on host for Caddy data"
# Conditional TLS prompts
if [[ "$COLLECTED_TLS_MODE" == "cloudflare" ]]; then
    prompt_var "CLOUDFLARE_API_TOKEN" "Cloudflare API token (Zone:DNS:Edit)" nonempty "" "TLS / REVERSE PROXY"
    prompt_var "PUBLIC_DNS_TARGET_IP" "Public DNS target IP for GITEA_DOMAIN" ip "" "TLS / REVERSE PROXY"
    prompt_var "PHASE8_ALLOW_PRIVATE_DNS_TARGET" "Allow private RFC1918 DNS target (LAN-only/split-DNS)" bool "false" "TLS / REVERSE PROXY"
    # Skip cert path prompts but still count them for progress
    CURRENT_PROMPT=$((CURRENT_PROMPT + 2))
else
    # Skip the three cloudflare-only prompts but still count them for progress
    CURRENT_PROMPT=$((CURRENT_PROMPT + 3))
    prompt_var "SSL_CERT_PATH" "Absolute path to SSL cert" path "" "TLS / REVERSE PROXY"
    prompt_var "SSL_KEY_PATH" "Absolute path to SSL key" path "" "TLS / REVERSE PROXY"
fi


@@ -3,6 +3,7 @@
## Pre-cutover

- [ ] `nginx -T` snapshot captured (`output/nginx-full.conf`)
- [ ] Generated Caddyfile reviewed
- [ ] `Caddyfile.recommended` reviewed/adapted for your domains
- [ ] `conversion-warnings.txt` reviewed and resolved for canary site
- [ ] `validate_caddy.sh` passes
- [ ] DNS TTL lowered for canary domain


@@ -0,0 +1,130 @@
# Recommended Caddy baseline for the current homelab reverse-proxy estate.
# Source upstreams were derived from setup/nginx-to-caddy/oldconfig/*.conf.
#
# If your public suffix changes (for example sintheus.com -> privacyindesign.com),
# update the hostnames below before deployment.
{
# DNS-01 certificates through Cloudflare.
# Requires CF_API_TOKEN in Caddy runtime environment.
acme_dns cloudflare {env.CF_API_TOKEN}
# Trust private-range proxy hops in LAN environments.
servers {
trusted_proxies static private_ranges
protocols h1 h2 h3
}
}
(common_security) {
encode zstd gzip
header {
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
X-Content-Type-Options "nosniff"
X-Frame-Options "SAMEORIGIN"
Referrer-Policy "strict-origin-when-cross-origin"
-Server
}
}
(proxy_headers) {
# Keep Nginx parity for backends that consume Host and X-Real-IP.
header_up Host {host}
header_up X-Real-IP {remote_host}
}
(proxy_streaming) {
import proxy_headers
# Flush immediately for streaming/log-tail/websocket-heavy UIs.
flush_interval -1
}
ai.sintheus.com {
import common_security
request_body {
max_size 50MB
}
reverse_proxy http://192.168.1.82:8181 {
import proxy_streaming
}
}
photos.sintheus.com {
import common_security
request_body {
max_size 50GB
}
reverse_proxy http://192.168.1.222:2283 {
import proxy_headers
}
}
fin.sintheus.com {
import common_security
reverse_proxy http://192.168.1.233:8096 {
import proxy_streaming
}
}
disk.sintheus.com {
import common_security
request_body {
max_size 20GB
}
reverse_proxy http://192.168.1.52:80 {
import proxy_headers
}
}
pi.sintheus.com {
import common_security
reverse_proxy http://192.168.1.4:80 {
import proxy_headers
}
}
plex.sintheus.com {
import common_security
reverse_proxy http://192.168.1.111:32400 {
import proxy_streaming
}
}
sync.sintheus.com {
import common_security
reverse_proxy http://192.168.1.119:8384 {
import proxy_headers
}
}
syno.sintheus.com {
import common_security
reverse_proxy https://100.108.182.16:5001 {
import proxy_headers
transport http {
tls_insecure_skip_verify
}
}
}
tower.sintheus.com {
import common_security
reverse_proxy https://192.168.1.82:443 {
import proxy_headers
transport http {
tls_insecure_skip_verify
}
}
}


@@ -13,6 +13,8 @@ This module is intentionally conservative:
  - SSH into a host and collect `nginx -T`, `/etc/nginx` tarball, and a quick inventory summary.
- `nginx_to_caddy.sh`
  - Convert basic Nginx server blocks into a generated Caddyfile.
- `Caddyfile.recommended`
  - Hardened baseline config (security headers, sensible body limits, streaming behavior).
- `validate_caddy.sh`
  - Run `caddy fmt`, `caddy adapt`, and `caddy validate` on the generated Caddyfile.
@@ -24,6 +26,7 @@ cd setup/nginx-to-caddy
./extract_nginx_inventory.sh --host=<host> --user=<user> --port=22 --yes
./nginx_to_caddy.sh --input=./output/nginx-full.conf --output=./output/Caddyfile.generated --tls-mode=cloudflare --yes
./validate_caddy.sh --config=./output/Caddyfile.generated --docker
./validate_caddy.sh --config=./Caddyfile.recommended --docker
```

## Conversion Scope


@@ -51,7 +51,23 @@ If local `caddy` is installed:
./validate_caddy.sh --config=./output/Caddyfile.generated
```

## 4) Use the recommended baseline

This toolkit now includes a hardened baseline at:

- `setup/nginx-to-caddy/Caddyfile.recommended`

Use it when you want a production-style config instead of a raw 1:1 conversion. You can either:

1. use it directly (if hostnames/upstreams already match your environment), or
2. copy its common snippets and service patterns into your live Caddyfile.

Validate it before deployment:
```bash
./validate_caddy.sh --config=./Caddyfile.recommended --docker
```
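Before copying site blocks between configs, it can help to list which hostnames a Caddyfile actually serves. A small helper sketch (not part of the toolkit; it assumes one site label per line, as in `Caddyfile.recommended`):

```shell
# list_caddy_sites FILE — print top-level site labels from a Caddyfile.
# Skips the global options block ({ ... }) and named snippets ((name) { ... }).
list_caddy_sites() {
    awk '
        /^[^[:space:]].*\{[[:space:]]*$/ {
            label = $0
            sub(/[[:space:]]*\{[[:space:]]*$/, "", label)  # drop the trailing "{"
            if (label != "" && label !~ /^\(/) print label
        }
    ' "$1"
}
```

For example, `list_caddy_sites ./Caddyfile.recommended` prints one hostname per line, which is convenient to diff against your DNS zone before cutover.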
## 5) Canary migration (recommended)
Migrate one low-risk subdomain first:

1. Copy only one site block from generated Caddyfile to your live Caddy config.
@@ -62,7 +78,7 @@ Migrate one low-risk subdomain first:
   - API/websocket calls work
4. Keep Nginx serving all other subdomains.
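The spot checks in step 3 can be scripted. A minimal sketch, not part of the toolkit — the domain argument is a placeholder, and the accepted redirect codes assume either Caddy's default 308 or the 301 most Nginx configs send:

```shell
# is_https_redirect CODE — true when CODE is an HTTP->HTTPS redirect status.
# Caddy redirects port 80 with 308 by default; Nginx setups commonly return 301.
is_https_redirect() {
    [[ "$1" == "301" || "$1" == "308" ]]
}

# canary_check DOMAIN — probe a migrated canary domain (needs network access).
canary_check() {
    local domain="$1" code
    curl -sf -o /dev/null "https://${domain}/" || { echo "FAIL: HTTPS page load"; return 1; }
    code=$(curl -s -o /dev/null -w '%{http_code}' "http://${domain}/")
    is_https_redirect "$code" || { echo "FAIL: HTTP returned ${code}, not a redirect"; return 1; }
    echo "PASS: ${domain}"
}
```

For example, `canary_check sync.example.com` after repointing that one record.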
## 6) Full migration after canary success

When the canary is stable:

1. Add remaining site blocks.
@@ -70,14 +86,14 @@ When the canary is stable:
3. Keep Nginx config snapshots for rollback.
4. Decommission Nginx only after monitoring period.
## 7) Rollback plan

If a site fails after cutover:

1. Repoint affected DNS entry back to Nginx endpoint.
2. Restore previous Nginx server block.
3. Investigate conversion warnings for that block.
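For step 1, it is worth confirming that resolvers actually see the repointed record before calling the rollback done. A sketch, assuming `dig` is installed; the IP below is a placeholder:

```shell
# dns_points_at DOMAIN IP — true when the first A answer for DOMAIN equals IP.
dns_points_at() {
    local domain="$1" expected_ip="$2" resolved
    resolved=$(dig +short A "$domain" | head -n 1)
    [[ -n "$resolved" && "$resolved" == "$expected_ip" ]]
}

# wait_for_rollback_dns DOMAIN IP — poll up to ~60s for the record to flip back.
wait_for_rollback_dns() {
    local domain="$1" nginx_ip="$2" i
    for i in $(seq 1 12); do
        dns_points_at "$domain" "$nginx_ip" && return 0
        sleep 5
    done
    return 1
}
```

A low DNS TTL on the canary domain (see the pre-cutover checklist) keeps this wait short.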
## 8) Domain/TLS note for your current setup

You confirmed the domain is `privacyindesign.com`.
@@ -86,7 +102,7 @@ If you use `TLS_MODE=cloudflare` with Caddy, ensure:
- Cloudflare token has DNS edit on the same zone.
- DNS records point to the Caddy ingress path you intend (direct or via edge proxy).

## 9) Suggested next step for Phase 8

Given your current repo config:
- keep Phase 8 Caddy focused on `source.privacyindesign.com`
View File
@@ -10,7 +10,7 @@
FORMAT_FILE=true
USE_DOCKER=false
DO_ADAPT=true
DO_VALIDATE=true
CADDY_IMAGE="slothcroissant/caddy-cloudflaredns:latest"

usage() {
  cat <<USAGE
@@ -24,7 +24,7 @@
  --no-adapt     Skip caddy adapt
  --no-validate  Skip caddy validate
  --docker       Use Docker image instead of local caddy binary
  --image=NAME   Docker image when --docker is used (default: slothcroissant/caddy-cloudflaredns:latest)
  --help, -h     Show help
USAGE
}
@@ -47,26 +47,43 @@
if [[ ! -f "$CONFIG_FILE" ]]; then
  exit 1
fi

CONFIG_FILE="$(cd "$(dirname "$CONFIG_FILE")" && pwd)/$(basename "$CONFIG_FILE")"

docker_env_args=()
if [[ "$USE_DOCKER" == "true" ]]; then
  require_cmd docker

  if [[ -n "${CF_API_TOKEN:-}" ]]; then
    docker_env_args+=( -e "CF_API_TOKEN=${CF_API_TOKEN}" )
  elif [[ -n "${CLOUDFLARE_API_TOKEN:-}" ]]; then
    docker_env_args+=( -e "CF_API_TOKEN=${CLOUDFLARE_API_TOKEN}" )
  fi

  run_docker_caddy() {
    if [[ "${#docker_env_args[@]}" -gt 0 ]]; then
      docker run --rm "${docker_env_args[@]}" "$@"
    else
      docker run --rm "$@"
    fi
  }

  if [[ "$FORMAT_FILE" == "true" ]]; then
    log_info "Formatting Caddyfile with Docker..."
    run_docker_caddy \
      -v "$CONFIG_FILE:/etc/caddy/Caddyfile" \
      "$CADDY_IMAGE" caddy fmt --overwrite /etc/caddy/Caddyfile
  fi

  if [[ "$DO_ADAPT" == "true" ]]; then
    log_info "Adapting Caddyfile (Docker)..."
    run_docker_caddy \
      -v "$CONFIG_FILE:/etc/caddy/Caddyfile:ro" \
      "$CADDY_IMAGE" caddy adapt --config /etc/caddy/Caddyfile --adapter caddyfile >/dev/null
  fi

  if [[ "$DO_VALIDATE" == "true" ]]; then
    log_info "Validating Caddyfile (Docker)..."
    run_docker_caddy \
      -v "$CONFIG_FILE:/etc/caddy/Caddyfile:ro" \
      "$CADDY_IMAGE" caddy validate --config /etc/caddy/Caddyfile --adapter caddyfile
  fi
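The branch inside `run_docker_caddy` exists because, under `set -u`, Bash versions before 4.4 treat expanding an empty array as an unbound-variable error. A self-contained sketch of the same pattern — the `run` function and `TOKEN` variable are illustrative, and `echo` stands in for the real `docker run`:

```shell
#!/usr/bin/env bash
set -euo pipefail

env_args=()
# Only forward the token if the caller exported one (TOKEN is illustrative).
if [[ -n "${TOKEN:-}" ]]; then
  env_args+=( -e "TOKEN=${TOKEN}" )
fi

run() {
  # Expanding an empty array under `set -u` errors on Bash < 4.4,
  # so branch on length instead of expanding env_args unconditionally.
  if [[ "${#env_args[@]}" -gt 0 ]]; then
    echo "docker run --rm ${env_args[*]} $*"
  else
    echo "docker run --rm $*"
  fi
}

run caddy validate
```

With `TOKEN` unset, the array stays empty and the plain command is printed; with `TOKEN` exported, the `-e` flag is spliced in before the remaining arguments.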
View File
@@ -3,11 +3,11 @@
set -euo pipefail

# =============================================================================
# teardown_all.sh — Tear down migration in reverse order
# Runs phase teardown scripts from phase 11 → phase 1 (or a subset).
#
# Usage:
#   ./teardown_all.sh              # Tear down everything (phases 11 → 1)
#   ./teardown_all.sh --through=5  # Tear down phases 11 → 5 (leave 1-4)
#   ./teardown_all.sh --yes        # Skip confirmation prompts
# =============================================================================
@@ -25,8 +25,8 @@
for arg in "$@"; do
  case "$arg" in
    --through=*)
      THROUGH="${arg#*=}"
      if ! [[ "$THROUGH" =~ ^[0-9]+$ ]] || [[ "$THROUGH" -lt 1 ]] || [[ "$THROUGH" -gt 11 ]]; then
        log_error "--through must be a number between 1 and 11"
        exit 1
      fi
      ;;
@@ -37,14 +37,14 @@
Usage: $(basename "$0") [options]

Options:
  --through=N  Only tear down phases N through 11 (default: 1 = everything)
  --cleanup    Also run setup/cleanup.sh to uninstall setup prerequisites
  --yes, -y    Skip all confirmation prompts
  --help       Show this help

Examples:
  $(basename "$0")              Tear down everything
  $(basename "$0") --through=5  Tear down phases 11-5, leave 1-4
  $(basename "$0") --cleanup    Full teardown + uninstall prerequisites
  $(basename "$0") --yes        Non-interactive teardown
EOF
@@ -58,9 +58,9 @@
done

# ---------------------------------------------------------------------------
if [[ "$AUTO_YES" == "false" ]]; then
  if [[ "$THROUGH" -eq 1 ]]; then
    log_warn "This will tear down ALL phases (11 → 1)."
  else
    log_warn "This will tear down phases 11 → ${THROUGH}."
  fi
  printf 'Are you sure? [y/N] '
  read -r confirm
@@ -70,9 +70,11 @@
  fi
fi

# Teardown scripts in reverse order (11 → 1)
# Each entry: phase_num|script_path
TEARDOWNS=(
  "11|phase11_teardown.sh"
  "10|phase10_teardown.sh"
  "9|phase9_teardown.sh"
  "8|phase8_teardown.sh"
  "7|phase7_teardown.sh"
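The loop that consumes these `phase_num|script_path` entries is outside this hunk, but in Bash such pairs are typically split with parameter expansion rather than `cut`. A minimal sketch (the `entries` array here is a stand-in for `TEARDOWNS`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Same entry format as TEARDOWNS: "phase_num|script_path".
entries=( "11|phase11_teardown.sh" "10|phase10_teardown.sh" )

for entry in "${entries[@]}"; do
  phase="${entry%%|*}"   # text before the first '|'
  script="${entry#*|}"   # text after the first '|'
  echo "phase ${phase}: ${script}"
done
```

`%%|*` strips the longest suffix starting at `|`, and `#*|` strips the shortest prefix ending at `|`, so each entry yields its phase number and script name without spawning a subprocess.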
View File
@@ -1,5 +1,5 @@
# act_runner configuration — rendered by manage_runner.sh
# Variables: RUNNER_NAME, RUNNER_LABELS_YAML, RUNNER_CAPACITY, RUNNER_CONTAINER_OPTIONS
# Deployed alongside docker-compose.yml (docker) or act_runner binary (native).

log:
@@ -22,7 +22,7 @@
cache:
container:
  network: ""        # Empty = use default Docker network.
  privileged: false  # Never run job containers as privileged.
  options: ${RUNNER_CONTAINER_OPTIONS}
  workdir_parent:
host:
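The commit message mentions KVM passthrough via `container_options`. Assuming manage_runner.sh substitutes the variable verbatim into this template, a rendered config for a runner that needs hardware virtualization (e.g. Android emulators) might read:

```yaml
container:
  # Rendered with RUNNER_CONTAINER_OPTIONS="--device=/dev/kvm" (assumed wiring);
  # exposes the host's KVM device to job containers without privileged mode.
  options: --device=/dev/kvm
```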
toggle_dns.sh (new executable file)
View File
@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -euo pipefail

# Toggle DNS between Pi-hole and Cloudflare on all active network services.
# Usage: ./toggle_dns.sh
# Requires sudo for networksetup.

PIHOLE="pi.sintheus.com"
CLOUDFLARE="1.1.1.1"

# Get all hardware network services (Wi-Fi, Ethernet, Thunderbolt, USB, etc.)
services=()
while IFS= read -r line; do
  [[ "$line" == *"*"* ]] && continue # skip disabled services
  services+=("$line")
done < <(networksetup -listallnetworkservices 2>/dev/null | tail -n +2)

if [[ ${#services[@]} -eq 0 ]]; then
  echo "No network services found"
  exit 1
fi

# Detect current mode from the first service that has a DNS server set
current_dns=""
for svc in "${services[@]}"; do
  dns=$(networksetup -getdnsservers "$svc" 2>/dev/null | head -1)
  if [[ "$dns" != *"aren't any"* ]] && [[ -n "$dns" ]]; then
    current_dns="$dns"
    break
  fi
done

if [[ "$current_dns" == "$CLOUDFLARE" ]]; then
  target="$PIHOLE"
  label="Pi-hole"
else
  target="$CLOUDFLARE"
  label="Cloudflare"
fi

echo "Switching all services to ${label} (${target})..."
for svc in "${services[@]}"; do
  sudo networksetup -setdnsservers "$svc" "$target"
  echo "  ${svc} → ${target}"
done

sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder 2>/dev/null || true

echo "DNS set to ${label} (${target})"