Compare commits: `7e34808d76...main` (12 commits)
Commits (SHA1 only; author and date columns were empty in this capture):

- `5a8a23ef69`
- `c2087d5087`
- `e624885bb9`
- `b799cb7970`
- `63f5bf6ea7`
- `14a5773a2d`
- `9224b91374`
- `b52d3187d9`
- `78376f0137`
- `96214654d0`
- `3c86890983`
- `d9fb5254cd`
**.env.example** (11 changed lines)
```diff
@@ -34,14 +34,14 @@ UNRAID_MACVLAN_PARENT= # Host network interface (e.g. br0, eth0)
 UNRAID_MACVLAN_SUBNET= # LAN subnet in CIDR (e.g. 192.168.1.0/24)
 UNRAID_MACVLAN_GATEWAY= # LAN gateway (e.g. 192.168.1.1)
 UNRAID_MACVLAN_IP_RANGE= # IP range for containers (e.g. 192.168.1.192/28 — 16 IPs)
-UNRAID_GITEA_IP= # Static LAN IP for Gitea container
+UNRAID_GITEA_IP= # Static LAN IP for Gitea container (macvlan — only reachable on the LAN; runners outside the LAN must use GITEA_DOMAIN instead)
 UNRAID_CADDY_IP= # Static LAN IP for Caddy container

 FEDORA_MACVLAN_PARENT= # Host network interface (e.g. eth0)
 FEDORA_MACVLAN_SUBNET= # LAN subnet in CIDR (e.g. 192.168.1.0/24)
 FEDORA_MACVLAN_GATEWAY= # LAN gateway (e.g. 192.168.1.1)
 FEDORA_MACVLAN_IP_RANGE= # IP range for containers (e.g. 192.168.1.208/28 — 16 IPs)
-FEDORA_GITEA_IP= # Static LAN IP for Gitea container
+FEDORA_GITEA_IP= # Static LAN IP for Gitea container (macvlan — only reachable on the LAN)


 # -----------------------------------------------------------------------------
@@ -96,6 +96,11 @@ LOCAL_REGISTRY= # Local registry prefix (e.g. registry.local:5
 # AUTO-POPULATED by phase3 scripts — do not fill manually:
 GITEA_RUNNER_REGISTRATION_TOKEN= # Retrieved from Gitea admin panel via API

+# Custom runner image build contexts (phase 11)
+# Absolute paths to directories containing Dockerfiles for custom runner images.
+GO_NODE_RUNNER_CONTEXT= # Path to Go + Node toolchain Dockerfile (e.g. /path/to/augur/infra/runners)
+JVM_ANDROID_RUNNER_CONTEXT= # Path to JDK + Android SDK toolchain Dockerfile (e.g. /path/to/periodvault/infra/runners)
+
 # -----------------------------------------------------------------------------
 # REPOSITORIES
@@ -124,6 +129,8 @@ TLS_MODE=cloudflare # TLS mode: "cloudflare" (DNS-01 via CF API) o
 CADDY_DOMAIN= # Wildcard cert base domain (e.g. privacyindesign.com → cert for *.privacyindesign.com)
 CADDY_DATA_PATH= # Absolute path on host for Caddy data (e.g. /mnt/nvme/caddy)
 CLOUDFLARE_API_TOKEN= # Cloudflare API token with Zone:DNS:Edit (only if TLS_MODE=cloudflare)
+PUBLIC_DNS_TARGET_IP= # Phase 8 Cloudflare A-record target for GITEA_DOMAIN (public ingress IP recommended)
+PHASE8_ALLOW_PRIVATE_DNS_TARGET=false # true only for LAN-only/split-DNS setups using private RFC1918 target IPs
 SSL_CERT_PATH= # Absolute path to SSL cert (only if TLS_MODE=existing)
 SSL_KEY_PATH= # Absolute path to SSL key (only if TLS_MODE=existing)
```
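The new `PHASE8_ALLOW_PRIVATE_DNS_TARGET` flag implies phase 8 checks whether the target IP is an RFC1918 private address. A minimal sketch of such a check (the helper name and glob patterns are illustrative, not the toolkit's actual code):

```shell
# Return 0 if an IPv4 address falls in a private RFC1918 range.
# Illustrative helper; the toolkit's real check may differ.
is_rfc1918() {
  local ip="$1"
  case "$ip" in
    10.*|192.168.*) return 0 ;;                       # 10.0.0.0/8, 192.168.0.0/16
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) return 0 ;;  # 172.16.0.0/12
    *) return 1 ;;
  esac
}

is_rfc1918 192.168.1.177 && echo private   # prints: private
is_rfc1918 203.0.113.10  || echo public    # prints: public
```

With a guard like this, phase 8 can refuse a private `PUBLIC_DNS_TARGET_IP` unless the override flag is set.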
**README.md** (11 changed lines)
```diff
@@ -55,6 +55,7 @@ The entire process is driven from a MacBook over SSH. Nothing is installed on th
 | 6 | `phase6_github_mirrors.sh` | Configure push mirrors from Gitea to GitHub, disable GitHub Actions |
 | 7 | `phase7_branch_protection.sh` | Apply branch protection rules to all repos |
 | 8 | `phase8_cutover.sh` | Deploy Caddy HTTPS reverse proxy (Cloudflare DNS-01 or existing certs), mark GitHub repos as mirrors |
+| 7.5 (optional) | `phase7_5_nginx_to_caddy.sh` | One-time multi-domain Nginx -> Caddy migration helper (canary/full), supports `sintheus.com` + `privacyindesign.com` in one Caddy |
 | 9 | `phase9_security.sh` | Deploy Semgrep + Trivy + Gitleaks security scanning workflows |

 Each phase has three scripts: the main script, a `_post_check.sh` that independently verifies success, and a `_teardown.sh` that cleanly reverses the phase.
@@ -96,6 +97,8 @@ gitea-migration/
 ├── run_all.sh                  # Full pipeline orchestration
 ├── post-migration-check.sh     # Read-only infrastructure state check
 ├── teardown_all.sh             # Reverse teardown (9 to 1)
+├── phase7_5_nginx_to_caddy.sh  # Optional one-time Nginx -> Caddy consolidation step
+├── TODO.md                     # Phase 7.5 migration context, backlog, and DoD
 ├── manage_runner.sh            # Dynamic runner add/remove/list
 ├── phase{1-9}_*.sh             # Main phase scripts
 ├── phase{1-9}_post_check.sh    # Verification scripts
@@ -192,6 +195,12 @@ Phases run strictly sequentially. Phase 4 could potentially import repos in para
 - The total migration time is dominated by network transfers, not script execution
 - Sequential execution produces readable, linear logs

+### Macvlan IPs are LAN-only — runners outside the network must use the public domain
+
+Gitea containers run on macvlan IPs (e.g., `UNRAID_GITEA_IP=192.168.1.177`). These IPs are only reachable from machines on the same LAN. If a runner (e.g., a MacBook) is outside the local network (coffee shop, VPN, mobile hotspot), it cannot reach the macvlan IP and will fail to poll for jobs or report results.
+
+**Fix**: Edit the runner's `.runner` file and change the `address` field from the LAN IP (`http://192.168.1.177:3000`) to the public domain (`https://YOUR_DOMAIN`). The public domain routes through Caddy (Phase 8) and works from anywhere. Restart the runner after changing the address.
+
 ### Docker socket mounted in runner containers

 Runner containers get `/var/run/docker.sock` mounted, giving them root-equivalent access to the host's Docker daemon. This is required for runners to spawn job containers but is a security concern for untrusted code. For a private instance with trusted users, this is the standard Gitea runner deployment.
@@ -228,7 +237,7 @@ When `TLS_MODE=cloudflare`, Caddy handles certificate renewal automatically via
 | MacBook | macOS, Homebrew, jq >= 1.6, curl >= 7.70, git >= 2.30, shellcheck >= 0.8, gh >= 2.0, bw >= 2.0 |
 | Unraid | Linux, Docker >= 20.0, docker-compose >= 2.0, jq >= 1.6, passwordless sudo for SSH user |
 | Fedora | Linux with dnf, Docker CE >= 20.0, docker-compose >= 2.0, jq >= 1.6, passwordless sudo for SSH user |
-| Network | MacBook can SSH to both servers, DNS A record pointing to Unraid (needed for Phase 8 TLS), Cloudflare API token (if using `TLS_MODE=cloudflare`) |
+| Network | MacBook can SSH to both servers; for `TLS_MODE=cloudflare`, provide `CLOUDFLARE_API_TOKEN` plus `PUBLIC_DNS_TARGET_IP` (public ingress IP recommended; private IP requires `PHASE8_ALLOW_PRIVATE_DNS_TARGET=true`) |

 ## Quick Start
```
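Phase 7.5's core job, turning a host-to-upstream map into a multi-domain Caddyfile, can be sketched in the toolkit's parallel-array bash style. The hosts, upstreams, and emitted syntax below are illustrative; the real script's templating surely differs:

```shell
# Sketch: emit one minimal Caddy v2 site block per mapped host.
# Host/upstream pairs mirror the phase 7.5 host map; all illustrative.
HOSTS=(ai.sintheus.com photos.sintheus.com plex.sintheus.com)
UPSTREAMS=(http://192.168.1.82:8181 http://192.168.1.222:2283 http://192.168.1.111:32400)

for i in "${!HOSTS[@]}"; do
  printf '%s {\n    reverse_proxy %s\n}\n\n' "${HOSTS[$i]}" "${UPSTREAMS[$i]}"
done
```

Parallel arrays keep the sketch bash 3.2 compatible, matching the style used elsewhere in the toolkit.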
**TODO.md** (new file, 118 lines)

```markdown
# TODO — Phase 7.5 Nginx -> Caddy Consolidation

## Why this exists

This file captures the decisions and migration context for the one-time "phase 7.5"
work so we do not lose reasoning between sessions.

## What happened so far

1. The original `phase8_cutover.sh` was designed for one wildcard zone
   (`*.${CADDY_DOMAIN}`), mainly for Gitea cutover.
2. The homelab currently has two active DNS zones in scope:
   - `sintheus.com` (legacy services behind Nginx)
   - `privacyindesign.com` (new Gitea public endpoint)
3. Decision made: run a one-time migration where a single Caddy instance serves
   both zones, then gradually retire Nginx.
4. Implemented: `phase7_5_nginx_to_caddy.sh` to generate/deploy a multi-domain
   Caddyfile and run canary/full rollout modes.

## Current design decisions

1. Public ingress should be HTTPS-only for all migrated hostnames.
2. Backend scheme is mixed for now:
   - Keep `http://` upstream where service does not yet have TLS.
   - Keep `https://` where already available.
3. End-to-end HTTPS is a target state, not an immediate requirement.
4. A strict toggle exists in phase 7.5:
   - `--strict-backend-https` fails if any upstream is `http://`.
5. Canary-first rollout:
   - first migration target is `tower.sintheus.com`.
6. Canary mode is additive:
   - preserves existing Caddy routes
   - updates only a managed canary block for `tower.sintheus.com`.

## Host map and backend TLS status

### Canary scope (default mode)

- `tower.sintheus.com -> https://192.168.1.82:443` (TLS backend; cert verify skipped)
- `${GITEA_DOMAIN} -> http://${UNRAID_GITEA_IP}:3000` (HTTP backend for now)

### Full migration scope

- `ai.sintheus.com -> http://192.168.1.82:8181`
- `photos.sintheus.com -> http://192.168.1.222:2283`
- `fin.sintheus.com -> http://192.168.1.233:8096`
- `disk.sintheus.com -> http://192.168.1.52:80`
- `pi.sintheus.com -> http://192.168.1.4:80`
- `plex.sintheus.com -> http://192.168.1.111:32400`
- `sync.sintheus.com -> http://192.168.1.119:8384`
- `syno.sintheus.com -> https://100.108.182.16:5001` (verify skipped)
- `tower.sintheus.com -> https://192.168.1.82:443` (verify skipped)
- `${GITEA_DOMAIN} -> http://${UNRAID_GITEA_IP}:3000`

## Definition of done (phase 7.5)

Phase 7.5 is done only when all are true:

1. Caddy is running on Unraid with generated multi-domain config.
2. Canary host `tower.sintheus.com` is reachable over HTTPS through Caddy.
3. Canary routing is proven by at least one path:
   - `curl --resolve` tests, or
   - split-DNS/hosts override, or
   - intentional DNS cutover.
4. Legacy Nginx remains available for non-migrated hosts during canary.
5. No critical regressions observed for at least 24 hours on canary traffic.

## Definition of done (final state after full migration)

1. All selected domains route to Caddy through the intended ingress path:
   - LAN-only: split-DNS/private resolution to Caddy, or
   - public: DNS to WAN ingress that forwards 443 to Caddy.
2. Caddy serves valid certificates for both zones.
3. Functional checks pass for each service (UI load, API, websocket/streaming where relevant).
4. Nginx is no longer on the request path for migrated domains.
5. Long-term target: all backends upgraded to `https://` and strict mode passes.

## What remains to happen

1. Run canary:
   - `./phase7_5_nginx_to_caddy.sh --mode=canary`
2. Route canary traffic to Caddy using one method:
   - `curl --resolve` for zero-DNS-change testing, or
   - split-DNS/private DNS, or
   - explicit DNS cutover if desired.
3. Observe errors/latency/app behavior for at least 24 hours.
4. If canary is clean, run full:
   - `./phase7_5_nginx_to_caddy.sh --mode=full`
5. Move remaining routes in batches (DNS or split-DNS, depending on ingress model).
6. Validate each app after each batch.
7. After everything is stable, plan Nginx retirement.
8. Later hardening pass:
   - enable TLS on each backend service one by one
   - flip each corresponding upstream to `https://`
   - finally run `--strict-backend-https` and require it to pass.

## Risks and why mixed backend HTTP is acceptable short-term

1. Risk: backend HTTP is unencrypted on LAN.
   - Mitigation: traffic stays on trusted local network, temporary state only.
2. Risk: if strict mode is enabled too early, rollout blocks.
   - Mitigation: keep strict mode off until backend TLS coverage improves.
3. Risk: moving all DNS at once can create broad outage.
   - Mitigation: canary-first and batch DNS cutover.

## Operational notes

1. If Caddyfile already exists, phase 7.5 backs it up as:
   - `${CADDY_DATA_PATH}/Caddyfile.pre_phase7_5.<timestamp>`
2. Compose stack path for Caddy:
   - `${UNRAID_COMPOSE_DIR}/caddy/docker-compose.yml`
3. Script does not change Cloudflare DNS records automatically.
   - DNS updates are intentional/manual to keep blast radius controlled.
4. Do not set public Cloudflare proxied records to private `192.168.x.x` addresses.
5. Canary upsert behavior is domain-aware:
   - if site block for the canary domain does not exist, it is added
   - if site block exists, it is replaced in-place
   - previous block content is printed in logs before replacement
```
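The managed canary block described in the operational notes might render roughly like this in standard Caddy v2 syntax. The marker comments and exact formatting are assumptions; only the host, the upstream, and the skipped certificate verification come from the host map above:

```
# phase7_5 managed canary block (marker comments illustrative)
tower.sintheus.com {
    reverse_proxy https://192.168.1.82:443 {
        transport http {
            tls_insecure_skip_verify
        }
    }
}
```

The domain-aware upsert then only needs to find and replace this block, leaving all other site blocks in the Caddyfile untouched.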
Additional documentation hunks (source file name not captured in this view):

````diff
@@ -31,8 +31,9 @@ Before running anything, confirm:

 DNS and TLS are only needed for Phase 8 (Caddy reverse proxy). You can set these up later:

-- A DNS A record for your Gitea domain pointing to `UNRAID_IP`
 - If using `TLS_MODE=cloudflare`: a Cloudflare API token with Zone:DNS:Edit permission
+- `PUBLIC_DNS_TARGET_IP` set to your ingress IP for `GITEA_DOMAIN` (public IP recommended)
+- If you intentionally use LAN-only split DNS with a private IP target, set `PHASE8_ALLOW_PRIVATE_DNS_TARGET=true`

 ### 2. Passwordless sudo on remote hosts

@@ -154,6 +155,23 @@ If you prefer to run each phase individually and inspect results:
 ./phase9_security.sh && ./phase9_post_check.sh
 ```

+### Optional Phase 7.5 (one-time Nginx -> Caddy migration)
+
+Use this only if you want one Caddy instance to serve both legacy and new domains.
+
+```bash
+# Canary first (default): tower.sintheus.com + Gitea domain
+./phase7_5_nginx_to_caddy.sh --mode=canary
+
+# Full host map cutover
+./phase7_5_nginx_to_caddy.sh --mode=full
+
+# Enforce strict end-to-end TLS for all upstreams
+./phase7_5_nginx_to_caddy.sh --mode=full --strict-backend-https
+```
+
+Detailed migration context, rationale, and next actions are tracked in `TODO.md`.
+
 ### Skip setup (already done)

 ```bash
@@ -162,7 +180,17 @@ If you prefer to run each phase individually and inspect results:

 ### What to verify when it's done

-After the full migration completes:
+After the full migration completes, run the post-migration check:
+
+```bash
+./post-migration-check.sh
+# or equivalently:
+./run_all.sh --dry-run
+```
+
+This probes all live infrastructure and reports the state of every phase — what's done, what's pending, and any errors. See [Post-Migration Check](#post-migration-check) below for details.
+
+You can also verify manually:

 1. **HTTPS access**: Open `https://YOUR_DOMAIN` in a browser — you should see the Gitea login page with a valid SSL certificate.
 2. **Repository content**: Log in as admin, navigate to your org, confirm all repos have commits, branches, and (if enabled) issues/labels.
@@ -215,6 +243,44 @@ When resuming from a later phase, Gitea is already running on ports 3000. Use:

 ---

+## Post-Migration Check
+
+A standalone read-only script that probes live infrastructure and reports the state of every migration phase. No mutations — safe to run at any time, before, during, or after migration.
+
+```bash
+./post-migration-check.sh
+# or:
+./run_all.sh --dry-run
+```
+
+### What it checks
+
+- **Connectivity**: SSH to Unraid/Fedora, Docker daemons, GitHub API token validity
+- **Phase 1-2**: Docker networks, compose files, app.ini, container health, admin auth, API tokens, organization
+- **Phase 3**: runners.conf, registration token, per-runner online/offline status
+- **Phase 4**: GitHub source repos accessible, Gitea repos migrated, Fedora mirrors active
+- **Phase 5**: Workflow directories present in Gitea repos
+- **Phase 6**: Push mirrors configured, GitHub Actions disabled
+- **Phase 7**: Branch protection rules with approval counts
+- **Phase 8**: DNS resolution, Caddy container, HTTPS end-to-end, TLS cert, GitHub `[MIRROR]` marking
+- **Phase 9**: Security scan workflows deployed
+
+### Output format
+
+Three states:
+
+| State | Meaning |
+|-------|---------|
+| `[DONE]` | Already exists/running — phase would skip this step |
+| `[TODO]` | Not done yet — phase would execute this step |
+| `[ERROR]` | Something is broken — needs attention |
+
+`[TODO]` is normal for phases you haven't run yet. Only `[ERROR]` indicates a problem.
+
+The script exits 0 if no errors, 1 if any `[ERROR]` found. A summary at the end shows per-phase counts.
+
+---
+
 ## Edge Cases

 ### GitHub API rate limit hit during migration
@@ -251,7 +317,7 @@ Then re-run Phase 4. Already-migrated repos will be skipped.

 **Symptom**: Preflight check 14 fails.

-**Fix**: Add or update your DNS A record. If using a local DNS server or `/etc/hosts`, ensure the record points to `UNRAID_IP`. DNS propagation can take minutes to hours.
+**Fix**: Phase 8 can auto-upsert the Cloudflare A record for `GITEA_DOMAIN` when `TLS_MODE=cloudflare`. Set `PUBLIC_DNS_TARGET_IP` first. Use a public ingress IP for public access. For LAN-only split DNS, set `PHASE8_ALLOW_PRIVATE_DNS_TARGET=true`.

 ### Caddy fails to start or obtain TLS certificate in Phase 8

@@ -298,6 +364,20 @@ This generates ed25519 keys on each host and distributes public keys to the othe

 **Fix**: Log out and back in to the Fedora machine (SSH session), then re-run the failed script. Adding a user to the `docker` group requires a new login session to take effect.

+### Runner cannot connect from outside the LAN
+
+**Symptom**: A runner (typically a MacBook) shows as offline or fails to connect when working outside the local network (e.g., coffee shop, VPN, mobile hotspot).
+
+**Cause**: Gitea containers use macvlan IPs (e.g., `192.168.1.177`) which are only reachable from machines on the same LAN. The runner's `.runner` file contains `"address": "http://192.168.1.177:3000"` which is unreachable from outside.
+
+**Fix**: Edit the runner's `.runner` file and change the `address` to the public domain:
+
+```json
+"address": "https://YOUR_DOMAIN"
+```
+
+Then restart the runner. The public domain routes through Caddy (Phase 8) and works from anywhere. No SSH tunnel needed.
+
 ### Runner shows as offline after deployment

 **Symptom**: Phase 3 post-check reports a runner as offline.
````
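The `.runner` address fix above can be scripted; a sketch using `jq` (already a toolkit prerequisite), demonstrated on a throwaway copy so nothing real is touched:

```shell
# Demonstrate the .runner address rewrite on a disposable copy.
# .runner is JSON, so jq can update the field safely in one pass.
# YOUR_DOMAIN is a placeholder, as in the README.
cd "$(mktemp -d)"
printf '%s\n' '{"id": 3, "address": "http://192.168.1.177:3000"}' > .runner
jq '.address = "https://YOUR_DOMAIN"' .runner > .runner.tmp && mv .runner.tmp .runner
cat .runner
```

Writing to a temp file and renaming avoids truncating `.runner` if `jq` fails mid-way.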
Environment validator arrays (source file name not captured in this view):

```diff
@@ -269,17 +269,17 @@ _ENV_VAR_TYPES=(
 )

 # Conditional variables — validated only when TLS_MODE matches.
-_ENV_CONDITIONAL_TLS_NAMES=(CLOUDFLARE_API_TOKEN SSL_CERT_PATH SSL_KEY_PATH)
-_ENV_CONDITIONAL_TLS_TYPES=(nonempty path path)
-_ENV_CONDITIONAL_TLS_WHEN=( cloudflare existing existing)
+_ENV_CONDITIONAL_TLS_NAMES=(CLOUDFLARE_API_TOKEN PUBLIC_DNS_TARGET_IP PHASE8_ALLOW_PRIVATE_DNS_TARGET SSL_CERT_PATH SSL_KEY_PATH)
+_ENV_CONDITIONAL_TLS_TYPES=(nonempty ip bool path path)
+_ENV_CONDITIONAL_TLS_WHEN=( cloudflare cloudflare cloudflare existing existing)

 # Conditional variables — validated only when GITEA_DB_TYPE is NOT sqlite3.
 _ENV_CONDITIONAL_DB_NAMES=(GITEA_DB_PORT GITEA_DB_NAME GITEA_DB_USER GITEA_DB_PASSWD)
 _ENV_CONDITIONAL_DB_TYPES=(port nonempty nonempty password)

 # Optional variables — validated only when non-empty (never required).
-_ENV_OPTIONAL_NAMES=(UNRAID_SSH_KEY FEDORA_SSH_KEY LOCAL_REGISTRY)
-_ENV_OPTIONAL_TYPES=(optional_path optional_path nonempty)
+_ENV_OPTIONAL_NAMES=(UNRAID_SSH_KEY FEDORA_SSH_KEY LOCAL_REGISTRY GO_NODE_RUNNER_CONTEXT JVM_ANDROID_RUNNER_CONTEXT)
+_ENV_OPTIONAL_TYPES=(optional_path optional_path nonempty optional_path optional_path)

 # Human-readable format hints for error messages.
 _validator_hint() {
```
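The conditional arrays stay index-aligned (name, type, when). A minimal sketch of how a validation loop would consume them; the `echo` stands in for the toolkit's real per-type validators:

```shell
# Walk the TLS-conditional arrays and report which variables would be
# validated for a given TLS_MODE. Arrays mirror the diff above; the
# echo is a stand-in for the actual type validators.
_ENV_CONDITIONAL_TLS_NAMES=(CLOUDFLARE_API_TOKEN PUBLIC_DNS_TARGET_IP PHASE8_ALLOW_PRIVATE_DNS_TARGET SSL_CERT_PATH SSL_KEY_PATH)
_ENV_CONDITIONAL_TLS_TYPES=(nonempty ip bool path path)
_ENV_CONDITIONAL_TLS_WHEN=(cloudflare cloudflare cloudflare existing existing)

TLS_MODE=cloudflare
for i in "${!_ENV_CONDITIONAL_TLS_NAMES[@]}"; do
  [[ "${_ENV_CONDITIONAL_TLS_WHEN[$i]}" == "$TLS_MODE" ]] || continue
  echo "validate ${_ENV_CONDITIONAL_TLS_NAMES[$i]} as ${_ENV_CONDITIONAL_TLS_TYPES[$i]}"
done
```

With `TLS_MODE=cloudflare` this selects the first three entries and skips the two `existing`-mode paths.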
**lib/phase10_common.sh** (new file, 327 lines)
@@ -0,0 +1,327 @@
|
||||
#!/usr/bin/env bash
|
||||
# =============================================================================
|
||||
# lib/phase10_common.sh — Shared helpers for phase 10 local repo cutover
|
||||
# =============================================================================
|
||||
|
||||
# Shared discovery results (parallel arrays; bash 3.2 compatible).
|
||||
PHASE10_REPO_NAMES=()
|
||||
PHASE10_REPO_PATHS=()
|
||||
PHASE10_GITHUB_URLS=()
|
||||
PHASE10_DUPLICATES=()
|
||||
|
||||
phase10_repo_index_by_name() {
|
||||
local repo_name="$1"
|
||||
local i
|
||||
for i in "${!PHASE10_REPO_NAMES[@]}"; do
|
||||
if [[ "${PHASE10_REPO_NAMES[$i]}" == "$repo_name" ]]; then
|
||||
printf '%s' "$i"
|
||||
return 0
|
||||
fi
|
||||
done
|
||||
printf '%s' "-1"
|
||||
}
|
||||
|
||||
# Parse common git remote URL formats into: host|owner|repo
|
||||
# Supports:
|
||||
# - https://host/owner/repo(.git)
|
||||
# - ssh://git@host/owner/repo(.git)
|
||||
# - git@host:owner/repo(.git)
|
||||
phase10_parse_git_url() {
|
||||
local url="$1"
|
||||
local rest host path owner repo
|
||||
|
||||
if [[ "$url" =~ ^[a-zA-Z][a-zA-Z0-9+.-]*:// ]]; then
|
||||
rest="${url#*://}"
|
||||
# Drop optional userinfo component.
|
||||
rest="${rest#*@}"
|
||||
host="${rest%%/*}"
|
||||
path="${rest#*/}"
|
||||
elif [[ "$url" == *@*:* ]]; then
|
||||
rest="${url#*@}"
|
||||
host="${rest%%:*}"
|
||||
path="${rest#*:}"
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
|
||||
path="${path#/}"
|
||||
path="${path%.git}"
|
||||
owner="${path%%/*}"
|
||||
repo="${path#*/}"
|
||||
repo="${repo%%/*}"
|
||||
|
||||
if [[ -z "$host" ]] || [[ -z "$owner" ]] || [[ -z "$repo" ]] || [[ "$owner" == "$path" ]]; then
|
||||
return 1
|
||||
fi
|
||||
|
||||
printf '%s|%s|%s\n' "$host" "$owner" "$repo"
|
||||
}
|
||||
|
||||
phase10_host_matches() {
|
||||
local host="$1" expected="$2"
|
||||
[[ "$host" == "$expected" ]] || [[ "$host" == "${expected}:"* ]]
|
||||
}
|
||||
|
||||
# Return 0 when URL matches github.com/<owner>/<repo>.
|
||||
# If <repo> is omitted, only owner is checked.
|
||||
phase10_url_is_github_repo() {
|
||||
local url="$1" owner_expected="$2" repo_expected="${3:-}"
|
||||
local parsed host owner repo
|
||||
|
||||
parsed=$(phase10_parse_git_url "$url" 2>/dev/null) || return 1
|
||||
IFS='|' read -r host owner repo <<< "$parsed"
|
||||
|
||||
phase10_host_matches "$host" "github.com" || return 1
|
||||
[[ "$owner" == "$owner_expected" ]] || return 1
|
||||
if [[ -n "$repo_expected" ]] && [[ "$repo" != "$repo_expected" ]]; then
|
||||
return 1
|
||||
fi
|
||||
return 0
|
||||
}
|
||||
|
||||
phase10_url_is_gitea_repo() {
|
||||
local url="$1" domain="$2" org="$3" repo_expected="$4"
|
||||
local parsed host owner repo
|
||||
|
||||
parsed=$(phase10_parse_git_url "$url" 2>/dev/null) || return 1
|
||||
IFS='|' read -r host owner repo <<< "$parsed"
|
||||
|
||||
phase10_host_matches "$host" "$domain" || return 1
|
||||
[[ "$owner" == "$org" ]] || return 1
|
||||
[[ "$repo" == "$repo_expected" ]] || return 1
|
||||
}
|
||||
|
||||
phase10_canonical_github_url() {
|
||||
local owner="$1" repo="$2"
|
||||
printf 'https://github.com/%s/%s.git' "$owner" "$repo"
|
||||
}
|
||||
|
||||
phase10_canonical_gitea_url() {
|
||||
local domain="$1" org="$2" repo="$3"
|
||||
printf 'https://%s/%s/%s.git' "$domain" "$org" "$repo"
|
||||
}
|
||||
|
||||
# Resolve which local remote currently represents GitHub for this repo path.
|
||||
# Prefers "github" remote, then "origin".
|
||||
phase10_find_github_remote_url() {
|
||||
local repo_path="$1" github_owner="$2"
|
||||
local github_url=""
|
||||
|
||||
if github_url=$(git -C "$repo_path" remote get-url github 2>/dev/null); then
|
||||
if phase10_url_is_github_repo "$github_url" "$github_owner"; then
|
||||
printf '%s' "$github_url"
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
|
||||
if github_url=$(git -C "$repo_path" remote get-url origin 2>/dev/null); then
|
||||
if phase10_url_is_github_repo "$github_url" "$github_owner"; then
|
||||
printf '%s' "$github_url"
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
|
||||
return 1
|
||||
}
|
||||
|
||||
# Add or update a discovered repo entry.
|
||||
# If repo already exists and path differs, explicit path wins.
|
||||
phase10_upsert_repo_entry() {
|
||||
local repo_name="$1" repo_path="$2" github_url="$3"
|
||||
local idx existing_path
|
||||
|
||||
idx="$(phase10_repo_index_by_name "$repo_name")"
|
||||
if [[ "$idx" -ge 0 ]]; then
|
||||
existing_path="${PHASE10_REPO_PATHS[$idx]}"
|
||||
if [[ "$existing_path" != "$repo_path" ]]; then
|
||||
PHASE10_REPO_PATHS[idx]="$repo_path"
|
||||
PHASE10_GITHUB_URLS[idx]="$github_url"
|
||||
log_info "${repo_name}: using explicit include path ${repo_path} (replacing ${existing_path})"
|
||||
fi
|
||||
return 0
|
||||
fi
|
||||
|
||||
PHASE10_REPO_NAMES+=("$repo_name")
|
||||
PHASE10_REPO_PATHS+=("$repo_path")
|
||||
PHASE10_GITHUB_URLS+=("$github_url")
|
||||
return 0
|
||||
}
|
||||
|
||||
# Add one explicitly included repo path into discovery arrays.
|
||||
# Validates path is a git toplevel and maps to github.com/<owner>/repo.
|
||||
phase10_include_repo_path() {
|
||||
local include_path="$1" github_owner="$2"
|
||||
local abs_path top github_url parsed host owner repo canonical
|
||||
|
||||
if [[ ! -d "$include_path" ]]; then
|
||||
log_error "Include path not found: ${include_path}"
|
||||
return 1
|
||||
fi
|
||||
|
||||
abs_path="$(cd "$include_path" && pwd)"
|
||||
if ! git -C "$abs_path" rev-parse --is-inside-work-tree >/dev/null 2>&1; then
|
||||
log_error "Include path is not a git repo: ${abs_path}"
|
||||
return 1
|
||||
fi
|
||||
|
||||
top="$(git -C "$abs_path" rev-parse --show-toplevel 2>/dev/null || true)"
|
||||
if [[ "$top" != "$abs_path" ]]; then
|
||||
log_error "Include path must be repo root (git toplevel): ${abs_path}"
|
||||
return 1
|
||||
fi
|
||||
|
||||
github_url="$(phase10_find_github_remote_url "$abs_path" "$github_owner" 2>/dev/null || true)"
|
||||
if [[ -z "$github_url" ]]; then
|
||||
# Explicit include-path may point to a local repo with no GitHub remote yet.
|
||||
# In that case, derive the repo slug from folder name and assume GitHub URL.
|
||||
repo="$(basename "$abs_path")"
|
||||
canonical="$(phase10_canonical_github_url "$github_owner" "$repo")"
|
||||
log_warn "Include path has no GitHub remote; assuming ${canonical}"
|
||||
else
|
||||
parsed=$(phase10_parse_git_url "$github_url" 2>/dev/null) || {
|
||||
log_error "Could not parse GitHub remote URL for include path: ${abs_path}"
|
||||
return 1
|
||||
}
|
||||
IFS='|' read -r host owner repo <<< "$parsed"
|
||||
canonical="$(phase10_canonical_github_url "$owner" "$repo")"
|
||||
fi
|
||||
|
||||
phase10_upsert_repo_entry "$repo" "$abs_path" "$canonical"
|
||||
return 0
|
||||
}
|
||||
|
||||
phase10_enforce_expected_count() {
|
||||
local expected_count="$1" root="$2"
|
||||
local i
|
||||
|
||||
if [[ "$expected_count" -gt 0 ]] && [[ "${#PHASE10_REPO_NAMES[@]}" -ne "$expected_count" ]]; then
|
||||
log_error "Expected ${expected_count} local repos under ${root}; found ${#PHASE10_REPO_NAMES[@]}"
|
||||
for i in "${!PHASE10_REPO_NAMES[@]}"; do
|
||||
log_error " - ${PHASE10_REPO_NAMES[$i]} -> ${PHASE10_REPO_PATHS[$i]}"
|
||||
done
|
||||
return 1
|
||||
fi
|
||||
return 0
|
||||
}
|
||||
|
||||
# Stable in-place sort by repo name (keeps arrays aligned).
|
||||
phase10_sort_repo_arrays() {
|
||||
local i j tmp
|
||||
for ((i = 0; i < ${#PHASE10_REPO_NAMES[@]}; i++)); do
|
||||
for ((j = i + 1; j < ${#PHASE10_REPO_NAMES[@]}; j++)); do
|
||||
if [[ "${PHASE10_REPO_NAMES[$i]}" > "${PHASE10_REPO_NAMES[$j]}" ]]; then
|
||||
tmp="${PHASE10_REPO_NAMES[$i]}"
|
||||
PHASE10_REPO_NAMES[i]="${PHASE10_REPO_NAMES[j]}"
|
||||
PHASE10_REPO_NAMES[j]="$tmp"
|
||||
|
||||
tmp="${PHASE10_REPO_PATHS[i]}"
|
||||
PHASE10_REPO_PATHS[i]="${PHASE10_REPO_PATHS[j]}"
|
||||
PHASE10_REPO_PATHS[j]="$tmp"
|
||||
|
||||
tmp="${PHASE10_GITHUB_URLS[i]}"
|
||||
PHASE10_GITHUB_URLS[i]="${PHASE10_GITHUB_URLS[j]}"
|
||||
PHASE10_GITHUB_URLS[j]="$tmp"
|
||||
fi
|
||||
done
|
||||
done
|
||||
}
|
||||
|
||||
# Discover local repos under root that map to github.com/<github_owner>.
|
||||
# Discovery rules:
|
||||
# - Only direct children of root are considered.
|
||||
# - Excludes exclude_path (typically this toolkit repo).
|
||||
# - Accepts a repo if either "github" or "origin" points at GitHub owner.
|
||||
# - Deduplicates by repo slug, preferring directory basename == slug.
|
||||
#
|
||||
# Args:
|
||||
# $1 root dir (e.g., /Users/s/development)
|
||||
# $2 github owner (from GITHUB_USERNAME)
|
||||
# $3 exclude absolute path (optional; pass "" for none)
|
||||
# $4 expected count (0 = don't enforce)
|
||||
phase10_discover_local_repos() {
|
||||
local root="$1"
|
||||
local github_owner="$2"
|
||||
local exclude_path="${3:-}"
|
||||
local expected_count="${4:-0}"
|
||||
|
||||
PHASE10_REPO_NAMES=()
|
||||
PHASE10_REPO_PATHS=()
|
||||
PHASE10_GITHUB_URLS=()
|
||||
PHASE10_DUPLICATES=()
|
||||
|
||||
if [[ ! -d "$root" ]]; then
|
||||
log_error "Local repo root not found: ${root}"
|
||||
return 1
|
||||
fi
|
||||
|
||||
local dir top github_url parsed host owner repo canonical
|
||||
local i idx existing existing_base new_base duplicate
|
||||
for dir in "$root"/*; do
|
||||
[[ -d "$dir" ]] || continue
|
||||
if [[ -n "$exclude_path" ]] && [[ "$dir" == "$exclude_path" ]]; then
|
||||
continue
|
||||
fi
|
||||
|
||||
if ! git -C "$dir" rev-parse --is-inside-work-tree >/dev/null 2>&1; then
|
||||
continue
|
||||
fi
|
        top=$(git -C "$dir" rev-parse --show-toplevel 2>/dev/null || true)
        [[ "$top" == "$dir" ]] || continue

        github_url="$(phase10_find_github_remote_url "$dir" "$github_owner" 2>/dev/null || true)"
        [[ -n "$github_url" ]] || continue

        parsed=$(phase10_parse_git_url "$github_url" 2>/dev/null) || continue
        IFS='|' read -r host owner repo <<< "$parsed"
        canonical=$(phase10_canonical_github_url "$owner" "$repo")

        idx=-1
        for i in "${!PHASE10_REPO_NAMES[@]}"; do
            if [[ "${PHASE10_REPO_NAMES[$i]}" == "$repo" ]]; then
                idx="$i"
                break
            fi
        done

        if [[ "$idx" -ge 0 ]]; then
            existing="${PHASE10_REPO_PATHS[$idx]}"
            existing_base="$(basename "$existing")"
            new_base="$(basename "$dir")"
            if [[ "$new_base" == "$repo" ]] && [[ "$existing_base" != "$repo" ]]; then
                PHASE10_REPO_PATHS[idx]="$dir"
                PHASE10_GITHUB_URLS[idx]="$canonical"
                PHASE10_DUPLICATES+=("${repo}: preferred ${dir} over ${existing}")
            else
                PHASE10_DUPLICATES+=("${repo}: ignored duplicate ${dir} (using ${existing})")
            fi
            continue
        fi

        PHASE10_REPO_NAMES+=("$repo")
        PHASE10_REPO_PATHS+=("$dir")
        PHASE10_GITHUB_URLS+=("$canonical")
    done

    phase10_sort_repo_arrays

    for duplicate in "${PHASE10_DUPLICATES[@]}"; do
        log_info "$duplicate"
    done

    if [[ "${#PHASE10_REPO_NAMES[@]}" -eq 0 ]]; then
        log_error "No local GitHub repos found under ${root} for owner '${github_owner}'"
        return 1
    fi

    if [[ "$expected_count" -gt 0 ]] && [[ "${#PHASE10_REPO_NAMES[@]}" -ne "$expected_count" ]]; then
        log_error "Expected ${expected_count} local repos under ${root}; found ${#PHASE10_REPO_NAMES[@]}"
        for i in "${!PHASE10_REPO_NAMES[@]}"; do
            log_error "  - ${PHASE10_REPO_NAMES[$i]} -> ${PHASE10_REPO_PATHS[$i]}"
        done
        return 1
    fi

    return 0
}
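The duplicate-handling branch above can be exercised in isolation: a clone whose directory basename equals the repo slug wins over one that does not. A minimal sketch of that rule (`prefer` and the `/tmp/dev/...` paths are illustrative, not part of the library):

```shell
#!/usr/bin/env bash
# Restates the dedup rule: keep the existing path unless the new candidate's
# basename equals the repo slug and the existing one's does not.
prefer() {
    local repo="$1" existing="$2" candidate="$3"
    if [[ "$(basename "$candidate")" == "$repo" ]] && [[ "$(basename "$existing")" != "$repo" ]]; then
        printf '%s\n' "$candidate"
    else
        printf '%s\n' "$existing"
    fi
}
prefer augur /tmp/dev/augur-old /tmp/dev/augur    # -> /tmp/dev/augur
prefer augur /tmp/dev/augur /tmp/dev/augur-copy   # -> /tmp/dev/augur
```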
@@ -73,6 +73,9 @@ parse_runner_entry() {
     # "true"  → /Library/LaunchDaemons/ (starts at boot, requires sudo)
     # "false" (default) → ~/Library/LaunchAgents/ (starts at login)
     RUNNER_BOOT=$(ini_get "$RUNNERS_CONF" "$target_name" "boot" "false")
+    # container_options: extra Docker flags for act_runner job containers.
+    # e.g. "--device=/dev/kvm" for KVM passthrough. Ignored for native runners.
+    RUNNER_CONTAINER_OPTIONS=$(ini_get "$RUNNERS_CONF" "$target_name" "container_options" "")

     # --- Host resolution ---
     # Also resolves RUNNER_COMPOSE_DIR: centralized compose dir on unraid/fedora,
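The two keys read above come from a per-runner section of runners.conf. A hypothetical entry combining them might look like this (the runner name and flag values are illustrative, not from the repo):

```ini
# Illustrative runners.conf entry
[vm-runner]
boot = true
container_options = --device=/dev/kvm --memory=8g
```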
@@ -354,8 +357,9 @@ add_docker_runner() {
     # shellcheck disable=SC2090 # intentional — RUNNER_LABELS_YAML rendered via envsubst
     export RUNNER_LABELS_YAML
     export RUNNER_CAPACITY
+    export RUNNER_CONTAINER_OPTIONS
     render_template "${SCRIPT_DIR}/templates/runner-config.yaml.tpl" "$tmpfile" \
-        "\${RUNNER_NAME} \${RUNNER_LABELS_YAML} \${RUNNER_CAPACITY}"
+        "\${RUNNER_NAME} \${RUNNER_LABELS_YAML} \${RUNNER_CAPACITY} \${RUNNER_CONTAINER_OPTIONS}"
     runner_scp "$tmpfile" "${RUNNER_DATA_PATH}/config.yaml"
     rm -f "$tmpfile"

@@ -422,9 +426,9 @@ add_native_runner() {
     local tmpfile
     tmpfile=$(mktemp)
     # shellcheck disable=SC2090 # intentional — RUNNER_LABELS_YAML rendered via envsubst
-    export RUNNER_NAME RUNNER_DATA_PATH RUNNER_LABELS_YAML RUNNER_CAPACITY
+    export RUNNER_NAME RUNNER_DATA_PATH RUNNER_LABELS_YAML RUNNER_CAPACITY RUNNER_CONTAINER_OPTIONS
     render_template "${SCRIPT_DIR}/templates/runner-config.yaml.tpl" "$tmpfile" \
-        "\${RUNNER_NAME} \${RUNNER_LABELS_YAML} \${RUNNER_CAPACITY}"
+        "\${RUNNER_NAME} \${RUNNER_LABELS_YAML} \${RUNNER_CAPACITY} \${RUNNER_CONTAINER_OPTIONS}"
     cp "$tmpfile" "${RUNNER_DATA_PATH}/config.yaml"
     rm -f "$tmpfile"
732  phase10_local_repo_cutover.sh  Executable file
@@ -0,0 +1,732 @@
#!/usr/bin/env bash
set -euo pipefail

# =============================================================================
# phase10_local_repo_cutover.sh — Re-point local repos from GitHub to Gitea
# Depends on: Phase 8 complete (Gitea publicly reachable) + Phase 4 migrated
#
# For each discovered local repo under /Users/s/development:
#   1. Rename origin -> github (if needed)
#   2. Ensure repo exists on Gitea (create if missing)
#   3. Add/update origin to point at Gitea
#   4. Push all branches and tags to Gitea origin
#   5. Ensure every local branch tracks origin/<branch> (Gitea)
#
# Discovery is based on local git remotes:
#   - repo root is a direct child of PHASE10_LOCAL_ROOT (default /Users/s/development)
#   - repo has origin/github pointing to github.com/${GITHUB_USERNAME}/<repo>
#   - duplicate clones are deduped by repo slug
# =============================================================================
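The remote surgery in steps 1 and 3 can be tried in a throwaway repo with dummy URLs; `git remote` never touches the network for these operations. All names and URLs below are illustrative:

```shell
#!/usr/bin/env bash
# Exercise steps 1 and 3 in a scratch repo (no network access needed).
tmp=$(mktemp -d)
git -C "$tmp" init -q
git -C "$tmp" remote add origin https://github.com/me/demo.git
git -C "$tmp" remote rename origin github                              # step 1
git -C "$tmp" remote add origin https://git.example.com/org/demo.git   # step 3
git -C "$tmp" remote get-url github   # https://github.com/me/demo.git
git -C "$tmp" remote get-url origin   # https://git.example.com/org/demo.git
rm -rf "$tmp"
```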
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
source "${SCRIPT_DIR}/lib/phase10_common.sh"

load_env
require_vars GITEA_ADMIN_TOKEN GITEA_ADMIN_USER GITEA_ORG_NAME GITEA_DOMAIN GITEA_INTERNAL_URL GITHUB_USERNAME

phase_header 10 "Local Repo Remote Cutover"

LOCAL_REPO_ROOT="${PHASE10_LOCAL_ROOT:-/Users/s/development}"
EXPECTED_REPO_COUNT="${PHASE10_EXPECTED_REPO_COUNT:-3}"
INCLUDE_PATHS=()
DRY_RUN=false
FORCE_WITH_LEASE=false
ASKPASS_SCRIPT=""
PHASE10_GITEA_REPO_EXISTS=false
PHASE10_REMOTE_BRANCHES=""
PHASE10_REMOTE_TAGS=""
PHASE10_LAST_CURL_ERROR=""
PHASE10_LAST_HTTP_CODE=""
PHASE10_HTTP_CONNECT_TIMEOUT="${PHASE10_HTTP_CONNECT_TIMEOUT:-15}"
PHASE10_HTTP_LOW_SPEED_LIMIT="${PHASE10_HTTP_LOW_SPEED_LIMIT:-1}"
PHASE10_HTTP_LOW_SPEED_TIME="${PHASE10_HTTP_LOW_SPEED_TIME:-30}"
PHASE10_PUSH_TIMEOUT_SEC="${PHASE10_PUSH_TIMEOUT_SEC:-120}"
PHASE10_LSREMOTE_TIMEOUT_SEC="${PHASE10_LSREMOTE_TIMEOUT_SEC:-45}"
PHASE10_API_CONNECT_TIMEOUT_SEC="${PHASE10_API_CONNECT_TIMEOUT_SEC:-8}"
PHASE10_API_MAX_TIME_SEC="${PHASE10_API_MAX_TIME_SEC:-20}"

if [[ -n "${PHASE10_INCLUDE_PATHS:-}" ]]; then
    # Space-delimited list of extra repo roots to include in phase10 discovery.
    read -r -a INCLUDE_PATHS <<< "${PHASE10_INCLUDE_PATHS}"
fi

for arg in "$@"; do
    case "$arg" in
        --local-root=*) LOCAL_REPO_ROOT="${arg#*=}" ;;
        --expected-count=*) EXPECTED_REPO_COUNT="${arg#*=}" ;;
        --include-path=*) INCLUDE_PATHS+=("${arg#*=}") ;;
        --dry-run) DRY_RUN=true ;;
        --force-with-lease) FORCE_WITH_LEASE=true ;;
        --help|-h)
            cat <<EOF
Usage: $(basename "$0") [options]

Options:
  --local-root=PATH     Root folder containing local repos (default: /Users/s/development)
  --expected-count=N    Require exactly N discovered repos (default: 3, 0 disables)
  --include-path=PATH   Explicit repo root to include (repeatable)
  --dry-run             Print planned actions only (no mutations)
  --force-with-lease    Use force-with-lease when pushing branches/tags to Gitea
  --help                Show this help
EOF
            exit 0
            ;;
        *)
            log_error "Unknown argument: $arg"
            exit 1
            ;;
    esac
done

if ! [[ "$EXPECTED_REPO_COUNT" =~ ^[0-9]+$ ]]; then
    log_error "--expected-count must be a non-negative integer"
    exit 1
fi
for timeout_var in PHASE10_HTTP_CONNECT_TIMEOUT PHASE10_HTTP_LOW_SPEED_LIMIT PHASE10_HTTP_LOW_SPEED_TIME \
                   PHASE10_PUSH_TIMEOUT_SEC PHASE10_LSREMOTE_TIMEOUT_SEC \
                   PHASE10_API_CONNECT_TIMEOUT_SEC PHASE10_API_MAX_TIME_SEC; do
    if ! [[ "${!timeout_var}" =~ ^[0-9]+$ ]] || [[ "${!timeout_var}" -le 0 ]]; then
        log_error "${timeout_var} must be a positive integer"
        exit 1
    fi
done

cleanup() {
    if [[ -n "$ASKPASS_SCRIPT" ]]; then
        rm -f "$ASKPASS_SCRIPT"
    fi
}
trap cleanup EXIT

setup_git_auth() {
    ASKPASS_SCRIPT=$(mktemp)
    cat > "$ASKPASS_SCRIPT" <<'EOF'
#!/usr/bin/env sh
case "$1" in
    *sername*) printf '%s\n' "$GITEA_GIT_USERNAME" ;;
    *assword*) printf '%s\n' "$GITEA_GIT_TOKEN" ;;
    *) printf '\n' ;;
esac
EOF
    chmod 700 "$ASKPASS_SCRIPT"
}

git_with_auth() {
    GIT_TERMINAL_PROMPT=0 \
    GIT_ASKPASS="$ASKPASS_SCRIPT" \
    GITEA_GIT_USERNAME="$GITEA_ADMIN_USER" \
    GITEA_GIT_TOKEN="$GITEA_ADMIN_TOKEN" \
    "$@"
}
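The GIT_ASKPASS contract these two helpers rely on can be demonstrated directly: git invokes the configured script with a prompt string as `$1` and reads one credential line from its stdout. The credential values below are dummies for illustration:

```shell
#!/usr/bin/env bash
# Call an askpass script the way git does: prompt in, one line out.
askpass=$(mktemp)
cat > "$askpass" <<'EOF'
#!/usr/bin/env sh
case "$1" in
    *sername*) printf '%s\n' "$GITEA_GIT_USERNAME" ;;
    *assword*) printf '%s\n' "$GITEA_GIT_TOKEN" ;;
    *) printf '\n' ;;
esac
EOF
chmod 700 "$askpass"
GITEA_GIT_USERNAME=admin "$askpass" "Username for 'https://git.example':"   # prints: admin
GITEA_GIT_TOKEN=t0ken "$askpass" "Password for 'https://git.example':"      # prints: t0ken
rm -f "$askpass"
```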
run_with_timeout() {
    local timeout_sec="$1"
    shift
    if command -v perl >/dev/null 2>&1; then
        perl -e '
            my $timeout = shift @ARGV;
            my $pid = fork();
            if (!defined $pid) { exit 125; }
            if ($pid == 0) {
                setpgrp(0, 0);
                exec @ARGV;
                exit 125;
            }
            my $timed_out = 0;
            local $SIG{ALRM} = sub {
                $timed_out = 1;
                kill "TERM", -$pid;
                select(undef, undef, undef, 0.5);
                kill "KILL", -$pid;
            };
            alarm $timeout;
            waitpid($pid, 0);
            alarm 0;
            if ($timed_out) { exit 124; }
            my $rc = $?;
            if ($rc == -1) { exit 125; }
            if ($rc & 127) { exit(128 + ($rc & 127)); }
            exit($rc >> 8);
        ' "$timeout_sec" "$@"
    else
        "$@"
    fi
}
git_with_auth_timed() {
    local timeout_sec="$1"
    shift
    run_with_timeout "$timeout_sec" \
        env \
        GIT_TERMINAL_PROMPT=0 \
        GIT_ASKPASS="$ASKPASS_SCRIPT" \
        GITEA_GIT_USERNAME="$GITEA_ADMIN_USER" \
        GITEA_GIT_TOKEN="$GITEA_ADMIN_TOKEN" \
        "$@"
}

ensure_github_remote() {
    local repo_path="$1" repo_name="$2" github_url="$3"
    local existing origin_existing has_bad_github
    has_bad_github=false

    if existing=$(git -C "$repo_path" remote get-url github 2>/dev/null); then
        if phase10_url_is_github_repo "$existing" "$GITHUB_USERNAME" "$repo_name"; then
            if [[ "$existing" != "$github_url" ]]; then
                if [[ "$DRY_RUN" == "true" ]]; then
                    log_info "${repo_name}: would set github URL -> ${github_url}"
                else
                    git -C "$repo_path" remote set-url github "$github_url"
                fi
            fi
            return 0
        fi
        has_bad_github=true
    fi

    if origin_existing=$(git -C "$repo_path" remote get-url origin 2>/dev/null); then
        if phase10_url_is_github_repo "$origin_existing" "$GITHUB_USERNAME" "$repo_name"; then
            if [[ "$has_bad_github" == "true" ]]; then
                if [[ "$DRY_RUN" == "true" ]]; then
                    log_warn "${repo_name}: would remove misconfigured 'github' remote and rebuild it from origin"
                else
                    git -C "$repo_path" remote remove github
                    log_warn "${repo_name}: removed misconfigured 'github' remote and rebuilt it from origin"
                fi
            fi
            if [[ "$DRY_RUN" == "true" ]]; then
                log_info "${repo_name}: would rename origin -> github"
                log_info "${repo_name}: would set github URL -> ${github_url}"
            else
                git -C "$repo_path" remote rename origin github
                git -C "$repo_path" remote set-url github "$github_url"
                log_success "${repo_name}: renamed origin -> github"
            fi
            return 0
        fi
    fi

    if [[ "$has_bad_github" == "true" ]]; then
        log_error "${repo_name}: existing 'github' remote does not point to GitHub repo ${GITHUB_USERNAME}/${repo_name}"
        return 1
    fi

    # Explicit include-path repos may have no GitHub remote yet.
    if [[ "$DRY_RUN" == "true" ]]; then
        log_info "${repo_name}: would add github remote -> ${github_url}"
    else
        git -C "$repo_path" remote add github "$github_url"
        log_success "${repo_name}: added github remote (${github_url})"
    fi
    return 0
}
ensure_gitea_origin() {
    local repo_path="$1" repo_name="$2" gitea_url="$3"
    local existing

    if existing=$(git -C "$repo_path" remote get-url origin 2>/dev/null); then
        if phase10_url_is_gitea_repo "$existing" "$GITEA_DOMAIN" "$GITEA_ORG_NAME" "$repo_name"; then
            if [[ "$existing" != "$gitea_url" ]]; then
                if [[ "$DRY_RUN" == "true" ]]; then
                    log_info "${repo_name}: would normalize origin URL -> ${gitea_url}"
                else
                    git -C "$repo_path" remote set-url origin "$gitea_url"
                fi
            fi
            return 0
        fi
        # origin exists but points somewhere else; force it to Gitea.
        if [[ "$DRY_RUN" == "true" ]]; then
            log_info "${repo_name}: would set origin URL -> ${gitea_url}"
        else
            git -C "$repo_path" remote set-url origin "$gitea_url"
        fi
        return 0
    fi

    if [[ "$DRY_RUN" == "true" ]]; then
        log_info "${repo_name}: would add origin -> ${gitea_url}"
    else
        git -C "$repo_path" remote add origin "$gitea_url"
    fi
    return 0
}
ensure_gitea_repo_exists() {
    local repo_name="$1"
    local create_payload create_response

    get_gitea_repo_http_code() {
        local target_repo="$1"
        local tmpfile errfile curl_code
        tmpfile=$(mktemp)
        errfile=$(mktemp)
        curl_code=$(curl \
            -sS \
            -o "$tmpfile" \
            -w "%{http_code}" \
            --connect-timeout "$PHASE10_API_CONNECT_TIMEOUT_SEC" \
            --max-time "$PHASE10_API_MAX_TIME_SEC" \
            -H "Authorization: token ${GITEA_ADMIN_TOKEN}" \
            -H "Accept: application/json" \
            "${GITEA_INTERNAL_URL}/api/v1/repos/${GITEA_ORG_NAME}/${target_repo}" 2>"$errfile") || {
            PHASE10_LAST_CURL_ERROR="$(tr '\n' ' ' < "$errfile" | sed 's/[[:space:]]\+/ /g; s/^ //; s/ $//')"
            rm -f "$tmpfile"
            rm -f "$errfile"
            return 1
        }
        rm -f "$tmpfile"
        rm -f "$errfile"
        PHASE10_LAST_CURL_ERROR=""
        PHASE10_LAST_HTTP_CODE="$curl_code"
        return 0
    }

    create_gitea_repo() {
        local payload="$1"
        local tmpfile errfile curl_code
        tmpfile=$(mktemp)
        errfile=$(mktemp)
        curl_code=$(curl \
            -sS \
            -o "$tmpfile" \
            -w "%{http_code}" \
            --connect-timeout "$PHASE10_API_CONNECT_TIMEOUT_SEC" \
            --max-time "$PHASE10_API_MAX_TIME_SEC" \
            -X POST \
            -H "Authorization: token ${GITEA_ADMIN_TOKEN}" \
            -H "Content-Type: application/json" \
            -H "Accept: application/json" \
            -d "$payload" \
            "${GITEA_INTERNAL_URL}/api/v1/orgs/${GITEA_ORG_NAME}/repos" 2>"$errfile") || {
            PHASE10_LAST_CURL_ERROR="$(tr '\n' ' ' < "$errfile" | sed 's/[[:space:]]\+/ /g; s/^ //; s/ $//')"
            rm -f "$tmpfile"
            rm -f "$errfile"
            return 1
        }
        create_response="$(cat "$tmpfile")"
        rm -f "$tmpfile"
        rm -f "$errfile"
        PHASE10_LAST_CURL_ERROR=""
        PHASE10_LAST_HTTP_CODE="$curl_code"
        return 0
    }

    PHASE10_GITEA_REPO_EXISTS=false
    PHASE10_LAST_CURL_ERROR=""
    PHASE10_LAST_HTTP_CODE=""
    if ! get_gitea_repo_http_code "$repo_name"; then
        log_error "${repo_name}: failed to query Gitea API for repo existence"
        if [[ -n "$PHASE10_LAST_CURL_ERROR" ]]; then
            log_error "${repo_name}: curl error: ${PHASE10_LAST_CURL_ERROR}"
        fi
        return 1
    fi

    if [[ "$PHASE10_LAST_HTTP_CODE" == "200" ]]; then
        PHASE10_GITEA_REPO_EXISTS=true
        if [[ "$DRY_RUN" == "true" ]]; then
            log_info "${repo_name}: Gitea repo already exists (${GITEA_ORG_NAME}/${repo_name})"
        fi
        return 0
    fi

    if [[ "$PHASE10_LAST_HTTP_CODE" != "404" ]]; then
        log_error "${repo_name}: unexpected Gitea API status while checking repo (${PHASE10_LAST_HTTP_CODE})"
        return 1
    fi

    create_payload=$(jq -n \
        --arg name "$repo_name" \
        '{name: $name, auto_init: false}')

    if [[ "$DRY_RUN" == "true" ]]; then
        log_info "${repo_name}: would create missing Gitea repo ${GITEA_ORG_NAME}/${repo_name}"
        return 0
    fi

    log_info "${repo_name}: creating missing Gitea repo ${GITEA_ORG_NAME}/${repo_name}"
    PHASE10_LAST_HTTP_CODE=""
    if ! create_gitea_repo "$create_payload"; then
        log_error "${repo_name}: failed to create Gitea repo ${GITEA_ORG_NAME}/${repo_name} (network/API call failed)"
        if [[ -n "$PHASE10_LAST_CURL_ERROR" ]]; then
            log_error "${repo_name}: curl error: ${PHASE10_LAST_CURL_ERROR}"
        fi
        return 1
    fi

    if [[ "$PHASE10_LAST_HTTP_CODE" == "201" ]]; then
        log_success "${repo_name}: created missing Gitea repo ${GITEA_ORG_NAME}/${repo_name}"
        return 0
    fi

    # If another process created the repo concurrently, treat it as success.
    if [[ "$PHASE10_LAST_HTTP_CODE" == "409" ]]; then
        log_warn "${repo_name}: Gitea repo already exists (HTTP 409), continuing"
        return 0
    fi

    log_error "${repo_name}: failed to create Gitea repo ${GITEA_ORG_NAME}/${repo_name} (HTTP ${PHASE10_LAST_HTTP_CODE})"
    if [[ -n "${create_response:-}" ]]; then
        log_error "${repo_name}: API response: ${create_response}"
    fi
    return 1
}
count_items() {
    local list="$1"
    if [[ -z "$list" ]]; then
        printf '0'
        return
    fi
    printf '%s\n' "$list" | sed '/^$/d' | wc -l | tr -d '[:space:]'
}

list_contains() {
    local list="$1" needle="$2"
    [[ -n "$list" ]] && printf '%s\n' "$list" | grep -Fxq "$needle"
}
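These two helpers operate on newline-delimited lists; note that `list_contains` uses `grep -Fx`, so matches are literal and whole-line only. Re-declared verbatim so the snippet runs on its own:

```shell
#!/usr/bin/env bash
# Same definitions as above, exercised on a small branch list.
count_items() {
    local list="$1"
    if [[ -z "$list" ]]; then
        printf '0'
        return
    fi
    printf '%s\n' "$list" | sed '/^$/d' | wc -l | tr -d '[:space:]'
}
list_contains() {
    local list="$1" needle="$2"
    [[ -n "$list" ]] && printf '%s\n' "$list" | grep -Fxq "$needle"
}

branches=$'main\ndev\nfeature/x'
count_items "$branches"; echo                     # 3
count_items ""; echo                              # 0
list_contains "$branches" dev && echo found       # found
list_contains "$branches" devops || echo absent   # absent (whole-line match only)
```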
fetch_remote_refs() {
    local url="$1"
    local refs ref short rc

    PHASE10_REMOTE_BRANCHES=""
    PHASE10_REMOTE_TAGS=""

    refs="$(git_with_auth_timed "$PHASE10_LSREMOTE_TIMEOUT_SEC" \
        git \
        -c "http.connectTimeout=${PHASE10_HTTP_CONNECT_TIMEOUT}" \
        -c "http.lowSpeedLimit=${PHASE10_HTTP_LOW_SPEED_LIMIT}" \
        -c "http.lowSpeedTime=${PHASE10_HTTP_LOW_SPEED_TIME}" \
        ls-remote --heads --tags "$url" 2>/dev/null)"
    rc=$?
    if [[ "$rc" -ne 0 ]]; then
        return 1
    fi
    [[ -n "$refs" ]] || return 0

    while IFS= read -r line; do
        [[ -z "$line" ]] && continue
        ref="${line#*[[:space:]]}"
        ref="${ref#"${ref%%[![:space:]]*}"}"
        [[ -n "$ref" ]] || continue
        case "$ref" in
            refs/heads/*)
                short="${ref#refs/heads/}"
                PHASE10_REMOTE_BRANCHES="${PHASE10_REMOTE_BRANCHES}${short}"$'\n'
                ;;
            refs/tags/*)
                short="${ref#refs/tags/}"
                [[ "$short" == *"^{}" ]] && continue
                PHASE10_REMOTE_TAGS="${PHASE10_REMOTE_TAGS}${short}"$'\n'
                ;;
        esac
    done <<< "$refs"

    PHASE10_REMOTE_BRANCHES="$(printf '%s' "$PHASE10_REMOTE_BRANCHES" | sed '/^$/d' | LC_ALL=C sort -u)"
    PHASE10_REMOTE_TAGS="$(printf '%s' "$PHASE10_REMOTE_TAGS" | sed '/^$/d' | LC_ALL=C sort -u)"
}
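The parsing loop above splits `git ls-remote --heads --tags` output (SHA, tab, refname) into branch and tag names, dropping peeled `^{}` tag entries. Exercised on canned output (the SHAs and ref names below are made up):

```shell
#!/usr/bin/env bash
# Same ref classification as fetch_remote_refs, on canned ls-remote output.
refs=$'1111\trefs/heads/main\n2222\trefs/heads/dev\n3333\trefs/tags/v1.0\n4444\trefs/tags/v1.0^{}'
branches=""; tags=""
while IFS= read -r line; do
    [[ -z "$line" ]] && continue
    ref="${line#*$'\t'}"                       # strip "<sha>\t" prefix
    case "$ref" in
        refs/heads/*)
            branches="${branches}${ref#refs/heads/}"$'\n' ;;
        refs/tags/*)
            short="${ref#refs/tags/}"
            [[ "$short" == *"^{}" ]] && continue   # skip peeled tag objects
            tags="${tags}${short}"$'\n' ;;
    esac
done <<< "$refs"
printf 'branches:\n%s' "$branches"   # main, dev
printf 'tags:\n%s' "$tags"           # v1.0 (peeled entry dropped)
```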
print_diff_summary() {
    local repo_name="$1" kind="$2" local_list="$3" remote_list="$4"
    local missing_count extra_count item
    local missing_preview="" extra_preview=""
    local preview_limit=5

    missing_count=0
    while IFS= read -r item; do
        [[ -z "$item" ]] && continue
        if ! list_contains "$remote_list" "$item"; then
            missing_count=$((missing_count + 1))
            if [[ "$missing_count" -le "$preview_limit" ]]; then
                missing_preview="${missing_preview}${item}, "
            fi
        fi
    done <<< "$local_list"

    extra_count=0
    while IFS= read -r item; do
        [[ -z "$item" ]] && continue
        if ! list_contains "$local_list" "$item"; then
            extra_count=$((extra_count + 1))
            if [[ "$extra_count" -le "$preview_limit" ]]; then
                extra_preview="${extra_preview}${item}, "
            fi
        fi
    done <<< "$remote_list"

    if [[ "$missing_count" -eq 0 ]] && [[ "$extra_count" -eq 0 ]]; then
        log_success "${repo_name}: local ${kind}s match Gitea"
        return 0
    fi

    if [[ "$missing_count" -gt 0 ]]; then
        missing_preview="${missing_preview%, }"
        log_info "${repo_name}: ${missing_count} ${kind}(s) missing on Gitea"
        if [[ -n "$missing_preview" ]]; then
            log_info "  missing ${kind} sample: ${missing_preview}"
        fi
    fi

    if [[ "$extra_count" -gt 0 ]]; then
        extra_preview="${extra_preview%, }"
        log_info "${repo_name}: ${extra_count} ${kind}(s) exist on Gitea but not locally"
        if [[ -n "$extra_preview" ]]; then
            log_info "  remote-only ${kind} sample: ${extra_preview}"
        fi
    fi
}

dry_run_compare_local_and_remote() {
    local repo_path="$1" repo_name="$2" gitea_url="$3"
    local local_branches local_tags
    local local_branch_count local_tag_count remote_branch_count remote_tag_count

    local_branches="$(git -C "$repo_path" for-each-ref --format='%(refname:short)' refs/heads | LC_ALL=C sort -u)"
    local_tags="$(git -C "$repo_path" tag -l | LC_ALL=C sort -u)"
    local_branch_count="$(count_items "$local_branches")"
    local_tag_count="$(count_items "$local_tags")"

    log_info "${repo_name}: local state = ${local_branch_count} branch(es), ${local_tag_count} tag(s)"

    if [[ "$PHASE10_GITEA_REPO_EXISTS" != "true" ]]; then
        log_info "${repo_name}: remote state = repo missing (would be created)"
        if [[ "$local_branch_count" -gt 0 ]]; then
            log_info "${repo_name}: all local branches would be pushed to new Gitea repo"
        fi
        if [[ "$local_tag_count" -gt 0 ]]; then
            log_info "${repo_name}: all local tags would be pushed to new Gitea repo"
        fi
        return 0
    fi

    log_info "${repo_name}: reading remote refs from Gitea (timeout ${PHASE10_LSREMOTE_TIMEOUT_SEC}s)"
    if ! fetch_remote_refs "$gitea_url"; then
        log_warn "${repo_name}: could not read Gitea refs via ls-remote; skipping diff"
        return 0
    fi

    remote_branch_count="$(count_items "$PHASE10_REMOTE_BRANCHES")"
    remote_tag_count="$(count_items "$PHASE10_REMOTE_TAGS")"
    log_info "${repo_name}: remote Gitea state = ${remote_branch_count} branch(es), ${remote_tag_count} tag(s)"

    print_diff_summary "$repo_name" "branch" "$local_branches" "$PHASE10_REMOTE_BRANCHES"
    print_diff_summary "$repo_name" "tag" "$local_tags" "$PHASE10_REMOTE_TAGS"
}
push_all_refs_to_origin() {
    local repo_path="$1" repo_name="$2"
    local push_output push_args push_rc

    if [[ "$DRY_RUN" == "true" ]]; then
        log_info "${repo_name}: would push all branches to origin"
        log_info "${repo_name}: would push all tags to origin"
        return 0
    fi

    push_args=(push --no-verify --all origin)
    if [[ "$FORCE_WITH_LEASE" == "true" ]]; then
        push_args=(push --no-verify --force-with-lease --all origin)
    fi

    log_info "${repo_name}: pushing branches to origin (timeout ${PHASE10_PUSH_TIMEOUT_SEC}s)"
    push_output="$(git_with_auth_timed "$PHASE10_PUSH_TIMEOUT_SEC" \
        git \
        -c "http.connectTimeout=${PHASE10_HTTP_CONNECT_TIMEOUT}" \
        -c "http.lowSpeedLimit=${PHASE10_HTTP_LOW_SPEED_LIMIT}" \
        -c "http.lowSpeedTime=${PHASE10_HTTP_LOW_SPEED_TIME}" \
        -C "$repo_path" "${push_args[@]}" 2>&1)"
    push_rc=$?
    if [[ "$push_rc" -ne 0 ]]; then
        if [[ "$push_rc" -eq 124 ]]; then
            log_error "${repo_name}: branch push timed out after ${PHASE10_PUSH_TIMEOUT_SEC}s"
            log_error "${repo_name}: check network reachability to ${GITEA_DOMAIN} and retry"
            return 1
        fi
        if [[ "$push_output" == *"non-fast-forward"* ]] || [[ "$push_output" == *"[rejected]"* ]]; then
            log_error "${repo_name}: branch push rejected (non-fast-forward)"
            log_error "${repo_name}: run with --dry-run first to review diffs, then re-run with --force-with-lease if local should win"
        else
            log_error "${repo_name}: failed pushing branches to Gitea origin"
        fi
        printf '%s\n' "$push_output" >&2
        return 1
    fi

    push_args=(push --no-verify --tags origin)
    if [[ "$FORCE_WITH_LEASE" == "true" ]]; then
        push_args=(push --no-verify --force-with-lease --tags origin)
    fi

    log_info "${repo_name}: pushing tags to origin (timeout ${PHASE10_PUSH_TIMEOUT_SEC}s)"
    push_output="$(git_with_auth_timed "$PHASE10_PUSH_TIMEOUT_SEC" \
        git \
        -c "http.connectTimeout=${PHASE10_HTTP_CONNECT_TIMEOUT}" \
        -c "http.lowSpeedLimit=${PHASE10_HTTP_LOW_SPEED_LIMIT}" \
        -c "http.lowSpeedTime=${PHASE10_HTTP_LOW_SPEED_TIME}" \
        -C "$repo_path" "${push_args[@]}" 2>&1)"
    push_rc=$?
    if [[ "$push_rc" -ne 0 ]]; then
        if [[ "$push_rc" -eq 124 ]]; then
            log_error "${repo_name}: tag push timed out after ${PHASE10_PUSH_TIMEOUT_SEC}s"
            log_error "${repo_name}: check network reachability to ${GITEA_DOMAIN} and retry"
            return 1
        fi
        if [[ "$push_output" == *"non-fast-forward"* ]] || [[ "$push_output" == *"[rejected]"* ]]; then
            log_error "${repo_name}: tag push rejected (non-fast-forward/conflict)"
            log_error "${repo_name}: re-run with --force-with-lease only if replacing remote tags is intended"
        else
            log_error "${repo_name}: failed pushing tags to Gitea origin"
        fi
        printf '%s\n' "$push_output" >&2
        return 1
    fi
    return 0
}
retarget_tracking_to_origin() {
    local repo_path="$1" repo_name="$2"
    local branch upstream_remote upstream_short branch_count
    branch_count=0

    while IFS= read -r branch; do
        [[ -z "$branch" ]] && continue
        branch_count=$((branch_count + 1))

        if ! git -C "$repo_path" show-ref --verify --quiet "refs/remotes/origin/${branch}"; then
            # A local branch can exist without an origin ref if it never got pushed.
            if [[ "$DRY_RUN" == "true" ]]; then
                log_info "${repo_name}: would create origin/${branch} by pushing local ${branch}"
            else
                if ! git_with_auth_timed "$PHASE10_PUSH_TIMEOUT_SEC" \
                    git \
                    -c "http.connectTimeout=${PHASE10_HTTP_CONNECT_TIMEOUT}" \
                    -c "http.lowSpeedLimit=${PHASE10_HTTP_LOW_SPEED_LIMIT}" \
                    -c "http.lowSpeedTime=${PHASE10_HTTP_LOW_SPEED_TIME}" \
                    -C "$repo_path" push --no-verify origin "refs/heads/${branch}:refs/heads/${branch}" >/dev/null; then
                    log_error "${repo_name}: could not create origin/${branch} while setting tracking"
                    return 1
                fi
            fi
        fi

        if [[ "$DRY_RUN" == "true" ]]; then
            log_info "${repo_name}: would set upstream ${branch} -> origin/${branch}"
            continue
        else
            if ! git -C "$repo_path" branch --set-upstream-to="origin/${branch}" "$branch" >/dev/null 2>&1; then
                log_error "${repo_name}: failed to set upstream for branch '${branch}' to origin/${branch}"
                return 1
            fi

            upstream_remote=$(git -C "$repo_path" for-each-ref --format='%(upstream:remotename)' "refs/heads/${branch}")
            upstream_short=$(git -C "$repo_path" for-each-ref --format='%(upstream:short)' "refs/heads/${branch}")
            if [[ "$upstream_remote" != "origin" ]] || [[ "$upstream_short" != "origin/${branch}" ]]; then
                log_error "${repo_name}: branch '${branch}' upstream is '${upstream_short:-<none>}' (expected origin/${branch})"
                return 1
            fi
        fi
    done < <(git -C "$repo_path" for-each-ref --format='%(refname:short)' refs/heads)

    if [[ "$branch_count" -eq 0 ]]; then
        log_warn "${repo_name}: no local branches found"
    fi

    return 0
}
if ! phase10_discover_local_repos "$LOCAL_REPO_ROOT" "$GITHUB_USERNAME" "$SCRIPT_DIR" 0; then
    exit 1
fi

for include_path in "${INCLUDE_PATHS[@]}"; do
    [[ -z "$include_path" ]] && continue
    if ! phase10_include_repo_path "$include_path" "$GITHUB_USERNAME"; then
        exit 1
    fi
done

phase10_sort_repo_arrays

if ! phase10_enforce_expected_count "$EXPECTED_REPO_COUNT" "$LOCAL_REPO_ROOT"; then
    exit 1
fi

log_info "Discovered ${#PHASE10_REPO_NAMES[@]} local repos in ${LOCAL_REPO_ROOT}"
for i in "${!PHASE10_REPO_NAMES[@]}"; do
    log_info "  - ${PHASE10_REPO_NAMES[$i]} -> ${PHASE10_REPO_PATHS[$i]}"
done

setup_git_auth

SUCCESS=0
FAILED=0

for i in "${!PHASE10_REPO_NAMES[@]}"; do
    repo_name="${PHASE10_REPO_NAMES[$i]}"
    repo_path="${PHASE10_REPO_PATHS[$i]}"
    github_url="${PHASE10_GITHUB_URLS[$i]}"
    gitea_url="$(phase10_canonical_gitea_url "$GITEA_DOMAIN" "$GITEA_ORG_NAME" "$repo_name")"

    log_info "--- Processing repo: ${repo_name} (${repo_path}) ---"

    if ! ensure_github_remote "$repo_path" "$repo_name" "$github_url"; then
        FAILED=$((FAILED + 1))
        continue
    fi

    if ! ensure_gitea_repo_exists "$repo_name"; then
        FAILED=$((FAILED + 1))
        continue
    fi

    if [[ "$DRY_RUN" == "true" ]]; then
        dry_run_compare_local_and_remote "$repo_path" "$repo_name" "$gitea_url"
    fi

    if ! ensure_gitea_origin "$repo_path" "$repo_name" "$gitea_url"; then
        log_error "${repo_name}: failed to set origin to ${gitea_url}"
        FAILED=$((FAILED + 1))
        continue
    fi

    if ! push_all_refs_to_origin "$repo_path" "$repo_name"; then
        FAILED=$((FAILED + 1))
        continue
    fi

    if ! retarget_tracking_to_origin "$repo_path" "$repo_name"; then
        FAILED=$((FAILED + 1))
        continue
    fi

    if [[ "$DRY_RUN" == "true" ]]; then
        log_success "${repo_name}: dry-run plan complete"
    else
        log_success "${repo_name}: origin now points to Gitea and tracking updated"
    fi
    SUCCESS=$((SUCCESS + 1))
done

printf '\n'
TOTAL=${#PHASE10_REPO_NAMES[@]}
log_info "Results: ${SUCCESS} succeeded, ${FAILED} failed (out of ${TOTAL})"

if [[ "$DRY_RUN" == "true" ]]; then
    if [[ "$FAILED" -gt 0 ]]; then
        log_error "Phase 10 dry-run found ${FAILED} error(s); no changes were made"
        exit 1
    fi
    log_success "Phase 10 dry-run complete — no changes were made"
    exit 0
fi

if [[ "$FAILED" -gt 0 ]]; then
    log_error "Phase 10 failed for one or more repos"
    exit 1
fi

log_success "Phase 10 complete — local repos now push/track via Gitea origin"
133  phase10_post_check.sh  Executable file
@@ -0,0 +1,133 @@
#!/usr/bin/env bash
set -euo pipefail

# =============================================================================
# phase10_post_check.sh — Verify local repo remote cutover to Gitea
# Checks for each discovered local repo:
#   1. origin points to Gitea org/repo
#   2. github points to GitHub owner/repo
#   3. every local branch tracks origin/<branch>
# =============================================================================

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
source "${SCRIPT_DIR}/lib/phase10_common.sh"

load_env
require_vars GITEA_ORG_NAME GITEA_DOMAIN GITHUB_USERNAME

phase_header 10 "Local Repo Remote Cutover — Post-Check"

LOCAL_REPO_ROOT="${PHASE10_LOCAL_ROOT:-/Users/s/development}"
EXPECTED_REPO_COUNT="${PHASE10_EXPECTED_REPO_COUNT:-3}"
INCLUDE_PATHS=()

if [[ -n "${PHASE10_INCLUDE_PATHS:-}" ]]; then
    # Space-delimited list of extra repo roots to include in phase10 discovery.
    read -r -a INCLUDE_PATHS <<< "${PHASE10_INCLUDE_PATHS}"
fi
for arg in "$@"; do
    case "$arg" in
        --local-root=*) LOCAL_REPO_ROOT="${arg#*=}" ;;
        --expected-count=*) EXPECTED_REPO_COUNT="${arg#*=}" ;;
        --include-path=*) INCLUDE_PATHS+=("${arg#*=}") ;;
        --help|-h)
            cat <<EOF
Usage: $(basename "$0") [options]

Options:
  --local-root=PATH     Root folder containing local repos (default: /Users/s/development)
  --expected-count=N    Require exactly N discovered repos (default: 3, 0 disables)
  --include-path=PATH   Explicit repo root to include (repeatable)
  --help                Show this help
EOF
            exit 0
            ;;
        *)
            log_error "Unknown argument: $arg"
            exit 1
            ;;
    esac
done

if ! [[ "$EXPECTED_REPO_COUNT" =~ ^[0-9]+$ ]]; then
    log_error "--expected-count must be a non-negative integer"
    exit 1
fi

if ! phase10_discover_local_repos "$LOCAL_REPO_ROOT" "$GITHUB_USERNAME" "$SCRIPT_DIR" 0; then
    exit 1
fi

for include_path in "${INCLUDE_PATHS[@]}"; do
    [[ -z "$include_path" ]] && continue
    if ! phase10_include_repo_path "$include_path" "$GITHUB_USERNAME"; then
        exit 1
    fi
done

phase10_sort_repo_arrays

if ! phase10_enforce_expected_count "$EXPECTED_REPO_COUNT" "$LOCAL_REPO_ROOT"; then
    exit 1
fi

PASS=0
FAIL=0

for i in "${!PHASE10_REPO_NAMES[@]}"; do
    repo_name="${PHASE10_REPO_NAMES[$i]}"
    repo_path="${PHASE10_REPO_PATHS[$i]}"
    github_url="${PHASE10_GITHUB_URLS[$i]}"
    gitea_url="$(phase10_canonical_gitea_url "$GITEA_DOMAIN" "$GITEA_ORG_NAME" "$repo_name")"

    log_info "--- Checking repo: ${repo_name} (${repo_path}) ---"

    origin_url="$(git -C "$repo_path" remote get-url origin 2>/dev/null || true)"
    if [[ -n "$origin_url" ]] && phase10_url_is_gitea_repo "$origin_url" "$GITEA_DOMAIN" "$GITEA_ORG_NAME" "$repo_name"; then
        log_success "origin points to Gitea (${gitea_url})"
        PASS=$((PASS + 1))
    else
        log_error "FAIL: origin does not point to ${gitea_url} (found: ${origin_url:-<missing>})"
        FAIL=$((FAIL + 1))
    fi

    github_remote_url="$(git -C "$repo_path" remote get-url github 2>/dev/null || true)"
    if [[ -n "$github_remote_url" ]] && phase10_url_is_github_repo "$github_remote_url" "$GITHUB_USERNAME" "$repo_name"; then
        log_success "github points to GitHub (${github_url})"
        PASS=$((PASS + 1))
    else
        log_error "FAIL: github does not point to ${github_url} (found: ${github_remote_url:-<missing>})"
        FAIL=$((FAIL + 1))
    fi

    branch_count=0
    while IFS= read -r branch; do
        [[ -z "$branch" ]] && continue
        branch_count=$((branch_count + 1))
        upstream_remote=$(git -C "$repo_path" for-each-ref --format='%(upstream:remotename)' "refs/heads/${branch}")
        upstream_short=$(git -C "$repo_path" for-each-ref --format='%(upstream:short)' "refs/heads/${branch}")
        if [[ "$upstream_remote" == "origin" ]] && [[ "$upstream_short" == "origin/${branch}" ]]; then
            log_success "branch ${branch} tracks origin/${branch}"
            PASS=$((PASS + 1))
        else
            log_error "FAIL: branch ${branch} tracks ${upstream_short:-<none>} (expected origin/${branch})"
|
||||
FAIL=$((FAIL + 1))
|
||||
fi
|
||||
done < <(git -C "$repo_path" for-each-ref --format='%(refname:short)' refs/heads)
|
||||
|
||||
if [[ "$branch_count" -eq 0 ]]; then
|
||||
log_warn "No local branches found in ${repo_name}"
|
||||
fi
|
||||
done
|
||||
|
||||
printf '\n'
|
||||
log_info "Results: ${PASS} passed, ${FAIL} failed"
|
||||
|
||||
if [[ "$FAIL" -gt 0 ]]; then
|
||||
log_error "Phase 10 post-check FAILED"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
log_success "Phase 10 post-check PASSED — local repos track Gitea origin"
|
||||
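The `--flag=value` options above are parsed with bash parameter expansion rather than `getopts`. A minimal standalone sketch of that idiom (the `parse_args` function and the example paths are hypothetical, not part of these scripts):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the --flag=value parsing idiom used by the
# phase 10 scripts; parse_args and the paths below are illustrative only.
set -euo pipefail

LOCAL_ROOT="/default/root"
INCLUDE_PATHS=()

parse_args() {
    local arg
    for arg in "$@"; do
        case "$arg" in
            --local-root=*)   LOCAL_ROOT="${arg#*=}" ;;        # strip through first '='
            --include-path=*) INCLUDE_PATHS+=("${arg#*=}") ;;  # repeatable flag
            *) echo "Unknown argument: $arg" >&2; return 1 ;;
        esac
    done
}

parse_args --local-root=/tmp/repos --include-path=/tmp/a --include-path=/tmp/b
echo "$LOCAL_ROOT"            # /tmp/repos
echo "${#INCLUDE_PATHS[@]}"   # 2
```

`${arg#*=}` removes everything through the first `=`, so values that themselves contain `=` survive intact.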
193  phase10_teardown.sh  Executable file
@@ -0,0 +1,193 @@
#!/usr/bin/env bash
set -euo pipefail

# =============================================================================
# phase10_teardown.sh — Reverse local repo remote cutover from phase 10
# Reverts local repos so GitHub is origin again:
#   1. Move Gitea origin -> gitea (if present)
#   2. Move github -> origin
#   3. Set local branch upstreams to origin/<branch> where available
# =============================================================================

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"
source "${SCRIPT_DIR}/lib/phase10_common.sh"

load_env
require_vars GITEA_ORG_NAME GITEA_DOMAIN GITHUB_USERNAME

phase_header 10 "Local Repo Remote Cutover — Teardown"

LOCAL_REPO_ROOT="${PHASE10_LOCAL_ROOT:-/Users/s/development}"
EXPECTED_REPO_COUNT="${PHASE10_EXPECTED_REPO_COUNT:-3}"
INCLUDE_PATHS=()
AUTO_YES=false

if [[ -n "${PHASE10_INCLUDE_PATHS:-}" ]]; then
    # Space-delimited list of extra repo roots to include in phase10 discovery.
    read -r -a INCLUDE_PATHS <<< "${PHASE10_INCLUDE_PATHS}"
fi

for arg in "$@"; do
    case "$arg" in
        --local-root=*) LOCAL_REPO_ROOT="${arg#*=}" ;;
        --expected-count=*) EXPECTED_REPO_COUNT="${arg#*=}" ;;
        --include-path=*) INCLUDE_PATHS+=("${arg#*=}") ;;
        --yes|-y) AUTO_YES=true ;;
        --help|-h)
            cat <<EOF
Usage: $(basename "$0") [options]

Options:
  --local-root=PATH     Root folder containing local repos (default: /Users/s/development)
  --expected-count=N    Require exactly N discovered repos (default: 3, 0 disables)
  --include-path=PATH   Explicit repo root to include (repeatable)
  --yes, -y             Skip confirmation prompt
  --help                Show this help
EOF
            exit 0
            ;;
        *)
            log_error "Unknown argument: $arg"
            exit 1
            ;;
    esac
done

if ! [[ "$EXPECTED_REPO_COUNT" =~ ^[0-9]+$ ]]; then
    log_error "--expected-count must be a non-negative integer"
    exit 1
fi

if [[ "$AUTO_YES" != "true" ]]; then
    log_warn "This will revert local repo remotes so GitHub is origin again."
    printf 'Continue? [y/N] ' >&2
    read -r confirm
    if [[ "$confirm" != "y" ]] && [[ "$confirm" != "Y" ]]; then
        log_info "Teardown cancelled"
        exit 0
    fi
fi

if ! phase10_discover_local_repos "$LOCAL_REPO_ROOT" "$GITHUB_USERNAME" "$SCRIPT_DIR" 0; then
    exit 1
fi

# ${arr[@]+...} keeps the expansion valid when the array is empty under
# `set -u` (macOS ships bash 3.2, where a bare "${INCLUDE_PATHS[@]}" aborts).
for include_path in ${INCLUDE_PATHS[@]+"${INCLUDE_PATHS[@]}"}; do
    [[ -z "$include_path" ]] && continue
    if ! phase10_include_repo_path "$include_path" "$GITHUB_USERNAME"; then
        exit 1
    fi
done

phase10_sort_repo_arrays

if ! phase10_enforce_expected_count "$EXPECTED_REPO_COUNT" "$LOCAL_REPO_ROOT"; then
    exit 1
fi

set_tracking_to_origin_where_available() {
    local repo_path="$1" repo_name="$2"
    local branch branch_count
    branch_count=0

    while IFS= read -r branch; do
        [[ -z "$branch" ]] && continue
        branch_count=$((branch_count + 1))

        if git -C "$repo_path" show-ref --verify --quiet "refs/remotes/origin/${branch}"; then
            if git -C "$repo_path" branch --set-upstream-to="origin/${branch}" "$branch" >/dev/null 2>&1; then
                log_success "${repo_name}: branch ${branch} now tracks origin/${branch}"
            else
                log_warn "${repo_name}: could not set upstream for ${branch}"
            fi
        else
            log_warn "${repo_name}: origin/${branch} not found (upstream unchanged)"
        fi
    done < <(git -C "$repo_path" for-each-ref --format='%(refname:short)' refs/heads)

    if [[ "$branch_count" -eq 0 ]]; then
        log_warn "${repo_name}: no local branches found"
    fi
}

ensure_origin_is_github() {
    local repo_path="$1" repo_name="$2" github_url="$3" gitea_url="$4"
    local origin_url github_url_existing gitea_url_existing

    origin_url="$(git -C "$repo_path" remote get-url origin 2>/dev/null || true)"
    github_url_existing="$(git -C "$repo_path" remote get-url github 2>/dev/null || true)"
    gitea_url_existing="$(git -C "$repo_path" remote get-url gitea 2>/dev/null || true)"

    if [[ -n "$origin_url" ]]; then
        if phase10_url_is_github_repo "$origin_url" "$GITHUB_USERNAME" "$repo_name"; then
            git -C "$repo_path" remote set-url origin "$github_url"
            if [[ -n "$github_url_existing" ]]; then
                git -C "$repo_path" remote remove github
            fi
            return 0
        fi

        if phase10_url_is_gitea_repo "$origin_url" "$GITEA_DOMAIN" "$GITEA_ORG_NAME" "$repo_name"; then
            if [[ -z "$gitea_url_existing" ]]; then
                git -C "$repo_path" remote rename origin gitea
            else
                git -C "$repo_path" remote set-url gitea "$gitea_url"
                git -C "$repo_path" remote remove origin
            fi
        else
            log_error "${repo_name}: origin remote is unexpected (${origin_url})"
            return 1
        fi
    fi

    if git -C "$repo_path" remote get-url origin >/dev/null 2>&1; then
        :
    elif [[ -n "$github_url_existing" ]]; then
        git -C "$repo_path" remote rename github origin
    else
        git -C "$repo_path" remote add origin "$github_url"
    fi

    git -C "$repo_path" remote set-url origin "$github_url"

    if git -C "$repo_path" remote get-url github >/dev/null 2>&1; then
        git -C "$repo_path" remote remove github
    fi

    if git -C "$repo_path" remote get-url gitea >/dev/null 2>&1; then
        git -C "$repo_path" remote set-url gitea "$gitea_url"
    fi

    return 0
}

SUCCESS=0
FAILED=0

for i in "${!PHASE10_REPO_NAMES[@]}"; do
    repo_name="${PHASE10_REPO_NAMES[$i]}"
    repo_path="${PHASE10_REPO_PATHS[$i]}"
    github_url="${PHASE10_GITHUB_URLS[$i]}"
    gitea_url="$(phase10_canonical_gitea_url "$GITEA_DOMAIN" "$GITEA_ORG_NAME" "$repo_name")"

    log_info "--- Reverting repo: ${repo_name} (${repo_path}) ---"
    if ! ensure_origin_is_github "$repo_path" "$repo_name" "$github_url" "$gitea_url"; then
        FAILED=$((FAILED + 1))
        continue
    fi

    set_tracking_to_origin_where_available "$repo_path" "$repo_name"
    SUCCESS=$((SUCCESS + 1))
done

printf '\n'
TOTAL=${#PHASE10_REPO_NAMES[@]}
log_info "Results: ${SUCCESS} reverted, ${FAILED} failed (out of ${TOTAL})"

if [[ "$FAILED" -gt 0 ]]; then
    log_error "Phase 10 teardown completed with failures"
    exit 1
fi

log_success "Phase 10 teardown complete"
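The remote shuffle performed by `ensure_origin_is_github` can be seen in miniature on a throwaway repository. This assumes `git` is on PATH; the URLs are hypothetical placeholders, not the real repos:

```shell
# Throwaway demo of the rename-based remote cutover reversal.
# URLs are hypothetical placeholders.
set -euo pipefail
tmp="$(mktemp -d)"
git init -q "$tmp/demo"
git -C "$tmp/demo" remote add origin "https://gitea.example.com/org/demo.git"
git -C "$tmp/demo" remote add github "https://github.com/user/demo.git"

git -C "$tmp/demo" remote rename origin gitea   # step 1: origin (Gitea) -> gitea
git -C "$tmp/demo" remote rename github origin  # step 2: github -> origin

origin_after="$(git -C "$tmp/demo" remote get-url origin)"
gitea_after="$(git -C "$tmp/demo" remote get-url gitea)"
echo "$origin_after"   # https://github.com/user/demo.git
echo "$gitea_after"    # https://gitea.example.com/org/demo.git
rm -rf "$tmp"
```

Renaming rather than re-adding preserves each remote's URL and any per-remote configuration, which is why the teardown prefers `git remote rename` over remove-and-add when no name collision exists.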
288  phase11_custom_runners.sh  Executable file
@@ -0,0 +1,288 @@
#!/usr/bin/env bash
set -euo pipefail

# =============================================================================
# phase11_custom_runners.sh — Deploy per-repo runner infrastructure & variables
# Depends on: Phase 3 complete (runner infra), Phase 4 complete (repos on Gitea)
#
# Steps:
#   1. Build custom toolchain images on Unraid (go-node-runner, jvm-android-runner)
#   2. Consolidate macOS runners into a shared instance-level runner
#   3. Deploy per-repo Docker runners via manage_runner.sh
#   4. Set Gitea repository variables from repo_variables.conf
#
# Runner strategy:
#   - Linux runners: repo-scoped, separate toolchain images per repo
#   - Android emulator: shared (repos=all) — any repo can use it
#   - macOS runner: shared (repos=all) — any repo can use it
#
# Idempotent: skips images that already exist, runners already running,
# and variables that already match.
# =============================================================================

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"

load_env
require_vars GITEA_ADMIN_TOKEN GITEA_INTERNAL_URL GITEA_ORG_NAME \
    UNRAID_IP UNRAID_SSH_USER UNRAID_SSH_PORT \
    GO_NODE_RUNNER_CONTEXT JVM_ANDROID_RUNNER_CONTEXT \
    ACT_RUNNER_VERSION

phase_header 11 "Custom Runner Infrastructure"

REPO_VARS_CONF="${SCRIPT_DIR}/repo_variables.conf"
REBUILD_IMAGES=false

for arg in "$@"; do
    case "$arg" in
        --rebuild-images) REBUILD_IMAGES=true ;;
        *) ;;
    esac
done

SUCCESS=0
FAILED=0

# ---------------------------------------------------------------------------
# Helper: rsync a build context directory to Unraid
# ---------------------------------------------------------------------------
rsync_to_unraid() {
    local src="$1" dest="$2"
    local ssh_key="${UNRAID_SSH_KEY:-}"
    local ssh_opts="ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=accept-new -p ${UNRAID_SSH_PORT}"
    if [[ -n "$ssh_key" ]]; then
        ssh_opts="${ssh_opts} -i ${ssh_key}"
    fi
    rsync -az --delete \
        --exclude='.env' \
        --exclude='.env.*' \
        --exclude='envs/' \
        --exclude='.git' \
        --exclude='.gitignore' \
        -e "$ssh_opts" \
        "${src}/" "${UNRAID_SSH_USER}@${UNRAID_IP}:${dest}/"
}

# ---------------------------------------------------------------------------
# Helper: check if a Docker image exists on Unraid
# ---------------------------------------------------------------------------
image_exists_on_unraid() {
    local tag="$1"
    ssh_exec "UNRAID" "docker image inspect '${tag}' >/dev/null 2>&1"
}

# ---------------------------------------------------------------------------
# Helper: list all keys in an INI section (for repo_variables.conf)
# ---------------------------------------------------------------------------
ini_list_keys() {
    local file="$1" section="$2"
    local in_section=false
    local line k
    while IFS= read -r line; do
        line="${line#"${line%%[![:space:]]*}"}"
        line="${line%"${line##*[![:space:]]}"}"
        [[ -z "$line" ]] && continue
        [[ "$line" == \#* ]] && continue
        if [[ "$line" =~ ^\[([^]]+)\] ]]; then
            if [[ "${BASH_REMATCH[1]}" == "$section" ]]; then
                in_section=true
            elif $in_section; then
                break
            fi
            continue
        fi
        if $in_section && [[ "$line" =~ ^([^=]+)= ]]; then
            k="${BASH_REMATCH[1]}"
            k="${k#"${k%%[![:space:]]*}"}"
            k="${k%"${k##*[![:space:]]}"}"
            printf '%s\n' "$k"
        fi
    done < "$file"
}

# ---------------------------------------------------------------------------
# Helper: upsert a Gitea repo variable (create or update)
# ---------------------------------------------------------------------------
upsert_repo_variable() {
    local repo="$1" var_name="$2" var_value="$3"
    local owner="${GITEA_ORG_NAME}"

    # Check if variable already exists with correct value
    local existing
    if existing=$(gitea_api GET "/repos/${owner}/${repo}/actions/variables/${var_name}" 2>/dev/null); then
        local current_value
        current_value=$(printf '%s' "$existing" | jq -r '.value // .data // empty' 2>/dev/null)
        if [[ "$current_value" == "$var_value" ]]; then
            log_info "  ${var_name} already set correctly — skipping"
            return 0
        fi
        # Update existing variable
        if gitea_api PUT "/repos/${owner}/${repo}/actions/variables/${var_name}" \
            "$(jq -n --arg v "$var_value" '{value: $v}')" >/dev/null 2>&1; then
            log_success "  Updated ${var_name}"
            return 0
        else
            log_error "  Failed to update ${var_name}"
            return 1
        fi
    fi

    # Create new variable
    if gitea_api POST "/repos/${owner}/${repo}/actions/variables" \
        "$(jq -n --arg n "$var_name" --arg v "$var_value" '{name: $n, value: $v}')" >/dev/null 2>&1; then
        log_success "  Created ${var_name}"
        return 0
    else
        log_error "  Failed to create ${var_name}"
        return 1
    fi
}

# =========================================================================
# Step 1: Build toolchain images on Unraid
# =========================================================================
log_step 1 "Building toolchain images on Unraid"

REMOTE_BUILD_BASE="/tmp/gitea-runner-builds"

# Image build definitions: TAG|LOCAL_CONTEXT|DOCKER_TARGET
IMAGE_BUILDS=(
    "go-node-runner:latest|${GO_NODE_RUNNER_CONTEXT}|"
    "jvm-android-runner:slim|${JVM_ANDROID_RUNNER_CONTEXT}|slim"
    "jvm-android-runner:full|${JVM_ANDROID_RUNNER_CONTEXT}|full"
)

for build_entry in "${IMAGE_BUILDS[@]}"; do
    IFS='|' read -r img_tag build_context docker_target <<< "$build_entry"

    if [[ "$REBUILD_IMAGES" != "true" ]] && image_exists_on_unraid "$img_tag"; then
        log_info "Image ${img_tag} already exists on Unraid — skipping"
        continue
    fi

    # Derive a unique remote directory name from the image tag
    remote_dir="${REMOTE_BUILD_BASE}/${img_tag%%:*}"

    log_info "Syncing build context for ${img_tag}..."
    ssh_exec "UNRAID" "mkdir -p '${remote_dir}'"
    rsync_to_unraid "$build_context" "$remote_dir"

    log_info "Building ${img_tag} on Unraid (this may take a while)..."
    local_build_args=""
    if [[ -n "$docker_target" ]]; then
        local_build_args="--target ${docker_target}"
    fi

    # shellcheck disable=SC2086
    if ssh_exec "UNRAID" "cd '${remote_dir}' && docker build ${local_build_args} -t '${img_tag}' ."; then
        log_success "Built ${img_tag}"
    else
        log_error "Failed to build ${img_tag}"
        FAILED=$((FAILED + 1))
    fi
done

# Clean up build contexts on Unraid
ssh_exec "UNRAID" "rm -rf '${REMOTE_BUILD_BASE}'" 2>/dev/null || true

# =========================================================================
# Step 2: Consolidate macOS runners into shared instance-level runner
# =========================================================================
log_step 2 "Consolidating macOS runners"

# Old per-repo macOS runners to remove
OLD_MAC_RUNNERS=(
    macbook-runner-periodvault
    macbook-runner-intermittent-fasting-tracker
)

for old_runner in "${OLD_MAC_RUNNERS[@]}"; do
    if ini_list_sections "${SCRIPT_DIR}/runners.conf" | grep -qx "$old_runner" 2>/dev/null; then
        log_warn "Old runner section '${old_runner}' is still present in runners.conf — phase 11 expects it removed"
        log_warn "  (if it is still registered in Gitea, run: manage_runner.sh remove --name ${old_runner})"
    fi
    # Remove the runner if its launchd service is still registered
    if launchctl list 2>/dev/null | grep -q "com.gitea.runner.${old_runner}"; then
        log_info "Removing old macOS runner '${old_runner}'..."
        "${SCRIPT_DIR}/manage_runner.sh" remove --name "$old_runner" 2>/dev/null || true
    fi
done

# Deploy the new shared macOS runner
if launchctl list 2>/dev/null | grep -q "com.gitea.runner.macbook-runner"; then
    log_info "Shared macOS runner 'macbook-runner' already registered — skipping"
else
    log_info "Deploying shared macOS runner 'macbook-runner'..."
    if "${SCRIPT_DIR}/manage_runner.sh" add --name macbook-runner; then
        log_success "Shared macOS runner deployed"
    else
        log_error "Failed to deploy shared macOS runner"
        FAILED=$((FAILED + 1))
    fi
fi

# =========================================================================
# Step 3: Deploy per-repo and shared Docker runners
# =========================================================================
log_step 3 "Deploying Docker runners"

# Phase 11 Docker runners (defined in runners.conf)
PHASE11_DOCKER_RUNNERS=(
    unraid-go-node-1
    unraid-go-node-2
    unraid-go-node-3
    unraid-jvm-slim-1
    unraid-jvm-slim-2
    unraid-android-emulator
)

for runner_name in "${PHASE11_DOCKER_RUNNERS[@]}"; do
    log_info "--- Deploying runner: ${runner_name} ---"
    if "${SCRIPT_DIR}/manage_runner.sh" add --name "$runner_name"; then
        SUCCESS=$((SUCCESS + 1))
    else
        log_error "Failed to deploy runner '${runner_name}'"
        FAILED=$((FAILED + 1))
    fi
done

# =========================================================================
# Step 4: Set repository variables from repo_variables.conf
# =========================================================================
log_step 4 "Setting Gitea repository variables"

if [[ ! -f "$REPO_VARS_CONF" ]]; then
    log_warn "repo_variables.conf not found — skipping variable setup"
else
    # Iterate all sections (repos) in repo_variables.conf
    while IFS= read -r repo; do
        [[ -z "$repo" ]] && continue
        log_info "--- Setting variables for repo: ${repo} ---"

        # Iterate all keys in this section
        while IFS= read -r var_name; do
            [[ -z "$var_name" ]] && continue
            var_value=$(ini_get "$REPO_VARS_CONF" "$repo" "$var_name" "")
            if [[ -z "$var_value" ]]; then
                log_warn "  ${var_name} has empty value — skipping"
                continue
            fi
            upsert_repo_variable "$repo" "$var_name" "$var_value" || FAILED=$((FAILED + 1))
        done < <(ini_list_keys "$REPO_VARS_CONF" "$repo")

    done < <(ini_list_sections "$REPO_VARS_CONF")
fi

# ---------------------------------------------------------------------------
# Summary
# ---------------------------------------------------------------------------
printf '\n'
log_info "Results: ${SUCCESS} runners deployed, ${FAILED} failures"

if [[ $FAILED -gt 0 ]]; then
    log_error "Some operations failed — check logs above"
    exit 1
fi

log_success "Phase 11 complete — custom runner infrastructure deployed"
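The pure-bash INI walk in `ini_list_keys` can be exercised on its own against a scratch file. The section and key names below are made up for the demo; the parser body mirrors the helper above:

```shell
# Standalone run of the ini_list_keys parser against a scratch file.
# Section/key names are hypothetical examples.
set -euo pipefail
conf="$(mktemp)"
cat > "$conf" <<'EOF'
[augur]
GO_VERSION = 1.22
NODE_VERSION=20

[periodvault]
JAVA_VERSION=17
EOF

ini_list_keys() {
    local file="$1" section="$2" in_section=false line k
    while IFS= read -r line; do
        line="${line#"${line%%[![:space:]]*}"}"   # ltrim
        line="${line%"${line##*[![:space:]]}"}"   # rtrim
        [[ -z "$line" || "$line" == \#* ]] && continue
        if [[ "$line" =~ ^\[([^]]+)\] ]]; then
            if [[ "${BASH_REMATCH[1]}" == "$section" ]]; then
                in_section=true
            elif $in_section; then
                break                              # left the target section
            fi
            continue
        fi
        if $in_section && [[ "$line" =~ ^([^=]+)= ]]; then
            k="${BASH_REMATCH[1]}"
            k="${k#"${k%%[![:space:]]*}"}"
            k="${k%"${k##*[![:space:]]}"}"         # drop space before '='
            printf '%s\n' "$k"
        fi
    done < "$file"
}

keys="$(ini_list_keys "$conf" augur)"
printf '%s\n' "$keys"   # GO_VERSION then NODE_VERSION
rm -f "$conf"
```

Note that `GO_VERSION = 1.22` still yields the bare key `GO_VERSION`: the regex captures everything before the first `=`, and the rtrim expansion strips the trailing space.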
204  phase11_post_check.sh  Executable file
@@ -0,0 +1,204 @@
#!/usr/bin/env bash
set -euo pipefail

# =============================================================================
# phase11_post_check.sh — Verify custom runner infrastructure deployment
# Checks:
#   1. Toolchain images exist on Unraid
#   2. All phase 11 runners registered and online in Gitea
#   3. Shared macOS runner has correct labels
#   4. Repository variables set correctly
#   5. KVM available on Unraid (warning only)
# =============================================================================

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"

load_env
require_vars GITEA_ADMIN_TOKEN GITEA_INTERNAL_URL GITEA_ORG_NAME \
    UNRAID_IP UNRAID_SSH_USER UNRAID_SSH_PORT

phase_header 11 "Custom Runners — Post-Check"

REPO_VARS_CONF="${SCRIPT_DIR}/repo_variables.conf"
PASS=0
FAIL=0
WARN=0

run_check() {
    local desc="$1"
    shift
    if "$@"; then
        log_success "$desc"
        PASS=$((PASS + 1))
    else
        log_error "FAIL: $desc"
        FAIL=$((FAIL + 1))
    fi
}

run_warn_check() {
    local desc="$1"
    shift
    if "$@"; then
        log_success "$desc"
        PASS=$((PASS + 1))
    else
        log_warn "WARN: $desc"
        WARN=$((WARN + 1))
    fi
}

# =========================================================================
# Check 1: Toolchain images exist on Unraid
# =========================================================================
log_info "--- Checking toolchain images ---"

check_image() {
    local tag="$1"
    ssh_exec "UNRAID" "docker image inspect '${tag}' >/dev/null 2>&1"
}

run_check "Image go-node-runner:latest exists on Unraid" check_image "go-node-runner:latest"
run_check "Image jvm-android-runner:slim exists on Unraid" check_image "jvm-android-runner:slim"
run_check "Image jvm-android-runner:full exists on Unraid" check_image "jvm-android-runner:full"

# =========================================================================
# Check 2: All phase 11 runners registered and online
# =========================================================================
log_info "--- Checking runner status ---"

# Fetch all runners from Gitea admin API (single call)
ALL_RUNNERS=$(gitea_api GET "/admin/runners" 2>/dev/null || echo "[]")

check_runner_online() {
    local name="$1"
    local status
    status=$(printf '%s' "$ALL_RUNNERS" | jq -r --arg n "$name" \
        '[.[] | select(.name == $n)] | .[0].status // "not-found"' 2>/dev/null)
    if [[ "$status" == "not-found" ]] || [[ -z "$status" ]]; then
        log_error "  Runner '${name}' not found in Gitea"
        return 1
    fi
    if [[ "$status" == "offline" ]] || [[ "$status" == "2" ]]; then
        log_error "  Runner '${name}' is offline"
        return 1
    fi
    return 0
}

PHASE11_RUNNERS=(
    macbook-runner
    unraid-go-node-1
    unraid-go-node-2
    unraid-go-node-3
    unraid-jvm-slim-1
    unraid-jvm-slim-2
    unraid-android-emulator
)

for runner in "${PHASE11_RUNNERS[@]}"; do
    run_check "Runner '${runner}' registered and online" check_runner_online "$runner"
done

# =========================================================================
# Check 3: Shared macOS runner has correct labels
# =========================================================================
log_info "--- Checking macOS runner labels ---"

check_mac_labels() {
    local labels
    labels=$(printf '%s' "$ALL_RUNNERS" | jq -r \
        '[.[] | select(.name == "macbook-runner")] | .[0].labels // [] | .[].name' 2>/dev/null)
    local missing=0
    for expected in "self-hosted" "macOS" "ARM64"; do
        if ! printf '%s' "$labels" | grep -qx "$expected" 2>/dev/null; then
            log_error "  macbook-runner missing label: ${expected}"
            missing=1
        fi
    done
    return "$missing"
}

run_check "macbook-runner has labels: self-hosted, macOS, ARM64" check_mac_labels

# =========================================================================
# Check 4: Repository variables set correctly
# =========================================================================
log_info "--- Checking repository variables ---"

check_repo_variable() {
    local repo="$1" var_name="$2" expected="$3"
    local owner="${GITEA_ORG_NAME}"
    local response
    if ! response=$(gitea_api GET "/repos/${owner}/${repo}/actions/variables/${var_name}" 2>/dev/null); then
        log_error "  Variable ${var_name} not found on ${repo}"
        return 1
    fi
    local actual
    actual=$(printf '%s' "$response" | jq -r '.value // .data // empty' 2>/dev/null)
    if [[ "$actual" != "$expected" ]]; then
        log_error "  Variable ${var_name} on ${repo}: expected '${expected}', got '${actual}'"
        return 1
    fi
    return 0
}

if [[ -f "$REPO_VARS_CONF" ]]; then
    while IFS= read -r repo; do
        [[ -z "$repo" ]] && continue
        # Read all keys from the section using inline parsing
        local_in_section=false
        while IFS= read -r line; do
            line="${line#"${line%%[![:space:]]*}"}"
            line="${line%"${line##*[![:space:]]}"}"
            [[ -z "$line" ]] && continue
            [[ "$line" == \#* ]] && continue
            if [[ "$line" =~ ^\[([^]]+)\] ]]; then
                if [[ "${BASH_REMATCH[1]}" == "$repo" ]]; then
                    local_in_section=true
                elif $local_in_section; then
                    break
                fi
                continue
            fi
            if $local_in_section && [[ "$line" =~ ^([^=]+)=(.*) ]]; then
                k="${BASH_REMATCH[1]}"
                v="${BASH_REMATCH[2]}"
                k="${k#"${k%%[![:space:]]*}"}"
                k="${k%"${k##*[![:space:]]}"}"
                v="${v#"${v%%[![:space:]]*}"}"
                v="${v%"${v##*[![:space:]]}"}"
                run_check "Variable ${k} on ${repo}" check_repo_variable "$repo" "$k" "$v"
            fi
        done < "$REPO_VARS_CONF"
    done < <(ini_list_sections "$REPO_VARS_CONF")
else
    log_warn "repo_variables.conf not found — skipping variable checks"
    WARN=$((WARN + 1))
fi

# =========================================================================
# Check 5: KVM available on Unraid
# =========================================================================
log_info "--- Checking KVM availability ---"

check_kvm() {
    ssh_exec "UNRAID" "test -c /dev/kvm"
}

run_warn_check "KVM device available on Unraid (/dev/kvm)" check_kvm

# ---------------------------------------------------------------------------
# Summary
# ---------------------------------------------------------------------------
printf '\n'
TOTAL=$((PASS + FAIL + WARN))
log_info "Results: ${PASS} passed, ${FAIL} failed, ${WARN} warnings (out of ${TOTAL})"

if [[ $FAIL -gt 0 ]]; then
    log_error "Some checks failed — review above"
    exit 1
fi

log_success "Phase 11 post-check complete"
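The `run_check`/`run_warn_check` pattern above keeps `set -e` from aborting the script on a failed probe, because the probed command runs as the condition of an `if`. A hypothetical miniature of the counter pattern (the check descriptions below are made up):

```shell
# Minimal sketch of the run_check / run_warn_check counter pattern.
set -euo pipefail
PASS=0; FAIL=0; WARN=0

run_check() {
    local desc="$1"; shift
    # Running "$@" inside the if-condition exempts it from `set -e`.
    if "$@"; then echo "ok:   $desc"; PASS=$((PASS + 1))
    else echo "FAIL: $desc"; FAIL=$((FAIL + 1)); fi
}

run_warn_check() {
    local desc="$1"; shift
    if "$@"; then echo "ok:   $desc"; PASS=$((PASS + 1))
    else echo "WARN: $desc"; WARN=$((WARN + 1)); fi
}

run_check "true succeeds" true
run_check "false is counted, not fatal" false
run_warn_check "missing device downgrades to a warning" test -c /dev/definitely-not-a-device
echo "Results: ${PASS} passed, ${FAIL} failed, ${WARN} warnings"
```

A failing check increments a counter instead of killing the run, so the post-check can report every problem in one pass and decide the exit code from `FAIL` at the end.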
185  phase11_teardown.sh  Executable file
@@ -0,0 +1,185 @@
|
||||
#!/usr/bin/env bash
|
||||
set -euo pipefail
|
||||
|
||||
# =============================================================================
|
||||
# phase11_teardown.sh — Remove custom runner infrastructure deployed by phase 11
|
||||
# Reverses:
|
||||
# 1. Repository variables
|
||||
# 2. Docker runners (per-repo + shared emulator)
|
||||
# 3. Shared macOS runner → restore original per-repo macOS runners
|
||||
# 4. Toolchain images on Unraid
|
||||
# =============================================================================
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
source "${SCRIPT_DIR}/lib/common.sh"
|
||||
|
||||
load_env
|
||||
require_vars GITEA_ADMIN_TOKEN GITEA_INTERNAL_URL GITEA_ORG_NAME
|
||||
|
||||
phase_header 11 "Custom Runners — Teardown"
|
||||
|
||||
REPO_VARS_CONF="${SCRIPT_DIR}/repo_variables.conf"
|
||||
AUTO_YES=false
|
||||
|
||||
for arg in "$@"; do
|
||||
case "$arg" in
|
||||
--yes|-y) AUTO_YES=true ;;
|
||||
*) ;;
|
||||
esac
|
||||
done
|
||||
|
||||
if [[ "$AUTO_YES" != "true" ]]; then
|
||||
log_warn "This will remove all phase 11 custom runners and repo variables."
|
||||
printf 'Continue? [y/N] ' >&2
|
||||
read -r confirm
|
||||
if [[ "$confirm" != "y" ]] && [[ "$confirm" != "Y" ]]; then
|
||||
log_info "Aborted"
|
||||
exit 0
|
||||
fi
|
||||
fi
|
||||
|
||||
REMOVED=0
|
||||
FAILED=0
|
||||
|
||||
# =========================================================================
# Step 1: Delete repository variables
# =========================================================================
log_step 1 "Removing repository variables"

if [[ -f "$REPO_VARS_CONF" ]]; then
    while IFS= read -r repo; do
        [[ -z "$repo" ]] && continue
        log_info "--- Removing variables for repo: ${repo} ---"

        # Parse keys from section
        in_section=false
        while IFS= read -r line; do
            line="${line#"${line%%[![:space:]]*}"}"
            line="${line%"${line##*[![:space:]]}"}"
            [[ -z "$line" ]] && continue
            [[ "$line" == \#* ]] && continue
            if [[ "$line" =~ ^\[([^]]+)\] ]]; then
                if [[ "${BASH_REMATCH[1]}" == "$repo" ]]; then
                    in_section=true
                elif $in_section; then
                    break
                fi
                continue
            fi
            if $in_section && [[ "$line" =~ ^([^=]+)= ]]; then
                k="${BASH_REMATCH[1]}"
                k="${k#"${k%%[![:space:]]*}"}"
                k="${k%"${k##*[![:space:]]}"}"
                if gitea_api DELETE "/repos/${GITEA_ORG_NAME}/${repo}/actions/variables/${k}" >/dev/null 2>&1; then
                    log_success "  Deleted ${k} from ${repo}"
                    REMOVED=$((REMOVED + 1))
                else
                    log_warn "  Could not delete ${k} from ${repo} (may not exist)"
                fi
            fi
        done < "$REPO_VARS_CONF"

    done < <(ini_list_sections "$REPO_VARS_CONF")
else
    log_info "repo_variables.conf not found — skipping"
fi
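The pure-bash whitespace trimming and section scanning above can be hard to read inline; here is the same idiom isolated as a small sketch (the `trim` helper name and the sample INI data are illustrative, not from the scripts):

```shell
#!/usr/bin/env bash
# Sketch of the parameter-expansion trim idiom used in Step 1.
trim() {
    local s="$1"
    s="${s#"${s%%[![:space:]]*}"}"   # strip leading whitespace
    s="${s%"${s##*[![:space:]]}"}"   # strip trailing whitespace
    printf '%s' "$s"
}

# Print the keys found in one [section] of a simple INI file read from stdin.
ini_section_keys() {
    local target="$1" in_section=false line k
    while IFS= read -r line; do
        line="$(trim "$line")"
        [[ -z "$line" || "$line" == \#* ]] && continue
        if [[ "$line" =~ ^\[([^]]+)\] ]]; then
            if [[ "${BASH_REMATCH[1]}" == "$target" ]]; then
                in_section=true
            elif $in_section; then
                break          # left the target section; stop scanning
            fi
            continue
        fi
        if $in_section && [[ "$line" =~ ^([^=]+)= ]]; then
            k="$(trim "${BASH_REMATCH[1]}")"
            printf '%s\n' "$k"
        fi
    done
}

# Prints FOO and BAR (keys of [repo-a]), ignoring comments and [repo-b].
printf '[repo-a]\nFOO = 1\n# note\nBAR=2\n[repo-b]\nBAZ=3\n' | ini_section_keys "repo-a"
```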
# =========================================================================
# Step 2: Remove Docker runners
# =========================================================================
log_step 2 "Removing Docker runners"

PHASE11_DOCKER_RUNNERS=(
    unraid-go-node-1
    unraid-go-node-2
    unraid-go-node-3
    unraid-jvm-slim-1
    unraid-jvm-slim-2
    unraid-android-emulator
)

for runner_name in "${PHASE11_DOCKER_RUNNERS[@]}"; do
    log_info "Removing runner '${runner_name}'..."
    if "${SCRIPT_DIR}/manage_runner.sh" remove --name "$runner_name" 2>/dev/null; then
        log_success "Removed ${runner_name}"
        REMOVED=$((REMOVED + 1))
    else
        log_warn "Could not remove ${runner_name} (may not exist)"
    fi
done
# =========================================================================
# Step 3: Remove shared macOS runner, restore original per-repo runners
# =========================================================================
log_step 3 "Restoring original macOS runner configuration"

# Remove shared runner
if launchctl list 2>/dev/null | grep -q "com.gitea.runner.macbook-runner"; then
    log_info "Removing shared macOS runner 'macbook-runner'..."
    "${SCRIPT_DIR}/manage_runner.sh" remove --name macbook-runner 2>/dev/null || true
    REMOVED=$((REMOVED + 1))
fi

# Note: original per-repo macOS runner sections were replaced in runners.conf
# during phase 11. They need to be re-added manually or by re-running
# configure_runners.sh. This teardown only cleans up deployed resources.
log_info "Note: original macOS runner sections (macbook-runner-periodvault,"
log_info "      macbook-runner-intermittent-fasting-tracker) must be restored in"
log_info "      runners.conf manually or via git checkout."
# =========================================================================
# Step 4: Remove toolchain images from Unraid
# =========================================================================
log_step 4 "Removing toolchain images from Unraid"

IMAGES_TO_REMOVE=(
    "go-node-runner:latest"
    "jvm-android-runner:slim"
    "jvm-android-runner:full"
)

for img in "${IMAGES_TO_REMOVE[@]}"; do
    if ssh_exec "UNRAID" "docker rmi '${img}' 2>/dev/null"; then
        log_success "Removed image ${img}"
        REMOVED=$((REMOVED + 1))
    else
        log_warn "Could not remove image ${img} (may not exist or in use)"
    fi
done
# =========================================================================
# Step 5: Remove phase 11 runner sections from runners.conf
# =========================================================================
log_step 5 "Cleaning runners.conf"

RUNNERS_CONF="${SCRIPT_DIR}/runners.conf"
PHASE11_SECTIONS=(
    unraid-go-node-1
    unraid-go-node-2
    unraid-go-node-3
    unraid-jvm-slim-1
    unraid-jvm-slim-2
    unraid-android-emulator
    macbook-runner
)

for section in "${PHASE11_SECTIONS[@]}"; do
    if ini_list_sections "$RUNNERS_CONF" | grep -qx "$section" 2>/dev/null; then
        ini_remove_section "$RUNNERS_CONF" "$section"
        log_success "Removed [${section}] from runners.conf"
        REMOVED=$((REMOVED + 1))
    fi
done
# ---------------------------------------------------------------------------
# Summary
# ---------------------------------------------------------------------------
printf '\n'
log_info "Results: ${REMOVED} items removed, ${FAILED} failures"

if [[ $FAILED -gt 0 ]]; then
    log_error "Some removals failed — check logs above"
    exit 1
fi

log_success "Phase 11 teardown complete"
phase7_5_nginx_to_caddy.sh (new executable file, 610 lines)
@@ -0,0 +1,610 @@
#!/usr/bin/env bash
set -euo pipefail

# =============================================================================
# phase7_5_nginx_to_caddy.sh — One-time Nginx -> Caddy migration cutover helper
#
# Goals:
#   - Serve both sintheus.com and privacyindesign.com hostnames from one Caddy
#   - Keep public ingress HTTPS-only
#   - Support canary-first rollout (default: tower.sintheus.com only)
#   - Preserve current mixed backend schemes (http/https) unless strict mode is enabled
#
# Usage examples:
#   ./phase7_5_nginx_to_caddy.sh
#   ./phase7_5_nginx_to_caddy.sh --mode=full
#   ./phase7_5_nginx_to_caddy.sh --mode=full --strict-backend-https
#   ./phase7_5_nginx_to_caddy.sh --mode=canary --yes
# =============================================================================

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"

AUTO_YES=false
MODE="canary"               # canary|full
STRICT_BACKEND_HTTPS=false

# Reuse Unraid's existing Docker network.
UNRAID_DOCKER_NETWORK_NAME="br0"

usage() {
    cat <<EOF
Usage: $(basename "$0") [options]

Options:
  --mode=canary|full        Rollout scope (default: canary)
  --strict-backend-https    Require all upstream backends to be https://
  --yes, -y                 Skip confirmation prompts
  --help, -h                Show this help
EOF
}
for arg in "$@"; do
    case "$arg" in
        --mode=*) MODE="${arg#*=}" ;;
        --strict-backend-https) STRICT_BACKEND_HTTPS=true ;;
        --yes|-y) AUTO_YES=true ;;
        --help|-h) usage; exit 0 ;;
        *)
            log_error "Unknown argument: $arg"
            usage
            exit 1
            ;;
    esac
done

if [[ "$MODE" != "canary" && "$MODE" != "full" ]]; then
    log_error "Invalid --mode '$MODE' (use: canary|full)"
    exit 1
fi
confirm_action() {
    local prompt="$1"
    if [[ "$AUTO_YES" == "true" ]]; then
        log_info "Auto-confirmed (--yes): ${prompt}"
        return 0
    fi
    printf '%s' "$prompt"
    read -r confirm
    [[ "$confirm" =~ ^[Yy]$ ]]
}
load_env
require_vars UNRAID_IP UNRAID_SSH_USER UNRAID_COMPOSE_DIR \
    UNRAID_CADDY_IP UNRAID_GITEA_IP \
    GITEA_DOMAIN CADDY_DATA_PATH TLS_MODE

if [[ "$TLS_MODE" == "cloudflare" ]]; then
    require_vars CLOUDFLARE_API_TOKEN
elif [[ "$TLS_MODE" == "existing" ]]; then
    require_vars SSL_CERT_PATH SSL_KEY_PATH
else
    log_error "Invalid TLS_MODE='${TLS_MODE}' — must be 'cloudflare' or 'existing'"
    exit 1
fi

phase_header "7.5" "Nginx to Caddy Migration (Multi-domain)"
# host|upstream|streaming(true/false)|body_limit|insecure_skip_verify(true/false)
FULL_HOST_MAP=(
    "ai.sintheus.com|http://192.168.1.82:8181 http://192.168.1.83:8181|true|50MB|false"
    "getter.sintheus.com|http://192.168.1.3:8181|false||false"
    "portainer.sintheus.com|https://192.168.1.181:9443|false||true"
    "photos.sintheus.com|http://192.168.1.222:2283|false|50GB|false"
    "fin.sintheus.com|http://192.168.1.233:8096|true||false"
    "disk.sintheus.com|http://192.168.1.52:80|false|20GB|false"
    "pi.sintheus.com|http://192.168.1.4:80|false||false"
    "plex.sintheus.com|http://192.168.1.111:32400|true||false"
    "sync.sintheus.com|http://192.168.1.119:8384|false||false"
    "syno.sintheus.com|https://100.108.182.16:5001|false||true"
    "tower.sintheus.com|https://192.168.1.82:443 https://192.168.1.83:443|false||true"
)

CANARY_HOST_MAP=(
    "tower.sintheus.com|https://192.168.1.82:443 https://192.168.1.83:443|false||true"
)

GITEA_ENTRY="${GITEA_DOMAIN}|http://${UNRAID_GITEA_IP}:3000|false||false"
CADDY_COMPOSE_DIR="${UNRAID_COMPOSE_DIR}/caddy"

SELECTED_HOST_MAP=()
if [[ "$MODE" == "canary" ]]; then
    SELECTED_HOST_MAP=( "${CANARY_HOST_MAP[@]}" )
else
    SELECTED_HOST_MAP=( "${FULL_HOST_MAP[@]}" "$GITEA_ENTRY" )
fi
validate_backend_tls_policy() {
    local -a non_tls_entries=()
    local entry host upstream
    for entry in "${SELECTED_HOST_MAP[@]}"; do
        IFS='|' read -r host upstream _ <<< "$entry"
        if [[ "$upstream" != https://* ]]; then
            non_tls_entries+=( "${host} -> ${upstream}" )
        fi
    done

    if [[ "${#non_tls_entries[@]}" -eq 0 ]]; then
        log_success "All selected backends are HTTPS"
        return 0
    fi

    if [[ "$STRICT_BACKEND_HTTPS" == "true" ]]; then
        log_error "Strict backend HTTPS is enabled, but these entries are not HTTPS:"
        printf '%s\n' "${non_tls_entries[@]}" | sed 's/^/  - /' >&2
        return 1
    fi

    log_warn "Using mixed backend schemes (allowed):"
    printf '%s\n' "${non_tls_entries[@]}" | sed 's/^/  - /' >&2
}
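Each host-map entry is one pipe-delimited string, and `IFS='|' read -r` is what splits it throughout this script. A quick standalone sketch of that split, using the `fin.sintheus.com` entry from the map above (empty fields simply become empty variables):

```shell
#!/usr/bin/env bash
# Split one host-map entry into its five fields.
entry="fin.sintheus.com|http://192.168.1.233:8096|true||false"
IFS='|' read -r host upstream streaming body_limit skip_verify <<< "$entry"

printf 'host=%s\n' "$host"                 # fin.sintheus.com
printf 'streaming=%s\n' "$streaming"       # true
printf 'body_limit=[%s]\n' "$body_limit"   # [] — the empty field stays an empty string
```

Setting `IFS` only on the `read` command leaves the global `IFS` untouched, which is why the scripts can use this freely inside loops.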
emit_site_block() {
    local outfile="$1" host="$2" upstream="$3" streaming="$4" body_limit="$5" skip_verify="$6"

    {
        echo "${host} {"
        if [[ "$TLS_MODE" == "existing" ]]; then
            echo "  tls ${SSL_CERT_PATH} ${SSL_KEY_PATH}"
        fi
        echo "  import common_security"
        echo
        if [[ -n "$body_limit" ]]; then
            echo "  request_body {"
            echo "    max_size ${body_limit}"
            echo "  }"
            echo
        fi
        echo "  reverse_proxy ${upstream} {"
        if [[ "$streaming" == "true" ]]; then
            echo "    import proxy_streaming"
        else
            echo "    import proxy_headers"
        fi
        if [[ "$skip_verify" == "true" && "$upstream" == https://* ]]; then
            echo "    transport http {"
            echo "      tls_insecure_skip_verify"
            echo "    }"
        fi
        echo "  }"
        echo "}"
        echo
    } >> "$outfile"
}
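For a map entry like the `fin.sintheus.com` line above (streaming enabled, no body limit, plain-HTTP upstream, no skip-verify), `emit_site_block` produces roughly the following site block, assuming `TLS_MODE=cloudflare` so no per-site `tls` line is emitted (indentation here is illustrative):

```
fin.sintheus.com {
  import common_security

  reverse_proxy http://192.168.1.233:8096 {
    import proxy_streaming
  }
}
```

The `common_security`, `proxy_headers`, and `proxy_streaming` snippets it imports are defined once at the top of the generated Caddyfile by `build_caddyfile`.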
emit_site_block_standalone() {
    local outfile="$1" host="$2" upstream="$3" streaming="$4" body_limit="$5" skip_verify="$6"

    {
        echo "${host} {"
        if [[ "$TLS_MODE" == "cloudflare" ]]; then
            echo "  tls {"
            echo "    dns cloudflare {env.CF_API_TOKEN}"
            echo "  }"
        elif [[ "$TLS_MODE" == "existing" ]]; then
            echo "  tls ${SSL_CERT_PATH} ${SSL_KEY_PATH}"
        fi
        echo "  encode zstd gzip"
        echo "  header {"
        echo "    Strict-Transport-Security \"max-age=31536000; includeSubDomains; preload\""
        echo "    X-Content-Type-Options \"nosniff\""
        echo "    X-Frame-Options \"SAMEORIGIN\""
        echo "    Referrer-Policy \"strict-origin-when-cross-origin\""
        echo "    -Server"
        echo "  }"
        echo
        if [[ -n "$body_limit" ]]; then
            echo "  request_body {"
            echo "    max_size ${body_limit}"
            echo "  }"
            echo
        fi
        echo "  reverse_proxy ${upstream} {"
        echo "    header_up Host {host}"
        echo "    header_up X-Real-IP {remote_host}"
        if [[ "$streaming" == "true" ]]; then
            echo "      flush_interval -1"
        fi
        if [[ "$skip_verify" == "true" && "$upstream" == https://* ]]; then
            echo "    transport http {"
            echo "      tls_insecure_skip_verify"
            echo "    }"
        fi
        echo "  }"
        echo "}"
        echo
    } >> "$outfile"
}
caddy_block_extract_for_host() {
    local infile="$1" host="$2" outfile="$3"
    awk -v host="$host" '
        function trim(s) {
            sub(/^[[:space:]]+/, "", s)
            sub(/[[:space:]]+$/, "", s)
            return s
        }
        function brace_delta(s,    tmp, opens, closes) {
            tmp = s
            opens = gsub(/\{/, "{", tmp)
            closes = gsub(/\}/, "}", tmp)
            return opens - closes
        }
        function has_host(labels,    i, n, parts, token) {
            labels = trim(labels)
            gsub(/[[:space:]]+/, "", labels)
            n = split(labels, parts, ",")
            for (i = 1; i <= n; i++) {
                token = parts[i]
                if (token == host) {
                    return 1
                }
            }
            return 0
        }

        BEGIN {
            depth = 0
            in_target = 0
            target_depth = 0
            found = 0
        }

        {
            line = $0

            if (!in_target) {
                if (depth == 0) {
                    pos = index(line, "{")
                    if (pos > 0) {
                        labels = substr(line, 1, pos - 1)
                        if (trim(labels) != "" && labels !~ /^[[:space:]]*\(/ && has_host(labels)) {
                            in_target = 1
                            target_depth = brace_delta(line)
                            found = 1
                            print line
                            next
                        }
                    }
                }
                depth += brace_delta(line)
            } else {
                target_depth += brace_delta(line)
                print line
                if (target_depth <= 0) {
                    in_target = 0
                }
            }
        }

        END {
            if (!found) {
                exit 1
            }
        }
    ' "$infile" > "$outfile"
}
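The extraction works purely by counting braces: at top level, a line containing `{` whose label list names the host opens the target block, and the block ends when the running brace delta returns to zero. A simplified inline copy of that logic, run against a made-up toy Caddyfile, shows the idea (hosts and content here are invented for the demo; snippet blocks starting with `(` are skipped, as above):

```shell
#!/usr/bin/env bash
# Toy Caddyfile: one snippet, two site blocks, nested braces inside the first.
tmp_in=$(mktemp); tmp_out=$(mktemp)
cat > "$tmp_in" <<'EOF'
(common) {
  encode gzip
}
a.example.com {
  reverse_proxy http://10.0.0.1:80 {
    transport http {
    }
  }
}
b.example.com {
  respond "hi"
}
EOF

# Same top-level idea as caddy_block_extract_for_host: track brace depth,
# copy only the block whose label matches the requested host.
awk -v host="a.example.com" '
  function brace_delta(s,  tmp,o,c) { tmp=s; o=gsub(/\{/,"{",tmp); c=gsub(/\}/,"}",tmp); return o-c }
  {
    if (in_t) { d += brace_delta($0); print; if (d <= 0) in_t=0; next }
    if (depth == 0 && index($0,"{") && $0 !~ /^[[:space:]]*\(/ ) {
      label=$0; sub(/[[:space:]]*\{.*/,"",label)
      if (label == host) { in_t=1; d=brace_delta($0); print; next }
    }
    depth += brace_delta($0)
  }
' "$tmp_in" > "$tmp_out"

cat "$tmp_out"    # only the a.example.com block, braces balanced
rm -f "$tmp_in" "$tmp_out"
```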
caddy_block_remove_for_host() {
    local infile="$1" host="$2" outfile="$3"
    awk -v host="$host" '
        function trim(s) {
            sub(/^[[:space:]]+/, "", s)
            sub(/[[:space:]]+$/, "", s)
            return s
        }
        function brace_delta(s,    tmp, opens, closes) {
            tmp = s
            opens = gsub(/\{/, "{", tmp)
            closes = gsub(/\}/, "}", tmp)
            return opens - closes
        }
        function has_host(labels,    i, n, parts, token) {
            labels = trim(labels)
            gsub(/[[:space:]]+/, "", labels)
            n = split(labels, parts, ",")
            for (i = 1; i <= n; i++) {
                token = parts[i]
                if (token == host) {
                    return 1
                }
            }
            return 0
        }

        BEGIN {
            depth = 0
            in_target = 0
            target_depth = 0
            removed = 0
        }

        {
            line = $0

            if (in_target) {
                target_depth += brace_delta(line)
                if (target_depth <= 0) {
                    in_target = 0
                }
                next
            }

            if (depth == 0) {
                pos = index(line, "{")
                if (pos > 0) {
                    labels = substr(line, 1, pos - 1)
                    if (trim(labels) != "" && labels !~ /^[[:space:]]*\(/ && has_host(labels)) {
                        in_target = 1
                        target_depth = brace_delta(line)
                        removed = 1
                        next
                    }
                }
            }

            print line
            depth += brace_delta(line)
        }

        END {
            if (!removed) {
                exit 1
            }
        }
    ' "$infile" > "$outfile"
}
upsert_site_block_by_host() {
    local infile="$1" entry="$2" outfile="$3"
    local host upstream streaming body_limit skip_verify
    IFS='|' read -r host upstream streaming body_limit skip_verify <<< "$entry"

    local tmp_new_block tmp_old_block tmp_without_old tmp_combined
    tmp_new_block=$(mktemp)
    tmp_old_block=$(mktemp)
    tmp_without_old=$(mktemp)
    tmp_combined=$(mktemp)

    : > "$tmp_new_block"
    emit_site_block_standalone "$tmp_new_block" "$host" "$upstream" "$streaming" "$body_limit" "$skip_verify"

    if caddy_block_extract_for_host "$infile" "$host" "$tmp_old_block"; then
        log_info "Domain '${host}' already exists; replacing existing site block"
        log_info "Previous block for '${host}':"
        sed 's/^/  | /' "$tmp_old_block" >&2
        caddy_block_remove_for_host "$infile" "$host" "$tmp_without_old"
        cat "$tmp_without_old" "$tmp_new_block" > "$tmp_combined"
    else
        log_info "Domain '${host}' not present; adding new site block"
        cat "$infile" "$tmp_new_block" > "$tmp_combined"
    fi

    mv "$tmp_combined" "$outfile"
    rm -f "$tmp_new_block" "$tmp_old_block" "$tmp_without_old"
}
build_caddyfile() {
    local outfile="$1"
    local entry host upstream streaming body_limit skip_verify

    : > "$outfile"
    {
        echo "# Generated by phase7_5_nginx_to_caddy.sh"
        echo "# Mode: ${MODE}"
        echo
        echo "{"
        if [[ "$TLS_MODE" == "cloudflare" ]]; then
            echo "  acme_dns cloudflare {env.CF_API_TOKEN}"
        fi
        echo "  servers {"
        echo "    trusted_proxies static private_ranges"
        echo "    protocols h1 h2 h3"
        echo "  }"
        echo "}"
        echo
        echo "(common_security) {"
        echo "  encode zstd gzip"
        echo "  header {"
        echo "    Strict-Transport-Security \"max-age=31536000; includeSubDomains; preload\""
        echo "    X-Content-Type-Options \"nosniff\""
        echo "    X-Frame-Options \"SAMEORIGIN\""
        echo "    Referrer-Policy \"strict-origin-when-cross-origin\""
        echo "    -Server"
        echo "  }"
        echo "}"
        echo
        echo "(proxy_headers) {"
        echo "  header_up Host {host}"
        echo "  header_up X-Real-IP {remote_host}"
        echo "}"
        echo
        echo "(proxy_streaming) {"
        echo "  import proxy_headers"
        echo "  flush_interval -1"
        echo "}"
        echo
    } >> "$outfile"

    for entry in "${SELECTED_HOST_MAP[@]}"; do
        IFS='|' read -r host upstream streaming body_limit skip_verify <<< "$entry"
        emit_site_block "$outfile" "$host" "$upstream" "$streaming" "$body_limit" "$skip_verify"
    done
}
if ! validate_backend_tls_policy; then
    exit 1
fi

log_step 1 "Creating Caddy data directories on Unraid..."
ssh_exec UNRAID "mkdir -p '${CADDY_DATA_PATH}/data' '${CADDY_DATA_PATH}/config'"
log_success "Caddy data directories ensured"
log_step 2 "Deploying Caddy docker-compose on Unraid..."
if ! ssh_exec UNRAID "docker network inspect '${UNRAID_DOCKER_NETWORK_NAME}'" &>/dev/null; then
    log_error "Required Docker network '${UNRAID_DOCKER_NETWORK_NAME}' not found on Unraid"
    exit 1
fi
ssh_exec UNRAID "mkdir -p '${CADDY_COMPOSE_DIR}'"

TMP_COMPOSE=$(mktemp)
CADDY_CONTAINER_IP="${UNRAID_CADDY_IP}"
GITEA_NETWORK_NAME="${UNRAID_DOCKER_NETWORK_NAME}"
export CADDY_CONTAINER_IP CADDY_DATA_PATH GITEA_NETWORK_NAME

if [[ "$TLS_MODE" == "cloudflare" ]]; then
    CADDY_ENV_VARS="      - CF_API_TOKEN=${CLOUDFLARE_API_TOKEN}"
    CADDY_EXTRA_VOLUMES=""
else
    CADDY_ENV_VARS=""
    CADDY_EXTRA_VOLUMES="      - ${SSL_CERT_PATH}:${SSL_CERT_PATH}:ro
      - ${SSL_KEY_PATH}:${SSL_KEY_PATH}:ro"
fi
export CADDY_ENV_VARS CADDY_EXTRA_VOLUMES
render_template "${SCRIPT_DIR}/templates/docker-compose-caddy.yml.tpl" "$TMP_COMPOSE" \
    "\${CADDY_DATA_PATH} \${CADDY_CONTAINER_IP} \${CADDY_ENV_VARS} \${CADDY_EXTRA_VOLUMES} \${GITEA_NETWORK_NAME}"

if [[ -z "$CADDY_ENV_VARS" ]]; then
    # Drop the now-empty `environment:` key so the rendered compose file stays valid.
    sed -i.bak '/^[[:space:]]*environment:$/d' "$TMP_COMPOSE"
    rm -f "${TMP_COMPOSE}.bak"
fi
if [[ -z "$CADDY_EXTRA_VOLUMES" ]]; then
    # Delete trailing blank lines left behind by the empty volumes placeholder.
    sed -i.bak -e :a -e '/^\n*$/{$d;N;ba' -e '}' "$TMP_COMPOSE"
    rm -f "${TMP_COMPOSE}.bak"
fi
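That `sed -e :a -e '/^\n*$/{$d;N;ba' -e '}'` invocation is the classic trailing-blank-line deletion loop: it accumulates blank lines into the pattern space and deletes them only when they reach end-of-file, leaving interior blanks alone. A standalone demo (the file content is made up):

```shell
#!/usr/bin/env bash
# Demonstrate the trailing-blank-line cleanup used after template rendering.
tmp=$(mktemp)
printf 'services:\n  caddy:\n    image: caddy:2\n\n\n\n' > "$tmp"

# :a labels a loop; blank pattern spaces pull in the next line (N) and
# re-test (ba); at end-of-file ($) the accumulated blanks are deleted (d).
sed -i.bak -e :a -e '/^\n*$/{$d;N;ba' -e '}' "$tmp"
rm -f "${tmp}.bak"

cat "$tmp"     # the three trailing blank lines are gone
rm -f "$tmp"
```

Note the `-i.bak` form is used (here and in the script) for GNU/BSD sed portability; the backup file is removed immediately afterwards.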
scp_to UNRAID "$TMP_COMPOSE" "${CADDY_COMPOSE_DIR}/docker-compose.yml"
rm -f "$TMP_COMPOSE"
log_success "Caddy compose deployed to ${CADDY_COMPOSE_DIR}"

log_step 3 "Generating and deploying multi-domain Caddyfile..."
TMP_CADDYFILE=$(mktemp)
HAS_EXISTING_CADDYFILE=false
if ssh_exec UNRAID "test -f '${CADDY_DATA_PATH}/Caddyfile'" 2>/dev/null; then
    HAS_EXISTING_CADDYFILE=true
    BACKUP_PATH="${CADDY_DATA_PATH}/Caddyfile.pre_phase7_5.$(date +%Y%m%d%H%M%S)"
    ssh_exec UNRAID "cp '${CADDY_DATA_PATH}/Caddyfile' '${BACKUP_PATH}'"
    log_info "Backed up previous Caddyfile to ${BACKUP_PATH}"
fi
if [[ "$MODE" == "canary" && "$HAS_EXISTING_CADDYFILE" == "true" ]]; then
    TMP_WORK=$(mktemp)
    TMP_NEXT=$(mktemp)
    cp /dev/null "$TMP_NEXT"
    ssh_exec UNRAID "cat '${CADDY_DATA_PATH}/Caddyfile'" > "$TMP_WORK"

    for entry in "${CANARY_HOST_MAP[@]}"; do
        upsert_site_block_by_host "$TMP_WORK" "$entry" "$TMP_NEXT"
        mv "$TMP_NEXT" "$TMP_WORK"
        TMP_NEXT=$(mktemp)
    done

    cp "$TMP_WORK" "$TMP_CADDYFILE"
    rm -f "$TMP_WORK" "$TMP_NEXT"
    log_info "Canary mode: existing routes preserved; canary domains upserted"
else
    build_caddyfile "$TMP_CADDYFILE"
fi

scp_to UNRAID "$TMP_CADDYFILE" "${CADDY_DATA_PATH}/Caddyfile"
rm -f "$TMP_CADDYFILE"
log_success "Caddyfile deployed"
log_step 4 "Starting/reloading Caddy container..."
ssh_exec UNRAID "cd '${CADDY_COMPOSE_DIR}' && docker compose up -d 2>/dev/null || docker-compose up -d"
if ! ssh_exec UNRAID "docker exec caddy caddy reload --config /etc/caddy/Caddyfile --adapter caddyfile" &>/dev/null; then
    log_warn "Hot reload failed; restarting caddy container"
    ssh_exec UNRAID "docker restart caddy" >/dev/null
fi
log_success "Caddy container is running with new config"
probe_http_code_ok() {
    local code="$1" role="$2"
    if [[ "$role" == "gitea_api" ]]; then
        [[ "$code" == "200" ]]
        return
    fi
    [[ "$code" =~ ^(2|3)[0-9][0-9]$ || "$code" == "401" || "$code" == "403" ]]
}
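The acceptance rules are deliberately asymmetric: the `gitea_api` role demands an exact 200 from `/api/v1/version`, while generic hosts accept any 2xx/3xx plus 401/403, since an auth wall still proves Caddy routed the request to a live backend. A standalone copy makes the behaviour easy to check:

```shell
#!/usr/bin/env bash
# Standalone copy of the probe acceptance rules above, for illustration.
probe_http_code_ok() {
    local code="$1" role="$2"
    if [[ "$role" == "gitea_api" ]]; then
        [[ "$code" == "200" ]]   # strict: only a real API answer counts
        return
    fi
    # Generic hosts: 2xx/3xx, or auth challenges that prove routing worked.
    [[ "$code" =~ ^(2|3)[0-9][0-9]$ || "$code" == "401" || "$code" == "403" ]]
}

probe_http_code_ok 302 generic   && echo "302 accepted for generic"
probe_http_code_ok 302 gitea_api || echo "302 rejected for gitea_api"
```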
probe_host_via_caddy() {
    local host="$1" upstream="$2" role="$3"
    local max_attempts="${4:-5}" wait_secs="${5:-5}"
    local path="/"
    if [[ "$role" == "gitea_api" ]]; then
        path="/api/v1/version"
    fi

    local tmp_body http_code attempt
    tmp_body=$(mktemp)

    for (( attempt=1; attempt<=max_attempts; attempt++ )); do
        http_code=$(curl -sk --resolve "${host}:443:${UNRAID_CADDY_IP}" \
            -o "$tmp_body" -w "%{http_code}" "https://${host}${path}" 2>/dev/null) || true
        [[ -z "$http_code" ]] && http_code="000"

        if probe_http_code_ok "$http_code" "$role"; then
            log_success "Probe passed: ${host} (HTTP ${http_code})"
            rm -f "$tmp_body"
            return 0
        fi

        if [[ $attempt -lt $max_attempts ]]; then
            log_info "Probe attempt ${attempt}/${max_attempts} for ${host} (HTTP ${http_code}) — retrying in ${wait_secs}s..."
            sleep "$wait_secs"
        fi
    done

    log_error "Probe failed: ${host} (HTTP ${http_code}) after ${max_attempts} attempts"
    if [[ "$http_code" == "502" || "$http_code" == "503" || "$http_code" == "504" || "$http_code" == "000" ]]; then
        local upstream_probe_raw upstream_code
        upstream_probe_raw=$(ssh_exec UNRAID "curl -sk -o /dev/null -w '%{http_code}' '${upstream}' || true" 2>/dev/null || true)
        upstream_code=$(printf '%s' "$upstream_probe_raw" | tr -cd '0-9')
        if [[ -z "$upstream_code" ]]; then
            upstream_code="000"
        elif [[ ${#upstream_code} -gt 3 ]]; then
            upstream_code="${upstream_code:$((${#upstream_code} - 3))}"
        fi
        log_warn "Upstream check from Unraid: ${upstream} -> HTTP ${upstream_code}"
    fi
    rm -f "$tmp_body"
    return 1
}
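Both this probe and `check_unraid_gitea_backend` further down sanitise whatever comes back over SSH before treating it as an HTTP code, because MOTD banners or warnings can surround curl's `%{http_code}` output. The idiom (strip non-digits, keep the trailing three) in isolation, wrapped in a hypothetical helper name for the demo:

```shell
#!/usr/bin/env bash
# Same sanitisation as in probe_host_via_caddy: keep digits only, then the
# last three of them; anything non-numeric collapses to the sentinel 000.
sanitize_http_code() {
    local code
    code=$(printf '%s' "$1" | tr -cd '0-9')
    if [[ -z "$code" ]]; then
        code="000"                          # no digits at all
    elif [[ ${#code} -gt 3 ]]; then
        code="${code:$((${#code} - 3))}"    # trailing 3 digits only
    fi
    printf '%s' "$code"
}

sanitize_http_code "Warning: host key added
200"    # -> 200
```

Keeping the trailing digits matters because the status code is printed last; leading noise (line numbers, error codes in banners) is discarded.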
if [[ "$MODE" == "canary" ]]; then
    if confirm_action "Run canary HTTPS probe for tower.sintheus.com via Caddy IP now? [y/N] "; then
        if ! probe_host_via_caddy "tower.sintheus.com" "https://192.168.1.82:443" "generic"; then
            log_error "Canary probe failed for tower.sintheus.com via ${UNRAID_CADDY_IP}"
            exit 1
        fi
    fi
else
    log_step 5 "Probing all configured hosts via Caddy IP..."
    PROBE_FAILS=0
    for entry in "${SELECTED_HOST_MAP[@]}"; do
        IFS='|' read -r host upstream _ <<< "$entry"
        role="generic"
        if [[ "$host" == "$GITEA_DOMAIN" ]]; then
            role="gitea_api"
        fi
        if ! probe_host_via_caddy "$host" "$upstream" "$role"; then
            PROBE_FAILS=$((PROBE_FAILS + 1))
        fi
    done
    if [[ "$PROBE_FAILS" -gt 0 ]]; then
        log_error "One or more probes failed (${PROBE_FAILS})"
        exit 1
    fi
fi

printf '\n'
log_success "Phase 7.5 complete (${MODE} mode)"
log_info "Next (no DNS change required): verify via curl --resolve and browser checks"
log_info "LAN-only routing option: split-DNS/hosts override to ${UNRAID_CADDY_IP}"
log_info "Public routing option: point public DNS to WAN ingress (not 192.168.x.x) and forward 443 to Caddy"
if [[ "$MODE" == "canary" ]]; then
    log_info "Canary host is tower.sintheus.com; existing routes were preserved"
else
    log_info "Full host map is now active in Caddy"
fi
phase8_5_nginx_to_caddy.sh (new executable file, 7 lines)
@@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -euo pipefail

# Backward-compat wrapper: phase 8.5 was renamed to phase 7.5.
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
echo "[WARN] phase8_5_nginx_to_caddy.sh was renamed to phase7_5_nginx_to_caddy.sh" >&2
exec "${SCRIPT_DIR}/phase7_5_nginx_to_caddy.sh" "$@"
@@ -16,6 +16,31 @@ set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"

ALLOW_DIRECT_CHECKS=false

usage() {
    cat <<EOF
Usage: $(basename "$0") [options]

Options:
  --allow-direct-checks   Allow fallback to direct Caddy-IP checks via --resolve
                          (LAN/split-DNS staging mode; not a full public cutover)
  --help, -h              Show this help
EOF
}

for arg in "$@"; do
    case "$arg" in
        --allow-direct-checks) ALLOW_DIRECT_CHECKS=true ;;
        --help|-h) usage; exit 0 ;;
        *)
            log_error "Unknown argument: $arg"
            usage
            exit 1
            ;;
    esac
done

load_env
require_vars UNRAID_IP UNRAID_SSH_USER UNRAID_GITEA_IP UNRAID_CADDY_IP \
    UNRAID_COMPOSE_DIR \
@@ -25,7 +50,7 @@ require_vars UNRAID_IP UNRAID_SSH_USER UNRAID_GITEA_IP UNRAID_CADDY_IP \
    REPO_NAMES

if [[ "$TLS_MODE" == "cloudflare" ]]; then
-    require_vars CLOUDFLARE_API_TOKEN
+    require_vars CLOUDFLARE_API_TOKEN PUBLIC_DNS_TARGET_IP
elif [[ "$TLS_MODE" == "existing" ]]; then
    require_vars SSL_CERT_PATH SSL_KEY_PATH
else
@@ -42,6 +67,232 @@ PHASE8_STATE_FILE="${PHASE8_STATE_DIR}/phase8_github_repo_state.json"
UNRAID_DOCKER_NETWORK_NAME="br0"
# Compose files live in a centralized project directory.
CADDY_COMPOSE_DIR="${UNRAID_COMPOSE_DIR}/caddy"
PHASE8_GITEA_ROUTE_BEGIN="# BEGIN_PHASE8_GITEA_ROUTE"
PHASE8_GITEA_ROUTE_END="# END_PHASE8_GITEA_ROUTE"
PUBLIC_DNS_TARGET_IP="${PUBLIC_DNS_TARGET_IP:-}"
PHASE8_ALLOW_PRIVATE_DNS_TARGET="${PHASE8_ALLOW_PRIVATE_DNS_TARGET:-false}"

if ! validate_bool "${PHASE8_ALLOW_PRIVATE_DNS_TARGET}"; then
    log_error "Invalid PHASE8_ALLOW_PRIVATE_DNS_TARGET='${PHASE8_ALLOW_PRIVATE_DNS_TARGET}' (must be true or false)"
    exit 1
fi
wait_for_https_public() {
    local host="$1" max_secs="${2:-30}"
    local elapsed=0
    while [[ $elapsed -lt $max_secs ]]; do
        if curl -sf -o /dev/null "https://${host}/api/v1/version" 2>/dev/null; then
            return 0
        fi
        sleep 2
        elapsed=$((elapsed + 2))
    done
    return 1
}
wait_for_https_via_resolve() {
    local host="$1" ip="$2" max_secs="${3:-300}"
    local elapsed=0
    log_info "Waiting for HTTPS via direct Caddy path (--resolve ${host}:443:${ip})..."
    while [[ $elapsed -lt $max_secs ]]; do
        if curl -skf --resolve "${host}:443:${ip}" "https://${host}/api/v1/version" >/dev/null 2>&1; then
            log_success "HTTPS reachable via Caddy IP (after ${elapsed}s)"
            return 0
        fi
        sleep 2
        elapsed=$((elapsed + 2))
    done
    log_error "Timeout waiting for HTTPS via --resolve (${host} -> ${ip}) after ${max_secs}s"
    if ssh_exec UNRAID "docker ps --format '{{.Names}}' | grep -qx 'caddy'" >/dev/null 2>&1; then
        log_warn "Recent Caddy logs (tail 80):"
        ssh_exec UNRAID "docker logs --tail 80 caddy 2>&1" || true
    fi
    return 1
}
check_unraid_gitea_backend() {
    local raw code
    raw=$(ssh_exec UNRAID "curl -sS -o /dev/null -w '%{http_code}' 'http://${UNRAID_GITEA_IP}:3000/api/v1/version' || true" 2>/dev/null || true)
    code=$(printf '%s' "$raw" | tr -cd '0-9')
    if [[ -z "$code" ]]; then
        code="000"
    elif [[ ${#code} -gt 3 ]]; then
        code="${code:$((${#code} - 3))}"
    fi

    if [[ "$code" == "200" ]]; then
        log_success "Unraid -> Gitea backend API reachable (HTTP 200)"
        return 0
    fi

    log_error "Unraid -> Gitea backend API check failed (HTTP ${code}) at http://${UNRAID_GITEA_IP}:3000/api/v1/version"
    return 1
}
is_private_ipv4() {
    local ip="$1"
    [[ "$ip" =~ ^10\. ]] || \
    [[ "$ip" =~ ^192\.168\. ]] || \
    [[ "$ip" =~ ^172\.(1[6-9]|2[0-9]|3[0-1])\. ]]
}
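The three regexes above cover exactly the RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16); the 172 pattern is the fiddly one, since only 172.16 through 172.31 are private. A standalone copy exercised on a few arbitrary sample addresses:

```shell
#!/usr/bin/env bash
# Standalone copy of is_private_ipv4 for illustration.
is_private_ipv4() {
    local ip="$1"
    [[ "$ip" =~ ^10\. ]] || \
    [[ "$ip" =~ ^192\.168\. ]] || \
    [[ "$ip" =~ ^172\.(1[6-9]|2[0-9]|3[0-1])\. ]]
}

for ip in 10.1.2.3 172.16.0.1 172.32.0.1 192.168.1.82 8.8.8.8; do
    if is_private_ipv4 "$ip"; then
        echo "${ip}: private"
    else
        echo "${ip}: public"    # 172.32.x.x falls outside 172.16-31, so it lands here
    fi
done
```

Note that CGNAT addresses (100.64.0.0/10, like the Tailscale-style `100.108.182.16` in the host map) are not RFC 1918 and are treated as public by this check.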
cloudflare_api_call() {
    local method="$1" path="$2" data="${3:-}"
    local -a args=(
        curl -sS
        -X "$method"
        -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}"
        -H "Content-Type: application/json"
        "https://api.cloudflare.com/client/v4${path}"
    )
    if [[ -n "$data" ]]; then
        args+=(-d "$data")
    fi
    "${args[@]}"
}
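Building the command as a bash array and expanding it with `"${args[@]}"` preserves quoting in every argument and lets the request body be appended only when present. The pattern in isolation, with `printf` standing in for `curl` so the sketch is runnable anywhere (`build_cmd` is a hypothetical name for the demo):

```shell
#!/usr/bin/env bash
# Conditional argument assembly, as in cloudflare_api_call above.
build_cmd() {
    local method="$1" data="${2:-}"
    local -a args=(printf '%s\n' "$method")
    if [[ -n "$data" ]]; then
        args+=("$data")    # body appended only when one was supplied
    fi
    "${args[@]}"           # quoted expansion keeps each element intact
}

build_cmd GET               # prints: GET
build_cmd POST '{"a":1}'    # prints: POST, then the JSON body
```

The same expansion would break with an unquoted `${args[@]}` or a flat string, since headers like `Content-Type: application/json` contain spaces.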
ensure_cloudflare_dns_for_gitea() {
    local host="$1" target_ip="$2" zone_id zone_name
    local allow_private="${PHASE8_ALLOW_PRIVATE_DNS_TARGET}"

    if [[ -z "$target_ip" ]]; then
        log_error "PUBLIC_DNS_TARGET_IP is not set"
        log_error "Set PUBLIC_DNS_TARGET_IP to your public ingress IP for ${host}"
        log_error "For LAN-only/split-DNS use, also set PHASE8_ALLOW_PRIVATE_DNS_TARGET=true"
        return 1
    fi

    if ! validate_ip "$target_ip"; then
        log_error "Invalid PUBLIC_DNS_TARGET_IP='${target_ip}'"
        log_error "Set PUBLIC_DNS_TARGET_IP in .env to the IP that should answer ${host}"
        return 1
    fi

    zone_name="${host#*.}"
    if [[ "$zone_name" == "$host" ]]; then
        log_error "GITEA_DOMAIN='${host}' is not a valid FQDN for Cloudflare zone detection"
        return 1
    fi

    if is_private_ipv4 "$target_ip"; then
        if [[ "$allow_private" != "true" ]]; then
            log_error "Refusing private DNS target ${target_ip} for Cloudflare public DNS"
            log_error "Set PUBLIC_DNS_TARGET_IP to public ingress IP, or set PHASE8_ALLOW_PRIVATE_DNS_TARGET=true for LAN-only split-DNS"
            return 1
        fi
        log_warn "Using private DNS target ${target_ip} because PHASE8_ALLOW_PRIVATE_DNS_TARGET=true"
    fi
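`zone_name="${host#*.}"` strips the leftmost label, so zone detection assumes the Gitea host sits one label below the zone apex (e.g. `git.example.com` yields `example.com`), and the equality check catches dotless hosts where nothing was stripped. A small sketch of just that derivation (`zone_for` is a hypothetical helper name; the domains are examples):

```shell
#!/usr/bin/env bash
# Zone detection used above: drop the leftmost label of the FQDN.
zone_for() {
    local host="$1" zone="${1#*.}"
    if [[ "$zone" == "$host" ]]; then
        echo "not an FQDN: ${host}" >&2   # no dot, nothing was stripped
        return 1
    fi
    printf '%s\n' "$zone"
}

zone_for git.example.com       # -> example.com
zone_for git.dev.example.com   # -> dev.example.com (note: NOT the apex zone)
```

The second call shows the limitation: for hosts nested two labels deep, the derived "zone" is a subdomain, which the subsequent Cloudflare zone lookup would then fail to find.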
    local zone_resp zone_err
    zone_resp=$(cloudflare_api_call GET "/zones?name=${zone_name}&status=active")
    if [[ "$(jq -r '.success // false' <<< "$zone_resp")" != "true" ]]; then
        zone_err=$(jq -r '(.errors // []) | map(.message // tostring) | join("; ")' <<< "$zone_resp")
        log_error "Cloudflare zone lookup failed for ${zone_name}: ${zone_err:-unknown error}"
        return 1
    fi

    zone_id=$(jq -r '.result[0].id // empty' <<< "$zone_resp")
    if [[ -z "$zone_id" ]]; then
        log_error "Cloudflare zone not found or not accessible for ${zone_name}"
        return 1
    fi

    local record_resp record_err record_count record_id old_ip
    record_resp=$(cloudflare_api_call GET "/zones/${zone_id}/dns_records?type=A&name=${host}")
    if [[ "$(jq -r '.success // false' <<< "$record_resp")" != "true" ]]; then
        record_err=$(jq -r '(.errors // []) | map(.message // tostring) | join("; ")' <<< "$record_resp")
        log_error "Cloudflare DNS query failed for ${host}: ${record_err:-unknown error}"
        return 1
    fi

    record_count=$(jq -r '.result | length' <<< "$record_resp")
    if [[ "$record_count" -eq 0 ]]; then
        local create_payload create_resp create_err
        create_payload=$(jq -n \
            --arg type "A" \
            --arg name "$host" \
            --arg content "$target_ip" \
            --argjson ttl 120 \
            --argjson proxied false \
            '{type:$type, name:$name, content:$content, ttl:$ttl, proxied:$proxied}')
        create_resp=$(cloudflare_api_call POST "/zones/${zone_id}/dns_records" "$create_payload")
        if [[ "$(jq -r '.success // false' <<< "$create_resp")" != "true" ]]; then
            create_err=$(jq -r '(.errors // []) | map(.message // tostring) | join("; ")' <<< "$create_resp")
            log_error "Failed to create Cloudflare A record ${host} -> ${target_ip}: ${create_err:-unknown error}"
            return 1
        fi
        log_success "Created Cloudflare A record: ${host} -> ${target_ip}"
        return 0
    fi

    record_id=$(jq -r '.result[0].id // empty' <<< "$record_resp")
    old_ip=$(jq -r '.result[0].content // empty' <<< "$record_resp")
    if [[ -n "$old_ip" && "$old_ip" == "$target_ip" ]]; then
        log_info "Cloudflare A record already correct: ${host} -> ${target_ip}"
        return 0
    fi

    local update_payload update_resp update_err
    update_payload=$(jq -n \
        --arg type "A" \
        --arg name "$host" \
        --arg content "$target_ip" \
        --argjson ttl 120 \
        --argjson proxied false \
        '{type:$type, name:$name, content:$content, ttl:$ttl, proxied:$proxied}')
    update_resp=$(cloudflare_api_call PUT "/zones/${zone_id}/dns_records/${record_id}" "$update_payload")
    if [[ "$(jq -r '.success // false' <<< "$update_resp")" != "true" ]]; then
        update_err=$(jq -r '(.errors // []) | map(.message // tostring) | join("; ")' <<< "$update_resp")
||||
log_error "Failed to update Cloudflare A record ${host}: ${update_err:-unknown error}"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Updated Cloudflare A record: ${host}"
|
||||
log_info " old: ${old_ip:-<empty>}"
|
||||
log_info " new: ${target_ip}"
|
||||
return 0
|
||||
}
|
||||
|
||||
caddyfile_has_domain_block() {
|
||||
local file="$1" domain="$2"
|
||||
awk -v domain="$domain" '
|
||||
function trim(s) {
|
||||
sub(/^[[:space:]]+/, "", s)
|
||||
sub(/[[:space:]]+$/, "", s)
|
||||
return s
|
||||
}
|
||||
function matches_domain(label, dom, wild_suffix, dot_pos) {
|
||||
if (label == dom) return 1
|
||||
# Wildcard match: *.example.com covers sub.example.com
|
||||
if (substr(label, 1, 2) == "*.") {
|
||||
wild_suffix = substr(label, 2)
|
||||
dot_pos = index(dom, ".")
|
||||
if (dot_pos > 0 && substr(dom, dot_pos) == wild_suffix) return 1
|
||||
}
|
||||
return 0
|
||||
}
|
||||
{
|
||||
line = $0
|
||||
if (line ~ /^[[:space:]]*#/) next
|
||||
pos = index(line, "{")
|
||||
if (pos <= 0) next
|
||||
|
||||
labels = trim(substr(line, 1, pos - 1))
|
||||
if (labels == "" || labels ~ /^\(/) next
|
||||
|
||||
gsub(/[[:space:]]+/, "", labels)
|
||||
n = split(labels, parts, ",")
|
||||
for (i = 1; i <= n; i++) {
|
||||
if (matches_domain(parts[i], domain)) {
|
||||
found = 1
|
||||
}
|
||||
}
|
||||
}
|
||||
END {
|
||||
exit(found ? 0 : 1)
|
||||
}
|
||||
' "$file"
|
||||
}
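The wildcard rule in `matches_domain` can be exercised standalone; a minimal sketch (domain and IP values are placeholders, and the awk body is a trimmed stand-in for the full function above):

```shell
# A "*.example.com" site label covers git.example.com but not the apex.
printf '*.example.com {\n    reverse_proxy 10.0.0.2:3000\n}\n' > /tmp/cf_demo
result=$(awk -v domain="git.example.com" '
    index($0, "{") {
        label = $1
        if (label == domain) found = 1
        else if (substr(label, 1, 2) == "*.") {
            # Compare the wildcard suffix against everything after the
            # first dot of the candidate domain.
            dot = index(domain, ".")
            if (dot > 0 && substr(domain, dot) == substr(label, 2)) found = 1
        }
    }
    END { exit(found ? 0 : 1) }
' /tmp/cf_demo && echo "routed")
echo "$result"
```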

# ---------------------------------------------------------------------------
# Helper: persist original GitHub repo settings for teardown symmetry

@@ -145,22 +396,53 @@ fi
# Step 2: Render + deploy Caddyfile
# ---------------------------------------------------------------------------
log_step 2 "Deploying Caddyfile..."
GITEA_CONTAINER_IP="${UNRAID_GITEA_IP}"
export GITEA_CONTAINER_IP GITEA_DOMAIN CADDY_DOMAIN

# Build TLS block based on TLS_MODE
if [[ "$TLS_MODE" == "cloudflare" ]]; then
    TLS_BLOCK="    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }"
else
    TLS_BLOCK="    tls ${SSL_CERT_PATH} ${SSL_KEY_PATH}"
fi
export TLS_BLOCK

if ssh_exec UNRAID "test -f '${CADDY_DATA_PATH}/Caddyfile'" 2>/dev/null; then
    TMP_EXISTING=$(mktemp)
    TMP_UPDATED=$(mktemp)
    TMP_ROUTE_BLOCK=$(mktemp)

    ssh_exec UNRAID "cat '${CADDY_DATA_PATH}/Caddyfile'" > "$TMP_EXISTING"

    if caddyfile_has_domain_block "$TMP_EXISTING" "$GITEA_DOMAIN"; then
        log_info "Caddyfile already has a route for ${GITEA_DOMAIN} — preserving existing file"
    else
        log_warn "Caddyfile exists but has no explicit route for ${GITEA_DOMAIN}"
        log_info "Appending managed Gitea route block"
        {
            echo
            echo "${PHASE8_GITEA_ROUTE_BEGIN}"
            echo "${GITEA_DOMAIN} {"
            printf '%s\n' "$TLS_BLOCK"
            echo
            echo "    reverse_proxy ${GITEA_CONTAINER_IP}:3000"
            echo "}"
            echo "${PHASE8_GITEA_ROUTE_END}"
            echo
        } > "$TMP_ROUTE_BLOCK"

        # Remove a stale managed block (if present), then append refreshed block.
        sed "/^${PHASE8_GITEA_ROUTE_BEGIN}\$/,/^${PHASE8_GITEA_ROUTE_END}\$/d" "$TMP_EXISTING" > "$TMP_UPDATED"
        cat "$TMP_UPDATED" "$TMP_ROUTE_BLOCK" > "${TMP_UPDATED}.final"
        scp_to UNRAID "${TMP_UPDATED}.final" "${CADDY_DATA_PATH}/Caddyfile"
        log_success "Appended managed Gitea route to existing Caddyfile"
    fi

    rm -f "$TMP_EXISTING" "$TMP_UPDATED" "$TMP_ROUTE_BLOCK" "${TMP_UPDATED}.final"
else
    TMPFILE=$(mktemp)

    render_template "${SCRIPT_DIR}/templates/Caddyfile.tpl" "$TMPFILE" \
        "\${CADDY_DOMAIN} \${GITEA_DOMAIN} \${TLS_BLOCK} \${GITEA_CONTAINER_IP}"
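The marker-delimited "managed block" idiom above (delete any stale block between BEGIN/END markers, then append a fresh one) can be tried standalone; marker text below is a placeholder, the script's real values live in `PHASE8_GITEA_ROUTE_BEGIN`/`PHASE8_GITEA_ROUTE_END`:

```shell
# Placeholder markers; delete whatever sits between them, keep the rest.
BEGIN_MARK='# >>> gitea route (managed) >>>'
END_MARK='# <<< gitea route (managed) <<<'
printf '%s\n' 'keep-before' "$BEGIN_MARK" 'stale route' "$END_MARK" 'keep-after' > /tmp/caddy_demo
kept=$(sed "/^${BEGIN_MARK}\$/,/^${END_MARK}\$/d" /tmp/caddy_demo)
printf '%s\n' "$kept"
```

Because the deletion is idempotent, rerunning the phase never stacks up duplicate route blocks.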
@@ -221,23 +503,56 @@ fi
log_step 4 "Starting Caddy container..."
CONTAINER_STATUS=$(ssh_exec UNRAID "docker ps --filter name=caddy --format '{{.Status}}'" 2>/dev/null || true)
if [[ "$CONTAINER_STATUS" == *"Up"* ]]; then
    log_info "Caddy container already running"
    log_info "Reloading Caddy config from /etc/caddy/Caddyfile"
    if ssh_exec UNRAID "docker exec caddy caddy reload --config /etc/caddy/Caddyfile --adapter caddyfile" >/dev/null 2>&1; then
        log_success "Caddy config reloaded"
    else
        log_warn "Caddy reload failed; restarting caddy container"
        ssh_exec UNRAID "docker restart caddy >/dev/null"
        log_success "Caddy container restarted"
    fi
else
    ssh_exec UNRAID "cd '${CADDY_COMPOSE_DIR}' && docker compose up -d 2>/dev/null || docker-compose up -d"
    if ssh_exec UNRAID "docker exec caddy caddy reload --config /etc/caddy/Caddyfile --adapter caddyfile" >/dev/null 2>&1; then
        log_success "Caddy container started and config loaded"
    else
        log_success "Caddy container started"
    fi
fi

# ---------------------------------------------------------------------------
# Step 5: Ensure DNS points Gitea domain to target ingress IP
# ---------------------------------------------------------------------------
log_step 5 "Ensuring DNS for ${GITEA_DOMAIN}..."
if [[ "$TLS_MODE" == "cloudflare" ]]; then
    ensure_cloudflare_dns_for_gitea "${GITEA_DOMAIN}" "${PUBLIC_DNS_TARGET_IP}"
else
    log_info "TLS_MODE=${TLS_MODE}; skipping Cloudflare DNS automation"
fi

# ---------------------------------------------------------------------------
# Step 6: Wait for HTTPS to work
# Caddy auto-obtains certs — poll until HTTPS responds.
# ---------------------------------------------------------------------------
log_step 6 "Waiting for HTTPS (Caddy auto-provisions cert)..."
check_unraid_gitea_backend
if wait_for_https_public "${GITEA_DOMAIN}" 60; then
    log_success "HTTPS verified through current domain routing — https://${GITEA_DOMAIN} works"
else
    log_warn "Public-domain routing to Caddy is not ready yet"
    if [[ "$ALLOW_DIRECT_CHECKS" == "true" ]]; then
        wait_for_https_via_resolve "${GITEA_DOMAIN}" "${UNRAID_CADDY_IP}" 300
        log_warn "Proceeding with direct-only HTTPS validation (--allow-direct-checks)"
    else
        log_error "Refusing to continue cutover without public HTTPS reachability"
        log_error "Fix DNS/ingress routing and rerun Phase 8, or use --allow-direct-checks for staging only"
        exit 1
    fi
fi

# ---------------------------------------------------------------------------
# Step 7: Mark GitHub repos as offsite backup only
# Updates description + homepage to indicate Gitea is primary.
# Disables wiki and Pages to avoid unnecessary resource usage.
# Does NOT archive — archived repos reject pushes, which would break
@@ -245,7 +560,7 @@ log_success "HTTPS verified — https://${GITEA_DOMAIN} works"
# Persists original mutable settings to a local state file for teardown.
# GitHub Actions already disabled in Phase 6 Step D.
# ---------------------------------------------------------------------------
log_step 7 "Marking GitHub repos as offsite backup..."

init_phase8_state_store
GITHUB_REPO_UPDATE_FAILURES=0
@@ -279,10 +594,11 @@ for repo in "${REPOS[@]}"; do
        --arg homepage "https://${GITEA_DOMAIN}/${GITEA_ORG_NAME}/${repo}" \
        '{description: $description, homepage: $homepage, has_wiki: false, has_projects: false}')

    if PATCH_OUT=$(github_api PATCH "/repos/${GITHUB_USERNAME}/${repo}" "$UPDATE_PAYLOAD" 2>&1); then
        log_success "Marked GitHub repo as mirror: ${repo}"
    else
        log_error "Failed to update GitHub repo: ${repo}"
        log_error "GitHub API: $(printf '%s' "$PATCH_OUT" | tail -n 1)"
        GITHUB_REPO_UPDATE_FAILURES=$((GITHUB_REPO_UPDATE_FAILURES + 1))
    fi

@@ -15,8 +15,33 @@ set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "${SCRIPT_DIR}/lib/common.sh"

ALLOW_DIRECT_CHECKS=false

usage() {
    cat <<EOF
Usage: $(basename "$0") [options]

Options:
  --allow-direct-checks   Allow fallback to direct Caddy-IP checks via --resolve
                          (LAN/split-DNS staging mode; not a full public cutover check)
  --help, -h              Show this help
EOF
}

for arg in "$@"; do
    case "$arg" in
        --allow-direct-checks) ALLOW_DIRECT_CHECKS=true ;;
        --help|-h) usage; exit 0 ;;
        *)
            log_error "Unknown argument: $arg"
            usage
            exit 1
            ;;
    esac
done

load_env
require_vars GITEA_DOMAIN UNRAID_CADDY_IP GITEA_ADMIN_TOKEN GITEA_ORG_NAME \
    GITHUB_USERNAME GITHUB_TOKEN \
    REPO_NAMES

@@ -37,16 +62,50 @@ run_check() {
    fi
}

ACCESS_MODE="public"
if ! curl -sf -o /dev/null "https://${GITEA_DOMAIN}/api/v1/version" 2>/dev/null; then
    log_warn "Public routing to ${GITEA_DOMAIN} not reachable from control plane"
    if [[ "$ALLOW_DIRECT_CHECKS" == "true" ]]; then
        ACCESS_MODE="direct"
        log_warn "Using direct Caddy-IP checks via --resolve (${UNRAID_CADDY_IP})"
    else
        log_error "Public HTTPS check failed; this is not a complete Phase 8 validation"
        log_error "Fix DNS/ingress routing and rerun, or use --allow-direct-checks for staging-only checks"
        exit 1
    fi
else
    log_info "Using public-domain checks for ${GITEA_DOMAIN}"
fi

curl_https() {
    if [[ "$ACCESS_MODE" == "direct" ]]; then
        curl -sk --resolve "${GITEA_DOMAIN}:443:${UNRAID_CADDY_IP}" "$@"
    else
        curl -s "$@"
    fi
}

curl_http() {
    if [[ "$ACCESS_MODE" == "direct" ]]; then
        curl -s --resolve "${GITEA_DOMAIN}:80:${UNRAID_CADDY_IP}" "$@"
    else
        curl -s "$@"
    fi
}

# Check 1: HTTPS works
# shellcheck disable=SC2329
check_https_version() {
    curl_https -f -o /dev/null "https://${GITEA_DOMAIN}/api/v1/version"
}
run_check "HTTPS returns 200 at https://${GITEA_DOMAIN}" check_https_version

# Check 2: HTTP redirects to HTTPS (301 or 308)
# shellcheck disable=SC2329
check_redirect() {
    local http_code
    http_code=$(curl_http -I -o /dev/null -w "%{http_code}" "http://${GITEA_DOMAIN}/")
    [[ "$http_code" == "301" || "$http_code" == "308" ]]
}
run_check "HTTP → HTTPS redirect (301/308)" check_redirect

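The checks above are named functions rather than inline curl commands so that `run_check` can invoke them uniformly via `"$@"`. `run_check` itself is defined earlier in the script (outside this hunk); this standalone stand-in is an assumption, for illustration only:

```shell
# Minimal stand-in for the run_check pattern: label + command, report pass/fail.
run_check() {
    local label="$1"; shift
    if "$@" >/dev/null 2>&1; then
        echo "PASS: ${label}"
    else
        echo "FAIL: ${label}"
    fi
}
is_even() { [ $(( $1 % 2 )) -eq 0 ]; }
result=$(run_check "42 is even" is_even 42)
echo "$result"
```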
@@ -54,17 +113,29 @@ run_check "HTTP → HTTPS redirect (301)" check_redirect
# shellcheck disable=SC2329
check_ssl_cert() {
    # Verify openssl can connect and extract an issuer from the presented cert
    local connect_target
    if [[ "$ACCESS_MODE" == "direct" ]]; then
        connect_target="${UNRAID_CADDY_IP}:443"
    else
        connect_target="${GITEA_DOMAIN}:443"
    fi
    local issuer
    issuer=$(echo | openssl s_client -connect "${connect_target}" -servername "${GITEA_DOMAIN}" 2>/dev/null | openssl x509 -noout -issuer 2>/dev/null || echo "")
    # A non-empty issuer means a certificate was presented and parsed
    [[ -n "$issuer" ]]
}
run_check "SSL certificate is valid" check_ssl_cert

# Check 4: All repos accessible via HTTPS
# shellcheck disable=SC2329
check_repo_access() {
    local repo="$1"
    curl_https -f -o /dev/null -H "Authorization: token ${GITEA_ADMIN_TOKEN}" \
        "https://${GITEA_DOMAIN}/api/v1/repos/${GITEA_ORG_NAME}/${repo}"
}
for repo in "${REPOS[@]}"; do
    run_check "Repo ${repo} accessible at https://${GITEA_DOMAIN}/${GITEA_ORG_NAME}/${repo}" \
        check_repo_access "$repo"
done

# Check 5: GitHub repos are marked as offsite backup

repo_variables.conf (new file, 14 lines)
@@ -0,0 +1,14 @@
# =============================================================================
# repo_variables.conf — Gitea Actions Repository Variables (INI format)
# Generated from GitHub repo settings. Edit as needed.
# Used by phase11_custom_runners.sh to set per-repo CI dispatch variables.
# See repo_variables.conf.example for field reference.
# =============================================================================

[augur]
CI_RUNS_ON = ["self-hosted","Linux","X64"]

[periodvault]
CI_RUNS_ON = ["self-hosted","Linux","X64"]
CI_RUNS_ON_MACOS = ["self-hosted","macOS","ARM64"]
CI_RUNS_ON_ANDROID = ["self-hosted","Linux","X64","android-emulator"]
repo_variables.conf.example (new file, 20 lines)
@@ -0,0 +1,20 @@
# =============================================================================
# repo_variables.conf — Gitea Actions Repository Variables (INI format)
# Copy to repo_variables.conf and edit.
# Used by phase11_custom_runners.sh to set per-repo CI dispatch variables.
# =============================================================================
#
# Each [section] = Gitea repository name (must exist in GITEA_ORG_NAME).
# Keys = variable names. Values = literal string set via Gitea API.
# Workflows access these as ${{ vars.VARIABLE_NAME }}.
#
# Common pattern: repos use fromJSON(vars.CI_RUNS_ON || '["ubuntu-latest"]')
# in runs-on to dynamically select runners.

#[my-go-repo]
#CI_RUNS_ON = ["self-hosted","Linux","X64"]

#[my-mobile-repo]
#CI_RUNS_ON = ["self-hosted","Linux","X64"]
#CI_RUNS_ON_MACOS = ["self-hosted","macOS","ARM64"]
#CI_RUNS_ON_ANDROID = ["self-hosted","Linux","X64","android-emulator"]
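phase11_custom_runners.sh itself is not shown in this diff; a sketch of how one of these INI values could be pushed to the Gitea Actions-variables API. The endpoint path is an assumption based on recent Gitea releases (verify against your version), and `GITEA_DOMAIN`, `GITEA_ORG_NAME`, and `GITEA_ADMIN_TOKEN` are the .env values used elsewhere in these scripts:

```shell
# Wrap the raw INI value in the JSON body the variables API expects.
value='["self-hosted","Linux","X64"]'
payload=$(jq -n --arg v "$value" '{value: $v}')
roundtrip=$(printf '%s' "$payload" | jq -r '.value')
echo "$roundtrip"
# Assumed endpoint shape (Gitea Actions repo variables):
# curl -X POST "https://${GITEA_DOMAIN}/api/v1/repos/${GITEA_ORG_NAME}/augur/actions/variables/CI_RUNS_ON" \
#   -H "Authorization: token ${GITEA_ADMIN_TOKEN}" \
#   -H "Content-Type: application/json" -d "$payload"
```

Passing the value through `jq --arg` keeps the embedded quotes and brackets intact without manual escaping.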
run_all.sh
@@ -3,11 +3,11 @@ set -euo pipefail

# =============================================================================
# run_all.sh — Orchestrate the full Gitea migration pipeline
# Runs: setup → preflight → phase 1-11 (each with post-check) sequentially.
# Stops on first failure, prints summary of what completed.
#
# Usage:
#   ./run_all.sh                    # Full run: setup + preflight + phases 1-11
#   ./run_all.sh --skip-setup       # Skip setup scripts, start at preflight
#   ./run_all.sh --start-from=3     # Run preflight, then start at phase 3
#   ./run_all.sh --skip-setup --start-from=5
@@ -28,10 +28,12 @@ require_local_os "Darwin" "run_all.sh must run from macOS (the control plane)"
SKIP_SETUP=false
START_FROM=0
START_FROM_SET=false
ALLOW_DIRECT_CHECKS=false

for arg in "$@"; do
    case "$arg" in
        --skip-setup) SKIP_SETUP=true ;;
        --allow-direct-checks) ALLOW_DIRECT_CHECKS=true ;;
        --dry-run)
            exec "${SCRIPT_DIR}/post-migration-check.sh"
            ;;
@@ -39,11 +41,11 @@ for arg in "$@"; do
            START_FROM="${arg#*=}"
            START_FROM_SET=true
            if ! [[ "$START_FROM" =~ ^[0-9]+$ ]]; then
                log_error "--start-from must be a number (1-11)"
                exit 1
            fi
            if [[ "$START_FROM" -lt 1 ]] || [[ "$START_FROM" -gt 11 ]]; then
                log_error "--start-from must be between 1 and 11"
                exit 1
            fi
            ;;
@@ -52,16 +54,19 @@ for arg in "$@"; do
Usage: $(basename "$0") [options]

Options:
  --skip-setup           Skip configure_env + machine setup, start at preflight
  --start-from=N         Skip phases before N (still runs preflight)
  --allow-direct-checks  Pass --allow-direct-checks to Phase 8 scripts
                         (LAN/split-DNS staging mode)
  --dry-run              Run read-only infrastructure check (no mutations)
  --help                 Show this help

Examples:
  $(basename "$0")                         Full run
  $(basename "$0") --skip-setup            Skip setup, start at preflight
  $(basename "$0") --start-from=3          Run preflight, then phases 3-11
  $(basename "$0") --allow-direct-checks   LAN mode: use direct Caddy-IP checks
  $(basename "$0") --dry-run               Check current state without changing anything
EOF
            exit 0 ;;
        *) log_error "Unknown argument: $arg"; exit 1 ;;
@@ -157,7 +162,7 @@ else
fi

# ---------------------------------------------------------------------------
# Phases 1-11 — run sequentially, each followed by its post-check
# The phase scripts are the "do" step, post-checks verify success.
# ---------------------------------------------------------------------------
PHASES=(
@@ -170,6 +175,8 @@ PHASES=(
    "7|Phase 7: Branch Protection|phase7_branch_protection.sh|phase7_post_check.sh"
    "8|Phase 8: Cutover|phase8_cutover.sh|phase8_post_check.sh"
    "9|Phase 9: Security|phase9_security.sh|phase9_post_check.sh"
    "10|Phase 10: Local Repo Cutover|phase10_local_repo_cutover.sh|phase10_post_check.sh"
    "11|Phase 11: Custom Runners|phase11_custom_runners.sh|phase11_post_check.sh"
)

for phase_entry in "${PHASES[@]}"; do
@@ -181,8 +188,14 @@ for phase_entry in "${PHASES[@]}"; do
        continue
    fi

    # Phase 8 scripts accept --allow-direct-checks for LAN/split-DNS setups.
    if [[ "$phase_num" -eq 8 ]] && [[ "$ALLOW_DIRECT_CHECKS" == "true" ]]; then
        run_step "$phase_name" "$phase_script" --allow-direct-checks
        run_step "${phase_name} — post-check" "$post_check" --allow-direct-checks
    else
        run_step "$phase_name" "$phase_script"
        run_step "${phase_name} — post-check" "$post_check"
    fi
done

# ---------------------------------------------------------------------------

runners-conversion/augur/.env.example (new file, 12 lines)
@@ -0,0 +1,12 @@
# .env — Shared configuration for all runner containers.
#
# Copy this file to .env and fill in your values:
#   cp .env.example .env
#
# This file is loaded by docker-compose.yml and applies to ALL runner services.
# Per-repo settings (name, labels, resources) go in envs/<repo>.env instead.

# GitHub Personal Access Token (classic or fine-grained with "repo" scope).
# Used to generate short-lived registration tokens for each runner container.
# Never stored in the runner agent — only used at container startup.
GITHUB_PAT=ghp_xxxxxxxxxxxxxxxxxxxx
runners-conversion/augur/.gitignore (vendored, new file, 6 lines)
@@ -0,0 +1,6 @@
# Ignore real env files (contain secrets like GITHUB_PAT).
# Only .env.example and envs/*.env.example are tracked.
.env
!.env.example
envs/*.env
!envs/*.env.example
runners-conversion/augur/Dockerfile (new file, 93 lines)
@@ -0,0 +1,93 @@
# Dockerfile — GitHub Actions self-hosted runner image.
#
# Includes: Ubuntu 24.04 + Go 1.26 + Node 24 + GitHub Actions runner agent.
# Designed for CI workloads on Linux x64 servers (e.g., Unraid, bare metal).
#
# Build:
#   docker build -t augur-runner .
#
# The image is also auto-built and pushed to GHCR by the
# build-runner-image.yml workflow on Dockerfile or entrypoint changes.

FROM ubuntu:24.04

# --- Metadata labels (OCI standard) ---
LABEL org.opencontainers.image.title="augur-runner" \
      org.opencontainers.image.description="GitHub Actions self-hosted runner for augur CI" \
      org.opencontainers.image.source="https://github.com/AIinfusedS/augur" \
      org.opencontainers.image.licenses="Proprietary"

# --- Layer 1: System packages (changes least often) ---
# Combined into a single layer to minimize image size.
# --no-install-recommends avoids pulling unnecessary packages.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
        git \
        jq \
        python3 \
        tini \
        sudo \
        unzip \
    && rm -rf /var/lib/apt/lists/*

# --- Layer 2: Go 1.26 (pinned version + SHA256 verification) ---
ARG GO_VERSION=1.26.0
ARG GO_SHA256=aac1b08a0fb0c4e0a7c1555beb7b59180b05dfc5a3d62e40e9de90cd42f88235
RUN curl -fsSL "https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz" -o /tmp/go.tar.gz && \
    echo "${GO_SHA256}  /tmp/go.tar.gz" | sha256sum -c - && \
    tar -C /usr/local -xzf /tmp/go.tar.gz && \
    rm /tmp/go.tar.gz
ENV PATH="/usr/local/go/bin:${PATH}"

# --- Layer 3: Node 24 LTS via NodeSource ---
ARG NODE_MAJOR=24
RUN curl -fsSL https://deb.nodesource.com/setup_${NODE_MAJOR}.x | bash - && \
    apt-get install -y --no-install-recommends nodejs && \
    rm -rf /var/lib/apt/lists/*

# --- Layer 4: Create non-root runner user (UID/GID 1000) ---
# Ubuntu 24.04 ships with an 'ubuntu' user at UID/GID 1000.
# Remove it first, then create our runner user at the same IDs.
RUN userdel -r ubuntu 2>/dev/null || true && \
    groupadd -f -g 1000 runner && \
    useradd -m -u 1000 -g runner -s /bin/bash runner && \
    echo "runner ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers.d/runner

# --- Layer 5: GitHub Actions runner agent ---
# Downloads the latest runner release for linux-x64.
# The runner agent auto-updates itself between jobs, so pinning
# the exact version here is not critical.
ARG RUNNER_ARCH=x64
RUN RUNNER_VERSION=$(curl -fsSL https://api.github.com/repos/actions/runner/releases/latest \
        | jq -r '.tag_name' | sed 's/^v//') && \
    curl -fsSL "https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-${RUNNER_ARCH}-${RUNNER_VERSION}.tar.gz" \
        -o /tmp/runner.tar.gz && \
    mkdir -p /home/runner/actions-runner && \
    tar -xzf /tmp/runner.tar.gz -C /home/runner/actions-runner && \
    rm /tmp/runner.tar.gz && \
    chown -R runner:runner /home/runner/actions-runner && \
    /home/runner/actions-runner/bin/installdependencies.sh

# --- Layer 6: Work directory (pre-create for Docker volume ownership) ---
# Docker named volumes inherit ownership from the mount point in the image.
# Creating _work as runner:runner ensures the volume is writable without sudo.
RUN mkdir -p /home/runner/_work && chown runner:runner /home/runner/_work

# --- Layer 7: Entrypoint (changes most often) ---
COPY --chown=runner:runner entrypoint.sh /home/runner/entrypoint.sh
RUN chmod +x /home/runner/entrypoint.sh

# --- Runtime configuration ---
USER runner
WORKDIR /home/runner/actions-runner

# Health check: verify the runner listener process is alive.
# start_period gives time for registration + first job pickup.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 --start-period=30s \
    CMD pgrep -f "Runner.Listener" > /dev/null || exit 1

# Use tini as PID 1 for proper signal forwarding and zombie reaping.
ENTRYPOINT ["tini", "--"]
CMD ["/home/runner/entrypoint.sh"]
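Layer 2's verify-before-unpack pattern generalizes to any pinned download; a throwaway sketch (note that `sha256sum -c` expects two spaces between hash and filename):

```shell
# Hash a scratch file, then verify it the same way Layer 2 verifies go.tar.gz.
printf 'demo payload' > /tmp/artifact_demo
sum=$(sha256sum /tmp/artifact_demo | awk '{print $1}')
check=$(echo "${sum}  /tmp/artifact_demo" | sha256sum -c -)
echo "$check"
```

A mismatched hash makes `sha256sum -c` exit nonzero, so the `&&` chain in the Dockerfile aborts the build before anything is extracted.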
runners-conversion/augur/README.md (new file, 416 lines)
@@ -0,0 +1,416 @@
# Self-Hosted GitHub Actions Runner (Docker)

Run GitHub Actions CI on your own Linux server instead of GitHub-hosted runners.
Eliminates laptop CPU burden, avoids runner-minute quotas, and gives faster feedback.

## How It Works

Each runner container:

1. Starts up, generates a short-lived registration token from your GitHub PAT
2. Registers with GitHub in **ephemeral mode** (one job per lifecycle)
3. Picks up a CI job, executes it, and exits
4. Docker's `restart: unless-stopped` brings it back for the next job
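Step 1 can be sketched with plain curl against GitHub's documented registration-token endpoint. `REPO_URL` and `GITHUB_PAT` are the env values described below; `OWNER/REPO` is a placeholder:

```shell
# Derive the registration-token endpoint from the repo URL.
REPO_URL="https://github.com/OWNER/REPO"
path="${REPO_URL#https://github.com/}"
api="https://api.github.com/repos/${path}/actions/runners/registration-token"
echo "$api"
# Inside the container this call yields a token valid for about an hour:
# token=$(curl -fsSL -X POST -H "Authorization: Bearer ${GITHUB_PAT}" "$api" | jq -r '.token')
```

The PAT never reaches the runner agent itself; only the short-lived token does.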

## Prerequisites

- Docker Engine 24+ and Docker Compose v2
- A GitHub Personal Access Token (classic) with **`repo`** and **`read:packages`** scopes
- Network access to `github.com`, `api.github.com`, and `ghcr.io`

## One-Time GitHub Setup

Before deploying, the repository needs write permissions for the image build workflow.

### Enable GHCR image builds

The `build-runner-image.yml` workflow pushes Docker images to GHCR using the
`GITHUB_TOKEN`. By default, this token is read-only and the workflow will fail
silently (zero steps executed, no runner assigned).

Fix by allowing write permissions for Actions workflows:

```bash
gh api -X PUT repos/OWNER/REPO/actions/permissions/workflow \
  -f default_workflow_permissions=write \
  -F can_approve_pull_request_reviews=false
```

Alternatively, keep read-only defaults and create a dedicated PAT secret with
`write:packages` scope, then reference it in the workflow instead of `GITHUB_TOKEN`.

### Build the runner image

Trigger the GHCR image build (first time and whenever Dockerfile/entrypoint changes):

```bash
gh workflow run build-runner-image.yml
```

Wait for the workflow to complete (~5 min):

```bash
gh run list --workflow=build-runner-image.yml --limit=1
```

The image is also rebuilt automatically:
- On push to `main` when `infra/runners/Dockerfile` or `entrypoint.sh` changes
- Weekly (Monday 06:00 UTC) to pick up OS patches and runner agent updates

## Deploy on Your Server

### Choose an image source

| Method | Files needed on server | Registry auth? | Best for |
|--------|------------------------|----------------|----------|
| **Self-hosted registry** | `docker-compose.yml`, `.env`, `envs/augur.env` | No (your network) | Production — push once, pull from any machine |
| **GHCR** | `docker-compose.yml`, `.env`, `envs/augur.env` | Yes (`docker login ghcr.io`) | GitHub-native workflow |
| **Build locally** | All 5 files (+ `Dockerfile`, `entrypoint.sh`) | No | Quick start, no registry needed |

### Option A: Self-hosted registry (recommended)

For the full end-to-end workflow (build image on your Mac, push to Unraid registry,
start runner), see the [CI Workflow Guide](../../docs/ci-workflows.md#lifecycle-2-offload-ci-to-a-server-unraid).

The private Docker registry is configured at `infra/registry/`. It listens on port 5000,
accessible from the LAN. Docker treats `localhost` registries as insecure by default —
no `daemon.json` changes needed on the server. To push from another machine, add
`<UNRAID_IP>:5000` to `insecure-registries` in that machine's Docker daemon config.
|
||||
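On the pushing machine, that daemon config entry is a small JSON fragment (the address is a placeholder for your server's LAN IP; restart Docker after editing `/etc/docker/daemon.json`, or use the equivalent field in Docker Desktop's settings):

```json
{
  "insecure-registries": ["192.168.1.50:5000"]
}
```

After that, `docker tag` and `docker push` against `<UNRAID_IP>:5000/...` work over plain HTTP.
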
### Option B: GHCR

Requires the `build-runner-image.yml` workflow to have run successfully
(see [One-Time GitHub Setup](#one-time-github-setup)).

```bash
# 1. Copy environment templates
cp .env.example .env
cp envs/augur.env.example envs/augur.env

# 2. Edit .env — set your GITHUB_PAT
# 3. Edit envs/augur.env — set REPO_URL, RUNNER_NAME, resource limits

# 4. Authenticate Docker with GHCR (one-time, persists to ~/.docker/config.json)
echo "$GITHUB_PAT" | docker login ghcr.io -u YOUR_GITHUB_USERNAME --password-stdin

# 5. Pull and start
docker compose pull
docker compose up -d

# 6. Verify runner is registered
docker compose ps
docker compose logs -f runner-augur
```

### Option C: Build locally

No registry needed — builds the image directly on the target machine.
Requires `Dockerfile` and `entrypoint.sh` alongside the compose file.

```bash
# 1. Copy environment templates
cp .env.example .env
cp envs/augur.env.example envs/augur.env

# 2. Edit .env — set your GITHUB_PAT
# 3. Edit envs/augur.env — set REPO_URL, RUNNER_NAME, resource limits

# 4. Build and start
docker compose up -d --build

# 5. Verify runner is registered
docker compose ps
docker compose logs -f runner-augur
```

### Verify the runner is online in GitHub

```bash
gh api repos/OWNER/REPO/actions/runners \
  --jq '.runners[] | {name, status, labels: [.labels[].name]}'
```

## Activate Self-Hosted CI

Set the repository variable `CI_RUNS_ON` so the CI workflow targets your runner:

```bash
gh variable set CI_RUNS_ON --body '["self-hosted", "Linux", "X64"]'
```

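For reference, a workflow typically consumes such a variable along these lines (a sketch, not necessarily the exact expression in this repo's CI workflow; the job name is illustrative): `fromJSON` turns the string into a label array, and the `||` fallback keeps GitHub-hosted runners while the variable is unset.

```yaml
jobs:
  test:
    runs-on: ${{ fromJSON(vars.CI_RUNS_ON || '["ubuntu-latest"]') }}
```
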
To revert to GitHub-hosted runners:
```bash
gh variable delete CI_RUNS_ON
```

## Configuration

### Shared Config (`.env`)

| Variable | Required | Description |
|----------|----------|-------------|
| `GITHUB_PAT` | Yes | GitHub PAT with `repo` + `read:packages` scope |

### Per-Repo Config (`envs/<repo>.env`)

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `REPO_URL` | Yes | — | Full GitHub repository URL |
| `RUNNER_NAME` | Yes | — | Unique runner name within the repo |
| `RUNNER_LABELS` | No | `self-hosted,Linux,X64` | Comma-separated runner labels |
| `RUNNER_GROUP` | No | `default` | Runner group |
| `RUNNER_IMAGE` | No | `ghcr.io/aiinfuseds/augur-runner:latest` | Docker image to use |
| `RUNNER_CPUS` | No | `6` | CPU limit for the container |
| `RUNNER_MEMORY` | No | `12G` | Memory limit for the container |

## Adding More Repos

1. Copy the per-repo env template:
   ```bash
   cp envs/augur.env.example envs/myrepo.env
   ```

2. Edit `envs/myrepo.env` — set `REPO_URL`, `RUNNER_NAME`, and resource limits.

3. Add a service block to `docker-compose.yml`:
   ```yaml
   runner-myrepo:
     image: ${RUNNER_IMAGE:-ghcr.io/aiinfuseds/augur-runner:latest}
     build: .
     env_file:
       - .env
       - envs/myrepo.env
     init: true
     read_only: true
     tmpfs:
       - /tmp:size=2G
     security_opt:
       - no-new-privileges:true
     stop_grace_period: 5m
     deploy:
       resources:
         limits:
           cpus: "${RUNNER_CPUS:-6}"
           memory: "${RUNNER_MEMORY:-12G}"
     restart: unless-stopped
     healthcheck:
       test: ["CMD", "pgrep", "-f", "Runner.Listener"]
       interval: 30s
       timeout: 5s
       retries: 3
       start_period: 30s
     logging:
       driver: json-file
       options:
         max-size: "50m"
         max-file: "3"
     volumes:
       - myrepo-work:/home/runner/_work
   ```

4. Add the volume at the bottom of `docker-compose.yml`:
   ```yaml
   volumes:
     augur-work:
     myrepo-work:
   ```

5. Start: `docker compose up -d`

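Steps 1–2 can be scripted when onboarding several repos. A minimal sketch (the `OWNER` org and the `unraid-` name prefix are placeholders; it assumes the template keeps one `KEY=value` pair per line):

```bash
repo=myrepo
cp envs/augur.env.example "envs/${repo}.env"
# Point the copy at the new repository and give the runner a unique name.
sed -i.bak \
  -e "s|^REPO_URL=.*|REPO_URL=https://github.com/OWNER/${repo}|" \
  -e "s|^RUNNER_NAME=.*|RUNNER_NAME=unraid-${repo}|" \
  "envs/${repo}.env"
rm -f "envs/${repo}.env.bak"
grep -E '^(REPO_URL|RUNNER_NAME)=' "envs/${repo}.env"
```

The `sed -i.bak` form works on both GNU and BSD sed, so the same snippet runs on Linux and macOS.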
## Scaling

Run multiple concurrent runners for the same repo:

```bash
# Scale to 3 runners for augur
docker compose up -d --scale runner-augur=3
```

Each container gets a unique runner name (Docker appends a suffix).
Set `RUNNER_NAME` to a base name like `unraid-augur` — scaled instances become
`unraid-augur-1`, `unraid-augur-2`, etc.

## Resource Tuning

Each repo can have different resource limits in its env file:

```env
# Lightweight repo (linting only)
RUNNER_CPUS=2
RUNNER_MEMORY=4G

# Heavy repo (Go builds + extensive tests)
RUNNER_CPUS=8
RUNNER_MEMORY=16G
```

### tmpfs Sizing

The `/tmp` tmpfs defaults to 2G. If your CI writes large temp files,
increase it in `docker-compose.yml`:

```yaml
tmpfs:
  - /tmp:size=4G
```

## Monitoring

```bash
# Container status and health
docker compose ps

# Live logs
docker compose logs -f runner-augur

# Last 50 log lines
docker compose logs --tail 50 runner-augur

# Resource usage
docker stats runner-augur
```

## Updating the Runner Image

To pull the latest GHCR image:
```bash
docker compose pull
docker compose up -d
```

To rebuild locally:
```bash
docker compose build
docker compose up -d
```

### Using a Self-Hosted Registry

See the [CI Workflow Guide](../../docs/ci-workflows.md#lifecycle-2-offload-ci-to-a-server-unraid)
for the full build-push-start workflow with a self-hosted registry.

## Troubleshooting

### Image build workflow fails with zero steps

The `build-runner-image.yml` workflow needs `packages: write` permission.
If the repo's default workflow permissions are read-only, the job fails
instantly (0 steps, no runner assigned). See [One-Time GitHub Setup](#one-time-github-setup).

### `docker compose pull` returns "access denied" or 403

The GHCR package inherits the repository's visibility. For private repos,
authenticate Docker first:

```bash
echo "$GITHUB_PAT" | docker login ghcr.io -u USERNAME --password-stdin
```

Or make the package public:
```bash
gh api -X PATCH /user/packages/container/augur-runner -f visibility=public
```

Or skip GHCR entirely and build locally: `docker compose build`.

### Runner doesn't appear in GitHub

1. Check logs: `docker compose logs runner-augur`
2. Verify `GITHUB_PAT` has `repo` scope
3. Verify `REPO_URL` is correct (full HTTPS URL)
4. Check network: `docker compose exec runner-augur curl -s https://api.github.com`

### Runner appears "offline"

The runner may have exited after a job. Check:
```bash
docker compose ps                     # Is the container running?
docker compose restart runner-augur   # Force restart
```

### OOM (Out of Memory) kills

Increase `RUNNER_MEMORY` in the per-repo env file:
```env
RUNNER_MEMORY=16G
```

Then: `docker compose up -d`

### Stale/ghost runners in GitHub

Ephemeral runners deregister automatically after each job. If a container
was killed ungracefully (power loss, `docker kill`), the runner may appear
stale. It will auto-expire after a few hours, or remove manually:

```bash
# List runners
gh api repos/OWNER/REPO/actions/runners --jq '.runners[] | {id, name, status}'

# Remove stale runner by ID
gh api -X DELETE repos/OWNER/REPO/actions/runners/RUNNER_ID
```

### Disk space

Check work directory volume usage:
```bash
docker system df -v
```

Clean up unused volumes:
```bash
docker compose down -v   # Remove work volumes
docker volume prune      # Remove all unused volumes
```

## Unraid Notes

- **Docker login persistence**: `docker login ghcr.io` writes credentials to
  `/root/.docker/config.json`. On Unraid, `/root` is on the USB flash drive
  and persists across reboots. Verify with `cat /root/.docker/config.json`
  after login.
- **Compose file location**: Place the 3 files (`docker-compose.yml`, `.env`,
  `envs/augur.env`) in a share directory (e.g., `/mnt/user/appdata/augur-runner/`).
- **Alternative to GHCR**: If you don't want to deal with registry auth on Unraid,
  copy the `Dockerfile` and `entrypoint.sh` alongside the compose file and use
  `docker compose up -d --build` instead. No registry needed.

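Getting those files onto the share can be a plain copy over SSH (hostname and share path are placeholders matching the example above):

```bash
ssh root@unraid 'mkdir -p /mnt/user/appdata/augur-runner/envs'
scp docker-compose.yml .env root@unraid:/mnt/user/appdata/augur-runner/
scp envs/augur.env root@unraid:/mnt/user/appdata/augur-runner/envs/
```
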
## Security

| Measure | Description |
|---------|-------------|
| Ephemeral mode | Fresh runner state per job — no cross-job contamination |
| PAT scope isolation | PAT generates a short-lived registration token; PAT never touches the runner agent |
| Non-root user | Runner process runs as UID 1000, not root |
| no-new-privileges | Prevents privilege escalation via setuid/setgid binaries |
| tini (PID 1) | Proper signal forwarding and zombie process reaping |
| Log rotation | Prevents disk exhaustion from verbose CI output (50MB x 3 files) |

### PAT Scope

Use the minimum scope required:
- **Classic token**: `repo` + `read:packages` scopes
- **Fine-grained token**: Repository access → Only select repositories → Read and Write for Administration

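To double-check what a classic token actually grants, the GitHub API echoes the granted scopes in a response header (classic PATs only; fine-grained tokens do not report scopes this way):

```bash
curl -sS -I -H "Authorization: token ${GITHUB_PAT}" https://api.github.com/user \
  | grep -i '^x-oauth-scopes:'
```
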
### Network Considerations

The runner container needs outbound access to:
- `github.com` (clone repos, download actions)
- `api.github.com` (registration, status)
- `ghcr.io` (pull runner image — only if using GHCR)
- Package registries (`proxy.golang.org`, `registry.npmjs.org`, etc.)

No inbound ports are required.

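A quick egress probe from inside a running container (the endpoint list mirrors the bullets above; assumes `curl` is present in the runner image):

```bash
for host in github.com api.github.com ghcr.io proxy.golang.org registry.npmjs.org; do
  docker compose exec runner-augur \
    curl -sS -o /dev/null -w "%{http_code}  ${host}\n" "https://${host}" || echo "FAIL  ${host}"
done
```
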
## Stopping and Removing

```bash
# Stop runners (waits for stop_grace_period)
docker compose down

# Stop and remove work volumes
docker compose down -v

# Stop, remove volumes, and delete the locally built image
docker compose down -v --rmi local
```

**`runners-conversion/augur/actions-local.sh`** (new executable file, 496 lines)

#!/usr/bin/env bash
# actions-local.sh — Setup/start/stop local GitHub Actions runtime on macOS.
#
# This script prepares and manages local execution of workflows with `act`.
# Default runtime is Colima (free, local Docker daemon).
#
# Typical flow:
#   1) ./scripts/actions-local.sh --mode setup
#   2) ./scripts/actions-local.sh --mode start
#   3) act -W .github/workflows/ci-quality-gates.yml
#   4) ./scripts/actions-local.sh --mode stop

set -euo pipefail

MODE=""
RUNTIME="auto"
RUNTIME_EXPLICIT=false
REFRESH_BREW=false

COLIMA_PROFILE="${AUGUR_ACTIONS_COLIMA_PROFILE:-augur-actions}"
COLIMA_CPU="${AUGUR_ACTIONS_COLIMA_CPU:-4}"
COLIMA_MEMORY_GB="${AUGUR_ACTIONS_COLIMA_MEMORY_GB:-8}"
COLIMA_DISK_GB="${AUGUR_ACTIONS_COLIMA_DISK_GB:-60}"
WAIT_TIMEOUT_SEC="${AUGUR_ACTIONS_WAIT_TIMEOUT_SEC:-180}"

STATE_DIR="${TMPDIR:-/tmp}"
STATE_FILE="${STATE_DIR%/}/augur-actions-local.state"

STATE_RUNTIME=""
STATE_PROFILE=""
STATE_STARTED_BY_SCRIPT="0"

usage() {
  cat <<'EOF'
Usage:
  ./scripts/actions-local.sh --mode <setup|start|stop> [options]

Required:
  --mode MODE            One of: setup, start, stop

Options:
  --runtime RUNTIME      Runtime choice: auto, colima, docker-desktop (default: auto)
  --refresh-brew         In setup mode, force brew metadata refresh even if nothing is missing
  --colima-profile NAME  Colima profile name (default: augur-actions)
  --cpu N                Colima CPU count for start (default: 4)
  --memory-gb N          Colima memory (GB) for start (default: 8)
  --disk-gb N            Colima disk (GB) for start (default: 60)
  -h, --help             Show this help

Examples:
  ./scripts/actions-local.sh --mode setup
  ./scripts/actions-local.sh --mode start
  ./scripts/actions-local.sh --mode start --runtime colima --cpu 6 --memory-gb 12
  ./scripts/actions-local.sh --mode stop
  ./scripts/actions-local.sh --mode stop --runtime colima

Environment overrides:
  AUGUR_ACTIONS_COLIMA_PROFILE
  AUGUR_ACTIONS_COLIMA_CPU
  AUGUR_ACTIONS_COLIMA_MEMORY_GB
  AUGUR_ACTIONS_COLIMA_DISK_GB
  AUGUR_ACTIONS_WAIT_TIMEOUT_SEC
EOF
}

log() {
  printf '[actions-local] %s\n' "$*"
}

warn() {
  printf '[actions-local] WARNING: %s\n' "$*" >&2
}

die() {
  printf '[actions-local] ERROR: %s\n' "$*" >&2
  exit 1
}

require_cmd() {
  local cmd="$1"
  command -v "$cmd" >/dev/null 2>&1 || die "required command not found: $cmd"
}

ensure_macos() {
  local os
  os="$(uname -s)"
  [[ "$os" == "Darwin" ]] || die "This script currently supports macOS only."
}

parse_args() {
  while [[ $# -gt 0 ]]; do
    case "$1" in
      --mode)
        shift
        [[ $# -gt 0 ]] || die "--mode requires a value"
        MODE="$1"
        shift
        ;;
      --runtime)
        shift
        [[ $# -gt 0 ]] || die "--runtime requires a value"
        RUNTIME="$1"
        RUNTIME_EXPLICIT=true
        shift
        ;;
      --refresh-brew)
        REFRESH_BREW=true
        shift
        ;;
      --colima-profile)
        shift
        [[ $# -gt 0 ]] || die "--colima-profile requires a value"
        COLIMA_PROFILE="$1"
        shift
        ;;
      --cpu)
        shift
        [[ $# -gt 0 ]] || die "--cpu requires a value"
        COLIMA_CPU="$1"
        shift
        ;;
      --memory-gb)
        shift
        [[ $# -gt 0 ]] || die "--memory-gb requires a value"
        COLIMA_MEMORY_GB="$1"
        shift
        ;;
      --disk-gb)
        shift
        [[ $# -gt 0 ]] || die "--disk-gb requires a value"
        COLIMA_DISK_GB="$1"
        shift
        ;;
      -h|--help)
        usage
        exit 0
        ;;
      *)
        die "unknown argument: $1"
        ;;
    esac
  done

  [[ -n "$MODE" ]] || die "--mode is required (setup|start|stop)"
  case "$MODE" in
    setup|start|stop) ;;
    *) die "invalid --mode: $MODE (expected setup|start|stop)" ;;
  esac

  case "$RUNTIME" in
    auto|colima|docker-desktop) ;;
    *) die "invalid --runtime: $RUNTIME (expected auto|colima|docker-desktop)" ;;
  esac
}

ensure_command_line_tools() {
  if xcode-select -p >/dev/null 2>&1; then
    log "Xcode Command Line Tools already installed."
    return
  fi

  log "Xcode Command Line Tools missing; attempting automated install..."
  local marker="/tmp/.com.apple.dt.CommandLineTools.installondemand.in-progress"
  local label=""

  touch "$marker"
  label="$(softwareupdate -l 2>/dev/null | sed -n 's/^\* Label: //p' | grep 'Command Line Tools' | tail -n1 || true)"
  rm -f "$marker"

  if [[ -n "$label" ]]; then
    sudo softwareupdate -i "$label" --verbose
    sudo xcode-select --switch /Library/Developer/CommandLineTools
  else
    warn "Could not auto-detect Command Line Tools package; launching GUI installer."
    xcode-select --install || true
    die "Finish installing Command Line Tools, then re-run setup."
  fi

  xcode-select -p >/dev/null 2>&1 || die "Command Line Tools installation did not complete."
  log "Xcode Command Line Tools installed."
}

ensure_homebrew() {
  if command -v brew >/dev/null 2>&1; then
    log "Homebrew already installed."
  else
    require_cmd curl
    log "Installing Homebrew..."
    NONINTERACTIVE=1 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  fi

  if [[ -x /opt/homebrew/bin/brew ]]; then
    eval "$(/opt/homebrew/bin/brew shellenv)"
  elif [[ -x /usr/local/bin/brew ]]; then
    eval "$(/usr/local/bin/brew shellenv)"
  elif command -v brew >/dev/null 2>&1; then
    eval "$("$(command -v brew)" shellenv)"
  else
    die "Homebrew not found after installation."
  fi

  log "Homebrew ready: $(brew --version | head -n1)"
}

install_brew_formula_if_missing() {
  local formula="$1"
  if brew list --versions "$formula" >/dev/null 2>&1; then
    log "Already installed: $formula"
  else
    log "Installing: $formula"
    brew install "$formula"
  fi
}

list_missing_formulas() {
  local formulas=("$@")
  local -a missing=()
  local formula
  for formula in "${formulas[@]}"; do
    if ! brew list --versions "$formula" >/dev/null 2>&1; then
      missing+=("$formula")
    fi
  done
  if [[ "${#missing[@]}" -gt 0 ]]; then
    printf '%s\n' "${missing[@]}"
  fi
}

colima_context_name() {
  local profile="$1"
  if [[ "$profile" == "default" ]]; then
    printf 'colima'
  else
    printf 'colima-%s' "$profile"
  fi
}

colima_is_running() {
  local out
  out="$(colima status --profile "$COLIMA_PROFILE" 2>&1 || true)"
  if printf '%s' "$out" | grep -qi "not running"; then
    return 1
  fi
  if printf '%s' "$out" | grep -qi "running"; then
    return 0
  fi
  return 1
}

docker_ready() {
  docker info >/dev/null 2>&1
}

wait_for_docker() {
  local waited=0
  while ! docker_ready; do
    if (( waited >= WAIT_TIMEOUT_SEC )); then
      die "Docker daemon not ready after ${WAIT_TIMEOUT_SEC}s."
    fi
    sleep 2
    waited=$((waited + 2))
  done
}

write_state() {
  local runtime="$1"
  local started="$2"
  cat > "$STATE_FILE" <<EOF
runtime=$runtime
profile=$COLIMA_PROFILE
started_by_script=$started
timestamp=$(date -u +%Y-%m-%dT%H:%M:%SZ)
EOF
}

read_state() {
  STATE_RUNTIME=""
  STATE_PROFILE=""
  STATE_STARTED_BY_SCRIPT="0"

  [[ -f "$STATE_FILE" ]] || return 0

  while IFS='=' read -r key value; do
    case "$key" in
      runtime) STATE_RUNTIME="$value" ;;
      profile) STATE_PROFILE="$value" ;;
      started_by_script) STATE_STARTED_BY_SCRIPT="$value" ;;
    esac
  done < "$STATE_FILE"
}

resolve_runtime_auto() {
  if command -v colima >/dev/null 2>&1; then
    printf 'colima'
    return
  fi

  if [[ -d "/Applications/Docker.app" ]] || command -v docker >/dev/null 2>&1; then
    printf 'docker-desktop'
    return
  fi

  die "No supported runtime found. Run setup first."
}

start_colima_runtime() {
  require_cmd colima
  require_cmd docker
  require_cmd act

  local started="0"
  if colima_is_running; then
    log "Colima profile '${COLIMA_PROFILE}' is already running."
  else
    log "Starting Colima profile '${COLIMA_PROFILE}' (cpu=${COLIMA_CPU}, memory=${COLIMA_MEMORY_GB}GB, disk=${COLIMA_DISK_GB}GB)..."
    colima start --profile "$COLIMA_PROFILE" --cpu "$COLIMA_CPU" --memory "$COLIMA_MEMORY_GB" --disk "$COLIMA_DISK_GB"
    started="1"
  fi

  local context
  context="$(colima_context_name "$COLIMA_PROFILE")"
  if docker context ls --format '{{.Name}}' | grep -Fxq "$context"; then
    docker context use "$context" >/dev/null 2>&1 || true
  fi

  wait_for_docker
  write_state "colima" "$started"

  log "Runtime ready (colima)."
  log "Try: act -W .github/workflows/ci-quality-gates.yml"
}

start_docker_desktop_runtime() {
  require_cmd docker
  require_cmd act
  require_cmd open

  local started="0"
  if docker_ready; then
    log "Docker daemon already running."
  else
    log "Starting Docker Desktop..."
    open -ga Docker
    started="1"
  fi

  wait_for_docker
  write_state "docker-desktop" "$started"

  log "Runtime ready (docker-desktop)."
  log "Try: act -W .github/workflows/ci-quality-gates.yml"
}

stop_colima_runtime() {
  require_cmd colima

  if colima_is_running; then
    log "Stopping Colima profile '${COLIMA_PROFILE}'..."
    colima stop --profile "$COLIMA_PROFILE"
  else
    log "Colima profile '${COLIMA_PROFILE}' is already stopped."
  fi
}

stop_docker_desktop_runtime() {
  require_cmd osascript

  log "Stopping Docker Desktop..."
  osascript -e 'quit app "Docker"' >/dev/null 2>&1 || true
}

do_setup() {
  ensure_macos
  ensure_command_line_tools
  ensure_homebrew
  local required_formulas=(git act colima docker)
  local missing_formulas=()
  local missing_formula
  while IFS= read -r missing_formula; do
    [[ -n "$missing_formula" ]] || continue
    missing_formulas+=("$missing_formula")
  done < <(list_missing_formulas "${required_formulas[@]}" || true)

  if [[ "${#missing_formulas[@]}" -eq 0 ]]; then
    log "All required formulas already installed: ${required_formulas[*]}"
    if [[ "$REFRESH_BREW" == "true" ]]; then
      log "Refreshing Homebrew metadata (--refresh-brew)..."
      brew update
    else
      log "Skipping brew update; nothing to install."
    fi
    log "Setup complete (no changes required)."
    log "Next: ./scripts/actions-local.sh --mode start"
    return
  fi

  log "Missing formulas detected: ${missing_formulas[*]}"
  log "Updating Homebrew metadata..."
  brew update

  local formula
  for formula in "${required_formulas[@]}"; do
    install_brew_formula_if_missing "$formula"
  done

  log "Setup complete."
  log "Next: ./scripts/actions-local.sh --mode start"
}

do_start() {
  ensure_macos

  local selected_runtime="$RUNTIME"
  if [[ "$selected_runtime" == "auto" ]]; then
    selected_runtime="$(resolve_runtime_auto)"
  fi

  case "$selected_runtime" in
    colima)
      start_colima_runtime
      ;;
    docker-desktop)
      start_docker_desktop_runtime
      ;;
    *)
      die "unsupported runtime: $selected_runtime"
      ;;
  esac
}

do_stop() {
  ensure_macos
  read_state

  local selected_runtime="$RUNTIME"
  local should_stop="1"

  if [[ "$selected_runtime" == "auto" ]]; then
    if [[ -n "$STATE_RUNTIME" ]]; then
      selected_runtime="$STATE_RUNTIME"
      if [[ -n "$STATE_PROFILE" ]]; then
        COLIMA_PROFILE="$STATE_PROFILE"
      fi
      if [[ "$STATE_STARTED_BY_SCRIPT" != "1" ]]; then
        should_stop="0"
      fi
    else
      if command -v colima >/dev/null 2>&1; then
        selected_runtime="colima"
      elif [[ -d "/Applications/Docker.app" ]] || command -v docker >/dev/null 2>&1; then
        selected_runtime="docker-desktop"
      else
        log "No local Actions runtime is installed or tracked. Nothing to stop."
        return
      fi
      should_stop="0"
    fi
  fi

  if [[ "$should_stop" != "1" && "$RUNTIME_EXPLICIT" != "true" ]]; then
    log "No runtime started by this script is currently tracked. Nothing to stop."
    log "Pass --runtime colima or --runtime docker-desktop to force a stop."
    return
  fi

  case "$selected_runtime" in
    colima)
      stop_colima_runtime
      ;;
    docker-desktop)
      stop_docker_desktop_runtime
      ;;
    *)
      die "unsupported runtime: $selected_runtime"
      ;;
  esac

  if [[ -f "$STATE_FILE" ]]; then
    rm -f "$STATE_FILE"
  fi

  log "Stop complete."
}

main() {
  parse_args "$@"

  case "$MODE" in
    setup) do_setup ;;
    start) do_start ;;
    stop) do_stop ;;
    *) die "unexpected mode: $MODE" ;;
  esac
}

main "$@"

**`runners-conversion/augur/check-browser-parity.sh`** (new executable file, 111 lines)

#!/usr/bin/env bash
# check-browser-parity.sh — Verify Chrome/Firefox extension parity for all providers.
#
# Compares all source files between Firefox (-exporter-extension) and Chrome
# (-exporter-chrome) variants. The only allowed difference is the
# browser_specific_settings.gecko block in manifest.json.
#
# Usage: scripts/check-browser-parity.sh

set -euo pipefail

REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
EXT_ROOT="$REPO_ROOT/browser-extensions/history-extensions"

PROVIDERS=(gemini copilot deepseek grok perplexity poe)

# Files that must be byte-identical between variants.
PARITY_FILES=(
  src/content/content.js
  src/lib/export.js
  src/lib/popup-core.js
  src/lib/popup-utils.js
  src/popup/popup.js
  src/popup/popup.html
  src/popup/popup.css
  src/popup/permissions.html
)

log() {
  printf '[parity] %s\n' "$*"
}

err() {
  printf '[parity] FAIL: %s\n' "$*" >&2
}

failures=0
checks=0

for provider in "${PROVIDERS[@]}"; do
  firefox_dir="$EXT_ROOT/${provider}-exporter-extension"
  chrome_dir="$EXT_ROOT/${provider}-exporter-chrome"

  if [[ ! -d "$firefox_dir" ]]; then
    err "$provider — Firefox directory missing: $firefox_dir"
    failures=$((failures + 1))
    continue
  fi

  if [[ ! -d "$chrome_dir" ]]; then
    err "$provider — Chrome directory missing: $chrome_dir"
    failures=$((failures + 1))
    continue
  fi

  for file in "${PARITY_FILES[@]}"; do
    checks=$((checks + 1))
    ff_path="$firefox_dir/$file"
    cr_path="$chrome_dir/$file"

    if [[ ! -f "$ff_path" ]]; then
      err "$provider — Firefox missing: $file"
      failures=$((failures + 1))
      continue
    fi

    if [[ ! -f "$cr_path" ]]; then
      err "$provider — Chrome missing: $file"
      failures=$((failures + 1))
      continue
    fi

    if ! diff -q "$ff_path" "$cr_path" >/dev/null 2>&1; then
      err "$provider — $file differs between Firefox and Chrome"
      failures=$((failures + 1))
    fi
  done

  # Validate manifest.json: only browser_specific_settings.gecko should differ.
  checks=$((checks + 1))
  ff_manifest="$firefox_dir/manifest.json"
  cr_manifest="$chrome_dir/manifest.json"

  if [[ ! -f "$ff_manifest" || ! -f "$cr_manifest" ]]; then
    err "$provider — manifest.json missing from one or both variants"
    failures=$((failures + 1))
    continue
  fi

  # Strip browser_specific_settings block from Firefox manifest and normalize
  # trailing commas so the remaining JSON structure matches Chrome.
  ff_stripped=$(sed '/"browser_specific_settings"/,/^  }/d' "$ff_manifest" | sed '/^$/d' | sed 's/,$//')
  cr_stripped=$(sed '/^$/d' "$cr_manifest" | sed 's/,$//')

  if ! diff -q <(echo "$ff_stripped") <(echo "$cr_stripped") >/dev/null 2>&1; then
    err "$provider — manifest.json has unexpected differences beyond browser_specific_settings"
    failures=$((failures + 1))
  fi
done

echo ""
passed=$((checks - failures))
log "Results: ${passed} passed, ${failures} failed (${checks} checks across ${#PROVIDERS[@]} providers)"

if [[ "$failures" -gt 0 ]]; then
  err "Browser parity check failed."
  exit 1
fi

log "All browser parity checks passed."
exit 0

**`runners-conversion/augur/check-contract-drift.sh`** (new executable file, 115 lines)

#!/usr/bin/env bash
# check-contract-drift.sh — Enforce Constitution Principle V (contracts stay in lock-step).
#
# Fails when boundary-signature changes are detected under internal layers without
# any update under contracts/*.md in the same diff range.

set -euo pipefail

REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
cd "$REPO_ROOT"

log() {
  printf '[contract-drift] %s\n' "$*"
}

err() {
  printf '[contract-drift] ERROR: %s\n' "$*" >&2
}

resolve_range() {
  if [[ -n "${AUGUR_CONTRACT_DRIFT_RANGE:-}" ]]; then
    printf '%s' "$AUGUR_CONTRACT_DRIFT_RANGE"
    return 0
  fi

  if [[ -n "${GITHUB_BASE_REF:-}" ]]; then
    git fetch --no-tags --depth=1 origin "$GITHUB_BASE_REF" >/dev/null 2>&1 || true
    printf 'origin/%s...HEAD' "$GITHUB_BASE_REF"
    return 0
  fi

  if [[ -n "${GITHUB_EVENT_BEFORE:-}" ]] && [[ -n "${GITHUB_SHA:-}" ]] && [[ "$GITHUB_EVENT_BEFORE" != "0000000000000000000000000000000000000000" ]]; then
    printf '%s...%s' "$GITHUB_EVENT_BEFORE" "$GITHUB_SHA"
    return 0
  fi

  if git rev-parse --verify HEAD~1 >/dev/null 2>&1; then
    printf 'HEAD~1...HEAD'
    return 0
  fi

  printf ''
}

USE_WORKTREE="${AUGUR_CONTRACT_DRIFT_USE_WORKTREE:-0}"
RANGE=""
if [[ "$USE_WORKTREE" == "1" ]]; then
  log "Diff source: working tree (HEAD -> working tree)"
  changed_files="$(git diff --name-only)"
else
  RANGE="$(resolve_range)"
  if [[ -z "$RANGE" ]]; then
    log "No diff range could be resolved; skipping contract drift check."
    exit 0
  fi
  log "Diff range: $RANGE"
  changed_files="$(git diff --name-only "$RANGE")"
fi

if [[ -z "$changed_files" ]]; then
  log "No changed files in range; skipping."
  exit 0
fi

if printf '%s\n' "$changed_files" | grep -Eq '^contracts/.*\.md$'; then
  log "Contract files changed in range; check passed."
  exit 0
fi

# Boundary-sensitive files that define cross-layer contracts.
boundary_files="$(printf '%s\n' "$changed_files" | grep -E '^internal/(cli|service|provider|storage|sync|model)/.*\.go$' || true)"

if [[ -z "$boundary_files" ]]; then
  log "No boundary-sensitive Go files changed; check passed."
  exit 0
fi

violations=()
|
||||
|
||||
while IFS= read -r file; do
|
||||
[[ -z "$file" ]] && continue
|
||||
|
||||
# Canonical model and provider interface are always contract-relevant.
|
||||
if [[ "$file" == "internal/model/conversation.go" ]] || [[ "$file" == "internal/provider/provider.go" ]]; then
|
||||
violations+=("$file")
|
||||
continue
|
||||
fi
|
||||
|
||||
# Heuristic: exported symbol signature/shape changes in boundary layers are contract-relevant.
|
||||
# Matches exported funcs, exported interfaces, and exported struct fields with JSON tags.
|
||||
diff_output=""
|
||||
if [[ "$USE_WORKTREE" == "1" ]]; then
|
||||
diff_output="$(git diff -U0 -- "$file")"
|
||||
else
|
||||
diff_output="$(git diff -U0 "$RANGE" -- "$file")"
|
||||
fi
|
||||
|
||||
if printf '%s\n' "$diff_output" | grep -Eq '^[+-](func (\([^)]*\) )?[A-Z][A-Za-z0-9_]*\(|type [A-Z][A-Za-z0-9_]* interface|[[:space:]]+[A-Z][A-Za-z0-9_]*[[:space:]].*`json:"[^"]+"`)'; then
|
||||
violations+=("$file")
|
||||
fi
|
||||
done <<< "$boundary_files"
|
||||
|
||||
if [[ "${#violations[@]}" -eq 0 ]]; then
|
||||
log "No contract-relevant signature drift detected; check passed."
|
||||
exit 0
|
||||
fi
|
||||
|
||||
err "Contract drift detected: contract-relevant files changed without contracts/*.md updates."
|
||||
err "Update the applicable contract file(s) in contracts/ in the same change."
|
||||
err "Impacted files:"
|
||||
for file in "${violations[@]}"; do
|
||||
err " - $file"
|
||||
done
|
||||
|
||||
exit 1
|
||||
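The exported-symbol heuristic can be probed in isolation. A minimal sketch using two hypothetical added diff lines (not from the repo), fed through the same pattern the script passes to `grep -Eq`; the exported method signature should match, the unexported struct field should not:

```shell
# Count which of two hypothetical '+' diff lines trip the exported-symbol
# heuristic. Line 1 is an exported method signature (matches alternative 1);
# line 2 is an unexported struct field without a JSON tag (matches nothing).
printf '%s\n' \
  '+func (s *Service) ExportConversations(ctx context.Context) error {' \
  '+    id string' \
  | grep -cE '^[+-](func (\([^)]*\) )?[A-Z][A-Za-z0-9_]*\(|type [A-Z][A-Za-z0-9_]* interface|[[:space:]]+[A-Z][A-Za-z0-9_]*[[:space:]].*`json:"[^"]+"`)'
```

`grep -c` prints the match count, so this reports 1: only the exported signature line is considered contract-relevant.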
83
runners-conversion/augur/check-coverage-thresholds.sh
Executable file
@@ -0,0 +1,83 @@
#!/usr/bin/env bash
# check-coverage-thresholds.sh — Enforce minimum test coverage for critical packages.
#
# Runs `go test -cover` on specified packages and fails if any package
# drops below its defined minimum coverage threshold.
#
# Usage: scripts/check-coverage-thresholds.sh

set -euo pipefail

REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
cd "$REPO_ROOT"

log() {
  printf '[coverage] %s\n' "$*"
}

err() {
  printf '[coverage] FAIL: %s\n' "$*" >&2
}

# Package thresholds: "package_path:minimum_percent"
# Set ~2% below current values to catch regressions without blocking on noise.
THRESHOLDS=(
  "internal/sync:70"
  "internal/storage:60"
  "internal/service:50"
  "internal/service/conversion:80"
  "internal/cli:30"
  "internal/model:40"
)

failures=0
passes=0

for entry in "${THRESHOLDS[@]}"; do
  pkg="${entry%%:*}"
  threshold="${entry##*:}"

  # Run go test with coverage and extract percentage
  output=$(go test -cover "./$pkg" 2>&1) || {
    err "$pkg — tests failed"
    failures=$((failures + 1))
    continue
  }

  # Extract coverage percentage (e.g., "coverage: 72.1% of statements")
  coverage=$(echo "$output" | grep -oE 'coverage: [0-9]+\.[0-9]+%' | grep -oE '[0-9]+\.[0-9]+' || echo "0.0")

  if [[ -z "$coverage" || "$coverage" == "0.0" ]]; then
    # Package might have no test files or no statements
    if echo "$output" | grep -q '\[no test files\]'; then
      err "$pkg — no test files (threshold: ${threshold}%)"
      failures=$((failures + 1))
    else
      err "$pkg — could not determine coverage (threshold: ${threshold}%)"
      failures=$((failures + 1))
    fi
    continue
  fi

  # Compare using awk for floating-point comparison
  passed=$(awk "BEGIN { print ($coverage >= $threshold) ? 1 : 0 }")

  if [[ "$passed" -eq 1 ]]; then
    log "$pkg: ${coverage}% >= ${threshold}% threshold"
    passes=$((passes + 1))
  else
    err "$pkg: ${coverage}% < ${threshold}% threshold"
    failures=$((failures + 1))
  fi
done

echo ""
log "Results: ${passes} passed, ${failures} failed (${#THRESHOLDS[@]} packages checked)"

if [[ "$failures" -gt 0 ]]; then
  err "Coverage threshold check failed."
  exit 1
fi

log "All coverage thresholds met."
exit 0
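The threshold entries are split with bash parameter expansion, and the numeric comparison is delegated to awk because bash arithmetic is integer-only. A standalone sketch of both steps, using one of the entries from the array above:

```shell
entry="internal/service/conversion:80"
pkg="${entry%%:*}"        # remove from the first ':' to the end -> package path
threshold="${entry##*:}"  # remove up to the last ':' -> minimum percent
echo "$pkg $threshold"

# Floating-point comparison via awk; [[ ... -ge ... ]] would reject "82.4".
coverage="82.4"
awk "BEGIN { print ($coverage >= $threshold) ? 1 : 0 }"
```

This prints the split fields on one line and then 1, since 82.4 clears the 80% threshold.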
171
runners-conversion/augur/ci-local.sh
Executable file
@@ -0,0 +1,171 @@
#!/usr/bin/env bash
# ci-local.sh — Run augur CI quality gates locally.
# Mirrors .github/workflows/ci-quality-gates.yml without GitHub-hosted runners.

set -euo pipefail

REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
cd "$REPO_ROOT"

run_contracts=false
run_backend=false
run_extensions=false
explicit_stage=false
skip_install=false

declare -a suites=()
declare -a default_suites=(
  "tests"
  "tests-copilot"
  "tests-deepseek"
  "tests-perplexity"
  "tests-grok"
  "tests-poe"
)

usage() {
  cat <<'EOF'
Usage: ./scripts/ci-local.sh [options]

Runs local CI gates equivalent to .github/workflows/ci-quality-gates.yml:
  1) contracts  -> scripts/check-contract-drift.sh
  2) backend    -> go mod download, go vet ./..., go test ./... -count=1
  3) extensions -> npm ci + npm test in each extension test suite

If no stage options are provided, all stages run.

Options:
  --contracts      Run contracts drift check stage
  --backend        Run backend Go stage
  --extensions     Run extension Jest stage
  --suite NAME     Extension suite name (repeatable), e.g. tests-deepseek
  --skip-install   Skip dependency install steps (go mod download, npm ci)
  -h, --help       Show this help

Examples:
  ./scripts/ci-local.sh
  ./scripts/ci-local.sh --backend
  ./scripts/ci-local.sh --extensions --suite tests --suite tests-copilot
  ./scripts/ci-local.sh --contracts --backend --skip-install
EOF
}

log() {
  printf '[ci-local] %s\n' "$*"
}

die() {
  printf '[ci-local] ERROR: %s\n' "$*" >&2
  exit 1
}

require_cmd() {
  local cmd="$1"
  command -v "$cmd" >/dev/null 2>&1 || die "required command not found: $cmd"
}

parse_args() {
  while [[ $# -gt 0 ]]; do
    case "$1" in
      --contracts)
        explicit_stage=true
        run_contracts=true
        shift
        ;;
      --backend)
        explicit_stage=true
        run_backend=true
        shift
        ;;
      --extensions)
        explicit_stage=true
        run_extensions=true
        shift
        ;;
      --suite)
        shift
        [[ $# -gt 0 ]] || die "--suite requires a value"
        suites+=("$1")
        shift
        ;;
      --skip-install)
        skip_install=true
        shift
        ;;
      -h|--help)
        usage
        exit 0
        ;;
      *)
        die "unknown argument: $1"
        ;;
    esac
  done

  if [[ "$explicit_stage" == false ]]; then
    run_contracts=true
    run_backend=true
    run_extensions=true
  fi

  if [[ "${#suites[@]}" -eq 0 ]]; then
    suites=("${default_suites[@]}")
  fi
}

run_contracts_stage() {
  require_cmd git
  log "Stage: contracts"
  AUGUR_CONTRACT_DRIFT_USE_WORKTREE=1 scripts/check-contract-drift.sh
}

run_backend_stage() {
  require_cmd go
  log "Stage: backend"
  if [[ "$skip_install" == false ]]; then
    go mod download
  fi
  go vet ./...
  go test ./... -count=1
}

run_extensions_stage() {
  require_cmd npm
  log "Stage: extensions"
  for suite in "${suites[@]}"; do
    local suite_dir="$REPO_ROOT/browser-extensions/history-extensions/$suite"
    [[ -d "$suite_dir" ]] || die "extension suite directory not found: $suite_dir"
    log "Suite: $suite"
    if [[ "$skip_install" == false ]]; then
      (cd "$suite_dir" && npm ci)
    fi
    (cd "$suite_dir" && npm test -- --runInBand)
  done
}

main() {
  parse_args "$@"

  local started_at
  started_at="$(date +%s)"
  log "Starting local CI pipeline in $REPO_ROOT"

  if [[ "$run_contracts" == true ]]; then
    run_contracts_stage
  fi

  if [[ "$run_backend" == true ]]; then
    run_backend_stage
  fi

  if [[ "$run_extensions" == true ]]; then
    run_extensions_stage
  fi

  local ended_at duration
  ended_at="$(date +%s)"
  duration="$((ended_at - started_at))"
  log "All selected stages passed (${duration}s)."
}

main "$@"
69
runners-conversion/augur/docker-compose.yml
Normal file
@@ -0,0 +1,69 @@
# docker-compose.yml — GitHub Actions self-hosted runner orchestration.
#
# All configuration is injected via environment files:
#   - .env           → shared config (GITHUB_PAT)
#   - envs/augur.env → per-repo config (identity, labels, resource limits)
#
# Quick start:
#   cp .env.example .env && cp envs/augur.env.example envs/augur.env
#   # Edit both files with your values
#   docker compose up -d
#
# To add another repo: copy envs/augur.env.example to envs/<repo>.env,
# fill in values, and add a matching service block below.

x-runner-common: &runner-common
  image: ${RUNNER_IMAGE:-ghcr.io/aiinfuseds/augur-runner:latest}
  build: .
  env_file:
    - .env
    - envs/augur.env
  tmpfs:
    - /tmp:size=2G,exec
  security_opt:
    - no-new-privileges:true
  stop_grace_period: 5m
  deploy:
    resources:
      limits:
        cpus: "${RUNNER_CPUS:-4}"
        memory: "${RUNNER_MEMORY:-4G}"
  restart: unless-stopped
  healthcheck:
    test: ["CMD", "pgrep", "-f", "Runner.Listener"]
    interval: 30s
    timeout: 5s
    retries: 3
    start_period: 30s
  logging:
    driver: json-file
    options:
      max-size: "50m"
      max-file: "3"

services:
  runner-augur-1:
    <<: *runner-common
    environment:
      RUNNER_NAME: unraid-augur-1
    volumes:
      - augur-work-1:/home/runner/_work

  runner-augur-2:
    <<: *runner-common
    environment:
      RUNNER_NAME: unraid-augur-2
    volumes:
      - augur-work-2:/home/runner/_work

  runner-augur-3:
    <<: *runner-common
    environment:
      RUNNER_NAME: unraid-augur-3
    volumes:
      - augur-work-3:/home/runner/_work

volumes:
  augur-work-1:
  augur-work-2:
  augur-work-3:
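Adding a runner for another repo follows the pattern the header comment describes: a new env file plus a service block reusing the shared anchor. A hypothetical sketch (the repo name, env file, and volume name are placeholders); note that an explicit `env_file` key on the service overrides the one merged in from the anchor:

```yaml
  # Hypothetical additional service — create envs/otherrepo.env first.
  runner-otherrepo-1:
    <<: *runner-common
    env_file:            # overrides the anchor's env_file list
      - .env
      - envs/otherrepo.env
    environment:
      RUNNER_NAME: unraid-otherrepo-1
    volumes:
      - otherrepo-work-1:/home/runner/_work
```

The matching named volume (here `otherrepo-work-1:`) must also be declared under the top-level `volumes:` key.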
161
runners-conversion/augur/entrypoint.sh
Executable file
@@ -0,0 +1,161 @@
#!/usr/bin/env bash
# entrypoint.sh — Container startup script for the GitHub Actions runner.
#
# Lifecycle:
#   1. Validate required environment variables
#   2. Generate a short-lived registration token from GITHUB_PAT
#   3. Configure the runner in ephemeral mode (one job, then exit)
#   4. Trap SIGTERM/SIGINT for graceful deregistration
#   5. Start the runner (run.sh)
#
# Docker's restart policy (restart: unless-stopped) brings the container
# back after each job completes, repeating this cycle.

set -euo pipefail

# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------

RUNNER_DIR="/home/runner/actions-runner"
RUNNER_LABELS="${RUNNER_LABELS:-self-hosted,Linux,X64}"
RUNNER_GROUP="${RUNNER_GROUP:-default}"

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

log() {
  printf '[entrypoint] %s\n' "$*"
}

die() {
  printf '[entrypoint] ERROR: %s\n' "$*" >&2
  exit 1
}

# ---------------------------------------------------------------------------
# Environment validation — fail fast with clear errors
# ---------------------------------------------------------------------------

validate_env() {
  local missing=()

  [[ -z "${GITHUB_PAT:-}" ]] && missing+=("GITHUB_PAT")
  [[ -z "${REPO_URL:-}" ]] && missing+=("REPO_URL")
  [[ -z "${RUNNER_NAME:-}" ]] && missing+=("RUNNER_NAME")

  if [[ ${#missing[@]} -gt 0 ]]; then
    die "Missing required environment variables: ${missing[*]}. Check your .env and envs/*.env files."
  fi
}

# ---------------------------------------------------------------------------
# Token generation — PAT → short-lived registration token
# ---------------------------------------------------------------------------

generate_token() {
  # Extract OWNER/REPO from the full URL.
  # Supports: https://github.com/OWNER/REPO or https://github.com/OWNER/REPO.git
  local repo_slug
  repo_slug="$(printf '%s' "$REPO_URL" \
    | sed -E 's#^https?://github\.com/##' \
    | sed -E 's/\.git$//')"

  if [[ -z "$repo_slug" ]] || ! printf '%s' "$repo_slug" | grep -qE '^[^/]+/[^/]+$'; then
    die "Could not parse OWNER/REPO from REPO_URL: $REPO_URL"
  fi

  log "Generating registration token for ${repo_slug}..."

  local response
  response="$(curl -fsSL \
    -X POST \
    -H "Authorization: token ${GITHUB_PAT}" \
    -H "Accept: application/vnd.github+json" \
    "https://api.github.com/repos/${repo_slug}/actions/runners/registration-token")"

  REG_TOKEN="$(printf '%s' "$response" | jq -r '.token // empty')"

  if [[ -z "$REG_TOKEN" ]]; then
    die "Failed to generate registration token. Check that GITHUB_PAT has 'repo' scope and is valid."
  fi

  log "Registration token obtained (expires in 1 hour)."
}

# ---------------------------------------------------------------------------
# Cleanup — deregister runner on container stop
# ---------------------------------------------------------------------------

cleanup() {
  log "Caught signal, removing runner registration..."

  # Generate a removal token (different from registration token)
  local repo_slug
  repo_slug="$(printf '%s' "$REPO_URL" \
    | sed -E 's#^https?://github\.com/##' \
    | sed -E 's/\.git$//')"

  local remove_token
  remove_token="$(curl -fsSL \
    -X POST \
    -H "Authorization: token ${GITHUB_PAT}" \
    -H "Accept: application/vnd.github+json" \
    "https://api.github.com/repos/${repo_slug}/actions/runners/remove-token" \
    | jq -r '.token // empty' || true)"

  if [[ -n "$remove_token" ]]; then
    "${RUNNER_DIR}/config.sh" remove --token "$remove_token" 2>/dev/null || true
    log "Runner deregistered."
  else
    log "WARNING: Could not obtain removal token. Runner may appear stale in GitHub until it expires."
  fi

  exit 0
}

# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------

main() {
  validate_env
  generate_token

  # Trap signals for graceful shutdown
  trap cleanup SIGTERM SIGINT

  # Remove stale configuration from previous run.
  # On container restart (vs recreate), the runner's writable layer persists
  # and config.sh refuses to re-configure if .runner already exists.
  # The --replace flag only handles server-side name conflicts, not this local check.
  if [[ -f "${RUNNER_DIR}/.runner" ]]; then
    log "Removing stale runner configuration from previous run..."
    rm -f "${RUNNER_DIR}/.runner" "${RUNNER_DIR}/.credentials" "${RUNNER_DIR}/.credentials_rsaparams"
  fi

  log "Configuring runner '${RUNNER_NAME}' for ${REPO_URL}..."
  log "Labels: ${RUNNER_LABELS}"
  log "Group: ${RUNNER_GROUP}"

  "${RUNNER_DIR}/config.sh" \
    --url "${REPO_URL}" \
    --token "${REG_TOKEN}" \
    --name "${RUNNER_NAME}" \
    --labels "${RUNNER_LABELS}" \
    --runnergroup "${RUNNER_GROUP}" \
    --work "/home/runner/_work" \
    --ephemeral \
    --unattended \
    --replace

  log "Runner configured. Starting..."

  # exec replaces the shell with the runner process.
  # The runner picks up one job, executes it, and exits.
  # Docker's restart policy restarts the container for the next job.
  exec "${RUNNER_DIR}/run.sh"
}

main "$@"
28
runners-conversion/augur/envs/augur.env.example
Normal file
@@ -0,0 +1,28 @@
# augur.env — Per-repo runner configuration for the augur repository.
#
# Copy this file to augur.env and fill in your values:
#   cp envs/augur.env.example envs/augur.env
#
# To add another repo, copy this file to envs/<repo>.env, adjust the values,
# and add a matching service block in docker-compose.yml.

# Runner image source (default: GHCR).
# For self-hosted registry on the same Docker engine:
#   RUNNER_IMAGE=localhost:5000/augur-runner:latest
#   Docker treats localhost registries as insecure by default — no daemon.json changes needed.
# RUNNER_IMAGE=ghcr.io/aiinfuseds/augur-runner:latest

# Repository to register this runner with.
REPO_URL=https://github.com/AIinfusedS/augur

# Runner identity — must be unique per runner within the repo.
RUNNER_NAME=unraid-augur
RUNNER_LABELS=self-hosted,Linux,X64
RUNNER_GROUP=default

# Resource limits for this repo's runner container.
# Tune based on the repo's CI workload.
# augur CI needs ~4 CPUs and ~4GB RAM for Go builds + extension tests.
# 3 runners x 4 CPUs = 12 cores total.
RUNNER_CPUS=4
RUNNER_MEMORY=4G
744
runners-conversion/augur/runner.sh
Executable file
@@ -0,0 +1,744 @@
#!/usr/bin/env bash
# runner.sh — Setup, manage, and tear down a GitHub Actions self-hosted runner.
#
# Supports two platforms:
#   - macOS: Installs the runner agent natively, manages it as a launchd service.
#   - Linux: Delegates to Docker-based runner infrastructure in infra/runners/.
#
# Typical flow:
#   1) ./scripts/runner.sh --mode setup      # install/configure runner
#   2) ./scripts/runner.sh --mode status     # verify runner is online
#   3) (push/PR triggers CI on the self-hosted runner)
#   4) ./scripts/runner.sh --mode stop       # stop runner
#   5) ./scripts/runner.sh --mode uninstall  # deregister and clean up

set -euo pipefail

MODE=""
RUNNER_DIR="${AUGUR_RUNNER_DIR:-${HOME}/.augur-runner}"
RUNNER_LABELS="self-hosted,macOS,ARM64"
RUNNER_NAME=""
REPO_SLUG=""
REG_TOKEN=""
FORCE=false
FOREGROUND=false
PUSH_REGISTRY=""

PLIST_LABEL="com.augur.actions-runner"
PLIST_PATH="${HOME}/Library/LaunchAgents/${PLIST_LABEL}.plist"

# Resolved during Linux operations
INFRA_DIR=""

usage() {
  cat <<'EOF'
Usage:
  ./scripts/runner.sh --mode <setup|start|stop|status|build-image|uninstall> [options]

Required:
  --mode MODE        One of: setup, start, stop, status, build-image, uninstall

Options (macOS):
  --runner-dir DIR   Installation directory (default: ~/.augur-runner)
  --labels LABELS    Comma-separated labels (default: self-hosted,macOS,ARM64)
  --name NAME        Runner name (default: augur-<hostname>)
  --repo OWNER/REPO  GitHub repository (default: auto-detected from git remote)
  --token TOKEN      Registration/removal token (prompted if not provided)
  --force            Force re-setup even if already configured
  --foreground       Start in foreground instead of launchd service

Options (Linux — Docker mode):
  On Linux, this script delegates to Docker Compose in infra/runners/.
  Configuration is managed via .env and envs/*.env files.
  See infra/runners/README.md for details.

Options (build-image):
  --push REGISTRY    Tag and push to a registry (e.g. 192.168.1.82:5000)

Common:
  -h, --help         Show this help

Examples (macOS):
  ./scripts/runner.sh --mode setup
  ./scripts/runner.sh --mode setup --token ghp_xxxxx
  ./scripts/runner.sh --mode start
  ./scripts/runner.sh --mode start --foreground
  ./scripts/runner.sh --mode status
  ./scripts/runner.sh --mode stop
  ./scripts/runner.sh --mode uninstall

Examples (Linux):
  ./scripts/runner.sh --mode setup      # prompts for .env, starts runner
  ./scripts/runner.sh --mode start      # docker compose up -d
  ./scripts/runner.sh --mode stop       # docker compose down
  ./scripts/runner.sh --mode status     # docker compose ps + logs
  ./scripts/runner.sh --mode uninstall  # docker compose down -v --rmi local

Examples (build-image — works on any OS):
  ./scripts/runner.sh --mode build-image                           # build locally
  ./scripts/runner.sh --mode build-image --push 192.168.1.82:5000  # build + push to registry

Environment overrides:
  AUGUR_RUNNER_DIR   Runner installation directory (macOS only)
EOF
}

# ---------------------------------------------------------------------------
# Helpers (consistent with actions-local.sh)
# ---------------------------------------------------------------------------

log() {
  printf '[runner] %s\n' "$*"
}

warn() {
  printf '[runner] WARNING: %s\n' "$*" >&2
}

die() {
  printf '[runner] ERROR: %s\n' "$*" >&2
  exit 1
}

require_cmd() {
  local cmd="$1"
  command -v "$cmd" >/dev/null 2>&1 || die "required command not found: $cmd"
}

# ---------------------------------------------------------------------------
# Platform detection
# ---------------------------------------------------------------------------

detect_os() {
  case "$(uname -s)" in
    Darwin) printf 'darwin' ;;
    Linux) printf 'linux' ;;
    *) die "Unsupported OS: $(uname -s). This script supports macOS and Linux." ;;
  esac
}

ensure_macos() {
  [[ "$(detect_os)" == "darwin" ]] || die "This operation requires macOS."
}

# Locate the infra/runners/ directory relative to the repo root.
# The script lives at scripts/runner.sh, so repo root is one level up.
find_infra_dir() {
  local script_dir
  script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
  local repo_root="${script_dir}/.."
  INFRA_DIR="$(cd "${repo_root}/infra/runners" 2>/dev/null && pwd)" || true

  if [[ -z "$INFRA_DIR" ]] || [[ ! -f "${INFRA_DIR}/docker-compose.yml" ]]; then
    die "Could not find infra/runners/docker-compose.yml. Ensure you are running from the augur repo."
  fi
}

# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------

parse_args() {
  while [[ $# -gt 0 ]]; do
    case "$1" in
      --mode)
        shift; [[ $# -gt 0 ]] || die "--mode requires a value"
        MODE="$1"; shift ;;
      --runner-dir)
        shift; [[ $# -gt 0 ]] || die "--runner-dir requires a value"
        RUNNER_DIR="$1"; shift ;;
      --labels)
        shift; [[ $# -gt 0 ]] || die "--labels requires a value"
        RUNNER_LABELS="$1"; shift ;;
      --name)
        shift; [[ $# -gt 0 ]] || die "--name requires a value"
        RUNNER_NAME="$1"; shift ;;
      --repo)
        shift; [[ $# -gt 0 ]] || die "--repo requires a value"
        REPO_SLUG="$1"; shift ;;
      --token)
        shift; [[ $# -gt 0 ]] || die "--token requires a value"
        REG_TOKEN="$1"; shift ;;
      --force)
        FORCE=true; shift ;;
      --foreground)
        FOREGROUND=true; shift ;;
      --push)
        shift; [[ $# -gt 0 ]] || die "--push requires a registry address (e.g. 192.168.1.82:5000)"
        PUSH_REGISTRY="$1"; shift ;;
      -h|--help)
        usage; exit 0 ;;
      *)
        die "unknown argument: $1" ;;
    esac
  done

  [[ -n "$MODE" ]] || die "--mode is required (setup|start|stop|status|build-image|uninstall)"
  case "$MODE" in
    setup|start|stop|status|build-image|uninstall) ;;
    *) die "invalid --mode: $MODE (expected setup|start|stop|status|build-image|uninstall)" ;;
  esac
}
# ---------------------------------------------------------------------------
# Repo detection
# ---------------------------------------------------------------------------

detect_repo() {
  if [[ -n "$REPO_SLUG" ]]; then
    return
  fi

  local remote_url=""
  remote_url="$(git remote get-url origin 2>/dev/null || true)"
  if [[ -z "$remote_url" ]]; then
    die "Could not detect repository from git remote. Use --repo OWNER/REPO."
  fi

  # Extract OWNER/REPO from HTTPS or SSH URLs
  REPO_SLUG="$(printf '%s' "$remote_url" \
    | sed -E 's#^(https?://github\.com/|git@github\.com:)##' \
    | sed -E 's/\.git$//')"

  if [[ -z "$REPO_SLUG" ]] || ! printf '%s' "$REPO_SLUG" | grep -qE '^[^/]+/[^/]+$'; then
    die "Could not parse OWNER/REPO from remote URL: $remote_url. Use --repo OWNER/REPO."
  fi

  log "Auto-detected repository: $REPO_SLUG"
}
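The two sed passes in detect_repo can be exercised standalone. A sketch using an SSH-form remote for this setup's repo (the HTTPS form is handled by the other branch of the same alternation):

```shell
# Strip either the HTTPS or SSH prefix, then the optional .git suffix,
# leaving the bare OWNER/REPO slug.
remote_url="git@github.com:AIinfusedS/augur.git"
printf '%s' "$remote_url" \
  | sed -E 's#^(https?://github\.com/|git@github\.com:)##' \
  | sed -E 's/\.git$//'
```

The first pass yields `AIinfusedS/augur.git`, the second trims the suffix to `AIinfusedS/augur`, which then passes the `^[^/]+/[^/]+$` sanity check.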
# ===========================================================================
# macOS: Native runner agent + launchd service
# ===========================================================================

# ---------------------------------------------------------------------------
# Runner download and verification (macOS)
# ---------------------------------------------------------------------------

detect_arch() {
  local arch
  arch="$(uname -m)"
  case "$arch" in
    arm64|aarch64) printf 'arm64' ;;
    x86_64) printf 'x64' ;;
    *) die "Unsupported architecture: $arch" ;;
  esac
}

download_runner() {
  require_cmd curl
  require_cmd shasum
  require_cmd tar

  local arch
  arch="$(detect_arch)"

  log "Fetching latest runner release metadata..."
  local release_json
  release_json="$(curl -fsSL "https://api.github.com/repos/actions/runner/releases/latest")"

  local version
  version="$(printf '%s' "$release_json" | grep '"tag_name"' | sed -E 's/.*"v([^"]+)".*/\1/')"
  if [[ -z "$version" ]]; then
    die "Could not determine latest runner version from GitHub API."
  fi
  log "Latest runner version: $version"

  local tarball="actions-runner-osx-${arch}-${version}.tar.gz"
  local download_url="https://github.com/actions/runner/releases/download/v${version}/${tarball}"

  # Extract expected SHA256 from release body.
  # The body contains HTML comments like:
  #   <!-- BEGIN SHA osx-arm64 -->HASH<!-- END SHA osx-arm64 -->
  local sha_marker="osx-${arch}"
  local expected_sha=""
  expected_sha="$(printf '%s' "$release_json" \
    | python3 -c "
import json,sys,re
body = json.load(sys.stdin).get('body','')
m = re.search(r'<!-- BEGIN SHA ${sha_marker} -->([0-9a-f]{64})<!-- END SHA ${sha_marker} -->', body)
print(m.group(1) if m else '')
" 2>/dev/null || true)"

  mkdir -p "$RUNNER_DIR"
  local dest="${RUNNER_DIR}/${tarball}"

  if [[ -f "$dest" ]]; then
    log "Tarball already exists: $dest"
  else
    log "Downloading: $download_url"
    curl -fSL -o "$dest" "$download_url"
  fi

  if [[ -n "$expected_sha" ]]; then
    log "Verifying SHA256 checksum..."
    local actual_sha
    actual_sha="$(shasum -a 256 "$dest" | awk '{print $1}')"
    if [[ "$actual_sha" != "$expected_sha" ]]; then
      rm -f "$dest"
      die "Checksum mismatch. Expected: $expected_sha, Got: $actual_sha"
    fi
    log "Checksum verified."
  else
    warn "Could not extract expected SHA256 from release metadata; skipping verification."
  fi

  log "Extracting runner into $RUNNER_DIR..."
  tar -xzf "$dest" -C "$RUNNER_DIR"
  rm -f "$dest"

  log "Runner extracted (version $version)."
}

# ---------------------------------------------------------------------------
# Registration (macOS)
# ---------------------------------------------------------------------------

prompt_token() {
  if [[ -n "$REG_TOKEN" ]]; then
    return
  fi

  log ""
  log "A registration token is required."
  log "Obtain one from: https://github.com/${REPO_SLUG}/settings/actions/runners/new"
  log "Or via the API:"
  log "  curl -X POST -H 'Authorization: token YOUR_PAT' \\"
  log "    https://api.github.com/repos/${REPO_SLUG}/actions/runners/registration-token"
  log ""
  printf '[runner] Enter registration token: '
  read -r REG_TOKEN
  [[ -n "$REG_TOKEN" ]] || die "No token provided."
}

register_runner() {
  if [[ -z "$RUNNER_NAME" ]]; then
    RUNNER_NAME="augur-$(hostname -s)"
  fi

  log "Registering runner '${RUNNER_NAME}' with labels '${RUNNER_LABELS}'..."

  local config_args=(
    --url "https://github.com/${REPO_SLUG}"
    --token "$REG_TOKEN"
    --name "$RUNNER_NAME"
    --labels "$RUNNER_LABELS"
    --work "${RUNNER_DIR}/_work"
    --unattended
  )

  if [[ "$FORCE" == "true" ]]; then
    config_args+=(--replace)
  fi

  "${RUNNER_DIR}/config.sh" "${config_args[@]}"
  log "Runner registered."
}

# ---------------------------------------------------------------------------
# launchd service management (macOS)
# ---------------------------------------------------------------------------

create_plist() {
  mkdir -p "${RUNNER_DIR}/logs"
  mkdir -p "$(dirname "$PLIST_PATH")"

  cat > "$PLIST_PATH" <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>${PLIST_LABEL}</string>
  <key>ProgramArguments</key>
  <array>
    <string>${RUNNER_DIR}/run.sh</string>
  </array>
  <key>WorkingDirectory</key>
  <string>${RUNNER_DIR}</string>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>StandardOutPath</key>
  <string>${RUNNER_DIR}/logs/stdout.log</string>
  <key>StandardErrorPath</key>
  <string>${RUNNER_DIR}/logs/stderr.log</string>
  <key>EnvironmentVariables</key>
  <dict>
    <key>PATH</key>
    <string>/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin</string>
    <key>HOME</key>
    <string>${HOME}</string>
  </dict>
</dict>
</plist>
EOF

  log "Launchd plist created: $PLIST_PATH"
}

load_service() {
  if launchctl list 2>/dev/null | grep -q "$PLIST_LABEL"; then
    log "Service already loaded; unloading first..."
    launchctl unload "$PLIST_PATH" 2>/dev/null || true
  fi

  launchctl load "$PLIST_PATH"
  log "Service loaded."
}

unload_service() {
  if launchctl list 2>/dev/null | grep -q "$PLIST_LABEL"; then
    launchctl unload "$PLIST_PATH" 2>/dev/null || true
    log "Service unloaded."
  else
    log "Service is not loaded."
  fi
}

service_is_running() {
  launchctl list 2>/dev/null | grep -q "$PLIST_LABEL"
}

# ---------------------------------------------------------------------------
# macOS mode implementations
# ---------------------------------------------------------------------------

do_setup_darwin() {
  detect_repo

  if [[ -f "${RUNNER_DIR}/.runner" ]] && [[ "$FORCE" != "true" ]]; then
    log "Runner already configured at $RUNNER_DIR."
    log "Use --force to re-setup."
    do_status_darwin
    return
  fi

  download_runner
  prompt_token
  register_runner
  create_plist
load_service
|
||||
|
||||
log ""
|
||||
log "Setup complete. Runner is registered and running."
|
||||
log ""
|
||||
log "To activate self-hosted CI, set the repository variable CI_RUNS_ON to:"
|
||||
log ' ["self-hosted", "macOS", "ARM64"]'
|
||||
log "in Settings > Secrets and variables > Actions > Variables."
|
||||
log ""
|
||||
log "Or via CLI:"
|
||||
log " gh variable set CI_RUNS_ON --body '[\"self-hosted\", \"macOS\", \"ARM64\"]'"
|
||||
log ""
|
||||
log "Energy saver: ensure your Mac does not sleep while the runner is active."
|
||||
log " System Settings > Energy Saver > Prevent automatic sleeping"
|
||||
}

do_start_darwin() {
  [[ -f "${RUNNER_DIR}/.runner" ]] || die "Runner not configured. Run --mode setup first."

  if [[ "$FOREGROUND" == "true" ]]; then
    log "Starting runner in foreground (Ctrl-C to stop)..."
    exec "${RUNNER_DIR}/run.sh"
  fi

  if service_is_running; then
    log "Runner service is already running."
    return
  fi

  if [[ ! -f "$PLIST_PATH" ]]; then
    log "Plist not found; recreating..."
    create_plist
  fi

  load_service
  log "Runner started."
}

do_stop_darwin() {
  unload_service
  log "Runner stopped."
}

do_status_darwin() {
  log "Runner directory: $RUNNER_DIR"

  if [[ ! -f "${RUNNER_DIR}/.runner" ]]; then
    log "Status: NOT CONFIGURED"
    log "Run --mode setup to install and register the runner."
    return
  fi

  # Parse runner config
  local runner_name=""
  if command -v python3 >/dev/null 2>&1; then
    runner_name="$(python3 -c "import json,sys; d=json.load(open(sys.argv[1])); print(d.get('agentName',''))" "${RUNNER_DIR}/.runner" 2>/dev/null || true)"
  fi
  if [[ -z "$runner_name" ]]; then
    runner_name="(could not parse)"
  fi

  log "Runner name: $runner_name"

  if service_is_running; then
    log "Service: RUNNING"
  else
    log "Service: STOPPED"
  fi

  if pgrep -f "Runner.Listener" >/dev/null 2>&1; then
    log "Process: ACTIVE (Runner.Listener found)"
  else
    log "Process: INACTIVE"
  fi

  # Show recent logs
  local log_file="${RUNNER_DIR}/logs/stdout.log"
  if [[ -f "$log_file" ]]; then
    log ""
    log "Recent log output (last 10 lines):"
    tail -n 10 "$log_file" 2>/dev/null || true
  fi

  local diag_dir="${RUNNER_DIR}/_diag"
  if [[ -d "$diag_dir" ]]; then
    local latest_diag
    latest_diag="$(ls -t "${diag_dir}"/Runner_*.log 2>/dev/null | head -n1 || true)"
    if [[ -n "$latest_diag" ]]; then
      log ""
      log "Latest runner diagnostic (last 5 lines):"
      tail -n 5 "$latest_diag" 2>/dev/null || true
    fi
  fi
}

do_uninstall_darwin() {
  log "Uninstalling self-hosted runner..."

  # Stop service first
  unload_service

  # Remove plist
  if [[ -f "$PLIST_PATH" ]]; then
    rm -f "$PLIST_PATH"
    log "Removed plist: $PLIST_PATH"
  fi

  # Deregister from GitHub
  if [[ -f "${RUNNER_DIR}/config.sh" ]]; then
    if [[ -z "$REG_TOKEN" ]]; then
      detect_repo
      log ""
      log "A removal token is required to deregister the runner."
      log "Obtain one from: https://github.com/${REPO_SLUG}/settings/actions/runners"
      log "Or via the API:"
      log "  curl -X POST -H 'Authorization: token YOUR_PAT' \\"
      log "    https://api.github.com/repos/${REPO_SLUG}/actions/runners/remove-token"
      log ""
      printf '[runner] Enter removal token (or press Enter to skip deregistration): '
      read -r REG_TOKEN
    fi

    if [[ -n "$REG_TOKEN" ]]; then
      # Only report success when config.sh remove actually succeeded.
      if "${RUNNER_DIR}/config.sh" remove --token "$REG_TOKEN"; then
        log "Runner deregistered from GitHub."
      else
        warn "Deregistration failed; you may need to remove the runner manually from GitHub settings."
      fi
    else
      warn "Skipping deregistration. Remove the runner manually from GitHub settings."
    fi
  fi

  # Clean up runner directory
  if [[ -d "$RUNNER_DIR" ]]; then
    log "Removing runner directory: $RUNNER_DIR"
    rm -rf "$RUNNER_DIR"
    log "Runner directory removed."
  fi

  log "Uninstall complete."
}

# ===========================================================================
# Linux: Docker-based runner via infra/runners/
# ===========================================================================

# Ensure Docker and docker compose are available.
ensure_docker() {
  require_cmd docker

  # Check for docker compose (v2 plugin or standalone)
  if docker compose version >/dev/null 2>&1; then
    return
  fi

  if command -v docker-compose >/dev/null 2>&1; then
    warn "Found docker-compose (standalone). docker compose v2 plugin is recommended."
    return
  fi

  die "docker compose is required. Install Docker Compose v2: https://docs.docker.com/compose/install/"
}

# Run docker compose in the infra/runners directory.
# Accepts any docker compose subcommand and arguments.
compose() {
  docker compose -f "${INFRA_DIR}/docker-compose.yml" "$@"
}

do_build_image() {
  find_infra_dir
  ensure_docker

  local dockerfile_dir="${INFRA_DIR}"

  # Determine the image tag based on whether --push was given.
  # With --push: tag includes the registry so docker push knows where to send it.
  # Without --push: clean local name.
  local image_tag="augur-runner:latest"
  if [[ -n "$PUSH_REGISTRY" ]]; then
    image_tag="${PUSH_REGISTRY}/augur-runner:latest"
  fi

  # Always target linux/amd64 — the Dockerfile hardcodes x86_64 binaries
  # (Go linux-amd64, runner agent linux-x64). This ensures correct arch
  # even when building on an ARM Mac.
  log "Building runner image: ${image_tag} (platform: linux/amd64)"
  DOCKER_BUILDKIT=1 docker build --platform linux/amd64 --pull -t "$image_tag" "$dockerfile_dir"

  if [[ -n "$PUSH_REGISTRY" ]]; then
    log "Pushing to ${PUSH_REGISTRY}..."
    docker push "$image_tag"
    log "Image pushed to ${image_tag}"
  else
    log "Image built locally as ${image_tag}"
    log "Use --push <registry> to push to a remote registry."
  fi
}

do_setup_linux() {
  find_infra_dir
  ensure_docker

  log "Docker-based runner setup (infra/runners/)"
  log ""

  # Create .env from template if it doesn't exist
  if [[ ! -f "${INFRA_DIR}/.env" ]]; then
    if [[ -f "${INFRA_DIR}/.env.example" ]]; then
      cp "${INFRA_DIR}/.env.example" "${INFRA_DIR}/.env"
      log "Created ${INFRA_DIR}/.env from template."
      log "Edit this file to set your GITHUB_PAT."
      log ""
      printf '[runner] Enter your GitHub PAT (or press Enter to edit .env manually later): '
      read -r pat_input
      if [[ -n "$pat_input" ]]; then
        sed -i "s/^GITHUB_PAT=.*/GITHUB_PAT=${pat_input}/" "${INFRA_DIR}/.env"
        log "GITHUB_PAT set in .env"
      fi
    else
      die "Missing .env.example template in ${INFRA_DIR}"
    fi
  else
    log ".env already exists; skipping."
  fi

  # Create per-repo env from template if it doesn't exist
  if [[ ! -f "${INFRA_DIR}/envs/augur.env" ]]; then
    if [[ -f "${INFRA_DIR}/envs/augur.env.example" ]]; then
      cp "${INFRA_DIR}/envs/augur.env.example" "${INFRA_DIR}/envs/augur.env"
      log "Created ${INFRA_DIR}/envs/augur.env from template."
      log "Edit this file to configure REPO_URL, RUNNER_NAME, and resource limits."
    else
      die "Missing envs/augur.env.example template in ${INFRA_DIR}"
    fi
  else
    log "envs/augur.env already exists; skipping."
  fi

  log ""
  log "Starting runner..."
  compose up -d

  log ""
  log "Setup complete. Verify with: ./scripts/runner.sh --mode status"
  log ""
  log "To activate self-hosted CI, set the repository variable CI_RUNS_ON to:"
  log '  ["self-hosted", "Linux", "X64"]'
  log ""
  log "Via CLI:"
  log "  gh variable set CI_RUNS_ON --body '[\"self-hosted\", \"Linux\", \"X64\"]'"
}

do_start_linux() {
  find_infra_dir
  ensure_docker

  log "Starting Docker runner..."
  compose up -d
  log "Runner started."
}

do_stop_linux() {
  find_infra_dir
  ensure_docker

  log "Stopping Docker runner..."
  compose down
  log "Runner stopped."
}

do_status_linux() {
  find_infra_dir
  ensure_docker

  log "Docker runner status (infra/runners/):"
  log ""
  compose ps
  log ""
  log "Recent logs (last 20 lines):"
  compose logs --tail 20 2>/dev/null || true
}

do_uninstall_linux() {
  find_infra_dir
  ensure_docker

  log "Uninstalling Docker runner..."
  compose down -v --rmi local 2>/dev/null || compose down -v
  log "Docker runner removed (containers, volumes, local images)."
  log ""
  log "Note: The runner should auto-deregister from GitHub (ephemeral mode)."
  log "If a stale runner remains, remove it manually:"
  log "  gh api -X DELETE repos/OWNER/REPO/actions/runners/RUNNER_ID"
}

# ===========================================================================
# Entry point — routes to macOS or Linux implementation
# ===========================================================================

main() {
  parse_args "$@"

  local os
  os="$(detect_os)"

  case "$MODE" in
    setup)
      if [[ "$os" == "darwin" ]]; then do_setup_darwin; else do_setup_linux; fi ;;
    start)
      if [[ "$os" == "darwin" ]]; then do_start_darwin; else do_start_linux; fi ;;
    stop)
      if [[ "$os" == "darwin" ]]; then do_stop_darwin; else do_stop_linux; fi ;;
    status)
      if [[ "$os" == "darwin" ]]; then do_status_darwin; else do_status_linux; fi ;;
    build-image)
      do_build_image ;;
    uninstall)
      if [[ "$os" == "darwin" ]]; then do_uninstall_darwin; else do_uninstall_linux; fi ;;
    *)
      die "unexpected mode: $MODE" ;;
  esac
}

main "$@"
210  runners-conversion/periodVault/actions-local.sh  (new executable file)
@@ -0,0 +1,210 @@
#!/usr/bin/env bash
# actions-local.sh
# Local GitHub Actions self-hosted runner lifecycle helper.
set -euo pipefail

RUNNER_DIR="${RUNNER_DIR:-$HOME/.periodvault-actions-runner}"
RUNNER_LABELS="${RUNNER_LABELS:-periodvault}"
RUNNER_NAME="${RUNNER_NAME:-$(hostname)-periodvault}"
RUNNER_WORKDIR="${RUNNER_WORKDIR:-_work}"
RUNNER_PID_FILE="${RUNNER_PID_FILE:-$RUNNER_DIR/.runner.pid}"
RUNNER_LOG_FILE="${RUNNER_LOG_FILE:-$RUNNER_DIR/runner.log}"

if git config --get remote.origin.url >/dev/null 2>&1; then
  ORIGIN_URL="$(git config --get remote.origin.url)"
else
  ORIGIN_URL=""
fi

if [[ -n "$ORIGIN_URL" && "$ORIGIN_URL" =~ ^git@github\.com:(.*)\.git$ ]]; then
  RUNNER_URL_DEFAULT="https://github.com/${BASH_REMATCH[1]}"
elif [[ -n "$ORIGIN_URL" && "$ORIGIN_URL" =~ ^https://github\.com/.*$ ]]; then
  RUNNER_URL_DEFAULT="${ORIGIN_URL%.git}"
else
  RUNNER_URL_DEFAULT=""
fi

RUNNER_URL="${RUNNER_URL:-$RUNNER_URL_DEFAULT}"
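The ssh-to-https normalization above can be exercised in isolation. A POSIX-flavored sketch (the script itself uses bash `=~` regexes; the helper name `github_url_from_origin` and the sample repo slug are illustrative, not part of the script):

```shell
# Sketch of the origin-URL normalization: ssh remotes become https URLs,
# https remotes lose a trailing .git, anything else yields an empty default.
github_url_from_origin() {
  case "$1" in
    git@github.com:*.git)
      slug="${1#git@github.com:}"     # strip ssh prefix
      slug="${slug%.git}"             # strip .git suffix
      printf 'https://github.com/%s\n' "$slug"
      ;;
    https://github.com/*)
      printf '%s\n' "${1%.git}"
      ;;
    *)
      printf '\n'
      ;;
  esac
}

github_url_from_origin "git@github.com:acme/periodvault.git"     # https://github.com/acme/periodvault
github_url_from_origin "https://github.com/acme/periodvault.git" # https://github.com/acme/periodvault
```

Falling back to an empty default (rather than failing) lets `ensure_runner_url` report the problem only for commands that actually need the URL.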

ACT_WORKFLOW="${ACT_WORKFLOW:-.github/workflows/ci.yml}"
ACT_IMAGE="${ACT_IMAGE:-ghcr.io/catthehacker/ubuntu:act-latest}"
ACT_DOCKER_SOCKET="${ACT_DOCKER_SOCKET:-/Users/s/.colima/augur-actions/docker.sock}"
ACT_DAEMON_SOCKET="${ACT_DAEMON_SOCKET:-/var/run/docker.sock}"
ACT_DOCKER_CONFIG="${ACT_DOCKER_CONFIG:-/tmp/act-docker-config}"

usage() {
  cat <<EOF
Usage: ./scripts/actions-local.sh <setup|start|stop|status|remove|run> [job-id]

Environment variables:
  RUNNER_DIR         Runner installation directory (default: $RUNNER_DIR)
  RUNNER_URL         GitHub repo/org URL for runner registration
  RUNNER_TOKEN       Registration/removal token (required for setup/remove)
  RUNNER_LABELS      Runner labels (default: $RUNNER_LABELS)
  RUNNER_NAME        Runner name (default: $RUNNER_NAME)
  RUNNER_WORKDIR     Runner work dir (default: $RUNNER_WORKDIR)

Local Actions execution ('run') variables:
  ACT_WORKFLOW       Workflow file path (default: $ACT_WORKFLOW)
  ACT_IMAGE          Container image for self-hosted label mapping (default: $ACT_IMAGE)
  ACT_DOCKER_SOCKET  Docker host socket (default: $ACT_DOCKER_SOCKET)
  ACT_DAEMON_SOCKET  In-container daemon socket path (default: $ACT_DAEMON_SOCKET)
  ACT_DOCKER_CONFIG  Docker config dir used by act (default: $ACT_DOCKER_CONFIG)
EOF
}

ensure_runner_binaries() {
  if [[ ! -x "$RUNNER_DIR/config.sh" || ! -x "$RUNNER_DIR/run.sh" ]]; then
    echo "[actions-local] Missing runner binaries in $RUNNER_DIR."
    echo "[actions-local] Download and extract the GitHub runner there first."
    exit 1
  fi
}

ensure_runner_url() {
  if [[ -z "$RUNNER_URL" ]]; then
    echo "[actions-local] RUNNER_URL is empty."
    echo "[actions-local] Set RUNNER_URL=https://github.com/<owner>/<repo> and retry."
    exit 1
  fi
}

require_token() {
  if [[ -z "${RUNNER_TOKEN:-}" ]]; then
    echo "[actions-local] RUNNER_TOKEN is required for this command."
    exit 1
  fi
}

cmd_setup() {
  ensure_runner_binaries
  ensure_runner_url
  require_token

  if [[ -f "$RUNNER_DIR/.runner" ]]; then
    echo "[actions-local] Runner already configured in $RUNNER_DIR (idempotent no-op)."
    exit 0
  fi

  (
    cd "$RUNNER_DIR"
    ./config.sh \
      --unattended \
      --replace \
      --url "$RUNNER_URL" \
      --token "$RUNNER_TOKEN" \
      --name "$RUNNER_NAME" \
      --labels "$RUNNER_LABELS" \
      --work "$RUNNER_WORKDIR"
  )
  echo "[actions-local] Runner configured."
}

cmd_start() {
  ensure_runner_binaries
  if [[ ! -f "$RUNNER_DIR/.runner" ]]; then
    echo "[actions-local] Runner not configured. Run setup first."
    exit 1
  fi

  if [[ -f "$RUNNER_PID_FILE" ]] && kill -0 "$(cat "$RUNNER_PID_FILE")" >/dev/null 2>&1; then
    echo "[actions-local] Runner already running (pid $(cat "$RUNNER_PID_FILE"))."
    exit 0
  fi

  (
    cd "$RUNNER_DIR"
    nohup ./run.sh >"$RUNNER_LOG_FILE" 2>&1 &
    echo $! >"$RUNNER_PID_FILE"
  )
  echo "[actions-local] Runner started (pid $(cat "$RUNNER_PID_FILE"))."
  echo "[actions-local] Log: $RUNNER_LOG_FILE"
}

cmd_stop() {
  if [[ ! -f "$RUNNER_PID_FILE" ]]; then
    echo "[actions-local] Runner is not running."
    exit 0
  fi

  pid="$(cat "$RUNNER_PID_FILE")"
  if kill -0 "$pid" >/dev/null 2>&1; then
    kill "$pid"
    rm -f "$RUNNER_PID_FILE"
    echo "[actions-local] Runner stopped (pid $pid)."
  else
    rm -f "$RUNNER_PID_FILE"
    echo "[actions-local] Runner pid file was stale; cleaned up."
  fi
}

cmd_status() {
  if [[ -f "$RUNNER_DIR/.runner" ]]; then
    echo "[actions-local] configured: yes"
  else
    echo "[actions-local] configured: no"
  fi

  if [[ -f "$RUNNER_PID_FILE" ]] && kill -0 "$(cat "$RUNNER_PID_FILE")" >/dev/null 2>&1; then
    echo "[actions-local] running: yes (pid $(cat "$RUNNER_PID_FILE"))"
  else
    echo "[actions-local] running: no"
  fi

  echo "[actions-local] runner-dir: $RUNNER_DIR"
  echo "[actions-local] runner-labels: $RUNNER_LABELS"
}

cmd_remove() {
  ensure_runner_binaries
  require_token
  if [[ ! -f "$RUNNER_DIR/.runner" ]]; then
    echo "[actions-local] Runner is not configured."
    exit 0
  fi

  (
    cd "$RUNNER_DIR"
    ./config.sh remove --token "$RUNNER_TOKEN"
  )
  echo "[actions-local] Runner registration removed."
}

cmd_run() {
  local job="${1:-sdd-gate}"

  if ! command -v act >/dev/null 2>&1; then
    echo "[actions-local] 'act' is required for local workflow execution."
    exit 1
  fi

  mkdir -p "$ACT_DOCKER_CONFIG"
  if [[ ! -f "$ACT_DOCKER_CONFIG/config.json" ]]; then
    printf '{"auths":{}}\n' >"$ACT_DOCKER_CONFIG/config.json"
  fi

  DOCKER_CONFIG="$ACT_DOCKER_CONFIG" \
  DOCKER_HOST="unix://$ACT_DOCKER_SOCKET" \
  act -W "$ACT_WORKFLOW" \
    -j "$job" \
    -P "self-hosted=$ACT_IMAGE" \
    -P "macos-latest=$ACT_IMAGE" \
    --container-architecture linux/amd64 \
    --container-daemon-socket "$ACT_DAEMON_SOCKET"
}

COMMAND="${1:-}"
case "$COMMAND" in
  setup) cmd_setup ;;
  start) cmd_start ;;
  stop) cmd_stop ;;
  status) cmd_status ;;
  remove) cmd_remove ;;
  run) cmd_run "${2:-}" ;;
  ""|--help|-h) usage ;;
  *)
    echo "[actions-local] Unknown command: $COMMAND"
    usage
    exit 1
    ;;
esac

152  runners-conversion/periodVault/check-process.sh  (new executable file)
@@ -0,0 +1,152 @@
#!/usr/bin/env bash
# check-process.sh
# Process compliance checks for PR branches.
# Validates: no main commits, no .DS_Store, scripts executable,
# spec artifacts exist, iteration counter incremented, commit tags,
# and file-scope allowlist enforcement.
set -euo pipefail

BASE_REF="${1:-origin/main}"

if ! git rev-parse --verify "$BASE_REF" >/dev/null 2>&1; then
  BASE_REF="HEAD~1"
fi

# Collect changed files up front; the detached-HEAD fallback below inspects them.
CHANGED_FILES=()
while IFS= read -r line; do
  [[ -n "$line" ]] && CHANGED_FILES+=("$line")
done < <(git diff --name-only "$BASE_REF"...HEAD)

BRANCH="$(git rev-parse --abbrev-ref HEAD)"
# In GitHub Actions merge refs, HEAD is detached. Derive the branch from GITHUB_HEAD_REF
# or from the spec directory that matches changed files.
if [[ "$BRANCH" == "HEAD" ]]; then
  if [[ -n "${GITHUB_HEAD_REF:-}" ]]; then
    BRANCH="$GITHUB_HEAD_REF"
  else
    # Fallback: find the spec directory from changed files
    for f in "${CHANGED_FILES[@]:-}"; do
      if [[ "$f" == specs/*/spec.md ]]; then
        BRANCH="${f#specs/}"
        BRANCH="${BRANCH%/spec.md}"
        break
      fi
    done
  fi
fi
if [[ "$BRANCH" == "main" ]]; then
  echo "[check-process] Failing: direct changes on 'main' are not allowed."
  exit 1
fi

if [[ ${#CHANGED_FILES[@]} -eq 0 ]]; then
  echo "[check-process] No changed files relative to $BASE_REF."
  exit 0
fi

FAILURES=0

# --- Check 1: No .DS_Store ---
if command -v rg >/dev/null 2>&1; then
  HAS_DS_STORE="$(printf '%s\n' "${CHANGED_FILES[@]}" | rg -q '(^|/)\.DS_Store$' && echo 1 || echo 0)"
else
  HAS_DS_STORE="$(printf '%s\n' "${CHANGED_FILES[@]}" | grep -Eq '(^|/)\.DS_Store$' && echo 1 || echo 0)"
fi
if [[ "$HAS_DS_STORE" == "1" ]]; then
  echo "[check-process] FAIL: .DS_Store must not be committed."
  FAILURES=$((FAILURES + 1))
fi
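The `(^|/)\.DS_Store$` pattern above matches the file at the repo root or in any subdirectory, but not paths that merely end in the letters `DS_Store`. A quick check with sample paths (the paths are illustrative):

```shell
# Count which of three sample paths the .DS_Store pattern flags.
# .DS_Store and docs/.DS_Store match; docs/NOT_DS_Store does not,
# because the pattern requires a literal dot after ^ or /.
printf '%s\n' .DS_Store docs/.DS_Store docs/NOT_DS_Store \
  | grep -cE '(^|/)\.DS_Store$'   # 2
```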

# --- Check 2: Scripts executable ---
for file in "${CHANGED_FILES[@]}"; do
  if [[ "$file" == scripts/*.sh ]] && [[ -f "$file" ]] && [[ ! -x "$file" ]]; then
    echo "[check-process] FAIL: script is not executable: $file"
    FAILURES=$((FAILURES + 1))
  fi
done

# --- Check 3: Spec artifacts exist ---
SPEC_DIR="specs/${BRANCH}"
if [[ -d "$SPEC_DIR" ]]; then
  for artifact in spec.md plan.md tasks.md allowed-files.txt; do
    if [[ ! -f "$SPEC_DIR/$artifact" ]]; then
      echo "[check-process] FAIL: missing spec artifact: $SPEC_DIR/$artifact"
      FAILURES=$((FAILURES + 1))
    fi
  done
else
  echo "[check-process] FAIL: spec directory not found: $SPEC_DIR"
  FAILURES=$((FAILURES + 1))
fi

# --- Check 4: ITERATION incremented ---
if [[ -f ITERATION ]]; then
  BRANCH_ITER="$(tr -d '[:space:]' < ITERATION)"
  BASE_ITER="$(git show "$BASE_REF":ITERATION 2>/dev/null | tr -d '[:space:]')"
  BASE_ITER="${BASE_ITER:-0}"   # file absent at base ref -> treat as 0
  if [[ "$BRANCH_ITER" -le "$BASE_ITER" ]] 2>/dev/null; then
    echo "[check-process] FAIL: ITERATION ($BRANCH_ITER) must be > base ($BASE_ITER)"
    FAILURES=$((FAILURES + 1))
  fi
fi
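Check 4 in miniature: the counters are whitespace-stripped file contents compared as integers (the sample values here are illustrative, not taken from any real ITERATION file):

```shell
# tr -d '[:space:]' strips the trailing newline (and any padding),
# so the comparison sees bare integers.
branch_iter="$(printf '  7\n' | tr -d '[:space:]')"
base_iter="$(printf '5\n' | tr -d '[:space:]')"

if [ "$branch_iter" -gt "$base_iter" ]; then
  echo "ITERATION incremented ($base_iter -> $branch_iter)"
fi
```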

# --- Check 5: Commit messages contain [iter N] ---
# Skip merge commits (merge resolution, GitHub merge refs) — they don't carry iter tags.
COMMITS_WITHOUT_TAG=0
while IFS= read -r msg; do
  # Skip merge commits (start with "Merge " or "merge:")
  if echo "$msg" | grep -qEi '^(Merge |merge:)'; then
    continue
  fi
  if ! echo "$msg" | grep -qE '\[iter [0-9]+\]'; then
    echo "[check-process] FAIL: commit missing [iter N] tag: $msg"
    COMMITS_WITHOUT_TAG=$((COMMITS_WITHOUT_TAG + 1))
  fi
done < <(git log --format='%s' "$BASE_REF"...HEAD)
if [[ $COMMITS_WITHOUT_TAG -gt 0 ]]; then
  FAILURES=$((FAILURES + COMMITS_WITHOUT_TAG))
fi
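How the two greps in Check 5 classify commit subjects, isolated into helpers (the helper names and sample messages are illustrative):

```shell
# Merge-commit detection is anchored and case-insensitive;
# the iter tag must be a literal [iter N] with a numeric N.
is_merge() { printf '%s\n' "$1" | grep -qEi '^(Merge |merge:)'; }
has_tag()  { printf '%s\n' "$1" | grep -qE '\[iter [0-9]+\]'; }

is_merge "Merge branch 'feature/x'" && echo "skipped (merge)"
has_tag  "runner: fix stale pid cleanup [iter 12]" && echo "tagged"
has_tag  "runner: fix stale pid cleanup" || echo "missing tag"
```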

# --- Check 6: File-scope allowlist ---
ALLOWLIST="$SPEC_DIR/allowed-files.txt"
if [[ -f "$ALLOWLIST" ]]; then
  ALLOWED_PATTERNS=()
  while IFS= read -r line; do
    # Skip comments and blank lines
    line="$(echo "$line" | sed 's/#.*//' | xargs)"
    [[ -z "$line" ]] && continue
    ALLOWED_PATTERNS+=("$line")
  done < "$ALLOWLIST"

  for file in "${CHANGED_FILES[@]}"; do
    MATCHED=false
    for pattern in "${ALLOWED_PATTERNS[@]}"; do
      # Bash [[ == ]] pattern matching: * matches any characters, including /
      local_pattern="${pattern}"
      # shellcheck disable=SC2254
      if [[ "$file" == $local_pattern ]]; then
        MATCHED=true
        break
      fi
      # Also try fnmatch-style: specs/foo/* should match specs/foo/bar.md
      # (pass file and pattern as argv, not interpolated into the code string)
      if command -v python3 >/dev/null 2>&1; then
        if python3 -c "import fnmatch,sys; sys.exit(0 if fnmatch.fnmatch(sys.argv[1], sys.argv[2]) else 1)" "$file" "$local_pattern" 2>/dev/null; then
          MATCHED=true
          break
        fi
      fi
    done
    if [[ "$MATCHED" == "false" ]]; then
      echo "[check-process] FAIL: file not in allowlist: $file"
      FAILURES=$((FAILURES + 1))
    fi
  done
fi
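Both matchers in Check 6 accept `specs/feature-x/*` for files under that directory, since neither shell globbing (in this context) nor Python's `fnmatch` treats `/` specially. A sketch with a sample file and pattern (both illustrative; `case` stands in for the script's bash `[[ == ]]` test):

```shell
file="specs/feature-x/plan.md"
pattern="specs/feature-x/*"

# Unquoted expansion in a case pattern behaves like the script's
# unquoted right-hand side of [[ == ]].
case "$file" in
  $pattern) echo "shell glob: match" ;;
esac

python3 -c "import fnmatch,sys; sys.exit(0 if fnmatch.fnmatch(sys.argv[1], sys.argv[2]) else 1)" \
  "$file" "$pattern" && echo "fnmatch: match"
```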

# --- Result ---
if [[ $FAILURES -gt 0 ]]; then
  echo "[check-process] FAILED ($FAILURES issues)"
  exit 1
fi

echo "[check-process] PASS ($BASE_REF...HEAD)"

102  runners-conversion/periodVault/ci-local.sh  (new executable file)
@@ -0,0 +1,102 @@
#!/usr/bin/env bash
# ci-local.sh
# Local equivalent of CI checks for self-hosted runner validation.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
cd "$PROJECT_ROOT"

RUN_CONTRACTS=0
RUN_BACKEND=0
RUN_ANDROID=0
RUN_IOS=0
SKIP_INSTALL=0

usage() {
  cat <<'EOF'
Usage: ./scripts/ci-local.sh [options]

Options:
  --contracts      Run process/SDD/TDD gate scripts.
  --backend        Run lint + shared/android unit tests.
  --android        Run Android emulator UI tests.
  --ios            Run iOS simulator UI tests.
  --all            Run contracts + backend + android + ios.
  --skip-install   Skip setup bootstrap check.
  --help           Show this help.

If no test scope flags are provided, defaults to: --contracts --backend
EOF
}

for arg in "$@"; do
  case "$arg" in
    --contracts) RUN_CONTRACTS=1 ;;
    --backend) RUN_BACKEND=1 ;;
    --android) RUN_ANDROID=1 ;;
    --ios) RUN_IOS=1 ;;
    --all)
      RUN_CONTRACTS=1
      RUN_BACKEND=1
      RUN_ANDROID=1
      RUN_IOS=1
      ;;
    --skip-install) SKIP_INSTALL=1 ;;
    --help|-h) usage; exit 0 ;;
    *)
      echo "[ci-local] Unknown option: $arg"
      usage
      exit 1
      ;;
  esac
done

if [[ $RUN_CONTRACTS -eq 0 && $RUN_BACKEND -eq 0 && $RUN_ANDROID -eq 0 && $RUN_IOS -eq 0 ]]; then
  RUN_CONTRACTS=1
  RUN_BACKEND=1
fi

if [[ $SKIP_INSTALL -eq 0 ]]; then
  "$SCRIPT_DIR/setup-dev-environment.sh" --verify
fi

TIMESTAMP="$(date +%Y%m%d-%H%M%S)"
LOG_DIR="$PROJECT_ROOT/build/local-ci"
LOG_FILE="$LOG_DIR/local-ci-$TIMESTAMP.log"
mkdir -p "$LOG_DIR"

run_step() {
  local step_name="$1"
  shift
  echo ""
  echo "================================================"
  echo "[ci-local] $step_name"
  echo "================================================"
  "$@" 2>&1 | tee -a "$LOG_FILE"
}
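`run_step` runs each step with stdout and stderr merged, echoing to the console while appending to the run log via `tee -a`; with `pipefail` set, a failing step still fails the pipeline. A miniature of that behavior (the `run_step_demo` name, temp log path, and sample step are illustrative):

```shell
# A stripped-down run_step: banner, then the step's combined output
# both printed and appended to the log file.
LOG_FILE="$(mktemp)"
run_step_demo() {
  step_name="$1"; shift
  echo "[ci-local] $step_name"
  "$@" 2>&1 | tee -a "$LOG_FILE"
}

run_step_demo "hello-step" echo "hello from step"
grep -q "hello from step" "$LOG_FILE" && echo "logged"
```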

echo "[ci-local] Writing log to $LOG_FILE"
echo "[ci-local] Starting local CI run at $(date -u '+%Y-%m-%dT%H:%M:%SZ')" | tee -a "$LOG_FILE"

if [[ $RUN_CONTRACTS -eq 1 ]]; then
  run_step "check-process" "$SCRIPT_DIR/check-process.sh" origin/main
  run_step "validate-sdd" "$SCRIPT_DIR/validate-sdd.sh" origin/main
  run_step "validate-tdd" env FORCE_AUDIT_GATES=1 "$SCRIPT_DIR/validate-tdd.sh" origin/main
fi

if [[ $RUN_BACKEND -eq 1 ]]; then
  run_step "ktlint+unit-tests" ./gradlew ktlintCheck shared:jvmTest androidApp:testDebugUnitTest
fi

if [[ $RUN_ANDROID -eq 1 ]]; then
  run_step "android-ui-tests" "$SCRIPT_DIR/run-emulator-tests.sh" android
fi

if [[ $RUN_IOS -eq 1 ]]; then
  run_step "ios-ui-tests" "$SCRIPT_DIR/run-emulator-tests.sh" ios
fi

echo ""
echo "[ci-local] PASS"
echo "[ci-local] Log: $LOG_FILE"

244  runners-conversion/periodVault/fix-android-emulator.sh  (new executable file)
@@ -0,0 +1,244 @@
#!/usr/bin/env bash
# fix-android-emulator.sh — Install Android OS system image and fix/create phone or Wear OS AVD
# Usage: ./scripts/fix-android-emulator.sh
# Run when emulator fails with "No initial system image for this configuration".
# Supports phone (default) and Wear OS emulators. Requires: Android SDK (ANDROID_HOME or
# ~/Library/Android/sdk). Installs SDK command-line tools if missing.
#
# ENV VARs (defaults use latest SDK):
#   ANDROID_HOME                SDK root (default: $HOME/Library/Android/sdk on macOS)
#   ANDROID_SDK_ROOT            Same as ANDROID_HOME if set
#   ANDROID_EMULATOR_API_LEVEL  API level, e.g. 35 or 30 (default: auto = latest from sdkmanager --list)
#   ANDROID_AVD_NAME            AVD name to fix or create (default: phone, or wear when type=wearos)
#   ANDROID_EMULATOR_DEVICE     Device profile for new AVDs (default: pixel_8 for phone, wear_os_square for Wear)
#   ANDROID_EMULATOR_TYPE       phone (default) or wearos — which system image and device profile to use
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"

RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

# --- Default ENV VARs: latest SDK ---
# ANDROID_HOME: SDK root (default: $HOME/Library/Android/sdk on macOS)
export ANDROID_HOME="${ANDROID_HOME:-${ANDROID_SDK_ROOT:-}}"
if [[ -z "$ANDROID_HOME" ]]; then
  if [[ -d "$HOME/Library/Android/sdk" ]]; then
    export ANDROID_HOME="$HOME/Library/Android/sdk"
  else
    echo -e "${RED}ERROR: ANDROID_HOME not set and ~/Library/Android/sdk not found.${NC}"
    echo "Set ANDROID_HOME to your Android SDK root, or install Android Studio / SDK."
    exit 1
  fi
fi

# Emulator type: phone (default) or wearos — determines system image and default device profile
EMULATOR_TYPE="${ANDROID_EMULATOR_TYPE:-phone}"
EMULATOR_TYPE=$(echo "$EMULATOR_TYPE" | tr '[:upper:]' '[:lower:]')
# AVD name and device profile (override with ANDROID_AVD_NAME / ANDROID_EMULATOR_DEVICE)
if [[ "$EMULATOR_TYPE" == "wearos" ]]; then
  AVD_NAME="${ANDROID_AVD_NAME:-wear}"
  DEVICE_PROFILE="${ANDROID_EMULATOR_DEVICE:-wear_os_square}"
else
  AVD_NAME="${ANDROID_AVD_NAME:-phone}"
  DEVICE_PROFILE="${ANDROID_EMULATOR_DEVICE:-pixel_8}"
fi
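The type-to-defaults mapping above (lowercase the type, then branch on `wearos`) can be exercised in isolation. The `profile_for_type` helper name is illustrative; it prints the default AVD name and device profile from the script:

```shell
# Map an emulator type to "avd-name device-profile", case-insensitively;
# anything that is not "wearos" falls through to the phone defaults.
profile_for_type() {
  t="$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')"
  if [ "$t" = "wearos" ]; then
    echo "wear wear_os_square"
  else
    echo "phone pixel_8"
  fi
}

profile_for_type WearOS   # wear wear_os_square
profile_for_type phone    # phone pixel_8
```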

# --- Find or install SDK command-line tools (sdkmanager, avdmanager) ---
SDKMANAGER=""
AVDMANAGER=""
for d in "$ANDROID_HOME/cmdline-tools/latest/bin" "$ANDROID_HOME/tools/bin"; do
  if [[ -x "$d/sdkmanager" ]]; then
    SDKMANAGER="$d/sdkmanager"
    AVDMANAGER="$d/avdmanager"
    break
  fi
done
if [[ -z "$SDKMANAGER" ]] && command -v sdkmanager &>/dev/null; then
  SDKMANAGER="sdkmanager"
  AVDMANAGER="avdmanager"
fi

install_cmdline_tools() {
  echo -e "${YELLOW}Downloading Android SDK command-line tools...${NC}"
  local zip_url="https://dl.google.com/android/repository/commandlinetools-mac-11076708_latest.zip"
  local zip_file="$PROJECT_ROOT/build/cmdlinetools.zip"
  local tmp_dir="$ANDROID_HOME/cmdline-tools"
  mkdir -p "$(dirname "$zip_file")" "$tmp_dir"
  if ! curl -fsSL -o "$zip_file" "$zip_url"; then
    echo -e "${RED}Download failed. Install command-line tools manually:${NC}"
    echo "  Android Studio → Settings → Appearance & Behavior → System Settings → Android SDK"
    echo "  → SDK Tools tab → check 'Android SDK Command-line Tools (latest)' → Apply"
    exit 1
  fi
  (cd "$tmp_dir" && unzip -q -o "$zip_file" && mv cmdline-tools latest 2>/dev/null || true)
  rm -f "$zip_file"
  SDKMANAGER="$ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager"
  AVDMANAGER="$ANDROID_HOME/cmdline-tools/latest/bin/avdmanager"
  if [[ ! -x "$SDKMANAGER" ]]; then
    # Some zips unpack to cmdline-tools/ inside the zip
    if [[ -d "$tmp_dir/cmdline-tools" ]]; then
      mv "$tmp_dir/cmdline-tools" "$tmp_dir/latest"
    fi
    SDKMANAGER="$ANDROID_HOME/cmdline-tools/latest/bin/sdkmanager"
    AVDMANAGER="$ANDROID_HOME/cmdline-tools/latest/bin/avdmanager"
  fi
  if [[ ! -x "$SDKMANAGER" ]]; then
    echo -e "${RED}Command-line tools install failed. Install from Android Studio SDK Manager.${NC}"
    exit 1
  fi
  echo -e "${GREEN}Command-line tools installed.${NC}"
}

# Note: SDKMANAGER may be a bare command name found on PATH, so test it with
# command -v rather than -x (which would look for a file in the current directory).
if [[ -z "$SDKMANAGER" ]] || ! command -v "$SDKMANAGER" >/dev/null 2>&1; then
  install_cmdline_tools
fi

# --- Ensure PATH for this script ---
export PATH="$ANDROID_HOME/cmdline-tools/latest/bin:$ANDROID_HOME/emulator:$ANDROID_HOME/platform-tools:$PATH"
|
||||
|
||||
# --- Default to latest SDK system image (when ANDROID_EMULATOR_API_LEVEL unset) ---
|
||||
# Parses sdkmanager --list for highest API level with Google Play arm64-v8a image.
|
||||
set_latest_system_image() {
|
||||
local list_output
|
||||
list_output=$("$SDKMANAGER" --list 2>/dev/null) || true
|
||||
local best_api=0
|
||||
local best_package=""
|
||||
local pkg api
|
||||
# Match package lines (path may be first column or whole line): system-images;android-NN;google_apis...;arm64-v8a
|
||||
while IFS= read -r line; do
|
||||
pkg=$(echo "$line" | sed -n 's/.*\(system-images;android-[0-9][0-9]*;google_apis[^;]*;arm64-v8a\).*/\1/p')
|
||||
[[ -z "$pkg" ]] && continue
|
||||
api=$(echo "$pkg" | sed 's/.*android-\([0-9][0-9]*\).*/\1/')
|
||||
if [[ "$api" =~ ^[0-9]+$ ]] && [[ "$api" -gt "$best_api" ]]; then
|
||||
best_api="$api"
|
||||
best_package="$pkg"
|
||||
fi
|
||||
done <<< "$list_output"
|
||||
if [[ -n "$best_package" ]] && [[ "$best_api" -gt 0 ]]; then
|
||||
ANDROID_EMULATOR_API_LEVEL="$best_api"
|
||||
SYSTEM_IMAGE_PACKAGE="$best_package"
|
||||
echo -e "${GREEN}Using latest SDK system image: API $best_api ($SYSTEM_IMAGE_PACKAGE)${NC}"
|
||||
fi
|
||||
}
|
||||
|
||||
# Parses sdkmanager --list for highest API level with Wear OS image.
|
||||
# Matches: system-images;android-NN;wear;arm64-v8a or ...;google_apis;wear_os_arm64
|
||||
set_latest_system_image_wear() {
|
||||
local list_output
|
||||
list_output=$("$SDKMANAGER" --list 2>/dev/null) || true
|
||||
local best_api=0
|
||||
local best_package=""
|
||||
local pkg api
|
||||
while IFS= read -r line; do
|
||||
# Must be a system image line containing android-NN and wear (wear; or wear_os)
|
||||
[[ "$line" != *"system-images"* ]] && continue
|
||||
[[ "$line" != *"android-"* ]] && continue
|
||||
[[ "$line" != *"wear"* ]] && continue
|
||||
# Extract package: system-images;android-NN;... (semicolon-separated, may be first column)
|
||||
pkg=$(echo "$line" | sed -n 's/.*\(system-images;android-[0-9][0-9]*;[^;]*;[^;]*\).*/\1/p')
|
||||
[[ -z "$pkg" ]] && continue
|
||||
api=$(echo "$pkg" | sed 's/.*android-\([0-9][0-9]*\).*/\1/')
|
||||
if [[ "$api" =~ ^[0-9]+$ ]] && [[ "$api" -gt "$best_api" ]]; then
|
||||
best_api="$api"
|
||||
best_package="$pkg"
|
||||
fi
|
||||
done <<< "$list_output"
|
||||
if [[ -n "$best_package" ]] && [[ "$best_api" -gt 0 ]]; then
|
||||
ANDROID_EMULATOR_API_LEVEL="$best_api"
|
||||
SYSTEM_IMAGE_PACKAGE="$best_package"
|
||||
echo -e "${GREEN}Using latest Wear OS system image: API $best_api ($SYSTEM_IMAGE_PACKAGE)${NC}"
|
||||
fi
|
||||
}
|
||||
|
||||
# If ANDROID_EMULATOR_API_LEVEL not set, detect latest from SDK (phone or Wear OS)
|
||||
if [[ -z "${ANDROID_EMULATOR_API_LEVEL:-}" ]]; then
|
||||
if [[ "$EMULATOR_TYPE" == "wearos" ]]; then
|
||||
set_latest_system_image_wear
|
||||
else
|
||||
set_latest_system_image
|
||||
fi
|
||||
fi
|
||||
|
||||
# Fallback when detection didn't set a package (e.g. no sdkmanager list)
|
||||
API_LEVEL="${ANDROID_EMULATOR_API_LEVEL:-35}"
|
||||
if [[ -z "${SYSTEM_IMAGE_PACKAGE:-}" ]]; then
|
||||
if [[ "$EMULATOR_TYPE" == "wearos" ]]; then
|
||||
# Wear OS: images often at API 30; package format android-NN;wear;arm64-v8a
|
||||
WEAR_API="${ANDROID_EMULATOR_API_LEVEL:-30}"
|
||||
SYSTEM_IMAGE_PACKAGE="system-images;android-${WEAR_API};wear;arm64-v8a"
|
||||
API_LEVEL="$WEAR_API"
|
||||
elif [[ "$API_LEVEL" == "36" ]]; then
|
||||
SYSTEM_IMAGE_PACKAGE="system-images;android-36;google_apis_playstore_ps16k;arm64-v8a"
|
||||
else
|
||||
SYSTEM_IMAGE_PACKAGE="system-images;android-${API_LEVEL};google_apis_playstore;arm64-v8a"
|
||||
fi
|
||||
fi
|
||||
|
||||
# --- Accept licenses (non-interactive) ---
|
||||
echo -e "${YELLOW}Accepting SDK licenses...${NC}"
|
||||
yes 2>/dev/null | "$SDKMANAGER" --licenses >/dev/null 2>&1 || true
|
||||
|
||||
# --- Install system image ---
|
||||
echo -e "${YELLOW}Installing system image: $SYSTEM_IMAGE_PACKAGE${NC}"
|
||||
if ! "$SDKMANAGER" "$SYSTEM_IMAGE_PACKAGE"; then
|
||||
echo -e "${RED}Failed to install system image. Try a different API level:${NC}"
|
||||
echo " ANDROID_EMULATOR_API_LEVEL=34 $0"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# --- Verify image has system.img (path from package: a;b;c;d -> a/b/c/d) ---
|
||||
REL_IMAGE_DIR=$(echo "$SYSTEM_IMAGE_PACKAGE" | sed 's/;/\//g')
|
||||
IMAGE_DIR="$ANDROID_HOME/$REL_IMAGE_DIR"
|
||||
if [[ ! -f "$IMAGE_DIR/system.img" ]]; then
|
||||
echo -e "${RED}Installed image missing system.img at $IMAGE_DIR${NC}"
|
||||
exit 1
|
||||
fi
|
||||
echo -e "${GREEN}System image OK: $IMAGE_DIR${NC}"
|
||||
|
||||
# --- Resolve AVD directory (phone may point to e.g. Pixel_9_Pro.avd via .ini) ---
|
||||
AVD_INI="$HOME/.android/avd/${AVD_NAME}.ini"
|
||||
AVD_DIR=""
|
||||
if [[ -f "$AVD_INI" ]]; then
|
||||
AVD_PATH=$(grep "^path=" "$AVD_INI" 2>/dev/null | cut -d= -f2-)
|
||||
if [[ -n "$AVD_PATH" ]] && [[ -d "$AVD_PATH" ]]; then
|
||||
AVD_DIR="$AVD_PATH"
|
||||
fi
|
||||
fi
|
||||
if [[ -z "$AVD_DIR" ]]; then
|
||||
AVD_DIR="$HOME/.android/avd/${AVD_NAME}.avd"
|
||||
fi
|
||||
|
||||
# Update existing AVD config to use the working system image
|
||||
if [[ -d "$AVD_DIR" ]] && [[ -f "$AVD_DIR/config.ini" ]] && [[ -f "$IMAGE_DIR/system.img" ]]; then
|
||||
CONFIG="$AVD_DIR/config.ini"
|
||||
if grep -q "image.sysdir" "$CONFIG"; then
|
||||
# Portable sed: write to temp then mv (macOS sed -i needs backup arg)
|
||||
sed "s|image.sysdir.1=.*|image.sysdir.1=$REL_IMAGE_DIR/|" "$CONFIG" > "${CONFIG}.tmp"
|
||||
mv "${CONFIG}.tmp" "$CONFIG"
|
||||
echo -e "${GREEN}Updated AVD config to use $REL_IMAGE_DIR${NC}"
|
||||
fi
|
||||
elif [[ ! -d "$AVD_DIR" ]]; then
|
||||
echo -e "${YELLOW}Creating AVD '$AVD_NAME' with device profile $DEVICE_PROFILE...${NC}"
|
||||
echo no | "$AVDMANAGER" create avd \
|
||||
-n "$AVD_NAME" \
|
||||
-k "$SYSTEM_IMAGE_PACKAGE" \
|
||||
-d "$DEVICE_PROFILE" \
|
||||
--force
|
||||
echo -e "${GREEN}AVD '$AVD_NAME' created.${NC}"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo -e "${GREEN}Done. Start the emulator with:${NC}"
|
||||
echo " emulator -avd $AVD_NAME"
|
||||
echo ""
|
||||
if [[ "$EMULATOR_TYPE" == "wearos" ]]; then
|
||||
echo "Fix Wear OS only: ANDROID_EMULATOR_TYPE=wearos $0"
|
||||
echo "Or fix both phone and Wear: $0 && ANDROID_EMULATOR_TYPE=wearos $0"
|
||||
else
|
||||
echo "Or run deploy: ./scripts/deploy-emulator.sh android"
|
||||
fi
|
||||
echo ""
|
||||
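The system.img verification in the script above depends on the mapping from an sdkmanager package id to its on-disk directory: each `;` becomes `/`. A standalone sketch of that mapping (the package value here is just an illustrative example, not necessarily what the script resolves):

```shell
# sdkmanager package ids map to SDK-relative paths by replacing ';' with '/',
# exactly as the REL_IMAGE_DIR line in the script does.
pkg="system-images;android-35;google_apis_playstore;arm64-v8a"
rel_dir=$(echo "$pkg" | sed 's/;/\//g')
echo "$rel_dir"
```

Appending the result to `$ANDROID_HOME` gives the directory where `system.img` must exist after installation.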
77
runners-conversion/periodVault/init-audit.sh
Executable file
@@ -0,0 +1,77 @@
#!/usr/bin/env bash
# init-audit.sh
# Initializes local audit scaffolding used by process gates.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
AUDIT_DIR="$PROJECT_ROOT/audit"

mkdir -p "$AUDIT_DIR"

if [[ ! -f "$AUDIT_DIR/requirements.json" ]]; then
  cat >"$AUDIT_DIR/requirements.json" <<'JSON'
{
  "version": 1,
  "lastUpdated": "2026-02-21",
  "requirements": [
    {
      "id": "R-CI-SELF-HOSTED",
      "description": "CI jobs run on self-hosted runner labels with documented fallback."
    },
    {
      "id": "R-DEV-SETUP",
      "description": "Repository provides idempotent bootstrap script and verification commands."
    },
    {
      "id": "R-DEV-GUIDE",
      "description": "Developer guide is aligned with README, scripts, and local workflow."
    }
  ]
}
JSON
  echo "[init-audit] Created audit/requirements.json"
else
  echo "[init-audit] Found audit/requirements.json"
fi

if [[ ! -f "$AUDIT_DIR/test-runs.json" ]]; then
  cat >"$AUDIT_DIR/test-runs.json" <<'JSON'
{
  "version": 1,
  "runs": []
}
JSON
  echo "[init-audit] Created audit/test-runs.json"
else
  echo "[init-audit] Found audit/test-runs.json"
fi

if [[ ! -f "$PROJECT_ROOT/CODEX-REPORT.md" ]]; then
  cat >"$PROJECT_ROOT/CODEX-REPORT.md" <<'MD'
# CODEX Report

## Requirements Mapping
- R-CI-SELF-HOSTED: pending
- R-DEV-SETUP: pending
- R-DEV-GUIDE: pending

## Constitution Compliance Matrix
| Principle | Status | Notes |
|-----------|--------|-------|
| I | pending | |
| X | pending | |
| XX | pending | |

## Evidence
- Add command outputs and CI links.

## Risks
- Add known risks and mitigations.
MD
  echo "[init-audit] Created CODEX-REPORT.md template"
else
  echo "[init-audit] Found CODEX-REPORT.md"
fi

echo "[init-audit] Audit scaffolding ready."
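Every artifact in init-audit.sh is wrapped in the same create-if-missing guard, which is what makes the script safe to re-run. A minimal sketch of that pattern, using a throwaway temp directory rather than the real audit/ layout:

```shell
# First pass creates the file, second pass leaves it untouched, matching the
# guard init-audit.sh uses for requirements.json, test-runs.json, and the report.
dir=$(mktemp -d)
f="$dir/requirements.json"
result=""
for pass in 1 2; do
  if [ ! -f "$f" ]; then
    printf '{"version":1}\n' >"$f"
    result="$result created"
  else
    result="$result found"
  fi
done
echo "$result"
rm -rf "$dir"
```

Because the check keys on file existence, re-running never clobbers edits a developer has already made to the scaffolded files.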
107
runners-conversion/periodVault/monitor-pr-checks.sh
Executable file
@@ -0,0 +1,107 @@
#!/usr/bin/env bash
set -euo pipefail

usage() {
  cat <<'EOF'
Usage: scripts/monitor-pr-checks.sh <pr-number>

Environment overrides:
  CHECK_FAST_INTERVAL_SECONDS    default: 60
  CHECK_SLOW_INTERVAL_SECONDS    default: 180
  CHECK_MIN_FAST_WINDOW_SECONDS  default: 900
  CHECK_STABLE_CYCLES_FOR_SLOW   default: 5
EOF
}

if [[ "${1:-}" == "-h" ]] || [[ "${1:-}" == "--help" ]]; then
  usage
  exit 0
fi

PR_NUMBER="${1:-}"
if [[ -z "$PR_NUMBER" ]]; then
  usage >&2
  exit 2
fi

FAST_INTERVAL_SECONDS="${CHECK_FAST_INTERVAL_SECONDS:-60}"
SLOW_INTERVAL_SECONDS="${CHECK_SLOW_INTERVAL_SECONDS:-180}"
MIN_FAST_WINDOW_SECONDS="${CHECK_MIN_FAST_WINDOW_SECONDS:-900}"
STABLE_CYCLES_FOR_SLOW="${CHECK_STABLE_CYCLES_FOR_SLOW:-5}"

start_ts="$(date +%s)"
stable_cycles=0
last_fingerprint=""
err_file="$(mktemp)"
trap 'rm -f "$err_file"' EXIT

echo "Monitoring PR #${PR_NUMBER} checks"
echo "Policy: fast=${FAST_INTERVAL_SECONDS}s, slow=${SLOW_INTERVAL_SECONDS}s, min-fast-window=${MIN_FAST_WINDOW_SECONDS}s, stable-cycles-for-slow=${STABLE_CYCLES_FOR_SLOW}"

while true; do
  now_ts="$(date +%s)"
  elapsed="$((now_ts - start_ts))"
  elapsed_mm="$((elapsed / 60))"
  elapsed_ss="$((elapsed % 60))"

  if ! checks_json="$(gh pr checks "$PR_NUMBER" --json name,state,link 2>"$err_file")"; then
    err_msg="$(tr '\n' ' ' <"$err_file" | sed 's/[[:space:]]\+/ /g; s/^ //; s/ $//')"
    echo "[$(date -u '+%Y-%m-%dT%H:%M:%SZ')] elapsed ${elapsed_mm}m${elapsed_ss}s | check query failed: ${err_msg:-unknown error}"
    sleep "$FAST_INTERVAL_SECONDS"
    continue
  fi
  if [[ "$checks_json" == "[]" ]]; then
    echo "[$(date -u '+%Y-%m-%dT%H:%M:%SZ')] elapsed ${elapsed_mm}m${elapsed_ss}s | no checks yet"
    sleep "$FAST_INTERVAL_SECONDS"
    continue
  fi

  success_count="$(jq '[.[] | select(.state=="SUCCESS")] | length' <<<"$checks_json")"
  failure_count="$(jq '[.[] | select(.state=="FAILURE" or .state=="ERROR" or .state=="STARTUP_FAILURE" or .state=="TIMED_OUT")] | length' <<<"$checks_json")"
  cancelled_count="$(jq '[.[] | select(.state=="CANCELLED")] | length' <<<"$checks_json")"
  skipped_count="$(jq '[.[] | select(.state=="SKIPPED" or .state=="NEUTRAL")] | length' <<<"$checks_json")"
  active_count="$(jq '[.[] | select(.state=="PENDING" or .state=="QUEUED" or .state=="IN_PROGRESS" or .state=="WAITING" or .state=="REQUESTED")] | length' <<<"$checks_json")"
  total_count="$(jq 'length' <<<"$checks_json")"

  fingerprint="$(jq -r 'sort_by(.name) | map("\(.name)=\(.state)") | join(";")' <<<"$checks_json")"
  if [[ "$fingerprint" == "$last_fingerprint" ]]; then
    stable_cycles="$((stable_cycles + 1))"
  else
    stable_cycles=0
    last_fingerprint="$fingerprint"
  fi

  echo "[$(date -u '+%Y-%m-%dT%H:%M:%SZ')] elapsed ${elapsed_mm}m${elapsed_ss}s | total=${total_count} success=${success_count} skipped=${skipped_count} active=${active_count} failed=${failure_count} cancelled=${cancelled_count}"

  if [[ "$failure_count" -gt 0 ]]; then
    echo "Failing checks:"
    jq -r '.[] | select(.state=="FAILURE" or .state=="ERROR" or .state=="STARTUP_FAILURE" or .state=="TIMED_OUT") | " - \(.name): \(.state) \(.link)"' <<<"$checks_json"
    exit 1
  fi

  if [[ "$active_count" -eq 0 ]]; then
    if [[ "$cancelled_count" -gt 0 ]]; then
      echo "Checks ended with cancellations."
      jq -r '.[] | select(.state=="CANCELLED") | " - \(.name): \(.link)"' <<<"$checks_json"
      exit 1
    fi
    if [[ "$((success_count + skipped_count))" -eq "$total_count" ]]; then
      echo "All checks passed."
      exit 0
    fi
    echo "Checks finished with non-success states."
    jq -r '.[] | " - \(.name): \(.state) \(.link)"' <<<"$checks_json"
    exit 1
  fi

  if (( elapsed < MIN_FAST_WINDOW_SECONDS )); then
    sleep "$FAST_INTERVAL_SECONDS"
    continue
  fi

  if (( stable_cycles >= STABLE_CYCLES_FOR_SLOW )); then
    sleep "$SLOW_INTERVAL_SECONDS"
  else
    sleep "$FAST_INTERVAL_SECONDS"
  fi
done
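The polling policy above reduces to a single decision per cycle: poll fast during the minimum fast window, then back off to the slow interval once the check fingerprint has been stable for enough cycles. A sketch with the script's default values (pick_interval is a name introduced here for illustration, not part of the script):

```shell
# Interval selection, condensed from the tail of the monitor loop.
pick_interval() {
  elapsed=$1
  cycles=$2
  fast=60 slow=180 min_fast_window=900 stable_for_slow=5
  if [ "$elapsed" -lt "$min_fast_window" ]; then
    echo "$fast"   # still inside the minimum fast window
  elif [ "$cycles" -ge "$stable_for_slow" ]; then
    echo "$slow"   # check states stable long enough: back off
  else
    echo "$fast"   # states still changing: keep polling fast
  fi
}
pick_interval 300 10
pick_interval 1200 10
pick_interval 1200 2
```

Resetting stable_cycles whenever the fingerprint changes means the monitor snaps back to the fast interval as soon as any check transitions state.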
538
runners-conversion/periodVault/run-emulator-tests.sh
Executable file
@@ -0,0 +1,538 @@
#!/usr/bin/env bash
# run-emulator-tests.sh — Run all emulator/simulator UI tests for PeriodVault
# Usage: ./scripts/run-emulator-tests.sh [android|ios|all]
# Logs to build/emulator-tests.log; script reads the log to detect adb errors (e.g. multiple devices).
#
# iOS watchdog env controls:
#   IOS_HEARTBEAT_SECONDS                  (default: 30)
#   IOS_STARTUP_PROGRESS_TIMEOUT_SECONDS   (default: 900)
#   IOS_TEST_STALL_TIMEOUT_SECONDS         (default: 480)
#   IOS_UNRESPONSIVE_STALL_TIMEOUT_SECONDS (default: 120)
#   IOS_HARD_TIMEOUT_SECONDS               (default: 10800)
#   IOS_ACTIVE_CPU_THRESHOLD               (default: 1.0)
set -euo pipefail

PLATFORM="${1:-all}"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
cd "$PROJECT_ROOT"

# shellcheck source=scripts/lib.sh
source "$SCRIPT_DIR/lib.sh"
ensure_log_file "emulator-tests.log"

# Start Android emulator headless for test runs (no GUI window needed)
export EMULATOR_HEADLESS=1

RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

ANDROID_PASS=0
IOS_PASS=0
ANDROID_FAIL=0
IOS_FAIL=0

run_android() {
  echo -e "${YELLOW}=== Android Emulator Tests ===${NC}"

  if ! ensure_android_emulator; then
    echo -e "${RED}ERROR: Could not start or connect to Android emulator. See $LOG_FILE${NC}"
    ANDROID_FAIL=1
    return 1
  fi

  # Disable animations for stable UI tests
  run_and_log "adb_disable_animations" adb shell "settings put global window_animation_scale 0; settings put global transition_animation_scale 0; settings put global animator_duration_scale 0" || true

  # Pre-flight: verify emulator is responsive via adb shell
  echo "Verifying Android emulator is responsive..."
  if ! adb shell getprop sys.boot_completed 2>/dev/null | grep -q "1"; then
    echo -e "${RED}ERROR: Android emulator not responsive (sys.boot_completed != 1). Aborting.${NC}"
    ANDROID_FAIL=1
    return 1
  fi
  echo "Android emulator is responsive."

  # Uninstall the app to ensure a clean database for tests
  echo "Cleaning app data..."
  adb uninstall periodvault.androidApp 2>/dev/null || true
  adb uninstall periodvault.androidApp.test 2>/dev/null || true

  echo "Running Android instrumented tests..."
  local GRADLE_PID
  local GRADLE_EXIT=0
  local TOTAL_ANDROID_TESTS=0
  TOTAL_ANDROID_TESTS=$(find androidApp/src/androidTest -name '*.kt' -type f -exec grep -hE '@Test' {} + 2>/dev/null | wc -l | tr -d ' ')
  if [[ -z "$TOTAL_ANDROID_TESTS" ]]; then
    TOTAL_ANDROID_TESTS=0
  fi
  ./gradlew androidApp:connectedDebugAndroidTest 2>&1 &
  GRADLE_PID=$!

  # Progress/liveness watchdog:
  # - emits heartbeat every 30s with completed Android test cases and emulator health
  # - kills early only if emulator is unresponsive and test progress is stalled for 10m
  # - retains a generous hard timeout as last-resort safety net
  local HEARTBEAT_SECONDS=30
  local UNRESPONSIVE_STALL_TIMEOUT_SECONDS=600
  local HARD_TIMEOUT_SECONDS=7200  # 2 hours
  (
    local start_ts now_ts elapsed
    local last_progress_ts
    local completed=0
    local last_completed=0
    local stale_seconds=0
    local emu_health=""

    start_ts=$(date +%s)
    last_progress_ts=$start_ts

    while kill -0 $GRADLE_PID 2>/dev/null; do
      sleep "$HEARTBEAT_SECONDS"
      now_ts=$(date +%s)
      elapsed=$((now_ts - start_ts))

      completed=$(find androidApp/build/outputs/androidTest-results/connected -name '*.xml' -type f -exec grep -ho "<testcase " {} + 2>/dev/null | wc -l | tr -d ' ')
      if [[ -z "$completed" ]]; then
        completed=0
      fi

      if [[ "$completed" -gt "$last_completed" ]]; then
        last_progress_ts=$now_ts
        last_completed=$completed
      fi

      if adb shell getprop sys.boot_completed 2>/dev/null | grep -q "1"; then
        emu_health="responsive"
      else
        emu_health="UNRESPONSIVE"
      fi

      stale_seconds=$((now_ts - last_progress_ts))
      local elapsed_mm elapsed_ss
      elapsed_mm=$((elapsed / 60))
      elapsed_ss=$((elapsed % 60))

      if [[ "$TOTAL_ANDROID_TESTS" -gt 0 ]]; then
        echo "Android progress: ${completed}/${TOTAL_ANDROID_TESTS} tests complete | elapsed ${elapsed_mm}m${elapsed_ss}s | emulator ${emu_health}"
      else
        echo "Android progress: ${completed} tests complete | elapsed ${elapsed_mm}m${elapsed_ss}s | emulator ${emu_health}"
      fi

      if [[ "$elapsed" -ge "$HARD_TIMEOUT_SECONDS" ]]; then
        echo "WATCHDOG: killing Gradle (PID $GRADLE_PID) after hard timeout ${HARD_TIMEOUT_SECONDS}s"
        kill $GRADLE_PID 2>/dev/null || true
        sleep 5
        kill -9 $GRADLE_PID 2>/dev/null || true
        break
      fi

      if [[ "$emu_health" == "UNRESPONSIVE" ]] && [[ "$stale_seconds" -ge "$UNRESPONSIVE_STALL_TIMEOUT_SECONDS" ]]; then
        echo "WATCHDOG: killing Gradle (PID $GRADLE_PID) - emulator unresponsive and no progress for ${stale_seconds}s"
        kill $GRADLE_PID 2>/dev/null || true
        sleep 5
        kill -9 $GRADLE_PID 2>/dev/null || true
        break
      fi
    done
  ) &
  local WATCHDOG_PID=$!
  wait $GRADLE_PID 2>/dev/null || GRADLE_EXIT=$?
  kill $WATCHDOG_PID 2>/dev/null || true
  wait $WATCHDOG_PID 2>/dev/null || true

  if [[ $GRADLE_EXIT -eq 137 ]] || [[ $GRADLE_EXIT -eq 143 ]]; then
    echo -e "${RED}Android emulator tests terminated by watchdog${NC}"
    ANDROID_FAIL=1
    run_and_log "adb_restore_animations" adb shell "settings put global window_animation_scale 1; settings put global transition_animation_scale 1; settings put global animator_duration_scale 1" || true
    return 1
  elif [[ $GRADLE_EXIT -eq 0 ]]; then
    echo -e "${GREEN}Android emulator tests PASSED${NC}"
    ANDROID_PASS=1
    # Emit runtime evidence for CI tracking
    local android_duration_s=""
    local android_test_count=""
    if [[ -f androidApp/build/reports/androidTests/connected/debug/index.html ]]; then
      android_test_count="$(grep -o '<div class="counter">[0-9]*</div>' androidApp/build/reports/androidTests/connected/debug/index.html | head -1 | grep -o '[0-9]*' || echo "")"
      android_duration_s="$(grep -o '<div class="counter">[0-9a-z.]*s</div>' androidApp/build/reports/androidTests/connected/debug/index.html | head -1 | grep -o '[0-9.]*' || echo "")"
    fi
    echo "RUNTIME_EVIDENCE: {\"suite\": \"android_ui\", \"tests\": ${android_test_count:-0}, \"duration\": \"${android_duration_s:-unknown}s\", \"timestamp\": \"$(date -u '+%Y-%m-%dT%H:%M:%SZ')\"}"
  else
    echo -e "${RED}Android emulator tests FAILED${NC}"
    ANDROID_FAIL=1
    echo "Test reports: androidApp/build/reports/androidTests/connected/debug/"
    run_and_log "adb_restore_animations" adb shell "settings put global window_animation_scale 1; settings put global transition_animation_scale 1; settings put global animator_duration_scale 1" || true
    return 1
  fi

  # Re-enable animations
  run_and_log "adb_restore_animations" adb shell "settings put global window_animation_scale 1; settings put global transition_animation_scale 1; settings put global animator_duration_scale 1" || true
}

run_ios() {
  echo -e "${YELLOW}=== iOS Simulator Tests ===${NC}"

  # Find an available simulator
  local SIM_ID
  SIM_ID=$(xcrun simctl list devices available -j 2>/dev/null | python3 -c "
import json, sys
data = json.load(sys.stdin)
for runtime, devices in data.get('devices', {}).items():
    if 'iOS' in runtime:
        for d in devices:
            if d.get('isAvailable'):
                print(d['udid'])
                sys.exit(0)
sys.exit(1)
" 2>/dev/null) || true

  if [[ -z "$SIM_ID" ]]; then
    echo -e "${RED}ERROR: No available iOS simulator found.${NC}"
    IOS_FAIL=1
    return 1
  fi

  local SIM_NAME
  SIM_NAME=$(xcrun simctl list devices available | grep "$SIM_ID" | sed 's/ (.*//' | xargs)
  echo "Using simulator: $SIM_NAME ($SIM_ID)"

  # Boot simulator if needed
  xcrun simctl boot "$SIM_ID" 2>/dev/null || true

  # Health check: verify simulator is actually responsive (not just "Booted" in simctl)
  echo "Verifying simulator is responsive..."
  local HEALTH_OK=false
  for i in 1 2 3 4 5; do
    if xcrun simctl spawn "$SIM_ID" launchctl print system >/dev/null 2>&1; then
      HEALTH_OK=true
      break
    fi
    echo "  Attempt $i/5: simulator not responsive, waiting 5s..."
    sleep 5
  done
  if [[ "$HEALTH_OK" != "true" ]]; then
    echo -e "${RED}ERROR: Simulator $SIM_NAME ($SIM_ID) reports Booted but is not responsive.${NC}"
    echo "Attempting full restart..."
    xcrun simctl shutdown "$SIM_ID" 2>/dev/null || true
    sleep 3
    xcrun simctl boot "$SIM_ID" 2>/dev/null || true
    sleep 10
    if ! xcrun simctl spawn "$SIM_ID" launchctl print system >/dev/null 2>&1; then
      echo -e "${RED}ERROR: Simulator still unresponsive after restart. Aborting.${NC}"
      IOS_FAIL=1
      return 1
    fi
    echo "Simulator recovered after restart."
  fi
  echo "Simulator is responsive."

  # Generate Xcode project if needed
  if [[ ! -f iosApp/iosApp.xcodeproj/project.pbxproj ]]; then
    echo "Generating Xcode project..."
    (cd iosApp && xcodegen generate)
  fi

  # --- Phase 1: Build (synchronous, fail-fast) ---
  echo "Building iOS UI tests..."
  local BUILD_DIR
  BUILD_DIR=$(mktemp -d)
  local BUILD_LOG
  BUILD_LOG=$(mktemp)
  local BUILD_START
  BUILD_START=$(date +%s)

  # Capture the exit code explicitly: under `set -e` a bare failing xcodebuild
  # would abort the function before $? could be read.
  local BUILD_EXIT=0
  xcodebuild build-for-testing \
    -project iosApp/iosApp.xcodeproj \
    -scheme iosApp \
    -destination "platform=iOS Simulator,id=$SIM_ID" \
    -derivedDataPath "$BUILD_DIR" \
    > "$BUILD_LOG" 2>&1 || BUILD_EXIT=$?

  local BUILD_END
  BUILD_END=$(date +%s)
  echo "iOS build phase: $((BUILD_END - BUILD_START))s (exit=$BUILD_EXIT)"

  if [[ $BUILD_EXIT -ne 0 ]]; then
    echo -e "${RED}BUILD FAILED — last 30 lines:${NC}"
    tail -30 "$BUILD_LOG"
    rm -f "$BUILD_LOG"
    rm -rf "$BUILD_DIR"
    IOS_FAIL=1
    return 1
  fi
  rm -f "$BUILD_LOG"

  # Disable animations for stable, faster UI tests
  echo "Disabling simulator animations..."
  xcrun simctl spawn "$SIM_ID" defaults write com.apple.Accessibility ReduceMotionEnabled -bool YES 2>/dev/null || true

  # Uninstall the app to ensure a clean database for tests
  echo "Cleaning app data..."
  xcrun simctl uninstall "$SIM_ID" com.periodvault.app 2>/dev/null || true

  # --- Phase 2: Test (background with watchdog, parallel execution) ---
  echo "Running iOS UI tests (parallel enabled)..."
  local TEST_EXIT=0
  local TEST_LOG
  TEST_LOG=$(mktemp)
  local RESULT_BUNDLE_DIR
  RESULT_BUNDLE_DIR=$(mktemp -d)
  local RESULT_BUNDLE_PATH="$RESULT_BUNDLE_DIR/ios-ui-tests.xcresult"
  local TOTAL_IOS_TESTS=0
  TOTAL_IOS_TESTS=$(find iosApp/iosAppUITests -name '*.swift' -print0 2>/dev/null | xargs -0 grep -hE '^[[:space:]]*func[[:space:]]+test' 2>/dev/null | wc -l | tr -d ' ')
  if [[ -z "$TOTAL_IOS_TESTS" ]]; then
    TOTAL_IOS_TESTS=0
  fi
  local TEST_START
  TEST_START=$(date +%s)

  xcodebuild test-without-building \
    -project iosApp/iosApp.xcodeproj \
    -scheme iosApp \
    -destination "platform=iOS Simulator,id=$SIM_ID" \
    -only-testing:iosAppUITests \
    -derivedDataPath "$BUILD_DIR" \
    -resultBundlePath "$RESULT_BUNDLE_PATH" \
    -parallel-testing-enabled YES \
    > "$TEST_LOG" 2>&1 &
  local XCODE_PID=$!

  # Progress/liveness watchdog:
  # - emits heartbeat with completed test count and simulator health
  # - fails fast when CoreSimulatorService is unhealthy
  # - treats test completion, xcodebuild CPU, and log growth as activity
  # - fails when startup/test activity stalls beyond configured thresholds
  # - keeps a hard cap as a final safety net
  local HEARTBEAT_SECONDS="${IOS_HEARTBEAT_SECONDS:-30}"
  local STARTUP_PROGRESS_TIMEOUT_SECONDS="${IOS_STARTUP_PROGRESS_TIMEOUT_SECONDS:-900}"
  local TEST_STALL_TIMEOUT_SECONDS="${IOS_TEST_STALL_TIMEOUT_SECONDS:-480}"
  local UNRESPONSIVE_STALL_TIMEOUT_SECONDS="${IOS_UNRESPONSIVE_STALL_TIMEOUT_SECONDS:-120}"
  local HARD_TIMEOUT_SECONDS="${IOS_HARD_TIMEOUT_SECONDS:-10800}"  # 3 hours
  local ACTIVE_CPU_THRESHOLD="${IOS_ACTIVE_CPU_THRESHOLD:-1.0}"

  echo "iOS watchdog: heartbeat=${HEARTBEAT_SECONDS}s startup_timeout=${STARTUP_PROGRESS_TIMEOUT_SECONDS}s test_stall_timeout=${TEST_STALL_TIMEOUT_SECONDS}s unresponsive_timeout=${UNRESPONSIVE_STALL_TIMEOUT_SECONDS}s hard_timeout=${HARD_TIMEOUT_SECONDS}s cpu_active_threshold=${ACTIVE_CPU_THRESHOLD}%"
  (
    local start_ts now_ts elapsed
    local last_test_progress_ts
    local last_activity_ts
    local completed=0
    local last_completed=0
    local stale_seconds=0
    local sim_health=""
    local first_test_seen=false
    local simctl_health_output=""
    local log_size=0
    local last_log_size=0
    local xcode_cpu="0.0"
    local xcode_cpu_raw=""

    start_ts=$(date +%s)
    last_test_progress_ts=$start_ts
    last_activity_ts=$start_ts

    while kill -0 $XCODE_PID 2>/dev/null; do
      sleep "$HEARTBEAT_SECONDS"
      now_ts=$(date +%s)
      elapsed=$((now_ts - start_ts))

      # Keep watchdog alive before first completed test appears; do not fail on zero matches.
      completed=$(grep -E -c "Test [Cc]ase .* (passed|failed)" "$TEST_LOG" 2>/dev/null || true)
      if [[ -z "$completed" ]]; then
        completed=0
      fi

      if [[ "$completed" -gt "$last_completed" ]]; then
        last_test_progress_ts=$now_ts
        last_activity_ts=$now_ts
        last_completed=$completed
        first_test_seen=true
      fi

      # xcodebuild output growth indicates ongoing work even when a test has not completed yet.
      log_size=$(wc -c < "$TEST_LOG" 2>/dev/null || echo 0)
      if [[ -n "$log_size" ]] && [[ "$log_size" -gt "$last_log_size" ]]; then
        last_log_size=$log_size
        last_activity_ts=$now_ts
      fi

      # CPU usage provides another liveness signal during long-running UI tests.
      xcode_cpu_raw=$(ps -p "$XCODE_PID" -o %cpu= 2>/dev/null | tr -d ' ' || true)
      if [[ -n "$xcode_cpu_raw" ]]; then
        xcode_cpu="$xcode_cpu_raw"
      else
        xcode_cpu="0.0"
      fi
      if awk "BEGIN { exit !($xcode_cpu >= $ACTIVE_CPU_THRESHOLD) }"; then
        last_activity_ts=$now_ts
      fi

      if simctl_health_output=$(xcrun simctl spawn "$SIM_ID" launchctl print system 2>&1); then
        sim_health="responsive"
      else
        sim_health="UNRESPONSIVE"

        # Fail fast when the simulator service itself is down. Waiting longer does not recover this state.
        if echo "$simctl_health_output" | grep -Eiq "CoreSimulatorService connection became invalid|not connected to CoreSimulatorService|Unable to locate device set|Connection refused|simdiskimaged.*(crashed|not responding)|Unable to discover any Simulator runtimes"; then
          echo "WATCHDOG: CoreSimulatorService unhealthy; killing xcodebuild (PID $XCODE_PID) immediately"
          echo "$simctl_health_output" | head -5 | sed 's/^/  simctl: /'
          kill $XCODE_PID 2>/dev/null || true
          sleep 5
          kill -9 $XCODE_PID 2>/dev/null || true
          break
        fi
      fi

      stale_seconds=$((now_ts - last_activity_ts))
      local elapsed_mm elapsed_ss
      elapsed_mm=$((elapsed / 60))
      elapsed_ss=$((elapsed % 60))

      if [[ "$TOTAL_IOS_TESTS" -gt 0 ]]; then
        echo "iOS progress: ${completed}/${TOTAL_IOS_TESTS} tests complete | elapsed ${elapsed_mm}m${elapsed_ss}s | simulator ${sim_health} | xcodebuild cpu ${xcode_cpu}%"
      else
        echo "iOS progress: ${completed} tests complete | elapsed ${elapsed_mm}m${elapsed_ss}s | simulator ${sim_health} | xcodebuild cpu ${xcode_cpu}%"
      fi

      if [[ "$elapsed" -ge "$HARD_TIMEOUT_SECONDS" ]]; then
        echo "WATCHDOG: killing xcodebuild (PID $XCODE_PID) after hard timeout ${HARD_TIMEOUT_SECONDS}s"
        kill $XCODE_PID 2>/dev/null || true
        sleep 5
        kill -9 $XCODE_PID 2>/dev/null || true
        break
      fi

      if [[ "$first_test_seen" != "true" ]] && [[ "$elapsed" -ge "$STARTUP_PROGRESS_TIMEOUT_SECONDS" ]]; then
        echo "WATCHDOG: killing xcodebuild (PID $XCODE_PID) - no completed iOS test observed within startup timeout (${STARTUP_PROGRESS_TIMEOUT_SECONDS}s)"
        kill $XCODE_PID 2>/dev/null || true
        sleep 5
        kill -9 $XCODE_PID 2>/dev/null || true
        break
      fi

      if [[ "$first_test_seen" == "true" ]] && [[ "$stale_seconds" -ge "$TEST_STALL_TIMEOUT_SECONDS" ]]; then
        echo "WATCHDOG: killing xcodebuild (PID $XCODE_PID) - no iOS test activity for ${stale_seconds}s"
        kill $XCODE_PID 2>/dev/null || true
        sleep 5
        kill -9 $XCODE_PID 2>/dev/null || true
        break
      fi

      if [[ "$sim_health" == "UNRESPONSIVE" ]] && [[ "$stale_seconds" -ge "$UNRESPONSIVE_STALL_TIMEOUT_SECONDS" ]]; then
        echo "WATCHDOG: killing xcodebuild (PID $XCODE_PID) - simulator unresponsive and no test activity for ${stale_seconds}s"
        kill $XCODE_PID 2>/dev/null || true
        sleep 5
        kill -9 $XCODE_PID 2>/dev/null || true
        break
      fi
    done
|
||||
) &
|
||||
  local WATCHDOG_PID=$!

  wait "$XCODE_PID" 2>/dev/null || TEST_EXIT=$?
  kill "$WATCHDOG_PID" 2>/dev/null || true
  wait "$WATCHDOG_PID" 2>/dev/null || true

  local TEST_END
  TEST_END=$(date +%s)
  echo "iOS test phase: $((TEST_END - TEST_START))s (exit=$TEST_EXIT)"
  echo "iOS total (build+test): $((TEST_END - BUILD_START))s"

  # Show test summary (passed/failed counts and any failures)
  echo "--- Test Results ---"
  grep -E "Test [Cc]ase .* (passed|failed)" "$TEST_LOG" || true
  echo ""
  echo "--- Failures ---"
  grep -E "(FAIL|error:|\*\* TEST FAILED)" "$TEST_LOG" || echo "  (none)"
  echo ""
  echo "--- Last 20 lines ---"
  tail -20 "$TEST_LOG"
  rm -f "$TEST_LOG"

  if [[ $TEST_EXIT -eq 0 ]]; then
    local SKIP_ALLOWLIST="${IOS_SKIPPED_TESTS_ALLOWLIST:-audit/ios-skipped-tests-allowlist.txt}"
    if ! bash "$SCRIPT_DIR/validate-ios-skipped-tests.sh" "$RESULT_BUNDLE_PATH" "$SKIP_ALLOWLIST"; then
      echo -e "${RED}iOS skipped-test gate FAILED${NC}"
      TEST_EXIT=1
    fi
  fi

  rm -rf "$RESULT_BUNDLE_DIR"
  rm -rf "$BUILD_DIR"

  # Re-enable animations
  xcrun simctl spawn "$SIM_ID" defaults write com.apple.Accessibility ReduceMotionEnabled -bool NO 2>/dev/null || true

  if [[ $TEST_EXIT -eq 137 ]] || [[ $TEST_EXIT -eq 143 ]]; then
    echo -e "${RED}iOS simulator tests terminated by watchdog${NC}"
    IOS_FAIL=1
    return 1
  elif [[ $TEST_EXIT -eq 0 ]]; then
    echo -e "${GREEN}iOS simulator tests PASSED${NC}"
    IOS_PASS=1
    # Emit runtime evidence for CI tracking
    echo "RUNTIME_EVIDENCE: {\"suite\": \"ios_ui\", \"tests\": ${TOTAL_IOS_TESTS:-0}, \"timestamp\": \"$(date -u '+%Y-%m-%dT%H:%M:%SZ')\"}"
  else
    echo -e "${RED}iOS simulator tests FAILED${NC}"
    IOS_FAIL=1
    return 1
  fi
}
echo "================================================"
echo "  PeriodVault Emulator/Simulator Test Runner"
echo "================================================"
echo ""

case "$PLATFORM" in
  android)
    run_android
    ;;
  ios)
    run_ios
    ;;
  all)
    run_android || true
    echo ""
    run_ios || true
    ;;
  *)
    echo "Usage: $0 [android|ios|all]"
    exit 1
    ;;
esac

echo ""
echo "================================================"
echo "  Results Summary"
echo "================================================"
if [[ "$PLATFORM" == "all" || "$PLATFORM" == "android" ]]; then
  if [[ $ANDROID_PASS -eq 1 ]]; then
    echo -e "  Android: ${GREEN}PASSED${NC}"
  elif [[ $ANDROID_FAIL -eq 1 ]]; then
    echo -e "  Android: ${RED}FAILED${NC}"
  else
    echo -e "  Android: ${YELLOW}SKIPPED${NC}"
  fi
fi
if [[ "$PLATFORM" == "all" || "$PLATFORM" == "ios" ]]; then
  if [[ $IOS_PASS -eq 1 ]]; then
    echo -e "  iOS:     ${GREEN}PASSED${NC}"
  elif [[ $IOS_FAIL -eq 1 ]]; then
    echo -e "  iOS:     ${RED}FAILED${NC}"
  else
    echo -e "  iOS:     ${YELLOW}SKIPPED${NC}"
  fi
fi
echo "================================================"

# Exit with failure if any platform failed
if [[ $ANDROID_FAIL -eq 1 ]] || [[ $IOS_FAIL -eq 1 ]]; then
  exit 1
fi
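The liveness pattern above — a backgrounded monitor loop, the main `wait`, then watchdog cleanup — can be reduced to a standalone sketch. This is illustrative only, not the production watchdog: `sleep 10` stands in for xcodebuild, and the 2-second timeout is a placeholder.

```shell
#!/usr/bin/env bash
# Minimal watchdog sketch: kill a long-running command once it exceeds a timeout.
set -u
HARD_TIMEOUT_SECONDS=2

sleep 10 &                 # stand-in for the real workload (e.g. xcodebuild)
WORK_PID=$!

(
  start_ts=$(date +%s)
  while kill -0 "$WORK_PID" 2>/dev/null; do
    sleep 1
    elapsed=$(( $(date +%s) - start_ts ))
    if [[ "$elapsed" -ge "$HARD_TIMEOUT_SECONDS" ]]; then
      echo "WATCHDOG: killing PID $WORK_PID after ${HARD_TIMEOUT_SECONDS}s"
      kill "$WORK_PID" 2>/dev/null || true
      break
    fi
  done
) &
WATCHDOG_PID=$!

WORK_EXIT=0
wait "$WORK_PID" 2>/dev/null || WORK_EXIT=$?
kill "$WATCHDOG_PID" 2>/dev/null || true
wait "$WATCHDOG_PID" 2>/dev/null || true
echo "workload exit: $WORK_EXIT"   # 143 (128 + SIGTERM) when the watchdog fired
```

The key detail, mirrored in the real script, is killing and reaping the watchdog after the main `wait` so a fast, successful run does not leave a stray monitor process behind.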
730 runners-conversion/periodVault/runner.sh (Executable file)
@@ -0,0 +1,730 @@
#!/usr/bin/env bash
# runner.sh — Setup, manage, and tear down a GitHub Actions self-hosted runner.
#
# Supports two platforms:
#   - macOS: Installs the runner agent natively, manages it as a launchd service.
#   - Linux: Delegates to Docker-based runner infrastructure in infra/runners/.
#
# Typical flow:
#   1) ./scripts/runner.sh --mode setup      # install/configure runner
#   2) ./scripts/runner.sh --mode status     # verify runner is online
#   3) (push/PR triggers CI on the self-hosted runner)
#   4) ./scripts/runner.sh --mode stop       # stop runner
#   5) ./scripts/runner.sh --mode uninstall  # deregister and clean up

set -euo pipefail

MODE=""
RUNNER_DIR="${PERIODVAULT_RUNNER_DIR:-${HOME}/.periodvault-runner}"
RUNNER_LABELS="self-hosted,macOS,periodvault"
RUNNER_NAME=""
REPO_SLUG=""
REG_TOKEN=""
FORCE=false
FOREGROUND=false
PUSH_REGISTRY=""
BUILD_TARGET=""

PLIST_LABEL="com.periodvault.actions-runner"
PLIST_PATH="${HOME}/Library/LaunchAgents/${PLIST_LABEL}.plist"

# Resolved during Linux operations
INFRA_DIR=""

usage() {
  cat <<'EOF'
Usage:
  ./scripts/runner.sh --mode <setup|start|stop|status|build-image|uninstall> [options]

Required:
  --mode MODE         One of: setup, start, stop, status, build-image, uninstall

Options (macOS):
  --runner-dir DIR    Installation directory (default: ~/.periodvault-runner)
  --labels LABELS     Comma-separated labels (default: self-hosted,macOS,periodvault)
  --name NAME         Runner name (default: periodvault-<hostname>)
  --repo OWNER/REPO   GitHub repository (default: auto-detected from git remote)
  --token TOKEN       Registration/removal token (prompted if not provided)
  --force             Force re-setup even if already configured
  --foreground        Start in foreground instead of launchd service

Options (Linux — Docker mode):
  On Linux, this script delegates to Docker Compose in infra/runners/.
  Configuration is managed via .env and envs/*.env files.
  See infra/runners/README.md for details.

Options (build-image):
  --target TARGET     Dockerfile target: slim or full (default: builds both)
  --push REGISTRY     Tag and push to a registry (e.g. localhost:5000)

Common:
  -h, --help          Show this help

Examples (macOS):
  ./scripts/runner.sh --mode setup
  ./scripts/runner.sh --mode setup --token ghp_xxxxx
  ./scripts/runner.sh --mode start
  ./scripts/runner.sh --mode start --foreground
  ./scripts/runner.sh --mode status
  ./scripts/runner.sh --mode stop
  ./scripts/runner.sh --mode uninstall

Examples (Linux):
  ./scripts/runner.sh --mode setup      # prompts for .env, starts runners
  ./scripts/runner.sh --mode start      # docker compose up -d
  ./scripts/runner.sh --mode stop       # docker compose down
  ./scripts/runner.sh --mode status     # docker compose ps + logs
  ./scripts/runner.sh --mode uninstall  # docker compose down -v --rmi local

Examples (build-image — works on any OS):
  ./scripts/runner.sh --mode build-image                        # build slim + full
  ./scripts/runner.sh --mode build-image --target slim          # build slim only
  ./scripts/runner.sh --mode build-image --push localhost:5000  # build + push to local registry

Environment overrides:
  PERIODVAULT_RUNNER_DIR   Runner installation directory (macOS only)
EOF
}
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

log() {
  printf '[runner] %s\n' "$*"
}

warn() {
  printf '[runner] WARNING: %s\n' "$*" >&2
}

die() {
  printf '[runner] ERROR: %s\n' "$*" >&2
  exit 1
}

require_cmd() {
  local cmd="$1"
  command -v "$cmd" >/dev/null 2>&1 || die "required command not found: $cmd"
}

# ---------------------------------------------------------------------------
# Platform detection
# ---------------------------------------------------------------------------

detect_os() {
  case "$(uname -s)" in
    Darwin) printf 'darwin' ;;
    Linux)  printf 'linux' ;;
    *)      die "Unsupported OS: $(uname -s). This script supports macOS and Linux." ;;
  esac
}

ensure_macos() {
  [[ "$(detect_os)" == "darwin" ]] || die "This operation requires macOS."
}

find_infra_dir() {
  local script_dir
  script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
  local repo_root="${script_dir}/.."
  INFRA_DIR="$(cd "${repo_root}/infra/runners" 2>/dev/null && pwd)" || true

  if [[ -z "$INFRA_DIR" ]] || [[ ! -f "${INFRA_DIR}/docker-compose.yml" ]]; then
    die "Could not find infra/runners/docker-compose.yml. Ensure you are running from the periodvault repo."
  fi
}
# ---------------------------------------------------------------------------
# Argument parsing
# ---------------------------------------------------------------------------

parse_args() {
  while [[ $# -gt 0 ]]; do
    case "$1" in
      --mode)
        shift; [[ $# -gt 0 ]] || die "--mode requires a value"
        MODE="$1"; shift ;;
      --runner-dir)
        shift; [[ $# -gt 0 ]] || die "--runner-dir requires a value"
        RUNNER_DIR="$1"; shift ;;
      --labels)
        shift; [[ $# -gt 0 ]] || die "--labels requires a value"
        RUNNER_LABELS="$1"; shift ;;
      --name)
        shift; [[ $# -gt 0 ]] || die "--name requires a value"
        RUNNER_NAME="$1"; shift ;;
      --repo)
        shift; [[ $# -gt 0 ]] || die "--repo requires a value"
        REPO_SLUG="$1"; shift ;;
      --token)
        shift; [[ $# -gt 0 ]] || die "--token requires a value"
        REG_TOKEN="$1"; shift ;;
      --target)
        shift; [[ $# -gt 0 ]] || die "--target requires a value (slim or full)"
        BUILD_TARGET="$1"; shift ;;
      --force)
        FORCE=true; shift ;;
      --foreground)
        FOREGROUND=true; shift ;;
      --push)
        shift; [[ $# -gt 0 ]] || die "--push requires a registry address (e.g. localhost:5000)"
        PUSH_REGISTRY="$1"; shift ;;
      -h|--help)
        usage; exit 0 ;;
      *)
        die "unknown argument: $1" ;;
    esac
  done

  [[ -n "$MODE" ]] || die "--mode is required (setup|start|stop|status|build-image|uninstall)"
  case "$MODE" in
    setup|start|stop|status|build-image|uninstall) ;;
    *) die "invalid --mode: $MODE (expected setup|start|stop|status|build-image|uninstall)" ;;
  esac
}

# ---------------------------------------------------------------------------
# Repo detection
# ---------------------------------------------------------------------------

detect_repo() {
  if [[ -n "$REPO_SLUG" ]]; then
    return
  fi

  local remote_url=""
  remote_url="$(git remote get-url origin 2>/dev/null || true)"
  if [[ -z "$remote_url" ]]; then
    die "Could not detect repository from git remote. Use --repo OWNER/REPO."
  fi

  REPO_SLUG="$(printf '%s' "$remote_url" \
    | sed -E 's#^(https?://github\.com/|git@github\.com:)##' \
    | sed -E 's/\.git$//')"

  if [[ -z "$REPO_SLUG" ]] || ! printf '%s' "$REPO_SLUG" | grep -qE '^[^/]+/[^/]+$'; then
    die "Could not parse OWNER/REPO from remote URL: $remote_url. Use --repo OWNER/REPO."
  fi

  log "Auto-detected repository: $REPO_SLUG"
}
# ===========================================================================
# macOS: Native runner agent + launchd service
# ===========================================================================

detect_arch() {
  local arch
  arch="$(uname -m)"
  case "$arch" in
    arm64|aarch64) printf 'arm64' ;;
    x86_64)        printf 'x64' ;;
    *)             die "Unsupported architecture: $arch" ;;
  esac
}

download_runner() {
  require_cmd curl
  require_cmd shasum
  require_cmd tar

  local arch
  arch="$(detect_arch)"

  log "Fetching latest runner release metadata..."
  local release_json
  release_json="$(curl -fsSL "https://api.github.com/repos/actions/runner/releases/latest")"

  local version
  version="$(printf '%s' "$release_json" | grep '"tag_name"' | sed -E 's/.*"v([^"]+)".*/\1/')"
  if [[ -z "$version" ]]; then
    die "Could not determine latest runner version from GitHub API."
  fi
  log "Latest runner version: $version"

  local tarball="actions-runner-osx-${arch}-${version}.tar.gz"
  local download_url="https://github.com/actions/runner/releases/download/v${version}/${tarball}"

  local sha_marker="osx-${arch}"
  local expected_sha=""
  expected_sha="$(printf '%s' "$release_json" \
    | python3 -c "
import json, sys, re
body = json.load(sys.stdin).get('body', '')
m = re.search(r'<!-- BEGIN SHA ${sha_marker} -->([0-9a-f]{64})<!-- END SHA ${sha_marker} -->', body)
print(m.group(1) if m else '')
" 2>/dev/null || true)"

  mkdir -p "$RUNNER_DIR"
  local dest="${RUNNER_DIR}/${tarball}"

  if [[ -f "$dest" ]]; then
    log "Tarball already exists: $dest"
  else
    log "Downloading: $download_url"
    curl -fSL -o "$dest" "$download_url"
  fi

  if [[ -n "$expected_sha" ]]; then
    log "Verifying SHA256 checksum..."
    local actual_sha
    actual_sha="$(shasum -a 256 "$dest" | awk '{print $1}')"
    if [[ "$actual_sha" != "$expected_sha" ]]; then
      rm -f "$dest"
      die "Checksum mismatch. Expected: $expected_sha, Got: $actual_sha"
    fi
    log "Checksum verified."
  else
    warn "Could not extract expected SHA256 from release metadata; skipping verification."
  fi

  log "Extracting runner into $RUNNER_DIR..."
  tar -xzf "$dest" -C "$RUNNER_DIR"
  rm -f "$dest"

  log "Runner extracted (version $version)."
}
prompt_token() {
  if [[ -n "$REG_TOKEN" ]]; then
    return
  fi

  log ""
  log "A registration token is required."
  log "Obtain one from: https://github.com/${REPO_SLUG}/settings/actions/runners/new"
  log "Or via the API:"
  log "  curl -X POST -H 'Authorization: token YOUR_PAT' \\"
  log "    https://api.github.com/repos/${REPO_SLUG}/actions/runners/registration-token"
  log ""
  printf '[runner] Enter registration token: '
  read -r REG_TOKEN
  [[ -n "$REG_TOKEN" ]] || die "No token provided."
}

register_runner() {
  if [[ -z "$RUNNER_NAME" ]]; then
    RUNNER_NAME="periodvault-$(hostname -s)"
  fi

  log "Registering runner '${RUNNER_NAME}' with labels '${RUNNER_LABELS}'..."

  local config_args=(
    --url "https://github.com/${REPO_SLUG}"
    --token "$REG_TOKEN"
    --name "$RUNNER_NAME"
    --labels "$RUNNER_LABELS"
    --work "${RUNNER_DIR}/_work"
    --unattended
  )

  if [[ "$FORCE" == "true" ]]; then
    config_args+=(--replace)
  fi

  "${RUNNER_DIR}/config.sh" "${config_args[@]}"
  log "Runner registered."
}
# ---------------------------------------------------------------------------
# launchd service management (macOS)
# ---------------------------------------------------------------------------

create_plist() {
  mkdir -p "${RUNNER_DIR}/logs"
  mkdir -p "$(dirname "$PLIST_PATH")"

  cat > "$PLIST_PATH" <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>${PLIST_LABEL}</string>
  <key>ProgramArguments</key>
  <array>
    <string>${RUNNER_DIR}/run.sh</string>
  </array>
  <key>WorkingDirectory</key>
  <string>${RUNNER_DIR}</string>
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
  <key>StandardOutPath</key>
  <string>${RUNNER_DIR}/logs/stdout.log</string>
  <key>StandardErrorPath</key>
  <string>${RUNNER_DIR}/logs/stderr.log</string>
  <key>EnvironmentVariables</key>
  <dict>
    <key>PATH</key>
    <string>/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin</string>
    <key>HOME</key>
    <string>${HOME}</string>
  </dict>
</dict>
</plist>
EOF

  log "Launchd plist created: $PLIST_PATH"
}

load_service() {
  if launchctl list 2>/dev/null | grep -q "$PLIST_LABEL"; then
    log "Service already loaded; unloading first..."
    launchctl unload "$PLIST_PATH" 2>/dev/null || true
  fi

  launchctl load "$PLIST_PATH"
  log "Service loaded."
}

unload_service() {
  if launchctl list 2>/dev/null | grep -q "$PLIST_LABEL"; then
    launchctl unload "$PLIST_PATH" 2>/dev/null || true
    log "Service unloaded."
  else
    log "Service is not loaded."
  fi
}

service_is_running() {
  launchctl list 2>/dev/null | grep -q "$PLIST_LABEL"
}
# ---------------------------------------------------------------------------
# macOS mode implementations
# ---------------------------------------------------------------------------

do_setup_darwin() {
  detect_repo

  if [[ -f "${RUNNER_DIR}/.runner" ]] && [[ "$FORCE" != "true" ]]; then
    log "Runner already configured at $RUNNER_DIR."
    log "Use --force to re-setup."
    do_status_darwin
    return
  fi

  download_runner
  prompt_token
  register_runner
  create_plist
  load_service

  log ""
  log "Setup complete. Runner is registered and running."
  log ""
  log "To activate self-hosted CI, set these repository variables:"
  log '  CI_RUNS_ON_MACOS: ["self-hosted", "macOS", "periodvault"]'
  log ""
  log "Via CLI:"
  log '  gh variable set CI_RUNS_ON_MACOS --body '"'"'["self-hosted","macOS","periodvault"]'"'"
  log ""
  log "Energy saver: ensure your Mac does not sleep while the runner is active."
  log "  System Settings > Energy Saver > Prevent automatic sleeping"
}

do_start_darwin() {
  [[ -f "${RUNNER_DIR}/.runner" ]] || die "Runner not configured. Run --mode setup first."

  if [[ "$FOREGROUND" == "true" ]]; then
    log "Starting runner in foreground (Ctrl-C to stop)..."
    exec "${RUNNER_DIR}/run.sh"
  fi

  if service_is_running; then
    log "Runner service is already running."
    return
  fi

  if [[ ! -f "$PLIST_PATH" ]]; then
    log "Plist not found; recreating..."
    create_plist
  fi

  load_service
  log "Runner started."
}

do_stop_darwin() {
  unload_service
  log "Runner stopped."
}

do_status_darwin() {
  log "Runner directory: $RUNNER_DIR"

  if [[ ! -f "${RUNNER_DIR}/.runner" ]]; then
    log "Status: NOT CONFIGURED"
    log "Run --mode setup to install and register the runner."
    return
  fi

  local runner_name=""
  if command -v python3 >/dev/null 2>&1; then
    runner_name="$(python3 -c "import json,sys; d=json.load(open(sys.argv[1])); print(d.get('agentName',''))" "${RUNNER_DIR}/.runner" 2>/dev/null || true)"
  fi
  if [[ -z "$runner_name" ]]; then
    runner_name="(could not parse)"
  fi

  log "Runner name: $runner_name"

  if service_is_running; then
    log "Service: RUNNING"
  else
    log "Service: STOPPED"
  fi

  if pgrep -f "Runner.Listener" >/dev/null 2>&1; then
    log "Process: ACTIVE (Runner.Listener found)"
  else
    log "Process: INACTIVE"
  fi

  local log_file="${RUNNER_DIR}/logs/stdout.log"
  if [[ -f "$log_file" ]]; then
    log ""
    log "Recent log output (last 10 lines):"
    tail -n 10 "$log_file" 2>/dev/null || true
  fi

  local diag_dir="${RUNNER_DIR}/_diag"
  if [[ -d "$diag_dir" ]]; then
    local latest_diag
    latest_diag="$(ls -t "${diag_dir}"/Runner_*.log 2>/dev/null | head -n1 || true)"
    if [[ -n "$latest_diag" ]]; then
      log ""
      log "Latest runner diagnostic (last 5 lines):"
      tail -n 5 "$latest_diag" 2>/dev/null || true
    fi
  fi
}
do_uninstall_darwin() {
  log "Uninstalling self-hosted runner..."

  unload_service

  if [[ -f "$PLIST_PATH" ]]; then
    rm -f "$PLIST_PATH"
    log "Removed plist: $PLIST_PATH"
  fi

  if [[ -f "${RUNNER_DIR}/config.sh" ]]; then
    if [[ -z "$REG_TOKEN" ]]; then
      detect_repo
      log ""
      log "A removal token is required to deregister the runner."
      log "Obtain one from: https://github.com/${REPO_SLUG}/settings/actions/runners"
      log "Or via the API:"
      log "  curl -X POST -H 'Authorization: token YOUR_PAT' \\"
      log "    https://api.github.com/repos/${REPO_SLUG}/actions/runners/remove-token"
      log ""
      printf '[runner] Enter removal token (or press Enter to skip deregistration): '
      read -r REG_TOKEN
    fi

    if [[ -n "$REG_TOKEN" ]]; then
      # Only report success when config.sh remove actually succeeded.
      if "${RUNNER_DIR}/config.sh" remove --token "$REG_TOKEN"; then
        log "Runner deregistered from GitHub."
      else
        warn "Deregistration failed; you may need to remove the runner manually from GitHub settings."
      fi
    else
      warn "Skipping deregistration. Remove the runner manually from GitHub settings."
    fi
  fi

  if [[ -d "$RUNNER_DIR" ]]; then
    log "Removing runner directory: $RUNNER_DIR"
    rm -rf "$RUNNER_DIR"
    log "Runner directory removed."
  fi

  log "Uninstall complete."
}
# ===========================================================================
# Linux: Docker-based runner via infra/runners/
# ===========================================================================

ensure_docker() {
  require_cmd docker

  if docker compose version >/dev/null 2>&1; then
    return
  fi

  if command -v docker-compose >/dev/null 2>&1; then
    warn "Found docker-compose (standalone). The docker compose v2 plugin is recommended."
    return
  fi

  die "docker compose is required. Install Docker Compose v2: https://docs.docker.com/compose/install/"
}

compose() {
  docker compose -f "${INFRA_DIR}/docker-compose.yml" "$@"
}

do_build_image() {
  find_infra_dir
  ensure_docker

  local targets=()
  if [[ -n "$BUILD_TARGET" ]]; then
    targets+=("$BUILD_TARGET")
  else
    targets+=("slim" "full")
  fi

  local target
  for target in "${targets[@]}"; do
    local image_tag="periodvault-runner:${target}"
    if [[ -n "$PUSH_REGISTRY" ]]; then
      image_tag="${PUSH_REGISTRY}/periodvault-runner:${target}"
    fi

    log "Building runner image: ${image_tag} (target: ${target}, platform: linux/amd64)"
    DOCKER_BUILDKIT=1 docker build --platform linux/amd64 --pull \
      --target "$target" \
      -t "$image_tag" \
      "$INFRA_DIR"

    if [[ -n "$PUSH_REGISTRY" ]]; then
      log "Pushing ${image_tag}..."
      docker push "$image_tag"
      log "Image pushed: ${image_tag}"
    else
      log "Image built locally: ${image_tag}"
    fi
  done

  if [[ -z "$PUSH_REGISTRY" ]]; then
    log ""
    log "Use --push <registry> to push to a registry."
    log "Example: ./scripts/runner.sh --mode build-image --push localhost:5000"
  fi
}
do_setup_linux() {
  find_infra_dir
  ensure_docker

  log "Docker-based runner setup (infra/runners/)"
  log ""

  if [[ ! -f "${INFRA_DIR}/.env" ]]; then
    if [[ -f "${INFRA_DIR}/.env.example" ]]; then
      cp "${INFRA_DIR}/.env.example" "${INFRA_DIR}/.env"
      log "Created ${INFRA_DIR}/.env from template."
      log "Edit this file to set your GITHUB_PAT."
      log ""
      printf '[runner] Enter your GitHub PAT (or press Enter to edit .env manually later): '
      read -r pat_input
      if [[ -n "$pat_input" ]]; then
        sed -i "s/^GITHUB_PAT=.*/GITHUB_PAT=${pat_input}/" "${INFRA_DIR}/.env"
        log "GITHUB_PAT set in .env"
      fi
    else
      die "Missing .env.example template in ${INFRA_DIR}"
    fi
  else
    log ".env already exists; skipping."
  fi

  if [[ ! -f "${INFRA_DIR}/envs/periodvault.env" ]]; then
    if [[ -f "${INFRA_DIR}/envs/periodvault.env.example" ]]; then
      cp "${INFRA_DIR}/envs/periodvault.env.example" "${INFRA_DIR}/envs/periodvault.env"
      log "Created ${INFRA_DIR}/envs/periodvault.env from template."
      log "Edit this file to configure REPO_URL, RUNNER_NAME, and resource limits."
    else
      die "Missing envs/periodvault.env.example template in ${INFRA_DIR}"
    fi
  else
    log "envs/periodvault.env already exists; skipping."
  fi

  log ""
  log "Starting runners..."
  compose up -d

  log ""
  log "Setup complete. Verify with: ./scripts/runner.sh --mode status"
  log ""
  log "To activate self-hosted CI, set these repository variables:"
  log '  gh variable set CI_RUNS_ON --body '"'"'["self-hosted","Linux","X64"]'"'"
  log '  gh variable set CI_RUNS_ON_ANDROID --body '"'"'["self-hosted","Linux","X64","android-emulator"]'"'"
}
do_start_linux() {
  find_infra_dir
  ensure_docker

  log "Starting Docker runners..."
  compose up -d
  log "Runners started."
}

do_stop_linux() {
  find_infra_dir
  ensure_docker

  log "Stopping Docker runners..."
  compose down
  log "Runners stopped."
}

do_status_linux() {
  find_infra_dir
  ensure_docker

  log "Docker runner status (infra/runners/):"
  log ""
  compose ps
  log ""
  log "Recent logs (last 20 lines):"
  compose logs --tail 20 2>/dev/null || true
}

do_uninstall_linux() {
  find_infra_dir
  ensure_docker

  log "Uninstalling Docker runners..."
  compose down -v --rmi local 2>/dev/null || compose down -v
  log "Docker runners removed (containers, volumes, local images)."
  log ""
  log "Note: Runners should auto-deregister from GitHub (ephemeral mode)."
  log "If stale runners remain, remove them manually:"
  log "  gh api -X DELETE repos/OWNER/REPO/actions/runners/RUNNER_ID"
}
# ===========================================================================
# Entry point
# ===========================================================================

main() {
  parse_args "$@"

  local os
  os="$(detect_os)"

  case "$MODE" in
    setup)
      if [[ "$os" == "darwin" ]]; then do_setup_darwin; else do_setup_linux; fi ;;
    start)
      if [[ "$os" == "darwin" ]]; then do_start_darwin; else do_start_linux; fi ;;
    stop)
      if [[ "$os" == "darwin" ]]; then do_stop_darwin; else do_stop_linux; fi ;;
    status)
      if [[ "$os" == "darwin" ]]; then do_status_darwin; else do_status_linux; fi ;;
    build-image)
      do_build_image ;;
    uninstall)
      if [[ "$os" == "darwin" ]]; then do_uninstall_darwin; else do_uninstall_linux; fi ;;
    *)
      die "unexpected mode: $MODE" ;;
  esac
}

main "$@"
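The `detect_repo` sed pipeline in runner.sh normalizes both HTTPS and SSH GitHub remotes to an OWNER/REPO slug. A standalone illustration of that transformation (the `acme/periodvault` URLs below are hypothetical examples, not real remotes):

```shell
# Demonstrates the OWNER/REPO extraction used by runner.sh's detect_repo.
parse_slug() {
  printf '%s' "$1" \
    | sed -E 's#^(https?://github\.com/|git@github\.com:)##' \
    | sed -E 's/\.git$//'
}

slug_https="$(parse_slug 'https://github.com/acme/periodvault.git')"
slug_ssh="$(parse_slug 'git@github.com:acme/periodvault.git')"
echo "$slug_https"   # acme/periodvault
echo "$slug_ssh"     # acme/periodvault
```

Both URL styles collapse to the same slug, which is why the script can follow up with a single `^[^/]+/[^/]+$` sanity check.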
280 runners-conversion/periodVault/setup-dev-environment.sh (Executable file)
@@ -0,0 +1,280 @@
#!/usr/bin/env bash
|
||||
# setup-dev-environment.sh
|
||||
# Idempotent bootstrap for local Period Vault development.
|
||||
set -euo pipefail
|
||||
|
||||
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
|
||||
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
|
||||
|
||||
INSTALL_MISSING=0
|
||||
RUN_CHECKS=0
|
||||
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m'
|
||||
|
||||
usage() {
|
||||
cat <<'EOF'
|
||||
Usage: ./scripts/setup-dev-environment.sh [--install] [--verify] [--help]
|
||||
|
||||
Options:
|
||||
--install Attempt safe auto-install for supported tools (Homebrew on macOS).
|
||||
--verify Run post-setup verification commands.
|
||||
--help Show this help.
|
||||
|
||||
Notes:
|
||||
- Script is idempotent and safe to re-run.
|
||||
- Without --install, the script reports actionable install commands.
|
||||
- It never writes credentials/tokens and does not run privileged commands automatically.
|
||||
EOF
|
||||
}
|
||||
|
||||
log() { printf "${BLUE}[%s]${NC} %s\n" "setup" "$*"; }
|
||||
ok() { printf "${GREEN}[ok]${NC} %s\n" "$*"; }
|
||||
warn() { printf "${YELLOW}[warn]${NC} %s\n" "$*"; }
|
||||
fail() { printf "${RED}[error]${NC} %s\n" "$*" >&2; }
|
||||
|
||||
for arg in "$@"; do
|
||||
case "$arg" in
|
||||
--install) INSTALL_MISSING=1 ;;
|
||||
--verify) RUN_CHECKS=1 ;;
|
||||
--help|-h) usage; exit 0 ;;
|
||||
*)
|
||||
fail "Unknown option: $arg"
|
||||
usage
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
if [[ ! -x "$PROJECT_ROOT/gradlew" ]]; then
|
||||
fail "Missing executable Gradle wrapper at ./gradlew."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
OS="$(uname -s)"
|
||||
IS_MAC=0
|
||||
IS_LINUX=0
|
||||
case "$OS" in
|
||||
Darwin) IS_MAC=1 ;;
|
||||
Linux) IS_LINUX=1 ;;
|
||||
*)
|
||||
warn "Unsupported OS: $OS. Script will run checks but skip auto-install."
|
||||
;;
|
||||
esac
|
||||
|
||||
declare -a REQUIRED_TOOLS
|
||||
declare -a OPTIONAL_TOOLS
|
||||
declare -a MISSING_REQUIRED
|
||||
declare -a MISSING_OPTIONAL
|
||||
declare -a REMEDIATION_HINTS
|
||||
|
||||
REQUIRED_TOOLS=(git java)
|
||||
OPTIONAL_TOOLS=(gh act adb emulator avdmanager sdkmanager)
|
||||
|
||||
if [[ $IS_MAC -eq 1 ]]; then
|
||||
REQUIRED_TOOLS+=(xcodebuild xcrun)
|
||||
fi
|
||||
|
||||
have_cmd() {
|
||||
command -v "$1" >/dev/null 2>&1
|
||||
}
|
||||
|
||||
append_unique_hint() {
|
||||
local hint="$1"
|
||||
local existing
|
||||
for existing in "${REMEDIATION_HINTS[@]:-}"; do
|
||||
if [[ "$existing" == "$hint" ]]; then
|
||||
return 0
|
||||
fi
|
||||
done
|
||||
REMEDIATION_HINTS+=("$hint")
|
||||
}
|
||||
|
||||
detect_java_major() {
|
||||
local raw version major
|
||||
raw="$(java -version 2>&1 | head -n 1 || true)"
|
||||
version="$(echo "$raw" | sed -E 's/.*"([0-9]+)(\.[0-9]+.*)?".*/\1/' || true)"
|
||||
if [[ -z "$version" ]]; then
|
||||
echo "0"
|
||||
return 0
|
||||
fi
|
||||
major="$version"
|
||||
echo "$major"
|
||||
}
|
||||
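The sed expression in `detect_java_major` can be checked against sample `java -version` banners. Note that legacy Java 8 banners report their major version as 1, which the numeric `-ge 17` guard later in the script correctly treats as too old. The sample strings below are illustrative, not taken from this repo.

```shell
# Exercise the same sed extraction as detect_java_major on sample
# `java -version` banner lines (sample strings are assumptions).
parse_major() {
  sed -E 's/.*"([0-9]+)(\.[0-9]+.*)?".*/\1/' <<<"$1"
}

parse_major 'openjdk version "17.0.9" 2023-10-17'   # prints 17
parse_major 'java version "1.8.0_392"'              # prints 1 (legacy 8.x format)
```

The second case is why the script compares a parsed major against 17 rather than just checking that parsing succeeded.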
install_with_brew() {
  local formula="$1"
  if ! have_cmd brew; then
    append_unique_hint "Install Homebrew first: https://brew.sh/"
    return 1
  fi

  if brew list --formula "$formula" >/dev/null 2>&1; then
    ok "brew formula '$formula' already installed"
    return 0
  fi

  log "Installing '$formula' via Homebrew..."
  if brew install "$formula"; then
    ok "Installed '$formula'"
    return 0
  fi
  return 1
}

try_install_tool() {
  local tool="$1"
  if [[ $INSTALL_MISSING -ne 1 ]]; then
    return 1
  fi

  if [[ $IS_MAC -eq 1 ]]; then
    case "$tool" in
      git) install_with_brew git ;;
      gh) install_with_brew gh ;;
      act) install_with_brew act ;;
      java)
        install_with_brew openjdk@17
        append_unique_hint "If needed, configure JAVA_HOME for JDK 17+: export JAVA_HOME=\$(/usr/libexec/java_home -v 17)"
        ;;
      *)
        return 1
        ;;
    esac
    return $?
  fi

  if [[ $IS_LINUX -eq 1 ]]; then
    append_unique_hint "Install '$tool' using your distro package manager and re-run this script."
  fi
  return 1
}

tool_hint() {
  local tool="$1"
  if [[ $IS_MAC -eq 1 ]]; then
    case "$tool" in
      git|gh|act) echo "brew install $tool" ;;
      java) echo "brew install openjdk@17 && export JAVA_HOME=\$(/usr/libexec/java_home -v 17)" ;;
      xcodebuild|xcrun) echo "Install Xcode from the App Store and run: sudo xcodebuild -runFirstLaunch" ;;
      adb|emulator|avdmanager|sdkmanager) echo "Install Android Studio + Android SDK command-line tools, then add platform-tools/emulator/cmdline-tools/latest/bin to PATH." ;;
      *) echo "Install '$tool' and ensure it is on PATH." ;;
    esac
    return 0
  fi

  if [[ $IS_LINUX -eq 1 ]]; then
    case "$tool" in
      git) echo "sudo apt-get install -y git" ;;
      java) echo "sudo apt-get install -y openjdk-17-jdk" ;;
      gh) echo "Install GitHub CLI from https://cli.github.com/" ;;
      act) echo "Install act from https://github.com/nektos/act" ;;
      *) echo "Install '$tool' using your package manager and add it to PATH." ;;
    esac
    return 0
  fi

  echo "Install '$tool' and ensure it is on PATH."
}

log "Checking local development prerequisites..."

for tool in "${REQUIRED_TOOLS[@]}"; do
  if have_cmd "$tool"; then
    ok "Found required tool: $tool"
  else
    warn "Missing required tool: $tool"
    if try_install_tool "$tool" && have_cmd "$tool"; then
      ok "Auto-installed required tool: $tool"
    else
      MISSING_REQUIRED+=("$tool")
      append_unique_hint "$(tool_hint "$tool")"
    fi
  fi
done

for tool in "${OPTIONAL_TOOLS[@]}"; do
  if have_cmd "$tool"; then
    ok "Found optional tool: $tool"
  else
    warn "Missing optional tool: $tool"
    if try_install_tool "$tool" && have_cmd "$tool"; then
      ok "Auto-installed optional tool: $tool"
    else
      MISSING_OPTIONAL+=("$tool")
      append_unique_hint "$(tool_hint "$tool")"
    fi
  fi
done

if have_cmd java; then
  JAVA_MAJOR="$(detect_java_major)"
  if [[ "$JAVA_MAJOR" =~ ^[0-9]+$ ]] && [[ "$JAVA_MAJOR" -ge 17 ]]; then
    ok "Java version is compatible (major=$JAVA_MAJOR)"
  else
    fail "Java 17+ is required (detected major=$JAVA_MAJOR)."
    append_unique_hint "$(tool_hint "java")"
    if [[ ! " ${MISSING_REQUIRED[*]} " =~ " java " ]]; then
      MISSING_REQUIRED+=("java")
    fi
  fi
fi

log "Installing git hooks (idempotent)..."
"$SCRIPT_DIR/install-hooks.sh"
ok "Git hooks configured"

echo ""
echo "================================================"
echo "Setup Summary"
echo "================================================"
if [[ ${#MISSING_REQUIRED[@]} -eq 0 ]]; then
  ok "All required prerequisites are available."
else
  fail "Missing required prerequisites: ${MISSING_REQUIRED[*]}"
fi

if [[ ${#MISSING_OPTIONAL[@]} -eq 0 ]]; then
  ok "All optional developer tools are available."
else
  warn "Missing optional tools: ${MISSING_OPTIONAL[*]}"
fi

if [[ ${#REMEDIATION_HINTS[@]} -gt 0 ]]; then
  echo ""
  echo "Suggested remediation:"
  for hint in "${REMEDIATION_HINTS[@]}"; do
    echo "  - $hint"
  done
fi

echo ""
echo "Verification commands:"
echo "  - ./gradlew shared:jvmTest"
echo "  - ./scripts/run-emulator-tests.sh android"
echo "  - ./scripts/run-emulator-tests.sh ios"
echo "  - ./scripts/verify.sh"

if [[ $RUN_CHECKS -eq 1 ]]; then
  echo ""
  log "Running lightweight verification commands..."
  "$PROJECT_ROOT/gradlew" --version >/dev/null
  ok "Gradle wrapper check passed"
  if have_cmd gh; then
    gh --version >/dev/null
    ok "GitHub CLI check passed"
  fi
  if have_cmd xcrun; then
    xcrun simctl list devices available >/dev/null
    ok "iOS simulator listing check passed"
  fi
fi

if [[ ${#MISSING_REQUIRED[@]} -gt 0 ]]; then
  exit 1
fi

ok "Developer environment bootstrap completed."
72  runners-conversion/periodVault/setup.sh  Executable file
@@ -0,0 +1,72 @@
#!/usr/bin/env bash
# setup.sh — Cross-platform developer environment setup entrypoint.
#
# macOS: Dispatches to scripts/setup-dev-environment.sh (full bootstrap).
# Linux: Minimal bootstrap (JDK check, git hooks, Gradle dependencies).
#
# Usage: ./scripts/setup.sh [--install] [--verify]

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
OS="$(uname -s)"

if [[ "$OS" == "Darwin" ]]; then
  exec "${SCRIPT_DIR}/setup-dev-environment.sh" "$@"
fi

if [[ "$OS" != "Linux" ]]; then
  echo "Unsupported OS: $OS. This script supports macOS and Linux." >&2
  exit 1
fi

# --- Linux bootstrap ---

echo "=== periodvault development setup (Linux) ==="
echo ""

# Check JDK (guard the numeric compare so an unparseable banner warns
# instead of aborting under `set -e`)
if command -v java >/dev/null 2>&1; then
  JAVA_MAJOR="$(java -version 2>&1 | head -1 | sed -E 's/.*"([0-9]+).*/\1/')"
  echo "[ok] Java is installed (major version: $JAVA_MAJOR)"
  if ! [[ "$JAVA_MAJOR" =~ ^[0-9]+$ ]] || [[ "$JAVA_MAJOR" -lt 17 ]]; then
    echo "[warn] JDK 17+ is required. Found major version $JAVA_MAJOR."
    echo "       Install: sudo apt-get install -y openjdk-17-jdk-headless"
  fi
else
  echo "[error] Java not found."
  echo "        Install: sudo apt-get install -y openjdk-17-jdk-headless"
  exit 1
fi

# Check Android SDK
if [[ -n "${ANDROID_HOME:-}" ]]; then
  echo "[ok] ANDROID_HOME is set: $ANDROID_HOME"
else
  echo "[warn] ANDROID_HOME not set. Android SDK may not be available."
  echo "       Set ANDROID_HOME to your Android SDK path for Android builds."
fi

# Install git hooks
if [[ -x "$SCRIPT_DIR/install-hooks.sh" ]]; then
  echo ""
  echo "Installing git hooks..."
  "$SCRIPT_DIR/install-hooks.sh"
  echo "[ok] Git hooks configured"
fi

# Download Gradle dependencies
if [[ -x "$PROJECT_ROOT/gradlew" ]]; then
  echo ""
  echo "Downloading Gradle dependencies..."
  "$PROJECT_ROOT/gradlew" --no-daemon dependencies > /dev/null 2>&1 || true
  echo "[ok] Gradle dependencies downloaded"
fi

echo ""
echo "=== Setup complete (Linux) ==="
echo ""
echo "Verification commands:"
echo "  ./gradlew shared:jvmTest"
echo "  ./gradlew androidApp:testDebugUnitTest"
31  runners-conversion/periodVault/test-audit-enforcement.sh  Executable file
@@ -0,0 +1,31 @@
#!/usr/bin/env bash
# test-audit-enforcement.sh
# Smoke checks for process/audit enforcement scripts.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
cd "$PROJECT_ROOT"

"$SCRIPT_DIR/check-process.sh" HEAD~1
"$SCRIPT_DIR/validate-sdd.sh" HEAD~1
FORCE_AUDIT_GATES=1 "$SCRIPT_DIR/validate-tdd.sh" HEAD~1

TMP_REPORT="$(mktemp)"
cat >"$TMP_REPORT" <<'MD'
# CODEX Report
## Requirements Mapping
- sample
## Constitution Compliance Matrix
| Principle | Status | Notes |
|-----------|--------|-------|
| I | pass | sample |
## Evidence
- sample
## Risks
- sample
MD
"$SCRIPT_DIR/validate-audit-report.sh" "$TMP_REPORT"
rm -f "$TMP_REPORT"

echo "[test-audit-enforcement] PASS"
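The `<<'MD'` heredoc above uses a quoted delimiter, so the sample report is written byte-for-byte with no parameter expansion or command substitution. A minimal standalone illustration of the difference:

```shell
# Quoted vs unquoted heredoc delimiters (standalone illustration).
greeting="hi"

unquoted="$(cat <<EOF
$greeting
EOF
)"

quoted="$(cat <<'EOF'
$greeting
EOF
)"

echo "$unquoted"   # prints: hi
echo "$quoted"     # prints: $greeting
```

Quoting the delimiter matters here because a markdown report could legitimately contain `$VARS` or backticks that must survive verbatim.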
476  runners-conversion/periodVault/test-infra-runners.sh  Executable file
@@ -0,0 +1,476 @@
#!/usr/bin/env bash
# test-infra-runners.sh — Integration tests for self-hosted CI runner infrastructure.
#
# Tests cover:
#   1. Shell script syntax (bash -n) for all infrastructure scripts
#   2. runner.sh argument parsing and help output
#   3. setup.sh cross-platform dispatch logic
#   4. Docker image builds (slim + full) with content verification
#   5. Docker Compose configuration validation
#   6. ci.yml runner variable expression syntax
#   7. lib.sh headless emulator function structure
#   8. entrypoint.sh env validation logic
#
# Usage: ./scripts/test-infra-runners.sh [--skip-docker]
#
#   --skip-docker  Skip Docker image build tests (useful in CI without Docker)

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"

PASS_COUNT=0
FAIL_COUNT=0
SKIP_COUNT=0
SKIP_DOCKER=false

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------

log()  { echo "[test-infra] $*"; }
pass() { PASS_COUNT=$((PASS_COUNT + 1)); log "PASS: $*"; }
fail() { FAIL_COUNT=$((FAIL_COUNT + 1)); log "FAIL: $*"; }
skip() { SKIP_COUNT=$((SKIP_COUNT + 1)); log "SKIP: $*"; }

assert_file_exists() {
  local path="$1" label="$2"
  if [[ -f "$path" ]]; then
    pass "$label"
  else
    fail "$label — file not found: $path"
  fi
}

assert_file_executable() {
  local path="$1" label="$2"
  if [[ -x "$path" ]]; then
    pass "$label"
  else
    fail "$label — not executable: $path"
  fi
}

assert_contains() {
  local haystack="$1" needle="$2" label="$3"
  if echo "$haystack" | grep -qF -- "$needle"; then
    pass "$label"
  else
    fail "$label — expected to contain: $needle"
  fi
}

assert_not_contains() {
  local haystack="$1" needle="$2" label="$3"
  if ! echo "$haystack" | grep -qF -- "$needle"; then
    pass "$label"
  else
    fail "$label — should NOT contain: $needle"
  fi
}

assert_exit_code() {
  local expected="$1" label="$2"
  shift 2
  local actual
  set +e
  "$@" >/dev/null 2>&1
  actual=$?
  set -e
  if [[ "$actual" -eq "$expected" ]]; then
    pass "$label"
  else
    fail "$label — expected exit $expected, got $actual"
  fi
}
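`assert_exit_code` toggles `set +e` / `set -e` because, under `set -e`, a failing command would abort the whole test script before its status could be recorded. The pattern in isolation (the `capture_exit` name is illustrative):

```shell
# Capture a command's exit code without tripping `set -e`
# (standalone illustration of the toggle used in assert_exit_code).
set -e

capture_exit() {
  set +e
  "$@" >/dev/null 2>&1
  local rc=$?
  set -e
  echo "$rc"
}

capture_exit true    # prints 0
capture_exit false   # prints 1
```

Reading `$?` on the very next line after the command is essential; any intervening command would overwrite it.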
# ---------------------------------------------------------------------------
# Parse args
# ---------------------------------------------------------------------------

while [[ $# -gt 0 ]]; do
  case "$1" in
    --skip-docker) SKIP_DOCKER=true; shift ;;
    *) echo "Unknown arg: $1"; exit 1 ;;
  esac
done

# ===========================================================================
# Section 1: File existence and permissions
# ===========================================================================

log ""
log "=== Section 1: File existence and permissions ==="

assert_file_exists "$PROJECT_ROOT/infra/runners/Dockerfile" "Dockerfile exists"
assert_file_exists "$PROJECT_ROOT/infra/runners/docker-compose.yml" "docker-compose.yml exists"
assert_file_exists "$PROJECT_ROOT/infra/runners/entrypoint.sh" "entrypoint.sh exists"
assert_file_exists "$PROJECT_ROOT/infra/runners/.env.example" ".env.example exists"
assert_file_exists "$PROJECT_ROOT/infra/runners/envs/periodvault.env.example" "periodvault.env.example exists"
assert_file_exists "$PROJECT_ROOT/infra/runners/.gitignore" ".gitignore exists"
assert_file_exists "$PROJECT_ROOT/infra/runners/README.md" "runners README exists"
assert_file_exists "$PROJECT_ROOT/scripts/runner.sh" "runner.sh exists"
assert_file_exists "$PROJECT_ROOT/scripts/setup.sh" "setup.sh exists"
assert_file_exists "$PROJECT_ROOT/.github/workflows/build-runner-image.yml" "build-runner-image workflow exists"

assert_file_executable "$PROJECT_ROOT/infra/runners/entrypoint.sh" "entrypoint.sh is executable"
assert_file_executable "$PROJECT_ROOT/scripts/runner.sh" "runner.sh is executable"
assert_file_executable "$PROJECT_ROOT/scripts/setup.sh" "setup.sh is executable"

# ===========================================================================
# Section 2: Shell script syntax validation (bash -n)
# ===========================================================================

log ""
log "=== Section 2: Shell script syntax ==="

for script in \
  "$PROJECT_ROOT/scripts/runner.sh" \
  "$PROJECT_ROOT/scripts/setup.sh" \
  "$PROJECT_ROOT/infra/runners/entrypoint.sh"; do
  name="$(basename "$script")"
  if bash -n "$script" 2>/dev/null; then
    pass "bash -n $name"
  else
    fail "bash -n $name — syntax error"
  fi
done

# ===========================================================================
# Section 3: runner.sh argument parsing
# ===========================================================================

log ""
log "=== Section 3: runner.sh argument parsing ==="

# --help should exit 0 and print usage
HELP_OUT="$("$PROJECT_ROOT/scripts/runner.sh" --help 2>&1)" || true
assert_contains "$HELP_OUT" "Usage:" "runner.sh --help shows usage"
assert_contains "$HELP_OUT" "--mode" "runner.sh --help mentions --mode"
assert_contains "$HELP_OUT" "build-image" "runner.sh --help mentions build-image"
assert_exit_code 0 "runner.sh --help exits 0" "$PROJECT_ROOT/scripts/runner.sh" --help

# Missing --mode should fail
assert_exit_code 1 "runner.sh without --mode exits 1" "$PROJECT_ROOT/scripts/runner.sh"

# Invalid mode should fail
assert_exit_code 1 "runner.sh --mode invalid exits 1" "$PROJECT_ROOT/scripts/runner.sh" --mode invalid

# ===========================================================================
# Section 4: setup.sh platform dispatch
# ===========================================================================

log ""
log "=== Section 4: setup.sh structure ==="

SETUP_CONTENT="$(cat "$PROJECT_ROOT/scripts/setup.sh")"
assert_contains "$SETUP_CONTENT" "Darwin" "setup.sh handles macOS"
assert_contains "$SETUP_CONTENT" "Linux" "setup.sh handles Linux"
assert_contains "$SETUP_CONTENT" "setup-dev-environment.sh" "setup.sh dispatches to setup-dev-environment.sh"

# ===========================================================================
# Section 5: entrypoint.sh validation logic
# ===========================================================================

log ""
log "=== Section 5: entrypoint.sh structure ==="

ENTRY_CONTENT="$(cat "$PROJECT_ROOT/infra/runners/entrypoint.sh")"
assert_contains "$ENTRY_CONTENT" "GITHUB_PAT" "entrypoint.sh validates GITHUB_PAT"
assert_contains "$ENTRY_CONTENT" "REPO_URL" "entrypoint.sh validates REPO_URL"
assert_contains "$ENTRY_CONTENT" "RUNNER_NAME" "entrypoint.sh validates RUNNER_NAME"
assert_contains "$ENTRY_CONTENT" "--ephemeral" "entrypoint.sh uses ephemeral mode"
assert_contains "$ENTRY_CONTENT" "trap cleanup" "entrypoint.sh traps for cleanup"
assert_contains "$ENTRY_CONTENT" "registration-token" "entrypoint.sh generates registration token"
assert_contains "$ENTRY_CONTENT" "remove-token" "entrypoint.sh handles removal token"
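Section 5 only greps for marker strings; entrypoint.sh itself is not part of this diff. A hypothetical sketch of the env-validation shape those assertions probe for (the `require_env` helper and messages are illustrative assumptions, not the real implementation):

```shell
# Hypothetical sketch of required-env validation (illustrative only;
# entrypoint.sh is not shown in this diff).
require_env() {
  local name="$1"
  # ${!name:-} is bash indirect expansion: the value of the variable
  # whose name is stored in $name, or empty if unset.
  if [[ -z "${!name:-}" ]]; then
    echo "missing required env: $name" >&2
    return 1
  fi
}

GITHUB_PAT="example-token"
require_env GITHUB_PAT && echo "GITHUB_PAT ok"
unset GITHUB_PAT
require_env GITHUB_PAT || echo "would refuse to start"
```

Validating up front and failing fast is what makes the string-grep assertions above meaningful: the names must appear in the script because the script refuses to register a runner without them.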
# ===========================================================================
# Section 6: Dockerfile structure
# ===========================================================================

log ""
log "=== Section 6: Dockerfile structure ==="

DOCKERFILE="$(cat "$PROJECT_ROOT/infra/runners/Dockerfile")"
assert_contains "$DOCKERFILE" "FROM ubuntu:24.04 AS base" "Dockerfile has base stage"
assert_contains "$DOCKERFILE" "FROM base AS slim" "Dockerfile has slim stage"
assert_contains "$DOCKERFILE" "FROM slim AS full" "Dockerfile has full stage"
assert_contains "$DOCKERFILE" "openjdk-17-jdk-headless" "Dockerfile installs JDK 17"
assert_contains "$DOCKERFILE" "platforms;android-34" "Dockerfile installs Android SDK 34"
assert_contains "$DOCKERFILE" "build-tools;34.0.0" "Dockerfile installs build-tools 34"
assert_contains "$DOCKERFILE" "system-images;android-34;google_apis;x86_64" "Full stage includes system images"
assert_contains "$DOCKERFILE" "avdmanager create avd" "Full stage pre-creates AVD"
assert_contains "$DOCKERFILE" "kvm" "Full stage sets up KVM group"
assert_contains "$DOCKERFILE" "HEALTHCHECK" "Dockerfile has HEALTHCHECK"
assert_contains "$DOCKERFILE" "ENTRYPOINT" "Dockerfile has ENTRYPOINT"
assert_contains "$DOCKERFILE" 'userdel -r ubuntu' "Dockerfile removes ubuntu user (GID 1000 conflict fix)"

# ===========================================================================
# Section 7: docker-compose.yml structure
# ===========================================================================

log ""
log "=== Section 7: docker-compose.yml structure ==="

COMPOSE="$(cat "$PROJECT_ROOT/infra/runners/docker-compose.yml")"
assert_contains "$COMPOSE" "registry:" "Compose has registry service"
assert_contains "$COMPOSE" "runner-slim-1:" "Compose has runner-slim-1"
assert_contains "$COMPOSE" "runner-slim-2:" "Compose has runner-slim-2"
assert_contains "$COMPOSE" "runner-emulator:" "Compose has runner-emulator"
assert_contains "$COMPOSE" "registry:2" "Registry uses official image"
assert_contains "$COMPOSE" "/dev/kvm" "Emulator gets KVM device"
assert_contains "$COMPOSE" "no-new-privileges" "Security: no-new-privileges"
assert_contains "$COMPOSE" "init: true" "Uses tini (init: true)"
assert_contains "$COMPOSE" "stop_grace_period" "Emulator has stop grace period"
assert_contains "$COMPOSE" "android-emulator" "Emulator runner has android-emulator label"

# ===========================================================================
# Section 8: ci.yml runner variable expressions
# ===========================================================================

log ""
log "=== Section 8: ci.yml runner variable expressions ==="

CI_YML="$(cat "$PROJECT_ROOT/.github/workflows/ci.yml")"
assert_contains "$CI_YML" 'vars.CI_RUNS_ON_MACOS' "ci.yml uses CI_RUNS_ON_MACOS variable"
assert_contains "$CI_YML" 'vars.CI_RUNS_ON_ANDROID' "ci.yml uses CI_RUNS_ON_ANDROID variable"
assert_contains "$CI_YML" 'vars.CI_RUNS_ON ' "ci.yml uses CI_RUNS_ON variable"
assert_contains "$CI_YML" 'fromJSON(' "ci.yml uses fromJSON() for runner targeting"

# Verify fallback values are present (safe default = current macOS runner)
assert_contains "$CI_YML" '"self-hosted","macOS","periodvault"' "ci.yml has macOS fallback"

# Verify parallelism: test-ios-simulator should NOT depend on test-android-emulator.
# Extract the test-ios-simulator job block (from the job key through its runs-on line).
IOS_SECTION="$(awk '/test-ios-simulator:/,/runs-on:/' "$PROJECT_ROOT/.github/workflows/ci.yml")"
assert_not_contains "$IOS_SECTION" "test-android-emulator" "test-ios-simulator does NOT depend on test-android-emulator (parallel)"
assert_contains "$IOS_SECTION" "test-shared" "test-ios-simulator depends on test-shared"

# Verify audit-quality-gate waits for both platform tests
AUDIT_SECTION="$(awk '/audit-quality-gate:/,/runs-on:/' "$PROJECT_ROOT/.github/workflows/ci.yml")"
assert_contains "$AUDIT_SECTION" "test-android-emulator" "audit-quality-gate waits for android emulator"
assert_contains "$AUDIT_SECTION" "test-ios-simulator" "audit-quality-gate waits for ios simulator"
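The `awk '/start/,/end/'` range pattern used above prints from the first line matching the job key through the next line matching `runs-on:`, which is how a single job block gets sliced out of ci.yml. A miniature with made-up YAML:

```shell
# awk range patterns: print from a line matching the first pattern
# through the next line matching the second (made-up YAML for illustration).
yaml='test-shared:
  runs-on: a
test-ios-simulator:
  needs: [test-shared]
  runs-on: b'

printf '%s\n' "$yaml" | awk '/test-ios-simulator:/,/runs-on:/'
# prints:
# test-ios-simulator:
#   needs: [test-shared]
#   runs-on: b
```

Note the earlier `runs-on: a` line is not printed: the range only opens once the first pattern has matched, so the slice starts at the job key, not at the file's first `runs-on:`.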
# ===========================================================================
# Section 9: lib.sh headless emulator support
# ===========================================================================

log ""
log "=== Section 9: lib.sh headless emulator support ==="

LIB_SH="$(cat "$PROJECT_ROOT/scripts/lib.sh")"
assert_contains "$LIB_SH" "start_emulator_headless()" "lib.sh defines start_emulator_headless()"
assert_contains "$LIB_SH" "-no-window" "Headless emulator uses -no-window"
assert_contains "$LIB_SH" "-no-audio" "Headless emulator uses -no-audio"
assert_contains "$LIB_SH" "swiftshader_indirect" "Headless emulator uses swiftshader GPU"

# Verify OS-aware dispatch in ensure_android_emulator
assert_contains "$LIB_SH" '"$(uname -s)" == "Linux"' "ensure_android_emulator detects Linux"
assert_contains "$LIB_SH" 'start_emulator_headless' "ensure_android_emulator calls headless on Linux"
assert_contains "$LIB_SH" 'start_emulator_windowed' "ensure_android_emulator calls windowed on macOS"

# Verify headless zombie kill is macOS-only
ZOMBIE_LINE="$(grep -n 'is_emulator_headless' "$PROJECT_ROOT/scripts/lib.sh" | grep 'Darwin' || true)"
if [[ -n "$ZOMBIE_LINE" ]]; then
  pass "Headless zombie kill is guarded by Darwin check"
else
  fail "Headless zombie kill should be macOS-only (Darwin guard)"
fi

# ===========================================================================
# Section 10: .gitignore protects secrets
# ===========================================================================

log ""
log "=== Section 10: .gitignore protects secrets ==="

GITIGNORE="$(cat "$PROJECT_ROOT/infra/runners/.gitignore")"
assert_contains "$GITIGNORE" ".env" ".gitignore excludes .env"
assert_contains "$GITIGNORE" "!.env.example" ".gitignore keeps .example files"

# ===========================================================================
# Section 11: Docker image builds (requires Docker)
# ===========================================================================

log ""
log "=== Section 11: Docker image builds ==="

if $SKIP_DOCKER; then
  skip "Docker image build tests (--skip-docker)"
elif ! command -v docker &>/dev/null; then
  skip "Docker image build tests (docker not found)"
elif ! docker info >/dev/null 2>&1; then
  skip "Docker image build tests (docker daemon not running)"
else
  DOCKER_PLATFORM="linux/amd64"

  # --- Build slim ---
  log "Building slim image (this may take a few minutes)..."
  if docker build --platform "$DOCKER_PLATFORM" --target slim \
    -t periodvault-runner-test:slim "$PROJECT_ROOT/infra/runners/" >/dev/null 2>&1; then
    pass "Docker build: slim target succeeds"

    # Verify slim image contents
    SLIM_JAVA="$(docker run --rm --platform "$DOCKER_PLATFORM" periodvault-runner-test:slim \
      java -version 2>&1 | head -1)" || true
    if echo "$SLIM_JAVA" | grep -q "17"; then
      pass "Slim image: Java 17 is installed"
    else
      fail "Slim image: Java 17 not found — got: $SLIM_JAVA"
    fi

    SLIM_SDK="$(docker run --rm --platform "$DOCKER_PLATFORM" periodvault-runner-test:slim \
      bash -c 'ls $ANDROID_HOME/platforms/' 2>&1)" || true
    if echo "$SLIM_SDK" | grep -q "android-34"; then
      pass "Slim image: Android SDK 34 is installed"
    else
      fail "Slim image: Android SDK 34 not found — got: $SLIM_SDK"
    fi

    SLIM_RUNNER="$(docker run --rm --platform "$DOCKER_PLATFORM" periodvault-runner-test:slim \
      bash -c 'ls /home/runner/actions-runner/run.sh' 2>&1)" || true
    if echo "$SLIM_RUNNER" | grep -q "run.sh"; then
      pass "Slim image: GitHub Actions runner agent is installed"
    else
      fail "Slim image: runner agent not found"
    fi

    SLIM_USER="$(docker run --rm --platform "$DOCKER_PLATFORM" periodvault-runner-test:slim \
      whoami 2>&1)" || true
    if [[ "$SLIM_USER" == "runner" ]]; then
      pass "Slim image: runs as 'runner' user"
    else
      fail "Slim image: expected user 'runner', got '$SLIM_USER'"
    fi

    SLIM_ENTRY="$(docker run --rm --platform "$DOCKER_PLATFORM" periodvault-runner-test:slim \
      bash -c 'test -x /home/runner/entrypoint.sh && echo ok' 2>&1)" || true
    if [[ "$SLIM_ENTRY" == "ok" ]]; then
      pass "Slim image: entrypoint.sh is present and executable"
    else
      fail "Slim image: entrypoint.sh not executable"
    fi

    # Verify slim does NOT have emulator
    SLIM_EMU="$(docker run --rm --platform "$DOCKER_PLATFORM" periodvault-runner-test:slim \
      bash -c 'command -v emulator || echo not-found' 2>&1)" || true
    if echo "$SLIM_EMU" | grep -q "not-found"; then
      pass "Slim image: does NOT include emulator (expected)"
    else
      fail "Slim image: unexpectedly contains emulator"
    fi
  else
    fail "Docker build: slim target failed"
  fi

  # --- Build full ---
  log "Building full image (this may take several minutes)..."
  if docker build --platform "$DOCKER_PLATFORM" --target full \
    -t periodvault-runner-test:full "$PROJECT_ROOT/infra/runners/" >/dev/null 2>&1; then
    pass "Docker build: full target succeeds"

    # Verify full image has emulator
    FULL_EMU="$(docker run --rm --platform "$DOCKER_PLATFORM" periodvault-runner-test:full \
      bash -c 'command -v emulator && echo found' 2>&1)" || true
    if echo "$FULL_EMU" | grep -q "found"; then
      pass "Full image: emulator is installed"
    else
      fail "Full image: emulator not found"
    fi

    # Verify full image has AVD pre-created
    FULL_AVD="$(docker run --rm --platform "$DOCKER_PLATFORM" periodvault-runner-test:full \
      bash -c '${ANDROID_HOME}/cmdline-tools/latest/bin/avdmanager list avd 2>/dev/null | grep "Name:" || echo none' 2>&1)" || true
    if echo "$FULL_AVD" | grep -q "phone"; then
      pass "Full image: AVD 'phone' is pre-created"
    else
      fail "Full image: AVD 'phone' not found — got: $FULL_AVD"
    fi

    # Verify full image has system images
    FULL_SYSIMG="$(docker run --rm --platform "$DOCKER_PLATFORM" periodvault-runner-test:full \
      bash -c 'ls $ANDROID_HOME/system-images/android-34/google_apis/x86_64/ 2>/dev/null | head -1 || echo none' 2>&1)" || true
    if [[ "$FULL_SYSIMG" != "none" ]]; then
      pass "Full image: system-images;android-34;google_apis;x86_64 installed"
    else
      fail "Full image: system images not found"
    fi

    # Verify full image has Xvfb
    FULL_XVFB="$(docker run --rm --platform "$DOCKER_PLATFORM" periodvault-runner-test:full \
      bash -c 'command -v Xvfb && echo found || echo not-found' 2>&1)" || true
    if echo "$FULL_XVFB" | grep -q "found"; then
      pass "Full image: Xvfb is installed"
    else
      fail "Full image: Xvfb not found"
    fi

    # Verify kvm group exists and runner is a member
    FULL_KVM="$(docker run --rm --platform "$DOCKER_PLATFORM" periodvault-runner-test:full \
      bash -c 'id runner 2>/dev/null' 2>&1)" || true
    if echo "$FULL_KVM" | grep -q "kvm"; then
      pass "Full image: runner user is in kvm group"
    else
      fail "Full image: runner not in kvm group — got: $FULL_KVM"
    fi
  else
    fail "Docker build: full target failed"
  fi

  # --- Docker Compose validation ---
  log "Validating docker-compose.yml..."
  # Create temp env files for validation
  cp "$PROJECT_ROOT/infra/runners/.env.example" "$PROJECT_ROOT/infra/runners/.env"
  cp "$PROJECT_ROOT/infra/runners/envs/periodvault.env.example" "$PROJECT_ROOT/infra/runners/envs/periodvault.env"

  if docker compose -f "$PROJECT_ROOT/infra/runners/docker-compose.yml" config --quiet 2>/dev/null; then
    pass "docker compose config validates"
  else
    fail "docker compose config failed"
  fi

  # Verify compose defines expected services
  COMPOSE_SERVICES="$(docker compose -f "$PROJECT_ROOT/infra/runners/docker-compose.yml" config --services 2>/dev/null)"
  assert_contains "$COMPOSE_SERVICES" "registry" "Compose service: registry"
  assert_contains "$COMPOSE_SERVICES" "runner-slim-1" "Compose service: runner-slim-1"
  assert_contains "$COMPOSE_SERVICES" "runner-slim-2" "Compose service: runner-slim-2"
  assert_contains "$COMPOSE_SERVICES" "runner-emulator" "Compose service: runner-emulator"

  # Clean up temp env files
  rm -f "$PROJECT_ROOT/infra/runners/.env" "$PROJECT_ROOT/infra/runners/envs/periodvault.env"

  # --- Cleanup test images ---
  docker rmi periodvault-runner-test:slim periodvault-runner-test:full 2>/dev/null || true
fi

# ===========================================================================
# Section 12: build-runner-image.yml workflow structure
# ===========================================================================

log ""
log "=== Section 12: build-runner-image.yml structure ==="

BUILD_WF="$(cat "$PROJECT_ROOT/.github/workflows/build-runner-image.yml")"
assert_contains "$BUILD_WF" "slim" "Build workflow includes slim target"
assert_contains "$BUILD_WF" "full" "Build workflow includes full target"
assert_contains "$BUILD_WF" "matrix" "Build workflow uses matrix strategy"
assert_contains "$BUILD_WF" "ghcr.io" "Build workflow pushes to GHCR"

# ===========================================================================
# Results
# ===========================================================================

log ""
log "=============================="
TOTAL=$((PASS_COUNT + FAIL_COUNT + SKIP_COUNT))
log "Results: $PASS_COUNT passed, $FAIL_COUNT failed, $SKIP_COUNT skipped (total: $TOTAL)"
|
||||
log "=============================="
|
||||
|
||||
if [[ $FAIL_COUNT -gt 0 ]]; then
|
||||
log "FAILED"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
log "ALL PASSED"
|
||||
exit 0
|
||||
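The pass/fail/skip tally used throughout this harness reduces to two counter-bumping helpers plus a summary line. A minimal self-contained sketch (names mirror the script, the checks are illustrative):

```shell
#!/usr/bin/env bash
PASS_COUNT=0
FAIL_COUNT=0
pass() { PASS_COUNT=$((PASS_COUNT + 1)); echo "PASS: $*"; }
fail() { FAIL_COUNT=$((FAIL_COUNT + 1)); echo "FAIL: $*"; }

pass "first check"
pass "second check"
fail "third check"
echo "Results: $PASS_COUNT passed, $FAIL_COUNT failed"
```

Because failures only increment a counter instead of aborting, one run reports every broken check at once, and the final `FAIL_COUNT` drives the exit status.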
239
runners-conversion/periodVault/test-test-quality-gate.sh
Executable file
@@ -0,0 +1,239 @@
#!/usr/bin/env bash
# test-test-quality-gate.sh — Integration-style tests for validate-test-quality.sh.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"

PASS_COUNT=0
FAIL_COUNT=0
declare -a TMP_REPOS=()

log() {
  echo "[test-quality-test] $*"
}

pass() {
  PASS_COUNT=$((PASS_COUNT + 1))
  log "PASS: $*"
}

fail() {
  FAIL_COUNT=$((FAIL_COUNT + 1))
  log "FAIL: $*"
}

require_cmd() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "Missing required command: $1"
    exit 1
  fi
}

run_expect_success() {
  local label="$1"
  shift
  if "$@" >/tmp/test-quality-gate.out 2>&1; then
    pass "$label"
  else
    fail "$label"
    cat /tmp/test-quality-gate.out
  fi
}

run_expect_failure() {
  local label="$1"
  shift
  if "$@" >/tmp/test-quality-gate.out 2>&1; then
    fail "$label (expected failure but command succeeded)"
    cat /tmp/test-quality-gate.out
  else
    pass "$label"
  fi
}

create_fixture_repo() {
  local repo
  repo="$(mktemp -d)"

  mkdir -p "$repo/scripts" \
    "$repo/audit" \
    "$repo/androidApp/src/androidTest/kotlin/example" \
    "$repo/iosApp/iosAppUITests"

  cp "$PROJECT_ROOT/scripts/validate-test-quality.sh" "$repo/scripts/"
  chmod +x "$repo/scripts/validate-test-quality.sh"

  cat > "$repo/androidApp/src/androidTest/kotlin/example/ExampleUiTest.kt" <<'EOF'
package example

import org.junit.Test

class ExampleUiTest {
    @Test
    fun usesAntiPatternsForFixture() {
        Thread.sleep(5)
        try {
            // fixture-only
        } catch (e: AssertionError) {
            // fixture-only
        }
    }
}
EOF

  cat > "$repo/iosApp/iosAppUITests/ExampleUiTests.swift" <<'EOF'
import XCTest

final class ExampleUiTests: XCTestCase {
    func testFixtureUsesAntiPatterns() {
        sleep(1)
        if XCUIApplication().buttons["Example"].exists {
            XCTAssertTrue(true)
        }
    }
}
EOF

  cat > "$repo/audit/test-quality-baseline.json" <<'EOF'
{
  "version": 1,
  "generated_at": "2026-02-20T16:00:00Z",
  "metrics": [
    {
      "id": "android_thread_sleep_calls",
      "description": "Android Thread.sleep",
      "mode": "rg",
      "root": "androidApp/src/androidTest",
      "glob": "*.kt",
      "pattern": "Thread\\.sleep\\(",
      "baseline": 1,
      "allowed_growth": 0
    },
    {
      "id": "android_assertionerror_catches",
      "description": "Android AssertionError catches",
      "mode": "rg",
      "root": "androidApp/src/androidTest",
      "glob": "*.kt",
      "pattern": "catch \\([^\\)]*AssertionError",
      "baseline": 1,
      "allowed_growth": 0
    },
    {
      "id": "ios_sleep_calls",
      "description": "iOS sleep calls",
      "mode": "rg",
      "root": "iosApp/iosAppUITests",
      "glob": "*.swift",
      "pattern": "\\bsleep\\(",
      "baseline": 1,
      "allowed_growth": 0
    },
    {
      "id": "ios_conditional_exists_guards_in_test_bodies",
      "description": "iOS conditional exists checks in test bodies",
      "mode": "swift_test_body_pattern",
      "root": "iosApp/iosAppUITests",
      "glob": "*.swift",
      "pattern": "if[[:space:]]+[^\\n]*\\.exists",
      "baseline": 1,
      "allowed_growth": 0
    },
    {
      "id": "ios_noop_assert_true",
      "description": "iOS no-op assertTrue(true) in test bodies",
      "mode": "swift_test_body_pattern",
      "root": "iosApp/iosAppUITests",
      "glob": "*.swift",
      "pattern": "XCTAssertTrue\\(true\\)",
      "baseline": 1,
      "allowed_growth": 0
    },
    {
      "id": "ios_empty_test_bodies",
      "description": "iOS empty or comment-only test bodies",
      "mode": "rg_multiline",
      "root": "iosApp/iosAppUITests",
      "glob": "*.swift",
      "pattern": "(?s)func\\s+test[[:alnum:]_]+\\s*\\([^)]*\\)\\s*(?:throws\\s*)?\\{\\s*(?:(?://[^\\n]*\\n)\\s*)*\\}",
      "baseline": 0,
      "allowed_growth": 0
    },
    {
      "id": "ios_placeholder_test_markers",
      "description": "iOS placeholder markers in test bodies",
      "mode": "swift_test_body_pattern",
      "root": "iosApp/iosAppUITests",
      "glob": "*.swift",
      "pattern": "(TODO|FIXME|placeholder|no-op)",
      "baseline": 0,
      "allowed_growth": 0
    }
  ]
}
EOF

  TMP_REPOS+=("$repo")
  echo "$repo"
}

test_baseline_pass() {
  local repo
  repo="$(create_fixture_repo)"
  run_expect_success "validate-test-quality passes when metrics match baseline" \
    bash -lc "cd '$repo' && scripts/validate-test-quality.sh"
}

test_growth_fails() {
  local repo
  repo="$(create_fixture_repo)"
  echo "Thread.sleep(10)" >> "$repo/androidApp/src/androidTest/kotlin/example/ExampleUiTest.kt"
  run_expect_failure "validate-test-quality fails when metric grows past threshold" \
    bash -lc "cd '$repo' && scripts/validate-test-quality.sh"
}

test_allowed_growth_passes() {
  local repo
  repo="$(create_fixture_repo)"

  local tmp
  tmp="$(mktemp)"
  jq '(.metrics[] | select(.id == "ios_sleep_calls") | .allowed_growth) = 1' \
    "$repo/audit/test-quality-baseline.json" > "$tmp"
  mv "$tmp" "$repo/audit/test-quality-baseline.json"

  echo "sleep(1)" >> "$repo/iosApp/iosAppUITests/ExampleUiTests.swift"

  run_expect_success "validate-test-quality honors allowed_growth threshold" \
    bash -lc "cd '$repo' && scripts/validate-test-quality.sh"
}

main() {
  require_cmd jq
  require_cmd rg
  require_cmd awk

  test_baseline_pass
  test_growth_fails
  test_allowed_growth_passes

  log "Summary: pass=$PASS_COUNT fail=$FAIL_COUNT"
  if [[ "$FAIL_COUNT" -gt 0 ]]; then
    exit 1
  fi
  return 0
}

cleanup() {
  local repo
  for repo in "${TMP_REPOS[@]:-}"; do
    [[ -d "$repo" ]] && rm -rf "$repo"
  done
  rm -f /tmp/test-quality-gate.out
  return 0
}

trap cleanup EXIT

main "$@"
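The inverted helper above (`run_expect_failure`) is the interesting one: a command that succeeds is the failure case. A standalone sketch, with `false` standing in for a gate that should reject its input:

```shell
#!/usr/bin/env bash
# Sketch of the expected-failure helper: the branches are inverted, so a
# command that exits 0 counts as a test failure.
run_expect_failure() {
  local label="$1"
  shift
  if "$@" >/dev/null 2>&1; then
    echo "FAIL: $label (expected failure but command succeeded)"
  else
    echo "PASS: $label"
  fi
}

run_expect_failure "gate rejects bad input" false
```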
78
runners-conversion/periodVault/validate-audit-report.sh
Executable file
@@ -0,0 +1,78 @@
#!/usr/bin/env bash
# validate-audit-report.sh
# Structural + semantic validation for CODEX audit reports.
set -euo pipefail

REPORT_PATH="${1:-CODEX-REPORT.md}"

if [[ ! -f "$REPORT_PATH" ]]; then
  echo "[validate-audit-report] Missing report: $REPORT_PATH"
  exit 1
fi

FAILURES=0

# --- Check 1: Required sections exist ---
required_sections=(
  "## Requirements Mapping"
  "## Constitution Compliance Matrix"
  "## Evidence"
  "## Risks"
)

for section in "${required_sections[@]}"; do
  if command -v rg >/dev/null 2>&1; then
    if ! rg -q "^${section//\//\\/}$" "$REPORT_PATH"; then
      echo "[validate-audit-report] Missing section: $section"
      FAILURES=$((FAILURES + 1))
    fi
  else
    if ! grep -Eq "^${section//\//\\/}$" "$REPORT_PATH"; then
      echo "[validate-audit-report] Missing section: $section"
      FAILURES=$((FAILURES + 1))
    fi
  fi
done

# --- Check 2: Reject forbidden placeholders ---
forbidden_patterns=("TODO" "TBD" "UNMAPPED" "PLACEHOLDER" "FIXME")
for pattern in "${forbidden_patterns[@]}"; do
  if command -v rg >/dev/null 2>&1; then
    count="$(rg -c "$pattern" "$REPORT_PATH" 2>/dev/null || echo 0)"
  else
    # grep -c prints "0" AND exits non-zero on no match, so `|| echo 0`
    # would produce "0\n0" here; reset the variable explicitly instead.
    count="$(grep -c "$pattern" "$REPORT_PATH" 2>/dev/null)" || count=0
  fi
  if [[ "$count" -gt 0 ]]; then
    echo "[validate-audit-report] Forbidden placeholder '$pattern' found ($count occurrences)"
    FAILURES=$((FAILURES + 1))
  fi
done

# --- Check 3: Non-empty sections (at least 1 non-blank line after heading) ---
for section in "${required_sections[@]}"; do
  # Extract content between this heading and the next ## heading (or EOF)
  section_escaped="${section//\//\\/}"
  content=""
  if command -v awk >/dev/null 2>&1; then
    content="$(awk "/^${section_escaped}\$/{found=1; next} found && /^## /{exit} found{print}" "$REPORT_PATH" | grep -v '^[[:space:]]*$' || true)"
  fi
  if [[ -z "$content" ]]; then
    echo "[validate-audit-report] Section is empty: $section"
    FAILURES=$((FAILURES + 1))
  fi
done

# --- Check 4: Requirements mapping has entries (table rows or list items) ---
req_entries="$(awk '/^## Requirements Mapping$/{found=1; next} found && /^## /{exit} found && /^\|[^-]/{print} found && /^- /{print}' "$REPORT_PATH" | wc -l | tr -d ' ')"
if [[ "$req_entries" -lt 1 ]]; then
  echo "[validate-audit-report] Requirements Mapping has no entries (expected table rows or list items)"
  FAILURES=$((FAILURES + 1))
fi

# --- Result ---
if [[ $FAILURES -gt 0 ]]; then
  echo "[validate-audit-report] FAILED ($FAILURES issues)"
  exit 1
fi

echo "[validate-audit-report] PASS ($REPORT_PATH)"
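The section-extraction awk program used in checks 3 and 4 can be exercised in isolation: print everything between one `##` heading and the next. The report content below is a made-up fixture:

```shell
#!/usr/bin/env bash
# Standalone demo of the heading-delimited extraction: `found` flips on at
# the target heading, the next "## " heading stops the scan, and blank
# lines are filtered afterwards.
report="$(mktemp)"
cat > "$report" <<'EOF'
## Evidence
- build log excerpt

## Risks
- none identified
EOF

section_body="$(awk '/^## Evidence$/{found=1; next} found && /^## /{exit} found{print}' "$report" \
  | grep -v '^[[:space:]]*$')"
echo "$section_body"
rm -f "$report"
```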
99
runners-conversion/periodVault/validate-ios-skipped-tests.sh
Executable file
@@ -0,0 +1,99 @@
#!/usr/bin/env bash
# validate-ios-skipped-tests.sh — Fail when iOS test results contain non-allowlisted skipped tests.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"

usage() {
  cat <<'EOF'
Usage:
  scripts/validate-ios-skipped-tests.sh <xcresult_path> [allowlist_file]

Arguments:
  xcresult_path    Path to .xcresult bundle generated by xcodebuild test
  allowlist_file   Optional allowlist of skipped test names (one per line, # comments allowed)
                   Default: audit/ios-skipped-tests-allowlist.txt
EOF
}

if [[ "${1:-}" == "--help" || "${1:-}" == "-h" ]]; then
  usage
  exit 0
fi

RESULT_PATH="${1:-}"
if [[ -z "$RESULT_PATH" ]]; then
  usage
  exit 1
fi

ALLOWLIST_PATH="${2:-$PROJECT_ROOT/audit/ios-skipped-tests-allowlist.txt}"

require_cmd() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "Missing required command: $1" >&2
    exit 1
  fi
}

require_cmd xcrun
require_cmd jq
require_cmd sort
require_cmd comm
require_cmd mktemp

if [[ ! -d "$RESULT_PATH" ]]; then
  echo "xcresult bundle not found: $RESULT_PATH" >&2
  exit 1
fi

TMP_JSON="$(mktemp)"
TMP_SKIPPED="$(mktemp)"
TMP_ALLOWLIST="$(mktemp)"
TMP_UNALLOWED="$(mktemp)"

cleanup() {
  rm -f "$TMP_JSON" "$TMP_SKIPPED" "$TMP_ALLOWLIST" "$TMP_UNALLOWED"
}
trap cleanup EXIT

if ! xcrun xcresulttool get test-results tests --path "$RESULT_PATH" --format json > "$TMP_JSON" 2>/dev/null; then
  echo "Failed to parse xcresult test results: $RESULT_PATH" >&2
  exit 1
fi

jq -r '
  .. | objects
  | select((.result == "Skipped") or (.status == "Skipped") or (.outcome == "Skipped") or (.testStatus == "Skipped"))
  | (.name // .identifier // empty)
' "$TMP_JSON" | sed '/^[[:space:]]*$/d' | sort -u > "$TMP_SKIPPED"

if [[ -f "$ALLOWLIST_PATH" ]]; then
  {
    grep -vE '^[[:space:]]*(#|$)' "$ALLOWLIST_PATH" || true
  } | sed 's/[[:space:]]*$//' | sed '/^[[:space:]]*$/d' | sort -u > "$TMP_ALLOWLIST"
else
  : > "$TMP_ALLOWLIST"
fi

if [[ ! -s "$TMP_SKIPPED" ]]; then
  echo "Skipped-test gate: PASS (no skipped iOS tests)"
  exit 0
fi

comm -23 "$TMP_SKIPPED" "$TMP_ALLOWLIST" > "$TMP_UNALLOWED"

if [[ -s "$TMP_UNALLOWED" ]]; then
  echo "Skipped-test gate: FAIL (non-allowlisted skipped iOS tests found)"
  sed 's/^/  - /' "$TMP_UNALLOWED"
  if [[ -s "$TMP_ALLOWLIST" ]]; then
    echo "Allowlist used: $ALLOWLIST_PATH"
  else
    echo "Allowlist is empty: $ALLOWLIST_PATH"
  fi
  exit 1
fi

echo "Skipped-test gate: PASS (all skipped iOS tests are allowlisted)"
exit 0
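The allowlist subtraction is the core of the gate: `comm -23` emits lines present in the first (sorted) file but absent from the second. A standalone sketch with invented test names:

```shell
#!/usr/bin/env bash
# Demo of the allowlist subtraction: both inputs must be sorted, and
# comm -23 prints skipped tests that are not allowlisted.
skipped="$(mktemp)"
allowlist="$(mktemp)"
printf 'testLogin\ntestSync\n' | sort -u > "$skipped"
printf 'testLogin\n' | sort -u > "$allowlist"

unallowed="$(comm -23 "$skipped" "$allowlist")"
echo "$unallowed"
rm -f "$skipped" "$allowlist"
```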
58
runners-conversion/periodVault/validate-sdd.sh
Executable file
@@ -0,0 +1,58 @@
#!/usr/bin/env bash
# validate-sdd.sh
# Ensures changed spec folders keep mandatory SDD artifacts.
set -euo pipefail

BASE_REF="${1:-origin/main}"

if ! git rev-parse --verify "$BASE_REF" >/dev/null 2>&1; then
  BASE_REF="HEAD~1"
fi

CHANGED_SPEC_FILES=()
while IFS= read -r line; do
  [[ -n "$line" ]] && CHANGED_SPEC_FILES+=("$line")
done < <(git diff --name-only "$BASE_REF"...HEAD -- 'specs/**')

if [[ ${#CHANGED_SPEC_FILES[@]} -eq 0 ]]; then
  echo "[validate-sdd] No changes under specs/."
  exit 0
fi

SPEC_DIRS=()
add_unique_dir() {
  local candidate="$1"
  local existing
  for existing in "${SPEC_DIRS[@]:-}"; do
    [[ "$existing" == "$candidate" ]] && return 0
  done
  SPEC_DIRS+=("$candidate")
}

for path in "${CHANGED_SPEC_FILES[@]}"; do
  if [[ "$path" =~ ^specs/[^/]+/ ]]; then
    spec_dir="$(echo "$path" | cut -d/ -f1-2)"
    add_unique_dir "$spec_dir"
  fi
done

if [[ ${#SPEC_DIRS[@]} -eq 0 ]]; then
  echo "[validate-sdd] PASS (no feature spec directories changed)"
  exit 0
fi

FAILED=0
for dir in "${SPEC_DIRS[@]}"; do
  for required in spec.md plan.md tasks.md allowed-files.txt; do
    if [[ ! -f "$dir/$required" ]]; then
      echo "[validate-sdd] Missing required file: $dir/$required"
      FAILED=1
    fi
  done
done

if [[ $FAILED -ne 0 ]]; then
  exit 1
fi

echo "[validate-sdd] PASS ($BASE_REF...HEAD)"
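The spec-directory extraction above keeps only the first two path components of each changed file. A standalone sketch with a hypothetical path:

```shell
#!/usr/bin/env bash
# Demo of the feature-folder extraction: match paths under specs/<name>/
# and keep the first two components as the spec directory.
path="specs/004-sample-feature/plan.md"   # hypothetical changed file
spec_dir=""
if [[ "$path" =~ ^specs/[^/]+/ ]]; then
  spec_dir="$(echo "$path" | cut -d/ -f1-2)"
fi
echo "$spec_dir"
```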
89
runners-conversion/periodVault/validate-tdd.sh
Executable file
@@ -0,0 +1,89 @@
#!/usr/bin/env bash
# validate-tdd.sh
# Guard that production code changes are accompanied by tests.
set -euo pipefail

BASE_REF="${1:-origin/main}"

if ! git rev-parse --verify "$BASE_REF" >/dev/null 2>&1; then
  BASE_REF="HEAD~1"
fi

CHANGED_FILES=()
while IFS= read -r line; do
  [[ -n "$line" ]] && CHANGED_FILES+=("$line")
done < <(git diff --name-only "$BASE_REF"...HEAD)

if [[ ${#CHANGED_FILES[@]} -eq 0 ]]; then
  echo "[validate-tdd] No changed files."
  exit 0
fi

is_production_file() {
  local f="$1"
  [[ "$f" == shared/src/commonMain/* ]] && return 0
  [[ "$f" == androidApp/src/main/* ]] && return 0
  [[ "$f" == iosApp/iosApp/* ]] && [[ "$f" != iosApp/iosAppUITests/* ]] && [[ "$f" != iosApp/iosAppTests/* ]] && return 0
  return 1
}

is_test_file() {
  local f="$1"
  [[ "$f" == shared/src/commonTest/* ]] && return 0
  [[ "$f" == shared/src/jvmTest/* ]] && return 0
  [[ "$f" == androidApp/src/androidTest/* ]] && return 0
  [[ "$f" == androidApp/src/test/* ]] && return 0
  [[ "$f" == iosApp/iosAppUITests/* ]] && return 0
  [[ "$f" == iosApp/iosAppTests/* ]] && return 0
  return 1
}

PROD_COUNT=0
TEST_COUNT=0
for file in "${CHANGED_FILES[@]}"; do
  if is_production_file "$file"; then
    PROD_COUNT=$((PROD_COUNT + 1))
  fi
  if is_test_file "$file"; then
    TEST_COUNT=$((TEST_COUNT + 1))
  fi
done

if [[ "$PROD_COUNT" -gt 0 && "$TEST_COUNT" -eq 0 ]]; then
  echo "[validate-tdd] Failing: production code changed without matching test updates."
  echo "[validate-tdd] Production files changed: $PROD_COUNT"
  exit 1
fi

CHANGED_TEST_FILES=()
TEST_PATH_REGEX='^(shared/src/(commonTest|jvmTest)/|androidApp/src/(androidTest|test)/|iosApp/iosApp(UI)?Tests/)'
while IFS= read -r line; do
  [[ -n "$line" ]] && CHANGED_TEST_FILES+=("$line")
done < <(
  if command -v rg >/dev/null 2>&1; then
    printf '%s\n' "${CHANGED_FILES[@]}" | rg "$TEST_PATH_REGEX" || true
  else
    printf '%s\n' "${CHANGED_FILES[@]}" | grep -E "$TEST_PATH_REGEX" || true
  fi
)
for test_file in "${CHANGED_TEST_FILES[@]:-}"; do
  if [[ -f "$test_file" ]]; then
    if command -v rg >/dev/null 2>&1; then
      if rg -q 'catch[[:space:]]*\([[:space:]]*AssertionError|XCTExpectFailure|@Ignore|@Disabled' "$test_file"; then
        echo "[validate-tdd] Failing: potential weak assertion/skip anti-pattern in $test_file"
        exit 1
      fi
    else
      if grep -Eq 'catch[[:space:]]*\([[:space:]]*AssertionError|XCTExpectFailure|@Ignore|@Disabled' "$test_file"; then
        echo "[validate-tdd] Failing: potential weak assertion/skip anti-pattern in $test_file"
        exit 1
      fi
    fi
  fi
done

if [[ "${FORCE_AUDIT_GATES:-0}" == "1" ]]; then
  echo "[validate-tdd] FORCE_AUDIT_GATES enabled."
fi

echo "[validate-tdd] PASS ($BASE_REF...HEAD)"
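The path classification above relies on bash's glob-style `[[ == pattern ]]` matching. A trimmed-down sketch (a subset of the real predicates, with hypothetical paths):

```shell
#!/usr/bin/env bash
# Demo of glob-style path classification: unquoted right-hand sides of
# [[ == ]] are matched as patterns, so a trailing /* matches any file
# under that directory.
is_test_file() {
  local f="$1"
  [[ "$f" == androidApp/src/test/* ]] && return 0
  [[ "$f" == iosApp/iosAppTests/* ]] && return 0
  return 1
}

for f in androidApp/src/test/ExampleTest.kt androidApp/src/main/Main.kt; do
  if is_test_file "$f"; then
    echo "test: $f"
  else
    echo "other: $f"
  fi
done
```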
358
runners-conversion/periodVault/validate-test-quality.sh
Executable file
@@ -0,0 +1,358 @@
#!/usr/bin/env bash
# validate-test-quality.sh — Enforce anti-pattern regression thresholds for UI tests.
#
# Usage:
#   scripts/validate-test-quality.sh [baseline_file]
#   scripts/validate-test-quality.sh --help
#
# Behavior:
#   - Loads metric baselines from JSON.
#   - Counts pattern matches in configured roots via ripgrep.
#   - Fails if any metric exceeds baseline + allowed_growth.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
DEFAULT_BASELINE_FILE="$PROJECT_ROOT/audit/test-quality-baseline.json"
BASELINE_FILE="${1:-$DEFAULT_BASELINE_FILE}"

usage() {
  cat <<'EOF'
validate-test-quality.sh: enforce UI test anti-pattern regression thresholds.

Usage:
  scripts/validate-test-quality.sh [baseline_file]
  scripts/validate-test-quality.sh --help
EOF
}

if [[ "${1:-}" == "--help" || "${1:-}" == "-h" ]]; then
  usage
  exit 0
fi

RED='\033[0;31m'
GREEN='\033[0;32m'
NC='\033[0m'

FAILURES=0

fail() {
  echo -e "${RED}FAIL:${NC} $1"
  FAILURES=$((FAILURES + 1))
}

pass() {
  echo -e "${GREEN}PASS:${NC} $1"
}

require_cmd() {
  if ! command -v "$1" >/dev/null 2>&1; then
    fail "Missing required command: $1"
  fi
}

relative_path() {
  local path="$1"
  if [[ "$path" == "$PROJECT_ROOT/"* ]]; then
    echo "${path#$PROJECT_ROOT/}"
  else
    echo "$path"
  fi
}

count_matches() {
  local pattern="$1"
  local root="$2"
  local glob="$3"
  local output=""
  local status=0

  set +e
  output="$(rg --count-matches --no-messages --pcre2 -N -g "$glob" "$pattern" "$root" 2>/dev/null)"
  status=$?
  set -e

  # rg exits 1 when nothing matched; treat that as a zero count.
  if [[ "$status" -eq 1 ]]; then
    echo "0"
    return 0
  fi

  if [[ "$status" -ne 0 ]]; then
    return "$status"
  fi

  if [[ -z "$output" ]]; then
    echo "0"
    return 0
  fi

  echo "$output" | awk -F: '{sum += $NF} END {print sum + 0}'
}

count_matches_multiline() {
  local pattern="$1"
  local root="$2"
  local glob="$3"
  local output=""
  local status=0

  set +e
  output="$(rg --count-matches --no-messages --pcre2 -U -N -g "$glob" "$pattern" "$root" 2>/dev/null)"
  status=$?
  set -e

  if [[ "$status" -eq 1 ]]; then
    echo "0"
    return 0
  fi

  if [[ "$status" -ne 0 ]]; then
    return "$status"
  fi

  if [[ -z "$output" ]]; then
    echo "0"
    return 0
  fi

  echo "$output" | awk -F: '{sum += $NF} END {print sum + 0}'
}

list_metric_files() {
  local root="$1"
  local glob="$2"
  rg --files "$root" -g "$glob" 2>/dev/null || true
}

count_swift_test_body_pattern_matches() {
  local pattern="$1"
  local root="$2"
  local glob="$3"
  local files=()

  while IFS= read -r file_path; do
    [[ -n "$file_path" ]] && files+=("$file_path")
  done < <(list_metric_files "$root" "$glob")
  if [[ "${#files[@]}" -eq 0 ]]; then
    echo "0"
    return 0
  fi

  awk -v pattern="$pattern" '
    function update_depth(line, i, c) {
      for (i = 1; i <= length(line); i++) {
        c = substr(line, i, 1)
        if (c == "{") depth++
        else if (c == "}") depth--
      }
    }

    /^[[:space:]]*func[[:space:]]+test[[:alnum:]_]+[[:space:]]*\(.*\)[[:space:]]*(throws)?[[:space:]]*\{/ {
      in_test = 1
      depth = 0
      update_depth($0)
      if ($0 ~ pattern) count++
      if (depth <= 0) {
        in_test = 0
        depth = 0
      }
      next
    }

    {
      if (!in_test) next

      if ($0 ~ pattern) count++
      update_depth($0)
      if (depth <= 0) {
        in_test = 0
        depth = 0
      }
    }

    END { print count + 0 }
  ' "${files[@]}"
}

count_swift_empty_test_bodies() {
  local root="$1"
  local glob="$2"
  local files=()

  while IFS= read -r file_path; do
    [[ -n "$file_path" ]] && files+=("$file_path")
  done < <(list_metric_files "$root" "$glob")
  if [[ "${#files[@]}" -eq 0 ]]; then
    echo "0"
    return 0
  fi

  awk '
    function update_depth(line, i, c) {
      for (i = 1; i <= length(line); i++) {
        c = substr(line, i, 1)
        if (c == "{") depth++
        else if (c == "}") depth--
      }
    }

    function test_body_has_code(body, cleaned) {
      cleaned = body
      gsub(/\/\/.*/, "", cleaned)
      gsub(/[ \t\r\n{}]/, "", cleaned)
      return cleaned != ""
    }

    /^[[:space:]]*func[[:space:]]+test[[:alnum:]_]+[[:space:]]*\(.*\)[[:space:]]*(throws)?[[:space:]]*\{/ {
      in_test = 1
      depth = 0
      body = ""
      update_depth($0)
      if (depth <= 0) {
        empty_count++
        in_test = 0
        depth = 0
        body = ""
      }
      next
    }

    {
      if (!in_test) next

      body = body $0 "\n"
      update_depth($0)

      if (depth <= 0) {
        if (!test_body_has_code(body)) {
          empty_count++
        }
        in_test = 0
        depth = 0
        body = ""
      }
    }

    END { print empty_count + 0 }
  ' "${files[@]}"
}

count_metric() {
  local mode="$1"
  local pattern="$2"
  local root="$3"
  local glob="$4"

  case "$mode" in
    rg)
      count_matches "$pattern" "$root" "$glob"
      ;;
    rg_multiline)
      count_matches_multiline "$pattern" "$root" "$glob"
      ;;
    swift_test_body_pattern)
      count_swift_test_body_pattern_matches "$pattern" "$root" "$glob"
      ;;
    swift_empty_test_bodies)
      count_swift_empty_test_bodies "$root" "$glob"
      ;;
    *)
      echo "__INVALID_MODE__:$mode"
      return 0
      ;;
  esac
}

require_cmd jq
require_cmd rg
require_cmd awk

echo "=== Test Quality Gate ==="
echo "Baseline: $(relative_path "$BASELINE_FILE")"

if [[ ! -f "$BASELINE_FILE" ]]; then
  fail "Baseline file not found: $(relative_path "$BASELINE_FILE")"
  echo ""
  echo -e "${RED}Test quality gate failed with $FAILURES issue(s).${NC}"
  exit 1
fi

if jq -e '.metrics | type == "array" and length > 0' "$BASELINE_FILE" >/dev/null; then
  pass "Baseline includes metric definitions"
else
  fail "Baseline file has no metrics"
fi

while IFS= read -r metric; do
  [[ -z "$metric" ]] && continue

  metric_id="$(jq -r '.id // empty' <<<"$metric")"
  description="$(jq -r '.description // empty' <<<"$metric")"
  mode="$(jq -r '.mode // "rg"' <<<"$metric")"
  root_rel="$(jq -r '.root // empty' <<<"$metric")"
  glob="$(jq -r '.glob // empty' <<<"$metric")"
  pattern="$(jq -r '.pattern // ""' <<<"$metric")"
  baseline="$(jq -r '.baseline // empty' <<<"$metric")"
  allowed_growth="$(jq -r '.allowed_growth // empty' <<<"$metric")"

  if [[ -z "$metric_id" || -z "$root_rel" || -z "$glob" || -z "$baseline" || -z "$allowed_growth" ]]; then
    fail "Metric entry is missing required fields: $metric"
    continue
  fi

  if [[ "$mode" != "swift_empty_test_bodies" && -z "$pattern" ]]; then
    fail "Metric '$metric_id' requires non-empty pattern for mode '$mode'"
    continue
  fi

  if ! [[ "$baseline" =~ ^[0-9]+$ ]]; then
    fail "Metric '$metric_id' has non-numeric baseline: $baseline"
    continue
  fi

  if ! [[ "$allowed_growth" =~ ^[0-9]+$ ]]; then
    fail "Metric '$metric_id' has non-numeric allowed_growth: $allowed_growth"
    continue
  fi

  if [[ "$root_rel" == /* ]]; then
    root_path="$root_rel"
  else
    root_path="$PROJECT_ROOT/$root_rel"
  fi

  if [[ ! -d "$root_path" ]]; then
    fail "Metric '$metric_id' root directory not found: $(relative_path "$root_path")"
    continue
  fi

  current_count="0"
  if ! current_count="$(count_metric "$mode" "$pattern" "$root_path" "$glob")"; then
    fail "Metric '$metric_id' failed while counting matches"
    continue
  fi

  if [[ "$current_count" == __INVALID_MODE__:* ]]; then
    fail "Metric '$metric_id' uses unsupported mode '$mode'"
    continue
  fi

  max_allowed=$((baseline + allowed_growth))
  delta=$((current_count - baseline))

  if [[ "$current_count" -le "$max_allowed" ]]; then
    pass "$metric_id ($description): current=$current_count baseline=$baseline allowed_growth=$allowed_growth threshold=$max_allowed delta=$delta"
  else
    fail "$metric_id ($description): current=$current_count exceeds threshold=$max_allowed (baseline=$baseline allowed_growth=$allowed_growth delta=$delta)"
  fi
done < <(jq -c '.metrics[]' "$BASELINE_FILE")

echo ""
if [[ "$FAILURES" -eq 0 ]]; then
  echo -e "${GREEN}Test quality gate passed.${NC}"
  exit 0
fi

echo -e "${RED}Test quality gate failed with $FAILURES issue(s).${NC}"
exit 1
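The pass/fail decision for each metric is plain integer arithmetic: a metric passes while `current <= baseline + allowed_growth`. A standalone sketch with illustrative values:

```shell
#!/usr/bin/env bash
# The gate's threshold check in isolation. With baseline=1 and
# allowed_growth=0, a current count of 2 must fail.
baseline=1
allowed_growth=0
current_count=2

max_allowed=$((baseline + allowed_growth))
delta=$((current_count - baseline))

if [[ "$current_count" -le "$max_allowed" ]]; then
  verdict="PASS"
else
  verdict="FAIL"
fi
echo "$verdict: current=$current_count threshold=$max_allowed delta=$delta"
```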
85
runners-conversion/periodVault/verify.sh
Executable file
@@ -0,0 +1,85 @@
#!/usr/bin/env bash
set -euo pipefail

GRADLEW="./gradlew"

if [[ ! -x "$GRADLEW" ]]; then
  echo "Missing or non-executable ./gradlew. Did you generate the Gradle wrapper?"
  exit 1
fi

# Get the task list once (quiet output to reduce noise)
ALL_TASKS="$($GRADLEW -q tasks --all || true)"

if [[ -z "$ALL_TASKS" ]]; then
  echo "Could not read Gradle tasks. Exiting."
  exit 1
fi

# Prefer KMP aggregate task when available
if echo "$ALL_TASKS" | grep -qE '^allTests[[:space:]]+-'; then
  TASKS="allTests"
else
  # Fallback: collect common test tasks, excluding device-dependent instrumentation tests
  TASKS="$(
    echo "$ALL_TASKS" \
      | awk '{print $1}' \
      | grep -E '(^test$|Test$|^check$)' \
      | grep -v 'AndroidTest' \
      | grep -v 'connectedAndroidTest' \
      | grep -v 'deviceAndroidTest' \
      | sort -u \
      | tr '\n' ' '
  )"
fi

# Strip spaces and validate
if [[ -z "${TASKS// /}" ]]; then
  echo "No test tasks found. Exiting."
  exit 1
fi

echo "Running: $GRADLEW $TASKS"
# Run all tasks in one go (faster, simpler)
$GRADLEW $TASKS

echo "==================="
echo "ALL TESTS PASSED!"
echo "==================="

# --- Commit, push, and create PR if on a feature branch ---

# Skip commit/push/PR when invoked from a git hook (prevents infinite loop)
if [[ "${GIT_PUSH_IN_PROGRESS:-}" == "1" ]] || [[ -n "${GIT_DIR:-}" && "${GIT_DIR}" != ".git" ]]; then
  exit 0
fi

BRANCH="$(git rev-parse --abbrev-ref HEAD)"
MAIN_BRANCH="main"

if [[ "$BRANCH" == "$MAIN_BRANCH" ]]; then
  echo "On $MAIN_BRANCH — skipping commit/push/PR."
  exit 0
fi

# Stage and commit all changes, including untracked files (git add -A)
if git diff --quiet && git diff --cached --quiet && [[ -z "$(git ls-files --others --exclude-standard)" ]]; then
  echo "No changes to commit."
else
  git add -A
  COMMIT_MSG="feat: $(echo "$BRANCH" | sed 's/^[0-9]*-//' | tr '-' ' ')"
  git commit -m "$COMMIT_MSG" || echo "Nothing to commit."
fi

# Push branch to remote (skip hooks to avoid re-triggering verify.sh)
GIT_PUSH_IN_PROGRESS=1 git push --no-verify -u origin "$BRANCH"

# Create PR if one doesn't already exist for this branch
if gh pr view "$BRANCH" --json state >/dev/null 2>&1; then
  PR_URL="$(gh pr view "$BRANCH" --json url -q '.url')"
  echo "PR already exists: $PR_URL"
else
  TITLE="$(echo "$BRANCH" | sed 's/^[0-9]*-//' | tr '-' ' ' | awk '{for(i=1;i<=NF;i++) $i=toupper(substr($i,1,1)) substr($i,2)}1')"
  PR_URL="$(gh pr create --title "$TITLE" --body "Automated PR from verify.sh" --base "$MAIN_BRANCH" --head "$BRANCH")"
  echo "PR created: $PR_URL"
fi
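The script above guards against re-entrancy: when a git hook invokes it during a push, the `GIT_PUSH_IN_PROGRESS` / `GIT_DIR` check makes it exit before committing again. A minimal sketch of that guard, factored into a function for illustration (the function name and the standalone hook are hypothetical, not part of this repo):

```shell
#!/usr/bin/env bash
# Hypothetical pre-push hook logic mirroring verify.sh's re-entrancy guard.
# GIT_PUSH_IN_PROGRESS=1 marks a push started by verify.sh itself;
# a non-default GIT_DIR indicates git invoked us from hook context.
set -euo pipefail

should_skip_automation() {
  if [[ "${GIT_PUSH_IN_PROGRESS:-}" == "1" ]]; then
    return 0   # push already in flight — do not recurse
  elif [[ -n "${GIT_DIR:-}" && "${GIT_DIR}" != ".git" ]]; then
    return 0   # running inside a git hook with a custom GIT_DIR
  fi
  return 1     # normal interactive invocation — automation may run
}

# Demonstration of the guard:
GIT_PUSH_IN_PROGRESS=1
should_skip_automation && echo "automation skipped"   # prints "automation skipped"
```

A real hook would, in the "skip" case, simply exit 0; in the normal case it could run `GIT_PUSH_IN_PROGRESS=1 ./runners-conversion/periodVault/verify.sh` so the nested push cannot loop.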
@@ -55,6 +55,11 @@
#                     (starts at login, no sudo needed).
#                     Ignored for docker runners.
#
# container_options — Extra Docker flags for act_runner job containers.
#                     Passed to the container.options field in act_runner config.
#                     e.g. "--device=/dev/kvm" for KVM passthrough.
#                     Empty = no extra flags. Ignored for native runners.
#
# STARTER ENTRIES (uncomment and edit):

#[unraid-runner]
@@ -65,7 +65,7 @@ get_env_val() {
# Prompt function
# ---------------------------------------------------------------------------
# Base prompt count (56 fixed + 3 TLS conditional slots — repo/DB prompts added dynamically)
TOTAL_PROMPTS=59
TOTAL_PROMPTS=61
CURRENT_PROMPT=0
LAST_SECTION=""
||||
@@ -374,11 +374,13 @@ prompt_var "CADDY_DATA_PATH" "Absolute path on host for Caddy data"
|
||||
# Conditional TLS prompts
|
||||
if [[ "$COLLECTED_TLS_MODE" == "cloudflare" ]]; then
|
||||
prompt_var "CLOUDFLARE_API_TOKEN" "Cloudflare API token (Zone:DNS:Edit)" nonempty "" "TLS / REVERSE PROXY"
|
||||
prompt_var "PUBLIC_DNS_TARGET_IP" "Public DNS target IP for GITEA_DOMAIN" ip "" "TLS / REVERSE PROXY"
|
||||
prompt_var "PHASE8_ALLOW_PRIVATE_DNS_TARGET" "Allow private RFC1918 DNS target (LAN-only/split-DNS)" bool "false" "TLS / REVERSE PROXY"
|
||||
# Skip cert path prompts but still count them for progress
|
||||
CURRENT_PROMPT=$((CURRENT_PROMPT + 2))
|
||||
else
|
||||
# Skip cloudflare token prompt but count it
|
||||
CURRENT_PROMPT=$((CURRENT_PROMPT + 1))
|
||||
CURRENT_PROMPT=$((CURRENT_PROMPT + 3))
|
||||
prompt_var "SSL_CERT_PATH" "Absolute path to SSL cert" path "" "TLS / REVERSE PROXY"
|
||||
prompt_var "SSL_KEY_PATH" "Absolute path to SSL key" path "" "TLS / REVERSE PROXY"
|
||||
fi
|
||||
|
||||
@@ -3,6 +3,7 @@
## Pre-cutover
- [ ] `nginx -T` snapshot captured (`output/nginx-full.conf`)
- [ ] Generated Caddyfile reviewed
- [ ] `Caddyfile.recommended` reviewed/adapted for your domains
- [ ] `conversion-warnings.txt` reviewed and resolved for canary site
- [ ] `validate_caddy.sh` passes
- [ ] DNS TTL lowered for canary domain
149
setup/nginx-to-caddy/Caddyfile.recommended
Normal file
@@ -0,0 +1,149 @@
# Recommended Caddy baseline for the current homelab reverse-proxy estate.
# Source upstreams were derived from setup/nginx-to-caddy/oldconfig/*.conf.
#
# If your public suffix changes (for example sintheus.com -> privacyindesign.com),
# update the hostnames below before deployment.
{
    # DNS-01 certificates through Cloudflare.
    # Requires CF_API_TOKEN in Caddy runtime environment.
    acme_dns cloudflare {env.CF_API_TOKEN}

    # Trust private-range proxy hops in LAN environments.
    servers {
        trusted_proxies static private_ranges
        protocols h1 h2 h3
    }
}

(common_security) {
    encode zstd gzip

    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        Referrer-Policy "strict-origin-when-cross-origin"
        -Server
    }
}

(proxy_headers) {
    # Keep Nginx parity for backends that consume Host and X-Real-IP.
    header_up Host {host}
    header_up X-Real-IP {remote_host}
}

(proxy_streaming) {
    import proxy_headers
    # Flush immediately for streaming/log-tail/websocket-heavy UIs.
    flush_interval -1
}

ai.sintheus.com {
    import common_security

    request_body {
        max_size 50MB
    }

    reverse_proxy http://192.168.1.82:8181 {
        import proxy_streaming
    }
}

getter.sintheus.com {
    import common_security

    reverse_proxy http://192.168.1.3:8181 {
        import proxy_headers
    }
}

portainer.sintheus.com {
    import common_security

    reverse_proxy https://192.168.1.181:9443 {
        import proxy_headers
        transport http {
            tls_insecure_skip_verify
        }
    }
}

photos.sintheus.com {
    import common_security

    request_body {
        max_size 50GB
    }

    reverse_proxy http://192.168.1.222:2283 {
        import proxy_headers
    }
}

fin.sintheus.com {
    import common_security

    reverse_proxy http://192.168.1.233:8096 {
        import proxy_streaming
    }
}

disk.sintheus.com {
    import common_security

    request_body {
        max_size 20GB
    }

    reverse_proxy http://192.168.1.52:80 {
        import proxy_headers
    }
}

pi.sintheus.com {
    import common_security

    reverse_proxy http://192.168.1.4:80 {
        import proxy_headers
    }
}

plex.sintheus.com {
    import common_security

    reverse_proxy http://192.168.1.111:32400 {
        import proxy_streaming
    }
}

sync.sintheus.com {
    import common_security

    reverse_proxy http://192.168.1.119:8384 {
        import proxy_headers
    }
}

syno.sintheus.com {
    import common_security

    reverse_proxy https://100.108.182.16:5001 {
        import proxy_headers
        transport http {
            tls_insecure_skip_verify
        }
    }
}

tower.sintheus.com {
    import common_security

    reverse_proxy https://192.168.1.82:443 {
        import proxy_headers
        transport http {
            tls_insecure_skip_verify
        }
    }
}
@@ -13,6 +13,8 @@ This module is intentionally conservative:
  - SSH into a host and collect `nginx -T`, `/etc/nginx` tarball, and a quick inventory summary.
- `nginx_to_caddy.sh`
  - Convert basic Nginx server blocks into a generated Caddyfile.
- `Caddyfile.recommended`
  - Hardened baseline config (security headers, sensible body limits, streaming behavior).
- `validate_caddy.sh`
  - Run `caddy fmt`, `caddy adapt`, and `caddy validate` on the generated Caddyfile.

@@ -24,6 +26,7 @@ cd setup/nginx-to-caddy
./extract_nginx_inventory.sh --host=<host> --user=<user> --port=22 --yes
./nginx_to_caddy.sh --input=./output/nginx-full.conf --output=./output/Caddyfile.generated --tls-mode=cloudflare --yes
./validate_caddy.sh --config=./output/Caddyfile.generated --docker
./validate_caddy.sh --config=./Caddyfile.recommended --docker
```

## Conversion Scope

@@ -51,7 +51,23 @@ If local `caddy` is installed:
./validate_caddy.sh --config=./output/Caddyfile.generated
```

## 4) Canary migration (recommended)
## 4) Use the recommended baseline

This toolkit now includes a hardened baseline at:
- `setup/nginx-to-caddy/Caddyfile.recommended`

Use it when you want a production-style config instead of a raw 1:1 conversion.
You can either:
1. use it directly (if hostnames/upstreams already match your environment), or
2. copy its common snippets and service patterns into your live Caddyfile.

Validate it before deployment:

```bash
./validate_caddy.sh --config=./Caddyfile.recommended --docker
```

## 5) Canary migration (recommended)

Migrate one low-risk subdomain first:
1. Copy only one site block from generated Caddyfile to your live Caddy config.
@@ -62,7 +78,7 @@ Migrate one low-risk subdomain first:
   - API/websocket calls work
4. Keep Nginx serving all other subdomains.

## 5) Full migration after canary success
## 6) Full migration after canary success

When the canary is stable:
1. Add remaining site blocks.
@@ -70,14 +86,14 @@ When the canary is stable:
3. Keep Nginx config snapshots for rollback.
4. Decommission Nginx only after monitoring period.

## 6) Rollback plan
## 7) Rollback plan

If a site fails after cutover:
1. Repoint affected DNS entry back to Nginx endpoint.
2. Restore previous Nginx server block.
3. Investigate conversion warnings for that block.

## 7) Domain/TLS note for your current setup
## 8) Domain/TLS note for your current setup

You confirmed the domain is `privacyindesign.com`.

@@ -86,7 +102,7 @@ If you use `TLS_MODE=cloudflare` with Caddy, ensure:
- Cloudflare token has DNS edit on the same zone.
- DNS records point to the Caddy ingress path you intend (direct or via edge proxy).

## 8) Suggested next step for Phase 8
## 9) Suggested next step for Phase 8

Given your current repo config:
- keep Phase 8 Caddy focused on `source.privacyindesign.com`

49
setup/nginx-to-caddy/toggle_dns.sh
Executable file
@@ -0,0 +1,49 @@
#!/usr/bin/env bash
set -euo pipefail

# Toggle DNS between Pi-hole and Cloudflare on all active network services.
# Usage: ./toggle_dns.sh
# Requires sudo for networksetup.

PIHOLE="192.168.1.4"
CLOUDFLARE="1.1.1.1"

# Get all hardware network services (Wi-Fi, Ethernet, Thunderbolt, USB, etc.)
services=()
while IFS= read -r line; do
  [[ "$line" == *"*"* ]] && continue  # skip disabled services
  services+=("$line")
done < <(networksetup -listallnetworkservices 2>/dev/null | tail -n +2)

if [[ ${#services[@]} -eq 0 ]]; then
  echo "No network services found"
  exit 1
fi

# Detect current mode from the first service that has a DNS set
current_dns=""
for svc in "${services[@]}"; do
  dns=$(networksetup -getdnsservers "$svc" 2>/dev/null | head -1)
  if [[ "$dns" != *"aren't any"* ]] && [[ -n "$dns" ]]; then
    current_dns="$dns"
    break
  fi
done

if [[ "$current_dns" == "$CLOUDFLARE" ]]; then
  target="$PIHOLE"
  label="Pi-hole"
else
  target="$CLOUDFLARE"
  label="Cloudflare"
fi

echo "Switching all services to ${label} (${target})..."
for svc in "${services[@]}"; do
  sudo networksetup -setdnsservers "$svc" "$target"
  echo "  ${svc} → ${target}"
done

sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder 2>/dev/null || true
echo "DNS set to ${label} (${target})"
@@ -10,7 +10,7 @@ FORMAT_FILE=true
USE_DOCKER=false
DO_ADAPT=true
DO_VALIDATE=true
CADDY_IMAGE="caddy:2"
CADDY_IMAGE="slothcroissant/caddy-cloudflaredns:latest"

usage() {
  cat <<USAGE
@@ -24,7 +24,7 @@ Options:
  --no-adapt     Skip caddy adapt
  --no-validate  Skip caddy validate
  --docker       Use Docker image instead of local caddy binary
  --image=NAME   Docker image when --docker is used (default: caddy:2)
  --image=NAME   Docker image when --docker is used (default: slothcroissant/caddy-cloudflaredns:latest)
  --help, -h     Show help
USAGE
}
@@ -47,26 +47,43 @@ if [[ ! -f "$CONFIG_FILE" ]]; then
  exit 1
fi

CONFIG_FILE="$(cd "$(dirname "$CONFIG_FILE")" && pwd)/$(basename "$CONFIG_FILE")"

docker_env_args=()

if [[ "$USE_DOCKER" == "true" ]]; then
  require_cmd docker
  if [[ -n "${CF_API_TOKEN:-}" ]]; then
    docker_env_args+=( -e "CF_API_TOKEN=${CF_API_TOKEN}" )
  elif [[ -n "${CLOUDFLARE_API_TOKEN:-}" ]]; then
    docker_env_args+=( -e "CF_API_TOKEN=${CLOUDFLARE_API_TOKEN}" )
  fi

  run_docker_caddy() {
    if [[ "${#docker_env_args[@]}" -gt 0 ]]; then
      docker run --rm "${docker_env_args[@]}" "$@"
    else
      docker run --rm "$@"
    fi
  }

  if [[ "$FORMAT_FILE" == "true" ]]; then
    log_info "Formatting Caddyfile with Docker..."
    docker run --rm \
    run_docker_caddy \
      -v "$CONFIG_FILE:/etc/caddy/Caddyfile" \
      "$CADDY_IMAGE" caddy fmt --overwrite /etc/caddy/Caddyfile
  fi

  if [[ "$DO_ADAPT" == "true" ]]; then
    log_info "Adapting Caddyfile (Docker)..."
    docker run --rm \
    run_docker_caddy \
      -v "$CONFIG_FILE:/etc/caddy/Caddyfile:ro" \
      "$CADDY_IMAGE" caddy adapt --config /etc/caddy/Caddyfile --adapter caddyfile >/dev/null
  fi

  if [[ "$DO_VALIDATE" == "true" ]]; then
    log_info "Validating Caddyfile (Docker)..."
    docker run --rm \
    run_docker_caddy \
      -v "$CONFIG_FILE:/etc/caddy/Caddyfile:ro" \
      "$CADDY_IMAGE" caddy validate --config /etc/caddy/Caddyfile --adapter caddyfile
  fi

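The length check inside `run_docker_caddy` is not just style: on Bash 4.3 and older, expanding an empty array with `"${arr[@]}"` under `set -u` aborts with an "unbound variable" error, so passing the env args unconditionally would break the script on those shells. A minimal sketch of the pattern (the function name here is illustrative, not the script's):

```shell
#!/usr/bin/env bash
# Demonstrates branching on array length before expansion, as
# validate_caddy.sh does with docker_env_args, so the script stays
# safe under `set -u` on Bash versions before 4.4.
set -euo pipefail

run_with_optional_args() {
  local -a extra=( "$@" )  # stand-in for docker_env_args
  if [[ "${#extra[@]}" -gt 0 ]]; then
    echo "cmd ${extra[*]}"
  else
    echo "cmd"
  fi
}

run_with_optional_args                    # prints "cmd"
run_with_optional_args -e CF_API_TOKEN=x  # prints "cmd -e CF_API_TOKEN=x"
```

On Bash 4.4+ the simpler `"${docker_env_args[@]}"` would be safe even when empty, but the guarded form runs everywhere, including macOS's bundled Bash 3.2.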
@@ -5,7 +5,7 @@ SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
# shellcheck source=./lib.sh
source "$SCRIPT_DIR/lib.sh"

TIMEZONE="America/New_York"
TIMEZONE="America/Chicago"
SSH_PORT="22"
AUTO_YES=false
ENABLE_UFW=true
@@ -17,14 +17,14 @@ Usage: $(basename "$0") [options]
Prepare a brand-new Raspberry Pi OS host for monitoring stack workloads.

Options:
  --timezone=ZONE   Set system timezone (default: America/New_York)
  --timezone=ZONE   Set system timezone (default: America/Chicago)
  --ssh-port=PORT   SSH port allowed by firewall (default: 22)
  --skip-firewall   Skip UFW configuration
  --yes, -y         Non-interactive; skip confirmation prompts
  --help, -h        Show help

Example:
  $(basename "$0") --timezone=America/New_York --yes
  $(basename "$0") --timezone=America/Chicago --yes
USAGE
}

@@ -39,6 +39,19 @@ for arg in "$@"; do
  esac
done

# Validate --ssh-port (must be 1-65535) before we risk enabling UFW with a bad rule
if ! [[ "$SSH_PORT" =~ ^[0-9]+$ ]] || [[ "$SSH_PORT" -lt 1 ]] || [[ "$SSH_PORT" -gt 65535 ]]; then
  log_error "--ssh-port must be a number between 1 and 65535 (got: '$SSH_PORT')"
  exit 1
fi

# Validate --timezone against timedatectl's known list
if ! timedatectl list-timezones 2>/dev/null | grep -qx "$TIMEZONE"; then
  log_error "Unknown timezone: '$TIMEZONE'"
  log_error "Run 'timedatectl list-timezones' for valid options"
  exit 1
fi

require_cmd sudo apt systemctl timedatectl curl

if ! confirm_action "This will install/update OS packages and Docker on this Pi. Continue?" "$AUTO_YES"; then
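The new port check combines a digits-only regex with a range test; the regex matters because `[[ "$SSH_PORT" -lt 1 ]]` alone would error (or misbehave) on non-numeric input. The same check can be factored into a reusable helper, sketched here (the `validate_port` name is hypothetical, not part of the repo):

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the bootstrap script's --ssh-port check:
# a valid port is all digits and within 1-65535.
set -euo pipefail

validate_port() {
  local port="$1"
  [[ "$port" =~ ^[0-9]+$ ]] || return 1  # reject "abc", "22x", "" first
  (( port >= 1 && port <= 65535 ))       # then range-check numerically
}

validate_port 22    && echo "22 ok"      # prints "22 ok"
validate_port 70000 || echo "70000 bad"  # prints "70000 bad"
```

Doing the regex test before the arithmetic comparison is the key ordering: Bash arithmetic on a non-numeric string would otherwise produce an error rather than a clean `false`.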
@@ -85,6 +98,10 @@ sudo systemctl enable --now docker

log_info "Configuring Docker daemon defaults..."
sudo mkdir -p /etc/docker
if [[ -f /etc/docker/daemon.json ]]; then
  sudo cp /etc/docker/daemon.json /etc/docker/daemon.json.bak
  log_info "Backed up existing daemon.json to daemon.json.bak"
fi
sudo tee /etc/docker/daemon.json >/dev/null <<'JSON'
{
  "log-driver": "json-file",
@@ -119,5 +136,5 @@ fi
log_success "Bootstrap complete"
log_info "Recommended next steps:"
log_info "1) Re-login to apply docker group membership"
log_info "2) Run setup/pi-monitoring/mount_ssd.sh"
log_info "2) (Optional) Run setup/pi-monitoring/mount_ssd.sh if you have an SSD"
log_info "3) Copy stack.env.example to stack.env and run deploy_stack.sh"

@@ -40,7 +40,9 @@ services:
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--storage.tsdb.retention.time=30d'
      - '--storage.tsdb.retention.time=${PROMETHEUS_RETENTION_TIME:-15d}'
      - '--storage.tsdb.retention.size=${PROMETHEUS_RETENTION_SIZE:-2GB}'
      - '--storage.tsdb.wal-compression'
      - '--web.enable-lifecycle'
    ports:
      - "${BIND_IP:-0.0.0.0}:${PROMETHEUS_PORT:-9090}:9090"

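The `${PROMETHEUS_RETENTION_TIME:-15d}` syntax in the compose file uses shell-style interpolation: Docker Compose substitutes the value from the environment (or an env file) when it is set and non-empty, and falls back to the literal after `:-` otherwise. The same rule can be checked in plain Bash:

```shell
#!/usr/bin/env bash
# Shell-style defaulting, as Docker Compose applies it to
# ${PROMETHEUS_RETENTION_TIME:-15d}: ':-' falls back when the variable
# is unset OR empty (a plain '-' would fall back only when unset).
set -euo pipefail

retention="${PROMETHEUS_RETENTION_TIME:-15d}"
echo "retention=$retention"
```

With nothing exported this prints `retention=15d`; exporting `PROMETHEUS_RETENTION_TIME=30d` before running it prints `retention=30d`, which is how `stack.env` overrides reach the Prometheus flags.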
11
setup/pi-monitoring/portainer-agent.sh
Normal file
@@ -0,0 +1,11 @@
#!/usr/bin/env bash
set -euo pipefail

docker run -d \
  -p 9001:9001 \
  --name portainer_agent \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  -v /:/host \
  portainer/agent:2.39.0
@@ -10,7 +10,7 @@ COMPOSE_PROJECT_NAME=pi-monitoring
OPS_ROOT=/srv/ops

# Host timezone for containers
TZ=America/New_York
TZ=America/Chicago

# Bind IP for published ports (0.0.0.0 = all interfaces)
BIND_IP=0.0.0.0
@@ -34,6 +34,11 @@ UPTIME_KUMA_IMAGE=louislam/uptime-kuma:1
GRAFANA_ADMIN_USER=admin
GRAFANA_ADMIN_PASSWORD=replace-with-strong-password

# Prometheus retention (whichever limit is hit first wins)
# Reduce these on microSD to prevent filling the card.
PROMETHEUS_RETENTION_TIME=15d
PROMETHEUS_RETENTION_SIZE=2GB

# Optional comma-separated plugin list for Grafana
# Example: grafana-piechart-panel,grafana-clock-panel
GRAFANA_PLUGINS=

@@ -3,11 +3,11 @@ set -euo pipefail

# =============================================================================
# teardown_all.sh — Tear down migration in reverse order
# Runs phase teardown scripts from phase 9 → phase 1 (or a subset).
# Runs phase teardown scripts from phase 11 → phase 1 (or a subset).
#
# Usage:
#   ./teardown_all.sh              # Tear down everything (phases 9 → 1)
#   ./teardown_all.sh --through=5  # Tear down phases 9 → 5 (leave 1-4)
#   ./teardown_all.sh              # Tear down everything (phases 11 → 1)
#   ./teardown_all.sh --through=5  # Tear down phases 11 → 5 (leave 1-4)
#   ./teardown_all.sh --yes        # Skip confirmation prompts
# =============================================================================

@@ -25,8 +25,8 @@ for arg in "$@"; do
  case "$arg" in
    --through=*)
      THROUGH="${arg#*=}"
      if ! [[ "$THROUGH" =~ ^[0-9]+$ ]] || [[ "$THROUGH" -lt 1 ]] || [[ "$THROUGH" -gt 9 ]]; then
        log_error "--through must be a number between 1 and 9"
      if ! [[ "$THROUGH" =~ ^[0-9]+$ ]] || [[ "$THROUGH" -lt 1 ]] || [[ "$THROUGH" -gt 11 ]]; then
        log_error "--through must be a number between 1 and 11"
        exit 1
      fi
      ;;
@@ -37,14 +37,14 @@ for arg in "$@"; do
Usage: $(basename "$0") [options]

Options:
  --through=N   Only tear down phases N through 9 (default: 1 = everything)
  --through=N   Only tear down phases N through 11 (default: 1 = everything)
  --cleanup     Also run setup/cleanup.sh to uninstall setup prerequisites
  --yes, -y     Skip all confirmation prompts
  --help        Show this help

Examples:
  $(basename "$0")               Tear down everything
  $(basename "$0") --through=5   Tear down phases 5-9, leave 1-4
  $(basename "$0") --through=5   Tear down phases 11-5, leave 1-4
  $(basename "$0") --cleanup     Full teardown + uninstall prerequisites
  $(basename "$0") --yes         Non-interactive teardown
EOF
@@ -58,9 +58,9 @@ done
# ---------------------------------------------------------------------------
if [[ "$AUTO_YES" == "false" ]]; then
  if [[ "$THROUGH" -eq 1 ]]; then
    log_warn "This will tear down ALL phases (9 → 1)."
    log_warn "This will tear down ALL phases (11 → 1)."
  else
    log_warn "This will tear down phases 9 → ${THROUGH}."
    log_warn "This will tear down phases 11 → ${THROUGH}."
  fi
  printf 'Are you sure? [y/N] '
  read -r confirm
@@ -70,9 +70,11 @@ if [[ "$AUTO_YES" == "false" ]]; then
  fi
fi

# Teardown scripts in reverse order (9 → 1)
# Teardown scripts in reverse order (11 → 1)
# Each entry: phase_num|script_path
TEARDOWNS=(
  "11|phase11_teardown.sh"
  "10|phase10_teardown.sh"
  "9|phase9_teardown.sh"
  "8|phase8_teardown.sh"
  "7|phase7_teardown.sh"

@@ -1,5 +1,5 @@
# act_runner configuration — rendered by manage_runner.sh
# Variables: RUNNER_NAME, RUNNER_LABELS_YAML, RUNNER_CAPACITY
# Variables: RUNNER_NAME, RUNNER_LABELS_YAML, RUNNER_CAPACITY, RUNNER_CONTAINER_OPTIONS
# Deployed alongside docker-compose.yml (docker) or act_runner binary (native).

log:
@@ -22,7 +22,7 @@ cache:
container:
  network: ""        # Empty = use default Docker network.
  privileged: false  # Never run job containers as privileged.
  options:
  options: ${RUNNER_CONTAINER_OPTIONS}
  workdir_parent:

host:
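The template's `${RUNNER_CONTAINER_OPTIONS}` placeholder gets filled in by manage_runner.sh at render time. A minimal sketch of how such a placeholder can be substituted with plain Bash string replacement (this is an illustration, not manage_runner.sh's actual code, whose mechanism may differ):

```shell
#!/usr/bin/env bash
# Sketch: render a template placeholder like ${RUNNER_CONTAINER_OPTIONS}
# into a config line using Bash parameter-expansion replacement.
set -euo pipefail

RUNNER_CONTAINER_OPTIONS="--device=/dev/kvm"

render() {
  local tmpl="$1" placeholder='${RUNNER_CONTAINER_OPTIONS}'
  # Quoting "$placeholder" in the pattern makes it match literally.
  printf '%s\n' "${tmpl//"$placeholder"/$RUNNER_CONTAINER_OPTIONS}"
}

render 'options: ${RUNNER_CONTAINER_OPTIONS}'
# prints: options: --device=/dev/kvm
```

With `RUNNER_CONTAINER_OPTIONS` left empty, the rendered line becomes a bare `options: `, which act_runner treats as no extra flags, matching the "Empty = no extra flags" note in the runner config comments.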