
Lab Audit & Triage (Phase 0.5)

Date: 2026-05-11
Purpose: Walk every entity discovered during lab inventory, assign a target disposition, and produce a clear model for Phase 1 inventory to implement.

Dispositions:

  • KEEP — include in target-state inventory as-is
  • KEEP-UNTIL — keep temporarily, decommission at a named phase
  • MIGRATE — functionality moves to a new location
  • DECOMMISSION — delete/destroy, reclaim resources
  • CONSOLIDATE — merge into another entity
  • NEEDS-DECISION — ambiguous; owner input required (listed in appendix)

1. Physical & Hypervisor Layer

1.1 Proxmox Host (prox)

| Field | Value |
|---|---|
| Hardware | ASUS MINIPC PN64, i7-12700H (14c/20t), 60GB RAM, 1TB NVMe |
| IP | 192.168.6.71 |
| NIC | enp2s0, MTU 9000, bridge vmbr0 (VLAN-aware) |
| Storage | local-lvm at 88.65% used (704GB of 796GB allocated) |
| Memory | 95% used (57GB of 60GB), swap 4.7GB of 8GB |
| NVMe wear | 28% (healthy) |
| Disposition | KEEP |

Resource pressure: Memory and local-lvm are both near capacity. The decommissions below should recover approximately 15-20GB RAM and 200+GB disk from stopped guests that still have allocations.
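
To make that estimate reproducible during Phase 1, here is a minimal sketch (an illustration, not part of the audit tooling) that sums the RAM and disk allocations of stopped guests directly from the Proxmox API, using the stock pvesh CLI on prox:

```python
#!/usr/bin/env python3
"""Sum RAM/disk allocations of stopped guests on the Proxmox host.

Run on prox itself; relies only on the stock `pvesh` CLI.
"""
import json
import subprocess

# /cluster/resources returns one entry per guest with maxmem/maxdisk in bytes.
raw = subprocess.check_output(
    ["pvesh", "get", "/cluster/resources", "--type", "vm", "--output-format", "json"]
)
guests = json.loads(raw)

stopped = [g for g in guests if g.get("status") == "stopped"]
ram_gb = sum(g.get("maxmem", 0) for g in stopped) / 1024**3
disk_gb = sum(g.get("maxdisk", 0) for g in stopped) / 1024**3

for g in sorted(stopped, key=lambda g: g["vmid"]):
    print(f'{g["vmid"]:>4} {g["name"]:<20} {g["maxmem"] / 1024**3:6.1f} GB RAM '
          f'{g["maxdisk"] / 1024**3:7.1f} GB disk')
print(f"Total stopped allocations: {ram_gb:.1f} GB RAM, {disk_gb:.1f} GB disk")
```

Running it before and after the section 3.4 decommissions shows exactly how much allocation was reclaimed.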

1.2 Synology NAS (whrrr) — RS2421+

| Field | Value |
|---|---|
| IP (primary) | 192.168.6.215 (LAN1) |
| IP (LAN2) | 192.168.6.214 |
| IP (LAN3) | 192.168.6.216 |
| Drives | 12x SATA in 3 volume groups |
| Disposition | KEEP (class: appliance) |

Volume utilization:

| VG | Volume | Size | Used | Free | Notes |
|---|---|---|---|---|---|
| vg1 | volume_1 | 34.55T | 97% | 1.3T | SerializedWatchables + NetBackup — media archive, intentional |
| vg1 | volume_2 | 22.03T | 99% | 225G | Movems — media archive, intentional |
| vg1 | volume_4 | 350G | 88% | 42G | Plex metadata, docker, dockerPrime — manage down |
| vg1 | volume_5 | 250G | 56% | 106G | Healthy |
| vg1 | volume_7 | 1.0T | 30% | 690G | Healthy |
| vg2 | volume_6 | 15.0T | 84% | 2.4T | GD MBP TM, Longterm Storage, Scans, SynoNVR, VirtMach |
| vg2 | volume_8 | 4.88T | 4% | 4.6T | Docker (btrfs), homes — healthy, lots of headroom |
| vg3 | volume_9 | 29.3T | 100% | 170G | Prawns — full, media archive |

Actions:

  • volume_4 (88%): Clean the Plex transcoder cache and prune unused Docker images (see the sketch after this list)
  • volume_9 (100%): User confirmed intentional; capacity alarm covers this
  • Multi-homing design (3 NICs → per-VLAN IPs) deferred to Phase 7
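
For the volume_4 item above, a minimal sketch of the Docker-image half of the cleanup plus a utilization check. It assumes the Python Docker SDK is available on whrrr and that DSM mounts volume_4 at /volume4; both are assumptions to verify, and the Plex transcoder cache path is left out deliberately because it was not captured in this audit.

```python
#!/usr/bin/env python3
"""Prune unused Docker images on whrrr and report volume_4 utilization."""
import shutil

import docker  # pip install docker; talks to the local Docker socket

VOLUME_4 = "/volume4"  # assumed DSM mount point for volume_4
THRESHOLD = 0.85       # flag anything above 85% used

client = docker.from_env()
result = client.images.prune(filters={"dangling": False})  # remove all unused images
print(f'Reclaimed {result.get("SpaceReclaimed", 0) / 1024**2:.0f} MiB of image layers')

usage = shutil.disk_usage(VOLUME_4)
used_frac = (usage.total - usage.free) / usage.total
print(f"volume_4: {used_frac:.0%} used")
if used_frac > THRESHOLD:
    print("volume_4 still above threshold; the Plex transcoder cache is the next target")
```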

1.3 Unifi Network Stack

| Device | Model | IP | MAC | Firmware | Disposition |
|---|---|---|---|---|---|
| UDM SE | Dream Machine PRO SE | 98.167.204.33 (WAN) | 60:22:32:26:2f:89 | 5.1.10 | KEEP (class: appliance) |
| USW Pro Max 16 PoE | USW Pro Max 16 PoE | 192.168.1.58 | 28:70:4e:32:67:6d | 7.5.2 | KEEP (class: appliance) |
| USW Flex Mini | USW Flex Mini | 192.168.1.95 | 28:70:4e:cd:01:8c | 2.1.6 | KEEP (class: appliance) |
| Dining AP | AC Pro (U6) | 192.168.1.34 | f4:92:bf:63:32:69 | 6.9.1 | KEEP (class: appliance) |
| Hallway AP | U7 Pro | 192.168.1.110 | 9c:05:d6:fd:6c:88 | 8.6.9 | KEEP (class: appliance) |
| Cable Internet Modem | | 192.168.100.1 | d0:21:f9:05:8f:21 | 1.4.3 | KEEP (not inventoried — ISP-owned) |

1.4 Cameras (VLAN 6 — Security)

| Device | IP | MAC | Disposition |
|---|---|---|---|
| G4 Carport | 192.168.8.173 | d0:21:f9:94:13:c6 | KEEP (class: appliance) |
| G4 Front Door | 192.168.8.76 | d0:21:f9:92:5a:ea | KEEP (class: appliance) |
| G5 Flex (Rack Monitor) | 192.168.8.10 | f4:e2:c6:0e:be:65 | KEEP (class: appliance) |

2. VMs on Proxmox

| VMID | Name | Status | Disposition | Target Class | Notes |
|---|---|---|---|---|---|
| 100 | saltierpoop | running | KEEP | managed_appliance | Saltbox media stack. 20 vCPU, 45GB RAM, 260GB disk. OS-layer managed only. |
| 106 | metrimon | stopped | KEEP (stopped) | host | Ubuntu 24.04, 10GB RAM, 96GB disk. Stopped but retained. |
| 107 | dnsproject | stopped | KEEP (stopped) | host | Community-script DNS project. 3GB RAM, 20GB disk. |
| 121 | penpot | stopped | KEEP (stopped) | host | Community-script design tool. 4GB RAM, 10GB disk. |
| 200 | haos | running | KEEP | appliance | Home Assistant OS. 6GB RAM, 32GB disk. USB passthrough (Zigbee + other). Critical for Phase 8 power tracking. |

Note: Stopped VMs are retained — current status is not indicative of future use. No VM decommissions planned.


3. LXCs on Proxmox

3.1 Running — Keep

| VMID | Name | IP | RAM | Disk | Disposition | Notes |
|---|---|---|---|---|---|---|
| 104 | blocktopus | 192.168.6.80 | 2GB | 16GB | KEEP-UNTIL Phase 7 | PiHole. Stays until AdGuard migration. |
| 114 | nfs-monitoring | 192.168.6.107 | 8GB | 112GB | KEEP | NFS bridge to Whrrr (synorpn + prawns mounts). Active, high I/O. |
| 119 | harbor-registry | 192.168.6.119 | 4GB | 80GB | KEEP | Container registry at reg.realemail.app. Recordurbate depends on this. |
| 120 | octoprint | 192.168.6.222 | 1GB | 4GB | KEEP | 3D printer control. Privileged for USB serial passthrough. |

3.2 Running — Decommission after Phase 5

| VMID | Name | IP | RAM | Disk | Disposition | Notes |
|---|---|---|---|---|---|---|
| 108 | prometheus | 192.168.6.237 | 2GB | 8GB | DECOMMISSION after Phase 5 | Duplicate of saltierpoop container. Replaced by Phase 5 monitoring stack. |
| 112 | grafana | 192.168.6.249 | 512MB | 2GB | DECOMMISSION after Phase 5 | Duplicate of saltierpoop container. Replaced by Phase 5 monitoring stack. |

3.3 Running — Keep (owner-confirmed)

| VMID | Name | IP | RAM | Disk | Disposition | Notes |
|---|---|---|---|---|---|---|
| 111 | influxdb | 192.168.6.132 | 2GB | 8GB | KEEP | Owner-confirmed. Significant inbound traffic (134GB). Likely consumed by HAOS for long-term history. Evaluate in Phase 5 whether Prometheus replaces the need. |
| 116 | pulse | 192.168.6.199 | 1GB | 4GB | KEEP | Owner-confirmed. Running Debian, significant network I/O (225GB in, 31GB out). Purpose to be documented during Phase 1 inventory. |

3.4 Stopped — Decommission

| VMID | Name | RAM Alloc | Disk Alloc | Disposition | Rationale |
|---|---|---|---|---|---|
| 101 | alpine-prometheus | 256MB | 1GB | DECOMMISSION | Superseded by LXC 108, which itself is superseded by Phase 5 stack. |
| 103 | unmanic | 4GB | 8GB | DECOMMISSION | Media transcoder. Tdarr runs in saltierpoop and covers this use case. |
| 105 | k6-loadtest | 6GB | 12GB | DECOMMISSION | Load testing tool. Not relevant to homelab ops. Run on-demand if ever needed. |
| 109 | graylog | 8GB | 30GB | DECOMMISSION | Log aggregator experiment. Loki (in saltierpoop now, canonical in Phase 5) replaces this. |
| 110 | sqlserver2022 | 23.5GB | 60GB | DECOMMISSION | SQL Server experiment. Massive allocation (23.5GB RAM!). No active consumers. |
| 113 | mysql | 1GB | 8GB | DECOMMISSION | Standalone MySQL. MariaDB in saltierpoop covers DB needs. |
| 117 | caddy | 2GB | 12GB | DECOMMISSION | Reverse proxy experiment. Traefik is the standard. |
| 118 | reactive-resume | 3GB | 8GB | DECOMMISSION | Resume builder. Use the SaaS version if needed. |
| 122 | netboot.xyz | 512MB | 8GB | DECOMMISSION | PXE boot server. Not in use. Re-deploy as container in Phase 9 if needed. |

3.5 Stopped — Keep (owner-confirmed)

| VMID | Name | RAM Alloc | Disk Alloc | Disposition | Notes |
|---|---|---|---|---|---|
| 102 | aiProject | 9GB | 64GB | KEEP (stopped) | Owner-confirmed. Large allocation for AI project. |
| 115 | ollama | 10GB | 35GB | KEEP (stopped) | Rehydrate in Phase 9 for RAG project. Leave allocated but stopped until then. |

Total resource recovery from stopped LXC decommissions (section 3.4 only): ~48GB RAM allocation, ~147GB disk. This frees meaningful headroom on local-lvm for Phase 5+ workloads.
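
For when the operator signs off, a minimal sketch of how the section 3.4 decommissions could be scripted on prox. The VMID list comes from the table above; the status guard and the --purge flag are my choices, not a committed runbook, and backups should be confirmed first.

```python
#!/usr/bin/env python3
"""Destroy the stopped LXCs slated for decommission in section 3.4 (sketch only)."""
import subprocess

# VMIDs from section 3.4 (stopped LXCs marked DECOMMISSION)
DECOMMISSION = [101, 103, 105, 109, 110, 113, 117, 118, 122]

for vmid in DECOMMISSION:
    status = subprocess.run(
        ["pct", "status", str(vmid)], capture_output=True, text=True
    ).stdout
    if "stopped" not in status:
        print(f"skipping {vmid}: not stopped ({status.strip()})")
        continue
    print(f"destroying LXC {vmid}")
    # --purge also removes the guest from backup jobs and HA resources
    subprocess.run(["pct", "destroy", str(vmid), "--purge"], check=True)
```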


4. Saltierpoop Containers (Saltbox-managed)

These are NOT reconciled by our repo. Listed for awareness, monitoring, and Homepage generation. The full list from docker ps:
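For Phase 1 inventory and Homepage generation, a minimal sketch of how this list could be captured as structured data rather than raw docker ps output. It assumes the Python Docker SDK running on saltierpoop with access to the Docker socket; the categorized breakdown follows in 4.1 through 4.4.

```python
#!/usr/bin/env python3
"""Dump the Saltbox containers as structured data for inventory/Homepage use."""
import json

import docker  # pip install docker

client = docker.from_env()
inventory = []
for c in client.containers.list(all=True):  # include stopped/restarting containers
    inventory.append({
        "name": c.name,
        "image": c.image.tags[0] if c.image.tags else c.image.short_id,
        "status": c.status,   # running / exited / restarting
    })

print(json.dumps(sorted(inventory, key=lambda x: x["name"]), indent=2))
```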

4.1 Core Infrastructure (Saltbox)

| Container | Image | Role | Notes |
|---|---|---|---|
| traefik | | reverse-proxy | Existing Cloudflare + Authentik/Authelia setup |
| authentik + authentik-postgres + authentik-redis | postgres:16-alpine | auth (strategic) | SSO provider |
| authelia + authelia-redis | redis:alpine | auth (legacy) | Deprecate over time per PLAN.md |
| plex | plexinc/pms-docker | media server | GPU passthrough for transcoding |
| portainer | portainer-ce:alpine-sts | container management | |
| gitea | gitea/gitea | git hosting | Candidate for promotion to Forgejo in Phase 10 |
| mariadb | mariadb:10 | database | Saltbox services depend on this |
| postgres | | database | Likely separate from authentik-postgres |

4.2 Media Stack (Saltbox)

| Container | Role |
|---|---|
| sonarr | TV automation |
| radarr | Movie automation |
| bazarr | Subtitles |
| prowlarr | Indexer management |
| jackett | Indexer proxy (legacy, prowlarr supersedes) |
| nzbhydra2 | NZB meta-search |
| qbittorrentvpn | Torrent client (VPN-routed) — currently restarting |
| recyclarr | Quality profile sync |
| tautulli | Plex analytics |
| kometa | Plex metadata management |
| tdarr | Media transcoding |
| autoscan | Library scan trigger |
| nzbthrottle | NZB bandwidth management |
| seerr | Request management |
| lazylibrarian | Book management |
| audiobookshelf | Audiobook server |

4.3 Monitoring & Observability (Saltbox — duplicated)

| Container | Notes |
|---|---|
| prometheus | Duplicate of LXC 108. Both decommissioned in Phase 5. |
| grafana | Duplicate of LXC 112. Both decommissioned in Phase 5. |
| loki-alloy | Grafana Alloy (log collector). Moves to Phase 5 stack. |
| netdata + netdata-docker-socket-proxy | Host monitoring. Evaluate vs node_exporter in Phase 5. |
| glances + glances-docker-socket-proxy | System monitoring. Overlaps with netdata. |
| jaeger | Distributed tracing. Likely unused in homelab context. |
| uptime + uptime-docker-socket-proxy | Uptime Kuma. Migrate to infra-services in Phase 4. |

4.4 Other Services (Saltbox)

| Container | Role | Notes |
|---|---|---|
| code-server | VS Code in browser | |
| foundry | FoundryVTT (tabletop RPG) | Public port 30000 |
| resiliosync | File sync | Port 55555 |
| dockwatch + dockwatch-docker-socket-proxy | Container update notifications | |
| notifiarr | Notification relay | |
| flaresolverr | Cloudflare bypass for indexers | |
| error-pages | Custom error pages for Traefik | |

qbittorrentvpn

This container was in a restart loop at time of discovery (Restarting (1)). Likely a VPN credential or config issue. Not urgent (Saltbox-managed) but worth noting for the operator.
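
A minimal diagnostic sketch the operator could run on saltierpoop to confirm the loop and pull the most recent error lines (it assumes the Python Docker SDK and the container name exactly as it appears in docker ps):

```python
#!/usr/bin/env python3
"""Confirm the qbittorrentvpn restart loop and show its latest log lines."""
import docker  # pip install docker

client = docker.from_env()
c = client.containers.get("qbittorrentvpn")

state = c.attrs["State"]
print(f'status={state["Status"]} exit_code={state["ExitCode"]} '
      f'restarts={c.attrs.get("RestartCount", 0)}')

# The last few log lines usually name the failing step (VPN auth, config parse, ...)
print(c.logs(tail=30).decode(errors="replace"))
```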


5. VMs on Whrrr (Synology VMM)

| Name | IP | Tailscale IP | Disposition | Notes |
|---|---|---|---|---|
| recordurbate | 192.168.6.98 | 100.85.192.18 | KEEP (class: customer-app host) | Hosts the recordurbate-tiktok VM. |
| ubuncap | 192.168.6.100 | 100.127.229.145 | KEEP (class: customer-app host) | Hosts the tiktok_* container stack at /mnt/streams/tiktok/. |

Customer App: recordurbate-tiktok (on ubuncap)

14 containers running: nginx, frontend, backend, 5x rq-worker, smart-monitor, discord-bot, telegram-bot, rq-dashboard, redis, postgres. All images from reg.realemail.app/recordurbate-tiktok/*. Compose at /mnt/streams/tiktok/docker-compose.yml.

Disposition: KEEP (class: customer-app). Inventoried for monitoring and backup awareness only. Not managed by our Ansible or Komodo.


6. New VM Provision Plan: infra-services

| Field | Value |
|---|---|
| Host | Whrrr (Synology VMM) — baseline recommendation |
| OS | Ubuntu 24.04 LTS |
| vCPU | 4 |
| RAM | 8GB |
| Disk | 50GB (on volume_8, which has 4.5TB free) |
| Network | VLAN 4 (Servers), static IP in 192.168.6.0/24 |
| Tailscale | Yes — node name infra-services |
| Class | host |

Rationale: VMM-on-Whrrr offloads memory from Proxmox (which is at 95%). The 1TB VMM allocation in DSM is already provisioned. Volume_8 has ample headroom.

Services to host (phased):

  • Phase 4: Komodo, ARA, Homepage, Uptime Kuma
  • Phase 5: Prometheus, Grafana, Loki, Alertmanager, Promtail

Fallback: If VMM proves limiting (no GPU passthrough, limited networking options), provision on Proxmox after LXC decommissions free up headroom.


7. Tailscale Node Audit

| Node | Tailscale IP | Status | Disposition |
|---|---|---|---|
| recordurbate | 100.85.192.18 | online | KEEP — fix SEC-006 (--accept-routes) in Phase 3 |
| whrrr | 100.71.93.130 | online | KEEP |
| poopcastle | 100.79.103.61 | idle, exit node | KEEP — primary exit node |
| ubuncap | 100.127.229.145 | recently online | KEEP |
| captainkangapoo-dec2025 | 100.101.249.86 | offline 2d | KEEP — personal desktop, intermittently online |
| saltierpoop | 100.86.38.77 | offline 53d | REHYDRATE — reinstall/restart Tailscale on this VM |
| ben-iphone-17promax | 100.118.161.43 | offline 77d | REHYDRATE — re-login on device |
| bens-mbp-m4 | 100.95.172.83 | offline 77d | REHYDRATE — re-login on device |
| openclaws-virtual-machine | 100.98.132.22 | offline 77d | REHYDRATE — macOS VM, re-login |
| bens-ipad | 100.120.194.99 | offline 590d | KEEP — owner-confirmed. Rehydrate when device is next used. |
| proxbox-cube | 100.97.134.65 | offline 351d | KEEP — owner-confirmed. Rehydrate when device is next used. |

Exit node: poopcastle (HAOS) is the sole exit node. Owner confirmed only one is configured.


8. Port Forward Audit (UDM SE)

| Name | External Port | Dest IP | Dest Port | Disposition |
|---|---|---|---|---|
| HTTP | 80 | 192.168.6.243 (saltierpoop) | 80 | KEEP — Cloudflare → Traefik chain |
| HTTP_Too | 8080 | 192.168.6.243 (saltierpoop) | 8080 | KEEP — Traefik alt port |
| HTTPS | 443 | 192.168.6.243 (saltierpoop) | 443 | KEEP — primary public ingress |
| DSM HTTP | 5000 | 192.168.6.215 (whrrr) | 5000 | CLOSE in Phase 7 (SEC-003) |
| DSM HTTPS | 5001 | 192.168.6.215 (whrrr) | 5001 | CLOSE in Phase 7 (SEC-003) |
| SMB | 445 | 192.168.6.215 (whrrr) | 445 | CLOSED (SEC-001, Phase 0) |
| omgwtfbbq | 9001 | 192.168.1.84 | 9001 | SEC-004 — see section 9 |

Firewall rules: The Unifi API returned empty data for firewallrule and firewallgroup. This confirms SEC-002: no inter-VLAN firewall rules exist. All VLAN isolation is aspirational until Phase 7 enforces it.
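
For the record, a sketch of the check that produced the empty result, so it can be rerun after Phase 7 adds rules. It assumes a UniFi OS console (the UDM SE) where the Network application sits behind /proxy/network, a local admin login, and a reachable LAN controller address; the endpoint names are the community-documented REST collections, not an official SDK.

```python
#!/usr/bin/env python3
"""Query the UDM SE for firewall rules and groups (SEC-002 verification)."""
import requests
import urllib3

urllib3.disable_warnings()  # the UDM uses a self-signed certificate by default

UDM = "https://192.168.1.1"  # assumed controller LAN address
SITE = "default"
CREDS = {"username": "audit-ro", "password": "..."}  # placeholder credentials

s = requests.Session()
s.verify = False
login = s.post(f"{UDM}/api/auth/login", json=CREDS)
login.raise_for_status()
# UniFi OS expects the CSRF token from the login response on later requests
s.headers["X-CSRF-Token"] = login.headers.get("X-CSRF-Token", "")

for endpoint in ("firewallrule", "firewallgroup"):
    r = s.get(f"{UDM}/proxy/network/api/s/{SITE}/rest/{endpoint}")
    r.raise_for_status()
    data = r.json().get("data", [])
    print(f"{endpoint}: {len(data)} entries")  # both came back empty in this audit
```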


9. SEC-004 Investigation: omgwtfbbq (192.168.1.84:9001)

Finding: Port forward named omgwtfbbq forwarding external port 9001 to 192.168.1.84:9001. Logging is enabled. The operator does not recall creating it.

Analysis:

  • 192.168.1.84 is on the GenPop VLAN (192.168.1.0/24, default LAN)
  • Port 9001 is commonly used by: Portainer Agent, Supervisor (process manager), or various web admin panels
  • The IP does not appear in the current Unifi client list (25 of 50 clients retrieved) — it may be a device that has since been reassigned a different DHCP lease or is currently offline
  • The forward has log: true, so Unifi should have access logs if recent traffic hit it

Status: Owner has paused the firewall rule while investigating what the port/device is for. The forward is currently inactive.

Remaining steps:

  1. Identify what device held 192.168.1.84 (check DHCP lease history in Unifi)
  2. Determine what service listens on port 9001 on that device (a probe sketch follows this list)
  3. If intentional, document and re-evaluate during Phase 7 network tightening
  4. If unidentified, delete the forward permanently
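
For step 2, a stdlib-only probe sketch to run from a GenPop host once the device reappears; the HTTP request is just a guess at what a port-9001 admin panel might speak, so treat any banner as a hint rather than an identification:

```python
#!/usr/bin/env python3
"""Probe 192.168.1.84:9001 and print whatever banner the service offers."""
import socket

HOST, PORT = "192.168.1.84", 9001

try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        # Many admin panels on 9001 speak HTTP, so try a bare GET first.
        sock.sendall(b"GET / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        sock.settimeout(5)
        banner = sock.recv(2048)
        print(banner.decode(errors="replace") or "(connected, no banner)")
except OSError as exc:
    print(f"no answer from {HOST}:{PORT} ({exc}); device likely offline")
```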

10. Duplicate Service Consolidation Plan

Prometheus (3 instances!)

| Instance | Location | Status | Action |
|---|---|---|---|
| alpine-prometheus (LXC 101) | Proxmox | stopped | DECOMMISSION now |
| prometheus (LXC 108) | Proxmox | running | DECOMMISSION after Phase 5 |
| prometheus (container) | saltierpoop | running | Leave for Saltbox; our Phase 5 stack is canonical |

Grafana (2 instances)

| Instance | Location | Status | Action |
|---|---|---|---|
| grafana (LXC 112) | Proxmox | running | DECOMMISSION after Phase 5 |
| grafana (container) | saltierpoop | running | Leave for Saltbox |

Database Engines (3 running, 2 stopped)

| Engine | Location | Status | Action |
|---|---|---|---|
| mariadb (container) | saltierpoop | running | KEEP — Saltbox services depend on it |
| authentik-postgres (container) | saltierpoop | running | KEEP — Authentik dependency |
| mysql (LXC 113) | Proxmox | stopped | DECOMMISSION — no consumers |
| sqlserver2022 (LXC 110) | Proxmox | stopped | DECOMMISSION — no consumers, 23.5GB RAM(!) |
| influxdb (LXC 111) | Proxmox | running | KEEP — owner-confirmed, evaluate in Phase 5 |

11. WiFi Network Mapping (SEC-005)

All four SSIDs share networkconf_id: 60d1337ea2456413f08e0121, which maps to GenPop (VLAN 1). The three EAP-secured SSIDs likely intend different access tiers:

| SSID | Security | Likely Intent | Current VLAN | Target VLAN |
|---|---|---|---|---|
| The LAN Before Time | WPA2-PSK | Guest/household | 1 (GenPop) | 1 (GenPop) — correct |
| IsThisTheKrustyKrab | WPA2-EAP | Personal devices? | 1 (GenPop) | 2 (Personal) |
| Rebellious Amish Family | WPA2-EAP | Trusted? | 1 (GenPop) | NEEDS-DECISION |
| HotSignalsInYourArea | WPA2-EAP | IoT? | 1 (GenPop) | NEEDS-DECISION |

Owner confirmed all SSIDs are planned for active use. Exact VLAN mapping to be determined during Phase 7 network tightening.
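
When Phase 7 does the remap, a sketch for verifying each SSID's networkconf_id against the target networks, reusing the same UniFi OS session pattern as the section 8 check (controller address and credentials are placeholders, and the endpoint names are the community-documented ones):

```python
#!/usr/bin/env python3
"""List each SSID's network so the VLAN remap can be verified after Phase 7."""
import requests
import urllib3

urllib3.disable_warnings()

UDM = "https://192.168.1.1"  # assumed controller LAN address
SITE = "default"

s = requests.Session()
s.verify = False
login = s.post(f"{UDM}/api/auth/login",
               json={"username": "audit-ro", "password": "..."})  # placeholders
s.headers["X-CSRF-Token"] = login.headers.get("X-CSRF-Token", "")

# Map network _id -> name, then print which network each WLAN is bound to.
networks = {n["_id"]: n.get("name", "?")
            for n in s.get(f"{UDM}/proxy/network/api/s/{SITE}/rest/networkconf").json()["data"]}
for wlan in s.get(f"{UDM}/proxy/network/api/s/{SITE}/rest/wlanconf").json()["data"]:
    net = networks.get(wlan.get("networkconf_id"), "(none)")
    print(f'{wlan["name"]:<28} -> {net}')
```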


Appendix: Decision Log

All items previously marked NEEDS-DECISION have been resolved by the owner.

| # | Entity | Decision | Date |
|---|---|---|---|
| 1 | LXC 102 aiProject | KEEP — retained for future AI work | 2026-05-11 |
| 2 | LXC 111 influxdb | KEEP — evaluate in Phase 5 | 2026-05-11 |
| 3 | LXC 116 pulse | KEEP — purpose to be documented in Phase 1 | 2026-05-11 |
| 4 | Tailscale bens-ipad | KEEP — rehydrate when device is used | 2026-05-11 |
| 5 | Tailscale proxbox-cube | KEEP — rehydrate when device is used | 2026-05-11 |
| 6 | Second exit node | Only one: poopcastle (HAOS) | 2026-05-11 |
| 7 | WiFi SSIDs | All planned for use; VLAN mapping in Phase 7 | 2026-05-11 |
| 8 | SEC-004 omgwtfbbq | PAUSED — firewall rule disabled, investigating | 2026-05-11 |