feat(inventory): add initial inventory structure with placeholders

Adds a 4-layer inventory system as the Single Source of Truth:
- inventory/physical.yaml: Physical hosts (Hestia, Proxmox, TrueNAS, ER605)
- inventory/proxmox.yaml: VMs and LXC containers (dockerino, media, homeassistant)
- inventory/truenas.yaml: ZFS pools, disks, datasets, NFS exports
- inventory/network.yaml: VLANs, subnets, DNS

All files have PLACEHOLDER fields to be filled with real data
using the discovery commands in inventory/README.md.
gaia 2026-04-09 12:16:38 -03:00
parent 665e5e1f40
commit e3a9c44d5a
5 changed files with 677 additions and 0 deletions

inventory/README.md (new file, 113 lines)

@@ -0,0 +1,113 @@
# Inventory — Single Source of Truth
This directory contains the complete homelab inventory as YAML files.
## Files
| File | Contents |
|------|----------|
| `physical.yaml` | Physical machines — hardware, MACs, IPs, location |
| `proxmox.yaml` | VMs and LXC containers on Proxmox |
| `truenas.yaml` | Disks, ZFS pools, datasets, shares |
| `network.yaml` | VLANs, subnets, DNS, DHCP |
## Principle
> **Always update the inventory FIRST**, before making any change to the real infrastructure.

Example:
1. You want to change Dockerino's IP from 10.0.0.50 to 10.0.0.51
2. Edit `inventory/proxmox.yaml` → change dockerino's IP
3. Terraform/Ansible pick up the new IP and apply it
4. The PR shows: "Dockerino IP: 10.0.0.50 → 10.0.0.51"
5. After merge, the change is applied automatically
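Steps 2 and 4 can be sketched as an edit plus a diff. This is a throwaway demo run against a temp copy, so it is safe anywhere; the key layout mirrors `inventory/proxmox.yaml`, and the real file path is assumed:

```bash
#!/bin/sh
# Demo of "edit the inventory, let the PR show the diff".
set -eu
tmp=$(mktemp -d)
cat > "$tmp/proxmox.yaml" <<'EOF'
containers:
  dockerino:
    network:
      ip: "10.0.0.50/24"
EOF
cp "$tmp/proxmox.yaml" "$tmp/before.yaml"
# With mikefarah yq this would be:
#   yq -i '.containers.dockerino.network.ip = "10.0.0.51/24"' "$tmp/proxmox.yaml"
# GNU sed is enough for a simple scalar swap:
sed -i 's|10\.0\.0\.50/24|10.0.0.51/24|' "$tmp/proxmox.yaml"
diff -u "$tmp/before.yaml" "$tmp/proxmox.yaml" || true  # the change the PR would show
new_ip=$(sed -n 's/.*ip: "\(.*\)".*/\1/p' "$tmp/proxmox.yaml")
echo "new ip: $new_ip"
rm -rf "$tmp"
```

The point is that the YAML edit *is* the change; everything downstream (Terraform/Ansible) reads from it.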
## How to fill in real data
### 1. Hestia (this machine)
```bash
# IP and MAC
ip link show
hostname -I
# CPU and RAM
lscpu | grep -E "^Model name|^CPU\(s\)"
free -h
# Disks
lsblk -d -o NAME,SIZE,TYPE | grep disk
```
### 2. Proxmox
```bash
# SSH into Proxmox
ssh root@<proxmox-ip>
# List VMs and containers
pvesh get /cluster/resources
# List disks
pvesh get /nodes/<node>/disks/list
# Details of a specific VM or container
pvesh get /nodes/<node>/qemu/<vmid>/config
pvesh get /nodes/<node>/lxc/<vmid>/config
```
### 3. TrueNAS
```bash
# SSH into TrueNAS
ssh root@<truenas-ip>
# Pool status
zpool status -v
# Datasets
zfs list -o name,mountpoint,used
# NFS exports
cat /etc/exports
# Disks
lsblk -d -o NAME,SIZE,TYPE,ROTA | grep disk
smartctl -a /dev/sdX
```
### 4. ER605 (Router)
Open the Omada Controller UI (probably at https://10.0.0.50:8043) and check:
- LAN Settings → DHCP
- VLANs
- Port Forwards
## File format
All files use YAML. Fields marked `PLACEHOLDER` must be filled in with real data.
## Validation
```bash
# Install yq if needed — note: Debian's apt `yq` package is the Python jq wrapper;
# the `yq eval` syntax below is mikefarah's Go yq (binary release or snap)
sudo apt install yq
# Validate the YAML syntax
yq eval '.' inventory/physical.yaml
# Extract the IPs of all machines
yq eval '.physical_hosts | to_entries | .[].value.network.ip' inventory/physical.yaml
```
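A quick way to track progress is counting the `PLACEHOLDER` fields that remain — a plain-grep sketch, demoed here on a temp file (for real use, point it at `inventory/*.yaml`):

```bash
#!/bin/sh
# Count PLACEHOLDER fields left to fill in.
set -eu
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
network:
  gateway: "PLACEHOLDER"
  dns: "10.0.0.2"
  mac: "PLACEHOLDER"
EOF
count=$(grep -c 'PLACEHOLDER' "$tmp")  # real use: grep -c PLACEHOLDER inventory/*.yaml
echo "remaining placeholders: $count"
rm -f "$tmp"
```

When the count hits zero, the inventory reflects reality.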
## Read order for Terraform/Ansible
```
physical.yaml (layer 0 — facts)
proxmox.yaml + truenas.yaml (layer 1 — provisioning)
ansible/ (layer 2 — OS configuration)
services/ (layer 3 — applications)
```
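The layer files are plain YAML, so a consumer without yq can still pull simple facts. A rough awk sketch (indentation and key names assumed to match physical.yaml's `host → network → ip` layout), demoed on inline sample data:

```bash
#!/bin/sh
# Extract "host ip" pairs from physical.yaml-style data using awk only.
set -eu
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
physical_hosts:
  hestia:
    network:
      ip: "10.0.0.10"
  truenas:
    network:
      ip: "10.0.0.20"
EOF
out=$(awk '
  /^  [a-z0-9_-]+:$/ { host = $1; sub(/:$/, "", host) }  # host name at 2-space indent
  /^      ip:/       { ip = $2; gsub(/"/, "", ip); print host, ip }
' "$tmp")
echo "$out"
rm -f "$tmp"
```

This is brittle by design (it depends on exact indentation); it is only a fallback for hosts where installing yq is not worth it.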

inventory/network.yaml (new file, 78 lines)

@@ -0,0 +1,78 @@
# ===========================================
# NETWORK INVENTORY
# ===========================================
# TODO: Discover the real IPs via:
# - ER605 Admin UI: LAN settings
# - AdGuard: 10.0.0.2 → Settings > DHCP
# ===========================================
network:
domain: "hackerfortress.cc"
  gateway: "PLACEHOLDER" # ER605 IP on VLAN 1
# ===========================================
# Subnets e VLANs
# ===========================================
vlans:
1:
name: "infra"
subnet: "10.0.0.0/24"
gateway: "PLACEHOLDER"
dhcp_server: true
dhcp_range:
start: "10.0.0.100"
end: "10.0.0.200"
static_leases:
      # TODO: Add known static leases
# "MAC_ADDRESS": "IP"
"hestia-mac": "PLACEHOLDER"
"truenas-mac": "PLACEHOLDER"
"proxmox-mac": "PLACEHOLDER"
10:
name: "geral"
subnet: "10.0.10.0/24"
gateway: "PLACEHOLDER"
dhcp_server: true
dhcp_range:
start: "10.0.10.100"
end: "10.0.10.200"
20:
name: "iot"
subnet: "10.0.20.0/24"
gateway: "PLACEHOLDER"
dhcp_server: true
dhcp_range:
start: "10.0.20.100"
end: "10.0.20.200"
30:
name: "guests"
subnet: "10.0.30.0/24"
gateway: "PLACEHOLDER"
dhcp_server: true
dhcp_range:
start: "10.0.30.100"
end: "10.0.30.200"
# ===========================================
# DNS — Services
# ===========================================
dns_services:
adguard:
ip: "10.0.0.2"
port: 53
web_ui: "http://10.0.0.2"
roles:
- dns-recursive
- dns-blocklist
# ===========================================
# Port Forwards (ER605)
# ===========================================
# TODO: ER605 Admin UI > NAT > Port Forwarding
forwarding:
# external_port: [protocol, internal_ip, internal_port, description]
#443: ["TCP", "10.0.0.50", "443", "Picsur HTTPS"]
#80: ["TCP", "10.0.0.50", "80", "Picsur HTTP"]

inventory/physical.yaml (new file, 182 lines)

@@ -0,0 +1,182 @@
# ===========================================
# PHYSICAL INVENTORY — Single Source of Truth
# ===========================================
# This file maps ALL the physical machines in the homelab.
# UPDATE: Whenever something physical (IP, MAC, disk) changes, update it here FIRST.
# ===========================================
physical_hosts:
# ===========================================
# HESTIA — Laptop (this machine)
# ===========================================
hestia:
    description: "Dell Latitude 5490 laptop — used as a workstation and CI/CD runner"
    location: "home rack"
hardware:
cpu: "Intel i5-8250U"
ram_gb: 16
disk:
- device: /dev/sda
type: SSD
size_gb: 256
mount: /
network:
mac: "PLACEHOLDER" # TODO: ip link show | grep ether
ip: "PLACEHOLDER" # TODO: hostname -I
      gateway: "PLACEHOLDER" # ER605 IP
dns: "10.0.0.2" # AdGuard
os:
distro: "Debian"
version: "13"
hostname: "hestia"
roles:
- runner-ci # Gitea Actions runner
- workstation
ssh:
user: "iamferreirajp"
port: 22
# ===========================================
# PROXMOX — Main server
# ===========================================
proxmox:
    description: "Mini-ITX server — Proxmox VE running VMs and containers"
    location: "home rack"
hardware:
cpu: "PLACEHOLDER"
ram_gb: 64
disk:
- device: /dev/sda
type: SSD
size_gb: 512
mount: /
role: "Proxmox OS"
network:
mac: "PLACEHOLDER"
ip: "PLACEHOLDER"
gateway: "PLACEHOLDER"
dns: "10.0.0.2"
os:
distro: "Proxmox VE"
version: "PLACEHOLDER"
hostname: "proxmox"
roles:
      - hypervisor # Proxmox (manages VMs)
- nfs-client # Mount TrueNAS volumes
ssh:
user: "root"
port: 22
# ===========================================
# TRUENAS — Storage server
# ===========================================
truenas:
    description: "Storage server — bare-metal TrueNAS SCALE"
    location: "home rack"
hardware:
cpu: "PLACEHOLDER"
ram_gb: 32
disk:
# TODO: lsblk -d -o NAME,SIZE,TYPE | grep disk
- device: /dev/sdb
type: HDD
size_tb: 4
role: "data"
- device: /dev/sdc
type: HDD
size_tb: 4
role: "data"
- device: /dev/sdd
type: HDD
size_tb: 4
role: "data"
- device: /dev/sde
type: HDD
size_tb: 4
role: "data"
- device: /dev/sdf
type: SSD
size_gb: 500
role: "SLOG/Cache"
network:
mac: "PLACEHOLDER"
ip: "PLACEHOLDER"
gateway: "PLACEHOLDER"
dns: "10.0.0.2"
os:
distro: "TrueNAS Scale"
version: "PLACEHOLDER"
hostname: "truenas"
roles:
- storage # NFS/SMB shares
      - nfs-server # Exports volumes
ssh:
user: "root"
port: 22
# ===========================================
# ER605 — TP-Link router (Omada)
# ===========================================
er605:
    description: "TP-Link ER605 router — gateway + DHCP + VLANs"
    location: "home rack"
hardware:
model: "TP-Link ER605"
wan_port: "1Gbps"
lan_ports: 4
network:
mac: "PLACEHOLDER"
      ip: "PLACEHOLDER" # Typically .1 of the subnet
gateway: "PLACEHOLDER" # WAN upstream
dns: "PLACEHOLDER"
os:
firmware: "Omada Controller"
      controller_url: "https://10.0.0.50:8043"
roles:
- gateway
- dhcp-server
- firewall
management:
web_ui: "http://PLACEHOLDER"
ssh: "disabled"
# ===========================================
# VLANs — network map
# ===========================================
vlans:
1:
name: "infra"
subnet: "10.0.0.0/24"
dhcp_range: "10.0.0.100-10.0.0.200"
    description: "Infrastructure — Gitea, AdGuard, Omada Controller"
10:
name: "geral"
subnet: "10.0.10.0/24"
dhcp_range: "10.0.10.100-10.0.10.200"
    description: "Workstations and laptops"
20:
name: "iot"
subnet: "10.0.20.0/24"
dhcp_range: "10.0.20.100-10.0.20.200"
    description: "IoT devices — sensors, cameras"
30:
name: "guests"
subnet: "10.0.30.0/24"
dhcp_range: "10.0.30.100-10.0.30.200"
    description: "Guest network"
# ===========================================
# DNS — AdGuard
# ===========================================
dns:
adguard:
    description: "Recursive DNS + ad blocker"
ip: "10.0.0.2"
roles:
- dns-recursive
- dns-block
web_ui: "http://10.0.0.2"
upstream_dns:
- "1.1.1.1"
- "8.8.8.8"

inventory/proxmox.yaml (new file, 141 lines)

@@ -0,0 +1,141 @@
# ===========================================
# PROXMOX INVENTORY — VMs and Containers
# ===========================================
# Virtual machines and containers running on Proxmox.
# TODO: Fill in real data via: pvesh get /nodes/<node>/qemu and /nodes/<node>/lxc
# ===========================================
proxmox_node: "proxmox"
# ===========================================
# Virtual Machines (VMs)
# ===========================================
vms:
homeassistant:
    description: "Home Assistant OS running as a VM"
status: "running"
    os_type: "qubes" # TODO: verify — the Proxmox ostype for HAOS is usually l26 (Linux)
vmid: "PLACEHOLDER"
resources:
cpu_cores: 4
ram_mb: 4096
disk_gb: 32
boot_order: "scsi0"
network:
bridge: "vmbr0"
      vlan: 10 # General-purpose network
volumes:
      # TrueNAS NFS mounts inside the VM
nfs_config: "/mnt/nfs/homeassistant/config"
nfs_media: "/mnt/nfs/media"
roles:
- home-automation
  # PLACEHOLDER — add more VMs here
# ===========================================
# Containers (LXC)
# ===========================================
containers:
dockerino:
    description: "Main container — Docker + Docker Compose (swarm mode)"
status: "running"
os_type: "debian"
vmid: "PLACEHOLDER"
resources:
cpu_cores: 4
ram_mb: 8192
disk_gb: 64
network:
ip: "10.0.0.50/24"
bridge: "vmbr0"
      vlan: 1 # Infra network
      gateway: "PLACEHOLDER" # ER605 IP
dns: "10.0.0.2"
volumes:
      # TrueNAS NFS mounts
nfs_picsur: "/mnt/nfs/picsur/data"
nfs_docker_volumes: "/mnt/nfs/docker-volumes"
docker:
version: "PLACEHOLDER"
compose_version: "PLACEHOLDER"
services:
- picsur
        - adguard # another instance?
- outline
- nginx-proxy
- homer
- bookstack
- flatnotes
- homebox
- speedtest
- omada-controller
- twingate
roles:
- docker-host
- reverse-proxy
- application-host
media:
    description: "Container — Jellyfin and media services"
status: "running"
os_type: "debian"
vmid: "PLACEHOLDER"
resources:
cpu_cores: 4
ram_mb: 8192
disk_gb: 128
network:
      ip: "PLACEHOLDER" # TODO: Discover the IP
bridge: "vmbr0"
vlan: 1
gateway: "PLACEHOLDER"
dns: "10.0.0.2"
volumes:
nfs_media: "/mnt/nfs/media"
docker:
version: "PLACEHOLDER"
services:
- jellyfin
roles:
- media-server
# ===========================================
# Storage Pools (Proxmox → TrueNAS)
# ===========================================
nfs_mounts:
nfs-media:
    server: "PLACEHOLDER" # TrueNAS IP
export: "/mnt/tank/media"
mount_point: "/mnt/nfs/media"
usage: "Jellyfin media files"
nfs-picsur:
server: "PLACEHOLDER"
export: "/mnt/tank/picsur"
mount_point: "/mnt/nfs/picsur"
usage: "Picsur image storage"
nfs-docker-volumes:
server: "PLACEHOLDER"
export: "/mnt/tank/docker-volumes"
mount_point: "/mnt/nfs/docker-volumes"
    usage: "Docker named volumes (persist across container recreation)"
nfs-homeassistant:
server: "PLACEHOLDER"
export: "/mnt/tank/homeassistant"
mount_point: "/mnt/nfs/homeassistant"
usage: "Home Assistant config"
# ===========================================
# Notes
# ===========================================
# To discover VM IPs (requires the QEMU guest agent in the VM):
#   pvesh get /nodes/<node>/qemu/<vmid>/agent/network-get-interfaces
# LXC containers have no guest agent — query them with:
#   pct exec <vmid> -- ip -4 addr
#
# To list all containers and VMs:
#   pvesh get /cluster/resources

inventory/truenas.yaml (new file, 163 lines)

@@ -0,0 +1,163 @@
# ===========================================
# TRUENAS INVENTORY — Disks, Pools, and Datasets
# ===========================================
# TODO: Get the real data via:
# 1. Web UI TrueNAS > Storage > Disks
# 2. CLI: zpool status, zfs list
# ===========================================
truenas:
hostname: "truenas"
version: "PLACEHOLDER"
ip: "PLACEHOLDER"
# ===========================================
# Physical Disks
# ===========================================
disks:
# TODO: lsblk -d -o NAME,SIZE,TYPE,ROTA | grep disk
# TODO: smartctl -a /dev/sdX
sdb:
size_tb: 4
type: "HDD"
model: "PLACEHOLDER"
serial: "PLACEHOLDER"
role: "data"
pool: "tank"
sdc:
size_tb: 4
type: "HDD"
model: "PLACEHOLDER"
serial: "PLACEHOLDER"
role: "data"
pool: "tank"
sdd:
size_tb: 4
type: "HDD"
model: "PLACEHOLDER"
serial: "PLACEHOLDER"
role: "data"
pool: "tank"
sde:
size_tb: 4
type: "HDD"
model: "PLACEHOLDER"
serial: "PLACEHOLDER"
role: "data"
pool: "tank"
sdf:
size_gb: 500
type: "SSD"
model: "PLACEHOLDER"
serial: "PLACEHOLDER"
role: "SLOG/Cache"
pool: "tank"
# ===========================================
# ZFS Pools
# ===========================================
pools:
tank:
    description: "Main pool — data for all services"
    vdev_type: "mirror" # mirror = 2x 4TB mirrored (RAID-1)
    # TODO: Discover the real layout — could be raidz or mirror
disks:
- /dev/sdb
- /dev/sdc
- /dev/sdd
- /dev/sde
compression: "lz4"
ashift: 12 # 4K sectors
atime: "off"
# ===========================================
# Datasets — what exists today
# ===========================================
datasets:
tank:
docker-volumes:
      description: "Dockerino's Docker volumes (NFS-mounted)"
mount_point: "/mnt/tank/docker-volumes"
share: "docker-volumes"
nfs_export: "*(rw,no_root_squash,subtree_check)"
used_by:
- dockerino:/mnt/nfs/docker-volumes
picsur:
      description: "Picsur data (stored images)"
mount_point: "/mnt/tank/picsur"
share: "picsur"
nfs_export: "*(rw,no_root_squash,subtree_check)"
used_by:
- dockerino:/mnt/nfs/picsur
media:
      description: "Media library — Jellyfin"
mount_point: "/mnt/tank/media"
share: "media"
nfs_export: "*(rw,no_root_squash,subtree_check)"
used_by:
- media:/mnt/nfs/media
- homeassistant:/mnt/nfs/media
homeassistant:
      description: "Home Assistant config"
mount_point: "/mnt/tank/homeassistant"
share: "homeassistant"
nfs_export: "*(rw,no_root_squash,subtree_check)"
used_by:
- homeassistant:/mnt/nfs/homeassistant
backup:
      description: "Periodic backups"
mount_point: "/mnt/tank/backup"
snapshot_enabled: true
snapshot_schedule: "daily"
# ===========================================
# SMB Shares (if TrueNAS exports via SMB)
# ===========================================
smb_shares:
  # TODO: midclt call sharing.smb.query (or Web UI > Shares)
media:
dataset: "tank/media"
share_name: "media"
    description: "Media library"
picsur:
dataset: "tank/picsur"
share_name: "picsur"
    description: "Picsur data"
# ===========================================
# NFS Exports
# ===========================================
nfs_exports:
# TODO: cat /etc/exports
tank:
- path: "/mnt/tank/docker-volumes"
clients: "10.0.0.50" # Dockerino
options: "rw,no_root_squash,subtree_check"
- path: "/mnt/tank/picsur"
clients: "10.0.0.50" # Dockerino
options: "rw,no_root_squash,subtree_check"
- path: "/mnt/tank/media"
clients: "10.0.0.60,10.0.0.70" # Media VM, HA VM
options: "rw,no_root_squash,subtree_check"
- path: "/mnt/tank/homeassistant"
clients: "10.0.0.70" # HA VM
options: "rw,no_root_squash,subtree_check"
# ===========================================
# Notes
# ===========================================
# To discover the real configuration:
# Web UI: Storage > Disks
# CLI: zpool status -v
# CLI: zfs list -o name,mountpoint,used,available
# CLI: cat /etc/exports