Proxmox Storage Audit Report

Generated: 2026-02-08


Executive Summary

The Proxmox cluster consists of 3 nodes with a mixture of local and shared NFS storage. Total capacity is ~17 TB, with significant redundancy across nodes. Current utilization varies widely by node.

  • proxmox-00: High root-filesystem utilization (84.5%), extensive container deployment
  • proxmox-01: Docker-focused; dlx-docker storage at 81.1% capacity
  • proxmox-02: Lowest utilization; 2 VMs and 1 active container

Physical Hardware

proxmox-00 (192.168.200.10)

NAME    SIZE    TYPE
loop0    16G    loop
loop1     4G    loop
loop2   100G    loop
loop3   100G    loop
loop4    16G    loop
loop5   100G    loop
loop6    32G    loop
loop7   100G    loop
loop8   100G    loop
sda     1.8T    disk  → /mnt/pve/dlx-sda (1.8TB dir)
sdb     1.8T    disk  → NFS mount (nfs-sdd)
sdc     1.8T    disk  → NFS mount (nfs-sdc)
sdd     1.8T    disk  → NFS mount (nfs-sde)
sde     1.8T    disk  → /mnt/dlx-nfs-sde (1.8TB NFS)
sdf   931.5G    disk  → dlx-sdf4 (785GB LVM)
sdg       0B    disk  → (unused/not configured)
sr0    1024M    rom   → (CD-ROM)

proxmox-01 (192.168.200.11)

NAME      SIZE      TYPE
loop0     400G      loop
loop1     400G      loop
loop2     100G      loop
sda     953.9G      disk  → /mnt/pve/dlx-docker (718GB dir, 81% full)
sdb     680.6G      disk  → (appears unused, no mount)

proxmox-02 (192.168.200.12)

NAME        SIZE      TYPE
loop0        32G      loop
sda         3.6T      disk  → NFS mount (nfs-sdb-02)
sdb         3.6T      disk  → /mnt/dlx-nfs-sdb-02 (3.6TB NFS)
nvme0n1   931.5G      disk  → /mnt/pve/dlx-data (670GB dir, 10% full)

Storage Backend Configuration

Shared NFS Storage (Accessible from all nodes)

Storage         Type  Total    Used     Available  % Used  Content                                                 Shared
dlx-nfs-sdb-02  NFS   3.9 TB   2.9 GB   3.7 TB      0.07%  images, rootdir, backup                                 yes
dlx-nfs-sdc-00  NFS   1.9 TB   139 GB   1.7 TB      7.47%  images, rootdir                                         yes
dlx-nfs-sdd-00  NFS   1.9 TB   12 GB    1.8 TB      0.63%  iso, vztmpl, rootdir, snippets, backup, images, import  yes
dlx-nfs-sde-00  NFS   1.9 TB   54 GB    1.7 TB      2.83%  iso, vztmpl, rootdir, snippets, backup, images, import  yes
TOTAL NFS       -     ~9.7 TB  ~209 GB  ~8.7 TB     ~2.2%  -                                                       -
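The TOTAL row can be re-derived from the four per-backend rows. A minimal check (inputs are the already-rounded figures above, so the sums land slightly under the reported ~9.7 TB / ~209 GB):

```python
# Re-derive the TOTAL NFS row from the per-backend rows above.
# Totals in TB, used in GB, as reported (rounded) by `pvesm status`.
nfs_backends = {
    "dlx-nfs-sdb-02": {"total_tb": 3.9, "used_gb": 2.9},
    "dlx-nfs-sdc-00": {"total_tb": 1.9, "used_gb": 139.0},
    "dlx-nfs-sdd-00": {"total_tb": 1.9, "used_gb": 12.0},
    "dlx-nfs-sde-00": {"total_tb": 1.9, "used_gb": 54.0},
}

total_tb = sum(b["total_tb"] for b in nfs_backends.values())
used_gb = sum(b["used_gb"] for b in nfs_backends.values())
pct_used = used_gb / (total_tb * 1000) * 100

print(f"~{total_tb:.1f} TB total, ~{used_gb:.0f} GB used (~{pct_used:.1f}%)")
```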

Local Storage by Node

proxmox-00 Storage

Storage    Type     Status    Total   Used    Available  % Used  Notes
dlx-sda    dir      ✓ active  1.9 TB  61 GB   1.8 TB      3.3%   Local dir storage
dlx-sdb    zfspool  ✓ active  1.9 TB  4.2 GB  1.9 TB      0.2%   ZFS pool
dlx-sdf4   lvm      ✓ active  785 GB  157 GB  610 GB     20.5%   LVM storage
local      dir      ✓ active  62 GB   52 GB   6.3 GB     84.5%   ⚠️ CRITICAL: root FS nearly full
local-lvm  lvmthin  ✓ active  116 GB  0 GB    116 GB      0.0%   Thin provisioning pool

proxmox-01 Storage

Storage     Type     Status    Total   Used    Available  % Used  Notes
dlx-docker  dir      ✓ active  718 GB  568 GB  97 GB      81.1%   ⚠️ HIGH: Docker container storage
local       dir      ✓ active  62 GB   42 GB   15 GB      69.5%   Template storage
local-lvm   lvmthin  ✓ active  116 GB  0 GB    116 GB      0.0%   Thin provisioning pool

proxmox-02 Storage

Storage    Type     Status    Total   Used   Available  % Used  Notes
dlx-data   dir      ✓ active  702 GB  63 GB  602 GB      9.1%   NVMe-backed (fast)
local      dir      ✓ active  92 GB   43 GB  44 GB      47.2%   Template/OS storage
local-lvm  lvmthin  ✓ active  160 GB  0 GB   160 GB      0.0%   Thin provisioning pool

Disabled Storage (not currently in use)

Storage     Type     Node(s)                 Reason
dlx-docker  dir      proxmox-00, proxmox-02  Disabled on these nodes
dlx-data    dir      proxmox-00, proxmox-01  Disabled on these nodes
dlx-sda     dir      proxmox-01              Disabled
dlx-sdb     zfspool  proxmox-01, proxmox-02  Disabled on these nodes
dlx-sdf4    lvm      proxmox-01, proxmox-02  Disabled on these nodes

Container & VM Allocation

proxmox-00: Infrastructure Hub (15 LXC Containers, 0 VMs)

Running (10):

  1. dlx-postgres (103) - PostgreSQL database

    • Allocated: 100 GB | Used: 2.8 GB | Mem: 16 GB
  2. dlx-gitea (102) - Git hosting

    • Allocated: 100 GB | Used: 5.7 GB | Mem: 8 GB
  3. dlx-hiveops (112) - Application

    • Allocated: 100 GB | Used: 3.7 GB | Mem: 4 GB
  4. dlx-kafka (113) - Message broker

    • Allocated: 31 GB | Used: 2.2 GB | Mem: 4 GB
  5. dlx-redis-01 (115) - Cache

    • Allocated: 100 GB | Used: 81 GB | Mem: 8 GB
  6. dlx-ansible (106) - Ansible control

    • Allocated: 16 GB | Used: 3.7 GB | Mem: 4 GB
  7. dlx-pihole (100) - DNS/Ad-block

    • Allocated: 16 GB | Used: 2.6 GB | Mem: 4 GB
  8. dlx-npm (101) - Nginx Proxy Manager

    • Allocated: 4 GB | Used: 2.4 GB | Mem: 4 GB
  9. dlx-mongo-01 (111) - MongoDB

    • Allocated: 100 GB | Used: 7.6 GB | Mem: 8 GB
  10. dlx-smartjournal (114) - Journal Application

    • Allocated: 157 GB | Used: 54 GB | Mem: 33 GB

Stopped (5):

  • dlx-wireguard (105) - 32 GB allocated
  • dlx-mysql-02 (108) - 200 GB allocated
  • dlx-mattermost (107) - 32 GB allocated
  • dlx-mysql-03 (109) - 200 GB allocated
  • dlx-nocodb (116) - 100 GB allocated

Total Allocation: 1.8 TB | Running Utilization: ~172 GB


proxmox-01: Docker & Services (14 LXC Containers, 0 VMs)

Running (3):

  1. dlx-docker (200) - Docker host

    • Allocated: 421 GB | Used: 36 GB | Mem: 16 GB
  2. dlx-sonar (202) - SonarQube analysis

    • Allocated: 422 GB | Used: 354 GB | Mem: 16 GB ⚠️ HEAVY DISK USER
  3. dlx-odoo (201) - ERP system

    • Allocated: 100 GB | Used: 3.7 GB | Mem: 16 GB

Stopped (11):

  • dlx-swarm-01/02/03 (210, 211, 212) - 65 GB each
  • dlx-snipeit (203) - 50 GB
  • dlx-fleet (206) - 60 GB
  • dlx-coolify (207) - 50 GB
  • dlx-kube-01/02/03 (215-217) - 50 GB each
  • dlx-www (204) - 32 GB
  • dlx-svn (205) - 100 GB

Total Allocation: 1.7 TB | Running Utilization: ~393 GB


proxmox-02: Development & Testing (2 VMs, 1 LXC Container)

Running:

  1. dlx-www (303, LXC) - Web services

    • Allocated: 31 GB | Used: 3.2 GB | Mem: 2 GB

Stopped (2 VMs):

  1. dlx-atm-01 (305) - ATM application VM

    • Allocated: 8 GB (max disk 0)
  2. dlx-development (306) - Dev environment VM

    • Allocated: 160 GB | Mem: 16 GB

Total Allocation: 199 GB | Running Utilization: ~3.2 GB


Storage Mapping & Usage Patterns

Shared NFS Mounts

All Nodes can access:
├── dlx-nfs-sdb-02  → Backup/images (3.9 TB) - 0.07% used
├── dlx-nfs-sdc-00  → Images/rootdir (1.9 TB) - 7.47% used
├── dlx-nfs-sdd-00  → Templates/ISO/backup (1.9 TB) - 0.63% used
└── dlx-nfs-sde-00  → Templates/ISO/images (1.9 TB) - 2.83% used

Node-Specific Storage

proxmox-00 (Control Hub):
├── local (62 GB) ⚠️ CRITICAL: 84.5% FULL
├── dlx-sda (1.9 TB) - 3.3% used
├── dlx-sdb ZFS (1.9 TB) - 0.2% used
├── dlx-sdf4 LVM (785 GB) - 20.5% used
└── local-lvm (116 GB) - 0% used

proxmox-01 (Docker/Services):
├── local (62 GB) - 69.5% used
├── dlx-docker (718 GB) ⚠️ HIGH: 81.1% USED
└── local-lvm (116 GB) - 0% used

proxmox-02 (Development):
├── local (92 GB) - 47.2% used
├── dlx-data (702 GB) - 9.1% used (NVME, fast)
└── local-lvm (160 GB) - 0% used

Capacity & Utilization Summary

Metric           Value                Status
Total Capacity   ~17 TB               ✓ Adequate
Total Used       ~1.3 TB              ✓ 7.6% of capacity
Total Available  ~15.7 TB             ✓ Healthy
Shared NFS       9.7 TB (2.2% used)   ✓ Excellent
Local Storage    7.3 TB (18.3% used)  ⚠️ Mixed
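The headline figures hang together arithmetically; a quick check using the totals from the table above:

```python
# Sanity-check the utilization summary: used / capacity, both in TB.
total_capacity_tb = 17.0
total_used_tb = 1.3

pct_used = total_used_tb / total_capacity_tb * 100
available_tb = total_capacity_tb - total_used_tb

print(f"~{pct_used:.1f}% used, ~{available_tb:.1f} TB available")
```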

Critical Issues & Recommendations

🔴 CRITICAL: proxmox-00 Root Filesystem

Issue: / (root) is 84.5% full (52.6 GB of 62 GB)

Impact:

  • System may become unstable
  • Package installation may fail
  • Logs may stop being written

Recommendation:

  1. Clean up old logs: journalctl --vacuum-time=30d
  2. Check for old snapshots/backups
  3. Consider moving /var to separate storage
  4. Monitor closely for growth
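Before deleting anything, it helps to see what is actually consuming the root filesystem. A minimal sketch of a largest-files scan (generic helper, not part of the cluster tooling; the /var/log path is only an example starting point):

```python
import heapq
import os

def largest_files(root, n=10):
    """Return the n largest files under root as (size_bytes, path) pairs."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # skip files that vanish or are unreadable
    return heapq.nlargest(n, sizes)

# Example: inspect a usual space hog on a Proxmox root filesystem.
for size, path in largest_files("/var/log", n=5):
    print(f"{size / 1024**2:8.1f} MiB  {path}")
```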

🟠 HIGH PRIORITY: proxmox-01 dlx-docker

Issue: dlx-docker storage at 81.1% capacity (568 GB of 718 GB)

Impact:

  • Limited room for container growth
  • Risk of running out of space during operations

Recommendation:

  1. Audit container disk usage: docker ps -a --size --format "{{.Names}}: {{.Size}}"
  2. Remove unused images/layers
  3. Consider expanding partition or migrating data
  4. Set up monitoring for capacity

🟠 HIGH PRIORITY: proxmox-01 dlx-sonar

Issue: SonarQube using 354 GB of its 422 GB allocation (~84%)

Impact:

  • Large analysis database
  • May need separate storage strategy

Recommendation:

  1. Review SonarQube retention policies
  2. Archive old analysis data
  3. Consider separate backup strategy

⚠️ Medium Priority: Storage Inconsistency

Issue: Disabled storage backends across nodes

Backend         Disabled on     Notes
dlx-docker      proxmox-00, 02  Only enabled on 01
dlx-data        proxmox-00, 01  Only enabled on 02
dlx-sda         proxmox-01      Only enabled on 00
dlx-sdb (ZFS)   proxmox-01, 02  Only enabled on 00
dlx-sdf4 (LVM)  proxmox-01, 02  Only enabled on 00

Recommendation:

  1. Document why each backend is disabled per node
  2. Standardize storage configuration across cluster
  3. Consider cluster-wide storage policy
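A starting point for step 1 is reading the cluster's /etc/pve/storage.cfg and flagging backends that carry a nodes restriction. A minimal parsing sketch, assuming the standard "type: id" header plus indented "key value" property format; the SAMPLE text is illustrative, not the cluster's real config:

```python
# Minimal sketch: parse storage.cfg-style text and flag backends restricted
# to a subset of nodes via the `nodes` option. SAMPLE is made-up data.
SAMPLE = """\
dir: dlx-data
\tpath /mnt/pve/dlx-data
\tcontent images,rootdir
\tnodes proxmox-02

nfs: dlx-nfs-sdd-00
\texport /srv/sdd
\tserver 192.168.200.10
\tcontent images,rootdir
"""

def parse_storage_cfg(text):
    backends = {}
    current = None
    for line in text.splitlines():
        if not line.strip():
            continue
        if not line[0].isspace():            # header line: "<type>: <id>"
            stype, sid = line.split(":", 1)
            current = backends.setdefault(sid.strip(), {"type": stype.strip()})
        elif current is not None:            # property line: "<key> <value>"
            key, _, value = line.strip().partition(" ")
            current[key] = value
    return backends

for sid, props in parse_storage_cfg(SAMPLE).items():
    if "nodes" in props:
        print(f"{sid}: restricted to {props['nodes']}")
```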

⚠️ Medium Priority: Container Lifecycle

Issue: 16 containers are stopped but still allocating space (~1.2 TB total)

Recommendation:

  1. Audit stopped containers (dlx-swarm-*, dlx-kube-*, etc.)
  2. Delete unused containers to reclaim space
  3. Document intended purpose of stopped containers

Recommendations Summary

Immediate (Next week)

  1. Compress logs on proxmox-00 root filesystem
  2. Audit dlx-docker usage and remove unused images
  3. Monitor proxmox-01 dlx-docker capacity

Short-term (1-2 months)

  1. Expand dlx-docker partition or migrate high-usage containers
  2. Archive SonarQube data or increase disk allocation
  3. Clean up stopped containers or document their retention

Long-term (3-6 months)

  1. Implement automated capacity monitoring
  2. Standardize storage backend configuration across cluster
  3. Establish storage lifecycle policies (snapshots, backups, retention)
  4. Consider tiered storage strategy (fast NVME vs. slow SATA)

Storage Performance Tiers

Based on hardware analysis:

Tier              Storage               Speed    Use Case
Tier 1 (Fast)     nvme0n1 (proxmox-02)  NVMe     OS, critical services
Tier 2 (Medium)   ZFS/LVM pools         HDD/SSD  VMs, container data
Tier 3 (Shared)   NFS mounts            Network  Backups, shared data
Tier 4 (Archive)  Large local dirs      HDD      Infrequently accessed data

Optimization Opportunity: Align hot data to Tier 1, cold data to Tier 3


Appendix: Raw Storage Stats

Storage IDs & Content Types

  • images - VM/container disk images
  • rootdir - Root filesystems for LXC containers
  • backup - vzdump backup archives
  • iso - ISO images
  • vztmpl - LXC container templates
  • snippets - Config snippets (hook scripts, cloud-init)
  • import - Guest images imported from other hypervisors

Size Conversions

  • 1 TiB ≈ 1,099.5 GB (decimal)
  • 1 GiB ≈ 1,073.7 MB (decimal)
  • All sizes in this report use binary (base-1024) units
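The conversions follow directly from the 1024-based unit definitions:

```python
# Binary units (TiB/GiB) expressed in decimal GB/MB.
TIB_BYTES = 1024 ** 4
GIB_BYTES = 1024 ** 3

tib_in_gb = TIB_BYTES / 1000 ** 3   # tebibyte in decimal gigabytes
gib_in_mb = GIB_BYTES / 1000 ** 2   # gibibyte in decimal megabytes

print(f"1 TiB = {tib_in_gb:.1f} GB, 1 GiB = {gib_in_mb:.1f} MB")
```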

Report Generated: 2026-02-08 via Ansible
Data Source: pvesm status and pvesh API
Next Audit Recommended: 2026-03-08