
# Container Mount Analysis - Storage Mapping to Proxmox ZPools

> [!abstract] Overview
> This document maps every container volume mount configured in the Docker Swarm stack files to its underlying physical storage, identifying whether each mount is on local disk or NFS, and which Proxmox zpool backs that disk.

## Infrastructure Summary

### Proxmox Hypervisors

| Host | IP | VMs Hosted |
| --- | --- | --- |
| proxmox-1 | 100.1.100.10 | 201 (homenet1), 210 (homenet2), 202 (homenet3), 204 (homenet4), 205 (homenet5), 106 (OMV) |
| proxmox-2 | 100.1.100.15 | 102 (librenms), 103 (unifi), 111 (ubuntu-server) |

### Docker Swarm Nodes

| Node | IP | VM ID | Proxmox Host | ZPool (Root Disk) | ZPool (Data Disk) |
| --- | --- | --- | --- | --- | --- |
| homenet-ubuntu1 | 100.1.100.201 | 201 | proxmox-1 | ssdpool (128GB) | zfs3 (50GB) |
| homenet-ubuntu2 | 100.1.100.202 | 210 | proxmox-1 | ssdpool (128GB) | zfs3 (50GB) |
| homenet-ubuntu3 | 100.1.100.203 | 202 | proxmox-1 | ssdpool | zfs3 |
| homenet-ubuntu4 | 100.1.100.204 | 204 | proxmox-1 | ssdpool (102GB) | zfs3 (50GB) |
| homenet-ubuntu5 | 100.1.100.205 | 205 | proxmox-1 | ssdpool (101GB) | zfs3 (64GB) |

### NFS Server

| Server | IP | VM ID | Function |
| --- | --- | --- | --- |
| OMV | 100.1.100.199 | 106 | NFS exports for all shared storage |

## Storage Type Mapping

### Path Prefixes and Storage Types

| Mount Path Prefix | Storage Type | Physical Location | Proxmox ZPool |
| --- | --- | --- | --- |
| /homenet_config/ | Local bind mount | ./config/ dir on each node | ssdpool (VM root disk) |
| /homenet_data/ | Local symlink | /data/ on each node | zfs3 (VM data disk) |
| /mnt/db/ | Local disk | Direct mount on node 201 | ssdpool or zfs3 |
| /nfs_data/ | NFS | OMV: /HomeNetServices/ | OMV VM disk on proxmox-1 |
| /nfs_media/ | NFS | OMV: /SharedStuff/ | OMV VM disk on proxmox-1 |
| /nfs_cams/ | NFS | OMV: /CameraFootage/ | OMV VM disk on proxmox-1 |
| /nfs_personal/ | NFS | OMV: /PersonalData/ | OMV VM disk on proxmox-1 |
| /var/run/docker.sock | Socket | Docker daemon | ssdpool (VM root disk) |
| /proc, /sys, / | System | Host filesystem | ssdpool (VM root disk) |
| tmpfs mounts | RAM | In-memory | N/A (RAM) |
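
The four NFS prefixes resolve to exports on OMV (100.1.100.199). A sketch of the corresponding client-side `/etc/fstab` entries on each node — the export and mount paths come from the table above, but the mount options are assumptions, not copied from the nodes:

```
100.1.100.199:/HomeNetServices  /nfs_data      nfs  defaults,_netdev,nofail  0  0
100.1.100.199:/SharedStuff      /nfs_media     nfs  defaults,_netdev,nofail  0  0
100.1.100.199:/CameraFootage    /nfs_cams      nfs  defaults,_netdev,nofail  0  0
100.1.100.199:/PersonalData     /nfs_personal  nfs  defaults,_netdev,nofail  0  0
```

`_netdev,nofail` keeps a node bootable when the OMV VM is down, which matters here because the NFS server is itself a guest on the same hypervisor.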

## Live Disk Usage (Verified 2026-01-17)

> [!tip] Regenerate Data
> Run `./sh-verify-storage-mapping.sh` to regenerate this data.

### NFS Mount Usage (All Nodes)

| Mount Point | Size | Used | Available | Use% | Source |
| --- | --- | --- | --- | --- | --- |
| /nfs_data (HomeNetServices) | 3.0T | 2.8T | 254G | 92% | 100.1.100.199 |
| /nfs_media (SharedStuff) | 3.0T | 2.8T | 254G | 92% | 100.1.100.199 |
| /nfs_media_lib (MediaLibrary) | 3.0T | 2.8T | 254G | 92% | 100.1.100.199 |
| /nfs_service (ServiceData) | 3.0T | 2.8T | 254G | 92% | 100.1.100.199 |
| /nfs_personal (PersonalData) | 503G | 380G | 124G | 76% | 100.1.100.199 |
| /nfs_cams (CameraFootage) | 69G | 27G | 42G | 40% | 100.1.100.199 |

### Local Storage by Node

| Node | Filesystem | Size | Used | Avail | Use% | Mount |
| --- | --- | --- | --- | --- | --- | --- |
| 201 (Manager) | /dev/sdb2 | 98G | 82G | 12G | 88% | / |
| 201 | /dev/sdf | 59G | 46G | 13G | 79% | /home |
| 202 | /dev/sda2 | 125G | 95G | 25G | 80% | / |
| 202 | /dev/sdb1 | 50G | 47G | 4.9M | 100% | /data |
| 203 | /dev/sda2 | 98G | 39G | 54G | 42% | / |
| 203 | /dev/sdb1 | 50G | 47G | 880K | 100% | /data |
| 204 | /dev/sda2 | 98G | 48G | 45G | 52% | / |
| 204 | /dev/sdc | 25G | 18G | 7.2G | 71% | /home |
| 204 | /dev/sdb1 | 50G | 42G | 5.6G | 89% | /data |
| 205 | /dev/sda2 | 98G | 58G | 35G | 63% | / |
| 205 | /dev/sdc | 30G | 23G | 6.8G | 77% | /home |
| 205 | /dev/sdb1 | 63G | 51G | 9.2G | 85% | /data |

> [!danger] Critical Alert
> Nodes 202 and 203 have `/data` at 100% usage and require immediate attention!
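
This class of problem can be caught before a mount actually hits 100%. A small helper — hypothetical, not part of `sh-verify-storage-mapping.sh` — that flags any filesystem at or above a usage threshold:

```shell
# Hypothetical helper (not from the repo): list mounts at or above a given
# used-percentage so a filling /data disk is caught early.
flag_full_mounts() {
    # $1: threshold in percent (defaults to 90)
    df -P | awk -v t="${1:-90}" '
        NR > 1 {
            use = $5
            sub(/%/, "", use)
            if (use + 0 >= t) printf "%s at %s%% used (%s)\n", $6, use, $1
        }'
}

# Example: warn on anything at 90% or more, as /data on 202/203 is now.
flag_full_mounts 90
```

Run from cron on each node (or wired into the existing uptime-kuma instance as a push check), this would have surfaced the `/data` exhaustion well before writes started failing.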

### Proxmox ZPool Utilization

| Host | ZPool | Size | Allocated | Free | Capacity | Health |
| --- | --- | --- | --- | --- | --- | --- |
| proxmox-1 | ssdpool | 1.81T | 818G | 1.01T | 44% | ONLINE* |
| proxmox-1 | zfs2 | 1.81T | 113G | 1.70T | 6% | ONLINE |
| proxmox-1 | zfs3 | 7.25T | 4.74T | 2.51T | 65% | ONLINE |
| proxmox-2 | ssdpool | 888G | 151G | 737G | 17% | ONLINE |
| proxmox-2 | zfs3 | 7.25T | 5.39T | 1.86T | 74% | ONLINE |

> [!warning] ZPool Errors
> proxmox-1 `ssdpool` has 12 errors detected (the `*` in the table above); run `zpool status ssdpool` for details.

### VM Disk Assignments (Proxmox)

| VM ID | Node Name | Root Disk (scsi0) | Data Disk (scsi1) | Additional |
| --- | --- | --- | --- | --- |
| 201 | homenet-ubuntu1 | ssdpool:128G | zfs3:50G | influxdb:30G, mariadb:15G, postgres:10G, home:60G |
| 210 | homenet-ubuntu2 | ssdpool:128G | zfs3:50G | - |
| 202 | homenet-ubuntu3 | ssdpool (default) | zfs3 | - |
| 204 | homenet-ubuntu4 | ssdpool:102G | zfs3:50G | home:25G |
| 205 | homenet-ubuntu5 | ssdpool:101G | zfs3:64G | home:30G |
| 106 | OMV NFS Server | ssdpool:32G | zfs3:3T | cameras:70G, personal:512G |

## Complete Mount Inventory by Stack

### `stack-homenet1.yml` (Node 201 - Critical Data Layer)

| Service | Container Path | Host Path | Storage Type | ZPool |
| --- | --- | --- | --- | --- |
| influxdb | /var/lib/influxdb2 | /mnt/db/influxdb | Local | ssdpool/zfs3 (201) |
| influxdb | /etc/influxdb2 | /homenet_config/influxdb | Local | ssdpool (201) |
| mariadb | /config | /mnt/db/mariadb | Local | ssdpool/zfs3 (201) |
| db (PostgreSQL) | /var/lib/postgresql/data | /mnt/db/postgres | Local | ssdpool/zfs3 (201) |
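
Because the `/mnt/db` mounts exist only on node 201, these services must be pinned there. A sketch of the bind-mount plus placement pattern such a stack file would use — the image tag and constraint value are illustrative, not copied from the repo:

```yaml
# Sketch only: image tag and exact constraint assumed, not taken from the stack.
services:
  influxdb:
    image: influxdb:2.7
    volumes:
      - /mnt/db/influxdb:/var/lib/influxdb2      # local DB disk, node 201 only
      - /homenet_config/influxdb:/etc/influxdb2  # config on the ssdpool root disk
    deploy:
      placement:
        constraints:
          - node.hostname == homenet-ubuntu1     # pin to node 201
```

Without the placement constraint, Swarm could reschedule the database onto a node where `/mnt/db` does not exist and silently start with an empty bind-mount directory.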

### `stack-homenet2.yml` (Access Portals)

| Service | Container Path | Host Path | Storage Type | ZPool |
| --- | --- | --- | --- | --- |
| ddns | /updater/data | /homenet_data/ddns | Local | zfs3 (manager) |
| homer | /www/assets | /homenet_config/homer | Local | ssdpool (201) |
| homepage | /app/config | /homenet_config/homepage | Local | ssdpool (201) |
| homepage | /var/run/docker.sock | Docker socket | Local | ssdpool (201) |
| vaultwarden | /data | /homenet_data/vaultwarden | Local | zfs3 (bitwarden node) |

### `stack-homenet3.yml` (Node 203 - Surveillance)

| Service | Container Path | Host Path | Storage Type | ZPool |
| --- | --- | --- | --- | --- |
| ispy | /agent/Media/XML/ | /homenet_config/ispy | Local | ssdpool (203) |
| ispy | /agent/Media/WebServerRoot/Media/ | /nfs_cams/ispy | NFS | OMV disk |

### `stack-homenet4.yml` (Node 202/204 - Media & Apps)

| Service | Container Path | Host Path | Storage Type | ZPool |
| --- | --- | --- | --- | --- |
| uptime-kuma | /app/data | /homenet_data/uptime-kuma | Local | zfs3 (204) |
| satisfactory-server | /config | /homenet_config/satisfactory | Local | ssdpool (node) |
| palworld-dedicated-server | /palworld | /homenet_data/palworld | Local | zfs3 (node) |
| plex | /config | /homenet_config/plex | Local | ssdpool (202) |
| plex | /data/shared | /nfs_media | NFS | OMV disk |
| plex | /data/cameras | /nfs_cams | NFS | OMV disk |
| plex | /transcode | tmpfs | RAM | N/A |
| jellyfin | /config | /homenet_config/jellyfin | Local | ssdpool (202) |
| jellyfin | /data/media | /nfs_media | NFS | OMV disk |
| tautulli | /config | /homenet_config/tautulli | Local | ssdpool (node) |
| smokeping | /config | /homenet_config/smokeping | Local | ssdpool (node) |
| n8n | /home/node/.n8n | /homenet_data/n8n | Local | zfs3 (204) |

### `docker-compose.worker2.yml` (Node 202 - ARR Stack)

> [!info] Docker Compose
> The ARR stack runs via Docker Compose (not Swarm) because it needs `network_mode: service:wireguard`, which Swarm does not support.
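
The pattern the callout describes can be sketched as follows — images and options are illustrative, not copied from the repo; only the host paths come from the mount table:

```yaml
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard   # illustrative image
    cap_add:
      - NET_ADMIN
    volumes:
      - /homenet_config/wiregaurd:/config
  transmission:
    image: lscr.io/linuxserver/transmission
    # All transmission traffic exits via the wireguard container's network
    # namespace; Swarm services cannot express this, hence Compose.
    network_mode: service:wireguard
    volumes:
      - /homenet_config/transmission:/config
      - /nfs_media/Downloads:/downloads
      - /nfs_media/Torrents:/watch
```

A side effect of this mode is that dependent services publish no ports of their own; their UIs must be exposed through the `wireguard` container.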

| Service | Container Path | Host Path | Storage Type | ZPool |
| --- | --- | --- | --- | --- |
| wireguard | /config | /homenet_config/wiregaurd | Local | ssdpool (202) |
| transmission | /config | /homenet_config/transmission | Local | ssdpool (202) |
| transmission | /downloads | /nfs_media/Downloads | NFS | OMV disk |
| transmission | /watch | /nfs_media/Torrents | NFS | OMV disk |
| jackett | /config | /homenet_config/jackett | Local | ssdpool (202) |
| prowlarr | /config | /homenet_config/prowlarr | Local | ssdpool (202) |
| radarr | /config | /homenet_config/radarr | Local | ssdpool (202) |
| radarr | /downloads | /nfs_media/Downloads | NFS | OMV disk |
| radarr | /movies | /nfs_media/Movies | NFS | OMV disk |
| sonarr | /config | /homenet_config/sonarr | Local | ssdpool (202) |
| sonarr | /downloads | /nfs_media/Downloads | NFS | OMV disk |
| sonarr | /tv | /nfs_media/TV Shows | NFS | OMV disk |
| lidarr | /config | /homenet_config/lidarr | Local | ssdpool (202) |
| lidarr | /music | /nfs_media/Music | NFS | OMV disk |
| tdarr | /app/server | /homenet_config/tdarr/server | Local | ssdpool (202) |
| tdarr | /temp | /homenet_data/tdarr/transcode_cache | Local | zfs3 (202) |
| tdarr | /media | /nfs_media | NFS | OMV disk |
| clamav | /var/lib/clamav | /homenet_config/clamav | Local | ssdpool (202) |
| clamav | /scan | /nfs_media/Downloads | NFS (ro) | OMV disk |

### Photo Management Stacks

| Stack | Service | Container Path | Host Path | Storage Type |
| --- | --- | --- | --- | --- |
| immich | immich | /photos | /nfs_personal/Photos-Immich | NFS |
| immich | postgres14 | /var/lib/postgresql/data | /homenet_data/immich-db | Local |
| paperless | paperless | /consume | /nfs_personal/Documents/Paperless/consume | NFS |
| paperless | paperless | /media | /nfs_personal/Documents/Paperless/media | NFS |
| photoprism | photoprism | /photoprism/originals | /nfs_personal/Photos | NFS |
| librephotos | backend | /data | /nfs_personal/Photos | NFS |

## Summary Statistics

### Total Mounts by Storage Type

| Storage Type | Count | Description |
| --- | --- | --- |
| Local (ssdpool/zfs3) | ~110 | Config files, databases, app data |
| NFS | ~35 | Media, photos, documents |
| Named Volumes | ~12 | Docker-managed persistent storage |
| Socket Mounts | ~8 | Docker daemon access |
| System Mounts | ~15 | /proc, /sys, /rootfs (monitoring) |
| tmpfs | ~3 | RAM-based transcoding |

### Storage by Proxmox ZPool

| ZPool | Disk Layout | Size | Used For |
| --- | --- | --- | --- |
| ssdpool (proxmox-1) | 2x1TB SSD mirror | 1.81TB | VM 201-205 root disks, all local mounts |
| ssdpool (proxmox-2) | 960GB SSD | 888GB | Non-Docker VMs |
| zfs3 (proxmox-1) | 4TB+4TB stripe | 7.25TB | VM data disks (scsi1), databases |
| zfs3 (proxmox-2) | 4TB+4TB stripe | 7.25TB | Additional storage |
| OMV disks (proxmox-1) | VM 106 disks | ~3.5TB | All NFS exports |

## Key Findings

> [!success] Configuration Patterns
> 1. All container configs (`/homenet_config/*`) are stored on ssdpool (SSD-backed) for performance
> 2. Databases (MariaDB, PostgreSQL, InfluxDB) are on local ssdpool or zfs3 for low latency
> 3. Media files are served over NFS from the OMV server for centralized access
> 4. Camera footage is on a dedicated NFS mount with 69GB capacity
> 5. Personal data (photos, documents) is on a separate NFS mount for backup isolation
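
The config and data patterns rest on two per-node filesystem mappings: `/homenet_config` as a bind-mount source directory on the root disk, and `/homenet_data` as a symlink onto the `/data` disk. A hypothetical provisioning sketch — the helper name and the `/opt/homenet/config` location are assumptions; only the `/homenet_*` prefixes and `/data` appear in this document:

```shell
# Hypothetical provisioning helper (name and config location assumed; only
# /homenet_config, /homenet_data, and /data come from the mapping tables).
setup_homenet_paths() {
    prefix="${1:-}"   # optional staging prefix for dry runs and tests
    mkdir -p "$prefix/data" "$prefix/opt/homenet/config"
    # /homenet_data -> the dedicated /data disk (scsi1)
    ln -sfn "$prefix/data" "$prefix/homenet_data"
    # /homenet_config -> per-node config directory on the root disk
    ln -sfn "$prefix/opt/homenet/config" "$prefix/homenet_config"
}
```

Run as root with no argument on each node; services then reference the stable `/homenet_*` prefixes regardless of which physical disk backs them.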

> [!warning] Architecture Note
> All Docker VMs run on proxmox-1 (100.1.100.10), and the OMV NFS server is also VM 106 on proxmox-1, so that single host backs both compute and all shared storage.


## Storage Flow Diagram

```mermaid
graph TB
    subgraph "proxmox-1 (100.1.100.10)"
        SSDPOOL1[ssdpool 1.81TB<br/>VM Root Disks]
        ZFS3_1[zfs3 7.25TB<br/>VM Data Disks]
    end

    subgraph "Docker Swarm Nodes"
        N201[Node 201<br/>Manager]
        N202[Node 202<br/>Media/ARR]
        N203[Node 203<br/>Surveillance]
        N204[Node 204<br/>Dashboards]
        N205[Node 205<br/>General]
    end

    subgraph "VM 106 - OMV NFS Server"
        NFS_DATA["/HomeNetServices<br/>3.0TB - 92% used"]
        NFS_MEDIA["/SharedStuff<br/>3.0TB - 92% used"]
        NFS_CAMS["/CameraFootage<br/>69GB - 40% used"]
        NFS_PERSONAL["/PersonalData<br/>503GB - 76% used"]
    end

    SSDPOOL1 --> N201
    SSDPOOL1 --> N202
    SSDPOOL1 --> N203
    SSDPOOL1 --> N204
    SSDPOOL1 --> N205

    ZFS3_1 --> N201
    ZFS3_1 --> N202
    ZFS3_1 --> N203
    ZFS3_1 --> N204
    ZFS3_1 --> N205

    NFS_DATA --> N201
    NFS_DATA --> N202
    NFS_DATA --> N203
    NFS_DATA --> N204
    NFS_DATA --> N205

    NFS_MEDIA --> N202
    NFS_CAMS --> N203
    NFS_PERSONAL --> N201
```

## Verification Commands

```bash
# Check NFS mounts on any Docker node
df -h | grep nfs

# Verify symlinks
ls -la /homenet_config /homenet_data

# Check Proxmox zpools (SSH to the Proxmox host)
ssh root@100.1.100.10 'zpool list'

# Check VM disk assignments on Proxmox
ssh root@100.1.100.10 'qm config 201'

# Verify NFS exports from OMV
ssh 100.1.100.201 'showmount -e 100.1.100.199'

# Run the full verification script
./sh-verify-storage-mapping.sh
```

- [[NFS-Architecture|NFS Storage Architecture]]
- [[Storage-Critical-Warning|Storage Critical Warning]]
- [[Volume-Management|Docker Volume Management]]
- [[../01-Infrastructure/Cluster-Overview|Cluster Overview]]
- [[../06-Troubleshooting/NFS-Mount-Issues|NFS Troubleshooting]]

**Last Updated:** 2026-01-17 | **Status:** Active | **Live Metrics Verified:** 2026-01-17 via `sh-verify-storage-mapping.sh`