
NFS Storage Architecture

> [!warning] Storage Critical Warning
> Multiple NFS mounts are at 92% capacity. See [[Storage-Critical-Warning|Storage Critical Warning]] for details.

Overview

All Docker Swarm nodes mount persistent storage from a centralized OpenMediaVault (OMV) NFS server. This provides shared storage for service data, media libraries, camera recordings, and personal files.

NFS Server: 100.1.100.199 (OpenMediaVault)

> [!danger] Single Point of Failure
> The OMV NFS server is a critical infrastructure dependency. If it fails, all nodes lose access to persistent storage, causing widespread service failures.

NFS Mounts on All Nodes

Each of the 5 Docker nodes has the following NFS mounts:

| Mount Point | NFS Export | Total Size | Used | Available | Use% | Purpose |
|---|---|---|---|---|---|---|
| /nfs_data | 100.1.100.199:/HomeNetServices | 3.0TB | 2.8TB | 254GB | 92% | Service configurations & data |
| /nfs_media | 100.1.100.199:/SharedStuff | 3.0TB | 2.8TB | 254GB | 92% | Plex media library |
| /nfs_media_lib | 100.1.100.199:/MediaLibrary | 3.0TB | 2.8TB | 254GB | 92% | Alternative media library mount |
| /nfs_service | 100.1.100.199:/ServiceData | 3.0TB | 2.8TB | 254GB | 92% | Service-specific data |
| /nfs_cams | 100.1.100.199:/CameraFootage | 69GB | 27GB | 42GB | 40% | Camera recordings (iSpy) |
| /nfs_personal | 100.1.100.199:/PersonalData | 503GB | 379GB | 124GB | 76% | Personal files |

Storage Layout Diagram

```mermaid
graph TB
    OMV[OMV NFS Server<br/>100.1.100.199]

    subgraph "NFS Exports"
        VOL1[HomeNetServices<br/>3TB - 92% used]
        VOL2[SharedStuff<br/>3TB - 92% used]
        VOL3[MediaLibrary<br/>3TB - 92% used]
        VOL4[ServiceData<br/>3TB - 92% used]
        VOL5[CameraFootage<br/>69GB - 40% used]
        VOL6[PersonalData<br/>503GB - 76% used]
    end

    subgraph "Docker Nodes"
        N1[Node 201<br/>Manager]
        N2[Node 202<br/>Worker]
        N3[Node 203<br/>Worker]
        N4[Node 204<br/>Worker]
        N5[Node 205<br/>Worker]
    end

    OMV --> VOL1
    OMV --> VOL2
    OMV --> VOL3
    OMV --> VOL4
    OMV --> VOL5
    OMV --> VOL6

    VOL1 --> N1
    VOL1 --> N2
    VOL1 --> N3
    VOL1 --> N4
    VOL1 --> N5

    VOL2 --> N1
    VOL2 --> N2
    VOL2 --> N3
    VOL2 --> N4
    VOL2 --> N5

    VOL5 --> N3
    VOL6 --> N1
```

Shared Storage Pools

> [!caution] Overlapping Mounts
> Multiple mount points (/nfs_data, /nfs_media, /nfs_media_lib, /nfs_service) appear to share the same underlying 3TB volume on the OMV server. Effective available space is ~254GB total, not 254GB per mount.

Evidence:
- All four mounts show identical size (3.0T)
- All show identical usage (2.8T used, 254G available)
- All show identical 92% utilization

Implications:
- Writing to /nfs_data consumes space from the shared pool
- Space exhaustion affects ALL dependent mounts
- Monitor aggregate usage, not individual mounts
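The "identical triples" evidence can be checked mechanically. A minimal sketch (`count_pool_signatures` is a hypothetical helper, not an existing script in this repo): it counts distinct size/used/avail signatures across the four service mounts, and a count of 1 is consistent with a single shared backing volume.

```bash
# count_pool_signatures: read "size used avail" lines on stdin and count
# distinct capacity signatures. One signature across all four service
# mounts suggests a single shared backing volume on the OMV server.
count_pool_signatures() {
  sort -u | wc -l | tr -d ' '
}

# On a live node, feed it real df output:
#   for m in /nfs_data /nfs_media /nfs_media_lib /nfs_service; do
#     df -P "$m" | awk 'NR==2 {print $2, $3, $4}'
#   done | count_pool_signatures

# Demo with the (identical) figures from the table above:
printf '%s\n' \
  '3145728000 2936012800 266338304' \
  '3145728000 2936012800 266338304' \
  '3145728000 2936012800 266338304' \
  '3145728000 2936012800 266338304' | count_pool_signatures   # prints 1
```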

For convenience, the manager node has local symlinks:

/homenet_config -> $(pwd)/config
/homenet_data   -> /nfs_data

Services use /homenet_data/ in volume mounts, which resolves to /nfs_data/.
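These links can be (re)created with `ln -sfn`, which replaces an existing link in place without following it. A sketch: the real commands run as root on the manager node, and the `/tmp` path below is purely a demo.

```bash
# On the manager node (requires root; $(pwd) is the repo checkout):
#   sudo ln -sfn "$(pwd)/config" /homenet_config
#   sudo ln -sfn /nfs_data /homenet_data

# Safe demo of the same idiom using a throwaway path in /tmp:
ln -sfn /nfs_data /tmp/homenet_data_demo
readlink /tmp/homenet_data_demo    # prints /nfs_data
```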

Service Data Storage

Critical Service Data Locations

Node 201 (Manager):

```
/nfs_data/influx-2.1.1/         # InfluxDB time-series data
/nfs_data/mariadb/              # MariaDB databases
/nfs_data/postgresql/           # PostgreSQL databases
/nfs_data/elasticsearch/        # Elasticsearch indices (when active)
/nfs_data/swarmpit/             # Swarmpit persistence
/nfs_data/redis/                # Redis cache
/nfs_data/grafana/              # Grafana dashboards
/nfs_data/prometheus/           # Prometheus TSDB
```

Node 202 (Worker - Media):

```
/nfs_media/Movies/              # Plex movies
/nfs_media/TV Shows/            # Plex TV shows
/nfs_media/Music/               # Plex music
/nfs_data/plex/                 # Plex metadata & config
/nfs_data/transmission/         # Torrent downloads
/nfs_data/immich/               # Immich photo library
/nfs_data/photoprism/           # PhotoPrism data
/nfs_data/librephotos/          # LibrePhotos data
```

Node 203 (Worker - Surveillance):

```
/nfs_cams/ispy/                 # iSpy camera recordings
```

All Nodes:

```
/homenet_config/<service>/      # Service configurations (synced)
```

Storage Capacity Analysis

Current Utilization (2026-01-11)

Critical Threshold Mounts (92%):
- /nfs_data - Service data
- /nfs_media - Media library
- /nfs_media_lib - Alt media mount
- /nfs_service - Service-specific data

Remaining capacity: ~254GB (shared across all four mounts)

Projected Time to Full

Based on typical usage patterns:

- Aggressive growth (500GB/month): ~2 weeks to full
- Moderate growth (100GB/month): ~2.5 months to full
- Conservative growth (50GB/month): ~5 months to full
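These projections are just the remaining headroom divided by the growth rate. A quick sketch (`months_to_full` is a hypothetical helper):

```bash
# months_to_full REMAINING_GB RATE_GB_PER_MONTH
# Divide remaining capacity by the monthly growth rate.
months_to_full() {
  awk -v r="$1" -v g="$2" 'BEGIN { printf "%.1f\n", r / g }'
}

remaining_gb=254   # shared-pool headroom from the capacity table
for rate in 500 100 50; do
  echo "${rate} GB/month: $(months_to_full "$remaining_gb" "$rate") months to full"
done
```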

> [!danger] Immediate Action Required
> With only 254GB remaining across critical mounts, storage exhaustion is imminent. See [[Storage-Critical-Warning|Storage Critical Warning]] for the mitigation plan.

Storage Consumers

Top 5 Space Consumers

```bash
# Identify large directories
du -h --max-depth=1 /nfs_data | sort -hr | head -20
```

Expected top consumers:
1. Plex metadata & cache (/nfs_data/plex/)
2. Photo libraries (Immich, PhotoPrism, LibrePhotos)
3. Database files (MariaDB, PostgreSQL, InfluxDB)
4. Prometheus TSDB (/nfs_data/prometheus/)
5. Elasticsearch indices (when active)

Media library consumers:
1. Movies (/nfs_media/Movies/)
2. TV Shows (/nfs_media/TV Shows/)
3. Music (/nfs_media/Music/)
4. Transmission downloads (/nfs_data/transmission/)

NFS Configuration

Mount Options

Typical mount options in /etc/fstab:

```
100.1.100.199:/HomeNetServices /nfs_data nfs defaults,_netdev 0 0
100.1.100.199:/SharedStuff /nfs_media nfs defaults,_netdev 0 0
100.1.100.199:/CameraFootage /nfs_cams nfs defaults,_netdev 0 0
```

Mount flags:
- defaults - rw, suid, dev, exec, auto, nouser, async
- _netdev - wait for the network before mounting
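For service data it can help to spell out the retry behavior rather than rely on `defaults`. An example entry (a sketch, not the deployed configuration; `hard`, `timeo=600`, and `retrans=2` are mount.nfs's usual TCP defaults made explicit, and `noatime` skips access-time writes - verify against the OMV export before adopting):

```
# /etc/fstab - example entry with explicit retry behavior
# hard: block and retry on server outage instead of returning I/O errors
100.1.100.199:/HomeNetServices /nfs_data nfs rw,hard,noatime,timeo=600,retrans=2,_netdev 0 0
```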

NFS Server Configuration

OMV Server: 100.1.100.199

Access the OMV web interface to manage:
- Export permissions
- Storage pools
- Disk configuration
- RAID/ZFS setup

Troubleshooting NFS Issues

Common Problems

1. Mount Not Available

```bash
# Check NFS server reachability
ping -c 3 100.1.100.199

# Show NFS exports from the server
showmount -e 100.1.100.199

# Verify mounts
df -h | grep nfs

# Remount if needed
sudo mount -a
```

2. Stale File Handle

```bash
# Lazy-unmount and remount
sudo umount -l /nfs_data
sudo mount -a

# Force remount all NFS
./sh-correct-mounts.sh
```

3. Permission Denied

```bash
# Check NFS export permissions on the OMV server
# Verify UID/GID matches (services use 1000:1000)
ls -la /nfs_data/
```
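The UID/GID comparison can be scripted. A sketch (`check_owner` is a hypothetical helper; on a node you would run it against `/nfs_data` and `1000:1000`, while the `/tmp` call is only a demo):

```bash
# check_owner PATH EXPECTED_UID:GID
# Compare a path's numeric owner with what the services and export expect.
check_owner() {
  local actual
  actual=$(stat -c '%u:%g' "$1" 2>/dev/null) || { echo "cannot stat $1"; return 1; }
  if [ "$actual" = "$2" ]; then
    echo "OK: $1 owned by $2"
  else
    echo "MISMATCH: $1 is $actual, expected $2"
  fi
}

# On a node: check_owner /nfs_data 1000:1000
check_owner /tmp 0:0
```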

4. Performance Issues

```bash
# Check network latency to the NFS server
ping -c 10 100.1.100.199

# Monitor NFS mount statistics
nfsstat -m

# Check for network bottlenecks
iftop
```

Verification Script

```bash
#!/bin/bash
# Verify all NFS mounts

MOUNTS=(
  "/nfs_data"
  "/nfs_media"
  "/nfs_media_lib"
  "/nfs_service"
  "/nfs_cams"
  "/nfs_personal"
)

for mount in "${MOUNTS[@]}"; do
  if mountpoint -q "$mount"; then
    echo "✅ $mount is mounted"
  else
    echo "❌ $mount is NOT mounted"
  fi
done
```

Use existing script: ./sh-correct-mounts.sh
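The script above checks presence only. A companion capacity check (a sketch; `over_threshold` is a hypothetical helper) flags any mount at or above a use% threshold, reading `df -P`-style output:

```bash
# over_threshold PCT: read "df -P"-style lines on stdin and print every
# mountpoint whose Capacity column is at or above PCT.
over_threshold() {
  awk -v t="$1" 'NR>1 { gsub(/%/, "", $5); if ($5+0 >= t) print $6, $5"%" }'
}

# Live use:
#   df -P /nfs_data /nfs_media /nfs_media_lib /nfs_service | over_threshold 90

# Demo with utilization figures from the table above:
printf '%s\n' \
  'Filesystem 1024-blocks Used Available Capacity Mounted' \
  'omv:/HomeNetServices 3145728000 2936012800 266338304 92% /nfs_data' \
  'omv:/CameraFootage 72351744 28311552 44040192 40% /nfs_cams' | over_threshold 90
# prints: /nfs_data 92%
```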

Backup Strategy

Critical Data Backup

Databases (automated via cron):

```bash
# Daily database backups
./sh-backup-databases.sh

# Vaultwarden
./sh-backup-vaultwarden.sh
```

Backup locations:
- Local: /homenet_data/backups/
- Off-site: TBD (consider Duplicati when restored)

NFS Server Backup

OMV Server Backup Strategy:
1. RAID/ZFS for redundancy (verify configuration)
2. Snapshots for point-in-time recovery
3. Off-site replication (recommended)

> [!warning] Backup Status
> The Duplicati backup service is currently offline (0/1 replicas). Restore it or implement an alternative backup solution.

Performance Optimization

Network Considerations

NFS Performance Factors:
- Network bandwidth (10Gbps internal)
- Disk I/O on the OMV server
- Concurrent access patterns
- NFS protocol version (NFSv3 vs NFSv4)

Monitoring NFS Performance

Prometheus metrics:
- Node exporter provides NFS stats
- Monitor latency and throughput
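With node_exporter's filesystem collector scraped, per-mount utilization can be expressed in PromQL. A sketch: the metric names are standard node_exporter, but the `fstype` regex assumes the exports mount as `nfs` or `nfs4` - adjust to what `df -T` actually reports on the nodes.

```
# Percent used per NFS mount
100 * (1 - node_filesystem_avail_bytes{fstype=~"nfs4?"}
         / node_filesystem_size_bytes{fstype=~"nfs4?"})

# Alert-style expression: any NFS mount above 90%
(1 - node_filesystem_avail_bytes{fstype=~"nfs4?"}
   / node_filesystem_size_bytes{fstype=~"nfs4?"}) > 0.90
```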

Manual monitoring:

```bash
# NFS mount statistics
nfsstat -m

# I/O wait
iostat -x 5

# Network utilization
iftop -i <interface>
```

Migration & Expansion

Expanding Storage

Option 1: Add disks to the OMV server
1. Add physical disks
2. Extend the existing volume
3. No service downtime

Option 2: New storage pool
1. Create a new NFS export
2. Mount it on the Docker nodes
3. Migrate data incrementally
4. Update service volume mounts

Option 3: Storage tiering
1. Hot data on fast storage (SSDs)
2. Cold data on slow storage (HDDs)
3. Automated migration policies

Data Migration

```bash
# Migrate data between mounts
rsync -avh --progress /nfs_data/oldpath/ /nfs_service/newpath/

# Update Docker volume mounts
# Edit stack files to point to the new location

# Redeploy services
docker stack deploy -c stack-<name>.yml <stack> --with-registry-auth
```

Related Documentation

- [[Storage-Critical-Warning|Storage Critical Warning]]
- [[Volume-Management|Docker Volume Management]]
- [[01-Infrastructure/Node-201-Manager|Node 201 Storage]]
- [[01-Infrastructure/Node-202-Worker|Node 202 Media Storage]]
- [[06-Troubleshooting/NFS-Mount-Issues|NFS Troubleshooting]]
- [[03-Operations/Backup-Procedures|Backup Procedures]]

Useful Commands

```bash
# Check all NFS mounts
df -h | grep nfs

# Verify NFS server exports
showmount -e 100.1.100.199

# Remount all NFS shares
./sh-correct-mounts.sh

# Find large files
find /nfs_data -type f -size +1G -exec ls -lh {} \;

# Storage usage by directory
du -h --max-depth=1 /nfs_data | sort -hr

# Monitor NFS usage in near real time
watch -n 5 'df -h | grep nfs'
```

Last Updated: 2026-01-11
Status: ⚠️ Critical - 92% capacity
Next Action: Immediate capacity planning and cleanup required