System Overview
| Component | Value |
|---|---|
| OS | Ubuntu 24.04 (Proxmox VM with GPU passthrough) |
| GPU | NVIDIA RTX 2000 Pro Blackwell (16GB VRAM) |
| Frigate | 0.16.4 (stable-tensorrt Docker image) |
| Detector | ONNX (GPU-accelerated via TensorRT) |
| Models | YOLOv9-c-640 and/or D-FINE-L-640 |
| MQTT | Eclipse Mosquitto 2 |
| Cameras | RTSP via UniFi Protect (192.168.1.10:7447) |
1. Prerequisites
Docker and Docker Compose must be installed. NVIDIA drivers must be working (nvidia-smi should show your GPU).
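A quick way to confirm all three prerequisites at once (versions will vary):
# Verify Docker, Compose, and the NVIDIA driver on the host
docker --version
docker compose version
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv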
Install NVIDIA Container Toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
| sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
| sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
| sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
Verify GPU works inside Docker:
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
You should see your RTX 2000 Pro listed. If not, check that the NVIDIA runtime is configured:
docker info | grep -i runtime
# Should show: nvidia
2. Directory Structure
mkdir -p ~/frigate/{config/model_cache,storage,media,mqtt/{config,data,log}}
cd ~/frigate
Final layout:
~/frigate/
├── docker-compose.yml
├── config/
│   ├── config.yml
│   └── model_cache/
│       ├── yolov9-c-640.onnx (~97MB)
│       └── dfine-l.onnx (~150-200MB)
├── storage/
├── media/
└── mqtt/
    ├── config/mosquitto.conf
    ├── data/
    └── log/
3. Build Detection Models
You need at least one model. Both are free, open source, and run on your NVIDIA GPU via ONNX/TensorRT.
Critical: Frigate 0.16+ does NOT ship pre-built models and the old download URLs return 404. You MUST build models locally using Docker. The build process uses ~20-25GB of disk space temporarily.
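Before building, it is worth confirming the headroom is actually there; `/var/lib/docker` is the default Docker data root (adjust if you relocated it):
# Check free space where Docker builds its layers, and where the models land
df -h /var/lib/docker ~/frigate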
Option A: YOLOv9-c-640 (Recommended Starting Point)
Fast, well-tested, recommended by Frigate for NVIDIA GPUs. ~97MB output.
Known issue: The Frigate docs show a single `docker build --output` command, but the ONNX export silently fails because `onnxscript` is missing from the base dependencies. The fix below uses a two-step process: build the image first, then run the export with `onnxscript` installed.
cd ~/frigate/config/model_cache
# Step 1: Build the Docker image (downloads weights, installs deps)
docker build -t yolov9-builder --build-arg MODEL_SIZE=c --build-arg IMG_SIZE=640 -f- . <<'EOF'
FROM python:3.11
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /yolov9
ADD https://github.com/WongKinYiu/yolov9.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx==1.18.0 onnxruntime "onnx-simplifier>=0.4.1"
ARG MODEL_SIZE
ARG IMG_SIZE
ADD https://github.com/WongKinYiu/yolov9/releases/download/v0.1/yolov9-${MODEL_SIZE}-converted.pt yolov9-${MODEL_SIZE}.pt
RUN sed -i "s/ckpt = torch.load(attempt_download(w), map_location='cpu')/ckpt = torch.load(attempt_download(w), map_location='cpu', weights_only=False)/g" models/experimental.py
EOF
# Step 2: Export to ONNX (installs onnxscript, runs export, copies output)
docker run --rm -v "$(pwd)":/output yolov9-builder sh -c \
"pip install onnxscript && python3 export.py --weights ./yolov9-c.pt --imgsz 640 --simplify --include onnx && cp /yolov9/*.onnx /output/"
# Step 3: Rename to match config
mv yolov9-c.onnx yolov9-c-640.onnx
# Verify (should be ~97MB, NOT 0 bytes)
ls -lh yolov9-c-640.onnx
Why two steps? The `RUN` of `export.py` inside the Dockerfile fails silently because `onnxscript` is not in the YOLOv9 `requirements.txt`. The build completes but produces no `.onnx` file. Running the export separately, with `pip install onnxscript` first, is the fix we discovered.
Option B: D-FINE Large (Higher Accuracy)
Transformer-based (DETR). Better localization, handles low light and motion blur better than YOLO. ~150-200MB output.
This is a single command: unlike YOLOv9, D-FINE's build process works correctly in one shot because all dependencies are included.
cd ~/frigate/config/model_cache
docker build . --build-arg MODEL_SIZE=l --output . -f- <<'EOF'
FROM python:3.11 AS build
RUN apt-get update && apt-get install --no-install-recommends -y libgl1 && rm -rf /var/lib/apt/lists/*
COPY --from=ghcr.io/astral-sh/uv:0.8.0 /uv /bin/
WORKDIR /dfine
RUN git clone https://github.com/Peterande/D-FINE.git .
RUN uv pip install --system -r requirements.txt
RUN uv pip install --system onnx onnxruntime onnxsim onnxscript
RUN mkdir -p output
ARG MODEL_SIZE
RUN wget https://github.com/Peterande/storage/releases/download/dfinev1.0/dfine_${MODEL_SIZE}_obj2coco.pth -O output/dfine_${MODEL_SIZE}_obj2coco.pth
RUN sed -i '58s/data = torch.rand(.*)/data = torch.rand(1, 3, 640, 640)/' tools/deployment/export_onnx.py
RUN python3 tools/deployment/export_onnx.py -c configs/dfine/objects365/dfine_hgnetv2_${MODEL_SIZE}_obj2coco.yml -r output/dfine_${MODEL_SIZE}_obj2coco.pth
FROM scratch
ARG MODEL_SIZE
COPY --from=build /dfine/output/dfine_${MODEL_SIZE}_obj2coco.onnx /dfine-${MODEL_SIZE}.onnx
EOF
# Verify (~150-200MB, NOT 0 bytes)
ls -lh dfine-l.onnx
D-FINE also comes in s (small) and m (medium): change `MODEL_SIZE=s` or `MODEL_SIZE=m` in the command.
Verify Your Models
ls -lh ~/frigate/config/model_cache/
If any model file is 0 bytes, the export failed. Check the Docker build output for errors. The most common cause for YOLOv9 is the missing `onnxscript`: use the two-step process above.
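Beyond checking file sizes, you can verify the exported graph actually loads. A minimal sketch using a throwaway container with onnxruntime (swap in dfine-l.onnx for D-FINE); a valid YOLOv9 export should report an input shape of [1, 3, 640, 640]:
# Load the model with onnxruntime and print its input names/shapes
docker run --rm -v ~/frigate/config/model_cache:/models python:3.11-slim sh -c \
  "pip install -q onnxruntime && python3 -c \"import onnxruntime as ort; s = ort.InferenceSession('/models/yolov9-c-640.onnx'); print('inputs:', [(i.name, i.shape) for i in s.get_inputs()])\""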
Model Comparison
| | YOLOv9-c-640 | D-FINE-L-640 |
|---|---|---|
| Architecture | CNN (YOLO) | Transformer (DETR) |
| COCO AP | ~53% | ~54% |
| File Size | ~97MB | ~150-200MB |
| Inference | ~15-30ms | ~20-40ms |
| Strengths | Speed, well-tested | Localization, low light, motion blur |
| Labels | 80 COCO classes | 80 COCO classes |
4. MQTT Configuration
cat > ~/frigate/mqtt/config/mosquitto.conf << 'EOF'
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
listener 1883
allow_anonymous true
listener 9001
protocol websockets
EOF
For authenticated MQTT (optional, recommended for production):
# Create password file
docker run -it --rm -v ~/frigate/mqtt/config:/mosquitto/config eclipse-mosquitto:2 \
mosquitto_passwd -c /mosquitto/config/passwd frigate_user
# Then update mosquitto.conf:
# allow_anonymous false
# password_file /mosquitto/config/passwd
And add to Frigate config:
mqtt:
user: frigate_user
password: your_password
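Once the stack is running (section 9), a quick way to confirm the broker works and to watch Frigate's traffic is the client shipped inside the mosquitto image (add -u/-P if you enabled authentication):
# Subscribe to every Frigate topic; Ctrl+C to stop
docker exec -it mqtt mosquitto_sub -t 'frigate/#' -v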
5. Docker Compose
cat > ~/frigate/docker-compose.yml << 'EOF'
services:
frigate:
container_name: frigate
image: ghcr.io/blakeblackshear/frigate:stable-tensorrt
restart: unless-stopped
privileged: true
shm_size: "256mb"
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
volumes:
- ./config:/config
- ./media:/media/frigate
- ./storage:/storage
- type: tmpfs
target: /tmp/cache
tmpfs:
size: 1000000000
ports:
- "5000:5000"
- "8554:8554"
- "8555:8555/tcp"
- "8555:8555/udp"
environment:
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
depends_on:
- mqtt
mqtt:
container_name: mqtt
image: eclipse-mosquitto:2
restart: unless-stopped
volumes:
- ./mqtt/config:/mosquitto/config
- ./mqtt/data:/mosquitto/data
- ./mqtt/log:/mosquitto/log
ports:
- "1883:1883"
- "9001:9001"
EOF
Notes:
- The image must be `stable-tensorrt` for NVIDIA GPU support; plain `stable` will not use TensorRT acceleration.
- `shm_size: "256mb"` is fine for 1-2 cameras. Increase to `"512mb"` or `"1024mb"` if you add more cameras and see errors.
- Changing `shm_size` requires `docker compose down && docker compose up -d` (not just `restart`), because it is a container-creation setting.
- Port 8971 (Frigate's authenticated UI, introduced in 0.14) is not mapped here; this setup exposes only the unauthenticated UI on port 5000, so keep it LAN-only.
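Compose can validate the file before anything starts, which catches indentation mistakes early:
cd ~/frigate
docker compose config --quiet && echo "compose file OK"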
6. Frigate Configuration
Create `~/frigate/config/config.yml` using `nano` (not `cat << EOF`, which can corrupt special characters):
nano ~/frigate/config/config.yml
Paste the config below. This is the Hybrid mode (GPU detection, CPU video decode) with D-FINE Large.
mqtt:
enabled: true
host: mqtt
port: 1883
topic_prefix: frigate
client_id: frigate
database:
path: /config/frigate.db
detectors:
onnx:
type: onnx
model:
model_type: dfine
width: 640
height: 640
input_tensor: nchw
input_dtype: float
path: /config/model_cache/dfine-l.onnx
labelmap_path: /labelmap/coco-80.txt
ffmpeg:
hwaccel_args: []
face_recognition:
enabled: true
model_size: large
min_area: 5000
detection_threshold: 0.6
recognition_threshold: 0.85
unknown_score: 0.8
min_faces: 1
blur_confidence_filter: true
go2rtc:
streams:
camera_main:
- rtsp://192.168.1.10:7447/G5b6LuOsIljCpbGJ
camera_sub:
- rtsp://192.168.1.10:7447/E6n3kh6s7pEIZoMV
birdseye:
enabled: true
mode: continuous
detect:
enabled: true
objects:
track:
- person
filters:
person:
min_area: 3000
min_score: 0.5
threshold: 0.6
record:
enabled: true
retain:
days: 7
mode: motion
alerts:
retain:
days: 30
detections:
retain:
days: 30
snapshots:
enabled: true
clean_copy: true
timestamp: true
bounding_box: true
crop: false
quality: 100
retain:
default: 14
cameras:
camera:
enabled: true
ffmpeg:
inputs:
- path: rtsp://127.0.0.1:8554/camera_main
input_args: preset-rtsp-restream
roles:
- record
- path: rtsp://127.0.0.1:8554/camera_sub
input_args: preset-rtsp-restream
roles:
- detect
detect:
enabled: true
width: 1280
height: 720
fps: 15
face_recognition:
enabled: true
min_area: 4000
record:
enabled: true
retain:
days: 7
snapshots:
enabled: true
retain:
default: 30
height: 720
quality: 100
motion:
threshold: 20
contour_area: 30
improve_contrast: true
Save with Ctrl+X, then Y, then Enter.
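A quick syntax check catches paste damage before Frigate sees the file. This only validates YAML, not Frigate's schema (Frigate does the full schema validation at startup); it assumes PyYAML is available on the host:
cd ~/frigate
python3 -c "import yaml; yaml.safe_load(open('config/config.yml')); print('YAML syntax OK')"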
Config Gotchas We Discovered
These are real errors we hit during setup that you will also hit if you copy configs from the internet:
- `use_experimental` under the `ui:` section was removed in Frigate 0.16+. Delete it; any config containing `ui: use_experimental: false` will fail validation.
- An empty motion mask (`mask: []`) is invalid. Either provide actual coordinates or remove the `mask:` line entirely.
- `events:` under `record:` changed in 0.16+. The old `events: pre_capture: 5` format is gone; use `alerts:` and `detections:` instead (as shown above).
- `version: 0.16-0` at the top of the config is not needed in 0.16+. Remove it.
- `cat << EOF` can corrupt YAML: special characters (smart quotes, non-breaking spaces) survive the paste when using `cat << 'EOF'`. Always use `nano` to paste configs.
- `database: path: /storage/frigate.db`: use `/config/frigate.db` instead to keep the database with your config volume.
7. Switching Between Models
Only the `model:` section changes. Everything else stays identical.
D-FINE Large (currently active):
model:
model_type: dfine
width: 640
height: 640
input_tensor: nchw
input_dtype: float
path: /config/model_cache/dfine-l.onnx
labelmap_path: /labelmap/coco-80.txt
YOLOv9-c-640:
model:
model_type: yolo-generic
path: /config/model_cache/yolov9-c-640.onnx
input_tensor: nchw
input_pixel_format: rgb
input_dtype: float
width: 640
height: 640
labelmap_path: /labelmap/coco-80.txt
Critical difference: D-FINE does NOT use `input_pixel_format`. YOLOv9 requires `input_pixel_format: rgb`. Mixing them up will cause detection to fail silently (no errors, just no detections).
After editing, restart:
cd ~/frigate && docker compose restart frigate
First boot with a new model takes extra time while TensorRT builds an optimized engine for your specific GPU. This is a one-time process per model.
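One convenient pattern is keeping a saved copy of each model's config and copying it into place. The `config.yml.yolov9.bak` name matches the backup file listed in section 14; `config.yml.dfine.bak` is an illustrative name:
cd ~/frigate/config
cp config.yml config.yml.dfine.bak    # stash the active D-FINE config (illustrative name)
cp config.yml.yolov9.bak config.yml   # activate the saved YOLOv9 config
cd ~/frigate && docker compose restart frigate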
8. Switching Between Modes
Mode 1: Hybrid (Recommended) β GPU detection, CPU video decode
This is what the config above uses. GPU handles object detection and face recognition. CPU handles video streams.
ffmpeg:
hwaccel_args: []
Mode 2: Full GPU β GPU detection + GPU video decode
Uses more VRAM but offloads everything from CPU. Change one line:
ffmpeg:
  hwaccel_args: preset-nvidia
Recent Frigate releases document the unified `preset-nvidia`, which covers both H.264 and H.265 via NVDEC; the older `preset-nvidia-h264`/`preset-nvidia-h265` names are deprecated.
Mode 3: CPU Only β No GPU at all
For testing or freeing the GPU. Use Docker image ghcr.io/blakeblackshear/frigate:stable (not stable-tensorrt).
Replace the detector and model sections:
detectors:
cpu:
type: cpu
num_threads: 8
ffmpeg:
hwaccel_args: []
Remove the entire `model:` section; Frigate automatically falls back to its built-in SSDLite model in CPU mode.
9. Start Frigate
cd ~/frigate
docker compose up -d
docker compose logs -f frigate
Watch for these in the logs:
- `frigate.detectors.plugins.onnx INFO : ONNX: /config/model_cache/dfine-l.onnx loaded` means the model loaded successfully
- Camera connection messages mean the streams are connecting
- Any errors (see Troubleshooting section)
10. Verify
Web UI: http://your-server-ip:5000
GPU usage:
watch -n 2 nvidia-smi
Expected GPU processes in Hybrid mode:
- `frigate.detector.onnx` handles object detection
- `frigate.embeddings_manager` handles face recognition embeddings
If you see ffmpeg processes on the GPU, your `hwaccel_args` is still set to a preset. Set it to `[]` for CPU decode.
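For a focused view of which processes hold the GPU, without the full dashboard:
# List GPU compute processes with their VRAM usage
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv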
Logs:
docker compose logs -f frigate
Test detection: Walk in front of a camera. You should see person detections in the Frigate UI Events tab.
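Detection health is also visible through Frigate's stats API; look for a steady `inference_speed` in milliseconds (`jq` assumed installed on the host):
curl -s http://localhost:5000/api/stats | jq '.detectors'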
11. Face Recognition
Face recognition requires object detection to find a person first. It cannot run independently.
Setup
- Open the Frigate UI → Settings → Face Recognition
- Click "Add Person", enter a name, and upload 3-5 clear photos of their face
- Or let Frigate detect faces from events, then assign names in the UI
Face recognition runs on the GPU via the embeddings manager (~788MB VRAM). The face_recognition section in the config controls sensitivity:
face_recognition:
enabled: true
model_size: large # Options: small, large
min_area: 5000 # Minimum face size in pixels
detection_threshold: 0.6 # How confident to detect a face
recognition_threshold: 0.85 # How confident to match a known face
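To watch recognitions arrive live, you can tail Frigate's MQTT event stream. A hedged sketch that assumes mosquitto-clients and jq on the host, and that your Frigate version includes the recognized name in the event's sub_label field:
# Print camera and recognized name for events carrying a sub label
mosquitto_sub -h localhost -t frigate/events | \
  jq -r 'select(.after.sub_label != null) | "\(.after.camera): \(.after.sub_label)"'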
12. Adding More Cameras
Step 1: Add streams to go2rtc
Use separate main (high-res for recording) and sub (low-res for detection) streams:
go2rtc:
streams:
camera_main:
- rtsp://192.168.1.10:7447/G5b6LuOsIljCpbGJ
camera_sub:
- rtsp://192.168.1.10:7447/E6n3kh6s7pEIZoMV
hallway_main:
- rtsp://192.168.1.10:7447/YOUR_MAIN_STREAM_KEY
hallway_sub:
- rtsp://192.168.1.10:7447/YOUR_SUB_STREAM_KEY
Step 2: Add camera definition under cameras:
hallway:
enabled: true
ffmpeg:
inputs:
- path: rtsp://127.0.0.1:8554/hallway_main
input_args: preset-rtsp-restream
roles:
- record
- path: rtsp://127.0.0.1:8554/hallway_sub
input_args: preset-rtsp-restream
roles:
- detect
detect:
enabled: true
width: 1280
height: 720
fps: 15
face_recognition:
enabled: true
min_area: 4000
record:
enabled: true
retain:
days: 7
snapshots:
enabled: true
retain:
default: 30
height: 720
quality: 100
motion:
threshold: 20
contour_area: 30
improve_contrast: true
Step 3: Restart
cd ~/frigate && docker compose restart frigate
If adding many cameras at once, consider increasing shm_size in docker-compose.yml (requires down then up).
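The Frigate docs give a per-camera formula for minimum shm based on the detect resolution; a rough back-of-envelope using the constants from recent docs (verify against your version's documentation):
# Approximate shm need for one camera at 1280x720 detect resolution
python3 -c "w, h = 1280, 720; print(round((w * h * 1.5 * 20 + 270480) / 1048576, 1), 'MB')"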
13. Ports
| Port | Purpose |
|---|---|
| 5000 | Frigate Web UI |
| 8554 | RTSP Restream |
| 8555 | WebRTC (TCP + UDP) |
| 1883 | MQTT |
| 9001 | MQTT WebSocket |
14. Quick Commands
Key files in `~/frigate/config/`:
| File | Purpose |
|---|---|
| config.yml | Active Frigate config |
| config.yml.yolov9.bak | Backup for switching back to YOLOv9 |
| frigate.db | Frigate database (events, faces) |
| frigate.db-shm | SQLite shared memory (auto-managed) |
| frigate.db-wal | SQLite write-ahead log (auto-managed) |
| model_cache/ | ONNX model files |
| backup-config.yaml | Old backup from debugging |
| backup.db | Old database backup |
When to use restart vs down/up:
- `restart` for config changes (`config.yml` edits)
- `down` then `up` for `docker-compose.yml` changes (`shm_size`, ports, volumes, image)
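And the commands themselves in one place:
cd ~/frigate
docker compose logs -f frigate                 # follow Frigate logs
docker compose restart frigate                 # apply config.yml changes
docker compose down && docker compose up -d    # apply docker-compose.yml changes
docker compose pull && docker compose up -d    # pull newer images and recreate
cp config/frigate.db config/backup.db          # ad-hoc DB backup (stop Frigate first for a consistent copy)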