I can't speak for the author, but they said they have a Coral TPU passed through to the LXC and into the Docker container, which I also have on my Proxmox setup for Frigate NVR.
Depending on your hardware platform, there can be valid reasons not to run Frigate NVR in a VM. Frigate works best when it can leverage the GPU for video transcoding and a TPU for object detection. If you pass the GPU through to the VM, the Proxmox host no longer has video output (unless it has a secondary GPU).
Unless you have an Intel Arc iGPU, Intel Arc B50/B60, or fancy server GPU, you won't have SR-IOV on your system, and that means you have to pass the entire GPU into the VM. This is a non-starter for systems where there is no extra PCIe slot for a graphics card, such as the many power-efficient Intel N100 systems that do a good job running Frigate.
The reason you'd put Docker in an LXC is that it's the best-supported way to get the Docker engine working on Proxmox without a VM. You'd want to do it on Proxmox for the other benefits it brings: a familiar interface, clustering, Proxmox Backup Server, and a great community. And you'd run Frigate NVR in Docker because that's the best-supported way to run Frigate.
At least, this was the case in Proxmox 8. I haven't checked what advancements in Proxmox 9 may have changed this.
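To make that concrete, sharing the host iGPU and a USB Coral with an LXC container looks roughly like the following in /etc/pve/lxc/&lt;vmid&gt;.conf. Treat it as a sketch rather than a drop-in config: the device numbers and paths are typical for an Intel iGPU and a USB Coral but will differ per system.

    # /etc/pve/lxc/101.conf (excerpt) -- illustrative only
    # Intel iGPU card/render nodes (major 226); the host keeps using the GPU too
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
    # USB Coral (major 189 covers the USB bus)
    lxc.cgroup2.devices.allow: c 189:* rwm
    lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,optional,create=dir

On Proxmox 8.2+ the dev0:/dev1: container options can replace the raw lxc.* lines for simple device nodes.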
> Unless you have an Intel Arc iGPU, Intel Arc B50/B60, or fancy server GPU, you won't have SR-IOV on your system, and that means you have to pass the entire GPU into the VM.
This is changing, specifically on QEMU with virtio-gpu, virgl, and Venus.
Virgl exposes a virtualized GPU in the guest that serializes OpenGL commands and sends them to the host for rendering. Venus is similar, but exposes Vulkan in the guest. Both work without dedicating the host GPU to the guest: they give mediated access to the GPU without requiring any special hardware.
There's another path, known as vDRM/host native context, that proxies the direct rendering manager (DRM) uAPI from the guest to the host over virtio-gpu, letting the guest use the native Mesa driver for lower overhead than virgl/Venus. It does, however, require a small amount of per-driver support code in virglrenderer. Patches adding this have been on the QEMU mailing list since earlier this year, and crosvm already supports it.
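For concreteness, the virgl/Venus path looks roughly like this on the QEMU command line. This is a sketch: the Venus options need a recent QEMU built against a virglrenderer with Venus enabled, and exact flag spellings vary by version.

    # OpenGL via virgl: guest sees a virtio GPU, host does the actual rendering
    qemu-system-x86_64 -enable-kvm -m 4G \
      -device virtio-vga-gl \
      -display gtk,gl=on

    # Vulkan via Venus (recent QEMU + virglrenderer with Venus support)
    qemu-system-x86_64 -enable-kvm -m 4G \
      -device virtio-vga-gl,hostmem=4G,blob=true,venus=true \
      -display gtk,gl=on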
To add to this, while I haven’t used it yet myself (busy with too many other projects), this gist has the clearest and most up-to-date instructions I’ve found so far for setting up QEMU with virglrenderer, along with discussion of current issues: https://gist.github.com/peppergrayxyz/fdc9042760273d137dddd3...
I have Frigate and a Coral USB running happily in a VM on an N97. GPU passthrough is slightly annoying (you need to use a custom ROM from here: https://github.com/LongQT-sea/intel-igpu-passthru). I think SR-IOV works but I haven’t tried it. And the Coral only works in USB3 mode if you pass through the whole PCIe controller.
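If it helps anyone, the Proxmox side of that ends up as a couple of hostpci lines in the VM config. A sketch only: the PCI addresses and ROM filename below are placeholders for whatever your system and that repo actually give you.

    # /etc/pve/qemu-server/100.conf (excerpt) -- addresses/ROM name are placeholders
    # iGPU passthrough using the custom ROM (ROM file goes in /usr/share/kvm/)
    hostpci0: 0000:00:02.0,pcie=1,romfile=igpu.rom
    # Coral only gets USB3 if the whole xHCI controller comes along with it
    hostpci1: 0000:00:14.0,pcie=1

(pcie=1 assumes the VM uses the q35 machine type.)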
I've been debating whether I should move my Frigate off an aging Unraid server to a spare mini PC with Proxmox. The mini has an N97 with 16 GB of RAM. How many cameras do you have in your Frigate instance on that N97? Just wondering if an N97 is capable of handling 4+ cameras. I do have a Coral TPU for inference and detection.
I have around 6 cameras, mostly 1080p, and about 8 GB RAM and 3 cores on the VM (plus Coral USB and Intel VAAPI). CPU usage is about 30 - 70% depending on how much activity there is. I also have other VMs on the machine running container services and misc stuff.
There are some camera stability issues which are probably Wi-Fi related (2.4 GHz is overloaded), and Frigate has its own issues (e.g. detecting static objects as moving), but generally I’m happy with it. If I optimized my setup some more I could probably get it under 50% utilization.
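For reference, the relevant pieces of a Frigate config.yml for that combination (USB Coral for detection, Intel VAAPI for decode) look roughly like this. It's a sketch based on Frigate's documented options, not this exact setup:

    # config.yml (excerpt) -- illustrative
    detectors:
      coral:
        type: edgetpu
        device: usb                # USB Coral
    ffmpeg:
      hwaccel_args: preset-vaapi   # Intel iGPU video decode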
Perfect, thanks. I'll give the N97 a go and put it to good use as a dedicated Frigate NVR box. It certainly has a much lower power draw than my Unraid server.
At first I had the unholy abomination that is the Frigate LXC container, but since it's not trivially updatable and breaks other things in subtle ways, I ended up going with Docker. I was debating moving it into a VM, but for the most part Docker on LXC has only given me solvable problems.
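As a rough sketch of that Docker-on-LXC arrangement (the image name is Frigate's published one; the port, paths, and devices are illustrative and assume the iGPU and Coral are already visible inside the LXC):

    # docker-compose.yml (sketch)
    services:
      frigate:
        image: ghcr.io/blakeblackshear/frigate:stable
        restart: unless-stopped
        shm_size: "256mb"
        devices:
          - /dev/dri/renderD128:/dev/dri/renderD128   # iGPU for VAAPI
          - /dev/bus/usb:/dev/bus/usb                 # USB Coral
        volumes:
          - ./config:/config
          - ./media:/media/frigate
        ports:
          - "8971:8971"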
It's not always better. Docker on LXC has a lot of advantages. I would rather use plain LXC on production systems, but I've been homelabbing on LXC + Docker for years.
It's blazing fast and cut my RAM consumption by around 60%. It's easy to manage, boots instantly, and allows for more elastic separation while still using Docker and/or k8s. I love that it lets me keep using Proxmox Backup Server.
I'm postponing a homelab upgrade for a few years thanks to that.
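For anyone wanting to replicate this, the Proxmox-side prerequisites for Docker inside LXC are mostly a couple of container options. A sketch; the exact needs differ between privileged and unprivileged containers and between storage backends:

    # enable nesting and keyctl for an existing container
    pct set 101 --features keyctl=1,nesting=1

    # or directly in /etc/pve/lxc/101.conf
    features: keyctl=1,nesting=1
    unprivileged: 1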
> While it can be convenient to run “Application Containers” directly as Proxmox Containers, doing so is currently a tech preview. For use cases requiring container orchestration or live migration, it is still recommended to run them inside a Proxmox QEMU virtual machine.
The way I understand it is that Docker in LXC allows for compute/resource sharing, whereas a dedicated VM will require passing through the entire discrete GPU. So the VMs require a total passthrough of those Zigbee dongles too, while a container wouldn't?
I'm not exactly sure how the outcome would have changed here though.
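For what it's worth, a small USB serial device like a Zigbee dongle can typically be passed individually in both cases; it's whole-GPU passthrough that is the VM-specific pain point. A sketch of the two config forms, with placeholder paths and IDs:

    # LXC: bind the host device node into the container (Proxmox 8.2+ syntax)
    # /etc/pve/lxc/101.conf
    dev0: /dev/ttyUSB0

    # VM: pass a single USB device by vendor:product ID
    # /etc/pve/qemu-server/100.conf
    usb0: host=10c4:ea60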
It should in an ideal world, but Docker is a very leaky abstraction IMHO and you will run into a number of problems.
It has improved with newer kernel and Docker versions, but there were problems: overlayfs/ZFS incompatibilities, UID mapping problems in Docker images, capabilities requested by Docker that aren't available in LXC, rootless Docker problems, and so on.
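For example, one workaround people used for the overlayfs-on-ZFS issue was switching Docker's storage driver in /etc/docker/daemon.json. A sketch only; newer ZFS releases and kernels support overlayfs natively, so this may no longer be necessary:

    {
      "storage-driver": "fuse-overlayfs"
    }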
Knowing when to use a VM and when to use a container is sometimes an opaque problem.
This is one of those cases where a VM is a much better choice.