Frigate: Open-source network video recorder with real-time AI object detection (frigate.video)
625 points by thunderbong on Nov 18, 2023 | hide | past | favorite | 140 comments



I’ve been using Frigate for six months on a raspberry pi 4 with a Google Coral TPU. It’s connected to 2 network cameras streaming in 2mp each.

Frigate standalone works super smoothly with no hiccups at all. I am using object detection for people and have not yet had a false positive or false negative. Additionally, I record not only the events but also 24/7 video. Frigate takes care of garbage-collecting old assets.

I have it hooked up to my Home Assistant running on the same raspberry pi. From there, I get notifications to my phone which include a live video, snapshot and video recording. The UX and configuration options are way better than any commercial end user product I have found.

It’s been a literal lifesaver, also fun and easy to use. Would recommend 10 of 10. I have no affiliation with the maintainers.


I use Home Assistant and Frigate on a $100 x86 no-name micro machine with five 4K cameras and it is awesome. CPU use does not go above 10%. Would not claim zero false positives though. However, it is smart enough to filter out non-moving false positives, which many commercial-grade systems cannot, lol.

Can't say it saved my life, but I programmed my smart bulbs to go red if a bear was spotted in any frame in the past 30 min though.

Frigate is great and worthy of praise.


This is hilarious. I programmed my living room to go red if the air quality in my daughter’s room dropped below a certain threshold. It should be annoying by now, but instead it’s hilarious.


detectives Knows Yufarted and Bada Irinhere will solve the case and answer why a girl had so much to suffer.. trust the experts


Something may be going over my head, but I have to ask: is this a fart/diaper joke?


No, my neighbor’s weed smoke was coming through her closet and raising particulate matter. He’s moved out, but we’ve kept the automation, and it also cranks up the air purifier.

I did hear about farting causing VOCs to rise, and that kind of happens with us too.


My guess would be to prevent smoking or vaping


They may have a CO2/pollen/particulate detector, and/or the child could have asthma/allergies.


Inside lights, to warn you not to go out? Or outside lights, to warn the bear that you switched to "war mode"? :D


I've also used the same solution, although I ended up scaling from a Pi to a larger 13th gen Intel box with two USB Corals due to number of cameras. It's been ridiculously reliable running from a docker compose stack for years now, including using Watchtower to auto-upgrade the Frigate container. It's really easy to map the corals via docker compose as well.

It's nuts how cheaply you can make such a good system with AI-detection features; it's more than paid for itself vs commercial options with monthly fees. High-quality weatherproof PoE cameras are crazy affordable now too, and you can VLAN them off your home network with no connection to the internet to further harden the system.


I have almost exactly the same setup, except using the intel gpu for accelerated inference with an OpenVINO Yolo model and HA for notifications. Super reliable.


Where did you buy the Coral TPUs? I'm having a difficult time finding them for sale.


I bought both roughly just after the Covid era, so a while ago. It was even worse during the Covid chip shortage; I had to pay a significant premium over list price for the second one. I haven’t tried to buy one since, so I don’t have any advice, I’m afraid.



It looks awesome and I look forward to trying it out.

Curious how you claim zero false negatives though (as in missing a person it should have detected), unless you’re reviewing all the data or have another system hooked up to it verifying? Or perhaps you simply mean to imply nothing bad has happened due to a missed detection?

I’m curious how it did during Halloween with all the costumes? Are you able to have it pre-alert you that kids are coming to the door?


I have set up two zones per camera: The “yard” and the “entry” which is directly in front of the door. Whenever someone is at the entry, I get a notification. Since I walk these paths myself, I have hundreds of test points. Apart from that, I’m happy to _always_ get a notification before the bell rings. And I never have to open the door before I know who is there.

The “yard” part, I review every week. Here, of course, I have no data to compare it to, so I cannot say if it’s missing anything.

So yes, I’m certain it misses nothing that’s important to me: is there someone at my door, and who is it.


PS: No Halloween in Switzerland so I cannot tell(;

It works well with other costumes like Trail Running and Hiking gear. It also works in the dark (IR camera) and in crazy hard tests like just peeking around the corner with only part of the face visible.


Is this actually legal in Switzerland?


I’m allowed to record my own property whilst following reasonable rules.

If you’re interested, here’s an intro to the relevant legislation: https://www.edoeb.admin.ch/edoeb/de/home/datenschutz/ueberwa...


"literal lifesaver" what did it actually do?


Yeah, I agree Frigate is brilliant but I would love to hear how it has literally saved OP’s life! There must be a story behind that.


Spy stuff


I’m thinking motion detection could be useful to measure a toddler’s sleep. You know, plan your day out based on the quality of it.


Toddlers have kinda evolved to be manageable. I'm thinking more "wake up, armed people are approaching your back door".


> and have not yet had a false positive or false negative

How would you know if you have zero false negatives unless you watch the whole video stream everyday?



> It’s been a literal lifesaver

Whose lives have been saved?

> also fun

Do people actually work in security companies for the LOLz? Nobody told me.


I've been using it for 2 months now, and I strongly agree - it's very reliable, and object detection is spot on. I'm running 2 cameras with just the cpu, and still have plenty of breathing room.


Mind providing more details about the hardware in your setup, cameras etc?


Sure. The cameras are Tapo C320WS. They are cheap, waterproof, connect to wifi if there’s no ethernet and can stream video in two resolutions at the same time. I use the lower resolution for motion detection and the higher resolution for object detection and event recording.

The whole thing is running in docker-compose on Raspberry Pi and Coral TPU.


thank you for sharing the cameras -- I've been looking for a recommendation with a specific model name


I have been testing Frigate for a couple of months now. It is a very ambitious project and I would love to see it succeed.

Here are the observations:

* You don’t actually need hardware decoding or a Coral, but they do help. You will of course need to provision more CPU horsepower for the NVR.

* Motion detection uses the usual implementation from OpenCV. Unfortunately, this algorithm is not very good in my experience. Many things I would consider motion are missed (false negatives), and many things I would not consider motion are detected (false positives). These factors mean that one is tempted to go ham on masking to filter out the false positives, which then leads to further false negatives. I’m genuinely surprised the motion algorithm implemented in OpenCV is still the state of the art of what’s available openly.

* Object detection is somewhat knee-capped by the models available publicly. They are not very good either. Frigate has built its behaviour around these models on the assumption that they are largely accurate, which in my experience has led to quite a few missed recordings of important events. That led me to switch to creating recordings based on motion (I’m not in a very densely populated area, and reviewing the recordings isn’t too onerous).

* Support for the Coral is… shaky at best. There are some indications that production of these devices has largely stopped (and finding them to purchase is hard and expensive), and maintenance of the drivers and libraries to interface with the Coral seems to be minimal or non-existent, to the point where some Linux distributions have started dropping the relevant packages from their repositories. On the upside, running these models on the CPU isn’t that expensive, especially considering that the models are invoked very sparingly.

I’m currently thinking of moving over to continuous recording, perhaps trying out moonfire-nvr or mayhaps handwriting a gstreamer pipeline. Simple software -> fewer failure modes.

(NB: I worked at a computer vision startup in the past, my views are naturally influenced by that experience.)


> I’m currently thinking of moving over to continuous recording, perhaps trying out moonfire-nvr or mayhaps handwriting a gstreamer pipeline. Simple software -> fewer failure modes.

Moonfire's author here. Please do give it a try! Right now it's a little too simple even for me, lacking any real support for motion or events. [1] But I'd like to keep that simple server core that just handles the recording and database functionality, while allowing separate processes to handle Frigate-like computer vision stuff or even just on-camera motion detection, and enhance the UI and add stuff like MQTT/HA integration to support that well. I'd definitely welcome help with those areas. (And UI is really not an area of expertise of mine, as you can see from e.g. this bug: <https://github.com/scottlamb/moonfire-nvr/issues/286>.)

For now I actually run Moonfire and Frigate side-by-side. They're almost complete opposites in terms of what they support, but I find both are useful.

[1] The database schema has the concept of "signals" (timeseriesed enums like motion/still/unknown or door open/door closed/unknown), but my code to populate that based on camera or alarm system events is in my separate "playground" of half-finished stuff, and the crappy UI for it is rotting in one of my working copies. I'd like Moonfire's database/API layer to also have a more Frigate-like concept of "events" and one of "object tracks".


A killer feature would be a time series showing the rate of change from frame to frame. That would allow someone to jump to the more interesting parts of a video.


I'm happy to flesh this idea out together if you start a github discussion or issue (feature request).


feels like frigate and moonfire would be cool to join forces then maybe MoonFrigate XD


I'd love that, but it's hard to collaborate given differences in schedules, working styles, architectural direction, etc. I briefly asked Blake about collaboration back in 2020, and he was open to it, but (understandably) only in areas where we can work independently. (He mentioned wanting frontend help, which is about as far from what Moonfire and I are good at as can be.) I'm not sure what that'd be right now. I see Moonfire's server as basically a DBMS/engine for streaming video. If at some point, Frigate folks are interested in basically replacing their database and (now go2rtc) with Moonfire's, I'm absolutely open to discussing it. But Moonfire would need some feature work to avoid regressing anything they care about, with some discussions around what the API would look like and such. And I'm not moving real fast unfortunately (mostly due to lack of my time and lack of Rust-savvy collaborators).

So more realistically, I see Frigate as something I use in the short- to medium-term and take inspiration from in the long term. I'm not above essentially copying their motion and object tracking algorithms into a plugin, with proper attribution of course.

Frigate is neat because it challenged my idea of what the "minimum viable product" for an NVR could be. When I first looked at it, IIRC it didn't really have any UI at all and didn't support continuous recording. Instead, it just saved events as .mp4 files and published metadata over MQTT for a Home Assistant-based UI. It hadn't occurred to me that would be a useful system, but of course it was.


> Motion detection uses the usual implementation from OpenCV. Unfortunately this algorithm is not very good in my experience.

In Frigate 0.13 (currently in beta) the motion detection has been fully rewritten, which has been a large improvement in my and others' experience. We also have docs now that walk users through tuning the motion detection.

This comes along with many other changes related to what you are describing, like object tracking and improvements to initial object detection when motion is first detected.


I found motion detection to be the easy part when building my NVR. I just used trial and error and scipy filters and eventually found something I'm happy with.

Handwriting a GST pipeline is pretty much what I did. I start with frame differences (I only decode the keyframes, which happen every few seconds, so motion detection has to work on a single frame to have good response time).

Then I do a greyscale erosion to suppress small bits of noise and prioritize connected regions.

After that I take the average value of all pixels, and I subtract it, to suppress the noise floor, and also possibly some global uniform illumination changes.

Then I square every pixel, to further suppress large low intensity background noise stuff, and take the average of those squares.
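The steps above (frame difference, erosion, noise-floor subtraction, mean of squares) can be sketched in a few lines of NumPy. This is a hypothetical `motion_score` helper written from the description, not the poster's actual code:

```python
import numpy as np

def grey_erode3(img):
    """3x3 greyscale erosion: each pixel becomes the minimum of its
    neighbourhood, which wipes out small isolated specks of noise."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    shifted = [padded[y:y + h, x:x + w] for y in range(3) for x in range(3)]
    return np.min(shifted, axis=0)

def motion_score(prev_frame, frame):
    """Score motion between two greyscale frames, per the steps above:
    frame difference, erosion, subtract the mean to remove the noise
    floor / uniform illumination changes, then mean of squared pixels."""
    diff = np.abs(frame.astype(np.float64) - prev_frame.astype(np.float64))
    eroded = grey_erode3(diff)
    # Squaring emphasises concentrated changes over diffuse low-level noise
    centered = np.clip(eroded - eroded.mean(), 0.0, None)
    return float(np.mean(centered ** 2))
```

A static scene scores zero; a compact moving blob scores high, while spread-out sensor noise gets knocked down by the erosion and mean subtraction.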

I mostly only run object detection after motion is detected, and I have a RAM buffer to capture a few seconds before an event occurs.

NVR device code (in theory this can be imported and run from a plain Python script), but it needs some cleanup and I've never tried it outside the web server:

https://github.com/EternityForest/iot_devices.nvr/blob/main/...

GST wrapper utilities it uses, motion detection algorithms at top:

https://github.com/EternityForest/scullery/blob/Master/scull...

My CPU object detection is OK, but the public, fast, easy to run models and my limited understanding of them is the weak point. I wound up doing a bunch of sanity check post filters and I'm sure it could be done much better with better models and better pre/post filtering.


How do you work with so much python code with no type annotations? I get it for smaller projects but isn't half this stuff meant to be library code?


Some of this code is older, from before I was more serious about this specific code. Moving to type annotations has pretty much been my big project of the year for everything personal, among other "eliminate everything hacky" projects: going back into 10-year-old code and cleaning up tons of stuff.

My bigger priority has been moving from Mako to Jinja2 (especially for some particularly horrid templates that could not be highlighted or formatted, because there aren't many good Mako tools) and JSON Schema validation, but I definitely agree type annotations are critical.

VS Code is smart enough to catch a lot of stuff sans annotations though, so you can get by with a lot of nonsense, especially when half your time is just fighting GStreamer and you're not paying as much attention to the python side.

There's nothing better than GST that I've ever seen for dealing with media without actually having to touch the performance critical stuff in your own code, but it is not easy to debug stuff buried in autogenerated python bindings to C code, especially with an extra RPC layer to use a background process and defend against segfaults.

There's also lots of other weird stuff, like imports not at the top of the file, meant to support systems where some module wasn't available, and generally all kinds of cleanup that's slowly happening.


> Support for coral is… shaky at best. There are some indications that the production of these devices has largely stopped (and finding them to purchase is hard and expensive,) and maintenance of the drivers and libraries to interface with coral seems to be minimal or non-existent to the point where some Linux distributions have started dropping the relevant packages from their repositories.

I've recently gone through the process of trying to install pycoral on Rocky Linux 9. I had to build from source, and there was some challenge because documentation for the build process was sparse. There was some conflicting information about files I had to edit, values I had to set, what was supported and what wasn't.


FYI, you no longer need a Coral; Frigate now supports OpenVINO on Intel iGPUs, and it works as well as or better than a Coral.


> Object detection is somewhat knee-capped by the models available publicly. They are not very good either.

Yep, the current models are based on ImageNet, which is of course wildly different content from the typical security camera. It's no surprise that its recognition is often pretty poor, especially from the typical ceiling angles at which cameras are mounted.


Why not use Segment Anything? There are some really amazing models, like the ones in YOLOv8.


We use Yolov8 with custom training for people detection - in our situation (hospital) we regularly get 96% accuracy.

I highly recommend it: the best model we tried out of dozens of public models.


Frigate seems like one of the most promising new NVR/VMS products out there, but still lacks the feature-completeness to replace Blue Iris. The biggest gap right now in my mind is Frigate's poor feature set for continuous recording, which seems like very basic functionality but ends up as a low priority for a lot of these "event-first" products that are more patterned off of consumer products.


My solution for this at the moment is to run a separate NVR using continuous recording in parallel with a Frigate instance:

- Redundant disks/mirroring on the NVR

- Replication of Frigate's Event Database and Recordings to remote network storage

I primarily use Frigate as a general event index, with 'active-objects' as its recording criteria, and look at the NVR when there may be gaps in Frigate's coverage.

I've also been writing my own software to integrate with Frigate to help make better sense of activity and events at a macro level, compared to its current user interface.


I've been using it for continuous recording of my cameras. It would be working flawlessly except for the piss-poor firmware of my Reolink cameras causing their RTSP server to choke.


Frigate recently bundled an instance of go2rtc which can connect to Reolink cameras via http/flv and re-stream as RTSP. This solved my issues with Reolink.

go2rtc also works nicely for on demand transcoding of my H265-only cams to H264 to view the live stream in Firefox.


I use Blue Iris, which barely works because I'm overloading it with 9 Reolinks at 4K. I'd like to figure out my bottleneck, but it works just barely enough that it's not worth mucking with it.


Curious - does having 4k (compared to 2k or 1080p) make a huge difference for security cameras for surveillance on a property like a home?


It depends on the application. 4K on a camera that's high up and covers a lot of area is good because you see more pixels on objects. Also stuff like identifying people's faces or license plates. 4K for a camera that is close up, like say a doorbell, is IMO less useful.
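The "more pixels on objects" point reduces to pixel density on target: horizontal resolution divided by how wide a scene the camera covers at the distance of interest. As a reference point, the DORI guideline (IEC 62676-4) puts identification at roughly 250 px/m. A trivial helper (hypothetical name) to compare sensors:

```python
def pixels_per_metre(horizontal_resolution, scene_width_m):
    """Pixel density on target: horizontal pixels divided by the width
    of the scene visible at the distance of interest (in metres)."""
    return horizontal_resolution / scene_width_m
```

For a scene 10 m wide, 4K (3840 px) gives 384 px/m, comfortably above the ~250 px/m identification threshold, while FHD (1920 px) gives 192 px/m, below it — which matches the face/plate experience described in the replies.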


4K to FHD is the difference between being able to see someone’s face, or not. Or being able to read a license plate, or not.

Digital zoom also can be more useful.

What I’ve learned from clients is to get the best-resolution cameras you can, and PoE cameras only, if the use case is remotely related to safety or security.


Mine are all high up on the second floor, so it does make a difference.


Have you tried Neolink to make Reolink RTSP a little better?


I have a Reolink E1 hooked to my Home Assistant; the RTSP stream seems to crash from time to time. I had never heard of Neolink, would it help in this case?


Not really. Blue Iris has too many features that aren't really needed, and the lack of Linux support makes it a non starter in many cases. Also, the AI features on BI are far worse.


I'm currently using Frigate for continuous recording and it's great and I don't feel like I'm missing any features. What are some features that are missing?


>which seems like very basic functionality but ends up as a low priority for a lot of these

for security based purposes, why would you want to save all of that data that is not changing? you'll just end up fast-forwarding to the interesting bits anyways if you have to go to the footage.


Neither motion nor object detection is really that reliable in any system I've worked with. The norm in commercial systems has long been to record continuously and use motion/object detection/other classifiers to annotate the recording. That gives you the opportunity to search for events, like thefts, that may not have been detected by classification. You also have access to footage well before and after the detected event, which is often absolutely critical to answering useful questions (e.g. how did someone get past the fence?). Common patterns like 10 seconds before/30 seconds after just aren't always sufficient.

Unfortunately consumer devices are almost always cloud-based, where storage but especially upstream bandwidth are much more costly considerations, so recording only on detection has become the norm in the consumer world.

External triggers are also an important feature in commercial systems that a lot of open source projects miss, but Frigate isn't guilty of this one: it can receive triggers by MQTT, which is the same thing I do right now with Blue Iris. That's the big thing that has me optimistic about Frigate going forward. Because motion and object detection are so inconsistent, triggering VMS events based on access control systems and intrusion sensors is often a much more reliable (and even easier to maintain) approach.


One of the niftiest ways I've seen this done was some software I used circa 2000 (I don't remember the name). It would create a variable-rate timelapse by saving a frame every time the image changed more than $x percent, calculated as the sum of differences of pixels from the previous frame, or thereabouts.

If someone was walking across the yard it would save every frame. The movement of the sun would move shadows enough to trigger a new image every few minutes. A bug flying past was small enough that it wouldn't trigger anything. The result was you could get a short video of everything interesting that happened through the day: shadows of trees sliding over the ground, every frame of the car pulling out of the driveway, shadows sliding over the ground some more, cat walks across the yard then lays down, shadows pan around more while the cat sits still, cat gets up and walks away, shadows pan around until the delivery guy comes...

It was an incredibly low-CPU way to see everything that happened without missing anything, and without having to fine-tune the motion detection very much. You just mask out any areas with constant motion, then adjust the slider for how much change triggered the next frame, which would let you adjust how fast the timelapse would go during the boring parts.

I've always wondered why the technique never became widespread.
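A minimal sketch of that technique, written from the description above (hypothetical names; here the comparison is against the last *saved* frame, so slow changes like drifting shadows accumulate until they cross the threshold, just as described):

```python
import numpy as np

def variable_rate_timelapse(frames, threshold=0.02):
    """Yield only frames that differ from the last kept frame by more
    than `threshold`, measured as the sum of absolute pixel differences
    as a fraction of the maximum possible total difference.

    `frames` is an iterable of greyscale uint8 arrays.
    """
    kept = None
    for frame in frames:
        f = frame.astype(np.float64)
        if kept is None or np.abs(f - kept).sum() / (255.0 * f.size) >= threshold:
            kept = f          # this frame becomes the new reference
            yield frame       # and goes into the timelapse
```

Fast motion keeps every frame; a static scene contributes nothing until enough gradual change builds up, which gives exactly the "fast through the boring parts" playback behaviour described above. The threshold slider is the `threshold` parameter.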


That sounds incredibly useful, and it doesn't sound like it should be particularly hard to implement in modern systems. Maybe it just needs a term so people can search and advertise it.


Being able to buffer a video signal so that data can be saved from before a triggering event is not a new idea. One of Sony's cameras (the FS700, I think) could only record a few seconds at 240 fps and then stop. But it had an end-trigger mode where it would keep a rolling buffer, so you could press the button after the event (think: after you see the lightning strike), and it would dump the contents of the buffer up to the point you hit stop. Same thing for sports: hit the button at the catch. Much easier than anticipating starting in time.

Essentially the same concept; you just need enough of a buffer to allow for the pre-roll, which wouldn't be a lot at the lower-bitrate IP data coming from the cameras.
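In an NVR that pre-roll idea reduces to a ring buffer of recent frames (or encoded GOPs). A toy sketch with hypothetical names, assuming a fixed frame rate:

```python
from collections import deque

class PrerollBuffer:
    """Keep only the most recent `seconds` worth of frames, so that
    when a trigger fires, the clip can start before the trigger moment."""

    def __init__(self, seconds, fps):
        self.frames = deque(maxlen=int(seconds * fps))

    def push(self, frame):
        # The deque's maxlen silently drops the oldest frame
        self.frames.append(frame)

    def on_trigger(self):
        # Everything still buffered becomes the pre-roll of the new clip
        return list(self.frames)
```

From here, a recorder would prepend `on_trigger()`'s frames to the clip and keep appending live frames until the post-roll timer expires.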


Because sometimes these systems don't detect those changes. Continuous recording with object detection and tagging solves this problem.


Wait, what now? I have mine set to retain 3 full days of continuous.


Perhaps this has improved, but when I tried it out a few months ago I found that the playback for continuous recording was extremely basic and didn't have features like easy-to-use variable speed scrub to make it practical to search for things. I might try it out again today because I would like to go to something that doesn't have to run on Windows, but my use case is more around continuous recording with around a month of history than event detection.

Space management for rolling retention is also a new feature in Frigate and very basic, I don't think it has a way to do different retention policies by camera group and alarm.


A better UI for recordings viewing and seeing times of activity are coming in the future.

Frigate already supports customizing recording retention per camera for 24/7 and event based recordings.

You can also set different retention periods based on the type of objects that were detected.


I believe you can trigger recording with mqtt, so you could make an automation for it. You could try to bump this https://github.com/blakeblackshear/frigate/issues/2590#issue.... You could even try to use ai to write the feature.

There's only been one incident where I would have liked continuous, I've tweaked events to be more than enough.


Ah yes, you are right. I’ll never actually look at the continuous unless I’m robbed, so it doesn’t bother me. Person and animal detection is really good, so you’d have to be very motivated to want to slop around in the space between.


OK. I have a bunch of ring cameras and cannot get them connected to Amazon anymore. The person that sold the house didn't leave the packaging, and the Amazon app does not allow connection to the temp wifi; you must scan the QR code or enter the serial number? I've never been able to get them reconnected since the Hurricane last year.

Can I somehow use these with Frigate? Is there a way to root these Ring cameras and use them?

I never liked the idea of paying a service fee, nor having Amazon pull the videos into their free neighborhood watch program.

Any suggestions on using them now?


You can’t; Amazon doesn’t support any open protocols with their Ring cameras. You can get the serial number off the camera itself, usually on the back, so it needs to be removed first.

Also most of those ring cameras are 2.4ghz only so you need a dedicated 2.4ghz wifi network. It won’t connect at all if you have an SSID that is broadcasting both 5ghz and 2.4ghz on the same name.


I had that problem with some Tapo smart plugs, and an ecoflow delta max battery. To connect those, I disabled SSID broadcast on 5ghz temporarily; connected the plugs to the 2.4ghz network; then re-enabled the 5ghz. Works fine now with both networks up in the same SSID.


Try registering it with your address. I can't remember if it asks you "did you just buy the house?" or not, but I had this problem and it sends an email to the prior registrant, and if they voluntarily release it, or do nothing for 30 days (IIRC), it turns the camera back over to you. I got this far with it, but I was too lazy to install the app and actually set it up.


Runs really well within a Docker container on my M1 Mac Mini with 3 2K (2560x1440p) Reolink cameras.

Paired with running Scrypted for HomeKit Secure Video (have also found using the RTSP streams rebroadcast from it to be more stable than having multiple sinks connected straight to the camera), and this makes a really good persistent NVR solution that I can also use to monitor remotely without necessarily VPN’ing back into my home network or exposing Frigate thru a separate reverse proxy.


Consider checking if you can compute everything on a single Orange Pi 5 first. It seems that preliminary support has been merged!

https://github.com/blakeblackshear/frigate/pull/8382


Really the best NVR / motion detection out there. Incredibly good camera support through go2rtc and ffmpeg. Supports accelerated video codecs via ffmpeg. You can use your own Yolo weights and models for object detection. There are some that are trained for high angle person detection that are great for surveillance cameras, for example.

Frigate also has pretty solid OpenVINO support now which means accelerated inference on modern-ish intel cpu/gpus, which is a game changer when you have several cameras.

Great docs, too.


go2rtc is a really great addition. Local 2way audio through a doorbell is awesome.


It seems like for a basic setup I need:

- an intel-based PC (can be a minipc, doesn't need a powerful CPU)

- a USB Coral TPU ($60)

- some wired PoE cameras (from $60 each)

My question: what do people typically use to power the cameras? A single PoE switch, or multiple PoE injectors?

My Arlo Pro 2 cameras are apparently EOL and might stop receiving free cloud services in a couple of months. So this seems like a good time to upgrade to higher resolution cameras.

(The Frigate docs advise against using Wi-Fi cameras, which would otherwise be my preference.)


I just have a PoE switch. It's actually easier to run ethernet than power, especially outside. Clogging up your WiFi spectrum with megabits of constant video seems like a terrible idea.


> Clogging up your WiFi spectrum with megabits of constant video seems like a terrible idea.

Yes, that's the thing I like about the Arlo system I have now: it has its own wifi network so, even if it's using spectrum, it's probably not affecting my LAN throughput.

> It's actually easier to run ethernet than power, especially outside.

This is true, but the house where I live already has power available everywhere I might need a camera. The thing I don't like about running new cables is the need to drill holes through exterior walls.


Your LAN can handle it :)

If the current setup can't, plug the cameras and the NVR into a separate switch and none of that traffic will go near the rest of your LAN.

Wifi on the other hand, there's really no (practical) segregation to speak of - the spectrum has limited bandwidth, it doesn't matter if it's a different SSID / wifi network, it'll affect your Wifi!


I don't get why people don't get that part. What they mean by "Up to 1234 Mbps" on the box is "1234 Mbps shared". It's a giant wire occupying a quarter mile around the AP, whereas in wired Ethernet it's 1 Gbps per link per direction.

A GbE switch with a wire-rate transfer guarantee can handle 1 Gbps of traffic between any combination of ports. All the camera traffic coming in on ports 9 to 16 and going to the NVR on port 7 has no impact on traffic between the upstream router on port 26 and your PCs on ports 3 and 5. That cannot happen with Wi-Fi, because everything is inherently on the same shared port 1 (sometimes literally); each 4 Mbps incoming is 4 Mbps of download speed taken from your laptop. Double that if the destination is also on Wi-Fi.

This might be fine if there are just a few cameras, but it's something to be aware of.
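The shared-medium arithmetic above is easy to make concrete. A back-of-the-envelope helper (hypothetical name; it ignores airtime overhead and rate adaptation, so real Wi-Fi is worse than this):

```python
def wifi_airtime_mbps(camera_mbps, n_cameras, nvr_on_wifi=False):
    """Rough aggregate throughput the camera streams consume on a shared
    Wi-Fi segment. Each camera upload crosses the air once; if the NVR
    is also on Wi-Fi, the same bytes cross the air again on the way down."""
    total = camera_mbps * n_cameras
    return total * 2 if nvr_on_wifi else total
```

Eight 4 Mbps cameras consume 32 Mbps of shared airtime, and 64 Mbps if the NVR is wireless too; on a switched wired LAN, the same streams never compete with other ports at all.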


If you live in a sufficiently low density area (rural or suburban with large lots) you can put the two networks on different channels and they won’t meaningfully interfere with each other.


The chances are, if you live somewhere like that, somewhere that actually has a low noise floor for RF, then you likely also have a lot of space to cover with your Wi-Fi, and that means sacrificing range (via additional access points) for the second network.

For me: Wifi is great! But, whenever it's practical, I avoid it... Everything with it is a tradeoff!


> Yes, that's the thing I like about the Arlo system I have now: it has its own wifi network so, even if it's using spectrum, it's probably not affecting my LAN throughput.

Wifi6 is changing a lot of this, but generally speaking Wifi performance is not optimized for media style traffic. Media traffic does best with low jitter (variance of latency) as this tends to keep buffer sizes low and avoids dropping frames. Wifi is not very good at low jitter, and though Wifi6 is a lot better than previous Wifi standards, it's still much harder to keep jitter low on Wifi than it is on a LAN. On top of that, as the sibling commenter says, even if you have a separate Wifi network, spectrum doesn't segment that neatly. Wireless traffic uses multiplexing methods (there are several and if you're interested, the methods are fascinating [1]) to roughly use the same spectrum. These multiplexing methods obviously need to do more work the more traffic there is on the spectrum.

If you can route your media traffic through LAN do it. Obviously as you say, running new cable is a lot of work so it's understandable why you use Wifi. But LAN is just so much better that if you have the time/money (doing it yourself/hiring someone) to do it, I highly recommend you do.

[1]: https://www.intechopen.com/chapters/66562


> it has its own wifi network so, even if it's using spectrum, it's probably not affecting my LAN throughput

Unless you're using dedicated APs it is absolutely affecting your other wifi users.


The Arlo system generally does use its own dedicated APs.


Arlo’s base station is an AP.


Not just that but if a thief is going to break into your home, a wifi hammer will render a lot of smarthome gear including cameras useless.


Did you mean "jammer" or is there something new called a "wifi hammer"? If so, it sounds interesting.


Haha, yes jammer. Fat thumbs.


If you have a newer Intel-based PC, you might not even need the Coral. Frigate added support for Intel's OpenVINO. They're also adding support for the RK3588's Rockchip NPU, but it's still newer, so I wouldn't recommend it unless you like tinkering.

For PoE, I'd just do whatever is convenient. I've done setups with 2 PoE switches before so I could just run one cable between the front/back and then branch out from there.


Single PoE switch with cameras on a VLAN (so they don't have internet access). I use my old framework main board (yay for reuse!). Started with a USB Coral but switched to NVMe, which is more reliable passing through to a VM.

Frigate links some Dahua camera recommendations in their documentation: https://docs.frigate.video/frigate/hardware/

I installed them and they've been rock solid. Low light performance is excellent. The turret form factor is nice and unobtrusive.


Last time I looked at this the Coral devices were out of stock and price gouged. Looks like I can at least order now with a lead time of 22 weeks from mouser.

https://coral.ai/products/m2-accelerator-dual-edgetpu/


Depends on the version. They have thousands of m.2 in stock.

https://www.mouser.com/c/?q=coral


Whoops, I misread the factory lead time as the estimated delivery time :)


No worries. I picked up an NVMe version to build a low-power Frigate box when I saw the USB one is often unobtainium. Figuring out the whole m.2 keying scheme vs. the ports I have took the longest.

I found that one key type is for wifi daughter cards, so I was able to use that in the small form factor host and save a PCIe or NVMe slot.


> what do people typically use to power the cameras? A single PoE switch, or multiple PoE injectors?

It basically doesn't matter at all - I have a mixture of both in my home, multiple PoE switches and multiple PoE injectors for things like cameras, wireless APs etc. Use whatever fits needs/budget/location, you don't have to go nuts buying a single high end PoE switch. There are often good deals to be had on used PoE switches on eBay etc. too if you're really budget conscious.

The only real advantage of going with a single or fewer PoE switches is that you have fewer things to put on a UPS, if you require the system to still work when power goes down. A UPS that can run, say, 4 cameras, the PoE switch and a system running Frigate for more than a few hours can get pretty expensive too, in my experience - most cheap UPSes are designed to give you enough power to save some files and shut down a PC in a matter of minutes, not hours.

A cheap Intel box with a Coral runs Frigate fantastically, and a tower build has plenty of room for internal storage drives.


Yeah, consumer UPSs tend to scale their inverter capacity along with their battery capacity, which means if you're shopping for huge capacity for long runtime, you end up paying extra for a huge inverter you don't need.

I've gone the other route, with a simple power supply that charges an ever-evolving fleet of whatever cheap 12-volt batteries aren't doing anything else, which then feeds DC-DC converters for the various loads. For stuff that's natively 12-volt like my wifi router and cable modem, I just run those directly off the battery rail.

This setup is quiet, efficient, and presently runs the modem, router, service pi, and my RIPE Atlas probe, for somewhere upwards of 20 hours, for something like $150. If I added a 12v-to-48v converter and a small PoE switch feeding a few cameras, it would probably cut the runtime in half, but I could just throw more battery at it for pennies on the watt-hour.
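The sizing math is just watt-hours over watts, derated for converter losses. A quick sketch with made-up numbers (measure your actual loads):

```python
# Battery runtime estimate -- illustrative numbers only.
def runtime_hours(battery_wh: float, load_w: float, converter_eff: float = 0.85) -> float:
    """Hours of runtime for a given load, derated by DC-DC converter efficiency."""
    return battery_wh * converter_eff / load_w

bank_wh = 2 * 12 * 35          # two 12V / 35Ah batteries = 840 Wh
base_load_w = 18               # modem + router + pi, say
print(runtime_hours(bank_wh, base_load_w))        # ~39.7 h
print(runtime_hours(bank_wh, base_load_w + 20))   # + small PoE switch and cams, ~18.8 h
```

Doubling the load roughly halves the runtime, and since the batteries are the cheap part of this setup, you can buy runtime back for pennies on the watt-hour.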


Single PoE switch works well, they are inexpensive. If you have Poe injectors that can work too.

Wifi cameras are more for convenience than reliability or dependability.


Doesn’t have to be PoE cameras. I use wifi cameras too, pretty much any camera with rtsp/onvif would work.

Chances are, a single switch is more cost effective than multiple injectors. But you also need Ethernet routed throughout your house. One alternative is to use a G.hn (powerline) adapter with PoE. This way, you can get both network and power with one plug without rewiring your house.


Wi-Fi cameras are not a great idea. Sure, they are convenient, but Wi-Fi is a shared access medium (every device on, say, channel 11, has to “cooperate” with all the other devices about when it can transmit, including devices on neighboring SSIDs) and something that is constantly streaming video (or worse, multiple devices!) is going to quickly consume available bandwidth and offer a poor Wi-Fi experience. (But most people only care about convenience.) Plus, Wi-Fi is easily jammed, which is not great from a security perspective.


Ehh, I have a mix of 4K and 2K cameras, it hasn’t been much of an issue. I run OPNsense with a single EAP670 and there hasn’t been much performance degradation. PoE is definitely ideal but not an option for many, including me since I rent. I think the G.hn plugs are probably my best option for PoE if I really needed it.

Edit: not sure why you’re being downvoted


If you're happy with your wifi cameras' performance with Frigate, I'd love a recommendation.


I have a mix of reolink and amcrest ones, I can grab the models later tonight when I’m at my computer


Please do, also looking for recommendations :)


I have a variety of PoE power supplies based on where all my wires are running. I have one PoE switch near my main router that goes directly to a few cams. I have a second PoE switch in my living room that hooks into one in-house ethernet port and splits/powers two outdoor cams. Then I have a number of WiFi cams still where it wasn't convenient to get ethernet.


I am using it all: a PoE switch, then a couple of injectors where needed for some specific reason, and then also PoE splitters (one cable leaving the PoE switch, going to a splitter and then to 4 different PoE cameras, powering everything with one PoE output from the switch).

I would not use WiFi cameras. Standard RTSP PoE h264 is the way to go.


Check out the Nvidia Tesla P4. It's basically an uncooled low profile 1080 8GB.


I have a single PoE switch for my ubiquiti cameras and polycom voip phones. My original need for the PoE switch was actually the access points and not the cameras but I slowly converted from nest to these.


PoE switch with a big UPS so that recording does not stop in case of power outages.


I’m having really great results with Frigate. Took me a while to figure out that low resolution sub streams should be used for object detection and the high resolution for recording.

Also learned that go2rtc allows me to have one connection to a camera but then restream it to homebridge, frigate, etc. I used to fumble with ffmpeg to do it, but go2rtc is easier.

I have a small form factor Dell desktop refurb with an m.2 coral tpu in the wifi slot. I use the coral for object detection and the Intel GPU for decode acceleration.

I’m curious as to why I haven’t seen NVR software that uses cameras’ built-in object and motion detection and listens for those alarms or events?


One thing I'd like to see is offloading object detection to the camera. Many cameras now include event detection and I'd like to use the on-board dedicated hardware in each camera rather than trying to do it on a local GPU for each of my data streams.


There is now an event trigger in the API where you can tell Frigate something has happened, like a doorbell press. I'm still not clear if this just acts like a regular motion event where it'll do person detection etc. after. If so, I was thinking of setting up a mmWave sensor at my front door, because the shadows constantly trigger motion events.


I use StalkedByTheState (https://github.com/hcfman/sbts-install) with 15 cameras all being evaluated with an NVIDIA GPU with large model yolov6 and matches double checked with large yolov7. Practically never get a false positive in a complex environment and never get a miss. The port to the Orin series still needs to be completed though.


Wow. I have always been tempted to give it a go with an Odyssey.

Question: how tough is it to integrate a roboflow dataset in the custom model section?

https://docs.frigate.video/configuration/objects


I wonder if Frigate runs on Arm SBCs such as https://www.hardkernel.com/shop/odroid-m1s-with-8gbyte-ram/ ?


Support for some RockChip SBCs was recently merged in for 0.13 as a community supported board https://deploy-preview-6262--frigate-docs.netlify.app/config...


Wow, that's great! Thanks for the link.


It sounds awesome for remote dwelling security. Don't bother with rabbits and foxes, turn on the lights and play "you are trespassing turn back now" for unexpected guests. Perhaps can even tell if they have guns or crowbars.


How welcoming you are.


I've been using Frigate for about 6 months. It's significantly better than Blue Iris and zoneminder.


I've been following the project from a distance. I'm waiting for proper nix support to test it.


How does this compare to Blueiris?


BI is windows only. Frigate is mostly used on Linux.


How's this compare to something like Blue Iris? I'm running Blue Iris with 5x 4K cameras on a small Intel NUC. The AI detection works pretty well, but I'd always prefer something lighter and open source.


The best! Open source and with Coral AI support


Does it have a decent mobile app for alerts?


I don't think so. That is the main reason I'm keeping Blue Iris going.

My wife can use the BI app (even though it uses an outdated UI design). I haven't found any Frigate-based UX that would meet that bar (and that bar is not even particularly high with the BI app).

Would love to know if I'm missing a Frigate UX option that is "family friendly".


I mean, alerts can go out to Home Assistant or any MQTT broker it looks like, so for alerts that seems like the easiest option: MQTT or the HA app.
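For anyone who wants to roll their own notifier instead, a minimal sketch of consuming Frigate's MQTT events in Python. The broker hostname is hypothetical, and the payload fields follow the event JSON Frigate publishes on the frigate/events topic; adjust for your setup:

```python
import json

# Sketch of a DIY Frigate alert consumer -- not a polished tool.

def summarize(payload: bytes) -> str:
    """Turn one Frigate event payload into a one-line alert string."""
    event = json.loads(payload)
    after = event.get("after", {})
    return f"{event.get('type')}: {after.get('label')} on {after.get('camera')}"

def run(broker: str = "homeassistant.local") -> None:
    """Print a line per event. Requires paho-mqtt; untested against a live broker."""
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.on_message = lambda c, u, msg: print(summarize(msg.payload))
    client.connect(broker)
    client.subscribe("frigate/events")
    client.loop_forever()
```

From there it's one more step to forward `summarize()` output to whatever push service your family already has on their phones.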


it's amazing to me that there are countries where people can legally film random people walking in the street in front of their house and store that data without any possible legal repercussions.


What's people's opinions on Frigate+?

For anyone that uses it, can you provide some details on the value?


A number of users have posted their experiences so far in the GitHub discussion:

- https://github.com/blakeblackshear/frigate/discussions/7932#...
- https://github.com/blakeblackshear/frigate/discussions/7932#...
- https://github.com/blakeblackshear/frigate/discussions/7932#...
- https://github.com/blakeblackshear/frigate/discussions/7932#...

I have found my frigate+ model to be much more accurate and crazy good even at night. Will be curious how things change when it snows here more often, since I've not submitted any examples of winter at this house yet.


Great write up on the version 13 experience so far with custom models: https://rogerstechtalk.com/my-frigate-v13-beta-experience/

Also an interview with the Frigate author Blake covering the custom models and the project overall: https://youtu.be/04GZBbn_nRE


Besides upload and annotate [1], you can technically create your own models and use that within the Frigate configs already for free (https://docs.frigate.video/configuration/objects/#custom-mod...).

[1] You could always mount a cloud drive within Frigate's Docker config to have Frigate upload camera footage to a cloud server.


I can tell you from my experience. Since I started using Frigate+, I’ve had very minimal false positives. I go days without one! Even my LPR camera picks up vehicles at night. It only sees tail/head lights and the license plate; everything else is pitch black. I still get bounding boxes. Would definitely recommend it, plus it’s only going to get better.


Silly question, but doesn't YOLOv8 (or Segment Anything) handle that kind of detection extremely well these days?


Can it detect a dog peeing indoor?

I always wanted to set off an alarm if my dog tries to pee inside.


Off-the-shelf, not that specific scenario…

If your dog has a consistent space it pees, maybe. You would set up a camera in the room and define an area (basically draw a box around the rug) and further refine with parameters like “Between 9.00 and 17.00 on Monday to Friday, alert”.

Insofar as I’m aware there aren’t any specific dog detection models shipped with the tool. You might be able to find image detection models to add on easily, but YMMV.
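Under the hood a zone check like "dog on the rug" boils down to a point-in-polygon test on the detection's bounding box. A pure-Python sketch of the idea (illustrative only, not Frigate's actual implementation):

```python
# Ray-casting point-in-polygon test -- illustrative, not Frigate's code.
def point_in_polygon(x: float, y: float, polygon: list[tuple[float, float]]) -> bool:
    """True if (x, y) lies inside the polygon given as a list of vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this polygon edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# "Rug" zone drawn as a box; test the bottom-center of a detection's bbox.
rug = [(100, 300), (400, 300), (400, 500), (100, 500)]
print(point_in_polygon(250, 400, rug))   # True  -> on the rug, fire the alert
print(point_in_polygon(50, 400, rug))    # False -> somewhere else
```

Combine that with a time-of-day condition in your automation and you have roughly the "alert if the dog is near the rug during work hours" behavior described above.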



