For any device connected to a network, one should assume it will upload data to the internet, intentionally or not. Blocking it at the router is the way to go.
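A minimal sketch of what that blocking could look like, assuming a Linux-based router with iptables (the device's IP and MAC addresses here are hypothetical). Dropping the device's traffic in the FORWARD chain cuts it off from the internet while leaving local LAN access intact:

    # Block all forwarded (i.e. internet-bound) traffic from the camera,
    # matched either by its LAN IP or by its MAC address.
    iptables -I FORWARD -s 192.168.1.50 -j DROP
    iptables -I FORWARD -m mac --mac-source 00:11:22:33:44:55 -j DROP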
Not discrediting the article, but is this partially incorrect? Most of these embedded Linux devices that are “intelligent” doorbells, i.e. ones that run inference locally rather than on cloud compute, can't build models locally, though they can run inference. So don't they have to send new facial images of the people they want to identify to a cloud service?
Long day, not sure if words are my friend right now…
There seem to be two main issues: (1) the uploading of thumbnails regardless of user settings, and (2) the ability to watch a video stream without authorization. The discoverer only details the first, to give Anker time to investigate and fix the latter. Besides their potential use for image recognition, the thumbnail images are used in notifications. I don't think any of these devices are powerful enough to do facial recognition on-device yet. We've only just started to get voice recognition on-device, and that entails much less information and a much smaller problem space. But if someone says they don't want to use cloud services, no data at all should be sent to the cloud.
"With regard to eufy Security’s facial recognition technology, this is all processed and stored locally on the user's device."
Running a simple face recognition algorithm locally seems entirely possible: measure the distance between the eyes, the distance to the nose, etc., and keep a local database of all the faces encountered. Most of the time the base station is not doing much except storing the video stream, so it may have enough spare capacity (say, capacity that would otherwise be used to stream video to a phone) to also train an AI in the background, even if it takes 30 minutes per face. Whether that database/training gets sent back to eufy and added to a global face recognition algorithm is a different matter. Running an AI can require very few resources, depending on the algorithm; training takes more, but can be spread out over the long term. Smartphones are perfectly capable of running and training a variety of ML algorithms, and a base station that can handle many video streams is likely to be more powerful than a smartphone. They might even outsource the computation to your phone.
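A minimal sketch of how such a local pipeline can work, using the open-source face_recognition library (dlib under the hood), which runs entirely on-device. The key point is that modern pipelines don't need a per-face training run at all: "learning" a new face just means storing its 128-number embedding in a local database, and identification is a nearest-neighbor lookup. The filenames and tolerance here are illustrative, not anything eufy actually does:

    # Local face "recognition" as embeddings plus nearest-neighbor lookup,
    # using the open-source face_recognition library (dlib under the hood).
    # Everything runs on-device; no cloud call is involved.
    import face_recognition
    import numpy as np

    # Local "database": name -> 128-dimensional face embedding.
    # Enrolling a new face just stores one more vector -- no training step.
    db = {}

    def enroll(name, image_path):
        image = face_recognition.load_image_file(image_path)
        encodings = face_recognition.face_encodings(image)
        if encodings:
            db[name] = encodings[0]

    def identify(image_path, tolerance=0.6):
        """Return the closest enrolled name for each face found, else None."""
        names = list(db)
        known = [db[n] for n in names]
        image = face_recognition.load_image_file(image_path)
        results = []
        for encoding in face_recognition.face_encodings(image):
            if not names:
                results.append(None)
                continue
            distances = face_recognition.face_distance(known, encoding)
            best = int(np.argmin(distances))
            results.append(names[best] if distances[best] <= tolerance else None)
        return results

    enroll("alice", "alice.jpg")            # hypothetical filenames
    print(identify("doorbell_frame.jpg"))

This also bears on the question upthread: enrolling a new face this way needs no cloud round-trip, because no model is ever retrained.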
Nest cameras claim to do "ML at the edge", and then there are devices like Nvidia's Jetson Nano. I strongly believe that facial recognition without going to the cloud is possible with current tech.
Guessing here, but I can see propagating those ML models as a reasonable thing: one camera learns a face, and the result is sent out to the other cameras in the same "security system" logical unit.
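A hypothetical sketch of what that propagation could look like, continuing the embedding approach above. This is guesswork about the general design, not eufy's actual protocol; the peer addresses and the /faces endpoint are invented for illustration. Note that only a small embedding vector crosses the LAN, never an image and never the cloud:

    # Hypothetical sketch: push a newly learned face embedding to the other
    # cameras in the same "security system" unit over the LAN. The endpoint,
    # addresses, and payload shape are invented for illustration.
    import requests  # assumes peers expose a simple local HTTP endpoint

    PEERS = ["192.168.1.21", "192.168.1.22"]  # other cameras in the system

    def propagate(name, embedding):
        """Send one small embedding vector to each peer -- no images, no cloud."""
        for ip in PEERS:
            try:
                requests.post(f"http://{ip}/faces",
                              json={"name": name, "embedding": list(embedding)},
                              timeout=2)
            except requests.RequestException:
                pass  # peer offline; it can sync the next time it checks in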
iPhones do local facial recognition in the Photos app. I don't expect a security camera to be as powerful as an iPhone, but if that is the main thing it is doing, it is not an unreasonable thing to expect it to do.
https://news.ycombinator.com/item?id=33178885