Maybe I'm missing something, but how would this prevent you from using setTimeout/setInterval? That said, I agree that these projects often work great in small use cases but quickly crumble under "real world" scenarios.
I'd be hesitant to run something like a 30fps render loop in a web app. It's been years since I last saw or tried that in a real-world app, but it didn't end well for performance.
Your best bet would be to queue up the specific UI changes that need to be made as diffs, rather than checking the entire UI state. At that point, though, you might as well apply each change immediately as it's needed.
If that were still a perf problem, you'd end up chasing a very complex solution like React Fiber, which partially updates the UI on a loop while periodically pausing for user events.
Sure, if you blow away the entire app on every state change. But that would lose not only state defined in components (like `i` in ClickMe) but also all state implicitly stored in DOM elements (selection, focus, scroll position, input value, media playback).
I would almost certainly never implement a UI as a render loop, but if you wanted to go down that path, requestAnimationFrame is a much more idiomatic way to do it: it matches the user's display refresh rate instead of a fixed timer.
I had a very similar experience a few weeks ago and even fell for the same double-userdata misunderstanding.
The documentation itself is actually good and extensive; however, it feels like the authors expect users to already understand the library deeply. It is also hard to find "Getting Started" docs, as most of the provided examples felt way too bloated coming from a Node.js/ws background. Compared to the other library[1] I was considering, it took much longer to get even a simple echo server running.
However, having used lws for some time now, I am really happy with it! The API is very clean, mostly intuitive, and provides everything you need without feeling bloated or becoming too verbose. Sometimes documentation is still a bit hard to find, but it can be figured out eventually.
One great feature for me was being able to accept both WebSocket and raw TCP connections on the same port. This is extremely easy and just requires setting the flag LWS_SERVER_OPTION_FALLBACK_TO_RAW.
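Roughly, the setup looks like this. This is a sketch from memory, not compile-tested against a specific lws version, and `protocols` is assumed to be a protocol table you've defined elsewhere:

```c
#include <string.h>
#include <libwebsockets.h>

/* Sketch: one listen socket accepting both WebSocket and raw TCP.
   Connections that don't complete a WebSocket handshake fall back
   to the raw callbacks of the first protocol. */
struct lws_context_creation_info info;
memset(&info, 0, sizeof info);
info.port      = 8080;
info.protocols = protocols;  /* assumed: your protocol table */
info.options   = LWS_SERVER_OPTION_FALLBACK_TO_RAW;

struct lws_context *context = lws_create_context(&info);
```

Everything else (the protocol callbacks, the service loop) stays the same as a plain WebSocket server.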
I encountered other hiccups. They are fully documented and completely valid, but were really confusing to me as a first-time user:
* Sending data requires a specific memory layout[2] – namely, you have to allocate memory for the WebSocket header yourself, before the actual message you want to send. This gave me confusing segfaults in the beginning.
* Sending data / responding to a client message will probably (but not always) fail if you just naively call `lws_write()`. To send data correctly, you need to queue your message data, request the "ON_WRITABLE" callback[3], and only then actually write.
Mozilla's DeepSpeech is so large that you can't really read and understand it. This one is 10x less code, while recognition quality is 75% better (lower relative word error rate).
So this one is small enough that you can read the source code if you want to, while DeepSpeech is not.
Good point – calling this clickbait might be too cynical.
From this perspective I can definitely get behind advertising the project with the LoC measurement.
Subjectively, I still find this a bit of a "not telling the whole truth", but then I've also only ever toyed around with speech recognition AI.
The Bluetooth unlock worked fine during this outage. The only feature unavailable was “remote unlock/start” where the user is not within Bluetooth range of the car.
Interesting. Seems like that would be an everyday issue...plenty of dead spots in cell coverage, underground parking garages, etc. Or perhaps they check for "no network", but not "network is there, endpoint is borked".
I find leaving them in the repo makes it pretty clear how you have to configure your build process. For GCC, I instantly know I need `-I ./tinyvm/include/` and can use `#include <tvm/tvm.h>` etc.
Once you have a known event, you can use what you know about it to start asking more questions.
Proximity is an obvious one: who was near or in contact with the alleged terrorist in the hours, days, and weeks beforehand? Were there recurrences of close proximity among individuals previously not considered persons of interest?
This methodology is precisely how fraud and money laundering risk is mitigated. Once you work back from a KE (known event) to build up the pattern of related activities, you can use that pattern as a signature against observed behaviour, to identify future activity you'd like to investigate more closely or act on.
It's more like having a map with a line showing where the terrorist has been: you can overlay on that map everywhere every other WhatsApp user has been and pull out anyone who spends a significant amount of time with the known terrorist. Then you know exactly who they associate with and can interrogate those profiles directly to see if they are viable suspects.