Try https://channelstream.org/ - you can talk to it via REST calls, and it scales to thousands of connections on a single small machine. Tiny JS WebSocket/long-polling clients with reconnection logic are available, or you can roll your own if you like.
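To illustrate the "talk via REST calls" part, here is a minimal sketch of pushing a message to a ChannelStream-style server. The endpoint path, payload shape, and secret-header name below are placeholders, not ChannelStream's documented API - check the project's docs for the real contract:

```python
# Hedged sketch: publish a chat message to a ChannelStream-style server over REST.
# "/message" and "X-Channelstream-Secret" are assumptions for illustration only.
import json
import urllib.request


def build_message(channel, user, text):
    # Minimal payload: target channel, sender, and the message body.
    return {"channel": channel, "user": user, "message": {"text": text}}


def publish(base_url, secret, payload):
    # POST the JSON payload with a shared-secret header (hypothetical names).
    req = urllib.request.Request(
        base_url + "/message",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-Channelstream-Secret": secret,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The browser side would then receive this payload over the WebSocket or long-poll connection held by the JS client.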
I have a Kobo H2O and it works great to this day. I charge it very rarely - maybe every 2-3 months. It reads everything I like. And it was one of the first with IP68 certification.
The feeling is understandable and it could be worrying, but for some years now Nicolas seems to have been focusing on Shiro Games and game-dev tooling (Heaps, HIDE). The compiler is developed by the Haxe Foundation and the ecosystem by the community, which is not that big but has quite a few talented people.
I can't speak for the parent commenter, but there is often code processing the input/output of machine learning models that benefits from high-performance implementations. To give two examples:
1. We recently implemented an edit tree lemmatizer for spaCy. The machine learning model predicts labels that map to edit trees. However, in order to lemmatize tokens, the trees need to be applied. I implemented all the tree wrangling in Cython to speed up processing and save memory (trees are encoded as compact C unions):
2. I am working on a biaffine parser for spaCy. Most implementations of biaffine parsing use a Python implementation of MST decoding, which is unfortunately quite slow. Some people have reported that decoding dominates parsing time (rather than applying an expensive transformer + biaffine layer). I have implemented MST decoding in Cython and it barely shows up in profiles:
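For readers unfamiliar with edit trees (example 1 above): they encode a form-to-lemma transformation by keeping the longest common substring and recursively describing how to rewrite the prefix and suffix. The spaCy implementation is in Cython with compact C unions; the following is only a plain-Python sketch of the idea, with hypothetical function names:

```python
def _lcs(a, b):
    # Longest common substring of a and b as (start_a, start_b, length).
    best = (0, 0, 0)
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                if dp[i][j] > best[2]:
                    best = (i - dp[i][j], j - dp[i][j], dp[i][j])
    return best


def build_tree(form, lemma):
    # A substitution leaf if form and lemma share nothing; otherwise a match
    # node that keeps the common substring and recurses on prefix/suffix pairs.
    fa, la, length = _lcs(form, lemma)
    if length == 0:
        return ("subst", form, lemma)
    left = build_tree(form[:fa], lemma[:la])
    right = build_tree(form[fa + length:], lemma[la + length:])
    return ("match", fa, len(form) - fa - length, left, right)


def apply_tree(tree, form):
    # Apply an edit tree to a new form; None means the tree does not fit.
    if tree[0] == "subst":
        _, old, new = tree
        return new if form == old else None
    _, pre, suf, left, right = tree
    if len(form) < pre + suf:
        return None
    lhs = apply_tree(left, form[:pre])
    rhs = apply_tree(right, form[len(form) - suf:])
    if lhs is None or rhs is None:
        return None
    return lhs + form[pre:len(form) - suf] + rhs
```

A tree built from ("walking", "walk") strips an "ing" suffix, so it also maps "talking" to "talk"; one built from ("sang", "sing") rewrites the vowel and generalizes to "rang" → "ring". The model predicts which tree to apply; applying it is the hot path that was moved to Cython.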
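For example 2, the standard algorithm for MST decoding over a dense arc-score matrix is Chu-Liu/Edmonds: greedily pick the best head per token, then repeatedly contract any cycle and re-solve. This is not the Cython code mentioned above, just a plain-Python/NumPy sketch of the algorithm:

```python
import numpy as np


def _find_cycle(heads):
    # Return one cycle in the head assignment as a list of nodes, or None.
    n = len(heads)
    color = [0] * n        # 0 = unseen, 1 = on current path, 2 = finished
    color[0] = 2           # node 0 is the root and never part of a cycle
    for start in range(1, n):
        if color[start]:
            continue
        path, v = [], start
        while color[v] == 0:
            color[v] = 1
            path.append(v)
            v = heads[v]
        if color[v] == 1:                  # walked back onto the current path
            cycle = path[path.index(v):]
            for u in path:
                color[u] = 2
            return cycle
        for u in path:
            color[u] = 2
    return None


def chu_liu_edmonds(scores):
    # scores[h, d] = score of attaching dependent d to head h; node 0 is the root.
    # Returns head indices forming a maximum spanning arborescence.
    n = scores.shape[0]
    s = scores.astype(float).copy()
    np.fill_diagonal(s, -np.inf)           # no self-loops
    s[:, 0] = -np.inf                      # nothing may govern the root
    heads = s.argmax(axis=0)
    heads[0] = 0
    cycle = _find_cycle(heads)
    if cycle is None:
        return heads
    in_cycle = set(cycle)
    rest = [v for v in range(n) if v not in in_cycle]   # root stays at index 0
    pos = {v: i for i, v in enumerate(rest)}
    c = len(rest)                                       # index of contracted node
    new = np.full((c + 1, c + 1), -np.inf)
    enter, leave = {}, {}
    for h in rest:
        for d in rest:
            if h != d:
                new[pos[h], pos[d]] = s[h, d]
        # Best arc from h into the cycle, scored by its gain over the cycle arc it breaks.
        best_d = max(cycle, key=lambda d: s[h, d] - s[heads[d], d])
        new[pos[h], c] = s[h, best_d] - s[heads[best_d], best_d]
        enter[pos[h]] = (h, best_d)
    for d in rest:
        if d == 0:
            continue
        best_h = max(cycle, key=lambda h: s[h, d])      # best arc leaving the cycle
        new[c, pos[d]] = s[best_h, d]
        leave[pos[d]] = best_h
    new_heads = chu_liu_edmonds(new)                    # solve the contracted problem
    result = np.zeros(n, dtype=int)
    for d in rest:
        if d == 0:
            continue
        nh = new_heads[pos[d]]
        result[d] = leave[pos[d]] if nh == c else rest[nh]
    for v in cycle:                                     # keep cycle arcs...
        result[v] = heads[v]
    orig_h, orig_d = enter[new_heads[c]]                # ...except the broken one
    result[orig_d] = orig_h
    return result
```

The pure-Python version above is exactly the kind of code that is slow per sentence; moving the contraction loop to Cython is what makes it disappear from profiles.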
Here you can find multiple free projects, approved and shared by a government agency.