
https://www.gunb.gov.pl/projekty-architektoniczno-budowlane

Here you can find a number of building designs available for free, approved and shared by a government agency.


Lit also does fine without VDOM and has good performance characteristics.


RiotJS as well.


You are holding it wrong ;)


So all your microservices implement sagas or other synchronisation patterns that ensure 100% data consistency?
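
(For context, a saga here means a chain of local transactions where each step has a compensating action that undoes it if a later step fails. A toy sketch - all the step names are made up for illustration:)

    # Toy saga sketch: run (action, compensate) pairs in order; if a step
    # fails, undo the already-completed steps in reverse order.
    def run_saga(steps):
        done = []
        try:
            for action, compensate in steps:
                action()
                done.append(compensate)
        except Exception:
            for compensate in reversed(done):
                compensate()  # best effort; real systems retry and log these
            raise

    # Hypothetical order flow - the names are made up:
    run_saga([
        (lambda: print("reserve inventory"), lambda: print("release inventory")),
        (lambda: print("charge card"), lambda: print("refund card")),
        (lambda: print("create shipment"), lambda: print("cancel shipment")),
    ])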


If you distribute the state over ALL your services and need 100% data consistency, you’re holding it wrong.


OK, what percentage of them implement data consistency in your company? :)


No one pushes for anything aggressive in Poland - the only narrative is that we and the Baltics should be ready to defend ourselves.


Try https://channelstream.org/ - you can talk to it via REST calls, and it scales to thousands of connections on a single small machine. Tiny JS WebSocket/long-polling clients with reconnection logic are available, or you can roll your own if you like.
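
Pushing a notification to connected browsers is then a single REST call from your backend. Rough sketch from memory - the endpoint path, payload shape, and auth header below are assumptions, so check the docs for the actual API:

    # Hypothetical sketch - endpoint, payload shape and auth header are
    # assumptions; see https://channelstream.org/ for the real API.
    import requests

    requests.post(
        "http://127.0.0.1:8000/message",
        json=[{
            "type": "message",
            "channel": "notifications",
            "message": {"text": "build finished"},
        }],
        headers={"x-channelstream-secret": "CHANGE_ME"},
    )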


The thing is, Kobo doesn't tie you to anything - I use it in Europe and I have never used their online store even once.


I have a Kobo H2O and it works great to this day. I charge it very rarely - maybe every 2-3 months. It reads everything I like, and it was one of the first readers with IP68 certification.


I really love what Nicolas is doing, but it kinda feels like the driving force behind the whole Haxe ecosystem is this one guy.


The feeling is understandable and it could be worrying, but for some years now Nicolas has seemed to focus on Shiro Games and game dev tooling (Heaps, HIDE). The compiler is developed by the Haxe Foundation and the ecosystem by the community, which is not that big but has quite a few talented people.


We sped up our ML code 40x using it.


Is it in the data loading that you're getting the most benefit?

I'm curious, since most of the big libraries are already just CUDA calls under the hood anyway, but I'm always interested in anything that speeds up the full process.


I can't speak for the parent commenter, but there is often code processing the input/output of machine learning models that benefits from high-performance implementations. To give two examples:

1. We recently implemented an edit tree lemmatizer for spaCy. The machine learning model predicts labels that map to edit trees. However, in order to lemmatize tokens, the trees need to be applied. I implemented all the tree wrangling in Cython to speed up processing and save memory - trees are encoded as compact C unions (see the first sketch after the links below):

https://github.com/explosion/spaCy/blob/master/spacy/pipelin...

2. I am working on a biaffine parser for spaCy. Most implementations of biaffine parsing use a Python implementation of MST decoding, which is unfortunately quite slow. Some people have reported that decoding dominates parsing time (rather than applying an expensive transformer + biaffine layer). I have implemented MST decoding in Cython and it barely shows up in profiles (see the second sketch below):

https://github.com/explosion/spacy-experimental/blob/master/...
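
To give a feel for point 1, here is a rough Cython-style sketch of what "trees encoded as compact C unions" can look like. The real definitions live in the linked file and differ in detail; the names here are illustrative:

    # Rough sketch only - not the actual spaCy definitions. An edit tree
    # node is either an interior "match" node (split the string, recurse
    # on prefix and suffix) or a leaf "substitution" node; a C union lets
    # both variants share the same compact storage.
    cdef struct MatchNodeC:
        int prefix_len
        int suffix_len
        int left   # index of the subtree applied to the prefix
        int right  # index of the subtree applied to the suffix

    cdef struct SubstNodeC:
        int orig   # string-table index of the substring to remove
        int subst  # string-table index of the replacement

    cdef union NodeC:
        MatchNodeC match
        SubstNodeC subst

    cdef struct EditTreeC:
        bint is_match_node
        NodeC inner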
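And for point 2, the algorithm behind non-projective MST decoding is Chu-Liu/Edmonds. A plain-Python sketch to show the shape of it (the linked file is the real Cython implementation):

    # Plain-Python sketch of Chu-Liu/Edmonds maximum spanning arborescence.
    NEG = float("-inf")

    def _find_cycle(heads):
        """Return one cycle in the head assignment (as a list), or None."""
        color = [0] * len(heads)  # 0 = unseen, 1 = on current path, 2 = done
        for start in range(1, len(heads)):
            path, v = [], start
            while v != 0 and color[v] == 0:
                color[v] = 1
                path.append(v)
                v = heads[v]
            if v != 0 and color[v] == 1:  # walked back onto the current path
                cycle = path[path.index(v):]
                for u in path:
                    color[u] = 2
                return cycle
            for u in path:
                color[u] = 2
        return None

    def chu_liu_edmonds(scores):
        """scores[h][d] = score of the arc h -> d; node 0 is ROOT.
        Returns heads, where heads[d] is the chosen head of d."""
        n = len(scores)
        heads = [0] * n
        for d in range(1, n):  # greedily pick the best head per dependent
            heads[d] = max((h for h in range(n) if h != d),
                           key=lambda h: scores[h][d])
        cycle = _find_cycle(heads)
        if cycle is None:
            return heads
        cyc = set(cycle)
        rest = [v for v in range(n) if v not in cyc]
        m = len(rest) + 1  # contracted graph: rest nodes + one cycle node
        sub = [[NEG] * m for _ in range(m)]
        enter = [0] * m  # best cycle entry point per outside head
        leave = [0] * m  # best in-cycle head per outside dependent
        for hi, h in enumerate(rest):
            for di, d in enumerate(rest):
                sub[hi][di] = scores[h][d]
            # entering arc: gain of the new arc minus the cycle arc it replaces
            u = max(cyc, key=lambda u: scores[h][u] - scores[heads[u]][u])
            sub[hi][m - 1] = scores[h][u] - scores[heads[u]][u]
            enter[hi] = u
        for di, d in enumerate(rest):
            u = max(cyc, key=lambda u: scores[u][d])
            sub[m - 1][di] = scores[u][d]
            leave[di] = u
        sub_heads = chu_liu_edmonds(sub)  # recurse on the contracted graph
        for di, d in enumerate(rest):  # map the solution back
            if d != 0:
                heads[d] = leave[di] if sub_heads[di] == m - 1 else rest[sub_heads[di]]
        h = sub_heads[m - 1]
        heads[enter[h]] = rest[h]  # break the cycle at the chosen entry point
        return heads

Each recursion contracts one cycle, so it terminates; the Cython win is that all of this becomes tight loops over plain arrays instead of Python objects.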


In this case it was multicore computation without the GIL, if I remember correctly.
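
The Cython feature that unlocks this is prange, which releases the GIL and spreads loop iterations across OpenMP threads. A minimal sketch (this is the standard reduction pattern from the Cython docs, not the poster's actual code):

    # Minimal Cython sketch: a parallel reduction with the GIL released.
    # Compile with OpenMP enabled (e.g. -fopenmp for GCC/Clang).
    from cython.parallel import prange

    def parallel_sum(double[:] xs):
        cdef double total = 0.0
        cdef Py_ssize_t i
        # prange runs iterations on multiple cores; `total +=` is
        # recognized by Cython as a reduction, so this is thread-safe.
        for i in prange(xs.shape[0], nogil=True):
            total += xs[i]
        return total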

