Interesting, thanks. Do you plan speed/performance comparisons as a scripting/plugin language? That would be relevant for games or other graphics-intensive use cases (blackjack-rs considered Scheme/Lisp but opted for mlua due to performance).
For pure computational work, Stak is 2 to 2.5 times slower than CPython. These are E2E benchmarks for the interpreter commands. As you said, I should probably add benchmarks of scripting use cases with Lua.
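To make that concrete, the scripting side of such a benchmark could look roughly like this. This is only a minimal sketch using mlua (the crate mentioned above); the hot Lua loop is an invented stand-in for a real plugin workload, and an equivalent Scheme snippet would run on the Stak side:

    use mlua::Lua;

    fn main() -> mlua::Result<()> {
        let lua = Lua::new();
        // A hot scripted loop, the kind of thing a game or
        // graphics plugin would call into every frame.
        let sum: i64 = lua
            .load("local s = 0; for i = 1, 1000000 do s = s + i end; return s")
            .eval()?;
        println!("sum = {sum}");
        Ok(())
    }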
Do you have a link? I searched just two days ago and couldn't find it. The only thing I found is that you can achieve a similar thing with shortcuts and passthrough.
I'm using Sphinx/MyST with the pydata theme for my blog, which is a very cool combination. But Hugo seems more established.
> Additionally ractor has a companion library, ractor_cluster which is needed for ractor to be deployed in a distributed (cluster-like) scenario. ractor_cluster shouldn’t be considered production ready, but it is relatively stable and we’d love your feedback!
Actors are better suited for highly heterogeneous task sets, where an actor or a small set of them corresponds to some task and you may have thousands or more.
Homogeneous tasks should use an approach that is aware of and takes advantage of the homogeneity, e.g., the sort of specific optimizations a framework might make to orchestrate data flows to keep everything busy with parallel tasks.
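As a concrete sketch of that, here using rayon as a stand-in for such a framework (the workload is invented purely for illustration):

    use rayon::prelude::*;

    fn main() {
        // Homogeneous work: the same pure function over many items.
        // A data-parallel scheduler can chunk and work-steal exactly
        // because it knows every task has the same shape; no per-task
        // actor, mailbox, or supervision machinery is involved.
        let total: u64 = (0..10_000_000u64)
            .into_par_iter()
            .map(|x| x.wrapping_mul(x) % 7)
            .sum();
        println!("total = {total}");
    }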
You can use actors for orchestration, but you're really just using them because they're there, not because they bring any special advantages to the task. Any other solution that works is fine and there would never be a particular reason to switch to actors if you already had a working alternative.
The problem you face is not so much a lack of such things existing but such a staggering multiplicity that it is hard to poke through them all, and despite the sheer quantity, you may still find that your particular problem doesn't fit any of them terribly well. You can find anything from OpenMP (https://en.wikipedia.org/wiki/OpenMP), which amounts to "let's try to treat a whole bunch of resources a lot like one big computer", through things like Kubernetes, which can be used to deploy all kinds of "worker nodes" for all sorts of tasks even if it doesn't do the orchestration of the task itself, and so many combinations of anything in between. On top of that you have all sorts of clustering technologies, message bus technologies, and on-demand VMs spinning up: so many primitives that, even if they aren't designed for this, can be relatively easily put together into whatever it is you actually need. It is all rather a matter of bewildering abundance than shortage.
There's also the entire category of "data lakes" and other nouns that have "data" applied as an adjective, which includes various orchestration techniques because just being storage isn't enough; that is its own entire market segment.
Does anybody have experience with this chip? Can I use it only with trained neural networks, or can I use it for general-purpose SIMD computation? Can I address the chip conveniently from Rust or C++?
Neuromorphic hardware is an area where I encountered analogue computing [1]. Biological neurons would be modeled by a leaky integration (resistor/capacitor) unit. The system was 10^5 times faster than real time (too fast to use for robotics) and consumed little power, but it was sensitive to temperature (much like our brains). If I recall correctly, the technology has been used at CERN, as digital hardware would have required clock speeds that were too high. I have no clue what happened to the technology, but there were other attempts at neuromorphic, analogue hardware. It was very exciting to observe and use this research!
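For anyone unfamiliar with the model: that leaky R/C integration is essentially the leaky integrate-and-fire neuron. A toy digital simulation (all constants are made-up illustration values, not from the hardware above) steps the same equation the analog unit solves physically and continuously:

    // Toy leaky integrate-and-fire neuron, forward-Euler stepped.
    fn main() {
        let tau = 0.02;   // membrane time constant (s)
        let (v_rest, v_thresh, v_reset) = (0.0, 1.0, 0.0);
        let dt = 1e-4;    // integration step (s)
        let input = 60.0; // constant input drive
        let mut v = v_rest;
        for step in 0..2_000 {
            // Leaky integration: dv/dt = -(v - v_rest) / tau + input
            v += dt * (-(v - v_rest) / tau + input);
            if v >= v_thresh {
                println!("spike at t = {:.4} s", step as f64 * dt);
                v = v_reset; // fire and reset
            }
        }
    }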
I worked on a similar project, the Stanford Braindrop chip. It's a really interesting technology; what happened is that most people don't really know how to train those systems. There's a group called Vivum that seems to have a solution.
I work with QDI systems, and I've long suspected that it would be possible to use those same design principles to make analog circuits robust to timing variation. QDI design is about sequencing discrete events using digital operators - AND and OR. I wonder if it is possible to do the same with continuous "events" using the equivalent analog operators, mix and sum.
We got some nice results with spiking/pulsed networks, but the small number of units limits the applications, so we usually end up in a simulator or using more abstract models. There seems to be a commercial product, but also with only ~500 neurons; that might be enough for 1D data processing, though, filling a good niche there (1 mW!) [2]
Just read about it, and there are familiar names on the author list. I really wish this type of technology gained more traction, but I am afraid it will not receive the focus it deserves, considering the direction of current AI research.