leeoniya's comments

"Efficiency trades off against resiliency"

https://blog.nelhage.com/post/efficiency-vs-resiliency/


is it OSS?

what do you use for parsing?


No, I am using papaparse. I am a fan of your tiny libs.

<3

is there a good source that shows which were dismissed as meritless vs ones dismissed due to lack of standing?

made me think of, "bend entropy towards definable outcomes"

https://m.youtube.com/watch?v=2QJjMwBAORw


if you scroll down, looks like same shape as charts below it for other jobs that can be done remotely: banking, marketing, research, etc.

what am i missing?


Those graphs finish higher than the initial dip. The software one does not. But yes, the same overall shape.


> JSON BinPack is space-efficient, but what about runtime-efficiency?

> When transmitting data over the Internet, time is the bottleneck, making computation essentially free in comparison.

i thought this was an odd sales pitch from the jsonbinpack site, given that a central use-case is IoT, which frequently runs on batteries or power-constrained environments where there's no such thing as "essentially free"


Fair point! "Embedded" and "IoT" are overloaded terms. For example, you find "IoT" devices ranging from extremely low-powered microcontrollers all the way up to Linux-based boards with plenty of power, and they are all considered "embedded". I'll take note and improve the wording.

That said, the production-ready implementation of JSON BinPack is designed to run on low powered devices and still provide those same benefits.

A lot of the current work is happening at https://github.com/sourcemeta/jsontoolkit, a dependency of JSON BinPack that implements a state-of-the-art JSON Schema compiler (I'm a TSC member of JSON Schema, btw). It enables fast and efficient schema evaluation within JSON BinPack on low-powered devices, unlike the current prototype, which requires full schema evaluation to resolve logical schema operators. That's just one example of the runtime-efficiency tracks we are pursuing.


> batteries or power-constrained environments

I would imagine that CPUs are much more efficient than a satellite transmitter, probably? You'd have to balance the additional computational energy required against the energy saved by transmitting less.


Yeah, it all depends very much, given how huge the "embedded/IoT" spectrum is. Each use case has its own unique constraints, which makes it very hard to give general advice.


For sure, but radio transmitter time is almost always much more expensive than CPU time! It's 4-20 mA vs 180 mA on an ESP32; having the radio on is a ~160 mA extra load! As long as every seven milliseconds of compression saves a millisecond of transmission, your compression algorithm comes out ahead.


Sounds like you are pretty familiar with satellite transmission at the hardware level. If so, I would love to chat and pick your brain. I don't know much about the hardware constraints myself.


You can; I worked mostly with WiMAX instead of direct satellite but the radio transmission is the killer either way.


> on an esp32;

ironically the main criticism i've heard of these is how power-inefficient they are :P


> It took at least 17y for Amazon to get rid of its last Oracle database:

this is from the CockroachDB license, pretty much straight out of Oracle's playbook:

> You will not perform Benchmarks against any products or services provided under terms that restrict performing and disclosing the results of benchmarks of such products or services, unless You have the lawful right to waive such terms. If You perform or disclose, or direct or permit any third party to perform or disclose, any Benchmark, You will include in any disclosure and will disclose to Licensor all information necessary to replicate such Benchmark, and You agree that Licensor may perform and disclose the results of benchmarks of Your products or services, irrespective of any restrictions on benchmarks in the terms governing Your products or services.


That seems... fine? The terms basically say that if you publish a benchmark, you need to let CRDB reproduce it and discuss the results publicly.


it knows we're all in a voting ring called Github Users



It was discussed here even more recently than that.


That's a completely different article.


now rewrite it back to JS with https://github.com/KilledByAPixel/LittleJS

j/k :D


Why stop there? From C to Zig, from Zig to Rust. Then compile the Rust version to WASM to finally make it runnable in the browser.


i'm actually quite curious how it would perform relative to the C version. the article shows 1000x particles, but LittleJS has demos with a couple orders of magnitude more than that at 60fps.

e.g. https://killedbyapixel.github.io/LittleJS/examples/stress/


I haven't looked into the code, but the correct way would be to move the particle engine into shader code; then the limit would be whatever the graphics card can handle.

It appears that after all these years, not everyone has bought into the shader programming mentality, which is quite understandable, as only proprietary APIs have good debugging tools for shaders.


JS engines like V8 are very good at JIT compilation and optimization based on actual profiling. If we are talking about pure CPU modeling, I suspect a good JIT will soon enough produce machine code on par with the best AOT compilers. (BTW, the same should apply to JVM and CLR languages, and maybe even to LuaJIT to some extent.)


From my cursory reading of v8 blogs, most of its optimizations revolve around detecting patterns in JS objects and replacing them with C++ classes.


Exactly. Detecting patterns that are typical for human coders and replacing them with code that uses the machine efficiently is what most compilers do, even for low-level languages like C. You write a typical `for` loop; the compiler recognizes it and unrolls it, and/or replaces it with a SIMD-based version that runs many iterations per clock.


Doesn't Zig compile to WASM too?


Yes, but the point of the joke was to make the loop longer while keeping it somewhat logical. I wish I had managed to insert PureScript, Elixir, Pony, and ATS somehow.


Yes. For instance, this is a mixed Zig/C project (the C part is the sokol headers providing the platform-glue code):

https://floooh.github.io/pacman.zig/pacman.html

The Git repo is here:

https://github.com/floooh/pacman.zig

...in this specific project, the Emscripten SDK is used for the link step (while compilation to WASM is handled by the Zig compiler, both for the Zig and C sources).

The Emscripten linker enables the 'embedded JavaScript' EM_JS magic used by the C headers; it also performs additional WASM optimizations via Binaryen and creates the .html and .js shim files needed for running WASM in browsers.

It's also possible to create WASM apps running in browsers using only the Zig toolchain, but this requires solving those same problems in a different way.

