Hacker News | legobmw99's comments

Is there a technical reason to not allow closures as the integrand?

Maybe because they aren't guaranteed to be actual functions (in the mathematical sense) and could return random values

The Fn trait could be used, which prevents mutation but still allows a lot of useful closures. I should note that a motivated user could provide a junk function no matter what type is accepted

It seems like it is lacking the functionality R's integrate has for handling infinite boundaries, but I suppose you could implement that yourself on the outside.
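To sketch what that outside implementation could look like: a common trick is the substitution x = t / (1 - t^2), which maps (-1, 1) onto the whole real line. The `composite_simpson` routine below is a hypothetical hand-rolled stand-in, not the crate's actual API:

```rust
// Sketch of handling infinite bounds via the substitution x = t / (1 - t^2),
// which maps (-1, 1) onto (-inf, inf) with dx = (1 + t^2) / (1 - t^2)^2 dt.
// `composite_simpson` is a hand-rolled stand-in, not the `integrate` crate's API.
fn composite_simpson<F: Fn(f64) -> f64>(f: F, a: f64, b: f64, n: usize) -> f64 {
    let n = if n % 2 == 0 { n } else { n + 1 }; // Simpson's rule needs an even n
    let h = (b - a) / n as f64;
    let mut sum = f(a) + f(b);
    for i in 1..n {
        let x = a + i as f64 * h;
        sum += if i % 2 == 1 { 4.0 * f(x) } else { 2.0 * f(x) };
    }
    sum * h / 3.0
}

fn integrate_over_reals<F: Fn(f64) -> f64>(f: F, n: usize) -> f64 {
    // Stay slightly inside (-1, 1) to avoid dividing by zero at the endpoints.
    let eps = 1e-9;
    composite_simpson(
        |t| {
            let u = 1.0 - t * t;
            f(t / u) * (1.0 + t * t) / (u * u)
        },
        -1.0 + eps,
        1.0 - eps,
        n,
    )
}

fn main() {
    // Gaussian integral over the whole real line: should be sqrt(pi)
    let result = integrate_over_reals(|x| (-x * x).exp(), 200_000);
    println!("Result: {result}"); // approximately 1.7724538509 = sqrt(pi)
}
```

This works because the integrand decays fast enough that the transformed function goes to zero at the endpoints; R's integrate does something similar internally.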

For what it's worth,

    use integrate::adaptive_quadrature::simpson::adaptive_simpson_method;
    use statrs::distribution::{Continuous, Normal};

    fn dnorm(x: f64) -> f64 {
        Normal::new(0.0, 1.0).unwrap().pdf(x)
    }
    
    fn main() {
        let result = adaptive_simpson_method(dnorm, -100.0, 100.0, 1e-2, 1e-8);
        println!("Result: {:?}", result);
    }
prints Result: Ok(1.000000000053865)

It does seem to be a usability hazard that the function being integrated takes a plain fn (function pointer) rather than a generic Fn, since you can't pass closures that capture variables, hence the weird dnorm definition
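To illustrate the Fn-based signature being suggested (`composite_simpson` here is a hypothetical stand-in, not the crate's actual API):

```rust
// Hypothetical Fn-based signature (`composite_simpson` is a stand-in, not
// the `integrate` crate's API): being generic over Fn lets callers pass
// closures that capture variables from the environment.
fn composite_simpson<F: Fn(f64) -> f64>(f: F, a: f64, b: f64, n: usize) -> f64 {
    let n = if n % 2 == 0 { n } else { n + 1 }; // Simpson's rule needs an even n
    let h = (b - a) / n as f64;
    let mut sum = f(a) + f(b);
    for i in 1..n {
        let x = a + i as f64 * h;
        sum += if i % 2 == 1 { 4.0 * f(x) } else { 2.0 * f(x) };
    }
    sum * h / 3.0
}

fn main() {
    // Parameters captured by the closure -- impossible with a plain `fn` argument.
    let (mu, sigma) = (0.0_f64, 1.0_f64);
    let dnorm = |x: f64| {
        let z = (x - mu) / sigma;
        (-0.5 * z * z).exp() / (sigma * (2.0 * std::f64::consts::PI).sqrt())
    };
    let result = composite_simpson(dnorm, -100.0, 100.0, 200_000);
    println!("Result: {result}"); // approximately 1.0
}
```

Note that a non-capturing closure does coerce to a fn pointer, so something like `|x: f64| Normal::new(0.0, 1.0).unwrap().pdf(x)` would work with the existing signature; it's only capturing closures that are ruled out.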


I’ve seen this attributed to John von Neumann, of all people


It seems like he did everything! I first heard of Von Neumann in international relations & economics classes as the person who established game theory, then later in CS classes as the creator of mergesort, cellular automata, Von Neumann architecture, etc.


Wait til you hear about what he did in Math and Physics...

Very easy to claim he was the most intelligent human to ever live. Or perhaps he was never human...

https://en.wikipedia.org/wiki/The_Martians_(scientists)


I consider LLMs to be the first successful non-von-neumann architecture in many decades


I’d be shocked if jart didn’t know this, but it seems unlikely that an LLM would generate one of these most vexing parses, unless explicitly asked


I think you're thinking of something different to the issue in the parent comment. The most vexing parse is, as the name suggests, a problem at the parsing stage rather than the earlier lexing phase. Unlike the referenced lexing problem, it doesn't require any hack for compilers to deal with it. That's because it's not really a problem for the compiler; it's humans that find it surprising.


Given all the things that were new to the author in the article, I wouldn’t be shocked at all. There’s just a huge number of things to know, or to have come across.


Justine is proficient in C, she is the author of a libc (cosmopolitan) among other things, like Actually Portable Executables [1].

I would expect her to know C quite well, and that's probably an understatement.

[1] https://justine.lol/ape.html


I wonder how much of this has to do with Covid interrupting a lot of recent grads’ time in college and forcing a large percentage of their courses to be online for a time

It seems pretty obvious to me that the quality of both teaching and assessment plummeted during this time, so I suspect that it’s even harder than usual to trust things like a transcript to see what an applicant really knows


The author links to https://arxiv.org/abs/2006.10343, which seems like a good place to start on normalizing flows for Bayes


Pyro has a nice normalizing flows tutorial: https://pyro.ai/examples/normalizing_flows_intro.html


Ah, I did not realize that the `realNVP` was a link! Thanks.


If what I want is not an executable but a shared library, does this get me anything?

I currently have a use case that uses a server running an emscripten build (using -sMODULARIZE and some exports; I suppose it's not a true dylib)


Importing a wasm module from a wasm module is (non)surprisingly impossible to do -- you have to have a linker, abi and all that.


It is possible, with some care. I was looking into this with WAForth, which compiles the wasm and loads it via a host function (i.e. it is the host's responsibility to make it available). I wanted to enable dynamic loading of words from disk, which requires some bookkeeping and shuffling a bunch of bytes around during compilation to write out the bits necessary to have the host do that linking. It isn't impossible, just tedious, and in my case having to write it in WAT is a pain.


Yep, you need to do the nasty bits by hand, that's what I mean.



Thanks, I see, but the documentation is so scarce and I'm not a proficient C expert.

What syntax can be used to run emception? Thank you.


It’s sadly a bit more of a proof of concept than a hackable project. The docker build in the readme did work last time I tried, and there is a demo site at https://jprendes.github.io/emception/, but I’ve failed to modify it in the past to do other things

There is a fork at https://github.com/emception/emception that is trying to make it more production ready, but it looks like that may have stalled


I guess the search for compiling C++ to some kind of bytecode continues. Thanks a bunch for the links and details, much appreciated.


The closest I’ve found in terms of description is https://github.com/feschber/lan-mouse, but the lack of encryption on the connection has discouraged me from using it.

I’m also a paid Synergy user, and it’s frankly comedic how long Wayland support has been on their roadmap. I’m not convinced it’s ever coming, which means I’m probably only 1-2 distro updates away from being forced to use something else


I'm in the same boat.


Synergy core (aka synergy 1) just had Wayland support merged this week. “Next year” has finally come to pass


It seems a little odd to me that this is not just… a vscode extension pack?


A good question. The VS Code extension API is pretty powerful but extensions run separately from the main workbench process and they can't draw any meaningful UI on it. This was a great design decision IMHO as it is the core reason VS Code has a reputation for a minimalist UI and good performance despite being based on Electron.

However it also made it impossible to build the kinds of experiences we wanted to with Positron. Positron has a bunch of top-level UI as well as some integrated/horizontal services that don't make sense as extensions. We built those into the core of the system which is why a fork was necessary.

It's a goal for Positron to be extensible, so it has its own API alongside VS Code's API, and both the R and Python language systems are implemented as extensions.


I'm curious whether you'd be willing to elaborate on your longer-term plans for feature parity with VS Code? As VS Code receives continued development and new features, you'll have the burden of integrating those updates into your fork. Are you planning to keep up to date with VS Code, or will the products essentially drift apart over time? If the latter, would that mean extension developers have to build separate extensions for your IDE?


We merge upstream from VS Code every month and we plan to keep up to date with it, so extensions will continue to work and we'll continue to inherit new features as they become available.

It's a development burden for sure -- but still an order of magnitude cheaper than trying to build a good workbench surface from scratch.

