Sadly, only the mounting rim is standardized; the shape of the body is not. That means containers from different manufacturers don't necessarily stack, and lids are not interchangeable.
NixOS makes this rather easy too. There are modules for Synapse, LiveKit, etc. (the MAS module is still a pull request, though), and the setup is quite doable: https://wiki.nixos.org/wiki/Matrix
However, you still need to know what you are doing (the manual helps) and connect the pieces yourself (in theory there could be a nixpkgs module that does this for you, but apparently nobody has bothered). Once it's done, you can lean back.
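For a flavor of what the wiring looks like, here is a minimal sketch of just the Synapse piece (the domain is a placeholder, and the reverse proxy, LiveKit, etc. from the wiki page still have to be added separately — that's the "connect the pieces" part):

```nix
# configuration.nix sketch, assuming the matrix-synapse module
# from nixpkgs; "example.org" is a placeholder domain.
{
  services.matrix-synapse = {
    enable = true;
    settings.server_name = "example.org";
  };
}
```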
I've been running my homeserver happily for over 5 years, and the setup was fairly straightforward.
Yeah, but then you have to run NixOS, which, on top of the sheer difficulty Linux already poses for the average user, adds Nix and its own learning curve.
Also, based on your description, you're a techie. I highly doubt the average city hall clerk will set up their own Matrix server.
That is only partly correct. Compared to flake.nix, devenv.sh lacks outputs that define packages, i.e. there is no way to deterministically build the project as a Nix package. It's the developer environment that devenv competes with. The two can even work in tandem: devenv for the development environment, flakes for the rest.
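As a sketch of the distinction (project name and build recipe are placeholders): a flake.nix can expose both a `packages` output, for deterministic builds, and a `devShells` output, whereas devenv only covers the latter role.

```nix
# flake.nix sketch -- pname/version and the shell contents are
# illustrative placeholders, not a real project.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # Deterministic build of the project as a Nix package:
      # this is the part devenv.nix has no equivalent for.
      packages.x86_64-linux.default = pkgs.stdenv.mkDerivation {
        pname = "my-project";
        version = "0.1.0";
        src = self;
      };

      # The developer environment: the role devenv competes with.
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.hello ];
      };
    };
}
```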
The compiler is FOSS under a permissive license Apache 2.0). Only the online editor, similar to Overleaf, is not Open Source. Please check the facts before hitting reply.
Scientific/numeric/data Python is essentially a DSL around a C API, which creates friction (try, for example, mapping a custom function over a Pandas column). In Julia, it's just Julia. It's liberating to extend and use a library written in the same language you're working in, and it leads to surprising synergies.
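A small Python illustration of that friction: the vectorized expression stays inside NumPy's compiled kernels, while `Series.map` with a custom Python function falls back to one interpreter call per element — same result, very different execution model.

```python
# Sketch of the pandas "custom function" friction: vectorized ops
# run in compiled code, but mapping an arbitrary Python function
# over a column calls back into the interpreter per element.
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": np.arange(10_000, dtype=float)})

# Vectorized: one call into NumPy's compiled kernels.
fast = np.sqrt(df["x"]) + 1.0

# Custom Python function: Series.map invokes it once per row.
def custom(v):
    return v ** 0.5 + 1.0

slow = df["x"].map(custom)

# Identical results either way; only the execution path differs.
assert np.allclose(fast, slow)
```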
> First, we evaluate, for each voxel, subject and narrative independently, whether the fMRI responses can be predicted from a linear combination of GPT-2’s activations (Fig. 1A). We summarize the precision of this mapping with a brain score M: i.e. the correlation between the true fMRI responses and the fMRI responses linearly predicted, with cross-validation, from GPT-2’s responses to the same narratives (cf. Methods).
Was this cross-checked against arbitrary inputs to GPT-2? I'd guess that with 1.5 billion parameters you can find a representative linear combination for almost anything.
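To make the worry concrete, here is a toy sketch with purely synthetic numbers (nothing to do with the paper's actual data, and the paper does use cross-validation): when there are far more regressors than samples, a linear map "predicts" even pure noise perfectly in-sample, so only the held-out correlation carries any information.

```python
# Toy demonstration: with features >> samples, ordinary least
# squares fits random noise exactly on the training set, while
# the held-out correlation stays near zero. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_features = 50, 50, 500  # features >> samples

X = rng.standard_normal((n_train + n_test, n_features))  # "activations"
y = rng.standard_normal(n_train + n_test)                # pure noise "voxel"

# Least-squares fit on the training half only.
w, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)

train_corr = np.corrcoef(X[:n_train] @ w, y[:n_train])[0, 1]
test_corr = np.corrcoef(X[n_train:] @ w, y[n_train:])[0, 1]

print(f"in-sample r = {train_corr:.3f}")  # ~1.0: perfect fit to noise
print(f"held-out r  = {test_corr:.3f}")   # near 0: no real signal
```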
They assume linearity. They map their choice of GPT-2 properties onto their choice of (brain blood flow!) properties. They then claim there are correlations, based on a few fMRI datasets.
If something serious were on the line, this type of analysis would get you fired.
Reading this, it feels like we might as well give up on there being any science any more, tbh. For this to appear in Nature -- it feels like the Rubicon has been crossed.
How can we expect the public not to be "anti-vax" (etc.), or to be otherwise competent in the basic tenets of modern science (experiment, refutation, peer review), if Nature isn't?
It's not Nature, it's Scientific Reports. The bar to publication in the two couldn't be more different. Nature is one of the premier high-impact journals; Sci. Rep. is a pretty middle-of-the-road, relatively new open-access journal.
> Scientific Reports is an online peer-reviewed open access scientific mega journal published by Nature Portfolio, covering all areas of the natural sciences. The journal was launched in 2011.[1] The journal has announced that their aim is to assess solely the scientific validity of a submitted paper, rather than its perceived importance, significance or impact.[2]
That last line is quite literally the opposite of what happened here. The only grounds for accepting this paper is how on-trend its topic is. The "scientific validity" of correlating floating-point averages over historical text documents with brain blood flow is roughly zero.
This is just a crystallization of the pseudoscience trends of the last decade-plus: associative statistical analysis; assuming linearity; the reification fallacy; failure to construct relevant hypotheses to test; no counterfactual analysis; no serious attempt at falsification; trivial sample sizes; profound failure to provide a plausible mechanism; profound failure to understand the basic theory in the relevant domains; "AI"; "neural"; "fMRI"; etc. The paper participates in a system of financial incentives largely benefiting industrial companies with investments in the relevant tech, and it is designed to be a press release for those companies.
If I were to design and teach a lecture series on contemporary pseudoscience, I'd be half-inclined to spend it all on this paper alone. It's a spectacular confluence of these trends.
I work in neuroscience and pharmacology. My impression of my own field is far different from what you state here. You made a statement about all scientific exploration, but you seem to only read about a few limited areas.
I happen to be BS-facing, it must be said. I ought to calm myself with the vast amount of "normal science".
But likewise, we're in an era when "the man on the street" feels comfortable appealing to "the latest paper" delivered to him via an aside in a newspaper.
And at the same time, the "scientific" industry which produces these papers seems to have not merely taken the on-trend funding, but sacrificed its own methods to capture it.
In other words, "the man on the street" seems to have become the target demographic for a vast amount of science. From pop-psych to this, all designed to dazzle the lay reader.
Once found only on pop-sci bookshelves; now, everywhere in Nature!
Live Preview (a WYSIWYM editor) comes to mind: you type Markdown but see it rendered. Some interesting extensions/add-ons centered around knowledge management are arising, too.
Unfortunately, Obsidian itself is not open source, but it's worth checking out.