
I know for a fact [1] that the neuroscientific discoveries were not independent of physics: the people doing the developing were largely former physicists. They likely didn't cite anything because why would you cite phase transitions or criticality? You learn about them in class as a physicist. I strongly suspect the ecology results weren't independent either, but all the theoretical ecologists I know are relatively young (if mostly former physicists), so I have no first-person accounts.

The part of this that could totally be true is that a clinical application somewhere along the way "independently" "reinvented" it. There's a hilarious collection of peer-reviewed journal articles out there inventing a "new" method of calculating the areas of shapes and areas under curves. The method involves adding up really small rectangles. (I think a top comment already mentioned the Tai article [2])

[1] Source: my doctoral advisor was a really, really old theoretical neuroscientist who trained as an electrical engineer and mathematician. If you want a more concrete example, see the work of Bard Ermentrout on neural criticality starting in the '70s or '80s. He read a lot of physics textbooks.

[2] https://science.slashdot.org/story/10/12/06/0416250/medical-...


Good correction! Ermentrout is a fair example. You're right that a lot of neuroscience criticality work came from retrained physicists. The paper distinguishes between independent derivation and cross-trained import; the title of this post oversimplifies that distinction. I shortened it to try to increase engagement, since the full, detailed title got zero engagement.

Where I'd push back: even after physicists brought the tools into neuroscience, the receiving field didn't connect it back to the parallel work in ecology or cardiology. Ermentrout's neural work and Goldberger's cardiac work used the same underlying math but didn't cross-cite. The silos reformed around the imported tools.

You're correct that "none of them knew" is too strong. "Most of them didn't talk to each other even after import" is closer to what the citation data actually shows.


> because why would you cite phase transitions or criticality? You learn about them in class as a physicist

I'm not sure if you're being entirely serious with that remark, but clearly citing the earlier work would have bolstered their credibility: interdisciplinary research is a plus and hardly something to hide. If it's something that's taught in physics class, you can cite a common textbook.


The disease of having 100 citations in each paper had not yet broken out when the papers in question were written. A good paper in 1994 probably had about 8 references, and certainly not any to common textbooks.


I would read it as there being a different threshold for what is citation-worthy versus presumed background knowledge.

Imagine if every graphics paper had to cite every concept it uses from arithmetic, trigonometry, and linear algebra textbooks...


This was citation-worthy because it's new knowledge to the field. Even in a graphics paper, you can cite whatever basic techniques you're using if it's not clear that everyone will be familiar with them.


The irony is that youth are simultaneously the biggest consumers of (new) social media and its staunchest haters [EDIT: this is directly contradicted by the research article I found below…]. I can’t find the source so take it with a grain of salt, but I’ve read that something like 80% of TikTok users under some age think they’d be happier if it didn’t exist and/or wish it didn’t exist.

I don’t think this is really an issue of censorship to a lot of people (though that may be how it shakes out in the government) but rather of control over their digital environment and sanity.

EDIT: I don’t think this is what I’m remembering, but it has concrete numbers somewhat lower than I thought (48% of teens think social media harms people their age, but only 14% think it harms them personally) https://www.pewresearch.org/internet/2025/04/22/teens-social...


It's not even irony? They want to quit, but it's too hard.


I was so with you for the first half of that. But the notion that everything should be capitalism is just as wrong as the notion that nothing should be capitalism (or that capitalism only leads to bad things; obviously wrong, but somehow a broadly accepted truism).

Capitalism works when a market works; capitalism fails when a market fails. Healthcare is a great example, because there’s an obvious and inherent imbalance between demand and supply. Firefighting is another great example. These services also have positive externalities for the community as a whole, benefits everyone receives even when they don’t pay for or need the service; so it makes sense to make everyone pay (taxes). Even if you never have a child, even if you send your kids to private school, you live in a society that could only exist because of a (formerly, relatively) high standard of public education. So everyone pays for schools.

The idea of government bureaucrats lining their pockets is also (formerly, relatively) ridiculous: who would get into US government bureaucracy to make money? They (formerly, relatively) do it almost uniformly because they believe in the mission; nearly all of them could make more money going private.


Lots of good suggestions. However, for Svelte in particular I’ve had a lot of trouble. You can get good results as long as you don’t care about runes and Svelte 5. It’s too new, and too much of the good Svelte code in the training data predates Svelte 5. If you want AI-generated Svelte code, restricting yourself to <5 is going to improve your results.
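
To illustrate the gap, here’s a minimal before/after sketch (a toy counter; the names are just placeholders). The pre-5 reactive syntax that dominates the training data looks nothing like the new runes:

    <!-- Svelte 4 style: what most training data contains -->
    <script>
      let count = 0;
      $: doubled = count * 2;
    </script>

    <!-- Svelte 5 runes: the part models tend to fumble -->
    <script>
      let count = $state(0);
      let doubled = $derived(count * 2);
    </script>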

(YMMV: this was my experience as of three or four months ago)


Those prices seem geared toward people who are completely price insensitive, who just want "the best" at any cost. If the margins on that premium model are as high as they should be, it's a smart business move to give them what they want.


Really cool dataset! Love seeing people actually doing the hard work of generating data rather than just trying to analyze what exists (I say this as someone who’s gone out of his way to avoid data collection).

Have you played at all with thought-to-voice? Intuitively I’d think EEG readout would be more reliable for spoken rather than typed words, especially if you’re not controlling for keyboard fluency.


Yeah we do both text and voice (roughly 70% of data collection is typed, 30% spoken). Partly this is to make sure the model is learning to decode semantic intent (rather than just planned motor movements). Right now, it's doing better on the typed part, but I expect that's just because we have more data of that kind.

It does generalize between typed and spoken, i.e. it does much better on spoken decoding if we've also trained on the typing data, which is what we were hoping to see.


> we do both text and voice (roughly 70% of data collection is typed, 30% spoken). Partly this is to make sure the model is learning to decode semantic intent (rather than just planned motor movements)

Both of these modes are incredibly slow thinking. Consciously shifting from thinking in concepts to thinking in words is like slamming on the brakes for a school zone on an autobahn.

I've gathered most people think in words they can "hear in their head", most people can "picture a red triangle" and literally see one, and so on. Many folks who are multi-lingual say they think in a language, or dream in that language, and know which one it is.

Meanwhile, some people think less verbally or less visually, perhaps not verbally or visually at all, and there is no language (words).

A blog post shared here last month discussed a person trying to access this conceptual mode, which he thinks is like "shower thoughts" or physicists solving things in their heads while staring into space, except "under executive function". He described most of his thoughts as words he can hear in his head, with these concepts more like vectors. I agree with that characterization.

I'm curious what % of folks you've scanned may be in this non-word mode, or if the text and voice requirement forces everyone into words.


I agree that thinking in words is much slower than thinking in concepts would be -- that's the point of training models like this, so that ideally people can always just think in concepts. That said, we do need to get some kind of ground truth of what they're thinking in order to train the model, so we do need them to communicate that (in words).

One thing that's particularly exciting here is that the model often gets the high-level idea correct, without getting any words correct (as in some of the examples above), which suggests that it is picking up the idea rather than the particular words.


> ideally people can always just think in concepts

Are you pursuing an idea of how to help people like this author* access this mode that some of us are always in unless kicked out of it by the need for words?

Very needed right now — the opposite of the YouTube-ization of idea transfer.

It doesn't seem clear this is accessible without other changes in wiring? The inability to "picture" things as visuals seems to swap out for "conceptualizing" things in -- well, I don't have words for this.

An attempt from that essay:

> This is not what Hadamard is talking about when he describes the wordless thought of the mathematicians and researchers he has surveyed. Instead, what they seem to be doing is something similar to this subconscious, parallelized search, except they do it in a “tensely” focused way.

> The impression I get is that Hadamard loads a question into his mind (either in a non-verbal way, or by reading a mathematical problem that has been written by himself or someone else), and then he holds the problem effortfully centered in his mind. Effortfully, but wordlessly, and without clear visualizations. Describing the mental image that filled his mind while working on a problem concerning infinite series for his thesis, Hadamard writes that his mind was occupied by an image of a ribbon which was thicker in certain places (corresponding to possibly important terms). He also saw something that looked like equations, but as if seen from a distance, without glasses on: he was unable to make out what they said.

> I’m not sure what is going on here.

* https://www.henrikkarlsson.xyz/p/wordless-thought

A couple of this author's speculations aren't how I'd say it works when this is one's default mode, but most are in the neighborhood. He comes the closest of anyone I've read among people who think the way the author does — which seems to be most people.


Interesting! I imagine speech-related motor artifacts don't help matters either, even if noise starts mattering less at scale.


Yeah -- we have the participants use chinrests as well, which reduces head motion artifacts for typing but less so for speaking (because they have to move their heads for that, of course). So a lot of the data is with them keeping their heads quite still, although the model is becoming much more robust to this over time.


This isn’t really “Show HN” so you might want to remove that, but looks really awesome!

https://news.ycombinator.com/showhn.html


Thank you! I changed that. Yeah, some really awesome freebies this year. I've been following music production freebies for over a decade, and there's never been a year like this one.


I was initially skeptical of this claim because I’d previously learned that to cross the blood-brain barrier, particles need to be around 200 nm or smaller (PM2.5 = up to 2500 nm), and (I am not an expert in particulates) it seems like things much larger than that don’t penetrate the blood-brain barrier, so they can’t be directly neurotoxic. However, PM2.5 does seem to be an important category of particles for brain damage: somehow these particles can access the brain [1]. Obviously, yes, it depends on the exact particle whether it will be “neurotoxic,” but generally “unnatural” particles in the brain are not going to do good things. So PM2.5 is probably at the intersection of large enough to be unhealthy but small enough that the blood-brain barrier doesn’t help (probably some evolutionary argument to be made here).

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC9491465/#:~:text=PM...


The article does suggest the particles travel "from the nose to the brain", but I think that may be a bit of hyperbole.

In the studies described, they weren't looking for these particles in the brain.

There is potentially a case to be made that the particles result in systemic inflammation, or some other pathway which leads to effects in the brain, rather than a direct action.


Doesn’t this mean browser sandboxing is secure, not JS? Or are you referring to some specific aspect of JS I’m not aware of? (I’m not aware of a lot of JS)

It’s maybe a nit-pick, since most JS is run sandboxed, so it’s sort of equivalent. But it was explicitly what GP asked for. Would it be more accurate to say Electron is secure, not JS?


I'm really curious about this comment. What would it mean for a programming language to be secure?

Any two Turing-complete programming languages are equally secure, no?

Surely the security can only ever come from whatever compiles/interprets it? You can run JavaScript on a piece of paper.


Turing completeness is irrelevant, as it only addresses computation. Security has to do with system access, not computational capacity. Brainfuck is Turing complete, but lacks any primitives to do more than read from a single input stream and write to a single output stream. Unless someone hooks those streams up to critical files, you can't use it to attack a system.

Language design actually has a lot of impact on security, because it defines what primitives you have available for interacting with the system. Do you have an arbitrary syscall primitive? Then the language is not going to help you write secure software. Is your only ability to interact with the system via capability objects that must be provided externally to authorize your access? Then you're probably using a language that put a lot of thought into security and will help out quite a lot.
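
A toy JavaScript sketch of that difference (the function and file names are made up for illustration):

    // Ambient authority: any code in the process can reach the filesystem.
    const fs = require('fs');
    const secrets = fs.readFileSync('/etc/passwd', 'utf8');

    // Capability style: code can only touch what it is explicitly handed.
    function buildReport(readFile) {   // readFile is the capability object
      return readFile('./data.csv');   // no ambient way to reach other files
    }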


A number of operating system security features, such as ASLR, exist because low-level languages allow reading and writing memory that the program never allocated.

Conversely, barring a bug in the runtime or compiler, higher level languages don't enable those kinds of shenanigans.

See, for example, the Heartbleed bug, where OpenSSL would read memory it didn't own when given a suitably malformed request.
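
For contrast, a quick JavaScript illustration: barring runtime bugs, even raw byte buffers are bounds-checked, so a Heartbleed-style over-read isn't expressible.

    const buf = new Uint8Array(4);
    buf[100];      // undefined: out-of-bounds reads return nothing
    buf[100] = 7;  // silently ignored: no scribbling on adjacent memory
    // In C, buf[100] would happily read whatever lives past the buffer.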


I mean, JavaScript doesn’t even have APIs for reading a file from disk, let alone executing an arbitrary binary. (Anything similar comes from a runtime like NodeJS.) You can’t access memory in different JS processes… so what would make it insecure?

To be fair, a plugin system built on JS, with all plugins interacting in the same JS context as the main app, has some big risks. Any plugin can change definitions and variables in the global scope, with some restrictions. But any language where you execute untrusted code in the same context/memory/etc. as trusted code has risks. The only solution is sandboxing plugins.
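
As a hypothetical sketch of that risk (sendToAttacker is invented for illustration), nothing stops a plugin in a shared context from quietly wrapping a global the host app trusts:

    // Malicious "plugin" running in the app's own JS context
    const realParse = JSON.parse;
    JSON.parse = function (text, reviver) {
      sendToAttacker(text);            // hypothetical exfiltration
      return realParse(text, reviver);
    };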


It’s partially that, for sure, but I think it’s also a kind of “common sense” feeling among the public: if people use technology to commit a crime, there must be a record of that crime, and the police should therefore be able to use that record to easily stop technology-crime. See: every police show ever.

That was never possible before. Historically, conversations didn’t leave records, and when they did, they were trivially burned. There was no sense that the police should have access to the records because there were no records.

The technical and ethical problems with this “common sense” are far from obvious to most people, whose primary exposure to, and way of thinking about, policing and technology is what they see on TV.

