First, a disclaimer: this is a great article and an important moral issue, and it reminded me of the fantastic museum the Germans set up at the Charité in Berlin, a longstanding medical institution that obviously played a significant role in murderous Nazi “science”, and which to this day addresses those who sacrifice their wellbeing to science through testing and tissue donation. Definitely don’t skip it if you’re ever in town.

OK, now for what I’m really curious about. For this discussion, I’d ask that we take “AI is a big deal now that the frame problem is tractable, and LLMs are shockingly good at encoding human brain activity” for granted, if possible:

Is anyone else way more scared than they’ve ever been about (science) news, and do you feel unable to read stories that are objectively interesting and important to you because they remind you of the uncertainty to come? Even if we abstract away all the economic automation, the aesthetic and cultural conflict, and the ethics/safety concerns of the new tech… we are reading people’s minds. Sure, it’s mostly in fMRIs for now; it’s clearly in its infancy, and they’ve only had since early 2023 to work on it in earnest, but I think it’s moving at lightning speed for the life sciences.

For one example, look at the explosion of consumer/prosumer/freelancer BCI tech, such as EEG headsets (the g.tec Unicorn being my hacker fave, $1000 for a modular Bluetooth headset) and fNIRS visors (Muse being the big player in the consumer space; the prosumer space is still coming down in price to feasible levels). For another example, this paper is what first alerted me to this underreported situation (though now that there’s a whole issue of Nature on it, I imagine it’ll gain prominence):

https://arxiv.org/abs/2309.14030v2

So, anyone have advice? Again, anything other than “you’re wrong, it’s not a big deal”, please :)



