Human neuroscience mustn't forget its human dimension (nature.com)
46 points by rntn 3 months ago | 65 comments



Cybernetic enhancements of human minds and bodies are, reasonably, a decade or two away from becoming mainstream. Leading up to that, I wonder if there will be enough effort put into consumer protections so that it doesn't turn into a knock-off cyberpunk novel.

I don't think we live in any dystopian future or Black Mirror episode yet, but I do think that if a character in one of those episodes had a flashback to pre-cyberpunk life, it'd look quite a bit like 2024.

Are consumer protections even the right way to navigate this? Could the American government reasonably handle it? Presumably it'd be the NIH and its peer organizations doing this, and I trust them a reasonable amount.


While I'd love to try out cybernetic enhancements in a decade or two, as far as I know we're not even close to anything now. It's possible with an expensive MRI to see some brain activity, and cochlear implants do exist, so that's something. But other than that, how would cybernetic enhancements even connect to the brain, know what kind of information to transfer in what form, and do anything at all?


And even if we get there with something like Neuralink, how big is the gap between training it for someone who currently lacks that limb or sense, and who is willing to go through the rigorous, extensive mental and physical training required to master the new tool, and training it for the Joe Blow consumer?


I forget which sci-fi story explored this, but society bifurcated into two branches of augments. One was what I'll call the "apple" crowd: you could get "boring" upgrades like an Inspector Gadget-style hand. Or you could go black market and get cutting-edge stuff that could, and likely would, cause you massive damage. But the drawbacks were outweighed by the advantages you could gain. So the tech companies ran sort of clandestine observation programs to accelerate their own research.


Tangentially, I wonder how much of the research chemical ("legal highs") market is just companies releasing things to gather data on compounds that could then be commercialized.


This is a fun thing to think about, but commercializing a drug is antagonistic to what these drugs are designed for: working around the drug enforcement agencies so that the highs stay technically legal.

Cannabis is the closest thing to combining the two. MDMA is possible, though I can imagine it being synthesized to remove the euphoria once there has been more testing. Mushrooms, LSD, and ketamine all show their maximum effect at lower doses than would be taken recreationally.


> I wonder if there will be enough effort put into consumer protections so that it doesn't turn into a knock-off cyberpunk novel.

There will be zero. We're entering the "find out" phase of fucking around with making the government as ineffective as possible.


We're really not. The government is as effective as it wants to be.


Or rather, the government is as effective as we collectively want it to be. And before somebody chimes in about lobbying and all: looking back at history, there were solutions to even worse iron fists (I don't like those solutions, but they exist).


Lobbying only works because government employees allow it to work.


And government employees allow it to work because the populace allows it.

Gov employees aren't the main target or problem in lobbying. Politicians are. And the people who keep voting for politicians whose campaigns are funded by big business and billionaires.

Who have you voted for lately?


I'd revise that to 50 to 100 years away. Still way sooner than we'll have superhuman AGI in silicon, anyway! ;D


>Cybernetic enhancements of human minds and bodies are, reasonably, a decade or two away from becoming mainstream.

Hopefully, but maybe not for us. If it gets any hotter we'll have to start migrating between hemispheres in the summer.


> we'll

we??

WE???

bro if it gets THAT bad, there'll be too much chaos. Hope you have a private jet & lots of inflation-proof assets if you're trying to prepare for something as terrible as that


>inflation-proof assets

Gold was not very valuable to the Aztecs since they had a lot of it. What exactly is "inflation proof"?

I think saying anything about the far future is just a projection of fantasy. Reality will crush such fantasies under its weight.


The majority doesn't even use most of the features they have on their smartphone. The rate of tech change does not mean people learn faster; people still take time to learn anything new. And the majority hardly have any interest in what mesmerizes and fascinates technologists. Rightly so, cuz tools are meant to have purpose. Most technologists these days build shit cuz it's possible to do so, not cuz they have thought too deeply about what people actually need. This gap will grow as the rate of tech change increases.


> reasonably

Based on what?


An imagination that seems very reasonable to the commenter but wildly off-base to everyone else.

I'd personally bet more on a resurgence of typewriters and film in the next decade or two over deeper integration with the increasingly contrived electronic world.


Now what makes you think that?

I don’t agree with OP at all, but I also see no indication of typewriters or film coming back, except as a retro novelty.

I am hopeful for smaller less rent-seeking software shops to get a foothold while the big players chase statistical models.


Have you used a typewriter or film recently? They're terrible to use compared to electronics in practically every way and, in my opinion, only exist as a form of retro-fetishism.


>but wildly off-base to everyone else.

Sure thing, except no. The only assertion that is wildly off-base is yours.


The exponential acceleration of technology, and currently existing products such as polarized contact lenses, subdermal chips, and AR sunglasses.


I don't see how any of those technologies necessitates cybernetics, especially in the traditional sense of mimetic devices or brain-computer interfaces.

It doesn't seem like a great argument for declaring that these technologies, which have very little to do with the content of TFA, are imminently becoming mainstream.


The enhancement with written language already happened 10,000+ years ago; the enhancement with pocket calculators, 50+ years ago. There are some bionic devices for locomotion, hearing, sight, cardio, and epilepsy, but mainstream cybernetic enhancements haven't really kicked off yet. We still need to solve hard problems like durability, energy, and biointerfacing.


It is both exciting and fraught.


Since the article mentions Leborgne's story as a preface but fails to describe what is significant about that story, here it is: https://www.sciencedirect.com/science/article/abs/pii/S18788...


I think it is vital to maintain a strong focus on ethical considerations, in this field especially.


The same companies that test self-driving cars on local populaces before getting consent?

The sentiment is admirable but there isn't a govt or company in the world that sees mind control devices and thinks "how can we use this to protect individual rights?"


Based on the title, I was waiting to see mention of ML/AI, but it quickly went down a path I didn't expect: the potential downsides of having AI de-anonymize data that was previously anonymized.

I'm kind of surprised by this take and ultimately think this article wasn't particularly interesting.

Here's my take on the cross-section between neuroscience and AI: https://bower.sh/who-will-understand-consciousness

Ultimately, I don't think we are going to discover consciousness in chaotic biological systems. Rather, we will find it in silicon.


They've been a tag team since the 1950s. Reinforcement learning is a great example of the back and forth: Sutton and Barto's major contribution (temporal-difference learning) in computer science was followed by experimental wetware confirmation of the same mechanism in the CNS.


Oh, that's interesting! I'd love to read more about collaboration between the two disciplines.


Creating a computational simulacrum of consciousness means nothing for our understanding: if we don't understand biological consciousness, it just means we've created a very capable computational system. As someone with a background in both computer science and neuroscience, the approach of abandoning the brain to go play with our toys strikes me as profoundly arrogant, and more importantly, missing the point.


> if we don't understand biological consciousness, it just means we've created a very capable computational system.

This is the conceit of humanity. Consciousness does not rest on us. Further, we are so inescapably biased here. A system cannot study itself and make any meaningful progress. We are shackled by the limitations of this bias, bogged down by the muck and mire of disentangling ourselves from the equation.

> the approach of abandoning the brain to go play with our toys

In 10 years this comment will not age well.

> strikes me as profoundly arrogant, and more importantly, missing the point

You've missed the point. The best way to study consciousness is by removing as much of ourselves as possible from the equation. We must first understand consciousness through an external conscious entity. Only then can we introspect and learn something through biology.


Even if you create a "conscious" system, if you don't study the brain in vivo, you have no way of relating that "consciousness" back to our own. You've built a very intelligent system; it just has nothing to do with brains.

Edit: if you want to talk about conceit, look no further than the idea that in order to study the brain, we "just" need to build one first.


Seems like this article is putting the cart before the horse - what recent breakthroughs have there been to justify saying that we're entering some new era of neuroscience, or that AI has anything to do with that new era?


The Australian Human Rights Commission discussed many of these issues in its March report, "Protecting Cognition: Background Paper on Neurotechnology":

https://humanrights.gov.au/our-work/technology-and-human-rig...


Where is the author's name? Strange editorial. Neuroscientists should "involve participants"? What if they cannot speak, like the man in whom Broca's area was found?

>>Without a doubt, human neuroscience is entering a new and important era. However, it can fulfil its goals of improving human experiences only when study participants are involved in discussions about the future of such research.


First, a disclaimer: this is a great article and an important moral issue, and it reminded me of the fantastic museum the Germans set up at the Charité in Berlin, a longstanding medical institution that obviously had a significant role to play in murderous Nazi "science", and which to this day deals with people sacrificing their wellbeing to science via testing and tissue donation. Definitely do not skip it if you're ever in town.

Ok now what I’m really curious about — I would ask that we take “AI is a big deal now that the frame problem is tractable, and LLMs are shockingly good at encoding human brain activity” for granted in this discussion, if possible:

Is anyone else way more scared than they've ever been about (science) news, and feeling unable to read stories that are objectively interesting and important to you because they remind you of the uncertainty to come? Even if we abstract away all the economic automation, aesthetic and cultural conflict, and ethics/safety concerns of the new tech… we are reading people's minds. Sure, it's mostly in fMRIs for now, and it's clearly in its infant stages (they've only had since early 2023 to work on it in earnest), but I think it's moving at lightning speed for the life sciences.

For example, look into the explosion of consumer/prosumer/freelancer BCI tech, such as EEG headsets (g.tec's Unicorn being my hacker fave, $1000 for a modular Bluetooth headset) and fNIRS visors (Muse being the big player in the consumer space; the prosumer space is still coming down in price to feasible levels). For another example, this paper is what first alerted me to this underreported situation (tho now that there's a whole issue of Nature on it, I imagine it'll gain prominence):

https://arxiv.org/abs/2309.14030v2

So, anyone have advice? Again, other than “you’re wrong it’s not a big deal”, please :)


As with all knowledge and technology people will find ways to use it for money and power, until there is a fundamental shift in human society.


I think mind uploading will happen first, and tbh I don't care about my carbon copy.


What would your uploaded mind do in a non-corporeal state? Besides likely go insane?


In Cory Doctorow's Walkaway, a human's mind is uploaded to a computer after her death. Except the scan of her brain is not perfect, because physics. So the uploaded mind keeps going insane every few hours or days, and the researchers keep playing with the simulation parameters, trying to stabilize it/her.


Answer questions on some website to make AfterLife currency to buy new skins, I imagine.


Is there Steam in the non-corporeal world? I could finally catch up on all these unplayed games in my library.


I imagine that an uploaded mind in a non-corporeal state would have unprecedented opportunities for exploration.


Who said I'd be non-corporeal? There are many kinds of corpora.


I've watched a playthrough of SOMA, no thanks.


The previous era showed that dead fish get emotional when they see pictures of humans. Can't wait to see what will be shown in the new era, where you throw in an additional billion or so parameters to predict a dozen observations.


> dead fish get emotional when they see pictures of humans

could you explain further here?


I'm the first author of the salmon fMRI paper, if you have any questions. Generally, how the investigators do their statistics can lead to implausible conclusions. Extraordinary claims should require extraordinary evidence.


Wow, thanks for the awesome work! It's my favorite neuroimaging study, and we have a print of the poster on our lab's wall.

How do you find the rigor of neuroimaging analyses has developed since that paper? I don't follow it much, but I've seen some quite wild-looking stuff being published (e.g. predicting smallish datasets by feeding voxel activations into a huge ANN). Are my concerns about a new era of overfitting realistic?
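To make that worry concrete, here's a toy sketch (purely synthetic data and made-up sizes, no relation to any published pipeline) of how a flexible model can look perfect on a tiny dataset while having learned nothing:

    # Hypothetical numbers: 18 "subjects", 5000 "voxels", labels unrelated to the data.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(18, 5000))   # pure noise features, no signal at all
    y = np.tile([0, 1], 9)            # balanced labels, independent of X

    model = MLPClassifier(hidden_layer_sizes=(256,), max_iter=2000)
    model.fit(X, y)
    print("train accuracy:", model.score(X, y))                           # ~1.0, memorized
    print("cross-validated:", cross_val_score(model, X, y, cv=6).mean())  # ~0.5, chance

With far more features than subjects, a big model can memorize noise and look flawless in-sample; only held-out validation exposes it.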


> we have a print of the poster on our lab's wall

That's amazing - you made my day with that statement.

I left neuroscience for the software world back in 2012, so I don't have a lot of data points since then. I know between 2009 and 2012 the field went from ~50% of papers doing the right statistical corrections to about ~90%, which is a huge step in the right direction. I hope those numbers are even better today.

The expense of MRI time means that studies include far fewer subjects than they might want or need. My opinion is that there are still significant challenges that go beyond correction for multiple comparisons, like data peeking and low-powered experimental designs. I think we should move to a mindset where we need replication and convergent evidence for major claims, not a single study with 18 college freshmen as participants.
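To put a rough number on the low-power point, here's a toy simulation (an assumed medium effect size, not data from any actual study) of how often a study with 18 subjects detects an effect that really is there:

    # Assumed effect size d = 0.5; purely illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, d, alpha, reps = 18, 0.5, 0.05, 20_000

    hits = 0
    for _ in range(reps):
        sample = rng.normal(loc=d, scale=1.0, size=n)  # a true effect exists
        if stats.ttest_1samp(sample, 0.0).pvalue < alpha:
            hits += 1

    print(f"power with n = 18: {hits / reps:.2f}")  # comes out around 0.5

Roughly a coin flip for a medium-sized true effect, which is exactly why replication and convergent evidence matter.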


Why and how?

Also, how do you determine that a salmon is sad?


Pretty much what TeMPOraL said. You can scan pretty much anything with fMRI and find results if you don't use proper statistical corrections. I have found "significant" voxels in a pumpkin before while doing testing. Our argument was/is that scientists need to have appropriate rigor in their analyses, otherwise you can reach ridiculous conclusions - like a dead fish looking alive...
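If you want to see the arithmetic behind that, here's a toy sketch (pure synthetic noise, not a real fMRI pipeline): test a whole brain's worth of independent voxels without correction and some will come up "significant" by chance alone.

    # No signal anywhere; every "hit" below is a false positive.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_voxels, n_scans = 130_000, 30  # roughly whole-brain scale

    noise = rng.normal(size=(n_voxels, n_scans))
    result = stats.ttest_1samp(noise, 0.0, axis=1)
    p = result.pvalue

    alpha = 0.001  # a common uncorrected voxel-wise threshold
    print("uncorrected 'active' voxels:", int((p < alpha).sum()))          # ~130
    print("Bonferroni-corrected hits:", int((p < 0.05 / n_voxels).sum()))  # ~0

At p < 0.001 uncorrected you expect on the order of a hundred false-positive voxels in pure noise, which is more than enough to draw a convincing blob on a dead salmon.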


With fMRI. The emotion the salmon "feels" doesn't matter, what matters is that it was "feeling" something while being dead.


It is a reference to this (http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf) fMRI study, where using standard fMRI analysis techniques researchers found significant results when using a dead salmon as a subject.




The wild and unsubstantiated claims in this thread, from "cybernetic enhancements" to "mind uploading", are so hand-wavy we could start an entire jazz-hands quartet.

I refer back to my comment from a few days earlier:

https://news.ycombinator.com/item?id=40733615#40734638


Please don't post dismissive comments about the rest of the community and please especially don't do it repeatedly.

There's an entire spectrum of views here. Singling out the segment that you most dislike and overgeneralizing from there is something that many readers do*, but it leads to low-quality, irritable comments and those lead to lame discussion.

https://news.ycombinator.com/newsguidelines.html

* https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


Dang, with respect, is there any room for critical discussion on this site? I understand wanting to maintain the climate, but a lot of these discussions come from non-experts or have profoundly negative ethical implications. Particularly when things like the brain come up, it seems like science fiction is more accepted than actual science.


Certainly there is room for critical discussion.

My GP comment was about irritable generalizations driven by a common cognitive bias—that's something different, no? Or perhaps you're talking about something else?


I suppose there's a line between constructive criticism and ranting.


Or curious criticism!


Look, I want my cyberpunk future. Stop pointing out that it's not going to happen in my lifetime.



