A connectomic study of a petascale fragment of human cerebral cortex (seas.harvard.edu)



This is fantastic progress. There are two directions of attack necessary to get to brain emulation.

First, scanning technology needs to advance enough to image an entire brain, with various stains, at decananometer voxel resolution.
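
For a sense of scale, here is a rough back-of-envelope sketch in Python. It assumes the H01 fragment's ~1.4 PB per ~1 mm^3 extrapolates linearly to a whole brain of roughly 1.2 million mm^3; both numbers are ballpark assumptions, not figures from the paper:

    # Ballpark: extrapolate the H01 fragment's data density to a whole brain.
    h01_bytes = 1.4e15           # ~1.4 PB for the ~1 mm^3 fragment (rough)
    h01_volume_mm3 = 1.0
    brain_volume_mm3 = 1.2e6     # ~1,200 cm^3 human brain (rough)

    whole_brain_bytes = h01_bytes * brain_volume_mm3 / h01_volume_mm3
    print(f"~{whole_brain_bytes / 1e21:.1f} ZB")   # ~1.7 zettabytes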

Second, we still don’t know enough about how neural cells in the brain function. Just recently, glial cells like astrocytes were recognized to have a significant effect on signaling and the entropy of neural activity. I expect findings from neuroscience will slowly work their way into the field of neuromorphic computing, and eventually we’ll have a complete picture.

Source: doing my undergrad thesis on neuromorphic computing.


What’s your field of study for your degree?


I’m in a multidisciplinary engineering major called engineering science.


The dataset the paper uses: 1.4 petabyte browsable reconstruction of the human cortex - https://h01-release.storage.googleapis.com/landing.html


There was a story I read once where general AI was unobtainable, but computing power kept advancing, so brain simulations were done by processing brain slices. Every AI was just a person who had died with an intact brain that could be preserved. Self-driving cars were a thing, but each one was someone’s grandma.


if you can recall the author/title i’d love to look at it! i’m curious how the story deals with the disjointed sensory input in the “revived” agent. e.g. does Grandma experience sight/sound/touch? if not, how does she cope with sudden full-body paralysis, numbness, and loss of familiar senses? did she agree to this, or know it was going to happen? and so on.

i remember one subplot of a Cory Doctorow book focused on a lab trying to develop a self-aware machine; the barrier there was that the machine would commit suicide as soon as it understood the broader context of its being. sort of makes me wonder whether, to achieve the sorts of AI you’re talking about, we need to not just map the brain but also understand (or brute-force) enough of it to avoid agent crises. that barrier could be larger than developing a wholly new neural network (idk).


There are a few bits in Neal Stephenson's "Fall; or, Dodge in Hell" that explore this.

And also, "Lena" by qntm -> https://qntm.org/mmacevedo


Google 'MMAcevedo' for a horrifying short story take on this concept.



The Bobiverse covers this, but it's definitely not what the OP meant. There's also an arc in the Welcome to Night Vale podcast that is similar (using frozen brains) but still different.


The whole Bobiverse series is built around this concept of one guy's brain scan and him embracing his new existence.

By the way, do you have the title of the book you mentioned?


the book was Walkaway: https://en.wikipedia.org/wiki/Walkaway_(Doctorow_novel)

the only reason i didn't name it before is that i couldn't recall the name off-hand.


It doesn't sound quite like the story the previous poster was thinking of, but Hannu Rajaniemi's Jean le Flambeur series (beginning with The Quantum Thief) has uploaded, edited, and copied human minds as a significant component of the story.


Can't be The Quantum Thief, because in Hannu's universe AGIs (or 'dragons', IIRC) exist, are actually pretty easy to make, and are much smarter than the uploads or derived human brains: so smart that they are far too dangerous for any side to use, and that's why they still run on brains.


In Sam Harris' podcast he mentions an ethical consideration when trying to create an AI that's conscious: an entity that's fully aware of itself yet stuck in a "box" with no way to get out. The perfect torture machine.


How is that different from general AI though?


General AI would be a unique mind. Simulation doesn't require engineers to understand the process, just that the process works. I can copy a mechanism without scientifically understanding what that mechanism is doing: I can follow a recipe or an Instructable, or copy grandma's brain, with little to no understanding of what's really happening.

Then again, if you can copy a genius researcher and put a million copies of that mind to work on solving AGI methodically, you don't need precision and understanding to start with. You just hope the million-mind genius collective doesn't lie or mislead.


But the inputs to the simulation are still artificial so I don’t see why it really matters.


It absolutely matters. Emulating a human mind means copying all of its flaws, biases, and values.

An emulated mind probably won't want to nuke the Earth whereas an artificially created and constructed general intelligence may have no qualms in killing both itself and everyone else around it.


It would be General Intelligence, but it's probably not Artificial General Intelligence, insofar as the intelligence aspect wasn't designed and created. It also wouldn't be able to get smarter at singularity rates, since its intelligence comes from using humans as raw material.


It's called The Age of Em, by Robin Hanson. Here's the Wikipedia article[1] and a review by a popular blogger[2]. Interesting concept, but not a realistic vision of the future.

[1] https://en.wikipedia.org/wiki/The_Age_of_Em

[2] https://slatestarcodex.com/2016/05/28/book-review-age-of-em/


This is more or less the premise of the Bobiverse stories.


Amusing how they (minimally) protected the identity of the individual standing on a roller chair in the main illustration.


There is a nice "gallery" of cool things found in this dataset, viewable as the 3D reconstruction and the raw electron microscope images side by side:

https://h01-release.storage.googleapis.com/gallery.html


I feel like brain complexity is a mirage, a distraction from the fact that even trivial organisms with few neurons demonstrate quite complex, arguably intelligent behavior, and even single cells demonstrate primitive awareness of their surroundings.


I've often wondered about the talk of always needing 10,000x more processing power before we can emulate something like the human brain. It seems to me it should be possible now, just running at 1/10,000th of the speed. I would still be impressed.
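
For what it's worth, a toy version of that arithmetic; every number below is an order-of-magnitude assumption, not a measurement:

    # Toy real-time-factor estimate for whole-brain emulation.
    synapses = 1e15            # commonly cited order of magnitude
    mean_rate_hz = 1.0         # assumed average firing rate, ~1 Hz
    ops_per_event = 10         # assumed compute cost per synaptic event
    required_ops_per_s = synapses * mean_rate_hz * ops_per_event   # ~1e16

    machine_ops_per_s = 1e12   # assumed sustained useful throughput
    slowdown = required_ops_per_s / machine_ops_per_s
    print(f"~{slowdown:,.0f}x slower than real time")   # ~10,000x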


Nothing constrains it to a linear relationship: existing compute hardware is essentially single-threaded, while neurons are massively parallel. If you need to iterate through an O(n^2) algorithm over 1.4 PB while the brain does the same work in O(1), then you still need an absurd level of processing power to get back to a 1:10,000 ratio.
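
To see why, a tiny sketch (toy numbers, all assumed): if each simulated step requires a serial O(n^2) sweep, the per-step cost grows with n rather than staying a fixed constant:

    # Toy illustration: serial O(n^2) work per simulated step, vs. the
    # brain doing the same update in parallel in constant wall-clock time.
    n = 1e9                        # interacting elements (assumed)
    machine_ops_per_s = 1e12       # sustained serial throughput (assumed)
    seconds_per_step = n**2 / machine_ops_per_s
    print(f"{seconds_per_step:.0e} s of compute per simulated step")  # 1e+06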


Perhaps someone has been reading Anders Sandberg[1]?

1. https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf


is there a more recent comprehensive report? the link above is from 2008.


I don't think so. He's published stuff with them since [1] but not on the same topic.

1. https://www.fhi.ox.ac.uk/research/reports/


It's simply a matter of scaling this slice-scan-render technique up to an entire human brain to simulate human thinking, correct? What are the biggest technological hurdles left?


My understanding of this is that the connectome itself wouldn't be sufficient to simulate human thinking.

If I understand correctly, the connectome is basically a map of each neuron to its connections, somewhat like a wiring diagram. However, I've heard it's more realistic to regard each neuron as a small computer rather than a wire. Neurons carry a lot of state that has to be accounted for: for example, methylated groups controlling the expression of DNA, and the resulting amounts and types of various receptors, which influence how the neuron behaves.
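
As a toy illustration (a made-up model, nothing from the paper): two neurons with identical wiring and identical input can produce different output if an internal variable, here standing in for receptor expression, differs:

    # Toy leaky integrate-and-fire neuron: "gain" stands in for internal
    # state (e.g. receptor expression) that a wiring diagram doesn't record.
    def count_spikes(gain, steps=1000, dt=1.0, tau=20.0, threshold=1.0):
        v, spikes = 0.0, 0
        for _ in range(steps):
            v += dt * (-v / tau + gain * 0.06)  # same synaptic drive each step
            if v >= threshold:
                v, spikes = 0.0, spikes + 1     # spike and reset
        return spikes

    # Identical wiring and input, different internal state, different output:
    print(count_spikes(gain=1.0), count_spikes(gain=0.5))  # the second never fires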


Seems we can't yet simulate the 302-neuron nervous system of a worm (C. elegans) despite having its connectome, so no, not correct. https://wikipedia.org/wiki/OpenWorm


Do all of these worms use exactly 302 neurons for locomotion? Or is 302 just the number found in one specific worm that was dissected?


Specifically 302. Each worm has the same nervous system, down to the individual cell.


Pretty much everything. Connectome != functional understanding of the brain. We've had the C. elegans (worm) connectome, and more recently fly connectomes, for years, and we are still struggling to understand basic logic encodings in those animal models. IMHO we lack a foundational understanding of the logic-encoding mechanisms in the brain. Many neuroscientists and computer scientists are working on this problem, but to my knowledge we are still not there.


There are interactions that the wiring diagram doesn't account for.

For example, some neurons release "neuromodulators" that diffuse beyond their immediate synaptic partners. Electric fields produced by a fiber bundle can affect activity in neighboring neurons.

Inside the cell, there are all kinds of processes on short (millisecond) and long (day) timescales that alter how the same neuron responds to identical stimuli.
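
A toy sketch of that last point (purely illustrative, not a real neuron model): a slow internal adaptation variable makes the same unit respond differently to identical pulses:

    # Toy adaptation model: the response to an identical pulse shrinks as
    # a slow internal state variable builds up.
    def respond(pulse, state):
        out = max(0.0, pulse - state["adaptation"])
        state["adaptation"] += 0.2 * out   # accumulates on a fast timescale
        state["adaptation"] *= 0.999       # decays on a much slower one
        return out

    state = {"adaptation": 0.0}
    # Five identical 1.0 pulses produce five different responses:
    print([round(respond(1.0, state), 2) for _ in range(5)])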


Photographing a bird doesn’t automatically tell you how to fly. It does give you some hints though.


Was this done for the first time? Is there any novel technology that enabled this? Why are these studies rare?


This is incredible work!




