
Lab-grown ‘mini-brains’ do mimic human brain development - chc2149
https://spectrumnews.org/news/toolbox/lab-grown-mini-brains-mimic-brain-development/
======
hacker_9
Title is an exaggeration; all they found were structures that look similar to
structures in our brains. They have no idea what these structures actually do,
and they even said there is a huge amount of variability between the
structures.

As an aside, this:

 _" They first froze the mini-brains and cut them into ultra-thin sections,
which they mounted onto glass slides. They then labeled the sections with
different combinations of colored fluorescent tags that are specific to
certain cell types, and imaged the sections using an automated scanner."_

Shows just how immature the neurobiology field is. I imagine that slicing, and
then manually reconnecting the slices in a 3D program, must be a pretty
painstaking process. Not to mention you are left with no idea how data
traveled around the structure. A code analogy would be having a huge codebase
handed to you in little chunks, and having to connect up the pieces by hand.
And at the end of it, not even being able to debug it and see if things
work as expected.
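
For a sense of what that reconstruction step looks like in practice, here's a
rough sketch of just the "stack the slices back into a volume" part (plain
numpy / scikit-image, hypothetical file names), and this is before you deal
with lost or squished sections, let alone working out what the circuit
actually does:

    import glob
    import numpy as np
    from skimage.io import imread
    from skimage.registration import phase_cross_correlation
    from scipy.ndimage import shift as nd_shift

    # hypothetical: one grayscale image per physical section, in cutting order
    slice_paths = sorted(glob.glob("sections/slice_*.tif"))
    slices = [imread(p, as_gray=True) for p in slice_paths]

    # naively re-align each slice to the previous one (translation only);
    # real tissue is also rotated, stretched and torn, which this ignores
    aligned = [slices[0]]
    for img in slices[1:]:
        offset, _, _ = phase_cross_correlation(aligned[-1], img)
        aligned.append(nd_shift(img, offset))

    volume = np.stack(aligned)  # (n_slices, height, width) 3D image stack

And translation-only alignment is the easy part; none of this tells you how
signals moved through the tissue while it was alive.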

~~~
hsdistefa
I worked in a lab where we had a similar problem reconstructing 3D images from
manual sections via the same procedure.

Not only is it a painstaking process, but some slices are lost due to human
error while sectioning and the tissue is mechanically distorted when placed
onto the slides (squished between the glass and the cover slip).

The solution was simple, albeit much more expensive than sectioning: just MRI
scan the organ.

~~~
foxyv
If the sample is small enough you can always use an optical coherence
microscope. You can get optical-microscope resolution in 3D from in-vivo
samples. It's similar to a confocal microscope but with much higher
resolution. Only thing is they can be dang expensive, but nowhere near as much
as an MRI.

You can also get OCT devices for pretty cheap. They use them in optometrist
offices for imaging retinas.

------
balabaster
While I like this at a conceptual level, I'm struggling with it on a personal,
perhaps moral level. A real brain, trapped, prisoner in a vat... if it's a
real brain, it's going to have some level of sentience, just like the rest of
us. Yet it is a slave. This is somewhat horrific to me. AI never really
bothered me right up until I read this...

~~~
derefr
> if it's a real brain, it's going to have some level of sentience

Without a sensorimotor apparatus for the brain to manipulate and receive
reward-signals from, and thus train to control, there is very likely no
"thought" as we would consider it—nothing coherent, no _train_ of thought, no
high-level patterns. It's more like the sort of "thought" a foetus would have
before its first moment of conscious awareness.

~~~
QuantumGravy
I'm inclined to agree that such a mini-brain at this stage is _most likely_
non-conscious and incapable of suffering, but how would we know?

Organisms with far "simpler" brains than ours appear to be capable of
suffering. I'm unaware of the extent to which human embryonic brain
development follows expected principles of evolutionary developmental biology,
but if that holds, a sufficiently developed brain should at least be given the
same ethical consideration as an animal. Vertebrates are commonly born with a
capacity for suffering, so I question how much external input is required. If
the necessary brain structures are there, those same structures should suffice
for achieving brain-states equivalent to percepts of suffering.

Looks to me like we're not quite there yet, as these cerebral organoids are
pretty darn small and comparatively disorganized, but it's only a matter of
time before the ethical questions will need to be faced.

With a little searching, I found similar views from neuroskeptic.
http://blogs.discovermagazine.com/neuroskeptic/2013/09/02/the-case-of-the-tiny-human-brains/#.WQOYINLyup0

~~~
derefr
Maybe I used a very bad and distracting (and political) example. "Suffering"
wasn't really something I was trying to talk about here. (To be clear, pain
is a matter of instinct, which is to say, a predefined brain structure that
"works" from birth—and maybe beforehand—whenever it shows up.) Rather, I was
_specifically_ talking about this thing we call "thinking." The thing that
humans do when they're awake and lucid, and _don't_ do when they're
asleep/unconscious/in a dissociative fugue. The thing that the courts care
about as _mens rea_.

Many higher-level vertebrates develop some capacity for "thinking"—that is,
for planning/strategizing, puzzle-solving, learning and synthesizing
information, forming concepts and abstractions, _overriding_ instincts with
beliefs based on evidence, etc. Humans and all other primates do this; as do
crows and their relatives; as do dolphins; etc.

But none of these animals _start_ with that ability from birth. "Thinking" has
never been observed as behavior available instinctually to any animal. A
newborn crow or gorilla, despite being rather self-sufficient with a library
of instinctual responses, won't be able to solve the puzzles that we get the
adults of those species to solve and which we cite as proof of their
intelligence. They _grow_ that ability from months/years of life experience.

In fact, almost by definition, "thinking" is something you have to _build_:
it's a big conceptual hierarchy with sensory data at its base. And—as far as
we know—instinct can't pre-bake the needed raw sensory data into brains. (It
can bake in higher-level associations, like scent- or pattern-associations for
predator avoidance. But it doesn't include these in low-level "raw" terms
where a brain can derive any of its _own_ interpretations or abstractions from
them.)

Without accumulated experience, a brain is still experiencing the world, and
_reacting_ to the world, and experiencing _reward signals_ from the world—like
most animals do—but is not yet _thinking_ about the world. Thinking is a
matter of updating beliefs about schemas/models/concepts; and those things
only _exist_ when derived from sense-data "evidence."

~~~
mileycyrusXOXO
Thanks for sharing your thoughts. I recently had a discussion concerning
animal sentience and was having trouble articulating my thoughts. You have
managed to concisely convey "thought" in an eloquent fashion.

------
idiot74
How long until we can use these for machine learning?

~~~
Robotbeat
After Elon Musk's Neuralink figures out a scalable way to interface with
neurons. You'd probably need millions of electrodes, not just the few hundred
we use today.

It would be kind of crazy if we developed full AI by literally using brains in
vats. But it makes sense (even if it is horrifying). Meat is cheap, and the
brain is like an exa-OPS computer running on 20 watts of power. If you could
solve the interface problem and figure out how to actually use it practically,
the brain is like 6 or 7 orders of magnitude cheaper than the next-cheapest
computing substrate. That's like half a century of Moore's Law (and Moore's
Law is basically over now... much slower pace, at least).
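
To sanity-check that "half a century" figure, a quick back-of-the-envelope
conversion of the numbers above (assuming the classic ~2-year doubling
cadence, which is itself generous these days):

    import math

    # taking the "6 or 7 orders of magnitude" figure above at face value
    for orders in (6, 7):
        advantage = 10 ** orders
        doublings = math.log2(advantage)   # ~20 and ~23 doublings
        years = doublings * 2              # at one doubling every ~2 years
        print(f"10^{orders}x -> {doublings:.0f} doublings -> ~{years:.0f} years")
    # prints roughly 40 and 47 years, i.e. about half a century of Moore's Law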

(Here's an example using rat neurons:
https://singularityhub.com/2010/10/06/videos-of-robot-controlled-by-rat-brain-amazing-technology-still-moving-forward/
And this one:
https://www.newscientist.com/article/dn6573-brain-cells-in-a-dish-fly-fighter-plane/ )

~~~
shrimp_emoji
However, wetware still ages, gets diseases, needs life support, and is fairly
non-serviceable.

If I could pick what platform _I_ run on, it'd be hardware (and I hope such
hardware eventually comes along).

~~~
Florin_Andrei
Sure. But if they're cheap to grow, just make more.

"Cattle, not pets."

(I do realize the pretty horrific undertones of this discussion.)

------
dkarapetyan
Getting spooky and ethically tricky.

------
jlebrech
put a neural net in those

