1. Biological brains are non-differentiable spiking networks, much more complicated than backpropagated ANNs.
2. Ion channels may or may not be affected by quantum effects.
3. The search space is huge (but organisms aren't optimal, and natural selection is probably local search).
4. If it took ~3.8b years to get from cells to humans, how do we fast-forward:
* brain mapping (replicating the biological "architecture")
* gene editing on animal models to build tissues and/or brains that can be interfaced (and if such an interface could exist, how do we prevent someone from trying to use human slaves as computers? At what point does using a tissue for computation become torture?)
* simulation with computational models outside of ECT (quantum computers or some new physics phenomenon)
Note: those 3.8b years are from a cell to a human. We haven't built anything remotely similar to a cell. And I'm not claiming that an AGI system will need cells or spiking nets; most likely a lot of that is redundant. But the entropy and complexity of biological systems is huge, and even rodents can outperform state-of-the-art models at general tasks.
IMHO, the quickest path to AGI would be to focus on climate change and making academia more appealing.
Rodents? Try insects. In the late 40s and early 50s, when neural networks were first explored with great enthusiasm, some of the leading minds of that generation believed (were convinced, in fact) that artificial intelligence (or AGI in today's terms) was five to ten years away; the skeptics, like Alan Turing, thought it was fifty years away. Seventy years later, we've not achieved insect-level intelligence, we don't know what path would lead us to insect-level intelligence, and we don't know how long it would take to get there.
To those saying that insects or rodents can't play Go or chess -- they can't sort numbers, either, and even early computers did it better than humans.
They are creepy smart.
Something about the predatory nature of both insects seems to tune up their intelligence. Of course, it never hurts having the BBC tell your story, either.
Yep. To be a predator, you need to outwit your prey and think fast, so it's thought to be a natural INT grinder. `w´
Presumably, this could drive up the INT of prey too, but maybe it's cheaper to just be faster/harder to see? But you can't be THAT hard to see, and the speed only saves you in failed ambushes, so planning successful ambushes continues to reward the INT of predators (unless they just enter the speed arms race, like cheetahs or tiger beetles).
They probably can, internally; they just can't operate on tokens we recognize as numbers explicitly. For a computer analogy, take Windows Notepad: there's probably plenty of sorting, square-root computation, and linear interpolation being done under the hood in the GUI rendering code, but none of that is exposed in the interface you use to observe and communicate with the application.
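For instance, something as humble as this toy snippet (my illustration, not actual Notepad internals) gets evaluated constantly inside rendering code without ever surfacing in the UI:

```python
def lerp(a, b, t):
    # Linear interpolation: a workhorse hiding inside rendering code.
    return a + (b - a) * t

# e.g. alpha-blending a foreground pixel over a background pixel --
# the kind of math a GUI does thousands of times per frame without
# exposing any of it in the interface.
fg, bg, alpha = 200, 50, 0.25
print(lerp(bg, fg, alpha))  # 87.5
```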
I think you'd be surprised how much progress is also being made outside those two factors. It's sort of like saying graphics only improve with more RAM and faster compute. We know there's more to it than that.
In many cases, the cutting edge of a few years ago is easily bested by today's tutorial samples and 30 seconds of training. We're doing better with less data and orders of magnitude less compute.
An illustrative example comes from the first lesson in fastai's deep learning course: an image classifier that would have been SOTA as late as 2012/13 can be built by a hobbyist in about 30 seconds.
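For reference, the whole thing is roughly this (paraphrased from memory of the fastai course notebooks; exact function names vary between fastai versions):

```python
from fastai.vision.all import *

# Download the Oxford-IIIT Pet dataset; in this dataset, filenames
# starting with an uppercase letter are cats.
path = untar_data(URLs.PETS)/'images'

def is_cat(x):
    return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

# Fine-tune a pretrained ResNet; one epoch is enough to beat
# anything that existed circa 2012/13 on this task.
learn = vision_learner(dls, resnet34, metrics=error_rate)  # cnn_learner in older versions
learn.fine_tune(1)
```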
That said, I don't disagree that this is all narrow AI, at best.
The key, of course, is redefining life and intelligence as whatever the current state-of-the-art accomplishes. (Cue explanations that the brain is just a giant pattern matcher.) It makes drawing parallels and prophesying advancements so much easier. Of all our sciences, that's perhaps the one thing we've perfected--the science of equivocation. And we perfected it long ago; perhaps even millennia ago.
Rodents can't play Go or do a lot of other humanly-meaningful tasks. We don't need to build an artificial cell. A cell is too many components that by blind luck happened to find ways to work together; this is as far from efficient design as can be. The same way we don't build two-legged airplanes, we don't need anything close to the wet, spiky mess that happens in human brains. It's more likely that we already have all the ingredients in ML, and we need to connect them in an ingenious way and amp up the parallelism.
What about pigeons predicting breast cancer with 99% accuracy, rats learning to drive cars, monkeys building tools?
Rodents stand a bigger chance at learning Go than AlphaZero spontaneously building stone tools and driving cars.
AlphaZero is also capable of playing chess, shogi, and Go at a super-superhuman level.
> pigeons predicting breast cancer with 99%
Pigeons contain 340M neurons (with dendrites and all, giving each neuron higher computational capacity than an ANN unit).
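Back-of-envelope, with assumed synapse counts (my numbers, not measured pigeon data):

```python
# If each of ~340M neurons carries on the order of 1,000 synapses
# (a conservative guess), the "parameter count" alone rivals large
# ANNs -- before accounting for nonlinear dendritic computation.
neurons = 340e6
synapses_per_neuron = 1e3  # assumption
print(f"~{neurons * synapses_per_neuron:.0e} synapses")  # ~3e+11
```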
> Rodents stand a bigger chance at learning Go
They probably don't, because they can't understand the objective function and their brain capacity is limited.
We don't have anything remotely close to a wetware-enabled transportation device, something that can move on flat land, climb mountains, swim in bodies of water, crawl in caves, hide in trees.
Within the constrained problem, the machine exceeds humans. But generally, the wetware handles moving around much better.
Same with AI: in a constrained problem, the AI can excel (beat humans in chess and go). But I doubt we will see a general AI any time soon.
Human intelligence also evolved by solving constrained problems, one at a time. Life existed before the visual system, but once that was solved, evolution moved on to other things. In AI we have a number of sensory systems seemingly solved (speech recognition, visual object recognition), and we are closing in on certain output (motor) systems: NLP text-synthesis systems seem a lot like the central pattern generators that control human gait, except for language.

What seems to be missing is the "higher-level", more abstract kernels that create intent, which are also difficult to train because we don't have a lot of meaningful datasets. Or maybe we have datasets that are too big (the entirety of Wikipedia), but we don't know how to encode them in a meaningful way for training. It's not clear, however, that these "integrating systems" are going to be fundamentally harder to solve than other subsystems. It certainly doesn't seem to be so in the brain, since the neocortex (which hosts sensory, motor, and higher-level systems alike) is rather homogeneous.

In any case, it seems we're solving problems one after another without copying nature's designs, so it's not automatically true that we need to copy nature in order to keep solving more.
Do you have examples of those systems which are competitive in general use rather than specialized niches? The cloud offerings from Amazon, Google, etc. are good in the specific cases they’re trained on but fall off rapidly once you get new variants which a human would handle easily.
Can't tell if sarcasm.
>You assert that an area of physics or mathematics familiar to few neuroscientists solves a fundamental problem in their field. Example: "The cerebellum is a tensor of rank 10^12; sensory and motor activity is contravariant and covariant vectors".
So yeah, I feel that it's pretty fringe (as you suggested).
So it is plausible that nature evolved to be affected by quantum effects.
Actually, it's not so obvious that the brain is not differentiable. If you do a cursory search, you'll find quite a lot of research into biologically plausible mechanisms for backpropagation. I'm not saying the brain does backprop; we just don't know, and it's not outside the realm of plausibility.
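One concrete example from that literature is feedback alignment (Lillicrap et al., 2016): the backward pass uses a fixed random matrix instead of the transposed forward weights, which sidesteps the biologically implausible "weight transport" requirement. A toy NumPy sketch (layer sizes, learning rate, and task are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 10, 32, 1, 0.05
W1 = rng.normal(0, 0.3, (n_hid, n_in))
W2 = rng.normal(0, 0.3, (n_out, n_hid))
B = rng.normal(0, 0.3, (n_hid, n_out))    # fixed random feedback path

X = rng.normal(size=(200, n_in))
y = (X[:, :1] > 0).astype(float)          # toy binary target

for _ in range(500):
    h = np.tanh(X @ W1.T)                 # forward pass
    out = h @ W2.T
    err = out - y                         # dLoss/d(out) for MSE
    dh = (err @ B.T) * (1 - h**2)         # random feedback, not W2.T
    W2 -= lr * err.T @ h / len(X)
    W1 -= lr * dh.T @ X / len(X)

print("final MSE:", float((err**2).mean()))
```

The forward weights gradually "align" with the random feedback, so learning still works -- no exact gradient transport required.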
In a sense, everything is affected by quantum effects. However, neurons are generally large enough that quantum effects do not dominate. Voltage-gated channels are dozens to hundreds of amino acids long. Generally, there are hundreds to millions of ion channels in a cell membrane, and the quantum tunneling of a few sodium ions into or out of the cell will generally not affect the gestalt behavior of the cell, let alone a nervous system's long-term state. Suffice it to say, ion channels are not dominated by quantum behavior.
Largely, we have the building blocks to replicate neurons (as we currently understand them) in silico. However, as is typical with modeling, you get out what you put in, meaning that how you set your models up will mostly determine what they do. Setting your net size, the parameters of your PDEs, boundary values, etc. are the most important things.
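To make "replicating a neuron in silico" concrete at the very simplest level, here's a leaky integrate-and-fire sketch (all parameter values are illustrative assumptions, not measurements):

```python
dt, T = 1e-4, 0.5             # timestep and duration (s)
tau, v_rest = 20e-3, -70e-3   # membrane time constant (s), resting potential (V)
v_thresh, v_reset = -50e-3, -70e-3
r_m, i_in = 1e8, 2.5e-10      # membrane resistance (ohm), input current (A)

v, spikes = v_rest, []
for step in range(int(T / dt)):
    # dV/dt = (-(V - V_rest) + R_m * I) / tau
    v += dt * (-(v - v_rest) + r_m * i_in) / tau
    if v >= v_thresh:         # threshold crossing: spike, then reset
        spikes.append(step * dt)
        v = v_reset

print(f"{len(spikes)} spikes in {T} s")
```

Even this toy ignores everything above: channel kinetics, dendritic cable PDEs, boundary conditions. Swapping in Hodgkin-Huxley dynamics multiplies the cost per neuron considerably.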
Now, that gets you a result, and it's likely to take a fair bit of time to run through. To get it up to real time, the limiting factor really ends up being heat. Silicon takes a LOT of energy compared to our heads, ~10^4 more per 'neuron'. If we want to get to real time, we're gonna need to deal with the entropy.
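A rough version of that arithmetic, using commonly cited estimates (~20 W for the whole brain, ~86B neurons; both figures are my assumptions, not the parent's):

```python
brain_power_w = 20.0                      # whole human brain, ~20 W
n_neurons = 8.6e10                        # ~86 billion neurons
per_neuron_w = brain_power_w / n_neurons  # ~2.3e-10 W per neuron
silicon_penalty = 1e4                     # the parent's ~10^4 factor

print(f"biological: {per_neuron_w:.1e} W/neuron")
print(f"in silico, whole brain: "
      f"{per_neuron_w * silicon_penalty * n_neurons / 1e3:.0f} kW")  # ~200 kW
```

~200 kW of waste heat for one real-time brain is a serious cooling problem.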
But if this is a 100% replicated brain, doesn't that mean its suffering is just as real as a real brain's suffering, and therefore just as cruel? And if not, what's the difference?
Yes, it does.
This reminds me of the idea that free will doesn't exist, but that we have to act as if it were.
So by analogy to that, maybe the AI isn't really suffering, but you have to act as if it were.
More food for thought:
Some surgery blocks memory but can be incredibly painful. Do we need to worry about that? Is suffering that the brain cannot remember "real"?
Gene expression is often tied to the environment the organism is in. Mere possession of a gene isn't enough to benefit from it. Some expressions don't take effect immediately, but rather activate in subsequent generations.
Epigenetics is a whole equally large layer on top of this system. A single-focus approach may not be sufficient, and even if it is, it's not likely to cope with environmental entropy very well.
I understand "gene" to mean some ill-defined, not necessarily contiguous set of genetic sequences (DNA, RNA, and analogs) with an identifiable, particularized expression that affects reproductive (specifically, replicative) success. I think over time "gene" has been redefined and narrowed in a way that makes it easier to claim to have made supposedly model-breaking discoveries.
Some others on HN have made strong cases for why epigenetics isn't a meaningful departure from the classic genetic model; it's just a cautionary tale for eager reductivists who would draw unsupported conclusions from the classic model. See also note #1.
Like what is language, what is intelligence? Some of the smartest linguists and philosophers would proudly declare they have no fucking clue.
Making Alexa turn on the lights or using Google Translate are cool party tricks though.
Idc how many Doom games ya made, but I’m sorry to say a bunch of software engineers aren’t gonna crack this one.
“to worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance” - https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the...
Having no clue is not something to be proud (or ashamed) of.
> I’m sorry to say a bunch of software engineers aren’t gonna crack this one.
Doesn’t sound like you’re at all sorry; it sounds like you’re delighting in putting these uppity tryhards in their place for daring to attack something you hold sacred.