
Progress in AI is due to data and computational power advances. I wonder what kind of advances are needed for AGI.

1. Biological brains are non-differentiable spiking networks much more complicated than backpropagated ANNs.

2. Ion channels may or may not be affected by quantum effects.

3. The search space is huge (but organisms aren't optimal, and natural selection is probably a local search).

4. If it took ~3.8b years to get from cells to humans, how do we fast-forward:

* brain mapping (replicating the biological "architecture")

* gene editing on animal models to build tissues and/or brains that can be interfaced (and if such an interface could exist, how do we prevent someone from trying to use human slaves as computers? At what point does using tissue for computation become torture?)

* simulation with computational models beyond the extended Church-Turing thesis (quantum computers or some new physics phenomenon)

Note: those 3.8b years are from a cell to human. We haven't built anything remotely similar to a cell. And I'm not claiming that an AGI system will need cells or spiking nets, most likely a lot of those are redundant. But the entropy and complexity of biological systems is huge and even rodents can outperform state of the art models at general tasks.

IMHO, the quickest path to AGI would be to focus on climate change and making academia more appealing.




> even rodents can outperform state of the art models at general tasks.

Rodents? Try insects [1]. In the late 40s and early 50s, when neural networks were first explored with great enthusiasm, some of the leading minds of that generation believed (were convinced, in fact) that artificial intelligence (or AGI in today's terms) was five to ten years away; the skeptics, like Alan Turing, thought it was fifty years away. Seventy years later, we've not achieved insect-level intelligence, we don't know what path would lead us to insect-level intelligence, and we don't know how long it would take to get there.

[1]: To those saying that insects or rodents can't play Go or chess -- they can't sort numbers, either, and even early computers did it better than humans.


This jumping spider has ~600k neurons in its brain - https://youtu.be/UDtlvZGmHYk

They are creepy smart.


Speaking of Portias and smarts, I'm just going to recommend "Children of Time" here (and its recently released sequel, "Children of Ruin"). It's a story of a future where humans accidentally uplifted jumping spiders instead of monkeys, and goes deeply into how the minds, societies and technology of such spiders would be fundamentally different from our own.


Just wanted to say holy crap that video was amazing - exciting and suspenseful!


Here's another one for ya if you get stuck with a case of the nosleeps - https://www.youtube.com/watch?v=7wKu13wmHog

Something about the predatory nature of both animals seems to tune up their intelligence. Of course it never hurts having the BBC tell your story either.


>Something about the predatory nature of both animals seems to tune up their intelligence.

Yep. To be a predator, you need to outwit your prey and think fast, so it's thought to be a natural INT grinder. `w´

Presumably, this could drive up the INT of prey too, but maybe it's cheaper to just be faster/harder to see? But you can't be THAT hard to see, and the speed only saves you in failed ambushes, so planning successful ambushes continues to reward the INT of predators (unless they just enter the speed arms race, like cheetahs or tiger beetles).


What is I.N.T.? I couldn't find a definition.


Parent is using the commonly accepted stat abbreviation for intelligence in role-playing games.


> [1]: To those saying that insects or rodents can't play Go or chess -- they can't sort numbers, either, and even early computers did it better than humans.

They probably can, internally; they just can't operate on tokens we recognize as numbers explicitly. For a computer analogy, take Windows Notepad - there's probably plenty of sorting, computing square roots and linear interpolation being done under the hood in the GUI rendering code - but none of that is exposed in the interface you use to observe and communicate with the application.


Computers still do that much better -- there's no way an insect or mammal brain internally sorts ten million numbers -- and they do it much better (at least faster) than humans, too. My point is only that the fact that computers can do some tasks better than insects or humans is irrelevant, in itself, to the question of intelligence.


> Progress in AI is due to data and computational power advances.

I think you'd be surprised how much progress is also being made outside those two factors. It's sort of like saying graphics only improve with more RAM and faster compute. We know there's more to it than that.

In many cases, the cutting edge of a few years ago is easily bested by today's tutorial samples and 30 seconds of training. We're doing better with less data and orders of magnitude less compute.


But not towards AGI. We're just improving on narrow AI after recent breakthroughs thanks to the hardware being powerful enough and large datasets being available.


The point the poster above is trying to make is that given the same amount of data, improvements in technique are leading to significant improvements in accuracy.

An illustrative example comes from the first lesson in fastai's deep learning course: an image classifier that would have been SOTA as late as 2012/13 can be built by a hobbyist in like 30 seconds.
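
For a concrete sense of scale, here's a minimal sketch in the spirit of that lesson, written against the current fastai v2 API (names differ slightly from the 2019 course, and the pets dataset and parameters are just illustrative stand-ins):

    from fastai.vision.all import *

    # Download a standard labeled dataset (Oxford-IIIT Pets) and build
    # dataloaders, reading each image's class out of its filename.
    path = untar_data(URLs.PETS)
    dls = ImageDataLoaders.from_name_re(
        path/"images", get_image_files(path/"images"),
        pat=r"([^/]+)_\d+\.jpg$", item_tfms=Resize(224))

    # Fine-tune an ImageNet-pretrained ResNet; transfer learning is what
    # makes "SOTA-of-2012 in a coffee break" possible.
    learn = vision_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)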

That said, I don't disagree that this is all narrow AI, at best.


Having access to cheap and scalable compute and storage should be helpful for AGI too. It doesn't solve anything but it does give more access to more people.


I'm sure neural nets will herald AI right after the mechanical gears and pneumatic pistons that were envisioned as the secret sauce at the turn of the last century.

The key, of course, is redefining life and intelligence as whatever the current state-of-the-art accomplishes. (Cue explanations that the brain is just a giant pattern matcher.) It makes drawing parallels and prophesying advancements so much easier. Of all our sciences, that's perhaps the one thing we've perfected--the science of equivocation. And we perfected it long ago; perhaps even millennia ago.


> even rodents can outperform state of the art models at general tasks

Rodents can't play Go or do a lot of other humanly meaningful tasks. We don't need to build an artificial cell. A cell is too many components that by blind luck happened to find ways to work together; this is as far from efficient design as can be. Just as we don't build two-legged airplanes, we don't need anything close to the wet, spiky mess that happens in human brains. It's more likely that we already have all the ingredients in ML, and we need to connect them in an ingenious way and amp up the parallelism.


AlphaZero has all the rules for its three games hard-coded; it does a tree search, and its neural network's output layer has exactly n neurons for the n possible moves (see the sketch below). Although it's impressive that they don't teach it heuristics and strategies, it's a very specific task.
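
A toy PyTorch sketch of what such a fixed-size policy head looks like (the sizes and names here are illustrative guesses, not DeepMind's actual code):

    import torch
    import torch.nn as nn

    # 19x19 Go: one logit per board point, plus one for "pass".
    N_MOVES = 19 * 19 + 1
    FEATURES = 256  # assumed feature width coming out of the network trunk

    policy_head = nn.Linear(FEATURES, N_MOVES)

    def move_probabilities(trunk_features, legal_mask):
        # Fixed output: one logit per possible move. Illegal moves are
        # masked to -inf so the softmax covers legal moves only.
        logits = policy_head(trunk_features)
        logits = logits.masked_fill(~legal_mask, float("-inf"))
        return torch.softmax(logits, dim=-1)

    # Toy usage: random trunk features, all moves legal.
    probs = move_probabilities(torch.randn(1, FEATURES),
                               torch.ones(1, N_MOVES, dtype=torch.bool))
    assert abs(probs.sum().item() - 1.0) < 1e-5

The tree search then uses these probabilities as priors to decide which branches are worth exploring.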

What about pigeons predicting breast cancer with 99% accuracy, rats learning to drive cars, monkeys building tools?

Rodents stand a bigger chance at learning Go than AlphaZero spontaneously building stone tools and driving cars.


You are talking about AlphaGo. AlphaZero was not given any prior knowledge of the game and is trained exclusively through self-play -- and it outperformed both AlphaGo at Go and Stockfish at chess (without losing a single game to the latter) with a fraction of the training time.

AlphaZero is also capable of playing chess, shogi and Go at a super-superhuman level.


As impressive as AlphaZero surely is, I don't think it ever got a proper comparison to Stockfish. It was running on a veritable supercomputer while Stockfish was running in a crippled mode on crippled hardware.


I'm not working in this area, but the abstract of the AlphaZero paper [0] seems to disagree with your /any prior knowledge/ point: "Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case."

[0] https://arxiv.org/abs/1712.01815


This is my point exactly. The model is trained without any prior domain knowledge at all. It only has access to a game world where the constraints are a representation of the game's rules.


You can view these as optimized pattern-recognizer regexes. You start with a blank fully connected graph, and it eventually converges on a useful function. That graph has many paths encoded in it that represent specific optimal gameplay.


Isn't this how the neurons and synapses in our brain work, though?


Maybe... there are some other properties of biological neurons we don't currently capture in NNs.


The natural environment encodes "all the rules" for real animals, too. You need some constraints or else there is nothing to be learned. One could say that every survival task is also specific, but each is a slight variation of a previously learned one.

> pigeons predicting breast cancer with 99%

Pigeons have ~340M neurons (with dendrites and all, giving each neuron higher computational capacity than an ANN unit).

> Rodents stand a bigger chance at learning Go

They probably don't, because they can't understand the objective function and their brain capacity is limited.


Scientists have just recently taught rats how to play hide-and-seek for fun. Other scientists have found that slime mold will model the Japanese railroad system. I wouldn't be surprised if rodents (plural) instinctively have a Go strategy once someone figures out how to make an analog game for them.


It's probably safe to assume that even if rodents are behaviorally trained to follow complex rules, they are mostly pattern-matching, and lack the higher-level abstraction and communication models that humans have. If they had them, they would at least attempt to communicate with us, like we do with them. In that case, an elephant that plays Go by pattern matching is no different from a neural network that learned by pattern matching.


The problem with the analogy is that a car is far from a general transportation device. Practically, most cars are solving a very constrained transportation problem: moving on roads that humans made.

We don't have anything remotely close to a wetware-enabled transportation device, something that can move on flat land, climb mountains, swim in bodies of water, crawl in caves, hide in trees.

Within the constrained problem, the machine exceeds humans. But generally, the wetware handles moving around much better.

Same with AI: in a constrained problem, the AI can excel (beat humans at chess and Go). But I doubt we will see a general AI any time soon.


> constrained problem

Human intelligence also evolved by solving constrained problems, one at a time. Life existed before the visual system, but once that was solved, it moved on to other things. In AI we have a number of sensory systems seemingly solved: Speech recognition, visual object recognition, and we are getting close on certain output (motor) systems: NLP text-synthesis systems seem a lot like the central pattern generators that control human gait, except for language.

What seems to be missing is the "higher-level", more abstract kernels that create intent, which are also difficult to train because we don't have a lot of meaningful datasets. Or maybe we have datasets that are too big (the entirety of Wikipedia) but we don't know how to encode them in a meaningful way for training. It's not clear, however, that these "integrating systems" will be fundamentally harder to solve than other subsystems. It certainly doesn't seem to be so in the brain, since the neocortex (which hosts sensory, motor and higher-level systems alike) is rather homogeneous. In any case, it seems we're solving problems one after another without copying nature's designs, so it's not automatically true that we need to copy nature in order to keep solving more.


> In AI we have a number of sensory systems seemingly solved: Speech recognition, visual object recognition,

Do you have examples of those systems which are competitive in general use rather than specialized niches? The cloud offerings from Amazon, Google, etc. are good in the specific cases they’re trained on but fall off rapidly once you get new variants which a human would handle easily.


There are many vision models where classification is better than human. I'm not sure what you mean by 'fall off rapidly'; they do fail, however, for certain inputs where humans are better. But we're talking about models that contain 6 to 7 orders of magnitude fewer neurons than an adult brain.


It's also interesting in the context of how we build our technology in general: we constrain our environments just as much as we develop tools that operate in them. E.g. much as cars were created for roads, we adapted our communities and the terrain around them by building roads and supporting infrastructure. A lot of things around us rely on access to clean water at pressure, which is something we built into our environments, etc.


> A cell is too many components that by blind luck happened to find ways to work together

Can't tell if sarcasm.


carbon chemistry + thermodynamics


!= "luck"


so you think cells had some insight on how to evolve themselves?


more like caused to happen by the Creator.


Who created the creator?


From what I understand, quantum effects being essential to the process is a fringe belief. Penrose is probably the most famous 'serious person' (sorry Deepak Chopra) to espouse the idea, but I'm inclined to believe that might be a Linus Pauling/Vitamin C sort of scenario. Penrose started from the perspective of believing there must be quantum effects, then began fishing for physical evidence of it.


I was taught that the quantum theory of memory and cognition generally falls under Eric Schwartz's "neuro-bagging" fallacy [0]. That is:

>You assert that an area of physics or mathematics familiar to few neuroscientists solves a fundamental problem in their field. Example: "The cerebellum is a tensor of rank 10^12; sensory and motor activity is contravariant and covariant vectors".

So yeah, I feel that it's pretty fringe (as you suggested).

[0] https://web.archive.org/web/20170828092031/http://cns-web.bu...


One interesting hypothesis, re: lithium isotopes in Posner molecules: https://www.kitp.ucsb.edu/sites/default/files/users/mpaf/p17...


"The Secret of Scent" by Luca Turin [0] if I remember correctly goes into research that indicates that there may be quantum effects that explain how shape/chirality of molecules affect smell. [0] https://www.amazon.com/Secret-Scent-Adventures-Perfume-Scien...

So it is plausible that nature may have evolved to be affected by quantum effects.


Yeah, "quantum mechanics and cognition are very complex and therefore equivalent", sorry I don't know who to attribute the quote to.


I think you're recalling the end of this comic[1], which was on the front page of HN a couple weeks ago. So the quote is probably attributable to either Scott Aaronson or Zach Weinersmith.

[1] https://www.smbc-comics.com/comic/the-talk-3


Yes! Thanks


You forgot to mention, crucially, that neurons in close proximity affect each other, which is just one of the things that makes modeling more than a few neurons in the time domain a complete non-starter. It all results in enormous systems of PDEs that we don't yet know how to solve at all. You could say that we do not have the right mathematical apparatus to model any such thing.


I don't follow that. What would prevent (perhaps quite slow) simulation of a larger system of such neurons? E.g. N-body problems are analytically beyond us, but can be simulated to arbitrary precision with certain trade-offs.


Time-domain solutions do not exist for more than a dozen neurons. At least they did not when I took a computational neuroscience MOOC a couple of years ago. State of the art at the time was the nervous system of an earthworm. That is, if you consider what you actually need to do to simulate how potentials will change in the brain over time given a certain starting state and stimuli, the math gets so complicated (and awkward) so quickly that it's not really tractable with the mathematical (or simulation) apparatus we currently have to go beyond such trivial systems.
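
To give a flavor of what "time domain" means here, a toy sketch (my own illustration, not from the MOOC) of the very simplest case: a single leaky integrate-and-fire neuron under constant input. Conductance-based models like Hodgkin-Huxley replace the one linear equation below with several coupled nonlinear ODEs per compartment, and coupling many such neurons together is where tractability collapses:

    # Leaky integrate-and-fire neuron, forward-Euler integration.
    dt, T = 0.1, 100.0              # time step and duration (ms)
    tau, v_rest = 10.0, -65.0       # membrane time constant (ms), resting potential (mV)
    v_th, v_reset = -50.0, -65.0    # spike threshold and post-spike reset (mV)
    i_in = 20.0                     # constant input drive (mV-equivalent)

    v, spike_times = v_rest, []
    for step in range(int(T / dt)):
        v += dt * (-(v - v_rest) + i_in) / tau   # membrane potential update
        if v >= v_th:                            # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset
    print(f"{len(spike_times)} spikes in {T:.0f} ms")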


> 1. Biological brains are non-differentiable spiking networks much more complicated than backpropagated ANNs.

Actually, it's not so obvious that the brain is not differentiable. If you do a cursory search, you'll find quite a lot of research into biologically plausible mechanisms for backpropagation. I'm not saying the brain does backprop; we just don't know, and it's not outside the realm of plausibility.


> 2. Ion channels may or may not be affected by quantum effects.

In a sense, everything is affected by quantum effects. However, neurons are generally large enough that quantum effects do not dominate. Voltage-gated channels are dozens to hundreds of amino acids long. Generally, there are hundreds to millions of ion channels in a cell membrane, and the quantum tunneling of a few sodium ions in or out of the cell will generally not affect the gestalt behavior of the cell, let alone a nervous system's long-term state. Suffice it to say, ion channels are not dominated by quantum behavior.

Largely, we have the building blocks to replicate neurons (as we currently understand them) in silico. However, as is typical with modeling, you get out what you put in, meaning that how you set your models up will mostly determine what they do. Setting your net size, the parameters of your PDEs, boundary values, etc. are the most important things.

Now, that gets you a result, and it's likely to take a fair bit of time to run through. To get it up to real time, the limiting factor really ends up being heat. Silicon takes a LOT of energy compared to our heads, ~10^4x more per 'neuron' (rough arithmetic below). If we want to get to real time, we're gonna need to deal with the entropy.
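
As a back-of-envelope check on that ~10^4 figure (the brain's ~20 W and ~8.6e10 neurons are rough public numbers; the microwatt-scale cost per simulated neuron is an assumption for illustration, not a measurement):

    # Energy per biological neuron: whole-brain power over neuron count.
    brain_watts, brain_neurons = 20.0, 8.6e10
    w_per_bio_neuron = brain_watts / brain_neurons   # ~2.3e-10 W each

    # Assumed cost of simulating one neuron in real time on silicon.
    w_per_sim_neuron = 2e-6                          # microwatt scale (guess)

    ratio = w_per_sim_neuron / w_per_bio_neuron
    print(f"bio: {w_per_bio_neuron:.1e} W, silicon: {w_per_sim_neuron:.1e} W, "
          f"ratio ~{ratio:.0f}x")                    # on the order of 10^4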


This reminds me of an interesting armchair moral dilemma: assume we have the tech to replicate/simulate a biological brain. Now say we want to study the effects of extreme pain/torture etc. on the brain. Instead of studying living animals or humans, we'd just simulate a brain, simulate sending it pain signals, and see what happens.

But, if this is a 100% replicated brain, doesn't that mean its suffering is just as real as a real brain's suffering, and therefore just as cruel? And if not, what's the difference?


> But, if this is a 100% replicated brain, doesn't that mean its suffering is just as real as a real brain's suffering, and therefore just as cruel?

Yes, it does.


Or, assuming you don't believe in souls, a "real" brain's suffering isn't real either. (The brain is just a machine, right?)

This reminds me of the idea that free will doesn't exist, but that we have to act as if it were.

So by analogy to that, maybe the AI isn't really suffering, but you have to act as if it were.

More food for thought:

Some surgery blocks memory but can be incredibly painful. Do we need to worry about that? Is suffering that the brain cannot remember "real"?


I think the word 'real' is way too vague in this context.



Fwiw, after a certain amount of pain, the brain "transcends" it: everything disappears, there are some curious colors here and there, but there is no pain. Experienced that during an inner-ear infection.


> gene editing

Gene expression is often tied to the environment the organism is in. Mere possession of a gene isn't enough to benefit from it. Some expressions don't take effect immediately, but rather activate in subsequent generations.

Epigenetics is a whole equally large layer on top of this system. A single-focus approach may not be sufficient, and even if it is, it's not likely to cope with environmental entropy very well.


If you can craft a gene[1] to express some particular phenotype (a big if), surely you can craft it to express itself without reliance on epigenetic[2] chemistry.

[1] I understand gene to mean some ill-defined, not necessarily contiguous set of genetic sequences (DNA, RNA, and analogs) with an identifiable, particularized expression that affects reproductive (specifically, replicative) success. I think over time "gene" has been redefined and narrowed in a way to make it easier to claim to have made supposedly model-breaking discoveries.

[2] Some others on HN have made strong cases for why epigenetics isn't a meaningful departure from the classic genetic model; it's just a cautionary tale for eager reductivists who would draw unsupported conclusions from the classic model. See, also, note #1.


We still haven't solved language or intelligence.

Like what is language, what is intelligence? Some of the smartest linguists and philosophers would proudly declare they have no fucking clue.

Making Alexa turn on the lights or using Google Translate are cool party tricks though.

Idc how many Doom games ya made, but I’m sorry to say a bunch of software engineers aren’t gonna crack this one.


> Some of the smartest linguists and philosophers would proudly declare they have no fucking clue.

"to worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance" - https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the...

Having no clue is not something to be proud (or ashamed) of.

> I’m sorry to say a bunch of software engineers aren’t gonna crack this one.

Doesn't sound like you're at all sorry; it sounds like you're delighting in putting these uppity tryhards in their place for daring to attack something you hold sacred.



