Depends on your goal. If you want to simulate human intelligence, maybe you want to replicate its constraints, too.
A fair chunk of AI work boils down to “make something that acts like a human.” On the other end of the spectrum is stuff that is more specialized, like very targeted classifiers; there is no reason to expect those would benefit from this.
Has there been research on whether it's possible to simulate human intelligence without also getting all of its flaws? Susceptibility to logical fallacies, unreliable factual recall, fading memory over time, biases, prejudice, etc.
That's something you could philosophise about but not research, unless you already have a human-level intelligence to test. We won't know if it's even possible to replicate in silicon for a very long time.
No, AI work boils down to “output computation results that humans hallucinate could have been generated by a human.”
We're just normalizing to our innate sensibilities. We have no idea if we're making intelligence, or even what that would mean, since we humans can only work within the constraints we evolved into. We have no idea whether we've generalized consciousness, since it could exist across spacetime in forms we can't recognize.
You and I will never exist outside our universe and observe what makes it tick. We're hanging out on Earth making mannequins talk, hallucinating that we're gods because of it. Humans are an artificial intelligence of sorts, given how little of the universe they can observe directly.
Conway modeled his Game of Life after the universe. Sorry, AI researchers: life, consciousness, and visualization were already created by reality. We're just working on an easy-to-use Dewey Decimal system to catalog it.
Eh, maybe. The question then becomes: how much of human behavior is specifically a consequence of the physical constraints on how our neurons are laid out? I'm guessing not much. The article doesn't mention any change in output prompted by the change in internal architecture, which doesn't confirm my guess, but doesn't falsify it either.
There have been physics simulations of bipedal models learning to walk via reinforcement learning. The results were always a bit choppy and robotic until signal propagation delays were implemented; that led to more natural, fluid movements that definitely looked more "human". Sorry, I can't find the video.
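To give a rough sense of what adding such a delay looks like: you buffer each action for a few timesteps before it reaches the simulated actuators. A minimal sketch, assuming a generic environment with a reset()/step(action) interface (the class and all names are illustrative, not from any particular library):

```python
from collections import deque

class DelayedActionEnv:
    """Sketch: hold each action in a queue for `delay` timesteps before
    it reaches the simulated actuators, mimicking nerve-signal
    propagation delay. `env` is any object with reset()/step(action)."""

    def __init__(self, env, delay=3, neutral_action=None):
        self.env = env
        self.delay = delay
        self.neutral_action = neutral_action
        self.buffer = deque([neutral_action] * delay)

    def reset(self):
        # Start with the pipeline full of "do nothing" commands.
        self.buffer = deque([self.neutral_action] * self.delay)
        return self.env.reset()

    def step(self, action):
        # The command issued now only takes effect `delay` steps later.
        self.buffer.append(action)
        return self.env.step(self.buffer.popleft())
```

Pre-filling the buffer with a neutral action means the first few steps execute "nothing" while the first real commands are still in flight, which is roughly what a propagation delay does.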
Personally, I absolutely do think that to generate convincingly human-like intelligence you also need some human constraints; otherwise you get an uncanny valley effect.
Another example is AlphaZero's play style. AIs don't play like humans; they maximize their chance of winning in the long term rather than going for good-looking opportunities that hurt their chances in the long run (as human players do).
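To make that contrast concrete, here's a toy sketch. This is not AlphaZero's actual search (that's Monte Carlo tree search guided by a policy/value network); the moves and numbers are invented just to show the difference in objective:

```python
# Invented moves and evaluations, for illustration only.
moves = ["capture_queen", "quiet_pawn_push"]

immediate_material_gain = {"capture_queen": 9, "quiet_pawn_push": 0}.get
estimated_win_prob = {"capture_queen": 0.48, "quiet_pawn_push": 0.61}.get

# Human-ish heuristic: grab the biggest immediate, visible gain.
human_like = max(moves, key=immediate_material_gain)   # -> "capture_queen"
# Engine-style objective: maximize long-term win probability,
# however quiet the move looks.
engine_like = max(moves, key=estimated_win_prob)       # -> "quiet_pawn_push"
print(human_like, engine_like)
```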
Did the bipedal-walking people try optimizing for energy instead? If so, and the models were still choppy, maybe choppy is actually better.
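For what it's worth, "optimizing for energy" in locomotion RL usually means adding an effort penalty to the reward. A sketch, assuming torque-controlled joints (the function name and weight are made up):

```python
import numpy as np

def locomotion_reward(forward_velocity, joint_torques, energy_weight=0.005):
    # Reward forward progress, penalize effort. Sum of squared joint
    # torques is a common proxy for metabolic cost; the weight here is
    # a made-up, untuned number.
    energy_cost = energy_weight * float(np.sum(np.square(joint_torques)))
    return forward_velocity - energy_cost
```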
I guess I just find the goal of imitating human intelligence, mistakes and all, to be a silly one. The only time you'd want that instead of an actual human is when you're trying to deceive people into thinking your AI is a human. Otherwise, you just want the correct answer (or, if you're afraid of what you're building, a strictly sub-human intelligence).