
Yeah, this. Modern AI is not AI - AI is synthetic machine life and essentially always has been, in fiction and non-fictional idealism.

Deep neural networks have stuck us in a hope-fueled uncanny valley, and very smart people tend to become very confused about their technology when they're subjected to it.

This technology definitely has its place in heuristic programming, but it is not AI.



Someone will definitely be quick to reply with the "AI is a moving goalpost, things stop being called AI when they work, for example..." - so I'll offer my counterpoint up front. These things were never AI except in marketing lingo and in connection to the research in machine learning. The common folk definition of AI doesn't change - it's still the same vision of computers from science fiction, with which you can converse, and which can think better than you (except in some specific ways in which it's super-dumb - this is necessary for the story to have any plot).


Right. And just to be clear, I love the field and most everybody in it, specifically for their idealism - I'm a practitioner myself - clearly everyone involved in the modern field of AI wants to leave a legacy of beneficial impact, and sees AI as their tool to do so. I just think if we're looking for life, we won't find it in the gates of a transistor.


Prior to GPT-2, I would have agreed. With that and GPT-3, I think most people from the 70s familiar with science fiction versions of AI would think this was pretty close to how they conceived of AI, at least until they were educated on the specifics of how it works.


I said this having GPT-3 in mind too. To be fair: my mind was absolutely blown when I first got to play with it, and I'm still deeply impressed by it. The experience altered my beliefs on human intelligence. I've always appreciated the joke that sometimes humans can be hard to distinguish from a Markov chain, but GPT-3 actually made me take this seriously - the quality of output over a 1-5 sentence span is astonishing in how natural it feels.
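
To make the joke concrete, here's a minimal word-level bigram Markov chain. Each next word is locally plausible given the previous one, but there's no global plan at all - which is exactly the "natural over a few sentences, hollow beyond that" effect. The toy corpus and function names are purely illustrative; GPT-3 is a transformer, not a Markov chain, this just shows how far mere local statistics can get you.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words that ever follow it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10, seed=0):
    """Walk the chain: pick each next word from the followers of the last."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:  # dead end: no observed successor
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Hypothetical toy corpus, just to have some transitions to sample from.
corpus = ("the model writes text the model reads text "
          "the reader checks whether the text makes sense")
model = build_bigram_model(corpus)
print(generate(model, "the", length=8))
```

Every adjacent pair in the output is a bigram seen in the corpus, so each step "makes sense" locally - and that's the entire extent of its understanding.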

Still, it doesn't take more than a few sentences from GPT-3 to realize there's nothing on the other side. There's no spark of sapience, not even intelligence. There's the layer that can string together words, but there's no layer that can reflect on them and check whether they make any kind of sense.

To be fair, "sapience" may be too high a bar for the lower bound of what counts as AI. I still think the bound is on that control / reflexivity layer. It's the problem that stifled old-school symbolic AIs. To this day, we can't even sketch an abstract, formal model for this[0]. Maybe DNNs are the way to go, maybe they aren't. But GPT-3 isn't even trying to solve that problem.

I'm not sure when I'll be ready to call something a proper AI, but I think it would first need to demonstrate some amount of on-line higher-level regulation and learning. I.e. not a pretrained, fixed model, but a system where I could tell it, "look, you're doing this wrong, you need to do [X]", and it'll be able to pick up on a refined pattern with a couple examples, not a couple hundred thousand. For this, the system would need to have some kind of model of concepts - it can't be just one big blob of matrix multiplications, with all conceptual structure flattened and smeared over every number.

--

[0] - I think the term of art here is "metacognition", but I'm not sure. It looks like what I'm thinking about, particularly metacognitive regulation.



