
> ... "AI can't do X" until we actually achieve X and then the goalpost is moved.

This gets said often, but I don't think anybody who's credible in the field actually makes statements like this.

Take chess, for instance. In the IBM "Big Blue" documentary from ~2004, they quote journalists as saying "AI can't play chess as well as humans, but if it could, then AI would be solved." Why did the techniques from Big Blue not seem to go anywhere?

I know for a fact that scientists _were not saying this_. The Lighthill Debates on AI specifically talk about why playing games well doesn't really prove anything. [0]

I do agree that OCR has improved greatly thanks to AI, but this feels very niche. Somebody mentioned defect/anomaly detection in another comment, which I was not aware of. All useful, for sure. Still, this doesn't amount to anywhere near the hype from earlier last decade. Moreover, the economics of AI are mostly awful, despite everybody's seemingly best efforts. [1]

Even if it's useful in some vague sense, it's not necessarily economically useful. Amazon has ~10,000 people working on Alexa. [2] Have they turned a profit on these endeavours? I understand they can absorb the costs, but it's not clear to me how the economics will work out here.

ML models haven't even been useful in places where statistical methods have reigned supreme, such as Renaissance Technologies and other hedge funds. No large companies are using neural networks in a significant capacity, to my knowledge.

Another big tell for me is the lack of any consumer products in the space. Where did they go? Why are they missing? This is what I mean by "everybody is competing for the top 20 or so customers."

This is compounded by the unstructured nature of most data. Most databases are still terrible, especially at the few institutions large enough to have that much data and large enough for it to make a difference in their business. There should be more focus on this problem, if anything. A well-tuned, well-structured database will be many times more useful than a fancy model that needs constant retraining. But I guess it's not as cool, so nobody cares.

[0]: https://www.youtube.com/watch?v=03p2CADwGF8 -- highly recommended, with many of the arguments still resonating today.

[1]: https://a16z.com/2020/02/16/the-new-business-of-ai-and-how-i...

[2]: https://qr.ae/pGJUKk -- couldn't find a better source offhand.




>> The Lighthill Debates on AI specifically talk about why playing games well doesn't really prove anything.

For a bit of context, that is a televised debate between Sir James Lighthill, commissioned by the UK government to write a report (the "Lighthill Report") on the state of AI research, on the one side, and John McCarthy [1], Donald Michie [2] and Richard Gregory [3] on the other. The Lighthill Report is widely considered a principal cause of the first AI winter, in the 1970s, which killed AI research dead for a good decade or so (until the next winter, in the 1980s). The debate at that point was basically just for show, as Lighthill had already submitted his report.

Now, I don't know which part of the televised debate you mean when you say that it talks about why playing games well doesn't really prove anything, but that sounds very much like Lighthill's opinion. On the other side we have Donald Michie, of course: creator of MENACE [4], the first reinforcement learning system, which played tic-tac-toe and was built out of matchboxes [5] [6]. Reinforcement learning is, of course, considered important today.
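For anyone who hasn't come across MENACE, its bead mechanism is simple enough to sketch in a few lines of Python. This is my own illustration of the idea (the class and parameter names are mine, not Michie's), not a faithful reconstruction: one "matchbox" per board state holds beads for each legal move, moves are drawn with probability proportional to bead count, and after a game the beads for the moves actually played are topped up (win) or removed (loss).

```python
import random

class Matchboxes:
    """A sketch of MENACE-style learning: one box of beads per state."""

    def __init__(self, initial_beads=4):
        self.initial_beads = initial_beads
        self.boxes = {}      # state -> {move: bead count}
        self.history = []    # (state, move) pairs played this game

    def choose(self, state, legal_moves):
        # Lazily create the box, then draw a move weighted by bead counts.
        box = self.boxes.setdefault(
            state, {m: self.initial_beads for m in legal_moves})
        moves, beads = zip(*box.items())
        move = random.choices(moves, weights=beads)[0]
        self.history.append((state, move))
        return move

    def reinforce(self, won, reward=3, penalty=1):
        # Add beads for every move played in a won game, remove some
        # after a loss (never emptying a box completely).
        for state, move in self.history:
            if won:
                self.boxes[state][move] += reward
            else:
                self.boxes[state][move] = max(
                    1, self.boxes[state][move] - penalty)
        self.history = []
```

On a toy "game" where move "a" always wins and "b" always loses, repeated play shifts the bead counts heavily toward "a", which is the whole trick: the machine plays better moves more often without ever representing why they are better.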

John McCarthy himself was critical of AI game-playing research, particularly on chess. In his response to the Lighthill report [7], he had this to say:

  Lighthill had his shot at AI and missed [8], but this doesn't prove that everything in AI is ok. In my opinion, present AI research suffers from some major deficiencies apart from the fact that any scientists would achieve more if they were smarter and worked harder.

  1. Much work in AI has the ``look ma, no hands'' disease. Someone programs a computer to do something no computer has done before and writes a paper pointing out that the computer did it. The paper is not directed to the identification and study of intellectual mechanisms and often contains no coherent account of how the program works at all. As an example, consider that the SIGART Newsletter prints the scores of the games in the ACM Computer Chess Tournament just as though the programs were human players and their innards were inaccessible. We need to know why one program missed the right move in a position - what was it thinking about all that time? We also need an analysis of what class of positions the particular one belonged to and how a future program might recognize this class and play better.

McCarthy absolutely did not think that "playing games well doesn't really prove anything". He believed that getting machines to play games[9] better than humans would illuminate the mechanisms of the human mind that allow humans to play chess, and to do other things besides. Chess was, for him, a model of human thinking, the "drosophila of AI" [10], much like drosophila is a model organism for biology research.

McCarthy would not have been happy with today's achievements in AI game playing, such as AlphaGo and family. He would have considered them symptoms of the "look ma, no hands" disease: results with no real scientific significance [11]. Michie, who coined the term "Ultra Strong Machine Learning" [12] to describe machine learning that improves the performance of its human user, would probably have thought the same about today's uses of reinforcement learning.

However, neither of them would have agreed that "playing games well doesn't really prove anything".

>> Why did the techniques from Big Blue not seem to go anywhere?

Note that Deep Blue, IBM's chess-playing system that beat Garry Kasparov, did not use machine learning: only good old minimax search (with alpha-beta pruning) and an opening book of moves compiled by chess grandmasters [13]. Minimax only works for board games, and then only for two-player, zero-sum games with perfect information, so it cannot be used outside of chess, go, and other similar games. This is why it did "not seem to go anywhere". It was the kind of AI that McCarthy blasted as having no scientific value.
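For anyone curious what "only minimax" amounts to, the whole algorithm fits on one screen for a game like tic-tac-toe. This is a toy sketch of my own (no alpha-beta pruning, for clarity; Deep Blue's real search added pruning, custom hardware and a handcrafted evaluation function, but the recursion is the same idea):

```python
from functools import lru_cache

# Winning lines on a 3x3 board, indexed 0..8 row by row.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    # board is a 9-tuple of 'X', 'O' or ' '.
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Return (score, best_move); score is from X's perspective:
    +1 if X wins, -1 if O wins, 0 for a draw. X maximizes, O minimizes."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = None, None
    for m in moves:
        child = board[:m] + (player,) + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        if best_move is None or \
           (player == 'X' and score > best_score) or \
           (player == 'O' and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move
```

Running `minimax((' ',) * 9, 'X')` confirms the textbook result that tic-tac-toe is a draw under perfect play. Note how everything here is game-specific: the win test, the move generator, even the score range. That narrowness is exactly why the technique didn't transfer beyond board games.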

_______

[1] Like Donald Michie, but in the US.

[2] Like John McCarthy, but in the UK.

[3] I honestly have no idea. Probably an important early pioneer of AI.

[4] The "Matchbox Educable Noughts And Crosses Engine".

[5] Michie didn't have access to a computer.

[6] Great material about MENACE here: https://rodneybrooks.com/forai-machine-learning-explained/

[7] "Review of ``Artificial Intelligence: A General Survey''" http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthi...

[8] Oops.

[9] Read: chess.

[10] http://jmc.stanford.edu/articles/drosophila/drosophila.pdf

[11] https://www.wired.com/2011/10/john-mccarthy-father-of-ai-and...

  "Computer chess has developed much as genetics might have if the geneticists
  had concentrated their efforts starting in 1910 on breeding racing
  Drosophila," McCarthy wrote following Deep Blue's win. "We would have some
  science, but mainly we would have very fast fruit flies."
[12] "Machine learning in the next five years" https://dl.acm.org/doi/10.5555/3108771.3108781

[13] "AI: A Modern Approach" http://aima.cs.berkeley.edu/ See chapter 5 "Adversarial Search and Games".



