Martin Casado captures the essence well: “I don’t recall another time in CS history where a result like [whether LLMs have a capability] is obvious to the point of banality to one group. And heretical to another.”
100% this. The topic seems extremely and unnecessarily polarized. I was at a high-level senior academic AI meetup last week, and people in that space are genuinely confused when I say I use current LLMs daily in production business environments. They (a) somehow believe business processes that can tolerate incorrectness don't exist, and (b) see flawed basic non-linguistic reasoning as a huge obstacle to any practical application.
On the other side, I meet CEOs and CTOs who believe they will replace whole business divisions with off-the-shelf AI in the coming year.