This logic only applies to generative pre-training, behavior cloning, and other training methods that rely on learning to mimic well-structured content from the real world.
It does not apply to intelligence acquired through methods like RL.
How does the author think about the intelligence of AlphaGo, for instance, which was trained largely through self-play?
Good point. This calls to mind LeCun's recent argument that we're missing models that can learn from raw experience or "self-play". When we have a ChatGPT that learns language strictly from raw audio/video input, then we can start to talk about human-like intelligence.
As for AlphaGo, I would put it in the same category of intelligence as a calculator. It does one thing well -- approximate a Monte Carlo Tree Search.
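To make that concrete, here is a minimal sketch of plain Monte Carlo Tree Search -- the search that the networks can be seen as learning to approximate -- on a toy game of misere Nim (take 1-3 stones; whoever takes the last stone loses). This is not AlphaGo's actual algorithm, which guides the search with learned policy/value networks; the game and all the names here are just illustrative:

    import math, random

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent  # state = (stones_left, player_to_move)
            self.children, self.visits, self.wins = {}, 0, 0.0

    def moves(state):
        return [m for m in (1, 2, 3) if m <= state[0]]

    def step(state, m):
        return (state[0] - m, 1 - state[1])

    def rollout(state):
        # Play random moves to the end. Whoever takes the last stone loses,
        # so the player left to move when stones hit 0 is the winner.
        while moves(state):
            state = step(state, random.choice(moves(state)))
        return state[1]

    def mcts(root_state, iters=3000, c=1.4):
        root = Node(root_state)
        for _ in range(iters):
            node = root
            # 1. Selection: descend by UCB1 while the node is fully expanded.
            while node.children and len(node.children) == len(moves(node.state)):
                node = max(node.children.values(),
                           key=lambda ch: ch.wins / ch.visits
                           + c * math.sqrt(math.log(node.visits) / ch.visits))
            # 2. Expansion: add one untried move, if any remain.
            untried = [m for m in moves(node.state) if m not in node.children]
            if untried:
                m = random.choice(untried)
                node.children[m] = Node(step(node.state, m), node)
                node = node.children[m]
            # 3. Simulation, then 4. Backpropagation.
            winner = rollout(node.state)
            while node:
                node.visits += 1
                node.wins += winner != node.state[1]  # credit the player who moved into node
                node = node.parent
        # Recommend the most-visited move at the root.
        return max(root.children, key=lambda m: root.children[m].visits)

    print(mcts((10, 0)))  # with 10 stones the game-theoretic best move is to take 1

With enough rollouts this converges on the best move for a tiny game like Nim; AlphaGo's networks exist to make that kind of search tractable on something as large as Go.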
Excited to share Autodoc: An AI-powered code generation service for docs sites. Drop an HTML tag on your docs site homepage and get an automatic text-to-code interface for your framework. Instead of searching through manual pages, users of your framework can simply express what they want to accomplish - e.g. "swap the ith and jth rows of a matrix" in numpy - and it will generate code on their behalf.
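For that numpy example, the generated snippet might look something like this (hypothetical output for illustration, not the service's actual response):

    import numpy as np

    A = np.arange(12).reshape(4, 3)  # example 4x3 matrix
    i, j = 0, 2
    A[[i, j]] = A[[j, i]]            # fancy indexing swaps rows i and j in place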
This is a prime example of the moving goalposts of what intelligence "actually" is - in previous eras, we would undoubtedly have considered understanding context, putting together syntactically correct sentences, and extracting the essence from texts to be "intelligent".
Whether or not this thing is worthy of the label "intelligent" is fairly uninteresting. What matters for something like this is its accuracy and whether it can be trusted - that is what I think OP is getting at.
Have you ever read "A Canticle for Leibowitz"? A peripheral bit in the story has a monk develop a mathematical system to determine what word would come next in a manuscript whose edge has been lost. Walter M. Miller, writing that story in 1959, does not portray such a system as having or being perceived to have "actual intelligence", because he can easily imagine that a complex system could appear to work in that way without intelligence.
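Miller's fictional system is, in effect, next-word prediction, and the crudest version really is just bookkeeping. A toy sketch (the corpus and names here are made up):

    from collections import Counter, defaultdict

    corpus = "the lord blessed the work and the work was good".split()

    # Count, for each word, which words have followed it.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        # Most frequent successor seen so far; None for unknown words.
        return follows[word].most_common(1)[0][0] if follows[word] else None

    print(predict_next("the"))  # -> 'work' (seen twice, vs. 'lord' once)

No understanding required -- which is exactly Miller's point.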
Does it do all that, or does it just pretend to understand context and extract the essence from texts? It looks as if it does, because its answers follow the form you'd expect from an intelligent person. But when you look more closely, it often falls apart.
It reminds me of people who use "big words" without actually understanding them. If they don't overdo it or really miss the meaning of a term, they can seem much more educated than they are.
You're asserting here that it understands context, but you haven't provided any argument in support of that assertion.
I think you'll also need to define what you mean by "understanding" (because that term is loaded with anthropocentric connotations) and clearly state what "context" you think the model has.
3D-printed housing is exciting; I'm also optimistic about the future of modular architecture, in which walls (or even room-sized units) are manufactured off-site with plumbing and electricity already included, transported to the construction site, and assembled like Legos. Nexii (https://www.nexii.com/) is working on something similar. It seems to offer significant construction cost reductions, faster build times, and easier repairs, although the space of possible buildings is more limited than with the 3D-printed approach.