If you were given a demo of an AI system that successfully applies a completely new, revolutionary approach to a variety of problems, how open would you be to rethinking your position on optimization techniques?
Modeling seems like a stop-gap for getting past the limitations of Weak AI. As I recall, this is what knowledge-based expert systems tried in times past, and they failed because it's nothing but a glorified masking of the underlying problem with limited, human-input rulesets. I don't agree with Yann LeCun that modeling is the way forward to AGI. I feel it's the best solution people have worked up in response to the limitations of Weak AI, which were broadly and publicly acknowledged in 2017 and early 2018.
> The main limitation right now: the ideas are very computationally expensive.
This is because the core set of algorithms used by the industry is fundamentally flawed yet favorable to big data and cloud computing: a quite lucrative business model for the currently entrenched tech companies. It's why they spend so much effort ensuring the broad range of AI techniques fundamentally stays the way it is; if it does, it means boatloads of money for them.
> So we'll need engineers and researchers to help us to continue scaling our supercomputing clusters and build working systems to test our ideas.
When you're attempting to solve a problem and you're shown, year over year, that it isn't being solved and instead requires ever more massive amounts of compute, it means you're doing something wrong. It would be better to take a step back and fundamentally re-evaluate your approach. Again, how willing would you be to do so if shown something far more novel?