> Neural networks are merely generalized methods used to translate data.
That's true of all computation!
To see this, consider lambda-calculus, which is a calculus of functions (i.e. things that convert inputs into outputs). The computational equivalence of lambda-calculus with Turing machines was one of the early results of computability theory.
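For a concrete flavor of that, here's a minimal sketch (Python, purely illustrative, not from the OP) of lambda-calculus-style terms as nothing but functions that translate inputs to outputs; Church encodings are the usual example.

```python
# Illustrative sketch: lambda-calculus terms rendered as Python lambdas.
# Everything below is "just a function that converts inputs into outputs",
# yet it's enough to encode data and control flow (Church encodings).

# Church booleans: a boolean is a function that chooses between two arguments.
TRUE  = lambda a: lambda b: a
FALSE = lambda a: lambda b: b
IF    = lambda p: lambda a: lambda b: p(a)(b)

# Church numerals: the number n is "apply a function f, n times".
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral to a Python int by counting applications."""
    return n(lambda k: k + 1)(0)

ONE = SUCC(ZERO)
TWO = SUCC(ONE)

print(to_int(ADD(TWO)(TWO)))   # 4
print(IF(TRUE)("yes")("no"))   # yes
```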
Yeah, that cringey opening cost the OP's point quite a bit of credibility, despite their claim that they 'work in the area'.
Pushing towards a meta-learner that is useful across many contexts is an intriguing idea, though.
I also wonder about the dream for meta-learners and whether they can approach a limit different from 'Adam-and-crew' once you move away from domain-specific parameter landscapes. I imagine much of the work will tend towards context switching, effectively maintaining multiple domain-specific optimizers, but I may be very far off from what is observed here. A rough sketch of the contrast is below.
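To make the 'Adam-and-crew vs. meta-learner' contrast concrete, here's a rough, hypothetical sketch (not from the OP's work): a hand-designed rule like SGD-with-momentum is a fixed formula over the gradient history, while a learned optimizer replaces that formula with a small parameterized model whose weights would themselves be meta-trained across tasks.

```python
import numpy as np

# Hand-designed rule ("Adam-and-crew" style): a fixed formula of the gradient history.
def sgd_momentum_step(param, grad, state, lr=0.01, beta=0.9):
    state["m"] = beta * state.get("m", 0.0) + grad
    return param - lr * state["m"], state

# A hypothetical learned optimizer: the update is produced by a tiny model whose
# weights `theta` would be meta-trained across many tasks (here just a stand-in vector).
def learned_step(param, grad, state, theta):
    # Features the meta-learner sees: current gradient, a running average, the parameter.
    state["m"] = 0.9 * state.get("m", 0.0) + 0.1 * grad
    features = np.array([grad, state["m"], param])
    # In real learned optimizers this map would be a small RNN/MLP; a dot product
    # stands in for it here purely for illustration.
    update = theta @ features
    return param - update, state

# Usage sketch on a 1-D quadratic loss L(w) = (w - 3)^2, whose gradient is 2(w - 3).
w, state = 0.0, {}
theta = np.array([0.05, 0.05, 0.0])   # stand-in for meta-learned weights
for _ in range(100):
    g = 2.0 * (w - 3.0)
    w, state = learned_step(w, g, state, theta)
print(round(w, 3))   # approaches 3.0 for this toy choice of theta
```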