Maybe it's the combination of sheer computing power and the availability of data that allows better models. Maybe we've never looked at algorithmically generated models because we also want a narrative (a commonsense explanation) for the model, not just algorithmically found correlations between jelly beans and acne, say.
Part of me thinks: OK, the machines found something. Now can we actually use that to understand the world, rather than build more recommendation algorithms? (haha). I'm not sure what an advisor would say about a doctoral student who proposes: let's just throw reams of data at a machine until we find a meaningful correlation, and then reason from the correlations (my guess is "no, that's not really the scientific method, is it").
I'm hopeful, and I don't think the answer to the question will come from academia.
Currently they are saying yes, please, and desperately trying to source ever bigger volumes of data.
Norvig et al.'s article "The Unreasonable Effectiveness of Data" has been cited over 400 times, which is a lot for something with no formula.
The word you want is "interpretable". Powerful black-box models have been known for a while, but some industries prefer interpretable models like linear regression and decision trees; sometimes you're even forbidden from using anything else by regulation.
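A minimal sketch of what "interpretable" buys you, using a hand-rolled ordinary-least-squares fit on made-up data (the hours/score numbers are purely illustrative, not from the discussion): unlike a black box, the fitted coefficients can be read off and explained in plain language.

```python
# Fit y = a*x + b by closed-form ordinary least squares.
# The point: the model IS its two numbers, so it explains itself.

def ols_fit(xs, ys):
    """Closed-form least squares for a single feature."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var          # slope
    b = my - a * mx        # intercept
    return a, b

# Hypothetical data: hours studied vs. exam score.
xs = [1, 2, 3, 4, 5]
ys = [52, 55, 61, 64, 70]

a, b = ols_fit(xs, ys)
print(f"score = {a:.1f} * hours + {b:.1f}")
# The slope has a direct reading: each extra hour is worth about `a` points.
```

That direct reading of each coefficient is exactly what a regulator (or an advisor) can audit, and what a deep black-box model doesn't give you.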