Why do you think systems of partial differential equations (common in physics) somehow provide more understanding than the corresponding ML math? At the end of the day, both produce results using lots of matrix multiplications.
... because people understand something about what is being described when dealing with such systems in physics, whereas people don't understand how the learned weights in an NN produce the overall behavior? (For one thing, the number of parameters is much greater in NNs.)
Stuff that we can't program directly, but can program using machine learning.
Speech recognition. OCR. Recommendation engines.
You don't write OCR by going "if there's a line at this angle going for this long and it crosses another line at this angle then it's an A".
There are too many variables, and the influence of each is too small and too tightly coupled with the others, to abstract it into something understandable to a human brain.
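The contrast can be made concrete with a toy sketch (entirely hypothetical data: two made-up 3x3 "glyphs", nothing like real OCR). The rule-based classifier is a single human-readable `if`; the perceptron learns one weight per pixel, and no individual weight "means" anything on its own. Real OCR multiplies this by thousands of pixels and millions of weights, which is exactly the unabstractability being described.

```python
# Two hypothetical 3x3 bitmaps, flattened to 9 pixels each.
GLYPH_A = [0, 1, 0, 1, 1, 1, 1, 0, 1]  # crude "A"-like shape
GLYPH_L = [1, 0, 0, 1, 0, 0, 1, 1, 1]  # crude "L"-like shape

def rule_based(pixels):
    """An explicit, human-readable rule: our 'A' has the middle row filled."""
    return "A" if pixels[3] and pixels[4] and pixels[5] else "L"

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn one weight per pixel; the weights have no individual meaning."""
    w, b = [0.0] * 9, 0.0
    for _ in range(epochs):
        for pixels, label in samples:  # label: +1 for "A", -1 for "L"
            pred = 1 if sum(wi * p for wi, p in zip(w, pixels)) + b > 0 else -1
            if pred != label:  # misclassified: nudge every weight at once
                for i in range(9):
                    w[i] += lr * label * pixels[i]
                b += lr * label
    return w, b

w, b = train_perceptron([(GLYPH_A, 1), (GLYPH_L, -1)])

def learned(pixels):
    return "A" if sum(wi * p for wi, p in zip(w, pixels)) + b > 0 else "L"

print(rule_based(GLYPH_A), learned(GLYPH_A))  # both say "A" on this toy data
print(rule_based(GLYPH_L), learned(GLYPH_L))  # both say "L"
```

On two clean glyphs the hand-written rule wins on clarity; the point is that the rule stops being writable long before the learned version stops being trainable.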
> AI arguably accomplishes this using some form of abstraction though does it not?
It's unabstractable for people, because the most abstract model that works still has far too many variables for our puny brains.
> artists routinely engage in various forms of unusual abstraction
Abstraction in art is just another, unrelated meaning of the word, like execution of a program vs. execution of a person. You could argue executing a journalist for his opinions isn't bad because executing mspaint.exe is perfectly fine, but it won't get you far :)
"This program generates the most likely outputs" isn't a scientific explanation, it's teleology.