This gives me the sense, personally, that economists aren't interested in making accurate predictions about the world. Other fields would, I think, test their theories against observations.
There are a lot of structural econometric papers that do exactly what you're asking for, but you need graduate-level statistics and a deep understanding of discrete choice, identification, and simulation methods.
Structural econometrics is a field where PhD students, in their 5th year of study, usually produce only one complete study, if that.
I agree with the sentiment that if the work is messy, the teaching should be messy as well. But not when you're starting out with new tools.
But if you compared these notes to the notes for a college level physics course, you would find a similar level of abstraction, idealized models, and absence of real world data. Those things are not in themselves indicators that physicists (or economists) don't care about the real world. In any mature field, there is a body of knowledge and techniques to be learnt. There's a certain formalism to be picked up, rather than just staring at data.
There might be legitimate reasons for dismissing the general approach taken by mainstream economic theory, but what you seem to be saying ("hmmm, my intuition is that this stuff doesn't focus enough on accurately predicting the real world") is not a reasoned critique.
pandaSDMX can pull SDMX data from e.g. ECB, Eurostat, ILO, IMF, OECD, UNSD, UNESCO, World Bank; with requests-cache for caching data requests:
The scikit-learn estimator interface includes a .score() method. "3.3. Model evaluation: quantifying the quality of predictions" https://scikit-learn.org/stable/modules/model_evaluation.htm...
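For a regressor, `.score()` returns the coefficient of determination R² (for classifiers it returns accuracy). A quick sketch on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: y is exactly linear in X, so R^2 should be ~1.0.
X = np.arange(20, dtype=float).reshape(-1, 1)
y = 3.0 * X.ravel() + 2.0

model = LinearRegression().fit(X, y)

# .score() returns R^2 on the given data for regressors.
r2 = model.score(X, y)
print(r2)  # ~1.0 here, since the data is noiseless
```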
statsmodels also has various functions for statistically testing models:
"latex2sympy parses LaTeX math expressions and converts it into the equivalent SymPy form" and is now merged into SymPy master and callable with sympy.parsing.latex.parse_latex(). It requires antlr-python-runtime to be installed.
IDK what Julia has for economic data retrieval and model scoring / cost functions?
You say this as though using mock-up data to teach techniques isn't a universal practice in literally every other discipline.
Pretty much every course I took in undergrad physics had no real world data. The intro level courses were especially fun: we'd go into the lab and get data so horrible that we could never confirm what they were teaching in the theory classes. We wondered what the point of the lab even was.
The biggest offender is the friction model. Heck no - friction is not simply proportional to the normal force. No one could successfully show that in the lab. And a quick Google search turns up a trivial experiment where just changing the orientation of the object, while keeping the normal force the same, leads to wildly different friction forces.
Ever taken statistics courses? You're not doing multiple regression analysis on real world data on day 1. On day 1 you're learning odds using playing cards and coin flips.
Curiously enough, my undergrad statistics textbook was loaded with problems where the data was taken straight from a journal paper. The book has poor reviews on Amazon, but I think it's the best I've seen.
You could test your own theory against the observation that calculations with real world data are very much a part of economics; they're just not part of this particular course.
Of course, it depends on who they work for. Effectively, the American field of economics is an exercise in decoupling private reality from public theory.