
The Use of Knowledge in Society (1945) - dedalus
https://www.econlib.org/library/Essays/hykKnw.html?chapter_num=1#book-reader
======
TLDR: statistics only offer summaries, hiding details that may be very
important. But those details can't simply be included: you hit a complexity
wall if you try to consider absolutely everything.

It's like when the curse of dimensionality meets big-O notation, and you throw
your hands in the air and give up - or settle for the simplest solutions,
while knowing their limitations.

Note how this applies to all abstractions. They are generally helpful, but as
the saying goes, the map is not the territory.

Except in economics, where economists try their hardest to ignore these
limitations, even when they go against basic logic. It's hard for a man to
understand something when his continued employment depends on his not
understanding it.

But a minimum wage will increase unemployment. There is no workaround. A price
ceiling will cause shortages. There is no workaround. People get elected
promising there are workarounds, and people get paid promising to deliver
workarounds, but it's just smoke and mirrors.

I find it funny when I see on TV that they are now going to allow some
compensation for organ donors to try to fix the organ shortage. Using insights
from this paper from 1945 and basic econ, you can already say that this
amounts to raising the price ceiling, so it will reduce the shortage.
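The ceiling-and-shortage logic can be sketched with a toy linear supply and demand model (the coefficients here are made up purely for illustration): a binding cap keeps quantity supplied below quantity demanded, and raising the cap shrinks the gap until it disappears at the market-clearing price.

```python
# Toy linear supply/demand curves (arbitrary coefficients, for
# illustration only) showing how a binding price ceiling creates a
# shortage, and how raising the ceiling shrinks it.

def demanded(price: float) -> float:
    """Quantity buyers want at a given price (falls as price rises)."""
    return max(0.0, 100 - 2 * price)

def supplied(price: float) -> float:
    """Quantity sellers offer at a given price (rises with price)."""
    return max(0.0, 3 * price - 20)

def shortage(ceiling: float) -> float:
    """Unmet demand when price is legally capped at `ceiling`."""
    return max(0.0, demanded(ceiling) - supplied(ceiling))

# This toy market clears at price 24 (demanded == supplied == 52).
print(shortage(10))  # cap far below the clearing price -> shortage of 70
print(shortage(20))  # cap raised -> smaller shortage of 20
print(shortage(24))  # cap at or above the clearing price -> no shortage
```

Compensation for organ donors is, in these terms, a move from something like `shortage(10)` toward `shortage(20)`: better, but still a shortage as long as the cap binds.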

But there will always be shortages unless a market can exist. People who fight
against basic logic will just have a little less blood on their hands. But if
they can sleep at night and help politicians deliver fuzzy feelings to their
electorate, I guess that's good enough to deserve their wage.

After all, people just want to be reassured that the dirty hands of capitalism
won't be touching the beautiful world of love and kindness that medicine is,
especially regarding organ donation (cue emotional music with people
cuddling).

~~~
sillysaurusx
One interesting parallel to what you’re saying: in AI programming, there is
the concept of a “loss function”, which is basically just a number that tells
you how well your model is doing at a given task - for example, how well it’s
predicting the next word in a sentence. The lower the loss, the better the
quality.

Or that’s what everyone believes. Empirically that doesn’t seem true. Just
because your loss goes down, quality doesn’t necessarily go up. It’s a
statistical correlation, but it’s only a loose correlation at best.

So it’s kind of doubly related here: not only is the knowledge a statistical
summary, which hides the details of what’s happening, but we also don’t know a
better way to do it. No one knows how to measure quality other than to show
some samples to a bunch of humans, or to fall back on the imperfect loss
function.
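A toy numerical sketch of how loss and quality can disagree (the probabilities below are invented, not real model outputs): averaging cross-entropy rewards confident correct answers so heavily that a model can score a lower loss while actually getting more examples wrong.

```python
import math

# All four true labels are positive (1) in this contrived setup.
true_labels = [1, 1, 1, 1]

# Predicted P(label=1) per example, from two hypothetical models:
model_a = [0.55, 0.55, 0.55, 0.55]  # barely right on every example
model_b = [0.90, 0.90, 0.90, 0.45]  # very confident, but wrong once

def avg_cross_entropy(probs):
    """Average negative log-likelihood of the true (positive) label."""
    return sum(-math.log(p) for p in probs) / len(probs)

def accuracy(probs):
    """Fraction of examples predicted positive (P > 0.5)."""
    return sum(p > 0.5 for p in probs) / len(probs)

print(avg_cross_entropy(model_a), accuracy(model_a))  # ~0.598, 1.00
print(avg_cross_entropy(model_b), accuracy(model_b))  # ~0.279, 0.75
# Model B has the lower loss but the lower accuracy: the scalar
# summary hides which examples it is confidently wrong about.
```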

I suppose the point is that it’s very easy to _accidentally_ hide such
complexities using math. No one intentionally sat down and said “I’m going to
come up with a very imprecise loss function that gives a misleading view of
how well AI models are doing.” It just happened by default. And it seems like
that might be related to your point of economists ignoring limitations, for
example. The limitations are inherent in the statistical summary.

------
Wolfenstein98k
One of the greatest articles of all time. Deeper every time you read it.
Planning will always fail at scale; this helps explain why, and how to
mitigate it.

