
Optimization over Explanation - kawera
https://medium.com/berkman-klein-center/optimization-over-explanation-41ecb135763d
======
xapata
The trouble with these black box models isn't that we're afraid of
uncertainty; it's that we're worried they're using a bad proxy. Throwing more
data at the algorithm won't solve an omitted variable problem or an
endogeneity problem.
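
A minimal sketch of the omitted-variable point (my illustration, not from the article or the comment): if the true outcome depends on a confounder z that is correlated with the observed feature x, a model fit on x alone converges to a biased coefficient, and adding more data only makes the estimate converge to the *wrong* value more precisely. All numbers and variable names here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitted_slope(n):
    """Fit y on x alone, omitting the confounder z from the regression."""
    x = rng.normal(size=n)
    z = 0.5 * x + rng.normal(size=n)        # confounder correlated with x
    y = 2 * x + 3 * z + rng.normal(size=n)  # true model uses both x and z
    slope, _intercept = np.polyfit(x, y, 1)
    return slope

# True coefficient on x is 2, but the estimate converges to
# 2 + 3 * cov(x, z) / var(x) = 2 + 3 * 0.5 = 3.5.
# Growing n tightens the estimate around 3.5; it never approaches 2.
for n in (1_000, 100_000):
    print(n, round(fitted_slope(n), 2))
```

The bias term 3·cov(x, z)/var(x) is independent of sample size, which is the sense in which "throwing more data at the algorithm" doesn't help.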

------
xg15
> _But here’s another: Accept that we’re not always going to be able to
> understand our machine’s “thinking.” Instead, use our existing policy-making
> processes [...] to decide what we want these systems optimized for. Measure
> the results. Fix the systems when they don’t hit their marks. Celebrate and
> improve them when they do._

If really _no one_ understands how the algorithm works, how could anyone
meaningfully "fix" or even improve the algorithm?

------
everdev
> Human-constructed models aim at reducing the variables to a set small enough
> for our intellects to understand. Machine learning models can construct
> models that work — for example, they accurately predict the probability of
> medical conditions — but that cannot be reduced enough for humans to
> understand or to explain them.

Some problems in life are too complex to explain in five minutes to someone
without knowledge of the space.

~~~
hyperion2010
Witness the number of times someone complains about a feature of a programming
language when, to anyone who has actually encountered the problem that feature
was created to solve, it is fairly obvious the complainer simply doesn't get
it, and no amount of explaining will dispel their complaints. They usually
have to run into the problem themselves, because it is not something that is
easy to articulate.

