
Preventing undesirable behavior of intelligent machines - vo2maxer
https://science.sciencemag.org/content/366/6468/999.full
======
salawat
Now _this_ is fascinating!

This is a start toward what I'd call a sane form of ML Engineering. That is,
assuming you can manage to mathematically express the fairness constraints
ahead of time. It'll go a long way toward proofing against known hazardous
functions.
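The general shape of this, as I understand it (a minimal sketch, not the paper's exact algorithm; the function names, the two-way data split, and the Hoeffding bound here are my assumptions): search for the best candidate on one slice of the data, then only release it if a high-confidence bound on the held-out slice certifies the fairness constraint, otherwise return "no solution found."

```python
import numpy as np

def hoeffding_upper(g_vals, delta):
    # One-sided (1 - delta) upper confidence bound on E[g],
    # assuming i.i.d. samples bounded in [-1, 1] (range 2).
    n = len(g_vals)
    return np.mean(g_vals) + 2.0 * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def seldonian_select(candidates, loss, g, data_cand, data_safe, delta=0.05):
    # Pick the lowest-loss candidate on the candidate split, then run a
    # safety test on the held-out split. g(c, x) > 0 means candidate c
    # violates the constraint on example x; we require high confidence
    # that E[g] <= 0 before returning the model at all.
    best = min(candidates, key=lambda c: loss(c, data_cand))
    g_vals = np.array([g(best, x) for x in data_safe])
    if hoeffding_upper(g_vals, delta) <= 0.0:
        return best
    return None  # "no solution found" -- safer than shipping an unsafe model
```

The point of the held-out safety split is that the certification isn't contaminated by the candidate search; the cost is exactly the kind of extra data/compute overhead the rest of this comment wonders about.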

One thing I'm curious about, though, is whether this approach severely
increases training time. I'd imagine it would, since every constraint you add
strips out a larger and larger chunk of the space of candidate functions.
It also makes me wonder whether the time it takes to find a viable candidate
function could be used as a sort of walking stick to tease out relationships
not immediately apparent in the data. I'd expect the convergence/divergence
of the constraints and the primary optimization objective to play a big role
in determining whether your dataset converges to a solution, or whether
you've just sent your supercomputer loose on a wild goose chase.

