
Yes, a utility monster is conceivable under both theories. It's amusing how a utility monster represents a serious objection for utilitarian philosophers, but is a total non-concern for people who solve optimization problems for a living. It would be like software engineers lying awake in bed worrying about what they'd do if they just stopped writing bugs one day.



Avoiding weird solutions by adding appropriate constraints is extremely important to people who solve optimization problems in practice. The classic example from the inventor of linear programming is the diet problem [1], where the naive LP suggested eating nothing but bouillon cubes or drinking 500 gallons of vinegar.

[1] https://resources.mpi-inf.mpg.de/departments/d1/teaching/ws1...
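Roughly what that failure mode looks like in code, as a minimal sketch: the food names and numbers below are invented (not Stigler's actual data), and it uses scipy's linprog. The bare LP happily puts the whole diet on the single cheapest food; a per-food cap, the kind of constraint a human treats as too obvious to state, rules that out.

    from scipy.optimize import linprog

    # Cost per unit of [vinegar, bread, cheese] -- numbers invented for illustration.
    c = [0.005, 0.50, 2.00]
    # One nutrient requirement, written as -(contribution) <= -10, i.e. >= 10 units.
    A_ub = [[-0.02, -1.0, -3.0]]
    b_ub = [-10.0]

    naive = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(naive.x)   # ~[500, 0, 0]: the "optimal" diet is 500 units of vinegar

    # Add the implicit constraint: at most a few units of any single food.
    capped = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 5)] * 3)
    print(capped.x)  # ~[5, 5, 1.6]: a mixed diet instead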


It's even a matter of being aware of assumed, implicit constraints. "In the early 1950s [...] the nutritional requirements didn't show a limit on the amount of salt? 'Isn't too much salt dangerous?' He replied that it wasn't necessary; most people had enough sense not to consume too much."


There are certainly cases of weird degenerate solutions to optimization problems. Lots of examples in machine learning (one classic at https://openai.com/blog/faulty-reward-functions/).

In more old-school convex optimization the closest thing is probably insufficiently constrained problems. If you don't say the amount of each food you eat has to be greater than zero, you can get all your dietary needs satisfied for the low low price of minus infinity dollars!
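A minimal sketch of that second failure mode, with two made-up foods and one nutrient (scipy's linprog again): with the usual non-negativity bounds the LP is fine; drop them and the solver reports the problem as unbounded, i.e. the "minus infinity dollars" diet.

    from scipy.optimize import linprog

    c = [1.0, 10.0]        # cost per unit of a cheap food and an expensive one
    A_ub = [[-1.0, -1.0]]  # each contributes 1 nutrient unit; need >= 10 in total
    b_ub = [-10.0]

    ok = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    print(ok.status, ok.fun)   # 0 (optimal), cost 10.0

    # Without the "amounts are non-negative" bounds the LP is unbounded:
    # short-sell the expensive food, buy more of the cheap one, repeat.
    free = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
    print(free.status)         # 3: unbounded, the objective goes to -infinity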

From another perspective, though, perhaps there's a "No True Scotsman" side of this. Is a utility monster merely a sign of a badly-specified problem, or is it a definitive sign of one? If the former, it stands to reason it's not a "concern" for modellers -- it's a dream!


The utility monster is a case of a missing time constraint on the optimization problem, plus a lack of robustness.

For example, the special "Felix" case ignores the possibility that said guy is struck by gamma radiation and dies. Over time, the probability of that or a bunch of other catastrophes ruining the solution tends to 1.

Therefore, the best solution avoids the most known catastrophes and is updated as new ones are found. (Tontine lotto, anyone?) Minimizing the maximum loss is the optimal rule here; maximin (maximizing the worst-case gain) could be decent as well. Deciding between the two is better left to wizards.
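To make those two criteria concrete, here's a toy comparison. It reads "minimax loss" as minimax regret, which is one common formalization, and the policies and payoff numbers are invented; the point is just that the two rules can disagree on the same table.

    # Rows: candidate policies; columns: scenarios (Felix thrives, Felix is
    # struck down by gamma radiation, some other catastrophe). Payoffs invented.
    gains = {
        "bet_on_felix": [100, -50, -50],
        "diversify":    [ 20,  10,   5],
        "do_nothing":   [  0,   0,   0],
    }

    # Maximin: pick the policy whose worst-case gain is largest.
    maximin = max(gains, key=lambda p: min(gains[p]))

    # Minimax regret: regret = best gain achievable in a scenario minus what
    # this policy gets there; minimize the worst-case regret.
    best = [max(col) for col in zip(*gains.values())]
    regret = {p: max(b - g for b, g in zip(best, row)) for p, row in gains.items()}
    minimax_regret = min(regret, key=regret.get)

    print(maximin)         # diversify    (worst-case gain +5)
    print(minimax_regret)  # bet_on_felix (worst-case regret 60, vs 80 and 100)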

Online stochastic optimization is mathematical black magic so far, anyway.

But then, satisfying humans is much easier given all the built-in biases we have. Keeping things alive long term is much harder.



