
"AI plays Dwarf Fortress" is something I'd like to see.


How do you define a "win" condition in Dwarf Fortress to train the AI?


DF itself already computes the prosperity of your fortress for you. That score and the sheer number of living (non-enemy) dwarves are how you progress through the game, so a reward function based on those two inputs, plus maybe something like "months since the last major catastrophe", could work.
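A minimal sketch of that reward in Python, assuming some hypothetical exporter hands us the fortress state each step; the FortressState fields and the weights are illustrative stand-ins, not anything DF actually exposes directly:

    from dataclasses import dataclass

    @dataclass
    class FortressState:
        created_wealth: int            # DF's prosperity/wealth score
        population: int                # living, non-enemy dwarves
        months_since_catastrophe: int  # since the last tantrum spiral, siege loss, etc.

    def reward(state: FortressState,
               wealth_weight: float = 1e-4,
               pop_weight: float = 1.0,
               stability_weight: float = 0.5) -> float:
        """Scalar reward mixing prosperity, population, and stability."""
        return (wealth_weight * state.created_wealth
                + pop_weight * state.population
                + stability_weight * state.months_since_catastrophe)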


Dwarven Utilitarianism


"Avoid dwarven deaths"


Doesn't that get you dwarven overpopulation and sad dwarves?


"maximize happiness according to Marxist values. Allow workers to control and maximize the means of production"


No, because many dwarves only have large numbers of children because of the high mortality rate among infant dwarves. Decrease the expected infant mortality and dwarven families get the opportunity to choose to have fewer children based on their situation, turning the apparently exponential population-growth curve into a plateauing sigmoid.


Avoid Death doesn't mean reproduce.


Depends on whether it's the death of the society or the dwarves you're optimising for, I guess? I don't know the game well enough to say, but suicide could be the most reliable way to minimise death.

I prefer "maximise happiness" if only because the Repugnant Conclusion is more interesting than the VHEMT.

https://en.wikipedia.org/wiki/Mere_addition_paradox
https://en.wikipedia.org/wiki/Voluntary_Human_Extinction_Mov...


You can aim for "maximize happiness" without assuming that maximized happiness is merely the addition of the happiness of each person.


Can you mention one or two alternatives? Summing something like log-happiness doesn't help (and really isn't much more than objecting for the sake of objecting). Thresholds for dignity just raise the zero bar, and inequality metrics argue _for_ the repugnant conclusion.

Maybe mean/median/minimum happiness? They don't intuitively appeal to me, but I'd like to hear an argument for them from someone who believes in them (as well as arguments for systems I haven't considered).

Of course there are plenty of non-utilitarian systems to choose from, but I'm sure few of their proponents would describe them as maximising happiness :-)
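For what it's worth, here's a toy Python comparison (not an argument for any of them) of how total, mean, median, and minimum happiness rank a huge barely-content population against a small very happy one; the happiness numbers are made up:

    from statistics import mean, median

    barely_content = [0.1] * 1_000_000   # the "repugnant" world
    small_and_happy = [9.0] * 1_000

    rules = {"total": sum, "mean": mean, "median": median, "min": min}

    for name, rule in rules.items():
        winner = ("barely_content" if rule(barely_content) > rule(small_and_happy)
                  else "small_and_happy")
        print(f"{name:>6}: {winner}")

    # Only "total" prefers the huge barely-content world; mean, median, and min
    # all pick the small happy one, which is why they dodge the repugnant
    # conclusion in this toy case.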


You are retracing the arguments made by Rawls in A Theory of Justice. The result he proposes is (grossly simplified) a maximin principle: everyone gets the greatest access to social goods that is compatible with similar access for everyone else (equality principle). Inequalities are to be favored only if they advance the situation of the least well off and if they are based on contribution/merit (difference principle).

Most commonly, maximin is extended to leximin, i.e. once the worst-off are tied, you compare the next least well-off, and so on. There are traditions of critique from both communitarianism (Sandel et al.) and libertarianism (Nozick et al.), both worth looking into.
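A quick Python sketch of that maximin-to-leximin extension: sort each welfare distribution ascending and compare lexicographically, so ties on the worst-off are broken by the next worst-off, and so on (the distributions are made-up numbers):

    def leximin_prefers(a: list[float], b: list[float]) -> bool:
        """True if distribution `a` is leximin-preferred to distribution `b`."""
        return tuple(sorted(a)) > tuple(sorted(b))

    # The worst-off are tied at 1, so the next worst-off decides: (1, 3, 5) beats (1, 2, 9).
    print(leximin_prefers([5, 1, 3], [9, 1, 2]))  # True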


Maximizing the minimum value is a sound optimization method. It yields a system that is sound overall and has no obvious breaking point.

In terms of happiness, it makes sure that no one is left behind, and tends to produce a more homogeneous society than optimizing for the average.

Also, [1].

[1] https://www.smbc-comics.com/index.php?db=comics&id=2569


> doesn't get any obvious break point

Well, aside from the imperative to euthanise the least happy regardless of their absolute happiness, I guess :-).

And it makes most individual actions normally called moral or immoral difficult to justify or condemn, except insofar as they might indirectly affect the least happy person.

And then there's the weird idea that my local actions may have moral weight depending on the arbitrary existence of an unhappy person in a distant country.

All seems pretty bizarre to me.


Lose gloriously.


Therein lies the point. :)



