Project Vania – A Fair Distributor for Python (github.com)
38 points by folivora 120 days ago | 12 comments



This is so very easy when the weights are universally agreed upon.

In the many cases in my life where it wasn't so, here is the method that worked for me:

1) Divide the tasks into small enough bits that # tasks / # people >> 1
2) Assign tasks at random
3) Let people trade, and make it easy for them

This is statistically fair (which in practice has been enough on every occasion where I've had to solve a division-of-labor problem) and it maximizes utility, which is a lot more important: it's the generalized version of "every task will be performed by the person that is least averse to performing it".
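A minimal sketch of steps 1 and 2 (the function name and the round-robin dealing are just one way to do it):

    import random

    def assign_randomly(tasks, people, seed=None):
        """Shuffle the tasks, then deal them out round-robin so that
        everyone's count differs by at most one. Trading comes after."""
        rng = random.Random(seed)
        shuffled = list(tasks)
        rng.shuffle(shuffled)
        assignment = {p: [] for p in people}
        for i, task in enumerate(shuffled):
            assignment[people[i % len(people)]].append(task)
        return assignment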

I'm still waiting for a good generic tool to do this; lately I've toyed with the idea of releasing a Google Spreadsheets app.


Let people trade the lottery tickets (for getting assigned tasks) before you do the lottery.

I.e., everyone starts with a 1/n chance of having to do task 1 and a 1/n chance of having to do task 2. Alice might want to trade away her chance of having to do task 1 in exchange for taking on more risk of having to do task 2.
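A toy sketch of the idea (all names and numbers are hypothetical): each (person, task) pair holds a probability, a trade moves probability mass between two people, and each task's column keeps summing to 1 so the lottery stays well defined.

    import random

    # tickets[person][task] = probability of being assigned that task.
    tickets = {
        "alice": {"task1": 0.5, "task2": 0.5},
        "bob":   {"task1": 0.5, "task2": 0.5},
    }

    def trade(tickets, a, b, give_task, take_task, amount):
        """a hands `amount` of give_task risk to b and absorbs the same
        amount of take_task risk from b; column sums stay at 1."""
        tickets[a][give_task] -= amount
        tickets[b][give_task] += amount
        tickets[a][take_task] += amount
        tickets[b][take_task] -= amount

    # Alice sheds task 1 risk in exchange for more task 2 risk.
    trade(tickets, "alice", "bob", "task1", "task2", 0.25)

    def run_lottery(tickets, task):
        """Draw who has to do `task` according to the current tickets."""
        r, acc = random.random(), 0.0
        for person, probs in tickets.items():
            acc += probs[task]
            if r <= acc:
                return person
        return person  # guard against float rounding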

I actually wrote a tool that plans oncall shifts like that for Google's Site Reliability Engineering when I was working there. There's some nice economic theory about how you have to organise the auction to elicit honest preferences, so that you can run the whole thing fully automated in a computer instead of waiting for people to do the haggling manually.

(If you don't do that step, your approach of haggling only after the random assignment is likely to be more practical.)

See "Elicitation of honest preferences for the assignment of individuals to positions" (http://www.eecs.harvard.edu/cs286r/courses/spring02/papers/l...). The main result you get from the paper is that it proves a relatively simple single linear optimization model to figure out the assignments of tasks correct and incentive compatible.


Extremely cool, thanks a lot.


This is interesting. However, minimizing a weighted sum doesn't correspond very well to most concepts of fairness. Would it be possible to change the allocation rule to something like the Shapley value, which naturally encapsulates at least some fairness axioms?
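For what it's worth, a brute-force Shapley computation on a toy cost game (the characteristic function here is invented; the factorial loop only works for small player sets):

    import math
    from itertools import permutations

    def shapley_values(players, value):
        """Average each player's marginal contribution over every
        ordering of the players; O(n!), so toy sizes only."""
        totals = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition = []
            for p in order:
                before = value(frozenset(coalition))
                coalition.append(p)
                totals[p] += value(frozenset(coalition)) - before
        n_orders = math.factorial(len(players))
        return {p: s / n_orders for p, s in totals.items()}

    # Hypothetical cost game: hours a coalition needs for all the tasks.
    hours = {frozenset(): 0, frozenset("a"): 6,
             frozenset("b"): 8, frozenset("ab"): 10}
    print(shapley_values(["a", "b"], lambda c: hours[c]))
    # -> {'a': 4.0, 'b': 6.0}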


Fairness is a rather interesting problem with many definitions; see for example [1, 2, 3]. I was hoping for a more extensive set of implementations, but still, good work!

[1] https://www.amazon.com/Cake-Cutting-Algorithms-Fair-You-Can/...

[2] https://www.cs.cmu.edu/~arielpro/15896s15/docs/paper10.pdf

[3] https://en.wikipedia.org/wiki/Fair_cake-cutting


Something is wrong with the example in the README, since it is supposed to assign four tasks to three teams but only assigns three of the tasks...


Yup. Backend development is cancelled, apparently.

So fair it makes the whole architecture serverless!


Once I tried to implement a soccer team balancer. We gave skill points to each player (so everybody had a final score) and I wanted to write a small Django application that balances the teams in the fairest way possible. It turned out to be a really hard mathematical problem. My stupid brute-force solution ran for minutes for 10 players :D Wish this library had existed back then!


Wouldn't brute force iterate over at most 2^10 = 1024 subsets of players (assuming 2 teams with 0-10 players each)?
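For 2 teams that's the whole search space; a sketch like this (the scores are invented) checks all of it instantly:

    from itertools import combinations

    def balance_teams(scores):
        """Try every subset as team A and keep the split whose two
        totals are closest; 2**n subsets, trivial for n = 10."""
        players = list(scores)
        total = sum(scores.values())
        best_diff, best_team = None, None
        for k in range(len(players) + 1):
            for team_a in combinations(players, k):
                diff = abs(2 * sum(scores[p] for p in team_a) - total)
                if best_diff is None or diff < best_diff:
                    best_diff, best_team = diff, set(team_a)
        return best_team, set(players) - best_team

    scores = {"p%d" % i: s for i, s in enumerate([7, 3, 9, 4, 6, 8, 2, 5, 5, 1])}
    team_a, team_b = balance_teams(scores)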


Neat! Any insight into why you used PuLP for modeling the optimization problem? I'm looking into good Python optimization modeling libraries. Right now I use the Julia JuMP package, but pure Python would be nice.


I've been using PuLP for some experimentation. I like it because it's easy to use, and it uses the excellent COIN solvers by default.


Unless you need MIP, scipy.optimize is nice.
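For example, a tiny LP via scipy.optimize.linprog (the numbers are made up); and for pure assignment problems, scipy.optimize.linear_sum_assignment skips the modeling layer entirely:

    import numpy as np
    from scipy.optimize import linprog, linear_sum_assignment

    # Minimize x + 2y subject to x + y >= 1 and x, y >= 0.
    # linprog wants <= constraints, so negate: -x - y <= -1.
    res = linprog(c=[1, 2], A_ub=[[-1, -1]], b_ub=[-1],
                  bounds=[(0, None), (0, None)])
    print(res.x)  # -> approximately [1. 0.]

    # Hungarian-style assignment on a made-up cost matrix.
    cost = np.array([[2, 5],
                     [4, 1]])
    rows, cols = linear_sum_assignment(cost)
    print(rows.tolist(), cols.tolist())  # row i gets column cols[i]: [0, 1] [0, 1]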



