1. I have a limited time in which to secure the next contract
2. Potential client opportunities appear at a fixed rate (e.g. 1-2/week)
3. Each client has a different, unknown, maximum daily rate (MDR) they are willing to pay. I can discover a client's MDR only by quoting a higher rate ("sorry, the most we can go to is $XXX").
4. If I quote a lower rate than the client's MDR, I have a new contract and the game stops.
Given my goal is to find the client who will pay the highest daily rate before the deadline, what is the best strategy?
My best guess at the moment is to start at a high rate and gradually decrease it as the deadline approaches. But how can I use the information I gather about rejected clients' MDRs to decide the best daily rate to quote future potential clients?
Is that actually your goal though? Are you sure you wouldn't prefer a client who will offer repeat business at a decent but not maximal daily rate? How about a client who will offer a more interesting job, or one who will offer you the opportunity to learn something new?
In so many of these optimisation problems, the real difficulty is specifying exactly what you want to optimise (never mind making sure that the specification is tractable). In general the solution you get from your algorithm will depend sensitively on your objective: if you're not completely sure about the objective, you shouldn't be sure about the solution.
When it comes to contracting, this is precisely the goal. Your questions refer to freelancing rather than contracting, which is quite different.
Changing clients is preferable as there are more opportunities for networking, and there is a novelty factor: each slow-moving, bureaucratically-controlled, poorly-defined enterprise IT project is slow-moving, bureaucratically-controlled and poorly-defined in its own way.
As you gain experience and projects, increase your rate 10% on each new contract until you encounter consistent resistance.
It's a more cautious approach and takes longer but it worked for me.
It's why Oracle can charge tens of thousands of dollars for software that might run 30 dollars a month from another company.
Also: I'm talking about Oracle the enterprise financial management system. I don't know how the database systems get licensed. But I do think it's funny that most complaints from techies about Oracle are really related to the difficulty of learning Oracle's UI or licensing, when most websites, platforms, APIs, etc. these days have worse UI and licensing that's just as bad as Oracle's.
And anyway it's just an example.
Else, we can view this as a simple utility-maximization problem where each too-high offer costs you one period of no work. If you knew the distribution of MDRs exactly, this would be a simple problem. So you can throw away your first two offers (by bidding infinity) to obtain estimators for the mean and standard error of the distribution. You should, though, have a prior idea of the shape of the distribution, as you realistically cannot gather enough data to derive it.
From there on, you have an estimated probability of whether your offer will be accepted or rejected. If it's accepted, you end up with the money. If not, you end up in the same place, but your utility will be discounted for one time period. To simplify the solution, you can disregard the effects of learning for now, solve the problem, and fold learning back in later.
Once you factor out the recursion from your utility function (which includes a discounted version of itself, in case of rejection), you should end up with a nice but complex calculus problem. Nonetheless, your optimal offer will depend on the estimators of mean and standard deviation. So each period, you should offer your "optimal number", update the mean/standard deviation with respect to the response and calculate the offer for the next period accordingly.
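A minimal sketch of that recursion, assuming (my assumptions, not the commenter's) a normal MDR distribution, a discount factor of 0.95, and a simple grid search instead of solving the calculus problem in closed form:

```python
import math

def accept_prob(offer, mu, sigma):
    """P(client's MDR >= offer), assuming MDRs are normally distributed."""
    z = (offer - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

def optimal_offer(mu, sigma, delta=0.95, iters=500):
    """Fixed-point iteration on V = max_p [P(accept)*p + P(reject)*delta*V].

    Returns the offer maximizing expected discounted payoff, given current
    estimates (mu, sigma) of the MDR distribution.
    """
    grid = [mu + sigma * k / 20 for k in range(-60, 121)]  # mu-3s .. mu+6s
    value = mu
    for _ in range(iters):
        value = max(accept_prob(p, mu, sigma) * p
                    + (1 - accept_prob(p, mu, sigma)) * delta * value
                    for p in grid)
    return max(grid, key=lambda p: accept_prob(p, mu, sigma) * p
               + (1 - accept_prob(p, mu, sigma)) * delta * value)
```

With mu=500, sigma=100, delta=0.95 this quotes somewhat above the mean: the cheaper waiting is (delta close to 1), the higher you should quote. Each period you would re-estimate mu and sigma from the MDRs revealed so far and recompute.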
ED: You can ping me if you need a contract microeconomist :p
What about fixed price projects? You can potentially earn a much higher day rate. Clients aren't paying for your time in front of a computer, they're paying for your expertise.
Conventional wisdom and common sense really mean "using an ill-defined heuristic", which isn't so obviously better than the discussed algorithm. Common sense, as the saying goes, is neither common nor sense, and using this phrase just hides the actual algorithms people really apply.
There is no reason to think that common sense encompasses more reliable judgement than the simple maths here.
If in your first serious relationship you find yourself saying, "I really love this person. They're fantastic. I don't want to give them up." Conventional wisdom would probably say, "Well, don't give them up then."
People had extremely rich lives, deep thoughts, and interesting problems to solve, throughout the 3-4K years of history.
So a Joe Sixpack today, or even a Joe IvyLeagueDegree today, is no more sophisticated than e.g. a citizen of ancient Athens or Rome or Baghdad in most everyday matters.
It’s not a comment on the intelligence of the average historical person, or the richness of their emotional life. It’s just an acknowledgement that the tools we used to try to improve our lot for most of human history (including “common sense”) were not very successful, at least compared to the ones we’re using now.
The problem as stated, includes (although there may be variants):
"•After evaluating a person you can offer them a job. But if you reject them at that point, you can never come back to them.
•You want to choose the best person in the sample. Every other outcome except choosing the very best person is equally bad."
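Under exactly those assumptions, the classic 1/e cutoff strategy can be checked with a quick Monte Carlo sketch (the candidate count and trial count here are arbitrary choices of mine):

```python
import math, random

def secretary_trial(n, rng):
    """One run of the 1/e rule: reject the first ~n/e candidates, then
    take the first candidate better than everything seen so far."""
    scores = [rng.random() for _ in range(n)]
    cutoff = round(n / math.e)
    best_seen = max(scores[:cutoff])
    for s in scores[cutoff:]:
        if s > best_seen:
            return s == max(scores)   # success iff we stopped on the best
    return scores[-1] == max(scores)  # forced to take the last candidate

rng = random.Random(0)
wins = sum(secretary_trial(100, rng) for _ in range(20_000))
print(wins / 20_000)  # roughly 0.37, i.e. about 1/e
```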
In reality choosing the 2nd best mate or candidate isn't much worse than choosing the best and is certainly better than choosing the worst (unless you believe in 'the one'?).
And in reality we can go back to rejected candidates / mates.
My understanding of the article is that that is their problem (with the problem) also.
If the ill-defined heuristic has survived for millennia, then it's probably better than some new algorithm in the context it is used in. It shows good survival characteristics, and people don't keep stuff around for millennia if they don't benefit from it.
The problem doesn't suggest anything like that. It suggests you should spend 36.8% of your dating time (or date count) on exploring, not 36.8% of your life...
A second problem with the blog post is that it just lists a lot of possible issues, but does not show any example where the 36.8% algorithm would fail horribly. Maybe those drawbacks don't matter in practice.
He spends most of the time debating the exemplification without ever challenging the solution, but the example is just there to entertain the casual reader; the interesting part is the math behind it.
*You're on a road trip and want to refuel at the cheapest pump in range, which satisfies the uncertainties of the original problem plus the given constraints.
Strawman alert! The rate at which people date varies tremendously with age and life circumstance. To treat someone's dating life like a piece of uniform bar stock which can be cut off at the 36.8% mark is so obviously a bad approach, I'm immediately less sure of the article.
EDIT: It turns out the article's entire point is that the model is too simple. It really rubs me the wrong way that he starts out with an implementation which is way too simple.
Assuming that you enter the marriage market at 18, you then have 10 years to find a partner.
Assume that you have a girl/boyfriend and are 'serious' with them, likely meaning sexual activity. The average lifetime number of sexual partners in the US is 7.2.
From the optimal strategy in the Secretary problem, you should be 'serious' for ~37% of the time that you have available to date.
Running through the math here, you have 6.2 partners (assuming that you stick with the most recent one for childbirth) over a 10 year dating lifetime. Assume that everyone magically follows the optimal path. That means the trial-time of a 'serious' dating partner is ~7 months and 4 days, on average.
So, if you were to be totally average and also follow the optimal dating strategy, meaning that you dated for 10 years and had 7 serious relationships, you would date 6 people before settling down, and each of the 'calibration' people would last about 7 months from partner to partner.
That sounds reasonable and very similar to a lot of personal anecdata. I've seen a lot of people do ~half a year flings through their college years (18-22) before settling down with someone in the last year of college or thereabouts.
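The arithmetic above, spelled out (the 10-year window and 7.2-partner figure are taken from the comment as given, not verified):

```python
import math

EXPLORE_FRACTION = 1 / math.e        # the ~36.8% secretary-problem cutoff
dating_years = 10
partners = 7.2
calibration_partners = partners - 1  # settle down with the most recent one

explore_months = dating_years * EXPLORE_FRACTION * 12
months_each = explore_months / calibration_partners
print(round(months_each, 1))  # 7.1 -- i.e. about 7 months per 'calibration' partner
```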
Since it's such a clear strawman, it doesn't clearly illustrate the point you're trying to make for those readers who can see through it. It's common sense to exclude childhood as irrelevant to dating experience. It's common sense that people will be more active in dating during certain phases of their life.
The whole point of the strawman is that it appears to support a point by only weakly representing the counterpoint. A fairer analysis has been posted several times in other comments here.
By applying the strategy in an unintelligent way which seems designed to make the model fail.
It will always say a longer possible search is better, because it raises your odds of success and the model features no cost for searching.
That's kind of the whole point of this tradeoff and of the analogies used with it. That there is a cost for searching and not "finding."
Concretely, we've very consciously had a "just looking" phase, where we look at a bunch of houses with the rule that we absolutely will not buy one of them, no matter how appealing: they're only there to give us a benchmark. We've tried to size it in proportion to the number of houses we think we'd plausibly want to look at, given our horizon for buying.
I'm curious if anyone has found other useful methodologies for guiding these sorts of life decisions?
Shocker: People aren't always rational.
Shocker: Initial conditions do not stay constant.
Shocker: People's priorities wildly vacillate even on short and medium scales.
Shocker: The economic ideal of the perfectly informed and rational consumer is a complete fantasy and violates multiple physical, computational, and biological theories.
This applies to both the article and the parent's comment.
It explains why people pollute pretty well.
Oversimplified models fall down in the real world. Game theory is useful within a domain.
You'll find Game Theory a lot easier to understand if you mentally replace every instance of "Math" with "AI". (So instead of saying "The Nash Equilibrium is...", just say "I trained an RNN and this is how it would play".) That's really all it is. Usually the games are designed in a simple way because the "AI" that is being run is nothing more complicated than min-cut/max-flow.
"Pollution" specifically is a case of the "Free-Rider Problem", and there are plenty of game theoretic results that explain it.
If the game doesn't properly account for some incentive, it's the problem of the model designer, just like it would be the problem of the RNN designer not gathering the right type of training data.
I would guess I had done 36.8% of my dating by age 25 - not 39. Suddenly the model starts to fit better.
And the reason why this matters is that the dating that matters is the dating from when we can have children to when we're done. So if a woman starts seriously dating at 20, and is done with kids by 35, then somewhere around 25 is indeed the proper cutoff.
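For what it's worth, that cutoff does drop straight out of the arithmetic (the 20-35 window is the parent's assumption):

```python
import math

start, end = 20, 35  # serious-dating window from the parent comment
cutoff_age = start + (end - start) / math.e
print(round(cutoff_age, 1))  # 25.5 -- "somewhere around 25" indeed
```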
Men can have children longer, but at some point don't want to.
This wiring does not work with all individuals nor can it always cope with modern life. But it is still a pretty big factor.
It turns out there are variants of the stable marriage problem that allow for accounting for all sorts of changes in the preconditions, but I wouldn't expect a random blog author with a bone to pick against math to actually look into it.
Do you know exactly how many "applicants" exist? No.
Does the length of the "contract" depend significantly on the length of "exploration"? Yes, but not in the model.
Instantly closing that site. sigh
It's also the case that the problems that best approximate its constraints, and for which it offers or suggests a good solution, are probably not usually the familiar problems with which it is usually associated. (Though I can see places that are decent fits that could exist upstream from those, like seeking a parallelizable approach to filtering resumes to get M interview candidates from a pool of N resumes while doing holistic comparison of resumes rather than scoring against an abstract rubric.)
I don't even think you save time parking closer to the door since you frequently have to wait for others to get out of your way.
I mean, I agree that any time you use an algorithm, you want to know under what conditions it holds... but that doesn't mean that it's suddenly a bad idea to use it.
>The secretary problem does effectively demonstrate the general principle that in life we should spend some time exploring
I have no idea what the author is talking about here... There is no "exploring" in the stable marriage problem.
The secretary problem is about a single agent attempting to make a selection based off of incomplete information, and does involve an exploration phase.
All of the OPs complaints about the Marriage problem apply to the Stable Marriage problem, and yet, the real world doesn't seem to care that the OP thinks it shouldn't work. People use it anyway, and it largely works.