This system is limited in that it can only use money for significance; it can't also tie failure to something you loathe. I don't think any of the available tools (https://www.beeminder.com/ was mentioned by someone) can do this, but you can do it with a friend you trust.
To those who suggested having the money go to charity: having the money go to a charity you LIKE is a terrible idea, because then if you fail at your task you can think, "well, at least my favorite charity is getting money." You actually have an incentive to fail.
We begrudgingly offer you this gift as a token of our accumulated tardiness to company meetings.
"4 Months, one week, 5 days, two hours, 51 minutes, 39 seconds.
3084 cigarettes not smoked, saving EUR 478.02.
Life saved: One week, one day, 13 hours, 36 minutes."
After I had saved up a certain amount of money (a year's worth or so?), I bought myself a laptop with it; the first laptop I ever bought for myself. Quite a decent motivator, if I may say so myself.
Sure, I had relapses. Multiple, even. The last one was in 2014, while on vacation (not smoking on vacation is a very difficult time for me). I bought one pack and ended up throwing away half of it.
Feel free to get in touch with me if you need a hand fixing any of this!
If I'm possibly going to give my money away, why would I want to send it to some random contract instead of to a charity or somewhere else, rather than directly to the contract?
Banks, credit card companies, and brokerages all make more money if you make poor financial decisions (fees, interest, and active trading, respectively).
Auto mechanics and car companies make money if your car doesn't work or doesn't last past the warranty period.
Even piano teachers only make money if you continue needing lessons, rather than becoming able to learn on your own.
Hopefully, enlightened businesses will follow the model that customer referrals are more scalable than bleeding money out of any one individual.
Blockchains and smart contracts purport to lift us out of these conditions but, as you point out, this one seems to suffer from the same flaws.
You misspelled "capitalist".
To some, "capitalist" is an attribute of a society that resists state intervention in economic collaboration. It is roughly synonymous with the phrase "free market" and viewed with great skepticism.
To others, "capitalist" is tantamount to greed and cronyism; it is understood in a way that's contrary to traditional anarchic theory about the state.
You might detect that I usually find myself in the second category.
Because of this dichotomy, I like to think of the progression of ages of humankind, marked by the dawn of the internet, as being from "statist/industrial" toward "humanitarian/information-driven".
But you can also certainly think of it as being from "capitalist" to "humanitarian" or whatever may go in that second slot, sure.
What type of system do you think cryptocoins facilitate?
Which results in unnecessary procedures and unnecessary prescriptions.
>Banks, credit card companies, and brokerages all make more money if you make poor financial decisions
Which caused the housing crisis and the Great Recession.
>Auto mechanics and car companies make money if your car doesn't work or doesn't last past the warranty period.
Which results in some of the most untrustworthy salespeople in one of the most untrustworthy industries.
All of those industries are regulated by the government precisely because the incentives are wrong and, left unregulated, cause people to do more harm than good.
Hacker News is supposed to have smart people, but pithy nonsense like the parent I'm responding to makes it abundantly clear that that's not the case.
If a doctor's treatment record and long-term ROI follows them around, or if piano students can track their proficiency over time, the market can act on that information. Without it, we're all flying blind so of course we get suckered into bad exchanges.
It also reflects opportunities for stronger regulations... For example, financial advisors were required to act in the best interests of their customers (until Trump repealed Obama's executive order).
Take my thoughts on this with a huge grain of salt, given my conflict of interest, but I really dislike commitment devices that destroy things — either information or other forms of value. StickK’s anti-charities seem the most egregious, actively harming the world. I’m certainly motivated to not allow the world to become a worse place, so it’s not that it would be ineffective as a commitment contract. Just that I’m also motivated to prevent things that don’t make the world worse in any way, like paying money to a third party (who’s not evil).
That assumes that (1) there are "good" charities and "bad" charities, and (2) that most of StickK's users are good people who end up benefiting "bad" charities when they fail.
Point (1) seems hard to prove, and point (2) is impossible to prove without detailed info about how StickK is used.
It seems like you're in this market, so perhaps you have data like this. Do you? If not, how do you justify the above assertion?
But more to the point, I'm just viewing it from the individual user's perspective. From your point of view, you're actively harming the world by donating to an anti-charity. It doesn't matter that those other wrong-on-the-internet people think your donation is doing good. :)
You want to focus on the individual user, and that makes sense. But to that user, the anti-charity is a way to increase their odds of accomplishing their goal. So when thinking about whether it is "good for the world" or "bad for the world" that people do this, we have to factor in:
1: the x% probability that the user will fail, and money will be donated, and
2: the y% chance that the user would have failed without the anti-charity option, but succeeds instead because of it.
When you consider both of these aspects, it is not at all clear that the anti-charity option is bad for the world.
Exactly zero harm befalls the world in the latter case.
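To make that tradeoff concrete, here's a toy expected-value sketch. All the numbers, and the `harm_per_dollar` weighting, are hypothetical, not data from any real service:

```python
def anti_charity_net_effect(p_fail_with, p_fail_without, stake,
                            harm_per_dollar, value_of_success):
    """Toy expected-value model of an anti-charity commitment."""
    # Expected harm: you fail with probability p_fail_with and the stake
    # goes somewhere you consider actively harmful.
    expected_harm = p_fail_with * stake * harm_per_dollar
    # Expected benefit: the extra successes the threat buys you.
    expected_benefit = (p_fail_without - p_fail_with) * value_of_success
    return expected_benefit - expected_harm

# Hypothetical numbers: the anti-charity cuts your failure odds from 50% to 10%,
# you stake $100, and you value success at $500.
net = anti_charity_net_effect(0.10, 0.50, stake=100,
                              harm_per_dollar=1.0, value_of_success=500)
```

With those made-up inputs the expected benefit ($200) dwarfs the expected harm ($10), which is the point of the argument: whether the option is net-bad depends entirely on how much it changes your odds.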
I'd even go so far as to say you could achieve the same effect with a true charity by risking an amount that you really couldn't afford. But, as I point out in the blog post I linked to above, no one ever has the guts to actually do that.
Btw, on Beeminder we very occasionally have people who risk thousands of dollars. I think they tend to be really hardcore fans who may in fact view it as improving the world by giving us money. Hence them shrugging off smaller amounts and needing to risk something ridiculous to stay motivated to stay on track. But it's still super win-win (because they do stay on track overall -- and sometimes staying on track means things like finishing a PhD!).
Is there any evidence to support this theory? Seems like the existence of tools that offer anti-charity donations indicate that for at least some people, this assumption is incorrect.
Like transfer a small percentage to the owner as a fee for the service, then send the rest to a faulty address or a contract set up to never be retrievable?
That money is then forcibly and irrevocably taken out of circulation, so everyone else's money is worth slightly more because of it!
Probably not yet, but maybe eventually. It's essentially the same as burning it.
While BTC, ETH, and XMR function like currencies and can be traded, they are not recognized as currency in most places.
Money going to charity won't make me feel that bad about not achieving my goal.
"Oh well, at least someone needy is getting the money."
That defeats the whole point of the exercise (pardon the pun).
Seems like a half-baked solution. Interesting, but incomplete.
- How to verify people achieved their goal (this website introduces a "supervisor", but I'm not sure that's going to work well)
- You gain nothing from achieving goals, whereas you lose money if you don't
- The gain (a goal achieved, which you could do without risking money) is much smaller than the potential loss (real money), so there's little incentive for people to use this service.
This website will face the same challenge.
I'm happy to see someone out there has the exact same idea; I sincerely hope they do well.
One solution to that would be having the app share the money from the losers between all the winners (less a fee for running the app if you wanted to monetize it).
It's already iffy with the "lose money if you don't meet them" approach, but if you could actually get a piece of the other users' money, yeah, you might as well say you're a winner (or only create goals you can easily win) and game the system.
Hence Beeminder rationalizing just keeping all the money! :)
The way we had proposed to deal with funds that were not returned to the user after a failed goal was to add them to a reward pool of sorts. We would take a percentage of any failed-goal funds, but the majority would be given as rewards to others who had completed their goals in a similar area or time period. The details hadn't been fleshed out, but one potential approach we discussed was to distribute the reward pool at the end of every month to those who had completed goals in that month (proportional to the amount of money staked).
This would potentially address some of the concerns listed in the comments about the money not being handled well in case of goal failure. It might also drive more usage of the service if people could even profit.
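A minimal sketch of that pro-rata distribution, assuming a 10% service fee and simple in-memory data (the fee rate and data shapes are my own assumptions, not the proposal's):

```python
def distribute_reward_pool(failed_stakes, winner_stakes, fee_rate=0.10):
    """Split forfeited stakes among the month's winners, pro rata by stake.

    failed_stakes: list of dollar amounts forfeited by failed goals
    winner_stakes: dict mapping each winner to the amount they staked
    fee_rate:      fraction the service keeps (assumed value)
    """
    pool = sum(failed_stakes) * (1 - fee_rate)  # service keeps fee_rate
    total = sum(winner_stakes.values())
    # each winner's share is proportional to the amount they staked
    return {user: pool * stake / total
            for user, stake in winner_stakes.items()}

rewards = distribute_reward_pool([50, 150], {"alice": 30, "bob": 10})
```

Here $200 of forfeited stakes becomes a $180 pool after the fee, split 3:1 between the two hypothetical winners.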
What's to stop me from putting up a stake, using a second email address as my "supervisor", and just confirming at the end of the month that I've reached my goal? Without doing anything, that makes me eligible for a chunk of the reward pool.
Maybe there isn't a great way to implement this.
Shouldn't that check that msg.sender is either goals[_hash].owner or owner ?
For example, a fundraising period. Kill functions work well when you can effectively say "oops we screwed up the event window, we're going to relaunch with a fresh event window."
For something that is intended to exist permanently, the kill function is strictly harmful as it creates risk for anyone depending on the contract (unless your intention is to avoid having people depend on it, I suppose).
A true trustless smart contract would have no owner.
I think this could be rewritten as "send an email".
It did bring to mind an interesting related idea, though, specifically for software side projects: instead of relying on a human to validate your success, create a smart contract around a test suite; funds are released only if your tests pass by the deadline.
But it makes me wonder whether there's more to consider about "your goals" if you need a "supervisor" to verify that you met them and a monetary penalty for failing to meet them.
More importantly, one's _choice_ of goals and the process by which they strive to achieve them (the journey) is every bit as important as the "verifiable" success/failure rate if we're talking about personal goals.
Thank you for mentioning and supporting Mozilla Firefox!
1) Run 10K on Strava.
2) Solve programming challenge on Codeforces/TopCoder.
3) Pass coursera class.
4) Have X bitcoins in your wallet.
5) Check-in in some specific location.
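Checks like these could plug into a common verifier interface. A hedged sketch, where every name is hypothetical and each checker would, in reality, wrap the respective service's API (Strava, Codeforces, Coursera, a wallet query, a check-in service):

```python
# Hypothetical registry of goal verifiers, keyed by goal type.
VERIFIERS = {}

def verifier(name):
    """Decorator registering a checker function under a goal-type name."""
    def register(fn):
        VERIFIERS[name] = fn
        return fn
    return register

@verifier("run_10k")
def check_run(evidence):
    # evidence would come from an activity-tracker API
    return evidence.get("distance_km", 0) >= 10

@verifier("pass_course")
def check_course(evidence):
    return evidence.get("grade", 0) >= 0.6  # assumed passing threshold

def goal_met(goal_type, evidence):
    """Dispatch the evidence to the registered checker for this goal type."""
    return VERIFIERS[goal_type](evidence)
```

The appeal of goals like the five above is exactly that the evidence is machine-readable, so no human "supervisor" is needed.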