There is enormous value in prediction markets; I hate that they are illegal in the US.
When real money is at stake, people make more rational decisions and filter out much of their bias.
I do think prediction markets should be legal and required to be 100% transparent. The scenario where people bet on something that they have control or influence over could be dangerous.
The thing that I don't understand about a decentralized prediction market is where do the results come from? Somebody has to say, "this bet won", right?
For an easy-to-join one that I work on (shameless plug), check out https://scicast.org
You don't have to trust the person running the market.
I would like to see a single non-centralized market with a protocol for multiple transparent brokers who compete on fees and whose reputations are there for everyone to see and judge however they want. This way the risk of large bets could be divided among brokers.
A broker would be just an identity, and the 'implementation' could be a single individual (anonymous or not) or multiple individuals (a broker syndicate) who would vote among themselves and manage their membership. The ability to attach decentralized algorithms as brokers should also be possible if taken into account when designing the protocols.
Is this a problem for Bitcoin that there's a 'central' counterparty (the network)? If it is, then multiple Truthcoin networks could be run.
Contract outcomes are determined in a trustless and decentralized way, through a weighted vote based on present and past consensus with a unique Nash Equilibrium where all voters report accurately on the state of markets.
As long as there are more nodes who would not benefit from this particular rigging than there are nodes who would, everyone can act in their own self-interest and the outcome would be correct.
Oh, so it has a built-in mining subsidy that miners are rewarded with? Because without that, Bitcoin suffers from an even worse version of the exact same problem and is not incentive-compatible.
> As long as there are more nodes who would not benefit from this particular rigging than there are nodes who would, everyone can act in their own self-interest and the outcome would be correct.
What counts as a node? Can I spin up a million or a billion nodes on AWS and influence the vote that way?
Proof of work? Proof of stake? It's not exactly an unsolved problem.
Really? Where does it assume honesty on the part of all participants? If we could assume that, Truthcoin would just be a majority-vote...
> What stops me from making a massively stupid bet, and then rigging the market to diverge from reality?
How do you do that, given that everyone has massive incentives to vote against your rigging?
Is it a vote mechanism? Then how do you protect against sybil attacks?
Is it a proof of work system? Then how do you align incentives of the miners with accuracy of the prediction market?
It's a weighted vote, with loss-of-coin penalties for not voting or for voting against the majority, and coin rewards for voting on low-vote "decisions" and for voting for the outcome that ends up being the majority.
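That reward/penalty scheme can be sketched in a few lines. This is purely illustrative: the function name, the flat 5% reward and penalty rates, and the stake-weighting are my own assumptions, not Truthcoin's actual rules.

```python
# Illustrative sketch of a stake-weighted vote with penalties and rewards,
# loosely following the scheme described above. All names and parameters
# are hypothetical, not Truthcoin's actual implementation.

PENALTY = 0.05  # fraction of stake lost for abstaining or dissenting
REWARD = 0.05   # fraction of stake gained for voting with the majority

def settle_decision(votes, stakes):
    """votes: voter -> reported outcome (None if the voter abstained);
    stakes: voter -> coin balance. Returns (majority outcome, new stakes)."""
    # Tally stake-weighted support for each reported outcome.
    tally = {}
    for voter, outcome in votes.items():
        if outcome is not None:
            tally[outcome] = tally.get(outcome, 0.0) + stakes[voter]
    majority = max(tally, key=tally.get)

    # Reward voters who matched the majority; penalize everyone else,
    # including abstainers.
    new_stakes = {}
    for voter, stake in stakes.items():
        if votes.get(voter) == majority:
            new_stakes[voter] = stake * (1 + REWARD)
        else:
            new_stakes[voter] = stake * (1 - PENALTY)
    return majority, new_stakes

majority, stakes = settle_decision(
    {"alice": "yes", "bob": "yes", "carol": "no", "dave": None},
    {"alice": 100.0, "bob": 50.0, "carol": 80.0, "dave": 40.0},
)
# "yes" carries 150 stake vs 80, so alice and bob gain while carol and
# dave (who abstained) lose a slice of their stake.
```

Note that this is exactly why a successful rig is self-reinforcing: once the riggers hold the majority of stake-weight, the honest minority is the side that gets penalized.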
The defence here is the assumption that the long-term usefulness of the market will motivate people to vote "fairly". I think that's incredibly naive, but it might still work. There are strong incentives to vote for whichever side you think will eventually win, and the assumption that that side will likely be the truth (for well-designed questions) isn't crazy.
There are a lot of obvious ways to attack this type of system but I don't have the expertise to know how viable they are.
Synthetic assets are very much vulnerable to manipulation by whales. Ultimately you are just providing some incentive structure to keep the small players honest, and hoping that the small players add up to significant security in aggregate, more than any possible combination of colluding players.
Generally speaking that's not a very safe assumption to make, particularly when the underlying settlement medium is irrevocable.
I'm wondering about this too. In many instances this would still necessitate a form of "trust" in a third party information provider. For example, if you're betting on the Mayweather fight, then you might be relying on an API providing fight outcomes from ESPN.
The decision is made by distributed consensus. Each decision maker is incentivised to vote with the consensus, so they will vote truthfully if the answer is easy to check, or vote 'void' if it isn't.
Each market creator loses money if the result is void, so it's in their interest to make bets that are easily validated.
In fact, Intrade realized that the only markets people were interested in were sports markets and has focused exclusively on that area since the relaunch.
It's hard to say if any of these are the fatal problem with Intrade, but it's not like we'll know until some prediction market does 'take off', and we should at least address the known problems even if they don't look like huge problems.
> I don't think there were any restrictions on customers until after the CFTC charges were filed.
They didn't make it easy on US customers well before CFTC, as I found out first hand. You had to really want to sign up. It was much harder to get my money into Intrade than, say, Bitcoin (and people claim the difficulty of fiat->Bitcoin is a major barrier to Bitcoin adoption, so...)
> I have a hard time believing that counterparty risk was meaningful.
Even if we assume customers were idiots who didn't believe in counterparty risk before, they certainly do now.
> In fact, Intrade realized that the only markets people were interested in were sports markets and has focused exclusively on that area since the relaunch.
Or... those are very safe areas legally for Intrade, and that's why they ban other markets?
For example, if you bet $100 on Hillary Clinton winning the 2016 election, you don't get to invest that $100 in treasury bonds or the S&P or whatever, which is a pretty significant cost. On real futures exchanges this is mitigated with margin requirements and the legal threat of suing or bankrupting you if you don't make good on your promise to pay.
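To put a rough number on that carrying cost (the 4% risk-free rate, 10% margin requirement, and 1.5-year horizon below are made-up figures, purely for illustration):

```python
# Opportunity cost of fully collateralizing a bet versus posting margin.
# All rates and durations here are hypothetical examples.

stake = 100.0      # dollars locked up in the prediction market
risk_free = 0.04   # annual return forgone (e.g. treasuries)
years = 1.5        # time until the election market settles

# Full collateral: the whole $100 sits idle until settlement.
full_collateral_cost = stake * risk_free * years          # ~$6 forgone

# Margin: only 10% is posted up front, the rest keeps earning.
margin_requirement = 0.10
margin_cost = stake * margin_requirement * risk_free * years  # ~$0.60 forgone
```

So under these assumed numbers, full collateralization costs roughly ten times as much in forgone returns as a 10% margin requirement, which is why real exchanges accept the legal machinery needed to enforce margin calls.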
However, collecting on a non-collateralized loan when things go bad is difficult if you also maintain pseudonymity. But with some sort of KYC procedure in place it could work.
As described, the problem Truthcoin is trying to solve puts a lot more stress on the consensus mechanism, because it requires the participants to do more work to stay in line with the consensus.
In practice I think what ends up happening is that somebody publishes a feed and everybody pulls their results from that and "votes" as they're told to by the feed. This may not be too bad in itself as the feed is probably not too bad, and the people publishing it probably have an incentive to keep publishing good data. But I do worry that this system will make it hard to get off the de-facto default data feed if it starts producing bad data, since it's designed to punish people who step outside the consensus...
"Public Bads, for example "The 'New Haven Lighthouse Point Park' lighthouse to be destroyed before date X" are unlikely to be funded this way, because the Winning State must be made public somehow, and criminals must remain private. Attempts to shift the Schelling-indicator from "Number 1/2/3" to something else, such as "on a Monday/Tuesday/Wednesday evening", have the disadvantage of alerting law enforcement in advance of the likelihood and manner of a future crime. Trades made just before the crime would, for free, alert law enforcement to an imminent threat of crime. Such 'tip-off trades' would be made by any profit-seeking members of the crime group themselves. Therefore, (surprisingly and fortunately) this feature cannot fund anonymous goods such as crimes"
This gives no useful information to the police, but it's enough for the criminal to unambiguously identify himself.
I'm disappointed that they didn't consider such scenarios.
Not to mention the fact that the objection still holds true to the extent that information is revealed. If you say the lighthouse is going to be destroyed in 2015, the police can still give the lighthouse additional security measures in advance of 2015, if not at the specific time. The less information you reveal, the less money you'll make from the market (since you're only trading on part of your private information).
Additionally, while there's nothing stopping a distributed system from allowing death contracts to pay out in the event of a murder or forbidding death contracts in general, it's quite possible that the participants in a market could develop social norms which prevent this from happening (e.g. everyone votes "void" when a dead person is murdered, or everyone votes "void" on all death contracts). I'm not saying I'd rely on it, but I think in mature markets of this nature, you'd actually see some of that behavior.
Well of course. Why do you care if the lighthouse is destroyed? The counter-party to the bet is anyone who actually wants the lighthouse destroyed and is willing to pay to see it through.
Likewise, for all you'll know looking at the transactions, a payment conditional on someone's death could be a will, a life insurance policy, a political bet or an assassination contract. These all look the same to the network; what matters is the intention behind them.
A prediction market provides trust that a payment will happen if a specific event happens.
But insurance also requires another kind of trust. If you want to insure your goods, the insurer needs to evaluate the moral hazard in order to determine his rate. But here, he doesn't know if the buyer of the contract is the owner of the lighthouse, so there's no way he can do that: maybe the buyer turns out to be someone who won't suffer from the loss of the lighthouse and who can easily commit a crime. Because the moral hazard is huge, the insurer can't provide insurance at competitive rates.
Insurance contracts where the buyer of the insurance is anonymous to the insurer don't seem desirable to me. They enable assassination markets but have no real use (except for small amounts that won't create moral hazard, but I'd call that betting rather than insurance).
In my example, the private information is the knowledge of the moment of the explosion one second before it happens, coupled with knowledge of a cryptographic secret. It's valuable on the market because you can only get this information if you published the contract and committed the crime yourself: therefore, the contract rewards the person responsible for committing the crime and only that person. The reason people who hate lighthouses will back that contract even if they don't trust the criminal is that they have nothing to lose: they will only "lose" their bet if the criminal destroys the lighthouse, which is what they wanted all along so it's actually a win from their point of view.
But the cryptographic secret is not valuable anywhere else, and the police have no use for it. The only information the police get from the market is that the lighthouse has rich enemies willing to employ criminal means. Which they probably already knew.
> It's quite possible that the participants in a market could develop social norms which prevent this from happening
The market would be even better off if those contracts were banned as soon as they appear. The fact that people can randomly decide whether a contract is valid after you've invested money is a risk for legitimate users (e.g. if you bought insurance against drought, you want to be certain that you'll get paid), and it's not necessary here. To avoid "public bad" contracts, you only need a deletion mechanism.
But for such mechanisms to exist, the possibility of "public bad" contracts must be acknowledged, not dismissed. That's the point I wanted to make.
You can get these contracts settled by any trusted party, or any combination you choose of trusted parties. There's already someone out there who is offering to escrow money if you want to assassinate a politician. The next logical step is for them to use multi-sig and act as an arbiter, but without the need to hold the funds. It's inevitable that someone will do this, if they're not doing it already.
If anything it would be better to have this activity happening on a public network so that law enforcement can find out who somebody is trying to assassinate etc.
1. People are punished for unsuccessfully resisting a rigged vote. If a collective does manage to push through a counterfactual vote, everyone who tried to stop them is punished.
2. Voters in smaller betting pools will be low-information voters. Take a local election: there may be a few dozen or hundred local participants in the prediction market, enough to support betting, but there may only be one or two legitimate news sources. Voters outside the local area could be easily misled by fake news sources. So throw up a couple of fake websites, use them as sources in a Wikipedia edit-war, and you might be able to trick the low-information voters into voting your way and drowning out the local voters with personal knowledge of the event. Your deception only has to last long enough for the bet to be settled, and there is no way to punish it within the system. (If it falls through, the dupes will be punished.)
> through a weighted vote based on present and past consensus
How can Truthcoin protect against vote-stuffing?
TLDR: proof of voting is a good consensus algorithm, but it's not a good distribution model.
Really wish the documents were in text/markdown/HTML instead of PDFs... would be much easier to review (and help improve).
I wonder what the effects would be of manipulating the desired outcome by using large amounts of PoW to vote. Ideally, you'd want 1:1 connections into the network by individuals.
It would be required that the same amount of bets be bought as sold, just like on the stock markets.
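A balanced book just means every contract needs a buyer and a matching seller before it exists. A toy matcher for a binary market might look like this (hypothetical, not any real exchange's engine): a "yes" bid at price p can pair with a "no" bid at price q whenever p + q covers the $1 payout.

```python
# Toy order matching for a binary prediction market. A "yes" bid and a
# "no" bid together must fully fund the $1 payout to the winning side,
# so the book always settles with equal amounts bought and sold.
# Purely illustrative; not a real exchange's matching engine.

def match_orders(yes_bids, no_bids):
    """yes_bids/no_bids: lists of prices in (0, 1). Returns matched pairs;
    unmatched bids are simply left on the book."""
    matches = []
    for yes_price in sorted(yes_bids, reverse=True):  # best yes bids first
        for no_price in list(no_bids):
            if yes_price + no_price >= 1.0:  # the pair funds the full payout
                matches.append((yes_price, no_price))
                no_bids.remove(no_price)
                break
    return matches

pairs = match_orders([0.6, 0.3], [0.5, 0.2])
# 0.6 pairs with 0.5 (they sum to at least 1); the 0.3 bid finds no
# counterpart and stays unmatched.
```

The point is that nothing trades until both sides of a contract are funded, which is the sense in which buys and sells must always balance.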