I think it's far more interesting to ask how a thing might work, which use cases might be dramatically underserved today and serve as a beachhead, or which tradeoffs are being made, rather than just saying something is a "bad idea."
Dropbox launch: https://news.ycombinator.com/item?id=8863
Coinbase launch: https://news.ycombinator.com/item?id=4703443
A 2012 thread discussing comment negativity where, coincidentally, the top comment is from @iamwil who posted this link and is on the DIRT team: https://news.ycombinator.com/item?id=4363717
A classic thread from 2012 where PG talks about negative comments:
To me, the most interesting ideas in the world are the ones that at first blush look like they can't possibly work. But upon thinking through how they might, you learn something.
Props to everyone in the thread who is asking genuine questions and actually trying to understand what the team is building.
Currently, there is no support for moderation at scale. Projects like OpenStreetMap offer a valuable resource, but struggle to maintain quality at scale. This is a relevant article: https://blog.emacsen.net/blog/2018/02/16/osm-is-in-trouble/.
With DIRT's model, we want to create a way to build data sets with a focus on accuracy that can scale.
…by big oil's dollars.
Don't get me wrong. I don't really see the value in inventing a truth bureaucracy that rewards participants in fake money. This DIRT system is just Silicon Valley, in its typical naivety, trying to reinvent propaganda.
No, it's not. (One might either attribute a political cause to it or seek a political correction for it, but inherently it's social, not directly “associated with the governance of a country or other area”.)
Try "The principles relating to or inherent in a sphere or activity, especially when concerned with power and status."
Regardless, I'm sure you understood perfectly well what the other commenters were meaning by "political", you're just arguing semantics.
(edit: sorry for the snark, but this is such a common, easy way to derail discussion without really adding any substance so I try to call out stuff like this when I can.)
The definition I cited is the one that fit the clear upthread use; yes, there are other senses of “political”, but to conflate them is equivocation.
>If the data is incorrect, anyone can challenge the data and earn tokens for identifying these inaccurate facts.
How do you moderate censorship or conflicting information?
If someone uploads my personal info without my consent, how do I get to purge it? From the current model, it seems like I'll have to pay money to "request" purging my own data.
I think verified identity could be a registry on DIRT. Adding reputation on top of voting is something we're exploring.
That's true for unencrypted information stored directly on the blockchain. For applications where you need the ability to delete data and don't need strong censorship resistance, one solution is to store private data off-chain and only store the location and hash of the data in the blockchain. This article discusses that idea in detail:
Per GDPR specifically, I am learning about the regulations and my understanding is that the right to be forgotten focuses on personal information. Personal information is not a great fit for DIRT registries because it is not publicly verifiable. The concept of voting to determine correctness works best if that data is public and observable.
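To make the off-chain idea above concrete, here is a minimal sketch (the names and structure are illustrative assumptions, not a published DIRT API): the payload lives off-chain, the chain holds only a pointer and a content hash, and honoring a deletion request means removing the off-chain copy, leaving just an unusable fingerprint on-chain.

```python
import hashlib


class OffChainStore:
    """Toy key-value store standing in for off-chain storage (e.g. a server)."""

    def __init__(self):
        self._data = {}

    def put(self, location, payload):
        self._data[location] = payload

    def get(self, location):
        return self._data.get(location)

    def delete(self, location):
        # A deletion request only needs to remove the off-chain copy.
        self._data.pop(location, None)


def on_chain_record(location, payload):
    """What actually goes on-chain: a pointer plus a hash, never the payload."""
    return {"location": location, "sha256": hashlib.sha256(payload).hexdigest()}


store = OffChainStore()
store.put("docs/42", b"private data")
record = on_chain_record("docs/42", b"private data")

# While the off-chain copy exists, anyone can verify integrity against the chain.
assert hashlib.sha256(store.get("docs/42")).hexdigest() == record["sha256"]

# After deletion, the chain retains only an opaque hash of data no one can fetch.
store.delete("docs/42")
assert store.get("docs/42") is None
```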
No resistance to censorship.
No resistance to conflicting information.
More money = more “truth” with no recourse, because removing info costs.
Incorrect info loses you money, but there is no functional systemic way to determine correct/incorrect.
Waaaaay too much marketing and nothing else extant.
Most companies don't do business in Europe.
As defined under GDPR? I’d love citations for that claim.
But unfortunately, it was beaten out by 2%.
How would I remove my personal data, or any proprietary data, if someone uploaded that to DIRT?
> if someone adds your personal data to the blockchain, your options for removing it are very limited.
Also, given that contributors have to put money up to add information, what incentive do they have to add information in the first place?
To flag information as incorrect, you need to take tokens and challenge the data. A challenge starts a vote and anyone in the DIRT network can vote with their tokens on what information is correct. The vote winner and majority voters earn tokens. The vote loser and minority voters are penalized.
We are planning to publish our protocol design in a few weeks with more details.
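Since the protocol design isn't published yet, here is only a rough sketch of the challenge flow as described above — the payout rule and function shape are my assumptions, not DIRT's actual mechanism: a challenge pits the writer's stake against the challenger's, token-weighted votes decide, and the losing side's tokens are redistributed pro rata to the winning voters.

```python
def resolve_challenge(writer_stake, challenger_stake, votes):
    """Resolve a challenge by token-weighted majority.

    votes: list of (side, token_weight) where side is 'keep' or 'remove'.
    The losing party's stake plus the losing votes are split pro rata
    among the winning voters. Payout rule is illustrative only.
    """
    keep = sum(w for s, w in votes if s == "keep")
    remove = sum(w for s, w in votes if s == "remove")
    winner = "keep" if keep >= remove else "remove"

    # Loser's deposit and losing voters' tokens form the reward pot.
    losing_pot = challenger_stake if winner == "keep" else writer_stake
    losing_pot += sum(w for s, w in votes if s != winner)

    winning_weight = keep if winner == "keep" else remove
    rewards = {
        i: losing_pot * w / winning_weight
        for i, (s, w) in enumerate(votes)
        if s == winner
    }
    return winner, rewards


# A challenge against a bad entry: 'remove' voters outweigh 'keep' voters,
# so the writer's deposit and the 'keep' votes go to the 'remove' side.
winner, rewards = resolve_challenge(
    writer_stake=100,
    challenger_stake=100,
    votes=[("remove", 60), ("keep", 40)],
)
```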
Basically, when someone doxxes someone and dumps the result onto your system, how do you remove it?
If you don't have a facility for rendering intentionally malicious information impossible to access, your system is fundamentally broken.
For the DIRT protocol, if you want to remove information, you also need to deposit tokens and put forth evidence to convince voters in the network to side with you. If you lose the vote, then you would also lose tokens. We put forth an economic penalty for being inaccurate.
> For the DIRT protocol, if you want to remove information, you also need to deposit tokens and put forth evidence to convince voters in the network to side with you. If you lose the vote, then you would also lose tokens. We put forth an economic penalty for being malicious.
Sorry if this is a stupid question but if I lose a vote, can I bring it up for a vote again? I think that should be ok, right? Even if I get some fact removed by some kind of trickery where I sneak in a vote it should be OK because if the fact belongs there, maybe someone else can add it again?
I think this works well for facts like Robert Kennedy as a matter of fact did NOT kill John F Kennedy. However, some things are not objectively clear. What happens when people keep adding "fake news" that is not obviously/patently false? Does someone need to pay to have it removed? How often do I do that? Every time someone adds it?
Now again the flip side is troublesome. We can't require identification for anyone to post facts, right? I mean I think that would be unthinkable, right? As such, how do we "rate limit" "fake news"?
Sorry if all of these have obvious answers. I just couldn't think of it...
1. Repeat votes - If you lose the vote, you can challenge and vote again. DIRT works not only for correcting intentionally misleading data, but also for fixing out-of-date information. It's similar to a bug bounty for data.
2. Cycle of challenges - Each time misinformation re-surfaces, you would have to challenge and vote down the data. However, every time you are successful in your challenge, you earn tokens for your efforts.
3. Fake news - Subjective information can be harder to adjudicate with DIRT, and that will not be our initial market. As an engineer, I would love to believe that the blockchain can solve fake news. However, a lot of research shows it is not the lack of data that is the issue, but rather perspective. People believe what they want to believe.
Just because everyone else on the network votes that it should stay there doesn't mean the law agrees.
You HAVE to have moderation facilities.
You're right that this only works well for objective information or facts. Hence, any articles that hint at an opinion (i.e. pick any Wikipedia editing war topic) would result in a mono-culture dominated by those with the most tokens.
I'm really curious to see how this plays out. Best of luck.
1. Economic - you stand to lose the tokens. Similar to mining on Bitcoin, the cost to spam the network is proportional to the value of the network. If there are a lot of users, the value of the token is higher, and it will be more expensive to acquire the tokens needed to game the system. Even if you are well funded, this will hurt.
2. Reader disinterest - There is the negative feedback loop where users of the information will stop using the information, and the incentive to game the system will drop.
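A back-of-the-envelope way to see point 1 (all numbers here are made up for illustration): if gaming a vote means outweighing the honest voters, the attacker's outlay scales with the token price, which in turn tracks network usage.

```python
def attack_cost(honest_voting_weight, token_price):
    """Market cost of acquiring enough tokens to outvote honest voters.

    Simplification: the attacker needs just over the honest weight; real
    dynamics (price slippage while buying, partial vote turnout) are ignored.
    """
    tokens_needed = honest_voting_weight + 1
    return tokens_needed * token_price


# If the token price rises 10x as the network gains users, so does the
# cost of the same attack against the same honest voting weight.
small_network = attack_cost(honest_voting_weight=10_000, token_price=0.10)
large_network = attack_cost(honest_voting_weight=10_000, token_price=1.00)
```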
Unsolicited startup idea: build a more accurate Stack Overflow with DIRT.
That said - the explanation didn't fit as well into a one liner :)
For example, in the cryptocurrency space, projects raise funding through initial coin offerings (ICOs). In an ICO, you can contribute ETH to a smart contract for the promise of tokens in the future. Having an openly editable list of ICOs and their contribution addresses would not work. The list would be quickly spammed because malicious actors have a really high incentive to put their personal wallet address as the contribution address.
Transparency would be the third benefit. With the blockchain, you can see the entire history of votes. Every transaction is recorded. Today, if a website accepts bribes for reviews, visitors to the site do not know that this happened. With DIRT, if a wealthy token holder tries to throw a vote, you can see the attack happening.
Is there a method for doing this built into the protocol, or would that be a responsibility for the implementer?
I agree that transparency could be a great benefit of this technology, but if a "wealthy token holder" can create several puppet accounts with their own tokens, throwing a vote can be made to look "organic". Does DIRT do anything to prevent this?
(Thanks btw, it's great to see you active in the comments.)
Proof, please. Otherwise, good irony.
No answer to the question of why selling votes should result in more accuracy. Buzzword bingo. An overly broad approach. Lots of social proof but thin on content. This reminds me of all those ICOs we see these days.
Looking forward to reading the whitepaper. But somehow I have the feeling it will either never come or it will be just another marketing brochure without technical details.
There are two parts of the design that lead to more accuracy for DIRT:
1. Skin in the game - a token deposit to write encourages accuracy because you can lose the deposit if you are incorrect.
2. Encouraging moderation - moderators can earn tokens. If you vote and challenge correctly, you can earn tokens. This creates an economic reward for moderators that protects data accuracy in the long term.
We're posting the whitepaper and more importantly, launching the protocol with a first application in the coming months. Stay tuned!
you can lose the deposit if you are incorrect
Using voting to establish "fact" seems like it could go very, very wrong...
Applied to the case of getting accurate VC listings, DIRT has a ploy to get VCs to PAY to get tokens to challenge incorrect entries. Consumers also have an interest in the quality of information, but a primary concern lies with the subject of an entry.
DIRT - If I may, my request to you is to document the heck out of your policies and expected behaviors. The grey line of "ridiculous" that I point out is something that you've mentioned in another response: that you're not in the business of fake news. At some point, you'll need to be making decisions and providing ethical guidelines.
A better, but less mainstream-relatable example is a list of ERC-20 smart contract addresses.
For me, it's actually not data easily verifiable as true or false, but more for "wisdom of the crowds" type of knowledge—things that you couldn't put up on a source like Wikipedia. These tend to be lists or recommendations that contain some subjectivity, but also tend to coalesce around a mostly-agreed upon set of answers from a trusted set of sources.
In the centralized world, we usually rely upon institutions like the Michelin Guide to develop a fair set of criteria, but we ultimately as end users trust that institution's "objectivity" and judge whether we think that list is valuable. Sometimes when I research, I informally end up creating lists of lists and combining them ad-hoc if I can't tell which of them is more trusted. These lists also tend to end up being static or only updated once or twice a year and can fall horribly out of date.
I think TCR incentives could potentially be really interesting as an alternative to these lists which rely on the institution's brand. For example, I think Quora Answer Wikis (like this one: https://www.quora.com/What-are-the-best-independent-coffee-s...) and the general consensus recommendations in forums for questions like "Which cities should I visit in Thailand if I'm looking for nightlife and places to hike?" or "Which REST framework library should I use for a Django project?" would be good candidates. It'd be amazing if DIRT could balance the incentives for community members to contribute to this type of data and keep these as living lists, with all changes and updates maintained through a community with the right checks, balances, and incentives.
From the Medium post:
>If the data is correct, it is freely shared. If the data is incorrect, anyone can challenge the data and earn tokens for identifying these inaccurate facts. Our protocol and platform makes it economically irrational for misinformation to persist in a data set.
I think the more interesting data would be data that's on a gray scale, e.g. using the above coffee shop in San Francisco example, obviously if John Doe tries to get his burger joint on the list as a growth hack even though they don't serve coffee, that should easily be verified as misinformation. But what if a coffee shop just closed for business, or moved to Mill Valley but thinks they should still be on the list, or just switched beans and raised the prices so that everyone agrees that it no longer deserves to be on the list?
Disclaimer: I know most of the team working on DIRT, and I don't know very much about TCRs.
People in the earlier days of the internet imagined a better world brought about by immediate and unfettered access to information. Many have tried to make information freely available on the internet. Wikipedia, IMDB, and Freebase are direct products of this school of thought. However, we can count the successes on one hand. In fact, most free data projects languish and have a hard time getting off the ground.
What we all discovered as we built out the web is that only some kinds of data can be maintained for free sustainably. Sure, if it's something that engages fandom, like all the different types of starships in star trek, people are intrinsically motivated to update that list. But if it's something that's considered dry but useful, like the tax rates in every county in the US, or points of interest on a map, there won't be enough people with intrinsic motivation to keep that updated.
As builders and users of the web, we've compensated by subsidizing that dry/useful data, typically with a company selling advertising or subscriptions in adjacent services. The implicit deal we make as users is that if the company provides the data for free, we're ok with the company accruing profits off the data we help curate. Recently, the sentiment has been growing that this may have been a raw deal for users of the web, as a company's profits accrue to the point of immense power over our lives.
What I think the builders of the early web got wrong was that certain types of data need to involve incentives other than intrinsic ones. While we've found other ways to incentivize users over the 2.5 decades of the internet, cryptocurrencies now give us one more tool in our toolbox: using economic incentives to design systems that converge on curated lists that are regularly updated.
With this new toolkit, we may be able to find another way to provide freely curated data without subsidization. Instead of the value capture accruing in a single company, we may find a way to sustainably distribute it amongst the curators.
We're not as sure subjective data is a good first fit for TCRs. With any startup, it's better to find a niche application that's a great fit, and we think we've found one in objective data for the crypto space.
I think another aspect that might be exciting for you to think about is if you're able to link the data between registries. It's a non-obvious aspect that almost no one asks about.
1) More efficient than centralized curation - Social media companies receive millions of requests to take down copyrighted content or spam sites. Today, you have centralized teams vetting each request individually, and reviews can take months. For this use case, DIRT is a valuable alternative to vetting information because it reduces the noise in each submission.
2) Commercial data - For markets where people can profit from spreading misinformation, open editing is not the best approach. You could create a Wikipedia-style list of stores that sell handmade jewelry. Sellers benefit from inclusion on the list and would want to join regardless of whether they meet the criteria. The likely outcome is that the list would not be useful.
We’re believers in the blockchain and decentralization, but decentralized information curation is not needed everywhere. However, in markets where there is a critical, single point of failure, where you need transparency because you cannot trust any single actor, and where you have a high demand for data accuracy, DIRT can be very useful.
And what would prevent hand-made jewelry sellers from uniting against a single legitimate seller?
The only solution is to provide sources and have them manually reviewed by a trusted party, because voting only gives a majority opinion (weighted by how much money one wants to spend on that issue), not the truth.
The other owners of the tokens! That's the conceit of a TCR. The owners of the tokens have an incentive to maintain their value. The value of their tokens comes from their ability to get you onto the list of that token. The value of being on that list comes from the prestige of the list (i.e. its track record of honesty/quality).
> And what would prevent hand-made jewelry sellers from uniting against a single legitimate seller?
They could do this, but again, they'd be doing it at some cost to the prestige of their list.
If there were an economic incentive to challenge facts published by the NYT, would Blair's deception have persisted for several years? It's even possible that a subject in one of his stories would stake tokens to challenge the accuracy of the story. Blair could then respond by voting against the challenge with a large number of tokens, but would other parties join Blair or the challenger? They would probably investigate further and effectively join the challenge as neutral arbiters. The incentive on both sides is to provide persuasive information to win votes. The direct token incentive for the neutral parties is two-fold: vote with the winning side to gain tokens, and increase the value of the tokens they hold by helping the registry to become more popular so that demand for the tokens increases. If consumers of the registry value accuracy, that incentivizes the entire network to fiercely defend the accuracy of the data.
If you need trusted data today, a curator only exists if the market for that information is large enough.
Curious how this will play out. Threshold for contributing is non-trivial, so the wasteland scenario is my academic guess. The contribution scoring and ranking will help, but can you be anonymous?
Today, if a publication or data source posts information about you or your business, there is no means to correct this data. You can post about it on Twitter or email the service, and wait for a response. You can't just fix this information. With DIRT, we create a way for people to at least be part of the curation process.
Different types of data require different governance models. Some datasets are not as critical to protect and could have a low token stake. For other pieces of information, you want every writer to have more skin in the game and thus have a high stake for writing. Our bet with building a protocol is to test what incentives will curate the best data.
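One way to picture that last point — as a hypothetical configuration, since DIRT hasn't published a registry format; all field names and values here are invented: each registry carries its own governance parameters, with a higher write stake where misinformation is more profitable.

```python
# Hypothetical per-registry governance parameters (illustrative only).
REGISTRIES = {
    "erc20-token-addresses": {
        "write_stake": 500,        # high stake: misinformation is profitable here
        "challenge_stake": 500,
        "vote_duration_hours": 72,
    },
    "coffee-shop-list": {
        "write_stake": 10,         # low stake: low-value, easily corrected data
        "challenge_stake": 10,
        "vote_duration_hours": 24,
    },
}


def required_stake(registry, action):
    """Token deposit required to 'write' to or 'challenge' a registry."""
    return REGISTRIES[registry][f"{action}_stake"]
```

The point of parameterizing per registry is exactly the bet described above: the protocol can test which stake levels curate the best data for each kind of dataset.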