So whoever is in the majority gets to decide what information is trustworthy? I understand it's supposed to use sensors and all sorts of criteria to come up with a logical and reasonable trustworthiness check. But what if one of your majority voting blocs doesn't believe in science? It's humans doing the verification, and humans can be corrupted.
It's irrelevant whether some people refuse to trust these sources, as long as the sources can establish a pedigree by continuously providing accurate information. If info from one node can be invalidated by other trusted sources after the original reporting (by publishing the facts after the dust cloud clears), the node has an incentive to stick to the truth.
Any group that holds to principles and establishes a strong reputation would tend to become a target for activists to take over and push their agenda. At least this seems to be how things work out in the US.
The spectacular implosion of the ACLU in recent years comes to mind. They spent so much time sniping at each other over tweets that we lost Roe v. Wade on their watch.
The entire organization spent each working day sending tweets and nothing else?
How long do you think a tweet takes to write?
What are they supposed to do about Roe v. Wade? Are you shifting the blame for it being overturned onto them instead of Trump and the Republicans who are responsible?
Is this a way to help Republicans win in the midterms, deflecting the blame for Roe v. Wade onto some unrelated organization?
I can definitely see how activism, fake or real, would benefit hedge funds by driving attention away from news about how they've been destroying the financial markets and the economy through deregulation and corruption.
I can also see why they would want to own newspapers: to trade directly on whatever they know will be in their own newspaper tomorrow (or on their news website one second from now).
Everyone has biases, and that includes companies, since they are composed of and controlled by people. It's difficult to determine with certainty whether a person's or a company's biases affected the information they provided. Further, since everyone has these biases, it's basically a wash and pointless to discuss at the individual level.
"Higher probability of being truthful is actually too low a low bar"
We can't set some minimum level of reliability that needs to be passed, because we are ranking sources against each other, not against some undefined value. Even figuring out what that value would be seems extremely difficult. This reminds me of the corruption discussion a few weeks ago: people were comparing the US to nothing, just making statements like "There's a lot of corruption."
Since we need information about what's going on around us, we have to use one or more sources of information. Therefore, if there is a contradiction between sources, we use reputation.
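To make that fallback rule concrete, here's a toy Python sketch (the source names and reputation scores are entirely made up):

    # When sources contradict each other, go with the better track record.
    reputation = {"source_a": 0.92, "source_b": 0.41}  # hypothetical scores
    claims = {"source_a": "bridge is closed", "source_b": "bridge is open"}

    # Contradiction: pick the claim from the most reputable source.
    best = max(claims, key=lambda s: reputation[s])
    print(claims[best])  # "bridge is closed"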
--------------------------------------------------------------------
"You should simply believe what I say due to my strong reputation on social networks"
I don't know your reputation, nor has anyone else told me about you. However, let's talk about why, in general, individuals on the internet are likely to be less trustworthy than dedicated news outlets. This assumes you are only using reputation and there are no other variables.
1. People on the internet can, and normally do, use a pseudonym. Having to create a new account with a new name isn't that painful if your reputation is destroyed for some reason. This is much more difficult for a business.
Since the consequences of starting over are smaller for a person, the probability that they are lying goes up, because the effect of any punishment is reduced.
2. An individual's primary function isn't the distribution of information, nor are they funded by it. CNN is a news network; that's their main function. If people stop watching, they can go out of business, whereas a person on the internet just continues living even if their reputation is ruined.
Since the stakes are so low for a person, that increases the probability they'll lie (this is similar to 1).
3. Due to anonymity, it would be difficult to sue individuals on the internet for defamation or to hold them accountable in court. CNN is a company that is registered as a corporation.
It's harder to enact punishments against a person on the internet than against a company, and the less likely a punishment is to occur, or the less painful it is, the higher the probability that a person will commit the offense.
No, it's the difference between probability and conditional probability. Both individuals and authorities have demonstrated the same probability of just delivering their preconceived bias when it really matters.
As for social networks: both businesses and redditors gain reputation by simply saying things others want them to say. Your theories all depend on people wanting the truth, as opposed to wanting to be told what they want to hear. And my statement was actually a joke, because of how quickly you pivoted from asking what alternative there was to reputation to asking me for thorough evidence.
What is the difference between conditional probability and probability?
The reason I asked you for proof was that the website you linked doesn't show that activists have taken over US news and are pushing an agenda. Most of the points are about money and hedge funds, and it also talks about how local papers are better at uncovering local corruption.
Conditional probability in this case is the probability computed over samples restricted to only the politically charged topics: the cases where the nuanced facts might not lead to the public behavior the source felt was right.
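A toy Python sketch of the distinction, with entirely made-up labeled reports:

    # Each report is (topic, was_accurate); the data is purely illustrative.
    reports = [
        ("weather", True), ("sports", True), ("weather", True),
        ("election", False), ("election", False), ("election", True),
    ]

    # Unconditional probability: accuracy over all reports.
    p_accurate = sum(ok for _, ok in reports) / len(reports)

    # Conditional probability: accuracy restricted to the charged topic.
    political = [ok for topic, ok in reports if topic == "election"]
    p_accurate_given_political = sum(political) / len(political)

    print(p_accurate)                  # 4/6, about 0.67
    print(p_accurate_given_political)  # 1/3, about 0.33

A source can look reliable overall while being unreliable exactly when it matters.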
I'm not sure if it's going to be effective at all. How do you even know that an authoritative data source is actually authoritative and correct?
It's also kind of funny that this idea comes from Japan, where school history textbooks, an authoritative data source of sorts, have had a rather optimistic view of the period 1937–1945.
"Despite the efforts of the nationalist textbook reformers, by the late 1990s the most common Japanese schoolbooks contained references to, for instance, the Nanjing Massacre, Unit 731, and the comfort women of World War II,[2] all historical issues which have faced challenges from ultranationalists in the past.[3] The most recent of the controversial textbooks, the New History Textbook, published in 2000, which significantly downplays Japanese aggression, was shunned by nearly all of Japan's school districts.[2]"
Seems like that's been cleaned up. They were omitting information, which is bad, but when I think of misinformation I think of lies.
EDIT: Just to be clear, your statement was "where school history textbooks, an authoritative data source of sorts, have had a rather optimistic view", which implies it is widespread and happening now. My counter shows that not to be true.
Wouldn't a decentralized and uncontrolled endorsement layer eventually just get gamed the same way the news layer does? At some point you have to decide whom to trust, whether they're endorsers, reporters, or trolls.
I have seen zero evidence that people will change their beliefs when presented with evidence.
All I can see is a dystopia where it's illegal to try to connect to one of these networks with a device that doesn't have a government-signed bootloader, and illegal to portray yourself on these networks as anyone other than your real legal identity. Yay, Sybil attacks are harder, but general-purpose computing [1] is well and truly dead.
I don't see why it would be problematic to require the real identities of those who want to act as authorities of information. That's how it has always worked. Ideas are trustworthy because they are endorsed by people whom you deem trustworthy.
"On May 31, 2005, Vanity Fair reported that Felt, then aged 91, claimed to be the man once known as "Deep Throat".[23] Later that day, Woodward, Bernstein, and Bradlee released a statement through The Washington Post confirming that the story was true"
Yes, I've also been thinking about this a lot, and reached similar conclusions.
One of the main issues I can't think my way around is privacy. I don't want my full and true trust network to be publicly available, although I might provide a more detailed map to my trusted peers. I might also "lie" about a given weight depending on who asks, for diplomatic purposes (perhaps you have a close friend who also happens to be rather gullible).
A working system is going to take a lot of effort on the part of each user to correctly and accurately annotate their own trust graph, on an ongoing basis. Perhaps this is just too impractical.
I performed a slightly unethical experiment many years ago, in which I created an entirely fake Facebook account (back when people actually used Facebook) and slowly sent out friend requests to personal acquaintances. All it takes is one or two to get started; every subsequent person sees "n mutual friends" and is more likely to accept the random request. It snowballed from there, and eventually I had "infiltrated" a non-trivial portion of my own social network with an entirely fictional persona.
(Ethics note: I only sent friend requests to people I was already friends with on my real account - so I wasn't obtaining any new private information I wouldn't otherwise have had access to - and I didn't perform any interactions beyond sending friend requests)
Any kind of trust network is going to need to deal with this sort of infiltration - and I'm not sure how.
And another thing - trust is bought and sold all the time. Social media influencers sell a small fragment of their trust level every time they do a paid endorsement. If there's some kind of explicit trust network, people will pay others to obtain a higher trust level. Is there anything we can do about that?
Usability, bootstrapping, and privacy: the biggest problems as I see them, ordered by difficulty (yup).
Privacy - make queries require some trust from you. Your software can decide what is allowed based on your needs, e.g. heavy rate limiting (plus global limits) for low-trust queriers while letting close friends ask as much as they want; you could also take into account whom you are replying to. Some other privacy issues could perhaps be solved by having a wallet of identities.
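A minimal sketch of what trust-gated query limits could look like, assuming you've already assigned each peer a trust score in [0, 1] (all names and numbers here are hypothetical):

    from collections import defaultdict

    TRUST = {"close_friend": 0.9, "acquaintance": 0.4, "stranger": 0.05}
    GLOBAL_DAILY_LIMIT = 1000            # cap across all queriers
    queries_today = defaultdict(int)
    total_today = 0

    def allowed(querier: str) -> bool:
        """Grant a per-day query budget proportional to trust in the querier."""
        global total_today
        budget = int(TRUST.get(querier, 0.0) * 100)  # 0.9 trust -> 90 queries/day
        if total_today >= GLOBAL_DAILY_LIMIT or queries_today[querier] >= budget:
            return False
        queries_today[querier] += 1
        total_today += 1
        return True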
Usability - very hard. I think trust must be weighted, and weighted in a way that's not algorithmic. It should be your explicit trust in somebody; that's a rock-solid foundation: if you tell me you trust some person 80%, I know it's you saying that and not something some algorithm computed. We have a really good idea of our social-network trust in our heads and we keep updating it, but it seems hard to transfer to a device or even to verbalize. Assigning weights to each trusted person is just too much to ask even of the most engaged users. Some app could perhaps, from time to time, ask you whether you trust person A or person B more, after asking for a short list of your most trusted peers, or something like that (a rough sketch follows below), but it's a hard problem with no clear solution that I can see yet. Adjusting is also not obvious: if you got a high-trust result for, say, some mechanic and he turned out to be terrible, you'd want to see which person it came from and perhaps lower your trust in them.
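One way those A/B questions could drive the weights is an Elo-style update. This is my assumption, not something anyone here proposed, and the peers and constants are made up:

    K = 0.1  # learning rate: how much a single answer moves the weights

    weights = {"alice": 0.5, "bob": 0.5, "carol": 0.5}  # hypothetical peers

    def prefer(winner: str, loser: str) -> None:
        """User answered: 'I trust `winner` more than `loser`'."""
        # Expected outcome from the current weights (logistic curve, Elo-style).
        expected = 1 / (1 + 10 ** (weights[loser] - weights[winner]))
        weights[winner] = min(1.0, weights[winner] + K * (1 - expected))
        weights[loser] = max(0.0, weights[loser] - K * (1 - expected))

    prefer("alice", "bob")  # one quick question a day slowly shapes the graph

The appeal is that every number is anchored to a judgment you explicitly made, rather than to something an algorithm inferred behind your back.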
Another problem: people could be compromised. If this becomes what it could be, billions will be spent trying to affect results. People can be compromised and have no idea. I think FB and the like have faced this problem already, but it's easier to counter as a centralized entity.
The social consequences would be huge, but it doesn't fix the world completely. E.g. there would still be stupid hubs: many people are going to trust some celebrity, and that celebrity is going to sell trust, abuse the power, etc. But that's those people's choice.
In short, it just lets you query trust; the way people assign it seems more like an education-system problem. Part of how much I trust somebody is how good they are at assigning trust. Some people I don't trust much, not because I think they are malicious, but because I know they can be influenced easily or are not careful about assessing their trust.
In my experience, despite the Internet and all that comes with it, asking trusted people remains the best way to learn about many things. They just point me to something and I know it's worth my time.
Would be nice if it could scale and not waste time on both sides.
> We have a really good idea of our social-network trust in our heads and we keep updating it, but it seems hard to transfer to a device or even to verbalize.
yup yup yup. I hadn't thought about the A/B comparison approach, I like that.
I also think you need to have separate weights for "how much I trust this person" and "how much I trust this person's weights". Going back to my "gullible friend" example - I might trust their first-hand stories perfectly well, but would trust them much less than my other peers when it comes to relaying second-hand information.
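A toy sketch of the two-weight idea, with hypothetical values: first-hand claims use direct trust, while relayed endorsements get discounted by meta-trust:

    direct_trust = {"friend": 0.9}   # trust in their first-hand stories
    meta_trust   = {"friend": 0.3}   # trust in their trust judgments

    friend_vouches = {"stranger": 0.8}  # the friend vouches for a stranger

    def relayed_trust(via: str, target: str) -> float:
        """Trust relayed through `via` is discounted by meta-trust in `via`."""
        return meta_trust[via] * friend_vouches[target]

    print(direct_trust["friend"])               # 0.9: believe their own stories
    print(relayed_trust("friend", "stranger"))  # ~0.24: discount their vouching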
I disagree about separate weights. I don't trust the judgement of somebody gullible.
But it's related to the problem of what trust is. E.g. I may trust somebody to create secure software, but I wouldn't trust her to take care of my dog. So at the beginning I thought trust should have dimensionality. Currently I think that overly complicates things, not to mention the problem of choosing the categories. So trust means whatever it means for you and the people you trust, and specific use cases can perhaps be covered with multiple identities instead.
Since you've spent some time on the problem: if you have any other insights/ideas/problems, I'd be delighted to hear them.
There are a lot of interesting dynamics. It should influence politics a lot. At the beginning I thought maybe crime too, by making it harder to infiltrate gangs, but then I realized that as a criminal I wouldn't dare keep a list of my accomplices on a device that can end up in the hands of law enforcement.
Why would trust be transitive? I don't trust people 3 degrees of connection away from me. Why would I? If someone X whom I know and trust directly introduces me to source Y, that's already only a slight endorsement. Somebody I know clicking a button 3 years ago saying they trust person A, who liked some article, means pretty much nothing.
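If you model transitive trust the common way, multiplying weights along the chain (an assumption on my part; nobody here committed to a model), the decay is quick:

    # you -> X -> Y -> author, with hypothetical edge weights
    edges = [0.8, 0.7, 0.6]

    trust_in_author = 1.0
    for w in edges:
        trust_in_author *= w

    print(trust_in_author)  # ~0.336: three fairly strong links dilute to a third

Three hops of even moderately strong trust already land near noise, which matches the intuition above.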
The Japanese government is very trustworthy and would never ever lie. Same goes for every other government. This “ministry of truth” OSI layer sounds like a fantastic idea!
>"The proposal imagines the endorsement layer would not rely on a single sensor or information source, but instead offer users the chance to add data to the endorsement layer. The result would theoretically be "an endorsement graph with a data structure expressing the connection between additional information linked to the data."
This is essentially the semantic web, no? I've always liked the idea, but the problem is still trust at the end of the day. You can look to metadata to see whether your actual data is trustworthy or not, but then you have to trust that your metadata is correct, and if you crowdsource it, as the article hints, that's a tough sell, particularly for real-time events.
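For what it's worth, an endorsement-graph entry could be as simple as an RDF-style triple plus signer metadata. All of these field names are my guess; the article only describes the structure abstractly:

    from dataclasses import dataclass

    @dataclass
    class Endorsement:
        subject: str    # URL or hash of the data being endorsed
        predicate: str  # e.g. "recorded-by-sensor", "corroborates", "disputes"
        obj: str        # linked evidence: a sensor ID, another document, etc.
        endorser: str   # public key of whoever added this edge
        signature: str  # so the metadata itself can be verified

    graph = [
        Endorsement("https://example.com/photo.jpg", "recorded-by-sensor",
                    "camera:abc123", "pubkey:alice", "sig:..."),
    ]

Which just restates the problem: you now have to trust endorsers instead of publishers.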
Interesting, but I am a little skeptical of any internet protocols coming out of Japan. Large parts of their internet discriminate against traffic from outside Japan by blocking foreign traffic/transactions entirely or giving them low priority.
It's just something I have observed over the years. With a good VPN, it's like a different internet. Recent examples that come to mind: Starbucks.jp blocks logins from non-Japanese IP addresses; I had to use a VPN to buy some merchandise even with a local address and card. And Space ALC, the translation site, runs super slowly outside Japan (way beyond Pacific Ocean latency).
I wonder if he's referring to their low-bandwidth, often crowded international connections. I don't know if this is still the case though; it was a few years ago.
That's actually a pretty helpful idea, and I think a number of creative solutions could be tried to see which of them sticks. I'm thinking it could even fund itself and finance investigative or scientific work (at some future adoption point).
Interesting how everyone screaming about trusting the experts has never trusted an expert on epistemology. The problem isn't that the criteria are subjective, but that they get appropriated by politics if this isn't institutionally prevented.
I wonder how decentralized this could be. As described, it seems like there may be a decentralized version of this, which was unexpected (for me at least).
No idea if it still exists, but there used to be a plugin/extension called WOT (Web of Trust) that crowdsourced ratings on sites from its users for display to future visitors. This is less a layer and more AR, but I guess it's a similar idea. Any decentralized effort is still going to have federation keys.
We need to learn to stop worrying and love the disinformation. Humans should be skeptical of everything they read/see/hear, be it internet/TV/radio, including this post.
Being skeptical is all well and good, but pragmatically speaking, people do not have enough time in the day to verify all the information coming their way, so they must be selective about what they are skeptical of and figure out how to establish trust with authorities to remove some of that burden from themselves.
I have seen too many fake social media posts that reuse footage from old natural disasters, protests, political rallies, terrorist attacks, and even movies, all to push some kind of false political viewpoint.
These spread quickly due to outrage, but disproving them is hard.