Because our personal data is incorporated into a highly networked system (the various ad platforms), it is worth much more than if it were isolated, or “decentralized.”
Unless a new system is designed that allows individuals more fine-grained control over how their data aggregates with masses of other users’ data, this personal data will have a marginal value approaching zero.
This implies that other means of financing the vast client-server system that is increasingly dominant in first- and second-world societies will have to be found.
It will also require pretty restrictive regulation to prevent companies from simply proceeding as they are, aggregating mass quantities of user data, from which they profit.
While awareness of the problems, potential and real, is slowly dawning amongst the “civilian” population, it is a confused and ill-informed type of concern.
I’m thinking of friends who get upset at learning the government is spying on everyone, yet simultaneously post countless intimate details of their lives on Facebook (or Instagram, Twitter, Reddit, etc.).
My point is that the cost and revenues are not a function of each other as much as they're both a function of usage. Once you get a high volume of users, it's pretty natural to convert at least a small part of that to advertising revenue, even if it doesn't fit your platform well (cf. Twitter). It takes a principled decision not to dip into that revenue source -- I remember once reading how Jason Calacanis was pissed at Jimmy Wales for not wanting to include ads on Wikipedia.
The majority of that complexity is not there to make the user experience better. I actively use both platforms, and I find that Mastodon provides a better user experience in pretty much every way. The UI is snappier, it has better features for interacting with others, it doesn't restrict the API, and so on.
The main cost of Twitter comes from building tools to monetize and exploit the users of the platform.
Which is precisely my point. Or at least one half of it.
Surely there must be a middle ground between an app that people enjoy using being worth $570B and employing 25,000 experts to tweak it slightly, and being worth less than nothing after server costs.
I agree with you that should a decentralized social media app take off, it’s likely to be less profitable, but I don’t think that necessarily means that no one wants to work on it, that no one wants to use it, or that it won’t exist.
Just yesterday a Google spokesmodel talked about “balancing user privacy against business needs”
But making profits is not a human right!
This is not a contradiction. When you post, you can choose what information gets out. You have control (in theory).
When you get spied on, you can't choose.
In other words, yes the whole world may see pictures of me and my friends (even though I don't post on those services), but I still don't share my bedroom insights nor pillow talk.
Yeah, thanks, how about I get to see the tunes that I liked myself and make those judgements. Never pressed the "love" button again.
I presume this is what you're getting at: that these companies get to harvest my data and make insights about me that I don't get to make myself.
Hosting is incredibly cheap nowadays, and the whole point behind decentralization is that no single server has to host all the users. You can run a Digital Ocean server for 5 bucks a month and host your own community. Meanwhile, containerization makes it much simpler to deploy and keep complex apps up to date.
Other decentralized services such as PeerTube are also popping up, and all these services are able to talk to one another via an open protocol called ActivityPub.
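To give a sense of how simple the wire format is, here's a rough Python sketch of the kind of ActivityStreams "Create" activity one server delivers to another under ActivityPub. The actor and URLs are made up, and the HTTP signature that real servers attach to the request is left out:

```python
import json

# A minimal ActivityPub "Create a Note" activity, as one server might
# deliver it to another server's inbox. The actor/inbox URLs are invented.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "attributedTo": "https://example.social/users/alice",
        "content": "Hello, fediverse!",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}

# Federation is essentially an HTTP POST of this JSON-LD document to the
# recipient's inbox (plus an HTTP Signature, omitted here for brevity).
print(json.dumps(activity, indent=2))
```

Because it's just JSON over HTTP, Mastodon, PeerTube and the rest can interoperate without any central coordinator.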
You might be wondering how people are supposed to make money off this and the answer is that they aren't. We don't need walled gardens and megacorps mining our data in order to talk to one another.
DRM for user data: you have a stock of data about yourself, and every time someone wants to access it, they have to ask for permission through the DRM you have put in place.
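As a toy sketch of that idea (all of the names here are hypothetical, just to make it concrete): the data sits in a store the owner controls, and a consumer's read only succeeds for fields the owner has explicitly granted.

```python
class PersonalDataStore:
    """Toy model of permission-gated personal data (illustrative only)."""

    def __init__(self, data):
        self._data = data       # e.g. {"email": ..., "purchases": [...]}
        self._grants = {}       # consumer -> set of fields they may read

    def grant(self, consumer, fields):
        """The owner explicitly allows a consumer to read certain fields."""
        self._grants.setdefault(consumer, set()).update(fields)

    def revoke(self, consumer):
        """The owner withdraws access at any time."""
        self._grants.pop(consumer, None)

    def read(self, consumer, field):
        """A consumer's read only succeeds if permission was granted."""
        if field not in self._grants.get(consumer, set()):
            raise PermissionError(f"{consumer} has no access to {field!r}")
        return self._data[field]


store = PersonalDataStore({"email": "me@example.com", "age": 42})
store.grant("shop.example", {"email"})
print(store.read("shop.example", "email"))   # allowed
# store.read("shop.example", "age")          # would raise PermissionError
```

Of course, as with media DRM, nothing stops a consumer copying the data once it has been read; the enforcement problem is the hard part, not the API.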
ShopIn is working to decentralize buying personas and put users in the driver's seat of how companies can use that data.
It’ll be interesting to see how it works.
Not to sound like a hipster, but I feel like the internet went downhill when it went mainstream. That's when there was a huge incentive to take advantage of people, whether it's spam, scams, or propaganda campaigns.
The centralized control of our data is a separate issue, no?
Not really. The more data is centralized the stronger the advantage the centralizer has over the centralized. In other words, the greater incentive/payoff to taking advantage.
On the subject of misinformation, there are two common, non-exclusive explanations:
- Ordinary people are ill-equipped to judge the provenance of news information on the web
- Ordinary people’s news intake is being manipulated to maximize their base emotional reactions for profit. Current affairs are interwoven with cat videos and soft pornography in an endless feed to tickle the “good”/“bad” classification impulses of our slug brains.
Both are true in my experience. Little short of improved and updated education programs will address the first. However, I’m hopeful that redecentralization can help with the second.
I’m willing to give it a whirl!
In a decentralized system where the parts are diverse, there is no easy way to game every subsystem at once. Further, if the parts are loosely coupled, even if someone manages to game one of the subsystems, the damage is quite isolated, and other parts can carry on unaffected.
I think it's easier to manipulate a centralized platform's algorithms. That "high efficiency" of a centralized platform is also what's making the manipulators more effective.
Same with malware makers and so on. It's always easier to do something when there's a single dominant platform.
Must have been when humans started to speak. Fake news is nothing new; only the name is. In the past there were gossip and rumors, sketchy newspapers that spread whatever sold best, and sometimes even satirical things like Bonsai Kittens (https://en.wikipedia.org/wiki/Bonsai_Kitten) in the '90s.
The only difference today is that people moved from the streets to the net, and gossip-snakes have upgraded their game in quality because of modern tools. But whether centralized or decentralized, the game was and will always be there.
For some people including your parent perhaps, it is axiomatic that free information implies an abundance of false information.
The construction was a little strange. "If only..." usually references an existing technology ironically. In this case I think they are saying there will never be such a tech.
The problem is that it is possible to do, and the solution to it is not something we want in general, but we're heading towards it anyways, and TBL is trying to avoid that with his machinations, and the commenter is effectively asking "how does this solve a problem that would naturally be solved with a solution we really, really don't want, and TBL is going out of his way not to solve?".
I know which I'm more afraid of...
Another option is accountability, requiring a confirmed identity to be disclosed with any posting. If people knew who was behind fake news, it would potentially be less damaging.
Current web technology typically sees one entity controlling the data; if you sign up for a website, your data goes in their database. That's due to the common stack we all build on: some form of API/web server, some form of SQL/NoSQL database, some form of client. If we had a world where developers could simply and freely implement services on top of a different paradigm, we'd stop having to worry about big central entities like Facebook, Google, your local computer store, whoever -- being able to sell your data. They simply wouldn't have it. I'm not sure what that paradigm is, but IPFS, Secure Scuttlebutt, Solid, the Dat Project etc. are all interesting takes on what a solution to that problem might look like.
At the other end of the spectrum is governance, which is sorely lacking; that absence is what allows these misinformation campaigns to spread so freely. Do we still have to worry about governance in a world where we own our own data? Yes, probably, if the surge of illegal and genuinely nasty content like child pornography on the new big distributed networks (e.g. ZeroNet) is any indicator. Censorship is a spectrum, and while I can appreciate sticking to the view that free speech is best for all, there are some behaviours which society has mostly agreed are objectionable.
The current paradigm of each social service provider essentially being a self-determining fiefdom isn't too dissimilar to how humanity's always done things, I guess (for instance, different pubs/bars will have differing clienteles and degrees of socially acceptable behaviour), except for the scale -- which unfortunately probably does merit a change in how our current major sites are run (YouTube and Twitter in particular seem like they've been gamed incredibly well by cynical parties). I don't know how we solve this problem, but better technical underpinnings might make this a simpler problem to solve. Independently, however, I note that there are places like https://social.coop where they're trying to take that problem on head-first.
It will be interesting to see what the next few years of the internet look like. My hope is that we see some of the people interested in this problem start focussing on normal people instead of other techies; suggesting to my friends that they move to Secure Scuttlebutt is pretty hard, as the majority of them really only use their phones (or tablets).
Personally, I think something that tied together messaging and an event invitation/calendaring system would take a massive knock out of the people who are only hanging onto Facebook for those two things, but that's still a techy view of "kill centralisation" and not a social view of "how does that new platform get governed?".
Free speech exists specifically to allow people to go against "what society has mostly agreed is objectionable", that's the whole point.
"What society has mostly agreed is objectionable" is the problem that free speech fixes.
I don't particularly believe it's worth diving into the nuance of it here -- because I'll do a bad job! -- but I should make it clear that I wasn't intending to make any comment about whether the principle is right or wrong (I mean, the UN makes it pretty clear http://www.un.org/en/universal-declaration-human-rights/ :)); I'm merely pointing out that free speech/censorship exist on a spectrum, and our tolerances and interpretations of those principles shift over time. As an example, many democratic nations have decided that there are consequences, enshrined in law, for what they define as hate speech.
I suspect that even in the USA, a country that is extremely passionate about protecting the right to free speech, you'd be pretty hard pressed to argue that sharing pirated content (... at the "less objectionable" end of the spectrum ;)) is an expression of free speech. As the content becomes more objectionable, that argument becomes more difficult.
I'm using "objectionable content" as an example here to simplify the argument; I'll freely admit that what scares me is the distortion of the principle of free speech by (as the sibling comment points out) bad actors. I don't know that I have the education or skillset to discuss that problem effectively, but it is an issue that is beginning to have widespread societal impacts.
That aside, the real point is that solving the problem of centralization vs. decentralization does not make the choice of totalitarianism vs. anarchism, or determine/enforce the behaviours we determine to be socially acceptable on whichever platform we use. The technical underpinnings may favour one end of the spectrum or the other, but do not replace a genuine, social discussion about the governance of our social networks -- whether we replace the current batch or not.
I agree with your point (if I'm understanding it correctly): you can't use software to encode human values.
Piracy is an interesting example. Companies implemented DRM to try and stop people from copying information they own, and it failed. Now people are attempting to do the same to stop companies from copying their information. It didn't work in one direction, why would it work in the other?
There is no such thing as free speech in an era of weaponised media. The principle has become a nonsense.
Free speech cannot exist when there are huge well-funded and technology sophisticated opinion forming machines influencing the public - because shouting down and discrediting “unpopular” (i.e. politically and economically undesirable) opinions is just censorship dressed up as free debate.
Simply put, the way to fight the abuse of free speech is with more free speech, not less, more access to information, not less, more freedom of assembly and association, not less. This principle has been the foundation of the least repressive societies in human history and it has a definite and proven track record, unlike limiting speech, which simply does not.
You're aware the 20th century's relative journalistic neutrality is a historical aberration, right?
Is there any published post-mortem on what went wrong before, and why it's different this time?
Solid has exactly the right vision, but with the wrong architecture because it attempts to re-purpose those failed projects.
And for those who are wondering why I think it's interesting to find Facebook opengraph and twitter cards markup, it's because you can use it for decentralized purposes.
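For example, here's a rough sketch (using requests and BeautifulSoup) of pulling Open Graph and Twitter Card tags out of an arbitrary page; a decentralized client could build link previews from exactly this markup without asking Facebook or Twitter for anything:

```python
import requests
from bs4 import BeautifulSoup

def extract_social_metadata(url):
    """Collect Open Graph (og:*) and Twitter Card (twitter:*) meta tags."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    metadata = {}
    for tag in soup.find_all("meta"):
        key = tag.get("property") or tag.get("name") or ""
        if key.startswith(("og:", "twitter:")):
            metadata[key] = tag.get("content", "")
    return metadata

# Title, description and preview image for a link, no platform API needed.
print(extract_social_metadata("https://en.wikipedia.org/wiki/ActivityPub"))
```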
RDF shows up in graph databases a lot too.
The popular success of technologies depends on countless factors that keep changing at a fast pace, and those factors include a great deal of luck and randomness.
There is no reason to believe that repurposing some basic technologies for something different in a different environment is more likely to fail than inventing entirely new technologies.
Assuming this, it would be smart then, to note those imperfections and constraints, and make a point of not mimicking them when embarking on a new attempt. Instead of, say, blindly flailing about, rehashing the idea indefinitely until it accidentally works.
There might be some chance to the procedure, but it's difficult to argue that we can't affect those probabilities by thinking real damn hard about it, and maybe even working hard too. But working hard without thinking... I'm not so sure about.
I just don't think it's justified to rubbish technologies purely based on a lack of massive traction in a particular environment.
As a simple analogy, it's a bit like where neural networks were not very long ago: a nice idea with potential that wasn't quite there, and many said it was a dead end. Some technology simply takes longer to mature, or depends on missing bits of the puzzle emerging before it can be properly utilised.
So what are these 'missing bits'?
1. Ease of use. RDF, semantics, description logic, OWL, ontologies, controlled vocabularies, taxonomies, Turtle (the supposedly easy-to-read format), JSON-LD etc. have a steep learning curve and no good non-academic path to adoption. It takes a good amount of effort and time to see why this is useful. It's not the sort of tech a developer is going to have a crack at on their next project, and if they did they would fail and then reinforce the impression that it doesn't work. Even some of the fundamental principles are hard for many developers to get their heads around, i.e. the Closed World Assumption vs the Open World Assumption (implicitly assuming a closed world is a mistake I see devs make all the time when dealing with data).
2. Tools. These help with the above, but the options for a decent open-source triple store that supports the operations you need have been limited. Then trying to run one operationally in the age of the cloud is a ball ache most don't want; AWS Neptune possibly helps with this, otherwise I would recommend Blazegraph. Hmm, and then there is creating an ontology yourself: Protege, which is meh but free, or TopBraid, which is expensive and also meh! Programming language support: it is assumed you will do all your work in Java; if not, good luck. Tools and libraries are not well maintained in general, largely due to academically funded rounds of work; people move on, and a lot gets abandoned.
3. Obvious gaps in the tech, for example the one SHACL has filled, and also rules, which SWRL is arguably filling; it's all a bit ad hoc and takes years to reach consensus. I'm sure there are others that I haven't hit, and that in itself is a worry.
4. Understanding the problems this tech is good for. Personally I think the Semantic Web, as described in general, isn't very practical and seems to mask what the tech is good for, which in my opinion is enterprise data integration: joining up lots of different data sources into something well defined (semantics in traditional DBs tend to be really poor), which is particularly a problem in conglomerates. Bringing together data from many places and mapping them together means you don't need to go out and make everyone agree to your standard model.
I'll now give some hand-wavy ideas :)
It's often said this is for machines to communicate and make decisions, which is correct, but I'm not sure RDF on a web page is particularly useful in this context; APIs are. So much labour goes into connecting one API to another API, and there is no discovery, nor any means for a machine to understand what the incoming data actually means, nor would there be with the current approach of a JSON blob and a page of documentation that poorly describes it. I think there is a massive opportunity here, and if it is solved it would dramatically disrupt how backend engineering works.
There certainly does seem to be an as-yet-untapped intersection between some of what GraphQL solves and what JSON-LD and SPARQL provide, where a greater-than-the-sum-of-their-parts result could emerge; this is a hunch on my part.
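To make that hunch slightly more concrete, here's a small sketch, assuming rdflib with its built-in JSON-LD support: load a JSON-LD snippet (the kind of markup many pages already embed) into a graph and ask it questions with SPARQL, without caring about the shape of the original blob. The product data is just an invented schema.org example.

```python
from rdflib import Graph

# A schema.org-style product description, the sort of JSON-LD many sites embed.
jsonld_doc = """
{
  "@context": {"schema": "https://schema.org/"},
  "@type": "schema:Product",
  "schema:name": "Acme Widget",
  "schema:offers": {
    "@type": "schema:Offer",
    "schema:price": "9.99"
  }
}
"""

g = Graph()
g.parse(data=jsonld_doc, format="json-ld")   # rdflib >= 6 has JSON-LD built in

# Once it's a graph, any client can ask questions with SPARQL,
# independent of how the publisher nested their JSON.
query = """
PREFIX schema: <https://schema.org/>
SELECT ?name ?price WHERE {
    ?product a schema:Product ;
             schema:name ?name ;
             schema:offers ?offer .
    ?offer schema:price ?price .
}
"""
for row in g.query(query):
    print(row.name, row.price)
```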
There are zero marginal costs, so media providers are incentivized to spend huge amounts of money to create the most compelling content so that they can dominate the contest for attention. This creates a system with winner take all dynamics. Then, once you have won, you are incentivized to spend your winning to ensure that you continue winning. One of the ways you do this is by buying smaller competitors. Hence centralization is fundamental.
Is there peer to peer VoIP that works? Discovering the other end isn't a problem; this is chat for a game, so there's a way to tell the other end about IP addresses and such. The question is whether you can reliably set up a VoIP connection between N known endpoints, including proper echo suppression and conference bridging, without a central server. Are the issues of getting through firewalls with cooperation from both sides solved? Is there anything that works well enough that non-technical users can use it reliably?
( https://westurner.github.io/hnlog/#comment-16615679 )
> ActivityPub (and OStatus, and ActivityStreams/Salmon, and OpenSocial) are all great specs and great ideas. Hosting and moderation cost real money (which spammers/scammers are wasting).
> Know what's also great? Learning. For learning, we have the xAPI/TinCan spec and also schema.org/Action.
Mastodon has now supplanted GNU social (formerly StatusNet).
I wanted to run my web metadata extractor on it, but there isn't any metadata there anyway. Which is kind of fun for a project about web metadata.
It does ld+json, microformats, microdata, RDFa, and opengraph, mostly by calling other packages.
I feel if we could give users/devs/whoever an appreciation for the structured data (or lack thereof) in a page and its benefits, we might see a general improvement in usability and interoperability. One can dream.
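As a rough sketch of that kind of visibility, assuming the extruct package (one of the existing libraries that wraps JSON-LD, microdata, RDFa, Open Graph and microformats extraction), a quick "how much structured data does this page expose?" report might look like this:

```python
import requests
import extruct  # extracts JSON-LD, microdata, RDFa, Open Graph, microformats

def structured_data_report(url):
    """Count how many structured-data items a page exposes, per syntax."""
    html = requests.get(url, timeout=10).text
    data = extruct.extract(
        html,
        base_url=url,
        syntaxes=["json-ld", "microdata", "rdfa", "opengraph", "microformat"],
    )
    return {syntax: len(items) for syntax, items in data.items()}

# A page with zeros across the board is exactly the "no metadata at all"
# situation described above.
print(structured_data_report("https://example.com/"))
```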
This project http://webdatacommons.org/ extracts from Common Crawl, but it's just raw data and you're on your own for visualization and discovery tools!
I think something like this: https://www.ssllabs.com/ssltest/
but for embedded data would be interesting.
The project doesn't look particularly active from a commit standpoint, though I can imagine a whole lot of non-implementation work has to be done.
Best of luck to Tim and the Solid team!
1) The idea of a set of reusable APIs around data ownership could actually be the foundation of a new internet. If it becomes impossible to concentrate data, the system will be more resilient.
2) The appeal of the Internet lies in the services that are available to users. A decentralized system doesn't appeal to capitalism; value is best captured through centralization and intermediation. Capitalist enterprises produce the bulk of services at the moment. This is a major adoption hurdle.
3) Dr. Berners-Lee appears to be a (digital) anarchist at heart, and it's a painfully beautiful vision.
4) He is partnering with capitalists to try to achieve an anarchist vision. This is a contradiction in terms. Capitalism is about competition for the working class (i.e. users), domination and centralization.
5) It sounds like what he wants is a more egalitarian and democratic system. He might want to talk to more radical political theorists (I assume he may have already in the decades he's been kicking around).
I wish Dr. Berners-Lee the best. I hope Solid gets some adoption. I've heard some people are using Mastodon. Here's hoping it's a trend. Nonetheless, if enough people start migrating, the corporations will offer them something to come back. After all, they can afford to, tech profits in established companies are obscene.
I disagree with you and Marx on this. Capitalism is primarily about trading. Sometimes it leads to centralisation, but it can also work through innovation and effective competition for decentralisation. So Tim's efforts to set up a framework which will support the latter are not a "contradiction in terms" with capitalism.
Or maybe the fact that he is doing so demonstrates that your terms and assumptions are incorrect.
At the same time, the fact that other capitalists are partnering with government agencies maybe shows that the Rothbard quote doesn't reflect the full picture.
Could be that capitalism and freedom are orthogonal concerns.
― Murray N. Rothbard
"Anarchism holds the state to be undesirable, unnecessary and harmful."
As private property cannot exist without violence and capitalism cannot exist without the state, some forms of anarchism are incompatible with capitalism.
I'm not an anarchist, just stating some definitions.
All. You meant all the goods AND services. Unless you count the kids living on a commune who sell berries at the farmers market.
He insisted on total control over ZigZag. He wanted it to exist as proprietary software, not an open protocol or anything like that. Decades passed and nothing came of his ideas and software.
Then came the web, HTML, RDF, etc., and Ted Nelson became desperate. He received a patent for zzstructure in 2001. When a group of talented individuals approached him, he first approved their work on GZZ (GNU ZigZag), then screwed them over completely once they started to get work done.
Ted Nelson's genius is voided by his insecurities, greed, or something. Maybe when his patent expires in 2021, something will come of it.
We can take this farther. There are a lot of people trying to raise the bar on development tools and at least some of us believe that people tend to do what they know. What is demonstrated for them.
Give them crappy tools and they will be satisfied to create crappy things with them. Give someone a better tool and many will rise to that level.
There are several aspects of code comprehension that I think would be conducive to his designs. And monitors are getting wide enough now that we nearly have the space to do them.
Go to Ted Nelson's Wikipedia page and mouse over the "Project Xanadu" link.
> Give them crappy tools and they will be satisfied to create crappy things with them. Give someone a better tool and many will rise to that level.
The value of Wikipedia page previews has very little to do with the quality of the technology. The quality of the technology is perfectly adequate for at least one version of what Ted Nelson is describing.
The value is determined by what those tooltips are legally allowed to link to -- can they show all the primary documents which anyone on HN can trivially discover and access in digital format, or can they merely show other Wikipedia pages?
As long as it is illegal to link to even primary documents whose authorship was funded by public money (like everything on Scihub), it really doesn't matter what type of view or even doc format you have.
And if xanadu documents were more useful (because of their interlinking and history) than the close-able flat documents (or perhaps closed documents that only link to closed source), then naturally things would slowly trend towards open documents by default, as it becomes more difficult to constrain yourself to the closed-source world.
The problem today might be then, that everyone already accepts that primary documents are likely to be sealed off, and there's no significant incentive otherwise. But if the idea of having a sufficiently useful document upgraded from a flat pdf to a historically-archiving xanadu document...
Not surprising given the source, but I like to think Hacker News expects better than an "I Was Devastated" headline.