Tim Berners-Lee’s Solid is a platform designed to re-decentralize the web (vanityfair.com)
306 points by rapnie 4 months ago | 109 comments



It’s amazing to me that a huge component of the problem TBL and colleagues are trying to solve is not discussed in the article, or here: we exchange our data for access to services that are valuable, and expensive to develop and maintain.

Because our personal data is incorporated into a highly networked system (the various ad platforms), it is worth much more than if it were isolated, or “decentralized.”

Unless a new system is designed both to give individuals fine-grained control over their data and to aggregate it with that of masses of other users, this personal data will have a marginal value approaching zero.

This implies that other means will have to be found of financing the vast client-server system that is increasingly dominant in first- and second-world societies.

It will also require pretty restrictive regulation to prevent companies from simply proceeding as they are, aggregating mass quantities of user data, from which they profit.

While awareness of the problems, potential and real, is slowly dawning among the “civilian” population, it is a confused and ill-informed type of concern.

I’m thinking of friends who get upset at learning the government is spying on everyone, who simultaneously post countless intimate details of their lives on Facebook (or Instagram, Twitter, Reddit, etc.)


You are making a huge assumption here, namely that advertising-supported services that cost billions to develop and maintain, and which generate billions in advertising revenue, would still cost billions to develop and maintain if they were not generating billions in advertising revenue. This assumption is in my opinion entirely incorrect.


The underlying reason those services have both high costs and high revenues is their extremely high volume of users. I'm sure something like DuckDuckGo has much lower operational costs, even though it provides a service not substantially different from Google's search engine.

My point is that the costs and revenues are not a function of each other as much as they're both a function of usage. Once you get a high volume of users, it's pretty natural to convert at least a small part of that to advertising revenue, even if it doesn't fit your platform well (cf. Twitter). It takes a principled decision not to dip into that revenue source -- I remember once reading how Jason Calacanis was pissed at Jimmy Wales for not wanting to include ads on Wikipedia.


Compare Twitter with Mastodon Social. Twitter employs 3000 people and it's a hugely complex piece of software. Meanwhile, Mastodon is mostly built by a single guy, and a small volunteer community.

The majority of that complexity is not there to make the user experience better. I actively use both platforms, and I find that Mastodon provides a better user experience in pretty much every way. The UI is snappier, it has better features for interacting with others, it doesn't restrict the API, and so on.

The main cost of Twitter comes from building tools to monetize and exploit the users of the platform.


> The main cost of Twitter comes from building tools to monetize and exploit the users of the platform.

Which is precisely my point. Or at least one half of it.


Yup, I'm just expanding on that to illustrate with a concrete example that you don't need megacorps to build these kinds of platforms in practice.


As I recall, the internet behemoths settled on ad revenue when other forms of generating revenue failed for their systems. People were not willing to pay monthly for Facebook or Google search. But maybe that’ll change after people react to the current situation.


Your choices aren't between tracking and subscription, they're between tracking, subscription, and advertisement. You can advertise without tracking (contextually, the original claim of AdWords) and you can sell tracking data to non-advertisers. To say that the economic model of the internet depends on privacy violation is a "the world has to be this bad" argument that just doesn't hold up, especially given the memory of internet advertisements in the time before so much brainpower was funneled into tracking us.
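To make the contextual option concrete, here is a toy sketch (the ad inventory, names, and scoring are entirely made up for illustration) of matching ads against a page's own text rather than against a user profile; no tracking identifier ever enters the picture:

```python
# Toy contextual ad matching: the ad is chosen from the page's own
# keywords, so no user profile or tracking identifier is needed.

def pick_ad(page_text, ad_inventory):
    """Return the ad whose keywords best overlap the page's words."""
    page_words = set(page_text.lower().split())
    best_ad, best_score = None, 0
    for ad in ad_inventory:
        score = len(page_words & ad["keywords"])
        if score > best_score:
            best_ad, best_score = ad, score
    return best_ad

ads = [
    {"name": "hiking boots", "keywords": {"trail", "hiking", "outdoors"}},
    {"name": "code editor", "keywords": {"programming", "editor", "code"}},
]

chosen = pick_ad("a review of hiking trail gear for outdoors trips", ads)
```

Real contextual systems are far more elaborate, but the key property is the same: the input is the page being viewed, not the person viewing it.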


Except that what we actually have now is "tracking and advertisement", with the former being largely covert, but probably as big a commercial driver, if not bigger. Watch as much of the Internet loses its shit over users introducing privacy measures, while very little actually seems to be done to introduce advertising that doesn't violate privacy.


That invasive advertising surveillance must happen — or we won’t be able to afford to stay in touch with each other — is a very common argument, but not a convincing one to me. This was Zuckerberg’s argument to Congress for extracting as much personal data as possible from people.

Surely there must be a middle ground between an app that people enjoy using being worth $570B and employing 25,000 experts to tweak it slightly, and being worth less than nothing after server costs.

I agree with you that should a decentralized social media app take off, it’s likely to be less profitable, but I don’t think that necessarily means that no one wants to work on it, that no one wants to use it, or that it won’t exist.


> That invasive advertising surveillance must happen — or we won’t be able to afford to stay in touch with each other — is a very common argument

Just yesterday a Google spokesmodel talked about “balancing user privacy against business needs”

https://www.theregister.co.uk/2018/06/29/california_data_pri...

But making profits is not a human right!


The products wouldn't exist if they didn't make a profit. So really, it's balancing user privacy against user desire for the product to exist.


Corporations are people too and profit is the oxygen they need.


Balancing compliance with the law against the desire to make profits is the accurate translation. Pure Silicon Valley.


Yeah I love seeing corporations have their liberty curtailed in jail when they break the law. /s


Don't assume the cost of the infrastructure for privacy-invasive advertising is free, or that it's just automatically easy to do such things once you're offering some other user-facing service. Acquiring, storing, correlating, and sharing all of that user data is very expensive in architectural terms. It will vary by business model, but I would imagine anywhere from 20-80% of operating costs go out the window if you architect your systems to only offer the services users actually wanted.


"I’m thinking of friends who get upset at learning the government is spying on everyone, who simultaneously post countless intimate details of their lives on Facebook (or Instagram, Twitter, Reddit, etc.)"

This is not a contradiction. When you post, you can choose what information gets out. You have control (in theory).

When you get spied on, you can't choose. In other words, yes the whole world may see pictures of me and my friends (even though I don't post on those services), but I still don't share my bedroom insights nor pillow talk.


"Because our personal data is incorporated into a highly networked system (the various ad platforms), is worth much more than if it isolated, or “decentralized.” Actually this is IMHO a myth. It is worth more to the ad companies. But not to me. In fact, my data is more interesting and more valuable to me than to the ad system. When I can run apps (including AI) which work for me doing aggregation across all kinds of data about me they will provide me with much greater power. Personal data integration. AIs which work for me, report to me. Being able to write trusted apps which have read/access to all aspects of a person's life, not just the stuff they shared in a social network silo.


Yeah, it really irked me when I discovered that I couldn't easily review the tracks that I "loved" with Apple Music. When researching just why, I found some disingenuous nonsense on the Apple support forums about how they use it to help provide a better experience.

Yeah, thanks; how about I get to see the tunes that I liked myself and make those judgements. Never pressed the "love" button again.

I presume this is what you're getting at. That these companies get to harvest my data and make insights about me that I don't get to make myself.


WhatsApp used to work and be profitable for 99¢ a year.


Was it worth not selling at that level of profitability, though?


Ask founders, ask users.


Users don't have a say in the sale.


You think they weren't impacted?


Mastodon Social is a perfect example of a thriving decentralized platform. It is a network of small communities that are mostly run by volunteers or are supported by the users directly. There are no ads or exploitation of users in this system.

Hosting is incredibly cheap nowadays, and the whole point behind decentralization is that no single server has to host all the users. You can run a Digital Ocean server for 5 bucks a month and host your own community. Meanwhile, containerization makes it much simpler to deploy and keep complex apps up to date.

Other decentralized services such as PeerTube are also popping up, and all these services are able to talk to one another via an open protocol called ActivityPub.
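For the curious, that interoperability comes down to servers exchanging ActivityStreams objects as JSON. A minimal "Create a Note" activity, the kind of payload one ActivityPub server delivers to another's inbox, looks roughly like this (the domain and IDs below are invented for illustration):

```python
import json

# A minimal ActivityStreams "Create" activity wrapping a Note. Under
# ActivityPub, one server POSTs JSON like this to another's inbox.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "id": "https://example.social/users/alice/statuses/1/activity",
    "actor": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "id": "https://example.social/users/alice/statuses/1",
        "attributedTo": "https://example.social/users/alice",
        "content": "Hello, fediverse!",
    },
}

payload = json.dumps(activity)
```

Real deliveries also carry HTTP signatures and more fields, but the shared vocabulary is what lets a Mastodon post land in a PeerTube comment thread and vice versa.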

You might be wondering how people are supposed to make money off this and the answer is that they aren't. We don't need walled gardens and megacorps mining our data in order to talk to one another.


> Unless a new system is designed both to give individuals fine-grained control over their data and to aggregate it with that of masses of other users

DRM for user data: you have a store of data about yourself, and every time someone wants to access it they have to ask for permission through the DRM you have put in place.
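As a toy illustration of that idea (not how any real system works; Solid, for instance, uses per-resource access control documents rather than code like this), a personal data store that refuses reads unless the owner has granted the requester permission might look like:

```python
# Hypothetical sketch: every read of the owner's data is mediated by an
# owner-set grant, in the spirit of "ask my gatekeeper for permission".

class DataPod:
    def __init__(self):
        self._data = {}    # field name -> value
        self._grants = {}  # field name -> set of allowed requesters

    def put(self, field, value):
        self._data[field] = value
        self._grants.setdefault(field, set())

    def grant(self, field, requester):
        self._grants[field].add(requester)

    def read(self, field, requester):
        if requester not in self._grants.get(field, set()):
            raise PermissionError(f"{requester} may not read {field}")
        return self._data[field]

pod = DataPod()
pod.put("email", "me@example.org")
pod.grant("email", "trusted-app")
```

The hard part, as the DRM analogy suggests, is enforcement: once data leaves the pod, nothing in code stops the recipient from copying it.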


Some of the most valuable data we have is our purchase history. That lets advertisers know what we buy. They can then target us better.

ShopIn is working to decentralize buying personas and put users in the driver's seat of how companies can use that data.

It’ll be interesting to see how it works.


https://solid.mit.edu - for anyone who wants to see the basics in more detail.


How does decentralizing things combat the scourge of misinformation and fake news? Wouldn't it exacerbate the issue?

Not to sound like a hipster, but I feel like the internet went downhill when it went mainstream. That's when there was a huge incentive to take advantage of people, whether it's spam, scams, or propaganda campaigns.

The centralized control of our data is a separate issue, no?


> a huge incentive to take advantage of people... The centralized control of our data is a separate issue, no?

Not really. The more data is centralized the stronger the advantage the centralizer has over the centralized. In other words, the greater incentive/payoff to taking advantage.

On the subject of misinformation, there are two common, non-exclusive explanations:

- Ordinary people are ill equipped to judge the provenance of news information on the web

- Ordinary people’s news intake is being manipulated to maximize their base emotional reactions for profit. Current affairs are interwoven with cat videos and soft pornography in an endless feed to tickle the “good”/“bad” classification impulses of our slug brains.

Both are true in my experience. Little short of improved and updated education programs will address the first. However, I’m hopeful that redecentralization can help with the second.

I’m willing to give it a whirl!


If there is any loophole in a centralized system, there are huge benefits to exploiting it. So, there are strong incentives to attempt that, and a matter of time before someone finds a loophole that's not easy to patch. Correspondingly, keeping up with attacks and patching loopholes is a necessary and intensive endeavor.

In a decentralized system where the parts are diverse, there is no easy way to game every subsystem at once. Further, if the parts are loosely coupled, even if someone manages to game one of the subsystems, the damage is quite isolated, and other parts can carry on unaffected.


When did most "fake news" start appearing? When you could promote it on one of the most centralized platforms around: Facebook.

I think it's easier to manipulate a centralized platform's algorithms. That "high efficiency" of a centralized platform is also what's making the manipulators more effective.

Same with malware makers and so on. It's always easier to do something when there's a single dominant platform.


> When did most "fake news" start appearing?

Must have been when humans started to speak. Fake news is nothing new; just the name is. In the past there were gossip and rumors, sketchy newspapers that spread whatever sold best, and sometimes even satirical things like Bonsai Kittens (https://en.wikipedia.org/wiki/Bonsai_Kitten) in the '90s.

The only difference today is that people moved from the streets to the net, and gossip-snakes have upgraded their game in quality because of modern tools. But whether centralized or decentralized, the game was and will always be there.


Especially if a platform like Facebook allows injecting promoted stuff into users' streams for money.


If only there was some way we could control what information people were able to share. Then only the correct information would spread.

/s


Is there anything to this comment beyond hollow, off-target sarcasm? The comment you're responding to seems like a legitimate question: The OP describes his horror at the increasingly common manipulation and misinformation campaigns on the Web as being part of Berners-Lee's motivation, but this solution seems to make them easier if anything, and certainly doesn't appear to make them harder.


I took that comment to say that its parent was calling for the impossible. So not off-topic.

For some people including your parent perhaps, it is axiomatic that free information implies an abundance of false information.

The construction was a little strange. "If only..." usually references an existing technology ironically. In this case I think they are saying there will never be such a tech.


The existing tech he's referring to is what we're approaching with the web as it is now (silos of centralized systems, in which information can be controlled), and the irony is that a large part of why decentralized systems (and TBL's construction) are notable is that they avoid such a scenario. They explicitly do not solve the problem of fake news, because they explicitly do not want to live in a world where fake news is a solved problem (the issues that come with that solution are far worse than the problem of fake news).

The problem is that it is possible to do, and the solution is not something we want in general, but we're heading towards it anyway. TBL is trying to avoid that with his machinations, and the commenter is effectively asking: "how does this solve a problem that would naturally be solved by a solution we really, really don't want, and that TBL is going out of his way not to implement?"


I think he's saying one way to control misinformation is by applying censorship, with the obvious implication that censorship is undesirable. You take the good with the bad, whether that means freedom of speech and some misinformation, or "correct" information and censoring any other news or view points.

I know which I'm more afraid of...


One other option is to increase the cost of putting information out there. That's basically what we had pre-internet. Having gatekeepers isn't censorship but it does cut down on dissemination of fake news. But that too has its downsides.

Another option is accountability, requiring a confirmed identity to be disclosed with any posting. If people knew who was behind fake news, it would potentially be less damaging.


There are two closely intertwined but separate issues: centralization and governance.

Current web technology typically sees one entity controlling the data; if you sign up for a website, your data goes in their database. That's due to the common stack we all build on: some form of API/web server, some form of SQL/NoSQL database, some form of client. If we had a world where developers could simply and freely implement services on top of a different paradigm, we'd stop having to worry about big central entities like Facebook, Google, your local computer store, whoever -- being able to sell your data. They simply wouldn't have it. I'm not sure what that paradigm is, but IPFS, Secure Scuttlebutt, Solid, the Dat Project etc. are all interesting takes on what a solution to that problem might look like.

At the other end of the spectrum is governance, which is something sorely lacking, and which allows these misinformation campaigns to spread so freely. Do we still have to worry about governance in a world where we own our own data? Yes, probably, if the surge of illegal and genuinely nasty content like child pornography on the new big distributed networks (e.g. ZeroNet) is any indicator. Censorship is a spectrum, and while I can appreciate sticking to the view that free speech is best for all, there are some behaviours which society has mostly agreed are objectionable.

The current paradigm of each social service provider essentially being a self-determining fiefdom isn't too dissimilar to how humanity's always done things, I guess (for instance, different pubs/bars will have differing clienteles and degrees of socially acceptable behaviour), except for the scale -- which unfortunately probably does merit a change in how our current major sites are run (YouTube and Twitter in particular seem like they've been gamed incredibly well by cynical parties). I don't know how we solve this problem, but better technical underpinnings might make this a simpler problem to solve. Independently, however, I note that there are places like https://social.coop where they're trying to take that problem on head-first.

It will be interesting to see what the next few years of the internet look like. My hope is that we see some of the people interested in this problem start focussing on normal people instead of other techies; suggesting to my friends that they move to Secure Scuttlebutt is pretty hard, as the majority of them really only use their phones (or tablets).

Personally, I think something that tied together messaging and an event invitation/calendaring system would take a massive knock out of the people who are only hanging onto Facebook for those two things, but that's still a techy view of "kill centralisation" and not a social view of "how does that new platform get governed?".


"...while I can appreciate sticking to the view that free speech is best for all, there are some behaviours which society has mostly agreed are objectionable..."

Free speech exists specifically to allow people to go against "what society has mostly agreed is objectionable", that's the whole point.

"What society has mostly agreed is objectionable" is the problem that free speech fixes.


Free speech is... an incredibly nuanced topic, and I apologise for phrasing this discussion poorly in the original post.

I don't particularly believe it's worth diving into the nuance of it here -- because I'll do a bad job! -- but I should make it clear that I wasn't intending to make any comment about whether the principle is right or wrong (I mean, the UN makes it pretty clear http://www.un.org/en/universal-declaration-human-rights/ :)); I'm merely pointing out that free speech/censorship exist on a spectrum, and our tolerances and interpretations of those principles shift over time. As an example, many democratic nations have decided that there are consequences, enshrined in law, for what they define as hate speech.

I suspect that even in the USA, a country that is extremely passionate about protecting the right to free speech, you'd be pretty hard pressed to argue that sharing pirated content (... at the "less objectionable" end of the spectrum ;)) is an expression of free speech. As the content becomes more objectionable, that argument becomes more difficult.

I'm using "objectionable content" as an example here to simplify the argument; I'll freely admit that what scares me is the distortion of the principle of free speech by (as the sibling comment points out) bad actors. I don't know that I have the education or skillset to discuss that problem effectively, but it is an issue that is beginning to have widespread societal impacts.

That aside, the real point is that solving the problem of centralization vs. decentralization does not make the choice of totalitarianism vs. anarchism, or determine/enforce the behaviours we determine to be socially acceptable on whichever platform we use. The technical underpinnings may favour one end of the spectrum or the other, but do not replace a genuine, social discussion about the governance of our social networks -- whether we replace the current batch or not.


Thanks for clarifying. I feel bad for jumping on you so quickly. There is a growing anti-free speech movement that scares me and I come down on it hard even at the hint of it. I've lost good friends over it.

I agree with your point (if I'm understanding it correctly), you can't use software to encode human values.

Piracy is an interesting example. Companies implemented DRM to try and stop people from copying information they own, and it failed. Now people are attempting to do the same to stop companies from copying their information. It didn't work in one direction, why would it work in the other?


Until the people who manipulate free speech for political and financial gain either poison it or ban it.

There is no such thing as free speech in an era of weaponised media. The principle has become a nonsense.

Free speech cannot exist when there are huge, well-funded, and technologically sophisticated opinion-forming machines influencing the public, because shouting down and discrediting “unpopular” (i.e. politically and economically undesirable) opinions is just censorship dressed up as free debate.


No matter how dangerous free speech may be in your mind, censorship is 1000x more dangerous. I see evidence of this all around me as the remnants of an inhumane regime still stand in every city here and in the cities of most of our neighbors. If no such evidence is present in your environment, consider yourself extremely fortunate and please don't take it for granted. Pick up a history book and consider the implications of what you are saying very carefully.

Simply put, the way to fight the abuse of free speech is with more free speech, not less, more access to information, not less, more freedom of assembly and association, not less. This principle has been the foundation of the least repressive societies in human history and it has a definite and proven track record, unlike limiting speech, which simply does not.


>There is no such thing as free speech in an era of weaponised media. The principle has become a nonsense.

You're aware that the 20th century's relative journalistic neutrality is a historical aberration, right?

https://www.independent.co.uk/news/long_reads/frondeurs-and-...

https://theconversation.com/techniques-of-19th-century-fake-...


I notice there are semantic web / linked data / foaf folks on board. These were some of TimBL's past, largely failed attempts to redirect the web's future.

Is there any published post-mortem on what went wrong before, and why it's different this time?

Solid has exactly the right vision, but with the wrong architecture because it attempts to re-purpose those failed projects.


I crawled one page each of the top 15 million websites, and 6 million of them had some kind of microformat markup on the front page, mostly OpenGraph (1/2) and Twitter Cards (1/4). That's still something.

And for those who are wondering why I think it's interesting to find Facebook OpenGraph and Twitter Card markup: it's because you can use it for decentralized purposes.
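For anyone who wants to try this themselves, that markup can be pulled out of a page's meta tags with nothing but the Python standard library; the sample HTML here is invented:

```python
from html.parser import HTMLParser

# Extract OpenGraph / Twitter Card metadata from <meta> tags.
# og:* values live in the "property" attribute, twitter:* in "name".
class MetaTagParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("property") or a.get("name")
        if key and (key.startswith("og:") or key.startswith("twitter:")):
            self.meta[key] = a.get("content")

html = """<html><head>
<meta property="og:title" content="Re-decentralizing the web">
<meta name="twitter:card" content="summary">
</head><body></body></html>"""

parser = MetaTagParser()
parser.feed(html)
```

A crawler for the survey above would just fetch each front page and feed it through a parser like this, tallying which vocabularies appear.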


Semantic web technologies have had quite a bit of adoption, but it's all pretty niche. JSON-LD embedded in HTML is the current widely deployed iteration, and were it not for Google's love of structured data it would not be nearly as big.

RDF shows up in graph databases a lot too.


The semantic web hasn't "failed", it's still developing. Remember, it's not meant for human eyes (by and large) anyway. Semantic Web tech is enabling tech that end users may never even know exists.


The Google Knowledge Graph is the most successful (and centralized) vision of the Semantic Web there is. Hardly a failure.


You mean "failed" like neural networks had failed before they suddenly took over the world?

The popular success of technologies depends on countless factors that keep changing at a fast pace, and those factors include a great deal of luck and randomness.

There is no reason to believe that repurposing some basic technologies for something different in a different environment is more likely to fail than inventing entirely new technologies.


If we assume that the previous constructions weren't perfect, and it wasn't pure luck (or lack thereof) that shut them down, then our 20/20 hindsight should be capable of distilling (some of) those imperfections, environmental constraints and perhaps even a root cause for failure.

Assuming this, it would be smart then, to note those imperfections and constraints, and make a point of not mimicking them when embarking on a new attempt. Instead of, say, blindly flailing about, rehashing the idea indefinitely until it accidentally works.

There might be some chance to the procedure, but it's difficult to argue that we can't affect those probabilities by thinking real damn hard on it, and maybe even working hard too. But working hard without thinking... I'm not so sure about.


Of course it makes sense to think about the ways in which specific technological choices and designs may have negatively affected the chances of success. I'm sure Tim Berners-Lee has done a lot of thinking.

I just don't think it's justified to rubbish technologies purely based on a lack of massive traction in a particular environment.


Sure, but if he's going to rehash an old idea, it's reasonable to ask why he expects it to work this time, when it failed last time. If he's done the research, providing it isn't a large ask; and if he hasn't, then...


A good question!

In a simple analogy, it's a bit like where neural networks were not very long ago: a nice idea with potential that wasn't quite there, and many said it was a dead end. Some technology simply takes longer to mature, or depends on missing bits of the puzzle emerging before it can be properly utilised.

So what are these 'missing bits'?

1. Ease of use. RDF, semantics, description logic, OWL, ontologies, controlled vocabularies, taxonomies, Turtle (the supposedly easy-to-read format), JSON-LD, etc. have a steep learning curve and no good non-academic path to adoption. It takes a good amount of effort and time to see why this is useful. It's not the sort of tech a developer is going to have a crack at on their next project, and if they did they would fail and then reinforce the impression that it doesn't work. Even some of the fundamental principles are hard for many developers to get their heads around, i.e. the Closed World Assumption vs. the Open World Assumption (assuming a closed world is a problem I see so often from devs in general when dealing with data).

2. Tools. These help with the above, but also: finding a decent open source triple store that supports the operations you need has been challenging, and then trying to run it operationally in the time of the Cloud is a ball ache most don't want. AWS Neptune possibly helps with this; otherwise I would recommend Blazegraph. And then there is creating an ontology yourself: Protégé, which is meh but free, or TopBraid, which is expensive and meh! Programming language support: it is assumed you will do all your work in Java; if not, good luck. Tools and libraries are not well maintained in general, largely due to academically funded rounds of work; people move on, and lots get abandoned.

3. Obvious gaps in the tech: for example, one that SHACL has now filled, and rules, which arguably SWRL is filling. It's all a bit ad hoc and takes years to reach consensus. I'm sure there are gaps I haven't hit yet, and that in itself is a worry.

4. Understanding which problems this tech is good for. Personally I think the Semantic Web, as described in general, isn't very practical and tends to mask what the tech is good for, which, again in my opinion, is enterprise data integration: joining up lots of different data sources into something well defined (semantics tend to be really poor in traditional DBs), which is particularly a problem in conglomerates. Bringing together data from many places and mapping it together means you don't need to go out and make everyone agree to your standard model.

I'll now give some hand-wavy ideas. :) It's often said this tech is for machines to communicate and make decisions, which is correct, but I'm not sure RDF on a web page is particularly useful in this context; APIs are. So much labour goes into connecting one API to another API, and there is no discovery, nor any means for a machine to understand what the incoming data actually means; nor would there be with the current approach of having a JSON blob and a page of documentation that poorly describes it. I think there is a massive opportunity here, and if it is solved it would dramatically disrupt how backend engineering works.
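To make the data-integration point concrete, here is a toy in-memory triple store in Python: facts from different source systems become (subject, predicate, object) triples, and a query is a pattern where None is a wildcard. The vocabulary and identifiers are invented, and a real deployment would use a proper store (e.g. Blazegraph) queried with SPARQL:

```python
# Facts from separate systems (a CRM, a billing DB, a shop) expressed
# as triples, so they can be joined without a shared schema up front.
triples = [
    ("crm:cust42", "foaf:name", "Ada"),
    ("crm:cust42", "ex:boughtFrom", "shop:berlin"),
    ("billing:acct9", "ex:sameCustomerAs", "crm:cust42"),
]

def match(pattern, store):
    """Yield triples matching an (s, p, o) pattern; None is a wildcard."""
    s, p, o = pattern
    for t in store:
        if all(q is None or q == v for q, v in zip((s, p, o), t)):
            yield t

# Everything known about crm:cust42, regardless of source system:
facts = list(match(("crm:cust42", None, None), triples))
```

The open-world flavour falls out naturally: an empty result means "nothing asserted here", not "false", which is exactly the CWA/OWA distinction that trips developers up.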


The Semantic Web has basically become GraphQL.


Well, no, not really, as that completely lacks the semantic part.

There certainly does seem to be an as-yet-untapped intersection between some of what GraphQL solves and what JSON-LD and SPARQL provide, where a greater-than-the-sum-of-its-parts result could emerge. This is a hunch on my part.


Media centralizes. This is fundamental.

There are zero marginal costs, so media providers are incentivized to spend huge amounts of money to create the most compelling content so that they can dominate the contest for attention. This creates a system with winner-take-all dynamics. Then, once you have won, you are incentivized to spend your winnings to ensure that you continue winning. One way you do this is by buying smaller competitors. Hence centralization is fundamental.


Ban media organizations larger than some size (or some better proxy). Nothing is fundamental when you can control the rules.


Companies like Sinclair would probably love that, they can more easily control lots of small, key media companies.


Only when there’s a profit motive - which admittedly most do, although some are principled and independent.


It's not just the profit motive. Popularity effects centralize systems too.


Can anyone explain how Solid https://solid.mit.edu/ compares to (or relates to) Mastodon https://joinmastodon.org/ ? They seem to have the same goals.


Mastodon is only federated microblogging. The Solid webpage suggests address and contact management, file sharing, article publication, and web annotation.


The protocol underneath mastodon can do all that too. I’d like to see federation with existing decentralised protocols with similar intent.


Here's a somewhat related question on peer to peer systems.

Is there peer to peer VoIP that works? Discovering the other end isn't a problem; this is chat for a game, so there's a way to tell the other end about IP addresses and such. The question is whether you can reliably set up a VoIP connection between N known endpoints, including proper echo suppression and conference bridging, without a central server. Are the issues of getting through firewalls with cooperation from both sides solved? Is there anything that works well enough that non-technical users can use it reliably?


Isn't that what WebRTC does?


WebRTC defines STUN (Session Traversal Utilities for NAT) and TURN (Traversal Using Relays around NAT) for dealing with firewalls. The problem is that they're optional and not always implemented. Second, TURN relays data through a central server, so it can add latency or hit bandwidth bottlenecks.
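To make the NAT-traversal piece concrete: STUN itself is a very small protocol. A Binding Request is just a 20-byte header, and the Binding Response it elicits tells the client its public (server-reflexive) address, which is what WebRTC peers behind NAT exchange. A minimal sketch of constructing the request per RFC 5389 (this only builds the packet; actually sending it over UDP to a STUN server is left out):

```python
import os
import struct

# STUN Binding Request header (RFC 5389): 2-byte message type,
# 2-byte message length, 4-byte magic cookie, 12-byte transaction ID.
BINDING_REQUEST = 0x0001
MAGIC_COOKIE = 0x2112A442

def stun_binding_request() -> bytes:
    # No attributes, so the message length field is 0.
    transaction_id = os.urandom(12)
    return struct.pack("!HHI", BINDING_REQUEST, 0, MAGIC_COOKIE) + transaction_id

packet = stun_binding_request()
assert len(packet) == 20
# Sent over UDP to a public STUN server (conventionally port 3478), this
# elicits a Binding Response containing the sender's public address.
```

TURN, by contrast, is a full relay: the server forwards every media packet, which is where the latency and bandwidth costs in the comment above come from.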


Somewhat, but in practice, WebRTC punts half of the problem to the site's own code. That's why some sites using WebRTC work great almost everywhere, and others seem to work great as long as you're not on any kind of unusual connection/firewall/proxy.


WebRTC behaves properly until you support multiple browsers. Then it becomes a nightmare to maintain.


Does this solution entail DNS roots outside of ICANN's purview? The United States regularly seizes private domains under abusive legislation like the DMCA. In many other cases, such as WikiLeaks, private corporations simply and casually agree to the US Government's demands to rescind registration. https://www.theguardian.com/media/blog/2010/dec/03/wikileaks...


Does anyone have some links on Solid that aren't media articles? I can't find anything, not even in Tim's homepage.



Spec: https://github.com/solid/solid-spec

Source: https://github.com/solid/solid

...

From https://news.ycombinator.com/item?id=16615679 ( https://westurner.github.io/hnlog/#comment-16615679 )

> ActivityPub (and OStatus, and ActivityStreams/Salmon, and OpenSocial) are all great specs and great ideas. Hosting and moderation cost real money (which spammers/scammers are wasting).

> Know what's also great? Learning. For learning, we have the xAPI/TinCan spec and also schema.org/Action.

Mastodon has now supplanted StatusNet/GNU social.


Here is the HN discussion from 2 years ago on Solid:

https://news.ycombinator.com/item?id=12280764


Firefox Focus gives an HTTPS security warning and won't let me visit the site.


Amusingly, the Solid website's SSL cert isn't compatible with CentOS 7, with either curl or Python. Which is kinda the state of web security in a nutshell, sometimes.

I wanted to run my web metadata extractor on it, but there isn't any metadata there anyway. Which is kind of fun for a project about web metadata.


I’m on mobile so I can’t fully inspect the cert, but since it’s hosted at MIT, my guess is the Comodo-backed cert is through Internet2. The site maintainer probably neglected to include the intermediate cert, which is fine for many browsers because they cache those, but is a problem for some clients.


Yeah, I think Qualys is agreeing with you, they say the chain is incomplete:

https://www.ssllabs.com/ssltest/analyze.html?d=solid.mit.edu


is your extractor open source? (if so, could I get a link?)


This:

https://github.com/scrapinghub/extruct

does ld+json, microformats, microdata, RDFa, and Open Graph, mostly by calling other packages.
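For a sense of what the ld+json case involves (extruct does far more, and this is not its API): structured metadata of that flavor lives in `<script type="application/ld+json">` blocks, so the core extraction step is just finding those blocks and parsing their contents. A much-simplified stdlib sketch, run against a hypothetical page fragment:

```python
import json
from html.parser import HTMLParser

# Collect the contents of <script type="application/ld+json"> blocks.
# This is a simplified illustration, not how extruct actually works.
class LdJsonExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_ldjson = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        self._in_ldjson = (
            tag == "script" and ("type", "application/ld+json") in attrs
        )

    def handle_data(self, data):
        if self._in_ldjson and data.strip():
            self.items.append(json.loads(data))

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_ldjson = False

# Hypothetical page fragment:
html = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article", "headline": "Solid"}
</script>
</head><body></body></html>"""

parser = LdJsonExtractor()
parser.feed(html)
print(parser.items[0]["headline"])
```

Microdata and RDFa are harder, since the metadata is spread across attributes of the visible markup rather than sitting in one JSON blob, which is presumably why extruct delegates to specialized packages for each format.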


thank you so much for the link.

I feel that if we could give users/devs/whoever an appreciation for the structured data (or lack thereof) in a page and its benefits, we might see a general improvement in usability and interoperability. One can dream.


Yeah, one thing I would like to create in my web-scale crawling activities is some structured data discovery tools. Google surfaces a few things as part of their search results, but not in bulk.

This project http://webdatacommons.org/ extracts from Common Crawl, but it's just raw data and you're on your own for visualization and discovery tools!


Thanks again - it was very easy to install and use.

I think something like this: https://www.ssllabs.com/ssltest/

but for embedded data would be interesting.


The only commits from Tim himself are some documentation and a typo fix: https://github.com/solid/solid/commits?author=timbl

The project doesn't look particularly active from a commit standpoint, though I can imagine a whole lot of non-implementation work has to be done.

Best of luck to Tim and the Solid team!


Will he officially support EME DRM in this new platform? You know, for the good of the web, or something.


Related, Ted Nelson recently posted a series of videos describing the important ideas on Xanadu, the original hypertext project: https://m.youtube.com/watch?v=hMKy52Intac#


Ted Nelson had great ideas but he totally screwed up the delivery.

He insisted on total control over ZigZag. He wanted it to exist as proprietary software, not an open protocol or anything like that. Decades passed and nothing came of his ideas and software.

Then came the web, HTML, RDF, etc., and Ted Nelson became desperate. He received a patent for the zzstructure in 2001. When a group of talented individuals approached him, he first approved their work on GZZ (GNU ZigZag), then totally screwed them over when they started to get work done.

Ted Nelson's genius is voided by his insecurities, greed, or something. Maybe when his patent expires in 2021, something will come of it.

http://www.nongnu.org/gzz/


There is almost nothing like his ideas out in the wild. The closest analogs I know of are, ironically enough, tools for software developers.

We can take this farther. There are a lot of people trying to raise the bar on development tools and at least some of us believe that people tend to do what they know. What is demonstrated for them.

Give them crappy tools and they will be satisfied to create crappy things with them. Give someone a better tool and many will rise to that level.

There are several aspects of code comprehension that I think would be conducive to his designs. And monitors are getting wide enough now that we nearly have the space to do them.


> There is almost nothing like his ideas out in the wild.

Go to Ted Nelson's Wikipedia page and mouse over the "Project Xanadu" link.

> Give them crappy tools and they will be satisfied to create crappy things with them. Give someone a better tool and many will rise to that level.

The value of Wikipedia page previews has very little to do with the quality of the technology. The quality of the technology is perfectly adequate for at least one version of what Ted Nelson is describing.

The value is determined by what those tooltips are legally allowed to link to: can they show all primary documents, which anyone on HN can trivially discover and access in digital format, or can they merely show other Wikipedia pages?

As long as it is illegal to link to primary documents whose authorship was funded by public money (like everything on Sci-Hub), it really doesn't matter what type of view or even document format you have.


That may not be entirely true; if you imagined a universe where Xanadu existed and was popular as a general tool, then you can envision that open primary documents would be preferred over closed ones, simply on the basis that you can actually consistently link to them (in the same fashion that, today, non-paywalled sites are naturally favored over paywalled ones for sharing on HN).

And if Xanadu documents were more useful (because of their interlinking and history) than close-able flat documents (or perhaps closed documents that only link to closed sources), then naturally things would slowly trend towards open documents by default, as it becomes more difficult to constrain yourself to the closed world.

The problem today might be, then, that everyone already accepts that primary documents are likely to be sealed off, and there's no significant incentive otherwise. But if a sufficiently useful document could be upgraded from a flat PDF to a historically-archiving Xanadu document...


Seems like a great idea that won't go anywhere. Consider: Solid has to fight the network effects and the marketing/lobbying budgets of all the major players including Google, Facebook, et al., all of whom have a vested financial interest in expanding their walled gardens, driving lock-in, and harvesting user data as insidiously as possible.


They're fighting a headwind, for sure, but if the argument you're making was absolutely true, we'd all be using AOL today.


I had a few reactions to this concept.

1) The idea of a set of reusable APIs around data ownership could actually be the foundation of a new internet. If it becomes impossible to concentrate data, the system will be more resistant.

2) The appeal of the Internet lies in the services that are available to users. A decentralized system doesn't appeal to capitalism; value is best captured through centralization and intermediation. Capitalist enterprises produce the bulk of services at the moment. This is a major adoption hurdle.

3) Dr. Berners-Lee appears to be a (digital) anarchist at heart, and it's a painfully beautiful vision.

4) He is partnering with capitalists to try to achieve an anarchist vision. This is a contradiction in terms. Capitalism is about competition for the working class (i.e. users), domination and centralization.

5) It sounds like what he wants is a more egalitarian and democratic system. He might want to talk to more radical political theorists (I assume he may have already in the decades he's been kicking around).

I wish Dr. Berners-Lee the best. I hope Solid gets some adoption. I've heard some people are using Mastodon. Here's hoping it's a trend. Nonetheless, if enough people start migrating, the corporations will offer them something to come back. After all, they can afford to, tech profits in established companies are obscene.


"4) He is partnering with capitalists to try to achieve an anarchist vision. This is a contradiction in terms. Capitalism is about competition for the working class (i.e. users), domination and centralization."

I disagree with you and Marx on this. Capitalism is primarily about trading. Sometimes it leads to centralisation but it can also work through innovation and effective competition for decentralisation. So Tim's efforts to set up a framework which will support the latter is not a "contradiction in terms" with capitalism.


"4) He is partnering with capitalists to try to achieve an anarchist vision. This is a contradiction in terms. Capitalism is about competition for the working class (i.e. users), domination and centralization."

Or maybe the fact that he is doing so demonstrates that your terms and assumptions are incorrect.

At the same time, the fact that other capitalists are partnering with government agencies maybe shows that the Rothbard quote doesn't reflect the full picture.

Could be that capitalism and freedom are orthogonal concerns.


“Capitalism is the fullest expression of anarchism, and anarchism is the fullest expression of capitalism. Not only are they compatible, but you can't really have one without the other. True anarchism will be capitalism, and true capitalism will be anarchism”

― Murray N. Rothbard


This is anarcho-capitalism (https://en.wikipedia.org/wiki/Anarcho-capitalism). There are other types of anarchism (https://en.wikipedia.org/wiki/Anarchism).

"Anarchism holds the state to be undesirable, unnecessary and harmful."

As private property cannot exist without violence and capitalism cannot exist without the state, some forms of anarchism are incompatible with capitalism.

I'm not an anarchist, just stating some definitions.


"Capitalist enterprises produce the bulk of services at the moment."

All. You meant all the goods AND services. Unless you count the kids living on a commune who sell berries at the farmers market.


Did you forget that public schools, hospitals, roads, fire services, police services, the military, Medicare, and libraries exist for a second there?


I am generally supportive of your position, but these are actually organized as "State Capitalism". They are not subject to internal democratic controls. Hierarchy still exists though there is nominal democratic political control.


Unless those kids are giving away those berries, I'm pretty sure they are participating in a capitalist economy too.


It's not all, though it is the astounding majority. Non-profits are not capitalist, though they can mimic capitalist enterprises in internal structure, and worker co-ops are a kind of market socialism, as the workers own the means of production.


Clickbait title.

Not surprising given the source, but I like to think Hacker News expects better than an "I Was Devastated" headline.


Clickbait title


Yeah, but didn't Al Gore invent that?


-4 for a little joke? what, am I back on fucking reddit? I figured "hackers" would appreciate what is, in essence hacker humor that any hacker or computer nerd worth his salt would get. Damn it is hard to find like-minded people on the Internets these days..used to be much easier back in the 1990s......



