This project looks like it has huge potential, I just want to be able to understand at a glance what privacy controls are in place without reading the source code before committing my personal information.
Found the specifications documentation linked to from some of the example apps. It would be great to have more of this information on the landing page for the project.
They have three sample applications, yet all you can click on is somebody's blog entry. Clicking on the "publishing" app gets you a screenshot of the abstract of someone's paper. It's a single page web site, like all the cool kids have now. Clicking on the top menu items just scrolls the page.
It has MIT and Berners-Lee behind it, so it can't be totally bogus. If those names weren't on this, I'd assume it was from someone either clueless or crooked.
There's a decent description of Solid on Github. From there, you can see the real problem. It's only useful if the big players adopt it. Which they won't, because it breaks their walled gardens.
This looks like another try at Berners-Lee's "semantic web" - hammer as much content as possible into standard formats so it can be machine processed. This is an old idea, and tends to break down once you get beyond contact lists and library catalogs.
There have been major efforts to make that work in the business sector, where it's called "electronic data interchange", and parties want to exchange purchase orders, invoices, and bills of lading. It's worth looking at that area to see how hard this is for even simple-seeming problems like that. And they have cooperation - buyer, seller, and shipper all want that data to flow smoothly between the parties. Trying to do this in today's world of competing closed web empires is much tougher.
The medical data records people have it even worse, I hear.
We wound up implementing something very similar, about the same time (I wish I had known about SOLID earlier), except using CRDTs and graphs (which can support Linked Data). Our implementation is live and functioning already, including:
- Realtime updates across a decentralized network.
- End-to-end encryption with P2P identities.
- Backs up to localStorage/disk (if you Electron-ify/etc. it)
- Can also be backed up by remote storage services.
It is as if IPFS and Firebase had a love child, and we've tried really hard to reduce the API to just a couple lines of code to get fully working P2P social networking dApps in place.
- Intro http://hackernoon.com/so-you-want-to-build-a-p2p-twitter-wit...
- 4min interactive coding tutorial https://scrimba.com/c/c2gBgt4
In our case, we have a tombstone delete method. So as long as every peer that saved your data (and people aren't gonna just save it for free) comes online at some point, after you've nulled/tombstoned the data, then it will be deleted.
This is only true because of our CRDT system that lets you update/mutate data. It is not true or a general property of other decentralized systems though.
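The tombstone-delete idea above can be sketched with a last-write-wins map CRDT. This is just an illustration of the general technique, not our actual implementation or API — all names here are made up:

```python
import time

class LWWMap:
    """Last-write-wins map CRDT with tombstone deletes (illustrative sketch)."""

    def __init__(self):
        self.entries = {}  # key -> (timestamp, value); value None = tombstone

    def put(self, key, value, ts=None):
        ts = ts if ts is not None else time.time()
        cur = self.entries.get(key)
        if cur is None or ts > cur[0]:
            self.entries[key] = (ts, value)

    def delete(self, key, ts=None):
        # A delete is just a write of a tombstone; it propagates like any update,
        # so any peer that eventually merges it drops the old value.
        self.put(key, None, ts)

    def get(self, key):
        cur = self.entries.get(key)
        return None if cur is None else cur[1]

    def merge(self, other):
        # Commutative, associative, idempotent: higher timestamp wins per key.
        for key, (ts, value) in other.entries.items():
            self.put(key, value, ts)
```

The key point is that the delete is itself a replicated update, which is why it only takes effect on peers that come online and merge after the tombstone was written.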
I was about to say "but there's no information" but then you added the Github links. Thanks!
I don't see why that's not referenced on the main page. I mean it might be a marketing tool to get funding, but it still needs to have links to same prototypes for the technical audience (even a little github cat logo to the repo would have been good).
Even a non-tech audience would understand a link to "see the spec" and knows what Github is, even if they wouldn't be able to understand any of what's there.
I'm sure when they come back to this they'll change the SHA1 to SHA256 (I'd hope) .. might just do that myself and submit a pull request.
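For anyone curious, the swap itself is usually a one-line change in most languages; e.g. in Python (the input bytes here are just a placeholder, not the project's actual key material):

```python
import hashlib

# Placeholder input: whatever bytes the project currently feeds to SHA-1
# would simply be fed to SHA-256 instead.
fingerprint = hashlib.sha256(b"public-key-bytes").hexdigest()
```

The harder part of such a PR is migrating anything that already stores or compares the old SHA-1 digests.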
What's wrong with RSA? DSA has been deprecated in most tools. Do you think they should use ECDSA?
I assume this is a continuation of - or somehow related to - the semantic web project that W3C spent a lot of time spinning its wheels on back in the early '00s. Back in the day, I bought into the hype that this would be the next big thing, but it never gained traction. Nobody understood it. It was too meta.
Trying to do anything with semantic web specifications was like writing an academic treatise on the philosophy of meaning, and ultimately delivered no more value to users than a hacked up <table> layout.
However, metadata is really useful in some contexts. Say you have a huge collection of scientific data from a particle accelerator, astronomy database, satellite imagery or sensors.
How do you set that up for search?
How do you make it worthwhile for academics to release data like this and get credit as they do for writing a paper?
How do you have provenance for derived data?
How do you set up a unique identifier so the data can be referenced and found as required?
You have data about the data. You have metadata. If you're smart you standardize it and bingo. You have a use for metadata.
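The "standardize it" step can be as simple as agreeing on a minimal record shape. A toy sketch (field names are illustrative, loosely Dublin-Core-ish; the identifiers are made up):

```python
# A standardized metadata record for one dataset. Agreeing on these field
# names across a community is what makes search, credit, and provenance work.
dataset_record = {
    "identifier": "doi:10.0000/example.dataset.1",      # hypothetical DOI
    "title": "Sensor readings, run 42",
    "creator": ["A. Researcher"],
    "derived_from": ["doi:10.0000/example.dataset.0"],  # provenance chain
    "keywords": ["particle accelerator", "calibration"],
}

def searchable_text(record):
    # Once the fields are standardized, indexing the whole collection is trivial.
    return " ".join([record["title"]] + record["keywords"])
```

The unique identifier answers "how do I cite it", `derived_from` answers provenance, and the flattened text answers search.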
The information model is quite flexible, and not tied to any specific platform... but it's not for the faint of heart.
I don't think the semantic web is hard to understand. The semantic web is a distributed database that everyone can contribute to and access. It allows you to write queries to explore the data on the web, rather than having to browse through it.
For those that are new to the idea of the semantic web, DBpedia is a decent example:
The BBC website also makes use of semantic web technologies:
How so? Can existing tools achieve the goals that SOLID is trying to deliver?
Distributed is hard, but it's not impossible.
The whole Internet has become so centralized that it's about due for something to come along that will blow it up and move power back to the edges of the network. That's what happened when mini-computers shattered centralized mainframes, and then when PCs shattered centralized minis. The Internet leveraged all of those PCs to move even farther out to the edge for a time, and now it's back to a centralized system again.
You almost sound like the voice of the status quo tech investment establishment. I'm not saying you are, but back in the late 80's we were repeatedly turned down for funding using almost the same language that you're using.
And I agree, from that point of view whatever the next BIG thing that comes along and shatters the existing system will look at first like a bad investment whose ROI is unclear.
I don't know if SOLID is the next big thing or not. From what I've seen from the spec I don't think it's likely. But these ideas which have been around for 20 years are starting to gain traction among the smartest people in the industry.
As Kevin Kelly said, something can be inevitable, but no one knows the form it will take. Distributed is one of those things. We don't know what it will look like when it does take off, but most of the pieces are in place and waiting for the right implementation at the right time to catch fire and move power back to the edge of the network again. ROI will soon follow as all of the latecomers pile on with the sole goal of making money and of stopping others from making money from it. And so the cycle toward centralization will begin again.
This pattern is fundamentally flawed. X needs to be useful on its own, no further assumptions made. Otherwise it will never get traction, regardless how much we wish B to come true.
We know from the mainly failed P2P wave that just wishing for decentralization is not enough.
They refer to "applications built using the Solid stack" at one point, which implies that it's an application platform. But elsewhere they talk about how it's "a proposed set of conventions and tools", which implies that it's an interchange data format.
What am I missing?
Blockstack essentially lets you pick for example Dropbox as the datastore and an app would save files in a directory on your Dropbox instead of a server controlled by the app developer.
Solid seems to talk about also keeping track of your social connections. So I suppose it also tracks people's profile data and public keys locally?
this should answer all of these questions
Apparently it's a drop-in replacement for Google Wave.
Unvoting this one. Not going to help them achieve their mission.
'Web developer' as in 'developer of the web' :-)
You could write some scripts to publish your personal block to ZeroNet, and add some "also available on Zeronet" links to the regular HTTP version.
It takes little steps and it has to start with us in the tech community. Even if our distributed tools don't grow, at least we can say we tried.
Currently when faced with regulation they can go with a straight face and say that it's technically not possible to let users own their own data. Google has to control the data it uses so as to operate its business, and privacy is just an unfortunate casualty of that. Projects like this (if successful) demonstrate that you can actually have your cake and eat it too, and will (at least potentially) give regulators a lot more leeway to force the hand of companies.
It's a far cry from things like Solid's vision of a fully decentralized web of linked data, but at least it's something.
That might sound a little abstract, so I will give an example: a year ago I built a PWA which had the ability to use a WebDAV server as a backend to store data and sync multiple clients. While some might argue that WebDAV is not the perfect protocol, most of its disadvantages can be worked around. The real issue is that the user has to enter their credentials into every app that wants to store data for them.
In my opinion, that is something browsers could make a lot easier and just let the user grant/refuse access to the 'cloud-storage'. That cloud storage would need a standardized protocol (e.g. WebDAV) to manage its access, but that way browsers could offer web apps a large storage and the user could select its own storage provider (in the browser settings).
Most users would probably stay with Google Drive or Dropbox, but at least their data would not be stuck in the walled gardens of the various web app providers and some might even choose to store their data in a Nextcloud (at least that is what I do with my PWA).
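For a sense of how little protocol machinery this takes, here is roughly what a WebDAV write looks like from an app's point of view. The server URL, path, and credentials are placeholders — the whole point of the comment above is that the browser, not the app, should be the one holding them:

```python
import base64
import urllib.request

def webdav_put(base_url, username, password, path, body: bytes):
    """Build a WebDAV PUT request (HTTP PUT with Basic auth).
    All connection details are hypothetical placeholders."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        url=f"{base_url}/{path}",
        data=body,
        method="PUT",
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/octet-stream",
        },
    )

# A caller would then send it with urllib.request.urlopen(req).
```

If browsers brokered this instead, the app would just ask for "cloud storage" and never see the Authorization header at all.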
Img tags should have an href and an ipfs option. The browser can then choose to pull from either source, depending on user preferences or which one is faster. Users can be prompted if they want to store/serve that data in their own ipfs cache.
I think the FF59 plans for IPFS just make it easier to interface with your existing IPFS server (if it's running). Making it first class would be a game changer, but don't expect to see it in Chromium or Webkit any time soon.
So what I don't understand is how it would make app developers give you the control over your data?
Regarding your img-tag remarks: what should href and ipfs attributes do? I mean, href is an attribute used for linking, not for loading data like the src attribute, and adding attributes for different protocols isn't a good idea either, as for that use case we have URLs. So what's wrong with:
<img src="images/cat.jpg" some-new-attr="ipfs://mdgpreefl215jfeiwef2456/cat.jpg">
The img tag example isn't that great of an example I'll admit. But it could eventually be extended to permanence of the entire page/site. Although there are already better tools suited for that, like ZeroNet.
Like, we already have XML DTDs and schema.org and whatnot. Everything appears to be in place to build everything that I can see this doing, and has been for decades. But it hasn't turned into anything because the problem isn't how to store the data, it's how to use it, and allowing everyone to manipulate all data seems to repeatedly be demonstrated as a failure.
Is it really just "let websites manipulate my data, stored on my machines"? What would possibly incentivize websites to do so, rather than pull in data and subtly break it for others (intentionally or by accident)? Users are absolutely not going to understand why site X broke site Y, only that site Y is broken.
Or am I missing something fundamental? Entirely possible, I can't figure much out at all.
There are some parts of the project that I don't like or maybe simply don't understand. For example, its user stories include an utterly simplistic privacy system.
What if Ian starts spamming everyone on the entire web (let's call this 'root node') with his "you've got a file from Ian" notices? Some kind of rate limiting system is required in this case, but is it really possible to decentralize such a system?
I imagine a system of many communities that can be subscribed to, think of subreddits, with their own behavioral rules (code of conduct?) requirements, groups, permissions, blacklists etc.
So if Ian and Jane both are subscribed to the same community that grants them the permission to do the described actions (thus Ian is not banned nor is over his rate limit to send his notice to Jane, and Jane's privacy settings permit people like Ian to send their notices to her), they can be performed. I'd call that a 'third party node'.
Such a system would also solve the problem of discoverability. I expect the rise of the githubs and gitlabs of Solid if this problem is not accounted for early on.
Let's say these two users already got to know each other and they want to decouple from the restrictions of the third party node where they met. How can they pair their 'profile nodes' so that there are no more third party constraints to limit their interactions? Let's say the profile nodes include a social, facebook-y functionality in them.
Ian sends a direct pairing request to Jane. She accepts, by including him to a personal custom group named 'new friends' that will restrict him to be able to see only a few of her photos (maybe based on the tags that were used on the photos, maybe based on creation timestamp ranges so he is able to see only her most recent/ probably less embarrassing ones, who knows, she's the one to decide).
On her personal node, it's her rules. Much better control than the current social media sites provide.
This calls for a really privilege-centered system. Can Solid provide it?
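The kind of privilege model described above (groups on the owner's node, tag-based visibility) is not exotic; a bare-bones sketch, with every name and rule invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Photo:
    url: str
    tags: set  # e.g. {"public"}, {"embarrassing"}

@dataclass
class Group:
    name: str
    members: set        # who Jane placed in this group
    allowed_tags: set   # members may only see photos carrying one of these tags

def visible_photos(photos, groups, viewer):
    """Photos the viewer may see under the owner's own group rules."""
    allowed = set()
    for g in groups:
        if viewer in g.members:
            allowed |= g.allowed_tags
    return [p for p in photos if p.tags & allowed]
```

The point is that the policy lives entirely on Jane's node: adding Ian to 'new friends' is just a membership edit, and no third party node is consulted. Whether Solid's ACL machinery can express rules this fine-grained is exactly the open question.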
That didn't really happen, because our approach was different at the end of the day. We wanted to raise VC money and get a lot of user adoption, and they were focusing more on promoting RDF, SPARQL, ontologies and so on:
However, I did meet a lot of cool people at the W3C and now some of them are our advisors!
PS: If you watch that youtube video, let me know what you think. Is it a clear explanation? Do you feel there is a need for this? It's all available online already btw.
Bigger and more influential apps will always roll you into their corner. It's what every single vendor would do. It's what you would do.
Plus, as a user, I don't like analytic data being collected. As a software provider, analytics drive nearly every decision.
I like the spirit of what Solid is trying to do, but as long as targeted advertising works better than randomizing every ad you see (it always will), there's zero reason for any profit-seeking entity to choose it.
The gist of it would be:
If I write something as a tweet, wordpress would be the first to pick it up and possibly manage it. WP would be more of a CMS for your data. Write a tweet, it can be shown on your blog, and possibly on twitter and/or facebook. This would work for other content types. The value of twitter and fb is the network effect and curation.
A simple example is that I would want any food delivery app, or web-connected restaurant to know my dietary data instantly, but not really much else. Any new-media site I connect to I want to instantly know what kind of content I find boring or distracting, but not really anything else.
I mean, IPFS is designed to replace http, which is TBL's baby, but lately he is really into web decentralization, and IPFS is farther along on that than any other technology, from what I understand. Maybe http was a kluge and he would be happy to have it replaced. Maybe Solid could run on top of IPFS, or is that not possible?
And web decentralization is not going to have any real impact unless there is a standard set of technologies that can be the basis for all the main use-cases, and that the average, computer-illiterate user can adopt by clicking a link or two, or better yet comes built into their browser. IPFS is much further along here than anyone else, so it seems like the Solid people should be looking at integrating with it, if that is technologically possible.
Nobody really cares cause the email is free.
The bright side.. Greed usually tends to overreach at some point, after which may come more public outcry.
Not in the same vein as Google having my internet-based data, but the US govt very clearly and publicly has many such extensive registries that are widely known and relied-on parts of the country's basic infrastructure.
AIUI (disclaimer: haven't spent a huge amount of time studying it), Solid focuses on data and storage, not on compute. It proposes that each user's data should be stored in that user's own storage, under their control, in formats that are standardized so that they are compatible across multiple applications. The applications themselves, though, still execute as they do today, probably on servers owned by the developer. Applications interact with each other by virtue of supporting common formats. A big part of Solid is defining these standardized data formats.
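A toy way to see the "shared data format" idea: two independently written apps operating on the same user-owned profile document. Plain JSON here for brevity — real Solid profiles are RDF/Turtle using vocabularies like FOAF — and all names and URLs below are made up:

```python
import json

# A user-controlled profile document in an agreed-upon shape,
# stored wherever the user chooses.
profile_doc = json.dumps({
    "name": "Jane",
    "knows": ["https://ian.example/profile#me"],
    "inbox": "https://jane.example/inbox/",
})

def contacts_app_friend_list(doc):
    # App A, a contacts app, renders the friend list from the shared format.
    return json.loads(doc)["knows"]

def mail_app_inbox(doc):
    # App B, written by someone else entirely, discovers where to
    # deliver notifications from the very same document.
    return json.loads(doc)["inbox"]
```

The apps never talk to each other; they interoperate purely because they agree on the document's shape, which is why so much of Solid is format specification.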
Sandstorm is focused on compute: it says that applications running on your behalf should run on your own server, in isolated sandboxes. An application's storage format is a private implementation detail, and applications never directly access each other's storage. Instead, applications communicate via standardized protocols.
Abstractly, you can think of Sandstorm as object-oriented, in that it combines data and compute into an "object" that implements an "interface".
In my opinion (as the architect of Sandstorm), protocols are the correct place to find interoperability; data is the wrong place. The data format inherently defines the feature set which can be implemented on top of it, and thus forcing apps to use a standard data format tends to prevent them from implementing new features. It is much easier to create protocol compatibility because a complex app with a larger feature set can implement compatibility shims for other protocols using code.
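A minimal illustration of the "compatibility shim" point, with every class and method name invented for the example: a feature-rich app can expose a simpler protocol through an adapter, flattening whatever the simple protocol can't express.

```python
class RichTaskApp:
    """App with a larger feature set (tags, subtasks) and its own private storage."""

    def __init__(self):
        self.tasks = []

    def add(self, title, tags=(), subtasks=()):
        self.tasks.append({"title": title,
                           "tags": list(tags),
                           "subtasks": list(subtasks)})

class SimpleTodoShim:
    """Protocol shim: exposes the rich app through a minimal 'todo list' protocol.
    Features the simple protocol can't express are flattened away in code."""

    def __init__(self, app):
        self.app = app

    def list_items(self):
        return [t["title"] for t in self.app.tasks]

    def add_item(self, title):
        self.app.add(title)  # tags/subtasks simply default to empty
```

Had the two apps been forced to share one on-disk format instead, the richer feature set would have had nowhere to live.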
Also, of course, Sandstorm's sandboxing model has huge security benefits. Solid doesn't seem to provide any security benefits since the apps still run remotely on the developers' servers where they could do anything they want with your data.
Calling this project “Solid” reeks of “unsinkable ship.”
Does a site that sells widgets really have nothing in common with a site that sells gizmos? Surely that's not right.
So the question is: do they share most of an application, or a protocol, or data? I think data is the best answer here. It requires more coordination, but the ultimate benefits are much greater, both to users and to new businesses trying to innovate (less friction to bring in new customers).
PII is the most circulated currency on the internet and anything that stands between companies and their cashflow (privacy/data ownership projects like this) seems like it would be a nonstarter.
That is, if your current PII suggests that you are a fan of (Brand X) right now, it is more valuable than if it shows you were a fan of (Brand X) 5, 10, 20 years ago. There will still be correlations that can be drawn from historical data, and some data's value doesn't decay as a function of time (DOB, for example), but it would still be an incremental improvement over the current system.
As technologists, I don't think throwing up our hands and saying "It's too [late|difficult|expensive] to solve this problem!" is the right solution in most cases.
Such a campaign against unsanctioned data would be a likely consequence of the success of solid or a similar protocol.
For example, if "MeWe" were to incorporate this I think they would grow faster and at some point put enough pressure on Facebook to do the same.
It seems like a scam.
This sounds interesting, I will check back. I would love to own my own social media profile, where I get to install it on any server or service I wish to, and have full control over it. I'm guessing that's what this is.
Actually, any interchangeability is impossible. Apps are different, thus they have different data schemas. If they had the same schema they would be the same app, just with a different color.
About reservations, I don't see why would you want to move your data. Reservations are tied to companies in a way no standard can solve. And by design.
Maybe company A has a history of all your travel reservations with them, and company B has a history of all your travel reservations with them. Then company C offers a service where they collect travel reservations from various services to create a business expense report. You, the supposed owner of the data, would like to authorize C to import your data from A and B.
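Company C's job in that scenario is mostly an aggregation over exports it was authorized to fetch. A sketch, with all field names and values invented for illustration:

```python
def merge_reservations(authorized_exports):
    """Combine reservation exports from multiple providers into one history.
    Each export is a list of records; a shared schema is what makes this possible."""
    combined = [r for export in authorized_exports for r in export]
    combined.sort(key=lambda r: r["date"])
    return combined

def expense_report(reservations):
    # The derived product C actually sells: a summary across providers.
    return {"trips": len(reservations),
            "total": sum(r["cost"] for r in reservations)}
```

Note that nothing here is hard technically — the barrier is that A and B must agree to export in a common shape, which is precisely what they have no incentive to do today.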
There are probably many other examples of something that came unexpectedly from nowhere and suddenly there were copies.
Why do I have to search for information?
Why is it that all those shitty, shiny-looking new homepages are the same? NO CONTENT.
Ok, I'm done. Not interested anymore.
But it's easy for devs so none of it matters.
It's likely a pretty common thought process with open source internet devs.
That’s why you don’t roll your own authentication.
>> The project aims to radically change the way Web applications work today, resulting in true data ownership as well as improved privacy.
What is the product? How does it achieve its means?
it's not entirely false though, he has expressed regret in the past about how it worked out. the web isn't really a great idea but it caught on anyway. arguably it's been the cause of new scams, hacks and certain politicians getting into office.
i do remember alan kay not being too happy about the whole situation too
What were the features of your, and/or others' web browsers, that distinguished them from TBL's work?
Mind: there were several other alternatives out there, with Viola being amongst the more interesting -- it aimed at becoming a suite of web-related tools and capabilities, based on what I've seen.