We're obviously a long way off from colonizing space and needing the Internet to spread, but we still have the physics problems here on Earth.
I'm not convinced that centralization in its current iteration (cloud operators controlling huge infrastructures) is the best long-run outcome. As we saw with the recent Azure outage in South Central US, even huge infrastructures have problems too.
Secure decentralization has seemed like a panacea for a long time - for all things that resemble a public utility. Even things like the power grid.
You might be interested in checking out the InterPlanetary File System, which attempts to tackle this among other issues.
I can't find it now but I remember the doc mentioning the need for a future space network to be decentralized, so there is that too.
So, the idea behind IPFS and others (SSB comes to mind, except, yacht-themed) is that it's largely a collection of offline networks, and when the planets align -- quite literally -- those networks will exchange all their new blocks.
It's a neat concept.
Additionally you can layer in forward error-correction above TCP to reduce packet loss due to the physical medium.
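For a flavour of what that looks like, here's a minimal sketch of parity-based forward error correction in Python - a single XOR parity packet per group of data packets, which lets the receiver rebuild any one lost packet without a round trip. (Real FEC codes like Reed-Solomon tolerate multiple losses; this is just the core idea.)

```python
def xor_bytes(a, b):
    # byte-wise XOR of two equal-length packets
    return bytes(x ^ y for x, y in zip(a, b))

def parity_packet(packets):
    """XOR all k data packets (equal length) into one parity packet."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = xor_bytes(parity, p)
    return parity

def recover_lost(received, parity):
    """Rebuild the single missing packet (marked None) by XORing the
    surviving packets with the parity packet."""
    lost = parity
    for p in received:
        if p is not None:
            lost = xor_bytes(lost, p)
    return lost
```

On a high-latency link this matters because retransmission costs you a full round trip; spending a little extra bandwidth up front is usually the better trade.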
As the parents suggest, even at Earth-Moon distances, we need to completely rethink things.
We’ll end up with the same kind of TCP-based system, except tuned quite differently, as time scales are not invariant for us humans.
Even then, you will be playing on a specially-modified version of the game that disables server-side anticheat systems, instead relying on human referees.
I think this is a reference to Secure Scuttlebutt: http://scuttlebutt.nz/
But I think it's not particularly accurate, in that, while latency would be extreme, that doesn't necessarily translate to bandwidth - and bandwidth constraints shaped Usenet and especially FidoNet as much as latency.
I think a more likely primary mode of operation would be WWW-like, but only your local part is actually real-time; everything else is synced in bulk as and when possible, with some creative approaches to update conflicts for writable resources.
The clock is not used to say that the time is exactly the same on all nodes, it is used to guarantee that if two events have timestamps whose difference is larger than some threshold they can be ordered reliably. You don't need an atomic clock to do it, for instance CockroachDB only requires NTP, but of course, the smaller the error margin, the faster the system is.
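The threshold idea can be sketched in a few lines (the 250 ms bound below is an arbitrary illustrative number, not any real system's default):

```python
MAX_CLOCK_ERROR = 0.250  # seconds; an assumed bound on each node's clock error

def definitely_before(ts_a, ts_b, max_error=MAX_CLOCK_ERROR):
    """True only if a's timestamp precedes b's by more than the combined
    uncertainty of both clocks, so every node agrees on the order.
    Events closer together than that cannot be ordered by timestamp alone."""
    return ts_b - ts_a > 2 * max_error
```

The smaller the error bound, the more pairs of events can be ordered this way - which is exactly why a tighter clock makes the system faster.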
That being said, given that speed will be limited by the distance traveled by information and the speed of light, I suppose those systems won't have much edge over purely causality-based ones. In other words, CRDTs will rule Space :)
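A grow-only counter is about the simplest CRDT, and it shows why they suit high-latency links; a toy sketch:

```python
class GCounter:
    """Grow-only counter CRDT: one slot per node, merge = element-wise max."""
    def __init__(self):
        self.counts = {}  # node_id -> that node's local count

    def increment(self, node_id, n=1):
        self.counts[node_id] = self.counts.get(node_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # commutative, associative, idempotent: replicas can sync in any
        # order, after any delay, and still converge
        for node, c in other.counts.items():
            if c > self.counts.get(node, 0):
                self.counts[node] = c
```

Because merges commute and are idempotent, replicas that have been out of contact for minutes (or months) converge as soon as they exchange state - no coordination round trips needed.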
However, a total ordering of events seems plausible only from the relative perspective of a single observer, and we would need to figure out how things like transfer duration affect each observer's understanding of ordering.
So you would just have to agree that one observer's clock is the "master clock", and then everyone translates their local clock time into the corresponding master clock time (and all timestamps are written with respect to the 'time zone' of the master clock).
That makes for good science fiction, because Quantum Mechanics is so poorly understood by most people, but it’s in no way possible or implied by the theory. Any entangled channel of communication would appear to be random noise without a Classical channel of communication, which effectively limits entanglement to light speed.
The first answer gives a nice explanation in depth. https://physics.stackexchange.com/questions/203831/ftl-commu...
The key point is this:
Alice therefore still measures two overlapping bell curves, overall! Where are the interference patterns?! That is very simple: when Bob and Alice compare their measurements in the first case, Bob's 0-measurement can be used to "filter" Alice's patterns...
That comparison is what requires the Classical channel, and we’re back to light speed. If you try to use a Quantum channel to compare you just have two things to compare and a lot of noise.
* The interface is dead simple - share this folder, done.
* It is a read-write browser. Netscape (and other browsers) used to be this way - they had some limited HTML creation tools. Beaker brings this back in the form of making an "editable copy" of a website. It's a choice in the address bar.
* Making an "editable copy" doesn't have to mean you're now editing raw HTML. An editable copy can direct how it is edited through JS. (See the recently released "dead-lite" for an example of this.)
All these attempts are exciting but I'm actually starting to use Beaker because it's so useful even without adoption.
(Also, I'm not even sure how you could p2p private user data, unless you expect everyone to carry around one or more YubiKeys, or implant chips into fingers or something; plus all devices need to buy into that. But I haven't given that much thought.)
* You can generate domains freely using pubkeys and without coordinating with other devices, therefore enabling the browser to generate new sites at-will and to fork existing sites
* Integrity checks & signatures within the protocol which enables multiple untrusted peers to 'host'. This also means the protocol scales horizontally to meet demand.
* Versioned URLs
* Protocol methods to read site listings and the revision history
* Offline writes which sync to the network asynchronously
* Standard Web APIs for reading, writing, and watching the files on Websites from the browser. This means the dat network can be used as the primary data store for apps. It's a networked data store, so you can build multi-user applications with dat and client-side JS alone.
I'm probably forgetting some. You do still need devices which provide uptime, but they can be abstracted into the background and effectively act as dumb/thin CDNs. And, if you don't want to do that, it is still possible to use your home device as the primary host, which isn't very easy with HTTP.
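The "multiple untrusted peers can host" property above comes down to content addressing plus verification: you ask for a hash, and you can check whatever any peer hands back. A toy sketch of just that integrity check (real dat addressing is richer - pubkeys, Merkle trees, signatures - this only shows the core idea):

```python
import hashlib

def content_address(content: bytes) -> str:
    """Toy content address: the hash of the bytes themselves."""
    return hashlib.sha256(content).hexdigest()

def fetch(address: str, bytes_from_peer: bytes) -> bytes:
    """Accept data from any untrusted peer, as long as it hashes to the
    address we asked for."""
    if hashlib.sha256(bytes_from_peer).hexdigest() != address:
        raise ValueError("peer returned tampered data")
    return bytes_from_peer
```

Since the address itself proves integrity, it doesn't matter who serves the bytes - which is what lets the protocol scale out horizontally over dumb peers.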
The first concern I had/have is about security. If everybody runs their own server, isn't this a security nightmare waiting to happen?
I understand from the presentation that these websites won't run PHP or other server-side scripts, which at least takes some concern away.
Tara also showed how easy it was to copy a website; while pretty cool, that is also a nightmare scenario for most companies. If your competitors can clone your website and pretend to be you, how do users know whose data they are looking at?
> You can generate domains freely using pubkeys and without coordinating with other devices, therefore enabling the browser to generate new sites at-will and to fork existing sites.
Not entirely sure what you mean,
- We can generate HTTP sites at will (all you need is an IP address);
- We have existing protocols for mirroring sites (not implemented universally, but nor is dat://);
- When you talk about pubkeys with coordination, there are obvious problems like the last paragraph of my original comment, right? Again, I'm probably misinterpreting what you're saying.
> Integrity checks & signatures within the protocol which enables multiple untrusted peers to 'host'.
Basically subresource integrity? Granted, with this protocol you can in theory retrieve objects from any peers (provided that they actually want to cache/pin your objects), not just the ones behind a revproxy/load balancer, so that's a potential win from decentralization.
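For comparison, a subresource-integrity value of the kind browsers check in `<script integrity="...">` attributes is just a base64-encoded hash; a quick sketch of computing one:

```python
import base64
import hashlib

def sri_value(data: bytes, algo: str = "sha384") -> str:
    """Compute a subresource-integrity string in the '<algo>-<base64 digest>'
    format used by the HTML integrity attribute."""
    digest = hashlib.new(algo, data).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"
```

The difference from dat-style addressing is where the trust anchor sits: SRI hashes live in a page you still fetched from one origin, whereas a content-addressed protocol makes the hash the name itself.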
> Versioned URLs
We can have that over HTTP, but usually it's not economical to host old stuff. In this case, someone still needs to pin the old stuff, no? I can see that client side snapshots could be more standardized, but we do have WARC with HTTP.
(EDIT: on second thought, it's much easier to implement on the "server"-side too.)
> Protocol methods to read site listings and the revision history
> Standard Web APIs for reading, writing, and watching the files on Websites from the browser.
You can build that on top of HTTP too.
My takeaway is it's simply a higher-level protocol than HTTP, so it's unfair to compare it to HTTP. Are there potential benefits from being decentralized? Yes. But most of what you listed comes from being designed as a higher-level protocol.
That's not really so easy from a consumer device with a dynamic IP.
> - When you talk about pubkeys with coordination, there are obvious problems like the last paragraph of my original comment, right? Again, I'm probably misinterpreting what you're saying.
You do need to manage keys and pair devices, yeah.
> My takeaway is it's simply a higher-level protocol than HTTP, so it's unfair to compare it to HTTP. Are there potential benefits from being decentralized? Yes. But most of what you listed comes from being designed as a higher-level protocol.
The broader concept of Beaker is to improve on the Web, and we do that by making it possible to author sites without having to set up or manage a server.
Decentralization is a second-order effect. Any apps that use dat for the user profile & data will be storing & publishing that data via the user's device. Those apps will also be able to move some/all of their business logic clientside, because they're just using Web APIs to read & write. Add to that the forkability of sites, and you can see why this can be decentralizing: it moves more of the Web app stack into the client-side where hopefully it'll be easier for users to control.
I see, I was looking at it backwards.
As an example, you talk about needing a special device to manage keys, which presents problems. It centralises your identity to your YubiKey (instead of email): lose your YubiKey and you lose your identity; if it gets wet, crushed, or corrupted, you're fucked. Instead, we encrypt the key and distribute it across the net; if a copy is deleted or corrupted, there are other copies, and it's available to you anywhere, anytime. Currently your identity is centralised to your email: if your email goes down you lose your identity, whereas if it's distributed and a copy goes down you just carry on like normal.
Distribution solves pretty much all the problems centralisation creates; it's just really complicated, so we generally don't bother.
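For what it's worth, "distribute the key" can be made concrete with an m-of-n secret-sharing scheme: split the key into n shares so that any k of them reconstruct it, and no smaller set learns anything. A toy Shamir sketch over a prime field (illustrative only; needs Python 3.8+ for the modular inverse):

```python
import random

P = 2**127 - 1  # a Mersenne prime, large enough to hold a 16-byte key

def split(secret, k, n):
    """Split `secret` into n shares; any k of them can reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange-interpolate the polynomial at x=0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Losing a few shares (a dead copy, a corrupted host) costs nothing as long as k survive - which is exactly the availability argument above, though the usual counterpoint stands: whoever reassembles the shares still holds a single master secret at that moment.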
Of course it's not a higher-level version of HTTP; I never said that. I said higher level than HTTP. HTTP is just a stateless request-response protocol, so of course dat is higher level, and as I said, much of the benefits described can be built on top of HTTP (and have been, just not standardized or not widespread).
> it all scales poorly, it can all be improved with distribution
Pretty sure it all does NOT scale poorly, as has been proven over the past thirty years. What's being solved here is not a problem of scale. "It can all be improved with distribution" is very hand wavy and doesn't really say anything. DNS and many other protocols are already distributed, btw.
> Instead we encrypt the key and distribute it across the net, ..., if its distributed and a copy goes down you just use it like normal.
There are two kinds of crypto, symmetric key and public key. Symmetric key is easily out of the window. For public key crypto, you always need a secret key and that has to be prior knowledge, not something negotiated on the fly, and of course prior knowledge has to be kept somewhere and presumably synced if you need it elsewhere, and it definitely can be lost. "Distributed secret keys solving everything" sounds like nonsense to me; there's always a secret key that is the starting point (call it the master key, if that makes more sense) and can't be distributed.
Btw, I skimmed through Beaker docs, and it seems they resolve names through DNS (what else can they do) and even use HTTP for discovery.
Naming is a consensus problem. The key here is having the freedom of choice between trusted providers. The central source could be provided by a single cryptographic key, by many keys, m-of-n schemes or other arbitrary contracts, even in P2P form.
I'm really interested in what kind of user interfaces the Beaker people come up with when it comes to their "editable cloned websites" (forks).
Most popular websites? Unlikely. Even HN isn't static enough to be pinned, considering there is a new comment about once a minute or so.
The real cost is scale: $20 a year will cover a few thousand users, but if you want Google's scale it will cost you in bandwidth and complexity. P2P like torrents radically reduces the cost of bandwidth by distributing it, but more importantly it reduces complexity by standardising it.
Once the complexity is standardised, budget web hosting can provide Google scale for dirt cheap, and there are millions of budget hosting companies - too many to shut them all down - giving you censorship resistance.
The original vision for the web was that editing/creation had the same status as viewing/consumption, and that websites were writable as well as readable. This is what Amaya implemented. It never gained wide adoption, but it served as a reference implementation of the W3C's vision of the web. (In my experience Amaya is not particularly usable because it regularly crashes, but that could be fixed.)
Is Beaker similar to Amaya extended to use transport layers beyond http, such as ipfs?
Wiki markup is different from HTML markup, but it represents many of the same (early) text-formatting and resource-linking concepts, while limiting the excessively powerful features of arbitrary layout, scripting, etc.
How is access control implemented?
It seems like this basically only applies to web content you want to give everyone access to and can have 100% of application logic run client-side.
That's a pretty narrow cross-section of the existing web...
Access control in Beaker is through that private key - you need it in order to edit the 'dat' (name for a synced folder). So, no, there aren't a lot of complex permissions available - but you can also separate an app into several dats and use a master one to manage the permissions of those. Not terribly complex, but it's actually surprising how much you can do. (It's tough to wrap your head around not having a server - but it's actually true.)
But help me out - I think a lot of the Web falls into this category:
* User logs in to edit their data (has private key to their dat).
* User shares their data (blog, photo feed, whatever) with others (who don't have the key).
* Those others merge all incoming feeds into a single master feed.
You could replicate YouTube, Facebook, Twitter this way - usually there are not complex permissions in these apps, are there? (Not that you'd want to replicate them...)
Maybe Twitter is too specialized an example. What about any kind of search? You do need an index, and someone still has to own that index, and “donate” computing power to update that index. You own your self-hosted data, like many of us already do, but there will still be gatekeepers, e.g. Google for our current web.
EDIT: I realized that with a clever enough architecture and probably much more computing power than necessary in a trusted environment, no one needs to own the index. But it seems way more advanced than this protocol. (I’m completely new to this stuff so please excuse my naive skepticism.)
I'm also not sure what you (or newnewpdro) are looking for in the web or what appeals to you. For me, Google simply doesn't work - sure, for technical issues it does, but it is basically Stack Overflow search in that department. If I'm looking for personal blogs, I can't just type "personal blog" into Google and find anything worthwhile - it's all clickbait of a fashion.

The best way I've found of finding blogs is either to look through Pinboard tags or to click around on other blogs until I eventually get somewhere. It's horribly inefficient - but it's rewarding when I get there. I'm making a personally-edited blog directory to try to aid discovery - and yeah, I actually think there's a lot we can do if we all did more grassroots search and directories.

Anyway, that's my perspective - wondering what you're looking for in this thread. Have enjoyed your other questions above (below?)
You're referring to write access, which is a small subset of access control.
How do you restrict read access to a group of specific people? Encrypt the data and distribute keys to the privileged parties? How does revocation work?
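To make the question concrete: the usual pattern is key wrapping - encrypt the content once under a content key, then wrap that key separately for each privileged reader. A deliberately toy sketch (one-time-pad XOR as a stand-in for real crypto) that also shows why revocation is the hard part:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    # one-time-pad XOR; toy stand-in for a real cipher / key-wrap scheme
    return bytes(x ^ y for x, y in zip(a, b))

class SharedDoc:
    def __init__(self, plaintext: bytes):
        self._key = os.urandom(len(plaintext))       # content key
        self.ciphertext = xor(plaintext, self._key)  # what peers replicate
        self.wrapped = {}  # reader_id -> content key wrapped under reader key

    def grant(self, reader_id: str, reader_key: bytes):
        self.wrapped[reader_id] = xor(self._key, reader_key)

    def read(self, reader_id: str, reader_key: bytes) -> bytes:
        key = xor(self.wrapped[reader_id], reader_key)
        return xor(self.ciphertext, key)

    def revoke(self, reader_id: str):
        # only blocks access to *future* key material; anything the reader
        # already downloaded and decrypted cannot be clawed back, which is
        # exactly why revocation is hard in a p2p setting
        del self.wrapped[reader_id]
```

Real revocation would also require re-encrypting under a fresh content key and re-wrapping for the remaining readers - and even then, old copies in the wild stay readable.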
Look up Tara Vancil's talk on "A Web Without Servers" if you need a crystal clear explanation - don't know if I'm doing adequately. And my blog is at kickscondor.com if you're curious why I had to write my own blog warez. Also, there is a resurgence in blogging happening right now with the shakedown of social media. It's great.
1. A (current?) limitation I've noticed with Beaker is that you can only edit a site implementation (for a specific address) on a single machine. What would you do if you had multiple computers / locations that wanted to make updates (JS or user content)?
2. What about errors? Don't they persist in the address's history? What if there's something undesirable that got added by accident?
3. What about mobile? How could someone visit/browse on mobile without a non-distributed proxy HTTP address?
1. So this answer is a bit convoluted because I am still learning - wish I could keep it short. My setup might be a bit 'naughty' because I'm currently saving the 'key' in my JS in a separate dat. Right now I have one dat that acts as the 'admin' and one that is the actual blog - but I am going to move to hyperdb (the new solution for multi-writer support.) There are actually a couple of libraries cropping up for doing this sort of thing and I'm not up on all of them. So this is more of a 'need to make up my mind' thing than a capability thing with Beaker. There are a TON of libraries and a TON of possibilities - (there's an 'awesome dat' page that just goes on and on and on...) - but I am still researching a better way (and, who knows, maybe my way is fine.) I want something that could be in place ten years from now - because I do think JS and HTML will be.
2. No, files can get replaced. Not sure if that is your question. Yes the undesirable will persist in the history, but you can overwrite it. I can prevent bad content, though, by checking it in my JS code and allowing a preview first.
3. Yeah this is a problem - there is a Bunsen Browser in development, but I haven't tried it. I am honestly okay with everyone browsing through HTTPS, though - I like Beaker for the admin tool. Again, I have a browser-based blog software without needing to run a server anywhere at all.
Really cool point about extending Twine! That had never occurred to me. Amazing.
(As an aside, I originally didn't like using Webmentions on a static site - I had planned on making a cron to periodically check for Webmentions and republish. But now I really appreciate that it goes hand-in-hand with moderating comments. I look over the incoming Webmentions, nuke any spam, and republish. No bad feelings about comments that sat in the queue for a day - they are still out on the web at their original URL.)
Practically, as pointed out in the article, there are laws to comply with, and those are IMHO the biggest obstacle to decentralization.
The fair middle ground between extreme centralization à la Facebook/Twitter and total network anarchy is something based on federation, like email and Mastodon. With federation, there are several providers for the same end-user application, with native data exchange and interoperability. The idea is to give anyone with hosting capabilities the power to compete with the Giants, even if only a few domains will actually survive (like Gmail, Hotmail, etc., probably, because of network effects and funds).
What we need is a framework, or a backbone, that allows people to easily create new federation-native apps ("dapps") without thinking about consensus issues or protocol versioning, and with legal compliance built in.
Unfortunately, the world decided to go the centralised way. At some point I had to rework my outstanding papers, because any mention of peer-to-peer or even decentralisation meant immediate rejection. Internet service providers got greedier, so if you don't build your own global backbone to have some leverage, you need to pay someone who does, or you're hosed. Even the laws in place are starting to strongly reflect an expectation of an overpowered centralised platform beneath any communication.
Then, finally, what we ultimately need is to figure out the money flow. People want polished products and that costs money. The centralised platforms we have today have succeeded because they figured some funding. Achieving that in a decentralised world is the main problem we should be looking at. I'm afraid "just slap blockchain on it" is a highly detrimental approach, but I haven't seen anything more serious (not that I looked seriously).
Disclaimer: I'm in Google now, but this comment actually reflects my personal post-INRIA sentiments.
I agree that this is probably the way forward.
The only downside is how your identity is tied to the service provider you choose.
It was a PITA when Lavabit went down and I lost that email address.
Fully agree with this. The link to identity is not often brought up. I run a university lab focussed on re-decentralisation of the Internet as a day job. We focus on identity & p2p + trust.
Beaker browser is impressive early work focussed on the raw bit transport. It re-uses DNS for global discovery; it's hard to do everything decentralised at once. How do you do global search on a decentralised Twitter, or spam control?
The hard issue we need to solve in the coming decade is the governance of such systems. Ideally it would rule itself. Definition of a self-governance system: a distributed system in which autonomous individuals can collectively exercise all of the necessary functions of power without intervention from any authority which they cannot themselves alter.
Federation is what we have now and it has a tendency toward centralization, as we see with the WWW and the mega sites where users aggregate.
This is really needed. There are endless great tools for centralized apps that make it trivial. I could build a usable forum website in rails in a day. I have no idea how to do that in a decentralized and secure way.
The problem then moved onto being one of _curation_. Companies such as google, facebook, amazon are in the business of providing curation: i.e. taking away the leg-work of what we should attend to.
A de-centralized web doesn't appear to decentralize the problem of curation at all, which means we are going to still end up with centralized curation and the same or similar monopolies on attention that we have now.
...feels, to me, like a huge mistake.
How would one eliminate hate speech and toxic content from it? Or illegal content? Or anything you put there and need removed to keep living your life freely? The technologists developing this tech hand-wave these concerns away citing "freedom of speech" -- but one's freedom ends where another's begins, and hate speech, toxic content, illegal content, not being able to have what you said or did forgotten online, all these things curtail someone's freedom.
And by making it decentralised, they're just making it harder for people who are the victims of these problems to hold the people responsible accountable and to stop them. These technologists want freedom of speech at the expense of everyone else's freedom.
You simply make your own choices and don't follow/subscribe/view all that illegal, toxic, hateful content. You know, the same way you do today by not visiting all those illegal, toxic, hateful websites. They still exist, though, for those who don't share your views on policing content for other people.
The women whose boyfriends posted private sex pictures as revenge, or the minorities who will be the victims of hate groups organizing on social media, the children who were filmed while being raped and have their video circulating online, the victims of bullying whose bullies are empowered by other people seeing it and not doing anything to stop them...
You can choose to ignore this when you see it, but the victims can't, and it's for their freedom that I'm concerned for.
Just because it can't be dealt with by threatening the odd CEO or two, doesn't mean that it won't be dealt with some other way. Now, it may be the case that governments react to a change like this and just accept that they can't control terrorism, child porn etc etc. But it would be astonishingly naive to assume that that's the most likely outcome.
What is considered toxic for you might not be considered toxic by another person. That's personal choice. If you need to eliminate some sort of content, that inevitably will lead to this https://www.reuters.com/article/us-china-internet/china-laun...
Current web tech is inherently centralizing. Say you want to create an experience like Instagram or Twitter, delivered via HTTP. You have to pay for bandwidth, CDNs, storage, app servers, DB servers, etc etc. At scale, it's millions a month. So only corporations can do it, and with a few exceptions (eg Craigslist, Stack Exchange) they end up monetizing and "growth hacking" in user hostile ways.
The big open question is: can we create an experience as compelling as Instagram or Twitter over the P2P web?
It's a hard technical challenge, and today the answer is no. But if we get there, then internet mass media can be delivered via open source projects over open protocols, with a bunch of competing clients to chose from. No central organization controls and monetizes the thing.
Like BitTorrent, but for applications more complex and interactive than just file sharing.
If you're interested, here are imo the most compelling projects in this space:
- Patchwork / Secure Scuttlebutt
They are working on overlapping subsets of the same fundamental challenges, eg:
- How does a node choose what to download? The BitTorrent answer is "only things the user explicitly asked for". The blockchain answer is "the entire global dataset since the start of time". For something like a decentralized Twitter, both of those are unsatisfactory, you need something in between.
- How do you log in? Current systems either have no persistent identity at all (eg BitTorrent) or they just generate a local keypair, and it's your job to back it up and never lose it (eg SSB, Dat, all blockchain protocols). Both are unacceptable for wide-audience social media. People lose their devices, get new devices, forget their passwords, etc all the time. They expect and rely on password reset, etc.
So there's a lot of hard tech and UX problems left unsolved, but also a lot of recent projects making solid progress
You can make some nice proof-of-concepts with a group of volunteers, but the effort required to provide a UX comparable to centralized services is going to take more than a handful of people working evenings and weekends.
Decentralized services generally do not afford the same monetization opportunities as central services. Decentralized proponents consider this a feature rather than a bug, but it leaves open the question: Who is going to pay for all of this?
Facebook had $20.4 billion in operating expenses in 2017. Less than 1/3 of that was the cost of its 25,000 employees (at the end of 2017). Facebook is spending more on its infrastructure than it is on all of its employees combined (and that much more when you reduce it to just engineers). Engineers are maybe 1/5 of its operating costs, including their all-in costs.
Both Facebook and Alphabet had roughly $15 billion in total capex for 2017. Data centers, networks, electricity, et al. cost a lot at that scale. It's not a pittance. Facebook spent ~$7 billion in 2017 on capital expenditures related to their network, data centers, etc.
Facebook's first Asia data center is a billion dollars to just start up. When they put up new data centers in places like Henrico County VA, New Albany OH, or Newton County GA, it's similarly nearly a billion dollars a shot to start those up. Once you have dozens of those operating, it's billions of dollars per year to operate them all.
Two engineers can't do twice as much as one engineer. Perfecting the ordering of the news feed is significantly less valuable to users than just having a news feed in the first place. Building a speech-to-text engine that works 99% of the time costs hundreds of millions of dollars more than one that works 95% of the time, but is it worth that much to users? Think of the number of engineers at Facebook or Twitter who just work on infrastructure, or supporting other engineers, or perfecting ad placement to improve CTR by 0.5%. All of these are tangential to the core experience, in many cases required or at least valuable only because Facebook is so big.
I can't just pick on Facebook here; this is why all companies will always get disrupted. Massive layers of scale behind the scenes to support products that are fundamentally simple, combined with advancing publicly available technologies helping newcomers.
I think your point on UX/UI is an important one, though. Open source has a turbulent history with functional UX. We’ve done an incredible job helping the technical communities understand why open source is important, but because so many of us are technically focused, we’ve fallen somewhat short on helping UX- and design-focused communities understand why open projects are important, on a deep level, in much the same way the technically focused do.
If we’re aiming for mass adoption across the spectrum, onboarding the UX/UI communities is as important as it was for the technically focused to understand.
Also, there is another point worth considering, someone recently made a convincing argument to me that sometimes mass adoption may not be a good thing. Mass adoption leads to eternal September and depending on what the project is, eternal September may destroy a community. A project with a solid technical foundation but difficult UX/UI experience can be a good barrier to prevent eternal September.
> It's a hard technical challenge, and today the answer is no.
This is why I completely dismiss almost every "distributed" solution. If you can show me a business model/design document for a distributed service that can scale to big tech levels, deliver a user experience that matches current solutions, while also incentivizing developers enough with money to get them to build it, I will be swayed. However, every solution I have seen makes massive tradeoffs that negatively affect all 3 criteria compared to current centralized solutions.
If you need another example, google a tool called "git". It is used by almost all software developers nowadays and even by quite a few authors. You don't need to set up anything to use it. If you have ssh you can just share your text-based data with others by giving them access to your repository's directory. I bet it transfers more MB/day than Twitter. But nobody counts it because it's so distributed that it's hard to put an owner label on it. (Although the originators can be named quite clearly.)
And sure, you can use git by itself and send the diffs to each other over email. However, then you have to deal with coordinating where the head is which is a pain across every node since it is constantly changing. Thus, most people don't do it that way and instead use a centralized service like github.
Why not send it via ssh? It's much easier. And if the data is not too big you don't send diffs but complete file states. Git usually calculates the diffs on the fly by comparing two file states.
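Right - git addresses whole file states by hash and derives diffs on demand. The object id git assigns to a file's contents is easy to reproduce:

```python
import hashlib

def git_blob_id(data: bytes) -> str:
    """The object id git assigns to a file's contents: a SHA-1 over a
    'blob <size>\\0' header plus the full snapshot. Git stores whole file
    states like this and computes diffs on the fly when you ask for them."""
    return hashlib.sha1(b"blob %d\x00" % len(data) + data).hexdigest()
```

Because identical content always hashes to the same id, pushing over ssh only has to transfer objects the other side doesn't already have.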
Decentralized serving has just as much bandwidth, storage, and iron (if not more). Does it somehow make those resources cheaper?
For example, in 2016 a friend of mine and I made an electron app called WebTorrent Desktop. It has over a million downloads. The total bandwidth transfer so far is probably a lot--wild guess, maybe a few million dollars worth?
But it is free and open source and costs roughly $0 to run--just enough to keep the website up. That's the magic of decentralization--you're simply writing software. You're not running a service.
Consider the total monthly internet bill of all Twitter users combined. Extremely rough guess: 500 mil monthly actives * $40/mo for Comcast or something = $20b per month.
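Spelling out that back-of-envelope guess (all numbers here are the comment's rough guesses, not real figures):

```python
# Back-of-envelope from the comment: rough guesses, not real numbers
monthly_actives = 500_000_000
bill_per_user = 40          # $/month, "Comcast or something"
total = monthly_actives * bill_per_user
print(f"${total // 10**9}b per month")  # prints "$20b per month"
```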
The crowd has more than enough bandwidth, disk space, and CPU cycles to run services at any scale. That's another magical aspect of dapps: the total resources available to the system automatically scale with the number of users. It's up to us to figure out how to harness it.
That's the point.
Disclosure: I'm working in this field.
Featuring Mathias Buus and Paul Frazee from the Beaker project.
This is not only a technology problem, it's (mostly, I'd say) a social one. Humans will always want more power and control, whether it's in real life or online.
Every single type of governance has fallen victim to human greed and ambition, as will any kind of Internet, I believe.
Fix the users - save the Internet! :)
In A Thousand Plateaus, Deleuze and Guattari talk about the opposition between the state apparatus and the "war machine" (their term for a nomadic/decentralized structure). They talk about how it seems like nomadic societies are primitive, but actually a lot of nomadic societies have "collective mechanisms of inhibition" to ward off the formation of a state apparatus, by preventing power from accumulating within any one party and "evening it out" among everyone.
The applicability of D&G's ideas on the war machine to our current problem of platform power is immediately apparent. A centralized platform is exactly like a state apparatus. In our situation the collective mechanisms of inhibition might be something like stronger/more proactive antitrust laws to break up/nationalize entities that become infrastructural components of the society.
But as you've mentioned, I think this problem of "uneven development" is a feature of any marketplace-like structure. In sufficiently large numbers, a power law tends to assert itself with no other checks on power. This is why blockchains by themselves won't solve the problem. The debate, then, shifts to be about whether this is a feature or a bug, which is something that I'm never sure about.
To close, another quote from ATP comes to mind ("smooth space" is another term they use for nomadic spaces):
> Smooth spaces are not in themselves liberatory. But the struggle is changed or displaced in them, and life reconstitutes its stakes, confronts new obstacles, invents new paces, switches adversaries. Never believe that a smooth space will suffice to save us.
That said, as from my previous comment, I'm not totally confident that this kind of decentralization is even optimal, but that's a story for another time.
Which is ARIN/RIPE/APNIC/AFRINIC/LACNIC and the DNS root zones, and ICANN on top.
Not to say that's only bad, just trying to illustrate that in this case, D&G's point is actually pretty tangible.
We've had all kinds of redundant network topologies that used independent networks for decades. The internet is decentralized, and it works pretty well, all things considered. The web is fairly decentralized, too: DNS is independent of a registrar is independent of a network service provider and all are independent of ISP's, and even those are independent of backbones.
The only thing that isn't very decentralized is the client-server IP/TCP/HTTP model. You can provide decentralized versions of HTTP services, but those are the things that are the most costly and inconvenient to decentralize. It can be done, but it's a huge pain with very little benefit.
A distributed network would depend even more on the ISPs, given their self-serving nature.
Perhaps the 'decentralized' web should also address the very foundation of the network - the network infrastructure and access to it.
Does Internet need to depend on the ISPs?
You could see this play out every time any party has tried to take full control of Bitcoin. So far, everyone has failed.
It's all cyclical.
And who will pay for it? From experience, Xanadu seems like the only solution. The reason it's in development hell is that the problem it's trying to solve is so hard.
In the past people used the simple "mail" command to read email, but now they choose Gmail or others because the UI/UX is better (or for any other reason they like it more). The federated SMTP protocol is hidden.
Blockchains do not have a monopoly on decentralization. People who assert this are trying to redefine the term to mean some kind of extreme P2P model that fits their narrative.
Almost all our traffic goes through Google, Amazon and Facebook. It's extremely centralised.
Blockchains don't claim to have a monopoly - they're just the most recent thing that's repopularised decentralisation.
If Amazon servers go down, so does a significant portion of the internet. That's centralisation at work!
Blockchain tech doesn't claim to have a monopoly on the term 'decentralisation', it's just re-popularised the technology.
So is bitcoin. Only 3 or 4 companies own the majority of mining.
Companies on the internet are centralized, but not the internet itself.
Still some issues with centralisation (since consensus is achieved through vote delegates), but that's much easier to fix than redistributing hashpower.
This seems wrong to me.
It boils down to: what’s the best way to provide services that I want?
I’m working on a project to provide a decentralized marketplace for software and infrastructure services, competing with AWS and Azure. The marketplace itself is blockchain-based: partially decentralized, but with a permissioned blockchain that still allows governance, legal compliance, removal of bad actors, KYC compliance, etc. The kind of things that customers (corporations) need for them to use the marketplace.
I think we need to be pragmatic about it and figure out where technologies like blockchains can help build better services, instead of trying to cram decentralized systems into everything whether it makes sense or not.
Many of the problems alluded to in this article, in particular the privacy risk of centralized data, are more effectively solved by policy changes and iterated technology (differential privacy as well as bread-and-butter cryptography) rather than furious hand-waving about blockchain protocols.
The fundamental technologies were designed with decentralization in mind
Mastodon is just peered IRC all over again
The ISPs have poopooed running shared services from home connections.
DNS and the core protocols can run in decentralized ways no problem
It’s the social order that doesn’t enable it
Disclosure: I founded Namebase which is a registrar for Handshake
Tor simply hides where your web requests originate from - it's up to you to visit HTTPS sites and encrypt your communications.
Also, Tor is quite decentralised, but the existence of directory authorities undermines this, since they present a centralised component.
At a high level:
- The client workstation must not be allowed to send any packets to anything other than the SOCKS port running on the Tor host.
- The workstation must have a static ARP entry for its gateway, and should run a RAM-disk Linux distro that never persists anything to unencrypted disk.
- The Tor host must not allow anything inbound other than the Tor SOCKS port, and must only speak outbound on ports 80 and 443 (formerly known as the fascist firewall setup).
- Ideally, the Tor node runs on a cheap VPS, paid for with a burner card and accessed via a VPN so that Tor traffic from the home ISP is not evident. The VPS host should be cycled from time to time.
This is of course a lot of setup work, but most of it can be automated.
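A sketch of the firewall side of that setup. The addresses (gateway 192.168.1.1, Tor host 192.168.1.2, SOCKS on 9050) and the MAC are hypothetical placeholders; this is firewall config to adapt, not paste blindly:

```shell
# Workstation: drop everything except traffic to the Tor host's SOCKS port
iptables -P OUTPUT DROP
iptables -A OUTPUT -d 192.168.1.2 -p tcp --dport 9050 -j ACCEPT

# Workstation: pin the gateway with a static ARP entry (MAC is hypothetical)
arp -s 192.168.1.1 aa:bb:cc:dd:ee:ff

# Tor host: nothing new inbound except the SOCKS port
iptables -P INPUT DROP
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 9050 -j ACCEPT

# Tor host: only speak outbound on 80 and 443; pair this with
# "FascistFirewall 1" in torrc so Tor only picks relays on those ports
iptables -P OUTPUT DROP
iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
```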
[Edit] Speak of the devil. Here is a zero-day published on the Tor browser 
 - https://www.zdnet.com/article/exploit-vendor-drops-tor-brows...
And most enterprise software barely needs more than a rack of current-gen servers (and almost no individual user needs even that).
So yeah, decentralization will be upon us soon enough.
My router stays online 24/7. It already has a web server built in. I could hack it to make it serve a public website.
But there’s absolutely no way I’m going to do that. The security and maintenance requirements are just too much of a PITA.
It’s much easier, more secure and more reliable (and likely cheaper once you figure in depreciation and opportunity costs) to set up and maintain an instance in the cloud, or a serverless site.
And if you don’t like the big cloud providers, there are many smaller outfits that can do the basics - compute and object storage are all you really need for a small site.
Consumer hardware and software are not really well suited to running publicly facing websites.
That's why you need one of the sharing protocols, like IPFS, that make security everyone's responsibility, not just yours.
I don't get why you think the cloud solution is so much better. Glorified CDNs are a clumsy intermediate solution until internet connections get fast enough for everyone that running a sharing node has negligible impact. E.g. no cloud provider can compete with Popcorn Time on speed, despite billions of dollars of effort.
Everyone's responsibility = no one's responsibility.
If your data is lost by IPFS you don't have anyone to sue.
I used to run a couple of minecraft servers for my kids. Judging by how amazing their friends thought that was... I don't think there were many other people in the town doing that.
On the other hand, I still have a raspberry pi running 24/7 to this day.
If 24/7 home computers were never a thing, what did BBSes run on?
But you made a claim that "this was never the case", when clearly there was an era of home desktop computing when your desktop could act as a hobbyist server. This era gave rise to BBSes, then MUDs, then Minecraft servers (and many things in between).
Yes, but again, you have used your PC for something. This has shifted to smaller (Raspberry Pis) or dedicated servers (or "cloud" stuff), so you could run this stuff there.
The total capacity of infrastructure entities like AWS will increase by 10x at a minimum over the next decade. By comparison, your phone or laptop will modestly nudge forward. Consumers are not going to buy 10x the number of laptops, desktops and smartphones that they do today, ten years out. Most likely, those figures will barely move (the smartphone industry is already stagnating). Most of the incremental spending and investment will go into the centralized infrastructure by the giants.
Network speeds will continue to increase relatively rapidly. We can easily go from routine 50-100mbps home lines to 1gbps over the next decade. We're not going to see a 10x increase in the power of the average laptop (lucky if it doubles in ten years). It's primarily going to be useful for streaming/consuming very large amounts of data from epic scale central systems for gaming, 4K+, VR, etc. Decentralized systems owned by consumers will be far too weak to fill that role.
The AI future isn't going to be decentralized. The very expensive infrastructure that it will demand, and its need to run 24/7, will be centralized and owned by extraordinarily large corporations.
It's precisely the typical consumer's home hardware that will act as the ultimate bottleneck guaranteeing decentralized can never take off. This has always been obvious, it won't prevent the fantasy from maintaining its allure of course. That will perpetually draw headlines and hype in tech, for decades to come, with no mass adoption breakthrough.
Which, of course, will not necessarily be connected... but that's a part of decentralization and freedom. "Diamond Age" and its virtual polities come to mind.
Firefox has been supportive of the effort for some time already, working on libdweb: https://github.com/mozilla/libdweb
Mitra @ the Internet Archive, when integrating DWeb ( https://news.ycombinator.com/item?id=17685682 ) and I talked about this.
I showed him a cryptographically secure method of having passwords (that keys are not derived from) that allows for password resets (without a server).
For a high-level conceptual explanation of this approach, see our 1 minute Cartoon Cryptography animated explainer series:
This same method can be used for a Shamir Secret Sharing "recover your account based on your 3 best friends" scheme, which I believe will be the best UX for most users.
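For the curious, the core of Shamir Secret Sharing fits in a few lines. This is an illustrative sketch over a prime field (split a secret into n shares so that any k recover it), not the hardened scheme a real product would ship:

```python
import random

# A prime larger than any secret we'll share (2**127 - 1, a Mersenne prime)
PRIME = 2**127 - 1

def _eval_poly(coeffs, x):
    """Evaluate coeffs[0] + coeffs[1]*x + ... mod PRIME (Horner's method)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc

def split_secret(secret, n, k):
    """Split `secret` into n shares; any k of them can recover it."""
    # Random polynomial of degree k-1 with the secret as constant term
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

A "3 best friends" recovery would be something like `split_secret(secret, 3, 2)`, so any two friends together can restore access but no single one can.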
This is an already solved problem.
If you’re only willing to offer me the “Lector” package, I’m going to pass.
I think the UX for a completely keychain-centric auth/authz framework can be much better than what we have today with password managers. A master password plus device-entangled PINs protecting per-app/agent keys drastically reduces the possibility of getting locked out of your account AND provides for master-password reset by unlocking and re-encrypting your keychain using the local device-entangled key.
I'm in tech and I'm interested in a decentralized web but I also feel that throwing the baby out with the bathwater isn't a great idea. The article says, "The decentralised web, or DWeb, could be a chance to take control of our data back from the big tech firms." To me it sounds like we're basically saying, "ok Facebook/Google/Twitter/Instagram... you're all too big to regulate so we're going to build A WHOLE NEW INTERNET". If they're smart enough to pollute the current system, they're smart enough to pollute a new system. In fact, these corporations are so big that you'll find out eventually that they've funded quite a bit of this decentralized web.
As a parent, I would feel at least a little better seeing some bankers, Pharma bros, tech execs, etc. actually go to jail and have their lives ruined for their blatant disregard of pretty much everything. I don't want to tell my kids, "well, we're too dumb to regulate the internet so we made another one.. and that one got messed up too... herp derp"
Disclosure: I founded Namebase.io which is a registrar for Handshake
When you pick a random TLD like .io, you are not getting the reliability of a .com. .io had a few big issues last year (1/5 of DNS queries were failing, and an ex-Google employee bought ns-a1.io and was able to take over all .ios).
As more TLDs come from good- and bad-faith actors, people will flock to .com as a known, respected entity. Limiting to .com, .org, .net and country codes, and slowly introducing new TLDs, made more sense and gave time to establish trust and create brand awareness. 500 a year creates noise and forces distrust of any unusual or new TLD.
(1) IPv4 (2) Bandwidth limits
IPv6 makes NAT unnecessary. With IP scarcity gone, IP addresses might become permanent like phone numbers.
ISPs are currently making money off fixed IP addresses. Market forces would change that eventually.
Even if lawsuits didn't kill p2p networks, virus/safety concerns would have. IMO, trust is a bigger issue than discovery, hence the need for curation - centralization.
The reputation system of thepiratebay makes it my primary torrent site.
Laziness, convenience is more of a trust than a search issue.
Many uploads are viruses/adware/ransomware masquerading as movies, books, games...
This necessitates multiple downloads - it's frustrating. I remember downloading several gigs of rar and encrypted .avi movie files, only to be greeted with a message asking me to fill out a survey to get the password.
Yify - a reputable source - eliminated this concern for movies.
If decentralization works out, I believe specialized search engines will emerge.
But note, trust is the bigger issue than search or content discovery for decentralization. If not, iTunes store, app stores and other walled gardens would have long failed.
Perhaps the most decentralized part of the internet today is BitTorrent. It’s a very efficient way of sharing files and has a lot of success. One can see how BitTorrent could become the backbone of a decentralized web. However:
1 BitTorrent "naturally" favours popular files over anything else. Niche items hosted by fewer people will be slower to download => BitTorrent makes a cultural echo chamber
2 BitTorrent needs some kind of centralized search engine: it's not possible for everyone on the network to host a copy of the entire index of files on the network. The only way is to have a search engine, much like Pirate Bay. In fact one could say that Google was this in the first place.
3 decentralized social media would be much more polluted with fake accounts since no “authority” would be able to fix it.
People have been excited by decentralized web for at least 5 years. The technology has existed for at least 10 years. If it was to happen, I think it would have happened already...
AMP is not any faster to load than the actual site not on Google's CDN. The only reason it appears faster in most situations is because Google is abusing its monopoly search position to pre-load and prioritize AMP results.
AMP gives Google more control, and that's why they push it so hard. This, plus their quest for hiding and/or getting rid of URLs so they can use AOL-style keywords within their AMP walled garden, is multiple steps back - all the way to the late 90s.