Decentralize All the Things (techcrunch.com)
143 points by fidotron on Jan 10, 2015 | 64 comments



The thing is, email is already distributed. SMTP, NNTP, ftp, talk, finger, etc. -- we already have all these decentralized applications and protocols. But the adoption of dynamic IP allocation and firewalls has made it quite inconvenient to run these services. Now there's a whole other layer built to deal with these issues, and so there could very well be a pendulum swing in the direction of decentralization all over again.

My feeling is that if we could reclaim all those good old protocols (ftp, and all the other above-mentioned ones), could easily run them locally, and added a couple more generic services such as bittorrent and gnutella (for search), we'd have a nice decentralized stack to build new applications on.


I am starting to think that the killer app for a decentralised web is actually business. Think of the typical large corporation:

* Massive reluctance to use cloud services and active blocking of services like Dropbox.

* Centralised "intranet" style software is usually slow and badly designed.

* Internet connections are slow, but local network connections are usually fast.

* Desktop computers spend most of their time completely idle and will often have lots of spare disk space.

* Centralised services are very expensive to maintain and difficult to develop.

Simple P2P apps that would run on corporate networks and require no maintenance could be very powerful.


The idea that distributed apps are "no maintenance" or require less overhead than centralized apps is the big lie that will always make such proposals fall apart. If there is reluctance or genuine opposition to cloud services, maybe you should ask why and consider that they might know more than you do about the reasoning (we don't use it because Dropbox's internal security is shit), and that there is no decentralized service that is less expensive to develop or maintain than an equivalent centralized service.


I think OneNote is an example of a good decentralised system. You get a multiuser wiki that only needs a shared folder to save to. It is installed alongside MS Office and can be set up without any involvement from IT staff. It is several orders of magnitude easier than installing MediaWiki and needs no big internet connection.

It is a big lie to believe that multi-user applications always need infrastructure; they just need to be well designed. I expect that a very large OneNote system may need professional support at some point. But by the time it does, the value has already been proven without ever having to ask permission from an engineer.

You are right to be sceptical of the cloud, although perhaps for different reasons. Amazon S3 is probably more secure and resilient than the typical corporate SAN. But a random office in some small town will not have enough bandwidth to use it. To the average user, corporate-run services are exactly like the cloud: slow and not designed for them. With properly designed P2P systems, normal users could actually choose their own tech again.


OneNote is fascinating; it's a great product Microsoft have hardly marketed. Possibly because it's the antithesis of SharePoint, which is several orders of magnitude harder to install than MediaWiki, and therefore people feel committed to using it.


SharePoint is also fascinating. It has an amazing set of features that make total sense from an engineering and features point of view. Yet in my experience it is slow, buggy, and most of the features are never used properly.

It has proper versioning and locking that should make team work much simpler. But the philosophy is never really explained and people are just told to interact with features and let the magic solve their problems.

Yet people have been doing versioning and locking for donkey's years without ever needing any help: you just make a new file with V2 at the end. We just need software to provide a little consistency and surface conflicts. SharePoint obscures this behind feature bloat and infrastructure.


Would you consider Skype (in the original supernode topology) to be a centralized or decentralized service?


Clearly centralized, since Microsoft bought it and started making changes.


Or Spotify, in the beginning


We used to run a distributed computing client on our office PCs to aid in protein folding. The program would sit in the background and monitor CPU usage, then spring into action once the CPU had been idle for some time and use all but one CPU core to process one of the work packages available on a central server. To fold proteins, you'd just add your request to the queue, and some time later you'd get the results, calculated on various PCs from customer support to the labs. Good times.
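
Roughly, that kind of idle-time worker can be sketched like this (a minimal Python sketch, not the actual program we ran; the queue URL and payload format are made up, and psutil/requests are assumed to be available):

    # Sketch of an idle-time worker: poll CPU usage, and once the box has
    # been idle long enough, fetch a work package from a central queue and
    # process it on all but one core so the machine stays responsive.
    import time
    import multiprocessing
    import psutil
    import requests

    QUEUE_URL = "http://folding.internal/work"   # hypothetical central queue
    IDLE_THRESHOLD = 10.0                         # percent CPU considered "idle"
    IDLE_SECONDS = 300                            # how long the box must stay idle

    def fold(work_package):
        ...  # the actual protein-folding computation would go here

    def run_when_idle():
        idle_since = None
        while True:
            if psutil.cpu_percent(interval=5) < IDLE_THRESHOLD:
                idle_since = idle_since or time.time()
                if time.time() - idle_since >= IDLE_SECONDS:
                    work = requests.get(QUEUE_URL).json()
                    with multiprocessing.Pool(max(1, multiprocessing.cpu_count() - 1)) as pool:
                        results = pool.map(fold, work["packages"])
                    requests.post(QUEUE_URL, json={"results": results})
                    idle_since = None
            else:
                idle_since = None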


Uhhh, either I don't get what they are trying to say or the article doesn't really say anything at all?

They give the example that a Bitcoin-like blockchain isn't the right solution for distributed email? Well, no shit. A few paragraphs before that they even say "blockchain is a distributed database"; well, you don't want a distributed database to deliver messages to individuals.

And yes, torrent doesn't need a database everyone agrees upon to distribute Linux ISOs, obviously.

And uhh, yes, different protocols for different use cases are needed? What does that have to do with making more things work the way bittorrent does?

Bitcoin distributes its database "like bittorrent does", and on top of that there's machinery to make everyone agree on what goes into that database. For a torrent, you give people a hash of the file they'll have in the end, because the file doesn't change. Otherwise it's just as peer-to-peer: distributing the blockchain should be just as fast as a torrent; agreeing on what goes into it isn't.

Here's a proposed protocol for distributed mail: https://bitmessage.org/wiki/Main_Page

Did I miss the message they were trying to convey?


> Did I miss the message they were trying to convey?

Nope, you had it in your first sentence: the article doesn't really say anything at all. I've recently started considering TC's coverage "tech-adjacent": a loosely formed idea that their target audience is people who work in or with tech-heavy fields, but would never be called on to design or implement systems themselves. TC can talk tech-adjacent stuff to techies, or tech to the tech-adjacent, but talking tech to techies elicits an uncanny-valley-ish reaction.

FWIW, I think what the article inadvertently speaks to is that the apprehensiveness to strongly centralized systems I've watched grow for at least a decade is starting to resonate with even more people.


You'll have to forgive me for tangenting this a bit, but your and my sister posters' mulling prompted some mulling of my own.

While I can certainly say I've gained the apprehensiveness you speak of, I realize I've gained it for systems at scale, centralized or otherwise. In fact, for badly engineered distributed systems, where you previously had one problem (getting modules to communicate) you now have N (getting modules to communicate over potentially unreliable channels, which may themselves have additional hidden points of failure and varying failure modes/symptoms, etc.). I'm being a bit high-level and hand-wavy with this, but it seems to get the point across (I hope).

My takeaway being: I'm curious whether people have addressed this potential conundrum (and whether you even see what I stated as a problem at all). Data showing that systems are more or less reliable due to certain failure modes under different architectures would be nice, but that seems like a MASSIVE undertaking if you have any desire for the data to be meaningful, if only because it may be testing the competency of the team rather than the actual underlying relative difficulty. (Although if done broadly enough maybe that would still provide useful guidance? I'm rambling at this point, but some thoughts.)


Agree. I liked the concept of the article if not the title. From there, I was quickly disappointed. Mainly what happens to me is I look for evidence to support (or refute) an idea I'm mulling. This piece fell short in that I couldn't easily refer others to the article and expect them to come away with any increased understanding.

If anyone knows of a good treatment of centralized/distributed at the service or team levels, please do share.


> Did I miss the message they were trying to convey?

No, they didn't do a good job of explaining it. 'We' basically failed at building easy-to-use decentralized 'services' and those who don't think it's a big deal to do so built a bunch of centralized services that were easy-to-use and also made their investors rich. Users don't understand all the ramifications of using a centralized service, so they ended up using what was easier, instead of what was better.

Further, the reasons to move away from a centralized service are difficult for most people to comprehend, so there's not a way for clear intent to emerge, other than a bunch of us rattling the cage now and again trying to get others to pay attention.


> They give the example that a Bitcoin-like blockchain isn't the right solution for distributed email? Well, no shit. A few paragraphs before that they even say "blockchain is a distributed database"; well, you don't want a distributed database to deliver messages to individuals.

I think what they were going for was that the blockchain would act as a central point of organization for a distributed network of peers that are performing the actual task of sending messages.

You're right that the article led to no conclusion though. I felt like I hadn't consumed any content. But then I kind of expected that from an article like this on Techcrunch.


I think you misunderstand how Bitmessage works. It doesn't use a blockchain. It's like alt.anonymous.messages, but with a Bitcoin-like broadcast protocol

http://www.quora.com/What-is-Bitmessage-and-how-does-it-work


That's what I was trying to say. Different protocols for different things. You don't want a blockchain for bitmessage.

I know how bitmessage works, sorry if that wasn't clear from my post.


Back in the day, there was Gilmour, who would publish an article about Twitter every week on TC. "Twitter this", "Twitter that", "Twitter is the answer to any possible problem out there". Now it's blockchains. That's TC's MO, nothing new.


The article is only attempting to get the news out that there's an emerging movement to decentralize, and that the underlying technologies aren't baked. If you knew that already, you probably didn't get a ton out of the article.


Bitmessage is abhorrent for sending offline messages; the rebroadcast system does not scale well.

You need a good DHT solution like I2P Bote to get a decent, decentralised offline messaging system.


I have been playing with an idea to decentralize search (or, more generally, information processing).

It would consist of:

- a publicly available hierarchical type system (similar to how classes look in an OOP language with multiple inheritance). For example, URL is a type, or a set of search terms is a type.

- any number of nodes on the net. Each of them would provide a mapping from a set of input data to a set of output data (data is defined in terms of the types). For example, Google would provide a mapping from a set of search terms to URLs, but that's a very generic node.

- the nodes also advertise the cost of one mapping operation. The cost is given in some kind of currency (?).

- there should be routers that can find a route from a set of input types to a set of output types. The route is composed of nodes, obviously, that pass information between them, and finally the output is the stuff the user wanted.

- the user pays the sum of the costs needed for his query (transformation)

- the user could pay nothing if some nodes have negative cost. There could be a node that takes a set of URLs (search results) and produces a set of URLs that are basically the original URLs plus some advertisements. Therefore, if somebody specifies that he wants a transformation from types X to types Z (and provides data of types X) under cost C, then the routers should find a set of nodes under those constraints.

- nodes could provide mappings that are the same as other nodes', and the routers would slowly insert them into mapping chains and rate them based on some user feedback (this is essentially a genetic algorithm)

This system could very much help foster competition in the distributed information processing industry. Nowadays nobody can really compete with Google or Bing, because you need to build _everything_ in one step. A distributed infrastructure like this, based on the evolution of components, would lower the barriers to entry dramatically. Also, data would be shared across participants, not owned only by the one single market leader.

Ok, this is just a very naive and early idea, and it's also too big to implement in my spare time at the moment, but I think it'd be an interesting way to apply the good old Unix principles to building internet services.
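
To make the routing part a bit more concrete, here is a very rough Python sketch; the node names, costs and the naive breadth-first search are purely illustrative, not a worked-out design:

    # Nodes advertise typed mappings and a cost; a router searches for a
    # chain of nodes turning the input types into the desired output types
    # within a cost budget.  Negative-cost nodes (e.g. ad injectors) can
    # subsidize a route.
    from collections import namedtuple, deque

    Node = namedtuple("Node", ["name", "inputs", "outputs", "cost"])

    NODES = [
        Node("search-engine", {"SearchTerms"}, {"URLSet"}, 3),
        Node("ad-injector",   {"URLSet"},      {"URLSet"}, -1),
        Node("summarizer",    {"URLSet"},      {"Summary"}, 2),
    ]

    def find_route(have, want, max_cost):
        """Return (route, cost) turning the types in `have` into `want`."""
        queue = deque([(frozenset(have), [], 0)])
        while queue:
            types, route, cost = queue.popleft()
            if want <= types and cost <= max_cost:
                return route, cost
            for node in NODES:
                if node.inputs <= types and node not in route:
                    queue.append((types | node.outputs, route + [node], cost + node.cost))
        return None

    print(find_route({"SearchTerms"}, {"Summary"}, max_cost=10))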


How would you guarantee the trustworthiness of a node? How do you guarantee that a node is actually doing what it is supposed to and not inserting malware into its results? Reputation systems can be gamed (and will be if there's money at stake).

I like the notion of trustworthy, general-purpose, publicly distributed computing: http://unenumerated.blogspot.com/2014/12/the-dawn-of-trustwo...


I don't know, maybe you cannot guarantee it -- similarly, you cannot guarantee the trustworthiness of Google, or Verisign, etc. They have a good reputation in general (leaving aside the recent NSA stuff), therefore you trust them when you use them. Current systems can be broken as well (from DNS poisoning to hacking the actual service, etc.). The idea is not to fix all the problems at once -- I'm mostly concerned with a) distributing responsibility, data and therefore power across more parties, and b) lowering barriers to entry into complex information processing and, as a result, increasing innovation. A bit like a cathedral vs. bazaar approach.

Again, I don't consider myself a computer wizard who could implement every aspect of this stuff; I am mostly interested in the algo that selects/evolves processing routes based on queries. But then there's the security, payment, service discovery, etc.


> How would you guarantee the trustworthiness of a node?

With math.

First you need a common consensus on what the publicly searchable corpus is at a given point in time. Computing a hash over the entire set may be impractical, so you partition the crawlspace. Dynamic content makes it tricky to validate distributed crawling, which motivates things like http://named-data.net

A blockchain might be useful as the public ledger for retrospective diffs when asking "What is all the currently publicly available data right now?"
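
As a rough illustration of what "partition the crawlspace and hash it" could mean, in Python (the bucket count and digest scheme are arbitrary choices, not a real protocol):

    # Peers agree on a corpus snapshot by hashing per-bucket digests into a
    # single root, so a disagreement can be narrowed down to one bucket
    # instead of re-hashing everything.
    import hashlib

    def bucket_of(url, n_buckets=1024):
        return int(hashlib.sha256(url.encode()).hexdigest(), 16) % n_buckets

    def corpus_root(pages, n_buckets=1024):
        """pages: dict mapping url -> page content (bytes)."""
        buckets = [hashlib.sha256() for _ in range(n_buckets)]
        for url in sorted(pages):
            buckets[bucket_of(url, n_buckets)].update(url.encode() + pages[url])
        root = hashlib.sha256()
        for b in buckets:
            root.update(b.digest())
        return root.hexdigest()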


Very interesting. How about creating a github project for this? Would like to follow.

Blockchain-based DNS systems already exist. Web search mostly builds on links and DNS. If one thinks about it, TLD names are pretty much fixed, whereas in Namecoin they are subject to consensus. What one can do, and that's one example out of many, is create unique hashes for any content, to create indestructible information.
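
The content-hashing half is simple enough to sketch in a few lines of Python; the naming-by-consensus half (Namecoin and friends) is the hard part:

    # Minimal content addressing: the name of a piece of data is the hash of
    # the data itself, so anyone holding a copy can prove it is intact.
    import hashlib

    def content_id(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    store = {}
    cid = content_id(b"some immutable document")
    store[cid] = b"some immutable document"
    assert content_id(store[cid]) == cid   # any peer can verify its copy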


Maybe in the near future I'll create some kind of skeleton, but I have to write this down properly, not just in a HN comment.

Blockchain is not something that I'd use for this kind of stuff, as it trades computational power for trust. As it came up somewhere else: the whole Bitcoin accounting could be run on a Raspberry Pi if we trusted the guy who owns the RPi -- but people who want to use a blockchain don't trust anybody that much. Current DNS systems are based on trust and they work reasonably well, and I think that while the blockchain paradigm is fashionable and fascinating, it does not need to be applied to every single kind of problem in distributed computing.


With Namecoin anyone can design new naming protocols based on consensus. That's not possible with ICANN's DNS (the Internet is centralized in this regard - nobody can change the semantics of ".com"). New naming protocols are a huge deal => for searching & finding any information, for identification, and more. Namecoin did not get it right, and PoW-style blockchains are too complicated. But potentially one gets things like single sign-on, integrated micropayments, and things which are hard to imagine (smart contracts, etc.).


I believe the field of research you're referring to is called Peer-to-Peer Information Retrieval. Searching for it will bring up many published papers covering the various attempts, issues, and theories on building such a system that is good enough to be competitive with a centralized search engine.



If you need a crawler for this, let me know. :)


It already exists and it is called Ethereum.

https://www.ethereum.org/


While I assume one could, I'm not sure it would be easy to implement web search in Ethereum?

First of all, you would need a way for a contract to access webpages to get information about links. I guess the contract could pay people to give it the content of a webpage? I'm not sure how it would verify it for non-HTTPS websites without getting copies from multiple users, which would cause problems if the webpage was not static.

Second of all, the memory to store what pages link to what pages would, I think, be expensive for contract storage. Maybe if it used swarm?

I guess it would use swarm.

Still, the contract would have to have a short running time (in order to be relatively inexpensive). Maybe if instead of the contract producing the results, it would have others create the results, and then verify them in some way, and reward those that created the results?

I don't know.

I don't really know my stuff, but it seems to me that implementing a decentralized search engine with Ethereum would be fairly difficult.


I'd be interested to hear what decentralized technology projects HN readers are working on and/or most interested in -- and hopefully that a community could help with. Care to share your favorites below?


I'm helping out on a project that is a mix of decentralized and distributed. It is "a (nascent) specification for distributed, immutable, derived, addressable data. It defines a layer for addressing data, providing a foundation for downstream functions such as data caching, transport, discovery, and computation." https://github.com/tdxlabs/fondo I know there are related projects (mentioned and linked in the repo above), but this one scratches an itch for me, and I'd like to see who else is interested. Feedback of all kinds is encouraged.


Would there be any room for cooperation with camlistore?


I'd be happy to learn more and meet people from (or using) the project. There do appear to be points of overlap.


We at spatch.co are strong believers in decentralized protocols. Email is currently the "best" we have for decentralized communication and collaboration. We are trying to change that by building a modern decentralized collaboration platform.

We don't have anything open source yet but we hope to start opening up code and specs to the community soon.

Our website is unfortunately out of date; it makes Spatch look like too much of an app (what's shown is a prototype we've built to show off some use cases). We will be updating it soon, along with a jobs page.

We are well funded and hiring, based in London.


The elephant in the room that nobody seems to see is that most ISPs' upload speeds are, at best, a quarter of the download speed. I have a 50-megabit service with a 5-megabit upload. When we talk about internet freedom/net neutrality, it's always talk regarding big companies being throttled: should Netflix be able to use 30% of the nation's internet resources without paying more... but wait, consumers are already paying for those connections. It's a pricing debate. This lack of decentralization is the result of the restriction of possibility; we are stuck, for the most part, with the options that the ISPs allow, which are structured to get more money out of those who serve the internet by restricting the availability of high, or even equal, upload speeds. Even with my own home server I can't host a website that allows for any reasonable dynamic exchange.


I predict that "decentralized" will be the next big buzzword. That is all.


I'll see your bet, and raise you: "Block chain computing."


Everybody has a project they kick around in the back of their head but will probably never get around to doing.

My project is an O/S. It should be built from the ground up to be networkable and self-updating, and to handle biometrics and P2P trust. And of course heavy use of encryption. In addition, it should operate along two streams: everything should have an execute and a verify stream running simultaneously.

The reason this is my dream project is that we've reached the point where several weaknesses of the existing architecture are coming together to create problems. They need to be solved at the lowest level possible in the stack.

I agree with the author: the initial decentralized nature of the internet has been perverted by all of the major players. There is a brutal fight underway by big companies trying to build walled gardens. These companies do not have my best interests in mind.


> In addition, it should operate along two streams: everything should have an execute and a verify stream running simultaneously.

What would a "verify stream" do ... input validation, regression testing, anomaly detection, ..? Would it be verifying code or data or endpoint identity?


I've been listening to the TDD guys for a while, and I think they may be on to something that's a lot more important than just writing code. They describe TDD as "double-entry bookkeeping". That is, you enter things twice and then you always have a cross-check to make sure you prevent errors. The same theory is in action in most modern commercial airline cockpits: one person could fly the plane alone, but there's purposeful redundancy built into the system -- a cross-check. Same goes for flight control hardware: one system is never enough.

I'm sold. I want everything I own, from the metal on up, to consist of competing and cross-checking systems, ideally open source from different groups.

So to answer your question, yes. All of it. It's got to be baked in everywhere.

Sketching this out on the back of a napkin somewhere would be a hoot.
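
Something like this, maybe; a napkin-level Python sketch of the execute/verify idea, with two toy summation routines standing in for independently built components:

    # Run two independent implementations of the same operation and refuse
    # to proceed on disagreement.
    from functools import reduce

    def execute(values):
        return sum(values)

    def verify(values):
        return reduce(lambda a, b: a + b, values, 0)

    def cross_checked(values):
        primary, check = execute(values), verify(values)
        if primary != check:
            raise RuntimeError("cross-check failed: %r != %r" % (primary, check))
        return primary

    print(cross_checked([1, 2, 3]))   # 6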


I wish the article would have hit on a key distinction: there is a difference between technically decentralized and economically decentralized. The first is about network topology. The second is about who owns and benefits economically.


OP here -- yeah, that's a very good and cogent point. Hope to explore it more in the future.


I sometimes think network programming is a little hairy and makes p2p apps pretty difficult to make. I wonder what can make it a little easier, like some networked filesystem or protocol... I guess there isn't anything really relevant.

I wonder if something like bittorrent or libtorrent, which seems to be a state machine, could be integrated as a kernel module so it could offer a more diverse set of network programming tools. For example, I can easily find a CSS, JS, PHP or Ruby framework, but there's nothing like a reliable DHT library.

There's not enough demand for it though, but I wish there was.
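
Even a toy sketch shows the shape of the interface such a library would give you. This is just consistent hashing on a single ring in Python; everything that makes a real DHT hard (routing tables, replication, churn) is left out, which is exactly the part a good library should hide:

    # Keys are hashed onto a ring of node ids; put/get go to whichever node
    # owns that segment of the ring.
    import hashlib
    from bisect import bisect

    def h(value: bytes) -> int:
        return int(hashlib.sha1(value).hexdigest(), 16)

    class ToyDHT:
        def __init__(self, node_names):
            self.ring = sorted((h(n.encode()), n) for n in node_names)
            self.storage = {name: {} for name in node_names}

        def _owner(self, key: bytes) -> str:
            idx = bisect(self.ring, (h(key),)) % len(self.ring)
            return self.ring[idx][1]

        def put(self, key: bytes, value):
            self.storage[self._owner(key)][key] = value

        def get(self, key: bytes):
            return self.storage[self._owner(key)].get(key)

    dht = ToyDHT(["node-a", "node-b", "node-c"])
    dht.put(b"hello", "world")
    print(dht.get(b"hello"))   # "world"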


The only things that make p2p programming hard are firewalls.


Not only. Asymmetric bandwidth really sucks for p2p. And most "broadband" connections in the US are asymmetric.


Yes, you're right, that's the other one, but as long as home connections are consumption-centric that will likely remain the case. It's been that way since time immemorial: 300/75 was my first modem, and the only time I had symmetrical up/down was in the ISDN period.


Your point about firewalls makes me wonder if their time is past. The Sony hack was clearly an inside job, and we've always known that the majority of data breaches over history have been inside jobs. It seems we need to focus on better local security, and once we've done that, the need for firewalls disappears. And then we can open internet connections freely again, like we used to before firewalls became so popular.


I agree on decentralization of things, but for this we need tools that make it easy to decentralize, tools that we don't have yet.

Installing a tool for creating your own "facebook network", or "search engine", or "maps engine" should be as easy as installing an app in Apple MacOs. Anybody should be able to install it.

Historically only very highly trained system admins could do it.

Probably tools like docker will be used in the backend for something like this, but we need easy to use front ends, standards, and so on.


This seems possibly relevant: https://github.com/friendica/red

From the README: It consists of an open source webapp providing a complete multi-user decentralised publishing, sharing, and communications system - known as a "hub".


My PrIoT Manifesto (https://news.ycombinator.com/item?id=8820432) is just about decentralization of Internet of Things. Please share it if you agree with it.


How can you write this kind of article and completely ignore some of the best technology in the space, which solves many of the problems mentioned? E.g. BitShares with 10-second blocks, 10,000 transactions per second and more, or Ripple with 5-second blocks.


people interested in this should check out secure scuttlebutt [1]. it's a decentralized data network built around logs (think kappa architecture [2]) with ecc keypair identities, proof of authorship, and content-addressed linking

1. https://github.com/ssbc/secure-scuttlebutt 2. http://www.kappa-architecture.com/
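
the core shape, a signed, hash-linked append-only log, is small enough to sketch in python. this is not the actual ssb message format; PyNaCl is assumed for the ed25519 keys:

    # Each entry carries a content-addressed back-link to the previous entry
    # and an ed25519 signature over its body (proof of authorship).
    import json, hashlib
    from nacl.signing import SigningKey

    def append(log, identity, content):
        previous = log[-1]["id"] if log else None
        body = {
            "previous": previous,
            "author": identity.verify_key.encode().hex(),
            "sequence": len(log) + 1,
            "content": content,
        }
        payload = json.dumps(body, sort_keys=True).encode()
        body["signature"] = identity.sign(payload).signature.hex()
        body["id"] = hashlib.sha256(payload).hexdigest()
        return log + [body]

    me = SigningKey.generate()
    log = append([], me, {"type": "post", "text": "hello, mesh"})
    log = append(log, me, {"type": "post", "text": "second entry"})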


Relativity was about decentralizing physics. Capitalism was about decentralizing economic influence. Decentralize all the things and then they endure!


I think the article resonates more with the goals of freenetproject.org, but for some reason has this fixation with the blockchain.


I've been hoping we return to decentralization for a while http://timepedia.blogspot.com/2008/05/decentralizing-web.htm... but it seems the economics, bandwidth, and technology currently don't favor it.

Consider what it would take to do decentralized search with reasonable latency. Everyone would have to have a copy of Google's index, or at least a reasonable chunk of it, or proximity to some mesh node with a reasonable chunk of it.

But mobile devices and mobile bandwidth are not really efficient as nodes to have complete copies, or partial copies. They're low on memory, and it's not really power efficient or network efficient for them to do the work.

But what's worse is the problem of updating the index. If everyone ran a distributed crawler, there'd have to be some way to coordinate who crawls what and make sure there is not too much duplication of effort (else we waste network bandwidth and have a tragedy of the commons). You also run into the problem of people trying to (and being economically incentivized to) poison the index, so you'd need an overhead of duplication just to avoid that.

It's not impossible to arrange a distributed crawl/map/reduce over millions of clients; Folding@Home and other crowdsourced computation platforms show it can be done. But they don't have to manage replication and updating of petabyte-scale databases; the problems they coordinate are easily sliced up, and the results to be transmitted back aren't large.
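
A hedged Python illustration of the "coordinate who crawls what" piece: rendezvous hashing deterministically assigns each URL to k crawlers, so duplication is bounded and the k assignees can cross-check each other against poisoning (none of which touches the bandwidth economics):

    # Each URL is crawled by the k peers with the highest hash score for it;
    # everyone can compute the assignment locally, with no coordinator.
    import hashlib

    def score(crawler_id: str, url: str) -> int:
        return int(hashlib.sha256((crawler_id + "|" + url).encode()).hexdigest(), 16)

    def assignees(url: str, crawlers, k=3):
        return sorted(crawlers, key=lambda c: score(c, url), reverse=True)[:k]

    crawlers = ["peer-%02d" % i for i in range(20)]
    print(assignees("https://example.org/", crawlers))   # the 3 peers responsible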

I think this tends to drive the network back towards super-nodes, as the leaf nodes aren't efficient and the cost associated with doing the work can be pushed onto others (the free-rider problem). Eventually, the people running the super-nodes come to be relied upon by everyone, and those nodes become subject to the same kinds of problems we see today -- government attacks, hackers, privacy invasion, etc.

It's not like SMTP/USENET weren't once relatively decentralized networks. But as millions and then billions of people came online, the cost of running SMTP or USENET nodes was too high in complexity for most people to run their own. Small and medium dial-up ISPs arose to handle that need. But since these were commodities, with little differentiation other than price, prices were quickly driven down to the marginal cost, and there isn't much money in running an email server for a few hundred people; you need scale. That eventually drove consolidation, and that's why today you either get your email from Gmail, Yahoo, AOL, iCloud, Hotmail, or one of the other larger players.

It may very well be that for some types of problems there is inherent efficiency in centralization. We should perhaps look at how to change the way the services work to favor the cost-effectiveness of decentralization, so that rather than being favored for ideological or aesthetic reasons, it is favored by an economic gradient.


There is certainly an inherent efficiency in centralization, economy of scale in resource costs both physical and human.

But even the large players have to distribute their data centers - there's a geographical limit at which you must decentralize again anyway. So the key here is to decentralize at the proper scale - nodes large enough to be cost-effective, widely enough distributed to prevent SPOFs and provide good response times. I'm pretty sure this can be done at a level smaller than where the Googles/Yahoos/Microsofts of the world operate.


> what it would take to do decentralized search

http://yacy.net/ already does this.


Yeah, I thought these string-indexing algorithms were inherently decentralized, i.e. the data structures themselves are big trees.


If the author favors decentralization, then they should advocate for decentralization of the economy, not just decentralization of the Internet. Unless they imagine some utopia where corporations no longer get involved with the running of the Internet, one has to consider how other economic factors, and also the law, contribute to corporate centralization, which in turn leads to centralization on the Internet.

There was a stretch in the mid 20th century when most Western economies saw a limited amount of geographic decentralization, as industry left the cities and moved to the countryside. The USA was the leader of this trend. But since the 1980s there have been forces working, throughout the economy, to re-centralize various industries. We see this most dramatically in the tech industry, especially in the USA, where the number of computer programming jobs has plummeted in areas such as Ohio, Illinois, Arkansas, Kansas, etc., while rising in California and New York -- the programming jobs were once distributed across the country but are now increasingly concentrated into two mega-urban zones on the coasts.

Urban centralization tends to be influenced by political centralization. If the political system concentrates, then the urban system will tend to concentrate. Consider: "Empirical studies of [urban] “primacy” identify two strong factors determining the size of the largest city: urban population as a whole (no surprise) and, more interestingly, political structure: federal or decentralized systems do not have primate cities as large as those with high centralization. Thus Mexico City is still larger than Shanghai, because of China’s decentralization."

http://siteresources.worldbank.org/DEC/Resources/84797-12518...

Political concentration increases the economies of agglomeration: "The term economies of agglomeration is used in urban economics to describe the benefits that firms obtain by locating near each other ('agglomerating'). This concept relates to the idea of economies of scale and network effects. Simply put, as more firms in related fields of business cluster together, their costs of production may decline significantly (firms have competing multiple suppliers, greater specialization and division of labor result). Even when competing firms in the same sector cluster, there may be advantages because the cluster attracts more suppliers and customers than a single firm could achieve alone. Cities form and grow to exploit economies of agglomeration."

http://en.wikipedia.org/wiki/Economies_of_agglomeration

Furthermore, there has been a decline in the number of startups (which therefore concentrates more power in the hands of older firms):

https://www.fedinprint.org/items/fednsr/707.html

More so, the law encourages corporate leaders (of firms listed on stock exchanges) to focus on hitting quarterly targets. Any significant deviation from earnings expectations is now sufficient basis for a lawsuit: the CEO has misled investors, this is a breach of fiduciary duty, what about shareholder rights, etc., etc., on and on; I'm sure you know the rhetoric. This tends to mean that firms can only grow to a certain size before they get bought by an older firm, as going through with a public listing is more demanding than it used to be and exposes the firm to more legal risks than it used to.

We should not expect the Internet to escape the wider forces at work. If economics and the political system encourage centralization, then we will see centralization of the Internet.

You either advocate for decentralization of the whole economy, or you don't advocate for decentralization at all. The Internet is not free of the economy.

All that said, it is worth remembering that there has been dramatic decentralization of industry over the last 70 years. In 1945 the USA accounted for 50% of all global industrial production. Nowadays, it accounts for less than 20%. Asia has seen a dramatic rise in its share of industrial production.


I agree with the premise of the article, but there's more to consider than just the blockchain. There are three issues that must be dealt with when trying to make it easy for developers to create decentralised systems that can withstand the test of time. There are a number of (open-source) things I'm working on that could help, so I'll describe them in terms of the problems they're solving.

First is the issue of how one creates, deploys and manages the whole lifetime of an application. The current stacks are not up to the task given the huge complexity, footprints and vulnerabilities we see. All the tools we currently see are stopgaps that attempt to paper over the cracks. This will become even more important (and visible) as the Internet of things becomes a reality. IMHO this is where unikernels such as Mirage OS can be very impactful [1].

Second is the issue of how data is synced and pushed around. As indicated in the article, the blockchain is not appropriate for this. BitTorrent may provide some of the answer, but not all of it. We need systems that can track provenance and that can be composed in the way most appropriate for a given application. The Irmin datastore, which is based on the principles of Git, is one approach to this [2].

Third is the dual problem of identity and how to form secure end-to-end connections over the modern Internet (with all the middleboxes and NATs we now have). This is clearly something all the central providers want to control by being your One True Identity - but as we already know from the pseudonym problems, humans are not that simple. A nascent project called Signpost aims to tackle this [3].

One of the use-cases I'm working towards is to make it trivially simple for users to run their own core infrastructure of Mail, Contacts and Calendars [4]. Building things on top of this infrastructure then becomes more compelling and it's just as applicable to business as it is to individuals.

Even though I've listed the tools that I think will alleviate the issues, the fact remains that all these problems need to be resolved somehow; otherwise we'll be back to centralised systems regardless. In addition, I've only mentioned the technical challenges; the business model challenges are another thing entirely.

[1] http://openmirage.org/wiki/overview-of-mirage

[2] http://openmirage.org/blog/introducing-irmin

[3] http://nymote.org/docs/2013-foci-signposts.pdf

[4] http://nymote.org/blog/2013/introducing-nymote/



