
The end of the cloud is coming - lwhsiao
https://venturebeat.com/2017/11/04/the-end-of-the-cloud-is-coming/
======
arca_vorago
As a sysadmin, any time I read some variation of "At some point even data
centers may become a thing of the past," I know the author doesn't know what
they are talking about. As a matter of fact, it has provided much joy through
laughter (followed by the required sysadmin scotch) at the show "Silicon
Valley" for obviously parodying the issue. Datacenters aren't going anywhere,
and this strange fascination among hipster-hackers with instant
uber-decentralization pushes concerns me, because it ignores some of the more
real (and fixable) issues at hand, like DNS centralization, in favor of
magical "p2p (+ blockchain) will save us all" thinking not backed by much
real-world practical implementation.

Don't get me wrong, I'm a darknet, meshnet supporter. I love decentralization.
That said, I support the establishment of the infrastructure required to
support it independent of end-user devices, and I think for security and other
purposes it's at least arguable that they should remain separate, and devs
shouldn't assume so much right to CPU cycles.

So in essence the topology I think that is preferable would be properly called
decentralized-distributed.

Of course that's part of the reason I support things that go against that
common grain, such as ipv6nat.

~~~
jfoutz
> I support things that go against that common grain, such as ipv6nat.

> ipv6nat

Die in a fire while stung to death by scorpions and your eyes are eaten by
spiders.

No, that is too much. You are my fellow human; I don't wish that on anyone.

> ipv6nat

Still. I hope you have to poop while stuck in a meeting.

~~~
nerdponx
Isn't this a security thing? I'm not anything close to a security expert, but
I always understood that it was best not to have your personal computer
exposed directly to the Internet. Not to mention just having control over my
own network.

Also, my ISP only gave me one IPv6 address. I'm OK with that.

~~~
tokenizerrr
You can have a firewall without NAT.

------
nathanaldensr
I wish one of these "next-generation technology" blog articles would mention
Named Data Networking[1]. I think the real problem with the web is the IP
protocol itself, not any higher-level protocol or particular way of using IP.

From a very high level: The Web should be built on top of a broadcast
protocol, not a point-to-point protocol. If it were, many complex issues would
be solved.

[1]
[https://en.wikipedia.org/wiki/Named_data_networking](https://en.wikipedia.org/wiki/Named_data_networking)

~~~
gwright
I'm glad to see this mentioned, but I'm not sure named data networking is
based on a 'broadcast' model. I think it is closer to anycast behavior, with
packet delivery gated by security mechanisms to ensure delivery only occurs to
an authenticated 'requester'.

~~~
nathanaldensr
Sorry, I didn't intend to say that it was based on broadcast. I should've been
more precise: it enables the flow of broadcast data (e.g., streaming from one
source to many destinations) much, much better than point-to-point protocols
like IP.

------
KaiserPro
So there are a number of issues that are not really answered here.

1) The cloud is a costed model: you pay a company to look after your stuff so
you don't have to manage it yourself.

2) The cloud _is_ more expensive, but not compared to hiring your own
infrastructure people (depends on scale, of course).

3) Distributed networks of things are much, much slower than centralised ones,
save for a few exceptions.

4) Trust. Even though your stuff should be encrypted, spaffing it out in the
open is a large risk, because should your old keys leak, it'll be trivial to
retrieve old data.

5) Money. Who on earth is going to buy, run and maintain this infrastructure?
What's in it for them? Who organises development of tools, patching, etc.? Who
do I call when there is an outage?

In short, for this to work, it would require a 180 degree culture shift of
capitalism.

Also physics, to make globally distributed systems fast, reliable and
consistent all at once.

~~~
fredrb
> 180 degree culture shift of capitalism

You can monetize distributed systems, and I think this will happen more often
with blockchains, render farms and things like that.

> who on earth is going to buy, run and maintain this infrastructure?

This. As much as I love the idea of decentralization, I can't see it getting
much traction because of this.

~~~
pdimitar
> > who on earth is going to buy, run and maintain this infrastructure?

> This. As much as I love the idea of decentralization, I can't see it getting
> much traction because of this.

At least 90% of all smartphone users are clueless as to when apps work in the
background on their devices, so if you install an app that turns your
smartphone into a lightweight node in a p2p network, almost everybody will be
none the wiser IMO.

~~~
KaiserPro
And how many of your programs can run in an environment that guarantees
nothing?

Can you process your sensitive data on an unknown device without leaking any
secrets?

Sure, pushing bits from one place to the other is easy. But _processing_ that
data in any meaningful way is impossibly hard.

~~~
pdimitar
Do you expect a plan that covers 100% of all problematic points from the
get-go? As if the current technologies aren't patched to oblivion right now.
We can at least avoid the problems we already know exist. That's progress in
itself, wouldn't you agree?

Like with physical exercise, you start off somewhere and you discover
solutions to problems you didn't know how to solve, along the way.

Also, I am not sure what you mean. What part of distributing encrypted data
and decrypting it with your key on arrival is hard?

Admittedly, it would require a big shift in how we code apps. I am not opposed
to that though.
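The "distributing encrypted data and decrypting it on arrival" half really is the easy part, and can be sketched in a few lines. This is a toy illustration only (a one-time pad for brevity; a real system would use an authenticated cipher such as AES-GCM plus actual key management):

```python
import secrets

def encrypt(plaintext: bytes):
    """One-time pad: the key is random, as long as the message, and never
    leaves the owner. Only the ciphertext goes out to the swarm."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, blob = encrypt(b"my personal photo bytes")
# Untrusted nodes can store and relay `blob`; only the key holder recovers it:
assert decrypt(key, blob) == b"my personal photo bytes"
```

This covers distribution and storage; _processing_ the data on untrusted hardware, as noted above, is the genuinely hard part.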

~~~
KaiserPro
I have mildly secret data that I want to process, for example a credit card
transaction. How do I securely decrypt, process, re-encrypt and upload it to
the destination?

Processing that data requires a remote CPU that can't be spied upon. In a
decentralised environment, how can I get a reasonable guarantee that the
person hosting the CPU isn't just listening to the secrets?

~~~
pdimitar
And how do you guarantee that in a centralized system? You don't. It's wishful
thinking based on unfounded trust.

I can't answer your question directly. We simply can't rely on anyone; hence
we have encryption and key chains.

~~~
KaiserPro
What threat model are you looking at? I'm talking about providing enough
guarantees to sign a contract with another business, not hiding from
government actors.

As a normal business I want to process sensitive information. That is not
possible in the world you describe, with current CPUs.

------
thisisit
IMHO the "cloud" as a whole was way overhyped for the value it provided. While
it delivered on simple requirements, anything complex meant things got out of
hand quickly. Quite a lot of applications were sold as "to be used by business
users," but then the setup ensured the complexity was so high that it required
whole dedicated technical teams to manage it.

That said, is it the end? It is doubtful. A lot of conventional, old-school
companies which were against moving to the cloud for various reasons are now
seriously considering it. Quite a lot of these companies are heavily dependent
on Oracle, SAP, etc., which are just now rolling out/pushing their cloud
products. Maybe it is the shoe-shine boy and Joe Kennedy moment, but it is
difficult to tell.

~~~
simonh
I don't think it's difficult to tell at all. Real businesses have built
massive systems on cloud infrastructure and the pace of them doing so is
accelerating. Real money is being made all round, so there's no bubble. It's
not all predicated on possible future profits that won't ever materialize. Are
there too many startups that won't go anywhere? Maybe, but that's not going to
lead to the collapse of AWS or Azure.

Meanwhile the solutions this article is promoting don't even really exist yet
except in very limited, primitive forms. Nobody is going to build a Facebook
on bit torrent and the blockchain any time soon.

~~~
pdimitar
> _Nobody is going to build a Facebook on bit torrent and the blockchain any
> time soon._

This might be a nitpick, but I don't think you realize that most p2p advocates
despise Facebook and all the centralized behemoths.

The idea the author expresses revolves more around a worldwide volunteer-based
peer-to-peer CDN than anything else.

------
physicsyogi
>This is a problem mostly because of the way we’ve organized the web. There
are many clients that want to get content and use programs and only a
relatively few servers that have those programs and content. When someone
posts a funny picture of a cat on Slack, even though I’m sitting next to 20
other people who want to look at that same picture, we all have to download it
from the server where it’s hosted, and the server needs to send it 20 times.

The article, but especially the above quote, reminds me of the Pied Piper
platform from _Silicon Valley_.

~~~
im_down_w_otp
Isn't the point of an HTTP caching proxy, in-part, to mitigate that exact
problem?

~~~
stephenr
Doesn't really work if the upstream is https, unless you MITM all your proxy
users.

~~~
solatic
You can architect for this, though. Let's say you're a data-heavy SaaS selling
primarily to enterprises that operate their own datacenters. Instead of
naively forcing all requests to go first to api.saas.example, you allow the
enterprise to configure its account to first hit a cache service (that you
write and distribute) located at saas-cache.enterprise.internal, allowing both
the SaaS and the enterprise to save on bandwidth by not fetching the same data
20 times over.
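The client side of that tiering could look something like this sketch. The hostnames, the `/v1/reports` path and the `fetch` helper are all hypothetical, purely to illustrate the cache-then-origin fallback:

```python
import urllib.request
import urllib.error

# Hypothetical endpoints, matching the example above:
CACHE_BASE = "http://saas-cache.enterprise.internal"
ORIGIN_BASE = "https://api.saas.example"

def fetch(path, timeout=2):
    """Try the enterprise's on-premise cache first; fall back to the
    SaaS origin if the cache tier is unreachable."""
    for base in (CACHE_BASE, ORIGIN_BASE):
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError):
            continue  # cache down (or origin down) -> try the next tier
    raise RuntimeError("both cache and origin unreachable")
```

In practice the cache service would also need an invalidation story, which is where most of the real design work lives.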

~~~
stephenr
Seems like a more common approach would be for the customer to use the on-
premise version of the software in question.

------
tboughen
Why does the author implicitly assume that the amount of spare local storage
is enough for the swarm+redundancy? Furthermore how many backups are required
for data to ensure the same level of protection as a modern cloud provider?
Does the math even work?

~~~
zrm
> Why does the author implicitly assume that the amount of spare local storage
> is enough for the swarm+redundancy? Furthermore how many backups are
> required for data to ensure the same level of protection as a modern cloud
> provider? Does the math even work?

It isn't really a problem. N+M erasure coding is very efficient. Even if the
average node only has 40% uptime, that means on average you can recover the
data with only 2.5 distributed copy-equivalents. You would want slightly more
than that in case the specific nodes end up having below average reliability,
but the numbers are completely reasonable.
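The arithmetic behind that figure can be sketched with a binomial availability model. This is a back-of-the-envelope illustration; the shard counts below are made up, not any real system's parameters:

```python
from math import comb

def recovery_probability(k, n, uptime):
    """Probability that at least k of n shards are online, assuming each
    shard sits on an independent node with the given uptime. With k-of-n
    erasure coding, that's the chance the data is recoverable right now."""
    return sum(comb(n, m) * uptime ** m * (1 - uptime) ** (n - m)
               for m in range(k, n + 1))

k, uptime = 20, 0.4   # need any 20 shards; average node is up 40% of the time

# n = 2.5k stores 2.5 copy-equivalents and puts the *expected* number of
# online shards at exactly k -- right on the edge:
print(recovery_probability(k, 50, uptime))

# ...which is why you'd want "slightly more than that":
print(recovery_probability(k, 65, uptime))
```

Even the padded case (65 shards here, i.e. 3.25 copy-equivalents) stays far below naive full replication across many nodes.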

Moreover, for anything which is shared with a non-trivial number of people,
each person would have a full copy which means the level of built-in
redundancy is already massive overkill and there is no need to spend third
party storage on any additional redundancy at all.

~~~
PuffinBlue
> Even if the average node only has 40% uptime, that means on average you can
> recover the data with only 2.5 distributed copy-equivalents.

Wouldn't this mean that if X is the total amount of consumed storage currently
used, you still need 2.5 times X _additional_ unused storage across the swarm
in which to hold the 2.5 distributed copies?

If so do average people keep such amounts of free storage around?

~~~
zrm
> Wouldn't this mean that if X is the total amount of consumed storage
> currently used, you still need 2.5 times X _additional_ unused storage
> across the swarm in which to hold the 2.5 distributed copies?

No. The first copy-equivalent is the original data.

And again, even that's only for bespoke personal data. As soon as you have a
few people with a copy -- which is even the case for most "personal" files,
because friends and family will have copies of your photos etc -- then you
don't need any redundancy past the copies each person already has on their own
device.

Moreover, cloud providers are already wasting more space than that. People
have a copy of their data on their own devices, but also on the servers, and
then the servers have at least one backup. That's three copy-equivalents
already.

> If so do average people keep such amounts of free storage around?

Millions of people buy computers with 1+TB hard drives and then use less than
10% of the space.

And it's possible to use the free space without really depriving the owner of
it, because you can set a minimum free space threshold and transfer data off
the machine if it ever falls below that, so the space is _only_ used if it
would otherwise have been free space.
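That policy is simple to sketch. The 50 GiB floor and the function name here are made up for illustration:

```python
import shutil

MIN_FREE_BYTES = 50 * 2 ** 30   # always leave the owner, say, 50 GiB of headroom

def can_accept_chunk(path: str, chunk_size: int) -> bool:
    """Accept a chunk for swarm storage only if writing it would keep
    free space on the volume above the owner's threshold."""
    free = shutil.disk_usage(path).free
    return free - chunk_size >= MIN_FREE_BYTES

# A background task would watch the same threshold and migrate chunks off
# the machine if the owner's own files ever push free space below it.
print(can_accept_chunk(".", 64 * 2 ** 20))   # can we take a 64 MiB chunk?
```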

~~~
PuffinBlue
> No. The first copy-equivalent is the original data.

Ah, got it.

> Millions of people buy computers with 1+TB hard drives and then use less
> than 10% of the space.

See that's the bit that's weird to me. My OS drives are small SSD's and the
spinning platters are all comfortably full.

I will say though, I'm replicating once onsite and once offsite so my own
redundancy is pretty high. If I could get over the 'someone else having
physical access' thing (I don't use cloud for most personal data) I suppose
IPFS or equivalent would be cool.

~~~
zrm
> See that's the bit that's weird to me. My OS drives are small SSD's and the
> spinning platters are all comfortably full.

You have to remember that you know what you're doing. You know how much space
you need and you know how to add more later, so you don't buy more than you
need.

The typical person buys a 2TB hard drive because they have "thousands of
photos" and the 2TB drive is only $15 more than the 0.5TB drive, even though
"thousands of photos" consume like 0.005TB.

And they're rational to do it because they know they aren't good at predicting
whether they will fill the smaller drive and it's worth $15 to hedge against
the ordeal of adding more storage later.

Which means many people will buy a 2TB drive and use it to store 75GB of data.

------
0xbear
While it is _technically_ possible to build heavily decentralized, managed
hardware deployments, no one has figured out yet how to charge for them except
in the case of very large customers. Until this happens, there will be no
swing back to anything decentralized.

Although I do understand the allure: just about anyone today could very
economically purchase 5-10 servers with 10-18 cores each (and those will be
_real_ cores, not hyperthreads). There's nothing impossible about automating
software management on such a thing, even to the extent that you'd get in the
cloud (VMs, containers, distributed storage, automated updates, VM migration,
network partitioning, etc.). I believe Microsoft will lease you a fully set-up
shipping container with Azure in it; all that needs to be done is to scale
this down.

But again, how does one extract billions of dollars in profits from something
like that?

~~~
kshitij_libra
I think the point would be that the central server _could_ gather aggregated
data / important business metrics that it could use to provide further value.
I think the author also mentions something like this, e.g. provide analytics
back to the end user as a paid service, or use that information to provide a
second set of B2B offerings to other partners, etc.

~~~
0xbear
Now compare that to the current model of nickel and diming for every single
thing and charging a 90% margin on traffic, and you’ll see why this is not
attractive to the current players.

~~~
kshitij_libra
Well, I think the intrinsic inefficiency of storing all that data centrally
and then running huge Spark jobs to crunch through it creates huge costs in
the first place, which probably need to be offset by the higher margins
charged to the end consumer. If you could just monitor the users' engagement
with your offering and charge based on that, without having to actually go
through countless logs to do it, that might be what the author is leaning
toward. But yeah, easier said than done. Knowing which metrics to capture back
centrally could itself be a challenging thing to do if the data is
decentralized...

------
justboxing
I never understood the difference between Mainframe Computers / Computing and
Cloud Computers / Computing.

In both cases, a centralized server connected to the internet does all your
application hosting, storage and processing. If that server goes down or the
connection is lost, in both cases you and your "SaaS" applications are SOL
(Sh*t Out of Luck). Also, in both cases you offer hackers a centralized
location to target and hack.

What's the difference, except in the name and in the marketing?

~~~
Tarq0n
The difference is the move from specialized to interchangeable, commodity
hardware. This allows cloud providers to establish an abstraction layer over
servers, because it doesn't matter which machine serves your demand at any
given moment.

------
Iv
At the core of the cloud paradigm and the centralized/decentralized issue lies
not a technical problem but a political one, about control and privacy
invasion.

The only reason we have YouTube instead of VLC + eDonkey packages is lawsuits.
That is a technological solution from 15 years ago that is technologically
superior and much harder to censor.

Datacenters and "cloud-based" services do not answer a technical problem but a
political one: how do we control information flow in a decentralized net? The
answer: provide bottlenecks of information for free.

~~~
pdimitar
The one thing the article didn't address is exactly the pushback from
regulators and corporations. They will NOT stand idly by and watch as control
is pulled out of their hands.

Even if such a network gets invented, I am pretty certain governments will
make aggressive laws along the lines of "if your computer caches a trailer of
this new Marvel movie, we can confiscate your tech for copyright
infringement."

As much as I want the problems to be only technical, IMO the much bigger
battle will be in courts and public hearings.

------
hguhghuff
None of these arguments lead me to think the cloud is coming to an end.

~~~
larkeith
None of these arguments lead me to think the author understands the benefits
of, or has ever built, a cloud architecture.

Half of his points conflate the cloud with the centralized internet, while the
rest are weak or irrelevant.

------
pron
Even if all the arguments are true and take into account all tradeoffs --
which I don't think is the case -- there is a big difference between "will
eventually happen" and "is coming" or "we're facing the end".

------
nicostouch
Yeahhh no. Try building Netflix on this as a platform. Then write a new
article.

~~~
xj9
you mean popcorntime? or webtorrent + thepiratebay.org?

these get shut down due to legal issues, but, from a technical perspective,
building netflix using bittorrent (or ipfs) is 100% doable and quite reliable

~~~
techdragon
Until you want to watch that one kinda obscure thing that isn't adequately
seeded... then you're out of luck. (I'm aware of server "pre-seed"/enhancement
options; this just assumes you want to watch something that isn't seeded or
pre-seeded.)

~~~
candiodari
That's because it isn't done legally, and therefore augmenting the network
with a few "supernodes" (I believe that's what Napster called them) is a big
no-no. You could say the same about Netflix/HBO/..., by the way: watching
obscure things that aren't worth Netflix's or the copyright owner's time to
negotiate rights for... good luck. And if the present trend continues, and
Disney, and Paramount, and ... start their own streaming platforms with
exclusivity, well, popcorntime is going to be the vastly superior option in a
year or two at most.

For a distributed counterexample, you have the Steam platform. Obscure things
download quickly and without much delay. And Steam avoids the "rights
negotiating" problem by being a marketplace.

------
forkLding
How would a newb get around to developing with P2P services?

Curious and want to try something new.

~~~
meritt
WebRTC

------
mrep
> The compute speed and storage capacity of computers are growing faster than
> the bandwidth of the networks.

So your proposed solution is to spread out compute and storage to hosts that
are even more network constrained???

------
tmikaeld
The article also assumes that client bandwidth is unlimited, but in reality
more and more ISPs have data caps.

If a p2p network were to become this large, the need for client bandwidth
would grow dramatically.

~~~
pdimitar
Where do you see these "more and more ISPs having data caps" exactly, please?

I live in Eastern Europe and I have a legit gigabit connection with no limits.
A load of people around here pay $10 for 75 Mbps and connections most of the
USA can only dream of.

Assuming you're talking about the USA... just saying, world > USA.

~~~
tmikaeld
All mobile broadband ISPs in the EU and Asia (where users often cannot get
anything else) have caps at just a few gigs of data. Even in Sweden, they are
adding hidden limits when you reach a TB of data on cabled broadband, which
means streaming 4K video will hit these limits.

~~~
pdimitar
I definitely agree for the rural / less accessible areas. Carriers definitely
have a monopoly there, and certain countries like Sweden, Norway and Finland
are too vast to have good cabled coverage.

In most EU cities you've got plenty of choice, however.

~~~
tmikaeld
That's absolutely true: the closer to the root nodes of the EU you get, the
cheaper the bandwidth is. Heck, in Romania you can even get 10 Gbit in a small
datacenter for a few thousand €.

So sure, if the larger nodes can make up for the smaller ones in a distributed
network, then it might be plausible.

------
Dragon256
ZeroNet seems to be a step in that direction. Completely P2P; it allows static
HTML and dynamic sites, unique identity and a lot more.

------
ohthehugemanate
Hello, this is the year 2003 calling. We want our arguments for, and vision
of, a peer-to-peer Utopia back.

------
DumpOfGenius
Richard Hendricks, is that you?

------
cheschire
As I've written before, privacy and risk mitigation are the two reasons this
article is wrong. The cloud is already the pinnacle of decentralization, and
the distributed web is just a fad.

[https://nrrd.io/privacy-and-risk-mitigation-are-the-biggest-threats-to-the-decentralized-web-b685ce7cf125](https://nrrd.io/privacy-and-risk-mitigation-are-the-biggest-threats-to-the-decentralized-web-b685ce7cf125)

