Edge Computing (dtprinciples.blogspot.com)
71 points by robin_reala 10 months ago | 45 comments



I don't think latency is actually the most important feature of edge computing. Sure, it's great that if you use Cloudflare Workers you get very close to end users and have low latency. I think the real advantages will be:

1. Set-it-and-forget-it scalability

2. Compliance with data locality laws

#2 is going to be the absolute wave of the future. Not just in Europe, but everywhere. Every country is going to introduce laws that mean their citizens' data needs to stay in region or in country. A widely distributed edge will make that easy to handle because it can be a configuration option.
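Concretely, here's a sketch of what that configuration option could look like in a Workers-style runtime (the country-to-endpoint mapping and the storage URLs are made up for illustration, not a real service):

    // Route each user's data to storage pinned to their legal region.
    // REGION_STORES and the endpoints are hypothetical placeholders.
    const REGION_STORES: Record<string, string> = {
      DE: "https://eu-store.example.com",
      FR: "https://eu-store.example.com",
      US: "https://us-store.example.com",
    };

    export default {
      async fetch(request: Request): Promise<Response> {
        // Cloudflare-style runtimes expose the caller's country on request.cf.
        const country = (request as any).cf?.country ?? "US";
        const store = REGION_STORES[country] ?? REGION_STORES["US"];
        const { pathname } = new URL(request.url);
        // Forward the request to the in-region data store.
        return fetch(new Request(store + pathname, request));
      },
    };

The point being that "keep this citizen's data in this region" becomes a lookup table rather than an infrastructure project.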


The author also drastically overestimates the capabilities of cloud vendors. CDN and edge have been and will continue to be great businesses (as you well know), despite pundits predicting for over a decade that they would be subsumed as cloud subsidiaries. For one, capacity planning for a public cloud is a lot different from, and a lot harder than, capacity planning for a CDN/edge network. So the cloud vendor must then act a lot like a CDN/edge provider, which none have shown any competence in doing. There's a reason Amazon still uses Akamai, Level3, and others for its own Prime Video. CloudFront is a joke compared to any major CDN in terms of network breadth, latency, and overall support from the vendor.


CloudFront is a joke. It doesn't even encrypt the traffic inside their network.


Performance-wise, putting things (i.e., data, compute power, networking, etc.) nearer to clients is the only way to somehow 'beat' mother nature. That's why latency really is the main feature of edge computing. Point (1) is already delivered by cloud computing (or autonomic computing, see [1]). Point (2) has received more attention lately, but it has also been a problem in the cloud, and it is not really a "feature" in either setting.

[1] http://www.cs.cmu.edu/~15849g/readings/kephart03.pdf (2003)


I think this is a bit hand-wavy. Being able to run workloads on the edge does not solve the data storage problem.


> #2 is going to be the absolute wave of the future. Not just in Europe, but everywhere. Every country is going to introduce laws that mean their citizens' data needs to stay in region or in country

I can't wait for all these special snowflake laws. And then the whining that local players are all getting wiped out by foreign competitors that got their start in much larger unified markets where they could benefit from economies of scale.


It looks like the term "edge computing" conflates the location of the services and the method of distributing the services (serverless apps). You could imagine writing servers that then get automatically distributed to the edge as containers or VMs.


For #2, how would you deal with the user moving to a different region?


I think your #2 is actually #1, although I would phrase it as:

#1 ownership and control over access to one's own data (privacy)

#2 access to one's own data (accessibility and queryability)


Aren't both already the case with, say, AWS Lambda?


What exactly do you mean by #1?


I mean that the promise of edge computing is that you write some code, deploy it and don't worry about scaling it, regions, availability zones and the like. It's just code that gets run.
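For example, in a Workers-style runtime the handler below is the entire deployable unit - no instances, regions or autoscaling groups appear anywhere in the code (a minimal sketch assuming Cloudflare Workers' module syntax):

    // Minimal edge function: scaling and placement are the platform's problem.
    export default {
      async fetch(request: Request): Promise<Response> {
        const { pathname } = new URL(request.url);
        return new Response(`hello from the edge: ${pathname}`, {
          headers: { "content-type": "text/plain" },
        });
      },
    };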


> It's just code that gets run.

I agree. But it also means that it's mostly unrelated to the edge, or edge computing; with the same constraints the code can run anywhere.

To me the promise of edge is that it could work quite well with decentralized apps (not limited to blockchain-based). The work you guys are doing with IPFS is a great start.

Add: When you think about it, most current apps are centralized and it's unsurprising that they can work well with the cloud (AWS/Azure/Google); edge is just mostly CDN for now. Decentralized is when those models become incompatible, and where edge can show strength.


CDN is mostly related to "content" (i.e., data) delivery. Edge is more generic, and relates not only to "content", but also to "processing" and "networking" to reduce the response time multiple clients experience.

Decentralized apps are not really related, though one can certainly employ edge datacenters to support them.


The point I'm making is that edge can't do much now in terms of "processing". Most data is centralized and you'll need to hit a centralized system to access it. Facilities needing lower latency are more likely to keep it on-prem. As of now, it's mostly dealing with CDNish workloads.

However, that changes with decentralized apps since they place a different set of architectural demands, and don't have centralized datasources. For instance, a search in p2p space might involve connecting to a lot of peers - latency matters. Data (often signed chains) might need to be fetched from dozens of different sources, combined and queried locally - again latency matters. Clusters of people who you talk to are often co-located, edge wins again.

Latency doesn't matter when it's a hand-countable number of simultaneous queries (as in current apps). We can even work around it with approaches like batching, as with GraphQL.


Sure, most apps still follow the client-server architecture. To properly take advantage of edge computing, one needs to rearchitect current applications, and microservices are one example of how to do it.

The point of edge computing is exactly to incentivize facilities/companies not to keep data on premises, or at least to ship some of it (e.g. non-sensitive data) out. On-prem doesn't scale the way most companies need it to.

Latency is key for some important applications like self-driving cars and industrial automation, not really for making some queries in GraphQL.


> ... and microservices is one example of how to do it.

The fundamental problem is where the data resides. Microservices are well understood today, but taking them to the edge isn't; there isn't a path to do that for typical apps. So most microservices which are being used at the edge are doing caching/transcoding/resizing etc.

> Latency is key for some important applications like self-driving cars and industrial automation

They keep compute on-vehicle or on-prem. For data services (not media delivery), latency is:

a) either supremely important to be fully local (vehicles, automation)

b) or it doesn't matter enough to be on a 3rd party edge network. The diminishing returns in typical apps are what the article is alluding to.

> not really to make some queries in GraphQL

You're misrepresenting what I said - and it seems deliberate.

I mentioned GraphQL as one of the attempts to solve latency issues in typical apps.

For example, some apps use graphql/dataloader[1] because it can "coalesce all individual loads which occur within a single frame of execution before calling your batch function with all requested keys. This ensures no additional latency while capturing many related requests into a single batch."

So in typical apps, there isn't a big benefit to putting general compute on the edge - because the network calls are chunky and not chatty, and their data is centralized. GraphQL (along with libs/frameworks) being one way to turn chatty into chunky.

[1]: https://github.com/graphql/dataloader
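For concreteness, the coalescing described in that quote looks roughly like this in application code (a sketch: batchGetUsers, the User shape and the API URL are made up, and the batch function is assumed to return users in the same order as the requested ids, which DataLoader requires):

    import DataLoader from "dataloader";

    interface User { id: string; name: string; }

    // Hypothetical backend call that fetches many users in one round trip.
    async function batchGetUsers(ids: readonly string[]): Promise<User[]> {
      const res = await fetch(`https://api.example.com/users?ids=${ids.join(",")}`);
      return res.json();
    }

    const userLoader = new DataLoader(batchGetUsers);

    // These loads happen within a single frame of execution, so DataLoader
    // coalesces them into one batchGetUsers(["1", "2", "3"]) call.
    const [a, b, c] = await Promise.all([
      userLoader.load("1"),
      userLoader.load("2"),
      userLoader.load("3"),
    ]);

One chunky request instead of three chatty ones.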


> The fundamental problem is where the data resides. Microservices are well understood today, but taking them to the edge isn't; there isn't a path to do that for typical apps. So most microservices which are being used at the edge are doing caching/transcoding/resizing etc.

If an application follows a microservice architecture, then it is possible to containerize it and enable its autonomic management through frameworks like Kubernetes and Docker Swarm, which is one way forward to improve the performance efficiency of typical apps.

Yes, the problem is where the data resides. Migrate portions of it nearer to clients, onto the edge, and clients will have the illusion of lower latency. Maintaining data consistency and coherence are problems not yet properly tackled, but edge computing means computations happen on these migrated portions of data, and no longer "all the way" out at the cloud (with its higher latency).

> They keep compute on-vehicle or on-prem. For data services (not media delivery), latency is:

Not always. They will use the Edge mostly for inference and for learning from others (transfer learning), but sometimes also for training and model updates. And in cases like this (known as Mobile Edge Cloud), the ability to obtain granular and immediate control by supporting custom logic at the edge of a network is particularly useful in routing traffic to the microservices that make up a service.

I agree, GraphQL helps reduce latency, mostly because the amount of data transferred is now "granular", and it is one way to overcome limitations of 'traditional' apps (e.g. REST APIs). Coalescing may be useful in some cases, but perhaps not in situations where one needs to keep 95th/99th percentile latencies under certain thresholds. Together with edge (as I guess you mentioned), it is a win-win performance-wise. But then again, maintaining consistency and coherence are still problems to be tackled.

And guess what, applications still need to be updated to adopt GraphQL and use its features, just as they will need to be when developing for the edge.


With 5G that last mile between the metro hub and the user is only 5ms. This is critical for things like factory automation, autopilot, etc. Deploy your workload to AWS Wavelength or Azure Edge Zone 5G, and stick a 5G card in your factory tools or drones or whatever, and you're done.

So IMO cloud providers having presence in all the big telco hubs is the future of "edge". Or at least a future.

Disclosure, used to work at Azure Edge Zones.


But what's the intersection of critical stuff that needs to shave those milliseconds and stuff that is fine with the reduced reliability and latency variability of wireless comms vs fixed line?


Those are important questions and nobody knows for sure yet. Another of the challenges is capacity planning. Since the footprint is so much smaller, there will presumably be much more significant spikes in traffic and load. Like when CIS is in town or something. Throw in all the variations like GPU or FPGA (which may be more common on edge), and it becomes challenging to balance available capacity vs cost.

My guess is edge business model will be more like long term commitment than spin up and down at will.


Yep... 5G edge computing has been in the making for a while. It was being talked about 10-ish years ago (maybe not under that name) when 5G was still an upcoming thing, and it must have been sitting on the R&D drawing board collecting justifications before that. So marketing has had a long time to build use cases already.

Of course a "build it and they will come" argument can be made in favour of something we haven't yet been able to imagine, but it's starting to feel far-fetched. Or maybe it's just a question of how far toward the edge it makes sense to go versus cost (the original idea IIRC was building capacity colocated with the base stations).


There was interest from several large companies in reserving capacity in several different locations while I was there. So it's definitely happening on some level.

That said, that's kind of "boring edge", companies that just want one closer datacenter to run stuff. But, that's probably where the most money is.

I wouldn't be surprised if it takes a while to open up to general public where you can dynamically scale up and down though (beyond what e.g. Cloudflare workers provides) due to the capacity challenges I mentioned elsewhere.


This is a great point.


For what applications is 5ms more useful than 15 or 30? Much of that latency is in the last-mile transport / multiplexing / buffering. From Seattle, I'm 7ms from the Google Cloud/AWS Oregon regions - distance to the data center is rarely the latency driver (with perhaps the exception of tier 2/3 markets - but then again, where's my Cheyenne, WY edge zone?).


I'd say it's still an open question. Certainly it's not going to be beneficial for websites etc. Mostly automation scenarios. Maybe games, maybe live video editing, maybe remote desktop? Or maybe it will be a big flop. I think it's worth the investment to find out though.


I love this post. Just some facts and a few reasonable opinions about a topic. It seems so grounded. It's like I asked someone very smart what they thought about edge computing.


I want a joint venture between Starlink and Cloudflare. Put compute resources on the satellites and have a distributed CDN + edge workers in space. Having satellite internet with latency competitive with cable is fine and all, but what if you could beat it?


IT person sighs heavily, clamping down their helmet as they board the Dragon capsule for the weekly turning it off and on again.


Supposedly, in the 1940s, when Arthur C Clarke (now mainly known as an SF author) proposed the idea of geostationary satellites [0] as extra-terrestrial radio relays, he believed that they would have to be manned space-stations because of the need to keep changing the valves that had blown.

https://en.wikipedia.org/wiki/Arthur_C._Clarke#Geostationary...


If GEO becomes crowded enough that it's not possible to just send more hardware and park the defective unit somewhere else, and launch costs keep going down, I wonder if we'll see more human-serviceable hardware.


We just need a Terraform provider that maps provisioning to launching a Falcon-9, and deprovisioning to firing a photon torpedo at the satellite.


Sounds like a job for a fleet of orbital rocket boots.

http://freefall.purrsia.com/ff1000/fv00955.htm


All the jokes about launching rockets in the "what is a monad" articles are finally coming true!


You say this jokingly, but onboard software is a feature in some satellites right now. The ability to push and update applications that run directly on the vehicle instead of needing a downlink to a ground processing station is on the roadmap for increasingly compute expensive operations, but still pretty basic right now. I don't think there are any that will run untrusted code from arbitrary developers, though. That's a pretty steep risk.

I'm having trouble finding any unclassified examples of this, but you can read about TechSat-21, which was supposed to be a proof of concept back in 2004 but the program was canceled because of cost overruns: https://ml.jpl.nasa.gov/papers/the_techsat-21_autonomous_sci...


My followup comment about Terraform was tongue-in-cheek, but I'm absolutely serious about wanting edge processing on-satellite, and I think it's bound to happen in the future. I think that either Starlink will adopt a strategy like this, or Amazon's Project Kuiper will, and further cement AWS's technological lead.


Project Kuiper?


I am waiting for someone to "discover" placing "edge computing" nodes on-premise with client computers to remove the last-mile latency, speed up bandwidth-intensive operations like file sharing, and lessen the dependence on unreliable pro-sumer grade Internet connections (read: the no-SLA, somewhat reliable, typically violently asymmetric connections available for small to medium-sized businesses in a lot of the US for a reasonable cost, often from a monopoly provider).

I guess "Azure Stack" is sort of that now, but that's not targeted at the small to medium-sized business market (yet).


Azure Stack and AWS Outposts are already targeting at least mid-size businesses. The issue is that they are still not simple to implement and give marginal benefits in most use cases. I never actually managed to get one of these onboard, but we did evaluate them on behalf of some clients who were trying to build some very high-end digital experiences for customers. Geofencing, personalized kiosk experiences, dwell time tracking. And they wanted it all to be really fast. We did some pricing (for both our services and equipment/hosting costs) and it would have been millions and taken at least 6 months to build out. I also worked at a pretty big retailer that had some ancient "edge" compute workloads that we were in the process of decommissioning because it was too big of a pain to keep them updated, powered, and connected at 300 locations.

Chick-fil-A has a great write-up of how they run K8s on racks in every restaurant: https://medium.com/@cfatechblog/edge-computing-at-chick-fil-...

Typical small businesses don't have the resources or needs to do any of that. They are already far more likely to invest in turnkey technology like Salesforce, Square, etc. than to provision anything at all in the cloud on their own.


That seems to be at least one reason for AWS Outposts... so Amazon doesn't lose revenue as some edge things move to customers' local on-prem environments. Though there's no small/medium business version of it yet.


NCR has been delivering on-prem solutions worldwide for decades. Their most popular offering, Aloha, stores transactions at the site in cases of Internet outages, and pushes them up whenever conditions improve. This works as well in suburbia and Western Europe as it does in Africa, Eastern Europe, Asia, and South America.

I personally work on their edge team to deliver backend services for their next-gen offering.
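The store-and-forward part is conceptually simple; a toy sketch of the pattern (not our actual implementation - every name here is made up) might look like:

    // Toy store-and-forward queue: record transactions locally, push them
    // upstream whenever connectivity allows. Illustrative only.
    interface Transaction { id: string; payload: unknown; }

    const pending: Transaction[] = [];

    function recordTransaction(tx: Transaction): void {
      // A real system would write to durable local storage, not memory.
      pending.push(tx);
    }

    async function flushPending(upstreamUrl: string): Promise<void> {
      while (pending.length > 0) {
        try {
          const res = await fetch(upstreamUrl, {
            method: "POST",
            headers: { "content-type": "application/json" },
            body: JSON.stringify(pending[0]),
          });
          if (!res.ok) break; // upstream rejected it; retry on the next flush
          pending.shift();    // acknowledged; remove from the local queue
        } catch {
          break;              // still offline; try again later
        }
      }
    }

    // Flush opportunistically, e.g. on a timer.
    setInterval(() => flushPending("https://example.com/transactions"), 30_000);

The hard parts in practice are durability, idempotency and ordering, not the queue itself.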


This is a great insight for full applications.

But latency (mostly) aside, there seem to be a lot of interesting use cases for what is more or less programmable CDN configuration. Particularly when you have a relatively straightforward application (architecturally at least), but want to tap into the very distributed, very scalable CDN layer for little bits of critical functionality.


This.

I used Lambda@Edge last week to smooth some rough edges between CloudFront and S3.

S3 is just an HTTP server to CloudFront, and S3 by itself is a rather dumb HTTP server. Lambda@Edge allowed me to add some additional features to that integration to make S3 a bit more bearable, like automatic index files and path cleanups.
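The automatic index-file part is basically one small origin-request function. Roughly (a sketch of the common pattern, not necessarily my exact code):

    // Lambda@Edge origin-request handler: rewrite "directory" URIs so that
    // CloudFront asks S3 for an index.html, which S3 won't do by itself
    // when used as a plain HTTP origin.
    export const handler = async (event: any) => {
      const request = event.Records[0].cf.request;
      if (request.uri.endsWith("/")) {
        request.uri += "index.html";      // /docs/ -> /docs/index.html
      } else if (!request.uri.includes(".")) {
        request.uri += "/index.html";     // /docs  -> /docs/index.html
      }
      return request; // hand the rewritten request on to the origin
    };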


I'm hoping to see visual computing (not computer vision) in the feed someday.

I also have a fantasy test for an OS: check what happens when a camera feeds the monitor's output back into the monitor, keeping the monitor's pixel values as input to this fictional visual OS.


This article is missing a huge part of why Edge computing is necessary: Transit Cost and Backbone Peering.



