Ask HN: Pros and cons of V8 isolates?
171 points by pranay01 on June 15, 2022 | 153 comments
I was reading this article on Cloudflare Workers https://blog.cloudflare.com/cloud-computing-without-containe... and it seemed like isolates have a significant advantage over serverless technologies like Lambda, etc.

What are the downsides of V8 isolates? Is it poor security isolation?




The downside of v8 isolates is: you have to reinvent a whole bunch of stuff to get good isolation (of both security and resources).

Here's an example. Under no circumstances should CloudFlare or anyone else be running multiple isolates in the same OS process. They need to be sandboxed in isolated processes. Chrome sandboxes them in isolated processes.

Process isolation is slightly heavier weight (though forking is wicked fast) but more secure. Processes give you the advantage of using cgroups to restrict resources, namespaces to limit network access, etc.

My understanding is that this is exactly what Deno Deploy does (https://deno.com/deploy).

Once you've forked a process, though, you're not far off from just running something like Firecracker. This is both true and intense bias on my part. I work on https://fly.io, we use Firecracker. We started with v8 and decided it was wrong. So obviously I would be saying this.

Firecracker has the benefit of hardware virtualization. It's pretty dang fast. The downside is, you need to run on bare metal to take advantage of it.

My guess is that this is all going to converge. v8 isolates will someday run in isolated processes that can take advantage of hardware virtualization. They already _should_ run in isolated processes that take advantage of OS level sandboxing.

At the same time, people using Firecracker (like us!) will be able to optimize away cold starts, keep memory usage small, etc.

The natural end state is to run your v8 isolates or wasm runtimes in a lightweight VM.


The future of compute is fine-grained. Cloudflare Workers is all about fine-grained compute, that is, splitting compute into small chunks -- a single HTTP request, rather than a single server instance. This is what allows us to run every customer's code (no matter how little traffic they get) in hundreds of locations around the world, at a price accessible to everyone.

The finer-grained your compute gets, the higher the overhead of strict process isolation gets. At the point Cloudflare is operating at, we've measured that imposing strict process isolation would mean an order of magnitude more overhead, in terms of CPU and memory usage. It depends a bit on the workload of course, but it's big. Yes, this is with all the tricks, zygote processes, etc.

We have plenty of defense-in-depth that we can do instead of process isolation, that doesn't have such enormous cost. [0] [1]

IMO, the platforms that stubbornly insist that process isolation is the only answer are the ones that are going to lose out eventually, just as bare metal has been supplanted by VMs, which are in turn being replaced by containers, etc. Over time we move to finer-grained primitives because doing so unlocks more power.

[0] https://blog.cloudflare.com/mitigating-spectre-and-other-sec...

[1] https://blog.cloudflare.com/spectre-research-with-tu-graz/
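
Concretely, the unit of fine-grained compute here is just a request handler, not a server. A minimal sketch of a Worker (written as a plain object so it can be exercised outside the Workers runtime; in a real Worker this object would be the module's default export):

```javascript
// Minimal sketch of a Cloudflare Worker: the deployable unit is a
// fetch handler, not a server process you manage.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    return new Response(`hello from ${url.pathname}`, { status: 200 });
  },
};

// Exercising it with the standard Request/Response types (Node 18+):
worker
  .fetch(new Request('https://example.com/greet'))
  .then((res) => res.text())
  .then((body) => console.log(body)); // hello from /greet
```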


Point of order: containers aren't a mechanism to increase compute granularity; they're an abstraction designed to make compute easier to package and deploy. Containers can be bin-packed m:n into VMs or machines, but that's just a detail; over time, containers are all going to end up VM-scheduled, as VMs get cheaper and cheaper.

Meanwhile, the multitenant-containers/jails/zones people conclusively had the wrong side of the argument, despite how granular they were; multitenant shared-kernel is unsafe.

I have no opinion about whether language runtime isolation is competitively safe with virtualization. It's probably situational. I just object to the simple linear progression you're presenting.


Containers _are_ a mechanism to increase compute granularity. Some of our workloads were having trouble scaling to 128 cores, and by using containers, we can have more of them running with fewer CPUs. Furthermore, it's straightforward to provide burst capacity to applications running within a container, given that all the OS needs to do is give those cgroups some extra CPU time for a limited period.


> Containers _are_ a mechanism to increase compute granularity. Some of our workloads were having trouble scaling to 128 cores, and by using containers, we can have more of them running with fewer CPUs.

As opposed to having VMs? Otherwise, I don't see how containers can increase granularity.


As opposed to the original comment that says they aren’t useful for increasing granularity.


It sounds like containers enabled you to manage the deployment of multiple processes on a single machine more easily which doesn't contradict the comment you're replying to. Would it not have been possible without containers?


> containers aren't a mechanism to increase compute granularity

Yes they are. Instead of thinking in terms of a whole running operating system with dozens of services, now you are thinking in terms of individual (micro?)services that are relatively isolated from each other. We stuff a lot more containers per box than we used to stuff VMs per box.

But it's true containers (of the namespace/cgroup/seccomp variety) have failed to be a sufficiently secure isolation mechanism to use them for multi-tenant scenarios, so instead we mostly pack containers from the same owner together.

I'd sort of argue that Firecracker and gVisor are actually container engines that happen to use CPU features meant for hardware VMs for additional security hardening. The granularity of compute that you put in them is more container-ish than traditional VM-ish.


They are, or should be, entirely self-contained, such that whatever segregation is employed - be it hardware via a VM, or in-kernel with AppArmor or SELinux - provides sufficient segregation for the workload. V8's problem is JavaScript and npm, but limiting the blast radius with hardware virtualisation is a win for segregation, and V8 will win, at least for the front end, because it's got the mindshare. As long as the library ecosystem cleans up.


"In kernel with apparmor or SELinux" can't possibly provide sufficient workload isolation, because it implies workloads share a kernel. It's easy to rattle off relatively recent kernel LPEs that no mandatory access control configuration would have prevented.

The Linux kernel simply wasn't designed to provide the kind of isolation "naive" containers want it to. Actually, generalize that out: Unix kernels in general weren't designed this way. It just doesn't work.


End game then is LittleKernel/Zircon on Fly.io? When do we get to play with those?


Whether software based access control is sufficient depends on the workload and where in the stack the workload runs. I agree though, hardware virtualisation based is more secure and less complex. It also requires access to bare metal, so a providers service or run it yourself, which is a trade off.


> over time, containers are all going to end up VM-scheduled, as VMs get cheaper and cheaper

That kind of started with Github actions.


This makes me really wish I was in a position to spend time trying to earn CloudFlare bug bounties. ;)

Sure, there are massive advantages to using finer-grained sandboxes, but that doesn’t mean it’s safe.


> The future of compute is fine-grained. Cloudflare Workers is all about fine-grained compute, that is, splitting compute into small chunks -- a single HTTP request, rather than a single server instance. This is what allows us to run every customer's code (no matter how little traffic they get) in hundreds of locations around the world, at a price accessible to everyone.

I don't think this holds any truth at all.

Cloudflare Workers were designed to be so "fine-grained" because their whole rationale is to run a very small compute step on each request, a minor touch-up to each request/response, with negligible or tolerable performance impact.

This is not a major paradigm change. It's a request handler placed on edge servers to do a minor touch up without the client noticing. Conceptually they are the same as a plain old Controller from Spring or Express. They just have tighter constraints because they run on resource-constrained hardware and handle performance-constrained requests. Other than this, they are a plain old request handler.


Considering you're engaging the tech lead of the tech in question, it's intriguing what you mean by this. Is it that kentonv is lying or that they're mistaken, or something else?


To be clear, I'm the tech lead of Cloudflare Workers, and wrote the core runtime implementation. Sorry, I should have stated that more clearly above.

While minor request/response rewrites were an early target use case, the platform is very much intended to support building whole applications running entirely on Workers. We do think this is a major paradigm shift.


I like CloudFlare - but still haven't heard back since signing up for Workers for Platforms the day it was announced


I love CloudFlare workers EXCEPT the ridiculous limit they place on outgoing web requests. It makes doing something like writing a distributed web scraper or crawler impractical, so poo poo on them.


> It makes doing something like writing a distributed web scraper or crawler impractical

That's why they have the limits.


Never thought I’d see a self-aware wolf in a technical discussion.


I don't think that's what CloudFlare workers are for.


> Under no circumstances should CloudFlare or anyone else be running multiple isolates in the same OS process.

That depends on your scenario. In our case, all the JavaScript code is ours so we're not worried about it trying to exploit bugs and escape into native. Running multiple Isolates / Contexts gives us isolation on the JS side but also lots of sharing (the v8::Platform object and several other globals are shared in the process).

Of course, if you're running untrusted JavaScript code what you're saying makes sense (though I wouldn't go as far as Firecracker, low-rights sandbox processes a-la Chrome do the job).


My understanding is that you are worried about code other customers are running (since both of you are running in the same process); but I might be wrong/not getting the angle of all of this?


Yes, fair. That makes complete sense.


> Process isolation is slightly heavier weight (though forking is wicked fast) but more secure. Processes give you the advantage of using cgroups to restrict resources, namespaces to limit network access, etc. My understanding is that this is exactly what Deno Deploy does.

Interestingly, as does Android: https://stackoverflow.com/a/12703292


> Under no circumstances should CloudFlare or anyone else be running multiple isolates in the same OS process. They need to be sandboxed in isolated processes. Chrome sandboxes them in isolated processes.

Is it because V8 isolates rely on the process sandbox or just to have a double sandbox?

from https://blog.cloudflare.com/cloud-computing-without-containe...

> Isolates are lightweight contexts that group variables with the code allowed to mutate them. Most importantly, a single process can run hundreds or thousands of Isolates, seamlessly switching between them.

Cloudflare runs multiple isolates per process.

> We also have a few layers of security built on our end, including various protections against timing attacks, but V8 is the real wonder that makes this compute model possible

They also talk about how they removed many browser APIs for security, but seems to heavily rely on V8 isolates for sandboxing.


CloudFlare actually makes the case for running in dedicated processes on their own blog: https://blog.cloudflare.com/mitigating-spectre-and-other-sec...

Running untrusted code in the same process gives that code a tremendous blast radius if they exploit a vulnerability in, say, a fetch implementation. I do not understand why they would do this.

Isolating processes adds a layer of protection. People who exploit your implementation have limited access to the system (they can't read another user's memory, for example, which often contains sensitive info – like private keys).

KVM adds _another_ layer.

If you have a process running in a namespace within a KVM, someone would need to exploit the process, the Linux Kernel, and the underlying virtualization extensions to do serious damage.


The process, the Linux kernel, underlying virtualization extensions (maybe; not totally following that one) and the mandatory access control rules applied to the VM runtime --- in Firecracker's case, the BPF jail it runs in.


Belt, suspenders, antigravity device, as Steve Summit has been known to say.


> Under no circumstances should CloudFlare or anyone else be running multiple isolates in the same OS process.

TFA: "Most importantly, a single process can run hundreds or thousands of Isolates, seamlessly switching between them." :)


And that's true as long as all the isolates trust each other.


This is such a great answer, I just wanted to send a +1 in agreement about isolates eventually being able to leverage cpu virt. Much nodding happened while reading your answer.


Answering the security question specifically: v8 is a runtime and not a security boundary. Escaping it isn't trivial, but it is common [1]. You should still wrap it in a proper security boundary like gVisor [2].

The security claims from Cloudflare are quite bold for a piece of software that was not written to do what they are using it for. To steal an old saying, anyone can develop a system so secure they can't think of a way to defeat it.

1. https://www.cvedetails.com/vulnerability-list/vendor_id-1224...

2. https://gvisor.dev/


Docker went through an era of thinking they could build multitenant systems that amortized the cost of resources across customers. Java went through an era of thinking they could build multitenant VMs that amortized resources across customers. Unix went through an era of thinking they could build multitenant processes that amortized resources across customers. As did Windows.

I point out that Docker is trying - and failing - to offer us the same feature set we were promised by protected memory, preemptive multitasking operating systems 30 years ago. I'm still waiting. I don't recall if there were old farts complaining about this in the early 90's, but I suspect they exist.

Also Java had a bunch of facilities for protection and permissions that I've not heard anyone claim exist in V8. Not that feature equality is required (in fact that may seal your doom), but I don't see feature parity either.

edit: some of the facilities Java investigated do exist in cgroups, so one might argue that there is a union of v8 and cgroups that is a superset of Java, but unless I am very mistaken, isolates don't work that way, so isolates are still not the way forward there.


>cost of resources across customers.

IMHO, this is all a solution chasing a problem. A server from Hetzner costs the equivalent of 30 minutes of my time a month. A VM on a shared machine can cost even less. There's just not enough cost there to save to justify the security risks and performance ghosts. It only makes sense for extremely bursty loads or background processing where latency isn't important. But those are pretty atypical scenarios, and usually buying for max load is still cheaper than the engineering time to set up the system.


One server may be 30 mins of your time. But whether that's cost efficient depends on your domain.

There's B2B work typically characterized by a low number of very high value transactions, so you run a few servers, and your math works out.

And then there are consumer services such as Instagram or Google maps or TikTok that see a huge amount of very low value requests that only contribute a small trifling in ad revenue. When your service needs to process millions of requests to pay for your service, your math no longer works out.

One of the beauties of modern computing is that our economies of scale enable us to deliver complex services such as Instagram etc at cost, for free. That's pretty amazing if you think of it, but engineers need to be very stingy for this to work out.


The math still works out because there's not much cost savings to be had in using V8 instead of VM/physical machine instances to share physical equipment cost. It's worth that small extra bit of money to pay to have your code running with the extra bit of protection.


> A server from Hetzner costs the equivalent of 30 minutes of my time a month.

My toy code, which takes ~6ms (at p75) to exec, runs in 200+ locations, serves requests from 150 different countries (with users reporting end-to-end latencies in low 50ms). This costs well below $50/mo on Cloudflare, and $0 in devops. Make what you will of that.


What problem did that solve?


Have you seen Oceania and Africa egress prices?


How many requests a month?


It has seen everything between 10rps to 10000rps, and never once complained. Latencies remain steady, requests remain serviceable, abuse (DDoS/bots) still gets caught, throughput just seems to scale elastically even if not boundlessly.


> I point out that Docker is trying - and failing - to offer us the same feature set we were promised by protected memory, preemptive multitasking operating systems 30 years ago.

You were given them, by those facilities, 30 years ago.

Then someone started fiddling with CPU design, and things got weird.


I find it funny how we've ended up here:

"Let's use containers, that way we aren't running unnecessary redundant kernels in each VM!"

later

"Oh shit, our containers share a kernel, let's add another kernel to each container to fix this!"

Maybe the next step is that we realize there's so many CPU bugs, that we really just need to give each container their own hardware :)


> Maybe the next step is that we realize there's so many CPU bugs, that we really just need to give each container their own hardware :)

I am reasonably sure that most of the microservices I write would be very happy running on a 400MHz CPU with a couple hundred megs of RAM, if they were rewritten in native code, or even just compiled to native code instead of being run on top of Node. Throw it all on a minimal OS that provides networking and some file I/O.

How much does it cost to manufacture 400MHz CPUs with onboard memory? Those must cost a few pennies each; throw in a 4GB SSD, preferably using SLC chips for reliability, and a NIC, and sell 4-"machine" clusters of them for ~$100 a pop.


> [...] if they were rewritten in native code, [...] Throw it all on a minimal OS that provides networking and some file IO.

You may want to check out MirageOS[0]. It gives you a library OS with the primitives you say you need, and then all you have to do is import them in your application code as if you are writing your typical OCaml, build the virtual appliance and boot it up anywhere you want.

[0] https://mirage.io/docs/overview-of-mirage


There's also https://github.com/includeos/IncludeOS

However, how much of this kind of stuff is actually used at scale today?


Well we’ve kind of already started exploring that option with unikernels. Compile each app as its own complete software stack.

https://en.m.wikipedia.org/wiki/Unikernel


Nice! Thanks for the link. As I've been playing around building a toy OS for my Raspberry Pi and learning more about hardware virtualization, I was thinking exactly along these lines: instead of virtualizing a full OS, virtualizing a minimal image with just a network driver and a flat address space per process/app seems like it would have benefits (i.e. no virtual memory mapping or privilege-ring transitions eating cycles). I wasn't aware it was already an area that had much research behind it.

From the article:

> For example, off-the-shelf applications such as nginx, SQLite, and Redis running over a unikernel have shown a 1.7x-2.7x performance improvement

That's a pretty healthy boost. Are there any significant efforts close to general use anyone is aware of in this space? (Or good reasons why it's a totally terrible idea and hasn't gone anywhere!)


Not first-hand experience, but I was adjacent to a team that tried this approach. In short, it's playing on hard mode. Dev experience and tools, deployment chains, testing, etc. are all possible but nowhere near your awesome userspace toolchains. Be prepared for lots of DIY action like building your own network or file I/O. The team also wasn't able to quite get down to the super-fast ms start times and MB images that we were going for.

My memory is fuzzy but ISTR that the people behind seastar/osv moved on to an approach closer to what mrkurt is emphasizing with “container” composition plus firecracker for the environment.


That's what cloud-native processors are for.


To be fair to Clouldflare, they do a lot of very thoughtful work on sandboxing at all levels, including at the process level as well as above and below that. A lot of interesting details here:

https://blog.cloudflare.com/mitigating-spectre-and-other-sec...

The article focuses on Spectre, but it also mentions general stuff.

edit: It is true they are using V8 for a novel purpose, but a lot of their techniques mirror how Chrome sandboxes V8 (process sandboxing, an outer OS sandbox, etc.).


I have a lot of respect for Kenton, but the facts are: v8 security team recommends running untrusted code in separate processes [1], and as described in your link, Cloudflare doesn't do that. From what Kenton said earlier on this[2], it sounded like they had already committed to their architecture before that v8 recommendation was made. They thought through a lot of possible attacks and created some clever mitigations, and ultimately decided "its probably ok".

[1] https://v8.dev/docs/untrusted-code-mitigations#sandbox-untru...

[2] https://news.ycombinator.com/item?id=18280061

eta: Also, Chrome does use process isolation - every browser tab runs in a separate process. The whole point of Cloudflare isolates is to avoid the overhead of a fork() for each worker request.


Also Cloudflare has already broken the Internet once by doing unorthodox things.

Sometimes the guy missing a finger is exactly the one you want to take safety advice from. Sometimes it's just a matter of time before he's missing three fingers.


Chrome's use of strict process isolation is a fairly new thing. For most of its history, it was trivially easy to open any other site in the same process as your own site (via window.open(), iframe, etc.), and V8's sandboxing was the thing protecting those sites from each other. So V8 was, in fact, designed for this.

When Spectre hit, Chrome concluded that defending the web platform from Spectre in userspace was too hard, so they decided to go all-in on process isolation so that Spectre was the kernel's problem instead. This is a great defense-in-depth strategy if you can spare the resources. On most platforms where Chrome runs, the additional overhead was deemed worth it. (Last I heard there were still some lower-powered Android phones where the overhead was too high but that was a while ago, maybe they've moved past that.)

That doesn't mean V8 has decided their own security doesn't matter anymore. Securely sandboxing the renderer process as a whole isn't trivial, especially as it interacts with the GPU and whatnot. So it's still very much important to Chrome that V8's sandbox remains tight, and the renderer sandbox provides secondary defense-in-depth.

When it comes to Cloudflare Workers, the performance penalty for strict process isolation is much higher than it is for a browser, due to the finer-grained nature of our compute. (Imagine a browser that has 10,000 tabs open and all of them are regularly receiving events, rather than just one being in the foreground...) But we have some advantages: we were able to rethink the platform from the ground up with side channel defense in mind, and we are free to reset the state of any isolate's state any time we need to. That lets us implement a different set of defense-in-depth measures, including dynamic process isolation. More details in the blog post linked earlier [0]. We also did research and co-authored a paper with the TU Graz team (co-discoverers of Spectre) on this [1].

I may be biased, but to be perfectly honest, the security model I find most terrifying is the public clouds that run arbitrary native code in hardware VMs. The VM software may be a narrower attack surface than V8, but the hardware attack surface is gigantic. One misimplemented CPU instruction allowing, say, reading physical addresses without access control could blow up the whole industry overnight and take years to fix. Spectre was a near miss / grazing hit. M1 had a near miss with M1racles [2]. I think it's more likely than not that really disastrous bugs exist in every CPU, and so I feel much, much safer accepting customer code as JavaScript / Wasm rather than native code.

[0] https://blog.cloudflare.com/mitigating-spectre-and-other-sec...

[1] https://blog.cloudflare.com/spectre-research-with-tu-graz/

[2] https://m1racles.com/


As you scale up with R2, D1, Email Workers, etc., is it possible that the _future_ scale of touching very sensitive, "ought-to-be-secure / separate" data/code will prompt you to reconsider this decision?

Without process isolation, all it takes is one bad Chromium commit or an incorrectly allow-listed JS/V8 command for this model to fall through, regardless of how thorough the defense-in-depth/vetting of Workers may be.

"the public clouds that run arbitrary native code in hardware VMs." -> Isn't it double whammy then that a V8 isolate + HW attack surface in combination could provide an order of magnitude more hackability?

"Last I heard there were still some lower-powered Android phones where the overhead was too high" -> I believe a Zygote-process fork was made available to Chromium at some point (https://chromium.googlesource.com/chromium/src/+/HEAD/docs/l... ?).


> all it takes is one bad Chromium commit

All it takes is one bad Xen commit, or one bad design choice at Intel, for VM-based clouds to be broken. Every isolation mechanism has bugs, this is not unique to V8.

> for this model to fall through regardless of how thorough the defense-in-depth/vetted Workers may be.

No no, when I say "defense in depth" I mean we have fallback contingencies to contain the damage of a V8 bug.

> Isn't it double whammy then that a V8 isolate + HW attack surface in combination could provide an order of magnitude more hackability?

No, the opposite. Having to break two layers at the same time is much harder than breaking just one thing. VM-based clouds are largely counting on just one layer today (most of which is burned into silicon).


A few corrections:

1. Chrome has always had a multi-process architecture, with at least one dedicated process for each tab's renderer (which hosts the v8 runtime).

2. Chrome was already in the process of moving cross-origin iframes to separate processes when Spectre was announced. The announcement of Spectre style vulnerabilities just significantly increased the pace of implementation and deployment.


1/ is not correct. I worked on Chrome for many years. Processes were frequently reused across tabs for many reasons, including the window.open() example that kenton mentioned earlier, but also just resource exhaustion.


Yes, I didn't mean to imply that Chrome wasn't multi-process all along. But Chrome historically used it more for fault isolation (= crashing one tab doesn't crash them all) rather than security isolation. Historically window.open() would (usually) open a new tab that actually ran in the same process, and iframes ran in the same process, so getting any arbitrary site running in the same process as yours was easy.


I don't think what Chrome is doing is quite that simple? When you click on a link in a tab, it might take you to a different site. Also, there can be iframes from different websites in the same browser tab.

Also see "limitations" on this page:

https://www.chromium.org/Home/chromium-security/site-isolati...


That link tells you this:

> Cross-site documents are always put into a different process, whether the navigation is in the current tab, a new tab, or an iframe (i.e., one web page embedded inside another). Note that only a subset of sites are isolated on Android, to reduce overhead.


Browser requirements may differ from Cloudflare's, though. In particular, browsers run arbitrary code from anywhere - just clicking a link downloads and runs code off of anywhere on the planet, theoretically - while Cloudflare runs code from their customers.

They do have a free tier, so yes, there are risks here. But one mechanism they do according to that post, IIUC, is to keep paying customers separate from the free tier. So if you are a paying customer, any attacker against you is at least someone that Cloudflare has the credit card information of. Still dangerous, but very different than the browser risk model.


It’s worse than the browser threat model. In a browser, security-maximalist users can choose to only visit trusted sites, which reduces the attack surface somewhat, though far from completely (there’s still exploits hidden in ads and watering hole attacks to worry about). With CloudFlare, you’re keeping out… people without credit cards? What kind of even remotely serious attacker can’t obtain a stolen credit card, or else sign up for a credit card with false identification? At most the credit card restriction prevents attackers from creating large numbers of accounts, reducing the chance they’ll be co-located with you, but even that can probably be circumvented by sufficiently sophisticated attackers.


You're right that credit cards are far from guarantees. Still, I'd imagine Cloudflare do something like cluster users by risk level at some granularity. Brand-new paying accounts might be put together in the same process, while paying customers for over a year might be in another. You might be attacked even in the latter group, but the attacker can do that at a much lower frequency (the card isn't recently stolen; and they've been paying for this attack for a year - only to be kicked out now if found).

Also, the browser risk model is much worse because of the scale. If you manage to get malicious code onto a popular website then you get many, many opportunities to attack people. With Cloudflare you can only attack the other tenants of your process, and you're paying every time you do it (in a browser, it runs on the target's machine, free for you). And even then as the blogpost mentions they randomize pairings, limit the amount of time they spend together, etc.

Overall I think this is far less risky than the browser situation. And Cloudflare isn't the only one that thinks so - their competitors also put multiple tenants in the same process. Done carefully, it's a reasonable model.


V8 is still very hard to escape -- an escape is a remote hole in Chrome, isn't it? To be honest, I'm happy to base my security on "as safe as Chrome".


Chrome runs each origin in a different process. Chrome also runs all the privileged code in separate processes, so like file, network, GPU, and many OS calls are not actually happening from within the same process as v8 executing web JS code. Process isolation is also necessary to mitigate CPU and OS level timing attacks between origins (and the privileged process).

v8 is a pretty good security boundary, but to be as secure as Chrome you need the other layers too.


Hi Elliot! I think it's more complex than that.

Chrome (all web browsers) have a massive and ancient attack surface -- all of HTML, JS, CSS, every DOM and web platform API, every image codec, video, and on and on and on.

CF workers have a comparatively minuscule attack surface that takes into account all the most recent lessons (high-res timers, for example).

So yeah, if we call Chrome of 2008 a baseline, then Chrome's more recent process isolation increases security. But so too does CF's much reduced attack surface.


I wouldn't say those are equivalent. Any exploit in v8 gets full access to the process, which in a multi tenant situation means access to other tenants. And v8 itself is hardly bug free.

Reducing the JS API surface might reduce the probability of finding a way out, but it doesn't reduce the blast radius. 2008 was also 14 years ago, I'm not sure that's the right bar for security!

The OP was also speaking to whether using v8 gets you the same level of security as Chrome, not to architectural decisions CF has made beyond that.


> an escape is a remote hole in Chrome isn't it

Maybe. Chrome has a lot of extra security hardening you don't get if you are just running the V8 engine alone.

The going rate for a V8 escape is probably around $100k at the moment. Totally worth it financially to snarf credit cards from all the sites on Cloudflare.


If the patch gap is small, yes. But are you patching V8? Node generally isn't.


As a user of CF workers, the API is clearly designed to allow the runtime to redeploy e.g., for patches.

Example: there is no notification when your worker is unloading. It just dies. This is sometimes annoying as a developer, but it's the kind of decision a platform makes when it wants (needs) to be able to shut down code arbitrarily and restart it later on a different version of the platform.


Personally, I think I'd recommend SELinux and SecComp before I use something like gVisor. There's significant performance impact with application kernels.


Those are not mutually exclusive, but if you use a syscall blocklist only, you will have to refuse certain workloads.


I've been hoping a runtime would arise for this use case explicitly, and v8 is close. But the solution has to have multitenancy, limiting of memory and cpu, and security baked in from the beginning. The lua runtime seems promising, but again, not designed for that.


We are going the long way around to figure out that Tanenbaum was right and we need microkernels. The IPC costs cause sticker shock, but at the end of the day perhaps they are table stakes for a provably correct system.


I am still waiting for a seL4 based router, firewall, DNS server, file server, etc. Even better if we could get it into critical infrastructure (water treatment, electricity, etc). Something that is exposed to the insanity of the internet, but may not necessarily have high performance requirements. We have super computers in our pockets, we can take some performance hit if it eliminates large security concerns.


I was reading the Wikipedia page about this debate and I find this low-level stuff so fascinating. Wish I had done CS instead of Physics.

At a low level and at a place of ignorance, I kind of wish monolithic kernels hadn't won, just like I wish the morbidly obese browsers hadn't either... They sound way more, well, sound in their theoretical basis


There's a few clever tricks that seem to make a pretty big difference with microkernels, but I know just enough to be dangerous.

I think one of the biggest ones I recall, and I think it was part of the L4 secret sauce, was the idea of throwing a number of small services into a single address space, so that a context switch requires resetting the read and write permissions, but not purging the lookup table, making context switches a couple times faster. With 64-bit addressing that gets easier, and allows other tricks. For instance, recent concurrent garbage collectors in Java alias the same pages to 4(?) different addresses with different read/write permissions. Unfortunately this means your Java process can 'only' address 4 exabytes of memory instead of 16, but we will surely persevere, somehow. How that plays with Meltdown and Spectre might put the kibosh on this though, and most processors don't actually support full 64-bit addressing. Some only do 44 or 48 bits.

That's pretty old news though. The more interesting one to me is whether there's a flavor of io_uring that could serve as the basis for a new microkernel, allowing for 'chattier' interprocess relationships to be workable by amortizing the call overhead across multiple interactions.


Something like MirageOS looks pretty cool


I know V8 from back to front and I feel that it is quite safe, after all, there's not much you can do with a javascript interpreter ...

Regarding the CVEs mentioned, the overwhelming majority of them have a "denial of service" effect, not sure how something like gVisor helps you mitigate that.


If you know it back to front, then you are likely aware of the number of TurboFan vulnerabilities that have been found being exploited in the wild.


Sure, and most of them have been addressed properly.

Honestly (as another commenter said), V8 being behind one of Google's largest revenue streams makes it feel "safe enough" for me, they have all the resources and incentive to make it as safe as realistically possible.

Of course it's not perfect, but is there anything better at the moment?


With all their resources and incentive, Google have chosen to rely on process isolation in Chrome, not just V8.


Process isolation (security) belongs to the OS, so there's nothing stopping one from doing the same; it also shouldn't be too difficult to implement.


We did pretty much the same thing as Cloudflare does for workers in the Parse Cloud Code backend many years ago--when possible, we ran multiple V8 isolates in the same OS process. There are certain issues we ran into:

* Security-wise, isolates aren't really meant to isolate untrusted code. That's certainly the way Chrome treats them, and you would expect that they would know best. For instance, if you go to a single webpage in Chrome that has N different iframes from different origins, you will get N different Chrome renderer processes, each of which contain V8 isolates to run scripts from those origins.

* Isolation is not perfect for resources either. For instance, one big issue we had when running multiple isolates in one process is that one isolate could OOM and take down all the isolates in the process. This was because there were certain paths in V8 which basically handled hitting the heap limit by trying to GC synchronously N times, and if that failed to release enough space, V8 would just abort the whole process. Maybe V8 has been rearchitected so that this no longer occurs.

Basically, the isolation between V8 isolates is pretty weak compared to the isolation you get between similar entities in other VMs (e.g. Erlang processes in the BEAM VM). And it's also very weak when compared to the isolation you get from a true OS process.


> isolates aren't really meant to isolate untrusted code.

They totally are meant for that. That's arguably the main thing they are designed to do.

The fact that people add defense-in-depth layers on top of it doesn't mean that the first layer isn't expected to be effective.


> I was reading this article on Cloudflare workers [...] and seemed like isolates have significant advantage over serverless technology like lambda etc.

You are conflating serverless (which is a particular deployment and lifecycle model) with a specific technique for implementing it.

There are inevitably going to be different performance/capability tradeoffs between this and other ways of implementing serverless, such as containers, VMs, etc.

> Simultaneously, they don’t use a virtual machine or a container, which means you are actually running closer to the metal than any other form of cloud computing I’m aware of.

V8 is a VM. Not in the VMware/KVM sense of the term, but a VM nonetheless.


> We believe that is the future of Serverless and cloud computing in general, and I’ll try to convince you why.

Reads like an opinion piece. Technically weak on details and the comparisons leave a lot to be desired

> I believe lowering costs by 3x is a strong enough motivator that it alone will motivate companies to make the switch to Isolate-based providers.

depends on so much more than price... like do I want to introduce a new vendor for this isolate stuff?

> An Isolate-based system can’t run arbitrary compiled code. Process-level isolation allows your Lambda to spin up any binary it might need. In an Isolate universe you have to either write your code in Javascript (we use a lot of TypeScript), or a language which targets WebAssembly like Go or Rust.

If one writes Go or Rust, there are much better ways to run them than targeting WASM

Containers are still the defacto standard


> If one writes Go or Rust, there are much better ways to run them than targeting WASM

wasm has its place, especially for contained workloads that can be wrapped in strict capability boundaries compile-time (think, file-encoding jobs that shouldn't access anything else but said files: https://news.ycombinator.com/item?id=29112713).

> Containers are still the defacto standard.

As far as FaaS is concerned, wasmedge [0], atmo [1], tarmac [2], krustlet [3], blueboat [4] and numerous other projects are turning up the heat [5]!

[0] https://github.com/WasmEdge/WasmEdge

[1] https://github.com/suborbital/atmo

[2] https://github.com/madflojo/tarmac

[3] https://github.com/krustlet/krustlet

[4] https://github.com/losfair/blueboat

[5] https://news.ycombinator.com/item?id=30155295


WASM could be nice if you do not control the running environment (the browser), once it matures more. I don't see the benefit of adding the v8 runtime as a layer for anything that is not native to it. More abstraction and complexity give rise to concern.


There are interesting plans to speed up [the cold start latency of] V8 by compiling it to wasm[0]

AFAIU the main point is that for wasm you can condense startup to a mmap operation and that you can "freeze" a running wasm process (essentially by dumping the memory/stacks to a new wasm module). In [0] they use [1] to do this.

[0] https://bytecodealliance.org/articles/making-javascript-run-... [1] https://github.com/bytecodealliance/wizer


> If one writes Go or Rust, there are much better ways to run them than targeting WASM

Is there a similar priced way to run Go or Rust with similarly fast cold starts and low latency?

I think this is the real selling point of isolates; inefficient if you want to run a postgres server but perfect for low latency edge stuff.

There is the usual argument that on-premise/dedicated servers scale way better than people expect; nonetheless, the trend seems to be a move towards smart CDNs...


> Is there a similar priced way to run Go or Rust with similarly fast cold starts and low latency?

I keep hearing good things about Firecracker in that regard (https://firecracker-microvm.github.io/).


> Is there a similar priced way to run Go or Rust with similarly fast cold starts and low latency?

A binary?


I tried to find the "A binary" service but no cloud offering seems to have it. You always have to wrap it in a container, VM or external runtime to do that.


Only if you don't care about isolation... doing it securely requires a VM, and that means significant startup time and resource usage compared to a wasm module.


You could run it in a very simple container, unshare(1) style. This adds no measurable overhead to binary startup time. https://man7.org/linux/man-pages/man1/unshare.1.html
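A rough sketch of the idea (assumes a Linux host where unprivileged user namespaces are enabled; the exact flags are illustrative):

```shell
# Give a binary its own user namespace without any real privileges;
# --map-root-user maps the current user to uid 0 inside the namespace.
unshare --user --map-root-user id -u    # prints 0: we are "root" inside only

# Heavier isolation: add PID and mount namespaces as well
# unshare --user --map-root-user --pid --mount-proc --fork ./my-binary
```

Because this is just clone(2) with extra flags under the hood, the startup cost is essentially the same as plain fork/exec.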


Containers do not provide sufficient isolation to run untrusted binaries. That's why AWS built and uses Firecracker for Lambda.


VMs are also full of side channels. Depending on how much isolation is a concern, you need to own the host.

I don't trust VMs particularly more than containers in this respect: containers have a lot of attack surface, but VMs also have a lot of complicated code in the kernel, in addition to complicated emulated device drivers and a large silicon-based attack surface.


Google Cloud Functions and Cloud Run are good options, I prefer Run which runs arbitrary containers

'FROM scratch' for a binary only container


>If one writes Go or Rust, there are much better ways to run them than targeting WASM

>Containers are still the defacto standard

But that really depends on just how compute heavy the service in question is. For a lot of lightweight "frontend" (yes we're still talking server side, but you get the point) code for some API endpoint there's some lightweight glue logic between it and whatever backend services are involved. Targeting WASM in V8 might not have much of a performance hit at all compared to a native binary and if it allows your tiny service to only use 3MB of RAM instead of e.g. 100MB then that's still a pretty big win, even discounting the resource cost, because now you're avoiding context switches and running off of a shared runtime that's already going to have a bunch of hot paths fitting into cache.

The argument for V8 Isolates vs containers has a ton of overlap with the argument for containers vs. VMs. Yes there are security concerns, yes you're giving up some level of functionality but if you don't need a full container then the lighter abstraction might be the more efficient choice just as containers are often a better fit than a full VM. If the service in question is a good fit for Node then it might also be a good fit even using WASM from a Rust codebase to run on V8.


I'd be surprised if many Go -> WASM would fit in 3MB of ram

That's before you worry about the fact that Go has a different runtime and mem management strategy. I'd only use Go WASM for toys at this point


Fair point, Go in particular is a bad example on my part. I'm only suggesting that just because some existing codebase is in a different language doesn't mean that running on WASM is never going to be practical. Yeah not as good as native, but if it means you can target V8 as a VM instead of x86 then it still might win out.


Indeed, I'm keeping my eye on WASM but still expect it to require more time to bake

I may have also reached the point in my career where I don't want to use the latest and greatest runtime. Stability has its merits


There is also an issue of compatibility.

Underneath the hood other serverless technologies like lambda are running lightweight VMs running linux. Therefore they can easily accept any linux compatible container and they can run it for you in a serverless way.

Cloudflare Workers are running modified Node.js runtimes. You can only run code that is compatible with their modified Node.js runtime. For Cloudflare to be able to offer a product that runs arbitrary linux compatible containers they would have to change their tech stack to start using lightweight VMs.

If you want to run Node.js, then Cloudflare Workers probably works fine. But if you want to run something else (that doesn't have a good WASM compatibility story) then Cloudflare Workers won't work for you.


Not to be pedantic, but it's not a modified Node.js runtime; it is a wholly custom runtime based on V8 directly. They're working on some Node.js API compatibility, but it's not at all Node.js[0]

To quote directly:

Cloudflare Workers, however, run directly on V8. There are a few reasons for this. One reason is speed of execution for functions that have not been used recently. Cold starts are an issue in serverless computing, but running functions on V8 means that the functions can be 'spun up' and executed, typically, within 5 milliseconds or less. (Node.js has more overhead and usually takes a few milliseconds longer.) Another reason is that V8 sandboxes JavaScript functions automatically, which increases security.

[0]: https://www.cloudflare.com/learning/serverless/glossary/what...


So Cloudflare Workers are Heroku?


The JVM has a similar feature since in the early 2000s: https://www.flux.utah.edu/janos/jsr121-internal-review/java/...

I don't know how popular it was in Java's heydays, but it doesn't seem used today. Being tied to a handful of languages may have been an issue.


As far as I know, the JSR-121 proposal was never actually implemented (or if it was, it was never publicly released). The OpenJDK issue is marked as "Won't Fix".

https://bugs.openjdk.org/browse/JDK-4599433


Not really a surprise after the SecurityManager was not very successful in managing security. It turned out to be a huge undertaking and a constant burden.

https://openjdk.org/jeps/411 (deprecation in JDK 17)


Sounds similar to .Net's AppDomains which were deprecated in .Net Core


Is this similar to .NET AppDomain?


> Simultaneously, they don’t use a virtual machine or a container, which means you are actually running closer to the metal than any other form of cloud computing I’m aware of.

Because it's positioning a key benefit, I have a lot of issues with this sentence. First, there are many bare metal cloud options (ranging from machine types to full-blown providers). Second, a container doesn't put you any further away from the bare metal than a process.


Right now if you're running WASM on their workers you pay for a ton of stuff that JS doesn't have to and probably don't get a ton of leverage in terms of performance. This is really unfortunate, so you're stuck with JS for most types of workloads.

But it is likely one of the most accessible compute platforms for web development. Very easy to get started. Everyone and their mother know JS. Similar API to browsers (service workers). Great docs. Free tier. Tons of other stuff that webdev needs on their platform. They are adding object storage, sqlite, messaging, strong consistency for websockets on top of it. Their pricing is extremely competitive and dead simple.

I think there is a chance that more and more stuff gets built there. Isolates are a part of it and might be a competitive advantage, but from a developer's perspective they are not the selling point, but an implementation detail.


I deployed a toy Worker program at work the other day, and, while I'm fairly excited about workers, IMO they still have quite a ways to go, in terms of UX:

* Going from "I have some code" to "it's running" took many, many clicks. Luckily, they have CloudFlare Pages integration now, so you can just throw your code in a repo and the server will run it, PHP-style.

* Only JS is supported, more or less.

* The documentation is fairly good, but more examples would be great.

* Integration with other sites seems lacking. For example, I didn't find a way to redirect one of my site's endpoints to a worker.

I suspect that much of my pain was because I didn't use Wrangler, though, so the above may not apply if you use the canonical way.


I am really hoping that someone builds an isolates based faas runtime. I think CloudFlare talked about open sourcing their stuff.

I have 3 products where I’d allow client code to run once we can make that happen.


Is the appeal of isolates in this case the cold start time or the isolation? We're working on some open source infrastructure for running sandboxed (gVisor) containers on the fly from web services[1], and one of the use cases people have is serving Jupyter notebooks which seems like it might resemble your use case?

[1] https://github.com/drifting-in-space/spawner/


It’s the isolation and sandboxing. So in a sense deno is appealing because you can whitelist functionality.

Say things like custom workflow logic or custom data transformations. They might require an api call.


I’m currently building a FaaS runtime using v8 isolates, which I hope to open-source soon. That’s actually not that hard since isolates are, well, isolated from each other.

Performance-wise, it’s also very promising. For a simple hello world, the « cold start » (which is mostly compile time) is around 10ms, and on subsequent requests it runs in 1ms.


It doesn’t worry you that the v8 team specifically tells you not to do this?

eta link: https://v8.dev/docs/untrusted-code-mitigations#sandbox-untru...


Can you give a link to this? Cloudflare (Workers) and Deno (Deploy) both uses v8 isolates for their runtimes, with I believe some significant clients running production code (huge clients like Vercel and Supabase use these solutions)

Edit:

> If you execute untrusted JavaScript and WebAssembly code in a separate process from any sensitive data, the potential impact of SSCA is greatly reduced. Through process isolation, SSCA attacks are only able to observe data that is sandboxed inside the same process along with the executing code, and not data from other processes.

I do run isolates in separate processes to prevent security issues, even if that may not be enough. Still an early prototype for now.


I'm talking about this: https://v8.dev/docs/untrusted-code-mitigations#sandbox-untru...

As long as you run each customer in a separate OS-level process, you should be good. But then, that is not much different from Lambda or other FAAS implementations.


For now, each process runs many isolates, but a single server runs many processes. Cloudflare has implemented a similar mechanism [1]:

> Workers are distributed among cordons by assigning each worker a level of trust and separating low-trusted workers from those we trust more highly. As one example of this in operation: a customer who signs up for our free plan will not be scheduled in the same process as an enterprise customer. This provides some defense-in-depth in the case a zero-day security vulnerability is found in V8.

[1] https://blog.cloudflare.com/mitigating-spectre-and-other-sec...


Blueboat may be what you’re looking for

https://github.com/losfair/blueboat


The security analysis elsewhere in this thread is correct (v8 isolates are not a security boundary), but I think that may miss a different point.

A cloud provider would be unwise to use v8 isolates as a way to separate tenants from different customers. But there might be many cases where the same customer might benefit from having one of their shardable workloads leverage v8 isolates.

Of course not every single-tenant multi-shard workload is appropriate; it all depends on what isolation properties are needed, and how amenable the shards are to colocation.


It's getting to the point of absurdity how much v8 is inserting itself all sorts of places it has no right being. Especially as the rate of computers getting faster decreases we should be working to burn less CPU to get what we want done, rather than wasting almost all of it on running a complete web browser just to run our software. It's ridiculous.

It's sad to see cloudflare falling so low. I guess they're heading toward their ultimate destination that is the result of all companies that go public.


v8 is going to be slower and is more restrictive on what can run than containers. It will be much better when you have many relatively small, infrequently used components.

The security model in v8 is no better than that of containers as there are limits to how much isolation you can give to code running in the same process. If you look at how Chrome uses v8, it is only used in carefully sandboxed processes, so it is clearly being treated as untrusted. (Though I still think v8 has done a truly amazing job locking things down for a pure userspace application)

The start-up time mentioned in the article assumes that isolate and context creation is the most significant delay. For JavaScript in particular, the code will need to be compiled again, and any setup executed. In any but the most trivial application, that compilation and initial execution will significantly outweigh the isolate and context creation step.

Despite the issues with v8 isolates or other equivalent web workers, I would not be surprised if they become more common than containers. There's a lot of buzz about them and they leverage skills that web engineers already have. Additionally, many applications can be made more private if small pieces of execution can be distributed to a data custodian of some sort that can run small untrusted bits of code on the data and then apply aggregation or add noise to the result before sending it out.


They don't run every node API.

I was trying to install posthog serverside the other day with remix which was to be hosted on workers but received several errors about buffers not being available.

This all said, isolates have been really cool to work with. Being able to run lightweight code at the edge has opened a unique set of opportunities like dynamically routing things quickly even for static sites.


This is intentional IIRC. Node is a Javascript runtime, has competitors like Deno with very different APIs. Cloudflare has gone the route of being as close to the ES spec as possible instead of committing to any specific runtime APIs.

For the case at hand, It doesn’t support buffer APIs but supports ArrayBuffers, so you can build on top of it. You can use any off the shelf browserify/buffer shim package to work it around.
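For instance, common Buffer idioms map directly onto web-standard APIs that both Workers and modern Node expose (a sketch; the Buffer equivalents noted in comments are the usual replacements):

```javascript
// Node's Buffer is unavailable in Workers; web-standard APIs cover most uses.
const bytes = new TextEncoder().encode("hello");    // like Buffer.from(str)
const text  = new TextDecoder().decode(bytes);      // like buf.toString("utf8")
const b64   = btoa(String.fromCharCode(...bytes));  // like buf.toString("base64")
```

For anything beyond this (e.g. a library that type-checks for Buffer), an off-the-shelf browserify buffer shim does the rest.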


Surely if a process is running multiple Isolates simultaneously, they're multi threaded and still require context switching? (accepted, thread switches are less resource intensive than process switches). Interestingly when Chrome runs on Windows desktops it seems to allocate separate processes for each Isolate anyway, but I'm guessing this is not baked into V8?


Thread switching is much cheaper than process switching.

Threads are just stacks. Processes are whole address spaces. All kinds of caches need to be flushed when you switch processes (especially if you have more processes than you have PCID entries), but not when switching threads.


V8, at its core, is single-threaded. I think that's the reason for separate processes for each isolate.


A V8 isolate is single threaded and you get 1 by default, but other isolates can be created and used in parallel in multiple threads.


Is this the future => Yep, but probably a few iterations from this.

Downsides? Sure. It can't run many popular languages.

Security? I'm not a security guy, but Cloudflare seems to have pretty good security.

lastly, I'm a fan of what Cloudflare is building. They're darn close to getting me off of AWS.


> Security? I'm not a security guy, but Cloudflare seems to have pretty good security.

Didn't they have one of the most spectacular private-data-leaking vulnerabilities of all time not so long ago? And for the dumbest possible reason (connecting C code to the internet) too.


do you have a link or the name/time of the incident?



For me, Cloudflare's decision to go with V8 Isolates was really smart and that blog post is a brilliant explanation of how one looks at trade-offs in engineering. Truly amazing work.

That said, no I don't believe V8 Isolates are the future of computing - and I think I'll explain why by comparing it to shared PHP hosting.

PHP became so big because it solved something that was very important for a lot of people at the time - how can I deploy and run my code without eating up a lot of RAM and CPU. If you deployed Python, your app would be running the whole Python stack just for you. You'd need resources for a python interpreter, all the HTTP libraries, all your imports, and then your code. On the other hand, if you ran PHP, you'd be sharing the PHP interpreter, the PHP standard library (which was written in C and not in PHP), and the server would only need to run your code. If your code was basically `mysql_select($query); foreach ($results as $result)...`, then the server was executing basically nothing. All the hard work was in C and all the request/response stuff was in C and all of that code was being shared with everyone else on the box, so there was almost no RAM usage.

V8 Isolates are similar in a lot of ways. You get to share the V8 runtime and pay its cost once and then just run the code needed for the user. It makes sharing space really good.

So how isn't this the future? Not to knock PHP, but PHP isn't dominating the world. Again, this isn't about knocking PHP, but it's not like people are going around being like "you'd be stupid to use anything other than PHP because of those reasons." Likewise, V8 Isolates aren't going to dominate the world. Most of the time, you're working at a place you have services that will be getting consistent traffic and you can put lots of endpoints into a service and you just run that service. There are things like having well-JIT'd code that can be good with long-running processes, you can use local caches with long-running processes, you pay the startup costs once (even if they might be tiny in some cases). And I should note that there is work to get serverless stuff some of those advantages as well. I believe Amazon has done work with kinda freezing some serverless stuff so that a warm instance can handle a new request. But given the cost of a lot of serverless options, it seems expensive if you're getting consistent traffic.

Again, I think Cloudflare's move with Workers was brilliant. It offers 80% of what you need at a low cost and without needing the same type of high-effort, high-resource setup that wouldn't make as much sense for them. I wish Cloudflare a ton of success with workers - it's a great thing to try. I don't think it's the future of computing. Really, it's just too narrow to be the future of computing - nothing like that is going to be the future of computing.

If you're worried that you're missing the future of computing if you don't hop on it, don't be. There's no single future of computing and while V8 Isolates are great for Cloudflare's purpose, I don't think it provides advantages for a lot of workloads. Again, I think that is such a brilliant article and I think Cloudflare made a smart decision. I just don't think it's the future of computing.


Does dart isolate have any similarity to this ? https://dart.dev/guides/language/concurrency


Would say, super generally and speaking loosely, yes: simply because I knew "isolate" from Dart, not V8, and didn't miss a beat with the article. In a sense, I'm surprised to find the exact same concept in V8. I assume it was in V8 first; IIRC Dart/Flutter started from Chrome.


Is there anything like this for Python? I would like to execute python user scripts on my machine, but isolate them from each other.


What's the concern here? You could containerize your scripts, each in a separate Docker container.


Well, I'm planning to run hundreds of scripts for each user. I think something with cheap context switching, like V8 isolates, would remove a lot of overhead.

Do you have an idea for a different approach?
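The lightest-weight pattern I've sketched so far (a sketch only, not a hardened sandbox — actually isolating untrusted Python needs namespaces, seccomp, or something like gVisor on top) is a subprocess per script with OS resource limits:

```python
import resource
import subprocess
import sys

def run_limited(script_path, cpu_seconds=2, mem_bytes=512 * 1024 * 1024):
    """Run a Python script in its own process with CPU and memory caps (POSIX only)."""
    def set_limits():
        # Applied in the child between fork and exec.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(
        [sys.executable, "-I", script_path],  # -I: isolated mode, no user site dirs
        preexec_fn=set_limits,
        capture_output=True,
        text=True,
        timeout=cpu_seconds + 5,  # wall-clock backstop for sleeping scripts
    )
```

A crash or OOM in one script then can't take down its neighbours, though per-process overhead is of course higher than sharing one interpreter.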


1. Shorter cold starts 2. Secure environment 3. Heavily tested in production

These are good PROS of Isolate for Serverless Computing, IMHO.


In my humble opinion, the future of compute is going back from this enormous overengineering.


(I have to admit, my first thought was, why would a some vegetable juice mess with the future of computing? And, are we talking the spicy V8, low salt, or the regular?)[0]

[0] https://en.wikipedia.org/wiki/V8_(beverage)


I love v8 isolates so far-- I'm building chat tooling with it

When added to the "edge", it means they're (insanely) fast, they obliterate the cold-start problem (which is a killer in chat, where you might not get a retry), and as long as what you write can execute within 10-50ms (with ~30s for follow-on queries) it sometimes feels like cheating

The same way Cloudflare "pushes" configuration to their network, they use a similar mechanism to push code to their edge nodes.

They have killer dev tooling too-- https://github.com/cloudflare/wrangler2

You *DON'T* need to think about regions ever-- just deploy to a lot of small regions instantly & it usually "just works" and is fast everywhere.

For extra credit, you also get access to coarse-grained location information from each "node" in their network that your users connect to (globally you can get coarse-grained local timezone, country, city, zipcode, etc): https://blog.cloudflare.com/location-based-personalization-u...

ex. for chat, you could do something like this to prompt for location info: https://i.imgur.com/0qTt1Qd.gif
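Inside a Worker, that coarse-grained geo data is exposed on the incoming request. A minimal sketch (field names like `country`, `city`, and `timezone` follow Cloudflare's `request.cf` documentation; the fallback behavior is my own, since `request.cf` is undefined outside the Workers runtime):

```javascript
// Sketch: reading Cloudflare's per-request geo fields from request.cf.
// Outside Workers (e.g. local Node), request.cf doesn't exist, so we
// fall back to "unknown" rather than crash.
const geoHandler = {
  async fetch(request) {
    const cf = request.cf ?? {};
    return new Response(
      JSON.stringify({
        country: cf.country ?? "unknown",
        city: cf.city ?? "unknown",
        timezone: cf.timezone ?? "unknown",
      }),
      { headers: { "content-type": "application/json" } },
    );
  },
};
```

Handy for chat: you can pre-fill a location prompt from these fields without ever asking the browser for geolocation permission.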

Kenton Varda (https://twitter.com/KentonVarda), who was in charge of Protobuf and other projects, gave an overview tech talk; @ 10:23 he speaks to isolates: https://youtu.be/HK04UxENH10?t=625

## Downsides encountered so far

- Not a 1-1 replacement; think of your code like a highly performant service worker (usual suspects: https://developer.mozilla.org/en-US/docs/Web/API/Service_Wor...)

- Many libraries (Axios, for instance) won't work since they call out to Node.js (this might be a good thing; there are so many web APIs available that I was able to write a zero-dependency lib pretty easily). They're adding bespoke support for packages: https://blog.cloudflare.com/node-js-support-cloudflare-worke...

- There's only a tiny bit of customization required for Workers; however, there's a bit of platform risk
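To give a flavor of the zero-dependency point above: here's a sketch of a Worker-style handler written only against web-standard APIs (Request/Response/URL), with no Node built-ins or HTTP client libraries. The route and payload are made up for illustration:

```javascript
// Sketch: instead of Axios, use the web-standard Request/Response/URL
// objects the isolate already provides. This shape matches the Workers
// module syntax (export default { fetch }); the /hello route is hypothetical.
const handler = {
  async fetch(request) {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      return new Response(JSON.stringify({ greeting: "hi" }), {
        headers: { "content-type": "application/json" },
      });
    }
    return new Response("not found", { status: 404 });
  },
};

// In a Worker you'd write: export default handler;
```

Because it only touches web standards, the same handler also runs in Node 18+ (where Request/Response are global), which makes local testing easy.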

If you haven't tried before, definitely worthy of further examination

Re: security, it seems like a pretty good model.


Are <name_your_tech_here> the future of computing?


It seems to me that WASM is clearly better suited technically as the core runtime of the future for serverless platforms... but the question is: are isolates the VHS and WASM the Betamax in this story?


They're not comparable in that way. WASM runs inside a V8 runtime environment (or in other places); I'm pretty sure you can run WASM inside an isolate. You can basically think of WASM and JS as two different interpreters you can run inside V8's little mini operating system.
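For instance, V8 exposes WASM to JavaScript through the standard WebAssembly API, so a module can be loaded and called from JS in the same isolate. A minimal sketch (the bytes below hand-encode a trivial module exporting an `add` function; normally you'd compile these from Rust, C, etc.):

```javascript
// Sketch: running a WASM module inside V8 via the standard WebAssembly API.
// The byte array encodes a minimal module exporting add(a, b) = a + b.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: local.get 0, local.get 1, i32.add
]);

const mod = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(mod);
console.log(instance.exports.add(2, 3)); // 5
```

So within a single isolate, JS and WASM interoperate directly, which is why the two aren't really competing at the same layer.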


Actually that's incorrect; there are a number of WASM runtimes that don't use V8. Here are links to a couple.

https://wasmtime.dev/

https://www.fastly.com/products/edge-compute



