I like Vercel, but if you have more than 10 devs on your team you have to upgrade to Enterprise. When my company asked for a quote for 15 devs, the price jumped from what Pro would have been, $300 a month, to $3,000 a month!
We just wanted to use Vercel for hosting and previews, but after the quote we ended up going with AWS Amplify.
I don't get it. We run a production headless workload on Vercel and it is slower, has fewer features, yet is more complex and expensive than traditional LAMP server stacks.
... a traditional LAMP stack is not in any way comparable to what Vercel (and other similar services) offer.
We don't use Vercel either, since it is slower and offers less functionality than competitors, but comparing the general concept to a LAMP stack is definitely not valid.
However, I have to admit that we don't necessarily run the "target" applications that Vercel was really designed for. We have a full suite of Lambda backend services that communicate with each other in VPCs and over SQS. We have scheduled executions using CloudWatch and listen to events happening in several other AWS services.
Now, we also have two "simpler" React applications that we actually test-drove on Vercel before, but the cold start time of functions was honestly quite horrible in comparison, and the problem that really drove us away very quickly was the lack of integrated database and authentication solutions (e.g. AWS Amplify/DynamoDB/Cognito and Firebase/Firestore/Auth). I know about Railway and Auth0, but we have no interest in using several different providers, with uncertain confidence in them, just for basic application logic, especially because they don't integrate back into Vercel at all (e.g. functions being called when a DB record changes) and seem to be more expensive than even AWS itself.
If Vercel were more like a "Heroku for serverless", at the very least offered some of these basic features directly, and then had some good ways of connecting them together, we would potentially reconsider.
Here's an example with SvelteKit[1]. You can put a middleware.ts (or .js) file in the root folder of any framework and it will work. We also have examples of using Edge API Routes, which work with any framework[2].
okbel, thanks for the reply. I am interested in translating a Cloudflare Worker (and I know that's the tech underneath Vercel Edge Functions, as leerob indicated last year) that, on my site on Cloudflare Pages, manipulates headers for asset caching and a Content Security Policy. This is framework-agnostic; I've used it with Hugo, Eleventy, and Astro. Thus, any headers-related examples which *don't* start with `import { NextResponse } from 'next/server'` would be very helpful. :-D Again, thanks, and congratulations on today's launch to GA.
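For concreteness, the kind of framework-agnostic header logic I mean can be sketched against the standard fetch API alone (the header values and the function name here are just illustrative, not taken from any provider's docs):

```typescript
// Sketch: add caching and CSP headers to a response using only the
// standard Request/Response/Headers API — no next/server import needed.
export function withSecurityHeaders(response: Response, url: URL): Response {
  const headers = new Headers(response.headers);
  // Long-lived caching for fingerprinted static assets only.
  if (/\.(css|js|woff2?)$/.test(url.pathname)) {
    headers.set("Cache-Control", "public, max-age=31536000, immutable");
  }
  // A deliberately minimal example policy.
  headers.set("Content-Security-Policy", "default-src 'self'");
  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers,
  });
}
```

Because it only touches web-standard objects, the same function body should port between Cloudflare Workers, Deno Deploy, and edge middleware with little more than wiring changes.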
guibibeau, that would help, too (although I presume it still would come under the subject of HTTP headers). In short: *any* examples that *don't* require any part of Next.js (or any other specific framework) would be what a number of us have wanted since last fall's announcement. Thanks!
Vercel is all proprietary/closed-source, right? Just Next.js is open source? Truly asking, not making a statement or trying to attack, just getting clarity.
I ask because I work on VA.gov, which requires FedRAMP-authorized SaaS and PaaS OR on-premise installs, and I don't think Vercel offers either, but we could use some parts of Vercel on-prem if there were a paid on-prem support option.
VA.gov right now is a static build system (Metalsmith + Liquid templates) stored in an S3 bucket that we are working on moving to Next.js.
Here are the open source repos that make up VA.gov:
Yes, the open source Next.js doesn't include support for deploying middleware to the edge. If you run `next start`, it runs middleware in the main server process. I work at Netlify, and to support running Next middleware on edge functions we had to write our own open source integration for it, just as we did for deploying Next to Lambda functions.
As someone who has never used serverless functions before...
Netlify functions are based on AWS Lambda functions, which spin up a container with, say, a Node.js runtime environment every time a request (or another event) comes in, then execute the user-provided Node.js function within that runtime.
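As I understand it, the user-provided function has roughly this shape (a simplified sketch, not any provider's exact types):

```typescript
// Sketch of a Lambda-style handler: the platform invokes this once per
// event, passing request details in and expecting a response object back.
// The event type here is trimmed down for illustration.
interface HttpEvent {
  path: string;
  httpMethod: string;
}

export const handler = async (event: HttpEvent) => {
  return {
    statusCode: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ echoedPath: event.path }),
  };
};
```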
I can imagine that being handy for something like cron jobs or API requests which happen occasionally.
But using Netlify functions for things like Next.js SSR (or API routes) means that for every single website request, a new container is set up, the function is run once, and the container is discarded. Am I right about this? Isn't that a huge overhead compared to a long-running server process?
(Netlify engineer here) To echo what the others have said, Lambdas are a bit smarter than that. They actually keep the Node process running until it has been idle for around 5-15 minutes. We can make use of that ourselves by reusing the Next server too. We even cache data locally in the Lambda tmpdir (though we do that more for Gatsby). The cold start issue, though, is one of the reasons that using isolates rather than Node-based functions is so great. Both Cloudflare Workers (used by Vercel) and Deno Deploy (used by Netlify) have extremely fast starts, even when cold.
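A minimal sketch of that reuse (illustrative names, not our real Next integration): anything held at module scope survives across warm invocations of the same container, so expensive setup only runs on a cold start.

```typescript
// Module-scope state persists between warm invocations of one container.
let server: { requestCount: number } | null = null;

function getServer() {
  if (server === null) {
    // Cold start: pay the setup cost once per container.
    server = { requestCount: 0 };
  }
  return server;
}

export const handler = async () => {
  const s = getServer(); // warm invocations reuse the same object
  s.requestCount += 1;
  return {
    statusCode: 200,
    body: `request ${s.requestCount} served by this container`,
  };
};
```

Each concurrent container has its own copy of that state, so it's only a cache, never a source of truth.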
It's generally just a better fit for this model, which is why I'm so glad to see the approach becoming more popular and am genuinely pleased that Vercel's offering has gone GA. The more this model of edge computing spreads, the better. More frameworks will add support, and it will be a virtuous cycle. The WinterCG work on creating a common standard for these runtimes is also a great project.
Read up on isolates. Really interesting stuff! It indeed seems to be a better and more natural fit for something like SSR route handlers, as far as I understand it.
> More frameworks will add support, and it will be a virtuous cycle.
I'm in the early phase of developing an SSG in (TypeScript) Node currently. I think I'll study isolates and Deno a bit more before going further. :)
V8 Isolates aren't exactly new (been around for about a decade); I'm sure the good folks at Netlify are up to speed on them. :)
Containers were clearly the safer bet when FaaS options hit the scene 6-8 years ago. Now with WASM opening up possibilities, Cloudflare made the smart move to base their edge compute off isolates, given the significantly reduced overhead.
Will be fun to see how the cloud competition responds!
I think they meant "I read up on isolates", rather than "You should read up on isolates"! I think you're exactly right though. Isolates are the way forward, and are gradually going to take more and more use cases from node-based functions.
Vercel uses AWS Lambda under the hood as well, so it's probably comparable. AWS Lambda is a bit smarter than what you said: it keeps containers around for some time, able to respond to more requests. There's also concurrency, so if there are a lot of requests it'll keep a lot of containers running to deal with them, and then spin them down as the concurrent requests stop. The exact details are AWS secret sauce AFAIK, but Vercel claims in its docs that under 1% of requests get a cold start, which is when a container needs to be set up to handle that request. https://vercel.com/support/articles/how-can-i-improve-server...
Congrats on GA! I'd add to the pros that these edge runtimes are a lot more standards-compliant than Node, too.
Users do get confused about the CPU quota thing. They compare the 10ms or so that you get with edge functions vs 10 seconds on Lambda and think it's loads worse, without realising they're comparing apples and oranges. Both Cloudflare and Deno Deploy limit CPU time, whereas Lambdas are limited on wall-clock time. I've had no trouble keeping long-running Deno Deploy functions going for far longer than a Lambda could run, e.g. when using Server-Sent Events or passing through a large file download. As long as they're blocking on IO rather than CPU, you're fine. Unless you're doing something like trying to process images, you're much more likely to hit a 10s Lambda timeout than a 10ms Cloudflare or Deno limit.
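The Server-Sent Events case can be sketched like this (a simplified illustration, not production code): the response stays open for the whole interval, but nearly all of that is time spent awaiting timers/IO, which CPU-time-metered runtimes don't count against you.

```typescript
// Sketch: a long-lived SSE response. Elapsed wall-clock time grows with
// the interval, but CPU time stays tiny because the loop is awaiting a
// timer, not computing.
export function sseResponse(events: string[], intervalMs: number): Response {
  const stream = new ReadableStream({
    async start(controller) {
      const encoder = new TextEncoder();
      for (const e of events) {
        controller.enqueue(encoder.encode(`data: ${e}\n\n`));
        // Waiting here consumes wall-clock time but essentially no CPU.
        await new Promise((resolve) => setTimeout(resolve, intervalMs));
      }
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { "content-type": "text/event-stream" },
  });
}
```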
What you're describing is the cold start[1] request lifecycle for Lambda functions. But if another request comes soon thereafter, there's a faster warm start path, which is significantly quicker (typically under 100ms).
There's also a way to "provision"[2] resources to ensure that the function always stays warm, but that adds a fixed cost; otherwise the cost is near zero when there are no requests.
> Vercel is all proprietary/closed-source, right? Just Next.js is open source? Truly asking, not making a statement or trying to attack, just getting clarity.
Based on vercel.com/oss, it appears that most bits of the Vercel PaaS are closed source, and I don't see an on-premises hosting option. Vercel also doesn't appear in FedRAMP Marketplace search.
Vercel is fairly clear about where it runs your code: depending on the type of page/deployment, it's going to be either S3, Lambda, or a Cloudflare edge worker. There is a ton of logic behind how deployments work, but the general benefit of using a service like Vercel is basically exactly as described above: you push your code, it just works, and you don't worry about it.
This is an attractive prospect for a lot of folks who want to spend their time making their apps better and not building/monitoring infrastructure. However, there are some folks who really like to build and maintain their own infra and in that case a service like Vercel might not be the best fit.
It would be really appreciated if they enabled their free plan to be used for "commercial" sites. Their definition of a commercial-purpose site is so expansive that it precludes just about every site I need to stand up. I know they deserve to make money, but Cloudflare, Netlify, Heroku, etc. all offer this, so it's hard to justify not just using one of them.