
The big difference is how the microvm is utilized. Lambda reserves the entire VM to handle a request end to end. Fluid can use a VM for multiple concurrent requests. Since most workloads are often idle waiting for IO, this ends up being much more efficient.
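The efficiency difference can be sketched in a few lines. This is a hypothetical illustration, not Vercel's implementation: `fakeIoCall` stands in for an LLM call, database query, or upstream API round trip, and the timing function is invented for the example.

```typescript
// Simulated IO wait: stands in for an LLM, database, or API round trip.
const fakeIoCall = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));

async function handleRequest(ioMs: number): Promise<void> {
  // Almost all wall time is spent waiting, not computing.
  await fakeIoCall(ioMs);
}

// Run n requests concurrently on "one instance" and report wall time.
// Per-request reservation (the Lambda model) would take roughly n * ioMs
// of instance time; concurrent handling finishes in roughly ioMs total.
async function timeConcurrent(n: number, ioMs: number): Promise<number> {
  const start = Date.now();
  await Promise.all(Array.from({ length: n }, () => handleRequest(ioMs)));
  return Date.now() - start;
}

timeConcurrent(10, 100).then((ms) =>
  console.log(`10 concurrent 100ms IO waits finished in ~${ms}ms on one instance`),
);
```

Ten IO-bound requests that would occupy ten dedicated instances for 100ms each can share one instance for roughly 100ms total.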

Disclaimer: CTO of Vercel here


The concurrent request handling seems great for our AI eval workloads, where we're waiting on LLM API calls and DB operations. But I'm curious how Vercel handles potential noisy-neighbor issues when one request consumes excessive CPU or memory?

Disclosure: CEO of Scorecard, an AI eval platform and current Vercel customer. Intrigued, since most of our serverless time is spent waiting for model responses, but cautious about 'magic' solutions.


We built Fluid with noisy neighbors (i.e., other requests on the same instance) in mind. Because we are a data-driven team, we:

1. Track metrics and have our own dashboards, so we proactively understand and act whenever something like that happens.

2. Also feed these metrics into our routing so it knows when to scale up. We have tested a lot of variations of all the metrics we gather, and things are looking good.
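A hypothetical sketch of the second point, metrics-driven routing. None of the names or thresholds below come from Vercel; they are invented to illustrate "use the metrics you already track to decide when a request fits on an existing instance versus when to scale up."

```typescript
// Invented metric shape for illustration.
interface InstanceMetrics {
  cpuUtilization: number;   // 0..1, recent average
  inflightRequests: number; // requests currently on the instance
}

// An instance can take another request only if it has both CPU and
// concurrency headroom: a CPU-hot instance is a noisy-neighbor risk
// even with few in-flight requests, so both limits must hold.
function canAcceptRequest(
  m: InstanceMetrics,
  maxCpu = 0.8,
  maxInflight = 16,
): boolean {
  return m.cpuUtilization < maxCpu && m.inflightRequests < maxInflight;
}

// Pick the first instance with headroom; -1 means provision a new one.
function route(instances: InstanceMetrics[]): number {
  return instances.findIndex((m) => canAcceptRequest(m));
}
```

For example, an instance at 90% CPU would be skipped even if mostly idle on request count, and an empty result triggers scale-up instead of overloading an existing VM.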

Anyway, the more workload types we host with this system, the more we learn and the better it will perform. We've been running this for a while now, and it shows great results.

There's no magic, just data coming from a complex system, fed into a fairly complex system!

Hope that answers the question, and thanks for trusting us.


So if I understood point 1 correctly, I could use this solution to potentially save money, but it could turn into a nightmare very quickly if you guys aren't watching?


Yes, quite helpful. Thanks for explaining; will try it out!


I think the majority of Vercel customers are doing website hosting, and most web requests are IO-bound, so it makes sense to handle multiple requests per microVM.

Can't say the same if the customer is doing CPU-bound workloads.


But then Fluid would break that resource-request requirement, right?


That's one part, yes!

Part two is that you can also use an actual server if your workload happens to be predictable (or is partly predictable). That gives you better cost efficiency for that part of the workload.

Disclaimer: CTO of Vercel here


Love it! When I first learned about serverless as a vibe years ago it didn’t click, but it’s so clear to me that abstracting compute fully is a huge win as long as you can safely assume things about the way your code will run.

It’s interesting to see so many companies coming at it from different angles. Fly, Vercel, Cloudflare, Northflank, Temporal? etc.

Pretty much all the code I need for my work can run in GitHub Actions, so I’m not so much the target, but still enjoy watching the development.

If you can answer: what's the overall initiative here for Vercel? Own all the compute, as opposed to just the front-end-ish things?


The initiative is really focused on our current workloads like APIs, SSR, etc.

It definitely makes Vercel suitable for a broader set of workloads, though!


Author here. Let me know if you have questions!


Co-author and CTO of Vercel here. Happy to answer questions!


<CTO of Vercel here>

We're embracing copy-and-paste with the product rather than increasing abstraction. The tool provides the HTML or React code for the generated UI, and the way you actually use it is by copy-pasting it into your app. From that moment on, your workflow, and especially your debugging workflow, is exactly as before. The tool just writes the first 100 lines of code to get you started with a great UI.


Hi @cramforce. Vercel's greed destroyed React. You guys should be ashamed of it (but I'm sure you aren't).


New generations are behind a waitlist, but you can see what the service does on the explore page (and bottom of homepage linked above) https://v0.dev/explore


<CTO of Vercel here>

- You can turn this on without changing the code of your site (just need to activate emitting the metadata in the CMS)

- Supports any framework

- We're working on an open content-source mapping standard that makes the underlying tech available to any CMS, e-commerce system, or other content source.


Have you considered that content might come from Git? If so, how would content branches fit into this?

(Disclosure: I'm building a library that turns a Git repository into a branch-enabled GraphQL content management API. See https://github.com/contentlab-sh/contentlab )


Are there any plans to make this available to non-enterprise customers?


The editing functionality is delegated to the CMS. Consequently, content changes flow through the exact same pipeline as changes originating directly from the CMS would. In fact, we allow the CMS to completely control the actual editing experience.

The main point of this feature is that if an employee does find a typo on some article, they can just fix it rather than going to the CMS, finding the actual place where that text is stored, making the change, etc.


Might be interesting for marketing folks, but take it from someone who's worked with publishers: they hate inline editors.


Yes! Any postgres and redis client will just work. And the blob store has a REST API.


Based on this code example:

    import { sql } from '@vercel/postgres';

    const { rows } = await sql`
        INSERT INTO products (name)
        VALUES (${formData.get('name')})
    `;
Presumably authentication is handled transparently? I really like that - reminds me of Deno's new KV cloud stuff too.

Is that done with environment variables? I'd want a way to tap into that from Python code as well.


Yeah, if you go into the dashboard it gives you a bunch of options for connecting to the DB including the names of the automatically generated environment variables. And that includes POSTGRES_URL which most tools default to.
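As a hypothetical sketch of that convention: anything that understands a standard `postgres://` connection string, in any language, can pick up the generated `POSTGRES_URL` variable. The URL below is a made-up placeholder, not a real credential, and the parsing step just shows that it's an ordinary connection string.

```typescript
// Read the auto-generated connection string; fall back to a
// placeholder so the sketch runs anywhere (not a real credential).
const url =
  process.env.POSTGRES_URL ??
  "postgres://user:secret@db.example.com:5432/mydb";

// It's a plain URL, so any client (pg, Prisma, psycopg in Python,
// etc.) or the WHATWG URL parser can consume it directly.
const parsed = new URL(url);
console.log(parsed.hostname);          // database host
console.log(parsed.pathname.slice(1)); // database name
```

From Python, the same variable works unchanged, e.g. `psycopg.connect(os.environ["POSTGRES_URL"])`.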


There are a bunch of examples here https://vercel.com/templates/ai

