Prisma Postgres – Runs on bare metal and unikernels (prisma.io)
284 points by gniting 32 days ago | 150 comments



$49/month for 60k queries… The pricing of these serverless offerings is always completely out of this world.

My $5 VPS can handle more queries in an hour. Like, I realize there’s more included, but…

Is it truly impossible to serve this stuff somewhere closer to cost? If this is close to cost, is this truly as efficient as it gets?


While in EA (early access), there's no additional cost for Prisma Postgres; you're effectively getting a free Postgres db.

The pricing you are citing is for Accelerate, which does advanced connection pooling and query caching across 300 POPs globally.

All that being said, we'll def address the "can you pls make Prisma Postgres pricing simpler to grok?" question before we GA this thing. Thanks for the feedback!


I think from my perspective there's some cognitive dissonance when I imagine myself getting the free plan with a cool 60k queries included, and then switching to the paid plan, and finding that I still have a cool 60k queries included. Wat?

I rationally realize there is a price/usage point where this makes sense, but emotionally it doesn’t feel good.

Start the plan at $59 and include 2M queries.


Thanks for the feedback!


Does a simple selection query count the same as a 50 line query joining many tables and doing aggregates?


Yes


What makes its connection pool "advanced"?


- autoscaling
- connection pool size limits
- query limits
- multi-region support out of the box

more here: https://www.prisma.io/docs/accelerate/connection-pooling

also recommend trying out the Accelerate speed test to get a "feel for it": https://accelerate-speed-test.prisma.io/


I'm wondering how big the Prisma overhead is here. I know that historically, Prisma has been very slow; the sidecar process, the generic protocol, and the database agnosticism come with a steep cost, and the numbers shown in the benchmark both seem rather low to me. 4x the performance of "very slow" isn't super impressive... but of course, I can't verify this right now since I don't have access to a machine where I can duplicate this and run some raw Postgres queries to compare.


The global caching gets even more interesting and beneficial when the end users are all over the globe, and you're caching queries that simply take longer to execute on the db. You'll save time both on latency and compute that way. For example, everything running on the speedtest is through a single internal Accelerate project so we have some data: the overall average is 58x faster, with 779.53 ms served from origin and 13.28 ms served from cache.
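For scale, the "58x" figure is just the ratio of those two averages:

```python
# Quick check on the averages quoted above.
origin_ms = 779.53  # average response served from origin
cache_ms = 13.28    # average response served from cache

speedup = origin_ms / cache_ms
print(round(speedup, 1))  # 58.7, i.e. the "58x faster" overall figure
```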

Nonetheless, we absolutely have room for improvement on the ORM in terms of performance, and are working on those as well!


When I looked at your pricing, these were my thoughts:

1) 60k queries? I burn through that in an hour. All it takes is the Google bot and some shitty AI scraper to come along and crawl our site - which happens every single day.

2) $18 per million? I don't know how many queries I need per day at the moment, but given 1), I will surely burn through a million dozens of times per month...

...at which point this thing will be just as expensive as an RDS instance on AWS, potentially even more so if we hit traffic peaks (every single user causes hundreds of queries, if not thousands).

3) I don't even understand how to interpret the egress cost. No idea how to predict what the pricing will be. Maybe a calculator where we can slot in the relevant estimated values would be nice?


Thanks for walking through your thinking, feedback taken!

What's your take on the pricing calculator? We've been working on an improved version, and would love to hear your thoughts on this. In your case, what inputs would you find helpful to put in to arrive at a calculation, considering that you're unsure about projecting both queries and egress? How would you go about putting in estimated values for those?


Is there any page which explains how Accelerate works? I'm interested in knowing the internal technical details.


It sounds like a clone of Cloudflare hyperdrive and others:

https://developers.cloudflare.com/hyperdrive/get-started/


Cloudflare announced Hyperdrive on September 28, 2023

Prisma announced Accelerate on January 16, 2023

But you are right, they are close in functionality though Hyperdrive requires a CF Worker to operate.


If you mean beyond the product page and our docs, then no. Accelerate is not an OSS product from Prisma.

All that being said, if there are specific questions that you have which relate to a use case, ask away!


I feel like that is always the crux of these solutions.

For example, I used Aurora Serverless v2 in a deployment, and eventually it just made sense to use a reserved instance because the fee structure doesn't make sense.

If I actually scale my app on these infrastructures, I pay way more. I feel it's only great for products that _aren't_ successful.


I always thought serverless meant you could scale out AND lower the cost. It always seems to turn out that serverless is more expensive the more you use it. I guess at certain volume, a serverless instance is meaningless since it’s always on anyway.


> I guess at certain volume, a serverless instance is meaningless since it’s always on anyway.

Bingo. The pricing alignment makes sense:

You share the risk of idle but provisioned capacity with the provider: no fixed capacity, no fixed pricing.

The capex for the provider is fixed, though.

That's why I think more competition in the serverless Postgres space is fantastic. Sure, it's not pure price competition; providers try to bundle, and each focuses on slightly different customer groups.

But underneath it, technology is being built which will make offering serverless ever more cost effective.

We might see a day where serverless (i.e. unbundled storage and compute) with dedicated compute is cheaper than standalone GCP/AWS/Azure Postgres.


It's $8 per million queries; it really is OK. The $49 baseline is the price of the general Prisma services. For a cloud-based production workload this is on the low end, and if you work on a product, salaries are always the number one cost.


Indeed, as others have mentioned, you get 60k queries for free! Don't even need to add a card. Then, you rather pay for the usage (primarily by number of queries) you have. The $49 Pro plan you mentioned gives you additional features, such as more projects, higher query limits, and a lower $$ price per million queries. On the Starter plan though, you can get going for absolutely free, incl. those 60k queries, and only pay for the queries above that.

We are also working on making this simpler to understand. We want to make sure our pricing is as easy to grok and as affordable as possible. Keep an eye out for improvements as we get to GA!


That would indeed be crazy; luckily, you were too fast reading the pricing:

https://www.prisma.io/pricing

60k queries are included.

But I totally agree with your overall statement. The premium for hosted DBs is quite high despite the competition.

Usually, if you want hardware to handle real-world production data volumes (not 1 vCPU and 512MB, but more like 4 vCPUs and 8GB), you are very soon at around $200 to $300. A VPS of that size is around $15?

The hosted solutions are just so damn easy to get started.


> Is it truly impossible to serve this stuff somewhere closer to cost?

Often these types of SaaS are hyper cloud backed so their own costs tend to be high

Don’t know whether that’s the case here. Agreed though that pricing also raised my eyebrows


They need to cover the dev costs and any bloating their organization might have (no idea about Prisma but lots of these startups are bloated). Eventually, the tech will get democratized and the costs will come down.


They've pivoted a number of times already. They started with Graph Cool which was a sort of serverless graphql db back around 2016 iirc.

Honestly I'm surprised they lasted this long.


yeah, the whole "but you need to do maintenance" aspect of using a real server is overblown

OSes are pretty stable these days, and you can containerize on your server to keep environments separate and easy to duplicate

I guess it just comes with experience, but at the same time, the devops skillsets necessary for dealing with serverless stuff are also totally out of this world. At most places I've worked, marketing hasn't even launched a campaign, there's no product validation about how much traffic you'll get, and you're optimizing for all this scale that's never going to happen.


Agreed that existing serverless stacks like lambda are a nightmare. But the real problem is that they don't solve the state management problem. (You need step functions to compose lambdas AND you need a bunch of custom logic for recovery in case lambdas crashes/pause/timeout).

I do hope tomorrow's engineers won't have to learn devops to use the cloud. My team works on what we think is the better way to do serverless, check it out! https://dbos.dev/


> If this is close to cost, is this truly as efficient as it gets?

I saw them hiring Rust devs recently, which makes me feel like they do things efficiently (hopefully). That being said, serverless is the greed-driven model: you start by thinking, "meh, we don't need that many queries/executions/whatever anyway, we'll save plenty of money we'd otherwise waste renting a reserved instance sitting idle most of the time", then something bad happens, you overrun the bill, and you go into "sh+t, need to always rent that higher tier, else we risk going bankrupt" mode; and since your stuff is already built, you can no longer change it without another big rewrite and the fear of breaking things.


Switching out a Postgres provider surely is on the easier side of things to migrate?

With most of these serverless providers, there's no technological lock in I am aware of, it's all just Postgres features paired with DevOps convenience.


What size dataset can you fit on that $5 VPS where it handles those queries in reasonable time? Serious question; all the $5 VPSes I've seen are too low-spec to get anywhere with them. E.g. a DigitalOcean $6/mo VPS gets you a single measly gig of RAM and a 25 GiB SSD. Without being more explicit about "realize there's more included", is a $5 VPS really even a valid point of comparison?

I don't know why people ever buy plane tickets, walking's free.


1 million requests in a month is ~0.4 requests per second.

With the Prisma pricing, $1k gets you up to a 48req/s load average, and that's without the geo balancing. For a little more you can get a dedicated Postgres instance with 128GB memory and 1TB+ of disk on DO that would definitely handle magnitudes more load.
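Those figures work out, assuming a 30-day month and the $8-per-million metered rate quoted elsewhere in the thread:

```python
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

# 1M requests spread evenly over a month
print(round(1_000_000 / SECONDS_PER_MONTH, 2))  # 0.39 req/s

# $1,000/month at $8 per million queries buys 125M queries
queries = 1_000 / 8 * 1_000_000
print(round(queries / SECONDS_PER_MONTH, 1))    # 48.2 req/s
```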

Of course there are a bunch of trade-offs, but as the original poster said the gap is pretty wide/wild.


Anything with indexes will be completely fine. Hell, your little instance can probably do hundreds of primary key lookups every second. How fast would you burn through your query allowance on Prisma with that?
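Putting rough numbers on that question: even a modest 100 primary-key lookups per second exhausts the free allowance in minutes.

```python
allowance = 60_000  # free queries included per month
qps = 100           # a modest primary-key lookup rate for a tiny VPS

minutes = allowance / qps / 60
print(minutes)  # 10.0 -> the allowance is gone in ten minutes
```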

The point is that when I buy managed postgres, the thing I expect to be paying for is, well, postgres. Not a bunch of geo load balancing that I’m never going to need.

That’s why the comparison is with the thing that actually does what I want.


On Hetzner it gets you at least 4 cores, 8GB RAM and 80GB local SSD. For $49 you can almost get a dedicated server with 8 cores and 64GB RAM. More than enough to handle that load. Edit: this is for $8, but the general point still stands.


I find it hard to trust managing a Postgres database to someone who decided to use CamelCase by default for table and column naming in Postgres.


I agree, but that's not camelCase, it's PascalCase.


Stay HN, HN.

(I say that with appreciation -- I always think of the former as UpperCamelCase because I never used Pascal)


Better tell Wikipedia: "The format indicates the first word starting with either case."

https://en.wikipedia.org/wiki/Camel_case


Nah - camel's head can be tall just like the hump.


That's what happens when Javascript developers think they're full-stack.


The Prisma engine is written in Rust (and the original product was written in Scala), so your snide comment is actually a bit inaccurate. You've also ironically failed to spell JavaScript using the correct casing.


Why is that a bad thing? Otherwise you need a translation layer to convert to language-appropriate column names.


Postgres has very counterintuitive behaviour around case-sensitivity.

  CREATE TABLE CamelCase(...)
  SELECT * FROM "CamelCase";
will fail with "CamelCase" not being a table.

That's because the CREATE TABLE statement creates a table named "camelcase", not "CamelCase", despite what you might assume from the query.
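A minimal illustration of that folding (quoting is what actually preserves the case):

```sql
CREATE TABLE CamelCase (id int);    -- unquoted: stored as "camelcase"
SELECT * FROM camelcase;            -- works
SELECT * FROM "CamelCase";          -- ERROR: relation "CamelCase" does not exist

CREATE TABLE "MixedCase" (id int);  -- quoted: mixed case preserved
SELECT * FROM "MixedCase";          -- works, but must be quoted forever after
```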


Daily WTF.

No but seriously, thanks for sharing. I'll always do lowercase table names from now on.


It’s not lower case, you should be using snake_case


yeah. both


That's not particular to Postgres; it's the same in Oracle (or any other DB following ISO case handling, I believe), except that for the latter the true (quoted) table name is all uppercase instead, with the same error.

You can quote the table name in the CREATE statement to get the camel-case table name. Just remember to quote in both creation and usage, or in neither; that's a good rule of thumb.


I'm not super well versed in this domain, but I believe Postgres columns need to be wrapped in double quotes to respect case, or else they're all treated as lower, or something along those lines?


I seem to recall it's a SQL thing, not just PostgreSQL, although Oracle might not follow the spec. Definitely annoying.


Yes, SQL does specify case-folding unless quoted.

However, the SQL standard prescribes the case to be folded to uppercase, while PostgreSQL folds to lowercase.



That's largely only true for schema names and table names, not all identifiers.

The original root cause was having schemas backed by directories, and table definitions backed by .frm files. So on a case-insensitive filesystem like on Windows or MacOS, MySQL enables corresponding case-insensitivity logic for the affected types of identifiers.

For a deeper dive, see my post https://www.skeema.io/blog/2022/06/07/lower-case-table-names...


I can't speak much in detail, but maybe the following will paint you a picture.

I did contract work for a large international financial institution, known for being "one of the big N" (N<5). Lots of data/backend/db work, in several languages/stacks. Then a new style/naming convention for databases got pushed, by middle/higher management. It included identifiers in both camel-case and pascal-case. It was clearly "designed" by somebody with a programming background in languages that use similar conventions.

I noticed how there would be trouble ahead, because databases have (often implicit) naming conventions of their own. Not without reason. They have been adopted (or "discovered") by more seasoned database engineers, usually first and foremost as for causing the least chance of interoperability issues. Often it is technically possible to deviate from them (your db vendor XYZ might support it), but the trouble typically doesn't emerge on the database level itself. Instead it is tooling and programming languages/frameworks on top of it, where things start to fall apart when deviating from the conventional wisdom of database naming conventions.

That also happened with that client. Turned out that the two major languages/frameworks/stacks they used for all their in-house projects (as well as many external products/services) fell apart on incompatibility with the new styling/naming conventions. All internal issues, with undocumented details (lots of low-level debugging to even find the issues). I had already predicted it beforehand, saw it coming, reported it, but got ignored. Not long after, I was "let go". Maybe because of tightened budgets, maybe because several projects hit a wall (not going anywhere, in large part because of the above-mentioned f#-up). I'm sure the person who originally caused the situation still got royally paid, bonuses included, regardless.

Anyways, the moral of the story here is this: even if you technically could deviate from well-established database naming conventions, you can get yourself into a world of hurt if you do, even if it appears to resolve naming inconsistencies with your programming languages of choice.


The other group of people who do this is ORM-happy Windows (C#/F#?/...?) devs.

Welcome to quoting everything for the rest of your query life.


This has nothing to do with Windows. EF Core just happens to be best-in-class ORM, unlike the "ORMs" on other platforms which give the concept bad rep.


I find it hard to take anyone seriously who makes life harder for themselves for no good reason. Different strokes.


The camel-casing makes life hard because for any manual queries you write, you need to quote the names AND uppercase certain letters, and it's also just inconsistent.


You should get in the habit of quoting identifiers in pg anyway as there are so many reserved words.


I imagine most people using this will rarely write a manual query. ORMs are extremely common in JS land these days. I don’t like it, but to each their own.


I imagine that the people who use CamelCase table names will one day make some analysts and data scientists very sad, assuming the company they work for grows large enough to hire them.


I was a big promoter of Prisma but can no longer recommend it. They built something really cool but have basically abandoned it, with major issues languishing for years without attention.

I guess they were busy working on this instead... for now.


Did you know it’s impossible to set a statement timeout for your DB connections in prisma? There’s no hook to run commands when a connection in a pool is established, and there’s no exposed setting for it. The only way to manage it is either to set it at the user level or to set up whatever they call a middleware layer (client extension?) that issues the command to set the timeout before every single query

An engine that doesn’t allow you to set per-connection settings effectively is pretty crazy IMO.


It's possible now with the built-in "db adapter" plugin. I also have lots of misgivings about the Prisma ORM, but this particular thing is possible now.


Can’t find any docs on this, could you point me to what you’re referencing?


Configure Prisma with https://www.prisma.io/docs/orm/overview/databases/postgresql...

Then you can pass `statement_timeout` when you create the `pg.Pool`
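A sketch of that setup, under stated assumptions: it uses the node-postgres `Pool` with Prisma's driver-adapter package (`@prisma/adapter-pg`, per the linked docs); `statement_timeout` is a standard node-postgres client option, in milliseconds, applied to each connection the pool opens. Not verified against a live database.

```typescript
import { Pool } from "pg";
import { PrismaPg } from "@prisma/adapter-pg";
import { PrismaClient } from "@prisma/client";

// node-postgres applies statement_timeout per connection, so the whole
// pool inherits it without any per-query middleware.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  statement_timeout: 5_000, // abort any statement running longer than 5s
});

const prisma = new PrismaClient({ adapter: new PrismaPg(pool) });
```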


Oh I see, thanks! So you just need to opt out of their drivers entirely and configure with the alternative package’s driver.


One of the biggest things that surprised me is that there's a binary written in Rust that listens for your queries and then passes them to the underlying database.

That's an unnecessary moving part IMHO. Is it still the case, or have they changed their architecture recently?


It's still there, albeit not for long.

There were ample reasons in the past whereby going down this path made architectural sense. The primary one being multi-language support. Since then, TS and JS have found their way to the top of "the programming langs of choice" charts and so we're digging into removing the Rust based components and making them optional.

We'll share more on this in the coming weeks.


That's great news. The Rust client has eaten a few days of my and my co-workers' lives by causing deployment issues.


How would that affect the now unofficial Go client?


Great point! We'll be taking all of that into account. The idea is not to make this a breaking change as a lot of community work went into the various lang support modules that are available.

Partly this is what makes this a fun and interesting challenge that goes beyond just the tech and the team is eager to get their hands dirty to solve this. For now, lots of ideas on the board. Once we have a plan, we will start to share in order to get community feedback.


The challenge, as we see it, is not that we didn't address open issues (we've put out a release every 3 weeks for the past 4+ years!), the challenge instead is that we didn't explain how we pick the issues that we do to work on. That is being addressed and you'll soon see us share our process transparently.

With close to 400K monthly active developers using our library and over 9M monthly downloads on NPM, one can imagine that the issues keep piling up!


From an outside perspective it seems like issues are prioritized based on how easy they are to fix.

To be fair this kind of makes sense, especially when the thing that makes many tasks difficult is finding solutions that don't break existing usage.


Yep, I can totally see how that may come across. Hopefully, once we publish our process, all of this becomes clear and the community can be part of the selection process itself.


One constant problem I run into is that Prisma returns neither the IDs nor the full rows when doing a batch insert. It only returns the counts.


Honestly, I hated Prisma for a while. I've tried to actively rip it out of multiple projects. But typed queries + views being supported have really started to change my mind. Prisma is great for basic CRUD operations, and those two features give me a really solid, type-safe escape hatch for most of my complaints.

Dropping the rust client will solve another big complaint. I definitely feel the issues languishing problem. I've submitted a few confirmed reproducible bugs that have hung out for a couple years. Still, I'm happier with their recent direction than I would have expected.


It's not every day you get to launch a hosted Postgres service that has something fundamentally new to offer. That's what we have done with Prisma Postgres, and I'm incredibly excited for it.

We are using Firecracker and unikernels to deliver true scale-to-zero without cold starts. Happy to go into more detail if anyone is interested.


You omitted another fairly well-known serverless Postgres provider which also does that (scale to 0 and no cold start):

Nile (thenile.dev).

Not affiliated in any way with Nile, just a happy user.

I find their pricing much easier to reason about and plan for; your pricing page, by contrast, I found super cluttered and hard to reason about.

Competition in the serverless Postgres space is always welcome from a customer perspective, but my gripe is currently a) bundling with Prisma - I might not want to use your tool and b) cluttered pricing.


Thanks for the feedback!

The point re: pricing explanation is well taken. We've already done a revision and will work on another one as we get more feedback on the latest version.


Appreciate the response!

In any case: Best of success in bringing this to GA, it’s great you’re among teams working on making this accessible!


Wow, I thought you guys were just reselling Neon like some others. This is genuinely impressive technically. It's got me looking at Unikraft Cloud for other stuff too.

That said, do you plan to offer branching or any of the other features that Neon offers? I think that's their big selling point, along with separate billing for compute and storage.


Prisma team member here... Yes, when we go GA, we'll offer features that'll give you the comfort of wanting to run your production loads on Prisma Postgres!


Congrats on the launch!

I’m a bit confused about the pricing.

The docs and pricing pages on your website don’t seem to outline how the pay-as-you-go pricing will work.

Is this still being figured out?


Essentially, you pay for database queries and events, with 60'000 included for free, which is plenty for experimenting and small projects. The price per million queries/events is then based on the plan you're subscribed to, and with Starter you have zero monthly fixed costs and only pay for queries and events above 60'000. No CPU-time or similar metrics that are usually hard to grok.
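Since a calculator was requested upthread, the model described here is simple enough to sketch. A minimal sketch with illustrative numbers only (60k included queries; the per-million rates below are the $18 and $8 figures cited elsewhere in the thread, not official):

```python
def monthly_cost(queries: int, base_fee: float, per_million: float,
                 included: int = 60_000) -> float:
    """Plan fee plus metered cost for queries beyond the included quota."""
    billable = max(0, queries - included)
    return base_fee + billable / 1_000_000 * per_million

# Hypothetical month with 2M queries:
print(round(monthly_cost(2_000_000, base_fee=0, per_million=18), 2))   # 34.92 (Starter-like)
print(round(monthly_cost(2_000_000, base_fee=49, per_million=8), 2))   # 64.52 (Pro-like)
```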

Take a look at the Accelerate and Pulse pricing details. Prisma Postgres comes bundled with these, so the pay-as-you-go pricing is the same: https://www.prisma.io/pricing#accelerate

We'll continue to make improvements to the pricing on the way to General Availability to make it both as easy to understand and affordable as possible.


If pricing is only done by storage limits, egress and query count (but not resource usage) how do you prevent something like a massive cross join with an aggregation from just running for ages on a single query?


We are looking for input on this during the EA phase. Here's how we plan for this to work:

- Each incremental concurrent query allocates additional compute resource to your database
- All queries share that pool of compute resource
- Queries have strict timeout limits: 10 seconds on most plans, configurable up to 60 seconds

Prisma Postgres is designed to serve interactive applications with users waiting for a response. In a sense, we are adopting some of the design principles underlying DynamoDB (strict limits on queries) and combining it with the flexibility of a Postgres database that is fully yours to configure and use as you see fit.


I don't want limits on the kinds of queries I can run when those limits are artificial ones imposed to work around limitations of a too-simple billing system instead of being inherent in the domain. I'm willing to pay extra for the occasional huge query, but I wouldn't feel comfortable knowing I couldn't make it at all.


How much overlap is there in the “serverless database that starts fast” group and the “might have a query that runs for 60 sec” group? If you’re running queries that are that complex, wouldn’t you be better served with a different database service?


If I used two services, I'd have to move data from one service to another.


That said, a production OLTP workload database would also have query timeouts imposed; otherwise those fine folks in the crappy-query-writing department will surely bring it down.


Having query timeouts instantly makes me write this solution off.

Companies, especially smaller ones starting out, will run analytics in the same DB as the application DB.

A major plus of using a Postgres DB is the flexibility of doing analytics and serving apps. It can do it all. Analytics queries will often easily exceed your timeout limits.


Does that mean that using Prisma Accelerate and Pulse with an external database will cost the same as using it with the bundled database? (Since I don’t see database-specific costs for storage, read replicas, PITR, backups)


While Prisma Postgres is in early access, yes. This is why there's no ability to change the database config right now.

However, when we release Prisma Postgres in GA (in a couple of months), you will be able to upgrade your Postgres instance (CPU, storage, etc.), and that will be a db-specific cost.


Maybe they can take this opportunity to close the feature gap with other ORMs: no support for partial indexes, no support for partitioning, bad support for JSON columns, no support for "FOR UPDATE", no support for "now()", poor query performance.


Thanks for the list of things you'd like to see added or fixed, specifics are always much appreciated, so we can better understand!

Historically, we haven't been very good at explaining how we pick the issues we work on. That is being addressed and you'll soon see us share our process transparently. This will allow everyone to better understand how we're going to continue improving the Prisma ORM.


Are you folks planning on more first class postgres support? From the outside, it really seems like forcing MongoDB into the orm has forced a pretty watered down experience for the main orm (I.e. not typed SQL).

I don't know if it's true, but it seems like you'd be able to address the backlog more easily if you didn't have to force abstractions that work for a NoSQL db.


no geospatial data types!


How does it handle backups, read replicas, and failover? And more importantly, how does it scale with load? For example, our workload fluctuates between 1 vCPU and 128 vCPUs within an hour. How would it handle this?


During EA (early access), we don't recommend using PPG (Prisma Postgres) for production loads. For now (during EA), there's no ability to scale up the base system config.

However, when we roll out the service in GA, you'll be able to "upgrade" the base system and one of the plan tiers will support autoscaling along the lines of what you've described.

The EA launch is for us to get PPG into the hands of our users and to listen closely to requests/requirements/bugs... your request has been noted, and thanks for raising it!


Last time I checked, Firecracker didn't have a very compelling I/O story, which made it in my opinion not completely adequate for running Postgres (or any other database).

In contrast, other similar VMMs seem to have a better one, like Cloud Hypervisor [1]. Why then FC and not CH? (I've nothing against FC; I actually love it and have been using it, but it appears not to be the best I/O-wise.)

[1]: https://github.com/cloud-hypervisor/cloud-hypervisor


> Firecracker didn't have a very compelling I/O story

Can you provide any sources for this claim? We're running Firecracker in production over at blacksmith dot sh and haven't been able to reproduce any perf regressions in Firecracker over CH in our internal benchmarking.


The major tradeoff with Firecracker is a reduction in runtime performance in exchange for a quick boot time (if you actually need that; this obviously doesn't work if your app takes seconds to boot). There are quite a lot of other tradeoffs too, like "no GPU", because that needs some of the support they remove to make things boot fast. That's why projects like Cloud Hypervisor exist.



That issue is from 2020 and has already been addressed. The fact that io_uring support is not "GA" is mostly a semantic aspect that doesn't affect most use-cases.


Nice, this is very cool.

Very much of the opinion that the unikernel stuff (and especially what Unikraft is offering) is being massively slept on.


It absolutely is. To be fair, the previous versions of Unikraft weren't quite easy to use or maybe ready for wide consumption, but they took some funding and at the very least their marketing and documentation massively improved.

Hugely impressive.

A little weird that they say "no cold start" versus... "minimal cold start".

Cold starts in milliseconds != no cold starts, though I get it -- marketing is marketing and it's not wrong enough to be egregious :)

That said, super excited that someone has built a huge complex database like Postgres on Unikraft.

Been a while since I kicked the tires on Unikraft but looks like it's time to do it again, because this isn't the only software that could use this model, given an effective unikernel stack.


So nice to hear some of you just as excited about unikernels as we are!

Re: zero/minimal cold-start... Technically, you're right, though I'd say if you don't notice it's there, it's as good as not even being there. :) You get the pragmatism though, appreciate it.

Lots of cool stuff coming for Prisma Postgres that all this tech enables, looking forward to keep telling you all about them.


Let's hope it is not another "Netlify" honeypot aka "settle in boys, generous free plan, port everything and lock yourself in. I'll start adjusting those prices next year when it will cost you $10 even to send some emails from your contact form".


Huh? Netlify free tier is still free, I use it for multiple projects


They've made some nasty pricing changes in the past. For example they decided on per-user pricing (based on git committers) if you used them for the very common task of deploying branch-based previews on PRs. At my last job this was a sudden increase of thousands of dollars a year for a 15 engineer team. We dropped Netlify instead.


I recall Prisma as a player in the GraphQL server field, about 8 years ago.

Does anyone know what happened to that? According to their website, they now only offer Postgres services?


Prisma was born out of graph.cool (https://www.graph.cool/) and their GraphQL implementation became the middleware between the Prisma DB client (@prisma/client) and their Rust db abstraction layer (prisma-engine).

I believe they're getting rid of the residual GraphQL bits though, if I'm not mistaken.


Nikolas from Prisma here! I've been around for almost 8 years (since the Graphcool days) and this description is pretty accurate but I can add a bit more color. The major product evolutions we've had were:

- Graphcool: A GraphQL BaaS written in Scala.

- Prisma 1 (see [1]): A GraphQL proxy server between DB and app server. This was essentially the "GraphQL engine" of Graphcool that we ripped out and made available as an open-source component. However, its auto-generated CRUD GraphQL API was never meant to be consumed by a frontend. Instead, it was the abstraction layer for the app server to interact with the DB (at first only for GraphQL APIs on the app server via `prisma-binding` [2], then the first version of Prisma Client that worked with any API layer — both of these were thin JS/TS layers that talked to the GraphQL proxy server where the actual DB queries were generated).

- Prisma 2+ aka Prisma ORM (see [3]): We realized that with Prisma 1, we were essentially competing with ORMs but that our architecture was way too complex (devs needed to stand up and manage an entire server where other ORMs could be used with a simple `npm install`). So, we rewrote the Scala "DB-to-GraphQL" engine in Rust to be able to provision it via a download during `npm install` and run it as a sidecar process on the app server, added a migration system and Prisma ORM was born. That being said, it has evolved a lot since then. We dropped GraphQL in favor of a way more efficient wire protocol [4] and have continuously reduced the footprint and responsibility of the query engine (e.g. you can now use Prisma ORM with standard Node.js DB drivers [5]).

If you want more details, I talked more about this evolution on Twitter [6] a while ago.

This launch is a huge milestone for us and it's definitely one of the most exciting launches I've been a part of at Prisma!

[1] https://www.prisma.io/blog/prisma-raises-4-5m-to-build-the-g...

[2] https://github.com/prisma-labs/prisma-binding

[3] https://www.prisma.io/blog/prisma-the-complete-orm-inw24qjea...

[4] https://www.prisma.io/blog/prisma-5-f66prwkjx72s

[5] https://www.prisma.io/docs/orm/overview/databases/database-d...

[6] https://x.com/nikolasburk/status/1384908813069869058


If I already have a postgres operator setup on Kubernetes - what additional benefits is this getting me?

Right now, super fast start times aren't really needed since it's a database that I expect to be running for a while - ms vs 30s is fine. It's easier to set up a test database on the same machine I'm running automated tests on, so that may not be a good use case either.

I'm glad that they've made improvements to startup speed and size of the containers which would be good if it's open sourced, but I don't know if this is good as a paid service if you already have an easy way of setting up new clusters in k8s.


With Prisma Postgres, you also automatically get a connection pool and a global cache: you don't have to worry about overwhelming your db with a large number of connections, and you can add global caching to any query in just a few lines of additional code, e.g.:

```
const user = await prisma.user.findMany({
  cacheStrategy: { swr: 60, ttl: 60 }, // <--- set a cache strategy in really just a single line
})
```
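To illustrate what `swr` and `ttl` mean here: `ttl` is how long a cached response is served as fresh, and `swr` is how much longer a stale response may still be served while it's revalidated in the background. A toy in-memory sketch of those semantics (not how Accelerate actually implements it):

```typescript
type Entry<T> = { value: T; storedAt: number };

// Toy stale-while-revalidate cache illustrating ttl/swr semantics.
// Not Accelerate's implementation -- just the caching model.
class SwrCache<T> {
  private store = new Map<string, Entry<T>>();
  constructor(private ttlMs: number, private swrMs: number) {}

  async get(key: string, fetchFresh: () => Promise<T>, now = Date.now()): Promise<T> {
    const hit = this.store.get(key);
    if (hit) {
      const age = now - hit.storedAt;
      // Within ttl: serve the cached value as fresh.
      if (age <= this.ttlMs) return hit.value;
      // Within ttl + swr: serve the stale value, refresh in the background.
      if (age <= this.ttlMs + this.swrMs) {
        fetchFresh().then((v) => this.store.set(key, { value: v, storedAt: Date.now() }));
        return hit.value;
      }
    }
    // Miss or fully expired: fetch, cache, return.
    const value = await fetchFresh();
    this.store.set(key, { value, storedAt: now });
    return value;
  }
}
```

The swr window is what keeps latencies flat: readers never wait on the database as long as a stale-but-acceptable value exists.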

Both of these are super important when serving users that are spread across the globe. Latencies between regions add up. If you're curious, check out https://accelerate-speed-test.prisma.io/

Similarly, already enabled on the db, you can subscribe to any change happening in your db, e.g. to send out welcome emails when a new user is added to the User table. This makes event-driven architectures super easy. Take a look at Pulse, which comes bundled with Prisma Postgres: prisma.io/pulse
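The shape of it is a stream of change events you can iterate over. Here's a toy stand-in with an in-memory feed (the names below are hypothetical, for illustration only — not the real Pulse API):

```typescript
// Toy in-memory change feed mimicking the shape of a change-event stream.
// Hypothetical names for illustration -- not the real Pulse API.
type ChangeEvent = { action: "create" | "update" | "delete"; record: { email: string } };

async function* changeFeed(events: ChangeEvent[]): AsyncGenerator<ChangeEvent> {
  for (const e of events) yield e;
}

// React to new users, e.g. to queue welcome emails.
async function collectWelcomeEmails(feed: AsyncIterable<ChangeEvent>): Promise<string[]> {
  const queued: string[] = [];
  for await (const ev of feed) {
    if (ev.action === "create") queued.push(ev.record.email);
  }
  return queued;
}
```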

Another benefit of a managed service of course is that you don't have to worry about managing any of it. Things happen, traffic spikes, servers go down, for a lot of us, that's nice to not worry about and rather focus on building and shipping the things that make our own products and services unique. Also, in a lot of situations, provisioning something complex is just not worth it, and a quick deployment to try or test something is desired.


Naive, invalidation-unaware caching is a very poor hack that can work mainly for requests that are probably not your core feature, and therefore probably not your bottleneck.


Prisma is an absolute joke. I remember hearing so much about it with the hype train, and I looked into it, and it's an absolute catastrophe with a very beautiful homepage. There are GitHub issues where the creator is arguing with people about not supporting foreign key constraints or something, and he didn't understand the use case (can't remember the specifics right now, but this is where I immediately noped right out).


Meanwhile, some companies are building products with Prisma and enjoying their choice. I love Prisma with PostgreSQL and TypeScript; it's a very productive tool.

My first opinion wasn't very far from yours, but then I adopted it. It has served me well after a year and multiple projects.


Prisma definitely supports foreign key constraints. I think you mean joins which they only recently started supporting.


Yes, you're right lol. Equally as bad. I actually thought it might be that but couldn't quite remember. Imagine trying to ask people for the use case of why they need joins...


WTF?


I don’t understand the free pricing tier:

$18 /million queries, 60k included

60k queries are free and then the 60_001st starts incurring costs at 18/1_000_000 per query?


> Essentially, you pay for database queries and events, with 60'000 included for free, which is plenty for experimenting and small projects. Price per million queries/events is then based on the plan you're subscribed to, and with Starter you have zero monthly fixed costs and only pay for queries and events above 60'000. No CPU-time and similar that's usually hard to grok. Take a look at the Accelerate and Pulse pricing details. Prisma Postgres comes bundled with these, so the pay-as-you-go pricing is the same: https://www.prisma.io/pricing#accelerate We'll continue to make improvements to the pricing on the way to General Availability to make it both as easy to understand and affordable as possible.

Nvm answered further down in the thread by eampiart


Our pricing for Prisma Postgres indeed presents a bit of a mental shift compared to traditional database providers:

We charge for query volume, not for compute!

We believe that ultimately this is a more intuitive way for developers to think about database cost.

Generally, our goal is that developers need to only think about _queries_ — we'll take care of everything else to make sure those queries can run efficiently. Developers shouldn't need to worry about compute, scaling, downtime, etc.
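To make the model concrete, here's the arithmetic under the Starter numbers quoted elsewhere in this thread (60k queries included, then $18 per million) — a sketch only, since the plan numbers may change before GA:

```typescript
// Sketch of the pay-as-you-go math, using the Starter numbers quoted in this
// thread (60k queries included, then $18 per million). Plan numbers may differ.
function monthlyQueryCost(
  queries: number,
  includedQueries = 60_000,
  usdPerMillion = 18
): number {
  const billable = Math.max(0, queries - includedQueries);
  return (billable / 1_000_000) * usdPerMillion;
}
```

Under those assumptions, 60k queries in a month costs $0 and 1,060,000 queries costs $18 (one million billable queries).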


How about the scenario where I do a select … where … on a view and that view is defined to have 5 CTEs on different tables and then a final select doing some complex stuff there.

Is that billed as one query or 6?


That would be billed as a single query. We think this is a much simpler way to reason about your cost compared to counting rows scanned, CPU time consumed or something more granular like that.

If your query is very expensive, it will take longer to complete, and that will be a signal to you, the developer, to simplify your query or identify an index that can help speed it up. Prisma Optimize will help you identify and improve such queries.


While I think, as an armchair business wonk, y'all should count it as 6 queries and not 1, as a payer and consumer I'm even more likely to use it now that you count it as only a single query. :-)


Is there a way to import data in from a table, or would I have to manually break it up into 1kb chunk insert queries?

Let's say I have a ~150GB postgres DB right now and I want to move it to Prisma Postgres. At 1kb/request and $8/million requests, does that mean 150 million requests billed at $1200 just to get started?
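Back-of-envelope, in decimal units, and assuming that $8/million rate applies to bulk inserts:

```typescript
// Back-of-envelope for the migration cost question above.
// Decimal units; assumes the $8/million request rate applies to bulk inserts.
const dbBytes = 150e9;                  // ~150 GB database
const chunkBytes = 1_000;               // ~1 kb of data per insert request
const requests = dbBytes / chunkBytes;  // 150 million requests
const costUsd = (requests / 1e6) * 8;   // $1,200
```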


Right now during Early Access, the first hard limit would be that the database size is capped at 1GB. Of course, this will be increased as we go to General Availability. We're also planning on shipping a way to directly connect to the database vs. operating through Accelerate queries solely. During EA, I'd probably go with seeding from a pg dump. For GA, we'll provide a better way to move to Prisma Postgres if you're already using a pg database somewhere else.


Congrats on the launch, this looks very interesting!

Will it ever be possible to self-host Prisma Postgres (or Pulse)? It's great that I can use your platform, but if I adopted e.g. Pulse, that's a non-trivial vendor lock in. I'd feel _much_ safer/confident building my app around the Prisma stack (besides the ORM, which I like).


Self-hosting for Prisma Postgres, Pulse, and Accelerate isn’t currently on the agenda. Our focus is on building Prisma into a company that gives developers the best tools for working with data. We handle a lot of the underlying complexity, abstracting it away to make the developer experience seamless so that teams can concentrate on building their applications rather than worrying about infrastructure—just as we do with databases and the Prisma ORM.

Your point about concerns around vendor lock-in is completely valid, and we get it. However, we're confident that our approach with the ORM, along with the trust we’ve built with the community, will set the stage for us to develop long-term commercial products that are fit for serious, production-ready deployments.


Thanks for clarification, that makes sense.

Just to give you a bit more context, I've been evaluating CDC (change data capture) tooling and Prisma Pulse was one of the options. My primary data storage is Postgres but I have a need to react when data in some tables is changed (depending on some user-provided filters). I'm currently handling that with naive message push to SQS, because Debezium/Kafka setup is too expensive/complex. Prisma Pulse looks great, but that CDC part of my app is crucial and I need an option to be able to host it myself/on premises for some customers.

However I totally understand the need to build a moat — good luck on your journey!


> because Debezium/Kafka setup is too expensive/complex

Totally get that. Many of our users like Pulse because of that.

> I need an option to be able to host it myself/on premises for some customers.

Completely understand the need and thanks for the wishes!


Any comparison with neon.tech?


I just answered the same questions on Twitter [1], so just quoting that:

Neon is a lot more feature-rich right now (since PPG just came out) and has awesome stuff like branching.

In the long run, we expect to have a similar feature set. Additionally, our underlying tech has the benefits of avoiding cold starts and likely being more cost-effective.

That being said, I see it as a major benefit that with Prisma you not only get a DB but an entire data layer (incl global caching and real-time DB events) thanks to the first-class integration of our other products like Accelerate and Pulse [2].

[1] https://x.com/nikolasburk/status/1851522983346532669

[2] https://www.prisma.io/blog/announcing-prisma-postgres-early-...



No offense but $0.09/GiB for DB read is so expensive. It sounds like servers would be querying Prisma over the internet? Then webmasters get to double pay their webhost for egress to the internet. Developers need to relearn how to self-host!


None taken!

The intention is clearly not to have any double dipping, so the verbiage and presentation of the pricing elements need to improve... and they will! The feedback regarding clarity on pricing is well received. We know that we've got some work on our hands along those lines and we're working to address it. Stay tuned!


Anyone complaining here just isn't the target user. People who need an always-available but zero-maintenance, low-usage database seem to be the target?

> always-on database with pay-as-you-go pricing for storage and queries (no fixed cost, no cost for compute). It's like a serverless database — but without cold starts and a generous free tier


Might be nice for setups consisting of dozens of tiny microservices, each with its own just-as-tiny database.


> Virtualization layer (Firecracker + KVM)

The "secret" sauce probably. Firecracker is so cool.


Everyone runs Postgres in VMs. The real magic, if you want to call it that, is VM snapshots to reduce cold start time.


> At Prisma, we believe that deploying a database should be as simple as adding a new page in Notion

I don't know what this means, but setting up a Postgres DB is a single psql command, or a few clicks in pgAdmin.


Drizzle [0] added an easter egg on their website in response to this announcement, it seems.

[0]: https://orm.drizzle.team/


This team’s ongoing, and frankly unhealthy, fixation on Prisma is getting so damned old (and childish).

It's rare to witness a group so openly attempt to undermine another, while simultaneously drawing on the very same team’s ideas to inform their own product direction. Despite repeated public feedback calling out this unprofessional behavior, there appears to be little desire for growth or maturity.

Kudos to the Prisma team for consistently upholding a high standard of professionalism by ignoring them.

Wondering why you posted this... do you condone such childish behavior?


I posted this 'cause I found it funny; coming from the data engineering world, I've just dipped into TypeScript APIs and ORMs in the past two weeks for a personal project.

I have no affiliation with any provider/ name/ person in this space.

I condone making premature judgements.

I have no clue about Drizzle's behavior, and the only thing I learned about Prisma in the last two weeks is that they have their own schema definition language, a separate Rust proxy, and seemingly didn't address open feature requests for a couple of years, which is why for now I didn't pick them for my project.


Wow, cool. I'd never heard about unikernels. Does it solve the issue with Firecracker of not being able to reclaim memory? I'd imagine that can be an issue for a long-running app like a constantly busy database.


Thank you!

Reclaiming memory is orthogonal to unikernels. It's a concern of the VMM, and Firecracker does support this through the balloon device: https://github.com/firecracker-microvm/firecracker/blob/main...

Now, the unikernel helps us consume less memory, which is a good place to start :-)


I guess this is a different way to complicate simple things and sell them. "serverless database is what you need, my dude!"


Congratulations! Some innovation, some engineering. Funnily enough, I was just thinking about using unikernels to deploy storage appliances via Firecracker, and here we go.


Amazing tech! Well done Prisma team!



