It has some weird ramifications though:
- when they go to implement a new feature (like the recently added JSON column support), they have to implement it on both sides, which can cause bugs like this: https://github.com/prisma/prisma/issues/2432
- they're somewhat limited to the semantics of GraphQL-based RPC, which notably excludes stateful operations like arbitrary `START TRANSACTION;` blocks that may or may not commit. See https://github.com/prisma/prisma-client-js/issues/349 for more info on that
I wonder if their intention is to re-use the engine between different JS processes for caching / sharding or something like that, or to add Prisma clients in other languages. Why create the indirection?
I do like Prisma's type safety compared to the pure TypeScript alternatives like TypeORM and MikroORM -- it's really good at typing the results of specific queries and preventing you from accessing stuff that wasn't explicitly loaded. The style of the query language is the cleanest I've seen out of the three as well IMO.
Edit: I think node modules can install arbitrary binaries on some serverless JS runtimes. I'm not sure specifically about Cloudflare, but I know their dev tool bundles JS using webpack, which would exclude other binaries from node_modules.
- Prisma 1 was a completely independent server, and Prisma 2 most likely started as a rewrite of Prisma 1, so it followed the same approach
- This indirection will be removed if someone can finally land a Rust binding to NAPI (looking at you Neon binding people)
- Prisma plans to support multiple languages thus it makes sense to have an agnostic engine
I'm the co-founder of Prisma, so should be able to answer some of your questions :-)
Another reason for the split is performance. It's reasonable to ask how performant a library that simply marshals some data from a database really has to be. But it is important to realise that Prisma Client is quite a bit more ambitious than that. Where other libraries usually try to generate a single complex query, Prisma will often issue multiple smaller queries and partly join the data in memory. The throughput difference between V8 and Rust is significant here.
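A rough sketch (assumed shapes, not Prisma's actual engine code) of what "issue multiple smaller queries and partly join the data in memory" can look like: fetch the parents in one query, batch-fetch the children in a second, then stitch the tree together in application code.

```typescript
// Hypothetical in-memory join (illustration only, not Prisma internals):
// one query returns users, a second batched query returns all their posts,
// and the nesting is done with a plain hash join.
type User = { id: number; name: string };
type Post = { id: number; authorId: number; title: string };

function joinInMemory(users: User[], posts: Post[]) {
  // Index posts by author so the stitch step is O(users + posts).
  const byAuthor = new Map<number, Post[]>();
  for (const post of posts) {
    const list = byAuthor.get(post.authorId) ?? [];
    list.push(post);
    byAuthor.set(post.authorId, list);
  }
  return users.map(u => ({ ...u, posts: byAuthor.get(u.id) ?? [] }));
}
```

This hash-join step is exactly the kind of tight loop where a native-code engine outruns V8 once result sets get large.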
You are right that our architecture precludes us from doing things like explicitly starting a transaction and keeping it open for a longer duration of time. Our long-term goal is to create an Application Data Platform for medium-sized software development teams that can't afford to invest in internal infrastructure to the same degree as big tech companies. If you are curious what this might look like, you can take a look at TAO at Facebook or Strato at Twitter. For long-running transactions specifically, we believe that they are often misused by developers who think they get a certain guarantee that they don't actually get from wrapping their workload in a transaction. There are often better approaches, both more correct and easier to reason about, and that's what we want to teach people.
Currently we are building 30+ different binaries for each release in order to support most sensible platforms. This is a pain for us, but I hope most of our users will see that this is something they rarely, if ever, have to worry about. We believe that WASM + WASI will eventually enable us to remove the need for the binary for applications running on Node, but the ecosystem is not quite there yet.
Ultimately, I think the biggest step forward represented by Prisma 2 is the type safety and result typing. We have been pushing the TS compiler to its limits, and I believe the developer experience speaks for itself. We have a lot of work to do in order to build out the feature set, but I hope many developers will appreciate the improved ergonomics, and trust that we will work diligently over the coming months to add the features that they need.
Thank you for looking into Prisma!
With respect to building out a big-boy operational datastore -- I think that's really cool. It'd be nice for me to be able to use something like TAO or EVCache or what have you without having to build it all myself, that's for sure. I understand why Prisma's API is constrained compared to a regular relational database in order to support those needs. That said, I think that the very best (and certainly most sellable) Application Data Platform doesn't require adopters to drop key abilities or semantics they are used to in order to switch away from a normal database. I think those semantics only need to be dropped at the kind of scale which very few Prisma users are ever going to reach, yet they pay the productivity penalty for those missing semantics from the very first moment they begin using the tool.
Yes, you can do a lot of the same things you might want to do with transactions using nested or batch operations, but not everything. For example, Rails' transactional testing feature is battle-tested and seemingly well loved by the community, and it's currently impossible with Prisma; instead, you must use a slower and more error-prone database cleaner tool. Another example would be a bank-style database with double-entry accounting: you want to decrement one account by a certain amount and increment another account by the same amount transactionally, but only if the source account's balance covers that amount. `SELECT ... FOR UPDATE` to the rescue in Postgres, but negative account balances with Prisma.
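The invariant in that bank example can be sketched in memory (a toy, not real banking code): the debit and credit must happen together, and only when the source balance covers the amount, which is exactly what a Postgres transaction with `SELECT ... FOR UPDATE` guarantees.

```typescript
// In-memory toy of the double-entry transfer invariant. In SQL the read
// below would be `SELECT balance FROM accounts WHERE id = $1 FOR UPDATE`,
// which locks the row until the transaction commits, so no concurrent
// transfer can sneak the balance below zero.
type Accounts = Map<string, number>;

function transfer(accounts: Accounts, from: string, to: string, amount: number): boolean {
  const fromBalance = accounts.get(from) ?? 0;
  if (fromBalance < amount) return false; // would go negative: reject
  accounts.set(from, fromBalance - amount);
  accounts.set(to, (accounts.get(to) ?? 0) + amount);
  return true;
}
```

Without an open transaction plus row locking, the check and the two writes can interleave with another client's, which is how the negative balances appear.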
Teaching developers to not hold transactions open for a long time, or to use smart, efficiently implemented nested inserts is a good thing without a doubt, but you could still do that education while preserving transaction semantics. Devs have been used to having those since the 70s. The two aren't in conflict if you ask me. It would make your life harder, that's for sure, but it would make my life as a potential user easier, and remove one argument for not switching over.
The implementation choice seems odd to me, nonetheless.
You might want to run it from your container on the edge, and splitting the work off into another process makes single-process lightweight containers more difficult (do you run a multi-process container? do you sidecar the workers? etc.).
So yes, I too found the architecture a bit odd. I have also seen it in https://mediasoup.org where it makes more sense to use native workers, but it carries the same multi-process challenges.
With performance in general, you want your chatty transacting code as close to the database as possible and to merely invoke it from afar. Then the transaction incurs many very short RTTs, plus one long RTT before and after. Stored procedures or functions are actually optimal here.
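A back-of-envelope sketch of that round-trip argument, with made-up latency numbers:

```typescript
// Made-up numbers: total latency ≈ (queries in the transaction × RTT to the
// database) + one invocation RTT from the caller.
const rttMs = { nearDb: 0.5, farFromDb: 60 };
const queriesInTxn = 10;

// Code running next to the DB (e.g. a stored procedure), invoked once from afar:
const nearTotal = queriesInTxn * rttMs.nearDb + rttMs.farFromDb; // 65 ms
// Chatty code running far from the DB, paying the long RTT on every query:
const farTotal = queriesInTxn * rttMs.farFromDb; // 600 ms
```

The exact numbers are invented, but the shape holds: the chatty placement scales with the long RTT, the co-located one only pays it once.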
The problem with this tool, like every other multi-SQL-flavor-SQL ORM and query builder, is that it requires users to learn yet another language. In addition to Node.js and SQL, users need to learn the Prisma query language. This is not trivial, and users that are already accustomed to working with SQL will need to relearn PrismaSQL.
This would lead to PostgreSQL-, MySQL-, SQLite-, etc.-specific query builders. Knex is close, but it ultimately doesn't work for most because it's missing some dialect-specific features (e.g. ON CONFLICT DO UPDATE). While this doesn't exactly match the type-safety benefits of Prisma, the benefits in ease of use and feature parity of a dialect-specific query builder far outweigh the difficulties of learning a new query language like Prisma's.
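As a sketch of the idea, a single-flavor builder can expose Postgres-only syntax directly. The `upsert` helper below is hypothetical (not a Knex or Prisma API), and it inlines values naively for illustration only:

```typescript
// Hypothetical single-flavor helper emitting Postgres's ON CONFLICT syntax.
// Values are inlined for illustration; real code must use bound parameters.
function upsert(table: string, row: Record<string, string>, conflictKey: string): string {
  const cols = Object.keys(row);
  const vals = cols.map(c => `'${row[c]}'`);
  const updates = cols.filter(c => c !== conflictKey).map(c => `${c} = EXCLUDED.${c}`);
  return (
    `INSERT INTO ${table} (${cols.join(", ")}) VALUES (${vals.join(", ")}) ` +
    `ON CONFLICT (${conflictKey}) DO UPDATE SET ${updates.join(", ")}`
  );
}
```

A generic multi-dialect builder has to either omit statements like this or hide them behind a lowest-common-denominator API; a dialect-specific one can expose them first-class.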
I'll speak to my personal experience...
I became allergic to ORMs after experiencing much of the pain that you describe. Like you, I quickly found ORMs were simply an additional domain language / abstraction over my database that provided more pain than usefulness. Every time I wanted to make a change to the code, I had to wade through tons of docs and/or Stack Overflow posts by other frustrated users. If I wanted type safety, I had to express and maintain types/decoders/encoders myself. Huge pain, and things always got stale, leading to massive mistrust in my data layers.
Prisma doesn't feel like those experiences. Their schema-first, client code gen approach works surprisingly well. Using the generated API feels really intuitive, and TypeScript is there the whole time providing guidance and autocomplete for me. The object tree query syntax is quite refreshing compared to the builder pattern approach taken by the alternatives. I always found the builder pattern overwhelming and often a guessing game at how to compose them.
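One reason the object-tree style composes more predictably than builder chains is that the query is plain data; a toy compiler (just a sketch, nothing like Prisma's actual implementation) can turn such an object into SQL with an ordinary function.

```typescript
// Toy object-tree query shape and a naive compiler to SQL. Illustration only:
// real clients parameterize values instead of inlining them.
interface FindQuery {
  table: string;
  select?: string[];
  where?: Record<string, string | number>;
}

function toSQL(q: FindQuery): string {
  const cols = q.select?.join(", ") ?? "*";
  const where = q.where
    ? " WHERE " + Object.entries(q.where).map(([k, v]) => `${k} = ${JSON.stringify(v)}`).join(" AND ")
    : "";
  return `SELECT ${cols} FROM ${q.table}${where}`;
}
```

Because the query is a value, you can build it up across functions, inspect it, and type-check it structurally, which is harder with a stateful builder chain.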
I think Prisma doesn't try to be too clever with their data API. They solve the 99% case in a manner that is simple and convenient; for everything else you have the raw query API, much like other solutions.
I'd suggest giving it a try. You may like it.
One advantage is the additional domain language is also the language of array/list manipulation, and not having to maintain any encoders/decoders (I honestly don't know what these are).
Entity Framework follows the unit of work pattern, and if I remember correctly Eloquent follows the active record pattern.
When I started, I found the active record pattern much more intuitive, but I prefer the unit of work pattern now, mostly because I think unit of work works better with transactions/constraints than the active record pattern does.
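A minimal contrast of the two patterns (toy shapes, not EF's or Eloquent's actual APIs): with active record, each object saves itself immediately, while a unit of work collects changes and commits them all at once, which maps naturally onto a single transaction.

```typescript
// Active record style: the model persists itself right away.
class ArUser {
  constructor(public name: string) {}
  save(db: string[]) { db.push(this.name); }
}

// Unit of work style: changes are registered, then committed together
// (all-or-nothing, like a database transaction).
class UnitOfWork {
  private pending: string[] = [];
  registerNew(name: string) { this.pending.push(name); }
  commit(db: string[]) { db.push(...this.pending); this.pending = []; }
}
```

The unit of work's single `commit` is the natural place to open and close a transaction; with active record, each `save` is its own write, so multi-object invariants are easy to violate.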
You are talking about the kind whose goal is to make querying the database more idiomatic in your development language, at the cost of some flexibility. This works very well as long as you stay within the bounds of the abstraction, and breaks terribly when you step out of it. The engineering goal is to make the abstraction just broad enough to represent most common queries without making it less idiomatic.
The second kind is the type that tries to abstract databases into a specialized query language. The goal here is to bring things you don't get on plain SQL (like type integration or a single DBMS independent language) without losing expressive power. That's the one the GP is talking about.
Maybe you mean something else, but I haven't had any issues when I've had to break out of the ORM and write portions in SQL. (usually once or twice every couple of development man years)
I'm not sure what type integration is, but the ORM I'm most familiar with does allow spanning multiple DBMSs with the same code (except when you had to break into DBMS-specific SQL for performance reasons).
Is type integration allowing static type checks against your query language? Entity Framework does this as well.
I think EF is the second type of ORM, unless I'm misunderstanding you.
> The problem with this tool, like every other multi-SQL-flavor-SQL ORM and query builder, is that it requires users to learn yet another language. In addition to Node.js and SQL, users need to learn the Prisma query language. This is not trivial, and users that are already accustomed to working with SQL will need to relearn PrismaSQL.
I'm not sure I'd fully agree with this! The "other language" in this case is an intuitive and natural API (in Node.js/TypeScript) for querying data, so hopefully there won't be much overhead to "learn" anything new. It should be rather the opposite and pretty straightforward to pick up; auto-completion and type safety will also contribute to making the experience of querying data fluent without much learning overhead.
We specifically decided to abstract away from SQL because we found that many developers don't feel productive with SQL as their main database abstraction (that's also why so many people roll their own data access layers in the end).
It sounds like your thinking is generally aligned with ours actually! The main difference is that we concluded that the query builder shouldn't be SQL-flavored but just a natural API for any Node.js or TypeScript devs.
Prisma's approach is well thought out, and I appreciate the new angle on an old and challenging problem. For Typescript users, especially new users, Prisma can be a big win. Abstracting SQL has huge benefits here (especially type safety/static analysis), and I look forward to seeing where this project goes.
That being said, my favorite database is PostgreSQL because it has so many features (and I'm just comfortable with it). At some point, a tool like Prisma (or Knex, TypeORM, etc) just cannot support all PostgreSQL features because it needs to support other flavors too. While some users may find this trade-off acceptable, I always find myself hacking around the tool to use the raw features. Therefore, my ideal environment would be a full-featured PostgreSQL query builder.
TL;DR I see the benefits of Prisma, but they're not for me at this point
I’m thinking of porting a project from Firebase to Postgres and whether to use an ORM and which one are hotly contested points in my research thus far.
If you want "more ORM" than Knex, then `Objection.js` is a good option. I think the other main option in Node land is TypeORM. Either of these is probably a good choice.
You essentially describe Elixir's Ecto, which has been lovely to work with. You may want to check it out.
There is no query-based language; the runtime API is generated from your DB layer and is fully TS/JS, so there is no new language to learn. It is just TS/JS.
You don't know which API to use? Type a dot and all the APIs are there; this is called IntelliSense.
So, it is not `like every other multi-SQL-flavor-SQL ORM and query builder`
`I think the best approach to this problem is a single-SQL-flavor query builder that attempts to match SQL as closely as possibly`, you do not work with GraphQL, do you...
Knex. I think Objection.js is much better.
It has type checked queries in plain SQL (based on SQLite syntax), compile time consistency checks, autocomplete, schema migrations, and more. The normal queries have all the guarantees, but in case you might want to use some vendor specific features, it also has the option of vendor queries.
Very very cool library.
Also, there's no mention of aggregations; could someone point me to them?
As far as typed query builders go, even though things like JOOQ are simply amazing, I fear that the query-building approach has not really caught on for database access, and people seem to prefer "object oriented" methods like ORMs.
Any comments from HN folks about why that is?
It certainly doesn't replace the need to know some SQL, but it does delay that which is great for so many people.
I'm definitely using this instead of any ORM for every project I can.
I really love having the fully typed interface for Typescript. Both for static type checking and for code completion in your editor.
We are using Prisma 2 as the default database client for Blitz.js, which results in a super nice stack, especially because the Prisma DB types flow all the way into your React components.
You can find the full recording here: https://www.youtube.com/watch?v=AnJxKWQG_fM
> the problem: Working with databases is difficult
Working with databases is a relatively solved problem. You can access them from just about any language on any platform. A more accurate statement would be: choosing the right access method to work with databases is difficult.
For me, the fiddly bit where you interface between the programming language and the database is a PITA.
For those of us who tend to think more visually or kinesthetically, I appreciate tools that are trying to solve problems at a less-lingual/code level.
This makes me think of all the brainstorming sessions I attended in my life with people asking over and over again, not convinced by most answers: "OK, guys, seriously now, what problem do we think we're solving here?"
As a status announcement, this isn't quite as exciting as it sounds because migrations are still "experimental". Still great to see!
Prisma is a database toolkit that's used by application developers to develop server-side applications in Node.js and TypeScript (e.g. REST APIs, microservices, gRPC calls, GraphQL APIs, ..., anything that talks to a database). The main tool Prisma Client is a query builder that's used to programmatically send queries to a database from Node.js/TS.
Hasura is a "GraphQL-as-a-Service" provider that generates a GraphQL API for your database. This GraphQL API is typically accessed by frontend developers. That setup can be great when your application doesn't require a lot of business logic and the CRUD capabilities that are exposed in the GraphQL API fit your needs (though I believe you can add business logic in Hasura by integrating serverless functions).
With Prisma, you're still in full control of your own backend application and can choose whatever tech stack you like for developing it (as long as it's Node.js-based, though Prisma Client will be available in more languages in the future)!
By the way, we also love GraphQL. We're currently brewing a new "GraphQL application framework" that can be used on top of Prisma. That way it will be possible to auto-generate resolvers for Prisma models to reduce the boilerplate you need to write, while still keeping the full control of your GraphQL schema.
You can learn more about this here: https://www.nexusjs.org/
(I’m from Hasura)
You can extend business logic in Hasura in a number of ways, including (but not exclusively) ones that work well with serverless and async architectures. Other examples follow:
1. You can extend it by adding business logic in the database via user-defined functions.
Eg: You want a fulltext search or a PostGIS function that is better off in the DB anyway.
2. You can bring your own GraphQL server with custom resolvers and Hasura will merge them into its own API and let you “join” across them as well.
3. You can bring REST APIs and add graphql types for them in Hasura and use it as custom resolvers that extend the schema as well.
Hasura’s key value add is an instant GraphQL API backed by your own data-sources (database, GraphQL, REST) and then a fine-grained authorization system on it.
Like Nikolas said, very different from Prisma. Hasura aims to add value as "infrastructure" by guaranteeing performance and security, whereas Prisma is like an ORM/database toolkit.
Does it make sense to slap Prisma on top of an existing production database?
It was definitely a design goal for us to make the existing production database use cases as seamless as possible.
Instead of adding a new DSL on top of the database, Hasura maps much of the DML subset of SQL over to GraphQL (tables, views, functions) so that we're not re-inventing that bit and the translation is restricted to the "relation set" to "tree" transformation. json aggregation and json operations in Postgres are phenomenal! Hasura's authz RLS-like layer injects authorization in as well to make that GraphQL API actually useful.
JOOQ has probably done the most phenomenal job in mapping almost all database constructs to a native language library, but there's a solid amount of type magic there which I'm not sure is portable to every language.
It is, but I've just been too lazy so far to actually do it.
If you give it a try and have a database handy, I bet you can have it up and running in less than 10 minutes.
Regarding Prisma: after talking with our team, who used Prisma quite happily in a greenfield project with migrations, I see my impression was a bit off. And I somehow forgot that migrations are still marked experimental and are fairly new.
PostGraphile at least has the decency to acknowledge it in their docs and suggest putting logic inside the database (not some "action" nonsense), which is an actual practice, albeit most don't like it. And they don't specifically tell you to hook client directly to the damn generated API!
This helps one solve the N+1 problem to some extent without having to maintain code specifically for DataLoader plus some ORM / custom query code. Comparatively, code written via the Prisma Client API is usually straightforward and succinct.
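The N+1 shape can be sketched with a query counter (toy code, not Prisma or DataLoader internals): per-parent lookups cost one query each, while a batched lookup costs one query total.

```typescript
type Post = { authorId: number; title: string };
const allPosts: Post[] = [
  { authorId: 1, title: "a" },
  { authorId: 1, title: "b" },
  { authorId: 2, title: "c" },
];
let queryCount = 0;

// N+1 style: one "query" per author.
function postsByAuthor(id: number): Post[] {
  queryCount++;
  return allPosts.filter(p => p.authorId === id);
}

// DataLoader-style batching: one "query" for all authors at once.
function postsByAuthors(ids: number[]): Post[] {
  queryCount++;
  return allPosts.filter(p => ids.includes(p.authorId));
}
```

A client that batches nested reads internally gives you the second shape without you writing the batching layer yourself.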
I'd like to give Prisma a try, but if there's no sane way to change my schema (part of my daily backend workflow) then it's less interesting, no matter how nice the API is. For now I'll stick with Sequelize.
However, we do see a lot of folks using third-party migrations (like knex.js or indeed Sequelize) and still getting the benefits of Prisma Client through introspection for the time being. For non-critical applications, we also already see lots of users who are trying out Migrate and helping us improve it through constant feedback! I'd love to hear your thoughts on the current version so that we can consider your feedback and ideas for Migrate when building it out over the next few months.
Prisma Migrate is different from ActiveRecord migrations (which you are likely very familiar with) because the DB schema is state-based: the Prisma schema file acts as the source of truth, and the DB schema will be migrated to match it.
Can you elaborate on what you would perceive as reaching the level of Django/ActiveRecord? I'd be interested in specific aspects/items.
1. a declarative model, i.e. defining the DB schema rather than the migrations
2. auto generated migrations with the ability to customize
3. integration with tools for deployment and testing
You probably have a much better idea of the landscape, but reach out to Andrew Godwin; he wrote South and then rewrote it to become Django migrations.
 - https://www.aeracode.org/
So, bring on type safe access, but don't make me learn yet another DSL which only works 70% of the time.
The ORM layer is not a DSL but some nicely done JS/TS functions
You are splitting hairs here as far as I'm concerned. You need to learn an API so you can do 70% of your queries. Then you need to learn SQL so you can do the other 30% of your queries and actually understand how to design a database. The queries that the "nicely done JS/TS functions" are replacing are almost always the simplest, most basic queries. Do you really need a special query language to say `select * from widgets`?
The big problem with every ORM layer is that you are essentially learning a disposable language. Every ORM says it's the best way to query, ever, and yet here we are, 5000 ORMs later, and SQL is still an essential skill for developers.
I know... "This time it's different!".
The main problems I've run into have been around utilizing standard postgres naming patterns (snake case for tables and fields instead of camelcase) and mapping the names in the prisma schema. Ran into a handful of bugs related to having these mappings that have all been fixed since. It still requires a post-introspect step to add the mappings, but that's not too big of a deal. Ideally the introspection would be able to handle database-specific conventions.
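The mapping step itself is mechanical; here's a tiny sketch (a hypothetical helper, not Prisma's introspection code) of turning snake_case column names into the camelCase names a generated client would expose.

```typescript
// Convert a Postgres-style snake_case identifier to camelCase,
// e.g. "created_at" -> "createdAt".
function toCamel(identifier: string): string {
  return identifier.replace(/_([a-z])/g, (_, ch: string) => ch.toUpperCase());
}
```

Having introspection apply a convention like this automatically (with `@map` annotations as the escape hatch) would remove most of the post-introspect step.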
Couple of other things I've run into that already have github issues:
- It would be great if along with the create/connect options on relationships for nested writes there was also an upsert.
- Better transaction support beyond just nested writes would be great and probably a requirement for a lot of apps. Thankfully, my server is relatively simple right now so banking a bit on prisma improving as my app grows in complexity.
> The main problems I've run into have been around utilizing standard postgres naming patterns (snake case for tables and fields instead of camelcase) and mapping the names in the prisma schema. Ran into a handful of bugs related to having these mappings that have all been fixed since.
Better re-introspection flows are indeed very much on our radar and something that we want to tackle soon! It would be great if you could leave a comment with your use case on GitHub, so we can make sure to address it properly when planning and prioritizing new features! :)
> Better transaction support beyond just nested writes would be great and probably a requirement for a lot of apps.
Same here! It would be really helpful for us if you could share some details about your use cases for transactions in the feature request so that we can incorporate them in our planning and design of the feature!
For your reference: https://github.com/prisma/prisma/discussions/2138
Both Prisma and ORMs abstract away from SQL and let you "think in objects". However, how they do that is different.
With ORMs, you typically map tables to model classes.
With Prisma, the focus is on queries and structural typing; queries return plain objects that are fully typed based on the query.
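The idea can be sketched with a mapped type (hypothetical code, not Prisma's actual generated client): the return type is narrowed to exactly the selected fields, so accessing anything else fails at compile time.

```typescript
interface User { id: number; name: string; email: string }

// The result type contains only the keys the caller selected.
type Selected<T, K extends keyof T> = { [P in K]: T[P] };

function pickFields<K extends keyof User>(row: User, select: K[]): Selected<User, K> {
  const out = {} as Selected<User, K>;
  for (const key of select) out[key] = row[key];
  return out;
}

const u = pickFields({ id: 1, name: "Ada", email: "ada@example.com" }, ["id", "name"]);
// u.email would be a compile-time error: the type of `u` has only id and name.
```

Note that `u` is a plain object, not a model instance; the typing lives entirely in the type system.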
For a broader comparison with ORMs, check out the documentation page about why Prisma is not an ORM: https://www.prisma.io/docs/understand-prisma/prisma-in-your-...
Prisma takes a fundamentally different approach by generating a database client that returns plain old JS objects. We've written more extensively about this topic in the docs: https://www.prisma.io/docs/understand-prisma/why-prisma
Look at the modern type systems we have around and try to see how they are different from what was mainstream by that time.