Related GraphQL implementations with similar concepts:

- https://www.graphile.org/postgraphile/

- https://hasura.io/

Love the idea of having APIs flow out of a single set of schema definitions. The Rails style of speccing a model, migrations, and controller/serializer or GraphQL types feels overly verbose and repetitive.

To me the biggest thing these groups could do to speed adoption is flesh out the feature development / test story. For instance, the postgraphile examples have development scripts that constantly clear out the DB and no tests. Compared to Rails, it's hard to imagine how you'd iterate on a product.

Are there other reasons this hasn't seen more widespread adoption? Is there some inherent architectural flaw or just not enough incremental benefit?




We've been using it for about a year in production across four projects and it's been a major boon for productivity, because it allows our frontend developers to build simple backend features on their own. There are a lot of use cases where it would be a great fit, and I'm sure a lot of companies could save a lot of time by using it as part of their toolkit.

It also doesn't have to be all or nothing. For example, when an endpoint needs multiple mutations inside a single transaction, we write a custom backend for it and route between that backend and postgrest using nginx.

Other things we use:

- dbmate for migrations: https://github.com/amacneil/dbmate

- postgrest-client, a library I developed for constructing type-safe queries to Postgrest in Elm: https://github.com/alex-tan/postgrest-client


You are addressing one of the issues on my mind: migrations.

In many situations, some kind of formal migration management is obligatory, and my guess is that with PostgREST you will often want to test your migrations on a current database snapshot before deploying them in production...


I think the approach is fine for prototyping, but the architectural flaw that can make it a bad choice for production systems is that it ties your APIs to your data model. When you need to evolve your API or extend your data model, you get to choose between dealing with potentially massive data migrations and downtime, or having a third layer of glue in the form of database views. You probably would need to adopt a rule of no direct-to-table access, and a separate schema per API version containing views, and dealing with the pain of rewriting the views for all the supported API versions when you need to change your data model. Or scale the database horizontally, with views on foreign data wrappers that pull from the real data sources. Which is why, I think, this sort of solution is marketed as a way for a database administrator (singular) to expose an API, rather than as a scalable approach to application development.
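
To make the foreign-data-wrapper variant concrete, here's a rough sketch with postgres_fdw (server names, credentials, and schemas are hypothetical):

  create extension postgres_fdw;

  -- a remote shard that holds the real data
  create server orders_shard foreign data wrapper postgres_fdw
    options (host 'shard1.internal', dbname 'orders');

  create user mapping for api_user server orders_shard
    options (user 'reader', password 'secret');

  -- pull the remote tables in as local foreign tables
  create schema remote;
  import foreign schema public from server orders_shard into remote;

  -- API views can now read from the remote source transparently
  create schema if not exists api;
  create view api.orders as select * from remote.orders;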


> You probably would need to adopt a rule of no direct-to-table access

That's one of the oldest RDBMS best practices: no direct-to-table access, with every application (or class of business user with direct access to the DB) going through a tailored selection of views, so that apps/users are largely isolated from DB changes.
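
As a minimal sketch of that practice (all names made up), each app gets its own schema of tailored views and no rights on the base tables:

  create schema billing_api;

  -- expose only what the billing app needs, under stable names
  create view billing_api.customers as
    select id, name, billing_email from customers;

  create role billing_app login;
  grant usage on schema billing_api to billing_app;
  grant select on billing_api.customers to billing_app;

  -- the customers base table can now change shape without the
  -- billing app noticing, as long as the view keeps its contract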

> and a separate schema per API version containing views

You'd probably only need a separate schema for semver-major API versions; minor versions would necessarily be supersets, which could be accommodated by backwards-compatible schema extension.

> and dealing with the pain of rewriting the views for all the supported API versions when you need to change your data model.

With a well-designed normalized relational model, most changes to the base model are adding attributes or tables, which will have zero impact on view definitions to maintain an existing API. The next most common is factoring an existing attribute out to a different table because what was conceived of as a 1:1 relationship becomes 1:N, which requires adding a join to view defs where that attribute is involved.
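
To illustrate that second case with a made-up example, here's a phone column factored out into its own table while the view keeps the old API shape:

  create schema if not exists api;

  -- before: phone was a column on users (a 1:1 attribute)
  --   create view api.users as select id, name, phone from users;

  -- after factoring phones out into a 1:N table, the view def
  -- just gains a join, and the exposed API shape is unchanged
  create view api.users as
    select u.id, u.name, p.phone
    from users u
    left join user_phones p
      on p.user_id = u.id and p.is_primary;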


It’s funny how “best practices” get mirrored basically everywhere up and down the stack. Everything at some point will have some way of decoupling data-access and data-structure, you just choose the level you want to do it at (toolkit, ORM, network api, database, schema/view, tablespace, filesystem...) and the number and granularity of layers, picking what you are most comfortable with.

It’s like, no matter how low-level you go, at some point someone had a problem with components being tightly coupled and introduced an indirection layer “for next time”.


How do you deal with writing operations in this approach?


> How do you deal with writing operations in this approach?

The same as read operations: with views. In Postgres, views are automatically updatable if they are a simple thin layer over base tables; otherwise they can be made updatable by means of appropriate triggers that define the meaning of INSERT/UPDATE/DELETE on them, just as the view definition defines the meaning of SELECT.
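
A quick sketch of both cases, with hypothetical tables:

  create schema if not exists api;

  -- a thin view over one table is auto-updatable in Postgres;
  -- INSERT/UPDATE/DELETE pass straight through to the base table
  create view api.todos as
    select id, task, done from todos where not archived;

  -- a view too complex to be auto-updatable (here, a join) can be
  -- made writable with an INSTEAD OF trigger
  create view api.orders as
    select o.id, c.name as customer, o.total
    from orders o join customers c on c.id = o.customer_id;

  create function api.orders_insert() returns trigger as $$
  begin
    insert into orders (customer_id, total)
    select c.id, new.total from customers c where c.name = new.customer;
    return new;
  end;
  $$ language plpgsql;

  create trigger orders_insert
    instead of insert on api.orders
    for each row execute procedure api.orders_insert();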


> You probably would need to adopt a rule of no direct-to-table access, and a separate schema per API version containing views, and dealing with the pain of rewriting the views for all the supported API versions when you need to change your data model.

My understanding is there is no direct-to-table access with PostgREST; you're only ever interacting with views. Keep your view API versions in separate schemas. If you change your tables, yes, you'll need to update your views, but I don't see how that's much different from having to modify ORM code to accommodate schema changes.


The examples in these docs start with querying tables, and only afterwards go "oh, it also works with views, if you need more complex filtering".


Right, and from my experience the automatic CRUD endpoints don't deal well with views unless you annotate the view with postgrest-specific comments, which seems rather icky to me.


you are confusing postgrest with postgraphile :)


Good point. Whoops.


You can't just throw complexity at people from the first lines of documentation, especially when talking about new ideas.


I'd love to point out that in a strictly RESTful design, your client is supposed to be discovering the data model from queries, not embedding it in code. Ergo data and schema migrations should be no trouble at all.

Not sure I understand how any other approach to provisioning a data API is going to be able to avoid requiring a data model.


In practice it is extremely rare to find a pure REST model. It is hard to do properly.


I actually think the one schema to rule them all approach is a bad one. Different parts of the stack have different use cases and require different data representations. I think it's a fundamental flaw many ORMs fall into.

As for postgrest and GraphQL, queries and data transformations are proof that your DB schema shouldn't be forced onto the stack and onto the clients. I would argue that GraphQL strives for the exact opposite of a single schema and tends toward a 'clients should choose their own API' approach.


You haven't really looked into postgrest; it does not force the database schema onto the client. You expose a schema with views that are tailored for the frontend and can be totally different from the underlying tables that hold the actual data.


I admit I haven't, but I totally agree with you. I'm attempting to say that postgrest and GraphQL are _not_ examples of APIs from 'a single schema definition', in that they are not the previous incarnation of that desire, where an ORM would tightly couple your imperative types to your normalized schema. These new technologies have a holistic/referential public API, but they allow client, server, and DB to each have different schemas, and I just want to highlight the difference.


Yes, but the constraints SQL has w.r.t. how you represent certain datatypes can lead to problems. Sure, in Postgres you can fix that with a variety of compound types, or with fully new types implemented as a pg extension, which can scare people off.

Still, there are a ton of applications for which just the basic types are good enough and the server is just a thin wrapper.

In the end, the main requirement for using it efficiently is your skill with non-basic SQL/PL/pgSQL (stored procedures, trigger functions, views, and knowledge of the effects of the isolation level used). All well documented in Postgres.


> In the end, the main requirement for using it efficiently is your skill with non-basic SQL/PL/pgSQL

Another way I've heard this phrased is "It's much easier to hire JS devs than PL/pgSQL devs."


Yes, this is the true reason why these elegant solutions don't see much adoption.

Worse is better. Cheap JS devs solving everything with heaps of imperative code are cheaper than elegant solutions.

Even long term this is true. Tech debt in code that people can understand and modify is cheaper than tech debt in an elegant DSL that requires learning and mental ability beyond if-then-else.


There's a postgres extension called plv8 that essentially allows one to write database functions in JavaScript, instead of plpgsql.
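
A trivial example, assuming the extension is available (it isn't bundled with Postgres, so check your hosting first):

  create extension if not exists plv8;

  -- the function body is plain JavaScript
  create function vowel_count(s text) returns int as $$
    return (s.match(/[aeiou]/gi) || []).length;
  $$ language plv8;

  select vowel_count('PostgREST');  -- 2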


Oh, absolutely. Same with Python, Tcl, Perl... but that's not the part I'm talking about. The business logic being encoded in PL/* is just fine. But having to write complex triggers that wrap PL functions, making sure the state of the functions is as expected, and all of the other wiring is just not something most people are used to.

I've recently seen an uptick in "Postgres-native" stuff being built, though. So maybe there's hope for the future.


What do datatypes have to do with this? For both PostgREST and any other API layer the situation is the same. The data sits in the db/tables (so both parties are restricted to the types available in pg) and on the other end you need to output JSON. The fact that you can have some "smart" internal datatypes in other API layers has zero advantages; both systems basically have to take data from the db and display it as JSON, so the intermediate types are irrelevant.


you can write your stored procedures in JS or Python if you don't like plpgsql that much :)


> Are there other reasons this hasn't seen more widespread adoption? Is there some inherent architectural flaw or just not enough incremental benefit?

I'm a fan of the concept and would love to have a middle layer as thin as postgrest, but authentication/authorization has been my roadblock. I still use node as an app server because of passport. I had not heard of the postgrest starter kit before today and will check out https://docs.subzero.cloud/athentication-authorization-flow/

If they correct the typo before I get to reading it, it will be https://docs.subzero.cloud/authentication-authorization-flow...


You could use Auth0 [1] to authenticate your clients with all the identity providers (Facebook/Google/etc.) and then just have the API clients use that JWT when talking to postgrest.

If, however, you want to authenticate the users yourself, the login function can be very simple [2].

1. http://postgrest.org/en/v6.0/auth.html#jwt-from-auth0

2. https://github.com/subzerocloud/postgrest-starter-kit/blob/m...
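
For anyone curious, [2] boils down to roughly this shape: verify the password with pgcrypto's crypt() and return a signed JWT. The table, secret setting, and pgjwt sign() usage below are my loose reconstruction, not a copy of the starter kit's code:

  create function login(email text, pass text) returns text as $$
  declare
    usr record;
  begin
    -- pgcrypto: crypt() re-hashes pass with the stored hash's salt
    select * into usr from basic_auth.users u
    where u.email = login.email
      and u.password = crypt(login.pass, u.password);

    if not found then
      raise exception 'invalid email or password';
    end if;

    -- pgjwt: sign(payload, secret) builds the token PostgREST verifies
    return sign(
      json_build_object(
        'role', usr.role,
        'email', usr.email,
        'exp', extract(epoch from now())::integer + 3600),
      current_setting('app.settings.jwt_secret'));
  end;
  $$ language plpgsql security definer;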


I am using postgraphile in library mode with Keycloak (an open source project: https://www.keycloak.org/) and passport.js instead of auth0, and it works great.


I would expect authorization and validation to be a real pain with PostgREST (I don't know enough about it to tell definitively, though).

Even row-level access control, while not widespread anyway, is often not enough. Complex validation in SQL can be done, of course, but SQL isn't exactly meant for these things, IMHO.


This is an interesting read on validation in postgres: https://begriffs.com/posts/2017-10-21-sql-domain-integrity.h...
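
A flavor of what the article covers (the rule here is hypothetical): a domain centralizes the check, and every table or function that uses it gets the validation for free:

  create domain email_address as text
    check (value ~ '^[^@\s]+@[^@\s]+\.[^@\s]+$');

  create table subscribers (
    id    serial primary key,
    email email_address not null
  );

  -- rejected by the domain's CHECK before any app code runs
  insert into subscribers (email) values ('not-an-email');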


Stored procedures often offer a better and more efficient way to implement data validation than non database native alternatives.


It may indeed be efficient.

Maybe I just don't like the idea of putting all my application logic into SQL statements.

What I like about Django's ORM, for example, is that you can declare a field to be an IP address, and this not only does the validation (which SQL can do easily, I am sure), but the declaration also carries over to HTML forms, the admin interface, REST API serialization, and HTML templates.


why would you "expect" that if you don't know about it?


I know more than basic SQL. Maybe I am indeed not an accomplished enough master in SQL, but I can't see why I would ever prefer SQL over Python in terms of defining application logic.


This is a solved problem: https://github.com/subzerocloud/postgrest-starter-kit

It's just that the tools you linked didn't prioritize the development workflow from the point of view of a backend developer (code in files, git, tests, migrations) but from the frontend developer perspective (UI to create tables).


The SubZero starter kit is great, and made this whole architecture seem practical to me. Thanks for your hard work here! (I'm currently using PostGraphile, but the starter kit is still an important tool for me.)


Have you tried the PostGraphile Starter? It’s not officially released yet but you can find out more in the Discord chat #announcements channel. https://discord.gg/graphile



Do you happen to know if there is a video presentation about this project? After skimming the docs and checking the comments here, I am still unsure what this project has to offer that you don't already have or couldn't easily piece together. Thanks.


How do you easily piece together a (powerful) REST API just from having the definitions of the tables in the database?

Very, very old videos, but you can get the core idea:

https://begriffs.com/posts/2016-03-20-postgrest-tour.html


thanks


I've been using Postgraphile along with Apollo Client with code-generated types for queries. When I change the db schema, I get compile errors in my client. Feels like I'm cheating


I'm doing something similar, but using type-graphql + apollo server + graphql-code-generator + mikro-orm (v3).

I just define my model classes. Graphql resolvers look like regular rest endpoints. Everything else is generated from there and I even get compile errors for my apollo client queries too.

I really love the typesafety of it all!


That's just bad schema definition then; you should have defaults, so mutations with only a subset of parameters shouldn't fail, and queries shouldn't fail anyway.

If you are actually changing variable names, then your client would fail regardless of what you're using.


We used Hasura for a while, but it was too easy for clients to write queries which would take it forever to complete.

Postgrest was better, but we had trouble getting it to work with JWT, and have a reasonable permissions model.

Postgrest was DAMN fast though, it was really nice to work with.

In the end we had to write our own.


FYI, query whitelisting is on its way in Hasura, which pretty much solves the first issue.


Query whitelisting has been available since beta 1. It is called Allow Lists :) https://docs.hasura.io/1.0/graphql/manual/deployment/allow-l...

We recommend every production app set up Allow Lists to prevent arbitrary querying.


> The Rails style of speccing a model, migrations, and controller/serializer or GraphQL types feels overly verbose and repetitive.

For the record, ActiveRecord models do not specify, for example, table schemas - they are loaded from the database at runtime, and migrations are not really necessary - they provide schema versioning with a rollback possibility, something that AFAIK the databases don't usually offer. Rails (and Ruby) is actually so non-verbose and non-repetitive that a lot of people criticize it for that very reason, usually complaining about it being too "magical".


> they provide schema versioning with a rollback possibility, something that AFAIK the databases don't usually offer

But stand-alone migration tools like Sqitch do.[0]

[0] https://github.com/sqitchers/sqitch


I haven't used Rails a lot, but in Django you get a lot more from the ORM models than just database queries.

The Django ORM is used pervasively throughout django and third party libraries. You can compose and reuse querysets in Templates, Form validation, authorization and so on. You can create a CRUD admin interface and a REST API with almost no extra configuration.

It also has a very mature migration system, meaning you can have your migrations in code, and apply them in production or in multiple environments.

One problem I expect with PostgREST (I haven't used it yet) is that validation and authorization may suffer. At the very least these will have to be configured either as SQL data or as SQL stored procedures, and SQL is not the strong suit of most frontend people.

They probably have safeguards against SQL injection through their API. Logic validation will be harder, and Postgres only supports rudimentary permissions, so all of this has to be implemented in stored procedures, too.

My main concern with PostgREST is that the database is stateful and persistent, and that you have to put more code in there. This means every deployment is going to be a migration, no matter what. You'd better not forget any of the state changes you made in your development database. Version management will certainly be harder.

Please don't get me wrong. I am sure PostgREST has perfectly valid usecases, and I like Postgres a lot. I am just dropping a few thoughts on why I would prefer a traditional application between database and frontend.


There are trade offs when you couple your front end to your database structure.

Additionally, if you are using a lossy intermediate query representation instead of plain SQL, you will end up losing a lot of the database performance and find yourself doing complex joins on your front end.


This is a common misunderstanding, and I think a big reason why people are reluctant to try this type of architecture. The idea here is not to couple client code to the database structure. You should expose schemas that contain views on top of the basic schema, which provides good encapsulation and prevents both of the issues you bring up. (This was common 25 years ago, when server-side code often amounted to a db schema containing stored procedures.)


That’s a step in the right direction but I’m not sure if it really solves the problem.

But as I said, there are trade offs. I didn’t say it’s always a bad idea.


I agree there are always trade-offs, but at least as far as "losing a lot of the database performance" and "doing complex joins on your front end" go, I don't think there is anything in the PostgREST approach that would push in that direction. For more background on the performance part (Oracle-specific, but most of it carries over) check out http://www.prohuddle.com/webinars/ToonKoppelaars/ThickDB.php


It's amazing that at PostgREST we have reached the same pattern for exposing the API (views + sps).

Thanks a lot for sharing.


When it comes to these tools, all your statements are wrong; give them 5 minutes and check out their capabilities (there is no coupling, and no joins in the frontend) :)


> Are there other reasons this hasn't seen more widespread adoption?

This approach had widespread adoption in the 1990's, most notably via PowerBuilder[0].

> Is there some inherent architectural flaw or just not enough incremental benefit?

Yes, there is an inherent architectural flaw with two-tiered systems in that they force a system to choose to place complexity either in the GUI or in the RDBMS. This often leads to "Jenga Programming" maintenance techniques which cause the code base to take on a life of its own.

It turns out that this does not scale.

0 - https://en.wikipedia.org/wiki/PowerBuilder#History


PostgREST's approach is not like PowerBuilder's.

PowerBuilder didn't encourage you to know your DB ins-and-outs and use all of its features. It was DB agnostic and through its DataWindow you got a RAD way of doing CRUD for a desktop app but you still needed to add application logic in PowerScript.

PostgREST on the other hand pushes you to know PostgreSQL.

A more suitable analogy would be to something like Oracle Apex[1].

[1]: https://en.wikipedia.org/wiki/Oracle_Application_Express


> PostgREST's approach is not like PowerBuilder's.

My intent was not to equate PostgREST with PowerBuilder, but instead to use the latter as a well-known example of a two-tier system.

> PowerBuilder didn't encourage you to know your DB ins-and-outs and use all of its features.

While PowerBuilder was "DB agnostic", as you identified, non-trivial applications relied heavily on RDBMS logic (stored procs, views, etc.). So while PB may not have "encouraged" RDBMS-specific implementations, those which I encountered were never without them.

Granted, the client connection protocol vastly differs between the two offerings. Yet the system architecture is shared IMHO.


Why do you think this is a two tier system?

Even here [1] I count 4 places to put logic besides the frontend.

1. https://github.com/subzerocloud/postgrest-starter-kit/blob/m...


> Why do you think this is a two tier system?

The first sentence of the site reads thusly:

> PostgREST is a standalone web server that turns your PostgreSQL database directly into a RESTful API.

If there is a database on one end and a client on the other, with no intervening system between them, that is the definition of a two-tiered system.


Sure, you can use it as two-tier if that does the job, but that does not mean you are forced to. You can very well have logic in the proxy layer, and then have logic below the db layer (workers reacting to database changes). If I apply the same logic (two-tier = bad), then I can say every other way of building an API is two-tier, since you have the frontend and then the rest of the code in php/node... the db would not really count as a layer, since in those scenarios people use it as a dumb store.


> Sure, you can use it as two-tier if that does the job, but that does not mean you are forced to ...

The project home page declares this to be its raison d'être.

> ... You can very well have logic in the proxy layer, and then have logic below the db layer ...

Why have a "proxy layer" to "then have logic below the db layer"? Why not just let PostgreSQL manage the data and have...

wait for it...

A system which defines the application logic? I mean, that is what you described in a roundabout way.

> If I apply the same logic (two-tier = bad), then I can say every other way of building an API is two-tier, since you have the frontend and then the rest of the code in php/node... the db would not really count as a layer, since in those scenarios people use it as a dumb store

No, what you describe as "an API" (which assumes an external client that interacts with it) and "then the rest of the code" (which is colloquially known as application logic) using PostgreSQL (what you call "a dumb store") is known in the industry as a "three-tiered system."

Not a "two-tiered" system.

This is because there are three significant actors involved in system operations:

1 - something that requests the service provided (browser, phone, desktop binary, etc.).

2 - something that provides the service (application logic).

3 - something that persists relevant state as determined by #2.

See what happened there?

Three distinct things; a client, something that does stuff, and something that stores stuff.


I couldn't stop thinking REST could be derived from SQL...

Similarly, after a week of vuejs, I couldn't stop thinking redux/vuex should be an embedded sqlite...


You might find CouchDB and PouchDB interesting then. CouchDB works over REST and there's a client-side version called PouchDB that you can selectively sync between.


> Are there other reasons this hasn't seen more widespread adoption?

I think it's because PostgreSQL got major features for web development — like JSON and RLS — not too long ago and most developers are just catching up.

JSON was added back in 2013[1] and RLS back in 2016[2].

[1]: https://www.postgresql.org/docs/release/9.3.0/

[2]: https://www.postgresql.org/docs/release/9.5.0/
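
RLS is what makes the database-as-API model plausible for multi-user apps. A minimal sketch; the JWT-claim setting name follows PostgREST's convention as I understand it, so treat it as an assumption:

  create table todos (
    id    serial primary key,
    owner text not null,
    task  text
  );

  alter table todos enable row level security;

  -- PostgREST exposes the verified JWT claims as per-request
  -- settings, so each user only ever sees their own rows
  create policy todos_owner on todos
    using (owner = current_setting('request.jwt.claim.email', true));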


JSON support wasn't all that great until JSONB columns were added in December 2014[1]. I think that was when devs started taking it seriously.

[1] https://www.postgresql.org/docs/9.4/release-9-4.html


The Graphile folks are working on Graphile Migrate to improve the migrations story: https://github.com/graphile/migrate

Separately, I've got a fairly advanced setup of schema/seed lifecycle flow in a Graphile-powered application that will likely be open-sourced at some point. The simplicity of the architecture makes these things relatively easy to spin up from scratch.


Thanks for mentioning Graphile Migrate. Migrations are often the biggest issue people have with something like PostgREST and PostGraphile; I’ve used a lot of the various migration tools and Graphile Migrate is the seed of my vision for a much more compelling migration story for everyone in the PostgreSQL community (especially people who don’t mind writing SQL but find the current up/down pattern too verbose/clunky as I do). There’s lots more to come, so make sure you follow the project if you’re interested.
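
For the curious, the "less clunky than up/down" idea is essentially idempotent SQL that you re-apply while iterating. A hedged sketch of the style, not Graphile Migrate's actual file conventions:

  -- re-runnable: applying this twice leaves the same state
  create schema if not exists api;

  drop view if exists api.todos;
  create view api.todos as
    select id, task, done from todos where not archived;

  create or replace function api.complete_todo(todo_id int)
  returns void as $$
    update todos set done = true where id = todo_id;
  $$ language sql;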


How does it compare with marv [0]?

[0] https://www.npmjs.com/package/marv


Any equivalent for MySQL? I have a legacy project on MySQL 5.1 I need to keep up. I have been using Peewee to generate the structure of the DB, but would like to make direct API calls like PostgREST allows.



xmysql - Creates instant REST APIs over a mysql database.

https://github.com/o1lab/xmysql/

Give it a try.

(I'm the author)


o1lab, FYI, I considered xmysql recently; for my needs it's not currently enough because I'd like to have access rights management. IMHO, utilizing the DB users for that is ideal - authenticate the MySQL DB users in your REST service session. (PostgREST seems to do this.)


ACL and other features are a work in progress. Please watch the repo for updates.


Deploy Kong in front?


Inherent architectural flaws:

- Complete lack of separation of concerns: you can see this in the terrible idea of simply slapping a transformation procedure onto a non-specified part of the codebase. Every time you do this you end up having to maintain code that has no architecturally defined place anywhere. See: https://postgrest.org/en/v6.0/how-tos/casting-type-to-custom...

- There is no ORM that you can control. It doesn't mean that there is no transformation needed, though; it simply means that you lose control over it. All the work which was once under your control is now handled by the database layer. Of course you can go to that layer and change the logic of the framework yourself (by sending strangely formatted HTTP requests according to the doc), but then you gain nothing by using this framework. This problem stems from the same fact that there is no separation of concerns. So basically the framework is simply ignoring the fact that there is an inherent object-relational impedance mismatch. Of course the authors are aware that there is a mismatch, but they simply sweep it under the rug by saying:

  "The PostgREST URL grammar limits the kinds of queries clients can perform. It prevents arbitrary, potentially poorly constructed and slow client queries. It's good for quality of service, but means database administrators must create custom views and stored procedures to provide richer endpoints."

This will come back and bite you in the ass when you try to work with anything slightly complicated, like inheritance for example. So the authors basically dump every single person in the world into the category of poor programmers, and offer their excellence in place of your lack of skill or talent. Go figure! I am always very skeptical when I read something like this. See: https://postgrest.org/en/v6.0/api.html#resource-embedding

https://postgrest.org/en/v6.0/api.html#custom-queries

- No testability: PL/pgSQL is a really poor language to write unit/integration tests for, mostly because it's ancient and procedural and there are no native frameworks to do so. Of course you can use PL/Python and others, but then again, why would you, when you can write the same thing in Python without embedding the code in Postgres, and the Python ORM would take care of having the same performance on the database level for you.

- Lack of tools for migrations, etc. The list goes on.


Coming from a traditional layered approach I can see why this "de-layered" approach can seem wrong. After all, I had to ditch the knowledge I had of the Django ORM, Sequelize, and Hibernate (I wish I could forget entirely the Spring/JPA books I've read, but I still remember some parts). But as mentioned in other comments, if you give it 5 minutes—and embrace the db—perhaps you'd see the advantages of this approach over more traditional ones.

> There is no ORM that you can control

You don't need an ORM to control your database. If you treat the database as the source of truth (this would be the architecturally defined place), you can do data transformations with plain and short SQL. The great thing about this is that you can leverage PostgreSQL's rich functionality and extension ecosystem, and you don't have to worry about whether your ORM supports a pg type or whether you need a plugin for it.

> by sending strangely formatted HTTP requests according to the doc

The "strangely formattted request" wouldn't be so strange if you note that PostgreSQL cast operator is "::"(e.g. `select col::json from table`).

> Python ORM would take care of having the same performance on the database level for you.

That's just false. ORMs have to target the lowest common denominator of databases. They don't use the full potential of pg (things like LATERAL joins, json_agg, etc). PostgREST's queries are specifically tailored to pg and are constantly being improved.
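
To make that concrete, here's the kind of pg-specific shape meant: nesting a child collection into parent rows in a single round trip (hypothetical schema):

  select
    a.id,
    a.name,
    coalesce(b.books, '[]'::json) as books
  from authors a
  left join lateral (
    -- the database does the JSON aggregation, not the app server
    select json_agg(json_build_object('title', bk.title)) as books
    from books bk
    where bk.author_id = a.id
  ) b on true;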

> No testability

I've shared this before, but by using pgTAP[1] you can get native SQL tests.

HNers have shared migration tools in other comments.

[1]: https://pgtap.org/
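
A taste of what a pgTAP test looks like (view and schema names invented); it runs in a transaction and rolls back, so the database is left untouched:

  begin;
  select plan(2);

  -- structural assertion: the API schema exposes the view
  select has_view('api', 'todos', 'api.todos exists');

  -- behavioral assertion: the view returns what we expect
  select results_eq(
    'select count(*)::int from api.todos where id = -1',
    array[0],
    'no phantom rows'
  );

  select * from finish();
  rollback;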


> Coming from a traditional layered approach I can see why this "de-layered" approach can seem wrong.

One of the most significant architectural defects of this approach is that, by demanding the persistent store provide functionality beyond solely managing data, it both conflates disparate concerns (business logic + data management) and unduly complicates system evolution.

> If you treat the database as the source of truth (this would be the architecturally defined place), you can do data transformations with plain and short SQL.

And if you treat the database as the source of truth for managing data, then the idea of making it an application engine would be anathema.


> disparate concerns (business logic + data management)

We argue that all "business logic" is in fact "data logic" and they're the same concern. There are several comments on this post about how to evolve this approach (only use VIEWs/sps in a single schema and expose that to clients).

> And if you treat the database as the source of truth for managing data, then the idea of making it an application engine would be anathema.

Why? Could you elaborate more on that?


> > And if you treat the database as the source of truth for managing data, then the idea of making it an application engine would be anathema.

> Why? Could you elaborate more on that?

Sure.

Your previous declaration of:

> We argue that all "business logic" is in fact "data logic" and they're the same concern.

Is both the source of my position and your question.

For at least the last 50 years of which I am aware, the idea of "data rules all" has inexorably led to migrating system logic closer to where the data resides. In the last 35-ish years, this has meant defining application logic in stored procedures executed in an RDBMS (such as what you describe).

This architectural philosophy is commonly known as "client/server," or "two-tier systems," and has been studied extensively (as I am sure you are aware). What makes this relevant to the argument found in the "Motivation" section on the PostgREST site is:

  Using PostgREST is an alternative to
  manual CRUD programming.

And then:

  The PostgREST philosophy establishes
  a single declarative source of truth:
  the data itself.

To achieve that, the logic must reside within the RDBMS in order to fulfill the project goal set forth in the very first line of the site's home page:

  PostgREST is a standalone web server
  that turns your PostgreSQL database
  directly into a RESTful API.

For all intents and purposes, PostgREST could be considered an RDBMS connection pool, with the connection protocol being HTTP. Note that this is not a judgement, only an observation.

Now, consider a different system architecture commonly known as "three-tier" or "multi-tier." The philosophy of this type of system is largely defined by separation of concerns. This is to say, have each tier be responsible for a single "layer" involved in system definition.

In multi-tier systems, the persistent store (RDBMS) is solely responsible for ensuring data integrity. It may, in fact, service disparate components as well as disparate components may use more than one persistent store. An example of the latter is PostgreSQL and Redis deployed in concert. Programs, components, and/or systems which interact with a persistent store are responsible for providing the behaviour expected by their clients whilst the persistent store is expected to enforce rules regarding the relationships it manages.

Since the persistent store is only asked to fulfill a data integrity contract, having it also attempt to be an application engine would clearly conflate concerns, limit what persistent stores could be used, as well as result in assumptive implementations. Hence anathema to those who would treat the persistent store (database) as the source of truth strictly for the one concern which it was designed to address:

Managing data.


I see what you mean. This is the view that the database should only be a Persistence Layer and not a Processing Engine.

There is a great talk[1] that @felixyz shared regarding the shortcomings of the Persistence Layer approach and the advantages of the Processing Engine approach (which we at PostgREST embrace).

In summary, having the "business logic" away from the db results in worse performance: more network roundtrips, ineffective cache utilization, and wasted CPU cycles traversing your programming language VM (plus all libs) code paths. It turns out this actually puts more strain on the DB and makes your whole system scale worse.

> Hence anathema to those who would treat the persistent store (database) as the source of truth strictly for the one concern which it was designed to address: Managing data.

I think marrying these concerns and having them in a single place is more of a benediction for OLTP systems development and maintenance. After all, PostgreSQL has gained more powerful features over the years and has evolved to do more than data management (LISTEN/NOTIFY, JIT, etc).

I really recommend watching that talk. Btw, slides here[2].

[1]: http://www.prohuddle.com/webinars/ToonKoppelaars/ThickDB.php

[2]: https://www.nloug.nl/downloads/ogh20170615_toon_koppelaars.p...
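
To make the round-trip point concrete: what would be a fetch/modify/write loop in app code is one set-based statement in the database (hypothetical schema):

  -- one statement instead of N selects + N updates over the network
  update order_items oi
  set price = oi.price * 0.9
  from orders o
  where o.id = oi.order_id
    and o.coupon = 'SAVE10';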


This is still a horrible idea.

No IDE support, no debugging support, poor language choice. It's just simply screaming NO NO NO NO NO NO NO on every level of the framework.

Not to mention that these days it is more important than ever for the code to also be runnable in managed cloud environments like GCP Cloud SQL or AWS RDS.

> You don't need an ORM to control your database.

You are actually wrong about that, and you are basically living proof that I was right. You didn't even try to address what I wrote about the object-relational impedance mismatch and inheritance.

> you can leverage PostgreSQL's rich functionality

I think this is a slightly outdated worldview. As I mentioned above, these days the most important driving factors are not what a specific DB vendor supports but what the cloud providers do. And many of the cool things are only supported by plugins in Postgres anyway.

> you can do data transformations with plain and short SQL

Guess what? You can do the same with ORMs too! In any language which has a Postgres DB driver! No one is stopping you from that.


> object-relational impedance mismatch and inheritance

Can you elaborate on that? PostgreSQL is an object-relational DBMS, btw; it supports inheritance[1]. Though I don't know exactly what impedance you are referring to, since PostgREST doesn't use an ORM.

[1]: https://www.postgresql.org/docs/12/tutorial-inheritance.html
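
The tutorial example from [1], for reference: a child table inherits all columns of its parent, and queries on the parent include child rows by default:

  create table cities (
    name       text,
    population float,
    elevation  int     -- in feet
  );

  -- capitals have everything cities have, plus a state
  create table capitals (
    state char(2)
  ) inherits (cities);

  -- by default this scans both cities and capitals
  select name, elevation from cities where elevation > 500;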



