
PostgREST - deepersprout
https://postgrest.org/
======
rgbrgb
Related graphql implementations with similar concepts:

\- [https://www.graphile.org/postgraphile/](https://www.graphile.org/postgraphile/)

\- [https://hasura.io/](https://hasura.io/)

Love the idea of having APIs flow out of a single set of schema definitions.
The Rails style of speccing a model, migrations, and controller/serializer or
graphql types feels overly verbose and repetitive.

To me the biggest thing these groups could do to speed adoption is flesh out
the feature development / test story. For instance, the postgraphile examples
have development scripts that constantly clear out the DB and no tests.
Compared to Rails, it's hard to imagine how you'd iterate on a product.

Are there other reasons this hasn't seen more widespread adoption? Is there
some inherent architectural flaw or just not enough incremental benefit?

~~~
stubish
I think the approach is fine for prototyping, but the architectural flaw that
can make it a bad choice for production systems is that it ties your APIs to
your data model. When you need to evolve your API or extend your data model,
you get to choose between dealing with potentially massive data migrations and
downtime, or having a third layer of glue in the form of database views. You
probably would need to adopt a rule of no direct-to-table access, and a
separate schema per API version containing views, and dealing with the pain of
rewriting the views for all the supported API versions when you need to change
your data model. Or scale the database horizontally, with views on foreign
data wrappers that pull from the real data sources. Which is why I think this
sort of solution is marketed as a way for a Database Administrator (singular)
to expose an API, rather than a scalable approach to application development.

~~~
dragonwriter
> You probably would need to adopt a rule of no direct-to-table access

That's like one of the oldest RDBMS best practices: no direct to table access,
with every application (or class of business user with direct access to the
DB) having access through a tailored selection of views so that apps/users are
largely isolated from DB changes.

> and a separate schema per API version containing views

You'd probably only need a separate schema for semver-major API versions;
minor versions would necessarily be supersets, which could be accommodated by
backwards-compatible schema extension.

> and dealing with the pain of rewriting the views for all the supported API
> versions when you need to change your data model.

With a well-designed normalized relational model, most changes to the base
model are adding attributes or tables, which will have zero impact on view
definitions to maintain an existing API. The next most common is factoring an
existing attribute out to a different table because what was conceived of as a
1:1 relationship becomes 1:N, which requires adding a join to view defs where
that attribute is involved.
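The 1:1-to-1:N refactoring described above can be sketched end to end. This is a minimal illustration using Python's built-in sqlite3 (Postgres view DDL is essentially the same); the `users`/`phones`/`api_users` names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Original model: one phone number per user, stored inline on the users table.
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada', '555-0100')")
conn.execute("CREATE VIEW api_users AS SELECT id, name, phone FROM users")

# The 1:1 assumption breaks: a user can now have many phones. Factor the
# attribute out to its own table, then re-point the view at a join. (A real
# migration would also drop users.phone; omitted here for brevity.)
conn.executescript("""
    CREATE TABLE phones (user_id INTEGER REFERENCES users(id), phone TEXT);
    INSERT INTO phones SELECT id, phone FROM users;
    DROP VIEW api_users;
    CREATE VIEW api_users AS
        SELECT u.id, u.name,
               (SELECT p.phone FROM phones p WHERE p.user_id = u.id LIMIT 1) AS phone
        FROM users u;
""")

# Clients of the view see the same shape as before.
row = conn.execute("SELECT id, name, phone FROM api_users").fetchone()
print(row)  # (1, 'ada', '555-0100')
```

The view definition absorbs the join; an API reading `api_users` never notices the base model changed.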

~~~
andrey_utkin
How do you deal with writing operations in this approach?

~~~
dragonwriter
> How do you deal with writing operations in this approach?

The same as read operations, with views, which can be, in Postgres,
automatically updatable if they are a simple thin layer over base tables, and
otherwise can be made updatable by means of appropriate triggers defining the
meaning of INSERT/UPDATE/DELETE on them just as the view definition defines
the meaning of SELECT.
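As a concrete sketch of an INSTEAD OF trigger making a view writable, here is a runnable example using Python's sqlite3 (SQLite views are always read-only without one; Postgres auto-updates simple views and uses the same trigger mechanism for complex ones). Table and view names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE VIEW api_people AS SELECT id, name FROM people")

# The INSTEAD OF trigger defines what INSERT on the view means, just as the
# view's SELECT defines what reading it means.
conn.execute("""
    CREATE TRIGGER api_people_ins INSTEAD OF INSERT ON api_people
    BEGIN
        INSERT INTO people (name) VALUES (NEW.name);
    END
""")

conn.execute("INSERT INTO api_people (name) VALUES ('grace')")
row = conn.execute("SELECT name FROM people").fetchone()
print(row)  # ('grace',)
```

The caller only ever touches `api_people`; the trigger routes the write to the base table.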

------
stefanchrobot
Somebody on our team put this in production. I guess this solution has some
merits if you need something quick, but in the long run it turned out to be
painful. It's basically SQL over REST. Additionally, your DB schema becomes
your API schema and that either means you force one for the purposes of the
other or you build DB views to fix that.

~~~
cryptonector
> or you build DB views to fix that.

That's what VIEWs are for! Well, one use-case of VIEWs, anyways.

There's nothing wrong with the schema as the API since you can use VIEWs to
maintain backwards compatibility as you evolve your product.

Put another way: you will have an API, and you will need to maintain backwards
compatibility. Not exposing a SQL schema as an API does not absolve you of
that or make it easier to be backwards-compatible.

You might argue that you could have server-side JSON schema mapping code to
help with schema transitions, and, indeed, that would be true, but whatever
you write that code in, it's code, and SQL serves just as well as anything
else.

~~~
mycall
How do you do CRUD with views? I know Reads are what views do.

~~~
flukus
You can have insert/update trigger on views. You shouldn't but you can.

More realistically, stored procs would do the CUD parts.

~~~
dragonwriter
> You can have insert/update trigger on views. You shouldn't but you can.

You can, and there is no reason you shouldn't.

> More realistically, stored procs would do the CUD parts.

Why are stored procs more realistic?

~~~
taffer
Triggers are implicit, have side effects and are not deterministic. They are
confusing and surprising.

A procedure call is explicit, a trigger is implicit. You don't call a trigger,
it just happens as a side effect of something else. People tend to forget
implicit things. Suddenly you notice that something is acting strangely or
slowly in your application. You can look at your functions and procedures and
try to find the problem. But if your application has triggers all over the
place, how do you know what is going on? A trigger can change a dozen rows,
which in turn can change other rows, so changing a single row can fire
thousands or millions of triggers. Also, triggers are not fired in a
particular order; the database is free to change the query plan according to
what it thinks is best at the moment, so triggers are not deterministic.
Triggers can sometimes work and sometimes not.

Almost everything that can be done with a trigger can be done with a
procedure, but explicitly, deterministically and in most cases without side
effects.

~~~
dragonwriter
> You don't call a trigger

You call an INSTEAD OF trigger implementing updatability of a view, or the
select query defining a view, just as much as you call property setters or
getters in OOP languages.

With the DB triggers, as in many OOP languages (C#, Python), this is an
implementation detail obscured from the calling site, which is good for loose
coupling, modularity, etc.

Your objections, while IMO still overblown, have relevance to some uses of
triggers (they are particularly applicable to AFTER triggers and BEFORE
triggers other than those implementing constraints, but least applicable to
INSTEAD OF triggers implementing view updatability, which is what we are
discussing here.)

~~~
taffer
> much as you call property setters or getters in OOP languages

Good design is obvious and orthogonal[1]. If you write _setters_ in an OOP
language in such a way that they do surprising things, i.e. not just _setting_
a value, then I would call that bad design.

> which is good for loose coupling, modularity, etc.

What do you gain by using triggers in this case? All you get is mental
overhead, because whenever you use DML you have to keep in mind that there
might be a trigger hiding somewhere that does strange things. If you call a
procedure instead, you make it clear that you want to do more than just a
simple update or insert.

> [...] it is possible for the method call to make use of concurrency and
> parallelism constructs [...] to do a unknown number of things in an unknown
> order

Why would I want this? I want my code simple[2], stupid and obvious, and not
convoluted, clever and surprising[3].

[1] [https://stackoverflow.com/a/1527430](https://stackoverflow.com/a/1527430)

[2]
[https://www.youtube.com/watch?v=rI8tNMsozo0](https://www.youtube.com/watch?v=rI8tNMsozo0)

[3]
[https://en.wikipedia.org/wiki/Principle_of_least_astonishmen...](https://en.wikipedia.org/wiki/Principle_of_least_astonishment)

~~~
cryptonector
Why do you think triggers must be astonishing (but OOP not so)??

~~~
taffer
see
[https://news.ycombinator.com/item?id=21455027](https://news.ycombinator.com/item?id=21455027)

~~~
cryptonector
There's nothing wrong with that if that's the logic you want!

------
haolez
I think PostgREST is the first big tool written in Haskell that I’ve used in
production. From my experience, it’s flawless. Kudos to the team.

~~~
antpls
Having a bit of experience with OCaml, I hoped to see what production-ready
Haskell code looked like with this library. I tried to read some files of the
project and... IMHO "production-ready" Haskell code is still not easily
readable. For example, the main file for the tests:

[https://github.com/PostgREST/postgrest/blob/master/test/Main...](https://github.com/PostgREST/postgrest/blob/master/test/Main.hs)

and

[https://github.com/PostgREST/postgrest/blob/master/test/Quer...](https://github.com/PostgREST/postgrest/blob/master/test/QueryCost.hs)

I don't know, maybe it lacks comments? The code is really not easy to follow
if you are not using Haskell 100% of your coding time.

While the library may work well in practice, it's a maintainability red flag
and, by using this library, you rely on rare Haskell programmers for the
future.

~~~
ruslan_talpa
It is hard to read if you don't have some knowledge of Haskell indeed, and the
comments part is true, but it's not any harder than following other codebases
if you don't know the particular language, so I don't think this is a strong
argument.

Another point is - it's not a library and you are not the one maintaining it
:) the same way you are not maintaining, but still using, things like
postgresql, nginx, redis, rabbitmq.

I bet it's a lot easier to learn Haskell and patch PostgREST than to know C
for 10 years and patch PostgreSQL :)

~~~
antpls
> you are not maintaining, but still using things like postgresql,nginx,redis,
> rabbitmq

I don't maintain them because they are written in either C or C++, which has
many more practitioners, guidelines and tools to trust.

On the other hand, I can easily see people using postgres extensions having to
quickly make a patch to fix a bug or change a behavior, extensions being
smaller.

The issue is that, eventually, you would like to patch, review or audit the
extension. All of these operations will require you to find third-party
Haskell developers, Haskell auditors, Haskell reviewers, who are rare in the
job market, and therefore it represents a risk for your project.

If the core developers stop maintaining the extension, no one else might be
available to take over, and now you have code debt and code that no one can
fix.

~~~
ruslan_talpa
> no one can fix

Really? No one? I learned haskell (my first FP lang) and rewrote the core of
PostgREST (in my spare time) in about 6m... so stop scaring people :)

------
z3t4
Often you do not want users to have access to a whole table, but only posts
made by the user, or posts _to_ the user. I could however see this replace
Excel apps. But then you will also have to generate the user interface for it
to be useful. The developer should only have to specify the views, the rest
can be automated. I once made such a tool in order to save a few hundred man-
hours on a tight budget, and it worked fairly well. But for most apps you want
to customize every layer.

~~~
wichert
You can do that with row-level security. The PostgREST documentation has
examples for that specific use case:
[https://postgrest.org/en/v6.0/auth.html#roles-for-each-
web-u...](https://postgrest.org/en/v6.0/auth.html#roles-for-each-web-user)

~~~
yoloClin
Is column based authorisation possible?

What about group/role based security concepts?

~~~
chishaku
You can create a view with a subset of columns and grant permissions on the
view.
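A minimal sketch of the column-subset idea, using Python's sqlite3 (SQLite has no GRANT, so the permission half is noted in a comment; in Postgres you would grant the API role access to the view only). The `accounts` table and its columns are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts "
             "(id INTEGER PRIMARY KEY, email TEXT, password_hash TEXT)")
conn.execute("INSERT INTO accounts VALUES (1, 'a@example.com', 'hash')")

# The view exposes only the safe columns. In Postgres you would additionally
# GRANT SELECT on the view (and not on the base table) to the API role.
conn.execute("CREATE VIEW api_accounts AS SELECT id, email FROM accounts")

cols = [d[0] for d in conn.execute("SELECT * FROM api_accounts").description]
print(cols)  # ['id', 'email']
```

Anything selecting from `api_accounts` simply never sees `password_hash`.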

~~~
yoloClin
I feel like this is just moving business logic /back/ into the database.

It's very similar to what we were doing with stored procs 15 years ago and
just moves the problem from the business logic layer back to the database
layer. Given the choice, I'd prefer to write constraints in !SQL, personally.

------
pointlessjon
I like this idea. Especially helpful for prototyping a web UI against an
arbitrary existing dataset. PostgREST is much more full-featured and, as
commented by a few others, “production-ready”(?), but if you’re into this and
looking for something a bit more naive but just as accessible, I wrote a
similar utility to expose some Postgres data over http:
[https://github.com/daetal-us/grotto](https://github.com/daetal-us/grotto)

~~~
steve-chavez
I'd say it has been production-ready for some years now. There are some
documented cases of companies using it in production here:
[http://postgrest.org/en/v6.0/#in-
production](http://postgrest.org/en/v6.0/#in-production).

------
deepersprout
I really like this approach for mostly crud apps. What is missing is

\- something to conveniently version control database objects

\- something to conveniently debug stored procedures. Maybe directly from
vscode or your preferred editor.

If those two things get solved somehow, pg could be a really awesome
application server.

~~~
ruslan_talpa
for the first one, read my other comments, this is solved. The second one, you
can start here
[https://www.pgadmin.org/docs/pgadmin4/4.13/debugger.html](https://www.pgadmin.org/docs/pgadmin4/4.13/debugger.html)

~~~
deepersprout
> for the first one, read my other comments, this is solved.

> The second one, you can start here
> [https://www.pgadmin.org/docs/pgadmin4/4.13/debugger.html](https://www.pgadmin.org/docs/pgadmin4/4.13/debugger.html)

I think Starter Kit and the pgAdmin debugger lack convenience. If you write
C# or Node.js code in your preferred editor, you can debug it there. You can
debug your Express routes, WebAPI or RESTEasy controllers in
vscode/vs/eclipse/intellij without leaving the file you later commit to git.

Starter Kit and the pgAdmin debugger are fine tools, but they come nowhere
close to how you work with a js, C#, java, python or whatever you like
codebase.

The development workflow with stored procedures imho is broken, and I think
that is one of the main reasons people do not use them much.

~~~
ruslan_talpa
It's true that a tool developed by 1-2 ppl recently is not as convenient as
the tools developed by armies of developers over decades :) but, when it comes
to the PostgREST way of building APIs, debugging does not have the same
meaning as in other ecosystems.

An api backed by postgres+postgrest is 80% table and view declarations ...
how do you debug a view? It makes no sense. You just define it and say
"select * from view" (even from your IDE) and see if you get what you expect.
That's why one can do a lot (develop complex apis) with less (limited debug
tools).

~~~
deepersprout
You don't debug views of course. But with postgREST you have to write your
logic in stored procedures. So say I want to write an accounting app that has
an invoice function. A view is not enough to create an invoice, because there
are complex rules to apply to the data, so I have to write a SP. I need to
debug that stored procedure, and if I have to install pgAdmin and switch from
my very much loved and customized editor to debug my invoicing procedure, the
workflow is broken, and so I will end up writing that invoicing function in
C#, Java, Python or JS instead, because the tooling is better.

Compare that to a node application: just open vscode, start editing away and
press F5 to test it. If you want to debug it, add a breakpoint and step away.
When you are done you commit and push to git.

It should be the same with stored procedures.

~~~
ruslan_talpa
Let's start with the fact that most data-centric apis have an 80/20 split
between reads and writes, so there are virtually no SPs in 80% of your api,
and no need to debug 80% of the code :)

So you "might" need stored procedures only for your write part, and even then
only when the input data needs to be split and sent to different tables.

The complex "rules" are nothing more than "constraints" on your data, which
are split between the columns of the table and become so simple that there is
almost nothing to debug.

If it were the case that with postgrest one needs to write complicated stored
procedures all over the place you'd be 100% correct. The thing is you don't
need them in most cases and when you do they are short simple functions that
deal with focused things so there is way less chance to get them wrong.

This has been my experience with using this type of stack for apps like
project management/invoicing.... etc (basically basecamp+freshbooks)

~~~
deepersprout
Basically you are saying "I don't need stored procedures, therefore neither
should you".

If you don't use them clearly you don't need to debug them. That does not
change what I said earlier: a difficult development workflow hinders their
adoption.

~~~
ruslan_talpa
not exactly what i meant.

It's true that it's not a polished workflow to debug stored procedures,
because you have to jump from your editor to pgAdmin, but I don't think this
is such a big deal, for two reasons.

\- postgrest architecture is such that you rarely need stored procedures (not
just me, all the projects), it's the exact same way ppl use databases without
them (they just send queries to the db, same here). 90% of the code in this
type of project is table/view definitions (with constraints) and appropriate
grant/rls statements. So there is very little imperative code to debug.

\- The type of stored procedures used is quite simple, isolated and mostly
self contained, a single function, maybe calling some other helper functions
(you are always 2 levels deep at most) so it's not like in other envs where
you have to follow the code jumps between hundreds of functions and classes.

------
hippich
I am (slowly) working on a project with a similar tool (postgraphile) to
eliminate most of the CRUD stuff. One thing I always wondered - how would you
version control the schema itself? I settled on Sqitch -
[https://sqitch.org/](https://sqitch.org/)

------
no_wizard
Interesting to me that this is written in Haskell!

I highly recommend reading the source code
[https://github.com/PostgREST/postgrest](https://github.com/PostgREST/postgrest)

------
korijn
How do you version your API with this kind of tooling?

As in, how do you change the data model without breaking clients?

~~~
ruslan_talpa
You don't expose your tables directly; you expose a schema that consists only
of views and stored procedures. If you really need a totally different version
then you just expose a new schema, but more often it's the same situation as
in the graphql ecosystem: you just add a new column/view/procedure and don't
delete the old one. Postgrest has the same power to describe what you want as
a graphql api would have (by using its select parameter).
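The additive-versioning pattern described above can be made concrete. A minimal sketch with Python's sqlite3, which has no Postgres-style schemas, so a `v1_`/`v2_` naming prefix stands in for putting the views in separate `v1`/`v2` schemas; the `items` table is invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT, price INTEGER)")
conn.execute("INSERT INTO items VALUES (1, 'widget', 500)")

# v1 of the API exposed title only; v2 adds price. Both are views over the
# same base table, so old clients keep working while new clients get more.
conn.execute("CREATE VIEW v1_items AS SELECT id, title FROM items")
conn.execute("CREATE VIEW v2_items AS SELECT id, title, price FROM items")

v1 = conn.execute("SELECT * FROM v1_items").fetchone()
v2 = conn.execute("SELECT * FROM v2_items").fetchone()
print(v1, v2)  # (1, 'widget') (1, 'widget', 500)
```

In Postgres you would point the API's exposed schema at the versioned views and never at the base tables.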

~~~
mlthoughts2018
It’s interesting that the project’s own tutorials do not discuss any of that
and not only demonstrate making the API schema exactly equal to table schemas,
but go further and claim that adding intermediate business logic or using
tools that mediate between the data and business logic, like ORMs, are bad
abstractions that should be intentionally avoided.

I mean, I agree with the strategy you state about views or stored procedures,
but those are just in-database ways of achieving the same kinds of things you
might prefer to write in a different language (thus ORM or query engine)
because it puts the app or business logic all into the same version controlled
system, leverages programming language ecosystems and tools that are often way
more valuable than raw database programming (even in Postgres), etc.

Basically, if PostgREST needs you to do the old tricks of views & stored
procedures to manage an abstraction layer that safely allows the underlying
data schema to change, I just don’t see the benefit over doing this in a much
better language ecosystem, like Python, and using much better web server tools
to generate the APIs.

PostgREST looks much more useful for quick prototypes, internal use cases
where schema breakage might be OK occasionally, or just mirroring & monitoring
data as-is for ops and diagnostics. From a performance perspective, it might
be fast enough for production, but that’s almost never as big a concern as
managing the intermediate abstraction layer and associated app tooling.

Does not look like a good idea for production applications that need an
intermediate API layer adapting the data to the use case.

~~~
philwelch
So there are different schools of thought about using relational databases
here and I think this is where a lot of the tension comes from.

I think everyone agrees in principle that in a service-oriented architecture
you need well-defined, safe, hardened interfaces between services. In an ORM
world, the assumption seems to be that the database itself isn't really a
service with a well-defined interface, but rather a private data store that
just accepts whatever SQL you throw at it.

But what if you think of the database itself as a service? If that's the case,
then your service interface should definitely not be arbitrary SQL. This is
where you introduce views and stored procedures, which change your DB from a
private implementation detail that you have to hide behind a service boundary
to a service that sets its own boundaries.

In this world, your REST services have an HTTP client to make service calls to
each other, and they have a Postgres client to make 'service calls' to your
database. PostgREST is just a deterministic proxy that adapts one service
protocol to another, the same way you would use grpc-gateway if you had gRPC
services that you wanted to call from REST clients.

I don't think PostgREST obviates the need to write intermediate API layers, at
least if those intermediate API layers are doing anything interesting. It may
obviate the need to write API layers that only parameterize SQL statements and
serialize JSON responses. But that's a good thing to obviate.

And yeah, you should definitely version control your DB schemas, views, and
stored procedures. We aren't barbarians :)

~~~
ruslan_talpa
Very well said (I'll have to remember this way of explaining it). IMO in most
projects, the intermediate layers never do anything interesting so that's why
postgrest (as a proxy) is a good fit

~~~
philwelch
> IMO in most projects, the intermediate layers never do anything interesting
> so that's why postgrest (as a proxy) is a good fit

I try to avoid generalizations about "most projects", because different people
have different experiences and it's hard to make a good argument about which
case is more typical. At best I think you can lay out the toolbox and explain
where PostgREST fits in the toolbox. Whether or not you should use it on a
particular project depends on the particular project, and I have no idea what
mlthoughts2018 is working on or has worked on in the past, so that's an
entirely different question :)

------
oftenwrong
Why not use a more descriptive title?

For example:

"PostgREST: a web server that turns a PostgreSQL database into a REST API"

~~~
loeg
HN policy is generally to use the literal title of the linked website,
deficient as it may be.

~~~
detaro
Given it's the project website, and it's prominently visible, it would be fair
to add the tagline ("Serve a RESTful API from any Postgres database") to the
title.

------
mleonhard
I'm using the JOOQ type-safe SQL generator with PostgreSQL. My application
server build script runs PostgreSQL in a Docker container, creates the
database and tables, applies all migrations (via Flyway), and then invokes
JOOQ which connects to the database and creates Java classes based on the
tables and columns in the database.

JOOQ mostly prevents SQL syntax errors, column name errors, column type
errors, supplying the wrong number of arguments, etc. These become compile-
time errors.

With PostgREST and other JSON APIs, you only get run-time errors. And you rely
on test coverage to check code correctness.

I prefer compile-time errors to run-time errors. I find that software
utilizing compile-time checks is easier to maintain.

~~~
steve-chavez
PostgreSQL already gives you SQL syntax errors (try creating a VIEW with a
misspelled SELCT), column type errors (try doing a `select 'asdf'::int;`), and
wrong-number-of-arguments errors on an SP call (try passing one more argument
to `select int4_sum(2, 3);`).

Thanks to PostgreSQL transactional DDL[1] you would get all of these errors at
creation time, and without any change to your database if a migration is
wrong. There's no need for a SQL codegen to get this already-included safety.

Btw, PostgREST is not only a JSON API. Out of the box, it supports CSV, plain
text and binary output, and it's extensible to support other media types[2].
If you have to output XML by using pg XML functions, you can do so with
PostgREST.

[1]:
[https://wiki.postgresql.org/wiki/Transactional_DDL_in_Postgr...](https://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis)

[2]: [http://postgrest.org/en/v6.0/configuration.html#raw-media-
ty...](http://postgrest.org/en/v6.0/configuration.html#raw-media-types)
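The transactional-DDL point is easy to demonstrate. A runnable sketch using Python's sqlite3, which also wraps DDL in transactions (Postgres behaves analogously, and more completely); the "migration" here is a deliberately broken hypothetical:

```python
import sqlite3

# autocommit mode, so we control the transaction boundaries explicitly
conn = sqlite3.connect(":memory:", isolation_level=None)

conn.execute("BEGIN")
conn.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, total INTEGER)")
try:
    # second step of the "migration" is broken: duplicate column name
    conn.execute("CREATE TABLE broken (id INTEGER, id INTEGER)")
except sqlite3.OperationalError:
    conn.execute("ROLLBACK")  # the whole migration unwinds, invoices included

tables = [r[0] for r in
          conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # [] -- no half-applied migration left behind
```

The error surfaces at creation time and the rollback leaves the database exactly as it was.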

------
thijsvandien
Somewhat related discussion from a week ago:
[https://news.ycombinator.com/item?id=21362190](https://news.ycombinator.com/item?id=21362190)

------
cik
I love the idea - and it's definitely something I'll put through its paces on
one of my projects shortly. Being able to separate the schema from data
ingestion and data transmission is a very powerful scaling option for one of
the things I'm playing with.

------
consultSKI
This is so smart. Common sense to the max! Sad I didn't think of it.

------
kissgyorgy
Don't do this with a public API with third-party clients!!

This way you are directly tying the REST API to your database schema. The
whole point of having a public API (you know, Application Programming
Interface) is that you can serve your data in a controlled way, maybe totally
different from your schema. The moment you change your schema a little,
congratulations, you've broken all your clients.

------
jayd16
So why use REST at all at this point? What is the benefit REST is bringing to
the table here?

Seems like if you want a declarative API you might as well do something like a
local read replica a la Firebase. Seems like the natural progression of these
API as single schema technologies.

Is the main reason for sticking to REST here compatibility, or is there
something in the RESTful design we want to hold on to?

~~~
z3t4
REST is pretty stupid, but it works over HTTP(S) and is stateless. And tools
that use HTTP have nice abstraction layers already, and are very common, so it
becomes simple to use.

Personally, for talking with a web front-end I would use WebSockets with long-
polling as fallback, and use JSON instead of a query string for querying. It
does however require yet another abstraction layer, and is more brittle and
less secure than REST.

REST is a school-bus. Other methods are like exotic sports-cars.

~~~
jayd16
Well sure it's common, but my question is whether that's the only reason. If
we're going down the path of declarative requests and the like, why not push
it further like Firebase has done? I'd prefer that a lot more if there was a
self-hosted/open-source alternative.

------
siquick
What’s the benefit of using this over just using a small framework like
Flask/Express with a Postgres lib?

~~~
takeda
It's micro-service like, or some other crap?

I had a situation where I was implementing something quite simple - a URL
shortener. I didn't use PostgREST, but I decided to use an ORM, because it was
a simple CRUD operation. It had an option to either use a generated url or
allow the user to specify a custom one. And it worked as expected.

But then once completed I decided to add extra functionality, for example
extra statistics, like what IPs were accessing it and how often, adding
expiration times, etc.

I realized that the ORM encouraged me to implement all of my logic in the
application, even when I would actually put less load on the database and make
things simpler if I let the database do many things for me and used the types
and functionality provided to me. I am not talking here about using stored
procedures; I could do all operations as one, at most two, SQL statements,
while the ORM had to send multiple. In the end I dropped SQLAlchemy (this was
Python code) and just used psycopg2 directly, didn't even bother with
wrappers, just used the built-in pools. It was also easier for me to make my
code use two endpoints for reading and writing, so I can scale my code better.

I realized that ORM did not save me much code at all, it was the same amount
of code with or without it, and without ORM I had greater control of what I
wanted to do.

I previously believed that an ORM starts standing in the way when your
application gets bigger, but that it was good for small projects. Here I
realized that it doesn't bring much benefit even for simple projects.

I think a REST interface like this is doubling down on what an ORM tries to
do. Maybe it could be beneficial in places that don't have libraries to
communicate with a database and can only make http requests?
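The "let the database do it" point above can be illustrated with a toy shortener. This is a hypothetical sketch in Python's sqlite3 (the `links` table and its columns are invented): expiry checking and hit counting live in the SQL itself, in one or two statements, rather than in object-mapping code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE links (
    slug       TEXT PRIMARY KEY,
    url        TEXT NOT NULL,
    hits       INTEGER NOT NULL DEFAULT 0,
    expires_at TEXT NOT NULL)""")
conn.execute("INSERT INTO links (slug, url, expires_at) VALUES (?, ?, ?)",
             ("hn", "https://news.ycombinator.com", "2999-01-01 00:00:00"))

# One statement bumps the counter and enforces expiry in the WHERE clause --
# no ORM round-trip that loads the row into an object just to update it.
conn.execute("""UPDATE links SET hits = hits + 1
                WHERE slug = ? AND expires_at > datetime('now')""", ("hn",))
row = conn.execute("SELECT url, hits FROM links WHERE slug = ?",
                   ("hn",)).fetchone()
print(row)  # ('https://news.ycombinator.com', 1)
```

A typical ORM would issue a SELECT, materialize an object, mutate it, and flush an UPDATE; here the database does the whole thing.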

~~~
FreelanceX
You should try SqlAlchemy Core. It does not have the drawbacks of an ORM but
still saves you the trouble of writing sql queries as raw strings.

~~~
takeda
It's better but still not great, because it tries to make your SQL independent
of the database, so some things are harder to express that way. You have to
understand your database, and then you also need to understand how to express
things in SQLAlchemy to get your desired SQL statement.

PyCharm actually has database support: you can configure it so it connects
to your database, and it will then fetch the database schema. After that it
will recognize your SQL statements in strings, offer auto-complete and even
take these into account when refactoring your code.

IMO this is how impedance mismatch should have been handled from the
beginning.

------
louis8799
I am not sure if it is a good idea to add PostgREST to your stack. PostgREST
can only interact with your DB, so you would probably be calling PostgREST
from another REST service. In this case, you would be better off using an ORM.

------
ijidak
I've wanted this for a long time.

Is there anything like this for Microsoft SQL Server?

~~~
steve-chavez
You could use PostgreSQL Foreign data wrappers[1] and leverage PostgREST for a
SQL Server schema.

tds_fdw[2] works pretty well for this(I've used it in a project related to
open data). Basically, you'd have to map mssql tables to pg foreign tables[3]
defined on a pg schema. Lastly expose this pg schema through PostgREST.

[1]:
[https://wiki.postgresql.org/wiki/Foreign_data_wrappers](https://wiki.postgresql.org/wiki/Foreign_data_wrappers)

[2]: [https://github.com/tds-fdw/tds_fdw/](https://github.com/tds-
fdw/tds_fdw/)

[3]: [https://github.com/tds-
fdw/tds_fdw/blob/master/ForeignTableC...](https://github.com/tds-
fdw/tds_fdw/blob/master/ForeignTableCreation.md#example)

------
biolurker1
so basically this is like Firebase but in RDBMS which is quite awesome

------
janeshmane
This is intriguing, but how does one go about scaling this? Relational DBs are
often where scaling breaks down and sticking more of the application in that
problematic part of the stack seems like it could end poorly...

------
arunc
How is the REST API documentation generated? Looks neat!

------
jitans
This should have been: PostgGRPC

------
ruslan_talpa
Have you just discovered this and posted to HN? :)

~~~
pictur
So?

~~~
ben_jones
The frequency with which this project appears at the top of HN does not
correlate with its production usage and thus feels like guerrilla marketing,
at least to me.

~~~
detaro
Looking through the search results, it seems like it's been potentially high
on the front page 3 times in 4 years.

~~~
ruslan_talpa
That was the surprise part behind my other (downvoted) comment - surprised
that it still comes up on the homepage with direct links (as opposed to some
new development around it).

------
hudo
"Object-relational mapping is a leaky abstraction leading to slow imperative
code"

So they added REST on top of an ORM, a few more layers of data transformation
and an even leakier abstraction, so the poor dev doesn't have to worry about
"low level" SQL.

I lost count of how many different libs/frameworks I've seen that exposed CRUD
through HTTP; all failed miserably, because it is actually a very dumb idea.

~~~
ruslan_talpa
PostgREST is not (and does not use) an ORM.

PostgREST is more like a compiler: it takes one language as input (REST) and
outputs another language (SQL).

It has zero relation to the ORM concept.

~~~
arithma
"ORM" is more like a compiler too: it takes one language as input (an API can
be seen as a language) and outputs another language (SQL).

~~~
blondin
Totally in agreement with you here. An ORM could also be viewed as a compiler.
The original comment didn't deserve the backlash.

