Hacker News
PostgREST – A fully RESTful API from any existing PostgreSQL database (github.com)
179 points by developeron29 on Mar 26, 2017 | 87 comments



Note for the submitter that "Show HN" is generally for things you yourself have made – see [0]. On the other hand, thanks for submitting; this is a great project and it deserves attention.

Postgrest is a great example of a real-world Haskell codebase. It's concise for the amount of functionality it offers, which is characteristic of functional languages (<3,000 SLOC of source, not counting tests). I'd encourage anyone interested in working with non-toy functional codebases to take a look around it, or better yet, submit a PR (there are a few "beginner" issues!).

[0]: https://news.ycombinator.com/showhn.html


> Postgrest is a great example of a real-world Haskell codebase.

It's hard to read. Very messy.

https://github.com/begriffs/postgrest/tree/master/src/PostgR...

Despite 15 years of programming experience in Python, JS, C, etc., I feel like I'm going to have to duck my head down to be chastised for not understanding the language and not being "intelligent" enough to see the depth.

It looks like gibberish to me. Not trying to be offensive. I'm sure it made sense to the person who wrote it. You'll also notice a lack of code documentation. Bad form. Don't tell me it's because I don't know Haskell; that's why I'm not spending the time to learn it, despite the buzz.

Meanwhile, SQLAlchemy and Hibernate don't draw these complaints, and a human being can actually parse them to understand what the heck is going on. And despite being written in Python and Java - easier and far more widely adopted languages - they're documented extensively; the authors didn't solipsistically assume others would "get it".

Which is a pattern I've been seeing with hardcore functional advocates in communities. They're the kind of people who'd work two weeks on a paper for a mathematical proof, shove it at you in the hallway to look smart, and say "it's obvious". It's not obvious; you're just trying to show you're smart, but no one understands you - and being understood matters if you want to win people over rather than look arrogant.

> It's concise for the amount of functionality it offers

Make it four times as many lines, because there is so much condensed inside it.

>8k stars. Fewer than 50 contributors, of which only the top 7 changed more than 100 lines.

If this is considered real-world Haskell, it's no wonder there aren't a lot of people using it.

It does more to demonstrate that functional programmers lack empathy for enterprise ones: even with a solid grasp of CS concepts, even Haskell's own proponents have a hard time stomaching contributing to it.


> It's hard to read. [...] Despite 15 years of programming experience in python, js, C, etc. I feel like I'm going to have to duck my head down to be chastised for not understanding the language. [...] Don't tell me it's because I don't know haskell, that's why I'm not expending the time to learn it, despite the buzz.

To me you're a classic example of the "Blub Paradox" [1]: languages with features and constructs different from the ones you're used to just seem worthless, complex, and foreign.

You mention a lot of experience in hardcore imperative/imperative-OOP languages (C, JS, Python). Haskell, Rust, Scala, etc. require a somewhat different mindset and thus have a steep initial learning curve.

Compare it to lambdas. Did you see the value in anonymous functions the very first time you were introduced to the concept? Now that you've probably used them for a while, would you be happy to use a programming language without them?

You're not willing to learn the language because you don't understand it, and dismiss the language as having no value because you don't understand it. Is that really fair without at least having a basic understanding of the language?

[1] http://wiki.c2.com/?BlubParadox / http://www.paulgraham.com/avg.html . I'm not a fan of the "I Know LISP So I'm Better Than You" tone of the original article, but I do fully agree with the key point that it's hard to evaluate language constructs you're not used to.


I know some Haskell, but don't find code like this to be very readable:

    offsetParams :: M.HashMap ByteString NonnegRange
    offsetParams = M.fromList [(toS (replaceLast "limit" k), fromMaybe allRange (rangeGeq <$> (readMaybe =<< (toS <$> v)))) | (k,v) <- qParams, isJust v, endingIn ["offset"] k]
Not because it's something I don't understand at the concept level (there isn't anything complicated there), but because of the formatting and general lack of comments.

Maybe I'm talking about bike-shed color, but it could be much better just by having some extra newlines that separate the pieces of the list-comprehension expression.
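For illustration, here's the same expression reflowed with one qualifier per line; this is a formatting sketch only, since the names (qParams, endingIn, etc.) come from the quoted snippet and aren't standalone:

```haskell
offsetParams :: M.HashMap ByteString NonnegRange
offsetParams = M.fromList
  [ (toS (replaceLast "limit" k), range)
  | (k, v) <- qParams
  , isJust v
  , endingIn ["offset"] k
  , let range = fromMaybe allRange
                  (rangeGeq <$> (readMaybe =<< (toS <$> v)))
  ]
```

Same semantics, but each generator, guard, and binding now sits on its own line.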


The "blub paradox" argument isn't a very good one. It's effectively a grade-school ad hominem wrapped in fancier clothing.

Also, I don't think his criticism is of the "blub" mold anyhow. It seems to me more like criticism of this particular codebase's documentation, although it does stretch into a hasty generalization about "all Haskell".


I don't agree that blub is an ad hominem, or even a fallacy.

Blub basically says "you need to be proficient in X before you can see value in X; thus, if you are not proficient in X, you won't see its value". You can disagree with the statement, but there's no logical fallacy there.


> languages with other features/language constructs than you're used to just seem worthless, complex, without value and foreign to you.

Document the Haskell code and don't stuff so much logic into one line. I think that's a fair thing to ask for in software engineering.

Anyone who complains about Haskell is assumed to have either a) not programmed in it, b) not been programming it "correctly", or c) to be a xenophobe/blub/NOS case.

I know what lambdas are; I know there are functions and folds. That in and of itself doesn't tell the story of the data actually being manipulated or the expected output. That's why you have a 20-30 line comment block explaining what's going on in important API functions.

You could write Python, JS, or Ruby similarly, chaining multiple functions on the same line until you have to trace each one to figure out why you're not getting the expected result. It feels clever in the moment, but we don't do it because it doesn't help colleagues understand the code.

> You're not willing to learn the language because you don't understand it, and dismiss the language as having no value because you don't understand it. Is that really fair without at least having a basic understanding of the language?

The commenter was heralding the project as a "great example" of "real-world Haskell" and recommended contributing to it. I was shocked to see 8k stars and a paltry 50 contributors, of which only the top 10 have contributed over 100 lines of code. That's proportionally half the contributors of something like neovim or redis, and even worse compared to node.js projects.

And, I looked at the code of the project, which is open source, and didn't see code documentation.

I'm trying to be generous. My advice: stop packing the code so close together and document it, probably more than normal, since the symbology in there is so dense.


It's not like putting the code on one line. It's like complaining about using map instead of a for loop when you don't know the abstractions.

There's a line beyond which there's too much abstraction, and lots of Haskell code crosses it, but the PostgREST code I've looked at doesn't seem to suffer from it.

Also, I really hate the "types are documentation" people, but it's hard to argue with when the question is what transformation is happening.

Edit: also, I'd be surprised if those numbers are all that bad once you consider the size of the codebase, but yeah, it's hard to get contributors to a Haskell codebase.


Here is a project with about the same number of stars / contributors / contributors with more than 100 lines / time since release:

https://github.com/graphql/graphql-js/graphs/contributors?fr...

:) So by your logic, I could say:

"If this is considered real-world JavaScript/Node, it's no wonder there aren't a lot of people using it" - which is obviously wrong.

You are using the wrong metrics to judge a language's popularity. Virtually every OSS project has only a handful of core contributors.

PS1: Note that I am not arguing PostgREST is a great example of real-world Haskell; it's not for me to say, since I'm biased :)

PS2: You are right that the source code is not commented, and it could use a little of that, but what you are missing (because you are not used to Haskell) is that a lot of the documentation for Haskell projects comes from the types and type signatures of the functions. Just by looking at those, in a lot of cases, a Haskell programmer can reliably tell what a function is doing without any documentation.
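To make that concrete, here's a tiny made-up sketch (not code from PostgREST) of how much a signature alone can tell you:

```haskell
import Text.Read (readMaybe)

-- A fully polymorphic signature constrains behavior: knowing nothing about
-- what an `a` is, this function can only rearrange, drop, or duplicate
-- list elements.
firstTwo :: [a] -> [a]
firstTwo = take 2

-- `Maybe Int` in the result says "parsing can fail" without a single comment,
-- and the lack of IO says it touches no external state.
parsePort :: String -> Maybe Int
parsePort = readMaybe

main :: IO ()
main = print (firstTwo [1, 2, 3 :: Int], parsePort "5432")
-- prints ([1,2],Just 5432)
```

That's the "types as documentation" argument in miniature; it doesn't replace prose comments, but it narrows down what a function could possibly do.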


It's indefensible to have code that tightly compacted, with functions thrown around and no documentation.

Don't care if it's Python (basically pseudocode), JavaScript, Haskell or Erlang.

For all the time Haskell commenters spend turning criticism into a Socratic dialog, they could be considering ways to make their code digestible. It's been what, 20 years now?

Not just for non-Haskell programmers, but for all the Haskell programmers I see swarming around threads like these on the internet; you'd think they'd actually be teaming up and collaborating with each other. I can't put my finger on it, but I think there's a trend, a feeling, that it's not suitable for programmers focused on achieving business ends. It's more of a hobby thing.

Programming in the enterprise means confronting the reality that there are design trade-offs and someone else has to read the code. Haskell programmers seem to go off the deep end trying to pull off some hack to make Haskell work a certain way, and then it's "pencils down", sayonara. The real world doesn't accept that; programs have to be maintained by others.


I have (non-contributor) experience with the PostgREST codebase. It's true the codebase is not great (lots of long functions, needs some work to extract the main program into a library, etc.) but it is definitely easy to navigate and modify.

For a transient project at work, I asked a dev to evaluate how much work it would be to port PostgREST to MSSQL. He had a working proof-of-concept in a few days. The bottleneck was not navigating and patching the PostgREST codebase but dealing with MSSQL. We decided to go another route only for the lack of a native MSSQL driver in Haskell: there's an ODBC driver (which my dev used for his PoC) but I don't really trust ODBC+FreeTDS. Hence, we'll write a good old C# binary with a Microsoft-blessed driver to expose the few ad-hoc queries our own team needs, rather than a generic solution for the whole company, and we'll move on.

In short, I have some data invalidating your argument that the PostgREST code style is indefensible. Although I don't enjoy the PostgREST codebase, I love the product because it delivers what the website claims. Rather than diminishing the hard work of devs using a throwaway/slandering account, I just reckon that Haskell and "ship it" culture are reconcilable.


I also did something similar to PostgREST for MSSQL, in Java, not in Haskell, because of the same driver issues you mentioned. I managed to implement JWT auth pgREST-style by taking advantage of MSSQL user impersonation, though I did run into some limitations; for example, the lack of environment variables in transaction scope in MSSQL forces you to declare one DB user per app user.

I have little experience in Haskell; despite that, it was easy for me to understand the gist of what PostgREST does and port it to Java.


ODBC+FreeTDS is not generally a good idea. Microsoft's ODBC driver works fine though.

https://blog.afoolishmanifesto.com/posts/install-and-configu... explains how to get it going on debian.


Some lines are indeed atrociously long. No Haskell programmer I know would advocate this, and I think the project maintainer should insist on sane code formatting; all the critical lines I checked were from contributors. Other than that, I think the case you are trying to make here is largely imaginary. That is not to say the project couldn't use more source-level documentation, but that you do not see how never having written any code in the language, nor knowing its syntax and semantics, could possibly cloud your judgement of its comprehensibility pretty much says it all about your case, if you ask me. (I hope that sentence wasn't too long; if you ask nicely I'll add some documentation.)


Why is it not OK to be complicated/complex if it's worth it? People do not complain about the complexity of a car compared to a cart; it's just doing much, much more. It's OK to want a cart for whatever reason, but I don't understand why you then have to say that a car is bad.

A lot of people are happy writing Haskell. In fact they seem to like it so much that they would kill for a Haskell job!


How much more does Haskell do compared to other languages? Does it possess some unique properties which put it high above other languages? Back to the cart and car comparison: despite several orders of magnitude difference in complexity, the difference in complexity to operate is not that great. That is, you are not directly exposed to the underlying complexity. Meanwhile, programming languages should be optimised for reading first.


> Meanwhile programming languages should be optimised for reading first.

Interesting, because I totally disagree with this. My preference is for languages designed for safety and amenability to static analysis first. Maybe there's a reason I'm not a Python dev anymore =)


It's more about familiarity with the abstractions provided by the language.

Sure, the complexity difference in operating a cart vs a car is not that big. But if someone has never seen a car or familiarized themselves with how to operate one then the perceived complexity is going to be enormous.


Arguably, a person who is familiar with the operation of a cart would be at least somewhat familiar with the operation of a car, in that they both roll in a specific direction based on user input.

That said, the complexity of operation in a car arises from increasing the number of controls from 1 to, at minimum, 10. I would say the cart to car analogy is more akin to learning a complex API.

I would say that a better analogy for traditional languages vs. Haskell is that of a car to a fighter jet. The fighter jet is faster and really really good at certain things like movement on 2 planes and firing missiles. However, the fighter jet is orders of magnitude more difficult to operate and requires much more maintenance. For real world concerns such as getting groceries or transporting furniture, a car is the much more practical choice. I mean, we'd all love to own a fighter jet, but would any of us really have a use for it other than screwing around?


On one hand I want to argue with you... because for me this [1] is arguably more readable than, e.g., a Java codebase. But I haven't done anything in Java for ~5 years, and I have worked through a Haskell textbook recently :-)

On the other hand, I heard somebody say "Haskell program file usually consists of 10 language extensions, 20 imports, and 1 line of Perl" and I understand the sentiment :-)

[1] https://github.com/begriffs/postgrest/blob/0b486ccf446c6480d...


Some feedback: I need to see some kind of "big picture" usage highlights. I find it hard to picture what endpoints are generated based on the tables: does each table/row become a resource with its own url? How are relational queries and joins handled?

I looked at the docs and they seem to discuss various concepts and other minutiae but there is no real overview that cuts through the fat.

It's not immediately obvious to me how it fulfills its stated claim:

> PostgREST is a standalone web server that turns your PostgreSQL database directly into a RESTful API. The structural constraints and permissions in the database determine the API endpoints and operations.

> Using PostgREST is an alternative to manual CRUD programming. Custom API servers suffer problems. Writing business logic often duplicates, ignores or hobbles database structure. Object-relational mapping is a leaky abstraction leading to slow imperative code. The PostgREST philosophy establishes a single declarative source of truth: the data itself.

The second paragraph in particular is a pretty bold claim, so I find it strange that there is very little elaboration on how this project can eliminate the need for custom APIs.


Some usage examples here [1]. I do wish they were available in the README though.

[1]https://postgrest.com/en/v0.4/api.html


You are right in saying that the docs lack "the big picture", but that will be fixed soon(ish).

The big picture is that it's not PostgREST alone that accomplishes this "big claim" of eliminating the need for custom APIs. It's the combination of OpenResty (nginx), PostgREST, Postgres, and RabbitMQ together that gives you the possibility of "defining" APIs rather than "manually coding" them.


Sounds like an interesting tightrope walk. I feel a bit of concern about the number of moving parts being part of a single solution, having configured similar selections of software myself, but I'll wait and see. Best of luck.


Those moving parts (nginx/db/rabbitmq) are already there in every project; people are just not using their full power.


Fair point. I can't really judge until I see some kind of overview that deals with the pragmatic aspects of working with it, or trying it myself (which I may do some time).



Looks like to define richer endpoints you create views and stored procedures in the database. It can follow foreign-key and many-to-many relationships through a select URL param:

select=id,foreignkey(name)


Alternative:

pREST: a Go-based RESTful API for PostgreSQL databases: https://github.com/nuveo/prest http://postgres.rest/

"Problem: There is the PostgREST written in haskell, keep a haskell software in production is not easy job, with this need that was born the pREST."


[Curious/Discussion] Why the 'problem'? ( Haskell vs. Golang )

Dec 15, 2016: "Today the postgrest is not distributed a binary, need to install Haskell to compile. pREST in its 1.0 version will be binary distributed (cross compiling, run in Windows by example), so you don't need to have anything installed on the server to run. How old is Haskell and how many developers exist? Go has less years and more developers, if you need to fix a problem in postgrest turn where?"

"The reason the pREST was born was because of a lack of developer, I am using pREST in my company, I know haskell and my developers nothing, if you have any problem (in production) just sits in my hand (hand want that)."

https://github.com/nuveo/prest/issues/41


Haskell literally has the same deployment benefits as Go. They both deliver single static binaries with just a few dynamic dependencies (for Haskell: libc, libgmp)...

Honestly, the authors seem to have no clue what they're talking about and were looking for a good excuse to build something cool :). Which is fine!


Not a Haskell user, but what makes it difficult to keep it in Production?


Nothing. I honestly have no idea what they're talking about. Haskell apps tend to be tanks once they're deployed. The only issue I've ever seen was a Haskell app getting OOMed because the dev box only had 8GB of RAM and someone deployed some data science infra to the box sometime later (which needed >8GB RAM). I think this statement is more accurate:

> "Problem: There is the PostgREST written in Haskell, not in Go"

Moreover, if you want to use something like PostgREST but need to extend it, and don't know Haskell, this is a completely legitimate problem. While I'm all for competing implementations, the pREST author's claim for why it's needed is inaccurate.


Extending something written in a language like Haskell[0] definitely poses challenges. But even if the language makes it unmodifiable in your environment, there's no reason it can't be treated as a black-box component and used.

[0] By "like Haskell", I really mean "unreadable/unwritable by the team supporting it". This could apply equally to Ruby, Java, bash, SQL, JavaScript, and Python depending on the makeup of the team supporting it.


What's difficult about deploying a single binary?



A paraphrasing of the discussion:

q: "What issues did you have with Haskell in production?"

a: "PostgREST doesn't provide binaries and building it is hard"

q: "Actually PostgREST binaries are provided for multiple platforms." [1]

a: "The real reason is we don't have Haskell developers here." Case closed.

From personal experience PostgREST is high quality work; a set it and forget it service that doesn't need babysitting. Bugs are typically about missing capabilities, as opposed to failures. "Low defect" Haskell that does what it says on the tin.

[1] I don't know exactly when binaries first appeared, but they were available as of April 2016, about seven months before the first pREST commit.


This is neat and I am definitely going to dig in and play around with it. I guess my biggest worry with projects like these is: what happens when something breaks? If I decide to integrate a piece of tech into my stack, I need to be able to intimately understand what's going on under the hood. If something breaks or works in a way I don't expect, I need to know it well so I can diagnose and fix the problem. Something that abstracts away this much of the dirty work makes it less attractive to me for anything serious.

It's also written in Haskell, which is an awesome language, I hear. However, the syntax is foreign compared to "traditional" languages and it's less well understood due to a smaller community. That means I can't just fork the code and fix a bug if I end up in a bind. Just seems kinda risky. Hope to be proven wrong because it is a really cool idea. Best of luck to the authors.



What's the use case for something like this? The claim that it writes APIs better than I could by hand doesn't make a lot of sense to me--writing an API-ORM-thing, sure, but not a non-trivial API. I've never built an API that is simply a CRUD front-end to a database--there's always business logic, plus the output of the API very rarely matches the database tables underneath, e.g. you may be rendering 2-3 different models, but that's never revealed to the end user.


Don't confuse PostgREST with something that you just point at a (poorly designed) database schema, and magic happens, and you get a nice API.

You point it at a schema that consists only of views (that you define) and stored procedures (that you write) which abstract away the underlying tables. You define constraints on all your columns so junk does not get into the DB. You define database roles and RLS policies and grant them privileges so that you control who has access to what.

You still, in a way, write backend code, but in this case the backend code is mostly views/constraints/triggers and, in rare cases, stored procedures.
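A minimal sketch of that views-plus-grants pattern (all schema, table, and role names here are made up, not PostgREST conventions):

```sql
-- Hypothetical "api" schema exposed to the web; core tables stay private.
create schema api;

create view api.todos as
  select id, title, done
  from private.todos
  where deleted_at is null;          -- the API never sees soft-deleted rows

-- The web role can only ever touch the view, never the underlying table.
create role web_anon nologin;
grant usage on schema api to web_anon;
grant select on api.todos to web_anon;
```

Because the view is the contract, the private table can later be split or renamed without changing the API surface.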


I see. Either way, that's not really obvious from the readme or the tagline, "REST API from any existing PostgreSQL database."


I guess this is like a lower-level version of Parse (on a different, transactional stack too). Pretty cool. I wonder, though: often I have cases that are mostly CRUD but with a little extra, e.g. "create this object and kick off a Stripe payment", or "create this object and send an MQ message". With Parse, you just write Node triggers to do that. Would I have to dig through Haskell code (or hire Haskell developers) to do the same here, or does PostgREST support an easier way?


You can trigger external actions by connecting PostgreSQL pubsub (LISTEN/NOTIFY) with an external job queue. https://postgrest.com/en/v0.4/intro.html#external-notificati...

A NOTIFY SQL command can be sent out from either a stored procedure or a table trigger. (The docs could use some examples of this, but that's the idea.)


Be aware that Postgres' NOTIFY is not stored/queued in any way. If your external client is not listening, it will get lost.

I would instead have the trigger insert a row into a "pending tasks" table - and then send a NOTIFY.
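A sketch of that "pending tasks" (outbox) pattern; the table and trigger names, and the `orders` table, are all hypothetical:

```sql
-- Persist the work first, then notify: the row survives even if
-- no external worker is currently listening.
create table pending_tasks (
  id        bigserial   primary key,
  payload   jsonb       not null,
  created   timestamptz not null default now(),
  processed boolean     not null default false
);

create function enqueue_task() returns trigger as $$
begin
  insert into pending_tasks (payload) values (to_jsonb(new));
  notify task_queue;  -- wake up any listener; purely advisory
  return new;
end;
$$ language plpgsql;

create trigger orders_enqueue
  after insert on orders
  for each row execute procedure enqueue_task();
```

A worker LISTENs on `task_queue` but also polls `pending_tasks where not processed` on startup, so nothing is lost across disconnects.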


http://debezium.io/ is a better way to do this.

It uses the logical decoding feature to get all row changes and writes them to a Kafka queue.

This means it can pick up where it left off after a crash.



This is meant to be only a data layer; it's not a full backend. Presumably your application events are handled at a different layer in your stack.

That said, you could add event handling inside PostgreSQL via triggers and one of the supported procedural languages (like PL/Python).


Postgres supports async notifications, so you could just write a little service which gets a message when a record changes, and put the Stripe logic there. https://postgrest.com/en/v0.4/intro.html#external-notificati...

Not as simple as Parse, I guess, but that's a tradeoff of using lower-level technology.


Odd choice to push JSON serialization onto the DB while touting horizontal scaling. Still pretty cool that this much is possible with Postgres and a minimal frontend.


Doing JSON serialisation in the DB is the only way to extract tree-like data from it. Another point to consider is that this is a much lower burden on the database than you might think (roughly speaking, it adds 15%-20% more CPU load than a normal query); it's C code doing this, so it can't get much faster. This type of load is easily horizontally scalable using read replicas, which in RDS is basically one click.

But forget all that talk about horizontal scalability: 99% of projects will never outgrow a single (big) database, so there's no use complicating things with "elastic" setups and "webscale". People don't realise just how fast Postgres is: http://akorotkov.github.io/blog/2016/05/09/scalability-towar...

Who among us has worked on projects that do 1M queries per second? Not many.


It's not really the only way (PostgreSQL has recursive queries), but it's definitely the most convenient (and in many cases the most efficient) way if you're not worried about referential integrity.


Care to explain how you would extract tree-like data in a single query (without duplicate data going over the wire)?


I use `WITH RECURSIVE` to traverse trees:

https://www.postgresql.org/docs/current/static/queries-with....
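For example, a sketch against a hypothetical adjacency-list table `nodes(id, parent_id, name)`, returning node 1 and all of its descendants:

```sql
with recursive subtree as (
  -- anchor: the root of the subtree we want
  select id, parent_id, name
  from nodes
  where id = 1
  union all
  -- step: children of anything already in the result
  select n.id, n.parent_id, n.name
  from nodes n
  join subtree s on n.parent_id = s.id
)
select * from subtree;
```

This walks the whole tree in one round trip, though the result still comes back as flat rows.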


It's not about traversing trees, it's about "returning" trees; there is a difference :)


Yes, and I use it to do things like "starting from this point in the tree, recursively return all of its children". Usually I ask for `parent_id`, other times I ask PostgreSQL to accumulate an array describing the path the database took to get from here to there. In no case have I found `WITH RECURSIVE` to be insufficiently powerful for working with trees.


I don't think you understand what I am referring to when I say "returning data". How do you return this http://pastebin.com/9cqzQETj information from the database, in a single query, without duplicate data?

Consider three tables, clients/projects/tasks, with FKs between them.

Please paste the query that returns all the information above using "with recursive".

This is how the communication between the client and PG goes: https://www.pgcon.org/2014/schedule/attachments/330_postgres...

- the client sends an SQL command
- the server replies with RowDescription detailing the structure
- after that, a DataRow message is sent for every row
- finally, the server sends CommandComplete and ReadyForQuery

So inherently the protocol can only represent a 2D array.


You'd use these [0] functions to build a JSON string and return your structure, one row per root of each hierarchy (you might only return one row if you're after a single hierarchical JSON object).

[0] https://www.postgresql.org/docs/9.5/static/functions-json.ht...
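A sketch for the clients/projects/tasks example upthread (column names assumed), returning one row per client with its whole subtree as a single JSON value and no duplicated parent data:

```sql
select json_build_object(
  'name', c.name,
  'projects', (
    select coalesce(json_agg(json_build_object(
      'name', p.name,
      'tasks', (
        -- innermost level: the tasks of each project as a JSON array
        select coalesce(json_agg(json_build_object('name', t.name)),
                        '[]'::json)
        from tasks t
        where t.project_id = p.id
      )
    )), '[]'::json)
    from projects p
    where p.client_id = c.id
  )
) as client
from clients c;
```

The `coalesce(..., '[]'::json)` keeps childless nodes as empty arrays instead of SQL NULLs.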


Please read the whole thread; this is the exact point I was making: JSON is the only way to return tree-like data. willglynn was saying you can do it with "with recursive", and I think he was confusing "traversing/inspecting" tree-like data with "returning" it.


I think this framework is a better way if clients need to access the database via a RESTful API. At least it can easily encapsulate business logic as a microservice.

https://www.reddit.com/r/ruby/comments/61bb6h/squad_simple_e...


Wanted a throwaway API recently and then finally realized that this cannot be deployed to Heroku with the PG add-on. Disappointing.


Why?


In what cases should I use this? It looks like it's for simple/early-stage projects, where an ORM would be quite handy.


An ORM is a library integrated into a language runtime. Postgrest is a service – a separate process – which sits in front of a Postgres database, offering a RESTful HTTP API over that database. This means web or mobile HTTP clients can access the database in a safe, controlled manner.

Postgrest basically shifts the work of writing a basic CRUD API (a task for which you would probably use an ORM) to declaring a SQL schema. From that schema, it infers which endpoints should exist and what they should do. For a certain class of web app, this can be a HUGE time saver.

Beyond that, consider checking out the "Motivation" section of the website: https://postgrest.com/en/v0.4/intro.html


To be fair, you could do the same with an abstraction layer over an ORM.


I'm designing a web app that doesn't need much actual backend, save some static data slightly too large to ship with the static files. I have nginx serving static files and proxying db requests to postgrest.


What specific ORM do you have in mind when you write "handy?" I'd really like to know in case I've missed something.


Can't help but ask myself "what problem is this project solving?" - perhaps so that more front-end-oriented folks can easily access a data source otherwise only exposed via SQL.


Out of interest is there an equivalent of this for MySQL?


Here's a Node-based framework: http://loopback.io/


Is there anything like this for SPARQL endpoints?


Not valid for Show HN.


[flagged]


This is a super rude and arrogant thing to say. Have you considered, before dismissing a project with 1.2k commits, that it's not the 54 authors who worked on the project for three years who are missing something, but you who heard about it at most two hours ago?


[flagged]


Wow, unbelievable. I don't know in what world you think it's OK to call something useless just because you don't personally see its use, when faced with evidence that plenty of people in this thread alone do.

I understand you're not a native speaker (neither am I), so I just want to make it very clear to you in case you don't realize it: Your original message, and this reply you just wrote, are both completely out of place and uncalled for. And incorrect on a technical level.

This tool is exposing DB operations as an HTTP API. It wasn't written for you specifically, it was written to solve a particular scenario. One day, you'll encounter said scenario, and you won't even realize there's a tool out there to specifically solve it because you dismissed it as "useless". And instead of "doing something great with your time", you'll waste a few days/weeks reimplementing a poorer version of that wheel.

I'm not affiliated with the project, I don't even use it, but I can still appreciate what it does. If you can't, that's fine, but don't shit on people's work with no more knowledge about it than "reading the short introduction". Unless you want me to go through your GitHub profile and shit on your work based on how you name your repos, which would be just as productive.


OK, the tone of my message might have been a little out of line, but I find it important to make people aware of incorrect usage of principles/tools/frameworks. I would not have as much of a problem with this project if they called it an "API" instead of a "REST API", because that would communicate the purpose much more clearly.

Also, it would be a huge design mistake to use this on any serious project, and probably not everybody will realize this.


I have a little bit of experience with PostgreSQL.

Coming from a MySQL background, I was pleasantly surprised by how much abstraction you can build into your PostgreSQL database itself. With just a few views and perhaps functions you can abstract away your actual data in a clean and performant way, and thus enable the use of such a REST API without worrying too much about future schema changes.


You've just formulated one of the core concepts/techniques PostgREST relies on. You don't expose your core tables as an API; you expose views that extract data from your core tables, even if in the beginning those views are nothing more than "select * from some_table". As the API evolves, you are free to change the logic in those views without needing to touch your core tables.


That's actually a very good counter-argument against mine.


Agreed, this is a web api for postgresql - there is nothing REST about it.


A web page, for example, should not have direct access to the database.


If an app can't have direct access to the database, why would letting it access via a web api be better?


I can think of a few use cases.

- The developer is writing apps in languages that do not have PostgreSQL drivers, or the available drivers have major drawbacks. HTTP libraries are pretty much ubiquitous, and the benefits of using HTTP might outweigh the drawbacks of the existing drivers.

- HTTP as a protocol is very good in the sense that there is a lot of tooling around it for load balancing, proxying, security, etc. Depending on the skill level and distribution in the organization, it may make a lot of sense to use HTTP as a protocol for accessing the database so that certain aspects of security, high availability, etc. can be the responsibility of system administrators, rather than developers who must hack into the database driver.

The two use cases above are not theoretical. Someone invented DBSlayer a decade ago, which is like a PostgREST for MySQL. You can read their rationale here: https://open.blogs.nytimes.com/2007/07/25/introducing-dbslay...

And a third use case:

- The author deliberately wants to expose a public database, as a public learning environment of some sort. No production data is stored in the database.


Because the web API (PostgREST) has strict control over the types of queries the client is allowed to execute, thus preventing DoS attacks against the DB that force it to run complicated/unoptimised joins or functions that use a lot of CPU.


PostgreSQL has built-in strict control also.


No, it does not: it controls what data a user can access, but not what types of joins he can run on the tables he has access to or what functions he can execute.

For example, anyone could run "select md5(bigtextcolumn) from articles" and kill the DB if one were to expose the entire SQL language to the web. PostgREST protects you against that.


Is it possible to include YAML in the response format?

    - [name, age, sex, phone]
    - [Taylor Swift, 27, female, 555-SWIFT]
I find YAML to be the best format for everything.


Since you can get JSON, and JSON is (essentially) a subset of YAML, you already have YAML.



