
Somebody on our team put this into production. I guess this solution has some merit if you need something quick, but in the long run it turned out to be painful. It's basically SQL over REST. Additionally, your DB schema becomes your API schema, which means you either force one into the shape of the other or you build DB views to fix that.



> or you build DB views to fix that.

That's what VIEWs are for! Well, one use-case of VIEWs, anyways.

There's nothing wrong with the schema as the API since you can use VIEWs to maintain backwards compatibility as you evolve your product.

Put another way: you will have an API, and you will need to maintain backwards compatibility. Not exposing a SQL schema as an API does not absolve you of that, nor does it make it any easier to stay backwards-compatible.

You might argue that you could have server-side JSON schema mapping code to help with schema transitions, and, indeed, that would be true, but whatever you write that code in, it's still code, and writing it in SQL is just as valid as writing it in anything else.
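For example, here's a minimal sketch (the table, view, and column names are made up) of keeping an old API shape stable after a column rename in the base table:

  -- Hypothetical base table that renamed "name" to "full_name".
  CREATE TABLE customers (
    customer_id bigint PRIMARY KEY,
    full_name   text NOT NULL,
    email       text NOT NULL
  );

  -- The view preserves the old column name, so API clients that still
  -- expect "name" keep working while the base schema evolves.
  CREATE VIEW api_customers AS
    SELECT customer_id,
           full_name AS name,
           email
      FROM customers;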


How do you do CRUD with views? I know Reads are what views do.


> How do you do CRUD with views? I know Reads are what views do

Both automatically updatable views (supporting insert/update/delete by default because their structure provides a clean mapping to the backing table) and views made updatable through instead-of triggers exist in Postgres.


  CREATE TRIGGER <name>
  INSTEAD OF INSERT OR UPDATE OR DELETE
  ON <view-name> ..
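To flesh that out, here's a hedged sketch (all names hypothetical) of a join view made writable with an INSTEAD OF INSERT trigger; a plain join view is not automatically updatable, so the trigger routes the write to the base tables:

  CREATE TABLE people (
    person_id bigserial PRIMARY KEY,
    full_name text NOT NULL
  );

  CREATE TABLE emails (
    person_id bigint PRIMARY KEY REFERENCES people,
    email     text NOT NULL
  );

  CREATE VIEW people_api AS
    SELECT p.person_id, p.full_name, e.email
      FROM people p JOIN emails e USING (person_id);

  CREATE FUNCTION people_api_insert() RETURNS trigger AS $$
  BEGIN
    -- Split the incoming row across the two base tables.
    INSERT INTO people (full_name) VALUES (NEW.full_name)
      RETURNING person_id INTO NEW.person_id;
    INSERT INTO emails (person_id, email) VALUES (NEW.person_id, NEW.email);
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  -- EXECUTE FUNCTION requires Postgres 11+; older versions use EXECUTE PROCEDURE.
  CREATE TRIGGER people_api_insert_trg
    INSTEAD OF INSERT ON people_api
    FOR EACH ROW EXECUTE FUNCTION people_api_insert();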


You can have insert/update triggers on views. You shouldn't but you can.

More realistically, stored procs would do the CUD parts.


> You can have insert/update triggers on views. You shouldn't but you can.

You can, and there is no reason you shouldn't.

> More realistically, stored procs would do the CUD parts.

Why are stored procs more realistic?


Triggers are implicit, have side effects and are not deterministic. They are confusing and surprising.

A procedure call is explicit, a trigger is implicit. You don't call a trigger; it just happens as a side effect of something else, and people tend to forget implicit things. Suddenly you notice that something is acting strangely or slowly in your application. You can look at your functions and procedures and try to find the problem, but if your application has triggers all over the place, how do you know what is going on? A trigger can change a dozen rows, which in turn can change other rows, so changing a single row can fire thousands or millions of triggers.

Also, triggers are not fired in a particular order; the database is free to change the query plan according to what it thinks is best at the moment, so triggers are not deterministic. Triggers can sometimes work and sometimes not.

Almost everything that can be done with a trigger can be done with a procedure, but explicitly, deterministically and in most cases without side effects.
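For illustration, a hedged sketch of that trade-off (the orders/order_audit tables are made up, and CREATE PROCEDURE needs Postgres 11+): the side effect that would otherwise hide in an AFTER INSERT trigger is written out in the procedure body where the caller can see it.

  CREATE PROCEDURE create_order(p_customer_id bigint, p_total numeric)
  LANGUAGE plpgsql AS $$
  DECLARE
    v_order_id bigint;
  BEGIN
    INSERT INTO orders (customer_id, total)
    VALUES (p_customer_id, p_total)
    RETURNING order_id INTO v_order_id;

    -- Explicit audit write instead of a hidden AFTER INSERT trigger.
    INSERT INTO order_audit (order_id, action, logged_at)
    VALUES (v_order_id, 'created', now());
  END;
  $$;

  CALL create_order(42, 99.95);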


> You don't call a trigger

You call an INSTEAD OF trigger implementing updatability of a view, or the SELECT query defining a view, just as much as you call property setters or getters in OOP languages.

With the DB triggers, as in many OOP languages (C#, Python), this is an implementation detail obscured from the calling site, which is good for loose coupling, modularity, etc.

Your objections, while IMO still overblown, have relevance to some uses of triggers (they are particularly applicable to AFTER triggers and BEFORE triggers other than those implementing constraints, but least applicable to INSTEAD OF triggers implementing view updatability, which is what we are discussing here.)


> much as you call property setters or getters in OOP languages

Good design is obvious and orthogonal[1]. If you write setters in an OOP language in such a way that they do surprising things, i.e. not just setting a value, then I would call that bad design.

> which is good for loose coupling, modularity, etc.

What do you gain by using triggers in this case? All you get is mental overhead, because whenever you use DML you have to keep in mind that there might be a trigger hiding somewhere that does strange things. If you call a procedure instead, you make it clear that you want to do more than just a simple update or insert.

> [...] it is possible for the method call to make use of concurrency and parallelism constructs [...] to do an unknown number of things in an unknown order

Why would I want this? I want my code simple[2], stupid and obvious, and not convoluted, clever and surprising[3].

[1] https://stackoverflow.com/a/1527430

[2] https://www.youtube.com/watch?v=rI8tNMsozo0

[3] https://en.wikipedia.org/wiki/Principle_of_least_astonishmen...


Why do you think triggers must be astonishing (but OOP not so)??



There's nothing wrong with that if that's the logic you want!


> Triggers are implicit, have side effects and are not deterministic. [...] A procedure call is explicit, a trigger is implicit.

Method dispatch (whether the language be OOP or otherwise) is also "implicit".

Think of triggers as controlled code injection.


You still have to call a method if you want it to do something. It doesn't simply happen as a side effect of something else. A method call only ever does one thing at a time, not multiple things in a random order. Read this if you still think triggers are a good idea: https://blogs.oracle.com/oraclemagazine/the-trouble-with-tri...


> A method call only ever does one thing at a time, not multiple things in a random order.

That's...not at all guaranteed if it is possible for the method call to make use of concurrency and parallelism constructs. It's actually not at all uncommon for method calls to do an unknown number of things in an unknown order (and a call can even have an unknown number of them still in progress when it returns!)


Exactly.


Yeah, and in SQL that's INSERT/UPDATE/DELETE on a table/view. Syntactically, using a stored procedure is really no different, and it can have function calls you don't know about because its body could dispatch just like triggers do.

This whole thing about triggers being implied/hidden is not based on anything other than taste.


> and it can have function calls you don't know about because its body could dispatch just like triggers do.

Please give an example of a stored procedure that does something else than what the code says it will do.

> This whole thing [...] is not based on anything other than taste.

Good taste: Insert, update and delete doing exactly what the statement says. Procedures doing exactly what the code says.

Bad taste: Insert, update and delete surprising you with magic tricks.


> > and it can have function calls you don't know about because its body could dispatch just like triggers do.

> Please give an example of a stored procedure that does something else than what the code says it will do.

That's not what I wrote. I wrote that the caller of an SP need not know anything about the SP's body, any more than the caller of a DML statement needs to know anything about triggers. Of course that is true. And of course it's also true that the caller could know all those details in either case.


Let's say customer 1234 wants us to delete his/her customer record, so we issue the following statement:

  DELETE FROM customers WHERE customer_id = 1234;

The statement runs successfully and of course we would expect the customer record to be deleted, because that's what we just said, isn't it? However, the customer is still there somehow. What has happened? Was our statement wrong?

It turns out our application has triggers, and to our surprise the DELETE was changed into

  UPDATE customers SET contract_status = 'terminated' WHERE customer_id = 1234;

just because a few years ago someone thought that the logic of terminating a contract should be wrapped in a now long-forgotten trigger that magically transforms a straightforward DELETE into an UPDATE.

If the developer had instead put the logic into a procedure like terminate_contract(customer_id), we wouldn't have had this problem in the first place.
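A hedged sketch of what that procedure might look like, reusing the columns from the example above (CREATE PROCEDURE needs Postgres 11+):

  CREATE PROCEDURE terminate_contract(p_customer_id bigint)
  LANGUAGE sql AS $$
    UPDATE customers
       SET contract_status = 'terminated'
     WHERE customer_id = p_customer_id;
  $$;

  -- The caller states the intent explicitly instead of issuing a DELETE
  -- that a trigger silently rewrites.
  CALL terminate_contract(1234);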


There's nothing wrong with having a trigger that turns deletes into updates, if that's what you want. Why would you be surprised that you can do this, or that it's been done?

Your schema is an interface here, so read its docs!

EDIT: BTW, it's quite reasonable to have a policy of not deleting rows in certain tables but marking them deleted/terminated instead. This is especially important for security-relevant tables where the names of entities are liable to get written down in external (to the DB) ACLs. It's quite reasonable to implement such a policy in the DB as triggers -- indeed, there's no better way to do it, since otherwise a user who doesn't read the docs (like you) but does have privilege might break the database.

So, actually, I really think you've completely misunderstood. What you thought was a problem was likely a solution to a bigger problem that you were not aware of. If you removed that trigger, you likely broke something. Meanwhile you're convinced triggers are a bad idea and spread that around as gospel.
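A hedged sketch of such a policy trigger on a base table (column names are made up): a BEFORE DELETE trigger marks the row and returns NULL, which tells Postgres to skip the physical delete.

  CREATE FUNCTION soft_delete_customer() RETURNS trigger AS $$
  BEGIN
    UPDATE customers
       SET contract_status = 'terminated'
     WHERE customer_id = OLD.customer_id;
    RETURN NULL;  -- returning NULL from a BEFORE trigger suppresses the DELETE
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER customers_soft_delete
    BEFORE DELETE ON customers
    FOR EACH ROW EXECUTE FUNCTION soft_delete_customer();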


> Why would you be surprised[...]

The statement says DELETE and that is what I want, the RDBMS executes it and tells me "deleted 1" (yes, it does). What am I supposed to think?

> Your schema is an interface here, so read its docs!

What docs? Apparently triggers are obvious and self-documenting, or at least that's what the guy who wrote them thought.

> there's no better way to do it, since otherwise a user who doesn't read the docs (like you) but does have privilege, might break the database.

Why properly revoke the deletion permission and offer terminate_contract(customer_id) as an alternative if you can use a magic trick instead?


> > Why would you be surprised[...]

> The statement says DELETE and that is what I want, the RDBMS executes it and tells me "deleted 1" (yes, it does). What am I supposed to think?

Well, you know about triggers!!! So you're supposed to think that the expected thing happened, whatever the schema designer intended.

Whoever wrote that trigger did it for a very good reason, probably the one I gave earlier (namely: to prevent name reuse, and also because gee, you might create a new contract with that customer later and not want to start from scratch).

> > Your schema is an interface here, so read its docs!

> What docs?

Those written by the people who wrote that trigger, designed the tables, your whole schema.

> Why properly revoke the deletion permission and offer terminate_contract(customer_id) as an alternative if you can use a magic trick instead?

(I assume you meant to write "Why _not_ ...".)

Well, for one, because someone who doesn't understand these things might have enough privilege and ignorance to do a real DELETE and break your data.

Moreover, the logical operation here is a DELETE! Just because you mark the row deleted instead does not make it not a logical deletion.

You're choosing to take one aspect of SQL RDBMSes extremely literally while then also choosing to ignore or malign another part. That's not a recipe for success.


> Well, you know about triggers!!!

Yes, and I also know about other nefarious things like monkey patching in some OO languages. Doing such things is antisocial behavior and something that 99% of developers don't expect.

> Those written by the people who wrote that trigger, designed the tables, your whole schema.

As I said, there was no documentation.

> you might create a new contract with that customer later and not want to start from scratch

Or you might get sued for non-compliance with the GDPR.

> because someone who doesn't understand these things might have enough privilege and ignorance to do a real DELETE and break your data.

Or you revoke the privileges and create a straightforward procedure. No magic, no surprises. And you can still grant delete privileges to a role that can delete customer records for GDPR reasons.

> You're choosing to take one aspect of SQL RDBMSes extremely literally while then also choosing to ignore or malign another part. That's not a recipe for success.

The difference is that SELECT, INSERT, UPDATE, and DELETE are straightforward features that do exactly what you say, unless you use another malicious feature that leading database experts recommend not to use:

avoid all triggers, all of them, they are evil. — Tom Kyte

My advice is to avoid using TRIGGERs — Joe Celko


I agree about monkey patching (but not code injection).

> As I said, there was no documentation.

Well, there's your problem :)

> Or you might get sued for non-compliance with the GDPR.

Non-sequitur. That's just an argument that your lawyers need to be involved in matters of data retention.

> malicious feature

That's utter nonsense. If it was "malicious", it would have been removed by now. All the evidence says that they are useful.

What next? No foreign keys? Those are typically implemented as internally-generated triggers. That must make them Doubly-malicious, or something.


> There's nothing wrong with having a trigger that turns deletes into updates, if that's what you want.

While I wouldn't use it on base tables, a view that provides a “current” picture, where deletes are updates (or even inserts of a deletion event) into a base table that, potentially along with others, provides a complete historical picture of the data of which the view presents a current snapshot, is a useful approach.
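A hedged sketch of that pattern (names invented): the base table is an append-only event log, the view shows the current snapshot, and an INSTEAD OF DELETE trigger records a deletion event rather than removing history.

  CREATE TABLE contract_events (
    customer_id bigint NOT NULL,
    status      text   NOT NULL,          -- e.g. 'active', 'deleted'
    recorded_at timestamptz NOT NULL DEFAULT now()
  );

  -- Current snapshot: latest event per customer, hiding deleted ones.
  CREATE VIEW current_contracts AS
    SELECT customer_id, status, recorded_at
      FROM (SELECT DISTINCT ON (customer_id)
                   customer_id, status, recorded_at
              FROM contract_events
             ORDER BY customer_id, recorded_at DESC) latest
     WHERE status <> 'deleted';

  CREATE FUNCTION current_contracts_delete() RETURNS trigger AS $$
  BEGIN
    INSERT INTO contract_events (customer_id, status)
    VALUES (OLD.customer_id, 'deleted');
    RETURN OLD;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER current_contracts_delete_trg
    INSTEAD OF DELETE ON current_contracts
    FOR EACH ROW EXECUTE FUNCTION current_contracts_delete();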


That's fine, though if you have to worry about sufficiently-privileged and insufficiently-knowledgeable DBAs/devs, then it's best to have such triggers on base tables. After all, you want referential integrity at all times -- it should take drastic action to perform operations that don't adhere to your business rules.


Yes, a DELETE trigger on a table that reports successful deletion but doesn't actually delete the row referenced, leaving it still in the table, is bad design.

It's not evidence that triggers are bad, it's just bad design.

(An INSTEAD OF DELETE trigger on a view that marks a base table entry as logically deleted, causing the corresponding data to be removed from the view, is, OTOH, not bad in the same way.)

> If the developer had instead put the logic into a procedure like terminate_contract(customer_id), we wouldn't have had this problem in the first place.

Or if they put it as an INSTEAD OF DELETE trigger on a current contracts view, with the design described above; conversely, a badly-named stored proc would have the same basic problem as the trigger at issue here. This isn't a trigger problem.


I disagree. The logical operation is "delete", and "mark deleted" is a perfectly fine concrete implementation of "logical delete".


> You shouldn't but you can.

Shouldn't? Why?


They're non-obvious and can bite you in several ways. A lot has been written about why triggers are bad (random example: https://www.itprotoday.com/development-techniques-and-manage...). Back before service layers were common, when desktop clients connected to the database directly, triggers were everywhere, and just about everyone has some war stories of triggers breaking things.


A lot of things are non-obvious, especially when you're not aware of them and don't expect them.


What's wrong with views (which should have been used from the start)? What were the pain points?


They are great when used well, but non-materialized views can kill performance with large data sets.
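When the view's defining query really is too heavy to run on every read, one hedged option (hypothetical names) is a materialized view refreshed periodically:

  CREATE MATERIALIZED VIEW daily_sales AS
    SELECT order_date, sum(total) AS revenue
      FROM orders
     GROUP BY order_date;

  -- A unique index is required for REFRESH ... CONCURRENTLY, which avoids
  -- blocking readers during the refresh.
  CREATE UNIQUE INDEX ON daily_sales (order_date);

  REFRESH MATERIALIZED VIEW CONCURRENTLY daily_sales;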


That's the same as saying "unoptimized selects can kill performance with large data sets". Of course they can. That's what optimization is for.

We have quite a large amount of data (100TB+ and trillions of rows at this point [0]) and no problems with views.

[0] https://www.citusdata.com/customers/pex


A view is nothing but a query, so if the view is "killing" performance for you, running the same query from the client will not change anything; the performance will get "killed" in the exact same way.


Yes, if you are running the same query. Some of the worst uses of views I've seen involve massive joins without filters, with the filtering applied further down, so you end up working with a recordset of millions of records rather than a few thousand.


I understand the problem you are describing. I would say that is the wrong type of view to create: if one plans to write filters on top of a view, it should be written so the filters can be inlined into the view (which would basically give you the finished query you would be sending from the layer above). For example:
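As a rough sketch of what "inlinable" means here (tables are hypothetical): keep the view a plain join so the planner can push the caller's predicate into the base-table scans instead of materializing the whole join first.

  CREATE VIEW order_details AS
    SELECT o.order_id, o.customer_id, o.order_date, i.product_id, i.quantity
      FROM orders o
      JOIN order_items i USING (order_id);

  -- The planner inlines the view, so this predicate is applied at the
  -- scans of orders/order_items rather than to the full joined result.
  SELECT * FROM order_details WHERE customer_id = 1234;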


That kind of reminds me of the classic doctor's joke, though: if it hurts when you do that, don't do that.


It's kind of unclear what problem you are trying to describe. Views shouldn't confound the query planner, and creating views with "filters" sounds like it's probably a mistake -- just query the view with the predicates you need.


So filtering later outside of the db? If you filter on a view the optimiser should be able to sort it out - as others have mentioned, it’s no different from a regular query.



