Granted, for this to work the DB needs to be somewhat stable in size. For us, it was a list of products that rarely changes, so the Redis cache can be built on server bootstrap.
CREATE INDEX trigram_index ON words_table USING GIN (sentence gin_trgm_ops);
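For context, that operator class comes from the pg_trgm extension. A minimal sketch of setting it up and the kinds of queries that can actually use such an index (words_table/sentence are just the names from the line above):

CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Trigram GIN indexes accelerate LIKE/ILIKE with leading wildcards
-- as well as fuzzy similarity matches.
SELECT * FROM words_table WHERE sentence ILIKE '%quick brown%';
SELECT * FROM words_table WHERE sentence % 'quik brown fox';  -- similarity operator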
It's definitely airtight, and it's nice knowing that no version of my data can currently be inconsistent, but there are definitely costs. For one, it's just slow. Bear in mind, I'm on commodity hardware, like a t2, but it still seems slow, as some writes take up to 1-2s. It's also not great that it's basically magic; I can't really make alterations to change or improve what it's doing. Lastly, it's hard to version control. I have the triggers checked in with my application code, but it would be easy to not even be aware of their existence. On application startup, I override existing triggers with the ones from the code, just to make sure they're in sync, and I explicitly log this process, but I could still see the whole thing happening without it being at all clear to another developer.
Would I use these features again? Probably not.
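For anyone curious what that startup override looks like, a rough sketch (the trigger, function, and table names here are invented); CREATE OR REPLACE plus DROP TRIGGER IF EXISTS makes the whole thing idempotent:

-- Run at application startup so the database trigger always matches
-- the version checked into the application repo.
CREATE OR REPLACE FUNCTION audit_orders() RETURNS trigger AS $$
BEGIN
  -- hypothetical audit table: a timestamp plus a JSON copy of the row
  INSERT INTO orders_audit (changed_at, payload)
  VALUES (now(), to_jsonb(NEW));
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS orders_audit_trigger ON orders;
CREATE TRIGGER orders_audit_trigger
  AFTER INSERT OR UPDATE ON orders
  FOR EACH ROW EXECUTE FUNCTION audit_orders();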
How big are the inserts? I put my event-sourcing log in a regular table that fully duplicates the input rows as JSON. It has never been that slow for me.
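Not exactly the parent's setup, but a sketch of the shape I mean, with invented names:

CREATE TABLE event_log (
  id         bigserial PRIMARY KEY,
  table_name text NOT NULL,
  payload    jsonb NOT NULL,
  logged_at  timestamptz NOT NULL DEFAULT now()
);

-- Each write also appends a full JSON copy of the affected row.
INSERT INTO event_log (table_name, payload)
SELECT 'orders', to_jsonb(o) FROM orders o WHERE o.id = 123;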
1. json functions: json_build_object, json_agg, etc.
Ever want to produce a hierarchical result set?
SELECT json_build_object(
  'id', g.id,
  'users', coalesce(u.users, '[]'::json)
)
FROM "group" g
LEFT JOIN LATERAL (
  SELECT json_agg(u) AS users
  FROM "user" u
  WHERE g.id = u.group_id
) u ON true;
At the extreme, you could use PostgREST to make an entire REST JSON API with just PostgreSQL.
2. With PostgreSQL's row-level security, end users can get their own connections to the database.
You can even have an instant GraphQL server via Postgraphile. 
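A minimal sketch of what row-level security looks like, assuming a hypothetical documents table and one database role per end user:

ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

-- Each role can only see (and only write) its own rows.
CREATE POLICY documents_owner_only ON documents
  USING (owner = current_user)
  WITH CHECK (owner = current_user);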
It's amazing how much effort we put into restricting access on the server while ignoring user roles at the data layer.
The end result is that column-level controls, let alone row-level ones, end up unused in most cases.
Very security-conscious places may just use stored procedures for everything and limit access to those.
Edit: it’s not the same thing but fwiw pg row level security is completely decoupled from the dbms access control.
That's one of the reasons PostgREST seems promising to me.
It definitely introduced me to RLS though.
- transactional DDL: https://www.google.com/search?q=transactional+ddl+postgresql
- expression indexes: https://www.postgresql.org/docs/current/indexes-expressional...
- partial indexes: https://www.postgresql.org/docs/current/indexes-partial.html
- hyperloglog: https://github.com/citusdata/postgresql-hll/releases/tag/v2....
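Quick sketches of the first three, using a hypothetical users table:

-- Transactional DDL: schema changes roll back like anything else.
BEGIN;
ALTER TABLE users ADD COLUMN nickname text;
ROLLBACK;  -- the new column never existed

-- Expression index: lets lookups on lower(email) use an index.
CREATE INDEX users_email_lower_idx ON users (lower(email));

-- Partial index: only indexes the rows the hot query actually touches.
CREATE INDEX users_active_idx ON users (last_seen_at)
  WHERE deleted_at IS NULL;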
That is such a huge feature!
PERFORM pg_notify('send_sms', <RECORD>::TEXT);
NOTIFY payloads are deduped over the course of a transaction, with an O(n^2) algorithm if all payloads are different.
If you do a NOTIFY on delete or update and hit a few million rows, the query can go from seconds to hours.
The solution would be to write your payloads to a temp table and send them in batches when the transaction ends. Since the dedup cost is quadratic in the number of payloads, notifying in chunks of 100 makes that step roughly 10,000 times faster.
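Not exactly the temp-table approach described, but a sketch of the same batching idea using a statement-level trigger with a transition table (PostgreSQL 10+; the events table and id column are invented):

CREATE OR REPLACE FUNCTION notify_events_batched() RETURNS trigger AS $$
DECLARE
  batch text;
BEGIN
  -- One payload per 100 inserted rows instead of one per row.
  FOR batch IN
    SELECT string_agg(id::text, ',')
    FROM (SELECT id, row_number() OVER () AS rn FROM new_rows) t
    GROUP BY (rn - 1) / 100
  LOOP
    PERFORM pg_notify('events_changed', batch);
  END LOOP;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_notify
  AFTER INSERT ON events
  REFERENCING NEW TABLE AS new_rows
  FOR EACH STATEMENT EXECUTE FUNCTION notify_events_batched();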
Foreign data wrappers, on the other hand, I use all the time: you can have a database for the "common" stuff and just inject it into all kinds of related ones (like per-tenant setups). I've run into some issues with joins and performance, but if need be you can just materialize the foreign tables locally and work from that.
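A rough sketch of that setup with postgres_fdw (all names invented):

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER common_db FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'common.internal', dbname 'common');

CREATE USER MAPPING FOR CURRENT_USER SERVER common_db
  OPTIONS (user 'app', password 'secret');

-- Pull the shared table into this tenant database.
IMPORT FOREIGN SCHEMA public LIMIT TO (products)
  FROM SERVER common_db INTO public;

-- If joins against the foreign table are slow, materialize it locally.
CREATE MATERIALIZED VIEW products_local AS SELECT * FROM products;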
At $work we use table inheritance - each client gets their own schema with their own copies of the table. The idea is cute, but the problem is that the structure of indexing doesn't cooperate with it. Therefore even the simplest query against a single parent table turns into thousands of index lookups on thousands of different tables. As you can imagine, this is not exactly a scalable situation.
If they fixed the index structure, if need be only for a few index types, it would make a huge difference. But it doesn't sound like that is easy internally or will happen any time soon.
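For readers who haven't seen this pattern, a bare-bones sketch of the layout described above (names invented):

CREATE TABLE public.orders (
  id     bigint NOT NULL,
  amount numeric
);

-- Each client schema holds a child table inheriting from the parent.
CREATE SCHEMA client_a;
CREATE TABLE client_a.orders () INHERITS (public.orders);

-- Querying the parent scans every child; with thousands of clients,
-- even a simple indexed lookup fans out into thousands of per-child lookups.
SELECT * FROM public.orders WHERE id = 42;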
I use it regularly and it may help people. Makes use of triggers and some of the other items mentioned.
I haven't used it, but it fits your request of a postgresql specific migration tool where you don't have to futz with manually writing migration scripts.
I did look at that; I think I was a participant in the last discussion, but somehow it got lost in the bookmarks.
It doesn't have rollbacks (that I can see), but maybe it's good enough (aka better than Alembic migrations). It appears to deal with actual SQL rather than models, which is nice. Will check it out.
But it does seem like a lot more work when rapidly iterating schema.
- Foreign key support for inherited tables (i.e. a foreign key referencing a table with inheritance; the other way round works just fine). Table inheritance is super useful, but the lack of full foreign key support means we have to give up on some integrity (or replace the foreign keys with hand-written constraints).
- Bloom indexes on arrays and bigint (int8) types - would make it feasible to (almost) 'index all things' when working mostly with numeric data.
The manual says "These deficiencies [with table inheritance] will probably be fixed in some future release" -- and has since at least 2002, so I think it's safe to assume the situation will never improve.
Later I learned that table inheritance wasn't meant to be used for polymorphism, but rather to solve performance problems. I guess that sharded tables now fill that niche in a better way.
“Although inheritance is frequently useful, it has not been integrated with unique constraints or foreign keys, which limits its usefulness.”
Seems nice, but usually you'd actually prefer composition over inheritance.
One thing is not like the others.
The pub/sub is interesting. Has anyone tried it? Is it OK for production loads?
In fact the second sentence in the documentation is "[...] NOTIFY channel is invoked, either by this session or another one connected to the same database, all the sessions currently listening on that notification channel are notified, and each will in turn notify its connected client application." 
One feature of the pub/sub that is also a caveat is that it is transactional: notifications are only delivered when the transaction commits. That is great for production code, but it makes tests that run inside a rolled-back transaction impossible.
The best thing is that it can also track deletes and updates, unlike simple polling of the database with SELECT queries.
TLDR as you didn't even bother to Google:
Allows you to create application-defined locks. Useful for building job queues and other systems where row locks aren't ideal.
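A quick sketch, assuming the lock key is something like a job id that the application picks:

-- Non-blocking attempt: returns true if this session got the lock.
SELECT pg_try_advisory_lock(42);

-- ... do the work only one worker should do for job 42 ...

-- Release it when done (session-level advisory locks are otherwise
-- held until the connection closes).
SELECT pg_advisory_unlock(42);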
Ah, that makes more sense. Thanks!
You could just do a SELECT with a LIMIT and then UPDATE that row by its primary key...
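Roughly like this, against a hypothetical jobs table; FOR UPDATE SKIP LOCKED is the usual addition so two concurrent workers can't grab the same row:

UPDATE jobs
SET status = 'running'
WHERE id = (
  SELECT id FROM jobs
  WHERE status = 'queued'
  ORDER BY id
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
RETURNING id;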