That kind of thinking is probably what spawned the whole "do the join in the app, not the database" anti-pattern. The truth is, the database is going to be much faster at performing a join than loading the contents of two tables into your app and iterating. If you need the data that results from doing a join, doing a join is the best way to get it.
Unless you already have your entire database in-memory in your app, that is. In that case, why do you have a database?
What you're missing here is "denormalization" -- think many-to-many relationships. You can either use a JOIN through a junction table in a "normalized" database, or keep managing the result of that join yourself in application code. Loading entire tables into application code very seldom has anything to do with it...
A more realistic example: do you get Alice's pets by doing a JOIN across the tables Person, Pet, and PetOwnedByPerson ("SQL"), or by having an array column "pets" on Person ("NoSQL")?
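To make that concrete, here's a rough Postgres-flavored sketch of both shapes (the table and column names are just the ones from the example, nothing canonical):

    -- Normalized ("SQL"): a junction table links people to pets.
    CREATE TABLE Person (id int PRIMARY KEY, name text);
    CREATE TABLE Pet (id int PRIMARY KEY, name text);
    CREATE TABLE PetOwnedByPerson (
        person_id int REFERENCES Person(id),
        pet_id    int REFERENCES Pet(id),
        PRIMARY KEY (person_id, pet_id)
    );

    -- Alice's pets, via two joins:
    SELECT pet.name
    FROM Person
    JOIN PetOwnedByPerson po ON po.person_id = Person.id
    JOIN Pet pet ON pet.id = po.pet_id
    WHERE Person.name = 'Alice';

    -- Denormalized ("NoSQL" style): the pets are embedded on Person,
    -- so reads are a single-table lookup but updates touch every copy.
    CREATE TABLE PersonDenorm (id int PRIMARY KEY, name text, pets text[]);
    SELECT pets FROM PersonDenorm WHERE name = 'Alice';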
I don't think materialized views are an answer to this problem, precisely because Postgres materialized views do not automatically update when the underlying data has changed.
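To spell that out, a minimal sketch (assuming the hypothetical pet tables from upthread): the view is a one-time snapshot, and it silently goes stale until you explicitly refresh it.

    -- Denormalized derived table over the normalized base tables.
    CREATE MATERIALIZED VIEW person_pets AS
    SELECT p.name AS person, array_agg(pet.name) AS pets
    FROM Person p
    JOIN PetOwnedByPerson po ON po.person_id = p.id
    JOIN Pet pet ON pet.id = po.pet_id
    GROUP BY p.name;

    -- Later inserts/updates to Person/Pet are NOT reflected until:
    REFRESH MATERIALIZED VIEW person_pets;
    -- (REFRESH ... CONCURRENTLY avoids blocking reads, but requires
    -- a unique index on the view.)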
Sure, you have to write the code to do the updates. But "materialized views" are a technique long used even in SQL databases that had no automated support for them: denormalized derived tables, maintained for regular, recurring queries while the DB retains normalized base tables, are a common thing. (Heck, in Enterprise environments I've more than once run into a setup where custom maintenance of jury-rigged materialized views is used in a database system with strong materialized-view support, simply because the system has been around and maintained longer than the server software has had that support.)
It's a term used frequently in NoSQL land to explain a key difference to people coming from SQL. In SQL land, normalizing your data is still the canonical thing to do, and I don't recall anyone in academia officially conceding that denormalizing ever has its place... but in industry, the realities of use cases and performance have made it common practice. I just don't know that it's a standard tool handed to graduating software devs and DBAs.
The link I posted above shows a common SQL-land usage: dimensional modeling in a star schema, which is very widely used in BI projects.
There are actually two competing philosophies on data warehousing (Inmon and Kimball), but I've only ever used Kimball's method, which favors denormalization.
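For anyone who hasn't seen one, a rough sketch of a Kimball-style star schema (all the names here are invented for illustration): a central fact table of measures, joined to wide, deliberately denormalized dimension tables.

    -- Denormalized dimension: category and brand live right on the
    -- product row instead of in their own 3NF tables ("snowflaking").
    CREATE TABLE dim_product (
        product_key  int PRIMARY KEY,
        product_name text,
        category     text,
        brand        text
    );
    CREATE TABLE dim_date (
        date_key  int PRIMARY KEY,
        full_date date,
        year      int,
        quarter   int
    );
    -- Fact table: foreign keys into the dimensions plus numeric measures.
    CREATE TABLE fact_sales (
        date_key    int REFERENCES dim_date(date_key),
        product_key int REFERENCES dim_product(product_key),
        units_sold  int,
        revenue     numeric
    );

    -- A typical BI query: aggregate the facts, slice by dimensions.
    SELECT d.year, p.category, sum(f.revenue) AS revenue
    FROM fact_sales f
    JOIN dim_date d ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.year, p.category;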
Yes, and it's almost always a pre-emptive optimization that didn't need to be done. Oftentimes it even makes things slower, because the person doing it didn't understand what they were doing; they just heard "denormalizing makes it fast".
It really depends. Sometimes it's possible to slurp relatively small subsets in. Sometimes you've already got the data loaded in memory for other reasons. Sometimes (especially in long-running ETL stuff) you can come up with a way better query plan than a query-by-query approach could achieve.
But yeah, usually, don't be clever and don't spaff the contents of the database across a network just to do a join.
I think the "do the join in the app" anti-pattern was developed by this group of people who think that in app the programming environment they know and that it's working in their dev environment and that is all that needs to be considered to make a decision.
I would generalize what you say even further: The database is faster at most of the data crunching you need.
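For example, an aggregation like the following (over a hypothetical orders table) returns one row per customer; pulling the raw rows into the app just to sum them would ship vastly more data over the wire.

    -- The server scans, filters, and sums in place.
    SELECT customer_id, sum(total) AS lifetime_value
    FROM orders
    WHERE created_at >= now() - interval '1 year'
    GROUP BY customer_id;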