
PostgreSQL 11 Released - MarkusWinand
https://www.postgresql.org/about/news/1894/
======
Doches
Previous HN discussion on the beta
([https://news.ycombinator.com/item?id=17144221](https://news.ycombinator.com/item?id=17144221))
and pre-releases
([https://news.ycombinator.com/item?id=18043425](https://news.ycombinator.com/item?id=18043425)).

~~~
dang
Also
[https://news.ycombinator.com/item?id=17119785](https://news.ycombinator.com/item?id=17119785),
[https://news.ycombinator.com/item?id=17864837](https://news.ycombinator.com/item?id=17864837),
and
[https://news.ycombinator.com/item?id=18150727](https://news.ycombinator.com/item?id=18150727).

------
thegabez
Major enhancements in PostgreSQL 11 include:

* Improvements to partitioning functionality, including:

-- Add support for partitioning by a hash key

-- Add support for PRIMARY KEY, FOREIGN KEY, indexes, and triggers on
partitioned tables

-- Allow creation of a “default” partition for storing data that does not
match any of the remaining partitions

-- UPDATE statements that change a partition key column now cause affected
rows to be moved to the appropriate partitions

-- Improve SELECT performance through enhanced partition elimination
strategies during query planning and execution

* Improvements to parallelism, including:

-- CREATE INDEX can now use parallel processing while building a B-tree index

-- Parallelization is now possible in CREATE TABLE ... AS, CREATE
MATERIALIZED VIEW, and certain queries using UNION

-- Parallelized hash joins and parallelized sequential scans now perform
better

* SQL stored procedures that support embedded transactions

* Optional Just-in-Time (JIT) compilation for some SQL code, speeding evaluation of expressions

* Window functions now support all framing options shown in the SQL:2011 standard, including RANGE distance PRECEDING/FOLLOWING, GROUPS mode, and frame exclusion options

* Covering indexes can now be created, using the INCLUDE clause of CREATE INDEX

* Many other useful performance improvements, including the ability to avoid a table rewrite for ALTER TABLE ... ADD COLUMN with a non-null column default

[https://www.postgresql.org/docs/11/static/release-11.html](https://www.postgresql.org/docs/11/static/release-11.html)
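A few of the items above can be sketched in SQL; the table and column names here (`events`, `user_id`) are made up for illustration:

```sql
-- Hash partitioning (new in 11): rows are routed by hashing the key.
CREATE TABLE events (
    user_id    bigint NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY HASH (user_id);

CREATE TABLE events_p0 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE events_p1 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE events_p2 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE events_p3 PARTITION OF events FOR VALUES WITH (MODULUS 4, REMAINDER 3);

-- Default partitions (for LIST/RANGE partitioning, not hash) catch rows that
-- match no other partition:
-- CREATE TABLE measurements_default PARTITION OF measurements DEFAULT;

-- Covering index: INCLUDE columns are stored in the index leaf pages only,
-- enabling index-only scans without making them part of the search key.
CREATE INDEX ON events (user_id) INCLUDE (created_at);
```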

~~~
chadash
A small but useful feature is that you can now use the keywords _quit_ or _exit_
to leave the psql command line. A helpful change for those of us who forget that
the existing command is "\q".

EDIT: ctrl+D also works, just be careful not to press twice, since doing so
will exit psql and then exit the shell altogether

~~~
c0l0
In bash, you can use `set -o ignoreeof` in your interactive shells so that
receiving EOF/^D does not trigger a logout.
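For example, in ~/.bashrc (bash assumed; the variable-based variant is a bash feature):

```shell
# A lone Ctrl+D no longer exits the shell; type "exit" (or "quit" in psql):
set -o ignoreeof
# Alternatively, tolerate up to N consecutive EOFs before bash exits:
export IGNOREEOF=2
```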

~~~
bartlomieju
Thanks, that's super useful! I found myself accidentally quitting terminal
multiple times by pressing Ctrl+D one too many times.

------
bsaul
Does anyone have an idea of what makes PostgreSQL both such a robust and
constantly evolving db? I'm amazed at the pace at which they're adding deep
features (such as JIT) without breaking anything release after release, in an
open source environment.

Usually this kind of thing is the consequence of a good architecture, but I
wonder if anyone has a more in-depth explanation of what in the architecture
makes it that good.

~~~
devilmoon
Not my personal experience, but one of my professors is trying to get his
patch approved into Postgres for an upcoming release; since his changes are
very deep and touch the core (he wants to add support for temporal data), he
told me the whole process will take years and the whole community will examine
the changes before anything is added.

I think that pretty much explains it: Cutting edge features are proposed way
earlier than commercial DBs start thinking about implementing them, and the
community hones them over years before they are released to the mainstream
public.

~~~
tomnipotent
> Cutting edge features are proposed way earlier than commercial DBs

This is not true. SQL Server & Oracle already have pretty much every feature
just released in 11 and have had most of them for many years.

~~~
orf
Like native JSON support?

~~~
zeroname
That's a "hipster" feature, not something that the "enterprise" customers of
these products are asking for.

~~~
gleenn
Not sure if you're trying to be funny about the hipster thing.

I've seen the json features used at multiple serious organizations. It turned
what would have otherwise been unnecessarily complicated queries into really
reasonable queries in our ETL pipeline.

~~~
derefr
It's a difference of mindset.

Postgres includes many features that increase the number of simple _use-cases_
for the existing data in their database, thus allowing you to replace a
Postgres _complement_ (like Mongo) with Postgres for that use-case.

"Enterprise" DBMSes like Oracle and MSSQL, meanwhile, don't focus so much on
expanding the use-cases of their offering, as they do on expanding the number
of ways a sysadmin can take the _existing_ use-cases and make them scale
better, with less manual maintenance. Oracle and Microsoft allow you to
replace their _ecosystem of tools and extensions_ with a DBMS-internal feature
for the given sysadmin need.

So Postgres has an expanding ecosystem of tooling and extensions, but is
slowly eating its complements; while Oracle and MSSQL have an expanding space
of complementary offerings, but are slowly eating their own product
ecosystems.

Or, to put that another way: Postgres focuses more on making life easier for
people who write SQL; Oracle and Microsoft focus more on making life easier
for the DBAs who manage the DBMS cluster that people are running SQL against.

------
rb666
_The inclusion of the keywords "quit" and "exit" in the PostgreSQL command-
line interface to help make it easier to leave the command-line tool_

YES, at last. This beats parallel queries any day.

~~~
overcast
Seriously, that alone was very disorienting jumping into PGSQL for the first
time. The command line is definitely not as user-friendly as MySQL's.

~~~
ibotty
There are widely different opinions about that. I have never heard someone
proclaim that MySQL has a great command line, while I have heard that about
PostgreSQL many times.

I am sure it's more a matter of familiarity.

~~~
briffle
I came to PostgreSQL from Oracle. The ability to hit the up arrow in the CLI
was groundbreaking :)

------
enraged_camel
Serious question: when you need a relational database, are there any technical
reasons why you would use anything other than Postgres these days?

~~~
mrcarruthers
One thing that Postgres does not do well is row churn. If you have a table
that has rows that are constantly added/deleted/updated, Postgres tends to
bloat pretty quickly because of the way it handles updates. Say what you will
about using a relational db for that kind of data, but if you need to,
Postgres isn't a great choice.

~~~
pgaddict
That really depends.

You're right that UPDATEs may be an issue (because we handle them essentially
as DELETE+INSERT). Generally speaking, row churn in the table alone is not a
major issue - the space is easy to clean up by vacuum, and it will be reused
for new data. And you can limit the amount of bloat by tweaking the
autovacuum parameters.

What's more painful is bloated indexes (e.g. due to UPDATEs that modify
indexed columns), because that's much harder / more expensive to get rid of.

The thing is - this is part of the MVCC design, and it has some significant
advantages too. It's not like the alternative approaches have no downsides.
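For example, autovacuum can be tuned per table via storage parameters; the table name and thresholds below are illustrative, not recommendations:

```sql
-- Vacuum once ~1% of rows are dead instead of the default 20%, so cleanup
-- runs more often but each run has less to do and bloat stays bounded.
ALTER TABLE hot_table SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_cost_delay   = 2
);
```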

~~~
emddudley
In some applications, at least, vacuuming is not sufficient to deal with row
churn.

I use PostgreSQL in an embedded device. There is a high insertion rate, and
eventually when the disk starts to get full I need to get rid of old rows.

Using plain DELETE and VACUUM does not work. The deletes aren't fast enough to
keep up with the inserts, and vacuuming reduces performance to the point that
I have to drop data that is waiting to be inserted. This is on a high
performance SSD and I've tuned postgresql.conf. (Bigger/better hardware is not
possible in my application).

Instead, I think partitions with DROP PARTITION are the only way to handle
high volume row churn. Dropping a partition is practically instant and incurs
no vacuum penalty.
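A sketch of that pattern with PostgreSQL 11 declarative range partitioning (table and partition names hypothetical):

```sql
CREATE TABLE readings (
    ts    timestamptz NOT NULL,
    value double precision
) PARTITION BY RANGE (ts);

CREATE TABLE readings_2018_10 PARTITION OF readings
    FOR VALUES FROM ('2018-10-01') TO ('2018-11-01');

-- Reclaiming space is a catalog operation plus a file unlink,
-- with no dead rows left behind for vacuum to process:
DROP TABLE readings_2018_10;
-- or detach first to keep the data around as a standalone table:
-- ALTER TABLE readings DETACH PARTITION readings_2018_10;
```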

~~~
pgaddict
Yeah, DROP PARTITION is definitely going to be much more efficient than DELETE
+ cleanup. No doubt about that.

Not sure what postgresql.conf tuning you've tried, but in general we recommend
making autovacuum more frequent while performing the cleanup in smaller chunks.
Also, batching inserts usually helps a lot. But maybe you've already tried all
that. There's definitely a limit - a balance between ingestion and cleanup.

------
netcraft
I've used lots of different databases over the years and PG is my pick. Thanks
to all who were involved, this looks like a great release.

------
Heliosmaster
Congratulations for the release! PostgreSQL is, imho, a great example of how
open-source software can be stable and performant. Kudos to the team

------
drej
I know we got 11 just today, but I can't wait for 12 already! Rumours about
alternative storage systems are extremely exciting - the idea of having a
columnar materialised view, for instance (just guessing, not sure what the
actual implementation will be like).

I can't wait to throw out all the columnar databases and the ETLs I have to
support them, all just for a few queries.

~~~
munk-a
pg12 may be removing the (mandatory) optimization fences in CTEs. This is
really exciting to me, as it'll allow much more maintainable complex queries
without a performance hit.

As a general rule of programming, I never want syntax sugaring or a chosen
expression pattern to impact runtime performance. I have found it's optimal to
write expressive code first and performant code second - and if there's a
trivial expression transformation between the two a compiler or interpreter
better be equipped to do it for you.
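To see what the fence costs today: in 10/11 a CTE is always materialized, so the planner cannot push a predicate into it. A contrived illustration (table name made up):

```sql
-- The WHERE clause is NOT pushed into the CTE: big_table is scanned in full
-- and materialized before the filter is applied.
WITH recent AS (
    SELECT * FROM big_table
)
SELECT * FROM recent WHERE id = 42;

-- Written as a subquery, the planner can flatten it and use an index on
-- big_table.id, even though the two queries mean the same thing.
SELECT * FROM (SELECT * FROM big_table) sub WHERE id = 42;
```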

~~~
smilliken
I'll play devil's advocate: the optimization fence on CTEs is a godsend. The
query planner is often the wrong kind of smart. A big table can get
autovacuumed in the middle of the night, cause the planner to use a new query
plan, and a query that used to be instantaneous takes minutes.

If you know the query plan you want, you can usually force it using CTEs.

Really I'd be happier without a query planner at all.

~~~
munk-a
Sorry, to clarify the "(mandatory)" in the comment above: pg12 will be making
CTEs optimization fences only optionally, so people who want to hand-plan
their query still can, but people who use CTEs for organization and clarity
will no longer be forced into a form dictated by performance needs.

I think being happy with no query planner at all is a bit extreme. There are
times when queries need to be tuned to force early data culling or adherence
to indexes, but I find those times to be pretty rare, and we generally have to
review all the instances of our manual optimization whenever we update
Postgres to see which ones the query planner can now do better. This change
would still allow you to sidestep the query planner, but would also open up
expressive CTEs for use in performance-sensitive queries.

~~~
anarazel
I think "will be making" unfortunately includes a bit of wishful thinking.
It's far from clear we'll have a patch that's mature enough by the time the
feature freeze is there _and_ that also has the necessary support.

------
egeozcan
Maybe I'm just spoiled from how fast they implement all these features but I
wish that, in the near future, table inheritance would also get some love. In
its current state, inheritance needs a lot of manual checks[0]:

> unique, primary key, and foreign key constraints are not inherited.

[0]: [https://www.postgresql.org/docs/11/static/ddl-inherit.html](https://www.postgresql.org/docs/11/static/ddl-inherit.html)

------
makmanalp
I'm very excited about the JIT query compilation stuff finally making it into
postgres! A big win for OLAP queries, although it looks like they compile
specific expressions within the query rather than the whole query operator,
but that could change. I'm very curious to see how this interacts with the
query planner (e.g. when it's worth the overhead or not).
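In 11 the behaviour is visible through the `jit` GUC and EXPLAIN (a sketch; whether compilation actually kicks in depends on cost thresholds such as `jit_above_cost`, and the table name here is made up):

```sql
SET jit = on;
-- When JIT was used, EXPLAIN ANALYZE output includes a "JIT" section with
-- compilation timings for the expressions it compiled:
EXPLAIN (ANALYZE) SELECT sum(x * 2 + 1) FROM big_table;
```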

~~~
anarazel
Right, it's "just" expressions and tuple deforming right now. Especially the
former required significant refactoring (landing in v10), but that's nothing
compared to the refactoring required to do proper whole-query compilation. I'm
working on the necessary changes (I have posted a prototype), which have
independent advantages as well.

There's planning logic to decide whether JIT is worthwhile, but that's
probably the weakest part right now.

~~~
makmanalp
Right - I don't mean to minimize the effort at all, it must have been a
massive undertaking already! Thank you for all your work!

------
SnowingXIV
PostgreSQL has been such a joy to use for both hobby and professional
projects. It's so reliable. I wonder when Heroku will support 11 and I'm
curious if upgrading from 10 to 11 with heroku using the simple pg_upgrade
would break anything. If it's just a pure performance increase, I'm looking
forward to doing it to a few production applications.

------
generalpf
I'm really surprised pgsql procedures weren't already able to start, commit,
and rollback transactions. Well done devs!
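The new CREATE PROCEDURE can commit mid-execution, which a function cannot. A minimal sketch (the `audit_log` table is hypothetical):

```sql
CREATE PROCEDURE purge_in_batches() LANGUAGE plpgsql AS $$
BEGIN
    LOOP
        DELETE FROM audit_log
        WHERE id IN (SELECT id FROM audit_log ORDER BY id LIMIT 1000);
        EXIT WHEN NOT FOUND;
        COMMIT;  -- release locks and make each batch durable before the next
    END LOOP;
END;
$$;

-- Procedures are invoked with CALL rather than SELECT:
CALL purge_in_batches();
```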

------
flyinglizard
Piggybacking for a Postgres question: I have thousands of legacy software
instances all around the world, running Postgres with very marginal internet
connections (think IoT). I’d like to continuously get all their data into my
cloud. Should I use Postgres built in async replication for that?

~~~
jeltz
Hard to say for sure without knowing more, but I would probably build my own
custom protocol for syncing this, because replication (both binary and
logical) is built on the assumption that you have a reasonably constant
connection from the master to the replica. If not, you will start to build up
large amounts of write-ahead log.

~~~
Cthulhu_
Never reinvent the wheel before you've done your research.

Colleagues of mine successfully employed CouchDB in a situation where devices
were offline for long stretches of time (running on tablets in airplanes);
once they had internet again they would reliably start syncing data with the
main database. This was a number of years ago though; I haven't heard anything
about CouchDB since.

~~~
megous
It works perfectly with intermittent connectivity, restarts, etc. That will
not be a problem.

I'm not sure about thousands of nodes syncing to a single replica though. That
may be a bit over the top.

~~~
pgaddict
Who says you need to feed it directly to a replica? Logical replication
provides infrastructure for decoding changes, and you may fetch them any way
you want / however often you want / feed them wherever you want (a file,
another database, ...).

It would be trivial to write something that connects to a bunch of machines
regularly, fetches the decoded increment, and feeds it somewhere (say, to a
single database over a shared connection).

See test_decoding / pg_recvlogical for examples.
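With the built-in test_decoding plugin, that looks roughly like the following (the slot name is made up):

```sql
-- On each edge database: create a slot once; the server retains WAL for it
-- until the changes are consumed, so consume regularly.
SELECT pg_create_logical_replication_slot('cloud_sync', 'test_decoding');

-- From the collector, periodically pull the increment since the last call;
-- get_changes consumes (advances) the slot, peek_changes does not:
SELECT lsn, xid, data
FROM pg_logical_slot_get_changes('cloud_sync', NULL, NULL);
```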

------
truth_seeker
Congratulations and big thanks to the PG community & contributors. It has been
my choice for all kinds of systems: OLTP, OLAP and data warehousing. It just
gets better with PG 11.

I would love to see native column store format, distributed sharding support
and in memory tables in future releases.

------
Nelkins
Does anybody know where I can find an example of hash partitioning? I see it
mentioned here in the docs but am having trouble finding an example:
[https://www.postgresql.org/docs/11/static/ddl-partitioning.h...](https://www.postgresql.org/docs/11/static/ddl-partitioning.html)

edit: Found an example here: [https://pgdash.io/blog/partition-postgres-11.html](https://pgdash.io/blog/partition-postgres-11.html)

Followup question: Is there a way to re-partition? Say, if your data grows and
you want to split the data up further?

~~~
amitlan
The documentation of creating partitions [1] says this:

"When creating a hash partition, a modulus and remainder must be specified.
The modulus must be a positive integer, and the remainder must be a non-
negative integer less than the modulus. Typically, when initially setting up a
hash-partitioned table, you should choose a modulus equal to the number of
partitions and assign every table the same modulus and a different remainder
(see examples, below). However, it is not required that every partition have
the same modulus, only that every modulus which occurs among the partitions of
a hash-partitioned table is a factor of the next larger modulus. This allows
the number of partitions to be increased incrementally without needing to move
all the data at once. For example, suppose you have a hash-partitioned table
with 8 partitions, each of which has modulus 8, but find it necessary to
increase the number of partitions to 16. You can detach one of the modulus-8
partitions, create two new modulus-16 partitions covering the same portion of
the key space (one with a remainder equal to the remainder of the detached
partition, and the other with a remainder equal to that value plus 8), and
repopulate them with data. You can then repeat this -- perhaps at a later time
-- for each modulus-8 partition until none remain. While this may still
involve a large amount of data movement at each step, it is still better than
having to create a whole new table and move all the data at once."

[1] [https://www.postgresql.org/docs/current/static/sql-createtab...](https://www.postgresql.org/docs/current/static/sql-createtable.html#SQL-CREATETABLE-PARTITION)

------
shroom
Looks like a great release even to a Postgres noob like myself. I only
recently started using Postgres, but I really enjoy it and it inspires me to
dust off my database skills and learn more SQL.

I'd like to take the opportunity and ask if anyone has any recommendations or
"must have" settings to the Postgres-terminal. Preferably on MacOSX.

For example, I found the formatting acting kind of weird in some cases when
selecting multiple columns or writing long queries. I figure there must be
some things you set up at the start and then can not live without...

Any other tips are also welcome! Thanks

~~~
jeltz
My only settings are setting extended query mode to automatic to handle output
with many columns and to turn on query timing. But it is common for people to
also change the NULL symbol from empty string to something else or to have
unicode borders.

My .psqlrc:

        \set QUIET 1
        \x auto
        \timing on
        \set QUIET 0

------
romed
Have fun dumping and reloading your entire database.

~~~
jeltz
You can use pg_upgrade. It has been stable for about 10 years now.

