
New Features Coming in PostgreSQL 10 - ioltas
http://rhaas.blogspot.com/2017/04/new-features-coming-in-postgresql-10.html
======
avar
This bit about ICU support vs. glibc:

    
    
        > [...] Furthermore, at least on Red Hat, glibc regularly whacks
        > around the behavior of OS-native collations in minor releases,
        > which effectively corrupts PostgreSQL's indexes, since the index
        > order might no longer match the (revised) collation order.  To
        > me, changing the behavior of a widely-used system call in a
        > maintenance release seems about as friendly as locking a family
        > of angry raccoons in someone's car, but the glibc maintainers
        > evidently don't agree.
    

This refers to the PostgreSQL devs wanting to make their index order a
function of strxfrm() output and to not have it change when glibc updates,
whereas some on the glibc list think strxfrm() output should only be fed to
the likes of strcmp() within the same process:

    
    
        > The only thing that matters about strxfrm output is its strcmp
        > ordering.  If that changes, it's either a bug fix or a bug
        > (either in the code or in the locale data).  If the string
        > contents change but the ordering doesn't, then it's an
        > implementation detail that is allowed to change.
    

-- [https://sourceware.org/ml/libc-alpha/2015-09/msg00197.html](https://sourceware.org/ml/libc-alpha/2015-09/msg00197.html)
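
The distinction is easy to demonstrate with Python's binding to the C library's strxfrm/strcoll (a sketch; the exact key bytes vary by platform and libc version, which is precisely the point of the dispute):

```python
import functools
import locale

# Use the C locale so the sketch is portable; the point holds for any locale.
locale.setlocale(locale.LC_COLLATE, "C")

words = ["banana", "Apple", "cherry"]

# Guaranteed: sorting by strxfrm keys matches strcoll ordering,
# within one process and one C library version.
assert sorted(words, key=locale.strxfrm) == \
    sorted(words, key=functools.cmp_to_key(locale.strcoll))

# NOT guaranteed: the bytes of the keys themselves. Persisting the keys
# (as a database index effectively does) and comparing stored keys
# against keys produced by a newer libc is what breaks.
```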

~~~
JoachimSchipper
Florian Weimer's reply is also interesting:

"Why do you think that? I don't see this documented anywhere, and I doubt it
is something many readers of the C standard, the man page, or the glibc manual
would expect.

The manual suggests to store the strxfrm output and use it for sorting. I
expect that some applications put it into on-disk database indexes as a
result. This will lead to subtle breakage on glibc updates.

(The larger problem is that there are _definitely_ databases out there which
use B-tree indexes in locale collation order, which break in even more subtle
ways if we make minor changes to the collation order.)"

~~~
ajross
Which manual suggests storing the output of strxfrm? The glibc man page
doesn't seem to.

I don't know that this is resolvable. The documented behavior of strxfrm() is
just about its output properties. Improvements to the transformation algorithm
would be expected to be made, if it's improvable.

If a database needs this to be static over time it needs to pick a
_particular_ transformation algorithm and specify it exactly, not just rely on
whatever the C library happens to provide.

I mean, not only are PostgreSQL locale-sorted indexes not portable across
glibc releases. They aren't portable across any other system change either: no
moving between distros or doing distro upgrades, etc. Those are all
misfeatures probably worth fixing.

~~~
petergeoghegan
BTW, the new amcheck tool, available in Postgres 10, lets you validate that an
index is consistent with its designated sort order (B-Tree operator class). At
least you now have some way of detecting the kind of inconsistency you
describe.

I wrote amcheck, and maintain a version targeting earlier releases of
PostgreSQL on GitHub:
[https://github.com/petergeoghegan/amcheck](https://github.com/petergeoghegan/amcheck)
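
For reference, a minimal amcheck session looks roughly like this (the index name is hypothetical):

```sql
-- Install the extension, then verify that the B-Tree invariants hold
-- for a given index, e.g. that items appear in operator-class order.
CREATE EXTENSION amcheck;

-- Checks invariants within the index itself:
SELECT bt_index_check('my_table_pkey'::regclass);

-- Stricter variant that also checks parent/child relationships
-- (requires heavier locking):
SELECT bt_index_parent_check('my_table_pkey'::regclass);
```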

------
fiatjaf
Ok, I'm not a database manager for enormous projects, so these changes may be
great, but I don't understand them and don't care about them. Postgres is
already the most awesome thing on Earth to me.

Still, if my opinion counts I think SELF-UPDATING MATERIALIZED VIEWS should be
the next priority.

~~~
petepete
How do you mean? Couldn't you use a trigger to update the view?

~~~
okket
A trigger on what? Every update, insert, delete, etc.? On every table in the
view?

Even if that is possible, it may be a major performance killer. This has to be
done internally, I think.

~~~
jeltz
I am not sure it necessarily would be that bad. After all, foreign keys are
implemented with triggers, and they are usually fast enough. You just need to
write trigger functions that are fast enough.
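
The trigger-based workaround being debated here can be sketched like this (table and view names are hypothetical); note that each statement on the base table pays for a full refresh, which is the performance concern:

```sql
CREATE MATERIALIZED VIEW order_totals AS
  SELECT customer_id, sum(amount) AS total
  FROM orders
  GROUP BY customer_id;

-- Statement-level trigger: one full refresh per INSERT/UPDATE/DELETE
-- statement on the base table.
CREATE FUNCTION refresh_order_totals() RETURNS trigger AS $$
BEGIN
  REFRESH MATERIALIZED VIEW order_totals;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER order_totals_refresh
  AFTER INSERT OR UPDATE OR DELETE ON orders
  FOR EACH STATEMENT EXECUTE PROCEDURE refresh_order_totals();
```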

------
jacques_chester
I deeply appreciate the great care that Postgres committers take in writing
their merge messages.

I think of it as a sign of respect for future developers to take the time to
write a clear account of what has happened.

~~~
lobster_johnson
Postgres is one of the few projects that still use a strict patch-oriented
development process that's based almost entirely around mailing-list
communication.

While core team members can commit directly to the repo, everyone else must
submit their code changes for review to the pgsql-hackers mailing list as a
clean, self-contained patch [1], where it's discussed and considered for
inclusion. An accepted patch might be committed right away, or it will be
queued up for the next scheduled "commitfest" [2], when patches are reviewed
and finally committed to mainline. (I don't know how the commitfest interacts
with git exactly; the commitfest database doesn't even link to git, only to
email discussions.)

From the outside it seems a bit antiquated, but it's apparently been working
well for them. The Postgres team is a pretty conservative bunch; they only
switched from CVS to git in late 2010, for example.

They also _really_ care about code quality, getting the design right early,
and covering all possible edge cases. As a result, Postgres is solid, clean,
has unusually few legacy oddities, and almost never any subtle, surprising
breaking changes. If you read the MySQL manual, it's absolutely _littered_
with sloppy little breakages throughout its history: Like how, until
5.0.something, when comparing a "date" value with a "datetime" value, the time
portion would be silently ignored and ('2017-04-08 14:04' = '2017-04-08')
would return true; but they fixed that, and broke a lot of client code because
they didn't stop to realize that a lot of developers depended on that
behaviour.

[1]
[https://wiki.postgresql.org/wiki/Submitting_a_Patch](https://wiki.postgresql.org/wiki/Submitting_a_Patch)

[2] [https://commitfest.postgresql.org](https://commitfest.postgresql.org)

~~~
lathiat
> If you read the MySQL manual, it's absolutely littered with sloppy little
> breakages throughout its history: Like how, until 5.0.something, when
> comparing a "date" value with a "datetime" value, the time portion would be
> silently ignored and ('2017-04-08 14:04' = '2017-04-08') would return true;
> but they fixed that, and broke a lot of client code because they didn't stop
> to realize that a lot of developers depended on that behaviour.

This is an interesting comment for two reasons. Firstly, because a lot of
people also complain about MySQL's archaic defaults, which often stay too long
because of upgrade concerns (though fortunately they are fixing a lot of them,
either already or for MySQL 8.0 - hooray).

But also because it speaks volumes, in my opinion, about the MySQL
documentation that these are documented in the first place. I worked at MySQL
for 9 years, and while it was always clear our manual was a good source of
information, now that I am working on Ubuntu & OpenStack it is painfully
obvious just how good the MySQL documentation team and processes were compared
to many other projects. Even just the version ChangeLog.

I'm not saying other projects don't get it right (and have no opinion at all
about postgresql's documentation state), but MySQL seems to get it pretty
right in general.

~~~
lobster_johnson
But MySQL's problem is that those "archaic defaults" shouldn't have happened
in the first place.

No matter how good MySQL is at documenting its bugginess, weirdness,
flakiness and overall history of legacy warts, the point was that PostgreSQL
eliminates pretty much all of this by promoting, at every single stage of the
development process, the same strict, uncompromising QA principles. By doing
this, legacy behaviour generally disappears.

Postgres isn't bug-free, of course. But they take great care to not be
continually chased by a tsunami of technical debt. If you build a house on a
crappy foundation, you get a crappy house, so it's a good idea to spend time
on the foundation before building the house. The Postgres team spent _years_
on the foundation, before building the higher-level parts, and the rewards are
obvious.

Meanwhile, MySQL has spent _years and years_ slowly mopping up tech debt.
Things have gotten a lot better with the tighter semantics, such as preventing
"February 29th" from being inserted, rejecting "0000-00-00" as a date, no
longer silently ignoring data coercion errors, and so on. But those things
shouldn't have happened in the first place if the developers had been better
at QA. So while you're right that the documentation is decent at describing
various legacy semantics, it also encodes a history of carelessness that makes
for rather embarrassing reading.

By the way, the date/datetime regression I mentioned isn't in the
documentation at all. It's in their bug tracker.

------
qaq
Even a single feature from the list would make 10 an amazing release, all of
them together is just unbelievable. Very happy we are using PG :)

------
iEchoic
I'm so excited for table partitioning. I use table inheritance in several
places in my current project, but have felt the pain of foreign key
constraints not applying to inherited children. Reading about table
partitioning, I'm realizing that this is a much better fit for my use case.

Postgres continues to amaze me with the speed at which they introduce the
right features into such a heavily-used and production-critical product.
Thanks Postgres team!

~~~
amitlan
Unfortunately, foreign keys won't be supported right away.

Read about the new feature and its limitations here:
[https://www.postgresql.org/docs/devel/static/ddl-
partitionin...](https://www.postgresql.org/docs/devel/static/ddl-
partitioning.html#ddl-partitioning-declarative)
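
For context, the new declarative syntax looks roughly like this (names hypothetical); neither the partitioned table nor its partitions can currently be the target of a foreign key:

```sql
CREATE TABLE measurements (
    logdate  date NOT NULL,
    reading  numeric
) PARTITION BY RANGE (logdate);

CREATE TABLE measurements_2017 PARTITION OF measurements
    FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');

-- Rows are routed to the matching partition automatically:
INSERT INTO measurements VALUES ('2017-04-08', 42.0);
```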

~~~
iEchoic
Thanks, I hadn't read this. That's too bad, hopefully we'll see that in the
future (if it's technically possible at all?). That'd be a huge feature for
me.

------
lazzlazzlazz
How is Postgres so consistently the best open-source DB project from features
to documentation? It's unreal.

~~~
int_19h
Not just a DB project, either. I'd say it's one of the best executed (in a
very broad sense of the word) open source projects around, in general.

From an end-user perspective, they have stable, quality releases with a
predictable cycle and subsequent maintenance releases. They have great
documentation - some of the best in the industry, let alone open source.
Things generally work as you'd expect them to, and when they don't (e.g. for
historical or implementation reasons), you get clear and convincing
explanations. And so on.

I haven't seen their developer side, but based on other people's feedback,
it's also good - high quality bar for code, stringent review process etc. More
importantly, they seem to be making the right (= leading to more stable
quality releases with great features) technical decisions consistently, which
to me is a hallmark of a very well run team.

I also can't remember any publicized "drama" around Postgres, either on the
inside (dev disagreements etc), or between the team and the users. It looks
like everyone's happy, or at least happy enough.

I don't know what the magic sauce is here, but it feels like many other open
source projects could learn a lot from the Postgres team and community.

------
jordanthoms
Will DDL replication for the logical replication be landing in 10 or later?

We have some use cases where logical replication would be very helpful, but
keeping the schema in sync manually seems like a pain - will there be a
documented workaround if DDL replication doesn't make it in?

------
djcj88
I did read the article, but I can't find any mention of addressing the "Write
amplification" issue as described by Uber when they moved away from postgres.
[https://eng.uber.com/mysql-migration/](https://eng.uber.com/mysql-migration/)
I had heard talk on Software Engineering Daily that this new major revision
was supposed to address that.

Is this issue resolved by the new "Logical replication" feature? It doesn't
seem directly related, but it seems like maybe that is what he is referring to
in this blog post?

~~~
snuxoll
Write amplification is a result of PostgreSQL's decision not to use clustered
indexes; there's not much that can be done to avoid it without a massive
redesign of the storage engine - though there are patches out there to reduce
the penalty in some cases. In all reality, though, Uber wanted a key-value
store and not an RDBMS. MySQL was a better choice for this since InnoDB isn't
much more than a fast K/V store (hence why MySQL uses clustered indexes).

~~~
frik
> massive redesign of the storage engine

Has the Postgres team thought about adding support for more than one storage
engine? Then they could implement new ideas in a fork, and one could run them
side by side and migrate over.

[https://www.postgresql.org/message-id/4CB597FF.1010403@cheapcomplexdevices.com](https://www.postgresql.org/message-id/4CB597FF.1010403@cheapcomplexdevices.com)

For example, MySQL had been mocked for its old ISAM storage engine. Then MySQL
added InnoDB as another storage engine; the SQL interface stayed the same.

~~~
snuxoll
Pluggable storage engines for databases don't work that well in practice.
Either you end up with the MySQL situation where the storage engine is so dumb
that you can't push any smart optimizations into it (making having pluggable
engines moot in the first place), or you have to write such a large interface
that it's not worth providing.

~~~
pgaddict
That depends on what you mean by pluggable storage and how it's implemented
...

For example, PostgreSQL has long supported custom index access methods, which
you might see as a custom storage format (although only for secondary
storage). You used to have to modify the source code and rebuild PostgreSQL,
but there was a fairly clear separation / internal API that allowed it. Since
PostgreSQL 9.6 you can do that without the custom build (i.e. you can create a
new index in an extension and use CREATE ACCESS METHOD to plug it into the
server).
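
The 9.6-era mechanism looks like this in an extension's SQL script (the handler function, written in C, is hypothetical here; the in-core bloom extension is a real example of the same pattern):

```sql
-- Register a new index access method whose behaviour is implemented
-- by a C handler function returning an index_am_handler:
CREATE ACCESS METHOD myam TYPE INDEX HANDLER myam_handler;

-- From then on it can be used like any built-in index type:
CREATE INDEX ON my_table USING myam (my_column);
```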

We don't have such clear internal separation for the primary storage, but it's
hard to ignore the possible benefits of alternative storage types. Another
reason is that we're constantly scavenging for free bits in various places
(e.g. flags in tuple headers needed by new features etc), and allowing
multiple formats would help with this by supporting "old" and "new" table
format. So it'll likely follow what happened to indexes - build a clear
internal API, allow multiple storage formats internally, eventually make it
usable from extensions.

(These are just my personal opinions, of course.)

------
nickpeterson
Can anyone recommend a decently up to date book on postgres administration? Or
are docs really the only way? I've used SQL Server for years but would likely
choose postgres for an independent project if I intended to commercialize it.
That said, I don't use it at work so it's hard to get in depth experience.

~~~
pgaddict
There's PostgreSQL 9 Admin Cookbook from Simon Riggs, for example (disclosure:
I work for Simon).

Packt has several other good books about PostgreSQL, but always check the
author - they started publishing books authored by people entirely unknown in
the community that are "inspired" by books published before (you might also
say "plagiarized").

~~~
nickpeterson
Yeah packtpub is a real crapshoot. They're great in that they'll seemingly
publish whatever tech subject you want to write about. The downside is they
publish anything...

~~~
pgaddict
Yeah, although there's a difference between "publishing whatever" and
"publishing books that copy from other books".

Ultimately it's not the publisher but the author that matters, I guess.

------
Normal_gaussian
Extended Statistics! I was following the replication changes, but have just
discovered the extended statistics and am more excited about them.

The directory renaming at the bottom of the post is interesting - I wonder if
many other projects have to do things like this?

~~~
frik
It would be great if some Linux distros cleaned up the directory mess. There
are directories in there whose names no one remembers the original meaning of,
from the UNIX of the 1970s or whatever. For compatibility they could just be
hard/soft links to a saner directory structure.

Well, the same goes for Windows. With Win95, WinNT 3.5, WinXP and WinVista
they restructured the internal directory tree and renamed things. It was okay
with WinXP; just the long user folder was troublesome because of the 260-char
MAX_PATH limit. But with Vista and 64-bit support they fucked up, and it's now
a big mess in Win7+ (syswow64, system32, registry, winsxs, dotNet folders, ...
such a big mess, and sometimes also a waste of HDD space from duplicate
files).

~~~
barrkel
winsxs uses hard links - space wastage is more likely from more versions than
just dupes. Also, many Windows tools won't account correctly for hard links in
disk usage stats.

------
api
The feature I'd really love is master selection with Raft or similar and
automatic query redirection to the master for all write queries (and maybe for
reads with a query keyword).

That would make it very easy and robust to cluster pg without requiring a big
complicated (a.k.a. high admin overhead and failure prone) stack with lots of
secondary tools.

This kind of fire and forget cluster is really the killer feature of things
like MongoDB and RethinkDB. Yes people with really huge deployments might want
something more tunable, but that's only like 1% of the market.

Of course those NoSQL databases also offer eventual and other weaker but more
scalable consistency modes, but like highly tuned manual deployment these too
are features for the 1% of the market that actually needs that kind of scale.

A fire and forget cluster-able fully consistent SQL database would be nirvana
for most of the market.

~~~
xyzzy_plugh
Can't pgbouncer/pgpool2 solve query redirection? I don't understand the desire
for all-in-one solutions.

~~~
api
That desire comes from three places:

1. Minimize cognitive load by minimizing the number of things you have to
learn.

2. Minimize deployment complexity and dependencies.

3. Complexity is just evil in general. Linear increases in complexity result
in exponential increases in bugs, vulnerabilities, and failure modes. It's
just combinatorics.

~~~
mb4nck
patroni builds upon etcd and (optionally) haproxy, two rather mature pieces of
infrastructure (which can both be made HA on their own if SPOFs are to be
avoided).

I understand where you are coming from, but having this kind of multi-server,
who-is-master knowledge baked into Postgres itself will surely take another
couple of releases, if it gets included at all.

Probably BDR (bi-directional replication, master-master-like logical
replication) will be there first, but the question is whether it will help a
lot for local scale-out workloads (as opposed to gluing two datacenters
together and allowing transactions on both sides).

------
elvinyung
Dumb question: does declarative partitioning pave the way for native sharding
in Postgres? I'm not super super familiar, but it _seems_ like along with some
other features coming in Postgres 10, like parallel queries and logical
replication, that this is eventually the goal.

~~~
rhaas
I hope that it will have that effect. We need a few other features first:
partitionwise join, partitionwise aggregate, asynchronous query, and ideally
hash partitioning.

~~~
qeternity
Doesn't this basically replicate all of the work done by Citus?

------
smac8
Wow, so awesome. I do hope at some point we can see some language improvements
to PL/pgSQL. More basic data structures could go a long way in making that
language really useful, and I still consider views/stored procedures a
superior paradigm to client-side SQL logic.

~~~
rhaas
I agree with you that stored procedures are superior to client-side logic,
because it means that you can have multiple routes of access to the database
and all of them enforce the same business logic. But what exactly do you mean
by "more basic data structures"?

~~~
pgaddict
PL/SQL has various types of collections, for example, that are super-useful
when you need to do more complicated processing without having to create
temporary tables and such.

------
acdha
What's the ops experience for a replicated setup like these days? i.e.
assuming you want basic fault-tolerance at non-exotic size / activity levels,
how much of a job is someone acquiring if, say, there are reasons they can't
just use AWS RDS?

~~~
Heliosmaster
Streaming replication isn't hard at all: [http://davide.im/setting-up-a-failover-database-for-postgresql/](http://davide.im/setting-up-a-failover-database-for-postgresql/)

~~~
scurvy
It's not hard to set up initially, but I'll admit that it's not very good.

It's not very good in a long-lived scenario where you're changing your
replication topology for routine maintenance tasks. Changing from master to
replica is easy, but then you have to rebuild the original master off of the
former replica. Completely start over. You can't just start up again from a
given transaction ID. MySQL's GTID implementation is _much_ better in this
regard: you can change masters and replicas and repoint them all without
rebuilding. You can't do that (currently) with PostgreSQL. It's a major pain
point.

~~~
sorkin2
> Changing from master to replica is easy, but now you have to rebuild that
> original master off of the former replica now. Completely start over. You
> can't just start up again from a given transaction ID. MySQL's GTID
> implementation is much better in this regard. You can change masters and
> replicas all repoint them without rebuilding. You can't do that (currently)
> with Postgresql.

Have you heard of pg_rewind?
[https://www.postgresql.org/docs/current/static/app-pgrewind.html](https://www.postgresql.org/docs/current/static/app-pgrewind.html)

~~~
scurvy
I had not. Looks like it requires 9.5 or later? We're running 9.4 so we'll
have to upgrade to use it. Thanks!

~~~
mb4nck
You can get pg_rewind for 9.4 (and 9.3 in its branch) here:

[https://github.com/vmware/pg_rewind/tree/REL9_4_STABLE](https://github.com/vmware/pg_rewind/tree/REL9_4_STABLE)

It's from the people who wrote it upstream; they provide the code there for
earlier Postgres releases.

~~~
scurvy
Does this help at all with upgrades? Upgrading from 9.4 to 9.5 means you need
to rebuild your entire replication topology because the master's identifier
has changed (in the initdb step of the official docs).

------
StreamBright
For analytical loads the following is going to be great:

    
    
      While PostgreSQL 9.6 offers parallel query, this feature 
      has been significantly improved in PostgreSQL 10, with new 
      features like Parallel Bitmap Heap Scan, Parallel Index 
      Scan, and others.  Speedups of 2-4x are common with 
      parallel query, and these enhancements should allow those 
      speedups to happen for a wider variety of queries.

------
hodgesrm
Impressive feature list. Glad to see logical replication is finally making it
in.

~~~
brianwawok
What is you use case for it? My only thought was sending just one table to
replica to be used to do analytics on ..

~~~
rhaas
Replication across major versions, for example to upgrade without downtime.
Partial replication, to distribute shared data across a series of clusters, or
for analytics and reporting as you mention. Replicating the data without
replicating any table bloat. Being able to do limited writes (e.g. to
temporary tables) on the standby. [http://rhaas.blogspot.com/2011/02/case-for-logical-replication.html](http://rhaas.blogspot.com/2011/02/case-for-logical-replication.html)

~~~
hodgesrm
Indeed--anything where you want the secondary to be other than a bit-for-bit
copy of the primary. It's also convenient for HA in some cases due to the fact
that the DBMS copies are fully independent, hence free from propagated bit-
level errors and also available for unimpeded reads.

MySQL started with logical replication very early on and it has proven
extraordinarily useful. One of the more interesting use cases is feeding log
transactions into data warehouses, which should be possible in PostgreSQL 10.

------
hartator
I am considering more and more a move back from MongoDB to PostgreSQL. I will
miss being schema-less so much, though. Migrations - particularly Rails
migrations - left a bad taste in my mouth. Has anyone made the move recently,
and what are your feelings?

~~~
stemcc
You can easily have schema-less with Postgres's jsonb data type.

~~~
hartator
Not really. Postgres ORMs are not meant to do schema-less and tables still
need to be created.

~~~
stickfigure
You're right that ORMs aren't really set up to use jsonb, but it can be done.
I've had pretty good success with a kind of "hybrid": putting most relational
data in regular columns (with FK constraints) and adding flexible data in
jsonb. The main trick is understanding how to build custom column types in
whatever ORM you actually use.

I can imagine a future with specialized ORMs designed around jsonb, but the
current state of the art is probably not as bad as you think.
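
A minimal version of that hybrid layout (schema hypothetical): fixed relational columns plus a jsonb column for the flexible part, with a GIN index so containment queries stay fast:

```sql
CREATE TABLE products (
    id          serial PRIMARY KEY,
    category_id integer NOT NULL REFERENCES categories(id),
    attrs       jsonb NOT NULL DEFAULT '{}'
);

CREATE INDEX ON products USING gin (attrs);

-- Containment query against the schema-less part:
SELECT id FROM products WHERE attrs @> '{"color": "red"}';
```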

------
knv
Any recommendations for best practices for scaling PostgreSQL? Really
appreciate it.

------
mark_l_watson
I know that several RDF data stores use PostgreSQL as a backend data store.
With new features like better XML support, as well as older features for
storing hierarchical data, I am wishing for a plugin or extension for handling
RDF with limited (not RDFS or OWL) SPARQL query support. I almost always have
PostgreSQL available, and for RDF applications it would be very nice to not
have to run a separate service.

I tend to view PostgreSQL as a "Swiss Army knife" and having native RDF
support would reinforce that.

~~~
frik
Which RDF data stores use Postgres as a DB backend? And can one import
Wikidata? (Does it scale?) (I would rather avoid those old-school RDF
special-case stores from the SemanticWeb days 10 years ago.)

~~~
kuschku
[https://github.com/cayleygraph/cayley](https://github.com/cayleygraph/cayley)
is currently on the frontpage of HN, and it does use PGSQL as backend.

~~~
jerven
"Does it scale" is a very valid question for Cayley. Running
sparql.uniprot.org, I know Virtuoso scales easily to 27 billion triples. The
best benchmark I have seen for Cayley is 21 million RDF triples.

RDF and relational are similar, but free-form RDF is hard to put into a pure
relational schema, as in RDF the schema is derived from the data. There are
ways to recover relational schemas from RDF, but these are not yet in a
production state.

~~~
osi
i helped build a triple store on top of postgresql back in '06-'08. company
is gone so i don't know where the IP ended up, but it was a good foundation.
we were competitive with the other players at the time.

------
ams6110
A question on this statement, in the SCRAM authentication description:
_stealing the hashed password from the database or sniffing it on the wire is
equivalent to stealing the password itself_

How is that the case? That's exactly the thing that hashed passwords prevent.
Of course, if it's just an MD5 hash that's feasibly vulnerable to brute-
forcing today, but it's still not "equivalent" to having the clear-text
password.

~~~
jhgg
The point is that you only send the hash to the database to connect. If you
steal the hash, you can connect to the database using said hash, not needing
the plaintext. The password might as well be the hash in this case. Hence the
equivalency.

Using that scheme, all you prove is that you know the hash of the password.
SCRAM allows you to prove you know the plaintext password without actually
transmitting it.

~~~
xyzzy_plugh
If you steal the hash from the database, yes. I don't know how stealing the
hash over-the-wire is equivalent to having the password, since it is salted
(with a salt generated by the server) and is not reusable.

~~~
MBCook
Because the next time you connect to the server you provide the same hash. The
person doesn't know your plane text, but they can get into the server just
fine.

~~~
xyzzy_plugh
You can't provide the same salted wire hash. You'd need the pre-salted hash,
which only the client and server know. I fail to see how this answers my
question.

------
bladecatcher
This is great because I couldn't go to production with earlier releases of
logical decoding. Now we don't have to depend on a third-party add-on!

~~~
felixge
We're currently experimenting with logical decoding in 9.6, so I'd be curious
to hear what problems you've been running into.

------
mozumder
I could use a count of the number of file I/Os that each query takes, in order
to optimize my queries further...

~~~
anarazel
That's been there for a while:

    
    
        EXPLAIN (ANALYZE, BUFFERS) yourquery;
    

If you enable track_io_timing (has some overhead on platforms with slow
timestamps, e.g. older VMware), you even get timing.

If you want that aggregated, rather than for an individual query, you should
look into pg_stat_statements.
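
For the aggregated view, something along these lines works (pg_stat_statements must be listed in shared_preload_libraries; the columns shown are from its standard view):

```sql
CREATE EXTENSION pg_stat_statements;

-- Queries ranked by blocks actually read from disk; the *_time
-- columns are only populated when track_io_timing is on:
SELECT query, shared_blks_read, blk_read_time
FROM pg_stat_statements
ORDER BY shared_blks_read DESC
LIMIT 5;
```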

~~~
mozumder
The BUFFERS count is more for row-count info, since it operates on large
chunks of data, rather than for index optimization, which needs to count how
many times index structures are accessed. Counting I/Os directly would be more
useful for tuning indexes.

~~~
anarazel
Huh? It shows you the number of io operations.

------
awinter-py
fascinating that the road to improving the expr evaluator is better opcode
dispatch and jit -- same tradeoffs every programming language project is
looking at right now.

------
qxmat
DECLARE @please VARCHAR(3) = '???';

------
MR4D
You guys are awesome - keep up the good work!

------
awinter-py
the join speedup for provably unique operands sounds awesome

