It's great to know that the only required storage components these days can be PG and Elasticsearch (since PG's full-text search can't compete with ES), and that the former is a no-brainer to set up (on AWS, Rackspace, etc.) or cheap to acquire (with Heroku Postgres, for example).
Good job!
NB: I've been using PostgreSQL for a few months on a side project and have been hugely impressed. I wanted to add full-text search at some point, and rather than using Lucene or Solr (or similar) I thought I would use PostgreSQL's own search capabilities, which certainly makes some things a lot simpler than running a separate search engine.
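For a flavour of it, here's roughly the shape I ended up with (a minimal sketch; the posts table and column names are made up):

    -- Index the document text once:
    CREATE INDEX posts_body_fts_idx
        ON posts USING GIN (to_tsvector('english', body));

    -- Then search it; the planner can use the index because the
    -- expression matches the one in the index definition:
    SELECT id, title
      FROM posts
     WHERE to_tsvector('english', body)
           @@ plainto_tsquery('english', 'compound words');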
1) It has suboptimal support for handling compound words (like finding the "wurst" in "bratwurst"). If the body you're searching is in a language that uses compounds (like German), you have to use ispell dictionaries, which have only rudimentary compound support and which in many cases aren't maintained anymore, because ispell has been more or less replaced by hunspell. Hunspell has far superior compound support, but it in turn is not supported by Postgres. (See the dictionary sketch after this list.)
2) If you use a dictionary for FTS (which you have to if you need to support compounds), the dictionary has to be loaded once per connection. Loading a 20MB dictionary takes about 0.5 seconds, so if you use Postgres FTS, you practically have to use persistent connections or some kind of proxy (like pgbouncer). Not a huge issue, but more infrastructure to keep in mind.
3) It's really hard to do Google Suggest-style query suggestions. In the end I had to resort to a bad hack in my case.
Nothing unsolvable, but not quite Elasticsearch either.
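For reference, the ispell route from 1) looks roughly like this (a sketch only; the german dict/affix file names are placeholders for files you'd have to provide in $SHAREDIR/tsearch_data yourself):

    -- Register the ispell dictionary:
    CREATE TEXT SEARCH DICTIONARY german_ispell (
        TEMPLATE  = ispell,
        DictFile  = german,
        AffFile   = german,
        StopWords = german
    );

    -- Wire it into a search configuration so compounds get split:
    CREATE TEXT SEARCH CONFIGURATION german_compound
        (COPY = pg_catalog.german);
    ALTER TEXT SEARCH CONFIGURATION german_compound
        ALTER MAPPING FOR asciiword, word
        WITH german_ispell, german_stem;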
2) There is a shared dictionary extension that loads the dictionary only once. See: http://pgxn.org/dist/shared_ispell/
3) About the suggestions: you should look at the pg_trgm contrib extension.
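A minimal pg_trgm sketch (the search_terms table of known-good terms is made up):

    CREATE EXTENSION IF NOT EXISTS pg_trgm;

    -- Trigram index so the fuzzy match stays fast:
    CREATE INDEX search_terms_trgm_idx
        ON search_terms USING GIN (term gin_trgm_ops);

    SELECT term
      FROM search_terms
     WHERE term % 'bratwurst'   -- similarity above pg_trgm's threshold
     ORDER BY similarity(term, 'bratwurst') DESC
     LIMIT 5;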
About 3: That's what I was using before moving to tsearch, but pg_trgm-based suggestions were sometimes really senseless and confusing to users. Also, using pg_trgm for suggestions causes suggestions to be shown that then don't return results when I later run the real search using tsearch.
I'm happy with my current hack though, so I just wanted to give a heads-up.
What I will do is bear those limitations in mind, and if I ever do hit those problems there is, I guess, a reasonable chance that Postgres development will have addressed them by then, or I will redesign things to use a separate search engine.
> If you use a dictionary for FTS (which you have to if you need to support compounds), the dictionary has to be loaded once per connection. Loading a 20MB dictionary takes about 0.5 seconds.
I'm thinking of a case where only maybe ~0.5% of queries will use FTS. I'm working on discussion software, and I'd like the discussions to be searchable, but realistically only a small percentage of queries will actually involve full text searching.
So either use a persistent connection for FTS-using connections (I'm connecting via pgbouncer for those), or, as karavelov recommended below, use http://pgxn.org/dist/shared_ispell/, which loads the dictionary once into shared memory (much less infrastructure needed for that one).
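With shared_ispell, the setup looks roughly like this (a sketch based on the extension's docs; the german file names are placeholders, and the exact GUC names may differ by version):

    -- postgresql.conf: the module must be preloaded and given shared memory:
    --   shared_preload_libraries = 'shared_ispell'
    --   shared_ispell.max_size = 30MB

    CREATE EXTENSION shared_ispell;

    -- Same files as a regular ispell dictionary, but parsed once
    -- into shared memory instead of once per backend:
    CREATE TEXT SEARCH DICTIONARY german_shared (
        TEMPLATE  = shared_ispell,
        DictFile  = german,
        AffFile   = german,
        StopWords = german
    );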
The step that causes the trouble is ts_lexize(), not the storing and subsequent lookup.
Again, I made it work for my case, but it was some hassle and involved running a home-grown script over an ispell dictionary I'd created by converting a hunspell one into ispell format.
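For illustration, this is the call in question (the dictionary name assumes a setup like the one sketched earlier; the output shape is what a compound-capable dictionary would produce):

    SELECT ts_lexize('german_ispell', 'bratwurst');
    -- e.g. {brat,wurst} once a compound-aware dictionary is loaded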
Why not use: MY_COLUMN ~ '.*wurst\y.*' ? Here is the doc for LIKE, SIMILAR TO and regexes: http://www.postgresql.org/docs/9.2/static/functions-matching...
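For instance, with that pattern:

    SELECT 'bratwurst' ~ '.*wurst\y.*';  -- true: matches the trailing "wurst"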
For example, I can now find the "wurst" in "Weisswürste" which, yes, I could do with a regex, but I can also find the "haus" in "Krankenhäuser" and all other special cases in the language I'm working with without having to write special regexes for every special term I might come up with.
That's an incredibly helpful answer, which provides solutions as well as pertinent info.
These advances within the GIN inverted-index infrastructure will also greatly benefit jsonb, since it has two GIN operator classes (which are more or less the compelling way to query jsonb).
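Concretely, the two operator classes look like this (a sketch, with a hypothetical docs table holding a jsonb column named data; in practice you'd pick one or the other):

    -- Default opclass: supports @>, ?, ?| and ?&
    CREATE INDEX docs_data_idx ON docs USING GIN (data);

    -- jsonb_path_ops: smaller and faster, but containment (@>) only
    CREATE INDEX docs_data_path_idx ON docs USING GIN (data jsonb_path_ops);

    SELECT * FROM docs WHERE data @> '{"status": "active"}';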
1. Solr doesn't handle multi-word synonyms (without a hack); PG does (ex: "Northern Ireland" => "UK"; a thesaurus-dictionary sketch follows this list).
2. Solr uses TF-IDF out of the box, and PG doesn't.
3. PG is good enough for 90% of cases, but Solr has some advanced features that PG doesn't, like integration with OpenNLP and things like the WordDelimiterFilter (Andre3000 = Andre 3000).
4. PG is kinda annoying in that it will parse "B.C." as a hostname, even though I want it to be a province.
5. Solr is faster than PG, but PG has everything in one server.
6. Solr handles character-grams and word-grams better.
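On 1), the PG side is a thesaurus dictionary. A sketch, assuming a my_thesaurus.ths file in $SHAREDIR/tsearch_data containing lines like "northern ireland : UK":

    CREATE TEXT SEARCH DICTIONARY my_thesaurus (
        TEMPLATE   = thesaurus,
        DictFile   = my_thesaurus,
        Dictionary = english_stem
    );

    -- Better applied to a copy of the built-in config than to
    -- 'english' itself:
    ALTER TEXT SEARCH CONFIGURATION english
        ALTER MAPPING FOR asciiword, asciihword, hword_asciipart
        WITH my_thesaurus, english_stem;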
With the GIN optimizations in 9.4, this needs to be re-evaluated. Solr will probably still be faster, but maybe not by enough for it to matter.
Otherwise, it's fairly easy to implement, and you get full SQL support (joins, transactions, etc.), so you can always prototype it and see what limitations you run into.
If you're just getting started with Postgres, I have some examples of using its FTS here: http://monkeyandcrow.com/blog/postgres_railsconf2013/
It's targeted at Rails, but I always show the SQL first, so you should be able to adapt it.
The app I'm working on right now evolved from the former to the latter, prompting me to switch from Mongo to Postgres, and it's made the code base much, much simpler. Mongo gets really painful when you have to fetch several documents (serially, because you have to follow the links between them) and join them at the application level before rendering the output.
For that kind of application, SQL with joins and sub-selects is so much better.
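E.g., with a hypothetical discussion schema, what used to be several serial Mongo fetches collapses into one round trip:

    -- Thread page in a single query instead of link-chasing in app code:
    SELECT t.title, p.body, u.name AS author
      FROM posts p
      JOIN threads t ON t.id = p.thread_id
      JOIN users   u ON u.id = p.author_id
     WHERE t.id = 42
     ORDER BY p.created_at;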