Incremental updates and intelligent, flexible, efficient search are all immediately doable with existing open source software.
Why are people hell-bent on using Postgres when it's suboptimal for a dataset this large that needs intelligent searching?
Seriously people: https://github.com/ncolomer/elasticsearch-osmosis-plugin
Learn the JSON document and query formats, and then proceed to jump with glee whenever you encounter a problem well served by a search engine instead of doing hackneyed manual indices, full-text queries, and poorly managed shards on Postgres.
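For anyone who hasn't seen those formats, here's a minimal sketch using plain HTTP from Python (the index name, field names, and localhost endpoint are all invented for illustration; exact details vary by ElasticSearch version):

    import requests

    ES = "http://localhost:9200"  # assumed local ElasticSearch node

    # A document is just JSON.
    doc = {"name": "Brandenburger Tor", "city": "Berlin", "tags": ["tourism", "attraction"]}
    requests.put(ES + "/places/_doc/1", json=doc)

    # A query is also just JSON, here a simple full-text match.
    query = {"query": {"match": {"name": "brandenburger"}}}
    resp = requests.post(ES + "/places/_search", json=query).json()
    for hit in resp["hits"]["hits"]:
        print(hit["_score"], hit["_source"]["name"])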
Postgres is for important operational data. Excellent at that. Not so great for search, bulk, or static datasets.
ElasticSearch is so well designed that I just expose a raw query interface to our frontend JS guy, and he builds his own queries from the user inputs.
ElasticSearch is probably like 20-30% of the technological secret sauce of my startup.
In particular, roads with long straight segments don't have many nodes, so the nearest road isn't always the road the nearest node is on - and a road that travels through a bounding box won't always have a node in the bounding box.
Does ElasticSearch support indexing on geometry more complicated than points?
If it can be represented as a geo point or a composite of multiple geo points, then ElasticSearch can grok it. Otherwise, no.
If you want to query arbitrary paths, that's on you to bridge the gap between a spatial database and a graph store.
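To make "represented as a geo point" concrete, here is a minimal sketch of a geo_point mapping plus a distance filter (index and field names invented; the syntax differs slightly between ElasticSearch versions):

    import requests

    ES = "http://localhost:9200"  # assumed local ElasticSearch node

    # Each OSM node becomes one document with a single geo_point field.
    mapping = {"mappings": {"properties": {"location": {"type": "geo_point"}}}}
    requests.put(ES + "/nodes", json=mapping)

    requests.put(ES + "/nodes/_doc/1",
                 json={"name": "example node", "location": {"lat": 52.5163, "lon": 13.3777}})

    # Points can then be searched by distance from a reference location.
    query = {"query": {"bool": {"filter": {"geo_distance": {
        "distance": "500m",
        "location": {"lat": 52.5200, "lon": 13.4050}}}}}}
    print(requests.post(ES + "/nodes/_search", json=query).json())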
I'm not really sure what you're looking for. This post was about OpenStreetMaps.
I store paths, each comprising an ordered sequence of points. Depending on what spatial tool you're using, you might call this a path, a linestring, a line, or an ordinate array. In the diagram the points are black dots and the path is shown in purple.
I want to do a bounding box query - finding the paths that are entirely or partly inside a given box. In the diagram, the box I'm querying for is shown in red. As you can see in the diagram, the purple path passes through the red box, but none of the black points defining the path are within the box.
I can accomplish this with a single query using Oracle Spatial or PostGIS. It requires that the spatial database understand shapes more complicated than just points. Can ElasticSearch? I can't find any examples of this in the documentation.
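For comparison, the single-query PostGIS version looks roughly like this (table and column names are invented); the point is that the linestring geometry itself is indexed and tested against the box, not just its vertices:

    import psycopg2

    conn = psycopg2.connect("dbname=gis")  # assumed local PostGIS database
    cur = conn.cursor()

    # paths.geom holds LINESTRING geometries in EPSG:4326; with a GiST index on
    # geom, this finds every path that touches the box, even when none of the
    # path's vertices fall inside it.
    cur.execute("""
        SELECT id, ST_AsText(geom)
        FROM paths
        WHERE ST_Intersects(geom, ST_MakeEnvelope(%s, %s, %s, %s, 4326))
    """, (13.37, 52.51, 13.41, 52.53))  # min_lon, min_lat, max_lon, max_lat

    for row in cur.fetchall():
        print(row)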
The strengths of ElasticSearch are trivial sharding and replication, plus intelligent, fast, soft real-time search.
It also has a powerful, easy-to-understand, highly programmable query syntax that is easy to generate in code.
It's not a spatial database, and what I was originally talking about wasn't designed to solve pathing/graph traversal. But you can still do n-dimensional indexed spatial search in ElasticSearch, and that's something I do on a regular basis, although it's not the "base" use case for their geo API.
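As a sketch, the point-based bounding-box search looks like this (index and field names invented), with the caveat from upthread that it only returns documents whose indexed points fall inside the box, not paths that merely pass through it:

    import requests

    ES = "http://localhost:9200"  # assumed local ElasticSearch node

    # Bounding-box filter over a geo_point field.
    query = {"query": {"bool": {"filter": {"geo_bounding_box": {"location": {
        "top_left":     {"lat": 52.53, "lon": 13.37},
        "bottom_right": {"lat": 52.51, "lon": 13.41}}}}}}}
    resp = requests.post(ES + "/nodes/_search", json=query).json()

    # A path whose vertices all lie outside the box is invisible to this query.
    for hit in resp["hits"]["hits"]:
        print(hit["_source"])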
We didn't look into ElasticSearch, since we wanted to give ArangoDB a try. We'll have a look at it, thanks for the hint!
ElasticSearch is a search engine, first and foremost, and while you could use it as a database-of-first-resort, I'd be hesitant to recommend as much. For one thing, it doesn't take durability very seriously.
As a result, I have to assume you chose wisely if you're using ArangoDB for a standard database use-case.
It is developed locally, and we wanted to find out whether it scales up and serves our needs, or whether we should go the "traditional" Postgres route that everybody else takes.
This is the biggest hurdle to overcome, in my experience. A custom data format is typically essential (most location databases arrive as CSVs or XML, which are useless for real-time querying), but imports can take forever.
It's sometimes, counterintuitively, been more worthwhile to concentrate on the performance of importing than of querying; the out-of-the-box query performance you get with (no)SQL often isn't terrible, but your import script usually starts out pretty awful.
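The usual first fix is batching: issuing one request (or statement) per record is almost always the bottleneck. A hedged sketch using ElasticSearch's _bulk endpoint (names invented; the same idea applies to COPY in Postgres):

    import json
    import requests

    ES = "http://localhost:9200"  # assumed local ElasticSearch node

    def bulk_index(docs, index="places", batch_size=5000):
        # Send newline-delimited action/document pairs to _bulk in batches
        # instead of issuing one indexing request per document.
        for start in range(0, len(docs), batch_size):
            lines = []
            for doc in docs[start:start + batch_size]:
                lines.append(json.dumps({"index": {"_index": index}}))
                lines.append(json.dumps(doc))
            requests.post(ES + "/_bulk",
                          data="\n".join(lines) + "\n",
                          headers={"Content-Type": "application/x-ndjson"})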
I consider efficient export formats for geospatial data (or similar rich/complex data sources) to be a bit of an unsolved problem. It is not difficult to design storage formats that are literally a couple of orders of magnitude faster to process, but the formats most people are using were designed for files small enough that parsing efficiency doesn't matter. Consequently, at my company we spend time designing highly optimized parsers for inherently inefficient formats and designing non-standard internal export formats that nothing else understands but which are nonetheless vastly faster to use at scale. It is a big problem that seems like it should have been solved by now.
I think what we can try is to import the data in the same format OSM provides it in and use smart indexes within the database to answer queries quickly. I think this will be the major area of investigation going forward.
With the benefit of this experience, I decided it was worth switching to OSM's alternative 'PBF' format. It's a dense binary format that doesn't require additional compression. It's also reportedly 6 times faster to read than gzipped XML. Honestly it seems very complicated to parse, but if you're willing to work with Java or C there's a parser already available.
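If Python happens to be closer to your stack, there are also bindings to the C++ osmium parser (pyosmium); a minimal sketch, assuming that package and a local extract file:

    import osmium

    class Counter(osmium.SimpleHandler):
        # Streams a .osm.pbf file and counts nodes and ways without
        # loading the whole file into memory.
        def __init__(self):
            super().__init__()
            self.nodes = 0
            self.ways = 0

        def node(self, n):
            self.nodes += 1

        def way(self, w):
            self.ways += 1

    handler = Counter()
    handler.apply_file("extract.osm.pbf")  # hypothetical file name
    print(handler.nodes, "nodes,", handler.ways, "ways")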
Thanks for the link
First, there is real value in having code like this available as open source, and working using open data. The analogy to crypto would only hold if there were already good open alternatives around. It doesn't sound like that's the case. But second, crypto is basically a solved problem with clearly defined but subtle best practices. Geocoding is totally different. There's plenty of room for experimentation and totally new approaches that wouldn't fit into existing frameworks. I don't see how discouraging that experimentation can possibly be in anyone's best interest.
(Since credentials are being asked for: I used to work on the geocoder of Google Maps, as well as on geocoding data quality issues.)
As stated before, we wanted to use the data that is already there and make it available in a user-friendly way, something the current geocoders for OSM don't really do.
As for the claim I make: my wish that OSM becomes the first go-to place when people want to search for something on a map still stands. It's my wish, not a general statement I'm making. And hey, it worked with Wikipedia. For many people it's the first place they go when they're looking for information, so I think this can also work for OSM.
When I need to look up stuff I usually go to Google Maps, too, but that's only because the OSM solutions out there frustrate me with irrelevant results and I have to scroll through endless amounts of data to get to the information I'm looking for. :)
Apple didn't really use OSM. The current OSM is pretty good, by the way. It's just a pity some data is still wrong (for example, many postcodes in Madrid, ES).