Clearly Google Maps has the ability to turn into a feedback loop. Traffic exists -> people use Google Maps to find better routes -> traffic is modified due to people taking alternate routes -> new traffic emerges.
So my question is: what is Google Maps traffic optimizing for? The best traffic experience for User 3982274, or the best traffic experience for the conglomerate of all cars on the road?
Should Google Maps route several cars through a suboptimal route, if it results in traffic as a whole becoming better?
If Google Maps is "greedy" for every driver, can that make a traffic problem worse?
In reality, I guess this problem is more hypothetical than real, at least today. But imagine this: in 30 years, if all cars are self-driving and self-navigating via systems like Google Maps, what is the system optimizing for?
edit: there's also Braess's paradox. I'm not sure if it applies here, but perhaps it does -- could "sending some users down a new route during heavy traffic" be identical to "adding a road to a network", which can therefore result in the paradox (worse network conditions for everyone)?
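For reference, the textbook Braess network makes the "new road makes everyone slower" effect concrete. The numbers below are the standard illustration's values (4000 drivers, two fixed 45-minute legs, two load-dependent legs), not anything measured from Google Maps:

```python
# The textbook Braess network: 4000 drivers from Start to End, two routes.
# Route 1: a variable leg (t/100 minutes when t drivers use it) then a fixed
# 45-minute leg. Route 2 is the mirror image: fixed 45, then variable.
N = 4000

# Before the shortcut: by symmetry, the equilibrium splits drivers evenly.
half = N // 2
cost_before = half / 100 + 45          # per-driver travel time, in minutes

# A zero-cost shortcut now joins the two variable legs. Selfishly, every
# driver prefers variable -> shortcut -> variable, so BOTH variable legs
# end up carrying all N drivers.
cost_after = N / 100 + 0 + N / 100

print(cost_before, cost_after)         # 65.0 80.0
```

So adding a free road moves the selfish equilibrium from 65 to 80 minutes per driver, which is exactly the "sending some users down a new route made the network worse" scenario.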
WHCA* turned out to be a bit too suboptimal for our use-case: people generally expected "perfectly optimal" routes for aircraft, and they weren't even overly happy with most-optimal "for-all" paths either. We eventually implemented a relatively simple "AStar-3D", essentially just A* against a space-time graph, and it's greedy/FIFO -- meaning it's optimal for each aircraft at the time that aircraft plans its path. That made people happy -- aircraft no longer did seemingly stupid things like "oscillate" or get "temp. stuck" for overly long periods, etc.
I had no idea cooperative path-planning was so damn difficult -- I remember estimating it as a 1-week mini-project initially. Wow, such naivety, and that's when you even have perfect information! Such a cool domain, tons of respect for the work that's being done here, even if there are some tricky/ethical aspects that are going to come into play eventually, inevitably. :)
0 - https://www.aaai.org/Papers/AIIDE/2005/AIIDE05-020.pdf
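The greedy space-time A* described above can be sketched roughly like this. This is a toy grid version with per-timestep cell reservations (invented details, not the actual implementation), and it ignores edge-swap conflicts for brevity:

```python
import heapq

def spacetime_astar(grid, start, goal, reservations, max_t=200):
    """A* over (x, y, t). `grid` is a set of passable cells; `reservations`
    is a set of (cell, t) pairs already claimed by earlier agents. This is
    the greedy/FIFO scheme: each agent plans optimally against whatever has
    already been reserved."""
    def h(c):  # Manhattan distance; ignores time, so it's admissible
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # (f = g + h, time, cell)
    best = {(start, 0): 0}
    parent = {}
    while open_heap:
        f, t, cell = heapq.heappop(open_heap)
        if cell == goal:
            path, key = [], (cell, t)    # walk parents back to the start
            while key in parent:
                path.append(key[0])
                key = parent[key]
            return [start] + path[::-1]
        if t >= max_t:
            continue
        # (0, 0) is the "wait in place" action -- crucial in space-time search
        for dx, dy in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt not in grid or (nxt, t + 1) in reservations:
                continue
            g = t + 1
            if best.get((nxt, g), float("inf")) > g:
                best[(nxt, g)] = g
                parent[(nxt, g)] = (cell, t)
                heapq.heappush(open_heap, (g + h(nxt), g, nxt))
    return None  # no path within the time horizon
```

The "wait" action is what stops agents from oscillating: instead of dodging back and forth around a reserved cell, an agent can simply hold position for a tick and continue.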
Algorithms in between might work better in real life, e.g. people's routes can be adjusted slightly to make paths better overall, but no adjustment (away from greedy) is made that the average pilot would find overly unfair or impractical.
For example, suppose there are two roads leading to the destination, one at 100% capacity and the other at 0%. The app will start routing people from road 1 to road 2, and when the two balance out it will stop the suggestion. Even though it only helped some individual users, the end result is a 50/50 split, so good for everyone.
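The equalising loop described above, as a toy sketch (invented step size and loads, obviously not Google's real logic):

```python
def rebalance(load_a, load_b, step=1):
    """Shift `step` cars per tick from the fuller road to the emptier one,
    stopping once the two loads are within one step of each other -- a crude
    stand-in for 'suggest the alternate route until travel times equalize'."""
    while abs(load_a - load_b) > step:
        if load_a > load_b:
            load_a, load_b = load_a - step, load_b + step
        else:
            load_a, load_b = load_a + step, load_b - step
    return load_a, load_b

print(rebalance(100, 0))  # -> (50, 50)
```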
I live in a rural area, between a major population center and a major resort area, with one major highway and a few small back roads that provide alternate paths for part of the highway's route. Every summer weekend the highway becomes highly congested.
Google quickly starts routing people down the back roads because of a 30 minute delay on the highway. A sudden crush of cars hits these back roads, and they end up gridlocked for 3-4 hours. Google then realizes traffic is literally stopped on these roads, and stops sending new traffic down those routes. But the people already on them are still stuck for hours.
It gets smelly when a bunch of drivers take a shit on the side of the road because they can’t go anywhere else, and leave it there.
All because Google simultaneously made an ‘individually optimal’ decision for a whole bunch of individual drivers at once.
Another example in the same area actually causes a backup on the highway itself. Google started suggesting one back road that required an unprotected left turn across oncoming traffic on the highway, to avoid a 10-15 minute delay further on the highway. Drivers dutifully followed directions by getting into the left turn lane.
The drain rate of the left turn lane is slow because oncoming traffic is also heavy. The turn lane fills up, and one driver with directions to turn then stops in the through lanes to wait for room to get into it. Suddenly the highway is encountering 2-3 hour delays that don't clear for most of the day.
Probably impossible at a global level, but I wonder if it's possible to eventually model those cost functions and update them periodically to represent local maps.
It's evident they (G) need to work with Traffic Engineers and not cowboy it.
d²θ/dt² + (g/l)·sin(θ) = 0
(where g is the gravitational acceleration and l is the length of the pendulum)
Dynamical systems whose behaviour is linear are well-behaved and easy to analyse. (Linear in this context means that the system's response to a linear combination of inputs is the same linear combination of its responses to the individual inputs.) Non-linear systems, on the other hand, can behave chaotically, producing wildly different responses to slight differences in their inputs.
I'm not sure what a "dynamic system" is though.
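A dynamical system is just a state evolving over time under a fixed rule; the pendulum equation above is the classic non-linear example. A minimal numeric integration (semi-implicit Euler, with arbitrarily chosen step size and constants):

```python
import math

G, L = 9.81, 1.0   # gravitational acceleration (m/s^2) and length (m)

def simulate_pendulum(theta0, omega0=0.0, dt=1e-3, steps=10_000):
    """Integrate d^2(theta)/dt^2 + (G/L)*sin(theta) = 0 forward in time
    with semi-implicit Euler, returning the final angle."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        omega -= (G / L) * math.sin(theta) * dt   # update velocity first
        theta += omega * dt                       # then angle (symplectic)
    return theta

# For small angles sin(theta) ~ theta, so the equation reduces to the
# LINEAR system theta'' + (G/L)*theta = 0, which is exactly solvable.
# For large angles it stays non-linear, which is what makes the general
# case harder to analyse.
```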
Consider a contrived example of two similar bridges spanning some body of water. Bridge A is fed by a major highway; Bridge B is a couple of miles downstream and connected to the highway on both sides. For most drivers, if there are no other cars on the road, the preferred route is Bridge A.
The global optimum would be to divert some percentage of highway traffic to Bridge B _before_ Bridge A saturates. But for each individual on the highway this would mean a detour, so everyone would prefer that someone else take Bridge B.
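With invented latency numbers, the gap between the selfish equilibrium and the system optimum is easy to compute:

```python
def t_a(x):
    """Bridge A: 10 min free-flow, congesting past a capacity of 100 cars
    (all numbers made up for illustration)."""
    return 10 + max(0, x - 100) * 0.5

T_B = 20     # Bridge B: fixed 20-minute detour, ample capacity
N = 150      # total cars headed across

# Selfish equilibrium: drivers pile onto A until it is as slow as B.
eq_on_a = next(x for x in range(N + 1) if t_a(x) >= T_B)
eq_total = eq_on_a * t_a(eq_on_a) + (N - eq_on_a) * T_B

# System optimum: pick the split that minimises TOTAL travel time.
opt_on_a = min(range(N + 1), key=lambda x: x * t_a(x) + (N - x) * T_B)
opt_total = opt_on_a * t_a(opt_on_a) + (N - opt_on_a) * T_B

print(eq_on_a, eq_total)    # 120 3000.0
print(opt_on_a, opt_total)  # 100 2000.0
```

Selfish routing packs 120 cars onto Bridge A for 3000 total minutes; the optimum diverts 50 cars before A saturates and spends 2000 total minutes, but it requires some drivers to accept a slower individual trip.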
In theory, if Google Maps starts directing some portion of traffic through an alternate route, it's because that route is less congested. As it does so, the main route also becomes less congested, and it should reach a point where cars are being assigned to both routes because the two have roughly equalized in performance. In theory.
However, Google Maps updates its traffic predictions much more slowly than the "speed of sound" by any useful definition of the way disturbances propagate in traffic. As 'frogblast noted, far more cars may be suddenly directed down a side road than it can carry, and the recommendation stops being broadcast too late.
In this very theoretical scenario, left to its own devices (with an objective function of "minimizing prediction error"), Google Maps could end up making predictions that make traffic more predictable, without anybody being able to notice anything is going wrong.
A similar concept could apply to the infamous "YouTube algorithm" for predicting user interests: it might end up just showing videos that make users more predictable.
Not requiring a central point of control is an additional benefit. The reduction of traffic would be an "emergent" behavior.
If most users are connected to the same system, an obvious direction would be to optimize globally -- if there are two routes to the destination, just load-balance across them.
I live in Beijing and the traffic is sometimes horrible. The Uber counterpart Didi mandates the routes, and they are sometimes counterintuitively nice -- one looks like a detour through a narrow valley, but it's faster because there is no traffic jam there.
I'm not sure whether Uber or Didi is doing this already. At the end of the day, if most vehicles' GPS data feeds into a single system, and that system is recommending routes to most users, then it would be possible to optimize for the whole population rather than being greedy for individuals and creating traffic problems.
I believe that greater mileage contributes only to the car's service frequency, and that's pennies when amortized over years of duty. The driver's and passengers' time is much more valuable.
Google and Yandex average traffic by hour; they don't factor in their own users. That would be double counting, and would assume they are the only service.
For example, does Google Maps send some users deliberately down a route that it thinks is suboptimal so that it can better learn traffic patterns over a wider range of roads? My instincts as a data scientist tell me this would be a great way to gather more data and to create a better system as a whole, but at the expense of some users having longer drive times for some routes.
Putting my tin foil hat on, I've long suspected that Waze is used as the experimentation platform for Google Maps in this way. Where I live, Waze presents some highly unusual routes that I know are not optimal having lived here forever, whereas Google Maps is more on point.
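To be clear, this is pure speculation about Google -- but the mechanism I'm describing is just epsilon-greedy exploration, which looks something like this (hypothetical function, made-up parameters):

```python
import random

def choose_route(routes, est_minutes, epsilon=0.05, rng=random):
    """Epsilon-greedy routing (purely hypothetical): with probability
    epsilon, send the user down a random route to gather fresh traffic
    data; otherwise pick the route currently believed fastest."""
    if rng.random() < epsilon:
        return rng.choice(routes)                        # explore
    return min(routes, key=lambda r: est_minutes[r])     # exploit
```

The trade-off is exactly the one described above: the epsilon fraction of users eat a (probably) slower trip so that the system's travel-time estimates stay fresh for everyone else.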
When the actual numbers listed for those cities are:
Berlin - 21%
Jakarta - 22%
São Paulo - 23%
Sydney - 43%
Tokyo - not listed
Washington D.C. - 29%
Are they essentially saying that they lowered 3% inaccuracy to ~1.5% in Taichung? (And never mind the fact that 51% is described as "more than 50%"...)
Of course this type of work is fascinating. Getting from 97% to 98.5% accuracy is far, far more difficult than getting from 95.5% to 97%. But I don't enjoy the fudging of the perception of results.
Optimizing traffic is probably doing more harm than good. It's a kind of premature optimization. I see it as a way of masking underlying problems like insufficient investment in infrastructure, or too many cars for the capacity.
Squeezing performance out of existing infrastructure probably harms the longer game of building a good transport system for its users. And that's a game that should be played by the owner of the infrastructure, not some third party that just exploits the externalities by sending heavy road traffic through calm neighborhoods every time there is a traffic jam, often increasing the risk of accidents and gridlocking everything, because those roads were not designed for those spikes.
The worst part is that, more often than not, those routing apps don't even save the user any time. But the user is happy because he believes he made the right choice by following the app's directions.
Often it's net negative for everybody.
I’m on my iPhone so I can’t browse the data, but XML and JSON can be found here.