Traffic Prediction with Advanced Graph Neural Networks (deepmind.com)
222 points by beefman 45 days ago | 42 comments



Google Maps' traffic prediction has always led me to a very curious question:

Clearly Google Maps has the ability to turn into a feedback loop. Traffic exists -> people use Google Maps to find better routes -> traffic is modified due to people taking alternate routes -> new traffic emerges.

So my question is: what is Google Maps traffic optimizing for? The best traffic experience for User 3982274, or the best traffic experience for the conglomerate of all cars on the road?

Should Google Maps route several cars through a suboptimal route, if it results in traffic as a whole becoming better?

If Google Maps is "greedy" for every driver, can that make a traffic problem worse?

In reality, I guess this problem is more hypothetical than real, at least today. But imagine this: in 30 years, if all cars are self-driving and self-navigating via systems like Google Maps, what is the system optimizing for?

edit: there's also Braess's paradox. I'm not sure if it applies here, but perhaps it does -- could "sending some users down a new route during heavy traffic" be identical to "adding a road to a network", which can therefore result in the paradox (worse network conditions for everyone)?

https://en.wikipedia.org/wiki/Braess%27s_paradox


It's a good question, even if it's likely not practically applicable yet. In a game my company created, we implemented cooperative realtime pathfinding using WHCA* -- an algorithm that David Silver published [0] (he's now working at DeepMind, last I looked).

WHCA* turned out to be a bit too suboptimal for our use case: people generally expected "perfectly optimal" routes for their aircraft, and they weren't even overly happy with most-optimal "for-all" paths either. We eventually implemented a relatively simple "AStar-3D", essentially just A* against a space-time graph, and it's greedy/FIFO -- meaning it's optimal for each aircraft at the time the aircraft runs its path. That made people happy -- aircraft no longer did seemingly stupid things like oscillate, or get temporarily stuck for overly long periods, etc.
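
For the curious, here's roughly what I mean by "A* against a space-time graph" -- a minimal, purely illustrative Python sketch (the grid, the reservation set, and all the names are made up for the example; this is not our actual code):

  import heapq, itertools

  def spacetime_astar(grid, start, goal, reservations, max_t=200):
      # grid[y][x] == 0 means free; reservations holds (x, y, t) cells
      # already claimed by earlier (higher-priority) aircraft.
      h = lambda x, y: abs(x - goal[0]) + abs(y - goal[1])  # Manhattan heuristic
      tie = itertools.count()
      open_set = [(h(*start), next(tie), 0, (start[0], start[1], 0), None)]
      came_from = {}
      while open_set:
          _, _, g, (x, y, t), parent = heapq.heappop(open_set)
          if (x, y, t) in came_from:
              continue
          came_from[(x, y, t)] = parent
          if (x, y) == goal:  # reconstruct the (x, y, t) path
              path, node = [], (x, y, t)
              while node is not None:
                  path.append(node)
                  node = came_from[node]
              return path[::-1]
          if t >= max_t:
              continue
          # four moves plus "wait in place" -- waiting is what kills the oscillation
          for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)):
              nx, ny, nt = x + dx, y + dy, t + 1
              if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                      and grid[ny][nx] == 0 and (nx, ny, nt) not in reservations):
                  heapq.heappush(open_set,
                                 (g + 1 + h(nx, ny), next(tie), g + 1, (nx, ny, nt), (x, y, t)))
      return None

  # Greedy/FIFO: each aircraft plans in arrival order and reserves its cells.
  grid = [[0] * 8 for _ in range(8)]
  reservations = set()
  for start, goal in [((0, 0), (7, 7)), ((7, 0), (0, 7))]:
      path = spacetime_astar(grid, start, goal, reservations)
      reservations.update(path)
      print(path)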

I had no idea cooperative path-planning was so damn difficult -- I remember estimating it as a 1-week mini-project initially. Wow, such naivety -- and that's when you have perfect information! Such a cool domain, tons of respect for the work that's being done here, even if there are some tricky/ethical aspects that are going to come into play eventually, inevitably. :)

0 - https://www.aaai.org/Papers/AIIDE/2005/AIIDE05-020.pdf


I can see how solving for perfect cooperation can leave people irritated. That's why most algorithms used in OSs take latency, fairness, etc. into account.

Algorithms in between might work better in real life, e.g. people's routes can be adjusted slightly to make paths better overall, but no adjustment (away from greedy) is made that the average pilot would find overly unfair or impractical.


When User 3982274 is on a busy road using the app, Google optimizes for that user's experience. If every user on that road is using the app at the same time, these algorithms should theoretically result in the optimal condition you described above.

For example, suppose there are two roads leading to the destination, one at 100% capacity and the other at 0%. The app will start routing people from road 1 to road 2, and once the two balance out it will stop making the suggestion. Even though it only helped some individual users, the end result is a 50/50 split, which is good for everyone.


I live in an area strongly impacted by Google making ‘individually optimal’ decisions for each driver, and actually leaving those drivers in a dramatically worse situation.

I live in a rural area, between a major population center and a major resort area, with one major highway and a few small back roads that provide alternate paths for part of the highway's route. Every summer weekend the highway becomes highly congested.

Google quickly starts routing people down the back roads because of a 30 minute delay on the highway. A sudden crush of cars hits these back roads, and they end up gridlocked for 3-4 hours. Google then realizes traffic is literally stopped on these roads, and stops sending new traffic down those routes. But the people already on them are still stuck for hours.

It gets smelly when a bunch of drivers take a shit on the side of the road because they can’t go anywhere else, and leave it there.

All because Google simultaneously made an ‘individually optimal’ decision for a whole bunch of individual drivers at once.

——

Another example in the same area actually causes a backup on the highway itself. Google started suggesting one back road that required an unprotected left turn across oncoming traffic on the highway, to avoid a 10-15 minute delay further on the highway. Drivers dutifully followed directions by getting into the left turn lane.

The drain rate of the left turn lane is slow because oncoming traffic is also heavy. The left turn lane fills up, and one driver with directions to turn then stops in the traffic lanes to wait for room to get into the turn lane. Suddenly the highway is encountering 2-3 hour delays that don't clear for most of the day.


In theory, the learning-based approach in this blog post could solve this without any architectural changes. They mention predictions spilling over to nearby roads, learning the pattern that "if the highway is backed up now, the little road will be too in a few minutes". Once that is baked into the model, Google Maps won't direct (as many) people onto the side road, because they would get stuck in the predicted traffic there. The nice thing is that this also handles the case where other mapping applications direct drivers there, and the "smartest" mapping application will still get you there fastest.


I wonder if this will be improved with more realistic cost functions that adequately model those localized effects.

Probably impossible at a global level, but I wonder if it's possible to eventually model and update those cost functions periodically to represent local maps.


This happens sometimes on I-80 in the winter (usually between Auburn and Tahoe). Snow conditions can wreak havoc on weekends. People are routed to back roads -- but not everyone has chains on (so they may have to fumble to get them on while on a road without wide shoulders to pull over), or the clearance needed to take the back roads -- so they get stuck in an even bigger snarl.

It's evident they (Google) need to work with traffic engineers and not cowboy it.


Surprisingly, it isn't always the case that each driver optimizing their own route will lead to a global optimum - see Braess's paradox. "If every driver takes the path that looks most favourable to them, the resultant running times need not be minimal." [0]

[0] https://en.wikipedia.org/wiki/Braess's_paradox
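
For anyone who hasn't seen it, the standard textbook instance is small enough to check in a few lines (the numbers are the classic 4000-driver example, nothing to do with Google's actual network):

  # Classic textbook instance of Braess's paradox.
  N = 4000                      # drivers from Start to End

  # Two routes: each = one congestible link (t = x/100 min) + one fixed link (45 min).
  def route_time(x):            # x = drivers on that route
      return x / 100 + 45

  # Without the shortcut, selfish drivers split evenly: 2000 per route.
  print(route_time(N / 2))      # 65.0 minutes for everyone

  # Add a free shortcut joining the two congestible links. Now the selfishly
  # optimal route for every single driver is congestible + shortcut + congestible,
  # so all 4000 drivers load both congestible links.
  print(N / 100 + 0 + N / 100)  # 80.0 minutes for everyone -- worse for all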


That's definitely not how it works. Traffic is a nonlinear dynamical system with all sorts of non-intuitive coupling between various components.


What's the difference between a dynamic system and a dynamical system?


A dynamical system (https://en.wikipedia.org/wiki/Dynamical_system) is basically a bunch of (continuous / discrete) variables whose evolution is governed by a set of (differential / difference) equations. E.g. a pendulum is a dynamical system whose state can be described with a single variable θ (the angle of the pendulum), governed by the differential equation:

d²θ/dt² + (g/l)·sin(θ) = 0

(where g is the gravitational acceleration and l is the length of the pendulum)

Dynamical systems whose behaviour is linear are kinda well-behaved and easy to analyse. (Linear in this context means that the system's response to a linear combination of inputs is the same as the linear combination of its responses to the individual inputs.) Non-linear systems, on the other hand, can behave chaotically, producing wildly different responses to slight differences in their inputs.
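
A quick numerical sketch of that pendulum equation, contrasting the full nonlinear ODE with its small-angle linearisation sin(θ) ≈ θ (toy constants, semi-implicit Euler, purely illustrative):

  import math

  g, l, dt = 9.81, 1.0, 0.0005

  def simulate(theta0, linear=False, t_end=10.0):
      # integrate theta'' = -(g/l) * sin(theta) (or -(g/l) * theta if linearised)
      theta, omega, t = theta0, 0.0, 0.0
      while t < t_end:
          accel = -(g / l) * (theta if linear else math.sin(theta))
          omega += accel * dt
          theta += omega * dt
          t += dt
      return theta

  for theta0 in (0.1, 2.0):      # small swing vs. big swing (radians)
      print(theta0,
            round(simulate(theta0, linear=False), 3),
            round(simulate(theta0, linear=True), 3))
  # For the small swing the two agree closely; for the big swing the nonlinear
  # pendulum's period lengthens and the answers drift apart.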

I'm not sure what a "dynamic system" is though.


In some cases that's exactly how it works. There's a four-way stop on a major commuter route where I live. Most of the traffic passes through it in a straight line, and there is a major build-up, 20-30 minutes long, during rush hour. In one direction there is a side road that loops off the main road for about 2 miles in a big U and joins back up again at that four-way stop. Before Waze became popular you could shoot around this bypass and be the only car on that side of the stop sign. Now there is a pretty significant amount of traffic there every rush hour thanks to the GPS apps. Lowell Road and Barnes Hill Road in Concord, MA, for anyone curious.


I think that modelling is too simplistic. Travel time is often dependent on traffic load, with a stark discontinuity as load approaches 100% (stop-and-go traffic).

Consider a contrived example of two similar bridges spanning some body of water. Bridge A is fed by a major highway; Bridge B is a couple of miles downstream and connected to the highway on both sides. For most drivers, if there are no other cars on the road, the preferred route is Bridge A.

The global optimum would be to divert some percentage of highway traffic to Bridge B _before_ we saturate Bridge A. But for each individual on the highway this would mean a detour, and each would prefer that someone else take Bridge B.
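
To put toy numbers on that (made-up linear delay functions, nothing to do with any real bridge):

  t_A = lambda x: 10 + x / 100      # minutes on Bridge A with x cars
  t_B = lambda x: 15 + x / 100      # minutes on Bridge B (the detour) with x cars
  N = 1000

  def total_time(a):                # a cars on A, the rest on B
      b = N - a
      return a * t_A(a) + b * t_B(b)

  # User equilibrium: nobody can switch and do better, so both bridges
  # end up equally slow -> 750 on A, 250 on B, 17.5 min each.
  ue = min(range(N + 1), key=lambda a: abs(t_A(a) - t_B(N - a)))
  print(ue, t_A(ue), total_time(ue))               # 750  17.5  17500.0

  # System optimum: minimise total car-minutes -> 625 on A, 375 on B.
  so = min(range(N + 1), key=total_time)
  print(so, t_A(so), t_B(N - so), total_time(so))  # 625  16.25  18.75  17187.5

  # The system optimum saves ~300 car-minutes overall, but the cars diverted
  # to Bridge B spend 18.75 min instead of 17.5 -- which is exactly why no
  # individual volunteers for it.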


That's how it theoretically works as a free service where all users are equal. It's not difficult to imagine a tiered subscription model which finds a sub-optimal 70/30 split more profitable. It's also important to note it's optimizing for time, not for fuel usage (a shorter path may require expensive elevation changes, for instance), traffic noise in neighborhoods, safety, access to services, etc.


This is the correct answer. Optimizing for the user is optimizing for the conglomerate. By giving each user the path of least resistance, the highest systemic throughput is achieved.


This is correct. The optimal routing that improves transit time for individual travelers would also minimize global transit time. However, it is true that there could be externalities to this behavior, like cars being routed through neighborhoods with higher likelihood of accidents.


There's been a lot of anger in some neighborhoods when mapping apps start directing people through their small streets. This isn't really hypothetical.

In theory, if Google Maps starts directing some portion of traffic through an alternate route, it's because that route is less congested. As it does so, the main route also becomes less congested. It should eventually reach a point where cars are being assigned to both routes because the two have roughly equalized in performance. In theory.


Indeed, the central hypothesis of most traffic simulation tools is what's known as a Wardrop equilibrium, in which users choose selfishly optimal routes until traffic is spread such that no user has an incentive to deviate to an alternative route. Of course, this is an instance of the more general and well-known notion of Nash equilibrium.

https://en.m.wikipedia.org/wiki/Route_assignment#Equilibrium...
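
For a feel of how assignment tools actually find that equilibrium, here's a tiny sketch of the method of successive averages on a made-up two-route network (the delay functions and demand are invented for the example; real tools use far richer networks and volume-delay functions):

  # Method of successive averages (MSA) on a toy two-route network.
  t1 = lambda x: 10 + (x / 400) ** 2 * 5     # route 1 delay (minutes) with x vehicles
  t2 = lambda x: 12 + (x / 300) ** 2 * 5     # route 2 delay (minutes)
  D = 1000                                   # total demand (vehicles)

  x1 = 0.0                                   # current flow on route 1
  for n in range(1, 201):
      # all-or-nothing step: send everyone down whichever route is
      # currently faster, then average with the running solution
      target = D if t1(x1) < t2(D - x1) else 0.0
      x1 += (target - x1) / n

  print(round(x1), round(D - x1), round(t1(x1), 2), round(t2(D - x1), 2))
  # At equilibrium the two route times are (nearly) equal, so no driver
  # has an incentive to switch -- Wardrop's first principle.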


This is how Uber matches trips [0]: optimising not for the fastest pickup for any one rider, but to reduce the total time to arrival across the whole network.

[0] http://blogs.cornell.edu/info2040/2019/10/23/uber-ride-shari...


It might as well be noted that Braess's paradox is a phenomenon observed in a world of (mostly) predictionless navigating, i.e. before Google Maps. When fluids flow, phonons communicate "traffic information" at the speed of sound and the resulting flow is usually efficient. You can get Braess's paradox in physics, but you need quantum mechanics:

https://ui.adsabs.harvard.edu/abs/2012PhRvL.108g6802P/abstra...

However, Google Maps updates its traffic predictions much more slowly than the "speed of sound" by any useful definition of the way disturbances propagate in traffic. As 'frogblast noted, far more cars may be suddenly directed down a side road than it can carry, and the recommendation stops being broadcast too late.


Fun short fiction story on how predictive systems, whose predictions influence the very thing they predict, might evolve: https://www.lesswrong.com/posts/SwcyMEgLyd4C3Dern/the-parabl...

In this very theoretical scenario, left to its own devices (with its objective function of minimizing prediction error), Google Maps could end up making predictions that make traffic more predictable, without anybody being able to notice anything is going wrong.

A similar concept could apply to the infamous "Youtube algorithm" for predicting user interests, which might end up just showing videos that make users more predictable.


People seem to focus on the volume of cars. But traffic is not just a matter of volume but of friction between cars. Reducing the friction (naively) appears to be a simpler problem. You wouldn't even need full self driving, only enough tech for cars to merge/switch lanes without slowing down.

Not requiring a central point of control is an additional benefit. The reduction of traffic would be an "emergent" behavior.


> in 30 years, if all cars are self-driving and self-navigating via systems like Google Maps, what is the system optimizing for?

If most users are connected to the same system, an obvious direction would be optimizing globally - if there are two routes to choose from, just load balance between them.

I live in Beijing and the traffic is horrible sometimes. The Uber counterpart Didi mandates the routes, and the result is sometimes counterintuitively nice - it looks like a detour through a narrow valley, but it's faster because there is no traffic jam there.

I'm not sure whether Uber or Didi is doing this already. At the end of the day, if most vehicles' GPS is connected to a single system, and the system is recommending routes to most users, then it would be possible for the system to optimize for the whole population, rather than being greedy for individuals and creating traffic problems.


If drivers discovered Google Maps was intentionally sending them down sub-optimal routes, they'd quickly switch to a different navigator. (Even if the Google way was better for the network overall).


Not if the sub-optimal routes happen only intermittently and the extent to which they’re sub-optimal (or whether they’re sub-optimal at all) is difficult to determine. If you take the same route every day you’ll notice when it changes, but who are you to say a one-time detour wasn’t the right decision given what you don’t know about traffic conditions?


Yandex, Google's Russian rival, does just this from time to time. I take a taxi from home to the office every day, and it sends the driver via longer routes when traffic is heavier.

I believe that greater mileage contributes only to the car's service frequency, and that's pennies when amortized over years of duty. The driver's and passenger's time is much more valuable.


I assume at some point Google will just start offering me a list of Pareto efficient activities that maximize the global action-value function, economic nirvana will be achieved for all of humanity, Sergey Brin will ascend beyond the physical plane, and then maybe, hopefully, they can stop serving me ads.


Doesn't that already happen as a corollary of greedy per-user optimization? Each marginal car on the best route slows down the traffic, ergo weighting that route slightly lower vs. alternatives. Isn't GMaps just filling up the best routes first?


> Clearly Google Maps has the ability to turn into a feedback loop.

Google and Yandex average traffic by hour; they don't factor in their own users. That would be double counting, and it would assume they are the only service.


There's a likely future for anyone boarding self-driving vehicles: higher quality (of car, traffic conditions, wait, etc.) depending on price. And public transit with dedicated/prioritized lanes (by law).


What I've always wondered is to what extent A/B testing is conducted after any such algorithm is implemented.

For example, does Google Maps send some users deliberately down a route that it thinks is suboptimal so that it can better learn traffic patterns over a wider range of roads? My instincts as a data scientist tell me this would be a great way to gather more data and to create a better system as a whole, but at the expense of some users having longer drive times for some routes.

Putting my tin foil hat on, I've long suspected that Waze is used as the experimentation platform for Google Maps in this way. Where I live, Waze presents some highly unusual routes that I know are not optimal, having lived here forever, whereas Google Maps is more on point.
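
If they did run something like that, the simplest version would be a bandit-style epsilon-greedy split between the believed-best route and an alternative -- entirely hypothetical, nothing in the post suggests Google actually does this:

  import random

  def pick_route(candidates, epsilon=0.01):
      """candidates: list of (route_name, estimated_minutes)."""
      if random.random() < epsilon:                  # explore: rare deliberate detour
          return random.choice(candidates)[0]
      return min(candidates, key=lambda c: c[1])[0]  # exploit: believed-best route

  routes = [("highway", 32.0), ("back road", 36.5)]
  picks = [pick_route(routes) for _ in range(10000)]
  print(picks.count("back road"))   # roughly epsilon/2 * 10000, i.e. ~50 detours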


Is it just me, or is it a little disingenuous to write the claim "...improve the accuracy of real time ETAs by up to 50% in places like Berlin, Jakarta, São Paulo, Sydney, Tokyo, and Washington D.C."

when the actual numbers listed for those cities are:

  Berlin - 21%
  Jakarta - 22%
  São Paulo - 23%
  Sydney - 43%
  Tokyo - not listed
  Washington D.C. - 29%


It gets even more muddled when you consider they mention the following: "While Google Maps’ predictive ETAs have been consistently accurate for over 97% of trips, we worked with the team to minimise the remaining inaccuracies even further - sometimes by more than 50% in cities like Taichung."

Are they essentially saying that they lowered 3% inaccuracy to ~1.5% in Taichung? (And never mind the fact that 51% is described as "more than 50%"...)

Of course this type of work is fascinating. Getting from 97 to 98.5% accuracy is far far more difficult than getting from 95.5 to 97%. But I don't enjoy the fudging of the perception of results.


It could mean that 97% of the trips had a predictive error of, say, 2 minutes, while the remaining 3% had an error of, say, 10 minutes, and they reduced that error to 5 minutes.


I'm not sure this is advancing in the right direction for the users.

Optimizing traffic is probably doing more harm than good. It's a kind of premature optimization. I see it as a way of masking potential underlying problems like not enough investment in infrastructure, or too many cars for the available capacity.

Squeezing performance out of existing infrastructure is probably harming the longer game of building a good transport system for users. And that's a game that should be played by the owner of the infrastructure, not some third party which just exploits the externalities by sending heavy road traffic through calm neighborhoods every time there is a traffic jam, often increasing the risk of accidents and gridlocking everything, because those roads were not designed for those spikes.

The worst part is that, more often than not, those routing apps are not even saving the user any time. But they make the user happy because he believes he made the right choice by following the app's directions.

Often it's a net negative for everybody.


Related question: is there actual open source data available for traffic or other kinds of urban movement for people to toy around with?


Houston has some.

I’m on my iPhone so I can’t browse the data, but XML and JSON can be found here.

https://traffic.houstontranstar.org/datafeed/datafeed_info.a...


Uber has some data, I don't recall if it's traffic or some proxy of it like travel time.


I think that, in addition to predicting traffic jams for individuals, this could be used to adaptively control traffic lights and open/close lanes to reduce congestion for the whole system.


Just have to point out that there is absolutely a difference between user-optimal and system-optimal, even outside of Braess's paradox situations. If you've never noticed it, it's because traffic engineers worked hard to mitigate it.


They can't even time the traffic lights correctly around here.



