
DeLorean: Using Speculation to Enable Low-Latency Interaction for Cloud Gaming - azefiel
http://research.microsoft.com/apps/pubs/default.aspx?id=226843
======
Schwolop
For those who want to try it out, some binaries and videos are found here:
[https://onedrive.live.com/?cid=611c3830c430d4e3&id=611C3830C...](https://onedrive.live.com/?cid=611c3830c430d4e3&id=611C3830C430D4E3%21107&ithint=folder,.txt&authkey=!AHF5kvrCruH0BrI)

The user-perception results are very impressive - round-trip times of 250ms
were not noticeably different from the control fat-client system.

~~~
antimagic
Not just user perception - they did have some objective measures as well, such
as the amount of health lost whilst completing a task, and the amount of time
taken to complete a task. Even these objective measures came back as showing
no significant difference between DeLorean and the fat client. _That's_
impressive.

~~~
darkmighty
It _is_ very impressive. If you think about it, though, you realize there's no
limit to how good this scheme can be (for singleplayer) -- the limitation is
essentially the bandwidth. Now the impressive part is that they can adapt to
bandwidth conditions and do it all while keeping the premise of required
computations < mobile capacity << actual game.

------
orasis
Silly thought: What if the universe actually works like this? Instead of
actually calculating the full interaction of all particles, it just optimizes
by speculating.

~~~
rjaco31
The universe is very strongly distributed; it doesn't have such problems.

~~~
silentOpen
Entanglement.

------
drcode
One of the current annoyances of the oculus rift is that it requires a cable
that restricts freedom of movement... I'd bet dollars to donuts a big part of
the Oculus development effort right now is going specifically towards these
kinds of "prediction" algos so that a more low-powered PC with a battery can
reside in the helmet, which can communicate with a custom low-latency wireless
network to a "base station" that holds most of the GPU horsepower.

I think a lot of software innovation is still possible that will allow a VR
headset to mask maybe a 50ms delay (smaller than in OP) with advanced algos,
allowing you to get photorealistic, real-time graphics through low-power
hardware that fits in the headset.

~~~
jzelinskie
John Carmack has mentioned a few times that they are working with Android. I
think they've given up on wireless networks and are simply waiting for the
hardware to shrink to the point where you can embed the computer in the Rift.

~~~
infogulch
The new Rift literally uses the complete front panel from a Galaxy Note III,
complete with (unused) touch sensors and connectors.

[https://www.ifixit.com/Teardown/Oculus+Rift+Development+Kit+...](https://www.ifixit.com/Teardown/Oculus+Rift+Development+Kit+2+Teardown/27613)

~~~
jzelinskie
I was specifically talking about GPU/SoCs on the Rift rather than connecting
it to a PC.

------
Rapzid
Edited: Nm

Having read the article, I'm still very sceptical of the feel for games that
receive a high frequency of user input. In my experience, it's precisely when
a sudden, radical change in input is made that lag is most noticeable. I'm
also curious about the extra resource requirements when it comes to physics,
on which many modern games rely heavily. That is, after actually modifying the
physics system of a game engine you'd actually want to use so that it can
support such a technique...

All of the methods and applications mentioned are really cool, but I'm still
EXTREMELY sceptical of practical "cloud" gaming.

~~~
mr_luc
I was just going to mention the physics engine aspect of it.

I saw this in miniature a couple of years ago, when I was experimenting with
real-time gaming using web technologies (I know, I know, TCP). One blocker for
low-hanging-fruit projects is that a good experience requires client-side
prediction, and it's very hard to get anything like that while also using a
pre-existing physics engine.

The real problem is: I'm a wuss.

Client-side prediction seems like magic to me, and has hurt my head every time
I've programmed it, and it goes right back to seeming like magic when I've
written it, and I never feel entirely certain that I'm on solid ground.

I always come away with the impression that networked physics is hardcore, and
I don't have enough chest hair to deal with it. :)

------
33a
I wrote about some related ideas on my blog earlier this year.

[http://0fps.net/2014/02/26/replication-in-networked-games-
sp...](http://0fps.net/2014/02/26/replication-in-networked-games-spacetime-
consistency-part-3/)

You can do quite a bit of latency hiding without even having to speculatively
predict things by just shifting the rendered state of the world in time.

~~~
rincebrain
You can't shift the rendered state of an interactive experience without
prediction or rendering parallel pathways -- computational bandwidth makes
exploring the narrowed state space more efficient than waiting out the RTT for
interactivity...

------
readerrrr
One cheap way of reducing lag from fast mouse movements would be to render a
much wider view, send the entire image to the client, and have the client
display the appropriate segment based on the latest mouse position.

At downvoter: Why would you downvote a suggestion that is on topic?
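For the 2D case, the suggestion could be sketched like this (hypothetical names; as the replies note, this planar crop doesn't carry over to 3D projection):

```python
def crop_for_latest_view(frame, frame_fov, view_fov, yaw_offset):
    # frame: rows of pixels rendered with the wider frame_fov (degrees).
    # yaw_offset: degrees the view has turned since the frame was requested.
    w = len(frame[0])
    view_w = int(w * view_fov / frame_fov)             # width of visible slice
    center = w // 2 + int(w * yaw_offset / frame_fov)  # shift by the late input
    left = max(0, min(w - view_w, center - view_w // 2))
    return [row[left:left + view_w] for row in frame]
```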

~~~
SammoJ
This would work for 2D games, but 3D projection doesn't allow it.
Furthermore, in a 3D game there are more than two degrees of freedom.

~~~
leni536
You can render the images on the sides of a cube with a 90° fov each, put your
viewpoint in the center, and that's it. I once programmed something like this,
and it works pretty well. I suppose Google Street View works in a similar way,
but I can't back that up.
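A sketch of the lookup step, assuming one (arbitrary) face/UV convention: the axis with the largest magnitude selects one of the six 90°-fov faces, and the other two coordinates project onto it.

```python
def cubemap_lookup(x, y, z):
    # Pick the cube face a view direction (x, y, z) hits, and where on
    # that face it lands as (u, v) in [0, 1]. The dominant axis decides
    # the face; dividing the other two coordinates by it gives the
    # position on the face (a 90-degree FOV projection).
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face = '+x' if x > 0 else '-x'
        u, v = y / ax, z / ax
    elif ay >= ax and ay >= az:
        face = '+y' if y > 0 else '-y'
        u, v = x / ay, z / ay
    else:
        face = '+z' if z > 0 else '-z'
        u, v = x / az, y / az
    return face, (u + 1) / 2, (v + 1) / 2
```

A real cubemap convention (OpenGL's, say) also fixes handedness and face mirroring, which this toy version ignores.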

------
trurl42
I'm really confused by the Android bashing in the introduction of the article.

> platform compatibility headaches and per-platform performance tuning [35,
> 30, 24].

That seems a bit biased, doesn't it?

~~~
wtracy
Did you miss that this article was published by Microsoft Research?

------
dmead
the quake engine, and more so the mod called "unlagged", had a version of this.

they would try to infer what an object's motion would be a few frames ahead of
what's coming in from the network. the clients (iirc) would only correct
what's in the scene graph if the server disagreed. unlagged was a super
aggressive version of this that let other clients be the deciding factor. it
was especially useful (or frustrating) for instagib over bad connections.
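That extrapolate-then-correct loop can be sketched roughly as follows (a hypothetical 1D version; the tolerance is the knob deciding how aggressively the authority may override the local guess):

```python
def extrapolate(pos, vel, dt):
    # Dead-reckon an entity dt seconds past its last known server state.
    return pos + vel * dt

def reconcile(predicted, authoritative, tolerance=0.5):
    # Only correct the scene graph when the authority meaningfully
    # disagrees, so tiny prediction errors don't show up as jitter.
    if abs(predicted - authoritative) > tolerance:
        return authoritative   # server (or, in unlagged, another client) wins
    return predicted           # keep the smooth local guess
```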

------
easytiger
Err, how can they call it that?

DeLorean is still trademarked; I know it's a different industry, but given the
software's time-related premise and the brand's association with time travel,
I imagine no lawyer would have approved that.

[http://delorean.com/](http://delorean.com/)

Also, I'm pretty sure a lot of games such as FPSs have already done this with
direction of motion etc. for many, many years. John Carmack implemented it a
long time ago, amongst similar techniques such as latency prediction. The only
difference here is the location of the rendering.

[http://gafferongames.com/networking-for-game-
programmers/wha...](http://gafferongames.com/networking-for-game-
programmers/what-every-programmer-needs-to-know-about-game-networking/)

> _am now allowing the client to guess at the results of the users movement
> until the authoritative response from the server comes through. This is a
> biiiig architectural change. The client now needs to know about solidity of
> objects, friction, gravity, etc. I am sad to see the elegant client-as-
> terminal setup go away, but I am practical above idealistic._

 _edit_ , instead of downvoting please engage in a discussion. This place is
turning into reddit.

~~~
sp332
Trademarks are made within certain categories.
[http://www.legalzoom.com/trademarks-glossary/trademark-
class...](http://www.legalzoom.com/trademarks-glossary/trademark-class-
classification.html) That's why Apple Computer didn't conflict with Apple
Corps record company until they started selling music.

Anyway, I'm not sure the word "DeLorean" was ever trademarked. By 2008 most
related trademarks had been abandoned
[http://www.deloreanmotorcar.com/dmc/tmarks.htm](http://www.deloreanmotorcar.com/dmc/tmarks.htm)
and I can't find anything in a quick USPTO search now.

------
beachstartup
interesting that they should use an id game, doom3, because quakeworld
implemented something similar:

from
[http://en.wikipedia.org/wiki/QuakeWorld](http://en.wikipedia.org/wiki/QuakeWorld)
:

QuakeWorld's most important feature is its rewritten networking code (for
client-side player prediction and delta compression). Player prediction
allowed QuakeWorld clients to compensate for high latency, thus allowing
dialup users to move around in the virtual world without being affected by the
disorienting effects of latency. This helped Quake network play reach an
additional market, as opposed to those who had LAN or broadband connections at
the time.

~~~
nardi
This is completely different. Every networked multiplayer FPS since QuakeWorld
uses QuakeWorld's player-input client-side prediction algorithm. This
speculative prediction is _predicting player input before the player gives
it_, not _predicting the server's response to given input_.

~~~
darkmighty
Well it's essentially the same algorithm on a different side. FPS _clients_
predict the server state and here it's the _server_ predicting client state.
The methods to predict/deal with prediction error/etc are directly applicable
to both.

------
explorigin
This is not really new stuff (but good to see nevertheless). Search youtube
for hitbox lag compensation (particularly in "Source" games like tf2).

~~~
webbastard2
Read the paper. It's not what you think it is.

------
snarfy
There are some games this will simply not work for, like fighting games, where
even the delay caused by the LCD rendering each frame is noticeable. The ADC
in the LCD is either too slow or has too much processing happening. Many
hardcore fighting game fans prefer CRTs due to this.

There is no cure for latency. The network is physical. Instead of trying to
abstract it away in a library and assume lag doesn't exist, it should be
designed for in the application. The last time Microsoft abstracted the
network away, we ended up with asp.net WebForms.

~~~
moron4hire
Assuming infinite bandwidth and rendering speed:

It sounds like, for a connection of latency L, at time T-2L the server sends
the client multiple fully rendered frames, each representing the render state
at time T for every possible user input that could occur between T-2L and T.
The client receives these frames at time T-L, while sending a user input state
at T-L which the server will use for future packs of images. At time T, the
client chooses the frame to display based on the actual input the user
performed at time T, discarding the rest, and the server receives the user's
input from time T-L, using it to render and send the next pack of images for
time T+2L.

So while it is strictly true that latency is impossible to avoid for the first
two rendered frames, that's only a small fraction of a second and only at the
very beginning. After that it's eliminated. Now obviously, rendering speed and
bandwidth change the equation and are probably why there is a limit to the
latency it can correct for. But this does appear to be a cure for _interframe_
latency.

It's lazy tree culling of every render state at every point in time for every
conceivable user input state at that time. It's certainly a big tree, but it
seems they are making the statement that it's small enough to fit in modern
bandwidth capacities.

Of course, you wouldn't be able to use this in your fighting game example to
counteract the relative slow speed of the LCD, but that is because there would
be no way to cull the set of images at the LCD itself.
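The tree described above can be sketched as follows (a toy version with a hypothetical discrete input alphabet; the paper's input space and rendering are of course far richer):

```python
import itertools

INPUTS = ('left', 'right', 'none')   # toy per-tick input alphabet

def speculate(render, state, depth):
    # Server side: render one frame per possible input sequence covering
    # the next `depth` ticks (the RTT window), and ship them all.
    return {seq: render(state, seq)
            for seq in itertools.product(INPUTS, repeat=depth)}

def choose(frames, actual_inputs):
    # Client side: once the real inputs are known, display the matching
    # speculated frame immediately and discard the rest -- no waiting
    # on the network.
    return frames[tuple(actual_inputs)]
```

With b possible inputs per tick and d ticks of latency to cover, that's b**d frames per pack, which is why culling the tree to fit the available bandwidth is the hard part.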

~~~
snarfy
It's very similar to branch prediction in modern cpu architectures. Branches
are expensive, so both sides are executed. It is a good idea, and I'm wrong
for my remark equating any of this to webforms.

I believe we have a lot we could do to make latency lower, and effort should
be focused there. Branch prediction would not be necessary if branching were
cheap. This latency-prediction system would not be necessary either if latency
were low, but it's not. Our hardware sucks. All of the layers of software suck.
OculusVR is a testament to how much room for improvement there is in regards
to latency.

------
rasz_pl
or you could, you know, render CLIENT SIDE? and OWN the game instead of
borrowing it from the cloud?

