
Dolphin Progress May and June 2020 - todsacerdoti
https://dolphin-emu.org/blog/2020/07/05/dolphin-progress-report-may-and-june-2020/
======
alvarosevilla95
Slightly related, a couple of weeks ago a guy named Fizzi released a Dolphin
build with rollback netcode support for Super Smash Bros. Melee, giving it
better netcode than most current big-name fighters.

The way he implemented it is insane: he had to reverse-engineer a way to
generate arbitrary game states in-game, since using Dolphin savestates wasn't
performant enough.

He's also added other amazing features, like in-game matchmaking, or my
favorite: being able to stream a game state through the Wii's Ethernet port, so
two players can be competing on original hardware while a stream setup
upscales that footage in real time to 16:9 4K through Dolphin, and also
generates real-time match stats, saves replays... Incredible stuff.

Here's the project page: [https://slippi.gg/](https://slippi.gg/) and a video
from a top melee player showcasing the matchmaking:
[https://www.youtube.com/watch?v=erbZV8u6-hA](https://www.youtube.com/watch?v=erbZV8u6-hA)

~~~
dahfizz
> Fizzi released a dolphin build with rollback netcode ... making it a game
> with better netcode than most current big name fighters.

I see this repeated often without justification. Why is rollback better?

Based on my basic understanding, rollback netcode would absolutely fall apart
under any suboptimal network conditions. If there's too much lag or packet
loss, your character and your opponent's will start teleporting all over the
screen, missing lots of inputs, etc.

It makes sense that rollback netcode would be preferable when paired with a
very good, reliable network (e.g. so that if any rollback is needed, it's only
a frame or two). But if you're selling a casual game meant for anyone to be
able to play, delay-based netcode makes more sense to me. An average Joe can
understand "the game is buffering", but if the game starts rolling back too
much and you can't control your character because he's teleporting all around,
that just looks like a broken game.
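For contrast, here's a toy sketch (hypothetical, not any shipping game's code) of how pure delay-based netcode behaves: a local input pressed on frame F only applies on frame F + DELAY, which gives the network that many frames to deliver it, and the game visibly stalls whenever a remote input is late.

```python
DELAY = 3  # frames of input delay; must roughly cover the round-trip latency

class DelayNetcode:
    def __init__(self):
        self.local_inputs = {}   # frame -> input
        self.remote_inputs = {}  # frame -> input (filled in by the network)
        self.frame = 0

    def press(self, button):
        # A local input takes effect DELAY frames in the future.
        self.local_inputs[self.frame + DELAY] = button

    def receive(self, frame, button):
        self.remote_inputs[frame] = button

    def tick(self):
        # The simulation can only advance once the remote input for this
        # frame has arrived; otherwise it stalls ("the game is buffering").
        if self.frame >= DELAY and self.frame not in self.remote_inputs:
            return None  # stall
        local = self.local_inputs.get(self.frame)
        remote = self.remote_inputs.get(self.frame)
        self.frame += 1
        return (local, remote)
```

The upside is that the simulation never disagrees between players; the downside is that every latency spike becomes a visible freeze.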

An interesting follow up question: if rollback is objectively the better
approach, why is it implemented less often? Is it significantly more difficult
to develop and does that offset how much value it would add to the game?

~~~
dmonitor
Rollback is not a replacement for delay-based netcode, but an enhancement to
it. Slippi, for example, still delays inputs by 2 frames to minimize the
inconsistencies that appear. This can (in theory; I'm not sure if it's
implemented in Slippi) be increased to any number of frames to reap the
benefits that delay-based netcode has. Using only delay is objectively
inferior because any flaw that rollback has is also shared by delay. You are
welcome to try out the old Melee netplay and compare it to the new Slippi to
experience this for yourself.
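The core rollback idea can be sketched in a few lines. This is a toy model (my reading of how rollback generally works, not Slippi's actual implementation): instead of stalling on a missing remote input, the game predicts it (here, by repeating the last known input) and keeps simulating; when the real input arrives and disagrees with the prediction, it restores a saved state and re-simulates the intervening frames.

```python
class RollbackNetcode:
    def __init__(self):
        self.state = 0          # stand-in for the full game state
        self.frame = 0
        self.saved = {}         # frame -> state snapshot
        self.remote = {}        # frame -> confirmed remote input
        self.predicted = {}     # frame -> input we guessed
        self.last_known = 0

    def step(self, state, remote_input):
        return state + remote_input  # stand-in for one frame of simulation

    def tick(self):
        self.saved[self.frame] = self.state
        inp = self.remote.get(self.frame)
        if inp is None:
            inp = self.last_known           # predict: repeat the last input
            self.predicted[self.frame] = inp
        else:
            self.last_known = inp
        self.state = self.step(self.state, inp)
        self.frame += 1

    def confirm(self, frame, inp):
        self.remote[frame] = inp
        if self.predicted.get(frame, inp) != inp:
            # Misprediction: restore the snapshot and re-simulate up to now.
            self.state = self.saved[frame]
            current = self.frame
            self.frame = frame
            while self.frame < current:
                self.tick()
```

Note the game never stalls; the cost is that a misprediction produces the "teleporting" correction the parent comment describes, and the engine must be able to save and restore full game states cheaply, which is exactly the part Fizzi had to reverse-engineer.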

The reason rollback is rarely implemented is that it is hard to motivate a
company to do it when the benefits are so intangible. "Better netcode" doesn't
really sell games; flashy graphics do. And since most of the FGC has existed
offline, online is just an afterthought.

This video explains it fairly well:
[https://youtu.be/1JHetORRpfQ](https://youtu.be/1JHetORRpfQ)

~~~
Firehawke
That and most developers are in Japan, where rollback doesn't show any real
benefit because the infrastructure there is small and fast enough to make even
the worst netcode viable.

------
yjftsjthsd-h
Am I the only one constantly blown away by how _deeply_ they can hook into
games? Not just "we perfectly emulate the original hardware and produce
pixel-perfect screens", but "we read the 3D models and can increase resolution
and widen the view" - that's crazy! It's totally different from how I usually
imagine "emulation".

~~~
Jasper_
That's partly because we got tremendously lucky in how the hardware sends
data. Unlike some newer GPUs, the hardware has dedicated memory for the
Projection and Model-View Matrices, and the vertex pipeline is fixed-function.

Dolphin isn't reading 3D models, they're just tweaking the matrices that the
game sends. Increased resolution works sort of the same way -- the game sends
the hardware triangles in an abstract model space, which is converted to clip
space (where the left of the screen is -1.0 and the right is 1.0). Increasing
the resolution just means adjusting the rest of the transform; everything the
game sends is in abstract space.

(Yes, I know there's a bit more to it than that, e.g. virtual XFBs, LOD
biases, and so forth)
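A simplified illustration of the matrix-tweaking point (not Dolphin's actual code): because the game hands the hardware a projection matrix and vertices in abstract space, the emulator can rescale that matrix before use. A 4:3-to-16:9 widescreen hack just shrinks the X scale so a wider slice of the scene maps into the [-1, 1] clip-space range.

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """A standard OpenGL-style perspective projection matrix (row-major 4x4)."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def widescreen_hack(proj, old_aspect=4/3, new_aspect=16/9):
    """Rescale the X term of a projection the game built for old_aspect."""
    proj = [row[:] for row in proj]
    proj[0][0] *= old_aspect / new_aspect   # widen the horizontal view
    return proj

game_proj = perspective(60.0, 4/3, 0.1, 100.0)  # what the game submits
wide_proj = widescreen_hack(game_proj)          # what the emulator uses instead
```

The game never knows anything changed; the vertical field of view and depth terms are untouched, only the X scale differs.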

Basically, unlike the PS2, the GameCube is pretty much an ancestor of modern
PC GPUs, so its way of doing graphics is very familiar to us, and maps quite
well. And this is not an accident! ATI bought the company that made the
GameCube's GPU (ArtX), and put out a GPU so good (the r300) that NVIDIA nearly
went out of business.

Also, fun, I ran into the same "GLES doesn't support glReadPixels for depth"
when working on the lens flare code for Wind Waker and Super Mario Galaxy in
[https://noclip.website](https://noclip.website). Had to do some dirty tricks
to convert it to a color format first.
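The usual shape of that workaround (sketched here in Python; in practice the packing happens in a fragment shader) is to render depth into a color target by splitting the [0, 1) depth value across the 8-bit RGBA channels, read the color back with glReadPixels, and unpack:

```python
def pack_depth_rgba(depth):
    """Split a depth value in [0, 1) into four 8-bit color channels."""
    value = int(depth * (256 ** 4))
    return [(value >> (8 * i)) & 0xFF for i in (3, 2, 1, 0)]

def unpack_depth_rgba(rgba):
    """Reassemble the depth from the four channels read back as color."""
    r, g, b, a = rgba
    value = (r << 24) | (g << 16) | (b << 8) | a
    return value / (256 ** 4)

rgba = pack_depth_rgba(0.73)       # what the "depth as color" pass writes
depth = unpack_depth_rgba(rgba)    # recovered after reading the color back
```

The round trip loses a little precision to quantization, which is one of the "dirty" parts of the trick.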

~~~
johnchristopher
What happened after that? What turned the tide in favour of nVidia? I was
there (as a player) but I don't have clear memories.

~~~
Jasper_
My understanding is that ATI's management bungled the software aspect, fell
super behind on driver development, and gave NVIDIA way too much time to catch
up, but you'll probably have to talk to someone that was there at ATI to
figure out what really happened.

~~~
tunap
Nvidia acquiring 3DFX Interactive certainly helped. IIRC, 3dfx was spanking
both Nvidia and ATI (whose offerings were superior to Nvidia's) in the PC
graphics market.

~~~
msbarnett
Not really. NVidia bought 3DFX for the Voodoo/Rampage T&L IP in 2000, around
the same time that ATI bought ArtX.

The R300/Radeon 9700 (based on the ArtX IP) came out late summer of 2002, and
several months later NVidia launched the dismal GeForce FX series (which is
the project that most of the 3DFX “rampage” team went to work on after
acquisition). The fundamental problem with NVIDIA’s design was that it was
optimized for 16-bit precision pixel shaders, and then-new DirectX 9’s “pixel
shader 2.0” model mandated a 24-bit minimum precision. The R300 ran FP24 math
natively, and consequently ran circles around the FX series in DX9
performance.

NVidia didn’t really catch up to and surpass ATI until much, much later.

~~~
tunap
My timeline was off; in the late '90s the 3DFX Voodoo (3?) was eating both
their lunches.

I agree ATI still dominated Nvidia in sales well into the '00s; Nvidia took a
long time to offer competitive cards. The solder issues & Cap Plague didn't
help them any, either.

------
ChrisMarshallNY
This is a cool project.

It reminds me a bit of this multi-year, labor-of-love project:
[http://www.starryexpanse.com](http://www.starryexpanse.com)

------
Firehawke
I'm curious whether they've tried using CHD. It's worked pretty well for
MAMEdev, and even the RetroArch core versions of a few emulators support it.

------
reedwolf
Thought this was going to be about the KDE file explorer, which is an
excellent piece of software, btw.

------
solnyshok
Re-encoding my GCZs into RVZs right now. Looks like I am going to slash about
10 GB off my 65 GB library.

~~~
jepler
Did you find more info on RVZ? The progress report makes it sound like RVZ
compresses random data, but that's plainly not true. I'd love to understand
what "garbage data" actually is. It must have a lot of structure, rather than
being truly random.

~~~
bonzini
It's data not pointed to by the filesystem. Most of the time it's zeroes, but
it could be meaningful data for copy protection or sometimes even source code
was found (this last thing IIRC, I might be confusing two different things).
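The basic idea can be illustrated with a toy sketch (this is the general concept, not the actual RVZ format): walk the filesystem's file extents, mark which disc sectors they cover, and treat every uncovered sector as "junk" that the container can store compactly, since it's usually zeroes or reproducible padding.

```python
SECTOR = 2048  # hypothetical sector size for this toy example

def junk_sectors(disc_size, extents):
    """extents: list of (offset, length) byte ranges referenced by files."""
    used = set()
    for offset, length in extents:
        first = offset // SECTOR
        last = (offset + length - 1) // SECTOR
        used.update(range(first, last + 1))
    total = disc_size // SECTOR
    # Everything the filesystem never points at is candidate junk.
    return [s for s in range(total) if s not in used]

# A tiny 5-sector "disc" with one file covering sectors 1-2:
junk = junk_sectors(5 * SECTOR, [(SECTOR, 2 * SECTOR)])
```

A real tool would additionally check whether those sectors match a known padding pattern before discarding them, since (as above) they can occasionally hold meaningful data.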

