
Network Transparency with Wayland - CameronNemo
https://mstoeckl.com/notes/gsoc/blog.html
======
ChuckMcM
I have two words for that page: "holy sh*t". That is the most excellent
performance visualization stuff I've seen in a long time [1]. Nice that it is a
GSoC project this year; can't wait to see the final output.

[1] Seriously, the last time I saw something that good and detailed was an
investigation into cache line miss performance back in 2002 on the Intel P4 at
NetApp. That investigation figured out that the transaction rate in Intel's
memory controller resulted in an 18+% reduction in throughput and up to 50%
increase in latency for file system operations.

~~~
closeparen
FWIW this kind of visualization is the output of "go tool trace" and it's
awesome, particularly for chasing down tail latencies, as the eye can quickly
find irregularities in the pattern and see exactly what is taking the extra
time.

~~~
shereadsthenews
It's also pretty similar to `perf timechart`

------
sprash
On X11 you can design far more efficient remote apps via GLX serialization.
You could have a headless server with no graphics card render 3D applications
on your local machine, which might have really beefy graphics hardware.
Something that will never be possible on Wayland because of its truly flawed
protocol design.

This solution to "network transparency" is nothing more than pushing whole
screen updates directly over the wire. So why not use an established protocol
like VNC?

~~~
Jasper_
Remote GLX is a giant hack, and even NVIDIA gave up on it ten years ago.
Modern GPU programming is all about data management and scheduling and
bandwidth, and when you add serialization and latency into the mix,
performance tanks.

For instance, sub-buffer updates have certain constraints that make them very
fast in the local case, but they would require a lot of data to be serialized
over the wire every frame, and networks do not have the bandwidth for that.

"network transparency" is an anti-goal in protocol design for the same reason
"rpc that acts like a function call" is inherently flawed - the network adds a
lot of complexity and different design constraints.

~~~
sprash
I can only speak from experience. I used specialized CAD programs for
simulations in the past and GLX serialization worked really well. As soon as
all textures, shaders and models are uploaded the only thing that goes over
the wire are camera position updates and small updates to the display list.

Games that try to squeeze every ounce out of the hardware with tricks for
extra FPS are not suitable for serialization. I agree with that.
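
For a rough idea of why the steady-state traffic is so small, here is a
minimal fixed-function sketch (assuming GLUT and classic display lists; the
geometry and camera values are made up for illustration): the model is
compiled once into a display list that, under indirect GLX, lives on the
rendering side, and each frame only sends a camera update plus one
glCallList.

    /* Minimal sketch: one-time display list upload, then tiny per-frame traffic. */
    #include <GL/glut.h>

    static GLuint model_list;

    static void build_model(void)
    {
        /* One-time upload: geometry is compiled into a display list. */
        model_list = glGenLists(1);
        glNewList(model_list, GL_COMPILE);
        glBegin(GL_TRIANGLES);
        glVertex3f(-1.f, -1.f, 0.f);
        glVertex3f( 1.f, -1.f, 0.f);
        glVertex3f( 0.f,  1.f, 0.f);
        glEnd();
        glEndList();
    }

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        /* Per frame, only this camera update and one glCallList go over the wire. */
        gluLookAt(3.0, 3.0, 3.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
        glCallList(model_list);
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutCreateWindow("display list sketch");
        build_model();
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }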

~~~
saltcured
It's not just serialization costs, but rather a change in presumed trust
boundaries. The graphics hardware abstraction is becoming less like storage or
communication, which can be easily virtualized and reasoned about to manage
risks with delegated access. Graphics is becoming more like the host processor
and memory, running arbitrary application code. GPU allocation is more like
process or job control and scheduling, with a cross-compilation step stuck in
the middle.

So, the very abstraction of "textures, models, display lists, and draw
commands" is no longer what is being managed by the graphics stack. That is
just one legacy abstraction which could be emulated by an application or
gateway service. As people have stated elsewhere, one can continue to operate
something like X Windows to keep that legacy protocol. Or, one can run a web
browser to offer HTML+js+WebGL as another legacy and low-trust interface.

But, one cannot expect all application developers to limit themselves to these
primitive, legacy APIs. They want and need the direct bypass that lets them
put the modern GPU to good use. They are going to invest in different
application frameworks and programming models that help them in this work. I
hope that the core OS abstractions for sharing this hardware can be made
robust enough to host a mixture of such frameworks as well as enabling multi-
user sharing and virtualization of GPU hardware in server environments.

To provide transparently remote applications in this coming world, I think you
have to accept that the whole application will have to run somewhere that
colocates the host and GPU device resources, if the original developer has
focused on that local rendering model. Transparency needs to be added at the
input/output layer where you can put the application's virtual window or
virtual full-screen video output through a pipe to a different screen that the
application doesn't really know or care about.

~~~
sprash
At some point in the future GPUs have to decide if they are computing devices
or graphics devices. Right now they are trying to be both.

If you purposely design graphics devices you can make many simplifications and
optimizations because you can abstract all tasks as drawing primitives. That
will make serialization very easy.

~~~
saltcured
I think they have already decided, and they are computing devices, with
various graphics techniques captured as userspace code. It is a bit of a
fiction that graphics consists of just "drawing primitives" like triangles
anymore. Those simplistic applications are supported by compatibility
libraries to abstract the real computational system.

The core of the GPU is really computational data transforms on arrays of data.
But there is a whole spectrum to these computational methods rather than just
a few discrete modes. This is where application-specific code is now supplied
to define the small bits of work as well as to redefine the entire pipeline,
e.g. of a multi-pass renderer. The differences between "transforms and
lighting", "texturing and shading", "z-buffering and blending", or even "ray-
casting and ray-tracing" are really more in the intent of the application
programmer than in actual hardware. The core hardware features are really to
support different data types/precisions, SIMD vs MIMD parallelism, fused
operations for common computational idioms, and memory systems that balance
the hardware for certain presumed workloads.

------
emersion
You can try the project here:
[https://gitlab.freedesktop.org/mstoeckl/waypipe/](https://gitlab.freedesktop.org/mstoeckl/waypipe/)

------
ncmncm
This serial number protocol seems deeply insecure.

The server should be sending a ticket, an encryption of the serial number, to
the client, and expecting that back. It should be salted by the client id.
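
To make the idea concrete, here is a hedged sketch of one way such a ticket
could be built (not what waypipe actually does, and using a keyed MAC rather
than literal encryption, which achieves the same unforgeability). The key
size, serial, and client id layout are all assumptions for illustration;
OpenSSL's HMAC-SHA256 does the work.

    /* Hypothetical ticket scheme: the server keeps a secret key, hands the
     * client HMAC(key, serial || client_id), and later verifies it. */
    #include <stdint.h>
    #include <string.h>
    #include <openssl/crypto.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    #define TICKET_LEN 32   /* SHA-256 output size */

    static void make_ticket(const uint8_t key[32], uint32_t serial,
                            uint32_t client_id, uint8_t ticket[TICKET_LEN])
    {
        uint8_t msg[8];
        memcpy(msg, &serial, 4);
        memcpy(msg + 4, &client_id, 4);
        unsigned int len = TICKET_LEN;
        HMAC(EVP_sha256(), key, 32, msg, sizeof msg, ticket, &len);
    }

    static int check_ticket(const uint8_t key[32], uint32_t serial,
                            uint32_t client_id, const uint8_t ticket[TICKET_LEN])
    {
        uint8_t expect[TICKET_LEN];
        make_ticket(key, serial, client_id, expect);
        /* Constant-time comparison to avoid a timing side channel. */
        return CRYPTO_memcmp(expect, ticket, TICKET_LEN) == 0;
    }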

------
stefan_
So what happens when you have DMA-BUFs that don't allow for CPU access?

~~~
emersion
This isn't supposed to happen. Worst case scenario, the DMA-BUF is copied
internally in the GPU from hidden memory to visible memory, and then back to
the CPU.

Being able to export a DMA-BUF is necessary anyway for multi-GPU setups.

EDIT: fixed a s/GPU/CPU/ typo
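
For reference, a minimal sketch of what CPU access to an exported DMA-BUF fd
looks like (assuming the exporter supports mmap at all; the fd and size come
from elsewhere). Access is bracketed with DMA_BUF_IOCTL_SYNC so caches and
pending device writes are handled; if mmap fails, a GPU-side copy into
CPU-visible memory is needed first, as described above.

    /* Sketch: read back an exported DMA-BUF on the CPU. */
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/dma-buf.h>

    int read_dmabuf(int fd, size_t size, void *dst)
    {
        void *map = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED)
            return -1; /* exporter does not allow direct CPU access */

        /* Bracket CPU access with sync ioctls so device writes are visible. */
        struct dma_buf_sync sync = { .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ };
        ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);

        memcpy(dst, map, size);

        sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ;
        ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);

        munmap(map, size);
        return 0;
    }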

------
netsec_burn
Site is dead from here.

------
bin0
Serious question: shouldn't Wayland be done over HTTPS? Or any desktop
protocol? At least some kind of encryption, I assume. Maybe it is, and I just
don't know?

~~~
jsd1982
SSH is mentioned numerous times as a transport protocol. SSH, in case you were
unaware, is implemented using TLS.

~~~
dragonwriter
> SSH, in case you were unaware, is implemented using TLS.

No, it's not; SSH’s transport layer component, as I understand it, provides
functionality loosely comparable to TLS, but SSH does not rely on, assume,
use, or incorporate TLS in any way.

