Wayland Architecture (freedesktop.org)
69 points by alanhaggai on March 11, 2011 | 32 comments



This was posted before:

http://news.ycombinator.com/item?id=1872536

I flamed a lot. NETWORK TRANSPARENCY IS IMPORTANT. The effort put into Wayland would be better spent getting NX standardized as the next-generation X protocol and developing additional X extensions.

David Täht has some good perspective on X11 network transparency:

http://the-edge.blogspot.com/2007/10/x11-is-dead-long-live-x...


Wayland can be network-transparent; this is so obvious that the developers didn't bother to explain it (oops). Remote clients will connect to a local proxy server that compresses the window updates (VNC-like) and sends them over the network to a proxy client that decompresses into the appropriate buffers. AFAIK none of this code has been written, but it's not impossible as some people fear.
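
To make that concrete, here is a minimal sketch of what such a proxy pair might exchange: a damage rectangle plus zlib-compressed pixels. The framing is entirely made up for illustration; as noted above, no real code like this exists for Wayland yet.

    import struct
    import zlib

    def send_damage(sock, x, y, w, h, pixels):
        # Compress one damaged region and ship it with a small fixed header.
        payload = zlib.compress(pixels)
        header = struct.pack("!IIIII", x, y, w, h, len(payload))
        sock.sendall(header + payload)

    def recv_damage(sock):
        # Read one damaged region back: 20-byte header, then compressed pixels.
        x, y, w, h, size = struct.unpack("!IIIII", read_exact(sock, 20))
        return x, y, w, h, zlib.decompress(read_exact(sock, size))

    def read_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed connection")
            buf += chunk
        return buf

The proxy server would call send_damage for each updated region of a client's buffer; the proxy client would recv_damage and blit into the local copy.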

Now it may happen that no one cares enough about network transparency to bother to write the code, but I don't see that as Wayland's problem.


Interesting quote by James Gosling (of NeWS fame, among others):

"I think that a more viable solution in the long run would be to replace the X protocol with a very simple pixel copying protocol that uses the user-level rendering libraries in the application to render to a local image buffer, then copies the pixels over the net in something that looks vaguely like a video stream.

There are a variety of compression hacks that make this surprisingly efficient – this is essentially what the SunRay product does. Some analysis has been done that shows that this uses essentially the same bandwidth as the X protocol, if done well."

From "Window System Design: If I had it to do over again in 2002."


Here's the paper that led to Sun Ray:

http://labs.oracle.com/features/tenyears/volcd/papers/nrthcu...

http://labs.oracle.com/features/tenyears/volcd/papers/Nrthcu...

Their argument is "A simple pixel encoding protocol requires only modest network resources (as little as a 1Mbps home connection) and is quite competitive with the X protocol."
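
For a rough sense of why 1Mbps can be enough, here is some back-of-the-envelope arithmetic. The damaged-area fraction, compression ratio, and update rate are my assumptions for illustration, not numbers from the paper:

    # All inputs below are illustrative assumptions.
    width, height, bytes_per_pixel = 1024, 768, 3   # ~2.4 MB per full frame
    damaged_fraction = 0.01                         # typing, cursor blinks, small repaints
    compression_ratio = 10                          # lossless compression of mostly-flat pixels
    updates_per_second = 10

    bytes_per_second = (width * height * bytes_per_pixel
                        * damaged_fraction / compression_ratio
                        * updates_per_second)
    print("%.2f Mbit/s" % (bytes_per_second * 8 / 1e6))   # ~0.19 Mbit/s

Under those assumptions, ordinary office use stays well inside a 1Mbps link; it's full-frame changes (video, scrolling) that blow the budget.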


This may work fine on a dedicated gigabit network, which is where SunRay lives, but it does not work at all on a slow, possibly high-latency, link. To see for yourself, try running any gtk or qt app over an ADSL link or even a local wireless network, and enjoy the slide-show. Apps using server-side font rendering, on the other hand, work nicely under these circumstances.


"A pixel copying protocol" would describe VNC and RDP as well, wouldn't it? By running any GTK / Qt app over ADSL you mean running them using the X11 protocol?


"Remote clients will connect to a local proxy server that compresses the window updates (VNC-like) and sends them over the network to a proxy client that decompresses into the appropriate buffers."

By that definition everything is network-transparent. Can this do caching of glyphs? Images? All the lessons learned from X and incorporated into NX are ignored. NX is actually usable across high-latency, low-bandwidth links. Streaming bitmaps aren't.


See http://en.wikipedia.org/wiki/Xpra as an existence proof. It's an implementation of exactly what you describe using X's existing compositing manager. It has some nice properties over using something like "xmove".


Why?

Yes, VNC sucks, but MS Windows isn't inherently network-transparent, yet Remote Desktop on Windows provides an absolutely awesome end-user experience. It's fast, even over relatively slow connections; it's easy; and it includes sharing of clipboard, sound, drives, and peripherals. And you can easily disconnect and reconnect sessions.

Everything about the user experience with Windows Remote Desktop is better than any X server implementation I've ever seen.

What would be lost by implementing something similar to Windows Remote Desktop on top of Wayland?


Compare RD with http://www.nomachine.com/ (NX) on a 128kbps link on another continent, and then you have your answer.


All my comparisons have been "Control Panel -> System -> Remote Connections tab -> Enable remote desktop". The client is built into Windows.

versus

"Nomachine NX is a commercial product starting at $700+ but there's a limited 2 user free version, and also an admin console and a web package. Here's a 14 page PDF with install instructions and a link to the client install guide download location, along with the library version prerequisites..."

Needless to say, I use RDP, push-install VNC, and use SSH constantly, and I have never installed NX, seen a server with it installed, or used it. I have my answer.


NX is Free Software (previous releases were under the GPL). It's widely used at Google (they implemented their own NX server: http://code.google.com/p/neatx/).

Why are you wasting time posting on HN about how you're too lazy to evaluate technology solutions properly?


Why do it on wasteland^W Wayland and not on X?


I'm assuming this answer from the Wayland FAQ is what you're looking for? http://wayland.freedesktop.org/faq.html#heading_toc_j_6


I'm surprised how vehement X defenders are about network transparency.

Like most people, all my 'desktop' apps are local - am I not allowed to have a protocol that sacrifices network transparency for local performance?

(like most people, all my remote apps are web apps)


Read the links I posted. X network transparency has no perceivable effect on local performance. What the Wayland people don't like is the server/window manager/applications split, and the current developments in the driver architecture of X.org. Those things are unrelated to network transparency.

If you're happy giving up X and SSH and doing everything through the web browser, that's ok. Some people like television. I like to get work done. When the "cloud" (some machine in some datacenter in the USA) goes down, don't say Stallman didn't warn you: http://www.guardian.co.uk/technology/2008/sep/29/cloud.compu...


- Performance is not the only consideration - local-only would remove a lot of design & implementation complexity. (similar to local pipe vs network socket etc)

- As a dev/admin, I depend on SSH but not on X.

- The "dangers of cloud" argument is not exactly helping the pitch for network transparency. :)


"Performance is not the only consideration - local-only would remove a lot of design & implementation complexity. (similar to local pipe vs network socket etc)"

Being network-transparent does not mean being complicated. See the Plan9 window systems:

http://doc.cat-v.org/plan_9/4th_edition/papers/812/

http://doc.cat-v.org/plan_9/3rd_edition/rio/
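
To make the point above concrete: serving the same protocol locally or over the network can differ by only a few lines. A sketch (nothing to do with rio's or X's actual code):

    import socket

    def make_listener(unix_path=None, tcp_port=None):
        # The only difference between "local" and "networked" is how the
        # listening socket is created; everything after accept() is identical.
        if unix_path is not None:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.bind(unix_path)
        else:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.bind(("0.0.0.0", tcp_port))
        s.listen(5)
        return s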


Huh? Please. No one is asking you to give up SSH. Honestly I never use remote X. It's just too clunky and dependent on a perhaps-flaky network (oops, lost the network connection for a minute? crap, there goes the entire app). I just use command-line stuff over ssh for all my remote admin, inside a screen session if necessary. Never had a problem with this.


"Honestly I never use remote X. It's just too clunky"

Why not try to fix it? This isn't the fault of network transparency or even the X11 protocol, but the tools.

"oops, lost the network connection for a minute? crap, there goes the entire app"

How is that different from losing your SSH connection? There's a screen-like program for X that gives you persistent applications: http://code.google.com/p/partiwm/wiki/xpra


You're surprised because you have it backwards.

It isn't that X supporters happen to like network transparency; it's that people who like network transparency are the ones defending X.


As far as I can tell, there shouldn't be any problem using X as a Wayland client for network transparency, unless your remote window needs direct graphics acceleration. It's possible that in the long run some programs won't work with X, but I also expect Wayland to gain network transparency in the long run.

EDIT: After looking at the Wayland mailing list, it seems there are already people hard at work getting network transparency in. So we can probably expect it in the medium run rather than the long run.


Coming in late, but I guess the thread you refer to is "Finishing the network protocol" at http://lists.freedesktop.org/archives/wayland-devel/2011-Mar... . After skimming through it, I'm no longer worried about network transparency.


The problem with that is that X becomes a second-class citizen, like Cygwin on Windows.


To provide some context, Canonical and Red Hat are promoting Wayland as the future of Linux graphics.


Does this mean that if you prefer not to use compositing effects, Wayland is much ado about nothing?


Compositing isn't just about wobbly windows and other effects: it helps with day-to-day use of the desktop by keeping every part of every window painted, so you don't end up with tearing or repainting artifacts as you drag things around the screen.


It also means apps don't receive as many redraw events. It's interesting how priorities have inverted over the years: when windowed systems were first developed, RAM was in short supply, so not storing invisible pixels was an important optimisation, as was avoiding updates to pixels that hadn't changed, as blitting involved the CPU and was super slow. Nowadays, RAM is plentiful and blitting is essentially free, so we try to keep the number of updates (which wake up the CPU and consume cycles and therefore [battery] power) to a minimum.


blitting is essentially free

This is only true on discrete video cards that have local memory and lots of bandwidth. Even there, the fill rate may be big, but it is still finite.

On integrated chips (~70% of the market by volume and rising) and on embedded systems where the GPU and the CPU share the memory subsystem, blitting costs memory bandwidth, and the GPU can't do it any faster than the CPU.


Yep, exactly. Take Nvidia's Tegra 2, for example. It's fine on small screens like phones, but attach a large screen (or two) to it for a tablet, and memory bandwidth becomes the main limiting factor for redraws. If you clear the entire screen on every frame, you max out under 25fps, and that's just to clear the screen; it doesn't leave any time to actually render anything. Try to do anything interesting and you quickly drop below 10fps. So you end up doing a bunch of tricks to limit what you redraw. Forget full-screen video on a high-res screen, though.
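
The arithmetic behind that ceiling is simple; the effective fill bandwidth below is an illustrative assumption, not a measured Tegra 2 figure:

    width, height, bytes_per_pixel = 1920, 1080, 4          # one 32-bit full-screen buffer
    bytes_per_clear = width * height * bytes_per_pixel      # ~8.3 MB written per clear
    fill_bandwidth = 200e6   # assumed bytes/s left for fills after scanout, CPU traffic, etc.

    print("%.1f fps just to clear the screen" % (fill_bandwidth / bytes_per_clear))  # ~24 fps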


True, though even IGPUs have decent-sized texture caches which absorb some of the load, and embedded GPUs additionally usually have embedded frame buffer memory. In any case, blitting isn't anywhere near the crippling operation it once was.


That's been my impression as well. X11 as it is works perfectly fine for me. Or I should say it did until they started that XCB nonsense; now some of my apps no longer work.



