The amount of stored personal data (content, installed software, preferences/configuration) that the average user wants to access, and the concerns (expense, privacy, resiliency, speed) related to distributing that data.
The average user of today doesn't want more CPU, they want their stuff. For most people, it's far more efficient to open a window to the remote machine that has their stuff than it is to try to move all that stuff onto the machine they're sitting at.
Meanwhile, there are these other projects creating standalone remote-desktop apps without the complexity and performance issues of browsers. There's even a whole market of embedded, fanless systems selling for $20-50 that do it, with or without protocol/media acceleration. There are also alternatives in client-server land like REBOL and its "reblets", which send whole Internet apps of 2-50KB to clients. Then there are these projects doing remote desktops by embedding protocols in HTML5 in a web browser running on a native OS on expensive hardware. I'm sure it only seems like modern IT goes out of its way to do simple things effectively. ;)
So the only reason you'd use a distributed computing system now is if a) you're running some kind of publicly accessible server or b) you need a lot of computing power.
Distributed storage is another matter, but you don't need remote desktop for that.
This is why incubating projects should use "(Incubating)" everywhere and clearly announce their status, to avoid confusion. The project isn't part of the ASF yet and, while unlikely, the incubation might not work out for any number of reasons. The usual suspects behind successful incubations are mentoring this one, though, so this will largely amount to very time-sensitive pedantry on my part.
(I was very confused and missed the footer on mobile the first time I read the page. I thought some new project I hadn't heard of had incubated oddly quickly.)
EDIT: Given the JS vs HTML/CSS discussion elsewhere, I should also give credit to the Apache site engineers for using tech that loads instantly and looks visually pleasing despite NoScript. It loads pretty fast on mobile, too. A good counter-example to the bloat of the modern Web.
It seems to me that either you have the ability to install such a client, XOR you don't trust the local machine enough to want to use it.
Another example may be if you are only allowed to access the internet through a proxy browser solution using e.g. Citrix MetaFrame, then even though you might be able to install applications locally, they will not be able to connect to the internet directly.
I worked at a small company that leveraged Guacamole to provide web access to a legacy Windows desktop application, and working with the Guac team (including direct support from Mike) was a pleasure.
I haven't used it in a while (~6 months), but to get the most out of it you really had to compile from source and understand how to manage the various daemons (xrdp.ini is a mess, and configuring some things was rather counter-intuitive). Otherwise you wouldn't have resizable desktops (as in xrandr, not rescaling) and other niceties.
Not sure about audio, now that I think about it, but the gist of things is that so far no mainstream distribution got xrdp right or usable out of the box (unless we're talking about the niche LTSP-related stuff).
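For what it's worth, the xrandr-style dynamic resizing comes from xrdp's xorgxrdp (Xorg) backend rather than the Xvnc one. A sketch of the relevant session entry in /etc/xrdp/xrdp.ini; the exact values are illustrative and vary by distribution:

```ini
; illustrative [Xorg] session entry in /etc/xrdp/xrdp.ini
; (requires the xorgxrdp package; values vary by distro)
[Xorg]
name=Xorg
lib=libxup.so
username=ask
password=ask
ip=127.0.0.1
port=-1
```

With this backend selected at the xrdp login screen, resizing the client window resizes the remote desktop instead of rescaling it.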
What Dynamic DNS solutions are most popular?
The Docker image is officially supported by the project, and Docker is easy to install on a Mac, and it's turnkey on AWS, Google, Heroku...
From Guacamole's docs: "Guacamole can be deployed using Docker, removing the need to build guacamole-server from source or configure the web application manually. The Guacamole project provides officially-supported Docker images for both Guacamole and guacd which are kept up-to-date with each release."
Getting it running natively on a Mac requires installing a slew of dependencies and two daemons: guacd plus a Java servlet container, meaning a JVM and Tomcat on top. The Docker containers make it almost too easy, with not much overhead.
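For the curious, the Docker route is roughly three containers: guacd, a database, and the web app. A sketch based on the official images; the database credentials here are placeholders, and the exact environment variable names should be checked against the docs for your release:

```shell
# guacd: the native proxy daemon
docker run --name some-guacd -d guacamole/guacd

# PostgreSQL for Guacamole's connection/user database (placeholder password)
docker run --name some-postgres -e POSTGRES_PASSWORD=changeme -d postgres

# the web app itself, linked to guacd and the database
docker run --name some-guacamole \
  --link some-guacd:guacd \
  --link some-postgres:postgres \
  -e POSTGRES_DATABASE=guacamole_db \
  -e POSTGRES_USER=guacamole_user \
  -e POSTGRES_PASSWORD=changeme \
  -d -p 8080:8080 guacamole/guacamole

# then browse to http://localhost:8080/guacamole/
```

The one extra step the docs walk you through is initializing the database schema before first login.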
Alright, I'll shut up now. Thought I was being helpful by providing a solution now, but I guess what you wanted to hear was let's hope for a turnkey solution on OS X Homebrew rather than use a Docker image straight from the horse's mouth.
A couple of months ago I was playing with remote desktops and did not find this marvel.
I ended up playing with X2Go, which is nice BTW.
I am wondering... Assuming I have a Linux remote desktop, can I disconnect and reconnect after a while BUT leave all of my applications running? I mean, without logging out.
Once upon a time I read that in the eighties at Sun Microsystems they had a big fat thin-client based architecture, with dumb thin clients that only had a VGA port, keyboard, mouse and a smartcard reader.

The nice thing about such an architecture was that once your smartcard was pulled out, the screen blanked and your remote session was closed. But your desktop session was still running, on the server.

The coolest thing about this was that basically you could pick any desk in the building and just sit there and work. Or you could go to one of your colleagues' desks, pull out his/her smartcard, insert your own, and show him/her what you've been working on, ask for help, collaborate or anything.

This has always fascinated me. I was wondering if I could use Guacamole to do something similar: leave my desktop running in a datacenter somewhere in the world and use whatever to just connect to it and "resume" working.
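Both questions above boil down to session persistence, and with VNC it mostly just works: the X session lives on the server and a browser disconnect doesn't touch it. A minimal sketch, assuming TigerVNC on the remote Linux box with Guacamole's VNC support pointed at it:

```shell
# on the remote machine: start a persistent X session (TigerVNC assumed)
vncserver :1 -geometry 1920x1080

# configure a Guacamole VNC connection to port 5901 (display :1);
# closing the browser tab disconnects the viewer but leaves the
# X session and every application in it running

# the session only ends when you explicitly kill it:
vncserver -kill :1
```

RDP reconnection to an existing session is also possible with xrdp, but in my experience the VNC route is the more predictable way to get the "resume anywhere" behavior.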
Unofficially there are a few hacks that either patch the terminal service or replace it with a wrapper that tricks it. Not sure if I would trust stuff like that for important systems. (e.g. https://github.com/stascorp/rdpwrap)
noVNC is much lighter, but it seems very limited.