I was building a cloud product a number of years back and wrote something similar for my then employer. For VNC and SSH it's pretty easy to do in JavaScript and can be quite reliable (much more so than telling customers how to connect using their own software; big difference in support burden). Even full-screen games were doable.
It provided something similar to phpvirtualbox plus a screen where (if properly authenticated) you'd get access to your virtual machines just by going to a particular tab. It had a list of your VMs on the left; click on one, and a script starts up and loads a VNC client (we started out with Flash and swapped it out for a JavaScript one).
That said, this doesn't work as well as people here want it to. There is a significant cost to running in the browser (the keyboard, for instance: pressing CTRL-W will definitely generate a "wtf" moment a few times). And because I couldn't do significant protocol development (VNC has something like seven protocol variants, and integrating those into WebSockets is more than just encapsulating them if you want it to work well), I couldn't use the most efficient encodings. That effectively imposed resolution limits, and they weren't very high.
Lastly, JavaScript adds latency. In absolute terms it isn't much (it stays below 200 msec or so), but you'd be surprised how incredibly irritating even 20 msec of extra latency is. It's barely tolerable. For server admin work, sure (even then it's irritating). For constant development work, it's very irritating.
And full-screen games were doable because they generally lowered the resolution and already had other sources of lag, so they actually worked better than things like Eclipse and Visual Studio. They also make you much less likely to hit browser keys. Old full-screen games in particular worked really well.
You'd be really surprised how low you can get it. In one of my open source projects we have JPEG streaming and GPU-accelerated capture producing results as low as 50 ms: https://github.com/Ulterius/server
The problem with this proposal is that it only works with full-screen applications. Want to have multiple tabs open to many different desktops? Forget it. Ctrl-w will screw you.
Browsers already have an API for overriding built-in keyboard shortcuts: Event.preventDefault(). It's just that browser makers have chosen to ignore it for certain special keys like Ctrl-t and Ctrl-w.
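For reference, a minimal sketch of that (sendKeyToRemote is a hypothetical stand-in for whatever forwards input to the remote session):

    // Stand-in for whatever forwards input to the remote session.
    function sendKeyToRemote(e) { /* e.g. tunnel/client sendKeyEvent(...) */ }

    // Swallow shortcuts before the browser acts on them.
    document.addEventListener('keydown', function (e) {
      if (e.ctrlKey || e.altKey || e.metaKey) {
        e.preventDefault();   // honored for most combos, ignored for reserved ones
        sendKeyToRemote(e);
      }
    }, true);                 // capture phase, so we see the event first

This works for most combos, but as noted, the reserved ones (Ctrl-t, Ctrl-w, ...) never get cancelled in some browsers.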
You can hack around this by grabbing the "window.onbeforeunload" event to display the "Changes you have made to this page may not be saved: Leave/Stay" dialog when closing a tab. Reddit implements this when you're writing a comment, HN does not.
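For anyone who hasn't wired that up before, it's tiny (note that modern browsers ignore any custom message text and show their own generic dialog):

    // Arm the "Leave/Stay" confirmation while a remote session is open.
    window.onbeforeunload = function (e) {
      e.preventDefault();     // some browsers want this
      e.returnValue = '';     // legacy browsers want a non-undefined value
      return '';              // any custom text here is ignored by modern browsers
    };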
The nice thing about a remote desktop, though, is that you won't actually lose changes.
I've had a very positive experience with wetty[0] which uses the ChromeOS terminal emulator (hterm) on the client side, and communicates via websocket to a small node server that makes the SSH connection. I use it to provide a shell into an internal docker container used for managing some things. Everything "just works," even tmux key bindings.
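The moving parts in that pattern are small. A rough sketch of the server side, assuming the ws and ssh2 npm packages (this is not wetty's actual code, and the host/credentials are placeholders):

    // Toy websocket -> SSH bridge, roughly the wetty pattern.
    const WebSocket = require('ws');
    const { Client } = require('ssh2');

    new WebSocket.Server({ port: 3000 }).on('connection', function (sock) {
      const ssh = new Client();
      ssh.on('ready', function () {
        ssh.shell({ term: 'xterm-256color' }, function (err, stream) {
          if (err) return sock.close();
          stream.on('data', (d) => sock.send(d));     // shell output -> browser (hterm renders it)
          sock.on('message', (m) => stream.write(m)); // keystrokes -> remote shell
          sock.on('close', () => ssh.end());
        });
      }).connect({ host: 'internal-box', port: 22,          // placeholder host
                   username: 'admin', password: 'secret' }); // placeholder creds
    });

Since the terminal emulation happens client-side in hterm, the server really is just a dumb byte pump, which is why things like tmux key bindings come through untouched.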
Agh, I so want this to be good. With remote desktop, I think one thing that makes a product in this area stand out is how it handles delay, packet loss and bandwidth issues. There's a ton of stuff re: adaptively changing quality and keyframe frequency, sending delta updates for parts of the screen, compressing the data, and doing all this without pegging the cpu at 100% etc etc.
Teamviewer and Nomachine NX are two examples that I use a ton and that seem to have most of this stuff figured out, whereas their competitors seem to work in theory but in practice are bloated, lag, make my computer fan go crazy, etc.
Another thing is ease of connection through NAT/firewalls, though this one seems to shift that burden onto the server setup.
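To make the "delta updates for parts of the screen" point above concrete, here's a toy sketch of dirty-tile detection over two RGBA framebuffers (just the idea, not any product's actual code):

    // Only re-send 64x64 tiles whose pixels changed since the last frame.
    const TILE = 64;
    function dirtyTiles(prev, curr, width, height) {
      const dirty = [];
      for (let ty = 0; ty < height; ty += TILE) {
        for (let tx = 0; tx < width; tx += TILE) {
          scan:
          for (let y = ty; y < Math.min(ty + TILE, height); y++) {
            for (let x = tx; x < Math.min(tx + TILE, width); x++) {
              const i = (y * width + x) * 4; // RGBA stride
              if (prev[i] !== curr[i] || prev[i + 1] !== curr[i + 1] ||
                  prev[i + 2] !== curr[i + 2]) {
                dirty.push({ x: tx, y: ty }); // tile changed: encode and send it
                break scan;                   // no need to scan the rest of this tile
              }
            }
          }
        }
      }
      return dirty; // each dirty tile then gets compressed and shipped separately
    }

The hard part the good products get right is everything layered on top of this: per-tile encoder choice, quality adaptation, keyframe scheduling, and doing it all without eating the CPU.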
x2go is great. I desperately want to see VirtualGL integration in it though so you can run egl compositors with local rendering (since GPUs on VMs are still uncommon).
The other thing RDP is amazing at that I haven't seen Linux do well at all is resuming local sessions remotely and vice versa - with RDP it feels integrated, with x2go you're back to just transferring frames.
X2GO is extremely reliable for me. But speed is not good compared to teamviewer (which had a lot of stability issues on linux when I tested it 3-4 years ago)
Huh. Does anybody know what the good clients use as a transport? E.g., are they UDP-based? A quick skim of the docs suggests that this only uses HTTP and WebSockets, which I understand are TCP-based. The only browser thing I know of that does an unreliable transport is WebRTC, but that currently isn't supported: https://glyptodon.org/jira/browse/GUAC-815
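For what it's worth, the browser knob for unreliable delivery is on RTCDataChannel. A sketch of requesting UDP-like semantics (paintUpdate is a hypothetical frame handler, and the signaling needed to actually connect the peers is omitted):

    // Ask for an unordered channel with no retransmits (SCTP over DTLS over UDP).
    function paintUpdate(buf) { /* decode and blit the update */ }

    const pc = new RTCPeerConnection();
    const channel = pc.createDataChannel('display', {
      ordered: false,     // frames may arrive out of order
      maxRetransmits: 0   // lost packets are dropped, not resent
    });
    channel.binaryType = 'arraybuffer';
    channel.onmessage = (e) => paintUpdate(e.data);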
pcANYWHERE for Windows had this all figured out in the late 80’s. They hooked GDI and transported the API calls wherever possible. It worked awesomely over a 56k modem. Then the guys involved sold to Symantec, and they let it go fallow, EOLing it in 2014 after 7 years of no updates.
> pcANYWHERE for Windows had this all figured out in the late 80’s. They hooked GDI and transported the API calls wherever possible.
RDP does the same; the early revs were basically GDI over wire. Later versions would have extensions/hax to transport the DirectX stuff (did they just ship framebuffers? I dunno).
Then why is it always compared favorably to X over network, which does exactly the same thing (sending graphics primitives over the network, although these days mostly having to throw over complete frame buffers because everything wants to draw itself)?
2) When times changed, RDP adapted, X did not. No one uses those old graphics primitives anymore, and X's network protocol failed to keep up with the new reality of client-side rendering. So now that protocol sucks for remoting GUIs.
Yep. For many applications, RDP on a LAN is literally indistinguishable from a local session. Sadly, nothing I've tried on the Linux desktop comes close to this performance. One of the few areas where the Linux desktop is 15 years behind.
To add to this a bit, another area where the "X ecosystem" didn't keep up is with toolkits. X11 is a very asynchronous protocol, but this is difficult to work with for toolkits so they synchronize most things, making it slow over high latency links (but unnoticeable over low latency links, like UNIX domain sockets on the local host).
Strange that pcAnywhere has fallen by the wayside. It was the de facto standard in some support jobs I did, and you could expect customers to have it for support. What do people use now if pcAnywhere is not there?
I don't do support, but I did use it in non-support roles; I just can't imagine the use case nowadays.
I had the pleasure of working with the pcANYWHERE guys on interfacing their product with a product I was working on. They really loved their product and it really showed in the quality of their work. Little details like how you could hit Ctrl-Alt-Del on the remote to reboot the host. So they got fuck you money from Symantec, which I'm sure they enjoyed. But Symantec did not love the product like they did. They could have been a formidable competitor to Citrix, and look how their product line has proliferated.
About 12 years ago I deployed Windows Terminal Server in a corporate environment, and used PXE boot on the client computers to start a basic Linux distro that then launched the 2X client to access TS. Right now I'm creating a reference document for a VDI deployment using VMware/Citrix/TS. I've briefly looked at the NoMachine client by installing it on a Win10 computer, and the performance of NoMachine is worse than RDP to the same computer over the same link. Since you're happy with NoMachine, do you know of any optimization techniques for it?
Hmm - I haven't really administered NoMachine myself (we have an enterprise installation on one of the clusters at work), so I can't really help you there unfortunately, other than to say that it doesn't often crash or hang for me, and the delay is low enough over WAN that using the GUI / mouse works almost perfectly, both on OSX and Windows.
RDP is pretty darn fast (at least when I last used it ~10y ago)
I think the attraction to NX for Linux users is that it is still way better than VNC, and those two are pretty much the only consistently functioning / packaged server options for most distros without fiddling.
I've been a nomachine fan for a few years now, use it to access my linux work laptop from my windows gaming desktop. It has some minor irritations but on the whole it handles this use case quite well. I'll definitely be giving guacamole a shot though.
Teradici PCoIP is proprietary, but absolutely, amazingly good. It is licensed by various thin client and Virtual Desktop Infrastructure products, like AWS WorkSpaces. RDP is also really effective.
I use both, but Teradici is the one that made me abandon my local Windows VMs.
We use Guacamole at work to give our customers access to the internally-hosted web app we're developing for them. I've been kind of amazed that it works as well as it does.
Great tool indeed. To be honest, even web browsing works just fine with it, as long as there are no animations to display.
I use it to get around my firewall at work, which seems to block anything apart from http/https. If anyone knows a better solution than Guacamole, let me know :)
Since we're nitpicking: 2012-2014 can potentially be three years. The definition for "several" is relatively flexible since it's an approximate quantity. However what is agreed upon is that it has to be at least more than a couple, which three is.
Beautiful! I've been looking for something like this and hadn't heard of Guacamole.
Where I work, it's standard for developers to work using a "cloud desktop", i.e., a remote machine hosted in the cloud that's used for personal development with a very production-like environment [1]. When accompanied by a powerful laptop it's all most engineers need. However, client and server software for various protocols like RDP and VNC on various platforms is still a pain. It'd be great to have a simple and easy way to provide viable remote access built directly into servers -- from any client device with no prior setup. I'm glad to have come across this.
I'm curious how Guacamole's HTML5 rendering compares to solutions like the Ace editor when used to render terminals and text areas. At a high level, it looks like Guacamole is based on RealMint which uses the HTML5 canvas tag, whereas Ace manipulates regular text elements to effect styling. I'll have to experiment with them.
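For flavor, the canvas approach boils down to painting glyph cells yourself, roughly like the toy sketch below (assuming a canvas element with id 'term'), whereas Ace updates styled DOM text nodes and lets the browser do the text layout:

    // Toy monospace cell painter, the general shape of canvas-based clients.
    const ctx = document.getElementById('term').getContext('2d');
    const CW = 9, CH = 18;   // cell size in px for the chosen font
    ctx.font = '16px monospace';
    ctx.textBaseline = 'top';

    function drawCell(col, row, ch, fg, bg) {
      ctx.fillStyle = bg;
      ctx.fillRect(col * CW, row * CH, CW, CH); // paint the cell background
      ctx.fillStyle = fg;
      ctx.fillText(ch, col * CW, row * CH);     // then the glyph on top
    }
    drawCell(0, 0, '$', '#0f0', '#000');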
[1] And for that reason I was amused to see the following on the Guacamole home page: "Keep your desktop in the cloud: Desktops accessed through Guacamole need not physically exist. With both Guacamole and a desktop operating system hosted in the cloud, you can combine the convenience of Guacamole with the resilience and flexibility of cloud computing."
I tend to use Amazon Workspaces for this. It's a joy to work with (especially now that it supports two screens). Having a desktop that is exactly the same wherever I am doesn't sound like much, but it's one of those small things that you really miss when it's gone. I'm not aware of any other providers, though I'm sure there are 1000s. One nice advantage of Workspaces is that it integrates with Amazon AD out of the box.
There are surprisingly few providers, and AWS WorkSpaces is unusual in that it has no minimum number of desktops. One gotcha with WorkSpaces: it is cheap enough to make sense for just one user if you use the free "Simple" AD option, but "Full" AD with GPO support etc. does add a cost (though not a significant one for a business).
> it's standard for developers to work using a "cloud desktop"
Personally I prefer something like Apple's iCloud Drive; your files are downloaded locally and made available offline, letting you work on them with any device, even after it's disconnected from the Internet.
This lets my data stay mine, encrypted on my local storage, individually manageable and tag-able like all regular files, included in Time Machine etc. backups, and available even if iCloud Drive goes down.
For example, although you cannot compile full Xcode projects on iPads (yet), I can take code from them, edit it in Swift Playgrounds on my iPad, and copy it back into iCloud Drive for continuing on my iMac.
Edit: That is not to say having access to a full desktop system from any device isn't desirable, but I do prefer my files to remain available on local storage.
I want to know more about your 'cloud desktop' setup. I have tried to do it myself but it's quite laggy (like heavy lag in typing code, scrolling content). I wonder how you guys solve that.
I know Ace-like editors work great, but they're usually minimal web editors compared to something like Sublime running over VNC/RDP.
One solution for remote development is to handle some heavy lifting like responsive display logic locally in the client. For two examples of this pattern, see Cloud9 IDE [1] (which builds on Ace) and Eclipse Che [2] (which uses the Orion editor). They're web-based IDEs that run remotely on the server, but load the content, such as the file tree and files you've opened, locally into your browser. So you can load your session from anywhere and no permanent state is kept on the client.
As far as protocols like RDP and NX go, I don't know of a silver bullet. I use machines hosted in Portland from my location in Seattle and the latency is low enough that remote UIs feel close to native. Some protocols are better than others at handling high-latency or low-bandwidth connections. I'd recommend testing a few and seeing what works best for your environment. You might also measure the round-trip time: for any protocol that needs a round trip with the server to update the display, the RTT fundamentally bounds the UI's responsiveness; to do better you have to move some display logic onto the client. Scrolling and character echo are two examples of logic that's really valuable to have on the client. (Many SSH clients have local predictive echo for that reason; they echo by default except when they've detected a password prompt.)
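A toy sketch of that predictive-echo idea (render and sendToServer are hypothetical stand-ins; real clients diff predictions against the authoritative echo instead of discarding them):

    // Paint keystrokes immediately; reconcile when the server's echo arrives.
    function render(text) { /* paint the terminal line */ }
    function sendToServer(ch) { /* write to the SSH/websocket stream */ }

    let confirmed = '';  // text the server has acknowledged
    let predicted = '';  // locally-echoed keystrokes awaiting confirmation

    function onKey(ch) {
      predicted += ch;
      render(confirmed + predicted); // instant feedback, no round trip
      sendToServer(ch);
    }

    function onServerEcho(text) {
      confirmed = text;
      predicted = '';                // a real client would keep still-unconfirmed keys
      render(confirmed);
    }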
I should clarify though that a lot of development takes place on a person's local laptop, synchronizing between that and the cloud desktop (such as with Git and other tools). The former provides responsiveness and the user's preferred OS and tools, while the latter provides a production-like OS with high performance that is better able to handle complex software stacks like multi-microservice applications or sizable ecommerce websites. It's also convenient to have a relatively pristine environment that isn't frequently interrupted by e.g. laptop hibernation and network changes.
> One solution for remote development is to handle some heavy lifting like responsive display logic locally in the client.
Another example is emacs in server mode: although normally folks use a Unix socket, it's possible to use a network socket instead. The remote emacs actually contains all the state, and the local emacsclient just connects to it.
Another approach is to run emacs locally, and access remote files & commands with TRAMP.
I worked at a big hardware chip company 10~12 years ago and we also had a similar setup, except that the "cloud desktop" wasn't really a personal one; it was shared among the team. Whoever needed to run tasks that required the powerful machine would Remote Desktop into it; otherwise you normally used the company laptop (which was pretty powerful for its time too). I don't remember having a lot of pain with Remote Desktop or VNC at the time, but the web client would certainly be cooler.
I want to throw in a thumbs up for Guacamole. I deployed it two years ago to allow us to demo our software for big enterprise clients who can't easily install things on their office computers. I can rapidly deploy an Azure Server 2012 VM set to kiosk mode with our software installed in under a minute.
Enterprise clients can then log in from anywhere and get a fast, controlled demo of our software.
A few months ago I also threw out our crappy old Citrix deployment and replaced it with Guacamole. When hurricane Harvey hit we had 50+ engineers working remotely without a single hiccup. It was a lifesaver.
If your environment is MS Windows, may I please have more detail on how you threw out Citrix? Guacamole does not provide multiple sessions to a single computer; that's the functionality of a broker such as Microsoft Terminal Server (termdd.sys), which allows for multiple sessions connected to the console of a computer, and likewise for Citrix. For your 50+ engineers to work remotely, they would have had to be connected to 50 unique computers via the Guacamole server, or you still have a Citrix server and Guacamole has replaced the Citrix ICA client...
> The web application deployed to the Guacamole server reads the Guacamole protocol and forwards it to guacd, the native Guacamole proxy. This proxy actually interprets the contents of the Guacamole protocol, connecting to any number of remote desktop servers on behalf of the user.

https://guacamole.incubator.apache.org/doc/gug/guacamole-arc...
Our Citrix deployment was ONLY connection brokering. Engineers here work on 12-40 core desktops with 64-256GB of local ram. Unfortunately the workload isn't suited for VDI.
OK. Now I'm very curious about the move to Guacamole. Citrix ICA is the fastest remote desktop protocol around. Even a very seasoned VMware engineer I work with will only deploy Citrix for VDI that requires low latency and quick response times, as he admits that VMware's Blast protocol is not battle proven. So you are saying that Guacamole delivers better performance than Citrix. I guess I will have to try it for myself.
Like all Apache projects it seems like it's very well documented for engineers who work with it, but not really explained for newcomers :)
Does it log into its own session, or take control of an active session on the machine?
My use case is this: my parent is using Ubuntu, and when she reports a problem I'd like to be able to log into her session and share control of the mouse pointer, so that she can show me what she's doing and I can also navigate around to uncheck a checkbox or something like that.
Is Guacamole the right solution? If not, anyone knows a good one?
That's entirely down to configuration on the target. Guacamole is just an in-browser client, so it depends how the server is set up.
For VNC, if your server is set up using X11VNC then you will be logging into the existing session, whereas if you set it up with TightVNC or Vnc4Server (can't remember what's in the Ubuntu repos) then it will be a separate session.
I second TeamViewer, really like it and works across Windows, Linux and Mac.
I had a lot of trouble getting TeamViewer to install on my cloud VMs. It needs a physical desktop or something close to it. On DigitalOcean and Scaleway, this didn't work. On Vultr and Linode, it works fine.
Open-source solutions for things like this are generally developed as infrastructure, or a basis for future work. Proprietary solutions are built for people to actually use.
Ok, anyone offering it as a service? Would be nice to have a low power laptop just to connect to a high powered guacamole instance, with IDEs and development tools installed. Able to snapshot etc.
I'm gearing up to offer this as a service by the middle of next year. However, I have a couple fully functioning proofs of concept that I need people to test. If you'd like to get in on it, shoot me a PM.
I was wondering this too. I've been considering implementing a frontend "as a service" with it and charging something like a $10 annual fee. My use case is circumventing a restrictive firewall at work; would anyone else find a hosted Guacamole solution worthwhile?
If a server can be likened to an avocado, this enables people to expose its insides by removing the hardened outer layer, then trash the kernel and mash the rest to a pulp?
Question with the risk of sounding very stupid: I get that you don't need to install anything on the controlling machine, but what about the controlled machine? I suppose you need to install some kind of software there? There needs to be some kind of access control, no? I didn't see anything on the website though.
Which utterly obvious piece of the equation am I missing?
Seems I'm the idiot then, missed the "gateway" bit... so this basically sits between something providing VNC/RDP and makes it accessible with just a web server?
Usually, you need an RDP client and a remote RDP server. In Guacamole, the "RDP client" part is handled by the Guacamole server. The HTML5 frontend that users see is not a fully fledged RDP client, but merely a display and input device for the Guacamole server, which is the actual RDP client.
Yes on copy and paste, though you have to use a text box in the browser window (a menu on the left side) to mess with the clipboard. It's not ideal, but it's possible.
I can't remember off the top of my head whether it does drag & drop for files, but I seem to think not.
Please don't use Vimeo; it's terrible for people with slow connections (no 144p), and for people with fast connections there is no 2x speed mode to save time. Use YouTube or a service with similar features.
Also, the video is mp4 w/out fallback which doesn't work for those of us who compile our browsers w/out proprietary codecs/containers (admittedly pedantic and only applies to 0.0001%, but it applied to me).
In my experience, Guacamole works very well with windows RDP servers, and feels very fast. Certainly faster than X11 or VNC remoting on the same hardware.
("SS": Session Server (where the session is running), "GS": Gateway Server (where the protocol translation is performed from RDP -or VNC- to custom HTML5), "WC": web client)
- At SS, compress images using the RDP -or VNC- protocol. Cost: CPU, RAM, and RAM bandwidth, because of desktop render, delta analyzer, image compression for the RDP -or VNC- protocol, image-specific compression (e.g. RLE if an old protocol is negotiated), lossless compression ("bulk"), encapsulation, bandwidth/frame control, transfer.
- TCP transport between the SS and the GS. Cost: LAN traffic.
- At GS, decompress the RDP -or VNC- protocol in the "gateway" server (bulk lossless decompressor, and image decompressor -e.g. RLE for basic protocol negotiation-). Cost: mainly CPU.
- At GS, compress the images in a format suitable for the web client. Cost: mainly CPU.
- Send the images via websocket from the GS to the WC. Cost: WAN traffic.
- Decompress and render the images in the web client. Cost: client CPU and RAM.
Then you have to dimension how many GS you need per SS, routing between the WAN and the GS, how you balance both the SS and the GS, high availability setup, etc.
TL;DR: data is compressed and decompressed twice, because of the protocol conversion, involving extra latency because of more time for compression/decompression and more hops.
What happens if you use Guacamole to log into a remote machine, and then use a browser on the remote machine to use Guacamole to control the initial client machine?
Any HN'ers happen to use Guacamole in order to gain access to Linux hosts (via SSH, of course) from an iPad? I'd be interested in hearing about how it works.
Alternatively, I'd take recommendations for an SSH client for the iPad that supports public/private key pairs and connecting through a bastion/jumpbox/etc. (a.k.a. the "ProxyCommand" SSH client directive).
I wish I had this years ago. To do VNC and get around firewall/NAT issues, I actually created a port forwarder through Firebase. It surprisingly worked great: https://github.com/rb365/fireport
I might try to make it work with Guacamole; a web UI is definitely better than installing a VNC client.
We've used Guacamole in production with a custom frontend. It's nice to work with and most problems I've encountered were covered in the old forums on SourceForge. For anyone planning to implement a custom frontend I'd recommend using WebSockets exclusively. We've had trouble with XHR based connections.
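For anyone starting down that road, the client side with guacamole-common-js is pleasantly small. Something along these lines, assuming the library is loaded and 'websocket-tunnel' is whatever endpoint your backend exposes (this is from memory, so treat it as a sketch and check the manual):

    // Minimal guacamole-common-js frontend over a WebSocket tunnel.
    var tunnel = new Guacamole.WebSocketTunnel('websocket-tunnel');
    var client = new Guacamole.Client(tunnel);

    document.body.appendChild(client.getDisplay().getElement());
    client.connect();

    // Forward mouse and keyboard input to the remote session.
    var mouse = new Guacamole.Mouse(client.getDisplay().getElement());
    mouse.onmousedown = mouse.onmouseup = mouse.onmousemove = function (state) {
      client.sendMouseState(state);
    };

    var keyboard = new Guacamole.Keyboard(document);
    keyboard.onkeydown = function (keysym) { client.sendKeyEvent(1, keysym); };
    keyboard.onkeyup   = function (keysym) { client.sendKeyEvent(0, keysym); };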
We had a project that could control qemu, xen, vmware, and hyperv; Guacamole handled them all, and we integrated it easily. It supports both websockets and regular http polling, and both work fast enough and feel nice.
I’ve yet to find a good solution for remoting to a Mac from a Windows machine. Chrome Remote Desktop almost fits the bill, but I cannot get meta key bindings to work properly.
Also a "good" solution to me means the connection is established via some cloud mechanism, and doesn't require me to open a bunch of ports on my firewall.
And VNC on Mac has always had miserable usability for me - updates paint slowly from the top to the bottom of the screen, can't see my text in real time when I type.
The demo is a little weird. They open up VLC on Windows, but only to show off how it renders waveforms for an MP3. Is that a use case that people have for remoting into their PC?
They're demonstrating Guacamole's ability to handle simple animations smoothly. Music visualizers are a solid, familiar example, despite not being a super-common use case for remote desktop.
Shameless plug -- founder of https://commando.io here. We offer a product similar to Guacamole: agentless/clientless, web-based SSH access to machines. We unfortunately only support SSH, though. We have an iOS app as well, so you can execute commands on servers on the go or from bed -- BedOps :-).
Yeah, TeamViewer is certainly decent enough, but I'm always looking for other open source remote desktop access mechanisms (especially now that my current day job is pretty restrictive with the laptops they issue us). I had never heard of this Apache Guacamole, but will certainly give it a try!
Searching for guacamole on github combined with recently updated shows a lot of unrelated repos. Apparently there is a github tutorial which uses a guacamole recipe, which makes searching for related things hard. I was trying to find an updated docker image for example.
Tested this, installed the docker versions (which should "just work"). The vnc library segfaulted immediately, didn't work.
How can this fail even if they have built docker images with presumably the correct libraries?
They are trying to create a brand, something catchy that will be remembered. "Apache clientless remote desktop gateway" just does not have the same ring to it.
I am really getting tired of programmers' "cute names for things". I recently went through training on Adobe Experience Manager, and literally every single internal technology was named for something that had no bearing on what it actually did: Felix, Sling, Jackrabbit... I just did a quick Google on the AEM tech stack and got the following choice quote, which I feel perfectly illustrates the meaninglessness of Apache's naming: "Apache Felix is to Apache Sling what Equinox is to Eclipse." What does any of that mean or do?!
I'd very much prefer it if they'd named Guacamole the "Apache clientless remote desktop gateway".
I think the idea behind meaningless names is that they don't constrain your project. This is good for codenames, but for some reason, today codenames turn into official names (maybe because of "release early, release often" philosophy?).
EDIT: My pet peeve, though, is names that mean something unrelated and much cooler. Like (my go-to example) Terraform - glorified configuration manager appropriating the name of something infinitely more interesting.
This is my pet peeve too. Speaking as a physicist, if I ever invent a Quantum Fusion Plasma Turbine Terraformer, I'm totally going to name it the Visual Java Rusty Lispy Scripter Plus Plus. I figure that might pollute the namespace for programmers enough to show them what they do to other fields. Revenge!
If you invent a Quantum Fusion Plasma Turbine Terraformer, I imagine people will be happy to let you name it whatever you want. Whether due to extreme gratitude, extreme terror, or both.
Unfortunately most of the IT industry is filled with so many bored people they hardly feel anything about the most outrageous over the top names or the software itself.
This reminds me of a client that hired a new IT director and his first order of business was renaming all the machines from their boring names.
We had things like database servers named "db1" and "db2", load balancers were "haproxy1" and "haproxy2", web servers "web1" through "web8", that sort of thing.
The new IT director decided to name servers based on city names, where the country indicated the type (Sweden is database servers, Finland is load balancers, Japan is web servers). And these weren't obvious names; ISTR that I had heard of only one of the 20 names before.
So at 3am when an alert came in saying "Hagfors has high load", you had to know that was in Sweden to know it was one of the database servers. But was it the primary or the secondary?
Back when Shell's design pattern required a master NFS server per work group, whoever got the money to buy their server got to pick the naming convention. Since most of the servers where I worked were for geologists, we ended up with servers named scungilli and murex and lots of really long names that required aliases. And unless you knew your biology classifications very well, there was no reason to know which server went with which client.
It really helps to have a Googleable name. You will get adopted faster and your community will grow if people are able to Google "enable 24 bit color Guacamole" rather than the generic alternative. Also generic language simply can't represent the sheer variety we have, or explain what projects do to newcomers who don't understand the generic language anyway.
Guacamole is way easier to remember than Apache Clientless Remote Desktop Gateway. Also, when seeking help it's a lot fewer characters, so people aren't annoyed by the typing.
Disagree. Just because the name is longer doesn't mean it's harder to remember, because "Apache Clientless Remote Desktop Gateway" is much more descriptive.
Like GP, the trend over the last decade or so toward meaningless names for tech products is one of my biggest annoyances. Occasionally something gains enough popularity that it doesn't matter ("Google"), but in most cases these names are only meaningful to the people who work with these products every day, and for everyone else it's alphabet soup.
TIL - they almost named it "BackRub" because it "analyzed the web's 'back links'". Naming stuff is hard, right? I remember thinking the name "Google" was a little silly at first, but probably because I didn't know wtf a googol was (so I didn't make that connection). Instead, the first thing that came to my mind was a drooling baby trying to speak. "Apache" is another example of a seemingly meaningless/arbitrary name until you look into it further. So I would agree that "Google" is in the alphabet soup realm, but for some reason I'm cool with it. I'd argue it's a few steps above the shameless "Guacamole"-type names that have no soul.
That. CRDG. Sigh. And I'd be the guy going: what the F is CRDG and why am I supposed to install it? Guacamole is no better, but at least I get to enjoy a moment of pleasure thinking about dip before I need to a) Google around trying to figure out what the damn thing is and b) set it up.
Yeah, especially as "guacamole" memory-fails to "salsa," "avocado" etc. while "Apache Clientless Remote Desktop Gateway" memory-fails to "Apache RDP gateway" or something more similarly descriptive
It could also be intended to conjure up imagery of yourself relaxing on an island in the Caribbean, drinking a margarita, eating chips and guacamole. A gateway to a clientless, remote, desktop getaway.
So how about "Apache Clientless"? The name doesn't have to describe the entire functionality of the product, but it's nice for it to have some relevant signifier you can mentally associate with something.
Microsoft does this with their products, like SQL Server, which is confusing and can cause name collisions. Due to their dominance it's not hard to search for, but smaller products would have that problem.
> I'd very much prefer it if they'd named Guacamole the "Apache clientless remote desktop gateway"
I don't like either one. "clientless remote desktop gateway" is way too long and cumbersome. But it seems like, recognizing that, the answer was $(shuf -n 1 /usr/share/dict/words), resulting in a generic commonplace word with no relationship to the project at all.
There's a reason why naming is called out as one of only two truly difficult problems in programming. (The others being cache invalidation and off-by-one errors.)
Well, I have no real idea why it's hard, but I know that it is. Saying you're getting really tired of bad names is like saying you're getting really tired of software that has bugs. Sure, it's annoying, it should be fixed, but it affects everything and it's hard.
The conference rooms in my office are named for local cities and towns, many of which are nearby. We have a conference room named for the city that our office is in. In my foolish engineer mind, "we're meeting in Northeast 2-A" would make more sense than "we're meeting in Whoville."
And all the printers have names of cartoon characters.
Well, this seems like one of the million open source Java projects from Apache. At one point the names were straightforward, like 'commons-cli', 'commons-httpclient', 'stringutils', etc., and the libraries were useful to a large number of programmers. But nowadays it is mostly half-assed 'frameworks' with fancy names.
No it isn't, clickbait is promising one thing and delivering another, or needlessly hyping up a mundane story. Unless you thought Apache was literally selling guacamole, or the word guacamole makes you unreasonably excited, this is not clickbait.
I hate marketing as much as anyone, but even I can understand that brands sometimes have value. Google would have done much worse as a company if they'd just named themselves "Search Box".