Try working in banking for 20 years, stuck behind at least one layer of Citrix, living in Citrix inception. There's latency on every keystroke; your brain starts adding latency that isn't there, compensating for a life lived wearing Citrix latency goggles.
I was a "Citrix consultant" for about two decades.
I'd walk into customer sites for the first time, meet people, and within minutes they would start ranting about how bad Citrix is.
I suspect only dentists get this kind of feedback from customers before a procedure.
Having said that, 99% of the time the problem boils down to this:
The guy (and it is a guy) signing the cheques either doesn't use Citrix OR uses it from the head office with the 10 Gbps link.
The poor schmuck in the backwater rural branch office on a 512 Kbps link shared by two dozen staff gets no say in anything, especially not the WAN link capacity.
I've seen large distributed orgs that were 100% Citrix "upgrade" from 2 Mbps WAN links to 4 Mbps to "alleviate network congestion" in an era where 100 Mbps fibre-to-the-home is standard. With 2 Mbps you can watch PDF documents slooooowly draw across the screen, top-to-bottom, line by line. Reminds me of the 2400 baud days in the early 90s downloading the first digital porn, eagerly watching the pixels fill the screen.
Don't blame Citrix. Blame the bastard in the head office that doesn't give a f%@$ about anyone not him.
I agree in general but I do blame Citrix for some foot-guns. The Citrix admins at my employer have never figured out how to configure it to get keyboard latency below ~120ms (on a gigabit LAN), and the silly health meter always reports the connection as excellent. This is mostly on them - in classic enterprise IT thinking, if it’s not down your job is done - but I’m somewhat disappointed that it’s even possible to configure it to have latency twice that of a modem.
This is just flat out wrong. Any seasoned gamer can feel the difference between a few tens of milliseconds.
300ms would render most video games unplayable.
I see this claim a lot and it's making me want to build a website that gives you some common interactions (moving a mouse cursor, pressing a button) with adjustable latency so people can see just how big of an impact seemingly small amounts of lag have on how responsive something feels.
After using xterm for years, I don't like gnome-terminal anymore because its lag while typing has become noticeable. It's right around 30ms on this site, and xterm around 10-20ms.
Then have an estimation challenge mode, where it picks a random latency and you have to guess within 50ms what it is. Seriously though, that sounds both fun and useful.
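A minimal sketch of the adjustable-latency part of such a demo, in browser TypeScript; the element IDs and the setTimeout-based echo are made up for illustration, not taken from any existing site:

```typescript
// Minimal sketch of an adjustable-latency typing demo (browser TypeScript).
// The element IDs ("latency", "input", "output") are invented for this example.
const latencySlider = document.getElementById("latency") as HTMLInputElement;
const input = document.getElementById("input") as HTMLInputElement;
const output = document.getElementById("output") as HTMLTextAreaElement;

input.addEventListener("input", () => {
  const delayMs = Number(latencySlider.value); // e.g. a 0-500 ms slider
  const snapshot = input.value;                // state at keystroke time
  // Echo the keystroke into the "remote" pane only after the artificial delay,
  // which is roughly what a thin-client round trip feels like.
  setTimeout(() => {
    output.value = snapshot;
  }, delayMs);
});
```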
Back when I played League of Legends, 300 ms latency meant "your ISP is having problems today and you cannot play".
Anything above 70 ms is considered very bad.
...for Massive Multiplayer Online Gaming (MMOG), real-time is a requirement.
As online gaming matures, players flock to games with more immersive and lifelike experiences. To satisfy this demand, developers now need to produce games with very realistic environments that have very strict data stream latency requirements:
above 300 ms: game is unplayable
above 150 ms: game play degraded
above 100 ms: player performance affected
below 50 ms: target performance
13 ms: lower detectable limit"
But this is real-time gaming. Typing should be less demanding, I'd think.
Not really, unless you're the kind of guy who works in COBOL and is used to typing with latency.
I've seen COBOL developers just ignore the latency and keep typing, because they know what they've typed and it doesn't matter that it's slow to show up on screen.
Working with latency like that also requires the system to be predictable. If you're expecting autocomplete but aren't confident in what it'll show, you've got to wait; if you're not sure whether the input will be dropped if you type ahead too much, you've got to wait. If you need to click on things, especially if the targets change, there's lots of waiting.
If the system works well, yeah, you can type all the stuff, then wait for it to show up and confirm. 'BBS mode' as someone mentioned.
> I've seen COBOL developers just ignore the latency and keep typing, because they know what they've typed and it doesn't matter that it's slow to show up on screen.
I used to do that (not in COBOL), typing into a text editor in a terminal over a 2400-baud modem. Like the other commenter said, you get used to it, but it requires a certain predictability in your environment that you don't get in modern GUIs.
Generally I think of it in terms of number of frames @ 60 fps.
Anything below one frame (16.66 ms) and whether any real feedback is even received (let alone interpreted by the brain) becomes a probability density function. Each additional frame after that adds more and more kinesthetic friction, until you become completely divorced from the feedback at around 15-20 frames.
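For reference, a rough conversion of that 60 fps framing into milliseconds (the thresholds themselves are the commenter's, not measured values):

```typescript
// Rough frames-to-milliseconds conversion behind the comment above (illustrative only).
const FRAME_MS = 1000 / 60; // ≈ 16.67 ms per frame at 60 fps

const framesToMs = (frames: number): number => frames * FRAME_MS;

console.log(framesToMs(1).toFixed(1));  // "16.7" - the one-frame threshold
console.log(framesToMs(15).toFixed(0)); // "250"  - start of the "divorced from feedback" range
console.log(framesToMs(20).toFixed(0)); // "333"
```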
That's off by about an order of magnitude: highly skilled humans can see and react in less than 120 ms. One thing that complicates this discussion is that there are several closely related things. How quickly you can see, understand, and react is slower than just seeing, which is slower than seeing a change in an ongoing trend (that's why you notice stutter more than isolated motion). There are also differences based on the type of change (we see motion, contrast, orientation, and color at different latencies due to how signals are processed starting in the cortex and progressing through V1, V2, V3, V4, etc.) and on how focused you are on the action (e.g. watching to see whether a bird moves is different from seeing the effect of something you're directly controlling). Audio is generally lower latency than visual, too.
All of this means that the old figures are not useful as a rule of thumb unless your task is exactly what was studied. This paper notes how unhelpful that is, with ranges from 2-100 ms! They found thresholds around 25 ms for some tasks, but as low as 6 ms for others.
Keyboard latency is at the harder end of this spectrum: the users are focused, expecting a strong (high-contrast, new-signal) change in direct response to their action, and everything is highly trained to the point of being reflex.
When I'm typing text outside of games, I'm not waiting for a change before hitting the next key, but rather expecting things like text to appear or the cursor to move as I go. A while back I tested this: VS Code's ~15 ms key-to-character latency was noticeably smoother than 80+ ms (Atom, Sublime), and the Citrix system I tested at 120-150 ms (Notepad is around 15 ms normally) was enough slower that it forced a different way of thinking about it (for me, that was "like a BBS" because I grew up in the 80s).
n.b. I’m not an expert in this but worked in a neuroscience lab for years supporting researchers who studied the visual system (including this specific issue) so I’m very confident that the overall message is “it’s complicated” even if I’m misremembering some of the details.
The parent comment may be talking only about the network or Citrix components in the critical path. You also have to wait for keyboard input (often tens of milliseconds) and for double-buffering or composition (you might get updates and render during frame T, flip buffers to reach the OS compositor for frame T+1, and have the compositor take another frame to render that and send it to the screen for frame T+2; that's a bad case for a compositor, but you may be paying the double-buffering or flip latency twice). And it can take a while for modern LCD screens to process the inputs (changes towards the bottom of the screen take about a frame longer to display) and to physically switch the pixels.
120 ms end-to-end without Citrix would be quite achievable with many modern systems (older systems, and programs written for them, were often not powerful enough to do some of the things that add latency to modern systems). So if Citrix adds 120 ms, we already get up to your 'not immediate' number.
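A hedged sketch of how those stages stack up; none of these numbers come from the thread, they are assumptions chosen to sit in the ranges the comment describes:

```typescript
// Illustrative (not measured) latency budget for a local keystroke, using the stages
// listed in the comment above; every number here is an assumption for the sketch.
const stageMs = {
  keyboardScanAndUsb: 20, // "often 10s of ms"
  appRenderFrameT: 17,    // update + render during frame T
  compositorFrameT1: 17,  // buffer flip reaches the OS compositor for frame T+1
  displayProcessing: 17,  // LCD input processing, roughly a frame, worse near the bottom
  pixelResponse: 10,      // physically switching the pixels
};

const localTotal = Object.values(stageMs).reduce((sum, ms) => sum + ms, 0);
console.log(localTotal);       // ≈ 81 ms before any remoting at all
console.log(localTotal + 120); // ≈ 201 ms once a ~120 ms Citrix hop is stacked on top
```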
But I think you're also wrong in that, e.g., typing latency can be noticeable even if you don't observe a pause between pressing a key and the character appearing. If I use Google Docs[1], for example, I feel like I am having to move my fingers through honey to type - the whole experience just feels sluggish.
[1] this is on a desktop. On the iPad app I had multiple-second key press-to-display latency when adding a suggestion in the middle of a medium-sized doc.
Dividing those figures by 10 might be closer to accurate. 120 ms is quite noticeable. I know because I need to adjust the latency out of Bluetooth headphones when recording. Recording with those latencies sounds like a disaster and is very, very noticeable even with sound, let alone vision.
While my post was wrong, in fairness the context was specifically about keyboards. Nothing to do with audio. I suppose I should have been explicit but the context was keyboard entry.
What I meant to say is that, in my experience, visual and tactile things like typing have even stricter timing tolerances. If a delay is noticeable in audio, the same delay will be at least as noticeable, if not more so, visually.
We aren't talking about website loading speeds. This is about how quickly your mouse cursor moves in response to mouse movements and that latency needs to be 16ms or less.
Personally I can get latency down to 200 ms over the internet into a remote datacenter with WebRTC. The challenge in practice, however, is that running without a GPU will eventually starve the CPU, because it has to do intensive things like render 1080p video at 60 fps, which isn't feasible on a CPU-only machine. That CPU load then slows down the video encoder and the overall responsiveness (no, responsiveness doesn't mean a mobile layout here) of the remote desktop.
I recently had a bit of a rant about security people and how 70% of the truly dumb decisions in our industry can be attributed to them.
Your description is exactly why. Security people wedge themselves into the halls of power and then start making decisions that don't actually negatively affect them all that much.
I've literally seen a CISO who insisted everyone work in a way they themselves did not.
Sadly, the job of a CISO typically isn't "make the most pragmatic decisions possible to keep our infrastructure secure and running smoothly". In many industries, it's more like "join as many compliance programs as possible to expand the ability to capture revenue from regulated markets".
The CISO didn't make the decision to enforce password rotation - the compliance programs your sales team asked for did.
I'm the IT guy for a new non-profit. We aren't separated yet from the company that created us, but we're in the process of separating. I get to decide all this fun stuff.
When I started, I had a very brief talk with the IT team at the larger parent company and explained why this password rotation thing is stupid, as I came from a security background. They wanted nothing of it; set in their ways.
For the new non-profit that I'm helping spearhead, I'm not sure I'll get away from the password rotation entirely, but I can certainly set it to something more reasonable, like every 365 days, rather than every 60 days or whatever travesty most are dealing with. I'm pretty pleased about this.
> Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.
This is a really useful thing to keep in mind because even if you aren't directly bound by a requirement to follow the NIST standards, being able to point your policy people at that is handy if you can shift the conversation to “bring our policy in line with NIST” where there's a question about whether they'll later look bad for _not_ having done so. Typically these conversations are driven by risk aversion and things like federal standards help balance that perspective.
Aside from password rotation being a very questionable practice, it actually can cause productivity loss. In a big organisation like mine it can take up to 48 hours for a password change to synchronise across all the internal services. There's also the issue where some endpoint software still uses the old password behind the scenes and fails to log in too many times - causing your account to be locked. I guess you can see my frustration coming through.
I had the joy of dealing with some endpoint software like this in an organization that had mandated password changes every 30 days. Very predictably, people set recurring "change your password" reminders for the 1st of the month and the organization lost an entire day of productivity each month as they locked themselves out of their accounts en masse. So the beginning of the month was always a panicked, all-hands-on-deck day for the help desk as people were waiting on hold for hours to get their account unlocked.
Our penetration testers suggested we add password rotation, and I had to quote them the latest NIST guidelines which state "Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically)."
If they don't know better, it's not surprising other companies don't either.
> To your point, password rotation is considered an insecure practice because it causes people to append 1, 2, 3, etc to the same password.
A good solution to discourage this would be to have heuristics that'd make sure that the new password isn't too similar to the old one, but doing that without having plaintext in there somewhere is pretty difficult.
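A hedged sketch of what such a heuristic could look like; the edit-distance approach and the threshold are assumptions, and it only sidesteps the plaintext problem for the immediately previous password (comparing against older history without plaintext is the hard part the comment mentions):

```typescript
// Hedged sketch of a "too similar" check. It only works at password-change time, when the
// user has just typed both the old and the new password, so nothing needs to be stored in
// plaintext - but it also only covers the immediately previous password, not older history.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function tooSimilar(oldPw: string, newPw: string): boolean {
  // Threshold is arbitrary; "within a couple of edits" catches the classic "append 1, 2, 3".
  return editDistance(oldPw.toLowerCase(), newPw.toLowerCase()) <= 2;
}

console.log(tooSimilar("hunter2", "hunter3"));                      // true
console.log(tooSimilar("hunter2", "correct horse battery staple")); // false
```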
Another solution would be mandating that all passwords be randomly generated, but enforcing that would be difficult, because everyone who isn't used to having 99% of their account information in KeePass databases with randomly generated passwords would probably find that too cumbersome to remain productive.
This seems like a people problem that makes being secure essentially impossible, due to how people use passwords (e.g. "I just use one password across X sites because remembering multiple ones is too difficult" or "I just add a number at the end of my current password").
And others also mentioned the productivity loss, for when people are slowed down by the need to change their passwords. You might easily rotate Let's Encrypt certificates thanks to automation but when it comes to people, things aren't so easy.
At that point, you might just stick with whatever passwords you have, do some dictionary checks in the future, maybe have infrequent password rotation and otherwise stack on more mechanisms, like TOTP through whatever application the user has available, or another means of 2FA, because relying just on passwords isn't feasible.
> causes people to append 1, 2, 3, etc to the same password
It’s either that or they write them down. Because people are going to forget a password that changes every month, especially a password that has to comply with the complexity rules.
Isn't that just a characteristic of how they're evaluated? Any security error is the CISO's fault, "heads must roll", etc
Given that, they're likely to give you what you are asking from them: a brick with no functionality which will do nothing. You can't do anything with Brick, but Brick has zero outstanding CVEs
It seems to me that the reason why so many bad enterprise solutions are bought is because the buyer is not the user. It’s such a funny thing to me that people would spend tons of money without firsthand experience or at least someone they trust using it.
I've never used Citrix, but I remember when I had a T-1 (1.544 Mbps for the younglings) and I left a Remote Desktop session open on a laptop. Some days later I went back to the laptop and used it for an hour before I realized I was in an RDP session to a machine in another state. I wonder what Citrix screwed up to make their UX so different. Of course, a decent T-1 back then probably had better latency than today's consumer HFC connection.
Yeah, the T1 easily had enough bandwidth to smoothly send the 800x600 16-bit color desktop you were probably running at the time (guessing the timeframe based on the T1). Frame-to-frame diffs were probably much easier as well, with fewer shadows and graphical effects than modern Windows or Linux DEs have.
I don’t doubt Citrix has gotten worse as well but the job it had to do back then was much easier.
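A quick sanity check on those numbers (assumed, rounded figures) shows why the frame-to-frame diffing matters:

```typescript
// Quick sanity check on the T1 numbers above; figures are rounded assumptions.
const t1Mbps = 1.544;
const frameBits = 800 * 600 * 16; // one full uncompressed 16-bit frame ≈ 7.7 Mbit
const secondsPerFullFrame = frameBits / (t1Mbps * 1_000_000);
console.log(secondsPerFullFrame.toFixed(1)); // ≈ "5.0" seconds for a full-screen repaint
// So "smooth" only works because the protocol sends small frame-to-frame diffs
// (a cursor move, a few characters), not whole frames.
```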
> Don't blame Citrix. Blame the bastard in the head office that doesn't give a f%@$ about anyone not him.
> The guy (and it is a guy) signing the cheques either doesn't use Citrix OR uses it from the head office with the 10 Gbps link.
If you were sure about this, then as the consultant you could have said this sentence up front, or made this entire comment the 'first page' of your PowerPoint/PDF (to make sure other HN-ers are happy!)
This _very_ much depends on where you are. I had symmetric 10 Mbps at home in 1998, but when we moved to New Haven in 2008, Verizon couldn't deliver more than ISDN / T1 to large chunks of the city (we literally could have used a WiFi antenna to hit their regional headquarters, too). There's so much deferred maintenance around the world.
The last time I saw a place migrate its remote offices off a less-than-10 Mb/s network was around 2015. That same place had replaced its mainframe in 2011 because of an enormous price hike.
I quit a job because of citrix. Exactly like you said, very noticeable latency. It ate into my productivity as a part of my mental energy was going into waiting for feedback to my actions to appear on screen.
> part of my mental energy was going into waiting for feedback to my actions to appear on screen
This should not be underestimated. I was in a situation like this and I thought my short-term memory had stopped working. I would forget what steps I had already done because some actions took 10-15 seconds. I often switched to another task in the meantime and could not recollect the last step I did 10 seconds ago. Such delays are poison for intellectual tasks that need concentration.
There is no excuse for any modern device to impose such pauses. It is also far too expensive for any company: hardware is too cheap to let any user wait.
That's exactly it. Instead of tasks going "1, 2, 3" in my head, it was more like "1,...,1,...,1". I had to keep reloading every task into my working memory, with lots of brief pauses to think "did that click register?" or "when I typed those words, was the focus on that text box?". It's a truly torturous level of friction.
I didn’t deal with citrix but I did have to frequently SSH into cruise ships at a job some years ago. Goodness was the latency frustrating beyond belief. I didn’t last more than 6 months at that job.
Every single command input/key stroke could take 2-5+ seconds to display on my screen. Imagine trying to troubleshoot something critical in that type of environment. Luckily, I didn’t encounter anything truly critical, it was mostly maintenance tasks and such.
[ Disclaimer - I am responsible for a Citrix environment, but I'm reasonably proud of how well it works for our company ]
The technology behind remote desktops is fundamentally limited but I'm amazed at how good the user experience can be on a modern well-configured Citrix environment.
- The protocol responds well even on low bandwidth as long as latency is OK. On the office LAN it feels like a local computer.
- There is offloading for Teams[1], media streams[2] and even entire web browsers[3].
The tech behind this is impressive and it works pretty well (mostly!)
- For most staff it's easier to use a thin client or a minimal laptop.
- I can keep the Citrix environment patched and managed much more easily than a proliferation of laptops and home devices.
It can be a struggle at times and it's definitely not the right fit for developers. But it's got a lot of advantages and most of the time it works amazingly well.
I am at a law firm that uses a remote system like that. I have definitely gone two Citrixes deep for some things, so I feel this.
Honestly, though, it’s better than the laptop the other firms have given me. One took over 10 minutes to boot, iirc. It wasn’t just the hardware, there was just so much … stuff, multiple layers of antivirus seemingly hooking all of the system calls and fighting with each other, and a document management system with blocking I/O everywhere that was somehow so embedded in Windows that it could seem to freeze the whole system.
The thin client setup may have latency, but at least it is convenient and it gets there eventually. Though I would swear it’s getting slower, or maybe my patience is waning.
For me what worked is a setup where I used an Arch Linux laptop, ran f5vpn in Docker, and used the Citrix client with some tweaks through that VPN connection.
It was a lot faster than my colleagues' Mac / Win client, and even better, it was automatable to start up and run everything.
My employer blocks Linux clients for whatever reason. Even if you pass the initial checks, there is some kind of system on the Remote Desktop that detects your local setup and kicks you out.
So I use a KVM Windows machine with Virtio drivers. QXL seems to be the best video solution.
Does everyone here just suffer from exceptionally shitty IT departments? I've used Citrix for years and not experienced any of the chronic issues described here. Remember, Citrix was developed in the 1990s... the days of Windows NT 3.5/4.0 [1] and dial-up connections, and it was built to function well in those low-bandwidth environments (we're talking kilobits here, people; a 10 Mbps LAN was considered glorious at the time). For years ICA was superior to RDP due to its better compression over such connections. It sounds more like whoever set up your environments didn't know what he was doing, and the results are what you would expect.
Citrix performance can depend a lot on the apps - older Win32 apps work really well, as the object caching masked the latency of windows and buttons. Newer apps somehow seem to make the caching much less effective.
You simply get used to it. In many industry sectors (think CPU architects), multiple layers of inception are the norm (crossing multiple operating systems), and it is not strange for a keystroke to take 2 seconds, or for a menu to take 10 seconds to open and finish rendering. This "experience" is probably the reason why I can still comfortably work over a DSL link with plain networked X (even though I still find NX much more comfortable).
You really just adapt your way of interacting, and start planning every one of your actions more carefully instead of simply clickety-clacking everywhere as if you were trying to win a game of Starcraft. It's practically subconscious, and it really changes you.
I always think it must be much, much worse for blind people.
It also reminds me of people who complain that 5-minute build times "impair their productivity". How do you even work on _any_ mid-sized commercial codebase then? It's not that uncommon for a build to take hours (e.g. games), and in engineering it is also not uncommon for builds to take _days_ even on powerful server farms.
This is me right now, but only for a short time (I hope). I'm at an agency, currently on my first ever banking client. I'm on a Mac but I use Citrix Viewer to access a Windows 10 machine. The part I dislike the most is the context switching between Mac and Windows. First off, Windows doesn't natively let you customize the keys (I can't install anything, obviously; it's a bank client). Also, for some reason, the alt key doesn't work in Citrix Viewer, so I have to change a lot of my usual VSCode shortcuts to some custom ones. I've googled the issue and some people on Mac use a program called Karabiner[1], but I didn't want to install yet another program, so I'm just dealing with it for now.
Our agency has another banking client that I hear sends you a laptop; I'd much rather have that.
Hah now imagine using Teams through Citrix workspace.
One thing I've learned about Citrix is that it's a startup company with limited resources to handle all the bugs and crusty corpo-crapware layers. The client craps on my HDR setup. It installs a ton of crap you don't need, and it relies on crap like HDX software running on your machine, which last time I checked didn't have ARM binaries; that tech is also unavailable for the iOS clients. Meanwhile, RDP can do semi-decent multimedia stuff without any of this crap.
Maybe I'm immune to it, or just lucky, but two hops (Logging it at home to a Citrix Network Desktop to Remote Desktop to the PC in my office) has been shockingly fine. I live very close to the servers in question though, so speed of light isn't a limiting factor, and I have solid and reasonably fast home internet. It can work fine.
Don't worry, I'm sure that will all be folded into microsoft teams soon :)
More seriously, I'm reminded of something a friend always said.
You need a response time of 1/10 of a second or less for something to feel interactive. I remember that, but I wonder if the brain compensates for it, like it ignores your blind spot.
Worked with Citrix for years at different customers. Think it is more about setup. Server capacity, bandwidth and so on. Often Citrix was used for connecting to a bastion or a jump host. Then an extra hop to the target machine. Some setups were laggy. Some worked just fine.
I work in banking. Everything company-hosted I access via a split tunnel VPN. Everything else goes through the normal internet connection, with a company root CA inserted to sniff HTTPS traffic.
This has been my experience with Citrix also, although I have heard it can be set up to work better. Has anyone had experience with HP Anyware/Teradici? I am curious how it would compare.
I would have agreed until I started working at Google. Also, you should completely avoid having Remote Desktop and instead use ssh + an editor that works with remote files.
At Google we have a custom fork of VSCode running in browser and builds can either be distributed or run on my Linux VM to utilize build cache.
I liked it so much I started doing a similar setup for small side projects. Just boot up the Cloud Console on GCP and start coding.
Advantages are:
- Accessible from anywhere (I use my pc, my laptop, etc. The env is always the same)
- More compute (I can attach more CPU + more RAM flexibly)
- Less friction to start (minimum environment setup, most tools are preinstalled)
There are some adjustments that need to be made to your workflow. And for some applications you are dependent on having the correct tooling. However, my personal prediction is most companies will move to this type of development workflow as the tooling improves.
There is one more point I would like to add to the "less friction to start." This is the killer feature for education.
No more do students have to set up their own envs or figure out PYTHONPATH. No more do educators need to debug installation issues on 3 different OSes.
Teachers distribute a preconfigured env that they know works for what they are trying to teach. Students go straight into writing code and building things with minimum friction.
Learning to set up your env can be punted to a problem for later.
Learning to set up my environment to compile my first MUD back in 1999 was my first introduction to Autotools, ./configure, make, etc. I was motivated to solve the problem because I wanted to tinker.
I'm sure someone will say I'm gatekeeping, but seriously these low-level skills have served me my entire career, and I wonder what happens when every dev environment is just a docker pull away. Who learns how to build new environments?
That said, I have recently been tinkering with the Flipperzero firmware, and compared to the old Rockbox days when you had to install an arm-elf-gcc toolchain yourself and pray, embedded development has gotten way easier and this doesn't seem like a bad thing. I don't have an answer, just something to think about!
It's not necessarily gatekeeping, it's just assuming that all others who come through are motivated in the same way you are. The reality is that the same experience likely discouraged countless others and prevented them from pursuing development.
I'm the same as you, but I recognize not everyone is.
Feels like this is happening already. There is no incentive to learn the fundamental concepts. We need more people interested in the why, and not just a quick buck. Folks interested in the why is the only reason we are here to begin with!
On the other hand... it's great job security! Fewer and fewer people understand networking every day, it seems!
The kinds of things people are talking about - fiddling with autotools or PYTHONPATH - are not “fundamental concepts” or “the why” behind anything. It’s just tedious, boring nonsense. There are plenty of intellectually curious people who would be turned off by this stuff.
In my experience, "tinkering" with an environment before getting any code working was a recipe for believing "this is too hard, maybe I'm not smart enough to do this".
Getting a feedback loop early on is so important. Once you're convinced that you can write software, setting up an environment starts to look like a tractable problem.
Think of it like a product funnel. If the goal is to get people to learn X, making them learn Y as a barrier first is unnecessary. You risk people abandoning the funnel that way. It's still entirely possible to learn these tools some other way.
Yes, but if learning X beforehand required learning Y, then we might be expecting the value of learning X to be the combined value of learning X + Y.
It should be noted when adopting this approach, that you're lowering the expected value from the people who make it through the funnel. This may be counteracted by more people making it through the funnel, or it may not. The delta in total value is completely unknown.
Those who make it through the funnel with only X are likely more numerous than in the X+Y scenario, simply because there's less learning and fewer new concepts, so drop-out will be reduced. This cohort is probably more likely to go on to learn Y if we further assume Y was so valuable as to have been "required". In reality, requiring X+Y as step 1 backfires, because X is all they want and X may be worth 10Y to the student at step 1. The value of Y may increase to more than X after learning X.
When I learned basic HTML, CSS and JS: at step 1, JS had essentially no value to me. CSS sounded like it had value but was a bit nebulous and optional-sounding. HTML seemed like an obvious requirement for building a website, so that's where I started. Once I learned HTML, the value of CSS became obvious and I pursued it. JS still seemed esoteric, but later, once I got HTML/CSS down and wanted to do more, JS kept coming up as the solution. So I learned it too. Now I'd say JS is way more valuable to me than HTML/CSS, even though I learned it last.
This was in the 90s, BTW, and the path wasn't very clear back then. The first book I bought was actually a Perl book, and I quickly realized it was too advanced a place for me to start. I learned about Perl after noticing the /cgi-bin/ portion of URLs and wondering what that meant; it had to be something most sites were doing similarly. This was before Google, so whatever random search engine I used back then told me cgi-bin was associated with Perl scripting for web development. I continued to struggle with the backend stuff (JS was FE-only back then) until PHP3 came around. I'm self-taught, if that wasn't clear.
I mean, if you're self taught, you can take whatever path you want.
My comment was in the context of students, which generally is backed by public funding. I'm worried about getting less value for the same number of public dollars. That being said, maybe it does still work out as an increase in total value, we should probably do a study on that.
That being said, I'm self-taught too, so I actually benefit if these classes provide less value, as it's easier to convince business folks to hire me instead; but it's the principle of the matter in this case (also, I'm not really competing against recent college grads anymore).
I think the problem here is ordering. At the end we probably want them to know X and Y but we would rather they learn X then Y.
But to learn X they need to do some steps from Y.
If we can remove those steps we can make more people learn both.
But also in practice I don't think anyone struggling to set up their env before writing a line of code is learning much. They are just following arcane instructions until it works.
> wonder what happens when every dev environment is just a docker pull away
They learn later, when/if they actually need it. And for those who turn out to be unwilling or unable, well, there wasn't much chance they'd have taken the same path as you, anyway.
Speaking of paths, specifically path dependence, autotools and friends really are monstrosities and should rightfully be relegated to history. I hope that simpler, easier environment building really are the future, as you hinted.
If they're so helpless they can't figure out how to even get to an environment other than their corporate standard one, then they have, again, already exited the pipeline before this conversation becomes relevant. Someone incapable of setting up a linux vm was never going to do original ops work, regardless of what the computing ecosystem around them looks like.
> Autotools, ./configure, make, etc. I was motivated to solve the problem because I wanted to tinker. I'm sure someone will say I'm gatekeeping, but seriously these low-level skills have served me my entire career
What skills? 95% of using autotools etc. is tedious memorization or copy-paste, not something you learn anything useful from.
> No more do students have to set up their own envs.
That straight up sounds dystopian though. Speaking as someone who makes software for Linux machines, I'd really hate to hire someone that doesn't know how to play around with the OS side of things.
Handing someone a pre-built dev environment and playing god with them sounds like a great way to get stupid programmers who can't architect.
Check out Replit.com. They've added the ability to add dependencies. Having an easy way to hack on some Java with Maven dependencies already installed is pretty great.
I think this is possible using GitHub Codespaces as well…if you have a Java project with a Maven POM, the deps in the POM will get loaded at runtime.
I haven’t tried Python yet, but would imagine it’s similar. It’s all containerized, so you effectively have CI/CD you just don’t see it.
Huh? It's not like the children aren't using Google and Microsoft services by the truckload already. Every single one has an Android/iOS phone. They use Gmail and YouTube and browse the web. And the schools use it all too - MS Windows, Office, Google Classroom, Drive...
Practically nothing changes with a cloud desktop.
You mean a fully locked down device that allows no tinkering at all. Then they graduate to college and they're again suggested to use a fully locked down environment with no tinkering at all.
The flip side of this is that it's hard to do anything beyond the standard library for some languages. Using PythonAnywhere is great for std-in/std-out programs, but I couldn't figure out a way to import graphics libraries and such. The web is pretty limiting when teaching with anything other than JS/HTML/CSS.
Env management is pretty easy considering tools like conda are out there; that will carry you pretty far, especially in terms of education. You could create a conda env specific to your class and just email the YAML. It's also easy to write a script that installs Miniconda and builds the needed environments.
Remote desktops can work great if you constrain and structure what you do in them.
For me, this means not using traditional desktop environments like gnome/kde/windows/macos, but instead a full screen tiled window manager (i3 in my case) with different virtual desktops assigned to different work items.
Each virtual desktop is split in half between the IDE (vim in my case) and a terminal session for the code under development in that IDE. That's it. No silly weather or chat widgets (all that lives in my local laptop's traditional desktop).
The result is I never have to search for a dev task related tab or be unsure if the code I'm running is the code I'm editing (which can happen if you are working on many concurrent changes).
I liken the setup to the keyboard-driven text terminals you still see in some shops, hotels, and airport check-in desks.
In general, I think smartly crafting your workflow matters a great deal more than the particular tool you might use.
> Each virtual desktop is split in half between the IDE (vim in my case) and a terminal session for the code under development in that IDE. That's it. No silly weather or chat widgets (all that lives in my local laptop's traditional desktop).
I don't disagree, but if your editor is vim and you run/test in a terminal, why not do the whole thing in tmux in SSH?
Some of the software has a GUI, and some of the tools (i.e. unit test runners) display their output in a locally (to the remote desktop) running web app viewed via a browser.
If I were doing strictly text-based work, I might use tmux, but to be honest i3 scales down so well to the text-only use case that I probably wouldn't bother, since there is no upside to doing so.
Sure, but it's a lot easier just to open a browser in the remote desktop session. It's not like the FPS of the browser matters much when looking at unit test results.
On the 'cloud' (really our own cluster) we use at work, most of the nodes have 16 cores and 128 GB of memory; it's a much beefier system than what is available locally.
> Also, you should completely avoid having Remote Desktop and instead use ssh + an editor that works with remote files.
That's kinda stretching the definition of a 'desktop', isn't it? The sort of tasks someone uses Remote Desktop for seldom overlaps with what someone uses SSH for. Also it doesn't seem to be the point of the article:
Article: > I'm also going to restrict this discussion to the case of "We run a full graphical environment on the VM, and stream that to the laptop"
Though I think it could still work if the applications are built with this in mind. Think Google Docs, Sheets etc as cloud replacements for local Word, Excel.
SSH and Video aren't the only two protocols to interact with a remote machine.
There's not really a way around network dependence when coding at Google. You're not allowed to have code copied to a laptop, unless you get special permissions. Only teams that work on client code like desktop apps are likely to be given access. So this means to even look at code, you have to be connected to the network (via Chrome or a Fuse-based VFS.)
If you're using a desktop, there's not really a case for going offline, so being dependent on the network is ok.
I get nearly all my work done off a laptop, but I do find its weak CPU/memory a mismatch for my heavy use of Chrome.
Damn, when you said Google I thought you were gonna talk about Cloudtop, etc. +1 to your recommendation, but they do a pretty good job with Cloudtop too (for non-power users it is pretty usable).
Yeah, to be clear for Google work I am talking about the combination of Cloudtop (VM), Cider (IDE), Blaze/Bazel (Builds).
In addition you also need a version control / file sync system.
It's also nice to have some kind of network proxy especially if you are doing web dev. Tools or web services run on the VM and you just access it directly through the proxy on your local browser.
The integration/combination of these is what allows things to work.
For personal code this is Google Cloud Console. You can actually just jump into it. It has a built-in VS Code editor.
But at home it would be GCP VM + VS Code + Git.
GCP also has a built-in proxy. The only problem I have had so far is that it doesn't rewrite URLs, which can be an issue for web apps. I think it's solvable; I just haven't really tried yet.
There are some other solutions in the other comments as well.
You also should mention the use of CitC. With CitC, I can build/write code from my work machine at the office and then go home and gmosh into a cloudtop that uses the same network mounted filesystem.
I thought network filesystems were a terrible idea until I used CitC + Piper: really two incredible pieces of engineering infra. So many problems are reduced to just writing files to disk if you have a magical disk that acts like it is infinitely sized and everywhere all at once, with low latency, and versioned by the second. Whatever promotions they offered those authors and maintainers, and whatever black magic they had to invoke, it really was worth it.
Yep, I sort of glossed over it with "file sync", but I guess CitC is more than that. It's more like a workspace sync.
It acts like a view of the monorepo and holds whatever changes you make. Additionally it integrates with your version control and holds its state as well. For example any local commits or branches.
And this can all be accessed from the browser or the CLI on any connected machine.
There are many ways to approach this sort of thing. For example, I work similarly, with all code running on a powerful Slurm-managed cluster that I can access from anything with SSH, but I just use stuff like tmux and CLI editors that are already installed on the server rather than a GUI-based editor. We have an Lmod system with prebuilt packages of a lot of the software we typically use, with various versions represented, and we also use environment managers such as conda/mamba and workflow control tools which are straightforward to use. Seems simple enough, IMO.
IMO you don't really need it to be a full VNC-style remote desktop, though, or even have the editor run in the browser. You can get equivalent results with bazel[0] remote execution + cache servers and get a similar horizontally scaling build system, without VNC-style jank or full network dependence for your actions as a developer.
Another reason why google likes the remote dev experience is because it doesn't download code to the developers laptop, because they don't trust them.
They said they were using it for side projects. Even if Google is subsidizing the bill, I'm interested to know how much something like this would cost.
Assuming ssh and not scp, replacing traditional ssh with mosh in this setup could provide some interesting benefits wrt network dependency. If the connection were less brittle, and directories could be opened, cached and rewritten later on, after the connection was disrupted and re-established... that'd be awesome.
It's probably the minority, but I think there are people for whom it's helpful.
For example, I travel with an iPad Pro and a work laptop. If I want to tinker one evening, I can use my iPad instead of bringing a personal laptop. (This also applies to cloud gaming for some people, but I haven’t done that personally).
My partner is also a software engineer and we have a toy server at home with various things running on it (e.g. game server, Home Assistant, VPN, etc). We have a VSCode instance running so either of us can grab a browser and update the configs, without dealing with keeping things in sync. (IMO this is the "most obvious" use case - modifying remote files without worrying about sync.)
At work, I also have this setup. (My school had something similar too, just less polished because it was a while ago) The benefits there are big too. Besides everything mentioned before, it also means that there’s basically zero setup time. If you break your laptop, or forget it, or whatever, IT just hands you a temporary chrome book which you log into your work from a browser.
By comparison, my old job pushed a bad MacOS update that bricked my work computer, and they made me remote into a windows VM (AWS workspace) from my personal laptop to do work until a replacement arrived. I lost all my work/files/etc since it bricked unexpectedly, and that job had remote Linux VMs so I had two levels of indirection. Then I had to set everything up again, so I easily lost 3+ weeks of work due to that incident.
I have a website with a worker process doing RSS parsing that occasionally fails. It would be quite nice to be able to spend 10 minutes fixing trivial bugs from my phone while I'm out and about. Or from my iPad. I'm not doing feature development but this would be nice to have for things that are so easy I could do them now but instead must wait hours or days until I'm back in front of my dev machine.
And actually I have a desktop and a laptop that I do dev work on. More than once I've started a branch on the desktop one night but don't quite get far enough to push it up and the next day I take my laptop to the coffee shop and realize the code is still at home.
I mean there are always going to be niche uses for this.
My point is about corporations adopting this sort of thing because executives got sold on a fancy feature like "accessible from anywhere", but nobody actually uses it: 99.99% of the time people are not coding from "anywhere", yet they have to suffer the latency the entire time.
> executives got sold on a fancy feature like "accessible from anywhere", but nobody actually uses it: 99.99% of the time people are not coding from "anywhere", yet they have to suffer the latency the entire time.
This isn’t the pitch. Not at all. The pitch is “no code (or IP) on local machines that can be stolen” and “no downtime if laptop breaks… IT desk can keep a stack of chrome books ready for backup”. Combined with something like gmail and google docs, the laptop at some employees WFH house contains no business secrets ever.
I’ve never experienced the slightest drag of latency with this approach. If you’re running a compiled language, the compiler is surely the bottleneck. If you’re doing it for work, they’ll probably set it up so it’s always regionalized close to you from a cloud. Maybe fly.io should pitch this.
As a director at a previous job a few years ago, I almost introduced Eclipse Orion to our organization strictly to reduce issues with onboarding and junior devs.
I love when senior devs can set up their workspace how they like, but juniors and onboarders often need lots of handholding. Being able to spin up an IDE with exactly what they need with zero effort is incredibly valuable. We lost days and days of productivity because some developers didn't understand how to manage having both a JRE and a JDK on their machine.
This is a very useful feature the moment you have > 1 development machine. Complex build environments, dependencies, personal preferences all syncing seamlessly between all of them is a godsend.
I already do that in PyCharm. Everything else only needs to be set up once. But you're actually arguing about multiple machines, and I'm arguing about "code from anywhere", even on computers that don't belong to you. That is an overblown feature that I'd never use.
Latency is top priority for me. It shall not be sacrificed for any multi machine inconveniences.
VSCode is super popular and performant. It runs in a browser (Electron) natively. Running it in Chrome remotely is literally no different if you have a performant network.
I always found VSCode to perform better than IDEA based tools fwiw, especially if you want to keep a laptop on battery. Latency has never been an issue.
If you don't want to set it all up yourself, GitPod basically has this up and running, with a pretty generous free tier. Think VSCode in the browser, with a docker container (controlled by you!) bash prompt at the bottom.
Nothing beats working directly on a fast but quiet workstation sitting next to my table.
At least for me, the productivity gains associated with quicker builds, IDE resyncs (CLion, looking at you) or just being able to have email, chat, calendar and an active video conference running without making the system crawl to a halt or long latency spikes are huge. 3-4k for a machine that will likely last 2-3 years is nothing in comparison.
For the life of me I don’t understand why folks default to laptops for development. Yes portability is great, but most of us park our behinds at the same desk everyday. If I’m going to be out of the office (away from home) I take a laptop and remote into the desktop! Even M1 macs (I have one and love it), while powerful, just can’t hold a candle to a workstation class machine.
I'd love to do that, but my laptop's and workstation's state inevitably get out of sync leading up to "wait why doesn't this work ... spend a couple of minutes .. ah yes I did X on the other device".
(Before someone suggest "use docker": then I'd need a more powerful workstation and laptop :-)
And VNCing into my workstation from the laptop has all the drawbacks that Matthew described in the article.
If you do your work in VSCode, you can set up remote development over SSH pretty easily and it works really well (I use it to connect to my workstation in my home office when I work in a coffee shop on my laptop, or on the rare occasions I go into the office).
I have recently started doing this and it's excellent. I can just wander away from my desktop, take my laptop and go work somewhere else in the house, and I used something very similar when going abroad.
Yeah, already Tailscale / WireGuard user and SSHing all around.
Didn't know VSCode had a headless mode that can be driven over the net (which is what your description sounds like), will definitely check it out, thanks.
It just makes sshing between the machines by name very easy. You can do the same thing by assigning permanent IPs to all machines in the mesh and then updating all your host files across all the machines in the mesh. Life is too short for that.
Having set up OpenVPN a few times and troubleshot Cisco VPN, I'd say the thing unique to Tailscale is that it just works.
It takes a few seconds to connect each new machine. It took me way more just to find out how I should configure Cisco's VPN client, and I do not ever want to even think about OpenVPN again.
I've also maintained a WireGuard mesh where I distributed keys and set up /etc/hosts via ansible: add a host to the inventory file, run the playbook - simple.
Yet Tailscale is even simpler than that. And (for my purposes at least) it's free.
Not OP, but I think the purpose of tailscale w/ magicdns is to create a VPN connection directly between the laptop and desktop, regardless of the underlying network locations of either. I believe tailscale uses connection brokering so all connections can be outbound (no firewall policy / port forwarding). MagicDNS is probably just a quality of life improvement here.
Tailscale saves time in this. It does things like busting through NATs for you to get the VPN established, useful when on varying networks with the laptop, but yeah it is a wireguard VPN after that.
> (Before someone suggest "use docker": then I'd need a more powerful workstation and laptop :-)
Why do you believe that to be the case? Docker performance overhead is so minimal I highly doubt you'd be able to tell any difference compared to native processes.
I currently work on a project that involves 28 docker containers (edit: on Linux, so no extra VM overhead like on a Mac), and I definitely can tell the difference compared to native processes.
My main professional OS is a Linux install booting from an NVMe drive in a USB 3 case adapter. The work laptop I have been given only has a screen resolution of 1366x768 and 16 GB of RAM. I don't mind too much when using it as a desktop, because I have two full-HD 24" screens, but if I plan to be more mobile and work on one screen, or if I need lots of memory to boot containers and VMs, I boot the drive on my personal Lenovo. I also sometimes boot it on a desktop in my office, while letting the work laptop's original Windows installation boot and update itself so it isn't forgotten.
I use adhesive Velcro so that the drive is secured to the back of the screen and doesn't hang from the laptop.
I bring my laptop with me to other teams for questions. I also bring my laptop for presentations, demos and sometimes for refinements. I work at home 2 out of 4 days.
And I am no exception. The whole company I work for does this. I cannot imagine working with a workstation.
A big part of my team has crappy laptops and works just fine with a Citrix client. I am a developer and do not understand how they deal with it, but the BAs are OK with it!
> Even M1 macs (I have one and love it), while powerful, just can’t hold a candle to a workstation class machine
Not really true. My M1 Pro is performance-wise very close to my previous desktop with a Ryzen 5900X, but:
1. It doesn't take space on or below my desk
2. The auxiliary screen it provides is useful
3. I can unplug it any time and continue working from anywhere with the same performance, without having to sync up the development environment.
Before M1 Macs I would concur, but right now the major reason to pick a desktop is Linux availability (which is subject to change with Asahi), not performance.
My System76 laptop with Nvidia graphics is faster than the M1 Pro I have for work, has more cores, and 64 GB of RAM means I can run all my communications stuff, an IDE, a compiler and a local k3s, and the machine won't break a sweat. However, the battery will drain in an hour and a half doing all that. Performance is definitely not an issue on non-ARM machines; battery life is.
I have an M1 (the original) MBP and a Ryzen TR 2920X desktop (with oodles of RAM, multiple NVMe drives and 10 Gbps networking). The Mac, while significantly better than any Intel laptop I had before, still cannot hold a candle to the desktop, sorry.
It depends on your workload and codebase. I have a Ryzen 5600X in my desktop and for C++ work, my M1 Pro is quite a bit faster for clean or incremental builds. The desktop is still useful/required for some of my work (using x86_64 windows with an nvidia gpu) but I default to the Mac for anything that could be done in either place. It also helps that I prefer the Mac tools so it’s not just about the CPU speed.
That said, I’d rather find a new job than trade either system for a cloud desktop. I count myself fortunate that I’ve always been in a position to choose my computer and tools.
> I don’t understand why folks default to laptops for development.
I think there's a lot of cost/benefit that comes down to: depends on what you are building. I had lunch with a VR dev last week. He needed a big machine for huge MS builds. I do a lot of web and network programming, and a $1200 LG Gram (i7/32GB, 17" screen) is way more than adequate. The important thing is that employers understand that slow computers cost them a lot of money when they hobble developers with them.
If people are slacking off more when working remotely, then measures that make doing the job less frustrating seem likely to have outsized positive effects, by reducing that slacking-off.
(Maybe I'm assuming I'm more typical than I really am. I know that when the work I'm supposed to be doing is frustrating and annoying I feel much more temptation to do other things instead.)
True, employment costs out-compete costs for hardware very quickly. If your employee takes 10 minutes per day waiting on tasks because the system is too slow, you can instead buy a pretty decent rig every year.
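A back-of-the-envelope version of that claim, with assumed figures:

```typescript
// Back-of-the-envelope cost of that waiting; every input here is an assumption.
const minutesLostPerDay = 10;
const workDaysPerYear = 230;
const loadedCostPerHour = 100; // USD, fully loaded employment cost

const hoursLostPerYear = (minutesLostPerDay * workDaysPerYear) / 60; // ≈ 38 h
const costPerYear = hoursLostPerYear * loadedCostPerHour;            // ≈ $3,800/year

console.log(hoursLostPerYear.toFixed(0), costPerYear.toFixed(0)); // "38" "3833"
```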
How do you truly report that in a corporation? I haven't seen a way to disclose that a good amount of my time in a project is spent waiting on the crappy system they've constructed.
It's almost like they just assume the cost, or pretend it doesn't exist.
Don't make the problem be about people or process, just show the exact problem and how you could get more done if you had a way to continue working during a build. Put the problem on trial, and not people (don't call anyone in IT stupid, don't blame anyone for sucky processes, and do not under any circumstance indict the choice of tooling).
Take your slow laptop to lunch with your manager. Explain that you are starting a task you have to do three-four times a day that prevents you from working because your computer is maxed, compiling. At the end of lunch, let the manager know when the build stopped, and then discuss getting a faster or a second machine so you can work while building.
> Even M1 macs (I have one and love it), while powerful, just can’t hold a candle to a workstation class machine.
My M1 Pro is faster in some workloads than a small Dell tower sold as a "workstation". Of course I could buy a huge workstation with a 250W CPU or some kind of insanity like that, but then I suspect its power efficiency will be 4 times worse than the M1 Pro. The Dell tower already makes quite a good amount of noise under load while being beaten by a mostly silent M1 Pro.
> Yes portability is great, but most of us park our behinds at the same desk everyday.
Got to take exception to that. I'm a developer and I'm still required to get up from my desk to attend meetings etc., and I need my laptop in them. Or pair programming. It is usually a different kind of work, so there's probably a world where I could have a desktop plus something lighter like an iPad, but inevitably I'd end up needing to do something the iPad can't do and get frustrated.
I have a work laptop, but do 99% of my work on an identical VM that sits on my homelab Proxmox cluster. Working this way lets me work from any device, even my phone or iPad, from anywhere. It's encrypted, has all the standard security tools required for work, etc. Our VPN suite checks for all of that on connection. I have the added benefit of being able to provision it with a massive amount of resources that it'll only use when needed thanks to memory ballooning, plus quick backups and rollbacks thanks to LVM thin provisioning.
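The quick-rollback part is basically just thin snapshots under the hood; a rough sketch of what that boils down to (the volume group and LV names here are made up, not my real ones, and it needs root plus the lvm2 tools):

```python
# Sketch of snapshot/rollback on an LVM thin volume. Names are hypothetical.
import subprocess

VG, LV = "pve", "vm-100-disk-0"   # made-up volume group / thin LV

def snapshot(name: str) -> None:
    # Thin snapshots don't need a preallocated size.
    subprocess.run(["lvcreate", "-s", "-n", name, f"{VG}/{LV}"], check=True)

def rollback(name: str) -> None:
    # Merging the snapshot reverts the origin LV to the snapshot's state
    # (the merge completes when the LV is next deactivated/reactivated).
    subprocess.run(["lvconvert", "--merge", f"{VG}/{name}"], check=True)

snapshot("pre-upgrade")
# ...do something risky inside the VM, then roll back if it went badly:
# rollback("pre-upgrade")
```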
I do have everything sync back to the work laptop, so in the rare case I lose internet or have a hardware issue with the cluster I can continue working. But that's only happened once in the last two years, when a competing fiber provider cut a fiber line on my property while laying their own. (Not their fault, my current provider had the markings off by 50 feet, and even then the foreman gave me a gift certificate for the trouble.)
Even in the bad old days when I did work in an office, we worked some days in the office and some at home.
Even now that I work remotely, I still go home to see my parents for a week at a time and work from there. I definitely wouldn't want to be dependent on the internet.
Not to mention that in less than a month, my wife and I will be doing the digital nomad thing, working while traveling across the country for a few years.
My set up includes a portable USB C powered external monitor as a second display and my iPad as a third display. Of course I have a Roost laptop stand.
If I need to spin up resources, I use my own (company provisioned) dev AWS account and it’s just there.
Even the last 60 person startup I worked at would let us set up dev AWS accounts with the appropriate guardrails for development.
We had CloudFormation templates to spin up environments as needed and we could just tear them down.
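With boto3 that's only a few lines; a minimal sketch (the stack name, template file and region here are made up, not our actual setup):

```python
# Minimal sketch: create and later delete a dev environment from a
# CloudFormation template using boto3. Names, paths and region are assumptions.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

def spin_up(stack_name: str, template_path: str) -> None:
    with open(template_path) as f:
        cfn.create_stack(
            StackName=stack_name,
            TemplateBody=f.read(),
            Capabilities=["CAPABILITY_NAMED_IAM"],
        )
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

def tear_down(stack_name: str) -> None:
    cfn.delete_stack(StackName=stack_name)
    cfn.get_waiter("stack_delete_complete").wait(StackName=stack_name)

spin_up("dev-env-jane", "dev-env.yaml")
# ...work...
tear_down("dev-env-jane")
```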
You nailed it. Portability. Also, it's far easier for the company to collect its property back when it doesn't have to pay oversized shipping costs for a desktop.
Though a laptop in clamshell mode is rarely as good as a desktop, and for certain things I don't think it ever will be. For example, graphics work and a lot of scientific work just can't be done adequately on anything but a desktop.
I've asked for a workstation from corporate IT, because I'm nearly always working from the same spot, and would be okay with a Chromebook on the rare situations I'm working remote.
The cost of a beefy (but properly cooled) workstation + cheap Chromebook isn't much different than a corporate laptop. It's just not an option being considered anymore.
"Most of us" absolutely don't do that. My company has 3000+ people, and I can say with certainty that every single person works away from their desk at some point in the day. I would quit my job in an instant if I had to be tied to a desktop at a particular spot all my life.
Because the business only provides you with one machine, so if you need a portable one even one day in a hundred, then it has to be a laptop.
Buying or maintaining two devices per developer is too costly regardless of whether the pair (a cheap laptop and decent desktop) is cheaper than an expensive laptop.
This is also my experience. Large Rust/C/C++ code bases will easily compile 3-4x as quickly on a fast workstation as on a top-end laptop. I blame thermal design and power limits.
For Android development: a clean build of our project on an M1 Pro is 15 minutes, while a clean build on our build server (which is ultimately just a thick 11700K or something along those lines, so still relatively old) is 3 minutes.
At my job, our main software is a multi-million-line C++ codebase. It takes most devs 45 minutes to an hour to compile without a fresh ccache, and that's on a workstation with 8 physical cores. On my laptop, a fresh compile takes over 2 hours. This can be brought down to under 10 minutes with enough cores. Partly due to poor internal dependency management, it's pretty common that every git pull or rebase requires recompiling ~1/3 of the codebase, so waiting multiple minutes on every edit/compile/test cycle is the norm.
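For a rough sense of why "enough cores" makes that much difference, an Amdahl-style back-of-the-envelope (only the 45-minutes-at-8-cores figure comes from above; the 99% parallel fraction is a guess, since link steps and I/O keep a build from being perfectly parallel):

```python
# Amdahl's-law estimate of clean-build wall time vs. core count.
# Only the 45 min @ 8 cores data point is from the comment above;
# the parallel fraction is an assumption for illustration.
def amdahl_time(t1: float, p: float, cores: int) -> float:
    """Wall time given single-core time t1 and parallel fraction p."""
    return t1 * ((1 - p) + p / cores)

P = 0.99                       # assumed parallel fraction
T8 = 45.0                      # observed minutes on 8 cores
T1 = T8 / ((1 - P) + P / 8)    # inferred single-core time

for cores in (8, 16, 32, 64, 128):
    print(f"{cores:3d} cores: ~{amdahl_time(T1, P, cores):.0f} min")
# -> roughly 45, 24, 14, 9 and 6 minutes respectively
```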
I guess this is why I ultimately went for a fully spec'd Macbook Pro. It's the price of a car but the value of having workstation class performance anywhere I go makes it easily worth it.
It depends what you're doing. For a normal web dev workflow, I have yet to see my M1 MBP be anything but flawlessly responsive. I'm sure there are other workloads where it's different
I have the 2019 16-inch i9 MBP for work, and even that has served me pretty well for almost 2.5 years. I'm fairly conscious about what I have running at any given time, I force-quit out of apps that I only open occasionally to free up resources. Sometimes the fans will get going quick if I'm doing a lot (running Java services, in a Teams call, etc - on top of whatever the hell processes are being used by jamf, VPN, and zscaler) but I can't recall it ever "slowing to a crawl." It mainly just gets hot until I'm done with one of the big tasks the laptop is currently doing.
So the issue is the corporate-mandated malware. I usually blame corporate malware for performance issues on any modern Mac or Windows PC.
But all video conferencing software sucks. I have to use them all on occasion depending on the client and usually the only one I actually keep installed instead of using the web version is Chime (yeah I know how do you say where you work without saying where you work).
Oh, I just noticed you said you have an x86 Mac; yeah, they all suck when it comes to fan noise and throttling.
Yep. Having worked in these environments, this solution is almost always sold to companies that are working around shitty, hard-to-reproduce software stacks, staff trust issues, scale-up difficulties and checkbox security cargo cults. The usual result is increased staff turnover, increased cost and decreased productivity, most of which they still have trouble rationalising or acknowledging.
You don't want to work for those companies.
It's notably different if you have a cloud VM running linux and you're connecting to it with VScode or something over SSH. That's borderline acceptable. The reality is usually some horrible AWS, Azure or Citrix portalised solution however.
It’s a miserable experience from top to bottom. Onboarding a new developer takes much longer and is far more tedious than one might expect. There are multiple layers of security employees must navigate. And when something breaks, anywhere, it’s a huge pain to sort out the source of the problem, find the right person responsible, and get something fixed.
If you find yourself in an organization that thinks this remote desktop environment is a great idea, do yourself a favor, if you can, and leave. You’ll give other devs more incentive to push back and make this a thing of the past, like “thin clients”.
RDP works fine for Windows honestly, but in Linux the only decent solution is nomachine and sometimes not even that.
Anyway, in my company they decided to hand each of us a company laptop and have us connect through VPN to the corporate network, with shared drives, and it's the best solution IMO.
I worked over RDP for a couple of years. It's not terrible but it's not too good either. You pretty much have to have a wired Internet connection and there are still problems with Alt+Tab and high DPI displays.
That's a reasonable compromise from your org. Good on them. I was suffering with corporate OneDrive. Fortunately everything I do ends up in git anyway so I just turned it off and don't use it.
I work almost daily with RDP when I'm working from home, and I have to say that most of the time I almost can't tell whether I'm on the remote machine (I work full screen) or the local one, unless I'm playing a video or using a graphically heavy application. But it is very true that a good wired connection is needed (at least 100 Mbps, with low latency, ~15 ms). I tried this on an LTE connection (a few Mbps and quite some latency, >150 ms), and it's a pain.
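To put rough numbers on why the link matters so much (raw, uncompressed figures only; real remote-desktop protocols compress and delta-encode heavily):

```python
# How many *uncompressed* 1080p frames per second various links could carry.
# Purely illustrative; actual RDP/Citrix traffic is far smaller than this.
width, height, bits_per_pixel = 1920, 1080, 24
frame_bits = width * height * bits_per_pixel      # ~50 Mbit per raw frame

for link_mbps in (5, 100, 1000):
    fps = link_mbps * 1_000_000 / frame_bits
    print(f"{link_mbps:5d} Mbps -> ~{fps:.1f} raw frames/sec")
# Even 100 Mbps only moves ~2 raw frames per second, which is why the
# protocol's encoding (and, above all, the latency) matters more than raw speed.
```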
OneDrive and SharePoint have been mostly ok for me… but there are real limits. They’re fine for a large number of medium sized files that you collaborate on with a handful of people. But then when you have a 100MB PowerPoint with a dozen contributors it falls over and can’t get up. I’m so annoyed they killed Slide Library…
> staff trust issues, scale-up difficulties and checkbox security cargo cults.
> You don't want to work for those companies
I can’t defend the likes of Citrix but I’ve been the guy who has to tell an intern on their last day to hand over the flash drive with code we know they copied over the day before. Sometimes avoiding those issues is easier.
Also weird tech stacks are a real issue (but there are lots of developer-native tools for the job).
> It's notably different if you have a cloud VM running linux and you're connecting to it with VScode or something over SSH. That's borderline acceptable. The reality is usually some horrible AWS, Azure or Citrix portalised solution however.
100% agree. VSCode and VM is my only accepted solution now.
Absolutely the same experience here. The pay is often nice because they also have difficulty attracting developers. Absolutely not worth it in my opinion.
> It's notably different if you have a cloud VM running linux and you're connecting to it with VScode or something over SSH. That's borderline acceptable.
Layer 4 beats layer 3 in my experience.
I also find that remote workspaces have advantages that offset the latency and performance issues.
Being able to quickly spin up or clone new workspaces and isolate software dependencies is a huge advantage. It can help a lot when dealing with multiple Python environments or JavaScript dependency trees.
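For Python at least, the isolation half of that is cheap whether the workspace is local or remote; a minimal per-workspace sketch (the paths and requirements file are made up, Linux/macOS layout assumed):

```python
# Create an isolated Python environment for a workspace and install its
# dependencies into it. Paths are hypothetical.
import subprocess
import venv
from pathlib import Path

workspace = Path("workspaces/feature-x")
env_dir = workspace / ".venv"

venv.create(env_dir, with_pip=True)   # stdlib equivalent of `python -m venv`
subprocess.run(
    [str(env_dir / "bin" / "pip"), "install", "-r", str(workspace / "requirements.txt")],
    check=True,
)
```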
I don't get the use case. Why would you even consider using a cloud desktop?
Even a very low-spec laptop is going to run a simple graphical desktop environment like Xfce just fine. Watching a youtube video, browsing the web and even video conferencing can be handled with any new-ish laptop.
And in reality, you still want a reliable laptop with decent keyboard, long battery life, good display and so on. So you won't end up on a low spec machine to begin with.
For computation heavy dev stuff a simple SSH access is good enough. It can be a very smooth experience with a locally running VS Code or something.
In my opinion, developing is not a really good use case. Some of our team develops using VSCode + SSH against a remote VM.
One of the best use cases we've found is education and, specifically, trade schools. There are some trade school courses that require really specific software (image and sound, designing electronics, interacting with proprietary robots, etc.), and it's a painful experience to manage all of that, add new programs, etc. (some trade schools have 60+ courses, each one having different subjects and different software through the year!) By having cloud desktops, the teacher can create a template with their requirements and share that template with the students, and if the requirements change, it's as simple as modifying the template and sharing it again.
Also, most of the public schools here are underfunded, so they end up with really old machines, and the cost of renewing a whole classroom gets really high: let's say a new machine costs between 600€ and 1000€ (depending on the trade course requirements). If you have 30 machines for each classroom, it's something around 24k. (Then there are lots of classrooms, you get the idea)
By having "cloud" desktops, there's no need for renewing old hardware, since you can have something like xfce + the viewer, and all the systems can easily manage that load (we even have classrooms with RPIs), and this can be a huge money saving
In the end, cloud desktops aren't the best option for all the use cases, as the author puts it:
> Overall, the most important thing to take into account here is that your users almost certainly have more use cases than you expect, and this sort of change is going to have direct impact on the workflow of every single one of your users. Make sure you know how much that's going to be, and take that into consideration when suggesting it'll save you money.
> Why would you even consider using a cloud desktop?
I've travelled to and through countries (e.g. in the Gulf, France, India) with my work laptop where I was deeply uncomfortable having that data on hand. Taking a clean machine and remoting in when needed removes a lot of paranoia points.
Please don't say you don't trust encryption. There are known cases where even state actors could not crack encrypted devices. Not to mention that the remote connection you'd be relying on would be easier to monitor and possibly decrypt anyway.
Sure, in theory you would need a kill switch in case special forces come through your window while you're working on your laptop and force you to take your hands off it, but I doubt you live such an interesting life for that to be a realistic threat model.
While this is a good idea, note that in some countries it is an offence to not hand over keys or passwords when requested (or can rapidly become one - like in the UK) so not carrying data with you in the first place can defend against that.
What is stopping them from forcing you to give access to your cloud provider, though?
I think there are solutions for making hidden partitions. You would have to create a clean, plausible system to show potential attackers.
I still feel that cloud providers are a bigger attack surface than encrypted local data. To get your cloud data, an attacker would just need to compel you to give up the password. With local data, they also need physical access to it. You could, for example, decide not to take your laptop to a potentially dangerous meeting and store it somewhere safe.
Plus, cloud providers have way more attack surface area. They get hacked regularly. Some state actors already have back doors or can otherwise compel the provider to hand over your data.
The more I think about it, the more I think storing sensitive data in the cloud is not a good idea for privacy and security.
> What is stopping them from forcing you to give access to your cloud provider, though?
Here's my thinking: If you're travelling to a country with nosy officials and you needed access to a lot of sensitive data, if it were on your regular (but encrypted) hard drive then it would be more visible if they asked to see the machine. With that data online, it could be in a system you only access by a URL you remember which they can't see. You can show them a normal desktop.
> I still feel that cloud providers are a bigger attack surface than encrypted local data.
If you are actively being targeted, I agree. I was thinking more the "curious official" folks seem to run into when travelling. Since the mere possession of certain plain text documents is a criminal offence in my country, this has the potential to catch people unawares.
> I think there are solutions for making hidden partitions. You would have to create a clean, plausible system to show potential attackers.
This is a good tradeoff and would probably be fine unless they're really out for you - a whole other ballgame.
> What is stopping them from forcing you to give access to your cloud provider, though?
The fact that a) the cloud provider is in a different jurisdiction b) many countries have very broad "anti-hacking" laws that they'd be breaking. It's not by any means a "naturally safe" way of working, but under the current hodgepodge of laws it has some benefits.
This isn’t the intended use case, but one upside of cloud desktops is that if I ever forget to bring my work laptop with me, I can RDP from a friend’s computer, etc.
In one particular industry that is rife with cloud desktops you can be trusted to invest or trade $X mln dollars of someone else’s money on a daily basis, or to model out a $Y billion dollar M&A deal, but God forbid you try to install VSCode or MobaXTerm on your own.
IT presumably got tired of being bombarded with application install requests, so one solution is to use vendorized cloud desktops that come with pretty easy tools (for them) to install applications.
> In one particular industry that is rife with cloud desktops you can be trusted to invest or trade $X mln dollars of someone else’s money on a daily basis, or to model out a $Y billion dollar M&A deal, but God forbid you try to install VSCode or MobaXTerm on your own.
I don't understand when and why so much power was delegated to IT w.r.t installing software. The FSF needs to start fighting IT and device management policies before talking about open source software.
It's because people install malware a lot. Usually it comes along for the ride with pirated software, creating a 2x headache. I remember when they introduced similar policies at Google for Windows workstations - the stated rationale was that Windows users would warez literally anything and this was independent of job role or position. Senior engineering managers would be warezing things and it would come bound to malware. So they moved to binary whitelisting, eventually :(
Linux avoids this problem mostly because it doesn't have much commercial software to pirate in the first place.
Corollary: your company might force you to use a cloud VM desktop _even when your laptop is significantly more performant than the entire server hosting these VMs_.
When your game developer has and needs a $4k workstation and works from home half the week, there's no reason to buy them another machine and have them maintain two separate workspaces; we just give them Parsec.
All our staff seem happy and we don't get complaints. Author hasn't tried modern tools it seems.
Another use case: mobile people with laptops who sometimes want to hop into a play test or show a game off to a vendor. No need for them to carry a gaming laptop for the 99% of the time they don't need it, when an X1 Carbon + Parsec to a beefy box works fine.
If your definition of zero trust includes the endpoint devices because they are in an area that the general public can access.
If you want access to your desktop from multiple locations. Ex, at my local hospital the staff can tap their badge to any computer and instantly reconnect to their desktop exactly where they left off.
If you are in a multi-site scenario but your big LOB app hates the internet, so you need all your clients to be in the same building as the server. This is actually the reason I deploy VMware Horizon... I'm not sure what Jack Henry and Fiserv are doing to make their overblown CRUD apps so network-heavy and inefficient to operate, but I'm happy they are finally rolling out their own cloud-first apps so they can deal with their own garbage instead of outsourcing their support to their customers' IT guys.
If you literally just can't acquire hardware because of a pandemic and need more compute than you have on hand.
Your definition of zero trust can never include the endpoint devices. Those get to see everything you type on them, and they have the same level of access to your services that their user has.
Instead, with a remote desktop you are only adding a few more vulnerabilities. It can never remove any.
(As for the other use cases: there is some nice work on program portability, culminating in fully distributed OSes, but those saw no adoption. Instead, people prefer to hack distribution on top of the piles of hacks that are modern OSes. Obviously, it doesn't work well.)
It's a pain to manage multiple systems/OSes that don't have functional parity. My Steam Link, tablet and occasionally my phone remote-desktop straight onto my desktop with all the customizations I'm used to for daily tasks. It's just nice not having to adjust. The only problem (relatively new) is DRM preventing many streaming services from displaying video.
Contractors: no way to take code off premises (assuming proper security settings on the VMs), and it's easy to get new instances instead of waiting for crap Dell and HP dual-core laptops with 8 GB of RAM and a 256 GB HDD.