Ask HN: Viable tech for hotdesking in software dev?
34 points by kierenj 9 months ago | 35 comments
We're about to move into new offices, and something that appeals is the potential for developers to work in smaller 'project' areas. Thinking semi-open-plan: quiet rooms for getting stuff done, plus semi-partitioned areas (not closed rooms) with a few desks for those working on each project.

Anyway, part of this idea involves moving between areas or projects. Laptops are one obvious answer - possibly outfitting desks with additional monitors and equipment as necessary. However, we build fairly chunky (large) enterprise systems, and running databases, FE tools, BE tools and the app itself (which might consist of a few components/services) sometimes takes a lot of resources. In terms of performance, battery considerations, upgradability and cost, desktops have many advantages. Point being, for the cost of a killer laptop, the corresponding desktop machine wipes the floor with it, and I'd favour the responsiveness and performance.

My question is: is there a viable tech solution to facilitate modern (backend/fullstack) software development using some remote access/terminal arrangement? In the same office - gigabit LAN etc. - with dedicated machines hiding in a cupboard or rack somewhere. BE/fullstack dev is currently in Windows land.

I guess for me that means working with multiple monitors and ideally multiple desktops.

In years gone by I've seen Citrix, terminal services, RDC, VNC... but I'd like to know whether that tech is viable for software engineering itself, and what kind of issues might crop up.

(Struggling to find an appropriate place to ask this question. Forgive me if this is off-topic for an Ask HN, but it's about efficiency in a growing team of hackers... if I can swing it that way!)




From the Valve employee handbook (http://www.valvesoftware.com/company/Valve_Handbook_LowRes.p...):

> Why does your desk have wheels? Think of those wheels as a symbolic reminder that you should always be considering where you could move yourself to be more valuable. But also think of those wheels as literal wheels, because that’s what they are, and you’ll be able to actually move your desk with them.

> You’ll notice people moving frequently; often whole teams will move their desks to be closer to each other. There is no organizational structure keeping you from being in close proximity to the people who you’d help or be helped by most.

> The fact that everyone is always moving around within the company makes people hard to find. That’s why we have http://user — check it out. We know where you are based on where your machine is plugged in, so use this site to see a map of where everyone is right now.


I've had a good experience running Windows 10 on Azure DevTest Labs VMs accessed via Remote Desktop from Mac and PC. Even with the VMs running in Ireland (I'm in London), they're very snappy, and I can happily use Visual Studio, Eclipse, etc. Even video and light 3D work well. I wrote up a blog post on it: https://www.mcleodmoores.com/blog


Inspired by Valve, here's the $15 solution we use:

https://www.amazon.com/Mount-Adjustable-Universal-Computer-H...


Desks with wheels; each desk with an ethernet switch and a power strip so you're down to two wires when you move the desk. Moving is really simple.

Additionally, teams may want to rearrange desks (e.g., in a circle, or back-to-back, or whatever). There's no reason not to have flexibility here.

You'll find that most people will still accumulate stuff, and will need some kind of storage. People also get attached to their chairs and their working posture (sitting/standing, monitor height) as well as their particular keyboard and mouse brand, etc. -- hot-desking seems bad for this.


I've often thought about this myself, and there's really no perfect solution.

The most practical option is just to give people the hardware they need, then facilitate their comfort and mobility accordingly. Using a desktop and a laptop simultaneously at the same desk is very common. I'd also suggest desks with wheels.

As far as remote solutions for your situation, I don't think there are any acceptable ones. Low-latency video streaming wouldn't even cut it. Personally I'd limit myself to solutions that actually run the display signal itself. There might be some clever things you can do with the latest versions of Thunderbolt/USB/HDMI/DisplayPort; I'm not fully up to speed.

If you do end up theorycrafting that approach, one additional point of consideration might be virtualization with direct hardware passthrough. You could probably do some cool things like assign one GPU per hotdesking location, and pool any unused GPUs as computing resources. Storage could be a similar story, with neat separation.

That said, running your own on-premises AWS can become very complicated and very expensive. I wouldn't go that route unless you're really, really set on the hotdesking thing.


There is an Amazon product to this effect, but that's obviously not local. I know one team inside a very large engineering organization that used x2go to remotely access an Eclipse developer environment with all the resources/separation of duties available. I also visited a developer site that was rigged with a Windows Terminal Services setup and two monitors plus a keyboard on every desk, and that actually worked fairly well since the true horsepower was in the VM farm in the lab. Active Directory supports profile roaming as part of its core functionality, so it's easy to transplant yourself around the place.

Generally (I was on an interview bender recently) I see desktops with raised monitors ganged together (DisplayPort supports daisy-chained monitors) and a keyboard, and everyone is issued a lightweight laptop and defers all the beef to the cloud/VM farm.

I keep going back to Windows Remote Desktop as really the best remote desktop solution. It supports multiple monitors, and if you're in a Windows environment with AD you can roam from desk to desk trivially.


Thanks, great to hear of people actually using that kind of solution.


I've had good results in the past running a fairly complex VoIP + webapp call center entirely from custom code based on PXE boot and Linux. (Lots of universities used to use this in their labs; I don't know if it's still common these days.) This would also work for developers.

You can trial it quite simply: all you have to do is have the user's login network-mount a home dir and you're in business.
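For illustration only, a minimal sketch of that idea in Python (in practice you'd more likely wire this up with pam_mount or autofs; the server name and export path here are made up):

    import getpass
    import subprocess

    # Hypothetical NFS server exporting per-user home directories.
    NFS_SERVER = "homeserver.example.local"

    user = getpass.getuser()
    subprocess.run(
        ["mount", "-t", "nfs",
         f"{NFS_SERVER}:/export/home/{user}", f"/home/{user}"],
        check=True,  # raise if the mount fails
    )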

In general laptops cost quite a bit, break a lot, and people want to take them home, which creates issues. I think a lot of companies just give developers laptops now. However, it can be quite nice for the developer to have "not at work = down time" as an option, larger screens and keyboards are generally smiled upon, and desktops are WAY CHEAPER. You can even go 100% diskless.

PXE usually means a centrally supplied kernel rather than individually compiled kernels, though, which can be annoying if people want funky features. However, you can make diskless the default and just say that those who want to sidestep the system can boot from an external SSD.


This is great advice. We do something similar with a terminal device running Linux which gets you a multi monitor Windows 7 session. Want to move? Lock your session and log in at another desk and you’re back in business. We provide this for thousands of employees, both on site and working from home.


If you are considering hotdesking, you don't care about developer productivity.

https://www.joelonsoftware.com/2006/09/07/a-field-guide-to-d...


You could have made the same point without getting personal or ascribing malice, couldn’t you?


Not everyone hates hotdesking. Not everyone likes hotdesking. Why not give your employees the option and leave it at that?


Because hotdesking doesn't affect only the person moving. It affects everyone that person moves away from, and everyone that person moves toward.

If everybody cares, then as you note, not everyone agrees that it's good. If few or no people care, then why hotdesk?

Either way, someone has to set the policy. Even at Valve, with no structure, someone has to say "we hotdesk, it's our culture". Someone has to sell the haters on it (or tell them to take a hike).

And even if only one person hotdesks, depending on that person, that could still be enough to leave an impact.


Try it before committing, and if you do go for a remote access system make sure that everyone is happy with it.

Have you considered personal workstations on movable desks as apparently used by Valve?


Hotdesking seems a good idea until you have people who have to take kids to school, or someone who has a 9am meeting with a customer, comes into the office later, and discovers (s)he cannot sit anywhere.

Don't fall into the trap of VoIP phone systems that don't allow you to be logged in at two places at once (usually for licensing reasons): someday you (or your colleagues) will be hunting for the VoIP phone that someone forgot to log out of.

For remote desktop, etc.: this is very mature technology; try to pick the option that blends best with the rest of the tech stack you already have. Citrix, for instance, works great, but its ecosystem is not trivial; the best part is that they even have an integrated VPN (NetScaler) you can use.

On the hardware side, take a look at HP Moonshot servers. I bet there are equivalents out there from other brands, but those are worth seeing: 4.3U servers that can house 50+ physical desktops with their own storage.


Red Hat has something for this that's tied to their acquisition of Qumranet/Spice [1], but I've only seen a demo video so I can't speak to its reliability or suitability for any given deployment. I'm also not sure what the attached marketing terms are if you want to research a supported solution. Basically, each developer's "real" machine is a VM on a beefy server, and terminals are interchangeable. In the demo I saw, each terminal had a contactless smart card reader. When the card is put on the reader, the terminal connects to the corresponding VM. Take the card off, and the terminal locks or switches to a transient/guest VM.

[1] https://www.spice-space.org/


The marketing term is "VDI" and it's commonplace in Windows enterprise networks.


Eclipse Che might be worth looking into. I'm not sure how mature it is, but it gives you a web interface to Eclipse running on another machine.

https://www.eclipse.org/che/


Do the applications need to be running on the developers' machines? A good deployment process that allows quick, one-click automated deployment of your latest changes to a testing environment can help a lot with that.

Otherwise, modern high-end laptops are quite powerful if you pick the large models rather than the thin 13-inch ones, and you can certainly run a bunch of virtual machines/Docker containers/whatever on a maxed-out laptop while it stays perfectly responsive. But if people aren't deploying/running/testing everything on remote servers, you might want to give them both a killer laptop and a beefy desktop; the cost isn't that high.


Yes, you're right, and we're part of the way there with automated, immutable deployments. Debugging locally is still common for us at the moment, though, and I'm looking for solutions to the situation we have right now.


A large proportion of the workstations at the company I work for use Teradici (I'm not affiliated with that company at all) to implement the machines-in-the-machine-room idea you mentioned. They're basically remote units providing keyboard/mouse/display connections over Ethernet. In our setup each desk is always connected to the same machine in the rack, but I guess you could configure them so that when you change physical location you can still connect to the same machine.


I just started wondering how effective a dumb HDMI-over-Cat6 system could be. (I have no idea how Teradici works.)

The main hassle would be switching everything; you could build a signal switcher(/multiplexer!) using FPGAs, but that would be expensive and take _quite_ a while to do.


We do that in broadcast. We can have a matrix on the order of 1000 inputs by 1000 outputs, with any source routable to any destination. These tend to be really pricey, even before factoring in the miles of coax cable.

In fact, as an industry we're moving to IP - a modern video stream is in the 12Gbit range, so that's a backplane of about a dozen terabits, which isn't cheap either. On top of that, one lost packet is a big problem, so we dual-stream it all.


Oh _wow_. I remember looking at tiny 16>2 HDMI multiplexers a while back, IIRC they were a couple hundred.

I think I've seen the coax pools for 1k>1k units in cabling pictures online... these would _start_ at 5 figures, I presume, with full installations probably even rolling into 6 figures?

And... double wow, 12Gbps. Is there nothing like a realtime, lossless, zero latency compression scheme that works for video? Hmm, Huffman coding would probably only chew off a few Mbps, and not make much of a dent.

It sounds like video transmissions run over UDP. That's kinda insane... but TCP wouldn't work either; I'm guessing the transmission format is probably so closely tied to the wire protocol that repeated frames would give everything major indigestion too.

This has been a cool TIL


UDP yes. Repeated frames introduce latency. Broadcasters are used to latency measured in lines -- that's microseconds. In the analog days an extra foot on your cable would mean the difference between out of sync or not in the chroma domain.

We use SDI rather than HDMI, but fundamentally it's a container, with an active picture frame in 4K video of 2160 lines, with 3840 samples per line. Each sample is a 10-bit sample of the luma, and every other sample gets 2 extra 10-bit samples of chroma (the eye being less sensitive to chroma). This is 4:2:2 sampling.

This means that a single 4K frame is 2160x3840x20 bits, and in the Americas there are about 60 of those ~165Mbit frames every second. SDI adds various things like blanking thanks to its legacy as a digitised version of PAL/NTSC. Audio is tiny, <15Mbit/sec.

Normal HD, with interlacing, is far more reasonable, at 1080x1920x20x30ish, about 1.2Gbit (less in 25fps countries), but SDI adds another 300Mbit a second on top of that (and UDP/RTP adds overhead too).
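For anyone wanting to sanity-check those figures, a quick sketch of the arithmetic (active picture only; the 20 bits per sample is 10 bits of luma plus, on average, 10 bits of 4:2:2 chroma - blanking, ancillary data and audio are extra on top):

    # Rough active-picture bitrates from the figures above.
    BITS_PER_SAMPLE = 20  # 10-bit luma + (on average) 10 bits of 4:2:2 chroma

    def active_picture_gbps(width, height, frames_per_second):
        return width * height * BITS_PER_SAMPLE * frames_per_second / 1e9

    print(f"4K 60p: {active_picture_gbps(3840, 2160, 60):.2f} Gbit/s")  # ~9.95, hence 12G-SDI
    print(f"HD 30i: {active_picture_gbps(1920, 1080, 30):.2f} Gbit/s")  # ~1.24, ~1.5G with SDI overhead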

The cheapest SDI routers you can get tend to be Blackmagic ones, which are fine. There are a lot of people in the industry who remember the days of analog routers, where they were legitimately expensive. Since SDI came out, though, the electronics is little more than a switch chip. A 40x40 matrix will cost about $1300, but prices increase exponentially.

On WAN links we tend to use compressed video in the 5-30Mbit range (5Mbit from somewhere skanky with poor internet). We have forward error correction on that, but the standards aren't great at high bit rates (only coping with 25 lost packets). A 20ms outage on a line with that type of error correction causes on-air disruption.

Using something like ARQ, which allows missing frames to be repeated, adds latency; on a link from Europe to Singapore, for example, it could add 600ms. That's on top of the c. 100ms propagation latency and 400ms encode/decode, pushing the latency from half a second to over a second, and people start getting grumpy. There are still no guarantees either - if you were to lose 100ms of traffic at 30Mbit, you end up peaking your traffic far higher than the 30kbit/ms that you normally send: to retransmit that 100ms within 100ms, you need to peak at 60Mbit.
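Putting numbers on that retransmission peak (a toy calculation; it assumes the lost window has to be resent within the same 100ms while the live stream keeps flowing):

    # Live stream keeps running at 30 Mbit/s while the lost 100ms is resent.
    rate_bps  = 30e6
    outage_s  = 0.100
    catchup_s = 0.100
    lost_bits = rate_bps * outage_s                  # 3 Mbit lost during the outage
    peak_bps  = rate_bps + lost_bits / catchup_s     # 30 + 30 = 60 Mbit/s peak
    print(f"peak: {peak_bps / 1e6:.0f} Mbit/s")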

As an example of how important latency is for two-way communication, I'm currently investigating replacing our 12 frame (480ms) latency encoders from a couple of locations in London with ones that run at about 1-2 frames (40-80ms).

With WAN links, sending the same packet on two links doesn't even work: I've seen outages of 20ms on both a private wire and two separate internet paths in the past -- presumably some common router in the subsea cable system does something. Currently we're using temporal shifting (and some receive buffering) as well.

The expectations that the broadcast industry places are somewhat different from the IT industry's. 3-second end-to-end latency is nearer 'store and forward' than 'low latency'. Compression in a production facility is unacceptable, and we're increasingly moving away from things like 5:1 compression on WAN links as bandwidth becomes cheaper. One broadcaster in the UK has just replaced its main in-country network with an uncompressed network based on multi-path 100Gbit wavelengths running over dark fibre. Another has replaced its core switching network with multi-terabit backplanes. Just getting 10Gbit/second in/out through the kernel is a challenge - doing it with the nanosecond packet timing that the next SMPTE standard currently requires is proving even harder.


Ah, 10-bit 4K. That completely explains 12Gbps.

I'm not entirely sure what you mean by "1920x1080x20x30", particularly the "20". I take the "30" to mean 30fps.

SDI rings an "I've heard of that!" bell. Huh, 37MB/s of overhead - nice. :(

I get the impression analog routers/multiplexers were basically every input port physically wired to every output port, with a tiny switch to disconnect the lines that weren't being used. Yeah, those would be exponentially expensive, because even though you can't physically have more than one input going to a given output (outside of practical special effects, maybe?), you can't get away from physically wiring all inputs to all outputs. And then I'm guessing the sea of internal wiring was all super high quality and so forth to minimize line losses...

40x40 for $1.3k is _very_ impressive. But yeah, it's just a specialized ethernet switch (albeit with really high maximum internal throughput), so that makes sense. The switch from analog to packet switched routing does make sense financially.

Hmm. What did everyone use after everything went digital, but before the switch to packet switching? What video formats got used back then? What time period am I even describing? I just realized I've always been mildly curious.

TIL about ARQ, and what to call what I've been using in my own messaging system design ideas. I do wonder though... in the context of video distribution, if you just repeat the last good frame when you don't get the next frame in time - why does there need to be realtime acknowledgement at all? Instead of acknowledging every single frame (I assume this is how current systems work), either send periodic beacons containing the last e.g. 10 or 100 or 1000 message IDs, e.g. [ 1, 3, 8, 9, 10, 13, 17, 20, ... ] so the sender can compute the number of frames dropped, or even better, if frame IDs are guaranteed to be sequential, the receiver can compute the number of dropped frames and simply ping back the packet loss percentage. That would take care of the sender blindly going too fast, a la TCP rate control. Then, on the receiving end, if the receiver doesn't get the next frame in time for the playback system to show it, it just shows the last received frame until it has a new one.
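Roughly what that receiver-side bookkeeping might look like - a toy sketch only, with made-up names, assuming sequential frame IDs and ignoring reordering and ID wraparound:

    class LossTrackingReceiver:
        """Track loss from sequential frame IDs and repeat the last good frame."""

        def __init__(self):
            self.first_id = None
            self.last_id = None
            self.received = 0
            self.last_good_frame = None

        def on_frame(self, frame_id, frame):
            if self.first_id is None:
                self.first_id = frame_id
            self.last_id = frame_id
            self.received += 1
            self.last_good_frame = frame

        def loss_fraction(self):
            # Sent back periodically as a tiny beacon instead of per-frame acks.
            if self.first_id is None:
                return 0.0
            expected = self.last_id - self.first_id + 1
            return 1.0 - self.received / expected

        def frame_for_playback(self):
            # If the next frame hasn't arrived by display time, repeat the last one.
            return self.last_good_frame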

I get the impression ARQ works a bit like Skype on a bad day, where the feed locks up periodically then the image jumps forward in time (if you will) when the next properly decodeable frame comes in - except I suspect ARQ only has very tiny such jitters and jumps. What I just described would probably be equivalent, but (unless I'm horribly mistaken, or misunderstand) with much lower latency.

Note - I realize the way things are already done were carefully engineered and there are good reasons behind the techniques currently being used. I presume that the above idea has very possibly been thought of and disregarded - and I'm curious what the reason was.

400ms encode/decode is impressive. 40-80ms even more so. But that sounds perfectly reasonable given that this is uncompressed video; I expect the faster encoder simply has faster silicon in it :)

TIL about the info about the traffic peaking and retransmission. That was interesting to carefully read through a couple of times :)

Talking about encoders and distances, that reminds me - I'm curious about how good those cellular backpack things are, the ones that take umpteen SIMs and seemingly set up a load-balanced aggregate connection over all of them. I'm guessing the latency is somewhere north of "higher than planes fly"? :P

What did you mean by "temporal shifting" in the context you used it? I also wonder what the traceroute on the private wire would have looked like. I can't imagine the link was tunneled (haha!) so if nothing showed up that would be perplexing.

I remember when satellite links used to have 3 second delays to TV live news broadcasts, only around 2006-2008 or so. Hah. Wow, expectations have (perfectly understandably) moved forward.

Agh. I'm 100% sure that it's possible to losslessly compress video and get good speedups... but the realtime requirements really make it so incredibly hard :( I realize HEVC has a lossless profile, but yeah, definitely not realtime. The problem is that anything that compresses at realtime on current silicon (and not anything custom) will probably only shave off a few tens or hundreds of Mbps. :/

TIL about the network upgrades. By "network" here, I presume you mean at least the in-building LAN, but I'm not sure where/what else classifies as "production". Surely the lines to the TV towers aren't uncompressed? I see no need for uncompressed video there (but I may be sorely misinformed). And wow, multi-Tbps backplanes... I was reading about how Facebook had deployed that sort of thing a few months ago, TIL it really isn't that unique.

As for getting 10Gbps kernel throughput, I understand a lot of groups are just doing networking in userspace and skipping the kernel IP stack entirely. But 10Gbps at nanosecond timing... haha, when I hear that, I say kick out Linux entirely, because it doesn't offer hard realtime AFAIK. Well, my knowledge may be hugely out of date, but last I heard Linux didn't really like doing realtime "in production". At any rate I know of no projects using it routinely for non-specialist tasks (ie, some kind of everyday desktop-usage use case, not embedded/appliance use-case).

I vaguely remember reading about the Snabb Switch folks doing really, really fast packet switching, and they were getting all their speedups by using assembly language and doing instruction/cycle counting. The problem with using Linux is that you have a massive bunch of overhead (the mechanics of C, basically) you can't really shoo out of the way - sure, you can optimize your own stuff in assembly, but what's the point when you have tons of C kernel code darting in and out of the L1/L2/etc caches in between schedules of your process(es)...

Hmm. I just realized... it would be so nice if you could tell Linux to give you an entire CPU core. Just don't schedule anything onto this or that CPU, and don't even run kernel code on it. Then you could use the excluded core like a discrete application processor, and run your own kernel on it. THAT would be COOL. But I don't know if you can actually use x86 like that.
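For what it's worth, Linux can get part of the way there today: the isolcpus/nohz_full boot parameters keep normally-scheduled tasks (and much of the kernel's housekeeping) off a chosen core, and a process can then pin itself to it. A minimal sketch, assuming core 3 was isolated at boot (still not the full "run your own kernel on it" idea):

    import os

    # Assumes the kernel was booted with e.g. isolcpus=3 nohz_full=3 so that
    # core 3 receives no normally-scheduled tasks; we just pin ourselves to it.
    os.sched_setaffinity(0, {3})   # 0 = current process
    print("now restricted to CPUs:", os.sched_getaffinity(0))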

Somewhat less drastically, I wonder if you could build FPGAs - or, if the chipsets in NICs are amenable and fast enough, just bespoke firmware - and teach the NIC to receive and buffer SDI video frames really, really fast, doing some complex processing in hardware before the kernel gets the data over PCI-e.

That reminds me of something that may or may not be interesting: https://news.ycombinator.com/item?id=13741975 - it's a totally new CPU architecture (it seems the CPU arch wars/competitions are starting to warm up - nice) that basically, uh, moves the MMU into LLVM instead of putting it in silicon. It's an interesting idea, but the reason I mention it is that I understand they're already at the point where they're taking orders (and have taped-out silicon to actually deliver). FWIW I was i336_ in that thread (accidentally locked the account, heh), but apart from the interesting interaction I have no affiliation.


What about getting desks that you can add wheels to? I know you can put castors on GeekDesks, but I'm sure other desks support them as well. Fairly low-tech solution, and your developers can use whatever equipment they prefer.


Desks with wheels don't work if you have multi-storey buildings or different buildings altogether.


Valve has 8+ floors and things work pretty well. If you've got multiple floors, you should have muscle with the building and be able to use the freight elevator.

I don't have a solution for multiple buildings, you're right about that.


Old buildings don't always have elevators


If you're using Windows, just use RDP to access the servers. Citrix and similar may end up being useful in time, but it's best to start simple.


Yeah, RDP is a damn powerful remote work solution. If you don't care too much about cost I think it's definitely the way to go.


> "If you don't care too much about cost"

Just to be clear, if you're already running Windows, RDP is free.


Normal RDP to a server or workstation, sure, but if you want to use Terminal Services, VDI, or RemoteApp you have additional licensing.


What is the value you expect to get from hotdesking?




