Hacker News
Ask HN: Is anyone using cloud dev environments (e.g. Codespaces/Replit) at work?
217 points by nbrad on Oct 18, 2023 | 236 comments
A lot of devs I know are excited about Cloud Development Environments (CDEs) like GitHub Codespaces, Gitpod, Codeanywhere, Coder, Replit, and CodeSandbox. They seem great, and simplify many aspects of the dev workflow: easy to onboard onto new projects, everyone on the project stays in sync, etc.

But I rarely hear of actual teams using them; it's usually individuals using CDEs for side projects.

Are you using a CDE at work? Would love to hear about your experience.




We at Instacart use them heavily. It is a system my team built called Bento Remote ("Bento" is a local orchestration tool we previously built). We built Bento Remote ourselves because Codespaces was in its infancy at the time and really couldn't meet our needs for things like pre-built images, preserved disks, warm pools, live patching, and being behind our firewall. Bento Remote was and is hugely successful and likely one of the largest boosts to developer productivity in our history.

With Bento Remote you get a fresh, fully dedicated EC2 machine about two minutes after you run `bento remote create`. It is continuously tested on every merge and verified to function. Everything you need is preinstalled and ready to go. Just connect VSCode to it and start coding. No futzing around getting your local environment to work. Break something on your Bento Remote? Throw it away and get a new machine. Switching projects? Grab a new machine from the pool that is preconfigured for that project. Your settings travel with you.
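The warm-pool idea described above can be sketched in a few lines. This is purely illustrative (Bento Remote's internals aren't public); all names and instance IDs are hypothetical:

```python
# Hypothetical warm-pool checkout: hand out a prebuilt instance
# immediately when one is available, otherwise fall back to a cold
# provision (the ~2 minute EC2 launch path).
from collections import deque

warm_pools = {
    "storefront": deque(["i-0aaa", "i-0bbb"]),
    "logistics": deque(["i-0ccc"]),
}

def provision_cold(project: str) -> str:
    # Stand-in for launching and configuring a fresh EC2 instance.
    return f"i-new-{project}"

def checkout(project: str) -> str:
    pool = warm_pools.get(project)
    if pool:
        return pool.popleft()  # preconfigured machine, ready now
    return provision_cold(project)

print(checkout("storefront"))  # warm instance
print(checkout("logistics"))   # warm instance
print(checkout("logistics"))   # pool drained -> cold provision
```

The point of the pool is that the slow provisioning step happens ahead of demand, so the developer-facing latency collapses to a checkout.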

We had a potentially unique set of circumstances though. Our full stack development requirements were VERY high: we needed 64GB just to run our largest and most active apps. This made third-party tools fantastically expensive. Building it ourselves let us fit Instacart-specific needs and workflows, and do clever things like hibernating the EC2 instance after hours when the user's laptop is idle. This, plus a host of other measures, was essential to making it not only cost effective but a large net gain.

So I highly recommend everyone at least take a look and evaluate if you have the need. Start with the SaaS versions and see.


> 64GB

The cheapest instance I could find approached $1k USD per year without egress costs. If you can build all of that, why use EC2 over your own hardware? If you’re spinning up containers, why not use the dev’s workstation for that? You already paid for it, right?


How is buying your dev a single top-of-the-line 64GB RAM MacBook equivalent to on-demand EC2 instances?

Even when containers work locally, the ease of having a remote image with everything preinstalled is huge. Lots of large companies offer internal cloud dev environments - I've always found it to be a frictionless experience compared to local setup at other companies.


Suppose you’re building a feature that requires $newlib as a dependency. How is that handled? Do you request the addition of $newlib and wait for a new remote image to be deployed for everyone before you can work on your feature branch?


IME these remote instances come with sudo rights, and one is free to install/configure as they please. The initial image can be chosen too.


Right. The devs have full sudo access. They can install and configure whatever they want. Our Bento tool has pre/post setup hooks for all services, so teams can automate this. When something is commonly used we just preinstall it for them, or some just contribute to the repo. It is all fairly straightforward Ansible and shell scripts. We push nightly to everyone's existing machines.
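The pre/post hook pattern described above might look roughly like this (a hypothetical structure, not Bento's actual code; service and hook names are made up):

```python
# Illustrative pre/post setup-hook runner for per-service provisioning.
# Real-world hooks would shell out to Ansible or scripts; here they are
# plain callables so the control flow is visible.
def run_hooks(service: str, hooks: dict, setup) -> list:
    log = []
    for hook in hooks.get(f"pre_{service}", []):
        log.append(hook())            # team-defined pre-setup steps
    log.append(setup(service))        # the service's own installer
    for hook in hooks.get(f"post_{service}", []):
        log.append(hook())            # team-defined post-setup steps
    return log

hooks = {
    "pre_db": [lambda: "create scratch schema"],
    "post_db": [lambda: "seed fixture data"],
}

print(run_hooks("db", hooks, lambda s: f"install {s}"))
```

Services with no registered hooks just run their installer, so teams only pay for the customization they actually need.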


We actually moved away from containers for dev purposes for almost all of our apps, though many teams still use them. Mainly because they are a bad developer experience. We invested in making apps run consistently in a native way.

Bento was actually our first attempt at doing all of this on our local MacBooks. Many users found that their machine would slow to a crawl and heat up. M1 Macs, and now OrbStack instead of Docker Desktop, help reduce that; moving off of Docker helped as well. EC2 gives us a consistent image that is easily testable and repeatable. Egress costs here are negligible.

We don't keep the machine running all the time. It stays up during the user's work hours (~8 hours) and then hibernates (the machine is off but memory is persisted), which reduces EC2 costs (minus EBS) to zero. We detect activity on their laptop and wake it when needed.
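The activity-based hibernation decision could be sketched like this (the 30-minute idle threshold is my assumption, not Instacart's actual value):

```python
# Sketch of the laptop-activity -> hibernate/wake decision. EC2
# hibernation persists RAM to EBS, so compute billing stops while the
# developer's in-memory session survives a later wake.
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(minutes=30)  # assumed, not Instacart's real value

def desired_state(last_laptop_activity: datetime, now: datetime) -> str:
    if now - last_laptop_activity > IDLE_THRESHOLD:
        return "hibernate"
    return "running"

now = datetime(2023, 10, 18, 18, 0)
print(desired_state(datetime(2023, 10, 18, 17, 45), now))  # running
print(desired_state(datetime(2023, 10, 18, 9, 0), now))    # hibernate
```

A small agent on the laptop would report activity timestamps, and a controller would reconcile each instance toward the desired state.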


Spot instances actually make this manageably cheap!


Spot instances for dev environment???? "Your instance will be terminated in 1m" sounds like the new "Can't work; code is building"… haha


We don’t use spot because they can be yanked away easily. Savings plans and hibernation are the biggest cost savers.


It sounds like the number of instances needed fluctuates heavily, maybe max number of concurrent instances needed in a month is ~500 64GB instances while average could be ~100 instances?


Wow - it sounds so simple. Will read more about this (hopefully on the Instacart blog if it exists there!)


I assume this is VC funded? Seems crazy to me that a shopping cart product is building tools like this.


We generally try to do things that give a return on investment and don't burn money for no reason. This one most certainly had the ROI. It took 4-ish people to build it in under a year and it removed significant pain and increased developer velocity for hundreds of other developers. Many of whom are working on things that allowed us to be profitable at our IPO.

We're not just a shopping cart product. I thought the same before I first joined but it turns out getting groceries to your door is incredibly hard to do at nation-wide scale.


4 developer years? You paid a million dollars for this tool?


Sounds roughly right. Though not all of us were full time on it at all times. We also paid that just once about two years ago. It has been part time feature and maintenance work since then.

Now run that same estimate assuming we saved 600 engineers only 1 hour per week (a very conservative number by our research and feedback). $120/hr * 1 hour saved per week * 4 weeks * 600 engineers. We paid it back in less than four months, conservatively.
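Spelling out that arithmetic (the $1M build cost and all the savings figures come from the thread itself):

```python
# Payback calculation using the figures quoted in the comment above.
build_cost = 1_000_000          # ~4 developer-years, per the thread
hourly_rate = 120               # $/engineer-hour
hours_saved_per_week = 1        # the stated conservative estimate
weeks_per_month = 4
engineers = 600

monthly_savings = hourly_rate * hours_saved_per_week * weeks_per_month * engineers
payback_months = build_cost / monthly_savings
print(f"${monthly_savings:,}/month saved; payback in {payback_months:.1f} months")
```

At $288,000/month in saved engineer time, the $1M build cost pays back in about 3.5 months, consistent with the "less than four months" claim.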


Hi mdeeks, could you please expand on where the 1 hour saved per engineer per week figure comes from?


Plenty of large companies are using them (I'm talking Google/FB/Microsoft and the like). You may not have heard of it because they run these systems in-house vs using a commercial service.

My company switched to 100% remote dev envs a couple years ago. When you cut a branch it spins up a VM and you can connect to it from VS Code (native or browser based) or just plain SSH. It works great. The lag is not noticeable at all. Dev envs are fully provisioned and up to date with all tooling and dependencies so you don't need to bother with managing any of it locally. Given a choice I don't think any dev at my company would go back.


Curious what you do for the people on less-than-good internet connections? Or is everybody just onsite with a business-class SLA?


Swear a lot. Last week I spent the day co-working with a bunch of contractors from the same agency/consultancy. Lovely location (an old pump house by a stream), great food, seats, tables etc except the wifi was awful.

I brought an old under-powered Macbook I had not used for a year or two, but I was going to use codespaces so no problem to run the entire stack. I thought. Codespaces works great from my home office PC...

Constant disconnection meant I eventually gave up on codespaces to directly work on my machine, but then had to try to download the world (brew update, git cloning, a new JVM version, docker images, all new SBT and Scala binaries, transitive dependencies, etc). Granted I caused it by not pre-downloading a more up-to-date dev environment but I was not planning on using it locally. :/

After hours as painful as nails scraping a school blackboard, I had mostly made only theoretical changes, and spent most of the time chatting/networking.

What we did agree on was to make an offer to the co-working space to set up a decent mesh wifi that can handle lots of people downloading all of Maven, using devcontainers, zooming, etc. So that we can come back.


SSH used to work perfectly fine over 28.8 kbps modems. You don't really need a stellar connection for remote coding. If your internet is capable enough for a video call it can run a hundred remote IDEs side by side.
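A rough back-of-envelope on that claim (both rates are my assumptions, and latency rather than bandwidth is usually the real constraint for remote editing):

```python
# Bandwidth comparison: one video call vs interactive terminal sessions.
video_call_bps = 2_000_000   # ~2 Mbps, a rough figure for a decent video call
terminal_bps = 20_000        # ~20 kbps, a rough figure for a busy SSH/remote-IDE channel
print(video_call_bps // terminal_bps)  # concurrent sessions' worth of bandwidth
```

By raw throughput the claim roughly holds; what actually makes remote coding miserable on bad links is round-trip latency and packet loss, as the replies below note.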


>SSH used to work perfectly fine over 28.8 kbps modems.

I can't agree with "perfectly fine" here. Not unless you enabled local echo in your terminal emulator. The keyboard latency, especially for typo correction, was maddening. I still remember what a staggering QoL improvement it was the first time I used a low-latency connection to a server.


That's why vi is so handy over bad links: with its rich command set you minimize the number of key presses needed to achieve your goal.


It doesn't work perfectly fine over in-flight wifi.


SSH falls apart with packet loss. In a plane, you usually have a ton of packet loss.


> It doesn't work perfectly fine over in-flight wifi.

I would hazard a guess that the main problem with in-flight wifi is that it is going to be DPI'd beyond recognition.

So you're never going to know what is "real" packet loss and what is the DPI saying "computer says no".

The Sandvine/Procera[1] system that many (most ?) airlines use is an ML-based DPI that looks at absolutely everything in an effort to correlate obfuscated traffic to the actual flow. IIRC they claim 98% accuracy.

[1] https://www.sandvine.com/


Haven't had any issues w/ using VSCode connected to a remote dev environment tbh and we're full remote (Airbnb).

You can always develop locally if you're on the plane or w/e. In our case local builds try to use a remote runner if available; if not, they happen locally.


I used a codespace on a transatlantic flight and it worked ok.


This is software development. I wouldn't hire someone (remotely) who couldn't guarantee that they're on a good connection. It's part of the job.


I strongly disagree about connectivity being inherently required by software development and therefore an implied part of the job. To take it a step further I believe most developers would benefit from being offline some parts of the day, unless some exceptional circumstances make that unproductive.


That’s ridiculous, you’re conflating productivity (due to slack and meetings, I assume) with fast internet access.

You need to install potentially large packages and images, commit code, and be clear in meetings - that’s a minimal standard for software and if someone can’t do that because they’re on some crazy kbps internet then they’re not suitable for the job.


I live in a country with fast-enough internet (5 MB/s) but huge ping (300ms+). There is no problem installing packages, downloading deps and so on, but realtime remote typing or gaming can be really clunky.

Don't forget that you don't need a low ping for development (with a local env). "Good internet" is a very broad definition.


Remote Async means not everybody has a good connection 24/7. Not every position requires being online immediately if it’s not prescheduled. Picture someone doing Van Life across the US with friends. They know they need to get to a Starbucks for the 10am meeting and be at a truck stop for a meeting at 5, but outside those time periods developing locally and having slack for text chat on a spotty connection should be good enough.


No one can guarantee that, at most a remote worker can pick the one (or two if they are lucky) ISPs available to them. Anything beyond that is out of their control.

I have been working fully remote for years from a rural location. My only connection to the Internet is LTE and while it's decent most of the time, there are some days where it's pretty bad.

Luckily my dev environment is local first, which means that a few hours of high packet loss doesn't prevent me from being productive.


Perhaps you should chat with Alan Cox? IIRC, he was on 28k dialup (over a couple of lines) whilst the more metropolitan of us were on ISDN. Adjust your workflow to your environment.


lol..

Congrats on trying to make a 35+ year old reference apply to remote work now.


Microsoft has a commercial offering called Microsoft Dev Box [0]. Engineers at the company use the service.

[0] https://azure.microsoft.com/en-us/products/dev-box


I think you're talking about spinning up a temporary environment running the code and connecting via a local IDE to inspect it, whereas OP is talking about hosting the IDE remotely.


No, Google actually runs a remote web IDE called Cider. The latest version is derived from VSCode.


At Google, people can use "Cider", which is a web-browser-based IDE, and they can use a "Cloudtop", which is a desktop virtual machine provisioned via Google's cloud infrastructure, as alternatives to a dedicated physical workstation.


Nope, the VM has the git repo + IDE binaries + everything else. You can code against it using just a web browser if you want.


We are an agency where you might flip between working on several completely different projects in a week, and for us it's extremely useful.

We'd had all our sites set up to run fairly easily via docker compose prior, but I'd still find myself debugging people's setups fairly frequently. And giving developers data and secrets was often either insecure or complicated, depending on the codebase.

With codespaces, people can just jump straight into a working project, without pulling any client code or secrets or data onto their machine. It still requires maintenance sometimes but at least when I fix the codespace config I know everyone will definitely benefit from the changes.

The main downside is it's pretty expensive (if you have, say, 10 devs using it all day every day) compared to "free".

If you work on just a few projects, and/or you have very sophisticated systems across the board (like every site has an on-rails setup script with useful sanitized dev data, and secure SSO'd secrets management), I doubt it's worth it.

But in our case, a relatively junior dev being able to spin up a working dev version of a site they've never worked on in 5 minutes with no issues, so they can knock out a 3 hour change and maybe never work on it again, is a big money saver.

It's also meant that we can more easily standardize everyone's laptops without having to consider how well they work as bare metal dev machines (which has meant we can move everyone to fairly cheap macbook airs without people moaning about their tooling or storage size etc.)

I also like that access to a lot of stuff becomes directly mediated moment to moment by someone's github access (which for us also runs through our sso, cloudflare zt etc).

We're doing it in a slightly clunky way though - we still use docker compose inside the codespace. I like this approach personally because it feels like we're less locked in to the platform. For us it also made the initial migration easier. I think it also makes debugging the environment a bit easier because you don't need to keep rebuilding constantly on changes; you can just dcb dcup...


Since devcontainer.json works fine on docker desktop, I usually use that, but I do use codespaces frequently for review and small patches, as well as exploring new libraries. I'm slowly adding devcontainers to the open source projects I work on. It's much nicer to have a docker compose file and several docker files in this setup than maintaining instructions on setting up test environments.

I've run k8s/k3s with docker-in-docker this way too. Really easy once you get it setup, and great for playing with architecture ideas.


Any suggestions for a small agency that handles sites with a lot of personal information?

I work in a small shop and things are messy. Similar to having hundreds of WordPress sites, but we managed to standardize the main set of plugins we use on all clients (this has its own git repo), and clients will have their custom theme and some custom plugins (in another repo).

Ideally we would have a tool that lets us spin up a dev site for any client, fetch the production database from the last backup, anonymize the data, connect an IDE and have git commit access.


I mean, in codespaces you have scripts that run as the codespace is built. So we basically have S3 buckets with appropriately sanitized (hopefully) data dumps that the repos copy down and then import into the database.

You can tell codespaces to include the AWS command-line tooling automatically via the devcontainer "features" attribute. And you can tell it to run a script once the codespace has initially been created using the postCreateCommand (which imo is a lot easier to debug than beforeCreate...).

For us the s3 credentials live in the github repo as codespace secrets (although I think you could set up a much better auth approach via the vscode aws plugins possibly).
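A minimal devcontainer.json along those lines might look like the following (the project name and seed-script path are hypothetical; the AWS CLI feature ID is the published one from the devcontainers feature registry):

```python
# Emit a minimal devcontainer.json of the shape described above.
import json

devcontainer = {
    "name": "client-site",  # hypothetical project name
    "features": {
        # Installs the AWS CLI into the codespace image.
        "ghcr.io/devcontainers/features/aws-cli:1": {},
    },
    # Runs once after the codespace is first created; the script
    # (hypothetical) would pull the sanitized dump from S3 and import it.
    "postCreateCommand": "bash .devcontainer/seed-db.sh",
}

print(json.dumps(devcontainer, indent=2))
```

The S3 credentials themselves would come in via Codespaces secrets, as the comment describes, rather than living in the file.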


Cool, thanks for the hints. I'll dig more in that direction :)


> which has meant we can move everyone to fairly cheap macbook airs

> fairly cheap macbook airs

what did I just read


dunno what to tell you, they're a hell of a lot cheaper than the macbook pros and system76 laptops staff used to get.


and I dunno what to tell you, it's insanity that any Mac laptop is considered cheap in any way or form when you are not even running anything locally. It's reckless excess ? dunno how to say this.


I have not used a full-blown online environment, except maybe VS Code Remote over SSH. I repeatedly find anything that puts a network call in the middle of the loop to be a serious impediment that disrupts the flow of development. Sometimes SSH gets slow and laggy enough that I prefer mosh (Mobile Shell), at which point VS Code Remote (or similar) via SSH obviously becomes painful.

Most cloud environments are also limited in terms of what you can do, e.g. issuing sudo while a process is running, or attaching a debugger to a process.

Usually when these come development-environment-ready, they also hide away the underlying details - i.e., I no longer know the commands involved should I need to write infrastructure code/automation later on.

I guess there are domains where these are non-issues. But for a wide variety of my use-cases local development is going to be preferable, because by design there are limitations in the alternative.


I don't really see the appeal. We don't even use the same dev environments at work, some use vscode, others use pycharm or codeblocks or whatever. We have code locally, on github, on gitlab and in other places.

Also I can't really say that I've heard anyone outside of this thread who is excited about this at all. I see people here have different ideas but personally I've never heard of it from anyone I work with or know directly.


To me it feels like one of those things that, if executed really well, would be amazing, but the first thing that comes to mind when I unwrap it is all of the things that could make it a bad experience.

I can also concur that I work in an area that is doing somewhat bleeding-edge infra work (probably second-stage adopters) and neither my colleagues nor I are actively seeking out this technology. We don't really seem to have the problems it purports to solve in a major way at this time.


I'd say it works well enough for maybe 90% of devs at my company. It solves a lot of real issues for them and dramatically simplifies onboarding.

The issue is that most of my team are in that 10%, and now that 90% of the company have their needs met, we're getting pressure to conform because maintaining all the physical infrastructure for a small group gets expensive. Unfortunately, we can't use cloud environments for a reasonable chunk of our work, so that really means we get forgotten about until we complain loudly enough, no matter how friendly I am with the relevant teams.


The issues you listed are non-existent with Gitpod. You can point whatever editor at it, and work on code residing anywhere.


Good to know but not really for us. Their cloud will not be able to access our internal network where most of our stuff lives. I'm sure it's configurable to some degree but is never going to happen.



If you need a self-hosted and managed solution, check out our product at https://daytona.io


I got a cheap Hetzner Auction server for all my work needs https://www.hetzner.com/sb . I use VS Code Remote to work there. It's easier and more convenient than Codespaces (and I think cheaper). It has a lot of benefits, like instantly sharing a web server with the outside world, the ability to run my servers and bots continuously, and the ability to work from different computers seamlessly. It also serves many random purposes (e.g. as my Anki backup server).


Is this how you work across the company?


no, but I had one employer reimbursing it (changed job since then). they were quite interested in this and might have actually implemented this for others, not sure


I rent server rack space at a datacenter where I have a beefy workstation to do all my work. I use VS Code Remote and SSH port forwarding. The workstation has fast 2TB NVMe and 64gb of ram, 16 cores and a RTX GPU. It's actually cheaper and faster than our production cloud server.

I tried Github Codespaces and thought it was cool but wasn't nearly as fast as my remote workstation.


Almost the same for me. The data center is a couple miles from my home and my home has fiber. It’s low latency, I almost never notice a difference from my device. Maybe if I used it for really large files but that’s not my thing. It’s nice keeping my local device crud free and low demand.

Edit to add: it costs me about $70/month, depending on a few variables, for colocation of a 4U. My same apps on AWS would cost 10x more. So that's plenty of budget for my hardware, which I usually get a year or two out of, doing minor repairs/upgrades myself.


Where are you colocating? Looking for one myself


I’d recommend you approach it from a hyperlocal perspective, find the local data center you think will be best for you then look up who is offering colo out of that data center. Then do some network testing. Maybe compare a couple data centers. The providers are going to vary in most cases. So I don’t recommend looking for a specific provider just because it works for me or someone else. You’ll get to pick your bandwidth provider too in many cases.

I like to make sure the network hop path to my home is as direct as possible. My ISP has a large presence in one of the local “backbone” data centers so that’s where I find the best results. But still do testing, it’s not all about location when it comes to latency. If you visit the data centers, or online, they often list names of colo providers. They’re often rather small shops and not large brands so going to be very location specific


How do you find datacenters which will let you colocate hardware? How much hardware do you have? What was the cost?

I'm really interested in this kind of thing.


Can't answer the latter questions as I'm not OP, but you could start finding a colo here if you're looking at the cheaper end of things:

https://www.webhostingtalk.com/forumdisplay.php?f=131

Also just googling "[major city name] colocation" will usually find results, though be warned pricing is often "call us" vs. stated upfront.


Beefy HEDT/workstation with ssh running are amazing dev environments. Vscode remote ssh or even vim make it a dream.


Too bad Intellij requires the Ultimate version of their software to use their equivalent of the VScode server/client feature.


Don’t worry about the price. It’s absolute crap. [At least PyCharm remote is and I believe it has the same underlying Jetbrains architecture]

Windows randomly quit, take forever to open, there are always indexing issues, etc. Locally everything is great, but remote is completely unstable.


I think it has you install something on the server if you want to, and it's more reliable. But yeah, completely in different league from vscode.


What's the advantage of having the workstation at a datacenter in this context?


$60/yr vs thousands of $ upfront for a comparable desktop or notebook machine.

Not to mention the superior network connectivity (10 gigabit/s). In fact, there are no 10gbps options from the common last mile carriers available at any price (AT&T, Crapcast). And if they did offer it, it'd probably be $500USD/mo+ just for Internet.

AT&T has a 5gbps fiber plan available for $250USD+/mo, but there are very few (>10 fractional portions of cities nationally) areas where it is offered today.

If you have a strong enough desire for cost-effective, really fucking fast Internet for a fair price, move to Korea. In .kr you can get 10gig for something like $30-50USD/mo.

Edit: Oops, my bad. I meant to reply to this comment:

https://news.ycombinator.com/item?id=37934982

> Step 1. Create an Oracle Cloud account
> Step 2. Create an Ampere 6 core, 32gb memory instance for like $5/mo
> Step 3. Use Jetbrains Gateway to run your IDE as a thin client, executing on that host.

> You get a pretty darn beefy ARM64 VM instance from OCI for extremely cheap. You can get these in a region near you, with low latency. And Jetbrains Gateway works pretty great.

We use OCI, aka Oracle Cloud. As a customer, OCI is more financially appealing compared to the competition (AWS, Azure) for bare compute infra. Not as cheap as OVH, if that fits your requirements. For me though, OCI is preferred because it provides all the standard cloud building blocks of multi-region + (cheap) object storage + block storage.

I mean, fuck Oracle and Larry Ellison, right? That said, building and operating a cloud at scale is a lot of work and I'll leverage their work as long as it makes fiscal sense.


> The workstation has fast 2TB NVMe and 64gb of ram, 16 cores and a RTX GPU.

> $60/yr vs thousands of $ upfront for a comparable desktop or notebook machine.

I don't think we're talking about the same things here.


I paid $5000 for the workstation (2019) and upgraded to NVMe later, and I pay $0 for the datacenter because I have referred clients to their business and established a relationship over the years. Because when people find out how much cheaper it is to skip the cloud, they switch but don't know where to go.

I could upgrade to dual 2.5 GigE but I have nothing that needs that much bandwidth. I can download 30gb from hugging face in a few minutes which doesn't bother me.

The DC is about to onboard another client of mine, a startup that ran on AWS credits for 9 months. They are upgrading to a 64 core 2TB ram box that was an extra the DC had, for $3500, which will probably go into the rack next to mine.


The commenter posted about a self-installed machine. Also, colocating GPU servers is expensive, because it's billed at roughly $75 per amp per month, and a typical machine with a single 280W GPU will use about 5 amps at peak.
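Rough numbers behind that billing model (the 120V circuit is my assumption; the $75/amp and ~5A figures are from the comment above):

```python
# Colo power is often billed per amp per month. A machine peaking at
# ~600W (280W GPU plus CPU, disks, fans) on a 120V circuit:
watts = 600
volts = 120
dollars_per_amp_month = 75
amps = watts / volts
print(f"{amps:.0f}A -> ${amps * dollars_per_amp_month:.0f}/month")
```

So power alone for a single-GPU box can run a few hundred dollars a month, which is why GPU colocation costs more than the rack space itself suggests.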


That sounds really low. Do you mind sharing where you rent your racks from?


I was thinking the same. I'm paying about $1600/mo for a 42U rack, 2 x 30A Feeds, and 60mb/s 95th percentile, ipv4 + ipv6


Intellij Ultimate edition costs more than the server at this point. Wish they'd bring this over to Community and Android Studio.


Where are you seeing these prices? OVH seems way more than what you’re saying, but might be looking at the wrong thing


nice - I've also found VS Code Remote + SSH to be pretty fantastic


We've started integrating Codespaces into our team's workflow. It's been a game-changer for onboarding new devs. No more "works on my machine" issues. The ability to jump into any project without the setup hassle is pretty sweet. We're still ironing out some kinks, but overall, I'm pretty bullish on it for professional use.


isnt that what containers are for?


The file defining the environment is literally called devcontainer.json, so yeah, but it’s not really about that, it’s the service aspect which is the value proposition


Nice, thanks! what kind of team are you on (company size, how experienced the engineers, tech stack)? when you say "ironing out some kinks" what were the pain points?

if you're up for quick chat/DM, would love to hear more (link in profile)


Frankly I don't get the appeal. And I've used these in a really polished env (a company rhyming with schmookle) where decades of R&D had gone into building this. Where these are useful (though a pain to use) is when you have to work on a system whose components cannot be built individually - i.e. you need a production-grade Spanner, and an env with all 15 microservice dependencies, each with their own 15 dependencies, each with their own 15, and so on.

At this point I'd question whether I want to be this tiny piece of a tiny cog in a tiny gear among a billion gears in the huge clunky machine (comp and bottom layer of the Maslow hierarchy aside).


We use Gitpod (https://gitpod.io/) for our eCommerce Magento development tasks across all of our projects at Develo (https://www.develodesign.co.uk).

This gives our support developers instant access to a fully configured development environment across all of our client sites, and it really helps speed things up. Previously there would be a minimum of 1-2 hrs of local setup for a new developer to work on a project; now it's 5 mins, with guaranteed no problems getting going. So we can spread support developers across more projects and not worry about local setup. I put together a free starter repo if anyone wants to try that for Magento dev: https://github.com/develodesign/magento-gitpod


Happy to chime in - Love the discussion ;-) Disclosure: I am the co-founder of Strong Network - a Swiss company. We are using and selling our own CDE solution.

My background is in cybersec and I worked at Snapchat after they acquired my previous (cybersec) startup - first time I saw the concept of CDEs, naturally I thought - can we combine productivity and security? This is what Ozrenko (my partner and ex Snap as well) and I decided to do.

Along with all the CDE management functions, we designed infrastructure security mechanisms that are transparent to developers and make their lives easier (Oz and I are developers). We realized we could expand the concept with a load of DevSecOps automation, and eventually we hit on very novel code security practices that we embedded in the environment (and filed patents on them).

Our security mostly aims to keep developers chill while protecting the org (and reducing infrasec cost ;-): we protect all resource-access credentials from leaking (phishing and malware), provide data loss prevention (IDE, web apps), detect secrets and prevent sprawl, detect external code pasted into the code base, etc. All transparently, with no hassle for the dev team - everybody gets free SSO to any resource!

Most importantly, we have hardened the platform at very complex organizations such as Broadcom, SwissRe, Niantic Labs and others with our self-deployed platform - you can imagine the difficulty of running efficiently across WAF, traffic proxies, VDIs and SASE. Oz is the man for that.

So in summary - we have today the most advanced CDE platform that provides both efficiency and security for all your resources and assets. We are a Swiss company working world-wide (greetings from Tokyo this week), so if you are motivated to join us (any function), please let me know!

Sadly, we have not chosen a codename for the platform yet.


At Coherence (withcoherence.com) we are building a slightly different twist on this where one configuration can drive different environments across the SDLC - Cloud IDE for dev, full-stack branch previews, CI/CD, staging/qa/UAT, production - with deployments in your own AWS/GCP account.

We generally see a big velocity increase in dev from teams that adopt the Cloud IDE alongside the rest, as lots of comments here exhibit. Internally, we dogfood them 100%, and the team would never go back to the old "local-first" ways. So we are using them and love them. The key qualities we love are ephemerality, parallelism, and accessibility. But in general there is a lot of resistance, for various reasons, as seen in many comments here.

We wrote up some of our POV in our docs here: https://docs.withcoherence.com/docs/explanations/remote-deve...

[I'm a cofounder of Coherence]


I can't tell you how frustrating it is to see new cloud IDEs come up that don't adopt the containers.dev standard. It's published, it's open, please don't NIH this.

Devpod did this right. Gitpod is actively working on supporting it. No, I one hundred percent do not want to use your coherence.yml files instead.


When we built ours, our starting position after evaluating all the options was devcontainer support. We are also working on supporting Devfile soon.


I forgot about Devfile but yes good choice!


Right, I forget that one size always fits all :) This is one hundred percent the way to go for something that isn't only a cloud IDE...


Please contribute to the standard if it's not flexible enough. That's what I'm saying. We need to standardize on a format and stop mucking about with ten thousand different ways of declaring a thing. Devfile, devcontainer, whatever. Just please pick an existing, open standard and upstream changes.


This is one of those things where I'm shocked there isn't more widespread uptake. It seems to win on some of the more important dimensions: ease of use/UX (if executed well), power of machines -> faster iteration time for many workflows, improved privacy. I am not familiar with pricing for many of these offerings, and I suspect it is probably pretty expensive.

Even at megacorps where they have really good cloud dev environments, adoption is not universal. Many many people at my current employer have big under-desk workstations to do their day-to-day programming.


I would challenge whether the size of the machine or the responsiveness of the cloud is superior to local. Most people are not compiling Rust all day, but refreshing JavaScript. And that's without even bringing up the cost difference between renting and owning a beefy piece of hardware: 64 gigs of RAM is nothing for a local machine, but comes at a cloud premium.

I do think there are some real wins in ensuring development environments are consistent and versioned. Knowing you can pick up the exact version used to develop the project without dedicated effort is attractive.


> 64 gigs of ram is nothing for a local machine

It's not nothing. Most developers are issued laptops, and a 64GB Macbook costs over $3000. Plus a CDE can be shared.


Then don't buy a MacBook; adding RAM to a laptop isn't that expensive. You can get a 64 GB stick of DDR5 for $150.


> Most developers are issued laptops, and a 64GB Macbook costs over $3000.

This is unfortunately a nonsensical argument.

As a company, you are employing a developer to write code. Both the company and the developer are happy when they are at their most productive. And to that extent, a $3,000 laptop is a perfectly reasonable "tool of the trade". Also don't forget companies can lease laptops if they don't have the cash floating around.

It's the same thing in other sectors.

For example, in finance, a Bloomberg terminal comes in at $30,000 a year. But if you are, say, a bond trader putting in multiple 6-, 7-, or 8-figure orders on the market every day, then it's no secret that Bloomberg has a monopoly on bond data, and as a bond trader you absolutely need access to one as the tool of your trade.

In addition, following COVID, most sensible companies have embraced WFH, so what the company saves on office space it can spend on other things, such as better laptops for developers.


Developers do not typically get to choose.


It's a mix of many different things.

At work we have very beefy workstations that just aren't available in those hardware specs from cloud vendors. They use workstation-grade hardware (A6000 GPUs, Threadrippers) that offers better performance/price than similarly sized cloud offerings, which use datacenter-type hardware; to actually tick all the same boxes you'd need to go for much larger instance types. Plus, it's questionable whether one could realize the scaling benefit of the cloud, because those machines are more pets than cattle: killing them overnight and bringing up a clean system the next day would likely upset dozens of different workflows and tweaked setups. And then the software expects network filesystems with low-latency random access, not blob storage... They did the cost calculations; buying hardware is cheaper in the long run, including ops.

And then there are customers that have extremely stringent security demands. Segregated networks, video surveillance, no data must leave the premises. Convincing them to move the data into the cloud might not be impossible but it would be a big recertification ordeal for one cloud vendor. And then we'd be locked into that vendor until we could get something else. And we have several such customers.


The power of these machines is less than a 2021+ MacBook unless you really pay up for your cloud workspace. Then it's worth the cost if you put the money there instead of towards a good laptop, but people still need some kind of computer.

I get that spinning up a new dev env is really handy, but with something like Ansible it's possible to get a good dev environment on someone's local machine within hours, not much worse.

What am I missing?


you mean I could transmit my commands at the super fast speed of my internet connection compared to nanoseconds? SIGN ME UP!


I know you joke, but I've been in situations where my local environment was so slow that I'd rather program in vim at 200ms latency, because that still would have been better for my sanity.


At Niantic we use Strong Network: https://strong.network/

It's a Swiss-based company that provides a secure, container-based environment.

We like the many security features they offer: proxying outgoing connections, organization-wide controls, automatically importing source code ACLs, and many possibilities for custom images. Onboarding new people is also very easy once we got our custom base images working.

We have a very good relationship with Strong Network, and they are very responsive to our requests and comments.

Compared with other integrated web IDEs, we feel that security was at the core of the design, which is not always the case, and this comes through in the way the product is structured.


Most of the CS classes at my university have moved on to an online Jupyter environment with VS Code preinstalled. It lets students spawn an environment with all the required software for their class preinstalled.

E.g., for Computer Systems it spawns an environment with GDB preinstalled, while for Intro to AI it has PyTorch/Python 3 preinstalled.


Having more grads coming out of university unprepared and expecting even more handholding (this time in the form of devcontainers in the cloud preconfigured and "ready", etc) really does not bode well for the health of our industry long term.


Sounds like normal progress. The bulk of the next generation will operate on a higher level abstraction that affords them more productivity with some of them specializing in the lower level stuff that used to be part and parcel of normal SWE duties but is now basement-dweller neckbeard voodoo. Grandpa complained that nobody knows how to set points ignition and jet the carburetor anymore, dad complains that you can't fix anything anymore, we will complain that kids these days never leave the pod and live in VR.


I lived on a boat with a 1972 Atomic 4. I had to set the points and jet the carb every so often. I truly enjoyed working on that thing. My car needed a freaking mechanic with a computer. It was nice to hear it start chugging incorrectly, pop the cover off, measure a few things, then start it back up. No technology required, just well timed explosions.


The situation is pretty bad. Some students don’t know how to Google properly because they know they can always reach out to TAs/CAs during office hours

But at the same time if everyone moves to devcontainers, will this skill even matter for the most part? Because the people who want to learn how to do something are going to do it regardless, and those who don’t might not need to worry about it


Who sets up the devcontainers?


Mom


Yours or mine?


Computer science is not exactly related to googling cryptic error messages, but IMO every undergrad CS major should have a tools/linux basics class of some kind


Yes, but it seems like most companies are moving to this remote dev-container environment that you SSH into. Obviously the Google skills are still important, but there are plenty of other, more interesting problems you can use to practice them.


I don't think that's even remotely true, I think you're in a bubble.


How is that different from using shared Unix boxes from back in the day? Using Containers doesn't mean not having access to the file system or console. You basically have the exact same experience as using vscode locally.


Online REPLs are great, but students especially in CS should know how to set up their environment.

I remember being a Junior Compsci student watching other students struggle to install the Java JDK on their Windows laptops (install JDK not JRE; configure PATH; install Eclipse; point it at the JDK). The professor had to come around and help them. I was the only student with a Mac and I had transferred from a community college where we had already learned Java in the first semester, including how to install and set up a local environment. All of this to say I feel like colleges should better prepare students to set up their local environments, although I see how useful the online REPL options are for getting started.


I went through a similar struggle in ~2005, seeing the word Tomcat still gives me PTSD from being left to ourselves to install Tomcat server on Windows XP.


Do you know how that works exactly, or do you have a link where I can educate myself on the topic of using Python online in a sort-of IDE? I'm teaching Python fundamentals and want to avoid spending all my time helping everyone with their individual machine problems while installing Python or an IDE.


I'll bet they're using this, that's exactly what it is designed for: https://tljh.jupyter.org/en/latest/

Alternatively, you could do something similar with Google Colab and a notebook you make to serve as a template.


https://www.colorado.edu/cs/students/computing-resources-stu...

Refer to the section titled “CSCI JupyterHub Coding Environment”

Alternatively, as another user pointed out: https://tljh.jupyter.org/en/latest/install/custom-server.htm...


For Jetbrains users:

Step 1. Create an Oracle Cloud account.
Step 2. Create an Ampere 6-core, 32 GB memory instance for like $5/mo.
Step 3. Use JetBrains Gateway to run your IDE as a thin client, executing on that host.

You get a pretty darn beefy ARM64 VM instance from OCI for extremely cheap. You can get these in a region near you, with low latency. And Jetbrains Gateway works pretty great.

On the plus side, this is an entire VM, so if you've got containers, or whatever else you need to run, that all executes there too.
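For reference, spinning one of these up can be scripted. A rough sketch using the OCI CLI, assuming `oci setup config` has already been run; every OCID below is a placeholder you must substitute with your own:

```shell
oci compute instance launch \
  --availability-domain "AD-1" \
  --compartment-id "ocid1.compartment.oc1..example" \
  --shape "VM.Standard.A1.Flex" \
  --shape-config '{"ocpus": 6, "memoryInGBs": 32}' \
  --image-id "ocid1.image.oc1..example" \
  --subnet-id "ocid1.subnet.oc1..example" \
  --ssh-authorized-keys-file ~/.ssh/id_ed25519.pub
```

Once it's up, point JetBrains Gateway (or plain SSH) at the instance's public IP.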


Jetbrains Gateway is very poor compared to VS Code's remote development plugin. I can't use it without wanting to pull my hair out. It's laggy, installing plugins is painful (per project?!), etc..

Has it improved in the last year?


Yes, it's improved remarkably over the past year. I had the same experience as you, but it's at the point where it's pretty usable.

There are still irritations. But I am comfortable using it day-to-day.

It's absolutely critical that your remote gateway be nearby. I'm about 10ms away from mine, and though there is sometimes perceptible lag, it's not bad at all.


Sounds close to my experience with Rider the other week. Had a fantastic experience migrating my dev environment to WSL+VSCode from VirtualBox. Went to try Rider pointed at the WSL instance and.. Basically gave up on it lol.


It's still painful. Maybe it's improved, but my day-in/day-out with it is horrible.


not really

i so desperately want it to work, but it messes up often enough that it's not worth it yet

fwiw, using local intellij with nfs mounted source is a better experience


Do you mean $50/month? Or maybe $5/day? I’m reading:

> Ampere A1 compute… with cores billed at $0.01 per OCPU-hour and memory billed at $0.0015 per GB-hour in all regions.

So for 6 cores and 32gb memory I’m calculating $78.84 per month.

I’d love to get 32gb of ram for $5/month.. but it sounds too good to be true.
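For what it's worth, the arithmetic is easy to check against the quoted rates. A quick sketch (730 being the average hours in a month):

```python
# Oracle's listed A1 Flex rates, as quoted in the parent comment.
OCPU_RATE = 0.01        # $ per OCPU-hour
MEM_RATE = 0.0015       # $ per GB-hour
HOURS_PER_MONTH = 730   # 8760 hours / 12 months

def monthly_cost(ocpus: int, mem_gb: int) -> float:
    """On-demand monthly cost for an A1 Flex shape at the rates above."""
    return (ocpus * OCPU_RATE + mem_gb * MEM_RATE) * HOURS_PER_MONTH

print(round(monthly_cost(6, 32), 2))   # 78.84 -- the 6-core/32 GB shape above
print(round(monthly_cost(4, 24), 2))   # 55.48 -- the free-tier shape, if it were billed
```

So roughly $79/mo at list price; a $5/mo figure only makes sense against the Always Free allowance mentioned below in the thread.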


Looks like 4 cores and 24 GB fit inside the free tier: https://www.oracle.com/cloud/free/


It is funny they call it "Always Free".


Either they are aping from Google Cloud or Thomas Kurian brought some Oracle phrases with him to Google, since GCP also calls it that (or at least they used to).


Thank you, Wow that’s a generous free tier!


No because I want the entire development environment to run locally (and anywhere else) without WAN connectivity.


I had this gut reaction initially, but I can't recall it ever being an issue since we moved to gitpod around 2020. Our dev experience is inherently coupled with network access. It's not as though we keep local caches of package registries, etc.


You could still totally lock it down and deny WAN access but holepunch just for your vpn of choice.


Doesn’t help when you truly don’t have WAN connectivity. Imagine working on an airplane where the wifi is unusable or unavailable, or working during a long NYC subway ride (where even mobile data connectivity between stations is only occasionally and transiently available in the underground segments of the system).


why would you be working while sitting on some type of conveyance?

doing things like that will not get you any rewards which compare to the effort you've spent. look out a window, I say.


Over the past decade I'd reckon a quarter of my productive output has come while sitting on some type of conveyance. Some intervals in that period, stretching from 6 months to 1.5 years, saw more than half of it happen while commuting or otherwise traveling.

Honestly, I'm either on the clock or off, and I have better things to do with my time than look out a window when I'm not being paid.


I genuinely envy your ability to do this. I find that it's simply impossible for me to engage in any meaningful work during commutes. I can't focus.


And I can't focus in an office :/


Yeah, that's rough as well.


I used to do code reviews and catch up on email on my phone while on the train. It let me leave the office sooner. The metro runs underground, so there's nothing out the window to see.


Let’s say I take a train 1 hour each way every day - why not do some work?


because you're not a slave. you don't owe your employer your travel time if you aren't explicitly paid and reimbursed for travel time and costs.

that's your time. don't give it away to your employer for free. get paid for it if you are going to do work while you commute.


It's more the principle: I want to be able to spin up tests anywhere and actually understand the full set of deps.


We use remote dev VMs with VS Code connected via SSH. It works a treat for a small team working on microservices. You can work from any machine while everyone has their own user account. Sharing code is a breeze. It's easy to test APIs in development (no http tunneling). Deploying to a local (on the VM) docker host for longer running services in test works well and it's super cheap to run.
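This workflow needs surprisingly little setup. A sketch, with a hypothetical host name and illustrative address/paths: one entry in ~/.ssh/config, and the Remote-SSH extension does the rest.

```
# ~/.ssh/config -- "devbox" and the values below are placeholders
Host devbox
    HostName 203.0.113.10
    User dev
    IdentityFile ~/.ssh/id_ed25519
```

From there, `code --remote ssh-remote+devbox /home/dev/project` (or the "Remote-SSH: Connect to Host" command) opens a VS Code window whose terminal, extensions, and file operations all run on the VM.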


Cool! I've used a similar workflow with teams that SSH into boxes with GPUs, VSCode makes it completely seamless - surprised i don't hear about it more


Wouldn't using one of the self-hosted CDE solutions make it even easier?


Kind of. I used to run code-server, but you miss out on plug-ins and other custom settings. Connecting via SSH is seamless: you can copy from local -> remote, keep your plug-ins, and access it from any machine with VS Code.


What if it were just as seamless but also managed everything for you? I can send you a link to try it out if you like.


Yes! But I work on CodeSandbox, so that creates some bias :). We've been working on our own CDE solution, though we've taken a different spin to improve speed and cost.

Our solution is based on Firecracker, which enables us to "pause" (and clone) a VM at any point in time and resume it later, exactly where it left off, within 1.5s. The benefit is that you won't have to wait for your environment to spin up when you request one, or when you continue working on one after some inactivity.

However, there's another benefit to that: we can now "preload" development environments. Whenever someone opens a pull request (even from local), we create a VM for it in the background. We run the dev server/LSPs/everything you need, and then pause the VM. Now whenever you want to review that pull request, we resume that environment and you can instantly review the code or check the dev server/preview like a deployment preview.

It also reduces cost. We can pause the VM after 5 minutes of inactivity, and when you come back, we'll resume it so it won't feel like the environment was closed at all. In other solutions you either need to keep a server spinning in the background, or increase the "hibernation timeout" to make sure you don't have the cold boot.

It's kind of like your laptop, if you close it you don't expect it to shut down and boot the whole OS again when you open it. I've written more about how we do the pausing/cloning here (https://codesandbox.io/blog/how-we-clone-a-running-vm-in-2-s...) and here (https://codesandbox.io/blog/cloning-microvms-using-userfault...).


I'll repost here a comment I made on another HN post about cloud dev environments and why I will never be convinced to use them.

> I have never in my career seen a good implementation of cloud development. At every company I've ever worked for, "cloud development" is nothing but a Linux VM that gets provisioned for you in AWS, a file watcher that syncs your local files to the VM, and some extra CLI tools to run builds and tests on the VM. And every time I've used this, the overhead of verifying the syncing is up to date, the environment is consistent between my laptop and VM is the same, all this other mess...every time I end up just abandoning "cloud dev" and doing things on my laptop. God forbid you change a file in your cloud VM and forget to sync in the reverse direction. Not only is local development more reliable, but it's also faster (no remote network hop in the critical path of building things).


With VS Code remote SSH there is no "local"; you are always on the server, so there is also no syncing. They do some tricks to make this seamless and performant, so it feels as if everything were local.


But what do you do when you need to work without internet access, or with limited internet access?


I briefly worked for Facebook. Their cloud dev environment blew my mind. Way better than trying to get that stack running on your machine.


I have been building them my entire career and there should be no sync involved at all tbh.

Everything is remote but you work in a local IDE and everything feels local


It is interesting that in the comments on this thread I’m not seeing any mention of nix, which is arguably overlapping the topic at hand with the Venn diagram of “spinning up dev environments”.


Love NixOS. Flakes are awesome, and I've started playing with flake-parts + devenv. It's been far easier and faster to set up isolated environments (nothing installed globally) than with docker compose, and it's substantially more performant (especially on non-Linux OSes). I've also found with the container route that you usually still need to set yourself up locally for most of the tooling, or maintain a fairly complicated set of images for different use cases, or set them up with the complexity of a full OS anyway.

Started a new job recently and had my laptop up and running, ready to code, in about an hour (only the second Nix box I've brought up); by day 3 I was building and running the main monolith monorepo with my own local flake. I have since replaced redis, postgres, and two ancillary services in containers with devenv services/processes; it's been really great not dealing with docker volumes, networks, and images, or building containers and managing pruning them.

It would be interesting to play with automating deployment of my nixos machine configuration into a cloud VM or pod as I work 99% CLI anyway, but I just don't really see the need... this is just easier.
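For readers who haven't seen devenv, the redis/postgres replacement described above is only a few lines of configuration. A minimal sketch (the worker process and Python choice are illustrative):

```nix
# devenv.nix
{ pkgs, ... }: {
  packages = [ pkgs.git ];

  languages.python.enable = true;

  # Native services instead of docker compose containers
  services.postgres.enable = true;
  services.redis.enable = true;

  # A hypothetical ancillary process, supervised by `devenv up`
  processes.worker.exec = "python worker.py";
}
```

`devenv up` then starts postgres, redis, and the worker in the foreground, with nothing installed globally.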


A colleague wrote this comparison of available “standards” just recently https://www.daytona.io/dotfiles/mastering-development-enviro...


I've heard some reports that Nix is very painful to get working with the Python/ML stack - do you know if this is still (or was ever) the case?


I used Cloud9 for about nine months before Amazon acquired the company.

I loved it-- I loved having separate environments per-project. I enjoyed the collaborative features as well (send a link to look at code or preview something, etc). I see a lot of potential with them and I would love for them to be more mainstream.

After Amazon acquired the company I cancelled my subscription (I was paying annually.. I think it was $190??). I knew Amazon was going to murder the service, require an AWS login and who knows what.

I have tried others since then like code spaces and some open source/self hostable solutions (I have even tried the old self-hostable Cloud9 code).

Ultimately, I gave up on it... why? I didn't like the idea of self hosting (more attack surface area, etc). I didn't like any companies offering the service.


Code is still here and usable for c9: https://github.com/c9/core

I bet it's hella outdated and full of security issues now though.


What didn't you like about any of the other companies?


First and foremost, the editor/systems were not open source. Second, the companies are known for requiring certain kinds of logins and locking things down (Google/Microsoft/AWS account, etc).


I use codespaces and replit quite a lot. It's just very easy to work from anywhere and I spend quite a lot of time in VR (because it's convenient for me to work with large screens vs actually having large screens; also better battery life than my laptop).


Would love to know more about your VR setup. I can barely spend 20 mins with a headset before the dizziness and fatigue kicks in, and this is when playing games! I can't imagine working with a headset on.


When I work, it's with the (n/x)real; when we have a tech meeting, we use the Quest 2. With the Xreal I haven't had any fatigue or issues, and I've been using it full time for a year. With the Quest 2 I don't notice any eye/brain issues, but it pulls on my head (I have the comfort straps and an extra counterbalancing battery, but it still starts hurting my head after about an hour). If it were lighter, more comfortable, AR, and had better battery life, I would probably wear it full time. Maybe the Quest 3, although the battery life is not going to cut it, I'm sure. The huge advantage of the Xreal is that it's just sunglasses: very light, and your phone (or another external device) is the driver. I use a beefy Samsung phone, which means I easily get 15 hours of screen time before it starts whining about the battery.


The trend towards cloud dev environments will continue for a while. Some people will spend years working full time developing support for these. They'll make assumptions about network connectivity and resource cost while building layers and layers of abstraction to get the whole thing to work. Bringing up a dev environment will be mystified and no single person will know how to do it.

Then someone will realise "hold on, can't my own computer run this stuff?", get rid of all the cloud layers and run the environment directly on their laptop.

They'll write a blog post about it and people will be amazed that it's possible.

And the cycle will be complete.


GitHub moved to Codespaces for dev 2 years ago:

https://github.blog/2021-08-11-githubs-engineering-team-move...


We use Coder at our lab for machine learning simulations. We have a self-hosted Coder deployment and use Docker to provision workspaces on our GPU servers. Each lab member can spin up a fully working dev environment in less than a minute, and they never have to think about installing or configuring the correct CUDA version, GPU drivers, etc.

We persist each user's home directory across all their workspaces and also mount a common data directory to all of them, so everyone has access to pre-downloaded datasets.


I am sorta doing this, locally. I have a handful of servers in my basement that I use vscode+remote-ssh to work with. The benefit is that I can jump from my desktop to my laptop without any fuss. I can leave long-running scripts online for like an experimental batch job and attach to that from any machine, even when I am not home thanks to wireguard/tailscale. It is very nice knowing my dev env is always there in the state I last left it. This also gives the nice benefit of being able to choose your developer env (windows or mac) while running your dev/code/system on a linux system for example. It also gets you around architecture issues, like if your laptop is arm-based and your app needs x86.

I was initially kinda opposed to this and preferred "bare metal development" on my local rig. But the performance is actually pretty incredible for remote development with vscode such that I don't really notice things are running on different hardware.

We recently had some new members join the team and I decided to spin up some dedicated EC2 instances for them to use for this exact purpose. They aren't being used yet, but as our stack becomes more sophisticated I think workloads will transition there. It's done with a custom terraform module that also provisions other assets needed for each dev (regardless of local or remote dev) like an S3 bucket, some dynamo tables, IAM roles, etc. Being able to onboard a new dev with a handful of lines added to a mapping is pretty awesome.

tl;dr I would absolutely consider remote dev spaces.


This is basically what I do as well. It's awesome.


None of those, but on my last contract the client used D.O. + Docker envs for "local" dev. Personally, I loved it. I didn't have to worry about getting a project to work on my on-desk hardware. I simply opened VSC, SSH'ed in via VSC, and went to work. It simplified collaboration, etc.

By far, what I appreciated the most was I didn't have unnecessary data on my local hardware. No customer data. No order data. Nuttin'.


> the client used D.O. + Docker envs

What does D.O. stand for in the context?


Probably "DigitalOcean", https://www.digitalocean.com/


Digital Ocean if I were to guess


DigitalOcean


I thought CDEs were a pretty cool idea years ago until I discovered Nix and specifically "nix shells".

Call me old school but if I can run my tooling locally I typically prefer that in most cases, and Nix does a stellar job of tracking everything deterministically, so sharing amongst the team works great too.

So much so I think replit actually uses it under the hood for some of their environments iirc.


We do a mix:

* quick PR tweaks on someone else's branch => github code editor

* data science & customer success work: jupyter notebooks (GPU) & google colab (CPU)

Based on those experiences, and local dev experiences, we invest in a mix of native + containerized + ci/cd staging server dev experience... and not experimenting with cloud IDEs.


As an Android developer this greatly interests me. In fact, this was one of the reasons I had to buy a more powerful laptop than I wanted, at almost double the cost of my target laptop. Android development is such a resource hog.

Are there useful cloud dev env setups for Android? Given the way Google has made sure Android dev remains locked into Android Studio (i.e., IntelliJ IDEA), and to the great Gradle and whatnot, I assume it'd have to come from IntelliJ - https://www.jetbrains.com/remote-development/.

Has anyone successfully tried this for Android remote dev? I suspect even if it’s feasible with lots of fragile moving parts the resource hungry setup will surely make it very costly for having a personal setup in the cloud.


I only lightly use the cloud-hosted environments, but I now regularly use dev containers to ensure that complex development environments are configured only once and shared amongst the team. No more "works on my machine" issues, plus both development and production are extremely stable.
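For anyone curious what that looks like in practice, a minimal .devcontainer/devcontainer.json is all the one-time, shared configuration takes (the image, feature, and extension below are illustrative choices, not a recommendation):

```json
// .devcontainer/devcontainer.json
{
  "name": "team-env",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "features": {
    "ghcr.io/devcontainers/features/node:1": {}
  },
  "postCreateCommand": "pip install -r requirements.txt",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  }
}
```

Anyone opening the repo in VS Code (or Codespaces) then gets the same toolchain, which is what makes the "works on my machine" class of problems largely disappear.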


Yeh. We use DevPod to work in cloud dev environments in our AWS cloud. I hate it. DevPod brings its own SSH implementation that injects itself into your server and munges CRLF, making SSH sessions to your workspace fraught with difficulty for anything beyond basic command-line applications. The only terminal that seems to work is the one built into Visual Studio Code. Maybe Microsoft Windows Terminal also works, I dunno.

If you have editor attachments and want to work this way, suck it up and learn your VS Code. That's what all the tooling supports first class. Second class is JetBrains. Vim and Emacs aren't even considered. I'd say just run them from inside the container, but then you run into things like custom SSH that munges CRLF. When you look at who actually authored the devcontainer standard, it makes sense.


It depends on what CDE you go with, some like Gitpod support vim/emacs well via their Terminal editor. But you're not wrong about VSCode being the first class citizen.


daytona.io is a brand new alternative (disclosure: I am the cofounder); these issues do not exist there, and your company can self-host it as well.


Codespaces user here (I set it up for the teams), for about the last 1.5 years in a large-corp setting with multiple teams using it.

My experience is that it has been wonderful for getting started and immediately becoming productive with very complex systems. Most of those systems have one (or very few) experts who need to help everyone else with their setups. When problems arise, and they often do, those experts become the bottleneck, and Codespaces removes that: they can focus on keeping one setup working globally rather than each person's local one.

Outside of that scenario - complex systems - I've found it to be overkill: the benefits haven't outweighed the negatives that come along with such systems.


After our experience running Codeanywhere for more than a decade, the whole market and technology have finally started to converge toward the moment when these solutions are meaningfully useful in work environments.

Today, with Daytona, we are trying to solve exactly this challenge. Daytona is an enterprise-grade GitHub Codespaces alternative for managing self-hosted, secure, and standardized development environments.

The unique value of Daytona is that you can self-host it on your own infrastructure and benefit from high-density workspaces which offer efficient resource utilization while ensuring a great developer experience.

Disclosure: Obviously, I work for Daytona, and was working for Codeanywhere. :)


We at Broadcom use https://strong.network/, which gave us a lot of flexibility and platform/resource control, as well as a boost to productivity for our development community. Apart from the data loss prevention (IDE, web apps) and security the platform provides, it helped us host, scale, and delegate within our organization, and it is also a cost-effective solution in ways that other currently available solutions are lacking.


We at LinkedIn have been using CDEs (we call them rdev) for quite a while. We wrote about them on our engineering blog:

https://engineering.linkedin.com/blog/2021/building-in-the-c...

I'm happy to answer questions but others have already posted many of the benefits. As far as I know, local containers on macOS still have performance issues so we mostly use them in the cloud.


We’ve been using VS Code containers for our complex repos, since getting a deterministic Python environment is almost impossible with current tooling.

For our node/js repos, yarn does a great job at keeping things simple, deterministic and fast. So even if we have containers, devs use the direct method for faster devloop.

For bigger companies with much more complexity, remote devboxes make a ton of sense. You want to manage farms, not individual flower pots.

Nice thing about containers is that you can run it locally without internet.

Containers + vpn solves a lot of pain.
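
For anyone curious, a minimal devcontainer.json for this kind of Python setup might look something like this (the image tag, extension, and post-create command are illustrative, not our actual config):

```json
{
  "name": "python-dev",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  },
  "postCreateCommand": "pip install -r requirements.txt"
}
```

VS Code picks this up from .devcontainer/devcontainer.json and builds the same container for everyone, which is where the determinism comes from.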


https://bunnyshell.com and k8s -- seems like a good way to get going quickly with new projects --


I work at Bunnyshell, so happy to see it mentioned here. Happy to share some more details:

Bunnyshell creates two types of CDEs:

- local IDE, code running in the cloud. The cool thing here is you get to use a local IDE (any IDE/editor you want) with no lag: you edit local files and they are synced in real time to the cloud environment. All editing happens locally, all execution in the cloud. Supports debuggers.

- remote IDE, code running in the cloud. When configured in this way, both the IDE and the execution happen in the cloud. No code on local.

The CDE is just one side of Bunnyshell; the other is provisioning environments on demand or automatically (e.g. ephemeral environments on each PR). All these envs support remote development (if enabled).

Our team makes heavy use of CDEs internally for actually building the product.


Besides using Codespaces for inspecting code, I’ve used it for running some applicants’ submissions a few months back.

One of the submissions was built in Java and didn’t use Docker (I guess the person had an aversion to it). I didn’t want to bother installing Java on my machine, so Codespaces seemed like a good idea. It took less time to get it working than it would’ve to find the download link on Oracle’s website.

I’ve also used it to write a few lines of code on my iPad, but this was far from ideal.


I've used GitHub Codespaces at work when I wanna make a small modification to an open source repo and don't wanna go through the entire process of cloning, building things, etc. It's pretty useful.

We have our own custom-built cloud dev envs where I work and I def see the value. I don't need to worry about a conflicting version of a dependency that I installed locally for some prototyping affecting my day-to-day productivity.


My computing environment mostly consists of a browser and tmux, with vim for editing. A few months ago I wanted to try Github Copilot, so I signed up for the trial thinking I could use it in Codespaces. Eventually I gave up and installed VSCode locally. Now I use it a little, mostly for exploratory work, but less than I would if it were in my browser.

Is there any CDE with good Copilot (or equivalent) integration available today?


If you happen to be running Ubuntu, or are willing to run an Ubuntu VM, the 3 scripts and instructions I have over at [1] should be enough to get you a working Neovim instance with the LazyVim "distro", which itself has built in Copilot support. And a lot of other command line niceties too!

I've been told it's fairly easy to tweak and run in Debian too. I basically wrote this for those times where I want a fresh development box but I'm too lazy or computationally cheap to even bother installing Ansible to run a `host: localhost` self-configuration playbook on.

[1]: https://github.com/hiAndrewQuinn/shell-bling-ubuntu


I don't really trust the whole Node Package Manager ecosystem and will happily use a prophylactic like Codesandbox on my work computer where possible.

Also, I have to upskill some colleagues, and what better way to do that than sharing a demo in Codesandbox that they can fork and play with, rather than an email with 20-step instructions on installing Node (on Windows)?

For actual development I can't work with a thin client, it's too slow.


We now have an in-house cloud dev environment at Airbnb (note: my opinions are my own and not those of my employer) and it was a huge improvement over what we had in the past.

Spinup time is fast, it's easy to use VSCode remotely, and it's easy to have multiple environments for different types of projects (Python, Java, Go, etc.).

Neovim + LSPs also work perfectly (via ssh) if you prefer that over VSCode.


Would it be possible to see what your CDE looks like?


It's just VSCode running a remote session. It's not a web based IDE.

You can also ssh into the workspace and you can do w/e on the terminal (runs in a k8s container).


I use private replit(s) sometimes to work on small snippets of code. Then I use my code snippet as a reference when working on our actual codebase.


Full disclosure: I'm a Gitpod Community Hero (i.e. volunteer ambassador)

Personally I used Gitpod at work every day (Nx + React on a fairly complex stack) for 2 years and I loved it.

Always fresh, always working. I don't use it (at work) anymore because I moved to another company and some of us are in Australia / New Zealand, where latency becomes an issue (so far).


Sorta, I have a beefy PC at home and I use Parsec to just stream my screens to my laptop if I'm at the office or whatever.

Not quite the same since it's not really containerized, but I set up the project once and then I'm good to go from there

It's nice cause I detest Apple/MacOS and this lets me avoid it completely since I'm just interacting with a Debian machine


I thought Parsec didn't have a Linux host application?


Probably not what you mean by "at work" but as a non-CS STEM academic that needs to help my colleagues get dev environments so we can collaborate on research software, devcontainers are a game changer. I work a lot with folks that are scared of the command line, but I can get them productive in their web browser with much less friction these days.


I have a somewhat remote setup - I use a server which sits in my basement. It is helpful since running some things is just easier on an x86 Linux machine vs. an M2 MacBook!

VS Code works great, but I would really prefer to find a way to get Neovim working with less lag. Mosh support is improving but still isn't fast enough not to be annoying!


My dev environment is emacs + gcc + gdb + <insert database here> and sometimes <insert fun scripting language here like scheme, javascript, lua or tcl>. I have an EC2 instance I ssh into and as long as my network connectivity is decent it's not a bad experience.

But that's probably not what you're talking about.


Not me, but I have been looking for a self-hosted CDE; anyone have experience with Eclipse Che they would share?


If you're looking for a self-hosted solution, you can check out strong.network (I work there). Even though I'm biased, we try our best to combine productivity and security with our Cloud Development Environments (CDEs).


Daytona.io is also a Self hosted alternative to Codespaces. We have not launched publicly but feel free to book a call (on the website) and can send it to you


I've used Replit for live coding interviews when we are talking with candidates. From what I have seen, I don't think I would choose to switch to it for a large project. However, it is nice as a way for someone to quickly write small chunks of code with no setup.


I used Cider while at Google, and set up Google Codespaces at the startup I'm working at now.

Some notes:

- Codespaces uses the devcontainer spec, so folks can use our setup offline if they want

- I get far fewer questions about how to do XYZ, and when I do get questions, I can almost always reproduce them, which is a breath of fresh air

- We do pay a fair chunk to use high CPU/memory machines, this is worth it for us. I have a very lightweight laptop that I use

- Some of our developers like having multiple separate codespaces at a time as a way to separate out different projects. I just use branches, but some folks like the codespace workflow.

- I can run vulnerability scans over our dev environment which is nice

- When I onboard devs, I can get them to the point of running our full application suite in a 15 minute meeting

- I don't have to deal with m1 vs not-m1 issues that were popping up all the time before we made the switch

- Having a linux base is nice, as that's what we use in production. We deal with some annoying dependencies and not having to install them on Mac anymore is nice

- It's seamless changing between my laptop and desktop that I work from. All the code is instantly on the other when I switch, even if I haven't pushed up to git

- Chrome + Notion + Slack + our task tracker app + etc. take up a lot of memory nowadays. Even with 32GB machines, folks would often run out of memory trying to run our app. On a codespace, all of the memory is dedicated to just the app.

- With prebuilds, when a developer opens a codespace, we already have all of our python, node, go, etc. dependencies preinstalled (also things like awscli, terraform, pre-commit, VsCode extensions, docker). They just run `aws sso login` and then `start --serve` and things work.

- I live in an RV, and sometimes don't have great internet, but my Codespace is on a remote machine that does always have fast internet. Even being an internet-based service, this works great for me.
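
To give a flavor of the prebuild setup: the heavy lifting lives in a devcontainer.json whose expensive steps run at prebuild time, so a fresh codespace comes up with everything installed (the image, feature IDs, and commands below are illustrative, not our actual config):

```json
{
  "image": "mcr.microsoft.com/devcontainers/universal:2",
  "features": {
    "ghcr.io/devcontainers/features/terraform:1": {},
    "ghcr.io/devcontainers/features/aws-cli:1": {}
  },
  "onCreateCommand": "pip install -r requirements.txt && npm ci",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python", "hashicorp.terraform"]
    }
  }
}
```

GitHub runs onCreateCommand during the prebuild, so those dependency installs are already baked in by the time a developer opens the codespace.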

The biggest cons:

- Some of our devs were really passionate that they preferred other IDEs to VsCode. Some other IDEs do now have support, but they aren't as well supported as VsCode yet

- Codespaces have downtime occasionally (as did Cider at Google). Most folks just use a local copy/devcontainer of the app or do other things when this happens, but some folks choose to always use devcontainers/local copies because this annoys them so much. At Google, we just posted memes when this happened and went home.

- Codespaces become inactive after a configurable amount of time. Some folks don't like the ~40 seconds it takes to reactivate a codespace after it's been inactive for more than 30 minutes, so they either write scripts to keep their codespaces alive or just don't use codespaces at all.

Overall:

- most of our newer employees exclusively use codespaces.

- A few of our older employees who developed locally for years have chosen to continue developing locally, and will just hop into a codespace to run a quick terraform command or something if their local version gives errors.

- The number of questions we deal with regarding issues on a single user's machine has gone down dramatically, and tbh most of the remaining questions come from folks who still develop locally

- We do pay a pretty penny for this, but it's a small fraction of our overall cloud spend or costs per employee


I presume you mean GitHub Codespaces, because Google's offering is called IDX and is still in development.


Oh yeah, good call. Cider was the internal tool I used at Google when I was an employee there. GitHub Codespaces is what my current company uses.


I am a big fan of strong.network. They have convinced some of the best tech people I know with their offering, and some market leaders in reinsurance, semiconductors, and other industries across the US, Switzerland, and Asia are already clients.


There are a few gotchas, but I've introduced them at both Vanta and Q Bio and have been thrilled with the productivity increase at both places. As long as you stay on top of keeping prebuilds green, you know that everything just works for all of your engineers.


My previous employer used them to test client integrations of our hosted services. It was nice to send a URL for collaborative debugging rather than telling someone to go spin up a copy of the SDK and pasting them a block of JSON to configure it with.


I've been doing this personally and also pushing for adopting it at work. We are currently moving to a monorepo though, so we will probably look into it more seriously after we're done with that.


doesn't quite fit the definition, but at my work we use Vertex Workbench quite extensively. the rest of the organization still has archaic IT practices, so it is quite liberating that we have full control (within reason) of our development environment once everything is set up.

its workflow is still quite in its infancy though. but the SaaS route has allowed teams that are new to best practices to still benefit from them while they move towards adopting them.


Do things like Wix, Weebly, Wordpress count as CDEs?


Nope - those are website builders.


If Wix or Weebly add code editing will they count then as CDEs? I guess my question is, what would they need to add to count as CDEs?


Replit to build very quick prototypes of some script and see what it would console log.

GitHub Codespaces for quick stuff, I prefer my IDE.


We're not, and I'm very glad for it. I am the opposite of excited about the idea of CDEs.


Just for curiosity, could I ask you why ?


I use stuff like replit, online sandboxes, jsfiddle, just to execute snippets of code.

I'm aware of IDX though


I use Codespaces daily at work. Full disclosure, I work for GitHub. I like them pretty well.


I have used the GitLab cloud IDE at work for quick things. Not so much as a primary IDE.


hmmm, why do these sound like "connect via (video) terminal to $$somewhere" from the ancient times? Except the terminal is not a dedicated VT100 or Tektronix but your laptop..


I have a Macbook Air and vscode-remote over ssh to do anything.
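
For anyone wanting the same setup: the Remote-SSH extension just reads your ~/.ssh/config, so a single entry is enough (host name, address, and user here are placeholders):

```
Host devbox
    HostName devbox.example.com
    User me
    # keep long-running remote sessions alive
    ServerAliveInterval 60
```

Then "Remote-SSH: Connect to Host" in VS Code, pick devbox, and you're editing on the remote box.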


Are these things charged per usage? Feels strange to me.


Director of Product for Codespaces here... AMA


How is the space trending? Who are the biggest adopters (I've seen a lot of use cases in consultancies, education, and "big gnarly stacks")? What are you most excited about?


I considered it but the cost seemed pretty steep - up to ~$30/day/dev for codespaces. I didn't think local development was close to broken enough to justify it.


bit biased, but we use coder (I work at coder)


"Based on my knowledge and experience, I think that the Strong Network solution is the most complete and robust approach for the enterprise"





