GitHub’s engineering team has moved to Codespaces (github.blog)
861 points by todsacerdoti on Aug 11, 2021 | 679 comments



Serious question: if I were to use this, would Microsoft collect analytics on me (code written, keystrokes, mouse movements, sleep/work schedule, productivity metrics, etc) and monetize that data by using it to build some AI product like Copilot, or build a productivity dashboard so managers can fire people for not being productive enough (like Xsolla did), use it to serve me ads, or do some stupid/irresponsible/unethical thing with it that will ultimately end up hurting me in the long run?

Because I feel like the answer to that is yes, and I can already see myself in the future writing something angry in the comments section of an article on HN that exposes some evil/stupid shit Microsoft did with this, or happened as a result of this.

I'm not against the concept of using a thin client for development, but it just doesn't seem smart to me to do it in such a way that you have to place trust in a company that, throughout their entire existence, has consistently proven that you should not trust them, because they have no incentive to (and thus never will) act in your best interests. It's like if Facebook released their own web browser and "promised" to respect your privacy; you'd be an idiot to believe them.


If a business is paying for this product, then THEY (not you) are the customer. And in that context - yes, Microsoft may be asked to build (or may build on its own) a product for managers to manage their workforce, particularly if remote. This could involve presence tracking, engagement tracking (i.e., do you open alerts / read your bug reports), etc.

One approach would be to monitor what management considers productive / successful employees, as well as employees who have gone on PIPs - then train the AI on that data set. You'd then be able to drive various alerts to manager and HR dashboards (i.e., if a manager was failing to address alerts / manage their team, it would be surfaced up a level).

Dystopia may be coming indeed. Remember, the key issue is usually not whether a product is paid, but whether YOU are paying for it. If not - someone else is the revenue source, and Microsoft et al. will serve them. Maybe the individual licenses will not have this "management" layer add-on as an option.

But I think there's a very real chance of something like this occurring! If not right now, then eventually. In sci-fi this is an inevitability (monitored through a smart watch or something, etc.).


> One approach would be to monitor what management considers productive / successful employees, as well as employees who have gone on PIPs - then train the AI on that data set.

I look forward to the papers on how to slack off on your job while the AI classifies you as "a straight shooter with upper management written all over him". Possible title: "An adversarial approach to the TwoBobs Productivity Classifier"

(/s if this was not clear)


Sadly, not /s.

I’ve been pitched the TwoBobs Productivity Classifier.

My feedback was that anyone who needs it has classified themselves as an unproductive dev manager.

I suggested the concept be made available as being by devs, for devs, purposed as a personal coach, with no uplink to the mothership.


> I suggested the concept be made available as being by devs, for devs, purposed as a personal coach, with no uplink to the mothership.

Ah yes, the classic "foot in the door" approach. Despite all the good intentions behind it, it still would end badly.


As long as there's more demand than supply for SWE workers, something like that would never become the norm.

All of the dystopic news stories of WFH employees being subject to monitoring are invariably from fungible employees.


I often read on HN that it's a norm for US employers to install spying software on developers computers. I don't see how this is different. I would try to avoid working at such companies, because I don't feel secure knowing that someone is spying on me.


> I often read on HN that it's a norm for US employers to install spying software on developers computers.

I haven't heard of this happening at all.

Whereas it is commonplace for people to use time-tracking software (like Toggl, RescueTime, etc.) - and it is also not uncommon for companies to require employees (software-eng or otherwise) to use such software for the purposes of reporting their time/hours/what-they're-doing - these tools all have web-based UIs for doing data entry, which is definitely not the same thing as installing monitoring software on an employee's own personal hardware, with or without their informed consent, as a condition of employment.

I think people are conflating terms and using overloaded terms with negative connotations: "time tracking software", which does active-application-window tracking for the benefit of the direct user, can easily be mischaracterized as "monitoring software".


Not only have I heard of this happening, it's literally on my computer right now! Most of the monitoring on it is automated - but I also write that software. There's not a lot of ML involved in the kinds of automated activity checking that my employer is doing, but if they suspected I was not being productive, they could basically pull my entire body of network requests, which at the very least has addresses of sites, and can tell you roughly how long someone spent on them. Since 90% of my work is on the internet, this is pretty effective spying.

Interestingly, I'd much prefer Microsoft's algorithms classifying me as productive or unproductive, based on how I code - while there's a lot to be said for different patterns of work being equally productive, I trust Microsoft to implement the same heuristics across the board, and not take a look at my gender and race and make assumptions about me based on that.

(Of course, since the ML data of productivity will come from nearly entirely white males, if it's only available to github enterprise cloud subscribers, then the ML data will be pretty biased as it is. Guess discrimination is inevitable!)


> but are YOU paying for it. If not - someone else is the revenue source

even if you are paying for it, it's quite possible that they also have other revenue sources that conflict with your interests.


Exactly, this heuristic doesn't work all (or even most?) of the time. A more accurate approach is to rely on the terms you agree to when you buy/use software (if you don't want to read them, then you can depend on faulty shortcuts instead).


> if you don't want to read them, then you can depend on faulty shortcuts instead

Or refuse to use software that requires you to agree to long and confusing terms and conditions.


I really hope that if this happens, DPAs bring the hammer down on everyone involved (both MS and companies using it) in a useful timeframe (i.e. before it has become socially acceptable because "everyone is doing it" and there are no consequences).


Everyone is already doing it. Not to engineers yet, but look at Amazon warehouses.

These things are always deployed against the least powerful first.


It's also easiest to implement in the warehouse.


Do you have a right to privacy in the workplace?

Back in the day the boss would be right there with you in the store etc and could watch you all the time.

Or maybe this will be something that really only shows up in China etc.? I think they are a bit further along on some of these types of things.


I prefer to say "they're a bit less developed on some of these types of things". Their attitude is still very much that of the Industrial Revolution, where a worker was no more than a replaceable meat unit with no rights and no expectations of a life of any kind outside of productive work.


You don't have the same rights to privacy when performing your job for a corporation, and everyone else is already doing it in other industries.

Just this week was a big story about how a call center provider (used by Apple and others) was forcing employees to install cameras in their own homes to monitor their remote work: https://www.nbcnews.com/tech/tech-news/big-tech-call-center-...


I was just going to mention customer service reps, who traditionally would have worked out of call centers and been subject to restrictions to make it more difficult for them to steal customer data (with the side 'benefit' of micromanaging productivity). With the pandemic, businesses that had considered the CS staff they couldn't replace with AI as unsuited for remote work were forced to figure out a way, and that way was essentially to replicate many of the security conditions of the call centers in the home, rationalized as a necessity to keep people employed and get through the pandemic. But let's be real: CS has long been a race-to-the-bottom cost segment, often outsourced and run as a computer with a mere human overlay. So it was only a matter of time before management was satisfied that the security risks were mitigated enough to justify the new cost savings. Some big companies have already decided to stick with the home-based support setups they put in place only because of the pandemic. And now to apply the formula elsewhere.


Customer support is a computer with a mere human overlay. A profound statement.


I am not sure what is possible in other jurisdictions such as the EU, but I have not seen something like this so far.


I'm not sure about individual member states, but there was this story in Germany the other day:

https://www.zdnet.com/article/gdpr-german-laptop-retailer-fi...

€10M fine for keeping employees under video surveillance for no good reason.

This is an example showing that there are limits to employee surveillance on-site.


In the US, or in Europe?


Woah, this is bonkers!


What’s a DPA, please?


Data Protection Authority


I believe it's data privacy advocate?



Thank you!


Not that hard to imagine; it's already happening in their other products[0], and it's what management wants.

0: https://news.ycombinator.com/item?id=25198713


It doesn't matter who is paying for the product. You could pay for the product and they could still add usage terms about monetizing and sharing your personal data.


I think that any workplace that would do this in the future is already doing similarly toxic things. This isn't an excuse - there's a danger in commodifying things like this and making them even easier - but there are already shit workplaces out there with plenty of red flags.


> It's like if Facebook released their own web browser and "promised" to respect your privacy; you'd be an idiot to believe them.

Replace Facebook with another major ad company, and you are basically describing the most popular browser right now.


If they do, the competition is just a click away: https://www.gitpod.io/


Stackblitz Webcontainers[0] look promising too. I read that the creators plan to open-source the underlying tech once it's more mature.

0. https://blog.stackblitz.com/posts/introducing-webcontainers/


Yes, for all of us, but all the corporate clients will flock to GitHub to benefit from those surveillance capabilities.


How about moving all our code off GitHub while we're at it?

In this worst-case scenario (which hasn't happened yet), I'm sure GitHub would expect the goodwill of open source projects and contributors while simultaneously being the boot on their face in the workplace.


I have projects on github because some of my clients requested it. My own projects stay off. Other than renting physical servers I do not use a single piece of software for my own business that forces me to subscribe and be online.


I've been on sourcehut for a while now. It's great.


Unfortunately it's your company that signs up to this sort of thing, not you.


As slownew45 points out, GitHub does not make money from individuals who use it for free. Of course, such use will become increasingly difficult over time for Microsoft pinheads to justify from a business perspective. Telemetry, full-on surveillance, and ultimately subtle manipulation of individual software authors seem an obvious way to try to justify why GitHub should continue to allow non-paying individuals to use its repositories.

The Facebook web browser. Never say never. If we knew what we know today about privacy, we would have pointed out the idiocy of anyone choosing to use a web browser released by Google.


> The Facebook web browser. Never say never. If we knew what we know today about privacy, we would have pointed out the idiocy of anyone choosing to use a web browser released by Google.

Google were already making money off targeted ads at that point. There was even an outcry about GMail scanning people’s emails several years before Chrome was released. But you could probably argue that we didn’t fully appreciate just how bad things would get at that stage. Many of us were just happy to see Microsoft get some competition.


> we would have pointed out the idiocy of anyone choosing to use a web browser released by Google.

I still try to, but it’s a lost battle already. (Don’t use Chrome, BTW.)


The fact that they will be able to do it is the problem; we should not put ourselves in a position where we have to rely on their future morality.

Github at this point needs to be open source; at least then there would be an easier way out.


Lol, as if Microsoft paid billions to give it away for free now. Besides, do you want your index funds to do well? Because that's exactly where the proceeds go.


More than that, github was floundering economically, and Microsoft bought them to ensure continuity, because Microsoft was the single largest user.

Don't want to rely on Microsoft for GitHub? The underlying 'git' is open source. Just deal with your git repos raw.

No one is forcing you to use github.
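For anyone who wants to try it, here's a minimal sketch of that "raw" workflow (assuming git is installed; all paths are throwaway temp directories): a bare repository on any path you control acts as a full remote, no forge needed. The same idea works over SSH or a network mount.

```python
# Demonstrates a forge-free git remote: a bare repo on local disk.
import subprocess
import tempfile

def git(*args, cwd=None):
    subprocess.run(["git", *args], cwd=cwd, check=True, capture_output=True)

remote = tempfile.mkdtemp() + "/project.git"
git("init", "--bare", remote)                  # the "server" side

work = tempfile.mkdtemp()                      # a local working repo
git("init", "-q", ".", cwd=work)
git("-c", "user.email=demo@example.com", "-c", "user.name=demo",
    "commit", "--allow-empty", "-m", "first commit", cwd=work)
git("remote", "add", "origin", remote, cwd=work)
git("push", "-q", "origin", "HEAD:main", cwd=work)  # published, no GitHub
```

Swap the local path for `user@host:path/project.git` and you have self-hosted git over SSH with zero extra software.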


This is completely false. Microsoft bought GitHub to appeal to developers. They had TFS and Azure DevOps with Git for years before the purchase. I'm also 100% certain Microsoft was/is not GitHub's largest customer.


You don’t pay $7.5B for things which are “floundering economically” usually. It was a strategic buy, and likely reflects a healthy multiple on revenue. Kudos to Microsoft for being the one to get the deal done, I’m very confident GitHub had multiple suitors.


> More than that, github was floundering economically, and Microsoft bought them to ensure continuity, because Microsoft was the single largest user.

Yeah they paid 7.5B so they can continue using their favorite web app? This is possibly the most naive take I've read on it that ignores any of the strategic reasons for actually acquiring them.


>More than that, github was floundering economically, and Microsoft bought them to ensure continuity, because Microsoft was the single largest user.

Was it common knowledge at the time? I don't recall seeing info about MS being the largest user, and unfortunately their news post on the acquisition [1] returns an error…

[1] https://news.microsoft.com/2018/06/04/microsoft-to-acquire-g...


> More than that, github was floundering economically

I do not think we actually know that. Was there any release of any financial data?

Also, part of the "beauty" of the modern "tech" giants is that they manage to find alternative sources of revenue, in some cases (notably Google and Facebook) completely replacing the old school user subscription revenue. By and large people seem to be OK selling their data for either a price reduction or complete price removal for services. It is what it is.


Github is not the only git web frontend. I worked with Gerrit (fully open source) for a while and really enjoyed it.


Github wasn't profitable but had plenty of marketshare to guarantee more funding.

Also Microsoft has plenty of infrastructure and did not need Github. How do you think Windows and Office were built for decades?


Wow. I remember reading that GitHub was a bootstrapped, profitable, and dominant 90-person company. What happened to profitability?


Venture capital happened: https://archive.is/1t6qi


Github does not need to be open source.

Open source needs to pull its head out of its collective ass and not hand over its entire workflow to private companies.

Github may be the single greatest execution of embrace/extend/extinguish in computing history.


Where are we seeing the extend or extinguish with GitHub? It's been 3.5 years and GitHub is just as compatible with git as it has ever been.


Microsoft plays a very long game. 3-4 years is not enough time to boil a frog; it takes much longer or the frog will realize what's happening. Be patient. Give them 10-15 years after their "We love Linux" shift and you'll see. They love it so much, they will own it.

Just think of Microsoft like the Borg.


Microsoft is a trillion dollar company in large part because they embraced open source. Azure is extraordinarily profitable.

Why would they fuck with their number one audience, developers?


According to this site, Microsoft's profit is divided into thirds:

https://www.investopedia.com/how-microsoft-makes-money-47988...

A third is for Office, a third for cloud, and a third is Windows.

IMO Microsoft is a trillion-dollar company because of their long monopoly with Windows. It's so entrenched that their "punishment" for abusing their monopoly position (according to the DOJ) was to give all schools free copies of Microsoft Office, thereby forcing every parent to own a copy of Microsoft office to be able to read reports from their kids' schools. Ingenious!

You know why they would fuck with developers? Because developers have choices now, and Microsoft doesn't like it when anyone has a choice besides Microsoft. They are remedying that.


You know how they added Github CLI[0], right? There may be a time where you must use Github CLI instead of a "standard" git client to interact with Github projects. That would be the "extinguish" phase. Right now they have embraced and are extending (such as with Github CLI).

[0] https://cli.github.com/


If you wish to leverage the advanced features of github, the CLI gives you direct access rather than rolling your own api client. git will always be a first class citizen at GIThub.. but how do you open, close, or comment on PRs with a standard git client?

Edit: The last part came across a bit snarky, but I'm trying to stimulate the thought process - such as leveraging GitHub Actions to automate all aspects of PRs via GitOps instead of using the GitHub client. Many ways to avoid dependence on the CLI!
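To the point about needing more than a git client: the forge-only features are plain HTTP. As a sketch (the repo name and PR number are invented), this builds the GitHub REST v3 request that commenting on a PR boils down to, without sending anything:

```python
# Build (but don't send) the GitHub REST v3 call for commenting on a PR.
# PR comments go through the issues endpoint, because every pull request
# is also an issue in GitHub's data model.
import json

API_ROOT = "https://api.github.com"

def pr_comment_request(owner: str, repo: str, number: int, text: str):
    """Return the (method, url, json_payload) triple for the comment call."""
    url = f"{API_ROOT}/repos/{owner}/{repo}/issues/{number}/comments"
    return "POST", url, json.dumps({"body": text})

method, url, payload = pr_comment_request("octocat", "hello-world", 42, "LGTM")
```

Tools like the `gh` CLI are essentially convenience wrappers around calls like this, which is why they sit beside, not inside, git.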


As another commenter pointed out, we're at the "extend" part of embrace, extend, extinguish. This part is just making GitHub tools easier to work with than non-GitHub ones.

In the extinguish phase, git won't prevent GitHub from rejecting a commit if it doesn't contain <TrackingMethodSignature>.


That sounds very much like the "extend" part of the process.


It feels like an open source standard on top of git that defines how to do this stuff would help reduce lock-in. I wonder if Fossil or GitLab have something.


I made this comment elsewhere [0], but by this logic anyone who builds an ecosystem for an open source tool should be accused of the EEE strategy.

> There may be a time where you must use Github CLI instead of a "standard" git client to interact with Github projects

So we're accusing MS of a slippery slope based on a phrase from 25 years ago, for which there is absolutely zero proof or indication of them pursuing it in the last decade (at least).

When GitHub show the _slightest_ inclination to do anything of the sort, I will be standing there with you, screaming blue murder. As of right now, they're progressing the development of git, "embracing" the open standards that have been developed (lfs being exemplary) and have shown absolutely zero signs of extending git in incompatible ways. See elasticsearch [1] for what "extend" _actually_ looks like.

> Right now they have embraced and are extending (such as with Github CLI).

If they _hadn't_ embraced, people would be complaining that GH are forcing non-open standards to be used. I firmly disagree that the GH issue tracker and pull request management are an "extension" of git. They have nothing to do with git whatsoever.

[0] https://news.ycombinator.com/item?id=28149098
[1] https://news.ycombinator.com/item?id=28110610


> I made this comment elsewhere [0], but by this logic anyone who builds an ecosystem for an open source tool should be accused of the EEE strategy.

Yes, of course.

Why, did you somehow think otherwise?


In practice, it doesn't matter what an "actual" extension looks like, but how your product is viewed by customers, because those are the ones you want to lock in. And judging by HN comments, to very many people, Git means Github, making it an extension, if not outright a synonym.

And their extending has nothing to do with open standards, which is worrying due to the potential to create lock-in.


By that logic, anyone who builds a product around an open standard is guilty of EEE. A web browser that implements a sync feature, an XML editor that implements syntax highlighting, an RSS reader that implements link previews all "embrace" open source standards and "extend" their core open standards with non-standard features, which is _exactly_ what github have done. They have been excellent players in the git ecosystem.


I think this conundrum can be broken down by noticing how pervasive one extension is. A feature that has become identical with the core product in the eyes of a layman is problematic in the same way as thinking "IE6" is the same as "the web". If there's no awareness of choice, then the only outcome is further lock-in.

Implementing "sync" on its own doesn't matter that much, because few people think "sync" is a defining feature of the web standards, or that preview is an inherent part of web feeds. The awareness of alternatives still exists (I hope).


That's the trick: git is open, but the metadata is what matters. Issues, PRs, integrations into various code-auditing services. That's the whole "extend" bit.


By that logic any proprietary product that provides an interface with an open source tool is guilty. I think accusing MS of EEE in this case is a huge stretch; GitHub is a commercial product that uses a popular open source tool with 100% compatibility (that I'm aware of) and actively participates in the development and features of that tool. When MS starts implementing extensions to git that only work with GitHub, we can point fingers, but accusing them of extending git to extinguish it, based on pull requests, is baseless.


> By that logic any proprietary product that provides an interface with an open source tool is guilty.

In my view, the entire PaaS ecosystem modus operandi is building on top of open hardware & software, allowing and enticing people to move in easily (familiar tech/open source), and making locked-in products out of them. If that's not EEE I don't know what is.


I can’t think of a single person I’ve worked with in the last 10 years who understands the difference between vanilla git and GitHub. Nobody really understands why pull requests are named that way, or that it’s possible to have very different workflows to what GitHub offers.

I don’t know that this is the result of a deliberate strategy by GitHub, but by any metric the extend step is a resounding success.


> I can’t think of a single person I’ve worked with in the last 10 years who understands the difference between vanilla git and GitHub.

If not one developer in a decade can tell the difference between a source control hosting service and a tool they use, that's not really GitHub's problem. This is the same as my parents not knowing that "facebook" isn't the internet.

> Nobody really understands why pull requests are named that way,

Sure they do; the answer is one Google search away on the largest Q&A forum for programmers [0]. It was the first Google hit for "why are pull requests called pull requests".

> or that it’s possible to have very different workflows to what GitHub offers.

Maybe inside your circle, but plenty of people are aware of different workflows. Some of the largest open source projects in the world exist on github and don't use the pull request workflow (linux kernel, firefox, chromium off the top of my head).

[0] https://stackoverflow.com/questions/14817051/why-does-github...


> If not one developer in a decade can tell the difference between a source control hosting service and a tool they use, that's not really GitHub's problem. This is the same as my parents not knowing that "facebook" isn't the internet.

Nobody said it was GitHub's problem just like nobody said it's Facebook's problem in your example. They're both laughing their heads off all the way to the bank. It's everybody else's problem.


GitHub isn't the only one using pull requests; Stash and GitLab do too. Pull requests have just become standard workflow without ever being part of git.


FWIW, Gitlab calls them merge requests instead of pull requests, which IMO is a more descriptive name.


There's never a stretch with Microsoft. It is always, and has always been, the same play. You can make excuses all you like, but in the end they will attempt to embrace, extend, and extinguish all competition.


When they show any signs of extending git, or of extinguishing any other open source competitors, I will be standing there with you calling them out. Until then, they're developing an excellent product that people want to use, built with open tooling.


Those who don't study history are doomed to repeat it.


I'm arguing against soundbites here, which is really unfortunate and something I hoped HN would be above. It's very easy to shut down an argument with a soundbite or a famous quote when you don't have any actual proof that history is repeating itself. The situation and landscape are _very_ different from the mid 90s; Microsoft are showing _no_ indication of being bad actors whatsoever (in fact they're showing the exact opposite), and haven't done so for at least a decade.


We're down to soundbites because all of the arguments have already happened and we're down to agreeing to disagree. You are basically asking for proof of a future event already occurring, when the best we can do is look at the historic behavior of not just MS but similar companies. Your entire stance is to ignore any historical trends or behavior and only agree once the damage is already done. There's no constructive argument to be had there.

Effectively you're saying that all decision making should be based on current observable reality only and any projections about the future should be ignored because they are not 100% provable. I guess that's a way to function, and based on how bad people often get the future wrong it might even be a productive one, but it does explain why we're down to soundbites, because we've reached a fundamental disagreement not on opensource or Github, but a disagreement on how to plan for the future.


If you're going to tell me to look at the history, then I should tell you to do the same. The history says they did it 25 years ago, and haven't been doing it for almost 20. The history says they have been good citizens in all their open source contributions in the last decade. It says they're actively working to further the open standards they're building on. At a certain point, a specific incident becomes an outlier, which is a point I believe we have passed.


That is somewhat fair - there aren't great alternatives today to migrate issues and the like to. But all of those bits of metadata are easy to pull out of the service via API hooks. If someone wanted to start a serious competitor to GitHub that shares the same basic data model, it'd be pretty trivial to write up migration aids.

I don't disbelieve that it's possible for microsoft to severely restrict these - but entirely removing them is off the table unless they cut out a lot of value. Inter-service communication to third party review tools and CI/CD tools all depend strongly on those API hooks.
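As a sketch of how small such a migration aid could be: the input field names below match GitHub's REST v3 issue object, while the flattened output format is invented for illustration.

```python
# Convert one GitHub-API-shaped issue dict into a forge-neutral record
# that a competing service could import. Only common metadata is kept.
def to_portable(issue: dict) -> dict:
    return {
        "id": issue["number"],
        "title": issue["title"],
        "state": issue["state"],
        "labels": [label["name"] for label in issue.get("labels", [])],
        "body": issue.get("body") or "",  # the API may return a null body
    }

sample = {  # trimmed-down example of what GET /repos/{o}/{r}/issues returns
    "number": 7,
    "title": "Crash on startup",
    "state": "open",
    "labels": [{"name": "bug"}, {"name": "p1"}],
    "body": None,
}
record = to_portable(sample)
```

The hard part of migration isn't the transform; it's pagination, rate limits, and preserving cross-references between issues and PRs.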


> If someone wanted to start a serious competitor to github that shares the same basic data model it'd be pretty trivial to write up migration aids.

Like GitLab? I assumed GitLab would dominate after the MS purchase of GitHub, but I was incorrect; apparently people don't mind trusting their data and personal projects to the epic abuser of privacy that is Microsoft.


Well I don't think the personal projects are really a significant factor in terms of where MS's paychecks are coming from - they care more about the commercial subscribers. And, speaking as someone employed at a commercial subscriber, objecting to continuing to use github due to the microsoft acquisition when we use office for pretty much all document production is going to be a pretty hard argument.


There are really simple ways to play dirty with APIs, such as making them incomplete in subtle and inconvenient ways - leaving out some of the metadata, say. Hypothetical example: make it impossible to query tags on issues. You'd still be able to do most things through the API, but if you have a workflow that's heavily dependent on tags, the data is effectively siloed.


So to be clear, the proof that GitHub is extending is a hypothetical example of an incomplete API that is not based on an open standard?


I didn't want to prove anything. I just pointed out what is possible.


Sorry, I misread the username and assumed you were the commenter I replied to. Yes, these things are _possible_, but there is no evidence or indication that they are happening with Github.


I agree that there are no signs at the moment. Companies change over time, so a certain risk - however small - is present, as with all 3rd party services.


Are there any foss initiatives to making these metadata bits portable?



What is going to be extinguished?


> Open source needs to pull its head out of its collective ass and not hand over its entire workflow to private companies.

Well, for starters, there is GitLab which attempts to do a lot of what GitHub does, while allowing you to self host it: https://gitlab.com/gitlab-org/gitlab

In some respects, I'd say that it does things better; for example, GitLab CI seems way easier to use in comparison to GitHub Actions: https://docs.gitlab.com/ee/ci/

As far as alternative source code management platforms go, with some code review and issue management functionality added on top, there is also Gogs, which is a far more lightweight solution and better fits smaller deployments: https://github.com/gogs/gogs

It was also forked by the Gitea project, which is largely compatible with it but is also in active development: https://github.com/go-gitea/gitea

Oh and there's also GitBucket which also attempts something similar to these: https://github.com/gitbucket/gitbucket

Now, you can probably hook those up with Jenkins or most other CI solutions, but personally I rather enjoyed how Gogs/Gitea integrated with Drone, which allowed for container-based builds (no more plugin hell like in Jenkins): https://github.com/drone/drone

Then, you can throw in some additional tools, for example, for code analysis you could use SonarQube ( https://github.com/SonarSource ) and for security scanning of infrastructure you could look at OpenVAS ( https://github.com/greenbone ).

Oh, and on the organizational side something like Rocket.Chat ( https://github.com/RocketChat ) or Mattermost ( https://github.com/mattermost ) for communication and perhaps OpenProject ( https://github.com/opf/openproject ) for project management.

And there you have it! An open source based workflow that allows you to do most of the stuff that GitHub would let you! Of course, concessions might need to be made depending on what your definition of "open" is and whether you're okay with certain features being restricted to paid tiers in some software; if you do have a problem with that, there's also the possibility of looking at some libre alternatives, though that might lead to the occasional half-dead piece of software that doesn't really have financial incentives for maintenance anymore on anyone's part.

That said, I believe that few choose this approach, because it's somewhat complicated to run all of that and all of a sudden you become responsible for your own SLAs, which many don't want. It's often the same reason why people just provision VPSes from AWS instead of running their own servers in a server room. I think the number of links to GitHub for open source projects above speaks volumes about the state of the industry.

I don't think that there's an easy answer to the implications of this, maybe people should just familiarize themselves with the concept of "Service as a Software Substitute", so that they're at least aware of the trade-offs that their choices have: https://www.gnu.org/philosophy/who-does-that-server-really-s...


GitLab CI is not only easier to use, it also isn't a half-baked product like GitHub Actions.

For example, GH Actions don't support YAML anchors but also have basically no other way of cutting down on repetition in your CI config files (actions can't call other actions, for example), so your CI config is full of brittle boilerplate.
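For illustration, here's a rough sketch of the kind of deduplication GitLab CI's YAML anchors allow (job names, image, and scripts below are made up):

```yaml
# reusable defaults via a plain YAML anchor (GitLab CI evaluates these natively)
.node-defaults: &node-defaults
  image: node:16
  before_script:
    - npm ci

unit-tests:
  <<: *node-defaults
  script:
    - npm test

lint:
  <<: *node-defaults
  script:
    - npm run lint
```

GitLab additionally offers `extends:` and `include:` for sharing config within and across files.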

Also noteworthy is how you can't rerun single jobs. If your deployment failed, you might have to rerun the whole workflow, including the 10-minute test run.

Meanwhile, if you use dependabot, PRs issued by that tool have no access to secrets, so if you need to connect to e.g. AWS to run tests, you need to implement weird workarounds.

I don't understand why GitHub is so popular, GitLab seems like such a better tool and it's also developed totally in the open. Some of the CI stuff is literally amazing (e.g. merge trains).


Technically speaking, Microsoft is a public company. A private company is one not open to investment on a public market.


This is not the first time I see this exact pedantry on HN and I can't help but feel the same way as with people who immediately yell "it's not free! Nothing in life is free!" when you talk about free healthcare or free public transportation.

We know. We understand that it is technically correct. But it's completely useless in the discussion. You know perfectly well what the other person meant.


You are technically correct - but it's overly pedantic. Another commonly accepted definition of a public company outside of the US is one that is partially or wholly owned by the government and thus acts as a public service. This includes crown corporations in Canada or the USPS in the US. Within the context it was used, I think calling Microsoft a private company is fair, since they have no obligation to "act in the public good".


> Github may be the single greatest execution of embrace/extend/extinguish in computing history.

Oh ffs are you even aware that MS didn't originally build GitHub?

And your comment is super out of touch with what MS was doing in the 90s and early 00s. GitHub has plenty of competition, Gitlab for example is a fantastic platform, overall more powerful than GitHub as well. GH's strength is how much better it is for open source projects and if MS messes with that they'll be killing their golden goose.

There is no all-powerful benevolent deity you can hand your code to and say "here, take care of this for me, for free, for all eternity". At some point private companies will have to get involved (or would you rather have your government get involved instead? Because I don't want your government hosting my code). So you want their incentives to be nicely aligned.


> Oh ffs are you even aware that MS didn't originally build GitHub?

Purchasing a popular product definitely qualifies for the "embrace" part of the strategy.

But that's not unique to MS - AWS and Google follow similar playbooks, as does almost any large company that provides a platform-like product. The goal is to make the platform as sticky as possible for as many customers as possible.


If your employer decides that they need such an analytics dashboard, one that spies on every keystroke, then they will have one. That doesn't depend on you working on a laptop or in a remote environment - the company owns and administers that laptop, and they can install everything they regard as necessary.

Therefore the real question is whether Codespaces helps you get your job done; I guess that depends on the complexity and structure of the code, the complexity of the build environment, how you work with the CI environment, and how the project is deployed. These issues will increasingly dictate how things get done - if they are too complex, then you are better off with a remote environment.


Upon re-examination: if GitHub/Microsoft owns the analytics data gathered from monitoring the keystrokes, then they are a third party, and could possibly make that data available across time - so a prospective employer could get a view of a candidate's past performance, or a new employer would have access to analytics gathered on you while working at a previous company. That has the potential to make the whole thing much more sinister. There might be a big difference between your current employer having a keylogger on you and the keylogger sitting in the cloud. Grandparent poster Bogwog has a solid argument here.

It all depends on whether the industry will act this way or not. I still hope that there is some level of decency left somewhere, because being too paranoid about all of it will not get me anywhere either. Also, the utility of all this analytics data is limited; for example, they could possibly get the number of lines of code written by someone, but it is impossible to judge how essential those lines were to the business. And there would be a significant backlash if they got too invasive on the privacy of developers. (That's the moment for the video where Steve Ballmer is flipping out on stage and shouting "developers": https://www.youtube.com/watch?v=I14b-C67EXY )


Don't underestimate trivial inconveniences[0]. There's a difference between being able to install employee surveillance in principle, and having it handed to you on a golden platter.

Some manager may toy with an idea of getting IT to install and manage monitoring software, but ultimately reject it as not worth the cost. But if the company starts to work over something like Codespaces - even if only for purely technical reasons - then suddenly they'll find themselves having a dashboard for free, and for sure they'll start using it.

--

[0] - https://www.lesswrong.com/posts/reitXJgJXFzKpdKyd/beware-tri...


On second thought, the move to remote environments may have an interesting side effect: if people are no longer familiar with setting up their local development environments, it will be a bit harder for them to set up their own side projects to tinker and try out new stuff. In a reality where everyone works in a remote/preset environment, we will have just another small but significant inhibition against working on our own side projects.

Now there is a bit of irony: GitHub helped open source, but this development will be very helpful to corporate software developers while possibly turning out to be harmful to open source as such.


I don’t know if this is a good indicator or not, but Codespaces costs money. So there is a non-zero probability that you are not the product.


Businesses can both charge money for a service and sell / analyze user data. The two aren’t mutually exclusive.


I agree. But as a rule of thumb, if someone gives you the service for free, they have no choice but to collect value from you by other means. Whereas when you pay money for the service, the business does have a choice not to sell your data. Either way, I guess we need to read the terms to be sure.


If a company has a certain type of investors or shareholders, they may have no choice but to collect value from you by all means possible, regardless of how much value they are already collecting.


One very underrated (or overrated, depending on your pessimism) way to deliver value to customers is by offering privacy as one of the products you deliver. The issue is that privacy is hard to prove, so most claims of privacy are accepted at face value, and many proclaimed privacy-minded services are very much the opposite under the hood.

This, I think, is a place where we need some regulation to codify some different forms of privacy and give the government a big stick to bop companies over the head when they violate those definitions. We could potentially manage the definitions as an industry group - but we'd need the government to get the big stick.


> This, I think, is a place where we need some regulation to codify some different forms of privacy and give the government a big stick to bop companies over the head when they violate those definitions.

I 100% agree with you, but doubt this will ever happen under a heavily pro-surveillance government in an effective corporatocracy. Therefore, I think open source can and needs to do better in providing free competition against potentially-dystopian closed-source alternatives.


> Codespaces costs money.

It's going to be paid for by your employer, which leads to GP's concern about:

> or build a productivity dashboard so managers can fire people for not being productive enough (like Xsolla did)

Which is more likely. If your employer sets something like this up - will they use it to calculate a productivity score and use that for layoffs? Seems rather probable to me.


My company is stupid, but not that stupid. We already have vast amounts of data about checkins that could, in the hands of a stupid person, be used to create stupid developer metrics for performance evaluation, and yet I've never once in my many cross-team performance ranking meetings heard a manager who had the gall to trot out "lines of code" as a serious metric for comparing people. And I'm at a FAAMNG. ;)


Windows also costs money and is chock full of spyware.


Businesses rarely ever leave money on the table.

If they can sell the product AND sell developer productivity metrics to management, they will.


Microsoft Outlook/Office 365 costs money, but they still analyze your activity and report a so-called "productivity score". Maybe they don't sell it, but I have no doubt they will analyze the data if they can.


Unfortunately, that rule doesn't apply to Microsoft and the other tech giants.


I would like for MS to collect information that would make the product better. If there is a feature that I am always using, I want them to know it so that they can either improve it or not get rid of it.


Context is hugely important, and equally important is not to get sucked in by the HN echo chamber - that is, consider the MS of _now_ as opposed to that of 15 years ago which HN likes to bash (and give a free pass to other companies doing nefarious things). The FB example is considering the FB of _now_ and not 15 years ago.

In terms of the biggest companies in the sector, MS has been doing much more for developers and developer productivity than others. At the same time it's not a monolith and so you should not expect boyscout behavior either.

Privacy is a concern, and the onus is on you the user to ensure that you remain in control of your data. When using a service (any service), the answer is to always be ready to bail at a moment's notice. That means using source control and not tying yourself to a specific ecosystem.


> consider the MS of _now_

Not sure if that's supposed to be an argument for using/trusting MS.

The levels of user and privacy hostility they went to (and are continuously going to) with Windows 10/11 makes me see this as a strong argument against.


The MS that's killing their own single most popular product, Windows, with anti-user features, is the MS of now. Releasing VS code and WSL doesn't redeem their sins.


Is this not another example of the 'HN echo chamber'? Is Windows market share budging? Windows currently is far from the worst iteration of the OS, and nobody gives a crap about anti-user features, which everyone has had to deal with for literally decades.


> nobody gives a crap about anti-user features which everyone has had to deal with for literally decades

The one thing that used to set Microsoft apart from the likes of Google and Facebook was that their main product, Windows, was not anti-user. No matter what you thought about the OS or Microsoft's business practices, it was a paid piece of software that worked for the user. Windows now is filled with user-hostile features and it works for MS instead. And they don't even grant us the courtesy of giving it away for free.

Market share or the fact that the public at large isn't punishing them for it doesn't mean anything. It was already clear that most people are willing to walk headfirst into a dystopia.


If I sum all the instances of Ubuntu not doing what I wanted it to do and compare that to the dystopia that Windows theoretically exposes me to, I'm not sure which is better, tbh. But if you want to play modern games on your PC, then the choice of OS has been made for you.


The fact that Microsoft's user-hostile moves haven't affected Windows' market share does very little to reassure people that they won't pull similar shit with any other product.


By the same reasoning, since we're talking about data collection, and that data doesn't just disappear in most cases, consider the MS, FB and Google of 15 years in the future. Given the changes already mentioned, what can we reliably say about them, other than that massive change is at least possible, and possibly even likely?

This is why I don't like Google collecting data on me now. Not because I don't trust them now, I for the most part still think them trustworthy, but 15 years is a long time and there's no way I can intelligently trust that they will act in a certain way 15 years from now, given the available information at my disposal.

And that's with a company that I think is heavily incentivized monetarily and has a strong employee culture of keeping data private and in-house. Until there are better laws protecting my data or a clear legal relationship with me and an entity with regards to my data, I think the only prudent course of action is to keep as much information about me as possible out of their and others hands.


You think? I'm just completely drained and I want Microsoft, Google, Apple, and Facebook to get off my lawn.

https://rentry.co/areweweloveopensauceyet


Top comments always have to be so negative every time there’s something cool on the front page…


That's because people here have grown up and see the world for what it is, how products like these play out, how incentives around them are structured. Most of the cool things like this are bait.

Seriously. It takes extreme naivety to take a SaaS business at face value today. Patterns of exploitation are well-established and easy to spot. Once you see this a couple of times, including in offerings you got so excited about - or worse, you get burned by them personally - it's hard to be anything but bitchy about tech.

The comments may be negative, but the bigger problem is that they're also not wrong.


Look how tech is employed against lower wage workers. It is a good idea to be extremely vigilant about anything "reporting". I think people working in tech have a responsibility to not behave like 14-year old fangirls.


The business incentive is quite simple: to evaluate code monkeys and quantify development efficiency in the eyes of salesmen - those who approve projects at Microsoft and possibly GitHub. They want to offer services to large corporate entities, and they deem this valuable. Call it collaborative and something-cloud and you have your green-lit project.

The road of quantifying the work into small packages is a decent one, though: dissect a problem until only trivial tasks remain.

Still, I think your intuition is pretty much on point. But with anything, just take the features you like. If your business can dictate this for you, search for other businesses. Contrary to popular belief, there are a lot of those, although pay might indeed be a bit lower.


So, maybe? But these products already exist. Your company doesn't need to pay Microsoft to get that data, they can install stuff on your work machine and get all that info right now.


Semi-serious answer: a couple of years ago we devs used to joke about putting ourselves out of business by teaching AIs how to code. I guess the joke's no longer funny; I also guess the current situation shouldn't be a surprise.


> code written, keystrokes, mouse movements, sleep/work schedule, productivity metrics, etc

That's not the most valuable info at all.

What would be valuable is to know which ecosystem, languages and tech people are investing time and resources into. These are the ones you need to prioritize for your dev tools. For instance, Microsoft decided to invest in supporting Rust https://docs.microsoft.com/en-us/windows/dev-environment/rus... as well as making sure .NET works on all OS.

> or build a productivity dashboard so managers can fire people for not being productive enough (like Xsolla did)

When a competitor decides to start using these, it's awesome. You get a nice window of opportunity to poach high performers.


They already do with Teams. Of course they will do it here sooner or later as well.


Yes, and that is how they will train the AI that will replace us all. It's a brilliant strategy, so I can't help but play along.


Of course.


I’m surprised Facebook hasn’t yet. I kinda assume that Facebook isn’t a browser on iOS simply because that’d make it an 18+ app.


> It's like if Facebook released their own web browser and "promised" to respect your privacy; you'd be an idiot to believe them.

Is this satire? A little too on the nose, you know cause Google…


> My friends, I’m here to tell you I was a Codespaces skeptic before this started and now I am not. This is the way. ~@iolsen

I don't actually doubt that this (and the 4 other glowing employee quotes) are real, but even assuming they are, I can understand people remaining skeptical about the sample size of 5 being broadly representative of the 100s (1000s?) of engineers at the company.

Also slightly hilarious that "This is the way" is a Star Wars callback referring to a character blindly following dogma before eventually realising the error of their ways...

On the other hand, there's quite a few "hints" littered throughout the article that their current setup is a bit of a beast, so I can definitely imagine some engineers being relieved to be able to get such a behemoth off their physical laps/desks: I guess if the local development experience is sufficiently brittle & frustrating, any alternative may seem welcome.


I’ve very recently become a GitHub engineer, and I got pre-access to the beta too, and I must say - my absolute favourite thing about Codespaces is being able to dev on a repo you don’t work on regularly and probably wouldn’t contribute to if it meant having to set up an environment etc. It’s really nice to just dip into a project with a working environment in seconds, make your PR and then move on.


This is a problem that can be solved without Codespaces too. For example, if GitHub were to embrace Nix, every project could have a shell.nix file for all of their dependencies, and the new engineer bootstrap script that installs Nix could then just add an internal Nix binary cache (or use Cachix). Any time the shell.nix changes a GitHub Action could push the closure to the binary cache.

With this setup, you can use whatever local tool you want and, as long as environment variables are respected, they’ll just work. The only requirement on your end is to either run nix-shell first, or to use direnv and ask it to load the nix env for you.

The downside is you still are compiling locally, so you still need sufficient CPU/memory resources, but that’s it.

Unfortunately most companies still aren’t paying any attention to Nix.
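As a hedged sketch of what that might look like - the package names and environment variables here are assumptions for illustration, not anything GitHub actually uses - a project's `shell.nix` could pin its toolchain like this:

```nix
# shell.nix - anyone who runs `nix-shell` in the repo gets this environment
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = with pkgs; [
    go           # toolchain versions are pinned by the nixpkgs revision in use
    nodejs
    postgresql
  ];

  # environment variables the project expects (illustrative)
  shellHook = ''
    export DATABASE_URL="postgres://localhost:5432/dev"
  '';
}
```

Pair that with direnv's `use nix` in an `.envrc` and the environment loads automatically when you `cd` into the repo.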


-insert comment about that hacker news comment about how dropbox could be replicated with linux here-


I'm not sure that comment applies here. The suggested Dropbox replacement was more complicated than Dropbox, and had more steps. On the other hand, Nix is about the same amount of complexity and the same number of steps for the developer.

Yes, the parent comment looks a bit complicated, but that's just one-time setup, not something that needs to be done by every developer. And Codespaces will require some of that setup too; it's not like it'll magically know what packages and dependencies your project requires to build.


- I see comment about iPod, no wireless, less space than a nomad. lame


Nix is a prototype-grade implementation of a very narrowly useful model of computation.


Does it take 10 seconds to spin a new one up?

If not, then it misses the main benefit outlined in the article.


Yeah, it’s really fast. In fact, before I joined, I set up a default Codespaces environment for my own repo and was up and running in only a few minutes - and that was the first time, so I was learning and reading the docs as I went. I could then set up all subsequent new codespaces to start in seconds.

But yes, once a project commits to using a Codespaces dev environment (even and especially ones with large codebases / heavyweight environments), there is a lot that can be done to optimise the spin-up time that cannot be done as easily for local dev.


How does it work when you work on several packages at once, e.g. a lib and an app using it? Is it a monorepo, do you configure both dependencies to be locally editable copies, etc.? For example, patching an upstream open source project that is a blocker for your app.


You can also create a separate codespace for each package if they’re independent of each other and you don’t need to test things in the downstream repo in a way that requires your changes being available “locally”.


The codespace is backed by a regular Linux VM, so we can clone the upstream repo and edit/compile/debug/etc in that just like we would locally on our laptops and workstations.


This was my first idea for introducing Codespaces to my team -- set up the infrastructure to be able to spin up a codespace on our less-used component repos where people don't have a local dev environment prepared.


I'm a big fan of capturing the toolchain(s) with the repo. I have had to hack this for embedded toolchains out of necessity for years (decades?) using VirtualBox on a Mac. I still have a Windows 95 VM sitting around containing a copy of Keil or something that is the only known way to rebuild the code for a certain weird-ass micro from some consulting gig in the late 90's. I wonder if it still boots? Hopefully, that customer forgot about me...


> It’s really nice to just dip into a project with a working environment in seconds, make your PR and then move on.

That sounds incredibly useful actually. Is that the main sales pitch for codespaces?


Well I use them for regular dev in work too on especially larger repos with heavier toolchains, and both are big plusses for me personally!


I've been playing around with a homegrown version of this. It's really not that hard: just set up a docker context to your home lab server and use VSCode remote containers.

I've been able to remove WSL from my PC: Docker Desktop was gorging itself on resources. I can now also shut down my desktop and continue exactly where I was on my laptop, and vice versa. I don't have to pull WIP commits back and forth between the two machines.

It's a major leap forward, and there is no need to use GitHub infra.
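If anyone wants to replicate it: assuming SSH access to the server and Docker installed on both ends (the user/hostname below are made up), the setup is roughly:

```shell
# create a context whose Docker daemon lives on the home-lab box
docker context create homelab --docker "host=ssh://me@homelab.local"

# make it the default for subsequent docker commands
docker context use homelab

# containers now run remotely; VS Code's Remote - Containers
# extension will likewise target the selected context
docker ps
```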


I also started doing this, but using containers locally (on a Mac). The ability to have a set dev environment without needing to install local programs or compilers is great. And really once you flip to this model, it doesn’t matter if you’re running your dev in a container or remote VM. So long as you can get a command line to your dev environment, it really doesn’t matter where it is.

(Or who hosts it)


Though I like this model, the problem with dev containers on macOS is that the disk I/O is dog slow. This makes IO-heavy tasks (pip install, npm install) painfully slow.

I have started to run some dev tasks on a remote Linux server with Docker, and then just configured automatic file sync on save over SSH.
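One low-tech way to wire that up, if your editor doesn't have a sync plugin - fswatch, the host, and the paths are all assumptions here:

```shell
# push the working tree to the remote dev box whenever anything changes
fswatch -o ~/code/myapp | while read -r _; do
  rsync -az --delete --exclude .git ~/code/myapp/ dev@linux-box:~/myapp/
done
```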


> This makes some IO heavy tasks (pip install, npm install) painfully slow.

Aside/HTH: The new Yarn has "Plug 'n Play"[1]. They claim that committing [their version of] node_modules is a reasonable thing to do, so that's one way to avoid install entirely. I don't trust that claim, yet (it smells like noisy diffs). You can instead re-enable the global cache[2], and then mirror your host cache directory into your container (whatever `yarn cache dir` gives you).

I have tried none of this myself because I'm not working on anything JS right now. pnpm[3] is what I've been using up til now, and is also a direct upgrade in terms of speed (but I suspect it won't work with mounting host directories).

[1]: https://classic.yarnpkg.com/en/docs/pnp/ [2]: https://yarnpkg.com/features/offline-cache#sharing-the-cache [3]: https://pnpm.io/


Very true. It really depends on what the workflow is. But once you’ve already moved to a disconnected dev environment, moving from a container to remote server is a small step.


I've never used Docker. Is this sort of like using tmux/vim on a cloud server?


I'm guessing that VSCode has a distinct back-end and front-end. It's able to run the back-end server on the remote machine/container/whatever over a bog-standard SSH tunnel. Any autocomplete functionality, debugging, etc. will use the resources on the remote machine. It also does port forwarding for you. The front-end (GUI) runs on your local machine.

A closer analogy would be using Plan9, and mounting most of your cloud machine locally. Vim would then just be your local front-end into those mount points.

Containers/docker add the ability to share your exact machine configuration with someone else, and have them create their own copy of it with a single command. It would be almost like sharing a VM image with your team. There's been a lot written about the pros/cons of containers, but they achieve similar goals to VMs, and so-happen to bring a lot to the table in this scenario.
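If memory serves, you can even launch such a session straight from the terminal (this assumes the Remote - SSH extension is installed; the host and path are hypothetical):

```shell
# open a remote folder in a local VS Code window over SSH
code --remote ssh-remote+devbox /home/me/project
```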


I'm not sure why you're being downvoted, that's a very good question. The short answer is yes*, plus some automation around spinning up a new session.

The wrinkle is that the display server runs on your local machine and connects to the thread of execution on the remote server. Vim doesn't really have that concept of separating view from control, but tmux does play a somewhat similar role.


It is hard when you have millions of commits, thousands a day, and your monorepo is measured in gigs.


VS Code remotes are magic. I use a light version of it for 100% of my development on windows with WSL, but I also use it to log into a mac server and edit files on that.

I have no idea what actually went into developing the feature, but I would have to suspect that Electron would have contributed to making the separation of an app client vs server easier than something totally "native".


So how do you edit media files - using some SSH FUSE mount magic? What happens when you need to work from a shitty internet connection? On a train?


> So how do you edit media files

This only does code. You'd probably have to do those locally and SCP them over or, yeah, some mount magic.

> What happen when you need to work from a shitty internet network ? In train ?

You'd have to commit your work, but you'd be able to bring the environment up on your laptop. There are still merits to this, such as the repeatable nature of containers. Depending on where you've worked, you may have been through the hell of a huge list of setup steps when on-boarding a project: that's one thing that containers are really good at automating.

You may not be the target audience, though.


> this only does code.

VS Code remotes allows you to drag and drop files from your desktop and they're placed where you want them in your remote environment. I don't understand why this would be any different.


You edit media files in an editor? Wouldn't you just upload them? You can drag & drop into VS Code and probably find the path to the tmp folder and work off that too.


oh right, my home lab server, that must be around here somewhere...


They aren't as expensive as you'd expect[1][2] ($70-$200). Hell, you could use an RPi for many use cases.

In my case, my homelab is my laptop (which I connect to from my desktop). I don't actually have a server blade lying around (although those can also be relatively cheap when big cloud upgrades their servers).

[1]: https://www.ebay.com/sch/i.html?_nkw=dell+optiplex+3040+micr... [2]: https://www.ebay.com/sch/i.html?_nkw=intel+nuc&_udhi=200&_ud...


I pay $30 a month for a server (yes, a virtual one; if you want Docker, you have to pay a bit more) with an org domain and TLS cert. I would recommend that to every developer. Yes, some administrative work is required, but that is also a valuable learning experience. You own all the tools that you need to develop efficiently.

Personally I use Gitea, which should be usable by someone used to Github. I don't switch workspaces too often, but I am sure some tools are available.


I have an HP 4U from 2010 with 80 threads and, I think, a quarter TB of memory, 6 SAS drives, and two 1TB SSDs, currently running Ubuntu 16 and Wok for VMs. It took a lot of effort to get this working and it's still a huge pain - I have no UPS for it since it uses 220VAC, and a UPS would cost more than the server cost me, for example.

I can't use Docker in general because my internet is too crappy. That was annoying in 2012 when I first used it, but it's even worse now. I've always worked with VMs, since 2001 or earlier - VirtualBox, Wok, AWS, and now Proxmox.

It helps to know what your use case is, I don't run a ton of VMs, just one or two at a time that can use almost all of the resources of the server, and I generally do infrastructure and computing work in general.

I presume none of this counts as a homelab, though, since I don't program in rust.


> It took a lot of effort to get this working and it's still a huge pain

You make an attractive case for it! :) Yeah, this is why I don't have a "home lab".


> Also slightly hilarious that "This is the way" is a Star Wars callback referring to a character blindly following dogma before eventually realising the error of their ways...

My friends, I’m here to tell you I was a Codespaces skeptic before order 66 and now I am not. Good engineers follow orders.


Brittle local development, from what I've seen, is either a symptom of tech debt - an aging monolith with so many different build systems and dependencies that you end up with way too many Docker containers, or a very long doc for newcomers saying "run this script here, then that script there, unless it's a Monday, then run this script over there" - or of a team that's gone all-in on a distributed microservices architecture with lambdas all over the place and a dozen or more repos, where it's literally impossible (yes, there's LocalStack, but...) to get a faithful end-to-end local environment up and running.

The nightmarish local development experience in either case is a symptom of deeper issues than local development itself.


It's also a sample of people who, if they run into issues with Codespaces, are just a Slack message away from someone who can fix it.


Yes, but that's the great thing about dogfooding. Give it a year or two of this tight feedback loop and Codespaces will be really, really solid.


Given that they apparently couldn't get their setup scripts in order[0] in 13 years I wouldn't get my hopes up.

[0]: https://news.ycombinator.com/item?id=28145556


Do you know whether they use the public Codespaces or a customized version?


I don't. My initial comment is assuming they use the public version. Whether they use the public or a private version they still have back-channels for support that the rest of us don't have access to.


Not about Codespaces, but about Web IDEs in general, they're amazing. I've not used Codespaces but I have used what is being talked about here [0]. It's one of the biggest productivity improvements I've seen from Google. I never really have to think about what packages or software is setup on my laptop and I can change laptops or computers while having the entire state of my IDE preserved. It's a pretty amazing experience.

The only people who I don't think would instantly like this sort of thing are people like game developers who basically don't do any unit testing of their code and run the binaries they generate. For those users you could find quite a few ways around that.

[0] - https://www.quora.com/What-does-Googles-web-IDE-look-like


I think people like not having their physical machine tied to their dev environment. IMO "the way" to do this is with ssh and tmux, not a web app. As long as people have choices I guess it doesn't matter.


If you read the article, that's an option with Github's code spaces setup.

> Visual Studio Code is great. It’s the primary tool GitHub.com engineers use to interface with codespaces. But asking our Vim and Emacs users to commit to a graphical editor is less great. If Codespaces was our future, we had to bring everyone along.

snip

> From there, GitHub engineers can run Vim, Emacs, or even ed if they so desire.


I did not understand that part. Every heavy Emacs or Vim user has their own editor customizations. How does that work with a shared image?


The repo owner gets to define the base Docker image that’s used in the codespace. And then it also looks for a dotfiles repo under your user account, which you can use to install all of your personal customizations. You can see mine at https://github.com/dcreager/dotfiles
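
For anyone who hasn't seen it, the repo-level config lives in `.devcontainer/devcontainer.json`. A minimal sketch (the field names are from the dev container spec, but the image and command here are made up):

```jsonc
// .devcontainer/devcontainer.json -- hypothetical example
{
  // Base image every codespace for this repo starts from
  "image": "ghcr.io/example-org/dev-base:latest",

  // Runs once after the container is created, e.g. to install deps
  "postCreateCommand": "script/bootstrap"
}
```

Personal dotfiles are layered on top per user from your dotfiles repo, so they stay out of this shared file.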


The codespaces can have their own dotfiles.


Just use an sshfs mount and a local editor.


I think this defeats the purpose of Codespaces, because it would mean that all the code runs locally and the only thing the remote server provides is storage?


The lame part is that they have to spin up vscode just to ssh


Quotee here. I agree! I use tmux and vim via ssh for github.com development in Codespaces.


Interesting… Can you use Neovim? Can you persist your custom vimrc config and plugins between sessions?


Yes, I use neovim myself! Codespaces looks for a dotfiles repo under your user account, which you can use to install any personal customizations. Mine, which installs a bunch of neovim plugins and configs, is here: https://github.com/dcreager/dotfiles


Ah, fantastic! That sounds like a really good solution. I also have my carefully crafted configuration preferences for Neovim, and wouldn’t be happy with Codespaces if it didn’t allow me to configure it. But from what you are telling me, they have done it in a smart way. Nice. And it’s nice that they had the people who work mostly in the terminal in mind when creating this. Neovim is a great editor right now; it has been such a fantastic evolution over Vim, adding built-in language server support, for example. Cheers, thanks for replying.


> I think people like not having their physical machine tied to their dev environment.

I'm curious to know if this statement is true. Most experienced developers these days have a regular desk and a chair where they like to code and focus. How prevalent is the move-and-code scenario?


It's not just that. There is also the matter of a laptop getting broken (we all have proper backups, right? right?), stolen, ...

So I do see the appeal in a thin client. My only issue is that the experience is closely tied to latency, so sometimes working over ssh is... irritating. Especially on a mobile connection.


It’s all Docker images under the covers, so our devs also have the option of downloading and running that image locally instead of in a Codespaces VM. That was always possible before, of course, but Codespaces gave us the impetus to invest the effort in Dockerizing the dev environment more rigorously.


Oh, that is cool. Sounds like a nice mix actually. Local vim, browsers, known system. And container with mount-bind to actually compile and run stuff in.


I like it a lot.

My usual setup is Chrome dev tools and some node servers fired up, so it's quite easy to replicate.

But I still prefer to just move my laptop around.


Thank you for ruining the mandalorian. I just started season 2.


So you're telling me that the next GitHub outage could take out my dev environment and give me an afternoon off? Time to convince management that we need to switch to Codespaces!


It's funny timing to announce this the day after a major outage that took down practically the entire platform


It’s very likely that pre-rollout operations for this feature are actually what caused that outage, though.


You say “though” as if it absolves them. If they break basic requirements of a Git host like fetch and push while working on this extra feature that’s on them.


They're not announcing a new feature with this blog post, are they?


They released codespaces to enterprise customers


Mainframe's down again boss!


this gave me a good laugh, thank you!


There are different risks, but there is not necessarily more risk. Does Codespaces have a lower or higher availability risk than your current environment?


My current environment is my laptop, which I would also need to access codespaces.

So by extension, codespaces is more risky than my current environment.


I’ve had corporate laptops get BSOD’d because a sysadmin pushed a bad GPO rule that worked fine on 99% of their clients but butchered the developer laptops.

Another fun one was when an org was rolling out CarbonBlack on all endpoints and decided to block all Java, because apparently Java in the web browser is a security threat. They blocked 30 devs' Java runtimes as a result, so there was an entire afternoon wasted for those guys.

To a corporation the risks of GitHub being down might be ok. Plus this will drastically speed up onboarding devs to an existing project.


If you've never had these issues, this product is probably not for you.

The question is, how much time do you spend on dev environment issues per year?

When I was at one of the FAANGs, our dev environment took about 1hr to install, and you had to redo it every time you switched platform/version (about every week).

We spent ~50 hours per year managing our local dev environment. Probably more in reality.


> The question is, how much time do you spend on dev environment issues per year?

Definitely not little. But at the same time, I usually learn a lot about how stuff works during that. Which I like to think makes me better at helping other people debug weird shit.


I would frame it as: your current dev env is your specific laptop, while Codespaces works from any laptop.

It's certainly reasonable to think that a SaaS can approach the availability of a single piece of hardware.

Furthermore, it appears that engineering github/github to work in codespaces helped their local provisioning as well.


> My current environment is my laptop, which I would also need to access codespaces.

> So by extension, codespaces is more risky than my current environment.

That only looks at the hardware. There are many more pieces to the puzzle.


In the context of my comment about having a free afternoon off - having my local environment messed up means that I'll need to spend my afternoon fixing it instead, whereas having Codespaces go down is essentially an announcement to the entire engineering team to take the afternoon off, as there's nothing we can do.


Current place of employment uses a similar system, power outages are quite pleasant.


I've developed on a remote server for about 8 years now. It started when I was a contractor and my machine was simply too slow to run the project I was assigned. I did not have the money for a new laptop, but I could afford the ~$55/month for a dedicated server with 32GB and 4 cores. I have worked that way ever since. I've been fortunate enough to work at companies that run their own VM infrastructure which allow me to work this way. And as someone who likes to work in different places, like the park, being able to download Docker images while on a hotspot without it going through my data plan is amazing.


I recently set up a home Linux server and I've been doing my personal projects on it via VSCode's remote development (from my MacBook and my Windows desktop). The server isn't actually as powerful as those machines, but the convenience of having a single env regardless of client has still been fantastic (not to mention getting all the Linux niceties despite working from Windows).

Doing it in the cloud probably carries some risks for a business that need to be factored in, but I'm sold on the thin-client dev workflow. I'm wishing I could do it at my day job so my laptop stops screaming at me from all the Docker containers.


This mirrors my experience. It is nice not really having to care what type of machine I am given by my employer since my environment is going to feel exactly the same regardless.

And who knows, maybe since you have your server running headless it is effectively on par with your laptops. These days most of the CPU cycles on my laptop are spent on Slack or Chrome!


Hadn't thought about that angle, but yeah, I guess it really is splitting the load


Agreed, this setup is super nice for personal work! Adding a raspberry pi as a jump box with wake on lan saves some money on electricity too. I've been using that with a big desktop computer instead of a server and it pretty much works for remote development while out of the house.


Yeah, I'm using an older desktop as my server.

I'd like to set it up for out-of-house stuff, I just haven't gotten around to messing with port forwarding, static IP, etc, not to mention guarding against all the potential security issues


A way that I've found to do this securely is to have a small, constantly running cloud server (~$5/month) with OpenVPN on it. Then have both your laptop and your home server connect and join the VPN. If it's configured properly they'll be able to communicate with each other.
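
As a concrete sketch of that hub-and-spoke idea (using WireGuard rather than OpenVPN, with placeholder keys and addresses), the cheap VPS acts as the hub:

```ini
# /etc/wireguard/wg0.conf on the cheap VPS (the hub); illustrative only
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <hub-private-key>

# Peer: laptop
[Peer]
PublicKey = <laptop-public-key>
AllowedIPs = 10.0.0.2/32

# Peer: home server
[Peer]
PublicKey = <server-public-key>
AllowedIPs = 10.0.0.3/32
```

The laptop and home server each configure the VPS as their peer with PersistentKeepalive set, so both hold a tunnel open and can reach each other at 10.0.0.2/10.0.0.3 even when both sit behind NAT.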


How are you accessing the box away from your local network? Just exposing the box to the internet via your router or using something like Tailscale?


Tbh I haven't gotten that far. Of course, I also haven't really had the need yet with covid times


I've been doing something similar for the past few years. I have a development machine running tmux + vim that I can connect to via ssh / mosh. Biggest issue I've run into was high latency while in Europe (since my box is in the US), but otherwise it works great, even on flaky connections. The ability to download / build large docker images with a beefy computer is a very nice feature as you mentioned.


If you work from a park how do you manage your latency/connectivity to a remote server? It must get annoying fast when a pigeon flies over causing your hotspot to cut out.


I think the point is that a small random drop in latency affects the local connection, but not the remote server. So, if your SSH connection is a little flakey for a minute, that’s fine. The remote server is itself stable. It is also likely connected to a much bigger pipe, so pulling in a remote container is much faster than if you were doing the same thing from your laptop in the park.

If you’re worried about your SSH connection being stable, mosh is another option.


Exactly this. I used a combination of mosh, byobu, GNU screen, and vim. These days I use VS Code's remote development setup more often.


Mosh sounds interesting! Thanks for mentioning!


On the front page right now: https://news.ycombinator.com/item?id=28150287 =)


If latency is high/nondeterministic every time you hit refresh in your browser to see your dev changes the delay gets compounded, productivity suffers and frustration intensifies.


Files are stored in memory locally so there’s no network trip when editing, only on save, and it’s kilobytes per edit in the worst case


He could use something like GNU screen, or tmux, or his server could use something like XRDP or even RDP to continue when his connection goes out.


Hah! Thankfully that doesn't happen too often. Worst thing that happens is I forget to turn off my hotspot when I get home, drop into a Zoom meeting on it, and use all my 4G data by accident.


I believe he's using the word "work" very loosely here.


I don't know what you mean by this. I've often worked in the park so I can be around my kids playing and get some more glimpses of them growing up than I otherwise would in an office. Plenty of commits have been made at a picnic table.


I mean your work suffers when you're distracted by pigeons and small children.


I've never been one to go 8 hours straight. The refresher pays off in the long run. Not to say my way of working is any better, but things get done and I'm more satisfied with my work-life balance.

And with us soon going back to home schooling (thanks delta...) children happily playing outside is much less distracting than pent up children yelling inside. And mama can't do it all.


eh let it suffer

even my (and many devs') "suffering" work is worth good money - or at least our employers continue to think so :)


> remote server for about 8 years now

> I did not have the money for a new laptop

> I could afford the ~55/month for a dedicated server with 32GB and 4cores

Umm... you spent $55 × 12 × 8 = ~$5,000 over 8 years. You could have bought 3 top-specced laptops. A good $1500 laptop will last at least 2-3 years. You would also get some resale value out of your laptop when you upgrade.


It hasn't been the same remote server for 8 years :) I don't pay for a server like that anymore now that I'm employed full time and one is provided for me.

And I was barely coming out of homelessness at the time. 1500 doesn't just pop out of nowhere.

You did just remind me of the time when I was contracting for that company, they flew me out to California but I didn't have enough money on my card to cover the incidentals fee at the hotel they booked for me. Thankfully an employee came through for me. That was pretty embarrassing.


But the thing is, if I have $60 extra money on my account RIGHT NOW, how long would I need to save to get a laptop?

I can get a $55/mo dedicated server right now and code for 30 days straight and get paid. Or I can save for X months for a laptop.

Also the server price will most likely go down over the years or I could get a beefier one for the same money. The laptop will depreciate every day.


I'm not sure why you're phrasing that math as if it somehow shows it was a bad idea. It is in fact possible to be able to afford a small monthly cost, but not have enough in the bank to immediately drop ~$1500 on a laptop, especially if you're just starting your career.

According to you they paid roughly the same amount as it would have cost to buy laptops and upgrade them relatively frequently, but amortized over much smaller monthly installments.

And their system would have been elastically upgradeable.

On top of that, holding on to money and spending it later is generally better than spending it all now, since $X today is worth more than $X in the future (inflation, lost ability to invest).


I've had this vision for over a decade and it is awesome to see we are finally making measurable progress towards it.

What I'd really like to see is a WebRTC type interface that overlays this to provide some kind of collaborative environment on top of it. I'm thinking of a built in voice chat as well as multi-author editing within the same codespace. Imagine it like pair programming. I'm in FileA.ts and I can see a little avatar icon of another dev in FileB.ts (maybe surfaced through the Explorer and/or the file tab bar). The voice chat could even be location aware so if we are in the same file/package then we are able to chat. I'd go so far as to allow multiple editors to affect the same file within the codespace so I could see the other dev edit the file in realtime (like coderpad or other interview tools).

I imagine a workflow where a feature/ticket is mapped to a codespace. One or more devs are assigned the codespace and they work together until the feature is completed. Incremental changes are stored up until a "build" action is submitted, resulting in the current contents of the codespace being built and deployed to an internally/publicly accessible endpoint. This might also result in a "snapshot" which could represent a commit to a sandboxed repo. The codespace goes away once the feature is completed resulting in a single check in to a master branch which would go through regular code review type processes (probably a squashed version of their local snapshot's changes).


See Engelbart’s mother of all demos.

I think he had everything in your first couple paragraphs working in 1968!

https://www.youtube.com/watch?v=B6rKUf9DWRI


Did he really? Classic example of demo driven development


> _I'm thinking of a built in voice chat as well as multi-author editing within the same Codespace._

You should try Live Share! VS Code Live Share enables you to collaboratively edit and debug with others in real time, regardless what programming languages you're using or app types you're building. It also supports voice chat:

https://marketplace.visualstudio.com/items?itemName=MS-vsliv...


This isn't exactly what I'm proposing since it seems to be based around sharing my code. I may be misunderstanding the documentation and examples however. Live Share appears to be me inviting people to connect to my personal session.

I see a codespace as a shared destination. It lives independently of any individual. I can hop in and out whenever I want and it continues to live.

To expand on this, it reminds me of how way back in the day we used to use MSN Messenger (and then Skype) to facilitate work chat. The model there was a contact list, a list of individuals I could reach out to and connect. There were group chats as well, but the primary mental model was closer to a phone call. Then Slack came and changed the model so that channels were the default, shared spaces were preferred over personal chats or ad-hoc groups. I don't want the default to be "I have my session and may invite others to join". I want the default to be "there is a shared workspace I connect to that others might be currently active in".

You might even imagine an interface similar to Slack channels (or Teams if that is your world). I could have multiple codespaces as channels that I can switch between. Within each codespace there may be several devs active. I can jump in and out. You could even set up permissions like "read only" spaces or whatever.


There is 'Discussions' for this, which you can enable per repo on GitHub, and have a sort of chat / forum as part of the repo



Isn't that what repl.it is? (I've never actually used it, so forgive me if I got it wrong)


I thought that Cloud9 IDE had that.


> "So we moved to 32 core, 64 GB RAM VMs. By changing a single line of configuration, we upgraded every engineer’s machine."

On GitHub, that instance type is $2.88/hour or $2,073 monthly per developer for a single instance.

(Granted, that's running 24/7 but still - wow, that's expensive for a single instance)
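
Quick sanity check on those numbers (my own arithmetic; the 40h/week case assumes the codespace auto-suspends outside working hours):

```python
# Cost of one 32-core Codespaces instance at the quoted $2.88/hour.
HOURLY_RATE = 2.88  # USD/hour

def monthly_cost(hours_per_month: float, rate: float = HOURLY_RATE) -> float:
    """USD per month for one instance running the given number of hours."""
    return round(hours_per_month * rate, 2)

always_on = monthly_cost(24 * 30)        # 24/7: ~$2,074/month
work_hours = monthly_cost(40 * 52 / 12)  # ~40h/week: ~$499/month
```

So the 24/7 figure matches the ~$2,073 above, and aggressive auto-suspend would cut it to roughly a quarter of that.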


That pricing is a ~70% markup over standard Azure rates.

Companies typically buy laptops with an assumed 3 year lifespan; so, let's assume a high-end $3000 laptop, that would be ~$83/month. Of course, you need a computer to access Github Codespaces, so this isn't saving all of that money. Maybe companies can cut some costs by buying cheaper laptops?

Or if you want a more apples-to-apples comparison: a Lenovo ThinkStation P620 runs ~$2800 for 32 threads and 64GB of memory. DIY would also land somewhere around that $3K mark; a Threadripper 2950x (32 threads) is $1100; 64GB of memory is around $250.

This, of course, is the most expensive option they have; the more standard option would be 4/8; for which you would pay ~$85/mo (all the time, normal work hours), for the privilege of a machine that's far less powerful than any laptop any of our developers have. For god's sake, my M1 MacBook Air is an 8/16, with four of those cores far more powerful than any three year old Xeon Azure has running. And it was like $1400.

I understand maintaining dev machines isn't the easiest thing in the world. I have never once worked anywhere where it was such a problem that the company would justify tens of thousands of dollars in spend. Because it kind of is one-or-the-other; time invested in making Github Codespaces work for your company is time not invested in maintaining local dev environments. And Github's ideal end-state is companies totally rely on Codespaces, so the local dev environments languish.

It's not worth it.


Even with Codespaces your company will still have to buy you a laptop to run the corporate VPN, security/antivirus software, background screen recording, and Zoom/video calls. All of this still needs a powerful, reliable business-grade laptop.

The cost difference between a laptop meant for remote development and a regular dev laptop is less than $1k ($300 for a CPU upgrade, $200 for RAM, $200 for an SSD upgrade).


:sigh: this is too close to home. Nothing like malfunctioning antivirus software + background screen recording to eat all your local machine's RAM and CPU cycles.


I think a lot of these things are less important by the day though. Are VPNs or advanced security software even necessary if everything is in Notion/Google Docs and something like Codespaces is being used?


End-device security is always important; there would be nothing to stop the compromise and lift of, for example, cookies stored in the browser, or a keylogger to grab passwords.


I don’t think they’re trying to save on hardware costs here. This is to prevent any kind of friction and wasted time when switching between branches/environment.

If your dev costs you $100/h, you only have to save them +/- 5 hours of faffing around with branches/dependencies every month to make it worth it (assuming an 8-hour workday).


>I understand maintaining dev machines isn't the easiest thing in the world. I have never once worked anywhere where it was such a problem that the company would justify tens of thousands of dollars in spend.

Look, this is yet another product targeting naive VC funded startups. Milking the cows until they bleed.


It's not worth it right now. The economies of scale could add up to make it break even, if not be cheaper.

Big could, of course. But considering it takes new devs at my company ~3 business days to get up and running (and we're a React shop), it could be worthwhile.


The kinds of bills that some large company development environments end up running are surprisingly high, but those companies also consider the opportunity cost of lost productivity on application team developers to be very high. That's how you find companies that decide to hire very expensive teams to, say, add types to a dynamic language, rewrite a standard library, invest in massive parallelism to run thousands of CPU-hours of integration tests in 15 minutes, or many other projects like that which no normal company would ever consider. If it gets some project team to cut delivery times by 3 weeks, it's all considered worthwhile.

Where we run into trouble is when we try to copy the shape of those big companies, and the kind of initiatives they fund, into startups, or mid-range companies. In quite a few ways, those top companies are better at development, but the base costs are only worthwhile if you have a money fountain of real revenue that grows far faster than your dev expenses. Trying to copy them with very different conditions is going to lead to tough problems, possibly company killing, and it's the kind of imitation we see all over the industry.

So yes, someone like Github is going to have eye popping expenses, because the alternatives to avoid those expenses just don't make sense to them. If you are not working in a multi billion dollar company, their practices can be interesting, but it makes as much sense to emulate them as it'd make for you to hire an entire Formula 1 pit crew to keep your commuter car in good shape.


This was the thing that jumped out at me! I'm not sure how the VSCode application interacts with these to potentially put them to sleep, but dang that is $$$ for a dev machine at the scale github is operating at.

I guess lucky for them to be owned by Microsoft now and so that's just a cost of doing business. I have a hard time believing they could have even considered an approach like this if they were still paying for hardware out of pocket.

Kudos to the team for figuring it out though.


I assume they're getting quite a sizable discount being owned by Microsoft.

I do wonder how many VM's they have running though at different points of the day and how many spares they have prepared.


Something tells me that a Microsoft company isn’t paying Azure RRP prices


If spinning up/down is fast+smooth (e.g., no reconfiguring stuff each time), you could drop that down to ~40h/week taking you to under $500/month. Still pricey, but combined with not having to pay the ~70% markup another commenter mentioned, it could be under $2,000/year.


Add on top of that, spinning down should be automatic. I've worked on bespoke platforms like this where it wasn't. Devs would often forget and/or be more willing to run long running tasks after work hours. The net result was pretty much having to assume an instance would be online 24x7 unless there were at least gentle pokes to disable instances.


While that does sound quite expensive, companies typically have a limited budget for employee laptops, but appear to have almost unlimited budget for cloud hosted testing environments, which usually have many instances (and services like RDS / Redis etc)


Our company is the opposite, I have a max-specced laptop and we are actively trying to reduce the massive cost of cloud computing.

A powerful laptop is a drop in the ocean compared to how much cloud VMs cost per developer.


And that's on top of the price for every engineer's machine that is used to access the instance...

Sure, you might not need to buy machines that are as beefy (although I bet you'll still need a decent machine to run all your chrome/electron windows) but enterprise maintenance on these machines is still a significant cost.

I definitely don't think one can look at this from the 'saving money' angle.


I feel like all of the problems in the blogpost are solvable without moving everyone to this crazy web development environment. To quote one of the proponents:

> I do solemnly swear that never again will my CPU have to compile ruby from source.

...yeah, why was that even happening? Why not distribute pre-built artifacts? I feel like the whole docker ecosystem only exists because people forgot how to distribute software and/or python/ruby and friends make deployment so hard.


Fighting configuration complexity with another abstraction layer... enabling the devs to pivot to pushing even more brittle code than ever before.


> Why not distribute pre-built artifacts?

Back in the day (ahem), I’d have solved the problem by building new .deb or .rpm packages and letting the OS manage the updates for us. This is a lot harder for languages like node/js, ruby, python, etc… Dealing with system-level packages vs local packages is hard, so almost all management happens out-of-band. Older package versions were/are still an issue for many distro-managed libraries. And different projects have different requirements.

In the end, I’m largely okay with this… I’d rather have my choice of local OS and be able to work in my client of choice. I see the rise of containers as more of a response to heavyweight VMs as a dev environment.

But it’s good to remember — these are not unique problems and we’ve solved them all before.


They say in the post that everyone runs OS X, so incompatible local environments should not be a problem.


mumble mumble nix cough


I feel I'm swimming against the current of contemporary dev practices, but I actually believe trying to run as much of your app and development locally as possible is important. Not exclusively, but at least from time to time I think shutting off your network connection and seeing what happens is important. It flushes out all sorts of assumptions, especially around dependencies, that you may not have realized you had. If you at least occasionally code in a local, non-networked environment, I think you end up with more robust code.


My job now has no ability to run our code locally, and it's terrible. I work on APIs, which at the end of the day are an HTTP server making and responding to requests. When I make a change it's impossible to test locally.

The same issue applies to serverless technologies (at least the last time I used them). It was extremely difficult to run serverless services locally, debug issues, etc.

Having the cloud is great, but nothing will beat being able to run things locally.

It just gives you so much more control. You're not locked into a specific editor or tool chain, you can integrate with whatever other local tools might be specific to you, etc

This is a great feature, something to complement existing development approaches, but I'm always going to prefer my own setup where I have everything configured exactly how I like it.


> It just gives you so much more control. You're not locked into a specific editor or tool chain, you can integrate with whatever other local tools might be specific to you, etc

My experience aligns very strongly with this. A local stack lets you add debug/print statements anywhere. There is no "magic", you can see the thing run on your machine. The freedom is unmatched.


I can strongly second this. I currently work on a serverless project and running/debugging code locally is one of the biggest pain points. Over time I've gotten better at mocking out a bunch of stuff to try and get the code I care about running, but it's a really rough process.

There are of course risks that the local setup process will differ from the prod environment in a serious way. Even with good automated test coverage I feel less confident about the code I commit here than most other places I've worked.


I've used Serverless and AWS Chalice to build Lambdas, and both have the option to dev locally. It would be a nightmare to have to wait for a deploy every time.


I would love it if you could expand on this. I currently work with AWS Lambdas a lot and have tried a couple options for running them locally but never got to a point where it was worth the effort. For example I use PyCharm and tried getting the aws serverless plugin working with it for running Lambdas and State Machines but just kept running into different problems.

There must be a better approach than setting up a bunch of mocks and running the core logic in a scratch file.


Agreed, but for most companies, you're not going to be able to run the whole stack locally.


I think the main use case for this style of development is web apps.


As much as I’d like to try this, the blog post reads as « our dev environment was so heavy and our git repository so old that we put everything in the cloud, even your dev env ».

If you’re building a new startup, you don’t need this. Use docker-compose.

The latest full-stack project I built requires a single command: npm run dev. It launches docker-compose (PostgreSQL), Next.js and ngrok (expose/webhooks).
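
For illustration, a hypothetical package.json wiring for that kind of one-command setup (script names, ports, and the concurrently dependency are my own choices, not the poster's actual config):

```json
{
  "scripts": {
    "db": "docker-compose up -d",
    "tunnel": "ngrok http 3000",
    "dev": "npm run db && concurrently \"next dev\" \"npm run tunnel\""
  },
  "devDependencies": {
    "concurrently": "^6.2.0"
  }
}
```

docker-compose brings up PostgreSQL in the background first, then concurrently runs the dev server and the tunnel side by side.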

And it takes 10 minutes to set up.

Not everybody has the requirements of GitHub!

Still, I’m sure I’ll use this in a year :)


Or Nix or Guix


The amount of time I've seen lost to engineers spinning up (or fixing) development environments is staggering. This kind of thing is going to save SO much money.

(I've run a team responsible for the tooling for a company's development environments in the past so this hits really close to home.)


Alternatively, they could have spent this money on fixing the dev stack. Maybe this was more politically viable.


What do you mean by fixing the dev stack?


I believe the OP is referring to a "local stack", i.e. being able to run a single command, e.g. `make up` (if you're using makefiles), to spawn a local environment and run a "local" version of the site, i.e. serving on localhost.

Having such scripts enables one to quickly setup a local environment to make changes and test them before submitting a Pull Request.
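As a hypothetical sketch of what that looks like (the docker-compose usage and target names are assumptions, not taken from any particular project):

```makefile
.PHONY: up down logs

## Build images and start the whole local stack in the background.
up:
	docker-compose up --build -d
	@echo "Dev site serving on http://localhost:3000"

## Tear the local stack down.
down:
	docker-compose down

## Tail logs from all running services.
logs:
	docker-compose logs -f
```

The point is that `make up` is the entire onboarding doc: a new hire clones the repo, runs one command, and has a working environment.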


I've tried fixing the local dev stack - all of the scripts and Docker containers in the world still can't predict the ways in which something might break when it's running on 100+ different laptops with different developers with different unique preferences for how they like to get things done.

My expectation is that if you roll out something like Codespaces a small minority of your engineers will resist because it doesn't fit exactly how they want to work.

Meanwhile everyone else will get back several hours of productivity per week.


If the solution is to force everyone to do everything the same way... that's a political solution, not a technical one. That solution could just as easily have been forcing everyone to do everything the same way on their local machines. Maybe it's more politically expedient to move everyone to a completely different cloud stack than to fix/standardize the local one.

> My expectation is that if you roll out something like Codespaces a small minority of your engineers will resist because it doesn't fit exactly how they want to work.

I'd expect internal support to be highly dependent on how terrible the alternative is. Beyond personal preferences, I'm really apprehensive about coupling all dev work to a single remote cloud provider. What happens if you lose internet access, or when they inevitably go offline? In the latter case, literally all work stops.


I work at an F50 that has a similar home-grown solution. The nice thing is team members can customize their local machines however they want! Then for the actual dev env, it's standard.

Works really really well when you allow your employees to explore and do what they want locally. Then when things break or go wrong (either I tweak something, or an incompatible Apple update lands) - I can just reset to the same dev env as everyone else.

It's fantastic. Spin up and throw away a dev env that is so much more powerful than my local machine could be (100+ GB RAM machines are a click away).


>I've tried fixing the local dev stack - all of the scripts and Docker containers in the world still can't predict the ways in which something might break when it's running on 100+ different laptops with different developers with different unique preferences for how they like to get things done.

Running on a remote environment may sidestep the issues of running on local hardware (poor virtualization, resource limitations, etc) but you'll have to standardize and automate the generation and management of your dev environments either way. Otherwise, you'll have the same problems in your remote space as you currently do locally, plus a few more possible points of failure (low latency network reqs, permissions differences, etc).


If I understand correctly, this is similar to Google's CitC (clients in the cloud) [1].

According to the article, Codespaces supports non-IDE users by allowing ssh. CitC supports non-IDE users with a network file system. This seems preferable - the editor runs on your local machine with low latency, but you still run tests on the cloud machine.

I wonder if Github supports this workflow, e.g. by configuring sshd to allow sshfs/sftp access.

[1] https://cacm.acm.org/magazines/2016/7/204032-why-google-stor...


The problem I see with a remote filesystem is that you’ll still need tools for editor integrations - language servers, formatters, linters, etc.

Sure, you can install those locally too, but by then you are losing many of the benefits of a cloud environment.


Go try VSCode remote. These are all solved problems. The extensions run headlessly on the server, and the client does little more than render the interface. Switching between machines is seamless with settings sync, workspace recommended extensions, etc.

You won’t find many people who started using VSC remote dev and then abandoned it to go back to local. Once you’ve got it working, which is a pretty easy feat, it’s a no brainer, obvious win.


This is what I've started doing, and I'm definitely a life-long convert.

I still experience a few glitches (extensions randomly stop working, Intellisense sometimes dies, and auto-detecting architecture for launching mobile apps sometimes fails) but I'm sure those will be ironed out.


Yes, I am aware. I was responding to this bit:

> This seems preferable - the editor runs on your local machine with low latency, but you still run tests on the cloud machine.


Ah I see – apologies, I read your comment too quickly.

Something I miss with remote development is being able to use graphical Git clients like Sublime Merge on my working directory. I don’t want to setup SSHFS just to make that work, and I’m sure performance wouldn’t be great anyway.



Ugh, now not only do you not own your code (ahem, Copilot, cough), you also don't own the tools you use to develop it. The direction the web is taking is worrisome.

The issue is not so much senior devs, but new devs. If they start off with things like that, there's so much magic under the hood, they won't understand how anything works. They don't understand they don't own shit until it's too late.

I remember when I first saw the JAM stack, basically they use external services for *EVERYTHING*, claiming it's "The future™". They don't even own the data/database.

I'd rather make things better for users, rather than companies. Stallman's essay [The Right to Read](https://www.gnu.org/philosophy/right-to-read.html) gets more and more scary.


It's funny because we told people not to trust big companies, they tell us we are paranoid, they do it, they pay the price for it, then they say we were right, then they do it again.

MS changed. Google changed. Apple changed. Why does one expect GitHub to never turn evil? Haven't we learned enough from history?

Don't let them control your entire stack! Don't let anybody control the entirety of anything.

There is a difference this time though. This time, most of us will say it only once.


People act in their own interests, usually prioritizing the medium-term (eg. evaluating every few years/every decade) over the long-term. These products made by the big companies provide immense value that promises to make your life better tomorrow, so most people choose them.


It's over once we don't have control of the compilers. Look at the iPhone.


iPhone is the direction everything is trending.

Fight this bullshit. Call your pro-breakup / anti-trust representatives and give them the reasons this is bad for you and your career. Tell them the narrative and give them ammunition to fight this.

Google, Amazon, Apple, and now Microsoft deserve regulations and/or dissolution.


I don't understand the panic.

This isn't a black-box system, it's github. Owning your artifacts is as simple as setting up a job to automatically clone the repo a couple times a day (assuming your entire company is working through the cloud interface and not a single person already has a local checkout).

Lots of companies put their entire databases in AWS - owned by a company that's already evil, no speculation required - where the egress charges make offsite backups a challenge. And western civilization hasn't collapsed yet.

If GitHub changes its policies one day or shuts down the service... just push to a new repo and switch over to regular workstations.
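A minimal sketch of that kind of backup job, assuming a cron host with git installed (the function name, paths, and schedule are placeholders):

```shell
#!/bin/sh
# backup_repo REPO_URL BACKUP_DIR
# Mirror-clones the repo on the first run; on later runs, fetches all
# refs into the existing mirror (pruning branches deleted upstream).
backup_repo() {
    repo_url="$1"
    backup_dir="$2"
    if [ -d "$backup_dir" ]; then
        # Existing mirror: refresh every branch, tag, and note.
        git -C "$backup_dir" remote update --prune >/dev/null 2>&1
    else
        # First run: --mirror copies all refs, not just the default branch.
        git clone --quiet --mirror "$repo_url" "$backup_dir"
    fi
}

# Hypothetical cron entry, twice a day:
#   0 */12 * * * /usr/local/bin/backup-repo.sh https://github.com/org/repo.git /backups/repo.git
```

Because the mirror is a full clone, "switching over to regular workstations" is just `git clone /backups/repo.git` plus pointing the remote at a new host.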


> This isn't a black-box system, it's github. Owning your artifacts is as simple as setting up a job to automatically clone the repo a couple times a day (assuming your entire company is working through the cloud interface and not a single person already has a local checkout).

Isn't the use-case they're selling in the article approximately "the development environment setup is so fragile that it doesn't work on lots of machines?" I'd be willing to wager that after a few years of the whole team using this, the fragility will be much worse, so moving away will be that much harder.


Fragile in the sense that it might take you hours or days to get it all set up just right. It would be a disruption, like any IT problem, but it's not going to threaten an entire business.


I might be wrong, but it seems to be a combination of Visual Studio Code + container plugin + Docker. So you would need to install Visual Studio Code and a few plugins, and spin up a Docker container to replicate that environment on the local machine.
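That matches my understanding. The local equivalent is VS Code's container support reading a `.devcontainer/devcontainer.json` from the repo; a minimal sketch (the image name, extension ID, and port are illustrative, not from the article):

```json
{
  "name": "my-project",
  "image": "mcr.microsoft.com/vscode/devcontainers/base:ubuntu",
  "extensions": ["dbaeumer.vscode-eslint"],
  "forwardPorts": [3000],
  "postCreateCommand": "npm install"
}
```

With that file checked in, "Reopen in Container" locally and opening a Codespace in the cloud both build the same environment from the same definition.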


> they pay the price for it, then they say we were right

What are you referring to?


GitHub was already turning evil, and being bought by Microsoft sealed the deal.


Given the whole outrage over Apple taking advantage of customers' willingness to hand over control of their phones to them, I hope folks can see the parallels here and reject this.

At this point Microsoft's strategy around GitHub, VS Code, Copilot, etc. should be pretty obvious, and we should all be running for the hills.


> At this point Microsoft's strategy around GitHub, VS Code, etc. should be pretty obvious

As someone not following this closely, what would that strategy be?



What are they embracing, how are they extending it, and how do they plan to extinguish it?

My understanding of EEE is that it involves taking some competitive product/standard, making a cool MS version of it, and then killing off the standard version - e.g. implementing a barely sufficient POSIX layer on Windows, getting customers who have "POSIX" as a purchasing requirement to switch, and then getting them onto the NT API.

Most of the examples on that wiki page have been unsuccessful (IE trying to EEE HTML, Outlook trying to EEE SMTP, etc.), because they weren't able to get enough people to rely on the extended version and stop using the standard thing.

What is the analogy here? Microsoft wants to embrace and extend... compiling software? and will kill off... existing ways of compiling software? How will this work?


It's a slightly different flavor of the same thing.

VS Code is an amazing tool that's getting huge adoption because of how awesome it is, and how open source and community centric it is, etc. They've gotten a lot of mindshare and dev love, that's the embrace bit.

The next step is the set of closed-source add-ons. Have you noticed that a lot of the new VS Code features now live in add-ons under a different license? This includes the new remote dev and Python tooling. Still free to use and awesome tools, of course, but fully MS-exclusive and MS-controlled. That's the extend.

I don't know what the longer-term plan is. I'm hoping that it won't lead to an extinguish. But if that's what they want, then they could e.g. cripple the open source version, moving all their dev effort into a closed-source, Azure/Github only web environment. Who knows?


Gotta love how the goal post just keeps moving forward.


I don't see what post was moved; this is exactly EEE: adopt an open strategy, extend it with proprietary extensions, don't let the competition use those extensions, and extinguish the competition because they can't interface with your proprietary extensions, all while having created an expectation from users that those extensions be available.


More like you never know when they’ll initiate the final part of their strategy.

Half the dev world is now used to VS code. Many junior developers have never used anything else. Whatever they do can have a big impact.


>> What is the analogy here? Microsoft wants to embrace and extend... compiling software? and will kill off... existing ways of compiling software? How will this work?

Microsoft is embracing free development tools that compete with purchased tools they sold. SourceSafe vs Git, Eclipse vs Visual Studio... those are terrible examples but you get what I mean. So you make a free tier of VS, you buy GitHub, etc.

Extend.. you try to make your tools as ubiquitous as possible. You convince hobbyists and students that they should use your platform because it is more powerful /useful.. it will help you get a job later. This part of the strategy seems really hard because there are so many development tools that are open source, and lots of companies like IBM, Oracle, Google want to keep one of their own safely under their control.

Extinguish. You add features that only really work on your platform, or work better. Think about trying to develop for Android not on Android Studio. I haven't used VS for a Windows project in a long time but I assume it has similar advantages. Or Intel's compilers used to be the first to work with new instructions.

Ideally people would "have" to use your platform and you can start taxing them for that, either with fees or requirements that they implement your initiatives. Think about Google pushing Kotlin... Android Studio kind of defaults to that now. Or how it supports Firebase, or defaults new projects in segments. I am not mad about it or anything; it may just be a good idea. But it shows how, if you control the dominant tech platform, you can support your initiatives.

This isn't something that keeps me awake at night because a number of tech companies are working to promote their own semi-proprietary or open development platforms. But the fact that so many go to the trouble to do that shows it is a concern.


I understand how EEE is supposed to work. My claim is that it doesn't actually work, at least not for this use case.

So for my argument, SourceSafe vs. Git is a great example. :-) You simply can't keep up with a proprietary tool that does its own stuff. You have to use Git. Microsoft tried having a custom version of Git for Windows OS development, and decided that they weren't even able to get themselves to fall for the trap - they ended up getting involved in upstream Git development and turning all their extensions into standard features.

There are definitely cases where if you control the platform you can control what people are doing. I'm arguing that Microsoft has no actual effective control of the platform here.


EEE doesn't have to work to be considered anti-competitive. The intent is nefarious, it's trying to attract people by offering a veneer of good grace and good intentions, only to capitalize on it once you've attracted a strong enough user base that you can get away with closing things down, squeezing things in, and calling the shots.

So it'd be great if their proprietary remote extensions fail and they are forced to open-source them and make them a standard, but clearly the intent right now is that they don't fail, which gives them leverage over their competitors.


They first embraced web standards by building a web-based open source IDE called VSCode to get people to convert to it over using Visual Studio.

Now that they've attracted people to use it, they embraced Linux containers and offered managed dev containers as a service on top of it, called Codespaces, running entirely on Azure.

At this point, it's all good, this is how a company profits on open source and standards.

But now they are extinguishing by adding proprietary remote extensions, see https://code.visualstudio.com/docs/remote/faq#_why-arent-the...

Those extensions being proprietary will lock you into VSCode and their own containers. As a user you might still use them for free, though they may add premium features in the future; see: https://code.visualstudio.com/docs/remote/faq#_will-you-char...

But other competitors will not be allowed to use them to compete, and they can't be bundled in anything else, so Sublime wouldn't be able to bundle them in or make use of the remoting features.

The remote extensions are key to the attractiveness of VSCode for use with cloud managed containers, so it's a pretty big differentiator.

On top of that, they purchased GitHub, where most open source software lives, and are now trying to move development of those open source projects onto their own proprietary extensions by offering contributors a way to quickly contribute using VSCode and GitHub Codespaces.

Sounds like EEE to me. Maybe it will fail to extinguish, but my guess is we will see a lot of people slowly relying more and more on proprietary extensions owned by Microsoft to the point where if you work for certain companies you might not have a choice but to use VSCode.


I think the 3rd E is more likely "extort" these days. They won't extinguish it, but you'll be paying per user per month forever and the prices will always go up, up, up.


Some could even call it Embrace, Extend, Extinguish...

hmmmm.......


The same thing was said about every programming abstraction to ever be introduced. How do we expect people to build memory-efficient applications when they become reliant on a garbage collector?


I've never seen a company collapse because their garbage collector posted an Our Incredible Journey™ blog post and then shut down a week later.

I have seen multiple companies collapse because the proprietary Backend-as-a-Service they built their sand castle on top of pulled a vanishing act, however.


There's plenty of companies that wasted a ton of resources trying to roll their own version of things that should be services. Remember when Uber built their own chat solution? [0]

> With operations in over 620 cities, it was paramount for us to identify a chat solution that would enable Uber employees to reliably communicate on desktop and mobile regardless of where they were in the world.

Did Uber really need their own chat solution? Are they not better served by a SaaS product, one of 5 that 90%+ of companies are using? Are they still using uChat today? (honest question, I don't know)

This is especially dangerous when talking about security-sensitive features. It is highly inadvisable to roll your own crypto.

Lots of companies shut down or fail for lots of reasons. I think a business should focus on what they do best and what's critical to their mission.

[0] https://eng.uber.com/uchat/


> There's plenty of companies that wasted a ton of resources trying to roll their own version of things that should be services. Remember when Uber built their own chat solution? [0]

Sure, there's obviously a big difference between controlling the things that are critical to your business and wasting time outside your core competencies. And part of that difference is the risk profile: if your entire backend disappears, and takes the API your frontend is built around and all of the data with it, you're basically up shit creek. If your chat app of choice disappears, you can sign the company up for a different one tomorrow – it doesn't take 8 months of crunch to "port" your employees, and likely all of your critical business docs weren't in the form of chat history.

> Lots of companies shut down or fail for lots of reasons. I think a business should focus on what they do best and what's critical to their mission.

Agreed. But very few internet companies can claim that the data in their datastore and the entire mechanism by which their product accesses said data is not critical to their business, which is the distinguishing factor here.


Crypto is almost always implemented as a library, and probably ultimately as a kernel module in most cases. There is no reason at all you should need to call out to a remote service to encrypt and decrypt data. That wouldn't even make sense, as it would need to be encrypted before traversing the network to be of any use. And there are plenty of commodity chat applications like Mattermost and RocketChat that you can self-host, again not necessarily needing to use someone else's cloud service but also not need to roll your own application.


Uber grew extremely fast in its initial years, and that resulted in a large number of teams starting NIH projects to "make themselves useful". Uber switched to Slack a while back, in addition to laying off a sizable chunk of the engineering workforce, and it didn't really collapse on itself.

Nowadays, there's a lot more support internally at Uber for buying off-the-shelf solutions instead of trying to reinvent wheels.


I see your point.

But at Uber's scale I can see the business need for a custom chat app. Most third party chat apps are bloated, both the cheap and expensive ones. Plus there's the need for custom integrations.


I used to work at a large bank and they operated perfectly fine with Windows Messenger. As does every other bank in the US, despite having tens if not hundreds of thousands of employees. Some things require more specialized features, like trade tickets (which Bloomberg does), but for chatting, you're not gonna be able to build a better solution than already exists. And if you can, then you should spin that off into its own business like Goldman did with Symphony [0]

The nice thing about being a client is that they support all this stuff and it gets better over time. Also something like Slack is absurdly cheap compared to the usefulness and productivity it provides.

[0] https://en.wikipedia.org/wiki/Symphony_Communication


> but for chatting, you're not gonna be able to build a better solution than already exists. And if you can, then you should spin that off into its own business like Goldman did with Symphony

The implication being that Symphony is a better solution for chatting than already exists / existed.

As a (non-trading) user of the platform, I strongly disagree.

I don't know what the traders think, but I'd hope there's a USP in there for them besides "it's cheaper than a Bloomberg seat."


I am not concerned with Microsoft posting an Our Incredible Journey (TM) blog post within my lifetime, but duly noted.


Microsoft, too, will pull projects, or just completely rewrite them and make them incompatible.

Applications using .NET Framework with WinForms are not just a dependency update and recompile away from being ported to .NET 6. WinForms got a whole new API and porting complex UI effectively boils down to a rewrite. At least if you can. Microsoft doesn't support third-party controls in the WinForms designer of Visual Studio for .NET 5+, so have fun trying out positions and compiling over and over.


Big companies put projects to rest all of the time. There are even whole sites with countdowns and all.


What’s an example of a company collapsing in that way? I actually can’t think of any.


These tend to be early stage startups who went with BaaS to get around resource constraints and couldn't survive the rapid need to exfiltrate their data and build a backend with a gun to their head. Larger companies generally have the survival sense not to sharecrop business critical functionality to such a degree.

Which is to say, you wouldn't have heard of them, because they never got big enough to be known to anyone beyond a handful of clients and employees. I've personally been hired to try to rescue such a company, and was not successful.


> Larger companies generally have the survival sense not to sharecrop business critical functionality to such a degree.

Larger companies have also been known to buy out critical service vendors and in-house them when there was a risk that they might otherwise shut down or pivot; that's less often an option for smaller firms.


Do you consider AWS Lambda and such to be BaaS? I'm pretty new to the field and have been focusing on AWS 'serverless' as the backend for most projects. Would you say this is bad practice overall?


(I'm not heavily experienced with Lambda but) I don't think you're in too bad a position, at least from what I've seen, as there does seem to be a portability story wrt those serverless functions.

The objection here isn't to hosting per-se – most companies aren't in the business of maintaining PostgreSQL and should consider outsourcing that task early on. If your host of choice shuts down, that's a standard service, you can dump and restore your DB and move on. Annoying but NBD.

The danger with jamstack or BaaS or similar approaches is working yourself into a place where a proprietary service API is baked into your product as a critical, fundamental dependency; if and when the host shuts down, you're in extreme trouble. That's what I'd heavily caution against.

(There's an argument that when the proprietary service is run by an MS or AWS or Google "they won't go bankrupt" so there's no risk, but even then those companies shift priorities and shut down services not infrequently).


I don't think it's a bad system to build on, but I would be nervous to have it as the only option in my tool belt. Proprietary cloud services often create vendor lock-in, and the skills you learn eventually become non-transferrable as you get more invested in their system. If you only want to work at serverless companies, or on serverless projects, that may be fine. But, knowing how to SSH into a machine, maneuver around, and how to set up a server will always be valuable


Any company built on a 3rd party API. Many examples on twitter.


The difference is that you don't pay a monthly subscription to use a garbage collector.


Garbage-collection-as-a-service would get a ton of venture capital though.


Oh wow, AR/VR mark/sweep applications! Those VR rooms get messy, who is going to clean them up?


At patent office right now with this. My attorney said will go through the USPO like butter.


There are a ton of companies that help you track cloud spend. While some of it is architectural, most is really about waste: old EC2 image backups, S3 buckets with no lifecycle policies... garbage.


Hahaha cloud-based garbage collection. Made me giggle.


There's two things happening in this post:

1. Development environments are running on Kubernetes, and VS Code is remoting into them. You don't have to pay a monthly subscription for that. VS Code remoting is free-as-in-beer and Kubernetes is free-as-in-speech.

https://code.visualstudio.com/docs/azure/kubernetes

2. The actual Kubernetes hosting is the product "GitHub Codespaces" running on Azure. GitHub certainly isn't paying a monthly fee to themselves for this; they are, technically, self-hosting it. I agree that you probably do want to have the option to run it yourself, but I think you do, given 1.


Microsoft probably does internal billing here, right?


I mean, probably for the actual Azure usage, but that's not different from billing yourself for development VMs or Mac laptops or whatever (i.e., they'd be doing that independent of this approach). I assume the Codespaces team is getting plenty of value out of GitHub dogfooding the product and doesn't need to bill themselves for its use.


This is already a thing if you look at Java (old-school, I know): Oracle vs OpenJDK [1].

I don’t see why that would be a problem though? If I can pay a company to provide improved GC for my infra, where the cost savings made it beneficial, I would do it…

[1] https://superuser.com/a/1365224


have you heard of Azul Zing?

(not entirely serious, but it has a very expensive monthly fee for what is in essence a better GC algorithm)


I bet somewhere you just gave a Lang Developer a monetization idea...

This will be coming to a cloud service near you


The garbage collector should pay _me_ to use it.


> The difference is that you don't pay a monthly subscription to use a garbage collector.

In some cities you do, and they're often controlled by organized crime.

(Whether this comment is a metaphor for SaaS is left as an exercise for the reader.)


I'd argue that it always held true to a certain extent. You don't necessarily need the underlying knowledge to be an efficient programmer in Python, but knowing C or Assembly certainly gives you an edge when it comes to, well, edge cases.

And it isn't all about efficiency. Programming and CS is such a major part of my life that I want to know the deeper parts, and I want others to understand them too. Programmers who do not seek that knowledge I judge for it, perhaps unjustly.


The opening to the second paragraph honestly reads like satire of something that could have been said at any point in the last 50 years.


You're totally right. Javascript and its ilk are clearly superior to the languages and tooling of the past. There's no doubt about that.


> If they start off with things like that, there's so much magic under the hood, they won't understand how anything works. They don't understand they don't own shit until it's too late.

I'm sure this has been said dozens of times throughout the years as we've built more and more tools of abstraction. As software gets more complicated, it's OK if not everyone fully understands every part of the stack. People specialize, and then become experts at their small scope.

Or even if they're not experts at their slice, not every company needs a 10x code wizard who understands the system up and down. Some companies just need someone lightly technical to mess with website templates or write simple scripts and services.


Sometimes I amuse myself by thinking that fantasy games are actually a thinly disguised sci-fi commentary on the direction of modern programming. Take for example the Protoss race in Starcraft or the Sheika tribe in Breath of the Wild - the premise is basically that there's a highly advanced society using the remnants of amazing ancient technology that they themselves no longer understand.


Jonathan Blow - Preventing the Collapse of Civilization https://www.youtube.com/watch?v=pW-SOdj4Kkk


> If they start off with things like that, there's so much magic under the hood, they won't understand how anything works. They don't understand they don't own shit until it's too late.

Bingo. The inevitable rug pull is going to be very, very expensive/catastrophic for a lot of people (though, profitable if you understand the underlying systems).


This may not be what you are referring to, BUT I can say as a .NET developer that it's amazing how little .NET devs know about how the low-level dev-related stuff works, because Visual Studio holds their hand.

I have blown people's minds by showing them how to just call msbuild or git from the command line, etc... I think it's changing since .NET Core took over in that space, but there was a good 10-year run where generally no one at various workplaces had any understanding of what Visual Studio was really doing, and it did bite us in the ass at times.


There’s two sides to that coin. In visual studio, most of the time you don’t need to bother to see what’s under the hood, and you can focus on feature development. In that way, their ignorance can be a good sign. The flip side is when your stuff won’t build, it can be an ordeal to figure out why, but thankfully it does not happen a lot, and they can leave it to the local experts.


Yes, IDEs are similar in that they abstract a lot of stuff, so people end up depending on them. I don't mind IDEs, but I encourage learning what they actually do behind the scenes, so you won't be completely useless when you don't have them.

I think that's one of the biggest issues I have with codespaces.


Ownership issues aside, the "magic" under the hood is not a problem, assuming it actually works. It's when the magic breaks, or works in unexpected ways, that it becomes a problem. Magic is also progress, and progress is usually good.


I think you could look at it in terms of sustainability: "magic" progress has the potential of "magically" vanishing if its inner workings are not sound, reliant on an unreliable other, etc.


Free software won, the commons ossified as no one cared to invest in fundamental ways, new layers are slapped on to hide the cruft, and the pendulum swings back to proprietary competition.


I assume a huge part of why GitHub is okay with this is that VS Code / Codespaces and Azure are both products from the same company. :)

I do think there's a serious autonomy question here, and I think companies do actually care a lot about maintaining their autonomy from other companies, and in many cases the incentives align a lot. The whole stack here seems like something that you can run internally / self-host:

- Codespaces are specially-configured containers. All the standard infrastructure for running containers (Docker, Kubernetes, etc.) is FOSS and is quite reasonable to run internally. It's not even like OpenStack where it's a knockoff product of what the major cloud vendors run: it's literally what the cloud vendors run.

- Shallow clones, pre-built running containers, etc. are all deployment practices, not products.

- You can ssh into a codespace.

The big problem here in my eyes is that it's reliant on VS Code, whose remoting stuff is not FOSS (and, having tried it at my own workplace pointing it at our self-hosted Kubernetes, it's been a pain to get it to work well with our authentication setup, proxies, etc. without access to the code and we have a number of open bugs filed). I think the challenge is for a free-software IDE to adopt the VS Code model of the IDE running locally but executing code (including running the LSP backend) remotely.

And, in a sense, Emacs already supports this just fine with TRAMP. It's just that the experience of using Emacs and the experience of using any modern IDE (VS Code, any of the JetBrains IDEs, whatever) are very different... and none of them are FOSS.

If you have a FOSS IDE, I think you can get this whole setup working well in a way where you have autonomy over the setup and where you can understand the details just fine. And even if you're using VS Code, you can set all of this up in a way where you're not contacting any external services.


> VS Code, whose remoting stuff is not FOSS

Do you have more details on this? I thought VS Code just uses the Language Server Protocol for remote editing, which is supposed to be an open standard. What stops, say, neovim adding full LSP support?

edit:formatting


LSP is definitely an open standard and you can make that work just fine. What I mean is that the components of VS Code itself that do remote execution are not open source, so if it's VS Code that you're running, you're a bit at MS's mercy for how you make it work (my current issue is that it downloads a VS Code remote agent into the container, and that download doesn't work right inside our corporate network, and there's no way to modify that).

If you're not running VS Code, then this isn't relevant to you and I'd imagine you can get a good experience via Neovim using LSP + netrw.


> The direction the web is taking is worrisome.

It's 100% economic. "Everything has to be free" means there is no money to be made in selling actual applications. That means the industry pivots to SaaS since the cloud is DRM and by running apps remotely you can charge rent and make piracy impossible. As a bonus you get the user's data, which you can do anything you want with in most cases.

As a bonus SaaS can also claim to be open source if only the client code is open, or if all the code is open but the SaaS still holds all the data and has the network effect. So you get those open source virtue points for free while still locking up the data or the network.

Of course the other response to "everything has to be free" is surveillance capitalism.


> If they start off with things like that, there's so much magic under the hood, they won't understand how anything works.

I feel like there's an ever growing tension in software development where magic like this makes us more productive but also more vulnerable.

Maybe we need to develop better benchmarks for "knowing enough" so when the magic fails, we're not starting at zero trying to figure out what went wrong.


Hire a wizard consultant to fix the magic and let the rest rely on magic.


Why should you own your code or your device?

Why should you trust your computer?

Why should your career be safe?

People are fungible resources to be used when convenient and agreeable. Say or do the wrong thing, or have your utility dip below a threshold, and you can be removed.

It's happening.


> The issue is not so much senior devs, but new devs. If they start off with things like that, there's so much magic under the hood, they won't understand how anything works. They don't understand they don't own shit until it's too late.

Any good CS/Engineering program should have the students build and use a dev environment they configured to their liking.

If the only dev environment used is wrapped into some IDE magic I would be very skeptical of the program.


My biggest fear is giving up control to corporations. It starts with the honeymoon phase and once they control the market... they can dictate who has access, pricing, government backdoors etc...


jamstack is very fucked up to me, glad I'm not going crazy. you basically offload EVERYTHING to 3rd parties.


There's nothing about jamstack that requires you to use third-parties (except the CDN part, I guess, but that's just a mirror). You could just as easily host your own CMS and database and call out to it the same way.

It's just that like auth, logging, etc, CDNs have been neatly abstracted out and service-ified in such a way that it's often more economical to go third-party. But that choice is orthogonal to jamstack.


I agree on that, and was going to add that, to be fair to the stack. But the reality is, while that's doable, most people just use third-party services, at least that's my experience.


Sure, but I think that's true in non-Jamstack code as well. Datadog, Snowflake, BigQuery, DynamoDB, Google Analytics, etc. are all examples that I've seen used by non-Jamstack software, and most have a fair bit of lock-in.


jamstack certainly presents itself first and foremost as "use third party services".


I initially learned about it because a junior dev asked for advice on one of his/her apps. It was a college app, simple, it just gathered info from students. But, they offloaded all data to a third-party database. So you are actually giving a company a lot of sensitive, personal information about a fucking college.

He/She was concerned (and with good reason) about the safety of data. Had to learn the bad way :(


I would trust a service by junior devs more if it was hosted by a third-party database because so many companies have insecurely set up their environments over the years. Especially if the data is encrypted in a way the service can't read it, what would be the harm?


It's amazing that GitHub is not open source: the biggest host of open source software is closed source?


And it might be fine as long as the underlying building blocks are open source (git, etc).

There are open-source alternative(s) that support import of most things from github.


The real problem that Codespaces solves for stuff like the GitHub codebase is not having to bring up the dev environment. Non-trivial, organically certifiably grown commercial software has a bunch of services that all eat different things in winter. And things like the Rails monolith that GitHub's main service is have a tendency of being rude to a dev's laptop: squatting the place with whatever ports and files they want lying around, installing packages and software to variously namespaced-or-not locations. If you deal with any set of services, they'll all have various hygiene standards and run well or not on your machine, depending on whether your machine looks like the core dev's machine. And then everyone famously wants a custom work machine with freedom to manage it however they want.

Codespace stuff is a way out of insanity. It's a mediated common ground. You could standardize on Docker or Vagrant or whatever, but they come with a lot of pitfalls and the local dev story with them all kind of sucks. Codespace on the other hand is a solution that mostly doesn't suck. It's actually a pretty decent experience.


Sure. Until you get your account banned because you went against an obscure TOS clause, or because a powerful entity put pressure on them or paid their way into it.

Can't wait for the equivalent of abusive dmca to be able to put down your entire dev team.

It's really a way out of madness.


Doesn't docker solve all of this? It has for my group, even using C++, which is notoriously finicky. You can version control your Dockerfile and bootstrap a dev environment in minutes.
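A version-controlled Dockerfile really is only a few lines; here's a minimal sketch (the base image and package choices are illustrative, not any particular team's actual setup):

```shell
# Write a minimal C++ dev-environment Dockerfile into the repo (illustrative).
cat > Dockerfile <<'EOF'
FROM gcc:13
RUN apt-get update && \
    apt-get install -y --no-install-recommends cmake ninja-build gdb && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /src
CMD ["bash"]
EOF

# Each developer then bootstraps the same environment:
#   docker build -t myteam/dev-env .
#   docker run -it --rm -v "$PWD":/src myteam/dev-env
```

Since the Dockerfile lives in the repo, a toolchain change is just a commit, and everyone picks it up on their next build.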


> And things like the Rails monolith that GitHub's main service is, have a tendency of being rude to a dev's laptop

FUD. Everything is neatly handled by rvm (or rbenv) and bundler. You can even use gemsets if you want to be really anal about it. It all goes in version-controlled user directories. I keep my main Mac machine, my work Windows machine, and the production Linux boxes all running the same versioned stack just fine.
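For what it's worth, the pinning described above amounts to two small files checked into the repo; a minimal sketch (version numbers are illustrative):

```shell
# rbenv/rvm read .ruby-version to select the interpreter automatically.
echo "3.2.2" > .ruby-version

# Bundler pins gem versions; the resolved Gemfile.lock is also committed.
cat > Gemfile <<'EOF'
source "https://rubygems.org"
ruby "3.2.2"
gem "rails", "~> 7.1"
EOF

# bundle install   # resolves the same stack on every machine
```

With both files (plus Gemfile.lock) in version control, any machine that clones the repo converges on the same Ruby and gem versions.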


What do you mean FUD? Maybe you've got your own way of doing things that works well so far for you with your current projects, but a ton of people don't live with the same experience, and when they do reach a comfortable spot, it's only comfortable until they move to a new set of projects or company.


I'm saying that, while a top-tier, software-as-a-service company might have a development environment that is hard to bootstrap -- because of all the ancillary things that run alongside the "monolith" -- your generalization that "Rails" makes a mess of a development machine is FUD. The Ruby/Rails part of this equation is not the issue. As always, YMMV and TACMA.


I didn't say Rails makes a mess of a dev machine. What I'm saying is that every project makes different sort of assumptions about how it's brought up in a dev environment, and these assumptions vary wildly between projects and stacks, and usually the older and more prominent a project is, odds are it will have the most unexpected impact on anyone's machine. This is going to be true for any combination of libraries, frameworks and programming languages.

I'm not mounting an attack on Rails. Rails' fine.

(Edit) I can't answer your reply. It seems there's many ways to read the quote you brought up. I'm clarifying that the meaning I intended wasn't an attack on Rails. Take it as you will. I'm a bit confused why my original comment, which I meant as a positive-to-neutral tone, is being perceived so negatively.


> I didn't say Rails makes a mess of a dev machine.

I literally quoted you saying that in my first post. But hey, what do I know?


Codespaces doesn't magically solve that problem. You have to put in the work and ongoing maintenance effort either way.


Yeah but if before we had MxN complexity:

  - M: number of devs with their own machine
  - N: number of projects with their own way of bringing up dev env
Using stuff like Codespace reduces M to 1.


There are dozens of tools and strategies to reduce M to 1. Codespaces is just one of them.

Codespaces has the disadvantage of turning you into a renter.


I believe codespaces does directly rely on docker containers that you can customize…


Such a stale take. Can I assume you are pretty much against the cloud in general?


I'm not against the cloud; I use it on a daily basis. I don't mean you have to physically own your servers. But it's not the same, to me, having your MySQL or Postgres DB up in the cloud as using things like Firebase.

I just don't like the direction of moving the development environment to the cloud. It makes it easier to just consume third-party services and a) not understand how anything works, and b) not own anything.

If you know, and are fully aware of, what the cloud implies, then sure, use it, but it's dangerous when the decision comes out of ignorance.

I guess there's not really much you can do about it though. People will do whatever is easier, and not everyone wants to know how things actually work.


Hello! Blog post author here. Wanted to quickly comment on how we support offline folks or those working on an unstable connection! The short version: we publish a container image into the GitHub Container Registry that closely tracks github/github’s devcontainer image. Folks can pull and use locally with VSCode remote containers or run directly in Docker. The github/github checkout is local in this case and mounted into the container.


So I can use the same workflow with codespaces connected to a local container? Is that documented somewhere?



Thanks but I was looking to see if codespaces could somehow be used with a locally running container.


It's all fun and games until big corps start adopting a cloud-everything approach. There will be no devices left, only thin clients connecting to someone's cloud and logging everything you see and do.

Imagine your employer watching your every keystroke and getting instant performance metrics. Then some OverseerAI reporting that you didn't type for 10 minutes already, sending a notification to your boss.


How is that different from now?

If you work for a big company you already have your laptop controlled by MDM software which can do whatever it wants.


Whatever cloud dev environment you use will shove productivity metrics up your manager's ass.

E.g. Employee A ran the tests 4 times an hour whereas employee B ran the tests twice an hour.


Just run the tests more times per hour. It's literally what you're getting paid to do!


We need new types of AI bots in this battle that can mimic us executing mundane tasks on our cloud developer machines so we can sit back and work on more interesting projects.


We currently have the best developer machines ever in the history of development, and we squandered it all with so much bloat that we need to use a thin client to do development? That too on a platform where they can easily snoop on your code, collect whatever metrics they like, and use it against you (one way or another)? No thank you.

Call me a luddite if you will, but I feel that the developers should be wary about protecting their craft. This will end up with BigCos turning this into another Social Dilemma type scenario.


The feature page for Codespaces reads less like an IDE enhancement and more like “docker doesn’t work well on MacOS so just code in a browser so we can spin up a local environment quicker”.

Or I don’t know, maybe try using a Linux laptop instead.


> Or I don’t know, maybe try using a Linux laptop instead.

This is an underrated, excellent option, especially for solo devs and small shops.

Or run a Linux VM in Parallels Pro with startup-on-login, shared networking, and port forwarding.

In my opinion, Apple should build a virtualized Linux dev environment into macOS (a sort of Linux Subsystem for Mac). That would be so helpful... or Microsoft could build a Linux distro with a next-gen window manager and full support for Windows apps. Either way, I'd be all-in.


I wonder why they're not telling the security & compliance side of the story here. Getting rid of local development means they have to worry a lot less about what's going on on their engineers' workstations. They've reduced them to dumb clients; any code going in or out of the repository has to be created, or at least pass through, a VM that GitHub controls. That lets them move the security boundary; I wouldn't be surprised if the next step is to cut off the ability to clone or push to certain sensitive repositories from outside of Codespaces.

That being said, while it has a lot of advantages from an enterprise perspective, I wouldn't want to live that way :)


> Getting rid of local development means they have to worry a lot less about what's going on on their engineers' workstations.

Wait, what?!?

No, it doesn't. Their engineers' workstation software can read or edit their code just as easily as their engineers can. Going into the cloud just adds another liability, it doesn't take any one away.


I don't think so. Software running on your computer has wholesale access to your filesystem.

If those files are accessible only via the web (or if you bother setting up ssh + pubkey, via ssh) that significantly reduces the surface area of possible attacks.

Don't hate on "not being impossible" for "being less likely."


"Less likely" applies to random events. Sabotaging or stealing code has nothing random in it. You can claim it becomes "harder", which it does, but by a very small bit that doesn't warrant anything like the claims in the GP.

All it changes is that the attacker will have to launch the browser or rewrite some part of it (which they can do, because the browser auto-updates with your access level), instead of simply taking the files.

Oh, and by the way, not all software running on your machine has complete access to the filesystem. Reading your code and changing your browser requires basically the same level of access.


I stand behind the idea that forcing everything through SSH to a VM they control or over HTTPS to a web app they control has meaningful security advantages. Of course you can craft a hostile client for any service, but it adds a barrier (and a place to run countermeasures) that wasn't there before. It definitely lessens the likelihood of getting caught up by some dumb untargeted drive-by malware, at least.

Also, do keep in mind the target audience for Microsoft's enterprise products - if something lets you check off a bunch of boxes on some government or industry compliance checklist, then that thing has value to those customers, regardless of any actual security benefit.


The idea w/ the VM being that you could run all sorts of heavy anti-malware and pattern detection and file access analysis type of stuff ala CarbonBlack w/o having to manage deploying it to all your workstations and making sure it stays updated and functional there.


>"Less likely" applies to random events. Sabotaging or stealing code has nothing of random in it.

You have N machines, M developers, with access to the internet and development environment E. What are the chances of someone stealing your code given N, M, and E.

Changes to N, M, and E change your expectation on the frequency and severity of attacks.


Interestingly, the intro docs specifically recommend using Chromium for Codespaces:

> For the best experience with Codespaces, we recommend using a Chromium-based browser, like Google Chrome or Microsoft Edge.

(from https://docs.github.com/en/codespaces/developing-in-codespac...)

Makes sense coming from Microsoft, but it'll be fun to see if/how that affects the ongoing Chromium/Safari battles (and Firefox, to a lesser degree) if Codespaces seriously takes off. "Our company development environment requires every developer to use Chromium" is going to be quite a shock.


This is because it’s based around VS Code, which is an Electron app, which uses Chromium.

In short, they never tested it outside Chromium because they never had to. Only now has it become available as a “web app” with a URL.


They’ve put a ton of work into making VSCode browser independent because of Codespaces. It now works in Mobile Safari pretty well, as well as Firefox.


Works fine with firefox afaik


They did the following optimizations to reduce the time it takes to spin up a new dev environment:

- creating a Docker base image with all the dev dependencies

- have a prebuilt environment with a full git clone of the 13GB repo before a developer needs it

But why does this only work for the remote development environment? You could easily make the same optimization for the local development environment, the only downside is dealing with Docker on Mac.


Docker on Mac is a no-go for even a modest monorepo size. I took a look at it at my previous company, where we have a 4.5 GB full checkout, and the I/O just tanked builds. It's sad really, because I think you could configure a VM to operate with better I/O, but it's not as straightforward to get that configured with the Docker for Mac system, and I didn't think it was worth the complexity add for our whole engineering org.


From the article:

> The GitHub.com repository is almost 13 GB on disk; simply cloning the repository takes 20 minutes.

Their problem seems to be mainly this. Which I'm surprised that they don't fix, but invent a workaround for. Deleting/rewriting a git repo's history is not impossible and often necessary for these kinds of cases.
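For what it's worth, shallow and partial clones can also cut that 20 minutes down without rewriting any history. A quick demo on a throwaway local repo (a small stand-in for the real 13 GB one):

```shell
# Create a stand-in "monorepo" with two commits.
git init -q source-repo
git -C source-repo -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "first"
git -C source-repo -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "second"

# --depth=1 fetches only the newest commit; on a real server,
# --filter=blob:none would additionally skip historical file contents.
git clone -q --depth=1 "file://$PWD/source-repo" shallow-copy

git -C shallow-copy rev-list --count HEAD   # prints 1
```

The trade-off is that history-walking commands (`git log`, `git blame`) only see what was fetched, which is why some teams still prefer a full clone kept warm in a prebuilt image.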


How often do you clone a repo?

Our monorepo is multiples of that and the last time I cloned it was 2 years ago. When we first moved to git we simply had a 'clean' clone sitting on a network drive that all users could just copy to their machines to get started.


I previously worked at a company on the team that managed the monorepo and the build system for the code within it. Our checkouts were 4.5 GB full. We had discussed performing yearly rollups, likely around Christmas, for the current version. This would essentially be: take a full checkout and run `rm -rf .git && git init && git add --all`, starting out with a new genesis commit. We'd keep the old "branches" in a separate archive-only repository on our GitHub and force-push this new main copy to the working repository. And then provide guides and support for engineers to utilize the new rolled-up repo and port any of their existing WIP.

The concerns were long lived branches that had WIP and I think also just misunderstandings about what was going to happen. We never did it.
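The rollup itself really is just a handful of commands; a sketch on a throwaway repo (the branch archiving and force push described above are left out):

```shell
# A throwaway repo standing in for years of history.
git init -q big-repo
cd big-repo
echo "v1" > app.txt
git add app.txt
git -c user.email=dev@example.com -c user.name=dev commit -q -m "old history"

# The rollup: drop all history, re-commit the current tree as a new genesis.
rm -rf .git
git init -q
git add --all
git -c user.email=dev@example.com -c user.name=dev commit -q -m "genesis: rollup"

git rev-list --count HEAD   # prints 1; the working tree is unchanged
```

The working tree survives untouched; what's lost is the link to old commit hashes, which is exactly why long-lived WIP branches were the sticking point.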


Somehow I can see some companies using this to spy on their employees and "measure" how productive they are, and how well their timesheet reports match.


This is 100% where the tools-in-the-cloud thing is going. It's going to be a major selling point for a lot of buyers (companies).


I'm really surprised (and discouraged) to see how many people here are so enthusiastic about this. There is so much room for misuse. You're increasing your dependency on an external corporation, giving them more control over your development environment, giving up much of your autonomy, and somehow this is a good thing?

Development environments being overly complex to setup, long compile times, those are symptoms of software bloat and bad design, but instead of addressing these fundamental problems, people want compilation to happen externally on a 32-core machine so they can sweep those problems under the rug. Okay, let's see how that turns out.


I would quit so fast if my company forced me to use a web IDE.


(googler, opinions are my own).

Google allows for normal desktop IDEs, plus having a web based one. The funny thing is, many people move to the web based one because it is so good.

But Google is also unique in how piper/citc[0] (our source control) works. It's effectively designed for web/cloud based style development, so the workflow for web based dev and desktop dev are effectively the same.

So web dev can work, but I don't think the existing tech that most of the software industry has is built well to support it.

[0] https://cacm.acm.org/magazines/2016/7/204032-why-google-stor...


We also "allow" both but the size of the repo and build process are quickly getting out of hand for something to run on a laptop. In the past we had tools to manage the size of what an IDE had to index, etc. but these were abandoned in favor of the cloud effort, so to develop locally with any sanity you have to use years-old IDE versions and pretty much debug it yourself.

The cloud environment builds and runs faster but the autocomplete, go-to-definition, etc. are pathetic.


Interesting, can you elaborate a bit on "cloud-based" development vs for example a traditional workflow of git and a central repo?


Google has a monorepo in which nearly all of the company's code is stored. The monorepo uses a custom implementation of Perforce. No personal computer could download a whole copy of the repo. So for that reason people need to check out a virtual/cloud copy of the repo in order to do work. This has the added benefit (for google) of not allowing individuals to have code on their device which is bad if the employee loses their device or has it stolen.

There are some exceptions for some teams (like Chrome). And when I was there exceptions for people who did mobile (iOS/Android) development, though Google was moving pretty aggressively towards a world where even iOS code wasn't allowed on the employee's device.


It's been a while since I've used Perforce, but wouldn't each team have their own 'path' in the depot and set their clientspec to only download that? Why would anyone, other than a build machine, need the whole repo?


Not a Googler but we do have a monorepo... you can start with a team specific path, but you feed that to the build system and ask it to identify all the other paths where your (transitive) dependencies are.


Should've read the entire post first. o\


Is the IDE based on Eclipse/IntelliJ/something else?


(Opinions are my own.)

It used to be entirely custom, but is currently being re-based on VSCode. Overall very similar feel to Codespaces, just tailored for how Google works.


GitHub isn't forcing anybody to use a web IDE. The most broadly used access interface is VSCode. Then there are heretics like me who choose to ssh in so we can use Vim. This is described in the article.


How is the latency btw? I SSH to a beefy box that sits under my table and sometimes I hate the latency of running emacsclient over X. But I am not using Vim, and I don't use terminal Emacs because of clipboard integration between remote Emacs and the local desktop. I know there are hacks to make the clipboard sync between local and remote Emacs sessions without running Emacs over X11 forwarding, but I always found those to be janky.


It's very good. The rendering and inputs are done locally. Completion, compilation, linting, and other things are done remotely but they are not instant anyway so the network latency is not really noticeable.

I use GitHub Codespaces with VS Code and vscode-neovim. Neovim is running locally.


I’ve always thought that remoting to edit text files using X or RDP is doing at the wrong level of abstraction.

VSCode editing over SSH works really well in contrast. I think Sublime has a plugin that ain’t bad. Emacs (TRAMP) is probably the worst of the three :/.

Terminal Vim or Emacs over Mosh is a pretty good option too.


Not OP, but Amazon uses "cloud desktops" with SSH access company-wide, and no one really has latency issues there. So it's definitely something supported at some companies...

I don't know how many people would want to use graphical applications over the network, however; seems like it would be janky.


Do you have to drag your .vimrc and plugins along every time you set up a new code space? My vim setup is pretty bespoke, it would be annoying to have to set that up every time.


You can have it clone your dotfiles repo automatically and run a set-up script:

https://docs.github.com/en/codespaces/customizing-your-codes...

I also have a very bespoke dotfiles config that predates Codespaces (https://github.com/wincent/wincent), so I made a thin wrapper around it that makes it work (https://github.com/wincent/dotfiles). For people with less complicated set-ups (ie. basically the entire universe), it is pretty straightforward.


It's totally worth taking the time to sync your dotfiles via git. There's a little bit of maintenance involved whenever you make a config change on one system that you want to propagate elsewhere, but it pays off.

Personally, I just `git init && git remote add ...` in my home directory and have a .gitignore that ignores everything by default, so I have to `git add -f` whenever I want to sync a config. Submodules work well for `.vim/bundle/the-plugin`. It's also nice for syncing important zsh customizations like this one: https://github.com/MatrixManAtYrService/home2/blob/master/.z...

There are also other strategies: https://dotfiles.github.io/
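The ignore-everything trick described above is small enough to demo end to end; here it is in a scratch directory rather than a real home directory:

```shell
# Scratch directory standing in for $HOME.
mkdir scratch-home && cd scratch-home
git init -q

# Ignore everything by default; only force-added files are tracked.
printf '*\n' > .gitignore
echo 'set number' > .vimrc

git status --porcelain              # empty: everything is ignored
git add -f .gitignore .vimrc        # -f overrides the ignore rule
git -c user.email=dev@example.com -c user.name=dev commit -q -m "dotfiles"

git ls-files                        # prints .gitignore and .vimrc
```

The nice property is that `git status` stays quiet no matter what lands in your home directory; only files you deliberately `git add -f` ever show up as tracked.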


I once had a setup where I could just clone a repository and run a script to get my dotfiles set up. For Vim, I would just need to then run the command to install the plugins and that was that.

Edit: Here is the CLI program that helped me set this up:

https://github.com/thoughtbot/rcm


Can you install RDP/VNC and just remote into it like that? Then get sudo access and install whatever you want on it?


Well, it's a Linux VM/container, so while you could do that, I don't see why you would, I think all a dev would want to do in a dev container would be command-line in nature.


It's not just a dev container, it's meant to be your workstation, with the code files being stored in it and not on your personal machine. So if I want to install IntelliJ, Beyond Compare, etc. Or even if I don't want to bother with setting up tunnels and port forwarding for testing my web app locally, I could use Firefox in the VM, etc.

Seems it would solve all the issues, which are specifically that it mostly works only with VSCode or terminal apps, but with a remoting solution you could run anything you'd want.


How does setting up a desktop IDE work?


Currently, VS Code is the primary method of connection to each Codespace, and we do not support other editors officially.

If you’re looking to connect via SSH with your desktop IDE, we do have a workaround here: https://github.com/microsoft/vscode-dev-containers/blob/main... -- and I use this regularly for Jupyter Notebooks. :)


This (quick-to-provision VMs of any size you want, with your code base, etc.) for Jupyter sounds like a Colab and Mode killer... the way both of those make it hard to actually check in code leads to a ton of copy-and-paste code, etc. You all should really advertise this capability.


So if I want to use IntelliJ or RubyMine, is there a workaround?


Yeah, go work somewhere else :))))) /s

Jetbrains recently added remote capabilities to IntelliJ, I'm hoping it's going to be usable with codespaces soon.


I'm in the GitHub Codespaces personal beta: counterintuitively, the UI/UX of Codespaces is as performant as VS Code on the desktop.


I also don't use VS Code usually. My personal preference is a JetBrains IDE, but my point is more engineers should be able to choose the environment they're most productive in.


Currently, VS Code is the primary method of connection to each Codespace, and we do not support other editors officially.

If you’re looking to connect via SSH, we do have a workaround here: https://github.com/microsoft/vscode-dev-containers/blob/main... -- and I use this regularly for Jupyter Notebooks.


That looks cumbersome enough to sound like you guys are trying to discourage the method entirely. A custom password that has to be copied and pasted every time? A lot fewer people will put up with that for long enough, and I'm sure whoever came up with that and whoever approved it, they all know this.


Jetbrains has a hosted-IDE pilot project: https://lp.jetbrains.com/projector/


How is the performance on the JetBrains IDEs? I love WebStorm & IntelliJ but they are soo slow compared to VS Code for me.

At first I liked them because they have lots of great functionality out of the box, but VS Code has caught up, and just yesterday I switched back to VS Code from JetBrains. For Java Spring and React.


> How is the performance on the JetBrains IDEs? I love WebStorm & IntelliJ but they are soo slow compared to VS Code for me.

It's pretty configurable. In my experience, it's pretty quick. It's not text editor or VS Code quick but the only thing that ever trips me up is the occasional re-indexing. Well worth it for how well it does everything else.


Doesn't VS Code essentially run in a browser anyway?


Yes, with some Electron-specific shenanigans, but it still works well in a normal browser. The web UI has a few features that make it more device-agnostic too (e.g. a hamburger menu replacing the File menu). Yes, you can run Codespaces on an iPad.

Code server (https://github.com/cdr/code-server) works in a similar way using your own PC as a host, but for whatever reason Codespaces is more performant.


That's not counterintuitive at all: desktop VS Code is a web app running in Chromium.


To be clear, the UI is a web app. But that is serviced by out-of-proc servers written in many different languages (e.g. language servers are usually written in the same language that they service).


In some sense, that's exactly the same as normal web-browsing:

Parts of gmail run in the browser, the rest runs on servers written in many different languages at Google.


To my mind that's more of an indictment of VS Code than a recommendation for Codespaces.


Unless you've used it; then you'd know that its browser backing is nothing to frown at, since it's a great piece of software that works well.


Isn’t VS Code Electron based? Same diff at that point


Same regarding the code executed, but not same in that various actions require a round trip to a remote server with attendant latency.


Yes. Though there are a lot of techniques to hide latency.


I am wary of requiring an internet connection in order to develop, but there seem to be more and more advantages to using something like Codespaces.

The article describes using pretty much any IDE, so they are just offloading the setup and running of the development environment to the cloud while letting you run your preferred IDE locally.


Some people literally disconnect the cable to get rid of all the online distractions. I've seen this mentioned in a few threads on procrastination here. It's sad some tools will not let you do that anymore.


An internet connection is already required for dependency management and stack overflow. I already develop with an always-on connection.


Dependency management is usually not a big part of programmer workflow (unless you are a devops or something). Download dependencies and docs, go offline, write code.

People who try working offline are exactly the people who want to do it without Stackoverflow.


They're also the people least likely to want to use this kind of setup in my experience.

My company has a remote VSCode setup not that dissimilar from this setup. Big C++ codebase. Many engineers love it, even some of the "old timers." Our interns can be productive on day one and it brings a lot of productivity to not have to worry about caring for your dev environment.

But there are plenty of people who really just want to SSH into a beefy desktop (or directly edit on that desktop) and use their vim/emacs setups. These are the people who know how the whole build stack works, can debug the vscode/codespace magic, and can do amazing things not available in the VSCode plugin library.

So it's good to be able to do both. Allow power users to use their own tools, but don't require everyone to be a power user to sit down and write some business logic. Give nice rails to ride but don't require them.


An internet connection is also required for fairly important things like pushing your code, giving and receiving code reviews, etc.


I can see how a web-based IDE speeds up onboarding and reduces friction in support and collaboration. But I wonder if it reduces the skills engineers develop over time?


What skills would developers really lose? Speeding up onboarding seems like a win; not many skills are developed, imo, by trying to set up your personal dev environment. It's mostly just frustrating when a company's tools don't work well on your machine.


Being able to manage the frustration could be one such skill. And also being ready to fix things instead of assuming the employer must set up a perfectly working environment for you.

I bet smaller/poorer companies may want programmers who are ready to fix tooling when it (inevitably) breaks.


I see your point, but I think increasing productivity may be worth potentially losing some of the skills you mentioned.

Additionally, it's not like developer setup is the only thing that tests your ability to manage frustration. I would say developer setup is just something you need to get through to start your actual job, and if that can be eased then it's worth it, imo.

For smaller/poorer companies, they could always hire an engineer that specializes in fixing broken tooling. Ultimately it's a company's job to hire competent engineers.

Thinking back to all the times I had to set up my workspace, I wonder how much did I really learn vs just Googling and getting quick fixes.


I write all my code on my personal laptop and then retype it into my company's IDE


Don't you then have to maintain a copy of all the dependencies and input data on your personal laptop? I can't imagine myself being productive this way, even for small standalone projects where cloning everything to the personal laptop is feasible. This sounds awful and the company IDE must be even worse somehow.


That seems unnecessarily complex and time-consuming. Unless your company's work machine is locked down and you can't be productive on it...


If the company IDE is annoying to use, then retyping the code is probably a net time saver. The time-consuming part of coding is rarely typing the code, after all.


Exactly. I need Intellij to be productive in such a horrifically spaghetti code base


Reminds me of a project many years ago where I was forced to use a company's crappy home-grown CMS that deployed all their microsites. They had a weird code editor that they expected you to develop in inside the CMS. I wrote a Selenium script that would log in and upload the contents of my local files into the editor.


I wonder if that's against your employment contract.


Probably, but I am more likely to be fired for not doing work


You might get fired plus sued.


This is a sentiment that is very typical among Singaporeans. I suspect it has something to do with the political/business climate over there.

While the USA is also litigation happy, it actually takes an incredibly high amount of deliberate destruction for an employer to succeed in suing an employee.


I'm just an immigrant here.

I don't think it's very likely you'd get sued, but I wouldn't want to risk it against well funded adversaries with expensive lawyers.


same but i copy paste


The article specifically says that they support their vim and emacs users by allowing them to SSH in and edit code.


The article says they support web based development, or ssh'ing into the codespaces machine.

There are other IDEs besides VS Code and shell based ones.


Don't most IDEs have the ability to edit remote files over SSH? I know VS Code can, and I remember reading a comment from an emacs user doing that. It looks like IntelliJ can do that too, though it seems to edit local copies of files and sync them over SSH. There's also SSHFS, though I don't know how good an experience that is.

It can be a pain to set up, but is it more of a pain than setting up a dev environment the old fashioned way?


Emacs has TRAMP, which does a very credible job of making remote resources appear local, not just to edits but to most Emacs libraries as well. Its most normal mode of operation is over SSH. It's not very hard to set up and it's pretty magical once it is. Even Magit just works. I was skeptical earlier this year, but 6 months later I'm doing 80-90% of my development in TRAMP buffers without thinking about it.
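
For the curious, the "setup" is mostly just path syntax; a minimal sketch (user names, host names, and paths below are all hypothetical):

```elisp
;; Open a remote file over SSH with TRAMP: C-x C-f, then a path like
;;   /ssh:alice@devbox:/home/alice/project/app.rb
;; Multi-hop through a bastion host works too:
;;   /ssh:alice@bastion|ssh:alice@devbox:/home/alice/project/app.rb
(find-file "/ssh:alice@devbox:/home/alice/project/app.rb")
```

Once a buffer is visiting a TRAMP path, most commands (compile, dired, Magit) operate on the remote side transparently.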


Good point! I suspect the issue will be running tests. From what I gather they did all this to save developers the pain of compiling ruby gems + specific stuff on your environment (which can be a huge pain...)

If you edit a file using some sync over ssh option, and want to run a test - how do you proceed? And how is this not the same thing as just checking out the git repo locally in the first place?


> If you edit a file using some sync over ssh option, and want to run a test - how do you proceed?

Well you have your files synced over ssh and also have a terminal over ssh which you use to run your tests. Am I missing something?
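
As a sketch of that loop (host, paths, and test command are placeholders):

```shell
# Push local edits to the dev box, then run the suite there
rsync -az --delete --exclude .git ./ alice@devbox:project/
ssh alice@devbox 'cd project && bundle exec rspec'
```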

> And how is this not the same thing as just checking out the git repo locally in the first place?

It's not just avoiding cloning the git repo. It's avoiding building all the dependencies and getting them configured and running. For most of my projects that's not a big deal but I expect GitHub has several databases with specific configuration, memcached or redis, maybe custom patched builds, etc.


Yeah fair, if you run the tests in the cloud that saves you a step (filesystem sync via ssh is still in effect the same as a git clone I'd argue...)

If you check out code in RubyMine, it'll try and install the dependencies in the Gemfile regardless, though.


Whether or not the editor itself supports it, macOS, Linux, and Windows can all run SSHFS.
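
A minimal sketch (host and paths are placeholders; macOS additionally needs macFUSE installed):

```shell
# Mount the remote source tree locally over SSH
mkdir -p ~/mnt/devbox
sshfs alice@devbox:project ~/mnt/devbox -o reconnect,ServerAliveInterval=15

# ...edit with any local tool, then unmount
fusermount -u ~/mnt/devbox   # Linux (use `umount` on macOS)
```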


An IDE needs to be able to do more than just edit files, though - e.g. for debugging, it needs to know how to run the code on the remote box, and it'll be different from local.


No mosh? I've always had some latency issues trying to pull this off on my mobile hotspot or even just in general in SEA.


Mosh provides a noticeable improvement in SSH reliability even on a home fiber connection in the US.


Note that they mainly use VS Code and also support vim and emacs.


Like, Vim and Emacs keybindings? Or real Vim and Emacs? It's quite different xD


They let you ssh in. So you can run vim/emacs on the machine if you want.


You can ssh in and run actual editors.


Okay? That's three editors and there are hundreds of others (though probably <25 that are used by more than a few people). There are sizeable followings for other CLI editors like Neovim, Kakoune, Yi, etc.


That's not a requirement. RTFA


I would be happy to let go of an employee who banks their employment on tooling preferences.

Hear me out:

Tooling is, as the name suggests, a means to an end. If a web IDE is fast, works reasonably well, keeps improving, and has a learning curve friendly to engineers of different backgrounds and experience levels, everyone should be comfortable being nudged to use it. And if the tool is additionally a critical product of the organization, everyone should be comfortable being mandated to use it, because it is now a very valuable source of dogfooding.


> I would be happy to let go of an employee who banks their employment on tooling preferences.

> Tooling is, as the name suggests, a means to an end.

It goes both ways. Employees are paid to accomplish an end, and I understand employees who are happy to walk away from an employer that micromanages the means to that end down to preferences in tooling. The reason you quit isn't because you miss Emacs (even if you do), the reason you quit is because your boss doesn't even trust you to choose a text editor.

That said, if the thing is your own product, yeah, I get it: dogfooding is important. But I know of teams at Microsoft who do their (OS-agnostic) work on macOS... at some point accomplishing the end is more important than dogfooding, and you just let the workers use the tools they're comfortable with.


And I would be happy to end my employment at an employer that mandates which IDE I use. (I would not be surprised if GitHub made me use GitHub for development, or if Asana used Asana to track issues, but this one is just too far...)

No hard feelings or anything. I spend 5-8 hours a day working in my IDE. That's a huge chunk of my life and it matters a lot to me to work how I want. I've also found getting this aspect of my worklife right greatly impacts my overall productivity.


Yeah, whenever I hear this kind of rhetoric - if my company ever required <some minor annoyance> it's time for a new job - I roll my eyes. Sure, keep pushing yourself out of more companies, there are a lot of good engineers* who will be absolutely as productive as you and who won't be a drain on everyone around them.

*it's always engineers making these statements (or maybe I just only know engineers)


I wouldn't call it a minor annoyance when it's a way of working you're practicing for 6-8 hours a day.

I am best placed to determine what tools I work with best to get work done. You wouldn't get an electrician in to do some work and then decide you can suddenly prescribe the tools they use.

I just don't have the patience for this sort of nonsense, there are a dozen clients who won't micromanage the operating system, editor and way of working that I'll be using to get the job done. With that said, obviously it's a two-way street and some willingness to be flexible needs to be shown on both sides. If $client wants me to sync my work with their $uniqueVersionControlSystem then fine, I'm willing to put in the extra effort to try and meet their specific needs within reason. But that doesn't extend to working inside a client-provided VM, using a client-prescribed OS image, using a client-specified IDE or making other such major changes to my established ways of doing business.

And in fact any client trying to micromanage this sort of stuff is a massive IR35 working practices red flag over here in the UK. You're better off terminating the agreement and signing something else as a SoftEng contractor.


It works both ways. There are plenty of companies that will happily take productive engineers, without forcing them to use a slow, poor-quality IDE as their main tool, full-time. It's not a minor annoyance if it's the tool you use throughout the day, to do a job for which you are then evaluated.

"Tools are just means to an end" is just one of the many rhetorical devices that cushy managers use to eschew responsibility. Of course they're means to an end, but some means are better than others, and usually those with better means end up doing a better job.


There are plenty of engineers but few good ones.

Hiring is brutal, especially now, so good developers can afford to be picky and find another semi-identical job in almost no time.


I hear you, but I'm suggesting that this trait is part of what differentiates a good engineer. If I'm hiring someone and they have a bunch of recent moves that stem from reasons like "they wanted to use some process that I refused to try" or "they made us go through some training that I thought was a waste of time", then I consider that to be a very risky candidate.

There should honestly be some good faith on both sides. It doesn't make sense for a job, in most cases, to be strict about an editor. But it's also not a good move for eng to be walking around talking about all the fragile things they'll quit abruptly for.


> stem from reasons like "they wanted to use some process that I refused to try" or "they made us go through some training that I thought was a waste of time", then I consider that to be a very risky candidate.

I feel like you're gradually making this hypothetical engineer seem more ridiculous. If someone quit over training or an unwillingness to try scrum, that would be bananas, of course. There's an extremely large gap between that and mandating the main tool you use to do your job every day.


Totally, I think if something is making you quit it must be pretty annoying - or you wouldn't risk trading the known with the unknown (which can very well end up being worse). I also wouldn't mention the real reason in a future job interview, if it sounds ridiculous.

That said, small things can look trivial from the outside and make your life a living hell at the same time.


I'm also not sure I agree that there are few good engineers. I think that's some rhetoric that we all say because we all think we're some of the few good ones.

In my experience (FAANG, startups in the valley and startups out of state/country), strong engineering talent is not particularly difficult to find, but finding a good fit is the tricky part. And I would wager that eagerness-to-rage-quit is a pretty good indicator of someone who will have trouble finding teams that they fit on.


That depends on what kind of companies you want to work for.

If you look outside of FAANG (companies in the rest of the world with unattractive stock), finding good engineers is a problem and most codebases are filled with horror.

The plus side is that, once you're a good engineer, you get little stress/pressure and a lot of leverage on flexibility (e.g. working remotely from cheap places, in pre-Covid times), even if you won't make as much as FAANG engineers. Also, interviews don't require two months of preparation every time you want to jump ship.

I'm sure if you're paying FAANG money you can find plenty of good engineers happy to do backflips.


Not a rage quit, but I absolutely would look elsewhere. Folks are allowed to have dealbreakers for where they work.

A lot of people "rage-quit" Basecamp over their no-politics policy. I probably wouldn't have, but to each their own.

FWIW, I have also spent lots of time at FAANG, startups, etc and haven't had any trouble finding a great fit. I have also never looked more than a week or two for a job when I wanted to leave, so that does factor into my willingness to leave if I'm not enjoying my job.


[flagged]


sounds good, good luck


I'm currently working on a huge codebase that takes forever to compile.

I'd love to use Codespaces or a self hosted alternative so that I don't need to carry around a heavy i7 laptop.

Being able to develop in the cloud from a browser opens up the possibility to develop from a thin Chromebook.


You can do this right now with the Remote SSH extension for VS Code:

https://marketplace.visualstudio.com/items?itemName=ms-vscod...
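
For anyone who hasn't tried it: the extension rides on a normal SSH host entry; a minimal sketch (host and user names are placeholders):

```
# ~/.ssh/config
Host devbox
    HostName devbox.example.com
    User alice
    ForwardAgent yes
```

Then run "Remote-SSH: Connect to Host..." from the VS Code command palette and pick `devbox`. Extensions and language servers run on the remote machine while the UI stays local.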


I use this for developing on Linux/Pi/Jetson boards (I need to test linking against real libs, and cross-compiling for some reason still isn't a thing), but the compile times are terrible...

I wish this were even more modular so we could have a dumb compile farm + local IDE + run & debug on-device. (I've recently gotten 90% of the way there by remote-SSHing with VS Code into a Pi VM.)


I suspect you could hack it together with something like distcc, with VSCode none the wiser.
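
A rough sketch of what that hack might look like (host names and job counts are hypothetical; assumes distccd is already running on the farm machines):

```shell
# Tell distcc which machines can take compile jobs (host/slots syntax)
export DISTCC_HOSTS="buildbox1/8 buildbox2/8 localhost/2"

# Route the build's compiler invocations through distcc
make -j18 CC="distcc gcc" CXX="distcc g++"
```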


Isn't this option already available with CI/CD or Vim?


You can also use VS Code on a remote VM directory. Our devs do this.


I'm doing that as well. Works fairly well.

(Much better than eg PyCharm's attempt at remote development.)


> Yet for all our efforts, local development remained brittle. Any number of seemingly innocuous changes could render a local environment useless and, worse still, require hours of valuable development time to recover. Mysterious breakage was so common and catastrophic that we’d codified an option for our bootstrap script: --nuke-from-orbit.

I know not everyone wants to run Linux or FreeBSD, but this is one area where ZFS shines for me. I take regular snapshots on a cron schedule and also whenever making system changes. If I manage to get the system into an unexpected state, I just rollback a snapshot and optionally reboot. Save for the potential reboot, it's an instantaneous operation. I don't need to spend hours restoring from backup.

Using the built-in ZFS support in Ubuntu, you get a nice split between system and user data. You can roll one back without the other. Snapshots are created on every mutating apt operation.
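
For anyone who hasn't used it, the whole workflow is a couple of commands; a sketch with a hypothetical pool/dataset name:

```shell
# Take a snapshot before a risky change
zfs snapshot rpool/home@pre-upgrade

# See what snapshots exist
zfs list -t snapshot rpool/home

# Something broke? Roll back (discards changes made after the snapshot)
zfs rollback -r rpool/home@pre-upgrade
```

The cron side is a one-liner, e.g. `0 * * * * zfs snapshot rpool/home@auto-$(date +\%F-\%H)` (remember that cron requires `%` to be escaped).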


Maybe they should have used Nix to get a dev environment that is identical for everyone... but that would be too easy.

The future, long term, is probably in the web; I'm just not sure we are there yet.


Buy a Cachix subscription and the org can build once on a local dev's machine and push to the service; other users and machines can then download and run the reproducible 'apps' anywhere: local dev machines, CI, and even production. CI can even build and push the architectures your local machine can't produce, covering just about everything.


This is what we do at work, and it's working great for us.

GitHub's approach seems utterly ridiculous, especially given that I've seen colleagues prevented from working because their Internet connection at some point ran through a part of the world that was sanctioned by Western governments.

Developer, it appears you are somewhere near Crimea! NO EDITOR FOR YOU!


Apparently you can set up Codespaces with any Docker image you want, so there's no reason you couldn't use Nix to generate one for use with Codespaces.

The Visual Studio Code server isn't in Nixpkgs/NixOS yet, but there's an out-of-tree effort being maintained here for now: https://github.com/msteen/nixos-vscode-server
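
A sketch of the glue, assuming you build a Nix-generated image (e.g. via `pkgs.dockerTools.buildLayeredImage`) and push it to a registry; the image name below is hypothetical, and this goes in `.devcontainer/devcontainer.json`:

```json
{
  "name": "nix-dev",
  "image": "ghcr.io/myorg/nix-dev-env:latest"
}
```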


There are lots of options for making your dev environments reasonably quick & easy to set up if you're mostly doing web stuff, or developing for a very small count of platforms, and mostly or entirely with open-source tools. Change any of that and things start to get harder in a hurry.


Huh, so there we are again:

* you write code in VSCode (developed by Microsoft)

* using Codespaces (created by Microsoft)

* in TypeScript (developed by Microsoft)

* downloading libraries from NPM (owned by Microsoft)

* pushing your code to Github (owned by Microsoft)

* to run it on Azure (by Microsoft)

and on top of that you read email about the launch of a new feature in Outlook and celebrate it with your colleagues on Microsoft Teams...


But Google is the problem.


Both are problems - Microsoft monopolizes development experience, Google monopolizes other areas like private info (they know what you search for, what videos you watch, what restaurants you go to, probably even where you fly)


* VSCode doesn't spy on me (metrics can be disabled)

* Codespaces don't spy on me (much)

* Typescript doesn't spy on me

* NPM doesn't spy on me

* Github doesn't spy on me

* Azure doesn't spy on me

Google does. It's literally their main revenue stream.


This validates my theory that Microsoft is trying to build something big in dev tooling using Github, VSCode, NPM and Azure. This is probably the second "E" in EEE.


Didn’t know EEE as an acronym, in case others are looking and not interested in reading about Eastern Equine Encephalitis, I think here they are referring to Embrace, Extend, Extinguish: https://en.m.wikipedia.org/wiki/Embrace,_extend_and_extingui...


I wonder if this could create circular downtime: if Codespaces goes down, how can Github engineers work to restore service to Codespaces?

The tool to fix the broken tool is the broken tool.


I see the advantages with this, but this would be pretty annoying on a slow or choppy internet connection. I highly value being able to develop software with little or no internet.


> CPU up to 32 cores

> Memory up to 64 Gb

Great, barely enough to develop my new Electron app.


If your new app needs that many resources, I'm sorry to tell you, but it's not Electron. It's your app.


I can tell hyperbole, memes, and sarcasm have no effect on you.


Bit of a tangent; I used to work for a fintech company that had a good few ex-Googlers. Their setup was similar to using Codespaces: cheap, locked-down Chromebooks for devs to type on, remotely accessible machines or cloud VMs in a well-separated network to host the monorepo, run builds etc.

It was an extremely miserable experience.

While I love the marketing materials of Codespaces and I want to believe in the superiority of remote development, I'm not yet convinced.


Imagine being a dev at GitHub: coming in on your first day, setting up your dev env in a couple of minutes, and rolling out a simple fix to millions of users that very first day, maybe even within the first few hours. That would be extremely gratifying.


Reminder to self: don't apply at GitHub


Has anyone built a product that can beat Rubymine's code navigation capabilities?

I can navigate to any internal or dependency definition, out of the box.


I find RubyMine to be sluggish, and I stick with SublimeText and a set of terminals. However, I was trying to debug a plugin of a plugin of a gem, and installed it again to see if it would help. I found the problem in about 5 minutes because of this navigation. Credit where due; it was pretty awesome.


Agree - I find other editors much more responsive.

But when I need to learn or explore a really large code base in Ruby, I haven't found anything better.


JetBrains IDEs can require some tweaking of the JVM startup params, mostly the initial and maximum heap size. The defaults are rather low, and for projects with lots of dependencies the IDE needs a lot of memory to keep indices and caches warm.

I usually run mine with at least 8 GB max heap space (on a 32 GB machine) and have a good experience with large Gradle projects.
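
Concretely, that's two lines in the IDE's custom VM options file (Help → Edit Custom VM Options…, which writes e.g. `idea64.vmoptions`); the values below are just an example:

```
-Xms2g
-Xmx8g
```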


Some past related threads:

Developing Online with GitHub Codespace - https://news.ycombinator.com/item?id=24565606 - Sept 2020 (54 comments)

First Impressions of GitHub Codespaces - https://news.ycombinator.com/item?id=24339118 - Sept 2020 (93 comments)

GitHub Codespaces - https://news.ycombinator.com/item?id=23092904 - May 2020 (601 comments)


interesting read and lots to glean, but it seems to draw a false distinction between local and cloud in many areas.

env portability/consistency/prebuilds, as an example, could have been done with in-house CD/Vagrant/Packer ~10 years ago. This post doesn't focus specifically on that, and there are additional benefits to the approach taken, but the notion that local dev is incompatible with these types of technologies is a consistent background theme in the writing.

also, referring to emacs as non-graphical and 'shell based' is a bit off, to say the least.


I’m using a homebrew version of this: I built a powerful Linux machine that I keep at home and connect to via VS Code's remote support. It is very stable at this point, and I'm very happy with this setup. I have a light laptop that is basically a thin client, and I do coding on a 16-core computer that I can access from anywhere with internet. Port forwarding works great in VS Code too. I use Tailscale for networking, so that's also been nice; no need to mess with VPNs or anything like that.


I realize my reaction is, in a sense, completely off-topic, but what I notice most is that they didn’t think Docker solves this problem for them. In fact, I believe this is the first time in 4 years where I’ve seen a company try to solve this problem without making Docker the center of their effort (though they bundle the nightly build in Docker, it seems a minor part of their strategy, and could have easily been a traditional VM instead). They are being creative.

The crucial thing is they didn’t feel this problem was solved by Docker, so they did something original and, in some sense, this amounts to a vote-of-no-confidence on the question “Does Docker solve our development problems?” A question that, back in 2016, most people thought Docker would solve.

It is possible that 5 years from now, when we look back, we will remember this as the moment when the momentum and focus and energy of the tech industry began to shift in new directions. Because, certainly, for the last 5 years most of the tech industry has been asking “How will Docker solve all the problems of development environments.”


This use case is a development machine on a VM.

Docker driven development is generally code on your host and run the app in a container. Considering their git pull takes 20 minutes, it makes sense to go the VM route.


Now it will be pretty easy to bring back those pesky remote workers to the offices. They will just tweak some code on the backend and poof, the latency of your home connection skyrockets to the point of typing becoming impossible. Luckily there won’t be such an issue with the office internet! I guess we’ll have to come back to the office to keep being productive in this web-based IDE :^)


I have some of the worst home wifi possible, and have been comfortably developing from Codespaces for over 6 months :)

Editing happens locally in the client (and is synced to the service in parallel), and so you shouldn’t notice any latency when typing.


Is this true even when you are using Emacs/Vim? I know vscode has server mode and emacs has `emacs --daemon`.


I'm not a developer, but I thought consistent development environments were something containers were supposed to solve?


Codespaces seems like magic. I've been using it for personal projects, and it is fantastic to do simple edits in a browser on the go.

That's not the intended use case, though. The actual value is where you get to scale up and automate what used to be only on local workstations.

The nightly builds with up-to-date dependencies and pre-pulled code get you going faster. Having a shared image encourages people to share all those local scripts that make things work, where they may have just left them in a local `bin/` before.

Onboarding scripts diverge and fragment workstations, because employees who have been there longer ran old versions and never got the new updates. This lets everyone use the same up-to-date tools together.

I'm excited to see where Github takes this. Tons of possibilities in now using Codespaces to create "local" environments composed of multiple machines.


Unpopular opinion with a lot of my peers, but a "fragile local dev environment" is usually a sign of an unwieldy, tightly coupled codebase with a lot of parts, scripts, etc. Trying to hide that complexity with Docker or Codespaces is, in many cases, just a band-aid vs. dealing with the root issues.


For an org the size of GitHub, it’s not unreasonable that a local dev environment is difficult to accomplish


I think a lot of times the reason local dev environments are fragile isn't because they're local, it's because they are an afterthought and not a lot of effort goes into making them good and stable:

(1) On a team of developers, there usually isn't someone who is designated to be responsible for designing and maintaining a good dev environment.

(2) Many devs don't have the expertise to do it. Build tools, operating systems, dependencies, and configuration management are a related but different skillset than coding.

(3) Many devs don't want to work on it because they enjoy coding more. Just because you enjoy one type of work doesn't mean you enjoy the related work. (This also applies to writing documentation.)

(4) Fixing / improving the dev environment is not rewarded. You need to get X feature done by Y date, and fixing the dev environment may slow you down. It provides a long-term benefit, but not one that is easily measurable, plus that benefit is off the radar when evaluating the contributions of someone whose primary role is coding.

With this cloud environment, they've separated those responsibilities out and given them to a specialized team dedicated to it. IMHO, some of the success is probably due to the organizational structure, not just the technical differences.


In my experience, even with simple Flask projects, I've found just the difference between Ubuntu and Arch to be irritating:

1. Ubuntu seems to have trouble with psycopg2 while Arch does not, so I often use psycopg2-binary just so Ubuntu developers can easily install the requirements (learned the hard way).

2. Some older (still-supported) editions of Ubuntu create extra lines in `pip freeze`, including `pkg-resources==0.0.0`, which throws an error when installing on nearly any other machine (including cloud versions of Ubuntu in GitHub Actions).
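
The usual workaround for the second issue is to filter the spurious line out when freezing. A sketch using simulated freeze output (the package versions below are made up; in real use, pipe `pip freeze` straight into the `grep`):

```shell
# Simulated `pip freeze` output on an affected Ubuntu machine
cat > frozen.txt <<'EOF'
Flask==2.0.1
pkg-resources==0.0.0
psycopg2-binary==2.9.1
EOF

# Filter out the bogus Ubuntu-only entry before committing requirements
grep -v "pkg-resources" frozen.txt > requirements.txt
cat requirements.txt
```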

I have an application that uses docker-compose and has Elasticsearch as an image. In order to run it, I have to raise the kernel's vm.max_map_count limit. But there are different ways of doing this on Windows vs. Linux.
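
For reference, the setting Elasticsearch wants is `vm.max_map_count` (its documented minimum is 262144); on Linux it's one command, while Docker Desktop on Windows needs it applied inside the WSL2/VM layer instead:

```shell
# One-off, resets on reboot (needs root)
sudo sysctl -w vm.max_map_count=262144

# Persistent across reboots
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system
```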

Differences between environments are legion. Unless a company wants to just send standardized laptops to every developer, it seems way better to spin up a cloud VM that is already set up. Plus, there's easier reproducibility in a cloud VM vs. a physical machine (unless you want to reach for something like NixOS, but hell, what company is equipped for that yet?).


> difference between Ubuntu and Arch

The difference between Debian and Ubuntu is enough to be a nuisance.

The Dell XPS laptop my employer provided cannot boot Debian, and I'm stuck with Ubuntu. Same packages, maybe a version or two more recent, right? Nope. Aside from the egregious transparent snap installs when using apt (which can't uninstall snaps, thanks Canonical), there are a lot of minute details that amount to quite a time loss (that and having a laptop with no USB/RJ45/HDMI ports). Simple scripts that work on one OS and not the other, font scaling and rendering making my standard 1080p screen unreadable, absurd default configuration that makes WiFi unusable, etc.

Glad I don't have to manage the same oddities for the actual work I do. Everything is simple once it's all Dockerized.

But that's not true either. I had a different problem setting up the repository with every developer, and they were all using the same OS. No repo in sources.list (how?), no docker group created when re/installing the docker package (why?), all kinds of weird stuff happening.

So you tell me there's a way to have all devs working and testing on the same environment? That's great! But it is _not_ worth handing over everything to Microsoft or any other third party.

I'll never trust GitHub, GitLab, or any other forge with business-critical stuff. Host your own code, commit your dependencies, have your build script work locally. I've deployed through Git{Hub,Lab} outages and the left-pad fiasco, and the cost was insignificant. Host a mirror there for cheap if you want; that's what I do for my public projects (but after the Copilot debacle, how can I trust GitHub with anything?).

And what if you're a remote team? You can't rely on the internet. My ISP was kind enough to remind me of this when I got my very first outage with them one hour before starting at my current company.

As valuable a product as GitHub is and Codespaces may be, it is a step closer to the Minitel 2.0, where everything is centralized and controlled by a single entity. It's not the internet; it's MSN at best.


I'm totally with you here. Like, ideally you should be able to clone, install packages, and then do `npm run start` or `python manage.py runserver` or whatever, and the result should be a mostly functioning web app running locally. If it needs certain attached services, they should be 1) as standard/boring as possible (e.g. SQL, so you can have mocked versions of them for tests), and 2) added sparingly, in only well-motivated situations (e.g. do you really need that caching DB before you've even profiled real traffic?)

IMO investing time to keep applications "normal" like this pays off in the long run. It keeps development closer to "greenfield", which causes a general multiplying effect. Tutorials are slightly more likely to work, 3rd party packages will be quicker to integrate, etc etc.

(Note that this is separate from dev/prod parity, which is also good.)


This has not been my experience. First, I work on lots of code bases and this is where a lot of the brittleness comes from. They don’t use the same versions of the same things. Second, C dependencies are a giant pain in the ass when you add the first problem above.


Meanwhile, on BSD UNIX derivatives, I can 'make world' since the dawn of time...


You still have to grab the dependencies before running make, some of which you'll grab from apt/yum/chocolatey/macports, some you'll need to compile manually and put in a platform-dependent location, ...

Then you can run `make world`. Of course, BSD make is not completely compatible with GNU make...


OpenBSD and FreeBSD at least can compile themselves from /usr/src on a default install, fresh out of the box with zero additional packages.


"It can compile on a single platform from curated sources" is a very easy requirement to meet.


Ah yes, then there is the third big complexity. Some devs are on Mac, some windows, some Linux, and a few rare people on BSD.


It depends on how exactly the tools are used. If you're using docker for reproducible builds with locked-in/platform specific versions of dependencies that need to work seamlessly across platforms then that's pretty good. If you NEED docker/codespaces/etc.. because in order to do basic development and run tests you need N servers running with some specific state, duct taped with non-idempotent bash scripts then this type of stuff is just piling on tech debt.


so at what point are we? Embrace? Extend? or Extinguish? https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...


Extend. [0] Microsoft has already accepted Linux into the Windows developer ecosystem (Embrace) and has integrated it into WSL. They added NVIDIA GPU drivers that only work on WSL (Extend).

GitHub Copilot also only works on VSCode and not on other editors. Soon they will probably move Copilot to only work on Codespaces. All of this is in the 'Extend' side.

Guess what 'Extinguish' looks like?

They will make it all free, starving the competitors out, when everyone runs to the new Microsoft platform.

[0] https://developer.nvidia.com/cuda/wsl


Microsoft does anything: "It's EEE"

Google does anything: "Whatever happened to Don't be Evil!"


Seems like we're coming out of Embrace and entering Extend.


I was curious about the cost. The pricing and compute resources just about correspond to Azure A-series instances, but the RAM corresponds better to the v2 series, which is cheaper than v1. I wonder if they're upcharging for v2 machines.


These are also the costs for Teams/Enterprise (who would be willing to pay more); I wonder if they'll release a cheaper/free SKU for individual users.


Would love for them to have a free tier. Example: 25 hours free per month, just to play around.

This might be very resource intensive though.


One of the things I like about my local install of vscode is that it doesn’t burn cash. If I have to hop into a meeting, or take a lunch break, or take a shower, I don’t have to ask myself “Oh, did I forget to turn off my IDE?”.


I wish most people cared as much as you about not leaving random EC2 instances running.


I'd really appreciate it if GitHub provided a way to opt out of "deep" analytics (e.g., training Copilot on my open-source code), even if this reduces the number of features they provide for free.


Just a couple of years ago, some kid gained access to every repo on GitHub.

Now GitHub's code is going to be developed on an online platform, which works by sharing code others have uploaded.

I have a feeling that this is not a good recipe.


This is very true, and a risk with any cloud service. I.e., if you make an app and want redundancy, the code is exposed not only through your own security defects, former employees, etc., but also through each cloud provider you host on, such as AWS, GCP, Azure, DigitalOcean, etc. I think in practice the largest mitigation is the legal system, as it's safe to assume your code already exists in places you don't want it to.


>Just a couple of years ago some kid gained access to every repo on github.

Very curious to know more about this. What exactly are you talking about? Closest thing I could find is this[0], but you said "couple of years ago".

[0] https://www.zdnet.com/article/hacker-gains-access-to-a-small...


No, not that.

Another kid, got access to every single private github repo.

It was on Hacker News; he disclosed it, made a write-up about it, and got a pretty small bounty from GitHub. Google it; otherwise I guess the net was scrubbed of it. I follow him on Twitter, but yeah, just google more and you should find out all about it.

I was always surprised it wasn't a much bigger deal. Basically, since then I think any company (most that I work for) is clinically insane to upload its proprietary code to the site.

On a related note: wasn't it just a couple of weeks ago that people were concerned the Codespaces system is using other people's proprietary code? (I haven't kept up on that one, so I'm not sure if it's still an issue.)


Microsoft owns Github now. There is only really one way things will go from that point.


Are you suggesting this will be available on Xbox?


There's more to choosing an editor than convenience. I learned a great deal about embracing free/libre software, learning new tools, and becoming comfortable at the command line by having my CS professor push us towards emacs/vim in school.

Nowadays with LSP there's even less of a reason to lock yourself into an editor-as-a-service or a paid editor. If you really love VSCode or Codespaces, then more power to you, but I highly recommend giving something with a steeper learning curve a try for at least a few weeks to see what it's all about.


I literally never had this problem. Sure, I broke my system in order to install something new (e.g. I switched to Wayland + sway recently), but that's constrained to when I'm consciously installing something.

I can understand the use case of having to run 200 micro services or something too powerful for my machine.

I don't understand my local environment automagically becoming "brittle".

I assume things may break after an update on trash operating systems without an official package manager, like Windows or Mac, but other than that...


Is this the direction in which all software development is heading?


For the next 5 years until the fashion trend wears off.

We used to write COBOL using thin clients on mainframes. Then we moved to developing locally.


The team here is part of the company that markets Codespaces. They're not actually representative of other companies, so I wouldn't take this as a direction all software development would be taking.


No. You can achieve the same benefits of local reproducibility with Nix/Guix. Guix is more composable IMO because you can program multi-service development environments entirely in scheme.


Nix seems to have that ability via Disnix[1], though I haven't used it. Looking through the manual[2], it still looks like it's written in Nix.

[1] https://github.com/svanderburg/disnix

[2] http://hydra.nixos.org/job/disnix/disnix-trunk/tarball/lates...


Is it just me, or is VSCode dauntingly complex? There are so many configuration options and I often get lost in them.

I've been using Sublime Text for a decade and it has kept the scope creep to a minimum. I've also used JetBrains IDEs, and while there is a massive amount of complexity, it is well organized, I think. I just get anxiety from launching VSCode; maybe it is because I'm not familiar with it, I don't know.


I don't use any of the features of VSCode. I just use it as a simple text editor. I'm mainly doing yaml files, bash scripts, and Jenkins pipelines, so I don't need any fancy features. I just "alias vs='code .'" and then "vs" in any directory I have code. It's just a fancy text editor to me that also shows git statuses as well which is nice.


The configuration is complex, but in reality you don't need to touch any of the configuration for most development workflows.

The configuration also syncs itself if you login with GitHub, so you're not going to keep having to do this or juggle files around.

In reality it's not that different from juggling around dotfiles and the various configurations that go into other software.


In Sublime Text I needed to config things very often.

In VSCode (VSCodium to be more specific) I very rarely need to configure things.

From my dotfiles git log: two config changes this year so far.


Imagine the amount of telemetry they will collect from their engineers! And of course the Big Data team will start working on identifying the “slackers” ;)


Honestly I see no drawbacks in using someone else’s free power and network if your workflow allows it. It’s amazing to run npm installs on a 2Gbps connection without so much as a % increase of your CPU.

What kills it for many people, though, is not being able to access files from the build. Does anyone know how I could mount the "dist" folder locally or somehow sync it? If I can do that, I'm set.


Why not let employees use the tool they believe makes them most productive? Is Microsoft sliding back into being what we knew it was?


Does Codespaces support offline development? Is an Internet connection now required 100% of the time to write code?


It's still a git repo in the end, so you can use all the usual tools to work with it locally.

You can also use https://code.visualstudio.com/docs/remote/containers to run an identical container locally to reproduce the environment, and then connect to it.


locked-in to visual studio code? no thanks


The article mentions supporting Vim and Emacs, which probably means any editor can be supported.

  Visual Studio Code is great. It’s the primary tool GitHub.com engineers use to interface with codespaces. But asking our Vim and Emacs users to commit to a graphical editor is less great. If Codespaces was our future, we had to bring everyone along.

  Happily, we could support our shell-based colleagues through a simple update to our prebuilt image which initializes sshd with our GitHub public keys, opens port 22, and forwards the port out of the codespace.


Sounds like terminal editors can be supported but not alternative graphical editors like Sublime or IntelliJ.


Could just use sshfs with any of those editors


I think terminal editors are supported by allowing an SSH tunnel, and both Sublime and the IntelliJ suite have plugins to allow you to use them against remote systems over ssh


> The article mentions supporting vim and emac, which probably means any editor can be supported.

I read that as you can ssh in and run a terminal-based editor, but that doesn't seem to offer much support for people looking to edit in NEdit or Intellij or Atom or Sublime or whatever else. Maybe some of those could be supported by forwarding an XSession out of the image, but that's going to suck hard for people on macOS whose editor is now the Linux/X version, with all of the attendant mismatch.


I would claim that if you can SSH in, you can do anything, including exposing a network share over ssh.


> support our shell-based colleagues

What do vim and emacs have to do with the shell?

This is another case of people thinking "shell" is synonymous with "TUI", which is false.



From the article, it looks like future support for other IDEs is definitely possible:

"Visual Studio Code is great. It’s the primary tool GitHub.com engineers use to interface with codespaces. But asking our Vim and Emacs users to commit to a graphical editor is less great. If Codespaces was our future, we had to bring everyone along.

Happily, we could support our shell-based colleagues through a simple update to our prebuilt image which initializes sshd with our GitHub public keys, opens port 22, and forwards the port out of the codespace.

From there, GitHub engineers can run Vim, Emacs, or even ed if they so desire."


Yeah, love the dig at terminal editors with "ed". Anyone using one of those must be stuck back in the 70s on a 16-bit minicomputer terminal.


They mention exposing ssh access so their team can remote in from tools of their choice.


Seems like the next step is to avoid downloading the full repository on every machine. If you had it stored on a few machines in the same DC (keep hot files in memory), you could just fetch things on demand with little latency penalty.

Though a lot of people seem to have issues with FUSE filesystems.


It sounds like this use case is for developers who buy laptops expecting them to act like desktops and then realize their mistake.

Maybe the cheaper, simpler option is just to buy a performant desktop, for less money, that has wall current and the space for RAM and heat dissipation.


I forked out a silly amount of money on a laptop last year as I assumed I'd be working abroad for a while. I knew developing on a laptop would suck and I was proven right. I don't know why some people prefer it. (The M1s may change that today)


You’ll still have a different network (this is big at Facebook, where downloading repos from home takes a ridiculous amount of time), and you’re still locked down to a single machine’s resources, when working on your PC. This remote-first architecture also enables you to further break apart your workflow as needed. Need a dozen GPUs to test something real quick? Spin it up! Need to collaborate with someone on a PR? Invite them to your session (eg with vscode liveshare). Work on 10 branches in parallel (eg during code reviews)? Lease more instances!


Isn't Codespaces essentially what Koding was doing ~10 years ago, albeit with less freedom?


I used AWS Cloud9 for years, and for the last half a year I've been using Codespaces.

I don't like the one IDE per repo model, but I like VSCode more than Cloud9.

It would just be cooler if it would still work when I go offline. Working in a train is simply no fun.


Just a very minor nitpick: their first link to Codespaces is https, while the second one is http for the same page. (I was wondering why FF shows the first and second URLs differently.)

Hope there's an automated checker to catch these things.


> Over the past months, we’ve left our macOS model behind and moved to Codespaces for the majority of GitHub.com development.

The article never explains what "macOS model" actually means. Could someone enlighten me?


I see a bright future for this on some types of teams.

How does the billing work when I'm not using Codespaces? Does the VM stop automatically after some time idling or do I have to manually pause it to stop billing?


Good luck using it for anything GUI related that isn't a Web browser.


No serious engineering team would use this, and if GitHub has decided to do it, then count me out.

GitHub has the best part of 10 critical outages every month. It's a surprise when it doesn't go down.


> The GitHub.com repository is almost 13 GB on disk; simply cloning the repository takes 20 minutes.

Is it possible to get a 10 GbE link to GitHub? That would bring the clone time down to about 10 seconds.
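The back-of-envelope math roughly checks out, ignoring protocol overhead and server-side pack time:

```python
# 13 GB repo over a saturated 10 Gb/s link (decimal units throughout)
repo_gigabytes = 13
link_gigabits_per_second = 10

# gigabytes -> gigabits, then divide by the line rate
seconds = repo_gigabytes * 8 / link_gigabits_per_second
print(seconds)  # -> 10.4
```

In practice the bottleneck for a clone that size is usually the server assembling the packfile, not the wire.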


> Blazing fast cloud developer environments

I have a blazing fast development computer already. Also, it only takes a few seconds to push changes to a repo.

Maybe if I were highly mobile, but otherwise, this is a nope.

Also, IP much?


Many of those problems could be solved with a smarter git clone just like they did in this case. Is there any work on something like that that could be upstreamed to git itself?
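There is: upstream git has since grown partial-clone support (`--filter=blob:none`, available since git 2.19), which downloads commits and trees up front and fetches blobs lazily. A sketch of building that invocation from a script; the repository URL is just a placeholder:

```python
import subprocess


def blobless_clone_cmd(url: str, dest: str) -> list:
    """Build a partial-clone command that defers blob downloads until
    checkout or on-demand access (needs git >= 2.19 on the client and
    partial-clone support on the server)."""
    return ["git", "clone", "--filter=blob:none", url, dest]


cmd = blobless_clone_cmd("https://github.com/example/repo.git", "repo")
print(cmd)
# To actually run it: subprocess.run(cmd, check=True)
```

`--depth=1` shallow clones solve a related problem (truncating history) but break some workflows; the blobless filter keeps full history metadata.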


For the 16 GB tier: 8 hours/day * 20 days comes to about $115 per month.

Well, if a company pays, then maybe it's attractive for some. But not for those who love the real power of a real IDE :)
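The arithmetic behind that figure, assuming an hourly rate of roughly $0.72 for the 8-core/16 GB machine type (the rate is my assumption, not stated in the comment):

```python
# Assumed rate: ~$0.72/hour for the 8-core / 16 GB tier
hours_per_month = 8 * 20        # 8 h/day, 20 working days
hourly_rate_usd = 0.72

monthly_cost = hours_per_month * hourly_rate_usd
print(round(monthly_cost, 2))  # -> 115.2
```

Note this bills only active hours; an always-on VM of similar size would cost several times more.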


JetBrains Projector support for this would be fantastic.


If you create a Codespace, then install Projector, you can securely forward its server port and then access it from your browser: https://docs.github.com/en/codespaces/developing-in-codespac....


Awesome, might give this a try then. Never did use codespaces before - this would be killer.


Can any of the github engineers using Rust comment on their experiences thus far? How has Codespaces affected compile times?


For development, what's the main advantage of this over sshfs-ing into an instance and using PHPStorm for example?


The pushback is confusing. If this approach continues to gain adoption, a self-deployable open-source clone is inevitable.


This article does an excellent job motivating the problem! Not sure if this is the solution, but I'm very curious.


I'm impressed at how much IDE-as-a-Service has advanced in the past few years, a lot of other students at my university do all their programming assignments exclusively via repl.it. I wonder if they'll become the norm and basically everyone will just be writing code in browsers, save for a few lone greybeards still running their ancient bespoke Emacs setups locally[0].

[0]: https://xkcd.com/1782/


Does Codespaces only work if you plan to deploy your production workload to Azure?


Hasn't Github gone down a lot this year? What do devs do when that happens?


> How far we have come from the hand oiling of early motorcycles is indicated by the fact that some of the current Mercedes models do not even have a dipstick. This serves nicely as an index of the shift in our relationship to machines. If the oil level should get low, there is a very general exhortation that appears on a screen: “Service Required.” Lubrication has been recast, for the user, in the frictionless terms of the electronic device. In those terms, lubrication has no rationale, and ceases to be an object of active concern for anyone but the service technician. In a sense, this increases the freedom of the Mercedes user. He has gained a kind of independence by not having to futz around with dipsticks and dirty rags.

> But in another sense, it makes him more dependent. The burden of paying attention to his oil level he has outsourced to another, and the price he pays for this disburdenment is that he is entangled in a more minute, all-embracing, one might almost say maternal relationship with . . . what? Not with the service technician at the dealership, at least not directly, as there are layers of bureaucracy that intervene. Between driver and service tech lie corporate entities to which we attribute personhood only in the legal sense, as an abstraction: the dealership that employs the technician; Daimler AG, Stuttgart, Germany, who hold the service plan and warranty on their balance sheet; and finally Mercedes shareholders, unknown to one another, who collectively dissipate the financial risk of your engine running low on oil. There are now layers of collectivized, absentee interest in your motor’s oil level, and no single person is responsible for it. If we understand this under the rubric of “globalization,” we see that the tentacles of that wondrous animal reach down into things that were once unambiguously our own: the amount of oil in a man’s crankcase.

> It used to be that, in addition to a dipstick, you had also a very crude interface, simpler but no different conceptually from the sophisticated interface of the new Mercedes. It was called an “idiot light.” One can be sure that the current system is not referred to in the Mercedes owner’s manual as the “idiot system,” as the harsh judgment carried by that term no longer makes any sense to us. By some inscrutable cultural logic, idiocy gets recast as something desirable.

> It is important to understand that there has been no “high-tech” development such that it is no longer important to stay on top of oil consumption and leakage. With enough miles, oil is still consumed and will still leak; running low on oil will still trash the motor. There is nothing magical about the Mercedes, though such a superstition is encouraged by the absence of a dipstick. The facts of physics have not changed; what has changed is the place of those facts in our consciousness, and therewith the basic character of material culture.

Shop Class as Soulcraft: An Inquiry Into the Value of Work by Matthew B. Crawford


Could use ssh -X to run any program/editor on the Codespaces VM.


IIUC, Facebook has been doing this for a long time with devservers.


Assuming it auto-detects idle time?

Could be a good alternative to buying a new MacBook.


Yes, it does. My environment is usually stopped after a break or a boring meeting.


We will rue any excitement for, and advocacy of, this product.


> and have a local instance of GitHub.com running in a half-day’s time.

This is totally separate from the gist of this post, but I personally think companies should optimize this number into the ground. It would be awesome to join a microservices company that has an install/running time of 15 minutes. If some company had that level of speed, it would mean developing new features is probably pretty fast too. Not necessarily, but one of the biggest obstacles I think most developers face is not running the entire stack locally. Instead in a lot of companies they commit/push things to a branch and tell their CI/CD to build it out. While that might be useful in some companies, I think the ultimate awesome sauce for a development team is entire product running and editable locally.


"The wifi is down" is going to be a new level of craziness now.


Does this mean all the engineers are forced to VS Code now?


No, the article pointed out that vim and Emacs users can ssh into the hosts and edit code there. No VS Code use required


New developers will get in using the VS Code web IDE and have no easy way to switch.


There are a lot more IDEs than VSCode, Vim, and Emacs.


No doubt. I use JetBrains' suite of tools and I wonder how this squares up with Codespaces


Nuke with style lol: --nuke-from-orbit


wow, give up control of what tools i can integrate into my development life? sign me up!


well I use vi for development, this seems visual studio code only


What is codespaces?


The article describes what it is and links to more info about it.


It describes it only if you already know what it is. One sentence at the top of the post would have been enough. It's called "writing."


Did we read the same article?

> Today, GitHub is making Codespaces available to Team and Enterprise Cloud plans on github.com. Codespaces provides software teams a faster, more collaborative development environment in the cloud. Read more on our Codespaces page.

Links to https://github.com/features/codespaces

It's called "reading"


I can tell you don't read scientific papers.



