Can we trust Microsoft with open source? - https://news.ycombinator.com/item?id=28968231 - Oct 2021 (223 comments)
The Azure side probably doesn't care about selling Visual Studio, but they care about developer mindshare and reputation. The Visual Studio side seems to be in a more difficult position; I assumed they could just live off the enterprise/everyone-else split and focus on enterprise-y stuff to keep selling Visual Studio. But it looks a bit to me like VS Code and the .NET CLI have become more of a competitor than they'd like.
And the worst mistake here might not have been pissing off the .NET community, but pissing off the people working on .NET at Microsoft. In the end this amounts to the same thing, but alienating the people working on .NET would result in a much more thorough destruction of trust with the community in the end.
But I have zero inside knowledge here, might just be weird decisions driven by internal politics or whatever.
VS has no place anymore. The velocity and mindshare are with VS Code. VS with its visual designers had its place, but desktop is dead and Xamarin competes with frameworks that don't require costly IDEs.
There is a place for a featureful IDE with robustly implemented build, debug, and package management capabilities working out of the box with multiple languages and entrenched technologies.
VSCode is great. However, there is plenty of space for VS: it is also very strong, and has a long history of deep and extensive integration into very mature technologies. VS is still getting better year over year and while I see VSCode as competitive in some spaces, it is no contest in others. If VSCode is to replace VS, it has a very long tail of issues to address, the resolution of which would probably raise both boats anyways.
But what I didn't know until this recent debacle is that there is a theory that the reason the .NET tooling in VSCode is so bad is the same reason we had this watch debacle. There was some discussion on that in the other HN thread: https://news.ycombinator.com/item?id=28968231
For a while there wasn't really another way to run SQL projects outside of VS. Data Studio recently got support for that though.
In VS the Solution Explorer lists Project Items, not files. Eg it lists DLL References. This means the entire tree view goes through the IVsProject interfaces and you open Solutions and Projects, not folders.
VS Code works on files only and doesn't have a project system that can specify file nesting rules. The simple ask in the PR could be implemented, but it seems arbitrary and other languages will ask for their own rules. (Vue?)
What doesn't feel so nice is writing C/C++, as IntelliSense does not help as much as it does for other languages.
I've only ever used VS for hobby/side projects, and even 10 years ago it was leaps and bounds better than what I have to use for my professional day to day work now (code completion, debugger are the two things that I miss basically every day).
The tools that I use now have these features, but they're such a joke in comparison. The code completion has no notion of "code", it's just looking for similar words.
VS Code still has good code completion/etc but it doesn't seem able to match the instant responsiveness of VS.
For C/C++ on Windows, well, you have VS Code and also JetBrains CLion, but IMO CLion is surprisingly rougher than Rider, even though it's older. You can get stuff done though.
Since I know it and am familiar with it, I make my employer pay for a commercial license at work. In the grand scheme of things, it is not that expensive in a commercial setting.
VS Code is great, but I do not think it is a comprehensive replacement for Visual Studio proper when doing full stack .NET development.
Though it seems MS actively disallows employees from contributing to them.
If you do GUI work, you pretty much have to use VS. :-/
As others have already pointed out, OmniSharp is "good enough", but what's built into Visual Studio is still a lot better.
It's usable for sure, but development is not as fast as in Visual Studio.
I still use VS Code if I want to modify some files like XML, json, yaml, and I don't want to fire another instance of Visual Studio for that.
Even with extensions, VS Code feels like a text editor.
My only gripe is that it doesn't support remote development, but that's in the works.
Attaching a price to your ability to onboard a language with your favorite workflow changes how you view that language and the motives of its maintainers when compared to the alternatives for that platform. Java has no such barrier to adoption on Linux, for example, because IntelliJ happens to have a community edition.
Are you paid for your work? Why shouldn't the developers of development tools be?
For the non-web world specifically, the part that was Windows Server and Windows Desktop is simply a dead end outside of niches. And within the niches, comparisons are not all that relevant since... they are niches. If you have a specific job for a specific tool, then trying to compare that with something that does not meet those specifics isn't all that helpful.
Most of my C/C++ work that remains doesn't even target windows anymore since there is no purpose for it. The super small subset that does is just things like device drivers, and that's more a property of the OS than of the project itself.
Speaking of which, I feel the need to rant a bit: developing for Windows using Qt is much nicer than with any MS framework. MFC is a nightmare. Windows Forms was deprecated in favor of WPF, which no one cares about. From C# you can't use DirectX or Vulkan with ease.
Maybe MAUI will bring a better experience. But they need to do something for C/C++, too. Maybe buy the rights to use Qt and integrate it with Visual Studio if they don't want to develop a good framework. Or buy the framework from Embarcadero (the one used by C++ Builder). But don't force people into the MFC mess. I presume that even their own developers hate MFC with a passion.
Winapi was ok in the 80s. MFC was ok in the 90s, but we are in the 2020s.
But that doesn't make the tool a great tool in absolute terms. That is also the problem with this type of comparison, some people come up with arguments that are tangential at best. If you use literally anything else (anything that is not winapi, win32, forms, mfc, wpf or some legacy xaml) then Visual Studio is just a limited experience at best, and a steaming pile of crap in most cases.
This goes for more software obviously, if you want to write C# but try to do that in Xcode, you're going to have a bad time. Same for when you need to write a Kubernetes controller in Go, that's going to suck really badly in VS or XC.
There are a few remaining systems that really benefit from unmanaged languages and strong OS-integrated tooling (the niches that were mentioned), but the mass development practises going on today are basically non-desktop and specifically non-windows-desktop. This means that a tool that was designed to be specifically for windows-desktop (or macOS-desktop for that matter) is unlikely to be optimised for anything else.
Windows Desktop as-is might not be a niche, but building local native desktop applications is. Even if you target Windows Desktop right now for a new application, it's likely that it's going to be some crappy CEF/Electron thing. And yes, that's crappy, but it also means you get to use much more of the knowledge/mindshare/community that is out there which is bigger than all desktops combined.
CLion is getting there, it's not quite as good yet but was still much better than VSCode.
Presumably you are not using anything built on Eclipse, JetBrains, or NetBeans with any language they support? (PHP, Java, Kotlin, Python)
Because out of the box, without any extras, all three beat Visual Studio easily once you start doing anything advanced except GUI work, and NetBeans had a reasonable story to sell even there.
On refactoring the story is if not night and day then at least dusk and broad daylight.
Similar to what is going on here with Microsoft, they refuse to support what Eclipse and NetBeans do for free, as a means to sell CLion licenses, and then make you run two IDEs in parallel.
That said, for most .Net devs I know the first thing they do after installing Visual Studio is installing Resharper just to get it up to the same level that IntelliJ (including the open source community edition), NetBeans and Eclipse (both open source) provide out of the box.
It's not a replacement for VS itself in that regard, but the activities that used to be bound to that IDE moved on over the years.
Whenever I have to load some old project into VS, it feels old, slow, and clunky compared to other tools. The limitations on on-disk structure, metadata, building, linking, etc. are also very annoying for projects that ended up having more than just generic Windows Desktop targets (or even if they are highly specific editions/versions). I suppose that might be because tooling outside the 'write-debug-release' chain has moved to broader and more interchangeable concepts, while VS has remained basically the same for the last decade.
I can only guess, but I think Big Scott, lesser Scott, and Julia will have a meeting soon. The .NET community is at a boiling point, and they should really avoid pushing a community to take dev productivity into its own hands. Because that is the guaranteed end of Visual Studio.
sounds like .NET is almost the kind of thing I'd use then, because in a world without Visual Studio, I might not be punished for choosing to use .NET but not VS
I was amazed they'd try something new like cloud. I thought they would stick to the desktop and MS Office until someone snatched them from their cold dead hands. But Ballmer went and Nadella came. I still think that the company has a lot of Ballmers hidden in a lot of places, waiting for the opportunity to pull the brakes if they sniff that something interesting might happen.
People keep saying this, but they obviously have no idea. Desktop isn’t dead and nowhere near it. Just because web and mobile app developers think so doesn’t make it true.
There are several industries in which desktop applications are a must. Any application that needs multiple windows automatically rules out web and mobile stacks, even if they happen to run on desktops.
I would say most industries. Anything that requires creating any kind of digital content (images, videos, CAD models, electronic schematics, chip design, PCB layout, etc), most of the scientific software, all rely on the computational power and the speed of the desktop applications.
While there are industries that still use local compute and local rendering, that is exactly what it's about: industries. You might have image and video and audio manipulation. There could be hardware control. Maybe there is 'appliance'-like functionality such as POS and vending systems. But those aren't really the mass-desktop scenario that it used to be. That is mainly 'work' usage.
There is this section that you can carve out that does still exist in the traditional form and that is gaming. But that essentially turns the 'desktop' into a gaming console.
Legacy configurations that require things like an actual mouse pointer and multiple windows (or that dreaded MDI document-window-in-a-window interface) are generally left in two categories:
1. bad implementations
2. niche implementations
The first one means investment to fix, which is generally not going to happen if there is no commercial incentive for a commercial piece of software. The second one is a niche and doesn't represent desktops in general.
It would be more precise to say: Desktop business apps are dead. That is almost 100% true. Some lingering ghosts still exist.
While the backend is nice, that React UI feels like crap compared to the old UI. The only reason we are using a React UI is that "it's modern".
Not every app should be a web app.
Can you elaborate? The last time I looked into this (including a little just now) showed that, yes, you can create multiple windows, but there are no good solutions for communicating between the windows. Electron's own documentation shows two options: local storage and Electron's IPC mechanism. From what I've read, IPC is not good for more complex multiple-window apps. So one needs to use a local network communication method.
If this is all the case, I would hardly consider it full support.
VS is on a whole other level than VSCode when it comes to C#. I suspect that many people here haven't really used the IDE features and VS' debugger for more than simple breaking and inspecting a variable.
Yes, if you have a hobby project, you can use VSCode instead of VS Community Edition just fine. But if you are dealing with concurrency errors, performance analysis, dump analysis, and very, very large projects, you can't use VSCode. VS' GUI is also a lot more flexible than VSCode's. In VS I can have a lot more information present, where in VSCode I need to constantly switch windows.
That's an exaggeration at best.
VS with extensions like Roslynator makes C# development really good.
VS Code is nice, but I always felt like VS had significantly more reliable Intellisense than VS Code (for C#).
but I wish VS (the real one, not VS for Mac) worked on Linux.
I use the community version. VS Code, while being extremely good, is still far behind VS. I love using VS, as a solo developer. The integration it has WRT .NET projects is amazing.
But VS Code is far more just a text editor and that you can run and debug many languages in it in my opinion means it meets the criteria of an IDE.
If I was forced to switch from VS to VS Code I would be miserable and feel handicapped, but infinitely more productive than just having a text editor and the dotnet CLI.
So VS Code is absolutely a “replacement” for VS, but only for a poor man with a very limited set of required features.
All the above being said, I don’t ever see VS Code truly replacing VS, but it’s a competent IDE considering its both cross platform and free as in beer.
And when .NET MAUI ships, I guess we will have a visual designer for that.
There are endless features in VS that VSCode will never get.
I tend to disregard leetcode interviews anyway, unless I am on a deep need for a job without alternatives.
Same for 'what was your first computer', there is a whole generation (or maybe two of them at this point) that started out with smartphones and didn't get to the 'using a desktop' level until much later when preparation for work life required it.
Maybe Satya should keep an eye on her.
While VS is my daily driver and I prefer it to Code and Rider, I have no idea how they make a profit. _No one_ buys the retail sticker-priced SKUs; you either go through MSDN or the Microsoft Partner Network and pay a few hundred € for access to basically everything Microsoft puts out.
Microsoft is a big company with changing microcultures all the time. The company as a whole should not take sole credit for its open-source endeavors. In general, big contributor names should be emphasized more, so that we know when they leave Microsoft or the project.
I really dislike this mindset. Sure, it's disappointing that some pieces of VSCode aren't open-sourced, but man, they open-sourced an entire IDE and plugin ecosystem. That's such a fantastic contribution to the open source community, and yet I see comments like yours that treat it as if it's negligible and somehow want even more? Why is it that, say, IntelliJ never has to deal with this level of scrutiny? The work MS did even includes a bunch of work on existing plugins they don't even own - I know this because my plugin was one which they got their engineers to do a couple months of volunteer work on.
If MS wants to retain a few small pieces of tech to pay the engineers that make a fantastic IDE and ecosystem free and open source for everyone, so be it. You say "making money" as if it's some great evil - how do you think all those open source engineers get paid?
Things will get ugly sooner or later, to the dev industry’s detriment. Insisting that one of the biggest tech companies on earth, with a history of abusing its power, go further than smaller companies is not unfair. In order to retain trust, Microsoft needs to put restraints on its own behavior and be open about it.
I think you call it commoditize your complements.
What stops you or anyone else from reimplementing those parts?
Also, sticking to existing APIs is an inherent disadvantage when Microsoft can add whatever APIs meet their exact needs. Nobody could build Remote Development or Live Share until MS did, because the APIs didn't exist.
JetBrains makes a free open source product and has a paid product with extra features.
MS makes a free open source product and has a paid product with more features (though with a different code base).
Somebody built awesome software and dares to charge money for it? Bring on the tar and feathers!
The VS Code behavior you can accept as an open-core or commercial-plugin thing. The other thing was just theft.
Their GitHub acquisition, VSCode, and Codespaces are clearly moat-building towards making GitHub yet another version of Office 365, where you have to pay $$$ for developer tools. GitHub's workflow already bears little resemblance to a "real" Git workflow, and in 7-10 more years I predict that they will try to marginalize it until you can't actually make commits unless you're using the web UI.
Everything Microsoft-related should be considered closed-source and avoided if you have interest in Open Source as more than just a license. The management at Microsoft has way too many non-technical people focused on "user engagement" and "mindshare" for them to think in terms of OSS principles. They are not evil – they just don't "get" it. So I would strongly suggest avoiding any developer-focused products from Microsoft for the next 10 years if cross-platform parity or Open Source are high on your list of values.
If not, that's fine too. I advise buying and using a Windows laptop for a first-class Microsoft Developer Experience.
Which is? Email lists? GitHub is much easier to deal with than scouring through email threads. It may work for Linux, GNU, and more, but it doesn’t for the majority of devs.
You know what else Git cooperates with better? Push, pull, rebase, squash, a whole world of branching strategies etc. GitHub's PR model is artificially gimped against many useful features of git, to the point where I suspect they actively hate users creating commits and want themselves to be the only ones who can make them. Their PR model actively fights any kind of commit-based review; to the point of erasing access to old, rewritten commits.
Even their diff model is proprietary, and inadequate. You can only diff individual files against previous pushes, and that too only if you manually marked each file as 'viewed'. Diff two tips (read: heads)? Forget about it. You can't have it. Do it the dumbed-down PR way, with a pileup of correction commits you can't squash properly, and be happy you were given the privilege.
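For contrast, plain git has no such restriction: any two commits can be diffed directly. A minimal sketch in a hypothetical throwaway repo (directory and file names invented), where two commits stand in for two pushed tips of a branch:

```shell
# Hypothetical throwaway repo; two commits stand in for two pushed versions.
git init -q diff-demo && cd diff-demo
git config user.email demo@example.com
git config user.name demo
echo one > f.txt
git add f.txt && git commit -qm "v1"
echo two > f.txt
git commit -qam "v2"
# Diff any two tips directly - no per-file "viewed" bookkeeping required:
git diff HEAD~1 HEAD
```

The same `git diff <tip-a> <tip-b>` works for any pair of refs, which is exactly the "diff two heads" operation the PR UI withholds.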
The PR interface actually exposes this for force-pushes, but its discoverability in the UI is horrible. It turns out that the "force-pushed" part of the little message in the GitHub UI is actually a link. This link points to the diff between the old and the new HEAD of the branch.
As an example you can look at this PR:
It has this little message somewhere down the page:
Monadic-Cat force-pushed the add-unwrap branch from e130dbe to 25235aa 4 months ago
If you then click "force-pushed" in that message you go to the "compare" page, which shows the diff between the two commits:
Disclaimer: I'm a Microsoft employee, but I don't work on GitHub. I'm a daily user of GitHub, though.
All the git operations you listed work perfectly fine locally on a repo from GitHub; it's only once you want to push to main that you have to go through the review gate, and that's configurable.
GitLab works mostly the same as GitHub, and for both of them you can configure the merge strategy and PR requirements per repo. The next competitor in line would be Gerrit, with a quite different approach based on single commits rather than branches and pull requests. In a way, though, their refs/for/master is very similar to a PR, except it doesn't have a branch name and you don't need to fork the repo first; under the hood you could say a patchset on a change is the same as a commit on a branch sent for PR.
In the end, they are all the same and you learn to work with it. None of them really fight against the core of git.
GitLab (I know it works, because I set up this workflow in my company). Before GitLab: Phabricator, Gerrit.
> All the git operations you listed work perfectly fine locally on a repo from github ...
Irrelevant. GitHub needs to be compatible with Git, not the other way around. This subthread began with the assertion that GitHub wants to break that dependence.
> Gitlab works mostly the same as Github
False. GitLab lets me compare different versions of the same MR against each other. Without extra work. It's right there in the MR diff UI. That's literally what I wanted.
> ... for both of them you can configure the merge-strategy and PR requirements per repo.
Some configuration allowed. Not the same kinds. GitLab's squash-and-merge strategy actually works, even lets me set the commit message! GitHub built their incomplete implementation of sth similar only recently, to catch up with GitLab, but also stopped half-way. That last bit is what gives rise to the suspicion that started this thread: that GitHub doesn't really want to be Git-compatible and doesn't care about Git. GitLab, OTOH, introduced this feature, a long time ago, because it saw people were already using Git this way. I couldn't accept doing development in my company without some kind of auto-squash support.
> which is more based on single commits rather than branches and pull-requests, but in a way their refs/for/master is very similar to a PR except it doesn't have a branch-name
Gerrit has branch names, I don't know what you're talking about. In fact, it has better support than GitHub. I can push a locally created branch and ask for the branch to be reviewed, all from the `git` CLI. I can even change the branch name, push the new branch, delete the old branch (on remote, from local `git` CLI) and all of my changes and their reviews remain intact! The equivalent on GitHub would require me to close my current PR and open a whole new one!
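The rename-without-losing-work part is plain git; what Gerrit adds is that reviews survive it. A minimal local sketch of the CLI side, using a bare repo as a stand-in remote (all directory and branch names invented):

```shell
# Hypothetical setup: a bare repo stands in for the remote.
git init -q --bare origin.git
git clone -q origin.git rename-demo && cd rename-demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"
git checkout -qb old-name
git push -q origin old-name
# Rename locally, publish the new name, delete the old one on the remote -
# all from the git CLI. On Gerrit, pending reviews would survive this.
git branch -m old-name new-name
git push -q origin new-name
git push -q origin --delete old-name
git ls-remote --heads origin
```

On GitHub the PR is bound to the branch name, so the equivalent sequence closes the PR and loses its review state.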
> and you don't need to fork the repo first
This is a good thing. `git clone` is already a fork. I shouldn't have to make two forks, one of them through some external interface, and another through the `git` CLI, just to be able to work on sth. Git's branches are already excellent, there's no need for yet another way to keep separate versions of a repo.
> but under the hood you could say a patchset on a change is the same as a commit on a branch sent for PR.
No. I think your understanding of Git derives almost entirely, if not predominantly, from GitHub. And this is exactly what we're talking about in this thread. GitHub would like to impose their way of doing things on everyone, irrespective of what better ways exist (and have existed since before GitHub was born).
In Gerrit, each change is a commit on a branch. There are versions of commits, each of which is a patchset. This is good and desirable, because it leads to cleaner branches. I like this, I want this, and I've been using this since before GitHub was launched.
GitHub's only supported way is to create a commit for every tiny edit, irrespective of whether it's significant enough to be enshrined forever in the final branch as an independent commit of its own. This adds noise. Buncha commits with just the message 'Typo'. Ugh. Imagine running into one of these in a git blame, months later. Imagine looking at a git log. Just ugh.
And what's the alternative with GitHub? Squash all commits in a PR into a single one. So now I can't have more than one significant commit in the same PR. Okay, fine, whatever; can I at least make one PR dependent on another one, like a commit that has a parent? Nope!
> In the end, they are all the same and you learn to work with it.
As I show above, they're not all the same. You can't learn to work with sth you need that's missing.
> None of them really fight against the core of git.
If I can just git push and git pull and get all my work done with just that, then yeah, none of them really fight against the core of git. But then why would I use them instead of a headless SSH server?
They all fight against Git, in different ways, when they decide to build custom UX for one workflow or the other. They're telling you what workflows they support. If your workflow can't fit into them, well then they can't do the job you want them to.
This is why both Gerrit and GitLab were built the way they are today: their users needed a certain workflow (feature) and they cared enough about those users that they grew to support such features. GitHub has ... other users ... it cares about more. Not the power users. But power users also are more likely to be defiant, to resist control.
Just a small remark that I personally much prefer the Gerrit model over GitHub's, both the UI and the git interaction :) I just don't see either as any big barrier or deviation from the git workflow, as long as you stay away from the GitHub client, of course.
This makes it one commit per PR. Undesirable.
> commit --fixup/--amend
I'm aware of these (and magit makes it seamless). The problem is with the force push that these necessitate.
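For reference, the fixup flow under discussion, sketched in a hypothetical throwaway repo (all names invented): the typo fix is recorded with `--fixup` and folded away with `--autosquash`, so the final history never shows a 'Typo' commit - but publishing the result does require the force push at issue here.

```shell
# Hypothetical repo demonstrating fixup commits folded away before review.
git init -q fixup-demo && cd fixup-demo
git config user.email demo@example.com
git config user.name demo
echo base > base.txt
git add base.txt && git commit -qm "base"
echo feature > feature.txt
git add feature.txt && git commit -qm "add feature"
# A later correction, recorded as a fixup instead of a noisy "Typo" commit:
echo feature-fixed > feature.txt
git commit -qa --fixup=HEAD
# Fold the fixup into "add feature"; the "editor" just accepts the todo list.
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root
git log --oneline
```

After the rebase, `git log` shows only "base" and "add feature"; the fixup has been absorbed, which is why pushing the cleaned branch needs `--force-with-lease`.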
> Force pushing ... it's perfectly fine
Not when it breaks the reviewers' process.
> In a way one could argue this is more git-native compared to Gerrits refs/for which does some server side processing of the Change-Id and whatnot to map it into one change.
It is that server-side mapping that allows comments and discussions on changes to carry over from one version to another. Unlike on GitHub, where I've had to re-raise the same comment after a force push. It sometimes feels like GitHub's treatment of force pushes makes it a dark UX pattern for slipping bugs past reviewers.
This problem is big enough that I sometimes simply can't review a branch on GH. Unlike on GitLab, where I can be sure my comments, even when outdated, won't simply vanish.
> Biggest problem here is Githubs UI for indicating force-pushes is an abysmal line in the comment-list, instead of a showing all major updates to the branch in one place and mapping the comments to each of them.
For a platform whose main selling point was "Look, pointy-clicky web interface!", this is a major problem. The PR discussions interface sometimes feels like it hates the humans interacting with it. You better not ever let it get too long!
Force pushes aren't the only thing it makes you hunt down, BTW. Got an old comment somewhere? Perhaps one that has been "outdated" by a force push (aside: this is server-side processing, BTW, and worse than Gerrit)? Well, good luck finding that in a sea of hidden comments.
> Just don't see either as any big barrier or deviation from git workflow,
'Git workflow' is not the same as 'GitHub workflow'. GitHub doesn't currently mind you using them for the former, but they'd much rather you do things the latter way.
But I agree with the thesis of your comment.
GitHub is a real hindrance for reviewing code, and the dumb practices its infantilised review system forces you into make it a nightmare to re-review code after check-in.
Thank you for your nice advice. I started following it 20 years ago (with some short but numerous tries to use Linux and BSD as desktops).
I use Windows desktops and laptops for all kind of development. Which is now mostly web microservices running under Kubernetes on Linux.
When I developed Android apps and multiplatform games (Android, iOS and Web), Windows was still my preferred development platform.
In hindsight, editor independent language IDE features driven via a client-server model are such an obvious idea - to the point that I wonder why it took so long for this model to emerge. It makes so much sense to build the IDE features a single time, ideally re-using parts of the compiler infrastructure.
By now a lot of language servers exist. They have various levels of quality, and purpose-built solutions like VS or the various JetBrains products are often markedly superior.
But many of them are more than good enough for a lot of developers, and they allow turning even Vim into a full-featured, powerful editing environment. (Neovim 0.5 has built-in LSP support)
This all started out with VS Code, Typescript, and the LSP specification, and Microsoft deserves a lot of credit for kicking off this trend.
But something that probably started out as a way to popularize VS Code and Typescript has turned into a movement that makes the chosen editor much less crucial and more of a commodity.
This almost certainly wasn't the intended outcome, or at least a side effect that wasn't anticipated.
Hence also the recent steps to counter this trend, like restricting the new Python LSP or this newest .NET drama.
Microsoft earned back a lot of good-will from developers with Typescript, VS Code, .net core, Github, npm, .... Many younger devs have lost the mistrust and disgust that was common in certain spheres not that long ago.
They probably feel confident enough now to shift gears and aim for lock-in and control.
I am implementing an LSP for a pet language, and the state of the documentation is quite shocking. When I first read the "spec", I thought surely I must only be at the intro/marketing brief since so many details are missing.
This is a JSON-RPC protocol, and the documentation for all the object types is written in TypeScript. There's nowhere you can go to see a list of all the possible JSON messages in JSON. And as you get farther into it, it feels less like a generally designed protocol and more like an interface to vscode in particular.
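For readers unfamiliar with the protocol being described: each LSP message is a JSON-RPC body prefixed by a `Content-Length` header, sent over the server's stdin/stdout. A minimal sketch of what one `initialize` request looks like on the wire (the params shown are a bare-bones subset of what the spec defines):

```shell
# Frame a minimal LSP "initialize" request the way the base protocol requires:
# a Content-Length header, a blank line, then the JSON-RPC body.
body='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"processId":null,"rootUri":null,"capabilities":{}}}'
printf 'Content-Length: %d\r\n\r\n%s' "${#body}" "$body"
```

Piping such framed messages into a language server binary is essentially all an editor client does; the hard part is the sprawl of message types that the TypeScript-only documentation describes.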
In general I wanted to use SublimeText and nvim as test benches for my LSP, but in the end I gave up because there were too many subtle gotchas when not developing against vscode itself using the base test project MS provides.
Don't get me wrong, I think LSP has been a great thing for the industry, but I suspect the actual goal was to commodify this type of editor plugin so other competing editors would not gain an advantage over vscode, and to do that in such a way that the result will generally be better on vscode.
On one hand, it'd be weird to expect MS to do everything for free.
On the other hand, the way they handled it (initially they made it free and OSS, and promised fanciness) is kinda poor.
On yet another hand, maybe they just really sucked at communication, priorities, and stuff this time? Hard to say.
On yet another hand #2, the .NET maintainers are really open about a lot of stuff and easy to talk to - be it ASP.NET or the Roslyn/compiler project - which makes me give them the benefit of the doubt.
That's a very odd way to say "modern developers don't use Visual Studio."
Then by that definition there are no IDEs.
This reads like an apology and a recognition that they made a mistake:
>We are always listening to our customers’ feedback to deliver on their needs. Thank you for making your feedback heard. We are sorry that we made so many community members upset via this change across many parameters including timing and execution.
>Our desire is to create an open and vibrant ecosystem for .NET. As is true with many companies, we are learning to balance the needs of OSS community and being a corporate sponsor for .NET. Sometimes we don’t get it right. When we don’t, the best we can do is learn from our mistakes and be better moving forward.
>Thank you for all of your feedback and your contributions over the years. We are committed to developing .NET in the open and look forward to continuing to work closely with the community.
> Scott Hunter
They knew they were going to lose some 'developer love' when they originally made the decision. They went ahead anyway. So, your framing isn't adequate.
What made them walk back is when they realised _how much_ they'd lose. Which happened only because lots of people were outraged and vocally spoke against it. Far more than the number of people they were expecting.
It'd be naive to believe they didn't know they were going to lose some, especially given how long they've kept up their charade of 'MS :hearts: OSS'. What's more likely, and supported by evidence of similar behaviour in other parts of their OSS charade, is that they thought they could get away with it this time too.
What's wrong with this arithmetic? Don't independent FOSS organizations have similar metrics when deciding on things? They want to progress while causing the least amount of grievance to the community, and when it exceeds expectations, they walk back?
For example, I hate Firefox's new tab UI; I think it's terrible, and one of the reasons I stopped using it. But, apparently I'm in the minority, so Mozilla Foundation is okay losing my love. Had the backlash exceeded Mozilla's expectations, wouldn't they have walked back? What's extra sinister about what Microsoft's doing here (besides other valid points of criticism)?
Math is never the problem; it's what you use it for.
> They want to progress while causing the least amount of grievance to the community ...
Except that's not what's happening here at all. Cannibalising a promised, existing OSS feature in favour of one's proprietary tool is not "progress". What caused them to back off wasn't a desire to minimise "grievance to the community", but harm to their self-image.
I'm not going to dignify your sidetrack - about another, actually open source product making UX decisions - with a reply in connection to this charade of OSS.
That wasn't enough. I'm sure they fought for it before the public did. It took the public calling out Microsoft's duplicity for them to about-face. I'm sure the insiders' voices helped, but they weren't adequate.
IOW, Microsoft will absolutely do it again, if they think they can slip it past the public. Outrage can't be a driving force for very long, and they know that.
Objectively, a good thing. Too bad it's useless, because the guests aren't gonna stay very long.
And the community should keep an eye on .NET development. If we want to keep the nice things we have, we should work towards it.
I'm so glad I'm not at Microsoft anymore. Her decisions might be great for Microsoft's profit margins, but my god does she hate people getting "stuff for free".
Whoever approved this in the first place has shown a shocking level of incompetence in understanding how the developer ecosystems have tilted in the last decade.
In the long term, such decisions will hurt profit margins. I wouldn't hire managers who only think about the short term in an attempt to enlarge their bonuses.
But it's not like everything is fine again. The Foundation is apparently a bit of a mess, and the C# debugger is STILL not available on VS Code versions built from source.
That, and the fact that they would ever even consider doing this, still discourages serious investment into .NET.
Search Algolia for lots of posts about the recent .NET Foundation kerfuffles.
Well, maybe it would have been avoided if the ability to review hadn't been blocked from the start - then the community would have noted it during code review.
(Obviously, no one believes this explanation, and we understand what the actual reason was. And thanks to all the Microsoft employees - hopefully anonymous, and staying anonymous as long as needed - who gave the real info to The Verge.)
So far, they have been more convincing in getting me back into .NET than in getting me back into Visual Studio; locking .NET features inside VS would just alienate me from both.
Doing .NET development is a very nice experience now, especially with the ability to do frontend web with Blazor and cross-platform mobile development with .NET MAUI.
.NET became usable for any kind of development besides low-level systems programming, where the garbage collector is not suitable. I would love to see the addition of AOT compilation, with the possibility of manual memory management when you need it.
I wonder how this decision was made _behind the scenes_.
Either some negotiation happened and the Visual Studio team got some concessions (smaller target?) or it was mandated from the top.
Neither alternative bodes well for the VS team.
>The Verge understands that the decision to remove the functionality from .NET 6 was made by Julia Liuson, the head of Microsoft’s developer division. Sources describe the move as a business-led decision, and it’s clear the company thought it would fly under the radar and not generate a backlash. Engineers at Microsoft that have worked on .NET for years with the open source community feel betrayed and fear the decision will have lasting effects on Microsoft’s open source efforts.
Reminds me of the people at Android that keep pushing big carrier SMS group chat protocols, where everyone can tell from miles afar they are just a blind caterpillar sensing their way along some local optimum gradient descent in whatever mismatched incentive hell their big corp job has landed them.
You saw the same crap with Office vs everything new.
it takes at least 3 minutes to start, the UI designer takes 30+ seconds to appear, and starting your process for debugging takes 10+ seconds
every single autocomplete takes a few seconds to appear
even opening a 100-line .c file takes 10+ seconds, and they KNOW it's bad because it pops up a dialog with a progress bar!
this is all on an Azure "cloud" instance with 8 cores, 64 GB of RAM and an SSD, with a clean install every 2 weeks
not to mention nearly all of the dialogs are as awful, and exactly the same, as they were in VS6 (e.g. run configurations)
and it's super expensive
meanwhile VSC is snappy and free, and the JetBrains IDEs of 2011 run rings around it
>even opening a 100 line .c
I'll add that I'm using it for C#.
There's something very wrong with your machine.
VS2022 is an amazing experience end-to-end.
as I stated, it's a high-end VM on Azure, using the official MS OS image
and it does a clean install every 2 weeks (completely clean registry + user profile), so there's no inherited crap/extensions
meanwhile VS Code and the JetBrains IDEs on the same VM are perfectly usable
If you have VisualAssist or Resharper installed, disable them and see the difference.
(note I said a file with a hundred lines of code, not project)
I have some moderately sized solutions that take forever to open and chug to a halt and start paging like crazy with unresponsive UI if I don't disable all the code inspections and Intellisense. This is on a brand-new i7 with 32GB of RAM and a NVME SSD. It pegs itself at that 32-bit process memory limit and thrashes. It's worse if it is a modern web app with a bunch of JS involved.
And so I've used Rider almost exclusively for the past fifteen months or so...
If you maintain an application with legacy form components that are only available in 32-bit, you now need to continue using VS2019.
Granted, who builds 32-bit in 2022 - but VS claims to support 32-bit, which it actually doesn't fully.
There is still a chance that they will fix this before release but I highly doubt it.
A step in the right direction.
Always remain vigilant.
This doesn't sound like the full truth to me...
If you're not familiar, there are two different ways to authenticate against the Graph API: either directly with the user's credentials, which is good for webapps and user-interactive desktop stuff, or with application credentials, which are suited to server-side workers, daemons, and the like. All the Graph APIs have weird hodgepodges of permissions that are allowed, which sometimes make sense (getting the current user's profile at /me) and often don't make any sense at all - they're just not supported because nobody bothered.
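A rough sketch of the difference, as the two standard OAuth2 token requests against Azure AD's v2.0 endpoint (placeholder values throughout, and not a complete flow - just the shape of each grant):

```python
# Token endpoint (tenant is a placeholder).
TOKEN_URL = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"

# 1. Delegated (user) auth: exchange an authorization code obtained from
#    an interactive sign-in. The resulting token acts *as the user*,
#    which is why /me-style endpoints make sense here.
delegated_request = {
    "grant_type": "authorization_code",
    "client_id": "<app-client-id>",
    "code": "<auth-code-from-user-sign-in>",
    "redirect_uri": "https://localhost/callback",
    "scope": "User.Read",
}

# 2. Application auth (client credentials): no user involved. The token
#    acts as the app itself, so /me is meaningless and only
#    application-type permissions apply - the hodgepodge mentioned above.
app_request = {
    "grant_type": "client_credentials",
    "client_id": "<app-client-id>",
    "client_secret": "<app-secret>",
    "scope": "https://graph.microsoft.com/.default",
}
```

The mismatch complained about here is that a given Graph endpoint may accept one grant type but simply never have had the other wired up.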
If you've got a lot of time on your hands, you can whine and open UserVoice items and rally other people up and lobby, and sometimes they'll get off their asses and do something. Or not, like some of the Teams presence and calling APIs that have been unimplemented for three or four years now...
If the only reason they walked back this decision is the backlash, they'll continue to try get away with whatever they can. Their culture obviously hasn't changed for the better.
Nothing about their strategy since has made me regret that decision.
Kind of sad to see them flailing about and screwing up, burning goodwill from VS Code and the like.
DevDiv still doesn’t get it, dinosaurs.
Is it fast, great, nice to use? Maybe -- but I won't ever use it because it's riddled with licenses and other bullshit I really don't care about.
I'll continue to use truly open source and free tools like Postgres and Elixir.
Ruby I think has managed to avoid very material conflicts.
Python hasn't, it's a comparative shit-show.
Java had -huge- problems in the past but was able to move past them and is stronger for it.
C# is in wait and see territory for me. I -really- love the language and tools but if I had to start a new company today I would probably go with JVM stack until this sort of stuff is sorted out. A few years from now I hope I will be saying the same about it as Java.
Also, C#/.NET is kind of a jack of all trades: you can do anything except the small subset of development where having a garbage collector gets in your way.
As for alternatives: I dislike Java; Python is less performant and not usable for large projects; Rust, Nim, Haskell, and Elixir are not as usable and well-polished. Kotlin is nice, but it does not have any benefit over C#.
I would love for Rust, Nim, Haskell, or Elixir to pick up some steam, become more usable, and grow larger communities and many more libraries. But until then, C#/.NET is the sweet spot for me. I still dabble in C/C++ from time to time, and that makes me grateful C# exists. I am even inclined to double down on my .NET bet by starting to use F# along with C#, because it feels like a very nice language. It seems to enable even faster development while being more concise and needing fewer tests.
They fumbled the cell phone era.
They half-heartedly entered the hardware business, with poor results.
They have a few golden geese that could be disrupted at any time (Windows, Office).
Their biggest asset is .NET, and if they fumble that they will go down hard.
Specifically, what I mean is that current VS is just not usable for web dev (compared to JetBrains tools), but what is even more important: there is nothing close to Spring Boot in terms of ecosystem richness (important for corpos), and even the newest .NET 5 is much more cumbersome compared to e.g. Go (important for startups).
Don't get me wrong - I do like C#; it is a nice language (at some point it was better than Java). But current Java and other JVM languages and tooling (Lombok) are great too, or better (Kotlin), and the current web/cloud ecosystem is anything but .NET-related.
If you develop for Windows desktop or for a .NET-oriented organization (some make political decisions), use .NET; but if you can make your own decisions and you develop for web/cloud, just don't use .NET.
FYI .NET 6 + C# 10 has a ton of work invested into removing the "enterprise boilerplate" experience and making the language and frameworks more akin to node or go in terms of effort/LoC required to spit out a program:
There's also a few epics that are specifically focusing on .NET as a compelling cloud framework:
We will see in 10 years if they catch up...
Sure, C# is more pleasant to use than Java, and the language has almost C++ like capabilities for low level coding, but it lacks the 25 years of cross platform experience. And one can always include a native library if required, JNI might be boilerplate but it doesn't bite.
One cannot pick a random .NET library and use it on .NET Core; most likely it is using .NET Framework APIs, COM, or Win32 calls.
So already, the ecosystem is reduced to the libraries whose authors have bothered to port them to .NET Standard or Core.
Then issues like this, or how MAUI is being handled versus what Uno and Avalonia achieved on their own, show the ongoing power struggle.