MS has some very talented programmers. They're not very common, but they exist. The problem is that the entire company is completely and totally focused on developing an absurd number of new features and products, giving them completely unrealistic deadlines, and then shipping software on those deadlines no matter how half-assed or buggy it is.
The idea is that everything is serviceable over the internet now, so they can just "fix it later", except they never do. This perpetuates a duct-tape culture that refuses to actually fix problems and instead rewards teams that find ways to work around them. The talented programmers are stuck working on code that, at best, has to deal with multiple badly designed frameworks from other teams or, at worst, simply gets scrapped. New features are prioritized over all but the most system-critical bugs, and teams are never given time to actually focus on improving their code. The only improvements that happen must be snuck in while implementing new features.
As far as M$ is concerned, all code is shit, and the only thing that matters is if it works well enough to be shown at a demo and shipped. Needless to say, I don't work there anymore.
I personally think near-daily updating to fix bugs that should've been discovered and fixed before actually shipping product is absurd. It's been argued that this is because systems are more complex now, but maybe we don't actually need all that complexity...
I keep using the product/service if it's stable. I have no use for a service with loads of nice features that don't work.
With 10.10 and 10.11 I had the same issue, caused by the kext from virtualbox. Removing it and installing version 5 solved everything and haven't had a kernel panic since then.
If not virtualbox, I'd look at other kext you might have there.
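For anyone hitting similar panics, a rough sketch of how to spot and unload third-party kexts on OS X (the VirtualBox bundle ID here is the commonly documented one; adjust for your install):

```shell
# List loaded kexts that are NOT Apple's -- the usual kernel-panic suspects.
kextstat | grep -v com.apple

# Unload VirtualBox's main kext before reinstalling version 5.
# (org.virtualbox.kext.VBoxDrv is VirtualBox's documented bundle ID.)
sudo kextunload -b org.virtualbox.kext.VBoxDrv
```

These commands are macOS-only; if the panic logs point at a different kext, substitute its bundle ID.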
I really wish Microsoft had stayed the course and maintained separate desktop and mobile operating systems. Windows CE had problems, but they could have fixed those without messing with Windows.
I have a cheap Win8 tablet. It's weird. Quite slick in a lot of ways, but you can go back to the old UI and see where all the bodges are.
Mac is a much more viable choice today than in the past, and I think the Mac's slowly but steadily increasing market share has something to do with the perceived reliability and stability of the platform.
I'm only basing this off of personal experience. I have Windows at work and Mac at home.
Thankfully, OS X is cheap/free. I use OS X, Windows, and Arch literally every day of my life, both at work and home.
To be quite honest, I blame consumers for all this nonsense. No one wants to actually pay for software. I can go into this in a more nuanced fashion, but I seriously think software across the board is a good bit cheaper than it needs to be. From OSes to games and everything in between.
This is not some pedantic armchair econ-101 argument; they are able to charge a higher price because OS X is part of the product, and they do.
Arch is actually free.
That's a suspect claim. I am not saying the cost isn't accounted for, but claiming that they charge a premium simply because they have OS X is dubious at best. I don't consider OS X a selling point in any regard, and I'm not sure even Apple does. Of course this is largely my opinion. I would find it kind of funny if they really considered it a selling point.
But for people who value specific software that is OSX-only or some of the features only found in OSX (like CoreAudio for music stuff) or the general interface polish and perceived stability, it doesn't matter if the Apple option costs $2000 and the other brand/build with similar core components costs $1000. If it doesn't run the platform you want/need then it doesn't matter that it's more bang for the buck.
Basically I default to Windows because for the things I do on a computer, it's got the most flexibility and options for hardware and software. Linux is missing too many applications I need/want and OSX only runs on machines that cost a good deal more while offering nothing I really need enough to justify the cost. But when I've needed or wanted to run an OSX application in the past (Final Cut Pro in my case several years ago) it just wasn't a question. The stuff I wanted to run and the hardware I needed to hook up required OSX so that's the only reason I shelled out for an Apple computer. If I could have done it without OSX I'd have saved the dough and put it into something else.
Admittedly, a good selling point for OS X is how well it handles the hardware -- the battery life is consistent and lasts long enough for my needs. I definitely use OS X more often when I'm traveling, and will relegate my development to that environment just to make sure the battery lasts.
The software ecosystem is definitely a selling point for some. I am not involved in multimedia production but I certainly can appreciate paying a premium amount to get the tools you want/need. Although whether this aspect can be fundamentally attributed to the OS or just a product of other things (audience, marketing, corporate deals, etc) is another conversation.
Good point, but I'm trying to stay completely agnostic of what Apple thinks: it's not Apple's cost that determines the price of a macbook (aside from setting a lower bound), it's what people are willing to pay.
I assume that people are willing to pay significantly more for MacBooks in this universe than in the parallel one where Apple ships no OS. At least as much more as it costs Apple to develop it in the first place.
You are right, though, that I can hardly "claim" it :)
It's either OSX, or it's that shiny Apple logo
The problem is that Windows isn't a purely enterprise product. Although it gets little money from the end-user market, it needs dominance in that market for strategic reasons. So MS will ensure it dominates the end-user market, whatever it takes.
From a customer perspective, it looks like an evil spiral that more and more companies tend to fall into.
So much focus on "getting there quickly." A few keystrokes and a marketing page, and everyone's using "Foo.js" v0.0.1.
And then you're spending time as a developer dealing with deprecations because APIs changed.
But as painful as that is, I believe we need external feedback to deliver better software. Trying a thing, seeing how it's used, and improving it is a good way to go.
It is just going to result in bad times in the short term.
The problem is in Safari, but Apple can't ship a fix until the next OS update, so we're stuck with a pretty bad bug until then.
You can see this, incidentally, if you browse LibreOffice's OpenGrok.
That optimism did not last long. I only stayed there as long as I did because of how much they were paying me. Referring to them as "M$" is disturbingly appropriate given that all they do is throw huge amounts of money at problems until they go away.
After seeing what goes on inside that company, I immediately tried to move to Linux, but it just isn't practical (http://itvision.altervista.org/why.linux.is.not.ready.for.th...). At the very least, I won't be upgrading to Windows 10 until 2020.
As for the 64/32-bit argument, it's pretty absurd. What are the benefits of going 64-bit? More memory consumption? (http://blogs.msdn.com/b/ricom/archive/2015/12/29/revisiting-...)
What types of apps are you developing in Visual Studio? It's my daily driver and crashes very rarely for me.
It always amazes me how someone can say "x sucks" and never question their own insane workflow.
Why couldn't you break it out into separate solutions and reference versioned DLLs?
I thought most serious users of VS had to install ReSharper on top of it?
Honestly back in VS 2010 I'd call ReSharper a core tool. In VS 2015, eww.
Just about everyone I've worked with that had daily complaints about how slow and laggy they felt Visual Studio was didn't seem to understand that ReSharper was 99% of the slow-down, excessive memory use, and UX lag...
My big problem with the first article was that in my experience, the extra registers available in 64-bit make all the difference in the world. In the comments for the article you posted, he says that isn't true with VS. Surprising, but I'll take his word for it.
Still, it bothers me that he keeps talking as though 4 GB of code and data should be enough. VS becomes pretty unusable for me around 1.5-2 GB. If it worked reliably up to the 4 GB limit, I'd be happier.
- "First, from a performance perspective the pointers get larger, so data structures get larger, and the processor cache stays the same size. That basically results in a raw speed hit..."
- "The cost of a full port [due to the amount of code involved, not the quality of the code] of that much native code is going to be quite high and of course all known extensions would break and we’d basically have to create a 64 bit ecosystem pretty much like you do for drivers. Ouch."
- "A 64 bit address space for the process isn’t going to help you with page faults except in maybe indirect ways, and it will definitely hurt you in direct ways because your data is bigger...32 bit processes accrue all these benefits just as surely as 64 bit ones."
I know IntelliJ is great, and JetBrains in general rocks, but it's no contest.
Wait, what? Why is it okay if a program crashes at all?
Development environments in particular may have to crash sometimes in order to prevent the user from wrecking everything else on the system.
The less said about the old-fashioned WebForms GUI, the better...
VS2015 hasn't locked up on me in a couple of days. ;-)
Even with locking up once or twice a week, it's pretty awesome.
When we shared the RC design preview with you, we expected the uppercase menu would generate mixed feedback and emotions. We had seen similar reactions from early adopters and from our own internal users prior to posting about it. 
So then they said they had been "thinking about it" (why a lot of thinking was necessary is beyond me), and that "using uppercase for the menus was not an arbitrary decision" because they needed "to keep Visual Studio consistent with the direction of other Microsoft user experiences".
Then they tried to say that "some of you" won't like the change and that these people have "been very direct in expressing your opinions on this subject".
In other words: they were told from the very start, by beta testers and their own test team, that it was an absolutely awful decision, but ignored it because, well, "consistency". Then they released it to almost total user revolt, but couldn't back out because, well, "consistency". They do now let you change the letter case with a registry tweak, after all - Microsoft knows better than its end users, even when those end users are screaming for them to fix a fairly fundamental UX error.
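For reference, the registry tweak in question is widely documented for VS 2012; a sketch (adjust the "11.0" version key for your install, then restart VS):

```shell
:: Restore mixed-case menus in Visual Studio 2012.
:: "11.0" is the VS 2012 version key; SuppressUppercaseConversion
:: is the documented value name.
reg add "HKCU\Software\Microsoft\VisualStudio\11.0\General" ^
    /v SuppressUppercaseConversion /t REG_DWORD /d 1 /f
```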
Deadlines appear to have been a problem back then, as well.
I have wondered how much of that has changed since. From your description, not much... :(
Progress/bugs were reviewed weekly at ship room. Live site issues were reviewed the next morning (involving the relevant on-call engineers).
So as with any large company which isn't a monoculture, things vary from team to team.
Also, calling them "M$" in 2016?
I suspect the culture still exists today.