> Thank you for reporting it and not selling it on the black market!
I disagree. If MS is going to treat major issues like this then researchers should be selling them to the highest bidder. Maybe that way they'll actually treat disclosures properly.
Pretty bold to advocate for blackhat behavior on one of the most schoolboy vanilla places on the internet, but I can't say I necessarily disagree with your sentiment. Big tech needs a lesson, but is this really the vulnerability we want? 115 million DAU on Teams...
The amount of damage the NSA or some other state sponsored actor could do with this... It would be very bad to say the least. How bad depends on which state acquires it.
If a script kiddie got it they would likely do a mass ransomware infection; hospitals would get hit, people would die. Millions in crypto would be lost to unencrypted wallets found on the vulnerable machines (yes, people do that...), which could cause some to lose their life savings... People have committed suicide for less.
My point is it's important to look past FAANG being cheap and look at the 2nd and 3rd order effects of something this powerful and widespread.
Governments around the world already regularly trade in exploits that are as or more severe than this one.
That isn’t to advocate for brokering to a government, just to say that the market already exists and contains comparable exploits. It’s only a matter of time until we see the next EternalBlue to WannaCry lifecycle.
> look at 2nd and 3rd order effects
...which FOSS engineers have spent their lives on, while FAANG accumulates patent and SSL money across international borders? Forcing TEAMS kool-aid with surveillance built-in down your desktop with the help of the C-Suite and their attorneys?
> But what about all of the innocent people who would be harmed by such a callous approach?
They should then think again about their choice of using teams. Why should Microsoft rake in money from a shabby product while volunteers have to fix their shit?
Assigning a ridiculously low score to significantly lower the bounty as a billion dollar company is disgusting.
> They should then think again about their choice of using teams.
Try saying that to a student who is using Teams on a school-issued laptop, by no choice of their own.
I'm not in any way defending how Microsoft handled this. Frankly, I'm ashamed of my former employer (though I worked in a completely different division). But your outrage toward the company should not extend to its unwitting users.
Bullshit. Currently there are millions of children who are obligated to use Teams for their publicly funded education.
And thinking these huge metrics get changed by selling black hat exploits to what? Teach Microsoft a lesson? While harming an already vulnerable population (not just children are obligated to use Teams). As if the long term goal of educating "unwitting" users is advanced at all by blackhat behaviour.
Microsoft has had a well-earned reputation for being a serious security risk for the 30 or so years I've been in IT. It's one of the oldest jokes in the industry. People and the world in general clearly do not work the way you apparently think they do.
Zoom still has a ton of users, and every single thing they make or do is a serious security risk (or has been in the past, evidencing a distinct lack of secure development culture).
There are lots of vulnerabilities in most door locks, does that mean we should go around stealing things because Chubb have made money selling insecure locks?
Funny thing to say when we're in the middle of a global pandemic, and more people are working from home than ever.
I work at a university and I've been forced to install that crap on my home computer because I need to teach from home. And so have all the professors at around half the universities I know of in my country.
Interesting, I'm surprised that they don't have to provide you with the tools needed to do your job!
In Australia, the employer is generally responsible for providing any necessary tools or equipment needed to do the job (contractors are another matter, though).
In normal circumstances they do provide the tools needed for the job, as they should. But this was a sudden state of emergency triggered by a pandemic, there were no funds, reactions weren't fast enough... so basically, they didn't.
Anyway, those of us who have research projects (as is my case) typically do have computers provided by the university at home, because research has strange schedules and working from home has always been a need (meeting with colleagues in different timezones, waiting for experiments to complete at night, rushing for deadlines, etc.).
But... it's not really practical to make room for two different desktop computers for my own use in an already space-starved flat, or to work on a laptop for many hours when I could do so on a desktop. So in practice, my home computer and my work computer are one and the same. And it's like that here for most, if not all, people I know.
We are a Latin country and also tend to live in small flats; maybe in other places it's different. I can imagine that if I had one of those American McMansions, it would make sense to have a home office with a sober, black work computer, a good camera setup and a green screen, and then a gaming room with a flashy gaming computer and huge speakers (near the billiards and darts room, probably :)). But that's not really how things work around here. Here, separation of home and work computers is almost exclusive to jobs with high security restrictions. Most people in normal jobs just don't do it because it's not practical.
I consider that inherent risk. Not getting a raise because the company made business decisions that turned out to be suboptimal (such as gaining short-term profits by not investing in IT security) is a risk that any employee faces. If you want a more stable environment you go for a more risk-averse employer, perhaps even public sector jobs.
That's a silly proposition. If my field of expertise is inherently private-sector, I don't have that choice. Also, I can't solve for every variable when searching for jobs. I choose among the ones I get an offer for, and obviously their IT decisions aren't at the top of my list (nor do I know what those are before hitting the desk).
Ruining companies that can't (or won't) get their act together (whether it's security, finance or any other critical and undervalued area) is a short-term pain that fixes the issue. Refusing to fix it simply prolongs the problem - at some point you have to say "enough is enough" and tear the bandaid off; if you don't, and you don't do so with severe enough consequences, then businesses will simply and conveniently ignore what they're being asked to do.
Necessity is the mother of invention; I have no doubt that the opportunities created by blowing away poorly-behaved incumbents will produce a healthy collection of startups operating within the required framework.
You may not see yourself as having a choice but that wasn't really my point. What I was getting at is that being an employee in general comes with a diffuse risk of many factors that can result in not getting a raise or the company even going bankrupt. Many of them are outside your direct responsibility or influence and yet you take up the whole risk package when joining that company.
The company getting ransomwared is just one more factor. It's not special.
Well, one issue with it is that it requires criminal activity, so it's dragging us down to a worse equilibrium where more resources have to be spent on countermeasures. But arguably that cat is out of the bag, so the next best thing we can do is to make security best practices easy. And Microsoft wasn't doing its part here.
To what extent should the blame for any harm fall on Microsoft? They are the ones relying on effectively free labor to protect the innocent. In such a case blaming the free labor instead of blaming the ones relying on free labor seems to create some very bad incentives.
Personally I would prefer just having all new vulnerabilities immediately disclosed once found. No selling, but letting people decide for themselves if they want to continue to use a product after someone has found a vulnerability. I also think the incentives this creates would mean that Microsoft and similar shops would put more effort into testing their own software because they would no longer have the safety net of a grace period when someone finds a problem.
Thing is, we don't know if this was found before by malicious actors and sold and/or abused.
This thing sounds like it is mostly pretty straightforward to find once you start looking - "you" being somebody experienced in this field of research, that is. At least you don't have to construct fancy weird machines (with type confusion, heap spraying and all those shenanigans). It comes down to finding something that can perform code execution in their internal API (here: "electronSafeIpc") and then finding a way to get there (here: an Angular escape bypass / not-properly-sanitized user-provided data), and you can do both in JavaScript without having to read tons of machine code.
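To make that two-stage pattern concrete, here is a minimal, hypothetical sketch in JavaScript. Every name in it (naiveSanitize, privilegedIpcSink, renderTemplate) is invented for illustration and is not the actual Teams or electronSafeIpc code; it only shows how a sanitizer that misses template expressions can hand user data to a code-execution sink.

```javascript
// Hypothetical sketch of the two-stage bug class described above.
// None of these names are the real Teams internals.

// Stage 1: a sanitizer that strips <script> tags but ignores
// template expressions - the kind of gap an "angular escape" abuses.
function naiveSanitize(html) {
  return html.replace(/<script[\s\S]*?<\/script>/gi, "");
}

// Stage 2: a privileged sink reachable from renderer JS, standing in
// for an internal IPC bridge that ends up executing attacker input.
function privilegedIpcSink(expr) {
  return eval(expr); // the dangerous primitive: code execution
}

// A template engine that evaluates {{ ... }} AFTER sanitization -
// this ordering is the bug: sanitize first, interpolate later.
function renderTemplate(userInput) {
  const clean = naiveSanitize(userInput);
  return clean.replace(/\{\{(.*?)\}\}/g, (_, expr) =>
    String(privilegedIpcSink(expr))
  );
}

// The obvious <script> payload is stripped, but a template
// expression flows straight into the privileged sink.
console.log(renderTemplate("<script>alert(1)</script>hi")); // "hi"
console.log(renderTemplate("{{ 6 * 7 }} pwned"));           // "42 pwned"
```

The point of the sketch is the ordering: the XSS filter and the interpolation step each look fine in isolation, and the vulnerability only appears when unsanitized expression evaluation runs after the filter, with a code-execution primitive on the other side.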
Given that Teams is a great target because of its large and often corporate user base, I'd be surprised if none of the usual industrial espionage suspects (e.g. China, the NSA, etc.) had a look at Teams before. And I'd think the chance of them having found the same bug, or a related bug, once they looked is pretty good too.
From what I am hearing, even the (US) military uses Teams sometimes... If that isn't incentive for "interested parties" to look at this thing, then I don't know what is.
I didn't want to belittle your work, if you think that was the case. It's still outstanding to find things like that on your own, and a lot of work goes into it. Sorry if I gave the wrong impression.
I have analyzed foreign code bases of similar dimensions in the past myself and found critical bugs. The size doesn't say much; it comes down to identifying the "interesting" bits (like electronSafeIpc in this case), which can be hard and tedious, but greatly reduces the code you have to look at in detail. My assertion is that if your name is e.g. China, then you will not be turned off by that.
Then people will move to some understaffed FOSS alternative with 5 people working part-time on it, with equally severe bugs that nobody notices (remember Heartbleed and countless others?)...
Profiting from the very likely unethical use of the exploit would be unethical.
Instead, this mishandling by M$ should prompt researchers to publicly announce the vulnerabilities, which would hopefully cause M$ to change its ways in future dealings.
It is of course easy for me to say this, not being a researcher who lives off the discoveries made.
My point is that in the case of M$ the defects could be publicly announced to all parties at once as a way of making M$ realize how bad its handling was. In all likelihood this wouldn't have to happen for long before they realized their mistake.
Many other corporations do indeed value the discoveries of researchers and do pay accordingly for being notified. Never did I suggest that this should become the industry norm (i.e. not paying for private disclosures).
Now, whatever your personal feelings on that idea are, they do not change the fact that selling exploits to other parties would be unethical.
Furthermore, participating in a system that promotes assumptions and flawed reading comprehension is not conducive to good discourse. That means you.
You could be a grey hat if you averaged out one exploit turned in to the proper group for every exploit sold to the highest bidder. Flip a coin to be a real grey hat.
That most locks are pickable is common knowledge and that is why high-risk targets invest in additional security beyond locks.
That crufty Electron apps are a security risk is not. So yes, you do need someone to run out into the streets and yell that the emperor has no clothes. Otherwise common knowledge will not be established.