Also, why did it take 2 months to fix an RCE? It's an RCE, not some XSS. I'd imagine this would be a high priority. No?
I agree that RCE should be a priority!
I don't need to be convinced that security bugs should be on a need-to-know basis during the responsible disclosure period, that seems obviously prudent. Anyone not working specifically on security can learn about the details at the same time as the wider public.
Also, shipping security fixes in stand-alone updates makes it much easier for attackers to identify security-critical changes (especially if they have access to source code, which they do for Firefox) and reverse-engineer the flaw. Firefox developers often land critical fixes with somewhat obscured commit messages to increase the work required by attackers to identify the critical security fixes in the torrent of commits that go into each regular release.
Obviously this only makes sense while the bug is believed to be unknown to attackers. If Mozilla believes the bug is being exploited, they can and do issue an emergency update.
Wow, that's fascinating. Do you have any interesting reads to point to in this regard?
To me, the bug fix introduced a clear regression, allowing an even more powerful vuln in the process.
This was a VERY valuable bug. I mean, it's sad to think about but the most likely scenario is that someone with access to the report at Mozilla or Google (or maybe elsewhere if it was shared more widely) called a friend of a friend of a friend and... sold it.
Typically, they leak the opposite way.
I don't care if you track that I clicked on your link. I care that your link doesn't even appear to go to the same site your reply-to address points to.
From one of the last emails I read this evening: "we would like to inform you that there is a form on our website"
[me]: The form is not on your website, only the link to it.
You're right, but it's much simpler to just disallow links: people don't really understand the difference on a website, and email adds yet another complication on top.
1. while developing -> send tokens for copy/paste if a user has chosen text-only emails
2. while administering -> institute the policy of having to ask one of the administrators, if it happens too often you have worse problems in any case
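The token idea in step 1 could look something like this, a minimal sketch (the function name and message wording are hypothetical; actually delivering the email via SMTP is out of scope here):

```python
import secrets

def make_verification_email(username: str) -> tuple[str, str]:
    """Compose a plain-text email carrying a short code the user
    pastes into the site manually, instead of clicking a link."""
    token = secrets.token_urlsafe(8)  # short and copy/paste friendly
    body = (
        f"Hello {username},\n\n"
        "To confirm this action, open our website yourself and paste\n"
        f"this code into the confirmation form: {token}\n\n"
        "We never send clickable links. Any email claiming to be from\n"
        "us that contains a link should be treated as phishing.\n"
    )
    return token, body

token, body = make_verification_email("alice")
print(body)
```

The point is that the message contains no URL at all, so "does this link really go where it claims?" stops being a question the user has to answer.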
>attackers would send a spear-phishing email luring victims to a web page, where, if they used Firefox, the page would download and run an info-stealer
Anything that will wake people up and stop them from just blindly clicking on things. For a financial institution like Coinbase where a hacker could compromise the security of the entire company, it doesn't seem completely unreasonable.
As long as employees need to be able to browse the internet, any whitelisting of links seems like a waste of resources.
That might work for the first day or so, but you'll eventually tune them out and blindly click past the warning.
Email does not benefit much from rich formatting when it's used for plain communication.
Marketing and sales is a different story.
The level of sophistication in crypto hacking would terrify me if I were a crypto startup employee.
Protecting against a rogue employee also adds defence against employee computer compromise.
The other risk if you don't fully and publicly mitigate insider threat is someone applying pressure to your employees to do something bad. (The intel community and other high risk environments have long had this threat model). This is basically identical to insider threat at time of pressure, although there are some different countermeasures leading up to it.
The low end of this is catching someone doing something they shouldn't (browsing porn on a work computer, having a relative with legal difficulties, etc.) and applying that as leverage. Usually "report early, no action will be taken against you" is a good policy for minor things.
I would NEVER expect (or want) someone to do anything but fully comply with an attacker who has kidnapped his kid and credibly threatens to do something horrible unless he authorizes a payment. My instructions to the insiders are "comply; we have technical countermeasures which will make those attacks fail".
If the criminal has an ounce of brains, they'd execute the hack while on an overseas vacation.
I am not likely to notice one day if I have 4.96551 ETH and the next 4.96530 ETH
That's the advantage of public ledgers, it's way easier to monitor for abnormalities. You don't need to tell your employees about all the checks you have put in place either.
note to self: only keep round amounts in online wallets.
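The kind of check described above can be as simple as diffing on-chain balances against an internal expected ledger. A minimal sketch, where `fetch_balance` is a stand-in for a real node or block-explorer query, and the addresses and amounts are placeholders:

```python
# Expected balances kept internally; hot wallets held at round amounts
# make any drift stand out immediately (addresses are placeholders).
EXPECTED = {
    "wallet-hot-1": 5.0,
    "wallet-cold-1": 12.5,
}

def check_balances(fetch_balance, tolerance=0.0):
    """Return addresses whose on-chain balance deviates from the
    expected amount by more than `tolerance`."""
    anomalies = {}
    for addr, expected in EXPECTED.items():
        actual = fetch_balance(addr)
        if abs(actual - expected) > tolerance:
            anomalies[addr] = (expected, actual)
    return anomalies

# Example run against a fake ledger where one wallet has leaked funds:
fake_ledger = {"wallet-hot-1": 4.96530, "wallet-cold-1": 12.5}
print(check_balances(fake_ledger.get))
```

Because the ledger is public, this check can run from a machine the employees never touch, which is exactly the "checks you don't tell them about" point above.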
When I ran thought experiments on this, one of the big issues for me was how to show buyers the vulnerability without losing money to them stealing it, or how to handle them claiming they already had it, in a way that minimizes risk to all parties. It was a tricky problem. Folks selling on the side was a potential outcome in some scenarios.
> Google is aware of reports that an exploit for CVE-2019-5786 exists in the wild.
(NB: I have no knowledge of the details of this specific bug.)
(And I'm actually a Chrome user, for now.)
They probably discovered phishing attempts with a link to a page deploying a curious payload.
Regardless of my post above, keep in mind I do use Firefox primarily and see nothing wrong with it.
Nowhere does the article say that any Coinbase employee was using Firefox. It only says the attack targeted Firefox, not that Coinbase employees use it.
I don't know, maybe because they need to get work done...? Even traditional banks allow JS.
Hint: if your environment feels like a concentration camp, users will find ways to work outside of it most of the time - which will be even more disastrous.
Security is a tradeoff; nuking browsers for everyone is just a bad tradeoff in 2019.