They seem to have waited until Apple had a patch ready: they disclosed it to Apple, gave them an adequate amount of time to patch the vulnerability, and users are better for it.
So in this case and others similar to it, kudos to Google.
While I understand that the common ethos of our current culture supports this, has there been any analysis of whether giving what amounts to a second chance to fix security issues leads to less prioritization of security up front? I could definitely see a business deciding to lower its security expenditure, since if an issue is found, it will be given a grace window to fix it before the world hears about it. The disclosure would still be damaging, but far less so, since the PR machine could spit out that the issue was patched before it was announced to the world.
There has to be some agreement limiting the grace period, since people will go public once a reasonable time frame to fix the issue has passed, and they won't be judged negatively if others agree that reasonable time was given. So if we won't judge someone for giving only 6 months instead of 3 years, what about the one who gives only 2 weeks instead of 6 months? How do we decide which of two time frames is better?
In the Microsoft case that vaguely comes to mind, I believe the issue required a fair bit of work because it was pretty low-level for Windows. I want security patches on my system ASAP, but I also don't want someone to release something that breaks my OS's functionality or leaves my files (or the ability to open them) fubared. If memory serves, Microsoft was making progress on it, but the work went past the time period Project Zero set, Project Zero was unwilling to grant an extension, and as far as was reported, the bug didn't seem to be exploited in the wild. So then you have something unpatched that is disclosed by Google. That doesn't help users all that much.
That all assumes the bug isn't verifiably being exploited in the wild. When it is, that changes things: users need to be made aware as soon as possible, and if "turning off" a feature is a possible stopgap, give them that info too.
Android's security for the past couple of years has been superb, and it regularly has strong showings at Pwn2Own.
Google is slowly trying to fix it, but the average Android device in the wild is way behind the average iOS device, and that will be the case for many years to come.
It is, though. The Android that's most commonly used is Samsung's, and Samsung also issues monthly security patches for a large range of devices: https://security.samsungmobile.com/workScope.smsb
LG ( https://lgsecurity.lge.com/security_updates.html ) does as well, and so do at least Motorola & Nokia.
> the average Android device in the wild is way behind the average iOS device, and that will be the case for many years to come.
The average iOS device just got hit by two zero-days in the wild. And jailbreaking, which is literally a chain of privilege-escalation exploits, is a long and well-established practice on iOS. There's a constant, continuous stream of those for iOS. There haven't seemed to be many (any?) for Android in a while now.
To be fair, there are a variety of reasons why this isn't the case on Android that have nothing to do with security. An Android jailbreak is less valuable for a few reasons, among them that you can often purchase Android devices with root privileges; the same isn't possible for the iPhone.
No, they wanted to set a legal precedent.
> FBI found someone inside they could negotiate with (i.e., better patents or the threat of denied patents)
> FBI found someone they could bribe
> FBI hacked and broke into the iPhone
All 3 of those are things Apple wants to keep quiet.
It's supposed to have been in place for a year or so... but it's clearly not working. If this particular one isn't blocked, then which ones are?
I'm on up-to-date Chrome 72...
Additionally, if you navigate within a site (i.e., click through to another ZDNet article from that same page), that counts as a website interaction, and the new page will be allowed to autoplay.
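For what it's worth, a page can detect when autoplay has been blocked, because play() returns a promise that rejects with a NotAllowedError. A minimal TypeScript sketch (the "#promo" video element is just a hypothetical example):

    // Try to autoplay with sound; fall back to muted playback if blocked.
    const video = document.querySelector<HTMLVideoElement>("#promo");

    if (video) {
      video
        .play()
        .then(() => {
          // Autoplay with audio was permitted, e.g. because the site has a
          // high enough Media Engagement Index score or the user already
          // interacted with it.
        })
        .catch((err: DOMException) => {
          if (err.name === "NotAllowedError") {
            // Autoplay was blocked: mute and retry, or show the controls
            // and wait for an explicit user gesture.
            video.muted = true;
            void video.play();
          }
        });
    }

Presumably this is how sites adapt when they don't make Chrome's whitelist, rather than just failing silently.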
Firefox's upcoming autoplay blocking is user-controlled instead of based on algorithmic behavior, and it doesn't have a whitelist. The controls are available right now, but not turned on by default (yet).
So just ugh. Disappointed in Chrome that zdnet.com is somehow considered high enough quality to play videos with audio automatically. :(
"CVE-2019-7286 and CVE-2019-7287 in the iOS advisory today (https://support.apple.com/en-us/HT209520 ) were exploited in the wild as 0day."
A top Google security engineer revealed today that hackers have been launching attacks against iPhone users using two iOS vulnerabilities. The attacks happened before Apple had a chance to release today's iOS 12.1.4 --meaning the two vulnerabilities are what security experts call "zero-days."
The revelation came in a tweet from Ben Hawkes, team leader at Project Zero --Google's elite security team. Hawkes did not reveal under what circumstances the two zero-days were used.
At the time of writing, it is unclear if the zero-days have been used for mundane cyber-crime operations or in more targeted cyber-espionage campaigns.
The two zero-days have been assigned the identifiers CVE-2019-7286 and CVE-2019-7287.
According to the Apple iOS 12.1.4 security changelog, CVE-2019-7286 impacts the iOS Foundation framework --one of the core components of the iOS operating system.
An attacker can exploit a memory corruption in the iOS Foundation component via a malicious app to gain elevated privileges.
The second zero-day, CVE-2019-7287, impacts IOKit, another iOS core framework, which handles I/O data streams between hardware and software.
An attacker can exploit another memory corruption in this framework via a malicious app to execute arbitrary code with kernel privileges.
Apple credited "an anonymous researcher, Clement Lecigne of Google Threat Analysis Group, Ian Beer of Google Project Zero, and Samuel Groß of Google Project Zero" for discovering both vulnerabilities.
Neither an Apple nor a Google spokesperson responded to ZDNet's requests for comment before this article's publication. It is highly unlikely that the two companies will comment on the issue at this time, as both would like to keep public details about the zero-days to a minimum and prevent other threat actors from gaining insight into how they work.
iPhone users are advised to update their devices to iOS 12.1.4 as soon as possible. This release also fixes the infamous FaceTime bug that allowed users to eavesdrop on others using group FaceTime calls.
And if you think there are security holes, you're welcome to report them to get a bug bounty.
No one would say, "If you want a secure PC, buy a Microsoft Surface."
Nope, Android security is actually quite good. Not as good as iOS, but very good nonetheless.
Of course, fragmentation and the fact that most Android devices are not updated do not help.
However, there is active mitigation of security issues both in the kernel and in Android userspace.
I mean, here we have the most valuable company on Earth specifically scoping out competing software and hardware, constantly looking for zero-day vulnerabilities. They don't submit to the bug bounty, so if they make a disclosure decision you disagree with, you'd better come around quickly, because they'll just go public.
How fast do you think I would get sued if I found a zero-day in a Google product, told Google to fix it and shove the bounty, and then went public 30 days later?
With respect to the merits of Project Zero:
> How fast do you think I would get sued ...
I don't think you would, but even if you did -- your argument really seems to underscore the value of Project Zero. Google can publicize these bugs and has the resources to stand up to lawsuits from irresponsible vendors.
I'm not silly enough to think that Google doesn't exploit competitive advantages, but in this case I think they are trying to catch up with the public perception of Apple's superiority with respect to secure product design. Objectively, we can see that's the case with many of the design elements of iOS vs. Android. And they do this not by putting down Apple, but by demonstrating that they are leaders in security, not followers.
I want my phone to be as secure as possible because I use it for work, but of course, since it's not brand new, I can't get updates.
> Not trying to be snarky; Android has been around for over 10 years and we all know how irresponsible all OEMs are, including Google. He had the information when he made his purchase.
I care about the security of my devices, and I don't want to pay the Apple tax (which is really absurd in Brazil; even with my quite good engineering salary, an iPhone still costs half a month's work). I always buy supported devices and get monthly security updates on my Android One device.
So much for Andy Rubin and his promise of openness and choices...
If a law made a security bug a refundable or warrantied defect, I bet you this shit would stop.
But no one gives a shit.
I'm also incentivized to release a new model every month with ANY improvement in order to limit my liability to a smaller window of revenue.
The current system isn't perfect, but it could be much worse.
> when I push out improvements
No, if your software has a security issue, it's refundable. Write good software.
> release a new model every month with ANY improvement
Good, but that doesn't remove your liability for your last model.
There are zero companies that can provide consumer software on the lifecycle consumers have come to expect without any bugs. You write software. Are you willing to claim that you can just "write good software" and never ship anything with a security issue?
Because otherwise you're advocating for consumer tools that use NASA's release cycle. Which, like, that's cool and all, but I don't want to rely on hardware from 2012 or 2005 running software that was developed from 2010-2014 and has just finished its verification process. You're advocating for a world where we just got the verifiably bug-free Nokia 3310.
And that doesn't even begin to address the clusterfuck that open source would be in this situation. Am I liable for Heartbleed because I use OpenSSL? Are the OpenSSL devs?
GDPR is basically "you are liable if you are actively exploited and data is stolen". You're saying that a company is liable if they ship bugs, which the GDPR absolutely doesn't care about.
Not even close; as a data processor or controller, you are liable for keeping the data you collect safe.
Which again, is nothing like "write bug free code or you're liable".
What? No. You have to have a DPO, provide clear language on what you do with data and who it's shared with, and avoid intrusive prompts that are opt-in by default, just to name a few.
The GDPR focuses on procedural liabilities. You're asking for application-level liabilities, which, like I've said three times now, are a whole different ballgame.
Since you're so dead set on this, I'll just ask again: who is liable for Heartbleed or for Meltdown? Who gets sued, and for how much, and why?
Anyone who doesn't make an effort to update. If your hardware is still Heartbleed fucked and you're selling it, you deserve to lose money.
Intel and AMD.
> Who gets sued
No one. "Here's your product back, it's defective, please cut me a check." That's all.
I disagree with everything else you say. Google P0 has definitely pushed companies like Microsoft and Apple to be better, and this benefits all of us. Their 90-day policy pushes vendors to make overall improvements in their build, ship, and update processes, helping us get faster updates for critical vulns. It forced the leadership at these companies to put security very high on the list.
I am also pretty sure that Google holds itself pretty tightly to its own 90-day disclosure policy. Google runs a full-fledged bug bounty program, and external researchers do find critical bugs in Chrome, Android, etc., and don't get sued by Google.
With hundreds of critical bugs found in major software (and hardware), their contribution to security is nothing less than stellar.
Why shouldn't they go public? Not everyone can spend their time reading patch notes. If a new patch comes out that fixes a security flaw, most people say "update later". But if you say "no really, criminals are using this attack RIGHT NOW", that can help a lot of people keep their devices safe.
It should be relatively easy to prove, by showing a disclosure from Project Zero without an accompanying acknowledgment/post from the company...
So far, most of what I've seen has been good for security.
Not least the whole Spectre/Meltdown mess.
Please edit to say "one of the most valuable companies on Earth," as it is absolutely not the most valuable by market cap. In fact, it's currently worth less than Apple ($781B vs. $803B).
If I were running Project Zero, I would not even give them the 30/90 days.
Full disclosure is the best way.
People forget that one of the reasons the '90s were such a heyday for hackers was that companies tried to fight hackers with court orders instead of actually fixing their stuff.
I cannot protect my systems from things I am not aware of. Allowing a vulnerability to be exploited for 60 more days when, on day 29 of the "responsible disclosure" window, it was observed being actively exploited is not responsible either.