The discussion on race conditions at the end is an important one, and IMO the bugfix is a bandage at best: the notion of anything accessing the “current” object after any kind of delay, especially in an event handler, when there is any chance the thing is not a singleton, is a recipe for disaster. In this case, dismissing the “current” security code screen was a supported API surface and that should set off all the red flags.
Of course it’s annoying to have to track the identity of “our screen” and bind that identity to event handlers, or make it accessible with context etc. But it’s necessary in anything remotely security-adjacent.
(And never assume anything is a singleton unless you add a breadcrumb comment for someone who might change that assumption on a different team!)
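A minimal Kotlin sketch of that binding, with hypothetical names (nothing here is the actual AOSP API): the delayed handler carries the identity of the screen it was registered for and does nothing if that screen is no longer current.

```kotlin
// Hypothetical types: the point is only that the callback is bound to one screen.
class SecurityScreen(val id: Long)

class ScreenStack {
    var current: SecurityScreen? = null
        private set

    fun show(screen: SecurityScreen) { current = screen }

    // An event handler (e.g. "SIM state changed") captures `owner` at registration
    // time. If something else is on top by the time the event fires, it must do
    // nothing instead of dismissing whatever happens to be "current".
    fun onDelayedEvent(owner: SecurityScreen) {
        if (owner === current) dismiss(owner)
    }

    fun dismiss(screen: SecurityScreen) {
        if (screen === current) current = null // pop only our own screen
    }
}
```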
Agreed. The fixed logic, at least judging by the commit message, still feels very shaky on correctness grounds ("if we are dismissing something that doesn't seem to be right, ignore it").
Since they're rewriting code and changing method signatures anyway, I would prefer they got rid of the notion of "currently visible screen" and made sure that all dismiss() calls have a unique pointer or token pointing to what exactly is being dismissed. If this was my codebase, their approach would give me all sorts of bad vibes about additional problems lurking deeper.
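Concretely, the signature change would look something like this (hypothetical names, not the real keyguard API):

```kotlin
// An opaque, non-forgeable handle handed out when a security screen is created.
class SecurityScreenToken internal constructor(internal val id: Long)

interface SecurityScreenController {
    // The old shape of the API: ambiguous whenever screens race each other.
    fun dismissCurrent()

    // The proposed shape: dismissal is bound to one specific screen instance,
    // and a stale token simply fails instead of popping someone else's screen.
    fun dismiss(token: SecurityScreenToken): Boolean
}
```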
The whole process and the nature of the fix doesn't inspire a lot of confidence in the security of Pixel/Android in general.
So you'd go out and refactor a major security-sensitive component (which most likely predates your career) in the span of a single month, against an emergency security patch deadline?
That doesn't inspire a lot of confidence in your risk assessment and decision making.
I'd do what Google did: roll out a patch that addresses the immediate danger and then backlog proper refactors over time.
Their fix included a similarly large refactor, they just used the "security screen type" as a newly introduced parameter instead of something unique to the screen instance.
I do agree that in the real world, sometimes you have to settle for a less-than-ideal solution. I hope my post reads less like "those people are idiots", which was not my intent, but more like: this specific fix isn't ideal, and knowing this type of code is live in a device doesn't fill me with confidence, even if I can understand reasons for why it was done that way.
Right? This was absolutely the "right" level of refactor for a hotfix, as the full refactor would introduce much more state management that could itself introduce bugs. And especially if behind the scenes there was a detailed audit of what things can currently access the current security screen, it would be fine for now.
But I sincerely hope that in the postmortem, there would be a larger meta-discussion around code review practices and how something like this "global dismiss" became part of the API surface to begin with, and a sincere prioritization of a larger review within the backlog. Though with everyone on edge at this time in big tech, I doubt that ends up happening :(
Their change is hardly a big refactor. This includes all the new code, all the parameter changes everywhere the function is used, and two additional test cases. This is a tiny change.
>12 changed files with 102 additions and 26 deletions. [1]
I don't think that is as much of an issue as the ridiculous process he had to go through.
Think about that first security researcher. You literally found a Screen Unlock bypass (should be Priority #1, right?) - and Google just went and put fixing it on the backburner.
If they will put something like that on the backburner, what else are they ignoring? It isn't confidence-inspiring.
Edit: Also, knowing Google, what are the odds of your full refactor? "Temporary" fixes become permanent fixes quickly.
Maybe it was an already well-known exploit. After all, this was a duplicate and Google was sitting on it. Two people found it and reported it to Google; why couldn't a third have found it and sold it instead?
> The fixed logic, at least judging by the commit message, still feels very shaky on correctness grounds
This was my experience as a dev on a team at Google for a few years. I saw a LOT of exceedingly lax treatment of correctness in the face of concurrency. I've even seen multiple decisions to guess at how to fix a concurrency bug and just say "well, looks good to me, let's see if it does anything."
It's par for the course, and folks get (got? =P) paid handsomely for doing it.
They already have multiple security screens, and a demonstrated critical bug with security screen confusion. Not sure how this is premature optimisation.
> the number of screens is small and there are few tiers (only 2)
Making this kind of assumption, when there are no such guards in the system itself, is exactly what leads to security issues.
If the system enforced two named singletons as security screens, so it was impossible to .dismiss() the wrong thing, then sure. But that's not how the system is, and assuming that "the number of screens is small" and "there are only 2 tiers" without enforcing that assumption with code is pretty much how the original bug was introduced.
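If that assumption is going to be load-bearing, it should at least be spelled out in the type system. A rough Kotlin sketch with made-up types (nothing here is from AOSP):

```kotlin
// Made-up tier types: adding a third tier forces a compile error at every
// place that reasons about tiers, instead of silently invalidating assumptions.
sealed interface SecurityTier
object SimTier : SecurityTier      // SIM PIN / PUK screens
object DeviceTier : SecurityTier   // device PIN / pattern / password

fun dismiss(tier: SecurityTier, screenId: Long): Unit = when (tier) {
    // Exhaustive `when` over a sealed type: a new SecurityTier subtype breaks
    // the build here, so "there are only 2 tiers" can never silently rot.
    SimTier    -> popSimScreen(screenId)
    DeviceTier -> popDeviceScreen(screenId)
}

private fun popSimScreen(screenId: Long) { /* only SIM screens with this exact id */ }
private fun popDeviceScreen(screenId: Long) { /* only the device credential screen */ }
```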
Since they are dismissing the lock screen _type_ (SIM, PUK, PIN) and not the instance, a logical example of where this might go wrong is if you have dual SIM. Then again, worst case you dismiss the incorrect SIM lock screen. That will not give you a full unlock, and the 'wrong' SIM will still not work.
Yeah, an attacker may be able to use their own dual-SIM Pixel phone to bypass the SIM lock screen for a stolen SIM card whose PIN or PUK code they don't know, using a similar technique. But like you said, I'm almost certain that it wouldn't actually let them send and receive texts (and if it does, then that's really an issue in the SIM card's OS, considering anyone could just modify AOSP to ignore the SIM lock screen and then put that on their own phone).
Even still, being able to bypass the SIM lock screen would still be a bug, just not a vulnerability. Google doesn't pay bounties for non-security bugs to the best of my knowledge, but I can't help but feel this is still not an ideal way to design the system. It likely is fine today, but as strix_varius said, these kinds of assumptions are what led to this vulnerability in the first place. Vulnerabilities do pop up from time to time in otherwise well-designed systems, but this lock screen bypass never would have been present in the first place had better design practices been followed. As krajzeg said [1], the whole process and the nature of the fix doesn't inspire a lot of confidence in the security of Pixel/Android in general.
I would indeed expect something more robust like a security state machine where not all states can transition to any other state freely. The UI shouldn't even have a say in what transitions are allowed.
The Rx model works nicely. Rather than trying to model state transitions, you model "scopes of work." So if you have something like an event listener, you would tie it to the active unlock session. When that session ends, you dispose of that "scope of work" and all work associated with it would immediately stop.
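Something like this, as a Kotlin/RxJava sketch (UnlockSession and the event are made up; the CompositeDisposable pattern is the point):

```kotlin
import io.reactivex.rxjava3.disposables.CompositeDisposable
import io.reactivex.rxjava3.subjects.PublishSubject

// Hypothetical "scope of work": all listeners registered during a session die
// with the session, so none of them can outlive "their" security screen.
class UnlockSession {
    private val work = CompositeDisposable()
    val simEvents: PublishSubject<String> = PublishSubject.create()

    fun start() {
        work.add(simEvents.subscribe { state ->
            if (state == "SIM_READY") dismissOwnScreen()
        })
    }

    // Ending the session disposes every subscription tied to it; late events
    // arriving after this point simply have nowhere to go.
    fun end() = work.dispose()

    private fun dismissOwnScreen() { /* only this session's screen */ }
}
```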
No, not exactly, but Android is old and gnarly enough that a lot of components don't have a clear Model/View separation you'd expect in modern codebases.
Well, the UI element’s dismissal is what signals the system that holds unlock state. And the problem was that multiple other systems would automate the dismissal of the current UI element… without checking whether it was the appropriate UI element they expected to be there!
Google invests a great amount of money in their Project Zero team, but does anyone know if they have a specific red team dedicated to Android?
As a former Google Information Security Engineer I can say: I don't know, because the Android security organization is completely separate from Google's security organization. It's something I always found odd, as it created some redundancy between the two organizations, a lack of homogeneous processes, etc.
I was under the impression that decrypting storage actually requires the passcode of the phone, but this bug makes it look like the device is able to decrypt itself without any external input.
Does anybody know more context about this? What's the point of encryption if the device can just essentially backdoor decrypt itself?
It didn't work on a fresh reboot, so presumably it functioned like you're describing. But when he swapped the SIM live, without a reboot, the phone was already running with the key in memory.
On iPhone, keys are evicted from memory when the device is locked. Apps running behind the Lock Screen can only write files to special file inboxes (this is why the camera lets you take pictures while locked but doesn’t display earlier pictures, for example)
You’re telling me that android keeps keys in memory for its entire uptime?
There is a data protection class like what you're describing, but it is not used super widely; the one most commonly used is exactly what is being described here, and it makes data available after first unlock.
That's not really true at all - you can of course unlock your iPhone without entering your PIN for every screen lock, which should give you a clue that keys for disk encryption generally aren't purged when the iPhone is locked.
Some keys are, but not the ones that are the issue here.
I've even seen conditions where iOS devices reboot and still retain keys.
If you unlock the screen using Face ID, the OS gets the keys from the Secure Enclave which, depending on the model, does the face recognition itself or uses the normal processor in some kind of secure way. Just as when you unlock the phone using the PIN code, the OS gets the key from the Secure Enclave, which makes sure it's not easy to brute force. The PIN code is not the key itself, of course.
The only key that sometimes gets retained at reboot is the SIM unlock.
From what I gather, the more secure keys should be discarded 10 seconds after a lock-screen event. Lower-security keys stay in memory to allow background activity.
Encryption on iOS, if I understand correctly, is on a per-file basis. There is thus no "mount" event to look for, and there is no value in using a less secure key unless you intend to run in the background, because decryption is supposed to happen on the fly.
PS: Also, if I remember correctly, pressing the emergency sequence (holding power + volume up) discards ALL keys instantly, and unlocking requires the passphrase as if you had just rebooted. The emergency call doesn't need to be issued, just initiated (you must hold for 10 seconds or confirm on screen to actually make the emergency call).
> If you receive a phone call while locked presumably the phone can still access the address book to display the contact name and photo?
Just so you know, this is true on an iPhone, but NOT if the phone has NEVER been unlocked since reboot. If you get an SMS/call in this state, it will just show the number. It can't read the address book.
I think for iMessage, the actual messages are sent using APNS, so the message is in the push notification itself. Thus while you can see the message itself without unlocking, any older messages that are behind the Secure Enclave are inaccessible without keys.
This is correct. For example, when I connect my iPhone to my work-provided wi-fi and get a Tinder notification, I can partially see the message on the lock screen (once Face ID authenticates), but since Tinder is blocked on that wi-fi, if I want to read and respond in the app I have to pop to cellular.
> You’re telling me that android keeps keys in memory for its entire uptime?
Yes. I've known that for quite some time, and yet I keep forgetting, considering how stupid this feels [1]. Google provides a "lockdown" button which is supposedly more secure (I think it's recommended for journalists?)... Well, it doesn't evict keys either. The only eviction is a reboot.
[1] It feels stupid because there had been a LOT of work to move from FDE to FBE and to allow two states of data encryption and telling apps to support both of them. Doing all this work just to be able to store incoming SMS and to display wallpaper on first lockscreen...?
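For reference, that FBE split is what apps see as Device Encrypted vs Credential Encrypted storage (standard Android APIs; the preference names below are just examples):

```kotlin
import android.content.Context

// DE storage: readable right after boot, so the system can draw a wallpaper or
// hold an incoming SMS notification before the user ever unlocks.
fun storePreUnlockState(context: Context, wallpaperUri: String) {
    val deContext = context.createDeviceProtectedStorageContext()
    deContext.getSharedPreferences("boot_ui", Context.MODE_PRIVATE)
        .edit().putString("wallpaper", wallpaperUri).apply()
}

// CE storage (the default): its keys are only available after first unlock.
fun storePrivateData(context: Context, value: String) {
    context.getSharedPreferences("user_data", Context.MODE_PRIVATE)
        .edit().putString("secret", value).apply()
}
```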
Do you have any more details about how that works on iPhone? It seems very hard to believe, given the complexity and diversity of background apps on iPhones, some of which access huge amounts of data that couldn't possibly fit in system memory (e.g. offline GPS/navigation apps). For example, Google Photos can send the full photo library on the phone, even if large, to the cloud while the device is locked.
I actually find this incredible. I am familiar with iPhone security but not android and had naively assumed Google probably did a better job on the non-UX aspects.
Nonsense. If that were true then things like backups and cloud sync couldn't happen when the device is locked. But of course they do, meaning the keys are still sitting there, freely accessible to the CPU, along with all the data on disk.
Your camera example is not at all convincing of anything special going on, since that's also the camera behavior of other OSes (like Android) that don't purge the keys. That's far more easily implemented as just a basic app policy than as some security system that yanks the file system out from underneath running processes.
On Linux this is addressed by systemd-homed, which encrypts at least your home partition in sleep mode. Attackers could still try to manipulate the rootfs & hope the user doesn't detect it before using the device again.
"If your Mac has the T2 Security Chip (recent Intel-based Macs) or uses an Apple silicon chip (M1 family and future), security is significantly improved. The Secure Enclave in both systems uses encrypted memory, and has exclusive control over FileVault keys. The Intel CPU or the Application processor (M1) never sees the keys, and they are never stored in regular (unencrypted) RAM. Due to this, an attacker would only be able to extract encrypted keys (which can't be decrypted), and only if the system failed to prevent the DMA attack in the first place. These protections make it a lot safer to leave your Mac asleep."
The most valuable information for an adversary is typically found in RAM, like your password manager master password, browser cookies, etc. RAM can be dumped easily with the right equipment.
The only safe encryption is on a powered down device.
If you fully hibernate to disk, where the memory snapshot is encrypted with your FDE key, then you are good to go; but that is not locking, that is turning the computer off.
As long as that secondary disk uses a different FDE key and you manually unmount it. This is easily done with LUKS on Linux but YMMV on other operating systems
The passcode is required to get access to anything the first time you start the phone, for the reason you mention, and after that the password is retained in the trusted execution environment. This way apps can continue to function in the background while the phone is locked and you can unlock with alternative methods like fingerprints or face recognition.
> It was a fresh boot, and instead of the usual lock icon, the fingerprint icon was showing. It accepted my finger, which should not happen, since after a reboot, you must enter the lock screen PIN or password at least once to decrypt the device.
I was surprised to read this part too. Assuming that the author's version of events is accurate here, my best guess is that the device had not fully powered down, and was in either a low-power/hibernate or find-my-phone mode where portions of the security subsystem were still powered, hence the device-unlock PIN was still cached. I don't otherwise see how a fingerprint alone would allow the device to be unlocked on a cold boot.
Of course this detail doesn't take away from the rest of the report - great find xdavidhu!
Doesn’t seem like a full unlock, see the next paragraph: “After accepting my finger, it got stuck on a weird “Pixel is starting…” message, and stayed there until I rebooted it again.”
It seems to me this bug appears when a phone is booted, unlocked (and decrypted) once, and then locked again, but the decryption key still stays in memory.
This is virtually always the case with these kinds of vulnerabilities on smartphones. Security researchers often say whether an attack or vulnerability is possible "before/after first unlock" in reference to the fact that the security is a totally different story if the phone has been unlocked/decrypted since last boot.
I have an obsession with classifying software bugs into general categories, looking for the "root cause", or more constructively, for a way to avoid entire classes of bugs altogether. I've been doing that for more than 20 years now.
This bug, if you look into the fix, falls into my "state transition" category. You can (and should) model large parts of your software as a state machine, with explicit transitions and invariant checks. If you do not, you still end up with states, just implemented haphazardly, and sooner or later someone will forget to perform a task or set a variable when transitioning from state to state.
This category is my strong contender for #1 problem area in all devices that have a user interface.
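To make the category concrete, here is a toy Kotlin version of what I mean by explicit transitions with invariant checks (the states are invented for illustration, not the real keyguard ones):

```kotlin
enum class LockState { BOOT_LOCKED, SIM_LOCKED, DEVICE_LOCKED, UNLOCKED }

class LockStateMachine(initial: LockState = LockState.BOOT_LOCKED) {
    var state = initial
        private set

    // Every legal transition is declared; anything else is a bug, not a state.
    private val allowed = mapOf(
        LockState.BOOT_LOCKED   to setOf(LockState.SIM_LOCKED, LockState.DEVICE_LOCKED),
        LockState.SIM_LOCKED    to setOf(LockState.DEVICE_LOCKED),
        LockState.DEVICE_LOCKED to setOf(LockState.UNLOCKED, LockState.SIM_LOCKED),
        LockState.UNLOCKED      to setOf(LockState.DEVICE_LOCKED),
    )

    fun transition(to: LockState) {
        // Invariant check: a "dismiss" can never jump straight from SIM_LOCKED
        // to UNLOCKED, no matter which UI element asked for it.
        check(to in allowed.getValue(state)) { "Illegal transition $state -> $to" }
        state = to
    }
}
```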
I think the root issue is one of which state is the default one. In Android the logged-in state is the default one, and the logged-out state is constructed by taking the logged-in state and essentially hiding it behind a modal.
The issue with this is that systems have a tendency to return to their default state. If the modal is dismissed, or has a bug that makes it crash, or hits memory corruption, or any number of other things, then the system will return to the default state.
I would turn it upside down, and let the logged out state be the default one. The logged-in state is then a lockscreen that is hidden behind a session. If the session dies you are back at the lock screen. The lock screen can't be dismissed because it's always there. If the lockscreen crashes the phone locks up completely because there is nothing else there to show.
It's acceptable for failures and exceptions to decrease privilege (boot you out), but they must never increase it.
Edit: Ideally the lockscreen should also run with reduced privileges so that it literally can't start a session even if it wants to, except indirectly by supplying a password to some other system.
"When a fail-safe system fails, it fails by failing to fail-safe." (from the wonderful "Systemantics").
Yes, one should definitely try to fail safe. But managing your states and state transitions explicitly and carefully is a good way to avoid these kinds of bugs.
Do you have any writing I can read about your classification? This sounds extremely interesting and useful. (I have some related thoughts, but not 20 years' worth and largely not recorded.)
Hmm. Perhaps I should get my notes into shape and publish them… I'll think about it. I would need to force myself to post them to HN without looking at the discussion, though.
Reminds me of Orthogonal Defect Classification. Analyze defects for when they were introduced (during development, architectural design and so on) and what caused the introduction of the defect into the system in the first place.
I even think they should dismiss the modal by ID instead of type.
As this is a highly sensitive part, I think stacking lock screens on top of the unlocked menu leaves the door open for many bugs that could unlock your device.
The unlocked menu should be locked at all times, and use a flag to monitor if it’s locked/unlocked, and only flip the flag when you unlock with biometrics or with password.
If the flag is locked, then the whole screen is black and can’t have any interactivity via touch, mouse, kw…
This way is more robust: even if you manage to bypass the stack of lock screens, you end up with the main menu still locked.
I was also thinking they should only dismiss by ID instead of type.
The other question is, why would background tasks be permitted to call dismiss at all? I can imagine a scenario where you get a malware app installed using whatever method. Then when you get physical access to the phone, you send a notification to the malware app. The malware app in the background calls dismiss on every possible type several times to unlock any possible security screens.
There should be some sort of lock/flag/semaphore that is held by the current top level security screen. Dismiss should only be callable by whatever process has a hold of that. Dismiss calls from anyone else should not only be denied, but processes that make such calls should be blocked, quarantined, marked as suspicious, etc.
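Roughly this, as a Kotlin sketch (ScreenToken and the gate are hypothetical, and in a real system the check would have to happen on the privileged side of the IPC boundary, not in the caller's process):

```kotlin
import android.util.Log

class ScreenToken internal constructor()

// Hypothetical gate: only the holder of the current token may dismiss;
// everyone else is refused and flagged for auditing.
object SecurityScreenGate {
    private var holder: ScreenToken? = null

    // Handed out exactly once, to the security screen being shown.
    fun acquire(): ScreenToken = ScreenToken().also { holder = it }

    fun dismiss(caller: ScreenToken): Boolean {
        if (caller !== holder) {
            Log.w("SecurityScreenGate", "dismiss() from non-holder; ignoring and auditing")
            return false
        }
        holder = null
        return true
    }
}
```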
I was thinking that if I had to code this, this issue would cross my mind at least once: the question of "what happens when there are multiple screens stacked" and how that should get handled properly. This is what meetings are there for, to discuss such issues.
It almost sounds intentional, but at the very least like a very sluggish approach to security.
I think an even better approach would be to have the concept of fixed tiers of locking combined with evicting the decryption key for any Lock Screen above the basic PIN.
And you can only move down one tier of unlocking at a time. Unlocking SIM PIN moves you down one tier to phone PIN screen.
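In code, the "one tier down at a time" rule is tiny (tier names invented for the example):

```kotlin
// Completing a screen can only ever move you one tier down, never straight to
// UNLOCKED, so even a spoofed SIM dismiss still lands on the device PIN screen
// where the credential (and the CE keys) is still required.
enum class UnlockTier { SIM_PUK, SIM_PIN, DEVICE_PIN, UNLOCKED }

fun nextTier(current: UnlockTier): UnlockTier =
    UnlockTier.values().getOrNull(current.ordinal + 1) ?: UnlockTier.UNLOCKED
```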
I'd go for one screen with a queue of prioritized unlocking tasks which need to be completed successfully one after the other. These tasks could get the chance to hand over a fragment which the screen embeds and presents to the user, in order to properly modularize the tasks.
> #0016
> VENDOR: GOOGLE
> STATUS: FIXED (NOVEMBER 2022 UPDATE)
> REPORTED: JUN 13, 2022
> DISCLOSED: NOV 10, 2022 (150 DAYS)
Project Zero only gives vendors 7 or 90 days before disclosure...
The short version: Project Zero won't share technical details of a vulnerability for 30 days if a vendor patches it before the 90-day or 7-day deadline. The 30-day period is intended for user patch adoption.
This is also the point that stands out to me the most. This is hypocritical and pretty close to negligent: they set such high standards for the other companies they investigate but can't live up to them themselves.
I can only hope this is a singular case or else the argument "yeah we collect your data but we also keep it safe!" falls pretty quickly.
If you follow published reports of Android vulnerabilities you'll see that taking longer than 90 days for a fix is actually not that rare. I myself had a similar experience a couple of times.
Not testing it right now, but my understanding is that the issue technically affects every device, but the specific condition (putting the lockscreen on top of the secure screen stack right before `.dismiss()`-ing) is a Pixel software bug.
Every once in a blue moon when I pick up my locked iPhone (which auto-locks in just 30 seconds) and engage the home button just as the screen comes alive from the gyro sensing movement, it unlocks on its own. It just flashes the PIN dialog and slides right onto the home screen. I don't use Touch ID, and never stored my print with it even once to test the feature/hardware. It's been happening ever since iOS 11, with both my 1st gen. iPhone SE and my current iPhone 8.
I reported it years ago but the report was ignored and closed, possibly because I could not provide a reliable/reproducible procedure for triggering it.
This sounds like a UI race condition and actually gives me more confidence in the iPhone (unlike the Pixel, the unlock state isn’t tied to UI elements).
Unless of course you can do this long after it locks…
> "Hopefully they treated the original reporter(s) fairly as well."
Perhaps they should have reconsidered a bounty payment of some sort for the first bug reporter as well. Perhaps that's where the other $30k of the $100k went.
This actually says something interesting about bug bounty programs in general:
Given a high level of false positives, it's probably not uncommon AT ALL that sometimes it takes a couple of bug reports before something is reproducible or generates a high enough alert/credibility status, as seemed to have happened here.
What's the correct protocol to be fair to all of the original bug reporters in those situations? Split the bounty? Reward all of them with the full amount?
> Given a high level of false positives, it's probably not uncommon AT ALL that sometimes it takes a couple of bug reports before something is reproducible or generates a high enough alert/credibility status, as seemed to have happened here.
This was not a case of the repeated reports eventually being taken seriously. None of them were, until the author met people working at Google in person at some event, showed them the issue, and then persisted.
Less "Oops, we received a couple of reports, better look into it" and more "this guy won't stop bothering us about it, we should probably look into it".
Security reports from proper pentesters tend to include easy to reproduce steps and if you can't reproduce it yourself from that, you can ask them to expand, since it's in their interest for you to be able to understand them, since that's how they get paid.
> Security reports from proper pentesters tend to include easy to reproduce steps and if you can't reproduce it yourself from that, you can ask them to expand, since it's in their interest for you to be able to understand them, since that's how they get paid.
Fair point, but it's also in their interest to overestimate the impact of the bug they found. And, even if the reports are well written, many reports that I've seen (mostly from new gray hats) described issues that were not actually exploitable, even with aggressive PoC code.
This is where I find out my otherwise completely functioning Pixel 3a no longer gets security updates, as of May.
I knew and accepted that it wouldn't get new features and major Android versions, but to not even get security updates, after only three years?
So my options are: live with the piece of technology in my life that is both the most vulnerable to physical security issues and has the widest access to my critical information no longer getting active security updates, attempt to root it and install my own build (is CyanogenMod still a thing?), or throw a working piece of complex technology in the trash?
The most recent Pixels get a few extra years of only security updates after regular updates run out. So at least they have improved the policy somewhat now.
It would be great if Google went ahead and fixed this problem in particular for more devices, though.
Dunno whether you'll read this, but as a fellow 3a owner I just discovered that there is a September update image on the google website: https://developers.google.com/android/ota#sargo -- I guess maybe this will appear as an OTA update eventually? On the other hand it still doesn't have a fix for this bug, so the situation isn't really any different :-(
Can you tell me where to find release information for the Pixel 3/3a? I see that there's a new version from Nov 11 (only) for the 3 series, but the most recent changelog is from Nov 10 (https://grapheneos.org/releases#2022111000) and I don't see any information there regarding the lock screen bug.
A fun and interesting read. But it is frustrating to hear that such a major security bug was ignored until a happenstance meeting with Google engineers.
Wow, this is very serious - it pretty much turns every "left my phone on the bus" incident from "oh well" into "all your data was compromised". I don't know how Google couldn't take this seriously. Even after the poster physically demonstrated it they took months to fix it. For sensitive data with legal disclosure requirements this is a game changer.
Very disappointed with Google here - even though I lost a lot of trust in them in other areas, I still rated their security stance as excellent, especially on their own phones.
* The security screen "system" works as a stack, and the individual handlers for them don't actually have a reference to their own security screen. That seems like a terrible design; this design caused this bug, and the fix for it feels like a band-aid that could have unintended consequences (and thus new security implications) in the future. The handler for the security screen should have a reference to the security screen itself (or an opaque token or something like that), so it can be sure it is only dismissing its own screen. Passing a "security screen type", as the new, "fixed" code does, is not specific enough for this kind of code, and still seems unsafe to me.
* I'm a bit confused as to how this could unlock a newly-rebooted phone. Isn't the user data partition unmounted and encrypted? How could this partition get decrypted if the user doesn't get the opportunity to enter their device PIN, which should be needed to gain access to the partition's encryption key? I guess maybe it isn't decrypted, and that's why the system got stuck on the "Pixel is Starting" screen. Still, pretty concerning.
Meanwhile, my Pixel 4 still only has the October 2022 security update, claims there is no update waiting, and is presumably still vulnerable.
It doesn't get decrypted. The data is still safe after a reboot; that's presumably why the phone hangs for the author after a reboot. Although some comments have said Android itself loads (and I guess the launcher too), they can't really do anything.
I went to buy a phone maybe two months ago. Before I had my current Google Pixel 6, I used a OnePlus 3T for six years, and even then I only stopped because I sat in a hot tub with it on. At the T-Mobile store, I announced to the salesman that I would be back to buy a Pixel 6 when they had it in stock, and a man pulled me aside and privately asked me why I wanted to buy a Pixel.
He explained to me that he was actually working in the hardware division at Google and that the team that he was managing was responsible for some parts of the Pixel's design. But he added that he had never actually talked with anyone out "in the wild" who owned a Pixel or made a positive attempt to buy one. He went on to explain that most of his team didn't use a Pixel either - they were all pretty much using iPhones, but some were even using Samsung devices.
I understand that this was someone from the hardware team and it doesn't necessarily reflect on the people who work on the Android OS, but I feel silly for not having taken what he said into consideration when I finally bought a phone. If the people working on a device don't even want to use it themselves and can't figure out a compelling reason for anyone else to use it, shouldn't that have been a strong signal to me that I shouldn't have selected it? But I did, and I've been regretting it since. Great camera though.
I'm... not sure that I would take a random person* in a T-Mobile store at their word when they claimed that they were "actually working in the hardware division at Google."
I recognize that I should've been more clear but the person who pulled me aside was a random customer waiting in line who pulled me aside when I told the T-Mobile guy that I was planning on getting a Pixel, which they didn't have in stock. I did ask a fair number of questions about what it was that he did to determine that it wasn't someone older who was just messing with me. Granted, this was some number of months ago, but if I recall correctly he was trying to figure out why people wanted Pixels because on his team, people would use iPhones because their family members used iPhones, or because it was easier from an enterprise security standpoint with BYOD. I'm not sure if I remember specifics beyond that.
It's kind of a post-hoc realization that I should've used his admission to me as a reason to second-guess a purchase of a device which, I've come to discover, has a stock messenger application that fails to sync message receipt times, gets very hot to the touch, and drops cell tower connections until rebooted. And, as the article we're replying to points out, had a lock screen bypass bug that wasn't fixed for months.
Because then you get headlines like Meta got recently, where developers are being forced to use Horizons(sic?).
TBC I also agree that you should dogfood things you build, especially in the cruisy world of software development where if you really hate what you work on you can just go somewhere else. It is a bad look in the media though
Not OP, but for me, the biggest annoyance with Pixel 6 compared to older devices was the fingerprint reader under the screen - so uncomfortable to use, and so much less precise than dedicated readers on the back like they had before (or on power button, like some other phones do).
A general frustration with the entire Pixel line is the lack of video output options. It's basically Chromecast or GTFO - no Miracast, no USB-C video. It's kind of sad when an Android phone has worse compatibility with generic hardware than an iPad! And the most annoying part is that Google deliberately removed all these at some point in the past.
>For example, I think people give Google grief over making it difficult to unlock the bootloader, but the same can be said of every other vendor.
It's not so much that as it is buying an unlocked Pixel and RMA'ing it when hardware problems happen only to receive a locked phone in return. This is the sort of thing that makes people angry.
I've also struggled to install GrapheneOS and Calyx on my iPhone Pro Max. /s
(I actually do have both and definitely prefer the Pixel -- especially the cameras on the Pixel are amazing in low light, but it's a bit annoying how the official Gcam app seems to expect that Google photos is installed.)
How funny is it that the best way to de-Google is to buy a Google device, and that Apple is incredibly prescriptive about knowing exactly who you are, connecting to wifi, and getting your purchasing on file before you can even get through the initial setup on an iPhone.
The one thing I like about iPhone over Android is that the animations are a bit nicer and more polished, and the stock color scheme is pretty bright (but awful at night), and everything else about Android seems to be better.
Modern carriers are migrating to VoLTE, and LineageOS is unable to implement this outside of a few devices, meaning that many phones have been dropped from the latest release.
As (W)CDMA is shut down in favor of 5G and LTE, Pixels (model 3 and above) will be more desirable for users who wish to run Android without Google on modern cellular networks.
I am one such user.
Supposedly, two different implementations of VoLTE exist in AOSP, neither of which is used outside Pixels (if I understood previous discussions correctly).
I have a Pixel 6 because I bought it through Verizon when they offered a $5/month rebate for 36 months = $180 off. This was at the beginning of this year, so before the Pixel 7 was released. I assume they offered the rebate because they had a whole lot in stock that nobody bought, but everyone in my family got a Pixel because of it.
Your perception is that the probability of seeing someone that works at Google in the United States is as low as seeing an extraterrestrial capable of speaking English? What justification do you have for believing that I would be incapable of asking questions that would be able to tell the difference between an actual engineer at Google, and a random person pulling my leg? Google employs more than a hundred thousand people in North America alone. Some people that are now employed there were in my graduating class! Do you have some reason to call me an idiot?
What was the pragmatic purpose of saying "A person pulled me aside and told me he was an alien" if not to cast doubt on me talking to a Google employee? You didn't hurt my feelings but you were clearly trying to express doubt that I talked to a person that worked at Google. Where does that doubt come from?
I have received and read your response regarding my apology to you. I'm concerned that I may have upset you more than I realised.
My grandfather once told me not to take random strangers' comments too seriously. I try to live by that advice, whether it's comments from internet strangers or, for that matter, customers at the phone shop.
This is of course not advice from me to you. But I thought I'd share it with you in case you might find it useful too.
Roll to disbelieve; I think someone was pulling your leg.
Especially the bit about Samsung devices. I've had the misfortune of setting up a Samsung phone for a family member, and the amount of crap on those is just unbelievable.
Could it have been a sales technique? Perhaps to sell you an iPhone. Perhaps the iPhones they were trying to sell have a higher commission and margin than the Android phones.
I wish closing things as "this is a duplicate" essentially required disclosure of the original (dupe) report.
It may well be that it's a dupe, or it may be something that looks similar but isn't actually the same. And indeed, as in this case, it's only the follow-up report that got the bug fixed.
In this case it seems that contacts at Google allowed them to escalate anyway and get it fixed.
But so often and especially with other programs almost everything gets closed as "dupe" which is just dispiriting.
In any case, if something this serious is a duplicate then there's the suspicion it went unfixed for long enough to be independently discovered and reported which is worrying.
Do you mean each new person should get a new bounty, or all reporters should split the bounty? The latter does not really incentivize much, but the former incentivizes reporters to collude with other reporters (i.e. you find a bug, tell your 40 friends to report the same bug, you get a kickback from all your friends who also reported it. $$$$).
The latter does incentivize everyone who stumbles across the bug to not disclose it. At the same time, it's sad for the original researcher whose bounty gets smaller with every new person stumbling across it.
Higher impact, sure; but if it's just luck that you are the first of many to find it, and you did not invest a lot of work in its discovery, it is reasonable to pay less.
Under a "closed as dup" system, the probability is that you get nothing for reporting trivially found bugs, whilst you are still providing valuable information (that lots of people can find them).
Well, I see where you are coming from: the point of bug bounties is to reduce risk to the company, not necessarily to reward the effort of the researcher. There is a sense that a bug you have to be NSA-level skilled to find is less likely to be exploited than a bug that every script kiddie is stumbling upon.
> Everyone who reports an undisclosed bug should get a share of the bounty; this incentivizes them to stick to the embargo.
Having worked with bug bounty programs, I can guarantee this would be abused to no end. Reporters would enlist their friends and family to submit as many duplicate reports as possible.
There are a lot of good security researchers out there doing work in public, but bug bounty programs also get flooded by people looking to game the system in any way possible.
I mean, you all share the fixed bounty amount. You could only game the system if you expected other people had already found the bug; however, this would be risky, as it is fairly easy to detect and penalize. The common case is still that you only get one reporter per bug.
I agree, but just to play devil's advocate: if I discover a bug, report it, and then tell all my friends to also file a report before it is fixed, they'd have to honor multiple bounties.
I, too, am frustrated that I've read far too many stories about someone reporting a devastating critical exploit and all they get is "this is a dupe" back without further explanation. Makes one paranoid that employees are working with someone externally, backdating their bug reports, and splitting the bounty.
You'd probably violate the agreement so you and everyone else technically wouldn't qualify and would be committing fraud. That said there are other options, such as splitting the reward for a vulnerability amongst those who report it (even the dupes). This would incentivize people not to disclose the vulnerabilities while keeping the payouts static.
I suppose the risk is people could 'game' the system.
Person A finds the issue, reports it.
Then Person A secretly tells Person B about it (with no apparent connection), and Person B reports the same issue a few weeks later, but with apparently different code/description so it looks ever so slightly different.
I’ve run into this with other vendors and really wished it’d get you CCed on updates so you didn’t have to ask for status periodically. It definitely doesn’t give a good impression when things drag out for aeons.
What's crazy is that it's 100% in the vendor's interest to keep this person happy, who they know can cause massive damage to their system, completely legally. The only leverage they have is the reporter's greed to get a bounty.
Surely in this case the second report must have added some details, since they weren't fixing the original report, and I assume Android doesn't just sit on lock-screen bypasses.
Seems to me that if you report something that significantly recontextualizes a previous report (e.g. make it go from a low to a high severity), then your report shouldn't be considered a dupe.
I've reported some bugs to programs on HackerOne before that were flagged as dupes, and the triager did reference the original report. The Chrome team does this too.
Sounds like this only affects phones that have been unlocked since the last restart, so unless they have been kept plugged in it is unlikely that this attack would be successful.
Some discussion elsethread[0] suggests that that may only be the case for devices that are encrypted, as the passcode in that case would be part of the key for unlocking the contents.
If that's the case, it's possible that this attack may still work from a fresh boot for unencrypted devices.
I am always skeptical of these "lawtech" companies that sell magic unlocking devices. Are we really to believe that there are unpatched security holes in all major devices (both Android and iOS) that allow this kind of backdoor access?
I find it rather convenient that the "detailed support matrix" is available to current customers only; seems to me like the actual set of supported devices/operating systems would be limited to things such as outdated Samsung Galaxy phones and similar.
It's complicated, but yes, there are a lot of ways to unlock devices, some of which include straight up exploiting the device. Keep in mind btw that a lot of the sorts of criminals local LE is going after with these devices are not usually running fully patched iPhones or Pixels.
>Are we really to believe that there are unpatched security holes in all major devices (both Android and iOS) that allow this kind of backdoor access?
If you are at all familiar with the histories of jailbreaking, previous exploits, and the gray unlock market, it's unreasonable that you would not consider this to be the default case.
It works. It's basically a software brute force that works great for 4 digit pins, takes longer for longer passcodes. Other offerings are a keylogger for the pin/passwords after they "return" the device to the suspect.
> It's basically a software brute force that works great for 4 digit pins, takes longer for longer passcodes
Since the PIN/password isn't actually the encryption key, and is instead just the code that is provided to the security module/TPM on the device, I fail to see how this can be brute-forced. Unless there is also a magic hardware backdoor in Android phones, but in that case why would there need to be private companies, and how would they even have access to this?
Confiscated phones often have been confiscated for months and are therefore on a relatively old patch level. If at any point old vulnerabilities come out these can be used. Keeping the phones on and connected to a charger in the evidence lockers doesn't seem like too much work.
If the phone is set up for automatic updates it'll restart within a month (most of the phones I've had do monthly security patches) and you'll be in a fresh boot state. You can't turn off the updates without first unlocking the phone, giving you a rather limited window to attempt to exploit the device.
If I had to guess: not everyone can buy this software and A/G are not wanted by the sellers. Even the usual customers (law enforcement) are not very likely to pass exploits to them, because their work would become more difficult.
Very weird implementation with UI stacks and dismiss. The way we designed a multi-step flow for a web app was basically having a sort of state machine/flow which defines the possible transitions,
say password > mfa1 > mfa2 > done,
and, as each step completes, what the next security step is for this particular user's configuration, and we simply allow just that transition. Once we are at the done state, the authentication is marked as successful.
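Roughly this shape (the step names are just examples, not our actual code):

```kotlin
enum class AuthStep { PASSWORD, MFA1, MFA2, DONE }

// The only transitions that exist; the UI cannot request a jump to DONE.
private val nextStep = mapOf(
    AuthStep.PASSWORD to AuthStep.MFA1,
    AuthStep.MFA1 to AuthStep.MFA2,
    AuthStep.MFA2 to AuthStep.DONE,
)

fun advance(current: AuthStep, completed: AuthStep): AuthStep {
    // The server only advances when the step it expected was the one completed.
    require(completed == current) { "Step $completed completed out of order (expected $current)" }
    return nextStep[current] ?: AuthStep.DONE
}
```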
Not storing auth state in the UI (regardless of any MVC concern) and allowing only a very narrow set of state transitions seems like a trivial design choice. I assume Google has no shortage of people for security-focused design.
The UI stack being created together and dismissed, rather than created/called on demand as state transitions happen, also seems a very weird design. Perhaps I don't understand the reason, because I'm not an Android programmer.
> The same issue was submitted to our program earlier this year, but we were not able to reproduce the vulnerability. When you submitted your report, we were able to identify and reproduce the issue and began developing a fix.
> We typically do not reward duplicate reports; however, because your report resulted in us taking action to fix this issue, we are happy to reward you the full amount of $70,000 USD for this LockScreen Bypass exploit!
Lots of mixed feelings just reading this, but at least in the end it seems like a positive outcome for everyone.
Ah, that's a nice hack to avoid having to pay your bounties! First report: "can't reproduce, sorry." Subsequent reports: "duplicate, sorry." Then fix on whatever schedule you feel isn't too blatant.
Appalling handling on Google's end here. The duplicate issue part I can understand, but why should it take two reports of a critical vulnerability to take action? Surely when the first one comes through it's something you jump on, fix, and push out ASAP, not delay to the point where a second user can come along, find the bug, and report it.
The refactor that’s mentioned towards the end of the article is great, but would you not just get a fix out there as soon as possible, then work on a good fix after that? For a company that claims to lead the way in bug bounty programs this is a pretty disappointing story.
You can read in the conversation that Google was not able to reproduce it the first time the bug was submitted:
> The same issue was submitted to our program earlier this year, but we were not able to reproduce the vulnerability. When you submitted your report, we were able to identify and reproduce the issue and began developing a fix.
I wonder if it really was the same bug, or whether they just made some mistake trying to reproduce it.
> I did something weird after putting in a new PIN, and I was able to access my home screen without my password, but I'm not sure of the exact steps I did
then that's not really a duplicate. If the original bug report doesn't have enough information to recreate the steps, the second one is the only real bug report.
Just trying to rationalize, but if the "external researcher" was hired by Google to find security issues, Google might be allowed to fix the bug at its own pace.
I would personally be highly suspicious of a security flaw being a duplicate, though. It can be a very convenient excuse not to pay the bounty.
Reporting and investigation matter. Perhaps the initial report only covered the bypass of the lock screen but only ever ran into the not-yet-decrypted phone state, so it was dismissed as not being exploitable (see other comments), whilst the second report actually got inside an active phone (and was also written up in a simple, concise and reproducible way).
So basically Google wanted to give this guy nothing. Then he set a hard deadline for disclosure, and Google managed to buy him off for $70k so they could stick with their own deadline.
It would not surprise me if in some cases, google runs the exploit up the tree to the NSA to see if they're actively using this for matters of national security, then slow-walk the patch to release. Given how easy the exploit is (no software needed, no special equipment beyond a paper-clip), would not surprise me if this has been in wide use for several years now by various groups.
I agree that it appears to have been the disclosure threat that resulted in the bounty, but I don't agree (if I'm reading you correctly) that the OP acted unethically. It sounds credible to me that he was just doing everything he could to get the bug fixed.
According to the article, the reporter had already decided before the bounty had been set that they would wait for the fix:
> I also decided (even before the bounty) that I am too scared to actually put out the live bug and since the fix was less than a month away, it was not really worth it anyway.
Or more charitably: by the terms of the program he wasn't eligible for anything, but they gave him seventy thousand dollars out of goodwill and the spirit of the program.