Accidental Google Pixel Lock Screen Bypass (xdavidhu.me)
1592 points by BXWPU 83 days ago | 444 comments



The discussion on race conditions at the end is an important one, and IMO the bugfix is a bandage at best: the notion of anything accessing the “current” object after any kind of delay, especially in an event handler, when there is any chance the thing is not a singleton, is a recipe for disaster. In this case, dismissing the “current” security code screen was a supported API surface and that should set off all the red flags.

Of course it’s annoying to have to track the identity of “our screen” and bind that identity to event handlers, or make it accessible with context etc. But it’s necessary in anything remotely security-adjacent.

(And never assume anything is a singleton unless you add a breadcrumb comment for someone who might change that assumption on a different team!)


Agreed. The fixed logic, at least judging by the commit message, still feels very shaky on correctness grounds ("if we are dismissing something that doesn't seem to be right, ignore it").

Since they're rewriting code and changing method signatures anyway, I would prefer they got rid of the notion of "currently visible screen" and made sure that all dismiss() calls have a unique pointer or token pointing to what exactly is being dismissed. If this was my codebase, their approach would give me all sorts of bad vibes about additional problems lurking deeper.
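
To sketch what I mean (hypothetical names, nothing to do with the actual AOSP classes), the delayed handler would be bound to the screen it was created for, rather than to whatever happens to be "current" when it finally fires:

  // Hypothetical sketch: the callback captures the exact screen it was registered
  // for, instead of reaching for whatever screen is "current" when it fires.
  class SecurityScreen(val name: String) {
    var dismissed = false
      private set
    fun dismiss() { dismissed = true }
  }

  class ScreenManager {
    var current: SecurityScreen? = null
      private set

    fun show(name: String, onDone: (SecurityScreen) -> Unit): SecurityScreen {
      val screen = SecurityScreen(name)
      current = screen
      // Even if `current` changes before this delayed event fires, the handler can
      // only ever dismiss the screen it owns.
      registerDelayedEvent { onDone(screen) }
      return screen
    }

    private fun registerDelayedEvent(handler: () -> Unit) { /* queue it somewhere */ }
  }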

The whole process and the nature of the fix doesn't inspire a lot of confidence in the security of Pixel/Android in general.


So you'd go out and refactor a major security-sensitive component (which most likely predates your career) in the span of a single month, under an emergency security patch deadline?

That doesn't inspire a lot of confidence in your risk assessment and decision making.

I'd do what Google did: roll out a patch that addresses the immediate danger and then backlog proper refactors over time.


Their fix included a similarly large refactor, they just used the "security screen type" as a newly introduced parameter instead of something unique to the screen instance.

I do agree that in the real world, sometimes you have to settle for a less-than-ideal solution. I hope my post reads less like "those people are idiots", which was not my intent, but more like: this specific fix isn't ideal, and knowing this type of code is live in a device doesn't fill me with confidence, even if I can understand reasons for why it was done that way.


Right? This was absolutely the "right" level of refactor for a hotfix, as the full refactor would introduce much more state management that could itself introduce bugs. And especially if behind the scenes there was a detailed audit of what things can currently access the current security screen, it would be fine for now.

But I sincerely hope that in the postmortem, there would be a larger meta-discussion around code review practices and how something like this "global dismiss" became part of the API surface to begin with, and a sincere prioritization of a larger review within the backlog. Though with everyone on edge at this time in big tech, I doubt that ends up happening :(


>Their fix included a similarly large refactor

Their change is hardly a big refactor. This includes all the new code, all the parameter changes everywhere the function is used, and two additional test cases. This is a tiny change.

>12 changed files with 102 additions and 26 deletions. [1]

https://github.com/aosp-mirror/platform_frameworks_base/comm...


I don't think that is as much of an issue as the ridiculous process he had to go through.

Think about that first security researcher. You literally found a Screen Unlock bypass (should be Priority #1, right?) - and Google just went and put fixing it on the backburner.

If they will put something like that on the backburner, what else are they ignoring? It isn't confidence-inspiring.

Edit: Also, knowing Google, what are the odds of your full refactor? "Temporary" fixes become permanent fixes quickly.


> Edit: Also, knowing Google, what are the odds of your full refactor? "Temporary" fixes become permanent fixes quickly.

Hahah, I wish that was only Google :D


Could have been sold for $300k or more on the black market.


Maybe it was an already well-known exploit. After all, this was a duplicate and Google was sitting on it. Two people found it and reported it to Google. Why not a third, who sold it instead?


Hahah it can go both ways.

You can have two major rewrites over 3 years, or you can have a new temporary-became-permanent bug fix.


> The fixed logic, at least judging by the commit message, still feels very shaky on correctness grounds

This was my experience as a dev on a team at Google for a few years. I saw a LOT of exceedingly lax treatment of correctness in the face of concurrency. I've even seen multiple decisions to guess at how to fix concurrency bugs and just say "well, looks good to me, let's see if it does anything."

It's par for the course, and folks get (got? =P) paid handsomely for doing it.


This just sounds like you're prematurely optimizing for additional security screens getting added.

Maybe that's not on the table atm? Still odd that they took so long to change a couple method signatures and write a couple test cases


They already have multiple security screens, and a demonstrated critical bug with security screen confusion. Not sure how this is premature optimisation.


because if the number of screens is small and there are few tiers (only 2), passing an identifier around could be overkill

sounds to me like it's an optimization for introducing more tiers than what there are


> the number of screens is small and there are few tiers (only 2)

Making this kind of assumption, when there are no such guards in the system itself, is exactly what leads to security issues.

If the system enforced two named singletons as security screens, so it was impossible to .dismiss() the wrong thing, then sure. But that's not how the system is, and assuming that "the number of screens is small" and "there are only 2 tiers" without enforcing that assumption with code is pretty much how the original bug was introduced.


Since they are dismissing the Lock Screen _type_ (SIM, PUK, PIN) and not the instance, a logical example of where this might go wrong is if you have dual SIM. Then again, worst case you dismiss the incorrect SIM Lock Screen. That will not give you a full unlock, and the ‘wrong’ SIM will still not work.


Yeah, an attacker may be able to use their own personal dual-SIM Pixel phone to bypass the SIM lock screen for a stolen SIM card whose PIN or PUK code they don't know, using a similar technique. But like you said, I'm almost certain that it wouldn't actually let them use it to send and receive texts (and if it does, then that's really an issue in the SIM card's OS, considering anyone could just modify AOSP to ignore the SIM lock screen and put that on their own phone).

Even still, being able to bypass the SIM lock screen would still be a bug, just not a vulnerability. Google doesn't pay bounties for non-security-related bugs to the best of my knowledge, but I can't help but feel this is still not an ideal way to design the system. It likely is fine today, but as strix_varius said, these kinds of assumptions are what led to this vulnerability in the first place. Vulnerabilities do pop up from time to time in otherwise well-designed systems, but this lock screen bypass never would have been present in the first place had better design practices been followed. As krajzeg said [1], "The whole process and the nature of the fix doesn't inspire a lot of confidence in the security of Pixel/Android in general."

1. https://news.ycombinator.com/item?id=33545685


I would indeed expect something more robust like a security state machine where not all states can transition to any other state freely. The UI shouldn't even have a say in what transitions are allowed.
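
As a rough illustration (made-up states, not the real keyguard code), the machine itself would own the allowed transitions and the UI could only request them:

  // Hypothetical sketch: an explicit allow-list of lock-state transitions.
  enum class LockState { LOCKED, SIM_CHALLENGE, PIN_CHALLENGE, UNLOCKED }

  object LockStateMachine {
    private val allowed = mapOf(
      LockState.LOCKED        to setOf(LockState.SIM_CHALLENGE, LockState.PIN_CHALLENGE),
      LockState.SIM_CHALLENGE to setOf(LockState.PIN_CHALLENGE, LockState.LOCKED),
      LockState.PIN_CHALLENGE to setOf(LockState.UNLOCKED, LockState.LOCKED),
      LockState.UNLOCKED      to setOf(LockState.LOCKED),
    )

    var state: LockState = LockState.LOCKED
      private set

    fun transition(to: LockState) {
      // The UI may *request* a transition; only the machine decides whether it is legal.
      // Note that dismissing a SIM challenge can never jump straight to UNLOCKED.
      require(to in allowed.getValue(state)) { "illegal transition $state -> $to" }
      state = to
    }
  }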


The other nice quality of having your code in a state machine is that it can be verified by a model checker.

You might like this post on statig, an HSM library for Rust

https://old.reddit.com/r/rust/comments/yqp2cq/announcing_sta...


The Rx model works nicely. Rather than trying to model state transitions, you model "scopes of work." So if you have something like an event listener, you would tie it to the active unlock session. When that session ends, you dispose of that "scope of work" and all work associated with it would immediately stop.
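
Roughly, a sketch with RxJava 3 (names made up): everything registered during a session goes into one disposable container that dies with the session.

  import io.reactivex.rxjava3.core.Observable
  import io.reactivex.rxjava3.disposables.CompositeDisposable
  import java.util.concurrent.TimeUnit

  // Hypothetical sketch: all work belonging to one unlock session is collected in a
  // CompositeDisposable, so ending the session stops every handler tied to it at once.
  class UnlockSession {
    private val work = CompositeDisposable()

    fun start() {
      work.add(
        Observable.interval(1, TimeUnit.SECONDS)
          .subscribe { tick -> println("session-scoped handler, tick $tick") }
      )
    }

    fun end() {
      // No stray listener can outlive the session and dismiss somebody else's screen.
      work.dispose()
    }
  }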


Am I reading this right? This reads like the presence of a UI element holds the unlock state of the phone?


No, not exactly, but Android is old and gnarly enough that a lot of components don't have a clear Model/View separation you'd expect in modern codebases.


The good old "Model / View / Confuser" paradigm.


Android was invented before MVC?


Well it was designed around Java, so definitely before common sense was invented.


Well, the UI element’s dismissal is what signals the system that holds unlock state. And the problem was that multiple other systems would automate the dismissal of the current UI element… without checking whether it was the appropriate UI element they expected to be there!


And of course the fix includes a magic SecurityMode.Invalid value, which makes dismiss() behave like it did before.

I'd look very hard at places that use SecurityMode.Invalid.


Yeah, I suspect this fix isn't going to hold for very long at all now that everybody knows to look at it.

If Google isn't readying a more extensive fix right now, they're going to get pwned shortly.


Google invests a great amount of money in their Project Zero team, but does anyone know if they have a specific red team dedicated to Android?


As a former Google Information Security Engineer I can say: I don't know, because the Android security organization is completely separate from Google's security organization. It's something I always found odd, as it created some redundancy between the two organizations, a lack of homogeneous processes, etc.



I was under the impression that decrypting storage actually requires the passcode of the phone, but this bug makes it look like the device is able to decrypt itself without any external input. Does anybody know more context about this? What's the point of encryption if the device can just essentially backdoor decrypt itself?


It didn't work on a fresh reboot, so presumably it functioned like you're describing. But when he swapped the SIM live, without a reboot, the phone was already running with the key in memory.


On iPhone, keys are evicted from memory when the device is locked. Apps running behind the Lock Screen can only write files to special file inboxes (this is why the camera lets you take pictures while locked but doesn’t display earlier pictures, for example)

You’re telling me that android keeps keys in memory for its entire uptime?


That's not exactly true.

There is a data protection class like what you're describing, but it is not used very widely; the one most commonly used is exactly what is being described here, and it makes data available after first unlock.

https://developer.apple.com/documentation/security/ksecattra...


Maybe, but that should be limited to the scope of application data.

What baffles me is that the lock screen is a system-wide critical component and should in no way rely on this method.

The iOS lock screen should, in theory, respond only to cryptographic validation from the Secure Enclave.


oh huh! thanks for the correction!


That's not really true at all - you can of course unlock your iPhone without entering your PIN after every screen lock, which should give you a clue that the keys for disk encryption generally aren't purged when the iPhone is locked.

Some keys are, but not the ones that are the issue here.

I've even seen conditions where iOS devices reboot and still retain keys.


If you unlock the screen using Face ID, the OS gets the keys from the Secure Enclave which, depending on the model, does the face recognition itself or uses the normal processor in some kind of secure way. Likewise, if you unlock the phone using the PIN code, the OS gets the key from the Secure Enclave, which makes sure it's not easy to brute force. The PIN code is not the key itself, of course.

The only key that sometimes gets retained at reboot is the SIM unlock.


Yes, and that's how Pixels work as well. The condition in question here is of course when the secure enclave releases the keys and mounts the storage.


You can have a look at this document to answer that: https://help.apple.com/pdf/security/en_US/apple-platform-sec...

From what I gather, the more secure keys should be discarded 10 seconds after the lock event. Lower-security keys stay in memory to allow background activity.

Encryption on iOS, if I understand correctly, is on a per-file basis. There is thus no "mount" event to look for, and there is no value in using a less secure key if you do not intend to run in the background, because decryption is supposed to happen on the fly.

PS: Also, if I remember correctly, pressing the emergency sequence (holding power + volume up) discards ALL keys instantly, and unlocking requires the passphrase as if you had just rebooted. The emergency call doesn't need to be issued, just initiated (you must hold for 10 seconds or confirm on screen to make the actual emergency call).


Can you elaborate on those conditions? It's my understanding that this shouldn't be the case


You probably were seeing a respring (restart of the UI processes) not a reboot.


Presumably not all keys?

If you receive a phone call while locked presumably the phone can still access the address book to display the contact name and photo?

And music playing apps can presumably access their database of music to play songs whilst the phone is locked?


> If you receive a phone call while locked presumably the phone can still access the address book to display the contact name and photo?

Just so you know, this is true on an iPhone, but NOT if the phone has NEVER been unlocked since reboot. If you get an SMS/call in this state, it will just show the number. It can't read the address book.


Could be reading a cached copy of the contact list since it’s not very big

The music playing is a different story


Text messages (iMessages) can be displayed on lock screens. Not sure how they do that with encryption but maybe the notification is separate.


I think for iMessage, the actual messages are sent using APNS, so the message is in the push notification itself. Thus while you can see the message itself without unlocking, any older messages that are behind the Secure Enclave are inaccessible without keys.


This is correct. For example, when I connect my iPhone to my work-provided wi-fi and get a Tinder notification, I can partially see the message on the lock screen (once Face ID authenticates), but as Tinder is blocked on that wi-fi, if I want to read and respond in the app I have to pop over to cellular.


> You’re telling me that android keeps keys in memory for its entire uptime?

Yes. I've known that for quite some time, and yet I keep forgetting, considering how stupid this feels [1]. Google provides a "lockdown" button which is supposedly more secure (I think it's recommended for journalists?)... Well, it doesn't evict keys either. The only eviction is a reboot.

[1] It feels stupid because there has been a LOT of work to move from FDE to FBE, to allow two states of data encryption, and to tell apps to support both of them. Doing all this work just to be able to store incoming SMS and to display the wallpaper on the first lock screen...?


Do you have any more details about how that works on iPhone? It seems very hard to believe, given the complexity and diversity of background apps on iPhones, some of which access huge amounts of data that couldn't possibly be held in system memory (e.g. offline GPS/navigation apps). For example, Google Photos can send the phone's full photo library, even if large, to the cloud while the device is locked.



Of course we won’t see analogous bug fixes on the Apple side so we can’t compare too closely. Unless you worked on this codebase :-)


This.

I actually find this incredible. I am familiar with iPhone security but not android and had naively assumed Google probably did a better job on the non-UX aspects.


Nonsense. If that were true then things like backups and cloud sync couldn't happen when the device is locked. But of course they do, meaning the keys are still sitting there, freely accessible by the CPU, along with all the data on disk.

Your camera example is not at all convincing of anything special going on, since that's also the camera behavior of other OS's (like Android) that don't purge the keys. That's far more easily implemented as just a basic app policy than some security system that yanks the file system out from underneath running processes.


What do Windows/Mac/Linux do?


Key is in memory at all times after boot on all of those.

Full disk encryption is only useful on a laptop if the device is powered down fully.


That sounds like a security issue. Why are disk encryption keys not evicted in sleep mode? Seems like no apps should be running in sleep mode?


On Linux this is addressed by systemd-homed, which encrypts at least your home partition in sleep mode. Attackers could still try to manipulate the rootfs and hope the user doesn't detect it before using the device again.


It is a major security issue, and one of the reasons people running around with production access on laptops is insane.

It is hard to fix this too, because almost no background desktop processes behave well when they are suddenly unable to write to the disk.

Even if you solved that, your password manager has keys in memory, your browser has cookies in memory, etc etc.


Mac seems more secure in sleep:

"If your Mac has the T2 Security Chip (recent Intel-based Macs) or uses an Apple silicon chip (M1 family and future), security is significantly improved. The Secure Enclave in both systems uses encrypted memory, and has exclusive control over FileVault keys. The Intel CPU or the Application processor (M1) never sees the keys, and they are never stored in regular (unencrypted) RAM. Due to this, an attacker would only be able to extract encrypted keys (which can't be decrypted), and only if the system failed to prevent the DMA attack in the first place. These protections make it a lot safer to leave your Mac asleep."

From https://discussions.apple.com/thread/253568420


The most valuable information for an adversary is typically found in Ram. Like your password manager master password, browser cookies, etc. Ram can be dumped easily with the right equipment.

The only safe encryption is on a powered down device.


Sleep mode could suspend all activity? You could encrypt all memory before sleep?

It doesn't seem unsolvable, as long as sleep (closing lid) suspends all activity.

(lock with background activity is different, let's discuss the sleep case)


If you fully hibernate to disk, where the memory snapshot is encrypted with your FDE key, then you are good to go - but that is not locking, that is turning the computer off.


> Key is in memory at all times after boot on all of those.

I would think it would have to be while the device is mounted and OS locked, but surely if you dismount a secondary disk/container the key is purged?


As long as that secondary disk uses a different FDE key and you manually unmount it. This is easily done with LUKS on Linux but YMMV on other operating systems


It has to, if you want to be able to unlock the device with a fingerprint


The passcode is required to get access to anything the first time you start the phone, for the reason you mention, and after that the password is retained in the trusted execution environment. This way apps can continue to function in the background while the phone is locked and you can unlock with alternative methods like fingerprints or face recognition.


  It was a fresh boot, and instead of the usual lock icon, the fingerprint icon
  was showing. It accepted my finger, which should not happen, since after a
  reboot, you must enter the lock screen PIN or password at least once to decrypt
  the device.
i was surprised to read this part too. assuming that the author's version of events is accurate here, my best guess is that the device had not fully powered down, and was in either a low-power/hibernate or find-my-phone mode, where portions of the security subsystem were still powered, hence the device-unlock PIN was still cached. i don't otherwise see how a fingerprint alone would allow the device to be unlocked on a cold boot.

of course this detail doesn't take away from the rest of the report - great find xdavidhu!


Doesn’t seem like a full unlock, see the next paragraph: “After accepting my finger, it got stuck on a weird “Pixel is starting…” message, and stayed there until I rebooted it again.”


It seems to me this bug appears when a phone is booted, unlocked (and decrypted) once, and then locked again, but the decryption key still stays in memory.


This is virtually always the case with these kinds of vulnerabilities on smartphones. Security researchers often say whether an attack or vulnerability is possible "before/after first unlock" in reference to the fact that the security is a totally different story if the phone has been unlocked/decrypted since last boot.


In the write-up search for the bit that says "and one time I forgot to reboot the phone".

tl;dr: It's not an encryption bypass, it bypasses the lock screen once the phone has been unlocked once.


Which is equally important. Most people have their phone in that state far more often than powered down.


I wonder if this can bypass the "Lockdown" mode. I always recommended people switch the phone fully off in lieu of using Lockdown.


I have an obsession with classifying software bugs into general categories, looking for the "root cause", or more constructively, for a way to avoid entire classes of bugs altogether. I've been doing that for more than 20 years now.

This bug, if you look into the fix, falls into my "state transition" category. You can (and should) model large parts of your software as a state machine, with explicit transitions and invariant checks. If you do not, you still end up with states, just implemented haphazardly, and sooner or later someone will forget to perform a task or set a variable when transitioning from state to state.

This category is my strong contender for #1 problem area in all devices that have a user interface.


I think the root issue is one of which state is the default one. In Android the logged-in state is the default one, and the logged-out state is constructed by taking the logged-in state and essentially hiding it behind a modal.

The issue with this is that systems have a tendency to return to their default state. If the modal is dismissed or has a bug that makes it crash or has a memory corruption, or any number of things then the system will return to the default state.

I would turn it upside down, and let the logged out state be the default one. The logged-in state is then a lockscreen that is hidden behind a session. If the session dies you are back at the lock screen. The lock screen can't be dismissed because it's always there. If the lockscreen crashes the phone locks up completely because there is nothing else there to show.

It's acceptable for failures and exceptions to decrease privilege (boot you out), but they must never increase it.

Edit: Ideally the lockscreen should also run with reduced privileges so that it literally can't start a session even if it wants to, except indirectly by supplying a password to some other system.
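
In code, a toy version of that inversion might look like this (hypothetical, just to show the shape of the idea):

  // Hypothetical sketch: "no session" is the default; privileged data is reachable only
  // through a live Session, and any failure drops back to the locked default.
  class Session

  class Device {
    private var session: Session? = null   // null == locked; this is the resting state

    fun submitCredential(pin: String) {
      // The lock screen only hands over a credential; it never flips the state itself.
      if (verifyPin(pin)) session = Session()
    }

    fun onAnyFailure() {
      session = null                        // failures decrease privilege, never increase it
    }

    fun withUnlockedData(block: (Session) -> Unit) {
      val s = session ?: return             // locked: nothing to expose
      block(s)
    }

    private fun verifyPin(pin: String): Boolean = pin == "1234"  // stand-in for a real check
  }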


Anything related to security should fail safe.

Failure here is not from a lack of rigour; it's from a fundamentally flawed architecture.


This begs for the famous quote:

"When a fail-safe system fails, it fails by failing to fail-safe." (from the wonderful "Systemantics").

Yes, one should definitely try to fail safe. But managing your states and state transitions explicitly and carefully is a good way to avoid these kinds of bugs.


Do you have any writing I can read about your classification? This sounds extremely interesting and useful. (I have some related thoughts, but not 20 years' worth and largely not recorded.)


Hmm. Perhaps I should get my notes into shape and publish them… I'll think about it. I would need to force myself to post them to HN without looking at the discussion, though.


Reminds me of Orthogonal Defect Classification. Analyze defects for when they were introduced (during development, architectural design and so on) and what caused the introduction of the defect into the system in the first place.


I second this comment. It will be very interesting to see a rough sketch.


Another way to look at it: since the bug comes from a race condition, modeling your program along functional programming lines would minimize these bugs.


I would like to subscribe to your newsletter.


How come the security model is so basic?

I even think they should dismiss modals by ID instead of by type.

As this is a highly sensitive part, I think stacking lock screens on top of the unlocked menu leaves the door open for many bugs that could unlock your device.

The unlocked menu should be locked at all times, and use a flag to monitor if it’s locked/unlocked, and only flip the flag when you unlock with biometrics or with password.

If the flag is locked, then the whole screen is black and can’t have any interactivity via touch, mouse, keyboard, etc.

This way is more robust, so even if you manage to bypass the stack of lock screens, you end up with main menu locked.
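
Something like this, very roughly (hypothetical, not real Android APIs) - rendering and input are gated on the flag itself, not on whichever screens happen to be stacked on top:

  // Hypothetical sketch: the home UI checks a single locked flag before doing anything.
  class HomeScreen {
    @Volatile private var unlocked: Boolean = false

    fun onCredentialVerified() { unlocked = true }   // the only way to flip the flag
    fun onLock() { unlocked = false }

    fun draw() {
      if (!unlocked) { drawBlackScreen(); return }   // bypassing the lock stack still shows black
      drawLauncher()
    }

    fun onTouch(x: Int, y: Int) {
      if (!unlocked) return                          // no interactivity while locked
      handleTouch(x, y)
    }

    private fun drawBlackScreen() { /* ... */ }
    private fun drawLauncher() { /* ... */ }
    private fun handleTouch(x: Int, y: Int) { /* ... */ }
  }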


I was also thinking they should only dismiss by ID instead of type.

The other question is, why would background tasks be permitted to call dismiss at all? I can imagine a scenario where you get a malware app installed using whatever method. Then when you get physical access to the phone, you send a notification to the malware app. The malware app in the background calls dismiss on every possible type several times to unlock any possible security screens.

There should be some sort of lock/flag/semaphore that is held by the current top level security screen. Dismiss should only be callable by whatever process has a hold of that. Dismiss calls from anyone else should not only be denied, but processes that make such calls should be blocked, quarantined, marked as suspicious, etc.
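
As a sketch (hypothetical names again): dismiss() would only be honored when the caller presents the token that was handed out to the screen currently on top, and anything else gets logged instead of obeyed.

  // Hypothetical sketch: only the holder of the top screen's token may dismiss it.
  class SecurityScreenStack {
    private data class Entry(val token: Long, val name: String)
    private val stack = ArrayDeque<Entry>()
    private var nextToken = 0L

    fun push(name: String): Long {
      val entry = Entry(nextToken++, name)
      stack.addLast(entry)
      return entry.token
    }

    fun dismiss(token: Long): Boolean {
      val top = stack.lastOrNull() ?: return false
      if (top.token != token) {
        // Deny and record the caller as suspicious instead of silently obeying.
        println("rejected dismiss($token) while '${top.name}' is on top")
        return false
      }
      stack.removeLast()
      return true
    }
  }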


If a non-system app can call dismiss at all, it's already game over.


Oh, of course.


I was thinking that if I had to code this, this issue would cross my mind at least once: the question of "what happens when there are multiple screens stacked" and how it should get handled properly. This is what meetings are there for, to discuss such issues.

It almost sounds intentional, but at the very least like a very sluggish approach to security.


I think an even better approach would be to have the concept of fixed tiers of locking combined with evicting the decryption key for any Lock Screen above the basic PIN.

And you can only move down one tier of unlocking at a time. Unlocking SIM PIN moves you down one tier to phone PIN screen.
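
A minimal sketch of that (hypothetical tier names), where passing a challenge only ever moves you down a single step:

  // Hypothetical sketch: ordered unlock tiers; one successful challenge == one step down.
  enum class Tier { SIM_PIN, DEVICE_PIN, UNLOCKED }   // most-locked first

  class TieredLock {
    var tier: Tier = Tier.SIM_PIN
      private set

    fun passChallenge(forTier: Tier) {
      // Only the challenge for the tier you are currently on counts...
      if (forTier != tier) return
      // ...and it moves you exactly one step, never straight to UNLOCKED.
      tier = Tier.values()[minOf(tier.ordinal + 1, Tier.values().lastIndex)]
    }
  }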


I'd go for one screen with a queue of prioritized unlocking tasks which need to be completed successfully one after the other. These tasks could get the chance to hand over a fragment which the screen embeds and presents to the user, in order to properly modularize the tasks.


And this is something I came up with on the spot. Engineers should think about this like their lives depend on it; this is a major security flaw.

Even more robust would be to switch that flag off only by using the password, or a password derived from biometrics.


It is Android.


To be fair, we only know the source of the bug since it's open source. With iOS, we have no idea how bad the code is behind the scenes


#0016 | VENDOR: GOOGLE | STATUS: FIXED (NOVEMBER 2022 UPDATE) | REPORTED: JUN 13, 2022 | DISCLOSED: NOV 10, 2022 (150 DAYS)

Project Zero only gives vendors 7 or 90 days before disclosure...

The short version: Project Zero won't share technical details of a vulnerability for 30 days if a vendor patches it before the 90-day or 7-day deadline. The 30-day period is intended for user patch adoption.

https://googleprojectzero.blogspot.com/2021/04/policy-and-di...

Google should have given Mr. Schütz $200,000 just for not revealing it.


This is also the point that stands out to me the most. It is hypocritical and pretty close to negligent if they set such high standards for the other companies they investigate but can't live up to them themselves.

I can only hope this is a singular case, or else the argument "yeah, we collect your data but we also keep it safe!" falls apart pretty quickly.


If you follow published reports of Android vulnerabilities you'll see that taking longer than 90 days for a fix is actually not that rare. I myself had a similar experience a couple of times.


I am surprised there is an assumption that android fixes will get to users within 30 days


Seems to me like this impacts not only Pixel devices but all Android devices?

Patch was to AOSP: https://github.com/aosp-mirror/platform_frameworks_base/comm...

I don't have a locked SIM handy, but can someone please test on their non-Pixel device and confirm?


Not testing it right now, but my understanding is that the issue technically exists on every device, but the specific condition (putting the lockscreen on top of the secure screen stack right before `.dismiss()`-ing) is a Pixel software bug.


One of the commenters on the blog post stated that the bypass did not work on their Samsung device.



Thing is, most phone manufacturers will customize the lockscreen quite a bit, so it's possible (but not certain!) that it affects others.


I don't think many phone OEMs will actually take the effort to muck around in the lock screen mechanisms.


That's pretty much the first thing every single one of them does to differentiate the phone.


Yeah, agreed. I wonder if it affects LineageOS, which I run.


It does.


Oh no. Do you have a source?


Tried


Every once in a blue moon when I pick up my locked iPhone (which auto-locks in just 30 seconds) and engage the home button just as the screen comes alive from the gyro sensing movement, it unlocks on its own. It just flashes the PIN dialog and slides right onto the home screen. I don't use Touch ID, and never stored my print with it even once to test the feature/hardware. It's been happening ever since iOS 11, with both my 1st gen. iPhone SE and my current iPhone 8.


Do you have an Apple Watch? My phone unlocks as long as I'm nearby, wearing the watch and have it unlocked.


No Apple watch, and it can happen without the phone being connected to anything Bluetooth/Wi-Fi.


You'd better record it and show it to people if possible.


But the Watch tells you it’s unlocking the phone.


And unlocking with the watch only works on Face ID iPhones to make it more convenient when you're wearing a mask.


And it doesn’t do a super great job at it - often times the underside of a table, or sofa cushions will trigger it


Delete this comment and file a bug report - you could get $100k


I reported it years ago but the report was ignored and closed, possibly because I could not provide a reliable/reproducible procedure for triggering it.


Yeah I’ve seen this too.


This sounds like a UI race condition and actually gives me more confidence in the iPhone (unlike the Pixel, the unlock state isn’t tied to UI elements).

Unless of course you can do this long after it locks…


The issue is that it happens after the phone has locked, not that the PIN dialog happens to briefly flash before being bypassed.


Very strange. I’ve always used Touch ID when available so I can’t say I’ve experienced the issue myself.


    > "Hopefully they treated the original reporter(s) fairly as well."
Perhaps they should have reconsidered a bounty payment of some sort for the first bug reporter as well. Perhaps that's where the other $30k of the $100k went.

This actually says something interesting about bug bounty programs in general:

Given a high level of false positives, it's probably not uncommon AT ALL that sometimes it takes a couple of bug reports before something is reproducible or generates a high enough alert/credibility status, as seemed to have happened here.

What's the correct protocol to be fair to all of the original bug reporters in those situations? Split the bounty? Reward all of them with the full amount?


> Given a high level of false positives, it's probably not uncommon AT ALL that sometimes it takes a couple of bug reports before something is reproducible or generates a high enough alert/credibility status, as seemed to have happened here.

This was not a case of the same reports eventually being taken seriously. None of them were, until the author met people working at Google in person at an event, showed them the issue, and kept persisting.

Less "Oops, we received a couple of reports, better look into it" and more "this guy won't stop bothering us about it, we should probably look into it".

Security reports from proper pentesters tend to include easy-to-reproduce steps, and if you can't reproduce the issue yourself from those, you can ask them to elaborate - it's in their interest for you to be able to understand the report, since that's how they get paid.


> Security reports from proper pentesters tend to include easy-to-reproduce steps, and if you can't reproduce the issue yourself from those, you can ask them to elaborate - it's in their interest for you to be able to understand the report, since that's how they get paid.

Fair point, but it's also in their interest to overestimate the impact of the bug they found. And even if the reports are well written, many reports that I've seen (mostly from new gray hats) were for things that weren't actually exploitable, even with aggressive PoC code.


Having been on the other side of a bug bounty: this screams of not having enough resources to take care of the reports.

At Google's scale it's probably impossible to have a team large enough to deal with all the reports.


This is where I find out my otherwise completely functioning Pixel 3a no longer gets security updates, as of May.

I knew and accepted that it wouldn't get new features and major android versions, but to not even get security updates, after only three years?

So my options are: live with the piece of technology in my life that is both the most vulnerable to physical security issues and has the widest access to my critical information no longer getting active security updates, attempt to root it and install my own build (is CyanogenMod still a thing?), or throw a working piece of complex technology in the trash?

Amazing


There is LineageOS now, https://wiki.lineageos.org/devices/sargo/

The most recent Pixels get a few extra years of only security updates after regular updates run out. So at least they have improved the policy somewhat now.

It would be great if Google went ahead and fixed this problem in particular for more devices, though.


Dunno whether you'll read this, but as a fellow 3a owner I just discovered that there is a September update image on the google website: https://developers.google.com/android/ota#sargo -- I guess maybe this will appear as an OTA update eventually? On the other hand it still doesn't have a fix for this bug, so the situation isn't really any different :-(


Or install GrapheneOS? Pixel 3a support is being phased out slowly but surely but currently it's still getting all security patches.

EDIT: I stand corrected, they stopped support in September. :(


GrapheneOS has just patched this vulnerability for the Pixel 3a!


That's great news!

Can you tell me where to find release information for the Pixel 3/3a? I see that there's a new version from Nov 11 (only) for the 3 series, but the most recent changelog is from Nov 10 (https://grapheneos.org/releases#2022111000) and I don't see any information there regarding the lock screen bug.


So you migrate to Apple


That would be the "throw a working piece of complex technology in the trash" option, yes


Bottom line: have buddies at Google if you want anything to ever get fixed.


Well there just went my chances of getting anything fixed at Twitter or Facebook...


I doubt Twitter would be fixing much of anything even if you knew someone still there.


and Facebook... and every other company too cheap to pay for support.


Applies to YouTube too.


which is also Google


Technically Alphabet.


YouTube is a part of Google. It's not a separate "bet" like Waymo.


A fun and interesting read. But it is frustrating to hear that such a major security bug was ignored until a happenstance meeting with Google engineers.


Wow, this is very serious - it pretty much turns every "left my phone on the bus" incident from "oh well" into "all your data was compromised". I don't know how Google couldn't take this seriously. Even after the poster physically demonstrated it they took months to fix it. For sensitive data with legal disclosure requirements this is a game changer.

Very disappointed with Google here - even though I lost a lot of trust in them in other areas, I still rated their security stance as excellent, especially on their own phones.


This is pretty terrible.

* The security screen "system" works as a stack, and the individual handlers for them don't actually have a reference to their own security screen. That seems like a terrible design; this design caused this bug, and the fix for it feels like a band-aid that could have unintended consequences (and thus new security implications) in the future. The handler for the security screen should have a reference to the security screen itself (or an opaque token or something like that), so it can be sure it is only dismissing its own screen. Passing a "security screen type", as the new, "fixed" code does, is not specific enough for this kind of code, and still seems unsafe to me.

* I'm a bit confused as to how this could unlock a newly-rebooted phone. Isn't the user data partition unmounted and encrypted? How could this partition get decrypted if the user doesn't get the opportunity to enter their device PIN, which should be needed to gain access to the partition's encryption key? I guess maybe it isn't decrypted, and that's why the system got stuck on the "Pixel is Starting" screen. Still, pretty concerning.

Meanwhile, my Pixel 4 still only has the October 2022 security update, claims there is no update waiting, and is presumably still vulnerable.


It doesn't get decrypted. The data is still safe after a reboot. That's presumably why the phone hangs for the author after a reboot. Although some comments have said Android itself loads (and I guess also the launcher), it can't really do anything.


I went to buy a phone maybe two months ago. Before I had my current Google Pixel 6, I used a OnePlus 3T for six years, and even then I only stopped because I sat in a hot tub with it on. At the T-Mobile store, I announced to the salesman that I would be back to buy a Pixel 6 when they had it in stock, and a man pulled me aside and privately asked me why I wanted to buy a Pixel.

He explained to me that he was actually working in the hardware division at Google and that the team that he was managing was responsible for some parts of the Pixel's design. But he added that he had never actually talked with anyone out "in the wild" who owned a Pixel or made a positive attempt to buy one. He went on to explain that most of his team didn't use a Pixel either - they were all pretty much using iPhones, but some were even using Samsung devices.

I understand that this was someone from the hardware team and it doesn't necessarily reflect on the people who work on the Android OS, but I feel silly for not having taken what he said into consideration when I finally bought a phone. If the people working on a device don't even want to use it themselves and can't figure out a compelling reason for anyone else to use it, shouldn't that have been a strong signal to me that I shouldn't have selected it? But I did, and I've been regretting it since. Great camera though.


I'm... not sure that I would take a random person* in a T-Mobile store at their word when they claimed that they were "actually working in the hardware division at Google."


I recognize that I should've been clearer: the person who pulled me aside was a random customer waiting in line, and he did so when I told the T-Mobile guy that I was planning on getting a Pixel, which they didn't have in stock. I did ask a fair number of questions about what it was that he did, to determine that it wasn't someone older who was just messing with me. Granted, this was some number of months ago, but if I recall correctly he was trying to figure out why people wanted Pixels, because on his team people would use iPhones because their family members used iPhones, or because it was easier from an enterprise security standpoint with BYOD. I'm not sure I remember specifics beyond that.

It's kind of a post-hoc realization that I should've used his admission as a reason to second-guess the purchase of a device which, I've come to discover, has a stock messenger application that fails to sync message receipt times, gets very hot to the touch, and drops cell tower connections until rebooted. And, as the article we're replying to points out, it had a lock screen bypass bug that wasn't fixed for months.


In the story above he tells the salesperson he wants to buy a Pixel, but it is someone else who says they work at Google.


Thanks, I've updated "salesperson" to "person."


So basically, even sketchier.


No, I'm quite more likely to believe that a customer at T-Mobile is a googler than to believe that the salesman is.


Especially if this is a store in mountain view!


Did this self-reported hardware engineer from Google tell you WHY his colleagues don't use a Pixel?

You could have just as likely been listening to hot air from a random individual. Perhaps an Apple store employee with an axe to grind.


I'm not the OP but I know a couple of Google SRE's and an Android Auto HCI person and they use iPhones...


Sigh, like Microsoft UI designers using MacBooks. How does someone in charge not demand that the developers dogfood the product?


Because then you get headlines like Meta got recently, where developers are being forced to use Horizons(sic?).

TBC I also agree that you should dogfood things you build, especially in the cruisy world of software development where if you really hate what you work on you can just go somewhere else. It is a bad look in the media though


Or maybe T-Mobile gave higher commission for apple sales that month.


The main reason for this is just fucking iMessage.

It's not even just that iMessage segregates non-iPhone messages by color. It screws up video and group texts.


I don't get the iMessage hate.

Just use another app if you want. They already provide compatibility with SMS. What more do you want?


Out of curiosity, why have you been regretting it? I've been using Pixels for quite a while now and generally been quite happy.


Not OP, but for me, the biggest annoyance with Pixel 6 compared to older devices was the fingerprint reader under the screen - so uncomfortable to use, and so much less precise than dedicated readers on the back like they had before (or on power button, like some other phones do).

A general frustration with the entire Pixel line is the lack of video output options. It's basically Chromecast or GTFO - no Miracast, no USB-C video. It's kind of sad when an Android phone has worse compatibility with generic hardware than iPad! And the most annoying part is that Google deliberately removed all these at some point in the past.


There are plenty of reasons people have knocked the Pixel over the years, but none are actual red flags.

For example, I think people give Google grief over making it difficult to unlock the bootloader, but the same can be said of every other vendor.

In my experience, using the Pixel is good enough that I don't miss my Nokia 6.1 running LineageOS too much.


>For example, I think people give Google grief over making it difficult to unlock the bootloader, but the same can be said of every other vendor.

It's not so much that as it is buying an unlocked Pixel and RMA'ing it when hardware problems happen only to receive a locked phone in return. This is the sort of thing that makes people angry.


The fact they’ve had multiple critical bugs relating to emergency calls over the years is a pretty big red flag to me.


What don't you like about the Pixels? I just switched to an iPhone this year and really regret it (the UX is horrible) and miss my Pixel.


I've also struggled to install GrapheneOS and Calyx on my iPhone Pro Max. /s

(I actually do have both and definitely prefer the Pixel -- especially the cameras on the Pixel are amazing in low light, but it's a bit annoying how the official Gcam app seems to expect that Google photos is installed.)

How funny is it that the best way to de-Google is to buy a Google device, and that Apple is incredibly prescriptive about knowing exactly who you are, connecting to wifi, and getting your purchasing on file before you can even get through the initial setup on an iPhone.

The one thing I like about iPhone over Android is that the animations are a bit nicer and more polished, and the stock color scheme is pretty bright (but awful at night), and everything else about Android seems to be better.


Modern carriers are migrating to VoLTE, and LineageOS is unable to implement this outside of a few devices, meaning that many phones have been dropped from the latest release.

As [w]cdma is shut down in preference for 5g and LTE, Pixels (model 3 and above) will be more desirable for users who wish to run Android without Google on modern cellular networks.

I am one such user.

Supposedly, two different implementations of VoLTE exist in AOSP, neither of which is used outside Pixels (if I understood previous discussions correctly).


I have a Pixel 6 because I bought it through Verizon when they offered a $5/month rebate for 36 months = $180 off. This was at the beginning of this year, so before the Pixel 7 released. I assume they offered the rebate because they had a whole lot in stock that nobody bought, but everyone in my family got a Pixel because of it.


I met a guy once who pulled me aside and told me he was an alien.


Your perception is that the probability of seeing someone that works at Google in the United States is as low as seeing an extraterrestrial capable of speaking English? What justification do you have for believing that I would be incapable of asking questions that would be able to tell the difference between an actual engineer at Google, and a random person pulling my leg? Google employs more than a hundred thousand people in North America alone. Some people that are now employed there were in my graduating class! Do you have some reason to call me an idiot?


Mr. Edman. I have no doubts about your experience. I'm sorry if I've hurt your feelings.


What was the pragmatic purpose of saying "A person pulled me aside and told me he was an alien" if not to cast doubt on me talking to a Google employee? You didn't hurt my feelings but you were clearly trying to express doubt that I talked to a person that worked at Google. Where does that doubt come from?


Mr. Edman,

I have received and read your response regarding my apology to you. I'm concerned that I may have upset you more than I realised.

My grandfather once told me not to take random strangers' comments too seriously. I try to live by that advice, whether it's comments from internet strangers or, for that matter, customers at the phone shop.

This is of course not advice from me to you. But I thought I'd share it with you in case you might find it useful too.

DBNR,

R. Root


roll to disbelieve. i think someone was pulling your leg

especially the bit with samsung devices, i've had the misfortune of setting up a samsung phone for a family member and the amount of crap on those is just unbelievable


Could it have been a sales technique? Perhaps to sell you an iPhone. Perhaps the iPhones they were trying to sell have a higher commission and margin than the Android phones.


I wish closing things as "this is a duplicate" essentially required disclosure of the original (dupe) report.

It may well be that it's a dupe, or it may be something that looks similar but not actually the same. And indeed as in this case it's only the follow up report that got the bug fixed.

In this case it seems that contacts at google allowed them to escalate anyway and get it fixed.

But so often and especially with other programs almost everything gets closed as "dupe" which is just dispiriting.

In any case, if something this serious is a duplicate then there's the suspicion it went unfixed for long enough to be independently discovered and reported which is worrying.


Everyone who reports an undisclosed bug should get a share of the bounty; this incentivizes them to stick to the embargo.

If too many people are reporting the bug before you fix it then you have other problems.

I also start to feel that at Google's scale bounties this serious should start doubling every month.


Do you mean each new person should get a new bounty, or all reporters should split the bounty? The latter does not really incentivize much, but the former incentivizes reporters to collude with other reporters (i.e. you find a bug, tell your 40 friends to report the same bug, you get a kickback from all your friends who also reported it. $$$$).


The latter does incentivize everyone who stumbles across the bug to not disclose it. At the same time, it's sad for the original researcher whose bounty gets smaller with every new person stumbling across it.


It does imply that finding it was easier than bugs where you are the only reporter, partially justifying lower rewards.


No. A bug that can be trivially found has a higher likelihood of being exploited, and thus higher impact.


Higher impact, yes; but if it is just luck that you are the first of many to find it, and you did not invest a lot of work in its discovery, it is reasonable to pay less. Under a "closed as dup" system, the probability is you get nothing for reporting trivially found bugs, even though you are still providing valuable information (that lots of people can find it).


Well, I see where you are coming from; the point of bug bounties is to reduce risk to the company, not necessarily to reward the effort of the researcher. There is a sense that a bug you have to be NSA-level skilled to find is less likely to be exploited than a bug that every script kiddie is stumbling upon.


> Everyone who reports an undisclosed bug should get a share of the bounty; this incentivizes them to stick to the embargo.

Having worked with bug bounty programs, I can guarantee this would be abused to no end. Reporters would enlist their friends and family to submit as many duplicate reports as possible.

There are a lot of good security researchers out there doing work in public, but bug bounty programs also get flooded by people looking to game the system in any way possible.


I mean you all share the fixed bounty amount. You could only game the system if you expected other people had already found the bug. However, this would be risky, as it is fairly easy to detect and penalize. The common case is still that you only get one reporter per bug.


Yeah for purposes of the reward it should only be allowed to be considered a dupe if it duplicates a disclosed bug.


I agree, but just to play devil's advocate: if I discover a bug, report it, then tell all my friends to also file a report before it is fixed, they'd have to honor multiple bounties.

I, too, am frustrated that I've read far too many stories about someone reporting a devastating critical exploit and all they get back is "this is a dupe" without further explanation. Makes one paranoid that employees are working with someone externally, backdating their bug reports, and splitting the bounty.


You'd probably violate the agreement so you and everyone else technically wouldn't qualify and would be committing fraud. That said there are other options, such as splitting the reward for a vulnerability amongst those who report it (even the dupes). This would incentivize people not to disclose the vulnerabilities while keeping the payouts static.


I suppose the risk is people could 'game' the system.

Person A finds the issue, reports it.

Then Person A secretly tells Person B about it (with no apparent connection), and Person B reports the same issue a few weeks later, but with slightly different code/description so it looks ever so slightly different.


Split the reward between everyone who reported it. It's even still kind of fair: The more people find it the easier it was to find.


Of course, then when A and B independently find a bug, B can enlist C, D and E, thus taking 80% instead of 50% of the bounty.

No system is perfect.


> I wish closing things as "this is a duplicate" essentially required disclosure of the original (dupe) report.

Only if it has been fixed and is allowed to be talked about, else malicious actors will submit speculative bugs to see if they catch anything.


Speculative bug reports are irrelevant, since they don't have a repro/proof of concept.


I’ve run into this with other vendors and really wished it’d get you CCed on updates so you didn’t have to ask for status periodically. It definitely doesn’t give a good impression when things drag out for aeons.


What's crazy is that it's 100% in the vendor's interest to keep this person happy, who they know can cause massive damage to their system, completely legally. The only leverage they have is the reporter's greed to get a bounty.


It's not greed to hold a company accountable to its promises of compensation.

Even so, surprisingly many researchers disclose a bug after setting a reasonable fix deadline, at the risk of forfeiting compensation. Kudos to them!


Surely in this case the second report must have added some details, since they weren't fixing the original report, and I assume Android doesn't just sit on lock bypasses.

Seems to me that if you report something that significantly recontextualizes a previous report (e.g. make it go from a low to a high severity), then your report shouldn't be considered a dupe.


I've reported some bugs to programs on Hackerone before that were flagged as dupe and the triager did reference the original report. Chrome team does this too.


I wonder how many LEO agencies are now digging androids out of the evidence closet.


Sounds like this only affects phones that have been unlocked since the last restart, so unless they have kept them plugged in, it is unlikely that this attack would be successful.


This is why Graphene OS has an auto-reboot feature. So that a device cannot be kept plugged in until an exploit like this is discovered.


Ah, yep. I wonder how sophisticated (or not) a typical police department is with these kinds of procedures.


Some discussion elsethread[0] suggests that that may only be the case for devices that are encrypted, as the passcode in that case would be part of the key for unlocking the contents.

If that's the case, it's possible that this attack may still work from a fresh boot for unencrypted devices.

[0] https://news.ycombinator.com/item?id=33550327


LEO already have access to locked phones via stuff like GrayKey.

https://www.grayshift.com/graykey/


I am always skeptical of these "lawtech" companies that sell magic unlocking devices. Are we really to believe that there are unpatched security holes in all major devices (both Android and iOS) that allow this kind of backdoor access?

I find it rather convenient that the "detailed support matrix" is available to current customers only; it seems to me that the actual number of supported devices/operating systems is limited to things such as outdated Samsung Galaxy phones and the like.


It's complicated, but yes, there are a lot of ways to unlock devices, some of which involve straight-up exploiting the device. Keep in mind, btw, that a lot of the criminals local LE is going after with these devices are not usually running fully patched iPhones or Pixels.


>Are we really to believe that there are unpatched security holes in all major devices (both Android and iOS) that allow this kind of backdoor access?

If you are at all familiar with the histories of jailbreaking, previous exploits, and the gray unlock market, it’s unreasonable not to consider this to be the default case.


It works. It's basically a software brute force that works great for 4-digit PINs and takes longer for longer passcodes. Other offerings include a keylogger for the PIN/password after they “return” the device to the suspect.


How would you install a keylogger on an encrypted device without rooting it or deleting user data?


I guess you could replace the screen with one that logs taps.


Maybe you could sniff the data coming from the touchscreen with something you physically install inside the phone.


> It's basically a software brute force that works great for 4 digit pins, takes longer for longer passcodes

Since the PIN/password isn't actually the encryption key and is instead just the code that is provided to the security module/TPM on the device, I fail to see how this can be brute-forced. Unless there is also a magic hardware backdoor in Android phones, but in that case, why would there need to be private companies, and how would they even have access to it?
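Roughly the mental model I have (a purely illustrative sketch in Kotlin, nothing like Android's actual implementation): the disk key depends on a per-device secret that never leaves the secure element, and the element throttles guesses, so there should be nothing useful to brute-force offline.

    import java.security.MessageDigest

    // Illustrative only, NOT Android's real key derivation: the disk key depends on
    // a per-device secret that never leaves the secure element, and the element
    // enforces an attempt limit, so guessing has to go through the hardware.
    class FakeSecureElement(private val deviceSecret: ByteArray, correctPin: String) {
        private val verifier = derive(correctPin) // stored inside the element
        private var failedAttempts = 0

        private fun derive(pin: String): ByteArray =
            MessageDigest.getInstance("SHA-256").digest(deviceSecret + pin.toByteArray())

        // Returns the disk key only for the right PIN, and only while the
        // hardware-enforced attempt counter allows another guess.
        fun unlock(pin: String): ByteArray? {
            if (failedAttempts >= 5) return null // throttled (or wiped) by the element
            val candidate = derive(pin)
            return if (candidate.contentEquals(verifier)) {
                failedAttempts = 0
                candidate // doubles as the disk key in this toy example
            } else {
                failedAttempts++
                null
            }
        }
    }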


Confiscated phones have often been sitting in evidence for months and are therefore on a relatively old patch level. If old vulnerabilities come out at any point, these can be used. Keeping the phones on and connected to a charger in the evidence lockers doesn't seem like too much work.


> Keeping the phones on and connected to a charger in the evidence lockers doesn't seem like too much work.

There's no way that's a standard procedure.


Why? Seems pretty intuitive to me in a time when everything is encrypted.


If the phone is set up for automatic updates, it'll restart within a month (most of the phones I've had get monthly security patches) and you'll be back in a fresh-boot state. You can't turn off the updates without first unlocking the phone, giving you a rather limited window to attempt to exploit the device.


Will it reboot if it's not on network?


Updates need network access. If the phone isn't on a network, then it won't reboot.

Police can't really pop out eSIMs, which means they need to keep the phone in an RF-proof bag or work in an RF-proof room.


Which, again, wouldn't be too much work. Also, my Android phone does not update and reboot automatically.


Android disabling USB data by default has been a thorn in their side.


I think it has problems in some cases, e.g. PIN codes longer than 6 digits.


Why don't Google and Apple buy this product then proceed to analyze and close all holes?


If I had to guess: not everyone can buy this software, and Apple/Google are not welcome as customers by the sellers. Even the usual customers (law enforcement) are not very likely to pass exploits along to them, because their own work would become more difficult.


Very weird implementation with UI stacks and dismiss. The way we designed a multi-step flow for a web app was basically a sort of state machine/flow that defines the possible transitions,

say password > mfa1 > mfa2 > done

and, as each step completes, determines the next security step for this particular user's configuration, allowing only that single transition. Once we reach the done state, the authentication is marked as successful.

Not storing auth state in the UI (regardless of any MVC concerns) and allowing only a very narrow set of state transitions seems like a trivial design choice. I assume Google has no shortage of people for security-focused design.

The UI stack being created all at once and then dismissed, rather than created/called on demand as each state transition happens, also seems like a very weird design. Perhaps I don't understand the reason because I'm not an Android programmer.
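Roughly what I mean, as a minimal sketch in Kotlin (illustrative only; made-up names, not our actual code):

    // Each step only knows the single step that may follow it for this user's
    // configuration; nothing outside the flow can "dismiss" an arbitrary step.
    enum class AuthStep { PASSWORD, MFA1, MFA2, DONE }

    class AuthFlow(private val stepsForUser: List<AuthStep>) {
        private var index = 0

        val current: AuthStep get() = stepsForUser[index]
        val isAuthenticated: Boolean get() = current == AuthStep.DONE

        // Completing a step must name the step being completed; completing
        // anything other than the current step is rejected outright.
        fun complete(step: AuthStep): AuthStep {
            require(!isAuthenticated) { "Flow already finished" }
            require(step == current) { "Cannot complete $step while at $current" }
            index++
            return current
        }
    }

    fun main() {
        val flow = AuthFlow(listOf(AuthStep.PASSWORD, AuthStep.MFA1, AuthStep.MFA2, AuthStep.DONE))
        flow.complete(AuthStep.PASSWORD)
        flow.complete(AuthStep.MFA1)
        flow.complete(AuthStep.MFA2)
        println(flow.isAuthenticated) // true; completing MFA1 a second time earlier would have thrown
    }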


> The same issue was submitted to our program earlier this year, but we were not able to reproduce the vulnerability. When you submitted your report, we were able to identify and reproduce the issue and began developing a fix.

> We typically do not reward duplicate reports; however, because your report resulted in us taking action to fix this issue, we are happy to reward you the full amount of $70,000 USD for this LockScreen Bypass exploit!

Lots of mixed feelings just reading this, but at least in the end it seems like a positive outcome for everyone.


Ah, that's a nice hack to avoid having to pay your bounties! First report: "can't reproduce, sorry." Subsequent reports: "duplicate, sorry." Then fix on whatever schedule you feel isn't too blatant.


And they stiffed him $30K


Appalling handling on Google’s end here. The duplicate issue part I can understand, but why should it take two reports of a critical vulnerability to take action? Surely when the first one comes through, it’s something you jump on, fix, and push out ASAP, not delay to the point where a second user can come along, find the bug, and report it.

The refactor that’s mentioned towards the end of the article is great, but would you not just get a fix out there as soon as possible, then work on a good fix after that? For a company that claims to lead the way in bug bounty programs this is a pretty disappointing story.


You can read in the conversation that Google was not able to reproduce it the first time the bug was submitted:

> The same issue was submitted to our program earlier this year, but we were not able to reproduce the vulnerability. When you submitted your report, we were able to identify and reproduce the issue and began developing a fix.

I wonder whether it really was the same bug, or whether they simply made some mistake when trying to reproduce it.


Agreed. If the first bug was

> I did something weird after putting in a new PIN, and I was able to access my home screen without my password, but I'm not sure of the exact steps I did

then that's not really a duplicate. If the original bug report doesn't have enough information to recreate the steps, the second one is the only real bug report.


Yes. The first one is more like a user complaint than an actual reproducible bug report.


Then if that’s the case, the author should have received the full payout, not a “thanks for making us fix this” payment.


Just trying to rationalize, but if the “external researcher” was hired by Google to find security issues, Google might have a requirement to fix the bug at its own pace.

I would personally be highly suspicious of a security flaw being a duplicate, though. It can be a very convenient excuse not to pay the bounty.


Reporting and investigation matter. Perhaps the initial report covered the lock-screen bypass but only ran into the still-encrypted phone state, so it was dismissed as not being exploitable (see other comments), whilst the second report actually got inside an active phone (and was also written up in a simple, concise, and reproducible way).


So basically Google wanted to give this guy nothing. Then he set a hard deadline for disclosure, and Google managed to buy him off for $70K so they could stick with their own deadline.


It would not surprise me if, in some cases, Google runs the exploit up the tree to the NSA to see if they’re actively using it for matters of national security, then slow-walks the patch to release. Given how easy the exploit is (no software needed, no special equipment beyond a paperclip), it would not surprise me if this has been in wide use by various groups for several years now.



I agree that it appears to have been the disclosure threat that resulted in the bounty, but I don't agree (if I'm reading you correctly) that the OP acted unethically. It sounds credible to me that he was just doing everything he could to get the bug fixed.


According to the article, the reporter had already decided before the bounty had been set that they would wait for the fix:

> I also decided (even before the bounty) that I am too scared to actually put out the live bug and since the fix was less than a month away, it was not really worth it anyway.


According to the bug thread transcript, Google did not yet know he wasn’t going to disclose in October when they offered the $70K.

https://feed.bugs.xdavidhu.me/bugs/0016


Or more charitably, by the terms of the program he wasn't eligible for anything, but they gave him seventy thousand dollars out of goodwill and the spirit of the program.


Then they would have done this before the sorta-threat of his own disclosure date.

