- The "security bug" it references isn't one at all. Accessibility Services can draw over other apps and interact with them by design. They're designed as an alternative way of interacting with Android for disabled people; consequently, they can do anything a normal user can do. The security "researcher" found a non-bug, then convinced an ignorant journalist at ZDNet to publish an article about normal, correct Android functionality.
- If it were somehow a bug, it would still be impossible for Google Authenticator to patch or mitigate it. Accessibility Services act as the user, therefore they can do anything a user can do. You cannot block them, for good reason: doing so breaks disabled users' ability to use Android.
- The third party apps they reference don't block Accessibility Services either, making it hypocritical to criticize one while pointing to others with the same fictitious flaw.
- Switching to a random open source TOTP/HOTP 2FA app might reduce your security, not increase it. You'll either compile it yourself, which requires enabling "Allow installation from unknown sources" (bad), or, more commonly, you'll just grab it from the app store, in which case that random OSS developer's account can feed you evil-ware either intentionally or via compromise. As soon as it is from the app store the "open source" nature is completely irrelevant, it isn't a security guarantee.
- No unpatched security bugs have been found in Google Authenticator. Google Authenticator's lack of updates is annoying, though (unfixed bugs, poor backup support, missing QoL features like a secondary PIN, etc.). Just not in the way the article frames it.
This is one of my biggest pet peeves about Android. I wish I could add my own signing key for apps I compile myself, rather than just being given the choice of Google Play or anything at all.
Is there a way around this where you can say "no, just let me install from anywhere" or is it forced unless you root the device?
An app you compile yourself (assuming you at least tried to read the source code, the open source Google Auth is pretty brief/readable, I've reviewed it before) is vastly more secure than installing whatever the Play Store serves you.
Literally billions of regular users cannot do that, and they regularly install malware-laden APKs.
I have no way to easily determine that an app on the Play Store actually came from my bank. At best, I can look at the install count, hope the app store does a halfway decent job looking for fraud, and hope for the best. Maybe, if I'm lucky, they have some sort of verification program?
The idea we actually trust app stores from Google or Apple over a direct install from the institutions we want to interact with is hilarious, especially considering all of the rampant malware and fraud issues on the Play Store.
If they distribute an APK, how is the bank to provide updates for its app? Build its own updater architecture? With a store, a MITMed update could at least be cancelled centrally.
Telling people to prefer apks is setting them up to be hacked.
The article doesn't call this issue a "security bug" anywhere. Neither it nor ZDNet's article claims this is a vulnerability or a bug; both are reporting on the relative novelty of malware targeting 2FA apps.
The article doesn't even claim Google Authenticator or third party apps should mitigate it, or that third party apps are any better at it. The first three paragraphs and the last one of this comment are just railing against a strawman. There are zero claims about vulnerabilities or security weaknesses in this article.
Finally, I frankly find your claim about open source apps being bad ridiculous. What's the alternative, never trust indie developers, only trust the big corporations? Also: the blanket "Allow unknown sources" has not been a thing since Android 8.0 - you have to select individual trusted sources (e.g. F-Droid, which actually has a superficial volunteer review process). You can't buy a new Android device which forces you to use blanket unknown sources anymore unless you really try.
I like contrarian arguments on HN as much as the next guy, but only the ones that actually respond to what the article says. At worst, the article's use of the ZDNet piece is too opportunistic in driving its otherwise valid points.
> The article doesn't even claim Google Authenticator or third party apps should mitigate it
Second sentence of the article is "I doubt Google will ever release a fix for this issue". We could argue about the semantics of "issue" vs "bug" (though I'd rather not), but it seems pretty obvious the author thinks of this as a fixable "issue", when the parent comment's point is that it absolutely isn't.
Nobody called Open Source bad. What got posted was:
> As soon as it is from the app store the "open source" nature is completely irrelevant, it isn't a security guarantee.
Unclear how you got from "no security guarantee [with compiled open source in the app store]" to "open source is bad." Those two things aren't even similar.
The biggest problem with open source in the app store is actually automatic updates. It is conceivable to build the app locally and compare the resulting APK to the app store version, but it isn't very realistic to do that for every update (or even to follow the repo changes long-term).
Most of the time these types of issues aren't important because apps are well sandboxed on Android. But in the case of password and two-factor managers, special consideration has to be taken, because the data within the app's sandbox is highly valuable and exfiltration is trivial given that all apps have the default android.permission.INTERNET privilege.
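For context on that last point: android.permission.INTERNET has the "normal" protection level, so an app only has to declare it in the manifest and it is granted automatically at install, with no runtime prompt the user could deny. A typical manifest line:

```xml
<!-- Granted automatically at install time; "normal" protection level,
     so there is no runtime permission dialog for it. -->
<uses-permission android:name="android.permission.INTERNET" />
```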
None at all. Or did I miss something?
I don't like that viewpoint.
For example, take the story from a little while back about how Gmail add-ons could access your messages. Any programmer looks at that situation, and they think, "Duh... OF COURSE an add-on could access your data, because how else on earth could it possibly work?"
But a non-programmer doesn't think this way. They think, "Gmail is a platform and a service that I use, run by somebody I'm supposed to be able to trust to handle all that stuff for me. They ought to reason through all these things before they offer it to me." They very well may not have a good enough understanding of how software works to reason through the implications like we can. It isn't self-evident to them where your data goes and what will happen to it if you do this or that. Moreover, Gmail (and tech generally) isn't the center of their universe, and they may not realize these add-ons come from a third party, or may assume a certain level of vetting was done if they do.
Whether this expectation is right is one question. In theory people ought to take responsibility for their own choices.
But putting that question aside for the moment, when designing a product, we ought to never forget that many users lack either the knowledge or the inclination to think through all this. You're building a product that you're going to hand over to users, and a lot of users, maybe even the majority of them, are going to approach it this way. I don't have a great solution to offer, but blindly assuming that all users are knowledgeable certainly isn't the right path.
So I do understand Apple's approach. Unfortunately, Apple muddies the waters by mixing up security protections with their own business interests and the interests of authoritarian regimes. To a degree I fear this is unavoidable.
I think the best solution is to have that very restrictive app store that you can trust to vet things for you, but in addition to that they should permit side-loading for apps and content that you have to vet yourself or rely on a third party vetting system.
It was completely impossible to explain the technical background to him (e.g. that him sending an email through the program and the program sending an email by itself are technically the same thing to Gmail), and he wouldn't accept that Google is quite rightly informing him about the implications (he insisted Google should change the text to say it only allows him to send emails).
As you said, he expected Google to give him technically impossible guarantees and to ensure that the third-party software can't do anything without him confirming it.
While that's technically true, it doesn't reflect the actual state of affairs. 99%, probably even 99.9%, of users don't use their devices responsibly. And then companies that bear no responsibility for the actual hack still suffer PR damage, because journalists don't report responsibly either.
When we realize that many exploits can mean devices get co-opted by bad actors to serve some rather nefarious purposes, we have to do more than "trust that the user will be smart" in order to suppress dangerous or economically destructive criminal uses of these devices.
More and more I realize that it's not our strategies that I dislike, it's how incompetent and error-prone users actually are. If we were perfect users, none of this would be required.
But staying in a single apartment for the whole life is not healthy for most people or society. Modern computing grew because people were able to build and experiment on things way beyond what the original manufacturers of hardware envisioned. Even if that gave some people Windows viruses.
It's like saying "quarantine is pointless in my rural village of 29 people, so therefore it's also pointless in my city of nine million"
"It seems that you are using a screen reader or something else that uses the Accessibility services. If you have no idea what this is, do not continue past this screen. The below buttons will be enabled in 30 seconds [I know what is happening] [This doesn't sound familiar, exit]"
(Of course it only appears when you actually have something subscribed to the Accessibility services, which is how I found out.) Yes, it's inconvenient, but it forces you to actually read the message instead of blindly barging through Accept, Accept, Accept.
And how does this actually prevent abuse? Wouldn’t malware happily wait 30 seconds and click “yes”? And if the dialog isn’t accessible to the screen reader, doesn’t it then exclude some users entirely?
So that's not actually how it works. If it were just reading the screen, it would only get a single OTP code, which is valid for at most 60 seconds.
The trojan actually tricks the user into enabling accessibility privileges. Those privileges give the trojan the ability to control the phone; with them it can navigate the settings and grant itself other permissions.
I suspect that with Authenticator it can navigate around and somehow extract a private key. I don't know how it does that, because I don't see an option in the UI to obtain such a key. So perhaps it is still somehow exposed to accessibility applications but not to the user. Or the report makes it look worse than it is and only talks about the one-time code (it is not clear).
Anyway, the real issue is that accessibility gives an app full access to the phone. Restricting that would hurt accessibility, though; I believe Android should at least make such accessibility requests scarier.
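To see why seed extraction is so much worse than screen-reading one code: in TOTP (RFC 6238) every code is just an HMAC of the shared seed and the current 30-second time-step counter, so whoever holds the seed can reproduce every future code offline. A minimal sketch using Google Authenticator's defaults (SHA-1, 6 digits, 30-second steps):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(seed_b32, t=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 of the time-step counter, keyed by the seed."""
    key = base64.b32decode(seed_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test seed (the ASCII string 12345678901234567890, base32-encoded), totp(..., t=59) yields 287082. A scraped code dies with its 30-second window; a stolen seed never does.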
I think the fatal flaw is allowing an app to prompt the user to enable accessibility settings at all. It's not quite as bad as it could be: an app can only send the user to the right settings page. But it can write whatever it wants in its description (typically saying why it needs the setting), and the only warning from Android comes when you're about to enable the setting. That warning is very dry: it lists the permissions with a short description for each, doesn't distinguish the scarier ones from the less scary ones, and even conflates them.
For example, I have an app called "Back Button" which basically emulates a back button (and other buttons as well), quite useful when the physical one is broken. The only permission the app requests is "observe your actions (receive notifications when you're interacting with an app)"; it doesn't say anywhere that it does more than that, yet the app can actually emulate pressing various buttons, and nothing like that is even mentioned.
Not sure if possible, but perhaps the authenticator app could detect that accessibility mode is on, and refuse to show the codes? (or at least do it behind a warning?)
Also, I think the trojan app would need to ask for accessibility permission from the user to be able to read other apps' text?
Whether it should use its own text-to-speech depends on whether you need to worry about malware subverting the system-wide text-to-speech system.
If it does use its own text-to-speech, it could just use the system text-to-speech once, on install, to generate speech for the 10 digits, save that, and use those recordings for speaking codes.
That avoids having to worry about system text-to-speech compromises except during the authenticator app's installation, so it is probably almost as safe as including a dedicated text-to-speech system, and it ensures the app can handle every language the system text-to-speech can handle.
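The install-time recording idea could be sketched like this (a hypothetical illustration; record_digit and play_clip are my stand-ins for the platform's TTS and audio playback APIs, which the comment doesn't name):

```python
def build_digit_bank(record_digit):
    """Run once at install time, while the system TTS is still trusted:
    produce one recording per digit 0-9."""
    return {d: record_digit(str(d)) for d in range(10)}

def speak_code(code, bank, play_clip):
    """Codes consist only of digits, so the ten-clip bank covers any code."""
    for ch in code:
        play_clip(bank[int(ch)])
```

Since codes are all digits, ten recordings made once are enough to speak any future code without touching the (possibly compromised) system TTS again.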
Redirecting all output to a single mono headphone is really important, and most users of TalkBack adjust the speed and how things are spoken to better suit them.
And it still doesn't solve the problem: accessibility services can still tap into the audio output and run speech-to-text to get the codes back, accomplishing the exact same thing.
But how about the first part of my suggestion, which was to have whether or not accessibility features are used in the authenticator app controlled by a setting in the app separate from the system wide settings?
For people not needing accessibility, at least, who therefore have it off both system-wide and in the authenticator app, this would offer protection: a rogue app that gained system-wide accessibility access would still not get access to the authenticator.
The system might have to support two independent sets of accessibility privileges and settings, one for most apps and one for security apps, with severe restrictions on what regular apps can do while a security app is running.
Couldn't the accessibility app just approve the warning? Otherwise how would blind users do it?
Why are 3rd party apps needed for accessibility? How about Google takes accessibility seriously and makes the OS accessible without expecting others to pick up the slack? An API with this kind of far-reaching access shouldn't even need to exist.
Google undoubtedly has the resources to build say, Android for Blind People. But it's not realistic to expect Android for this one guy who can twitch his nose, but is otherwise paralysed; or Android for the woman whose brain can't process shapes properly.
By enabling a generic Accessibility feature it doesn't close the door to anybody. If you can adapt it, or get anybody else to adapt it, to be accessible to you then this feature will help you get that done. To do that it's enormously powerful, which means bad guys with this permission can take over your phone.
Android doesn't support this, so a workaround is to use the accessibility service. The trade-off is having to grant all those permissions. You can also open the app directly and copy-paste, but that's a lot more extra steps. The killer features for password managers are that they're both more secure AND quick/convenient.
You can block accessibility from the specific field of the OTP app, if you also have a system framework that retrieves exactly the contents of that field into your paste buffer when you interact with a privileged UI element (i.e. the keyboard-UI “autofill 2FA” accelerator.)
(See also: accessibility of elevation prompts in Windows.)
In the former case, you just call your app SuperLegitScreenReader, prompt the user to grant accessibility access, and then snarf up data. Bonus points if you bundle in some open source or pirated screen reader code so your app can act the part, but you could probably just say “fetching reader data, screen reading will be functional in ~60 minutes” and count on most users to not bother uninstalling the app or revoking permissions.
In the latter case, you call your app “CandyNinjaBirds”, and have it pop up and say “to let you share cute stickers to your friends, you need to enable accessibility access” and count on the vast majority of people to click “allow”.
The latter case is way more common because the former case limits your target audience to people who actually want a screen reader.
True - but you can make accessibility access an opt-in feature. I hate phrasing it like that, but a piece of security software should allow people to disable accessibility features when they aren't needed.
It already is. You have to explicitly go and enable it per requesting program in the system settings.
I ended up doing as much when I lost my Android tablet, on which I had Authy installed, at a bookstore. It gave me a mild panic attack at the time, but a year later none of my accounts have been hacked, so I have to hope that the other security measures on the device were enough to thwart the potential thief/adversary.
Another approach is to have your phone as the daily 2FA device and the YubiKey as the backup.
Not much else you can do, the point of 2FA is that you can't get in without it.
Some services let you store multiple keys, so you can own a separate YubiKey that you keep in a trusted place, e.g. a deposit box.
I could see this working for a few very secure logins, but if the idea is to two factor everything it doesn't seem tenable.
You add your main device and your backup device.
Then you store the backup device, after having configured it on your accounts, in the safe, and use the main device to authenticate on a daily basis.
When you create a new account, you add your main device; then, when possible, you retrieve the backup device from storage, add the new account to it, and store it back again.
In the event of losing your main device, you retrieve the backup device from storage, use it to authenticate, and remove the lost device from your accounts.
You can then acquire a new key, add it to all your accounts, and store the backup in a safe again until needed.
If the site allows for U2F I register each key for that too, but only if I can also use OTP, since I can't yet use U2F with my Android phone.
Does anyone know if this is normal behaviour, and if so, what the reasoning behind it is?
That seems like it would be just asking for a world of warranty returns and support nightmares for whatever company built it into their phones.
I'm pretty sure this was implemented across their phones after the Galaxy Note 7 debacle. I think the idea was that anyone who tried to root their devices and avoid Samsung remotely deactivating them would at least be prevented from charging the battery to full, which could cause the battery swelling/overheating/fire-catching problem.
I'll have to do more digging on the quote I remember about Samsung phones physically blowing a circuit somewhere if they detect they've been rooted.
In this particular case, it was probably a safety thing on that particular phone. I bet that Samsung pushed it in a firmware update because of their "batteries catch fire" issue, which was fixed with some patches to the kernel driver. Chances are this was intended to make sure that somebody running an unsigned/homebrew kernel (which might or might not have the patched charge-controller drivers) wouldn't be able to make their battery catch fire.
This sounds like the ODIN QFuse.
These aren't really particularly dramatic- they're tiny, microscopic "fuses", designed to blow cleanly and easily. Most devices have plenty of them- used to lock the devices down after manufacturing.
I lost my phone once and was primarily using Google Authenticator at the time. Because I didn't back up the seeds when registering entries in Google Auth, I had to go through recovery processes on many of my accounts, which were time-consuming, nerve-wracking, and annoying. This was at the height of the crypto craze, so one account was Coinbase, for which I had to go through an in-depth recovery process (ID and all, which ended up crossing some legal lines in my state).
Do yourself a favor and either use a service with a supported and up to date app or take control with an app like andOTP and take backups yourself.
Also, these concerns would pop up in the reviews if you could not deny the permission when it asked. One of the reasons I looked at MS Authenticator is that it is one of the highest-scoring and most-reviewed authenticator apps, so it would have taken a hit on its score if denying the permission were not possible.
Wow, they really did their best to find a sliver of justification for that permission. I don't need an OTP app to autofill my first and last name.
And to those arguing about their opt-out for this permission, ask what unnecessary permissions your coworkers/family/friends have denied on the last few apps they installed.
> Contacts and phone. The app requires this permission so it can search for existing work or school Microsoft accounts on your phone and add them to the app,
Keyword here is requires. This doc is out of sync with the application, unsurprisingly.
On that note, for anyone using Microsoft's VS Code, I recommend https://vscodium.com/
I don't get why Google apparently doesn't grasp that they basically "own" the brand identity of what we technically minded people know as the TOTP 2FA scheme - and they own it because of the Google Authenticator app. Thousands of websites ask their users to install "the Google Authenticator app" if they want to enable 2FA - they don't tell them to install "an OTP app" or anything like that; practically everyone refers to this scheme and the associated apps as "Google Authenticator".
And it can't be too hard for a company able to fund and drive the development of a leading web browser engine to keep a simple TOTP app up to date and well supported in the two major smartphone ecosystems. Heck, a single full-time developer should be more than enough manpower to do that! And Google has, what, 100,000 of them?
Instead, Google lets other companies slowly chip away at their mindshare in the 2FA market - during the years of Google's inactivity, lots of alternative applications sprung up, and many password safe apps added TOTP support to their feature catalogs. We're at a point at which most technically savvy people advise other people to use ANY of those apps, but NOT Google Authenticator, even if the website tells them so. It's just a matter of time until the sites catch up and quit suggesting Google Authenticator (after all, the shortcomings of Google's application, like inability to backup seeds, probably cause additional burden on support channels for sites explicitly mentioning Google Authenticator, and if there are other apps that cause less problems, suggesting to use those at some point in time will be more beneficial than the brand name recognition bonus provided by suggesting an app by Google).
Authenticator is still a secure 2FA app (unlike Authy), and while the fact that the devs haven't shipped a new build or updated the UI recently isn't great, the app does its job.
If "awk" does not ship a new build in a few years would it be considered abandonware?
Screen also nearly suffered this, though my understanding is development has resumed since.
The way the source article reads, it seems like the vulnerability would affect any OTP app.
The problem I, the author, have is that it seems unlikely that Google can fix this. What are the risks associated with suddenly changing a codebase which is 2.5 years old? Is there anyone there who works on it day-to-day, understands how it works, and can release a verifiably fixed patch?
Security products need constant maintenance.
Even if the code were bug-free at the time, dependencies, APIs, and attacks change.
I guess they consider TOTP legacy and are focusing on U2F; I think with their products you can't even sign up for TOTP-based 2FA anymore.
I amassed quite a collection of U2F Titan security keys that Google gave away for free at conferences, some of them have become obsolete already though and some had serious security issues (the Bluetooth one). I prefer using the Yubikey for that reason, though I find them quite a bit overpriced.
When you initially enable 2FA, you won't be given the option. But you can register with SMS/U2F/etc. Once you've enabled the first authentication method, you can go into your 2FA settings and add a virtual 2FA device. Once that's done, you can go back and delete whatever you added initially to be left with only a virtual 2FA device.
This is one of the main reasons I stopped using any of their new projects.
I'm sorry I didn't make that clearer.
However, it does seem to me like there is a genuine problem here. Perhaps the solution is to have a two-layer accessibility permission system? One permission for reading the screen of non-inherently-sensitive applications, like web browsers and email clients, and another permission for reading high-sensitivity applications like Google Authenticator. Enabling the second permission would come with very strong warnings about security, and the admonition that you should really really trust this application provider a lot more than your average screen reader app.
But get this!
On iOS it shows the right OTP code for these algorithms.
On Android it acts like it's SHA-1 and shows the wrong code...
In addition to being supported, it avoids the possibility of a future mistake leaking TOTP seeds since they're stored on the key rather than the phone.
I know I can just get a yubikey, but I'm still not comfortable with the process for temporarily authenticating on a computer or phone that isn't mine. In those cases I much prefer to type in a code.
It's open source, supports import of keys as well as export, and has a very active dev.
Does anyone know of an alternative hardware token that supports more than that and also all the other protocols that YubiKeys do?
I'm working my way through the list alphabetically, and at "Jetbrains" I reached 32.
This is over ten years worth of accounts. I'm sure I'll reach 50-60 before the end.
Ideally all sites would implement U2F as two-factor authentication, but there aren't that many users who have a U2F-compatible token. TOTP's reach goes far beyond U2F's, which is probably why sites support TOTP more than U2F.
When sites offer both, choose U2F. When sites offer TOTP only, use it; it is better than nothing. If you already have a YubiKey, use the Yubico Authenticator app to store the TOTP secret, both to shrink your TOTP attack surface and to be able to change your phone without losing TOTP secrets.
In some sense TOTP, basically HMAC, seems like it would be harder to screw up than a public key system. RSA is amazingly hard to get right. I wonder if the order of preference should be:
(1). U2F ECDSA/EdDSA
(2). U2F RSA
(3). Google Authenticator
(Infinity). SMS 2FA
No idea where ECDAA fits.
What else do you need? Why does anyone need to update it if it is completely fine?
I’d quite enjoy it if e.g. 1Password would import all my Google Authenticator tokens into itself automatically. I’ve been meaning to move them over for a long while now, but it’s a whole process, since there’s no place in the Google Authenticator UI to retrieve the original seed value.
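One workaround, assuming you kept the original enrollment QR codes: their payload is a standard otpauth:// URI, and the base32 seed sits right in the secret query parameter. A sketch of pulling it out (the function name here is mine, not any app's API):

```python
from urllib.parse import parse_qs, unquote, urlparse

def parse_otpauth(uri):
    """Pull the label, base32 seed, and issuer out of an otpauth:// TOTP URI."""
    parsed = urlparse(uri)
    if parsed.scheme != "otpauth" or parsed.netloc != "totp":
        raise ValueError("not a TOTP provisioning URI")
    params = parse_qs(parsed.query)
    return {
        "label": unquote(parsed.path.lstrip("/")),
        "secret": params["secret"][0],  # base32 seed, ready to import elsewhere
        "issuer": params.get("issuer", [""])[0],
    }
```

Feed that secret into any other TOTP app and it will generate the same codes as Google Authenticator does.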