When the user hits "save password" on the client-side, the client needs to save the actual (plain text) password in order to replay it for future logins. A hashed password cannot be stored, otherwise that hashed password becomes the plain text password anyway (essentially destroying any benefit hashing would have here). The rules that apply to servers/services aren't the same as those for clients/saved logins.
Using reversible encryption is unavoidable, because the client ultimately needs the plain text to send to the TeamViewer remote service. It is still marginally better than storing the actual plain text, even if it is security through obscurity (using a hard-coded key, in this case).
Storing it in the user's registry hive instead of a higher privilege allows the TeamViewer client to run in the user's context, instead of needing to run as administrator, have a broker, or similar.
One potential upgrade might be to use Windows' Data Protection API. It works essentially the same way as their current method, but Windows is responsible for protecting the encryption key (and has a broker that can house it at a higher privilege level).
But ultimately, if you have a process running in the user's context and can manipulate the TeamViewer client, you can bypass the Data Protection API pretty trivially (e.g. an injected DLL, UI hooks, other miscellaneous memory hooks, etc.).
So my question would be: describe the scenario where a bad actor has the ability to arbitrarily read data within a user's execution context, but cannot also manipulate the TeamViewer client running in that same context. Because I've got nothing.
My security advice here is simple: Don't save passwords to your local machine if you don't want them saved to your local machine. And don't leave backups of your saved passwords in insecure locations. I'm going to research if I can get a CVE assigned to that.
We have to do this with web apps all the time. Nobody stores passwords in local browser store.
TeamViewer does operate account infrastructure like you're describing, but that isn't exclusively what they offer. They do point-to-point too.
> Hint: If you choose Accept exclusively, TeamViewer will disconnect from the internet – that means it will no longer be possible to make or receive connections using the TeamViewer ID, and the Computer & Contacts list will no longer be available.
This does not happen "all the time" in web dev.
Yes, quite a lot of people do:
Since there's a central server involved in this authentication (as far as I understand how TV works), that's not the only way to do this, nor is it the best way. A better way is to store some other token that can be used to verify that the user has logged in before and has a valid session; one way to do that is to sign and encrypt data recording which user they were logged in as. This prevents the password itself from ever being retrieved, even though an attacker could still use the token to impersonate the user to TV.
It's a subtle distinction for the average user who doesn't care, but for a sophisticated user it does indicate a specific kind of behavior, and the relevance is what you do in case of compromise. Do you change your password and make the eavesdropped one useless? Do you deauthorize the stolen computer via a centralized interface?
(For bonus points, the salt should be re-randomized whenever the password changes, so that changing the password makes old hashes permanently useless, even if someone later changes the password back to its old value. Not that going back to an old password is ever a good idea, but some users will inevitably do it anyway.)
Edit: Actually, it’s probably better to use a completely random token and just have the server reset it whenever the password changes. That avoids any possibility of cracking the hash to recover the original password.
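A toy sketch of that random-token idea (all names here are illustrative, not TeamViewer's actual scheme): the server hands out a purely random token, so nothing about the password can be recovered from it, and rotates it away whenever the password changes.

```python
import secrets


class AuthServer:
    """Toy server: issues a random remember-me token per user and
    invalidates it whenever the password changes."""

    def __init__(self):
        self.tokens = {}  # user -> current token

    def issue_token(self, user):
        # 256 bits of randomness; nothing derivable from the password.
        token = secrets.token_hex(32)
        self.tokens[user] = token
        return token

    def change_password(self, user, new_password):
        # Dropping the token here makes every previously saved login stale,
        # even if the user later switches back to the old password.
        self.tokens.pop(user, None)

    def check_token(self, user, token):
        stored = self.tokens.get(user)
        return stored is not None and secrets.compare_digest(stored, token)
```

Because the token is independent of the password, there is no hash to crack, and "reset on password change" is a one-line server-side operation.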
- a token can be invalidated individually, while resetting a password invalidates all sessions
- a token can be given a predetermined expiry without the issues that arise from predetermined password expiry
- a token can be given a limited permission set
- a token can be used to track the origin and extent of an attack
In practice, for something like TeamViewer, I find it likely that none of these except the first would be implemented. If the attacker has access to the registry, chances are you're f*cked anyway.
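To make the first three of those properties concrete, here's a minimal server-side sketch (class and method names are hypothetical): each token carries its own expiry and permission set, and can be revoked individually without touching the password or other sessions.

```python
import secrets
import time


class TokenStore:
    """Toy token store: per-token expiry, per-token scopes,
    and individual revocation."""

    def __init__(self):
        self._tokens = {}  # token -> (user, expires_at, scopes)

    def issue(self, user, ttl_seconds, scopes):
        token = secrets.token_urlsafe(32)
        self._tokens[token] = (user, time.time() + ttl_seconds, frozenset(scopes))
        return token

    def revoke(self, token):
        # Invalidates just this token; the user's other sessions keep working.
        self._tokens.pop(token, None)

    def check(self, token, scope):
        entry = self._tokens.get(token)
        if entry is None:
            return False
        user, expires_at, scopes = entry
        return time.time() < expires_at and scope in scopes
```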
According to a comment above, it's possible to use TeamViewer purely locally, without a central server.
I mean, in an ideal world where they didn't save the user's password locally...
If you're asking how they currently do it, I guess by sending the password, but I don't really know.
One way that would be slightly better than that would be to send the hash. That's still a replayable token, but it has the advantage that if someone does password reuse it's not immediately replayable against other websites. Ideally the hash would be salted (maybe with the username and some teamviewer-specific constant). Even better than sending it would be to use a PAKE, authenticated with that salted hash.
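A rough sketch of that "send a salted hash instead of the password" idea (the constant and the salt construction are made up for illustration; this is not what TeamViewer does): the derived value is what gets stored and sent, so the raw password never leaves the machine and the value isn't directly replayable against other sites.

```python
import hashlib

# Hypothetical app-specific constant, so the same username/password pair
# yields a different credential for every service.
SERVICE_CONSTANT = b"example-service-v1"


def derive_credential(username: str, password: str) -> str:
    """Derive a service-specific credential from the password."""
    salt = SERVICE_CONSTANT + b"|" + username.encode()
    # scrypt makes brute-forcing the derived value expensive.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1).hex()
```

This is still a replayable token against this one service; layering a PAKE such as SRP on top, as suggested above, is what removes the replayability.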
Maybe an alternative would be some sort of asymmetric key system, where each device gets its own keypair. I guess it depends on exactly what functionality is needed. Does it need to be able to be setup offline, or can it be setup online, then go offline for further use?
I suppose they could insist on an online login before they allow an offline login, then save the hash locally. An attacker would then need to find a preimage of the hash to log in.
As security advice for developers, the writeup you're commenting about isn't especially useful. But that's not the point. It's a standard consulting war story post, about how they managed step-by-step to exploit a weakness in an application. As a war story, it seems plenty competent.
IIRC Chrome used to do this until 2013, for example, and resisted fixing it saying your OS account should protect you. I suspect these days it’s encrypted to your Google Account as it’s synced, and additionally requires you to re-enter your OS password.
PS. Does anyone know what Firefox does these days with their master password, especially now that they too ship a password manager? Their master password approach was similarly criticised.
That was the thing I was protecting against... Of course, no matter what form the access token takes, pressing "Keep me logged in" allows you to authenticate remotely.
Don't save plaintext passwords. Ever. It's really that simple. Why is this still an issue in 2019/2020?
Sorry if I am saying something obvious, but just so I understand: we need TeamViewer to expose some way for a program to send a username and a password, and TeamViewer then sends back a token and says "store this instead". Now, every time the client makes a request, it should use this token? And if the client tries to use a raw password, refuse? Thank you.
That's the general gist, but if you're thinking of writing some software for it I'd suggest searching out some third party tool to manage these tokens for you or, if you're unable to find anything, then doing a bit of research into the pitfalls you can run into with this approach. The less you write in-house the better.
Back up a little bit. Start from the premise that you should never ever know the user's password. Period. Consider that a liability. This is the beginning of a mindset. With this in mind, erase the statement "we need a program to send a username and a password". That is the wrong mentality. (IMHO)
Now, you want your user, who you have authenticated, to be able to access other services that you don't own, am I correct? E.g., like IFTTT or Gmail?
If that is the case, you need to use an authentication mechanism (e.g. OAuth2) that allows the user to approve authentication to a service. Then you store the token returned by that service and use that to obtain access. If it is not a big-name platform, then you need to work with that platform's service to implement OAuth (or Auth0, or Cognito, or some other token-based authentication system).
If you are trying to access a service that does not support a token authentication system and only supports passwords, you should not allow the user to connect.
This may damage your functionality value proposition, but IMHO it is more secure because you minimize exposure of the user's password.
I wouldn't use a service that stored my plaintext password and logged in as me. To me, that's a no-go.
Again, all my opinion.
In fact, I'm eagerly reading this thread because I was in this situation years ago, and went with Auth0 (actually Stormpath, before they were bought by Auth0), but I had to argue for budget to pay for the service because I faced exactly the same pushback: "Just save the password!" I'd like to know if I missed something obvious.
This sounds reasonable until you realize that requiring only the user to handle passwords creates an even bigger liability: users are terrible at creating, handling and remembering passwords. Giving them this sole responsibility means they create simple, short passwords and reuse them across multiple services, thus increasing their vulnerability.
Browsers that started handling passwords and helping users create good passwords was one of the greatest security UX improvements of the past 20 years IMO.
Here's why: we are talking about the huge legal difference between liability of the user and liability of the service. If users pick poor passwords, that's their problem, and a problem with passwords in general. But if you store the password, suddenly you are opening yourself up to bigger risks, risks that could sink your product.
In this case, anyone who can read that registry hive can get the password.
1. Store a token that is replaced with a new token on every successful connection. This makes it very obvious if the token is stolen: either the stolen token stops working quickly or your connection stops working.
2. Store a key using a privileged service (TPM, Microsoft KSP, etc) that provides only the ability to use the key, not to read it.
3. Both 1 and 2.
E.g. If you make your password to access your home computer "eb49wsrew-home" ... what are the odds that the password for your work computer is "eb49wsrew-work"?
In an ideal world passwords would be independently selected, but we write software for non-ideal users in a non-ideal world.
> So my question would be: Describe the scenario where a bad actor has the ability to arbitrarily read data within a user's execution context, but not manipulate the TeamViewer Client also running in that same user's context? Because I got nothing.
If there were a system service which you could hand a secret (one it could use but never read back out), and which could then perform challenge-response queries of the form response = H(secret||challenge), with the server sending a fresh random challenge on every connection, then you could have a setup where even if the attacker could run as TeamViewer (but not as the system admin), he couldn't steal the credential; he could only make use of it.
When you save your credential you'd hash the password, the server would also store the same hashed password. You'd send the hashed password to the system service, and the server could challenge your knowledge of it. For bonus points, have the service implement some simple ZKP (such as SRP) so being able to observe the challenge-responses doesn't allow you to grind offline attack the password.
This would be at least a small improvement.
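A toy version of that challenge-response flow (class names are invented; HMAC is used instead of raw H(secret||challenge) to avoid length-extension issues with plain SHA-256): the holder answers challenges but never exposes the secret, and fresh challenges prevent replay.

```python
import hashlib
import hmac
import secrets


class CredentialHolder:
    """Stands in for the privileged system service: it holds the secret
    and answers challenges, but never hands the secret back out."""

    def __init__(self, secret: bytes):
        self._secret = secret  # no accessor ever returns this

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()


class Server:
    """Knows the same secret and verifies responses to fresh challenges."""

    def __init__(self, secret: bytes):
        self._secret = secret

    def new_challenge(self) -> bytes:
        # Fresh per connection, so captured responses can't be replayed.
        return secrets.token_bytes(32)

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self._secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)
```

An attacker who can call `respond()` can use the credential while they have access, but can never extract it for later, which is exactly the improvement described above.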
The passwords are saved to your Teamviewer profile which exists on their servers. It seems like it would be safer to request and decrypt only when needed by an authorized user, rather than storing it for someone else to find.
@dang This title is misleading.
It's still insecure to store a user password as-is. TeamViewer should create a session key of some sort that is unique to the machine, instead of replaying the username/password.
If you look in the Windows "Control Panel" app you'll see something called "Credential manager" that provides an API for saving passwords.
The plaintext password may be re-used by the user on other systems, so yes hashing would still benefit here.
But as other replies have pointed out, you can generate a login token to offer a "save password" feature without actually saving the password to disk.
After a successful authentication with username/password, store a one-time token (randomly generated) on the client. Each time the token is used, generate a new one. That's it. On the server, store a hash of that token, so getting the database of tokens accomplishes nothing either.
The user retains the ability to revoke any token at any time, and to know where tokens have been stored and used (if they choose to name a device/storage).
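Putting both pieces of that scheme together in a sketch (names are illustrative): the server keeps only a hash of each client's current token, and rotates the token on every successful connection, so a stolen token either dies quickly or knocks out the legitimate client, making the theft visible.

```python
import hashlib
import secrets


class RotatingTokenServer:
    """Toy server: stores only a hash of each client's current token and
    rotates the token on every successful use."""

    def __init__(self):
        self._hashes = {}  # user -> sha256 of current token

    @staticmethod
    def _h(token: str) -> str:
        return hashlib.sha256(token.encode()).hexdigest()

    def enroll(self, user: str) -> str:
        token = secrets.token_hex(32)
        self._hashes[user] = self._h(token)  # a DB leak exposes only hashes
        return token

    def connect(self, user: str, token: str):
        """Returns a fresh token on success, None on failure."""
        if self._hashes.get(user) != self._h(token):
            return None
        return self.enroll(user)  # rotate: the presented token is now dead
```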
Not using the Data Protection API makes it easier to extract the password.
So while you're right that storing the password has some consequences, they should have done better, especially since their product is focused towards non-techs as well.
And I think browsers generally do a better job using OS security mechanisms to restrict access to that key, too.
It's basically security through obscurity, as the key stays clear text UNLESS the profile is protected by a password a human must type every time they open Firefox.
Several people on my team were opining to just comply because it's the easiest thing to do, and they were concerned about their job security if they went "against security".
My takeaway from that experience was that the people in charge of corporate security policies and audits are not experts in security, they are experts in reshuffling responsibility and covering asses. And many developers are easily cowed by them.
The result is compliant software, not secure software.
Companies often go through this with internal processes as they grow, and some grow out of that phase.
Other times (especially at even larger companies), processes are adopted "from the industry". While not necessarily bad, this also requires flexibility in responding to feedback. In the parent's case, auditing to ensure sensitive data is encrypted to a minimum standard is reasonable; auditing to ensure any encryption is limited to a very narrow set of algorithms is not.
I worked on a project once that contracted out security audits to HP, and those were distressingly poor security audits: just vague automated checkbox-checking against a list put together with zero context.
You'd be surprised how many people come up with this and then don't think any further.
This is the kind of solution I see from new grads with no crypto/security experience who, with a quick frown, think about it a little more and can explain why it's a bad idea. Many can do that even when they don't know exactly how to do it correctly.
This was just lazy/rushed or dare I say it, negligent.
They definitely didn't follow the one and only rule of security: don't roll your own.
That said, the much greater problem is the idea of using a hard-coded key, instead of generating a unique key for each device/installation.
Ask this question instead: Why bother to encrypt them in the first place?
What is the attack scenario you are trying to defend against? This is local software; it needs to be able to decrypt the password and use it to log in.
So at the end of the day it's the same machine with the password and the key to decrypt the password. It seems pretty pointless to even encrypt them in the first place.
So why do it? I don't know; maybe just to make it marginally harder to see the password. If that's the entire goal, there isn't really any reason not to use a hardcoded key: it accomplishes that goal just fine.
Ask yourself this: What attack scenario would a different key per machine defend against that a hardcoded key would not?
Versus what they should have been doing - authenticating the password and then storing a session cookie.
"January 28 is Data Privacy Day! TeamViewer is a Data Privacy Day Champion. As an organization, we understand the importance of being open and honest about how we collect, use and store your information."
>Send email to Director of Security notifying them there is now a CVE assigned to this November 18th, 2019
>Receive first and only email back from vendor “We’re looking into it” email January 13th
Good one TV.
It worked well for me until it started flagging some very non-commercial use (family IT support) as commercial. I actually tried to use their process to unflag myself. Gave up and am happy with AnyDesk now. Not sure what they do with passwords... guess I should check!
So I won’t say he messed up by stepping through code for 6 hours to find the AES decrypt sequence, but AES is a dedicated CPU instruction now, and he could have scanned for it!
How many separate routines could they have that directly call AES instructions?
A better signal is the presence of the S-box tables (63 7C 77 7B...). Even that won't detect applications which use an implementation from a shared library, though.
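The S-box idea is easy to sketch: the first row of the AES forward S-box is a fixed 16-byte sequence, and statically compiled table-based AES implementations usually embed it verbatim, so a simple byte scan over the binary finds candidates (this won't catch AES-NI-only or shared-library implementations, as noted above).

```python
# First 16 bytes of the AES forward S-box.
AES_SBOX_PREFIX = bytes([
    0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5,
    0x30, 0x01, 0x67, 0x2B, 0xFE, 0xD7, 0xAB, 0x76,
])


def find_aes_sbox(blob: bytes):
    """Return every offset where the S-box prefix appears in a binary blob."""
    hits, start = [], 0
    while (idx := blob.find(AES_SBOX_PREFIX, start)) != -1:
        hits.append(idx)
        start = idx + 1
    return hits
```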
Teamviewer saves remote passwords in registry, encrypted with hard-coded key
The biggest ambiguity here is that the passwords in question are just user-saved "remote server passwords" and the key takeaway is that remote desktop software should not be left running.
I've never had to store passwords as anything but a hash. My approach is just to get the user to enter the password again and never store it.
Lack of hashing isn't the problem here, the fact that low privileged users can access these credentials is.
The author isn't to blame for the silliness though, whoever decided to editorialise this title for HN is.
Seriously, consider lowering the contrast or changing the typography... I don't know, but it was hard to read in its original form...
I am visually impaired. I need sites to look like this just to consume them.
This was my experience and yours was different. Being able to let the viewer choose is critical so that everybody can use the web.
The benefits of leaning on a third party for common logic should be carefully reviewed on a case-by-case basis unless that software is related to security, in which case, please do lean on a third party.
fuck, I need to tell that to all top CTF teams that their members should stop competing and leave their jobs and focus on PhD
It may be a little bit offensive, but did you even graduate?
So it's less specifically required that people have PhDs and more... this stuff takes time to fully understand, I know a lot of pitfalls on the topic but I wouldn't say I'm an expert - I just know enough to know that I don't know.
Security is high stakes
Security is hard