TeamViewer stores user passwords in registry, encrypted with hard-coded key (whynotsecurity.com)
439 points by reiichiroh 14 days ago | 122 comments

They're doing nothing too wrong, the person criticising them doesn't understand basic computer security.

When the user hits "save password" on the client-side, the client needs to save the actual (plain text) password in order to replay it for future logins. A hashed password cannot be stored, otherwise that hashed password becomes the plain text password anyway (essentially destroying any benefit hashing would have here). The rules that apply to servers/services aren't the same as those for clients/saved logins.

Using reversible encryption is unavoidable, because they ultimately need plain text to send to the TeamViewer remote service. It is still marginally better than storing the actual plain text even if it is security through obscurity (using a hard-coded key in this case).

Storing it in the user's registry hive instead of a higher privilege allows the TeamViewer client to run in the user's context, instead of needing to run as administrator, have a broker, or similar.

One potential upgrade might be to use Windows' Data Protection API. It works essentially the same way as their current method, but Windows is responsible for protecting the encryption key (and has a broker that can house it at a higher privilege level).

But ultimately if you have a process running in the user's context and can manipulate the TeamViewer client, you can bypass the Data Protection API pretty trivially (e.g. injected DLL, UI hooks, other misc memory hooks, etc).

So my question would be: Describe the scenario where a bad actor has the ability to arbitrarily read data within a user's execution context, but not manipulate the TeamViewer Client also running in that same user's context? Because I got nothing.

My security advice here is simple: Don't save passwords to your local machine if you don't want them saved to your local machine. And don't leave backups of your saved passwords in insecure locations. I'm going to research if I can get a CVE assigned to that.

A smart use of "save password" is where you authenticate and store the login token locally. You can then log in without a password, but also your session can be invalidated any time you say... change your password, or ask for a session to be kicked off.
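The token flow described above can be sketched roughly like this (an illustrative in-memory stand-in, not TeamViewer's actual API; all names here are made up): the client only ever stores a random token, and the server can invalidate it at any time.

```python
import secrets

# Server-side token store: token -> username (in-memory dict for illustration)
SESSIONS = {}

def issue_token(username: str) -> str:
    """Called once after a successful password login; the client saves this."""
    token = secrets.token_urlsafe(32)  # high-entropy, unrelated to the password
    SESSIONS[token] = username
    return token

def login_with_token(token: str):
    """Token login: works until the server revokes the token."""
    return SESSIONS.get(token)

def revoke_all(username: str):
    """E.g. on password change: kick off every saved session at once."""
    for t in [t for t, u in SESSIONS.items() if u == username]:
        del SESSIONS[t]
```

Stealing the stored token still lets an attacker log in, but changing the password (or clicking revoke) makes the stolen copy worthless, and the real password never touches disk.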

We have to do this with web apps all the time. Nobody stores passwords in local browser store.

I don't know if that applies to TV's circumstances because you can use the client to connect to endpoints without either end being internet accessible.

TeamViewer does operate account infrastructure like you're describing, but that isn't exclusively what they offer. They do point-to-point too.

Yes, looking at this page it’s possible to disconnect the client from Team Viewer’s online infrastructure entirely: https://community.teamviewer.com/t5/Knowledge-Base/Can-TeamV...

> Hint: If you choose Accept exclusively, TeamViewer will disconnect from the internet – that means it will no longer be possible to make or receive connections using the TeamViewer ID, and the Computer & Contacts list will no longer be available.

The TV host could issue its own persistent tokens, no internet access required.

I don't see how network position relates to the authentication scheme. As far as I can see, these are entirely separate issues, handled at different layers of the OSI stack.

^this. i recently had to explain this to a product manager who wanted the user's email and password to be autofilled if they landed at the login page.

It's not the same thing at all. TeamViewer literally has to take in the password and paste it plaintext in later. They cannot simply validate that a user knows the password.

This does not happen "all the time" in web dev.

I’ve actually seen that functionality in the wild. Quite horrible. IIRC it was some weird open source web app.

That's a cool idea for a web service where once someone tries to actually change something, you can ask again for the password. I struggle to see how it makes any sort of difference for a remote management software; it's essentially an enterprise ready trojan. The moment someone has connected to a machine through token or password, that machine is utterly compromised and can not be saved by you clicking "revoke" in a web interface. That kinda situation calls for you to physically disconnect the machine from power.

> Nobody stores passwords in local browser store.

Yes, quite a lot of people do.


He's talking about web apps

And TeamViewer isn't a web app. The parent is just demonstrating how other local, non-web apps have exactly the same issue for the same reason

deep tangle

> When the user hits "save password" on the client-side, the client needs to save the actual (plain text) password in order to replay it for future logins.

Since there's a central server involved in this authentication (as far as I understand how TV works), that's not the only way to do this, nor is it the best way. A better way is to store some other token that can be used to verify that the user has logged in before and has a valid session, one way to do that is to sign and encrypt data saying which user they were logged in as. This prevents the password itself from ever being retrieved even if they can still use it to impersonate the user to TV

It is the only way if the action is labeled "save password," rather than "stay signed in" or "authorize this device" or similar.

It's a subtle distinction for the average user who doesn't care, but for a sophisticated user it does indicate a specific kind of behavior, and the relevance is what you do in case of compromise. Do you change your password and make the eavesdropped one useless? Do you deauthorize the stolen computer via a centralized interface?

Right, but that means that what you actually want to do is store a hash of the password salted based on the identity of the machine you’re connecting to. (And the authentication protocol would be changed to verify possession of that hash rather than the password itself.) That way, not only do you avoid leaking the password itself, you ensure the hash can’t be used to connect to any other machines that use the same password. But changing the password is still what you need to do to block future access, preserving the expected semantics for sophisticated users.

(For bonus points, the salt should be re-randomized whenever the password changes, so that changing the password makes old hashes permanently useless, even if someone later changes the password back to its old value. Not that going back to an old password is ever a good idea, but some users will inevitably do it anyway.)

Edit: Actually, it’s probably better to use a completely random token and just have the server reset it whenever the password changes. That avoids any possibility of cracking the hash to recover the original password.
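The machine-salted verifier idea could be sketched like this (the KDF, salt layout, and names are illustrative assumptions, not any known TeamViewer scheme):

```python
import hashlib, secrets

def make_verifier(password: str, machine_id: str, salt: bytes) -> bytes:
    """Derive a per-machine credential. The server stores the same value,
    and the protocol proves possession of this, never of the raw password."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        password.encode(),
        machine_id.encode() + salt,  # bind to this machine plus a random salt
        200_000,
    )

# Re-randomize the salt on every password change, so old verifiers die
# even if the user later switches back to the old password.
salt = secrets.token_bytes(16)
v_home = make_verifier("eb49wsrew", "machine-home", salt)
v_work = make_verifier("eb49wsrew", "machine-work", salt)
assert v_home != v_work  # same password can't be replayed against another machine
```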

"the only way" Don't let pedantic adherence to the wording of the UI be more important than security.

These are the ways the strategies differ:

- a token can be invalidated individually, while resetting a password invalidates all sessions

- a token can be given a predetermined expiry without the issues that arise from predetermined password expiry

- a token can be given a limited permission set

- a token can be used to track the origin and extent of an attack

In practice, for something like TeamViewer I find it likely that none of these except the first would be implemented. If the attacker has access to the registry, chances are you're f*cked anyway.

Also - a token can’t be used to change your password or steal the account completely. The app can ask you to enter your password again if you are doing these changes to your account. A token would not allow the attacker to pass this.

> Since there's a central server involved in this authentication

According to a comment above, it's possible to use teamviewer purely locally, without a central server.

How does TV authenticate a login locally if disconnected from the server?

I mean, in an ideal world where they didn't save the user's password locally...

Are you asking how they currently do it, or how it could theoretically be done better?

If you're asking how they currently do it, I guess by sending the password, but I don't really know.

One way that would be slightly better than that would be to send the hash. That's still a replayable token, but it has the advantage that if someone does password reuse it's not immediately replayable against other websites. Ideally the hash would be salted (maybe with the username and some teamviewer-specific constant). Even better than sending it would be to use a PAKE, authenticated with that salted hash.

Maybe an alternative would be some sort of asymmetric key system, where each device gets its own keypair. I guess it depends on exactly what functionality is needed. Does it need to be able to be setup offline, or can it be setup online, then go offline for further use?

Yeah I guess the first.

I suppose they could insist on an online login before they allow an offline login, then save the hash locally. A bad actor would then need to generate a hash collision to log in.
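That "online once, then offline" flow could look roughly like this (a sketch; the salt handling and function names are assumptions):

```python
import hashlib, hmac, os

def cache_credential(password: str) -> tuple[bytes, bytes]:
    """At the first *online* login (password verified by the server),
    cache only a salted slow hash for later offline checks."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest  # store these locally, never the password itself

def offline_login(password: str, salt: bytes, digest: bytes) -> bool:
    """Later, with no server reachable, check the typed password locally."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```

An attacker who reads the cached values gets a hash to crack, not a replayable plaintext, which is the improvement over what the article describes.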

The normal thing to do here is to use the credential just once to generate a (revocable) token, and to store that (reversibly encrypted). Especially if you own both sides of the connection, it is in fact a bad idea to store user passwords; they're irrevocable and inevitably shared across services.

As security advice for developers, the writeup you're commenting about isn't especially useful. But that's not the point. It's a standard consulting war story post, about how they managed step-by-step to exploit a weakness in an application. As a war story, it seems plenty competent.

If you are able to grab the users password off of their computer then you are able to capture it as they type it. Passwords are stored unencrypted on computers all the time, your browser stores passwords.

The point of tokenizing is to minimize the number of times you ever need to handle the actual password, in part to avoid exactly those attacks.

Reading keystrokes sent to other applications requires higher privileges than reading registry and filesystem, no?

Not on linux/x.org (wayland is more secure), for certain... i am guessing not on windows either but i am not sure...

I am not familiar with Team Viewer but you’re correct: any software that allows you to “save passwords for later use” is writing your password to disk/cloud — maybe reversibly encrypted but decipherable all the same, especially if the keys are held locally for offline use.

IIRC Chrome used to do this until 2013, for example, and resisted fixing it saying your OS account should protect you[1]. I suspect these days it’s encrypted to your Google Account as it’s synced, and additionally requires you to re-enter your OS password.

[1] https://news.ycombinator.com/item?id=6166793

PS. Does anyone know what Firefox does these days with their master password, especially now that they too ship a password manager? Their master password approach was similarly criticised [2].

[2] https://nakedsecurity.sophos.com/2018/03/20/nine-years-on-fi...

Firefox encrypts passwords with your master password. The clear downside is that you lose all data if you forget your password. That's clearly good for security but I don't think it's an option for most services.

This is trivially solvable by salted hashing twice, once client side (which then can get saved post hash) and once server side.

That doesn't help you since you authenticate using the client hash so knowing that lets you authenticate remotely anyway. It only protects you against a password that was reused for something else as well.

> It only protects you against a password that was reused for something else as well.

That was the thing I was protecting against... Of course, no matter what form the access token takes when you press "Keep me logged in" allows you to authenticate remotely.

Tokens were introduced to avoid saving passwords anywhere and they have the benefit of expiration.

Don't save plaintext passwords. Ever. It's really that simple. Why is this still an issue in 2019/2020?

> Tokens were introduced

before 1990



> Don't save plaintext passwords. Ever.

Sorry if I am saying something obvious but just so I understand... we need Team viewer to expose some way for a program to send a username and a password and team viewer then sends back a token saying store this instead. Now, every time the client makes a request, it should use this token? and if the client tries to use a user password, refuse? Thank you

When the user initially attempts to authenticate with TeamViewer, send them a randomly generated token (not derived from user information, at least) in the response and register it in a data store on the server. From that point on the user can re-authenticate with TeamViewer by sending the token until, for some reason, you decide to invalidate the token on the server; then the user is forced to re-authenticate with their password. At any point before their token is forcefully invalidated, the user can optionally re-authenticate with their password to get a new token.

That's the general gist, but if you're thinking of writing some software for it I'd suggest searching out some third party tool to manage these tokens for you or, if you're unable to find anything, then doing a bit of research into the pitfalls you can run into with this approach. The less you write in-house the better.

To build on what user "munk-a" said below...

Back up a little bit. Start from the premise that you should never ever know the user's password. Period. Consider that a liability. This is the beginning of a mindset. With this in mind, erase the statement "we need a program to send a username and a password". That is the wrong mentality. (IMHO)

Now, you want your user, who you have authenticated, to be able to access other services that you don't own, am I correct? E.g., like IFTTT or Gmail?

If that is the case, you need to use an authentication mechanism (e.g. Oauth2) that allows the user to approve authentication to a service. Then you store the token returned by that service and use that to obtain access. If it is not a big-name platform, then you need to work that platform service to implement OAuth (or Auth0, or Cognito, or some other token-based authentication system).

If you are trying to access a service that does not support a token authentication system and only supports passwords, you should not allow the user to connect.

This may damage your functionality value proposition, but IMHO it is more secure because you minimize exposure of the user's password.

I wouldn't use a service that stored my plaintext password and logged in as me. To me, that's a no-go.

Again, all my opinion.

In fact, I'm eagerly reading this thread because I was in this situation years ago, and went with Auth0 (actually Stormpath before they were bought by Auth0), but I had to argue for budget to pay for the service because I faced exactly the same pushback: "Just save the password!" I'd like to know if I missed something obvious.

> Back up a little bit. Start from the premise that you should never ever know the user's password. Period. Consider that a liability.

This sounds reasonable until you realize that requiring only the user to handle passwords creates an even bigger liability: users are terrible at creating, handling and remembering passwords. Giving them this sole responsibility means they create simple, short passwords and reuse them across multiple services, thus increasing their vulnerability.

Browsers that started handling passwords and helping users create good passwords was one of the greatest security UX improvements of the past 20 years IMO.

Everything you said is true, but completely unrelated. :)

Here's why: we are talking about the huge legal difference between liability of the user and liability of the service. If users pick poor passwords, that's their problem, and a problem with passwords in general. But if you store the password, suddenly you are opening yourself up to bigger risks, risks that could sink your product.

Big difference.

What you normally do in this scenario (which is similar to a cookie) is to have a "Remember me" option that generates a unique token you can use to log in with. It's not your password, and ideally you can look in your online account, see which devices hold such a token, and revoke access.

Also, the token should be regenerated on use, not stored as a plaintext token either. But yeah: the idea of storing the plaintext password, whether encrypted or hashed, is just stupid.

Do I understand correctly - it's the same reason why browsers keep plain text passwords locally when the users selects to save them?

[0] https://security.stackexchange.com/questions/170481/how-secu...

No, you can only access browser's stored passwords if you can log in as the user in the machine (see second response).

In this case, anyone can access that registry and get the pwd.

Storing a bearer token in the registry means that anyone who compromises the registry once gets persistent access. There are multiple solutions:

1. Store a token that is replaced with a new token on every successful connection. This makes it very obvious if the token is stolen: either the stolen token stops working quickly or your connection stops working.

2. Store a key using a privileged service (TPM, Microsoft KSP, etc) that provides only the ability to use the key, not to read it.

3. Both 1 and 2.
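Solution 1 could be sketched like this (an in-memory stand-in for the server's store; the names are illustrative):

```python
import secrets

STORE = {}  # token -> username, held server-side

def connect(token: str):
    """Each successful connection burns the old token and issues a new one,
    so a stolen copy either stops working quickly or knocks the real client
    off (making the theft visible either way)."""
    user = STORE.pop(token, None)
    if user is None:
        return None, None          # token already rotated away: possible theft
    fresh = secrets.token_urlsafe(32)
    STORE[fresh] = user
    return user, fresh             # client persists `fresh` for next time
```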

FWIW, it's still better that the thing you save is a hash, because that's less useful for cracking other related passwords than the raw password itself.

E.g. If you make your password to access your home computer "eb49wsrew-home" ... what are the odds that the password for your work computer is "eb49wsrew-work"?

In an ideal world passwords would be independently selected, but we write software for non-ideal users in a non-ideal world.

> So my question would be: Describe the scenario where a bad actor has the ability to arbitrarily read data within a user's execution context, but not manipulate the TeamViewer Client also running in that same user's context? Because I got nothing.

If there was a system service which you could hand a read-only secret, which could then perform challenge-response queries of the form response = H(secret||challenge), and the server sent a fresh random challenge on every connection, then you could have a setup where even if the attacker could run as TeamViewer (but not as the system admin) they couldn't steal the credential; they could only make use of it.

When you save your credential you'd hash the password, the server would also store the same hashed password. You'd send the hashed password to the system service, and the server could challenge your knowledge of it. For bonus points, have the service implement some simple ZKP (such as SRP) so being able to observe the challenge-responses doesn't allow you to grind offline attack the password.

This would be at least a small improvement.
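The challenge-response scheme above, sketched with HMAC in place of raw H(secret||challenge) (HMAC sidesteps length-extension issues; the service/server split and all names here are illustrative, not an existing API):

```python
import hmac, hashlib, secrets

class KeyService:
    """Stand-in for a privileged system service: it holds the secret,
    answers challenges, and never hands the secret back out."""
    def __init__(self, secret: bytes):
        self._secret = secret

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

def server_verify(stored_secret: bytes, response: bytes, challenge: bytes) -> bool:
    """Server side: checks the response against its own copy of the secret."""
    expected = hmac.new(stored_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A fresh random challenge per connection means observed responses
# can't be replayed later.
secret = secrets.token_bytes(32)
svc = KeyService(secret)
challenge = secrets.token_bytes(16)
assert server_verify(secret, svc.respond(challenge), challenge)
```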

They also used CBC over ECB for the encryption mode. Which is good. Looks like they were at least trying to keep it safe.

Assuming one byte per character, you'd have to have a 17 character password before you're using multiple blocks. For any password shorter than that, this cipher is pretty much operating in ECB mode due to the re-used IV.

I agree with your arguments.. however the unanswered question is why the passwords need to be stored locally at all?

The passwords are saved to your Teamviewer profile which exists on their servers. It seems like it would be safer to request and decrypt only when needed by an authorized user, rather than storing it for someone else to find.

The solution is to hash the password once, salted, and store and use that as a plaintext equivalent instead of storing the real password. This way if the password is reused on other services and an attacker retrieves it from the store it isn’t as much of a disaster.

The article doesn't say anything about hashing the password, this is a perfect case of the title having a large influence on how we read the article.

@dang This title is misleading.

> When the user hits "save password" on the client-side, the client needs to save the actual (plain text) password in order to replay it for future logins.

It's still insecure to store a user password as-is. TeamViewer should create a session key of some sort that is unique to this machine instead of replaying the username/password.

As you say, if you have to save the passwords locally, you have to save them locally. Web browsers that save passwords and password managers like LastPass all have to do this.

If you look in the Windows "Control Panel" app you'll see something called "Credential manager" that provides an API for saving passwords.

> A hashed password cannot be stored, otherwise that hashed password becomes the plain text password anyway (essentially destroying any benefit hashing would have here).

The plaintext password may be re-used by the user on other systems, so yes hashing would still benefit here.

I think parent is saying you can't have a login system where you store the hashed password locally and then log in by sending the hashed password to the server. The hashing to validate a password has to happen on the server, otherwise anyone could get a copy of your hashed password and submit it as a login.

But as other replies have pointed out, you can generate a login token to offer a "save password" feature without actually saving the password to disk.

Storing password encrypted or a function of the original password (hashed) is all wrong and the entire premise of the post follows suit.

After a successful authentication with username/password - store a one time token (randomly generated) on the client. Each time the token is used - generate a new one. That's it. On the server store a hash of that token, so getting the database of tokens does nothing, either.

The user retains ability to revoke any tokens at any time and also to know where they have been stored and used (if they choose to name a device/storage)

Storing a hash would be better because it wouldn't leak the actual password. People have a tendency to use the same password over again as well as sharing them between logins, so this is bad.

Not using the Data Protection API makes it easier to extract the password.

So while you're right that storing the password has some consequences, they should have done better, especially since their product is focused towards non-techs as well.

I'm not an expert so I'm just gonna ask the question. How is this different than what my web browsers do with all the passwords they memorize for me? They are stored locally, and encrypted, not hashed, with a key that is hard-coded somewhere. Right?

No. Chrome actually uses the Windows DPAPI, which encrypts passwords using your user login credentials. Third-party password managers use a master password with a PBKDF.

It's still absolutely trivial to dump these though. Especially in the case of Chrome. iStealer is a classic script kiddie tool designed to exploit this.

Well, the key isn't hardcoded in the browser implementation. (It's unique per-install)

And I think browsers generally do a better job using OS security mechanisms to restrict access to that key, too.

IIRC, in the case of firefox, a key is generated at profile creation, and then the password saved are encrypted using this key.

It's basically security through obscurity, as the key stays in clear text UNLESS the profile is protected by a password a human must type every time they open Firefox.

Browsers have no other choice, but if you control both the server and the client there are far better mechanisms for storing auth info.

Why on earth would you store user passwords with AES locally and then use a hardcoded key to encrypt them???

$employer has recently been acquired by a big international corp. As part of the acquisition they have been "auditing" (really checkbox ticking and superficial API scans) our applications. One of the issues that came up is that we were using bcrypt as password hashing algo and our new overlords had a whitelist policy that only included sha2 variants and perhaps ripemd or something like that. We argued that their whitelist was totally inadequate for password hashing and pointed to multiple national standards bodies. The people doing the audit just ignored everything we explained to them and repeatedly pointed at the policy (which they weren't in charge of themselves).

Several people on my team were opining to just comply because it's the easiest thing to do, and they were concerned about their job security if they went "against security".

My takeaway from that experience was that the people in charge of corporate security policies and audits are not experts in security, they are experts in reshuffling responsibility and covering asses. And many developers are easily cowed by them.

The result is compliant software, not secure software.

Compliance is really the biggest source of bullshit jobs. All the paperwork you need to get certified, the manhours spent reviewing it, all for marginal benefits.

Many (most) processes like that started because someone got burned by something, and the people ultimately responsible said "make a process so it doesn't happen again".

Often times companies go through this with internal processes as they grow, and some grow out of that phase.

Other times (especially at even larger companies) processes are adopted "from the industry". While not necessarily bad, this also requires flexibility in response to feedback. In the parent's case, auditing to ensure sensitive data is encrypted to a minimum standard is reasonable; auditing to ensure any encryption is limited to a very narrow set of algorithms is not.

I worked on a project once that contracted out security audits to HP and those were distressingly not good security audits, just vague automated checkbox-checking of a list put together with zero context.

The loss of efficiency is still probably worth it. At least, it's worth it for systems where failure has severe consequences. You wouldn't want to ride an elevator where they skipped all the paperwork, for example.

Could you hash using both, your preferred algorithm first then theirs? That would make everyone happy.

bcrypt(sha256(pw)) works I believe, but it might not be enough to satisfy the policy.

It's actually better than plain bcrypt because you can allow long passphrases.

Is it better? The suggestion is vulnerable to a sha256 collision attack.

Except that for a collision attack to be a thing, you need to know the value you're trying to collide with - and that requires reversing bcrypt.
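The wrapping idea can be sketched like this, with `hashlib.scrypt` standing in for bcrypt since the stdlib has no bcrypt (that needs a third-party package): the inner fast hash gives a fixed-length digest, lifting bcrypt's 72-byte input limit, while the outer slow KDF still does the real work.

```python
import hashlib

def hash_password(password: str, salt: bytes) -> bytes:
    # Inner fast hash: fixed-length digest, so arbitrarily long passphrases work.
    prehash = hashlib.sha256(password.encode()).digest()
    # Outer slow, salted KDF (scrypt here as a stand-in for bcrypt).
    return hashlib.scrypt(prehash, salt=salt, n=2**14, r=8, p=1)
```

Whether this would satisfy a whitelist that literally says "sha2 variants only" is another question; the composition is stronger than plain SHA-256 but isn't SHA-256.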

'Cause a whole bunch of people don't know how to do security.

You'd be surprised how many people come up with this then don't think any further.

This is the kind of solution I see from new grads with no crypto/security experience, but who, with a quick frown, think about it a little more and see why it's a bad idea. Many can do that even when they don't know exactly how to do it correctly.

This was just lazy/rushed or dare I say it, negligent.

The policy person at headquarters said, "systems must encrypt sensitive data". The local security office classified passwords as 'sensitive data' and sent a memo to all staff (sensitive data must be encrypted at rest). The devs encrypted the passwords. The auditors 'checked the box' as the passwords were now encrypted. Everyone moved on and forgot about it. At the time, a few people complained about the details... something about one-way hashes. But they complain about everything, so no one listened to them.

I guess they were proud for using encryption instead of hashing, hence, more secure!

Definitely they didn't follow the one and only rule of security: don't roll your own.

Hashing is not an option for locally saved passwords. Some kind of token-based Auth scheme could work, but not hashed passwords.

That said, the much greater problem is the idea of using a hard-coded key, instead of generating a unique key for each device/installation.

Unique keys don’t help much. It only takes one person to write a script and put it on GitHub so that it can dynamically find the key and decrypt the stored password.

Again it's a grad level thing, but many smart grads can work out why encrypting isn't better. At least of the many I've interviewed, even ones I didn't hire could figure this much out.

> and then use a hardcoded key to encrypt them

Ask this question instead: Why bother to encrypt them in the first place?

What is the attack scenario you are trying to defend against? This is local software: it needs to be able to decrypt the password and use it to log in.

So at the end of the day it's the same machine with the password and the key to decrypt the password. It seems pretty pointless to even encrypt them in the first place.

So why do it? I don't know, maybe just to make it marginally harder to see the password. If that's the entire goal, there isn't really any reason not to use a hardcoded key - it accomplishes that goal just fine.

Ask yourself this: What attack scenario would a different key per machine defend against that a hardcoded key would not?

So that if you're browsing your registry and someone looks at your screen, that person doesn't get your password.

It was the quick thing to do. I bet they have a "technical debt" story on the backlog that they're never able to get to.

Maybe. But does every teamviewer instance everywhere need to use the SAME AES key?

Yeah, that was what got me. (But at the same time, I don't think there's a trivial way for teamviewer to securely store that per-install key. "Trivial" meaning "can be implemented with no dev time and work on all versions of Windows")

That’s probably fair. IDK. I mean, the solution here was to not store plaintext at all and do token/cookie/whatever. But yea, I get your point.

Storing on the local machine.

Versus what they should have been doing - authenticating the password and then storing a session cookie.

Ironically, TV tweeted recently:

"January 28 is Data Privacy Day! TeamViewer is a Data Privacy Day Champion. As an organization, we understand the importance of being open and honest about how we collect, use and store your information."

>Send email to the Director of Security November 14th, 2019

>Send email to Director of Security notifying them there is now a CVE assigned to this November 18th, 2019

>Receive first and only email back from vendor “We’re looking into it” email January 13th

Good one TV.

One more reason to flush this turd.

It worked well for me until it started flagging some very non-commercial use (family IT support) as commercial. I actually tried to use their process to unflag myself. Gave up and am happy with AnyDesk now. Not sure what they do with passwords... guess I should check!

Yeah, just checked AnyDesk: 100 euros a year and no online address book, really?

I'm on the fence about this security practice, but does anyone know a good FOSS alternative to TeamViewer? As in something that lets me remote into any of my devices from any other and doesn't require a fixed IP?


I spent 3 weeks stepping through a program to reverse their blowfish encryption a long time ago...

So I won’t say he messed up by stepping through code for 6 hours to find the AES decrypt sequence, but the instruction he could have scanned for is a dedicated CPU instruction now!

aesdec xmm0,xmm1

How many separate routines could they have that directly call AES instructions?

Not all AES implementations will use AES-NI instructions. Support for those instructions is relatively new -- many systems in production still lack them -- and they're hardly necessary for a non-performance-critical implementation like this one.

A better signal is the presence of the S-box tables (63 7C 77 7B...). Even that won't detect applications which use an implementation from a shared library, though.

Yea, that’s how I did a blowfish program: just looked for the big constant table. But you know there are some clowns out there that “make their own” s-box :) to “be even more secure”.
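The S-box signal mentioned above is easy to scan for mechanically. A rough sketch (the "binary" here is a fake blob standing in for a real executable): the AES forward S-box begins 63 7C 77 7B F2 6B 6F C5..., and finding those bytes verbatim in a file is a strong hint that a statically linked, table-based AES implementation is present.

```python
# First 16 bytes of the AES forward S-box, a distinctive constant that
# table-based AES implementations embed verbatim in the binary.
AES_SBOX_PREFIX = bytes([
    0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5,
    0x30, 0x01, 0x67, 0x2B, 0xFE, 0xD7, 0xAB, 0x76,
])

def find_sbox(blob: bytes) -> list:
    """Return every offset where the S-box prefix appears in the blob."""
    hits, start = [], 0
    while (i := blob.find(AES_SBOX_PREFIX, start)) != -1:
        hits.append(i)
        start = i + 1
    return hits

# Demo: a fake "binary" with the table embedded at offset 100.
blob = bytes(100) + AES_SBOX_PREFIX + bytes(50)
print(find_sbox(blob))  # -> [100]
```

As noted, this misses binaries that call AES out of a shared library, or a "customized" s-box, but it's a lot faster than single-stepping.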

I am in the middle of a cyber incident response engagement and we identified compromises based on this. Wrote some quick and dirty code to enumerate it en masse and do automated connection analysis based on the available TeamViewer logs. The results are actually eye-popping. Good analysis and w00t!

Perhaps misleading title?

Teamviewer saves remote passwords in registry, encrypted with hard-coded key

The biggest ambiguity here is that the passwords in question are just user-saved "remote server passwords" and the key takeaway is that remote desktop software should not be left running.
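For anyone unclear on why "encrypted with a hard-coded key" is barely better than plaintext: the article reportedly describes AES-CBC with a constant key shipped inside the client binary, so anyone who extracts that constant can decrypt every install's saved passwords. Python's stdlib has no AES, so this sketch uses a repeating-XOR stand-in (and a made-up key) purely to illustrate the property; the flaw is the constant key, not the particular cipher.

```python
# Illustration only: the real client reportedly uses AES-CBC, but any
# symmetric cipher keyed with a constant baked into the binary has the
# same weakness. A toy repeating-XOR cipher makes the point with stdlib only.
HARDCODED_KEY = b"\xa5\x5a\xc3\x3c"  # hypothetical constant, identical in every install

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher; encryption and decryption are the same operation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

stored = xor_cipher(b"hunter2", HARDCODED_KEY)  # what lands in the registry
assert stored != b"hunter2"                     # looks "encrypted" at a glance

# Any user (or malware) that pulls the constant out of the binary:
recovered = xor_cipher(stored, HARDCODED_KEY)
assert recovered == b"hunter2"
```

That's why the defense-in-depth upgrade people suggest is DPAPI or an OS keyring, where the key material is at least held per-user by the OS rather than shipped identically to everyone.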

I've seen recommendations to move away from TeamViewer for a few years now. They may have been good and free in the past, but there's not much reason to use them anymore when the casual user can use Chrome Remote Desktop, and the more tech-focused can use Splashtop.

How do you do this if you're storing passwords to send on to another endpoint?

Never had to store passwords in anything but a hash. My way is just to get the user to enter it again and never store passwords.

Could anyone tell me if this affects macOS? Not sure if I should rush to uninstall TeamViewer on my home computers.

I guess when you've used those credentials on a windows machine?

There’s an open source teamviewer equivalent that’s free to use if you build it yourself - looked really slick, but I can’t for the life of me find it now. Anyone got any ideas what it could have been? (Wasn’t VNC/RDP based). Dammit, wish I could remember

Doesn't the OS have something like a keyring? It's the same as the registry, but it explicitly states the intent.

It's pretty surprising that something this silly gets upvoted so heavily here.

Lack of hashing isn't the problem here, the fact that low privileged users can access these credentials is.

The author isn't to blame for the silliness though, whoever decided to editorialise this title for HN is.

I've taken a crack at rewriting the title. If anyone can suggest a better one (i.e. more accurate and neutral, preferably using representative language from the article), we can change it again.

Sorry I took the title verbatim from Reddit and wasn’t editorializing. It seemed blunt without sensationalizing anything to me when I submitted.

Kudos to the security expert for the finding, but his web page gave me eye cancer.

Seriously, consider lowering the contrast or changing the typography... I don't know, but it was hard to read in its original form...

IMHO, the website was a wonderful and easy-on-the-eyes experience to read.

I am visually impaired. I need sites to look like this just to consume them.

This was my experience and yours was different. Being able to let the viewer choose is critical so that everybody can use the web.

“I don’t like dark themes” would have been an adequate critique.

I also can’t read it as is, but it’s even worse than that: the page is not compatible with Reader View on iOS Safari.


From your own experience. I know at least 3 enterprise companies with around 70,000 employees that use TeamViewer. So, yeah, it matters.

Disgraceful. These companies need a kick in the pants. This product is clearly a gaping security hole.

Seeing shit like this daily is wholly demoralizing. Makes me feel like writing software for a living is a lost cause.

This particular issue reeks of "roll your own" when a lot of out-of-box token based solutions would just do a better job. The best lesson to learn about security in software is that if you're doing it and you don't have a PhD you're doing it wrong.

The benefits of leaning on a third party for common logic should be carefully reviewed on a case-by-case basis unless that software is related to security, in which case, please do lean on a third party.

>The best lesson to learn about security in software is that if you're doing it and you don't have a PhD you're doing it wrong.

fuck, I need to tell that to all top CTF teams that their members should stop competing and leave their jobs and focus on PhD

It may be a little bit offensive, but did you even graduate?

Sorry, I was a bit hyperbolic - let me clarify. I didn't mean that educational experience is a critical portion - more that working in the security field is a lot more theoretical than most and requires a much higher investment of time to start to comprehend. People without degrees absolutely can be security experts, and people with degrees can be utterly clueless - but if you're working on a webapp and given a week to write a login system then it almost certainly will be vulnerable.

So it's less specifically required that people have PhDs and more... this stuff takes time to fully understand, I know a lot of pitfalls on the topic but I wouldn't say I'm an expert - I just know enough to know that I don't know.

It isn't a lost cause. The lessons are:

  Security is high stakes
  Security is hard

Yet another sloppy product that reeked of security flaws the first time I used it. How do these products become so widely accepted within a technically savvy user group? It doesn’t take a lot of knowledge to see the obvious flaws in how poorly the desktop sharing is implemented in this software. No wonder it is a favourite of scammers.
