
[Disclosure: I work for AgileBits, the makers of 1Password]

We've talked about this several times, most recently in June. Please see https://blog.agilebits.com/2015/06/17/1password-inter-proces...

This falls into the question of what we can do to prevent attacks from malicious processes running on the user's machine. Where we can, we do try to defend against such attacks. For example, we take steps to make key logging a little more difficult.

In this case, the steps (other than mere obfuscation) that would be necessary to properly encrypt that channel would require that the user be confronted with a "pairing" request and prompt almost every time they restart their browser.

Again, it would be easy to obfuscate this communication, say by using wss; but the private key for that server would still need to be stored en clair on the computer.

There are other approaches as well, but all have unpleasant side effects that risk user data in other ways.




Surely encryption with a shared key is strictly better than sending in plaintext?

>In the extreme case, we could have some explicit pairing (sort of like Bluetooth) between 1Password mini and the extension. That is, the browser extension may display some number that you have to type into 1Password mini (or the other way around). With this user intervention we can provide solid mutual authentication, but that user action would need to be done every time either the browser or 1Password mini is launched.

Why every time? The first time, yes, and then each side stores its key. Future connections must handshake on that shared key, so you aren't talking to an imposter.

You still lose against root malware, but threat still seems to strictly decrease.
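The handshake described above could be sketched as an HMAC challenge-response over the stored pairing key. This is a hypothetical protocol for illustration, not anything 1Password actually implements:

```python
import hashlib
import hmac
import secrets

# Hypothetical "pair once, then handshake" scheme: after an initial manual
# pairing, both sides hold the same long-term key K and prove possession of
# it on every reconnect, without ever sending K over the channel.

def respond(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(respond(key, challenge), response)

K = secrets.token_bytes(32)           # stored by both sides after pairing

challenge = secrets.token_bytes(16)   # fresh nonce sent by the extension
assert verify(K, challenge, respond(K, challenge))                 # real peer
assert not verify(K, challenge, respond(b"wrong key", challenge))  # imposter
```

The objection raised further down the thread is that K must persist on disk across restarts, where an attacker with the same privileges needed for loopback sniffing could read it.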


The browser would have to hold on to the shared key long term. Where does it put this?

Encrypting with a key that is available to attackers (who have the same set of privileges needed to sniff localhost) is not encryption. It is obfuscation.


I thought the point of the arXiv paper was that a non-privileged app could effectively "sniff" by pretending to be the other app.

If I'm understanding correctly, that means that encrypting with a shared key helps. Am I missing something?


1Password verifies the code signature of the apps connecting to the WebSockets server using SecCodeCheckValidity.

That would prevent connections from apps that do not have their code signature whitelisted.


From the paper:

>WebSocket. WebSocket servers are typically built over a few popular opensource frameworks, such as CocoaHTTPServer [3] and QtWebKit [14]. All of them provide a receiver method for getting messages from browser extensions, which is used by Xavus to fingerprint this channel, and a response method for replying to the extensions. The invocations of these two methods are identified as the claim and the use of the channel, respectively. Between them, the server is supposed to access the HTTP header Origin that includes extension IDs attached by the browser and check the signature of the browser through the API SecCodeCheckValidity. If these operations are not found, the app is considered vulnerable to the threat from a malicious extension or app. On the other hand, the attack from a malicious server against an extension cannot be detected through any existing APIs.

This sounds to me like the app validates the extension, but the extension can't validate the app. In that case, using a shared key helps.
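The server-side Origin check the paper describes could look roughly like this sketch. The extension ID and header handling are invented for illustration; the paper pairs this check with a code-signature check via SecCodeCheckValidity:

```python
# Before honoring a WebSocket upgrade, the server verifies that the Origin
# header names a whitelisted browser extension. The ID below is made up.
ALLOWED_ORIGINS = {
    "chrome-extension://hypotheticalextensionid",
}

def origin_allowed(headers: dict) -> bool:
    return headers.get("Origin") in ALLOWED_ORIGINS

assert origin_allowed({"Origin": "chrome-extension://hypotheticalextensionid"})
assert not origin_allowed({"Origin": "chrome-extension://someotherid"})
assert not origin_allowed({})   # no Origin header at all
```

Note that this only lets the app vet the extension; nothing in the handshake lets the extension vet the app on the other end of the socket.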


Not if the shared key is present in the host computer already, which it'd have to be if the host process is to decrypt anything.


http://arxiv.org/abs/1505.06836

Other non root apps can masquerade as the app intended to get the data, and if it's sent in plaintext, they get information they wouldn't if it used a shared key accessible to the intended app.
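To see why no root is needed for this masquerade: any unprivileged process can bind a loopback port, so whichever process grabs the expected port first gets to play server. A minimal sketch (port 0 asks the OS for any free port here; a real server would sit on a fixed, well-known port):

```python
import socket

# Binding a loopback port needs no special privileges. A process that
# listens on the port the extension expects can impersonate the server.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # no root required
srv.listen(1)
host, port = srv.getsockname()
print(f"unprivileged process listening on {host}:{port}")
srv.close()
```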


Particularly if someone is specifically targeting 1Password, which they (almost by definition) already would be. It'd just take a single patch to the exploit and you're back in business. Well, okay, obviously that's an oversimplification, but it wouldn't do you much good regardless.


But then malware must run as root.


Not necessarily. Malware only requires privilege if it is violating a security policy. There are a variety of attacks in which a process executing as a user is able to access resources controlled by other processes which are executing as the same user.


"Let's throw encryption at the problem" isn't always wise.

For example: https://paragonie.com/blog/2015/09/comprehensive-guide-url-p...

What do you need to do before you can sniff a loopback interface?


http://arxiv.org/abs/1505.06836

You can run as a regular user (or even sandboxed) and connect to the interface without authentication.


Author here:

Thank you for responding. The reason I wrote this up is not to make people think the world is ending, but to let them know: hey, if you are using the browser extension, cleartext passwords are being sent over the loopback. If you don't like that, don't use their extensions.

You have made very well informed arguments about how hard it is to protect against someone who has access to your machine, especially elevated access. But the argument seems to me kind of like "well, it's hard to protect against, so we just decided not to do anything at all." It seems like one other password manager company did something, so why wouldn't you?

Can you explain why it would require a pairing request every time? Couldn't you just pair it once and then be done with it? Can you explain the other approaches?


I am not affiliated with 1password, but I am a security engineer.

You are asking 1Password to fix a non-issue. Sniffing loopback requires root privileges on the machine. If someone can sniff loopback, they can just as easily reach directly into the 1Password process's memory and extract the password from there, or replace 1Password with a malicious extension that sends all your passwords to the attacker, or just log your keypresses. These other strategies may actually be easier than sniffing loopback. There is no reasonable defense against an attacker with root access, and encrypting communications over loopback would be a complete waste of effort and CPU time.

Honestly I feel bad for 1password that this article insinuating a security issue in their product is trending on HN when there is in fact no issue (as far as I can see, from the information presented).


It doesn't always require root privileges; for instance, if some program (Wireshark) changes the permissions on your loopback. There are other questions there, but I'll probably dig into it later.

What bothers me about your post is that you just say, well, if a person gets access to a machine it is basically all over, which isn't necessarily the case. Even if they get root access, we don't want to make it easy, right? That's why we encrypt databases, or in certain cases storage at rest, so that even if the hacker has root access it is going to be hard.

1Password can never absolutely protect against an attack when an attacker has root. But they can make it harder than just dumping out lo0. Level of effort does count for something.


> for instance if some program (Wireshark) changes the permissions on your loopback.

That would be a serious security flaw in that program. Some program could also change the permissions on /dev/mem (or really any other device), with similarly disastrous results.

You can't expect 1password to defend you against other programs disabling the security of your operating system.

> if a person gets access to a machine it is basically all over which isn't necessarily the case.

Root access with ability to run sophisticated arbitrary code is game over, period.

Things that we should actually try to defend against (somewhat) include:

- A human briefly sitting down at your machine while you go to the bathroom. (E.g. Chrome's UI that allows you to display back your saved passwords in cleartext with no challenge is a legitimate security flaw, IMO.)

- Someone physically stealing your machine and then trying to dump the disk.

- Malicious software running without privileges.

However, malicious software running as root is game over. Sorry, it is. Trying to make it hard for such software to do bad things is simply not feasible; there are far too many avenues of attack.

> Thats why we encrypt databases or in certain cases storage at rest

No, we encrypt data at rest to defend against physical theft of the storage medium, not to defend against live running processes that have root privileges.

If a malicious process has root, your encryption is meaningless. The key is in memory somewhere. The attacker doesn't really need to know where. They can dump the entirety of memory, and then cycle through it trying every N bytes as the encryption key and see if it works. This won't take very long!
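As an illustration of that key-scanning attack, here is a toy sketch. The "cipher" below is made up purely for demonstration; real ciphers differ, but the sliding-window search over a memory dump works the same way:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher (NOT a real cipher): keystream blocks are
    # SHA-256(key || counter), used only to demonstrate the scan.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

def scan_dump_for_key(dump: bytes, known_pt: bytes, ct: bytes):
    # Slide a 32-byte window over the dump, testing every offset as the key.
    for off in range(len(dump) - 31):
        if encrypt(dump[off:off + 32], known_pt) == ct:
            return off
    return None

# Simulate a memory dump with the key buried somewhere inside it.
key = hashlib.sha256(b"master password").digest()
dump = b"\x00" * 1000 + key + b"\xff" * 1000
ct = encrypt(key, b"hunter2")
print(scan_dump_for_key(dump, b"hunter2", ct))  # -> 1000, the key's offset
```

With a real cipher the attacker would test candidate keys against any known plaintext/ciphertext pair, or simply against the padding/format checks of the encrypted vault.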


You make some very good points. Question is, is there any way to know if such a malicious program is on your machine?


Maybe. Many will not hide themselves very well, but some use their power to make themselves invisible (often called a "rootkit"). I think the most promising approach to avoiding them is to use "secure boot" where the hardware verifies a signature on the firmware, which in turn verifies a signature on the OS, etc. ChromeOS devices (and some smartphones?) are pretty good at this. However, secure boot also prevents you, the rightful owner, from hacking (customizing) your own machine, which kind of sucks.


I am one of the 1Password developers.

If you only want to pair it once then you will need to store the pairing secrets somewhere. Since both the server and the client are running on the same machine, it is going to be trivial to obtain this information.

Re: other password managers did something. It must have something to do with the disdain for obfuscation that we have held for a long time. Encryption is good; obfuscation is a lie. We always felt that if we obfuscated something, we would be lying to everyone. I am not blaming anyone, and I concede that our position might be a bit naive in this case.


You've asked multiple questions. I'd like to give you a separate reply for each.

I really really don't like talking about how our competitors might do things. (I work for AgileBits, the makers of 1Password). But I can talk about why we picked one security design over a plausible alternative. So here goes.

One way such a system could work is that the browser extension could talk to a process on the provider’s server. Because the server is remote, the TLS secret key would be stored on that remote server, and so would not be accessible to a root process running on the user’s system. This would, indeed, solve the IPC issue as there wouldn’t be any IPC.

There are a few reasons why we don’t operate that way. To operate that way, we would end up knowing when and where our customers log into things. But we don’t want to be in a position to learn our customers’ shopping habits or whether they log into ISecretlyLoveNickelback.org. With that client-server model, we would be getting lots of that kind of information and holding on to some of it unencrypted.

Similarly, we don’t want to store any information that, if stolen from us, would enable the thief to make cracking attempts against people’s Master Passwords. Sure, PBKDF2 is good, but it only provides temporary protection should the data be stolen. We also don’t want “big” secrets in the browser. Our browser extension never sees your Master Password and only ever has the password for the site it is filling (or learning from). We see the browser as a very hostile environment.

Now there are certain ways around some of those things. And again, I’m not making any claim about how any other product does things. I’m only discussing how a service with the properties you describe might work.

Different security designs expose you to different sorts of threats. Nobody can really defend against an attacker running as root on their own system, but our more obvious exposure to running on a rooted machine in this particular scenario is something that we think is worth the security and privacy benefits of our design.

[Note: I have plagiarized myself from https://medium.com/@jpgoldberg/ross-regarding-your-makes-you... as I'm being asked the same questions in different locations]


I'm giving a separate reply to each of your questions. This is "Couldn't you just pair it once and then be done with it?"

To pair it once would mean that a long term secret would need to be stored. Now we consider ourselves experts at storing long term secrets, but this one would need to be available to 1Password Mini and/or the browser extension even when 1Password is locked. So it would need to be stored on the local machine in a way that is not protected by 1Password.

So an attacker with root privileges (as needed for the loopback sniffing) would be able to get that key. And as I've said in various places throughout this conversation, "encrypting" something with a key that is available to the attacker is merely obfuscation.


And now my reply to your first point. Thank you for letting people know that this interprocess communication is readable by something running as root on the user's machine. For those interested in this, please see an article we posted about this back in June: https://blog.agilebits.com/2015/06/17/1password-inter-proces...

We try to be very clear and open about the choices that we've made in our security design, why we made them, and what the implications for users are.


> This falls into the question of what can we do to prevent attacks from malicious processes running on the user's machine.

It's likely that I'm misunderstanding things here, but here's what I'm imagining:

* I use 1Password and presume that my plaintext password stays on my machine.

* I use HTTPS for traffic that I care to secure.

* I contract to a vendor, with whom I don't share trusted information, to provide a dumb (non-VPS) shell, which I use for a variety of purposes.

* Among these, I use a SOCKS proxy to VPN traffic for some reason (say, to circumvent state censorship). Again, I presume that the vendor in question can't read my HTTPS traffic or 1Password info.

Can't the vendor in this case, who presumably has root on the machine in question, now sniff the loopback and find my passwords?

If the answer is "yes," then it breaks the workflow of trustless use of a SOCKS proxy.


The vendor does not have root on your machine or access to the loopback interface. Your machine's loopback interface should not be exposed through a SOCKS proxy unless you specifically tell it to do so (which is a terrible idea for reasons beyond just this situation).


I see. So the loopback traffic occurs on the local machine, the password is decrypted, and only then sent over the regular HTTP(S) connection (with this last bit being the only traffic on the SOCKS)?


Yes. Check your settings for your proxy to make sure you aren't sending traffic for localhost over the proxy.

(In Firefox, for example, it's excluded automatically.)


You can't just use some kind of Diffie–Hellman key exchange to agree on a key, instead of the user copying the key?


Are you suggesting simple DH without any long term secret?

Hmm. I'm fairly sure we thought through that option, but at the moment I can't see/recall why we rejected it.


Well, unauthenticated DH ("simple DH without any long term secret") is trivially MITM-able, so it could only be useful if your adversary could read but not write to the channel, which I doubt is the case in this context.
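A toy illustration of why unauthenticated DH falls to an active attacker. The group below uses a small Mersenne prime for brevity; real deployments use much larger, vetted groups:

```python
import secrets

# Unauthenticated Diffie-Hellman with a man-in-the-middle (Mallory).
p = 2**127 - 1   # a Mersenne prime; demonstration-sized only
g = 3

def keypair():
    x = secrets.randbelow(p - 3) + 2   # private exponent in [2, p-2]
    return x, pow(g, x, p)             # (private, public)

a_priv, a_pub = keypair()   # Alice: the browser extension
b_priv, b_pub = keypair()   # Bob: the local app
m_priv, m_pub = keypair()   # Mallory: active attacker on the channel

# Mallory intercepts each side's public value and substitutes her own,
# so each honest party unknowingly keys against Mallory.
alice_key = pow(m_pub, a_priv, p)   # Alice believes she shares this with Bob
bob_key   = pow(m_pub, b_priv, p)   # Bob believes he shares this with Alice

# Mallory derives both session keys and can re-encrypt traffic in transit.
assert alice_key == pow(a_pub, m_priv, p)
assert bob_key == pow(b_pub, m_priv, p)
print("Mallory holds both session keys")
```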


Right. This would be unauthenticated with all of the problems that that entails.


How would that work when both processes are running on the same Mac, under the same user account?


FTA "the 1Password browser extension protects you from a more typical malware attack of pasteboard/clipboard sniffers."

Clearly, someone at AgileBits has given thought to defending against a compromised machine. Therefore, I don't see how you can justify not obfuscating data sent over lo0. I also fail to see how obfuscating the data sent over lo0 would "have unpleasant side effects that risk user data in other ways."



