Q: Is it secure?
A: Security is not binary.
Q: OK, how secure is it?
A: It seems like you just asked that question.
Q: No, the first question was if it's secure,
the second question was how secure is it.
A: Well now that wasn't even a question at all.
Tell you what, if you find an unreported security
vulnerability I'll buy you a beer.
The only way to know if something is secure is when it's adopted en masse and you see whether it really was secure or not. You could read the WinXP pamphlet on security back when it was released and it had endless bullet points about how secure it was. Based on actual attacks after the fact, it was probably the least secure software in the history of computing.
Security isn't something you provide an answer to unless you're selling snake oil. Luckily, it seems most people prefer buying snake oil and are happy to eat up a vendor telling them how secure an utterly untested product is.
Security theory is not something you can understand as a non-technical user anyway.
That's all that has to be said to a non-technical user. Sometimes providing links to more information is good too.
Q: What do you mean "No"?
A: We believe we have done a good job in securing it.
Q: So did you do a good job?
A: We hope so!
Q: You "hope so", what sort of answer is that?
A: Trust us. It's secure. We are not hackers. We don't want to steal your data. We did not put in any back doors. We audited the code ourselves. There are not any kernel level hacks, root kits, or otherwise. This has been tested against a variety of anti-virus scanners and none of them flagged anything. We're very good. Please please trust us?
What testing methodology did you use? What forms of vulnerability or classes of errors does it prevent (valgrind, ...)? Has the code been formally verified?
What attack scenarios have you considered? Which ones do you not prevent (physical access, system compromise, user compromise)?
What are the knowns and the known unknowns?
Ultimately it comes down to "Trust us". Unless you are well versed in computer security, anything other than what I wrote is meaningless. Even the rootkit stuff I put there is above the head of the average computer user (we're probably talking the 98th percentile and above that would understand what a rootkit is).
Probably talking the 99.99th percentile for what's above.
There is of course the counter argument, that if you're non-technical, you probably shouldn't be trying to implement a cryptographic layer-3 network for any reason other than "the lols".
Pond is a great example of doing this well:
"if an entity can do something that is not listed here then that should count as a break of Pond"
Q: What attacks is it secure against?
Q: How do you know it's secure against anything at all?
Right now, IPsec practically requires PKI. But at Google or Amazon's scale, PKI is far from an easy problem: distributing keys to millions of nodes must be painful. And auditing the system must be its own level of hell, as I doubt many internal PKI systems attempt to manage devices at that scale. Unlike a smartphone or a laptop, where you can rely on two-factor authentication, a server must be single-factor authenticated. The server is the server, and that places a huge burden on correctly allocating certificates.
And then there's the chicken-and-egg problem: if you want to deploy PKI to millions of existing servers, how do you do that and ensure every server is what it says it is? There are too many shaky links of trust involved for a system like that to stand up.
I really like this idea; in many ways it's better than the idea I had about IPv6, because it uses the DNS layer to advertise public keys. It's inarguably more extensible, to boot. My idea would have fixed IPv6 to a single standard for IPsec; this is much more flexible.
This is precisely how CJDNS works, and it works wonderfully.
I've been working on a python frontend for it; I call it Cirque. https://github.com/jMyles/cirque
Not saying that's necessarily a bad idea; cjdns seems to be useful to the people that use it. But if I want to build an app that communicates P2P over such a network, a manual step to join the network won't fly.
There's nightfall to find/announce public peers, but I consider it very beta (quickly hacked together on a bus trip).
The project is in the middle of a partial rewrite. The existing DHT has several issues and I'm replacing it.
The change is going to break compatibility, which made it into a much bigger change because it provided an opportunity to make several other compatibility-breaking changes. So I haven't been promoting the project recently and the DHT bootstrap node is currently offline.
There should be new code some time around the end of summer.
Looks like the DHT used for NAT and resolving .key addresses is not currently online, at least my (very well connected) test machine wasn't able to connect to the 1 pre-seeded DHT peer.
Has anyone gotten it to work outside of a single machine, and ideally through NAT?
What are some use cases this can be applied to?
IP addresses (normal, unicast ones) like 188.8.131.52 and a9c::890 are meant to reach a computer somewhere on the Internet, through any number of routers. Routers are meant to forward IP packets until they reach their destination.
Try it with your neighbor's computer, it's not going to work. Did he enable DMZ or port forwarding? Alright, that works when your neighbor is home. Now try it when your neighbor is at work. His IP address changed, so there goes the reachability. This seems like "duh, obviously," and you're right. But I just want to perform the fundamental action of connecting two computers.
I personally used Tor to solve this problem: run a hidden service on one, connect to the .onion address on the other (you can configure ssh to work with .onion addresses).
This public key system would solve the problem in a much better way, without going through Tor. Not that Tor is bad, it's just not meant for this. Connecting the two machines directly without thinking about the intermediary network is what I wanted.
Second, snow does not change whether an application is centralized or not. It's the application which is centralized, not the address. Your host's address can be "184.108.40.206" or "abcdefghijklmnop", this does not change how the application works at all.
Third, snow is just a tunnel. Any tunnel would "fix" an application the same way by simply translating addresses and encapsulating communication.
This is basically just onion routing, but snow doesn't really exist to be an onion router. The real purpose of snow appears to be that the author wanted to use the features of IPv6 (secure connections and the ability to address and connect to a host behind a network firewall) without having to actually use IPv6 in his application, doing all this on top of an IPv4-only network. This is what sets it apart from every other NAT tunnel. The public key stuff is a red herring.
Applications tend to assume that IP addresses are globally unique. ISPs depend a lot on each other to handle routing properly. Occasionally we see a route leak when someone screws up. Sometimes it even happens deliberately. And it's entirely possible that malicious routes are announced on a regular basis to conduct clandestine MITM attacks. Technical solutions for automatically determining which ASNs should be allowed to announce an IP prefix remain problematic. And BCP 38 - while it helps to deal with DoS attacks and certain security issues - also breaks some very useful approaches to deploying high performance/scale applications.
The internet is currently far more centralized than most people like to admit. The reality is that both DNS and IP are handled by delegation from a central authority. For instance, proof of IP address ownership remains outside the scope of the protocols. Network connectivity still remains based on trust relationships. That is fundamentally incompatible with a decentralized and ad-hoc approach to networked applications.
There are many network operators who have been shown untrustworthy. The design of the internet hasn't quite caught up yet.
And it really has nothing to do with centralization or decentralization. It's peer to peer. Your peers can be anywhere and you can send and receive anything, out of order, connectionless. This is fantastic for decentralized distributed networking.
Applications can 'assume' anything they want; that's the application, not the addressing protocol. Everyone who has read RFC1918 knows IP addresses are not unique.
And there is no way to ensure a route doesn't have a malicious actor. It's been shown time and again with networks like Tor that it doesn't matter what layers of security or obfuscation or decentralization you add. A bad actor on a route will be able to identify or mess with your traffic. Your application is the deciding factor in the security of the connection.
DNS and IP are not handled for everyone by a central authority. Both are independent protocols which can be used across the internet without a central authority's authorization. Of course IP addresses are more closely guarded, but like you mentioned before, advertising an invalid range of addresses works all the time. And DNS is not even needed to use the internet! Public domain registration using specific TLDs does have centralized control bodies, of course, but that's necessary to prevent conflict.
The internet is a web of trust. That will never, ever change. The reason it will never change is we all want something for free.
If you wanted, you could pay for and bury fiber-optic cable from your home to every place on planet earth that you want to make a network connection to. Then you wouldn't have to trust anyone, and when someone taps into your fiber or cuts the connection, you could (hopefully) determine that your connection is no longer "safe" or "reliable". But this is not very practical.
The internet fixes this by allowing any network to help any other network get around common network problems. We help each other because it is mutually beneficial. When that mutual assistance breaks down you get problems like the Comcast-Netflix debacle. No internet protocol or addressing scheme will route around a monopoly on the network. The only "decentralized" solution is a bunch of people on a wireless mesh network and a satellite link, which will still result in Netflix not being practically usable.
But please, keep believing that an addressing scheme will somehow keep you from having to trust a foreign network. Good luck getting House of Cards to stream.
Isn't this a similar concept to Tor addresses without the onion routing being part of it?
The problem with SSL is that it needs certificates. You need a domain name and a certificate if you want to run anything over SSL in a reasonable manner.
If address == identity then those requirements vanish because learning about the address already provides you all the information you need to establish a secure connection.
It democratizes authenticated connections.
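A minimal sketch of what "address == identity" buys you, assuming a hypothetical scheme where the address is simply the SHA-256 hash of the peer's public key (snow's real encoding may differ): once you know the address, you can authenticate whatever key the peer presents during the handshake, with no certificate or CA involved.

```python
import hashlib

def address_from_pubkey(pubkey: bytes) -> str:
    # Hypothetical scheme: the address is the hex SHA-256 digest
    # of the raw public key bytes.
    return hashlib.sha256(pubkey).hexdigest()

def authenticate_peer(expected_address: str, presented_pubkey: bytes) -> bool:
    # No CA, no certificate: knowing the address is enough to check
    # whatever key the peer presents during the handshake.
    return hashlib.sha256(presented_pubkey).hexdigest() == expected_address

peer_key = b"toy public key bytes"
addr = address_from_pubkey(peer_key)
assert authenticate_peer(addr, peer_key)
assert not authenticate_peer(addr, b"some attacker's key")
```

To impersonate the peer, an attacker would have to produce a second key hashing to the same address, which is exactly the second-preimage resistance the comment above relies on.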
It's interesting, and it may be useful. But for securing what we currently do with IP it's useless.
You would still have the problem of name resolution. However since the address would be the public key, once you had resolved the address the identity of the other party would be assured. Assuming that an attacker cannot feasibly generate an equivalent public key, you remove key exchange+authentication as an attack surface.
Key management could be a downside. If you update your key, you have changed your address, which would look like a name-resolution poisoning attack. There are feasible ways around this, but none are ideal (particularly if your key was compromised; mechanisms like signed forwarding records would become extremely hazardous). It would probably have to rely on name-resolution mechanisms similar to those used by current IP addresses.
Would you bake the private key into the container or set it at runtime? If you set it at runtime how will two containers in different places know who to talk to?
Perhaps you generate the keys at build time and add the public keys to the partner containers, then at run time you inject the private key into the container via an env var. Now you have to securely manage and transport private keys and you've got two problems.
There must be other things I'm not considering.
And, of course, whatever system is running the container can step into it and read the private keys (or any malicious containers running on the host that are able to break out of the container). But we can just avoid that by saying they are our own hardware.
It IS the hash of the public key...
This is a public key hash:
Snow lets you use it like an address...
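For illustration, here is a toy encoding of a public key hash as a DNS-safe name (the actual snow format may differ): hash the key, base32-encode the digest, and use it as a label under a .key pseudo-TLD.

```python
import base64
import hashlib

def key_name(pubkey: bytes) -> str:
    # Illustration only; snow's real encoding may differ. Hash the
    # public key and render the digest as a DNS-safe base32 label.
    digest = hashlib.sha256(pubkey).digest()
    label = base64.b32encode(digest).decode().rstrip("=").lower()
    return label + ".key"

print(key_name(b"toy public key bytes"))  # a 52-character label plus ".key"
```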
@gruez, the length of an IPv4 address would not be future-proof enough. IPv6 might just do it; I can't say.
@y0ghur7_xxx, Let's say I want to expose some snow-unaware service via snow to only a single host. I have no way to set up an iptables rule to do that atm. "When you resolve a key name an address is assigned to that key. The address remains assigned to the key as long as there is traffic, but never for less time than the TTL on the DNS record and never for less than 5 minutes (and generally for much longer than that)."
What I'm more interested in, is a protocol that can let people share data on a DHT, which is resistant to denial of service and other security issues. I guess freenet is that already (somehow), but it's really not usable.
There are so many things in bitcoin I'd love to see in other standards, especially for messaging and forums. It would make things so much harder for the NSA and advertisers.
For my answer to this, see my comment elsewhere in this thread: https://news.ycombinator.com/item?id=9844987
Pretty usable, has also a browser version: https://github.com/amatus/gnunet-web
As fun as trudging through supposedly secure C++ code is, I'd rather have an understanding derived from the principles.
From what I could gather, both use public DHTs for routing, and AFAIK public DHTs in general can be rather trivially crawled for metadata.
The current generation Internet already offers plenty of methods to protect message contents, but very few can also obfuscate metadata, which can be just as revealing, but almost always much more readily accessible.
The main problem with using a public key as your identity is that it's a horrible string of gibberish that people can't remember. What you want is some way of mapping some friendly name to your key.
But you can resolve friendly names to keys however you like. You can update that mapping however you like. That is orthogonal to what snow does.
How are NAT entries recycled? Also couldn't SNI be used here to do this via a single IP?
That's the idea.
> How are NAT entries recycled?
DNS responses have a TTL. The mappings last at least as long as the TTL and get extended if any traffic is sent to the address or there is another name lookup. After there is no traffic for a period of time the address goes back into the pool.
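A toy model of that recycling policy, with hypothetical pool and timing details: each key name leases a local RFC1918 address, lookups and traffic refresh the lease, and it expires after at least max(DNS TTL, 5 minutes) of silence.

```python
MIN_HOLD = 300  # never recycle a mapping in under 5 minutes

class AddressPool:
    """Toy model of the recycling described above: a key name leases a
    local address; lookups and traffic refresh the lease, and it expires
    after max(DNS TTL, 5 minutes) of silence."""

    def __init__(self, addresses):
        self.free = list(addresses)
        self.leases = {}  # key_name -> (address, expiry_time)

    def resolve(self, key_name, ttl, now):
        hold = max(ttl, MIN_HOLD)
        if key_name in self.leases:
            addr, _ = self.leases[key_name]
        else:
            self.reap(now)            # recycle expired leases first
            addr = self.free.pop()
        self.leases[key_name] = (addr, now + hold)
        return addr

    def touch(self, key_name, now):
        # Called whenever traffic is seen for the mapped address.
        if key_name in self.leases:
            addr, expiry = self.leases[key_name]
            self.leases[key_name] = (addr, max(expiry, now + MIN_HOLD))

    def reap(self, now):
        for key_name, (addr, expiry) in list(self.leases.items()):
            if now >= expiry:
                del self.leases[key_name]
                self.free.append(addr)

pool = AddressPool(f"10.77.0.{i}" for i in range(2, 255))
a = pool.resolve("abcdefghijklmnop.key", ttl=60, now=0.0)
assert pool.resolve("abcdefghijklmnop.key", ttl=60, now=10.0) == a
pool.reap(now=1000.0)          # lease expired: address returns to the pool
assert a in pool.free
```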
> Also couldn't SNI be used here to do this via a single IP?
SNI is specific to TLS, and the Host header to HTTP. Doing it this way works with other protocols too.
I can't think of a way to make this compatible with the DNS method you're using now, though... you'd need a new address class that only ever returns a fixed IP via DNS, used exclusively for connections where the requested .key name is determined some other way. You could do this for TCP/80 with the HTTP Host header and TCP/443 with SNI, for instance. I'm wondering if one way to do it would be with haproxy, to avoid having to implement this yourself.
Since a lot of connections going over Snow are going to be HTTP or HTTPS, this might make sense, at least for IPv4 where your IP space is limited.
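A sketch of the Host-header half of that idea (names and framing are hypothetical): every .key lookup returns one fixed IP, and the proxy recovers which key the client actually wanted by parsing the Host header of the plaintext request; SNI would play the same role for HTTPS.

```python
def key_name_from_http_request(raw: bytes):
    # Toy demultiplexer for the scheme above: parse the Host header
    # out of a plaintext HTTP request and return the .key name it
    # carries, or None if the request isn't addressed to a .key host.
    for line in raw.split(b"\r\n")[1:]:
        if not line:               # blank line ends the header section
            break
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"host":
            host = value.strip().decode().split(":")[0]  # drop any port
            if host.endswith(".key"):
                return host
    return None

req = b"GET / HTTP/1.1\r\nHost: abcdefghijklmnop.key\r\n\r\n"
assert key_name_from_http_request(req) == "abcdefghijklmnop.key"
```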
The way Tor does it is to use a SOCKS proxy. Then you don't need an IP address and it's protocol-agnostic but the client application has to support SOCKS.
I'm not sure address space limitations are even a major problem. Address assignments are local, not global, and there are millions of RFC1918 addresses.
For HTTP you have to solve it from the other side anyway. An HTTP server would be more likely to run out of addresses than a client would. But running out of IPv4 addresses is what IPv6 is for. We could even return both IPv4 and IPv6 addresses until the IPv4 addresses run out. Then if you want to burn through millions of peers you just have to support IPv6.
But the idea is certainly welcome. Key-based address resolution has the potential to obsolete many types of current DNS attacks.
Also a Rust version of this would be nice.