
If we were running our economy on steam and gold we would have even fewer problems. Your life expectancy would be half what it is now, but Sherlock Holmes would be able to find the blackmailer.

Do you seriously blame the technology and, instead of finding a technological solution, propose to blame everyone using it for occasional consequences you do not like?


> If we were running our economy on steam and gold we would have even fewer problems.

Absolutely. I've lost a lot of coins in various hacks so this is experience talking, not theory ;).

> propose to blame everyone using it for occasional consequences you do not like.

I'm not blaming anyone, and I'm still quite hopeful about crypto (albeit much less so than before). I'm just saying this is a dark side of crypto that people are afraid to look at, and it's not going away.

You may remember this discussion if, God forbid, somebody hacks the service where you hold your coins (https://bitcointalk.org/index.php?topic=576337), exploits a vulnerability in your OS and steals your wallet, or does the thing in the article.

Right, we are literate, we don't keep our wallets online or on our hard drives. For maximum safety we keep our key pairs on a piece of paper or wood and hide them away in a dark place.


Multi-key addresses would be more robust. Plus, when it's an Open Asset on the Bitcoin blockchain, you can "rotate" keys by moving your identity to a different multisig address when one of the previous keys gets compromised.

-----


Every time some OpenSSL bug is announced, I remember my issues with TLS (I don't know if this bug is TLS-related or not).

Would be nice if TLS supported:

1) "hot" certificates with 24h ttl signed-off by a longer-living certificate on a more secure machine. Having 1-year certificate private key deployed on a web server is crazy. Especially since revocation does not really work.

2) Threshold multi-signature certificates for both CAs and end-user certificates.

3) CA certificates locked to specific TLDs (was there an RFC about something like that already?), so that a Russian CA cannot sign a certificate for a Canadian TLD.

4) Ultimately, blockchain-based name pinning at the DNS level.

The last three do not really relate to the case where a bug in OpenSSL reveals a private key stored on the web server.
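As for #1, here's a rough sketch of the issuance side using the Python 'cryptography' package (the hostnames and the self-made CA are placeholders; the long-lived key would live only on the more secure machine):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Long-lived signing key: stays on the secure machine, never on the web server.
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"internal-issuing-ca")])

    # Fresh key pair for the web server; a new one can be generated every day.
    leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    now = datetime.datetime.utcnow()
    hot_cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.com")]))
        .issuer_name(ca_name)
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(hours=24))  # the 24h TTL
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(u"www.example.com")]),
                       critical=False)
        .sign(ca_key, hashes.SHA256())
    )
    print(hot_cert.not_valid_after)

Only hot_cert and leaf_key ever reach the web server; if they leak, the exposure window is a day.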

-----


Regarding #1, check out Anchor - https://wiki.openstack.org/wiki/Security/Projects/Anchor - an ephemeral PKI system coming out of the OpenStack community.

In brief, it attempts to address weaknesses in PKI operations, specifically revocation, by allowing issuance of short-lived certificates based on a set of validation rules. The README has some detail about the tool: http://git.openstack.org/cgit/stackforge/anchor/tree/README.....

A recent presentation is at https://www.youtube.com/watch?v=Q_ZhrQq-_YM.

-----


Just in case the stackforge link disappears, the project is being moved right now into OpenStack proper. There's no dependency though, so you can easily use it on its own.

We'll release version 1 soon, so the API will change just a little bit, but the main idea remains: issuing short-lived certificates from a single location (or multiple; it works without changes in multi-master mode).

If you have any configuration issues, get in touch, we're happy to hear about new use cases for this tool.

-----


There is still the issue of the CA that is in place for Anchor: your users have to trust it.

-----


Yes, the main reason it was created is OpenStack deployments, where Anchor is used for securing internal communications, so the PKI is (most likely) an internal one.

We cannot change how the CAs work in public networks unfortunately, but if they do, we're going to be ready :)

-----


The fact that 3) was not built in from the start pains me to no end. It should have been flying first class on Priority Airlines, with a penthouse in the Hilton bugtracker, since 1996. Instead, it's #3 in someone's list somewhere in a forum in 2015.

-----


AFAIK it was built in from the start, as far as the X.509 protocol is concerned. It's just not used anywhere.

http://tools.ietf.org/html/rfc5280#section-4.2.1.10

-----


Oh my God! What? How is this not a major, major thing? How are Mozilla and Google not pushing hard for government CAs to have these, like, stat? And for OpenSSL to actually check them?

I am bewildered. What happened? Is this just apathy?

http://blog.codekills.net/2012/04/08/adventures-in-x509-the-...

Unbelievable. This is SRV for HTTP all over again. The Betamax of the 21st century.

-----


Enforcement of nameConstraints is inconsistent at best.

I experimented with name constraints a couple years ago for a private CA project, with the idea that I could restrict the private CA to issuing only names within a chosen subdomain.

I remember being able to enforce nameConstraints on the subjectAltName, but I was never able to get it to enforce anything on the subject Common Name. In theory new certificates should always have a critical subjectAltName extension, but the unchecked Common Name makes this worthless in practice.
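For reference, here is roughly what that constraint looks like when built programmatically; this is a sketch using the Python 'cryptography' package rather than OpenSSL config, and enforcement is still entirely up to each verifier:

    from cryptography import x509

    # Restrict the private sub-CA to names under one chosen subdomain.
    nc = x509.NameConstraints(
        permitted_subtrees=[x509.DNSName(u"internal.example.com")],
        excluded_subtrees=None,
    )
    print(nc.permitted_subtrees)
    # Attached to the CA certificate as a critical extension, e.g.:
    #   ca_builder = ca_builder.add_extension(nc, critical=True)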

It's also possible that my X.509-fu is not strong enough, or that I was testing with an older version of OpenSSL that doesn't implement it.

http://blog.codekills.net/2012/04/08/adventures-in-x509-the-...

-----


> 1) "hot" certificates with 24h ttl signed-off by a longer-living certificate on a more secure machine. Having 1-year certificate private key deployed on a web server is crazy. Especially since revocation does not really work.

This seems like the kind of thing that Let's Encrypt could be good at: if you have a fully automated renewal process, why have certificates last a long time?

-----


It's the kind of thing that QUIC supports out of the box.

-----


How so? What in QUIC makes it easier to have short-lived certificates?

-----


Here are some techniques to address #1:

1. Store your private keys in a separate process, either on the same machine or on a remote, more secure host, and offload private key operations to that private key service. This obviously requires an encrypted connection between your webserver and the private key service, but you get good gains in security from no longer having the private key in your public-facing web server's address space.

2. Another technique you can add is to separate the data plane from the control plane on your public-facing webserver: you have a stripped-down process that just handles the low-level reading and writing of buffers to the wire, with a fast pipe connecting it to a separate process doing all the HTTP logic. That way, you can lock down the data-plane service and make it harder to exploit, since it has a much smaller surface area than a full-fledged web server.

-----


Your webserver needs access to the private key when it starts. Is the gain really big enough to be worth moving the key to another server? If I root the webserver, I can presumably just read the key from that service.

-----


The webserver process does not need access to private keys, just access to the encryption service ("TLS offloading").

-----


OK, so if my TLS termination host gets rooted... I guess the assumption is that the host is hardened and less likely to be exploited since it's just doing one well-defined thing?

My ignorance may be showing: how would I completely isolate the private keys from the public-facing service? I suppose using an accelerator card would do it?

-----


No accelerator card needed.

You run an "RSA signing server" that accepts connections only from the internal IP of your webserver (or maybe only TLS connections from across the internet but only if the client connecting to it has a client certificate signed by your own self-signed CA, but that depends on topology).

It runs a very simple app that accepts requests to sign something and responds with the <something> signed by your RSA private key. The code for this is very minimal and secure. You teach your web server to use this thing during TLS handshake, to sign the ephemeral key exchange.

If the web server is hacked, the hackers gain the ability to sign things with your private key, but they don't get the private key itself. They need to hack the "RSA signing service" server for that. They can't stockpile signed ephemeral key exchanges. After you detect the hack, snapshot and kill the hacked server, and start a new one from a clean backup, they lose the ability to impersonate your site.
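A minimal sketch of such a signing service in Python (standard-library HTTP plus the 'cryptography' package; a real deployment would authenticate the caller, use whatever padding the TLS handshake actually requires, and restrict what it is willing to sign):

    import http.server
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The private key exists only in this process, on the signing host.
    PRIVATE_KEY = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    class SignHandler(http.server.BaseHTTPRequestHandler):
        def do_POST(self):
            # Sign whatever blob the web server hands us and return the signature.
            blob = self.rfile.read(int(self.headers["Content-Length"]))
            signature = PRIVATE_KEY.sign(blob, padding.PKCS1v15(), hashes.SHA256())
            self.send_response(200)
            self.send_header("Content-Length", str(len(signature)))
            self.end_headers()
            self.wfile.write(signature)

    if __name__ == "__main__":
        # Bind to an internal address; the web server is the only allowed client.
        http.server.HTTPServer(("127.0.0.1", 9000), SignHandler).serve_forever()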

CloudFlare have a writeup on their service that does this (TLS negotiation by their TLS terminator with only you holding the key): https://blog.cloudflare.com/keyless-ssl-the-nitty-gritty-tec...

-----


This is an excellent explanation of how private key offloading works, thank you.

BTW, you can store private keys in an HSM on your private key server for an additional layer of protection, but nobody uses TLS accelerator cards anymore that I'm aware of. You can simply do the relevant crypto on commodity CPUs and still have plenty of throughput.

-----


Though I guess what they could do in such a scenario is implement a service which uses the signing server and hope to run undetected. Yes?

-----


Yes. My first idea for detecting this is to keep a count of signatures performed on the private-key-holding server and a count of TLS handshakes on your webserver, and compare the two.

-----


Does that actually help? If the attacker is running arbitrary code as the webserver, they can use the webserver's access to the service to MITM anyone trying to connect to you.

-----


There is no limitation in TLS preventing you from cycling your keys every 24 hours. Most CAs let you do unlimited reissuance. It makes public key pinning hard (impossible?), though.

-----


> It makes public key pinning hard (impossible?), though.

In this case you don't pin the server's key but the "central" key.

-----


This doesn't really work without the certificate expiration date also being 24-48h in the future, though.

I think they reissue the same 1y+ validity cert...

-----


Some allow manually specifying the expiration times.

-----


Entrust, for instance, allows you to purchase certs essentially on a subscription plan. You may have one valid cert for 3 years (and pay the reduced 3 year rate), and reissue it with 24 hour expirations every day.

I personally wouldn't go with an expiration that low because of the operational overhead, but a few weeks or a month is attractive. It still significantly limits the downside potential versus 1-3 year certs. Basically any cert (non-root) over 1 year should be considered against best practice at this point.

-----


4: Namecoin supports this, if I recall correctly, but I recommend pulling this value out of your local blockchain instead of DNS.

-----


I don't understand #1 - what prevents you from doing this today? When you generate a certificate you get to pick the expiration date; you are free to make it as short as you want. Don't intermediate certificates exist to implement this strategy?

edit: basically, what stops you from doing this?

    Root CA root cert (20y expiration)
          v
    Root CA intermediate cert (10y expiration)
          v
    Your intermediate cert (1y expiration)
          v
    Your short-lived cert (24h expiration)

-----


And who is going to issue you a reasonably-priced intermediate cert? Especially since PKIX name constraints don't actually work, so that intermediate cert would let you sign just about anything.

-----


For your own web servers and your own clients, use your own CA and your own certs.

-----


This isn't a serious solution except for intranets, and this problem exists most pressingly on the internet to begin with.

-----


3) Does anyone have a reference for this?

-----


Here's [0] the relevant section of the X.509 RFC (Name Constraints). Unfortunately, last time this was discussed on HN, someone mentioned that Name Constraints are not supported by all client software, making it unsafe to rely on them.

[0] http://tools.ietf.org/html/rfc5280#section-4.2.1.10

-----


I'd imagine (based on very superficial knowledge) that DANE would achieve something to that effect. But it's pretty much dead because apparently DNSSEC wasn't all that great.

-----


Very short-lived certificates instead of broken revocation are basically what DNSSEC gives you.

(It's apparently "dead" around here, but then again so is IPv6...)

-----


4) would only make sense with signed DNS responses.

-----


1) could be solved by an HSM or a similar construction in software. The private key never ends up on the web server this way.

-----


#3: What about .com, .org, .name and many more domains? Who will control them?

-----


What problem does #1 solve?

-----


It makes losing or leaking a private key less of a problem, because it restricts the exposure to a 24h window. It also makes (webserver) key revocation kind of unnecessary, because the certificate is automatically invalid after 24h.

-----


Accounting issues with wages are not as important when savings are possible. In today's ever-inflated economy many people have zero savings and a lot of debt; people live from paycheque to paycheque. But when savings are enabled by Bitcoin, a wage is merely an addition to the existing balance, and its adjustment to market prices is not so dramatic.

-----


Inflation makes debt cheaper to hold.

So a thesis that says that saving is easier if you make debt more expensive has some problems.

-----


Debt does not create savings. Savings are created by putting aside real money for later spending (not IOUs, but real collectibles: land, commodities, bitcoin). Making debt cheaper via inflation is simply a form of taxation: money is reallocated from savers to spenders without asking the savers' opinion, whereas savings are made 100% voluntarily, without extracting anything from anyone.

Also: people need less debt if they have savings. And it's much safer to save your own money (when it's allowed) and have 100% control of your business/investment, than to borrow capital and be a slave.

-----


I guess I don't understand what step 2 would be here:

1. Bitcoin

2. ?

3. Savings

-----


There are only two steps, and they have been happening since Jan 3rd, 2009.

-----


The Great Depression was caused not by deflation, but by previously unleashed inflation (as in "inflated money supply"). The collapse of a bubble is a logical consequence of any bubble. The alternative, ever-increasing inflation, was experimented with in Weimar Germany, Zimbabwe and a few other places. Hard money (Bitcoin or physically scarce collectibles) prevents people from inflating the money supply, thus preventing global bubbles and the subsequent depressions.

Of course, local stock-specific bubbles can still exist, but they are always subject to arbitrage and voluntary exits. A global money bubble is more destructive because the use of a certain money is enforced by law, and therefore people cannot easily exit or switch to an alternative.

-----


If deflation didn't cause unemployment and the Great Depression, why did the collapse of the bubble lead to unemployment and the Great Depression?

Hard money prevents the Fed from changing the inflation rate but it doesn't prevent random and severe changes in inflation and deflation.

From 2010 to 2013 bitcoin had a deflation rate of 96%, which is insane; any normal economy would turn Mad Max if it had that type of deflation. If I had received a loan for my house of 100k in 2010, I would have had to repay $1.5 BILLION in 2013.
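(A quick check of that arithmetic in Python, assuming the 96% figure is an annual rate compounding over the three years, i.e. prices shrink to 4% of the prior level each year:)

    debt_2010 = 100_000
    annual_price_factor = 0.04  # 96% deflation: prices fall to 4% of the prior level
    print(debt_2010 / annual_price_factor ** 3)  # 1562500000.0, i.e. roughly $1.5 billion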

Then from 2013 to 2014 it had an inflation rate of 353%. If inflation leads to bubbles and we all used bitcoin, this inflation would have led to the mother of all bubbles.

-----


All the discussed attacks (51%, selfish mining, etc.) work under the assumption of a relatively short time preference of a miner (how long he is willing to wait for ROI). In practice, though, there is no stable condition when you have miners of varying time preferences.

Miners with a longer time preference are willing to tolerate greater difficulty increases, invest more in mining, and generally go out of business later than miners with a shorter outlook. This creates a feedback loop which quickly leaves only the most hardcore investors in the mining business. This provides a foundation for "rational behaviour", that is, decision-making oriented toward the long-term value of Bitcoin.

In other words, the economics of mining leave among miners only the biggest believers in bitcoin, driving out the lesser believers. That's why in practice all miners are "honest": all of them have dug the deepest hole possible, and none is interested in a short-term "profit" from double spending that would not return even a fraction of their investment.

-----


> In other words, the economics of mining leave among miners only the biggest believers in bitcoin, driving out the lesser believers. That's why in practice all miners are "honest": all of them have dug the deepest hole possible, and none is interested in a short-term "profit" from double spending that would not return even a fraction of their investment.

GHash already conducted a 49% attack against a gambling site. https://bitcointalk.org/index.php?topic=327767.0 They blamed a rogue employee - but then the claim is no longer "miners would never do that, it wouldn't be in their self-interest" but "no single person at any mining concern would do that, it wouldn't be in the mining concern's self-interest even if it was in their own". Which of course isn't true. (It isn't true in the wider financial system either, which is why regulations exist.)

-----


Imagine you are sitting in a bunker and only get some bits of data from the outside world. You have no trusted source of information about these bits except the bits themselves. And there are plenty of reasons for someone to actively try to deceive you ("active attackers" as in "hostile environment"). You have no idea how many people are out there, how connected they are, or who is connected to you. You are just sitting in this blinded box with a radio station.

Your goal is to make sure that whatever bits you receive are exactly the same as everyone else is seeing; in other words, that it is practically impossible for an attacker to produce alternative bits satisfying your requirements and show them to someone else (while not showing them to you). One way to do it is to make these bits fully predictable (like the digits of Pi), but that's not useful. Another way is to implement a gigantic, ever-growing proof-of-work that enforces the condition of "practical impossibility": you receive some bits with such a big pile of PoW attached that you can be sure it's not possible for someone to quietly build an alternative bunch of bits with the same amount of PoW and unleash it on you or anyone else.
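A toy sketch of that condition in Python; it is nothing like real Bitcoin headers, just the shape of the argument: a history only counts if every block clears a difficulty target, and the receiver keeps whichever history carries the most accumulated work.

    import hashlib

    TARGET = 2 ** 240  # smaller target = more work required per block

    def block_digest(prev, payload, nonce):
        return hashlib.sha256(prev + payload + nonce.to_bytes(8, "big")).digest()

    def mine(prev, payload):
        nonce = 0
        while int.from_bytes(block_digest(prev, payload, nonce), "big") >= TARGET:
            nonce += 1
        return nonce

    def total_work(chain):
        # chain = [(payload, nonce), ...]; accumulated work, or 0 if any proof fails.
        prev, work = b"genesis", 0
        for payload, nonce in chain:
            digest = block_digest(prev, payload, nonce)
            if int.from_bytes(digest, "big") >= TARGET:
                return 0                      # invalid proof: reject the whole history
            work += (2 ** 256) // TARGET      # expected hashes spent clearing the target
            prev = digest
        return work

    # The bunker keeps whichever received history has the most work behind it.
    prev, chain = b"genesis", []
    for payload in [b"tx-batch-1", b"tx-batch-2"]:
        nonce = mine(prev, payload)
        chain.append((payload, nonce))
        prev = block_digest(prev, payload, nonce)
    print(total_work(chain))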

Protein folding and proof of stake do not help with this problem. You need an abstract, ever-growing scarcity implemented by converting energy into a short number.

You might ask: why should we be so strict with the security requirement? Well, because the history of civilization shows that people always try really hard to build hierarchies of power to manipulate property and human beings. Any system of global consensus outside of existing power structures will face a huge amount of resistance and therefore must be designed from the ground up to withstand it.

-----


If you have no idea how many people are out there, then you don't have any idea how expensive it would be to produce a false "big" pile of PoW.

-----


It's not "cheap" if you cannot take a ride. Total cost is a sum of the price, time wasted and the risk of being late. "Cheap" taxi that is impossible to catch is not actually cheap.

That's the same fallacy as when people whine about Uber's "surge pricing". When there is huge demand, Uber prices may go up 2x, but you will still be able to get a ride in 5 minutes instead of waiting half an hour, or not even knowing whether you will be able to catch a car at all.

-----


You conveniently omit that Uber, in some cases, jacked up prices 8-fold.

There's a very good reason why taxis are regulated in most jurisdictions. If Uber doesn't like such regulations, they're free to try to change them. And they do have very heavy-hitting lobbyists on their payroll to do just that.

They are not free, however, to break the law.

-----


I'll quote myself (http://blog.oleganza.com/post/85111558553/bitcoin-is-like):

Bitcoin is like Git: in Git (a distributed version control system) all your changes are organized in a chain protected by cryptographic hashes. If you trust the latest hash, you can get all the previous information (or any part of it) from any source and still verify that it is what you expect. Similarly, in Bitcoin, all transactions are organized in a chain (the blockchain) and, once validated, no matter where they are stored, you can always trust any piece of the blockchain by checking a chain of hashes that link to a hash you already trust. This naturally enables distributed storage and easy integrity checks.
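A toy illustration of that shared idea, using nothing but hashlib (not Git's or Bitcoin's actual object formats):

    import hashlib

    def chain_tip(entries):
        # Hash-chain a list of byte strings; the final hash commits to all of them.
        tip = b""
        for entry in entries:
            tip = hashlib.sha256(tip + entry).digest()
        return tip.hex()

    history = [b"tx: alice -> bob 1 BTC", b"tx: bob -> carol 0.5 BTC"]
    trusted_tip = chain_tip(history)

    # Data fetched later from any untrusted source can be checked against the tip:
    assert chain_tip(history) == trusted_tip
    assert chain_tip([b"tx: alice -> bob 1 BTC", b"tx: bob -> mallory 0.5 BTC"]) != trusted_tip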

Bitcoin is unlike Git in that everyone strives to work on a single branch. In Git everyone may have several branches and fork and merge them all day long. In Bitcoin one cannot "merge" forks. The blockchain is actually a tree of transaction histories, but there is always one biggest branch (which has the value) and some accidental mini-branches (no more than one or two blocks long) that have no value at all. In Git content matters (regardless of the branch); in Bitcoin consensus matters (regardless of the content).

-----


While the whole WWDC site is set in Myriad Set, the WWDC15 logo uses (probably) a rounded version of the San Francisco font (from the Apple Watch). I haven't seen this rounded version of the SF font before, nor anywhere in the Apple Watch UI yet.

-----
