
Let's Encrypt now fully supports IPv6 - el_duderino
https://letsencrypt.org/2016/07/26/full-ipv6-support.html
======
matt4077
So unfair! Comodo once, a while ago, also thought about using IPv6!

But seriously: Let's Encrypt is doing excellent work. It's a great case study
in how inefficient a mostly-free market can be: SSL adoption doubled within a
year. All of that was previously deadweight loss.

~~~
zanny
There is no such thing as a free market when certificates were only created
and issued by those who had the political clout to get their authority in the
Microsoft / Google / Mozilla trusted keyring or in the trust network of
another trusted provider.

A free market is one where you can compete independently.

If anything, it is a demonstration of why faux-markets with monopoly control
(or in this case, the collusion between established and trusted authorities to
prevent competition) are dangerous.

~~~
iancarroll
What? Anyone can become a certificate authority, so long as you take the time
to document your procedures, store the private key securely, and be externally
audited. You probably want an insurance policy as well.

The only root program that charges money for inclusion is/was Oracle's, as far
as I know. Everyone else's is free.

~~~
viraptor
Everyone else charges indirectly. Inclusion is free; you "only" need to go
through someone like WebTrust, who will charge you $100k+ for an audit.

~~~
superuser2
So? Relying on commercial auditors to verify compliance with published
industry standards is a pretty common free-market solution to problems of
trust.

~~~
viraptor
I'm not disagreeing with that. I just think the parent post was weird: anyone
can do it, just spend some time to obtain things, and only Oracle charges for
the program.

------
sp332
According to conversation on
[https://github.com/letsencrypt/boulder/issues/593](https://github.com/letsencrypt/boulder/issues/593)
they couldn't support it because one of their datacenters didn't support IPv6
traffic.

~~~
IgorPartola
Which sucks because a data center that doesn't support IPv6 is the 2016
version of a data center that only does Token Ring networking.

~~~
azernik
They are unfortunately and shockingly common.

~~~
imglorp
(Cough) AWS? (cough)

What's up with that anyway?

~~~
TenOhms
As someone who is responsible for protecting networks for large data centers,
the biggest issue I see with adopting IPv6 is the overhead required. I took a
job with another company this last year, and while I thought the asset
tracking was bad at my previous company, I found it can get much worse. I have
clients whom we host servers for where I have no clear method of determining
what their IP space is.

System admins routinely make mistakes with complicated host names, and trying
to acquire an accurate inventory is an absolute nightmare. This ties into IPv6
because why would anyone take that dysfunctional system, which barely works
with 'easy' IPv4 addresses, and make it even more complex? We would have to
support both IPv4 and IPv6 simultaneously, and firewall rules would get much
more complex initially; they are already a huge issue for me to get changes
made.

It was similar at my old job. Even though it was in the financial industry and
that particular company was rolling in profits, it couldn't keep enough
network engineers around to save its life. Turnover was high, documentation
was horrible, and projects to make things better languished in the ether.

No, no, no: forget the whole IPv6 thing, just run IPv4 for all things internal
and gratuitously support IPv6 outside if you really have to. I jest, of
course, but that has been the reality of my corporate life over the last six
years at two Fortune 500 companies.

~~~
DanielDent
The way I think about IPv6 is that it's an entire second network which happens
to frequently coexist on the same layer 1 and layer 2 equipment.

That said, I think there's often a strong argument for only using IPv6 for the
internal parts of a network. IPv6 actually simplifies things, and where IPv4
remains needed, it can be encapsulated and routed over a v6 network.

But it takes a team which understands the v6 world and is able to take
advantage of the benefits for this to become a reality.

------
JohnnyLee
For any Go users out there, I'd recommend Russ Cox's package:
[https://godoc.org/rsc.io/letsencrypt](https://godoc.org/rsc.io/letsencrypt).
It automatically acquires certificates and keeps them up to date.

------
jdc0589
Someone play devil's advocate and tell me reasons I might _not_ want to use
Let's Encrypt (aside from potential issues from short-lived certs).

~~~
vbezhenar
Contrary to popular belief, it might be quite hard to configure. I spent a few
days trying to make it work, and I haven't succeeded yet, though my
requirements might be a bit atypical. I'm not going to run their official
client, which does too much for my taste, so I'm using letsencrypt.sh, which I
briefly inspected, and I feel comfortable having full control over the
process. I don't want to perform domain validation over HTTP; I want to use
DNS validation, so I have to write an additional software layer to integrate
letsencrypt.sh with my DNS provider's API (Vultr). And that turns out to be
not so easy.

For me, StartSSL was the best offering, but now I need a few third-level
domains, so Let's Encrypt seems the only free choice.

~~~
stephanheijl
To be completely honest, if you're not using the tool Let's Encrypt is
creating with the specific intent of making configuration easier, it's a bit
of a moot point to say that it "might be quite hard to configure". I'm not
saying your use case is invalid, but opening your comment by stating that it's
hard to configure might throw people off.

~~~
icebraining
I don't think it's too much to expect people to read a single paragraph before
jumping to conclusions.

------
yeukhon
Famous question: intranets.

We can do dns-01 verification on intranets (with a valid domain). But the
downside is that our domain would be logged in the certificate transparency
log. What is the downside of being in the log?

~~~
spikengineer
Whether you like it or not, in the near future all certs from all providers
will be logged anyway.

Most sysadmins don't like their intranet addresses being in the log, so as not
to provide intel to intruders.

~~~
yeukhon
Ah, I didn't realize we all eventually will be. So if I get a cert for
*.dev.example.com, am I exposing just dev.example.com, but not
foo.dev.example.com?

~~~
eeZi
Yes, the log is static: it only contains the subject name of the certificate.

But there's little to fear from exposing internal domain names. DNS names are
more or less public knowledge: they are transmitted unencrypted, end up in
plenty of caches, etc. Attackers can probably brute-force them or the PTR
records anyway.

------
haasn
Awesome! This was the second of the only two steps remaining before I can
fully turn on HTTPS. Now they just need support for IDNs (which they've also
announced) and Let's Encrypt will be functionally complete from my point of
view.

------
jimktrains2
What's this mean? If a site only has an AAAA record it can now get a cert?

~~~
jo909
I think you could always get a cert via the ACME DNS challenge for an
IPv6-only/AAAA-only domain. But you could not talk to the ACME/API endpoint
from an IPv6-only system, so actually requesting and retrieving the cert would
have to happen on another, IPv4, system.

(Just as a sidenote: you _never_ need to request and retrieve the cert on the
system that the domain name points to. That is just the easiest way and the
workflow most clients suggest, since it also makes a lot of sense.)

~~~
voltagex_
Would it be possible to set up a somewhat isolated VM that has the sole
purpose of requesting certs? I get stuck on needing either the webroot or
standalone method of certbot-auto.

~~~
majewsky
What's your specific problem with the webroot method? I'm using it on my
systems, and contrary to popular belief, certbot can easily run as a non-root
user when using the webroot method. (My configuration is at
[https://github.com/majewsky/system-configuration/blob/master/hologram-letsencrypt.pkg.toml](https://github.com/majewsky/system-configuration/blob/master/hologram-letsencrypt.pkg.toml).)
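For context on why the webroot method can run unprivileged: with the http-01 challenge, the client only has to drop one file under the webroot, which the CA then fetches over plain HTTP. A rough sketch (the function name is made up; the directory layout is the ACME well-known convention):

```python
import os


def deploy_http01_challenge(webroot: str, token: str,
                            key_authorization: str) -> str:
    """Write the http-01 challenge response where the CA will look for it.

    The CA validates by fetching
    http://<domain>/.well-known/acme-challenge/<token>
    and comparing the response body to the key authorization string.
    """
    challenge_dir = os.path.join(webroot, ".well-known", "acme-challenge")
    os.makedirs(challenge_dir, exist_ok=True)
    path = os.path.join(challenge_dir, token)
    with open(path, "w") as f:
        f.write(key_authorization)
    return path
```

So the only privilege needed is write access to that one directory, which is why running certbot's webroot mode as a dedicated non-root user works fine.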

------
AndyMcConachie
Does anyone know if Let's Encrypt supports DNSSEC validation? I mean, do their
data center recursive DNS servers do DNSSEC validation?

I'm wondering how easy it would be to forge DNS responses to their servers
checking that I control a domain name.

~~~
bracewel
DNSSEC is enforced at the resolvers.

~~~
AndyMcConachie
Thanks. Since my zones are secured with DNSSEC that makes me feel a bit safer.

------
serge2k
> We’re looking forward to the day when both TLS and IPv6 are ubiquitous.

Kudos to Let's Encrypt for their great work on the former.

A single sad tear for the state of the latter.

------
INTPenis
Expected, but when is Tor support coming? I read a forum thread indicating it
would be nigh-on impossible due to the .onion TLD's status.

~~~
kstrauser
I'm asking out of ignorance, because it had never occurred to me before just
now: would you _need_ HTTPS on a Tor site? I thought Tor itself would handle
trusted encryption for you. Or is it a layer of defense against malicious
nodes?

~~~
icebraining
It does for hidden services, but HTTPS allows the browser to know the
connection is secure, which lets it apply different rules, like mixed content
blocking (if, say, you're browsing a .onion forum and someone links to an
image hosted on a non-onion address).

~~~
Nullabillity
Then perhaps browsers should whitelist .onion as "secure" regardless of
protocol?

I'd also like to see whitelists for the reserved-for-private-use IPv4 ranges
and a .local or .home TLD, since those are circumstances where HTTPS doesn't
give you much either, and where getting a certificate is unreasonably
difficult.

~~~
johncolanduoni
This is a terrible idea, since nothing stops the onion TLD from being spoofed.
The way the browser requests resolution of .onion names is no different than
any other; to visit them you need to be communicating through some sort of
proxy (possibly on your own computer) that intercepts both your DNS requests
and the HTTP requests to the returned address. HTTP does not provide any way
to validate that you are connecting to the intended proxy instead of a
malicious one.

The same thing applies to .local, .home, and private IPv4 ranges, which can
all be spoofed depending on where an attacker sits in your network.

~~~
Nullabillity
> This is a terrible idea, since nothing stops the onion TLD from being
> spoofed. The way the browser requests resolution of .onion names is no
> different than any other; to visit them you need to be communicating through
> some sort of proxy (possibly on your own computer) that intercepts both your
> DNS requests and the HTTP requests to the returned address. HTTP does not
> provide any way to validate that you are connecting to the intended proxy
> instead of a malicious one.

Presumably you wouldn't be visiting a .onion address if you're not already
connected through a Tor instance you know about.

> These same thing applies to .local, .home, and private IPv4 ranges, which
> can all be spoofed depending on where an attacker is in your network.

Which would be exactly the point, those are OK to spoof, since you'd only be
visiting them through a trusted network, where nothing can be externally
verified anyway.

~~~
johncolanduoni
> Presumably you wouldn't be visiting a .onion address if you're not already
> connected through a Tor instance you know about.

How about a link? Or, even more problematically, an attacker using its
now-trusted status to load it inside an HTTPS page!

> Which would be exactly the point, those are OK to spoof, since you'd only be
> visiting them through a trusted network, where nothing can be externally
> verified anyway.

What? How is it okay for safe-place.home to be trusted when an attacker can
spoof the DNS resolution upstream (like ISPs already routinely do to point you
to ads)?

The whole point of distinguishing HTTPS connections is that they provide some
way of guarding against spoofing of name resolution/packets and snooping.
Nothing about how .local, .home, .onion, or local-reserved IP ranges are
handled by browsers prevents these from being attacked, in many cases even
from outside your network. If you curl 192.168.80.1 (assuming that's not
within your subnet), your router will happily shoot some packets at your ISP.
The situation for the others is even worse.

~~~
Nullabillity
> What? How is it okay for safe-place.home to be trusted when an attacker can
> spoof the DNS resolution upstream (like ISPs already routinely do to point
> you to ads)?

I guess I was unclear; my point was that I think some TLD should be dedicated
to home networks, with ICANN and especially browsers recognizing that.

ISP spoofing wouldn't be an issue because if you used these TLDs then
legitimate requests would never reach that far anyway. If not, well, you
wouldn't be visiting such domains anyway and there would be nothing to spoof.

> If you curl 192.168.80.1 (assuming that's not within your subnet), your
> router will happily shoot some packets at your ISP.

But that's not an issue, because if it's not on your subnet then you wouldn't
be visiting it in the first place. Any snooping ISP could just as easily make
you visit some other address that actually did have a TLS certificate as make
you visit that one. In the worst case, you could make the
browser check your subnet mask. But since the contents on those IPs will be
unique from local network to local network anyway, I really don't see the
point in bothering.

~~~
johncolanduoni
> I guess I was unclear; my point was that I think some TLD should be
> dedicated to home networks, with ICANN and especially browsers recognizing
> that.

If it's for home networks, .home resolution would usually occur at the DNS
server on your router. How does the browser know that your router follows the
new rules and won't route that DNS request up to your ISP, and therefore
should trust the request?

> But that's not an issue, because if it's not on your subnet then you
> wouldn't be visiting it in the first place.

Unless your attacker can get you to click a link? That's a pretty easy thing
to get users (especially the inexperienced) to do. Or they can sneak it into a
secure page and monitor requests/serve malicious assets.

> In the worst case, you could make the browser check your subnet mask. But
> since the contents on those IPs will be unique from local network to local
> network anyway, I really don't see the point in bothering.

This ignores the case where your local network is either (a) infiltrated or
(b) a coffee shop. The second is _super_ common, and would need to be guarded
against by the browser having some sort of Windows-style public/private
network distinction, which users would have to remember to configure
correctly.

> But since the contents on those IPs will be unique from local network to
> local network anyway, I really don't see the point in bothering.

I'm not seeing the connection. If someone with control of your public internet
connection (i.e. what HTTPS is designed to guard against) sends a response
when your browser requests something from that address, what does it matter
what that address does in another local network?

Everything I've described here has been an element of a real attack where
something somewhere was more trusted than it was supposed to be. This would
add a massive array of attack vectors, and at best would indicate to the user
trust in something that has no reason to be trusted.

If you're doing something on your local network, it makes a lot more sense to
just create a self-signed CA and put the root on your devices. In the onion
case, you should use HTTPS between you and your proxy (e.g. with a *.onion
wildcard cert) to make sure you actually connect to your proxy.

------
jo909
This is in no way criticism against LE, where I work _nothing_ is IPv6 and we
do not even have it on any agenda.

But when a "we are going to change the future of the internet" project makes
IPv6 a priority-2 feature (to be added later, not native from the start), it
just shows that we are really not there yet.

~~~
geofft
It's getting to be time for us to admit what the holdup is: IPv6 hasn't been
deployed because IPv6 NAT ("NAT66") isn't a thing.

There are a million reasons NATs are terrible for the internet. But they're
_used_ on IPv4, and IPv6's technical goal of increasing the address space is
tied up into the technical goal of killing NAT, immediately, and changing the
way a lot of people think about networking. For instance, end-user ISPs are
expected to give you a /64 or more instead of a single IPv6 address so that
you don't need to NAT, but many of them don't, because that's not how people
think about addressing. If you have a NAT-using site and you want to switch to
IPv6, you have to pursue the political goal of convincing your ISP to think
differently about addressing.

Meanwhile, IPv4 and IPv4 NAT _works_. I'm typing this from behind a NAT,
you're probably reading it from behind a NAT. It's not ideal, but, rough
consensus and running code.

As soon as we all put our collective feet down and insist on IPv6 NAT
implementations, such that IPv4 sites can move without rearchitecting their
environment (whether or not that rearchitecting would be a good thing), IPv6
will get deployed quickly.

~~~
kstrauser
> For instance, end-user ISPs are expected to give you a /64 or more instead
> of a single IPv6 address so that you don't need to NAT, but many of them
> don't

Name one? I've been on Comcast and Sonic, and both natively provide /64
networks. I've never heard of an ISP providing a /128.

> Meanwhile, IPv4 and IPv4 NAT _works_.

No, it _doesn't_. It breaks a million things more than it solves, and it makes
the Internet worse (and vastly more asymmetric, but that's repeating myself).
NAT needs to die in a fire, and there is zero political or technical
motivation to inflict its brokenness on a new protocol that absolutely does
not need it. Evidence: that many ISPs _are_ providing native, un-NATed IPv6 to
their customers. Perhaps some don't, but someone will manage to screw up _any_
given feature. They need to fix their shit, not coerce the rest of the
Internet to break itself for their convenience.

~~~
roblabla
OVH's cheap dedicated Kimsufi servers only provide a single IPv6 address (from
a /126), instead of the recommended /64 block :(.

~~~
DanielDent
I have heard rumours that, although this is what they state, in practice they
actually assign the entire /64 to the machine. I'm not sure if this is true,
and I have not tested it myself.

I see it largely as an attempt to do market segmentation and limit the
usefulness of Kimsufi to push people towards their other brands. Unfortunate,
but...

~~~
roblabla
Well, it does work (you just have to assign the IPs statically), but since
you're kinda not supposed to do that, I assume it will either stop working one
day or get my machine voided as a TOS violation or whatnot.

It really is unfortunate. Not having to use a proxy for the sole purpose of
sharing port 80 would be nice...

------
getraf
Great news.

------
Animats
You mean it didn't?

~~~
awqrre
Does that mean that you are not using IPv6? I know I am not...

