
What you can't say - sophiebits
http://apenwarr.ca/log/?m=201104
======
Confusion
When confronted with disagreement, claiming your assertion seems to be
"Something you can't say" is incredibly weak. The article does not address any
of the counterarguments raised in
<http://news.ycombinator.com/item?id=2377109>, but merely restates the
assertion. It would be "something you can't say" if all the counterarguments
were grasping at straws, showing that people are unwilling or incapable of
understanding or accepting the assertion.

It's the same argument every crackpot invokes to explain disagreement. It is
the very last line of defense that everyone can invoke, independent of what
the other has argued. It's basically equivalent to "I'm right; you just don't
understand". But the article hasn't even tried to refute any of the
counterarguments and is in no position to make such claims.

~~~
philwelch
The difference between merely unpopular and What You Can't Say is how people
respond to it. Merely unpopular views are responded to with thoughtful,
rational rebuttal. What You Can't Say is responded to with moral outrage,
dismissal, and similar responses. Someone who's merely wrong can be corrected
with good argument taken in good faith--someone who's broached a taboo is
beyond simple argument and faces opposition on a social rather than a rhetorical
level.

~~~
Joeri
The discussion also tends to be steered away from the taboo, towards
nitpicking. The basic argument here is that ipv6 is less desirable than a
series of hacks on top of ipv4, but very few people were arguing that. Instead
they refused to even admit that possibility and focused on refuting individual
elements of the argument. Every single argument of the article could be
refuted, and still the basic tenet could be correct. I've read through the
discussion and I still don't know whether ipv6 is a good or bad idea.

Anyway, practicality always wins. If the ipv6 proponents can't make the
transition succeed soon enough, hacks+ipv4 will win by virtue of being the
only practical solution _right now_.

~~~
Confusion
The basic argument that most people responded to was the claim that the ipv6
transition was practically impossible. The central point of the
counterarguments is that since djb's article, a lot has changed and there now
actually _are_ sensible transition plans. Even djb has acknowledged that fact.

The remainder of the arguments were mostly about taste. He accepts NAT; many
others argue that NAT is horrible and he ignores those arguments based on
straw men (an edit in the original article) like:

      (Update 2011/04/02: A lot of people have criticized this
      article by talking about how nasty it is to require NAT
      everywhere. If we had too much NAT, the whole world would
      fall apart, [..]

Moreover, his arguments included technical errors that limit their
applicability. All in all he was simply far from convincing. Getting into a
discussion where you defend the virtues of NAT is not saying something "you
can't say". It's saying something unpopular that needs good arguments, because
there are good reasons it's unpopular.

------
thwarted
_I was shocked at the time that some people actually think Postel's Law is_
wrong _, but now I understand. Some people believe the world must be_ purified
_; hacky workarounds are bad; they must be removed._

 _Parsers that refuse to parse, Internet Protocol versions that don't work
with 95% of servers on the Internet, and programming languages that are still
niche 50+ years later... sometimes you just have to relax and let it flow._

Just "letting it flow" is what we've been doing; Postel's Law is exactly that.
And look at the mess that's gotten us into. Letting people off the hook by
working around their shitty "interoperable" implementations by being liberal
in what we accept doesn't help in the long run, and it, IMO, naively assumes
that eventually everyone will come around. It's short term thinking.

It's not about purity, it's about sanity.

And for the record, I'm a proponent of Postel's Law, because it's pragmatic,
but not blindly. One must learn from one's mistakes. The problem is when the
800lb Gorilla decides to be liberal in what they generate because they know
that everyone else will be liberal in what they accept--Postel's Law being
effective pretty much "requires immediate total cooperation from everybody at
once" (which you might recognize as one of the possibilities on the "Universal
Crackpot Spam Solution Rebuttal"). There's a time and place for Postel's Law,
and the hard part is finding the line where on one side it makes sense to be
liberal and on the other it makes sense to be conservative.

~~~
Joeri
The main problem with "pure" implementations is that they deny a core aspect
of humanity: we make mistakes. The problem is not that parsers have to deal
with invalid syntax (that's just a given); it's that they have a notion of
invalid syntax at all. It's not that hard to design a spec in such a way that
all input will be parsed in a predictable way, maximally extracting semantics.
This is what I like about HTML5 parsing; it doesn't have a concept of
unparseable input, yet all parsers can implement the same standardized parsing
algorithm.
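
A rough sketch of that idea (not the actual HTML5 tree-construction
algorithm, which is far more elaborate): Python's stdlib `html.parser` is
similarly lenient, so any input yields a predictable event stream instead of
an error.

```python
from html.parser import HTMLParser

class EventCollector(HTMLParser):
    """Record every parse event so we can inspect what was recovered."""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))

    def handle_endtag(self, tag):
        self.events.append(("end", tag))

    def handle_data(self, data):
        self.events.append(("data", data))

# Unclosed <p> and <b>, plus a mismatched </i>: invalid markup, but the
# parser never refuses; it just reports what it saw, in order.
collector = EventCollector()
collector.feed("<p>hello <b>world</i>")
collector.close()
print(collector.events)
```

If every parser runs the same recovery rules, they all extract the same
events, which is exactly the "standardized liberal parsing" being argued for
here.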

~~~
thwarted
_they deny a core aspect of humanity: we make mistakes_

Exactly. Postel's Law is meant to work around that. One might call it being
robust (one might also call "accepting the input and doing something sane
rather than trying to guess" robust). There are two holes in it, however:
1) it doesn't encourage people to actually fix their "mistakes", and 2) it
encourages exploitation of those who are liberal with their input.

We must be liberal, but not necessarily too liberal, in what we accept.
Postel's Law has specific applications. One shouldn't be liberal in their
acceptance of tyrants, for example.

~~~
Joeri
1\. Why do people have to fix their mistakes if automation can solve the
problem for them? If we can assume that mistakes will be made, and we can find
an automated way to solve those mistakes, then why should we force humans to
jump through hoops?

2\. Why can't a parser be strictly standardized and liberal with its input at
the same time? If the spec provides error recovery behavior, what is wrong
with that?

My point is that there's no such thing as too liberal as long as all parsers
implement the same exact kind of liberal parsing. Our low-level communication
protocols have no concept of invalid input, they can recover from any random
burst of garbage input, and we think this is normal. But then at a higher
level of communication, like XML, suddenly error recovery is a bad thing? It
makes no sense to me.

~~~
thwarted
_1\. Why do people have to fix their mistakes if automation can solve the
problem for them?_

Postel's Law isn't about automation, it's about where to apply effort.
Automatically fixing mistakes _at the time of their creation_ would be great,
but just like real life, there are a million ways something can be interpreted
wrong after the fact (and thus the wrong "fix" applied) and only one way to
interpret it right.

 _why should we force humans to jump through hoops?_

Humans have to jump through hoops to _create_ the robust error recovery.
Rather than the effort being evenly distributed among all parties when
everyone is conservative on both the production and acceptance side, the
producers can be really lazy and the acceptors have to jump through hoops to
accept all the lazy people's output. There is no automation here, _someone_
has to write the code that liberally accepts things, which is often a hard
task because of the many different ways things can be interpreted when they
are not specific and explicit.

 _2\. Why can't a parser be strictly standardized and liberal with its input
at the same time? If the spec provides error recovery behavior, what is wrong
with that?_

A parser that is liberal with its input _and_ provides robust error recovery
begets tag soup. The only people who like tag soup are those who want to be
lazy when producing it. It's more work, over all, to accept _all_ input and
try to figure out what was intended than it is to just say "I can't interpret
this" and tell the generator that they need to be more conservative in what
they generate (the other part of Postel's Law).

 _My point is that there's no such thing as too liberal as long as all parsers
implement the same exact kind of liberal parsing._

It's not the liberal parsing that is necessarily the problem, it's the second
order effect of liberal interpretation. If people can be liberal in their
parsing, then they can be liberal in their interpretation, and if we accept
that, then we as users have to settle for very little robust interoperability.

 _Our low-level communication protocols have no concept of invalid input, they
can recover from any random burst of garbage input_

"Recover"? If you don't do the TCP handshake in very specific ways, not only
does no other server talk to you, but you may end up breaking some of the
guarantees that TCP is supposed to provide. Random garbage that sets the RST
bit in a TCP packet closes the connection, it doesn't "recover" from that.
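
That behavior is easy to demonstrate on loopback. A sketch (the timing
assumptions are noted in comments): a server that closes abortively sends
RST, and the peer's next read fails with a reset instead of "recovering".

```python
import socket
import struct
import threading

def abortive_server(ready, port_box):
    """Accept one connection, then close it abortively (RST, not FIN)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port_box.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    # SO_LINGER with a zero timeout turns close() into an abortive close:
    # the kernel sends RST instead of the orderly FIN handshake.
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()
    srv.close()

ready, port_box = threading.Event(), []
server = threading.Thread(target=abortive_server, args=(ready, port_box))
server.start()
ready.wait()

client = socket.create_connection(("127.0.0.1", port_box[0]))
server.join()  # on loopback, the RST has been sent by the time close() returns
try:
    client.send(b"x")
    client.recv(1024)   # the reset surfaces here at the latest
    result = "recovered"
except ConnectionResetError:
    result = "connection reset"
client.close()
print(result)
```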

Now, obviously, I'm not advocating that things should outright crash when
given bad input: that's the worst. They should produce decent error messages
as soon as possible so the producer can be more conservative in what they
generate. Consider serving a web page as application/xhtml+xml, which in
Firefox (at least back when I was doing a lot of this) would fail to accept
the file and would tell you where it was structured wrong. By accepting any
old ambiguous format, you'd never see this error and you wouldn't know that
you weren't being conservative in your output. And since different browsers
treated malformed content differently (either accepting it, or not accepting
it, or trying to guess and often getting it wrong or different from what other
browsers guessed), you end up with a mess where the "liberal" accepting side
gets tagged as deficient if it doesn't jump through all the hoops thrown at
it.
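
That strict-mode feedback can be reproduced with any conforming XML parser.
A sketch using Python's stdlib `ElementTree`: it refuses malformed
XHTML-style input and reports the exact position, the same kind of message
Firefox showed.

```python
import xml.etree.ElementTree as ET

# Malformed "XHTML": the <p> is never closed, so </body> mismatches.
malformed = "<html><body><p>unclosed paragraph</body></html>"

try:
    ET.fromstring(malformed)
    outcome = "parsed"
except ET.ParseError as err:
    # A conforming XML parser must reject this, and err.position tells
    # the producer exactly where to be more conservative.
    line, column = err.position
    outcome = f"rejected at line {line}, column {column}"

print(outcome)
```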

------
gnaffle
In many ways, the IPv6 debate is similar to the x86 vs RISC debates. While
everyone agrees that the Intel architecture is bad, in practice it doesn't
matter all that much on big desktop chips.

Linus Torvalds had a small rant about that when the Itanium was introduced:
<http://www.theinquirer.net/inquirer/news/1008015/linus-torvalds-itanium-threw-x86>

The sad fact is that the x86 architecture survived because it was good enough,
and there was no incentive to replace it with a "purer" design. Only in the
mobile / microcontroller space has there been widespread success for more
RISCy designs (ARM/AVR). This wasn't because HTC, Samsung and Apple decided
that ARM was "purer", it was mostly because there was no x86 processor that
could compete with the speed and power and chip real estate afforded by ARM
designs.

Likewise, the incentives for IPv6 aren't there, and in fact, it introduces a
host of new problems for most users.

For instance, I spent quite some time trying to figure out why my NTP daemon
would not work. The reason? It was IPv6 enabled, as was my operating system.
The NTP server was also IPv6 enabled and had an AAAA record. But my ISP didn't
route IPv6, so no communication took place.

Now, I could blame the NTPD creators for not thinking about this and writing
some specific code for it with an IPv4 fallback (perhaps after 30 seconds just
to make life interesting). Or I could blame the ISP for being slow to
implement IPv6 (this was in 2004 after all). Or I could blame myself for being
incompetent and just using the default config files provided by the operating
system.

But the fact is, this is just what happens when you have one protocol that
always works, and another new protocol that sometimes works but you really
want to make the new one the default, preferred protocol.
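
The fallback the comment wishes ntpd had can be sketched in a few lines.
This is an illustration, not ntpd's actual code; `connect_with_fallback`
and its defaults are invented here. The idea: try every address
`getaddrinfo` returns, IPv6 candidates first, and only fail when all of
them do.

```python
import socket

def connect_with_fallback(host, port, timeout=5.0):
    """Return a connected socket, preferring IPv6 but surviving its absence."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Try AF_INET6 candidates before AF_INET ones, mirroring a stack that
    # prefers the new protocol but can still fall back.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    last_error = None
    for family, socktype, proto, _canonname, addr in infos:
        try:
            sock = socket.socket(family, socktype, proto)
        except OSError as err:        # e.g. no IPv6 stack at all
            last_error = err
            continue
        sock.settimeout(timeout)      # don't hang on black-holed v6 routes
        try:
            sock.connect(addr)
            return sock
        except OSError as err:
            last_error = err
            sock.close()
    raise last_error if last_error else OSError(f"no addresses for {host}")
```

Years after this thread, the "Happy Eyeballs" RFCs standardized a more
aggressive version of this idea, racing the attempts in parallel instead of
serializing them.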

~~~
perlgeek
> Likewise, the incentives for IPv6 aren't there

Maybe IPv4 + hacks still works great in the US and in Europe, but developing
nations (think: China, India, Brazil, ...) have lots of incentives to use IPv6.

Just because _you_ have no incentives doesn't mean the rest of the world
doesn't have any.

~~~
gnaffle
Sure, and just because the x86 instruction set works for desktop PCs doesn't
mean it works well for cellphones. Which was exactly my point: it will
spread because of incentives, not because it's the "better" solution. And if
another solution works well enough and is cheaper to implement, it will slow
the adoption of IPv6.

------
Joakal
I had a similar feeling with HTTPS [0] in suggesting a transition to making
encrypted connections the norm on websites so that networks can't sniff
activity. The point was lost on most posters, who want absolute security for
every website that uses HTTPS, despite the unfriendly UI warnings shown for a
certificate that expired only 5 minutes ago but is otherwise valid [1]. Not
just banks, but websites like Wikipedia too.

[0] <http://news.ycombinator.com/item?id=2376548>

[1] <http://news.ycombinator.com/item?id=2376183>

~~~
saurik
One could also argue that the point was lost on you that "there is no such
thing as partial security".

------
wladimir
_Internet Protocol versions that don't work with 95% of servers on the
Internet, and programming languages that are still niche 50+ years later...
sometimes you just have to relax and let it flow_

Progress requires hard work and perseverance. This is like saying that the
Wright brothers shouldn't have tried to build an airplane, because people had
been trying to fly for as long as human civilization has existed and never
managed to. Maybe it was time to relax and let it slide...

Being pragmatic is good, and I agree that not everything can (or needs to be)
'pure' and 'perfect'. Some things are OK as they are now. But I'm happy there
are people striving to make things better.

~~~
sqrt17
IPv6 has been the future for quite a while (2002), and it will probably remain
the future for quite a while because we have good partial solutions:

* The HTTP Host: header means that you don't need a public IP address for every site. Heck, it would even be possible to have a single public IP for a load balancer that redistributes requests to a data center full of servers that serve requests for thousands of sites. If you think this can't be done efficiently, think about these Cnoection: and oCnnection: headers that pop up with load balancers that only rewrite part of the header.

* NAT means that we only have one IP per Internet-connected household, not one per computer or other device. With mounting pressure, mobile operators will increasingly put their users behind NAT and maybe some people in address-starved regions of the world will only be able to get NATted connections from their provider. Software like Skype knows how to get around NAT pretty well, and UPnP works pretty well for other server-like programs that people have.

Which means that 99% of the people will be perfectly happy with multiple-
occupancy HTTP service and UPnP/firewall piercing and not care at all whether
IPv4 addresses run out or not. The remaining 1% will have to push very hard,
and pay a significant fraction of the cost, to get IPv6 off the ground all
while the incumbents sit in a corner and watch.
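
The Host:-header dispatch in the first bullet can be sketched in a few
lines (the site table and request text are invented for illustration; real
servers like nginx and Apache do this with far more care):

```python
# One listening address, many sites: name-based virtual hosting.
SITES = {
    "example.org": "content for example.org",
    "blog.example.org": "content for the blog",
}

def dispatch(raw_request: bytes) -> str:
    """Pick the site to serve purely from the Host: header."""
    for line in raw_request.split(b"\r\n")[1:]:
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"host":
            # Drop an optional :port suffix before the table lookup.
            host = value.strip().decode().split(":", 1)[0]
            return SITES.get(host, "404: unknown site")
    return "400: HTTP/1.1 requires a Host header"

request = b"GET / HTTP/1.1\r\nHost: blog.example.org\r\n\r\n"
print(dispatch(request))  # prints "content for the blog"
```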

~~~
asharp
The problem is that we have fewer IPv4 addresses than we have consumers who
wish to consume IPv4 content.

Take our current situation: we still have large swaths of China and India
without Internet connectivity, yet the IPv4 space is basically entirely
allocated, with demand for addresses still increasing.

IPv6 used to be hard to implement because older routers/switches implemented
their support in software rather than in ASICs. This is no longer an issue,
which is why you see IPv6 connectivity sprouting up around the place. I would
expect that when push comes to shove, we will see a fast v6 transition once it
becomes a large enough deal for there to be > 0 consumer demand for it.

~~~
sqrt17
My prediction is that, if large swaths of China and India get Internet
connectivity, they, or their internet providers, will have to choose between

* NATted IPv4 connectivity (maybe with one IP per city and not one per household) and

* buggy but non-NATted IPv6 connectivity.

with the former being cheaper to implement. Most likely, IPv6 will only see
universal adoption after the old gear that has buggy or too-slow IPv6 support
has been thrown out. Then again, if a large government (say, China) puts
itself behind IPv6 (say, because having one IP address per citizen is more
convenient for keeping tabs on everyone), a fast transition would be more
likely.

Don't get me wrong here: I think that the transition to IPv6 will happen
eventually, but not as fast as the shrinking number of remaining IPv4 address
blocks would suggest.

~~~
asharp
From what I've heard, carrier-grade NAT is stupidly expensive and rather hard
to scale nicely at reasonable speeds.

It would be fairly easy to implement, say, IPv6 in China, as it would be a
closed system with everybody getting new gear and everybody in the same "we
have no IPs" boat.

To be honest, as the Internet ecosystem in places such as China is mostly
closed, I'd be surprised if they didn't go IPv6-only internally well before
the rest of the world switches (what's the point in having a v4 address if
everything you care about is on native v6 and you have a NAT64 gateway for
everything that isn't).

Going to be interesting to see.

