
RFC8890: The Internet Is for End Users - pimterry
https://www.mnot.net/blog/2020/08/28/for_the_users
======
tW4r
I was really hoping this would use the same phraseology as technical RFCs,
including MUST, MAY, SHOULD, etc.

For example: The task force MUST prioritize... the user MAY expect...

------
xg15
> _For example, the recent standardisation of DNS-over-HTTPS (DoH) pitted
> advocates for dissidents and protestors against network operators who use
> DNS for centralised network management, and child safety advocates who
> promote DNS-based filtering solutions. If the IETF were to only decide upon
> technical merit, how would it balance these interests?_

I like neither DoH nor, in general, arguments of the form "X is not
political, we only decide on merits".

However, in this case, I really wonder whether the IETF's contribution is
the political one: DoH on its own is highly political, but to my
understanding, the politics lie in the decision to design an encrypted,
centralized, block-proof alternative to DNS with preselected servers and to
deploy it automatically to millions of users. Those points belong to the
initial requirements and the deployment process - not so much to the
technical details of the implementation.

That DoH is a standard makes some difference, but not very much. I think
we'd have roughly the same problems (though maybe with less publicity and
discussion) if this were some proprietary Chrome thing developed entirely
inside Google.

Or to put it differently, assuming it was already decided _that_ something
like DoH should be built, is there still political power in the follow-up
discussions about _how_ it should be built?
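For a sense of how thin the purely technical layer is: per RFC 8484, DoH
just carries ordinary RFC 1035 DNS messages in HTTPS bodies. A minimal
sketch - `dns_query` is my own illustrative helper, not from any library:

```python
import struct

def dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in classic RFC 1035 wire format."""
    # Header: ID=0 (RFC 8484 recommends 0 for cache friendliness), RD=1, QDCOUNT=1
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in name.split(".")
    ) + b"\x00"
    # QTYPE (1 = A record) and QCLASS (1 = IN)
    return header + qname + struct.pack("!HH", qtype, 1)

# A DoH client POSTs exactly these bytes to its preconfigured resolver:
#   POST /dns-query HTTP/2
#   content-type: application/dns-message
msg = dns_query("example.com")
```

Everything contentious - which resolver URL ships preconfigured, and how it
gets deployed to millions of users - lives outside those few lines.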

~~~
ignoramous
> _...and child safety advocates who promote DNS-based filtering solutions._

DoH doesn't prevent filtering. Sure, it doesn't respect OS- or
network-enforced settings, but that's kind of the whole point of its
existence.

> _...DNS for centralised network management._

DoT (and DNSCrypt?) would arguably be the better alternatives in these cases.

So those use cases are absolutely being addressed, at least in DNS's case.

> _I really wonder if the IETF's contribution is political: DoH on its own
> is highly so..._

I think DoH and ESNI are exactly the kind of response expected from a tech
firm like Mozilla: fight for its users and invest in the right technology to
help keep the Internet _global_, despite the many efforts to dismantle it,
not just by governments but by big tech in equal measure. In fact, it is big
tech that sells "cyber security" solutions to these governments in the first
place: a trillion-dollar market opportunity [0].

Look no further than the ITU to see how anti-user, pro-corporate,
pro-government, and politically influenced technology standards can be [1].
Things might take a steep dive given Google's [2] and Facebook's [3] recent
inroads into the communications industry. This pro-user political engagement
from Mozilla et al. is more than welcome, in my opinion.

[0] [https://foundationcapital.com/cybersecurity-the-next-trillion-dollar-market/](https://foundationcapital.com/cybersecurity-the-next-trillion-dollar-market/)

[1] [https://news.ycombinator.com/item?id=23416269](https://news.ycombinator.com/item?id=23416269)

[2] [https://www.opennetworking.org/](https://www.opennetworking.org/)

[3] [https://telecominfraproject.com/](https://telecominfraproject.com/)

------
moonchild
Recommend changing the link to the actual RFC: [https://www.rfc-editor.org/rfc/rfc8890.html](https://www.rfc-editor.org/rfc/rfc8890.html)

~~~
geofft
I think they're different - RFC 8890 is a particular position statement, and
this blog post discusses why they decided they needed to publish it, some
specific cases (e.g. the discussion about Parliament) that led up to it, and
their decision-making process in general. RFC 8890 itself doesn't mention
that, nor does it specifically mention things like DoH and China blocking it.

~~~
dTal
Additionally, the blog post links to the RFC, but not vice versa.

------
jillesvangurp
It's good to put a stake in the ground for the IETF to create clarity about
what they do and why. Countries like China, the US, Australia, and the UK are
of course interested in controlling the flow of information for political,
ideological, military and other reasons. Therefore they are interested in
controlling what the IETF does and how. That's the nature of any big country
or entity (including companies). E.g. Apple kicking competitors out of the app
store is exactly the same kind of behavior. Whether that's good or bad is a
separate discussion.

The reason the internet exists at all is that it provided a way around, and
relief from, this level of control, and that it allowed independent
innovation to happen. In a way this was a happy accident, because a lot of
the funding for the R&D behind it came out of Cold War era defense projects,
which were very much not about the free flow of information. So it would be
a stretch to say that this is working as intended. But the internet
succeeded exactly because it enabled that free flow, and the IETF brought
together the right parties to keep it doing so.

Stricter networks ultimately failed to compete because users and companies
found that they needed to tap into this free flow of information. The fact
that countries like China, Russia, North Korea, etc. are on the internet at
all (grudgingly, and with all the restrictions that they try to enforce) is
because they need to be. They are part of the world economy for the same
reason. And as much as they'd like to dictate what the value of their own
currencies should be relative to the dollar, renminbi, euro, etc., they have
no choice but to leave that to market forces. Of course that doesn't stop
them from trying to influence and manipulate those forces, nor does it stop
anyone else from doing so. But ultimately it leads either to hyperinflation
or to people swallowing their pride and paying in dollars for the things
they buy (e.g. oil) or sell on the international market. The internet is the
same; it's a take-it-or-leave-it kind of thing. Either you are on it or you
are not.

It is a bit of neutral ground where mutually hostile entities looking to
control, monitor, and manipulate each other can conduct their business (some
of it very bad business). It works precisely because things like HTTPS
prevent man-in-the-middle attacks and certificates keep us honest (to some
extent). All the other technical tooling the IETF standardized to ensure
third parties don't mess with private interactions between two parties is
there for just that reason. Of course the tools are far from perfect, and
there's an arms race with intelligence agencies exploiting loopholes, bugs,
design flaws, etc. The IETF's role is to address these issues, not to create
more of them so that authoritarian regimes can gain an edge over other
nations or their own citizens.

So, DNS over HTTPS is a good thing and exactly the right thing for the IETF
to be standardizing, because it fixes a problem with something they
previously standardized (i.e. DNS). It's controversial because it takes away
power from countries actively abusing that power. But the fact that others
are exploiting protocol weaknesses in DNS is fundamentally a bug, not a
feature. You could argue that DNS over HTTPS is not a perfect solution or
that better solutions exist, but not that the role of the IETF is to let the
Chinese have a say in what people (in or outside China) look up via DNS.
That's not their role. If the Chinese, the Australians, etc. want to block
people who choose to configure their browsers to use this (or browsers that
do this by default), that is of course their prerogative. But enabling that
kind of self-isolation is not a core IETF function.

~~~
swiley
The internet supposedly exists because it provided a more reliable way to
order nuclear missile launches.

DNS over HTTPS is controversial because the current implementations take
power away from end users. They move the resolver out of libc, whose
configuration (and often implementation) is controlled by the end user, and
into individual applications, where the user is lucky if the app lets them
adjust the parameters at all, let alone through a stable and intuitive
interface. The implementations almost have to do this, because cramming HTTP
and TLS into libc would be a bit crazy. Furthermore, it concentrates power
and data in the hands of a couple of organizations whose resolvers were
previously optional. At this point I'm not entirely sure I trust Google any
more than most foreign governments, or even my own.
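To make the contrast concrete, a sketch of the two paths (the resolver URL
in the comment is purely illustrative):

```python
import socket

# Classic path: the application delegates to the system resolver via libc's
# getaddrinfo(), which honours /etc/hosts, /etc/resolv.conf, nsswitch.conf -
# i.e. configuration the end user controls.
addrs = socket.getaddrinfo("localhost", 443, proto=socket.IPPROTO_TCP)

# Application-level DoH skips all of that: the app opens its own HTTPS
# connection to a resolver URL baked into the application (for example,
# https://doh.example/dns-query) and libc never sees the lookup.
```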

~~~
jillesvangurp
Nonsense. End users have the power to configure their browsers, just like
they have the power to configure their DNS settings, their operating system,
and all the rest. Of course, very few people do. Mostly, exercising that
power requires something most users don't have: the knowledge that they need
to do this, and how. The people actually capable of doing it lose no power
whatsoever.

You seem not to trust Google. Nobody is saying you should or must. Other DNS
servers are available. Set up your own if you must. You have that power. If
you are smart enough to know how to set up DNS now, you should be able to
figure out how to configure it to use HTTPS. It's not rocket science.

If not, maybe it's a good thing that your browser stops blasting your DNS
queries unencrypted over a public network to absolutely anybody who can be
bothered to listen in. That includes all the three-letter security agencies
you can name (domestic and foreign), big ad-driven companies, your local
police, and everybody else, with or without the cooperation of your friendly
neighborhood operators or the arm-twisting of the incompetent politicians
representing whichever government governs wherever you live. The best you
can hope for to protect you from that is a combination of incompetence and
indifference.

~~~
AlexandrB
> Nonsense. End users have the power to configure their browsers just like
> they have the power to configure their DNS settings, their operating system
> and all the rest.

This power is illusory. If Google decides Chrome will no longer allow
changing DNS providers, it's game over for 60% of internet users. Consider
how it's no longer possible to block ads the way you want in Chrome under
Manifest V3.

~~~
xg15
True - but I think this is not the fault of DoH or the IETF. On a technical
level, the power stems from Google's ability to auto-update Chrome with
whatever logic they see fit. On a social level, it stems from Chrome's
market share and users' acceptance of that power.

Even without DoH, Google could just as easily have decided to hardwire DNS for
Chrome to 8.8.8.8 or to switch Chrome to their own home-grown proprietary name
resolution protocol. They don't need a public standard for that.

------
luord
Seems like an attempt to make "fight for the users" (which should be
everyone's rule zero) textual, at least for the IETF.

I try to stay away from politics as much as possible, except on this matter. I
like this.

------
1vuio0pswjnm7
So if an end user tries to request a page from this site without using SNI,
instead of just an HTTP error code, they get this lovely little piece of
snark:

    This Web site requires a more modern browser to operate securely; please upgrade your browser.

The truth is it just requires the SNI server_name in the ClientHello. I
don't send SNI except to sites that require it. Not every site is on a
shared IP; some still use dedicated IPs. So I am the end user, reading HTTP
headers.

I always thought this additional "message" from mnot.net was somewhat
presumptuous for an IETF person who claims to be some sort of user advocate.
This is the kind of thing I usually see from web developers telling you to
"upgrade" your browser or turn on JavaScript for the best "user experience",
not from IETF folks. Maybe it's a default in some server software, not
custom.

His site does TLS1.3 but it doesn't do ESNI/ECH so I guess some browsers (or
other clients) can be a little "too modern".

~~~
vertex-four
In practice, I reckon you might be the only person in the world - or maybe
one of a dozen - who has this problem. And in that context it's not snark at
all; it's an accurate description of what is most likely the cause of the
error.

~~~
DarkWiiPlayer
> an accurate description

except it's a lie. SNI isn't "more secure"; it leaks information to a
possible MITM.
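For the record, here's what SNI actually puts on the wire, per RFC 6066: the
hostname rides as plain ASCII in the unencrypted ClientHello.
(`encode_sni_extension` is my own illustrative helper.)

```python
import struct

def encode_sni_extension(hostname: str) -> bytes:
    """Encode a TLS server_name extension (RFC 6066) for one hostname."""
    name = hostname.encode("ascii")
    server_name = struct.pack("!BH", 0, len(name)) + name  # name_type 0 = host_name
    name_list = struct.pack("!H", len(server_name)) + server_name
    return struct.pack("!HH", 0, len(name_list)) + name_list  # ext type 0 = server_name

ext = encode_sni_extension("www.mnot.net")
assert b"www.mnot.net" in ext  # visible in cleartext to any on-path observer
```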

~~~
jorams
Note that the message isn't saying SNI is more secure. It just says the site
needs SNI to operate securely.

~~~
DarkWiiPlayer
The website supports neither the more secure ECH nor the more secure option
of simply not using SNI at all. It's the website that doesn't support a
fully secure connection, and it shouldn't be blaming the user agent for
that.

~~~
vertex-four
ESNI is not yet supported by common software. Can you tell me your
alternative to SNI in this configuration:

        [nix-shell:~]$ nslookup redbot.org
        Server:  127.0.0.1
        Address: 127.0.0.1#53
        
        Non-authoritative answer:
        Name: redbot.org
        Address: 45.79.113.165
        Name: redbot.org
        Address: 2600:3c01::f03c:92ff:fe89:3e33
        
        
        [nix-shell:~]$ nslookup www.mnot.net
        Server:  127.0.0.1
        Address: 127.0.0.1#53
        
        Non-authoritative answer:
        www.mnot.net canonical name = cloud.mnot.net.
        Name: cloud.mnot.net
        Address: 45.79.113.165
        Name: cloud.mnot.net
        Address: 2600:3c01::f03c:92ff:fe89:3e33
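The point of the two lookups: both names resolve to the same address, so
until the client sends SNI the server cannot know which certificate to
present. What a server's SNI dispatch boils down to (the cert stand-ins
here are illustrative, not real certificate objects):

```python
# Two vhosts behind one IP address: the server must pick a certificate
# before anything authenticated can happen, and the SNI hostname is the
# only hint available in the ClientHello.
CERTS = {
    "redbot.org": "redbot-cert",      # stand-in for a loaded cert/key pair
    "cloud.mnot.net": "mnot-cert",
}

def pick_certificate(sni_hostname):
    """Select a certificate for the handshake based on the SNI hostname."""
    if sni_hostname is None:
        # No SNI: the server can only guess, refuse, or serve an error page,
        # which is exactly the behaviour complained about upthread.
        return None
    return CERTS.get(sni_hostname)
```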

~~~
DarkWiiPlayer
Without any tunnelling like a VPN or Tor, and looking only at MITM
vulnerability, the safest option would be to have several unrelated services
share one certificate.

This would in theory ensure that an attacker could only conclude that the
client is accessing at least one of the services on the target machine.

Setting aside the obvious risk that one of the services could claim to be
one of the others, this obviously comes with other technical limitations as
well.

Again, it's not that the site is doing anything wrong; it just shouldn't be
blaming the user for what is obviously a technical limitation of the
technology being used.

~~~
account42
> Setting aside the obvious risk that one of the services could claim to be
> one of the others

Not much additional risk there when both sites' TLS connections are already
handled by the same process.

------
xg15
> _Because of the global nature of the Internet, it wouldn’t be possible to
> pursue a bilateral or regional style of governance; decisions would have to
> be sanctioned by every government where the Internet operates. That’s
> difficult to achieve even for vague statements of shared goals; doing it for
> the details of network protocols is impractical._

In all honesty, maybe that would be the better alternative, if the only other
option is US tech companies deciding unilaterally for the whole internet.

~~~
bjo590
US-based tech companies are making those decisions for the whole internet
because internet users are voting for those companies' influence with their
time, attention, and clicks. It's open and democratic.

~~~
guerrilla
Not really, since most people don't even know they have a choice, let alone
that they are making one in any particular direction.

------
cryptica
This RFC makes no mention of the root of the problem: the Federal Reserve
system.

In our fiat monetary system, all new money that enters the economy comes
from banks as credit; companies then compete against each other to earn the
biggest possible share of that newly printed credit. This means that new
money doesn't originate from consumers (so-called end users); it all
originates from institutions. So why would companies care about end users
(consumers) when all the new money they get actually comes from financial
institutions? Why not get money straight from the source: cater to the
paymasters and use consumers as pawns?

These days, it makes more financial sense to manipulate consumers to please
financial institutions than to manipulate financial institutions to please
consumers. The dynamics are out of whack. That's why we urgently need UBI:
make consumers the source of all new currency. This would force big
companies to forget about cheap institutional money and focus on consumers.
Let consumers be the paymasters of the economy.

This game of allowing banks to decide who gets access to cheap credit and who
doesn't is inevitably going to lead to the kind of corruption we've seen over
the past decade.

------
anilakar
The IETF strives to create the best technical solution for given
specifications. That's its task. Where those specifications come from is a
related but separate issue.

~~~
ergl
Seems like the IETF's mission statement disagrees with you:

> The Internet isn't value-neutral, and neither is the IETF. We want the
> Internet to be useful for communities that share our commitment to openness
> and fairness. We embrace technical concepts such as decentralized control,
> edge-user empowerment and sharing of resources.

[https://www.rfc-editor.org/info/rfc3935](https://www.rfc-editor.org/info/rfc3935)

~~~
wyc
Also, the IETF has no formal membership roster or membership requirements, and
all participants and managers are volunteers.

------
superkuh
Hard to take someone saying "the Internet is for End Users" seriously when
they're working to implement QUIC, a purely corporate needs fulfilling step
backwards for human people.

~~~
luizfelberti
Hard to take what you're saying seriously when it's delivered in such a
crass and uninformative manner.

QUIC is a vastly superior transport to TCP in every technical regard. What
exactly makes it a "purely corporate needs fulfilling step backwards for human
people", if you don't mind adding some context to this extremely vague
statement/opinion?

~~~
superkuh
QUIC is what happens when corporations define standards so that their own
services can run more cheaply. The transport layer shouldn't be 'aware' of
what it is transporting. It shouldn't be a gigantic heap of things, even if
the things in the heap sound nice (like encryption baked in). It is already
impossible for anyone but Google (or another mega-corp) to make a browser
that works with "modern" sites. This piles on to that.

It is, despite its partial liberation from Google by the IETF, still a
Google attempt to control what HTTP and the web are. Google should not.
Protocols should be generic. Layers should be _layers_, not all squashed
into one so that YouTube videos load faster on your mobile phone. There's
more to the web, and the internet, than fixing mobile latency issues.

~~~
mongol
This sounds like a different variant of the "rampant layering violation"
argument that was used against ZFS. It is sometimes healthy to challenge old
assumptions. If QUIC turns out better than what we have in every respect
that matters "for the end user", as this article puts it, what would be the
harm?

