
Please consider the impacts of banning HTTP - v4n4d1s
https://github.com/WhiteHouse/https/issues/107
======
xamuel
Until there's a free, easy, maintainable, and actually existent solution to
SSL certs, enforcing HTTPS-only is just downright extortion.

Referring to solutions that are under construction doesn't cut it. If you're
that passionate about it, contribute to the SSL cert solution yourself instead
of to the endless calls for HTTPS-only.

The 'Semantic Web' movement promises that if everyone would just publish their
websites in XHTML with RDF annotations, we'd magically achieve world peace and
end hunger. (I exaggerate slightly.) Should we ban non-semantic websites?

Physical snailmail and the spoken word are unencrypted. Both are frequently
used to transfer data more sensitive than cat pics. If I'm surfing a website
in a coffee shop, yes, there's a danger someone could intercept the data to
spy on me. But they could just as well look over my shoulder, and HTTPS-
everywhere isn't gonna do anything about that.

~~~
e12e
Arguably there _is_ a solution: just use self-signed certs and/or your own CA,
and have browsers implement some form of trust-on-first-use and/or some
DNS/web-of-trust way of avoiding a big scary warning message. This _won't_
fix everything, but it is more secure than http and more honest than the idea
that you should trust _all_ the CAs browsers bundle.

Ideally browsers should just bundle their own CA certs, and implement some
form of semi-formal web of trust and/or a sane UI for the rest. After all, we
trust our browsers implicitly - but why should we elevate them to do
_transitive_ trust for us?

Let's just build on X.509, and get some kind of _meaningful_ trust.

Let's say that Apple, Microsoft, Debian, and Red Hat each distribute their own
trusted (self-signed) CA cert, and also work with Mozilla and Google to trust
(sign) each other's certs.

Then let trust-on-first-use or some other distributed method take care of the
rest. When Let's Encrypt works, let distributions trust that too.

The resulting system would _not_ be perfect - but I still think it would have
a better trust model than our current mess.

~~~
electrum
What stops the browser from automatically trusting a self-signed cert for
PayPal or your bank?

~~~
e12e
What stops the browser from automatically trusting a forged certificate signed
by a bundled CA? That's not a hypothetical question. It's happened before -
either through incompetent CAs, or malignant ones (see: Google/Mozilla vs
China).

The problem with the current trust model is that it's unclear _who_ we trust
-- or, put another way, who we empower to betray us. No trust without the
possibility of betrayal - no betrayal without trust.

With the current model, the trust the user places in a few parties (e.g.
Mozilla, Google, and the OS vendor) is extended to way too many CAs. So many
that the user either gives up (i.e. "I use the browser and trust the green
bar") -- or gets a crippled experience, because the model assumes that you
trust _all_ bundled CAs. Sure, power users can in theory remove CAs from the
store (and add new ones, like I do for cacert.org, as I use them for my
domains).

The fact that I add cacert.org reminds me of another thing: there should
probably not be any CAs that can sign certs for an arbitrary subdomain.TLD.
Since I add cacert.org, they can empower someone to MITM _all_ my TLS
connections. But that is a separate issue - and it already exists today.

Trust decisions are all about meaningful choice -- and choosing between not
using the web at all, and trusting Chinese (and every other) intelligence,
along with various foreign corporations (they're all foreign to _someone_),
not to enable or be tricked into MITMing my email, my web browsing, etc., is
no meaningful choice.

------
chjj
This is interesting.

I've always been okay with dropping HTTP for HTTPS-only as a long-term goal,
_as long as we get rid of the SSL cert racket first_.

As far as MITM and identity go, could we at least modify the protocol to allow
fingerprint caching, as opposed to certs, as a fallback? ...SSH does this and
it is arguably more important to secure than HTTPS.

Fingerprint caching seems insecure when you think about it, yet we're all okay
with maintaining our servers with this in place.
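
For illustration, a rough sketch of what SSH-style pinning could look like on
top of TLS - this is not an existing browser mechanism, and the pin-file name
and host below are just placeholders:

    # Sketch only: pin a server's certificate fingerprint on first contact,
    # and warn if it later changes (the SSH known_hosts model applied to TLS).
    import hashlib
    import json
    import socket
    import ssl

    PIN_FILE = "known_hosts.json"   # hypothetical local fingerprint cache

    def fingerprint(host, port=443):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False   # we pin fingerprints instead of using the CA store
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def check(host):
        try:
            with open(PIN_FILE) as f:
                pins = json.load(f)
        except FileNotFoundError:
            pins = {}
        fp = fingerprint(host)
        if host not in pins:
            pins[host] = fp          # trust on first use
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return "pinned"
        return "ok" if pins[host] == fp else "FINGERPRINT CHANGED - possible MITM"

    print(check("example.com"))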

Furthermore, X.509/ASN.1 is the worst thing to happen, ever. I know this
because I damn near tore my hair out trying to implement X.509 certificate
validation.

~~~
mmphosis
_> as long as we get rid of the SSL cert racket first._

"It is really a messed up situation to have to pay to not have your website
marked as dangerous"

~~~
IgorPartola
It's also messed up that we have to pay for domain names. Why can't we have
them for free? I want Google.com please. You can get free certs now, and more
ways to obtain them are coming this summer. In either case, let's solve our
immediate problem now, then add different authentication methods to browsers
after.

~~~
danieldk
This is an incorrect analogy. Domain names are a finite resource, the cost of
signing a certificate approaches zero.

~~~
mauricemir
not for proving identity - how do you prove that ebay is ebay or that bank
site really is Barclays and not some scammer

~~~
moe
Not with SSL certs anyway.

At least not as long as your browser trusts hundreds of CA's, including shady
ones such as Comodo[1] who will issue fake certs to any name (Google, Skype,
etc.).

[1]
[https://www.schneier.com/blog/archives/2011/03/comodo_group_...](https://www.schneier.com/blog/archives/2011/03/comodo_group_is.html)

~~~
eli
The fact that Comodo is occasionally scammed (a headline-generating event)
does not prove that they add no level of identity authentication.

------
hannob
This seems to make two wrong assumptions:

1. HTTPS does not only guarantee that data is secret; it also guarantees that
data is not manipulated. And in this sense scientific data is very sensitive -
it matters that you know your data is the correct data.

2. As so many do, it vastly exaggerates the performance costs of HTTPS. They
are really minimal. If you really care, I suggest you benchmark before making
any claims that your servers can't handle it.
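
For anyone who does want numbers, a quick-and-dirty benchmark of raw AES-GCM
throughput is only a few lines - a sketch using the third-party Python
'cryptography' package, with arbitrary payload size and iteration count:

    # Rough benchmark sketch: measure bulk AES-128-GCM encryption speed on
    # this machine. Results differ hugely with and without AES-NI.
    import os
    import time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aead = AESGCM(key)
    chunk = os.urandom(64 * 1024)        # 64 KiB per record
    nonce = os.urandom(12)               # nonce reuse is fine for a benchmark only

    n = 2000
    start = time.perf_counter()
    for _ in range(n):
        aead.encrypt(nonce, chunk, None)
    elapsed = time.perf_counter() - start
    print(f"{n * len(chunk) / 1e6 / elapsed:.1f} MB/s AES-128-GCM")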

~~~
dijit
Your second point is valid only for x86 machines (and modern ones at that).

AES instruction sets implemented in hardware are the main factor in making
HTTPS viable. But my toaster (which speaks HTTP), my ARM-based router, and
even my high-end server's iDRAC/iLO have no dedicated AES hardware
instructions.

So the point stands, even if it's not relevant to modern laptop users.

~~~
thoughtpolice
This point is important, because there is a lot more non-AESNI hardware out
there than the converse. But luckily, it's possible (and likely, I hope) that
TLS 1.3 will include the ChaCha20-Poly1305 AEAD ciphersuite, which should
improve this matter quite a bit for most users who need software
implementations - it's _much_ simpler to implement, and even in software a
heavily optimized implementation can get within the ballpark of AESNI.

Google is already using ChaCha20-Poly1305 inside Chrome to talk to Google
servers, if your hardware doesn't support AESNI. It's been doing this since
early last year, and Adam reported at that time nearly 40% of all traffic to
Google was going through it (including mobile devices IIRC):
[https://www.imperialviolet.org/2014/02/27/tlssymmetriccrypto...](https://www.imperialviolet.org/2014/02/27/tlssymmetriccrypto.html)

~~~
qrmn
It won't just include it; at this stage it seems set to become mandatory-to-
implement (not necessarily mandatory to deploy). We'll see.

------
maaaats
> _The effect on those without computers at home_

The additional bandwidth is not really that big. Yes, you cannot cache the big
newspapers etc. when everything is https, but I think most of what people do
nowadays is not cache-friendly anyway (like your private e-mail, facebook
etc.). If they are afraid of some people using all the bandwidth because they
cannot block youtube, netflix etc., they can divide the traffic in better
ways, limiting each client.

> _Restricting the use of HTTP will require changing a number of scientific
> analysis tools._

A non-argument. One cannot let things remain the same for ages just because of
backwards compatibility.

> _HTTPS is not a security improvement for the hosts._

So what?

> _HTTPS is frequently implemented improperly_

Non-argument. Https not being perfect doesn't mean it's useless.

I'm not necessarily saying https-everywhere is a good idea, just that these
arguments add nothing to the discussion.

~~~
josteink
> I think most stuff people do nowadays is things that's not cache-friendly
> anyway

So you're OK with throwing away a perfectly fine and proven internet protocol
which has survived for several decades, on a vague notion you have that "it's
probably not that cache-friendly anyway".

That sounds solid.

> HTTPS is not a security improvement for the hosts. So what?

Reduced performance. Increased latency. Increased complexity. Increased
attack-vector size. But indeed: so what?

> HTTPS is frequently implemented improperly. Non-argument.

Yes. Who cares about the real world, anyway?

I'm awfully sorry, but I honestly don't think your post counts as a very good
counter-argument to those very valid points which were raised in the reported
issue.

~~~
hueving
>So you're OK with throwing away a perfectly fine and proven internet protocol
which has survived for several decades, on a vague notion you have that "it's
probably not that cache-friendly anyway".

HTTPS is not throwing away HTTP. It just protects it with TLS.

>Increased attack-vector size. But indeed: so what?

I think you meant 'decreased'. By not being able to modify the payloads or
steal cookies, attackers are only left with the TLS protocol to try to mess
with, which is a much smaller attack vector than being able to tweak HTTP
headers and so on.

~~~
spacemanmatt
> I think you meant 'decreased'.

It could have been a sideways glance at flaws in SSL/TLS that have left
servers, data, or both compromised. Basically, a straw-man claim that plain
text is actually more secure, since a flaw in the crypto stack could exist.

------
PebblesHD
I'm aware that there is a proposed service to simplify the acquisition of SSL
certificates for websites, but at this point getting SSL and HTTPS ready is
both a costly and complex exercise for many webmasters. I manage several
websites for generally small audiences (1,000-5,000 people) with limited
revenue to cover the costs of getting a cert, which makes me reluctant to
support an outright ban on plain HTTP, especially when the content is not of a
personal or indeed personally identifiable nature. How does the proposal
address the ease of access to certificates and the current state of affairs in
certifying authorities?

------
josteink
> In summary: Non-sensitive web traffic does exist.

A million times this. I cannot believe that the current HTTPS-nazis totally
fail to see something this obvious.

There's countless use-cases where plain HTTP is not only OK, but probably the
simplest and best option.

Trying to ban it to address some vague possibly-maybe security-related
theoretical issues seems asinine.

~~~
onion2k
Non-sensitive web traffic does _not_ exist. Every request is sensitive in the
sense that it's another piece of data that can be used to build a tracking
profile of the computer that sent it. You might not care that a particular
request is tracked, but that doesn't change anything because you may decide
you do care _at some unknown point in the future_. Using HTTP takes that
choice away.

_There's countless use-cases where plain HTTP is not only OK, but probably
the simplest and best option._

That doesn't mean HTTPS everywhere is a bad idea. There definitely are use-
cases where HTTP would be faster/cheaper/easier/simpler/etc, but the move to
ban HTTP takes all that into account, and argues that banning it is still a
good idea because the implications for privacy are simply more important than
any of those issues.

~~~
empressplay
When any society (and the web is a society) starts to sacrifice the freedom
of its citizens to act in public if they so choose, in the name of "protecting
them" from the poor behaviour of a few bad actors, it becomes awfully
difficult to characterise that society as "free".

~~~
userbinator
...especially if that protection involves authorisation by centralised
entities. I would be far more supportive of ubiquitous encryption if it was
controlled by the users.

Notice how encryption which is under control of the user (mobile device
encryption, cryptocurrencies, full-disk encryption, self-signed certificates)
is seen as a threat, while systems like the SSL CA model where governments
could more easily obtain keys if they wanted to, are not?

"Those who give up freedom for security deserve neither."

------
rkuykendall-com
> "HTTP has become the de facto standard in sharing scientific data. [...]
> HTTPS would cause an undue strain on limited resources that have become even
> more constrained over the past few years.

Interesting that this keeps coming up. I saw the same thing at the top of the
comments on the Python 2/3 article here yesterday [1]: that academia is full
of people trying to do technical things and doing them poorly, so everyone
else should hold back progress because they're barely managing as-is, with no
engineering training or staffing, and if you change anything it's all coming
down.

Why is this a problem with Python or HTTPS and not a problem with the
priorities of academic departments?

[1]
[https://news.ycombinator.com/item?id=9397320](https://news.ycombinator.com/item?id=9397320)

~~~
alphapapa
Read the GitHub bug. We're talking about missions started decades ago. There
is no budget and no staff to maintain the existing code, much less rewrite
parts of it.

But because some wonk thousands of miles away in Washington made an "HTTPS
only from now on!" proclamation, all these existing projects which are working
fine should be canceled? Or the staff should put in hours of unpaid overtime
to learn, rewrite, and test decades-old codebases?

How about you volunteer to do some of that work, and then you can criticize
"priorities" and "people trying to do technical things and doing them poorly."

------
flurdy
Conventions, not rules - better every time. Though a rule that public services
should at least state that their data is not sensitive and does not need https
is probably a good idea, to force people to at least think about it.

But we should encourage more public data, not less, and not add another
potential step that might delay or discourage people from making data
available. Making https/ssl/tls easier and easier will make this a non-issue
eventually.

(I had to stop myself from spamming the github issue with a link to the seven
red lines video.... :)
[https://www.youtube.com/watch?v=BKorP55Aqvg](https://www.youtube.com/watch?v=BKorP55Aqvg)
)

------
ajanuary
Commenting on the argument rather than the conclusion, it does an odd bait-
and-switch.

> there is a statement that 'there is no such thing as insensitive web
> traffic' \-- yet there is. [...] Forcing these transfers to go to HTTPS
> would cause an undue strain on limited resources that have become even more
> constrained over the past few years.

That HTTPS adds additional strain on resources says nothing about whether the
data is sensitive or not. The entire post leaves "Non-sensitive web traffic
does exist" as an assertion while going on to provide arguments around
resources.

Not that "HTTPS-Only Standard" makes a particularly coherent argument in the
other direction.

------
ubernostrum
Title is misleading and the conversation reflects it: the request is for
reconsideration of the US federal government's move toward HTTPS-only _on its
own websites_. Nobody is trying to ban non-HTTPS websites in general.

------
deskamess
HTTPS certainly is not cache friendly. Is there an easy way to extend https to
handle heavy cache use cases (to improve caching behavior across clients)?

For example:

Assume: file.data exists and is already encrypted with enckey=K, deckey=D,
algo=X, etc

Client ->
[https://www.example.com/file.data](https://www.example.com/file.data) <->
<protocol between server/client to get deckey, algo, etc needed to decrypt>

<- transfer of file.data (from cache) without further encryption but still in
the scope of the outer https. The response would carry appropriate headers to
indicate where the "plain" data starts and how long it is. At this point, you
can use a packet filter and see the encrypted body of the file but not the
https headers or anything else.

- Client recognizes this valid https session response but takes the inner
sections without further decryption. The inner section would need to be marked
(as in a multi-part section), and https response headers would need to
indicate which byte range of the body should be read as-is and decoded with
key deckey.

Again, I am hoping for some sort of extension to https to make it cache
friendly.

Advantages: File is encrypted once (or as needed if policy requires it) and
caching proxies can serve the file without re-encryption per user session
response.

Disadvantage: Likely need to change http/s in some way to wrap and mark plain
data.
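
As a rough sketch of the encrypt-once idea (this is not an existing HTTP or
TLS feature - handing the key to each client over the outer HTTPS session is
simply assumed here), using the third-party Python 'cryptography' package:

    # Publish time: encrypt file.data once. The resulting file.data.enc never
    # changes, so any caching proxy could store and serve it as-is.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    with open("file.data", "rb") as f:
        ciphertext = AESGCM(key).encrypt(nonce, f.read(), None)
    with open("file.data.enc", "wb") as f:
        f.write(ciphertext)

    # Per client: fetch file.data.enc (possibly from a cache), receive
    # (key, nonce) over the ordinary HTTPS session, then decrypt locally.
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)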

~~~
pornel
There is a spec for it already:
[http://www.w3.org/TR/SRI/](http://www.w3.org/TR/SRI/)

~~~
deskamess
This adds an integrity (checksum) attribute to a resource and I think that is
fine. It can also be used over https or http, but the spec indicates http is
not safe (3.3.2).
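
(For reference, the integrity value is just a base64-encoded digest, so
computing one is a few lines - the file name below is made up:)

    # Compute an SRI-style integrity value for a local file.
    import base64
    import hashlib

    with open("script.js", "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    print("sha384-" + base64.b64encode(digest).decode())
    # used as: <script src="script.js" integrity="sha384-..." crossorigin="anonymous">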

The problem is that the data is going to be encrypted per user https session,
and this adds load to the server (say, if you are streaming a movie or
downloading a large file). With https the data has to be encrypted per user,
so there really is no caching of the output. Sure, the original data file can
be in a cache, but what goes out (payload sans header) is the
encrypted-per-user byte set - unlike http, where the payload across all users
is the same.

------
moron4hire
>> Due to many institutions having policies against FTP and peer-to-peer
protocols,

This is the problem. Your organization needs to stop cargo-culting IT policy.
Perhaps banning HTTP will be enough pain to make that change possible.

~~~
lultimouomo
> This paperwork has restricted our ability to openly share scientific data
> with international partners, and has impacted our sharing with an NSF
> partner that had a foreign-born system administrator.

I'd say their IT policy is not the only broken one...

------
higherpurpose
I think schools banning HTTPS has a lot more to do with being able to censor
what the students are visiting than saving bandwidth.

~~~
maze-le
Even with HTTPS enabled, institutions can censor based on IP or DNS. Blocking
based on deep packet inspection is a bit too complex to set up for normal
schools, I think.

~~~
dezgeg
I'm pretty sure DPI-based filtering can be purchased as an off-the-shelf
service.

And that's one of the core reasons I dislike HTTPS everywhere. It will lead to
organizations like schools installing their own root certificates to MITM
traffic, lessening the security of those things where encryption is the most
important, like online banking.

~~~
dublinben
Many universities already require the installation of a root certificate. The
primary purpose is to avoid buying commercial certificates for every last
internal university site, but it also has the effect you've mentioned.

------
mcguire
" _If the schools, libraries, and other filtered access points own the client
machines they can install their own CA into those machines and have the proxy
software intercept and generate certificates._ " (From the comments.)

That's the first time I've seen a man-in-the-middle attack described as a
technique for improving security.

------
mike-cardwell
The benefit of ubiquitous encryption outweighs this small list of minor
drawbacks a million times over.

Even if you could convince me that it's ok to send some traffic in the clear,
that wouldn't make any difference. You're just going to have to suck it up,
for the benefit of the web and humanity in general.

------
protomyth
Why does Yahoo's news need to be sent over an encrypted channel? Since
bandwidth is still expensive, why take away basic caching? I don't want to set
up reverse proxies, because I don't want the massive headache of separating
Yahoo from people's banks or medical records.

~~~
MichaelGG
I'd have a lot of fun and probably make a lot of money, if I could control
Yahoo News for a week.

~~~
protomyth
That is the single most unlikely thing that could happen. It is also just a
dumb argument for requiring people to do a lot of bookkeeping for every
printer or other device on a network.

I am starting to think this is some weird plot to make entry into any web
software harder and protect those already here, because the arguments for this
don't make sense. I assume that caching is going to end.

~~~
MichaelGG
Really? 4chan got Apple stock to plummet 5% just by announcing stuff about
Steve Jobs. Yahoo News publishing news like "Elon Musk dies" or "3 killed in
Tesla car battery explosions" or just other simple "X misses Q3 by 80%" or "X
to acquire Y" would do a pretty good job at changing prices, I'd guess.
(Though if you went overboard they could roll trades back, maybe.)

As far as the fun part. Just change people's quotes, slightly. Twist words
around. Make it seem like Obama really regrets the healthcare act. Or
something funnier. Done well, it'd be a fantastic piece of trolling. Done
really well, you could send people into a panic.

~~~
protomyth
Has this actually happened, or is this just a fever dream? We are talking
actual dollars and pain-in-the-butt work versus some weird hijack that looks
like it would get caught in very short order. I am unwilling to trade caching
and bandwidth for this.

~~~
MichaelGG
[http://www.cnet.com/news/whos-to-blame-for-spreading-
phony-j...](http://www.cnet.com/news/whos-to-blame-for-spreading-phony-jobs-
story/)

Even easier - someone just posted to CNN iReport and AAPL fell 10%. Awesome.

You asked what the benefit of encrypting news is. Well one benefit is that you
restrict who can modify stories. Instead of just compromising the network,
you've gotta compromise the box. There's value in making sure all data people
receive comes from the source they believe it does. (Now if they decide to
make trading decisions on CNN iReport or Yahoo News, well that's another
issue.)

~~~
protomyth
You keep giving examples that do not involve anything like a man-in-the-middle
attack. The CNN iReport was a regular posting. All of your examples have
nothing to do with the proposal and would have happened even if the site had
been using https.

------
BringTheTanks
"HTTPS-only" goes directly against the architectural principles laid out in
"REST", where intermediaries should be able to understand (in a limited sense)
the request and responses that pass through, do caching, differentiate
idempotent from non-idempotent actions etc.

The ability for intermediaries to see what goes through is in large part why
"REST" is said to aid scalability, the same point this article seems to
address.

Now, both movements, "HTTPS-only" and "REST", are widely popular in dev
communities. Yet I never see one acknowledge the existence of the other, which
threatens it. In fact, I've seen people religiously support both, unaware of
their cognitive dissonance.

Curious, I think.

~~~
maaaats
Because your initial premise is flawed. Equal GET requests will often have
different results based on the user doing them. Either because they are
requesting their "own" data or because they have different privileges and see
different results. While not perfect, it's the reality.

This throws out all possibilities of caching. And why intermediaries should
differentiate more than that, I cannot see. So https is in no way limiting
REST.

~~~
BringTheTanks
My premise is that HTTPS-only and REST have opposing constraints.

You have not demonstrated any flaws in it. REST says communication is
stateless and cacheable, while acknowledging a select minority of cases where
that does not hold.

Turning that minority of cases into the _only_ way of communicating nullifies
most of the benefits of REST, because the whole rationale of the paper -
intelligent shared processing and caching by intermediaries - is lost.

I'm taking no stance on what "the reality is". I'm taking no side about which
side is more correct. I'm stating what both sides want, and finding it curious
they don't see the contradiction.

~~~
smsm42
I think the description of REST you've outlined is not entirely right. The
statelessness relates to client state, not the system state - i.e.,
POST/PUT/DELETE etc. can very well change the system state, and that's the
whole point of them. Session state is allowed too; it's just not part of the
REST architecture, but is assumed to be implemented externally.

It is true that HTTPS may impede caching of some cacheable resources. Maybe
HTTPS could be improved to allow transparent caching of _some_ content, but
the security implications may be hard to predict, and it would require very
careful implementation to avoid introducing new security issues via attacks on
the caches themselves (the DNS system still has this problem AFAIK).

~~~
BringTheTanks
The statelessness relates to communication state. A client _can_ hold state
and it most certainly _will_ hold state (consider your browser: open tabs with
URLs, bookmarks, local browser cache; form autocompletion; settings; all of
this is "state").

Instead, REST talks about a request being stateless and a response being
stateless (i.e. sufficient on its own and not dependent on preceding or future
communication between that client and server).

This is, again, done for the benefit of intermediaries, because intermediaries
should _not_ be forced to hold state in order to interpret REST communication.
Every request, response should be sufficient on its own to be understood.

~~~
smsm42
Sorry, I was not clear - by "client state" I meant not "state kept on the
client" but "state on the server that is kept different for every client".

------
natch
So the author thinks that scientific data is non-sensitive data. He's an
astronomer.

Perhaps he's not familiar with the story of another astronomer, Galileo, and
what people thought of his data.

[http://en.wikipedia.org/wiki/Galileo_Galilei](http://en.wikipedia.org/wiki/Galileo_Galilei)

It's not always about what you think about your own data. It's also about what
others think of your data... which is something beyond your control and
sometimes beyond imagining.

------
tempodox
Isn't HTTPS-only just a financial ploy of root certificate vendors? I'd
welcome more security but it shouldn't cost an arm and a leg.

------
smsm42
It looks like a mix of very good arguments (some traffic is not sensitive, and
ensuring data integrity may be done much more cheaply than with HTTPS), iffy
arguments (we must have bad security because some governments ban some people
from having good security), and outright bad ones (since HTTPS can be
implemented incorrectly or have bugs, it is not useful).

------
Animats
Trying to secure everything weakly leads to weaker security on important data.
If you're using HTTPS for everything, it's very tempting to run everything
through a CDN such as Cloudflare, which lets them look at your most critical
data. This over-centralization creates a convenient point for central
wiretapping. If you run the important stuff, like credit card data, through
your own secured server, and serve the cat videos through the CDN unencrypted,
you'd be more secure than if you run everything through the CDN. HTTPS
Everywhere discourages this, which is why it's a form of security theater.

Then there's the EFF's own backdoor, the HTTPS Everywhere plug-in. Compromise
the EFF's "rules" servers, and you can redirect user traffic anywhere. Their
"rules" are regular expressions which can rewrite domain names. Here's an
HTTPS Everywhere rule, from their examples:

    
    
        <rule from="^http://([\w-]+\.)?dezeen\.com/"
            to="https://$1dezeen.com/" />
    

That's a third party using a regular expression to rewrite a second-level
domain. This rule always rewrites it to the same second-level domain. But do
all of the thousands of rules in the EFF's database? Here's a dangerous-
looking one that doesn't:[1]

    
    
        <rule from="^http://(?:g-images\.|(?:ec[5x]|g-ecx)\.images-)amazon\.com/"    
        to="https://d1ge0kk1l5kms0.cloudfront.net/"/>
     

That redirects some Amazon subdomains to the domain
"d1ge0kk1l5kms0.cloudfront.net". Seems legit. The EFF wouldn't let someone
redirect Amazon traffic to a hostile site hosted on Cloudfront, would they? If
someone set up an instance on Cloudfront which faked the Amazon site, and got
a rule like that into the EFF's database, they'd have a working MITM attack.
That site is "secured" by a "*.cloudfront.net" wildcard SSL cert, so all we
know is that it's hosted on Cloudfront. Does the EFF have some way to check
that "d1ge0kk1l5kms0.cloudfront.net" string? Nothing in their documentation
indicates they do.
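
To make the rewrite concrete, here's roughly what that rule does when applied
to a URL, expressed as an ordinary Python regex substitution (the example
image path is made up):

    import re

    # The quoted HTTPS Everywhere rule, applied as a plain regex rewrite.
    pattern = r"^http://(?:g-images\.|(?:ec[5x]|g-ecx)\.images-)amazon\.com/"
    target = "https://d1ge0kk1l5kms0.cloudfront.net/"

    url = "http://g-images.amazon.com/images/G/01/example.jpg"
    print(re.sub(pattern, target, url))
    # -> https://d1ge0kk1l5kms0.cloudfront.net/images/G/01/example.jpg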

Welcome to "EFF Backdoors Everywhere".

[1] [https://www.eff.org/https-
everywhere/atlas/domains/amazonaws...](https://www.eff.org/https-
everywhere/atlas/domains/amazonaws.com.html)

~~~
serve_yay
Ahh, you're nuts. If it's HTTPS, it must be secure -- that's what the "S"
stands for!

------
ajani
Cert racket and other attendant problems aside, there is little in the
argument itself against banning HTTP. It is the same old argument: if we
change things, things will break.

Yes, this is the nature of change. Not all change is good. But no good will
ever be discovered without attempting change.

------
teekert
It comes up every time, but let's at the very least wait and see whether this
is a viable way to easily implement HTTPS:
[https://letsencrypt.org/](https://letsencrypt.org/)

I'd love to see such a thing built straight into the Nginx/Apache packages of
distros to really make it straightforward.

Personally I have a mail.mydomain.nl, a mydomain.nl and an ownCloud instance
at cloud.mydomain.nl. It is such a pain to update every year: it requires at
least 2 (3 if you do it well) long sessions at StartSSL. If you by any chance
don't have postmaster@mydomain.nl set up, you get to do that first too. This
problem really, really needs solving.

------
crypt1d
Simply banning any kind of legacy protocol is not exactly in good spirit.
People should have freedom of choice when it comes to running THEIR OWN
infrastructure.

~~~
spacemanmatt
I don't think anyone was trying to tell you how to build your core application
network or your home LAN. SSL Everywhere is about critical connections subject
to interception.

~~~
crypt1d
I was referring to public networks as well. I should be able to do HTTP GET to
my server if I choose to do so. In the same way as I can open a socket to my
server and write plain text to it.

~~~
spacemanmatt
I think you should be able to, in the sense that it should not be legally or
technologically prohibited, nor prohibitive. I do think there is a line beyond
which a service should be obligated to encrypt everything, and that line is
somewhere around carrying others' messages, certainly around getting common
carrier status for the same.

Edit: I must stress, social - not legal - consequences should apply. I'm in no
hurry to invite government scrutiny of this line.

------
larrysalibra
Poster could set up a reverse proxy to support apps that couldn't be updated
to HTTPS in less time than it took to write his github issue post:

[https://github.com/WhiteHouse/https/issues/107#issuecomment-...](https://github.com/WhiteHouse/https/issues/107#issuecomment-94425001)
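
For a sense of scale, a bare-bones TLS-terminating reverse proxy really is
only a screenful of code - a sketch in plain Python, where the hostnames,
ports and cert paths are placeholders (a real deployment would more likely use
nginx or Apache for this):

    # Minimal sketch: clients speak HTTPS to this process, which forwards
    # plain HTTP GETs to a legacy backend that can't be updated itself.
    import ssl
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BACKEND = "http://127.0.0.1:8080"    # legacy HTTP-only app (placeholder)

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Forward the request to the plain-HTTP backend and relay its body.
            with urllib.request.urlopen(BACKEND + self.path) as upstream:
                body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        server = HTTPServer(("0.0.0.0", 443), ProxyHandler)   # needs root for port 443
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("cert.pem", "key.pem")   # any cert, e.g. from Let's Encrypt
        server.socket = ctx.wrap_socket(server.socket, server_side=True)
        server.serve_forever()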

~~~
danielsamuels
Are you trying to garner sympathy from the Hacker News crowd? Because you're
not going to get it.

~~~
larrysalibra
Not at all. Apologies for leaving that impression!

I want more people to call bullshit when people in government use lazy
arguments as an excuse to compromise the privacy of citizens.

"But if there's encryption, my job is more work" arguments from the NSA, CIA,
FBI, military, etc., _exactly like this one_, are a huge threat to civil
liberties & freedom.

------
mentat
The number of crypto-negative posts here makes me feel like HN SSL posts are
being gamed. It's the same sorts of comments from multiple "people" across
different threads. Anyone want to do the NLP to verify?

