Ask HN: What feature would you want the web to “force” next, after HTTPS?
114 points by chiefofgxbxl on July 12, 2017 | 269 comments
We've seen the push for HTTPS in recent years accelerate and become more and more aggressive (in a good way).

Browsers have arguably led this drive by notifying users that the pages they are viewing are "NOT SECURE", through padlock icons in the URL bar or even notifications under text boxes (e.g. Firefox) [0]. Chrome, too, is driving this trend [1]. And with users fearful of sending data over a "non-secure connection", they'll be vocal enough to push website owners to fix the issue.

---

So, if you could decide: what feature or measure would you want to see adopted as quickly as the push to make all sites use HTTPS?

[EDIT: kudos if you describe how your new standard could be "forced", e.g. through a URL-bar icon, notifying users somehow, etc. How would you convince other developers and maintainers of large code-bases, websites, browser vendors, etc. that they should throw their support behind your initiative?]

Think ambitiously too -- imagine your proposed feature would have the same backing and urgency as we have with HTTPS, with browsers (for better or for worse) authoritatively "dictating" the new way of doing things.

---

[0] http://cdn.ghacks.net/wp-content/uploads/2017/03/firefox-52.0-warning-insecure-login.png

[1] https://www.troyhunt.com/life-is-about-to-get-harder-for-websites-without-https/




First party isolation. Social media buttons and other trackers should not get a global identity for free.

Explicit opt-in to store persistent state at all. An exception should be a cryptographic identity that is only revealed when you click a login button.

No sound without opt-in.

No big data transfers without opt-in. If a site wants to shove 10MB of crap in their article, then they should have to show a page asking permission to use data. And search engines should refuse to index anything behind a bloatwall.


I like this idea, but because websites have the content, they can simply throw up a button that requests showing your identity in order to view the content, and most people would blindly click it, leading to the same situation we have now. The HTTPS push is important and works because the search engines can leverage their importance and the browsers can (effectively) scare people without any user input.


I should clarify what I mean: if you have a login page (form element of type login, perhaps), you get a real login button with trusted chrome (i.e. you can't restyle it to look like a kitten). If you push it, the website gets your TLS Channel ID or similar. This isn't a global identification -- it just lets the site match you up to the last time you went there. But the browser could give you an alternative that gives you a fresh transient identity.


It would just be another case like the EU cookies thing. Every website would have the button and everyone would click it immediately to get rid of it. It would just be an annoyance.


If done well, the chrome would be more clever than that. There should be "log in as [username]" and "stay anonymous". Unless websites want to start validating email addresses to let you read their content, they'll have to accept "stay anonymous" because it would be indistinguishable on the server's end from getting a brand-new user.

So you'd have an idiotic banner pissing off your users and the considerable majority would click "stay anonymous", gaining the site operator nothing.


"Not Secure" because http-minus-s is easy. With all the ML expertise Google has, Chrome should be able to spit out a hundred reasons why you might not want to visit a particular website. Even without ML, your recommendations should be straightforward. The only problem though, is how to ensure Chrome doesn't ignore these things when they're on google-sponsored pages.

I'm getting closer to dropping Google, but not quite there yet.


Registration forms should be standardized. I want to have my "real" details and my "fake" details ready to be entered into websites that want yet another registration. Why does every single website implement its own registration form with exactly the same details?! Why does every single website re-implement the registration page slightly differently?! Ideally, I'd enter the registration page, the browser would list the things they want to know, I'd pick either my real details or another set of fake details (for spammy websites or others I don't really care about), and with one click registration would be complete.


> Why does every single web site re-implement the registration page slightly differently?!

Because registration pages are the top of most sites' conversion funnels, and as such they produce metrics that reflect on not just IT teams but also UX/design and marketing. The amount of tinkering and customization in registration pages is a political/organizational problem, not a UI/tech problem.

And besides that, no business has an incentive to make it easier for you to free-ride their service with fake credentials. The better question might be why so many companies continue to put cost-inducing barriers like signup in their flow before they fully demonstrate the value-creating potential of their product. I liked Facebook's "Anonymous Login" but apparently very few developers were actually interested.

https://www.recode.net/2015/3/6/11559878/whatever-happened-t...


> The better question might be why so many companies continue to put cost-inducing barriers like signup in their flow before they fully demonstrate the value-creating potential of their product.

Because it works. If it didn't work, companies wouldn't be doing it.


> And besides that, no business has an incentive to make it easier for you to free-ride their service with fake credentials

Ah, but they do have an incentive to make it easier for you to sign up with legit registration credentials.

If the OS were able to store the common fields that describe a person/account, and the browser could prompt the user to inject those, HTML5 would only need a few more supported valid input type attributes, and it would only populate them after the user has opted in.


I was thinking of writing a spec and implementation on a similar thing a few days ago:

Each site would publish a standardized file in a standardized location, and/or add a META tag to the file in each page. The file itself would simply contain the URL to the login page and the parameters that should be passed to it, and the way to pass those parameters.

Then, your browser/password manager could simply perform a request in the background, and log you in completely automatically. This solves all the hassle of finding the login form, locating the fields, auto-filling, typing some characters if the change wasn't detected, handling single-page apps, etc.

This could later be extended to signing up automatically. It would make having strong, unique passwords for each site the convenient alternative.
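
For illustration, here is a minimal sketch of the password-manager side of such a flow, assuming a hypothetical well-known file (say /.well-known/login-spec.json) that names the login URL, method, and field names; none of these names are part of any existing standard:

  // Hypothetical sketch: a password manager logging in automatically using a
  // site-published spec file. All file and field names here are made up.
  async function autoLogin(origin, username, password) {
    const spec = await (await fetch(`${origin}/.well-known/login-spec.json`)).json();
    // Example spec: { "url": "/login", "method": "POST",
    //                 "fields": { "user": "email", "pass": "password" } }
    const body = new URLSearchParams({
      [spec.fields.user]: username,
      [spec.fields.pass]: password,
    });
    const res = await fetch(new URL(spec.url, origin), {
      method: spec.method,
      body,
      credentials: "include", // let the session cookie be set as usual
    });
    return res.ok;
  }

A browser or password manager could run something like this transparently in the background whenever it already holds credentials for the site.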


Unfortunately that requires mass standardization (adopting the spec), which is probably unlikely given that no one standardizes on the registration form to begin with.


Standardizing on the form would be much harder. For Django sites, for example, this would be a one-line change (adding the library to Django), and similarly for Node or Rails or most other frameworks.

The advantage, where password managers would just log you in transparently on any site, would hopefully be big and visible enough for more and more sites to start implementing this.


That would be great!

The more likely implementation (as in, one could create it today) would be an extension that could do the GET/POST login, but wouldn't know the required parameters on its own. People could then contribute the formats expected by different domains.

Much less elegant, and only usable by the more technically inclined, but implementable today without explicit support from websites.


This sounds like a great opportunity for a startup to create and capitalize on


it's kinda been tried by proxy through authentication standards like OpenID.

i wonder if the solution isn't to back off a little though. that is, standardize some process instead of (de?)centralizing authentication. something that gives developers more control, and users more familiarity, than OpenID.


open ID moves the user away from your service, which i think is not what some sites want because they want more user info and control. that's why openid failed.


agreed. tried to articulate that in my post! thanks :)


chrome already has variants of my information cached for forms


- A protocol for sites to get my public PGP key for server side use

- The discontinuation of using SSL certificates for verification of website identities and a move to true fingerprinting ala SSH.

- Deprecation of email or rather its insecurity.

- Logins on websites with a public / private keypair ala SSH.

- A resurgence in sites that let me pick my own anonymous username instead of Facebook, Google or Twitter logins and email addresses as UIDs.

- Blocking of any and all forms of popups, including javascript popups, overlays, cookie banners, browser notifications.

The web is rapidly becoming a place I don't want to visit anymore.


Mozilla's "Persona", now deprecated, was an attempt to solve the login problem. I like the general idea that I strongly authenticate to my browser, which can then "vouch" for me to various sites using cryptographic tokens that are otherwise useless (so no cracking/stealing passwords, etc). The devil of course would be in the details.


The devil is that the corporations able to influence this change directly benefit from current identity systems. Google for example has huge infrastructure in place linking its products together and includes tracking and other features that would only work with the current system.


Sorry that didn't work out :(


>The discontinuation of using SSL certificates for verification of website identities and a move to true fingerprinting ala SSH.

You do realize that's trust-on-first-connect, aka self-signed certificates, right? Especially with LE, that's worse in every way than the CA model.

>Logins on websites with a public / private keypair ala SSH.

Pretty much client certificates minus PKI.


SSH is only trust-on-first-connect if you choose to behave that way. Whenever you connect to a new machine, it prompts you "I don't recognize this machine, and its fingerprint is XXXX. Do you trust that?"

I don't see how the option to trust websites is worse than having a bunch of certificate authorities choose who I'll trust for me.


How would you possibly know which fingerprints to trust? Also, do you honestly think there is any hope for the average user to understand what that means and know what to trust and what not to trust? IMO, that is a massive step backwards in usability, which directly impacts the overall security of a given solution. If something is not usable, then people will figure out a way around it and security then goes out the window.

For example, if users were responsible for knowing which fingerprints to trust for a given website, they would most likely just click "ok, trust it" for everything. Then your overall security goes way down, because now people are conditioned to click "accept" on everything, regardless of the impact.


Signed archives of trusted or untrusted fingerprints, distributed by various and independent authorities is one option.

Trust is, by definition, an extension of solidity or support. CA is a trust model, which has proved both brittle and unworkable.

http://www.etymonline.com/index.php?term=trust&allowed_in_fr...

Google and other services presently provide extended trust and validity assessments for websites: pinned certificates, malware scans, and the like. A limited number of such reliable schemes would scale reasonably well, and should prove useful.

I'm not saying "perfect", I'm saying "useful".



Interesting, that's a possibility.

I was thinking of something more akin to, say, Google's pinned certificates (something which practices such as Let's Encrypt actually make harder, AFAICT):

https://security.stackexchange.com/questions/29988/what-is-c...

Or something that might be a parallel of email reputation services -- SenderBase / IronPort (now Cisco) rating email servers by their spam loads, etc.

Rather than negative reputation, a positive reputation (vouch rather than warn) might be viable. (Negative ratings systems, digital or otherwise, tend to inspire various legal assaults.)


As an individual user, you have no better information than your knowledge of the website and its claims on security, maybe its brand, and what your social circles claim about its security practices. A certificate authority at least has the potential to aggregate data and from there assess the security of a website. The original ideas behind certificate authorities are not dissimilar from insurance.


SSH-style login is still something I'd really fucking love. Much more secure way of logging in, an easy protocol for storing multiple passwords, and easy authorization/deauthorization of passwords/keys.


Client TLS certs already exist, and they are a massive pain for the average user.


I remember StartCom used client TLS certs, the only place where I ever saw them in use, and the browser workflow was certainly clunky. I'd hate to see a non-techie have to deal with it.


They worked quite smoothly for me. I actually liked StartCom's login process.


So I have heard of client TLS authentication, but does it exist for the web (I mean just in principle, not whether it is really used)? That is, do browsers support it?

The thing is, if it is a massive pain for average users, then that is an own-goal. Look at SSH: there is no certification chain there. All you do is generate a keypair and then (here's the awkward bit) magic the pubkey over to the server.

It would be easy enough for a website to make pubkey installation a seamless part of the sign-on workflow.


It is a massive pain for average users, but it's well supported by browsers because big corporations have IT departments that provision (and admin) computers for their users, average or otherwise, and the provisioning process includes setting up client certs.

So while browsers do support client certs, there has been very little effort to make them easy to install due to a chicken-egg situation. They're hard to use, so no one uses them, but no one (random sites on the Internet, that is) uses them for client auth because they're hard to use.


Yes, browsers have supported client certificates for decades and continue to today.


Token binding provides something like that. With token binding you have a private/public key pair for each site that supports it. An identifier is created from the public key and signed with the private key, to prove your identity.

https://www.sjoerdlangkemper.nl/2017/07/05/prevent-session-h...


> Deprecation of email or rather its insecurity.

We would need something to replace it with, there is really nothing right now.


While I agree, there is definitely movement, and recent movement at that [0][1]. I just wish I had the knowledge to contribute. I am great at figuring out how things work, breaking them and understanding them. Building from scratch is a bit above my head, albeit I have a few ideas....

[0] https://news.ycombinator.com/item?id=14708783 [1] https://magmadaemon.org/


We already have standards for encrypting/authenticating email transfer between mail servers and between the server and user agent.

But encryption in transit doesn't solve the whole problem: as far as I can tell the server operator can still read the user's mail. What we need is end-to-end encryption; the lack of support in MUAs (iPhone Mail app, Thunderbird, etc.) is the problem here; it has really nothing to do with the server.


> lack of support in MUAs (iphone mail app, thunderbird etc.)

Thunderbird supports S/MIME out of the box and PGP through an addon, so I'm not sure what other kinds of end-to-end encryption you'd want to see. Not sure about the iPhone mail app.


> The discontinuation of using SSL certificates for verification of website identities and a move to true fingerprinting ala SSH.

Would HPKP (https://developer.mozilla.org/en-US/docs/Web/HTTP/Public_Key...) cover that for you?


I love the encryption ideas.

Do you know if PGP public/private key pairs can be used for ephemeral keys? I'd hate to rely on the same secret to store everything throughout time.


I haven't had my morning coffee, but I believe that lack of forward secrecy is exactly the main drawback of PGP as a protocol, though it's been a while since I looked at it.


PGP allows creation of subkeys. I'm not entirely clear that that's useful here, but it might be.


My concept (high level) for dealing with things like server and user identities handled by key pairs:

Take keybase (or its conceptual basis) and distribute it. Each domain can host its own key server. You can post proofs on other domains to link domain identities or logins. So now my phone has a key that's attached to the identity jtsummers at legoflambda.org. You have a service. I register the identity jtsummers at legoflambda.org with your service. I can log in using the key from my phone. My laptop has its own key. My yubikey stores a third key for access while I'm traveling without my laptop. Each of those I've connected via some proof mechanism to my key server so I can log into your service using any one of those keys.

Google and others can also host a key server so my user at gmail.com identity can also be used or my facebook.com identity, again with multiple keys associated with them from my various devices. And possession of the private key can also be used to access the same services (perhaps paired with something like a TOTP or other shared secret if you want an extra layer of authentication).

Now you want to send a message to me, you can have a service similar to what keybase offers. You send a message to any one of my public identities or keys, it gets made available to all of them. You know me here on HN, you send it to that identity. You know me by some other forum handle or by my gmail account you use those. And since my public keys are all available, you can send an encrypted message that will be available to any one of my devices (which I can re-encrypt as I add and remove devices).

This also handles a lot of the problem with spam. Spammers now have to take the time to individually encrypt messages for every user. They have to publicly post identities and keys so that users can authenticate them. And users can block spammers by blocking the keys associated with them and block an entire identity by blocking all the keys associated with the spamming key. You want to ensure that that email from your bank is legitimate? Your bank should have a publicly visible key server that all communications from them make use of. Whether they send the message in the clear with only a signature or if they send an encrypted message to you (preferred for privacy and security anyways).

This also helps with applications like signal/whatsapp which are presently tied to a single client instance. Now, I can associate my whatsapp key with multiple other keys (each on different devices, presumably). So you want to send me a whatsapp message, it can now be sent to all my devices. My phone number can still be an identity used by those services, but it's no longer the only one.

This was a particularly annoying case for me: I travel internationally with a separate phone from my US phone (since the US one is locked), so I had to enable WhatsApp on my travel phone using my primary cell number for ease of friends communicating with me (so I don't have to get my secondary number to all of them and remind them to switch back once I'm done). If I could have connected my secondary device to my primary one, then all messages sent to my main number would have been received by both, and messages sent from either would appear to the recipient as belonging to the same identity.

====

This is not a well structured presentation, sorry. It's more the random thoughts that have been hopping around in my head for the last couple months between other more pressing concerns.


I'd vote for DNS-over-HTTPS or similar tech. Encrypting domain name resolution should help prevent a gateway or proxy (e.g. Comcast) from knowing or blocking the sites you visit.
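
As a rough illustration of the mechanics, a lookup against Cloudflare's public DoH endpoint (JSON variant) looks something like the sketch below; endpoint details vary by resolver and may change:

  // Sketch: resolve a name over HTTPS instead of plaintext UDP port 53.
  // Uses Cloudflare's public DoH JSON endpoint; other resolvers differ.
  async function resolveOverHttps(name, type = "A") {
    const url = `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=${type}`;
    const res = await fetch(url, { headers: { accept: "application/dns-json" } });
    const data = await res.json();
    // Each answer record carries the resolved value in its `data` field.
    return (data.Answer || []).map(a => a.data);
  }

  resolveOverHttps("example.com").then(console.log); // e.g. ["93.184.216.34"]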


I always found DNS to be one of the most compelling uses of the blockchain. Namecoin actually did a great job at this.

If put into practice, ISPs would run name servers that effectively mirror the whole DNS system via the blockchain. And if you really wanted ultimate privacy, you could run it locally on your machine, and there would be no way for anyone to know what domains you've looked up.


SNI puts the DNS names you're connecting to in plaintext at the start of every TLS connection. Running your DNS over an encrypted channel won't stop someone from knowing or blocking the sites you connect to.


Luckily, from 2018 on, SNI will be mostly unnecessary, as LE will support Wildcard certificates, with DNS verification, for many domains in a single certificate.


SNI will still be necessary for when you have multiple servers under one IP (until IPv4 is deprecated, this is necessary), for example on a shared host (which might even have shared IPs under IPv6).

IIRC there are some ways SNI will be encrypted with TLS 1.3 so it's not a problem to begin with.


DNS is a non-trivial amount of traffic to move from a lightweight UDP protocol to something like HTTPS. Furthermore, that would dramatically increase page load times (for reasonably sized pages), since HTTPS requires more round trips.


No need to go full HTTPS with it: do a Diffie-Hellman key exchange once per session, then exchange data (which could be over UDP).


> dramatically increase page load times

This is true, but with a reasonable cache design, it shouldn't be too bad.


Unfortunately a single page load often contains files from many different domains. Sometimes 10+. So caching may be of limited use.

Although this may be a nice driving factor to get eCommerce sites to stop putting 50 tracking pixels on every page.


That's true. DNS lookups seem like something you can do in parallel though, so I still don't think it's that big of a hit.


Presumably you would keep-alive your DNS over HTTPS connection. That would keep the packet turns the same.


DNS Crypt (https://dnscrypt.org/) at least partially addresses some concerns with the current DNS specification providing for the authentication of DNS entries (spoofing prevention). I don't think it addresses the privacy concerns of ISP or 3rd party sniffing though.


Doesn't https://dnscrypt.org do that?


Nope, it simply gives you an assurance that the DNS entry you receive hasn't been spoofed and is coming from the DNS server that you expect it to originate from. See their homepage explanation.


There's DNSCurve by djb which does pretty much that.

https://dnscurve.org/


I second this. DNS is still a privacy killer


But then how would the Food Standards Agency get access to my browsing history?

https://www.reddit.com/r/unitedkingdom/comments/5ei5dz/list_...


Ajax without JavaScript: the ability to send a response from the server that updates only part of the DOM. Basically, React with a virtual DOM on the server, pushing diffs to the user with HTTP/2 awesomeness.

There would be no need for JS on most sites, it could be adapted to current frameworks, and with preload/prefetch it might be very fast.

* You could prefetch a progress bar / loading state, for example, and redirect to a partial URL of the real content.


You may be interested in intercooler.js (http://intercoolerjs.org/). It allows you to perform a host of AJAX + DOM manipulation flows using only HTML attributes.

intercooler also supports something similar to the server pushed DOM diff flow you envision:

Server Sent Events BETA (http://intercoolerjs.org/docs.html#sse)

"Server Sent Events are an HTML5 technology allowing for a server to push content to an HTML client. They are simpler than WebSockets but are unidirectional, allowing a server to send push content to a browser only." http://intercoolerjs.org/docs.html#sse


That's awesome; I was only aware of Turbolinks. Sadly, I will never use it for real.

I, like many others, spend some time ensuring server-side rendering works and the website can function without JS. If intercooler were part of the browser, and not separate code, it would be possible to adapt any SSR-ready app to work with this.


Curiously, half of what you're describing has already existed on the web for more than 20 years, in the form of server-side push using the obscure `multipart/mixed` HTTP content type [1]. This technology was added to Netscape in 1995 and I believe it's still supported by modern browsers, but it seems to have hardly gotten much traction beyond those early webcam sites that pushed a new image every N seconds.

I am not sure if interaction is possible as part of the mix using some tricks, though. It seems like we just have turbolinks for that.

[1]: https://docstore.mik.ua/orelly/web2/xhtml/ch13_03.htm
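
For the curious, the server side of that push technique is easy to reproduce; a small Node.js sketch serving multipart/x-mixed-replace parts on a timer (as the replies below note, modern browser support for HTML parts is unreliable, so treat this as a historical curiosity):

  // Node.js sketch of 1995-style server push: each part replaces the previous
  // one in the client. Modern browser support for HTML parts is unreliable.
  const http = require("http");

  http.createServer((req, res) => {
    const boundary = "push-boundary";
    res.writeHead(200, {
      "Content-Type": `multipart/x-mixed-replace; boundary=${boundary}`,
    });
    let n = 0;
    const timer = setInterval(() => {
      res.write(`--${boundary}\r\n`);
      res.write("Content-Type: text/html\r\n\r\n");
      res.write(`<p>Update number ${++n}</p>\r\n`);
    }, 1000);
    req.on("close", () => clearInterval(timer));
  }).listen(8080);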


Wow! They got x-mixed-replace. That's totally it. From quick googling, it looks like it hasn't worked for HTML since Chrome 36.

Edit: It works only with SVG and only in Firefox, or am I doing something wrong?


Guess I was wrong about the support still being there. I think because nobody used it they must have taken out a lot of that functionality.


That sounds awesome, but how could you do it without either:

  - Client sends the entirety of what it has to the server so the server can do the diff
  - Server sends the entirety of the new page to the client so it can do the diff
  - Server is constantly keeping track of the last thing that the user looked at so it can send the diff for the next page

?


Assuming: 1. Everything the user sees comes from the server. 2. Templates/views are a function of state.

The server can generate UI with state changes baked into URL parameters, generate HTML from the received state by calling the render function of the clicked component, and send just that component's HTML.

If we had an HTTP header for that, we could leverage HTTP/2 push and even replace components whose state changed as a side effect.

React-Redux, Elm, and Vuex do this already. Just bake actions into URLs and keep the store on the server.
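
A rough sketch of what the browser would be doing under the hood, written here as client-side JavaScript only for illustration; the endpoint and selector are invented:

  // Hypothetical: an action URL asks the server (which holds the store) to
  // re-render just one component and return its HTML, which we swap in.
  async function dispatch(actionUrl, targetSelector) {
    const res = await fetch(actionUrl, { headers: { accept: "text/html" } });
    const fragment = await res.text(); // server-rendered component HTML
    document.querySelector(targetSelector).outerHTML = fragment;
  }

  // e.g. a click on "Add to cart" might map to:
  dispatch("/cart/add?id=42", "#cart");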


<iframe> already does something like that, although admittedly in a clunky fashion


A JavaScript standard library that every browser has "installed" and updates automatically.


Or just stable JS APIs that you can specify on your scripts.

<script vers="3.1"> </script>

<script vers="4.2"> </script>


Apparently this somewhat works. http://jsfiddle.net/Ac6CT/


Or both. I would love this so much..


Yeah. It's a real shame that the language we need batteries-included in the most (because of code size limitations) is one of the most deficient.


I feel like the mess helps discourage its use. I shouldn't need a Turing machine in order to read a text document.


While I agree with you, that ship has sailed I think. I'd rather see it take less bandwidth, less memory and less CPU time at least!


Sounds a little bit like making https://cdn.polyfill.io/v2/docs/features/ persistent between websites ;-)


I'm ignorant of a lot, but: segregation of cookies by browser tab. If I log into Xsocialmedia in Tab 1, and go to a news site in Tab 2 that uses the Xsocialmedia plugin, it doesn't know that Tab 1 logged in, or that it came from the same browser.

Basically, I want my tabs to be isolated and treated as completely separate, isolated browsing histories, caches, and cookies. ...This is my Gmail tab. All that tab ever sees is Gmail. This is my HN tab. All it ever sees is HN.

Like I said, this isn't my field, but..


I think you're talking about first party isolation, which I second. In the same way that the web should be secure by default, I think we should have privacy by default.

> First party isolation means that all identifier sources and browser state are scoped (isolated) using the URL bar domain.

From: https://www.torproject.org/projects/torbrowser/design/


You can kind of do this with incognito mode.

It would be a bit frustrating because once you close the tabs everything is lost. However, I am not sure how different browsers' incognito modes handle multiple tabs.

I do like your idea though.


Firefox has an experimental (opt-in) feature called "Containers" for this: https://testpilot.firefox.com/experiments/containers

You can create a container for each compartmentalized context (work, social media, whatever) and then create tabs assigned to different containers. They're visually distinguished as to which tab is "in" which container, but otherwise you can manipulate and mingle them freely with the rest of your tabs. Each container's persistent state is isolated from both the other containers and your default browsing context.


oh wow that is awesome. I've not used firefox in years but something like that could be reason enough to give it another try.


As a UI dev working in security, I can tell you that other security devs largely don't know what Incognito mode is or how the cookies are sandboxed inside/outside of Incognito mode.

Some think cookies are segregated in each Incognito tab/window. Some don't know that the Incognito cookiejar is reused if you open another Incognito tab/window without first closing all other Incognito windows+tabs.

This applies to LocalStorage as well.


"grandmother" usable email encryption for the masses.


Hard deprecation of the long tail of Javascript browser capabilities and incompatibilities.

So much code and so many libraries are littered with "if (old version browser) do x, else if IE, do y, else, ..."


As far as I recall, IE, since reaching adulthood some few years ago, has identified itself as Mozilla when asked. It's a very pragmatic solution to the problem of a lot of websites having code to check for IE and apply all kinds of hacks to make stuff work anyway. It's also a tacit concession that .. yeah .. our track record wasn't too good.

I'm old enough to remember the days when IE was a dominant force and they could pretty much thrash around and invent their own quirks and standards, forcing everyone else to play to their tune. So I find great joy in seeing IE disguising itself as Mozilla in order to not be treated as a rotten egg.


So IE should say "sorry I suck, switch to another browser?" Doesn't make any sense...

I mean sites will already tell users they don't support older browsers.


May not be exactly what you mean, but if you open IE on Windows 10, it encourages you to use Edge instead. And I think generally Microsoft has been pushing to move people off of old IE, e.g. https://www.microsoft.com/en-us/windowsforbusiness/end-of-ie...


What would really be nice is to have a spec that either you follow or you don't. If you follow it, you follow all of it and it just works. If you don't Javascript is completely broken. It'd be a good incentive to get browser vendors on board.


This idea sounds great but it misses the reality on the ground - web standardization is broken.

Compare C++ standardization. Most people are using two-year-old compilers. While the newest compilers do implement draft features, these features are almost all standardized in the next ~3 years. The standards are clearly versioned, and you can put newer compilers into modes to check against older standards if your code needs to compile on multiple compilers. Features are never removed.

By contrast, on the web, everybody is running brand spanking new browsers. Experimental technologies are sometimes implemented before full draft standards are even written. Many or most drafts that are written never become standards. If they do become standards, it's often a decade or more away. Alternatively the living standards of what browsers actually support aren't versioned. It's not possible to put a browser into JavaScript 99 mode - the easiest way to check your code for conformance is to automate your test cases across a variety of browsers. And, features are sometimes removed because they are deemed security or privacy concerns, so conforming to an old standard is not sufficient to ensure proper functionality under modern browsers.


I like it, not dissimilar from the shaming model that has contributed to HTTPS accelerating adoption.


I want a way to force sites to become static after they are rendered. Just frozen, as though they were on paper. I am tired of scrolling making menu bars move around or triggering popovers. Just give me a way to turn off javascript and any dynamic CSS junk after X amount of time. I looked into writing this as a firefox browser extension, but extensions now use javascript so we're all screwed.


I've been using this Kill Sticky bookmarklet, and it's turned out to be one of life's simple pleasures. Gets rid of menu bars and popovers. It's the most-used thing on my bookmarks toolbar by a factor of about eleventy babillion. It's not automatic like an extension would be, but you'll find you get 80% of what you want with a tiny bookmarklet.

https://alisdair.mcdiarmid.org/kill-sticky-headers/
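
The linked bookmarklet boils down to a few lines; a sketch in the same spirit (not the author's exact code) that strips anything positioned fixed or sticky:

  // Remove anything positioned fixed or sticky (sticky headers, overlays, etc.).
  // A rough equivalent of the "Kill Sticky" idea, not the original bookmarklet.
  for (const el of document.querySelectorAll("*")) {
    const pos = getComputedStyle(el).position;
    if (pos === "fixed" || pos === "sticky") el.remove();
  }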


It's not perfect, but try Reader View on Safari.

I use it a lot to make various articles and blog posts more readable.

There's also quite a few reader view plugins for Chrome, Firefox, etc


I tried something like this with a user script to attack those fixed banners, but I found that it caused too much breakage. (For instance, on YouTube you would not be able to use the menus.)

I settled on hiding the fixed headers and other objects upon scroll down. (They can be brought back by scrolling up.) https://iwalton.com/wiki/#NoFixed

It would be nice if there was browser support, maybe something like tricking the page into thinking the viewport is as long as the page itself.


You can effectively disable JavaScript with WebExtensions by injecting a script that overwrites all the properties of the document and window with undefined. Instantly crashes pretty much any script. If you want to do it sometime after page load, you could replace all those properties with a Proxy object instead, and have that object start throwing errors for any access after X time has passed.
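
A best-effort sketch of that clobbering approach, assuming the script is injected into the page's own context; non-configurable properties survive, so it won't stop absolutely everything:

  // Best-effort "kill switch": after a delay, clobber window globals so most
  // page scripts throw on their next access. Run as early as possible.
  function freezePage(delayMs) {
    const define = Object.defineProperty;             // keep references before
    const names = Object.getOwnPropertyNames(window); // we clobber Object itself
    setTimeout(() => {
      for (const key of names) {
        try {
          // Non-configurable properties will throw; just skip them.
          define(window, key, { value: undefined });
        } catch (_) { /* leave it alone */ }
      }
    }, delayMs);
  }

  freezePage(5000); // let the page finish rendering, then go static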


It's very interesting that most people here mentioned changes only to the top layers, while one of the most urgent problems is in the BGP protocol that helps route traffic between ISPs. Many times in recent years, governments and ISPs have used it to steal the traffic of entire countries, or to block websites.


I think this is only really worth the headache for security issues. That said:

- HSTS

- DNSSEC

- IPv6

in that order. I think for a long time, governments had no interest in pushing security and encryption because that would prevent them from mass data collection. I think minds are starting to change around that: poor security is much more likely to be exploited against a government rather than used in its favor (plus all the real criminals now have much better opsec these days so mass surveillance is much less effective).


I feel like I need more training for IPv6. For a long time, I've thought that it was a simple thing to enable and allow (and often our servers are dual stack). It turns out, though, that unless you really know what you're doing on the server side (i.e. overriding the horrible defaults for IPv6 resource allocation), you can end up with an inexplicably slow server that spits out bizarre errors.

Anyone here have any recommendations on a book, course, etc. that covers IPv6 readiness?


More on the network side myself, and I thought I had v6 down, or at least the basics.

What kind of resource allocation problems did you have on dual-stacked hosts? Windows/Linux/Other??


For example, in Debian 7, routes.max_size is dynamically allocated for ipv4 and hard-coded to 4096 for ipv6. I eventually figured it out after searching for a while, but that's not something I'd expect to have to do (having been used to ipv4 working decently out of the box).


Add SRV lookups to the HTTP standard.

There's a tremendous amount of complexity and cost attached to the fact that browsers look up the IP address of the hostname and then connect to port 80.

First, it's true that you can specify another port in the URL, but nobody does that because it's ugly and hard to remember. If you want to be able to send people to your website, you need to be able to tell people what the url is - "Just go to example.com". The minute you start saying "example.com colon, eight zero eight zero" you're screwed. With a SRV record in DNS, example.com could map to an arbitrary IP address and port, which would give us much more flexibility in deploying web sites.

If you want a bare http://example.com to work, you need to create an apex record for the domain. That can't be a CNAME that maps to another hostname, it has to be an A record that maps to an IP address. This means you can't put multiple websites on a single server with a single IP address, you have to have an IP address for each site. IPv4 addresses are already scarce, this just makes it worse.

Also, port 80 is a privileged port in unix (which does the lion's share of web hosting). That means you have to run web servers as root. That, in turn, defeats the unix security model, and requires hosting providers to either lock down their servers and give limited access to users (cPanel anyone?) or give customers root access to virtualized operating systems, which imposes a tremendous amount of overhead.

Virtual operating systems also impose a bunch of complexity at the networking level, with a pool of IP addresses getting dynamically assigned to VMs as they come and go, DNS changes (with all the TTL issues that go along with that), switch configuration, etc.

These problems are all solvable and indeed solved, by really clever modern technology. The point is that it's all unnecessary. If browsers did SRV lookups, we could still be hosting like it's 1999, and putting all the tremendous progress we've made in the last 20 years into making it cheaper, faster, easier and more secure to build and run a web site. People that support the "open web" as opposed to "just make a Facebook page" should advocate for SRV support in HTTP.

This doesn't actually have to be "forced" on users of the web - it'd have to be forced on browser implementors, hosting providers and web site operators. If the transition was handled well, users wouldn't even notice.
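
For context, SRV lookups already work this way for other protocols (XMPP, SIP, and so on); a quick Node.js sketch of what an SRV-aware HTTP client would conceptually do before connecting -- the _http._tcp label is illustrative, since no such usage is standardized today:

  // Node.js sketch: what an SRV-aware client would do before connecting.
  // There is no standardized _http._tcp usage today; the label is illustrative.
  const dns = require("dns").promises;

  async function resolveHttpEndpoint(host) {
    try {
      const records = await dns.resolveSrv(`_http._tcp.${host}`);
      // Lower priority value means higher precedence; a full client would
      // also honor the weight field for load spreading.
      records.sort((a, b) => a.priority - b.priority);
      const { name, port } = records[0];
      return { host: name, port };
    } catch {
      // No SRV record published: fall back to the classic behavior.
      return { host, port: 80 };
    }
  }

  resolveHttpEndpoint("example.com").then(console.log);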


There's two issues with this: first, it's not necessary, and second, it won't really work.

The first: it's true that only one (privileged) process can bind port 80 on a host. But that process can simply do what most front-end webservers do now, and reverse proxy to any number of other local hosts. IP addresses can be demultiplexed through the Host header, the way they have been for decades. That makes this a systems design problem, and not something that needs to be exposed in the standards.

Second, even if you could transparently run websites on port 9999, that wouldn't change the fact that a good number of networks filter everything but ports 80 and 443. Universal network accessibility would still put ports 80/443 at a premium.


> This means you can't put multiple websites on a single server with a single IP address

Huh? This isn't true. A webserver can just look at the host header in a HTTP request and return a response for the appropriate domain.


Lots of people are pointing out that with the Host header one web server can handle multiple domains. Yes, that's true.

It'd be useful in cases where you have one organization that hosts multiple domains. Then you just configure your server to handle this domain this way, and that domain that way, etc.

But it doesn't help the cases where you want to host multiple, unrelated websites on one server. Let's say Acme Widgets has a static site that just serves files off the filesystem, but they've got a bunch of rewrite rules to handle legacy urls. Umbrella corp wants to run a node backend. To get that to work, you need to agree on a server that will handle the requests. Everybody needs to be able to configure it to their liking, which leads quickly to the cPanel scenario I mentioned above. Or, hey, we can automatically configure the shared server as a proxy and let everybody run their own servers on non-privileged ports! That works, but it introduces unnecessary overhead in terms of memory, CPU, SPOF, latency, configuration etc. It would be better to just have the browser connect directly to those unprivileged ports!

tptacek brings up the good point that lots of networks block connections on ports other than 80 and 443. That's true, but it's because of the fact that HTTP essentially has to use those ports. If the web started working on other ports, that would change. Slowly, yes. Port 80 would have a special status for a long time. But if the standards did support other ports, network administrators would have a hard time answering "Why can't I connect to acme.com?" with anything other than "oops, let me fix that". This would be a way easier transition than say, switching to IPv6.

Finally, I'll reiterate that none of these problems are insurmountable. The web exists because we've found ways to work around them. A lot of us make a living doing just that. But that doesn't mean this is the best way of doing things, or that the work-arounds have no cost.


You can run many sites on the same IP and port. TLS and HTTP both indicate the host name. (And for most installs, non port 80/443 might be nonstarter due to firewalls.)

Apex CNAMEs can be worked around in the server software - just dynamically resolve it into an IP. Cloudflare does this, for instance.


This doesn't make any sense at all. CNAMEs don't help with running multiple sites on a single IP address, it's just a convenience in the DNS. More like a symlink really, it means 'when you're looking for X try Y instead'.

If you want to run a web server on port 80 without the server having root access there's many ways to do that, the firewall can rewrite the packets so they go to a different port, you can give the web server the right to open port 80 without root privileges, you can proxy the requests etc.


nginx makes it very easy to serve multiple websites on a single IP.


It's not security related, but: Accessibility.


Yep, browsers should have screen readers built in. It's ridiculous that you have to shell out $1000+ for a JAWS license (there are alternatives, but they need work).


Anyone who needs a screen reader needs it for accessing the whole device so building a screen reader into a browser is kind of pointless. The exception being Google's ChromeVox but that's because on Chromebooks the browser effectively is the OS.

NVDA and Windows's Narrator are already probably good enough but JAWS is better. VoiceOver is built into macOS and iOS and is beyond good enough.

Browsers can do a better job of exposing semantics through accessibility APIs to assistive technology [0]. Browsers could also intelligently make up for the failings of web sites; e.g. when a site uses a div with a click event instead of a button element, present it as a button through the accessibility API based on heuristics. Browser rendering engines already do a lot to visually compensate for errors in HTML, they could do the same for some semantic errors.

[0] http://www.html5accessibility.com
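
The div-as-button repair described above can be approximated with today's ARIA attributes; a rough heuristic sketch (it only catches inline onclick handlers, since page scripts can't enumerate listeners added via addEventListener, whereas a browser engine could see them all):

  // Heuristic repair: expose clickable divs/spans as buttons to assistive tech.
  // Only finds inline onclick handlers; a browser engine could see all listeners.
  for (const el of document.querySelectorAll("div[onclick], span[onclick]")) {
    if (!el.hasAttribute("role")) el.setAttribute("role", "button");
    if (!el.hasAttribute("tabindex")) el.setAttribute("tabindex", "0");
    // Let keyboard users activate it like a real button.
    el.addEventListener("keydown", (e) => {
      if (e.key === "Enter" || e.key === " ") {
        e.preventDefault();
        el.click();
      }
    });
  }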


Yeah, I've tried using the free options to sort of test my products, but it's not a realistic test; the most useful work comes from people who use screen readers daily, but it's really hard to test if we've fixed the problem even when it's been identified so we have to go back and forth.

And, I mean, there are accommodations and assistive technology built into the standards. It's just nowhere near as widespread as it should be, in terms of usage, and it's always an afterthought (if it is thought of at all) in frameworks and HTML templates and such. And most of the time an inaccessible site or app is so inaccessible as to prevent anyone who would notice from getting far enough in to complain. So, we need good tools for knowing when our stuff is broken from an accessibility standpoint.

I think what I'm getting at is that it should be easy to see errors in accessibility, and maybe search engines should favor sites that at least make an effort.


The problem with testing with assistive technology yourself is you're not a real user who knows the tool well. Also, like browser testing, there are differences between different AT; VoiceOver is used by many visually impaired people but what it supports and how it works is different in a number of ways compared to JAWS or NVDA (e.g. VoiceOver doesn't have "form" and "browse" modes).

Search engines giving more accessible sites a "buff" as Google now does for HTTPS sites is a good idea. They already do in a small way in that having things like proper headings are good for SEO but they could go much further.

Google has their own Accessibility Developer Tools [0] add-on, they could make it a default part of Chrome Dev Tools and make it more prominent.

https://chrome.google.com/webstore/detail/accessibility-deve...


Accessibility is more than just screen readers, too; e.g. there are OS-level settings for increased contrast and differentiation without color that are just about universally ignored on the web.


I second this. Most of the other ideas here are either technologies that don't exist yet or things only nerds care about (I'm a nerd so I can say that).

This would have a meaningful impact on the lives of many in an underserved community.


Start cracking down on bloated and unnecessary JS. Loading more than 1 script? More than X KBs of total JS? More than Y secs CPU time? "This page is slowing down your PC".


That'd be hilarious and tragic. Most of the internet would be flagged slow.


And that would force devs to change them


Devs aren't usually the ones in charge. Imagine trying to explain to an exec that you need to remove the analytics code from their website to speed it up for end users; not gonna happen!


Now that adoption of HTTPS has solved all SQL injection holes, we can take steps to further modernize the Web so people can feel secure.

Require Facebook login for everything. Just don't serve the content without a Facebook login. Can use DPI at the network layer to help enforce.

Add phone-home features to CPUs to make them turn off 6 months after product introduction. Everyone ought to be buying a new computer every 6 months.

Disallow email addresses ending in anything other than @gmail.com.

Rewrite everything in a memory-safe language such as PHP. Eventually this can be enforced at the OS level.


Seriously? This is absurd


Kudos.

You had me going for a minute.


A peer-to-peer hosting protocol which publishes user data outside of site silos while still "feeling" like a web app. Bonus feature: end to end encryption.


Adding support of Internet Message Body Format (a.k.a. MIME) to browsers [1].

MIME is a format that can contain html/css/script/images/etc in single file (or stream).

Thus the whole web application can be served as a single stream by the server.

Emails (which are MIME files) could then be opened by browsers as natively supported documents.

[1] MIME : https://tools.ietf.org/html/rfc2045


You might be interested in Web Packaging: https://github.com/WICG/webpackage


Not clear why we need this new format. MIME has been around for a long time. If you need some metadata you can simply add a header section with application/json or application/manifest+json or whatever...


You mean like .mhtml in Chrome?


An obtrusive prompt (UAC equivalent) required to load any JavaScript. The web would be so much more functional, to the point, and responsive. Just imagine the electricity savings.

The world truly would be a better place.


I couldn't disagree more but there's plenty of browser add-ons that allow you to do this.


The whole point would be that it would be mandatory, so that websites would need to do without JavaScript unless they really couldn't be usable without it.

No one will stop using JavaScript based on what I do locally.


Why not just install NoScript? This seems like an easy problem to solve locally.


This is a moonshot, but I would love to see a social network based on protocols similar to how emails work. Then different websites could implement interfaces for the protocol and talk to each other.


A working version without JavaScript (unless it's crucial for the website). No opacity-0 animations, JavaScript-only menus, etc.


This should get more votes. It should be an internet requirement that a website be functional and readable without JS.

I don't know how this could be implemented though


Just include <noscript> with a style tag containing those fixes (regarding opacity). Menus without JS are not a problem in new browsers; CSS3 is powerful enough. There's even a repo with common elements done without a line of JS -- see https://github.com/you-dont-need/You-Dont-Need-JavaScript


ML driven content blocking for ads and other garbage such as social widgets and beacons. Red screen warning as deceptive on any site that tries to hack its way around the filter.


Interesting concept.. one that Google certainly wouldn't implement in Chrome because of their ad-based revenue model.

But I could entirely see a little privacy "eye" icon in the URL bar of Firefox, similar to the padlock icon we have now for HTTPS/certificates.

The eye could turn red and display text for the site you are visiting based on analytics, beacons, web bugs, and so on.

Or how about major social media sites have their icons placed in the URL bar if they have trackers / social media widgets on the current page. This way, it is made explicitly clear to the user that "{Insert social platform here} is tracking you on this page, even while you are logged out, don't have an account, ..."

The difficulty with having the browsers force the standard is getting Google Chrome on board, since they have so many users.


Wouldn't you by definition not be able to detect working around the filter? Because if you could detect it, why would it not just be another filter condition?


- A decent minimum password length, without any funky requirements, just the minimal length.

- Being able to prosecute any company that stores passwords in plain text


- no max length on password at all, or allow a 3 digit number of chars. Never silently truncate passwords either.

- never disable paste on a password field.


You'll have half your users with passwords like '123'. You could say it's the user's fault and their account is compromised, but when it's half the users on your site, really your site is compromised.

Better to have a minimum password entropy.


I think that you misunderstood. What I want is that the password can be 100s of chars long, if the user so chooses. i.e. no noticeable maximum length.

I said nothing at all about minimum password lengths, and that's deliberate, it's a separate kettle of worms.


I understood the parent comment as saying the password length should be allowed to run to three digits (hundreds of chars), rather than the password itself being three digits.


Correct, though I would phrase it as "the password can be 100s of chars long, if the user so chooses"

i.e. No noticeable maximum length for people using password managers and generating 30, 50 or 100 char random passwords, but still insulated against attacks with endless streams of input data - it is acceptable to reject 10 000 char passwords as a hostile input designed to tie up server resources.

I said nothing at all about what the minimum password length should be, and that's deliberate, it's a separate kettle of worms.

But ok: I'm also not a fan of measures such as "password entropy" or "must contain at least one from column A and one from column B". Subjectivity, naive use and changing attacks have given these a bad reputation, often deserved. Password length is not subject to such changing moods.

The parent post's comment, "A decent minimum password length, without any funky requirements, just the minimal length", is fine by me. I didn't want to add to that statement on the topic.

With all the rules in the world, some people are going to have relatively weak passwords, and we cannot entirely eliminate that. But we can also allow and encourage strong passwords by - as an easy first and minimum step - removing deliberate misguided impediments like max lengths and disabling paste.


I think the only way this can happen is zero-knowledge password proofs, i.e. browsers implement a mechanism by which password fields submit a proof that the user has the password, rather than submitting the password. This way the server can only verify the password if they've implemented the proof system correctly, and they can't leak the password because they've never had it.

The basic idea is, the server gives a unique nonce with the password form. The user enters their password. On form submit, the browser stretches the key space of the password using a slow hash, then uses the digest to generate an asymmetric key via a referentially transparent algorithm (no random salts). Then the browser prepends the URL (obtained from HTTPS) to the given nonce (to prevent man in the middle attacks). The browser then checks to see if it has seen this nonce before and displays an error if it has (to prevent replay attacks--this forces servers to generate new nonces, although the browser can't force them to verify that the nonce that is signed later is the same one they sent). Finally the browser uses the key to sign the nonce, and sends the signature to the server. The server uses the public key (which was generated in the same way and given to the server at sign-up) to verify that the user has the password.


I love the idea of zero-knowledge password proofs. Others can chime in on the approach you've proposed, but I have a more practical concern about developing critical mass.

How do you break through the chicken and egg problem of not enough users using or not enough browsers supporting this capability?


If it's a field on inputs of type password, all you'd get is something like:

<input type='password' password-nonce='42'></input>

Browsers that support the password-nonce argument sign as I described. Browsers that don't support it pass through the password and the server performs the ZKPP key generation (this is no worse than the current system of hashing passwords). So servers can implement this immediately without worrying about breaking in non-supporting browsers.

After adoption by a few major sites, browsers can add a warning that the server didn't send a password nonce and the password will be passed to the server so the user has to click "Okay" before it gets submitted. This can be escalated to more severe messages to pressure more sites to comply.


Find a large user who feels this is a valuable feature and have them adopt it.

Governments are one such large customer.

Vendors, faced with multiple customers requesting a feature, but with slightly varying specifications, will tend to seek a mutually acceptable spec.


I'm waiting for other people to tell you to implement this, but without JavaScript.


Implementing it in Javascript would defeat the entire purpose. The point is for the browser to implement it as a supported field on input tags of type password.


- Being able to prosecute any company that stores passwords in plain text

And the death penalty to any company that emails you your plaintext password (and even worse, the email tells you to "protect your password").


You don't want to restrict on any password aspect but this:

Is the password known?

Any short password is known. There are lists of millions of known passwords.

Better would be to get away from passwords entirely.

NB: I've been checking the xkcdpass utility (available on Debian). I generated 50 sets of 100,000,000 passphrases each, each comprising six words (the default), then sorted them, deduplicated with uniq, and counted the output lines.

Any duplicates would result in fewer than 100,000,000 lines.

All fifty trials had no dupes.

Took most of a week to run that, on an older box :)


IPv6?


Standardise on a set of basic document types. Index page, article, gallery, search/results. Others as necessary. Specify standard elements and styling.

Standardised metadata. Pages should have title, author, publication date, modification date, publisher, at a minimum. Some form of integrity check (hash, checksum, tuple-based constructs, ...).

User-specified page styling. If I can load a page in Reader Mode, https://outline.com, or Pocket, I will (generally in that order). Every page having some stupid different layout or styling is a bug, not a feature. Web design isn't the solution, Web design is the problem. Users could specify their default / preferred styling. Night mode, reader support, etc., as standards.

Fix authentication. PKI or a 2FA based on a worn identification element (NFC in a signet ring with on-device sensor is my personal preference), if at all possible. One-time / anon / throwaway support.

Reputation associated with publishers and authors. Automated deprecation of shitposting users, authors, sites, companies.

Discussion threads as a fundamental HTML concept.

Dynamic tables: Sort, filter, format, collapse fields, in client. Charting/plotting data would be another plus.

Native formula support.

Persistent local caching. Search support.

Replace tabs with something that works, and supports tasks / projects / workflows. (Tree-style tabs is a concept which leans this way, though only partially).

Fix-on-receipt. Lock pages down so that they are no longer dynamic and can simply be referred to as documents. Save to local storage and recall from that to minimise browser memory and CPU load.

Export all A/V management to an independent queueing and playback system.


For me, I would go with:

- Typed JavaScript (TypeScript) should be built into browsers.

TypeScript is great, but all the configuration and transpiling is a pain.


Why exactly TypeScript?

TypeScript should be compilable into WebAssembly bytecode and that's it.

If you want it to be transparently compilable, i.e. foo.ts served as WebAssembly bytecode, then something like mod-typescript (an on-the-fly TypeScript compiler) could be designed to serve compiled and cached bytecode.


I kinda dislike dealing with the configuration too but it's there for a reason; because one size doesn't fit all. Then again, it's not a major pain point for me as long as I stick to the defaults.

And deferring the transpiling to the browser won't really accomplish much, except increase load time for all users.

I'm currently working on a project with ~500KLOC of typescript, and transpiling it takes about 15 seconds on my devbox. It's still a bit too slow for my taste but I don't think there is a technical quick fix for this kind of stuff, rather it's a tale about a project which started small, organically grew larger over time, and would in many ways benefit from a clearer structure with clear dependencies.


A secure standard for ads. Right now reputable sites are running ads from people they shouldn't trust, and getting bit in the ass by it. Popups, page takeovers, even viruses get distributed through ad networks and end up on non-malicious sites.

None of that should work. Ads shouldn't be able to inject their own JavaScript into a page. There's a technical solution to that problem.

Let's narrow down the scope of things an ad needs to do: display an image, maybe play sound or video after the user clicks on it, and send back a reasonable amount of tracking data. Then let's come up with a sandboxed DSL for ad networks to specify their ads. Websites could embed those ads inside an <ad> tag that sandboxes that content and makes sure only supported functionality is being used.
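
One possible shape for such a DSL, sketched as a declarative manifest the <ad> tag could point at; the interface and field names are invented for illustration, since no such standard exists:

  // Hypothetical manifest an <ad> element could reference; the browser
  // would render it in a sandbox and allow nothing beyond these fields.
  interface AdManifest {
    image: string;             // static creative to display
    clickUrl: string;          // landing page opened on click
    video?: string;            // only plays after an explicit user click
    impressionBeacon?: string; // single, size-limited tracking request
  }
  const example: AdManifest = {
    image: "https://ads.example/creative.png",
    clickUrl: "https://advertiser.example/landing",
    impressionBeacon: "https://ads.example/beacon?id=123",
  };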

Then I can turn off my ad blocker and not have to worry about all the security issues that unscrupulous ad providers bring with them today.


A ban of everything JS except for these so-called web apps, which obviously need it. Make the internet great (performant/efficient/secure) again!


Forgive me, but I just don't understand this sentiment at all. I understand your general frustration with over-engineered websites - but is it not your choice to visit that website? Do you not also have the ability to block JavaScript, just like with the scourge of Flash websites before it? We aren't talking about vulnerabilities here, though; you're just saying that there are websites out there that could do with less (or no) JavaScript but aren't doing it, so you want to "force" them to?

Let me ask it a different way - do you have any reasonable expectation that your proposal will ever be accepted? That browser manufacturers will implement things to limit or block the functionality of js? Where would the line be drawn (and who would draw it) between the so-called web apps and everything else that isn't worthy?

There already are some mechanisms in place to disincentivize misbehaving websites, such as the Google rankings. But that's a far cry from a browser not supporting one of those sites, or displaying a warning when viewing one.

Maybe I'm missing something - that there are these required sites that are misbehaving and we need some regulatory power to rein them in.


>Let me ask it a different way - do you have any reasonable expectation that your proposal will ever be accepted?

There are hundreds of pie in the sky suggestions being floated here, and the one about javascript is the one you choose to attack with this argument?

JavaScript has unequivocally made the web worse for everyone but advertisers and perhaps the people that run CDNs.

Why, of all the proposals here, are you trying to shit on this one in particular?

Honest question.


> JavaScript has unequivocally made the web worse for everyone but advertisers and perhaps the people that run CDNs.

Do you really think this is defensible? That the web would be as popular or useful as it is today without the ability to run code in the browser? I'm curious if you think there is a majority of people that agree with this?

> Why, of all the proposals here, are you trying to shit on this one on particular?

I am not shitting on anyone - I'm trying to have an honest discussion about why you and the OP feel that JavaScript is such a scourge that it needs to be regulated. Not one person has addressed even one of my questions, you included. I'm sorry you're taking my challenge as hostility - it's not intended that way.

I submit that it's possible I am missing something - perhaps there are situations out there that I don't have to deal with. I'm asking for an honest viewpoint that I can try to understand.

> ... pie in the sky ...

There's a difference between "here's something that's easily accepted as a good idea but might be difficult to implement" and this. I'm asking for an explanation of the premise itself.


Plenty of popular web applications work(ed) without JavaScript. I'm thinking here of things like Gmail.

The only thing that I can think of that absolutely requires JavaScript is advertising and tracking.

Anything else better serves the user as a desktop or native app.


  Maybe I'm missing something
You definitely are. Disable JS and try browsing. Note the quadrupled battery life on your laptop.


I don't doubt improved battery life (although quadrupled seems like a stretch) - but I bet if you turned off images and video you'd see a similar improvement, and nobody is saying that pictures are ruining the web.

Again, aren't you capable of choosing the websites you use? Are there websites you are required to spend extended amounts of time with that you want the browsers to step in and force to use less JavaScript?


Images decode once, usually in hardware. JS runs in the background, forever.


Let me guess, you write angular apps backed by node?


Really what browsers need are profiles. Maybe a research profile that just supports submitting search forms and renders everything in the same colors and the same fonts and the same margins, and an app profile which lets pages do all the ridiculous JS crap.


And websites supporting this. I recently came across some site providing a paper that wouldn't even load the raw text without JS.


First thought when reading the headline: Backlinks.

Jaron Lanier explains... https://www.youtube.com/watch?v=bpdDtK5bVKk&feature=youtu.be...


FIDO U2F hardware authentication token for 2 factor login. Simultaneously easier and more secure than other 2 factor methods. But first someone needs to make a <$5 hardware token so people might actually consider buying one.
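
For what it's worth, the browser side of asserting such a token as a second factor is already small. A sketch using the WebAuthn API (the W3C standardization of FIDO authentication in browsers); the challenge and credential ID come from the server:

  // Sketch: prove presence of a hardware token as a second factor.
  // challenge and credentialId are supplied by the server at login time.
  async function secondFactor(challenge: Uint8Array, credentialId: Uint8Array) {
    const assertion = await navigator.credentials.get({
      publicKey: {
        challenge,
        allowCredentials: [{ id: credentialId, type: "public-key" }],
        userVerification: "discouraged", // a tap proving presence is enough for 2FA
      },
    });
    return assertion; // POSTed to the server, which verifies the signature
  }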


YubiKeys are $18. Not $5 but that's in the neighborhood, and they're relatively new. Prices will come down.


$18 is not in the neighborhood of $5. Also I doubt the keychain type can achieve mass adoption, it's the in-USB-port kind that is actually convenient, and those cost $50.


SSL was forced by Google single-handedly. Developers, scared that HTTPS might become a ranking factor, quickly moved to SSL.

As for the topic, I would like to see all mail clients rendering emails the same goddamn way.


And handling replies the same way. Far too often I see someone using IBM Notes send an email to someone using Outlook and when it gets to me the sender says "review the email chain below" and every damn line has another damn angle bracket. Not sure which client is adding it all in, but it makes it unreadable.

>hello

>>my name is bill

>>>i'd like to have a meeting

>>>>please provide your availability


>>>>Even worse, is when you have a really long line followed by a

>short

>>>>line that got moved because a word exceeded some unknown column

>boundary.

>>>>This isn't cool anymore, and anyone that implements it should be

>shot.


2FA everywhere, preferably with Yubikey (no connection but happy user)


There was a link just 2 days ago on HN [1] about how 2FA has already been forced, but it's a mess b/c every site does it differently and usually in a way that's not secure.

1: https://news.ycombinator.com/item?id=14735759


Side question: Do you use your Yubikeys for GPG? I tried to make two identical keys with the same GPG key, but still I get "Please remove the current card and insert the one with serial number: ..." if I try to decrypt a file with the other key. I asked the internet a couple times but no one seems to know.


This is the most important thing. It solves so many other inconveniences of the net. Your username should be your fingerprint. Your password should be 8 characters. You should have a 2FA fob that secures your account. So much less fraud and other attacks. I think this has the highest ROI.


W3C standard for bloat-free websites, aka vendor-neutral equivalent of Google AMP and Facebook Instant Articles, to avoid further fragmenting the web.

If it's an open standard, mobile views and other features can be progressively added to websites in a variety of ways: built into browsers, polyfills, or open-source libraries, and lead to a much better web experience across devices. Aggregator startups and apps would stand to benefit a lot from this.


Getting rid of passwords. Passwords are the easiest way for others to get access to your accounts.

A move to federated identity, with a standardized API, and integration with the browsers, would fix all these issues. You could easily use a federated identity provider with support for 2FA, and ALL your accounts would immediately work with 2FA.

And, with federated identity, you can also run your own, if you don’t trust Google or Facebook login.


> And, with federated identity, you can also run your own, if you don’t trust Google or Facebook login.

Sounds like OpenID...which was kind of a train wreck.

One middle ground could be tighter integration between browsers, sites and password managers. With the right specification, sites could offer a "register with [1Password,KeePass,LastPass,etc]" button which would open the password manager and pre-fill all the fields from a password manager's identity record (i.e. if a user has multiple identity records, the password manager can prompt for the one to use.) If there's a need to choose a username, there should be a standard endpoint for checking uniqueness. Once all information is filled out, the password manager can generate a password based on the site's specifications, post the information back to the site, store the account details and the site's ToS in its database and update the browser page to the success url. There can also be a standard endpoint for password rotation so the password manager can periodically update the password without the user's involvement.
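
A rough sketch of what that handshake might look like from the password manager's side; every endpoint path and field name here is invented for illustration, since nothing like this is standardized today:

  // Hypothetical manager-driven registration against a site that publishes
  // a registration spec; generatePassword() stands in for manager internals.
  declare function generatePassword(rules: unknown): string;
  async function registerWith(site: string, id: { username: string; email: string }) {
    const spec = await fetch(`${site}/.well-known/registration`).then(r => r.json());
    const check = await fetch(
      `${site}${spec.usernameCheck}?u=${encodeURIComponent(id.username)}`).then(r => r.json());
    if (check.taken) throw new Error("username taken, prompt for another");
    const password = generatePassword(spec.passwordRules);  // meets the site's policy
    const result = await fetch(`${site}${spec.register}`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ ...id, password }),
    }).then(r => r.json());
    return { password, successUrl: result.successUrl };     // stored in the vault
  }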

This would still use passwords and for those that explicitly don't want to use a password manager, they could continue doing things the current way. But for the majority of us, it would be just like what you're talking about except that our identities would be stored locally in our password managers, giving us complete control over everything while automating as much as possible.


That’s another one of these half-assed solutions that the world has too many of, just like credit cards or using SSN as auth.

No.

We’ve solved all these issues before, OpenID was a good solution, and OpenID Connect – a complete rewrite – can be used to replace it, and is used already for Google and Facebook login.

Just use OIDC, on every page, and allow users to choose an identity provider. Problem solved.


> OpenID was a good solution

Ugh...no. Just no. If you believe that, I'm not sure there's anything I can say that will get through to you...we're just going to have diametrically opposed opinions. But let me tell you that I've implemented it quite a few times on various sites and it's been a nightmare every time, so it's not like my 'half-assed' approach was ignorant of it being an option. It was based on specifically discarding that option as poorly conceived, poorly implemented and simply not being an option in many cases. Banks, for example, should never let a 3rd party IDP be part of their authentication process. And if your solution is inapplicable in situations where tight security is required, you've got to ask yourself whether you're actually solving the problem or just requiring users to be aware of two different ways of logging in instead of the one that they currently understand. Even if they do have to remember different passwords for most sites, most people can deal with username/password conceptually in a way that using a third-party IDP confuses them.


Banks should only allow auth via EMV chip and PIN, like all German banks already do.

And for anyone else OIDC is more than good enough.

Username and password are in any situation less secure than OIDC.


I kind of agree: using OIDC with users selecting providers in the form of an email address (i.e. you enter an email address and the provider is selected based on that) seems like ideal UX.


And that also is standardized via WebFinger and OIDC Discovery.
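
Roughly, the lookup goes email -> WebFinger (RFC 7033) -> issuer -> OIDC Discovery document. A sketch of that resolution:

  // Resolve the OIDC issuer for an email address via WebFinger (RFC 7033),
  // then fetch the issuer's OIDC Discovery document to find its endpoints.
  const ISSUER_REL = "http://openid.net/specs/connect/1.0/issuer";
  async function discoverOidc(email: string) {
    const domain = email.split("@")[1];
    const jrd = await fetch(
      `https://${domain}/.well-known/webfinger` +
      `?resource=${encodeURIComponent("acct:" + email)}` +
      `&rel=${encodeURIComponent(ISSUER_REL)}`).then(r => r.json());
    const link = jrd.links.find(
      (l: { rel: string; href: string }) => l.rel === ISSUER_REL);
    if (!link) throw new Error("no OIDC issuer advertised for this address");
    return fetch(`${link.href}/.well-known/openid-configuration`).then(r => r.json());
  }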


Yep, although those aren't really very (at all?) popular yet :/


Strict HTML, CSS and JavaScript parsing. One single error => Site won't be displayed. These lazy web devs need some more discipline!


In theory this is an awesome idea, but it falters for a couple of reasons, IMO. One is graceful degradation: in case of bugs in a browser, a browser not supporting new syntax, etc.

If the spec had byte-for-byte specification for what happened when, this would be fine, and of course that is the case for say, JSON, which has remained roughly the same since inception. But given how fast CSS and JS are moving, it would be painful to do this same thing with either of those.

XHTML did this with HTML, but it kind of sucked because some browsers would reject what others would accept in some edge cases, and that is terrible. Plus, there's already a lot of bad HTML in the wild, and XHTML cast some doubt on our ability to pull off a major HTML compatibility break.


> or a browser doesn't support new syntax,

Then the user needs to update their browser. It's pretty inexcusable these days to not keep your browser up-to-date, and browsers tend to support new language specs before those specs get widespread usage.


> before those specs get widespread usage.

'widespread'

This would kill off new features, basically. No-one would implement a new feature, because it would kill off their users in $other_browsers completely, and browsers add new features piecemeal - without seeing which features are popular, there's no way to tell what is best bang-for-buck to work on.


I agree with this, but feel like there might be issues I'm not thinking of. Can anyone shed some light on why browsers are so tolerant today and why that might be a good thing?


The Robustness Principle states "Be conservative in what you do, be liberal in what you accept from others."

Following that maxim, browser developers assumed that even if the HTML wasn't strictly correct, figuring out what the author logically meant and rendering that was better than not working at all.

In short, people wrote garbage HTML and it proved easier to fix browsers than people. At first, it wasn't too problematic, but as HTML got more complex more problems surfaced and now everything is a mess.

This was the goal of XHTML: HTML that was required to validate as XML or it wouldn't work at all, and some browsers were, indeed, strict at this. The idea was that you'd only use XHTML if you were generating it with an XML parser or some other template generator that could produce valid code. In reality, that just meant that browsers that didn't understand XHTML treated it like HTML and worked, and browsers that did understand XHTML and validated it would show errors. Thus, users saw that browser X (doing the right thing) couldn't display a site, but browser Y (doing the wrong thing) could.


Probably legacy reasons and the type of errors you can get.

Since JS used to be "sugar on top", it wouldn't make sense to completely eliminate the page when that piece of code which makes a logo flash doesn't work right.

Also, you can have JS errors coming from loads of places. What if an extension you use has a bug in it that triggers only on certain sites because of some stupid unicode issue? What if some ad has an issue like that?

And basically, it really boils down to: we all ship buggy fucking software. Everything has some kind of a threshold for errors (or errors that blow up only under certain conditions). It's good to have some built-in fault-tolerance that prevents an all-out disaster.


If two browsers both implement strictness, but have different standards or implementation bugs then you truly have made the dev's job hell.


Automated HSTS, revokable public key pins, and certificate transparency.


Agreed, but I'd add ipv6


Agreed, but to those 4 things I'd add DANE TLSA (RFC 6698) and Certification Authority Authorization (CAA) (RFC 6844) as further lines of defence against rogue CAs.


Evergreen web browsers. Safari and IE11 continue to ruin my life.


Unfortunately this one is impossible even if every developer and browser vendor were all unanimously in favor. Edge-née-IE is evergreen now, but users have to upgrade Windows first, and that's not something that can be forced.


What do you mean by this?


"evergreen" means self-updating, with a rapid release cycle. Chrome, Edge, and Firefox follow this model. IE (essentially a legacy browser at this point) and Safari don't.


Thanks.

This works in some instances.

Not so much in others.


Ending the NSA dragnet


Only half joking. Team up and find a way to force Apple to allow competing rendering engines on iOS.


Decentralization.


I would love package systems and server admins to recognize the inherent danger in allowing servers to call out to the wild. All internet-facing servers should only be allowed to call out to whitelisted addresses.


Do you mean blocking all IPs except those whitelisted? Obtaining the right whitelist seems like a time-consuming task. If you control your servers and trust their software, why would you cripple their operation?


Disclaimer on all sites about data collection (on sites that collect data):

Precisely what data is collected, a list of the 3rd-parties the data is sent to, the policies of those 3rd-party sites, how long the data is held at the primary domain, how long the data is held at the 3rd-party sites, options for requesting that such data be deleted.

Sites that act as a conduit for the collection and transmission of user data should be held accountable for the breach of such data.


Dropping TLS in favor of IPSec. Now every protocol is transparently secure by default and there's no chance of developers accidentally messing it up.


Key management would be a nightmare here. IPSec has very long setup times, and every destination would need a new crypto configuration. This is exactly the problem that TLS was designed to solve, and IPSec and the web are a bad mix.


IPSec is a complex design by committee, with NSA's help. Schneier does not believe it will become a secure system.

http://www.mail-archive.com/cryptography@metzdowd.com/msg123...

https://www.schneier.com/academic/paperfiles/paper-ipsec.pdf


IPSec is beautiful and definitely was the correct way to handle encryption.

IKE and ISAKMP were the problem; that stuff is an absolute nightmare. Maybe now with IKEv2 it will get better...


> IPSec is beautiful.

Could you elaborate?


- Make client-side certificate authentication mainstream. Fix the UI, UX

- Standardize on some sort of biometric identification that actually works. I HATE two-factor :(


1. Client-side certificates usage has privacy implications - https://github.com/tumi8/cca-privacy

2. Is biometric really necessary? U2F tokens already exist and are standardized (maybe not officially, I'm not sure). Chrome and Opera already support it, Mozilla's support must be coming soon (meanwhile you can use an add-on).


I am sick of the actual motions of authenticating, and many of the 2-factor implementations out there today are terrible. (SMS, really? What happens when your phone is stolen? How do you protect against an angry lover? What a joke)

U2F dongles aren't much better.

Also, a quick glance at that link seems to indicate the attacker needs some sort of MITM access? Is it anything more than a replay attack?


A simple way to block third party trackers/beacons that's on by default, with a simple one-click to disable it on that page load.


A truly obfuscatory browser: one in which everything sent to the server looked the same, regardless of which user, region, etc.


With add-ons that disable user agents, referrers, and other details, and by disabling JavaScript, you can nearly achieve this with Firefox and Chrome. I've noticed that some websites refuse to serve content without a user agent.


curl -H "" -o stuff.html && elinks stuff.html

I've been looking for a site I can run this on over TOR at random times for reading news but I haven't found one.


Can elinks read from stdin?


links2 can, and both of them have --dump.

The nice thing about doing it like this is that the file persists on disk so viewing a page is decoupled from fetching it (which is nice for a lot of reasons.)


Yeah, I may have done that once or twice myself.

https://ello.co/dredmorbius/post/naya9wqdemiovuvwvoyquq

Why browsers don't dump to disk in a format that makes for easy rendering ... I don't know.


Accessibility. Should be hard or impossible to build an inaccessible website. Tooling needs to be vastly improved.


Ability to mark an HTTPS site as "not secure" using HTTP headers if it's asking for things like logins and passwords.

Would be useful for things like free static HTML web hosts and CDNs for combating phishing.

Could be something put in CSP.
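
A sketch of how a browser might act on such a signal, assuming a made-up CSP directive name (nothing like it exists in CSP today):

  // Hypothetical check a browser (or extension) could make: warn when a
  // page whose CSP carries "no-credential-collection" (invented directive
  // name) nevertheless contains a password field.
  function shouldWarn(cspHeader: string, doc: Document): boolean {
    const optedOut = cspHeader
      .split(";")
      .map(directive => directive.trim())
      .includes("no-credential-collection");
    return optedOut && doc.querySelector("input[type=password]") !== null;
  }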


Is that different from CSP form-action?


Yes. This wouldn't prevent a form from working; what it would do is warn the user that this site shouldn't be asking you for a password and may be trying to do a bad thing, instead of just showing a green bar and a "security lock" with the word "secure" on it.


Informing the user about trackers being used on a website (I know there are add-ons available). There should be a mode or something that informs the user about this so that they can close the website and look for alternatives.


I'd like to see form validation get a badly needed overhaul. At the same time, we can punish sites that use shitty/inappropriate practices. This would vastly improve mobile experience especially.


Less features... Instead, figure out how to force good usability.


* SameSite cookies

* CSP
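
Both can be turned on by a site today; a minimal Node-flavoured sketch of the headers involved (the cookie value is a placeholder):

  // Minimal Node sketch: a session cookie opted out of cross-site requests,
  // plus a restrictive CSP. In practice this would sit behind TLS so that
  // the Secure attribute is honoured.
  import { createServer } from "http";
  createServer((_req, res) => {
    res.setHeader("Set-Cookie", "session=abc123; Secure; HttpOnly; SameSite=Strict");
    res.setHeader("Content-Security-Policy", "default-src 'self'; script-src 'self'");
    res.end("ok");
  }).listen(8080);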


For me it would be cookie consent control implemented at the browser level, with sites being able to describe the policy via a hosted policy file.


I want a meta command to enable the browser's reader mode. Then I can just render HTML and the browser can display it as the user prefers.


IPv6, DNSSEC, P2P DNS, and rootless DNS.


Default encrypted email communications


That's not part of the web.


Given that most people's email is probably through Web apps these days, that's probably not effectively true.


Even if you use a desktop client, email is part of the world wide web.


How do you figure that? If you're using a desktop client, it seems pretty exactly not part of the web.


It seems that I was confused about the definition of the web; I was conflating it with the Internet. So never mind, email is not part of the web.


A simple micropayments scheme that can be used on publications, music sites, whatever.


HTTPGP, forced PGP encryption between client-server. Would be pretty cool


Pay to turn off ads. A certain percentage of visitors are asked to rate the content (to avoid paying); the rest of the visitors are automatically billed and pay the average rating.

Each user can specify a maximum payment and can opt to view with ads if payment requested is too much.


Ever try ad-block? It's actually free.


How do you pay the people who create content?


Not through ads.

