
Ask HN: What feature would you want the web to “force” next, after HTTPS? - chiefofgxbxl
We've seen the push for HTTPS accelerate in recent years and become more and more aggressive (in a good way).

Browsers have arguably led this drive by notifying users that the pages they are viewing are "NOT SECURE", through the use of padlock icons in the URL bar or even notifications under text boxes (e.g. Firefox) [0]. Chrome, too, is driving this trend [1]. And with users fearful of sending data over a "non-secure connection", they'll be vocal enough to push website owners to fix the issue.

---

So, if you could decide: _what feature or measure would you want to see adopted as quickly as the push to make all sites use HTTPS?_

[EDIT: kudos if you describe how your new standard could be "forced", e.g. through a URL-bar icon, notifying users somehow, etc. How would you convince other developers and maintainers of large codebases, websites, browser vendors, etc. to throw their support behind your initiative?]

Think ambitiously too -- imagine your proposed feature would have the same backing and urgency as HTTPS, with browsers (for better or for worse) authoritatively "dictating" the new way of doing things.

---

[0] http://cdn.ghacks.net/wp-content/uploads/2017/03/firefox-52.0-warning-insecure-login.png

[1] https://www.troyhunt.com/life-is-about-to-get-harder-for-websites-without-https/
======
amluto
First party isolation. Social media buttons and other trackers should not get
a global identity for free.

Explicit opt-in to store persistent state at all. An exception should be a
cryptographic identity that is only revealed when you click a login button.

No sound without opt-in.

No big data transfers without opt-in. If a site wants to shove 10MB of crap in
their article, then they should have to show a page asking permission to use
data. And search engines should refuse to index anything behind a bloatwall.

~~~
kgen
I like this idea, but because websites have the content, they can simply throw
up a button that requests your identity to view the content, and most people
would blindly click it, leading to the same situation we have now. The HTTPS
push is important and works because the search engines can leverage their
importance and the browsers can (effectively) scare people, without any user
input.

~~~
jazoom
It would just be another case like the EU cookies thing. Every website would
have the button and everyone would click it immediately to get rid of it. It
would just be an annoyance.

~~~
amluto
If done well, the browser chrome would be more clever than that. There should
be "log in as [username]" and "stay anonymous" options. Unless websites want to
start validating email addresses to let you read their content, they'll have to
accept "stay anonymous", because it would be indistinguishable on the server's
end from a brand-new user.

So you'd have an idiotic banner pissing off your users and the considerable
majority would click "stay anonymous", gaining the site operator nothing.

------
age_bronze
Registration forms should be standardized. I want to have my "real" details
and my "fake" details ready to be entered into websites that want yet another
registration. Why does every single website implement its own registration
form with exactly the same details?! Why does every single web site re-
implement the registration page slightly differently?! Ideally, I'd land on
the registration page, the browser would list the things the site wants to
know, I'd pick either my real details or a set of fake details (for spammy
websites or others you don't really care about), and with one click
registration would be complete.

~~~
eastbayjake
> Why does every single web site re-implement the registration page slightly
> differently?!

Because registration pages are the top of most sites' conversion funnels, and
as such they produce metrics that reflect on not just IT teams but also
UX/design and marketing. The amount of tinkering and customization in
registration pages is a political/organizational problem, not a UI/tech
problem.

And besides that, no business has an incentive to make it easier for you to
free-ride their service with fake credentials. The better question might be
why so many companies continue to put cost-inducing barriers like signup in
their flow before they fully demonstrate the value-creating potential of their
product. I liked Facebook's "Anonymous Login" but apparently very few
developers were actually interested.

[https://www.recode.net/2015/3/6/11559878/whatever-
happened-t...](https://www.recode.net/2015/3/6/11559878/whatever-happened-to-
facebooks-anonymous-login)

~~~
krapp
> The better question might be why so many companies continue to put cost-
> inducing barriers like signup in their flow before they fully demonstrate
> the value-creating potential of their product.

Because it works. If it didn't work, companies wouldn't be doing it.

------
notzorbo3
\- A protocol for sites to get my public PGP key for server side use

\- The discontinuation of SSL certificates for verifying website identities,
and a move to true fingerprinting à la SSH.

\- Deprecation of email, or rather of its insecurity.

\- Logins on websites with a public/private keypair, à la SSH.

\- A resurgence in sites that let me pick my own anonymous username instead of
Facebook, Google or Twitter logins and email addresses as UIDs.

\- Blocking of any and all forms of popups, including javascript popups,
overlays, cookie banners, browser notifications.

The web is rapidly becoming a place I don't want to visit anymore.

~~~
steven777400
Mozilla's "Persona", now deprecated, was an attempt to solve the login
problem. I like the general idea that I strongly authenticate to my browser,
which can then "vouch" for me to various sites using cryptographic tokens that
are otherwise useless (so no cracking/stealing of passwords, etc.). The devil,
of course, would be in the details.

~~~
ben_jones
The devil is that the corporations able to influence this change directly
benefit from current identity systems. Google for example has huge
infrastructure in place linking its products together and includes tracking
and other features that would only work with the current system.

------
syncerr
I'd vote for DNS-over-HTTPS or similar tech. Encrypting domain name resolution
should help prevent a gateway or proxy (e.g. Comcast) from seeing or blocking
the sites you visit.
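
For a concrete sense of what this would look like, Google's public resolver
already exposes DNS-over-HTTPS as a JSON API (a real endpoint; example.com is
just a sample query):

    curl -s 'https://dns.google.com/resolve?name=example.com&type=A'
    # Returns the answer as JSON; on-path networks see only TLS traffic
    # to dns.google.com, not which name was resolved.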

~~~
swiley
DNS is a non-trivial amount of traffic to move from a lightweight UDP protocol
to something like HTTPS. Furthermore, it would dramatically increase page load
times (for reasonably sized pages), since HTTPS requires more round trips.

~~~
criddell
> dramatically increase page load times

This is true, but with a reasonable cache design, it shouldn't be too bad.

~~~
throwaway2016a
Unfortunately a single page load often contains files from many different
domains. Sometimes 10+. So caching may be of limited use.

Although this may be a nice driving factor to get eCommerce sites to stop
putting 50 tracking pixels on every page.

~~~
criddell
That's true. DNS lookups seem like something you can do in parallel though, so
I still don't think it's that big of a hit.

------
gcoda
AJAX without JavaScript: the ability to send a response from the server that
updates only part of the DOM. Basically, React with a virtual DOM on the
server, pushing diffs to the user with HTTP/2 awesomeness.

There would be no need for JS on most sites, it could be adapted to current
frameworks, and with preload/prefetch it might be very fast.

* You could prefetch a progress bar / loading state, for example, and redirect to the partial URL of the real content

~~~
jlaporte
You may be interested in intercooler.js
([http://intercoolerjs.org/](http://intercoolerjs.org/)). It allows you to
perform a host of AJAX + DOM manipulation flows using only HTML attributes.

intercooler also supports something similar to the server pushed DOM diff flow
you envision:

Server Sent Events BETA
([http://intercoolerjs.org/docs.html#sse](http://intercoolerjs.org/docs.html#sse))

 _" Server Sent Events are an HTML5 technology allowing for a server to push
content to an HTML client. They are simpler than WebSockets but are
unidirectional, allowing a server to send push content to a browser only."_
[http://intercoolerjs.org/docs.html#sse](http://intercoolerjs.org/docs.html#sse)
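
As a taste of the attribute-driven style (ic-get-from and ic-target are real
intercooler attributes; the /news/latest endpoint is made up):

    <!-- Clicking the button GETs /news/latest and swaps the returned
         HTML fragment into #headlines, with no hand-written JavaScript. -->
    <button ic-get-from="/news/latest" ic-target="#headlines">Refresh</button>
    <div id="headlines">...</div>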

~~~
gcoda
That's awesome; I was only aware of Turbolinks. Sadly, I will never use it for
real.

Like many others, I spend time ensuring server-side rendering works and the
website can function without JS. If intercooler were part of the browser,
rather than separate code, it would be possible to adapt any SSR-ready app to
work this way.

------
atirip
A JavaScript standard library that every browser has "installed" and updates
automatically.

~~~
Spivak
Or just stable JS APIs that you can specify on your scripts.

<script vers="3.1"> </script>

<script vers="4.2"> </script>

~~~
vortico
Apparently this somewhat works.
[http://jsfiddle.net/Ac6CT/](http://jsfiddle.net/Ac6CT/)
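
For context: old versions of Firefox really did support opting into a specific
JavaScript version through the script type attribute (a now-removed
mechanism), e.g.:

    <script type="application/javascript;version=1.7">
      // JS 1.7-only syntax ran in Firefox when this suffix was present;
      // other browsers skipped the script as an unknown type.
    </script>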

------
SubiculumCode
I'm ignorant of a lot here, but: segregation of cookies by browser tab. If I
log into Xsocialmedia in Tab 1, and go to a news site in Tab 2 that uses the
Xsocialmedia plugin, it shouldn't know that Tab 1 logged in, or that it came
from the same browser.

Basically, I want my tabs to be isolated and treated as completely separate
browsing histories, caches, and cookies. ...This is my Gmail tab; all that tab
ever sees is Gmail. This is my HN tab; all it ever sees is HN.

Like I said, this isn't my field, but..

~~~
cstrat
You can kind of do this with incognito mode. It would be a bit frustrating
because once you close the tabs everything is lost, and I'm not sure how
different browsers' incognito modes handle multiple tabs.

I do like your idea though.

~~~
mintplant
Firefox has an experimental (opt-in) feature called "Containers" for this:
[https://testpilot.firefox.com/experiments/containers](https://testpilot.firefox.com/experiments/containers)

You can create a container for each compartmentalized context (work, social
media, whatever) and then create tabs assigned to different containers. They're
visually distinguished as to which tab is "in" which container, but otherwise
you can manipulate and mingle them freely with the rest of your tabs. Each
container's persistent state is isolated from both the other containers and
your default browsing context.

~~~
cstrat
Oh wow, that is awesome. I've not used Firefox in years, but something like
that could be reason enough to give it another try.

------
nodesocket
"grandmother" usable email encryption for the masses.

------
payne92
Hard deprecation of the long tail of Javascript browser capabilities and
incompatibilities.

So much code and so many libraries are littered with "if (old version browser)
do x, else if IE, do y, else, ..."

~~~
frandroid
So IE should say "sorry, I suck, switch to another browser"? That doesn't make
any sense...

I mean, sites already tell users they don't support older browsers.

~~~
freehunter
What would really be nice is a spec that either you follow or you don't. If
you follow it, you follow all of it and it just works. If you don't,
JavaScript is completely broken. It'd be a good incentive to get browser
vendors on board.

~~~
will4274
This idea sounds great but it misses the reality on the ground - web
standardization is broken.

Compare C++ standardization. Most people are using two year old compilers.
While the newest compilers do implement draft features, these features are
almost all standardized in the next ~3 years. The standards are clearly
versioned and you can put newer compilers into modes to check against older
standards if your code needs to compile on multiple compilers. Features are
never removed.

By contrast, on the web, everybody is running brand spanking new browsers.
Experimental technologies are sometimes implemented before full draft
standards are even written. Many or most drafts that are written never become
standards. If they do become standards, it's often a decade or more away.
Meanwhile, the living standards of what browsers actually support aren't
versioned. It's not possible to put a browser into a "JavaScript 99" mode - the
easiest way to check your code for conformance is to automate your test cases
across a variety of browsers. And, features are sometimes removed because they
are deemed security or privacy concerns, so conforming to an old standard is
not sufficient to ensure proper functionality under modern browsers.

------
5ilv3r
I want a way to force sites to become static after they are rendered. Just
frozen, as though they were on paper. I am tired of scrolling making menu bars
move around or triggering popovers. Just give me a way to turn off javascript
and any dynamic CSS junk after X amount of time. I looked into writing this as
a firefox browser extension, but extensions now use javascript so we're all
screwed.

~~~
mabcat
I've been using this Kill Sticky bookmarklet, and it's turned out to be one of
life's simple pleasures. Gets rid of menu bars and popovers. It's the most-
used thing on my bookmarks toolbar by a factor of about eleventy babillion.
It's not automatic like an extension would be, but you'll find you get 80% of
what you want with a tiny bookmarklet.

[https://alisdair.mcdiarmid.org/kill-sticky-
headers/](https://alisdair.mcdiarmid.org/kill-sticky-headers/)

------
delbarital
It's very interesting that most people here mention changes only to the top
layers, while one of the most urgent problems is in the BGP protocol that
routes traffic between ISPs. Many times in recent years, governments and ISPs
have used it to hijack the traffic of entire countries, or to block websites.

------
exelius
I think this is only really worth the headache for security issues. That said:

\- HSTS

\- DNSSEC

\- IPv6

in that order. I think for a long time, governments had no interest in pushing
security and encryption because it would have prevented mass data collection.
I think minds are starting to change: poor security is much more likely to be
exploited against a government than used in its favor (plus real criminals
have much better opsec these days, so mass surveillance is much less
effective).

~~~
rmrfrmrf
I feel like I need more training for IPv6. For a long time, I've thought that
it was a simple thing to enable and allow (and often our servers are dual
stack). It turns out, though, that unless you really know what you're doing on
the server side (i.e. overriding the horrible defaults for IPv6 resource
allocation), you can end up with an inexplicably slow server that spits out
bizarre errors.

Anyone here have any recommendations on a book, course, etc. that covers IPv6
readiness?

~~~
topranks
More on the network side myself, and I thought I had v6 down, or at least the
basics.

What kind of resource allocation problems did you have on dual-stacked hosts?
Windows/Linux/Other??

~~~
rmrfrmrf
For example, in Debian 7, the route cache's max_size is dynamically sized for
IPv4 but hard-coded to 4096 for IPv6 (net.ipv6.route.max_size). I eventually
figured it out after searching for a while, but that's not something I'd
expect to have to do (having been used to IPv4 working decently out of the
box).

------
cwp
Add SRV lookups to the HTTP standard.

There's a tremendous amount of complexity and cost attached to the fact that
browsers look up the IP address of the hostname and then connect to port 80.

First, it's true that you can specify another port in the URL, but nobody does
that because it's ugly and hard to remember. If you want to be able to send
people to your website, you need to be able to _tell_ people what the url is -
"Just go to example.com". The minute you start saying "example.com colon,
eight zero eight zero" you're screwed. With a SRV record in DNS, example.com
could map to an arbitrary IP address and port, which would give us much more
flexibility in deploying web sites.
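
For reference, the record type already exists in DNS (RFC 2782); a
hypothetical zone entry directing web traffic for example.com to port 8080 on
another host might look like:

    _http._tcp.example.com. 3600 IN SRV 10 5 8080 web1.example.com.
    ; fields after SRV: priority 10, weight 5, port 8080, target host

Browsers just don't consult it for HTTP.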

If you want a bare [http://example.com](http://example.com) to work, you need
to create an apex record for the domain. That can't be a CNAME that maps to
another hostname, it _has_ to be an A record that maps to an IP address. This
means you can't put multiple websites on a single server with a single IP
address, you have to have an IP address for _each site_. IPv4 addresses are
already scarce, this just makes it worse.

Also, port 80 is a privileged port in unix (which does the lion's share of web
hosting). That means you have to run web servers as root. That, in turn,
defeats the unix security model, and requires hosting providers to either lock
down their servers and give limited access to users (cPanel anyone?) or give
customers root access to _virtualized_ operating systems, which imposes a
tremendous amount of overhead.

Virtual operating systems also impose a bunch of complexity at the networking
level, with a pool of IP addresses getting dynamically assigned to VMs as they
come and go, DNS changes (with all the TTL issues that go along with them),
switch configuration, etc.

These problems are all solvable and indeed solved, by really clever modern
technology. The point is that it's all _unnecessary_. If browsers did SRV
lookups, we could still be hosting like it's 1999, and putting all the
tremendous progress we've made in the last 20 years into making it cheaper,
faster, easier and more secure to build and run a web site. People that
support the "open web" as opposed to "just make a Facebook page" should
advocate for SRV support in HTTP.

This doesn't actually have to be "forced" on users of the web - it'd have to
be forced on browser implementors, hosting providers and web site operators.
If the transition was handled well, users wouldn't even notice.

~~~
tptacek
There are two issues with this: first, it's not necessary, and second, it
won't really work.

The first: it's true that only one (privileged) process can bind port 80 on a
host. But that process can simply do what most front-end webservers do now,
and reverse proxy to any number of other local hosts. Multiple sites on one IP
address can be demultiplexed through the Host header, the way they have been
for decades. That makes this a systems design problem, and not something that
needs to be exposed in the standards.
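
A minimal sketch of that pattern in nginx syntax (hostnames and ports are made
up):

    # One privileged listener on port 80 demultiplexes by Host header
    # and proxies to unprivileged backends bound to high ports.
    server {
        listen 80;
        server_name example.com;
        location / { proxy_pass http://127.0.0.1:8080; }
    }
    server {
        listen 80;
        server_name other-site.example;
        location / { proxy_pass http://127.0.0.1:8081; }
    }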

Second, even if you could transparently run websites on port 9999, that
wouldn't change the fact that a good number of networks filter everything but
ports 80 and 443. Universal network accessibility would still put ports 80/443
at a premium.

------
r1ch
Start cracking down on bloated and unnecessary JS. Loading more than 1 script?
More than X KBs of total JS? More than Y secs CPU time? "This page is slowing
down your PC".

~~~
ryan-allen
That'd be hilarious and tragic. Most of the internet would be flagged slow.

~~~
roryisok
And that would force devs to change them

~~~
ryan-allen
Devs aren't usually the ones in charge. Imagine trying to explain to an exec
that you need to remove the analytics code from their website to speed it up
for end users - not gonna happen!

------
SwellJoe
It's not security related, but: Accessibility.

~~~
jbob2000
Yep, browsers should have screen readers built in. It's ridiculous that you
have to shell out $1000+ for a JAWS license (there are alternatives, but they
need work).

~~~
SwellJoe
Yeah, I've tried using the free options to sort of test my products, but it's
not a realistic test; the most useful work comes from people who use screen
readers daily, but it's really hard to test if we've fixed the problem even
when it's been identified so we have to go back and forth.

And, I mean, there are accommodations and assistive technology built into the
standards. It's just nowhere near as widespread as it should be, in terms of
usage, and it's always an afterthought (if it is thought of at all) in
frameworks and HTML templates and such. And most of the time, an inaccessible
site or app is so inaccessible that it prevents anyone who would notice from
getting far enough in to complain. So we need good tools for knowing when our
stuff is broken from an accessibility standpoint.

I think what I'm getting at is that it should be easy to see errors in
accessibility, and maybe search engines should favor sites that at least make
an effort.

~~~
extra88
The problem with testing with assistive technology yourself is you're not a
real user who knows the tool well. Also, like browser testing, there are
differences between different AT; VoiceOver is used by many visually impaired
people but what it supports and how it works is different in a number of ways
compared to JAWS or NVDA (e.g. VoiceOver doesn't have "form" and "browse"
modes).

Search engines giving more accessible sites a "buff", as Google now does for
HTTPS sites, is a good idea. They already do in a small way, in that things
like proper headings are good for SEO, but they could go much further.

Google has their own Accessibility Developer Tools [0] add-on, they could make
it a default part of Chrome Dev Tools and make it more prominent.

[https://chrome.google.com/webstore/detail/accessibility-
deve...](https://chrome.google.com/webstore/detail/accessibility-
developer-t/fpkknkljclfencbdbgkenhalefipecmb?hl=en)

------
lognasurn
Now that adoption of HTTPS has solved all SQL injection holes, we can take
steps to further modernize the Web so people can feel secure.

Require Facebook login for everything. Just don't serve the content without a
Facebook login. Can use DPI at the network layer to help enforce.

Add phone-home features to CPUs to make them turn off 6 months after product
introduction. Everyone ought to be buying a new computer every 6 months.

Disallow email addresses ending in anything other than @gmail.com.

Rewrite everything in a memory-safe language such as PHP. Eventually this can
be enforced at the OS level.

~~~
lerax
Seriously? This is absurd.

------
pfraze
A peer-to-peer hosting protocol that publishes user data outside of site
silos while still "feeling" like a web app. Bonus feature: end-to-end
encryption.

------
c-smile
Adding support for the Internet Message Body Format (a.k.a. MIME) to browsers
[1].

MIME is a format that can contain HTML/CSS/script/images/etc. in a single file
(or stream).

Thus a whole web application can be served as a single stream by the server.

And emails (which are MIME files) could be opened by browsers as natively
supported documents.

[1] MIME :
[https://tools.ietf.org/html/rfc2045](https://tools.ietf.org/html/rfc2045)

~~~
spankalee
You might be interested in Web Packaging:
[https://github.com/WICG/webpackage](https://github.com/WICG/webpackage)

~~~
c-smile
Not clear why we need this new format. MIME has been around for ages. If you
need some metadata you can simply add a header section with application/json
or application/manifest+json or whatever...

------
tjoff
An obtrusive prompt (UAC equivalent) required to load _any_ JavaScript. The
web would be so much more functional, to the point, and responsive. Just
imagine the electricity savings.

The world truly would be a better place.

~~~
Viper007Bond
I couldn't disagree more but there's plenty of browser add-ons that allow you
to do this.

~~~
tjoff
The whole point would be that it's mandatory, so that websites would have to
do without JavaScript unless they really couldn't be usable without it.

No one will stop using JavaScript based on what I do locally.

------
scottmsul
This is a moonshot, but I would love to see a social network based on
protocols similar to how emails work. Then different websites could implement
interfaces for the protocol and talk to each other.

------
lousken
A working version without JavaScript (unless it's crucial for the website). No
opacity-0 animations, JavaScript-only menus, etc.

~~~
roryisok
This should get more votes. It should be an internet requirement that a
website be functional and readable without JS.

I don't know how this could be implemented though.

~~~
lousken
Just include a <noscript> block with a style tag containing those fixes
(regarding opacity); menus without JS are not a problem in new browsers, CSS3
is powerful enough. There's even a repo with common elements done without a
line of JS - see [https://github.com/you-dont-need/You-Dont-Need-
JavaScript](https://github.com/you-dont-need/You-Dont-Need-JavaScript)
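
A minimal sketch of the pattern (the class name is made up): the element
starts hidden for a JS fade-in, and the noscript style undoes that when
scripting is off.

    <style>.fade-in { opacity: 0; transition: opacity 0.3s; }</style>
    <noscript>
      <!-- No JS will ever set opacity to 1, so force it here. -->
      <style>.fade-in { opacity: 1; }</style>
    </noscript>
    <p class="fade-in">Still readable without JavaScript.</p>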

------
avaer
ML-driven content blocking for ads and other garbage such as social widgets
and beacons. A red "deceptive site" warning screen on any site that tries to
hack its way around the filter.
its way around the filter.

~~~
chiefofgxbxl
Interesting concept... one that Google certainly wouldn't implement in Chrome
because of their ad-based revenue model.

But I could entirely see a little privacy "eye" icon in the URL bar of
Firefox, similar to the padlock icon we have now for HTTPS/certificates.

The eye could turn red and display text for the site you are visiting based on
analytics, beacons, web bugs, and so on.

Or how about having major social media sites' icons placed in the URL bar when
their trackers / social media widgets are on the current page. This way, it is
made explicitly clear to the user that "_{Insert social platform here} is
tracking you on this page, even while you are logged out, don't have an
account, ..._"

The difficulty with having the browsers force the standard is getting Google
Chrome on board, since they have so many users.

------
momania
\- A decent minimum password length, without any funky requirements, just the
minimal length.

\- Being able to prosecute any company that stores passwords in plain text

~~~
kerkeslager
I think the only way this can happen is zero-knowledge password proofs, i.e.
browsers implement a mechanism by which password fields submit a proof that
the user has the password, rather than submitting the password. This way the
server can only verify the password if they've implemented the proof system
correctly, and they can't leak the password because they've never had it.

The basic idea is, the server gives a unique nonce with the password form. The
user enters their password. On form submit, the browser stretches the key
space of the password using a slow hash, then uses the digest to generate an
asymmetric key via a referentially transparent algorithm (no random salts).
Then the browser prepends the URL (obtained from HTTPS) to the given nonce (to
prevent man in the middle attacks). The browser then checks to see if it has
seen this nonce before and displays an error if it has (to prevent replay
attacks--this forces servers to generate new nonces, although the browser
can't force them to verify that the nonce that is signed later is the same one
they sent). Finally the browser uses the key to sign the nonce, and sends the
signature to the server. The server uses the public key (which was generated
in the same way and given to the server at sign-up) to verify that the user
has the password.
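
A rough sketch of the client-side flow, to make it concrete (illustrative
only: the choice of scrypt and Ed25519, and all names here, are my
assumptions, not part of any standard):

    # Derive a deterministic keypair from the password via a slow KDF,
    # then sign URL||nonce as the login proof. Uses pyca/cryptography.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def derive_key(password: str, site_url: str) -> Ed25519PrivateKey:
        # Referentially transparent: same password + site -> same key.
        seed = hashlib.scrypt(password.encode(), salt=site_url.encode(),
                              n=2**14, r=8, p=1, dklen=32)
        return Ed25519PrivateKey.from_private_bytes(seed)

    def sign_login(password: str, site_url: str, nonce: bytes) -> bytes:
        # Prepending the URL binds the proof to this site, so a phishing
        # page can't relay it to the real one.
        return derive_key(password, site_url).sign(site_url.encode() + nonce)

At sign-up the browser would send derive_key(...).public_key() instead of the
password; at login the server verifies the signature against that stored key.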

~~~
joegosse
I love the idea of zero-knowledge password proofs. Others can chime in on the
approach you've proposed, but I have a more practical concern about developing
critical mass.

How do you break through the chicken and egg problem of not enough users using
or not enough browsers supporting this capability?

~~~
kerkeslager
If it's a field on inputs of type password, all you'd get is something like:

<input type='password' password-nonce='42'></input>

Browsers that support the password-nonce argument sign as I described.
Browsers that don't support it pass through the password and the server
performs the ZKPP key generation (this is no worse than the current system of
hashing passwords). So servers can implement this immediately without worrying
about breaking in non-supporting browsers.

After adoption by a few major sites, browsers can add a warning that the
server didn't send a password nonce and the password will be passed to the
server so the user has to click "Okay" before it gets submitted. This can be
escalated to more severe messages to pressure more sites to comply.

------
deepsun
IPv6?

------
dredmorbius
Standardise on a set of basic document types. Index page, article, gallery,
search/results. Others as necessary. Specify standard elements and styling.

Standardised metadata. Pages should have title, author, publication date,
modification date, publisher, at a minimum. Some form of integrity check
(hash, checksum, tuple-based constructs, ...).

User-specified page styling. If I can load a page in Reader Mode,
[https://outline.com](https://outline.com), or Pocket, I will (generally in
that order). Every page having some stupid different layout or styling is a
bug, not a feature. Web design isn't the solution, Web design is the problem.
Users could specify their default / preferred styling. Night mode, reader
support, etc., as standards.

Fix authentication. PKI or a 2FA based on a worn identification element (NFC
in a signet ring with on-device sensor is my personal preference), if at all
possible. One-time / anon / throwaway support.

Reputation associated with publishers and authors. Automated deprecation of
shitposting users, authors, sites, companies.

Discussion threads as a fundamental HTML concept.

Dynamic tables: sort, filter, format, and collapse fields in the client.
Charting/plotting data would be another plus.

Native formula support.

Persistent local caching. Search support.

Replace tabs with something that works, and supports tasks / projects /
workflows. (Tree-style tabs is a concept which leans this way, though only
partially).

Fix-on-receipt. Lock pages down so that they are no longer dynamic and can
simply be referred to as documents. Save to local storage and recall from
there to minimise browser memory and CPU load.

Export all A/V management to an independent queueing and playback system.

------
alain_gilbert
For me, I would go with:

\- Typed JavaScript (TypeScript) built into browsers.

TypeScript is great, but all the configuration and transpiling is a pain.

~~~
c-smile
Why exactly typescript?

Typescript should be compileable into WebAssembly bytecodes and that's it.

If you want it to be transparently compileable, like foo.ts to be sent as WA
bytecodes then something like mod-typescript (on the fly typescript compiler)
can be designed that will send (compiled and cached) bytecodes.

------
serial_crusher
A secure standard for ads. Right now reputable sites are running ads from
people they shouldn't trust, and getting bit in the ass by it. Popups, page
takeovers, even viruses get distributed through ad networks and end up on non-
malicious sites.

None of that should work. Ads shouldn't be able to inject their own JavaScript
into a page. There's a technical solution to that problem.

Let's narrow down the scope of things an ad needs to do (display an image,
maybe play sounds and videos after the user clicks on them, send back a
reasonable amount of tracking data, etc.). Then let's come up with a sandboxed
DSL for ad networks to specify their ads. Websites could embed those ads
inside an <ad> tag that sandboxes the content and makes sure only supported
functionality is being used.

Then I can turn off my ad blocker and not have to worry about all the security
issues that unscrupulous ad providers bring with them today.

------
linopolus
A ban on everything JS except for these so-called web apps, which obviously
need it. Make the internet great (performant/efficient/secure) again!

~~~
netcraft
Forgive me, but I just don't understand this sentiment at all. I understand
your general frustration with over-engineered websites - but is it not your
choice to visit those websites? Do you not also have the ability to block
JavaScript, just like the scourge of Flash websites before it? We aren't
talking about vulnerabilities here though; you're just saying that there are
websites out there that could do with less (or no) JavaScript but aren't, so
you want to "force" them to?

Let me ask it a different way - do you have any reasonable expectation that
your proposal will ever be accepted? That browser manufacturers will implement
things to limit or block the functionality of js? Where would the line be
drawn (and who would draw it) between the so-called web apps and everything
else that isn't worthy?

There already are some mechanisms in place to disincentivize misbehaving
websites, such as Google rankings. But that's a far cry from a browser not
supporting them, or displaying some warning when viewing one of those sites.

Maybe I'm missing something - that there are these required sites that are
misbehaving and we need some regulatory power to rein them in.

~~~
na85
> Let me ask it a different way - do you have any reasonable expectation that
> your proposal will ever be accepted?

There are hundreds of pie in the sky suggestions being floated here, and the
one about javascript is the one you choose to attack with this argument?

JavaScript has unequivocally made the web worse for everyone but advertisers
and perhaps the people that run CDNs.

Why, of all the proposals here, are you trying to shit on this one in
particular?

Honest question.

~~~
netcraft
> JavaScript has unequivocally made the web worse for everyone but advertisers
> and perhaps the people that run CDNs.

Do you really think this is defensible? That the web would be as popular or
useful as it is today without the ability to run code in the browser? I'm
curious whether you think a majority of people agree with this.

> Why, of all the proposals here, are you trying to shit on this one on
> particular?

I'm not shitting on anyone - I'm trying to have an honest discussion about why
you and the OP feel that JavaScript is such a scourge that it needs to be
regulated. Not one person has addressed even one of my questions, you
included. I'm sorry you're taking my challenge as hostility - it's not
intended that way.

I submit that it's possible I'm missing something - perhaps there are
situations out there that I don't have to deal with. I'm asking for an honest
viewpoint that I can try to understand.

> ... pie in the sky ...

There's a difference between "here's something that's easily accepted as a
good idea but might be difficult to implement" and this. I'm asking for an
explanation of the premise itself.

~~~
na85
Plenty of popular web applications work(ed) without JavaScript. I'm thinking
here of things like Gmail.

The only thing that I can think of that absolutely requires JavaScript is
advertising and tracking.

Anything else better serves the user as a desktop or native app.

------
seltzered_
First thought when reading the headline: Backlinks.

Jaron Lanier explains...
[https://www.youtube.com/watch?v=bpdDtK5bVKk&feature=youtu.be...](https://www.youtube.com/watch?v=bpdDtK5bVKk&feature=youtu.be&t=15m45s)

------
modeless
FIDO U2F hardware authentication token for 2 factor login. Simultaneously
easier and more secure than other 2 factor methods. But first someone needs to
make a <$5 hardware token so people might actually consider buying one.

~~~
_asummers
YubiKeys are $18. Not $5 but that's in the neighborhood, and they're
relatively new. Prices will come down.

~~~
modeless
$18 is not in the neighborhood of $5. Also, I doubt the keychain type can
achieve mass adoption; it's the in-USB-port kind that is actually convenient,
and those cost $50.

------
Giorgi
SSL was forced by Google single-handedly. Developers, scared that HTTPS might
become a ranking factor, quickly moved to SSL.

As for the topic, I would like to see all mail clients render emails the same
goddamn way.

~~~
freehunter
And handling replies the same way. Far too often I see someone using IBM Notes
send an email to someone using Outlook and when it gets to me the sender says
"review the email chain below" and every damn line has another damn angle
bracket. Not sure which client is adding it all in, but it makes it
unreadable.

>hello

>>my name is bill

>>>i'd like to have a meeting

>>>>please provide your availability

~~~
theandrewbailey
>>>>Even worse, is when you have a really long line followed by a

>short

>>>>line that got moved because a word exceeded some unknown column

>boundary.

>>>>This isn't cool anymore, and anyone that implements it should be

>shot.

------
vfulco
2FA everywhere, preferably with YubiKey (no affiliation, just a happy user).

~~~
m-j-fox
Side question: Do you use your Yubikeys for GPG? I tried to make two identical
keys with the same GPG key, but still I get "Please remove the current card
and insert the one with serial number: ..." if I try to decrypt a file with
the other key. I asked the internet a couple times but no one seems to know.

~~~
snarf21
This is the most important thing. It solves so many other inconveniences of
the net. Your username should be your fingerprint. Your password should be 8
characters. You should have a 2FA fob that secures your account. So much less
fraud and fewer other attacks. I think this has the highest ROI.

------
rdsubhas
A W3C standard for bloat-free websites, i.e. a vendor-neutral equivalent of
Google AMP and Facebook Instant Articles, to avoid further fragmenting the
web.

If it's an open standard, mobile views and other features can be progressively
added to websites in a variety of ways: built into browsers, polyfills, or
open-source libraries - leading to a much better web experience across
devices. Aggregator startups and apps would stand to benefit a lot from this.

------
kuschku
Getting rid of passwords. Passwords are the easiest way for others to get
access to your accounts.

A move to federated identity, with a standardized API, and integration with
the browsers, would fix all these issues. You could easily use a federated
identity provider with support for 2FA, and ALL your accounts would
immediately work with 2FA.

And, with federated identity, you can also run your own, if you don’t trust
Google or Facebook login.

~~~
curun1r
> And, with federated identity, you can also run your own, if you don’t trust
> Google or Facebook login.

Sounds like OpenID...which was kind of a train wreck.

One middle ground could be tighter integration between browsers, sites and
password managers. With the right specification, sites could offer a "register
with [1Password,KeePass,LastPass,etc]" button which would open the password
manager and pre-fill all the fields from a password manager's identity record
(i.e. if a user has multiple identity records, the password manager can prompt
for the one to use.) If there's a need to choose a username, there should be a
standard endpoint for checking uniqueness. Once all information is filled out,
the password manager can generate a password based on the site's
specifications, post the information back to the site, store the account
details and the site's ToS in its database and update the browser page to the
success url. There can also be a standard endpoint for password rotation so
the password manager can periodically update the password without the user's
involvement.

This would still use passwords and for those that explicitly don't want to use
a password manager, they could continue doing things the current way. But for
the majority of us, it would be just like what you're talking about except
that our identities would be stored locally in our password managers, giving
us complete control over everything while automating as much as possible.

~~~
kuschku
That’s another one of these half-assed solutions that the world has too many
of, just like credit cards or using SSN as auth.

No.

We’ve solved all these issues before, OpenID was a good solution, and OpenID
Connect – a complete rewrite – can be used to replace it, and is used already
for Google and Facebook login.

Just use OIDC, on every page, and allow users to choose an identity provider.
Problem solved.
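
For what it's worth, the plumbing for "choose an identity provider" already
exists: every OIDC provider publishes a standard discovery document, so a site
can bootstrap from nothing but the issuer URL (Google's real endpoint shown):

    curl -s https://accounts.google.com/.well-known/openid-configuration
    # JSON listing the authorization, token, and userinfo endpoints,
    # supported scopes, and the signing-key URI (jwks_uri).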

~~~
curun1r
> OpenID was a good solution

Ugh...no. Just no. If you believe that, I'm not sure there's anything I can
say that will get through to you...we're just going to have diametrically
opposed opinions. But let me tell you that I've implemented it quite a few
times on various sites and it's been a nightmare every time, so it's not like
my 'half-assed' approach was ignorant of it being an option. It was based on
specifically discarding that option as poorly conceived, poorly implemented
and simply not being an option in many cases. Banks, for example, should never
let a 3rd party IDP be part of their authentication process. And if your
solution is inapplicable in situations where tight security is required,
you've got to ask yourself whether you're actually solving the problem or just
requiring users to be aware of two different ways of logging in instead of the
one that they currently understand. Even if they do have to remember different
passwords for most sites, most people can deal with username/password
conceptually, in a way they can't with a third-party IDP.

~~~
kuschku
Banks should only allow auth via EMV chip-and-PIN, as all German banks
already do.

For everyone else, OIDC is more than good enough.

Username and password is less secure than OIDC in every situation.

------
irundebian
Strict HTML, CSS and JavaScript parsing. One single error => Site won't be
displayed. These lazy web devs need some more discipline!

~~~
paulrosenzweig
I agree with this, but feel like there might be issues I'm not thinking of.
Can anyone shed some light on why browsers are so tolerant today and why that
might be a good thing?

~~~
danudey
The Robustness Principle states "Be conservative in what you do, be liberal in
what you accept from others."

Following that maxim, browser developers assumed that, even if HTML wasn't
inherently correct, if they could figure out what the user logically meant
then assuming that was better than not working at all.

In short, people wrote garbage HTML and it proved easier to fix browsers than
people. At first, it wasn't too problematic, but as HTML got more complex more
problems surfaced and now everything is a mess.

This was the goal of XHTML: HTML that was required to validate as XML or it
wouldn't work at all, and some browsers were, indeed, strict at this. The idea
was that you'd only use XHTML if you were generating it with an XML parser or
some other template generator that could produce valid code. In reality, that
just meant that browsers that didn't understand XHTML treated it like HTML and
worked, and browsers that did understand XHTML _and_ validated it would show
errors. Thus, users saw that browser X (doing the right thing) couldn't
display a site, but browser Y (doing the wrong thing) could.

------
bmh_ca
Automated HSTS, revocable public key pins, and certificate transparency.
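
The first two already exist as plain response headers; a sketch (the pin
values are placeholders):

    Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
    Public-Key-Pins: pin-sha256="PRIMARY_SPKI_HASH="; pin-sha256="BACKUP_SPKI_HASH="; max-age=5184000

"Automated" would presumably mean browsers or servers applying these without
each site operator opting in by hand.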

~~~
sofaofthedamned
Agreed, but I'd add IPv6.

~~~
dane-pgp
Agreed, but to those 4 things I'd add DANE TLSA (RFC 6698) and Certification
Authority Authorization (CAA) (RFC 6844) as further lines of defence against
rogue CAs.
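
Both are expressed as ordinary DNS records; hypothetical entries (the TLSA
hash is a placeholder):

    example.com.           IN CAA  0 issue "letsencrypt.org"
    _443._tcp.example.com. IN TLSA 3 1 1 PLACEHOLDER_SHA256_OF_SPKI
    ; TLSA fields: usage 3 (DANE-EE), selector 1 (SPKI), matching type 1 (SHA-256)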

------
grumblestumble
Evergreen web browsers. Safari and IE11 continue to ruin my life.

~~~
dredmorbius
What do you mean by this?

~~~
grumblestumble
"evergreen" means self-updating, with a rapid release cycle. Chrome, Edge, and
Firefox follow this model. IE (essentially a legacy browser at this point) and
Safari don't.

~~~
dredmorbius
Thanks.

This works in some instances.

Not so much in others.

------
joshfraser
Ending the NSA dragnet

------
_greim_
Only half joking. Team up and find a way to force Apple to allow competing
rendering engines on iOS.

------
hedora
Decentralization.

------
snomad
I would love package systems and server admins to recognize the inherent
danger in allowing servers to call out to the wild. All internet-facing
servers should only be allowed to call out to whitelisted addresses.

~~~
effie
Do you mean blocking all IPs except those whitelisted? Obtaining the right
whitelist seems like a time-consuming task. If you control your servers and
trust their software, why would you cripple their operation?

------
O1111OOO
A disclaimer on all sites about data collection (on sites that collect data):
precisely what data is collected, a list of the 3rd parties the data is sent
to, the policies of those 3rd-party sites, how long the data is held at the
primary domain, how long the data is held at the 3rd-party sites, and options
for requesting that such data be deleted.

Sites that act as a conduit for the collection and transmission of user data
should be held accountable for breaches of such data.

------
Spivak
Dropping TLS in favor of IPSec. Now every protocol is transparently secure by
default and there's no chance of developers accidentally messing it up.

~~~
swiley
IPSec is beautiful and definitely was the correct way to handle encryption.

IKE and ISAKMP were the problem; that stuff is an absolute nightmare. Maybe
with IKEv2 it might get better...

~~~
effie
> _IPSec is beautiful._

Could you elaborate?

------
tomc1985
\- Make client-side certificate authentication mainstream. Fix the UI/UX.

\- Standardize on some sort of biometric identification that actually works. I
HATE two-factor :(

~~~
conductor
1\. Client-side certificate usage has privacy implications -
[https://github.com/tumi8/cca-privacy](https://github.com/tumi8/cca-privacy)

2\. Is _biometric_ really necessary? U2F tokens already exist and are
standardized (maybe not officially, I'm not sure). Chrome and Opera already
support them, and Mozilla's support must be coming soon (meanwhile you can use
an add-on).

~~~
tomc1985
I am sick of the actual motions of authenticating, and many of the 2-factor
implementations out there today are terrible. (SMS, really? What happens when
your phone is stolen? How do you protect against an angry lover? What a joke)

U2F dongles aren't much better.

Also, a quick glance at that link seems to indicate the attacker needs some
sort of MITM access? Is it anything more than a replay attack?

------
mattl
A simple way to block third party trackers/beacons that's on by default, with
a simple one-click to disable it on that page load.

------
davimack
A truly obfuscatory browser: one in which everything sent to the server looked
the same, regardless of which user, region, etc.

~~~
swiley
curl -H "" -o stuff.html && elinks stuff.html

I've been looking for a site I can run this on over TOR at random times for
reading news but I haven't found one.

~~~
dredmorbius
Can elinks read from stdin?

~~~
swiley
links2 can, and both of them have --dump.

The nice thing about doing it like this is that the file persists on disk so
viewing a page is decoupled from fetching it (which is nice for a lot of
reasons.)

~~~
dredmorbius
Yeah, I may have done that once or twice myself.

[https://ello.co/dredmorbius/post/naya9wqdemiovuvwvoyquq](https://ello.co/dredmorbius/post/naya9wqdemiovuvwvoyquq)

Why browsers don't dump to disk in a format that makes for easy rendering ...
I don't know.

------
joelcollinsdc
Accessibility. It should be hard or impossible to build an inaccessible
website. Tooling needs to be vastly improved.

------
kyledrake
The ability to mark an HTTPS site as "not secure" using HTTP headers if it
asks for things like logins and passwords.

This would be useful to free static-HTML web hosts and CDNs for combating
phishing.

It could be something put in CSP.

~~~
krallja
Is that different from CSP form-action?

~~~
kyledrake
Yes. This wouldn't prevent a form from working; it would warn the user that
this site shouldn't be asking for a password and may be trying to do a bad
thing, instead of just showing a green bar and a "security lock" with the word
"secure" on it.

------
supertramp_sid
Informing the user about trackers being used on a website (I know there are
add-ons available). There should be a mode or something that informs the user,
so they can close the website and look for alternatives.

------
api_or_ipa
I'd like to see form validation get a badly needed overhaul. At the same time,
we can punish sites that use shitty/inappropriate practices. This would vastly
improve mobile experience especially.

------
gwbas1c
Fewer features... Instead, figure out how to force good usability.

------
molsson
\- SameSite cookies

\- CSP
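
Both are set with ordinary response headers; a hypothetical example (cookie
name and policy values are made up):

    Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax
    Content-Security-Policy: default-src 'self'; script-src 'self'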

------
martin-adams
For me it would be cookie consent implemented at the browser level, with sites
able to describe their policy via a hosted policy file.

------
flukus
I want a meta tag to enable the browser's reader mode. Then I could just
render HTML and the browser would display it as the user prefers.

------
betaby
IPv6, DNSSEC, P2P DNS, and rootless DNS.

------
ScalaFan
Default encrypted email communications

~~~
swiley
That's not part of the web.

~~~
vortico
Even if you use a desktop client, email is part of the world wide web.

~~~
StavrosK
How do you figure that? If you're using a desktop client, it seems pretty
exactly _not_ part of the web.

~~~
vortico
It seems I was confused about the definition of the web - I was treating it as
synonymous with the Internet. So never mind, email is not part of the web.

------
tardo99
A simple micropayments scheme that can be used on publications, music sites,
whatever.

------
cody8295
HTTPGP: forced PGP encryption between client and server. Would be pretty cool.

------
monk_e_boy
Pay to turn off ads. A certain percentage of visitors are asked to rate the
content (to avoid paying); the rest are automatically billed and pay the
average rating.

Each user can specify a maximum payment and can opt to view with ads if
payment requested is too much.

~~~
wwwhatcrack
Ever try ad-block? It's actually free.

~~~
monk_e_boy
How do you pay the people who create content?

~~~
wwwhatcrack
Not through ads.

------
knocte
HTTP/2 instead of the horrid hacks that are WebSockets and long polling.

------
keymone
Hashcash and convenient end-to-end crypto everywhere.

------
jasonkostempski
User prompts for JavaScript from other domains.

------
anotherevan
Civility.

------
spuz
IPFS

------
acosmism
A distributed, built-in password manager (a passwordless web).

------
zmix
X(HT)ML

------
chriswarbo
Make the Web standards more fundamental, so they barely need to change.
Implement the rest in terms of those fundamentals.

Some thoughts:

\- HTML and CSS are reasonable from an implementation standpoint: they have
pretty rigid syntax (annotated tree of text, groups of name/value pairs) so
user agents can ignore whatever they don't know/care about, and give
reasonable results. Even if that's just a wall of plain text.

\- Javascript is awful in this regard. It has masses of syntax, keeps
changing, requires incredibly specific behaviour from a truckload of APIs and
likes to silently bail out completely if one thing goes wrong.

Our notions of computation don't change all that much, and certainly not
quickly. There's no reason to make every user agent understand all of the
human-friendly bells and whistles that the standards bodies keep bolting on.
Whilst "view source" is nice, these days we often need tools to undo
minification and obfuscation; let alone the rise in compile-to-JS languages.

The standards should only dictate something that won't need to be changed for
a long time; say, a pure, untyped, call-by-value lambda calculus, with
literals for integers, strings and symbols. APIs can be defined as reduction
rules involving the symbols; for example:

\- Applications of the form '((+ x) y)', when x and y are integers, can be
replaced by the sum of x and y.

\- Applications of the form '(array 0)', can be replaced by an empty array
value (defined elsewhere); applications of the form '((array 1) x)' can be
replaced by an array value containing the single element x, etc.

\- Applications of the form '((object 1) (keyvalue x y))' can be replaced by
an object value, with the value y for property x, etc.

\- Applications of the form '(XMLHttpRequest x)', where x is an object value
with properties...
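
To make that concrete, here's a tiny illustrative interpreter (Python, all
names made up) showing the key property: expressions with no matching rule
simply remain as data.

    def reduce_term(term):
        # Reduce subterms bottom-up; unknown applications are left as-is.
        if not isinstance(term, tuple):
            return term  # literals (integers, strings, symbols) are values
        term = tuple(reduce_term(t) for t in term)
        # Rule: ((+ x) y) -> x + y, when x and y are integers.
        if (len(term) == 2 and isinstance(term[1], int)
                and isinstance(term[0], tuple) and len(term[0]) == 2
                and term[0][0] == '+' and isinstance(term[0][1], int)):
            return term[0][1] + term[1]
        return term  # no rule matched: the expression just sits there

    print(reduce_term((('+', 1), 2)))       # 3
    print(reduce_term(('frobnicate', 42)))  # ('frobnicate', 42), unevaluated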

Executing such programs would, like with HTML and CSS, allow implementations
to ignore whatever they don't know/care about. Expressions with no
corresponding reduction rule just sit there unevaluated, whilst everything
around them carries on as normal. Users could implement their own overrides
for how things should rewrite; like user styles, but more pervasive. Sites
could supply pure reduction rules as part of their code, to enable things like
fancy control flow. Effectful reduction rules could be controlled at a fine-
grained level by the user agent (and hence, the user). Programmers can write
in whatever language they like and compile to this simple Web language. Since
we're being ambitious, let's say they'll include links to the original source,
under the AGPL ;)

Fancy, state-of-the-art browsers can come with a bunch of optimisations and
tricks for faster parsing and evaluation of common code patterns. They can
also define their own libraries of symbols and rules, which are more amenable
to optimisation (like asm.js); along with fallback "polyfills" which make them
work (slowly) everywhere else.

We can probably do similar things for rendering, layout, etc. The clever,
complicated algorithms dictated by the standards can be great when we've got a
bunch of content and we'd like the user agent to display it in a reasonable
way. On the other hand, if we've got some exact output in mind, we should be
able to describe it directly, rather than second-guessing and working around
those algorithms. All of this can go into libraries, leaving the "core" alone.

There's always the danger of turning the Web into the equivalent of obfuscated
PostScript: a blob of software that, when executed, renders an image of the
text, etc. content. However, I think that's mostly down to the choice of what
APIs are included by default. If the default behaviour is similar to today's
browsers: take text from the document and lay it out in a readable way; allow
headers, emphasis, etc. using annotations, and so on, then I'm sure most would
do that, ensuring the text and other content is easily parsed, indexed, etc.

------
interdrift
Destroy fake news (no pun intended).

------
mathiasben
VRML

------
frik
Dear overlord, stop this shit. Don't force your agenda BS on any web user.

Amazon.com worked fine from 1995 to 2016 with HTTP (only the login page was
HTTPS).

If you have a crappy ISP like Verizon or whatever, that's your own personal
problem - 99% of web users don't care about your problem. Maybe use a VPN to
an ISP you can trust.

I stopped using Firefox because they turned mad. Chromium with some custom
patches seems like a far better solution nowadays. Yet I see Google, too,
trying to destroy the open web with its PWA/AMP monoculture that is favored
and listed at the top of search results.

We need the EFF and other "good" foundations to lobby for the end user - too
many shady and corporate entities lobby against the end user, unfortunately.

~~~
schoen
I work for EFF and we have actively lobbied for sites, including Amazon, to
adopt HTTPS by default for years. Even if you have some way to verify and be
confident in your ISP's privacy practices, your data is going to pass through
a lot of different ISPs' facilities, and you're not going to have any control
of that.

------
phkahler
Fixed IP addresses for everyone. Also addresses that encode LAT/LONG. These
are intended to increase traceability.

~~~
DivisionSol
What? Why.

------
tammuz18
Is this some reverse-psychology attempt to point out how fascist it feels when
the browser thinks it's smarter than you and starts making its own decisions?
