This is exactly my problem with Chrome/Chromium right now. The Chrome team makes assumptions about how things "should" be (in a highly subjective way) and breaks the web.
Another example: they decided to ignore the value of `autocomplete` attributes on `<form>` tags, because:
> The tricky part here is that somewhere along the journey of the web autocomplete=off become a default for many form fields, without any real thought being given as to whether or not that was good for users. This doesn't mean there aren't very valid cases where you don't want the browser autofilling data (e.g. on CRM systems), but by and large, we see those as the minority cases. And as a result, we started ignoring autocomplete=off for Chrome Autofill data.
Problem: Chrome now auto-fills the wrong parts of forms with usernames/passwords, and this breaks forms that receive unexpected data when submitted. And now they've opened an issue on their tracker to track "Valid use cases for autocomplete=off".
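To make the breakage concrete, here's a hypothetical sketch of the kind of admin form that gets corrupted (the URL and field names are invented for illustration); the site explicitly opts out of autofill, and Chrome ignores it:

```html
<!-- Hypothetical CRM form for editing ANOTHER user's account.
     The site opts out of autofill, but Chrome ignores autocomplete="off"
     and may inject the administrator's own saved username/password. -->
<form action="/admin/update-user" method="post" autocomplete="off">
  <input type="text"     name="customer_login"    autocomplete="off">
  <input type="password" name="customer_password" autocomplete="off">
  <button type="submit">Save</button>
</form>
```

On submit, the server then receives the admin's saved credentials in fields meant for the customer's data, which is exactly the unexpected-data breakage described above.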
It's insane to think that the developer is wrong to use certain attribute values, and to assume how a page should behave while ignoring the devs' intentions and Web standards.
I'm all for empowering browsers to override abusive behaviour from websites, but using bad defaults and breaking innocent websites as a result is not the solution.
The browser is the agent of the user.
Come work tech support for a company/product with a web-based form that has a password field in it (like a CRM or other administrative system). Now explain to users why we can't stop their browser filling in their password in the field that's asking for the other user's password.
I've had situations where I was configuring a VPN connection in the web interface for a router... then spent 10 minutes scratching my head over why the hell it was giving me an "invalid PSK" message. Oh, because Chrome filled my password into the goddamn wrong field when it wasn't asked to.
Of course that would be a terrible idea for a number of other reasons, but if enough people specifically avoid more recent versions of Chrome for this reason, maybe Google will finally do what's actually best for users and make this behaviour configurable.
No, you can't do that.
Instead you have to tell them to stop saving passwords in the browser.
What? Why would the other user ever provide their password to this user?
So the helpful support person can now go "Your password is now 'foobar' and you will be asked to pick a new one when you login."
A one-time password is nothing but a token - maybe not ideally named.
The employee should of course change their password, and that should be enforced by policy; the admin shouldn't know user passwords.
Tokens work too, but it's a bit of an overhead, especially in a smaller SMB where the admin is probably just across the corridor, or at least in the same building.
If it's something like a password, it should be treated like a password. Why shouldn't you use type=password for that?
>The browser is the agent of the user.
Here I support you, but do you remember that Googlers pushed assertively for hijackable right-click, history spoofing with no opt-out, and many, many, many other supremely user-hostile features in W3C standards?
Out of all the broken web novelties, they prefer to unilaterally break the ones coming from outside:
The declarative right-click menu from Mozilla was killed by Chrome.
Declarative SMIL animations: same story.
They added support for HTTP pipelining and then killed it, just to force people to migrate to their own protocol.
They had working drag and drop on mobile, yet they killed it because "the iPhone's DnD is broken, so we made ours broken too".
They banned a few people from Bugzilla for insisting on turning off the vibration API in mobile Chrome, yet the moment people started using it for bot detection, they killed it in iframes (denying its use as hard proof of click fraud for ad companies).
I always told clients that if you have to choose between having feature A broken on the iPhone or having it broken in every other browser, choose to break it on the iPhone. Now I don't know what the right thing to say is; probably something like "you need to settle on a variant where stuff is broken in a consistent manner across all browsers".
In principle, I agree with you. But in reality, the iPhone market is very valuable.
If it was protecting your interests, it would be a toggleable setting that defaults to normalized behavior.
But you can always install (or develop) another browser that might do this job better for you.
> If it was protecting your interests, it would be a toggleable setting that defaults to normalized behavior.
The problem with this view is that the vast majority of end users do not want, and in fact will never know about or use, a new setting, and so the web doesn't move forward. This is the opt-in problem that Google is talking about. In politics, the analogous concept is called public choice theory.
Google could even make an extension for this to allow users to gather and share a list of sites that behave in a user-unfriendly way.
But of course, it was much easier to fuck up half of the Web.
Public choice and mandates are great for things that require cooperation and agreement against a race to the bottom (like tax havens), not against autocomplete fucking off.
My guess is that many medical computers aren't well administered and leave autofill on, which can easily cause accidental HIPAA violations.
Then that's why the sysadmin should disable the password manager.
It's not up to the website.
If you have an internal site, you already control the browser, so why do you want to fight the browser from the inside instead of from the outside? :o
There was a form where an administrator could change certain details of another user's profile, and if the e-mail (or name) field was empty on a profile, Chrome would autofill it with the administrator's e-mail, which could result in unwanted / corrupted data.
Some replies in this thread suggest configuring Chrome differently in organizations where this is important to avoid, but when you are a SaaS vendor, your users will inevitably blame you for Chrome's behavior.
What do you imagine?
* Turn on anyway
* Leave disabled for this form
* Remember my choice for this website
I'm a user.
I want that.
I'm probably not "a majority", but I fail to see how being lumped in with the majority without my consent, without even knowing it is happening, is "protecting my interests".
What would you think of a restaurant that charges you 50% of the bill as a tip because "the majority of the customers do that"?
I don't know about you but when I go to the restaurant, I can't change the price of the items. If they set the tip for you, that's pretty much setting the price, isn't it?
I forgot HN is mostly an American place.
Tipping in most of Europe is voluntary.
We don't force people to tip.
Now Google shoves their preference on you (which is a coin toss whether it aligns with yours or not) and you shout that this is protecting your freedom of choice.
Chrome cannot decide on a whim that old websites should be broken. That is not how the web moves forward. Take Firefox, for example: it kept breaking extensions with every update, and now it has few compatible extensions.
that's a misbehaviour of the browser, not of the developers
Browsers were designed for viewing a single page at a time.
Popup blocking is fixing a browser misbehaviour, just like fixing a car that accelerates when you hit the brake.
If you ship an experience like that, you end up not only breaking old websites but also shifting the emphasis to regularly updated websites that work on mobile. That was the intent! But it is not the web we know and love.
It's the default on a lot of Android phones, but far from the only one available. I use Firefox on mine. Opting to use a non-Chrome browser on an Android phone is about as technical as using a non-Microsoft browser under Windows.
This is a gross oversimplification.
You also cannot opt in if you don't have electricity in your house or don't own a computer.
This is not a real argument
A real argument is obsolescence
Of course there's a point when older versions are not supported anymore
Here we are talking about future versions breaking compatibility, without shared consent and without a deprecation roadmap, in a Chrome-specific way that then needs to be supported, making Chrome the new IE, with no real benefit to the user and vendor lock-in by Google.
When those users try a different browser, for whatever reason, they will experience a different web, which for the majority of them, being non-techy, will look like a broken web.
because that's what I want it to do.
I'm all for giving users the power to ignore the standard and disable certain features, but forcing it on users without making it configurable sounds like a really bad idea.
A better idea would be to show that yellow bar at the top explaining briefly what the site does and asking whether the user wants to allow that. That keeps the user informed and empowered, rather than subject to the whims of the website and browser makers.
If that's true, the browser should ask me and not assume what I want it to do.
The right thing to do is to bring the issue up with whoever is running the website. If they decide not to act on it because they think they know security/UX better than Google/you, they're stuck with crappier security/UX and the market should sort it out.
I've actually managed to convince a company to stop prohibiting copy-paste on logins by pointing them at the new NIST password guidelines. I don't expect this to work for every company, but if they intentionally don't change, it's okay to name and shame them for it. Whether it's a UX sin or security voodoo.
The difference is that disabling of "autocomplete" is a user interface issue, and can be addressed by the browser I use. The problem of ridiculous password restrictions is not usually something that can be controlled by the client.
I like the idea of convincing IT to change crazy password policies, but I don't have the mental energy to navigate these huge bureaucracies.
Once IT is convinced, they still have to decide when to put it into the budget. If they think their policy is okay, just not perfect, the fix will probably be buried at the bottom of the budget pile and cut every year.
When a company does not follow a NIST standard [that applies], that is admissible in court against them. While it isn't an automatic loss, they have to defend why they didn't follow the standard. In some cases the defense is not to the jury but to the judge, who can make a "statement of fact" and tell the jury to assume negligence for not following the standards. When legal says the cost of not complying with NIST password guidelines is potentially 10 million dollars, that puts fixing the password requirements much higher in the budget.
But that's a recent change to the NIST guidance. Searching for "Bill Burr NIST" will turn up recent stories about the original author's regret over many of the password recommendations from the original 2003 publication, which survived until this year's update.
A recommendation won't allow you to sue a company contrary to what the other commenters seem to think, but it's enough for any internal employee who works on something to call for and justify a change.
Your question is one of the reasons I didn't say the legal route was a better way. It is an option that may get better results in some cases. Even in the US it may not always get the best result.
So - what's OK to break, and what isn't? In the end, it's a judgement call by the browser developer. I like what another person said here - the browser is a "user agent"; as long as the actions are clearly motivated by user interest, I think they're OK even if/when they break some standards.
I'd say, for a start, achieving consensus before breaking is ideal. If the attempt to achieve consensus fails, at least I can see they tried (or I can disagree). Hopefully in the future users can use their leverage to make it not a judgement call by the browser developer, but a judgement call by the community of users. The myriad of features in browsers, however, makes it a difficult arena to enter, leaving a large swath of users subject to these judgement calls.
> as long as the actions are clearly motivated by user interest
It doesn't matter what the road is paved with. Many things are not clear, and intent is also not clear. You should not use intent to determine what you are OK with; you should use the action and its effect. You may be OK with Safari's approach to third-party cookies, or Chrome's approach to cookies, which is fine. But you might not be OK with the next action, and when it hurts you as a user, the reasoning will matter less than it does when you support it.
The people who disagree are third-parties who want to impose their own preferences on Chrome's users. Their opinions should not be taken into account because they are not Chrome's users.
If there is never a valid reason to disable autocomplete and therefore "the standard is wrong", there are ways to change the standard. The standardisation process for Web APIs actively involves browser vendors but allows for building a consensus before jumping the gun.
If the Chrome team could figure out a way to force all of those sites to change their password restrictions through a Chrome update, they probably would.
What? No. Clearly it should never have been in the spec, and a browser should be an extension of the user's desires.
In a perfect world that would be the case, because the browser would belong to the user. But how much did you pay for your last browser? Consider yourself lucky when the browser does something you consider beneficial.
How insane would the password rules have to be for anyone to travel 1 hour more to go to a different university? How insane for them to stop playing a given video game? To change banking institutions?
I don't have the answer for others, but for me, the answer to all of those is "pretty insane". Except for the banking case, password security is a minor concern (and even then, the system protects us with anti-fraud laws and what not).
In a sense, the market is sorting itself out: it just decided that it doesn't care much about passwords. In fact, if you figure out a way to be profitable while offering twice the interest rate but every time people log in to your bank they have to dance the robot or whatever, you'd probably still have customers.
You can also copy text with right-click -> Inspect Element -> `copy($0.innerText)`. Although you'd use `$0.value` for an input or textarea.
They claim they are making these decisions based on user data which I have no reason to doubt.
> and breaks the web.
It breaks crap websites that are broken already from a performance and usability perspective.
I personally think this is exactly what is needed to make the web better as a whole.
Individual developers working for individual companies are rarely if ever thinking about the good of their users, except in a very narrow profit motivated sense, and never thinking about the good of the ecosystem as a whole.
Google is also "breaking the web" by not auto playing videos, not allowing alerts in one tab to block the entire browser, etc. Those all seem like good things to me.
Me: Hey, I'd like to add a work-item to our current sprint that reworks semantic attributes into our site.
Them: What would that do?
Me: It improves the underlying architecture making the markup more usable and extensible.
Them: How does that benefit the users?
Me: Over time it will reduce the TTM for features and allow us to adopt a common standard others use on the web.
Them: So there's no UI and it won't affect their workflow?
Me: Well, no.
Them: Then why would you suggest it? We don't have the budget to add meaningless development tasks.
I might be biased towards thinking there are plenty of developers and engineers out there thinking about this stuff but we don't always get the final say in what is done. Also, a lot of enterprise applications were and continue to be around before many of these standards. It's a struggle to improve what doesn't directly add to the bottom line.
You must decide that it's necessary. As an engineer, you are the most qualified individual to determine if something is technically necessary. Just like you decided that proper indentation, unit tests, refactors, etc. are important, so must we decide that other elements are.
That's exactly the point and you're essentially nitpicking (with anecdata) the fact that he pinned the problem on developers instead of product managers. Regardless of who is making the decision in companies, the end result is pretty much the same.
It brings the site closer to recommended accessibility guidelines, making it easier for e.g. people who have vision loss to navigate. It also reduces our potential legal liability to such users for not meeting accessibility requirements.
so what's the problem?
>Individual developers working for individual companies are rarely if ever thinking about the good of their users, except in a very narrow profit motivated sense, and never thinking about the good of the ecosystem as a whole.
>Google is also "breaking the web" by not auto playing videos, not allowing alerts in one tab to block the entire browser, etc. Those all seem like good things to me.
It's interesting that you're calling for a browser to not implement standards.
In any case, an advertising company would be the last entity who I would trust to do anything "good" with regards to the web.
The "standards" are created by the browser makers. It is up to them to decide what the standards are by choosing what to implement.
For instance, Apple can decide to not allow 3rd party cookies, which "breaks the web" and "doesn't follow standards" but it is also the right thing to do.
You're half right: standards are created by W3C & IETF, which includes browser makers.
> It is up to them to decide what the standards are by choosing what to implement
Kind of like IE and ActiveX, right? Right?
The changes some browsers have made to autoplay videos or alerts do not contradict what is in the manual.
WHICH IS OKAY, if we're talking about security/privacy, but performance ... not really. Let the market (and the users) decide. If a site takes forever to load, and scrolling is impossible because it takes ages to scroll, and burns a mark in your palm in that time, then maybe you'll reconsider visiting that site ever again.
WHICH IS OKAY, if we're talking about security/privacy, but performance ... not really.
It's not automatically OK to break functionality even for security/privacy reasons, IMHO. Browsers are used for many useful purposes that do not involve visiting sites run by large organisations with full-time development teams assigned to ongoing maintenance. Intranets. Embedded UIs in devices. Personal sites full of useful information but with no-one actively maintaining them.
The number of breaking changes browser developers have been willing to make in recent years does not bode well for the future of the Web, a platform which became what it is today precisely because it was a known target for developers and for a few years the industry standards actually meant something.
The Brave New Web is one dominated by ego-stroking contests between the major browser developers and huge, centralised sites with effectively unlimited resources run by just a handful of organisations. Notice that neither users nor original content creators appear in that previous description. The original author here is quite right to call the big players out for that. We saw Microsoft's infamous embrace-and-extend strategy and the way the Web was held back for years as a result. We should be just as wary when the likes of Google sing the same song.
And there are non evergreen browsers for those environments.
I agree that unilateral moves are bad for the web, but the reality was always that. Vendors have their own agendas, and usually the overlap is large enough.
Plus the change was always there too. Framesets work just as 'fine' today as they did back in '96, and the same goes for tables, divs, and falling-snowflakes DHTML snippets in the background. The churn we see today is because the web became a lot more diverse; a lot of things were, and still are, living on the edge - some in terms of performance and creative expression (just think of CSS Houdini, gradually giving lower-level access to the rendering pipeline), some in terms of security (because they depend on implementation details of browsers, such as CSRF protections, which took a decade to get sort of right with the Same-Origin policy).
Then Chrome devs say to use autocomplete=something-stupid to get the specified behaviour of 'off'. WTF is that?!
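For reference, the workaround being mocked here looks something like this (a sketch; the token value is deliberately one Chrome's autofill doesn't recognize, and the behaviour has varied between Chrome versions):

```html
<!-- Hypothetical: instead of the spec's autocomplete="off", use a
     made-up token that Chrome cannot match to any known field type,
     which in practice approximates the specified "off" behaviour -->
<input type="text" name="psk" autocomplete="definitely-not-a-password">
```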
Donate to Firefox.
Chrome is the new IE
Websites consist of code to be interpreted by browsers as they see fit, for the benefit of their users. Those users do not necessarily want exactly the experience the site authors want them to have.
So, along the same lines, it may make sense to improve the UI for autocompleting users, or for hinting about the use of the field, to make it easier for sysadmins. But that shouldn't break the more common case of handling sites that just think they're Too Special or Too Important to allow saving login information.
And I would actually argue that blocking before closing the tab should be a decision up to the developer, not the browser.
The shop controls the server; the user rightly controls the experience on their machine. The browser vendor provides an application that runs on the user's machine. If the shop doesn't like it, they can pound sand or suggest the user would be better off with a different browser.
They can perhaps rightly say that the deviation from the standard is unfriendly or suboptimal, but one would hope they wouldn't appeal to imaginary authority derived from a bad analogy.
A raven isn't like a writing desk, and a client-server interaction isn't like a user visiting a physical store.
Analogies can serve to communicate but when you use them to prove a point the only thing you prove is your lack of understanding of the matter.
I didn't realize until now that its doing the right thing from my (the user's) perspective was because it was breaking a web standard, but damn if I don't find it useful.
I'm all for moving the web forward and addressing stuff like this is a critical part of that, but this is not moving the web forward, this is moving Chrome forward, and in so doing breaking thousands if not millions of sites, and placing the burden of their repair on their developers with no notice and no better solution, just a different hack than the hack they were using.
Chrome is breaking 'autofill=false' on web forms because it's really bad to have autofill=false most of the time. For example, it was breaking password managers, forcing users to have a worse experience around something as security-critical as entering a password. If the goal is to have the user never remember their passwords to avoid phishing, this is a really important thing.
I'm entirely, 100% in support of them ignoring 'autofill=false' because most of the time I see this as a security flaw on websites. This isn't anticompetitive - they aren't trying to break websites on Firefox, they aren't contacting websites to get them to do some Chrome specific thing. They're ignoring a really bad default that hurts users.
While Chrome has the goal to destroy everything else.
Example 1: Google Chrome spam on YouTube, Gmail, and every website on the internet. I can't count the number of times my parents called and asked why Google asked them to "upgrade their browser" (hint: it wasn't a new Firefox build).
Example 2: Sending spam emails to all Google users advising them to install Chrome if they ever sign in to their Google account on a new machine using anything except Chrome.
Example 3: Installing Chrome unasked, drive-by style, when you install lots of freeware, because Google paid third-party developers to bundle Chrome as a spyware-like installation in their installers.
Google is using spyware techniques to deploy their fucking browser. Google is literally working on killing all other browsers.
And for the good of the web, this should be reason enough to instantly and permanently uninstall Chrome.
Had Microsoft done a fraction of this for anything they did, you'd have seen social media and the EU causing a shit-storm. How come Google gets a free pass?
Cite? I've never seen this, and I use Firefox regularly.
> Example 3: Installing Chrome unasked as drive-by installers when you install anything lots of freeware, because Google paid third-party developers to host Chrome as a spyware-like installation in their installer.
Again, cite? Because, again, I've never seen this.
I don’t have articles to link to, but I’ve experienced this on numerous occasions.
Working in manual testing, where we regularly set up clean environments from scratch, has probably helped me notice the problem more than a regular webdev would.
Because this is absolutely a problem.
> Example 3 cite
I’d say Google it, but it really is a known and documented problem:
Google is in no way playing fair.
"For most modern browsers (including Firefox 38+, Google Chrome 34+, IE 11+) setting the autocomplete attribute will not prevent a browser's password manager from asking the user if they want to store login fields (username and password), if the user permits the storage the browser will autofill the login the next time the user visits the page. See The autocomplete attribute and login fields."
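For the "set another user's password" cases discussed in this thread, the MDN material quoted above points at autocomplete="new-password" as the supported hint: it tells the browser the field is not the current user's login, so saved credentials shouldn't be offered. A sketch (the URL and field names are invented for illustration):

```html
<!-- Hypothetical admin "reset another user's password" form.
     autocomplete="new-password" signals this is NOT a login field
     for the admin's own stored credentials. -->
<form action="/admin/reset-password" method="post">
  <input type="text"     name="target_user"   autocomplete="off">
  <input type="password" name="temp_password" autocomplete="new-password">
  <button type="submit">Reset</button>
</form>
```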
Netscape is irrelevant. The fact is that Chrome is using its market dominance to affect the implementation of standards, and in doing so is breaking numerous websites. That is what IE was famous for, but because Chrome is made by Google, it gets a pass.
> Chrome is breaking 'autofill=false' on web forms because it's really bad to have autofill=false most of the time.
That is not the Chrome web team's decision to make. The point of standards is that they should be followed, regardless of your or anyone else's opinion. If you disagree with a standard, you work to change the standard; you don't just change your browser's implementation, break a ton of functionality all over the web, and then sit there saying "well, that's how it should be."
> This isn't anticompetitive - they aren't trying to break websites on Firefox, they aren't contacting websites to get them to do some Chrome specific thing. They're ignoring a really bad default that hurts users.
I agree with this in principle, but then they should be working to see that the standard is changed, and do so in such a way that the other browsers can follow suit and allow the standard to be improved. I don't disagree with the stance Google is taking; I don't like that one company is allowed to make such a sweeping change, and all the people who would get all over Microsoft's or Apple's asses about it if it was done in IE/Edge or Safari, are just all cool with it because again, Google is the golden child.
Netscape is entirely relevant. The question was raised about how different Google / Chrome are compared to Microsoft / IE and the answer is Netscape. Microsoft tried to tie the internet into their own platform, Windows + IE, due to them pushing non-standard rendering and ActiveX. Whatever your opinion of Chrome's break from standards might be, Google simply are not trying to lock people into Chrome on Android / ChromeOS. Remember that Google's revenue comes from ads, not software sales (as was the case with Microsoft). Chrome, Android, etc are just platforms to help leverage ads but they still make money from people running Firefox on Windows, Linux and OS X. So while you might dislike the standards Google are breaking in Chrome, the intent is very different from Microsoft in the 90s with Internet Explorer as Google are not trying to control the web (or at least not with the examples given in this discussion. I'm less convinced their intentions are honorable with AMP).
That assertion is laughable on its face. Of course Google does brand lock-in; the fact that their brand reaches further across more markets than Windows/IE ever did doesn't negate that they attempt brand lock-in at every opportunity they can. In fact, Chrome was largely started because Google wanted to control the browser aspect of the experience; otherwise, why would they make it?
There isn't any benefit in Google locking users into their free software if it means users on other platforms can no longer use Google's revenue generating services.
Your point about brand awareness is valid but not really the same as what I was originally discussing (software lock-ins). E.g. Google's products work fine in other browsers, so there's no actual technical lock-in; however, users are encouraged towards Google because of brand familiarity and browser defaults for search and home page.
How? Advertising budgets follow those eyes. Google is a nobody in TV ads, so doesn't get any part from the (big) money advertisers are currently spending there. If these would switch to targeting the web (where Google is the dominant actor), chances are that huge chunks of those ad budgets will be inserted into Google's giant ad placement machine.
If your litmus test is “as bad as Microsoft” then that’s exactly what we will get.
Complaining wouldn't change anything. Corporations don't change their policies because a few nerds moan on a few message boards.
> If your litmus test is “as bad as Microsoft” then that’s exactly what we will get.
It wasn't my litmus test. I feel you're missing my point: others are making that comparison and I'm saying the two don't compare. In Microsoft's case they broke from standards to lock people into their paid platforms. Clearly that's bad. But in Google's case the platforms are free, and the break from standards doesn't lock users into any platform. You can make arguments against Google's break from standards if you want, but comparing the two because they both involve a web browser is just clutching at straws.
Now, if you wanted to argue about Google corrupting the web then AMP is a far better example. There are far more similarities between MS+IE and AMP even though AMP isn't a web browser:
• both are a free product that locks users into the company's revenue stream (ads in AMP for Google, Windows for MS)
• both push their product through their massive market shares (Google promoting their AMP CDN on Google Search above the search results, Windows shipping IE)
• both onboarded developers with promises to better user experience while locking them into a non-standard platform.
I could go on, but in short, AMP is what you need to be worried about with regards to Google shifting the web landscape, not whether Chrome follows spec on a small handful of specific tag properties. That's a whole lot of hysteria over nothing.
> both are a free product that locks users into the company's revenue stream (ads in AMP for Google, Windows for MS)
Ads on AMP sites don't have to be from Google.
> both onboarded developers with promises to better user experience while locking them into a non-standard platform.
AMP, while a particular subset of HTML/JS, is still just a subset, aka part of the standard platform and can run in any browser. Calling it non-standard would be like calling it non-standard to use React. It's just a library/framework.
AFAIK they do if you want to be on the carousel above the search results on Google Search. It's part and parcel of using the Google AMP CDN. Sure you can use other CDNs (even roll your own) but then you do not appear on the AMP carousel above Google search results (which is highly valuable real estate).
> AMP, while a particular subset of HTML/JS, is still just a subset, aka part of the standard platform and can run in any browser. Calling it non-standard would be like calling it non-standard to use React. It's just a library/framework.
The fact it's a subset makes it non-standard because you cannot then write standard HTML / JS.
To flip the argument, this debate started because Chrome was accused of not following standards because it ignores some HTML properties (eg autocomplete). You could argue the same as you were for AMP that Chrome is following a subset.
Personally I find it a stretch to argue that x being a subset of the standard means x is a standard.
How very wrong you are on this one point; your argument was going along the right lines but here it took an unexpected turn. Microsoft was the one truly not trying to control the web. The web was a casualty of something else (killing Java). Now, Google. Google is the one that must control the web. Chrome and Android and AMP serve exactly the same goal!
While you can argue that Chrome and AMP serve the same goal, their approaches are very different. One helps diversify a marketplace and doesn't lock developers into its ecosystem nor users into its product via anti-competitive practices; the other seeks to create a "second web" controlled by Google, locking companies in with the promise of reduced bandwidth and users into its hosting and advertising, leveraging its massive market share in search as an effective sweetener/bribe (use us and be placed before the search results).
You can complain about Chrome all you like, but overall, having more browsers on the market is better, and Chrome doesn't lock people into anything aside from brand loyalty. Whereas AMP is a very different beast.
This is what Gates calls the fight for the web: saving Windows. That's why they integrated IE so heavily into Explorer. ...Again, they did hit the actual internet ecosystem very hard, but that was a side effect. And they would be silly not to try to take some advantage of that too, since they were getting it for "free" while fighting for Windows dominance.
You're exaggerating the point of standards, and/or confusing them with specifications. Standards have always been open to (ed: some amount of) interpretation with regards to how they are implemented.
Chrome is the most used web browser
Chrome is deciding how the web should work unilaterally.
These are the exact conditions that produced ActiveX and resulted in many websites with an IE6-only requirement. As much as developers dislike having to design around the various browser quirks, the situation will get much worse if any one browser becomes "important" enough that you can forget the others.
Next let's look at the actual technology we're complaining about. ActiveX wasn't just an Internet Explorer-only feature, it was Windows-only as well. Furthermore it wasn't the only feature in IE that wasn't supported in other browsers (VBScript, JScript, WebForms†, different support for HTML and CSS, etc). Whereas Chrome doesn't diverge from other browsers all that significantly in comparison.
Chrome's changes could be better compared to Internet Explorer with regards to how IE4 changed the way frames were used (inc. the addition of iframes). This was really annoying as a developer, but actually these changes were for the better - particularly the iframe - and eventually those features landed in other browsers, including Netscape Communicator.
The W3C are pretty glacial at updating their specs, so it has often required browsers taking things into their own hands, e.g. the browser CSS prefixes. Developers would moan about having to add lots of extra junk to their code just to get new features to work, but it was the price to pay for having access to new features that hadn't landed in the various web specifications. So as much as they would moan about vendor prefixes, they'd welcome browsers extending the specification.
The above paragraph wasn't all that long ago, and the difference between then and now was that Chrome, Firefox and Opera were all at it, with IE still playing catch-up with regards to compliance; but now the browser market has consolidated: Chrome (read: Blink) has absorbed Opera, IE has largely died off and Firefox is sadly slowly falling out of fashion. So people are paranoid that Google might abuse their advantage. However I really cannot see that happening - or at least not with regards to Chrome. Not only is Chrome's market share small compared to IE's in the 90s / early 00s, but the web is also viewed from a multitude of different devices. As someone who has managed online shops, games and other popular web-based services, I can testify that product owners do NOT want to lose business from portable devices - if only because some product owners are so technologically inept that their primary computing device outside of the office is an iPad and/or their business iPhone.
† I think it was called. But basically it was their 90s tech for writing apps that looked like Win Forms.
Yeah. Where other devices are mostly Android phones running Google's fork of Webkit. I.e., Chrome
However that all said, when it comes to product support, I've found product owners generally only care about Blink (Chrome / whatever) on Android and Safari on iOS. So there's usually little interest in all the other random browser + platform combinations seen in "other devices".
Also, do you agree, in general terms, that sometimes changes can be good whereas other times changes can be bad?
The only big ones that are independent are Safari (since Google forked Webkit to create Blink), Firefox, and IE/Edge.
And the impression i have is that Safari is lagging, Mozilla is directionless and struggling to keep Firefox relevant, and MS is, well, MS.
Ever since Opera folded and made their browser a Chromium clone, only Mozilla has been carrying the banner for standards correctness. And even they seem to bow more and more to Google's decrees (while virtually cloning the Chrome look and feel with each Firefox update).
Hopefully they are on the upswing recently because they have found more focus and direction.
Have we not been there yet? Can we agree that that is a place of endless pain for developers, project managers and users as well?
Are you serious? No one thinks this way? I've worked at several companies where the software I've written was EXPLICITLY written with others in mind. To claim "rarely, if ever" shows that you know very little about this industry. Or, you surround yourself with a bunch of selfish people who shouldn't be doing software development.
But the megacorp Google is acting all altruistic and not at all abusing its monopoly powers?
Give me a break.
It's simply not realistic the 1000th time it's offered as the obvious solution.
If it's not, they should probably just work how everyone else works.
No, it's better for users who read the auto-filled boxes to make sure they are sane (which is what I do). It's not good for users that don't understand what auto-fill is (i.e. they think the website is suggesting it to them).
Easy to miss things.
Once those autofill, it's anyone's guess whether you have put the right data in.
Except that whether it's better or not is probably contingent on the particular web site and particular user.
It's super trivial to bind hotkeys to shell functions that operate on the current directory or files or whatever. I know that some of this can be done with custom actions, but it's nice to be able to a) bind keys, not just provide a context menu item, and b) create a custom command with a few clicks.
For example, something like autojump is a common tool for shell users. It's nice to bind a key that pops up a prompt, key in a string, and have it jump to a directory in your file manager using the same source of info as autojump.
This took like 30 seconds to figure out and set up.
Another example: a hotkey that shows a thumbnail gallery of every image recursively below the current dir.
The settings that come with the markup are best understood as a recommendation, a slight hint that autocomplete might not be appropriate here. The final say is always with the user, though.
Sadly, through over-usage on the part of website operators, this hint has become utterly useless.
I think the solution is to follow the hint, but give users an override button next to or inside form fields that have it set to off.
This isn't what the autocomplete attribute was for in the first place; password managers have a different workflow and saving a password is prompted to the user, so we removed it. Password managers are not autocomplete features, basically.
Disabling autocomplete=off on everything doesn't seem to be a good change. Would love to be proven wrong but I don't think there are any ways autocomplete=off can be abused besides the password manager thing, and it's not worked on password managers for years.
I hereby confess to being guilty of adding autocomplete="off" to a login form, because the client's corporate IT only cared about ticking all the boxes on a security conformance report. We have tried to fight this and some other rules (e.g. log the user out after 20mins), because this was quite a simple website, but the rules were strict like it was online banking at the very least.
Now OWASP has changed their mind, but the damage has been done.
> Since early 2014 most major browsers will override any use of autocomplete="off" with regards to password forms and as a result previous checks for this are not required and recommendations should not commonly be given for disabling this feature. 
Yeah, I'm aware it used to be in conformance stuff (and probably still is in many cases). Super annoying when that happens :)
This is a continual frustration of mine, when valuable diagnostic data is disabled for security reasons and then when stuff breaks I'm asked to go fix it blind. Sure you can have gigabytes of audit logs that are impossible to find anything in, but an error message that tells you that some system certificate expired is too much of a security vulnerability.
Autocomplete=off is abused, certainly. It's commonly used to interrupt password manager functionality, in much the same way that copy/paste disabling is used on "repeat new password" fields. (As an aside: disabling autocomplete is a good idea, but only on the password manager level, not the website level. It's defense for user privacy, so employing it on well-meaning websites is worthless.)
But it's not abused in the user-endangering manner that circular redirects and back button hijacking have been. (Specifically, to make scam sites hard to escape and easy to click into.) It's just an inconvenience to users, and honestly I'm not thrilled to see browsers override code for non-security reasons.
Personally, I installed a Safari plugin to ignore autocomplete=off because it was so annoying. So Chrome is doing what I want my browser to do.
This is precisely why I refuse to use Chrome.
I use Firefox and would do so even if it were an inferior browser, plus at this point in time and for my usage patterns Firefox really is superior.
If you have a simple web app that has an administrator mode for editing accounts, you should be able to turn off autocomplete so your password doesn't automatically get filled in for users that you edit.
It's that simple.
Are there situations in which it is easier if the admin can just reset it? Sure. Is that a good idea? No.
I have run into all three of these scenarios:
1) A login for an administrator area on a server has a different login/password than the main area, and the browser always fills in the main login/password, even though I ask it not to. autocomplete=off would fix this.
2) New user and edit user screens exist in web apps that are driven by user sign-ups. Sometimes these things need to be coordinated with other systems in a corporation, and users do not always have their own email (I know this is a shock for you, but it is very common). autocomplete=off would fix this.
3) Using a remote admin tool to set up a datasource on a server, the username and password fields refer to database credentials, not a person. Yet Chrome keeps filling in my info. autocomplete=off would fix this.
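For cases like these, the HTML standard also defines field-level autocomplete tokens; a sketch of what scenario 3 could use (form action and field names here are hypothetical). Browsers that ignore a blanket autocomplete=off generally still honor "new-password" as a signal not to fill in saved credentials:

```html
<!-- Hypothetical datasource config form. autocomplete="new-password"
     tells the browser this is not the admin's own login, so it should
     not autofill the saved username/password pair here. -->
<form action="/admin/datasource" method="post">
  <input type="text" name="db_user" autocomplete="off">
  <input type="password" name="db_password" autocomplete="new-password">
</form>
```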
Broken WebGL anti-aliasing because:
> for now since it _shouldn't_ affect any actual devices
Except well... it did.
Philosophically speaking, some Web Platform developers think that it's more secure for the browser to autofill credentials from a keychain, so that users can use better passwords and not be burdened with remembering N pseudo-random character sequences. Sounds good. Probably is good.
Whether or not you agree with the above, there are plenty of regulations in certain settings that require us to disallow client applications from auto-filling form fields. We have to battle with the Web Platform authors to fill this niche and work around everything they put in our way to stop us from doing our job. Awesome.
Personally I thought feature detection was a smell and was glad to see it going the way of the dodo in the early aughts. Not so glad to see it making a comeback.
> there are plenty of regulations in certain setting that require us to disallow client applications from auto-filling form fields.
And which regulations would those be, specifically?
From an ISO/IEC/IEEE 29148 perspective the language might be "shall not remember passwords" which would imply a legally binding requirement for compliance purposes.
This doesn't preclude applications from using autocomplete on form entry from password managers; just that the browser is not allowed to remember the entered password.
However in practice I've seen most systems deploy their applications on a target platform that runs the browser in "kiosk" mode, which if I recall from the time, was an IE-only thing. In more modern times we're starting to see consumer-level tablets and devices enter the mix and I'm not even sure if those can be locked down into a multi-user/kiosk type mode. Regardless... the web is a difficult platform for this kind of stuff.
If I had a penny for every time I've heard this excuse for really terrible, unsafe practices, only to find out that the developer deliberately misinterprets it to make their job easier... I'd be rather rich by now.
From a historical perspective, this is nowhere near the sort of behavior one saw from Microsoft in the 90’s, but that’s pretty faint praise - nobody wants anything like that to happen again. You shouldn’t have people contributing to competing products out of spite.
I hope we never hit the point again where it is a cliche for FOSS people to bond over shared hatred of a company. I’d like to think we have been inoculated against that. I like to see people who will speak up like this early and often.
Every update of Chrome erases their password manager entries. So for every site their password has to be re-entered.
Why is that dangerous? Because that means that my cousins and nephews and other regular folks cannot trust Chrome to keep their passwords, so they must either reuse passwords or write them down somewhere. Obviously, a third-party password manager is simply not an option for these folks - they rely on the browser. The browser can get it right or get it wrong. Chrome gets it wrong.
That has not been my experience.
At least for me they don't. I have 2 passwords stored in Chrome (not a site important enough to go into KeePass), and have been through multiple updates and yet to lose any passwords.
It's been doing it for a while apparently.
Autocomplete is a security problem that can be used to get more information from users than they think they are providing - just ask for one field, like email, but make the other fields to be auto-completed hidden, and the browser helpfully gives the site all the other fields the user didn't realise were being autocompleted.
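A rough sketch of the trick being described (a hypothetical malicious page; only the email field is visible, but if autofill populates the whole form, everything is leaked on submit - which is why modern browsers have added mitigations around hidden fields):

```html
<!-- Only "email" renders on screen; the rest is hidden but still
     submitted if the browser autofills the entire form. -->
<form action="/collect" method="post">
  <input type="email" name="email">
  <input type="text" name="name" style="display:none">
  <input type="tel" name="phone" style="display:none">
  <input type="text" name="address" style="display:none">
</form>
```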
And that is how IE used to operate. I think this is something we're going to have to deal with for a long time.
As a developer, I have never used it anyway, so don't care.
As a web developer, I see this attitude a lot and it annoys me immensely. Another way of phrasing it: Google is putting users first, ahead of developers. This is as it should be. "your" website exists to serve users, if you're doing a bad job at it then maybe it's an opportunity for self reflection.
Janky scrolling behaviour on mobile has been a problem for a long time. Apple also implemented non-standard behaviour for years to avoid it. You should almost never be listening to scroll events in a non-passive way. The vast majority of times scroll listening is used, a passive listener is the correct implementation and just wasn't available when the code was written.
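As an illustration, here's a sketch of the commonly used feature-detection trick for listener options. The `addEventListener` implementation is injected as a parameter so the logic can be exercised outside a browser; in a real page you'd pass `window.addEventListener.bind(window)`:

```javascript
// Detect whether an addEventListener implementation understands an
// options object (and hence { passive: true }) rather than the old
// boolean useCapture third argument. Implementations that support
// options will read the `passive` property while registering.
function supportsPassive(addEventListener) {
  let supported = false;
  try {
    const opts = Object.defineProperty({}, "passive", {
      get() { supported = true; return true; },
    });
    addEventListener("test", () => {}, opts);
  } catch (e) {
    supported = false;
  }
  return supported;
}

// Build the third argument for addEventListener accordingly:
// an options object where supported, plain `false` otherwise.
function listenerOptions(passive, optionsSupported) {
  return optionsSupported ? { passive } : false;
}
```

In a page this would be used as `el.addEventListener('touchstart', onTouch, listenerOptions(true, supportsPassive(window.addEventListener.bind(window))))`, keeping old browsers on the boolean form.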
This is proven by the article itself: the change was made in February of this year. Do you remember the internet breaking that day, and all of us rushing to update our event listeners? No, me neither. Did Chrome really break the web when absolutely nothing broke?
This is why there are piles and piles of cruft within browsers and web standards for making sure old behavior that folks may rely on continues to work.
The commitment to stability on the web is astounding; I don't see anyone changing that any time soon. Google should not be breaking things even in the name of its users, because its users are affected when things break too.
That said, Google did push this through the right channels (https://github.com/WICG/interventions/issues/35 , https://github.com/w3c/touch-events/issues/74 , https://github.com/WICG/interventions/issues/18 ) and it seemed like browser vendors were in agreement that this wouldn't be a problem. This is something that does happen; browser vendors "unship" or break functionality after discussing it sometimes.
So while I'm not convinced Chrome stepped out of line here, breaking sites absolutely is not putting users first.
They won't if they use a <blink> tag, for example. The problem is that if we freeze the entire web and demand 100% backwards compatibility, we also can't ever move forwards. This scroll event change is a positive in 99% of cases - should a web site built in 1998 really hold that back? There's no absolute in "putting users first" there - you're either putting the minority user looking at an old site first, or you're putting the majority looking at newer sites first.
Maybe it wouldn't be the worst thing in the world to introduce an "archival" mode in a browser. Nothing published on the web is ever truly broken because it's very well documented what it should do. So we can turn those features back on when viewing an older site that requires them - blinking text and all - while still moving forward on the platform people use every day.
> There's no absolute in "putting users first" there - you're either putting the minority user looking at an old site first, or you're putting the majority looking at newer sites first.
I'm not talking about old sites; I'm talking about current sites. The old sites were illustrative examples of the backwards compatibility in the web.
The point is that the web has always been a platform you can deploy code to without needing to tend to it later.
This post is an example of it breaking a current site.
There is a thing called Forward Compatibility. Not everything old has to die to move forward.
Our OpenGL 4.x video cards still run OpenGL 1.x. And no, the fixed-function pipeline of 1.x isn't anything like the modern shader pipeline. (Side note: many cards were designed in the 3.x era, and nVidia backported 4.x support into their older cards, which is amazing to me.)
>Maybe it wouldn't be the worst thing in the world to introduce an "archival" mode in a browser.
Internet Explorer already did that.
But we're talking about a very specific example, where the best option for the vast majority of users was to change default behaviour. There is no "passive event listener still implements active event listener API" equivalent here. You have to make a choice.
In a few years' time, it will lead to the same thing happening to the Chrome JS ecosystem.
Unless their navigation interface used a Java applet.
The first part I agree with. The second, I don't understand.
.NET and Java are way more universal than assembly language unless you really split hairs and pretend "any byte code generated by a machine that ends up as assembly = assembly."
For the most part, Google is putting its developers ahead of your developers. They could have e.g. jitted scroll handlers to bail out of passive, but did not, and instead broke all active scroll listeners.
> The vast majority of times scroll listening is used, a passive listener is the correct implementation and just wasn't available when the code was written.
And once again minorities get to visit the mass grave through no fault of their own.
Could they? From my understanding passive listeners do not stop execution, so even if a specific handler wants to bail out at run time it will already be too late to do so.
> once again minorities get to visit the mass grave
You don't think you're being a touch hyperbolic here? You can still use a non-passive listener if you want to, you just have to opt into it. Defaults should serve the majority use cases, that's the whole point of having them.
Code rot is very real for websites, and I worry that in the quest to do things 'right' Chrome is undervaluing the incremental harm done by every change. I don't think this action broke the web, but I'm wary of the idea that breaking changes can be casually blamed on the people who didn't update fast enough.
Change is bad, inasmuch as it makes new work for maintainers and harms user experience where maintenance isn't prompt. It's often necessary, but there ought to be an open discussion about timeframes and impact before a change happens. In this case, the discussion largely happened after the fact.
There was. In fact, there's a whole GitHub project dedicated to discussing proposed interventions like these: https://github.com/WICG/interventions
All interventions also get posted on the blink-dev mailing list before they are implemented: https://groups.google.com/a/chromium.org/forum/#!forum/blink...
It'd be absolutely insane if a native app did not get first crack at all input events. This is how they all work (iOS, Android, Windows - take your pick), and the mere existence of a touch/scroll listener is never a problem.
But on the web simply having a scroll listener is such a massive performance issue that it's worth introducing not just a new API to get events after they've happened, but to then make that the default? Why not just fix the performance problem instead of hacking and slashing around it?
Sure, there's something to be said about webpage authors getting off their butts quicker if their webpage is actually broken, rather than just not quite as fast as it could be, but the majority of webpages are not actively maintained. Chrome is simply breaking those. With only 8 months of a transitional period, there's no two sides to this argument. What they've done is irresponsible in every way.
I maintain a library of interactive widgets for an education product with a lot of dragging and dropping and I was hit hard by this.
I understand the reasons behind this, and I agree something had to be done, but Google is making it difficult for devs to sympathise with the unilateral intervention. It's not the what, it's the how.
Same with Apple - "now we'll ignore user-scalable=no and screw every responsive webapp out there".
Instead of punishing badly behaved applications, vendors are indiscriminately breaking well behaved apps along with the bad ones (i.e. apps that use em and thus respect user accessibility settings).
This leaves both developers AND users with a sub-par browser experience - like all the canvas games out there that now get zoomed on double taps, an animation which is often enough to stress the device GPU so much that the browser crashes.
I'm all for improving the web, but there are ways that work, and then there's this force-feeding crap to developers without thought and foresight, and we as a community need to call what's good and what's crap for what it is.
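For reference, a sketch of the viewport declaration in question - what a canvas game or similar app would ship, and what Safari (since iOS 10) and other browsers now override:

```html
<!-- Asks the browser not to let the user zoom. Mobile Safari and
     others now ignore user-scalable=no and the maximum-scale clamp,
     so pinch/double-tap zoom works regardless. -->
<meta name="viewport"
      content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no">
```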
I mean honestly you should have seen the writing on the wall with user-scalable: it inhibited the user from doing something they wanted to do, it hurt accessibility, it made websites imperceptibly inconsistent, and there isn't a good way of asking the user if they would like to disable zoom.
What's wrong with "This page wants to disable zooming. Allow?"?
That would fix the canvas web game usecase.
You know what's good for the web? Standards. Standards that are thoughtful and consistent.
Here are the comments on the property:
> Authors should not suppress (with user-zoom: fixed) or limit (with max-zoom) the ability of users to resize a document, as this causes accessibility and usability issues.
> There may be specific use cases where preventing users from zooming may be appropriate, such as map applications – where custom zoom functionality is handled via scripting. However, in general this practice should be avoided.
> Most user agents now allow users to always zoom, regardless of any restrictions specified by web content – either by default, or as a setting/option (which may however not be immediately apparent to users).
Maybe because 90% of responsive webapps would be better implemented as standard HTML websites, which would improve performance and help preserve battery.