Canistilluse.com (jim-nielsen.com)
555 points by sjs382 on Aug 26, 2021 | 361 comments



Personally, I think the web/browsers have churned far too much, and if that stopped happening, perhaps we would get more accessible sites and browser diversity as people "stop looking for new dogs and start teaching new tricks to the old ones." Of course, Google would try its hardest to never let that happen, since change is its weapon of monopoly.

Related: Stop Pushing the Web Forward (2015) https://news.ycombinator.com/item?id=9961613


I may be risking a lot of downvotes, but I really want to work somewhere that cares about a11y, for very selfish reasons: I want to use semantic HTML and not have to do stupid shit like using a package that re-implements the select element with divs


It's a shame the UI elements built into HTML are so lacking, which I think is what ultimately drives people to do things like this.

Look at <select multiple> for example—the browser built-in is borderline unusable. Anything that requires combining a click with the shift and ctrl/cmd keys is not going to go down well with the average user. Same goes for <select>s that need more advanced behaviour like filtering. There's <datalist>, but it's incredibly basic and pretty useless for most cases you'd want that sort of input.
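
For readers who haven't bumped into them, a minimal sketch of the two built-ins mentioned above (the elements are standard HTML; the option values are just made-up examples):

  <!-- multi-select: picking several options requires shift- or ctrl/cmd-click -->
  <select multiple name="toppings" size="4">
    <option value="cheese">Cheese</option>
    <option value="olives">Olives</option>
    <option value="onion">Onion</option>
    <option value="basil">Basil</option>
  </select>

  <!-- datalist: attaches browser-supplied suggestions to a text input, and not much more -->
  <input list="cities" name="city">
  <datalist id="cities">
    <option value="Berlin">
    <option value="Bergen">
    <option value="Bern">
  </datalist>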

The web is an app platform whether we like it or not, and I'd like to see a robust set of browser-native controls with mechanisms to customise styling and behaviour. It would improve accessibility, improve performance, reduce page sizes (because we wouldn't be re-implementing the whole world in JS), and it would at least go a little way to offsetting the slow erosion of quality desktop software that web technologies are bringing.


The browser select-multiple widget works like that because that's how it's worked in most desktop OSs since Macintosh System 1.0 in 1984 (at least). It's been simple and predictable and easily learned for decades; calling it "borderline unusable" is just hyperbole.

Now, if you wanted to say that HTML's standard widgets should be allowed to reflect the platform-native look-and-feel (for example, adding a "selected" checkbox on touch-screens, or supporting long-press to select items, or whatever) instead of being tied to Windows 95 appearance and behaviour by backwards-compatibility concerns, then yeah, I'd agree with that.


It's "borderline unusable" because the vast majority of people who use computers probably wouldn't know how to select more than one item. There are no affordances and virtually nothing uses it any more.

Even in the early desktop era when it was popular, many non-enthusiasts probably just didn't know how to use a UI control like that. It just didn't cause a problem because most people didn't use computers much.

I would bet that most internet users today have never even used a UI where that control is popular.


If you're that worried, "<p>Hold shift to select more than one item.</p>" isn't a whole lot extra to add, and it's something most people can handle.


You think people actually read what's on the screen...?

Never done tech support I take it!


So how do you expect them to use your product if you strawman them as people incapable of reading?


Because they’ll only read what they believe is essential to their goal. Small text below a form element does not scream “critical”, so it doesn’t get read. I’ve watched many users use websites: they only read what they think is important and skip everything else.


Not just naive users either. I am very computer literate and yet I find myself in situations where I've skipped that text and only find it when I go back.

It makes me think when I'm designing things: if I have problems with stuff like that, how can I make it better?


Really, if people read everything they'd never get anything done. They see dozens to hundreds of things marked visually as important that they'll do fine ignoring, every single day. Of course they often miss the one or two per day that they actually needed to read.

Then there's the way styles are so different, especially on the web. Horribly user-hostile. How does this site mark something as important, assuming it bothers in the first place? Who knows. And if I'm only planning to be on it for 30s, I'm not going to learn that.


> And if I'm only planning to be on it for 30s, I'm not going to learn that.

It's a standard UI control and it is used pretty often.


I guarantee you fewer than 50% of people who use a desktop computer at least once a week know how to multi-select and range-select using shift and ctrl on them, though. That's why you need the text—which many will, for reasons that are actually pretty good, ignore.

Or you could just use a better control.


It is standard, but it is not used often. In fact, I can’t remember the last time I saw one used that wasn’t some ancient intranet application built on ColdFusion.

Many users don’t even know you can use a key shortcut to copy/paste.


[flagged]


A good designer designs for reality, not their ideal head canon of how people /should/ behave.


No, we should stop dumbing down UIs to the lowest common denominator because that's how you get user-hostile software that treats users like sheep to be herded and monetised.

The less you encourage learning and self-improvement, the less it will happen.


You get user hostile software that treats people as sheep to be herded and monetised because of the capitalisation of software, not because designers are making their interfaces too easy to use.

Don't blame designers for that, blame VCs and Silicon Valley.


Probably wouldn’t be a great fit for touch or screen reader interfaces, though.


Touch screen OSes usually have their own implementations.


Which frankly is the best argument FOR using the built-in multiple select, since the custom widgets often fail on touch.


Some users don't know what key "shift" refers to.


"* Command-key on MacOS that's the one that that has a ⌘ on it btw"

"‡ On a phone or tablet or whatever, ignore us, we're just confusing you."


How do I do it on my phone or tablet?


> It's been simple and predictable and easily learned for decades, calling it "borderline unusable" is just hyperbole.

And it's also completely not discoverable. I only know how it works because I read about it in an HTML book back in the day.

Just because it was the standard doesn't mean it's easily learned.


I agree that calling it borderline unusable, when it works great for power users on the desktop, is hyperbolic. Unfortunately for those power users, Digikey, one of the best electronics distributors and also one of the best examples on the web of parametric catalog search, recently disagreed, changing from the classic and highly functional control/shift + click to select multiple to a weird click and drag/click and repeat paradigm.


An order of magnitude more users are familiar with iOS and Android than with Mac System paradigms from the 80s.


Those "Mac System paradigms from [the] 80s" are also the Windows paradigms from the 1980s to the 2020s. Still "an order of magnitude" more users more familiar with iOS and Android than that? I doubt it.


Android alone had 2.8 billion users in 2020, so I do not understand what kind of rationale could possibly lead one to think this is less than the number of PC and Mac users.


Ah. So when I thought I was saying 2.8 billion isn't "an order of magnitude MORE" than the surely at least a billion people who've used Windows at some point in the last three and a half decades, I was actually saying "2.8 billion is less than at-least-a-billion"? Funny, that's not what I thought I was saying.

If you absolutely have to put words in my mouth, please make them at least a little less stupid. Those tasted a bit like shit.


> It's a shame the UI elements built into HTML are so lacking, which I think is what ultimately drives people do things like this.

I like to imagine that if browser makers had collaborated better on standardising CSS hooks into the default widgets, we would not have seen such strong adoption of tools like Bootstrap. We possibly would have had a different pathway through template+controller libraries too.

As a FED, I didn’t help. I rode the gravy train along with Backbone, Angular and React. I wish I had fought harder and spoken more eloquently on the practicality of a11y-first semantic mark-up and progressive enhancement. I caved in to the demands of the agencies who took me on to get stuff done in the trendy stacks.


> Anything that requires combining a click with the shift and ctrl/cmd keys is not going to go down well with the average user.

Is there a reason the element has to be implemented that way? Correct me if I'm wrong, but aren't the details of how it works up to the browser? It only works that way because every OS's native select widget worked that way, and once upon a time browsers actually tried to integrate well with the OS and used native widgets.


I suspect that one of the big reasons why native HTML widgets are not improved, is because everybody who needs something "fancy" (by web standards), just rolls their own.


> Anything that requires combining a click with the shift and ctrl/cmd keys is not going to go down well with the average user.

This is part of the problem. Your average user at point X in time has difficulty combining mouse clicks with keyboard modifiers. You build the interface around that notion, and at a later point Y your average user cannot combine those at all, and you have lost an input mode. Your average user has trouble distinguishing a single click from a double click. You install debounce logic, train them that there is no difference between the two, and lose another input mode.

The web is a shitty platform for apps, because it is currently supposed to be usable from a proper PC, from touch handheld devices, embedded in a controlling container, and in the relatively near future from virtual environments. Game developers have tried for literally decades to bridge the gap between PCs and consoles and mostly failed at that, with both platforms being relatively static. The web, being a moving target, is much more difficult to fit onto different device classes. Yet we try to do that, and as a result we are moving to the lowest common denominator.


> Game developers have tried for literally decades to bridge the gap between PCs and consoles and mostly failed at that with both platforms being relatively static.

For grand strategy / RTS, yeah, that's kind of a function of the complexity. Most other game genres are fairly well defined in the expected control schemes on both platforms now though.


> Anything that requires combining a click with the shift and ctrl/cmd keys is not going to go down well with the average user.

These users need to retake a computer literacy 101 class.

When I was in elementary school, we learned how to multi-select in the tutorial system that came with Windows 3.1.

I don't know why Windows XP deleted these very important tutorials and replaced them with a webpage.


There are lots of people without your educational background. It's a few years old now, but I remember finding Alice Bartlett's talk on <select> box inaccessibility [1] quite instructive.

1: https://www.youtube.com/watch?v=CUkMCQR4TpY


This, rather than JavaScript, would have been the way to extend the web into a more cross-platform application domain. Now we have essentially a bitmapped graphics terminal where each and every entity re-invents the user interface in new, incompatible and broken ways.


> Anything that requires combining a click with the shift and ctrl/cmd keys is not going to go down well with the average user

This is how computers have worked since pretty much day 1, not just the web. I remember getting computing classes in school, but it seems the current generation is completely ignoring all of that and instead just plays on their phone.

More built-in widgets would be great though.


And it's not just keyboard modifiers -- in middle school I remember having to explain double clicking. People eventually got used to it, then the web got popular, and I had to explain to users not to double click on web links but to do so on desktop links. Even in this century I still occasionally see people using double click to navigate the web.


Could be argued that that was a mistake in the original specs for the Web. At the time, as I recall it, "double-click for action" was already a pretty well-entrenched standard on multiple platforms.


I work with the GOV.UK Design System and everything's geared towards making services accessible. Your government (or health service) might have their own.

https://design-system.service.gov.uk/


Australia just cancelled our equivalent :(

It’s been open-sourced and forked here if people would like to help keep it alive:

https://designsystemau.org/


You guys are the best people. I can't express the depth of respect for you and the work you're doing.


I don't want to take credit for the work the Design System team does (they're awesome and supremely dedicated), I stand on their shoulders.


The US has one as well. Similarly, it has a key focus on accessibility.

https://designsystem.digital.gov/


Well, that is what HTML should be used for.

I do freakishly experimental web design at times (like for designer portfolios, where freakishly experimental web design is kinda part of the thing), but for everything where it is much more about the info and the content I will bend over backwards to use semantic HTML (and even with those freakish sites I will try).

If you've never looked at your website via Lynx or used a screen reader, I highly encourage it. It will give you a different perspective on things.


> a11y

...which the text-to-speech reader probably pronounces as "ay eleven why".


That's how I pronounce it in my head, too, but I still know what it means. All is well here.


It's an abbreviation for accessibility (because there are 11 letters between 'a' and 'y' in the word).

Similar to i18n and l10n for internationalisation and localisation.

Interestingly unlike w3c, so it's not standing for aaaaaaaaaaay :)


My point was that it's inaccessible: "accessibility", "internationalization" and "localization" are clear to everyone, but "a11y", "i18n" and "l10n" are not at all obvious even to native English speakers, and especially those using screen readers.


I've never understood why these shortenings are so common. Is it just to avoid the mental load of having to remember how to spell long words? Isn't the mental load of having to remember which number goes with which word worse?


It's like a name drop. Saying "a11y" instead of "accessibility" lets other people know that you're familiar with the industry jargon.


Same reason people use contractions. It's faster and conveys equivalent meaning.

A11y is probably the easiest one to remember because it looks like ally. Which is what you're being by worrying about accessibility when you yourself don't rely on the standards.


You only have to remember two digits instead of the correct order of 10–18 letters. It’s probably also a lazy typist thing: four characters instead of 20.


My bad, I had read the parent quickly & thought they'd said "I still don't know what it means" rather than "I still know what it means" so just wanted to expand the jargon for the thread here.

Agree with you re: screenreaders.


Just tested with MacOS's VoiceOver utility. Yup, it's "ay eleven why"


I work for a government agency (in the US). We're required by law to meet accessibility standards, and at my workplace we often do so through the path of least resistance – not using JavaScript to reimplement features that are already accessible and thereby taking on the responsibility for implementing accessibility ourselves.


> not using JavaScript to reimplement features that are already accessible

Seriously, I've seen pages that create links by using a styled <span> tag with an onclick event that merely called a function that set document.location using a hard-coded value.

Did developers forget that `<a href=....>` exists?
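
To spell out the anti-pattern (a hedged sketch; the URL and class name are invented): the span version loses keyboard focus, middle-click, "open in new tab" and status-bar preview unless all of that is rebuilt by hand.

  <!-- the re-implemented "link" -->
  <span class="nav-link" onclick="document.location = '/pricing'">Pricing</span>

  <!-- the element that already does everything above for free -->
  <a href="/pricing">Pricing</a>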


Isn't that the whole point of ARIA roles? To enable you to create custom controls and interactions while marking them up for accessibility?
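
Roughly, yes, though you have to rebuild the behaviour too. A minimal sketch of what even a trivial div-based "button" needs before it is keyboard- and screen-reader-usable, compared with just writing <button>; the id and handler here are invented:

  <div role="button" tabindex="0" id="save">Save</div>
  <script>
    var save = document.getElementById('save');
    function activate() { console.log('saved'); }
    save.addEventListener('click', activate);
    save.addEventListener('keydown', function (e) {
      // native buttons respond to Enter and Space; re-create that by hand
      if (e.key === 'Enter' || e.key === ' ') {
        e.preventDefault();
        activate();
      }
    });
  </script>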


I can't think of many new browser features I've used in the past few years. There's the Permissions API and the FileReader API that are part of apps I've built (and they're very useful), and I've played with a few things like SharedArrayBuffer and WebGL2, but not in anything that's been deployed. Browsers do move fast, but not that fast, and most of the new features are super niche things that the majority of developers don't really need.

If you're prioritizing looking at new things over accessibility and cross-browser support then you are making that choice; browser vendors are not forcing it upon you.


The one feature I want more than anything in the world (and seems to be stalled in Chrome land, which means it’s stalled overall since chrome tends to drive the standards) is the streams API (aka websockets with backpressure). There’s no easy way for clients to signal to servers that they’re overwhelmed, and the result is either saturating the network or the CPU since the message handlers are synchronous.
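
There is no standard backpressure signal on a plain WebSocket today, so this kind of thing ends up hand-rolled. A hedged sketch of the workaround using only the existing WebSocket API; the pause/resume message format and the threshold are invented, and the server would have to understand them:

  const ws = new WebSocket('wss://example.com/feed');
  let inFlight = 0;
  ws.onmessage = async (event) => {
    inFlight++;
    if (inFlight > 100) ws.send(JSON.stringify({ type: 'pause' }));   // tell the server we're overwhelmed
    await handleMessage(event.data);  // handleMessage is application-defined (assumed)
    inFlight--;
    if (inFlight === 0) ws.send(JSON.stringify({ type: 'resume' })); // drained, ask it to resume
  };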


And yet try the popular websites on browsers from a couple years ago. Like a tablet stuck in 2015.


Yes, and this is what really bothers me. Feature detection is a thing in browsers. There are things like @supports in CSS, "if (<property> in <API>)" in JS, and there are even whole libraries like Modernizr to make it trivial to detect whether something is available. Web developers don't bother though, and then they complain that "browsers broke their code".

Browser vendors do a lot to make it possible to write robust code that doesn't break in the future. Maybe they could do more, but if web devs are failing to leverage the features that are there already then why would they? A whole lot of the responsibility for the broken-ass nature of the web does not lie with browsers.
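
For reference, the detection mechanisms being referred to look like this (the specific features checked are just common examples, not anything the parent mentioned):

  // does the API exist at all?
  if ('IntersectionObserver' in window) {
    // safe to construct an IntersectionObserver here
  }

  // does a newer property exist on an old interface?
  if ('loading' in HTMLImageElement.prototype) {
    // native lazy-loading images are available
  }

  // CSS feature queries, usable from JS via CSS.supports() or in a stylesheet via @supports
  if (CSS.supports('display', 'grid')) {
    document.documentElement.classList.add('has-grid');
  }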


Feature detection seems to be quite broken to me.

For instance, Safari on iOS reports support for drag and drop events, but they aren't actually triggered. Another I encountered recently is the "accept" attribute on a file input. On iOS, you can't set a specific file type, just a MIME type, but you have no way to detect that except by using the user agent. Then you have the buggy releases that make a feature seemingly supported, but actually unusable in practice (looking at you, IndexedDB on Safari).

This means using a library like Modernizr is almost mandatory. Also, what do you feature-test? Everything? Feature-testing something that has been available for 20 years sounds like a waste of time on the face of it, but the "alert" case has shown us that if you really want to do it correctly, you can't assume anything. I am not saying it is not possible to do it properly, but it is not that simple. For anything non-critical, I understand waiting for things to break and then fixing them instead; it's way less effort.


The fact that all three examples you listed are iOS Safari is giving me flashbacks to a previous job where we spent way more time coming up with ugly hacks to make iOS Safari behave something like a decent browser than we spent on all other areas of new feature development. And we also supported IE11, which was way easier to manage, specifically because it didn't straight-up lie to feature detection like iOS Safari did (and does).


It's probably a bit biased because I develop on Firefox/Chrome (it's very rare to find differences between the two) and have to wait for reports/borrow an iOS device to test/debug. There might be similar issues with the aforementioned browsers, but the pain is less visible because it happens during development and I don't remember it. Also, most of our users are on iOS devices, so reports for them are more likely. On the other hand, my gut feeling is that iOS is more painful to develop for.


I'd argue it's very much on the browser vendors, because they create the strong expectation that everyone is supposed to always be using the latest version. They do this by bundling together feature updates and security updates. Web devs do the pragmatic thing and ignore feature detection, because they know users are constantly being told to update their browsers to keep themselves safe.


> Google's weapon of monopoly.

This reminds me of Spolsky's blog post about Fire and Motion. [0] He gives the example of Microsoft creating an endless stream of new technologies which kept their competition busy (eg: ODBC, RDO, DAO, ADO, OLEDB, ADO.NET, ...). Get the competition to spend their resources keeping up rather than competing.

0: https://www.joelonsoftware.com/2002/01/06/fire-and-motion/


I agree with you, and I think it can be done today, by taking matters into your own hands.

I've done it by testing with historic browsers alongside modern, and challenging myself to make it work with all of them, JS and noJS, without errors, reliably.

In the process, I found a set of "lowest common denominator" tags which work across almost anything. I use these to build the "base" HTML. Then, I add JS with feature-checks to allow older browsers with JS enabled to still run it, though it doesn't do much at the moment.
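
As a toy illustration of that base-HTML-plus-guarded-JS approach (the attribute and class names here are mine, not the parent's):

  // enhance only if the APIs exist; otherwise the plain HTML keeps working
  function enhance() {
    if (!document.querySelectorAll || !document.addEventListener) return;
    var toggles = document.querySelectorAll('[data-toggle]');
    for (var i = 0; i < toggles.length; i++) {
      toggles[i].addEventListener('click', function () {
        this.className += ' is-open';
      });
    }
  }
  enhance();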

I think it's a worthwhile exercise to learn where the roots of the web are and how it evolved, and it allows you to write much more resilient code in general. The reason for this is that the longer a technique or syntax has been in use, the longer it's likely to continue to be used.

The Lindy effect is the name for that trend.


I'm curious, what kind of tools and processes do you use to test that the webapps/website work on so many platforms?


For the time being, I mostly use manual testing.

Most of the older browsers are covered by three VMs: Windows 95, Windows ME, and Windows NT 4.0.

Most Windows versions of Netscape can run in Wine, so I can use symlinked .wine directories to switch versions.

I also do occasional testing at Best Buy or Walmart (many thanks to those places) at a cross-section of their available devices -- a couple of desktops, a couple of phones and tablets of each flavor, etc.

I also often ask people if I can test out my website with their device (or if they want to participate in a 5-minute user study)

For difficult things like Mac IE, I just have a couple of devices e.g. G4 Macs and iOS 7 iPad.

Of course, there are text-mode browsers, which I usually install with apt or whatever.

I also use online emulator services like BrowserStack, they have a pretty good slice in their free tier, and a couple of minutes is enough for smoke testing.

Also, I sometimes come across public devices, such as desktops in a survival center, library, hotel, or public community space, and I test on those.

With my own devices, I test across various public wifi locations, so that I can provide workarounds for e.g. content blockers or inadequate proxying.

I use Charles Proxy locally to simulate low-bandwidth connections and other common issues.

I have also asked rare use-case users, such as visually impaired, to test using a script, usually through topical Facebook groups.

That's about all I can think of for now, there's probably more.


It would be great to slow down on features and focus on securing the platform (browser) so it didn’t leak like a sieve.


Ironically, Chrome is the most secure (not talking about privacy) browser, because of its site isolation and sandbox architecture. Firefox is somewhere second.


I was hacked by IE (viruses on Windows), Firefox, and Chrome.

A few years ago, during the revolution, Chrome warned us about a MITM attack and refused to connect to Google servers, while Firefox noticed nothing.

A few years later, attackers used my Chromium, which I used for work, to spy on my Firefox window, which I use for private browsing, by capturing the whole screen while Chromium sat unused. (I have it recorded on video.)


All the problems with security seem to come from JavaScript exploits intrinsic to the engine, not any of the new features (mostly because the new features are strictly typed and have the benefit of hindsight from modern design principles), so it's not like new features are strictly at odds with security.


> All the problems with security seem to come from JavaScript exploits intrinsic to the engine

Yes, the classic memory-related bugs come from the engine, but the comment explicitly mentioned leaks, and I don't think that was about the memory ones. Many of the new "features" turned out to leak sensitive or at least identification-enabling information. Imo, having remote code execution without a big red warning that this is stupid and you should not do it, one that users can't click away without being forced to think about it, just isn't a good idea, even if it is sandboxed. At the very least we should have a permission-based system where users need to authorize every single JavaScript API, for every single connection/file/database/whatever, and be unable to ignore it without disabling the APIs. That would imo be the best compromise, since web devs would be forced to think about what they are doing to users' computers¹ while still allowing applications to be built.

¹ My hope being that they wouldn't include [bullshit frontend framework] except when absolutely necessary


I think you underestimate the number of users who would either blanket-approve everything or switch to a browser that doesn't nag so much. Most people care very little about their privacy online.


Relevant:

https://twitter.com/JimMcKeeth/status/692596120464150528/pho...

Indeed, users don't read error messages, and will just click whatever they think they need to click to move on.


As long as it doesn’t break or give up backwards compatibility, I don’t want the web to stop improving. It’s one of the most important platforms in history. I thought we can’t teach the old web new tricks because of backwards compatibility?


It's also worth reading the same author's update on the same matter: Breaking the Web Forward [0]

[0]: https://www.quirksmode.org/blog/archives/2021/08/breaking_th...


"Start teaching new tricks to the old ones" on the web usually means shipping a bunch of extra JavaScript. Which is fine when a new pattern or paradigm is being felt out, but once everybody is using this one library to do this thing the exact same way, you reach a point where it makes more sense to enshrine it in a native API.

See jQuery (both Ajax and selection), Moment/Temporal, etc


The recurring calls for a "faster web" or a "safer web" are responses to problems for users that web developers, "tech" companies and their advertiser customers have themselves created. Users did not create these problems and yet, unless I am mistaken, these marketing campaigns for "pushing the web forward" are directed at users. As a user, I want those problems fixed but I am under no illusions about where they come from. It stands to reason that a user-controlled web would be much faster and much safer.


> change is its weapon of monopoly

I think it's OK for Google to make Chrome into an OS and cram whatever they want into it as fast as they can.

Regular web browsers don't necessarily need to be on the same feature treadmill/death march (assuming they could keep up.)

We seem to be trying really hard to reinvent the applet systems of the 1990s, but in a way that brings systems with 10-100x the memory and CPU resources to their knees and requires huge armies of programmers to implement. I guess JavaScript/webasm is better than Java in some ways.


> it's OK for Google to make Chrome into an OS

> Regular web browsers

Chrome is a "regular web browser". It holds a 70% market share among web browsers.

> Regular web browsers don't necessarily need to be on the same feature treadmill/death march (assuming they could keep up.)

They either don't need to be on the same death march, or keep up. You can't have both.

Currently, it's a major problem, because Chrome churns out 40-70 new web APIs with every release, which happens every two months or so [1]

So, no, it's not OK for Google to convert Chrome into an OS.

[1] https://web-confluence.appspot.com/#!/


> Currently, it's a major problem

It certainly is an insane amount of feature churn.

But the right perspective might be to look at the impact on end users.

The issue for end users seems to be that they occasionally encounter "web sites" (really web apps) that only work properly with Chrome. This is sort of a bummer for iOS users, but for the most part they seem to get by just fine. Perhaps there will be a tipping point where the "web" becomes unusable from iOS, but it hasn't happened yet.

I consider this a comparable problem to encountering web sites that only worked with Internet Explorer, or Flash, or Silverlight, or Java. Certainly annoying, but not really catastrophic.


We should also make caniusebutdoesitactuallywork

This is in reference to my annoyance at browser makers leaving incomplete implementations stagnant for years.

As an example, take the <dialog> element, a browser-native, standardized spec meant as a promising replacement for alert. Except that it doesn't work; it's inaccessible at its core.

And nobody fixes it, it's just left in this broken state forever.

<section>, the element that was supposed to cut up an HTML document into multiple outlines, hence making componentized, SEO-optimized headings easy and better, does nothing at all.

CSS columns, a simple and easy way to distribute text, were unusable for 8 years because Mozilla refused to fix their own small bug. Which pales compared to the extraordinary amount of Webkit bugs that are absolutely never ever fixed.

Form controls, since the very invention of them, are terrible. It has cost the world billions, as worldwide every single developer and project has to reinvent some of them, often breaking basic accessibility in the process.

I could go on, but I'll sum it up as the failure to address extremely common real world problems, and to just let broken buggy solutions linger. When you accept a standard and implement it, bloody finish the implementation. Fix bugs. Otherwise, what is the point?

I'd call this hardening the web. There's no reason I can see why you can't harden it whilst also making progress on new features.


> caniusebutdoesitactuallywork

MDN/caniuse cover quite a lot of this already.

> <section>, the element that was supposed to cut up an HTML document into multiple outlines, hence making componentized SEO-optimized headings easy and better, do nothing at all

They do a lot for Reader view and I’m fairly certain they help with screen readers where heading hierarchies might otherwise be ambiguous.

- - -

Overall, I share your concerns but feel they’re overstated. Mostly because I remember the bad old days of NS4, IE4-6, etc. The web as a set of reliable standards isn’t perfect but it’s worlds better than I ever anticipated.

Just as a matter of perspective: when I started web dev, I learned and used by rote dozens upon dozens of hacky incompatibility workarounds. I moved more backend in recent years, but have had more web work over the last year. I can count on one hand the number of browser compatibility issues I’ve had to address (yes, all Safari). And not because build tools are helping. If anything build tools are the biggest source of frustration for me now.


Looks like I missed your remark on sections.

I'm not sure about Reader view, but <section> does nothing for screen readers as far as headings are concerned. It doesn't start a new nesting level. You can label a section and this way mark it as a landmark to a screen reader, but that's something you can do on any element.

Meanwhile, developers trying to keep up incidentally may use <section> in an effort to produce more semantic value over a <div>, but they all use it incorrectly. Perhaps because just understanding how to use it correctly seems impossible. Or, better said, it has no function at all: https://www.scottohara.me/blog/2021/07/16/section.html
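
To make the distinction concrete, a hedged example of the labelling mentioned above: it produces a named "region" landmark for screen readers, but the heading inside stays at whatever level it already was (label text invented):

  <section aria-label="Search results">
    <!-- still announced as a plain h2; <section> adds no outline level -->
    <h2>Results</h2>
  </section>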


Overstated in comparison to darker times? Very much. I started web development in 1996, so I know what you mean.

But it doesn't lessen my annoyance. How can you deliver a 90% implementation, and then let it rot for a decade? Effectively delivering 0%, because it's unusable. It just doesn't make sense to me.


> Form controls, since the very invention of them, are terrible. It has cost the world billions, as worldwide every single developer and project has to reinvent some of them, often breaking basic accessibility in the process.

We're close to being down to two browser engines that matter, one of them with well over half total market share, and they still don't bother to fix forms. You're right about the costs, they're immense. Billions isn't an overestimate for ball-parking the figure, I'd say.

Shit, Firefox, you want to do something to stay relevant, be the first to do that right. Go nuts with some (well-considered) non-standard extensions, behaviors, and tags, worst case no-one uses them and no other browsers adopt them, best case you achieve arguably the greatest thing your project ever has (which is saying something—early FF, especially, was awesome). It may be too late (like every other idea I can come up with to save FF, they should have started at least a decade ago) but it's worth a try.


They actually did do something on forms:

https://blogs.windows.com/msedgedev/2019/10/15/form-controls...

It's a restyle into the timely (cough) flat design. I guess any progress is good progress, but it's meh.

Firefox can't do anything regarding relevance. It's not an engineering problem, they can't push their browser, they have no reach.


> Firefox can't do anything regarding relevance. It's not an engineering problem, they can't push their browser, they have no reach.

I don't think that's true, considering they became popular originally though features and overall program quality. I wasn't using it and recommending it to (and/or installing it for) everyone I knew back in the Phoenix/Firebird/FF1.x days because of advertising or whatever, but because it was excellent and solved a lot of problems for people, and that exact mechanism is how they gained a foothold to begin with—every power-user and nerd was doing exactly what I was, and pushing everyone who'd listen to switch to Firefox.

There are other ways to gain relevance, obviously (say, promoting your browser prominently on your search engine) but simply being better than alternatives definitely can work. The proof is that it already did, for Firefox in particular, once.


I really want to agree with you, but I can't.

Firefox has no meaningful presence on mobile, whilst both Google and Apple can push their own browsers to billions of users. Google in particular also pushes it from services with billions of users, like YouTube or Gmail.

Mozilla has no such thing, it can't push anything. It has no platform.

Your historical reading is correct. The lack of progress in IE created a temporary vacuum for a better browser to jump in. That doesn't mean the situation is repeatable, as this vacuum doesn't exist right now. Instead, Chrome is speeding away from Mozilla's budget-cut team.


I agree that they're probably screwed, but disagree that it's not an engineering problem. Way I see it, their only somewhat-realistic hope is to approach it as one. They certainly aren't going to advertise or "message" their way out of this, and their leverage is basically nil. Innovation is their only (admittedly remote) hope.

Make native HTML UI elements much better, and add behavior and new tags that should have been added to the spec 10-15 years ago. Bake ad-blocking in (though Brave stole their thunder for that, a bit). Build in federated social networking to the browser itself, to give people a reason to install FF (would it work? I don't know—but it might). Do something more than "we pointlessly redesigned the UI again, and we've almost caught up to 2nd-best on battery life and performance!" each release. They may as well give up, if that's all they're going to do.


Flat design is not just "timely"; it makes it harder to understand how you can interact with the screen. I preferred Windows until Windows 7, but with Microsoft's hard-to-use design (and M1 hardware coming out) I switched to Mac, which I wouldn't have done before.


I was just talking with someone today about how Fetch API is now 10 years old and still isn't a complete replacement for XmlHTTPRequest (or is it XMLHttpRequest? I can never remember) because it can't do file upload progress tracking. It's been a decade of "the spec doesn't support that yet". Yet? Will it ever? At what point do you admit you're just not going to do it?
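
For anyone who hasn't hit this, the progress events in question are only exposed on XMLHttpRequest's upload object; a minimal sketch (the URL and element id are invented):

  const xhr = new XMLHttpRequest();
  xhr.open('POST', '/upload');
  xhr.upload.onprogress = (e) => {
    if (e.lengthComputable) {
      console.log(Math.round((e.loaded / e.total) * 100) + '% uploaded');
    }
  };
  xhr.onload = () => console.log('done, status', xhr.status);
  xhr.send(document.querySelector('#file').files[0]);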


Indeed another fine example, forcing developers into massive packages like axios.

Since we're in a salty mood, let's keep it going. jQuery, the much despised jQuery. Inspired by jQuery, browsers now have native DOM APIs that are somewhat equivalent, reducing the need for jQuery.

Except that their syntax is terrible, and chaining isn't possible on most methods. So an elegant chained one-liner in jQuery becomes a blob of ugly syntax spanning many lines using native methods.

A massive regression.
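
A small illustration of the regression meant here (selector and class names invented):

  // jQuery: one chained line
  $('.card').addClass('active').attr('aria-expanded', 'true').fadeIn();

  // native DOM: no chaining, and effects like fadeIn() have to be hand-rolled
  document.querySelectorAll('.card').forEach((el) => {
    el.classList.add('active');
    el.setAttribute('aria-expanded', 'true');
    // a CSS transition or el.animate() would be needed to replace fadeIn()
  });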


Can you elaborate on <dialog>? It is currently implemented behind a feature flag in Firefox, and not implemented at all in Safari. So the answer with caniuse is maybe with a polyfill. How exactly is it inaccessible at the core? As far as I know a <dialog> is required to have at least one focusable element inside it (I usually have a close button in the dialog’s <header>), and then the user agent is supposed to trap focus inside it.

Is it not usable for users with assistive technology? Is it a bad UX for them? Is it broken or buggy? Does the polyfill not work? etc.
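
For context, the intended usage being asked about is roughly this (ids invented; whether showModal() actually delivers the focus-trapping and screen-reader behaviour is exactly what's in question):

  <dialog id="confirm">
    <header><button id="close" autofocus>Close</button></header>
    <p>Are you sure?</p>
  </dialog>
  <script>
    const dialog = document.getElementById('confirm');
    dialog.showModal();   // supposed to trap focus and sit in the top layer
    document.getElementById('close').onclick = () => dialog.close();
  </script>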


Have a look at this: https://www.scottohara.me/blog/2019/03/05/open-dialog.html

To sum it up, <dialog> is a specced standard of which the first implementation appeared 8 years ago. It is a perfect example to illustrate my original rant.

It is a feature in high demand, almost every web application needs it. Hence it makes sense to have a native control and for each browser to implement it, eventually.

No such thing happened. A broken implementation was delivered and, more importantly, never fixed. It doesn't work cross-browser, and in the browser where it "works", it actually doesn't. And now nothing happens; they just gave up on it.

Which indeed leaves us with custom implementations, but my point is that we shouldn't need those. It was specced for a reason; it's high on the list of developer needs.


That is distressing.

> tldr; I’m just going to say right now that the dialog element and its polyfill are not suitable for use in production. And it’s been that way since the dialog’s earliest implementation in Chrome, six-ish years ago.

With the announced intent to get rid of window.alert/confirm (discussed elsewhere in this HN post comments), it would be nice if dialog got some attention as a replacement... I'm not sure what we're going to do when it goes away; everyone has to roll their own replacement? Doh.


Fixing bugs, unlike adding new features, doesn't bring in new users, so it doesn't get prioritized. And if, as you say, these bugs are in features that nobody uses, why bother? Never mind that nobody might be using them because of the bugs; the PR buzz over those features has already been had, so on to the next.


Not sure I fully agree.

The features we're talking about here are in the developer space, not the end-user space. When a browser maker implements a brand new web standard, I don't think the correlation with new end-users is that strong. It takes forever before any new standard is popularized and widely used.

Check the release notes of browsers: almost all of the features are things no user will ever directly experience.


I don't disagree with this so...

sed "s/users/devs/g" previousComment


Another example of this is SharedWorker.

Here it is in the standard: https://html.spec.whatwg.org/multipage/workers.html#dom-shar...

Safari/Webkit removed it in 2015 (edit: maybe even earlier!) and never reintroduced it: https://caniuse.com/sharedworkers

Here is the apparent reason for cherry picking features:

> The implementation of Shared Web Workers was imposing undesirable constraints on the engine. It never gained any adoption.

https://stackoverflow.com/a/48193804

Edit: Here is the ticket for its reinclusion https://bugs.webkit.org/show_bug.cgi?id=149850

Turns out it was removed temporarily because of internal architecture changes, and never reintroduced because of lack of adoption. Of course, the biggest barrier to adoption is Safari not supporting them, so talk about a self-fulfilling prophecy.

> This feature was originally removed temporarily during multiprocess bring-up, and actual usage on the web has been pretty low. We're willing to reconsider if there is significant demand.
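
For reference, the API itself is small; a minimal sketch of one SharedWorker instance shared by every tab on an origin (the file name is invented):

  // in each page/tab
  const worker = new SharedWorker('shared.js');
  worker.port.onmessage = (e) => console.log('tabs connected:', e.data);

  // shared.js: a single worker instance serving all connecting tabs
  let connections = 0;
  onconnect = (e) => {
    const port = e.ports[0];
    connections++;
    port.postMessage(connections);
  };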


IIRC, it was only even available through WebKit1 (i.e., the single-process WebKit API) and never through WebKit2, so it will have gone from Safari when Safari moved to WebKit2, even though the implementation lived on in WebKit for longer.

It was only 4+ years after it was dropped from Safari that anyone started to ask about Safari support for it again, so it had kinda fallen by the wayside due to lack of interest.


Why do people think that the universality of decay can be stopped? I would even question the assumption that it should be stopped.

On a long enough timeline, the survival rate of all APIs goes to zero. (Except Lisp, which is Eternal).


> Why do people think that the universality of decay can be stopped?

Theory: A lot of those people tend to survive by jumping from metaphorical ship to ship (API or software or whatever). So to them, the reality is that the ship you're currently on pretty much always feels like it's sinking, and you're always looking out for signs. You write articles lamenting sinking ships, because that's your reality.

These people also may feel like the universality of decay can be stopped in the current context, by moving away from a decaying ship/system. It's more of a question of where the decay isn't as bad, or as seemingly needless.

Example Pro: After a while you can get really good at evaluating ships. Con: It feels useless to build your own ship; you're afraid you'd have to jump from your own ship and wouldn't that feel awful.

Other people survive by building ships. They're cool with decay, because building new stuff that works better is interesting. Their job is to support their stuff, and to a lesser degree to patch others' stuff, like maybe their supply ship, or a friend's ship. To people like this, the reality is that holes just happen. So you learn to deal. Maybe you even learn to love patching holes, and you get so good at building ships that your ship's holes are downright fascinating anyway.

These people aren't usually as worried about decay. But they may have a problem of eventually going down with their ship, or finding that their ship is no longer just a ship but also a lot like a baroque form of floating junk pile.

Example Pro: Obvs, you can build ships. Con: People will try to jump on your ship, and they'll probably tell you they think it's sinking, and expect you to do something about it.


Perhaps there is an expectation there, that we should be able to build something that would function without constant churn and effort on our part to prevent the code from rotting.

For a second, ignoring safety concerns, what should change in how we serve some HTML, JS, CSS and some images in 20 years? Sure, there are plenty of optimizations and new technologies to utilize for sites that need high performance or certain hardware functionality, but when you just want to display some simple content, none of that is really relevant. Why couldn't I build my personal website with some mostly static content and have it work for many years, without being tied down with constantly maintaining it?

Even now, that's not the case. I decided to build my own site in Ruby on Rails, since it feels mostly stable - however I need the exact same version of Ruby both on my local machine and on the server for it to work (using containers for development isn't always a pleasant experience, even though using them for packaged software is great). I also need to rely on dozens if not hundreds of packages, as well as bunches of native extensions, for example, to connect with a SQLite database to serve some simple dynamic content. Of course, I also need to update the OS (thankfully Debian unattended upgrades are pretty stable, except for the one time when they broke GRUB entirely and the server couldn't boot), as well as the web server, and there's no guarantee that the SSL/TLS certificate provisioning from Let's Encrypt won't change in the future either.

To that end, the above goal is impossible - I can't just keep building new things, since I have to spend time maintaining what I've already built, even if nothing changes about what I need from these projects functionally, just because sooner or later a rug will be pulled out from under my feet.

Edit: I've actually written a blog post on the topic of updates, called "Never update anything": https://blog.kronis.dev/articles/never-update-anything


> Ruby on Rails, since it feels mostly stable - however i need the exact same version of Ruby both on my local machine and the server for it to work

Doesn't sound so stable to me…

> Why couldn't i build my personal website with some mostly static content and have it work for many years, while i'm not tied down with constantly maintaining it?

You can, HTTP still works and even oldschool HTML mostly works¹.

If you only need sqlite as a db you could easily compile a static binary that will run forever (as in the foreseeable future) and serve content via unencrypted HTTP. If you then use a reverse proxy that can be automated with certbot, you will have a system where all the maintenance work is done by the EFF, the reverse proxy developers and your distro's packaging team.

¹ https://caniuse.com/?search=marquee :D


> Doesn't sound so stable to me…

I can understand why they'd complain about version mismatches when installing dependencies, since in those circumstances failing fast prevents me from running into deprecated functions down the road, as would happen with PHP. However, the fact that different versions of Ruby/Rails are available in different OS distros and such basically mandates that I use containers OR that I just change the contents of my Gemfile to reflect the version that I will be using during build, which carries the aforementioned risks.

That said, Ruby and Rails are both far more stable than the current npm or pip ecosystems, given that the development has slowed down in Rails somewhat and isn't broken every week due to some package introducing breaking changes. That's not to say that it's better in most conceivable ways (for example, in regards to scalability), but as far as batteries included solutions go, it's pretty okay.

> If you only need sqlite as a db you could easily compile a static binary that will run forever (as in foreseeable future) and serve content via unencrypted HTTP.

Actually, static binaries are perhaps one of the better ways to ship software, especially with static linking, as long as you're ready to take certain security risks in the name of long term stability, though that doesn't prevent you from building new versions in an automated fashion either, at least before something breaks down the line and hopefully your tests alert you about needing manual intervention.

It feels like Java sort of tried to be this, as did .NET, but there is too much functionality out there that depends on reflection and standard library classes, so many projects are stuck on JDK 8, and the whole .NET/Mono --> .NET Core --> .NET cycle is as promising as it is problematic to deal with. As for actually workable options nowadays, I'm not too sure - most ways to encapsulate scripts in static binaries fail miserably (containers mitigate this, but don't address the root issue) and otherwise there aren't too many technologies out there that are good for this.

If I wanted to go down that route, I'd probably go with Go, since it doesn't have the problem of needing a JDK (and GraalVM is still brittle, for example with Spring Boot, an issue that Go doesn't have). Any other options that you can think of? I really like the idea behind Lazarus/FreePascal, though their web server offerings are really lacking, which is sad.

As for HTTP, I largely agree - read-only sites don't necessarily have to be encrypted, even if that can hurt SEO.

> If you then use a reverse proxy that can be automated with certbot you will have a system where all the maintenance work is done by the EFF, the reverse proxy developers and your distro's packaging team.

I am already doing this, but as the Caddy v1 --> v2 migration showed, even web servers and their integrations are subject to churn and change. I'd say that it's only a question of time until Apache/Nginx/Traefik + Certbot run into similar issues, either with new methods of getting certificates being needed, or something changing elsewhere in the supply chain. And even then, your OS's root CA might need to change, which may or may not cause problems. Old Android phones have essentially been cut off from the internet for this very reason - the fact that I can't (easily) install Linux on those devices and use them as small monitoring nodes for my homelab disappoints me greatly, especially since custom ROMs brick hardware devices due to lacking driver support.

So sadly, if I get hit by a bus tomorrow, it's only a matter of time until my homepage stops functioning and the memory of me disappears forever. Of course, that's just a silly thought experiment; I recall another article on Hacker News which pondered how someone could keep code running for centuries. It didn't look too doable.


> It feels like Java sort of tried to be this, as did .NET

I meant compile your whole logic and libraries into one (big) binary so you won't depend on any runtime that might ever change, except for your operating system's syscalls.

> most ways to encapsulate scripts in static binaries fail miserably [...] and otherwise there aren't too many technologies that are good for this out there.

Lua is about as stable as it gets, and (minimal) WASM runtimes will also probably live forever (or are, most likely, interchangeable if not). For both of them you'll need to build the interface yourself, so any breaking change will at least be your own fault.

> Any other options that you can think of?

Rust, or if you are a bit masochistic, C or even C++. Rocket 0.5 (Rust) looks really nice (as in ergonomic) as a webserver, and iirc you can statically link with musl instead of glibc.

> I'd say that it's only a question of time until Apache/Nginx/Traefik + Certbot run into similar issues

But then you'll at least have HTTP as a fallback.

> So sadly if i get hit by a bus tomorrow, it's only a matter of time until my homepage stops functioning and the memory of me disappears forever.

I think what you want is a trust fund :D


Is one of these examples showing why people think the universality of decay can be stopped? I was confused about that, because it seems to me like neither of them think that?


I added some bits to try and clarify. Just a dumb theory anyway, but in case it helps.


thanks.


> Why do people think that the universality of decay can be stopped?

Because information can be perfectly copied? If you replace the components as they wear out, you can still have a computer from the 1980's running fine. The OS and software still work exactly the same. Whatever data will still exist as a perfect copy.

(In fact, rumors are that George RR Martin does exactly that, and sends whatever he writes to his publisher on 3.5" floppies)


I share your opinion. I'm a little bit horrified by anybody who is alright with digitally-represented information somehow "decaying". Why is that an acceptable thing? We should be working to create new less-volatile and longer-lived storage media, and to document the specifications for the machines that process our data so that gate-level simulations can be made in the future.

Nothing we store digitally need ever be lost so long as some basic stewardship is performed. For now that means moving data to newer storage formats, making redundant copies, using error-correcting codes, etc. Maybe eventually we'll get the mythical "store all your data in a diamond" 3D / 5D holographic storage that always seems to be a few years off.


Exactly. And this is, in fact, the very reason digital won over analog: digital data does not decay. The medium carrying it does, but data itself can be perfectly read and copied ad infinitum.

Somehow, we've managed to take that natural feature away.


Because humans are the constraint, not the bits. Humans (in a group, societal sense) need to context switch over time, and cannot perfectly retain equal, active understanding of how to use all systems or frameworks ever invented, no matter how perfectly the bits are preserved.

That being said, in an archival sense, it would be a shame and seems unnecessary to lose anything, since perfect preservation is achievable. But archival preservation is different from staying in active use with a healthy community of practitioners perpetually, ad infinitum.


That’s a really good point, there can exist a distinction where the goal should be to make it always accessible (archives, emulation, documentation, etc) but not necessarily always accessible in widely deployed web browsers.

So even if you have good reasons to remove an API or feature, you should also provide means or resources to those who wish to preserve and archive the data. And this is something that could be factored into the original design without committing the project to the technical overhead of supporting it forever.

Additionally, reducing the problem to humans just chasing shiny new things is simply not accurate. There’s no denying that some implementations are just bad or dangerous security-wise, or that the world has completely changed in some way.

Not letting things decay could risk the whole platform dying as a result, not just the feature. It could even get to the point where adding anything new is way too risky because of long term support obligations, so decay could be made worse because of efforts to prevent decay!

Obviously we can still make sure great consideration is made both before adding something new and before removing it.

There’s no getting away from the balancing act between change and conservation.


Digital decays different.

There's the physical degradation of mechanisms and data storage substrates themselves. That's ... not insignificant, but a minor part of the whole situation.

The short-term strength and long-term technical debt of digital is dependencies. Sure, you can spin up some bare-metal or virtualised instance of a system from last month, or five years ago, or twenty years ago, or fifty. Odds are that the emulation will run faster than the original.[1]

But with time and complexity, dependencies started to expand.

One of the underheralded changes of the 1990s wasn't so much Free Software as what it wrought: an ever expanding and accelerating increase in the number and specificity of dependencies. Tarball distribution gave way to packages, with dependency-resolution, some better (APT), some worse (RPM). Crucially, dependencies didn't simply have to be resolved at build time, once in the lifecycle of an executable, but at install time, once per installation.

The explosion of Web apps and frameworks triggered another violent expansion of the situation, with both even more dependencies and more deeply nested ones, but also runtime dependencies, where prerequisites are identified and fetched from remote hosts.

At runtime.

Which makes the current appified-web immensely flexible and convenient, but also fantastically brittle.

(Of course, the old-school types can extend this story back to interpreted vs. compiled languages, JIT, hardware-specific variations, high-level vs. machine language, binary, and toggling in programmes. It's been a long process.)

But as bits of that infrastructure fall apart, you'll find that digital does in fact decay.

________________________________

Notes:

1. At a gig some years back, a cow-orker told of a uni prof they'd studied under, who did work on the B and BCPL programming languages. Those are precursors to C. Back in the day, when auto manufacturers were looking at automated systems controls, he convinced them that C was far too complex and high-level, and that they should use the more performant BCPL. Which they did. And still do (or did as of a decade or two back), running under several generations of emulation. Faster than the initial hardware implementation. Mind, I'm taking them at their word....



> If you replace the components as they wear out, you can still have a computer from the 1980's running fine. The OS and software still work exactly the same. Whatever data will still exist as a perfect copy.

Sadly, this is no longer the case, since nowadays OSes rely far too much on the Internet. For example, your Docker and VS Code repositories will eventually deprecate and remove certain versions of software that you might have been using. Furthermore, npm and pip packages will decay in a similar manner, and eventually Maven repositories will drop off the Internet one by one.

Installing software from DVDs or flash memory is no longer the norm for the whole OS, and the rise of private package repositories without the tooling in place to download .deb or similar files, with all of their dependencies, into an installable collection that can be persisted is an utter failing of the modern age.

For an example, just look at these:

  - https://superuser.com/questions/876727/how-to-download-deb-package-and-all-dependencies
  - https://stackoverflow.com/questions/13756800/how-to-download-all-dependencies-and-packages-to-directory
They're not tools that have had a lot of consideration and attention given to their development and testing. They're exceedingly hacky scripts, patched atop methods of managing packages that were never suited to this.


>> If you replace the components as they wear out, you can still have a computer from the 1980's running fine. The OS and software still work exactly the same. Whatever data will still exist as a perfect copy.

>Sadly, this is no longer the case, since nowadays OSes rely far too much on the Internet.

That seems like an issue with current work, not with past work. And yes, it's an issue. Figuring out how to archive tools and buildchains is important.

Although, do modern OSes rely too much on the Internet, or does the modern development ecosystem?


> Although, do modern OSes rely too much on the Internet, or does the modern development ecosystem?

I'd say both.

Operating systems should be developed with offline first in mind, instead of giving in to the prevalence of downloading things from the Internet. For example, Debian has apt-cdrom, which allows updating /etc/apt/sources.list and adding packages from DVDs or CDs. That is good; the next logical progression would be doing the same for any storage medium - flash memory, HDDs, SSDs etc. - just a set of tools to easily create package mirrors on these storage devices and to read them from the OS, to make airgapped environments even easier.

Instead, as things currently stand, downloading a package with all of its dependencies is needlessly hard if you don't want a full mirror but only need something like gcc, with everything it depends on, in one gcc-with-all-dependencies.deb file. And the situation isn't necessarily much better with the way Flatpak and Snap packages want to handle updates; while there are definitely valid arguments to be made for automatically updating packages over the internet, as Debian's unattended upgrades already does, relying on it too much makes software brittle.

For example, just look at this: https://unix.stackexchange.com/a/541583

It took Flatpak until 2020 to address this; I'm not even sure what the situation is with Snap packages and using them in air-gapped environments, since that's also only in beta: https://docs.ubuntu.com/snap-store-proxy/en/airgap

I apologize if the above sounds a bit like rambling, but the bottom line is that if developers truly cared about airgapped environments, then using and managing them would be as easy as the alternatives, not less so - be it in regards to tools, to how packages are managed, or even just the day-to-day experience of using them.

And that trend extends deeply into the OS itself and into the software it runs (especially in cases of non-standard update mechanisms, as some browsers like to use), as well as into today's development ecosystems. It's surprising how little pushback something like Snap got, instead of the community delaying its adoption until all of the concerns had been addressed.

I'm not saying that we shouldn't innovate in regards to package management - apt over apt-get showed that there definitely can be improvements in usability - it's just that we as an industry should never be satisfied with half-baked solutions.


This is an excellent point. Consider the nature of decay - is it of the artifact itself? What about printed books, for example? Or language, or culture? Do those things decay in any sense?


Yep, in fact an Amiga 1200 with proper memory protection would probably handle most tasks for a general slice of the computing population.


Absolutely. However, all these endless discussions have an implicit time frame, say 50-100 years backwards and forwards, during which time old, well-used stuff like HTML is expected not to suddenly remove things that are being used in the wild.

Because web pages and simple web scripting is often done by non-professionals, who cannot be expected to follow standards processes in perpetuity to keep their pages/scripts working. They often author a few pages and move on with other things in life, which is why browsers being ultra-conservative about breaking stuff is important for the robustness of the Web.


> Because web pages and simple web scripting is often done by non-professionals, who cannot be expected to follow standards processes in perpetuity to keep their pages/scripts working. They often author a few pages and move on with other things in life, which is why browsers being ultra-conservative about breaking stuff is important for the robustness of the Web.

I am a professional software engineer at my day job, and we strive to keep our software up to date etc as the world changes.

But I am also exactly this non-professional developer in my after-hours. I have written a number of little web apps that solve some small need, and I really have zero interest in maintaining them over time.

When I wear that hat, I really appreciate the conservatism of most web stuff.


> Because web pages and simple web scripting is often done by non-professionals, who cannot be expected to follow standards processes in perpetuity

Professionals can't keep up with standards processes either. Because Chrome employs people to work on them. Who's going to pay me to keep track of literally hundreds of standards currently in development?


Indeed, but then it becomes an argument about degree, not kind.

FWIW, although I'm actually a fan of decay, I think it's a terrible idea to deprecate `alert()`. I get that it's abused, but it's also an important part of learning JS -- it's the easiest way to make a side-effect. (No, `console.log()` isn't easy, because you have to open dev tools, which is scary and hard. A simple modal dialog is far friendlier and more immediate and visceral, and gives a feeling of power.)


One of the great things about the web is the fact that I can expect anything I create today, to be relatively forwards compatible, and I can build things in a way so they're either backwards compatible or degrade gracefully.

The philosophies around entropy aren't really relevant here IMO. The web should continue to be evergreen.


Supporting legacy constructs forever can be hugely detrimental to progress though.

Something completely new would fast-track innovation. It could leave out all the quirky workarounds and get advanced features built in or easily extensible. When adoption is high enough, it could include a legacy box to run good old HTML-based content.

Have any attempts at this sort of thing ever been made?


Flutter started with Ian Hixie and a few other Chrome people going "what happens if we remove all the 'junk' from the HTML spec".

Of course, what's junk and what's not might be controversial, but that's where it got its start.


I don't think this is really a good idea. HTTP and HTML aren't perfect, but they've been made into ubiquitous standards already used by everyone. In order to switch to a new standard, you'd need a pretty convincing way in which it was better, and I don't think the benefits would ultimately outweigh what we have now. Not to mention that the only way this could probably actually happen would be through Google doing it, which would be disastrous for the open nature of the standards.


There are plenty of ways in which we could improve on the current setup. Everything could be so much easier, for one thing. Basic things like animation, drag & drop and responsive design all require libraries or extensive knowledge to get right.

Something like websockets is not exactly accessible to a beginner, but there is a huge need for online content that supports live interaction.

You are right that it's not going to be easy to get everyone on board, but that should not be a reason to stop trying.


If you only use a super-basic subset of HTML tags, design carefully, and ignore "standards", you can write a website which works across 25 years of browsers, mainstream and obscure, with gated enhancements for browsers which support them.

I think that's pretty impressive as far as API age.


WinAPI worked for 25 years. Windows 11 did not deprecate it, so probably it'll work for another 10 years at least.


Browser vendors have long opposed making backwards incompatible changes. The problem is that if any existing websites start breaking, some users will switch browsers because of it. Browsers that don't implement the backwards incompatible change will in turn gain users. No browser wants to lose users and all browsers want to gain users, so no browser is willing to make any changes that cause old websites to break. Once it's a browser feature, it's always a browser feature (with very limited exceptions).


> Why do people think that the universality of decay can be stopped?

I'd rather say that the "universality of decay" is a U-shaped curve. Just have a look at old arcade and console games... they went out of fashion, the hardware (sometimes literally) rotted, but emulator technology is getting better and better all the time - the result is you can use any modern computer or many smartphones (!) to run all that stuff that is sometimes many decades old.

All that any kind of technology needs to survive into the modern age is one person (or group) who engineers an appropriate abstraction layer. Polyfills, virtual machines and other emulators, FPGA-based hybrids... the list is endless.


Lambda calculus API is eternal. There are multiple LISPs, so multiple APIs, changing over time.


Does it, though? We still use some APIs from >50 years ago at this point (parts of C stdlib). I don't think the industry has been around long enough to make any definitive conclusions. FWIW, given how most software development is done these days, I think we'll just keep building layers upon layers, with lots of legacy CRUD baked inside, invisible but still necessary for the whole thing to work.


> On a long enough timeline, the survival rate of all APIs goes to zero. (Except Lisp, which is Eternal).

<3


Since it looks like Chromium is set on removing alert() altogether[0], I don't see why this can't be handled in the same way browsers handle popup windows. If a website tries to open a pop-up, it gets blocked; but, on Firefox at least, I get a small notification in the toolbar, where I can choose to copy the popup window's URL, open the popup, or allow the website to open popups as much as it wants.

Why not do the same exact thing for alert()s?

[0]: https://news.ycombinator.com/item?id=28310716


The problem is that alert() blocks JavaScript from executing. If a web page fires an alert, it expects to be blocked in that moment. If the browser were to delay the alert, the page would be blocked unexpectedly at some later time. This would probably cause bugs.
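For instance, a minimal sketch (names hypothetical) of the kind of code that assumes the blocking behaviour:

  // The author expects nothing else to run until the dialog is dismissed.
  function validateAge(input) {
    const age = Number(input.value);
    if (Number.isNaN(age) || age < 0) {
      alert('Please enter a valid age.'); // today: the script pauses right here
      input.focus();                      // written assuming the user has already dismissed the dialog
      return false;
    }
    return true;
  }

If the dialog were deferred, the focus() call and the return value would no longer line up with what the user has actually seen.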


The problem is that people relied on a side effect of a function that isn't guaranteed to block execution all the time (browsers are allowed to ignore those prompts) and now they have to fix their code.


> The problem is that people relied on a side effect of a function that isn't guaranteed to block execution all the time

Yes, this is a problem when people code their websites against the current iteration of a browser. It's not a problem with Chrome removing window.alert(), but rather a problem that arose when the site was developed, not with the browser iteration being discussed here.

> browsers are allowed to ignore those prompts

Unless previously marked as "Block future popups", is there any browser that currently (by default) doesn't block JS execution upon window.alert()? AFAIK, all browsers currently do.

> now they have to fix their code

Yeah, good luck maintaining the web with that mindset. There are countless websites that will basically vanish (rather, stop working) if you change the execution model of browsers too much.

But large swaths of the web currently are just online because someone set it up well 20 years ago and has never touched it since. Maybe they can't even modify it at this point.

So any change needs to consider the historical impact the change can have. Hopefully the people working on browsers and standards have a bit better mindset than "now they have to fix their code", because otherwise we're utterly screwed.


At the time that change was proposed, the specification already said that browsers could optionally return without showing the dialog, with some examples, but those are never meant to be exhaustive lists.

If it's legal to do so, then your code needs to be ready for it. Relying on a browser's specific version behavior is brittle.

And no, not all browsers currently do; all the browsers you've used happened to behave the way you thought in some specific scenarios, but there were already ways for it to fail before.

As nice as it is, MDN and all the other websites are just paraphrasing what is in the specification, and sometimes omitting crucial information. Yes, Javascript and web technologies are accessible and can seem simple, but the reality is a lot more nuanced than it is portrayed in most places.


I understand the general principle you’re making, but for this specific case, how many websites would realistically break because the browser stopped pausing JS execution when alerts are fired?


Quite a lot. Also, confirm() prompts for input from the user and returns it; can't really do that without blocking execution.


From my humble point of view, HTML 4 was just fine for the purposes of the Web, and everything else should be done via native applications and network protocols.

Google, alongside all of those that push Chrome wrapped in whatever kind of package, have managed to turn the Web into ChromeOS.

I expect job adverts for HTML 6 to be about years of experience developing ChromeOS applications.


I disagree, at least about the part about native applications. It really is better to have this universal VM for ephemeral apps and not have to worry about whether the thing supports Linux, or whether it's actually spyware, or how much hard drive space it's going to take up.

Recent things like the Zoom native app being incredibly insecure while the same company's web app is much better kind of prove the point.


> ephemeral apps

They are not ephemeral. They are far more persistent than anything you could install locally, because all the data now resides on someone else's computer. The "ephemerality" paradigm of web apps has been a strong contributor to that trend.


All Web applications are spyware and security is not given, hence OWASP.


Surely there are things that could be better with the web app model, but .exe or .msi are so drastically worse on the spyware front that it's not even close.


How do you figure? Before everyone went web crazy and started bundling always-on network traffic into their calculator application, programs were a thing you could download and install and use without a network connection at all forever.


Every single HTTP request you make is logged on the server, every single one!


I think that’s a result of being connected to the web not a native application vs web app thing. Nothing is stopping a native app from sending telemetry to a server for every action you take.


If you can't self-host a web-based application, you have no capability of ever using it without third-party tracking.

It's at least possible (albeit potentially difficult) for a native application, should it have such tracking, to have that tracking removed. Software "crackers" have shown, time and again, that so long as the code is present on a machine it can be made to run in whatever manner is desired.


Yeah, except Web apps collect telemetry on every single request, to feed marketing dashboards.

Spend some time learning about marketing solutions.

So native apps may use it; the Web uses it all the time.


Native apps have access to all your local data, unlike web apps. Web apps can't gather telemetry on things they can't access.


On Web apps your data lives on someone else's computer, including basic stuff like credit card information.

Every horizontal line on the network tab in developer tools is yet another piece of telemetry information.

Native apps have access to whatever the user account they run under can access.

Again whatever native apps can do, web apps do all the time.


> Again whatever native apps can do, web apps do all the time.

This isn't true. Yes, web apps have tons of telemetry, and every web request is logged server-side. But native apps do MORE:

- Native apps can look at what processes you're running. Web apps can't.

- Native apps can look at what software is installed. Web apps can't.

- Native apps can accurately determine exactly what operating system and browser you're using, while web apps either have to rely on a User-agent (which is trivially spoofed), or perform fingerprinting in order to come up with a guess.

- Native apps can see exactly what hardware you have. Web apps can't.

- Native apps have read/write access to every file on your system, subject to user-level permissions. Web apps require explicit selection from the user for file access. A native app can easily send your /etc/passwd file to a remote server, and can enumerate the local users.

--

Look, nobody is disputing that web apps have tons of telemetry, yet you keep responding as if that's what people are arguing with you about. What we're disputing is the exact statement I quoted. You implied that the telemetry of a native app is a subset or equal to the telemetry of a web app, and that's just plain false. It's very much the other way around. The telemetry of a web app is far less than a native app.


> - Native apps can accurately determine exactly what operating system and browser you're using, while web apps either have to rely on a User-agent (which is trivially spoofed), or perform fingerprinting in order to come up with a guess

A native app may know which browsers I have installed, but how would it know which of the umpteen ones I have I actually regularly use (if I don't do so while running the app)? The Web app, OTOH, is running in that browser, so yeah, of course it has a better chance of knowing that.


True, but to be fair, most users (Windows users, at least) don't have more than 2 browsers installed (Edge and either Firefox or Chrome, and it's likely a safe bet that if FF or Chrome are installed, that's their preferred browser). *nix users likely only have 1.

Your "umpteen" browsers scenario is very much an edge case.


A native app running under your account can do anything you can do, so it can determine which browser you use by any number of methods, for example by checking the timestamps on the files in your various browser profiles (and of course read your browser history etc.).


Yeah, sure. So that brings us to the next question: Why would it want to?


Here's an example: Discord app reads all the processes running on the machine. Discord webapp can't. Telemetry is probably the same.

Another example: Microsoft Word can include a bug that makes opening a doc file run whatever command, including wiping the system. Google Docs can't do it.

It's much simpler to see telemetry in a web app - just open the Network tab. The fact that it's harder to do with a desktop app does not at all mean it isn't there. Give Wireshark a spin.


Google Docs can read all your data that happens to be stored on their servers.

You don't get it: yes, desktop apps can do telemetry, and many do.

Web applications not only have all your data; every page interaction is fed into marketing engines regardless of your opinion on that.


But web applications don't have all my data. I gave you an example just above - the Discord web app can't see what other programs I have running. Discord Desktop can. Google Docs can read all my data on Google Docs (shocking), but Microsoft Word could be stealing my cookies from my browser and accessing my Google Docs, my iCloud and anything else too.

You keep making these bombastic statements with no data behind them. Not all web applications store every page interaction in a marketing engine. For example, I have a web application - as you navigate, no additional network requests are sent (it's an SPA!), and anyway I don't really have access to the server logs because they're hosted by some Netlify-like service. See? A web application without your data that doesn't feed your interactions into a marketing engine.

Some web apps track, some desktop apps track. But clearly and without any doubt, an executable running in your operating system can potentially do much more than a web app you open with your up-to-date browser. An executable can even... open a web app!


They have all the data they can extract from each HTTP request, plus 100% of all data stored on their end.

The Discord web app has a record of everyone you ever spoke with, where you were when each sentence was written, and who the people you talked to were.

All Web apps track, there are no exceptions, unless you are talking about some hobby stuff written by yourself.


Plenty of native apps phone home (and won't work if they can't)


100% of Web apps never stop phoning home.


This is not true. LibreOffice is being ported to the web, for example

https://wiki.documentfoundation.org/Development/WASM


So now they can track every user that uses the Web version.


They are porting the codebase to WebAssembly, not developing a Google Docs type SaaS product.

It shouldn't be difficult to see that GIMP and LibreOffice can run on this runtime with similar privacy.

We typically download native executables over HTTP. Then check for application updates over HTTP.

Privacy respecting WASM apps can do the same.

This is the new Java, not the new SaaS.


You've repeated this like 10+ times already in this thread, but never explained why it's so evil. You just assume everybody is on the same page that tracking (logging, actually) is bad, but it's far from obvious.

The fact that I can see access log in my web server is just a helpful tool for me as a developer to improve my services. I think the majority of sites use this in good faith and keep products healthy.

If your threat model involves secret services tracking your activity down based on downloading favicon.ico, then you might have more serious problems than architectural choices of the web platform.


Because 10+ times people keep not getting that while native apps can track you, all Web apps do track you and feed every single action into marketing engines, even if they don't publicly acknowledge doing so.

And they own your data as well.


It's "all" that people are objecting to.

At my day job, we make a web application for health records that can be deployed inside an air-gapped intranet. Surely you don't think that's feeding a marketing engine?


How can I, as a patient, be sure you haven't built one and aren't crawling my health records?


That's a different question. All over this thread, you're repeatedly saying that 100% of web apps are feeding marketing machines. I have a counter-example.

It's a separate question of how a patient can be sure of that fact. There's actually not a really reliable way a patient could even become aware of the existence of this product, since they would never see it or be informed of it. Patients are not users of this product. Users could ask their IT department for a log of outgoing internet-bound requests from the servers. Or ask whether those servers even have the capability of contacting arbitrary third parties.


Sorry, but until you present conclusive proof that you don't, we better assume that you do. Unfair? Sure, possibly... But that's just the risk you took in choosing to use the same technology as all the personal info thieves.

I mean, one could also be running around in a supermarket in a balaclava without intending to rob the cashier -- but would you assume someone you saw doing that wasn't going to do exactly that?


You can look at the network tab of your browsers dev tools. You can see everything being exfiltrated that way.

In fact, that's pretty similar to the technique that you'd use to check on a local app too, except it's built into the browser.

I'm not particularly interested in convincing anyone that some app is or isn't leaking their data. If you don't want to use web stuff, don't use it. But I do take issue with assertions that 100% of all web apps must be doing that kind of stuff. It's obviously not true. You can develop your own web app from scratch that doesn't do it, which is sufficient to form a counter-example.


Sure I can. And you can. But to 99% of users, you're talking Greek -- ancient, not modern. And hey, BTW: Can we always? Where's the "Dev tools" menu on my phone browser?

And one counter-example does not a summer make. As long as 99% (typical Internet statistic, i.e. pulled from my nether regions) of web apps harvest your data for sale, that last percent won't get the benefit of the doubt: it's far too difficult and uncertain to find out which percent that would be.


At least I know a website couldn't read my id_rsa, as opposed to native executables. Unless there is a very serious browser exploit.


They're working on it: https://wicg.github.io/file-system-access/

While local apps are getting sandboxed properly: https://docs.flatpak.org/en/latest/sandbox-permissions.html


> They're working on it: https://wicg.github.io/file-system-access/

It seems it needs the user to manually select a file/folder to be used, like Android or iOS do.

> While local apps are getting sandboxed properly: https://docs.flatpak.org/en/latest/sandbox-permissions.html

It looks good, but it seems many applications still require filesystem=host to run (https://flatkill.org/2020/). Also, its sandbox solution isn't going to work on Windows, Mac and BSDs.


> It seems it needs the user to manually select a file/folder to be used, like Android or iOS do.

please select your home directory for our super-awesome functionality


It doesn't need to; the server has the master key, and the data it cares about is captured on each HTTP request.


In which world do companies use Rust, rather than C++ libraries that are over 20 years out of date, to make your argument work?

I'm sorry, but the likelihood of zero-days in a C++ program that was too careless to handle char arrays or multithreading is way off the charts compared to heap-spraying bugs in a VM.

I'd argue that native applications are therefore far more vulnerable than web applications.


The Web is the modern version of timesharing/X Windows: the owner of the application logs every single HTTP request and its contents.


The web can function as a fully offline application runtime.

Desktop applications can track your every click with HTTP requests to their server.

The runtime does not determine this.


Desktop applications can track, Web always tracks.

And if by offline you mean PWAs, unless you are doing Hello World, that app cache is going to be cleared and then analytics can be updated.

Finally, there is very little a pure PWA can do without doing HTTP and Websocket requests.


> Desktop applications can track, Web always tracks.

Desktop code cannot be verified. Web code can be verified. This argument goes both ways.

If we're talking about the web-equivalent of Desktop Apps: There's electron, which can do far more than HTTP requests and WebSockets - but that on the other hand is also too bloated, right?

Sometimes I wish people would just make up their mind, stop complaining, and start trying to fix it. We had servo, and we had a nice modular and privacy respecting future for everybody; and then we messed it up because apparently nobody really cares about it.


> Desktop code cannot be verified

My gentoo install would like to have a word with you


Web code can be verified?!

Please explain to the crowd how you verify the SaaS server side.

Electron is an abortion that will eventually follow in Active Desktop's footsteps after its fashion hype curve dies out.

In any case, Electron-based apps can do whatever the user account they run under can do.


Client-side code can be verified.

If it's in your browser, you can see the code it's running and the data it is transmitting. And you're only a single uBlock Origin (or other browser add-on) filter away from blocking it.


You're missing that there doesn't need to be a SaaS server side.

HTTP is simply the delivery mechanism for a self-contained WASM application. Just like it is the delivery mechanism for most native binaries now.


All web applications may be spyware, but using web technologies as a common format for specifying a UI has benefits, even if it's not through something as heavy as Electron.

Having an interface I can just stand up on a port and access locally through multiple different browsers, or even expose to remote users if I want, one that is the same across every OS for zero additional shipped-library cost and works from every programming language, is an amazing thing.


Really amazing, X Windows, RDP, VNC, Views,... never happened.


The only one of those even remotely similar to what I'm talking about is the X Window System protocol, and that's never been close to ubiquitous, which is something you can say about web browsers. HTTP+HTML is so ubiquitous that many (all of the most popular) operating systems ship built-in components to handle it, and then people often have one or two additional clients to handle it.

For RDP, clients are easy to come by (but not ubiquitous), but the server side tech is limited, and generally OS based and not application provided.

For VNC, client and server tech is easily available as a library to all, but it's still an additional layer on top of your application which you need to layer on and then communicate as a separate step to any client that needs remote access.

I'm not familiar with views, but I'm not sure how it could be any easier to use than VNC without limiting where it can be easily deployed.

Opening a port and taking a few commands is simple. There are myriad libraries to help handling requests, and in some languages rolling your own is a matter of tens of lines. The client requires nothing that every person that would want to use it doesn't already have, and if you want to support remote access, you literally just change the interface you bind to from localhost to 0.0.0.0 or the actual IP address. All additional firewall config is something you would likely have to deal with in every other technology as well.

Now, don't get me wrong, I'm not saying this is the best UI paradigm to target. There are many things better about specific UI libs, but none of them come close to the cross-platform capability and simplicity of just using HTTP+HTML. Do I want major applications delivered this way? No. Do I think it should be the ultimate choice for most programs that have the time and resources to do otherwise? Probably not. Do I appreciate that I can write a Perl/Python/Ruby/JavaScript script and package it with its runtime or ship it as a script (or just use a compiled language) and it will just work on basically any platform I'd want to run it on, and a client (browser) exists for everything someone would want to use to configure it? Hell yes.


Self-hosted web apps. You develop for one platform, the web, and it works for nearly every OS.


Almost no one does it, and every SaaS application knows more about you than you think.


Most of those SaaS have native mobile apps which track you, too. The runtime is not to blame. FOSS applications could be delivered with near-instant load times, native feel and excellent sandboxing, with one universal build target.

That is a remarkable ability.

Do not let the crimes of Web 3.0 blind you to this.

Tracking and disrespect for privacy are utterly orthogonal to the runtime.


Native can track, Web always does.


Almost no one needs to develop a self hosted app, because there already are self-hosted apps for lots of use cases out there.


Such as?


I was going to list some use cases individually, but you can get a better picture by skimming through this: https://github.com/awesome-selfhosted/awesome-selfhosted

It's rare that a self-hosted piece of software is not present on this list. As you can see, the coverage is pretty extensive.


Nice list, and what is the market share of them?


There are things like Minio, which lots and lots of people use as an S3-compatible object store because it's so simple to set up. Most S3 alternatives are just a part of a much larger filing system that's more demanding to set up. NextCloud is pretty big as well, though I don't know the exact numbers.

I think in the end, it doesn't matter as much. These are meant to be deployed for a cohesive user base that numbers between a single geek and an entire region of a country or small org.

They most often use libre data formats and protocols to store and communicate data. In such a situation, the network effect is less pronounced, and measuring market share isn't as important. As long as the service works reliably and meets user needs, I don't think people will clamour to replace them with proprietary solutions.


> All Web applications are spyware and security is not given, hence OWASP.

Web apps are still the most secure platform to date; nothing widespread has really come close in terms of sandboxing & safety.


Really? How certain are you about the data stored on someone else's computer, being logged on every HTTP and WebSocket request?


Some tracking will always be possible regardless of the platform, but compared to native apps the comparison is clearly in favor of web apps. Web apps make cross-website tracking very difficult, web apps don't have private APIs, web apps can be easily inspected to see what they do and even edited on the fly by the user if needed (adblocking), and the sandboxing of web apps is very strong, with exploits getting rarer and rarer.

There's a reason everybody asks you to download their native app, tracking is much easier there.


You can only see the UI of Web apps, and tracking is super easy with marketing engines.

No one asks you to download native apps for desktop platforms, unless it is some Electron garbage for whatever reason, usually for stuff that is available to PWAs anyway.

Everyone asks for native apps on mobile, because development just sucks less there.

Whatever, it is great that people believe Web apps are so safe with their data, more fun when creating analytics rules.


Space Station Marshall: Men cannot grow beards in space.

Bearded Man #1: But I have a beard.

Marshall: Well, then you're an alien.

Bearded Man #1: No I'm not.

Marshall: Yes you are.

Bearded Man #1: No, I'm not.

Marshall: Well, then you can't grow a beard.

Bearded Man #2: But he has a beard.

Marshall: Well, then he's an alien.

Bearded Man #3: He's not. He's from Pittsburg.

https://vimeo.com/7117832


I author a CSS 2.1 rasterizer, and a partial implementation of it is used in a game engine my company's open source organization publishes.

There's such a small number of people who have written their own pieces of visual web client technology, and an uncountable number who consume it.

I've entertained the idea of writing a partially compliant web browser and releasing that for fun. It's still totally possible to write your own web browser today.

You will of course need to put in more effort than an exceptionally small number of people have, and even after you do that, you'll only have something partially compliant. But it will be valid!

Hell, you could build an HTML5 valid web browser that didn't even use CSS. Invent your own thing. Make JSON style sheets or something.

Anyway. We don't have enough people toying around with web tech. For years, I've only ever seen people toy around with the easy stuff. Things like making little web clients to help you with API requests instead of turning to curl, or rehashing a CSS framework for the nth time.

And frankly it's sad to see, because it's so uninspired and boring.

Where are the people creating basic web browsers that use C# as a programming language instead of JavaScript? Or people inventing a new hypertext markup language as an alternative to HTML that still uses HTTP as a transport protocol?


SerenityOS probably has the most serious attempt I've seen at "let's make our own web browser for the sake of it" (along with the rest of the OS). I think they are still working towards Acid2 compliance, but they also have a smattering of newer features supported as they hack at getting newer sites to work. Not a very wacky take, though, just a home-grown one.

http://serenityos.org/


WPF and WinUI.


Wait, why are alert, prompt, and confirm being deprecated? So we are going to need to write polyfills for old features?


https://github.com/whatwg/html/issues/2894

Apparently they are. How disappointing, I really like how useful they are for quickly getting something working and maintaining a consistent and expected experience for the user.


Where in that thread are they agreeing to deprecate it? This was someone random asking to deprecate, with the person in charge saying the usage is too high...


I also don't agree with deprecating this without a feasible alternative for asking the user of a page something with a single line of code, one that works well on desktop and mobile without any extra CSS needed to make it mobile-friendly.

The original proposal of the author of that issue is to use the Notification API, but that is not supported in IE. And a lot of web apps in B2B use alert and confirm extensively.

I feel this is a solution for websites abusing this feature that will cause a lot of maintenance effort in a lot of legit applications.

Here is a better alternative (of course with a lot of drawbacks that I cannot think of now in 5 minutes): make the dialog time out after a period by default. On timeout, the dialog disappears without any action/change happening in the rendered page.


These are all terrible from a UX perspective. What in the world were you doing that couldn’t be solved in a more user friendly fashion?


That is a very broad statement. What specifically do you find terrible?

Here is a list of issues I often have with JS-based alternatives that do not exist with alert/confirm/onbeforeunload:

  - the escape key does not close the modal
  - tab focus is not restricted to the modal
  - the modal is not properly announced by screen readers
  - the positions of confirm and cancel buttons differ across sites, leading to misclicks
  - the modal is not or very hard to use on mobile
Building good modal dialogs is hard and much easier done in the browser than on the page. Even <dialog> (if it ever becomes a reality) will not solve all of these issues reliably.

So in a way I agree with you: there might be more user friendly solutions. But the average alternative that people come up with will be worse, not better.


I use confirm() on occasion as an "are you sure you want to delete this?" type of protection against misclicks that doesn't involve coding a modal or something.
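Something like this sketch, where deleteItem and itemId are hypothetical stand-ins:

  // Misclick guard in one conditional; no modal markup or CSS needed.
  deleteButton.addEventListener('click', () => {
    if (confirm('Delete this item? This cannot be undone.')) {
      deleteItem(itemId);
    }
    // a misclick just gets cancelled in the dialog
  });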


Same, there are probably hundreds of thousands of business back-end applications that use confirm, as shiny buttons aren't a requirement.

Will they just remove the API (so a JS error), or will it default to "no"?


Who cares? I mean, sure, if you want to do a shiny application for consumers, you care. But 99% of business software doesn't care and wants something that just works.

I use alert, confirm and prompt a lot because they're the simplest way to notify the user, ask for confirmation or ask for some input that just works, in all browsers, with vanilla JavaScript, without having to code any CSS (which I hate) or include huge frameworks.

They are used extensively in all enterprise software, where you don't need to be fancy but need to produce something that works reliably.

Removing them, to me, is a terrible idea - even more terrible if there are no alternatives. Yes, there is the dialog API, but it's supported only by Chrome, and it's not as simple as the good old alert, prompt or confirm functions. And we know that in the enterprise world we would have to wait years to have all the browsers compatible with new APIs; there are still a ton of people using Internet Explorer...

By the way, this is such a big breaking change that to me it would require an entirely new version of HTML, to the point where, if browsers encounter an old HTML document, they keep the old behavior. But they removed the DTD with HTML5, leaving just <!doctype html>, which to me was a terrible idea.


Isn't the better solution in that case for the browsers to collectively improve them, rather than ditch them and have every site roll their own (with custom styles and the over-the-wire weight of the code to implement it)?

In your opinion, what's terrible about these from a UX perspective? Is it just the styling or something else?

The alternative to a default confirm prompt is going to be someone including Bootbox in their site, and I'm not sure how that's much better.


A single line of code


These functions are blocking, so they can't be polyfilled.
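The closest you can get is an async replacement, which isn't a polyfill because every call site has to change. A rough sketch built on <dialog>, assuming the browser supports it:

  // Not a polyfill: callers must switch from `if (confirm(msg))`
  // to `if (await confirmAsync(msg))`, which only works in async code.
  function confirmAsync(message) {
    return new Promise((resolve) => {
      const dialog = document.createElement('dialog');
      dialog.innerHTML = `
        <p></p>
        <form method="dialog">
          <button value="cancel">Cancel</button>
          <button value="ok">OK</button>
        </form>`;
      dialog.querySelector('p').textContent = message;
      document.body.append(dialog);
      dialog.addEventListener('close', () => {
        resolve(dialog.returnValue === 'ok');
        dialog.remove();
      });
      dialog.showModal();
    });
  }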


Surely it can be done with a `while (!userHasConfirmed) {}` loop.

Edit: I'm mistaken, because the loop also blocks the event handler that would be able to toggle the while condition.


Yet another reason it's bad for asynchronous functions to be a separate type from normal functions.


What do you mean? Do you think all functions should be asynchronous? Do you think no functions should be asynchronous?


Ideally, all functions should be async-transparent. That is, it should be up to the caller to decide how to invoke it. This is usually done with some form of green threading.

It all works great, right up until the moment you have to interop with another language/runtime that doesn't understand your bespoke async. Callbacks (and layers over them such as tasks/promises) are uglier and necessitate opt-in async, but you can interop them to anything that speaks the C ABI.


Green threading is multithreading, right? Isn't JavaScript single-threaded? I would assume adding multithreading would break tons of code that relies on it running single-threaded.


It would break the same as sprinkling "await" all around your codebase. Asynchrony in general is not free - you have to redesign around it regardless, and deal with the issues it introduces.


Callbacks have been in Javascript since the beginning, and await is basically syntactic sugar for that.

Having everything switch to blocking and have to start using mutexes and other multithreading primitives would be a giant change to the language.


Callbacks have been there, but most code wasn't written with that in mind. And you still need some synchronization in async code - even if it's all scheduled on a single thread - due to re-entrancy issues.


All javascript functions should be asynchronous-capable, in a way that's invisible unless you actually touch asynchronous features.


Parody aside, we actually need something like this.

Here’s a non-exhaustive list of breaking changes to the web platform:

https://github.com/styfle/breaking-changes-web


Take the word "deprecated" with a grain of salt. I've got a project that utilizes an HTML tag deprecated in 1993! https://github.com/kristopolous/TopLevel

It's <plaintext> which basically means "stop parsing for rest of the page". There's no way to close the tag. It's super easy to implement which is probably why it's still around.

Deprecated 28 years ago in HTML 1.1, yet still supported in all major browsers. Test page over here: http://9ol.es/TopLevel/example.html reference rendering: http://9ol.es/tl.png

There's some modern timing issue in Chrome, I think; it's intermittent. Looks like there's a bug.

My original post on the hack, blowing off the cyber dust from 2014: https://news.ycombinator.com/item?id=7850301


The other fun tag that similarly changes parsing is the <XMP> tag - it is similar to plaintext, but can be closed. I'm unsure if it is still supported, because I haven't used it for a decade and I'm not near a PC at the moment.


That'll be a fun tag to use the next time I find an XSS vulnerability.


Do that followed by the Unicode RTL override character http://www.unicode-symbol.com/u/202E.html


This is a crazy hack! I'm amazed it works.


It appears to actually be flaky these days. I'll have to get back to it and figure it out. There's something subtle going on in mobile Chrome; things are being done differently. The image appears to get pre-fetched even though technically, according to the old-school <script> blocking rule, it shouldn't.

I'll have to check the Blink source whenever I have some free time. There's probably a strange way around it (for instance, maybe convincing the browser it's a really old website so it reverts to the traditional policy for compatibility, or perhaps there's another strange old feature I can leverage; I dunno, I'll have to check). And yes, I know this is just pure theater and it's completely useless, I still want to do it well!


> Forms with passwords marked Not Secure over HTTP

It requires a rather curious definition of “breaking change” to consider this one.

> A̶r̶r̶a̶y̶.̶p̶r̶o̶t̶o̶t̶y̶p̶e̶.̶f̶l̶a̶t̶t̶e̶n̶ ̶b̶r̶e̶a̶k̶s̶ ̶M̶o̶o̶T̶o̶o̶l̶s̶ renamed to Array.prototype.flat

That doesn’t belong in the list at all; it’s a prime example of the platform bending over backwards to avoid a breaking change, for better or for worse (it means that future users are stuck with an inferior name, see also contains which got renamed to includes because of, if I recall correctly, MooTools again).


If I'm interpreting the list right, I think they agree with you about flatten. I think the strikeout is supposed to indicate that the struck portion would have made the list, but they took corrective action. I spelunked through the commit history and the struck portion was indeed unstruck originally, and then when the situation was resolved they crossed it out and added the description afterwards.


I'd add HTTP Public Key Pinning (HPKP) to the list. I was burned by that one.


Could you elaborate more? I’ve seen advice that it’s not recommended (and not been recommended for some years), but I’ve also seen questions in recent times by app developers who are bent on using it to “increase the security” (as it relates to where the apps want to connect to securely without any interception/modification).


Sure, I'll just defer to an older comment on this: https://news.ycombinator.com/item?id=17779395

---

8 points by buu700 on Aug 17, 2018 | parent | favorite | on: OpenPGPjs has passed an independent security audit

We (Cyph) have been pretty disappointed in the Chrome team's decision to kill HPKP.

Paraphrasing, but IIRC the reasoning pretty much boiled down to "it's a pain to maintain and Expect-CT is kind of similar anyway" — which I think is a really weak justification for harming end user security and breaking established APIs that people depend on in production. Fingers crossed that Firefox keeps it alive! [Narrator: They didn't.]

That said, it doesn't entirely break WebSign in Chrome, just weakens a bit further below strict TOFU. https://www.cyph.com/websign goes into detail, but WebSign has some client-side logic to validate its own hash against a signed whitelist. The major downsides to relying on this are:

1. It depends on a caching layer, not a security feature. This means that any guarantees are potentially out the window if a browser vendor decides to do something crazy for performance reasons or whatever.

2. It opens up an attack vector where it can be forcibly unpinned by filling up the user's disk and making the browser evict the cached WebSign instance.

All in all I think it's still basically fine, but shipping an optional browser extension for hardening WebSign is now a higher priority because of this.


Hmm, https://www.cyph.com/websign-architecture - the HPKP suicide bit is a beautiful hack, but it's so far removed from the motivating purpose of HPKP that I don't think you can really blame web browsers for not caring.

Although I guess I'm kind of surprised that worked. I'd assume that service workers could fall out of cache before HPKP at random, and then your app would just be bricked(?) Seems like a bad failure case that could just happen without anything malicious going on, but maybe I just don't understand how service workers work well enough.


Ah yeah, 100% agreed. I think it was a cool concept, but if we're being fair we were practically exploiting a vulnerability in HPKP to produce unintended behavior. (On that note, one of the HPKP Suicide demos we presented at Black Hat and DEF CON was actually a ransomware concept.)

I'd assume that service workers could fall out of cache before HPKP at random, and then your app would just be bricked

Well... that did actually happen on occasion, although IIRC it was considered to be an edge case browser bug in the ServiceWorker and/or Persistent Storage implementations rather than expected behavior, since the locally installed worker shouldn't have been wiped before its replacement had been successfully fetched. We had to set up a support page with instructions to unpin the keys through about:config / chrome://net-internals, which wasn't really ideal. (Both browsers did end up actually fixing this, not that it ultimately did us much good.)


HPKP locks you into one public key forever, so you can't ever rotate private keys for your website. (You can rotate the cert, but this isn't the same.) Heartbleed was one time where your keys would be leaked and you'd have to rotate, but even normal business processes prefer key rotation (and heaven forfend you ever lose it!). Too much burden for too little gain.


Plus, I think it can make a domain unusable forever. You can end up buying a domain you cannot use because the previous owner had used HPKP.


No, HPKP didn't lock you into one public key forever. You could rotate keys. The HPKP header had an expiry date and let you specify multiple keys, so you could add a new key to the list and switch over when the previous key expired.
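For reference, the header looked roughly like this - placeholder hashes, and the Express-style wrapper is just a hypothetical carrier, since HPKP was only ever a response header:

  // HPKP is gone from browsers now; this is only to illustrate the rotation story.
  app.use((req, res, next) => {
    res.setHeader(
      'Public-Key-Pins',
      'pin-sha256="HASH_OF_CURRENT_KEY="; ' +  // placeholder value
      'pin-sha256="HASH_OF_BACKUP_KEY="; ' +   // pre-pinned backup enables rotation
      'max-age=5184000; includeSubDomains'
    );
    next();
  });

IIRC the spec even required at least one backup pin that wasn't in the served certificate chain, precisely so you could roll over to it later.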


It makes sense to use HPKP to pin to a CA (maybe a CA's intermediate, I can't remember what they let you do) or better, multiple. Depending on your expiration, and what terrible thing happens in the PKI universe, you should probably be able to resolve an issue if you've got multiple independent CAs pinned.


I'm red/green colorblind and cannot for the life of me tell the boxes apart


There is a colourblind friendly colour option that you can enable in the site footer


The boxes have (weak) stripes. But screenshots of a webpage are missing all accessibility features, so links to the actual caniuse.com articles would be helpful.

I don't understand how e.g. reddit allows text-as-images when that is discriminatory for people with eyesight issues.


I don't understand how any website allows images when they are discriminatory for people with eyesight issues. /s


> I'm red/green colorblind and cannot for the life of me tell the boxes apart

You can change the box colors at the bottom of the site :)


I'm on a B&W grayscale e-ink device ... and I feel your pain.


Caniuse, rather infuriatingly, used to deliberately not track support for obsolete/deprecated web stuff (it would just say something like "Feature X is obsolete, don't use it" instead of the browser support graph), but it seems to have changed course.

https://caniuse.com/?search=blink

thanks, caniuse!


Web developers should be angry that their craft has so much churn and leads to works that end up being ephemeral in nature. That people in the field exalt this state is upsetting because this state is bad for both users and developers.


Dear Mr. Client, a whole year has passed, the web app we developed for you is obsolete. Paying us a lot of money to develop a new version appears to be the only option.


Dear Mr. User, a whole year has passed. In order to keep up with the latest trends, we now require you to give permission for mining your user data via our new microservice every time you enter our site. We have made sure the experience will be a pleasant one, as we added 5 MB of JS and a full-screen video to keep your browser occupied while we process your requests. Be aware that nothing substantial has changed; only the UI has again changed for no apparent reason, the site is now once again thrice as large, and it hangs twice as much on previous-generation mobile devices.

Dear Mr. Software Developer, as you might know, all useful software now runs on the web. Writing a web browser is no small task, which is why at Google and the W3C we pride ourselves on growing our spec beyond any reasonable proportions to make sure you don't try to create a competitor to render a simple HTML5 page. After all, why would you?


... The article is about a single bit of potential churn that was significant because it was so unusual for the web to have churn in standardized features, and regardless the churn didn't end up happening. The web seems like a success story as far as backwards-compatibility goes.


Idk, I remember having to support IE6. That was not fun. Some churn is good.


Some web developers might get a decent amount of their income from maintenance.


Erm, alert() is being deprecated? Don't like half the "JavaScript for beginners" books use alert() for the first few "Hello World"-type programs?


It's not! It's just being able to trigger alert from an iFrame that is being reconsidered.


That's just the immediate change, but the Chrome team has said they plan on removing alert() entirely: https://twitter.com/domenic/status/1422647331804037120


The Chrome team are a bunch of entitled jerks. They keep doing these things completely tone deaf and disconnected from the rest of the world. Unfortunately this seems to be fairly common with large software vendors in general, as soon as they have a captive audience they immediately turn hostile.


It's a random Twitter thread where it's not even clear if they talk about alert() in general, or iframe alerts.


It's not a random twitter thread. That's one of Chrome's main standards-writing people. And he explicitly talks about the removal of these entirely.

There's more in Intent to Remove: https://groups.google.com/a/chromium.org/g/blink-dev/c/hTOXi...

--- start quote, emphasis mine ---

We’re on a long, slow path to deprecate and remove window.alert/confirm/prompt and beforeunload handlers due to their role in user-hostile event loop pausing, as well as phishing and other abuse mechanisms. We’ve been successfully chipping away at them in various cases, e.g. background tabs, subframes with no user interaction, and now cross-origin subframes. Each step is hard-fought progress toward the eventual goal

--- end quote ---


Thanks for this!

It ought to be a lot easier than this detective work to discover this plan. Like, we should probably all know about it, and start working now to replace all use of these user-agent modals?

(rails-ujs, for instance, still uses window.confirm for opt-in "are you sure" on form submissions. Not sure what a good replacement is, honestly. This is something I would hope to see people discussing and figuring out...)


Still, just a bunch of Google Devrels that decided out of the blue to break the web, because... well, they can.


Exactly that. It is amazing that a bunch of isolated developers has the ability to inflict such damage on the web by breaking backwards compatibility.


okay, that’s actually a useful source, thanks.


oh wow. that's gonna break a LOT of things. I hope they agreed the value of that removal has to be very high for that level of breakage, and decided it was... if decision-makers just don't care about breakage anymore, that would be disturbing.


> being able to trigger alert from an iFrame that is being singlehandedly removed by the Chrome team.

Fixed that for you. It’s not being reconsidered; the Chrome team makes decisions for the web whether you agree with them or not. They might pull the “proposal” only to just do it again later. [1]

More context on the same blog [2]

1: https://www.quirksmode.org/blog/archives/2017/09/chrome_brea...

2: https://www.quirksmode.org/blog/archives/2021/08/breaking_th...


Disclaimer: I'm the one who made the change in [1] so I'm biased but...

IMHO the argument on the blog mischaracterizes the situation - Chrome didn't break these properties but changed how they're interpreted under pinch-zoom. This was done precisely to keep backwards compatibility (on desktop browsers): at the time, the vast majority of pages assumed pinch-zoom can't happen on desktop (something that was becoming more common). The status quo meant zooming in on a desktop page would cause it to "swim" as various "fixed" elements shifted around, JS drop-down menus appeared in the wrong place, etc. This happened virtually everywhere one looked: facebook, twitter, apple.com, etc.

The blog basically argues for "make pages fix themselves" which, even if major sites do, is unrealistic in the long tail.

> They might pull the “proposal” only to just do it again later

It's not nefarious, this often happens in response to feedback and real world experience to try and minimize disruption. In this case, developers convincingly argued that there should be an API to better react to pinch-zoom before making the change.


Then you go to the standards body.


> Note how browser support was short-lived.

The author could have used the relative date view. 8 years (implemented in all major browsers by 2012, deprecated in 2020) is not short-lived in my book.


Unless I haven’t read deep enough into that Groups thread, this seems overblown. alert, prompt, etc. aren’t being deprecated on the entire web, only in the context of cross-origin iframes.

That being said, this post definitely strikes a chord with me. Instead of adding a new index of deprecated features, I think it speaks more to the fact that caniuse needs to rethink how it approaches deprecated features.


No, they will be deprecated. The decision has been made:

https://news.ycombinator.com/item?id=28310716


Wow, it is shocking to me that `substr` was deprecated. Searching for it, I found the function was not part of the standard in the first place.


`substr()` is in Core JavaScript 1.2 (pre-ECMA standard), [del]introduced with Netscape Navigator 3.0, which was officially launched in Dec. 1996.[/del]

Edit/correction: Core JS 1.2 was Netscape Communicator 4.0, which was in beta in 1996 and launched in June 1997! (The 4.0x series was still pre-ECMA, as opposed to the 4.x series.)


Yeah that one caught me off guard too. Any idea why? The only reason I can think of is that it's very easy to confuse with `substring`.
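
For anyone else who mixes them up, the second argument means a different thing in each, which is presumably a lot of the confusion:

    const s = 'JavaScript';

    // substr(start, length) - the second argument is a length
    s.substr(4, 6);      // "Script"

    // substring(start, end) - the second argument is an end index (exclusive)
    s.substring(4, 6);   // "Sc"

    // slice(start, end) behaves like substring but also accepts negative indices
    s.slice(-6);         // "Script"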


Back in 2014, I built some very flashy interfaces using Polymer components just for fun. Fast forward to 2021, and these pages are completely broken. They don't even display the text that I wrote.

I'm guessing that the depreciation of HTML imports is why they don't work anymore.


It depended on Web Components v0 stuff that was enabled in Chrome before it was ready.

You may find that the component does still work in Firefox (which never implemented the stuff that Chrome eventually removed, and for which Polymer included polyfills).

Also, my hobby horse: HTML imports were removed, not just deprecated (or depreciated). Two (three!) very different things.


Decimated?


… and that word alone means two very different things because of incorrect metaphorical usage! (Its original sense, now rare, was killing one tenth of a group of people, and so metaphorically the destruction of a tenth of something; the typical meaning now refers to a much more drastic reduction than ⅒, very commonly almost complete.)


If the pages worked in Safari and Firefox then they would still work in Chrome now. The only pages that would have broken are those that didn't load the polyfills at all.

It was a bad situation which I hope is never repeated.


About Chromium's cross-origin removal of alert and co. (link in the article):

"We haven’t engaged with other browser vendors regarding this change yet, but plan to submit a spec change proposal once the change is approved for Chrome."

So Chrome just changes it, and then officially applies for the spec change, so other browsers might follow - or not.

I mean, why pretend at all that you care about the spec when you are a fat monopolist?


Eh, I don't like how much influence Chrom(e|ium) has but this is either missing information or intentionally misleading to paint a picture.

This is how the spec process works:

1. You find out if you want to implement something

2. You find out if others want to implement something

3. You implement your something (for simple things in a testbed, for complex things as an experimental change over multiple development versions)

4. You demonstrate the something works

5. You demonstrate others are intending to ship as well (For complex things you may have to wait until others are comfortable with their working implementations)

6. If you had a bad experience and many discussion points in 4/5, you go back to 4 and refine until you have a good experience and your spec is merged.

You imply the Chromium authors went 1, 3, ship because you read about step 1 happening. In reality they consulted with Firefox and Safari prior to merging code into the codebase or simply lobbing a spec proposal over the fence. https://github.com/whatwg/html/pull/6297


OK, all I know is that the site in question, canistilluse.com, has as its main information that in the next version of Chrome (95), "alert" cannot be considered to work as expected by the standard.

For me, that comes a bit out of the blue - and it seems like a "making facts" attitude from Chrome, not a discussion.


Those colored squares from canistilluse.com are not "all you know", considering your original quote was pulled from the same page I pulled the above Firefox/Safari signals+timeline and spec PR link from: https://groups.google.com/a/chromium.org/g/blink-dev/c/hTOXi...

Also canistilluse makes no claim about whether the deprecation matches standards or not. For reference the standards change was actually approved back in February, per the link in my above comment, so it is actually according to standards.


Does "approved for chrome" mean it's going to be implemented, or just that there's enough people in favor on the Chrome side that they think it's worth submitting for standardization approval?


The bigger problem is that the web itself is rotting away as we speak. Yes, browsers no longer supporting old API calls is a huge problem, for instance for archived content that at some point will simply stop working, a bit like a 78 RPM record. Good luck finding a player. All the Flash content and so much other work that is part of our digital record is no longer working (and I absolutely loathe Flash).

So whether or not you can still use it today, the bigger question is will you be able to use that website 10 years or longer into the future? Because any book ever printed can still be read today (assuming you know the script and the language it was written in), I think the longevity of the web will top out at a couple of decades at best before the digital termites and worms will consume the devices that could have rendered the content you are interested in.

Plain ascii text will likely live the longest, with markdown as a good second. Anything that executes will likely simply die.


It's already happening, with consequences. Remember Manifest V3 nerfing all the webextension APIs used by ad blockers?


Last time I heard this only affected chrome. So if you want ad blocking, just use a different browser.

Kinda obvious that the browser you get from the ad company for free is not for you if you do not like ads.


Why has nobody mentioned window.open? It's not deprecated, but there are popup blockers everywhere.

This still causes problems with old apps.
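
Roughly speaking, most blockers only allow window.open when it's called synchronously from a user gesture, which is exactly what a lot of old apps don't do (the button element and URLs here are just illustrative):

    // Usually allowed: opened directly in response to a user gesture.
    button.addEventListener('click', () => {
      window.open('https://example.com/help', '_blank', 'noopener');
    });

    // Usually blocked: opened after an async gap, which old apps do a lot.
    button.addEventListener('click', async () => {
      const url = await fetch('/api/report-url').then((r) => r.text());
      window.open(url, '_blank'); // by now the user activation has typically expired
    });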


Popup blockers are user controlled, feature removals are not.


Oh man, HTML imports looked sweet. Why was that deprecated?


IIRC it was going to conflict with JavaScript imports. So now they are going all in on JavaScript. See, e.g., CSS Modules: https://github.com/css-modules/css-modules


I honestly don't understand what great feats of technical debt removal or code simplification browser devs are hoping for after removing alert() etc. JS engines will still need the ability to stop-the-world, unless they want to get rid of breakpoints next.

I'm also of the opinion that chrome devs should first try to write a "javascript for absolute beginners" tutorial before giving sage advice what such a tutorial should teach and what it shouldn't.

... though I can sort of understand the viewpoint. From a browser developer's PoV, you could probably never be too early in teaching about the event loop, promises, callbacks and async/await. Except that this seems at odds with how programming and JS is actually learned by users.

This reminds me a bit of the Java language design, where you're in theory supposed to understand classes, objects and static methods before you can even write a "hello world" program.


At some point, I started making a similar thing for C++ (arewemodernyet). I didn't even include a provision for the case that features go away - I think I just stored for each compiler the minimum version a draft (Nxxxx) is implemented in. The thinking that "everything becomes green over time" is just so ingrained.


Not sure what's wrong with some aspects of an API being deprecated over time. Of course if it's not done prudently and you can't rely on core features to work anymore after a short period of time then it starts to get annoying. But opposing any deprecation at all seems like the other extreme.


It isn't extreme at all, it's the basic contract of the web. If something works today and is standardized, it should keep working forever.

The main reason is the gigantic size of the web and the state most of the web is in. Unmaintained. There's no team to update anything, so you're purposefully breaking the web.


Alert was always annoying, but it was never as annoying as sites asking for notification permissions.


Alerts steal and block focus. A notification permission request doesn't keep me from scrolling the page, or clicking a link.

If alerts were as common as notification permission requests, it would be absolutely infuriating.

Disabling notification requests is a simple browser setting and I haven't even bothered with it. If alerts were as common as notification requests, I'd do whatever it took to disable them.


I'm salty that they were ever implemented in the first place.


I don't know that anyone should be worried about supporting anything in MSIE 11 unless site analytics reflect a significant number of users with that browser version. I annually audit a company's access patterns to see which browsers and resolutions we need to worry about. This affects development, QA and support at the very least. For example, usage patterns in the US alone reflect 0.64% and dropping as of July. https://gs.statcounter.com/browser-version-partially-combine...


That link about killing alert() refers to only cross-origin iframe use of alert.


Actually, it would be nice if we could retire old image formats. For example, if/when JPEG XL becomes universally supported, the WebP format will be entirely redundant (lower quality, worse compression, no progressive, and it's not even faster to decode) and AVIF could be retired too (comparable compression, but much slower, and much more bloated HEIF container).

But currently browsers are Katamari Damacy balls endlessly accumulating more and more code. How long can we keep just adding without removing anything?


I sure hope so, because they're absolutely not a waste of bytes. Unless you personally are planning on converting all existing use of those old codecs everywhere?

What you're suggesting is an archivist's nightmare.


Redundancy assumes that either the original still exists, the encoded copy was lossless, the original was JPEG (which can be losslessly upgraded to JPEG XL), or that the new lossless copy of the old lossy intermediate will outperform. In reality the vast majority of AVIF and WebP images are lossy without the original copy meaning such a change will either force worse overall efficiency to now be a lossless reproduction of a lossy image or generational loss each time a new format takes over.

Not to mention the breakage and work effort during such change periods.


Just a quick note, alert/prompt/etc are NOT in danger of being deprecated. Chrome is just considering not allowing them to be triggered from iFrames for security reasons.


> We’re on a long, slow path to deprecate and remove window.alert/confirm/prompt and beforeunload handlers due to their role in user-hostile event loop pausing, as well as phishing and other abuse mechanisms. We’ve been successfully chipping away at them in various cases, e.g. background tabs, subframes with no user interaction, and now cross-origin subframes. Each step is hard-fought progress toward the eventual goal, and we should consider carefully whether we want to regress, even in an opt-in manner.

https://groups.google.com/a/chromium.org/g/blink-dev/c/hTOXi...


I wonder what is user-hostile about it. Users have no concept of the event loop.

Can’t remember being harmed by an alert() since forever. Last time was probably six or seven years ago when pop-under ads were still a thing.


Alert (essentially) blocks the main UI thread. This means that if you have a while(true){alert} kind of thing, it can be impossible to exit the page since you first have to exit the dialogue before you can take any other action (but of course the dialogue is immediately retriggered). This is better nowadays, but can still cause issues.
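
Something like this (deliberately contrived) used to make a tab effectively unusable:

    // Each dismissed dialog immediately spawns the next one, and while a
    // dialog is open the page's event loop is paused, so the user can't
    // interact with the page in between.
    while (true) {
      alert('You cannot escape');
    }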


I'd love to see figures on how many people this hits in the wild, especially since you have the "prevent this site creating more dialogs" option.


This was solved many years ago by having dialogs scoped to the active tab. You can just close the tab.


No, blocking them from iframes is just the first step. alert/confirm/prompt are absolutely in danger of being deprecated and eventually removed.

Browser makers have declared they’d like to remove them altogether: https://github.com/whatwg/html/issues/2894. But they’re too popular to remove under current policies unless they’re really causing harm, and I do believe there’s serious risk of them classifying it that way. (They’re actually a bit of a maintenance burden, as three of the few remaining synchronous things.)

Remember also the big difference between deprecation and removal. I’d go so far as to say I think it’s likely that alert/prompt/confirm will be deprecated and produce console warnings in at least one major browser before four years pass, though I don’t believe they’ll yet have justified going ahead and removing it.


Your link says the exact opposite of everything you wrote.

The post is by a random dev who doesn't work at a browser, and it was closed by someone who does after they said they don't deprecate things unless they plan on removing them.


That someone in the WhatWG thread works for Google, and is the same person saying they will be removed in the chrome developer mailing lists, quoted and linked above.

In fact, almost every argument for their removal seems to be made by the same person. There doesn’t seem to be any other evidence of browser makers having consensus on this.


It’s definitely not the best source—the best sources are scattered about the place and probably more just murmurings than clear statements of intent—but it does include one positive signal from browser and spec makers, annevk on the reason for reopening it:

> I figured it deserved some more implementer input especially since there seems to be interest in removing them eventually

It’s not spelled out clearly, but I believe from what I’ve seen of processes there and from what I’ve read elsewhere that he’s talking about implementer interest rather than just-anyone interest. Consider also how Chromium actively gathered usage statistics and Firefox wants to, which you only do if you want to remove the thing.

They definitely want to remove it altogether, they just don’t reckon it’s feasible at this time, and it’s definitely not a high priority for them.

There are other sources, most likely to be found in previous discussions on HN of this rough topic, but regular search for this stuff is just about impossible now because it’s been drowned by stuff pertaining to the block in cross-origin subframes.

(bryik and anonydsfsfs have now found better citations, refer to their comments.)


I think they may have meant to link this instead:

https://github.com/whatwg/html/issues/6897#issuecomment-8857...

there's also another reply to the parent with a link to the chromium issue tracker confirming this


The link titled "Deprecate alert(), confirm(), prompt()"?


Yes, it's an issue created by a random person on GitHub.


I dunno, why did the OP cite this?


Does deprecation of web APIs actually mean anything in terms of support though?

Last I checked, frame sets have been deprecated for something like a decade, but they still work fine.

It seems like deprecation on the web really just means "if you're new here, know that this is considered bad practice now and we're only listing it for completeness"


E4X is another one: supported by Firefox way back when, but it never made it into any other browsers and FF eventually removed it.

https://en.m.wikipedia.org/wiki/ECMAScript_for_XML


> All green boxes indicating support, with a note at the bottom: “this feature is deprecated/obsolete and should not be used”.

CanIUse should just color these deprecated features in orange, meaning "it works but don't use it".


All my HTML still works. Just don't go overboard with frameworks and Javascript.


Yep. If you think like this, the best "can I use x" site is http://caniuse.xyz/. You can try it out with, e.g., http://caniuse.xyz/css-grid. I whipped it up and registered the domain on a whim in about 30 minutes.


> Get a new one not controlled by mega-corps and their invalid profit-motive PKI assumptions.

The very browsers that pushed that change founded Let's Encrypt to provide free certificates prior to pushing sites towards HTTPS. That's the opposite of a profit motive. There are also more free alternatives like ZeroSSL since.


That's a reasonable misunderstanding since the site is fairly terse.

To be clear, what I mean is that forcing HTTPS-only is something that is required by profit-motivated corporations. They see the potential risk of an HTTP downgrade attack as enough motivation to completely kill HTTP. And unfortunately even Mozilla is going along with it. HTTP has its place for human persons and human-person-run sites. It allows free communication without having to exist only at the whim of some CA, or of your mega-corp browser's decision that your cert is too long, too short, or too something.

I did not mean that the founding of LetsEncrypt was profit motivated. LetsEncrypt is good for what it is. But forcing HTTPS and not allowing HTTP at all is very, very bad for the web. Especially since CA usage follows a power law, as usual, and nearly everyone centralizes on LetsEncrypt.


> recently reached the threshold of (mostly) supported everywhere: the .webp file format.

Why has google (via lighthouse and pagespeed insights) been pushing webp so hard if it has only recently gone majority green?


<picture> has been green for more than five years now and can trivially fall back to jpg if you still need to support IE11.
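
For example, something along these lines (filenames are placeholders):

    <picture>
      <!-- Browsers that understand WebP pick this source... -->
      <source srcset="photo.webp" type="image/webp">
      <!-- ...everything else (including IE11) falls back to the plain <img>. -->
      <img src="photo.jpg" alt="Description of the photo">
    </picture>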


I wonder - with all the limitations being imposed on iframes, what do people do when they have a product that needs to integrate UI with other people's websites?


Put some JS on the host and use postMessage()
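
Roughly like this (origins and message shape are made up for the example):

    // Host page: listen for messages from the embedded widget.
    window.addEventListener('message', (event) => {
      if (event.origin !== 'https://widget.example.com') return; // always check the origin
      if (event.data?.type === 'resize') {
        document.getElementById('widget-frame').style.height = `${event.data.height}px`;
      }
    });

    // Inside the iframe: tell the host how tall the content is.
    window.parent.postMessage(
      { type: 'resize', height: document.body.scrollHeight },
      'https://host.example.com'
    );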


They use an iFrame, with the correct headers to enable what they need. If the site isn't trying to do something nefarious it'll work perfectly well.
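
For example (hostnames and feature names are placeholders):

    <!-- Host page: grant the embedded frame only the capabilities it needs. -->
    <iframe src="https://widget.example.com/embed"
            allow="camera; clipboard-write"
            sandbox="allow-scripts allow-same-origin allow-forms">
    </iframe>

    <!-- Embedded site, in its HTTP response headers, opts in to being framed
         by specific parents only: -->
    <!-- Content-Security-Policy: frame-ancestors https://host.example.com -->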


Wait a minute the lede is truly buried here — Chrome is killing alert/confirm???

This will break tons of stuff. What's the recommended replacement?


Only from iframes


Have you heard of https://deprecate.it/? :)


A pragmatic way to catch deprecated features during development is to lean on TypeScript's IntelliSense and Webhint.
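
For what it's worth, the editor side of that looks roughly like this (the function below is made up for illustration; TypeScript's own lib typings mark some legacy APIs the same way):

    /** @deprecated Use a non-blocking dialog instead. */
    function legacyConfirm(message: string): boolean {
      return window.confirm(message);
    }

    // Editors that understand the JSDoc @deprecated tag (VS Code, for one)
    // render this call struck through, so it's hard to miss during review.
    legacyConfirm('Delete everything?');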


Quick, add WebSQL to the list.


Things that have bitten me (combinations of actual breaks and things where it seemed that progress was happening in a particular direction and then it changed):

* use of geolocation & crypto APIs on http hosts

* websql

* changes in how browsers provided accelerometer data

* multipart/x-mixed-replace


Is Vivaldi not worth tracking?


Something less than 0.1% of users, and the default cutoff is 0.5%.


Chrome does not disable alert(). Alert() will still work, just not from cross-origin iframes.


No, they will be deprecated. The decision has been made: https://news.ycombinator.com/item?id=28310716


I don't consider some random Twitter thread official messaging.

Also the thread itself talks just about iframe alerts.


HN loves to bitch about whatever Safari doesn’t support at the moment but I use Safari as my platonic ideal web browser. It’s the only one that is directly designed for users and not web devs, and the only one that has a simple “buy our hardware and get Safari” business model.

I am automatically suspicious of web techs not supported by Safari.


It's also the one made by the company that doesn't want web apps to compete against their app store monopoly.


The bitching about the lack of a lot of features for web devs doesn't come from Safari being built for users instead of devs; rather, it comes from Safari being built to not compete with the App Store on iOS, where they get a 30% cut of all transactions vs. the 0% they get on the web.


my biggest chuckle is that he probably spent the most time trying to figure out how to center the div on his new parody site


Nah, `body { max-width: 400px; margin: 0 auto }` is the usual way of capping page width and centring it.




