But their view of the situation is completely correct; the fact that this one feature might be useful (heck, when Microsoft dumped XMLHttpRequest on us, that was incredible too) doesn't change the fact that we're now living in a web where Google does whatever it wants, adds whatever it wants to Chrome, and the web is at its mercy. If Google wanted to add <blink>, it would get added and supported, and other browsers would have to follow lest they become incompatible.
There's not really much of a difference between the web today and the "Best viewed in IE" web, and that's sad. There's a difference in developer goodwill, and of course things aren't as buggy as they were back then so it overall doesn't feel like you're "stuck" with Chrome like it did with IE, but... you are. You're just OK with it.
It saddens me that people worship Chrome when Firefox has a lot more potential in some areas. Hell, it was Mozilla, not Google, that gave us WebAssembly through asm.js and other awesome efforts. Microsoft gave us AJAX, and Mozilla reverse-engineered it back when standards were not properly standardized, which is why AJAX is so XML-focused in name despite being able to fetch other things.
Some software houses that supply the company I work for are refusing to work with anything but Chrome. So it really does seem like the bad old days all over again.
It's nothing remotely like the bad old days. You are correct.
It is, however, bad new days.
Yes, Chrome is a better browser than IE 6 was, but getting everybody on every team to test cross-browser compatibility is starting to get hard.
We're now seeing otherwise great developers who more or less openly admit they don't test in other browsers.
I call them out on it. It is unprofessional.
We fought to get to where we were a few years ago, when every modern browser was considered equal.
Don't let any web developer get away with Chrome only!
Just like nowadays, more and more pages only get coded for Chrome and might or might not work in Firefox.
IE6 outlived its expected lifetime by leaps and bounds because lots of customers were stuck on machines locked into running old versions of Windows for financial or corporate-policy reasons.
(Fun spitballing of doomsday scenarios: LineageOS gets more traction and needs to hard-fork; or Huawei is forced into a hard fork and takes the entire Chinese market with it, which could snowball across other markets; or some Android worm causes friction with the major carriers and they get back into the version/upgrade micromanagement game even worse than before.)
Today isn't a dreamland, but it is in no way as bad as the IE6 days.
But in the early 2000s, you only ever targeted IE 6 and maybe 5. Opera was that weird European thing, Netscape had disappeared by 2003, and Firefox was that silly open-source stuff used by people without friends or any sort of social skills. Those were the IE days.
They were not as good at bending standards groups to their will as Google has been, for various reasons, which is why standards did not always match their proposed implementations. But trying to paint IE as not engaged in standards is pretty misleading.
(Now I don't think they were too engaged in things like XHTML2, but neither was any other browser, because that group was fundamentally not interested in browsers being engaged, as far as I can tell.)
TBH, I think if you want a slogan then "Google is the new AOL" is better. When you look at what they're doing across technologies, it looks more like reinventing AOL's walled browser for the modern age.
I'm old enough. It's not different. Chrome is the new IE.
I'm old enough.
People sharing your view are agreeing with a broad assumption that isn't based in reality. Ironically, Chrome has had to adopt from the other direction (WebAssembly) and hasn't created much from whole cloth that other browsers have had to adopt... and developers certainly aren't hamstrung by these small additions that make Chrome a delight (for now). From a business perspective, a monopoly built on popularity is not the same as how IE was made "popular". It's VERY different from the old days in terms of quality and the state of the industry.
Can you show any examples of these "features" that are forcing the hand of other browsers?
> think Outlook.com and the introduction of AJAX
I remember that every browser had an AJAX API; IE just had a specific API that was more straightforward. The AJAX capability was a foregone conclusion rather than a technical leapfrog. You always had access to HTTP requests and timers to implement AJAX-style behavior as a final fallback.
"This website supports Netscape Navigator 4."
Browser incompatibility was used to lock entire enterprises into using only Internet Explorer. Badges did nothing to help.
The only thing it seemed to do was let you know which browser the developer(s) were using.
If you happened to have multiple browsers it was possibly of some benefit if you could switch to that browser.
Enterprises still lock users to one browser if possible for security & maintenance reasons. I do see multiple browsers allowed for certain individuals to support older software.
It's a bit of a monopoly that Google has on the browser market and so now they have leverage over what gets approved and implemented in those browsers.
Since most people use Chrome then other browsers are forced to follow suit with compatibility.
Link to chromium dev: https://www.chromium.org/getting-involved/dev-channel
Neither am I, but I wanted to point it out anyway. If things like chromium-dev had existed back then, maybe we wouldn't have had the IE fiasco we all remember. Or maybe we would have anyway, who can tell :)
And that shows why Chrome is not IE... Chrome jumped in and supported WebAssembly even though they already had a mature equivalent (pNaCl) which they have now deprecated.
I just don't understand why there is so much hate for Chrome out there. Sure, we don't always get our way.
To me the Chrome team seems to listen to developers and users, and to put the effort in to follow standards, and to keep improving Chrome in ways that I care about as a developer.
IE6 did none of that.
IE6 just had different users than Chrome has.
But MS did listen to them, even too much, actually.
MS had their customers' intranet developers to keep happy; Chrome has its giant father, Google, the ad company.
From the perspective of user freedom, Chrome is much worse.
A lot of the reason so many people seem to like Chrome Dev Tools better seems to be exactly the sort of "Works best in IE6" mentality: so many devs spend time testing only in Chrome, customizing Chrome Dev Tools to their liking, and making sure Chrome Dev Tools work right, that what they "like better" simply becomes inertia.
Little tweaks to sourcemap generation options, for instance, can be all it takes to get Firefox Dev Tools working just as well as Chrome for code debugging, but so many devs work in Chrome only that the defaults in so many places are accidentally so Chrome-centric.
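As a hypothetical illustration of the kind of tweak meant here, assuming a webpack build (`devtool` is webpack's sourcemap option; other bundlers have equivalents): switching from the fast eval-based development defaults to full standalone sourcemaps is a one-line change that tends to behave consistently across both Chrome's and Firefox's dev tools.

```javascript
// Hypothetical webpack.config.js fragment (illustrative, not a recommendation).
// 'source-map' emits full standalone .map files, which dev tools across
// browsers handle consistently; the faster eval-based variants are where
// cross-browser quirks tend to show up.
const config = {
  mode: 'development',
  devtool: 'source-map', // instead of e.g. 'eval' or 'eval-source-map'
};
// In a real config file you'd export it: module.exports = config;
```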
Similar goes for all the Chrome-only Dev Tools extensions. At this point even the Dev Tools Extension Models are close enough between Chrome and Firefox that it mostly just needs more people using Firefox for development to catch up on extensions.
Looks like somebody updated the Firefox Quantum UI to look more like Chrome.
Chrome also stopped doing this, AFAIK.
Fun fact: I was subscribed to a Firefox ticket, "Password manager should support OS X Keychain", which is 18 years old; it got closed some months ago.
Same experience for me on Chrome on macOS too, so... You sure you didn't change those defaults somehow?
I usually open new tabs with Command + t or Control + t on Linux / Windows.
Chrome does this.
> and if you click the + button to open a new tab the cursor ends up over the new tab, not over the + button so you can click again to open a new tab.
Chrome does this too.
> The whole UI looks and feels non-native.
Yep, that describes Chrome.
It sounds to me like you're a Safari user, since the things you're describing are things Safari does correctly. I'm a die-hard Safari user, and both Chrome and Firefox feel wrong to me. But Chrome feels even worse than Firefox does.
- The 'x' is on the right of every tab.
- When I press '+', the cursor ends up over the new tab - I cannot immediately click '+' again.
- Pressing esc does not do anything for full-screen mode.
In Firefox, using the Customize option from the menu, you can move the new tab button to the left side of the tabs, or even to the far right side so that it doesn't move.
What version are you using?
For me the most jarring thing is how Firefox’s omnibar works. It isn’t good at showing me the sites I’m looking for after typing just a few characters. Instead, it shows absolutely every page I’ve recently visited at one of those sites.
Another design thing: the default new tab page has a ton more info than I’m used to seeing (or really want to see). Hence a question I posted yesterday:
On iOS, I keep trying to pull to refresh pages, but in Firefox it’s a specific button. But this is something that I could get used to.
Interesting! I think this may be a case where, over time, as you use a search mechanism, you get used to the kinds of searches that work well with that mechanism. I've had similarly frustrating experiences with Chrome's address bar, which I'm not used to using.
Firefox has a concept of "frecency", a metric that combines frequency of visit and recency of visit to determine what you're most likely to want based on what you typed. (It also gives some preference to the starts of words/domains, and some other heuristics.) However, I can easily believe that before Firefox has enough information, typing at it would produce suboptimal results. That's something worth reporting as a bug: https://bugzilla.mozilla.org/enter_bug.cgi?product=Firefox&c... . Please feel free to use any of the text above to help describe the issue, if it helps.
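As a rough sketch of what such a metric looks like (the weights and decay below are made up for illustration; Firefox's real Places code buckets visits by age and visit type):

```javascript
// Toy frecency-style score: each visit contributes points that decay
// with age, so both frequency (more visits) and recency (newer visits)
// raise the score. The numbers are illustrative, not Firefox's.
function frecency(visitTimes, now = Date.now()) {
  const DAY = 24 * 60 * 60 * 1000;
  return visitTimes.reduce((score, t) => {
    const ageDays = (now - t) / DAY;
    return score + 100 / (1 + ageDays); // newer visits weigh more
  }, 0);
}

const now = Date.now();
const DAY = 24 * 60 * 60 * 1000;
// A site visited three times a month ago vs. one visited once yesterday:
const oldFavourite = frecency([now - 30 * DAY, now - 31 * DAY, now - 32 * DAY], now);
const justVisited = frecency([now - 1 * DAY], now);
// Here recency wins; with enough accumulated old visits, frequency
// would win instead -- which is why results improve as history builds up.
```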
> Another design thing is the amount of information on the default new page has a ton more info than I’m used to seeing (or really want to see).
Complete agreement. I disable the new tab page on desktop, as I really just want a blank page. (On mobile, I find the default new tab page more useful and usable.)
I’m not sure how the omnibar works, but I’m not extremely impressed.
I can see how this is going to be confusing for a new Firefox user, but it's easy to get used to.
(And I have Firefox sync setup)
(And on a general note: some people here might want to verify they haven't touched the settings ;-)
The strange thing is, every nine months or so, FF will forget one of these urls. It doesn't do it for all of them at once, just one, and it's not related to upgrades. All I have to do is enter the full url and hit enter, and things are back to normal again for that particular url for many months.
I use macOS, so maybe it’s different on Windows. A few things bug me; it’s hard to really pin it down. It just doesn’t feel finished. The Save File dialog window feels like it’s from Mac OS 9, or at least its sentiment is. The behaviour is confusing, because if you press cancel, you still end up with a file in your Downloads. It would be better if it just dealt with it the way Safari and Chrome do. A lot of the dropdowns and other inputs feel a little old too; everything ends up looking like 2003, which isn’t terrible, but I would imagine it puts people off when switching from Chrome.
The address bar is also very odd, it almost always suggests the wrong thing for me. I’ve adjusted the settings, but even for a site I just visited with the exact name in the page title, I still end up with the search as the default result.
For FF mobile, the UI is less convenient than Brave's. In Brave, the icons for adding a new tab and switching tabs are at the bottom of the screen, while in FF they're in a menu at the top. Because I use my phone in portrait mode, I have to reach my thumb up to the menu every time to create or switch tabs. It's really tiresome, and I use Brave instead because of this. Also, the URL bar in mobile FF has the 'X', but its behavior is completely different from Brave's: it closes the bar entirely rather than clearing it. I also get that it's a thing to get used to after switching, but why not just delegate cancellation to the Android back button? (Could it be due to iOS?)
1. It always "just works". I've run into problems with both Chrome and Firefox getting confused by a search term I entered into the URL bar.
2. The search string persists even after clicking on results. This is particularly helpful when I'm having trouble finding a good result and want to make a slight change to my previous search string, or I want to remember exactly which search term brought up a specific result.
Just curious, are you developing in Lua? That would be my guess from that string.
Chrome is dead simple, minimal, and predictable. When I want something, I look in the one menu. Firefox, on the other hand, has multiple bars, multiple menus, sidebars, icons plus text. It's too much.
FYI: I use Firefox for ideological reasons.
1. Spacebar doesn't select items
2. Key equivalents don't work while it's open
3. Cut/Copy/Paste often does not work, depending on where the keyboard focus is
4. Inapplicable menu items don't disable properly
5. The caret continues blinking in text fields even though keyboard focus is in the menu
etc etc. This sort of basic bugginess is embarrassing and entirely avoidable by using the native platform controls, instead of re-inventing them poorly.
1. I'm pretty sure there is an issue for this in their tracker but they don't seem to think it's important enough to fix.
- restore pages from last time
- start page (input url)
- default new page chrome
By default, intranet authentication (spnego) is disabled in Firefox. There are numerous guides explaining how to turn it on, and it takes all of a couple of minutes to set up correctly after which point it just works.
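For reference, those guides generally boil down to setting a couple of about:config prefs (the hostname below is a placeholder; list your own intranet domains):

```
network.negotiate-auth.trusted-uris      = https://intranet.example.com
network.automatic-ntlm-auth.trusted-uris = https://intranet.example.com
```

Once the intranet hosts are listed there, Kerberos/NTLM negotiation happens transparently.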
FWIW IE has this problem too, in that by default it only enables negotiation for unqualified hostnames. If you want people to use an FQDN, you have to go add that FQDN to a list buried behind a couple of advanced dialogs.
Opera wasn't a browser from scratch, though: Presto was older than KHTML (and WebKit, etc.).
And really, starting from scratch ten years ago you'd just run into Presto's big failing: site compatibility! Websites rely on all kinds of asinine edge-case behaviour, and if you don't match the majority behaviour, users will leave your browser for your competitors (and site compatibility was, along with crash bugs, always one of the top two reasons for users to switch away from Opera).
In many ways, the vastly larger HTML and CSS specs are a massive boon for minority browsers: when I started at Opera, a large proportion of the Presto team were QA staff who in reality spent almost all their time reducing site-compatibility bugs and reverse-engineering other browsers. HTML5 and CSS 2.1 made that work largely go away: there was enough movement to converge behaviour (including from the larger browsers) on documented behaviour that reverse-engineering other browsers ceased to consume large amounts of resources on all browser teams.
What killed Presto is a variety of things, and the growth of the platform was only a small part of that.
And as mentioned in other sibling comments, all major browsers have rewritten major components at various occasions.
A browser is something that one out of every two people on earth uses frequently. That's a lot of people! All developers in the world use a browser. Lots of them really believe in open software. Some are 10x developers. A certain percentage are literal geniuses. Exactly one is the smartest developer alive today. I get that it's hard, but "smart people mobilized at global scale" hard?
Starting from scratch today developers would have better tools, modern languages, and a hindsight view of what worked and what didn't in previous browsers to work with. Wouldn't that make it somewhat easier?
Say the average person loads 10 websites per day, and a less optimized browser requires 100ms more to load each page. That's 415 million hours wasted per year. Say the average person makes $4 an hour, that's 1.6 billion dollars wasted!
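Spelling that arithmetic out (the user count is my assumption, inferred so the comment's 415-million figure comes out; it isn't stated explicitly):

```javascript
// Back-of-the-envelope check of the figures above.
const users = 4.1e9;              // ASSUMED: roughly half the world's population
const pagesPerDay = 10;           // from the comment
const extraSecondsPerPage = 0.1;  // 100 ms, from the comment
const wagePerHour = 4;            // dollars, from the comment

const hoursPerYear = users * pagesPerDay * extraSecondsPerPage * 365 / 3600;
const dollarsPerYear = hoursPerYear * wagePerHour;

console.log(hoursPerYear);   // ~4.16e8, i.e. about 415 million hours
console.log(dollarsPerYear); // ~1.66e9, i.e. about 1.6 billion dollars
```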
If every browser user donated 50 cents, that would be 2 billion dollars. Would that be enough? There are 50 people worth more than $17 billion; would one of them bankrolling a new browser be enough? What would it take?
In other words, we're toast ;( But hey, that might be exactly the kind of situation that motivates developers after all.
With my project, I'm attempting something less ambitious: I'm trying to re-establish SGML as an authoring format (HTML is based on SGML, and SGML is the only standard that can tackle HTML), to at least bring back a rational authoring and long-term storage format for content that matters and that you'd like to still be able to read in a couple of decades, without an ad company, or a failed, over-complicated all-in-one document-and-app format of the 2010s, getting in your way.
When you can click the reader button, it makes every website better. Reader view defeats modal dialogs and dickbars. Reader view renders faster than AMP, because it skips web fonts. Reader view always scrolls and zooms without jank.
Build a better browser by embracing the web as it was meant to be: the best document publishing platform. Let the other guys build the world's worst application platform with clippy and toast and the rest.
All the developers in the world combined cannot solve a legal problem. You can't implement technologies like Widevine without a license, and if they simply won't give you one, you're dead in the water.
The DOM code there's definitely less motivation to rewrite, in large part because there's a lot less benefit to be gotten from rewriting large parts of it (versus layout or style where there's much more entanglement across the codebase).
My HTML parser uses SGML which is more generic as it takes the HTML grammar (a DTD) as parameter and computes state machine tables etc. dynamically based on it, thus a bit harder, but still very much doable.
World is missing a standard, a good and lean one is created, multiple vendors implement it, there's an initial year of minor incompatibilities but otherwise all is gold and glory, everybody loves the standard. Standard becomes immensely popular so everything supports the standard and worse - the standard starts supporting everything because every vendor just has this one small extension they want to add. After adding thousands of such extensions, suddenly the standard isn't so lean anymore. Now the spec isn't just a single RFC that can be read over lunch. It's a whole collection of documents with thousands of pages, appendices, mandatory extensions and compatibility tests. You need few people employed just to keep up with organizing the documentation. Vendor implementations start becoming incompatible because of too high complexity. Only a few big players manage to stay afloat and they probably like it that way because it raises the barrier to entry and they get to keep their position.
I see it more as a self-fulfilling prophecy and a constant stream of FUD from naysayers whenever such a thing is merely suggested (popular topics being that it will never be finished, that it will not be secure, and that if Mozilla needs $500M/year, how will mere humans ever be able to do it, etc.).
I think a lot of people nowadays forget that almost every single piece of open source tech that has existed since the '90s or early 2000s was started by naive young programmers trying to do something (have you seen KHTML's source code in KDE 1?) without having assholes telling them they can't do it. Well, OK, they had some, but nowadays the naysayers are WAY more numerous, and in the past they mostly came from (what were seen as) "evil corporations", so they were more easily dismissed. Today most people in open source (both users and programmers) dismiss most things that don't have some big commercial entity behind them.
As a single developer it is impossible to implement a browser that is compatible with today's websites.
Then don't make it "compatible with today's websites".
In fact, that should probably be the goal. That is, what should or could "tomorrow's internet" look like?
Think of the "time lag" between "The Mother of All Demos" and its actual commercial realization: arguably the Mac, though some might say the Apple Lisa, others the Xerox Star, and still others could pick their own point in the timeline - but for the general consumer, that is, "wide adoption", it was the Mac in 1984.
That's a lag of almost 15 years - but one guy managed to see that future, and with some help, pulled it into the past (if you've never watched the demo, and put yourself in the shoes of that time, then you can't easily understand just what it took for it to occur; it's honestly awe-inspiring to me from a historical standpoint, I'm sure there were people in the audience who didn't understand they were seeing the future).
Try to do that, is what I'd propose.
And some people are. Where I believe that future lives is in the idea of the "distributed web" - which, honestly, is what the internet should have been all along, but apparently we're going to have to drag it back there. Part of the reason it didn't go that route was dial-up access: the end nodes weren't looked at as "peers" when they should have been - not "always-on" peers, but "ephemeral and temporary" ones. They were sold differently, though, and most people weren't made aware that they could (and should) be peers; instead they were relegated to second-class "clients" and "consumers".
Now many people have the available bandwidth to be closer to real peers, run servers, etc - but are instead limited in a variety of ways (most notably by draconian TOS language, that while in many cases is "ignored" - it can be easily dragged out to deny service if and when an ISP feels like it).
I'm not sure the distributed web is the full answer (the full answer would include mesh networks - but there are logistical issues there with those, especially in the United States, that currently prevent them from transitioning beyond, at maximum, "city level") - but it's a start, I think.
KHTML was being implemented in 1999. That was an extremely fast moving and chaotic time in the development of the web! Browsers were shipping new features left and right, the specs didn't describe at all what browsers really did, and if you fell behind people would quickly switch to other browsers.
Even by 1999 you wouldn't have been able to make a competitive browser on your own, and especially not keep up with the rate of change.
(In the early 2000s, after Microsoft "won the first browser war" and disbanded the IE group, everything slowed way down, though.)
Firefox began as a version of the Netscape Suite stripped down to just the browser. The Gecko rendering engine long predates Firefox. No one has launched a full browser "from scratch" in nearly twenty years.
At least back then sites were not as reliant on JS, so I think that's one of the ways it was better; sure, sites often looked different in non-IE vs. IE, but you could still consume the mostly-static content (and what wasn't static was almost always ads...) Now there are far too many "appifications" of sites that shouldn't be anything but static pages, JS is required to render their content, and that JS is often very browser-specific.
At least Microsoft's overarching goal was just to get developers to use their platform, by making their platform as useful as possible (even if many of the changes were ultimately misguided).
With Google, there's some of this, but there's also some very clear "we're an ads company and we want the ad experience on Chrome to favor us as much as possible, without regard for the user."
Or better: Use standards which match most browsers at caniuse.com
At the moment some people develop only for Chrome because it has the most market share. So they deploy their site using the <toast> thing and suddenly people using Firefox are wondering why things are broken. At this point Firefox can either implement the new feature(s) or lose users. And once FF etc. have implemented it suddenly it's on caniuse.com as "works in all modern browsers".
which is probably why OP is suggesting it
I agree that the monopoly power situation is similar but there really is a big difference between IE hell and today's situation.
There is a big difference between one browser having a cool feature that may not be available in other browsers, versus the leading browser strangling everyone in a stasis of mediocrity, which was the case when IE/Microsoft owned the web.
This is a shallow comment. IE was closed-source, cornered the market and then abandoned all development. They stalled the web for years.
Today all browser vendors are active in the standardization process, open or nearly open source, and collaborate heavily to implement the same features.
Any feature that "only works in X browser" is the result of one browser implementing a feature first, rather than the result of a browser monopoly. For instance, prefers-color-scheme was introduced first by Safari, then Firefox, with Chrome next.
So yes, for a period of one to two months a site might have "worked best in Safari" as a result of this feature adoption. But drawing a parallel to a web dominated by IE6 is completely ignoring all context of the situation. It shows a complete misunderstanding of how the web has evolved.
I don't know - I kind of figured that was part of the joke - I thought it was pretty clever.
Why not just call it <notification>?
Or even better, don’t add anything - it’s surely just a <div> that needs styling and animating with existing CSS and JS?
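For what it's worth, a minimal sketch of that div-plus-CSS approach might look like this (all names, timings, and styles are arbitrary, just to show the scale of the thing):

```html
<!-- Minimal toast sketch: a styled div shown and hidden from script -->
<div id="toast" role="status" aria-live="polite"
     style="position: fixed; bottom: 1em; left: 50%;
            transform: translateX(-50%); padding: .5em 1em;
            background: #333; color: #fff; border-radius: 4px;
            opacity: 0; transition: opacity .3s;"></div>
<script>
  function showToast(message, ms = 3000) {
    const el = document.getElementById('toast');
    el.textContent = message;
    el.style.opacity = '1';
    setTimeout(() => { el.style.opacity = '0'; }, ms);
  }
  // e.g. showToast('Saved!');
</script>
```

Arguably the aria-live announcement is the one piece a built-in element would give you for free; the rest really is just styling and animation.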
But it's called a toast because that's what many UX people have called it for years, because it pops up and toast pops up. It's a specific kind of notification, not any notification. (There's also "butter bar" and "modal dialog" and so on.)
And yes it's behavior, but <input> and <button> and <select> and <textarea> define plenty of behavior too.
I think the big concern here is standards and process... not its name or concept in themselves.
Thanks for this, it hadn’t occurred to me why they chose this name
In Android land, a "toast" is a particular implementation of a notification.
Just because you are oblivious to the concept it doesn't mean it's bad. It just means you need to learn something before blindly asserting that stuff you don't know and are not familiar with is for some reason comparable to dog shit.
It's exactly the point the author here was trying to make. Google does not control the web or define HTML standards in a vacuum, nor is it the focal point of the internet.
That was not the point at all. The point is that the concept of a toast exists and is already widely established. Android is one of those platforms. That's it. I fail to see the point of bitching about Android as if that would undo the dissemination of this particular UI pattern. I mean, Microsoft adopted the concept. Do you expect to undo that by bitching about Windows 10?
Out of all the reasons why they could've named it "Toast", the most reasonable one to me would be "because it's another kind of a pop-up notification, and the phrase 'pop-up notification' is already taken".
I don't understand your problem with the concept. It's pretty clear what it means and how it operates, and the UI concept has been extensively disseminated and adopted, not only in web-base UI and UX but also in GUI toolkits for mobile (android) and desktop (Windows 10).
Exactly where are you having problems understanding the concept?
The concept is fine, I’m sure everyone understands that. The conflict is your assertion that it comes from the meaning related to “a small speech given while raising one’s glass.” It comes from the “bread that has been lightly re-cooked” meaning. Toast notifications would slide up from the bottom of the screen, then sink back down – moving like a piece of bread in a top-loading toaster. (The most common kind, at least in America, at least at the time.) This leads to bad semantics, on the web.
Here’s a citation from 11 years ago describing that: https://www.techrepublic.com/forums/discussions/whats-a-toas... I’m sure it’s even older, but “MSN Messenger” is my oldest personal reference, so I wasn’t sure what else to search for.
Perhaps the inspiration came from the Sunbeam Radiant Control toaster:
Unlike modern toasters which rudely attempt to catapult your toast into orbit, the Sunbeam gracefully and almost silently presents your toast with the love and care it deserves.
Getting it to engage was super finicky. I never saw anybody get it to work on the first try. You often had to whack it down pretty hard to make it work, while other times a light touch was sufficient. You could tell when someone was using it from across the house because you heard a metallic "whang, whang, whang, whang!", at various volumes, as they tried to get it to trigger.
Going up was super slow. (In his video, he shows it at "6x" or "12x" speed so viewers don't fall asleep.) Who would ever want that? It's done toasting. Don't make me wait 10 seconds to get my toast. Lots of singed fingers from impatient people reaching in to get their food.
The sensor wasn't very good, either. There was a razor thin margin where you'd get reasonable toast, and it was almost all the way to the "light" end. At even 25% to "dark", you'd get a block of charcoal. I'd set it all the way at "light", and toast my bread twice, if needed, because that was safest.
Lack of a manual up/down lever makes everything worse. It means you need to use the special "ONE SLICE" slot when you only have one slice. It also means if you see your bread start to catch fire (yeah), there's no easy way to get it out in a hurry. You could push the lightness all the way to "light" and hold it there (I think that was the official way), but that was unreliable. Usually, we'd just unplug it.
Also, as he notes in the video, the bread guides don't move so about all you can fit in here is a small generic white slice, and the outside gets as hot as the inside so be careful not to touch it.
This is the 2013 Mac Pro of toasters: very pretty, very clever, and a thermal nightmare. If I had infinite money and space, I'd buy one to show off, and never use it.
Lolol... reminds me of our first family toaster from the '70s.
But my dreams of owning one of these are now shattered :)
But my, does it feel more fragile than my old Chinesium toaster!
Doesn't anyone there remember when <b> and <i> got deprecated in favor of <strong> and <em>??
The way Google has gone about this seems to be…
1. Ooh! I have a cool idea!
2. Other people in Google agree with me!
3. Other Google projects could benefit from this?
4. Let's stick it in Chrome!
5. Oh, guess we should tell the community what we're doing.
The difference is that if Google introduces a new tag and starts using it, then every other vendor must implement the tag, or the [potentially popular] applications that use it [made by Google] will simply not work.
That's not a power that "countless proposals for new declarative UI HTML elements" have.
> I think this incident should make it very clear to everyone that HTML as seen by "browser vendors" (Google) is a very different thing from HTML the markup language as used all over the world for personal, business, medical, legal, and cultural documents, and which demands community representation and participation.
Google is not a browser vendor; they do not sell web browsers. Google sells advertising, and it's absolutely critical to Google's future to get everyone onto the one web browser with crippled ad-blocking and privacy capabilities.
But Google isn't starting to use it? They're implementing it in Chrome to see whether any difficulties would arise when implementing the proposed API. If they do, that can serve as input. If not, input is still possible?
How fast do you think someone creating a website in the US will get fired if they said they are going to ignore iOS users because iOS is only 15% of the global market?
If iOS didn’t matter do you think Google would be paying Apple a reported $9 billion a year to be the default search engine?
People don't have their browser choices etched in stone, and Google's services are a powerful form of persuasion. If they work in Chrome but not other browsers, people will just switch to Chrome.
It's not 2009 anymore. iOS simply doesn't give Apple the power to stop Google here.
And that still doesn’t negate my other point, if you exclude iOS, you miss the most affluent users. Who do you think is the most profitable market segment? People buying $50 Android phones or people buying $700 iPhones?
Apple doesn’t have to “stop” Google. If Apple doesn’t support it, either web developers won’t use it, because no one is going to give up their most affluent customers, or they’ll be forced to write an app.
Again, if I’m writing a website for the US, why do I care about the worldwide market share?
If your service is free and you monetize by selling users' data, they're all the same. You don't have to sell things to your users to make money. Just look at Google!
> Again, if I’m writing a website for the US, why do I care about the worldwide market share?
Again, the world is not the US. If I'm making a website for Europe or China or South Korea or Vietnam, why do I care about the US or Japan market share?
Because users in the rest of the world who are not as affluent aren’t as attractive to advertisers. You are already seeing it with Google: Google just announced a year-over-year decline in net income as ad sales grew a lot slower than acquisition costs.
Google doesn’t “sell users’ data”; it sells advertisers access to users based on their data. And advertisers aren’t willing to pay as much for access to less affluent users.
If you combine South Korea and Vietnam you probably have the GDP of a midsize state in the US. In China, if you want to reach the growing middle and upper class - you still have to support iOS.
> If they work in Chrome but not other browsers, people will just switch to Chrome.
On iOS, Chrome is just a wrapper around WebKit. You can't ship third-party web rendering engines on iOS. So if Apple doesn't implement this tag for Safari, Chrome on iOS won't have it either.
They really should have resurrected the blink tag and added a property for the number of iterations required.
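For what it's worth, plain CSS can already approximate an iteration-limited blink today. A sketch (the class name and keyframe name here are made up for the example):

```html
<!-- Sketch only: a "blink" that runs a fixed number of times via a CSS
     animation. .blink-3 and the blink keyframes are invented names. -->
<style>
  @keyframes blink { 50% { visibility: hidden; } }
  .blink-3 { animation: blink 1s step-end 3; } /* blinks 3 times, then stops */
</style>
<span class="blink-3">Limited-run blink</span>
```

The `3` in the `animation` shorthand is the iteration count, which is essentially the "number of iterations" property the comment asks for.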
Of course, there are other ways (such as ARIA attributes) to accomplish this.
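A minimal sketch of what that looks like today, assuming a hand-rolled helper (the element id and function name are made up, not part of any proposal):

```html
<!-- Sketch: a toast announced to assistive technology via an ARIA live
     region. #toast and showToast are invented names for this example. -->
<div id="toast" role="status" aria-live="polite" hidden></div>
<script>
  function showToast(message, timeoutMs = 3000) {
    const el = document.getElementById('toast');
    el.textContent = message; // the content change is announced politely
    el.hidden = false;
    setTimeout(() => { el.hidden = true; }, timeoutMs);
  }
</script>
```

Because the element has `role="status"`, screen readers announce the new text without stealing focus, which is most of what a toast needs semantically.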
And from whence comes the bread?
The front-end in recent years has become an unintelligible mess between developers obsessing over toast (with no toaster in sight), and Hamburgers... Am I supposed to make a UI or order a meal?
And <clippy>? Did the mischievous little paperclip finally come back to irritate me by making the noise of tapping on glass from an LCD screen?
"Intent to Implement: Toast UI element"
> Edge: No public signals
> Safari: No public signals
> Web developers: Positive (previously expressed privately; we've encouraged them to make their interest public on the WICG thread)
So basically, nobody at Google talked to anyone about this; they asked only Google engineers, and then pushed those same Google engineers to make positive comments in the standards thread? And none of the other browsers know shit about it?
What the fuck?
The standardization process will undoubtedly change the shape of the thing altogether, or the element might not make it into the HTML spec at all.
Have these people and the Google developers behind Chrome considered that maybe they are the baddies?
Perhaps notably, it was actually faster and had lower memory usage than Firefox for a while (Firefox had many leaks). The issues were security, standards and NO TABS!
... Honestly if MS hadn't been so unbelievably deaf to users and web developers, Firefox and Chrome may have never succeeded. If Google doesn't completely screw up, Chrome won't go the way of IE. Maybe messing with adblockers qualifies?
The latest Safari (bundled with Mojave) tries to auto-complete email addresses using your address book. That's not at all a bad idea, except that like most things Apple of late, the implementation sucks. When you give the input box focus, it opens up some sort of weird dialog prompt, which pulls you out of full-screen mode if you had it enabled. Really messed up our 3D web viewer.
"You seem to be having some trouble. I find clicking on everything randomly until you figure out what each piece of the web-page does works best for me!"
"You seem to be having some trouble - have you tried using this website on a phone?"
"You don't seem to be having _enough_ trouble - have you tried using this website on a phone?"
This. Apple is in this self-reflecting bubble too. Companies that start off user-focused and get big often become thoroughly narcissistic.
It gets shipped in Chrome (often without a W3C spec or substantial discussion with other stakeholders), and then other browser vendors are forced to play catch up.
When Internet Explorer 4 or 5 was released, Microsoft was in full-on Embrace and Extend mode, on their way to Extinguish. One of the ways they tried to appeal to businesses (and one of the ways IE4/5-ish ended up being installed well past its best-by date in certain places) was a technology they called ActiveX, basically a brushed up and simplified COM. (The "X" here was from DirectX, which they were trying to market all together; IIRC technically the Agent had nothing to do with DirectX.)
And one of the ActiveX objects they released for this functionality was, basically, Clippy. It was called the Microsoft Agent. It included a number of characters, but you could bring up the real, actual-factual Clippy in Internet Explorer. You could also use text-to-speech to actually speak to the user. It had an API that allowed you to trigger certain pre-cooked animations, move it around the page, etc.
In an advanced mode, you could also specify your own graphics files. It also allowed you to specify graphics for something like 5 different mouth shapes, so you could reasonably lip-sync your new wizard object. I did this to my University's logo back in the day.
I say it "kinda" existed because, technically, this wasn't its own tag. It was an instance of the OBJECT tag. But there was a time where at least in Internet Explorer there literally was HTML you could write that would put the literal Clippy on your web page.
A gallery of the available characters: https://www.youtube.com/watch?v=Rb9yBfDLjsI
I didn't find any videos of the Clippy character used in a web page, but it shipped in the default control, I'm fairly sure.
I never encountered the Microsoft Agent in the wild.
https://docs.microsoft.com/en-us/windows/desktop/lwef/access... - "[Microsoft Agent is deprecated as of Windows 7, and may be unavailable in subsequent versions of Windows.]" "To keep Agent running between pages (and thereby keep a character visible), create another client that remains loaded between page changes. For example, you can create an HTML frameset and declare an <object> tag for Agent in the parent frame." Remember frames?
This reminds me: one of the small enduring legacies of the '90s X-TREME! craze is that Microsoft still does a lot of marketing around the letter X, even today, like the Xbox. And that goes back to ActiveX and DirectX being named in an era when that was intended to make Microsoft sound Hip and Cool and With It. I suppose at this point the Xbox has transcended this, and is now just a name.
Then you missed out on the silly games I wrote for it, or the way I got around having to speak my own class presentations by having PowerPoint narrate itself. (I wish I had more videos of that stuff that I did in school given that deprecation has bit rotted it all.)
Also, you never got accidentally talked into installing ~malware~ "friendly user assistance and downloader tools" like Bonzi Buddy.
Ah, yes... clarification: I never encountered it in the wild on a webpage that wasn't specifically about the Microsoft Agent. I did have a couple of commercial programs that used it, though nothing huge.
It did require a machine with what at the time was a generous amount of RAM to handle everything that spun up to run that thing. Nowadays, of course, it would be nothing, lost in the noise of the fluctuation of your Slack window, but at the time, megabytes of RAM were still a pretty big deal.
It is true I never installed Bonzi Buddy, but I was aware it used the Microsoft Agent. ISTR someone thinking it must take a ton of programming to make that happen, and I actually showed them my animated school logo and how little code it took to prove the opposite.
(That's part of why the "games" webpages I made with them, and the PPTs as well, impressed people a lot more than the rather rudimentary "screenplays" I was actually writing them as felt to me. I felt like I got more credit for that work than I probably deserved.)
I think a lot of people kind of assumed that that level of interaction did exist and was happening behind the curtain, hence some of the very loud and vocal disappointment with Agents, especially in the Office Assistant form that they were never smart enough. I think it was an uncanny valley effect between user experience and user expectations.
It's somewhat ironic that one of the things stopping us from applying as much animated "personality" to Alexa/Siri/Cortana seems to be that residual dislike of Agents/Office Assistants, even though in some cases, at least, it seems like we are ever closer to getting past that uncanny valley.
(It's interesting to see even today which Office features are getting UWP treatment and how, with the most interesting part of "how" being how much they are moving to React Native.)
The Chrome devs are thinking about whether a <toast> element would be useful. They float the proposal early on, and announce that they will implement it (not ship it! web developers cannot use it) to get some real-world experience to inform its design.
Still plenty of time for people to get involved, make objections, allow refinement, etc. Or am I missing something?
I have a really hard time understanding why anyone would want it.
Yes - google has been pushing web forward with QUIC and many many other elements.
Seeing Microsoft and others pushing back on something like toast (IE was a crapshow of stupidity for developers) is ironic. Toast is going to be welcomed.
The problem is that Microsoft's additions are things like Clippy, or Cortana in Windows, which few people want. Seriously, look at Windows 10 and the Cortana addition (forced in many cases) and ask: did users WANT this, do users LIKE this?
Toast is going to be picked up happily. If Microsoft tries to jam Cortana down IE users' throats, they are going to be in for a rude surprise.
The reason for Google's dominance is in part how seriously they have taken the browser.
Obviously YMMV and anecdotes aren't data, but nobody I've seen here on HN cares about <toast>, and everyone seems to consider it more bad Googleism.
I foresee a great future for the <clippy> markup. So much friendlier than the bland <track action=spy> tag normally used.
"Many libraries in a variety of frameworks implement a version of toast (see research), but the web has no built-in API to address the use case." 
Unless I can do this of course:
All other browser vendors put a lot of effort into making their browser better, and <CLIPPY> could very well mark the end of Google Chrome's monopoly--with what I guess is a pretty trivial feature to implement, in comparison.
I'm switching to Edge if I can have Clippy back.