Can someone explain to me the innovation here, besides some page-transition eye-candy?
> Furthermore, portals can also overwrite the main URL address bar, meaning they are useful as a navigation system
Isn't this already possible with iframes and target="_top" / target="_parent"? You can also use the <base> tag inside the iframe.
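For reference, a minimal sketch of what I mean (placeholder URLs, and it assumes the framed site doesn't frame-bust):

    <!-- parent page -->
    <iframe src="https://other.example/story.html"></iframe>

    <!-- inside story.html: target an individual link at the top-level page... -->
    <a href="https://other.example/next.html" target="_top">Next story</a>

    <!-- ...or make that the default for every link inside the frame -->
    <base target="_top">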
> The advantage over using Portals over classic links is that the content inside portals can be pre-loaded while the user scrolls through a page, and be ready to expand into a new page without having the user wait for it to load.
What? What about the vast majority of links that appear in-line, in text (think Wikipedia)? What about the increased bandwidth usage for pre-loaded pages I don't intend to visit? What about the obvious tracking issues? What about the further increased difficulty for users to even tell which page they are actually on, if the transition to another page is now smooth rather than clear-cut?
This article makes it seem like <portal> magically removes all the security concerns people had for decades with iframes, which imho is not the case at all.
Frankly, tracking seems to be a primary purpose of the portal tag. It takes the imperative away from the user so that the server can make the decision about what you see and how intrusive it can be. The point seems to be to shift control to Google.
How is prefetching taking the imperative away about what you see? You can click or not click. As for "tracking", the website could already ping as many servers as it wanted in the background, it's not like it can do anything new here.
I honestly don't see how the server gets any more "power" than it already did.
If I go to foo.com, I generally expect that bar.com, baz.com, and qux.com are not also loading.
We already have enough issues with tracking pixels, but now we'll have to worry about multiple sites worth of trackers (which I'm sure google will do their absolute best to obfuscate behind the background network call and other sandboxing shenanigans)? No thanks.
As you mention, foo.com could just as well have pixel trackers/scripts from bar.com, baz.com and qux.com. Nothing about portals makes it any more or less of an issue than it already is.
baz.com doesn't take control over foo.com, and it's not invisible. You click on a link and the domain changes, it's as old as the internet itself. The only difference is that the new page will be embedded into the old one, instead of there being a clear cut transition.
The main limitation on using frames/iframes for different sections of the same UI was that they couldn't impact the primary URL, which means if you click a link in one of them, then reload the page, the frame gets reset to its default view. Charitably, I can see some small benefit to the idea of having "frames but they aren't bad", instead of having every JS framework reimplement its own view routing.
Uncharitably, this probably just gives Google more ability to track/control your interaction with sites that it links you to.
> What? What about the vast majority of links that appear in-line, in text (think Wikipedia). What about the increased bandwidth usage for pre-loaded pages I don't intend to visit? What about the obvious tracking issues?
These already exist. You can do <link> prefetch now ...
But I THINK they are doing preload now as well so the DOM can be pre-cached.
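Roughly, the existing hints look like this (illustrative URLs):

    <!-- hint: fetch a likely next navigation into the cache at low priority -->
    <link rel="prefetch" href="/articles/next-story.html">

    <!-- fetch a resource the *current* page needs, at high priority -->
    <link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>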
I mean you could do it on your own site but most people try to make sites FASTER not slower.
I still don't grok the main advantages of Portals though.
With regards to bandwidth from pages you do not want to visit, sites can already include content that you do not care about in the page manually as part of the initial HTML or later via AJAX. In fact, the use case for portals covers this in that news sites commonly add other IMO irrelevant articles at the end of articles that I am reading.
It's kind of funny: mobile is already a big share of traffic and will only grow, yet while the mobile experience is already terrible, especially in Chrome with no script blocker, Google keeps working to make it even worse by preloading undesired, external content. Does Silicon Valley not have data caps? I guess they never have bad reception either. So at least the malicious ad that preloaded in the background can transition you seamlessly to a new page, as quickly as if you never saw the content you actually intended to. Progress! :-D
This new element will likely not show up as a negative on any site analysis reports... a sneaky way to make your site "appear" faster but still be a resource hog.
> Does Silicon Valley not have data caps? I guess they never have bad reception, though.
Pretty much. When the developers only use the latest phones, on a company (or unlimited) plan, connected to 4G LTE with full bars, this is what you get. Google seems determined to hide network latency with its various prefetching efforts, but since I'm paying for bandwidth, I'd rather just wait the extra couple of seconds.
> Google seems determined to hide network latency with its various prefetching efforts, but since I'm paying for bandwidth, I'd rather just wait the extra couple of seconds.
Google makes money off companies whose goal is to keep you in a mindless state of flow. Having to wait for a page to load disturbs that flow, making it more likely for you to realize you're being monetized and/or just wasting your time, and should do something else instead.
The recent extension-disabling problem made me realize how awful the unfiltered web is. Trying to use mobile Chrome for the first time ever was a dreadful experience. I legitimately had no idea that mobile Chrome doesn't have any extension capability whatsoever. I always assumed it was a Firefox-style unified experience where the same desktop extensions worked on mobile. I was horrified to discover otherwise.
Ads ads ads everywhere. How do people actually live day-to-day with mobile browsers that have no true adblocking capability? Using mobile Chrome is like taking a stroll through a plague ward.
I've had a similar experience as the GP, and I find it awful that autoplay videos will follow you around the page, pop-up modals will interrupt you in the middle of reading something, and ads will take up 40% or more of the page, loading between paragraphs while reading. It's hugely distracting to me.
We could make the same argument: you've spent years without adblock, and they have worn you down to the point where you accept active mental blocking of highly distracting ads. And even then you say it's sometimes bad enough for you to leave the site.
> I've had a similar experience as the GP, and I find it awful that autoplay videos will follow you around the page, pop-up modals will interrupt you in the middle of reading something, and ads will take up 40% or more of the page, loading between paragraphs while reading. It's hugely distracting to me.
Why would you ever spend time on a site that did these things? Even if you have an adblocker, why would you use such a site on purpose?
I guess you got used to it. I also had the chance to re-discover how many ads there are everywhere on the web thanks to the Firefox add-ons outage, and I was very surprised. Honestly, smartphone screens are already quite small (even more so when the keyboard pops up), and with all the ads the actual useful surface of the screen gets ridiculously tiny.
I couldn't imagine that Chrome on mobile doesn't have a way to block ads and still has such a huge user base.
Exactly. Technology exists to solve problems. One of the greatest problems in today's world is the metaphorical tsunami of advertising sewage flooding our mental shorelines. Browser-based adblockers neatly solve that problem.
As for the "But how do we pay for all these free things then?" counterargument, I'd suggest voluntary restraint on the part of website creators. I can live with a few modest JPEG-only banner ads hosted directly from the site I'm visiting. If a site promises to not use animated images, streaming video, streaming audio, any sort of trackers, any sort of third-party content, and if they keep the screen-real-estate ratio of ads to content relatively low, then I'll whitelist them in a heartbeat.
> If a site promises to not use animated images, streaming video, streaming audio, any sort of trackers, any sort of third-party content, and if they keep the screen-real-estate ratio of ads to content relatively low, then I'll whitelist them in a heartbeat.
I just don't think this is a workable model for most users. "You want me to do extra work so I can see some (admittedly nonintrusive) ads? No way, figure out your own business." Honestly, if I started using an adblocker I'm not sure I could force myself to make that effort per website.
I really wish uBlock Origin at least had the option to enable a blacklist model—ie, no sites ever have their ads blocked by default, but as soon as a site annoys me, I can go into my adblocker and disable their ads.
>I really wish uBlock Origin at least had the option to enable a blacklist model
It effectively does have this capability. Just disable all the standard filter lists and manually add your own entries to the "My filters" section.
This method has the added benefit of enabling other useful privacy features that don't directly relate to display advertising, like killing off the nastiness of hyperlink auditing, CSP reports going to 3rd parties, prefetching, and remote fonts.
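For illustration, "My filters" entries use the usual static filter syntax (domains here are made up): network filters are per host, cosmetic filters are per site:

    ||ads.example.com^
    ||tracker.example.net^
    example.org##.sidebar-ad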
Is this per the ad network / server, or per domain like the whitelist? I would have assumed the former, but if it works like the whitelist that's great!
It's not so much ads being annoying as ads completely blocking the content. A lot of sites are completely unusable on a phone; a malicious ad redirects the browser to a scam almost immediately. That's how looking at a link from the Facebook application works, but opening it in Brave gets you the site you want minus malvertising.
It’s, uh, really not that bad. I rarely notice banner ads and the like. If a particular site gets really egregious, I leave the site.
Yeah, I never understood why people purposefully visit sites full of ads and then complain that the sites they visit are full of ads. It seems like a pretty simple problem to solve. And yet, even among the tech crowd, people seem to struggle with it constantly.
The way I see it, without an adblocker there is the potential that I will notice an ad. With an adblocker, that potential is completely gone. That's a big difference.
Again, not trying to convince anyone here, I just don't feel comfortable with it personally.
Firefox on my phone crashes ALL the time. I can't load 3 web pages without it crashing. Is Android doing this on purpose? I just cannot get firefox mobile to work, sadly.
Counter data point: I've had it running with 100+ tabs open for at least 2 months now. Yes, it gets restarted when my phone does - or when I need the resources for something else, or doing updates, but it's working great. Motorola Moto G second gen, running LineageOS...
This has not been my experience. I've been using Firefox for a while now with way too many tabs open and I've not encountered a crash. Or at least not one I remember. It could perhaps be some hardware it doesn't work well with? Or maybe it struggles with certain sites that I happen not to visit?
I hadn't heard of Fenix but it seems I'll have to try that soon.
I haven't used Firefox on Android in 6 months or so but it was pretty stable for me.
I switched to Brave for mobile and it is amazing. Super easy to block scripts and other tracking without hijacking your DNS or requiring ad blockers/pi-holes etc.
It’s a real shame you still can’t easily install a third-party content blocker in the default browser on Android. Setting up a VPN and Pi-hole certainly works, but my goodness, what a lot of effort when simpler approaches exist, even in Safari on iOS... I’m aware other browsers exist that can do this, but this basically forces me to use a different browser on my phone versus my desktop.
Such is the reality of using an OS made by an advertising firm I guess.
Fortunately on Android you at least can install another browser - Firefox. Works great - well, most of the time... but using it for a day without extensions makes me appreciate it even more.
EDIT: and there is no reason to use a different browser on the phone and desktop, Firefox is available on desktop too. :P
I know it sounds complicated, but it's a one-time setup whose payoff only increases the more devices are connected.
I used to have four adblockers on my laptop (two operating systems x 2 browsers), one on my smartphone, one on an additional smartphone I use... and how do I stop my TV and my ebook reader from serving me ads?
Pi-hole requires little to no maintenance, and offers a lot of benefit. Two people have purchased their first Raspberry Pis after I've slipped in "oh and BTW, no ads will load while you're connected to this network" after giving them a WiFi password.
That's why I pointed out the second option (Blokada). It's just a matter of installing it and you've got Android-wide ad blocking, which mostly works even in non-website contexts.
But it doesn't work if you're using a VPN... so it's either a Pi-hole or Blokada situation.
Check out the "MACE" function of Private Internet Access VPN as well. It may work out well. I've recently started it on desktop and mobile, and disabled any other blockers. There may be other VPNs with similar functions but I don't know yet.
It's been a few years, but IIRC I had poor reception on the Google Campus with AT&T. Some spots with poor coverage by SJC too. Probably folks are just always on wifi.
One imagines that a hotbed of tech would have great cell coverage. However, it’s also a hotbed of NIMBYs and the sort of people who think cell phones give you cancer, making it hard to construct more cells.
Looks to me like an effort to make any user visiting google.com never actually leave google.com. EVER.
Users will get portal-ed to a result page (with the browser's URL bar showing foo.com instead of google.com), but the user will be tracked the entire time by Google, since they would still be on google.com.
I've been using duckduckgo as my sole search engine for years. I've been totally satisfied with my results. In fact I've had other people mention how my googlefu is exceptional since I'm always finding stuff that they can't seem to find. It just occurred to me that maybe it's the search engine that I'm using instead of my search skills.
Either my google-fu is failing me, or Google has changed so much that my fu isn't working anymore. I feel Google is so fucking terrible now; I was looking for an obscure DOS game, and the word "game" led it to show me games on the app store...
Or it would show pages with just 2 out of the 3 search terms I gave it. And guess which word they'll leave out? The word that makes the search specific instead of broad... god effing damnit!
This keeps getting said, but the 500 or so 'sources' they mention are at best a distraction. The vast majority of the compute and storage needed to maintain a global index comes directly from Bing and Yandex. DDG is nothing more than a middleman.
With a portal, you are not on the original origin at all; you are on the new site once it's activated. I think you're thinking of signed exchanges, which enable CDN-like experiences.
Portals are essentially iframes, but once activated they become the top level origin.
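A rough sketch based on the WICG explainer (element and method names are from the draft and may change):

    <portal id="preview" src="https://news.example.com/article.html"></portal>
    <script>
      const preview = document.getElementById('preview');
      // Promote the pre-rendered embedded page to the top-level document
      // when tapped; after this the address bar shows news.example.com.
      preview.addEventListener('click', () => preview.activate());
    </script>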
> After being activated, a portal opened by the news aggregator will now receive the input events. This means that it will have to co-operate with the news aggregator in order to maintain the desired user experience.
Why exactly does it _have_ to cooperate? And _how_ can it cooperate? Will it be enough to include a script from the parent domain? Will analytics be part of the "desired user experience"? Will the parent domain ban a site from its index for trying to "break out"?
It is essentially impossible to build relationships of mutual trust to the point where one website can freely embed another... unless one party has an overwhelming power advantage. Who is the target audience of <portal>? What "news aggregator" has enough leverage to make pages switch back to it as told? If such an "aggregator" existed, I would not trust it to make web browsers or dictate how they should work.
Doesn't it just mean that both sides have to maintain postMessage() handling? Just like host and iframe with different origins. Google will not be able to receive any events/messages unless they're sent by not-google.com. That's what I understand from it, please correct me if I'm wrong.
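Concretely, my understanding of the draft is a message channel roughly like this (API names from the explainer, may change):

    // In the embedding page:
    const portal = document.querySelector('portal');
    portal.postMessage({ type: 'hello' });

    // In the embedded page (window.portalHost is only set inside a portal):
    if (window.portalHost) {
      window.portalHost.addEventListener('message', (e) => {
        console.log('from host:', e.data); // nothing happens unless this page opts in
      });
      window.portalHost.postMessage({ type: 'ready' });
    }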
> Google will not be able to receive any events/messages unless they're sent by not-google.com
Exactly: the integration requires cooperation from both websites, and there are zero reasons why the child website would want to cooperate. Unlike, say, using Google Analytics, letting the user leave you via a <portal> tag is against the website's best interests.
If mywebsite.com embeds the portal for othersite.com, and user navigates into it, is there a way for mywebsite.com to keep tracking user interactions inside othersite.com?
> portals can be pre-loaded while the user scrolls through a page
Let's be real here. This can already be done through <link> "prefetch" and "preload" [0].
This is just Google introducing more pointless tech that will make webpages clunkier for no good reason. Honestly, I can't think of a single purpose this serves that isn't already fulfilled by existing web tech.
It's not pointless at all. It makes the Web platform more complicated to implement, which, as the company that owns the leading browser, is in Google's interest. The more complicated the platform becomes, the harder and more expensive it will be for any potential competitors to ship a product that can challenge Chrome.
More importantly, as it seems to require an implementation of native UI and not just rendering, it will harm browsers that are currently Chromium based.
Link prefetch/preload puts the loading under the control of the browser. The user can't see whether it loads or not, so the browser can make intelligent decisions about when to load it or whether to even load it at all.
Portals are user-visible, so the browser can't defer the load.
“Google says portals allow users to navigate inside the content they are embedding --something that iframes do not allow for security reasons.”
I love how they gloss over the security issue here. This is the equivalent of “we made a new Ajax that can call from different domains.”
Reading through the article, I don’t see the security restriction addressed at all. I expect that this gets worked on when google submits this to the w3c to be made part of html.
That's also my first impression of this article, but I hope (and believe) Google is aware of this security concern; otherwise they'll probably be the next Microsoft.
It says that it only allows postMessage for cross domain communication.
BUT I'm concerned it doesn't mention sandbox and feature policies similar to iframe's. It also mentions that it will allow the 'child' to know it's inside a portal via window.portalHost, but at least this GitHub test mentions you can minimally declare no-referrer: https://github.com/web-platform-tests/wpt/blob/master/portal...
Wouldn't the concern be ads (I assume that's what this feature is intended for, especially on mobile)? If there are no restrictions similar to an iframe SafeFrame, I can see a ton of ad scams taking advantage...
At first glance, this seems really cool. Then I remember it's Google we're talking about here. Somehow I feel like this is something they can use to benefit their search engine and/or AMP-style efforts: keep more users within their controlled environments rather than officially navigating to a real website elsewhere. Also seems like potential phishing problems could arise. Can someone explain to me how this portal element is actually great and not just great for Google?
>Keep more users within their controlled environments rather than officially navigating to a real website elsewhere
But that seems to be exactly what it does: look at the image at the top of the article, the address bar clearly changes to the portal-target's location. I think that's what they mean by "can be navigated into".
Right but rather than making a seamless transition between pages they made it a subpage of the parent.
Having built-in page previews is actually a useful feature but without the other useful feature of
"play video" -> "click video to open page" -> "video continues playing while the rest of the real page renders around it" -> "you are now really on the other page with no connection to the previous page"
it seems like the business motivation for this was to embed more content in Google Search while keeping Google branding in the background.
This. It can allow you to see the google results and navigate from them seamlessly without actually leaving the top-level domain. It effectively captures the entire audience. There will be no more Internet, it's going to be a Google portal (like AOL) with Google keywords, and will "borrow" the content from everybody else.
How do you define "actually leaving"? Because after navigating to the portal, it occupies the whole viewport and becomes the main document with your site's URL.
Even if we assume you use AMP, package it with signed exchange, and Google SERP displays it, then, yeah, your site could appear with no extra network request to your site. The lack of network requests isn't that different from old-school HTTP proxies, but with better specs and security. AMP has analytics and such, so if you want to count visits, go do that.
I think the problem these technologies are meant to solve is making navigating between pages take under a hundred milliseconds and appear much faster by being able to preview/animate them. That would be good.
Yeah, new technology is not without cost and risks but Google turning the web into a walled garden isn't high on my list of risks.
It took me a few months to realize that when I browse google with my iPhone and click a link, it's not actually taking me to that site, just loading it in an iframe. Weird and unnecessary, except in the eyes of Google's margins.
The scrolling issues were due to a Safari bug. Unfortunately, Apple does not allow its users to install non-buggy browsers on their devices. I hate Apple devices with a passion.
> For example, engineers hope that when a user is navigating a news site, when they reach the bottom of a story, related links for other stories are embedded as portals, which the user can click and seamlessly transition to a new page.
Perhaps it's not a very good description, but isn't that just a link? That sounds a lot like a link to me?
So this example really doesn't make it clear to me why we need this new element?
I think it's largely a user experience thing. Because the "portal" has already been loaded and rendered, you don't need to add the delay of loading the new page at the point of navigation.
But if performance boosts from preloading are all you want, you could just add a 'preload' attribute to the usual <a> tag. The stated rationale of supporting spiffy transitions seems a bit more sensible, though, given how many hooks this has into browser UI, I'd be interested in seeing the limits on what, say, a malicious ad network could do with it.
(Besides, if a page has several dozen links, exactly how many of them can you afford to preload before that activity starts bogging things down? The ZDnet article talks about this behavior replacing all links, which doesn't make a whole lot of sense to me...)
Also just realized, as much as people like to push all the snark about Microsoft regarding Embrace, Extend, Extinguish, that's exactly what Google is doing right now.
With them dominating browser share, they can now push tags like this that only make ads more annoying.
Geez we so badly need a more balanced browser landscape.
Unfortunately people seem to have such short memories. Most millennials today were too young to see Microsoft's evil side and so think people like me who have a lingering distrust for that company are just old luddites with axes to grind.
OTOH, they grew up indoctrinated to Google and so refuse to see the water coming slowly to a boil.
It's going to hurt small sites the most, because sources of traffic like Google, Facebook, and other large sites will eventually stop linking to websites, and load them as previews in frames ("portals") instead.
"No other browser vendor has expressed interest in supporting the Portals" -- but Google is releasing it in Chrome anyway.
It changes the fundamental nature of the WWW as linked documents. Embrace, Extend, Extinguish. The only conclusion I can come to about things like AMP and portals is that Google is actively trying to kill the Web.
> "Google engineers hope that their new Portals technology will take over the web and become the standard way in which websites transition between links."
> sources of traffic like Google, Facebook, and other large sites will eventually stop linking to websites, and load them as previews in frames ("portals") instead.
That's already happening to an extent with local AMP caches of outbound third-party links on AMP-enabled pages at Cloudflare.
In what way loading a site on a portal would be different to loading it on the main window?
Portals sound like just a technical UX solution, but would still count as site traffic.
> In what way loading a site on a portal would be different to loading it on the main window?
The difference is made crystal-clear in the draft [1]:
> Every browsing context has a portal state, which may be "none" (the default), "portal" or "orphaned"... "orphaned": top-level browsing contexts which have run activate but have not (yet) been adopted
In other words, the original google.com document will be kept active in background (in "orphaned" state) and the child document can continue to interact with it by using Javascript after "adopting" it (at which point roles reverse and google.com becomes child document itself).
In theory the specification allows the child document to ignore the parent and let the browser close it, completing the transition. It also allows graceful switching between child and parent, keeping each in control as long as it remains the top document.
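The hand-off looks roughly like this in the draft explainer (event and method names are from the draft and may well change):

    // In the newly-activated (child) document:
    window.addEventListener('portalactivate', (e) => {
      // The previous top-level page is handed back as a <portal> element;
      // the child can re-embed it, keep it around for a "back" transition,
      // or ignore it and let the browser discard it.
      const predecessor = e.adoptPredecessor();
      document.body.appendChild(predecessor);
    });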
In practice, child portals will run Javascript written by Google, and will be subject to Google's complete discretion. The distinction between first-party and third-party scripts will be erased, effectively letting Google run analytics on a third-party domain and send the results back from its own domain via a Javascript proxy.
It looks like a way to sort-of link to a site, but to make the default action to return back to the linking site instead of going deeper into the other site. You aren't sending people to the target page, but are just loading a preview. As the target website, even if your analytics show a visit, it might not be a full visit.
15 years ago, it was common for website owners to ask you to "make all links open in a frame" so that the original site could keep a frame on top of all the other content. That was probably only stopped by JavaScript snippets to break out of frames. Portals is the same kind of idea -- give power to the sites that link out (like Google) at the expense of the smaller sites that receive links.
They're doing this in the open through the W3C WICG process [1]. They're not doing this as they see fit.
> for their own purposes.
Open standards are available for anyone to use and benefit from. That they may have had a use case in mind when defining the spec does not invalidate it; rather, it reinforces it and shows its value.
So-called "Web Standards" processes and organizations have been a fairy tale for a long time now. W3C is a pay-as-you-go puppet organization dominated by Google. You can see who calls the shots just by considering that the W3C HTML 5.3 and SVG 2 specs have been abandoned. WHATWG was also funded and is sponsored by Google, and made an aggressive attempt at usurping de-facto power over web standardization with a vulgar, staged anti-establishment "HTML5 rocks" campaign. The dominating browser and mobile platform (again, Google) can do what they want anyway. Now they turn to W3C again as a fig leaf for their not-so-stealthy operation to become the web.
WHATWG's and W3C's track record as wannabe standardization bodies is incredibly poor, considering that it has seen Opera and MS drop out of browser development altogether, and that web standards are now so fsckng complicated it's infeasible to ever develop a browser from scratch again. I mean, that's exactly the situation a standard is meant to prevent.
It's time for a real standardization org to step in, and for society to sit at the table when it comes to decide on the primary means for digital communication.
Sure, if it gets approved as a standard. Why not go look at the history of other HTML, CSS, and JS API additions, and the way they were staged, and what percentage shipped without agreement from other stakeholders?
Start at https://www.chromestatus.com/features/schedule you can drill down from there, for each feature, you can see links to W3C, WICG, TC'39, or whatever repositories, complete with discussion, and status from other browser vendors.
Yes, Chrome has shipped proprietary API features "on by default" before without buy-in from everyone else (e.g. NaCL), but so has Mozilla. The question is, are these the exception, or the rule?
And in my view, most of the Web's evolution in the past few years has been way more open and participatory than the 90s/00s Netscape/IE era.
> They're doing this in the open through the W3C WICG process [1]. They're not doing this as they see fit.
I really don't see a significant difference. They can pretty much propose whatever they see fit through WICG and implement it in Chrome as they see fit.
By the way, this isn't a standard by any reasonable definition of the word.
That's how modern web standards are done. If people like it, other browsers will adopt it and then the standard will too. If people don't, they won't. But every browser runs nonstandard features as part of experiments to implement new things.
One way of looking at it is that the standardization process is "propose and implement whatever you want, but it doesn't become a standard until another browser implements it and consensus is achieved".
This is a direct response to the problems of the W3C's old way of defining large, pie-in-the-sky standards without any implementation, and only much later realizing that they were bad.
No, given chrome's market share other browsers are forced to implement it or people will consider those browsers broken.
Google are now where Microsoft were when they launched IE6, adding features as they please to support their own products and revenue.
Back then the web population was savvy and not so reliant on Microsoft, so they could jump to Firebird/Firefox/Opera when things got really bad.
The modern web users don't have that option because google are far more dominant now than microsoft ever was.
Back then people would joke about their grandma being "stuck" on IE6, but that's the level of technical aptitude of the whole web now: just ordinary people wanting to get on with their own business.
Back in '02, chances are you'd be re-installing XP all the time, and during one of those installs a technician could install Firefox (or later Chrome) and you'd be set up. That doesn't happen now, OSes "just work" and people aren't changing their browser.
> The modern web users don't have that option because google are far more dominant now than microsoft ever was.
At peak (in ~2003-2005) IE had over 90% of global browser market share. Chrome on desktop has between 60 and 70, and afaik less on mobile, and less in the US. So, no.
> That doesn't happen now, OSes "just work" and people aren't changing their browser.
Given that chrome doesn't come on any popular desktop OS by default, this seems unlikely. If people weren't changing their browser, Safari, FF, or Edge would be the most popular due to coming by default on OSX, most linux distros, and Windows by default. (how popular is CrOS?)
> No, given chrome's market share other browsers are forced to implement it or people will consider those browsers broken.
There's a pretty decent history of this not happening.
I also don't really think your idea of how the web was used is a true reflection of reality. I was an elementary school kid back in 2002 and was using netscape and IE 4? 5? at the time, in school. I certainly had no clue how to jump to firefox or opera, nor did I know how to re-install the OS. Nor did my teachers.
I don't think the average web user back then was as savvy as you're thinking. Perhaps the average web user you hung around was, but that's just your social circles.
> Chrome on desktop has between 60 and 70, and afaik less on mobile, and less in the US. So, no.
No, it has about 60% on both mobile and desktop, not counting the derivatives (like Edge and Opera). Considering that both of those markets grew significantly since IE6 days, that makes Chrome more dominant than IE ever was.
And you seem to assume that an average consumer installs stuff on his own after the OS installation, which is not the case with less tech-savvy people I know. They usually give it to someone a bit more tech-savvy to do post-install tasks, and those always include an installation of a different browser. Every single second-hand computer available in my country's market comes with a cracked version of Windows, MS Office and a different browser. Tech-savvy ones just reinstall the OS, non-tech-savvy ones continue using the defaults — the defaults set up by someone else.
> Considering that both of those markets grew significantly since IE6 days, that makes Chrome more dominant than IE ever was.
No. Dominance is a function of market share, not market size, that's silly. Otherwise I could argue that Firefox is also more dominant than IE ever was, since it's got more users than peak IE. Same with Safari. But then 3 browsers would simultaneously be more dominant than the one that controlled 95% of the market at its peak. Neat.
>> The modern web users don't have that option because google are far more dominant now than microsoft ever was.
>At peak (in ~2003-2005) IE had over 90% of global browser market share. Chrome on desktop has between 60 and 70, and afaik less on mobile, and less in the US. So, no.
The parent post was comparing Google and Microsoft, not just Chrome and IE.
>They're doing this in the open through the W3C WICG process [1]. They're not doing this as they see fit.
They launched the draft on the 2nd and implemented the tag on the 10th. I'd hardly call that going through the open process. That's a move to save face.
At the end of the day, web standards are nothing more than advisory and Google own a clear majority of browser market share. Who is going to stop them?
If mobile app store revenues can be used as a metric for revenues on the web generally, iOS users are a valuable demographic to target, even if they are the minority. If your website doesn't work in Safari, you're going to lose revenue.
<portal> doesn't feel like something that will ever be supported in Safari.
IE6 lost that battle because it stagnated: it wasn't just failing to keep up with new standards, it wasn't innovating in general. We don't know if things would have turned out the way they did had Microsoft kept proactively extending HTML in new IE versions. But given that this is largely what was happening in the IE/NN mortal-combat era that preceded IE dominance, I'd expect "this site works best in ..." to make a comeback. And, indeed, it's increasingly common to see websites explicitly say that they want Chrome. Especially new Google projects...
It's not up to Google, it's up to web developers. If they find <portal> useful they'll implement it which forces FF/Safari to create their own implementations.
Dart ran natively in Chrome once. Why doesn't it now? Because webdevs didn't care enough to write native Dart on the front-end.
Am I reading this wrong, or is Google pushing a fairly complex element with substantial security considerations into HTML exclusively to solve their UX problems with AMP and the pre-load search carousel? Like this seems like it would be a fair amount of work for browsers to implement, and Google is the only party who would benefit.
No it isn't. It leads to fragmentation of the platform, and pretty soon you'll start seeing "Works best in Chrome!" tags on websites, because developers are too lazy to implement fallback solutions where <portal> (or whatever other proprietary extension) isn't supported.
No, the purpose of the W3C is to make sure we don't end up with multiple slightly different, mutually incompatible variations of the same thing. It's not meant to be a gatekeeper for new features.
That said, Portals _are_ going through the normal W3C process for experimental features via the Web Incubator Community Group: https://wicg.github.io/portals/
And the "standard" is written entirely by Google, and soon to be released in Chrome. The W3C and IETF are so thoroughly captured as to be little more than two small technical documentation groups at Google. It's the old Microsoft playbook, disguised at the expense of two decreasingly-credible industry standards bodies.
It's kinda the opposite actually. iframes didn't provide sufficient security to do the sort of things Google wanted to be able to do with them, so they had to design a new standard with better protections: https://github.com/WICG/portals/blob/master/explainer.md#why...
I don't understand the cynicism people have for this idea. All they did was say "wouldn't it be cool if you could have nice animations in between pages" and built a proof-of-concept. It's not a finished product. They aren't forcing it into a standard. It's a demo of something that would be cool.
My point is, portal isn't "an iframe without all the security protections." portal is a demo of animating between pages. What it becomes from there is completely flexible.
Things are not "flexible" once they have been shipped on by default, typically. Changing behavior or removing at that point becomes very hard, requiring usage measurements, etc.
When you are Google, unilaterally releasing and pushing a major new feature for "the web" has an entirely different meaning and implication than it does for, sadly, Mozilla or some other player (even Apple, to some extent), because of Google's huge market/mind share.
In that scenario "wouldn't it be cool" is not a good enough reason, and for a major feature such as this, skepticism is healthy and warranted... the "web browser" is slowly being transformed into "the Google browser" and we have no one to blame but ourselves.
The consensus opinion seems to be, from this thread and elsewhere, "While Google is not doing anything wrong by standards in this case, because they have the power/potential to do something wrong by standards we must oppose this as well."
Serious question: what are the substantial security considerations?
I've been embedding websites in a tiddlywiki instance for todos as a test run, and I was surprised how much every website now tries to avoid/stop iframes due to the fact they aren't a "root" element.
This type of HTML element would allow me to build a web application that actually leverages other websites without click-jacking. This is so powerful; it's what makes emacs and other ubiquitous interfaces so powerful: leveraging other content.
But maybe that's a pipe dream? Maybe that should be an application outside of the browser? Curious if there are any resources on the security implications of < portal >
> I was surprised how much every website now tries to avoid/stop iframes due to the fact they aren't a "root" element.
JS-based “iframe busters”, then X-Frame-Options, and now Content-Security-Policy should be ubiquitous. We started “busting iframes” in the early 2000s in the banking industry for security reasons.
Preventing being an iframe child protects your site from phishing via click-jacking your login screen, and also prevents “stealing” of your content by spammy aggregation sites.
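For reference, these protections look roughly like this (strictest common values shown):

    // Old-school JS "frame buster":
    if (window.top !== window.self) { window.top.location = window.self.location; }

    // Declarative equivalents, sent as response headers:
    //   X-Frame-Options: DENY
    //   Content-Security-Policy: frame-ancestors 'none'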
If you're reading about this looking for how Google might misuse it, and that's what you see, then it seems like you're reading it as you intended. This isn't surprising. Many things can be read more than one way with only a little imagination.
But there are other ways to read things. Like, how could I use that on my site?
I'm thinking it might be fun to do something like the infinite zoom that Scott McCloud wrote about. Or more practically, it would let everyone put site previews next to links, like Twitter and Facebook do, without needing a big infrastructure.
Or you could think about how arbitrary hackers could misuse it. The security issues seem a bit concerning. But I suppose it could be blocked like iframes?
It's true that I could be lacking imagination. Because to me the portal proposal reads like it's simply moving the implementation of Google's AMP carousel into the browser in order to solve the address-bar problem. Your suggested use cases aren't terribly convincing, I'm afraid. Zooming UIs are perfectly possible using existing web technology. It would be interesting to have a standard way to embed Twitter/Facebook-style preview cards, but portals aren't it. A portal would just display a small portion of the page, which wouldn't be terribly useful unless the page is designed to reformat itself when displayed inside a portal viewport. If you want link previews now, there's nothing stopping web application developers from fetching the Open Graph metadata themselves.
Sorry about that, I didn't mean you have little imagination, but rather that any reading of a tech spec requires a bit of imagination.
It doesn't seem all that unreasonable for a web page to show a preview if the window size is small enough. You could do that with a media query. And then when it goes full-screen, it would already be loaded and that would just be a resize.
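Something like this, as a sketch (class names invented):

    /* Compact preview when the viewport is small (e.g. while embedded);
       the full article shows once the page gets the full window. */
    @media (max-width: 480px) {
      .article-body   { display: none; }
      .article-teaser { display: block; }
    }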
No worries. I really was sincere with my original question of whether I was missing useful potential applications.
Building richer cross-site interactions into web standards and browser interfaces is an intriguing idea. A more dynamic model of hypertext, where the browser intelligently displays linked-to content. Even though portals are a highly limited version of that idea, Google has a high chance of getting them standardized and implemented by competing engines, so hopefully more useful applications can be found.
Embedded youtube videos could support a navigation to the video page where the video just animated from the embed to its in-page position while continuing to play.
Embedded twitter posts could be clicked on to do the same sort of transition. Really this sort of thing applies to all kinds of media embeds.
Amazon ads could do the same sort of transition to the advertised item.
Basically, this could allow multiple different domains to coordinate to get some of the fluidity that you can build on a single page. Ex. you could develop a federated social network of many different providers (or video sharing or whatever) while retaining the UX of interacting with a single thing.
While they are solutions to problems Google has, with some work, they can perhaps also be solutions to problems many other websites have, using browsers not written by Google. That's the benefit of making something a standard.
>Google engineers hope that their new Portals technology will take over the web and become the standard way in which websites transition between links.
This is marketed as being similar to the iframe tag, but it's really an attack on the a tag.
> Furthermore, portals can also overwrite the main URL address bar, meaning they are useful as a navigation system, and more than embedding content --the most common way in which iframes are used today.
Well, this looks like the next hack/scam feature. I really hate to think of the sort of rubbish that this will cause.
> Google says portals allow users to navigate inside the content they are embedding --something that iframes do not allow for security reasons.
Surely this is exactly why the "portal" shouldn't do this either? Why not work towards fixing the security issues of the iframe?
Regarding pre-loading content in links, they could just implement this in the browser if they think it's really that important. This will destroy your mobile data.
I really wish they had sat down and discussed this with other browser teams to flesh it out. I think part of this is in getting the jump on other browsers, this really will take a long time to implement securely.
I had to double-check that it's not an article from 1999 or something, when everyone was building "portals", with a very similar premise of having everything in one place [1].
It took a mere ~25 years in the industry to see things come full circle and old ideas come back as new, unironically: X Terminal and "network computers" → browser, JVM → WASM, CGI → serverless, Modula/Oberon → Go; now <iframe> → <portal> and the whole idea around it.
>This is a proposal for enabling seamless navigations between sites or pages. In particular, this proposal enables a page to show another page as an inset and perform a seamless transition between an inset state and a navigated state.
But is it "google launches" or "google proposes"? I think semantic details matter a lot here. The article is written in a way that it makes it look like W3C does not exist.
Agreed. There's one line about a drafted standard, along with a note that no other browser vendor has expressed interest.
I do not want to return to the days of incompatible browsers (to the degree it was) even if they now publish a draft of how it works - the goal is compatibility, not POTENTIAL compatibility.
What I don't understand is that many websites frame-bust to prevent phishing and click-jacking attacks on their content. But if Google wants to be accessible through portals, they'll have to open themselves up to the same phishing and click-jacking vulnerabilities, right?
The reason for a new element, rather than just tacking this behavior onto `iframe` seems to be backwards compatibility [1].
> We wish to give user agents as much flexibility as possible to isolate the host and guest browsing contexts (in implementations, in separate processes), even when the active documents may be same origin-domain. To make this possible, we intend not to expose the Document or WindowProxy of the guest contents, via the IDL attributes on HTMLIFrameElement, by using access to named windows, or by indexed access to window.frames. Without such access, communication can be guaranteed to be asynchronous and not require shared access to JavaScript objects. Those operations which don't apply to portal contexts would all need to be modified to throw an appropriate exception in such cases.
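Concretely, this is the kind of synchronous reach-in that same-origin iframes allow today and that portals deliberately withhold (a sketch):

    // Same-origin <iframe>: the parent can touch the child directly.
    const frame = document.querySelector('iframe');
    frame.contentWindow.document.title = 'reached in synchronously';

    // <portal>: no contentWindow / contentDocument on the element;
    // communication is asynchronous messaging only.
    const portal = document.querySelector('portal');
    portal.postMessage({ hello: 'world' });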
I enabled portals in Canary and tested out the demo. For some reason after the portal has activated, I can no longer go back to the previous page. So much for "take over the web and become the standard way in which websites transition between links". I wonder how much this will confuse the average user. I hope that this is still due to it being experimental.
Do they use their internet dominance to invent a bunch of complex html elements so google becomes the only one being able to parse html effectively in the future? I hope not.
I don't think Google can be wholly blamed for that dominance. It's only natural that they would want to invent their own web rendering engine and with it they can pretty much choose what they do.
What's less natural is that so many other browsers/frameworks chose to be based upon it (like every Electron app in the world, Opera, Silk, Microsoft Edge real soon now™). Apart from Safari or Firefox, there aren't really many alternatives anymore.
It's the natural consequence of people clamoring to turn a simple hypertext engine into a full on application VM. Feature upon feature was piled on, producing a gigantic ludicrously complicated garbage fire the likes of which simply cannot be replicated by mortals, and to ensure this whoever is on top will keep adding to the garbage fire so no one can ever catch up.
Web devs and evangelists of the world, you brought this on yourselves.
IMO it's not so much about blame or other moralistic categories a la "Google is evil" as it is about whether we're going to accept and let happen the Googlification and appification of the web. If we do, then we (as in humanity) must look elsewhere now for our communication needs. Think about how involved the web is in everyday life (in education, personal communication, public services, contracts, law, webmail, ecommerce, etc.). If we accept that this medium is going to slip away and out of our control for no good reason, then we need to specify something with a more specific scope for preserving digital documents, contracts, transaction manifests, etc. that is also fully able to represent existing (static or dynamic) HTML content.
I'm not just talking the talk here: I've spent a significant amount of time (years) implementing SGML and an SGML grammar for HTML5 [1].
To talk of a browser vendor "launching" a new element is also ridiculous - Google have no authority to "launch" an element for the web as if it were some product they decided to update. Article title is poor in this respect.
Looking at the actual spec:
> This specification extends [HTML] to define a new kind of top-level browsing context, which can be embedded in another document, and a mechanism for replacing the contents of another top-level browsing context with the previously embedded context.
1) Why not an iframe?
2) Why not a link to the iframe src?
Or is the link here from within the iframe? Surely that's possible already with a postMessage and a window.location change. I don't understand the benefits here.
Is the sole purpose of this to avoid a HTTP request?
I agree with most of your comment. However, I don't think the claim that Google is launching a new element is ridiculous (even if I'm sad about it). That's more or less how it worked during the IE vs Netscape war. Remember those fancy <marquee> and <blink> tags for example. Standardization came second.
Depending on how it interacts with SRI, there is one interesting use case that could be supported. Imagine you had a data-uri / bookmarklet which loaded a <portal> of a web app, using the ideas discussed here:
If the SRI hash on the data-uri could restrict the content of the portal to a trusted HTML document (containing SRI hashes for the scripts it referenced), then navigating "into" the portal would allow the browser to display the URL of the origin site, and run JavaScript in that context, with the TLS indicator visible.
This would allow us to pin specific versions of web apps (if they were structured this way), and those apps could be subject to independent audit. Specifically, it would then be impossible for a compromised host to silently "upgrade" you to a new malicious version of the web app (or even downgrade you to an earlier version).
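For context, this is what SRI pinning looks like for a script today (the hash is a placeholder); the data-URI/<portal> combination above is speculation about what the draft could enable, not something it currently specifies:

    <script src="https://app.example.com/app.js"
            integrity="sha384-PLACEHOLDERPLACEHOLDERPLACEHOLDERPLACEHOLDERPLACEHOLDER"
            crossorigin="anonymous"></script>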
A substantial amount of browsing is now done on battery and expensive data plans so “no thanks” to all these preload-100-things technologies. Just waste, multiplied by more waste.
Great. This looks like another feature for AMP, but disguis.. I mean packaged as something generic. I’m really not fond of how Google is driving AMP as the go-to way to get better visibility on Google. This is going to introduce more headaches and very little to no upside. I don’t understand why they keep building things without talking to the user. There are other users than Google developers out there, and very little demand for AMP.
The example they give seems bad to me.
I'm fine just seeing the title of a news post and clicking that as a normal <a> tag.
I don't need to see a miniature version of the post that I then need to click on to read anyway.
Also, the preview problem... you could really just use an image as a link if you wanted to. Less convenient, that's true. But meh, this tag seems to offer little extra (to me).
I think things started to go more or less downhill since HTML 2.0. For the kind of web browsing I'm used to value text and images go a long way and they don't really allow for anything too intrusive. There were tons of interesting sites online in the 90's and basically none of them were bad because they were missing some essential interactive page/UI elements. Most of them were near-instant already at the time if you had something better than a modem connection. Things could've gone heavenly if the nerds who built that only weren't a minority. At least we still have text terminals and ssh...
I honestly don't see the point of this unless it gets widely adopted by other browsers (which is an obvious point). Mobile is already a huge market and on iOS at least, even chrome and safari simply use the iOS built in WKWebView with a UI wrapper around it. So unless Apple decides to include <portal>, wouldn't this not work on any iOS browsers?
I also use Android but don't use Chrome due to its lack of extension and content blocker support. So other browsers on Android would also need to adopt this.
I am growing increasingly annoyed with "user experience." It's becoming a catchall excuse for Google and others to inject all kinds of crap that no one wants.
Reminds me of the old Google Caja project. ”The Caja Compiler is a tool for making third party HTML, CSS and JavaScript safe to embed in your website. It enables rich interaction between the embedding page and the embedded applications.”
The headline is wrong, nothing was 'launched', rather, an experimental feature was added to chrome://flags, just like the gazillions of other features that are hidden behind flags until they are standardized and shipped.
Is Firefox going to implement it? Seems like a lot of pressure on that team to ensure it makes sense. And if they find a reason not to implement it they may be really screwed because of the control Google has.
The best-case scenario for the future health of the web is for Mozilla to resist supporting <portal>, which will hopefully dissuade companies from adding this element to their products.
Now that Chrome is quickly becoming (if not has outright become) the new Internet Explorer, I wonder what browser will come out in a few years to be the next Chrome?
So many snarky comments here. Chill. Portal is just an iFrame that has the ability to smoothly transition itself into being the top-level URL while maintaining its inner state.
"The advantage...is that the content inside portals can be pre-loaded while the user scrolls through a page, and be ready to expand into a new page without having the user wait for it to load."
My brain translated it to this:
"The advantage...is that advertisements can be pre-loaded while the user scrolls through a page, and be ready to add to Google's revenue without having the user to wait for it to load."