Google is a business like any other, and when they have the power to leverage a monopoly position, they will.
For example, they may start integrating technologies for which they have exclusive, or at least 'special' access. Can you imagine if all of a sudden Google apps start performing better than anyone else's?
Or what if they integrate technology that de facto collects usage and behaviour - even from other domains - and then leverage that competitively?
The power that Google already has, relatively unregulated, is crazy. Remember that Google is the company that could change the outcome of elections ... possibly without us even knowing. They could drive markets up or down at will.
This would probably happen as a 'slow drip' - no single step close enough to consumer awareness to create problems, and with 'business friendly' politicians in office, nary a worry about legislation getting in the way.
They could introduce these technologies under the guise of 'improving user experience' (and maybe legitimately so, to start), but other PMs, new CEOs, etc. will just take the opportunity before them. Why wouldn't they?
We worry a lot about 'Net Neutrality' at the network level, like it's a religion ... but I find it quite bizarre that we don't worry about 'information neutrality' as well.
I suggest that having 'one major provider' may be a problem, doubly so if it's an entity like Google, and that frankly we don't really gain much from this at all. If Mozilla were the provider, great. Even Apple might be a better choice, simply because they are not in the business of managing our information - at least right now, for them, it's a headache, not a revenue source.
So the concern is real.
This is already happening. I very recently worked on the Edge team, and one of the reasons we decided to end EdgeHTML was because Google kept making changes to its sites that broke other browsers, and we couldn't keep up. For example, they recently added a hidden empty div over YouTube videos that causes our hardware acceleration fast-path to bail (should now be fixed in the Win10 Oct update). Prior to that, our fairly state-of-the-art video acceleration put us well ahead of Chrome on video playback time on battery, but almost the instant they broke things on YouTube, they started advertising Chrome's dominance over Edge on video-watching battery life. What makes it so sad is that their claimed dominance was not due to ingenious optimization work by Chrome, but due to a failure of YouTube. On the whole, they only made the web slower.
Now, while I'm not sure I'm convinced that YouTube was changed intentionally to slow Edge, many of my co-workers are quite convinced - and they're the ones who looked into it personally. On top of it all, when we asked, YouTube turned down our request to remove the hidden empty div and did not elaborate further.
And this is only one case.
If this case hasn't already been run up to Microsoft's lawyers, start running it up to them. You'll be doing the world a service.
Now is not the best time to strike; once the timing is right, I am sure they will.
They'll get a more level playing field.
CEOs generally don't order this stuff to happen. More often it's a director, manager, VP, or whoever, that's just really aggressive. The CEO may or may not have known.
When a company gets bloodied for a pile of money, they generally have to own up to it, which makes them look bad (by the way, these things do have a cumulative effect) - but more importantly, they have to at the very least 'go through the motions' of getting staff to 'not do this stuff'.
So they have 'training' and 'oversight', etc. However ingrained the behaviour is (or even if it's a single rotten apple), the likelihood of recurrence goes down.
For example - if an internal legal team gets some responsibility for oversight on these issues, they can make life difficult for managers on these things.
I worked at a Fortune 50 that was sued by a patent troll, and it seriously and fundamentally changed internal culture, to the point where we needed lawyers involved in everything; it was really bad. Obviously a negative example.
But Microsoft especially has enough money to drag Google into court; they should do it.
That said: I'll bet $100 that MS is doing some tricky things of their own anyhow.
TBH, I consider Google much more evil than Microsoft, in or outside of Dev Circles. Microsoft dropped the evil baton and Google picked it up and sprinted away.
What did Microsoft gain from the Windows Phone YouTube app case? Nothing. Google successfully screwed Microsoft over.
And I've worked for Google in the past, but their main issue has always been that they change a lot, which makes them a moving target, annoying in its own way.
Microsoft earned its public image and while it's made nicer noises recently it's not an organisation that fills me with trust.
Uh, money? It might not exactly be a noble incentive for a lawsuit, but it's sure as hell an incentive, isn't it?
With a lot of legal issues, sometimes the only winning move is not to play.
I'd be curious to see how likely Microsoft would be to follow this approach rather than to just stick to using Blink... as they've already decided to do.
Google could start responding to YouTube requests with binary streams of gibberish if they wanted; MS would only have standing to sue as a content creator and advertiser on YouTube.
If Google is reverse engineering other browsers optimization paths and putting out content that is disagreeable to that optimization, that's possibly unfortunate but not illegal.
b) since Google's browser became the de facto standard browser thanks to Edge switching engines - hence the thread. The standard doesn't really matter anymore; Google makes most of them now as a matter of course anyway.
Click the "show desktop view" page on mobile Firefox, reload, and suddenly the score is there. They're not discriminating that aggressively against competing desktop browsers. Yet.
First, I can find nowhere that Chrome claimed to have better video-watching battery life (in fact, popular tech sites mention Chrome improving but still trailing).
Second, the only dip I can find in the public Edge battery life tests was
April 2017 - 12.5 hours
Dec 2017 - 16 hours
May 2018 - 14.3 hours
vs Chrome's 9.3, 13.5, and 12.5 hours. Which means whatever happened last spring, Chrome also dipped.
And third, how about we talk about the kind of web-browser battery benchmark that is based on playing fullscreen video and is defeated by adding a single hidden div? It's not testing battery life over a representative sample of what a web browser is actually used for (especially over 12+ hours), and it obviously wasn't very resilient in the face of what a web browser actually has to handle.
Honestly it sounds like they added a div for unrelated reasons (accessibility, "security", ads, who knows), thought it was worth the performance tradeoff (or never measured), and it indirectly ended up making Edge better for real web content (as of the Win10 Oct update).
I'd suggest inspecting a few sites you watch video on. An empty div is the least weird thing you'll find.
This ultimately comes down to hardware limitations. GPUs are limited as to what they can compose during scanout, because of memory bandwidth limits. Each plane that you can alpha-blend together at scanout time multiplies the amount of memory fetches per dot you have to do. On today's high-DPI displays, the bandwidth going out to the display is very high to begin with, so you can't afford to multiply that by much. That is why putting something on top of a video is tricky: you're adding another layer to be alpha-blended on top, increasing your memory bandwidth by 50% over the two layers you already have (RGB for the background plus YUV for the video). The user's GPU may or may not support that--as I recall, prior to Skylake, Intel GPUs only had two hardware planes, for instance.
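To put rough numbers on that 50% figure (a back-of-envelope sketch; the resolution, refresh rate, and bytes per pixel here are illustrative assumptions, not measurements of any particular GPU):

```javascript
// Memory the display engine must fetch per second when alpha-blending
// `planes` hardware planes at scanout time.
function scanoutBytesPerSecond(width, height, planes, hz, bytesPerPixel) {
  return width * height * planes * bytesPerPixel * hz;
}

// A 4K display at 60 Hz with two planes (RGB page + video), ~4 GB/s:
const twoPlanes = scanoutBytesPerSecond(3840, 2160, 2, 60, 4);
// A third blended plane for an overlay multiplies the fetches by 1.5x:
const threePlanes = scanoutBytesPerSecond(3840, 2160, 3, 60, 4);
```

Treating the video plane as 4 bytes/pixel overstates it slightly (YUV 4:2:0 is cheaper), but the 1.5x ratio is the point.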
I'm not surprised that Microsoft just used "are there any DOM elements over the video?" as a quick heuristic to determine whether scanout compositing can be used. Remember that there is always a tradeoff between heuristics and performance. At the limit you could scan every pixel of each layer to see whether all of them are transparent and cull the layer if so, but that would be very expensive. You need heuristics of some kind to get good performance, and I can't blame Microsoft for using the DOM for that.
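A sketch of what such a heuristic might look like; the names and shape here are mine, not Edge's actual code:

```javascript
// Axis-aligned rectangle overlap test.
function overlaps(a, b) {
  return a.left < b.right && b.left < a.right &&
         a.top < b.bottom && b.top < a.bottom;
}

// Cheap DOM-based heuristic: bail out of scanout compositing if ANY
// element overlaps the video rect. It deliberately ignores visibility:
// a fully transparent div still forces the slower composited path,
// because per-pixel transparency checks would be far too expensive.
function canUseScanoutFastPath(videoRect, otherElementRects) {
  return !otherElementRects.some(r => overlaps(videoRect, r));
}
```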
Which, again, is fine, but mayyyyybe they were a little lax in checking performance on nearly any other popular video site on the web to see if that heuristic was a good one?
Or maybe changing page layout in an extremely common way wasn't an effort to undermine a hyper specific benchmark?
Remember, if you put visible DOM elements on top of the videos, then you lose scanout compositing no matter what.
A lot of them? Vimeo, for instance, has a number of opacity: 0 and hidden divs over the video. Twitch has at least a couple of opacity: 0 divs on top.
Maybe we're interpreting the phrase
> hidden empty div over YouTube videos
differently? That's the structure I assume they were talking about.
Considering that it's now optimized and that's not what the original post said, I don't know why you'd assume that.
That's what this "empty div" is for, if it's the one I think it is: it's the container for things like branding and annotations.
FWIW, I think you're a little credulous there; as I mentioned in my other comment, I can't find anything stating that Chrome started beating Edge at the test (their videos actually claim the opposite) or anybody from Chrome boasting about it (articles from the time like yours also say the opposite).
> On the other hand, pretty easy how such a div might trigger a less efficient path
I mean, sure, you can always fall off the fast path, but given how common transparent divs over video are, the battery benchmark should have come with even more caveats. Edge is the most battery efficient browser†!
† for playing fullscreen video††
†† Battery test not valid if the page doesn't use the exact layout youtube used in December 2017. Also not valid if testing vimeo, or twitch, or any porn site, or...
And while I agree that video overlays are common, I also think it's reasonable for such overlays to revert to a slightly less efficient path.
In my own web development activities I can point to hundreds upon hundreds of hidden, invisible, and obscured DOM elements that have no obvious reason for existing to anyone outside the code-base, where you'd find the comment explaining the required workaround, browser hack, or legacy constraint. I've also experienced wildly divergent performance on MS browsers compared to others when creating content, often from something as trivial as DOM order or composition.
Clearly Google owes me some money for my part in their ongoing conspiracy to hurt Edge. I'm flexible, I'll accept GCE credit :)
Hey, there's a new div in the DOM - the only possible reason for a change like that is so Chrome can advertise beating Edge on a benchmark nobody cares about? Even though they never beat Edge on it and this "advertising" never took place?
This was the credulity I was talking about. These events didn't happen (you literally wrote "stories", plural!, about Edge winning the benchmark) and the motivations make no sense. I'm not sure why you'd repeat it without even a warning that it may just be a narrative made up from grumblings about fixing a fast path, heard third-hand.
I do care about video playback battery performance.
So much so, in fact, that I bought my current laptop specifically so it would last long when watching videos.
Also note that tablets, smartphones, the Macbook Air and the Surface are sold on their battery stamina, and specifically while watching videos. And how would you measure that? Youtube, of course!
Makes total sense, but if you're over-fitting for Youtube's exact layout in 2016, you're eventually going to have to update your optimizations. Sites don't stay the same forever.
Also, what can explain Edge failing to load Azure dashboard - was that a Google bug too? Ref: https://www.youtube.com/watch?v=5zMbfvEHlTU
Sure, and they did. And then next month it’ll be something else, and a new check. The next month it’ll be something else yet again, and yet another check. Pretty soon Microsoft’s codebase is littered with checks and guards against the random things Google does, and they’ll still always be a deploy behind. Google could keep this up for years.
Given how easy it is to delete an offending div using the dev tools, it would be easily verified by web developers. There'd be a thousand blog posts and news stories saying "Google deliberately sabotages Edge on YouTube" and the public blowback would be pretty damaging I imagine.
Google of today is a collection of disjointed silos that don't work well together, or don't work together at all. The leadership of those silos is being aggressively staffed with "industry veteran" VPs and SVPs from Oracle, HP, Motorola, and the like. These folks build their little empires, not products. NIH spreads, internal "competition" starts, etc. This story should sound familiar to more experienced people from Microsoft... they've seen this development phase; they know what I am talking about. Microsoft's name for that was "IBM"; Google simply calls it "Microsoft". And when you hear "We are not THAT yet", and people feel the need to say it, you have probably turned into THAT.
Anyway, the idea of Chrome being so aligned with YouTube, over such minor gains over Edge, would today be just wishful thinking. Until some major, major restructuring and changes to their recent corp "culture", Google will simply remain incapable of driving waaaaay more important product-development changes across its product surfaces than this.
As an aside, his point on why Chrome is a monoculture misses the mark. Chrome is a monoculture for the same reason almost everything else becomes a monoculture: it was, at one point, deserving of it as a product, whether as naturally the only one or as the best. But now they might not be the best out there. And when they go on to create barriers to competitors and lock-in, thereby making themselves artificially dominant, that is an antitrust case in the making, if the U.S. Justice Department had any teeth in that area.
It doesn't confirm Google's reasoning for the div.
Correlation does not imply causation: Yes, it might be true that companies make changes to their products that break stuff somewhere else. But to claim that it is done intentionally is very far fetched. Especially when you have to support so many different environments, it is close to impossible to thoroughly check all of them. Demanding that the performance is good even goes one step further than just demanding the service works.
Because it's EVIL!
Also, have you tried opening Google.com, GMail, YouTube or Translate in Edge? You are bombarded with prompts to install Chrome. There is no way to disable them, not even when logged in.
What's to stop "YouTube needs Genuine Chrome™ with Google Play® Support Services Installed"?
They could throw up a check and have "Youtube requires Chrome XX.X with the Evil-DRM plugin enabled" live whenever they want. It's the relevant market forces and ecosystem.
Nothing is going to make me feel bad for Microsoft losing market share. This is the company that silently disabled microphone access for Chrome because it wasn't installed via the Microsoft store. I spent a month trying to figure out why my microphone suddenly wasn't working in any of my web apps. As much as I dislike what Google is doing, Microsoft has been doing far worse for much longer.
- embrace and extend standards
- secret apis
- advocate for standards, then drop them for proprietary ones
- perform hidden changes that make competitors look bad
- then tout your advantages loudly
The only thing missing is outright paying people to not support your competitors.
Firefox, however, I feel bad for. Nerd advocacy was partially responsible for Chrome's rise to popularity, but the time has come to advocate for Firefox.
I worked on IE in the days when there were many crazy conspiracy theories about Silverlight and IE collaborating to ruin the open web. This sounds similar.
Seems counterproductive to me.
So, color me astonished, not.
The biggest feature difference for SharePoint cross-browser has historically been with the ActiveX controls. Say you had Word installed, there was a control to open documents with Word. As in, you could click save/view/edit rather than just whatever the browser default was. Or with Skype, to see a user's available/busy/away status next to their name on a SharePoint page. So if you're looking at a wiki or calendar for something, you might go "oh, the author is free, I can just ask." Chrome's plug-in model was much stricter than ActiveX. That's why you'd see some features in IE but not Chrome.
How is this productive?
They've already started. Google Meet (their version of Hangouts for enterprise) didn't work on non-Chrome browsers for a very long time. If you tried to do a video call with a non-Chrome browser, you were told to install Chrome and load the page in Chrome. The only reason they were able to get away with this is because Chrome has the high market share that it does. If they'd done this when Chrome had 10% market-share, the response from users would have been to stop using Google Meet. However, because they did this when Chrome has 70% market share (and climbing) there was hardly any outcry; those who didn't already have Chrome installed gave a resigned sigh and installed Chrome in order to participate in their work meetings.
Like you said, it'll start as a slow drip, with the least popular and most obscure applications (for example: Google Play Music) being moved over first. Then, as users fail to object, they'll move over larger and larger applications. Even if they never make the "big-two" of YouTube and Google Search Chrome-only, they'll still be exerting a fair amount of pressure for users to switch to Chrome.
Google Earth is Chrome-only, because they use the Chrome-only NativeClient.
Google Hangouts dropped calling support on non-Chrome browsers for quite a while: https://news.ycombinator.com/item?id=15889018
YouTube (and perhaps GMail) is extra slow on non-Chrome browsers because they use a dropped-and-never-standardised feature that is available only on Chrome, and force a slow polyfill on Firefox (and perhaps Edge): https://twitter.com/cpeterso/status/1021626510296285185
I work on the Polymer team at Google, and have worked with YouTube on this. I can promise you that the folks at YouTube are working very hard on porting to the specs that are supported cross-browser. Switching to the final versions of the specs is a super high priority for all of us; there were just some changes between v0 and v1 that are hard to paper over, and as you might imagine, YouTube has a lot of code.
> Google Earth is Chrome-only, because they use the Chrome-only NativeClient
My understanding is that if it weren't for spectre, cross-browser Google Earth would very likely be shipping today. There is a web assembly version of Earth, but it needs threads and SharedArrayBuffer, which is disabled in most browsers today because of spectre-class security issues.
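As a hedged illustration of the constraint being described, here is roughly how a page might detect whether a wasm-threads build can run at all. `env` stands in for the page's global object so the check can be exercised outside a browser; `crossOriginIsolated` is the flag current browsers gate SharedArrayBuffer behind post-Spectre:

```javascript
// Returns true only when the environment can support wasm threads:
// SharedArrayBuffer must exist, and (in current browsers) it is only
// re-enabled in cross-origin-isolated contexts after the Spectre-class
// attacks forced it to be disabled by default.
function canRunWasmThreads(env) {
  return typeof env.SharedArrayBuffer === "function" &&
         env.crossOriginIsolated === true;
}
```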
From my point of view, the moral of this story is that Chrome shouldn't be shipping v0 specs that are on by default.
It's not that people like me want to be cynical conspiracy theorists, it's just really frustrating to watch Google ship an unfinished API that's on by default, integrate it so tightly into its core products that it can't be easily updated after the spec changes, and then just kinda... sit on it. And it's really convenient that whenever Google messes up standards, it just happens to mess them up in a way that makes Chrome look better to normal users for all of its core products.
Heck, it's frustrating that Chrome is still shipping the v0 implementation. I don't want to be vindictive or bitter, but engineers on the Polymer team clearly thought the polyfill was fast enough for Firefox. They don't think it's fast enough that Chrome could be using it now?
I don't think I'm being a conspiracy theorist when I say that probably the reason why Chrome still ships with a working v0 implementation is because YouTube uses it. And it makes me feel weird to have a dominant browser deciding to contradict a standard because it makes a Google site fast. I think that's problematic and harmful for the open web.
So even though I know that the team's intentions are good, the intentions do nothing at all to mitigate or fix the harm. And I'm sure you can understand that when people talk about the dangers of a browser monoculture, it is exactly this kind of thing that we're afraid of. A lot of us feel like Google dev teams don't really have much respect for standards or community processes. Google tends to have an attitude that standards will always go the way it wants, and if other browsers are behind on that, we just need a little polyfill or stopgap or something until they catch up. Certainly they won't decide to go in a different direction.
Is that going to get better when Chrome has 95% market share? Won't Google devs feel even more emboldened to ship v0 implementations that disregard the standards process? Are there any new safeguards or internal policies being put into place to prevent devs working on Youtube or Gmail or Maps from building core infrastructure around browser APIs that haven't been standardized yet?
I'm sure the folks at YouTube are working very hard on porting to the Web standard, but the truth of the matter is:
1. Non-Polymer YouTube works just fine, and yet isn't served on Firefox and Edge;
2. Chrome shipped and continues to support a Chrome-only "standard", which is never going to be a Web standard, for some reason;
3. YouTube, a sibling product of Chrome, and supposedly a Web app, built a whole redesign around this when it wasn't even a Web standard, or even available in more than one browser, and now apparently can't do without it, without dragging down every browser that YouTube is not related to.
On these counts, the apparent favouritism is clear.
> there were just some changes between v0 and v1 that are hard to paper over
I believe you. I also see that there's a polyfill for this. And yet, YouTube doesn't (need to) use it in the one browser that its parent company makes?
> My understanding is that if it weren't for spectre, cross-browser Google Earth would very likely be shipping today. There is a web assembly version of Earth, but it needs threads and SharedArrayBuffer, which is disabled in most browsers today because of spectre-class security issues.
You mean, if it weren't for Spectre, a Web version of Google Earth would very likely be shipping today, using WebAssembly. Google Earth is currently available for the desktop in these forms: A native version, and a Chrome app. Yes, Chrome also happens to be a browser, but it is not just a browser, and Google Earth does (heavily) depend on its non-browser parts.
So, naturally, questions arise:
1. If it weren't for things like Google Earth and Hangouts, would Chrome's NativeClient still be around?
2. If it weren't for Chrome's NativeClient, could Google Earth and Hangouts have become the products that they have?
The answer to the second is easy: Hangouts did exist as an NPAPI plugin for a while, and does currently work on non-Chrome browsers via WebRTC, so it certainly could have used the exact same features on Chrome, and Google Earth has always had an installable, native, desktop version.
It's going to be unshipped around April.
Chrome shipped the v0 spec in order to prove out that the specifications were valuable and worth implementing. It took time to get the other browsers on board. Writing a major application like YouTube with the specs provided a lot of valuable feedback for the final version of the specification.
Today, the other browsers are very much on board with these specs, Firefox is even rewriting many parts of their UI with these same specs. Three years ago there was more skepticism, and by building real world applications we could figure out what works and what doesn't, and address the (very reasonable) questions that people had about their design.
> I believe you. I also see that there's a polyfill for this. And yet, YouTube doesn't (need to) use it in the one browser that its parent company makes?
YouTube does actually use the Shadow DOM polyfill today, even in Chrome, and Shadow DOM is definitely the most complicated and arguably the most performance-critical spec involved. Polymer v1 uses the Shadow DOM polyfill by default even in Chrome because if it used native Shadow DOM it would be too easy to write an app that would only work properly in Chrome.
> You mean, if it weren't for Spectre, a Web version of Google Earth would very likely be shipping today, using WebAssembly.
Would you prefer that the NativeClient version of Google Earth not exist at all? The work that went into NativeClient has contributed to the wasm spec, and (I assume, but don't actually know) the NativeClient version of Google Earth was the basis of the wasm version.
Speaking just for myself, like, I get what you're saying, but I'm not sure how to actually translate it into action. If no one had used the v0 web component specs, then the v1 specs might never have gotten adopted at all, or at least would have been specified with way less experience on the table for how they need to work in the real world.
NativeClient is somewhat similar. It was a proposed standard that informed, and will (hopefully soon) be superseded by, a final standard that could build on the lessons learned in building NativeClient.
Someone is going to be the first to ship a new feature. For a while, asm.js applications were really only usable in Firefox. They would technically run in other browsers through what amounted to a JS polyfill, but not very well. But Firefox's asm.js work was really good for the web, because they were able to gather a lot of real world info on how asm.js and similar techniques work in the wild.
Same thing with WebVR. As I recall, for a while Firefox was the only browser shipping experimental support for virtual reality in the web browser (someone correct me if I'm wrong here). Yes it's annoying when something works better in one browser than another, but how do you add new features to the web platform without experimentation and getting real world use?
The important thing is that the insights from shipping the experiment make it into open specifications that are implemented cross-browser.
Google Meet also appears to have updated to support the WebRTC spec (rather than the oddball implementation Chrome had) so long as your browser supports it: https://blog.mozilla.org/webrtc/firefox-is-now-supported-by-...
This is why Mozilla is pushing AV1 and VP8/VP9 so hard, being stuck with H.264 as the whole industry migrates to H.265 due to bandwidth savings is a major impediment, and the licensing body could easily extract usurious rent for H.265 if it becomes the only reasonable option: https://hacks.mozilla.org/2018/08/the-video-wars-of-2027/
And that article from Mozilla, I remember it; the worst article from Mozilla for as long as I can remember (that is, since the Netscape era).
If that’s true, why are they a member of the AOM, which promotes the royalty-free AV1?
I think this is a repeat of the USB C engineering participation by Apple, they may roll it out to their tiny MacOS install base, but it could be years before AV1 is supported on iOS. Safari on iOS is where Apple likes to draw its line in the sand, it also happens to be where they have nearly 1 billion users, versus 70 million on MacOS: https://ngcodec.com/news/2018/10/9/whats-in-a-codec-hevc-ver...
What does Web Push have to do with WebRTC?
WRT VP8/VP9, bandwidth is much more of a concern than you make it out to be. Network bandwidth on mobile devices is inconsistent, unstable, and often relatively low (say hello to congested towers, or fringe 700MHz coverage where 500Kbps is all you get :P), thus the best compression possible ensures that video quality and usability are top notch.
By restricting video to H.264, you're stuck with a legacy codec that has relatively poor compression rates compared to the other standardized codecs. Is it good for interop with a 2005-era deskphone? Sure, but not much else.
> Network bandwidth on mobile devices is inconsistent, unstable, and often relatively low
I'm not using Google Meet on a cellular network. I'm using it on a wifi network. And if it comes to it, I'd rather have slightly worse video quality with H.264 than slightly better video quality with VP8/9 if it means it preserves my battery life (which is to say, set a network bandwidth target and then use whatever video quality meets that target).
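That "bandwidth target" idea can be sketched in a few lines (the rendition names and bitrates below are made up for the example):

```javascript
// Pick the highest-bitrate rendition that fits the target, or the
// cheapest one available if nothing fits.
function pickRendition(renditions, targetKbps) {
  const sorted = [...renditions].sort((a, b) => a.kbps - b.kbps);
  const fits = sorted.filter(r => r.kbps <= targetKbps);
  return fits.length ? fits[fits.length - 1] : sorted[0];
}

// A hypothetical bitrate ladder:
const ladder = [
  { name: "1080p", kbps: 4000 },
  { name: "720p",  kbps: 2500 },
  { name: "360p",  kbps: 800 },
];
```

The codec then only has to deliver the best quality it can within that budget, rather than the best quality achievable at any cost.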
> By restricting video to H.264, you're stuck with a legacy codec that has relatively poor compression rates compared to the other standardized codecs.
It's apparently really hard to find an actual practical comparison of compression quality of H.264 vs VP8. What info I did find is about 8 years old now and itself was pretty wishy-washy. My vague impression of all of this is "lots of people think VP8 has better compression quality, but won't say by how much, while other people think you can get about the same results, but either way VP8 is rarely hardware-accelerated on mobile".
In any case, if you want better than H.264, how about H.265, which macOS and iOS both support? I have no idea if WebRTC allows the endpoints to negotiate alternative codecs, and I'm having difficulty finding the answer to that.
Great, I really don't care about Google's WebRTC app of the month, it is unlikely to be with us in 10 years.
Your root question about why Google hasn't supported Safari with their Meet product likely boils down to the codec wars, as without VP8/VP9 support, Safari does not support the WebRTC standard. A reasonable thing Safari could do is rank codecs by power usage, prioritizing H.264 front and center.
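Browsers do expose a real API for that kind of ranking, `RTCRtpTransceiver.setCodecPreferences()`; the sort itself is sketched below as a plain function over codec descriptors so it can run outside a browser:

```javascript
// Order negotiated codecs by a preference list (e.g. by power use);
// codecs not in the list sort to the back, and a stable sort preserves
// the original order among ties.
function rankCodecs(codecs, preferredOrder) {
  const rank = c => {
    const i = preferredOrder.indexOf(c.mimeType);
    return i === -1 ? preferredOrder.length : i;
  };
  return [...codecs].sort((a, b) => rank(a) - rank(b));
}
```

In a page you would feed it something like `RTCRtpReceiver.getCapabilities("video").codecs` and hand the result to `transceiver.setCodecPreferences()`.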
Another take is the failing tests listed here, Google could easily depend on a component that Safari has not implemented (once again not WebRTC spec compliant): https://wpt.fyi/results/webrtc?label=stable&aligned
> In any case, if you want better than H.264, how about H.265
H.265 is a patent encumbered, licensed codec that is only supported by Safari & Edge (if you download the H.265 codec manually on Win10). At this point, Apple, Mozilla, Google, Microsoft and others are working on AV1 as a successor: https://headjack.io/blog/hevc-vp9-vp10-dalaa-thor-netvc-futu...
> I have no idea if WebRTC allows the endpoints to negotiate alternative codecs, and I'm having difficulty finding the answer to that.
According to the page you linked, nobody is spec-compliant.
I'm going to guess that Google's lack of support for Safari boils down to video codec and nothing else.
> A reasonable thing Safari could do is rank codecs by power usage, prioritizing H.264 front and center.
Apple does not have any support for VP8/9, period. And I will be extremely surprised if they ever add support for a non-hardware-accelerated video codec. Apple believes in power efficiency over pretty much anything else, which makes hardware acceleration mandatory for something like this.
Last time I checked, Google Authenticator only works on Chrome.
Google Authenticator is available for Android, iPhone, and Blackberry, and not for any browser; there are a number of third-party implementations, however, including a Chrome/ChromeOS one.
But I did know that.
It was just that Everpedia used it for chat, and I was curious.
This would be solved if they did feature detection instead of browser sniffing, but given that the user agent hack works, they're very likely doing browser sniffing.
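To make the distinction concrete, here are both approaches as pure functions over injected objects (so the difference is testable anywhere; in a real page the inputs would be `navigator` and an element):

```javascript
// Browser sniffing: fragile, breaks whenever a capable browser has the
// "wrong" UA string (or a UA-spoofing "desktop view" toggle is used).
function looksLikeChrome(nav) {
  return /Chrome\//.test(nav.userAgent || "");
}

// Feature detection: ask for the capability itself (here, Shadow DOM v1).
function supportsShadowDomV1(el) {
  return typeof el.attachShadow === "function";
}
```

Feature detection keeps working when a non-Chrome browser gains the capability; sniffing has to be updated by hand, which is exactly the failure mode described above.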
Better yet, they should probably wait to adopt new standards until things are actually standardized, instead of doing things as soon as they hit "experimental" status.
Put another way, isn't it OK for users to experience temporary discomfort and inconvenience? If it truly gets bad, someone else will eventually eat their lunch, and everyone (except Google!) is happy again. See: Microsoft.
And if the fork diverges, then what? This is not a theoretical concern: Blink and WebKit are now quite different, and it's best to regard them as two different rendering engines. Is that bad?
> Maybe the rendering engine should be more akin to the Linux kernel and less akin to Windows?
This presupposes that the Linux kernel is at an optimum point in the design space. It's pretty clear to me that it isn't. In fact, I think the monoculture around open source kernels has been bad for OSS. I look forward to Fuchsia providing some much-needed competition, much in the same way LLVM provided much-needed competition to GCC.
At that time, 2001-2002, IE6 was highly praised. You don’t get to 96% market share by being something everybody hates. The hate came later, when CSS demand ramped up and it became clear IE6 used a non-standard box model. Worse still, MS essentially abandoned the browser (the result of total market-share dominance) and tied it directly to the operating system.
The reason people are complaining about this is the lack of diversity. This has proven very bad in the past. It essentially makes the platform a proprietary space even if the dominant browser is open source. The key here is who directs the technology and direction of the platform.
Which we all use now anyway because it was the right way to do it...
I don't do that, and I avoid code that does. In the days of IE7 I learned to write CSS that conformed to both box models. Now I just write to the standard box model. CSS isn't something that will ever be particularly challenging once you have worked through the extreme edge cases.
At the time, the CSS specification didn't make it clear which way to calculate widths, so Microsoft, probably drawing on its experience building graphical layout software, chose border-box, while all other rendering engines went with content-box, probably without considering the consequences. Once the error was recognized, the box-sizing property was added to CSS to fix this.
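The arithmetic difference between the two models is small but breaks layouts; here's an illustrative sketch (the 2× factor is because padding and border apply on both sides):

```javascript
// On-screen width of an element under each box model.
// Under content-box, `width` covers only the content area, so padding
// and border are added on top; under border-box, `width` already
// includes them.
function renderedWidth(boxSizing, width, padding, border) {
  return boxSizing === 'border-box'
    ? width
    : width + 2 * (padding + border);
}
```

So `width: 100px; padding: 10px; border: 5px` renders 130px wide under content-box but exactly 100px under border-box, which is why pages written against the standard model laid out differently in old IE.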
Imagination aside, IE3 was the first browser to offer any CSS support.
I remember having to deal with all these things when working with the company's aging CMS. It was a great tool, but the version the company was using was old (from before standardized doctypes in 1999) and inserted a comment as the first line of the output, throwing all documents into quirks mode in IE.
The box model was corrected when IE released version 7, which still also featured quirks mode rendering for backwards compatibility.
For me personally, multiple independent implementations is a stamp of approval. It means that Google can't just sit on the standards committee, build something behind closed doors, and ship it, even if the result is open-source.
It means someone outside of that one company has to understand it, and be able to implement it. It can't be ridiculously complex, or have DOCX-style "just do what proprietary implementation X does" requirements. There has to be sufficient documentation. The implementations probably won't all have the same bugs -- I've heard avionics systems have multiple independent implementations for this very reason.
I fear it'll be more like Darwin. It's nice that XNU is "open source", perhaps, but nobody is taking advantage of this. I've never heard of anyone running XNU except as part of Apple's proprietary operating systems, and I've never heard of anyone forking it to run a custom version on their Apple hardware. I'm not even sure to what extent it's technically possible.
I would be much less worried if the single surviving browser engine were Gecko -- not because I think it's technically superior (I don't think it is), but because I trust Mozilla a lot more than I trust Google, or even Google+Microsoft. Mozilla is not good at design but they operate out in the open.
Seems like there's something called PureDarwin that's trying to do something with it now.
Both FreeBSD and OpenBSD maintain large patch-sets to be able to build Chromium. 
I don't know the current situation, but historically some of these patches have had problems reaching upstream because Free-/OpenBSD is not officially supported.
If Blink has a security vulnerability, like the recent WebSQL remote code execution, then in a monoculture that means everyone has this vulnerability.
There are several second-order reasons why a monoculture is damaging for the browser industry. I will give a few examples:
1. A monoculture encourages web developers to test on a single platform with the assumption that all other platforms, and even industry standards, are meaningless. Many of us are web developers, and most of us are guilty of dismissing low-share platforms during testing. But as long as some viable competitors exist, we nominally target the industry standards in the hope that by doing so, the platforms we do not test on will have a modest/tolerable (maybe even good) experience.
2. A monoculture is self-reinforcing. Losing established viable competitors makes it more difficult for new competitors to enter the market. If we allow second-tier browsers to be rendered meaningless, we all but ensure a long and (eventually) painful stagnation, and create an ever-larger hurdle for a new entrant to clear to reach any significance (as more and more of the web is designed to work with a single platform). While it may sound passably agreeable to have a Chrome monoculture in 2019, do we want to still have a Chrome monoculture in 2029? For my part, I hope we see an increasing set of options in most areas of life over time, even in "browsing," whatever that ends up looking like in 10 years.
3. We don't know what future stagnation will look like. We don't know what opportunities we will have lost by making it more difficult to compete or by losing the healthy diversifying force of competition. It's a classic problem of the unseen. Projecting forward, we predict testing will be a bit simplified, but we cannot know what innovations we'll never see (or won't see as quickly) because the hurdles for experimentation were too high. Today, if Firefox introduces a new feature, even though its usage is relatively small the usage is still large enough in absolute terms that the innovation isn't entirely under the radar for most of us. If Firefox's share were 0.8% instead of 8%, far fewer of us would even notice if they added a slick new feature, leaving the feature unheralded and obscure despite its potential wide appeal.
If we really want diversity on the internet we should probably still be using Gopher and other internet clients.
This sort of thing boxes out other vendors/rendering engines and forces them to add non-standard features themselves. Vendor prefixes are not enough: if devs no longer test on non-Chromium platforms, vendor-prefixed features have to be adopted by non-Chromium engines, or those engines risk utter obsolescence. This is also true of Chromium forks. If a fork is a niche player, being open source doesn't buy you much, because the Chromium governing body (i.e., Google) still has the lion's share of the market, and thus the decision-making authority over what the web should look like.
Where is Microsoft saying that?
Problems with monoculture: lack of innovation, a single agenda being pushed forward, anti-competitive behavior. Google doesn’t want technology X? X is dead. Google thinks Ad platform Y is bad? Y is dead. etc
So the whole thing is arguably suspect.
Yes, but that was based on a comment that was twisted out of context, and is, moreover, demonstrably false.
> web will fail because of monoculture
Go ask a banana.
It also shows that past precedent can lead you down a gauntlet of bad PR: Microsoft has tried to be pushy with many features of Windows 10 (telemetry, inking, online login, Cortana, always-online search) in the same vein that we've long seen from Google, but their strategy contrasts unfavorably against the long legacy of a less invasive Windows. Google has pursued a policy of opt-out from the beginning, and by doing so they escape a fair bit of bad press.
Firefox has the worst of it: their users are the most discriminating, the hardest to please -- even if some of Mozilla's controversies were poor choices, the level of uproar was immense. Posts like this just encourage the migration of users driven by causes. This would be less of an issue if it were entirely community-supported, but Mozilla has paid developers and evangelists, and deriving revenues from a discriminating userbase is a challenge.
"Switch to Chrome
Hide annoying ads and protect against malware on the web
[No Thanks] [Yes]"
And that then appears on every single search. And if you click no thanks, it reappears.
And then when they go to youtube, they see this:
"Watch YouTube videos with Chrome
Google recommends using Chrome, a fast and secure browser. Try it?
[Yes] [No Thanks]"
Again, every page load. It's a dark pattern. What about Gmail?
"Google recommends using Chrome
Try a fast, secure browser with updates built in
[No Thanks] [Yes]"
EVERY PAGE LOAD.
Granted, the browser I use is Blink-based, which probably has something to do with it, but it is still _not Chrome_.
More accurately, they install it because google.com, Gmail, YouTube, etc. will constantly nag you to install Chrome and will sometimes display errors telling you that a feature requires Chrome even though the browser you’re currently using also supports it.
There are people claiming that Microsoft could easily fork Chromium, but honestly, do you see that happening in the next few years? Having read that Microsoft's Edge team was a very small one, I don't believe Microsoft will fork Chromium anytime in the next two or three years (there's no financial incentive, which would be the biggest motivator)...and that's a long time to keep pushing the Chrome/Chromium way ("what Google wants for the web").
As an aside, I very much liked this article linked within, titled "The ecological impact of browser diversity" by Rachel Nabors. 
To all those who evangelize Firefox, please also see if you can donate money to Mozilla. It may seem like Mozilla has a lot of money (more than 90% coming from Google for being the default search engine added in the browser), but this is not enough, and could turn to be nothing if Firefox's market share becomes next to nothing and Google decides to pull the plug when the current contract expires. Partnerships with Bing (or other search engines that not many people use) may not bring in as much money to Mozilla.
Read about all the work that Mozilla does (aka "not just Firefox") in its "State of Mozilla 2017" annual report.  The audited financial report is here. 
Disclaimer: independent browser author, unpaid.
Please share your project and other independent browsers/engines. It’s been some time I’ve checked this space out (and I do know that I could also go to Wikipedia to get some information on this).
There should be a left column wide enough to read the text, and some transient menus which appear when mousing over the links there, and a switch to icon-only left column when you scrunch the window size small enough.
Mostly facetious, but who except Americans cares about iOS anymore? Total iPhone market share is down to ~15% of smartphones worldwide. Android has the big lead now and doesn't look like it will relinquish it soon. A Chrome-only feature that works on ~85% of smartphones today? "Ship it," says the bottom line.
Being equally facetious: no one cares about a bunch of poor people buying $40 Android phones running 4-year-old operating systems...
Look at the countries where iOS market share is above 30% and compare it to per capita GDP
"I don't care if the Maserati owner can't buy these wiper blades, because look how many more GMs and Fords there are on the road every day. I bet the Maserati owner flogs themselves if they accidentally even drive in the rain, how often do you think they replace their wipers?"
iOS market share, and its luxury-brand demeanor, leaves it at great risk. I don't know where that line is, myself, but pinning hopes on iOS preventing a browser monoculture seems increasingly desperate looking at current world figures.
You’re not talking about reaching people that can afford Maseratis versus Fords. More like the people who can afford a car vs people who can barely afford a bicycle.
If I was trying to reach poor people in developing countries the equation may be different.
That doesn’t mean that things like Lightning connectors don’t also happen but in the context of the web Apple has generally been helpful for fighting monoculture since it used to be a huge threat to their continued existence.
As a Firefox user for years I lost all the motivation to donate to Mozilla because of their mixed messages.
I would like my money to go to their engineering efforts, to make the best browser in the world. You could guess to what kind of "places" I don't want my money to go to. It's nice to advocate for the free web but in the end if your browser isn't competitive everything else will fall apart.
There's a lot of FUD with things like the Mr Robot "ad" (Mozilla wasn't paid for that), but there are legitimate concerns about some of their behavior. It wasn't so long ago that I clicked on anything Mozilla with joy, but now my bullshit detector deploys automatically. I love reading about Servo and anything tech-related, but that's about it.
Let's just replace 'browser' with 'OS' above (ironically, the similarities grow with each passing day)... would we still say the same? Is an OS monoculture good, regardless of whether it is inevitable? Have we forgotten the power that MS had with Windows during the 90s and early 2000s, before Linux, macOS and Android became viable alternatives?
Given that browsers are becoming the new OS and VM, we need competition in this space more than ever, going forward. Let's not accept Chrome/Google as the One Ring.
Supporting the most use cases is where you get the compatibility from.
Being better than your competitors is where you get browser-exclusive functionality from.
If something (e.g. ActiveX, Java, Shockwave, Flash, some new CSS that's not standardized yet, a new feature on some top 500 website, ...) works in your browser, that's a win for both goals - and if it works only/best in your browser, that's also a win.
Monocultures can be toxic in part because the disincentives to adding lockin (e.g. losing users and other negative feedback sources) get weaker/less likely as one participant becomes more dominant.
I'm old enough to remember the fun with "works only/best in IE/Netscape/...", and Microsoft's old Embrace/Extend/Extinguish policies. While I do not have any reason to believe any of the parties involved in browser development would behave pathologically, history suggests that monopolies in any space tend to eventually become pathological.
(I work for Google, not on anything browser related, all opinions above are mine personally, etc.)
From what I've seen, yes absolutely, as long as it is Linux.
I feel like, if I wanted to, I could contribute to the Linux kernel. All I would need to do is submit pull requests, have them vetted, and gain reputation within that community.
Could you say that about any arbitrary pull request for Chromium? Even if it was against Google's interests to merge it?
Is your contention that I just fork it and compete with chromium with my PR integrated?
I don't view these projects as equivalent at all.
I would place myself squarely in the camp of those that don't see every browser using Chromium under the hood as a bad thing. It makes every company that has their own browser invested in improving together, for one thing. Also, compatibility across browsers will only get better if that is the case, which is also something that people tell horror stories about from the time when IE was king.
I’ve started using Safari and it’s actually pretty OK, and the resource utilization is a lot better than Chrome's (no fans blowing!). I empathize with Firefox making a big deal out of avoiding a monoculture, but I really hope they’re figuring out how to do a better job on things like the Mac internally, because I don’t think people are really going to use an inferior product as some moral statement.
No sources, it was just in passing somewhere, but I distinctly remember it.
Stuff that Chrome got pretty damned fast, I hasten to add. Whether people like Safari or not, it does set the standard for browsers on macOS because, unlike IE or Edge, it's actually a very good and __very__ well-loved browser.
I used to have heavy pages hang here and there and really burn some CPU (Facebook, etc) and sometimes even freeze without loading completely. Now on the same machine, the latest Firefox works just fine and avoids those states completely.
It's still a long way from perfect, but it's come far: I once thought Firefox unusable on macOS and went back to Chrome, yet now that I've tried the latest FF releases, I'm happy enough with the performance that I'm staying with FF this time around.
The only remaining sites with poor performance are Google products, and I blame Google for that and am trying to slowly remove myself from all of their products.
The amount of money that is spent on browsers is such a tiny fraction of web-derived profit it is, in my opinion, surprising that browser vendors don't have more money.
The recent better one from Firefox: https://play.google.com/store/apps/details?id=org.mozilla.ro...
You make it sound like users know, or care, which browser they're using. Most of them don't. Most don't actually know there's more than one browser. They use the stock browser that's installed on their device, and they don't switch unless there's a reason to - eg their favourite websites and apps don't work.
In the case of Edge with EdgeHTML there are popular sites that don't work properly, and that means users seek out Chrome instead. Microsoft switching to Chromium will mean those users continue to use Edge because "it works properly" for the user regardless of the actual problem being the developers of those sites failing to write cross-browser compatible code.
I'm a bit sad that the choice and market for browsers is shrinking, and I think it's going to slow progress in web tech, but let's be honest here - the blame for Microsoft's very sensible business decision lies with web developers who fail to test their code in more browsers than just Chrome. If web developers did a better job there'd still be business value in developing a competing browser engine.
He legitimately looked as though he'd unlocked some kind of arcane technology that mortal man should never have wielded.
I still feel a little bad about casually shrugging it off with "Firefox has had that for years". He looked legitimately shattered.
It's sad that in one week not even one of their people opened it in Firefox, and not many of their users did either (there were just a couple of mentions on Twitter about this issue).
All major browsers coordinated to distrust the certs at the same time.
Is this the fault of Firefox or Digg though?
63% of all browsing is done on mobile.
(Microsoft is doubling down on Azure and DevDiv, if anything.)
This seems to put the desktop at risk. With (EdgeHTML-powered) HTML+CSS+JS as the "third pillar" of the original Universal Windows Platform strategy, this decision further feeds questions about Microsoft's overall confidence in UWP, and thereby in that middle word, Windows itself. It directly shows a weak flank to Chrome OS: what's to stop Chromebooks from attacking Windows as just the junk drawer of old Win32 applications and games wrapped around a Chrome OS knockoff?
There's certainly still a lot of value, especially in enterprise, invested in that "junk drawer", but is that enough any more? Yeah, there's just no way that this decision is doubling down on "desktop dominance".
That’s all Windows has been for years. There hasn’t been anything interesting happening nor any serious money invested in writing PC software for well over a decade. The only non game companies making any money selling desktop software are Microsoft and Adobe.
Even from a corporate standpoint, enterprise deployments of desktop apps are cumbersome. That’s the reason that even corporate LOB apps have moved to the web. Besides a web site is the ultimate cross platform app.
A trivial example is the battery API. It's nowadays mainly used for tracking, but it is an official web standard. Mozilla decided to fuzz it, so it lost its usefulness for tracking. Google didn't.
And I imagine there are hundreds of similar examples at this scale which, as an average user, you'll just never hear of.
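The fuzzing idea amounts to rounding away entropy. A minimal sketch (the 0.25 step and function name are illustrative, not Mozilla's actual implementation):

```javascript
// Report battery level only in coarse buckets so the exact value
// can't serve as a fingerprinting signal. A real mitigation would
// treat chargingTime/dischargingTime the same way.
function fuzzBatteryLevel(level, step = 0.25) {
  return Math.round(level / step) * step;
}
```

A precise reading like 0.63 collapses to 0.75, so two tabs on the same machine can no longer be linked by an exact, fast-changing battery value.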
One bigger feature is Firefox's Containers, which is based on work from the Tor Browser. That also illustrates what this partnership sometimes brings forth: Tor Browser is always going to be there, checking the Firefox code for privacy problems and suggesting better ways of doing things, which Mozilla can then adopt.
For Chromium, there exist in principle similar efforts, like Brave, Iridium Browser, ungoogled-chromium, but these will always fight an uphill battle against Google and obviously Google isn't going to adopt and maintain their fixes.
Why should we trust them?
No US-based entity can offer user privacy, as the government can force any of them to cooperate while requiring non-disclosure. The fact that they don't use anonymous user data to offer internet services doesn't negate the fact that where most of us actually care about privacy, they don't have the power to offer it.
In any event, being a non-U.S. entity has less significance with each passing day with respect to the ability to insulate information from government. Consider that in the past decade, the U.S. managed to get the Swiss government to change its once-formidable bank secrecy laws.
All of these articles love to scare their readers by warning of another IE monopoly situation. None of them seem to explain why monopolistic browsers are an issue, or how history will repeat in this crazy modern internet world. The only point anyone's made is that security vulnerabilities will span all browsers. By that logic we might as well write all programs in different languages and use 20 different operating systems. Obviously that isn't efficient, and neither is wasting development money on yet another layout engine. But hey, it's fun to worry about petty issues, so I'll keep reading these blogs.
Borrowing from a previous comment I made, think of it this way: do you think it helps or hurts Google to have every version of Windows come pre-installed with what is essentially already Chrome (except, of course, it will probably have Bing as its default search engine)? Do you think the odds of people just using Edge to download Chrome and nothing else go up or down with this move? Do you think it helps or hurts Google to have most tech people not bother telling their parents to download Chrome anymore? There is significantly less control in "owning" an engine than in owning an actual browser. I don't think I would have had much of an issue with the dominance of IE 20 years ago if I had known I could compile, modify (and release!) IE myself.
If you care about the state of search monopoly, out of control ads, and identity on the web, then you should be happy with this move. This is more akin to most browsers now having a common starting point. The problem with browsers is that if you truly want to make a new one you need to somehow replicate the decades of work put into the existing ones. What that means is that before you can exercise any of your noble privacy/security/UI/whatever goals, you must first make sure you pass Acid 1 and replicate quirks mode float behavior and etc. etc. etc. This is a non-starter. But now, Microsoft can launch from Chromium's current position and have a browser that can actually compete with Chrome. It's as if they've taken "engine correctness" off the table, and can compete on cool features or "we won't track you" or anything else. Websites will work in Edge by default, so if you like that one new feature in Edge, you can feel OK switching to it without compromising devtools/rendering/speed/etc.
Now I know that the initial response to this is "but Google will call the shots!". Not if the way this has gone down every other time has anything to do with it. Google's Chromium started as KHTML. When Apple based WebKit off of KHTML, the KHTML team had very little say in anything and they eventually forked of course. Then Google based Chromium off of Apple's WebKit, and once again, there was very little "control" Apple could exercise here. Sure, they remained one monolithic project for a while (despite having different JS engines which just goes to show that even without forking you can still have differentiation), but inevitably, Chromium was also forked from WebKit into Blink.
And there should be no reason to think the same won't happen here, and that's a good thing! Microsoft in the past couple of years has demonstrated an amazing OSS culture. I can't wait to see what the same company that gave us VSCode is able to build on top of Blink, and eventually separate from Blink. Ironically enough, the worst thing that could have happened to Google's search dominance is to have Blink win the "browser engine wars": we all agree Blink is the way to go now, so we can all start shipping browsers that at minimum are just as good, and won't auto-log you in, or have their search engine set as default, or etc. etc. etc.
The situation here is very different from WebKit and Blink. Google was already the plurality contributor to WebKit at the time of the fork. By contrast, Microsoft has contributed virtually nothing to Blink, and they intend to be identical to upstream Chromium. Microsoft is not going to fork Blink.
Blink is completely controlled by Google. (98% of Blink patches are reviewed by a Google employee.) This means that Google has complete control over the direction of the Web platform. While Google may not be able to set the default search engine in Edge, it has and will exert more subtle influence on the Web as a platform in ways that benefit Google. (Just to name one example, Chromium has deployed whitelists of Google properties for NaCl support.)
I feel people don’t analyze this move realistically in the current context, but instead compare it to some imagined magical alternative where we snap our fingers and have a fresh new viable competitor or people just decide to switch away from a browser most people like.
Regarding the specific example you gave, I’m fairly certain Microsoft’s wrapped Chromium can opt out of those whitelists, and frankly I’d be surprised if it didn’t. Microsoft didn’t sign on Google as an OEM provider of a browser for their OS; they are building their own browser on top of Google’s engine. I'm not super worried about Microsoft being bullied; they’re probably the best suited to do something like this in a defensible way.
It's quite telling that even Microsoft was not able to keep up with the complexity and decided to piggy-back on the biggest investor.
If we want less mono-culture, the first thing to do is to bring the complexity down to something manageable.
The conversation though will shift from conversations in external organizations between stakeholders, to conversations in a shared codebase between stakeholders.
In some cases this could be better, as it means we will have fewer occurrences where there's agreement on a feature, but implementation lags in 1 browser. Instead, adoption of many features should now be quicker, while debate about larger or more controversial features will still exist.
First of all, individual boycotts will never, ever overcome a collective action problem, so a fundamentally futile solution is being proposed here.
Plus, this is only bad if Google acts destructively evil and no one steps in to change it. If the endgame is control of Chromium being passed from Google to a neutral foundation -- which I suspect it will be -- then everyone wins.
There is zero chance of that happening. Like all public companies, Google acts in its own self-interest. Google controls Chromium, and there is absolutely no benefit to Google to hand control of Chromium over to an outside entity. 98% of all patches to Blink are reviewed by a Google employee.
If establishing a foundation were going to happen, then it would have happened when Apple and Google were still collaborating on WebKit.
I think the hope should be that one competing _standard_ wins out. Ideally we could have any number of browser engines, as long as each one implements all widely used standards correctly. We already see this with most websites working fine in both Chromium and Firefox. The only incompatibilities (in theory) should come from bugs, or from new standards/experiments that haven't been widely adopted or implemented yet.
Also, if everyone moves to a Chromium-engine monoculture, what will happen to innovation? E.g., Mozilla has made some major progress with its work on Servo.
But I wish Microsoft had worked with Apple to bring Safari/WebKit to Windows and invested in WebKit instead.
(Blink and WebKit are now pretty much different engines.)
Suppose WebKit or Blink were the one engine that everyone uses. Do you know how much worldwide uncertainty and effort around browser compatibility and quirks that would save? Look at the x86 architecture or POSIX as well.
Here is my serious practical question: this is open source, so if you want to make some extension, you should be able to distribute it. And if it becomes so popular as to be merged into the core, or included with the main distribution, then it will be.
It seems on balance, this would be a good thing.
Nobody is lamenting the loss of the Edge engine. Things like the choice of video codec, DRM module, etc. have shown it's a good thing to have a few different vendors with a seat at the table for standards.
Most important of those is Firefox.
If they really cared, they could at least have adopted Gecko/SpiderMonkey.
Just my two cents