If there are antitrust sentiments toward Google, they need to come from somewhere else.
Technology isn't going away and is becoming ever more important. It seems obvious to me that we will need cross-domain specialists to handle cases like this in the future -- someone with both a legal and a computer science background.
> Many of the most important challenges confronting the legal profession lie at the intersection of science, technology, law, and policy. Emerging science and technologies, such as AI, big data, social media, genomics, and neuroscience, demand an interdisciplinary approach and visionary leadership. Students in the JD/MA in Bioethics and Science Policy program spend their three years at Duke focusing on these intersectional problems and preparing themselves for a seat at the table in these discussions for decades to come and earn an additional degree while doing so.
Thanks, I'm using this from now on.
I wager that Google will be very quick to declare this a bug and fix it ASAP.
This 2018 CNET article has more details:
It's been almost 30 years since the Cypherpunks. When billions of dollars or existential business threats are at stake, regular people are motivated to find a technically-knowledgeable peer for advice. There have now been several generations of financially successful tech entrepreneurs, some of whom move in non-tech circles.
It's your computer that stores a cookie in local storage. It's your computer that decides to send back the previously stored cookie. And they're crying like they didn't give consent.
Fortunately, EU regulators understand that non-technical users exist and need protection from abuse.
Neither does the user, not by themselves anyway. Only the website really knows.
If you consider “social media” as a market, it has a healthy competitive landscape. If you consider different styles of social interaction as separate markets, they’ve cornered those markets. I don’t see competition in these spaces. Facebook != Twitter, and I feel that is why both can exist. Behemoths in neighboring spaces opt to buy a social experience instead of trying to compete with their own.
The missing piece, IMO, isn’t regulation around “censorship” for these platforms. It’s regulation that results in a rich market of products around a single style of social interaction. Example: regulation around interoperability.
Within businesses, people have evolved far past market definitions where widget x¹ competes with widget x². Our political savvy as consumers would improve if we could see that as well.
For example, why would Google approve a product like Stadia? What does it compete with? Nintendo, yes, but not really, since so many Stadia players also have a Switch... just like most of us have both Facebook and Twitter accounts. But maybe their true competition is Netflix? Social media? Users are giving Google their time = data = insights = further monopolistic advertising power.
Suppose Tesla tomorrow becomes the sole manufacturer of battery-powered cars. The good (dirty?) old petroleum-based cars are still out there on the road (not a lot, but still). However, everyone wants an electric car in the future - will that make Tesla a monopoly?
How would it be different, or the same, in the case of Twitter or Facebook?
A petroleum-based car is largely interchangeable with an electric car, assuming we're talking about one that will probably comply with environmental regulations over its lifetime. I might prefer an electric car because of the environment, or to support the movement, or whatever, but at the end of the day, a petroleum car still gets me where I'm going. Tesla is unlikely to become a monopoly because even in the electric vehicle space, there are interchangeable goods. I'm not intimately aware, but it sounds like there are a couple of other companies that make competitive models.
Where that interchangeability can get weird and not so clear is on a more specific market, where users don't necessarily have a choice. Tesla is the only company (afaik) that makes a fully electric truck. You could possibly argue that Tesla has a monopoly on fully electric trucks; I think the question becomes, are other goods interchangeable? Is a petroleum truck interchangeable? Is a fully electric SUV interchangeable?
Applied to social media, each of the major social media networks offers or encourages a substantially different type of social interaction. Twitter is largely for piecemeal content, and is generally more public than other forms of interaction. It leads to really high levels of engagement, and lots of flame wars. Instagram is all about photos; people go for the glamour. Facebook tries to make you engage with your network more; I find people share more personal information there. Reddit is more anonymous than the others, and builds around the concept of communities, which are featured more prominently than on the other platforms.
I think we all agree nobody has a monopoly on social media. The question is whether it's possible to have a monopoly on a particular form of social media. Are Reddit and Instagram interchangeable for you? They aren't for me, so I would say that they aren't in competition and as such, the existence of Reddit doesn't prevent Instagram having a monopoly any more than the existence of Chiquita does.
"Social media" is an incredibly diverse category of services. Deciding the monopoly status of a company based on the health of competition in social media is like deciding whether to break up Standard Oil based on the health of the entire raw materials sector. It's not a granular enough measure, because it contains several non-interchangeable goods. If Standard Oil jacks up the price of oil, I can't just go buy iron instead; I can't put steel in a gas tank. Likewise, if I get pissed off at Facebook and decide to quit, I can't just go somewhere else. My 80-year-old grandma is on Facebook; teaching her to use Twitter is going to be a problem, and I don't know that I want to expose my grandma to the cesspool that Twitter can sometimes be. The services are not interchangeable to me, so Facebook has a monopoly on that service. My choices are to play by their rules, or to bow out of the experience entirely. Let's ignore the legal technicalities of a monopoly for a moment; doesn't the outcome look remarkably similar? If this doesn't count as a monopoly, it still seems to lead to the same place, and perhaps its non-monopoly status is due to a flaw in the law, rather than being expected behavior.
In terms of regulation, this is within reach of possible changes to how the notorious Section 230 is applied.
I recommend https://www.youtube.com/watch?v=O1OhE4w0TAU for a competent commentary.
“A lie will go round the world while truth is pulling its boots on.”
C. H. Spurgeon
Whereas a truth may be amplified by ineffective blocking, a lie may be irreparably damaged if the truth gets there first.
Regarding 230, the primary author of the statute disagrees with Thomas.
Justice Thomas is the Gilligan of the Supreme Court.
>Why was this? It is because Thomas is not a conservative but, rather, a radical—one whose entire career on the Court has been devoted to undermining the rules of precedent in favor of his own idiosyncratic interpretation of the Constitution.
Anyway, what I was referring to was how Twitter's moderators decided to restrict this news on the grounds that it was obtained through hacking, but a few days ago had nothing against the Trump tax story, which was based on illegally obtained documents.
You don’t have to be anti-competitive or abusing a monopoly to be the target of regulation.
A more reasonable target for a 230 carve-out would be recommendation algorithms. Those aren't merely passively hosting user-generated content, but actively selecting what they think you should see to keep you engaged with the platform. Featuring content rather than showing it ordered by some simple criterion like time should be treated as editorializing rather than moderation. If a human editor decides to feature lies I tweet about you on their "best tweets of the week" page, you may be able to sue them for libel. If twitter's algorithm shows lies I tweet about you to a large audience, you currently can't.
I don't think current law and understanding of same allows any major changes to how we treat platforms. I tend to think that any major changes in the law are liable to be for the worse because even well meaning law makers seem to possess a mostly incompetent perspective on tech.
Section 230 protects the right to moderate, within bounds.
> any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;
So long as their actions are in good faith, and the content can be lumped into "otherwise objectionable" (as I'm sure most anything could), they are well within Section 230 protection. Even if they have an implicit bias in their moderation. Even if they have an explicit bias in their moderation that they put in their ToS. It specifically says "that the provider or user considers obscene...", which explicitly states the bias of the provider is considered.
The only way Twitter's moderation could remove their Section 230 protections is if they did it in bad faith. If they were doing it specifically to try to lose Trump the election, that might count as bad faith, because it has nothing to do with limiting access. They are, however, free to remove everything he posts because they find him to be objectionable. Or to remove things they think are objectionable. Or to only remove violations of their ToS when Trump does it, because they find him or his past patterns objectionable. Or because they find it more likely to lead to flame wars, etc. on the site when he does it. Etc., etc.; it's mostly hypothetical, because you have to prove bad faith, which is hard unless someone is dumb enough to write it in an email.
Twitter violating its own ToS and/or its promises to users sounds like an example of bad faith. (This would not apply if Twitter's marketing were 'Fuck you! We do whatever we want'; instead they promote themselves as a fair platform.)
Moreover, the entire exemption does not apply when the `provider` is not a provider but is actually a publisher exercising editorial discretion.
(For example, if Twitter decided to ban false statements in tweets, this would clearly put them outside of Section 230 immunity.)
His opinion is out of sync with what legal scholars, and indeed an author of the law, say the law means.
Just on a layman's reading of the text, good faith isn't given in some universal context of fairness or fair play; it is given in the sentence.
Good faith here means the removal was actually because they found the content objectionable, not for some ulterior motive.
In order to assert that the removal wasn't protected under 230, you would be asked to prove the contents of the minds of the decision makers: that the removal was NOT because they found it objectionable. They could literally argue that they found the effort to influence the election itself objectionable, suppressed it for that reason, and be safe within the boundaries of the law.
In fact, "good faith ... otherwise objectionable" is so broad as to encompass virtually any removal, for any reason.
Furthermore, finding that one removal wasn't protected under 230 wouldn't magically dispel all legal protection; it would mean that for the purpose of THAT removal, someone could sue them if they had just legal grounds.
I don't believe your source provided one.
I can agree with your interpretation that where the protection of section 230 applies companies would be allowed to remove basically whatever they want.
But there needs to be a criterion distinguishing why Twitter can claim this immunity while newspapers cannot. The rights granted need to have some kind of obligation.
From what I understand, the people trying to attack this immunity have mostly given up on the arguments "Twitter monopolized a space for discussion, so it should be held to constitutional standards like telephone companies" and "social media platforms are clearly acting as publishers of their content", and are instead pushing "social media companies falsely promised open forums to users and content creators, only to hit them with draconian rules once a monopoly was established" (had Facebook (or YouTube) had the same ToS since its inception, it would never have become a monopoly).
With this last argument the entire question of section 230 is sidestepped.
As far as I understand it will not actually accomplish anything soon; a lawsuit on these premises was successful against Patreon, but those were very special circumstances.
* I am even more remote from the US than a Canadian lawyer, but I do not see my role as telling the courts what they should do; rather, I am someone trying to understand what is happening and to develop informed opinions.
I honestly don't know entirely what it is trying to express. "Proving Too Much" seems to be a complete non sequitur; I have no idea what fallacy you are suggesting is expressed by the prior post, or by any other post on this subject.
>But there needs to be a criterion distinguishing why Twitter can claim this immunity while newspapers cannot. The rights granted need to have some kind of obligation.
You can institute new obligations as soon as you buy your own congress critter and get them to write new laws. If you believe such obligations are already expressed in law, kindly cite the statute and section.
The distinction between the print copy published by the New York Times and, say, reddit/twitter/facebook is literally that this is the distinction the law makes. It doesn't have to make sense to you to be the law of the land, particularly when it's made in the short, comprehensible section that is already the primary topic of discussion.
If you want to dig into why, it seems relatively obvious. The finite first-party content is dear and expensive, and curation of it is already an inherent expectation. Asking a publication to take legal responsibility for what it publishes is a tolerable and reasonable burden.
Reddit/Twitter/Facebook solicit users to produce a veritable ocean of content, in exchange for which they offer users a chance to communicate with their fellows, plus a small amount of server time that is, per unit, paid for by a slightly larger income from the ads served alongside that content.
Legal responsibility for content shared between you and me would be a herculean task: impractical, intractable, and expensive, leaving them little choice but to cease operations.
Indeed, few people actually want this. What they want is for 230 to be used like a club to keep companies like Twitter from shaping the conversation, despite those companies owning the property on which you expect the discussion to take place, and despite no law granting anyone a right to someone else's megaphone.
If you don't like it start your own website.
I am not saying that Section 230 should not apply. I am saying that, to my knowledge, under current law, if a social media company decides to apply excessive editorial control (let's say Twitter decides to allow only factually true tweets), it would lose the protections granted by Section 230.
> The rights granted need to have some kind of obligation.
By "need" I meant that I believe these obligations already exist in laws.
> Proving too much
By "proving too much" I meant that since Section 230 does not apply to newspapers, the law must make a distinction between them. To my understanding, this distinction is editorial control.
Finally I am not trying to have a debate over this, I am only trying to understand better the issue; I clearly have a side/bias, and I am trying to learn more about the many other facets of the issue.
Instead of guessing what the difference is, why not read the very short Section 230? The difference is that 230 specifically deals with the web. The difference isn't editorial control; it's literally that the law directly speaks to the web. I would suggest that in half the time required to watch the video, one could read 230 twice over. This misunderstanding stems directly from relying on bad secondary sources.
It's a short law read it.
There is no clause that specifies a company can, in some blanket fashion, "lose protection" like that.
First relevant section:
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Second relevant section:
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
This means that in order for a party to sue, they would have to prove both that they were blocked in bad faith AND that, completely aside from this title, they possessed a legitimate cause to sue.
To be completely clear: someone could post on reddit the libelous allegation that you ate babies, causing you to lose your job at the day care (a clearly obvious cause of action), and then the CEO of reddit could personally block your profile to keep you from running against him for mayor of your little town. A judge could agree that your content was blocked in bad faith, and you STILL wouldn't be able to sue reddit over the baby story.
If you don't like that reddit or facebook or twitter blocked your story, your problem becomes finding a legal right to exercise your legitimate freedom of speech VIA their platform.
The DMCA has existed for 24 years, longer than some readers here have been out of diapers, and I can think of no platform that has yet been censured for removing deplorables, in a nation full of both deplorables and lawyers. It seems likely none ever will without a new law, not a new interpretation.
If I had to guess, I might point at employer liability or contract law, but that might be a discovery for another day.
But TikTok and Snapchat work. You know: real, sustained efforts. Instagram launched 9 months before Google Plus. That should give you an idea of the landscape that Google operated in. Instagram, with 13 employees at acquisition in 2012, works.
Google isn’t a determiner of what works. They’re one of the laziest implementers of new services/startups: they literally throw something out there and try to coast off their name. In markets where the customers are mostly satisfied, lazy stuff like that doesn’t create a winning product.
Follow-up comment here because I've apparently reached my limit for a while: I believe you. I still think a Google half-ass is going to be a stronger effort than another startup's full-out stab at it. Facebook is the biggest "country" on earth, almost by a factor of 2x, with 2.7 billion people. It's just otherworldly large and difficult to compete with, especially today. But I get what you're saying.
And who even is "they"? The entire tech sector? FAANG? Or just whoever makes the rounds in the news at any given point?
Are there not serious fines for companies saying they opted you out of this kind of data collection and then not actually abiding by your requests?
France, Sweden, and the UK are already in on that action; I'm sure others will follow.
I'd like a search engine where I have some input on the ranking of sites shown to me. Some sites are crap and I never want to see them, other sites are ranked low but often have info I'm interested in.
Even just let me vote my search results up or down on relevance. I can vote on everything else, why not this? (Though ideally, I'd love to be able to devise my own algorithm for these things.)
Filter out: Pinterest, YouTube, WikiHow, and all the other garbage SEO farms.
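As a sketch of what voting on your own results could look like (the scoring scheme and names here are hypothetical, purely to illustrate combining an engine's relevance score with per-domain user votes):

```python
# Toy personal re-ranker: the engine returns (domain, relevance) pairs,
# and the user accumulates up/down votes per domain. Heavily downvoted
# domains are dropped entirely; the rest are re-scored by vote weight.
# All weights and thresholds are made up for illustration.

def rerank(results, votes, block_at=-5):
    """results: list of (domain, relevance) pairs from the engine.
    votes: dict mapping domain -> accumulated vote total."""
    # Drop domains the user has downvoted past the blocking threshold.
    kept = [(d, r) for d, r in results if votes.get(d, 0) > block_at]
    # Blend engine relevance with a small per-vote nudge.
    kept.sort(key=lambda dr: dr[1] + 0.1 * votes.get(dr[0], 0), reverse=True)
    return [d for d, _ in kept]

results = [("pinterest.com", 0.9), ("blog.example", 0.5), ("docs.example", 0.6)]
votes = {"pinterest.com": -10, "blog.example": 4}
print(rerank(results, votes))  # → ['blog.example', 'docs.example']
```

Pinterest is dropped outright, and the upvoted blog overtakes a result the engine originally ranked higher.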
However, there is a browser extension you can install. It's either endorsed or developed by DuckDuckGo.
It's entirely conceivable that one could get a completely novel and diverse web and search experience by excluding those concentrated websites entirely from search results. At that point, Google could no longer slide by on just showing the top results from a tiny subset of its index, and would be forced to always show results from the entire index. As opposed to now, where 99.9% of the time they mostly show you results from that smaller subset, and 0.1% of the time show you the rest (and that only if you have a super-specific query or you force them with some of their remaining search modifiers).
they stopped providing it for some reason...
I liked it for blocking Quora/Pinterest/W3Schools/...
I just tried it for W3Schools and it worked.
personally I don't want some random third-party having the ability to exfiltrate my google cookie via an auto-updating extension
Driven by the same motivation, I first adapted a userscript, but later replaced it with simple uBlock Origin filters:
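For readers unfamiliar with the syntax, filters in that spirit can look like the following (illustrative only, not the commenter's actual list; the selectors are guesses at Google's result markup, which changes over time):

```
! Hypothetical cosmetic filters: hide Google result blocks that link
! to domains you never want to see. Selectors may need adjusting.
google.*##.g:has(a[href*="pinterest."])
google.*##.g:has(a[href*="wikihow.com"])
```

uBlock Origin's procedural `:has()` operator lets a cosmetic filter match a result container by what it links to, which a plain CSS hide rule can't do.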
What does that achieve? They still receive your data, unless you are logged out and not using any Google software.
This is really just Google using hostile UX to badger people into enabling location history and (in their eyes - hopefully) leaving it on.
I’d be surprised if they don’t follow that wish, for PR and legal reasons alone. Very few people will actually opt out, and those who do may accidentally opt back in, since the “enable web and location data” prompt appears everywhere from Maps to Google Home setup.
The user consented to "personalization of experience", and that's all it is: personalised ads.
My solution is a shortcut on my homescreen to my address, or somewhere close.
One click is worth it and for someone who pays more attention than most to the windscreen, safe enough for me.
Adblocker extensions need full access to all network traffic and all it takes is a single person's account or machine to be compromised to get access to millions of browsers. Chrome extension compromises are a somewhat common occurrence - see  for a recent example.
I want ad blocking without giving the extension access to my cloud accounts, bank statements or company intranet.
My current solution is to use the ExtensionSettings Chrome policy to blacklist extensions from particularly sensitive domains like accounts.google.com, my bank and the company intranet, but it's a clunky solution - I still want tracking and ad scripts blocked on those!
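For anyone wanting to replicate that setup, the policy looks roughly like this (the host patterns are examples; check the Chrome enterprise policy documentation for the exact schema):

```json
{
  "ExtensionSettings": {
    "*": {
      "runtime_blocked_hosts": [
        "*://accounts.google.com",
        "*://*.example-bank.com",
        "*://intranet.example.com"
      ]
    }
  }
}
```

The `"*"` key applies the restriction to every extension, and `runtime_blocked_hosts` prevents them from reading or modifying pages on the listed hosts, which is exactly why ad blocking also stops working there.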
Some relevant points:
- Google still hasn't raised the rule count like it announced last year in the blog post you linked. The current API is still limited to 30k rules. (The dynamic rule count is ridiculously low, too.)
- Even if the rule count were unlimited, having a static list of rules handicaps more complex algorithms like those used in uBlock Origin, that aren't limited to "if URL in URL_LIST then block". For instance, a Levenshtein-distance-based algorithm can't be implemented with declarativeNetRequest.
- Manifest v3 doesn't seem to prevent extensions from examining traffic, just blocking it. So Google's stance that its API is against data mining, not ad blockers in particular, seems hypocritical.
- Similarly, its stance that the proposed API is more efficient is extremely dubious. Modern WebAssembly has close-to-C++ performance; meanwhile, ads and analytics are one of the biggest sources of slowdowns on the modern web. The idea that restricting ad blockers would improve performance in the general case is absurd.
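To make the point about algorithmic blockers concrete, here is a toy per-request decision that no static rule list can express (illustrative only, and nothing like uBlock's real engine): block any hostname within a small edit distance of a known ad host, catching look-alike variants that a literal URL list would miss.

```python
# Toy "fuzzy" blocker: block hostnames within edit distance 2 of a known
# ad host. The hosts listed are examples. A declarative, static URL list
# cannot express this kind of computed per-request logic.

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

AD_HOSTS = ["doubleclick.net", "adservice.example"]

def should_block(host):
    return any(edit_distance(host, ad) <= 2 for ad in AD_HOSTS)

print(should_block("doublecllck.net"))  # True: near-duplicate of a known host
print(should_block("example.org"))      # False
```

A static rule list would need one entry per variant ahead of time; the computed check covers them all at request time, which is precisely what declarativeNetRequest disallows.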
Overall I have the same view of adblockers as I have of pirate sites: they're very convenient for me and I like to have them, but I don't begrudge corporations for doing everything they can to get rid of them. In a world where most of the internet is funded by ads, I understand why Google would want to find ways to make adblockers just a little less powerful.
But Google's insistence that it isn't doing exactly that, and that its API is technically motivated, reads as corporate nonsense. They haven't responded at all the way I'd expect them to if the whole controversy were just a misunderstanding.
Google is deeply afraid of machine learning based ad blocking. You can only camouflage ads so much before they don't serve their purpose. Forcing ad blockers to use a primitive blocking method prevents smarter ad blockers from being built.
Manifest v3 is still in development, so I'm assuming that this simply hasn't happened yet. It definitely needs to fit uBlock Origin's default rule set and I don't see them backtracking on the 150k announcement.
> - Even if the rule count were unlimited, having a static list of rules handicaps more complex algorithms like those used in uBlock Origin, that aren't limited to "if URL in URL_LIST then block". For instance, a Levenshtein-distance-based algorithm can't be implemented with declarativeNetRequest.
This is the explicit trade-off that is being made. I'll gladly accept this limitation in exchange for not having to trust the ad blocker extension.
> - Similarly, its stance that the proposed API is more efficient is extremely dubious. Modern WebAssembly has close-to-C++ performance; meanwhile, ads and analytics are one of the biggest sources of slowdowns on the modern web. The idea that restricting ad blockers would improve performance in the general case is absurd.
The blog post explains this: the issue isn't the (in the case of uBlock, carefully written and very fast) extension code, but the IPC overhead of routing all requests through the extension. The Chromium team loves metrics, and they wouldn't make this claim without substantial data to back it up; it's not a matter of opinion, but objectively quantifiable.
> - Manifest v3 doesn't seem to prevent extensions from examining traffic, just blocking it. So Google's stance that its API is against data mining, not ad blockers in particular, seems hypocritical.
The blocking version sits in the critical path, the non-blocking one can be called asynchronously. This is consistent with their reasoning.
With Manifest v3, blanket host permissions are going away, which addresses data mining extensions and would make the existing blocking webRequest API impractical: https://twitter.com/justinschuh/status/1138889508512866304
The case has never been made that this is why websites take so long to load nowadays; rather, the finding is that content blockers significantly improve page load speed.
You seem eager to uncritically accept Google's claims while leaving out the views of the critics.
* * *
This is only true in the sense that an all-purpose browser is "a major security risk". That is to say, it's not true in any coherent sense.
Yes, the ad blocker needs to be trustworthy, and there are a variety of approaches for furthering that goal.
> Adblocker extensions need full access to all network traffic and all it takes is a single person's account or machine to be compromised to get access to millions of browsers.
Again, you could say the same about the browser itself. Even if it were infeasible for extension developers to implement more security safeguards, that would be a flaw in the Chrome Web Store, not in the concept of web extensions.
Their continued "refinement" of the core ad blocker APIs while all these abuses and deficiencies go unaddressed is extremely suspicious.
Yes, this is bad and a big security risk. I don't use any extensions that request this permission. My company even pushes a Chrome policy that outright blocks them.
Manifest v3 fixes this by taking away blanket <all_urls> permissions. This would break ad blockers, so they add declarativeNetRequest and remove the blocking webRequest API, which would be useless anyway.
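For context, declarativeNetRequest rules are static JSON declarations evaluated by the browser itself rather than by extension code. They are roughly of this shape (the URL is an example; see the Manifest V3 documentation for the full schema):

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||ads.example.com^",
      "resourceTypes": ["script", "image", "xmlhttprequest"]
    }
  }
]
```

Because the browser matches these rules itself, the extension never sees the request, which is the security and performance argument; the trade-off is that everything must be expressible as a finite list of such conditions.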
I have one for banking, for example, with zero extensions
I wonder what they've actually addressed. It looks like this was just lip service:
> Additionally, we are currently planning to change the rule limit from maximum of 30k rules per extension to a global maximum of 150k rules.
16 months later the limit is still 30,000.
To give some context, it looks like a clean installation of uBlock Origin would require nearly 80,000 rules.
> 79,972 network filters ＋ 39,856 cosmetic filters
And what exactly has changed in the past year to do so?
My understanding is that webRequest blocking is deprecated and a limited size static list will replace it. No?
Edit: the spec still shows ~35,000 total block entries, far too few. A medium-sized marketing firm could, on its own, set up 70,000 distinct S3 bucket URLs, and a large one could easily justify that many distinct domains. Many existing block lists, and uBlock's dynamic (uncountable) behavior, far outstrip these limits. This spec will break the back of ad blocking for good, and Chrome's engineers and PMs know it.
The changes included greatly increasing the rule list size, allowing dynamic rules, not requiring the list be included in the manifest (for independent updates), and the ability to adjust some network headers.
As I said, they addressed all the major concerns that I saw raised.
In a few years, which 2/3 of the rules would you cut?
How is this a win for consumers? How have they addressed those major concerns?
Edit: That was stock; I just added a few lists and passed 100,000 network filter rules. Please explain to me slowly, as if I were a child, how a static limit of 30,000 rules is a bigger number than 100,000, and why my computer with 128 GB of RAM can't possibly support more than 30,000 rules.
We’re talking about access to the internet, something that people are increasingly acknowledging as a primary need. Regulations will follow.
I'm certainly not happy with how Google is using their position, but is it illegal? Should it be? Even a Pixel phone can install and use Firefox. You might perhaps make a case out of how all SDK WebViews end up being Chrome(-ish), but as long as a third-party app embedding its own web view would not be rejected by Play, that's still more open than other significant parts of the smartphone market. Sure, Google is using a position of power, and everybody who isn't a major shareholder shouldn't exactly be happy about it, but is it abusing it? In a way that's assailable on legal grounds?
1. I tested with many different sites and configurations in order to narrow down the issue. The screenshots in the article are just a small sample of my tests, for illustration.
2. I'm not logged into Chrome or any Google services. I've gone through chrome://settings and disabled everything Google-related. Nonetheless, although I'm not using those Chrome features, this issue obviously could be related to the existence of those features in Chrome.
3. My goal in publishing the article was to get the issue fixed ASAP. I'm a browser extension developer, so I'm constantly testing with different browsers, including Chrome, Firefox, and Safari. It wasn't my intention to start a browser war.
4. I believe that Chromium is entirely open source, so I hope that someone familiar with the code base will take a look at this issue. Its sheer size and complexity make it a bit daunting for an outsider, but since Chromium has been adopted by other browsers such as Brave and Edge, there are already outside developers working on it.
How do you know, and how can we prove it?
ungoogled-chromium is a project that removes Google integration from Chromium. Here is the patch they use to remove this special treatment of Google sites:
Now the question is how those IsGoogle functions are used in storage handling.
Did you report it in the Chromium bug tracker? (https://bugs.chromium.org/p/chromium/issues/list)
From my experience, they tend to look at those sooner or later.
could be "by design"
Of course something can be irresponsible for many reasons, but I think there is a solid argument that making a website that works better in Chrome than in other browsers is morally irresponsible, because you are steering users away from more open browsers. (Which I guess mostly means Firefox at this point? :'( )
Given the pro-open-web and anti-FAANG sentiments that are shared on HN, I had expected slightly different results.
Furthermore, people tend to be louder about things they perceive as threats, such as corporations dominating the internet. Those who comment about those threats are likely to be the same ones taking active steps to mitigate them.
I considered Firefox and tried to switch for a month once before, but the recent reorg plus the stuff about their top officer's pay makes it seem like it's a cushy position some people have entrenched themselves in, and the org is completely lost. The browser experience was inferior and I don't have sympathy towards them, so why bother?
Chrome has plenty of forks so I try to run those on other platforms.
1. uBlock Origin - the content-based blockers on Safari are not nearly as good
2. Zotero connector - for my academic work
3. Session Buddy - for saving sessions
4. Proxy switcher - for selectively using my uni proxy for academic resources
and so on...
It's kind of exasperating how low of a priority efficiency is with both Chrome and Firefox.
You might ask her to try again. Safari 14 actually shows favicons, and that is such a welcome relief.
That doesn’t mean Chrome is more secure. They literally install user tracking and tie you to your Google account so they can advertise to you and better sell things. There is nothing secure about a browser built to monetize your data and send it to the cloud for analysis and machine learning. Meanwhile they have their share of bugs as well. What do you think about this one which is more recent?
A vulnerability in Google’s Chromium-based browsers would allow attackers to bypass the Content Security Policy (CSP) on websites, in order to steal data and execute rogue code.
The bug (CVE-2020-6519) is found in Chrome, Opera and Edge, on Windows, Mac and Android – potentially affecting billions of web users
Google literally sends your browsing habits, from pages visited to mouse movements, to their servers, where they link it with your other Google info like Gmail, Google Calendar, and Google Maps GPS tracking via your phone. Google products “leak” all the user's data back to the mothership as a feature. And Chrome users tend to use a lot of random extensions, which means the data usually leaks to a lot of unknown third parties as well (see DataSpii for one example, which affected millions of Chrome and Firefox users).
So yes let’s expect higher standards from all browser developers. But realistically Apple likely fixed the bug or has a very good reason why it’s difficult to entirely patch yet. Google has had many extended data leaks as well but they actually build tools to gather your data up in the first place which makes it that much riskier should it get stolen or misused by Google or Google employees.
It's a security bug that leaks information about the user. It reduces privacy for the user to any website they visit. Chrome and Firefox fix these security bugs immediately upon learning about them. Safari does not because it had hyped ITP and is more interested in security theater, which makes for great marketing, than actual security.
I don't know why you are trying to redirect the conversation to services that can be accessed from any browser. The topic of discussion is which browsers are more secure and offer better privacy.
Guess what story popped up when I logged in today? https://www.tomsguide.com/news/chrome-google-site-data-speci...
“Chrome won't clear your Google and YouTube data — even if you tell it to
Browser retains site data in defiance of privacy settings”
Every week there is a new scandal with Chrome privacy, and usually it's Google to blame and not a bug. I'm sure they'll tell us this was another honest mistake that tracked billions of people for ten years, but NEXT time they'll put our privacy first. Meanwhile, we all know our data has been the goal since day one, and Google Chrome is the Trojan horse that gets the tracking onto our computers.
The particular issue mentioned here turned out to be a bug that affects more than YouTube and Google. https://bugs.chromium.org/p/chromium/issues/detail?id=127340
Safari’s WebKit engine has plenty of browsers using it — in fact, IIRC, Chromium began as a fork of at least part of it — e.g. GNOME Web/Epiphany, Luakit, Surf, et al.
Personally, I switched from Chrome to Firefox a long time ago. But I still use plenty of other Google products. I'm overall anti-Google, but I'm not a religious about it. I disconnect from Google where I can, and support products that match my views when I can.
If anything, I'm surprised the percentage of non-Chrome users your site encountered was as high as it was. Makes me kind of hopeful.
I use Firefox for most of my personal browsing other than Fastmail's webmail interface, and most of my general work browsing.
I use Chrome for a lot of testing and development at work and for dealing with PayPal. These things all get separate profiles, and Chrome handles multiple profiles better than Firefox. Yes, I know about Firefox containers, but I need separate bookmarks and history. Containers just deal with cookies and maybe cache.
I've been tempted to switch to Chrome for at least HN and Reddit because I tire of dealing with Firefox's spell checking. It regularly tells me things are spelled wrong that are not (such as "webmail" in this comment). It's not just that it is terrible that irks me--it is that it is inexplicably terrible.
What I mean by inexplicably terrible is that they are using Hunspell. That's the same open source spelling engine that is used by Chrome, and LibreOffice, and macOS. Those all have great spell checking. I thus infer that Firefox's problem is not an engine problem. It's a dictionary problem. So why don't they grab the ones LibreOffice uses?
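The dictionary-problem diagnosis follows from how Hunspell-style checkers work: any word absent from the wordlist is flagged, no matter how good the engine is. A toy sketch (plain Python sets standing in for real Hunspell dictionaries, which actually use .dic/.aff files with affix rules):

```python
# Toy model: the same "engine" (a set-membership check) run against
# two different wordlists. Only the wordlist differs, so the smaller
# one produces false positives such as "webmail".
SMALL_DICT = {"the", "word", "is", "spelled", "wrong"}
BIG_DICT = SMALL_DICT | {"webmail", "lifecycle", "preload"}

def misspelled(words, dictionary):
    """Return the words the checker would flag (absent from the dictionary)."""
    return [w for w in words if w.lower() not in dictionary]
```

Identical engine, different dictionaries, different false-positive rates — which is exactly why swapping in a better wordlist should fix it.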
Here are some words that came up in comments of mine either here or on Reddit that Firefox incorrectly told me were spelled wrong. Each one interrupted my writing flow as I had to stop and go look it up elsewhere to make sure that I had it right.
> all-nighter auditable automata blacksmithing bubonic cantina commenter conferenced epicycle ethicist fineable inductor initializer lifecycle micropayments mosquitos pre-programmed preprogrammed prosecutable responder solvability spectrogram splitter subparagraphs subtractive surveil tradable transactional tunable verifiability verifier
There's an issue in the Bugzilla for reporting misspelled words. I've reported all of those there so they should eventually be fixed. I'm not sure how long that takes.
Here's a bunch I indirectly reported earlier, that are now fixed:
> "ad infinitum" anonymized backlit bijection commoditization else's handwrite heliocentrism merchanting natively photosensor plaintext pre-fill preload prepend resizable scoresheet surjection unrequested
(Indirectly because I asked about them on /r/firefox, and someone responded telling me about the Bugzilla issue, which he had already added them to).
Here's my list of ones I have not yet reported:
> ballistically chewable counterintuitive exonerations mistyped phosphine programmability recertification shapeshifting tradeoffs webmail
But they purposefully use CLS in Search to increase clicks on Ads https://twitter.com/andyhattemer/status/1262564268890820609
You present this as a fact, but it would be absurd that Google would use such a cheap and easily detected trick to increase CTR. It would be bordering on ad fraud and I'm sure that Google, of all companies, knows better than that.
Occam's Razor says that this is a stupid async content loading bug, which they subsequently fixed. I've never seen this happen and when I just tried it without adblocker with that exact search term, it didn't - the page loaded with the ad.
Three years ago I wouldn't have believed it at all, but around two years ago I saw it happen consistently with a colleague at the desk next to me.
I cannot say for sure that it wasn't an extension in his browser, but I can say that, in my view, Google has been really busy tearing down the mountains of trust they had built up before 2007-2009.
For this to not be an accident, one would have to assume that Google actually makes more money from those invalid clicks, and that someone decided that yep, rendering ads asynchronously was a decent and legal approach to increasing advertising revenue, and asked the Gmail team to implement it.
This kind of corporate misbehavior is not unheard of, but I just can't imagine it happening at Google.
It's much more likely that this is just unfortunate UX design to "improve" rendering performance without considering users on slow connections.
(I can reproduce this one just fine in desktop Gmail - on the first render of the "Promotions" tab, the ads render asynchronously)
"Unfortunate UX design to 'improve' rendering" is the plausible deniability they can use to justify this.
> This kind of corporate misbehavior is not unheard of, but I just can't imagine it happening at Google.
I definitely can; I don't think anywhere is immune to this once you reach a certain scale. They have a profit motive, and they will absolutely try to get away with as much as they possibly can.
And the re-ordering happens as your mails and the ads are loading! You might be about to tap your email, then the ads load in and you suddenly click on an ad. Or you want to tap the top row, but the app decides to put a different email above the ads and you end up tapping into the wrong mail because it was reordered just before the tap.
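The misclick mechanism can be modeled abstractly: the user aims at a row index, the content shifts before the tap lands, and the same index now resolves to a different item. A hypothetical sketch, not Gmail's actual code:

```python
def resolve_tap(rows, tap_index):
    """Return whatever currently occupies the tapped row position."""
    return rows[tap_index]

# The user sees this list and aims at row 0 ("email A")...
before = ["email A", "email B"]
# ...but an ad loads in above it just before the tap lands,
# shifting every row down by one.
after = ["AD", "email A", "email B"]
```

The tap coordinates don't change; the content under them does, so the same index that meant "email A" a moment ago now hits the ad.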
No, they don't. This is false. It's a mechanism called Media Engagement Index, Google properties have zero advantage, and any site can get a high score.
Chrome ships with a preloaded MEI assembled from global telemetry data, which is then trained locally:
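Roughly speaking, the MEI is a per-origin engagement score: the fraction of visits with significant media playback, only trusted once enough visits have been observed. The sketch below is a simplified assumption of that scheme; the thresholds and names are made up, not Chromium's actual values:

```python
# Simplified sketch of a Media-Engagement-Index-style score.
# MIN_VISITS and HIGH_SCORE are assumed illustrative thresholds.
MIN_VISITS = 20
HIGH_SCORE = 0.3

def mei_score(visits: int, media_playbacks: int) -> float:
    if visits < MIN_VISITS:
        return 0.0  # not enough local data: no autoplay privilege yet
    return media_playbacks / visits

def allows_autoplay(visits: int, media_playbacks: int) -> bool:
    return mei_score(visits, media_playbacks) >= HIGH_SCORE
```

The preloaded seed matters precisely because of the "not enough data" branch: a fresh profile has no local history, so whatever ships in the default list gets autoplay on day one.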
Would they have made the same choice of preloading a default seed if they had no properties in the seed? Who knows.
Once they reached a dominant ad network position their whole strategy has been “advancing the web is advancing our revenue”, and it bled into mobile to the point where building and maintaining a whole ecosystem for free makes sense as long as they stay the search and ad engine of choice (that’s the only thing they’ll fight to impose).
Chrome is built with the same mindset: push forward the web and webapps, as long as search is theirs.
There are other examples where only the large sites benefit while everybody else has to play by stricter rules: "EU Parliament bans geoblocking, exempts Netflix and other streaming services" -- https://www.dw.com/en/eu-parliament-bans-geoblocking-exempts...
EDIT: User teraflop posted a link to the list of "sites that are allowed to autoplay video even without any prior media engagement" right here in this thread https://news.ycombinator.com/item?id=24818178
My guess, as someone who had to develop a web video player at work: many websites will attempt to autoplay the video with sound, and if that fails (it's easy to catch the failure event), they will mute the video and try again.
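In real sites this fallback is JavaScript catching the rejected play() promise and retrying with the video muted. Here it's modeled as a plain function over an assumed policy flag, just to make the decision logic explicit:

```python
def try_autoplay(policy_allows_unmuted: bool) -> str:
    """Model of the common fallback: attempt unmuted autoplay,
    and on rejection retry muted (muted autoplay is broadly allowed)."""
    if policy_allows_unmuted:
        return "playing with sound"
    # play() was rejected by the autoplay policy: mute and retry
    return "playing muted"
```

The point is that the site always ends up playing something, which is why browsers also track whether the eventual playback was muted or user-initiated.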
Web browsers are also capable of determining that playback triggered automatically, even if technically not on page load, still counts as autoplay. (There's even text in the spec about it.) In particular, they can tell whether playback is in response to a user action/gesture on the site or not.
The preloaded list is in the source code (https://github.com/chromium/chromium/blob/master/chrome/brow...) but it's encoded as a finite state automaton that makes it a bit difficult to enumerate the list of whitelisted domains.
Here is the code:
And here is the plain-text list:
According to the MEI documentation, it actively measures user behavior, and one of the most important signals is whether a video is unmuted. From the document:
“The MEI is meant to allow media heavy websites (e.g. YouTube, Netflix) that rely on autoplay for their core experience. It is a non-goal to allow websites with a “good media behaviour” to autoplay without restrictions”
It doesn’t sound too good, and still doesn’t really explain how everything is seeded.
The preimage space is finite and easily enumerated.
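Indeed: when the accepted language is finite, enumerating every string a deterministic automaton accepts is a simple graph walk. A minimal sketch with a made-up transition table (not Chromium's actual encoding, which packs the automaton into a byte array):

```python
# Tiny DFA: each state maps input symbols to next states; ACCEPT marks
# states whose path spells out a whitelisted domain. The transitions
# here are a hypothetical example, not the real preloaded list.
DFA = {
    0: {"a": 1, "b": 2},
    1: {".com": 3},
    2: {".com": 3},
}
ACCEPT = {3}

def enumerate_language(state=0, prefix=""):
    """Depth-first walk yielding every string the automaton accepts."""
    if state in ACCEPT:
        yield prefix
    for symbol, nxt in DFA.get(state, {}).items():
        yield from enumerate_language(nxt, prefix + symbol)
```

So the encoding only obscures the list from casual reading; anyone willing to decode the automaton can dump every whitelisted domain.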
These are the kinds of tricks a shady company would pull. I'm so disappointed by what Google has been doing to the web over the last few years.
It isn't obvious to me from this that Google is privileging its own sites above others here.