
I just don't understand how so many people in the HN community, who are so vocal about privacy, turn around and use Chrome.

Don't feed the beast.




I have been using Firefox since version 1.0. I don't understand the desire to use Google's browser. Then again, why would you even try viewing secure data in your web browser at all, not just Chrome? Things may get cached, etc... Although Mozilla has been doing things that I find annoying at times, like adding Pocket, etc...

Little rant: I have looked at some of the other forks, but what I find more depressing is how few up-to-date browser engines exist. It's a sign that web standards are getting too complicated. We are already getting a third version of HTTP as well... Both HTTP/2 and the prospective HTTP/3 are based on work from Google. Those protocols are a lot more complicated than HTTP/1.1, so it's much harder for a small group to implement them. And that's just the protocol layer, let alone JS, HTML, CSS, and all the other little things. It's like the big companies keep bloating the standards. The result is that the browser is probably one of the more complicated pieces of software we regularly use.

Whatever happened to "KISS"?


Maybe over-complicating things is Google's way of eliminating competition. If it weren't so complicated, someone could easily offer a competing, privacy-oriented browser, which Google would not like -- so make it so hideously complex that no one can do it without $50 mil? There would be more innovation if things were simple, because anyone with a good idea could contribute.


This sort of happened when the WHATWG effectively wrested control of HTML away from the W3C (although Google was not a founding member, they are one of the Steering Members now). https://thehistoryoftheweb.com/when-standards-divide/

The membership of the W3C supported XHTML, to improve interoperability among other reasons. Apple, Mozilla and Opera had a different vision and broke away and formed the WHATWG which Google and Microsoft later joined. Those companies (minus Opera) now have near total control over HTML and the W3C just rubber stamps whatever they decide.

(Note: I don't believe the participants in WHATWG were doing what they did for anti-competitive purposes, but in hindsight it had that effect.)


XHTML actually decreased interoperability, with hardly anyone able to produce conformant strict XHTML. XHTML was a huge mistake; the W3C obsoleted itself with this one.


Precisely that has been observed across many markets. Teachers' unions are an example: adding entry requirements entrenches current members.


Google also has additional power by simply not implementing things introduced by other WHATWG participants. Case in point: the menu/menuitem elements, which would have provided scriptless interaction in a limited way (removed from W3C HTML 5.2). Any small attempt to make the web more declarative by extending HTML is doomed to be dismissed as non-essential, because it can be implemented using JS.

WHATWG's specification process is broken, and has been for a long time: it puts the world's main communication medium into the hands of browser vendors with an interest in eliminating competition and defining entirely new Turing-complete runtimes (WASM), and of advertisers who turn around and create competing mechanisms (AMP), and it never actually delivers a finished standard (the "living standard" nonsense).


I've seen a number of comments about http/2 and http/3 being driven by Google. The ideas originated there (SPDY and QUIC respectively) but in both cases many different entities backed the ideas and formulated specifications in IETF settings. I'm not sure I buy that somehow Google managed to hoodwink the people that toiled on these specifications in a non-Google environment and managed to influence them in such a way that the output was beneficial to their nefarious goals.

There are already quite a number of http/3 implementations from non-Google companies and projects [0]. Cloudflare seem to be big backers of http/3 [1]. There were some other articles today that are generally positive on the http/3 approach. One was from Tim Bray at AWS [2] and the other from @ErrataRob [3].

0. https://github.com/quicwg/base-drafts/wiki/Implementations

1. https://cloudflare-quic.com/

2. https://www.tbray.org/ongoing/When/201x/2018/11/18/Post-REST...

3. https://blog.erratasec.com/2018/11/some-notes-about-http3.ht...


It's just an example of how much weight Google is able to throw around. And that's just one part of what would be needed for a browser; the more parts you add, the harder it becomes to build one.

Also, I have read about QUIC, and there are some things about it that are interesting. However, there are also things that I don't like.

Moreover, this was something I read from IETF mail archive: "That QUIC isn't yet proven. That's true, but the name won't be formalised or used on the wire until the RFC is published, so we have a good amount of time to back away. Even then, if it fails in the market, we can always skip to HTTP/4 one day, if we need to."[1]

I find that pretty concerning. If it does not pan out, we can just skip over it... but that's still something someone has to implement even if it's not used much. I would only be considering things that people in general are eager to use, not just a few big companies.

[1] https://mailarchive.ietf.org/arch/msg/quic/RLRs4nB1lwFCZ_7k0...


> That's still something someone has to implement even if it's not used much.

Not true. This is why ALPN and the Upgrade header exist. You do not need to implement any of the new protocols, and you can certainly skip a version if you don't think it's worth the effort.
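To make that concrete, here's a minimal sketch in Go (example.com is just a placeholder): a client that only advertises http/1.1 via ALPN will happily talk to a server that also speaks h2, with zero new-protocol code on the client side.

  package main

  import (
      "crypto/tls"
      "fmt"
      "log"
  )

  func main() {
      // Advertise only HTTP/1.1 in the TLS handshake (ALPN). A server
      // that prefers h2 will still settle on http/1.1, so a client that
      // skips the newer protocols loses nothing but speed.
      conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{
          NextProtos: []string{"http/1.1"},
      })
      if err != nil {
          log.Fatal(err)
      }
      defer conn.Close()

      // Prints "http/1.1" (or "" if the server skipped ALPN entirely).
      fmt.Println(conn.ConnectionState().NegotiatedProtocol)
  }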


> Cloudflare seem to be big backers of http/3 [1].

Having the second-largest traffic analyzer on board would seem like more of a cautionary negative than a positive to me.


Tinfoil hat off for a second: it makes more sense that Cloudflare and Google are backing these protocols because the protocols are more efficient, which means lower infrastructure costs. They both terminate traffic already, so they can see everything regardless of the protocol used.


I am not the person you responded to. However, I would only be considering things that people in general are eager to use, not just a few big companies. Most users of HTTP have never been too concerned with its overhead, except maybe the way cookies have been designed. It definitely has problems, but most people's problems are not Google's or Cloudflare's.


So we should be against something that makes all sites faster... because big companies care more about their sites being fast? That just seems like spite to me.

If anything, smaller sites have more to gain from HTTP/2 and HTTP/3 than the likes of Google. For example:

- Both HTTP/2 and HTTP/3 seek to reduce the number of round trips, mitigating latency between the user and the server. Now, from Google's perspective, the "server" is the nearest load balancer in a globally distributed network, which is probably geographically close to wherever the user is. Thus, users with good Internet connections typically have low enough latency for the improvements not to matter much. But Google still cares about latency because of users with poor internet connections – such as anyone on a cell network in a spotty coverage area. Well, poor connections affect all sites equally. But small sites tend to not be fully distributed; they probably only have a single origin server for application logic, and perhaps a single server period, if they're not using a CDN. That means a fixed geographic location, which will have higher latency to users farther away even if they have a good connection – thus more benefit from latency mitigation.

- QUIC can send stream data in the first packet sent to the server, without having to go through a SYN/ACK handshake first. TCP Fast Open lets plain old TCP do the same thing – but only when connecting to a server you've seen in the recent past (and retrieved an authentication tag from). Thus, QUIC is faster when connecting to a server for the first time – which affects smaller sites a lot more than Google.
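To put rough numbers on those round trips (idealized, invented figures assuming a 150 ms RTT to a single origin server, ignoring DNS and packet loss):

  TCP + TLS 1.2:          1 (TCP) + 2 (TLS) + 1 (request)       = 4 RTT = 600 ms
  TCP + TLS 1.3:          1 (TCP) + 1 (TLS) + 1 (request)       = 3 RTT = 450 ms
  QUIC, first contact:    1 (combined handshake) + 1 (request)  = 2 RTT = 300 ms
  QUIC, 0-RTT resumption: request rides in the first flight     = 1 RTT = 150 ms

The absolute numbers are made up, but the ratios are why a site with one far-away server gains more than an anycast giant.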


> Most users of HTTP have never been too concerned with its overhead

End users complain all the time about latency. And that includes the latency to your small website hosted on a single server hundreds of milliseconds from your visitor... certainly more than it includes google's websites.

What you really mean is that small website operators generally don't care that their visitors are irritated by how slow their website is... and just brush it off and ignore it because they have no solution to the problem.

Maybe you should consider h2 as being for the benefit of visitors across the internet, and a benefit for those who care about performance.

It says it all that even though h2 is not required, small websites have adopted it across the globe... now at a third of all websites, and growing.


I don't think Cloudflare really does traffic analysis, at least nowhere near the level that Google does. It is not their core business.


Why would they then offer a free, fully functional CDN-like service and free SSL? Data is the new oil, and CF has all the data in plaintext, your logins/passwords included.


Because...

a. It's really cheap for us to offer that service

b. Lots of those free customers end up upgrading, paying for extras, etc.

Between a and b, offering the free service makes sense. We make money from the customers who pay us for our service (https://www.cloudflare.com/plans/), not from doing something nefarious with data. We'd be shooting ourselves in the foot if we did, because that data is our customers' data. We need to be very careful with it or we'd lose trust and not be in business.

Also, free means anybody can try the service and kick the tires. Often those people turn out to be the CIO, CSO, CISO, CTO, ... of big corp.


The plaintext thing is just too sensitive, and your free service offer makes the reach too wide. Could you be compelled, by warrant, to provide all plaintext traffic from a single user IP?


> I don't understand the desire to use google's browser.

It was the only browser with a decent Javascript sandbox, at least until recently. Wikipedia claims Firefox got a sandbox this month, but I think I've seen earlier claims:

> Until November 2018, Firefox was the last widely used browser not to use a browser sandbox to isolate Web content in each tab from each other and from the rest of the system.[120][121]


Also it was the only browser where every tab ran in its own process so a crash would only take down that tab.


Microsoft's browsers got this functionality pretty early as well (I believe around the IE9/10 timeframe), though they of course had and still have numerous other issues that would make them undesirable for regular usage.


> It was the only browser with a decent Javascript sandbox

What about Safari? IMHO it has strong sandboxing. Another interesting thing I found is cookie access sharing between private tabs: Safari does not share it, Chrome does.


Could be. I don't know much about the Apple ecosystem.


> Those protocols are a lot more complicated than HTTP. So it's much harder for a small group to implement them.

Why does a small group need to reimplement HTTP/2 and HTTP/3? It's important that we have more than 1 or 2 implementations, but we don't need more than a small handful, and we definitely don't need every independent group reimplementing them. We just need enough that anyone who needs it has access to an implementation that's usable for them, whether it's bundled with the OS (such as Apple's Foundation framework including a network stack that supports HTTP/2), or available as a library (such as Hyper for Rust, or I assume libcurl has HTTP/2 support).
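As an illustration of how little most application code has to care, here's a sketch in Go (example.com is a placeholder); the stock net/http client negotiates HTTP/2 on its own, filling the same role Hyper or libcurl would elsewhere:

  package main

  import (
      "fmt"
      "log"
      "net/http"
  )

  func main() {
      // The default client upgrades to HTTP/2 via ALPN when the server
      // supports it; application code never touches the framing layer.
      resp, err := http.Get("https://example.com/")
      if err != nil {
          log.Fatal(err)
      }
      defer resp.Body.Close()

      fmt.Println(resp.Proto) // "HTTP/2.0" if the server speaks h2
  }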


Because then you get more parts of your stack that you don't really understand and are unable to audit.

That is basically what we are doing with TLS. Which went fine, until people realized that one of the major go-to implementations of TLS contained years-old unfixed bugs that could be remotely exploited.


I am not sure TLS would have been better if instead everyone rolled their own TLS implementation.

Nor do I think that a more diverse world of TLS implementations would've led to better auditing of openSSL. We had barely enough eyeballs to audit openSSL, let alone to audit more stuff.

The issue with openSSL was that the protocol was sufficiently complicated and sufficiently critical that people just picked the available option. Perhaps those who did look into the code they were running concluded it was bad, but weren't willing to create a new library. Besides, any new library would have the stigma of 'they are using a non-standard and new crypto library'.

In that case, the solution would've been louder complaints about the code quality of openSSL.


Better for everyone to be using a small handful of battle-tested implementations written by experts than for everyone to roll their own implementation. The latter may mean that people have a better understanding of the component, but it's also pretty much guaranteed to mean the various implementations are buggy. Even very simple protocols are easy to introduce bugs into.

For example, it's pretty easy to write an HTTP/1.0 implementation, but it's also easy to open yourself up to DoS attacks if you do so. If you're writing a server, did you remember to put a limit on how large a request body can be before you shut down the request? Great! Did you remember to do that for the headers too? Limiting request bodies is an obvious thing to do. Limiting the size of headers, not so much. But maybe you thought of that anyway. What about dealing with clients that open lots of connections and veeery sloowly feed chunks of a request? The sockets are still active, but the connections are so slow you can easily exhaust all your resources just tracking sockets (or even run out of file descriptors). And this is just plain HTTP, without even considering interacting with TLS.
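This is also an argument for the battle-tested implementations: in Go's standard library, for instance, most of those limits are configuration on the stock server rather than code you write yourself. A sketch, with arbitrary values:

  package main

  import (
      "io"
      "log"
      "net/http"
      "time"
  )

  func main() {
      mux := http.NewServeMux()
      mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
          // Cap request bodies at 1 MB; reads beyond that fail.
          body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, 1<<20))
          if err != nil {
              http.Error(w, "request too large", http.StatusRequestEntityTooLarge)
              return
          }
          w.Write(body)
      })

      srv := &http.Server{
          Addr:              ":8080",
          Handler:           mux,
          MaxHeaderBytes:    16 << 10,         // bound header size, not just bodies
          ReadHeaderTimeout: 5 * time.Second,  // drop clients that trickle headers
          ReadTimeout:       30 * time.Second, // bound slowly-fed request bodies
      }
      log.Fatal(srv.ListenAndServe())
  }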


"It's a sign that web standards are getting too complicated."

Is there precedent for standards significantly simplifying over time, or do they always tend to get more and more complex?


What frequently happens is that a simplified alternative appears.

HTML5 rather than XHTML, Markdown vs. HTML or LaTeX, HTML, originally, vs. SGML or Sun's ... proprietary hypertext system (Vue?).

Arguably, replacement of much office suite software with Web technologies.

Multics -> Unix.


This is true, but a web browser can't really make those choices without breaking a lot of existing stuff. The big problem is that we keep piling onto HTML, CSS, and JS. For instance, if we wanted web apps, it would have been better to make something separate. Instead we have taken HTML, which was originally just a way of formatting rich text, and made it into the beast it is today.


This may be a nitpick, but hopefully it's also an interesting rabbit-hole:

HTML was originally contemplated as more than a method of rich text formatting. It was created as a way to describe and link arbitrary media and applications. I'd recommend reading the first published proposal for (what later became known as) the World Wide Web, written by Tim Berners-Lee [1]. In my reading, it was intended for applications as powerful as the kind we build today - at least as far as could be contemplated and described in 1989, and given the degree of abstraction with which the document was written:

> "Hypertext" is a term coined in the 1950s by Ted Nelson [...], which has become popular for these systems, although it is used to embrace two different ideas. One idea[] is the concept: "Hypertext": Human-readable information linked together in an unconstrained way. The other idea [...], is of multimedia documents which include graphics, speech and video. I will not discuss this latter aspect further here, although I will use the word "Hypermedia" to indicate that one is not bound to text.

An example of anticipated usage:

> The data to which a link (or a hot spot) refers may be very static, or it may be temporary. In many cases at CERN information about the state of systems is changing all the time. Hypertext allows documents to be linked into "live" data so that every time the link is followed, the information is retrieved. If one sacrifices portability, it is possible so make following a link fire up a special application, so that diagnostic programs, for example, could be linked directly into the maintenance guide.

Another category of use-case was web crawling, link-based document search, and other data analysis.

These and other anticipated use-cases envision more than text formatting; the primary purposes of the proposal were, in my opinion, the inter-linking of information and the formal modeling of information, especially for the purpose of combining different programs or facilities into a single user experience.

[1] https://www.w3.org/History/1989/proposal.html


I wish Google Search would create an HTML5 subset for documents that would boost rankings if used.

A good majority of search results I am looking for should be simple single page HTML documents that don't use complex HTML5 features that are needed for web apps.

Change ranking, and you give websites the incentive to avoid JavaScript or CSS features that are against the reader's interests.


I'm 80% sure you're joking, but just in case, this is essentially what AMP does.


Last thing we need is google dictating more about the internet.


My understanding was that this was the original plan for XHTML. Keep HTML 4.x around as a "legacy standard" for old content, make new developments in a new language with an architecture more suited for modern use cases.

Of course this would have required browser vendors to support two languages at the same time for a sufficiently long transition period, which was apparently too much to demand.


But they did support both languages, and support them to this day.

It's the sites that didn't adopt XHTML. Everybody on the infrastructure side loved it.


> ...without breaking a lot of existing stuff...

That's specifically why and how new standards appear. They accomplish most (though not all) of the earlier capabilities, with a massive reduction in complexity. It's a form of risk mitigation and debt reduction.

Compare browsers generally: Netscape -> MSIE -> Mozilla -> Firefox -> Chrome -> Firefox. Each predecessor reached a point of complexity at which, even with massive infusions of IPO, software monopoly, or advertising monopoly cash, they were unsustainable.

The old, dedicated dependencies (frames, ActiveX, RealPlayer, Flash, ...) broke. Simpler designs continued to function.


> For instance, if we wanted web apps, it would have been better to make something separate

But then we need to make another app + browser version? Which defeats the purpose...


Moreover, we have gone from Microsoft pushing complexity onto the web to Google doing it.

Take the latest two HTTP protocols, both based on tech that Google had already built. The IETF's reaction was, essentially, "that sounds good". The tech has its advantages, but there is very little push-back saying, well, that makes things more complicated.

For instance, HTTP/2 has support for pushing files to the client. Most back-end web stacks are still trying to come up with good ways to make that easy to use, mainly because which files to send depends on what the page contains. So either you have to specify a custom list, or the web server now needs to understand HTML to derive the list of required resources. It gets more complicated still, because a push is useless if the resource is already cached; your web server has to have some kind of awareness of how clients will cache data. Again, this means your web server needs more client knowledge.
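For illustration, here's roughly what push looks like with Go's http.Pusher interface (the asset path and cert files are placeholders); even this tiny sketch has to hard-code what the page references and simply guess about the client's cache:

  package main

  import (
      "log"
      "net/http"
  )

  func handler(w http.ResponseWriter, r *http.Request) {
      // http.Pusher is only present on HTTP/2 connections.
      if pusher, ok := w.(http.Pusher); ok {
          // The server must already know the page references this file,
          // and the push is wasted bytes if the client has it cached.
          if err := pusher.Push("/static/app.css", nil); err != nil {
              log.Printf("push failed: %v", err)
          }
      }
      w.Write([]byte(`<link rel="stylesheet" href="/static/app.css">`))
  }

  func main() {
      http.HandleFunc("/", handler)
      // Push needs HTTP/2, which for browsers means TLS.
      log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
  }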

This does not even take into account how the browser should handle these things.

Additionally, while cryptography is a good thing, the HTTP/2 standard does not require it. However, pretty much all the browsers ignore the fact that unencrypted HTTP/2 is allowed, so if you want to run HTTP/2 without TLS, the browsers act like the site does not exist. This gets into the problem that, since there are so few browsers, they can basically create de facto standards. So even if you go through the effort of following the standard, what you encounter in practice may not follow it at all.
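For what it's worth, cleartext HTTP/2 (h2c) is perfectly implementable; here's a sketch using the golang.org/x/net/http2/h2c package. Tools like curl can reach it, but no major browser ever will:

  package main

  import (
      "fmt"
      "log"
      "net/http"

      "golang.org/x/net/http2"
      "golang.org/x/net/http2/h2c"
  )

  func main() {
      handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
          fmt.Fprintf(w, "served over %s\n", r.Proto)
      })
      // Speak HTTP/2 without TLS (h2c). Something like
      // `curl --http2-prior-knowledge` can reach this; browsers cannot.
      log.Fatal(http.ListenAndServe(":8080", h2c.NewHandler(handler, &http2.Server{})))
  }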


The standard for h2 may not have required it, but practically it was required. There are middleboxes on the internet that assume any traffic over port 80 is http 1.1, and will destroy/interfere/break non-1.1 traffic. There are also servers that will respond with a 400 error if they see an unrecognized protocol in the upgrade header. This is why actual data shows h2 has a higher success rate when sent over tls.

IIRC MS/IE wanted to implement it, but they backed off because of these issues

Asking browsers to implement h2c is asking them to make their browsers flakier... their users would see a higher connection error rate... which the user WOULD attribute to their browser, especially if they open the same URL in another browser without h2c and it works.

Using the Upgrade header instead of ALPN is slower anyway.


> HTML5 rather than XHTML

Huh? Parsing HTML5 is much more complicated than XHTML, and everything else is about the same.


The issue with XHTML is not parsing it; it's generating valid XHTML. The internet got years to try, failed; time to switch to something else...

Because parsing invalid XHTML, which all browsers ended up doing, is more complicated than parsing HTML5...


It's pretty easy to generate a valid XHTML doc. The issues come when someone is editing by hand and doesn't care.

> Because parsing invalid XHTML, which all browsers ended up doing, is more complicated than parsing HTML5...

I don't understand what you mean. Isn't the non-strict parser for XHTML just the normal HTML parser? The complication levels should be equal.


> It's pretty easy to generate a valid XHTML doc.

In the face of arbitrary user-content, like comments? Are you checking they don't include a U+FFFF byte sequence in there? (Ten years ago almost none of the biggest XHTML advocates had websites that would keep outputting well-formed XML in the face of a malicious user, sometimes bringing their whole site down.)

It's absolutely possible to write a toolchain that ensures this, just essentially nobody does.
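For illustration, here's a sketch (in Go, with a purely hypothetical helper name) of the filtering step such a toolchain needs: scrub every piece of user content of code points XML 1.0 forbids, like U+FFFF, before serialization, or one malicious comment makes the whole document ill-formed.

  package main

  import (
      "fmt"
      "strings"
  )

  // stripInvalidXML drops code points that XML 1.0 forbids outright
  // (anything outside #x9 | #xA | #xD | #x20-#xD7FF | #xE000-#xFFFD |
  // #x10000-#x10FFFF).
  func stripInvalidXML(s string) string {
      return strings.Map(func(r rune) rune {
          switch {
          case r == 0x9, r == 0xA, r == 0xD,
              r >= 0x20 && r <= 0xD7FF,
              r >= 0xE000 && r <= 0xFFFD,
              r >= 0x10000 && r <= 0x10FFFF:
              return r
          }
          return -1 // drop everything else
      }, s)
  }

  func main() {
      comment := "hello\uFFFFworld" // user content smuggling in U+FFFF
      fmt.Println(stripInvalidXML(comment)) // "helloworld"
  }

(Escaping &, <, etc. is a separate step; this only covers the character-set rule that tripped up so many generators.)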

> Isn't the non-strict parser for XHTML just the normal HTML parser?

Yes. It's literally the same parser; browsers fork simply based on the Content-Type (text/html v. application/xhtml+xml), with no regard for the content.

The bigger problem with XML parsers is handling DOCTYPEs (and even if you don't handle external entities, you still have the internal ones), and DOCTYPEs really make XML parsers as complex as HTML ones. Sure, an XML parser without DOCTYPE support is simpler than an HTML parser, but then you aren't parsing XML.


The problem is that, with the glut of documents declaring strict conformance but failing to achieve it, fallback mechanisms had to be implemented, making it like a two-pass parser: if strict parsing fails, you reparse in non-strict mode. In the end it is slightly more complex, and definitely slower.

Anything more would be paraphrasing http://www.webdevout.net/articles/beware-of-xhtml


In the particular case of web standards, my impression is that some companies that develop browsers (1) tie individual performance evaluations (e.g. bonuses) to whether the engineer has added stuff to standards and (2) _really_ like over-engineering things. The effect on web standards has not been good.



Firefox performs badly, especially on my 2-core macbook.

Quantum is still not fast enough on many pages I use. I bet most devs do not test on Firefox anymore, and I've found FF unusable unless you use a 4-core machine; otherwise you get many random pauses here and there.

So my choice is chrome or safari. Safari is not customizable enough for me so chrome it is.


I use Firefox as my daily driver and I am consistently amazed by how slow Chrome is whenever I load it up for a debug session or to access a work-related site (it's the new IE; some sites only support it).

Most Google sites are faster in Chrome than in Firefox (big surprise /s), but most everything else is the same or slower. I thought Chrome was supposed to be fast; it feels like a turd.

I have a Yoga 2 (4 years old) and my laptop fan revs up like a harrier jump jet whenever I load Chrome. Firefox only manages to make it purr loudly.


I recommend to my non-IT friends and family that they should use gmail and chrome because they use Windows and Google's security is fantastic. Sure there is a compromise. Google are using private information for advertising but (1) Google doesn't have a history of sharing PII with third parties and (2) Google are very good at keeping information and passwords secure. Many of my non-tech friends and family use Android phones so they need a gmail account (convenience), and they use Facebook (so their privacy is already compromised to third parties). I strongly recommend against IE and Edge because they are buggy and IE/Edge had 9 critical security flaws in October (implies lots and lots of zero days in the long tail still remain). Firefox is OK but it just isn't as fast, reliable, usable, or secure as Chrome IMHO.

I personally use Chrome because it is secure and fast (and the debugger works far better than Firefox's, Safari's or Edge's). I personally don't use Apple because I don't want to spend x% of my disposable income on iDevices per year, when I can spend 0.x% on Android devices per year. I distrust Microsoft (their security is suspect and their implementations suck: I use outlook for work and the UI is super buggy - I notice unique flaws regularly and have to live with some bugs every day. Like email notifications stopped working the other day - just unbelievable shit). I would love to not use Google, but for the compromises I need to make, it remains the best choice by far for me. Edit: fixed # flaws.


Have you given Firefox another shot during the past year? I agree that it used to be terrible (as in bloated, buggy, and slow), but the work they've been doing with Servo and Quantum has really, really paid off. It really is a whole new experience.

I don't do webdev, so I can't really comment about that. I agree wrt edge being terrible and not being willing to pay Apple prices.


I do some webdev (work with APIs & frontend stuff like react, vue). The firefox debugger is different. After an adjustment period I got used to them just fine and actually prefer some parts (like the network tab).


So I'm a Firefox user. Went back for reasons everyone here already knows; lots of privacy issues. I use a VPN too.

Anyway I switched to Firefox on my computers and mobile system. I use that VPN to try limit Google's tracking of me and I use duckduckgo for the same reason.

Long story short, I just switched back to Chrome on my Android, because Firefox has kinda stopped working. I used to be able to keep 100 tabs open; now I can't even keep 1 open in the background. When I go back, it just forgets what it was and won't refresh. I click refresh and it shows it's refreshing, but then nothing happens.

Nothing I can do. I'm reading something, or I see a great article and open it right away in a FF tab for later. I go back, it doesn't load. Then I can't reset/restart it, because it won't die, and then it stops syncing, etc.

It's really really bad. Sadly this wasn't the case when I decided to switch about 8 months back, this is only in the last 60 days.

I'll keep using Firefox for now on my desktop, but honestly I really rely on profiles and sync across profiles, which is a pain to get around on FF as it is, but now it's a big burden I can't really see my way around.

Too bad, but honestly I need a reliable tool more than I need privacy at this stage.


I was seeing similar symptoms. Fixed by setting cookies and all other state (except bookmarks and one or two other things) to clear upon browser exit.


Interesting extreme user, would be interested to learn more about how you compute.


Happy to connect anytime and share. I'm on most platforms with this handle.


I've used Firefox 55-62 and I switched back to Chrome recently. Quantum was a great improvement, but it is still too buggy for me, and it's gone downhill somewhere in the past few months.

* Uses 30-40% CPU constantly on my Ubuntu laptop, causing the entire system to freeze.

* Slow on JS-heavy apps like JIRA, Gmail, Google docs.

* Firefox Android randomly decides to stop loading web pages, requiring a force quit and restart.

* Firefox Android bugged out while writing this comment, the text I typed would appear at a specific location, regardless of where I put the cursor. This and various other HTML input bugs require me to restart the browser again and again.


Heh, Google's entire business model is sharing your data with third parties. Is it better because they filter out some details?

Also, Chrome might be more secure from a vulnerability point of view, but browser exploits (exploit kits) are not a very common means of deploying malware these days; they tend to focus on IE and Flash: https://blog.malwarebytes.com/threat-analysis/2018/03/exploi...


Can you point to some links on Google sharing data with third parties? AFAIK they don't share any data. If you want to target a certain demographic, they will do the targeting for you, but they won't pass on the data. They keep the data internal.

If you install some 3rd-party app and give that app permission to access your data, they'll give that app the access you asked them to give, but otherwise no sharing AFAIK.


If I target users based on specific criteria and I know they came from Google, then that information about them is passed along. Raw data isn't much use anyway; that's why Google and FB aren't afraid to share with users what they collect. The analysis and targeting done with that data is what users should be concerned about.


The same reason that so many of them continue working for companies that blatantly violate principles they claim are important to them (privacy, open-source, anti-advertising, etc.):

It's easy to be vocal about principles, but when it comes down to it, very few people are actually willing to impact their own comfort or convenience to truly follow them. It's simpler to just come up with a reasonable-seeming justification for why you're not really supporting things you claim to be opposed to.


> The same reason that so many of them continue working for companies that blatantly violate principles they claim are important to them (privacy, open-source, anti-advertising, etc.):

You're trying to paint people as hypocrites, when a simpler explanation is that most users, even here on HN, are not as concerned with the problem as you are. Vocal minority and all that.


> willing to impact their own comfort or convenience to truly follow them.

*method of survival.

You can "stand up for your principles", or you can not be an ideologue, survive, and live to fight another day making progress and positive change along the way. Full stop boycott stops nothing. Changing from within is the most effective. Instead of posting shame-inducing posts like this, labeling people and assuming the worst, try assuming the best, encourage them to take actions to increase privacy and increase security. I work in that field, and when my own principles are violated, I speak out. I guarantee that changes more than people shaming others on social media. Advertising and Data collection have about as much as a chance of stopping as world governments agreeing to stop producing bullets, so let's try to make it as ethical as we can.


You mean like the visible progress we're making on climate change because everyone suddenly decided to stop eating meat, even though they're constantly bombarded with advertising telling them otherwise? Oh wait, we aren't, actually!


> I just don't understand how so many people in the HN community, who are so vocal about privacy, turn around and use Chrome.

The amount of blind trust that people - including very technical people - gift to Google is rather shocking.


You might trust someone with one thing but not with the other thing. A question "do you trust them?" out of context is not specific enough.


Here is an alternative view from Theo de Raadt, the OpenBSD founder: “[firefox catching up with chrome’s security] is lipstick on a pig”: https://marc.info/?l=openbsd-misc&m=152872551609819


One could say that it doesn't matter how many different privilege separation levels Chrome has if it so readily exfiltrates your data to Google's servers.


True. But security considerations add an important dimension to the conversation which is often missed.


Firefox has more than two process classes. I'm probably missing some still, but it separates at least into main process, content processes (tabs), NPAPI plugin process and extension processes (at the time of his writing, I believe this was one big extension process still).

There's also a process for Asynchronous Panning and Zooming (APZ), but that probably doesn't help much with security.


Chrome makes sure that no one gets user data but Google.


I don't use Chrome but the vast majority DOES and I share a planet with them. Google is everywhere these days. So far it doesn't impact me too much since I've been blocking ads profusely.


Many tout their laziness as efficiency.


Chrome made a better walled garden out of the web than any other browser.

People want apps. Not browsers.


> I just don't understand how so many people in the HN community, who are so vocal about privacy, turn around and use Chrome.

1) Firefox is SLOW. I have ~400 tabs open on a Macbook right now in Chrome; Firefox snails around at 30-40 tabs.

2) Firefox dev tools sucked for a long time, compared to Chrome's. Same goes for Safari's dev tools - and don't get me started on the clusterf..k called Internet Explorer... that's why devs drove off to Chrome in the first place and stayed there.


My experience with these browsers (also on a Macbook) is almost exactly opposite. It's amazing how people can use the same things and have such wildly different experiences.


People simply don't believe in their own agency.


Especially ChromeOS.


It still has the tightest sandbox. So until Firefox has a new JS engine, it's a security-vs-privacy choice.


Half the internet is optimized for Chrome. We're lucky there are any alternatives at all. MS should buy FF just to fuck the future.


Because it's faster. That's all there is to it. Yes, I know many people will come and tell me how wrong I am and how Firefox is so much faster in their experience. Maybe they will link to some synthetic benchmarks. I don't care. Chrome is faster.

Also the Developer Tools of Firefox are worthless... and not only because of how slow they are.


> the Developer Tools of Firefox are worthless

That's odd, because I think the Chrome dev tools are junk compared to Firefox. And I've never had an operation in Firefox dev tools that wasn't instantaneous. Perhaps our use cases are markedly different.


And I don't understand how so many people in the HN community can believe such conspiracy theories about Google and Chrome, what info they collect and how it's used and shared.

AFAICT Google doesn't share any info with 3rd parties unless you sign up with some 3rd party and ask them to share the info. I've never used my Google account to sign up with any 3rd parties.

As for Chrome collecting my history: (a) I want that, since I want to be able to search my history across devices. For those rare cases where I don't, I use an Incognito window. (b) You can opt out of having Google use your history for ad targeting. https://adssettings.google.com/authenticated

Note that ad targeting does not in any way suggest Google is sharing data. In fact it's in their best interest not to share data. If they share the data then other companies can use that data themselves. If they don't share data then other companies have to go through Google for targeted ads.


Google is the third party in this case.


It doesn't matter if they're not sharing it with third parties.

If the user in question did not give specific permission for Google to steal their ProtonMail email/s and send them back to Google servers, then that should be a crime. It should be a felony, just as it would be if a Google employee opened or obtained my physical mail without permission, scanned it in some manner, and took it back to their offices.


That is a fairly uncharitable interpretation of what happened. At worst, Google tried to be helpful by offering a free translation service, and the user clicked "yes, translate all pages in language X to Y". Chrome does not automatically translate pages out of the box; it asks.




