Recently I had to install the Hangouts app on my Android phone (it was easier than using it on the desktop because I don't have the latest Chrome). One has to register a Google account in order to use it, and I had to answer a lot of questions, as if I were applying for a visa, including a phone number (of course I used a fake number) and date of birth. Then the app displayed a terms of service page with boring legal text. But I noticed that there was a small button to show more details, and when I clicked it, the page expanded and I saw checkboxes (a lot of them), most checked by default, like "share my location with Google" or "record web & app activity". Of course, I turned all of them off and thought I was smarter than the typical user who would not even see these checkboxes.
It turned out I had been tricked too.
First, the Hangouts app somehow added this newly created account to all the other Google apps, so Google Play (which I had never used before) started itself up and said I needed to update several apps (no, I don't), and then the Mail app said I had an email (the boring kind they put into your inbox upon registration). The Hangouts app also added this new Google account to the phone settings. And enabled sync for everything - including contacts.
Luckily, I mostly use the phone as a dictionary and it didn't have any personal information on it - but if it had, that information would have been irreversibly copied into the Google cloud.
These settings are not easy to find. For example, to learn about sync, you have to go to Settings -> Accounts and tap the word "Google". Only then will you see that your data is being uploaded to Google. Google doesn't even give a warning, let alone ask you whether you really want it. To disable location tracking you need to notice a tiny button on the terms of service page or find it in the settings. I am sure that most users don't even realise they've agreed to be under constant surveillance by Google.
I must admit, Google is good at sucking data out of people and deceiving them. After all, it employs some of the smartest people on the planet.
If it's not what you want, then it is overly intrusive and exceptionally hard to manage so that you control just the functionality and personal security you want.
Most end users, I think, just want something that works and are happy for all the magic to just happen. When you embrace it, a lot of what it does is very clever and very useful. Most people I know who have embraced it find the integration fantastically useful and don't have most of the concerns that the more technically aware people do.
* Phone number is clearly marked as optional and it says they use it for security. (Of course, Facebook said the same thing, and look how that turned out...)
* The only information requests that I think are unnecessary are date of birth (they say because some services are age restricted) and gender (for which "rather not say" is an option).
* The page you are referring to is not really boring legal text; it's pretty plain English that is easy to understand, and there's not a lot of it. I think the average person can read it easily in 1-2 minutes.
* The "location history" option is off by default.
* It is true that the options are in a "more options" folded thing at the bottom of ~2 pages of text. This sucks, but looking at the whole page in context, it's not nearly as bad as you made it sound.
> And enabled sync for everything - including contacts.
I think this only applies to new contacts created within your Google account, not local phone contacts. I learned this the hard way when my contacts did not sync from my old phone when I wanted them to.
> To disable location tracking you need to notice a tiny button on the terms of service page or find it in the settings. I am sure that most users don't even realise they've agreed to be under constant surveillance by Google.
This was off by default for me.
US law has some requirements on accounts created by minors which essentially mean you must check the age of people creating new accounts, and you must not allow accounts for people under X years of age (but you are not allowed to tell them that up front).
I don't entirely understand where exactly this applies, i.e. why you can create accounts on some sites without giving your age. It might only apply to certain telecommunications cases, or maybe (possibly more likely) Google is one of the few companies subject to scrutiny here, so everyone else just flies under the radar.
That law is called COPPA (the "Children's Online Privacy Protection Act"). It applies to sites dedicated to children (as per https://www.ftc.gov/tips-advice/business-center/guidance/com...). As Google uses one and the same account for all its services, including YouTube, all Google accounts are potentially "dedicated to children" (IIRC YouTube even has a special section for videos suitable for toddlers).
Are they really making a product that caters to very young children while claiming for legal purposes no children can use it?
> Phone number is clearly marked as optional
That is because your IP has a good reputation. For me, registering a Gmail account from Firefox looks like this. Note that the text is misleading (this is not for my security, it is to prevent bulk registration).
> date of birth (they say because some services are age restricted)
If the user is over 18 you don't really need the day and month; the year is enough.
> The "location history" option is off by default.
OK, I deleted the Google account from the phone (with two scary warnings that some of my data would be deleted from the phone), deleted Hangouts' cache, force-stopped it and tried to repeat the registration. I made screenshots of every screen. If anyone needs them, I can upload them all.
It is weird, but this time Google didn't require me to confirm a phone number, although it requires one if I use a desktop browser and it required one the previous time. The phone is connected to the Internet via the desktop, so it has the same external IP. This is suspicious and might mean that Google has recognised my phone.
Here's what I observed:
- Google warns that it can exchange my device info (IMEI?) with a phone company if I enter my phone number. Do phone companies sell data to Google?
- The terms of use are written in plain language, but they are three screens long and the important options are hidden behind a spoiler. Note that the button is labelled "More options" instead of "Choose what I share" or "No, I don't want to share" or something like that. Google doesn't really want you to click it.
- By default, "Save web & app activity" is enabled , and it includes "searches and associated information, such as location and activity from sites, apps and devices .... like Chrome history, for instance...". I don't understand whether this really means that they collect my browsing history and location or they meant something else (like stats calculated from those data).
- Saving YouTube search and watch history is enabled by default.
- You are right, location and voice history are off by default
- Backup to Google Drive is enabled by default. I don't understand what it does, or what Google means by "data". Does it include all files in /sdcard? That would be scary and I definitely don't want it. Also, I don't remember if I saw this the previous time. I had to retry the registration several times late at night; I was tired and could have accidentally forgotten to disable it, which would explain why contacts sync was enabled. Maybe it was my fault, but I am not sure.
- Note that the description of "Backup to Google Drive" is hidden behind a spoiler
- I checked whether sync is enabled and couldn't really understand anything. One screen says "Sync is OFF", but another says "Last synced on xxx.xxx".
- In the detailed Google account settings I found that Google will save "contact info of people I interact with in Google Products".
So you were right about location history - it is disabled by default. But there are so many settings, and it is so easy to forget to disable something. I wish I could update the original comment for clarity.
But if the user is 18, you need them. It would be a really weird UI if it changed this dynamically... I wouldn't expect anyone to implement that.
That's what it means. You can look at the data Google is storing at https://myactivity.google.com/myactivity
If you have an Android phone and don't trust Google then what their apps ask for seems a bit irrelevant - you've almost certainly already given them whatever you don't want them to have.
I trust Apple more when it comes to privacy, because there is hard proof they are more trustworthy. Note that I'm not saying they can be trusted 100%, but they are far, far better at protecting their users' privacy than Google. Of course, that's a given considering Google's entire existence is based on selling your information to advertisers, whereas Apple makes their money when you buy their hardware.
Honestly, there is no practical way to not end up in some company's database somewhere unless you eschew all 20th and 21st century tech, live in a cave, and forage for food. Even then you're bound to end up in a news story on the Internet if you're ever spotted, even if it's just Weekly World News talking about another Bigfoot sighting.
This sounds like a religious view rather than one based in any evidence.
- I have rechecked and "Location sharing" is off by default, although "Share web & app activity" is on by default and it can include "searches and associated information, such as location and activity from sites, apps and devices .... like Chrome history, for instance..."
- I might have accidentally forgotten to turn off "Backup to Google Cloud" when registering, because I had to retry the registration several times and was tired. That would explain why sync was enabled, but I don't remember it clearly.
I installed maps.me to give it a try... Very surprised that one of the settings is "Use Google Play services to determine your current location"
Too bad that Ubuntu phone died on the vine.
I tried twice - and each time Google set this option on by default, even though I had disabled it the previous time.
Of course, this might not be intentional - maybe the sync flag simply isn't stored on Google's server.
A remote, self-destructing VM for browsing with Firefox in incognito mode (only the sites you NEED that REQUIRE JS), through multiple VPNs over multiple proxies.
Everything else is command-line HTML parsers (also on separate remote VMs) or API endpoints (the HN API, for example?).
Need email? A tiny, self-hosted mail server somewhere in Eastern Europe. DDNS, etc.
The local machine is always clean. Imagine you had to use an iPad as your primary work machine - it's a very similar spiel.
The good thing is that most of it becomes habit very quickly, and the most tedious parts can be automated :)
Keeping it up offline? Cash only, prepaid phones (these give you internet access as well: $80, no contracts, activate, use 20 GiB until the end of the month, destroy and discard the phone, repeat), and prepaid debit cards for "card required" purchases.
Easy-peasy! No idea what people are complaining about...
Let's Encrypt is probably the best example of actually getting it right. But normal people won't really ever need it.
This is somewhat how Richard Stallman uses the internet:
> I am careful in how I use the Internet.
> I generally do not connect to web sites from my own machine, aside from a few sites I have some special relationship with. I usually fetch web pages from other sites by sending mail to a program (see https://git.savannah.gnu.org/git/womb/hacks.git) that fetches them, much like wget, and then mails them back to me. Then I look at them using a web browser, unless it is easy to see the text in the HTML page directly. I usually try lynx first, then a graphical browser if the page needs it (using konqueror, which won't fetch from other sites in such a situation).
> I occasionally also browse unrelated sites using IceCat via Tor. Except for rare cases, I do not identify myself to them. I think that is enough to prevent my browsing from being connected with me. IceCat blocks tracking tags and most fingerprinting methods.
> I never pay for anything on the Web. Anything on the net that requires payment, I don't do. (I made an exception for the fees for the stallman.org domain, since that is connected with me anyway.) I also avoid paying with credit cards. For freedom's sake, insist on paying cash. When a business pressures you to pay in an identified way, that means your help as a citizen is needed: say, "If you won't take my cash, no sale!"
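That fetch-by-mail trick is simple to reproduce. Here's a minimal sketch in Python, assuming a local MTA is listening on localhost; the addresses and URL are just examples, and the real program linked above is more involved:

    # Fetch a page the way wget would, then mail the HTML back to the requester.
    import smtplib
    import urllib.request
    from email.message import EmailMessage

    def fetch_and_mail(url, to_addr, from_addr="fetcher@localhost"):
        with urllib.request.urlopen(url) as resp:        # fetch the page
            body = resp.read().decode("utf-8", errors="replace")
        msg = EmailMessage()
        msg["Subject"] = "Fetched: " + url
        msg["From"] = from_addr
        msg["To"] = to_addr
        msg.set_content(body)                             # page HTML as the mail body
        with smtplib.SMTP("localhost") as smtp:           # assumes a local MTA
            smtp.send_message(msg)

    fetch_and_mail("https://example.com/", "user@localhost")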
I hope you're just joking since layering up multiple VPNs doesn't provide any privacy by design. The best way is to use disposable Whonix VMs in Qubes OS.
2. The Qubes team has been doing a great job so far, and Joanna has not been directly involved for about a year now. Marek and Andrew Wong are the ones I notice the most on the mailing list and on github, but there is a big team that has gotten Qubes to where it is today.
That collaborative one.
That sharing one.
That innocent one.
But thank the universe we still have it.
A free-to-use global computer network born of a military projects programme, with all communications in the clear by default, centralised in a country with a highly active global foreign policy. Hmmm. Looking at it like that, it seems Google and Facebook are just picking up where the other guys left off.
But yeah still glad we have it though.
Also, in Russia (and many other countries as well) you cannot legally buy a SIM card without an ID. And Digital Ocean doesn't accept some virtual debit cards and suggests that I use a real credit card (so that they can charge me even if I don't have money).
What are you using? Do you have any advice for setting this up?
I've been using Protonmail for a couple of years, and while I'm generally fairly happy with it, I'd really like to self-host my email in my home. Aside from the technical experience, my understanding is that the US court system sees data stored on your own hardware in your own home very differently than data that you've entrusted to the care of a third party outside your home - the former is protected by the Fourth Amendment while the latter is not.
On the Debian setup menu, I selected smarthost, to use the ISP's server for sending (required because of the way the internet service works; your own server is still used for receiving). Then, in order to reduce spam, I modified the configuration so that only aliases can be used and not real usernames, and set up several aliases in the /etc/aliases file, so that a different one can be used for each service or correspondent. I then set up the router to allow incoming SMTP connections.
(If necessary, you may need to ask your internet service provider to disable NAT. If they won't let you do this, or won't allow arbitrary port numbers, then it isn't a real internet service.)
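For what it's worth, the alias part looks roughly like this, assuming Debian's default exim4 setup where /etc/aliases is consulted for local delivery; the alias names and local user below are invented examples:

    # /etc/aliases - a different alias per service or correspondent,
    # all delivered to the same local mailbox (names are examples)
    shop-example:   myuser
    newsletter-foo: myuser
    forum-bar:      myuser

Mail sent to the real username can then be rejected, which cuts out a lot of dictionary-style spam.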
The advice in the thread isn't "never connect to protonmail using Chrome." It's "don't use Chrome".
100% agree. Firefox is so good now, there's really no excuse.
So, I would just use Safari, except that as a dev, I need the vastly superior dev tools, extensions, and customizability of Chrome and FF.
So now I have to juggle browsers, trying not to get burned by Google, burned in a whole different way by FF, and trying to get things done in Apple's "no preference settings for you, because Apple's preferences are all that matter" design ethos.
Furthermore, we're discussing a major con of using Chrome in this thread, so I'd be happy to take a minor trade-off or two to avoid that (especially if that trade-off is partly down to Google crippling their own service perf-wise in non-Google browsers).
What kind of sites are taxing Firefox other than the well-known offenders? Given you're on Pro and I've no issues with an Air, it's likely a GPU issue that could be remedied with some config tweaks.
I hate when people say this, of course there are valid reasons why not to use Firefox. For example Firefox can't play videos smoothly on my hardware (yes, this issue has been reported), that's a dealbreaker for me.
That's what they claimed, not what they discovered. What they discovered was that Chrome was sending emails created in a specific webmail client to a translation service. Language detection is done client-side; text is only sent to the translation service if the client decides it's in the wrong language.
Perhaps you are angry because it doesn't send your DNS requests in the clear to Google's 8.8.8.8 service? Perhaps you are angry because you don't like encrypted communication protocols?
Off the top of my head: forced telemetry (even if you turn it off in about:preferences, some stuff gets reported back to Mozilla); Pocket and Sponsored Tiles (the former sends Mozilla the URL and form data for every site you visit, the latter has complete access to your browsing history so it can show you "relevant info"); Adobe DRM and Encrypted Media Extensions (some people don't like any DRM in their browser; I don't have an issue if it's trustworthy, but you're asking so I'm listing it); and a minor, easily corrected nitpick, but they went back to Google as their default search engine. My problem with that is that every update (so far) ignores user settings and changes it back. This can lead to unexpected, unwanted searches via Google.
More generally, if any of these things actually offend you, I'm sorry to tell you but you're not the audience for a web browser—after all, general web browsing is far, far worse. Every website you visit gets your IP address and your user agent string. Ooooh noooo.
IMO there should be an original title and an edited title (and users could optionally display only one title if they requested)... but then there's lots I'd change...
It just says they had to turn off the suggest translations feature, which would apply to all sites/languages.
Because people were abusing it left and right to prevent password managers, and because e.g. banks (my own bank did this...) rarely listen to customers, Google decided to disable that opt-out for everyone instead.
Not that this would surprise me with Chrome's general attitude...
<form name="form" onsubmit="checkspelling()" autocomplete="off" spellcheck="false">Enter spelling: <input id="textfield" name="textfield" size="20" type="text" style="font-size:32pt;" autocomplete="off"> <input value="OK" onclick="checkspelling()" type="button" style="font-size:32pt;"></form>
[Edit] I've got the wrong end of the stick it seems.
Put a value into a text field and Chrome will helpfully save it for future auto-completion. Then it'll upload it to your account on their cloud if you're logged into an account. How do you think it's able to fill out your name, address, etc. on all those web forms?
This isn't a thing that Google does
There's certainly a difference. I'm not sure it's a very big one though.
The latter is an extra problem in a few specific areas:
1. Your foremost fear is a bad actor getting your private details (e.g. identity fraud / doxing). These are legitimate fears, but certainly not the primary likelihood in the majority of cases.
2. Discrimination based on background checks (jobs/loans/etc.). Also completely legitimate, though background checks tend to be plenty invasive on their own these days, so I'm not sure how much of a negative impact Google's data would add here.
Other than these specific threats, the two seem exactly equivalent for most reasons people are concerned about privacy.
You're asking me to give counter-examples to examples/explanations you haven't given.
This isn't some gotcha thing, I'm trying to understand these concerns better, because I really don't. I'm not asking for "counter-examples" to anything, I'm just asking for examples. It's not an odd question.
The main reasons people are concerned about privacy, I would say, are around influence and personal autonomy. There are plenty of people (many of them on HN; I've read many comments here to this effect) who want to cede decision-making about their own consumption to service providers. There is an attractive convenience to this. Privacy advocates are typically not these people, and are concerned not just for their own individual autonomy, but are also often motivated by broader societal concerns like those discussed by Pariser (obviously a hot topic right now w.r.t. Trump and Putin), as well as less political aspects of selective exposure theory around societal trends.
The main problem I see is that corporations cannot be trusted to limit their use of personal data to benign purposes, nor can they be trusted to keep that data safe from people who will abuse it. But there's certainly a significant difference between potentially leaking or abusing data and actively selling it.
How do you define benign? Let me give you a real example.
A woman got pregnant and probably did some web searches related to her situation. Then something bad happened and the pregnancy was lost, but the woman continued to get baby-related ads for months (or even longer).
There is no button you can click that makes all the ad networks clear your history; your data is stored forever and sold or traded.
Both require an auth consent screen with permissions listed, where it may or may not be clear to the user what's being shared.
That's how Google leaks information.
It’s not the highest bidder but it’s still a problematic consequence of concentrating so much data in one place.
I read a lot of foreign websites, and the built-in translate feature (which you can request from the right-click menu or from the toolbar) is a life-saving feature. Like, literally: I've been travelling, and Chrome's built-in ability to translate helped in a medical emergency.
If it is based on analysis done by the local machine, no problem. However, if it is based on analysis done by google servers, big problem!
The html tag has a "lang" attribute, and the server itself can send a Content-Language HTTP header. Most CMSes these days set one or both once multi-lingual support is enabled.
Additionally, the browser can use the OS's or its own spellcheck word database: check every word against every dictionary, and the dictionary with the most matches is likely to be the relevant one.
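A minimal sketch of that dictionary-vote idea, assuming Python and plain word lists loaded into sets (the word lists and sample text below are made up; a browser would use its bundled spellcheck dictionaries):

    import re

    # Guess the language by counting how many of the page's words appear
    # in each language's word list; the list with the most hits wins.
    def guess_language(text, dictionaries):
        words = re.findall(r"[a-z']+", text.lower())
        scores = {lang: sum(1 for w in words if w in wordlist)
                  for lang, wordlist in dictionaries.items()}
        return max(scores, key=scores.get) if scores else None

    dictionaries = {
        "en": {"the", "and", "is", "of", "to"},
        "de": {"der", "und", "ist", "von", "zu"},
    }
    print(guess_language("Der Hund ist von der Strasse und ...", dictionaries))  # -> de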
Every word seems excessive, especially if a page has an excessive amount of text on it.
But does it send the page's content to some Google server only if you agree? The point here is that it seems that the content is sent over to Google no matter what.
Don't feed the beast.
Although, I have looked at some of the other forks. What I find more depressing is how few up-to-date browser engines exist. It's a sign that web standards are getting too complicated. We're already going to have a third version of HTTP as well... Both HTTP/2 and the potential HTTP/3 are based on work from Google. Those protocols are a lot more complicated than HTTP/1.x, so it's much harder for a small group to implement them. And that's just the protocol layer, let alone JS, HTML, CSS, and all the other little things. It's like the big companies keep bloating the standards. The result is that the browser is probably one of the more complicated pieces of software we regularly use.
Whatever happened to "KISS"?
The membership of the W3C supported XHTML, to improve interoperability among other reasons. Apple, Mozilla and Opera had a different vision and broke away and formed the WHATWG which Google and Microsoft later joined. Those companies (minus Opera) now have near total control over HTML and the W3C just rubber stamps whatever they decide.
(Note: I don't believe the participants in WHATWG were doing what they did for anti-competitive purposes, but in hindsight it had that effect.)
WHATWG's specification process is broken, and has been for a long time: it puts the world's main communication medium into the hands of browser vendors with an interest in eliminating competition and defining entirely new Turing-complete runtimes (WASM), and of advertisers who turn around and create competing mechanisms (AMP), and then it never actually delivers a standard (the "living standard" nonsense).
There are already quite a number of HTTP/3 implementations from non-Google companies and projects. Cloudflare seem to be big backers of HTTP/3. There were some other articles today that are generally positive on the HTTP/3 approach: one was from Tim Bray at AWS and the other from @ErrataRob.
Also, I have read about QUIC, and there are some things about it that are interesting. However, there are also things that I don't like.
Moreover, this was something I read from IETF mail archive: "That QUIC isn't yet proven. That's true, but the name won't be formalised or used on the wire until the RFC is published, so we have a good amount of time to back away. Even then, if it fails in the market, we can always skip to HTTP/4 one day, if we need to."
I find that pretty concerning. If it does not pan out we can just skip over it. That's still something someone has to implement even if it's not used much. I would only consider things that people in general are eager to use, not just a few big companies.
Not true. This is why ALPN and the Upgrade header exist. You do not need to implement any of the new protocols, and you can certainly skip a version if you don't think it's worth the effort.
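From a client's point of view the negotiation is tiny. A sketch using Python's ssl module; the host is just an example, and a client that never implemented h2 would simply leave it out of the offered list:

    import socket, ssl

    host = "example.com"                        # any HTTPS server, just an example
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])  # offer HTTP/2, fall back to HTTP/1.1

    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            # Whatever the server picked, or None if it doesn't do ALPN at all.
            print(tls.selected_alpn_protocol())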
Having the second-largest traffic analyzer on board would seem like more of a cautionary negative than a positive to me.
If anything, smaller sites have more to gain from HTTP/2 and HTTP/3 than the likes of Google. For example:
- Both HTTP/2 and HTTP/3 seek to reduce the number of round trips, mitigating latency between the user and the server. Now, from Google's perspective, the "server" is the nearest load balancer in a globally distributed network, which is probably geographically close to wherever the user is. Thus, users with good Internet connections typically have low enough latency for the improvements not to matter much. But Google still cares about latency because of users with poor internet connections – such as anyone on a cell network in a spotty coverage area. Well, poor connections affect all sites equally. But small sites tend to not be fully distributed; they probably only have a single origin server for application logic, and perhaps a single server period, if they're not using a CDN. That means a fixed geographic location, which will have higher latency to users farther away even if they have a good connection – thus more benefit from latency mitigation.
- QUIC can send stream data in the first packet sent to the server, without having to go through a SYN/ACK handshake first. TCP Fast Open lets plain old TCP do the same thing – but only when connecting to a server you've seen in the recent past (and retrieved an authentication tag from). Thus, QUIC is faster when connecting to a server for the first time – which affects smaller sites a lot more than Google.
End users complain all the time about latency. And that includes the latency to your small website hosted on a single server hundreds of milliseconds from your visitor... certainly more than it includes google's websites.
What you really mean is that small website operators generally don't care that their visitors are irritated by how slow their website is... and just brush it off and ignore it because they have no solution to the problem.
Maybe you should consider h2 as being for the benefit of visitors across the internet, and a benefit for those who care about performance.
It says it all that even though h2 is not required, small websites have adopted it across the globe... now at a third of all websites, and growing.
a. It's really cheap for us to offer that service
b. Lots of those free customers end up upgrading, paying for extras, etc.
Between a and b offering the free service makes sense. We make money from the customers who pay us for our service (https://www.cloudflare.com/plans/), not from doing something nefarious with data. We'd be shooting ourselves in the foot if we did because that data is our customers data. We need to be very careful with that or we'd lose trust and not be in business.
Also, free means anybody can try the service and kick the tires. Often those people turn out to be the CIO, CSO, CISO, CTO, ... of a big corp.
> Until November 2018, Firefox was the last widely used browser not to use a browser sandbox to isolate Web content in each tab from each other and from the rest of the system.
What about Safari? IMHO it has strong sandboxing. Another interesting thing I found is whether cookie access is shared between private tabs: Safari does not share it, Chrome does.
Why does a small group need to reimplement HTTP/2 and HTTP/3? It's important that we have more than 1 or 2 implementations, but we don't need more than a small handful, and we definitely don't need every independent group reimplementing them. We just need enough that anyone who needs it has access to an implementation that's usable for them, whether it's bundled with the OS (such as Apple's Foundation framework including a network stack that supports HTTP/2), or available as a library (such as Hyper for Rust, or I assume libcurl has HTTP/2 support).
We basically did that with TLS. Which went fine - until people realized that one of the major go-to implementations of TLS contained years-old unfixed bugs that could be remotely exploited.
Nor do I think that a more diverse world of TLS implementations would've led to better auditing of OpenSSL. We had barely enough eyeballs to audit OpenSSL, let alone to audit more stuff.
The issue with OpenSSL was that the protocol was sufficiently complicated and sufficiently critical that people just picked the available option. Perhaps those who did look into the code they were running concluded it was bad, but weren't willing to create a new library.
Besides, any new library would have the stigma of 'they are using a non-standard and new crypto library'.
In that case, the solution would've been louder complaints about the code quality of OpenSSL.
For example, it's pretty easy to write an HTTP/1.0 implementation, but it's also easy to open yourself up to DoS attacks if you do so. If you're writing a server, did you remember to put a limit on how large a request body can be before you shut down the request? Great! Did you remember to do that for the headers too? Limiting request bodies is an obvious thing to do. Limiting the size of headers, not so much. But maybe you thought of that anyway. What about dealing with clients that open lots of connections and veeery sloowly feed chunks of a request? The sockets are still active, but the connections are so slow you can easily exhaust all your resources just tracking sockets (or even run out of file descriptors). And this is just plain HTTP, without even considering interacting with TLS.
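A minimal sketch of those two limits (max header size, slow clients), assuming a raw-socket HTTP/1.0 server in Python; the port and limit values are arbitrary examples:

    import socket

    MAX_HEADER_BYTES = 8192   # arbitrary cap on request line + headers
    READ_TIMEOUT_S = 5        # drop clients that dribble bytes too slowly

    def read_request_head(conn):
        # Read until the blank line ending the headers, or give up.
        conn.settimeout(READ_TIMEOUT_S)
        buf = b""
        while b"\r\n\r\n" not in buf:
            if len(buf) > MAX_HEADER_BYTES:
                raise ValueError("headers too large")
            chunk = conn.recv(1024)            # raises socket.timeout if too slow
            if not chunk:
                raise ValueError("client closed early")
            buf += chunk
        return buf

    srv = socket.create_server(("", 8080))     # example port, Python 3.8+
    while True:
        conn, _addr = srv.accept()
        try:
            read_request_head(conn)
            conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
        except (ValueError, OSError):          # bad request, timeout, broken pipe
            pass
        finally:
            conn.close()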
Is there precedent for standards significantly simplifying over time, or do they always tend to get more and more complex?
HTML5 rather than XHTML, Markdown vs. HTML or LaTeX, HTML, originally, vs. SGML or Sun's ... proprietary hypertext system (Vue?).
Arguably, replacement of much office suite software with Web technologies.
Multics -> Unix.
HTML was originally contemplated as more than a method of rich text formatting. It was created as a way to describe and link arbitrary media and applications. I'd recommend reading the first published proposal for (what later became known as) the World Wide Web, written by Tim Berners-Lee. In my reading, I see it as intending applications as powerful as the kind we build today - at least as far as could be contemplated and described in 1989, and given the degree of abstraction with which the document was written:
> "Hypertext" is a term coined in the 1950s by Ted Nelson [...], which has become popular for these systems, although it is used to embrace two different ideas. One idea is the concept: "Hypertext": Human-readable information linked together in an unconstrained way. The other idea [...], is of multimedia documents which include graphics, speech and video. I will not discuss this latter aspect further here, although I will use the word "Hypermedia" to indicate that one is not bound to text.
An example of anticipated usage:
> The data to which a link (or a hot spot) refers may be very static, or it may be temporary. In many cases at CERN information about the state of systems is changing all the time. Hypertext allows documents to be linked into "live" data so that every time the link is followed, the information is retrieved. If one sacrifices portability, it is possible so make following a link fire up a special application, so that diagnostic programs, for example, could be linked directly into the maintenance guide.
Another category of use-case was web crawling, link-based document search, and other data analysis.
These and other anticipated use-cases envision more than text formatting; the primary purposes of the proposal were, in my opinion, the inter-linking of information and the formal modeling of information, especially for the purpose of combining different programs or facilities into a single user experience.
A good majority of the search results I am looking for should be simple, single-page HTML documents that don't use the complex HTML5 features needed for web apps.
Of course this would have required browser vendors to support two languages at the same time for a sufficiently long transition period, which was apparently too much to demand.
It's the sites that didn't adopt XHTML. Everybody on the infrastructure side loved it.
That's specifically why and how new standards appear. They accomplish most (though not all) of the earlier capabilities, with a massive reduction in complexity. It's a form of risk mitigation and debt reduction.
Compare browsers generally: Netscape -> MSIE -> Mozilla -> Firefox -> Chrome -> Firefox. Each predecessor reached a point of complexity at which, even with massive infusions of IPO, software-monopoly, or advertising-monopoly cash, it was unsustainable.
The old, dedicated dependencies (frames, ActiveX, RealPlayer, Flash, ...) broke. Simpler designs continued to function.
But then we need to make another app + browser version? Which defeats the purpose...
For example, the latest two HTTP protocols are both based on tech that Google had already made. However, the IETF is like, "that sounds good". It has its advantages, but there is very little pushback saying, "well, that makes things more complicated".
For instance, HTTP/2 has support for pushing files to the client. Most back-end web stacks are still trying to think of good ways to make that easy to use, mainly because which files to send depends on what the page contains. So either you have to specify a custom list, or the web server now needs to understand HTML to get a list of required resources. This also gets more complicated because a push is useless if the resource is already cached, which means your web server has to have some kind of awareness of how clients will cache data. Again, this starts to mean your web server needs more knowledge about the client.
This does not even take into account how the browser should handle these things.
Additionally, while cryptography is a good thing, the standard for HTTP/2 does not require it. However, pretty much all the browsers ignore the fact that unencrypted HTTP/2 is allowed, so if you want to run HTTP/2 without TLS, the browsers act as if the site does not exist. This gets into the problem that since there are so few browsers, they can basically create de facto standards. So even if you go to the effort of following the standard, what you encounter in practice may not follow it at all.
IIRC MS/IE wanted to implement it, but they backed off because of these issues
Asking browsers to implement h2c is asking them to make their browsers flakier... their users would see a higher connection error rate... which the user WOULD attribute to their browser, especially if they open the same URL in another browser without h2c and it works.
Using the upgrade header instead of alpn is slower anyway.
Huh? Parsing HTML5 is much more complicated than XHTML, and everything else is about the same.
Because parsing invalid XHTML, which all browsers ended up doing, is more complicated than parsing HTML5...
> Because parsing invalid XHTML, which all browsers ended up doing, is more complicated than parsing HTML5...
I don't understand what you mean. Isn't the non-strict parser for XHTML just the normal HTML parser? The complication levels should be equal.
In the face of arbitrary user-content, like comments? Are you checking they don't include a U+FFFF byte sequence in there? (Ten years ago almost none of the biggest XHTML advocates had websites that would keep outputting well-formed XML in the face of a malicious user, sometimes bringing their whole site down.)
It's absolutely possible to write a toolchain that ensures this, just essentially nobody does.
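As a minimal sketch of the kind of filter such a toolchain needs, assuming Python and the XML 1.0 character rules (the sample comment text is made up):

    import re

    # Characters allowed by XML 1.0: tab, LF, CR, and the ranges below.
    # Anything else (including U+FFFF) must be stripped, or the output
    # stops being well-formed XML.
    _XML_INVALID = re.compile(
        "[^\u0009\u000A\u000D\u0020-\uD7FF\uE000-\uFFFD\U00010000-\U0010FFFF]")

    def xml_safe(text):
        # Strip characters that may not appear in an XML document.
        return _XML_INVALID.sub("", text)

    comment = "nice post \uffff <3"   # hypothetical user comment containing U+FFFF
    print(xml_safe(comment))          # -> "nice post  <3"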
> Isn't the non-strict parser for XHTML just the normal HTML parser?
Yes. It's literally the same parser; browsers fork simply based on the Content-Type (text/html v. application/xhtml+xml), with no regard for the content.
The bigger problem with XML parsers is handling DOCTYPEs (and even if you don't handle external entities, you still have the internal ones), and DOCTYPEs really make XML parsers as complex as HTML ones. Sure, an XML parser without DOCTYPE support is simpler than an HTML parser, but then you aren't parsing XML.
Anything more would be paraphrasing http://www.webdevout.net/articles/beware-of-xhtml
Quantum is still not fast enough on many pages I use. I bet most devs do not test on Firefox anymore, and I've found FF unusable unless you use a 4-core machine; otherwise you get many random pauses here and there.
So my choice is Chrome or Safari. Safari is not customizable enough for me, so Chrome it is.
Most Google sites are faster in Chrome than in Firefox (big surprise /s), but most everything else is the same or slower. I thought Chrome was supposed to be fast; it feels like a turd.
I have a Yoga 2 (4 years old) and my laptop fan revs up like a harrier jump jet whenever I load Chrome. Firefox only manages to make it purr loudly.
I personally use Chrome because it is secure and fast (and the debugger works far better than Firefox's, Safari's or Edge's). I personally don't use Apple because I don't want to spend x% of my disposable income on iDevices per year, when I can spend 0.x% on Android devices per year. I distrust Microsoft (their security is suspect and their implementations suck: I use outlook for work and the UI is super buggy - I notice unique flaws regularly and have to live with some bugs every day. Like email notifications stopped working the other day - just unbelievable shit). I would love to not use Google, but for the compromises I need to make, it remains the best choice by far for me. Edit: fixed # flaws.
I don't do webdev, so I can't really comment about that. I agree wrt edge being terrible and not being willing to pay Apple prices.
Anyway, I switched to Firefox on my computers and my mobile. I use that VPN to try to limit Google's tracking of me, and I use DuckDuckGo for the same reason.
Long story short, I just switched back to Chrome on my Android, because Firefox has kind of stopped working. I used to be able to keep 100 tabs open; now I can't even keep 1 open in the background. When I go back it just forgets what it was and won't refresh. I click refresh and it shows it's refreshing, but then nothing happens.
There's nothing I can do. I'm reading something, or I see a great article and open it right away in a FF tab for later. I go back and it doesn't load. Then I can't reset/restart it, because it won't die, and then it stops syncing, etc.
It's really really bad. Sadly this wasn't the case when I decided to switch about 8 months back, this is only in the last 60 days.
I'll keep using Firefox for now on my desktop, but honestly I really rely on profiles and sync across profiles, which is a pain to get around on FF as it is, but now it's a big burden I can't really see my way around.
Too bad, but honestly I need a reliable tool more than I need privacy at this stage.
* Uses 30-40% CPU constantly on my Ubuntu laptop, causing the entire system to freeze.
* Slow on JS-heavy apps like JIRA, Gmail, Google docs.
* Firefox Android randomly decides to stop loading web pages, requiring a force quit and restart.
* Firefox Android bugged out while writing this comment, the text I typed would appear at a specific location, regardless of where I put the cursor. This and various other HTML input bugs require me to restart the browser again and again.
Also, Chrome might be more secure from a vulnerability point of view, but browser exploits (exploit kits) are not a very common means of deploying malware these days; they tend to focus on IE and Flash: https://blog.malwarebytes.com/threat-analysis/2018/03/exploi...
If you install some 3rd-party app and give that app permission to access your data, they'll give the app the access you asked them to give, but otherwise there's no sharing AFAIK.
It's easy to be vocal about principles, but when it comes down to it, very few people are actually willing to impact their own comfort or convenience to truly follow them. It's simpler to just come up with a reasonable-seeming justification for why you're not really supporting things you claim to be opposed to.
You're trying to paint people as hypocrites, where a simpler explanation is that maybe most users, even here on HN, are not as concerned with the problem as you are. Vocal minority and all that.
*method of survival.
You can "stand up for your principles", or you can not be an ideologue, survive, and live to fight another day, making progress and positive change along the way. A full-stop boycott stops nothing. Changing things from within is the most effective approach. Instead of posting shame-inducing posts like this, labeling people and assuming the worst, try assuming the best and encourage them to take actions that increase privacy and security. I work in that field, and when my own principles are violated, I speak out. I guarantee that changes more than people shaming others on social media does. Advertising and data collection have about as much chance of stopping as world governments have of agreeing to stop producing bullets, so let's try to make it as ethical as we can.
The amount of blind trust that people - including very technical people - gift to Google is rather shocking.
There's also a process for Asynchronous Panning and Zooming (APZ), but that probably doesn't help much with security.
People want apps. Not browsers.
1) Firefox is SLOW. I have ~400 tabs open on a MacBook right now in Chrome; Firefox snails around at 30-40 tabs.
2) Firefox dev tools sucked for a long time, compared to Chrome's. Same goes for Safari's dev tools - and don't get me started on the clusterf..k called Internet Explorer... that's why devs drove off to Chrome in the first place and stayed there.
Also the Developer Tools of Firefox are worthless... and not only because of how slow they are.
That's odd, because I think the Chrome dev tools are junk compared to Firefox. And I've never had an operation in Firefox dev tools that wasn't instantaneous. Perhaps our use cases are markedly different.
AFAICT Google doesn't share any info with 3rd parties unless you sign up with some 3rd party and ask them to share the info. I've never used my Google account to sign up with any 3rd parties.
As for Chrome collecting my history: (a) I want that, since I want to be able to search my history across devices. For the rare cases where I don't, I use an Incognito window. (b) You can opt out of having Google use your history for ad targeting: https://adssettings.google.com/authenticated
Note that ad targeting does not in any way suggest Google is sharing data. In fact it's in their best interest not to share data. If they share the data then other companies can use that data themselves. If they don't share data then other companies have to go through Google for targeted ads.
If the user in question did not give specific permission for Google to steal their ProtonMail email/s and send them back to Google servers, then that should be a crime. It should be a felony, just as it would be if a Google employee opened or obtained my physical mail without permission, scanned it in some manner, and took it back to their offices.
presumably she had "automatically translate" on...
I've really tried to use Firefox... Chrome just runs so much smoother, especially for media.
A similar thing would happen to anyone with an email account set up to forward all emails to a public mailing list or something of that nature.
I just don't understand why people go "no way" over this kind of thing - it's Google, for F's sake.
2. If you don't want to be public, don't use Google! Keep uBlock Origin & uMatrix in your web browser always turned ON, or use Links as your default browser!
As for me, I want to manage my 'own' YouTube channel (spoiler!), but I will never use my 'own' Gmail or other Google services for serious things, neither on my home PC nor on my Android mobile.
P.S.: How many of you have a LinkedIn profile? ;-)
<meta name="google" content="notranslate">
Go to https://myactivity.google.com/myactivity and you will see all the things they track. It's bizarre.
The one that pisses me off the most is that they track the apps that I open by binary name and I own an iPhone and use Safari. I don't even know how the fuck they do that.
I don't use Chrome, and don't use Google. I'm pretty intimately aware of the point. ProtonMail can still do their part. ;P
It seems they log app usage for apps that have some sort of Google SDK installed or are serving Google AdSense. Definitely not all the installed apps, but several.
Even some dedicated translation apps that you install on your desktop actually upload everything to a 3rd-party server for translation. I would love a list of local-only translation software that comes close to being as effective as the various online options... or even online ones with a good data policy.
This is how it has always worked, and it's the number one reason I'm avoiding Google Chrome.
So any Google software serves this goal - fishing for as much user data as possible. That includes Chrome, Android, Gmail, Google Maps on iOS, Gmail on iOS, Google Analytics scripts on websites, Google DNS, and any other software written by Google.