Hacker News: Apple adds a tracker blocker to desktop Safari (techcrunch.com)
517 points by Allvitende on June 5, 2017 | 286 comments

Sounds like this is more in line with what they did with Apple Pay vs. traditional credit cards -- i.e., they give you randomized IDs each time so the other party can't track you from transaction to transaction. Ads can still appear, but they won't know who you are, so it's a direct shot at Google and others looking to give people "targeted" ads based on user behavior. I agree it's an issue that needs to be addressed. Just because I searched for X two days ago doesn't mean I want to see ads for X for the next two months.

Apple Pay does offer enhanced privacy by not transmitting your name along with your card number, but it doesn't randomize the number with every transaction. In fact, you may not want that, as it would disrupt email receipt systems and loyalty programs (absent some parallel mechanism).

Apple Pay does something called tokenization, and the goal is more fraud protection than privacy. It generates one new number at card enrollment and uses that exclusively. Because that account number can only be issued by Apple Pay devices, it doesn't matter if someone hacks the merchant and steals your number: they can't use it without the associated Apple Pay-generated cryptogram, which is secured by your PIN or fingerprint.
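
To make the token-plus-cryptogram idea concrete, here is a conceptual sketch in Python. This is not the real protocol (EMV payment tokenization runs in the Secure Element and the card network's token vault, not in app code); the function names, key handling, and token format are all illustrative.

```python
import hashlib
import hmac
import secrets

def enroll(card_number: str) -> dict:
    """At enrollment, the real card number is replaced by a device account
    number (token), issued once, plus a per-device secret key."""
    token = "4" + "".join(str(secrets.randbelow(10)) for _ in range(15))
    return {"token": token, "key": secrets.token_bytes(32)}

def pay(device: dict, amount_cents: int) -> dict:
    """Each payment reuses the same token but carries a fresh, single-use
    cryptogram, so a stolen token alone is useless to a fraudster."""
    nonce = secrets.token_bytes(16)
    msg = device["token"].encode() + amount_cents.to_bytes(8, "big") + nonce
    cryptogram = hmac.new(device["key"], msg, hashlib.sha256).hexdigest()
    return {"token": device["token"], "cryptogram": cryptogram}
```

Calling pay twice on the same device yields the same token but different cryptograms, which is exactly the property described above: the merchant never sees the real card number, and replaying a stolen token fails without the device's key.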

Honestly the enhanced security of Apple Pay is underhyped. It's really great.

I think I just inched toward using Apple Pay. I've been very skeptical until now.

You should read the portion of the iOS Security Guide that explains Apple Pay; the security benefits and the amount of thought that has gone into the system are pretty amazing.


Sounds to me like they basically brought chip-and-PIN online.

Maybe it is magical in the American sense, but I can't say I get the big whoop from the European side of the Atlantic...

Apple Pay is a generation ahead of chip-and-PIN. It obviates the need to enter a PIN, which is both more convenient and immune to attacks on PIN-entry terminals, attacks that have defeated chip-and-PIN in the past.[1]

[1] https://en.m.wikipedia.org/wiki/EMV

I've noticed that some merchants still require a signature no matter how much you spend, which I don't really understand. Others, such as Kroger, require a signature if it's over $50 but don't take Apple Pay.

What about contactless cards? I personally find contactless cards, common in the UK, much more convenient than having to faff about with my phone, which may be out of battery.

I find the opposite. My phone is just in my pocket; I pull it out, tap the fingerprint scanner, and I'm ready. My card is tucked away in my wallet and takes a bit longer.

Admittedly being out of battery is a hard problem, but I chose my phone partly because it has decent battery life.

Two reasons Apple Pay is better than contactless cards. One is security. For purchases over a certain amount, cards make you enter a PIN. This is inconvenient and also susceptible to attack. Apple Pay secures every single transaction with your PIN, made convenient by Touch ID.

Two, if "faffing about" is your concern (thank you for that expression, by the way; I'm going to start using it), pulling out a card is not really much different from pulling out a phone. But the Apple Watch supports Apple Pay, and that way you don't have to pull out anything. It's really convenient.

If you're wondering how it works securely with the watch, it's pretty smart. You unlock the watch with a PIN when you put it on. It senses when you've removed it from your wrist, so it just stays unlocked until that point. Therefore all Apple Pay purchases are PIN authorized without having to prompt you for it or a fingerprint. All you have to do is wave your wrist by the terminal and confirm.

Also, Apple Pay transactions are always authorized online, while contactless card transactions under a certain amount often skip online authorization as a speed hack.

I am in Australia and we have had contactless payments for years. Apple Pay with an iPhone was more trouble than using a card. On the other hand since I got my Apple Watch I haven't carried credit cards with me. It started out as a 1 month experiment to see if I could survive and it has never been a problem.

Phone-based payments seem to have really failed to gain momentum here in Australia too. We've had contactless cards for a few years now and they've become the norm; using a phone seems to offer marginal benefit over that.

My understanding is that contactless cards have a limit of £30 per transaction in the UK. Apple Pay (perhaps it's the same with Android Pay?) has a significantly higher limit.

It's €25 in Austria. But for higher amounts you can still just hover the card above the terminal and then enter your PIN instead of sticking the card in (at least at some stores).

Interesting. In most stores in Holland my experience is that over a certain amount, or after a certain number of transactions, I get a 'pin required' message. I then have to wait for the cashier to do some voodoo, only then insert my card (before that causes issues), and enter my PIN. This is the case pretty much everywhere.

I love contactless payment, and this occasional complication doesn't cause enough trouble to put me off, but it's still really annoying because I'd expect exactly what you describe to be the case.

Note regarding Apple Pay: due to the way credit card networks implement network tokenization, online merchants can actually track you across multiple transactions. You get a per-device ID, of which you can have at most 10 per card (for Visa). Ideally, unique payment tokens would be derived from these IDs for each transaction. In reality, the ID is sent along with a random cryptogram for each transaction, leaving tracking possible (on the same device only). This is because these tokens have to be in the same format as card numbers, and the ranges available to issuers are rather limited.

I've had that problem before as well. My wife was going to Mexico and looking for a new swimsuit, and so I was hitting up the SwimCo website. Cue three months of women's swimwear ads from SwimCo – and nothing else. Almost every single ad on every single page was the same ad in different shapes, all of them for SwimCo.

Recently, too, I've noticed that Amazon is putting ads in my Instagram feed for specifically the things I've looked at on Amazon within the last day or two. I'll literally click a link to a book or do a search, and then four hours later it'll show up in an ad in the app.

Aside from being kind of pointless (I already know about these items, why are you showing them to me later that day?), it's also all kinds of creepy and unsettling to see Amazon advertising six things I've seen recently, and not even on the same device I was viewing them on.

What's worse is when colleagues at work can look over your shoulder and see everything you're considering buying. Luckily for me my purchasing habits are pretty mundane, but I can imagine this could get quite embarrassing for people shopping for riskier items.

Indeed. It was especially awkward after I searched for "Willy Wonka costume", which prompted Amazon to also show me the results for "Willy costume". Some of those items were then clearly visible in almost every Amazon ad and recommendation I received during the next couple of months.

That's actually not how Apple Pay works. You get a new randomized credit card number, but only once. Shops can still track you by checking for that number. You can verify this yourself by looking at your receipts when you pay with Apple Pay: each one features the same number (most receipts only show the last four digits, but they are always the same when you pay with Apple Pay).

Indeed, I observed this because our local grocer asks for your email address when checking out so it can send receipts there. After providing mine it never asked again.

It would be really cool if it generated new numbers each time and had an amount coded to that number. So when I wave my Apple Pay device over the reader it would display the amount on the device, I would approve, and then a number would be handed back that's only good for that amount.

Is it a randomized number per card per merchant, or just a randomized number per card?

It's a randomized number per original card. Every merchant sees the same number. According to some other poster, if you have several devices (like an iPhone and an Apple Watch), then you get a new number for each device.

So, not only can a single merchant track you, but all merchants can cross-reference the data they have about you and track your whereabouts, purchasing habits etc. They just don't know who you are anymore, because that information is not transmitted. Unless one merchant asks for your email or home address, and this merchant then adds that email to a shared database, at which point we're back to step 1 and the merchants know everything about you.

Or you just use any kind of loyalty card/account when making a payment with Apple Pay even once. :( I didn't realize it only randomized once, and I'm now disappointed in the way Apple marketed it.

It's per-device/per-card, so your Amex gets a different number on your iPhone, Watch, and MacBook, as will your Visa.

Nothing about the new PAN is truly randomized, by the way. It's a valid PAN pointing to a dedicated, valid BIN range, and it has a valid Luhn checksum.
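
The Luhn checksum mentioned here is easy to verify yourself; a minimal implementation:

```python
def luhn_valid(pan: str) -> bool:
    """Return True if the card number passes the Luhn checksum."""
    digits = [int(c) for c in pan if c.isdigit()]
    total = 0
    # From the right: double every second digit, subtracting 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# The well-known Visa test PAN passes; flipping its last digit fails.
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```

Any PAN Apple Pay hands to a merchant has to pass this same check, since the token travels through the exact card-number fields a regular card would.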

"Just because I searched for X two days ago"

But would you rather see ads for something you might be interested in (however tangentially) or something completely random? Personally I prefer the former, as long as there is some basic sanity filtering involved.

Thumbs up for Apple distinguishing themselves with their pro-privacy stance, as opposed to MS, who have nothing to gain from Win10's excessive "telemetry", IMHO.

I recently realised that any company taking on Google (e.g. Apple, Mozilla, ...) that is afraid it won't be able to compete in areas like machine learning or sheer size is realising, rightfully so, that being pro-privacy is the one thing it can compete on that Google will never be able to imitate. Pretty sweet, actually.

I think a big factor in AI at Apple is the fact that their stance on privacy makes it a bit harder to compete with the harvested data sets of Google. From what I understand, though, judging by the reactions to the papers they released a while back, they're not doing so poorly.

That being said, I've never used Cortana or Alexa, but I hear they're pretty decent compared to Siri.

Exactly. I think they figure that if they have to harvest data themselves and try to beat Google at its own game, they have a larger chance of losing than if they concede on quality by not harvesting data (as much), but try to offset that by being privacy-conscious. And yeah, perhaps they'll still be able to do a pretty decent job that might not put them at too much of a disadvantage compared to Google. You often see, I think, that with a few years' delay, many machine learning advancements can be reproduced offline.



> CTIA is the main lobbyist group representing mobile broadband providers such as AT&T, Verizon Wireless, T-Mobile USA, and Sprint.

It doesn't just represent them; Apple is also a member:


You do know that Apple makes mobile devices, right? So it makes sense that Apple is part of the group that supports hardware standards in the mobile device industry; it's not just a lobbying group.

You mean the Walkman, right? Why yes, I've heard of it, but thanks for double-checking.

Anyway, if they're pro-privacy, it would also make sense for them to raise hell when a group they're part of for some other reason argues that browsing data shouldn't be considered private, of all things.

They're a member of the group, not its owner. We don't know what they're doing internally, but they can't control it.

5 downvotes, huh; anyone able to explain how the above is compatible with a strong pro-privacy stance? That's like saying you're a vegetarian, except for Saturday noon.

Apple may be a member of CTIA, and CTIA may have argued for a particular anti-privacy stance, but that doesn't mean Apple supports that stance. There are a lot of members of CTIA, and CTIA presumably lobbies for a lot of things, not just ISP privacy rules.

"May be a member"? They are. And I tried looking for a statement by Apple on that issue and couldn't find one. So until I see one, I'll say the lobby group speaks for them.

Is the money the lobby group works with organized in such a way that Apple's money doesn't go toward this particular thing? If not, what does "not supporting that stance" even mean? That they outsourced their fight for it, so that they can have the nasty outcome and stay morally pure? That as long as there is some bend-over-backwards possibility of Apple being against it, they're against it?

> There are a lot of members of CTIA,

Apple is a giant, not just one member among hundreds. If they are quiet on this, it has their support. Or are you saying they might not even be aware? Just didn't find it important enough to scream bloody murder about it? No matter how you try to spin it, can you spin it into something really good?

> and CTIA presumably lobbies for a lot of things, not just ISP privacy rules.

Unless you mean to say they also have part of the budget for lobbying for the opposite of this effort, I can only ask "yes, and?".

You're going to great lengths to argue that the tech company with the best track record for privacy is secretly trying to violate your privacy. You're wrong.

I'm not going to great lengths to $yourstrawman, certainly not in kiddo phrasing like "is secretly trying to violate your privacy." I say what I say, in the words I use, and apparently none of you can argue with one bit of it, yet you downvote like mad regardless.

Were any of you even aware of this information? I looked at that list out of curiosity and was surprised to see Apple. I was less surprised to see others, but Apple did surprise me. Not as surprising as the pathetic reaction here so far, but still surprising. And I haven't taken Apple seriously since the one-button mice, you know? I still believed their "privacy in our walled garden" stance; it's not like that required flattering them.

> the tech company with the best track record for privacy

I simply never bought into the premise that I have to pick among the presented turds. At that level of size and desire to be a middleman just to be a middleman, they're all trash sadly, and if you think criticizing one means bolstering the others, that's your outlook, another premise I don't share.

Are you absolutist or absolutian?

I love how this comment of course stands. Keep staying classy.

It's also a real shame they're doing this before Mozilla. Mozilla already has Tracking Protection but only for Private Windows.

It's like Mozilla can't even embrace its privacy stance fully.

Like how their entire revenue stream comes from search deals, which all depend on advertising.

In fact, Mozilla is working on that: https://testpilot.firefox.com/experiments/tracking-protectio...

> We believe these additions will help us take the next step toward shipping Tracking Protection in Firefox beyond Private Browsing Mode. Look for that study in late 2017.

Not exactly the same, but there's Privacy Badger [1] from EFF that works on Firefox, Chrome and Opera. If you'd like to see a visualization of the tracking for your browsing habits, there's Lightbeam [2] from Mozilla. Both these have been around for a few years now.

[1]: https://www.eff.org/privacybadger

[2]: https://www.mozilla.org/en-US/lightbeam/

You can enable Tracking Protection in Firefox for normal browsing windows in the settings->privacy tab

Does not appear for me. It's only for the private windows.

Oh, I'm sorry. True, it doesn't appear for normal windows. Anyway, as another user suggested, visit "about:config" and change the privacy settings there.

What key?

I didn't realize it was only activated in private windows. Now that you've made me notice, I consider this misleading; the settings page says nothing about it being private-windows only.

It most certainly does say that this only applies to private windows: "Use Tracking Protection in Private Windows" [1]. If you want tracking protection always on, you can set privacy.trackingprotection.enabled to true in about:config or install the Disconnect extension.

[1] https://support.mozilla.org/en-US/kb/settings-privacy-browsi...
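
For example, the same pref can be made persistent by putting this line in a user.js file in your Firefox profile directory (equivalent to flipping it in about:config):

```
user_pref("privacy.trackingprotection.enabled", true);
```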

Apple has every reason to do so, as their revenue doesn't come from ads. But I guess Google also has the motivation to enable this in Chrome, to make other ad networks less effective (and Google ads more effective). If it happens, what does this mean for the internet advertising business?


Internet Explorer has had tracking protection since IE9.

They are completely different beasts. Internet Explorer merely offers the option to enable[1] "Do Not Track", which websites and advertisers are free to ignore[2], while Safari's new ad tracker blocker "uses machine learning to identify trackers, segregate the cross-site scripting data, put it away so now your privacy — your browsing history — is your own"[3].

[1] https://en.wikipedia.org/wiki/Do_Not_Track#Internet_Explorer...

[2] https://en.wikipedia.org/wiki/Do_Not_Track#Effectiveness

[3] https://techcrunch.com/2017/06/05/apple-adds-a-tracker-block...

It's also worth pointing out that Safari has had the 'Do Not Track' feature for years, and Twitter recently announced they are going to start ignoring it (a good example of how useless it is). So this new protection is very necessary and a great USP for Safari.

You are actually incorrect. Tracking Protection refers to an IE feature that lets you set "Tracking Protection Lists", which block traffic to specified domains and URLs. You can see a bit about them here: https://msdn.microsoft.com/en-us/library/hh273400(v=vs.85).a...

The whole "Do Not Track" default thing was, of course, a huge fiasco, as Google and others chose to ignore IE's default usage of it.

I don't know why this has been downvoted. Tracking Protection Lists are one of the best and unsung features of IE. People don't realize that they're different from Do Not Track.

Do you have to set it up manually? How is this better than downloading one of the privacy-protecting extensions available for most desktop browsers?

Arguably, the fact that you don't have to trust random third-party extension code is a perk. And since it works off a pretty straightforward text file format, it's easy to roll your own list or customize one as you wish.
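
For a sense of that format: a Tracking Protection List is a plain text file that, as I understand the MSDN docs, looks roughly like this (the first line is a required magic header; `-d` rules block a domain, `+d` rules allow one; the domains here are just examples):

```
msFilterList
: Expires=7
# block requests to these third-party domains
-d tracker.example.com
-d ads.example.net
# but always allow this one
+d cdn.example.org
```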

As I said in one of my comments, it's a bit janky to set up because you have to select one of the Tracking Protection Lists from their add-on gallery to turn it on, there's no default list pre-selected.

Going to my IE right now to activate it, I have to say this is a janky solution. It opens the Add-ons window, where you can see you have no Tracking Protection Lists. Then you can click to browse the add-on gallery for them, and then you have to scroll down and pick a list from a set of options.

While this is flexible and open, and that's all good, the lack of a common-sense default and the multi-step setup process are probably why like... even I am not using this right now.

If Apple does this by default, it's gonna make a huge dent in Google Analytics' numbers, whereas probably almost nobody uses the feature in IE.

If you're interested, I created and maintain a tracking protection list based on the Ghostery and Disconnect filter lists. It's concise, fast, and better than anything in the IE gallery. https://amtopel.github.io/tpl/

IE isn't really a browser I use heavily personally. Has Microsoft carried the feature forward to Edge, or are they relying on extensions from the Store for that?

They discontinued it in Edge, unfortunately.

But you can install extensions like Ghostery that do the same thing in Edge.

But you shouldn't have to install extensions to do this.

"it's gonna make a huge dent in Google Analytics' numbers".

Very unlikely, since almost nobody uses desktop Safari.

This feature will surely show up in iOS Safari, which would have a big impact.


It’s obviously not true that nobody uses Safari on Apple’s laptops and desktops.

Apple’s Mac sales have doubled since 2008, and the Mac is in the top 5 when it comes to PC shipments: http://www.asymco.com/2016/11/02/wherefore-art-thou-macintos...

Shipping ~30 million Macs a year certainly isn't almost nobody.

He isn't talking about the number of Macs sold, though. Market share of desktop Safari according to NetMarketShare [0] in May 2017 was 3.56%.

[0] : https://www.netmarketshare.com/browser-market-share.aspx?qpr...

Web advertising is such a cutthroat and low margin business that having 2–3% of all browser users become untrackable is a big deal.

Safari on iOS obviously has significantly higher market share on mobile and the intelligent tracker blocking is part of iOS 11 beta.

Yes, it will be a big deal when it comes to mobile.

Most sites don't obey the Do Not Track header. Edge, on the other hand, is much more nefarious: by default it sends the data from POST requests to Microsoft. I was surprised to find Bing sending data from people who use Edge on my site to try to improve their search results. There are so many security and privacy issues with this that it's not funny at all.

I've never heard that Edge sends absolutely all POST request data to Microsoft. Could you share an article or something substantiating this?

You're joking right? Factory reset your iPhone sometime and note all of the prompts to share and access your data. The only difference between MS and Apple is that Microsoft made the mistake of presenting everything all on one screen instead of breaking it up over 5 or 10 prompts as you use the device.

There's a huge difference between being asked if it's OK to share your data and just sharing it by default. Additionally, Apple doesn't offer any way for anyone but you to decrypt your data.

I think he’s referring to aggregated, anonymized usage data, which people can opt in or out of with no effect on function. (This is different from messages, etc., which are stored on Apple’s servers but end-to-end encrypted.)

I know he is, but that's not the same thing as what he's implying and, even then, Apple's solution is opt-in while Microsoft's was on by default.

If Apple's solution is opt-in, Microsoft's is just "in". For many things there simply is no opting out.

How can you say that when Windows 10 had Cortana turned on by default? Siri is not enabled by default on a Mac (just like all the other services); you have to purposely choose to turn it on during install.

I'm pretty sure I was asked to opt in to Cortana during install. It was during the setup phase, so it may be true of OEM installs as well.

What are you talking about? You can't turn it off.

Sure you can, during setup or any time afterward since day one. It's even easier in the Creator's update.

Messages are not currently stored on Apple servers unless you enable iCloud backup.

As of today they will be! And synced across devices.


(still encrypted)

So you've never actually installed Windows 10, then? Because from the beginning it's asked for permission to share your data for things like Cortana, the touch keyboard, ink, voice, etc. Based on user response they've evolved the interface and made it clearer, removing anti-patterns.

This is the same stuff Apple asks for permissions on.

Microsoft doesn't let you turn off telemetry entirely. Things like hardware configurations and installed drivers are still sent, albeit anonymously, so that Microsoft can better support the OS.

Again, Apple does something similar. Spotlight and Safari send data back to Apple even when you're making queries against other services (e.g. DuckDuckGo searches). And like Microsoft, there's no UI to disable it.

Yes, I have. I installed it 2 weeks before it was released to the public as part of the Windows Insider program and then again on several machines after release and, most recently, in the Creator's Update. Until there was a huge backlash against MS, all of those things were opt-out and turned on by default, including Cortana and the ink features. Apple does not turn any of these on by default, nor have they ever, and you have to opt-in to those features to use them.

Spotlight and Safari send anonymous data back that is parsed and separated so that it can't be used to identify the machine, user, or account that they came from. That's wildly different from the MS approach even after all the changes made on MS's end.

You're right that it's opt-out with Microsoft and opt-in with Apple. However, Microsoft's opt-out screen appears during setup, before you ever reach the desktop, whereas Apple's opt-in is a nag that occurs periodically as you use the device, or any time you install an update.

I understand that Apple has publicly disclosed how they anonymize data and roll identifiers, but Microsoft hasn't, so you really can't say whether telemetry data can be tied back to a user, because you don't know.

Apple's opt-in is only triggered when you attempt to use a feature that relies on a function of the opt-in or when a new OS feature utilizes those functions. It's not a nag. It doesn't ask you until you want to use it. Microsoft assumes you want to use it and hides the opt-out settings under an "Advanced" button during setup when it asks about new features and then promptly asks you again after it assumes that you don't know what you're doing. The "Are you sure?" prompts on Windows are far more egregious.

But that's basically Spotlight, so it's a distinction with little meaning, IMO.

You can turn off Spotlight, Safari, and Siri suggestions, and there should be no traffic to Apple after that.

Yes, Microsoft has a long history of providing user privacy and control settings and then ignoring or reverting them.

Absolutely and so does everyone else. Google is great at asking/nagging you to turn features back on that you've turned off in Android. And every time you update iOS, if you have Location Services or some other feature like iCloud turned off then you get nagged to turn them back on.

I thought I had read that there were ways to shut off all the MS telemetry via hidden settings or registry changes.

Cortana can also be disabled by quickly deleting or renaming a file somewhere, after killing a process. Specifics can be googled.

There are ways to disable all of the various data collection mechanisms but they aren't sanctioned by Microsoft and MS provides no user interface for them. There are several projects on Github that perform varying degrees of this.

The same goes for Apple's collection mechanisms.

Here's the official blog post explaining the feature in depth: https://webkit.org/blog/7675/intelligent-tracking-prevention...

This suggests that Google/Facebook/Twitter will still be able to track you, assuming you use their websites regularly, but advertising companies that don't have pages frequented by the average internet user won't.
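
As a rough sketch of the policy the WebKit post describes: once machine learning classifies a domain as a cross-site tracker, what happens to its cookies depends on how recently the user actually interacted with that domain as a first party. This is a deliberate simplification of the real logic, with the timings taken from the post:

```python
def itp_cookie_policy(days_since_user_interaction: float) -> str:
    """Simplified model of Intelligent Tracking Prevention's rules for a
    domain classified as having cross-site tracking ability."""
    if days_since_user_interaction <= 1:
        # Recent first-party interaction: cookies stay usable even in a
        # third-party context (so single sign-on keeps working).
        return "cookies allowed"
    elif days_since_user_interaction <= 30:
        # Stale interaction: cookies are partitioned per site, so they
        # can't correlate the user across different sites.
        return "cookies partitioned"
    else:
        # No interaction for 30 days: cookies and site data are purged.
        return "cookies purged"
```

This is why sites the user visits regularly (Google, Facebook, Twitter) keep their tracking ability, while pure ad-tech domains the user never deliberately visits lose theirs.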

This is going to be a pretty interesting case study in deploying ML in adversarial contexts :)

If I understand this correctly, the obvious counter-measure is for all links on example-recipes.com to go through example-tracker.com, which then immediately redirects to the original website with the linked-to content. Sort of like the weird link URLs in Google's SERPs.
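
That counter-measure amounts to rewriting every outbound link so it bounces through the tracker's domain, which logs the click and then redirects to the real destination. A minimal sketch of the link rewriting (the tracker endpoint and parameter names are hypothetical):

```python
from urllib.parse import urlencode

TRACKER = "https://example-tracker.com/r"  # hypothetical redirect endpoint

def rewrite_link(dest_url: str, user_id: str) -> str:
    """Rewrite an outbound link to bounce through the tracker, which would
    log the (user_id, dest_url) pair and then 302 to the real destination."""
    return TRACKER + "?" + urlencode({"to": dest_url, "uid": user_id})
```

Whether such redirect bounces count as "user interaction" for the classifier is exactly the kind of cat-and-mouse question this opens up.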

This is great, but unfortunately, until Apple ups its browser security game, Safari is a non-starter. On macOS, switching from any other browser to Chrome is in the top 3 things you can do to materially improve your security in ways that actually matter in the real world.

Just to add some context: on macOS you can look at the seatbelt policy as a rough analog for basic sandboxing guarantees, where the fewer exceptions you have, the stronger your sandbox is. From that perspective, Chrome's policy has around 1/10th the exceptions of Safari's.

* Safari SB policy: https://trac.webkit.org/browser/webkit/trunk/Source/WebKit2/...

* Chrome SB policy: https://cs.chromium.org/chromium/src/content/renderer/render...
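
For readers who haven't looked at one: macOS seatbelt policies are written in a small Scheme-like language, deny-by-default, with each rule carving out an exception. A toy profile (the path and service name are purely illustrative) looks like:

```
(version 1)
(deny default)  ; nothing is allowed unless explicitly listed below
(allow file-read* (subpath "/System/Library/Fonts"))
(allow mach-lookup (global-name "com.apple.example-service"))
```

Every `allow` line is an exception of the kind being counted in the comparison above.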

And of course, that's before we get into more complex forms of isolation that Chrome implements, such as the sandboxed GPU process, or ongoing work into things like network sandboxing, the macOS bootstrap sandbox, and site isolation (origin-bound renderer sandboxing).

For anyone following, this is Justin Schuh of the Chrome security team (and co-author of TAOSSA, probably still the best book in all of software security).

Another thing Chrome does out of the box that Safari doesn't is U2F.

Still another is Chrome's industry-leading TLS management, including the pioneering of HPKP and the Chrome/Firefox pin list, and the aggressive policing of the WebPKI CAs.

I've been pretty aggressively terse in this thread, because I didn't even realize this was a live argument anymore. Safari is simply not as secure as Chrome, and it's less secure in ways that are meaningful to normal users.

Again: iOS, different story.

The question is a bit more complex than a simple reading of these files. macOS sandboxing allows dynamic extension of the sandbox, which would not be reflected in the profile (I'd bet Safari does more of this than Blink, though). Also, as you mentioned, it's relevant to look at what's factored into separate processes, and how those processes are sandboxed. Safari's network process has been sandboxed since 2013, so I don't think you can count Chrome's ongoing work to do the same as a Chrome advantage.

If you add these things up, the difference in practical effectiveness is not as wide as one might think.

I don't keep up too much on Safari these days, so congrats on moving the network stack out of the content process. But looking at the current WebProcess seat-belt policy and what gets initialized, it looks like there's still far too much attack surface relative to Chrome. Things like audio/video capture and other permissioned Web APIs appear to be permitted directly inside the sandbox. And the GPU attack surface alone is a giant vector for escape--plus all the other potential escape vectors posed by that very long list of mach services.

So yeah, the seat belt policies alone aren't determinative, which is why I called them "a rough analog". And it's hard to say what gets pulled in through warmup (which is why we'll be eliminating it with our v2 bootstrap sandbox). Accepting that, it's pretty clear that there's just dramatically less attack surface exposed from inside Chrome's sandbox versus Safari's.

The network stack has been out of the content process for a super long time, it is not a new thing. (Ironically, Chrome engineers argued strenuously against doing it when we first started).

You're right that separate GPU process is a huge advantage for Chrome. Kudos on that, and we'll likely have to move in the same direction sooner or later.

Audio/video capture is temporary and not in currently shipping Safari. It was just the simplest path to getting WebRTC up and running. We plan to fix it before we ship. I agree with you that it's risky attack surface.

Also agree with you that we expose more Mach services, and for lots of them it would be better not to. A tradeoff here is that Chrome (as I understand it) provides most of those facilities via brokers that are often not sandboxed themselves. It used to be that many of those things were just done by the application process.

I suspect we'll see our respective sandbox models become more similar over time, especially on macOS.

> The network stack has been out of the content process for a super long time, it is not a new thing.

FWIW, Chrome's network stack doesn't live in the content process either. It's not currently sandboxed, but it's in a process that has no scripting runtime or other dynamic content, so it's still pretty high bar for exploit. The exact reasons for the current situation have to do with some legacy Windows support that has since been removed, which is why the sandboxing work is now moving forward. So, I definitely appreciate your situation with adding some sandbox exceptions for WebRTC.

> I suspect we'll see our respective sandbox models become more similar over time, especially on macOS.

Fair. But I will say that Chrome being cross-platform tends to naturally push us in the direction of eliminating sandbox attack surface. Our supported platforms just differ so much that it's easiest to lock down the OS as much as possible and implement narrower, origin-bound capability brokers inside Chrome. If I were more tightly bound to a given OS implementation, I expect I'd have a lot more fights about sandboxing, because it's easier for devs to just standardize on what the OS gives you.

It does seem like being cross-platform makes it more natural for Chrome to lock down the content process very tightly, and provides a strong incentive to do so. On the other hand, it may make it more difficult or less natural to lock down some of the other processes.

On our end, it's natural to sandbox every new process we introduce, but also easy to fudge what is allowed in sandbox profiles. Sometimes we have a choice of accessing a service through a separate process, or working to make sure that service itself is more secure (sandboxed itself, offers thinner and properly validated IPC interface, etc). In many cases, the real right choice may be to do both. As well as fuzzing the heck out of every IPC boundary.

(Your links are switched)

edit: they're fixed now

Indeed. Fixed now, and thanks for letting me know.

Can you give specific examples of why Chrome is significantly better than other browsers, including Firefox and Opera? Chrome is a non-starter for me because of its resource usage and battery hunger.

One specific area where Safari is better than Chrome is in private browsing mode. In Safari, each tab is completely separate, and the cookies aren't shared (as far as I can tell) whereas in Chrome, it's only separate as a whole "private browsing session." They each have their pros/cons but I prefer Safari's model.

Me too. I only use Chrome for its built-in Flash support when a site requires it.

Even then, I use Safari. The single-use permission request is handy. I pretty much only use Chrome as a development tool.

How is Chrome more secure than Safari on macOS?

If you're interested in a detailed answer, read this:


Then try to work back either Edge's or Chrome's approach to security to specific Safari features and design.

The Chrome security team is probably the most sophisticated software security team in the industry (lest you think I'm in the tank for Google, I'd say the iOS platform security team is a close 2nd --- and, to be clear: Safari is a different story on iOS).

> The Chrome security team is probably the most sophisticated software security team in the industry

Unfortunately the Chrome security team can't provide the kind of security I care about - security from Google's tracking.

If you don't have the kind of security Chrome provides, then in reality everyone can track you, because all they have to do to own up your machine is get you to look at a web page.

So you're saying that privacy is impossible?

If that's true, then I'd rather go down fighting (no matter how futile that is) than willingly give up any more private information to Google. I think the time for pragmatism when it comes to privacy is long over.

I'm saying what I said: if your browser isn't adequately secure, all the anti-tracking features don't much matter, because the people you really need to worry about will be able to own up your entire machine and quietly persist themselves into it.

Let me play back what I'm hearing: Chrome is great because it uniquely protects you from 3rd-party hackers on the internet. Fair enough.

But do Chrome's protections also protect you equally from intercession by Google? I just want to clarify this point in my mind.

> But do Chrome's protections also protect you equally from intercession by Google? I just want to clarify this point in my mind.

I'd say yes, unless Chrome has specific backdoors for Google. If that was ever discovered, I'd imagine a huge shitstorm happening.

Google doesn't need backdoors into Chrome, in the same way that it's technically not cheating if you adjust the rules to fit your demands better than others (see f.i. AMP).

> because the people you really need to worry about

I think we disagree about who to really worry about. I worry more about persistent low-level corporate surveillance than about hacker attacks, because while the latter is more acute and can cause great financial harm, the former is what's going to damage my freedom and right to privacy once the government decides it wants to firehose all that data.

Unless Google can track you better when you use Chrome, that's not an argument against using Chrome.

Google can indeed track you better when you use Chrome, since Chrome doesn't have Safari's default-on tracking protections.

This post has lots of info about Chrome and Edge RCE defenses. Super informative on this front. But it's surprisingly light on detail about what makes their sandbox more robust than Edge's. (I don't know nearly enough about the Edge sandbox to assess this claim for myself.)

I am curious to know why you say this considering Safari is just as sandboxed as Chrome?

No, it isn't.

Safari sandbox isn't identical to Chrome's but it's pretty effective. I don't think your statement is a fair one without qualification.

ETA: we'd appreciate specifics on what's wrong with Safari's sandboxing. We are definitely looking to improve it.

You work on the Apple Safari team. Are you really saying you feel like Safari's sandbox and anti-exploit features are comparable to those of Chrome? That would be a newsworthy claim.

Safari's sandbox is weaker in some ways and stronger in others. Saying which is overall stronger would be a judgment call. I wouldn't make a claim like that without spelling out at least some of the details.

This subthread is about the sandbox so I'm not sure why you threw in "and anti-exploit features". I'd probably say without qualification that Chrome has better memory corruption mitigations.

I hoped you might have concrete feedback on what aspects of our sandbox we should shore up. We have our own ideas but of course an informed outside view would be valuable.

In what ways would you say the Safari sandbox is stronger than Chrome's, on macOS?

How would you compare Safari's anti-exploit technology (allocator hardening, Javascript engine hardening, &c) to that of Chrome? Do you think you do anything better than Chrome does on that front?

Your original post here made a bold claim with no qualification and no supporting details. You're not providing any backing to your claim but at the same time you're asking me to give details. Plus you've repeatedly thrown in anti-exploit tech which wasn't the original point of contention.

It would be easy to get the impression that you're trying to shift the burden of proof and move the goal posts. Despite this, I will try to assume good faith.

I think your original post gave the impression that Safari either has no sandbox, or has a wildly ineffective sandbox. You didn't directly state it, but at least some users understandably took away that implication. I think this is inaccurate and unfair.

One piece of evidence we have is grey market prices for end-to-end Safari exploits (with full sandbox escape). By this metric, breaking out of our sandbox on Mac or iOS is not trivial, and is at least comparable in difficulty to Chrome or Edge on Mac, Windows or Android. On the flip side, it seems to be significantly easier to get inside-the-sandbox remote code execution in Safari if you go by market prices, hacking contests, etc. That's something we're working on. Chrome and Edge definitely have materially better mitigations here (as I said in my earlier post).

And finally, to answer your question: one small way Safari has better sandboxing is that we sandbox our network process (something that Chrome is still working on).

My contention was that Safari is less safe than Chrome, not that Safari's sandbox was in particular worse than Chrome's. Nevertheless, on balance, Safari's sandbox is significantly worse than Chrome's. I think --- but you'd know better than I would --- that this is because browser security is a platform problem for Apple, and an application problem at Google. Apple's platform-level mitigations are very powerful on iOS, but substantially less powerful on general-purpose operating systems. Chrome's sandboxing is specific to Chrome itself, and thus finer grained and more powerful.

I think if you create a breakdown of all the facets of browser security, it will look something like this:

Isolation: Chrome > Edge | Safari > Firefox

Anti-Exploit: Edge > Chrome > Firefox > Safari

UX: Chrome > Firefox > Safari > Edge (U2F, password manager)

TLS: Chrome > Firefox > Safari | Edge

Library Security: Chrome > Edge > Firefox > Safari

If you want to add privacy controls here, you'll get an easy win for Safari, but privacy isn't security.

You're close to this stuff though, so if you disagree with any of these informal rankings, or think I've got the rankings wrong, please correct me.

You actually did make a claim that Safari's sandbox was in particular worse than Chrome's, in the post I directly replied to. That is what got my dander up. Elsewhere you implied that the Safari sandbox was comparable to the Java sandbox. I hope you will now agree that the Safari sandbox is closer to Chrome's than to Java's.

I don't know enough about the full spectrum of security technologies in all the browsers to have an informed opinion on your rating scorecard, but some thoughts:

Your assumption that browser security is (only) a platform problem for Apple is wrong. If that were true, we wouldn't have dedicated sandbox profiles for the WebKit content process and its various helpers, which are much tighter than the system default app sandbox on both macOS and iOS. Also, macOS has significant system-level defenses, though obviously not as strong as iOS's.

Safari and Chrome both use the same underlying OS facilities on macOS to implement their respective sandboxes, so I don't think it's right that "Chrome's sandboxing is specific to Chrome itself" to any greater degree than Safari's (or really, WebKit's). It's also not more fine-grained. My understanding of the Chrome sandbox model is that their ideal is to deny everything, based on designing around the very coarse-grained mechanisms in Windows. The macOS/iOS sandbox model is intrinsically built around fine-grained permissions, and Safari grants more of them to our content process. So if anything Safari's sandbox is more fine-grained (but I am not sure this is an advantage).

On the scorecard itself:

- It's really hard to compare sandboxing technologies across platforms. My vague impression is that Safari's is stronger than Edge's and macOS Chrome has perhaps a small overall edge over macOS Safari in terms of effectiveness. I'm also not totally sure you can even do a linear ranking. For instance, only Edge puts their JIT outside the content process, but I am not sure this means they have the strongest sandbox overall.

- Anti-exploit: agree with the top two, not sure I'd put Firefox over Safari.

- UX: I'm not totally sure how you are grading, but you should be aware that Safari has a really good built-in password manager. Passwords are securely stored in Keychain and we offer to generate random per-site passwords at account creation or password change time. I don't even know the vast majority of my website passwords. With iOS 11 this will be expanded to sharing website passwords with corresponding native apps for those sites, removing the main remaining reason to have a simple password.

- TLS: Not knowledgeable enough here, but note that we're moving to BoringSSL in the upcoming OSes and have cert pinning and HSTS and all that good stuff.

- Library security: not entirely sure what you mean by that.

I broadly agree with Justin Schuh's point in the post you linked that isolation technologies are more important on a philosophical level. Also, I would give kudos to Chrome and Edge for having excellent overall security.

Sorry for the delayed response. Also: I have to be terse about some of these things for work reasons.

First, regarding isolation: using the same OS facility to block system calls is a superficial similarity between Chromium and Safari. Chromium and Safari are divided into process components differently, and block different system calls. Chromium exposes much less to its renderer process than Safari does to WebProcess. Not only that, but Chromium has finer-grained components; the GPU isn't exposed to Chromium renderers the way it is to Safari WebProcesses. This isn't a theoretical difference, as you know (but readers here don't): IOKit has been a source of WebProcess sandbox escapes for Safari. Safari isolates the network process and Chrome doesn't, but the network process is a low-priority attack surface. The highest priority attack surface is the one reachable directly from content-controlled Javascript. You say Chromium's edge over Safari is "small overall". We agree that the edge exists, but I strongly disagree that the delta is small.

We agree on anti-exploitation. In fact, if Safari got better here, I'd be less nervous about people running Safari. What are the plans here? The combination of (1) general purpose operating system, (2) rich attack surface exposed to WebProcess, and (3) lack of serious runtime hardening is most of my argument against using Safari. The rest of this list is "nice to have" stuff.

Regarding UX: Chromium has a well-regarded security UX team. Does Apple staff a dedicated security UX team for Safari? Chromium supports U2F natively. When will Safari? I think Chromium, Safari, and Firefox are closer together here than the browsers are on other facets of this list; I don't think Safari does a bad job here, just not as good of a job as Chromium.

Regarding TLS: Adopting Google's BoringSSL library is a fine start and I know Apple has strong crypto people on the Secure Transport team. But does Safari support HPKP? (If so, when did that happen?) Why is it virtually always Google's TLS team detecting and punishing rogue CAs? What CA BR violations were detected by the Safari team, or any other team at Apple? Has Safari done anything like the Google PQ handshake experiment? It feels a little unfair holding Apple to the standard of what is basically the most sophisticated Web PKI team on the planet, but that's a real part of browser security.

Regarding "Library Security": I don't know what to call this item and so I'm not surprised that you're confused, but: how does Apple's work fuzzing and doing vulnerability research in the underlying libraries that the browser depends on compare to Google's work doing the same thing? I think we both know the answer: nothing Apple is doing is close to what Google's in-house offensive researchers are doing. Apple benefits from the work Google does here and so can draft off Google's team here, but Google prioritizes their in-house offensive work to help Chromium.

I could make a similar scorecard for iOS versus Android and I think you'd see the reverse on these rankings, with Apple in the lead on basically everything. But browser security isn't hardware security, and on macOS, I don't think Safari and Chrome are close. I think Chrome is significantly more secure.

> note that we're moving to BoringSSL in the upcoming OSes

Could you give more details on these plans?

I know your replies here are probably somewhat aggravating and definitely time-consuming, but I appreciate your level head and the detail and information you provide. I don't have a complex understanding of any of these technical details and you explain things in a clear and concise way.

Why don't you first answer the question that was asked?

Oh ok.

Just because both features are named "sandbox" doesn't make them equivalent; the exact same argument says you should also be happy to run hostile Java applets, which, after all, are sandboxed.

Oh, now you use your words?

You make this claim over and over again and then double down on it when pressed (you literally call Chrome and macOS Safari "incomparable"), but you fail to provide any reason or evidence to support your claim and instead always insist on counter-evidence. When pressed (further down in this comment thread), the only thing you share is this vague idea that Apple isn't incentivized to make Safari secure on macOS. You also compare Safari/Chrome sandboxes several times after declaring them incomparable.

You then, unprompted, create a comparison of all the major browsers again with no citations or supportive reasoning.

It's all very strange.

While I'm not sure how effective they are on OS X, I have a hard time agreeing with any suggestion that Chrome is anything but the worst possible option for browser security right now. Chrome's official extension store is full of malware which collect not just your browsing data, but the contents of every page you view, and Google has shown almost zero interest in policing it. And a large percentage of malicious websites are designed to get users to install these malicious extensions.

Chrome may be relatively decent at preventing a webpage from compromising your OS, but in the modern era, a compromised browser is as bad or worse anyways, since that's where most of your sensitive activity goes.

While many HN readers will know to avoid the perils of this crud, I don't feel Chrome can be recommended over IE6 to the wider Internet while this remains so commonplace. Safe use of Chrome requires constant vigilance.

In light of Chrome's issues, I feel that any claim that switching to Chrome is important for security would require exceptional evidence of vulnerability in the other browser.

So battery life or security, take your pick?

I'm not saying it doesn't suck. But it's going to keep sucking as long as people are willing to pretend that Safari already has comparable security to Chrome.

Then why not fix the horrendous browser performance of Chrome? It's not like people's complaints about how much energy it uses relative to Safari are new. People have been complaining for YEARS.

According to another comment you work on the Chrome team. So unlike most everyone here, you're actually in a position to fix one of the two options.

(a) I do not work for Google or on Chrome.

(b) I do not disagree about Chrome's energy usage. It sucks.

Well that's what I get for trusting random commenters.

Do they even try to break Safari at those pwn2own events?

Or is it just assumed to be an easy target/too niche/no money from Apple?

They do. Breaking out of the sandbox is not trivial, though over time people have found ways to do it.

What are the other two (for macOS)?

"Use a password manager, enable MFA" unless I miss my guess.

I guess password manager and U2F yubikey.

I'd combine those two, and then my #3 would probably be making sure that you can't easily click on things in emails that open documents in local applications, and my #4 would be some combination of FDE and encrypted DMGs for projects and sensitive files.

I didn't know that. What are the specific security problems with Safari?

Fair enough, it's not exactly a security play though.

>“It’s not about blocking ads, the web behaves as it always did, but your privacy is protected,” he added.

Does this mean browser fingerprint is somehow scrambled before it is sent to the tracker instead of blocking?

Stopping fingerprinting right now is essentially impossible for a motivated attacker. It's enough to block the dumb trackers, but as long as performance is a consideration caches will exist. And as long as caches exist, so will fingerprinting.

I doubt the next Safari also addresses browser fingerprinting. Otherwise, Apple would have mentioned it. Most likely, ad networks will adopt browser fingerprinting over the next few months, and then Apple will introduce some solution to that in a year or two.

It looks like the same pattern as the way Apple scrambles Bluetooth MAC addresses and credit card numbers.

> Does this mean browser fingerprint is somehow scrambled before it is sent to the tracker instead of blocking?

It might be homogenized instead of scrambled. Every iOS device could be given (barring IP etc.) the same fingerprint.

I don't think that's even theoretically possible. How do you block JS font enumeration without crippling the browser font API?

Offer the same basic set to every site. Why does a website need to know the fonts you've installed?
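For illustration, the homogenization idea could look something like this sketch (names and the particular font list are hypothetical; a real implementation would live inside the browser's font lookup, not in page JS):

```javascript
// Hypothetical sketch: instead of exposing the real installed-font list,
// the browser reports only the intersection with a fixed web-safe set,
// so every user presents the same font-based fingerprint.
const WEB_SAFE_FONTS = new Set([
  "Arial", "Courier New", "Georgia", "Times New Roman", "Verdana",
]);

function reportedFonts(installedFonts) {
  // Anything outside the shared baseline is simply invisible to the page.
  return installedFonts.filter((f) => WEB_SAFE_FONTS.has(f));
}
```

Two users with wildly different font libraries would then enumerate identically, which is exactly the point.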

No user installed fonts on iOS. So that's already effectively the case there.

You can install fonts on iOS with MDM or configuration profiles.

You can't block font enumeration without crippling the entire CSSOM.

But that doesn't affect iOS, because you can't install fonts on iOS.

you can't install fonts on iOS.

Custom fonts can be installed via custom configuration profiles[0], which is what some font applications do[1]

I'm not sure if this is exposed via Safari or not, so it could still be a moot point.

[0] - https://developer.apple.com/library/content/featuredarticles...

[1] - https://itunes.apple.com/us/app/anyfont/id821560738

A simple way would be tainting any JS/DOM data that interacts with the font metrics API (or one of a number of other similar APIs) and then not allowing tainted data to be used as parameters in network requests.
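A minimal sketch of that tainting idea (all names here are hypothetical; real taint tracking would be wired into the engine, not user-level JS):

```javascript
// Values derived from font metrics carry a taint marker; the network
// layer refuses to serialize tainted values into request parameters.
class Tainted {
  constructor(value) { this.value = value; }
}

function measureTextWidth(text) {
  // Stand-in for a real font-metrics query; the result is marked tainted.
  return new Tainted(text.length * 8);
}

function buildRequestUrl(base, params) {
  for (const [key, value] of Object.entries(params)) {
    if (value instanceof Tainted) {
      throw new Error(`refusing to leak tainted value in parameter "${key}"`);
    }
  }
  const query = new URLSearchParams(params).toString();
  return query ? `${base}?${query}` : base;
}
```

The hard part, as the replies below note, is how far the taint has to propagate once measurement data mixes into ordinary page state.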

You don't even need the font metrics API. Draw a span containing the character "m", measure the width of the span using Element.clientWidth. Unless you taint (almost literally) the entire CSSOM, you can pull off similar things.
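The measurement trick can be sketched like this; `widthOf` stands in for rendering a span and reading `Element.clientWidth`, and is injected so the detection logic is self-contained:

```javascript
// Width-based font detection. If the candidate font is missing, the
// browser falls back to the generic family, so the rendered width
// matches the fallback's width exactly; any difference reveals the font.
function isFontInstalled(candidate, widthOf) {
  const fallbackWidth = widthOf("mmmmmmmmmm", "monospace");
  const candidateWidth = widthOf("mmmmmmmmmm", `"${candidate}", monospace`);
  return candidateWidth !== fallbackWidth;
}
```

Run across a list of a few hundred candidate fonts, the pattern of true/false answers is a high-entropy fingerprint, and nothing here touches a dedicated font metrics API.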

Is there a reason to not taint the entire CSSOM?

Alternately: why not anonymize CSSOM return values? Your browser might have access to OS fonts A+B+C, but if your JS asked the CSSOM about the size of characters on the page, the answer it would give would come from an "alternate world" where the browser only has access to the web-safe fonts, and so is using one of them.

Pixel-correct measurement of fonts / text is a must-have for certain specific applications like subtitle renderers. (I maintain one.)

For a specific example, it's more pleasing to split a long line of text in a way that all the split lines have roughly the same length - "a a a b b b" -> "a a a\nb b b". But CSS only gives you one way to split lines - as much text as possible in all but the last line and whatever's leftover in the last line - "a a a b b\nb". This means a renderer library has to be able to measure the width of text to be able to insert linebreaks itself.
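The balancing step described above can be sketched with an injected width function (character count stands in here for real pixel measurement, e.g. canvas `measureText`):

```javascript
// Split `words` into two lines whose measured widths are as close as
// possible, instead of CSS's greedy "fill the first line" behavior.
function balanceTwoLines(words, widthOf) {
  let best = null;
  for (let i = 1; i < words.length; i++) {
    const first = words.slice(0, i).join(" ");
    const second = words.slice(i).join(" ");
    const diff = Math.abs(widthOf(first) - widthOf(second));
    if (best === null || diff < best.diff) {
      best = { diff, lines: [first, second] };
    }
  }
  return best ? best.lines : [words.join(" ")];
}
```

With real fonts the widths of the two halves depend on the actual glyph metrics, which is why a subtitle renderer needs genuine measurement rather than character counts.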

Huge amounts of the web will break: anything doing anything layout-related with JS will likely break.

Changing line-lengths will cause odd bits of layout breakage, so just giving bogus results as if rendered with a different set of fonts won't work properly either.

Browsers already treat first-party cookies differently than third-party ones. Report a homogenized fingerprint to Google Analytics, but a real one to the main site.

One way would be to not expose any installed fonts to web content other than the system default ones.

Isn't this one of the stated features of the Tor browser?

But then sites could easily identify and choose to react to that fingerprint. (But maybe that's OK)

Looks like this will stop (after 24 hours) some companies from doing an initial redirection to set cookies for tracking purposes... Example:

1. Search Google for hockey sticks

2. Click on search result hockeystick.com

3. hockeystick.com issues a 302 to adcompany.com which then issues a 302 back to hockeystick.com

Why the 302? Because in Safari, you can only access cookies in a 3rd party context if you've seen that domain in a 1st party context. Setting a cookie on adcompany.com in a 1st party context gives you the ability to read that cookie in a 3rd party context, which can be used for tracking purposes.
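The timeline rules ITP publicly describes can be sketched as a small decision function (thresholds are per Apple's description: full third-party access for 24 hours after a first-party interaction, partitioned for 30 days, purged after that; the function name and states are my own simplification):

```javascript
const HOURS = 60 * 60 * 1000;
const DAYS = 24 * HOURS;

// Given the time of the last first-party interaction with a domain,
// decide how that domain's cookies behave in a third-party context.
function cookiePolicy(lastFirstPartyInteraction, now) {
  if (lastFirstPartyInteraction === null) return "blocked";
  const elapsed = now - lastFirstPartyInteraction;
  if (elapsed <= 24 * HOURS) return "allowed";    // full third-party access
  if (elapsed <= 30 * DAYS) return "partitioned"; // per-site cookie jar
  return "purged";                                // client-side data deleted
}
```

The redirect bounce described above is an attempt to keep resetting that 24-hour clock by repeatedly putting the ad domain in a first-party position.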

Woah - is this what companies that "rent" other companies' pixels like perfect audience are doing to get the pixel data?

Won't the browser show an error about a circular redirect? Or does that take a few bounces?

The URLs would be different. Companies also rewrite internal links as you're navigating a site to accomplish the same thing. Example: https://baycloud.com/thirdparty-redirect

It wouldn't be circular if the URL was different, for example:




I read https://webkit.org/blog/7675/intelligent-tracking-prevention... which details this.

They're just being a little sophisticated in how they block third-party cookies. This will hardly stop other tracking scripts, tracking images, widely used fingerprinting techniques, and related JS calls. So nothing remotely close to even Brave, let alone Tor or the Epic Privacy Browser.

We're trying to do the most extreme thing we can do short of blocking ads. To be more effective, you end up blocking ads, whether intentionally or as a side effect.

This blocks more than just cookies by the way, it affects all client-side state. And client-side state is still the primary and most reliable tool used for tracking, even though other methods exist, such as browser fingerprinting, behavioral fingerprinting, and IP-based tracking.
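Why client-side state is the workhorse: any persistent store is enough to mint a stable visitor ID. A sketch (the `storage` object stands in for localStorage, IndexedDB, or cookies; clearing or partitioning that state is exactly what breaks this re-identification):

```javascript
// Trackers rarely need anything fancier than this: read an ID if one
// exists, otherwise mint one and persist it for all future visits.
function getOrCreateVisitorId(storage, random = Math.random) {
  let id = storage.get("visitor_id");
  if (!id) {
    id = random().toString(36).slice(2);
    storage.set("visitor_id", id);
  }
  return id;
}
```

Fingerprinting only becomes attractive once this simple mechanism stops working, which is why targeting client-side state first makes sense.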

And something tells me that the ability to completely block third-party cookies is going to just as magically disappear.

The big question to me is whether it's enabled by default, and whether it blocks requests to Google Analytics. If so, that's an interesting shot across the bow.

You don't need to block Google Analytics to make it more private. You just need to make the user appear to be a new user to every site.

So Google may lose data because then they can't track you all over the web, but the websites don't because they still see you as one user.

Maybe I'm overly paranoid, but I assume Google does all sorts of fingerprinting (documented and not) via GA. Why else would it be free if it didn't provide a big upside for Google?

Why do they need fingerprinting? They can just give you an identifier and combine it with your login on the Google sites to connect it to your identity.

Only a limited subset is free. More advanced types of tracking require the pro subscription.

Isn't it the case that all the data is gathered anyway, but site owners can only access the more advanced trackers with a paid subscription?

I don't intend to provoke FUD; I seriously don't know. This would sound like a rational choice for Google, since they need this data to run their business.

This is actually really concerning to me. If they blocked Google Analytics, it would severely damage that data. It'd be bad news for site owners who just want to quantify their traffic.

....so? Site owners are not guaranteed this access; their script runs on the client computer.

I say this as someone who does a lot of analytical research and re-targeting and would be hurt if this was rolled out on a larger scale; I just don't think I have a right to the data.

Well, if you don't care about things that prevent you from doing your job, then what exactly is the point of working in that field?

Imagine you were a police officer with this mentality.

"As someone who investigates lots of crimes, it's totally fine if someone invokes the fifth amendment, I don't have the right to compel them to answer."

"I mean if you don't care about something prevents you from doing your job, why even join the force?"

I don't have to "imagine" anything. Let me invite you to consider the context of my original comment before coming up with ridiculous comparisons.

Doesn't seem that ridiculous a comparison to me. You don't have a right to compel something from someone else, but that doesn't mean you just have to give up at whatever task you are trying to accomplish.

When I read a comment, the first thing I do is read the ones above it so I understand the limited scope of the comment. I don't immediately rub my hands with glee and go about compiling a list of situations to which the comment doesn't apply. I guess that sort of thing appeals to some.

Complaining about losing data is in no way the same as assuming entitlement status or forcing someone to do something. The job of a police officer is to enforce laws, and exercise human judgement. The issue with the fifth amendment would be handled by lawyers in courts, not by the officers on the ground. IT jobs have completely different parameters. The comparison with police officer is entirely irrelevant and as such I don't want to continue that discussion.

Kind of presuming the wrong thing there. There's still work to be done, right? Just because something would make the job easier does not mean it should be done, ethics come first.

Um, they were already tracking people, and now they can't (presumably). If ethics were a priority why were they working that job in the first place?

There is nothing unethical about using Google Analytics. I also am not entitled to this access, by the very nature of how it works.

I was guessing that the reason you did not feel entitled to the data was ethical (at least that's how I feel). I also feel it's ethical to use the data, as long as it's freely given.

I think you responded to the wrong person. I didn't claim it was unethical.

They don't prevent me from doing my job. It makes it harder. Oh no, I have to work harder.

Why is it wrong to want to make your job easier? I guess I don't see your point of view..

Why is wrong for people to want to protect something they have of value, from someone else just harvesting it from them? I can see why this is annoying but can you really not see the other side of this situation?

I don't think you quite understand this conversation. I'll prefer to not argue with you any further.

>I don't think you quite understand this conversation.

Pretty sure you should read the remainder of the comments and see which person is having trouble understanding everyone else.

> I don't think you quite understand this conversation.

Looking at the conversation as an outsider, it seems to me you are the one who has things backwards.

I would like it to be easier. I also know that I am not entitled to that. This isn't difficult to grasp for most of HN based on the comments...

The difficult thing to grasp here is your continued insistence that someone claimed they are entitled to something when they did not.

The article says (my emphasis):

> segregate the _cross-site_ scripting data

So if you "just want to quantify your traffic", use self-hosted Piwik (i.e. not cross-site), as many of us do already.

What's the advantage of third-party analytics over self-hosted ones (e.g. GA over Piwik), except for "someone else hosts it for me for free"?

Honest question. I'm not logging anything about my websites' visitors on principle, so I don't have insight into that area.

Welp. Sorry.

Well, back to log files. Which may no longer work with the latest and greatest JS framework :(

I hope it's not enabled by default so it doesn't ruin analytics data for webmasters. I personally don't see any reason to turn it on either, so...

Does Analytics do some tracking by default? Or only if you enable the advanced demographics options, which enable DoubleClick?

The fact that it's free for site owners suggests it has some benefit to Google, so I assume heavy fingerprinting and tracking.

What would it block? The JS file? Analytics only sets a cookie on the site using it, not one that works across sites.

Cookies are one way to track users. They are not the only one. Google Analytics is so ubiquitous...I can't see Google missing the opportunity to leverage it.

Speaking of shots across the bow, I'm surprised they've never put an ad-blocker in Safari.

The cynic in me sees this as cutting off Google, and then tracking within the browser so they become the source of cross-internet tracking. I'd be on the lookout for any new 'personalization' feature that comes into the browser. E.g. WWDC 2018: 'Today we're happy to announce Siri integration with Safari! She will provide personalized recommendations and results by applying machine learning to your documents and data!'

They showed this on iOS Safari today:

> Siri now suggests searches in Safari based on what you were just reading. And when you confirm an appointment or a flight on a travel website, Siri asks if you want to add it to your calendar.

Search for "Smarter about you." on this page: https://www.apple.com/ios/ios-11-preview/ Looks like it's done on the device, though, and end-to-end encrypted with your other devices.

Firefox (Nightly at least, I don't follow stable :D) also has built-in tracking protection, only in Private Browsing by default (about:config to enable everywhere).

It says a lot about the state of the web that both Apple and Google are looking at publishers and saying "Look, if you won't fix your websites, we'll fix them for you" (Google in the form of AMP on mobile devices). However, as one of those who subscribes to the opinion that AMP breaks the web, I greatly prefer Apple's approach.

It makes me wonder how many publishers at national newspapers and magazines are even aware of what’s going on.

It is well-known that Apple uses Omniture (acquired by Adobe, aka SiteCatalyst, aka 2o7.net, etc.).

Remember, "SWF" stands for "Small Web File"? Yes, they actually tried to get users to swallow that once Shockwave Flash started being used in devious ways, such as tracking users.

Omniture's business is third-party tracking cookies, similar to Google Analytics or KISSmetrics. Not sure, and don't care, whether Flash is used much anymore. If you're too young to remember, search and ye shall find information about "permanent" Flash cookies that could not be removed.

Apple is not saying "We will not engage with companies selling third party tracking cookie services." Clearly they are not opposed to third party tracking cookies in principle.

Instead they are announcing some change to their browser. Wow, exciting. It is not clear what exactly this announcement accomplishes for users. Probably nothing. If you are trying to avoid ads and tracking, popular browsers (without extensions, etc.) are not your friends.

Advertisers will finally move their tracking behind their CDNs, which was always the end goal for them and why they were free in the first place.

Then we have a problem where the industry is reliant enough on CDNs that browsers can't simply block access.

Not available for Safari though.

Finally, I hope this becomes a common practice from other vendors as well.

It's unclear to me how these "trackers" work. How do they track you? Is it cookies, or something else?

That site is interesting; it shows you that 1 in X browsers share a particular fingerprint ingredient.

I found this one rather interesting, it was the most unique of the ones listed:


One in several thousand have the same headers as me. But the headers themselves are quite a small string; I'm surprised they are that unique.
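
"One in several thousand" translates directly into bits of identifying information (log2 of N), and independent signals add up. A quick back-of-the-envelope in Python:

```python
import math

def surprisal_bits(one_in_n):
    """Bits of identifying information from a signal shared by 1 in N browsers."""
    return math.log2(one_in_n)

# A header combination shared by 1 in ~3000 browsers:
print(round(surprisal_bits(3000), 1))   # 11.6 bits
# ~33 bits are enough to single out one of ~8.6 billion people:
print(round(surprisal_bits(8.6e9), 1))  # 33.0 bits
```

So a headers-only signal like that, combined with two or three other 10-bit signals (fonts, screen size, plugins), is already enough to identify almost anyone.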

Probably most (if not all) fingerprinting sources are showcased by fingerprint.js: https://github.com/Valve/fingerprintjs2

For a second I thought "why in the world is Valve maintaining this". Confusing username to be frank.

It is essentially impossible to enumerate all the ways browsers leak fingerprintable information.

Yeah true, should've put it another way.

I'd go with !Tor.

In practice, User-Agent strings (which are just HTTP headers) have been shown to be pretty effective at uniquely identifying and tracking most people. So even disabling JavaScript and Cookies only goes so far.

Source? Because the only information contained in user-agent strings in modern browsers are browser version (realistically limited to vendor since browsers auto-update) and operating system version. So basically all you're going to get is (Chrome/Firefox/Edge/Internet Explorer/Safari on Windows/Linux/Mac), which isn't much.

It's more than just the browser, it's the exact, EXACT version of the browser which can be very revealing if you're not updating your browser (almost) every day. For example: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.85 Safari/537.36

I can't speak for Chrome or Safari, but Firefox's UA is pretty sparse:

    Mozilla/5.0 (X11; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0

This is a totally custom, self-compiled build, and there is absolutely no reflection of that in the UA. Also note that the Mozilla/5.0 and Gecko/20100101 fields are frozen and are only there because sites break if they're not there.

>It's more than just the browser, it's the exact, EXACT version of the browser which can be very revealing if you're not updating your browser (almost) every day

Is there a reason why you don't have auto-update enabled in your browser?

Also, auto-updaters don't apply updates right away, so as long as you're not a few versions behind the latest, you will blend into the crowd.

Even without JavaScript or cookies, HTTP request headers can reveal a lot of unique entropy:

  * Browser
  * Browser version
  * OS
  * OS version
  * Machine architecture such as x86, x86-64, or ARM
  * User locale
  * IP address
  * DNT flag
Trackers can also tag clients with unique cookie-like ETag or Cache-Control values that clients will return in future HTTP requests.
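
The ETag trick is worth spelling out, since it survives cookie clearing in naive setups. A toy server-side sketch in Python (illustrative only; the handler name and in-memory set are made up for the example):

```python
import secrets

# Abuse the ETag cache-validation header as a cookie substitute: the browser
# stores the ETag with its cached copy and echoes it back via If-None-Match.
_known_ids = set()

def handle_request(if_none_match=None):
    """Return (status, etag): recognize a returning client by its cached ETag,
    or mint a fresh unique one for a first-time visitor."""
    if if_none_match and if_none_match in _known_ids:
        # Client revalidated its cached copy -> we just re-identified it.
        return 304, if_none_match
    new_id = secrets.token_hex(8)  # unique per first-time client
    _known_ids.add(new_id)
    return 200, new_id

status, etag = handle_request()        # first visit: 200 plus a fresh ID
status2, etag2 = handle_request(etag)  # repeat visit: 304, same ID echoed back
```

Clearing cookies doesn't help here; you'd have to clear the HTTP cache too.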

A quick Wikipedia search turns up more fields [1], although some of these fields are not 100% accurate due to historical reasons (I'm looking at you, IE). I'd bet there are a couple other data points they gather via JS to fingerprint.


    Mozilla/5.0 (iPad; U; CPU OS 3_2_1 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Mobile/7B405

1. https://en.m.wikipedia.org/wiki/User_agent

I compared two Chrome versions, and it seems that most of the version numbers there are static.

    Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2979.0 Safari/537.36
    Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.10 Safari/537.36
As for iOS 10 it's pretty sparse as well.

    Mozilla/5.0 (iPhone; CPU iPhone OS 10_0_1 like Mac OS X) AppleWebKit/602.1.50 (KHTML, like Gecko) Version/10.0 Mobile/14A403 Safari/602.1
It's slightly worse than Windows because it probably discloses your device type, but there are tens (hundreds?) of thousands of users for each iPhone variant.

Good read here.[1] An example could be using your installed fonts. Like I was saying they probably use a bunch of other JS tricks. These 3rd parties aren't going to disclose anything.

Because the site takes into account ALL user-agent strings ever collected, it overestimates how unique a user-agent string is. Realistically, at any given point in time there are only a few dozen user-agent strings in widespread use (due to how few bits of information actually get put into them). Unless you're using a special snowflake browser/operating system, you should be fine.

I misspoke when I said it was simply User-Agent: they appear to be fingerprinting based on other items such as installed fonts, etc. I believe when they say it's unique, it means "unique", not "reasonably uncommon". And if that's the case, it's been up for years and has never encountered a system exactly like my current one. I'm on a very popular Linux distro used by most of my co-workers at a mid-size company, and I have the same set of work-related plugins installed as all of them, plus LastPass and Ad Block Pro. So not mainstream by any means, but also not going out of my way to be a snowflake, either.

Maybe the next frontier in browser anti-tracking is to stop sending a User-Agent header, or to build in functionality like one of the browser add-ons that randomly pretend to be different browsers.
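
If you go the randomizing route, the usual advice is to rotate among a pool of genuinely common UA strings, once per session, since a rare or per-request-random string is itself a fingerprint. A hypothetical sketch in Python (the pool below is illustrative):

```python
import random

# A small pool of common, real-world UA strings circa mid-2017. Rotating among
# popular ones blends in; a blank or gibberish UA stands out on its own.
COMMON_UAS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/603.2.4 "
    "(KHTML, like Gecko) Version/10.1.1 Safari/603.2.4",
]

def pick_user_agent(rng=random):
    """Choose a UA for the next session. Per-session, not per-request:
    switching mid-session is itself a detectable anomaly."""
    return rng.choice(COMMON_UAS)
```

The catch is that JS can still read navigator properties and cross-check them against the header, so header spoofing alone only helps against header-only tracking.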

There are a number of ways. Cookies are one, but you can also collect other kinds of data from a web browser to uniquely identify a user across multiple sessions. Generally speaking, if you can run JavaScript, you can track the user. This is done by all advertisers and most little widgets like Facebook or Disqus comments, like and tweet buttons, etc.
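
The core move is just concatenating attributes and hashing them: each value is common on its own, but the combination is often unique. A rough Python sketch (the attribute names here stand in for what scripts like fingerprintjs2 actually collect in the browser):

```python
import hashlib

def fingerprint(attributes):
    """Hash a set of browser attributes into a stable ID; no cookies needed.
    Sorting makes the ID independent of collection order."""
    blob = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

attrs = {
    "ua": "Mozilla/5.0 (X11; Linux x86_64; rv:55.0) Gecko/20100101 Firefox/55.0",
    "screen": "1920x1080x24",
    "timezone_offset": "-120",
    "fonts": "Arial,DejaVu Sans,Liberation Serif",
}
# Same browser state -> same ID on every visit, in any attribute order:
print(fingerprint(attrs) == fingerprint(dict(reversed(list(attrs.items())))))  # True
```

Because the ID is recomputed from the browser itself on every page load, clearing cookies does nothing; only changing the underlying attributes breaks the link.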

> This is done by all advertisers and most little widgets like Facebook or Disqus comments, like and tweet buttons, etc.

Why isn't this illegal already?

Because it's highly lucrative.

And because it's very functional and easy at the same time.

Most people don't know nor care about the issue.

It's cookies and the change isn't really earth shattering but it does close the "redirection trick" loophole that some companies were using to track you across domains. See my example here for more specific details: https://news.ycombinator.com/item?id=14493373

Most trackers use cookies or other client-side state to track you across the web. There's also various fingerprinting techniques but they are less reliable.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact