I wonder if this will undo, in large part, the "new" web that's been worked on for the better part of this millennium. Third-party cookies are blocked, YouTube-style redirection to make cookies first-party is presumably detected, other cookies are held no longer than a week, etc. So advertisers will turn to browser fingerprinting, and soon requests for window/viewport dimensions will be flagged as insecure. Even CSS media queries aren't safe, since they can trivially be used for fingerprinting (and I have seen CSS where max-width was being queried in steps of 3 pixels out of sheer incompetence). Also caching/timing "attacks" for fingerprinting. And finally, arbitrary JavaScript execution will be questioned, as it should've been a long time ago. It'll be interesting to see this arms race unfold, with Apple protecting their users (and also their apps and iDevice market) and Google protecting their version of freedom (and unchecked privacy invasion and monopolization of the ad space and web). Hint: I've recently purchased an iPhone
> And finally, arbitrary JavaScript execution will be questioned, as it should've been a long time ago.
I run UMatrix and manually whitelist 3rd-party scripts, I've blocked web features like WebGL and Canvas behind prompts, and I regularly disable JS on websites that I think don't need it (news sites in particular). At the moment, if you're really worried about tracking, it is a very good idea to think about blocking most Javascript and culling down the language features your browser supports.
However, I want to stress "at the moment". The problem is that pervasive tracking and device fingerprinting will go wherever the app functionality is. Currently, even with all of the many, many problems on the web, it's still better to use apps like Facebook via a website than it is to download them as native apps to your phone. The web has better fingerprinting resistance than most native platforms.
Again, doesn't mean the web is great. It just means we need to think about where Google Maps is going to end up if it's not on the web. What I'm getting at is that blocking scripting isn't a long-term solution to fingerprinting in general. We could get rid of Javascript on the web, it might even be a good idea. We will still need to solve the same problem with native apps.
Forget the web for a second, what it comes down to is that we need a way to run a Turing-complete language that is incapable of fingerprinting the host environment. That language doesn't have to be Javascript, and it doesn't have to be on the web, but it does have to be something, somewhere.
It's a really hard problem -- vulnerabilities like Spectre and Meltdown have made it worse. Now we're asking questions like, "is it actually reasonable for applications to have access to high-resolution timers?" People look at the web and say, "oh, this is a web problem." It's an everything problem. If we really want to get rid of pervasive tracking, we now have to think about how high-resolution timers are going to work on desktops.
What the web promised was a VM where anybody (technical or not) could run almost anything, without validating that the code was safe, and the VM would just protect them. We're finding holes in this particular implementation, but I still eventually want that VM as promised.
I don't want to deprive you of the VM as promised, but that seems very hard to implement, and I don't have time to help.
I just want to solve the much easier problem of giving people some way to use the internet to read documents without getting tracked [1].
By "document" I mean a page of text, images, links to other documents and maybe some other easy-to-implement non-privacy-compromising things.
The web is how almost all documents are made available on the public internet. Most document authors don't even consider or imagine any other way to do it. And the web is a privacy nightmare. That is the problem I'd like to solve.
I felt the need to write this because past discussions on this site of the problem I want to solve have gotten derailed into a discussion of how finally to achieve the vision of "a VM where anybody (technical or not) could run almost anything", which, like I said, strikes me as a much harder nut to crack.
[1]: and without the need for anything as demanding of the user's time or the user's technical skills as you described when you wrote, "I run UMatrix and manually whitelist 3rd-party scripts, I've blocked web features like WebGL and Canvas behind prompts".
We already have all of the technology we would need to build a document-only web now. The biggest unresolved problems are asset caching[0] and IP addresses[1]. But for the most part, nobody would even have to build a new browser, they could just distribute a custom build of Firefox that turned off Javascript and a few other features.
It would be very fast, reasonably private, probably a lot nicer to use (at least where documents are concerned), and nobody would use it. I don't necessarily disagree with your goal -- it seems very reasonable to want a document distribution platform that isn't encumbered with JS. But given that news sites can already speed up their pages dramatically by removing JS, and they don't, why would they support this new browser or platform? And without them supporting it, why would users move to it?
We've seen this play out with AMP. AMP is fundamentally flawed, but it did get one thing right: that news sites only care about search engine placement, and they do not care about user experience past that point.
I would (very cautiously) suggest that it might actually be easier to implement a VM that safely runs arbitrary code, than it would be to convince publishers to move to a user-friendly platform that only distributed documents. The only success I've seen in getting publishers to abandon scripting is via platforms like Facebook and Medium -- maybe that's replicable with a new browser or distribution layer on top of the web[2]? I dunno; I think that also might just be a really, really hard problem.
I'd be happy to be proven wrong, I would happily start using a ubiquitous, script-free platform for document distribution.
---
[0]: We maybe need to just bite the bullet and get rid of asset caching, or at least give asset caches a very short expiration date (<1 day). I guess making them domain-specific could help too.
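On the "domain-specific" idea: this is what's now usually called cache partitioning (double-keyed caching) -- the cache key includes the top-level site, so two sites embedding the same tracker asset each get their own copy and the cache can't be read across sites. A toy sketch of the concept, with class and method names of my own invention (not any browser's actual API):

```javascript
// Toy model of a double-keyed asset cache: entries are keyed by
// (top-level site, asset URL) instead of asset URL alone, so an
// asset cached while visiting one site is invisible from another.
class PartitionedCache {
  constructor() { this.entries = new Map(); }
  key(topLevelSite, assetUrl) { return `${topLevelSite}|${assetUrl}`; }
  put(topLevelSite, assetUrl, body) {
    this.entries.set(this.key(topLevelSite, assetUrl), body);
  }
  get(topLevelSite, assetUrl) {
    return this.entries.get(this.key(topLevelSite, assetUrl));
  }
}

const cache = new PartitionedCache();
cache.put("foo.example", "https://tracker.example/pixel.gif", "...bytes...");

// Same asset URL, different top-level site: cache miss, no cross-site signal.
console.log(cache.get("bar.example", "https://tracker.example/pixel.gif")); // undefined
```

The cost, of course, is that shared assets get downloaded once per site, which is part of why browsers resisted doing this for so long.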
[1]: IP addresses are a huge issue, and I don't think the tech community talks about them enough. TOR is not really scaling. The best solution for ordinary consumers is a VPN, and VPNs have a lot of pretty obvious problems.
[2]: My guess would be, someone would need to make publishing much easier than building a website (ie, Medium), maybe by adopting the DAT protocol and just offering free hosting for everyone. That runs into the same IP address problems (DAT and IPFS are not privacy friendly), but there's progress being made in that area. Or someone would need to find a way to get users to abandon the web en masse and move to the new platform.
>it might actually be easier to implement a VM that safely runs arbitrary code, than it would be to convince publishers to move to a user-friendly platform that only distributed documents.
Would the VM that safely runs arbitrary code render existing web pages or would it be necessary to persuade publishers to adopt it?
The current strategy being pursued by Apple and Mozilla is the former -- they're hoping to make Javascript into that VM without breaking the majority of existing web pages.
It is yet to be seen whether that's feasible. Apple and Mozilla are certainly making a lot of progress, but Javascript is very old, and was designed in an era where the attacks were much less sophisticated.
The most promising progress (I think) is in building tracking protection that is undetectable. For example, you can put a permission prompt in front of someone's location and block off the API if the user clicked "no", but you could also just lie about their location, which means you'd still be compatible with most existing pages, and publishers wouldn't be able to strong-arm users into turning off the setting. This is how Firefox currently handles high-resolution timers. You can request them, they'll just lie to you sometimes.
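The timer-lying idea can be sketched as rounding every timestamp down to a coarse bucket, so two events inside the same bucket are indistinguishable and the clock is useless for precise side channels. This is just an illustration of the concept, not Firefox's actual implementation, and the 100 ms resolution is an arbitrary example value:

```javascript
// Sketch: coarsen a high-resolution timestamp so it can't be used
// as a precise side-channel clock. Illustrative only.
function reduceTimerPrecision(timestampMs, resolutionMs = 100) {
  // Round down to the nearest multiple of the resolution.
  return Math.floor(timestampMs / resolutionMs) * resolutionMs;
}

// Two reads ~40 ms apart collapse into the same value:
console.log(reduceTimerPrecision(1234.567)); // 1200
console.log(reduceTimerPrecision(1274.9));   // 1200
```

The page still gets a number back from the API, so most existing code keeps working; it just can't tell 1234.567 from 1274.9 anymore.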
Again, it's yet to be seen whether that kind of stuff will work.
The other hope is that WASM may be good enough on its own to encourage wide adoption -- being able to use (almost) any language to compile to the web is very, very attractive, so WASM might overcome Javascript's network effects (no pun intended) and replace it as the dominant language on the web.
If that happens, WASM might be an opportunity to rethink web permissions. Might.
If neither of those approaches work, then anything is fair game. At that point, we might as well try to make a document-only web, or migrate everyone to a new platform. I think that will be very difficult though.
Asset caching problems I worry about come from two sources: the browser itself, and 3rd-party content-delivery networks.
On the browser side, by serving a unique set of resources to each user, I can identify them on future visits. Roughly speaking, the attack works by inserting a combination of unique asset URLs and repeated asset URLs, and then logging on the server which of those URLs the browser tries to fetch. By responding to fetch requests with 404s or bad assets, the server can avoid having your browser overwrite the unique combination of URLs it has cached, which allows for more persistent tracking.
These caches also work across domains -- that means that if "foo" requests asset "googlepixel/my_id", and "bar" also requests asset "googlepixel/my_id", both will get served from the same cache. This means that caches can be used to track browsers across multiple websites.
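The core of the attack is that the cache itself becomes a bit store: on the first visit the server lets the browser cache one asset per "1" bit of an ID, and on later visits the subset of assets the browser does not re-request reads the ID back out. A toy simulation of both sides (the URL scheme, bit width, and function names are made up for illustration):

```javascript
// Toy model of cache-based tracking. A page embeds N_BITS asset
// URLs; the browser's cache (modeled as a Set of URLs) ends up
// storing exactly the assets for the '1' bits of the ID.
const N_BITS = 8;
const assetUrl = (i) => `https://tracker.example/a/${i}.gif`;

// First visit: the server serves cacheable responses only for the
// assets whose bit in `id` is 1 (404s for the rest).
function primeCache(id, cache) {
  for (let i = 0; i < N_BITS; i++) {
    if ((id >> i) & 1) cache.add(assetUrl(i));
  }
}

// Later visit: the page references all N_BITS assets; the browser
// only fetches the uncached ones, so the server learns which were
// cached -- i.e. the ID.
function readIdFromCache(cache) {
  let id = 0;
  for (let i = 0; i < N_BITS; i++) {
    if (cache.has(assetUrl(i))) id |= 1 << i; // cached => bit was 1
  }
  return id;
}

const browserCache = new Set();
primeCache(0b10110101, browserCache);
console.log(readIdFromCache(browserCache)); // 181 (0b10110101)
```

Clearing the cache wipes the ID, which is exactly why short expiration dates and per-site partitioning (footnote [0] above) blunt this attack.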
On the content-delivery side, the cached images are likely unique to the website you're visiting, and by downloading them, you're letting the 3rd party know that your IP address visited the page they're associated with. In a centralized web this is a comparatively small concern, though it does allow Cloudflare/Google to know a lot more about what pages your IP is visiting.
In a distributed web like IPFS this is a bigger problem, because now you're connecting to completely random people online to get your assets downloaded. Even worse, you're asking for those images/documents not by going to one specific server and saying, "hey, server X said you had asset Y", but by announcing to the entire network, "hey, I want asset Y, does anyone have it?"
> What the web promised was a VM where anybody (technical or not) could run almost anything
I have to agree with hollerith here and say this is not at all what was promised. Instead, what was promised, IMHO, was linked, passive pages with an expectation of UI transparency wrt when network accesses occur (in response to clicking on links or submitting forms), and this notion is even honored in the HTML 5 spec to some degree. A universal VM/runtime is desired by developers who want to sell services rather than software, or don't want to bother with deployment procedures in app stores, or want portable code across platforms, or for other plausible reasons, but it is a non-priority next to the web's original purpose. OSs are far superior platforms for general-purpose apps, and turning browsers into platforms only helps the (few) browser vendors left while trampling by design on security, privacy, simplicity, and power efficiency.
Sounds good -- but, if we move all of the web apps to be native apps, then I want a native environment that can safely run untrusted code.
Currently, none exist that I'm aware of. Phones aren't doing well on that front, and we haven't finished moving to Wayland yet, so that mess still exists. X11 is heaven for anyone who wants to fingerprint a device. And we still have to figure out whether or not we're going to allow high-resolution timers or raw access to the GPU, which is itself a pretty big fingerprinting target.
On mobile phones, the closest thing I have to a good adblocker is AFWall+, which doesn't work on iOS, and only blocks via the built-in iptables firewall, which isn't good enough to make me feel safe running apps like Facebook or Twitter. And most mainstream Linux distros (with a few exceptions like Qubes OS) are not shipping with the kind of process isolation that's necessary to guard against malware.
I guess MacOS is making some progress in this area at least? But for the most part, none of our computer environments were designed to run untrusted code -- Linux in particular was primarily designed to protect you against other users. The prevailing advice was, "just don't download malware", which doesn't reflect how people today use computers.
I want to stress -- there could be a solution to this. We could make a user-friendly native platform that replaced the web. But I don't think anyone has made one yet.
I want to advocate that it's a good idea for us to solve that problem (or at least think about it) before we get rid of Javascript. I don't care what happens on the web, except that the web is currently the most user-friendly, widely-used VM that we have. I see a lot of people suggesting that we burn that down, but I'm not sure they've really thought about what's going to happen afterwards.
>the web is currently the most user-friendly, widely-used VM [where anybody (technical or not) could run almost anything] that we have. I see a lot of people suggesting that we burn that down, but I'm not sure they've really thought about what's going to happen afterwards.
Could you say more about this? Previously, I asked you for technical information, but here I'm after your aspirations and maybe your values. What is so great about a state of affairs in which the average consumer can decide where on the internet to go today and at each stop (e.g., web page) along the way, code written by the owner of the web page is sent to the consumer's computer and is transparently run without the user's having to install anything?
My guess is that you dream of using the internet to create compelling experiences that move many (millions?) of people, and you consider documents consisting of text, images and links to other documents woefully inadequate for that purpose, but let's hear from you.
(BTW, I don't care about using the internet to consume compelling or moving experiences -- or more precisely ordinary text documents, images, audios and videos are the only types of compelling / moving experiences that I use the internet to consume, and I have no need or desire for more than that.)
> What is so great about a state of affairs in which the average consumer can decide where on the internet to go today and at each stop... [code] is transparently run
Someone might as well ask what's so great about general purpose computers, or Open Source. The web is a way to share documents, but even from its origin it was also a way to distribute software packages. There are a couple of things that also make it a reasonably decent software runtime, but more on that later.
My goal is that I want to make it easier for ordinary people to share software and to share software modifications -- that means fewer gatekeepers (ie, app stores), less complicated publishing (software should be as portable as possible), and less complicated installation. The removal of those barriers means that software is inherently less trustworthy -- I want ordinary people to be able to share code, but I also don't trust ordinary people that much.
On Linux, our thought process around software has been that distro packagers will read source code and hand-pick which packages are safe. Users can bypass their package managers, but for the most part shouldn't, unless they feel OK reading the source code and evaluating whether the author is trustworthy. This doesn't really scale (see Android), it requires a ton of volunteer work, it makes developing and distributing software much harder, and it puts burdens on end users that are unrealistic.
If we want a world where anyone can write software and anyone can run it, we have to make arbitrary code safer. It's never going to be 100% safe, but a user should feel comfortable downloading and installing an arbitrary app. When I say that currently the web is the best VM, this is what I'm referring to.
Across almost every axis, it is currently safer to visit a random website than it is to download a random app to your phone or desktop computer. And when I talk to people about hardening phone security, they're all caught up on moderation and approval processes, which are actively the wrong direction to go if you think of computers as general-purpose, democratizing devices.
From this point of view, it's less that the web should be a software runtime, and more that making software accessible requires us to have a good software runtime, and currently the web is better than the alternatives. It's pragmatic -- all of the other software runtimes are either less secure (Android/Windows), or less accessible (Qubes OS, actual VMs).
----
> My guess is that you dream of using the internet to create compelling experiences... and you consider documents consisting of text, images and links to other documents woefully inadequate for that purpose
I do want to be able to create compelling experiences and weird stuff, and I think there's an inherent value to having even flawed platforms that enable that. But, let's ignore weird canvas experiments and games, since not everyone cares about them. When we talk about traditional, normal software, my position is the opposite -- that document layout tools are adequate for most software.
Let's ignore the web and just talk about what a good general application framework would look like. Maybe about 60-70% of the software I run today could be using a terminal interface. Pure text is good enough for a large portion of application interfaces, and terminals are usually nicer to use than GUIs.
Most other applications I run natively are just documents, and they'd be better if their interfaces were HTML/CSS. Chat apps, text/database editors, git clients, file navigators, calendars, music players: these are not fundamentally complicated interfaces. The only applications I have installed natively that aren't just interactive documents are fringe-cases: games, image editors, Blender. There's a subset of programmers that get wrapped up in having pixel-level control over how their applications look, and I couldn't care less about how they want their applications to look -- all of their interfaces are just text arranged into tables with maybe a few SVGs on the side. They're documents that I can click on.
HTML and CSS have real problems, and we might want to fix a few of them. But they're already pretty good at laying out documents -- arguably better than most other interface tools that we have. And once you start thinking of applications as interactive documents, a lot of design decisions in HTML/CSS make a lot more sense. For example, if HTML were a language for building full application logic, then it would be dumb that there aren't more significant 2-way data-binding tools. But if HTML is a display layer, then it's obvious why we wouldn't want a lot of 2-way data-bindings -- they're hard for users to consume.
Where scripting is concerned, we have two options for this theoretical platform: we can run logic locally, or we can run it on a server. A lot of FOSS developers advocate for serverside logic, and I don't understand that, because I think that SaaS is (often) just another form of DRM that takes control away from users. I'd like to move more logic off of servers -- some of the biggest weaknesses of the web come from the fact that everything is so impermanent; you can't pin libraries, you can't run an older version of a website, you can't easily move data around. SaaS makes the majority of those problems worse. If a calculation can be done locally it is often better for the user to avoid the server entirely and bundle everything clientside.
None of this touches on the network layer or user extensions, which could also be long conversations in and of themselves. And again, I want to stress this theoretical application runtime could be anywhere; we could have a document-only web and do applications someplace else. But I don't (usually) see people proposing anything like that when they talk about getting rid of Javascript -- usually their vision ends up being either, "fewer people should write software, and we'll just use the existing native model" or "everything should be SaaS."
I don't like either of those visions. I think most native platforms are just as bad as the web today (worse if you're thinking about security), and I think widespread SaaS is bad for users. Again, this is pragmatic -- it's not that the web is great, or that it doesn't have fundamental problems, it's that the web currently exists and is available to most people, and I don't think any of the native alternatives are comparable. If someone showed me something better, I'd abandon the web in a heartbeat.
Such a VM is way too complicated, and has other problems. (Also, it is difficult to sufficiently control the features of JavaScript programs (not only the HTML-related stuff, but also Date and endianness), or to replace specific scripts.)
But if you just want a Turing-complete language that is incapable of fingerprinting the host environment, there is TeX, which might become incapable of fingerprinting if one of its bugs is fixed and its file I/O is made more restricted.
Also, for internet stuff there are many other programs and protocols that could be used, e.g. SMTP, NNTP, Gopher, Telnet/SSH, etc. (In many ways this would even work better -- no need to deal with complex user interfaces that don't work properly and don't even reflect the user's intentions.)
There is also Glulx, among other VMs, and I have also thought about designing a VM for this purpose myself.
>We could get rid of Javascript on the web, it might even be a good idea.
We would first need to get 95% of the useful Javascript functionality into the browser itself. I would be happy if I could have Rails's Stimulus and Turbolinks function without Javascript.
Firefox settings. Fingerprint resistance (privacy.resistFingerprinting) will auto-reject canvas reads and show a relatively non-intrusive icon in the address bar that lets you re-enable them for that website[0].
Since WebGL requires you to use canvas, this should also block that attack, although I'm currently going a step further and disabling WebGL entirely (webgl.disabled), since I've seen a few sites (panopticlick for example) get around the prompt specifically with WebGL, and I don't know how they're doing it.
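For reference, both of the settings mentioned here live in about:config and can be pinned in a user.js file so they survive updates. The pref names are the ones given above; everything else about a hardening setup is beyond this sketch:

```js
// user.js -- pin the two prefs discussed above.
user_pref("privacy.resistFingerprinting", true); // canvas prompt, UTC time, timer fuzzing
user_pref("webgl.disabled", true);               // disable WebGL outright
```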
Firefox's fingerprint resistance efforts are showing a lot of promise, although you will have to put up with some quirks (like learning to read UTC time).
I know there's a prompt for canvas, but it doesn't seem to work for webgl. On a fresh install with fingerprint resistance enabled, webgl fingerprinting appears to work fine[1].
Hopefully we move towards a future where advertising is being targeted based on the location of the ad, rather than the person viewing it. If I am watching a car review, I have no problem watching an ad for a car (someone has to pay for the video), since that ad makes sense in that location. But I don't want ads for cars to follow me around the internet.
> If I am watching a car review, I have no problem watching an ad for a car (someone has to pay for the video), since that ad makes sense in that location.
While I mostly agree, it is very annoying seeing the exact same car insurance or <insert random local budget car dealer> ad three times during a ten minute video on YouTube about a super car I was interested in.
Yes but video ads are annoying no matter how you do them. It's because they get in your face and you have to wait for them, unlike picture ads. Nobody likes video ads, since the beginning of internet ads. They should just go the way of the popup-window.
Are you telling me that you don't like making small talk or having a quick friendly chat with somebody and then seeing ads for the service or product everywhere you turn for the next few days?
Although I share this dream, I’m unsure whether Apple is pursuing the security and privacy of their users for intrinsic reasons or just to not be Google. Meaning that when they get all the users and Google is gone, they can turn on all the tracking they like.
I'm completely with you. Cynical me says Apple is merely siding with privacy to make their walled garden attractive and protect iOS-exclusive content and apps. Actually, Apple's cancelled iAd program has already shown they're not above privacy invasion. But for now, Apple differentiating on privacy with plausible economic reasoning is a good enough guarantee for consumers, I guess. In the end, only new legislation and tech (provable privacy that still lets advertisers have their unique-visitor stats, but no profiling) will reverse the demise of public media and the monopolization of the web, and maybe bring back value and trust in online media. As for Google, it's a bit premature to worry about them ;) and they aren't stupid after all.
Not a huge Apple fanboy here, but it should be acknowledged that Apple did an about face and started pushing on user privacy loooong before it was a competitive marketing checkbox vs. other tech companies. Locationgate happened what, 8 years ago?
While I agree with you on the history, I don't trust a corporation to necessarily keep doing the right thing when conditions change in the future.
And even if one can afford Apple hardware, I'd bet on there being significant overlap between the "has keen interest in security" and "likes to fix own hardware" crowds. I could easily afford to have an all-Apple household. I don't, because I don't like Apple's attitude towards individuals fixing something rather than buying new.
This feels pedantic and totally beside the point.
The author was describing "gets all users" as in, gets all users that Apple can get. What's next, would you point out how people in the 3rd world wouldn't be buying Apple/Google phones either?
The point was, when Apple has all of the market share they can expect to get, will they turn around and flip tracking on and etc.
Please don't derail the topic for solely pedantic reasons. I generally love pedantic distinctions, but this one, I think, added little to no value.
I don't think it's pedantic at all. spockz's scenario was explicitly "they get all users and Google is gone" (my emphasis). That is, Apple would be able to re-enable tracking because at that point users would have nowhere else to go.
The counter-arguments were that this would never happen because Apple's unlikely to deviate from the luxury pricing model that's worked so well for them. If Apple "gets all users that Apple can get" then turns tracking back on, while Android is still around with the same tracking but at a small fraction of the price, there's no reason for privacy-conscious users to stick with Apple.
One which users were obviously willing to pay if they switched to Apple because of privacy concerns, and so would be willing to pay again if that need was met elsewhere.
Their goal has been, and always will be, to get you into the Apple ecosystem, where they have complete control over what you can do. Hope you never get your Apple account revoked. Good luck recovering everything you've purchased. Oh, and all the features of the ecosystem stop working as well.
> Meaning that when they get all the users and Google is gone, they can turn on all the tracking they like.
I don't think we have to worry about that. Apple has a small (10-20%) portion of the global market. In many sub markets Apple is struggling against the competition.
What we do have to worry about is digital privacy being unevenly distributed with people that have more money being able to afford more privacy.
> What we do have to worry about is digital privacy being unevenly distributed with people that have more money being able to afford more privacy.
That's assuming it's only a matter of Apple vs. Google and there is nothing to be said about it by Mozilla or Purism or anyone else whose products don't require Apple money to use.
And even if that turned out to be the case, it could sink the whole silly edifice anyway because advertisers want the customers with money. Tracking people who don't buy anything isn't nearly as valuable.
One solution to an arms race is labelling bad actors. If site XYZ keeps trying to find a workaround, maybe there's a way to label them as such to the consumers?
Labelling bad actors is a fundamentally abusable process from both sides: those doing the labelling have a massive amount of control, and those being labelled are in a constant arms race to fix their image via something that isn't currently labelled as "bad". Consider if Apple blocked Facebook from their browser today: Apple would very likely (and probably rightly) be called a monopolist, while Facebook would probably reincorporate under TotallyNotFacebook or something similar or seek to find another loophole in the labelling process.
According to this article, Goldman Sachs is paying Apple a considerable amount for the privilege of facilitating the Apple Card, expects to lose money for a while, but is confident it can make the deal profitable by employing brand-new technology. This suggests to me something highly intrusive. I cannot take Apple seriously on any privacy claims while they partner with GS and do things like develop iBeacon.
GS is the lender whom a credit card owner is asking to pay a merchant. Why would you not expect GS to have the transaction information? They are the ones paying the merchant at the request of the credit card owner.
Not to mention card transaction data is already available for sale, and by the network owners, never mind the banks!
I mean, what are the chances that Goldman doesn’t sell that info?
It does seem odd for Apple to choose to hand that data over to the bank with the absolute least scruples. GS has made a name for themselves screwing their own customers.
I would’ve hoped Apple would partner with a smaller bank that has something to gain, and then say something like “because we care about your privacy, we have partnered with Bank X to develop the first credit card with truly private transactions” or something like that.
The fact that they just leak the data to the credit card industry is odd. With all their cash they could’ve bought a bank and issued the card themselves. Hopefully that’s their endgame.
> “because we care about your privacy, we have partnered with Bank X to develop the first credit card with truly private transactions”
It would have been awesome if Apple provided the lender and Visa/Mastercard/AMEX with a non-transferable license to use client/transaction data with Apple being the owner of the data.
Couldn’t that still be the deal? Certainly Goldman is in a position to use this data for something profitable in-house that doesn’t seriously impact privacy at an individual level (there are probably valuable signals in the transaction data when it comes to algorithmic trading), and it seems out of step with their broader privacy strategy for Apple to give Goldman unfettered rights to use the data however they want.
Is it possible to have a credit card with truly private transactions? The lender (GS) has to pay the merchant (Walmart) on your behalf. How can they do that without the transaction data? And if you want to dispute a charge, how can the lender allow for that if they don’t know the transaction belongs to you?
They could be desiring data (even anonymized) to assist leading edge prediction of market trends (purchasing, interest, etc) immediately as they happen. In a world of algorithmic trading and futures speculation, perhaps that matters a great deal?
It is Goldman, though, after all, so you could be right!
We know from this case of insider trading based on spending data at Capital One (https://www.bloomberg.com/opinion/articles/2015-01-23/capita...) that such data would be extremely valuable. But it sounds like it would probably be illegal, because merchants wouldn't agree to let their data be used like that.
Retailers may sell data (via payment networks) to Google so the latter can use it for their advertising business, but it seems (at least to Matt Levine in the article I linked) that they can't/won't sell it to hedge funds to trade on:
> does that mean that Capital One was allowed to trade on this data for its own profit? Wouldn't that be amazing? Surely the answer is no: I assume that Capital One signed agreements with retailers (or rather, with Visa and MasterCard, which signed agreements with retailers) in which it promised not to disclose transaction data, or use it for nefarious purposes.
If you think about it this makes sense. A retailer selling material non-public information to a hedge fund which then trades on it is essentially the same as them executing trades based on the data via the fund, which is obviously insider trading.
No, I don’t think that’s correct. “Selling” data is typically enough to make it “public” (i.e. it’s dissemination), after all most financial data isn’t available for free (e.g. stock prices).
That can't be right, otherwise a CEO could "sell" their revenue figures to a golf buddy who then trades on them. Possibly allowing a reasonably broad set of people to buy the data is enough, but I don't think that's what we're talking about here.
If it can be done in a privacy preserving way, I'm all for Goldman profiting from it and making the market more efficient at the same time. Goldman has done much worse things than this.
Should you recheck this understanding, in case you're leaving money on the table? Trade all you like in the US on NPI if you're not in a fiduciary position w/r/t the info holder and aren't picked out expressly in the rules as an insider. The classic example is the cabbie trading on NPI gleaned from his passengers' conversations. At least this is what I gleaned from securities law classes. IAAL but IANYL, so take it for what it's worth.
You're right that my knowledge of insider trading isn't strong but according to sec.gov the following qualifies as insider trading:
>Employees of law, banking, brokerage and printing firms who traded based on information they obtained in connection with providing services to the corporation whose securities they traded;
Correct me if I'm wrong, but that sounds like this particular situation to me.
In this case, I think the "services" they are providing in conjunction with this offering would be to MasterCard, Apple, and the cardholder, so I don't think they would be constrained with regard to anything else for /this data/. Though it really depends on the agreements between MasterCard and Apple, about what data would be considered "misappropriated" (see sweeneyrod's link[1]).
They still couldn't /legally/ trade on things like "companyX asked us for a loan, so let's buy/short/etc companyX before they announce it", as that would be servicing companyX directly, and they learned the information (about the loan) from that servicing.
The customer segment served by the intersection of Apple and Goldman seems awfully narrow. I'm not sure there's much in it when it comes to gauging wider market trends.
> I'm not sure there's much in it when it comes to gauging wider market trends.
It's not about Apple customers AND Goldman Sachs customers, it's about consumer trends. Having access to millions of purchases as they happen is useful unto itself when it comes to market trends. In this day and age effectively anyone can get HFT hardware / software, but having that pipe of data is a real moat that your average shop cannot compete with.
I think you misunderstand what I'm saying. Goldman would be getting some data from Apple customers who were also using this Goldman issued credit card - hence "intersection".
Is that a diverse enough customer segment to draw any wider conclusions beyond this particular group of people? It seems pretty narrow to me, but perhaps you are right that the data has some value regardless.
Isn’t it strange that an article with a very specific title immediately changes the topic and starts promoting Brave (a browser that practically nobody uses and has a controversial business model) while there are companies like Mozilla that have a proven record of protecting users’ privacy online? It’s not the first time I’ve seen these aggressive, misleading, promotional articles, which makes you wonder whether Eich and friends are desperate to hype their wares by injecting Brave into clickbait articles like this.
Yeah I'm fed up with unwarranted mentions of Brave. Indeed they are always "aggressive, misleading, promotional", and nothing close to informational. The article was supposed to be about Apple and Webkit, and Brave is just off-topic.
Brave is not a privacy-focused browser. It is an ad-focused browser, and its business model is just this: ads, through Basic Attention Tokens. Privacy and BATs are in conflict, and Brave will never be incentivized to respect the privacy of all its users. If you want privacy for yourself and for everyone, any competitor except Chrome is already better.
Brave is not a solution for a browser user's problem.
Competition that's preferably not based on Chromium, because Chromium's marketshare gives Google incredible leverage on the market. Witness how they are pushing AMP.
At this point there are only two browsers left: Firefox and Safari.
I’m not seeing the connection between Chromium and AMP.
AMP, for what it’s worth, is based on existing web standards. It’s a restrictive way of using those standards, but it sits on top of existing standard tech. And Google is using their position as a search engine to push that, not their position as a browser vendor (as evidenced by the fact that AMP has the same effect in Firefox/Safari as it does in Chrome).
The connection is this: Google has a large amount of influence over the Chromium code base. Using Chromium, a codebase Google controls, gives them additional leverage in controlling how the web is viewed and built.
I don’t think anybody is disputing that Google’s influence over Chromium gives them leverage over the browser space. This is evident in things like their changes to how TLS certificate metadata is displayed, support for TLS 1.3, and changes to how content is rendered.
But this comment thread includes the claim that their influence over Chromium is being used to push AMP, and I don’t see how they are using their position with Chromium to push AMP, a technology that uses a specific subset of existing web technology and operates cross-browser already.
Technically correct, but not useful. Three of the seven members of the committee are employed by Google, and the other four members are each from different companies: https://github.com/ampproject/meta-tsc#members. Plus, it's not clear whether each member has the same amount of "power".
> it's not clear whether each member has the same amount of "power".
Each member gets one vote. Membership to the group is decided by the group, with a goal of no more than 1/3 coming from one company. The distinction I pointed out is both technically and practically correct.
There is always something curious about an opinion so strongly held and communicated that is at the same time so wilfully ignorant.
Brave is clearly a privacy-focused browser: it takes very little time for a technically-minded, veteran HNer to kick the tyres on that project's focus and codebase to understand that privacy is their USP.
It's obvious the whole team believes in the idea of a user agent being an Agent of the User.
A little more time reviewing key figures, from Yan Zhu through to Johnny Ryan, reveals the calibre and integrity of people working on this project.
They are attempting to build a model that upsets the current surveillance capitalism status quo, so it's no surprise that there are attempts to spike perceptions around the project.
Sure, I get it. But the seriousness with which you attribute a benevolent mission to a for-profit like Brave, as if. How many times do we need to go down the same road of talking about features vs what really counts: track record and who's actually standing behind the technology? Clue: people. Why would you trust Brave to begin with? Because Brendan Eich is such a role model (a racist homophobe, last time I checked)?
As for homophobe, I reject your definition. Call me what you want there, but “racist” is a lie. Either yours or your “last I checked” source’s.
From what you write, nonprofits are innocent and for-profits are guilty. I worked for a nonprofit or its wholly owned for-profit subsidiary for 11 years, and I can tell you that the profit motive does not go away in nonprofits. Check the 2017 form 990 on Mozilla’s site for the top salary, >$2.3m. I never got 1/3rd that and went down to 1/15th to start Brave.
Brave uses all open source for auditability and we pay for audits as well as bug bounties. We pay the user 70% of user private ad revenue. For publisher ads (not yet done, working with publisher partners) we will pay users the same 15% we get - publisher gets 70%. So we won’t make revenue without our users being happy and making more than we make. Let’s see Firefox share Google search revenue (which held up tracking protection in Firefox for years) with its users, giving more to the user than Mozilla gets.
Your ad hominem argument against an open source product is absurd on its face. Should right wingers use only software from righties? How many tribes must hive off and build their own software, and reject open source that’s ritually unclean? Judge products on their observable design, implementation, and business properties.
You make it sound like the whole article is a bait and switch. I went to see, and it was two sentences about Brave, right after mentioning Chrome and ad blockers. Then it goes back to Apple. They also mentioned Mozilla towards the end.
Do you think Mozilla, Chrome, and ad blockers were also part of this marketing effort, or is it possible that all this was just normal context?
Brave’s zero market share would suggest to me that if it should be mentioned at all, it would be as a side note. For years, when dealing with Linux and free software, the argument given by these types of publications was that they weren’t worth mentioning or considering because they had no market share. When did Brave (AdBuddy) become the defender of users’ rights and online safety? And why would a browser with no users be mentioned in every article you read lately about browsers? How can it have such an impact on editors, but still not on users?
A new browser, a notable founder, a modern technology, a new ad model in an increasingly privacy-focused world. It seems clear why that would capture attention, whether you individually like it or not.
Mozilla doesn't block ads and trackers by default, so I would say Brave takes protecting users' privacy more seriously, as they block all that by default. They also provide Tor browsing as an option for even more privacy, with the possibility to browse onion sites. Something Firefox has refused to do for some reason.
When using Tor it is important to be indistinguishable from other Tor users. If you can be fingerprinted by user agent, screen size, fonts, add-ons, or numerous other indicators, then the anonymity set of your browsing behaviour essentially becomes just you.
That's why the Tor browser is set up to be as homogeneous as possible. Using Tor within Brave does not provide the privacy that users might expect, and Brave even points that out on their website. Hiding your IP is a good start, alongside blocking known trackers, but it's only one component of properly avoiding tracking online.
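To put a number on how fast an anonymity set collapses: each observable attribute contributes log2(1/p) bits of identifying information, where p is the fraction of users sharing your value. A toy sketch (the probabilities here are made up purely for illustration):

```javascript
// Surprisal of a set of observed attributes: each contributes log2(1/p)
// bits, where p is the fraction of users sharing that attribute value.
function bitsOfIdentifyingInfo(attributeProbabilities) {
  return attributeProbabilities.reduce((bits, p) => bits + Math.log2(1 / p), 0);
}

// Hypothetical user: a common user agent (10% of users), an uncommon
// screen size (2%), a rare font list (0.1%), one unusual add-on (0.5%).
const bits = bitsOfIdentifyingInfo([0.1, 0.02, 0.001, 0.005]);

console.log(bits.toFixed(1)); // ≈ 26.6
```

Roughly 33 bits are enough to single out one person among ~8 billion, so four innocuous-looking attributes already get you most of the way there. Without Tor Browser's homogenization, the "anonymity set" of a Tor-over-Brave user can shrink to one very quickly.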
Partially incorrect. The last Firefox release enables tracker blocking by default on new profiles. You can switch it on at any time in your existing profile if you’d rather not wait for Firefox to do so for existing profiles.
You could probably say the same about Apple - iAd was pretty intrusive in terms of the amount of data it collected, especially compared to what Apple allowed from third-party ad platforms at the time, but they failed to make a business out of it.
Mozilla has a proven record of remotely installing extensions on your browser to advertise TV shows[0] and sending your entire browsing history to a third party ad company[1]. A clean record indeed.
Nothing wrong with the first instance. I'd much rather Mozilla have independence and deal with one tiny advert.
At least they were "transparent" about the second one. But sending my browsing history to a company I've never heard of is a big ask. At least I can choose to have Chrome sync with Google, this wasn't ever asked.
Your stance regarding Mozilla and their willingness to betray the trust of their supporters makes no sense. How exactly is Mozilla independent if they have to resort to all sorts of tricks to fund themselves?
All of their efforts to wean themselves of the Google cash have been not only pathetic failures, but also breaches of trust.
Many people and long time Firefox users felt back then that Mozilla did do something wrong.
You'd prefer that Mozilla could install any extension they want? You do realize that sets precedent that they're free to sell forced extension installs that run universally on every page?
Give an inch, take a mile. It's always like that every time by every single company in existence.
Yet you'd trust Chrome, which sends everything to Google...? Or Brave, which is a for-profit in search of a revenue stream? Why? Wasn't AdBlock a good enough lesson?
I also noticed this. To be fair Mozilla is mentioned. I wonder what the motivation is, are they planning on building revenue through Brave’s reward program?
Anyway, I wanted to give this more airtime so I upvoted you ;)
True. Though there’s nothing to suggest that a for-profit like Brave should be users’ first option in the fight for user privacy online... what’s their record? If anything, this man-in-the-middle concept is even worse, because it creates an illusion of safety where there is none. Sort of like Adblock and its symbiotic relationship with advertisers. They’re part of the same ecosystem, which Mozilla is definitely not part of.
Yes it often feels like fake news/promotion. Question is if any of these companies can be trusted. It is getting increasingly hard for users to make an educated decision about which tools are safe to use (if any) and the consequences associated with such choices in the long run.
We can just go all the way and call it counter-intelligence, with a significant component of sigint.
OS developers and browsers will need to emit statistical noise to mask a user's identity/activity. That will mean all emissions, down to the wifi packet level. Expect significant restrictions on what JavaScript can do in the future (no more window.history, etc).
There won't be the opportunity to sit back and say "this emission isn't trackable": very smart people at public and private intelligence agencies (Facebook, Google, GCHQ, Spetssvyaz, NSA) are working to find a way.
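Engines already do a version of this for timers: after the timing attacks of recent years, browsers coarsened performance.now() so scripts can't read a high-resolution clock. A minimal sketch of the idea (my own simplification, not any particular engine's implementation):

```javascript
// Sketch of timer coarsening: quantize a high-resolution timestamp to a
// granularity, optionally adding bounded jitter so the quantization
// boundaries themselves don't leak fine-grained timing.
function coarsenTimestamp(tMs, granularityMs = 1, jitter = false) {
  const floored = Math.floor(tMs / granularityMs) * granularityMs;
  return jitter ? floored + Math.random() * granularityMs : floored;
}

console.log(coarsenTimestamp(1234.5678)); // 1234
```

The same "emit noise" pattern generalizes to other emissions: round, bucket, or randomize anything a script can measure so that the measurement carries fewer identifying bits.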
You might not think FB or Google are evil. But we live in a cyberpunk world now, there are criminals who are learning to act more sophisticated. Eventually they'll get leverage over an employee at Google/FB/etc and the data they get access to will be used offensively.
The current guys working in tech, tracking your every move are on the friendly end of the spectrum. They just want to sell you things, or get you hooked on e-cigarettes.
EDIT: Also a note that iOS doesn't have any way to control app network access, I don't think Android does either. So there's another easy front.
> EDIT: Also a note that iOS doesn't have any way to control app network access, I don't think Android does either. So there's another easy front.
I don't know about Apple/iOS, however Android has plenty of third party local vpns that exist specifically to filter per-app internet access (No Root Firewall, NetGuard, etc), as well as iptables GUIs like AFWall+ for those who are rooted.
Without installing a third party app, while you can't entirely prevent an app from going online, there's a toggle in app settings, "Background data: Enable usage of mobile data in the background", though I'm unsure of the exact effect this toggle has. With more technical knowledge, you can leave the Android platform and run a Pi-hole or a Privoxy. It should also be possible on Android to write a no-root VPN that switches between different proxies/profiles for various apps (i.e., use Squid/Privoxy for browsers, use DNS proxies for native apps, and whitelist as necessary).
If you're rooted, you can also run your Privoxy/Pi-hole on the local device; I've had success running a local dnsmasq, however it's far from battery friendly.
In my opinion, Firefox could take it one step further and build uBlock Origin into the browser. Give it extreme speed with native Rust code. Of course configurable, but with a few malware lists enabled by default and a few trackers blocked.
That'd definitely make me switch if I got faster ad blocking and only lost HW-accelerated video.
Brave doesn't have enough of a user base to have any impact, compared to, say, Firefox saying they're one-upping Apple. It also isn't mainstream enough that I'd trust it security-wise; being any number of steps behind upstream vendor security patches is usually bad.
> Publishers and companies rely heavily on online tracking — i.e. collecting (anonymized) data about a user’s activity on the web — to keep tabs on your every move as you hop from one site to the other. [emphasis added]
> While this is typically used for targeted advertising, the implications go beyond just serving relevant ads in that it allows marketers to create detailed dossiers about your interests — resulting in significant loss of privacy.
> This involves the use of cookies, tracking pixels, browser and device fingerprinting, and other adtech-based navigational tracking methods intended to amass browsing activities and build elaborate profiles of web users.
It sounds like many people now use "anonymized" whenever there's no obvious personal identifier (name, email, social security number, etc.) in the data. Never mind that a thorough profile doesn't need one to identify individuals.
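To make that concrete: group seemingly anonymized records by their quasi-identifiers (ZIP code, birth date, sex — the combination Latanya Sweeney famously showed identifies most Americans) and check how many records end up alone in their group. A toy sketch with made-up data:

```javascript
// For each record, how many records share its quasi-identifier combination?
// An anonymity-set size of 1 means the record is re-identifiable despite
// containing no name, email, or other direct identifier.
function anonymitySetSizes(records, keys) {
  const counts = new Map();
  for (const r of records) {
    const k = keys.map(f => r[f]).join("|");
    counts.set(k, (counts.get(k) ?? 0) + 1);
  }
  return records.map(r => counts.get(keys.map(f => r[f]).join("|")));
}

// Toy "anonymized" data: no names anywhere, yet two of the four records
// are already unique on ZIP + date of birth + sex alone.
const records = [
  { zip: "02139", dob: "1970-01-01", sex: "F" },
  { zip: "02139", dob: "1970-01-01", sex: "F" },
  { zip: "02139", dob: "1982-07-15", sex: "M" },
  { zip: "94103", dob: "1990-03-02", sex: "F" },
];
const sizes = anonymitySetSizes(records, ["zip", "dob", "sex"]);
console.log(sizes); // [ 2, 2, 1, 1 ]
```

Real transaction or browsing logs have far more columns than this, so "anonymized" dossiers of the kind the article describes tend to be unique almost by construction.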
For a long time people have said that tracking will be an endless arms race between blockers and ads. You know what else is an endless arms race? Malware. However for many purposes the arms race has been "won" by companies like Apple: vulnerabilities exist, but they are not a major part of the daily life of Mac users. Apple provisionally won the malware arms race, maybe they can win against tracking too.
Great news, I’ll continue to use Safari as my main browser (have done so for the last 10 years). I’m becoming more and more skeptical of Google; it’s just a shame that GCP is such a nice cloud platform.
Privacy for average people will never be a fair fight in limited democracies. The people need to demand a more direct democracy to be able to draw tighter lines.
The point where this becomes annoying is when you're building a third party app or experience that is designed to be embedded in an iframe. Not a hidden iframe to track users or for advertising, but as a first-class experience, which can be embedded and displayed on a page on a different domain, and interacted with by users.
I wish the latest round of privacy restrictions (which I think are overall a decent idea) would take these use cases into account, or at least allow a mechanism to request the user's permission to use third party cookies for sites they trust.
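WebKit actually ships such a mechanism: the Storage Access API lets an embedded third-party iframe ask for access to its own cookies, gated on a user gesture and (as I understand it) prior first-party interaction with that third party. A sketch of how an embed might use it — the ensureCookieAccess helper and its injectable doc parameter are my own, added so the logic can be exercised outside a browser:

```javascript
// Sketch: from inside an embedded iframe, request access to the frame's
// own (otherwise partitioned) cookies via the Storage Access API.
// The doc parameter defaults to the real document in a browser.
async function ensureCookieAccess(doc = globalThis.document) {
  if (!doc || typeof doc.hasStorageAccess !== "function") {
    return true; // API not present: assume unpartitioned cookie access
  }
  if (await doc.hasStorageAccess()) return true;
  try {
    // Must be called from a user gesture, e.g. inside a click handler.
    await doc.requestStorageAccess();
    return true;
  } catch {
    return false; // the user (or the browser's heuristics) said no
  }
}
```

If the request resolves, the frame's cookies behave roughly as first-party for that page; if it rejects, a common fallback is to ask the user to open the embedded service in its own tab once.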
Native apps have pretty robust permissioning systems. Why shouldn't websites?
> Native apps have pretty robust permissioning systems.
They really don't, though. No better than websites. I was astounded when I first used Little Snitch and saw how often random apps were making network requests. For example, Translate Tab (a simple language translation app) sends every translation to Google Analytics. And you need a relatively sophisticated tool / expertise just to see this.
It made me rethink the superiority of native apps since all this is so hidden. I prefer websites over native apps because I can run uBlock/uMatrix. Native apps can do whatever they want when it comes to tracking. People aren't even talking about it.
Not only do native apps get off scot free, HN glamorizes them as superior with no questions asked.
That's a fair point. I wonder if 'make network calls' was ever considered as an app permission, or if it was just considered so ubiquitous that it would be granted by default. Or if there was any thought made around 'same domain' and 'cross domain' restrictions for apps.
One thing I've experienced with Apple's tracker blocking is that the WSJ guest pass doesn't persist across sessions. It's supposed to last for 7 days (and does on Chrome), but on Safari I have to enter it fresh every time. I wonder: if the WSJ took steps to make their guest pass persist, would Apple view this as a security vulnerability?
I’ve been thinking that SaaS companies that provide JS code to embed could work around this by letting you attach a subdomain; so instead of the JS pointing to, say, adroll.com, it would point to ads.adrollcustomer.com (with DNS set up to go to adroll).
This could probably work, but remember that there’s a lot of tracking that happens by programmatic ad exchanges. To get a sense of this, look at the dozens of sites in a media company’s ads.txt, e.g. https://cnn.com/ads.txt
A big player like Facebook could ask media companies to set up fb.cnn.com or similar, but I imagine this is where we start to enter “security vulnerability” territory, where Apple uses different heuristics to ban this approach.
Huh, OK thanks for explaining. I think I actually understand why this was happening with the particular WSJ guest pass I was getting, and how it can be fixed. Thanks!
I suppose this doesn't bode well for the chances of getting AWS Cloud9 working properly on Safari, since it relies on some shifty third-party cookies to function.
Guys, it might be possible that our users consider us an anti-privacy-practicing company, given that we were listening to anonymized audio recordings, similar to what Google has/had been doing
Marketing - Might be. Let's put up a banner about it in every corner
Tech - Find the most sensitive/concerning topic, and let people debate.
Has anyone seen this error [1] before? A user reported seeing it the other day and I can't figure out if it was an issue with Safari, his ISP, or what. It only happens for him when connected via WiFi on mobile. My SSL certs are auto-updating and I manually refreshed them, yet he still sees this a day later. Thanks in advance!
Looks like DNS-level blocking from the WiFi gateway or ISP. When a blacklisted domain is requested, it gets IP pointing to the site with that notification, not the actual IP of the requested domain.
This can be overridden by manually setting the DNS servers on the phone or computer to non-filtering public DNS servers, e.g. 1.1.1.1 or 8.8.8.8.
Thanks! I didn't know American ISPs would blacklist domains so easily. I'm also going to try contacting Comcast as I can't figure out what is "dangerous" about the site. It's just client-side javascript that queries content from reddit's API.
What about offline tracking? I went to a big box store today, and after I got home, I got an email asking me to review my visit. Now, I'm sure that somewhere I ostensibly consented to any and every form of tracking, but it was startling to have my nose rubbed in the fact that every time I use my credit card and everywhere I carry my phone it is being tracked and linked to every available piece of information about me, which I'm sure is far more than I would like to be public.
I don't fill out surveys anyway, but I was feeling particularly pissed, just because it should be socially unacceptable to behave like this, just because you think you can legally do it and nobody can stop you. So I flagged it as spam.
Oh, and the chain store that did this has been in the news for data breaches, not long ago at all.
This is kind of scary, especially with such sparse details. I'm always having to turn off ad blockers to support sites' basic functionality, like forms and even navigation if they rely on ajax requests. What functionality can I support on which versions of Safari? Can I ensure items that talk to my origin are never blocked? Since this is now a 'security' issue, will its implementation be opaque?
> Since this is now a 'security' issue will its implementation be opaque?
Security by obscurity doesn't generally work. FWIW, here's what WebKit has to say about what it'll block: https://webkit.org/tracking-prevention-policy/. I hear that a specific list may also be released at some point.
I work with an enterprise publisher that has literally every item listed as an essential part of how it does business, from running programmatic ads to SSO and Google Analytics. The Unintended Impact and No Exceptions parts would mean a rework of that entire business for all of WebKit. That is true of most web publishers today. I cannot overstate what a vast impact this will have on web publishing.
Maybe those things need to change. We want to move towards a member driven model and move away from ads, but in order to fund that transition ads need to continue running as the product is developed and user base grows. Even then, since we use Okta for SSO, that would also break or require a significant reimplementation to server-side auth, which could also break cache for authenticated users.
We have a grand, beautiful plan for creating a publishing model that is trackerless except for a first party event logger that hashes all PII before it’s stored. We need data to operate, and some of it needs to come from the client. We would share client data with zero 3rd parties.
For now our pages are a brothel of 3rd party scripts. We hate it, we can't survive without it, and forcing this change could force us and most web publishers today out of business.
Below is a quote from the link above:
> Unintended Impact
>
> There are practices on the web that we do not intend to disrupt, but which may be inadvertently affected because they rely on techniques that can also be used for tracking. We consider this to be unintended impact. These practices include:
>
> - Funding websites using targeted or personalized advertising (see Private Click Measurement below).
> - Measuring the effectiveness of advertising.
> - Federated login using a third-party login provider.
> - Single sign-on to multiple websites controlled by the same organization.
> - Embedded media that uses the user’s identity to respect their preferences.
> - “Like” buttons, federated comments, or other social widgets.
> - Fraud prevention.
> - Bot detection.
> - Improving the security of client authentication.
> - Analytics in the scope of a single website.
> - Audience measurement.