Facebook emerged as a big player in this field because the other companies were too expensive for consumer apps, so only the biggest players used them. Facebook made it cheap, automated the process, and allowed targeting without advertisers even having to specify which audience to target, all at a price startups and SMBs could afford.
Google does the same. In fact, all the big advertising networks do, or use a DMP to do it on their behalf.
I realize that this discussion is about privacy, and in that regard, no company should be allowed to do this. However, this is just the tip of the iceberg. Almost all the big players in travel have huge customer profiles on us, built from our credit card transactions, credit history, data from insurers and other friendly companies (you scratch my back, I scratch yours), and various other practices that frequently compromise users' privacy. I don't know how one can solve this, though.
> Behaviorally-targeted advertising is 2.7 times as effective as non-targeted advertising.
Really? That's it? We're paying for a multi-billion-dollar surveillance apparatus to make ads 2.7 times as effective as the part of the website you automatically ignore, or the part of the newspaper you immediately throw in the trash?
Better ones (in the sense of advertisers thinking they work better and being willing to pay more for them; not in terms of them being “better” ethically) would include YouTube pre-roll ads, product placement in movies and TV, and billboards on boring stretches of road (as observed by the passengers in a car, not the driver).
God those are so awful. I wasn't running adblock on youtube, but that shit made me install one. I could be listening to a nice playlist and suddenly some ad at 200% volume blares in my ear "YOU NEED TO BUY THIS TOOTHPASTE" or whatever. I wouldn't mind being forced to watch ads before being allowed to watch other videos, but let me bank watch time. Tell me how much time I have left before the next ad and let me watch enough ads that I don't see any in the next e.g. 3 hours. Or okay let's say the ads I get depend on the videos I watch, then put the ads on my tab and let me watch them later. Whatever arrangement works, just let me control when the ads happen, because right now it's maximum obnoxious.
Advertisers are shameless, and will exploit anything until they are smacked down by laws. Congress and the FCC had to step in and regulate commercials' volume on TV (https://www.fcc.gov/media/policy/loud-commercials), and I suspect the same will have to happen online as well.
youtube-dl is the only thing that makes the site usable these days. It gets rid of ads and pop-ups, and decreases "engagement" by forcing you to copy the link over to the terminal to download and watch a video. It's even better than how people watched TV back in the day, recording it to VHS tapes and then fast-forwarding through the ads.
That is exactly what would make YouTube ads useless, from the advertisers' perspective.
The key to "impressionability", from advertisers' PoV, is attention. You need to actually be looking at the ad. You don't have to think you're looking at the ad—you might think you're just looking at the "seconds until you can skip" timer—but you're looking (and listening) all the same, at least peripherally, and advertisers think that that's time they have power over you, in at least some subliminal sense.
But if you let users bunch up twenty ads in a row? They're going to queue them up and then go get a drink. The audience won't be looking; won't be listening. The ability to control ads inherently means the ability to ignore ads. Ignored ads aren't impressions.
The only valuable ad is an ad that sits on your attention stack on top of something that was getting your full attention, such that you have to (still with full attention) pop the ad from the stack to return your attention to the previous stack frame. Whether that's an ad in a newspaper between two columns where you have to pass "through" the ad content to find the next article; or an ad on YouTube where you have to wait; or a Google Sponsored link you have to read "past" to find the regular links; or a modal ad on a website that only appears five seconds after (the heuristic on the site says) you've started reading the content. The goal is to put the ad where your active attention is, such that the ad "impresses" upon your attention, however fleetingly. Anything else is a failed ad.
And yes, it's obnoxious. But that's... what advertising is. Those are the only ads that are really doing their job. The ads that don't end up on top of your attention stack? Subtract those from your perception of the field—they're just cargo-cult attempts to advertise that persist because it's really hard to measure the success of ad campaigns. What's left when you subtract those nice, unobtrusive ads, is what advertising as a field is really "about." It's what advertising would be, exclusively, if they got better at measuring things. And it's awful.
It's just that ads are irrelevant, disruptive, and non-consensual at the same time. The whole "ad impressions as subliminal messages" idea is a normalized form of non-consensual (senti-)mental assault or penetration, depending on whether it results in a headache or a purchase.
Youtube has started showing me two ads. The [skip ad] button has turned into [skip ads]. This means the second advertiser misses out if the first ad sucks, because there's no way to skip the first ad alone.
right now, legislatures are asking the wrong questions to the wrong people
The CEO of the wrong company to ask can legitimately say, "We don't record you; I have no idea why you suddenly started seeing ads about that thing you were talking about."
Regarding asking the CEO: his company made the decision to integrate the SDK/cookie into the product, so he is partly accountable. Whether he knows anything specific about how the process works is a separate matter.
> The entire business model is infeasible under GDPR.
Basically, GDPR only prevents DMPs from training their models using your app’s data (i.e. exfiltrating the user’s personal details from the client to a backend.) If, on the other hand, the DMP comes to you with an advertising library that runs entirely client-side and embeds a pre-trained model; where, when you call this library, it decides what profile of theirs a user slots into by doing things like e.g. “young or old based on how they type on a touch keyboard” (but in a much more clever ML-dictated fashion)... then there’s no violation here. You’re using the user’s personal data to derive a category to place them into; but you’re not recording or sending the user’s personal data anywhere.
And when you, as an app, turn around and make a request to an ad network for ad media to display based on that profile, that’s not a violation of GDPR either: the derived profile is no longer information about a particular user, but is rather just a group that the user belongs to, with each profiling group being so large that that assertion-of-fact doesn’t help to de-anonymize the user any more than saying “they’re a human being” does—it’s not information that could be mapped back to the individual user. So it’s fine to send out.
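A minimal sketch of what such a client-side profiling library might look like. Everything here is invented for illustration (the stats, the threshold standing in for a pre-trained model, the cohort names); the point is only that raw behavioural data stays on the device and the ad request carries nothing but a coarse group label:

```python
# Hypothetical client-side profiling sketch; names and thresholds are
# made up, and a crude threshold stands in for the "pre-trained model".
from dataclasses import dataclass

@dataclass
class InteractionStats:
    avg_keypress_interval_ms: float  # measured locally, never uploaded
    avg_tap_pressure: float          # 0.0 .. 1.0

def derive_cohort(stats: InteractionStats) -> str:
    """Map raw behaviour to a coarse cohort entirely on-device."""
    if stats.avg_keypress_interval_ms < 180:
        return "cohort-fast-typist"
    return "cohort-slow-typist"

def build_ad_request(cohort: str) -> dict:
    # The request carries only the derived group, not the measurements.
    return {"slot": "banner-1", "cohort": cohort}

stats = InteractionStats(avg_keypress_interval_ms=140, avg_tap_pressure=0.6)
print(build_ad_request(derive_cohort(stats)))
# {'slot': 'banner-1', 'cohort': 'cohort-fast-typist'}
```

The design choice doing the work is that `InteractionStats` never leaves `derive_cohort`; only the cohort string crosses the network boundary.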
It can be taken a level further, as well; such “statistics that convert to a profile” can be aggregated into one per-app profile. At that point, you’re doing exactly what magazine publishers do: looking at the audience for a given magazine, building a summary profile of said audience, and then selling ad space to advertisers who want to target that summary profile.
Of course, that same library will probably do a region check, and in the case that you’re not in the GDPR-affected area of the world, it will skip its (slightly crap) client-side detection logic and instead feed all your clicks and taps back to its (much smarter) backend, just like the previous version of the library was doing unilaterally.
Up to the point where the client sends something mathematically indistinguishable from random noise to the server. At which point it's useless except for yet-to-be-discovered quantum effects. Which will then be outlawed.
Either way, the whole advertising revenue model is a sickening pile of "they technically didn't object and it's not technically illegal". Luckily, it will die along with DRM and copyright. In the meantime, I sincerely wish all parties who profit from it a stomachache proportional to their standard of living.
The irony is, I've seen quite a bit of industry chatter whenever the topic of 3rd party DMP audience segments comes up, and the general vibe is they are not at all accurate. And this is why so many large companies are investing in 1p and even 2p data. 1p in particular, depending on their business model, can give them the data they need to remain competitive in a post-GDPR world, assuming they can obtain consent and/or prove legitimate interest.
The exact same thing applies to all iOS devices: web banking, airlines, almost every game. There are very few exceptions among the most common apps, e.g. Dropbox, Skype, stock apps. Facebook is a cancer. It gets into the body and never leaves, unless you firewall or hosts-block their IPs.
1) not many people see the evil of Facebook's immoral actions
2) not many people (even here) are aware that they can firewall their Android devices.
It's really disappointing to see how the software ecosystem has degenerated into these shady mercenaries with zero moral compass. The only things that can temper this crazy, greed-fueled appetite for surveillance are regulation and prosecution, because what some of these articles describe is venal and corrupt; only that will restore some sense of propriety and civilized values.
I think everyone agrees there's a market for a simple-to-use solution.
Folks at https://GuardianApp.com are doing just that for iOS. For Android, apps like NoRootFirewall, NetGuard, and Glasswire exist, but other solutions like XPrivacyLua require root, and flashing LineageOS/CopperheadOS and/or de-googling the phone voids the warranty. Most of the 2-billion-strong Android user base wouldn't go anywhere near these.
A stop-gap, then, might be a zero-touch, friction-free firewall/VPN app that's "free", anti-surveillance, and anti-censorship, but also transparent, in that it lets the end user inspect the traffic flowing in and out of their device.
The challenge is that no one wants to pay both their internet provider and a random VPN app for a censorship-free / surveillance-free internet, though they might gladly pay the internet provider an extra premium for the same experience. I know my dad wants this, but he wouldn't pay two entities for the same service.
Maybe what https://puri.sm is doing is the eventual end-game, but I think they're trying too hard. Maybe it's time for an Android phone manufacturer to launch a privacy-focused phone. OnePlus reached its heights by positioning itself as a Nexus/Pixel killer offering a vanilla Android experience... so maybe there's a market for a privacy-focused phone too, or maybe https://e.foundation might pull this miracle off and become mainstream enough to matter.
Eventually, though, governments have to step in and bring forth regulations that prevent relentless surveillance of the end user, similar to how wire-tapping phone calls is illegal.
I think, on Android, with root you could do a lot more and not have to use VPNs at all.
> That gives too much trust to the VPN operator.
Local VPN apps like NetGuard are open-source, btw. And server VPNs like ProtonVPN have no-logs policy. I'm curious, what other guarantees are you looking for?
LOL, they don't need it. A privacy-focused phone doesn't generate money from user data. There are two ways: a dumbphone, or Apple.
It's a bit of a hard problem. We tried to solve it with a permissions system, but it's a hassle: it's hard to tell whether a permission is being used legitimately, and the average user just hits accept on everything because they don't know how to verify whether something seems right.
The GDPR was a step in the right direction in that it allows you to say no to tracking and still use the service as normal.
But merely preventing it on a technical level creates a race where companies and startups keep finding new ways to violate our privacy, while we stumble after them trying to patch the latest evil, hoping it's even possible to patch this time. Stop AJAX calls to third-party domains? What if they start piping it through the first-party server? Etc.
There fundamentally need to be laws and principles in place that set clear lines as to what's okay and what's not; it shouldn't come down to "whatever is technically possible". You may NOT take my personal data, my contact list, or my browsing habits and sell them to a third party, even if it's hidden somewhere deep in your terms of service. No human actually wants you to do that: if you offered somebody on the street five bucks for their phone's contact list, they wouldn't say yes. It's only possible because you do these evil things hidden from view.
There's an argument I have with a guy who stands on the street offering 'free coffee'. So I ask for a voucher, and he explains I have to download an app. I am not convinced that is truly free.
It is the classic tragedy of the commons: everyone doing whatever is best for themselves leads to the absolute worst outcome for everyone (including yourself) in the end. E.g., you running 50 kWh of AC per day is pretty inconsequential; 2 billion people doing the same is not.
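Back of the envelope, the AC example above scales up like this (a trivial calculation, just to make the magnitude concrete):

```python
# Scale the comment's individual AC usage to 2 billion people.
per_person_kwh = 50              # daily AC use per person, from above
people = 2_000_000_000
total_kwh = per_person_kwh * people

# 1 TWh = 1e9 kWh, so this is 100 TWh of electricity every single day.
print(total_kwh / 1e9)           # 100.0
```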
You're asking for legislative controls which, at the end of the day, can still be bypassed either flat out illegally or via legal grey areas. At best it's remedied after the act or prevents only the most obvious misuse. When you give politicians the mandate to control something often they're too technologically/process incompetent to get it right and to make sure the solution is in your interest.
You want to abate this practice? Exit social media (...and life will improve), use RSS for mass information consumption (I've been doing it for ~10 years now; I can get through what would normally be a day's sifting/reading in 15 minutes while taking the first dump of the day), and use a browser with extensions that give you more fine-grained control, e.g. Firefox with NoScript.
Be in control of your world.
You can abuse the analytics event system by encrypting data and submitting it as a string attached to analytic events. These can't be differentiated from normal click tracking.
Server-side, you can then pull this data out via the API and decrypt it.
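A toy illustration of the smuggling trick described above. The XOR "encryption", the event names, and the payload are all stand-ins (a real abuser would use proper crypto); the point is only that the payload rides along as an ordinary opaque string on a click event:

```python
# Toy sketch: exfiltrate data disguised as an analytics event label.
# The XOR cipher and all names here are invented for illustration.
import base64

KEY = b"shared-secret"

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def make_event(payload: str) -> dict:
    blob = base64.urlsafe_b64encode(xor(payload.encode(), KEY)).decode()
    # Looks like any normal tracking event with an opaque label.
    return {"event": "button_click", "label": blob}

def extract(event: dict) -> str:
    # "Server side": pull the event back out and decode the label.
    return xor(base64.urlsafe_b64decode(event["label"]), KEY).decode()

evt = make_event("contacts=alice,bob")
assert extract(evt) == "contacts=alice,bob"
```

From the analytics platform's point of view, `label` is indistinguishable from a legitimate click-tracking identifier, which is exactly why this is hard to police.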
When it comes down to it, even things like phone number, MAC address, or current access IP address (as opposed to VPN egress address) are highly security-sensitive information. There should be no way for apps to get access to these things, and if they insist on obtaining access, the ability to fake out that data should be a baseline feature of any modern OS.
"You have zero privacy anyway. Get over it." ~ Scott McNealy, former chairman of Sun Microsystems.
There was a time when legitimate businessmen, with sincerity, claimed that a dark skin color made you deserve to be a slave.
"When plunder becomes a way of life for a group of men in society, over the course of time they create for themselves a legal system that authorizes it and a moral code that glorifies it." ~ Frederic Bastiat
We need to continue highlighting unacceptable practices to drive privacy improvements in legislation.
And we should establish ethics boards and education (ethical software engineering) amongst ourselves to stop building shady things.
Instead I think we should take Mr McNealy's words as prescient. Even if you protect your privacy, people you know are still willing to tag you in photos or upload their contact lists.
Instead of pretending that we can remain private online perhaps we should be thinking about how to compartmentalise our online identities so that the whole 'us' can't be revealed by an inadvertent mouse click.
TLDR: Scott was right, so let's work out what to do next.
It's unfortunate that Android is singled out here; it'll lure iOS users into a false sense of privacy.
Also, the author literally says that Google's trackers appear in even more apps than FB's, but still chooses to use FB in the headline. Sigh.
However, I do have the impression that the situation has already started improving significantly, and as the fines and other enforcement actions start happening, these practices will become less attractive (as in "I don't want this in my company because my competitor had it and got bankrupted by the fines").
A nice aspect of GDPR is the civil enforcement, allowing NGOs to sue on behalf of individuals whose data was abused. This helps resolve the problem that the DPAs are useless and it is infeasible for you to sue Facebook. NOYB.eu is one of the NGOs doing some work in that area (mostly pushing the DPAs to do their job by filing complaints, for now) that has led to fines.
Is there a practical way to demonstrate, in a court of law, that one's data has been stored/used/traded in an illegal way?
For example, assume I use a specifically generated unique email address when signing up to one service. After a while I get spam on that email address from a different company. Can I use this as evidence in a lawsuit? How strong will the case be? Would it help if I ask a trusted party (notary) to deal with the generated email address (so nobody ever sees it except the notary and the company)? What options do I have in collecting evidence?
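A sketch of the tagged-address scheme described above: one unique address per signup, so spam arriving at a given address identifies who leaked it. The domain, services, and registry here are made up for illustration, and whether a court accepts such a mapping as evidence is a separate (legal) question:

```python
# Hypothetical per-service email tagging using "plus addressing".
import secrets

registry = {}  # tag -> service the address was handed to

def address_for(service: str, domain: str = "example.com") -> str:
    tag = secrets.token_hex(4)        # random 8-hex-char tag
    registry[tag] = service
    return f"me+{tag}@{domain}"

def who_leaked(spam_recipient: str) -> str:
    # Recover the tag from "me+<tag>@domain" and look up the service.
    tag = spam_recipient.split("+", 1)[1].split("@", 1)[0]
    return registry.get(tag, "unknown")

addr = address_for("travel-site")
assert who_leaked(addr) == "travel-site"
```

Keeping the `registry` with a notary, as suggested above, would mean no one but the notary and the original company ever sees the mapping, which strengthens the chain of evidence.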
In practice, the worst data sellers will pretend to act legally, so they'll often admit in some fine print what they're doing.
Also, if the judge asks the representative of the company whether the company is selling your data, the possible answers are "yes", "I don't know", or "no". The former two mean you win. The latter means the representative risks jail time if it later comes out that the company did sell the data.
After a quick search, I couldn't find a source that claims that any website is blocked in Hong Kong, and several that claim that no website is blocked.
The Internet connection is monitored, there's pressure and self-censorship, but there's no blocked access to websites. It's easy to be bigger than zero.
EDIT: The biggest collection of such websites I could find lists 1129 that are still blocking access to EU citizens, and 252 that have stopped (presumably once they became GDPR compliant): https://data.verifiedjoseph.com/dataset/websites-not-availab...
While I agree that there is pressure from pro-Chinese Government henchmen to intimidate free press in HK, I am curious as to where you get this assertion that the internet is monitored. That would imply some kind of government surveillance. I have never heard of any systematic internet monitoring in HK of the kind done in other countries, like mainland China.
this is not a right/wrong issue. like most of reality, it's full of grey.
Care to share some examples?
GDPR on the other hand has absolutely no political selectivity. It's just a bunch of self selected US sites that can't be bothered to properly implement GDPR (until they get their own version of it).
That makes your claim about the number of websites accessible from Hong Kong about as relevant to freedom of speech as the lack of chlorinated chicken in Europe's supermarkets is to the variety of food we get to eat.
getting data to target ads. Apps gladly integrate them, because they can get better analytics on how the app functions and is used, but also for marketing attribution (comes back to ads.)
I find the outrage at Facebook for this to be a bit obnoxious. Are we equally as outraged at every app that has Google Analytics implemented? It's sending your data to Google, even if you don't have an account!
I tolerate WebRTC blocker, Canvas Blocker, DecentralEyes, FPI, PrivacyBadger, HttpsEverywhere, uMatrix, and NoScript on Firefox and painfully deal with all the broken websites, despite the costs. It is worth it ten times over.
Thanks for opening my eyes to this!