I love how the onus is always on the end user to find and report the "bad" ads. How on earth did that insane status quo become acceptable and widespread? The user is expected to know how to unminify and read JS in order to figure out if they are being screwed or not? Seriously?
The first time I got served malware via web ad was in 1998. I started manually blocking ads by modifying my hosts file that day. Haven't stopped blocking since.
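(For anyone unfamiliar with the hosts-file trick: it just maps known ad/tracker hostnames to an unroutable address so the requests never leave your machine. The domains below are made-up placeholders, not a real blocklist.)

    # /etc/hosts (C:\Windows\System32\drivers\etc\hosts on Windows)
    # point known ad/tracker hosts at an unroutable address
    0.0.0.0  ads.example-adnetwork.com
    0.0.0.0  tracker.example-analytics.net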
It's a broken model, stop forcing it down our throats and stop shrinking the definition of "bad" advertising.
What is the copyright status of all the user contributions on all the Stack Exchange platforms? I would love to see a decentralized P2P Stack Exchange platform with LaTeX support, i.e. a stand-alone client...
If GP meant what I think they meant, it's because scooter startups executed an Uber on cities.
It's not that we should outlaw everything. It's that there's a civilized way of introducing innovation which involves sorting out main issues before deployment, and then there's the Uber way which involves dumping externalities left and right on the unsuspecting populace, while showing the middle finger to local regulations because "we have venture capital money therefore we make the rules".
I propose the Uber way should be banned and punished with extreme prejudice.
>I propose the Uber way should be banned and punished with extreme prejudice.
I'm not arguing here, just thinking it through... What is the alternative? Do I need to haggle with literally every city I want to have a presence in? Even though there's no law against doing what I want to do? Seems like the scooter rental business would simply never exist in the first place, it would be a huge deterrent.
> Even though there's no law against doing what I want to do?
There is, or at least there are in the case of Uber, pretty much in every city around the world. I'm not that familiar with the regulatory situation of the scooter business in general, but there is a law against littering, and the scooters end up as trash in the cities, creating danger (e.g. for people with no or low vision) and having to be picked up by the police.
In my city (Kraków, Poland), a few startups recently decided to clone the Uber-for-scooters and it turned out to be a huge nuisance. Some interesting tidbits include:
- A few weeks after deployment, the city announced it's calling an emergency meeting about the scooters, because apparently none of the companies bothered to come and talk the parking situation through.
- A lot of noise in the media was created by the blind community, who reported instances of people tripping over scooters left on the sidewalk.
- A friend tested the available scooters a few months ago, and his report mentioned poor technical condition and low-quality, sometimes broken, brakes.
- Deployment in Kraków and other cities triggered an upcoming update to traffic rules, in which electric scooters will get reclassified as motor vehicles, i.e. forced to drive on the road. And rightfully so, because they reach dangerous speeds and pose serious risks to pedestrians.
- Speaking of which, there were at least two confirmed deaths in Poland caused by these scooters, and it's not even half a year since deployment.
Where to park them and how they can safely participate in traffic are two basic issues that absolutely should have been talked through with the officials beforehand. I'm only disappointed the companies involved weren't just banned by fiat.
FWIW, in my experience in dealing with relevant people, the city of Kraków is quite supportive of innovations in the city space. But one has to actually go and talk with them. Apparently, in their desire to be first to market, these companies didn't.
EDIT: My wife was almost hit by some careless electric scooter driver while 9 months pregnant. She didn't tell me at the time so as not to upset me, and perhaps for the best. If they had hit her, I'd be suing and campaigning to get them off the sidewalks.
> there is a law against littering, and the scooters end up as trash in the cities, creating danger (e.g. for people with no or low vision) and having to be picked up by the police.
There is. Chicago introduced them with the claim that everything would work out, and they won't do anything about the scooters. There's lots of documentation about it; they've actually put out press releases ignoring the issues associated with them.
When it comes to Uber almost every municipality had already figured out a way to have taxis in their town.
It's just that Uber's tech scales to national and global levels relatively quickly, making mass haggling necessary; and because that's pretty difficult, you end up with the externalities being dumped on unsuspecting populaces, since Uber's not going to do that mass haggling.
Instead, Uber pushes the haggling initiative onto the towns and municipalities when it used to be the other way around.
I think the GP is saying that in most cases, constituents probably didn't ask for (in this case) scooters. They were most likely lobbied for by the companies who offer them or by some extremely small subset of the public.
Your point seems to skip over the fact that policy creation itself should be reactive - sparked by an obvious public need or because of a prediction backed by science.
Interesting thought. It seems most likely that the public lobbies generally against a problem without much of a solution in mind, or at least without doing all that much research beforehand. I can see how the public might complain about a lack of public transit and end up with lawmakers delivering scooters.
tosdr.org (Terms of Service; Didn't Read) is a website that summarizes websites' Terms of Service and Privacy Policies to make them easier for people to read.
Stack Overflow is given the lowest rating (class E) in terms of user rights. (For reference, even Google has a class C rating.) Here are the worst points taken from SE's privacy policy:
* This service allows tracking via third-party cookies for purposes including targeted advertising.
* You agree to defend, indemnify, and hold the service harmless in case of a claim related to your use of the service.
* This service forces users into binding arbitration in the case of disputes.
* Many third parties are involved in operating the service.
* The service may use tracking pixels, web beacons, browser fingerprinting, and/or device fingerprinting on users.
* Blocking cookies may limit your ability to use the service.
* You waive your right to a class action lawsuit.
* This service can share your personal information with third parties.
* The court of law governing the terms is in a jurisdiction that is less friendly to user privacy protection.
* The service can sell or otherwise transfer your personal data as part of a bankruptcy proceeding or other type of financial transaction.
* The service uses your personal data to serve targeted third-party advertising.
* This service retains rights to your content even after you stop using your account.
I think the next few years are going to be entertaining. At the moment a handful of shitty companies are abusing their ability to track users - but for me (and, I think, for many nerds whose profits aren't affected) the outcome seems clear: ANY AND ALL third-party scripts have to go.
Even when that means losing all the third-party goodness (CDNs, analytics, cloud providers...) that we've come to depend on. Yep, they save an incalculable amount of time and effort - but at the cost of tracking users' every step... We can't have one without the other!
Honestly, if all third-party scripts are blocked, the ad networks will just start making their participant sites serve the JavaScript directly and/or run local applications to proxy the data back to them. And unfortunately most management will force the changes through, because they want in on that ad revenue gravy train.
Cloudflare's CDNJS sets, by default, crossorigin=anonymous and subresource integrity. Add a "Referrer-Policy: no-referrer" header, and all the CDN sees is either a) nothing, because the client has the resource cached or b) a request for a certain version of jQuery without knowing where it came from.
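For anyone who hasn't seen that combination, it looks roughly like this (the integrity value below is a placeholder, not a real digest):

    <!-- pin the exact file with SRI and strip cookies/credentials from the request -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js"
            integrity="sha384-PLACEHOLDER-not-a-real-hash"
            crossorigin="anonymous"></script>

    <!-- plus, sent as an HTTP response header from your own server: -->
    <!-- Referrer-Policy: no-referrer -->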
If you think this is a good idea because it enables technical enforcement, do you also want to ban static.example.com? Because if you don't, you'll soon have pointstothirdpartyadserver.example.com. If you do, you'll have https://example.com/proxytothirdpartyadserver/ instead...
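And that subdomain trick is just a one-line DNS change (hostnames below are hypothetical):

    ; a "first-party" subdomain that is really the third-party ad server
    ads.example.com.   3600  IN  CNAME  collect.some-adnetwork.example.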
This is not a problem that can be completely solved on a technical level. Enforce the hell out of GDPR and see the problem shrink.
The ad networks only satisfy market demand. It makes sense that if you are buying ads you want the best ROI on your ad budget, so if you can choose between two ad networks with the same cost per impression, one of which shows your ads to random users and another which shows your ads only to your target audience, you will choose the one that brings you the most revenue. Businesses don't care how those "target audiences" were created, but they want access to them.
I could not give less of a shit about justification of bad behavior by invoking market demand, if it leads to me finding a turd in my drinking water.
Unless, of course, it leads to the dawning understanding that maybe the Econ-101 model of supply-and-demand is a spherical-cow level of analogy that rapidly breaks down when it encounters the real world, and that regulation isn't a toxin.
Is it not possible to create an ad network that only supports static targeted ads? The ad network itself can run its own JavaScript, but I don't see any reason to allow the advertisers themselves to run scripts inside ads, other than to make the ad look a bit flashier and interactive.
It's perfectly possible. The problem is that these very scripts the ad network is serving are what people are objecting to. Exactly these scripts, and exactly this tracking. It creeps people out.
I think the best solution so far is for advertisers to just purchase banners on the sites their target audience is on, just like in the old days. Maybe use something like BuySellAds to find and buy "directly" from that niche site, but BSA is pretty bad.
It is ironic that just after I read this item I opened Safari and discovered that the latest update (Safari 13.0) had removed all the protection against trackers, malicious advertisers, and unwanted media that I had previously set up. If I hadn't noticed, I would have been exposing my computer to those hostile elements without any warning.
This is not what I want - Apple has done a bad thing.
I miss the days when you could simply buy an ad spot on a specific site for X days/months (no targeting, except when choosing what site to buy ad space on, and no JavaScript, just regular banner ads).
Is there still a market for this kind of low-tech ad buy these days?
You can probably do it, but your competitors who do target users will end up with a $30 CPA while you'll be at $300 (or worse).
So, unless you are OK with 10x the user acquisition cost, you will quickly stop doing it.
/u/cj specifically lamented the lack of pay-for-time, not pay-per-click. Of course you have to track to accurately reward on a pay-per-click basis - that's why pay-per-click is a shitty model.
Automatic ads might be 10x cheaper, but 1000x less effective. Who knows your audience best, you or the automatic algorithms? And automatic ads are easier to block, game, and cheat.
> Automatic ads might be 10x cheaper, but 1000x less effective.
What are you talking about?
Let's say you are running a SaaS for Azure admins. Do you want to buy the top banner on slashdot.com for the whole month, with no targeting at all, for 50K/month, or do you only want to show your ad if the user was interested in at least 2 stories with the Azure tag, for 5K/month?
Automatic (aka highly targeted) ads are more effective, not 1000x less.
I just love the automatic / highly targeted ads that I get for products I have already bought. It doesn't matter if it is a low-end consumer product or high-end hardware; I get a lot of ads for things I have already purchased right after buying them.
Without any knowledge of the industry, it seems obvious to me that you can have one ad that goes to an audience where 99% of people don't want the product, and another that goes to an audience where only 90% don't, and the latter is ten times as effective, even if it's annoying most people who see it.
I wonder why more people don't try to do this, and use that fact for marketing - as in, "we actually have a semblance of a moral compass, please shop with us".
The highest conversion rate I've ever had on ads was when you could buy out a specific subreddit.
Now Reddit just sells generic keyword-targeted ad space and it's really not worth it anymore, as you are selling your soul to an algorithm that doesn't know your audience.
CodeFund is what we consider "ethical" and different from Carbon Ads and other ad networks because (1) we do not allow 3rd party scripts, (2) no cookies accompany ads, (3) we are 100% open source (gitcoinco/code_fund_ads), (4) we only work with advertisers that provide relevant products and services that are good for the developer community, and (5) we literally do not store any private data, including IP addresses.
Interesting and informative read. In particular, I like the way you explained how the ad selection process works on a typical page. I've been looking for a reference to link to for an article of mine about ads.
Speaking of ethical practices, why is there a notification icon on a blog, and why is it having a seizure? :).
When I was active in the Minecraft server scene, minecraftservers.org (and other sites) had very simple low-tech bidding like this. I presume it's still mostly the same.
They would hold weekly/monthly auctions for a "Sponsored Server" slot (your listing shows at the top of the page). Can't remember if it was weekly or monthly, but I recall prices being around $7-8k per slot. This was just implemented on the website as a super simple auction: you could see all the bids, you could only bid higher, and the auction would expire at some set time.
Then there was an even more low-tech layer around it all - people would rent their spots to smaller servers that couldn't afford to outright buy a slot. This was implemented as... Google Sheets with one cell for each hour time slot, with the sheets being linked to on various forums. You just sent a message to the Skype contact if you wanted to buy an hour or two, and for a couple hundred bucks you, too, could show off your server in your very own temporary minecraftservers.org sponsored slot.
> Is there still a market for this kind of low-tech ad buy these days?
Yes, but often you have to contact the web site directly.
I wouldn't be surprised if some of the non-Google ad companies like Exponential will do this, too. They did it before Google existed. They might still have that ability.
A metric ton of ads are still sold like this, but they get served through ad servers like DFP, which can source inventory from these direct ad sales or from AdSense, depending on which makes the most money.
Is it possible to take an investment and continue to follow this mindset? Bootstrapped companies can afford to do it, but investors will not be happy to find out that they are losing money.
Which is kind of the point of taking VC money; if they wanted you to grow sustainably, they'd tell you to get a loan from a bank. These days I treat "took VC funding" as a negative when evaluating whether to commit to using a service.
Isn't the definition of fingerprinting something that personally identifies you? There's only one person connected to each fingerprint, and it's there to identify you across websites. That's what PII means to me. How can Stack Exchange say fingerprinting isn't collecting PII? I wonder what they would call PII.
The purpose of fingerprinting is to do "re-identification". The ad network waits until the browser visits some site that does have PII for the user, and then combines that PII with the fingerprint to re-identify the user everywhere they go.
you mean a cookie. PII is like your name, address, phone number, etc. cookies are used to track your browser, not that stuff.
an IP address is actually an example of something that could be both PII and useful for tracking your browser, but there are practical and legal drawbacks to trying to do that.
ad networks don't offer targeting that is anywhere near that level of sophistication. demographic profiling is really coarse and not very accurate. they're probably just using a history of other requests they've gotten for that fingerprint, and trying a few coarse predictions based on that.
I also don't know about "sites" offering "PII" back to ad networks based on a fingerprint match. but I could be wrong about that or maybe people are using a more expansive definition of PII.
The profiling exposed at the top level is coarse. I guarantee you that companies the likes of Acxiom are tracking every woman's periods and they've been doing it for decades. That PII gets squirreled away into their database and feeds into whatever genericised category it is useful for. Sign the right NDA and you get all the goods.
You don't need to. The reason fingerprints are problematic is that you can't get rid of them. They link all of your activity together into a single profile that you can't turn off.
Take a step back for a second -- why is it problematic to be able to link information to someone's real-world identity? Because who I am -- my address, my name, my face, and so on, is very difficult to change. It's a problem because once you've linked an activity to me in the real world, it's now permanently pinned to my identity.
So with that in mind, what is the difference between tracking where I live and stalking me in the real world, and tracking what devices I live in and where I go online? In both cases, you're taking away my Right to Hide, and forcing me to use a single profile that I can't walk away from or operate outside of.
Certainly, the difference isn't that the real world is harmful and the digital world isn't. You can abuse, stalk, price-gouge, censor, and deny service online just as easily as you can do it offline.
If you have a persistent identifier for me, that links only to me, that allows you to recognize me on every website I visit, and if I can't escape that identifier, then you have already traced that information back to me as a person. Knowing my name or my address doesn't matter, those are just facts about me. 'I' as a person am the persistent identifiers that point at me.
By asking some site which has PII of yours to look up the fingerprint and associate it with the PII. Like, anyone you've ever given an email address, phone number, or credit card number to.
Sure, let's give a hypothetical but quite plausible example. Let's say that I run website ABC and the ad intermediary scripts make a note that fingerprint XYZ visited my site.
They then give that data to Facebook who some days later records a visit from user srbby with the fingerprint XYZ, so they know that "srbby" with phone number 123455 (which fb has) visited site ABC.
Also, your aunt has your phone number in her contact list and she (like a few other people, to make it certain) lists the name "Bob Smith" for it, so FB can link that user "srbby" (Bob Smith, phone# 123455) visited site ABC.
Afterwards they sell that data to some ad agency that combines it with location data from either your cell phone provider (the major US cell providers sell such data) or some driving or taxi app to note your travel patterns and extract where you live and work.
So they know that fingerprint XYZ has the following (long) list of user accounts, visits sites like ABC, has that particular phone number, is most likely called Bob Smith, and most likely lives at such and such an address and works at ACME Inc (or drives there every morning for some other weird reason). For some fingerprints some of that data will be wrong, but it's mostly accurate, and definitely accurate enough for their purposes.
They don't really give all that data around to every advertiser just because (well, not for free), however, whenever an ad "auction" asks "heeeey, who's going to bid the most for which ad for fingerprint XYZ?" then this is the profile that's going to be used to make the winning, most targeted bid.
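In data terms the whole scheme is just a join on the fingerprint. A toy sketch of the shape of it (nothing below is a real API, and all the field names are made up):

    // broker side: merge browsing observations with whatever PII leaked in from "partner" sites
    type Observation = { fingerprint: string; site: string; ts: number };
    type PiiRecord   = { fingerprint: string; name?: string; phone?: string; email?: string };

    function buildProfile(obs: Observation[], pii: PiiRecord[], fp: string) {
      const visits = obs.filter(o => o.fingerprint === fp).map(o => o.site);
      const identity = pii.find(p => p.fingerprint === fp) ?? {};
      return { fingerprint: fp, visits, ...identity };
    }
    // at auction time the bidder only needs the fingerprint to pull the whole profile back up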
You can go to my ISP with a warrant and ask for their records, which means that, interestingly, an IP address is only PII if the entity that holds it can lawfully request that information.
Using that logic, a browser fingerprint would also be PII if the ad network can use it to determine who you are, or presumably if they can link it to other PII.
It seems like our browsers need a sandbox mechanism for 3rd-party JS that restricts a) DOM access and b) AJAX.
Of course, I use uMatrix for that at the moment, but it'd be better if we, as users, could tell which sites are actually interested in providing privacy by hobbling advertising antics from the get-go.
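The closest things that exist today are the iframe sandbox attribute and CSP, though neither is a complete answer. A sketch, with a hypothetical ad URL:

    <!-- without allow-same-origin the embedded script gets a unique origin:
         no parent DOM access, no shared cookies or storage -->
    <iframe src="https://ads.example-network.com/slot.html"
            sandbox="allow-scripts"></iframe>

    <!-- for scripts you embed directly, a CSP response header can at least pin where
         they may phone home, e.g.:  Content-Security-Policy: connect-src 'self' -->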
Am I the only one who doesn’t care about this kind of tracking stuff? Like I really don’t care, is there some reason why I should? I feel like the worst thing that can happen is I get shown more relevant ads. If I use Adblock then it’s irrelevant, but if I don’t then what’s wrong with having targeted ads? A court has already ruled an IP address is not a person, they don’t really know it’s me, it’s just a construct they created that they think is me.
The "premium" aggregators I've seen used in some enterprise software campaigns can be extra nasty. Someone at work forwarded me a link (over IM) that a competitor had sent them targeting our userbase (our company name was in the title of the page; the contents were about why they were better), since mailing to an email on our domain seemed odd. A few minutes later they sent me a "you visited our site, now call for a demo" email that had my full name, and a week later they called my parent's house asking for me (I guess it was the only historic phone number associated with my name). Ever since then I have viewed tracking data as unacceptable, because the likelihood of misuse depends only on how much someone is willing to pay an aggregator to turn a small ID indicator into a person.
Thankfully I am in the US because the EU is nuts. Isn’t it their fault all these websites have a stupid cookie warning? People should just accept the fact that using a web browser means cookies.
While historically the cookie warnings were only an annoyance, these days they often have a (working!) option to reject cookies, giving those popups at least some purpose.
Really though setting cookies isn't the problem, and you don't even need to show a popup according to EU law. You only need that popup if you use the cookies for nefarious things such as sending tracking information to ad networks. Setting a cookie for functional things such as remembering logins has always been allowed without a giant-ass disclaimer and nothing has changed in that regard.
I think devices need better security. Anything that JavaScript can access should default to null. Only once a user allows a certain website to access certain data would that data become available to that website. If every browser did this, along with the APIs on cellphones, then users could finally regain control of what is collected.
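Browsers do already gate a handful of APIs this way (geolocation, camera, notifications), and the Permissions API exposes the state. A minimal sketch of that existing mechanism, not the broader null-by-default model proposed above:

    // returns true only if the user has explicitly granted this site geolocation access
    async function canUseGeolocation(): Promise<boolean> {
      const status = await navigator.permissions.query({ name: "geolocation" });
      return status.state === "granted";
    }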
That only removes the ads that display inline (above the question and between answers), see [0]. The ad that started this controversy was a sidebar ad ([1]).
Most adtech is RTB (real-time bidding), where ad slots are auctioned off and filled as you load the page. SO (and publishers) have no real control over the ad payload that comes back. There has been progress toward using sandboxed iframes, but there's still JS running inside those placements.
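For anyone who hasn't looked at the plumbing, the request that fans out to bidders when a slot loads looks roughly like this (loosely modeled on OpenRTB, heavily simplified; the field names are illustrative, not the actual spec):

    interface BidRequest {
      id: string;                                               // auction id
      imp: { id: string; banner: { w: number; h: number } }[];  // slots up for auction
      site: { domain: string; page: string };                   // where the ad will render
      device: { ua: string; ip?: string };                      // device/fingerprint signals ride here
      user?: { id?: string };                                   // exchange/vendor-specific user id
    }
    // each bidder replies with a price plus ad markup; the winner's creative,
    // JS included, is what gets injected into the publisher's page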
The JS won't be going away; it's part of a long supply chain of data, verification, viewability, anti-fraud, and other layers baked in. For those saying publishers should do 1st-party ads: that would lose them most of their income due to operational and sales overhead, and it doesn't really prevent everything anyway, because they still have to accept the ads advertisers want to run, including the JS from vendors.
However, the situation is slowly improving. Adtech has weathered adblocking, native ads, and anti-tracking tech, but has failed to police itself because of a lack of consequences. Now there's finally regulatory pressure with GDPR, CCPA, and more that will force a change from the outside. I expect many of these issues to be greatly reduced within the next 1-3 years.
> For those saying publishers should do first party ads, that would lose them most of their income
Yes? Does that make it less likely? Less needed? No.
If publishers can sell targeted ads with fraud detection, they will. But when they can't (because the idea of the auctioned third-party JS blob finally dies), there will be money in dumber ads.
What might happen, of course, is that if someone wants to spend $X on ads that are dumb and untargeted, they might as well buy a spot on the side of a bus. So there would be a flow of ad money back from the web to traditional advertising.
> So there would be a flow of ad money back from the web to traditional advertising.
That's not an improvement, because manipulating matter involves using up more resources. For all their problems, on-line ads harm the climate less.
That said, I of course welcome anything that can roll back the current state of on-line advertising. Even dumb on-line ads would still be more profitable than physical ones, and since advertising is a zero sum game, I don't expect the publishers to really lose money on that.
Ads are not going to become dumb. They're going to become privacy-compliant and more secure. Some signals will go away while others are replaced, but we're not suddenly going back to ads on the sides of buses.
I'm aware of the TCF, but it doesn't really address what I'm talking about. I couldn't find any real information about what Privolta is doing on the website, so can't comment on that.
What I have been seeing out of the industry (primarily what the IAB has been talking about and Google's "privacy sandbox") doesn't rise to the level of protecting privacy yet. Perhaps someday, but right now what it looks like is that they're seeking ways to continue an inherently privacy-invading business model while minimizing the impact of potential regulation.
Short summary: ID generation based on regulations and consent. Random number or cohort sample. Local data only; send and receive an interest matrix and ad payload while decisions happen locally. There are plenty of ways to provide most targeting capabilities without revealing a particular user, until they decide to convert as a customer.
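A rough sketch of the "decisions happen locally" part, under the assumptions above (everything here is hypothetical): the server ships candidate ads with interest weights, the browser scores them against a locally stored interest vector, and only the chosen ad id leaves the device.

    type Candidate = { adId: string; weights: Record<string, number> };

    function pickAdLocally(localInterests: Record<string, number>, candidates: Candidate[]): string {
      let best = candidates[0].adId;
      let bestScore = -Infinity;
      for (const c of candidates) {
        const score = Object.entries(c.weights)
          .reduce((sum, [topic, w]) => sum + w * (localInterests[topic] ?? 0), 0);
        if (score > bestScore) { bestScore = score; best = c.adId; }
      }
      return best;  // only this id needs to be reported back
    }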
All I ask for is that my informed consent is obtained before data is collected (and data is not collected if I refuse to provide consent). If that bar is met, then I have no further objections.
But, and this is an honest question, where are the actual proposals to require that?
That's what the "C" in TCF stands for. Consent management platforms are an entire new sub-industry in adtech for displaying, gathering, and managing user consent for vendors.
The proposals are the implementation. The enforcement comes from regulation.
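Concretely, the consent signal is exposed to vendors through a page-level API; as I understand the TCF v2 surface, a tag is supposed to check it before firing anything. A trimmed-down sketch:

    // CMPs expose window.__tcfapi; vendors query it and act on the consent data
    declare function __tcfapi(
      command: "getTCData",
      version: 2,
      callback: (tcData: { gdprApplies?: boolean; tcString?: string }, success: boolean) => void
    ): void;

    __tcfapi("getTCData", 2, (tcData, success) => {
      if (!success) return;
      // inspect tcData.gdprApplies and the consented purposes/vendors in tcData
      // before loading any tag that processes personal data
    });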
Yes, I know. The TCF is a technical infrastructure, and as such doesn't address my concern. It only comes into play if the industry actually starts taking informed consent seriously. That's the part that I'm doubting.
> The enforcement comes from regulation.
Indeed. It appears that strong legislation is the only realistic solution -- but that seems doubtful to me as well, since the big players have been, and will likely continue, working as hard as they can to ensure that any legislation will be token at best.
> SO (and publishers) have no real control over the ad payload that comes back.
That's not my problem; they could choose not to use those platforms. If they can't feasibly guarantee the integrity of the ads they serve, then my logical response as a user is to just block them all by default.
It's almost certainly the ad network. But the advertisers willingly joined the network, and the site willingly uses the network, so they get blame as well.
Bad conclusion. Web browsers are written to be fingerprintable. They are deliberately anti-user. Expecting web pages to "just not" is pointless. The solution is to fix the browser.
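To make that concrete: a page can read a pile of high-entropy values with no prompt at all, which is the part "fix the browser" is aimed at. A naive sketch (real scripts hash many more signals, e.g. canvas, fonts, audio):

    function naiveFingerprint(): string {
      const signals = [
        navigator.userAgent,
        navigator.language,
        `${screen.width}x${screen.height}x${screen.colorDepth}`,
        String(new Date().getTimezoneOffset()),
        String(navigator.hardwareConcurrency),
      ];
      return signals.join("|");  // stable enough to follow a browser across sites
    }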