I'm on mobile data nearly 100% of the time most months. 14GB costs me $50-$70 to download, depending on which network I'm on (I have two). "Unlimited" plans actually aren't when used as a hotspot, though T-Mobile now seems to have a mostly unlimited hotspot option; I haven't tried it yet. So, not only are ads intrusive, disrespectful of privacy, and generally of negative utility for me as a user...they're also ridiculously costly.
So, yeah, I use an ad blocker. Oddly, I tried disabling it earlier today for an LA Times article (because of their blocker blocker), but it didn't correctly detect that I'd disabled it, so I closed it and went elsewhere. Now I know I should never disable ad block for LA Times, no matter how interesting the story seems.
It means I don't get images or videos, but I can read the occasional article without going through unnecessary gyrations.
It's not that I'm unwilling to pay for good journalism. I subscribe to the New York Times, Washington Post and The Atlantic. But the LA Times has chopped back their newsroom so savagely that they have very little investigative journalism or foreign correspondents anymore.
It's just not worth enduring their abusive ads and nagware to read what little content is worthwhile.
Publications right now have little incentive to improve their ads with regard to security, readers' resources, or quality – because if the LA Times motivates you to install an ad blocker, the NYT gets hit as well. The tragedy of the commons, so to speak.
If ad blockers aggressively judged each publisher, ad network and/or advertiser and rewarded good behaviour by showing the ads, I could see publishers prioritising the user experience. I'm pretty sure the publishers and journalists themselves look at these websites with horror, and they'd jump at the chance to regain some quality.
The Adblock Plus model is tangentially similar, but charging the publishers is simply too close to blackmail to be acceptable.
And maybe we get ad purveyors who don't suck.
I occasionally do tech support on relatives' computers and am appalled by how slow they are without an ad blocker.
I'll go even more extreme: if my browsing experience on your site invokes more than your domain and a single CDN domain, you FAIL.
If you want to serve ads, you should have to put your resources and the reputation of your domain at risk to serve them.
I'm not suggesting this kind of extreme response; my point is there's no punishment for putting up those ads, so there's no incentive to behave correctly, and we're stuck in a spiraling vicious cycle.
Breaking this cycle and turning it into a virtuous cycle requires a paradigm shift, and I'm afraid this is not going to happen by itself. Moving away from ads as a business model means wiping out Google, Facebook and a few other big players, so any effort to bring a new business model into being will be resisted by those who have the power and can burn money for years without even noticing it.
Until then I have two solutions: the first is a combination of NoScript and uBlock Origin in my browser, and the second is Wallabag, which for the LAT page given as an example gives this: http://share.pho.to/AegpU
Wallabag: https://wallabag.org/en and https://wallabag.it/en
I think we in the tech community need to work on smarter ad blockers. The history goes that ad folks abused window.open popups and then browser vendors blocked that. Then came Chrome, which is backed by the world's largest ad engine, but they were nice enough to give us extensions. So we got ad-block extensions that gave us a ton of CSS injected into every page. Now we need smart JS to fool the page into thinking it has ads.
I would love to see browsers show a big fat red icon when you're being tracked and the page is misbehaving.
I'm trying to think if I've ever intentionally clicked an advert on the internet, and I've probably clicked fewer than half a dozen accidentally.
They just show you ads from advertisers who are going for exposure rather than clicks, because you count as an impression even if you can't be used to generate clicks.
But there's a useless middle-man industry that is the cause of all these problems: the ad-networks. They don't quite care that the impressions are no good to the advertisers, and they care only very little about the brand-damage to the media, caused by malware and bandwidth-waste. Or (dangerous) inconvenience to the consumers.
We wouldn't have any of these issues if the media just struck deals with advertisers themselves, like newspapers used to (or maybe hired an advertising agency, but kept full editorial control; no newspaper would regularly allow an ad agency to inject content last-minute before press without any editorial oversight).
They would basically just self-host the ad banner, with a click-through link, self-hosted analytics for the advertisers, and everything would be fine. Ad blockers don't block self-hosted banners. No viruses, no malware, not as much privacy risk, no bandwidth wastage.
The ad-networks are really bad for all parties involved.
Repeated spaced exposure seems to have an impact on people, even if they don't think it does.
I'm not even really averse to ads, or to paying for content. (I subscribe to three newspapers, two of them in print.) I just want a browsing experience that isn't terrible.
So did you spend time looking at those ads on purpose, out of guilt???
how could I make that more clear than using ???
OTOH, you use the word trade, which does imply guilt on both sides
These sites all use a third party "tag manager" (Google Tag Manager, Tealium, Piwik etc.) to manage the scripts they load: ads, analytics, trackers etc. The people who use the tag manager typically aren't techies, so they don't understand that adding another tag will cause the page to slow down. Typically I've seen a single newspaper use 3-4 different vendors for the exact same thing, such as analytics. They don't actually use all of them; who knows why they have multiple overlapping ones.
Scripts are often badly written, and it's common to see lots of nonsensical errors spewed to the console. It's very annoying to debug your own stuff when you have that crap loaded, which typically comes from a header/footer combo provided by the customer.
As long as these pages have ads that display utterly gross images (if you doubt me, click the above link) that make goatse look like pure mindbleach, I'll keep blocking JS and ads.
I work in digital media. I get why publishers do it. But, sheesh, if you're resorting to ads that insult your audience in exchange for pennies, I think you need to reexamine your approach.
Had no idea there was a name for this shit! But of course there is. It's really sad that the most rotten apples have spoiled it for everybody. I was thinking recently about trying to set up a passive income website that makes money on Adsense, but got discouraged when I remembered that Adblockers are a thing.
Proper, honest advertisements for actual products and services are something I have no issue with -- hell, I've actually managed to save money on car insurance because of a bloody online car insurance ad.
I despise stuff that autoplays video/audio, or moves around on the page, or displays creepy, pseudo-anatomical images engineered to trigger revulsion/fascination (trypophobia stuff or shit like those weird toenail images in those "heart attack" ads or whatever else).
My browser blocks JS by default because I don't enjoy seeing those kinds of shock images. Hell, goatse is less distressing for me to look at than the images in your average chumbox.
No, I get the trypophobia-triggering (DON'T GOOGLE THAT WORD) chumbox images when browsing in incognito mode, without pre-existing cookies or anything else. This has nothing to do with my ad bubble; https://theawl.com/a-complete-taxonomy-of-internet-chum-de0b... mentions that chumboxes very often carry those images.
Also I'm not the only one; lots of other people attest to the fact that chumboxes specifically use those kinds of images: http://ask.metafilter.com/247987/How-can-I-avoid-seeing-rela...
We have something in the neighborhood of 30+ ad partners. Some of those partners have decent code, others are really terrible. The IT team only adds the code to allow ad networks to inject their advertisements.
Because of the performance issues and the disreputable nature of some of the ad networks, we have discussed the possibility of building our own ad server. However this takes time and money, and I don't know the ROI of such an endeavor. Ultimately sales' job is to generate revenue, so we partner with nearly every single ad network that can generate money. It's not desirable to deal with all these different ad providers, but you don't need me to tell you about the challenges the newspaper industry is dealing with. We need to bring in money on the digital side to make up for the downward trajectory of print ads.
I actually purchased a subscription because I do want to support the paper but not deal with the nightmarish ads that make the site basically unusable. However after logging in, the site immediately demands that I disable blocking third party cookies. I've been in touch with the subscription department without luck, so at this point I am looking to cancel my subscription. Pity - I really wanted to support the paper, but they make it basically impossible to do so.
The ROI on cleaning up the ad tech mess will not be immediate. It will probably be negative in the short term. But in the long term it's the only way to save the newspaper. I'm anxious about the future of the newspaper business too and don't know what the solution is. But I'm sure it's not this ad escalation.
Exactly. Ever since the LA Times started blocking ad blockers, I do not visit it. If I accidentally click on a LAT link, I'm quickly reminded that I don't want to be there and close the tab.
That being said, I believe the media companies should start accepting the inevitable fact that people don't care about their financial struggles or hardships in finding ways to monetize content. As long as said content is visibly free (even if it comes with strings attached, like invisible tracking), people will continue to believe it's free, and the more technical users like myself will go the extra mile to actively impede your tracking. You can't escape from that.
I would actually pay for good journalism. But as a possible consumer of this theoretical good journalism, I can't trust anyone. Every month there's news about very shady ads or nasty ransomware snuck through ad networks. How can I trust anyone who is after making money in the industry?
Both the ad and publisher industries have repeatedly shown that their profits are their most important priority, not their readers' safety. This is never gonna change; they must make money.
I can understand their motives but I have zero sympathy for them due to the methods they employ.
If they are smart business people, they have to sit down and actively think of new, non-intrusive ways to monetize content, like right now. The golden days of the current publishing and monetization models are long behind them; all of the companies are fighting over scraps (obviously I can't prove this, but I believe it's a legit theory, judging by the desperate methods), and the bean-counters should stop trying to beat this game and rather look for a new game.
There's also still physical newspapers. You could go up to your nearest little shop and, for just a few dollars really, try out a couple of the newspapers. You can get a different one every day, see how they compare, then you can subscribe to that one!
There, now you can pay for good journalism, and support that radical new monetization model.
As for physical newspapers, I hear they were decent once, but nowadays they're all so shitty around here that I believe one is better informed by not reading them; at least it's easier to keep an open mind on issues than after having been fed heavily distorted facts.
Wherever possible, I make it clear that this cruft comes at a cost, but I cannot recall a single client being swayed by those arguments. In the case of overlap between trackers (Analytics, Facebook, etc), from the perspective of the client, they solve their "problem" in a quick request. Either it appeases the marketing company or it gives them that little extra feature they think they need but will never use.
Increasingly, it can be miserable developing for anything other than my own projects because of how much time is spent implementing bad ideas requested by other people.
That's why I've been actively thinking about starting my own small business lately. Or simply refusing to do web development at all and going into data science / A.I. / pattern extraction areas.
It seems like either of those would be excruciatingly hard to transition to (work inertia can be a blessing and a curse), but I want to do either -- or both -- because web development is, in my eyes, in its worst decline ever. I don't buy the optimistic BS from inexperienced JS devs or the promises of new tech. People still have to use user agent sniffing for their website to work best. Just lol.
Different ad networks use different provider's numbers to determine how much to charge for ads based on which traffic tier you're in. If you don't use them all, then you can't sell on that network, or you get a really bad CPM. And none of them seem to trust Google's numbers except Google, even though those are far more accurate than any other provider.
I'm dealing with Ensighten right now. A wonderful tag-manager (sarcasm) that lets anyone add any random 3rd-party script to all the pages.
I'm finding services being loaded twice, from different URLs too, so it's not just a cache hit.
It's not quite clear if 14GB/day is an extrapolation based off the author's sample of 5 MB/30 seconds or if the author actually left the page open for all 24 hours. Regardless, that's quite a bit of data.
I'm against bad ads as much as everyone else, but I don't think using a terrible methodology to criticize them is a good idea. Making clear what you measured would be much better, and actually leaving the thing open for 24 hours would be best.
Safari says it's been about 12 minutes. It's requested over 1000 resources (it pegged 999+ at ~2 minutes). It's downloaded 88.4 Megabytes of data. It appears to be continually firing pixels from a multitude of places (OpenX and DoubleClick are common). It's also downloading a number of different videos. The article I opened does not have any videos related to its content. This article is open in an unfocused window; I have not fired any scroll, click, or touch events natively since the few seconds after I opened the link, scrolled a bit, then came back to HN.
88.4 megabytes in a few minutes. I remember when my modem couldn't download more than a couple megs in dozens of minutes, and I'm a shitty stupid millennial, not some graybeard. Maybe it could hit a gig, or 14, in a day, but the point stands: the LA Times site is absurd.
(In case the sarcasm is not clear: the problem is that it's anywhere over 0.01 GB.)
To answer your question about the title, I did not change my blog post. Hacker News' silent moderators seem to have applied their usual policy of rewriting post titles to the article title, no matter how dull it may be.
Sample from only a few days ago.
From: "San Diego Union-Tribune" <email@example.com>
Subject: We are proud to offer you Moonlighting - Hire or be hired!
Received: from mta953.e.latimes.com (mta953.e.latimes.com [184.108.40.206]
[Blah blah bullshit about their proud partner promoting a soulless gig economy.]
This email was delivered because you registered for Email Membership at utsandiego.com
It's barely an alpha project and needs a ton of work to be functional, but just know you're not the only one who finds this frustrating and unacceptable.
I'm slowly transitioning to having a random 20-char generated alias email for every service/use, so there's no chance anyone will hit a real mailbox by chance. If an alias leaks, I can either abandon the service and block it, or change the registered email for the service to a new alias and block the old one.
In addition to that, I have a few easy-to-remember/type emails to give out to people, which I never use on the internet.
The result is no spam (to the new addresses), even without a spam blocker.
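Generating such an alias takes just a few lines; here's a sketch (names are mine), with the idea that you'd record which service got which alias so you know who leaked it:

```python
import secrets
import string

def make_alias(domain: str, length: int = 20) -> str:
    """Generate a random local-part alias for a catch-all or alias domain."""
    alphabet = string.ascii_lowercase + string.digits
    local = "".join(secrets.choice(alphabet) for _ in range(length))
    return f"{local}@{domain}"
```

`secrets` rather than `random` so an attacker can't predict or enumerate live aliases.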
I felt that way about using a password manager and generating random passwords (what if I want to log into something and don't want to remember which password I used) but eventually realized that password managers are literally designed to make this as painless as possible and I should relax more. Now I use KeepassXC, random passwords and life is good.
Are email managers a thing, like password managers? How do you solve this problem?
On the mail server side, I have postfix set up to read the database and deliver everything for a valid address to a single mailbox.
It should be workable with any service that allows you to set up aliases for a single mailbox. If they let you set it via an API, that would allow for automation.
On the sending side, given that I haven't yet found a need to communicate very frequently with any of the services I use via email, I have a simple PHP form that sends email with the specified alias address set as the From on the envelope and inside the email.
My solution was to change my email server setup so that . and _ act the same as +. Someone might guess that they can strip something after a + but nobody is going to strip an address after an _.
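The normalization described above can be sketched like this (the function name is mine; in a real deployment the mail server would do this rewrite before delivery). Note it also collapses legitimate dotted local parts like first.last@, which is fine as long as your base mailbox names contain no separators:

```python
import re

def canonical_mailbox(address: str) -> str:
    """Map any tagged variant back to the base mailbox.

    Everything after the first '+', '.', or '_' in the local
    part is treated as a disposable tag and stripped.
    """
    local, domain = address.split("@", 1)
    base = re.split(r"[+._]", local, maxsplit=1)[0]
    return f"{base}@{domain}"
```

So bob_latimes@example.com and bob.news@example.com both land in bob@example.com, and a spammer stripping the conventional "+tag" gains nothing.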
- Advertisers can then use that data to cross-track you between sites more effectively.
- Marketers can re-target you without any sort of pixel tracking, because Facebook, Google, Twitter, etc. can all identify you by that email address you provided.
- None of us have a loud enough voice to dramatically damage a company in a meaningful way because they were a bad actor.
In my mind, by obfuscating the email address, we take away its power. It loses a lot of its value when it's randomized per site, per user. It is now truly only useful for sending you an email, and if you get fed up with how they use that email address, they're shut off forever (and have no way of tracking you).
See https://news.ycombinator.com/item?id=11781361 or https://mailhero.io/
Daily news doesn't fit into this philosophy that well (printed papers are too much bulk for me), but I get pretty much all the "breaking news" I need from Twitter. Stepping away from the 24-hour news cycle to sit with a piece of analysis from a weekly or monthly (or even quarterly!) magazine is a much better way for me to stay informed and support journalism. And as a bonus I don't have creepy ads following me around the internet if I want to read a Socialist magazine, or a gun rights magazine, a bridal magazine, or whatever.
Ad networks are being subsidized by data caps and personal/business broadband costs. Maybe if ad serving went through the main host, using the host's own bandwidth, ad networks would have a reason to control this abuse of transfer.
That would make the web faster all around (no more 10 MiB webpages) and push the industry back toward server-side rendering.
I'd like a notification to pop up when the current usage (within, say, a 3 minute window) exceeds my typical usage. That'd make it easier to notice a website started autoplaying a (muted) video, for instance. It's also something that could be done at the operating system level. The operating system could also conceivably show traffic by domain, but that's not ideal; you want traffic by referrer domain, ie. if the LA Times embeds megabytes of images from some random third party server, those should be aggregated under latimes.com and not under some meaningless CDN domain. But the OS probably doesn't have that information.
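For what it's worth, the core of such a notifier is just a sliding-window byte counter. A minimal sketch of the logic (class name and thresholds are my own invention), leaving aside how the per-domain byte counts would actually be sampled:

```python
from collections import deque

class UsageWindow:
    """Track bytes transferred over a sliding time window and flag spikes."""

    def __init__(self, window_seconds: float = 180.0,
                 threshold_bytes: int = 50_000_000):
        self.window = window_seconds
        self.threshold = threshold_bytes
        self.samples = deque()  # (timestamp, byte_count) pairs

    def record(self, timestamp: float, nbytes: int) -> bool:
        """Record a sample; return True if the windowed total now
        exceeds the threshold (i.e., a notification should fire)."""
        self.samples.append((timestamp, nbytes))
        cutoff = timestamp - self.window
        while self.samples and self.samples[0][0] < cutoff:
            self.samples.popleft()
        return sum(n for _, n in self.samples) > self.threshold
```

An OS or browser would feed this from its network accounting and compare against the user's typical usage instead of a fixed threshold, but the windowing idea is the same.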
Of course browser vendors could add more detailed reporting. Say a brief in-browser notification ("Page finished downloading 5 MB.") after initial load is complete, repeating if excessive downloading continues. I guess that could get annoying really fast, though. A less annoying alternative would be a special page (e.g. "about:data-usage") that has an overview per page in a user-selectable time range. Maybe link to it from the browser start page.
The only way in my opinion to fix this is to (a) give phones HARD preferences on maximum data that cannot be exceeded and (b) give web browsers even stricter per-site settings with sane defaults. Then, let the ad companies fight within themselves to figure out how to deal with the fact that none of their content gets through anymore. Force them to innovate into smaller form factors.
But if we change to a utility-like model, people will start caring about penny-pinching their bills.
Universal egress fees would go a long way to mitigating DDoS, too.
How? By charging the owners of infected machines?
Currently there is effectively no incentive to avoid having a gadget that joins a botnet.
> www.latimes.com uses an invalid security certificate.
> The certificate is only valid for the following names:
> *.akamaihd.net, *.akamaihd-staging.net, *.akamaized-staging.net, *.akamaized.net, a248.e.akamai.net
Do Doubleclick ads typically employ a cache control policy of "no-cache, must-revalidate"?
They're using base64 images, websocket/blob: injections, third-party scripts, natively hosting the images on the site, and also webRTC to inject. It's a long fight of countering and re-countering, and until website developers listen to their users, these types of ads aren't acceptable.
/Fanboy from Easylist here
Just blocking ads is certainly justifiable in the current situation, but it's also certainly not sustainable if the ad blocker installation base keeps growing. And buying subscriptions for all sites is not really an option either; it becomes quite expensive pretty quickly, especially if you want to see only a few articles per month on each site but do so on many sites.
Why isn't there something like a Netflix for News that just gives me news free of visual cruft and takes care of distributing the revenue from what I read in approximately accurate portions to the content providers, not to mention a seamless interface for socially rating and categorizing news?
Newspaper/magazine subscriptions made sense when newspapers were heavily local and magazines were periodicals. Neither condition really obtains any more and having multiple subscriptions is not worth the overhead for even a small number of publications. The last time I made the mistake of subscribing to a magazine I also saw a huge increase in junk mail for several years afterwards. GFTO of my mailbox with that shit.
I am not really sure about that. About 10% of all Internet users are currently using an ad blocker, so 90% are okay with ads, or don't care, or whatever. Further, I would estimate that users would have to pay between $5 and $10 per month for the content they consume to provide sites an income comparable to that from ads. Maybe 1% of the ad blocker users - number pulled out of thin air - would be willing to replace their ad blocker with a monthly payment of $10 if this worked frictionlessly, if you could magically convince the entire Internet to create some Netflix equivalent.
In the end that would be 0.1% of Internet users, which doesn't seem like a huge incentive to bring such a Netflix equivalent into existence.
Population: 300 million
Percent internet users: 84%
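Plugging those figures into the parent's guesses gives a concrete back-of-the-envelope number (all inputs are the thread's estimates, not data):

```python
population = 300_000_000   # rough US population, per the parent
internet_share = 0.84      # share of the population online
adblock_share = 0.10       # estimated share using an ad blocker
willing_share = 0.01       # guessed share willing to pay
price_per_month = 10       # dollars

payers = population * internet_share * adblock_share * willing_share
revenue_per_month = payers * price_per_month
print(f"{payers:,.0f} payers, ${revenue_per_month:,.0f}/month")
# roughly 252,000 payers, about $2.5M/month
```

A couple hundred thousand payers nationwide puts the "0.1%" point in perspective.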
It's awful that I'm an artist but want to put most graphic designers out of work XD
It keeps working even when I have my ad blocker turned off. Obviously I must have some anti-tracking extension still enabled that it dislikes, but I'm not willing to spend that much time troubleshooting my browser. It's baffling to me, because the LAT is one newspaper I'd consider a digital subscription to, but as this article says the advertising/marketing people have clearly won out over the editorial, so fuck'em until that changes.
I have a hypothesis on how we can reduce the number of these types of ads, while also not harming advertisers or publishers in terms of reach and revenue.
I believe that we are in an advertising death-spiral. Sites are adding additional impression opportunities and ad placements. This is triggering higher numbers of impressions available to programmatic buyers. The additional number of available impressions is devaluing the impressions (we're flooding the market), which has led to a perpetual decrease in CPMs every year. This leads to publishers pushing higher "engagement" ads, which users just find terribly annoying, as well as more ad units. The cycle repeats, repeats, and repeats, just so both sides can "stay effective" with regards to whatever metrics they are measuring against.
My belief is that the LA times does not need N ad placements per article. They probably only need one. We don't need to junk up quality news organizations with taboola and the other "content" recommendation platforms. We quite literally need a détente.
LA Times goes down to one ad placement per article.
LA Times advertiser is guaranteed viewability (high placement in the article, for example), and 100% share of voice. The "impact" of that ad increases with the overall decrease in other "noise". The cost also goes up, but commensurate with the increased value. Both sides likely end up making / paying the same amount of money, with likely the same level of impact for the advertiser, but they reduce the pain on the user, which they should both care about deeply.
I'd like to believe this is an opportunity from a business perspective. I believe that someone could demonstrate this value, in some way, to both sides of the market. The advertiser would need fewer impressions to achieve the same level of value / impact of their ads, which also has the side benefit of reducing additional costs around TPAT tracking, analytics costs, general tracking costs, etc., which are often priced on a CPM basis (so, fewer CPMs lowers their bill). The publisher would potentially see better engagement from their users, fewer ad blocks, and a higher quality experience.
I think for the sake of newspapers and advertisers alike, this is an idea worth pursuing.
The experimenter in me reads that article (and mind you it's very brief), and I actually get encouraged. They were attempting to measure it on a pure 1:1 basis, where all their other shows, as well as all the other shows on broadcast TV, were following the standard model. Educating advertisers is a challenging step in this process, and so it's no surprise they couldn't get the pricing right on the first go around.
Hopefully they decide to try again, test, iterate, and think outside the box.
Learned that trick here. Thanks, guys.
Edit: Automatic Reader View add-on eliminates the race.
Edit: uBlock Origin blocked 20 third-party domains. Totally typical of web sites these days.
BTW, this custom filter rule seems to work around the LA Times blocker blocker. https://news.ycombinator.com/item?id=13567527
Apt name. Had never heard of it before your comment. After some looking around its github wiki from my phone, this looks... powerful. Wow. Same maintainer as ublock origin, right?
+1 to my list of things to check out when I have a moment.
99% of these companies are in business by running as many impressions/clicks/whatever "engagements" as possible regardless of user experience so we end up with this tragedy of the commons.
I'm focused on getting a mobile article page down from 20+ seconds on 4G with 300+ requests and page-weight of 2MB+ (mostly ads).
Right now, on CI environment it is averaging 2.1s, 350K in size, <45 requests, and no ads in the initial view.
Sadly, Hearst doesn't own the LAT so it won't help them.
RJ Reynolds' recruiting guidelines SPECIFICALLY wanted sales (or marketing) people with a 2.8-3.1 GPA. No wonder so many ad server tags from the sales/marketing types are FULL of errors.
Anyhow, the very first time I experienced a Mac crashing HARD was when I opened NYT.com. Yes, NYT.com.
It was a typical Monday morning, around 2011. I got into the office, got coffee, tapped the Mac keyboard to wake up my iMac, and proceeded to open the website I opened every morning, nyt.com.
I noticed some flash based ad doing some fancy thing in the top banner. But whatever.
I continue doing my thing.
Wait, what? My iMac is frozen. An iMac! This can't be true. I frantically pound on the keyboard but nothing works. Out of desperation, I hard-reboot it.
When it comes back up, I slowly bring things up one at a time. And I realize it was the NYT.com's flashy flash ad that caused the crash.
When I upgraded the slightly outdated Flash plugin in my Firefox, I could view the nyt.com homepage without my iMac freezing. I could HEAR the mechanical HD in my iMac grinding and see the CPU widget spiking when I opened the nyt.com homepage.
Because of an ad on NYT.com, I'm pretty sure millions of people saw their computers crash that Monday morning.
Well, the ad blocker in Chrome seems to be blocking the ads on the page quite well. Some single requests are being logged after the page loads, but nothing like what the article mentions.
I just want to spread the word out and make the world a better place.
edit: so I know, why does something this innocent get downvoted? I don't understand what I did wrong.
This will probably get downvoted because it hurts someone's ego, but it's true if you look at the history of science and technology.
I'm very happy with Notational Velocity at the moment (not sure about cross platform solutions, but I think most systems do similar things nowadays).
So their ad blocker blocker and their paywall kept you from reading the articles for free? Why don't you just pay them?
It's just a shitty thing to do at this point. If they have high quality articles then what would require you to pay them? Do they need to be on Patreon or Kickstarter or something?
Also if the article is paywalled, why would anyone keep it open and let ads play? Either buy the subscription or leave (or try to bypass the paywall some other way).