Hacker News
The Web’s Cruft Problem (telerik.com)
362 points by jsnell on July 16, 2015 | 156 comments

There is a major thing going on here that is not mentioned in the article: One department doesn't deliver a website.

Is that "Terms of Service" modal there because a front-end dev thought "interrupting the user experience is a great idea!" No, it's there because of legal. And all those social sharing widgets? They are there because of marketing. And the 5 different ad exchanges? They are there from Sales/BizDev. And that 400KB of JS crap? That's Optimizely and their crappy A/B testing library that the dev team put it. And that hero image that's actually 1600px wide and then resized in CSS to 400px? It's there because that was the source image from the external agency and no one thought to modify it.

The biggest challenge with modern web sites/apps is that they are largely built by committee. Often it's not literally a committee, but multiple departments are all involved and all get to add to the site, while no one is really responsible for the final, overall user experience of the site.

And even if there is a "user experience" or "performance" team, they rarely have the power to change things. A customer of ours is an Alexa top 100 B2C company that provides a marketplace for buyers and sellers. They get commissions from sales, but a large part of revenue is ads. The "performance team" makes no headway against any of the terrible performance problems with ads, because the ads team is judged on ads/conversions, not on the performance of the page. Even when the ads are hurting the conversion rates of the sales/commissions, the ads team doesn't care. It's a total deadlock of departments saying "performance and user experience is not my responsibility, I only do X".

Back in May, TheNextWeb made some changes. Their mobile experience at the time had two bars, one at the top and one at the bottom. Even on a large phone display, viewing the content between the two was very distracting. Also, they both had icons for Twitter and Facebook. I gave them some feedback regarding the design and said that I felt it was a UX mistake. The VP of design got back to me and said, very politely, that he disagreed with my assessment. He went on to say, basically, that an increase in shares means an increase in page views, which means an increase in ad impressions, and thus income, and that with TNW being an ad-supported company, this directly relates to the quantity and quality of their content. This led me to feel that the single driving motivation was not that their users have a good experience, but that revenue is maximized. From a user's point of view, I feel that's wrong. From a business point of view, it does make sense.

Regarding who is responsible for the final content, I would assume that if you have a VP of Design, that's the person. Maybe I'm wrong.

I notice now that they only have the one bar at the top. Interesting.

They're possibly optimizing for the wrong metric, or at least a metric you don't care about. I'd think they'd want repeat viewers. A bad UX will mean fewer repeat viewers, and possibly an unsustainable business. Classic short-term vs. long-term thinking. The problem is that the long term is generally harder to measure, so the short term becomes easier to justify.

Maybe yes, maybe no. A classic example is tourist trap restaurants in heavy tourism areas. They literally don't care about repeat customers because all of their business is transient one-time customers. Sure they might get bad reviews ultimately, and that could hurt them, but that hasn't really seemed to slow them down, and that is easy to game.

The ad exchanges don't care about repeat viewers. It's purely a numbers game: more clicks, more revenue.

The content sites themselves largely don't care about repeat visitors. Social networks and other aggregators are the primary drivers of traffic to news/content/lifestyle sites. In other words, most people do go look at the home page of Buzz Feed or Wired or Huff Po and read multiple stories. They go to Facebook and click into different articles on different sites.

ITYMTW "most people do not go look at..."

Optimizing for specific metrics is dangerous, and the choice of metric makes a big difference.

This is typically the role of a project's Product Manager: to balance the needs (or perceived needs) of all internal stakeholders (development, design, marketing, sales, legal, and all the various other isolated silos that don't talk to each other) with the interests of the end user. You need someone empowered to say no to crap, because every company is filled with people trying to load crap into the products, even developers.

Weak/ineffective Product Manager roles are set up to simply report to one department, for instance Marketing, and just do everything it wants. Effective ones have the power to say things like "We're not doing this awful UI element just because legal says they want us to" and "We're going to stop taking requests from sales until we improve product quality." You need one person who is directly responsible for the entirety of the product, who can't escape responsibility with the standard "not my department".

The problem for these Product Managers is that the incentives between the different stakeholders are drastically misaligned at a fundamental level.

The people in charge of community, design, content and publishing are actively fighting the demands of the parts of the organization that drive profits.

Companies that use advertisements to subsidize free content have sales and marketing teams that need to interrupt the goals of all of these other functions in order to keep the organization alive.

However, if you look at the incentive structure for, say, a traditional book publisher, marketing and sales don't actively fight the content, because they don't put ads inside the covers. They just sell the finished product.

Sure, market appeal might affect the kind of content that gets invested in by a publisher, but it doesn't affect how the content is presented.

That makes me wonder - why do companies treat design and content as something that does not drive profit? Profit is holistic, otherwise why bother with the so-called "cost centers"? Kind of like a restaurant saying "the cash register is where our revenue clearly ends up, let's just cut out all the other cruft like the kitchen".

It's harder to point to specific items in those areas and say, "this feature X saved us $Y", whereas the sales departments have easy figures to point to.

Sadly, this whole "cost center" versus "profit center" divide exists at all types of companies. I used to work in food manufacturing, and the incentives in play there were beyond strange. The sanitation budget was constantly being slashed, because Quality Assurance as a department doesn't bring in a profit (that they can see), whereas hordes of "process engineers" would come in, rearrange boxes, and claim that they had just eliminated 10 steps from the workflow, calculating this to be a savings of $92/day. They quickly get promoted before anyone can actually see if this made orders any faster. Another common tactic was to cut positions on the manufacturing line, claim the reduced headcount as a savings, and move on before the problems of the smaller headcount (slower throughput) became apparent.

It's a matter of accounting.

With a professional kitchen, yes, the cash register is a place where value is stored, literally, but what are the inputs? Settled bills coming from customers. What were they paying for? Food coming from the kitchen. The kitchen is creating valuable things that can be tracked and accounted for by the customer, the cash register, the bill, and the receipt from the debit card, all recorded double-entry style into books organized by department.

This way the kitchen itself can be viewed as an isolated financial entity, separate from the concerns of the cash register or the landlord. We can then see exactly how profitable the kitchen is because it has well defined inputs and outputs. With this information in hand, we would probably not want to get rid of the kitchen.

Imagine if the food were free and covered with ads. That's what we have now.

Content, when treated as property on a market, like food sold to customers, retains these properties. Now the "content department" is most definitely creating valuable things for the salesmen to hawk.

Design is tough because it is qualitative, and it sort of falls apart from an accounting perspective. It's really hard to see it as anything other than an expense, which is probably why it is so undervalued in most corporate settings. It certainly is important, but only while it is novel, since it can be easily copied. You can't copyright an entire design paradigm, so the minute someone invents "Art Deco", well, the cat's out of the bag.

The iPhone is interesting because it had such a big head start but don't for a minute think that there aren't fleets of accountants with a lot of say running around Apple and helping them figure out how to bridge the economic realities of engineering and supply chain with the delightful new features being dreamed up for the products.

Most long-lived corporations that make money back for most investors end up in a situation where they design and ship lots and lots of products, precisely because they can't ever quantitatively figure out how to ship a "hit record". So if you turn the product of all your subjective creations (design, art, content, creative) into intellectual property that participates in a marketplace, you're creating a way to run a business that you might be able to make some sense out of.

But a company that relies on ad revenue and sees no change in revenue when the experience is slow or "bad" wouldn't hire a Product Manager to put UX first. Why would they? It wouldn't fall in line with their company objectives (read: make more money).

It seems like there are two separate kinds of cruft: stuff that is distracting and gets in the way (social buttons, privacy policy modal, big nav bar with temperature, ads, etc.), and stuff that happens in the background and doesn't get in the way, but makes things slow (multiple ad network embeds, Optimizely, analytics scripts, etc.). While I agree that the first category has a lot to do with organization structure and incentives, the second category seems like it may be a good business opportunity for a new generation of "low-cruft" solutions to the same problems. For instance, I was kind of astounded that Optimizely only has a crufty JavaScript solution, rather than a (probably far more complex) less crufty pure server-side solution, and I spent a while looking around for such a thing, but found nothing much. Or something that puts all the tracking and analytics that different departments want to do behind a single optimized script (maybe segment.io already does this?).

tl;dr: The fact that lots of different people who contribute to a website are vying for the user's attention in different ways doesn't seem to have a technical solution, but the fact that we're making too many third-party requests that respond with too much data and/or run too much JavaScript seems like a problem that could be solved if tools focused on it.

I would say the biggest problem with pages like this is that they "jump" around as content is loaded. The fact that ads are lazy-loaded is not bad (probably good, actually!).

This is easily fixed by simply setting the container to an exact width/height, so that it doesn't shift the rest of the page when the ad loads.
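A minimal sketch of that fix, assuming a server-rendered placeholder (the slot id and the 300x250 size are just hypothetical examples):

```javascript
// Emit the ad container with its exact dimensions inlined, so the
// late-loading ad script fills a box that is already the right size
// and never reflows the surrounding content.
function adSlotHtml(id, width, height) {
  return '<div id="' + id + '" style="width:' + width + 'px;' +
         'height:' + height + 'px;overflow:hidden"></div>';
}

// Put this in the server-rendered markup; the lazy loader targets the div.
console.log(adSlotHtml('ad-mpu', 300, 250));
// → <div id="ad-mpu" style="width:300px;height:250px;overflow:hidden"></div>
```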

CNN also has a sidebar that pops in with other stories. Don't do that. Render it on the server.

Just a couple of fixes like this and the page would be fine, even if some extra stuff takes a while to load. The key from the user's perspective is that the article should be immediately readable when the page loads.

The fact that ads are lazy-loaded can be a big performance problem if it's implemented with a method that causes browsers to turn off async pan/zoom. Especially sad is when images are loaded this way, because most browsers will load images on scroll automatically, so this technique of "lazy loading" ads for "performance" actually cripples scrolling performance with no benefit.

> if it's implemented with a method that causes browsers to turn off async pan/zoom.

What technique would cause that?

Segment(io) doesn't have a solution for this, in case you decide to go searching. And their server side APIs have plenty of issues of their own (eg no way to get utm parameters to the destination - hugely frustrating and we are still working with them on that).

Thanks for the info. When I first heard about their product, I got excited that they were trying to solve this issue, but after reading their marketing materials and documentation decided that it didn't work how I expected, but I wasn't sure.

... they are largely built by committee

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.

-Mel Conway ( http://www.melconway.com/Home/Conways_Law.html )

This might also explain why mobile apps by the same company serving the same content and, presumably, contributing to the same business model somehow manage to have less cruft[1]: there's a single gatekeeper team for each app.

It's funny how the important difference between apps and web pages is entirely arbitrary, social, and historical, and not based on the underlying technology, which is the only concrete difference between the two.

[1] Or, at least, that's been my experience. As a concrete example, take a look at the BuzzFeed app[2] which only has the prominent social buttons as visual cruft.

Of course, the rest of the cruft on the BuzzFeed mobile page was two (!) links to their app...

The CNN app[3] is probably a better example, with the app being almost completely cruft-free.

[2]: https://play.google.com/store/apps/details?id=com.buzzfeed.a...

[3]: https://play.google.com/store/apps/details?id=com.cnn.mobile...

Pessimistic me says that's mostly because the market has not matured. Ads and cruft will arrive eventually.

That is exactly my point. And even when you have a single professional webdev/design team, you have no power over legal restrictions (the EU cookie widget) or contractual obligations the boss signed (ads must appear by next morning). The real world just moves forward. Users have more power over their UX with ad blockers than UX designers do.

One way you can avoid the need for an EU cookie widget is by not setting unnecessary cookies.

In many cases, to determine whether a cookie is necessary, you set a cookie. Seriously, can you name a mainstream news site in Britain that does not set a cookie?

No, but I can name lots that shouldn't.

If I'm not logged in, there's no good reason for a news site to set a cookie.

Most uses of cookies that are not tracking are exempt from the cookie law and don't need consent: http://ec.europa.eu/ipg/basics/legal/cookies/index_en.htm#se...

Conway's Law writ as Web Cruft.

I always find it amusing that our capitalist system basically consists of competition between command economies as succinctly described above.

> And that hero image that's actually 1600px wide and then resized in CSS to 400px? It's there because that was the source image from the external agency and no one thought to modify it.

And because of retina iPads and Macbooks.

True, but they could be doing better by serving the appropriate image size depending on the client.

Much easier said than done, sadly.

Not that hard. Want to make it a background image?

    @media only screen and (-webkit-min-device-pixel-ratio: 2),
           only screen and (min-resolution: 192dpi),
           only screen and (min-resolution: 2dppx) {
        .hero {
            background-image: url(image@2x.png);
        }
    }
Do you consider the image to be part of your content? (Something you'd want to appear on printouts and RSS feeds)

    <img src="image-src.png" srcset="image-1x.png 1x, image-2x.png 2x" />
Retina images are a solved problem for modern browsers. Very few users with Retina screens (and thus new computers) are on non-modern browsers, and those users who are using old computers and browsers will end up getting the 1x image. (Good luck finding a Retina Macbook Pro that runs IE9)

Sorry, I just saw this now and I doubt you will see it 6 days later, but just in case:

The main problem with that approach is (or at least used to be; I would love to learn that it's changed) that the browser will still load the 1x image first. Since a large chunk of Retina users are on mobile devices, that sucks.

I enjoyed this article. But I have one nitpick.

The author suggests HTTP/2 as a solution to web cruft.

I could be wrong, but I see the HTTP/2 ploy as a proposed way to deliver more cruft, faster.

What do you think is going to be in those compressed headers? How large do HTTP headers need to be? What exactly are they trying to do? I look at headers on a daily basis and most of what I see is not for the benefit of users.

We can safely assume the compressed headers that HTTP/2 would enable would have nothing to do with advertising?

Again, I could be wrong, but in my estimation the solution to web cruft (unsolicited advertising) is not likely to come from a commercial entity that receives 98% of its revenue from web advertisers.

The web cruft problem parallels software bloat and the crapware problem (gratuitous junk software pre-installed on your devices before you purchase them).

The more resources that are provided, e.g., CPU, memory, primary storage, bandwidth, the more developers use these resources for purposes that do not benefit users and mostly waste users' time.

This is why computers (and the web) can still be slow even though both have increased exponentially in capacity and speed over the last two decades. I still run some very "old" software, and with today's equipment it runs lightning fast. The reason it is so fast is that the software has not been "updated".

HTTP/2 largely won't help the problems mentioned in the article. If I'm loading 200+ assets from 30-50 hosts, HTTP/2 can't help, because I'm making 30-50 TCP connections and fetching only 5-8 resources over each. HTTP/2's efficiency gains over HTTP/1.1 don't materialize when fetching so few resources per connection.

HTTP/2 helps when you are downloading 200+ assets from 1 or 2 hosts.

I routinely use HTTP/1.1 pipelining from the command line to retrieve 100 assets at a time. But these are assets that I actually want: i.e., the content.

Somehow I doubt that the 200+ "assets" automatically fetched from those 1 or 2 hosts by a web browser authored by a corporation or "non-profit organization" connected to the ad sales business are going to be "assets" that I actually want.
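For readers unfamiliar with it, the pipelining mentioned above just means writing several requests back-to-back on one connection before reading any response. A hedged sketch of the wire format (host and paths are hypothetical; a real client must also parse the responses sequentially, in request order):

```javascript
// Build several HTTP/1.1 GET requests concatenated into one byte stream,
// the last one asking the server to close the connection afterwards.
function pipelined(host, paths) {
  return paths.map(function (p, i) {
    const last = i === paths.length - 1;
    return 'GET ' + p + ' HTTP/1.1\r\n' +
           'Host: ' + host + '\r\n' +
           'Connection: ' + (last ? 'close' : 'keep-alive') + '\r\n\r\n';
  }).join('');
}

console.log(pipelined('example.com', ['/a.css', '/b.js']));
```

Piping that string into something like `nc example.com 80` fetches both assets over a single TCP connection, with responses coming back in request order.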

> I could be wrong, but I see the HTTP/2 ploy as a proposed way to deliver more cruft, faster.

Could not possibly agree more. I tried my own HTTP/1 site in that site-tester, and it has a 105ms response time for entire pages. And my pages are dynamically generated.

HTTP/2 (and especially the Mozilla/Google-imposed TLS requirement) takes away the simplicity of the web. The thing that allowed you to write a few lines of code, and suddenly your old Commodore 64 was a web server. Increased complexity is going to lead to more and more of a monoculture, which is going to leave us all more vulnerable to attacks (like with OpenSSL and Wordpress.)

The article is also a little bit ironic for me. He speaks of unnecessary widgets added to pages, yet his unnecessary sidebar is overlapping the text in the article, cutting off the ends of every line, and even maximizing the window doesn't get rid of it ( picture: http://i.imgur.com/qJeyP5v.png ). I had to use "inspect element" to delete the sidebar to read the page in full. (I'm sure it doesn't happen for everyone or it'd have been fixed, but it does on my browser.)

It won't serve to eliminate cruft, but it will drastically reduce the effect it has on load times, which is a very significant part of the problem. If server push and connection multiplexing are used right, you could load an entire page in 1-2 round trips.

Hopefully, a lot of that second round trip would be extraneous (ads/tracking/etc) and the user can start using your content immediately.

I'm glad the web has finally advanced to the point where we can finally load an entire page in 1-2 round trips.

Sorry, I'm feeling unnecessarily snarky today.

That was my first thought exactly: a "faster" protocol will just be used to send me more crap, faster.

This is incredibly ironic, as I've used Telerik components for internal app development in our organization, and the amount of cruft that gets loaded is way damn high. The payload is large and the round trips are numerous. I ditched all that, developed my own framework from scratch using open source libraries, and managed to reduce the payload and increase responsiveness.

At some point in the past, it made sense to pay Telerik boatloads of money for libraries that were supposedly plug-and-play, but now there are even better solutions available for free, thanks to OSS!


  Sounds a lot like Flipboard doesn’t it? If you’re a publisher and you opt in, 
  you let Facebook control the distribution of your content, in return for a far 
  more performant experience for your readers, and presumably shared ad revenue 
  of some sorts.
This raised some red flags! Making Fbs (Facebook/Flipboard) the content platform just to reduce cruft and improve responsiveness looks like a Trojan horse, and appears to have issues similar to those discussed in the Fb Fraud thread [0]. Another possibility is that Fbs would become the Comcasts of tomorrow. The distributed nature of the web is what makes it so invigorating and democratic, and I think it would be a mistake to go the cable route.

[0] https://news.ycombinator.com/item?id=7211514

Same experience here. Back when I used Telerik, they were some of the largest and slowest components I'd ever seen. Even their own homepage was unbearably slow at the time. This was many, many years ago.

The company I work for uses Telerik's CMS product, which loads lots of cruft onto your pages and is very slow. I walked away with the same feeling as you did after I read this.

Care to share any great OSS libraries you've found that are good replacements for various Telerik components? Thanks!

I've looked for a decent widely-compatible DatePicker alternative, and have not been successful. The Kendo DatePicker is a very nice implementation (it works all the way back to IE8) of the one self-contained widget that I'd really prefer not to hand-roll.

As for other Kendo UI components, I've found the ComboBox, DropDownList, NumericTextBox, and MaskedTextBox somewhat useful, in that they set up quickly and look nice. My typical approach is to use Bootstrap for whatever it has built in, and then Kendo UI with the Bootstrap theme for some extra widgets. Watch out though since they use different box models: http://docs.telerik.com/kendo-ui/using-kendo-with-twitter-bo....

I don't recommend using the Grid for anything but quick-and-dirty development, since any customization is almost inevitably going to be way less enjoyable than maintaining a hand-rolled solution (although the hand-rolled solution will probably take a lot longer to implement at first). The Scheduler is kinda nice but I have the same reservations about it. And don't get me started on the MVVM implementation.

But the good news is that all the basic form input controls (including the DatePicker) are Apache licensed! Until there are more stable and compatible widgets built with React I'll be using them fairly often: https://github.com/telerik/kendo-ui-core

Also, here's the script I use to wrap Kendo widgets in React components: https://github.com/guscost/kendo-react-wrappers

Disclaimer: the last time I used Telerik components was in ASP.net 2.0 about 9 years ago.

For React: http://material-ui.com/#/

Bourbon + Refills: http://refills.bourbon.io/

Material Design Lite: http://www.getmdl.io/

Bootstrap: http://getbootstrap.com/

etc, etc. Even back when I used Telerik components, it was often faster to develop my own component, tailored to the use case, than it was to figure out how the hell to customize the 15-deep DOM tree that a table component generated.

Thanks for sharing.

I agree. Telerik (and similar) components are great for getting you a lot of functionality straight out of the box, with minimum time and effort.

Where they break down is when you need to make any sort of non-trivial customization or business-specific use case. At that point, it's easier and more maintainable to build things from scratch, or with more loosely coupled base components.

I was in Kuala Lumpur, Malaysia a few months ago. I wanted to see a film. I searched in Google for films showing in Kuala Lumpur. I found myself on a web page that, while accurately listing movies and schedules currently showing, looked like it had been created around 2000.

And it was fast. No animations, no auto-complete, no infinite-scroll, no JavaScript frameworks. Just the information I wanted, delivered to my phone seemingly as soon as I touched the link. Simple black-on-white text, plain layout.

This made me somewhat sad, because it showed me what the web browsing experience could have been today.

Obligatory: http://motherfuckingwebsite.com

I'm glad I got off the train as web sites became web apps. I made one of the latter, though it actually was designed to complete certain tasks as opposed to being a container for content.

Websites now seem to be trying to emulate native apps - not just in terms of cramming a bunch of UI into a page, but also in the closedness. I hate how 95% of content sites bury the hyperlinks to anything that takes you off their site, including the source.

The mfws page is 5k or so and one script (and I might want to fight about that depending on the reason it is there - see the page code at the bottom).

Suppose you had another 5k budget for css. What could you do with it to make the page look 'nicer' in the sense of closer to mainstream Web pages that ordinary people might use?

Edit: clarity

Yup, that is a tad more modish.

Right, and then you'll add images because you're not a monk. Remember when images were content and not chrome? That's another practice to bring back.

Even with @2x images or whatever, they can at least avoid blocking the load of the rest of the page and the rendering of the layout. As you add a bunch of other files that have to be fetched over another HTTP connection and then affect the rendering of the page (for example, fonts), we're back to sluggishness.

With only a modest amount of "cruft" (1216 bytes of CSS):


Yes, I agree here. When I was in high school in the mid '90s, I was able to create my own HTML pages in a text editor. The web was simple back then: we had frames and tables, and that was enough.

Sure, it wasn't perfect: pop-up ads were rife, blink tags were a thing, and Internet Explorer had problems with transparent GIFs. But the web was simple. Nowadays HTML pages are riddled with JavaScript and CSS, and I don't have a clue what is going on when I hit View Source on the average website.

Web pages are massively over-engineered; you shouldn't need a scripting language to display an article of text and some links.

And now that we have HTML5 and CSS3, the technology for writing web pages with a text editor is better and simpler; many techniques and features have gone from impossible to easy, or from convoluted hacks to straightforward. The two pillars of current web page over-engineering are generating pages from server-side templates and preprocessors (which not only automate cruft production but also hide horrible HTML and CSS code from users) and JavaScript abuse.

On the other hand, many people don't want to touch with a ten foot pole a website that doesn't look "sexy".

Even if that were true, the definition of "sexy" has changed over the years. Currently we're in a Rubenesque period.

Do we know that, though? Is "many people" really users, or is it web devs, project managers, and executives? I think that's why there's been a trend towards minimalist design; it looks pretty good and it's typically fast.

Not sure why he took CNN as an example instead of this very article... http://i.imgur.com/4w9TxOw.jpg > 146 requests, 1.8MB transferred.

CNN is a fairly reputable news organization, and a pretty good example of how broadcast can have a strong digital presence. They're going to be mimicked by other news organizations without a doubt.

Compare them to a local TV station's site, and what you'll see there is basically video clips dumped onto articles or section fronts. CNN takes the trouble to write news copy alongside script (or are at least pretty good at rewriting) because you can't just repurpose TV script for a news article.

And in other news, http://www.cnn.com/cnn_adspaces/

I ended up with 67 requests and 2.8MB, with uBlock Origin, Flash disabled, and going through a transparent Squid proxy with SquidGuard lists from shallalist.de (adv, anonvpn, remotecontrol, spyware, tracker).

If you'd finished the article, you'd probably notice he says (twice) exactly why he used CNN. Not that they're one of the worst, but rather that this is pervasive.

> Again, I don’t mean to call out CNN as the “bad example,” but rather use them to show a specific example of a model that has become pervasive for content on the web.

Some parts of the web are worse than others. He must hang out at Telerik a lot.

That's strange. I get much different figures. https://veuwer.com/i/2uys.png

Perhaps they're working on it?

Did you clear the cache beforehand? (since you don't have "disable cache" checked)

With JS enabled TFA killed my (old) Android browser. Twice.

Disabling JS I managed to read it.

As far as the monetization of content goes, content creators are desperate because there is too much information out there. The people who make money with content these days are people who filter content and write for highly specific niches. I guess celebrity gossip and "Hottest Girls of Instagram!" also works pretty well as a content marketing strategy too. Give the devil his due.

One thing I hate about most content I get is how all the big stories of the day tend to creep in. I have a twitter account I use for personal marketing and I never ever post anything off topic or related to politics or whatever the hot meme of the day is. I only post information related to the very specific niche that I am covering. I pulled up my twitter feed recently and there are biotech people commenting on Greece. There are startup gurus retweeting Greece. If it's not Greece it's whatever is the hot topic of the day like <insert social issue that even saying you don't care about will result in social excommunication> or the Ukraine war or whatever. Frankly, I don't care, I don't have the time, it does not affect me personally. I have way too much stuff to think about already. This is why people don't pay for content. The supply is off the charts and the demand is not there and most people are just recycling crap that they read somewhere else anyway.

I am not sure if I'd classify Twitter as content, but to be less snarky, focused platforms tend to stay more on-topic than twitter. I find small subreddits to be very much on-topic, for example.

The cruft exists because the underlying economics of how data is hosted and distributed are flawed. The incentives between advertisers, readers, publishers, and hosting and distribution providers are currently misaligned.

In other words, building an info-economic system on "free" content supported by online advertising results in the people who make the content not getting paid enough, and the content itself being horribly mangled by advertisements and other priorities.

However, we can build other kinds of info-economic systems. Perhaps we could follow the model of publishing back when we still used copyrights as an effective way to align the incentives between authors and publishers. Covering the costs of hosting and distribution is a lot easier to manage when the content itself isn't hard-coded into an ad-selling machine, now that delivery options like BitTorrent or WebTorrent exist. Permanent seeds could be kept alive at very little cost to publishers. Perhaps we could experiment with royalties, or selling virtual shares in media, or buying virtual goods that list you as a supporter, or allowing people to invest in the creation of, and share ownership in, media...

Copyright hasn't gone away. Media organizations still own the rights to content they produce, and can sue anyone who copies it without their permission. Granted, de facto, there are many people who will rip content and publish it on their own site. But that's not the plague that's driving media companies to cruft up their websites; by and large, it's a minor problem, thanks to Google's filtering of duplicate content.

People aren't paying for content because the publishers are giving it away. Copyright doesn't change that. They own the rights to their own content, and are free to charge whatever they want for it, including nothing. If they want to put up a paywall, nothing's stopping them.

I want to make clear that content isn't suffering because people won't pay. Newspapers and magazines never made the bulk of their money from subscriptions. It was always about advertising. Thanks to Google, advertising's a commodity, and publishers can't charge what they used to for it.

The solutions you list may work, or they may not. But let's not confuse the copyright issue and the advertising issue.

But publishers have to give it away with the "ad selling engine" embedded right in the content or else they make no money because... no one will honor their copyright and funnel advertisement earnings to the rights-holders.

Look at FM radio. They run ads. The songwriter is entitled to royalties on that income because of our copyright laws and accounting infrastructure. Funnily enough, the recording artist is not entitled to royalties, because at the time this stuff went into effect it was assumed that radio airplay was good advertisement for the record, and that's how the artist would be compensated!

The way we're doing things right now with social media is as if the songwriters didn't get any money from radio play. Facebook isn't paying anyone for the content they monetize. Neither is Google. They just index and aggregate and profit and ignore copyright. Imagine FM radio if it were like this. Every song would have to be an advertisement for something else for anyone to make any money.

That's exactly what's happened to our content: it's filled with advertisements and there is no way to remove them! This "web cruft" is a perfect example of this effect! Why can't I just get pure, clean content anywhere? Because then anyone could link to it and, in the process, sell some deodorant on someone else's behalf.

Facebook has realized this and, much like Apple, they've decided to act as a private monopoly of a content industry where they get to set their own royalty rates. They'll take 30%, the "content creator" takes 70%. They'll decide what is worthy of publication and what isn't. We figured out hundreds of years ago that this monopoly should absolutely not be run by a private entity.

Not only would it be much better for individual liberty, it would be much, much easier, and would apply to ALL content, if we could just include royalty payments to rights-holders. That way I just put "content" on the Internet, and if someone happens to be slapping ads on it directly or indirectly, money is making its way back to the people who made it or paid for it to be made.

I'm not really talking about anything novel here, just looking at some recent history of examples of how copyright has been used in contracts to the mutual benefit of many different parties, including artists, publishers and distributors.

Facebook isn't violating copyrights when people share an article, nor is Google committing infringement when it indexes a website (excluding books and image search). All they do is provide a link. Telling someone where they can find an article is not the same as republishing articles without permission.

If media companies wanted to, they could raise a paywall and keep us from viewing their work without paying. And for sure, some piracy would occur. But there's no evidence this is happening on a massive scale. It's not as if people flock to The Pirate Bay to torrent articles from the Wall Street Journal.

So why don't more sites have paywalls? Because it means fewer readers and less sharing. When you're in the business of selling ads, that means less money. But when you're in the business of licensing content, all that matters is what paying readers contribute. That should tell you how media companies see themselves.

I like the direction of your thought. However, the practical future may instead bring a personal suite of intelligent browsing agents, pre-fetching info for the user and stripping out 'cruft' to make the web useful, and consumable, again. Good examples are Safari's Reader feature and the Readability plugin for Chrome. Those are just the beginning.

That only satisfies the demands of the consumer. A functioning economic system needs to also reward the supplier and at a bare minimum cover their expenses.

The answers are right in front of us: just treat digital media as property and follow the same model of copyright that's been working for over 300 years. The issue has been that tracking ownership and the accounting around payments wasn't able to keep up when things went purely digital. These were tasks that were at one point fully monopolized and facilitated by government and still are to some extent with things like SoundExchange.

Right now the only things worth owning are shares in the aggregators, not the actual media itself. Digital media is no longer even remotely aligned with the interests of authors.

In music, for example, Sony didn't directly license content to Spotify; it wanted equity in Spotify itself. Whatever deals are being signed are done without the consent or consideration of the authors. Tidal is a service where a few artists realized they need to own equity in the aggregation service, because their ownership of songs and recordings has become worthless.

Intellectual property is a fantasy. It is a legal construct. The only reason we've ever had a vibrant market economy of books, music, and movies is because we set up the legal and accounting frameworks to make it possible.

Whatever mess we've created over the last 20 years is really starting to show its ugly side, especially in relation to music. I wouldn't be surprised if there is a wholesale musicians' revolt when it comes to digital media. Vinyl sales have been up year-over-year for coming up on a decade now.

I really like your point about the vinyl. However, land property rights are also a fantasy. They're a legal construct. I mean that. How can anyone claim to "own" land when we as a species are just 2M years old, compared to the Earth's estimated 4000M years of existence? But look at the mess we've created around land property over the past 2000 years! So why was everyone in the West so surprised when the Russians revolted in 1917 to bring land back to those who work on it and depend on it?

At least in the Anglo-American world, the only world I know much of anything about, we've built our entire societies on the back of common law constructs around contracts, tort and property rights. For better or for worse, it's how we do things. The American revolution was just a logical extension to the legal structures that had been evolving in the United Kingdom for hundreds of years.

I can see the appeal of throwing out the concept of property rights if you, well, own no property, which was entirely the case for the vast majority of people living in Russia in 1917.

However, here in the United States, we've taken another approach, and that is to encourage ownership and participation in the legal and market structures that define our society. We want more homeowners, more intellectual property owners, and more equity owners.

The more people we have benefitting from these fantasies, the more likely that these fantasies continue.

I think for the purposes of this discussion, we've already tried the "communist" approach to digital media ownership on the Internet. We got to where we are right now because we've basically abandoned intellectual property. But we didn't get rid of ownership. There are plenty of profitable companies who deal in hosting, distributing and organizing digital media who don't track nor care about who owns anything.

Just like with the communist experiments on the grand scale of the Soviet Union, property ownership doesn't go away entirely; it just ends up in the hands of a select few, to the detriment of everyone else. It's not like you could ever stroll into Stalin's house and borrow his hat without asking.

Now, if we could just get more private entities in the Western world to realize that all of their ownership stems from the fantasies of a social contract between a people and its government, we might be able to get somewhere. Acting like the government is always the enemy is absurd. You can't have a corporation without government or some other system for managing the state of ownership... hmm, what's that technology that showed up recently related to coming to consensus on who owns what on a shared public ledger?

Thank you for the exhaustive answer. It seems you mix up personal, communal and private property. In the Soviet Union there was no private property. However, it respected personal property, which was protected under criminal law, so you could not walk in and take Stalin's hat. And there was co-operative, or communal, property, which was the property form for large enterprises. Property of the state falls into this category, though with a bit of a different sauce.

With respect to IP rights, the USSR considered that personal property and established legislation for that. Vast libraries of the USSR patents were protected that way.

As you pointed out, that was a grandiose experiment. And it failed, because what people wanted was private property and a state that protects private property. But the nature of the Internet reveals a basic truth: you cannot own music.

Absurd things like the arrested Tokyo accordionist who played The Beatles for fun will happen all the time if one imposes private property on music. But you can control distribution and charge for that by owning the supply chain of music; this is what Apple and Amazon do.

They are not going to sue a homeless man singing "Yellow Submarine" on the outskirts of Chicago. But they will extort every penny from labels and indie musicians who want to use their trade channels to reach customers.

> But the nature of the Internet reveals basic truth - you cannot own music.

Dude, this is as true now as it was 200 years ago. We've always created artificial systems to support markets for intellectual property.

Not always. And not forever.

You mean like an RSS aggregator and reader?

And the answer: "Why does CNN show ads? To make money. Why does CNN include tracking services? To learn more about the reader, to show more targeted ads, to make more money. Why does CNN use social media buttons? To get people to share the article, to get more page views, to get more ad views, to make more money."

And as ads make them less money, they show more of them, or more intrusive ones. They A/B test where to put them or how to make them "appear" suddenly in the hope of stealing a click, and they try to disguise them as other news stories. But the bottom line is that the ad-supported web, at the ad rates people can get, is challenging at best, and in cases like Dr. Dobb's Journal, well, they give up.

(My opinion here, I know not everyone is going to agree.) What I would really like to have is web capabilities without design; I mean the browser should be in charge of most design. I would like more meaningful tags like <panel>, <post>, <user>, <description>, <icon>, <horizontal-menu>, and let the browser handle the actual representation. CSS could be used just for rough positioning plus size indication, and background-color could be completely replaced by a 'contrast' tag, with the browser displaying the color according to the user's choice or the operating system interface. The website would then adapt itself to the user and not the opposite.

I thought on this for a while.

The problem is that, as a user, I want to be in control, to dictate what's allowed and what not. But as developer, I want the browser to behave like I need to accomplish whatever the site is offering to the user.

So there's a conflict there. And the browsers should be the mediators, much like the OS is, at a different level.

If the web were merely declarative, we would have gotten a better web.

I think the way to go is for the site author to offer their own content style. Then the user has the choice to use that, or to use their own style. They can default to whatever they want (theirs or sites'), and override on a case-by-case basis.

If a user is determined to view your site with different colors/fonts, then there's no reason to fight them. They could even have very legitimate reasons (poor eyesight needing bigger fonts, color blindness issues, etc.)

But for this to really work, we need to drastically decrease the variation involved in the site markup (HTML), and rely much more heavily on the styling (CSS) for page layout. HTML5 semantic elements don't go nearly far enough.

There is a reason to fight them: layout. The battle is not developers vs. users, it's designers vs. users. Designers want everything to look exactly like their vision (understandably), and that means things like bits of text fitting in exactly the right dimensions and placement. Once you let users resize things on a whim, that goes out the window, because most likely everything will look completely wrong.

It also opens the door for dead simple ad blocking that can't be stopped, and business owners will never stand for that. The purpose that the web has been bent to in the last 15 years is simply not compatible with user styles.

This is starting to happen with things like Google's Material Design and similar UI libraries that try to standardize appearances across devices. It isn't as granular as you are asking for, but I actually like that aspect. Delegating design and display to the browser isn't ideal for a lot of reasons. I would like to exercise control over how my content is displayed, and I regularly load in CSS reset sheets. Display is something delegated to a framework, not a browser (my opinion).

I think that <something/> should be in charge of handling representation and we are seeing that with ui frameworks that standardize displays across devices and applications.

> background-color could be completely replaced by a 'contrast' tag

What? There are many different colors on websites and users are lazy. Also, css should ideally be for styling not positioning, but it is one of the only options right now.

It might be a strange (and incomplete) thought, but how I see it is that the browser would deduce the color. Contrast would be relative to text. For example, if the panel has a contrast of 0.8, the browser should choose a color which is readable against the text color if both were put on the same panel. This color could be picked from the OS design. It does not mean that it would be ugly, though, since you can use pretty good designs and colors; the web would just look much more consistent.
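For what it's worth, the "deduce a readable color" step isn't magic; it maps fairly directly onto the standard WCAG luminance and contrast-ratio formulas. Here's a rough sketch in JavaScript, where pickBackground() is a made-up heuristic for illustration, not anything a real browser implements:

```javascript
// Linearize one sRGB channel (0-255)
function linearize(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// WCAG relative luminance of an [r, g, b] color
function luminance([r, g, b]) {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// WCAG contrast ratio between two colors, from 1 (none) to 21 (max)
function contrast(a, b) {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Hypothetical heuristic: walk grays from darkest to lightest and return
// the first one that meets the requested ratio against the text color.
function pickBackground(textColor, targetRatio) {
  for (let v = 0; v <= 255; v++) {
    const bg = [v, v, v];
    if (contrast(textColor, bg) >= targetRatio) return bg;
  }
  return [255, 255, 255]; // fall back to white if nothing qualifies
}

console.log(contrast([0, 0, 0], [255, 255, 255])); // ≈ 21, the maximum
console.log(pickBackground([0, 0, 0], 4.5)); // a mid gray meeting 4.5:1
```

A browser could then substitute OS theme colors instead of plain grays, as long as the chosen color clears the requested ratio.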

I am just going to punt because this still isn't landing for me. Maybe I am just mischaracterizing you, but it seems like you are either frustrated building things with outdated tools, or you want more control as a user. If it is the former, we are at the top of a funnel right now; we will start to see much better tools improve, and a few winners will emerge. If you want more control over sites as a user, you would probably have to force the CSS to either not load, or override it with your own scheme.

It's just that, when you say color, it isn't that easy. There are tons of screen sizes, and even a site as minimal as Hacker News has at least 4 colors, so I am not sure what would happen on a site like Fast Company with many different colors. An OS/browser couldn't manage that because no one would agree. OK fine, I didn't punt.

It's called "bloat". Maybe "selling out". Too many hands on deck. VCs want profitability.

It resolves itself usually. Sometimes.

Websites get bloated over time. Difficult to read. Slow to load. Messy UI. Runaway code. Ads everywhere. People stop visiting. Less bloated alternatives appear.

Slashdot ... MySpace ... been there done that. Reddit, Imgur .. drifting slowly but surely towards bloat.

Mobile apps suffer too. To a lesser extent due to limited screen space. Poorly designed apps, non-native apps, heavy Javascript frameworks, ad popups etc.

Even worse, when a mobile developer decides to build a simple website for the first time.

Install Mercurial, Vagrant, Bower, NPM, Grunt, Mongo, Express, Mongoose, Passport, Angular .. update everything .. cache clean everything .. check your environment settings .. mess around with Heroku .. create a Docker image for easy deployment. Spin up a virtual machine.

Now hand off that 5 page website to someone else when the project is complete. They'll add bloat to bloat.

Better or worse?

The web is old. We're focused on apps. Eventually we'll move back to the web and clean things up.

The Reddit shift is happening on mobile.

This morning I saw an ad for a Magic the Gathering game in my feed. They had been testing Reddit placeholder messages before so I knew it was a matter of time, but damn did that highlight that I REALLY hate feed-based ads.

Unfortunately, I'm sure it is the new reality. My main issue is that over time the feed gets "curated" in a way that drives more and more ad clicks vs. the view I want.

Imgur is already bloated. As for reddit, I'm curious about what makes you think they're drifting towards bloating. They use JavaScript (replying to a comment creates a new textarea), AJAX (posting a comment), WebSockets (real-time updates of comment timestamps), modals (sign up), but they do this in a very moderate way and the result is really robust. It seems to me that they perfectly know the power of all these technologies but have a very strong QA which doesn't let a single shit get pushed in prod.

More of a prediction at this point ... that $50 million in VC has been pushing them towards possible bloat. Not so much on the tech side, but on the advertising and monetization side.

Perhaps the recent resignations have something to do with a pushback against that? Internal bloat will eventually show itself!

> WebSockets (real-time updates of comment timestamps)

What are you talking about?

>scrollaway 9 hours ago | parent | flag

If I leave this up for an hour without refreshing, I'll get:

>scrollaway 10 hours ago | parent | flag

Again, without ever refreshing the page.

Can't that be done purely in the client? Why use WebSockets to do remote math?

It can be done with a few lines of javascript. You certainly don't need to contact the server to ask them how long ago a certain time was.
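Indeed; here's a minimal client-side sketch (the unit thresholds and function name are just illustrative):

```javascript
// Format "N units ago" from two millisecond timestamps, entirely on the
// client; no server round-trip needed for this.
function relativeTime(thenMs, nowMs) {
  const seconds = Math.floor((nowMs - thenMs) / 1000);
  if (seconds < 60) return 'just now';
  const units = [['day', 86400], ['hour', 3600], ['minute', 60]];
  for (const [name, span] of units) {
    if (seconds >= span) {
      const n = Math.floor(seconds / span);
      return `${n} ${name}${n === 1 ? '' : 's'} ago`;
    }
  }
}

// In a real page you'd re-run this over every timestamp element on a
// timer, e.g. setInterval(updateAllTimestamps, 60 * 1000).
const now = Date.now();
console.log(relativeTime(now - 9 * 3600 * 1000, now)); // "9 hours ago"
```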

Minor gripe: I wish people would stop talking about "ad blockers". Pretty much since I started using the web (back in the day, with Opera), I've had a tool available that is much more general and has given me much more control over how the web uses my bandwidth:

A URL blocker.

I don't use it for ads exclusively, although lots of those fall under it too. I use it to block anything I find annoying when I use the web, be it overly big header images, fonts I don't like, JavaScripts that many pages use to "enhance the experience", and sometimes ads too.

Thinking about it as a tool that only blocks ads, instead of one that customizes the web by blocking URLs themselves, seems narrow-minded to me and misses the point.

Please give us more details about the URL blocker you use. Is it a browser plugin? Is it a network change at the OS level (HOSTS)? Is it a firewall or routing block? Is it middleware in a container running in your VPC?

I still use Opera 12, where it's built in.

You can give it a file like this: https://www.fanboy.co.nz/adblock/opera/urlfilter.ini

And it'll apply black/white-listing in whichever way you configure it before actually getting data from any URL. Editing features are built into the browser.
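For the curious, files in that format look roughly like this. This is from memory of Opera 12's INI layout, so treat the section names and patterns as approximate rather than authoritative:

```ini
[prefs]
prioritize excludelist=1

[include]
*

[exclude]
http://ads.example.com/*
*/banners/*
```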

To have it available on a more global scale, you could probably use something like squid proxy, but i don't know if it gives power quite like that.

URL blockers, however, are a thing that by all rights should be built into the core of any browser, just like number blacklisting should be a default feature of every phone (but isn't).

I do this directly on my router running Tomato firmware. The initial setup is a bit more involved, but it applies equally to all browsers and mobile devices in the home.

The script I use is http://www.linksysinfo.org/index.php?threads/script-clean-le...
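On a single machine, the same idea works with a plain hosts file; the hostnames below are placeholders, not a real blocklist:

```
# Resolve known ad/tracker hosts to a black-hole address so nothing
# on this machine can reach them, regardless of browser.
0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net
```

The router approach just does this once for every device on the network instead of per machine.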

Ad blockers are more sophisticated than that these days, or at least support features that are. For example, they can hide a specific DOM element on a web page within a larger page according to its path or other characterization. I don't know what percentage of total effective rules these capabilities comprise, though, but it's something you cannot do with host level blocking alone.
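To make the distinction concrete, filter lists typically mix both kinds of rules. In Adblock Plus syntax (the domains here are made up):

```
! Network rule: the request to this host is never made at all
||ads.example.com^

! Element-hiding (cosmetic) rules: CSS-hide a node inside the page;
! any request it triggered may still have happened
example.com###sidebar-ad
example.com##.sponsored-post
```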

Does that mean they preempt the network request to the div's contents completely, they only hide the div via css/js, or both?

Cruft removal suggestions:

1. Read the source of your own web pages. What's really necessary?

2. Do you really need more than one tracker? Does anybody need 14 trackers?

3. "Social" buttons don't need to communicate with the social network unless pushed.

4. Stuff that's not going to change, including most icons, should have long cache expiration times so they don't get reloaded.

5. Look at your content management system. Is it generating a unique page of CSS for each content page? If so, why? Is it generating boilerplate CSS or Javascript over and over again?

6. Run your code through an HTML validator, just to check for cruft.

7. You can probably remove the IE6 support now.
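For point 4, long cache lifetimes are a one-line server config. A sketch in nginx, with a file-extension list that is just an example:

```nginx
# Far-future caching for static assets that never change.
# Assumes fingerprinted filenames (e.g. app.3f2a1b.js), otherwise
# updates won't reach returning visitors for a year.
location ~* \.(png|svg|ico|css|js|woff2)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```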

> 2. Do you really need more than one tracker? Does anybody need 14 trackers?

Marketing answer: yes. GA and Omniture do different things/generate different reports. Each tracker usually does one critical thing better/differently than the last, so it's really easy for marketing to say they want all of them.

> 3. "Social" buttons don't need to communicate with the social network unless pushed.

Yes they do, because tracking.

> 6. Run your code through an HTML validator, just to check for cruft.

5KB of html cruft is nothing compared to the massive amounts of JS cruft being loaded (your point #2). This is wasted effort imo.

#3: "Social" buttons not communicating with the social network?

Good idea to kill fb/twitter!

The web today does indeed look broken. But this example (oh, especially CNN) only shows that the problem is merely organisational. If there is no QA process in the organisation, quality criteria are non-existent. If there are ever-shifting bossy MBAs in top design positions, it will always be like this and get worse over time. The only thing we can do as "web crafters" is to make our personal sites suck less. When it comes to the job, who will sacrifice job security for the sake of "better user experience"?

An unmentioned alternative is Firefox's "Reader View" or Readability [0], which reformat the page post-download. They don't work on all sites and can get some things wrong. Still, it's a solution that's open, distributed, and can't be easily stopped by content producers. Imagine a Firefox mode that always had Reader View on; a sort of filtered/limited web browser.

[0] https://www.readability.com/

Strange that this is coming from Telerik, a company that creates really heavy, closed JavaScript framework libraries that you can probably load from their own domain. They definitely have their part of the responsibility for that 'cruft'...

We can (must) judge this article on its own merit. While not particularly informative, it is well written and explains the problem very, very clearly. The examples and analysis make sense even to people who aren't developers. It helps to raise awareness about an important issue, and is a part of the solution.

Telerik's cruft is somewhat different from the cruft referred to in the article. They offer "rich" UIs, which we might hate, but is just what many companies choose when they try to replace their old Windows Forms apps with web apps. With modern browsers, they aren't really that computationally expensive. And in any case, they aren't responsible for the type of issues mentioned in TFA. Like loading from dozens of domains, interstitial ads, social media links, and 200+ requests.

Telerik's products cannot be evaluated in isolation without looking at their history. They primarily cater to enterprise-y Microsoft shops, and they've been in the business at least since the classic ASP.NET days (10+ years). Their components were DLLs earlier (which they probably still offer), but have now moved to somewhat leaner, modern JavaScript libraries. Their market moves very slowly, and it makes sense for them to move at a similar pace.

You are right. However, the problem has no resolution in a technical sense. When your boss tells you 'integrate that sh#t into this page now', you are just doing your job, right? You are not going to spend 4 nights developing a technical proposal and an organisational framework tailored to your employer's business to introduce HTTP/2, and hire 3 interns to map external dependencies for the whole 50+ domain ecosystem, right? I did that several times in my life. Will not do that again.

Oh, Telerik also has a wonderful system of web components for IIS, and its CMS powers Iranian governmental sites. Given that Iranian networks are not exactly the fastest, that makes for very interesting experiences for those inside the max-2Mbit world.

Telerik has been following Microsoft down the road to lighter and more modern designs.

They still have a ways to go, mind you, but they have gradually at least been talking the talk. (Though I think most of their revenue remains in the integrated stuff.)

I had about the same feeling. Telerik components come to mind when thinking about extra/unnecessary complexity.

We had a similar problem in 1999. They called it 'portalitis' back then. Go find some shots of the old Yahoo homepage for an example.

Then Google came out with a blank page and a 'search' box, and we finally exited that ugly, crufty era.

Now we're back!

Yeah. I'm sure many remember all the other stuff that came with it last time: flash banners, popups/popunders, auto-reopening windows, etc. It was mighty crufty.

I wonder how many other developers got into disagreements back then with other departments about how that stuff "was the future" and that there "was no fighting it" and that "everybody should just accept it now." Then we got clean / low-cruft designs for a while. Now we're back indeed.

Cruft is cyclical.

Such is the nature of cruft: it accrues. Like barnacles, the longer you sail your ship the more of it you will find encrusted on your hull. Eventually you have to roll up your sleeves and scrape it off, and when you do, you will briefly be able to enjoy a cruft-free state. But once you leave drydock, it will start accruing all over again.

True, but I think there might be limits. I hate to think what 90's designers would have done with the massive screen real estate we have available today. Would it have been worse, or would they have realized they'd saturated the human brain's ability to separate objects?

Interestingly, the Google home page has more stuff on it today than it did back in 2001, but appears less crufty. Good layout seems to help eliminate some of the cruft. Stuff that used to all be relegated more or less to the center of the screen has been pushed away from the main search box; available, but not interfering.

http://wayback.archive.org/web/20010119175000/http://www.goo... versus https://www.google.com

Mobile is definitely cruftier, but mainly because of the smaller screen real estate and because if you don't have their app installed, they add a modal to prompt you to get it.

Well to be fair, Google has a completely different ad model, so they don't need all that cruft on the search page. Instead they cruftify the search results page with ads. At least they're just text ads so the speed is good.

It's what it has always been. Sites/companies that go the extra mile and provide better user experiences get rewarded; others that don't lose market share. At the end of the day, if the content is good enough, most people will put up with crappy load times.

Sadly, I don't think this is true for individual content publishers. The size of any individual publisher's audience is not big enough for them to stay afloat with a clean user experience. Flipboard and Facebook pull this off by providing the better user experiences of "you can read everything" and "you're already on here anyway" respectively, but that wouldn't work for someone like CNN.

People spend too much time improving technology and not content. Infinitely fast access to a void is worthless.

I think "the web" in this case really means media sites and some retailers. The media sites don't have a clue how to make money, other than to include all the usual ad networks and tracking scripts and then sit back and hope. The ecommerce guys are obsessed with slicing and dicing visitor behavior and tracking conversions. Seems to me that outside these (admittedly rather vaguely defined) spaces the web is a much cleaner and snappier place.

Tend to agree with this. For instance, big porn sites, where every byte counts, rarely have major bloat.

The latest stratechery article discusses this same topic from more of a high level industry trends and business forces perspectives, https://stratechery.com/2015/why-web-pages-suck/

Isn't the fundamental issue here that nobody wants to actually pay money for web content?

Solve that and you solve the cruft issue.

I don't think the issue is people not wanting to pay for content. The Web breaks apart for-pay models in different ways: people want content from more publishers than they can afford; and, people want to share.

Getting people to pay for content on the web is a tricky issue. Looking at it as a consumer, I don't mind paying, but to whom? "Walled gardens" tend to force you into a subscription model: pay "The Economist" (e.g.) annually and you have access to their content. In modern times, people read content from all over the place without respect for boundaries. Realistically, you will probably want content from more publishers than you can afford to pay in subscriptions.

Should publishers move to a model in which you pay per article? If so, how much? Do they price low ($0.10 USD) with the belief that more people will pay? Do writers get a percentage or flat rate? How is the quality of writing affected by each option?

How do we share content in a for-pay model? Communal reading and discussion is what people do. It's the same way with music. People share articles, music, and pictures with others who might be interested. In turn, sharing entices dialog and dialog entices relationship-building.

I don't think the issue is people not wanting to pay. The Web opened up individuals to a much larger set of publishers. Publishers in turn tried to keep a subscriber model while trying to curtail the very human desire to share. Solve the issues of not limiting people to small numbers of publishers and not limiting their ability to share, and you will reduce publisher dependency on ad revenue.

I would never consider paying for the vast majority of web content. Fully 99.9% of what I see and read online is total shit dressed up with JavaScript.

The cold hard truth is that most content on the web isn't worth even 10 cents per read.

I agree that there is a poor signal-to-noise ratio for content on the web. When I wrote the response I had in mind professional journalists and writers; publishers like The Economist, Wall Street Journal, Scientific American, ACM, etc.

>Solve the issues of not limiting people to small numbers of publishers and not limiting their ability to share, and you will reduce publisher dependency on ad revenue.

I don't see how that fundamentally solves the problem of how you pay content producers. For a business model to survive you need to extract something from the end user eventually. With ads there is this turtle-stacking problem: revenue comes from a consumer buying something from a retailer, and the retailer pays for ads on a website.

So at the end of the day there is an exchange of money, just not between the content producer and the consumer. If the desire is to kill the relationship between content and ads then the content needs to capture the revenue through another means - sales of hardcover books (internal ads), subscription services etc... there really aren't that many models.

Publishers are already widely accessible, and sharing already exists. Different channels for paying authors don't.

It's possible that my observations won't solve the problem. But, we agree on the fundamental that content providers either need payment from the consumer or from ads or both.

Right now content publishers are often taking money from ad agencies in lieu of charging the consumer (no blinding revelation there). My assertion is that I'm more than willing to pay for content (as long as there are no ads), but I don't want to enter into subscription relationships with every content publisher I go to. I was also trying to implicitly state that I don't want to pay for a full issue of (for example) The Economist when I'm interested only in one article. Paper-based publications had to be that way; I'm skeptical that digital publications do.

Publishers who do allow per-article payment methods are often pricing individual articles so high that it would be impractical for me to pay for all the content I want to read.

I've been careful to use "publisher" because the publisher is often the proxy for the authors whose content we read. It may be that we need to enter into more direct relationships with authors rather than their proxies. It seems like we need an "App Store" for articles. It may make things worse, I'm not really sure.

Yea I get what you mean. I think that the system you describe would have a lot of friction in it, but if done right could be really useful.

On one side, publishers get an easy-to-grok revenue stream, and on the other, consumers get a better user experience.

I think it falls apart when you consider that adblock is becoming the norm and publishers are probably seeing pretty decent revenues from advertising.

Seems like it would have to be something where you pay $10 per month or whatever to see ad-free articles from publishers on the platform through a browser plugin. Seems too cumbersome though. Not sure - there could be a "disruptive" startup in there somewhere but I think it would have some pretty big challenges to overcome.

It's not hard to solve that problem, theoretically.

Before an ad network serves me an ad, my reader app will see how much was paid for that ad and pay that sum for the removal of the ad. Of course, there would be a settings menu defining the rules, amounts, and behavior of this. And if I'm not willing to pay, I will be shown the ad.
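A minimal sketch of that buy-out decision, just to make the idea concrete (the function name, thresholds, and units are all hypothetical, not any real ad-network API):

```python
def handle_ad(ad_price_cents: int, max_buyout_cents: int, balance_cents: int):
    """Decide whether to buy out an ad impression or show the ad.

    ad_price_cents: what the advertiser paid for this impression.
    max_buyout_cents: the user's configured per-ad spending cap.
    balance_cents: the user's remaining micropayment balance.
    Returns (action, new_balance).
    """
    if ad_price_cents <= max_buyout_cents and balance_cents >= ad_price_cents:
        # Pay the publisher what the advertiser would have, and hide the ad.
        return "buy_out", balance_cents - ad_price_cents
    # Not willing (or not able) to pay: fall back to showing the ad.
    return "show_ad", balance_cents
```

The interesting design question is who holds the balance and how the publisher verifies the payment, which this sketch deliberately ignores.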

Isn't that basically what Contributor by Google[1] does?

You bid for your own ad space on pages, and pick your placeholder.

[1]: https://www.google.com/contributor/welcome/

Publishers could charge whatever they're getting net from advertising. I can't imagine that it would be very much. Use some micropayment system. "Google gold" or whatever ;)

Weird, I constantly see communities rush to give money when a site is in danger of closing for lack of money. I am not talking about "50 social media experts write lists about memes" sites, but websites that exist because of an intrinsic motivation to create and share. Be it a site about bicycles, simulation games, or a plant species no one has ever heard of.

There are some strong points in this article but I think it reaches the wrong conclusion. Flipboard and Instant Articles are not an alternative business model. They are a loss-leader, plain and simple. The goal is still to get you on their webpage where you'll see the ads.

Good ole capitalism solves this problem quite easily. CNN banks on its brand to keep people coming to its site, but there are plenty of non-terrible news sites on the web, so those who care about a nice user experience can just use those instead.

Hosts-file ad blocking. Bang! Problem solved.


This works so well I have a vastly different user experience than other people. I rarely see ads and pages load fast.
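For anyone unfamiliar, a hosts-file block is just a list of ad/tracker domains pinned to a dead address; the domains below are illustrative, not a curated list:

```
# /etc/hosts — resolve known ad hosts to a non-routable address
0.0.0.0 doubleclick.net
0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net
```

The browser never opens a connection, so the requests fail instantly instead of loading.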

Of course, you can't do this on the stupid devices we use but don't actually control (phone, tablets, I'm looking at you), so I wish someone would offer a DNS service that blocks these. I'm thinking of creating one for myself at this point.

If your devices use DHCP to get their DNS server, you can probably set up a firewall/router with the appropriate filtered DNS.
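One common way to do that is dnsmasq on the router, which can sinkhole a whole domain and all its subdomains with its `address` option (the domains here are just examples):

```
# /etc/dnsmasq.conf — answer 0.0.0.0 for these domains and their subdomains
address=/doubleclick.net/0.0.0.0
address=/ads.example.com/0.0.0.0
```

Any device that takes its DNS from the router then gets the filtering for free, with no per-device setup.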

You can do this on any Android phone if you have root as it has a standard hosts file on it.

I used to use that exact hosts file on my phone, but I found a compressed version that was made for iOS and is only 28 KB. It blocks the same sites but is just a bit leaner.

Well... I try to do something...


Here's a first-pass user style sheet that gets rid of most of the noise.

   .js-gigya-sharebar {
       display: none;
   }

   body {
       padding-top: 0 !important;
   }

   .nav-header {
       position: relative;
   }
Just want to say, that is a really nicely written and presented article. Loved the way it had a good lead-in, clear and well explained examples with good screenshots, and how it dug into the whys and also explored the future without driving some agenda other than its main point of understanding the cruft problem. I realize the point of the article wasn't about quality of content but more about all the clutter around the content, but it bears saying, if the author is here to hear, that this was some really well done content.

What I find really silly are the "Would you like to subscribe to my blog?" popups which appear over the content after scrolling.

Let alone the browser-level modal: "Allow This Person's blog to send you push notifications?"

I've come to dislike the first 5-20 seconds after loading a news article on mobile, while ads and popups are still adjusting the position of the article text.

Unfortunately, these tend to work. I'm sure many people blindly implement them, but many have tested the results and found that they won out.

I typically browse via nested VPN chains and Tor, and overall latency is ca. 500-1000 msec. Leaving aside my concerns about privacy and malware, ads and third-party tracking conversations simply don't work for me. My circumstances are unusual, of course. But I suspect that many who connect via geostationary satellites have similar experiences.

That chadburn looks like it has a marble face and seems unusual, which made me go look up what a chadburn is: https://en.wikipedia.org/wiki/Engine_order_telegraph

The article by Ben Thompson on Stratechery is a good one to go along with this one. https://stratechery.com/2015/why-web-pages-suck/

I found it ironic that the author talking about web cruft also had some on their website. The header image on my cell phone in landscape mode took up 20% of my screen and stayed even if I scrolled. Anyways I thought that was pretty funny :)

Don't worry. Eventually the cruft will get filtered out by AI, just like in adblockers, but in a much more powerful way.

Essentially, the circuits in your brain that remove cruft subconsciously now, will be implemented in software.

Why do ad blockers not hide social media links? I would love this. It is cruft I deal with every day. I have never used these widgets, but I've been forced to implement them so many times.
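Most blockers can actually do this with cosmetic filters; in uBlock Origin's filter syntax it looks something like the following (the selectors are examples, since the class names vary per site):

```
! Hide common social-share widgets
##.social-share-bar
telerik.com##.js-gigya-sharebar
```

A bare `##selector` rule applies everywhere, while `domain##selector` scopes the rule to one site; the catch is that there's no universal selector for share widgets, so someone has to maintain the list.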

But I like my weather widget on news sites. It's the second most useful feature beside the articles.

Naturally it doesn't make sense if it shows weather for a place across the ocean.

So in summary RSS is new again?

The Gruber quote is (as is typical for him, I guess) insane hyperbole. The mobile Web is not "looking like a relic" because I'm not going to go installing an app to read an article; that's more trouble than dismissing the modal yammering on about privacy policies.
