
The Web’s Cruft Problem - jsnell
http://developer.telerik.com/featured/the-webs-cruft-problem
======
billyhoffman
There is a major thing going on here that is not mentioned in the article: One
department doesn't deliver a website.

Is that "Terms of Service" modal there because a front-end dev thought
"interrupting the user experience is a great idea!" No, it's there because of
legal. And all those social sharing widgets? They are there because of
marketing. And the 5 different ad exchanges? They are there from Sales/BizDev.
And that 400KB of JS crap? That's Optimizely and their crappy A/B testing
library that the dev team put in. And that hero image that's actually 1600px
wide and then resized in CSS to 400px? It's there because that was the source
image from the external agency and no one thought to modify it.

The biggest challenge with modern web sites/apps is that they are largely
built by committee. Often it's not literally a committee, but multiple
departments are all involved and all get to add to the site, while no one is
really responsible for the final, overall user experience of the site.

And even if there is a "user experience" or "performance" team, they rarely
have the power to change things. A customer of ours is an Alexa top 100 B2C
company that provides a marketplace for buyers and sellers. They get
commissions from sales, but a large part of revenue is ads. The "performance
team" makes no headway against any of the terrible performance problems with
ads because the ads team is judged on ads/conversions, not on the performance
of the page. Even when the ads are hurting the conversion rates of the
sales/commissions, the ads team doesn't care. It's a total deadlock of
departments saying "performance and user experience is not my responsibility,
I only do X".

~~~
JustSomeNobody
Back in May, TheNextWeb made some changes. Their mobile experience at the time
had two bars, one at the top and one at the bottom. Even on a large phone
display, viewing the content between the two was very distracting. Also, they
both had icons for twitter and Facebook. I gave them some feedback regarding
the design and said that I felt it was a UX mistake. The VP of design got back
to me and said, very politely, that he disagreed with my assessment. He went
on to say basically that an increase in shares means an increase in page views
which means an increase in ad impressions, and thus income. With TNW being an
ad-supported company, this directly relates to the quantity and quality of our
content. This led me to feel that the single driving motivation was not that
their users had a good experience, but that they maximized revenue. From a
user point of view, I feel that's wrong. From a business point of view, it
does make sense.

Regarding who is responsible for the final content, I would assume that if you
have a VP of Design, that's the person. Maybe I'm wrong.

I notice now that they only have the one bar at the top. Interesting.

~~~
mgkimsal
They're possibly optimizing for the wrong metric, or at least a metric you
don't care about. I'd think they'd want repeat viewers. A bad UX will mean
fewer repeat viewers, and possibly an unsustainable business. It's the classic
short-term vs. long-term view. The problem is that the long term is generally
harder to measure, so the short term becomes easier to justify.

~~~
flinty
The ad exchanges don't care about repeat viewers. It's purely a numbers game:
more clicks, more revenue.

~~~
billyhoffman
The content sites themselves largely don't care about repeat visitors. Social
networks and other aggregators are the primary drivers of traffic to
news/content/lifestyle sites. In other words, most people do go look at the
home page of Buzz Feed or Wired or Huff Po and read multiple stories. They go
to Facebook and click into different articles on different sites.

~~~
dredmorbius
ITYMTW "most people do _not_ go look at..."

------
ised
I enjoyed this article. But I have one nitpick.

The author suggests HTTP/2 as a solution to web cruft.

I could be wrong, but I see the HTTP/2 ploy as a proposed way to deliver more
cruft, faster.

What do you think is going to be in those compressed headers? How large do
HTTP headers need to be? What exactly are they trying to do? I look at headers
on a daily basis and most of what I see is not for the benefit of users.

Can we safely assume that the compressed headers HTTP/2 would enable will have
nothing to do with advertising?

Again, I could be wrong, but in my estimation the solution to web cruft
(unsolicited advertising) is not likely to come from a commercial entity that
receives 98% of its revenue from web advertisers.

The web cruft problem parallels software bloat and the crapware problem
(gratuitous junk software pre-installed on your devices before you purchase
them).

The more resources that are provided, e.g., CPU, memory, primary storage,
bandwidth, the more developers use these resources for purposes that do not
benefit users and mostly waste users' time.

This is why computers (and the web) can still be slow even though both have
increased exponentially in capacity and speed over the last two decades. I
still run some very "old" software and with today's equipment it runs
lightning fast. The reason it is so fast is that the software has not been
"updated".

~~~
billyhoffman
HTTP/2 largely won't help with the problems mentioned in the article. If I'm
loading 200+ assets from 30-50 hosts, HTTP/2 can't help much, because I'm
making 30-50 TCP connections and fetching only 5-8 resources over each.
HTTP/2's efficiency gains over HTTP/1.1 don't really show up when fetching so
few resources per connection.

HTTP/2 helps when you are downloading 200+ assets from 1 or 2 hosts.

~~~
ised
I routinely use HTTP/1.1 pipelining from the command line to retrieve 100
assets at a time. But these are assets that I actually want: i.e., the
content.

Somehow I doubt that the 200+ "assets" coming from 1 or 2 hosts automatically
when using a web browser authored by a corporation or "non-profit
organization" that is connected to the ad sales business are going to be
"assets" that I actually want.

------
myth_buster
This is incredibly ironic, as I've used Telerik components for internal app
development in our organization and the amount of cruft that gets loaded is
way damn high. The payload is large and the round trips are numerous. I ditched
all that, developed my own framework from scratch using open source libraries,
and managed to reduce the payload and increase responsiveness.

At some point in the past, it made sense to pay Telerik boatloads of money to
get libraries that were supposedly plug-and-play, but now there are even better
solutions available for free thanks to OSS!

Edit:

    Sounds a lot like Flipboard doesn’t it? If you’re a publisher and you opt in,
    you let Facebook control the distribution of your content, in return for a far
    more performant experience for your readers, and presumably shared ad revenue
    of some sorts.

This raised some red flags! Making Fbs (Facebook/Flipboard) the content
platform just to reduce cruft and improve responsiveness looks like a trojan
horse, and appears to have similar issues as those discussed in the Fb Fraud
thread [0]. Another possibility is that Fbs (Facebook/Flipboard) would become
the Comcasts of tomorrow. The distributed nature of the web is what makes it so
invigorating and democratic, and I think it would be a mistake to go the cable
route.

[0]
[https://news.ycombinator.com/item?id=7211514](https://news.ycombinator.com/item?id=7211514)

~~~
joedavison
Care to share any great OSS libraries you've found that are good replacements
for various Telerik components? Thanks!

~~~
warfangle
Disclaimer: the last time I used Telerik components was in ASP.net 2.0 about 9
years ago.

For React: [http://material-ui.com/#/](http://material-ui.com/#/)

Bourbon + Refills: [http://refills.bourbon.io/](http://refills.bourbon.io/)

Material Design Lite: [http://www.getmdl.io/](http://www.getmdl.io/)

Bootstrap: [http://getbootstrap.com/](http://getbootstrap.com/)

etc, etc. Even back when I used Telerik components, it was often faster to
develop my own component, tailored to the use case, than it was to figure out
how the hell to customize the 15-deep DOM tree that a table component generated.

~~~
joedavison
Thanks for sharing.

I agree. Telerik (and similar) components are great for getting you a lot of
functionality straight out of the box, with minimum time and effort.

Where they break down is when you need to make any sort of non-trivial
customization or handle a business-specific use case. At that point, it's
easier and more maintainable to build things from scratch, or with more loosely
coupled base components.

------
stevoski
I was in Kuala Lumpur, Malaysia a few months ago. I wanted to see a film. I
searched in Google for films showing in Kuala Lumpur. I found myself on a web
page that, while accurately listing movies and schedules currently showing,
looked like it had been created around 2000.

And it was fast. No animations, no auto-complete, no infinite-scroll, no
JavaScript frameworks. Just the information I wanted, delivered to my phone
seemingly as soon as I touched the link. Simple black-on-white text, plain
layout.

This made me somewhat sad, because it showed me what the web browsing
experience could have been today.

~~~
jkestner
Obligatory: [http://motherfuckingwebsite.com](http://motherfuckingwebsite.com)

I'm glad I got off the train as web sites became web apps. I made one of the
latter, though it actually was designed to complete certain tasks as opposed
to being a container for content.

Websites now seem to be trying to emulate native apps - not just in terms of
cramming a bunch of UI into a page, but also in the closedness. I hate how 95%
of content sites bury the hyperlinks to anything that takes you off their
site, including the source.

~~~
keithpeter
The mfws page is 5k or so and one script (and I _might_ want to fight about
that depending on the reason it is there - see the page code at the bottom).

Suppose you had another 5k budget for css. What could you do with it to make
the page look 'nicer' in the sense of closer to mainstream Web pages that
ordinary people might use?

Edit: clarity

~~~
bewuethr
Like this?
[http://bettermotherfuckingwebsite.com/](http://bettermotherfuckingwebsite.com/)

~~~
keithpeter
Yup, that is a tad more modish.

------
cocoflunchy
Not sure why he took CNN as an example instead of this very article...
[http://i.imgur.com/4w9TxOw.jpg](http://i.imgur.com/4w9TxOw.jpg) > 146
requests, 1.8MB transferred.

~~~
DarkLinkXXXX
That's strange. I get much different figures.
[https://veuwer.com/i/2uys.png](https://veuwer.com/i/2uys.png)

Perhaps they're working on it?

~~~
detaro
Did you clear the cache beforehand? (since you don't have "disable cache"
checked)

------
narrator
As far as the monetization of content goes, content creators are desperate
because there is too much information out there. The people who make money
with content these days are people who filter content and write for highly
specific niches. I guess celebrity gossip and "Hottest Girls of Instagram!"
also works pretty well as a content marketing strategy. Give the devil his
due.

One thing I hate about most content I get is how all the big stories of the
day tend to creep in. I have a twitter account I use for personal marketing
and I never ever post anything off topic or related to politics or whatever
the hot meme of the day is. I only post information related to the very
specific niche that I am covering. I pulled up my twitter feed recently and
there are biotech people commenting on Greece. There are startup gurus
retweeting Greece. If it's not Greece it's whatever is the hot topic of the
day like <insert social issue that even saying you don't care about will
result in social excommunication> or the Ukraine war or whatever. Frankly, I
don't care, I don't have the time, it does not affect me personally. I have
way too much stuff to think about already. This is why people don't pay for
content. The supply is off the charts and the demand is not there and most
people are just recycling crap that they read somewhere else anyway.

~~~
iSnow
I am not sure if I'd classify Twitter as content, but to be less snarky,
focused platforms tend to stay more on-topic than twitter. I find small
subreddits to be very much on-topic, for example.

------
williamcotton
The cruft is there because the underlying economics of how data is hosted and
distributed are flawed. The incentives between advertisers, readers,
publishers, and hosting and distribution providers are currently misaligned.

In other words, building an info-economic system based on "free" content that
is supported by online advertising results in a system where the people who
make the content don't get paid enough and the content itself is horribly
mangled by advertisements and other priorities.

However, we can build other kinds of info-economic systems. Perhaps we could
follow the model of publishing from back when we still used copyright as an
effective way to align the incentives between authors and publishers. Covering
the costs of hosting and distribution is a lot easier to manage when the
content itself isn't hard-coded into an ad-selling machine, especially now that
delivery options like BitTorrent or WebTorrent exist. Permanent seeds could be
kept alive at very little cost to publishers. Perhaps we could experiment with
royalties, or selling virtual shares in media, or buying virtual goods that
list you as a supporter, or allowing people to invest in the creation of, and
share ownership in, media...

~~~
ommunist
I like the direction of your thought. However, the practical future may bring
personal suites of intelligent browsing agents, pre-fetching info for the user
and stripping out 'cruft' to make the web useful, and consumable, again. Good
examples are Safari's Reader feature and the Readability plugin for Chrome.
Those are just the beginning.

~~~
williamcotton
That only satisfies the demands of the consumer. A functioning economic system
needs to also reward the supplier and at a bare minimum cover their expenses.

The answers are right in front of us: just treat digital media as property and
follow the same model of copyright that's been working for over 300 years. The
issue has been that tracking ownership and the accounting around payments
wasn't able to keep up when things went purely digital. These were tasks that
were at one point fully monopolized and facilitated by government and still
are to some extent with things like SoundExchange.

Right now the only things worth owning are shares in the aggregators, not the
actual media itself. Digital media is no longer even remotely aligned with the
interests of authors.

In music, for example, Sony didn't directly license content to Spotify; it
wanted equity in Spotify itself. Whatever deals are being signed are done
without the consent or consideration of the authors. Tidal is a service where
a few artists realized they need to own equity in the aggregation service
because their ownership of songs and recordings has become worthless.

Intellectual property is a fantasy. It is a legal construct. The only reason
we've ever had a vibrant market economy of books, music, and movies is because
we set up the legal and accounting frameworks to make it possible.

Whatever mess we've created over the last 20 years is really starting to show
its ugly side, especially in relation to music. I wouldn't be surprised if
there is a wholesale musicians' revolt when it comes to digital media. Vinyl
sales have been up year-over-year for coming on a decade now.

~~~
ommunist
I really like your point about the vinyl. However, land property rights are
also a fantasy. They're a legal construct. I mean that. How can anyone claim
to "own" the land if we as a species are just 2M years old, compared to the
Earth's estimated 4,000M years of existence? But look at the mess we created
around land property over the past 2000 years! So why was everyone in the West
so surprised when Russians revolted in 1917 to bring land back to those who
work on it and depend on it?

~~~
williamcotton
At least in the Anglo-American world, the only world I know much of anything
about, we've built our entire societies on the back of common law constructs
around contracts, tort and property rights. For better or for worse, it's how
we do things. The American revolution was just a logical extension to the
legal structures that had been evolving in the United Kingdom for hundreds of
years.

I can see the appeal of throwing out the concept of property rights if you,
well, own no property, which was entirely the case for the vast majority of
people living in Russia in 1917.

However, here in the United States, we've taken another approach, and that is
to encourage ownership and participation in the legal and market structures
that define our society. We want more homeowners, more intellectual property
owners, and more equity owners.

The more people we have benefitting from these fantasies, the more likely it
is that these fantasies continue.

I think for the intents of this discussion, we've already tried out the
"communist" approach to digital media ownership on the Internet. We got to
where we are right now because we've basically abandoned intellectual
property. But we didn't get rid of ownership. There are plenty of profitable
companies who deal in hosting, distributing and organizing digital media who
don't track nor care about who owns anything.

Just like with the communist experiments on the grand scale of the Soviet
Union, property ownership doesn't go away entirely, it just ends up in the
hands of a select few to the detriment of everyone else. It's not like you
could ever stroll into Stalin's house and borrow his hat without asking.

Now, if we could just get more private entities in the Western world to
realize that all of their ownership stems from the fantasies of a social
contract between a people and its government, we might be able to get
somewhere. Acting like the government is always the enemy is absurd. You can't
have a corporation without government or some other system for managing the
state of ownership... hmm, what's that technology that showed up recently
related to coming to consensus on who owns what on a shared public ledger?

~~~
ommunist
Thank you for the exhaustive answer. It seems you are mixing up personal,
communal and private property. In the Soviet Union there was no private
property. However, it respected personal property, which was the basis for
criminal law, and you could not walk in and take Stalin's hat. And there was
co-operative, or communal, property, which was the property form for large
enterprises. Property of the state fell into this category, though with a
slightly different sauce.

With respect to IP rights, the USSR considered those personal property and
established legislation for them. Vast libraries of Soviet patents were
protected that way.

As you pointed out, that was a grandiose experiment. And it failed, because
what people wanted was private property and a state that protects private
property. But the nature of the Internet reveals a basic truth: you cannot own
music.

Absurd things like the Tokyo accordionist arrested for playing The Beatles for
fun will happen all the time if one imposes private property on music. But you
can control distribution and charge for that, owning the supply chain of music
- this is what Apple and Amazon do.

They are not going to sue a homeless man singing "Yellow Submarine" on the
outskirts of Chicago. But they will extort every penny from labels and indie
musicians who want to use their trade channels to reach customers.

~~~
williamcotton
_"But the nature of the Internet reveals basic truth - you cannot own music."_

Dude, this is as true now as it was 200 years ago. We've always created
artificial systems to support markets for intellectual property.

~~~
ommunist
Not always. And not forever.

------
ChuckMcM
And the answer: _" Why does CNN show ads? To make money. Why does CNN include
tracking services? To learn more about the reader, to show more targeted ads,
to make more money. Why does CNN use social media buttons? To get people to
share the article, to get more page views, to get more ad views, to make more
money."_

And as ads make them less money, they show more of them, or more intrusive
ones. They A/B test where to put them or how to make them "appear" suddenly in
the hope of stealing a click, and they try to disguise them as other news
stories. But the bottom line is that the ad-supported web, at the ad rates
people can get, is challenging at best, and in cases like Dr. Dobb's Journal,
well, they give up.

------
realusername
(My opinion here, I know not everyone is going to agree.) What I would really
like to have is web capabilities without design; I mean the browser should be
in charge of most of the design. What I would like is more meaningful tags like
<panel>, <post>, <user>, <description>, <icon>, <horizontal-menu>, letting the
browser handle the actual representation. CSS would be used just for rough
positioning and size hints, background-color could be completely replaced by a
'contrast' tag, and the browser would display the color according to the user's
choice or the operating system's interface. The website would then adapt itself
to the user and not the opposite.
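Something like this, purely hypothetical markup to sketch the idea (none of
these tags exist in HTML today, and 'contrast' is shown here as an attribute):

    <!-- The browser, not the site, decides how a "post" is rendered. -->
    <horizontal-menu>
      <icon>Home</icon>
      <icon>Search</icon>
    </horizontal-menu>

    <panel contrast="high">
      <post>
        <user>some-user</user>
        <description>Styling comes from the user's OS theme, not the site.</description>
      </post>
    </panel>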

~~~
eridal
I thought on this for a while.

The problem is that, as a user, I want to be in control, to dictate what's
allowed and what's not. But as a developer, I want the browser to behave the
way I need it to, to accomplish whatever the site is offering the user.

So there's a conflict there. And the browsers should be the mediators, much
like the OS is, at a different level.

If the web were merely declarative, we would have gotten a better web.

~~~
byuu
I think the way to go is for the site author to offer their own content style.
Then the user has the choice to use that, or to use their own style. They can
default to whatever they want (theirs or sites'), and override on a case-by-
case basis.

If a user is determined to view your site with different colors/fonts, then
there's no reason to fight them. They could even have very legitimate reasons
(poor eyesight needing bigger fonts, color blindness issues, etc.)

But for this to really work, we need to drastically decrease the variation
involved in the site markup (HTML), and rely much more heavily on the styling
(CSS) for page layout. HTML5 semantic elements don't go nearly far enough.

~~~
kedean
There is a reason to fight them: layout. The battle is not developers vs
users, it's designers vs. users. Designers want everything to look exactly like
their vision (understandably), and that means things like bits of text fitting
in exactly the right dimensions and placement. Once you let users resize
things on a whim, that goes out the window, because most likely everything
will look completely wrong.

It also opens the door for dead simple ad blocking that can't be stopped, and
business owners will never stand for that. The purpose that the web has been
bent to in the last 15 years is simply not compatible with user styles.

------
jasonsync
It's called "bloat". Maybe "selling out". To many hands on deck. VC's want
profitability.

It resolves itself usually. Sometimes.

Websites get bloated over time. Difficult to read. Slow to load. Messy UI.
Runaway code. Ads everywhere. People stop visiting. Less bloated alternatives
appear.

Slashdot ... MySpace ... been there done that. Reddit, Imgur .. drifting
slowly but surely towards bloat.

Mobile apps suffer too. To a lesser extent due to limited screen space. Poorly
designed apps, non-native apps, heavy Javascript frameworks, ad popups etc.

Even worse, when a mobile developer decides to build a simple website for the
first time.

Install Mercurial, Vagrant, Bower, NPM, Grunt, Mongo, Express, Mongoose,
Passport, Angular .. update everything .. cache clean everything .. check your
environment settings .. mess around with Heroku .. create a Docker image for
easy deployment. Spin up a virtual machine.

Now hand off that 5 page website to someone else when the project is complete.
They'll add bloat to bloat.

Better or worse?

The web is old. We're focused on apps. Eventually we'll move back to the web
and clean things up.

~~~
Dysprosium
Imgur is already bloated. As for reddit, I'm curious about what makes you
think they're drifting towards bloat. They use JavaScript (replying to a
comment creates a new textarea), AJAX (posting a comment), WebSockets (real-
time updates of comment timestamps), modals (sign up), but they do this in a
very moderate way and the result is really robust. It seems to me that they
perfectly know the power of all these technologies but have a very strong QA
which doesn't let a single shit get pushed in prod.

~~~
scrollaway
> WebSockets (real-time updates of comment timestamps)

What are you talking about?

~~~
Qwertious
>scrollaway 9 hours ago | parent | flag

If I leave this up for an hour without refreshing, I'll get:

>scrollaway 10 hours ago | parent | flag

Again, without ever refreshing the page.

~~~
function_seven
Can't that be done purely in the client? Why use WebSockets to do remote math?

~~~
scrollaway
It can be done with a few lines of JavaScript. You certainly don't need to
contact the server to ask it how long ago a certain time was.
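A minimal sketch of the client-side approach, assuming each timestamp element
carries a machine-readable time in a data attribute (the class and attribute
names here are made up for illustration):

    // Turn an absolute timestamp into "N minutes/hours/days ago".
    function timeAgo(isoString) {
      var seconds = Math.floor((Date.now() - new Date(isoString).getTime()) / 1000);
      if (seconds < 60) return 'just now';
      if (seconds < 3600) return Math.floor(seconds / 60) + ' minutes ago';
      if (seconds < 86400) return Math.floor(seconds / 3600) + ' hours ago';
      return Math.floor(seconds / 86400) + ' days ago';
    }

    // Refresh every element like <span class="age" data-time="2015-07-18T12:00:00Z">
    // once a minute, with no server round trip at all.
    setInterval(function () {
      Array.prototype.forEach.call(document.querySelectorAll('.age'), function (el) {
        el.textContent = timeAgo(el.getAttribute('data-time'));
      });
    }, 60 * 1000);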

------
Mithaldu
Minor gripe: I wish people would stop talking about "ad blockers". Pretty much
since I started using the web (back in the day, with Opera) I've had a tool
available that is much more general and has given me much more control over how
the web uses my bandwidth:

A URL blocker.

I don't use it exclusively for ads, although lots of those fall under it too.
I use it to block _anything_ I find annoying when I use the web, be it overly
big header images, fonts I don't like, Javascripts that many pages use to
"enhance the experience", and sometimes ads too.

Thinking about it as a tool that only blocks ads, instead of one to customize
the web and block URLs themselves, seems narrow-minded to me and misses the
point.

~~~
stephengillie
Please give us more details about the URL blocker you use. Is it a browser
plugin? Is it a network change at the OS level (HOSTS)? Is it a firewall or
routing block? Is it middleware in a container running in your VPC?

~~~
Mithaldu
I still use Opera 12 and it's built-in.

You can give it a file like this:
[https://www.fanboy.co.nz/adblock/opera/urlfilter.ini](https://www.fanboy.co.nz/adblock/opera/urlfilter.ini)

And it'll apply black/white-listing in whichever way you configure it before
actually getting data from any URL. Editing features are built into the
browser.

To have it available on a more global scale, you could probably use something
like squid proxy, but i don't know if it gives power quite like that.

URL-Blockers however are a thing that by all rights should be built into the
core of any browser, just like number-black-listing should be a default
feature of every phone (but isn't).

~~~
mtone
I do this directly on my router running Tomato firmware. The initial setup is
a bit more involved, but it applies equally to all browsers and mobile devices
in the home.

The script I use is [http://www.linksysinfo.org/index.php?threads/script-
clean-le...](http://www.linksysinfo.org/index.php?threads/script-clean-lean-
and-mean-adblocking.68464/)

~~~
Pyxl101
Ad blockers are more sophisticated than that these days, or at least support
features that are. For example, they can hide a specific DOM element within a
larger page according to its selector path or other characteristics. I don't
know what percentage of effective rules these capabilities account for, but
it's something you cannot do with host-level blocking alone.
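A rough sketch of the cosmetic-filtering half of that, with made-up selectors;
depending on the blocker, the underlying request may or may not also be
cancelled:

    // Remove matching elements once the DOM is ready. This only hides the
    // cruft in the rendered page; it does not by itself stop the download.
    document.addEventListener('DOMContentLoaded', function () {
      Array.prototype.forEach.call(
        document.querySelectorAll('.sponsored-inline, [data-ad-slot]'),
        function (el) { el.parentNode.removeChild(el); }
      );
    });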

~~~
Mithaldu
Does that mean they preempt the network request for the div's contents
completely, only hide the div via CSS/JS, or both?

------
Animats
Cruft removal suggestions:

1. Read the source of your own web pages. What's really necessary?

2. Do you really need more than one tracker? Does anybody need 14 trackers?

3. "Social" buttons don't need to communicate with the social network unless
pushed.

4. Stuff that's not going to change, including most icons, should have long
cache expiration times so it doesn't get reloaded (see the sketch after this
list).

5. Look at your content management system. Is it generating a unique page of
CSS for each content page? If so, why? Is it generating boilerplate CSS or
Javascript over and over again?

6. Run your code through an HTML validator, just to check for cruft.

7. You can probably remove the IE6 support now.
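For item 4, a minimal sketch assuming a Node/Express static server (paths and
durations are placeholders); any server that can set Cache-Control works the
same way:

    // Serve assets that never change (icons, hashed bundles) with a
    // one-year lifetime so repeat visitors don't re-download them.
    var express = require('express');
    var app = express();

    // serve-static turns maxAge into "Cache-Control: public, max-age=31536000".
    app.use('/static', express.static('public', { maxAge: '365d' }));

    app.listen(3000);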

~~~
Domenic_S
> _2. Do you really need more than one tracker? Does anybody need 14
> trackers?_

Marketing answer: yes. GA and Omniture do different things/generate different
reports. Each tracker usually does one critical thing better/differently than
the last, so it's really easy for marketing to say they want all of them.

> _3. "Social" buttons don't need to communicate with the social network
> unless pushed._

Yes they do, because tracking.

> _6. Run your code through an HTML validator, just to check for cruft._

5KB of html cruft is nothing compared to the massive amounts of JS cruft being
loaded (your point #2). This is wasted effort imo.

------
ommunist
The web today does indeed look broken. But this example (oh, especially CNN)
only shows that the problem is merely organisational. If there is no QA
process in the organisation, quality criteria are non-existent. If there are
ever-shifting bossy MBAs in top design positions, it will always be like that
and get worse over time. The only thing we can do as "web crafters" is make
our personal sites suck less. When it comes to the job, who will sacrifice job
security for the sake of a "better user experience"?

------
irq-1
An unmentioned alternative is Firefox "Reader View" or Readability [0], which
reformat the source post-download. They don't work on all sites and can get
somethings wrong. Still, its a solution that's: open, distributed and can't be
easily stopped by content producers. Imagine a Firefox mode that always had
Reader View on; a sort of filtered/limited web browser.

[0] [https://www.readability.com/](https://www.readability.com/)

------
d--b
Strange that this is coming from Telerik, a company that creates really heavy
and closed JavaScript framework libraries that you can probably load from
their own domain. They definitely have their share of responsibility in that
'cruft'...

~~~
jeswin
We can (must) judge this article on its own merit. While not particularly
informative, it is well written and explains the problem very, very clearly.
The examples and analysis make sense even to people who aren't developers. It
helps to raise awareness about an important issue, and is a part of the
solution.

Telerik's cruft is somewhat different from the cruft referred to in the
article. They offer "rich" UIs, which we might hate, but that is just what many
companies choose when they try to replace their old Windows Forms apps with
web apps. With modern browsers, they aren't really that computationally
expensive. And in any case, they aren't responsible for the types of issues
mentioned in TFA, like loading from dozens of domains, interstitial ads,
social media links, and 200+ requests.

Telerik's products cannot be evaluated in isolation without looking at their
history. They primarily cater to enterprise-y Microsoft shops, and they have
been in the business at least since the classic ASP.NET days (10+ years). Their
components were DLLs earlier (which they probably still offer), but have now
moved to somewhat leaner, modern JavaScript libraries. Their market moves very
slowly, and it makes sense for them to move at a similar pace.

~~~
ommunist
You are right. However, the problem has no resolution in the technical sense.
When your boss tells you 'integrate that sh#t into this page now', you are
just doing your job, right? You are not going to spend 4 nights developing a
technical proposal and an organisational framework tailored to your employer's
business to introduce HTTP/2, and hire 3 interns to map external dependencies
for the whole 50+ domain ecosystem, right? I did that several times in my
life. Will not do that again.

------
api
We had a similar problem in 1999. They called it 'portalitis' back then. Go
find some shots of the old Yahoo homepage for an example.

Then Google came out with a blank page and a 'search' box, and we finally
exited that ugly, crufty era.

Now we're back!

~~~
smacktoward
Such is the nature of cruft: it accrues. Like barnacles, the longer you sail
your ship the more of it you will find encrusted on your hull. Eventually you
have to roll up your sleeves and scrape it off, and when you do, you will
briefly be able to enjoy a cruft-free state. But once you leave drydock, it
will start accruing all over again.

~~~
cgriswald
True, but I think there might be limits. I hate to think what 90's designers
would have done with the massive screen real estate we have available today.
Would it have been worse, or would they have realized they'd saturated the
human brain's ability to separate objects?

Interestingly, the Google home page has more stuff on it today than it did
back in 2001, but appears less crufty. Good layout seems to help eliminate
some of the cruft. Stuff that used to all be relegated more or less to the
center of the screen has been pushed away from the main search box; available,
but not interfering.

[http://wayback.archive.org/web/20010119175000/http://www.goo...](http://wayback.archive.org/web/20010119175000/http://www.google.com/)
versus [https://www.google.com](https://www.google.com)

Mobile is definitely cruftier, but mainly because of the smaller screen real
estate and because if you don't have their app installed, they add a modal to
prompt you to get it.

------
alistproducer2
It's what it has always been. Sites/companies that go the extra mile and
provide better user experiences get rewarded; others that don't lose market
share. At the end of the day, if the content is good enough most people will
deal with the crappy load times.

~~~
sanderjd
Sadly, I don't think this is true for individual content publishers. The size
of any individual publisher's audience is not big enough for them to stay
afloat with a clean user experience. Flipboard and Facebook pull this off by
providing the better user experiences of "you can read _everything_ " and
"you're already on here anyway" respectively, but that wouldn't work for
someone like CNN.

------
markbnj
I think "the web" in this case really means media sites and some retailers.
The media sites don't have a clue how to make money, other than to include all
the usual ad networks and tracking scripts and then sit back and hope. The
ecommerce guys are obsessed with slicing and dicing visitor behavior and
tracking conversions. Seems to me that outside these (admittedly rather
vaguely defined) spaces the web is a much cleaner and snappier place.

~~~
keyboardwarrior
Tend to agree with this. For instance big porn sites, where every byte count,
rarely has major bloat.

------
guelo
The latest Stratechery article discusses this same topic from more of a
high-level industry trends and business forces perspective:
[https://stratechery.com/2015/why-web-pages-suck/](https://stratechery.com/2015/why-web-pages-suck/)

------
AndrewKemendo
Isn't the fundamental issue here that nobody wants to actually pay money for
web content?

Solve that and you solve the cruft issue.

~~~
bpyne
I don't think the issue is people not wanting to pay for content. The Web
breaks apart for-pay models in different ways: people want content from more
publishers than they can afford, and people want to share.

Getting people to pay for content on the web is a tricky issue. Looking at it
as a consumer, I don't mind paying, but to whom? "Walled gardens" tend to
force you into a subscription model: pay "The Economist" (e.g.) annually and
you have access to their content. In modern times, people read content from
all over the place without respect for boundaries. Realistically, you will
probably want content from more publishers than you can afford to pay in
subscriptions.

Should publishers move to a model in which you pay per article? If so, how
much? Do they price low ($0.10 USD) with the belief that more people will pay?
Do writers get a percentage or flat rate? How is the quality of writing
affected by each option?

How do we share content in a for-pay model? Communal reading and discussion is
what people do. It's the same way with music. People share articles, music,
and pictures with others who might be interested. In turn, sharing entices
dialog and dialog entices relationship-building.

I don't think the issue is people not wanting to pay. The Web opened up
individuals to a much larger set of publishers. Publishers in turn tried to
keep a subscriber model while trying to curtail the very human desire to
share. Solve the issues of not limiting people to small numbers of publishers
and not limiting their ability to share, and you will reduce publisher
dependency on ad revenue.

~~~
na85
I would never consider paying for the vast majority of web content. Fully
99.9% of what I see and read online is total shit dressed up with JavaScript.

The cold hard truth is that most content on the web isn't worth even 10 cents
per read.

~~~
bpyne
I agree that there is a poor signal-to-noise ratio for content on the web.
When I wrote the response I had in mind professional journalists and writers;
publishers like The Economist, Wall Street Journal, Scientific American, ACM,
etc.

------
Touche
There are some strong points in this article but I think it reaches the wrong
conclusion. Flipboard and Instant Articles are not an alternative business
model. They are a loss-leader, plain and simple. The goal is still to get you
on their webpage where you'll see the ads.

Good ole capitalism solves this problem quite easily. CNN banks on its brand
to keep people coming to its site, but there are plenty of non-terrible news
sites on the web, so those who care about a nice user experience can just use
those instead.

------
cognivore
HOSTS file ad blocking. Bang! Problem solved.

[http://winhelp2002.mvps.org/hosts.htm](http://winhelp2002.mvps.org/hosts.htm)

This works so well I have a vastly different user experience than other
people. I rarely see ads and pages load fast.

Of course, you can't do this on the stupid devices we use but don't actually
control (phones, tablets, I'm looking at you), so I wish someone would offer a
DNS service that blocks these. I'm thinking of creating one for myself at this
point.
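The mechanism is just the OS hosts file mapping unwanted hostnames to a
dead-end address; a few illustrative entries (the domains here are
placeholders, the real list behind the link runs to thousands of lines):

    # /etc/hosts on Linux/OS X, %SystemRoot%\System32\drivers\etc\hosts on Windows
    0.0.0.0  ads.example.com
    0.0.0.0  tracker.example.net
    0.0.0.0  metrics.example.org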

~~~
userbinator
If your devices use DHCP to get their DNS server, you can probably set up a
firewall/router with the appropriate filtered DNS.

------
andreapaiola
Well... I try to do something...

[http://andreapaiola.name/magpie/](http://andreapaiola.name/magpie/)

------
Touche
Here's a first-pass user style sheet that gets rid of most of the noise.

    #breaking-news,
    [data-video-id],
    .sibling,
    .pg-rail,
    .js-gigya-sharebar {
        display: none;
    }

    body {
        padding-top: 0 !important;
    }

    .nav-header {
        position: relative;
    }

------
natch
Just want to say, that is a really nicely written and presented article. Loved
the way it had a good lead-in, clear and well explained examples with good
screenshots, and how it dug into the whys and also explored the future without
driving some agenda other than its main point of understanding the cruft
problem. I realize the point of the article wasn't the quality of the content
but all the clutter around the content, but it bears saying, if the author is
here to hear it, that this was some really well done content.

------
msane
What I find really silly is "would you like to subscribe to my blog?" popups
which appear _over top of_ the content after scrolling.

Let alone the browser-level modal: "Allow This Person's blog to send you push
notifications?"

I've come to dislike the first 5-20 seconds after loading a news article on
mobile, while ads and popups are still adjusting the position of the article
text.

~~~
shostack
Unfortunately, these tend to work. I'm sure many people blindly implement
them, but many have tested the results and found that they won out.

------
mirimir
I typically browse via nested VPN chains and Tor, and overall latency is _ca._
500-1000 msec. Leaving aside my concerns about privacy and malware, ads and
third-party tracking conversations simply don't work for me. My circumstances
are unusual, of course. But I suspect that many who connect via geostationary
satellites have similar experiences.

------
njharman
That chadburn looks like it has a marble face and seems unusual, which made me
go look up what a chadburn is:
[https://en.wikipedia.org/wiki/Engine_order_telegraph](https://en.wikipedia.org/wiki/Engine_order_telegraph)

------
flinty
The article by Ben Thompson on Stratechery is a good one to go along with this
one. [https://stratechery.com/2015/why-web-pages-
suck/](https://stratechery.com/2015/why-web-pages-suck/)

------
FollowSteph3
I found it ironic that the author talking about web cruft also had some on
their website. The header image on my cell phone in landscape mode took up 20%
of my screen and stayed even if I scrolled. Anyways I thought that was pretty
funny :)

------
amelius
Don't worry. Eventually the cruft will get filtered out by AI, just like in
adblockers, but in a much more powerful way.

Essentially, the circuits in your brain that remove cruft subconsciously now,
will be implemented in software.

------
beans1
Why do ad blockers not hide social media links? I would love this. It is cruft
I deal with every day. I have never used those sharing buttons, but I've been
forced to implement them so many times.

------
odiroot
But I like my weather widget on news sites. It's the second most useful
feature beside the articles.

Naturally it doesn't make sense if it shows weather for a place across the
ocean.

------
Too
So in summary RSS is new again?

------
emodendroket
The Gruber quote is (as is typical for him, I guess) insane hyperbole. The
mobile Web is not "looking like a relic" because I'm not going to go
installing an app to read an article; that's more trouble than dismissing the
modal yammering on about privacy policies.

