Hacker News
The Website Obesity Crisis (idlewords.com)
1122 points by jmduke on Jan 1, 2016 | 367 comments

I think it is because people (designers, coders, etc.) get bonuses and paychecks for creating stuff more than tearing down stuff.

Put this on your resume: "Implemented feature X, designed Y, added Z" vs. "Cut out 10k lines' worth of crap only 10% of customers used, stripped away a stupid 1 MB worth of JS that displays animated snowflakes, etc." You'd produce a better perception by claiming you added / created / built, rather than deleted.

So it is not surprising that more stuff gets built, more code added to the pile, more features implemented. Heck, even GMail keeps changing every 6 months for apparently no reason. But in reality there is a reason -- Google has full time designers on the GMail team. There is probably no way they'd end the year with "Yap, site worked great, we did a nice job 2 years ago, so I didn't touch it this year."

That is most likely part of it. Likewise, the current structure of most companies simply cannot function in the face of something being 'done'. Someone has to keep the developers busy for the 40 hours a week they must be in their chairs to get paid. Even if that weren't an issue, you will always have someone who wants a promotion. And they will need to have something to show off to get it. So they will come up with a 'brave new direction' and use sheer force of will to make it happen.

I remember when I first realized that software companies had a fundamental problem with how they handled finished products. BulletProof FTP. It was a magnificent FTP client decades ago. I used it on my dialup connection to practice data hoarding. I paid for it because it was a trustworthy companion in my adventures through the net. But that turned out to be its downfall. It was feature-complete and it was excellent (although they never did fix the problem with quick reconnects resulting in a pre-existing binary transfer socket getting picked up for use as the command connection, spamming the thing with random binary data and confusing the hell out of itself and the user...). But clearly, that was unacceptable. They continued to shoehorn in irrelevant crap that nobody wanted. Eventually it got so bad that it wasn't even very good at being an FTP client any more. I've seen that repeated innumerable times since. A product is completed, but people still have to be kept busy with fake work that destroys everything they built. The fact that we still use the structures and ideas designed for factories and assembly lines in modern companies is ludicrous.

As written by the Conway of "Conway's law" in 1968:

As long as the manager's prestige and power are tied to the size of his budget, he will be motivated to expand his organization. This is an inappropriate motive in the management of a system design activity. Once the organization exists, of course, it will be used. Probably the greatest single common factor behind many poorly designed systems now in existence has been the availability of a design organization in need of work.

Great quote - any online references? Or even a new book I need to read? :)

Didn't C. Northcote Parkinson precede that by decades?

This is also the trajectory of Winamp. An application that was complete by v2. But you couldn't just have an MP3 player. No, it needed to play videos and radio stations, support plugins, skins, etc. By v5 I barely recognized it, what with all the new windows it opened when loaded.

See also iTunes, the Mac version; iTunes for Windows has always been shit.

I really liked iTunes until version 9. After that, it's been barely usable. As a seasoned Mac user, I have a hard time finding functions.

> But clearly, that was unacceptable. They continued to shoehorn in irrelevant crap that nobody wanted.

It will be interesting to see if the SaaS subscription/rental model changes this. Or if they feel as beholden to adding features each release as people trying to sell software releases to get upgrade income.

SaaS is the literal embodiment of all those problems. Things are being dumbed down both because you have to target the mainstream with its short attention span, and because most webapps can't support any kind of serious load. It's all glorified toys, with the added benefit that the company can - and will - get acquihired or go out of business at any time, leaving you without the service you depended on and without the data you've entered into it.

>It will be interesting to see if the SaaS subscription/rental model changes this.

I think the rental model will slow this down, but not get rid of it entirely: not as long as people still want to write blog posts about the "new version of our product". I think JetBrains will make a good case study: their products are feature-complete, and they seem to have struggled to find new features to add lately (plus they have switched to a rental model).

What we really need is something similar to API versioning: when GMail has a new version, bump up the version to 24, but keep version 23 accessible to those who prefer it.
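That could be as simple as routing by a pinned version number. A minimal sketch of the idea, with entirely invented names (this is not how Gmail actually works, just an illustration):

```python
# Hypothetical sketch: each UI version keeps its renderer around, and a
# user-pinned version wins over the default. All names are made up.

RENDERERS = {
    23: lambda user: f"classic inbox for {user}",
    24: lambda user: f"new inbox for {user}",
}
LATEST = max(RENDERERS)

def render_inbox(user, pinned_version=None):
    # Serve the version the user pinned; fall back to the latest
    # if they pinned nothing or a version that was retired.
    version = pinned_version if pinned_version in RENDERERS else LATEST
    return RENDERERS[version](user)
```

The cost, of course, is that every old version kept routable is an old version that still has to be maintained.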

> I think JetBrains will make a good case study: their products are feature-complete, and they seem to have struggled to find new features to add lately (plus they have switched to a rental model).

What JetBrains MUST do: improve the performance! Replacing my MBP's spinning-rust disk with an SSD helped, but it's still eating up CPU when running on battery, with the indexing... it's a text editor, for fuck's sake.

Actually, it's not a text editor: it's an IDE. How do you think all the fancy find-usages, error highlighting, and general contextual knowledge work? The IDE is constantly reparsing every time you make a change.
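Roughly speaking (a toy model, not how IntelliJ is actually built): the IDE keeps a per-file cache of analysis results, every edit throws the cached result away, and the file gets reparsed on the next lookup.

```python
# Toy model of IDE-style analysis caching: edits invalidate, lookups
# reparse lazily. Real IDEs reparse incrementally and in the background.

class AnalysisCache:
    def __init__(self, parse):
        self.parse = parse   # expensive: stands in for real AST building
        self.cache = {}      # file path -> cached analysis result

    def on_edit(self, path):
        self.cache.pop(path, None)   # an edit invalidates only that file

    def analysis(self, path, source):
        if path not in self.cache:
            self.cache[path] = self.parse(source)   # reparse on demand
        return self.cache[path]
```

This is why typing burns CPU: every keystroke is an `on_edit`, and the features that make an IDE an IDE all need fresh `analysis` results.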

Visual Studio doesn't seem to have the same problem.

Isn't this why people get ReSharper? To add in missing features.

That is why VS gets confused often.

It is NOT a text editor, and I think it's eating that CPU for a good reason.

It totally is a glorified text editor + some support tools and I'm pretty sure the reason it's eating that CPU is because of piles of unoptimized cruft. Like every other big project in contemporary "it's nil until you ship it" culture.

Like others have pointed out, it is not a glorified text editor.

Keep in mind that JetBrains IDEs started not as a text editor, but as a rename utility.

So really, it's closer to an AST tool with text-editing features.

PhpStorm/WebIDE is definitely not a glorified text editor.

Editor is definitely the core but all the support tools make a big chunk of features and usage. Sure, I can get quite a lot of those features in Sublime with some packages but they aren't that well integrated and easy to use!

Doesn't anyone here have a sense of sarcasm?

If you think it's just a text editor and some support tools, then switch to vim.

I use Emacs for almost everything, so I'm pretty aware where text editing ends and support processing starts.

That said, consider Microsoft Word. Since forever, it's been doing a similar - if not greater - amount of text processing and reprocessing on every keystroke as the IDEs contemporary to it. Yet somehow it is not bloated; it does - and always did - work well even on low-tier machines. Also compare with OpenOffice, which does more or less the same things, only much slower and buggier. If M$ can keep Word functional and performant, I'm pretty sure Eclipse and IntelliJ could be made fast as well.

You were doing well, right up to the moment you stated that Word is not bloated and compared it to OpenOffice (who even uses OpenOffice? Everyone with any clue uses LibreOffice!).

P.S. as someone who has commit access to LibreOffice (and much to everyone else's chagrin, sometimes uses it), I could be a little bit biased.

In my experience of using Word - from the mid-90s till today - I've never felt it's bloated. It has a lot of features, yes. I don't know what half of them do, true. Recent versions hide them in random places and have made the thing a bit confusing to use. But it almost never locks up, it lets me enter text as fast as I type even in very large documents, and it generally feels... slick. Not something I could say about OpenOffice when it was "the thing", and not something I can say about LibreOffice (sorry). I know that M$ has a pile of hacks going on in there, but hey, it works.

OTOH, all the Java IDEs I've used so far lag by default. It's often about the little things - like those 1-second delays in menus, or 100ms delays between a keystroke and the letter appearing on screen, etc. Enough to make the experience frustrating. If I keep using them, it's only because some advanced features don't have an equivalent (or easily installable equivalent) in Emacs. But it's a tradeoff between having a powerful tool that I use every once in a while vs. enduring the constant experience of bloat.

Features of IDEs are cool, but they seriously need to optimize them. And to stop assuming that every developer gets to work on a brand-new machine. Maybe it's true in the US, but it's not true in other places (sometimes because managers don't see a problem with paying developers $lot but then being cheap on the equipment those same developers have to work on).

Your first paragraph is pretty much my definition of bloat. I've just come back to Word (work uses it) after a 10 year break. Things have got a lot worse (slower, hard to find, massive bloat, irritating layout) in that time imho.

Word 97 ran and opened faster on my old Pentium Windows 95 machine than Word 07 runs on my i5 Windows 7 machine. New features are great, but there is sluggishness, and it's unacceptable on modern hardware.

I don't know if this is the case for the parent post, but some people use LibreOffice while referring to it as OpenOffice.

I personally use LibreOffice quite a bit, but something about the name sounds silly to me, so I avoid speaking it. I usually refer generically to "a word processor" instead.

It is. After it was pointed out to me, I recalled that I've been using LibreOffice for some time. Maybe I'm ignorant of the actual history, but I've always considered LibreOffice to be the continuation and rebranding of OpenOffice.

Or use both: write (enter and rearrange text) in an editor, manage (refactor and debug projects) in an IDE. I've found Emacs + IntelliJ to support a workflow for Java that I couldn't easily duplicate with either on its own.

Off topic to bloated web sites, but: I like the subscription models of JetBrains, Office 365, etc. Long term, I think this business model will enable tighter more focused products because the "add new features to get update revenue" goes away.

So selling good feature-complete software is insufficient. People must be forced to keep paying for it at regular intervals, whether they need updates or not.

Software is the perfect product - it doesn't wear out, it doesn't go bad, if you really need to you'll be able to use software you got in 1990 in 2090.

For the software creator, once you've sold someone your software, the only way to make more money is to sell it to someone else, or create and sell a new version. Eventually you run out of someone-elses to sell to and can make new versions or go out of business.

... or sell a support contract or a subscription model.

I don't like it from a user's point of view, but squaring my preferences as a user with preferences of business isn't easy.

It's not just software. Most of the things we buy and use are being purposefully broken because otherwise they wouldn't wear out fast enough. That's the sick side of capitalism.

Well sure, but there's an ultimately sustainable business model if your product needs replacing in 30 or 50 years. (Leaving aside the things like marketing claims of a "30 year" or "lifetime" warranties from a company with a 10-year lifespan.) Capital needed for initial production is then the only issue. But there isn't such a model if the product never needs replacing again.

30-50 years of replacement time is apparently not sustainable, since most of the things we buy have 3-5 years of expected lifetime tops. As for software, the entire tech industry, including its hardware areas, is nowhere near stabilizing yet. Most software has to be replaced after at most 10-15 years, either because of security issues or just because it's no longer supported by hardware. There are exceptions, yes - software written for Windows 95+ is still alive and kicking, but because most software has to interact with other software eventually (even if by exchanging files), it has to keep up. The rise of web and mobile applications has sadly only sped it up.

Maybe one day we'll reach the times when programming is done by "programmer-archaeologists", as described by Vernor Vinge in "A Fire Upon the Deep", whose job is to dig through centuries of legacy code covering just about any conceivable need, to find the snippets they need and glue them together. But right now, software gets obsolete just as fast as physical products.

> 30-50 years of replacement time is apparently not sustainable, since most of the things we buy have 3-5 years of expected lifetime tops.

In practice, yes, most things we buy are designed to fail. But in principle 30 years is possible, if anyone is willing to pay for it. Lots of people aren't, for many reasons including time cost of money, fashion, or just not caring.

> Most software has to be replaced after at most 10-15 years, either because of security issues or just because it's no longer supported by hardware.

Yes, so there's the answer for why the software upgrade treadmill exists: it's for software not important enough to run on dedicated separated hardware. Few people are upgrading CNC controllers or power plant controllers or aeroplane controllers because of security issues or because the hardware is no longer available.

Anyway, even on the consumer side the lifespans are rapidly lengthening. In the 1990s running current software on five-year-old computers or using a five-year-old OS would have been basically unheard of in the mainstream. Today a five-year-old computer is only a step behind current and Windows 7 has turned 6 years old and is still extremely widely used.

> In practice, yes, most things we buy are designed to fail. But in principle 30 years is possible, if anyone is willing to pay for it.

My mom is still using a cooking stove she bought in 1982, as a second hand purchase. Parts of it have been repaired/replaced, but I don't think they make em like they used to: I doubt a 2016 stove will make it to 2049.

It's not that creating more durable products is significantly more expensive (it could be, and was, done in the 80s); it's that manufacturers cut manufacturing costs and their B.o.M. in the interest of maximising profit.

Actually, software does go bad over time, and it's a well-documented phenomenon in the industry [0].

[0]: https://en.wikipedia.org/wiki/Software_rot

Eh... yes, but.

The "rot" is because the environment is changing (the hardware you're running it on, other software you need to interact with) or the software is being carelessly updated.

If you need to, software rot can be eliminated. It takes effort but it can be done.

To give an example: you can run a space station with software installed in the 1980s, but you probably can't run a space station with seals or pumps installed in the 1980s.

That's very true. Sometimes I wonder what the team behind WhatsApp does all day. They released Web and voice calling, but nothing since, and there's nothing to keep us expecting new things. No new features are added. Yes, they have to squash bugs, and that is endless. Maybe they work on better infrastructure and make it more stable, but we'll never know about it. Other than that, it's solid already and doesn't need anything else.

But then again, there's a paradox: if they do nothing and someone shows up who, for some reason, steals their customers, they can get in trouble, like Orkut fell and was "replaced" by Facebook. If they do something that breaks the experience, they can lose users to another player (recently in Brazil, WhatsApp was banned for 48 hours and lots of users signed up for Telegram. It wasn't WhatsApp's fault, yes, but it's just one more demonstration of how easily users migrate when they're not pleased).

So what do you do? You have to be careful with where you take your service. But standing still is just waiting for someone to come and stab you.

> Someone has to keep the developers busy for the 40 hours a week they must be in their chairs to get paid.

I thought Hacker News and Reddit handled that. :)

I think it takes a technical lead who has the vision and skill to balance housekeeping work with new feature development. It requires both technical and political skill.

One methodology that I employ is "visibility-based development" where we will save up some super easy tasks that may have a big visual impact. When we need a sprint or two to get some boring clean-up work done, we'll throw in a few of those easy tasks with it. We've had some releases that look like a big, major release but in reality we only spent 1/2 a day on the features that seem big. The rest of the time was under the hood cleanup.

In some cases it's not simply your customers that require the illusion, but the management team as well. If you have a management team that gets uneasy if they don't see new features happening with every release, I highly recommend using this technique.

It seems like technical debt. Lots of what causes bloated pages can be fixed with simple solutions, like scaling down gigantic images. I agree there are still major design decisions that can have a big impact, but there are also lots of things that can be done in most cases without too much elbow grease. For example, simply installing PageSpeed will optimize a page that previously would have taken many architectural and dev hours to fix. So maybe the debate should be whether automated tools like PageSpeed are moving at the same rate as our appetite to create more bloated pages.
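For the gigantic-images case, the waste is easy to quantify. A back-of-the-envelope helper (hypothetical, just the arithmetic):

```python
def overshoot_ratio(natural_w, natural_h, display_w, display_h):
    # How many times more pixels the served image has than the slot it
    # fills; 1.0 means the image is already right-sized.
    return (natural_w * natural_h) / (display_w * display_h)

# A 4000x3000 photo squeezed into a 400x300 slot ships 100x the pixels
# the reader can actually see - and roughly that many times the bytes,
# before compression effects.
ratio = overshoot_ratio(4000, 3000, 400, 300)
```

Tools like PageSpeed automate exactly this kind of check and rescale the image on the way out.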

Also, going back and cleaning up resources (CSS and JS references) helps. Lots of dead CSS is out there in the wild.
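Finding dead CSS can be approximated crudely. A naive sketch (invented helper; regex-based, so it misses JS-generated class names and anything beyond simple class selectors):

```python
import re

def unused_classes(css_text, html_text):
    # Class selectors defined in the stylesheet...
    defined = set(re.findall(r'\.([\w-]+)\s*[{,]', css_text))
    # ...versus class names actually referenced in the markup.
    used = set()
    for attr in re.findall(r'class="([^"]*)"', html_text):
        used.update(attr.split())
    return sorted(defined - used)
```

Real tooling (coverage profilers in browser devtools, purge-style build steps) does this properly against rendered pages, which is what you'd want in practice.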

I find in most cases it's not your customers that require this at all. The majority of Gmail users probably don't want it to change.

Pretty much, the basic HTML version of Gmail they strongly advise you against using is closer to the Gmail I want than the Gmail that they push by default.

Heck, that goes for most of Google. Look at this beauty: https://www.google.com/?noj=1&gws_rd=ssl Why is this not still the default google.com?

Upon following that link a giant banner was displayed saying "it's better in the app". It seems they've discovered there's still a lightweight version of google left and are quickly trying to close the gate.

By the way, why on earth would anyone want an app to do a google search? When searching for a webpage, the best 'app' is going to be the one that shows webpages, named 'the browser'. It's like going to weightwatchers.com and seeing a banner ad that tells you "it's better with mcdonalds". It is laughably grotesque. What happened to google that they've strayed so far from sanity?

> By the way, why on earth would anyone want an app to do a google search?

Because it provides a neat search widget, plus Google Now.

And the Gmail bloat is nothing compared to the Yahoo! Mail bloat. It's so slow and bloated that on my phone I'm not even able to get as far as downloading an attachment, and I'm talking about a 1.5-year-old high-end phone on which I can do pretty much anything otherwise. On a high-end PC, it also takes a long while to download attachments, and when the connection is flaky, even reading email is difficult.

There is a "basic" version (which they also advise against) that is significantly better, but still slow and bloated. Yahoo used to be my go-to account from around 1997, but after successive iterations of bloat, it's now at a point where I only use it for unimportant website registrations (i.e. spam fodder).

How does that link look for you, compared to google.com? On my phone, both the mobile and desktop version pages are identical and pretty minimalist.

The biggest difference on desktop is you get the old black bar offering links to the 10 or so Google products that actually matter, rather than the dropdown. It also seems to load faster.

This is more lightweight: http://www.google.com/custom

I looked at that on my phone and got redirected to the mobile version: JS enabled, 4 toolbar buttons, an animated image, a search bar, a prompt to use my location, and a prompt to "install the app."

The current design is closer to the original design. Wait a couple of years, and the design you like will be back. Everything goes in circles for the reasons other people have mentioned above.

Hm, an 'alias' plug in for urls?

> I think it is because people ... get bonuses and paychecks for creating stuff more than tearing down stuff. ... You'd produce a better perception by claiming you added / created / built, rather than deleted.

It's not about creating vs. removing "stuff". It's about value and advancing business objectives.

That is to say, your example doesn't create a weak perception because it mentions removing stuff, but rather because it fails to mention the value that removal created. A better way of wording it might be something like:

"improved [site performance, or dev efficiency, or something else] by XYZ% by removing low-use features, without sacrificing revenue or customer satisfaction."

... because simply removing features (regardless of LOC) used by 10% of a user base doesn't, in and of itself, offer any justification or value. I'm not saying it can't, just that I'd be concerned if that happened without a strong rationale and a measurable net positive.

And the same goes for adding or building stuff. E.g. If you wrote a project management system at your previous job, focus on citing the number of hours or projects it tracked, or how it reduced the number of meetings required. The fact that you "built" it isn't as interesting as the value it created for the team/company.

So if positive perception is what you're after, focus on presenting measured value. Then, whether it was created by adding or removing is a non-issue.

Not only designers and coders, but managers too.

Managers are rewarded and promoted based on everything they & their team "do", not on what they "undo" or "don't do"

So manager x adds 10 things to the website, gets rewarded and moves on, then their replacement adds 10 more things, gets rewarded and moves on etc.

It's in the interest of those managers' future careers that they don't allow anyone to remove the things that were added during their reign; otherwise the list of things they "did" won't be very impressive.

That reminds me of the story, which I believe was posted on here (might've been somewhere else) a year or two back, about LG TVs and the interface everybody loves: it was a total accident. LG had (might still have, dunno) a program in place where managers got a bonus for every feature they spearheaded that shipped in a product... this resulted in managers from many different teams shoehorning whatever they could into the 'Smart TV' interface. There were 2 separate 'app stores', multiple inconsistent interfaces, it was slow as molasses, etc. LG acquired the webOS team, and they adapted webOS to the LG sets just as an experiment... but someone took one of the TVs it was running on to a trade show, and everybody absolutely loved it because it was simple, direct, uncluttered, and fast. It caused LG a huge problem because it clashed with the corporate policy that encouraged the other, bloated interface.

I honestly think that a lot of companies should seriously re-evaluate how they function. Once a product is perfected, the developers should probably be switched over to being paid "on retainer", so that they continue to get paid but do not have to go into work constantly and maintain a 40 hour work week. Let them work from home, and just require them to be reachable. When maintenance is needed, ping them to do it.

I think hardware companies are especially bad at this. When I was young my parents had a colour TV that was 25 years old before they replaced it. Even then it still worked perfectly fine, they just wanted to get a flat screen that took up less room. Now it seems you are expected to replace TVs every couple of years...

Designed failure is clearly a central part of how capitalism presently works for the capitalists - virtually everything I've repaired [domestically] of late has had a tiny part that seemed engineered to fail and take down the whole device.

Case in point - my dad's kettle, the push button to open the lid broke. The tiny plastic lever in the internals that was the ultimate problem had been moulded with material from the pivot removed. There's no way it wasn't designed to fail within a short time period. Add back that plastic or otherwise reinforce that pivot and it would likely work for several more years.

Another example, microwaves: I bought two new, admittedly relatively bottom-end, microwaves consecutively. Both appeared to fail due to the magnetron, and the part wasn't available to buy for either. So instead I got my parents' old microwave from the 1980s. It really pained me to get rid of those shiny new stainless steel boxes, made so attractive to the eye, in favour of the beige-and-brown monstrosity with the working internals.

If you want gear that can last and stand up to heavy use, buy professional kitchen equipment from restaurant stores. Plastic consumer stuff is always cheaply built because they compete on price.

I hadn't heard of the LG/webOS story, but it's a good read: https://gigaom.com/2014/08/28/a-failed-experiment-how-lg-scr...

This is how we treat homebuilders. And everybody seems OK with it.

Yap. It reflects not just directly on the product but on the team dynamics.

I have seen this happen at least twice in the past: start with a great team, smart people, self-sufficient. A new manager comes in. What does the manager do? Stand by and watch the team do what they do best? Nope: set up meetings to improve communication, to streamline, to optimize; add new rules, new Kanbans, some agile thing maybe; intensify devops a bit. "But why? It doesn't make sense," the developers wonder, watching their motivation and productivity get sucked out.

But it does make sense. The manager has a manager. When the end of the year comes, they'll have to report "I did X, implemented Y, facilitated Z, etc." They are paid more than the average developer, they know it, and the expectations are high. They have to do those things, so that behavior starts to make more sense.

Clearly we work on the same team, at both this job and my previous. Are you me?

Add to this: people leave, new people come in and say that what the old people did, they could do better. A couple of years later the new people discover why the old people built what they did.

New people either look for new jobs or get into the same cycle as the ones before them.

The odd thing is that when you stay around you can start to see patterns in misconception. People come into a long-lived codebase and naturally fall into the same thought traps, wanting to rewrite the same parts for the same misguided reasons. There is a principle here to follow on a new codebase, that of 'senseless inaction', meaning that as long as something doesn't make any sense to you yet, you should refrain from making deep changes to it. Only once insight is formed on why the seemingly completely pointless and bizarre feature exists and what purpose it is serving, then can a deep change to it be safely considered. Tragically, the natural instinct is to lay off the parts that make sense and to rewrite the parts that don't. That's a guaranteed way to make a product unsuitable for the business it is in, because the weirder a feature is, the more adapted it is to the real world (most of the time).

In general, this 'senseless inaction' idea is known as Chesterton's Fence, and it's a very wise principle.


Last two jobs I have been at the coder(s) were the ones wanting to refactor, and management just wanted new features, completely ignoring technical debt.

Be careful - you are generalizing which will lead to issues for your interactions with your superiors.

Good managers remove things when they serve no purpose (preventing extraneous development) and when they aren't being used (reducing technical burden/debt), because they look at the data, and the changes they make are tied directly to the success of the business.

This assumes a somewhat functional work environment, but if you aren't inside one, the problem you are dealing with isn't bloated webpages, it's bad management.

Many industries have a similar curse. One very visible example is the satellite news gathering (SNG) trucks owned by many local TV stations. These things cost a lot of money, and are ideal for covering breaking news stories in some remote locale — impending hurricane, manhunt, fires, etc. Because those events don’t happen every day, what you see them used for are very pedestrian, boring stories that don’t require them. Why is there a shivering reporter standing out in the front of the statehouse or courthouse for the 10pm newscast, talking about some minor piece of legislation or arraignment that took place 12 hours earlier? Because they spent all of this money on the damn SNG truck and they’ve got to use it for something.

Actually, they are used because news directors want to be able to break up the show with outside broadcasts to avoid the news simply being two people talking from a desk.

In my opinion, that's the type of thing that a news director would say to justify the expense of a new truck. I am not trying to be snarky, that's my take based on previous work experience for a morning newscast and a long-time interest in the business of news.

I would add that modern broadcast news has always relied on other types of imagery to break up the anchor shot, chiefly prerecorded reports from the field (which do not require a truck -- at my station the photographers and reporters used ordinary cars) and weather screens. The SNG trucks are nice to have when something big does break, but most of the time they are simply not needed ... hence the contrived uses and excuses.

I think the sharp decline in costs for competing technologies are forcing a rethink of newsgathering costs. In recent years, helicopters have been replaced by in-studio CGI based on traffic maps and (on occasion) drones. This trend will accelerate, although I think one of the main expenses -- "talent" -- won't go away. Viewers do like personalities and on-screen charisma.

This seems so true. Another thing I've seen is the idea that constantly "improving" the product is necessary. I'm not sure if that's born out of fear of no longer being relevant "when there are all these sweet websites that play hovering videos", or whatever the latest thing is that some product manager saw while trying to figure out how to look like (s)he should still have a job.

I think there's huge value in constantly asking if you can improve and asking how you can improve. But the keyword in that sentence is IF.

There's absolutely nothing wrong with considering a change, deciding it isn't a gain for your users, and then NOT doing it. It may not be a glamorous call, it may not be a resume item, but that's real product management right there.

Something analogous happened with GM cars. Engineering group prestige increased with the volume of your subsystem under the hood, so everything kept getting bigger, and cars kept getting bigger.

There is probably no way they'd end the year with "Yap, site worked great, we did a nice job 2 years ago, so I didn't touch it this year."

They should take a page from Toyota's book and have 10 separate teams with projects for Gmail's next version. Except, instead of evaluating with metrics, reduce to just 3 options and expose the next versions as public betas and take measurements on how well people like them.

Are you suggesting if something is used by 10% of users, developers should strive to make it go away? Although eliminating 10k lines is definitely good, what if removing that feature means that 10% of users who use it are no longer paying customers?

The business reality is when a feature is added, there's almost no way to remove it, because the result would be less money to pay salaries. Hopefully the benefit would result in more than 10% of additional sales with better features and changes.

I'm at a place where we have an extremely large app and there are probably a hundred features that are each used by 10% of our customers. It's a bit of a kitchen sink app for a vertical market.

I certainly would not say that we "strive" to make any feature go away. In some cases, though, a certain feature can actually prevent us from implementing new functionality. It's always a tough decision. We don't want to leave any customers behind, but at the same time we don't want a small minority of customers preventing us from moving forward either.

I don't think these kind of decisions should be left up to a lone developer to make on their own. The developers, sales, support, management and everybody should be working together to understand our customers and what our mission is. It's not easy.

> Are you suggesting if something is used by 10% of users

It was just an example.

> what if removing that feature means that 10% of users who use it are no longer paying customers?

I can think of scenarios where that feature for 10% of customers is causing issues for the other 90% of customers, or eating up all the support or ops teams' time.

> The business reality is when a feature is added, there's almost no way to remove it,

Even code-wise. Once a framework is adopted, it is hard to rip it out to make things lean again without making a fundamental change.

"Oh man, I can't counter this argument, because it's true. I'll just split it up into chunks that don't make sense and then disprove those."

We have to be realistic: today's bloat is the automotive equivalent of tailfins, spoilers, and 15 drink holders, not the trunk or the concept of the passenger seat.

This would seem to me to be a good reason to not have designers on staff that outweigh the needs of product development. If your designers are looking for change for the sake of change, you may have too many designers. I think Silicon Valley has gone a bit overboard on designers in recent years.

Google has a "VP of Design" now, and the one commonality of his tenure has been massively increased page load times, and buttons moving to different places regularly for no apparent reason.

I think Bill Gates and especially Steve Jobs got it right. They regularly tried out the products and critiqued the interface, and after many iterations it turned out as if all their products were created by a single person with a clear vision. Win95 and iOS 6 were so awesome in their time. (The same was true for the first few years of Facebook.)

After working with seniors on their computers for years, I feel we should replace all current UI testing with "senior testing".

If you can't get a 70-year-old to understand what you're asking them to do, you're wrong.

Whoever replaced gmail's edit widgets with the new stuff surely has never tried to use it. Basic stuff like selecting text is broken. Why can't they keep a HTML text edit widget for plain text mode? Because it would make too much sense.

Does reframing "cut out crud" to "saved $X in bandwidth" help? Or is this the kind of thing that customers / employers don't see?

In my experience it doesn't get prioritized. I worked on a product where everyone including product management complained about performance, but it always took a very long time to get fixing it prioritized. In one case I spent my weekend adding pagination to some pages that everyone had been complaining were slow, but that no one thought should be a priority to fix over new features.

If it's an unexpected cost and an emergency fix, probably. But if the cost is already factored into the budget, it's unlikely to have that big an impact – worst case, management will be annoyed that you ruined all their budgeting.

Find me a company that actually is excited by a resume like that, and they'll rise to the top of my search when I look for my next job.

Like someone else said, highlight the value created or costs saved, not the work you performed.

I have "appraised transport policy to reduce the number of agency drivers employed, projected saving $50k annually with no loss of service" rather than "used linear optimisation of transport schedule in Excel".

> Put this on your resume...

I have the opposite opinion. If you're applying for a job with me and have something about cutting 10k lines of crap then you've got my attention.

Being able to recognize when to delete code is a very valuable and underrated skill.

It's not only due to the people working on the site, but those paying to have it made too. A lot of clients also associate 'fancy' and 'complicated' with good, and then end up demanding every feature and gimmick you can imagine in some attempt to look 'modern'.

So you end up with the designers, developers, content editors, SEOs, managers and clients all wanting different things at once, and none of them want to remove what's already there to lighten the load.

I blame micro-managers for a lot of website bloat. These guys can't code and they can't manage, but for some reason we 'need' them so that clients cannot talk to developers; a game of Chinese whispers is played via the micro-manager, on a need-to-know basis.

Because micro-managers lack technical competence, they 'shop' for solutions. They have to buy into advertising claims from SaaS companies that will perform wonders all for a small, per-transaction fee.

Why use the reviews feature that came with the CMS system when you can just add some third party thing for doing the reviews? One simple script and a tag, job is done, Disqus is on the site and any problems with the reviews then becomes something 'supported' by Disqus. No developer of the front end variety is needed to theme up the reviews that came with the CMS. Megabytes of stuff gets added instead of a few lines of CSS and the text content of the reviews.

Need a slider/banner thing for the homepage? Forget the actual requirements for a small selection of images that click through to somewhere else with normal hyperlinks. Instead pay for some bloat, only $70 a year, chicken feed! Let the developers struggle to get this behemoth working and ignore their protestations that animating a simple carousel is not 'rocket surgery'.
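The commenter's point stands: the rotation logic of a simple carousel really is a few lines of plain JS. A minimal, hypothetical sketch (names are made up; the DOM wiring is only hinted at in comments):

```javascript
// Minimal carousel: rotation state in plain JS, no library needed.
// (Illustrative sketch; slide values could be image URLs, elements, etc.)
function makeCarousel(slides) {
  let i = 0;
  return {
    current: () => slides[i],
    next: () => slides[i = (i + 1) % slides.length],
    prev: () => slides[i = (i - 1 + slides.length) % slides.length],
  };
}

// In a browser you might wire it up like:
//   const c = makeCarousel(['a.jpg', 'b.jpg', 'c.jpg']);
//   setInterval(() => { img.src = c.next(); }, 4000);
```

The fade/slide transition the client actually asked for is then one CSS rule, not 200 configurable effects.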

That pesky search box at the top of the page... Why use the built in CMS search augmented by a few simple site specific rules to fix the things not found? Instead of doing that, go to some 3rd party SaaS app that cripples the server downloading stuff from one index to put in another, far, far away index. The SaaS service will do everything and better than what the in-house fixes will deliver, the advertising says so and they have pie charts to prove it.

Those stupid social network icons. Again, let's Add This and see if it will float with those lead balloons added. It makes perfect sense to add thirty scripts to the page just in case someone needs a 'Tweet' button.

Usually these inept decisions are made in good faith by a manager who does not know what he/she is doing. But they have managed these sorts of projects before, right?

Those bloat items usually do have a knock-on effect on implementation difficulty. But by then these 'agreed on' features have been paid for, approved by the client, and handed down from on high by some micro-manager.

I have never met a developer that goes with the bloat out of choice, they just know that a job is just a job with a paycheck. Their little bit of the project, e.g. 'frontend' has no concern for page load time, they just have to implement designs handed down to them by some micro-manager that has got some other person that can't code to do some pretty pictures in Photoshop. It is paint by numbers for the 'frontend' guy, performance is not their thing. Same with the backend guy working on that API integration for pulling through those 'tweets'... Again, no responsibility for page load time, for their work is in the backend, not how the site renders.

Another layer of people that can't code gets added with the UX guy. They might get only as far as designing the pages the Photoshop newbie has to 'draw', adding 'lorem ipsum' text accordingly. Yet these UX guys never seem to roll up their sleeves and optimise the stack for a speedier UX experience. No, why do that when you can go to meetings and conferences to learn how other people do these things?

Then there is the outside SEO agency. Let's face it, the people in SEO didn't do degrees in anything involving difficult things like programming or science. Yet these guys want another dozen scripts added to the page so they can measure how their 'campaigns' are going. More bloat signed off by a middle manager.

Too often the figures that matter (sales) are far from accurate or completely missing from the SEO agency reports. All the numbers that matter are there in the server logs and the sales order tables should they bother to check, but why do that when you can pay for yet another analytics SaaS? Who cares if a percentage of the site visitors use an ad blocker of some sort, rendering all efforts at this type of frontend stats scraping useless.

Newsletters. Who is for a newsletter? Again, how hard can it be to send an email? Not that hard, add an SPF record, get the to and from bits correct and that is it, good to go. But no, only idiots do that, what you need is to pay a SaaS a small fortune per email sent and wow, again pretty pie graphs deep-stalking the readers. More bloat with yet more customer data shuffled across the internets.

This point about newsletters is a backend thing rather than page bloat; however, it is typical micro-manager FUD thinking. Everyone knows it is a nightmare running your own mailserver, everyone knows that it takes one customer marking one email as spam to make all future emails effectively go into a big spam filter forever more, never to be seen again. With this being the case, best pay $0.02 per email to some SaaS to 'do it properly'.

No micro-manager knows that a 'send only' server works pretty well with next to no configuration, that SPF record still has to be setup if using 'monkeychimp 360' or whatever and that these 'tracking benefits' are not beneficial at all except in theoretical edge cases. (All they do is remove limited focus away from the numbers that really matter and stop common sense being used).

Then there is the spamification of emails - does that really happen if not using the SaaS service? Find out first, then move to the SaaS when one's server gets blacklisted for spewing out spam-tastic click-bait? That would be the cost-effective choice, but no, let's just rely on a third party service for something as basic as sending an email!

Helpless micro-managers, the same ones that 'never got fired for buying IBM' a generation ago are the culprits in small to medium sized enterprises. These guys can't code. And code IS important, it is not just something you get a programming resource to do. (okay it is).

These clueless micro-managers can't have confidence in their team to deliver because they have no confidence in themselves to do anything actually difficult, i.e. needing a textbook to learn or fundamentals to grasp. Consequently they are forever being less than cost effective buying cheap bloat instead of engineering something better.

The bloat add-ons are also bloated up to appeal to the micro-manager mindset. That homepage slider, let's sell a slider that does all those tacky DVE dissolves that television was cursed with in the 1980s!!! Rather than the simple transition people know, let's have 200 effects to choose from! So micro-manager buys the 200 effects slider because it is 'better'. Forget that only one effect was needed, go with bloat and don't step back and think for a moment that a banner slider can be done by mere mortals using things like hand-written code. Why reinvent the wheel when you can have two hundred on one of those caterpillar tracked things they used to have to move the Space Shuttle to the launch pad. We're enterprise too, right?

I don't believe there is a coder in the world that goes for bloat, unless they are useless. I also believe that coders go with whatever the team decides, i.e. what the micro-manager hands down from on high. They suffer the bloat to make others happy, a pragmatic thing and certainly not to get a 'bonus'.

I think you identify a lot of the causes of page bloat here, but I'm not convinced developers are blameless. Things we do:

  - Load all of jQuery when we just need a few functions
  - Load all of Bootstrap when we just need a grid system
  - Load a whole templating library for one little widget ("we'll need more later")
  - Load a whole icon font when we're only using two of the icons
  - Load a whiz-bang carousel library when we only need to rotate a few images
  - Use prefab themes that do all of these things and more (shudder, *parallax*) with the idea of providing nontechnical users with the ability to do anything they can imagine without learning to code a little bit
I don't think this is all incompetence (though plenty of it certainly is). Some of it is just needing to get shit out the door before the deadline, and some of it is attempting to plan ahead (eventually we're going to use more of jQuery, right?). Likewise with your examples - I don't think this is always micromanaging, sometimes it's just resource management. If you don't have the developers to manage an email server, why not pay a little money to a third party and let them manage it?
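To put the first couple of those bullet points in concrete terms: the handful of helpers a page actually needs from a big utility library are often a few lines of vanilla JS each. A hypothetical sketch (function names are illustrative, not taken from any particular library):

```javascript
// Hand-rolled replacements for the few utility functions a page actually
// uses, instead of shipping an entire library. (Illustrative names.)

// Deduplicate an array, preserving first-seen order.
function unique(arr) {
  return [...new Set(arr)];
}

// Copy only the listed keys from an object.
function pick(obj, keys) {
  return Object.fromEntries(keys.filter(k => k in obj).map(k => [k, obj[k]]));
}

// Delay calls to fn until ms of quiet have passed (e.g. for resize handlers).
function debounce(fn, ms) {
  let t;
  return (...args) => {
    clearTimeout(t);
    t = setTimeout(() => fn(...args), ms);
  };
}
```

None of this argues against libraries in general, only against shipping hundreds of kilobytes when three helpers would do.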

I saw some devs who slap a jQuery plugin here, another plugin there, and so on with various other third-party JS libraries. And then the page load involves 1MB.

Or SaaS websites that have a single-page web app with a 4MB JS file, and a user interface that looks like it was designed by a committee. The single-page app has a long loading time; looking with DevTools, it turned out to be a GWT+Ember monster that was slapped onto their old, rusty Eclipse-based Java product. The little information they can show is spread over a navigation structure nested 7 levels deep so they can advertise it as "drill down", whereas competitors show all the info within 2-3 levels, and most of it is visible without a single click. And the bad one is very much into enterprise with a huge sales staff; the better competitor is a SaaS-only company.

I saw: "Angular is cool, we should do that!" Next thing, the page looks about the same but loads 3x as slow - but we got Angular, so we had that going for us...

NIH much?

Developers cost a lot of money. Mailchimp doesn't. Maybe your "clueless micro-managers" know a little more than you think.

As the "coder" in your description, I was always happy to refactor / simplify code, and remove unnecessary cognitive load (i.e. unused features). But management were not interested in that at all.

The simple solution seems to be to add the absolute value of the code subtractions to the value of the code additions.

This would weight code refactors with the weight they deserve...
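As a toy sketch of that idea (an entirely hypothetical scoring function, not any real tool's metric): credit deletions with the same weight as additions, so a big refactor scores as well as a big feature.

```javascript
// Hypothetical "impact" score for a set of commits: lines removed count
// as much as lines added, so code-deleting refactors aren't penalized.
function impactScore(commits) {
  return commits.reduce((sum, c) => sum + c.added + Math.abs(c.removed), 0);
}
```

Whether any manager would adopt such a number is another question, but it at least makes the incentive explicit.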

I think those other things can be worded in a positive way though. That could make it worth it?

Optimized website. Improved loading time by .3 seconds on average for all users. Created cleaner and more efficient design and implementation. Etc...

> "Cut out 10k lines worth of crap only 10% of customers used, stripped away stupid 1Mb worth for js that displays animated snowflakes, etc"

Well, those 10% would probably be pretty pissed off if you did that...

justifying their own existence, it seems

It's also about the ads bubble:

> "Or the bubble is going to burst. [...] we need to ban third-party tracking, and third party ad targeting. Ads would become dumb again, and be served from the website they appear on."

It's already insane that many well-known news websites load 25+ third-party trackers with 4+MB of waste. All these poor battery-powered devices already have a hard time.

The thing is, removing bloat and making a site faster can be seen as a feature. Too bad that many will start to look for a new framework and lib to do just that :(

"Cut page load time by 90%" is also an accomplishment.


I hope to see less of this in 2016.

Happy New Year!

> stripped away stupid 1Mb worth for js that displays animated snowflakes

I implemented this in 740 bytes uncompressed.
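Plausible: falling-snow logic is genuinely tiny. A hedged sketch of what the core of such a script might look like (the physics below is illustrative; a browser version would draw the flakes on a canvas each frame):

```javascript
// Minimal falling-snow physics: flakes drift downward with a gentle
// sideways sway and wrap back to the top when they leave the screen.
// (Illustrative sketch; rendering is browser-side and omitted here.)
function makeFlakes(n, w, h) {
  const flakes = [];
  for (let i = 0; i < n; i++) {
    flakes.push({
      x: Math.random() * w,
      y: Math.random() * h,
      v: 1 + Math.random() * 2,  // fall speed in px per frame
    });
  }
  return flakes;
}

function step(flakes, w, h) {
  for (const f of flakes) {
    f.y += f.v;                 // fall
    f.x += Math.sin(f.y / 20);  // sideways drift
    if (f.y > h) { f.y = 0; f.x = Math.random() * w; }  // wrap to top
    if (f.x < 0) f.x += w;
    if (f.x > w) f.x -= w;
  }
}

// In a browser: requestAnimationFrame(loop) where loop calls step()
// and paints each flake as a small white circle on a canvas.
```

Minified, something like this plus a dozen lines of canvas code lands comfortably under 1 KB, which makes the 1MB versions all the more absurd.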

> It's like we woke up one morning in 2008 to find that our Lego had all turned to Duplo. Sites that used to show useful data now look like cartoons.

That is the best description I've heard of the recent trend of making every item cover 30% of the page so that you can only fit 2 data points. What is the deal with all this? Keeping the number of options down is one thing, but making repeated tables of data gigantic serves no purpose at all. It might look good in a thumbnail of a screenshot but actually using it is next to impossible.

I particularly like this quote:

These comically huge homepages for projects designed to make the web faster are the equivalent of watching a fitness video where the presenter is just standing there, eating pizza and cookies.

That is what we call fashion. With today's bandwidth and CPU/monitor power, the utility part is no longer in focus, so the fashion part dominates.

Fashion does not need to make sense. Sensible folks (the minority of the family) may choose to ignore fashion; but if you are criticizing fashion (with seriousness), then you are in the wrong game.

If we're using fashion as an analogy, it's kind of a mixed bag.

There's both this type of fashion: https://s-media-cache-ak0.pinimg.com/236x/c0/66/12/c06612029...

...and this type of fashion: http://img.izismile.com/img/img7/20140521/640/fashion_runway...

out there.

Things can be fashionable, yet crisp and usable. Things can also be crazy, stupid (to most), and groundbreaking (to a few). Put another way, fashion currently dominates, but that would be okay since we don't need to sacrifice good visual design in the name of performance any more.


* People get lazy.
* People don't user test.
* People like to follow trends.
* Designers and developers get micromanaged.

...along with all of the other monkey wrenches that most developers and designers have gotten used to. People end up building cool-looking websites that aren't as usable as they should be.

On the other hand the creative, intellectual minority that wants and needs fast access to information is left behind which is a cultural and economic collateral damage of fashion.

That would explain why engineers often dress like shit.

He gave that talk in Sydney, Australia. Oh, but if only The Sydney Morning Herald had been there and reported on it! We could have read about it on 3.4MB of downloadable content (actually, not just "content").

Their beta website takes up 1.1MB. I highly recommend that you add the following Adobe busting filter though (given that in their "Satellite" script Adobe attempts to bust your ad-blocking, I think you should bust their busting...):


This is the HN community's fault (not even web fora in general, it's HN specifically). Post a page with the default text size here and you'll get a raft of complaints that your text is "tiny". No, my text is the same size as web page text has been for decades. But users with enormous monitors don't set their DPI correctly and web browsers don't display pages at a sensible scale for the screen size.

A sensible scale is defined by the user's eyes, and most sites are not designed with impaired vision in mind. Web page text has never been set at a good size by default, and most designers make it smaller than the default because they have good eyes and aren't thinking about visual impairment.

I'm more likely to complain about the comically huge fonts on sites like Medium or various "modern web designer" blogs. When I browse on my old 1024x768 laptop, often there is literally less content on the screen than what used to fit on my old CGA monitor (80x25 characters). And about half the time they deliberately slow down the scrollbar to make it even harder to read. If I'm lucky, zooming out won't trigger an increase in font size, and then I can actually read the text for 30 seconds until "sign up for my newsletter" pops up and I leave.

The default text size in every browser, as far as I'm aware, is actually 16px. The text on Hacker News is 12px, because virtually every designer decides they know better than the browser settings.

Which one are you thinking of as the "default"?

I'm thinking the size you get when you literally do not specify a size. If you say that's 16px then I believe you.

And yes, HN itself is smaller than that.

Nobody should be specifying fonts in pixel sizes anyway. Just use ems everywhere and let the browser scale them for the current display resolution.

"Designers" hate this because they can't massage every pixel into just the right location on their cinema display and can't be bothered to accommodate different hardware. They'd rather just send users 300 DPI images of everything and let the proles deal with downsampling them on their ever so inadequate devices.

The CSS px unit is actually usually (for high-DPI devices) defined as 1/96″.

It doesn't matter how big my monitor is; I walk away from a lot of websites thinking I need a larger one.

I think it has to do with mobile. Things are scaled to screens, and mobile is the lowest common denominator of what can fit on a screen. Just blow that up to desktop and you have Duplo.

"Chickenshit Minimalism: the illusion of simplicity backed by megabytes of cruft."

This is not restricted to websites - a lot of software has suffered from the same trend, where newer versions look simpler - and often have reduced functionality - while for some reason still requiring more resources than the previous version.

Yes, Windows 95 ran on 4MB RAM, Windows 2000 on 64MB. Then the bloat came, and nowadays Windows 10 looks blander and has a more reduced UI than ever, yet still requires as much and runs as slow as Windows 7, which had lots of themes and transparency effects and a rather modern, nice interface. No wonder several parts of the interface, like the start menu, are now in .NET. It was already a major mistake that led to the restart of the Longhorn project, which became Vista - the former .NET-based Explorer and shell and WinFS were all .NET applications and very slow. I ran WinFS beta 2 on WinXP on a high-end PC at the time, and it was promising but awfully slow. Instead of adding the database-like features to the NTFS driver (parts of the object features from the Cairo project are still there), they added a filesystem filter driver in userland and a .NET service. Needless to say, that idiotic design, relying on .NET for a mission-critical component, never worked out. Vista shipped with an improved WinXP desktop search.

Windows 95 could almost run in 4MiB of RAM, but only with heavy swap use - as was also true at the advertised minimum of 8MiB. 95 really needed 16MiB or so, to be fair to it.

Windows NT 4 could run in 16MiB but ideally had 32MiB.

And Windows 95 had no modern security checks in place, so good luck with Internet access. That stuff has performance costs. Also people seem to forget that we had to reboot those machines all the time due to crashes, more code that has performance hits. Yes there is definitely bloat but there is also a lot more going on underneath the seams than before.

95 perhaps. NT was rock-solid.

What kind of examples do you have in mind? I don't disagree entirely...but I've found that sometimes what seems like gratuitous weight is a result of the amount of code that has to be used to not only fix past bugs, but reconcile all the new standards and external mishmashes (such as interoperability with operating systems and other libraries that themselves have been upgraded) that have come into existence.

The magnitudes of increase in lines of code for the installers we download definitely outpace the increase in functionality, compared to 1 & 2 decades ago...but we have a lot more systems to interoperate with. That said, I'm always gobsmacked when I download a relatively new game from Steam and it weighs in at under 100MB...which would've been 70+ floppy disks back in the day :)

(though in the case of games, the increased weight is most often due to multimedia assets)

The Opera browser is an egregious example. It used to be a browser that had everything plus the kitchen sink, it was blazing fast, and really small (both in executable size and in RAM). It managed to stay that way for many years, constantly improving and packing in more useful features while keeping the small footprint, I'd say until version 10 or so. Versions 11 and 12 started removing features and introducing bloat, and then after version 12 the browser was switched to Blink and became a Chrome clone that didn't have even a quarter of the features of the previous versions.

I think probably one (not the only) reason for this was the "reconciling all the new standards and external mishmashes" you mention, but how does that make the weight less gratuitous? It only means that the blame is not exclusively on Opera but also on other companies, committees, etc., but it doesn't make the phenomenon any less ludicrous and sad.

Not the poster and not quite what you were talking about, but I was recently impressed that the new Node version of the WordPress control panel is 126 MB whereas the old PHP version, including that and the rest of WordPress, is 24 MB (both unzipped). I guess the Node thing includes Node and a browser, but it still seems to have quite a good bloat-to-functionality ratio.

I wonder how much of that is simply npm duplicating libraries (at slighly different versions) due to its no-shared-dependencies approach?

https://github.com/Automattic/wp-calypso/blob/master/package... uses ~130 libs. After npm install (incl. dev deps), node_modules/ weighs 170M file size (and whopping 550M(!) disk size). Let's check duplication. Dirs: (Assuming equal size & name indicate equal content)

    # DU holds saved `du` output for node_modules/. Count all dirs, then
    # unique (size, name) pairs, then unique names regardless of size:
    $ sed 's@\S*node_modules/@@' DU | sort -k2,2 -k1,1 | wc -l
    ~/wp-calypso (master) $ sed 's@\S*node_modules/@@' DU | sort -k2,2 -k1,1 | uniq | wc -l
    ~/wp-calypso (master) $ sed 's@\S*node_modules/@@' DU | sort -k2,2 -k1,1 | uniq --skip-fields=1 | wc -l
    $ sed 's@\S*node_modules/@@' DU | sort -k2,2 -k1,1 | uniq --skip-fields=1 --count | numaverage

    # Sum the sizes under the same three groupings:
    $ sed 's@\S*node_modules/@@' DU | sort -k2,2 -k1,1 | tr -d , | numsum | numfmt --grouping
    ~/wp-calypso (master) $ sed 's@\S*node_modules/@@' DU | sort -k2,2 -k1,1 | uniq | tr -d , | numsum
    ~/wp-calypso (master) $ sed 's@\S*node_modules/@@' DU | sort -k2,2 -k1,1 | uniq --skip-fields=1 | tr -d , | numsum | numfmt --grouping
=> dirs appear in ~2.2 places on average; ~31% of total size is wasted on exact dups, ~14% more spent on different versions on same lib.

    $ npm dedupe
    $ du --summarize --apparent-size node_modules/
    166,097,508	node_modules/
Underwhelming! Only 2%?! dedupe is constrained by npm lookup algorithm (can only lift equal versions to parent dir) but 2% is useless. Should have used sym/hard/reflinks.

Anyway, I now know npm's exact-duplication overhead is not huge (though could be linked); inexact-duplication is small enough to be easily worth the ability to mix versions; and that the new control panel is indeed bloated [however I assume it has more functionality than old?].

The entire iWork suite of Apple software suffered this. I used to use Pages and Numbers for a LOT of business stuff. Now it's not even an option. They don't do half the stuff the old versions did.

A recent example of this is buildbot. The current version (8.x) has a very no-frills, utilitarian python/jinja UI which has served me, personally, and many other open source projects well (chromium, mozilla, LLVM, etc). Version 9 is a rewrite two years in the making using angular with the modern JS toolchain for its frontend. I tried out the latest (beta) build, and from what I can see I think it's exactly what you describe: more resources with less functionality. Time will tell whether the final version lives up to its promises but I'm very skeptical at this point.

Google invented their own name for it - Material design.

I laughed at this harder than I should have.

It's funny because it's true.

For the most part this is a pretty good article. I find that the more removed someone is from the real, vanilla HTML the more bloat they will inadvertently bring in. Need to focus a form? Download an angular plugin because you're using angular, why would you want to do it natively!

The native DOM is pretty inelegant but at the same time it can't be ignored; it must be understood. You don't need a new plugin, font, css reset file to accomplish everything you want and you can even do it cleanly!

I was a little concerned about this part though:

> [...]ad startups will grow desperate[...]This why I've proposed we regulate the hell out of them now[1].

I'm all for being able to download your information, but some of the other things are just a bit off the mark. Deleting your data, for example, can be problematic in any kind of collaborative / productivity app. The right to go offline is nice in theory, but many devices may actually need the internet to function at all. I mean, yeah, the examples given are good examples of things that can be "smart" and "dumb", but what about similar things, like sensors and other types of trackers? Seems like market pressures would be better at changing those than regulation.

[1] http://idlewords.com/talks/what_happens_next_will_amaze_you....

> Need to focus a form? Download an angular plugin because you're using angular, why would you want to do it natively!

Otherwise known as CBDD -- "Code Bootcamp Driven Design"

Byte magazine had a cover feature 23 years ago entitled "Fighting Fatware".

We're doomed to repeat history...


The article starts: Dave Brown, a Keene, New Hampshire-based entrepreneur, got his Christmas wish last year - a copy of Microsoft's Access relational database manager for Windows. Excitement turned to disappointment, however, once Brown tried to run the program. Despite the fact that his system had the 4 MB of RAM that Microsoft recommends, Access was "hideously slow." A call to Microsoft technical support revealed the truth: He needed at least 8 MB of RAM to achieve acceptable performance. Now Brown has two options: He can spend $200 for more RAM or wait for version 1.1, which Microsoft claims will run better with 4MB.

A second notable quote: One controversial aspect of the software-bloat problem is the increased use of high-level languages, particularly C and C++. While assembly programming can produce very tight code, the common belief is that with C or any other high-level language the code will be larger. However, a good programmer, says WordPerfect's LeFevre, can minimize the growth of code while making the most of a high-level language's advantages. Lotus Development (Cambridge, MA) ported 1-2-3 from assembly to C between versions 2.01 and 3.0. Consequently, the code size nearly tripled, from 1.4 MB to 4 MB. Not all of that growth is attributable to the difference between C and assembly (version 3.0, for example, included the printing utility Allways and had significantly more features) but it was a major contributor. The resulting product would not run acceptably under DOS until the company delayed its release to compress and optimize the code.

On Google's AMP, which endlessly reloads a picture carousel: "These comically huge homepages for projects designed to make the web faster are the equivalent of watching a fitness video where the presenter is just standing there, eating pizza and cookies."

On trying to fix the problem: "These comically huge homepages for projects designed to make the web faster are the equivalent of watching a fitness video where the presenter is just standing there, eating pizza and cookies."

Someone recently complimented one of my web pages for being unusual in that the pictures were all directly related to the copy.

> On Google's AMP, which endlessly reloads a picture carousel: "These comically huge homepages for projects designed to make the web faster are the equivalent of watching a fitness video where the presenter is just standing there, eating pizza and cookies."

> On trying to fix the problem: "These comically huge homepages for projects designed to make the web faster are the equivalent of watching a fitness video where the presenter is just standing there, eating pizza and cookies."

Did you mean to have the same quote in both paragraphs?

Oops, sorry.

A large part of the code bloat problem could be resolved if the JS ecosystem's most popular build tools supported some of the advanced compilation features in Google Closure Compiler, like dead-code elimination and cross-module code motion [1].

ClojureScript makes heavy use of the Closure Compiler's advanced compilation features, and as a result generates code that is often orders of magnitude smaller than it would be without those features. Think of what a bloated mess a ClojureScript app would be if it had to include the entire ClojureScript standard library with every build. This is exactly what's happening in the JS world, where developers include by default the entirety of any utility libraries they're using when they're only calling a handful of functions from them.

Before anyone starts suggesting "just use Closure Compiler for JS projects", it's really not that simple (at least when I last looked into it). There's a huge amount of friction involved in using the Closure Compiler for a regular JS project (most of which wouldn't apply to a ClojureScript project because its JS output is machine-generated, and its build chain is designed to work exclusively with the Closure Compiler and all its quirks), namely writing all your code as Closure Modules, defining externs for any third party libraries you use, and setting up the JVM-based compiler itself and integrating it with the rest of your build tools.

I hope to see some improvement in this area with the dawn of ES6 modules, since they were designed from the ground up with static analysis in mind. Robust and accessible dead-code elimination and cross module code motion for ES6 modules could easily bring about a revolution in JS code sizes on the web.

[1] https://developers.google.com/closure/compiler/
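To make the dead-code-elimination idea concrete, here's a toy sketch of why ES6-style named imports are statically analyzable. The helper names are made up, and the regex stands in for the real parser a tool like the Closure Compiler or Rollup would use:

```javascript
// Collect the names a module pulls in via static `import` statements.
// (Toy regex matcher; real bundlers build a proper AST.)
function importedNames(source) {
  const names = new Set();
  const re = /import\s*\{([^}]*)\}\s*from/g;
  let m;
  while ((m = re.exec(source)) !== null) {
    for (const n of m[1].split(',')) names.add(n.trim());
  }
  return names;
}

// Given a utility module's exported names and the set of names actually
// imported anywhere, the rest is dead code a bundler can safely drop.
function deadExports(exported, used) {
  return exported.filter((name) => !used.has(name));
}

const app = `import { pick } from './utils.js';
console.log(pick);`;
const used = importedNames(app);
console.log(deadExports(['pick', 'merge', 'clone'], used)); // [ 'merge', 'clone' ]
```

Because the import list is a fixed, top-level part of the source text, a bundler can compute the used set without running any code - which is exactly what CommonJS's dynamic `require(expr)` makes impossible in general.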

"A large part of the code bloat problem can be resolved..."

...with deflating the front left tire a little bit, putting a magnet on the gas cap, folding in the side mirrors.

I think you missed the point of the talk (or didn't read or watch it), and the kind of solution you proposed is mentioned there.

(PS I agree with what you said, though, regarding JS dead code elimination.)

I was only commenting on the first-party code bloat side of the problem mentioned in the article, and I realize this only addresses a part of it. Asset bloat and third party scripts are entirely different beasts that can only really be tackled on a case-by-case basis.

You're exactly right; ES6 modules can solve this. Rollup (an ES6 bundler) has tree shaking [1] and also smaller bundle sizes in general [2].

[1]: http://rollupjs.org/

[2]: https://github.com/nolanlawson/rollup-comparison


And it looks like JSPM/SystemJS already has it integrated [1]! I'm wondering if there are any similar efforts in the Webpack ecosystem.

[1] https://github.com/systemjs/builder/pull/205

I think Webpack 2 will support tree-shaking as well.

The general term is "whole program optimizer". It's what Google does to Chrome (but not Chromium, IIRC), and what ProGuard does to Java (e.g. Android) apps.

Define a couple entry points and use static analysis to pare down the code to that, for minification, obfuscation, and performance.

One nice part of Scala.js... you get the dead code killer for free.

Overall, this was a good article and it had me eyerolling at some of the dumb...dumb things people do. Like the internet.org background logo continually downloading a movie. Whoever did that should not be making websites.

Likewise, I took a look at my project and was able to chop the JS size in half by yanking out some libraries I no longer use, so now the JS and CSS are each under 500K. Still a 1.2MB load overall; but it's also cached, and it's an app people will visit more than once.

I hate that my CSS is close to 500K though. The design itself isn't that complicated; but I'm basing it off of a bootstrap theme and until I know what I'm using I can't prune much.

And I think that's a source of some bloat: frameworks and libraries. But it's a tradeoff; it's code I don't have to write, which lets me get a better product to market faster. Sure, I could really spend the time to prune all my assets, and I think one day that will be a good move to make. But for me, and for many other projects, it's a tradeoff.

Usually media is the big one to blame, and things like streaming a background movie, eating up hundreds of megabytes of bandwidth to display it, are simply irresponsible.

In Apple's case, they probably want their images to be high resolution, which is understandable. But even then they could (may even already) run it through some compression filters to reduce the size without hurting the quality.

It's something we should all be mindful of. You can, but don't have to, go to extreme lengths to reduce the size of the site. There's often some low-hanging fruit you can reach for that gets you 80% of the way there. And obviously, if it's a site that people visit a lot versus a site that people will visit once, your priorities for optimization are going to be different.

For pruning your CSS: http://stackoverflow.com/a/3113120

The problem with CSS pruning is that you have to run it over ALL the sites you have and all kinds of dynamic pages you generate - or you'll lose styles you actually need.

Oh that's awesome. Would be even better if you could "start a session" navigate through your website and then see amalgamated results at the end, rather than just a single page.
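That amalgamated-session idea can be sketched in a few lines, assuming you can capture the rendered HTML of every page and state you exercise. The helper names are made up, and the string matching stands in for how real tools match selectors against the live DOM:

```javascript
// Collect every class name that appears in a chunk of rendered HTML.
// (Toy attribute scrape; real pruners query the actual DOM.)
function classesIn(html) {
  const seen = new Set();
  const re = /class="([^"]*)"/g;
  let m;
  while ((m = re.exec(html)) !== null) {
    for (const c of m[1].split(/\s+/)) if (c) seen.add(c);
  }
  return seen;
}

// Amalgamate classes across all captured pages, then flag selectors that
// never matched anything. Only simple ".class" selectors are considered;
// anything else is conservatively kept.
function unusedSelectors(selectors, pages) {
  const seen = new Set();
  for (const page of pages) for (const c of classesIn(page)) seen.add(c);
  return selectors.filter((s) => s.startsWith('.') && !seen.has(s.slice(1)));
}

const pages = ['<div class="nav hero"></div>', '<p class="byline"></p>'];
console.log(unusedSelectors(['.nav', '.hero', '.byline', '.carousel'], pages));
// [ '.carousel' ]
```

The key point is the "loss" problem from the comment above: a selector is only provably unused relative to the set of pages and states you actually fed in.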

> The article somehow contrives to be 18 megabytes long, including (in the page view I measured) a 3 megabyte video for K-Y jelly, an "intimate lubricant".

(Warning. Swear words incoming, because the situation has grown far out of control)

Fuck websites with autoplay (or autoplay-on-hover) videos. Fuck them. Whoever has invented or implemented this crap, please resign from your post immediately.

Even in 1st world countries, people use 3G/4G with data caps to work or are in otherwise bandwidth-constrained environments (public hotspots, train/bus wifi) etc. You are screwing over your users and don't realize it.

Also, Spiegel Online especially comes to mind: a 30-second video clip with a 25-second advertising clip. Fuck you.

> Why not just serve regular HTML without stuffing it full of useless crap? The question is left unanswered.

Easy actually: because a well-defined restricted subset of HTML can be machine-audited and there is no way to abuse it. Also, Google can save resources at indexing.
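To illustrate why a restricted subset is easy to machine-audit, here's a toy validator with a fixed tag whitelist. The tag set is made up and the regex tokenizer is crude; AMP's actual validator parses the document properly:

```javascript
// A restricted subset is auditable because anything outside the whitelist
// can be rejected outright, with no need to reason about behavior.
const ALLOWED = new Set(['html', 'body', 'p', 'a', 'img', 'h1']);

function audit(html) {
  const offenders = [];
  const re = /<\s*\/?\s*([a-zA-Z][a-zA-Z0-9-]*)/g;
  let m;
  while ((m = re.exec(html)) !== null) {
    const tag = m[1].toLowerCase();
    if (!ALLOWED.has(tag)) offenders.push(tag);
  }
  return offenders;
}

console.log(audit('<p>hi</p><script>evil()</script>')); // [ 'script', 'script' ]
```

With full HTML plus arbitrary JavaScript, no such simple check exists - which is the (stated) rationale for formats like AMP, whatever one thinks of the execution.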

> Easy actually: because a well-defined restricted subset of HTML can be machine-audited and there is no way to abuse it

This is why I personally use NoScript.

People often ask me "how can you stand to use NoScript on the modern web? Isn't it a huge pain to whitelist scripts? Isn't everything broken?"

Nope. Most of the web works perfectly fine without loading Javascript from 35 different domains (as my local news site does). You whitelist a few webapps and you're pretty much good to go. The difference is incredible. Your browser uses a fraction of the memory. Pages load faster than you can blink. The browser never chokes or lags. Pages scroll as you expect. Audio and video never autoplay and distract you. When I briefly used NoScript on mobile, it was a miraculous life-saver that made my battery live forever.

In the past couple years, however, I have noticed a new phenomenon. Remarkably - madly, in my view - there are webpages, webpages that should be simple, webpages by all appearances that consist of nothing more than a photo (maybe more than one), a byline, and a few hundred words of text, that require Javascript to load. As in, you will get a blank page or an error message if you don't have their scripts enabled.

I don't understand it. I don't want to understand it. I just want it to stop.

I understand that you need Javascript and so forth to run a webapp. I'm not even asking for your webapp to be less than 5MB. Hell, make it 50MB (I just won't ever use it on mobile.) Making applications can be a lot of work, maybe yours is super complicated and requires tons of libraries or some crazy media loading in the background and autoplaying videos and god knows what else.

But please, please, don't require Javascript to simply load an article and don't make a simple article 5MB. Why on Earth would you do that? How many things have to go wrong for that to happen? Who is writing these frameworks and the pages that use them?

"In the past couple years, however, I have noticed a new phenomenon. Remarkably - madly, in my view - there are webpages, webpages that should be simple, webpages by all appearances that consist of nothing more than a photo (maybe more than one), a byline, and a few hundred words of text, that require Javascript to load. As in, you will get a blank page or an error message if you don't have their scripts enabled."

I use noscript as default and I'm noticing the same thing. I post them to twitter. Here's a sample:

- 'Here are the instructions how to enable #JavaScript in your #web #browser.'

- 'For full functionality of this site it is necessary to enable #JavaScript.'

- You must enable #javascript in order to use #Slack. You can do this in your #browser settings.

- 'You appear to have #JavaScript disabled, or are running a non-JavaScript capable #web #browser.'

- 'Please note: Many features of this site require #JavaScript.'

- 'Tinkercad requires #HTML5/#WebGL to work properly. It looks like your #browser does not support WebGL.'

- 'Warning: The NCBI web site requires #JavaScript to function. more...'

- 'Whoops! Our site requires #JavaScript to operate. Please allow JavaScript in order to use our site.'

- 'The #media could not be played.'

- 'Notice: While #Javascript is not essential for this website, your interaction with the content will be limited.'

- 'Powered by #Discourse, best viewed with #JavaScript enabled'

It shouldn't be surprising that Tinkercad and Slack require JS.

Seriously, did they think Tinkercad was just going to be a pile of CSS transforms?

He's probably talking about their home page, which is just text and images selling their product.

"Seriously, did they think Tinkercad was just going to be a pile of CSS transforms?"

To read the front page?

Seriously, I remember using a web chat client (might have been an old version of mibbit) which was "minimal" JS. Basically server side rendering of the incoming messages plus periodic page refreshes. It was a very poor user experience.

pomf.se used to have:

>Enable JavaScript you fucking autist neckbeard, it's not gonna hurt you

It always gave me a chuckle. Most of its clones still have it (e.g. pomf.cat).

From the FAQ at pomf.cat:

> All filetypes but exe, scr, vbs, bat, cmd, html, htm, msi files are allowed due to malware.

Yes, scripting in html never hurt anybody...

Back in the days of IE4/5 I discovered that turning JS off would very effectively stop pop-ups, pop-unders, pop-ins, slide-overs, and all other manner of irritating cruft, and it's been the default for me ever since. Conveniently, IE also provided (and AFAIK still provides) a way to whitelist sites on which to enable JS and other "high security risk" functionality, so I naturally made good use of that feature.

More recently, I remember being enticed by some site's "Enable JavaScript for a better experience" warning, and so I did, only to be immediately assaulted by a bunch of extra annoying (and extra-annoying) stuff that just made me disable it again and strengthened my position that it should remain off by default. That was certainly not what I considered "a better experience"... now I pay as little attention to those messages as I do ads, and if I don't see the content I'm looking for, I'll find a different site or use Google's cache instead.

Another point you may find shocking is that I used IE for many years with this configuration, and never got infected with malware even once from casual browsing, despite frequently visiting the shadier areas of the Internet and IE's reputation for being one of the least secure browsers. It likely is in its default configuration with JS on, but turning JS off may put it ahead of other browsers with JS on, since the (few) exploits which technically don't require JS are going to use it anyway to encode/obfuscate so as to avoid detection.

I think it is even the default on server versions of Windows since Server 2003.

> I post them to twitter.

...Why?
"> I post them to twitter. ...Why?"

As a reminder to myself how lame some sites are and what happens when it's assumed JS is in use.

And on top of that, some redirect you to another page to display the no-js-warning. Then you enable JS for the site and reload but will get the same warning because you're no longer on the page you visited...

I think it is mod_pagespeed that's doing this, kinda silly.

NoScript, at least, has an option to catch and stop redirects of this kind. They can be quite funny when you can see a perfectly loaded page that wants to send you elsewhere.

>I just want it to stop.

You and I are kindred spirits.

I feel it's getting worse. A large % of the links (mostly start-ups landing pages, not articles) I click on HN just present me with a completely blank white page. No <noscript>, nothing! If you're lucky, you see enough to realize that it's probably just a page that relies on JS. It's never a good first impression.

It's getting progressively more annoying to whitelist the TLD as well as the myriad CDNs that sites are using. Often it's a gamble: click "Temporarily allow sketchy.domain.com", or throw in the towel and say "Temporarily allow all this site".

Site embeds a video? Good luck picking which domain to whitelist out of a list of 35 different domains ;) Temporarily allow one, reload, repeat a few times, close tab in anger and disappointment.

It's sorta-kinda a good thing, implemented poorly. What's actually happening is that more things that used to serve HTML directly to the browser, are now just browser-oblivious HTTP REST-API servers. Which is good! API servers are great!

The correct thing to do, after implementing one, though, is to then make a "browser gateway" server that makes requests to your API server on the one side, and then uses that to assemble HTML served to the browser on the other side. (This is the way that e.g. WordPress blogs now work.)

What's happening instead is that the site authors are realizing that they can just get away with writing one static blob of Javascript to act as an in-browser API client for their API server, stuff that Javascript in an S3 bucket, put e.g. CloudFlare in front of it, and point the A record of their domain's apex at that CF-fronted-S3-bucket. Now requesting any page from their "website" actually downloads what's effectively a viewer app (that they don't have to think about the bandwidth costs of at all; they've pushed that entirely off to a third party), and then the viewer app starts up, looks at the URL path/query/fragment, uses it to make an API request to their server, and the response from that is then the rendered page.

Kind of ridiculous, no?

It does sort of make sense to me for sites like e.g. Twitter, where their bread-and-butter is running API servers and writing API clients that interact with those servers. The "web client" is just, then, another API client, written in Javascript and "zero-installed" into web browsers whenever they request a page from the site.

But for, say, a newspaper site, or a blogging engine? Those sites should definitely care about serving HTML directly, even if only for Googlebot's sake.
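That "viewer app" pattern boils down to a few lines of client-side code. The API host and page structure here are hypothetical, and fetch and the mount element are passed in explicitly so the sketch stays self-contained:

```javascript
// Map the browser's URL path onto the API server's URL for the same resource.
function routeToApiUrl(path) {
  // e.g. /posts/3 -> https://api.example.com/posts/3
  return 'https://api.example.com' + path;
}

// The static "viewer app": read the path, call the API, render the response.
// The HTML page the server sent contained none of this content.
async function render(path, fetchImpl, mount) {
  const res = await fetchImpl(routeToApiUrl(path));
  const post = await res.json();
  mount.innerHTML = `<h1>${post.title}</h1><p>${post.body}</p>`;
}

console.log(routeToApiUrl('/posts/3')); // https://api.example.com/posts/3
```

Every visitor downloads and runs this machinery before seeing a single word - which is the bargain being criticized above.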

I think you really don't get the point of the article. This kind of crazy overcomplexity seems like one of the many things the article author would lump into excessive complexity for the sake of complexity.

Yes you can implement a REST API thingy and then an application server that templates it all, and maybe that logic is written in JavaScript and is the same code that executes on the browser so you basically have a sort of headless quasi-web-browser assembling pages to serve so you can reuse the same code on the client side to try and reimplement the browser's navigation logic with pushState etc. etc. in some dubious quest to outperform the browser's own navigation logic. I understand this sort of thing is actually done now.

Or you can just serve HTML.

And you miss the point of REST as well, I think. I'm increasingly convinced that nobody has the slightest clue what 'REST' actually means, but to the extent that I can determine what it means, it seems to essentially mean designing according to the original vision of at least HTTP. To use, e.g., content negotiation, HTTP methods, etc. as they were originally intended, rather than arbitrarily carving out whichever features of HTTP happen to work and bending them into some custom RPC system that you for some reason chose to implement over HTTP.

A consequence of this is that the same endpoint should be as happy to respond to Accept: text/html as Accept: application/json, surely. (And obviously, not just with some bootstrap page for your JavaScript web page-viewing web application.) It means your URLs represent resources and don't change. It means resources have canonical URLs. (If an API has a version prefix like "/v1/", it isn't RESTful no matter what its authors claim.)

I suppose you could proxy Accept: application/json requests without transformation to an API server, making the "browser gateway" server a conditional response transformation engine. In some ways that's kind of elegant, I think. But it also feels like overkill.
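The content-negotiation idea can be sketched as a pure function, with the Accept header choosing between representations of one and the same resource. This is illustrative only; a real server would wire it into its request handling and do proper Accept parsing with q-values:

```javascript
// One resource, one URL; the Accept header picks the representation.
function represent(resource, accept) {
  if (accept.includes('application/json')) {
    return { type: 'application/json', body: JSON.stringify(resource) };
  }
  // Default to HTML for browsers.
  return {
    type: 'text/html',
    body: `<h1>${resource.title}</h1><p>${resource.body}</p>`,
  };
}

const post = { title: 'Hello', body: 'Same resource, two representations.' };
console.log(represent(post, 'application/json').type); // application/json
console.log(represent(post, 'text/html').body);
```

Note that neither branch returns a "bootstrap page for your JavaScript viewer app": both are direct representations of the resource itself.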

"Just serving HTML" assumes you have HTML to serve. If you're writing a CMS that consists of manipulating a bunch of /posts/[N] resources that are effectively un-rendered Markdown in the "body" field of JSON documents pushed into a document database/K-V store, though, then you've not got HTML. Coming from the perspective of writing the tools that allow you to manipulate the content, the simplest, cheapest extra thing you can do is to extend that tool into also being an API for getting the content. And nothing more. You don't want your flimsy document-management service to serve your content. You just want it to be spidered/scraped/polled through that API by whatever does serve your content. Exposing your flimsy hand-rolled CMS to the world, even with a caching reverse-proxy in front, would be a security nightmare.

And, even if you did want your CMS server serving up HTML to the world†, you don't want your CMS to serve up "pages", because deciding what combines to make a "page" is the job of your publishing and editorial staff, not the job of the people writing the content. This is an example of Conway's law: you want one component (the CMS) that your journalists interact with, that talks to another component (whatever themes and renders up the "website") that your line-of-business people work with, and you want them basically decoupled from one-another.

There's no reason, like you say, that the same server can't content-negotiate between the web version of a resource and the API version. I've implemented that sort of design myself, and a few years back, I thought it was the be-all and end-all of sensible web authoring.

These days, though... I've come to think that URLs are hateful beasties, and their embedding of an "origin" for their content (effectively making them {schema, namespace, URN-identifier} tuples) has broken the idea of having "natural" web resources before it ever got off the ground. The CA system and CORS have come together to make "origins" a very important concept that we can't just throw out, either.

What I'm talking about: let's say that the journalists are actually journalists of a subsidiary of the publishing company, recently acquired, kept mostly separate. So the journalists' CMS system is run by the IT department of the subsidiary, and the news website is run by the IT department of the parent company. This means that the CMS probably lives at api.subsidiary.com, and the website is www.parent.com.

Now, there's actually no "idiomatic" way to have the parent website "encapsulate" the subsidiary's API into itself. You can allow www.parent.com to load-balance all the requests, frontload content negotiation, and then proxy some of the API requests over to api.subsidiary.com... but api.subsidiary.com still exists, and now its resources are appearing non-uniquely through two publicly-available URLs. You can further try to hide api.subsidiary.com behind firewall rules so only www.parent.com can talk to it, but, now—disregarding that you've just broken the subsidiary's CMS tooling and that all needs to be recoded to talk to their own API through the parent's website—what if there's more than one parent? What if, instead of a parent company, it's a number of partners re-selling a white-labelled multi-tenant API service through their own websites? api.subsidiary.com still needs to exist, because it's still a public service with an indefinite number of public partners—even though it's denormalizing URL-space by creating a second, redundant namespace that contains the same unique resources.

And all this still applies even if there's no separation of corporations, but just a simple separation of departments. The whole reason "www." was a thing to begin with—rather than whatever's on the domain's apex just setting up a forwarding rule for access on port 80—is that the web server was usually run by a different department than whatever was on the apex domain, and each department wanted the ability to manage their resources separately, and potentially outsource them separately (which still happens these days; "blog." and "status." subdomains will quite often be passed off to a third-party.)

In short: REST is awesome, and content negotiation makes perfect sense... until there are multiple organizational/political entities who need to divide control over what would originally be "your" REST resources. You can try to hide all that complexity after-the-fact, but you're just hiding it; that complexity is fundamental to the current {origin, identifier}-based addressing model used for the web. We would need to switch to something entirely different (like, say, Freenet's SubSpace Keying approach) to enable REST resources to have canonical, unique identities, such that you could really encapsulate multiple REST "representations" served by multiple different third-parties into the same conceptual "resource", and have that "resource" only have one name.


† On a complete tangent, HTML is way cooler as a format—and far less of a hassle to generate (no templates!)—when you just use it as a pure flat-chunked unstructured markup language, like so:

    <html>hi <em>guys</em></html>
...instead of forcing it to be a container-format for metadata that should rightfully live in the HTTP layer. (Yes, the above is fully valid HTML. Validate it sometime!)

While it does make sense to request, say, the text/html representation of a BlogPost resource directly from the CMS server, what you should get in return is, literally, a direct HTML equivalent to whatever the JSON response you'd be getting is, instead of a "page"—that being an entirely different compound resource that you weren't talking about at all.

That's where people screw up REST—by thinking "send me text/html" means "send me a web page." Pages are a particular kind of "microformat" that the text/html representation is capable of! There are other kinds! Sometimes the unstructured HTML markup of a BlogPost-body might be all you want!

Now, that sounds all well and good as an argument for the sake of e.g. REST client libraries that want to get uncluttered HTML resources to chew on. But web browsers? It'd be crazy to feed unstructured markup directly to a web browser, right? Horribly ugly, for a start, and missing all sorts of important structural elements. It'd look like 1994.

Well, surprisingly, you can gussy it up a fair bit, without touching the Representation of the State itself. One thing people don't tend to realize is that the <link rel> tag is equivalent to sending a "Link" header in HTTP. This means, most importantly, that you can tell the browser what stylesheet it should use for a page as pure HTTP-response metadata, instead of embedding that information in the transferred representation.

With enough clever use of CSS :before/:after styles to insert whatever you like on the page, you can "build" a structure around some REST "State" that was "Represented" during the "Transfer" as completely unstructured markup. That's what you should be seeing as the result of letting a browser do content-negotiation directly with a RESTful API server.

(Now, you can't get any Javascript going that way, but shame on you for wanting to; the thing you're rendering isn't even a representation of a compound resource.)

Anyway, follow this kind of thinking far enough, and you realize that everything the "API CMS" server I just described does is something databases do (at least, the kind with constraints, views, triggers, stored procedures, etc.); and that the "website service" of the parent company is what's more traditionally thought of as a "webapp" server.

Now if you could only get your database to speak HTTP and serve HTML directly... oh, wait: http://guide.couchdb.org/draft/show.html :)

While there's for sure a use case where what you're describing is 100% appropriate, and in those cases, yes, you need some of that complexity - a good majority of blog-like things on the internet are by and large managed by a single person, and something like Jekyll or even just writing in HTML directly is totally fine.

In fact, this is the exact point the author brought up - of course a lot of this complexity has its uses - complex systems aren't invented just for the sake of complicating things. But instead of assuming that every single thing you write needs to be written to the most complicated standards you can imagine, we should instead be focused on "What's the simplest way I can deliver this experience right now?".

Really, it's another way of expressing YAGNI, which apparently the development community has completely forgotten in their quest for the latest and greatest.

> With enough clever use of CSS :before/:after styles to insert whatever you like on the page, you can "build" a structure around some REST "State" that was "Represented" during the "Transfer" as completely unstructured markup. That's what you should be seeing as the result of letting a browser do content-negotiation directly with a RESTful API server.

CSS is likely to be too limited for that; often the layout you want to have means the DOM elements need to ordered and nested in ways that don't actually make sense for your data. What I've seen actually working was XSLT stylesheets. Of course, that has its own issues with bloat, and forces XML for data transmission. (The context I saw it in was a RSS feed, a few years ago.)

Well, when I write PHP/MySQL apps without all the plugins and libraries and crud, everything it sends to browsers is pure HTML. Manipulating the output can be done with server side code. It's just not as flashy and shiny.

Ah, no, that's not what I meant with that second bit; I was referring mostly to not having a <head> in your HTML, thus making your HTML less of a "document" and more just a free stream of bits of text with an <html> tag there purely to indicate what you should be parsing these as. (You don't even need that if you've sent a Content-Type, but removing it means you can't save the document because OSes+browsers are dumb and make no place for media-type metadata on downloaded files.)

Things that go in the <body> like "top navs" and "sidebars" are actually part of the resource-representation; it's just a compound "Page" resource that they're properly a part of. /posts/3 should get you "a post"; /pages/posts/3 (or whatever else you like) should get you "a web page containing a post."

But when you do use the "<head>-less" style, and are requesting a non-compound resource more like /posts/3, then a wonderful thing happens where you get literally no indentation in the resulting linted document. No recursive "block-level" structure; just single top-level block elements (<h1>, <blockquote>, <p>, and <addr> mostly) that transition to containing inline-reflowed text. It's quite aesthetically pleasing!

This seems pretty ridiculous. Why do you need a completely separate API?

The rendering is just business logic. Run a normal web app and it pulls the content from a database and does whatever transforms you need and still serves up HTML.

Maybe we need something like XSLT for JSON? /s

> But please, please, don't require Javascript to simply load an article and don't make a simple article 5MB. Why on Earth would you do that?

No Javascript => no tracking/ads => force users to enable JS and (sometimes) disable their adblocker. It's all about money, never about technology.

You can track without JavaScript, but you can't just include a 3rd party service, you actually have to do the work.

Same thing for ads, JavaScript isn't required (especially for text ads, although few networks will let you do server side ad requests)

No, you can still do a 3rd party service without JavaScript. GA expressly supports it.

Tracking can absolutely be done without JS, although the granularity of the data is bigger; these have been around ever since the very beginning of the Web: https://en.wikipedia.org/wiki/Web_beacon
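For example, a beacon is just an image whose URL carries the data; the server logs the request and no client-side script runs at all. The endpoint name here is made up:

```javascript
// Build a 1x1 tracking-pixel tag. The server-side log of this GET request
// is the tracking event; the client executes zero JavaScript.
function beaconTag(endpoint, params) {
  const qs = Object.entries(params)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join('&');
  return `<img src="${endpoint}?${qs}" width="1" height="1" alt="">`;
}

console.log(beaconTag('https://example.com/pixel.gif', { page: '/article/42' }));
```

The granularity is coarser than script-based tracking (no scroll depth, no mouse movement), but page views, referrers, and rough user agents all come along for free in the request itself.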

Not necessarily - a lot of the time it's just laziness and incompetence.

I've noticed quite a few simple article websites that have something like a class on the body that sets display: none - and then that class is removed only after all the JavaScript has loaded. It's very obnoxious.

There are also websites that show you a completely blank page if you have cookies disabled! That's amazingly terrible.

Probably to avoid the flash of unstyled content, but yeah, that could probably be handled better.

If you've found NoScript handy, try uMatrix.

It's not for the faint-of-heart, but if you know what CSS, cookies, JS, iframes, XHR, and Internet domains are, it'll make sense.

What you get is a matrix -- capabilities across the top, domains (and sites) down the side. Green is enabled, red disabled. You can save values.

Enable what's needed to get a site working, know just what it is you're nuking.

Much goodness to that feel.

uMatrix is a nice blocking tool, but it doesn't replace everything that NoScript does.

In my experience it replaces everything NoScript does (blocking scripts by site) and then some.

Or am I mistaken?

If so, could you clarify as to how?

Indeed. I have used Adblock and Noscript for years.

My browsing the web is not an invitation for websites to serve a webpage-viewing webapp so that they can (poorly, buggily, in a more error-prone manner) reimplement a browser's navigation logic. (Ever had to reload a website which uses the pushState API because you clicked a link and, for whatever reason, the XMLHttpRequest it made to fetch the page didn't work, and it just hung and ignored all future link clicks? Dear chickenshit webdevs, if you think you can implement navigation better than an actual web browser, you're probably wrong.)

The vast majority of the time when I come to an article which is a blank page without JavaScript, I don't enable JavaScript; I just make a mental note that the web developers are beyond incompetence and move on.

I'm starting to respond to this trend with a more aggressive refusenik approach. For example, CSS is now so powerful that you can cause excessive CPU load with it alone. So I now have a shortcut configured to disable CSS for a site. This also makes many sites readable which otherwise wouldn't be, because they're doing something insane like blanking out content with CSS under the expectation that it'll be shown using JavaScript. And of course all of these recent 'ad-blocker-blockers' (http://youtu.be/Iw3G80bplTg) seem to rely on JavaScript.
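Such a per-site CSS-disable shortcut can be as small as a bookmarklet; a minimal sketch (the toggle logic is factored into a plain function, and `document.styleSheets` with its standard `disabled` property is the only browser API assumed):

```javascript
// Sketch of a CSS-toggle shortcut: flip the disabled flag on every
// stylesheet in a list. In a browser, pass document.styleSheets.
function toggleSheets(sheets) {
  for (const sheet of sheets) {
    sheet.disabled = !sheet.disabled; // standard CSSOM property
  }
  return sheets;
}

// Bookmarklet form of the same idea:
// javascript:(()=>{for(const s of document.styleSheets)s.disabled=!s.disabled})()
```

Running it once strips all author styles (making display:none-blanked content visible); running it again restores them.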

Sometimes the content is loaded via JavaScript and so this won't work. Amazingly, for some years now there has been a Blogger template which does this, which demonstrates that this brain-damaged approach has spread even to Google. But the greatest irony is that you can work around these sites, quite often, using the Google cached copy. Googlebot has supported JavaScript for some time (actually sort of unfortunate, in the sense that it removes an incentive for webdevs to design sites sanely), and it appears that cached copies are now some sort of DOM dump. Which has the hilarious consequence that you can now use Google to fix broken Google web design. There are *.blogspot.com sites which are blank pages, but the cached version is readable.

My own weblog is very spartan, being rather of the motherfuckingwebsite.com school of non-design. bettermotherfuckingwebsite.com was linked below, but I don't think I agree with it. Ultimately, in terms of the original vision of the hypertext web, I'm not sure web designers should be dictating the rendering of their websites at all; that is, I'm not sure web designers should exist.

So basically, imagine surfing the web with CSS disabled, but for your own user styles, that format absolutely all content the way you like it. Your own personal typographic practices are unilaterally adopted. bettermotherfuckingwebsite.com might be right as regards typographic excellence, but it's wrong about where those decisions should be made.

Unfortunately it's undeniable that this is a lost battle. Browsers used to let you choose default background, foreground and link colours, as well as fonts, font sizes, etc. I think you can still choose default fonts. But the idea of the web browser as providing a user-controlled rendering of semantic, designless text is long abandoned. That ship died with XHTML2 - I think I'm about the only person who mourned its cancellation.

Well written, hlandau. People like you who really CARE about this stuff are why I come here.

Off topic: Thanks for the video link; the source of it, The Big Hit, looks to be something I might rent and watch today, given that the clip and another from the movie were rather funny.

A lot of times, that's just more effort than I want to deal with. I have Privacy Badger on a PC or two, and have reported things broken... probably a dozen times because I forgot I was on a computer with it enabled.

We shouldn't try to fix badly bloated websites. We should RIDICULE badly bloated websites, and take our business elsewhere.

The problem with NoScript is that when you use a browser that doesn't have it installed, the internet seems so much noisier. :)

> Fuck websites with autoplay (or autoplay-on-hover) videos.

It's not just marketing. My personal favorite is how Twitter and (shudder) Facebook are doing this on their timelines now. At least they have the decency to mute the volume, but that doesn't help with data caps.

Both services offer to disable the auto-play, but unfortunately only globally, not per device or per network environment.

Hell, that'd be a nice thing to have in HTML5 and OSes: allow the user to classify networks as "unrestricted" (fat DSL line, fibre,...) or "restricted" (mobile hotspots, tethering, metered hotspots), and expose this to websites so they can dynamically scale.

Android already supports this, but no other platform. A shame.
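On the web side, something close to this exists as the Network Information API (`navigator.connection`, a draft spec implemented mainly in Chromium/Android browsers). A hedged sketch, with the classification kept as a plain function so the policy is separate from the browser API:

```javascript
// Sketch: classify a connection as 'restricted' / 'unrestricted' using the
// shape of navigator.connection from the (draft) Network Information API:
//   { effectiveType: 'slow-2g'|'2g'|'3g'|'4g', saveData: boolean }
function classifyConnection(conn) {
  if (!conn) return 'unknown';            // API absent (most non-Chromium browsers)
  if (conn.saveData) return 'restricted'; // user explicitly asked to save data
  if (conn.effectiveType === '4g') return 'unrestricted';
  return 'restricted';                    // treat 2g/3g estimates as metered
}

// In a browser you would call classifyConnection(navigator.connection)
// and, e.g., skip video autoplay when the result is 'restricted'.
```

The spec only exposes speed estimates and the save-data flag, not the user's "this hotspot is metered" intent the comment asks for, so this is an approximation.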

> Both services offer to disable the auto-play

Twitter specifically won't disable autoplay for everything:

The option text reads:

Videos will automatically play in timelines across the Twitter website. Regardless of your video autoplay setting, video, GIFs and Vines will always autoplay in Moments.

I removed Moments from the UI with Stylish the same day they launched it, and image/video previews entirely in the main timeline. The selectors you want to hide are ".expanding-stream-item:not(.open) .js-media-container", ".expanding-stream-item:not(.open) .js-adaptive-media-container", and ".moments".
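The same thing can be done as a small userscript that injects an equivalent style rule; a sketch (it assumes Twitter's 2016-era class names from the selectors above are still in place):

```javascript
// Selectors for Twitter's Moments and inline media previews
// (assumption: Twitter's 2016-era markup; these class names may change).
const hiddenSelectors = [
  '.expanding-stream-item:not(.open) .js-media-container',
  '.expanding-stream-item:not(.open) .js-adaptive-media-container',
  '.moments',
];

// Build one CSS rule hiding every selector in the list.
function buildUserStyle(selectors) {
  return selectors.join(',\n') + ' { display: none !important; }';
}

// In a userscript (Tampermonkey etc.) you would inject it:
//   const style = document.createElement('style');
//   style.textContent = buildUserStyle(hiddenSelectors);
//   document.head.appendChild(style);
```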

The Facebook app for iOS lets you choose whether to auto play on wifi+mobile data, just wifi, or not at all. One app, but it's a start.

Could you explain some more? I'm looking for these settings on my iPhone 6S, but can't seem to locate them. Thanks.

Sorry, should've included that.

More > settings > account settings > videos and photos > autoplay.

Nice feature, but that placement leaves a lot to be desired.

Windows 8.1+ does at the OS level, I don't think it's exposed in the browser though.


PAYPAL frikkin has full-screen video autoplaying on page load... of course one can go directly to .com/login but it's mind-boggling that a financial services provider for the masses thinks that this is acceptable.

I bet they did a study of 100 random users, and got more engagement with the videos. Just because we hate them doesn't mean our moms and grandpas hate them.

But, I thought noscript breaks the web? What it seems like is web-devs are breaking the web. Somehow, they have turned browsing the web into a worse situation than watching cable television.

Even in 1st world countries, people use 3G/4G with data caps to work...

_No_ mobile browser supports autoplay on videos in webpages. There used to be a hack on Android but it was closed in 5.1 (maybe earlier). Another reason to avoid native apps.

That sentence wasn't limited to mobile. People (I for example) use 3G/4G with laptops too. I don't have a data cap, but in other countries those are more common.

This is a pretty good extension, that will keep javascript going, while turning off autoplay (mostly).

Of course doesn't work with Flash videos, but you should disable those anyway.


I find Flashcontrol good https://chrome.google.com/webstore/detail/flashcontrol/mfidm...

Kills almost all the annoying video stuff and one click when you do want to see something. Works on CNN etc.

(also if fixed horizontal junk on top of the page annoys you check out my own (5 LOC or so) thing to toggle them https://chrome.google.com/webstore/detail/zapfixed/jgiflpbko...)

Easy actually: because a well-defined restricted subset of HTML can be machine-audited and there is no way to abuse it. Also, Google can save resources at indexing.

But AMP isn't a subset of HTML, is it? They've replaced a bunch of HTML tags with their own amp-prefixed variants… IMHO it reeks of vendor lock-in, and could just as well have been made a proper HTML-subset.

Fuck CNN. Fuck them.


A surprising number of videos are served through a small number of video service provider sites (there's a different acronym for this, I can never remember it).

A half-dozen or dozen entries in your /etc/hosts file will block them quite effectively. I've posted this to HN in past.


(Including CNN. Fuck'em indeed.)

I've found Google's Pagespeed Insights[0] to be a great resource for keeping obese sites in check. Their site will give you stats on load time as well as instructions on how to optimize that time, which is very important especially on a user's first visit -- e.g., I will bounce from a new site in X seconds, but may give an established site (think Amazon) X + Y seconds to load. It's easy to miss how important page load is when you've been working on a site a bunch, but every extra second of load time has been shown to impact sites' bottom lines [1].

0 - https://developers.google.com/speed/pagespeed/insights/ 1 - http://www.fastcompany.com/1825005/how-one-second-could-cost...

I have been using slow internet more often lately, because I'm traveling full-time again, and only using 3G/4G broadband. It is remarkable how large some sites have gotten since I was last in this situation. Despite mobile broadband being somewhat faster now than a few years ago, the time to load (to usability) for many sites is much higher. HN is notable for loading instantly in these circumstances, not because the server (I'm guessing it still runs on one server, plus CloudFlare, but I might be guessing wrong) is blazing fast, but because it is so small.

I'm traveling as well. I've completely given up on some tech blogs. Because going to them is costing me too much money, i.e. The Verge.

I travelled through Spain where the largest data plan is 2GB for €20. Top ups were 100MB/€ and you could only top up 200MB at a time.

I'm currently in the Caribbean. Data roaming because some providers don't even offer data unless you are on a postpaid contract.

This is why I use opera mini on mobile. It really speeds up browsing.

I haven't tried Opera in many years. Will have to give it another look. But, I'm mostly not browsing on a mobile device...I'm using my laptop through a mobile broadband hotspot.

I think the primary reason for the rise of increasingly heavy sites is that animations and visuals can be used to attract and "hook" your reader.

Just as fish like shiny spoons and minnow lookalikes and monkeys like shiny objects, humans like pretty pictures and flashing visualizations.

Distraction is the same principle that drives the success of TV. It is so damn easy to just sit in front of the screen and grok out, never mind the fact that the signal to noise ratio is often astonishingly low.

Quality thought and challenging content consumption is much harder than simply letting yourself admire shiny visuals. Therefore, simple websites, while they may contain excellent and meaningful content, will often not stimulate the user's interest as much as animated websites with large pictures.

CSS animations can be done in only a couple of lines, so I wouldn't consider them to be the primary driver, or any significant factor, behind bloat.

But CSS animations are...rough currently, not standardized, and don't have about a decade's worth of knowledge behind them. I can sure make an element fade in on a webpage with CSS, but there are already a billion ways to do it in jQuery on StackOverflow, so it's a lot more appealing to use Javascript at first.

I did not mention "css animations" in my post. Animations in general are certainly a common cause of website bloat, as they are often effected with a mix of images, javascript, and css which combine to waste bandwidth and cpu cycles. The article lists numerous sites for which animations contribute strongly to large website size. One such site listed in the article is this one: http://pollenlondon.com/ .

My only concern about this is that a lot of the technologies identified as "adtech" on that diagram just... aren't:

* Vimeo

* Hootsuite

* LinkedIn

* UserTesting.com

Marketing != advertising. The overall point is really valid, but this is a dumb way to back it up.

It's bad enough to just take the ad-serving parts of the diagram he uses, which add up to hundreds of technologies (or use Ghostery on any news site).

I'm not with you on linkedin, that's not marketing, that's malware.

The whole adtech part is bad, the rest of the article had valid points but this author doesn't understand how advertising works and is theorizing about this bubble and the tech.

It's way easier to acquire stuff than it is to get rid of it, whether it's website bloat or personal possessions. It takes discipline to adhere to procedures that trim unneeded code / dependencies as they lose relevance. If you don't do it while the reason the code was put in in the first place is fresh in mind, there will be a natural tendency to kick the can down the road.

Not to mention, when they load up on all the new stuff, they try it out on fairly modern machines and it loads well enough, with a load time that's just good enough, so there's even less motivation to trim down.

I also would love to have leaner websites and less bloat, but I recognize that good enough still passes for good enough. It's only when they go past a tipping point that people pay attention, such as when iMore got called out by John Gruber.

Is it hypocritical that this website is over 1 MB and has over 100 requests? http://www.webpagetest.org/result/160101_KN_FH3/

No, it's not hypocritical: that 1 MB and those 100 requests are the content of this article, i.e. useful payload, as opposed to the useless, irrelevant junk on most websites these days.

Arguably, many of the "slide" images aren't directly relevant to the content, they're just there because you're expected to have images in a tech conference talk.

In fairness, that page is as clean as it gets, given that it uses tables for layout and it loaded really quickly on my resource constrained laptop. I thought it was an excellent translation of content from a slideshow presentation to the medium of the web. (From the mistakes in the source markup, I presume the HTML was manually edited).

I particularly enjoyed this self-reference:

> Examples!

> Here’s a self-righteous blogger who likes to criticize others for having bloated websites. And yet there's a gratuitous 3 megabyte image at the top of his most recent post.

> http://idlewords.com/2014/08/green_arabia.htm

    bash-4.3$ links -dump http://idlewords.com/talks/website_obesity.htm | wc
       1429    7698   78566
Seems like a better bloat-to-word ratio than many of the example sites the OA mentions (including one of his own blog pages by the way).

The little thumbnails of the slides from the original presentation do make up 990 KB of the roughly 1 MB, but they are quite readable and do add value in my opinion.

+1 thanks for the reference to webpagetest.org! I was looking for something like that. I tested my major sites and they all tested good (0.1 to 0.3 megabytes) except for one that was 3.5 MB because of a banner image that does not add much to the site. I will fix that!

Yeah, webpagetest.org looks like a terrible website at first glance, given the ads and all, but it's actually an amazing tool.

I was having a good time, reading the article with a grin on my face. Until I got to

> On top of it all, a whole mess of surveillance scripts

And I just lost my cool and laughed out loud. Well written, sir.

Yeah, me too ;)

But he also says this:

> I bet if you went to a client and presented a 200 kilobyte site template, you’d be fired. Even if it looked great and somehow included all the tracking and ads and social media crap they insisted on putting in. It’s just so far out of the realm of the imaginable at this point.

If that's possible, I'm getting that bloat stems from sloppy implementation of all sorts. Fonts. Ads. Tracking. All of it. I suspect that the copy-and-paste approach accounts for a lot of it. And using third-party resources, such as Disqus for comments.

> And using third-party resources, such as Disqus for comments.

I actually kinda like disqus. Centralized notifications for replies and no more signing up in order to post a comment (for the users), and as a site op I don't have to deal with spam, people trying to XSS my comments and especially: I can statically cache ALL the content and even run without any database at all!

OK, so I shouldn't have included Disqus under sloppy ;) You do get secure managed comments. But isn't there a bloat cost, too? There are also security and privacy risks in using third parties.

Yep. Disqus is one of the more understandable shifts-to-bloat which developers have made in recent years. Understandable because spammers have made it so difficult to run our own comments. The selfish anti-social behaviour of this small minority of "SEO experts" ends up forcing everyone into tech choices we should otherwise be avoiding.

spot on. however, network trust issues are behind so many reasons why we can't have nice stuff... naive initial approaches to network-sharing of resources + lots of papering over = 20+ years of broken web

Disqus often shows a spinning wheel forever on mobile devices, with no way to reload except reloading the whole page, waiting, scrolling down, and crossing your fingers that Disqus may display the comments this time, only to find out: nope. It seems Disqus peaked, and their product quality tanked. I cringe when I see embedded Facebook comments too, though they appear less and less - nevertheless FB comments always worked on all devices.

Nowadays many websites don't offer comment sections at all. Probably because it was too much effort on their side to clean up the spam, etc. - a sad development. Often a captcha would prevent most spam and idiots from posting shit.

I lost it at:

> Out of an abundance of love for the mobile web, Google has volunteered to run the infrastructure, especially the user tracking parts of it.

Most of the weight comes from images. The average website (Alexa top 1M) contains over 1.4 MB of image data:


Be sure to pick the most suitable format and to optimize your images. You can also try to serve WebP to browsers which support it. When it replaces JPEG, you save about 30%. With PNG8, it's somewhere between 5 and 50%. And with PNG32, if you substitute it with a lossy WebP, easily 80%.

Scripts come 2nd with ~363 KB. ES6 modules will thankfully help with that. Creating the structure of your application declaratively enables better tooling. Not only does this make your editor a lot smarter, it also paves the way for tree-shaking.

If you tree-shake your application, only the actually used library features will be included in the shipped bundle.
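A minimal sketch of why module syntax enables this (the `export` line is shown as a comment so the sketch also runs as a plain script; the helper names are made up for illustration):

```javascript
// A tiny hypothetical utility module with two exports.
// Because ES module imports/exports are static, a bundler with
// tree-shaking (e.g. Rollup, webpack 2+) can see that consumers
// only ever do `import { clamp } from './util.js'` and drop `lerp`
// from the shipped bundle entirely.
function clamp(x, lo, hi) {
  return Math.min(hi, Math.max(lo, x));
}

function lerp(a, b, t) { // never imported anywhere -> dead code, eliminated
  return a + (b - a) * t;
}

// export { clamp, lerp };
```

With CommonJS `require()`, by contrast, what gets used can depend on runtime values, so the bundler has to keep everything.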

Disclaimer: my own blog. If this is considered spamming here, feel free to remove


It won't be long before my site which literally downloads an entire Windows 95 disk image on every page load will be considered average-sized. It's a mere order of magnitude away.

Interesting analogy. I grew up using Commodore computers, but I'm not trying to race to the bottom of computing minima. I remember when (we) engineers got REAL WORK DONE on x86 PCs with 10 MB HDDs, running AutoCAD, Lotus 123, and Word Perfect. This was inarguably the beginning of the era of the useful, general-purpose computer. You could load a computer with DOS 6.22, Windows 3.11, several of these programs, and still have room left over for Doom. The author references Apple's iPad page at 51 MB. The last time I rebuilt my gaming PC, the MOUSE DRIVER was 150 MB! It makes me weep for where we are. I don't see that things have really changed all that much in 25 years, as far as getting "real work" done. It's just... more for the sake of "more." (The games are cool, though.) In my opinion, all the bells and whistles really haven't added anything to the web browsing experience for about 15 years.

It's not just an analogy, it's a real-world example!


The disk image is 47MB (when gzipped). This means that the page is actually smaller than Apple's iPad page!

I likewise weep for modern computing.

So I keep telling clients: using WP for your low-traffic site is like renting an 18-wheeler to move a small box. Yet I feel like doctors must feel when patients argue, 'quoting' something they read on the Internet: suddenly they are the experts now :(

It depends on your point of view. The server load caused by WP is much larger than most alternatives', yes. But the development (and ops) time required to ship a WP site is much lower than any alternative I know of.

Yet WP does come by default with a lot of the cruft problems mentioned here: lots of inefficient JS and CSS added over the years.
