The Unluckiest Paragraphs – Why Parts of Medium Sometimes Disappear (medium.com/medium-eng)
23 points by don_neufeld on Dec 5, 2015 | 15 comments


They really needed to spend a little more time justifying what sound like bizarrely strange technical decisions, rather than just baldly stating them. There are no completely stupid technical decisions, but if you're doing something really weird and roundabout, you should at the very least give us a hint as to why you're doing it.

  On most webpages, if you click a link, the browser automatically handles 
  loading a new page. On Medium, we speed this up a bit with JavaScript. We send 
  a request to “https://medium.com/@ritasustelo/advertising-is-not-for-geniuses-
  5d1ffbc505ac?format=json,” download the article text, and render it in your 
  browser.
What the hell is wrong with just serving flat html files, rather than making the browser parse a JSON blob?

  All posts are represented as a list of paragraphs. We give 
  each paragraph a unique name. The code to generate names is 
  a one-liner
Well, why do you do that? Does doing this make any sense at all?


Just seems like a standard single page app architecture which most web developers seem to be moving to these days (Angular/Ember/React/Meteor/etc.): https://en.wikipedia.org/wiki/Single-page_application
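
For what it's worth, the mechanism the article describes boils down to something like the sketch below. This is not Medium's actual code; in particular, the shape of the JSON payload (a `paragraphs` array of `{name, text}` objects used as element ids) is my assumption, since the thread only shows the `?format=json` URL:

  // Sketch of SPA-style navigation: fetch the post as JSON and render it
  // client-side instead of letting the browser load a whole new page.
  interface Paragraph { name: string; text: string }          // assumed shape
  interface Post { title: string; paragraphs: Paragraph[] }   // assumed shape

  async function navigate(postUrl: string): Promise<void> {
    // Medium appends "?format=json" to the post URL (per the quoted passage).
    const res = await fetch(postUrl + "?format=json");
    const post: Post = await res.json();

    const main = document.querySelector("main")!;
    main.innerHTML = "";                        // clear the current article
    for (const p of post.paragraphs) {
      const el = document.createElement("p");
      el.id = p.name;                           // assumed: the unique name becomes the element id
      el.textContent = p.text;
      main.appendChild(el);
    }
    history.pushState({}, "", postUrl);         // update the address bar without a reload
  }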


It's a blog. The page text for a given url is always going to be the same! Why introduce all this overhead, and needlessly complicate something simple?


Well, it seems your mind is already made up, but there are a few reasons why this architecture is gaining in popularity:

- Faster loading time. For most users, the bottleneck is the network, and a JSON payload is typically a lot smaller than a fully rendered HTML page. This reduces the overhead (unless your users have crap CPUs and fast network connections, which is not the norm).

- Better separation of concerns. Your server becomes a dumb API and doesn't need to concern itself with presentation. Your UI can live in an entirely different repository and it doesn't matter which framework or language your backend is written in. Front-end developers only need to know about browser technologies (HTML/CSS/JavaScript) to build the UI. This reduces complexity instead of "needlessly" increasing it.

- Thanks to server-side JavaScript, it's relatively easy to have a fallback mechanism in place to do traditional server-side rendering for web crawlers or browsers with JavaScript disabled (rough sketch below).
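
To illustrate that last point, here's a rough sketch of one route serving both the JSON the client-side app wants and plain HTML for crawlers or no-JS browsers. The `?format=json` convention is borrowed from the quoted article; everything else (the data shape, the port, the markup) is assumed for illustration:

  import http from "node:http";

  // Assumed data source; in reality this would come from a database or API.
  const post = {
    title: "Advertising Is Not for Geniuses",
    paragraphs: [{ name: "abc123", text: "First paragraph..." }],
  };

  http.createServer((req, res) => {
    const url = new URL(req.url ?? "/", "http://localhost");
    if (url.searchParams.get("format") === "json") {
      // The SPA asks for raw data and renders it itself.
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(post));
    } else {
      // Crawlers and no-JS browsers get traditional server-rendered HTML.
      const body = post.paragraphs
        .map((p) => `<p id="${p.name}">${p.text}</p>`)
        .join("");
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(`<html><body><h1>${post.title}</h1>${body}</body></html>`);
    }
  }).listen(3000);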


Of course, in theory, web pages are supposed to have this separation of concerns, which allows for faster loading, and of course server side rendering of web pages is very, very well supported.


No, none of this strikes me as bizarre at all.

This is how things are going to be, now. Ad blockers will make rules, publishers will flout them, some normal URLs will be casualties.

...or do you really think them peculiar for loading SVG icons?


> you should at the very least give us a hint as to why you're doing that

Well, they did. In the very thing you quoted:

> On Medium, we speed this up a bit with JavaScript. [...] format=json," download the article text

I.e., they send what actually changed, probably highly compressed and cached, rather than an entire new webpage. That's not super uncommon, and it's hardly weird or roundabout. We can discuss whether they've correctly identified the bottleneck (although, spoiler, they have), whether it actually improves things for their users in real-world conditions (probably), or whether the technical overhead is worth it for something like Medium (debatable, but apparently they have the funding...), but it's not "bizarrely strange". It's a well-known technique, and while it's maybe more common (and more sensible) on a webmail app than on a weblog, it's not black magic.



I think that's a more legitimate use case than this one.


Ah, yes. We've run into this same problem with ad blockers blocking anything that happens to have the word 'ad' or 'advertising' in it. (I run a search engine for finding classified ads across multiple sources and cities, so you can imagine why some of our markup might contain 'ad'.)

Doesn't make a lot of sense given that they have exhaustive lists of actual ad domains to work from, and you've got to think most advertisers will figure this out pretty quickly and stop calling their ads "ad".
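
For context, the unlucky collision presumably looks something like this: paragraph names are random hex strings, hex digits include "a" and "d", so every so often a name starts with "ad" and matches a substring-based cosmetic filter. A quick sketch (the name length and the example filter are my assumptions, not from the article):

  // Roughly how often does a random hex name start with "ad"?
  // Two hex digits -> 256 possibilities, so about 1 in 256 paragraphs.
  function randomHexName(length = 4): string {
    let s = "";
    for (let i = 0; i < length; i++) {
      s += Math.floor(Math.random() * 16).toString(16);
    }
    return s;
  }

  let hits = 0;
  const trials = 100_000;
  for (let i = 0; i < trials; i++) {
    if (randomHexName().startsWith("ad")) hits++;  // e.g. a rule like div[id^="ad"]
  }
  console.log(`${hits} of ${trials} names start with "ad"`);  // ~390, i.e. ~1 in 256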


It's an arms race, and the users are the losers. Either they disable the ad blockers, and their browsing experience is filled with so much junk they can't find the content within it, or they enable the ad blockers and lose content.


> A web ecosystem with ad blockers is more complicated for authors to run and maintain. A more complicated web favors big, centralized players and disfavors small, independent ones.

I'm not sure if this is the case. I concede that a team that owns both the blog and the tool has an advantage but many people use standard tools like WordPress or Jekyll. The teams behind those tools can solve those problems too. I believe there is some friction but hopefully it's not so bad.


Lately it's a common refrain when new startup founders pop up on entrepreneur chats looking for feedback on their MVPs: if issues are found, they're invariably caused by ad blockers hiding elements whose CSS classes or ids match filter rules.


ok, time to say it: "eatbeef" is not hex. does he not know that, or is he not a programmer?
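
(For anyone checking at home: hex digits are 0-9 and a-f, so the "t" gives it away.)

  // Quick check that "eatbeef" is not a valid hex string.
  /^[0-9a-f]+$/i.test("eatbeef");   // false: "t" is not a hex digit
  /^[0-9a-f]+$/i.test("deadbeef");  // true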


Very cool bug!



