The Return of the 90s Web (mxb.dev)
786 points by mxbck on June 18, 2020 | 338 comments



What I miss most from the early days of the Internet is the content. It was all created with love.

My theory is that the high barrier to entry of online publishing kept all but the most determined people from creating content. As a result, the little content that was out there was usually good.

With today's monetized blogs, it is often content for content's sake. People don't try, or they write about topics they aren't really interested in, just to have a new post. Or the writing is simply bad.

Maybe today's problem isn't the blogs, but the SEO that puts the crap blogs at the top of the search results. Or maybe I'm misremembering and the old content was crap too, or maybe my standards are higher than they were in my teenage years.


People are still creating great stuff along these lines - you just won't find it through Google or Facebook or most of Reddit. Complex, interesting hypertext creations and web sites are still everywhere. But try typing "interesting hypertext" into Google or Facebook and see where it gets you. You can't search for something that's off the beaten track.

This is where directories come back in. Check some of these out:

* https://marijnflorence.neocities.org/linkroll/

* https://neonaut.neocities.org/directory/

* https://webring.xxiivv.com/ (which led me to this gem: https://dreamwiki.sixey.es/)

Competing with Google in search has become an insurmountable task. Personal directories attack from the opposite direction (human curation, no algorithm) in a way that actually puts Google far behind. It's kind of exciting and unexpected.


What we really need is a new Google, built on open principles (decentralized / peer to peer, fully free software, backed by a nonprofit), and focused on indexing the long tail of insightful content that is neglected by Google because it lacks SEO, popularity, links, and other metrics that Google finds interesting but we don't necessarily care about.


For a long time I assumed that Google indexed pretty much everything, and that it was only a question of providing a specific enough set of search terms to drag up older content.

But what you hint at might be more correct these days. They are running a reverse Wayback Machine, in that anything not changed in the last year gets removed. If you click through to the advanced search, its "updated within" option has a maximum timeframe of a year.

In fact it seems the date range example doesn't even work: https://developers.google.com/custom-search/docs/structured_...

If I fiddle with it, it returns a result, but I see a hit from just a few days ago at the top...


> They are running a reverse wayback machine in that anything not changed in the last year gets removed.

Sometimes I wish that were true! Try Googling for, say, PostgreSQL documentation and the top result will often be for a 10-year-old version of the software.


Nitpick: that's kind of a non-example, because the official Postgres docs let you swap versions more or less seamlessly. IME I will click the top result then click the correct version for my Postgres SE queries. 2 clicks, 0 scrolling.


You're right. I tend to click back, then revise my search to include the version, and then click the result, so Google gets the message that when I search for Postgres docs, I want the most recent version. I have no idea if this actually works, but I heard Google uses bounces to determine relevancy, so I thought it was worth a shot.


Might as well piss in the wind. The number of back-links from different sources probably has a much larger effect.


Sure, but 1 click and 0 scrolling would be better!


Try the "I'm Feeling Lucky" button!


Before the omnibar became popular I used to use google as my homepage and I'd type website keywords then "I'm feeling lucky" to get to my frequent websites. I think bookmarks took up too much space vertically so this was my solution /shrug


Well, it might be easy to switch, but it is a good example.

Why does Google think the older page is more relevant? Does PageRank outrank content (and is Google oblivious to similar pages that have different versions)?


One would expect that with their resources they could figure out for which topics recency matters and for which it doesn't.


that's why I always filter by last year


> For a long time I assumed that Google indexed pretty much everything,

They did that for a long time, but some years ago the index grew so big that they started restricting it. I think the general timeframe is 10 years or less since the last update.

> If you click through to the advanced search, its "updated within" option has a maximum timeframe of a year.

Because it makes no sense to go further. For older content you can define individual date ranges. And yes, it works fine for me. I tested a search for 2015 just now, and the first page had entries all from 2015.

> In fact it seems the date range example doesn't even work: https://developers.google.com/custom-search/docs/structured_....

None of those examples work. Wasn't Custom Search retired some years ago?


This is really interesting, I never thought of it this way. Thanks for sharing.


Any new google will become the current google because it will be gamed. I agree with the parent poster about hand-curated directories being the key to finding better stuff. It's necessarily much more distributed - because one person can't index everything - and therefore much harder to algorithmically game.

Google has conditioned us into thinking that "an algorithm that automagically separates the wheat from the chaff" is the only way to do things. It worked for them for a while, but the adversarial forces of marketing, spam, malware, etc. are very creative and fast-moving, and that's a lot for an algorithm to constantly reckon with, so at best it'll probably stay a stalemate.

But that's not the only way things can be.


So we have hand-curated directories, and we have social networks. Put them together and you could have a scalable, crawlable, searchable, customizable and non-gamable decentralized indexing system. I just don't know why nobody has come up with this yet. Maybe lack of (monetary) incentives? Maybe something for the IETF or W3C to initiate?


Just for fun, imagine an alternate reality where nobody cracked the problem of an effective automated indexing algorithm. Hand-curated directories emerge, some structured by taxonomic categorization, others using keyword tagging, still others reliant on user-based rankings. The most successful grow in popularity and consolidate, becoming highly lucrative properties with recognizable brand names. As they grow to gargantuan size, research in the field explodes and innovators race to come up with improved methods to connect people to content they seek. Sergey Dewey and Larry Linnaeus make a breakthrough they call SageRank. Instead of computer code, it leverages "social algorithms" and game theory to incentivize participant behavior. Vladmir Bezos sinks billions into a clandestine effort to game the system. Once the story breaks, public backlash rallies into a worldwide, anti-"fake links" campaign. In one little corner of the internet, some schmuck says, "Imagine an alternate reality where two nerds came up with an impartial computer program to crawl the whole web..."


Social networks are also gamed.

Since Twitter is their preferred platform, go put the activity of journalist Twitter accounts into a relational DB and start searching for who always boosts who. You'll find patterns. Of course there's nothing inherently wrong with this, but at the end of the day I don't need to know what a dozen NY Times journos think of a NY Times oped which is clearly written in bad faith, pushing a false narrative about a particular news event.
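
Concretely, something like this toy sketch is all it takes once you have the activity in a table (the schema and names here are made up; how you collect the data is its own problem):

    # Count how often each ordered (booster, boosted) pair appears;
    # pairs that show up constantly are the patterns I'm talking about.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE boosts (booster TEXT, boosted TEXT)")
    con.executemany("INSERT INTO boosts VALUES (?, ?)", [
        ("journo_a", "journo_b"), ("journo_a", "journo_b"), ("journo_c", "journo_b"),
    ])

    for row in con.execute("""
        SELECT booster, boosted, COUNT(*) AS n
        FROM boosts
        GROUP BY booster, boosted
        ORDER BY n DESC
    """):
        print(row)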

Non-gameable ultimately means people who influence the results can have no monetary interest in the results.

example:

Twitter thread...

https://twitter.com/jiatolentino/status/1263208982614814722

...vs reality...

https://wearyourvoicemag.com/jia-tolentino-parents-teachers-...


This is alive and doing well: https://wiby.me/


I tried to search for irish setter but none of the first results were related to irish setter dogs.


On the third page of search results for 'irish setter' I found this: http://blackpeopleloveus.com/

That search engine is like a gold mine! I searched for "black people love us" on DDG and it was the first result, followed by an article written this year explaining how it came about. The web felt like such a smaller place back in 2002, and I just don't remember this at all.

Which makes me wonder how newspapers and free-to-air TV kept the culture pretty shallow before it exploded with the internet - and whether it's now contracting again as our filter bubbles shrink? Just an errant thought.

But so much novelty and interesting stuff at wiby.me - search for 'trump' and the first result is just surreal.


Social networks are becoming walled gardens, so it's kind of tough to crawl them. I mean, look at Facebook and Instagram, the biggest social networks. They have robots.txt configured to disallow any crawlers.

Facebook also used to have RSS feeds for public pages and posts. Now they've not only removed that feature but also put heavy restrictions on third-party apps.


Facebook and the like made the decision to optimise for distraction instead of connection.

Services connected to the Fediverse, an alternative framework that focuses on connectivity, are slowly growing. It's only a matter of time before they are more successful than the walled gardens.


That’s exactly what this guy is doing with Iris. It’s early days but definitely worth checking out.

https://www.hackernoon.com/what-is-wrong-with-the-internet-a...


I was thinking the same thing. Search engines for the whole web are over. The concept doesn't stand up to spam of various kinds.


Agree on the long tail. And maybe a way to exclude bigger sites, and newer results (the time-range search doesn't really work anymore).

I had a search today, and 7 of the top 10 results were from today. What I was looking for was NOT news, it was historical. If I wanted news, I would click the news tab. Having 7/10ths of the results come from today makes using Google to search the whole history of the web nearly useless, as today's noise is noisier than ever.

I don't even care if they are defaults, but buttons to "exclude big sites", "exclude the news" or "exclude fresh results" would make search so much better.



An equivalent which provides sources older than the past hour, day, week, month, year, decade, century, or millennium would be an opportunity here.


Google has that, it's just stopped working. You'll end up with articles from 1997 that say they updated 4h ago. And the amount of seo garbage that gets through has polluted the remaining results.

What would be really cool is if Google could show what the results for a search looked like on a given day. Not the current algorithm, not any sites indexed since then, but what it looked like at the time. Going back to use 2010 Google would be a dream.


Part of me wonders whether this is technically possible. I’m sure it’s an answerable question.


There would be at least two ways to do it, neither of which is likely realistic at Google scale. One would be to cache results, so that any search made before could be retrieved. This is limited in that you can't make new searches on old data, and it possibly stores private information that shouldn't be accessible to others.

The other way would be to have the index and algorithm versioned, where you can target any instance of the algorithm against any version of the data.
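
A toy sketch of that second approach, just to make the idea concrete (everything here is invented; a real system would store snapshots on disk, not in dicts):

    # Keep index snapshots and ranking functions versioned, so any
    # (index version, algorithm version) pair can be replayed later.
    index_versions = {
        "2010-06": {"webring": ["homepage-a.example", "homepage-b.example"]},
        "2020-06": {"webring": ["seo-farm.example", "homepage-a.example"]},
    }

    def rank_2010(urls):   # stand-in for the old algorithm
        return list(urls)

    def rank_2020(urls):   # stand-in for a newer algorithm
        return sorted(urls)

    algorithm_versions = {"2010": rank_2010, "2020": rank_2020}

    def search(term, index_version, algorithm_version):
        urls = index_versions[index_version].get(term, [])
        return algorithm_versions[algorithm_version](urls)

    # Replay a query exactly as it would have looked back then:
    print(search("webring", "2010-06", "2010"))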


Your second method is more in line with what I had in my mind as what you were getting at, and is pretty much the context of my original reply.

I am sure it’s technically possible going forward, but it would be interesting if such capabilities could be enabled for historical versions of the index and algorithm. Combined with anonymized historical zeitgeist data, some interesting digital archaeology could be attempted.

All the more reason to run your own crawler! What’s the state of the art for this area right now in self hosted solutions? Can you version your index and algorithm like we’re discussing and do these kinds of search-data time-traveling?


If you have an indicator of content on a specific date and can confirm no significant change ...

Though document fingerprinting is hard. Especially w/ fungible page elements.

Internet Archive has an angle here.


> Internet Archive has an angle here.

Are you referring to WARC type tooling or what? I don’t want to put words in your mouth. I’m a complete learner on this topic. I think gwern has written a bit about this broadly? I’m curious to know more about this, if you have time to share more.

https://www.gwern.net/Search

https://www.gwern.net/Archiving-URLs

https://en.wikipedia.org/wiki/Web_ARChive


> What we really need is a new Google, built on open principles (decentralized / peer to peer, fully free software, backed by a nonprofit),

We have them, they suck.

> and focused on indexing the long tail of insightful content that is neglected by Google because it lacks SEO

How would you even define that? SEO is changing all the time. And google is fighting it all the time.

And how would you prevent SEO from targeting that new search engine? If it becomes big enough, people will optimize for it.


I think the goal is to exclude anything that was created with a popular CMS (to exclude some content marketing), or in the top 100k websites.

Then we might go back to something sort of interesting.


That is a good starting point, but also something SEO can very easily game. Obfuscating the usual hints of a popular CMS is not hard.

I wonder whether doing a parallel search on Google and filtering their top results out of your own results would be a feasible solution. Add a filter on the top 500 websites, and on whether known ad sources are used, and you might slowly get there.

Maybe instead of a smart search engine it would be better to focus on a dumb one which also gives access to all the metadata of a page, and allows people to optimize the ranking for themselves. Full text alone is not the only relevant input for good results. Google knows this and uses it, but gives the end user very limited access to it.
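
Something like this rough sketch is what I mean by the filter (the helper lists - Google's top results, a top-500 domain list, known ad sources - are placeholders for whatever data you actually have):

    def filter_results(own_results, google_top, top_500_domains, ad_domains):
        def domain(url):
            # crude domain extraction, for illustration only
            return url.split("/")[2] if "://" in url else url.split("/")[0]

        kept = []
        for url in own_results:
            d = domain(url)
            if url in google_top:        # already easy to find on Google
                continue
            if d in top_500_domains:     # big, heavily-SEOed site
                continue
            if d in ad_domains:          # known to be ad-funded
                continue
            kept.append(url)
        return kept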


I LOVE the idea of a search engine that lowers the rank of results with ads. This might be the key to success in finding projects of love rather than commercial click holes.


Oh. That’s also an idea, but maybe we’d still get tons of ad-less blogs that are just link farms to other sites.

Still, I guess that’s only viable because Google rewards lots of links. If you just disable link relevance that part of gaming the system will be gone too.


How about TF(PageRank)/IDF(PageRank)? As in, start by ranking the individual result URLs with their PageRank for the given query; but then normalize those rankings by the PageRank each result URL's domain/origin has across all known queries (this is the IDF part).

Then, the more distinct queries a given website ranks for (i.e. the more SEO battles it wins/the more generally optimal it is at “playing the game”), the less prominently any individual results from said website would be ranked for any given query.

So big sites that people link to for thousands of different reasons (Wikipedia, say) wouldn’t disappear from the results entirely; but they would rank below some person’s hand-written HTML website they made 100% just to answer your question, which only gets linked to on click-paths originating on sites that contain your exact search terms.

This would incentivize creating pages that are actually about one particular thing; while actively punishing not just SEO lead-gen bullshit; not just keyword-stuffed landing pages we see in most modern corporate sites; but also content centralization in general (i.e. content platforms like Reddit, Github, Wikipedia, etc.) while leaving unaffected actual hosting by these platforms, of the kind that puts individual sites on their own domains (e.g. Github Pages, WordPress.com, Tumblr, specialty Wikis, etc.)
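
A minimal sketch of the normalization, assuming you already have per-query scores for each URL (the data below is invented; a real system would compute it from an index):

    from collections import defaultdict

    # per_query_scores[query][url] = PageRank-style score for that query
    per_query_scores = {
        "irish setter care": {"wikipedia.org/wiki/Irish_Setter": 0.9,
                              "someonesdogpage.neocities.org": 0.6},
        "python gil": {"wikipedia.org/wiki/GIL": 0.8,
                       "hobbyist-blog.example/gil-notes": 0.5},
    }

    def domain(url):
        return url.split("/")[0]

    # The "IDF" part: how many distinct queries does each domain rank for?
    queries_won = defaultdict(int)
    for scores in per_query_scores.values():
        for url in scores:
            queries_won[domain(url)] += 1

    def rerank(query):
        scores = per_query_scores[query]
        # Discount each URL's per-query score by how "generally optimal"
        # its domain is across all queries.
        adjusted = {url: s / queries_won[domain(url)] for url, s in scores.items()}
        return sorted(adjusted.items(), key=lambda kv: kv[1], reverse=True)

    print(rerank("irish setter care"))
    # The single-purpose personal page now outranks the generalist site.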

———

A fun way to think of this is that it’s similar to using a ladder ranking system (usually used for competitive games) to solve the stable-marriage problem on a dating site.

In such a system, you have two considerations:

• you want people to find someone who’s highly compatible with them, i.e. someone who ranks for their query

• you want to optimize for relationship length; and therefore, you want to lower the ranking of matches that, while theoretically compatible, would result in high relationship stress/tension.

Satisfying just the first constraint is pretty simple (and gets you a regular dating site.) To satisfy the second constraint, though, you need some way of computing relationship stress.

One large (and more importantly, “amenable to analysis”) source of relationship stress, comes from matches between highly-sought-after and not-highly-sought-after people, i.e. matches where one partner is “out of the league of” the other partner.

So, going with just that source for now (as fixing just that source of stress would go a long way to making a better dating site), to compute it, you would need some way to 1. globally rank users, and then 2. measure the “distance” between two users in this ranking.

The naive way of globally ranking users is with arbitrary heuristics. (OKCupid actually does this in a weak sense, sharding its users between two buckets/leagues: “very attractive” and “everyone else.”)

But the optimal way of globally ranking users, specifically in the context of a matching problem, is (AFAICT) with IDF(PageRank): a user’s “global rank” can just be the percentage of compatibility-queries that highly rank the given user. This is, strictly speaking, a measure of the user’s “optionality” in the dating pool: the number of potential suitors looking at them, that they can therefore choose between.

If you put the user on a global ladder by this “optionality” ranking; and normalize the returned compatibility-query result ranking by the resulting users’ rankings on this global “optionality” ladder; then you’re basically returning a result set (partially) optimized for stability-of-relationship: compatibility over delta-optionality.

———

All this leads back to a clean metaphor: highly-SEOed websites—or just large knots of Internet centralization—are like famous attractive people. “Everyone” wants to get with them; but that means that they’re much less likely to meet your individual needs, if you were to end up interacting with them. Ideally, you want a page that’s “just for you.” A page with low optionality, that can’t help but serve your particular needs.


Hey, nice comment! Please do this please.


Of course the problem is that if your site becomes popular it will be overrun by SEOs pushing monetized vapid content. This is how people earn money, they aren't going to stop just because you want to return to a time before they ruined the internet.


Results could be better filtered. SEO spam isn't very nuanced; it all reads the same and is frequently plagiarized. I wonder if you could crudely use something like Turnitin to filter for original content.

Maybe you could try to build a model SEO article for your own search engine: take your existing SEO-heavy results, figure out which features contribute the most to their ranking, then filter out results that exhibit those features. Rinse and repeat as SEO writers try to step up their arms race, but they should always end up being foiled, as long as you keep re-optimizing your own model SEO article regularly.
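
As a hedged sketch of what that could look like in practice - assuming scikit-learn and a small hand-labeled sample; the texts and labels below are invented placeholders:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "10 best widgets 2020 best widget review buy widget cheap",  # SEO filler
        "Notes from rebuilding the rear wheel on my touring bike",   # genuine
    ]
    labels = [1, 0]  # 1 = SEO-like, 0 = genuine

    seo_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression())
    seo_model.fit(texts, labels)

    def adjusted_score(base_score, page_text):
        # Penalize the base relevance score by the predicted probability
        # that the page is SEO filler.
        p_seo = seo_model.predict_proba([page_text])[0][1]
        return base_score * (1.0 - p_seo)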


If it was that easy, someone would be able to do it.

Instead every algorithmic content delivery platform, from Google to Twitter to Youtube to Facebook, is constantly chasing just to keep from being underwater against the spammers.


SEO is exactly as sophisticated as it needs to be to get to the top of your rankings.


SEO sites have an Achilles' heel though: they need to make money somehow, and 99% of the time it's going to be ads. So filter them out based on that. Create a space for a noncommercial web.
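
A minimal sketch of that filter - fetch the page and check whether it pulls in scripts from well-known ad networks, then use that as a ranking penalty (the host list is illustrative, not exhaustive):

    import re
    import urllib.request

    AD_HOSTS = ("doubleclick.net", "googlesyndication.com", "taboola.com", "outbrain.com")

    def looks_monetized(url):
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        script_srcs = re.findall(r'<script[^>]+src="([^"]+)"', html, flags=re.I)
        return any(host in src for src in script_srcs for host in AD_HOSTS)

    def penalized_score(url, base_score):
        # Push ad-funded pages down rather than removing them outright.
        return base_score * (0.2 if looks_monetized(url) else 1.0)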


They can also make money by selling things. Much harder to detect algorithmically than ads, and I don't know if you want to filter it out?


Or by scams, hoaxes, fraud, blackmail, pump-and-dump, propaganda, ....

Less than ads and subscriptions, but enough to fund a lot of crap.


At some point this monetization game must end and we just have to pay for things we want. If there was a search engine that met my needs I would gladly pay $15/month for access.


I agree, but it is unbelievably hard to compete with free.


> What we really need is a new Google

Maybe, maybe not. How useful is Google (or search in general) to you?

For me, search is more of a convenience tool than a way of finding sites that have information. There are questions I need answered but could figure out easily enough without search (e.g. "how many cups are in a pint?"). Sometimes I want opinions, but I almost always go to the same sources. Sometimes I use search because I'm too lazy to click around using a site's own search. The only things that are actually useful for me from search are specific bits of expert knowledge that I want in a structured manner (e.g. "what do I need to consider when buying a house?"), and those queries are incredibly few.

I feel like search is slowly becoming irrelevant


> Maybe, maybe not. How useful is Google (or search in general) to you?

I think I use google in the same way as you.

Most of the time, I could go to the websites directly (MDN, Stack Overflow, HN), but sometimes I'm trying to find something I don't know about by trying different terms. I usually do this when I want a particular product but don't know what it's called or what it is: "smallest itx case without gpu", "midi router no power supply", "waterproof tarp diy tent setup".

I installed a browser extension called uBlacklist as recommended by someone here a couple of weeks ago, so now 90% of the time Google search is like my Ctrl-P for MDN, Stackoverflow, etc since I've managed to filter out the sites I don't want to see results from.


1. The internet is far too large to use human curation, an algorithm is needed.

2. An algorithm can always be gamed.

You're stuck.


Can you substantiate #1?

The content on the sites I visit is created by humans. Until automation genuinely overtakes us, I'm not ready to accept at face value that the scale of the internet has grown so large that humans couldn't tackle the problem.


Google says there are 2 billion web sites. Each one may consist of a huge number of web pages (see wikipedia.org).

All I can say is, good luck with your human curation startup.


Sure, it also estimates more than twice that many people in the world with access to the internet. And Wikipedia is already well-curated by humans.

If there were orders of magnitude more pages than humans, I'd agree. But I'd also ask: who created them all?


Granted it's silly, but I'm fascinated by this thought experiment.

It's not easy to quantify the amount of useful content on the internet. The 2bn figure above seems to stem from registered domains, and depending on who you ask [1][2], around three quarters of them are "inactive" (e.g. a landing page for a parked domain).

At the other end of the spectrum, Google's index surpassed 130 trillion pages four years ago [3]. Point in favour of my opponent!

If everyone connected to the internet indexed one page a day over the course of their lifetime we might just about do it (rough arithmetic below, after the footnotes). And anyone creating a new page would need to [arrange to] index it themselves.

[1] https://www.internetlivestats.com/total-number-of-websites/

[2] https://hostingtribunal.com/blog/how-many-websites/#:~:text=....

[3] https://searchengineland.com/googles-search-indexes-hits-130...
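
The back-of-the-envelope arithmetic, using the figures cited above (130 trillion indexed pages, roughly 4.5 billion people online in 2020):

    pages = 130e12
    people_online = 4.5e9

    pages_per_person = pages / people_online       # ~29,000 pages each
    years_at_one_per_day = pages_per_person / 365  # ~79 years

    print(round(pages_per_person), round(years_at_one_per_day))
    # ~28,889 pages per person, i.e. roughly 79 years at one page a day --
    # just about a lifetime.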


Wikipedia is one website. There are 2 billion. I don't believe wikipedia's methods scale to the internet.

Also, any setup that allows everybody to be a moderator will be promptly gamed.

What you want is what yahoo used to do - a hand-curated search engine. It worked when the internet was small, but got buried under the eventual avalanche of web sites.


We can say the exact same thing about all the technical debt we've accumulated in our code bases. But I've never seen that technical debt properly paid off :D


#1 can somewhat be substantiated by some technical info, without knowing the true number of pages.

- Using DNS "zone files", the DNS databases for TLDs (which are not available for all TLDs, but for most) show there are circa 200 million domains registered at any moment

- A large percentage of these are parked, i.e. no unique content.

- Many domains are "tasted", i.e. bought, are alive for a few days then disappear, so potentially you waste time crawling them

- Lots of sites are database driven and can result in millions of pages that can be created in a day

- URL rewriting means you can have an almost infinite number of pages on any one site

- Soft 404s and duplicate content can be hard to spot and can waste resources in gathering/removing them

There are paid-for resources like Majestic/Ahrefs/Moz that crawl the web to see who's linking to whom, and they all contain trillions of URLs.

I think the most detrimental fact is that pages often disappear or change. I don't have a recent number, but I'm fairly certain there's a 10-15% chance that any link you see this year will be gone next year. "Link rot". Hard to build a DMOZ-style directory at that scale with that problem.
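
A quick illustration of what that rot rate does to a hand-built directory over time:

    # With a 10-15% chance that any given link dies each year, how much of
    # a directory is still alive after N years?
    for annual_rot in (0.10, 0.15):
        for years in (1, 5, 10):
            surviving = (1 - annual_rot) ** years
            print(f"{annual_rot:.0%}/yr rot, {years:2d} years: {surviving:.0%} of links alive")
    # At 15%/year, barely a fifth of the directory survives a decade, which is
    # why upkeep, not the initial curation, is the hard part.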

I don't think it is unmanageable, it just needs to be seen from different perspectives and managed by different groups of people.


On the other hand, most of that is stuff that human curation can rightfully reject as not worth linking and indexing. We don't need to index the entire web, we need to index the good stuff (that is likely to stay there anyway and be worth archiving).


How does one know what the good stuff is without looking at it?


#1 doesn't matter because there's no rule that the entire internet needs to be visible. People only care about the sites they find value in. Search engines are one way to quickly locate those sites.


That any algorithm can be gamed does not imply that the least manipulable algorithm can be made arbitrarily bad.

You only need to choose an algorithm so that the amount of trash is less than you can handle.


If it's so simple, why hasn't anyone managed to do it?


I'm surprised by this sentiment, because Google actually changed the web for the better in this regard. Does anybody remember the EzineArticles era? People kept publishing the same short articles, stuffed with just a few keywords and nothing more, over and over again, just to get backlinks. Back then you couldn't make a search on any topic without falling onto page after page of those thin, keyword-stuffed articles.

Those article directories were eventually murdered by Google within the blink of an eye, maybe a decade ago, and quite frankly on any given topic nowadays it's way easier to find good content than SEO filler. Google's algorithms now favor fresh (aka published or updated recently), long content over short, "popular" (aka heavily linked), duplicated content.


It's a pity the semantic web never took off. It might have greatly reduced the need for sophisticated centralised search-engines.


I think the thing that happened is that the OS vendors spent too much time addicted to the web themselves, and forgot that one of the duties of the core OS of any device is to serve data from it, safely and securely, to any other device.

Instead, most stacks are designed to enforce a walled garden, from which very little is shareable unless you go through the gateway (the web client app) to some approved destination.

The semantic web and total availability at the personal-computer level are aspects of OS design I wish had received more attention.

Basically what we have now is a very, very expensive system of dumb terminals.


The semantic web will never work because there is no way to enforce it. If the topic "horses" becomes popular, people will just start tagging their spam pages with "horse" to try to get people to look at their pages.

As an example go look at the tags on soundcloud. People tag their songs with whatever they think will get people to take a listen.


> The semantic web will never work because there is no way to enforce it. If the topic "horses" becomes popular, people will just start tagging their spam pages with "horse" to try to get people to look at their pages.

There are plenty of trusted sources that would not spam their pages in this way. And if you spam too much, you risk getting dropped from search indexes, directories, links etc. because your source is just not useful.


Dishonesty is a problem the web always faces, whether it's lying to a person or to a search-engine.

It's not as if the semweb people weren't aware of the problem, and it's not self-evidently a fatal blow to the idea.


It's a fuzzy concept, but I think you're pointing toward something there's a need for. PageRank is/was useful in a lot of ways, it's just not enough by itself. Its weaknesses have been ever more apparent, and it has become less effective over time.

There are so many possible viable methods for ranking search results! Particularly now with higher level textual analysis using AI/ML/[buzzword], and perhaps more importantly, the resurgence of interest in curated content. People are getting better at discerning curated-for-revenue vs. curated-with-love.


> It's a fuzzy concept, but I think you're pointing toward something there's a need for. PageRank is/was useful in a lot of ways, it's just not enough by itself. Its weaknesses have been ever more apparent, and it has become less effective over time.

Would you speak to why you think this way about PageRank? What are its shortcomings?

To me, who only paid surface-level attention to this, it seemed like Google results were best when PageRank was the dominant metric. As they moved more and more in the direction of prioritising news, commerce and the aspects we call SEO, “number of links pointing TO the resource” became less and less important in the ranking. And as that happened, the quality of results dropped, and the content silo-ing rose up.

PageRank was peer-review. SEO is “who shouts the loudest”.


SEO with Google was about gaming PageRank. A lot of work happened in the 00s trying to prevent that, but it requires an army of moderators and others to identify good/bad links and link sources. Is HN a good source? Sure. Are HN comments a good source? Not once spammers realized they could drop links in comments to boost their site's results. How do you tell the difference? Plain PageRank is weak against this, and that's what motivated a lot of changes to the ranking system.


Nofollow helps a fair bit but requires cooperation from all high ranking sites to ensure that they don’t leak their PageRank juice.


They had to abandon PageRank because it was widely gamed.

As far as I can tell, the main reason Google succeeded was that other search engines let advertisers buy placement for keywords (and didn’t label paid links). I heard from an industry insider that was able to strip the paid links that the engine they worked on gave results that were very similar to Google’s.

The second big reason was that pagerank was a useful signal that hadn’t already been gamed to the point of uselessness. I think this let a tiny team blindside an entrenched industry.

That’s not to say there’s no technical insight behind the page rank algorithm, but it was only a useful signal for a few years.


Before Google/PageRank took over, sites were successfully gaming the rankings of search engines like AltaVista by adding every keyword they could think of to the HTML header, or hiding them in the page with an invisible font, etc.

It got to the point where AltaVista became more or less useless, and when Google showed up on the market they quickly took it over.

Seems the time is ripe for a new revolution. Doesn't have to be a better search engine, could be something completely different.


I don't think there needs to be a new Google or a requirement to be distributed.

Simply having half a dozen or more search engines per country/language, with their own indexes and algorithms should help see the web more fully.

ATM in English, Bing and Google have the largest indexes and Mojeek has its own index though smaller. DDG, Ecosia and others are just the Bing index re-ordered.

I enjoyed the OP, though. And I think niche directories/blogrolls would be progress. The current centralised web is a result of everyone dancing to the tune/rules of the large platforms.


The problem with any such system is that there is money in capturing eyeballs, so once any system gets popular, a lot of people will dedicate time to spamming it. I don't know how to avoid that.


Hm; a sibling comment about p2p made me start thinking about some kind of "authority" system, with personal tracking. I know it'd be a lot of work for everyone, but it still sounds maybe doable to me. Like if everything posted to the "directories" was signed by its author, and you'd be able to mark entries (links) as valuable or not for yourself, which would be tracked in your local authors DB by tweaking that particular author's total score. Then you could share your list with friends, who could merge it into theirs, but still marked as coming from you. I know, kinda complex (and reminiscent of GPG's "web of trust"), but maybe it could still be an alternative to centralized services like Google? Also, the emerging p2p networks like Dat/Hypercore or IPFS or SSB already auto-sign people's published data, so maybe this could be tapped into...
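
A rough sketch of the data model I have in mind - author identities, scores and the merge rule are all invented for illustration, and "signature" is just a placeholder for whatever the underlying p2p network already provides:

    from dataclasses import dataclass, field

    @dataclass
    class Entry:
        url: str
        author: str     # key fingerprint of whoever signed the entry
        signature: str  # assume verification happened elsewhere

    @dataclass
    class LinkList:
        author_scores: dict = field(default_factory=dict)  # author -> local trust
        entries: list = field(default_factory=list)

        def mark(self, entry, valuable):
            # Tweak our local opinion of this entry's author.
            delta = 1 if valuable else -1
            self.author_scores[entry.author] = self.author_scores.get(entry.author, 0) + delta

        def merge_from(self, friend_list, friend_weight=0.5):
            # Import a friend's list, discounting their scores.
            for author, score in friend_list.author_scores.items():
                self.author_scores[author] = self.author_scores.get(author, 0) + friend_weight * score
            self.entries.extend(friend_list.entries)

        def ranked(self):
            return sorted(self.entries,
                          key=lambda e: self.author_scores.get(e.author, 0),
                          reverse=True)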


Even those types of authority systems can be gamed, unless you're also building your own reCAPTCHA-style pattern analyzer.

The problem with decentralization is that it creates a power vacuum that is filled by the most interested actors. Even Bitcoin, with its decentralization-by-design, is actually centralized to a handful of miners in China.

If the goal is to rebuild something that is anything else other than profit based, you need to make sure the organization running it is strictly non-profit.


Maybe there is a way to align spamming and curation? I mean what if all those people working on SEO would actually work on curation?


If you have all the other stuff you might not really need peer to peer. Or maybe work in peer to peer after getting established?

Also I like the idea of making search like Wikipedia where people can edit results. Obviously you’d need super genius level safeguards to protect against scammers but Wikipedia does it ok-ish.


wiby.me can be useful for finding sites.


So, basically Yacy?

https://yacy.net/


The alternative to Google for discovery is Reddit. Reddit is the modern web directory:

- Directories are Communities defined by their link rules

- Human-curated

- Easy to start a new one

- SSO across all communities

- Built-in forum technology

- Unified comment technology for every website

You can get communities like reddit.com/r/Sizz for instance or larger ones like /r/esp8266 or massive ones like /r/sanfrancisco or planet-sized ones like /r/pics. And reddit itself plays the role of a meta-directory, with little directory networks (SFW-Porn being "the pretty pictures of the world" directory network) sitting between Reddit itself and the subreddits.

Reddit is an amazing amazing thing.


Except:

- Most subreddits are hostile to self-promotion of Web stuff. If you're unknown, you're going to have to be socially involved there enough that people know your name. (Though I agree that small subs like /r/esp8266 are begging for self-made content.)

- Related: people don't know your name because it's in small gray text. You're just another comment.

- You need upvotes. A personal directory requires only one upvote.

- You need to be on-topic. Your work may not fit Reddit's categories.

- Reddit mods are generally more like forum mods than librarians.

- Agreevotes do not equal quality. I don't want to overstate this, but I like that I'm not seeing vote counts on personal directories.

Reddit is cool - but it has its own rules and its own culture that goes with it. I personally wouldn't call it 'the modern web directory'. I do think it's less hostile to the Web than many other platforms - and certain subs like /r/InternetIsBeautiful and /r/SpartanWeb do good work.


> Most subreddits are hostile to self-promotion of Web stuff. If you're unknown, you're going to have to be socially involved...

Great! I'm there for the content, and self-promotion is usually the worst content.

I like that HackerNews tags this stuff with ShowHN so I can decide whether I want to look at it or not.


There’s no problem submitting blog articles though. Even your own.


I've had my blog post removed from r/algorithms for self-promotion.


Reddit is a feed, and therefore a link that was posted a month ago is no longer going to be seen by anyone. Basically, whatever appears to users between now and the moment when things drop off the page is a crap shoot, and Reddit will be inferior to a real web directory that can amass a larger collection of interesting links and, importantly, preserve them all together at one time.


I don't find reddit particularly navigable nor user friendly. It's a social media platform, not a discovery or search one.

Even the communities I care about have limited utility.


I thought it was just me... Every time I end up on a Reddit thread (from a search result), it's a click fest trying to see anything except 'top comments', and it ultimately ends up very unsatisfying.


Reddit is horrible, and I say this as someone with ~27k combined comment karma across two accounts and years of use. The mods are too happy to delete posts they don't personally approve of (whether they break sub or Reddit rules or not), and I recently got suspended on what appears to me to be a false charge according to the rule itself (a debate on whether some fanfiction should be illegal or not (it shouldn't be) got me suspended - a topic, it is worth noting, that is covered by several serious academics). Bitter rant incoming.

The annoying part is that not only does a sub mod get the privilege of deleting posts at will, but there is no appeals process for that, and Reddit doesn't listen to suspension appeals at all.

Aside from that, the downvote/upvote culture is bad (even if I am usually in its favour) and encourages dogpiling and groupthink. Ironically, Reddit with its "don't downvote for disagreement" element of Reddiquette is worse at this than HN with its "downvoting for disagreement is fine" policy.

The site redesign, infinite scroll, a handful of mods controlling many major subs, nonsensical or inconsistent administration and rules, unjust dishing out of punishment, advertisements in the main feed, widespread outrage bait, and endless drama means Reddit is no different to platforms like Twitter where short, witty and possibly fallacious content thrives.

I can count on one hand the number of times I've actually valued information I've obtained from Reddit comments or submissions. That does not justify the amount of time and energy I've poured into the website, which I could have better spent simply not using social media (which Reddit now is). These days, if I had to use social media, I'd pick the Fediverse over Reddit every day of the week. It takes a lot of time to realize that highly upvoted comments (and every comment, really) are just "someone's opinion, man", which the current zeitgeist dictates people will agree with.

On Reddit, politics is entertainment (r/PublicFreakout etc.), mocking and hate is central (r/SubredditDrama, r/unpopularopinion), administration is done in the interest of advertisers and personal opinions (phrases such as "weird hill to die on"), and moderation attracts people who would rather bask in the power afforded to them more than people who would rather carefully curate and foster discussion. The one sub which works to its purpose is r/ChangeMyView.

Spending time arguing with random people on the Internet is mentally taxing, very unlikely to achieve a change in opinion of the persons involved or the observers, and terrible for stimulating and interesting discussion. Next time I want to argue a point, I'll get a blog with a comment section, or these days, without one. If my friend told me they were going to register a Reddit account, I'd tell them everything I've just said in this post.


> The mods are too happy to delete posts

While that does happen, my problem is more often that mods of smaller subreddits are inactive or unwilling to moderate them so they end up being filled with low effort memes instead of actually interesting content.


2020 has given us a lot of crazy stuff, so I guess I won't be too surprised if it also gives us the return of webrings :-)

I'm calling it now: the hottest startups will be "disrupting search using artisanally crafted rings of websites"!


>I guess I won't be too surprised if it also gives us the return of webrings

You're in luck! https://news.ycombinator.com/item?id=23549471


Honestly, I don't think that's as crazy as it sounds. AI/ML-driven feeds often promise more than they actually deliver, and by now more and more people are starting to realize that AI/ML is not the holy grail it was made out to be, at least in situations where you cannot throw massive resources at the problem.

Maybe not exactly in the form of webrings, but who knows, why wouldn't it be time for the pendulum to swing from the whole AI hype back in the other direction? There is a lot to be said for conscious curation on your terms and your devices vs algorithmic decisions made in the cloud for you.


Damn, this is all classy. I started writing html in 1996. My first job was at a boutique web shop building vertical portals in Coldfusion. Thanks for the great links. It's inspired me to put up a plain old html/JavaScript homepage again!


Hey this is great. Post the link here if you like.


Developers should come together and build an open-source curated platform where only good - interesting, well-written - articles on all topics can be found.

Actually, the Hacker News guidelines kind of describe this. And although the articles posted here seem to be higher quality, they eventually get lost after 2-3 days.


Wow, those sites are blazingly fast and usable compared to most that I see these days. (With the possible exception of the third not making it visually obvious that the list items are links and having to figure it out from the context.)


Well, sheeeeit. If we're going for fever dreams, don't forget the electric sheep! https://electricsheep.org/


It's still all indexed by Google. For example, all you need to do is search for a unique-seeming piece of text from your dreamwiki gem, and the site will appear on the first page of Google's results:

https://www.google.com/search?q=WHO+ROAMS+THE+KIRUGU+NIGHT&o...


Yeah, strangely the webring has come back into favor.


If DMOZ could have held on for a few more years...


I'm not certain DMOZ is the way to go. The big centralized directories are too hard to keep current. They get slammed with submissions. And you end up with so many editors that no one has a sense of ownership.

I mean - maybe it's possible. Perhaps a really focused team could figure it out. (The 'awesome' directories have kind of figured that out, by each having a specialized directory.) But these personal directories are really sweet because they don't have to cover a topic. They can just be a collection of good stuff, who knows what.


Federation largely solves these problems. The biggest interop issue is sticking to a common/interoperable classification wrt. topic hierarchies, etc. and even then that's quite doable.


A solution would need crowdsourced collective vetting of sites along with a reputation system to keep out spam and bad actors without devolving into Wikipedia style personal fiefdoms.


I remember checking out DMOZ 15 years ago and it was already irrelevant.


DMOZ started stagnating around 2005. By the time it closed, it was already considered irrelevant, and much of the index had been unmaintained for years -- there was really nothing left to hold on to.


I think what you're saying about reduced barriers to entry has lowered the standard of all popular media.

It used to be expensive to publish anything - especially the further back in time you go. So classics for example typically represent particularly bright writers, as having something published before the printing press, and widely disseminated, was simply unlikely to happen.

But today anyone can create an account on YouTube or stream on twitch and it doesn't matter if the content is of any particular quality or veracity, so long as the common man sees what he wants to see.

I think there's a major secondary effect, in that now that we are surrounded by low quality media, the average person's ability to recognize merit in general is lessened.


The secondary effect you mention is absolutely the case. There is unlimited media and unlimited platforms on which to consume it. "Content" is truly a commodity now. I would like to try to make watching movies/tv a special thing again for myself, as opposed to little more than background noise. I think this will require careful curation and research, rather than just trusting an algorithm.


Yes, there is unlimited media and content, but the thing is that most of this content is either total crap, or polished content that was over-optimized for the median viewer. There is great, non-polished but authentic content for every niche, but it is very, very difficult to find. Such content is not a commodity, but unfortunately, it seems that the average content is good enough for the average viewer...


I don't quite follow - if low quality media is everywhere, doesn't high quality media stand out?

Perhaps you're saying that so much low quality media drowns out the high quality media - such that it can't be found. The ratio is off, right?


> if low quality media is everywhere, doesn't high quality media stand out?

Because there is so much low-quality content, it's become nearly impossible to find the high-quality content. Needles in haystacks.


Well, yes - that's my second sentence.


>if low quality media is everywhere, doesn't high quality media stand out?

You would think so, but more often than not, most people don't want high quality. What happens is that the media that panders to the lowest common denominator stands out the most, since that's what the majority focus on.


The psychology of the "hot take" on twitter is a prime example of this. Is it less friction to read a 3 page blog post that critically analyzes a subject and takes into account different viewpoints that all have merit, or to read someone's 280 character reaction?

I am guilty myself, I often find myself jumping to the comments section even here on HN to understand what people are taking away from an article without even finishing it.


And that's where we need more journalistic writing (if that's the word). Even by non-journalists, I mean.

Write long-form content: start with the most important information in the first paragraph, and give more and more detail in the following paragraphs. Someone who prefers short content will be happy with the first paragraph. Someone who wants to delve into the details will read each and every word. Heck, your first paragraph could even be a tweet containing a link to the long form.

This is clickbait taken backwards. You will get very few clicks as you already delivered the main information for free, but those who clicked will be there for a good reason.


It stands out, but it's so far away from you when you search for it, that it's below the horizon...


You can't determine the quality of media without consuming it. So the whole Akerlof "market for lemons" process applies: low-quality, cheap content dominates.


> I think what you're saying about reduced barriers to entry has lowered the standard of all popular media.

You're more right than you know.

When there were only a handful of television channels, the content was higher quality than what we have now.

When there were only a couple of dozen cable channels, the content was higher quality than the endless reruns we have now.

When publishing a book went through the big publishing houses, the quality of what was available was higher than it is now where anyone can self-publish and pretend to be an expert.

See also: radio.

Content can only be created so fast. There are only so many talented content creators out there. While the number of media channels has exploded, the number of good content creators has not kept pace. Keep adding paint thinner, and eventually you can see through the paint.

The internet was supposed to give everyone an equal voice. All it ended up doing is elevating the dreck and nutjobs to equal footing with people who know what they're doing and what they're talking about. The quality is drowned out by the tidal wave of low-grade content.


> When there were only a handful of television channels, the content was higher quality than what we have now.

As someone who watched TV in the 1970s, before cable was a thing, I have to disagree here. I think we look back today and see "MASH" and "Columbo" still holding up great after 40+ years and think of it as representative. But nearly all TV was formulaic dreck back then, just like it is now. And even if the average quality level is worse now (which I'm not convinced is the case, but let's assume) the quantity is much higher and there's a large amount of really good stuff to choose from on the right side of the bell curve.

It's true that content can only be created so fast, but it can also only be consumed so fast. Once you have access to enough high-quality content to fill all the spare time you want to spend watching TV or reading or gaming or whatever, having more of it to choose from doesn't improve your experience much.


>My theory is that the high barrier to entry of online publishing kept all but the most determined people from creating content. As a result, the little content that was out there was usually good.

The barrier wasn't that high. Making a site on Geocities, Tripod or Angelfire wasn't that difficult. Writing 90's style HTML wasn't exactly writing a kernel in C, and most of those services had WYSIWYG editors and templates anyway. Few of the people publishing to the web in the 90s were programmers, so the technical knowledge required was minimal.

And plenty of people are publishing high quality content on the modern web, even on blogs and centralized platforms. I follow writers, scientists and game developers on Twitter, watch a lot of good content on Youtube, read a lot of interesting conversations on Reddit. The fact that people publishing content nowadays don't have to write an entire website from scratch has little to do with their personal passions (or lack thereof), whether they're interesting or (and ye gods how I've come to hate this) "quirky." That's like saying writers can't write anything worth reading unless they also understand mechanical typesetting.

As far as the old content goes, of course most of it was crap. Sturgeon's Law applies to every creative medium. Most blogs were uninteresting, many personal sites were just boring pages full of links or stuff no one but the author and maybe their few friends cared about. In both cases, between the old and new web, a bias for the past (as HN tends to have) leads people to only remember the best of the former and correlate it with the worst of the latter.


We are asked to believe that advertising is required for there to be any content on the Web.

However, comments like this seem to be proof that is not true.

I have personal memories of what the Web was like in 1993, but there are so many people today feeding off the advertiser's bosom - what are the chances anyone will listen? No one wants to hear about what the network was like before it was sold off as ad space. Young people are told "there is no other way. We must have a funding model." Even this article rambles about "the problem of monetization". No ads and "poof", the Web will disappear. Yeah, right. More like the BS jobs will go away. This network started off as non-commercial.

There was plenty of high quality content on the 90's Web. Even more on the 90's Internet. That is because there were plenty of high quality people using it, well before there was a single ad. Academics, military, etc. It all faded into obscurity so fast. Overshadowed by crap. The Web has become the intellectual equivalent of a billboard or newspaper. The gateway to today's Web is through an online advertising services company. They will do whatever they have to do in order to protect their gatekeeper position.


I remember visiting big corporate websites and there was always a little corner for the 'webmaster' often with a photo of the server the site was running on... or a cat... or something like that.

Geocities was a beautiful mess as ... it was just folks trying to figure out HTML and post silly stuff, but it was genuine.


My favourite spot was a little later, early & mid 2000s. The barriers had been lowered, but publishing still required some motivation. There was a ton of content to discover, a lot of discovery channels and "information wants to be free" still felt like a prevailing wind.

I think part of the reason was, as you say, lower standards. We were being exposed to content that didn't have an outlet before that. The music was new, black, polished chrome... to borrow a Jim Morrison line.

A bigger part is discovery though. Blogrolls & link pages were a thing. One good blog usually led you to 3 or 4 others.

These days, most content is pushed, often by recommendation engines. Social media content is dominated by quick reaction posts, encouraged by "optimization."

The medium is the message. In 98, the medium was html pages FTPed to some shoddy shared host to be read by geeks with PCs. In 2003, it was blog posts. In 2020 it's facebook & twitter.


I think we have far more quality content than ever before. YouTube is a goldmine of high quality content, and so are the other publishing platforms.

The signal to noise ratio might have gotten worse, and discovery might be flawed, but the absolute quantity of quality content has never been higher.


Like this site, I learned how to build a strong bike rim from Sheldon. https://sheldonbrown.com/


I used Sheldon Brown’s site when I rebuilt my first decent roadbike (well, it was $100, including the parts I put into it; “decent” is relative).


I had just started college and I remember going to the computer lab and clicking around for hours at a time, at night. Just going from blog to blog, reading interesting stuff. You didn't have to have a particular goal in mind - one blog would lead to another interesting blog would lead to another one, endlessly. They would all be engagingly written, to a high standard of quality.

Like you, I know things have changed, but I still can't imagine I could do that today, going from blog to blog, without running low on material within ~60 minutes.

EDIT: I see the webring links here now, I may try them.


I came of age in the early 2000s, and Wikipedia was this for me. Eventually, it ran out of meaningful depth to me and in college I hopped to surfing books. You can surf on books til the end of time.

I think hypertext as a medium has a lot going for it that books don’t, but I don’t think we’ve figured out distribution, quality control, and discovery sufficiently to make the internet so stimulatingly surfable.

There was a period of time a few centuries ago when adults looked at kids reading romance novels the way that adults today look at kids scrolling through TikTok. I think all mediums go through cycles.

On the other side (when the wild internet's commercial viability wanes, and people can no longer make easy money hosting mediocre, SEO-driven drivel), I think a lot of the good content will survive the great filter, and that's when we'll be able to appreciate it for what it is/was. The next 20 years might be rough, but the work of this generation's Dickenses and Pushkins and Gogols will survive.


People are still creating massive amounts of cool content, it’s just really hard to find. I play blood bowl, I care a lot about the statistics, and I’ve searched a lot on the topic over the past few years.

The best result I could find concerning some official data from FUMBBL (a place you can play blood bowl) was a blog entry from 2013.[1] My circle of friends and the different leagues I play in have been using that as reference for years. We’ve searched and searched to find the data source to no avail.

The other day I was randomly site: searching for something completely unrelated and found a source for live FUMBBL data[2]. You’d think that would be the first result for blood bowl statistics on any search engine, as it’s really the best damn source I’ve ever seen, but it’s not.

I know you were probably referring to something a little more interest-based. Well, I once sat next to a retired biology professor at a wedding, and it turned out he ran an interest site detailing all the plants specific to the Danish island of Bornholm. I don’t care much about plants, but it was exactly a 90s-style page. Unfortunately I didn’t save the link (I don’t care much about plants), and I haven’t been able to find it since, despite searching for his name.

So I think it’s still there, it’s just not easy to find it.

[1] http://ziggyny.blogspot.com/2013/04/fumbbl-high-tv-blackbox-...

[2] http://fumbbldata.azurewebsites.net/stats.html


> My theory is that the high barrier to entry

I have the same feelings about social media. It used to be that you only had to listen to your stupid uncle at Thanksgiving. Now he constantly spews his garbage on Facebook.


>monetized blogs

I believe that we have the 'web' today because big decisions were made about how little control the end-user (i.e., consumer) should have over the content made by producers, and that the #1 priority for all technology involved in the web has been to separate producer from consumer as stringently as possible.

If we had the ability to safely and easily share a file that we create on our own local computer, using our own local computer, with any other computer in the world - we would have a nice balancing act of user-created content and world-wide consumption.

Instead, we have walled gardens, and the very first part of the wall is the operating system running on the user's computer - it is being twisted and contorted in such ways as to make it absolutely impossible for the average user (i.e. the computer owner/user) to easily share information.

Instead, we have web clients and servers, and endless, endless 'services' that are all solving the same thing for their customers: organising documents in a way people can read them. And all the other things.

And it's all so commercial, because there is a huge gate in the way, and it is the OS vendors. They are intentionally stratifying the market by making the barrier to entry - i.e. one's own computing device - untenable to serve the purpose.

Imagine a universe where OS vendors didn't just give up to the web hackers in the early days and, instead of making advertising platforms, pushed their OS to allow anyone, anywhere, to serve their documents to other people, easily, directly from their own system. I.e. we didn't have a client/server age, but rather leaped immediately to peer-to-peer, because in this alternative universe there were managers at places like Microsoft who could keep the old guard and the new young punks from battling with each other... which is how we got this mess, incidentally.

There really isn't any reason why we all have to meet at a single website and give our content away. We could each be sharing our own data directly from our own devices, if the OS were being designed in a way to allow it. We have the ability to make this happen - it has been intentionally thwarted in order to create crops.

Give me a way to turn my own computer, whether it is a laptop or a server or my phone, into a safe and easy to use communications platform, and we'll get that content, created with love, back again.

It's sort of happening, with things like IPFS, but you do have to go looking for the good stuff... just like the good ol' days.


The operating system isn't needed for that at all, and blaming the vendors is fundamentally misplaced. Directly sending content to other computers over IP connections is already technically possible, as long as you don't operate any firewalls or similar that block it. And of course non-fixed and shared IP addresses would complicate the system.

You would quickly find out why you don't actually want that when your own "unisystem" designated server/client port gets hammered with unsolicited requests and exploit attempts. "Easy" is a matter of interfaces, although other design choices would come with costs. "Safe", however, would be far harder. If you restrict it to a whitelist, you lose discoverability instantly.


I don't agree with your point at all. I believe the vendors are fundamentally to blame for the situation with the web today... the walled garden was intentionally created to protect the bigger consumers.

And none of the issues you state as reasons why we can't have nice things are actually valid. OS vendors could solve the problem of serving content from one's own local PC quite effectively - the issue is not the technology, but rather the ethics of the industry, which prefers to have massive fields of consumers to farm from...


My only correction is that there was a lot of content out there! We didn't call it that, of course, because we're people and not corporations, so we just called it articles, blogs, rants and musings. A lot of it is still out there and a lot more is on the wayback machine!


> content for content's sake

I think there's some truth to this. Some junior developers make it a goal to be seen as a respected blogger, so they feel the need to write something, even if they have nothing to say.


My theory is that the more companies you throw in, the less humanity you can find.


I think the problem is that hiring across the entire "intellectual worker" market is a market for lemons, which specifically ruins anything of quality.

Some blogs were great (i.e. created to solve the problem of too much interest in what one is up to to answer even internal questions individually) and signaled a few great minds who could be hired at a discount. Those engineers also attracted pretty good co-workers in stage 1; by stage ~3, managers tell their underperforming direct reports to blog whatever they understand about what their group is doing, in the hope that they either improve or, better yet, become a burden somewhere else.


I think you are right about the "high barrier" being a filter for good work, yes!

Also I miss the wide and wonderful designs and color schemes of the 90s :) Long before Bootstrap or "material design"!


I so agree. I miss good old-fashioned text-based content. Simple, readable, and fast. There was a reason that default links were blue and underlined. Visited links were purple. You could scan a site and quickly determine where you had been before.


If you want the 90s web.. you can find it here:

https://www.versionmuseum.com/websites


The whole idea of SEO in order to get clicks for your advertising-based revenue model feels “bad” to me. Controversial content gets created because that is what gets the most eyeballs on the page. The side effect is that we veer towards a broken society the moment we go down that route.

I had an idea of a search engine that allowed you to permanently remove domains or pages with certain keywords as a paid service.


I think you have good points, but I would add that the high barrier to entry also prevented people from being copycats. A simple messenger was a major accomplishment, so when something worked there weren't 10,000 copies of it by the end of the week, and nearly all content you found was actually the original and not just some repost or copy-paste job.


I feel like you can find better quality today. Sure you have to dig a little more.

But at the time it was so magical compared to pre-web where the only content you could find was professionally published magazines or books, and suddenly you had all this niche content about stuff that wasn't worth publishing, all in a few clicks.


This. Often you'll find this low-quality SEO content is just the result of people hiring low-cost copywriters, often from developing countries: the 'uncanny valley' of written content.


I miss the lack of ads.

But they had all those annoying pop ups and pop unders.

Nowadays, Google just tracks you everywhere you go. Even when you’re browsing incognito.


Well you can do your part by not using google analytics/captcha/amp or other google spy services on your own sites.


There was less copyright concern back then too. Remember “Make James Earl Jones speak”? Or the Hamster Dance?


I remember that most of the early internet was under construction. The signal-to-noise ratio wasn't very high - but that might have been due to some universal constant of physics.


My theory is that there was no "money in it" back then.

;-)


I can't wait for server-side rendering to take its place in the sun again.

There are many use cases for which a client-side framework like React is essential.

But I feel the vast majority of use cases on the web would be better off with server-side rendering.

And...

There are issues of ethics here.

You are kidding yourself to an extent when you say that you are building a "client-side web app." It is essentially an application targeted at Google's application platform, Chromium. Sure, React (or whatever) runs on FF and Safari too. For now. Maybe not always. They are already second-class citizens on the web. They will probably be second-class citizens of your client-side app unless your team has the resources to devote equal time and resources to non-Chromium browsers. Unless you work in a large shop, you probably don't.

Server-side rendering is not always the right choice, but I also do see it as a hedge against Google's, well, hegemony.


The less stuff on the server the better, imo. Whenever I can get away with it I use a static site generator or I use vuejs with a json file containing all the data for the site. Being able to just drop a static set of files into a webserver without any risk of security issues in my code is great. I also hate the tools for backend rendering, since if you need any kind of interactivity it becomes so much easier if you had just built it all in Vue/React, with no downsides other than not running in someone's CLI web browser.
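For anyone curious, here is a minimal sketch of that "Vue plus a JSON file" pattern (assuming Vue 3 with a build that includes the runtime template compiler, and a hypothetical /data.json next to index.html; the field names are made up for illustration, not the commenter's actual setup):

    // main.ts - minimal sketch: the whole "backend" is one static data.json file
    import { createApp, ref, onMounted } from 'vue'

    type Post = { slug: string; title: string; url: string }

    const App = {
      template: `
        <ul>
          <li v-for="post in posts" :key="post.slug">
            <a :href="post.url">{{ post.title }}</a>
          </li>
        </ul>`,
      setup() {
        const posts = ref<Post[]>([])
        onMounted(async () => {
          // data.json is served like any other static file
          posts.value = await (await fetch('/data.json')).json()
        })
        return { posts }
      },
    }

    createApp(App).mount('#app')

The whole thing can then be dropped onto any static host (GitHub Pages, nginx, whatever), with no server code of your own to secure.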


At my old company they were moving from client-side to server side because they had 30 different clients -- Roku, desktop, smartphone, Xbox, etc. -- and all of them had to reproduce the same logic. At the time I left they were trying to centralize all of that logic on the server and then just put the lipstick on the pig at the end.


Phoenix LiveView gives you excellent interactivity with backend rendering.


I desperately want to check that out when I have some free time.

(In other news, I would desperately like some free time)


There are quite a few others inspired by LiveView: on ASP.NET, PHP (Laravel), Python (Django), and Stimulus Reflex on Ruby on Rails.


> I use vuejs with a json file containing all the data for the site

Seconded. It makes hosting a breeze, especially when you can just throw it on GitHub Pages within minutes. I also like being dependent on only a text editor and a browser ... a combo still performant on just about any device from the last 20 years if not longer. No need to install full-featured IDEs and the various dependencies.


> Being able to just drop a static set of files into a webserver without any risk of security issues in my code is great.

What security risks are removed by using a client side app instead of a server side one?


No database, no code running on your system other than nginx which I trust a whole lot more than myself.


None, you're just shifting the risk to the user.


There’s an exponentially growing recruiting, training, and hiring industry based on developers who only know how to write fat JS frontends. I think everyone has probably experienced high growth in their company’s frontend team. Many developers choose to write them not out of expertise but to have them in their resume and for job security.


    developers who only know how to write fat JS frontends
Yeah. This is often a big challenge for small companies / small teams.

Nearly every web app is now two apps, and it's increasingly infeasible for any developer to have a mastery of both backend and frontend stacks.

Not necessarily a terrible problem when you have dozens of developers, but a lot of dev teams are 1-3 people. Instead of web dev circa 2010 where you might reasonably have a 3-person team of people who can each individually work anywhere in the stack, now your already-small team is bifurcated.

In many ways this is an inevitable price of progress... 100 years ago you had one doctor in town and they could understand some reasonable percentage of the day's medical knowledge. Today, we can work medical miracles, but it requires extreme specialization.

At least in the development industry, we can choose not to pay that price when it's not necessary.


I don’t understand why members of a dev team back in the day would’ve been more capable of being full-stack than today...

The front/back divide also existed back then, with barely any possibility of a front person ever touching the back-end (a possibility that exists nowadays, without going into its merits or demerits).

For a reasonably ambitious and industrious individual nowadays it’s not unreasonable to become really good at one client and one server technology. There’s more to it than writing server-side code with HTML templating for presentation, for sure, but it remains well within the grasp of many people.


It is because we understood the full-stack.

There was not "frontend" and "backend" developers. There were designers and developers. Designers created designs. They were usually delivered as PDFs, because the bulk of them came from print design backgrounds.

Their designs were then implemented by developers. Senior-level developers tended to do more of the application-level heavy lifting (server-side scripting & db), with junior-level developers working on converting designs to HTML, then to templates. By the time a junior developer moved on to app code, they were well versed and had mastered HTML and all the weird edge cases. They knew HTML.

The first real wave of "frontend" and "backend" developers came on the scene when designers started learning Flash. They started driving more complex applications and there was more bifurcation.

Granted, even in small teams of the era, you had developers who preferred "front" or "back". We tended to value "a jack of all trades is a master of none, but oftentimes better than a master of one".


I do miss the Flash era though. I remember the original Gorillaz website, which was a full Flash-based game / exploration / gallery type of thing. It was so cool, and I've not seen anything like it since. I ended up spending hours and hours on that site... extremely patiently, because of the slow internet speed at that time in my college dorm in China (2002-ish I think?). The quality was astonishingly good.


> I don’t understand why members of a dev team back in the day would’ve been more capable of being full-stack than today...

Because the stack in the old days was smaller and the expected level of quality was far lower. In the '90s there was barely any CSS or JavaScript, nor did anyone care about accessibility, multiple languages, security or interactive features. In the old days we had real documents with links and simple structure, not apps and mature frameworks. It was something on the level you get today with simple markup like Markdown or org-mode.


Phoenix LiveView solves this, and the overhead of Elixir is much lower than the overhead of maintaining separate codebases and APIs, in our experience.


Many of us also choose to write it because it’s fun and the results can be excellent.


Nobody denies React and co. are excellent tools. The frustration occurs when people who enjoy these tools claim that, if you choose not to use them, or suggest they're not fit for a certain task, you're WRONG. In the case of React in particular, this happens a lot. Then of course the people who don't want to use a framework feel frustrated, and next thing you know it's a flame war. I've seen this happen multiple times on Twitter.


This happened when Mongo started getting popular, too. Mongo supporters would get nasty if you suggested using a relational database. That, eventually, cooled off. Hopefully this "React or nothing" phase will too.


I run into very few people who claim this... oh, you mean on Twitter? Well, there's the problem.


Twitter is where a large percentage of web devs hang out. If something is problematic, you can't dismiss it as "but that's just Twitter". If it's a common mode of discourse on Twitter, then it's legitimately a part of developer culture.

In any case, discussions around front end frameworks and especially React are scarcely any better here. Although they are usually politer at least.


> Twitter is where a large percentage of web devs hang out

This isn't true, Twitter is a place where a large percentage of Twitter users who are interested in web development talk about it.

Twitter is not a lens on the entire internet, it is a lens on a bubble of a bubble.


> Twitter is where a large percentage of web devs hang out

Source?


Back in the 90s I was working on a commercial server-side rendering "content management solution". Revision control and team workflow (with quite nice conflict resolution UIs). Templating using XSLT (sadly, because it's insane). Super expressive and solid stuff in general though. We used the actual cvs binary as a version control backend.

I left this part of the business space for the browser business around that time, but I assumed that the server-side rendering stuff would keep evolving. It didn't.

Then the "PHP CMS" wave came, and dumbed everything down.

Then the "reduce the server to an API" and "let insanely complex javascript frameworks/apps deal re-invent the browser navigation model" came... and here we are.


It's been a ride watching all this abstraction and all these design patterns make everything more complex.


I think there's a lot of truth in what you're saying and the core problem is that we somehow decided there was one correct way to make a web site, and that was to use React.

Are you creating a complex webapp? Use React. Go nuts! But are you making a mostly static page (blog, marketing site, whatever)? Then don't use React. It adds entirely unnecessary bloat and complication.


The industry has gone back and forth here forever between thick and thin clients, and I view this as an extension of that. Largely we all use thick clients now (PCs, phones, and things with way too much compute power), and the move to Chrome or Chromium-based browsers made the behavior predictable. The pendulum swinging back is really an acknowledgement that the advantages provided by client-side rendering don't always outweigh the networking costs. Data visualization is one of those areas where I wonder whether the JavaScript methods provide a real advantage vs server-side rendering.


Except that phones have only gotten thinner and thinner since the 1990s... And PC's (laptops, at least) are not far behind.



If I only evaluated our tech stack on a pure “what’s the most efficient way to deliver HTML to the user” basis I’d not choose React, but I don’t know of any other framework that ticks the other boxes required in a large organization.

- Code sharing - how do you share reusable snippets of code that include both the SSR logic and the JS that is also required on the frontend? React’s component model is fantastic here: a team can develop a component independently

- Skill set - getting 50 React developers to write HTML and JS should be fine if other problems were solved, but often the suggested solution is obscure things like Elm or Elixir

- Even if most of what a company builds is static marketing content, other parts can be more app-like and having developers be able to share code and use the same basic technology is a great productivity booster


https://nextjs.org/ is a React wrapper framework that has some nice SSR features. It makes for a unified dev experience where the majority of the app can be rendered server- or client-side. Last time I worked with it, about a year ago, only the first page load was rendered server-side, but that first load is the one that arguably affects the user experience the most.
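For reference, the Next.js pattern described above looks roughly like this (pages router; the API URL and types here are made up for illustration):

    // pages/posts.tsx - sketch: the server renders the first load, the same component hydrates on the client
    import type { GetServerSideProps } from 'next'

    type Post = { id: number; title: string }

    // Runs on the server for the initial request; later navigations fetch the data and render client-side.
    export const getServerSideProps: GetServerSideProps<{ posts: Post[] }> = async () => {
      const res = await fetch('https://example.com/api/posts')
      const posts: Post[] = await res.json()
      return { props: { posts } }
    }

    // One component serves both the server-rendered HTML and the client-side app.
    export default function Posts({ posts }: { posts: Post[] }) {
      return (
        <ul>
          {posts.map((p) => (
            <li key={p.id}>{p.title}</li>
          ))}
        </ul>
      )
    }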


Despite the literal phrasing, server-side rendering, as in SSRing a SPA, is not even close to what was happening in the 90's. That was much simpler, pure, and fun.


I love Vercel (formerly nextjs). It's such a great platform and their free tier services do a lot of the heavy lifting of configuring a website and managing deployments.


Slight correction to avoid confusion between Open Source product and PaaS(?) company in people's searches: Next.js the product is still named Next.js. The company behind it, Vercel was formerly Zeit :) Their command line tool "Now" has been renamed to Vercel


I am aggressively pursuing a universal 100% server-side UI framework (not just targeting the web). If AMD has demonstrated anything to me, it's that the server is the place to do as much work as you can. Offloading work to client devices feels like stealing to me at this point. Websites are very abusive with your resources.


> Offloading work to client devices feels like stealing to me at this point.

Kinda is. It makes my mobile work harder and thus uses more battery.


React isn't bound to the web; there are quite a few React Native targets (iOS, Android, macOS, Windows, Qt, GTK).

I recommend React when making an app, web or otherwise.

I recommend vanilla HTML + CSS with optional JS when making a website.


I recently watched the "Helvetica" documentary that was posted here a few days ago [0], where they briefly mention "Grunge Typography" [1], a seemingly dead-end branch of typography that, for some strange reason, became pretty popular for a short period of time.

After some years, however, a consensus formed amongst designers that what they'd created was a pile of illegible garbage, and they realized that there was no other way than to completely dismiss that branch, go back to the roots, and evolve from a few steps back.

I feel the same kind of consensus is slowly forming around ideas like SPAs, client-side rendering and things like CSS-in-JS.

We saw the same happen with NoSQL and many other ideas before that.

We recently deployed an entire SaaS only using server-side rendering and htmx [2] to give it an SPA-like feel and immediate interactivity where needed. It was a pleasure to develop, it's snappy and we could actually rely on the Browser doing the heavy lifting for things like history, middle click, and not break stuff. I personally highly recommend it and see myself using this approach in many upcoming projects.

[0] https://www.hustwit.com/helvetica/

[1] https://www.theawl.com/2012/08/the-rise-and-fall-of-grunge-t...

[2] https://htmx.org/ (formerly "Intercooler")
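For anyone who hasn't seen htmx: the gist is that the server keeps rendering HTML and htmx swaps fragments of it into the page. A rough sketch of the idea (an Express server is assumed here; the routes are invented, not the poster's actual stack):

    // server.ts - sketch of SSR + htmx: no client framework, just HTML over the wire
    import express from 'express'

    const app = express()

    // Full page: a button wired up with htmx attributes.
    app.get('/', (_req, res) => {
      res.send(`<!doctype html>
        <script src="https://unpkg.com/htmx.org"></script>
        <button hx-get="/fragments/clock" hx-target="#clock" hx-swap="innerHTML">
          What time is it?
        </button>
        <div id="clock"></div>`)
    })

    // Fragment: htmx swaps this server-rendered HTML into #clock on click.
    app.get('/fragments/clock', (_req, res) => {
      res.send(`<p>Server time: ${new Date().toISOString()}</p>`)
    })

    app.listen(3000)

Because links and forms stay ordinary links and forms (and htmx can push URLs with hx-push-url), the browser's history and middle-click behaviour keep working for free.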


>We recently deployed an entire SaaS only using server-side rendering and htmx [2] to give it an SPA-like feel and immediate interactivity where needed. It was a pleasure to develop, it's snappy and we could actually rely on the Browser doing the heavy lifting for things like history, middle click, and not break stuff.

I'm glad we're coming back to server side rendering with some JavaScript for interactivity, but from about 2005 - 2015 this was simply known as web development. You didn't need to worry about breaking the back button or middle mouse or command clicks because they just worked.

I feel like with React, we made the actual JavaScript code simpler at the expense of everything else in the system becoming way more complex.


Depends on what you build. I find a REST server + client-side rendered frontend quite a bit simpler to grok than a server-side rendered page. Mostly because the separation between UI and data is really clear, and all of the CLI interfacing comes for free as well. There is certainly a way to split this well with SSR, but it's also easier to fall into the trap of tightly coupling the parts.


Sure, but you can just as easily fall into the trap of massively over-engineering your client-side rendering because you just grab packages off the shelf for every little thing instead of going "it's a little bit more work, but there is no reason to do any of this when the browser already does it". You end up with a 100kb bundle for what is effectively just a standard form that would work better, with less code, if it just used a normal form with built-in validation for almost everything, with the final acceptance/rejection determined by the API endpoint.

It really depends on which SSR approach we're comparing to which client-rendering approach, and who you're optimizing for.
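As a concrete version of the "built-in form validation plus a dumb endpoint check" point above (a sketch; the /api/signup endpoint and field names are invented for illustration):

    // form.ts - sketch: let the browser's validation attributes do the work
    // <form id="signup">
    //   <input name="email" type="email" required>
    //   <input name="password" type="password" minlength="8" required>
    //   <button>Sign up</button>
    // </form>
    const form = document.getElementById('signup') as HTMLFormElement

    form.addEventListener('submit', async (event) => {
      event.preventDefault()
      // checkValidity() applies the required/type/minlength rules above - no library needed
      if (!form.checkValidity()) {
        form.reportValidity()
        return
      }
      // The API endpoint still has the final say on acceptance/rejection.
      const res = await fetch('/api/signup', { method: 'POST', body: new FormData(form) })
      alert(res.ok ? 'Signed up!' : 'The server rejected that sign-up.')
    })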


I have fond memories of creating images with Grunge fonts in some pirated copy of Photoshop and then positioning them with HTML tables and Dreamweaver.


Those were good times in a lot of ways!

Also I may be an outlier, but IMO grunge as a textural expression still benefits lots of contemporary design projects. In fact if you know how to work within broader principles of design, maybe you stop caring as much about what's current, because that's just one of many outcomes that may or may not be appropriate for the message...


This is how I learned web development. Don't forget photoshopping the glossiest buttons possible. Not sure if that fad was before or after the grunge.


Gloss was a trend in the mid 2000s, if I remember correctly—around the same time that AJAX started becoming really popular


Web 2.0. Such a bright outlook in those days :)


It spilled over into the real world as well. I remember every electronic device being glossy black.


> After some years however, consensus amongst designers formed that what they've created was a pile of illegible garbage

Really? Citation needed?

I guess I have never heard graphic designers say anything about David Carson except that he's one of the most innovative and influential designers in the past 30 years. IMO his graphic design was amazing and perfect for the context (music and surf magazines). I loved getting my monthly issue of Raygun and marveling at what neat designs they had done this time.

The decline of it is pretty easy to explain: the same thing that killed print altogether also killed print music magazines (i.e. the internet).


You might want to give the first two sources I've linked a visit. Both mention David Carson and the Raygun magazine.

The sentiment I paraphrased was stated by designer Massimo Vignelli, although he didn't single out David Carson.


I've spent a lot of time thinking about getting back to what I find a healthier set of trade-offs in development and a healthier product. I like this idea that the current pushback against modern JS trends is simply backing up and setting out again in a new, hopefully better direction.

I'm invested in Elixir, and there are some interesting, different trade-offs being made there for highly interactive things with Phoenix LiveView. And there is the Lumen compiler, which could in the future remove the need for JS, since one could write Elixir compiled to WASM for the interactivity needed.

My bet is still mostly on server-side rendering and static content as much as possible. Current JS does have the JAMstack ideas, which I find a healthier direction.


I began developing apps the "old-fashioned" way over a year ago. My day job is React but for my side businesses, it's all PWA and Rails. Light React/Vue (depending on my mood) when I need a fine-grained interaction or a slick animation.


What animations are you doing that you need JavaScript for these days?


To me SPAs were useful for one thing: to establish that JSON is the only blessed format for FE-BE communication.

Back before the age of SPAs we had those "dynamic" apps which updated parts of the DOM by firing requests to the backend, getting a piece of rendered HTML and just throwing it there.

It was an absolute nightmare to maintain and I'm happy we're past that.

As for server rendering, I've been getting good results with Sapper.js [0]. Here's an example of a webcomic page I started doing for a friend of mine, but it never took off:

https://dev.chordsandcordials.com

It's a pure HTML static web page, which started off as a server-side-rendered app, but with Sapper's export tool I didn't actually need to deploy the backend, since the content of each page is deterministic with respect to the URL.

My friend is not technical, but understands what FTP is and is able to edit a JSON file, so that's how we went about this.

[0] https://sapper.svelte.dev/


> Back before the age of SPAs we had those "dynamic" apps which updated parts of the DOM by firing requests to the backend, getting a piece of rendered HTML and just throwing it there.

> It was an absolute nightmare to maintain and I'm happy we're past that.

Yeah, this model was adopted by noted failure GitHub and look where it got them.


Many successful businesses are a nightmare to maintain and the fact that they are successful doesn't make them less like that - if anything it makes them more like that.

Hell, I used to work for one such business, namely CKEditor.


I'm doing almost the same. I spit out regular HTML made by templating on the server side and spice it up a little with Vue. It's so little that I could swap Vue out anytime.


I actually really like this htmx library on the surface, will give it a spin soon.


Not sure why you think browser history and middle click don't work on SPAs.


I'm not a web developer but my girlfriend needed a website to show her photography work so I decided to make it for her.

It's the simplest thing in the world, basically just three columns with photo thumbnails and the only javascript is some simple image viewer to display the images full screen when you click the thumbnails.

It's really, really basic, but I was surprised by the feedback I received: many people were impressed by how slick and fast it was. So I went looking for professional photographer websites and, indeed, what a huge mess most of them are. Incredibly heavy frameworks for very basic functionality, splash screens to hide the loading times, etc... It's the Electron-app syndrome: it's simpler to do it that way, so who cares if it's like 5 orders of magnitude less efficient than it should be? Just download more RAM and bandwidth.

Mine is a bunch of m4 macros used to preprocess static HTML files, and a shell script that generates the thumbnails with ImageMagick. I wonder if I could launch a new fad in the webdev community. What, you still use React? That's so 2019. Try m4 instead, it's web-scale!


Can you give more details? I don't think I've ever heard of M4 before.


You probably already have m4 on your development machine:

https://en.m.wikipedia.org/wiki/M4_(computer_language)


OK, I'd never heard of this before and now I'm questioning why we have all these JS templating frameworks like Mustache, Pug[1], and the rest.

[1] I actually use this one and it's just fine - great, in fact - but seeing M4 does make me feel like a lot of people may have spent a lot of time reinventing a wheel.


I actually looked into some of these frameworks (in particular mustache but also a few others) before deciding to use m4. They were all either too complex or too limited (sometimes both!) for my use case.

M4 is old and clunky but it gets the job done without having to install half a trillion Node.js dependencies. I also know that my script will still work just fine 5 years from now (or even 50 years in all likelihood).

That being said, don't trash your favourite JS framework right away, while m4 is perfectly fine for simple tasks I don't even want to imagine the layers of opaque and recursive macros you'd end up having to maintain for any moderately complex project. It's like shell scripts, it's very convenient but it doesn't really scale.


> while m4 is perfectly fine for simple tasks I don't even want to imagine the layers of opaque and recursive macros you'd end up having to maintain for any moderately complex project

I'm not doing anything too clever with Pug, but there are certainly a bunch of things that it makes quite easy that would otherwise be awkward or complex.

Lots of things work really nicely as well: includes, sections, configuration, and it certainly cuts down on typing. I'm absolutely not contemplating a switch from Pug to M4.

I just find it interesting that there's this thing that's been hanging around for decades that would do at least a partial job and is still decent for simpler use cases.


Have a look at Perl's Template Toolkit. It used to be state-of-the-art for text and HTML templating.


The syntax of m4 is unwieldy, and a "standard library" would also be needed to make it useful for smaller tasks. How often do you define loops yourself?

Sadly this holds to an unexpected degree for Mustache.

[1] - looks like a DOM AST + JavaScript to me, quite unlike a template


I've tried to get into M4 in the past but was always put off by the syntax. Mustache has been my go-to since it's simple and has lots of implementations in different languages. Are there any good reasons to use M4 over Mustache?


The only reason I didn't use Mustache was that it was unnecessarily complicated for my very simple use case. I just needed to include blocks from other files in order to share boilerplate between pages, plus basic variable substitution for things like the page name. I even considered using the C preprocessor instead!

But I definitely wouldn't recommend using m4 for anything beyond a quick hack, unless you're an m4 wizard (or willing to become one) and you trust that whoever works on the project next will be one too. It's definitely clunky and not very user friendly.


Hah, I still have some m4 html macros lying around, it's certainly the quickest way to make HTML less painful.


Some of us still remember the hell of configuring sendmail, you monster.


Link?


I'm glad this is the case. I've been a Rails developer for close to 10 years now, but 3 or 4 years back I got sucked into the React world. I bought right in and my company quickly adopted the "React on Rails" pattern. Looking back, it was one of the worst professional decisions I've made in my career. Now we're back to server side rendering and StimulusJS on the front-end when needed. Productivity is way up, and developer happiness is way up. With new tools like https://docs.stimulusreflex.com and https://cableready.stimulusreflex.com I'm very excited about what can be accomplished with minimal JS.

(Note: I still think React is an awesome library! I'm sure there are devs that are super productive with it too. It just wasn't the best fit for me and my company)
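For anyone unfamiliar with the Stimulus side of that stack, the flavour is roughly this: the server renders the HTML, and a small controller sprinkles behaviour onto it. A sketch using plain Stimulus (not StimulusReflex); the markup and names are invented:

    // hello_controller.ts - sketch of a minimal Stimulus controller
    // <div data-controller="hello">
    //   <input data-hello-target="name" type="text">
    //   <button data-action="click->hello#greet">Greet</button>
    //   <span data-hello-target="output"></span>
    // </div>
    import { Application, Controller } from '@hotwired/stimulus'

    class HelloController extends Controller {
      static targets = ['name', 'output']

      declare readonly nameTarget: HTMLInputElement
      declare readonly outputTarget: HTMLElement

      greet() {
        // The page itself is server-rendered; this only adds the interactive bit.
        this.outputTarget.textContent = `Hello, ${this.nameTarget.value}!`
      }
    }

    Application.start().register('hello', HelloController)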


A company I contract to for backend and server stuff made the jump from static HTML to client-side rendering with React. They did it because the consulting company they went to recommended it "because it was the future", and I am sure in no small part because that was what the consulting company specialised in.

It was the worst decision they have ever made. The site they ended up with was incredibly slow, and given the relatively few pages on the site, you never really make up for that initial load with the time saved later.

It's also incredibly hard to write well, requires a special third party service to show anything in Google and is incredibly hard to manage.

They don't realise this of course, and are now attempting to solve the management and initial load issues by splitting the app up into three distinct apps. It won't help.


I think the problem here is less about choosing to utilize a client-side rendering implementation and more about choosing to adopt the “SPA” paradigm - where everything is smashed together in a single application bundle.

React + React DOM is something like 35kb gzipped. That’s not nothing (don’t forget caching) and pushing the initial render to the client (though not strictly necessary) does incur a bit of a penalty, but I think the benefits outweigh the drawbacks in many more use cases than people give credit.

The real problem is two-fold: The first, as I stated above, is wrapping the entire application in a client-side implementation. As many people are pointing out this is often unnecessary. You don’t need to go full “SPA” in order to benefit from the vdom.

The second (related) reason is when developers just start adding 3rd party dependencies without considering their impact (or if they are necessary). React is a library, and for the features you get it’s really not that big. If that’s all you are using to add that extra sparkle to some of your pages I firmly believe you are getting the absolute most “bang for your buck”.


I really like the turbolinks approach - you simply write HTML and then include the script in your head tags. However, I'm still hooked on Markdown. So I am still prerendering HTML - and then doing the routing with Hyperapp. (See https://href.cool/Tapes/Africa for an example - you get a prerendered static HTML page, but it uses JavaScript from there to render the other pages.)

The ultimate approach is Beaker Browser though. You can actually just write your whole site in Markdown (/index.md, /posts/batman-review.md, /posts/covid-resources.md) and then write a nice wrapper for them at /.ui/ui.html. This means you can edit posts with the built-in editor - and people can 'view source' to see your original Markdown! It's like going beyond the 90s on an alternate timeline.

(A sample of this is this wiki: hyper://1c6d8c9e2bca71b63f5219d668b0886e4ee2814a818ad1ea179632f419ed29c4/. Hit the 'Editor' button to see the Markdown source.)


I kinda went down the same path to generate my site. I had a static generator which worked like most static generators, as a standalone program, and moved to having a Python script (and its .venv) in the root folder of my content, which takes the Markdown and converts it to HTML.


This sounds like a dream.


Schopenhauer's 19th century essay "On Authorship" has been a personal fave since discovering it last year:

Writing for money and reservation of copyright are, at bottom, the ruin of literature. No one writes anything that is worth writing, unless he writes entirely for the sake of his subject. What an inestimable boon it would be, if in every branch of literature there were only a few books, but those excellent! This can never happen, as long as money is to be made by writing. It seems as though the money lay under a curse; for every author degenerates as soon as he begins to put pen to paper in any way for the sake of gain. The best works of the greatest men all come from the time when they had to write for nothing or for very little....

https://www.gutenberg.org/files/10714/10714-h/10714-h.htm#li...

Brain Pickings articulates my reasons well, though really, just read the source:

https://www.brainpickings.org/2014/01/13/schopenhauer-on-aut...


> What an inestimable boon it would be, if in every branch of literature there were only a few books, but those excellent! This can never happen, as long as money is to be made by writing.

Well, we're making progress towards reducing if not eliminating profit through authorship.


Yes, the joke is that unemployment benefits are the real Endowment for the Arts. One wonders if we would have a healthier society if those who are capable of creating great art could do so by being freed up from financial constraints.


We would also play better golf. Still keen to subsidise it?


The "Player of Games" by Ian M Banks takes place in a far-future society in which the abundance is so high that many people do pass the time playing games or whatever else enriches their lives. The question is: since we long since passed the point at which most people could work one day a week and maintain the level of abundance that existed in the 1960s, is that the society that we want to live in?


My iPad is 10 years old and there are websites that bring it to its knees, especially mobile.twitter.com links. I don't click them anymore, it's too frustrating. Maybe web devs should be given low-end machines to work on so they can experience what their non-desktop users experience. The whole 'mobile web' distinction really shouldn't need to exist, my iPad isn't a mobile phone from 2005.


If your iPad is 10 years old, then it’s an original iPad. The original iPad received its last update 8 years ago. I really enjoyed my launch day original iPad — it was incredible, just because it was such a new experience, but the hardware was severely underpowered even for the time. Laptops from 2010 are probably much better equipped to browse the web today than that 2010 iPad.

It just seems very idealistic to expect most websites to cater to a browser that hasn’t received an update in 8 years running on a processor that has a fraction of the power of any Raspberry Pi 2 or above.

Modern iPads are served the “real” web by default, further emphasizing why you would benefit from upgrading at least once a decade.


A 1 GHz Cortex-A8 is a ton of computing power. The proof of this is iOS itself. When you bring the GPU into the mix, it's obviously capable of a lot as an operating system. There are no problems displaying high-resolution static content and video content on an iPad. But around the same time, it became fashionable to start rendering static content on the fly, maybe inserting some not-so-static content around it to justify this choice.

Add in advertising. And a browser that you might have trouble controlling script execution on.

Add in tracking. Remember that story about how eBay fingerprints your browser in part by mapping the open websockets you have? That costs power and cycles. And the cookies have been piling up since the 90s.

Speaking of storage, it's now much more common to use localStorage. Now anyone with a website on the internet can store appreciable amounts of stuff on your computer if you visit their website. And they can read and write that storage as much as they want while you're on their site, without regard for your computer's performance.

And all of this is just considered normal and regular. This isn't even getting towards abusive behaviour like running a crypto miner in a browser or something. This is just web applications continually expanding their resource entitlements.

The web is a great honking tire fire. Many articles and books have been written about this, many of them are summarily dismissed by web developers as ivory tower nonsense. But the trajectory of system requirements for displaying mainly-text documents is growing at an unsustainable pace, and there is eventually going to be some kind of reckoning.

I have a $3000 1 year old laptop, and sometimes slack gets so slow I have to kill the browser process and start over again. The issue is not hardware.


>I have a $3000 1 year old laptop, and sometimes slack gets so slow I have to kill the browser process and start over again. The issue is not hardware.

Keep in mind, the 1st gen iPad was single core and only had 256MB of RAM, which was the same RAM capacity as the then current iPod touch. Compare it to any PC with a single core Intel CPU and the experience will be largely the same, except of course the Intel CPU will require x10 more energy to run.

And yes, it did have a really great CPU for its time. Its core CPU architecture remained largely unchanged for 2 more generations. The iPad 2 added an extra core (A5), and with the iPad 3rd gen, Apple gave it more GPU power in the SoC (A5X).


The issue is that when the GP's iPad was new, it didn't have many problems browsing the web. I know mine did not. Outside of Flash-using sites or extreme JavaScript beasts the original iPad was a pretty capable web browsing machine.

The web has regressed when an old iPad (or PC) can no longer reliably view it. It's not like words got harder to display in the intervening decade.

A blog post, news article, or a tweet shouldn't require a quad core CPU and gigabytes of RAM to be readable.


> The issue is that when the GP's iPad was new, it didn't have many problems browsing the web. I know mine did not.

Those are some rose colored glasses. The first page (technobuffalo) loaded in this video takes 10 to 15 seconds:

https://youtu.be/caTUPKJ5Zfo

Based on what was said, that page had already been loaded once, and it still took that long.

Loading a Google Search took only about two seconds, because of how extremely minimal and optimized the search page was back then.

Loading The NY Times took about 8 to 12 seconds, depending on where you draw the line.

So, at the time, maybe we were used to webpages taking a while to load on mobile devices, and it seemed very reasonable. I’m sure there were some extremely simple websites that loaded quickly, but The NY Times was one that Apple promoted heavily at the time (apparently) as demonstrating what a good experience the iPad browser was.

Nowadays, we hold the web and our devices to much higher performance standards.

It’s not an apples to apples comparison because the content is entirely different, but the context of this thread is that modern websites are significantly harder to load and render.

Even though content is supposedly much more resource intensive today, my 2018 iPad Pro loaded the technobuffalo home page initially in less than 3 seconds, and subsequent page clicks are even faster.

This iPad loads google search results even faster than the original iPad.

This iPad loads the New York Times in under 3 seconds, with subsequent page clicks taking the same or less time.

Based on the numbers, a contemporaneous 2010 iPad was 3x to 5x slower at browsing the 2010 web than my 2018 iPad is when it comes to browsing the 2020 web, and that’s a roughly two year old iPad design, so it should be even more at a disadvantage. My iPad is also rendering more than 5x as many pixels while doing that.

In conclusion, the original iPad was severely underpowered.

> A blog post, news article, or a tweet shouldn't require a quad core CPU and gigabytes of RAM to be readable.

Conceptually, I agree, but every image and video we use for content in websites now is significantly higher resolution and quality than they were back then. If you just want to read text, then you’re correct.


Page load times are a trailing indicator for web performance; they're not really the core issue I'm talking about. The original iPad could render even large HTML documents at a usable speed, including styles and inline images. Once a page was loaded and rendered, scrolling, tapping links, and interacting with forms were all usably fast. Even while the content was loading you could interact with the page.

The video you linked showed this capability. That TechnoBuffalo page was a pathological case for the iPad rendering and it was still interactive fairly quickly even if all of the resources weren't finished loading. I had the original iPad and browsing worked just fine on it. Even when pages took a long (multiple seconds) time to load they were scrollable and interactive. I could read the content as everything loaded.

The issue today is there's no page to start rendering in the bloated JavaScript way of the world. The HTML received from the server is just a reference to load all the JavaScript which needs to parse and execute then fetches and renders the content. A relatively low powered device, like the original iPad, needs to do vastly more work to display even just text content than a static HTML document.

It's not shocking that your modern iPad renders pages faster than the model released a decade prior. Not only does it have far more power and memory but the network (both last mile and far end) is faster. It's also got an extra decade of development on WebKit. The web is more bloated but the modern iPad has ramped up its power to compensate.

Look at Reddit versus old.reddit.com. The "modern" Reddit page has poor interactivity even on my current iPad. The old.reddit.com site, which is similar in complexity to 2010's Reddit, renders damn near instantly and has no interactivity issues.

> Conceptually, I agree, but every image and video we use for content in websites now is significantly higher resolution and quality than they were back then. If you just want to read text, then you’re correct.

Using huge images and video for "content" is part of the problem. Images have been used on the web since Mosaic, older devices can handle inline images just fine. It's the auto playing video ads and tens of megabytes of JavaScript executing to display a dozen paragraphs of text that's problematic.


> It's not shocking that your modern iPad renders pages faster than the model released a decade prior. Not only does it have far more power and memory but the network (both last mile and far end) is faster. It's also got an extra decade of development on WebKit. The web is more bloated but the modern iPad has ramped up its power to compensate.

The main point was that the 2010 iPad was being given the most favorable conditions, and it still lost horribly, because even compared to contemporaneous devices, it was very underpowered, unlike current iPads:

- your claim is that websites are substantially heavier now (which I agree), putting the 2018 iPad at a disadvantage

- the 2010 iPad was browsing early 2010 websites in that video, so we've had 10 years of bloatification since then

- the 2018 iPad Pro is browsing 2020 websites, websites built years after it was released, so surely more "bloated" than they were in 2018

- being 2010 websites, they were probably much simpler to render

- the 2010 iPad's screen had 5x fewer pixels to contend with

Nowhere was I saying that the 2010 iPad was super slow to render 2020 webpages in all their bloaty goodness. That would be an obvious conclusion. If the 2010 iPad's performance was so good at the time, but only became slower as the web became much more bloated, why was it still so much slower at browsing 2010 websites than my 2018 iPad is at browsing 2020 websites?

The 2010 iPad was actually slow from the beginning, as the video proves. Since it was slow back then, it shouldn't be surprising that it's slower and more painful now that websites want to support higher resolution experiences by default. Yes, they could put effort into giving old devices a lower res experience, but why? That old browser is one giant security vulnerability at this point, and no one should be browsing any websites they don't control on that thing.

Even with all those advantages being in the 2010 iPad's court, it was still 3x to 5x slower than a 2018 iPad browsing 2020 websites at 5x the resolution. This is not even a 2020 iPad Pro -- this is a 2018 iPad Pro. Imagine how much worse a 2008 iPad would have been at browsing 2010 websites, if it had existed.

You say that it's "not shocking" that they ramped up the power so it can browse better, but the point is that we're loading substantially heavier websites today significantly faster.

How is that possible? Because the 2010 iPad was severely underpowered. If it had been running on a chip that was equivalent to laptop processors of the era (as my iPad Pro's chip is), then it would likely have loaded the 2010 websites about as quickly as my iPad is loading 2020 websites.

> Once a page was loaded and rendered the scrolling, tapping links, and interacting with forms was all usable fast. Even while the content was loading you could interact with the page.

Yes, it's very impressive how much interactivity Apple was able to give the 2010 iPad with its really terrible processor, once the loading finished.

That interactivity isn't because the chip was any good. I remember very clearly that it was because you were basically scaling a 1024x768 PNG while you zoomed in and out. Once you let go, the iPad would take a second to re-render the page at the new zoom level, but you were stuck staring at a blurry image for a second after zooming in. The GPU was really good at scaling a small image up and down. The CPU was not so good at rendering websites.

It was also very easy at the time to scroll past the end of the pre-rendered image buffer, and you would just stare at a checkerboard while you waited on the iPad to catch up and render the missing content. iOS actually drastically limited the scrolling speed in Safari for many years to make it harder for you to get to the checkerboard, but it was still easy enough.

> Look at Reddit versus old.reddit.com. The "modern" Reddit page has poor interactivity even on my current iPad. The old.reddit.com site, which is similar in complexity to 2010's Reddit, renders damn near instantly and has no interactivity issues.

New Reddit is one of the worst websites on the entire internet right now, if not actually the worst popular website in existence. I really don't understand how that hasn't been scrapped at this point. It's not representative of modern web experiences, except possibly in your mind. YouTube, The NY Times, Facebook, Amazon... these are all modern web experiences that work great on anything approaching reasonable hardware.


But what is the great new experience offered by something like Twitter's web site that demands all that new power and brings that machine to its knees? What cutting-edge features is it offering that it didn't offer in 2010? That's kind of what made the web a cool platform: it was backwards compatible by default and could run on all kinds of things.


Tracking and data collection. The way they make money. Twitter in 2010 was actively losing money, if I recall correctly.

That said... a modern frontend that doesn't run properly on an Intel Core 2 machine with no adblocking should be massacred.


I think it's a bit too much to expect sites to cater to a device (and a somewhat underpowered one at that) that hasn't received any updates for several years.


No, it really isn't. Treat every connection like horrible shared wifi in a convention centre, all data as if it's an overage charge, and all of the processing as if your users' batteries are mere seconds from dead. Always. As a pleasant side-effect, you get pleasantly snappy performance on old gear - which may be the latest and greatest your users can afford without avoiding whatever it is you're trying to sell.


how is the battery still alive?


This seems to be one developer's wishful thinking, without any evidence presented to back up the assertion. Pointing out "hey, here's a couple websites that do server side rendering" does not a trend make.

We're ripping out webflow, if anecdata counts for anything (it doesn't). Webflow occupies the barren middle ground of "too complicated for marketing people, too simple for technical people". I find it much easier to write html than to figure out how to get their UI to make the html I want.


GOV.UK is a good example of a mainstream site that's built in the traditional manner. It's actually a collection of hundreds (thousands) of separate services, the vast majority of which are rendered on the server and use JS only where necessary.

As there's no advertising or images on most pages they tend to be incredibly fast too.


I just hit the same wall with Bubble... it definitely has some impressive attributes, but sometimes I just need to see the code. I really hate clicking around looking for descendant nodes in a UI vs just looking at an HTML template.


I really hope personal blogging becomes popular again! Speaking of which, I still haven't found a really good alternative to the "horrible" WordPress for blogging. It has:

- Integrated API, RSS

- Tons of plugins

- Accessibility, translations

- Easy and powerful editor (Gutenberg)

- Comments sections and forms w/ complete ownership and moderation

- Easy data imports from multiple platforms.

- Users and roles

- 100% open source w/ GPL. You own your data

- Extremely easy and cheap to host and move around.

I love modern tooling and git-based workflows for all my projects, but my "static" 11ty/Gatsby.js blog doesn't provide all these features out of the box. Instead of writing, you end up reimplementing basic CMS features.


Drupal is what I use. In addition to the blog part, I have enjoyed creating custom content types to track some interesting things in my life (cars, phones, and computers, for example). Then I add a view to show them all in a table with the columns and sorting that I want for that particular content type.

In the past I have used the APIs to a very small extent. I stored license keys for an app as a content type so that they were easy to manage by employees, and the app connected to a service that called the API.

Probably any fleshed out CMS will check all but the last one of your boxes (nothing is going to be "extremely" easy to host).


Have a look at Serendipity. It certainly doesn't have "tons" of plugins, but it's had all the ones I need, and it's dead simple: PHP and either a MySQL or a PostgreSQL database underneath. I've used it for years, happily.


What? Gatsby plugs directly into WordPress. That's the whole point.


Have you considered something like Ghost? ghost.org


Ghost looks great and very polished! For me, though, it provides much less out of the box (no built-in comments, etc.). It's also harder to self-host and manage, and the hosted service costs $30/month, which is extremely pricey for an indie blog. The third-party ecosystem for WordPress (plugins/themes) is huge; I can easily extend my blog into an e-commerce store, an online community, etc.


> It's also harder to self host and manage

Eh, I disagree. The only difference is that it uses Node instead of PHP. You still hook it up to MySQL/MariaDB/SQLite and you're good to go.

Plus I consider its Members feature (https://ghost.org/members/) to be a game-changer, though it's powered by Stripe, so you'll have to be in one of 39 countries supported by it to make it work without extensively hacking your theme.


That's a big difference. I can't use cheap shared hosting, and managing a VPS (OS updates, database backups, etc.) is too much for a small blog. Node.js is fine, but it turns out that after all these years, PHP is still the king for CRUD/blog/CMS applications, which are 99% of the websites out there.

As for the members feature, WordPress has tons of "membership" plugins. Still, the Ghost landing page for members [0] does a great job explaining why I need this. Excellent design and copywriting.

[0] https://ghost.org/members/


Hey! John from Ghost here :) here's a direct comparison of setting up self-hosted WP vs Ghost, just as a point of reference: https://youtu.be/rMKNgV1gTHg


That sounds exactly like October CMS.


It never went away. For those of us old-fashioned devs on Java and .NET stacks, it has been our bread and butter for the last 20 years of web stacks: SSR with some JavaScript on top.

I guess what is happening is newer generations rediscovering that it actually makes sense to generate static content once, instead of redoing it on every client device across the globe.


Social networks are failing us and we want independence and community. The web used to be that, then it turned into a gated gossip community.


Funny that he started with the claim that the dancing baby gif wasn't coming back. Turns out, it's already back. https://twitter.com/JArmstrongArty/status/122590192989894656...


> Frontpage and Dreamweaver were big in the 90s because of their “What You See Is What You Get” interface. People could set up a website without any coding skills, just by dragging boxes and typing text in them.

> Of course they soon found that there was still source code underneath, you just didn’t see it. And most of the time, that source code was a big heap of auto-generated garbage - it ultimately failed to keep up with the requirements of the modern web.

If you do not see the source code, it doesn't matter whether it is garbage or whether it follows any "modern web requirements" - all that matters is whether it does what you expect it to do. Besides, it is a bit hypocritical nowadays to complain about the code underneath a WYSIWYG tool when many web developers use transpilers that target CSS and JavaScript, and pretty much all sites rely on dynamically generated and altered HTML that doesn't let you make any more sense of the final output than something Frontpage or Dreamweaver would generate.

Sadly, the closest thing I could find nowadays to a WYSIWYG site editor is Publii [0]. It suffers greatly from the "developer has a huge screen so they assume everyone has a huge screen" syndrome, and I really dislike pretty much all of the themes available for it (everything is too oversized). And it is an Electron app, because of course it is, despite not needing to be one (it doesn't offer full WYSIWYG functionality, only for the article editor, which isn't any more advanced than Windows 95's WordPad, and it relies on an external browser to show you the final site). But it does the job (I tried it for a new attempt at a dev blog of mine [1]), even if I dislike how oversized everything is.

[0] https://getpublii.com/

[1] http://runtimeterror.com/devlog/


I first hardcoded kleinbottle.com with handwritten HTML; over 20+ years, I've gone through several tools. One by one they've evaporated - Home Page, FrontPage, GoLive, Dreamweaver 5. I'm now hobbling along with BlueGriffon.


I’d love to use a search engine that simply didn’t index websites with moderate or excessive amounts of JavaScript, images, and video.

You wouldn’t need AMP because it would load quickly, ads would be minimal, and the text content would probably be forced to be higher quality because it would have to stand on its own.

Does such a thing exist?


Maybe https://wiby.me/ ? I love surfing this on my free time.


It'd be so easy for Google (or others) to add that capability in their search, using the pagespeed insights they're now using to rank. Like, "squirrel -amp cumulative_layout_shift:0 total_blocking_time:0"


Even better. Blacklist all sites with ads. I’m guessing google will never implement that.


How would websites create revenue then? People are very unwilling to pay for online articles.


Maybe websites should stop spamming junk for SEO and stop relying on clickbait to create revenue? I am very happy to use adblockers and have very little sympathy.


I've been playing around with this for several years now, building more or less elaborate frameworks for server-side rendering and dividing the interface into separate pages.

I blame Seaside [0] for corrupting me. Never used it to build anything, but once the idea of building the user interface on the server was in my head there was no way back.

Though I have to admit I still find JSON really convenient for submissions compared to using fields for everything as it allows massaging the data on the way.
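To illustrate that submission pattern, a minimal sketch (the "#signup" selector and "tags" field are hypothetical): intercept a normal form submit, massage the field values, and POST JSON to the same server-rendered app.

    // Intercept a normal form submit, massage the values, and POST JSON
    // instead of url-encoded fields. "#signup" and "tags" are made up.
    document.querySelector('#signup').addEventListener('submit', async (event) => {
      event.preventDefault();
      const data = Object.fromEntries(new FormData(event.target));
      data.tags = data.tags.split(',').map((t) => t.trim()); // massage on the way
      await fetch(event.target.action, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(data),
      });
      window.location.reload(); // the server still renders the next page
    });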

Besides that I've found the approach to be a total success. Pages load instantly, bookmarks and back buttons work as expected and most of the application stays on the server.

[0] http://www.seaside.st/


Webflow is touted as "the new Dreamweaver". Of course, it's "software as a service", about 3x as expensive as basic web hosting.


> about 3x as expensive as basic web hosting.

Good, that's a bargain IMO. I get way more than 3x the value out of webflow than I do "basic web hosting".


Can you self-host Webflow-generated content?


You have to pay $192/year before you can export what you created.


That's very expensive.


Another point to this, I equate the modern fediverse with all the old message boards.

Message boards are still around, but they used to be an integral part of web culture. They essentially took over from dial-up BBSes.

But now community boards have moved to cloud services like Discord. The self-hosted boards are still around in the shape of federated ActivityPub instances.

It makes a lot more sense than hosting an isolated island of PunBB or vBulletin.

I just hope more communities host their own small localized ActivityPub instance, using AP relays to create a vibrant fediverse.


For a lot of internal corporate web applications, server side rendering is what makes most sense. These applications are always used from browsers on a company provided laptop. You don't need to worry about multiple frontends and "web scale"

Back in the day, it used to be that internal apps were shitty to use and had slow service-layer code. But once the data got to the view layer, at least the pages rendered fast. Now, with the proliferation of SPAs, we have a shitty user experience, slow backends, and slow UI renders.


Preloading on button-down, nice detail optimization. It's possible that you have to abort that request, but it will be the rare exception. It's my favorite thing I learned today.
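For reference, a rough sketch of the idea (not from any particular library; all names are illustrative): start the fetch on mousedown and abort it if the press never turns into a click on that link.

    let pending = null;

    document.addEventListener('mousedown', (event) => {
      const link = event.target.closest('a[href]');
      if (!link || link.origin !== window.location.origin) return;
      // Start fetching a bit before the click completes, to warm the cache.
      // (Only helps if the response is cacheable; real libraries handle more edge cases.)
      pending = new AbortController();
      fetch(link.href, { signal: pending.signal }).catch(() => {});
    });

    document.addEventListener('mouseup', (event) => {
      // The rare exception: the press ended somewhere else, so cancel the request.
      if (pending && !event.target.closest('a[href]')) {
        pending.abort();
        pending = null;
      }
    });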



Wow, Hey.com has none of the lag that modern websites have. Is the email client as responsive?


It's created by Basecamp, which makes Turbolinks [0], Turbolinks iOS [1], and Turbolinks Android (deprecated) [2], so there's a good chance that snappiness you're seeing is due to those awesome tools! (In fact, this GitHub issue comment [3] seems to confirm that HEY uses Turbolinks.)
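As a point of reference, wiring Turbolinks up is roughly this small (assuming it's installed as an npm package and bundled); it intercepts same-origin link clicks, fetches the next page's HTML, and swaps the <body> without a full reload:

    import Turbolinks from "turbolinks";

    Turbolinks.start();

    // Fires on the initial load and after every Turbolinks page swap,
    // so per-page setup code still has a place to live.
    document.addEventListener("turbolinks:load", () => {
      console.log("navigated to", window.location.pathname);
    });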

On a related note, really wish things had gone this direction. Instead, it feels like React, Angular, etc. re-created the backend in the frontend, so now for most apps you essentially have two backends plus one frontend to maintain. I think as soon as our frontends started requiring controllers and routes to work we should have been like, "Hey, wait a minute..." But I guess design folks tend to know JS, so I can see how that combo won.

My $0.02 from 5+ years in the industry is that you probably don't need React, but Godspeed if you think you do - just maintain extreme discipline or you'll end up with spaghetti code (esp. if contractors are involved) faster than you can blink.

Long-term, I hope things move back to sanity. React and company work OK for some problems, but 95% of websites/webapps can probably get by with either straight HTML/JS/CSS or server-side rendering/templating.

[0] https://github.com/turbolinks/turbolinks

[1] https://github.com/turbolinks/turbolinks-ios

[2] https://github.com/turbolinks/turbolinks-android

[3] https://github.com/turbolinks/turbolinks-android/issues/111#...

Edit: Formatting


I find the client quite responsive, yes. On login it opens instantly, and interactions feel almost instant to me. But I haven't tried Superhuman to compare; I highly suspect Hey isn't as fast.


Actually, it depends on the kind of website you'll build.

Some years ago, I made an app with Rails and Turbolinks and some "tricks" to make Ajax work smoothly. The first version was built in six months; then I rebuilt the second version with React in three weeks!

The pain is in refactoring and adding new features, as well as speed of development.

That's the day I discovered React. The way I develop Rails apps now is just to make the API JSON responses match the mocked JSON on the React frontend - nothing more to think about or trick!

There's a reason (or many reasons) I and many others chose React (or similar libraries/frameworks) to get the job done.


One idea I’ve been thinking about building is sort of a hybrid SSR where you make use of server-sent events to continually render more on the page based on the user’s interaction (the most obvious would be a scroller).

Of course, I have yet to investigate how modern browsers render a never-ending index.html, how I would send the client events and correlate them with the existing open original request, and how this would scale to multiple hosts behind a load balancer (but maybe that’s getting ahead of myself, haha).
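A bare-bones sketch of one possible client half of that idea (the "/stream", "/stream/more" endpoints and "#feed" id are made up): instead of a never-ending index.html, the page keeps a separate SSE stream open and appends whatever HTML fragments the server pushes.

    // Server pushes ready-made HTML fragments over Server-Sent Events;
    // the client just appends them as they arrive.
    const feed = document.querySelector('#feed');
    const events = new EventSource('/stream');

    events.addEventListener('message', (event) => {
      feed.insertAdjacentHTML('beforeend', event.data);
    });

    // Nudge the server to render more when the user nears the bottom.
    window.addEventListener('scroll', () => {
      const nearBottom =
        window.innerHeight + window.scrollY >= document.body.offsetHeight - 500;
      if (nearBottom) fetch('/stream/more', { method: 'POST' });
    });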


That's sorta the idea behind server-rendered Blazor. Components are rendered on the server and passed over to the client via websockets.

Unsurprisingly, input lag and scalability can be major obstacles, but I expect it to make major headway in enterprise apps.


One of the things I say (when I think some poor bastard has no choice but to listen), is “The September That Never Ended was the best thing to happen to the Web.”

That was what changed it from a niche community of geeks to a true “everyman’s town square.”

It was also the worst thing to happen to it. It heralded a tsunami of money and God-awful content.

Money tends to be a damoclean butterknife. It will spur tremendous growth and energy, and also reduce the moral structure to a septic tank.

I’m glad the BLINK tag is dead.


Along these lines, does anyone have any recommendations for a service or framework to create a simple personal site? I just want a basic blog, project pages, and photo gallery. I use Jekyll at the moment to statically generate pages to host on github.io, but frankly, I don't have much webdev experience and I'd rather just have something work out of the box at this point.


I would say: start by doing pure HTML, then progressively work with PHP to include header.php/footer.php on every page. Then create helper functions for a few things you don't wanna repeat.

You'll be surprised by how good it feels to know your website from A to Z.

This is what I do for my own site: https://www.juliendesrosiers.com/ . Almost every URL ends with an old-school .php for that very reason, lol. But I like the fact that every time I want to modify something, I simply SSH into my Dreamhost account and use vim to edit posts and pages. But in your case it could be https://www.panic.com/coda/ .

And if starting from nothing seems scary, you can always start with a basic ThemeForest theme, or a framework like Bootstrap. Having the HTML/CSS already done goes a long way.


This is exactly what I did a few weeks ago after getting fed up with WordPress for my personal site. Static HTML pages and some PHP for the header and the blog homepage. It means I need to write in plain HTML, but that’s fine; the only JS is Prism syntax highlighting.


Hugo is great. It's a feature-packed static website generator, and there are plenty of templates you can use and modify.

https://gohugo.io/


Personally I'd stick with the static site generators.

If you get something like WordPress installed on your own server, you then need to commit to doing the upgrades for it. I've been burnt enough times with self-hosted things like this, where I missed an upgrade for whatever reason (real life, etc.) and the site was hacked very, very quickly. There are ones that auto-update themselves, but I have found that those have their own issues, so they cannot be relied upon.

Static site generators tend not to have this problem.


Why not $5 DigitalOcean hosting + a WordPress image + a $20 WordPress theme with Visual Composer from ThemeForest? You can also use the DigitalOcean hosting to serve many other sites, not only one.


WordPress?


If we’re going back to the ’90s, let this be your reminder that we still don’t have something for web dev as easy to use as VB6.


We no longer have anything for the desktop as easy to use as VB6 (nor for mobile).


Plus ça change. :) I was just reading an article ("APL since 1978", [1]), which recounts the complaints of mainframe programmers when microcomputers were introduced in the mid-80s, because of how much harder it became to design application interfaces:

> Worse, the technical skill required to write applications suddenly increased dramatically. You needed to know about networks, how to deal with the poor reliability of shared files on a LAN — or how to construct client/server components to achieve the security, performance, and reliability that you needed. User interfaces, which were so simple on the mainframe, became an absolute nightmare as an endless procession of new GUI frameworks and UI “standards” appeared and then faded away.

[1] https://dl.acm.org/doi/10.1145/3386319


Anvil has basically created VB6 for the web but uses Python for the code-behind-the-forms. It can't do everything, but on the surface looks like a pretty valiant effort: https://anvil.works/


That’s nice, but JS is the language of the web. Anything that tries to step around it won’t reach the critical mass that VB had on Windows 9x.


We kind of used to... Dreamweaver, when it was still Macromedia, was pretty neat.


> at some point our kids might think frosted hair tips are totally cool

you shut your face sir! my hair was the BOMB when I was 14!!!


I want a good webpage:

- clearly written, like a Navy manual,

- but using well-made animated GIFs alongside flat pictures,

- with a few videos thrown into the mix that are short and concise, no longer than 3 minutes,

- and short audio clips to learn how to properly pronounce something.


They need to bring back Gopher. It has a good chance to become very hipster today.


Gopher still exists. There is even a new protocol, which is similar to gopher: gopher://gemini.circumlunar.space


I have actually tried a gopher HN mirror. And it was an interesting experience.


I haven’t even adopted the more modern JAMStack type techniques yet, and now this guy is saying those are old?! I still consider those closer to the holy grail; clients scale indefinitely, servers you still have to pay for.


What I can confirm is that old-fashioned HTML tags such as <fieldset> or <iframe> are coming back and are underrated. So much can be done with just plain HTML.


I did get a chuckle out of a page extolling the virtues of 'html over the wire' asking me if I was

'interested in things like front-end dev and the JAMStack'


Won’t work. Now there is one search engine, the first page is all ads, and all pages are over-optimized. Generally, no one will see any new sites or blogs.


This is so adorable. I wish more people and companies were using/designing/serving web sites the way that was meant to be done.


I've been counting down the days until it's okay again to use <tables> . I think I have a lot more counting to do ...


It's never been wrong to use <table> for its intended purpose.


Why do people miss the simple solution? Off with the ads, on with micropayments. If you like the content, pay for it.


I was totally expecting the content of this post to be "return to html leading to JAMstack"


This is a legitimate question apropos of OP‘s web ring: Who qualifies as a nerd of the 90s? Is it nerds born in the 90s or people who were nerds during the 90s?


The indieweb is so great. I ultimately turned webmentions back off, though, because I was worried about GDPR liability, and I wondered whether people who liked my tweet actually intended to have their name and like appear on my website.

My site is otherwise marked up for the indieweb, I've got indieauth working, and it's a cool community.


Serverless is the new CGI


I'm getting an ERR_HTTP2_PROTOCOL_ERROR in Chrome?


Good.


swings and roundabouts


Please don't call it server-side rendering. The HTTP server was always meant to respond with hypertext content. Serving JSON is the oddity, not the other way around.


For document-oriented stuff, you can serve HTML or plain text. For data-oriented stuff, I think it is not so wrong to serve JSON, CSV, RDF, a SQLite database, or whatever format it is, although such things should not be hidden from the user (especially if scripts are disabled). This is independent of whether or not there is also an HTML version of the data (if so, the HTML could be generated either ahead of time or on demand; which is best may depend on the situation).
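For what it's worth, serving multiple formats for the same resource is cheap to do; a sketch using Express content negotiation (the route and data are made up) that returns HTML, JSON, or CSV depending on the Accept header:

    const express = require('express');
    const app = express();

    app.get('/stations/:id', (req, res) => {
      const station = { id: req.params.id, name: 'Example', lat: 0, lon: 0 }; // made-up data
      // Pick a representation based on the client's Accept header.
      res.format({
        'text/html': () => res.send(`<h1>${station.name}</h1>`),
        'application/json': () => res.json(station),
        'text/csv': () => res.send(`id,name,lat,lon\n${station.id},${station.name},0,0`),
        default: () => res.status(406).send('Not Acceptable'),
      });
    });

    app.listen(3000);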


Maybe I'm missing something, the article isn't very clear. One of the reasons why we have front end apps pulling from an API is because it allows for interoperability.

The same API that serves data to the web browser can serve that data to mobile apps and to third parties as well.

The idea of bringing HTML rendering back on to the server just doesn't seem useful to me.


    One of the reasons why we have front end apps pulling
    from an API is because it allows for interoperability.

    The same API that serves data to the web browser can 
    serve that data to mobile apps and to third parties as well.
I agree with the benefit of API interoperability, but I'd call that orthogonal to server-side rendering. Your server-side web app could be just another consumer of that repurposable API.
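As a rough sketch of that (Node/Express, with a made-up API endpoint; Node 18+ assumed for global fetch), the server-rendered page is just one more client of the same JSON API the mobile apps use:

    const express = require('express');
    const app = express();

    // The same JSON API the mobile apps hit; this server just turns it into HTML.
    const API = 'https://api.example.com';

    app.get('/articles/:id', async (req, res) => {
      const response = await fetch(`${API}/articles/${req.params.id}`);
      const article = await response.json();
      // Real code would escape these values before interpolating them.
      res.send(`<!doctype html>
        <article>
          <h1>${article.title}</h1>
          <p>${article.body}</p>
        </article>`);
    });

    app.listen(3000);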

    The idea of bringing HTML rendering back on to the server 
    just doesn't seem useful to me. 
It's a streamlined developer workflow, for one. You have one renderer, not an infinite amount of client-side renderers. Easier one-stop-shopping debugging. And in reality, I feel that the promised performance gains of client-side rendering never truly materialized for many use cases.

For projects involving small teams and non-trivial amounts, this is a boon. Any backend dev can, at least, write code to spit out some semantically correct HTML. Somebody is going to need to work on the UI/UX at some point but not necessarily full-time.

Contrast with a client-side rendering approach. If there is any significant amount of backend engineering behind the site (there isn't always, of course) now you have two apps with two divergent skillsets and more often than not, you need a minimum of two developers.


That's if you're making an application, sure, but on the web I view all sorts of sites, and most of them are just styled text containers, maybe with an optional way to mutate things. HN, Reddit, news, recipes, forums, galleries - these are things I regularly consume that don't need APIs to return content. For applications like Facebook it's understandable, but if you don't spend all of your time in social media, then you probably are not really using an application.


You must use old.reddit.com


On mobile I just use a client that requests data from the API and renders it for me. On desktop, yes, for now, but I'm in the process of writing views that load faster and don't require a browser.


Maybe I'm being dumb (I'm a sysadmin, not a developer) but wouldn't you just shift those API calls to the server rendering the page rather than the browser?


Yes


Having both is still an option with minimal effort. In Rails it can be as simple as adding .json to your request and you get the data, otherwise the HTML.


You're correct, but for many companies, APIs and multiple interoperable clients are a YAGNI.


You mean the locked-down APIs that are not available to the public? Those can rot in hell.



