There's never been a better time to build websites (simeongriggs.dev)
314 points by adrian_mrd 29 days ago | 324 comments



It's definitely a great time to build websites, but saying it has _NEVER_ been easier to make them... I'm not 100% sure about that.

Yes, Tailwind might make CSS a lot easier, and GitHub Copilot might make coding a lot faster... but is this really easier than in the early '90s, when you could just type <html> into Notepad and make a website that didn't require any CSS or images or JS or even more than just the most basic HTML tags?

When it comes to reading a blog or article, or consuming any other form of textual information on the internet, I find myself increasingly enamored with "Reader View" in Firefox, which basically ditches all the crap and displays the web page in a default style, as if it were just the most basic HTML - just like what we wrote in the '90s.

Of course this doesn't hold true for web apps that do more than just present textual information (and the occasional image) - but why do projects like ViewPure exist? There seems to be at least some demand for getting rid of all the clutter.

Is Tailwind really easier to work with than something like - let's say - pico.css? Of course, if you want to do super-elaborate layouts, then probably yes - but what if you embraced a more minimalist approach?

Isn't Hacker News itself a really great example of a website that successfully uses only very basic HTML, with very, very little styling and few fancy extra features?


> Yes, Tailwind might make CSS a lot easier, and GitHub Copilot might make coding a lot faster... but is this really easier than in the early '90s, when you could just type <html> into Notepad and make a website that didn't require any CSS or images or JS or even more than just the most basic HTML tags?

The thing is, you can still do all of this - but we have since built tools and frameworks that let you do it while giving you superpowers. I think Svelte is the best example of this. You can create a dead simple site with it and get the result you're talking about, but the real power is that you can now employ many of the advanced techniques that were once reserved for large web apps in that same dead simple site.

> Is Tailwind really easier to work with than something like - let's say - pico.css? Of course, if you want to do super-elaborate layouts, then probably yes - but what if you embraced a more minimalist approach?

I've used Tailwind quite a bit; its killer feature is that it's just intuitive. With most CSS frameworks, even minimal ones, I always have to refer back to the docs to see how they do a specific thing. With Tailwind you refer to the guide a few times but very quickly pick up how it does things. I think that's why it's so popular: I've typed out classes thinking "is this a thing?", and almost every single time, it was.


The problem is that expectations changed, so now you're "required" to do more in certain contexts.

I absolutely loathe frontend development in companies with more than 5 engineers. In the past you might have found some spaghetti code and various abuses of jQuery, but you could probably grasp the codebase in an afternoon. Nowadays every mid-sized company's codebase is a spaceship. For sure, you'll find a pseudo-technical VP of engineering who enables whatever cool technology, and resume-driven developers happy to pile on the latest Big Tech framework - which is likely to be an over-engineered, overcomplicated exercise in engineer retention and the marketing of engineering roles.

Then you have Pieter Levels pulling a mil per year with PHP and jQuery.


> In the past you might have found some spaghetti code and various abuses of jQuery, but you could probably grasp the codebase in an afternoon.

My first job out of college involved a Rails codebase littered with conditionally rendered jQuery snippets. It was a nightmare to figure out what code was even loaded on any given page. Give me a modern JS codebase over that any day.


Yeah, as someone who started out in JS with procedural jQuery, moved onto prototype overloading shenanigans with classes, and now React with functional components - the JS ecosystem and tooling is still a huge mess, but I wouldn't want to go back to jQuery or prototype class hacks. At least a modern React codebase has a whiff of engineering to it. No matter how carefully we did stuff the old way at the companies I worked for, it sooner or later became an unmaintainable mess of hacks and overrides.


One big question: these "expectations" you are talking about - whose expectations are those?

Might it be possible that those expectations, and those requirements for doing "more", are not coming from the actual users/visitors to the websites - but rather from the companies owning the websites?

Because I, as a visitor of HN, come here for the IT news and the nerd stuff. I happily put up with the layout, because the site actually meets my expectations for content. At the same time, take the best, most modern, snazziest website ever... why would I go there (more than once) if there's no content of interest to me?

And that, in my experience, holds true for almost ALL visitors. They come for the content, and they put up with the layout. And this table-based layout with spacer gifs right here is, imho, a lot easier to put up with than some design-heavy website that's perfectly optimized for conversion, engagement, and retention rates and whatnot.

I think the actual problem is that when we talk about "expectations" and "requirements", we talk about what the HOST/SELLER wants - not what is best for the visitor.


> The thing is, you can still do all of this - but we have since built tools and frameworks that let you do it while giving you superpowers. I think Svelte is the best example of this. You can create a dead simple site with it and get the result you're talking about

I don't know. I just tried to use Netlify to upload a simple static HTML/JavaScript page I made. Now I'm trying to figure out CORS errors. Pretty sure that didn't use to be a thing. For better and worse.


If you're trying to figure out CORS errors, it means you're either loading fonts or making XMLHttpRequests across domains. Neither of those were available in the 90s era OP is rhapsodizing about. You don't have to use the additional features/complexity of the past 15–20 years… but if you do, it's not surprising that things get a little more complex!


You actually do have to use HTTPS nowadays, and that's a non-trivial amount of complexity compared to just serving up a static page over HTTP.


Eh, most static site hosting providers offer it out of the box nowadays.


What if I want to run a simple but non-static website? It was a lot easier to just do CGI with a small Perl script in the 2000s.


Pretty much every web provider supports PHP, so it would be as simple as changing the extension from .html to .php


No it’s not. Certs are pretty dumbed down. So much so that there is a thing called certbot that just does it for you.


Yeah, we run certbot. It fails for some reason at least once a year and we have to do something to fix it. At the very least you have to find a way to run it on the regular and you can't forget about it or else your simple static website starts looking like malware. Tell me again how trivial it is for a 15-year-old kid using their parents' Windows 10 machine to get certbot running reliably.


IE had ActiveX since 1996, which allowed for things similar to Ajax.


I used https://neocities.org/ recently to make a simple static html website with no issues and no configuration. And I'm someone who usually uses the more "advanced" stuff. It's nice having the freedom to chase a little nostalgia sometimes.


Is that like a modern GeoCities?


Exactly. Same in spirit.


Welcome to CORS errors, so sorry you made the trip. :)

CORS will eventually click for you and then it will always make sense. Until then, sorry you’re going through the muck.


I honestly feel like CORS was invented just to get people to use reverse proxies more.


CORS is the mechanism that lets a site operator selectively relax the browser's same-origin policy. By default the browser refuses to let third-party pages read cross-origin responses, which keeps other sites from offloading requests onto your visitors' browsers, and conveniently reduces some cross-site attack exposure as a side effect.
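To make the mechanism concrete, here's a minimal sketch of the server-side half of CORS. The allowlist contents and the function name are made up for illustration, not any particular framework's API:

```javascript
// Origins allowed to read our responses cross-origin (illustrative value).
const ALLOWED_ORIGINS = new Set(['https://app.example.com']);

// Given the Origin header of an incoming request (undefined for
// same-origin requests), compute which CORS headers to send back.
function corsHeaders(requestOrigin) {
  if (requestOrigin && ALLOWED_ORIGINS.has(requestOrigin)) {
    return {
      'Access-Control-Allow-Origin': requestOrigin,
      'Vary': 'Origin', // caches must key the response on Origin
    };
  }
  // No header at all: the browser will refuse to expose the response
  // body to the requesting page.
  return {};
}
```

Note that the browser still sends the cross-origin request either way; what the header controls is whether the requesting page is allowed to read the response.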


Question: are people using Tailwind for layout, even when CSS grid is available? If so, what is the benefit of Tailwind layouts over modern CSS, beyond what Tailwind usually brings?


I am personally, for a couple of reasons.

(1) Tailwind uses utility classes that feel like syntactic sugar over full CSS, so I don't feel like I'm "using Tailwind for layouts" as much as "using useful CSS grid presets"

(2) Much like with margins and paddings, text sizes, and colors, Tailwind helps me "pare down" the number of different values available to me. Much like having preset "m-1", "m-2", etc. values helps me be more consistent, having preset grid columns and gap spacing helps me stay consistent and not go too crazy.

(3) Because I'm using something closer to CSS-wrapped-in-classes rather than a set of components built by someone else, and because tailwind gives both a naming convention and an easy mechanism for adding my own classes that fit right in to Tailwind's, if Tailwind doesn't give me the exact values I want for grid spacing, then I can modify theirs. If I need to add a couple new presets in addition to Tailwind's, then I can do so easily.

(4) Early on when CSS Grid was new, I got bit by some bad layout bugs that were very hard to find, and impossible to fix (they were browser bugs). So, while I'm sure the bugs I encountered have since been fixed, I'm a bit gun shy on grids. I occasionally use Grid when it makes sense (via Tailwind), but more often than not I find it easier, more flexible, and safer to just use nested flexbox cols and rows. For which I also use Tailwind, if the project is of suitable size.

To sum it all up better, I view Tailwind more as syntactic sugar over CSS that helps me build my own design system. So, I would totally use vanilla CSS for small projects that don't need a design system. But for anything more than that, I like Tailwind for the same reason I like using web frameworks on the back end: it helps me keep things structured and consistent and more maintainable, even if I could do it all on my own without the framework.
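As an illustration of the nested flexbox cols/rows approach from (4), the markup might look something like this - the classes are standard Tailwind utilities, but the page structure itself is made up:

```html
<!-- A page as a column of rows: header, content row, footer -->
<div class="flex flex-col min-h-screen">
  <header class="flex flex-row justify-between p-4">…</header>
  <div class="flex flex-row flex-1">
    <nav class="w-64">…</nav>
    <main class="flex-1">…</main>
  </div>
  <footer class="p-4">…</footer>
</div>
```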


So basically: because you're using Tailwind anyway for styling, using it for layout is nice for consistency's sake. I get that.

For (2), however, I usually use CSS custom properties to reach for preset values, e.g. setting "column-gap" to "var(--margin-inline-wide)" and "row-gap" to "var(--margin-block-short)", etc.


That's fair. If the question is "would I use Tailwind primarily for layouts and little else", then the answer would be "no". I would definitely use CSS variables along with grid or flexbox, because it's essentially the same end with less complexity.

I would also point out that Tailwind only makes sense to me when I'm also breaking up each individual UI element like cards or buttons into reusable components, either using a reactive framework like React or Vue, or a server-side template system. I'm not sure the maintainability of Tailwind survives with more monolithic HTML pages.


You still use grid or flexbox, but instead of writing a class with the grid properties and referencing the class name, you use a shortcut with predefined values.

Easier to remember and quicker to type. Centering content on a screen is class="place-content-center" which makes things easier.
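For reference, those utilities are just shorthand over plain CSS - roughly the following (the .centered class name is made up for illustration):

```css
/* Approximately what class="grid place-content-center" applies */
.centered {
  display: grid;          /* Tailwind's `grid` utility */
  place-content: center;  /* `place-content-center`: shorthand for
                             align-content + justify-content */
}
```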


This question doesn’t really make sense. Tailwind is just modern CSS, it’s just a composable shorthand for applying styles. There’s plenty of CSS grid utilities in Tailwind: https://tailwindcss.com/docs/grid-template-columns


I rarely use CSS grids like that when I'm doing layout. The only times I use grid-template-{columns,rows} directly is when I'm using auto layout (e.g. a stack of cards). Whenever I do an overall layout, I use named grid areas with different-sized cells (i.e. not repeat()). A layout scheme could be something like:

    .container {
      display: grid;
      grid-template:
        "nav    .      header header header" auto
        "nav    .      .      .      .     " 1ex
        "nav    .      main   .      aside " 1fr
        ".      .      .      .      .     " 1em
        "footer footer footer footer footer" auto
        / 40ch  1em    1fr    1ex    15ch;
    }
You could achieve something like this in Tailwind using nested flexboxes, but that is not the same as doing layout in CSS as you have to work with the shortcomings of the framework. Now I’m not saying it is not worth it, but saying it is the same is missing a huge aspect of how we do layout in modern CSS.
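For completeness, the children of a container like the one above are placed by name into the defined areas; the element selectors here are just illustrative:

```css
/* Each child opts into one of the named grid-template areas */
.container > nav    { grid-area: nav; }
.container > header { grid-area: header; }
.container > main   { grid-area: main; }
.container > aside  { grid-area: aside; }
.container > footer { grid-area: footer; }
```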


Tailwind is more of a method of writing CSS than a framework. Many people are writing modern CSS layouts IN tailwind, which you can consider a dialect of inline styles.


C. 2000 I was able to match state-of-the-art web designs, solo, without much difficulty, writing raw HTML & JS. And get paid for it.

I was in high school.

That was the best time to build websites.

(Though at least "flat" trends and terrible UX out of several major companies mean my shitty designs are back to looking about as good and working about as well as "pro" designers', so that's, kind of, an improvement over ~2005-2014.)


Hacker News still uses tables - for layout!

But it always loads super fast, and it just works.

We figured out how to display text on a web page decades ago. It's a shame more people aren't just doing it the direct way anymore.

(Applications are a different matter obviously)


I would have been horrified if I had seen what websites would look like in 2010+ when I first started using the web. The amount of unnecessary garbage and JS added to sites goes well beyond the adtech part. Half or more of the sites using frameworks probably should never have used them.

A good contrast is Reddit vs. HN. Reddit's current site is basically unusable and sluggish even on modern hardware that you could train ML models with.


Just look at all the overhead of the article's website… When I first clicked the link, the "website" crashed, so instead of the hot take, all I could read was "addEventListener is not a function". That tells you all you need to know about the current state of affairs.

(Yes I get that webdevs' personal sites are their playground)


That website also loads 184 KB of JavaScript to display a 13 KB static document.


It probably sits on top of a shit ton of interpreter, too.


My favourite recent incarnation of this is when YouTube has its JavaScript go sideways and then blames the internet being out instead of doing anything productive.


I don't think any of this is "good" web-design.

But my argument never was about the layout or design of Hacker News being "better".

What I meant to argue was that content is king, and that everything else should serve the actual content - not distract from it.


"What I meant to argue was that content is king, and that everything else should serve the actual content - not distract from it."

Thank you! So many sites have utterly garbage contrast, making content unreadable - it's so annoying.

I spam this excellent discussion of color and usability as often as I can: https://designsystem.digital.gov/design-tokens/color/overvie...

but getting stupid 20 something designers with perfect vision to pay attention is almost impossible :p


And if you try it on mobile, it's mostly crap.

Let's not treat HTML tables as being some super special ability.

If HN were implemented exactly the same but with flexbox or grid for layout, it would be objectively better with no drawbacks I can think of.


(Semi) honest question to the commenters below parent: do you all use styluses, or do you just have small fingers, or is there some predictive touch system in the default browser for your devices that isn't present in Firefox? How big are your screens - are you running phablets?

Trying to understand how so many people are reaching the conclusion that HN works well on mobile. Maybe 25% of the time I try to upvote or downvote a comment on mobile, I fat-finger the wrong arrow. That shouldn't happen in a mobile interface.

I feel like, just from the sheer number of comments saying they don't know what the problem is, there must be something I'm missing. To me it's pretty straightforward: you have to zoom the screen in 20-40% to press any of the buttons or accurately target any of the links, and when you do that you have to interrupt your reading flow because the text doesn't wrap during that zoom.

When I turn off the mobile site and request the desktop version it's even worse, so I don't think it's a browser setting. I don't think I have particularly large hands, but the links on the top of the site are still only about 1/3 to 1/4 the size of my pointer finger.

What are you all seeing that I'm not? Is there a font-size setting you have checked? This is a pain to use without a precise pointer.


This button clicking issue certainly exists in Android Chrome. It's just that almost every designed-for-mobile website is missing functionality or behaves in unpredictable and terrible ways. Browsing not-designed-for-mobile web sites on mobile and zooming whenever I need to click on something is a big upgrade.


This is an interesting/illuminating comment, and it makes me wonder if some of this might be a personal bubble thing. I feel like I generally tend to have a better experience on mobile than other people describe, but I'm also running mobile adblockers, and more importantly, I also might just not be visiting all of the same sites as other people?

People complain about Reddit on mobile, and I totally agree, Reddit mobile is awful, but it's also a really small part of my life, I generally don't visit Reddit on a phone that often -- so it's easier for me to think about Reddit's mobile site(s) as being some kind of outlier?

My experience has been I read a lot of blogs on my phone, I do searches (that go through duckduckgo, which I don't really have a ton of complaints about as a mobile site), I look up quick pieces of information on the fly from random sites, I look at MDN documentation, trying to think what else...

I also spent a long time with JavaScript turned off entirely in my phone browser, and I've only very recently started changing that practice (largely because of gorhill deprecating uMatrix), which probably biases things even more: a lot of the browsing I do on my phone works without JavaScript, and that gets rid of a nontrivial number of annoying behaviors from more aggressive sites. So I wonder if I'm just not giving HN enough "credit" for not doing the Reddit bullcrap of popping up a notification asking me to install an app, which makes it easier to stare at the flaws.

If I was doing a lot of Reddit browsing on my phone, I would appreciate HN more on mobile, I will give people that HN is much better on mobile than Reddit.


One workaround would be making a new account whenever you get enough hacker news points to unlock downvoting.


I dunno, it looks great on Firefox mobile.


HN is fantastic on mobile. Much better than Reddit, even old.reddit and i.reddit.


Yeah it's easily one of THE best mobile sites I use.


The touch targets are way too small, and I don’t even have fat fingers.


Anecdotally, I've found that iOS handles small touch targets much better than Android. I had an HTC Raider, a Moto G, and a Moto X Play, and I felt very fat-fingered on all of them. Then I switched to an iPhone 6s and now an iPhone 12 Mini and with both of them I've had zero problems with small touch targets, even ones that are close to each other like with Nonograms.


What problems on mobile (small screens) do you see?

> no drawbacks

Would it be slower or use more CPU or memory to render?


> Would it be slower or use more CPU or memory to render?

With flexbox layout, probably (?), but likely imperceptibly (don't quote me on that, I haven't actually done the math to check if the benefits from decreased DOM elements would outweigh the increased cost of flexbox, maybe it would get faster just by virtue of shipping less HTML).

That being said, GP is being kind of extravagant, there is arguably nothing on HN that requires tables or flexbox. I always felt like inline spans and maybe a few floats/margin:autos for stuff like headers/menus would probably handle the majority of the layout.

This is part of my criticism of "HN picked the simple answer" takes; even if you go back over a decade and even if you take flexbox off the table, tables weren't really the simplest answer after CSS `margin:auto` was invented. This really isn't a website with columns of data, it almost doesn't need any layout tool at all.

I think the "now flexbox exists so we can do it correctly" takes are also wrong in their own way; HN is a single-column reading experience with very simple menus - why bring flexbox into this? I always try to temper this claim because I haven't technically ever sat down and built a pixel-perfect replica using normal HTML, so I don't technically know that nothing on the site would require modern CSS. But I have poked around at different pages of the site and messed around with resolutions, and I've never seen a situation on the site that I felt required all of that extra HTML and complexity.

I almost wonder if the point of the embedded font tags and image spacers and crud is that the site is trying to render roughly the same even with CSS turned off. But if so, that's bad practice and the site should stop doing it. Anyone who's turning off CSS is doing that for a reason, and the site should just respect that and ship them the pure content.


Are we really worried about the CPU/memory impact of flexbox vs HTML tables?


Well, maybe we should! The site should be super smooth on my Amiga! :-p


Definitely works fine on mobile for me


You never miss the upvote/downvote buttons?

Quotes don't sidescroll to infinity for you?

I could go on and on...

It's okish, but it's no pinnacle for usability, that's for sure.


Eh, yeah, I guess that happens; I guess I'm so used to the rest of the internet being so awful on mobile that it doesn't faze me. Which is funny, because I'm always super irritated by the rest of the internet.


Because of your comment I just looked at the source - yup tables. And a spacer gif too https://news.ycombinator.com/s.gif


I think it's old enough to be a retro design choice now.


> it just works.

Going to push back on this line specifically: HN has a number of issues. They're not dealbreakers, but they're also not particularly hard to fix with modern HTML. HN has generally pretty bad accessibility/semantics, it's pretty frustrating to use on mobile, and it degrades kind of poorly without JavaScript (the collapse-thread buttons still appear even though they don't work without JS).

HN is (unironically) a great example of how kind of hacky you can make something and how little you can iterate on it while people still are mostly able to use it for its intended purpose. And (with the exception of maybe its blind accessibility problems) it should be held up as a great example of that.

But it's not a good example of "do things simply and they'll just work well." If anything, HN is a great example of why stuff like tables were abandoned. And the HTML isn't even that simple, this must have been a royal pain to build, everything everywhere is another embedded table. It's a weird dig at how bad some major websites have become that people don't see HN's HTML as bloated or convoluted.

Ever really dug into how HN threading works? Everything is a top-level comment, and it inserts transparent images to create the illusion of indentation. It's a wildly out-of-left-field solution that makes parsing out and styling threads way harder than it needs to be. Seriously, I've spent way too much time trying to figure out how to make CSS selectors for custom user-styling work on child comments/replies on a site that is supposed to be displaying a comment tree in the DOM tree. There's a one-to-one mapping there; you don't really need a complicated visual sleight-of-hand to display this information - just put the comments in the tree.
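A sketch of what "just put the comments in the tree" could look like - nested lists whose DOM structure mirrors the reply structure (class names are made up), so plain child selectors like `.comment > ul > .comment` can target direct replies:

```html
<ul class="comments">
  <li class="comment">Top-level comment
    <ul>
      <li class="comment">Reply
        <ul>
          <li class="comment">Reply to the reply</li>
        </ul>
      </li>
    </ul>
  </li>
</ul>
```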

Again, not to get mad at HN; but I think people use it as a positive example in the wrong situations. HN has bad HTML with obvious downsides that would be pretty easy to fix, but it turns out that creating an elegant website and filing off the rough edges is actually a really small part of running a community, and doesn't matter that much in the long run when compared to other things you could be doing to foster that community (like moderation/curation), and that is a very good lesson for tech people to learn from HN. But nobody should praise the HTML on this site, there are much better examples out there on the web of sites that use simple HTML to great effect.

----

> (Applications are a different matter obviously)

Much more minor push-back but I actually would love to see more applications embrace the interactive document model even when fully native and fully offline. Not all applications, but a bunch of them. Stuff like calculators, calendars, even bigger applications like database software/image viewers/file browsers, etc...

User-accessible stylesheets for applications, user-accessible scraping tools for applications, etc... I think there's a lot of potential for user computing hidden behind a willingness to say, "no, many applications are just text displayed in tree/table form when you think about it, and the app/document divide was always sort of nonsense."


> Ever really dug into how HN threading works? Everything is a top-level comment, and it inserts transparent images to create the illusion of indentation. It's a wildly out-of-left-field solution...

Having worked on an implementation of comment indentation before, I think it's a technical design choice rather than wildly out of left field. From the backend perspective, it's more performant (and simpler, as far as code maintainability goes) to have a flat db table with a number representing indentation level, rather than have each comment point to its parent and then have to recursively build the HTML.

When you're getting as much traffic as HN, such design tradeoffs can make a huge difference in site responsiveness.


I might be misunderstanding what you're referring to. To be clear, I'm not really talking about recursively building the HTML, fetching any additional data about other comments, or storing it in the database as a tree; a flat structure and an iterative build are fine for all of this. I'm talking about, purely off the top of my head, moving away from:

  result = reduce(comments, (result, comment) => {
    return result + build_comment(comment);
  }, '');
to:

  indent_level = 0;
  result = reduce(comments, (result, comment) => {
     // open <ul>s when the thread nests deeper, close them when it dedents
     delta = comment.indent_level - indent_level;
     wrap = delta > 0 ? repeat_str('<ul>', delta)
                      : repeat_str('</ul>', -delta);
     indent_level = comment.indent_level;

     return result + wrap + build_comment(comment);
  }, '');
  // close any lists still open after the last comment
  result = result + repeat_str('</ul>', indent_level);
I assume you're right and there's some kind of extra complexity somewhere, but HN isn't just storing the comments with no context other than indentation as far as I can tell. It is maintaining parent-child relationships at least well enough that the buttons to minimize/maximize threads work, so I am not sure what process is being skipped here by shipping flat HTML lists.

I guess I haven't ever read through HN's clientside Javascript, maybe it's calculating how to minimize threads on the fly completely clientside by iterating over the DOM, and maybe that's why the buttons don't work with JS disabled. But.. oof, I wouldn't hold that up as an example of good, simple clientside code if that's the case.

----

But again, I'm not going to argue with your conclusion, I assume there's something I've missed, I assume you're right.

This is still a bad example to use for "see, simple HTML is better". What you're describing is a good example of why it makes sense to say, "see, convoluted table setups that are worse on the client are faster overall than clean, simple output."

Which is not a bad lesson to learn from HN. It's a great lesson, sometimes performance requires us to do hacky things. But it is a very different lesson from where this thread started. You still would never point at that and say, "this is a good example of what HTML should look like", you would say, "these are the kinds of messy sacrifices that might be necessary to maintain a performant backend."


> maybe it's calculating how to minimize threads on the fly completely clientside by iterating over the DOM, and maybe that's why the buttons don't work with JS disabled

Holy crud, I just opened the JS up and this is actually what it's doing. Heck me and my little 'not going to contradict you' statements, you might just be completely entirely right about what's going on I guess.

  function kidvis (tr, hide) {
    var n0 = ind(tr), n = ind(kid1(tr)), coll = false;
    if (n > n0) {
      while (tr = kid1(tr)) {
        if (ind(tr) <= n0) {
I'm still going to reassert that any architecture that leads to someone doing this kind of logic every time a button is pressed is neither simple nor elegant, and at best this is an example of making architectural sacrifices and introducing complexity for the sake of performance; there's no way that this kind of page logic is easier to maintain or to understand than something built around a more semantic structure.

It's also definitely not easier on the client. I've seen some comments on here argue that HN might use all this weird stuff to save client battery life, and I feel more confident now saying that's not the reason: if someone is somehow, someway, legitimately in some incredible situation where they're actually worried about the battery drain of a browser rendering a table vs. some extra CSS, then this is not the code you would want to run every single time you press a button on the page.

But yeah, the "we just base everything off of an indentation integer and comment order" theory does seem a lot more plausible to me now, because I'm having a somewhat difficult time thinking why else collapsing comments would work this way.


> Yes, Tailwind might make CSS a lot easier, and GitHub Copilot might make coding a lot faster... but is this really easier than in the early '90s, when you could just type <html> into Notepad and make a website that didn't require any CSS or images or JS or even more than just the most basic HTML tags?

I mean, you still can; it's just that nobody's going to hire you to do that and nobody is going to be impressed by it. Even back in the 90s when it was that easy, it was all hobbyists. By the time the dot-com boom started taking off, it was complicated: CSS + JS, no responsive design (remember designing 3 different sites to deal with the 640x480, 800x600, and 1280x1024 resolutions? Frames and no frames? Different sites for each browser - or ones that only worked in one, because things were even less standard than now: "this site only works in IE" - etc.)

Now it's the tools adding the complexity, but back then it was hardware, browsers, etc.


"no responsive design (remember designing 3 different sites to deal with the 640x480, 800x600, and 1280x1024 resolutions?"

Using media queries with breakpoints, and doing exactly what you describe, was the original “responsive design”. What you have in mind (I think) is what might be called “fluid design”—responsive design without media queries. You could always do this to some extent with such things as inline blocks, auto margins, and other ancient CSS technology (and an unstyled site also reflows responsively), but flex and grid allow more elaborate fluid designs.
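To make the two approaches concrete, a minimal sketch (the selectors and breakpoint value are made up for illustration):

```css
/* Responsive via media query: the layout switches at a chosen breakpoint */
@media (max-width: 40rem) {
  .sidebar { display: none; }
}

/* Fluid: the layout adapts continuously, with no breakpoints at all */
.cards {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(15rem, 1fr));
}
```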

There is also some advantage to making separate designs carefully crafted for a variety of screen shapes, that you give up with fluid design techniques.


Thanks for the correction/deep dive here. I was a kid + don't have a formal education in CS, so I sometimes mix up terminology and what things were called.

I remember when media queries came out. It was so nice. Amazing. 10/10 for the time. You can do a lot with auto-margins and other ancient CSS, but there are definitely quite a lot of limits. (Sighs in someone who does a lot of email design and academic CMS design... let's party like it's 1999...)

> There is also some advantage to making separate designs carefully crafted for a variety of screen shapes, that you give up with fluid design techniques.

I agree strongly with this from a UX/UI point of view and as an educator.


> is this really easier than in the early 90ies, when you could just type <html> into notepad and make a website that didn't require any CSS or images or JS or even more than just the most basic html tags?

Nothing is stopping someone from doing that today, but that's no longer the only option, thus it seems accurate that it is easier today than ever before.


In a way, it's more difficult today because there are so many options for web development that it's paralysing even to find out where to start. Still, you can't go wrong starting with the most basic option.


That's like saying "it's harder than ever to cut wood because there are 100 different types of saws in Home Depot". I don't really buy this line of reasoning: different tools exist to tackle different types of problems, and if someone has no idea what they're doing, then it's their responsibility to get educated. That's as true today as it ever was, except today it's much easier to get educated than ever before.


I'd like to reframe your argument around expectations. Regardless of the difficulty of building websites, the bar is dramatically higher than it used to be. Look at what Amazon was able to get away with at one point [0]. The need for good security practices is also the highest it's ever been (and will continue to matter more in the future).

So, in some ways, it's never been more difficult to build a website. Thankfully, FOSS lets us outsource a lot of the work.

[0] https://i.imgur.com/OAyZWnZ.png


You can still hack in HTML (and CSS) in a basic editor. I do occasionally. No design or js frameworks, no elaborate build systems, just me and my editor.

I don't think I'd have been so cocky five or ten years ago. Bootstrap solved real problems with cross browser layouts, jQuery fixed serious holes in some browsers' standard libraries. But the things they did then are things I'd happily freehand now.

In short, it's easier because 99% of browsers in use aren't awful and flex and grid make the float mess of the mid-Naughties irrelevant.


I am positively confident that I could teach a bunch of art students (so decidedly non-tech people) how to create a basic website using only HTML and CSS (without any framework/library, etc.) in 2 to 3 days, including the domain and hosting part.


Weird flex, but okay.


What I wanted to hint at with this was that good old HTML and CSS are both enough and probably trivially simple to get started with for the HN crowd, if they don't know how to use them yet. A lot of simple websites don't need more than that. And if you sprinkle a bit of JS on top, you can also dig into the land of not-so-simple websites.


agree it's false that it's never been easier, and I'd like to add a few small data points.

20 or so years ago you had microsoft frontpage - anyone could build a site with little learning. heck, microsoft word could spit out the doc as html I dunno how long ago.

some 15 years ago or so, I ran into a 'junk hauler' I found via google, asked who did his site, he said he did. I was like wow you were #1 in google organic, that's amazing - he said he opened microsoft publisher, put in the text, added a picture of his truck and clicked publish.

A lot has changed since then of course - google demands responsive layouts to rank well, and most people use a phone to surf.. which for a while made frameworks the magic since css grid was not baked into word/publisher/ie...

anyhow, as far as 'easy' goes, there are many other easier tools than tailwind to make a site, and have been for more than a decade.

Now that browsers can auto fit via flex and grid - tailwind and similar are the bloat, not the easy-button.


> Isn't Hacker News itself a really great example of a website that successfully uses only very basic html with very, very little styling and fancy extra features?

Hacker News is an aesthetic that would be unacceptable to the majority of businesses in the world. The login page alone would cripple sales on any e-commerce site.


why do you think that's the case? is it because people will feel 'sketched out' by the bareness and leave?


Yes exactly. Stripe built a whole business on this concept by creating an easy to implement, trustable experience for otherwise dodgy looking small shops. Trust is huge, and a sketchy looking form destroys trust.


So... Are you trying to argue that it's easier today to make an acceptable company website than it was in the 90ies... or that it was easier in the 90ies than it is now?


Neither. I'm arguing that Hacker News is not a good example to use when discussing web development. It's easy to make a site simple when you refuse to implement any features, but that refusal is a luxury most businesses do not have.


The reader view (from any of the browsers, really) has become my baseline performance review tool.

If a webpage is meant for presenting textual content and is actually _improved_ by a reader view, that means the design has failed.

There actually are websites out there that don't benefit much from that view, HN being a prime example.


In the 90s we didn't have the developer console or sophisticated debugging tools.

The only way to debug was an alert("foo") placed in strategic locations!

We also had to handle significantly varied browser differences.

You can do a 90s style site today and vastly benefit from modern tech.


I remember putting border: 1px solid red around my layout "blocks" to help me build layouts (of course the extra pixels could mess with the layout). Then dev console came and you could inspect and mouse over those blocks and those red lines would appear and it was just amazing.


In terms of accessibility I’m not sure we’ve done better than GeoCities and AngelFire. WYSIWYG page design tools have gotten better, but those are only “necessary” because page designs have become more complicated and simple, less “professional” styling has fallen out of popularity outside of technical circles.

Those old page hosting services also served to get non-technical users to dip their toes into writing HTML, setting them on a path of learning, whereas newer page hosting services are either skewed no-code (e.g SquareSpace) or technical (e.g. GitHub Pages and Netlify) with little in-between, which keeps non-technical users locked into the WYSIWYG tools to a greater extent.


But all the things you mentioned are optional. The web is amazingly backwards compatible. Best example: one of Germany's most read blogs: https://blog.fefe.de


I understand the point you are making but is HN really the best example of modern web app? For better or worse web apps have replaced desktop apps. Not everything on the web is a simple list of articles and comments. Some things require extremely complex interactions and workflows. I don't think minimalism is the cure for everything, sometimes you need _some_ complexity and yes sometimes things are easier to use with a little bit of interaction instead of just text, forms, and plain html.


No, it's most definitely not the best example of a modern web app.

But everyone reading this knows it, and it does serve as proof that you can build a successful website with extremely little design & features. Therefore it seemed a good choice to illustrate my point.


I seem to recall Dreamweaver was also good for pretty websites. And it is hard to deny that Flash enabled some massively interactive sites back in the day, of a kind I don't see much nowadays.

I suspect if you want to create interactive things, Scratch is a gem.


Is tailwind much better than tachyons?


Tailwind seems to be more mature and better documented. Apart from that, they're pretty similar, so if you're used to one you probably won't get much of a benefit from switching to the other.

I see that the example websites from Tailwind look much better than those from Tachyons, though I'm not sure whether that's just a matter of taste.


no, but the tachyons devs are a bit "opinionated" as to which directions the thing should go (and which ones it shouldn't) and seem to have moved on for the most part. that's one of the reasons why, even though tachyons came before tailwind, the latter has much wider adoption and more active development


You can still build a simple website in one minute with Next.js, and with greater speed.

Pico.css is for websites with zero customization. Once the website requires customization, it doesn't work. Tailwind is easier.


I've also been developing for 25 years, and I also really, really like websites and coding, but not for the reasons the OP sums up. For me, it's not about all the solved or unsolved technical questions.

Today is a very good time to build websites, because a good website is the only way to push back against Big Tech and its practices. Your website can be built on techniques invented in a time when the dream of the internet was not yet shattered by Big Tech.

Of course you can choose to put your website on AWS and svck on the bolls of Jeff a little. But you don't have to! And that's what all kinds of young devs just miss.

Oh, and RSS is not dead. Not at all (thanks WordPress, for putting a feed on every instance). RSS is the only workable way to make the web social again.


"because a good website is the only way to push back against Big Tech and its practices"

I agree with this. But what really spins my beanie is the amount of power a website gives a single person, small business or non-profit. It's really amazing.


So true! I hear small businesses and freelancers complain about being kicked off Twitter, OnlyFans, YouTube, Facebook, etc.

One might not immediately make money with a website (although you sure can!), but the moment you are kicked off some platform, you have a safe haven to fall back on. A place where you can share the same textual or visual media with your visitors(/followers/friends/fans/connections). But instead of organizing a party and renting some space, you make it a house party, where you are the host.

The only moderators on your website are you, your webhosting provider, and the government (I call it GovMod).

Thanks for your reply!


My biggest concern is for the people relying on the big platforms you mentioned but not using them to build their own lists and lines of communication with their clients.


I recently heard a podcast by the comedian Kevin Hart. In the early 2000s, when he was an unknown and social media didn't exist, he would have a sign-up sheet at his shows, where people could enter their names and email addresses. This way, he'd maintain an email list for each city.

Each time he scheduled a return appearance in that city, he'd send out an email letting people know the date and time of the show and a link to buy tickets.

Today, people build their followings on social networks that barely let you even post links to your own site (Instagram). Had Twitter/IG/FB existed back then, people would have had no way to maintain independent contact lists.


Now, I would probably never see his email, because it would be filtered out or lost in the thousands of other promotions.

Spam is a huge problem.


Agree. The big platforms make it all way too comfortable for everybody.

And I get it completely. Over the years, Google and the like have probably collected more of my contacts' phone numbers than I have myself. Only three weeks ago I made a text file with every name and number in it, so I have a safe copy for myself.


This is the type of thing people mean when they bring up regulations for big tech companies.

You shouldn't have to worry about suddenly losing your list of contacts. Google should be required by law to provide your data to you in cases such as account termination.


> One might not immediately make money with a website (although you sure can!)

Not if you get banned from payment processors. Which is a thing that certainly happens


True. This might be less problematic for freelancers or small businesses in Europe though. Paying by regular bank transfer is easily possible between all member states (and even more countries!). All you need is - and everybody has - an IBAN bank account. Not even your administration has to be adapted for international business (as long as you don't grow that big). The only downside is waiting one meager business day for your money, so it's not really instant payment.

I don't know about the US, South America, Asia or Russia, but I guess people have good old bank accounts, have an internet banking app installed on their smartphone, and sometimes have to pay money to a neighboring country? It might not be as easy as here, or just completely different, but I guess there are ways to skip payment processors, the fee-stealing middlemen that a lot of people take for granted.


> Oh, and RSS is not dead. Not at all (thanks WordPress, for putting a feed on every instance). RSS is the only workable way to make the web social again.

What we need, IMO, is more experimental protocols to enrich the web, which maverick developers can use to good effect, the same way we did originally with RSS. More browsers forking Chromium or Servo to add support for these new features. Hell, maybe even something that doesn't resemble the Web at all. David didn't beat Goliath, nor did Heracles beat the Hydra, by using 'the same old weapon but better'. The only thing that wins is a paradigm shift.


> is more experimental protocols to enrich the web,

New protocols are DOA until we significantly nerf the incentives to "own" the user.

IOW it's not gonna happen until we outlaw anything that resembles spying on users. And maybe also ads generally.


> And maybe also ads generally.

This is all well and good, but then we need a serious alternative funding model for websites.

Right now, a lot of people with adblockers are benefiting from a situation fausse where they free-ride on the ad-click revenue generated by others. I think in some people's minds this leads to an impossible expectation that they can continue to enjoy freely provided services, provided at considerable expense to the service provider, without paying anything or even having the inconvenience of having to see some ads.

I'm all for abolishing ads, but it needs a serious proposal, not just "what we have right now, but no ads". I'd also be all for an online payment mechanism embedded into browsers through a new protocol - something like https://www.w3.org/TR/payment-request/ but designed with more of a view towards paywalls - but I'm under no illusion: most people's revealed preference is consistently for ads over paying anything, as many startups in that space have discovered.


> This is all well and good, but then we need a serious alternative funding model for websites.

Why?

Without competition from free-but-funded-with-$billions ad-supported services, most of the valuable stuff would probably be replaced by volunteer and non-profit efforts.

Others would survive by charging (more) money.

Some would be replaced by protocols (several social networks would be among those replaced). Clients & hosting may be paid, or not. It'd work out fine.

Most of the rest isn't valuable.


> Without competition from free-but-funded-with-$billions ad-supported services, most of the valuable stuff would probably be replaced by volunteer and non-profit efforts.

It wouldn't just be 'non profit', it would be 'considerable loss'. You can't provide a service like YouTube or Google without incurring enormous expense, even if you're only counting the infrastructure costs.

> It'd work out fine.

You have no idea whether it would work out fine. Neither do I. I'm intensely sceptical of anyone who issues hand-waving proclamations about how a dramatic change would affect an almost indescribably complex system.

You may have your own wishes and preferences, but it's not a good idea to let those invade the rational, evaluative part of your mind.

> Most of the rest isn't valuable.

Anything that's used by someone is valuable to someone. I don't like paella, but I don't propose to eradicate all paella restaurants for that reason. Again, this feels like a hand-wavey and not very wise answer to dismiss problems with your idea.


> It wouldn't just be 'non profit', it would be 'considerable loss'. You can't provide a service like YouTube or Google without incurring enormous expense, even if you're only counting the infrastructure costs.

I'm not a bit worried we'd go without capable search engines, without ads. Very likely there'd be donation-supported ones that are at least as good, and maybe better for some purposes (IMO Google's utility peaked around '08).

The free side of Youtube is a UX problem to be solved by something like torrent clients (maybe plus some RSS). Or probably a dozen other ways. It's far from insurmountable, there's just no motivation to fix that now (because there's no demand for it). That's the story for most of the services that could be replaced by [two or three existing protocols] + [some not-exactly-rocket-science UX effort]. The commercial side of it is solved by... hosting videos. Yourself, or paying a service to do it for you (these services already exist, despite YouTube's dominance, all the way from simple video-hosting to full white-label video streaming services).

> Anything that's used by someone is valuable to someone. I don't like paella, but I don't propose to eradicate all paella restaurants for that reason. Again, this feels like a hand-wavey and not very wise answer to dismiss problems with your idea.

It's plain that a huge percentage of online content could be replaced with Snake Game on an old Nokia with ~0 loss of enjoyment for the consumer. A perfect replacement for them is a book of Sudoku puzzles. People look at the stuff but the value is extremely close to zero, in that nearly any other time-wasting activity is just as good. And that's after dismissing the ~75% of the Web that's spammy garbage of negative value (because it drowns out better material covering the same thing).

> You may have your own wishes and preferences, but it's not a good idea to let those invade the rational, evaluative part of your mind.

Beats accepting the wishes and preferences that created the bad situation that exists now, right? Why should that be privileged over what I'd prefer? Has zip to do with a lack of rationality on my part, though it's easier to dismiss ideas if one first paints them as irrational.

We can have useful, widely-used open protocols or we can have spying (ads may or may not also be on the table, but take away the spying and there goes much of the advantage of the huge tech companies, anyway). The two very clearly cannot co-exist. I'd prefer the former.


> I'm not a bit worried we'd go without capable search engines, without ads. Very likely there'd be donation-supported ones that are at least as good, and maybe better for some purposes (IMO Google's utility peaked around '08).

This isn't necessarily wrong. I personally use Gigablast, which is excellent and entirely independent (unlike many 'alternative' search engines it isn't backed by Google or, more often, Bing).

However, pace the problem of other minds, I am not the only person in the world, and many people enjoy and rely on Google. I think this conversation is continually falling into the trap of muddling up what you personally prefer vs what would most satisfy the majority of people, and thus achieve adoption.

It's not a good solution if most people consider it worse for their needs, irrespective of your own personal preferences, or your feelings about what other people should like.

> The free side of Youtube is a UX problem to be solved by something like torrent clients (maybe plus some RSS).

Come on. This is as near as possible to an objectively worse solution. Again, I think you're struggling to see beyond your own preferences and abilities, to how most people in the world interact with technology.

> It's plain that a huge percentage of online content could be replaced with Snake Game on an old Nokia with ~0 loss of enjoyment for the consumer.

I refer back to my previous sentence. [Also, both Snake and old Nokias are exactly as available today as they ever were, and I see no sign whatsoever of this happening, despite the clear advantages in price, battery, uptime, etc.]

> People look at the stuff but the value is extremely close to zero, in that nearly any other time-wasting activity is just as good.

I refer back to my penultimate sentence.

> Why should that be privileged over what I'd prefer?

I refer back to my antepenultimate sentence. The answer is: because you are one person in a world of seven billion, and your solution is not going to go anywhere if the mass of people don't like it.

---

Look, in summary, this is not a useful conversation if all you have to contribute is moralising about the worth of other people's preferences. I don't care if you think most people should spend their time knitting or listening to Brahms. I'm trying to come up with a solution that satisfies people, and, therefore, can actually compete.


You seem to be assuming I don't consume a bunch of content that could be replaced with Snake Game or Solitaire at ~0 loss of enjoyment, because it's incredibly low-value entertainment, so am somehow looking down on others. What do you think this is? That I'm doing right now? The value, in every sense, of nearly all online activities can be found next to "marginal" in the dictionary.

[EDIT]

> if all you have to contribute is moralising about the worth of other people's preferences

Definitely a complete characterization of my views on this, and of these posts. You've looked carefully, considered thoughtfully, and discovered the entire thing. Very good.


You make a very good point about adblockers having that negative second-order effect where they continue to let people have the expectation of getting things that are intrinsically expensive (storage, bandwidth, sysadmins) for free - I didn't think about that before.

As for alternative funding models - why not microtransactions? Attaching an explicit price tag onto website access (subscription model) or individual media/document objects (standard "pay for what you use" model) would have some other beneficial effects, such as reducing extraneous media consumption (mindlessly scrolling for hours suddenly starts costing you money, better to buy a book and get value out of it) - most advertisements are a mental cancer that we should try to get rid of anyway.


Thanks for the kind reply, I appreciate it. I do think that's the kind of mindset that adblockers are inculcating in people - they don't quite realise the extent of all the costs that are borne by everyone else. Perhaps especially so because the sort of person who uses an adblocker is probably the sort of person who can't imagine himself clicking an ad, and so underestimates the amount of revenue made from ads. And thus also likely underestimates all the costs which that revenue pays for. (And then you end up in a predicament like the very-self-aware fellow in the other subthread, insisting that Google and YouTube and Facebook could be run by non-profits, and 'it would all work out fine'.)

As for alternative funding models - which is definitely a much more interesting conversation - I actually considered starting a company in exactly that space. I have some experience in fintech ("very credentialised" according to my former Anglo-German lead investor, haha) and so I thought I could pull it off. I couldn't, and it didn't get past the MVP stage ... luckily. The trouble is that people aren't willing to pay even the $0.01 to access an article. There's something deep in people's brains which is averse to spending money, no matter how small the amount.

I believe - and this is more second-hand evidence from other founders rather than first-hand - that the approaches which typically see the most success are those where people 'top up' a certain amount and then spend it gradually. That doesn't set off the same psychological alarm that directly spending money does. However, that kind of approach would be much harder to implement - especially as something like a browser protocol - because it would require holding probably-vast sums of money in escrow[0], which is an extremely burdensome legal and regulatory position to be in.

Personally I think Brave - much as it's a stupid company started by a stupid clever man - might be onto the right big idea here (despite getting a million little things wrong, and alienating virtually all of its users and most of its non-users too). The core idea of buying attention tokens which are paid out to websites to which you pay attention is a brilliant one. However, it needs a lot more refining, since the crude version of that model is not particularly well-equipped to deal with the difference between e.g. a movie-streaming site, on the one hand, and a shorthand news site, or even a site like Twitter, on the other hand. I may well watch a movie for 180 minutes but get less value from it than I do a tweet. So attention != value, or at least the concept of 'attention' needs refining to be more than simply 'time I spend on a website', but there's a promising kernel there, I think.

[0] Compare it to Starbucks's gift card program. Starbucks is one of the largest commercial debtors in the world just by virtue of the vast number of Starbucks gift cards in people's drawers. These things add up quickly and bigly.


If you're not already aware of it, Gemini is an interesting project that aims to do exactly that. https://gemini.circumlunar.space/


Interesting, thanks for the pointer! I was aware of Gopher, but not Gemini. It seems to address what would be one of my main concerns about any proposed alternative, which is - for the beginning at least - interoperability with the web.

I'm sure people have other bold ideas. I personally think there's a lot of room for something which builds on the 'progressive web app' paradigm: i.e. a web for websites which are more like apps, downloaded once and then exchanging data ad hoc with a backend server, with potentially much richer and more performant experiences. WebAssembly (WASM + WASI) would be a great foundation for something like that. But that's just one among countless compossible paths.


In a bygone era, everything was built of forms, tables and bullet lists. It would be interesting to experiment with a browser that drops JS support in favor of a more robust list of native components.

Alternatively, the modern browser has basically evolved into a virtual machine anyway. You seem to be suggesting that we could do exciting things with a more intentional version of that idea, and I don't disagree.


I've often thought about this. People nowadays build their websites predominantly on React component kits which resemble more sophisticated HTML elements. If HTML were extended to include that more sophisticated functionality -- and come on, it's been 20 years and people are certainly sufficiently aligned on this approach to now include it in the spec -- then I wonder whether JavaScript would be necessary, or at least as necessary as it is now.

I think either direction would be interesting. Or both. The browser was an experiment in the first place, but people seem to have stopped experimenting and are just doing the equivalent of what building on an IBM mainframe would have been in those days. It's disappointing. (It's not unlike people who quote Martin Luther King today, not appreciating that the essence of his radicalism was the direction of travel towards justice, not the political compromises themselves which are now firmly established.)

I hope at least that the renewed interest in systems programming that's come with the Rust fandom might spur some actually innovative developments to replace the antiquated model we're all stuck using. But sadly the tech community seems to be split into two communities, one of which has no interest in innovating, and the other of which seems to be set on innovations so absurdly impractical (¡world wide web on the blockchain!) that they could never take off.


I agree with your sentiment, but RSS and the likes are not 'the same old weapon but better'. It was never 'weaponized' in the first place. It never really came to fruition for the masses. To me, RSS is like two sticks and a string, and it needs some people who can see that they can make a bow and arrow out of it (to stay in the realm of 'weapons').

I don't like talking about tech and business in a sense of 'weapons', 'winning', 'smashing the competition', etc. Talking about it in this sense makes it a struggle, because one will look at tech from the pov of a stockholder, capitalist, or just a narcissist. The wording is very important, and the moment you make that wording your own, you can't see it in a different light anymore.

To be honest, it's only recently that I've taken RSS seriously myself. Before, I developed a sh1tload of websites without even really knowing what RSS is capable of. Ignorant me.

Thanks for your reply!


Interesting point! What would you say the full realisation of RSS would look like?

Also, the weapons metaphor was mostly just responding in the frame of the original language about 'pushing back on big tech', 'shattered by big tech', etc. And I do agree with that framing: I think there is a tussle over the direction of the internet – tussle, battle, tug of war, whatever you want to call it – and I'm not sure it helps to avoid the martial metaphor out of distaste when you are in a battle.


RSS is pub/sub, right? Doesn't social media like Facebook, LinkedIn or Twitter - where you post stuff and other people post stuff - resemble that? Replace 'your' account, profile or page with your own website.

With the right website software it's easy to have an RSS feed nowadays, so the only thing that's missing is website software that also works like an RSS reader. In the most basic sense you only have to read XML with your program.

So now there's your own feed and other people's feed, mixed up in a nice and honest timeline (however you program that). To me this sounds a lot like the first principles behind social media, but without the obscure algorithms to fvck your timeline and without the intruding ads (I have nothing against ads per se).
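The "you only have to read XML with your program" part really is that small. A minimal sketch in Python, using only the standard library; the feed below is an inline illustrative sample (the titles and URLs are made up), and a real reader would fetch each subscribed feed over HTTP and probably also handle Atom, which this toy ignores:

```python
# Minimal core of an RSS "reader": pull <item> entries out of RSS 2.0 feeds
# and merge them into one reverse-chronological timeline.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>Hello world</title>
      <link>https://example.com/hello</link>
      <pubDate>Mon, 01 Nov 2021 10:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Second post</title>
      <link>https://example.com/second</link>
      <pubDate>Tue, 02 Nov 2021 09:30:00 +0000</pubDate>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return (datetime, title, link) tuples for every item in one feed."""
    root = ET.fromstring(xml_text)
    return [
        (parsedate_to_datetime(item.findtext("pubDate")),
         item.findtext("title"),
         item.findtext("link"))
        for item in root.iter("item")
    ]

def timeline(*feeds):
    """Merge any number of feeds into one newest-first timeline."""
    entries = [entry for feed in feeds for entry in parse_feed(feed)]
    return sorted(entries, key=lambda entry: entry[0], reverse=True)

for when, title, link in timeline(SAMPLE_FEED):
    print(when.date(), title, link)
```

Point timeline() at however many feeds you subscribe to and you get exactly the merged timeline described above, with no obscure algorithm deciding the order: it's just the publication dates.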

P.S. Sorry for ranting about the weapons metaphor. I triggered on it and felt the need to tell what I think about language.

Edit: I forgot to say that RSS feeds have been there all along for a lot of websites based on WordPress. That has big value, because it means there's no need for technical adoption: these are all websites with feeds up and running right now. I've been hating on WordPress for other reasons, but this was a good move for the open web on their part.


https://micro.blog for example does exactly this, sort of RSS based Twitter (i.e the idea is that your feed contains mostly shorter posts, but that’s not a hard restriction) . Give it the RSS of your website (they can also host one for you), "follow" other people (=subscribe to their rss feed) and boom there is your "social media"


Nice! I really like their idea, and very friendly UI. I get a social vibe from it.

Your explanation is exactly what I was thinking of as the ideal social media, but I don't see that explained on their website. No mention of RSS. As long as the feeds from my software can work with their feeds (and the other way around) I'm good.


Have you heard of Scuttlebutt (https://en.wikipedia.org/wiki/Secure_Scuttlebutt)? The entire protocol revolves around 'pulling' other people's feeds - unlike almost all modern social media, it's a pull-based rather than push-based model. It sounds quite similar to what you're looking for, at least based on the way you describe it in this particular comment.


Yeah, I've seen it come across here on HN. For my intended web application I already chose RSS as the way to go, especially because there exist so many feeds and because I'm familiar with XML.

If you have a blog and/or RSS feed somewhere online, let me know. I'll add it to my 'to follow' list.


> the only thing that's missing is website software that also works like an RSS reader. In the most basic sense you only have to read XML with your program

This is a fascinating idea. I often think that Twitter's success, unlike Facebook's or Google's, is fundamentally as a protocol - and one which shouldn't have been centralised under the control of one company. And incidentally it seems Jack Dorsey thinks the same way, since he's suggested the possibility of having one core protocol for tweets, on top of which people could build their own frontends, and users could choose from a marketplace of both (a) frontends and (b) algorithms for filtering and ordering what they see.

I do agree with you: what's missing from RSS is not the existence of the protocol, nor even necessarily the 'supply side' of websites providing it (like you say, largely courtesy of Wordpress), but the 'demand side' which really needs a well-designed interface to consume that kind of content. I absolutely agree with you that this feels like a huge area of potential.

And thinking on a more second-order level: I wonder if one thing that's preventing these innovations is a suitable, easy 'base' for people to build this software on. For example, take `create-react-app` for the web. Countless things have been made because people know that they have that simple base to start with. For building a web browser alternative, there's no equivalent for most people: they don't know where to start. If we had a simply bundled toolkit such that people only had to write some business logic, I wonder how much more would be done.

> P.S. Sorry for ranting about the weapons metaphor. I triggered on it and felt the need to tell what I think about language.

No prob at all! Susan Sontag wrote a really interesting essay 'AIDS And Its Metaphors' in the same vein, specifically about the use of war metaphors about AIDS and cancer: "fighting", "losing the battle", &c. (It's the culmination of a series of essays on the same topic, but this is the most thought-provoking of them, IMO.) You might enjoy it. I particularly liked:

> The metaphor implements the way particularly dreaded diseases are envisaged as an alien 'other', as enemies are in modern war; and the move from the demonisation of the illness to the attribution of fault to the patient is an inevitable one, no matter if patients are thought of as victims. Victims suggest innocence. And innocence, by the inexorable logic that governs all relational terms, suggests guilt.

Wikipedia has a great summary: https://en.wikipedia.org/wiki/AIDS_and_Its_Metaphors#Militar... https://en.wikipedia.org/wiki/AIDS_and_Its_Metaphors


> I don't like talking about tech and business in a sense of 'weapons', 'winning', 'smashing the competition', etc. Talking about it in this sense makes it a struggle, because one will look at tech from the pov of a stockholder, capitalist, or just a narcissist. The wording is very important, and the moment you make that wording your own, you can't see it in a different light anymore.

I like this paragraph a lot. In my perception this is why most cryptocurrency initiatives, and the recently hijacked and massacred `web3` concept, have been fruitless so far - too much focus on how to get rich quickly


Thanks! Getting rid of notions like 'winning' and 'competition' in areas where it's not necessary was a mind changer for me. Even personal stuff like friendship becomes a competition this way (and I see it everywhere around me). Nowadays, I prefer to think about it as 'challenges' instead of competitions.


It is against the TOS of the largest ISP in my country to even host a server of any sort. Colocation isn't that fun.


There are tens of thousands of hosting providers around the world. You don't have to pick AWS, GCP, or Azure.


That's a shame in itself, I see self hosting as the holy grail of the internet. There are a lot of good webhosts though. Worked in the 90's, and still works today.


I find lots of love for RSS among the "innovators" and "experts" of the HN community. Despite that, the thing (RSS) only keeps declining, and I don't see anything being invented other than RSS readers/mergers.


I think RSS is still in heavy use by podcasts.


Stay tuned. ;)


> Of course you can choose to put your website on AWS and svck on the bolls of Jeff a little. But you don't have to! And that's what all kinds of young devs just miss.

Where are you hosting your websites?


Not OP, but Linode. Used to be on their $10 a month VPS, downgraded when they added the $5 a month one. I think they have some fancy Kube stuff now but I just have a makefile that rsyncs a directory to my VPS and nginx picks it up. Honestly even the $5 VPS is more power than I need for static files, but it's nice to have the option to throw up a flask app or run some web-scraping scripts overnight or something.

Admittedly it's behind Cloudflare because DDOS skiddies, but I think the CF lock-in isn't much if you're not using workers or anything proprietary to them. Admittedly it does suck for people with really privacy-customized browsers or using Tor, but idk a better solution unfortunately.


But personal-website-tier static file hosting is free these days!


It's called webhosting at a hosting provider. ;) It works for me like this since the 90's. Here in Europe there are literally hundreds or thousands of webhosts. Some pricier, some faster, some cheaper.

I host some websites at Antagonist in The Netherlands. But I'm strongly considering moving, although they are by far the fastest and most reliable party I've seen.

Why? Because they - just like a lot of other European webhosts the last years - are now part of some vague sh1tshow called Group.one or something. I don't know what this organisation is up to, but I don't trust it for a second. I guess they are trying to become some European version of AWS or something. Data grabbers, probably.

Every sincere webhost that became part of their 'network' (they sell it like this, but it's just a merger/acquisition) uses stupid wording in their press releases, customers are not kept up to date about these important changes (only after the fact, of course) and they all of a sudden are now offering 'cloud storage' like it's Dropbox.

This was a bigger reply than I intended. But thanks for yours!


Smaller web hosts tend to use AWS on the backend. Sometimes they use Azure or OVH, but they rarely have their own datacentre.


Can't speak for OP, but mine are on vultr. There's a whole industry of web hosting, virtual private servers, or colocation facilities to choose from.


Contabo, Digital Ocean, just to name two.


Scaleway


hear, hear. keep fighting the good fight


> Static was a fun diversion, but we're back to what works.

I won't be moving away from a static website. It is the perfect solution for a blog/CV type thing.

>you might think that minutes-long build processes are normal

If compiling your static site takes minutes there is something fundamentally wrong with your site in my opinion.


I laid my first lines of JavaScript(JScript, actually) in 2001, have been doing this professionally for around a decade now and I judge this blogpost as generally ignorant, but that first quote is especially so.

90%+ of the Web is just static sites with JS sprinkled on top. That has always been the case and will stay that way for the foreseeable future, because it's the most simple, pragmatic and accessible solution.


I'm amazed by the amount of shade I've seen lately being thrown onto SSGs and building websites with static HTML. Though, it seems to be coming from folks that have a vested interest in having you run code on a (or their) SSR platform.

There are some interesting properties of server-rendered websites, and it can be the best option in some scenarios, but in others it adds complexity and burden on the end user. To suggest that static was a "fun diversion" detracts from the value, and I feel we'll be going full circle once again after we've realized that "what works" isn't always server-rendered HTML.


If a framework needs compiling to view changes instead of pressing F5 in a browser like god intended, that's gonna be a no from me dawg.


My static site is built by a generator. A generator that literally runs after each save, and it takes milliseconds to compile my site. I also have an autorefresh plugin for development, so as soon as I save a file, whatever page I had open in my browser "instantly" gets refreshed.


When I make a change to any file in dev, Eleventy rebuilds the site (takes a few milliseconds), and then automatically reloads the browser. That's a yah from me, dawg.


> > Static was a fun diversion, but we're back to what works.

> I won't be moving away from static website. It is perfect solution for a blog/CV type thing.

Yeah, I'm keeping an eye on these new "MPA"/"transitional" frameworks, but the thought of taking a static marketing page, which costs fractions of a cent to serve and is easy to put on a CDN, to now requiring a backend server which costs magnitudes more, seems foolish.


Assuming you don't mind Microsoft or Github you can host a blog or whatever on github pages for free. You can even use a custom domain. That's how I host my site.


From reading this I get the sense the problems are lower level and there is no impetus to fix them.

I'm embarrassed to say, having programmed for over a decade, and running Linux for half of that, I still have no idea how to setup my computer to serve a webpage (or even a file)

I probably need to `apt-get install apache` and then deal with some magic config files and incantations and hope I don't mess something up and expose my whole computer to the open web. Then there is the whole mess of NAT (wasn't IPv6 supposed to kill off NATs?).

I need to then figure out the endless (and impenetrable) configurations in my OpenWRT/LuCI router to open a port and have it forwarded to my computer. Or maybe that needs to be done "upstream" in my landlords internet cabinet..? I'd have no idea how to even figure that one out b/c my router doesn't make it obvious in any way.

Then I need to find some DDNS service and figure out how to get that to route traffic from a URL to my IP (and then it's somehow supposed to reconfigure when my IP dynamically changes? Is that some cron job I need to write?)

Hosting webpages from home is still complicated and no progress has been made. And if you do it wrong someone will hack you and steal all your files :) The incumbents are probably happy it's so painful and you still need to do technical gymnastics to punch through NATs and whatnot. This text further confirms it by just telling people to host in the cloud. And understandably.. you'd have to be a total nut to serve a file from your home instead of dropping it on Google Drive.

When I was studying this stuff in college I just figured tech was in an awkward teenage phase and this would all get worked out and streamlined - but I think it's not going anywhere. This isn't the tech future of 80s scifi.

I don't mean to be wholly negative, I'd actually appreciate it if someone pointed me to a good step-by-step to set everything up. At the moment I'm a sellout :) and I just use Github Pages and git push HTML/CSS files there. Ideally it really should be just as easy to git push to your own home computer, but till then I guess I'll do that..


If opening the 80/443 port is such a nuisance in your setup, I would not bother with hosting from home and I'd just drop the website folder in Netlify Drop (https://app.netlify.com/drop)

That's what I did for the first few iterations of https://lunar.fyi and it really helped with giving people the right information fast while I could keep spending time on the real work (developing the Lunar app)

But if hosting from home is what matters the most, there is an easier way nowadays using Caddy (https://caddyserver.com) and ngrok (https://ngrok.com).

For example, I just hosted this website (https://af62-2a02-2f0e-d00f-e100-f513-b43-fbc1-cf5d.ngrok.io) using the following commands:

    caddy file-server -listen 0.0.0.0:6001
    ngrok http localhost:6001
If you want to go the extra mile and have a nice custom domain, Freenom provides you with a 1-year free domain for the following TLDs:

    .gq .tk .ml .cf .ga
For example, I just registered geokon.gq for 1 month and forwarded it to the ngrok endpoint: http://geokon.gq/


Thanks you for all that! It does look like it simplifies stuff in that there are no more config files! So that's really cool.

I will need to read the docs a bit later b/c from the landing page it's all still confusing. This is probably a function of the tech "debt". It has example commands like

   caddy file-server --domain example.com
- If it's a file server.. shouldn't it be something like FTP/FTPS? Everything on the webpage keeps saying HTTPS .. which is Hyper Text Transfer Protocol.. ie HTML webpages. So is it serving files or webpages? (I guess webpages are a type of file.. but in practice the two aren't the same)

- Do I need to stick this into my .profile? Or I need to configure a systemd service?

- What is it even serving out..? Is it just serving out everything in the immediate directory I'm in where I run the command?

- How is it "hooking into" example.com? How does the registrar know to point at my IP after I run this command? (or if that's done separately - say on the registrar website, why do you need to specify the URL locally at all?)

In any case, these are just immediate questions that come up. It's all stuff that probably makes sense if you know it already :)

And again.. i need to rtfm - so I'm not complaining or shooting the messenger here haha. Thanks for the info


File server in this context only means a HTTP(S) server that serves static files (which can be webpages like index.html, but not limited to that).

When you run `caddy file-server` in a folder it just starts serving all the files in that folder. You can serve MP3 files if you want, or .txt files, it doesn’t have to be webpages. It just happens to serve them over the HTTP protocol because that’s what the browser speaks.

Keep in mind that caddy doesn’t allow a user to get out of the folder you ran the command in, and it also doesn’t allow the user to list the files in your folder if you don’t explicitly allow that using:

    caddy file-server -browse
For example, I use the `-browse` option to allow users to download any previous release of my Lunar app here: https://releases.lunar.fyi

The domain option is not for pointing that domain to your IP address. That can only be done from your DNS provider (e.g. Cloudflare, or even Freenom which I pointed in the last comment)

What the domain option does is allow you to serve multiple websites on different domains from the same computer, and it also automatically generates SSL certificates for them so that you have encrypted `https://` support by default.

A few years ago, you would have had to buy SSL certificates from someone like Verisign, download those certificates, figure out where to put them securely and configure Apache or Nginx to use them for each domain. Caddy does all that automatically now.

Also keep in mind that if you run a file server like that: `caddy file-server -domain example.com`

… then caddy will only serve those files if you access them using that domain. If you try using your IP address directly, or any other domain instead of example.com, it won’t respond with your files.

In this way you could have multiple domains serving different files from the same computer.
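The same multi-domain setup can also be written as a Caddyfile instead of CLI flags — a hypothetical sketch, with placeholder domains and paths:

```
# Hypothetical Caddyfile: two domains served from one machine.
example.com {
    root * /srv/example
    file_server
}
blog.example.net {
    root * /srv/blog
    file_server browse   # directory listing enabled for this site only
}
```

Caddy will provision HTTPS certificates for both names automatically, just as it does with the `-domain` flag.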


Woah - thank you for taking the time to explain everything. It's challenging to find all the info so concisely in one spot. This has been really useful and educational :)


Hey - awesome work on that lunar page, i know it's a landing page and the goal is conversion but it was genuinely enjoyable to read, fun even. Fantastic.


Well thanks! Lately I’ve been afraid of the page becoming too information-dense because there are so many features and edge-cases I need to let people know about.

I’m really glad to hear that from you!


> Hosting webpages from home is still complicated and no progress has been made.

Now this would be a real improvement, not some new framework of the week.


Glad to see it isn't just me. I've been doing networking tasks on and off for over a decade and I still fear the nginx config file.


Most of the things you've mentioned are caused by the delayed IPv6 rollout; self-hosting just isn't a priority for residential ISPs. I'm sure that there are plenty of good tutorials on setting up Apache or some other web server, but those tutorials can't exist for actually exposing it to the Internet - there are just too many routers and ISPs.


If you have Python, here is a one-liner:

    python3 -m http.server

The home networking / dynamic IP stuff will probably not go away until ipv6 becomes more commonplace, but honestly that is for the better. Ask any business whose upnp-enabled receipt printer started outputting antiwork propaganda the past few weeks how it is going.


Spot on. Some of us are working on it. IMO the best solution currently (ie until ipv6 takes over and assuming we get rid of NATs when that happens) is tunneling. I maintain a list of options here:

https://github.com/anderspitman/awesome-tunneling

If you wanted to self-host a website from your home computer today I would recommend buying a domain from Cloudflare, and using Cloudflare Tunnel.

6 months from now I hope to be suggesting some variation of my open source alternative, https://boringproxy.io. It's not quite ready yet.


I found yunohost.org really simple to set up for all my homeserver needs.


The technology is far more advanced and bordering on magical in some cases, but most front-end web-development is far more painful and less fun to me than it once was.

Say about 15 years ago or so, if you had a creative inspiration for a personal site, you could whip out a text editor and start building top-notch stuff very easily. About the most complicated thing you had to do to start building something was adding jQuery to a project.

Today you have to research a pile of unnecessarily complicated (for most personal projects) frontend frameworks, libraries, and build systems, figure out the compiling and development environments, figure out a complex mess of containers and virtualization, etc, etc, etc. This stuff sucks the joy out of building anything, even if the actual coding itself is far more structured and convenient.

By the way as genuinely nice as Tailwind can be, calling anything a solved problem is kind of arrogant. There will probably be other design approaches and ideas invented over time that are even better.


I am not sure why you say that, nothing stops a developer from opening notepad and building a website. One of the wonderful things about the web is its backwards compatibility.

It's just that there's better tooling available now to let you build faster and with more features.


Exactly. So many comments here are acting as if they're forced to use npm+react+tailwind+etc. but you're absolutely not. Web standards (querySelector, fetch, display:flex, ...) are so far along compared to a decade ago that you could use zero libraries/frameworks and be significantly more productive than we were back then.

Web standards still have a long way to go, but it's comical to claim that things have gotten harder.


> Today you have to research a pile of unnecessarily complicated (for most personal projects) frontend frameworks, libraries, and build systems, figure out the compiling and development environments, figure out a complex mess of containers and virtualization, etc, etc, etc.

Well yeah, but once you've learned the modern paradigms, you can get right back into hacking away in your text editor. I've been doing web development stuff for longer than 15 years or so, and it's no more difficult to start a project now than it was back then. I can run a single command to create a template nextjs project and be hacking away in no time at all. Yeah there's a learning curve initially, and for certain use cases you may not want or need these modern tools, but it's no different than when, all that time ago, after much foot-dragging and moaning, I finally learned CSS and transitioned away from the "tables for everything" HTML I'd learned on Geocities, which seemed like the pinnacle of productivity at the time.


The way you described building a simple website (text editor only) works just as well today.

You need the frontend frameworks, build processes and a million NPM dependencies only when working inside a developer team on enterprise software of some sort.


Do you really have to? I'm still building sites front-end with mostly HTML, CSS and jQuery or even vanilla JS.


The title is pretty much the only part of the article I agree with.

While the industry was largely distracted by various NPM packages, the foundation we are building sites on got really nice. And I believe in some situations there's value in trying to use this foundation directly, not via a plethora of abstractions. Browsers are great. The DOM API is great (well, it certainly provides some really nice utils we previously had to look for in external libraries like jQuery). CSS is great. I will write "display: flex;" multiple times just because it feels nice to type that, compared to the dance we had to do a couple of decades ago to achieve the same layouts.

Static sites are great. They are fast to build, and they build fast. They are really easy to deploy as well. And they also work really well on end user devices. Not a single line of JS required, but if you want some liveliness on the page which can't be easily achieved by CSS animations, very few lines of JS are actually needed nowadays. And 0 NPM packages.

Modern JS web frameworks do solve problems some people actually have. But they're best suited for big corporate style webdev. For personal projects I prefer something more… artisanal? Dunno, it's like building a piece of furniture yourself instead of buying one from IKEA. Probably less practical, but feels nice. And may actually fit better in this one weird corner you have.


I'm honestly not sure if this is satire or not.

> Static was a fun diversion, but we're back to what works.

Static works really well for many use cases. And you are never going to beat the performance of static in a cache close to your user. I agree that there are many cases where server-rendered is the best option but static with a good sprinkling of JS and server-generated embeds definitely works.

> Tailwind CSS is the best thing to ever happen to CSS

This is obviously controversial. Maybe for non-static websites, where every `<p>` in your site is generated from the same line of code, it is trivial to add `class="m-4 text-gray-900"`; but when I use the same component in multiple places by hand, that gets boring really fast.

> GitHub Copilot

Maybe if you weren't writing `m-2` on every p this wouldn't be as helpful /s

Maybe I haven't seen the light yet, but in most weakly-typed languages Copilot felt like a huge loaded footgun. It also generally worked on the simple cases that didn't require much work anyways.

---

I think it is clear that this person found an approach that works for them, but it seems like there is still a lot of room for improvement here.


I have a feeling that this is the next religious war in dev.

I read this article and shudder in horror at, well, all of it. I like static sites (which wouldn't take minutes to build if they were written in a decent language). Adding lots and lots of JS dependencies and frameworks gives me the screaming ab-dabs - it's just adding complexity and dependency. I like writing code, not plumbing together bits of other people's code with bizarre config files. I object to Copilot for all sorts of reasons, but fundamentally because I enjoy writing good code.

But I know experienced devs who have exactly the reverse opinions. Like OP, they see all this as going in the right direction and making their lives easier.


Same.

I started making websites in 93, and to me, websites are about communication and sharing. They exist to facilitate communication. The current system of crazy dependencies and frameworks just doesn't do this; it values speed and ease over every other aspect of communicating. Imagine if TV developed in a way where they sat and tested how much they could speed up the show and be understood and made that the standard so shows could be 'watched' more quickly: That experience would SUCK.

Or cars that can go from 0 to 100 in .5 seconds but have no seat belts, no horns, no airbags, no turn signals...


I started in '00, and I still favour server-side rendered websites, with a smattering of JavaScript where needed.

I've worked with Angular and React, and they have their place, but IMO 95% of websites just don't need them.


Oh, actually, I started building websites earlier than '00, maybe something like '95, but started doing it for a job in '00.


> I object to Copilot for all sorts of reasons

I've been testing copilot for about a month now, one thing I realized very quickly is it's not as useful as you might think it is. Copilot is great for writing repetitive code, so if you just wrote a function to select an item, copilot will correctly guess what the deselect function implementation should be.

Outside of that though, I find that it gets in the way far more often than it helps. By far the most annoying part is that it sometimes interferes with intellisense: picking values from an enum in TypeScript is now a PIA if you have Copilot enabled.


I feel the same way as the author, but I work with PHP. Recently it's all been TDD, statically typed, and JavaScript gets webpacked into one or two files, so instead of juggling 15 script tags I only have one or two, plus a style tag for CSS. With CI this all happens in the background on every commit.

What you pay for when you hire a dev like me with 17 years of experience in the field is the ability to know which libraries are the best to use and which techniques are worth using in the process.


Is that not something you can learn by picking up any number of good books in the area in much less time?

I am just getting into TypeScript programming, and I am really having fun. I learnt about the details of how Javascript works, and it is quite cool to have a static layer over a dynamic one. For learning JavaScript, I like "The Good Parts" and the "You don't know JS" series. It took me a while to learn how to create a datatype that can defend its invariants not only statically, but also at runtime, but that seems quite feasible as well, especially with decorators. It is interesting though that nobody seems to be particularly interested in doing that, there is not much direct information about this available.


> Is that not something you can learn by picking up any number of good books in the area in much less time?

I'm shocked that you believe this. Work at a moderate-sized or larger tech company. The juniors, mid-levels and seniors all make the same mistakes and slowly grow out of making them. If it were the case that everyone could stop making the same mistakes and choosing the wrong abstractions simply by reading a book, then professional programmers would be good on day one and make none of those mistakes, and college students who've read dozens of books would come in at the top of the rankings.

> I am just getting into TypeScript programming, and I am really having fun. I learnt about the details of how Javascript works, and it is quite cool to have a static layer over a dynamic one. For learning JavaScript, I like "The Good Parts" and the "You don't know JS" series. It took me a while to learn how to create a datatype that can defend its invariants not only statically, but also at runtime, but that seems quite feasible as well, especially with decorators. It is interesting though that nobody seems to be particularly interested in doing that, there is not much direct information about this available.

This is learning to code, it's not learning to engineer. Assuming you finish your book(s) you'll start on the path of making lots and lots of mistakes until you grow into someone with more experience who stops making them.


> you grow into someone with more experience who stops making them.

or at least fewer, and sometimes more complicated mistakes.


Of course you need experience as an engineer. But knowing which Javascript libraries to pick is the easy part. I was not saying that you don't need experience, but you don't need much Javascript / Typescript experience. This can be done in a few months.


It will take a few months just to bring yourself up to speed on mouse/keyboard event bindings and how they interact in one specific browser and one operating system let alone all browsers and operating systems and mobile and that's without even thinking about rendering, templating, network operation wrappers, local storage, camera and sound APIs. The list goes on.

I've been doing this since literally IE 5.5 and I'm still learning new stuff all the time. The fact that you think you can learn all this in a few months is hilarious.


I already learnt most of what I need in one week. I was being pessimistic with 3 months.

If I need to know a particular event, I look it up in the documentation. The stuff you cite is just APIs, I can look that up too. I've used APIs before, you know. Right now I am piping data from Swift to Apple Metal and back, and write GPU code to estimate measurements from the FaceID camera in realtime.

Real programmers are coming to the Web, babe.


Furthermore, from what I see, a lot has changed in JS land. You can basically assume ES2018, with a few exceptions in the library; module systems are part of the standard now, and package managers seem to be catching up with that. A lot of what you learnt about JS and the accompanying APIs that is older than 5 years, you can basically forget now. Arguably, it might even be better if you had never learnt it in the first place. Because, let's face it, what was going on then was a hot mess of steaming shit.


Not really. You can't replicate experience. Everything you just wrote about with data types, invariants, and decorators is just flavor which you only have control over in your own personal code however as soon as you interact with web browser APIs, nodejs, and 3rd party libraries all that goes out the window and you end up conforming to other people's patterns.

In my honest opinion Typescript is a fad. You're still just coding JavaScript with an alternative syntax. Since you're borrowing dependencies from JS you need to know both and context switching between the two slows you down. From a CS perspective I get why TS syntax is better but from a functional perspective it's harder to find engineers that actually want to work with it because every TS project I've ever seen is really a mix of TS and JS.


Web browser APIs usually do keep their invariants. I wouldn't use libraries which don't. Libraries which do not support TypeScript will vanish. See, all your experience (in a limited field) is telling you the wrong thing.

And it is quite obvious to me that you need to know BOTH TypeScript and Javascript, of course, you cannot just learn TypeScript, because it is just a thin layer on top of Javascript without many runtime guarantees otherwise.


> See, all your experience (in a limited field) is telling you the wrong thing.

You seem to be conflating objective wrong/right with your own personal stance. Do you have an objective argument in favor of your assertion about the longevity of non-TS libs?

> And it is quite obvious to me that you need to know BOTH TypeScript and Javascript

That's a reasonable strategy on the assumption that TypeScript will be used, but the very fact of its being so is actually an argument in favor of the point made by gp which has been left unaddressed (RE context-switching).


Obviously, there is no objective wrong/right here, as only the future can tell. My subjective experience tells me, it will be that way.

There is no need to context-switch. Just accept JavaScript as part of TypeScript, because it really is. The argument is clear: TypeScript drastically reduces the number of errors you will make when coding, and the amount of time you need to spend thinking about stuff that is really trivial.

The rule of thumb on how to do this is also simple: Keep your types simple, use them to make your life easier, not more complicated. If you cannot model something simply using types, don't try to do so, just use the dynamic typing escape hatch that Javascript provides.
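That rule of thumb can be sketched in a few lines (the names and shapes here are made up for illustration, not from any real codebase):

```typescript
// Simple types catch the trivial mistakes at compile time.
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}`;
}

// The escape hatch: when data is too dynamic to model cheaply,
// take it as `unknown` and narrow it by hand at the boundary
// instead of building an elaborate type for it.
function parseUser(raw: unknown): User | null {
  const r = raw as any; // deliberate use of the dynamic escape hatch
  if (r && typeof r.id === "number" && typeof r.name === "string") {
    return { id: r.id, name: r.name };
  }
  return null;
}

console.log(greet({ id: 1, name: "Ada" })); // Hello, Ada
console.log(parseUser("definitely not a user")); // null
```

Everything inside the app works with the simple `User` type; the `any` lives only at the edge where untyped data comes in.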


I'm just explaining to you how it plays out when a business is deciding whether or not to use TS at all. Mistakes and errors are checked for by linters and unit tests automatically in the background, so that's not a benefit anyone cares about. The need to find coders who want to work on your TS project shrinks the pool of potential candidates and makes hiring more difficult. Large organizations can deal with it, because when you have 500 engineers on staff you can find 50 that will use TS for you. For smaller orgs it's a pain.


These are concerns that will arise in some organisations, for sure. Personally, I don't have a need to work together with people who cannot be bothered to learn TypeScript. A nice type system is something you really want, and it improves code quality. That's no reason not to also use linters and unit tests; these things do not oppose each other, they work together nicely.


Some things just have to be experienced 50 times before you learn them


> a dev like me with 17 years of experience in the field

That's not the flex I think you think it is ;) I've been coding for over 40 years now, professionally for over 25. The attitude you have is one I very much associate with younger devs with limited experience.


Pretty sure they aren’t flexing and it’s just you who is treating this like a competition.


They're providing context, not flexing.


> Imagine if TV developed in a way where they sat and tested how much they could speed up the show and be understood and made that the standard so shows could be 'watched' more quickly: That experience would SUCK.

Honestly, a lot of times, I’m left wondering if this isn’t the case.


Well, the trend seems to be going completely the other way: toward making the experience as slow as possible.


> I like writing code, not plumbing together bits of other people's code with bizarre config files

I don't like reinventing the wheel every time I build something


Meanwhile I do like reinventing the wheel. Or, more accurately, designing a wheel that fits my needs well.

Sure, I also use libraries for things that are just tedious, hard to get right or just some standard feature that's always used the same way. But you can overdo it.

Debugging my own code is annoying but doable. Debugging other people's code is a lot more tedious. Troubleshooting a mess of 1000 transitive dependencies because one package had version 10.0.6 instead of 10.0.5 and that caused an avalanche of breakage makes me reconsider my life choices.


There's a balance to be struck between your opposing opinions I imagine. I would just suggest solving the core business problems of your software in your own code, so that you have full control over them, can learn about your product from the building process, and can provide value through specializing your codebase.


> I don't like reinventing the wheel every time I build something

This is the crux of the difference, I think. I see open-source libraries as (mostly) mediocre code, usually massively bloated with features I don't need. Using them is like using an 18-wheeler to nip to the shops. I have found in my experience that it saves time in the long run to just write the minimal amount of code I need for the job, rather than dealing with the added complexity and dependency of adding a 3rd-party library.


I think that's an over-generalisation: for every "world-changing, do-everything" trendy OSS library, there's also a much simpler alternative maintained by a fellow grognard who just wants to write the bare minimum so you don't have to.

The alternative is adding my own mediocre code, with all the maintenance costs that incurs.


Ideally, I like to build on top of a batteries-included base. Failing that, I'd rather reinvent wheels than spend all my time impedance-matching with glue.

The problem with bizarre config files is that you repeatedly have to learn enough to make them work but then do not need to use that knowledge again until after you've long since forgotten and you have to start the learning process from scratch.


Exactly. Badly written libraries are just bad libraries; it doesn't mean that it's a bad idea to build reusable components.


So you need a wheel, but you end up getting a whole car, and the wheels turn out to be octagons, which don't work great for your purpose (though they basically work on all the different terrains that other users had problems with?) ... when in half the time and at one-hundredth the price you could have just made the round wheel you need.


“If you wish to make an apple pie from scratch, you must first invent the universe.”

-Carl Sagan


> but fundamentally because I enjoy writing good code.

Using Copilot speeds you up, so why not take advantage of it?


Because it's a massive footgun, no, footcannon. It also most likely violates the GPL.

It's copy-pasting from stackoverflow except a little faster.


I am still reluctant to use JS/CSS frameworks. What I learned in 10 years is the fact that I hate learning useless abstractions that will eventually fade within the span of 18/36 months. The joy in web dev for me is creating stuff not learning how to use frameworks.


Me too. Also, the frameworks deliver a lot of bloat and dependencies, which can be a boomerang. For example, I did some pages with Bootstrap 3... for that I had to use a module which modifies the HTML templates of the CMS. Then the CMS got updated several times, but the module hasn't been updated because Bootstrap 3 is obsolete. But hey, there is a new module for the new cool guy in town, BS 4. So you can't update the CMS, or you have to rebuild a large part of the website for the sake of updates. That's totally senseless since it has no advantages for my customers. Yes, I can sell it with "you must update because of security, but hey, nothing changes..." - something I would absolutely hate if it were done to me.

I have my own framework - smaller, and it fits the way I work. And yes, I understand that this would be a different situation with a team (but then you have a guy without soul, morals or honor, called "the sales guy", who sells everything).


The frameworks are really stable these days. There has been like one major change in React since its beginning - the introduction of hooks. But if you got in 8 years ago, you can just sit and write a good SPA with that knowledge.


Vue.js is the only frontend framework that I have some clue about, and the first blog post when googling 'Vue 2 to 3 migration' [0] says it took them 4 man-weeks to migrate their product.

From my limited experience, the main issue is that it's really hard to ensure frontend actually works as before, after upgrading frontend libraries/frameworks, outside of having really extensive end-to-end test suite.

[0] https://crisp.chat/blog/vuejs-migration/


> Remix.run gives us the best of both worlds. Still writing your site predominately with JavaScript. [...]

This point is stated as if it's a widely accepted truth. Is that really the case?

I still labour under the misapprehension that one should aim to use as little javascript as possible.


Also, TFA praises Tailwind CSS as "solving CSS", and as great as it may be, I guess that statement lacks a bit of nuance.


I hear so many people recommend Tailwind but every time I look at examples it reminds me of an entry for the 17th Annual Esoteric Programming Language Competition.

Maybe you get used to it but hell it looks hard to parse visually.


Tailwind is definitely a Marmite framework (either you love it or you hate it). Yes, it can be difficult to read. And it can be difficult to debug and fix something where you make a small change and it cascades 'upwards' somehow. But... the problems I have with Tailwind are no different to the ones I had with CSS and all the various magic methodologies that were supposed to "fix" CSS (things like BEM & SMACSS). The big difference is that I can solve those problems in a fraction of the time with Tailwind. As a last resort you can just strip out all the classes of the elements in question and start over. If I had to do that with normal CSS, it would almost definitely break something somewhere else. With Tailwind, you are working in a highly localised area and can see everything you need on one screen. That, to me, is the power of Tailwind.


Yes. The true test of a CSS framework is how easily you can change it without breaking shit. I haven’t found a better solution than Tailwind in that regard. Emotion comes close but it also encourages you to componentize everything, which can cause change issues. In Tailwind I found my components disappear because I didn’t need them.


There's something about Tailwind (which I use, love, and evangelize to all who will listen and many who won't) that makes me think of the phrase "You can't fire me, I quit!"


All CSS is hard to parse visually.

Unless you follow very strict development guidelines, you do not really know anything unless you look at the computed styles in a debugging tool.

I think Tailwind does help wrangle some of that cognitive overhead with its naming system.


I kind of agree with the author that Tailwind is the best thing to happen to CSS. But yeah he definitely oversteps when he says that CSS is solved.

It most certainly is not a solved problem lol. UI still feels way fucking harder than it should, in all kinds of ways.


The advantages given in the CSS section are really around:

- utility classes

- theme system

Neither of which are unique to Tailwind (which is great in itself), yet the article paints it like it's the only thing out there solving for it.


TFA?



>I still labour under the misapprehension that one should aim to use as little javascript as possible.

Well, this is so 2005.


You say that like it's a bad thing?


I'm saying it in jest, but also as an observation. The TFA writer takes it for granted, but then again, so do many (most?) today.


Maybe that makes it even more important to repeatedly raise the possibility that this assumption is wrong.


Yeah, some JavaScript can be nice, but if your website stops working with all JavaScript blocked, you should probably make a native program instead.

This is particularly important so that web scrapers and other automated tools that the Web relies on can work. Also accessibility tools.


On the remix.run site it says

>Remix is a seamless server and browser runtime that provides snappy page loads and instant transitions by leveraging distributed systems and native browser features instead of clunky static builds. Built on the Web Fetch API (instead of Node) it can run anywhere.

I found no mention of the blockchain here or anywhere else. Why are they tooting their own horn so much if they are clearly not Web3 ready?


This article strikes me as hopelessly naive. Filled with the same kind of 'new is better and solves everything' mentality I've seen in the JS community for years. There's no desire for maturity because nothing has ever matured. Even frameworks from huge companies go through radical internal paradigm shifts every 18 months. JS runs natively in the browser, yet the author implies compilation is required to deploy client-side rendered websites. Sass, Less, CSS Grid: all claimed to 'solve' CSS for whatever the problem in the mind of the author happened to be. ... more of the same.

This article brings no hope that something has fundamentally changed in the community.


As somebody who only dabbled in creating a personal website in the late 90s, using FrontPage and hosting on a free host, the current web seems like a completely different, and way more complex, beast to build websites for.

Back then all I needed was get a bit familiar with FrontPage and html scripting, some Photoshop to create/edit image elements, and I already had a functional website.

I knew if I wanted to "really" do it, I should probably learn proper HTML, but 15-year-old me was already plenty proud of having a pretty good-looking, well-working website at all. It even had a visitor counter and a guest book, all the jazz that defined 90s websites, and that was enough for me.

Now, over 2 decades later, just the prospect of getting back into it seems so extremely more daunting and complex. We are now at HTML5? At "Web 3.0"? I don't really know anymore; it's all just become so "vast", with so many options and so much complexity, that I wouldn't even know where or how to start anymore.

Is it still viable to just learn plain html and do something with that? Have WYSIWYG editors matured so much that they can be used at least semi-professionally, or is that still "looked down upon"?


I started in the 90s and have dipped in and out of web dev ever since, but my impression is that a lot of the complexity stems from organizations and teams getting bigger and needing to work with one another. Frameworks and tools proliferate in order to make it easier to spread the work out across devs, communicate with one another, deal with version control, etc. The other main reason is to deal with the fact that a lot of web 'pages' are actually apps now and have to talk to the server/there's a bunch of backend stuff going on, so there has to be reconciliation between front-end and back-end.

If you're working on your own and don't need a back-end, there are options for putting up a personal site. Webflow[0] and Squarespace[1] come to mind. They'd be taken seriously semi-professionally in non-tech spaces. Can't speak to tech ones.

[0]: https://webflow.com/?r=0

[1]: https://www.squarespace.com/


> Is it still viable to just learn plain html and do something with that? Have WYSIWYG editors matured so much that they can be used at least semi-professionally, or is that still "looked down upon"?

Are you trying to make a website, or a web application? That really is the biggest question. For a static website that's read-only (meaning no user login, or user-submitted data, no online store), the old ways work fine.

With modern CMSes, you can create ecommerce stores and other dynamic functionality, but it comes at the cost of flexibility. On Squarespace, for example, you can only use the pre-built templates to create the site.

And even if you are building a website by hand where you have full visual control, you can add a lot of interactivity by embedding iframes for things like JotForm or a PayPal button.


You can still do plain HTML; people will just notice that. Personally I don't care. I'm not a webdev and I don't pretend to be. I have set up a pretty basic, sorta retro personal website where people can find my resume.

There's a lot more you can do with static sites, it just depends on how much time you want to sink into it. I just made something basic. Maybe when I'm bored I'll try to build on it.


No it's not, because your website will not get traction unless you SEO the hell out of it, pay for Google ads, pay for FB ads, pay some blogger to write about it, etc. It's impossible for a website to be found when the top spots in every search are taken by a few known players. It's easy to make a private web page, but that's not a website.

In fact the best time was about 2005, when most current day's media sites started, because it was still possible to be found.

The tech you use does not matter. The web is backwards compatible

> The first time you see your website instantly update because you made a change in your CMS, it'll shatter your whole Static Site Generated world view.

Jesus, we've had that thing since 1995. What happened?


> "...your website will not get traction..."

And?

Not to be glib, but you need exactly zero readers on your website to have a whale of a time making it. In terms of building a website, the tech is exactly what matters because that is where the fun is.

I fully agree with your last sentence though. It's ridiculous to think that only dynamic CMSes allow instant changes. A small static website reloads instantly. Even better, one doesn't even need the internet, since you can see the updates on localhost!


As I get older I remember more of what the greybeards I learned from would say: "everything in tech is a cycle. What we have now will be replaced with something new, which will in turn be replaced with something old reimagined."

If you stay in IT long enough you may see this cycle repeat 3 or 4 times


> Jesus, we 've had that thing since 1995. What happened?

I thought the same thing. I have built several custom WordPress sites, and changing the PHP code and hitting F5 shows me the changes immediately. What world is this where a site needs to re-compile each time a change is made?


Beautiful how I can't read the article on a slightly outdated browser (the state-of-the-art SPA crashes), because in the best of all times to build websites with the internet-of-javascript-bs, actually reading them is a nightmare lol.


It's a strange article where I start out vigorously nodding my head "yes! yes!" and end up "WTF" when the bulk of the post is about how awesome Remix is (huh?!) and Tailwind has "solved" CSS (lol). I mean, if you like Remix and Tailwind, cool cool—but that really has nothing to do with the overall trajectory of what's generally-speaking possible, easy, or commendable about web development today.

The real success story we should be celebrating is how awesome the specs are in modern web browsers. Vanilla JS, CSS, and even HTML itself have gotten incredibly good. Just having Flexbox and Grid working as native CSS layout engines is tremendous, and soon we'll have container queries (!!). The latest ES versions are lightyears ahead of the JavaScript of yesteryear. HTML now has superpowers when you consider what custom elements/web components are capable of. Developments at the level of HTTP itself, along with in-browser imports, are affording new opportunities to leverage the browser directly to handle dependency graphs rather than requiring everyone to bundle/transpile everything all the time for all seasons.
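To make the Flexbox/Grid point concrete, here is a tiny sketch of what native Grid alone buys you, with no framework at all (the class name is arbitrary):

```html
<!-- A responsive card grid in a few lines of plain CSS:
     columns are added and dropped automatically as the
     viewport resizes, no media queries or JS needed. -->
<style>
  .cards {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
    gap: 1rem;
  }
</style>
<div class="cards">
  <article>One</article>
  <article>Two</article>
  <article>Three</article>
</div>
```

Layouts like this used to require float hacks or a grid framework; now they're a handful of declarations in the platform itself.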

I wish the article had gone much more into those exciting, web-spec developments—rather than tout an unproven and controversial JS framework (Remix) and a somewhat-proven yet still-controversial CSS framework.


I have a question.

Would someone please give me some form of enlightenment about the practice of product placement in blog posts?

Is this the new normal?

>I'm Simeon, a [...] and Solution Engineer @ Sanity.io

> Some major bias at play here but when I came across Sanity I finally found "the CMS I'd always hoped existed".

It is cool that he acknowledges the "bias", but this is more in the "conflict of interest" category, in my humble opinion.

What do you think? I am curious.

Edit: Please, don't just downvote. I don't care about "karma". Give me some explanation.


The OP used to work for me. He was a massive fan of Sanity before he went to work for them. I think that is in part why he went to work for them. Knowing him, I am sure that he was not consciously doing any product placement, rather showing what he genuinely feels about the product and heading off any conflict of interest concerns others might have.

As to the practice of product placement in blog posts. This is HUGE and the way that many bloggers make their living (although less so in dev). Affiliate marketing (getting paid for links to products & services) makes up around 10% of e-commerce transactions. It's been around for a long time and is likely to continue to grow as it's one of the few directly attributable sales channels (not without its issues however).

I'm fine with people monetizing their content, especially if it's useful and ad free. What is less comfortable is where people don't make it clear that they get paid for it.


> I am sure that he was was not consciously doing any product placement

But it reads like product placement. You said yourself that product placement is huge.

From the article...

> "And if there's anything you feel it can't yet do, it's only because you haven't spent enough time building it yourself yet. There's no stopping you."

There's no stopping me? That reads like marketing spin.

> "it happens to be the best CMS."

Says the guy who works there.

> "incredibly generous free tier"

So not just regular generosity, "incredible" generosity!


Thanks for the answer.


> at some point we all accepted that if every time we need to fix a typo, it's okay to wait ~10 minutes to completely destroy and rebuild the website again as static files

No, a lot of developers never accepted it. It’s insane.

I have the impression that people who started in the field in the past half decade have a kind of twisted view of what is “normal” in web development. We really walked backwards in many senses in the past ten years, despite all the underlying evolution in the tech stack.


> CSS is effectively solved. Tailwind CSS is the best thing to ever happen to CSS. I cannot imagine ever writing CSS in a separate file and having to think of names for elements. It's also an excellent resource for beginners.

I work with developers who still have difficulty with concepts like specificity and cascading, even though they have been working with CSS for many years.

I don't think statements like this are true or encouraging.


Tailwind kind of makes specificity and the cascade a non issue, though. Sounds like it might be just the thing for these developers.


Yes, and I would argue that many people using frameworks like these never learn those fundamental concepts.


I'd argue that never learning is the primary reason those frameworks exist.


Plenty of web devs understand how specificity and the cascade work. That doesn't mean that they work well. CSS is full of examples of good-sounding ideas that didn't pan out.

Frankly the whole thing works better when you avoid touching the cascade as much as possible. Which is why these atomic frameworks are modeled this way.


I thought that was a weird statement. Tailwind solves (kind of, to me) the problem of organizing CSS. I mean, you still have to pick the right rules to write.


Isn't Tailwind the one where you write inline styles as class names instead of using the style attribute? How's that good?


Yes, and there is nothing inherently good or bad in it, nor is it something new - we already experienced it in the form of Atomic CSS and similar approaches. It's just another swing of the pendulum: for some use cases it's good and will bring joy and a fast development pace; for others it will bring pain after the honeymoon is over.


It seems like a reimagining of the style HTML attribute to me.
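For comparison, the two look roughly like this (the utility names are real Tailwind defaults; the markup itself is just illustrative):

```html
<!-- The style attribute: arbitrary one-off values -->
<div style="display: flex; padding: 16px; background-color: #f3f4f6;">…</div>

<!-- Tailwind: the same declarations, but drawn from a constrained
     design scale (p-4 = 1rem of padding, bg-gray-100 = #f3f4f6) -->
<div class="flex p-4 bg-gray-100">…</div>
```

The practical difference from the style attribute is that the class names are picked from a fixed palette, so values stay consistent across the site, and they can respond to hover/focus states and breakpoints, which inline styles can't.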

