The case for vanilla front-end development (pushdata.io)
54 points by rlonn 86 days ago | 82 comments



> When on a 1Mbit/s WiFi connection in Greece, my vanilla app loaded in 4 seconds and started rendering after ~2 seconds, while the React/Redux app took 50 seconds to first render!

This is the thing that baffles me about the SPA/framework trend, regardless of connection speed. The whole pitch for React/Vue/etc. is that they're supposed to make for more responsive interfaces. And yet, whenever I actually use an app written in React/Vue/etc., I find myself marveling at how slow it feels. Even on a desktop PC with a fast connection, I spend so much time staring at progress bars and loading spinners as all the various bits and bobs of the interface get pulled down and rendered. I don't know whether pages that are rendered server-side are empirically faster, but they sure feel faster, because after a brief (usually < 1s) wait the whole page just, you know, appears, fully rendered and ready to go.

As another developer who's old enough to remember the CENTER tag, I'm open to the idea that this is just incipient old-fogeydom on my part. But whenever I hear people start talking about how much more responsive SPAs are, it sounds a lot like the old gag: "who are you going to believe, me or your lying eyes?"


> Even on a desktop PC with a fast connection, I spend so much time staring at progress bars and loading spinners as all the various bits and bobs of the interface get pulled down and rendered.

A lot of that is likely because of the way most REST APIs are laid out.

Suppose you're looking at a page that displays an invoice for an accounting system. With a traditional non-SPA app, you'd do a bunch of SQL queries and build a page from an HTML template.

With a REST API, you're doing one HTTP call per entity, at least, and then subsequent HTTP calls for aggregate entities.

When I've made SPAs before, I've often eschewed REST and just made a single Ajax call for "everything I need to display X to the user," not just out of efficiency, but also out of laziness -- it's easy to build. As long as you internally use your API only and don't publish keys for third parties, it's easy to maintain too.
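
Roughly like this - one view-specific endpoint instead of one call per entity (the URL and response shape here are made up for illustration):

    // Hypothetical aggregate endpoint: everything the invoice page needs,
    // fetched in a single round trip.
    interface InvoiceView {
      invoice: { id: string; total: number };
      customer: { name: string };
      lines: { description: string; amount: number }[];
    }

    async function loadInvoiceView(id: string): Promise<InvoiceView> {
      const res = await fetch(`/api/views/invoice/${id}`);
      if (!res.ok) throw new Error(`Failed to load invoice view: ${res.status}`);
      return res.json();
    }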

There are some solutions to address the whole problem on a grander scale (GraphQL comes to mind), but overall, that's the cause of slow SPAs in my experience.


If that's the case, shouldn't HTTP/2 have solved that issue? HTTP/2 is multiplexed, so if you make something like 10 HTTP calls, only a single TCP connection would be established. Additionally, according to section 9.1 of the HTTP/2 spec[0], connections are persistent so unless the server closed the connection, an already established connection will be available from when you first loaded the page.

[0]: https://httpwg.org/specs/rfc7540.html#rfc.section.9.1
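
For example, a batch of independent calls like this would all share a single connection over HTTP/2 (a sketch; the endpoints are made up):

    // Independent requests fired in parallel; over HTTP/2 they are multiplexed
    // onto one TCP connection instead of opening a socket per request.
    async function loadDashboard() {
      const [user, invoices, settings] = await Promise.all([
        fetch('/api/user').then((r) => r.json()),
        fetch('/api/invoices').then((r) => r.json()),
        fetch('/api/settings').then((r) => r.json()),
      ]);
      console.log(user, invoices, settings);
    }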


Well, no.

For one, the most popular HTTP client library (Axios) doesn't support that kind of pipelining.

But also, you don't necessarily know what to pipeline until you get the first request through. Most web apps will make a REST request, look at the contents of the response, then make another REST request. Maybe you could keep the socket open through that, or you could even use WebSockets to bypass HTTP altogether and use JSON-RPC, but still, you don't know what you need to fetch in the second round until you finish the first one, so either way you're looking at a waterfall of requests, one after another. Pipelining them would scarcely help.
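
Something like this is extremely common (hypothetical endpoints, just to illustrate the dependency):

    // A typical REST waterfall: the second request needs data from the first
    // response, so the calls can't overlap - even over one HTTP/2 connection.
    async function showInvoice(id: string) {
      const invoice = await (await fetch(`/api/invoices/${id}`)).json();
      // Only now do we know which customer to fetch:
      const customer = await (await fetch(`/api/customers/${invoice.customerId}`)).json();
      console.log(invoice, customer);
    }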

What would help is having one API request that basically says, "here's who I am and here's the page the user wants. Give me everything for it." Then you're moving the controller back to the server, and leaving only the view on the client.


Axios doesn't control which protocol is used; the browser and the server negotiate a suitable protocol and version, which the browser then uses transparently for XHR (which Axios uses under the hood). Don't confuse multiplexing with pipelining, which was introduced in HTTP/1.1. Multiplexing is a much better answer to head-of-line blocking than pipelining: with pipelining, a large or slow response delays all subsequent responses, whereas multiplexing allows requests and responses to be interleaved in parallel.

I do agree that waterfall-style requests are more common with a REST structure, and that something like GraphQL could solve that - but GraphQL seems largely incompatible with how many people build web apps today: smaller components that request their own data. I'm also unsure how compatible a dynamic query language like GraphQL is with denormalized databases like Cassandra or ScyllaDB, where you can't model your data before you've established which queries your site/app will perform. That said, I've yet to see a codebase where the REST waterfall was a huge problem.

Note that I too dislike how slow websites are these days, and much prefer to both build and use entirely server-side rendered ones. None of the SPAs that I know of have been a success in terms of performance; Facebook, Google Mail, Reddit's redesign, YouTube, Twitch - most of them are dreadfully slow and sluggish.


Yes, this has been my suspicion too. Calls out across a network like the Internet are always going to perform variably, and each new call you make represents a new roll of the dice on how fast you're going to get a response back. So passing the whole page over the wire in one call feels faster than breaking it up into X calls and then assembling the results of them all on the client end, with the page seeming slower as the value of X gets bigger.


Doesn't responsive UI just mean it works on different screen sizes and devices?


That's something slightly different: responsive design (see https://en.wikipedia.org/wiki/Responsive_web_design).


I've started using Vue for some projects, and while it has some fantastic upsides, I find it very hard to optimise. With native code you can easily profile and work out where you need to speed things up or leverage caching.

With (e.g.) Vue, because you write so little code comparatively, it leaves you very little to do if the 'magic' that's making stuff happen under the hood is very slow.

I work with a largely native JavaScript SPA (main dependency is d3) and find that JavaScript can be pretty damn fast if you need it to be.


Time to first render isn't as important anymore for web applications; the expectation is that people will open the app and leave it open, and there is data to back this up. For informational sites, time to render matters more. That being said, there are also a lot of bad developers who basically negate any optimizations the library makes through their own ignorance.


> the expectation is that people will open the app and leave it open, and there is data to back this up

I'm suspicious about this data. This sounds way too much like the same misguided thinking that makes people ignore performance, because obviously your app is so special it'll be the only thing the user will have open on their computer. I can buy this expectation for work SaaS, in front of which you sit for 8 hours straight. Most of those used by genpop? Opened briefly and rarely.


> Even on a desktop PC with a fast connection, I spend so much time staring at progress bars and loading spinners as all the various bits and bobs of the interface get pulled down and rendered.

This means whoever wrote the app wrote it slow. React rendering performance is fantastic but that doesn't stop devs from abusing AJAX.


I think it's fair to say the browser has a ton of potential performance pitfalls in it, regardless of framework or lack thereof, and it's really easy to make a slow thing no matter what you do. There's definitely a ton of people who take relatively fast frameworks and do slow things with them.


I think Progressive Web Apps should help with this. The architecture there designates that you have an app shell that's only downloaded once - this should be your SPA framework and all the layout bits and bobs. Those get cached at the service worker layer. Everything else (i.e. the actual data that gets displayed) can be cached as needed or retrieved as needed, without having to redownload all the "app shell" bits.
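
A minimal sketch of that app-shell caching in a service worker (cache name and file list are placeholders, not from any particular PWA; events typed loosely for brevity):

    // sw.js (sketch) - cache the app shell at install time, serve it from cache.
    const SHELL_CACHE = 'app-shell-v1';
    const SHELL_FILES = ['/', '/app.js', '/app.css']; // placeholder shell assets

    self.addEventListener('install', (event: any) => {
      event.waitUntil(caches.open(SHELL_CACHE).then((c) => c.addAll(SHELL_FILES)));
    });

    self.addEventListener('fetch', (event: any) => {
      // Shell assets come from the cache; anything not cached (i.e. the data)
      // falls through to the network.
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });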


"...while a framework will let you create standard solutions to standard problems very quickly, you always then have to add that little special sauce that makes your app unique, and that is where the frameworks can slow you down instead of speeding you up. Vanilla coding, on the other hand, is just as fast whether you're doing something "normal" or something unique to your particular app."

I've been saying this for years and always get heavy push-back by the framework proponents. I've also encouraged them to look at history. Ten years ago, it was ALL about Backbone. Then Angular came around and it was ALL about Angular. Then React came around and it's currently ALL about that. But, wait! Now Vue is starting to gain some traction. In 3-5 years it'll be ALL about Vue.

My point is this: if the frameworks are so great and all-encompassing (I've literally heard, "you can build anything with [FRAMEWORK]. You don't need anything else!"), why doesn't one of them stand out so solidly that nobody would think to create another one? Why do we, like toddlers grown tall, keep rushing to the next latest and greatest shiny object that catches our eyes?


I agree with your point, but having worked extensively with React, it really does make development easier for me. While it feels like there is always a new shiny front-end framework out there, the ones that stick out like Angular, React, Vue, etc. all have real value that developers like! So I don't really see it as a bad thing.


I don't deny that they make things easier. But there's a cost to that, sometimes a significant cost. In my experience (and confirmed by others I know--including the author of the post), frameworks get you to 80% of where you need to go in a project _really_ fast. The next 10% takes a little effort but it's not impossible. But that last 10% is like pulling teeth from a lion. That's because inevitably the project has requirements that the framework designers didn't and couldn't think of.

I submit that that last 10% is the reason why OP got more done faster with Vanilla than he could with React.


I'm not sure what you imagine that 10% to be but this has not been my experience at all.


> (I've literally heard, "you can build anything with [FRAMEWORK]. You don't need anything else!")

That's always a lie. Take the logarithm of the number of subdirectories in node_modules, and subtract the logarithm of the subdirectory count of node_modules for the framework's hello world, to get an estimate of how much other stuff you're using.


I know. But that's how these guys think. They don't actually know the fundamentals. It's almost comical. My business partner worked with a guy a few years ago who, after running npm install on something was dazzled by what he had running. My business partner pointed out, "you have 400,000 files in a project that doesn't actually do anything." The guy didn't get it.


It sounds to me like you're lacking a fundamental understanding of what's in node_modules. The lion's share of it is related to the dev environment and build system, very little of it is actually "source" that gets bundled into the output. What you're saying is basically the equivalent of including the size of the JDK in with a Java project.


>When on a 1Mbit/s WiFi connection in Greece, my vanilla app loaded in 4 seconds and started rendering after ~2 seconds, while the React/Redux app took 50 seconds to first render!

This never ceases to puzzle me. I thought yuppie framework jockeys love to travel. Don't they ever find themselves at some remote locale, try to check in on things, and realize that their site runs like pure ass outside of coastal North America? Do they just not care?

Simplicity really can be a class issue. Lots of developers seem to write code with their own demographic in mind, rather than the real people who have to put up with their bullshit.


I do love to travel. I will write your app in Backbone, Angular, React, or ??? depending on what we're building. It's hard to reply in a mature and productive way here, since you've called what some people do... bullshit. There is no silver bullet that is the correct technical choice for all situations, and using some bad choices to say we're all making bad choices is short-sighted. You could write a similar article about how people don't test, performance test, security test, or many other 'tests', and we could have the same discussion. Not every situation is the same, and not every user is in a remote locale with no connection; but when they are, we have to take that into account. If our app is for a North American bank in the Midwest, then yes - why would we care about performance in the middle of Australia? Out in the real world, projects have priorities: from the customer, the business, security, and even the team (someone will have to take care of it). I don't appreciate the constant Slashdot-style negativity going on in this and many other threads these days on HN. I wish you'd consider adding something constructive to the discussion.


I also work in the "real world", but that doesn't mean I will build an inefficient, tangled mess just because I am told the users of this software will have the computing power of NASA. Professional ethics.


Yeah, it's very useful to try out your own code on the crappiest infrastructure possible and see how it works. In this case the developers of the React app were inexperienced students so I think they may be forgiven, but having worked with performance the past 15 years I really agree that far too few developers make the effort to verify the performance of their code. And like you say it's probably just a case of not eating your own dogfood.


> it's probably just a case of not eating your own dogfood.

I agree, though I'd qualify it because I'm sure they're testing the site, but it's on a fat pipe with low latency that easily hides any performance issues. It doesn't seem limited to web development though - game development seems to have the same issue, where it's clear devs are playing and testing on overpowered rigs with little regard for low-end systems.


In gamedev there's a much higher awareness of performance, because in most cases, low performance creates a literal cut-off point - anyone with specs lower than what your game maxes out can't play, thus won't pay. Web developers get away with more bullshit, because monetization is more indirect, and quite often the site is not just entertainment, so users have stronger reasons to bear with it, frustration and battery life be damned.


I haven't written a line of code that ran outside of a corporate network in years. Comments like this show how ignorant HN is about what "framework jockeys" actually do.


Amen. My code mainly runs outside my company, but I agree with this 1000000x over. SOOO many people on HN seem to think the entire world works exactly like they do, on the same kinds of things. It seems outside most people's understanding that, maybe, just maybe, the business conditions where someone else works are.... different, GASP.

It's not the same thing, but it's a pet peeve of mine when people make comments like "How could a developer skip X or not do Y?!?" Well, not all of us can pick 100% of what we work on, and we are sometimes forced to cut corners. We aren't happy about it, but we are happier taking home a paycheck than being right 100% of the time. My go-to is "Is this the hill I want to die on?" when this stuff comes up.


Not really. GP's comment is borne out of frustration with public-facing Internet, built by those framework jockeys. It doesn't mean the same problem isn't happening over at corporate.

Speaking of which, having worked a bit in corporate setting, I have my own extra point of hatred here too: I saw perfectly good, if crappy looking, server-side, plain HTML intranet apps replaced with most recent JavaScript framework shitfest, making my life extra miserable as I suddenly had to deal with some apps I was forced to use stealing computing resources from the very job I had to do.


Sorry to hear that! I consulted for a year at a large financial company whose trading platform consisted of a million lines of C code. I've always loved C, but looking at the code it was actually hard to tell which language it was. They were using internally-developed libraries to do everything. Practically every line of code was a macro or a library call, or a macro calling a library. I guess it makes sense to standardise, modularize and abstract things away to a ridiculous degree when you're working on very large codebases with many people. I'm happy I don't have to do that anymore though!


I'm on rural (satellite) internet sometimes, with decent bandwidth but terrible ping times (500+ ms). Certain JS-heavy pages just don't load! It's like the dev set a timeout on AJAX of 400ms, and when it doesn't return, I just see empty boxes everywhere.

If you build websites, come spend some time in rural Idaho - it'll change how you develop!


This is a browser problem. I get the same issue working on IoT devices with limited number of inbound connections. If they can't open up their 6 sockets, all of the mainline browsers fail to consistently reuse existing connections for missing resources.


Framework jockey here. I spin Vue bullshit. I also sold all my crap a few years ago and started traveling full-time, mostly in South America.

Something to consider:

1. If the server render is slow, everyone notices.

2. If the client download is slow, only people on slow connections notice and only then if it's not cached.

So in other words, your CUSTOMER on a fast connection might notice #1, but probably won't notice #2.

I'm not defending unoptimized code exactly -- I'm just pointing out that if the person paying the bills doesn't notice a problem, it's far less likely to get fixed.


I'm very puzzled. You are operating from the assumption that 34 kB would take 50 seconds to download on a 1 Mbit/s connection? There was clearly something else at play, maybe related to the fact that the people who built his React codebase had never programmed before.


I'm not operating from any assumptions, I am just stating observations.

The observations were:

- ~6MB JS file produced by `npm run build` compared to ~400KB page weight for all items in my vanilla app

- Page load times from that Greek WiFi were ~4 seconds for the vanilla app and ~50 seconds for the React app, which told me the throughput was on the order of 1 Mbit/s (rough arithmetic below). The React app didn't load many things (about 3 resources, if I remember correctly) while my app had more to do, which means network or server delay will mainly have affected the load time of my app.
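
A rough sanity check of those numbers, assuming ~1 Mbit/s works out to roughly 125 KB/s of effective throughput and ignoring latency, handshakes and parse time:

    const kBPerSecond = 1_000_000 / 8 / 1000;  // 1 Mbit/s ~= 125 KB/s
    console.log(6000 / kBPerSecond); // ~6 MB React bundle  -> ~48 s (observed ~50 s)
    console.log(400 / kBPerSecond);  // ~400 KB vanilla app -> ~3.2 s (observed ~4 s)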

Of course, this whole performance/page weight thing wasn't the main point of the article (for those who happened to read it), nor did it very much affect my decision to go vanilla. I am surprised it became such a big issue here, but I've stated that I too think it was due to inexperienced developers. I guess my mistake was thinking that my lack of front-end experience would make it a viable comparison - I'm probably a lot more performance-conscious than most, and especially compared to someone who is just learning to program.


It's downright hilarious that now, when IE is dead at last and browsers finally have the APIs and CSS support to do things cleanly, developers are completely abandoning "vanilla" JS development in favor of frameworks and pre-processors. Feels like a job-security thing.


Well, the fact that the JS standard library is crap doesn't help much.


I actually agree on this one. The standard library is pretty basic. I guess my background as a C programmer means I'm just not very spoiled with great standard libraries. I do think that I'm going to have to start using moment.js quite soon, for instance, just because there is so little date/time-related functionality. I might try to roll my own simple time functions, but my hunch is I'll quickly run into quite tricky problems that will take too much time to reinvent solutions to.
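
For what it's worth, the built-in Intl API already covers a fair bit of the formatting side; it's the parsing and calendar arithmetic where I expect to hit the tricky problems. A small sketch (locale and time zone picked arbitrarily):

    // Formatting with the standard Intl API - no library needed.
    const fmt = new Intl.DateTimeFormat('en-GB', {
      year: 'numeric', month: 'short', day: 'numeric',
      hour: '2-digit', minute: '2-digit',
      timeZone: 'Europe/Stockholm',
    });
    console.log(fmt.format(new Date())); // e.g. "7 Feb 2019, 14:30"

    // Simple offsets are fine with a plain Date...
    const inTwoHours = new Date(Date.now() + 2 * 60 * 60 * 1000);
    console.log(inTwoHours.toISOString());
    // ...but "add one month" or DST-aware math is where a library earns its keep.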


How do you mean?


There are a number of problems that vanilla/standard library doesn't solve, so developers need to rely on libraries or spend an enormous amount of time developing those from scratch.

Date and time manipulation, reactive programming, data binding, etc. That's bread-and-butter stuff that the standard library should solve but doesn't.


This was a very interesting read, and I agree with Ragnar's conclusions.

I'm amazed sometimes at how complicated and heavy some people, especially new developers, make things.

While I usually pull in jQuery for small applications, sometimes all you need is... well, nothing, just the core language(s).


Thanks. Sometimes I feel very old and lonely when I complain about overuse of components, in any situation. But for me, it's the way so many people today just seem to use npm to pull in millions of dependencies, that themselves pull in millions of dependencies, ending up with ridiculous amounts of data having to be downloaded and ridiculous amounts of code being executed, for what may be just compiling a "hello world" app. I'm actually very frugal on the back-end side also: the Go API server for pushdata only uses a handful of external packages (Stripe's Go SDK, a GetOpt package, Gorilla/mux as a URL router, a PostgreSQL connector and an AWS package for talking to AWS SES).


I think a lot of it stems from bootcamps and whatnot, focusing on letting people check off a number of boxes by the end of the course/system/whatever.

Node - Check

React - Check

Angular - Check

NPM - Check

"See how much you've learned in just two weeks! You'll easily get a job with that!"

And so on.

Rather than learning how some of this stuff actually works, many new developers are falsely led down a path where they think they can't create cool useful stuff without relying on a massive number of libraries and build processes.

See the whole leftpad disaster for example (Yes, I know, they fixed it, but it happened to begin with).

No seasoned or even moderately experienced developer should be using a package for something as simple as padding a string. If I saw leftpad during a code review I'd send it back to the dev to fix.
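
Especially since padding has been in the standard library for a while now (String.prototype.padStart, ES2017):

    // No package needed:
    console.log('5'.padStart(3, '0')); // "005"
    console.log('abc'.padStart(5));    // "  abc"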

Right now I'm creating a website using pretty basic tools: no front-end builds, and a backend using FPC with PostgreSQL for the DB, and that's it. No mass of libraries, no cascade of dependencies... Just good performance, simple builds (a single FPC compile to an executable), and a process that's easy for anyone to wrap their head around.


Just look at your average job ad. It's basically a checklist of tech exactly like this.


Yup, and that's a problem.

It makes perfect sense for the bootcamps to exist to fill that niche, it's just horribly unfortunate for both the companies hiring people with less skill than they think, and the new developers being led down bad paths.

Another part of the problem is that job ads only show part of the picture.

If you can show that you're a good developer with a solid track record, those HR checklists largely become null and void.

Just look at how many job ads include "Must have Bachelors in CS", which most of us know is complete bull.


Of course there isn't much benefit to using React/Angular/Whatever (and other tools in those ecosystems) if your site is mostly a static thing and maybe only needs some interactive content here and there.

If you have a stateful application (or even just a single stateful part on a static site), React will simplify life a lot.

Figure out your requirements and develop based on those. That's pretty much it. There can also be external requirements (e.g. we want to be able to hire easily, so opting for a well-known framework is better) and those should be taken into account as well.


When on a 1Mbit/s WiFi connection in Greece, my vanilla app loaded in 4 seconds and started rendering after ~2 seconds, while the React/Redux app took 50 seconds to first render!

react + react-dom is 33.4 KB gzipped. That's not completely trivial, obviously, and if your app isn't doing much DOM work it's probably unnecessary, but it's also not the reason any app would take 50 seconds to download on a 1 Mbit/s connection. The problem was clearly something else.

I have nothing against people who choose to hate on frameworks, but if your maths is that far off it really undermines your argument.


I can believe the author. I have a fast connection, but I have DNS set up to resolve over DoH and over Tor.

This is where the idea of loading everything from 7 different CDNs before a page shows anything at all falls on its head.

My uncached domain resolve times are in the range of 1-10 seconds. So in an unhappy scenario, when things chain up - because a page needs one resource to decide that another resource from another domain needs to be loaded, and so on - I can easily wait 10-30 seconds for a web page to load. Combine that with the idiotic FOUC prevention that many website designers fall for, and it means staring at white space the entire time, until some stupid web font loads, despite the actual text content having already been loaded for most of that time.

So if the wifi is combined with a high latency DNS, it's certainly possible. Not everything is about raw throughput.


I'm not disputing that website developers do stupid shit all the time. We know that's true. I'm disputing that it's got something to do with React.

I've seen websites that are driven by vanilla JS that use 30MB images for backgrounds. I don't claim that's a reason to use a JS framework.


Agreed.

I didn't use a framework until 2016. I use React now and love it, and yet I still find myself using all of my knowledge of the DOM when writing React, to ensure that it behaves smoothly.

I personally think there are simply a higher number of devs who can code fine in React/Vue/Angular/whatever but don't fully understand the intricacies of the DOM. So the result is that these SPAs often feel slower. But it isn't the fault of the frameworks necessarily.


But the reason to use a framework is to have a fast path for devs to write SPAs that feel fast without having to know the intricacies of the DOM. I'm honestly not trying to be argumentative, I really thought that was the reason for using React/Vue.


I think that's a good argument. Although, React is much lighter on the DOM than Angular 1 was, so you could say there is improvement to some degree.

But still, that's a completely fair point.


I think one problem you really highlight is that, at best, the many devs who can only code in the frameworks think those frameworks are able to save them from themselves. At worst, they're lazy and don't want to take the time to learn the underlying technologies like the DOM, CSS, JS, etc...


No math involved. I simply looked at the load times in the browser. As to the cause, that I don't know but it seems likely to be a lot due to inexperienced developers, like others suggested. Maybe don't focus too much on a side note? The main focus of the article wasn't performance.


I once tried to pay my electric bill in the States while in China. I spent an hour trying to get the JS to load before deciding to just pay the late fee once I got home. The network may seem to be mathematically perfect, but it's not, really.


Yeah, seems suspect. Hitting pushdata.io with cache disabled downloads multiple hundreds of KB of JavaScript:

highchart.js 75 KB

boost.js 11 KB

exporting.js 5 KB

checkout.js 26 KB

pushdata.js 57 KB !!

analytics.js 17 KB

and there are hundreds more KB of PNG images downloaded as well


> It's just that a framework can help you adopt good habits if you're lazy, ignorant or just have a dev team that is >1 person :)

I had to read that three times to convince myself that the author was not strictly equating "having a dev team of more than 1" with "being lazy and ignorant".

Anyway. If I'm never going to see and maintain your code, sure, go ahead and write it the way you want.


“Who knows if React will be used 5 years from now?”

It will be. I work on an application that still uses JS libraries from 5-10 years ago, simply because refactoring to use a new library is usually unnecessary (or even counterproductive). There will always be a need to keep applications like that running.


I've been writing SPAs for a number of years, and while I've become competent at it, I now use Jekyll for most of my projects.

There are some exceptions where a SPA is certainly justified, but for the most part a static site with some dynamic parts in JS is so much easier to produce and maintain. It's also a lot faster to load.

It's easy to still use Webpack/Babel/etc with Jekyll. Here is a starter kit I made some time ago for some coworkers. It's for Vue but the same idea can be used for React, etc.

https://github.com/PierBover/jekyll-vue-webpack-starter-kit


I feel like I've been ranting about this for years. Way more than 50% of the web is simply content - text and images. Pages that serve up content should be HTML and CSS, period. Why do I sit around waiting for pages to render on magazine websites? Because the developers built what should have been a very simple website as a single-page app with a fat framework and 9000 supporting libraries. Developers tell the business, and tell themselves, that it's about creating a great user experience, but it's really just fashion, and what sounds like fun to build.


I've been quite happy using plain TypeScript lately, and focusing on progressive enhancement vs. graceful degradation. I just don't see many framework-based sites being 508-compliant these days.


It’s kind of ironic that I can’t read this article on my mobile screen. Have to scroll left and right.


Well, I can fill you in on the fact that a fair bit of the contents described how useless my front-end coding skills are! Also, the whole blog section, with comments etc., was done in the past couple of days, so I haven't tried to fix responsiveness there... will get on it!


> my vanilla app loaded in 4 seconds and started rendering after ~2 seconds, while the React/Redux app took 50 seconds to first render

I'm all hands up for vanilla JS development, but I really don't think this drastic load/render time difference is caused by React but rather by the poor coding.

React is tiny, and you see performance degradation like that only if you don't use it right, don't understand the life-cycle, or over-engineer.


Yes, that is probably very true. Another project of mine is a crossword puzzle game for kids (puzzlepirate.net), written in React by a friend who, as opposed to myself, knows what he's doing. The client there is not a lot heavier than pushdata, despite it having a lot more UI stuff in it.


If React apps are consistently poorly coded, perhaps the documentation or culture around React isn't explaining the pitfalls or encouraging good design.


It's not consistent at all, that's my point. Being a small and fast library is one of React's major selling points.


Probably going to be torn to pieces for this (if someone sees it) but at least it will be entertaining :)


To each his own. You are developing this by yourself (AFAICT) and are the only one working on it. Don't let people tell you the right/wrong way to do it, do what works FOR YOU.

Me? I like frameworks, I use them daily at work, I'm familiar with them, and I'm comfortable with the tradeoffs. I think in a multiple-developer environment you need to standardize things and a framework is ONE approach to that. I think it's the best approach in a shared environment but I'm not going to be so cocky as to tell you what the best approach is for you.

I mean, reading about your experience with modals made me cringe a little (at how long it took), but hey, you didn't have to learn Angular/React/etc., so you probably came out ahead, so who am I to judge? Focus on what makes your business successful; for you that is clearly NOT the web UI (not meant as a dig), so don't waste your time using the New Hotness (tm). Instead work on making your API servers rock solid and adding features.

A user is NEVER going to come to you and say "I'd use your product but you used vanilla JS so I went somewhere else". They will say "your site doesn't work in X browser", "your site is ugly/inconsistent" (I don't think this about your site), "your API is slow", "I need to be able to filter my time series on X but you only allow for Y". NONE of these problems are fixed with a framework alone, and fixing them with a framework brings its own set of issues.

You do what works for you and clearly this seems to work. Keep it up!

PS: I'll have to keep Pushdata in mind for future side projects, looks pretty cool, this is the first I've heard of it.


Trying hard to find something to disagree with in your comment, but no luck :) Seriously, I think I did at least mention in the blog post that the vanilla approach is something I'd mainly advocate for small projects, with few members. The kind of situation where I am myself, at the moment.

The thing is, I see so many people reach for the big guns and over-engineer their new, tiny app that they're developing themselves, with a friend or perhaps a small team. And I see overuse of components in general - the tendency to just pull in a new dependency via npm for something completely ridiculous. This post was not about big, in-house enterprise systems development.

And in fact, thinking about my experience from that financial company mentioned earlier, I would greatly have preferred a standardized, external framework over the in-house-developed macro/library mess that worked but was pretty much undocumented and whose inner workings were known to a very small number of people.


> The thing is, I see so many people reach for the big guns and over-engineer their new, tiny app that they're developing themselves, with a friend or perhaps a small team. And I see overuse of components in general - the tendency to just pull in a new dependency via npm for something completely ridiculous. This post was not about big, in-house enterprise systems development.

Because it is much more reassuring having a community back you up in case you get into the edge cases. DIY? Good luck, you are on your own. Not to mention you have to scaffold all the things that are already taken care of by the framework.

It's also a matter of maintainability. If you can successfully predict that the project will always be small, then fast, bespoke code might be great. But it's usually not the case. Prototypes almost always turn permanent.


> Because it is much more reassuring having a community back you up in case you get into the edge cases. DIY? Good luck, you are on your own. Not to mention you have to scaffold all the things that are already taken care of by the framework.

And once you're reasonably familiar with a framework it would make sense to use it even for a prototype. Unless you're actually interested in trying out "VanillaJS" or an alternative framework you would just use the tools you already know and get on with the problem you're trying to solve :-)


One reason people reach for the "big guns" is also career progression. Hell, I started learning React recently precisely because of that: I prefer to touch the Web as little as possible, but everything is now eaten by SPAs, so one may as well just bite the bullet and get ready for the next job... The same reasoning currently motivates all but one of my backend co-workers, they're all learning React now. At my previous job (backend/desktop), people too started to learn Angular and React, just to jump on the (perceivably) safer, and much better paid, career path.


My top tip for you is to combine what you are doing with a totally lean document structure and CSS Grid. We are in the habit of using classes for everything and having very unstructured documents. By structured, I mean a document that looks awesome in an outline viewer.

Instead of having a sea of divs and a trillion classes on everything, you can use the proper tags for your document, e.g. 'article', 'aside', 'nav', 'section'. Then in your CSS just style those parts, e.g. 'main > article > section {grid-column:2;}', with CSS Grid as the layout engine.

You can use classes for things like links you want to make into buttons and elements you want to toggle hidden, but otherwise the class-less way where your CSS mimics your document is much easier to maintain. If it breaks then it just means you have something wrong with your document, e.g. placed that form in an 'article' rather than the 'section' inside it where it should be.

CSS variables are also awesome, you can have your media queries just set the variables and the CSS rules just use variables, e.g. 'font-size: var(--font-size)' with what that font size is on different devices done in media queries rather than a whole bale of extra CSS.

Vanilla JS also goes with this pattern where you only care about evergreen browsers. You can use divs for containing non document things, e.g. some 'I am not a robot' iframe, but, with the mind-bending CSS grid there is no need for having everything in twenty containing divs.

Ah but, my site is complicated. Well it shouldn't be. If you have it complicated you are doing it wrong.

Also worth banishing are pixels. It took me a while to banish them, but now that I am on ems and viewport units there is no going back.

For presentational niceties try and use pseudo elements with inline SVG stored in CSS variables. In this way the icons can be all in the stylesheet rather than extra downloads. Again, with icons, you are getting it wrong if they are complicated. If you can't draw an icon with simple 'rect' and 'circle' primitives then it isn't going to work as an icon.

Vanilla HTML5 is definitely to be downvoted by some, but you can give up the class and div HTML with silly margins, paddings, floats and hacks and do it all massively cleanly with CSS grid and the new tags. Don't even have a 'reset.css', go vanilla all the way.


Thanks for all these great tips. The site looks the way it does because I learned as I went along. I did some refactoring of the JS when adding the blog section, but a lot more is needed. And the CSS is on a whole other level of ugliness. I'll save this advice for my next big refactoring. Thanks again!


I saw a talk at SnowCamp Grenoble last week about the use of vanilla JS on both front-end and backend, only using what's available in the language / browser APIs / Node.js core libraries.

The author published his project (a 2048 game clone) here: https://github.com/Swiip/vanilla-modern-js


Vanilla is fine for small projects (and I tend to prefer it for static websites), but if you try to develop a large web app using vanilla JS you'll just end up making your own half-baked framework.

A kind of variation on Greenspun's Tenth Rule might be in order:

"Any sufficiently complicated web application contains an ad-hoc, informally specified, slow, bug-ridden version of half of React".


I struggle to find somebody who can justify their favorite framework without using a cliche. Once I hear the cliche I stop listening to anything that comes after, because it's all bullshit. If people just said they want something easier because they are self-conscious about writing this code (or really don't know how), I would really value the honesty of their answer.


I wonder how easy this code will be to maintain? If the person leaves and a new developer is needed to come in and add new features?


I feel like this is an issue to address when you hire your first employee/hand off the project to another team, not when you're trying to get started.


Hey, wait. You consider AJAX plain JavaScript?




