1. Edge cases. You can't possibly cover every edge case yourself. This is why back in the day we had "Site works best on IE". Libraries are used by a lot of people, so edge cases you couldn't possibly think of get covered.
2. Someone else maintains it. If your job is to build an application, why are you spending all your time maintaining a library that just facilitates your application? If I'm building a racecar, I'm not gonna make the wheels in house; let the people who are better at that do it while I focus on what I do best.
1. Edge cases. Chances are you do not need to cover every edge case, at least when building an MVP. It can take more time to learn a complex API than to build it in-house. See also premature optimization.
2. Someone else maintains it. That is, until you depend on someone's package and they decide to unpublish it with the click of a button... (ok, at least that one has been fixed, but API versioning can still be a bear).
Up next, the Angular-powered guestbook in just 10k lines of code.
Then stay tuned for the Go-powered e-card sender in just 50k lines of code.
And then, the TypeScript web ring... the cloud... using BespokeDB... and service workers... in WebASM!
I just want to say thank you for your kindness - the tips and advice you gave me definitely contributed to the career path I have today. I know that responding to some random kid may not seem like a big deal, but I would not have been able to get here without the help and generosity of experts like yourself.
Cue the "OMG modern web" complaints. They're correct that this is a lot for a hit counter, but I don't think that's the point here.
If someone were to take this article and think "oh, this is how to build a web counter," that's not an "OMG modern web" problem; that's a different problem.
'Ok guys, now here's the modern way to make one of the simpler things from yesteryear.'
First, let's bust out React...
A few hundred LOC later...
'Ok guys, the database...'
Sign up for an account...
Several hundred more LOC later...
'Now to actually write our functions...'
Download some dependencies...
Several hundred LOC later...
And... here's your hit counter, guys!
This whole article just captures perfectly the essence of all the things we complain about with the modern web.
Overengineered, convoluted solutions to simple problems.
Honestly, it would be nice if modern solutions remembered that sometimes it's ok to just keep it simple.
I did have a quick look around to see if I could find a simple, 'modern' hit counter.
Personally, I prefer the simplicity of this to the article's approach.
That said, I challenge the assumption that this way is _that_ much more complicated. I’ve built similar things with PHP, a long, long time ago, and it was a lot of the same pieces. The only part that feels truly more complex is the re-render necessary in React, since the data comes in async; but that's hugely beneficial because it means the page doesn’t need to be server-rendered and the user isn’t staring at a white page waiting for the RPC to the database.
Edit: realizing that there are ways to do this in PHP that aren’t blocking, like using an image tag that resolves to the right image, but honestly that way seems wayyy more complex to me, especially when factoring in accessibility / screen-reader-friendliness.
React-based frontends didn't invent XHR. In the old days you'd just do:
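Something along these lines, presumably; the endpoint path, element id, and function name below are all invented for illustration:

```javascript
// Plain pre-framework XHR: one request on page load, write the
// response into the page. No build step, no framework.
function loadHitCount() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/counter.php'); // hypothetical endpoint
  xhr.onload = function () {
    document.getElementById('hits').textContent = xhr.responseText;
  };
  xhr.send();
}

// Wire it up only in a browser; this plays the role that
// useEffect-on-mount plays in the React version.
if (typeof window !== 'undefined') {
  window.onload = loadHitCount;
}
```

Same pieces as the React version (request, callback, DOM update), just without the re-render machinery.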
My point is that my example is complex because some client-side code is necessary. The snippet you shared is nice pseudo-code but a real implementation would be comparable in complexity. Either way you need to learn some stuff, and then it’s a few lines of code.
The only React-specific thing is the useEffect call, which is essentially comparable to window.onload.
It's starting to remind me of a guy around 2006 who said all you needed to parse and display an RSS feed in your webpage was a call to rss.parse(myRssUrl), ignoring all the other stuff that needed to be done: using a third-party script with all the RSS parsing built in, having the correct CSS from that third-party library to make the output HTML look good, making sure there were no conflicting classes in your page (because third-party scripts from the old days didn't take that kind of thing into account), having a div in your page with the right id for the rss.parse method to append its output to, etc., etc.
Before 2006 you'd probably use XMLHttpRequest directly, but of course that was only widely available in 2004 or 2005. Before that you would have used a hidden frame, or more likely you would use CGI to either replace a directive with the counter or generate an image with the correct numbers. That's what I would call the "old days": a dynamically generated GIF with bold white-on-black numbers displaying the visit count.
People don't remember so well. Even backdating it that far is too generous. Just as a reference point, on New Year's Day 2006, jQuery didn't even exist yet. There was a time post-2010 when the scourge of jQuery showing up for trivial use cases was considered a real problem and a huge source of bloat on the web. (Side note: for this reason, it's more than a little annoying to see people lump jQuery and vanilla JS together when talking about changes in web developer trends.) To give another reference point, Firefox's then-new Tab Groups feature demanded the creation of a jQuery-like library for handling the tab canvas. John Resig was a longtime friend of the project and a one-time Mozilla Corporation employee, but jQuery itself was out of the question because of bloat/performance. That was for Firefox 4, which would have been 2011, since it was released at the end of Q1 that year...
There's a lot more concentrated change in the 2010-2015 timeframe than is often accounted for when people think or talk about the Web.
1. Provisioning the server, either via a VPS or by setting it up on your local machine and exposing port 443.
2. Installing Apache on your operating system
3. Installing a cert and getting it signed by an authority
4. Installing PHP and enabling it in Apache
5. Installing MySQL
At this point you'll have to write your PHP from scratch or use something like WordPress. Assuming we want to keep it as simple as possible and write PHP from scratch, you'll want to consider file permissions so your database credentials aren't accidentally leaked.
6. Creating the schema and tables on your database
7. Distributing your site's static content through a CDN
And this doesn't even involve automated deployments, which these services provide out of the box.
My point is that a "traditional" approach can appear just as overengineered and convoluted if we want to replicate the scalability, stability and security of solid PaaS services like Vercel and Fauna.
I get that a hit counter is very simple, but I assume the point of this article is to provide a simple "hello world" example that uses these PaaS services, which again, provide a lot of benefits over manually configuring infrastructure.
I think the OP’s comment was coming from the perspective of someone adding a hit counter to an app that already has app servers, etc., in place.
In which case, as an example, you don't need to set up a serverless account and several hundred LOC for that, because you've already got that in your app and probably only need one LOC to call what you need.
Now I'm arguing on the side of serverless, sheesh.
It's just that the general idea that such things are required even for something so simple seems over the top when you look at something like a hit counter alone.
Overall, if we tried to simplify the simple things, would all the rest of the frameworks and apps end up being so convoluted and overengineered?
As far as how simple it was back in the day, https://news.ycombinator.com/item?id=24618580 already covered that.
Perhaps a more apples-to-apples comparison would be an img tag that returns an SVG, allowing CSS styling. You can still choose a serverless backend, or you can use the tried-and-true Apache/nginx with memcache, just like the 90s.
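A rough sketch of that img-tag idea: the server answers the image request with a tiny SVG carrying the current count, so the page itself stays static and the badge is stylable. The function name and dimensions here are invented for illustration:

```javascript
// Build the SVG "badge" that the counter endpoint would return.
// The handler would increment the stored count, call this, and
// respond with Content-Type: image/svg+xml.
function counterSvg(count) {
  return [
    '<svg xmlns="http://www.w3.org/2000/svg" width="80" height="20">',
    '  <rect width="80" height="20" fill="black"/>',
    '  <text x="40" y="14" text-anchor="middle" fill="white"',
    '        font-family="monospace">' + count + '</text>',
    '</svg>'
  ].join('\n');
}
```

The page then just embeds `<img src="/counter.svg" alt="hit counter">`; no client-side script at all, which is part of the appeal.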
Does FaunaDB have an "atomic increment" a la Firestore? Knowing nothing about FaunaDB, I suspect this code in the blog post has the potential to "lose" hits that come in at roughly the same time...
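To illustrate the worry (this is a simulation with an in-memory store, not real FaunaDB code): if the client does a read-then-write instead of a server-side atomic increment, two concurrent hits can read the same value and one increment silently disappears.

```javascript
// Simulated lost update: two clients read the same count, both add
// one, and the second write clobbers the first.
let stored = 0; // pretend database cell

const read = async () => stored;
const write = async (v) => { stored = v; };

async function naiveHit() {
  const current = await read();   // both hits may read the same value...
  await write(current + 1);       // ...and the later write clobbers the earlier one
}

async function demo() {
  await Promise.all([naiveHit(), naiveHit()]);
  return stored; // 1, not 2: one hit was lost
}
```

The usual fix is to push the increment to the server side as a single atomic operation (SQL's `UPDATE hits SET n = n + 1`, or the datastore's increment primitive where one exists) rather than read-modify-write from the client.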
Static compile-time sites are fantastic for a huge swath of applications, but this is a good reminder that you can shift the work around and even come up with better isolated, decoupled approaches but somewhere it still has to get done. Introducing functionality that is dynamic outside the browser and requires state (i.e. persistence) is definitely an area that seems to "fight" the natural inclinations of frameworks like Gatsby et al.