Build your own web framework (vercel.com)
195 points by jeremylevy on July 29, 2022 | 149 comments



This is too funny. I predicted that when I entered this post the first comment would be HN people bemoaning complexity. And indeed it was.

The commenter's proposed solution was WASM. Wasm doesn't solve the issues in this post. What does this post talk about:

- serving content from the edge (wasm doesn't do anything there)

- asset optimization (wasm doesn't help)

- pre-rendering complex pages to static HTML+css (wasm doesn't help)

More broadly, minimizing network trips, pre-fetching data, etc.: these are all things where Wasm won't help any more or less than JS.


It’s like they’re getting old and don’t want to keep up with modern web development, but it’s easy to see this behavior elsewhere too.

Aversion to new ways of doing things, where people feel like the old way was just working, is all too common and predictable. “Things work” the old way, but does that mean there are no optimizations left to be made? Optimizations, in turn, lead us to rethink the way we do things.

Also, an individual can stick to their old way of doing things, but ranting about the new way is just tasteless.


Young developers think all motion is progress. Old developers know most is not, but the trap at the other end of the continuum is thinking no motion is progress.

There’s always a little progress, sometimes quite a bit. But a lot of the rest is cycling between two options that both have very bad corner cases.


That's HN. 8 years ago it would have been about how Ethereum would make this all redundant.


Filecoin-esque networks may.


So you probably ought to listen to some degree? 8 years ago, investing in Ethereum was a very good idea, and it definitely shook up the world to some extent.

Of course people exaggerate but it's important to see the trends and get an understanding of what people are feeling as a whole.

In this case, for all their complaining, people are kind of right. The web is becoming unnecessarily complex for the kinds of applications most of us interact with, and there are solutions coming out that address these pains: see htmx or Phoenix LiveView.


Look, I can sometimes be pretty quick to bemoan the state of modern web dev. I don't do it enough these days to be able to keep up with the current trend. So once or twice a year when I get the itch I inevitably spend more time than I should screaming about having to learn React's latest way to do some new thing, or that everyone has switched from npm to yarn and back again, or... there's always something. Lots of somethings.

The discourse on HN whenever this topic comes up is tiresome though and not at all to the level you'd expect on most topics from this audience. It's the unfiltered dumping of that emotional reaction I outlined above.

You know what? You can still build things the old way! Nobody has taken that away. It still works exactly as you'd expect. Do a bunch of HTML, lay it all out using tables, do inline styling. Host it on an Apache server. It'll be fine.

To think those were the glory days of dev, though, just washes away the fact that it was "good enough" for the subset of people who were lucky enough to have access to the internet back then, who were probably only a hop or two away from an academic backbone servicing them. And who, if you were in a relatively remote part of the planet like me, likely had a caching proxy server in there to improve performance, because it was totally feasible back then for a university to cache the content of a huge part of the internet.

But now half the planet is connected. With huge variance in the speed and latency on which they connect, huge variance in the types of devices they're using, incredibly distributed relative to where the content is served, and with entirely new expectations for the type of content they want and the way they expect to interact with it.

Sure, modern dev has lots of room for a simpler devex. Read through the OP, though, and you'll see it's because there has been a shift to prioritize the experience of end users above that of developers. That's what that list of features is for. You're still free to ignore it and build stuff the old way, just accept whose needs you're putting first in the process.


> there has been a shift to prioritize the experience of the end users

I was mostly on the same page with you till I got to this phrase. My immediate reaction to this was laughing in disbelief. The internet has never been more outright hostile to end users than it is today, even when the underlying intent is morally good (Would you like some cookies?).

Devex and the web in general (back end, front end, user experience, etc) is very different than it was. It's better in many ways. It's worse in many ways too. To assert that we've "sacrificed" devex for the sake of the end user is laughable though.


Both can be true at once. Huge corporations can be exploiting people for their data even while they develop new techniques to make their web applications more appealing and useful. Cookie banners can plague every website and those same websites can be more intuitive to use than ever before.

I would say that the shift has been not just toward prioritizing end users but also towards prioritizing profits. The point is that developer experience is much lower down on the priorities list than it was, because we're now paid enough money that we'll suck it up and do what the business needs rather than doing the thing that's easiest for us.


> Both can be true at once.

Which is why I asserted both statements to be true side by side. It was in fact the entire point of my comment. Things are worse overall for the end user as a result of both divergent experiences being simultaneously true.


What a sad state “modern” web dev is in.

I’m glad that I’m living in a much faster and simpler world, but I fear, for end users and developers alike, that it is an ever shrinking one.

I really can’t understand the appeal of what, to my eyes, amount to endless layers of needless complexity.

I wish articles such as this one were more common:

https://alexcabal.com/posts/standard-ebooks-and-classic-web-...

This is the web I want and love.


With modern hardware it is very easy to serve millions of daily users from a single machine with simple software.

No one bothers because everyone loves their fast-as-a-snail framework in their favorite slow scripting language and they need massive infrastructure to scale it to serve less than 1k concurrent users.

Programmers have no one to blame but themselves.


Speak for yourself. Tools these days are much more ergonomic, not to mention faster than ever before. I host a website on DigitalOcean with 0.5 GB RAM and 1 vCPU that can support millions of requests and runs with just a git push. Try doing that in the 90s.


In the 90s, it was all different. Back then, we had to type ‘make install’.


We did and do. Also, git push works with PHP as well. So does rsync. Just put some files on a server with whichever protocol you choose. Easy peasy.


I always feel bad when I see novice programmers herded toward frameworks like Flask and Express for uncomplicated API cases; you can take your pick of mature C#, Rust, Java, and C++ API frameworks that serve ~7,000,000 responses per second, compared to Express's ~123,000 and Flask's ~15,000.


This is a gross generalization, and I'm not sure where you're getting those values or how accurate they are. Is Flask's ~15,000 on uWSGI or Gunicorn, or the default Werkzeug dev server? How you deploy matters very much, as I'm sure you understand.

Ultimately, development teams make do with the skill set they have, and saved development time is saved money. Who cares about the toolset? Most programmers forget about the business objectives. Instagram's backend was originally Django, and they managed just fine for a long time. Using C++ is pretty overkill even for a simple API project and potentially a huge waste of developer time; people seem to forget developer time is expensive.


I don't think it's fair to call it a gross overgeneralization; good software engineering is being able to evaluate and select the right tool(s) for the job rather than trying to bastardize your favorite language into everything.

The numbers were from skimming TechEmpower's benchmarks. I don't have a horse in the race as I don't do (much) web dev but I found the disparity amusing even if the reality isn't quite so bleak due to, as you mentioned, many other factors before you even hit the application itself.

Ultimately I agree with you, I'd rather see someone use a framework they're comfortable with if it makes their life easier - but I won't lie to them and pretend it's optimal.


You're completely ignoring the fact that most applications will never see 15,000 requests per second. Most applications are internal apps used by a few hundred employees during working hours, or small niche SaaS products used for specific tasks by a limited group of dedicated users.

If I'm working on one of those products, prioritizing requests per second over any other factor is bad engineering, period. That's not to say that I wouldn't end up choosing a mature compiled language in the end, but there are plenty of reasons why I might not, and you have no right to judge without knowing my constraints.


Which would you recommend?


The best one is the one that fits your use case and ecosystem :')


How about Go? How fast would it be?


Go is my language of choice, not just for performance (it probably won't beat C) but because of the simplicity of the process.

One code base, one language, cross compilation, native binaries, static linking.


I've never used it but I presume fast enough given its use in networking stacks like Traefik.


I've spent the previous month working with React for the first time and I find myself longing for the simpler days. There's absolutely no reason why it was needed for this project, much less for it needing to be a SPA. The codebase is bloated, the dev environment is slow and unwieldy (even on my M1 Max) and just doing simple things takes longer. The only small saving grace is we're using TypeScript.

If we're ever in a position to rewrite it I'll be pushing hard for Rails or Django with simple server rendered HTML. Anything less would be doing us and our internal customers a disservice in terms of how quickly we can deliver features.


I'm not pro-React, but I find it much saner than Rails/Django. My brain can't wrap itself around writing a server-side blend of JS and HTML; the component way makes code pretty clean right away, and live reload is faster in Vue/React than in Django (afaik). Not saying that the JS world is great by any means, or that in your case simple HTML templates would do, but most of the time I rapidly feel the need for React/Vue to make things clearer or safer (and prettier).


Have you tried Astro? It lets you get back to basic HTML and CSS, and it's not a SPA but an old-school MPA. It lets you use React or similar for any interactive component, but without the SPA complexity. Worth a look, but it always depends on what your needs are.


I think our industry will have a wake-up call during the recession. At least I hope so. Companies need so many people because developers have become parasites, sucking up all company resources to write a two-text-field form.


> I wish articles such as this one were more common:

> https://alexcabal.com/posts/standard-ebooks-and-classic-web-...

> This is the web I want and love.

Worth noting that according to their article, Standard EBooks does not use javascript on their page. This might work for them, and it might work for you, but in many cases users want SPAs and interactivity (Google Maps, Google Docs, etc)


Users don't want SPAs. Users want software that works reliably.

Most SPAs are anything but reliable. If you talk to most non programmers, they hate computers and software and think pretty much all software sucks.

Sure, maps and docs are not possible without client side programming, but most websites are actually not maps nor docs.


> Most SPAs are anything but reliable. If you talk to most non programmers, they hate computers and software and think pretty much all software sucks.

Expanding on this, I had a Dell XPS 9550 running Linux until the M1 Macbook came out. The XPS was a pretty powerful machine when I first bought it, but as always happens, hardware requirements for software increased while my hardware stayed static.

And the software whose requirements increased: websites, and specifically SPAs. It was absolutely noticeable visiting many (but not all) SPAs, with Jira being the worst offender, just how slow those websites are. It doesn't have to be much slower to grind at you, and to make you want to finish and leave that site as quickly as possible.

Now I have a much faster desktop with a 5950x processor, and a Macbook Air M1 for when I'm moving around. I don't notice the website slowness anymore (except on reddit perhaps), and it isn't really an issue for me anymore. But I'm a software developer among other things, and I spend a lot of time on computers, so I get good hardware. We who are developers are likely an exception to the general population who will be experiencing the web as I did on my XPS, or perhaps even worse.


> users want

I hear that so much, except they don't. Most sites aren't Figma.

What we're seeing is the proliferation of apps that have been traditionally just desktop apps, but being made for the web. We have to distinguish the web as an app delivery tool (Google Docs, Figma) vs what most developers are writing. CRUD apps. And for those you can ask the server for state often.


SPAs and interactivity are different and independent things.


For JS developers trying to achieve what's outlined in the linked article: https://github.com/cheatcode/joystick

Very thin abstractions over HTML, CSS, JavaScript and Node.js that don't separate you from what you're trying to ship to the browser: HTML, CSS, and JavaScript. No "magic" or "hacks," just a bit of plumbing to make you more productive.


A framework is a boxed architecture. That appeals to some people more than others.

Architecture is just a matter of organization. Ease with basic organization is what separates experts from laymen in any field.


>Architecture is just a matter of organization.

Have to say, I think that's somewhat oversimplifying. Architecture is a lot more than organization. It's also a broad and deep understanding of external and internal resources, the creative design process, corporate vision, industry insight, risk analysis, project management, stakeholder politics, marketing, and technology.

It's very possible to be super organized but still end up with a discombobulated frankenstein of a project, or one that's under water on time or budget, or a design that's too niche and misses the market.


Right, if we want to paint it with a broad brush, it’s system design.

"Software architect" seems to be a dirty word in some parts, but it's really just someone who has deep knowledge of various tools, services, etc., and is able to aptly design a system to serve the business's needs without under- or over-architecting.


As a back-end engineer focusing on distributed databases, I have almost no knowledge of web development. I rarely write JS, can't tune CSS, etc.

But when I met Next.js and Vercel, I found that they are very friendly to beginners. I can build demos on the web that are more beautiful and intuitive (previously, I could only build demos on the console).


I'm struggling with the idea that React is a primitive used to build a Web Framework.


It's because almost no one uses React the way it was originally intended, but it ultimately depends on what you mean by web framework. I don't think React includes enough to be one.

It's a library for making components for web pages; notably, Facebook's chat window. Later, people decided it was a good idea to have one giant component, mounted at `body` or a `div` directly underneath the body, that would hold everything else under the sun.

React didn't have much for state management and had nothing for navigation support. It was released in 2013, and Create React App, the closest thing to a framework, was released only in 2016. Even that isn't really a framework, but rather a template.

Now, Next.js and Gatsby are actually frameworks based on React. Backbone.js is an MVC framework.


I think the problem is that React is larger than several full frameworks but provides 1/10 of the functionality.

If React were Preact it would feel smaller, and people who care about performance would embed it without thinking too much.


I have seen it said as "React is neither a library nor a web framework, but a view framework"


I'm not a fan of React myself, but it's just a view layer. Or at least that's all it was when I last looked into it a few years back.


You could swap out React here for another solution (e.g Svelte) and it would mostly read the same. The Build Output API[1] mentioned is how React, Vue, Svelte, and their "metaframeworks" run on Vercel today.

[1]: https://vercel.com/blog/build-output-api


React can be used as a templating library server-side in a node environment (like handlebars). It's a bit heavy for the task but it works.
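
For anyone curious what that looks like, here's a minimal sketch using react-dom/server's renderToString (the Greeting component is made up for illustration):

  const React = require('react');
  const { renderToString } = require('react-dom/server');

  // A hypothetical component, written without JSX so no build step is needed.
  const Greeting = ({ name }) => React.createElement('h1', null, 'Hello, ' + name);

  // Render it to a plain HTML string, much as you would with a handlebars template.
  const html = renderToString(React.createElement(Greeting, { name: 'world' }));
  console.log(html); // e.g. "<h1>Hello, world</h1>" (exact markup varies by React version)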


A huge benefit React (and anything else that uses JSX) has over text-based templating is that you can strictly lint/typecheck JSX because ultimately it's just syntactic sugar over plain JS.
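
Concretely, with the classic JSX transform (UserCard here is a made-up component):

  const React = require('react');

  // A hypothetical component with an expected prop shape.
  const UserCard = ({ name, age }) => React.createElement('p', null, name + ' (' + age + ')');

  // The JSX  <UserCard name="Ada" age={36} />  is just sugar for:
  const element = React.createElement(UserCard, { name: 'Ada', age: 36 });

  // Because the props end up as a plain object passed to a plain function,
  // a linter or TypeScript can check this call the way it checks any other
  // JavaScript, e.g. flagging a misspelled prop or a string where a number
  // is expected.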


You can typecheck anything there exists a typechecker for.

Vue templates are typechecked for example.


React is "just" a library, it doesn't give you everything you need to build out a full-stack or framework.

For example, see Clojure's Luminus: https://luminusweb.com/

You have to dig down deep to find where React is.


I think it’s easier to consider when you look at the frameworks that provide React + some other features. Of course you could swap React with something else, or write your own rendering + state management.

I think it’s similar to how Java web servers often use the library Netty under the hood. Why re-solve a problem?


Do we ever think we'll collapse all of the complexity layers for programming on the web? Coming from other languages, I'm finding that the agreed-upon web standards make development super hard (we have to collectively build these towers of Babel on top of them, since we can't fix the underlying stuff).

I guess WASM maybe is one possible solution here


This is exactly why I started building "Linux on the Web" about a decade ago. LOTW is meant to completely abstract away all the weird "browser-isms" that seem to overwhelm people like you who just want to sit down and do Plain Old Programming.

I am currently in the midst of supporting a good selection of game console emulators (GB, NES, and SNES are already working), and WASM is indeed a blessing for that!

https://lotw.site


I love this so freaking much!!!! Definitely going to be playing with this over the weekend!


If you want to, you already can. It's what's so great about the web, you don't have to use anything beyond HTTP and HTML. But each of those layers introduces its own costs and benefits, so you can pick and choose to get what you want.


Well, when you start with a "tutorial" (or whatever you call this thing) like this one, you get a false idea of just how complex web development actually is.

I once wrote an e-book on how to use the now mostly defunct Kitura web framework. It started with teaching you how to make a one-file script to output a "hello world" message. After that it talked about using templates and database connectivity and all that, very iteratively. Client-side scripting or "serverless functions" (whatever those are) were not a part of it at all.


On the note of Babel: I don't usually do front-end stuff, but the other day I wanted to transpile one single JavaScript file to support older browsers, one time and then never again. I tried for like an hour and I could not figure out how to do it without setting up a whole environment/pipeline for it. My expectation going in was "surely there's some sort of command that just lets you do input file -> output file", but I struggled until I just gave up and used their web demo thing to do it (which, tbh, I should have just done to begin with, but I had not anticipated it would be so difficult).

I mean, I'm sure it's possible somehow, but it sure isn't obvious. And I get that this is not at all the usual use case for it, but still.


Babel transpiles JS for browsers that are older than me. You should give it a try.


If it's only one file, you could just go to the Babel website where they have a playground and paste it in, voilà, converted for older browsers. Now place the output file wherever you want.


Not sure if they edited their comment (although it seems unlikely given that your comment was four hours later), but that sounds exactly like what they said they did. Maybe I'm old-fashioned, but the idea that I need to upload a plaintext file to have it converted to another plaintext file so I can download it instead of just having a command to run locally seems almost surreal.


There is a command to run locally, `babel`. It works on the command line as well as part of a build tool. It seemed to me, however, that the parent didn't want to spend more time learning all that just to convert one file.


Yeah, I did get it to transpile with the babel command, but I couldn't get it to accept the babel-env preset, so it just transpiled from modern JavaScript to modern JavaScript, which is not very useful.
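
For reference, a rough sketch of the one-file invocation with the current @babel/cli and @babel/preset-env packages (I haven't re-run this here, so treat the exact flags as an assumption):

  npm install --save-dev @babel/core @babel/cli @babel/preset-env
  npx babel input.js --presets @babel/preset-env --out-file output.js

Without a browserslist config, preset-env targets a broad default set of older browsers.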


> Do we ever think we'll collapse all of the complexity layers for programming on the web?

Yes, and this is what Vercel has been doing with Next.js. It used to be that I had to manually maintain a bunch of different tools to get them to work together, and then if I wanted to add, say, automatic image compression, I'd have to figure out how to connect that to everything else.

Nowadays, I just let Next.js handle it. I don't even have to think about (and remember, for every new project) what optimisations would be good ideas, as they'll do it for me.


I think the problem is there aren’t standards. For backend we have decades of proven architecture but for frontend people are still trying to hammer out the philosophy.


This is precisely the most common mistake doomed project managers make.

There are aspects to modern web interfaces which one cannot know a priori. Thus, the first design will almost always be implemented incorrectly, and it is usually better to choose an existing framework.

Many have found these useful:

https://www.phoenixframework.org

https://keystonejs.com

https://quasar.dev

https://angular.io


This is a nice overview of modern front-end development but I'm constantly disappointed with what 'web framework' means in node-land. None of these things are strictly necessary when building a web app but authz/authn, user management, databases, server-side logic vs client-side logic, are pretty much always needed. When I see the phrase 'web framework' these are the things I am interested in seeing and they all seem to be treated as afterthoughts in the node community. Most tutorials either point you to paid/proprietary services or to really bad local solutions like back in the hotscripts days of php. If you google 'node user login' the first tutorial has you storing a password in plain text and checking the password with the equivalent of '=='. The first result when googling the same for php, python, and ruby all returned solutions using a hash.


It's pretty hard to trust a lot of the eng content on the web right now; so much blogspam and self-promo stuff from frankly underqualified people who are trying to build up a profile to get hired.

Somehow I still don't feel that way about StackOverflow. Google and random results though... yikes.


Until now I had not found an eloquent way to express that emotion, but you come close.

Somewhere around the time Node got very popular, I started to notice a lot of impeccably branded (is that the right word? trendy, maybe?) websites with tutorials that used all the right buzzwords to get me interested. Once I'd stepped through the content, it'd be very low quality. Despite the smooth lines and round edges, a lot of them were riddled with inaccuracies, assumptions, unabashed spelling and grammar issues, stolen content, and lots of candid offtopic observations. Not to mention they all loaded very slowly.

I used to joke about it mockingly during the early node days that you could judge the trajectory of a project based on the ratio of bytes dedicated to persona and branding vs actual content. Anything passing the 1:2 ratio generally didn't last long.


I am optimistic though. We are still in the early days of server-side JavaScript. It took 20 years for the greater PHP community to coalesce around a handful of quality libraries, frameworks, and solutions to things like user management, instead of everyone rolling their own or using things like '$_GET["pass"] == "foo"' that they grabbed from a Google search. Node is barely over a decade old. We are already seeing patterns mature in the Node ecosystem, and I hope things keep progressing that way.


Many frameworks or languages were fairly well developed by a decade's time. Node is 13 years old; compare it to Rails circa 2018.

I don't think PHP makes for a great comparison, because the web ecosystem was still growing. Node was birthed in the age of Github, social media, Stack Overflow, and Google. PHP was released in 1995, when search engines were still stuck in their infancy and only a small percentage of the world was even using the WWW. 20 years in PHP's time is like maybe 5 years from 2009.


Another point about PHP: Wordpress, the largest deployed PHP app, was released in 2003, PHP's 8 year mark.


> It took 20 years ... Node is barely over a decade old. We are already seeing patterns mature in the node ecosystem and I hope things keep progressing that way.

But... much of the earliest web work (PHP, etc.) occurred in the early days of search. Examples of quality code, best security practices, etc. were all fairly rudimentary and hard to find.

Just because it took 20 years starting 20 years ago doesn't justify 20 years starting from 10 years ago. There are infinitely more and better resources for just about everything these days.


> early days of server-side javascript

Err ... Netscape's LiveWire introduced server-side JavaScript in 1996, even before Java became widely used on the server side. The module convention, the (synchronous) core modules of Node.js, and its canonical HTTP middleware API (express.js/Connect/JSGI) come from the CommonJS initiative, a co-op of 2000s SSJS framework developers [1].

[1]: https://www.commonjs.org/


This is an "um ackshually" reply. Nobody used LiveWire and it died a swift death.

And don't "um ackshually" me about saying "nobody." You know what I mean.


Classic ASP also supports JavaScript; it is old and widespread in the bowels of legacy sites.

I thought of node as a victory for async more than anything else.


> I thought of node as a victory for async more than anything else.

In terms of server-side development? Async is rather useless on the server side. Can't generate a web page or JSON blob with the results of a database query until you actually get those results from the database. I can see the application for something like websockets but I presume most Node sites aren't using those.

I figured its success was due to V8 being reasonably fast, plus a generation of front-end-centric developers coming along who had learned to use JS quite heavily on the front end and didn't realize or didn't care that better options already existed on the back end.


> Async is rather useless on the server side [...] its success was due to [...] a generation of front-end-centric developers [who] didn't realize [...] that better options already existed

You'd be wrong, again. According to Ryan Dahl, creator of Node.js [1]:

> I think the combination of Chris Neukirchen’s rack plus how Nginx structured its web server with non-blocking IO led me to start thinking about how those two things could be combined. [...] So, when V8 came out, I started poking around with it, and it looked fascinating [...] and suddenly, I clicked that: oh! Javascript is single-threaded, and everybody is already doing non-blocking. [...] I think JavaScript plus asynchronous IO plus some HTTP server stuff would be a cool thing. And I was so excited about that idea that I just worked on it non-stop for the next four years.

[1]: https://mappingthejourney.com/single-post/2017/08/31/episode...


> In terms of server-side development? Async is rather useless on the server side. Can't generate a web page or JSON blob with the results of a database query until you actually get those results from the database.

This is precisely why async (well, cooperative concurrency generally) is useful, whether on a server or otherwise. Blocking for things you can’t do yet is a waste of idle resources. There are plenty of other concurrency models which are also useful, but yielding/suspending when idle is almost always better than not. And the cases where it’s not generally have other problems.


I don't understand. What can a Node app do in between making a database request and when the response arrives?


Make other database requests, or return the responses derived from prior requests. Or just anything else waiting in the event loop queue, including queueing more work.

You can think of the concurrency model like green threads with only one thread available, or actors in a single system process.

Edit to elaborate: yield/suspend is key here. The way it works is that yielding/suspended calls have their outstanding work put behind the existing queue, which proceeds until each next queued routine suspends or returns; once that blocking work completes, the routine which suspended can proceed until it yields again or returns. The obvious pathological case is long-running work which doesn't suspend. If it's I/O- or idle-heavy, it should be refactored to yield; if it's compute-heavy, it should be treated as an optimization or scaling target.


I'm trying to envision a situation where you'd want to make more than one database request at the same time. Like I guess on a blog or something, you could load the user record of the current user and then load the blog post details at the same time, but then there's a chance that the user doesn't have permission to view that blog post and then the request for the post has gone to waste, right? It goes against my instincts to kill a request as quickly and directly as possible if something can't be accessed.

I don't know, I'm willing to accept that maybe I'm just not used to thinking of developing back-end web apps that way. But async still just doesn't seem as useful as it would in a GUI desktop app.


It is about multi request concurrency/parallelism.

If 20 users log in at the same time, Node and PHP behave very differently.

PHP would start 20 processes/threads and execute each one independently.

Node would dispatch 20 Request events to the event loop in the same process.

I will not argue the advantage of server-side async, but here it is clear that node needs it, as those 20 database queries will be executed from the same thread. Without async the 20th user would have 20 times as much latency as the first.
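
A toy sketch of the Node side of that, with the database query stood in for by a timer (names and timings are made up):

  const http = require('http');

  // Stand-in for a slow database query (e.g. verifying a login).
  const fakeDbQuery = () =>
    new Promise((resolve) => setTimeout(() => resolve({ ok: true }), 200));

  const server = http.createServer(async (req, res) => {
    // While this await is pending, the single thread is free to pick up
    // the other 19 login requests instead of blocking on this one.
    const result = await fakeDbQuery();
    res.end(JSON.stringify(result));
  });

  server.listen(3000);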


> I'm trying to envision a situation where you'd want to make more than one database request at the same time.

Because I have two requests active at the same time and the second one which arrived shouldn’t wait for me to stand around doing nothing while I wait for the database to serve the first.

> Like I guess on a blog or something, you could load the user record of the current user and then load the blog post details at the same time, but then there's a chance that the user doesn't have permission to view that blog post and then the request for the post has gone to waste, right? It goes against my instincts to kill a request as quickly and directly as possible if something can't be accessed.

Here one user represents 2+ events, both of which should fail. This wasn’t part of my original explanation but yielding does give you the opportunity to fail both simultaneously. Albeit in this case it’s just concurrency broadly. Why is that against your instincts, if you know that one request’s failure implies the other will also fail? Why should that user wait in line to get the same error after another cycle of the same trial?

> I'm just not used to thinking of developing back-end web apps that way. But async still just doesn't seem as useful as it would in a GUI desktop app

Yep I get that. And async takes a bit to get used to in general. The way I think of it is: I have two kinds of work which need to be done, and a long line asking me to work. Some of the work needs labor (CPU), some needs patience and communication (IO). Everyone in line needs some of both.

If I do the first person’s immediate labor and wait for further communication, persons 1 and 2 are waiting for me to proceed. If I do the first person’s labor, send off an asynchronous request for feedback on that, I can do N people’s labor before I get any response. As long as no one spends too long making anyone else wait in queue everyone moves as fast as the queue’s capacity.


> Here one user represents 2+ events, both of which should fail. This wasn’t part of my original explanation but yielding does give you the opportunity to fail both simultaneously. Albeit in this case it’s just concurrency broadly. Why is that against your instincts, if you know that one request’s failure implies the other will also fail? Why should that user wait in line to get the same error after another cycle of the same trial?

I really have no idea what you're saying here, so let me put it this way. I would code this hypothetical blog page access situation this way in PHP: Get user data from database. Do access checks on user data. See user does not have permissions to view blog article. Show 403 page.

Assuming I'm understanding what you're saying about async database requests, this is apparently "correct" in Node: Request user data and blog post data from database. If the blog post data comes first, wait. When the user data comes, do access check and show 403 page. If the blog post data hasn't come yet… it just gets ignored or something, I guess. Otherwise the data just gets thrown out…?

What's against my instinct is to make the request for the blog article data unless and until it is known that it will be needed.


A typical Node service would probably be designed very similarly to your description of the PHP equivalent. The difference in a typical scenario is that, should another request come in while the first query for user data is in flight, the same process would initiate that second request's query for user data concurrently. Both requests are still pending but suspended until the IO they depend on is available (and until whatever other synchronous work on the event loop yields or completes).

The scenario you describe, performing multiple queries for the same request in tandem, certainly isn’t incorrect. It’s a less common design, but one which might make sense depending on the workload. An example where I’d choose that design: both queries are slow individually, but not significantly slower queried concurrently. Even if the query for blog post data turns out to be wasted work for one request, it might halve the time to serve another request.

When I say which is more common, it’s probably more a matter of the mental model of the human who implemented it. Fulfilling a request as a series of steps versus a set of concurrent steps only becomes a matter of correctness when one of those steps is dependent on another (and even then only if there’s no other reasonable way to address their dependency afterwards).

I’ll add that designing for concurrency this way often leads to better design overall. It helps guide towards minimizing dependencies (or at least minimizing shared state) between functions. Which in turn isn’t just easier to reason about, but also easier to scale should you want (say) to move some expensive function out to its own service or worker.


Okay, then it sounds like we've been using a different definition of "async" all along then.


Why do you say that? It’s definitely async on Node, even if a single request/response is entirely sequential.

Maybe some pseudocode will help illustrate. Here is a hypothetical[1] implementation of the Node equivalent of the PHP design you described:

  const handleRequest = async (request) => {
    try {
      const userData = await getUserData(request)
      const blogData = await getBlogData(request)

      return whatever(userData, blogData)
    } catch (error) {
      return errorResponse(error)
    }
  }
In this function, both `await` expressions suspend the current call and yield to the event loop, until the `Promise` returned by the awaited function resolves. During this time, Node will be idle and able to accept additional requests on the same thread.

But because, in this hypothetical implementation, `getBlogData` doesn’t depend on `getUserData`, it could be written this way too:

  const handleRequest = async (request) => {
    try {
      const [ userData, blogData ] = await Promise.all([
        getUserData(request),
        getBlogData(request)
      ])

      return whatever(userData, blogData)
    } catch (error) {
      return errorResponse(error)
    }
  }
Both implementations are functionally equivalent, but the latter speculatively risks wasted work for the potential benefit of IO concurrency in the non-error case. This is a common performance optimization technique for single-threaded runtimes like Node, where `await` might represent a significant portion of the response time.

To your original wondering, why `await` at all? Well, for one because you have no choice. With very few exceptions, all IO is asynchronous in Node. And async colors[2] JS functions. More importantly, because JS is (mostly) single threaded and generally spawning new processes is either not available or expensive to do on demand. Getting the most processing out of a single process is the whole point.

Other environments will either accomplish something similar with threads, or spin up a process per request and depend on the OS for scheduling work. They’re all slightly different trade offs along the same general theme: make idle resources available while waiting. Node’s trade-off is that it has to handle a global (or process-wide anyway) task queue. Threaded approaches and similar put more scheduling control, resources, and complexity, in the hands of devs. Process-per-request eliminates the need to think about any of this, at the expense of cold start times and memory.

All of those trade-offs are reasonable depending on the workload and depending on the environment. Part of the reason `async` (the keyword and/or language facility) has gained popularity across a wide variety of use cases is that the combination of concurrent state complexity, cold start time, and memory pressure makes it appealing to model the problem in a way that almost looks synchronous.

1: Node devs stumbling upon this: yes I realize this isn’t the typical interface for a request callback. That’s intentional to keep it simple, but you can do it this way if the traditional callback hell interfaces are not your cup of tea… you just need a simple wrapper.

2: http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...
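
And since footnote 1 mentions it, a rough sketch of such a wrapper over Node's built-in http module, assuming handleRequest resolves to a string body as in the examples above:

  const http = require('http');

  // Adapt the callback-style (req, res) interface to the async
  // handleRequest(request) style used above.
  const server = http.createServer((req, res) => {
    handleRequest(req)
      .then((body) => res.end(body))
      .catch((error) => {
        // Safety net in case handleRequest itself rejects.
        res.statusCode = 500;
        res.end(String(error));
      });
  });

  server.listen(3000);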


Interesting. Based on this series of responses, you moved from "async is useless" to "actually, I don't actually know what async processes can do, and what happens between each process." It implies to me that your initial comment was written with little understanding of what you purported to denigrate.


Well, I understand what they can do in desktop and mobile GUI apps. But server-side web requests don't work that way; there's a beginning and an end and a pretty straight line between the two.


Cooperative multitasking is incredible for server side applications.


This isn’t helpful. They very clearly and frankly asked. Don’t shame people for not knowing something you already know, they’ll very probably be turned off from learning it.


It is a different story if they asked in good faith, but they clearly started their initial comment as a derisive declaration of something being useless, so it sounded to me that they had no real intention of actually learning.


You may want to follow the thread where we continue to discuss it in good faith. I think they’re sincerely inquisitive, and sincere (and impressively candid!) in expressing their understanding gap.

I think the various “assume best intent” rules and adages would benefit greatly from a huge “assume limited familiarity when dismissiveness seems out of place” addendum. At least for technical topics where the stakes are “whoops we actually just disagree on the merits”.


There aren't many good reasons to make good web content like that today. Money? It's way oversaturated with people who can live off $400/y. Recognition? Your content will be stolen by the previous group a thousand times over. To make a difference? There are more rewarding ways to do that if you've got the deep industry knowledge.


It's almost always been like that. When NoSql was the new hotness, there was blogspam and fistfights everywhere. When DI frameworks were the new hotness, there was blogspam and fistfights everywhere. I could go on. Devs see something new, learn enough to be annoyingly dangerous and blog like crazy. Other devs will glazed-eyed follow along because it makes their resume shiny.


Very accurate. There was a time that I enjoyed this stuff but now it just takes too much effort to filter.


this 100%

sometimes i Google stuff i already know just to verify i remembered it correctly

lately i've been refreshing my memory on Vue JS and this website comes up a lot: https://thewebdev.info

the solutions i read there were often incorrect and misleading

i can't imagine becoming a developer today, when most of the stuff you read on the internet is bullsh*t

of course you can browse the docs, but some docs are tedious to comprehend when all you're looking for is a simple one-line answer


Maybe that's the fault of those "who know" but don't write and contribute online?


I don't think so. If you know where to look then good content is available, but it doesn't get as widely shared because it isn't as interesting to as many people.


Those who know are fewer; they get outnumbered by the low-quality content.


I am still holding my breath for a Node framework that does everything Django does for me.

Honestly, we are most of the way there but for three holdouts: zero-config auth, an easily extensible admin, and migrations.

User stuff isn't so bad with bcrypt but I'd really like to be able to type `framework create superuser` and `framework create user` instead of fiddling with user access and re-imaging user groups and permissions for the hundredth time.

Admin stuff always seems to be something relegated to a separate tool. No, please, please, please include it in the framework.

Most frameworks have you choose your own ORM (which is fine if they consistently use the same one in documentation), but for some reason Node ORMS always treat migrations as an afterthought or second class citizen (if they even consider it). Changing table structures in development to add a new column? Half of them just say, sorry, we're going to have to blow that away. The rest have either manual migrations or partially automated migrations written in the SQL dialect you happen to be using during development. My favorite of the lot I've seen so far is Prisma and even they have recently indicated they don't want to support transactions in migrations because they don't want to give you a "false sense of security" (well yeah, how about a real sense of security?). To be fair, even in python land Django is really the only one to have ever gotten migrations 100% right.
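
For anyone who hasn't seen the Django side of this, the workflow being described is roughly the following (Django commands, then the closest Prisma equivalent; the migration name is made up):

  # Django: user management and migrations are built into the framework.
  python manage.py createsuperuser
  python manage.py makemigrations   # diff the models against the current schema
  python manage.py migrate          # apply the generated migrations

  # Prisma: schema-driven migrations on the Node side.
  npx prisma migrate dev --name add_new_column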


This is exactly why I used RoR for a recent freelance project which required a fairly simple (but full featured, user accounts etc) backend despite being a JS developer for the last 15 years and not really having any Ruby experience. None of the Node options seemed to cover all the bases and/or inspire trust, and I didn’t feel much like piecing together a bunch of random modules for a small project.

Overall I guess it was a mixed experience, I missed the JS language (just because I know it well, Ruby has a lot more syntax) and Typescript but I liked having all the things like auth, migrations etc. which are not my speciality taken care of in a way that I could trust and easily Google thanks to it being so widely used.

I’m not 100% sure if I’d make the same choice again; the mental overhead of switching to a language I don’t usually use to make changes is a downside, as is the lack of type checking (though I believe there are better options in the Ruby world now). But on the other hand I could do the whole thing with very little code (relatively), and it was a good experience to see just how much work a well-designed framework can save you.

I should add I did try Django but had my heart set on GraphQL (just because I wanted to try using it for real) and the options in the Python ecosystem didn’t scale well. Overall I liked Django’s ORM better and the Python language, but I felt like you got more out of the box with RoR, the ecosystem is higher quality, and I had to do fewer hacks to get stuff working. All depends on the scale of the project and how much code you want to write I guess.


I'd kind of like to try Ruby on Rails but python was my first language and I was very, very green when I first tried to make a website so Django was a natural choice. It's served me well. I think remix.run is a pretty great framework, all things considered. You could always use Django or RoR to manage the database and use Prisma to inspect the schema. There are some downsides to that (you have to mess with bcrypt to get passwords to work the same on the node and python side), but might be a good compromise.


This is largely why I built Nodewood [1]. Every time I wanted to start a new project, almost always a SaaS idea, I'd skip over the "boring stuff" like building user management, subscription management, teams, admin, all that, to get to the meat of the business logic, to make sure I had a valid idea. But I still needed all that stuff eventually, so I'd have to lose time later building it all in!

So I decided to just build it all once so I could re-use it, and then I found that others had the same problem and are happy to pay a reasonable amount to have it solved for them, and now it's doing pretty okay for itself.

It did end up taking a lot longer and involving a lot more work than I expected, but I figure that's alright, it just means a more-reliable base to start from each time.

[1] https://nodewood.com


Just an FYI, your styling doesn't seem to be responsive on an iPhone X


Oh thanks for pointing that out! I originally owned a 12 Pro when I built out that front page, but have since moved to a 13 Mini and I'm seeing some responsiveness bugs now, too.


The only framework of that scale and quality I’ve found in node-land is Adonis [0]. It’s why I chose Adonis 5 for my latest product—it needed to be built in JS or TypeScript but I wanted the “batteries included” feel of Laravel or Rails.

[0]: https://adonisjs.com/


It’s also the most Rails-like Node framework (is it still?), which emphasizes DX, expressiveness, convention over configuration, etc.

which in my book is a plus


Unfortunately, when I last looked, they just directly copied Laravel line by line. It doesn't play to the strengths of JavaScript as much.


Laravel is awesome, and "laravel in js" sounds good.

Perhaps you can point out issues, or improvements which could be possible by leveraging these strengths of JavaScript?


Any specific criticism?


I'm building with FeathersJS [0] right now and I love it. JWT, user handling, routing, and (most notably to me) real-time functionality are all built in. Probably the most Rails-like backend framework I've worked with in Node so far.

[0] https://feathersjs.com/


The fact that googling 'node user login' returns an insecure result is more indicative of the quality of Google's search results than it is of Node. Node has had a built in password hashing function since before it even hit version 1.0 (pbkdf2) [1]. It also has a built in timing safe comparison function for safely comparing values without leaking data via timing attacks [2]. The crypto module is actually pretty extensive.

If you're looking for a traditional web framework (something similar to Flask or Sinatra) I would take a look at Hapi (which has no third party dependencies and is maintained by one of the original authors of the oAuth spec), or you could always try Express (the most popular Node framework), or Fastify (the fastest Node framework).

[1] https://nodejs.org/dist/latest-v18.x/docs/api/crypto.html#cr...

[2] https://nodejs.org/dist/latest-v18.x/docs/api/crypto.html#cr...
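
To make that concrete, a rough sketch using only those built-ins (iteration count and key length picked arbitrarily for illustration, not as a recommendation):

  const crypto = require('crypto');

  // Hash a password with a per-user random salt using built-in PBKDF2.
  const hashPassword = (password) => {
    const salt = crypto.randomBytes(16);
    const hash = crypto.pbkdf2Sync(password, salt, 100000, 64, 'sha512');
    return { salt, hash };
  };

  // Compare without leaking timing information.
  const verifyPassword = (password, { salt, hash }) => {
    const candidate = crypto.pbkdf2Sync(password, salt, 100000, 64, 'sha512');
    return crypto.timingSafeEqual(candidate, hash);
  };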


Sorry, but that’s a pretty ungenerous take on the state of the Node ecosystem. There are so many mature solutions for all of these problems and tons of great examples online.

That said, I agree that in the Vercel/Next universe, everything on the backend seems to be an afterthought.


React made me like frontend for the first time since I was writing HTML by hand with maybe some PHP. Next has made me love full stack again; I just wish the API server were Express by default.


I would even take Fastify.


I’ve been looking a lot at RedwoodJS, but it doesn’t seem to be “just right”. Don’t know if I’m bike-shedding, either.

I do think React is a requisite for a rich application (not blog). I think auth + DB + deploy + fe/be in one asset is challenging. I certainly get why folks build their own constantly. But you burn out before you make any real progress.

I’m not sure we’re really achieving any higher level abstraction on our technologies.


You might dig Joystick: https://github.com/cheatcode/joystick.


Is there something explaining what differentiates joystick from other frameworks? The Readme explains what joystick is but not what's unique about it.


Not yet, I'd like to organize something for the 1.0 release. Honestly, I view Joystick as a rival to, not a competitor of, the other frameworks. What's unique about it is its philosophy and approach.

The other frameworks take a flawed approach of trying to do as much as possible on the client and ignore the server (unless they can't which leads to wishy-washy approaches like tossing basic stuff to a third-party service or hacks which add complexity). On the code side, there's an obsession with injecting unnecessary abstractions and terminology which only serve to confuse people and ultimately, create a class of developer who doesn't understand how the web works (terms like "serverless" are clever marketing, but mislead developers into thinking they "got it" when they most certainly do not). I view this as a colossal mistake that threatens the long-term stability of the industry.

In contrast, Joystick is all about simplicity and developer experience and takes a page from the Rails/Django world. You start up a server (via Express), that server has some routes, when a URL matches a route, that route fetches some data, passes it to a component, and renders some HTML back to the browser. That's it. A component is written using plain HTML, CSS, and JavaScript with an API that a beginner can understand immediately after learning the fundamentals of those languages. To make it more valuable, I take a "batteries-included" approach, adding in all of the basic functionality you need to build an app: wiring of databases to the app, user accounts, uploads, i18n, etc. with more on the way (e.g., I'm adding in support for queues atm).

Joystick is for you if you want an unexciting stack to build on that just works and continues to work in perpetuity (after 1.0, the core APIs will be frozen with the only changes being under-the-hood performance/security and additional, optional functionality). This means: the app you build in Joystick today will require the exact same knowledge to maintain 10 years from now. No deprecated APIs or rug pulls in terms of opinions about how to build stuff. My running joke is that "Joystick is the Honda Civic of JavaScript frameworks."

Edit: some tutorials if you want to get the gist of how Joystick thinks about stuff...

Building and Rendering Your First Joystick Component - https://cheatcode.co/tutorials/building-and-rendering-your-f...

How to Implement an API Using Getters and Setters in Joystick - https://cheatcode.co/tutorials/how-to-implement-an-api-using...

How to Fetch and Render Data in Joystick Components - https://cheatcode.co/tutorials/how-to-fetch-and-render-data-...


This is one of the stupidest, most circuitous ways to get a simple application up and running since the days of the J2EE Pet Shop example.


Maybe it is and maybe it isn't, but your comment would be a lot more useful to the discussion if you pointed out specific issues and/or ways of doing things differently that you think are better, and why.


It quite literally starts with, “ever wondered what it takes to build your own web framework that also deploys to edge and serverless infrastructure?” They are not suggesting you do this to get your “simple application up and running”.

If you dislike it so much, simply don’t use it. I don’t use it either but calling it stupid is not a stretch. It’s worse.


Having this page open for ~10 seconds on my iPhone SE (I believe recently hailed as the greatest of phones) crashes Safari every time.

Can someone able to read the page tell me if this article is ironic or not?


Graham, it has the phrase about how you can "hydrate dynamic components".

Likened most to the Simpsons-esque "Der".


For people concerned about the growing complexity of all this stuff, I'd appreciate you taking my approach for a spin: https://github.com/cheatcode/joystick. It's intentionally boring and simplistic.

Good ol' fashioned HTML, CSS, and full-stack JavaScript (Node.js on the back-end).


The more I see these announcements, the more I wonder - what is the appeal of something like Vercel and the likes? On the surface it seems like AWS/GCP/Azure/whichever big cloud provider can replicate literally everything they build within their infrastructure quite easily. Why host your core infra in a BigCo cloud and then the site on Vercel?


The edge network of a Vercel/Netlify is very hard to replicate in cloud as a small startup.


And if you are spending your time and resources trying to replicate it instead of building your core product you're an idiot/forever software engineer clueless about product.


I wasn't referring to the edge network for _startups_ to replicate. I was more wondering how AWS and others can replicate Vercel/Netlify and eat their lunch. They have no defensible moat as a company.


The same reason IBM can’t. They are incumbent and have an enterprise world view. They are necessarily going to be more bloaty. IAM yay!

I use Vercel (and Netlify) and once you spend the 2 minutes setting it up you never think much of it again as it just does its job.


you could use Fastly or AWS Cloudfront


Vercel makes app deployment a first-class citizen that's baked into the CDN. Fastly doesn't offer a Vercel/Netlify-style deploy feature, and AWS requires considerably more work to set up, but can easily replicate it.


At the end of the day, even when it's running server-side or on the edge, it still all exists to deliver a front-end experience. Vercel makes delivering such an experience more palatable. This includes isolated environments that can easily be shared and a CDN-as-default deploy model. It's like someone sprinkled a little Heroku magic on a specific front-end deployment workflow.


Until Vercel adds a solid and flexible database as a service, I'll continue using Google Cloud Run + Cloud SQL + Cloud Build for a "no server" solution.

Granted, the Vercel edge network is amazing, but Google routes internal requests way way faster than the edge can communicate with Google's infra.

Vercel is great for things that aren't stateful and for automagic build configuration and asset serving. But not great for anything needing a DB.


Vercel is easy to use and comes with batteries

the primary decision driver here is hype factor

it’s just more fashionable to use Vercel than AWS

much of web development today is like this


> Vercel is easy to use and comes with batteries

On our team, we deploy our web app through Vercel primarily because of this (maybe there is some hype factor bias in there too?..). Everything else we rely on runs on AWS. We don't necessarily need Vercel for any of the edge computing or serverless environments, but the experience of building, previewing, and deploying our app is FAR superior to AWS's Amplify offering because it just works.

Trialing Amplify for a few weeks led to a world of hurt, leftover build artifacts in our accounts, failed builds left and right, unreliable preview environments, etc.


Thanks, but I'll stick with phoenix + fly.io


I prefer the traditional approach. Golang + fly.io is what I use; it's way easier to deploy anywhere you want and closer to your users.


Why do front-end developers have such masochistic tendencies to put themselves through this kinda crap?


Not sure how this differs from AWS; you can do more at less cost than with Vercel.


AWS offers a set of primitives, not frameworks. These low-level primitives can be combined to build anything you want. Vercel's Build Output API[1] is a level of abstraction higher.

Consider Image Optimization. In this post, there's a `/_vercel/image` URL made available to send an image and return it compressed with a format like `.webp`, if possible based on the browser.

With AWS, this would be a combination of S3 (store images), Lambda (compute the image transformations), Cloudfront (CDN), and Route53 (DNS). With the Build Output API, the entire infrastructure is defined by a JSON object (similar to Terraform). I do think the AWS CDK and Copilot[2] are making infra-as-code easier, but infra-as-filesystem[3] is an interesting twist.
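
As a rough illustration of what that endpoint's request looks like (the parameter names follow the Next.js image optimizer convention and are my assumption, not taken from the post):

  # Original asset served from the deployment:
  https://example.com/images/hero.png

  # Optimized variant requested through the image endpoint:
  https://example.com/_vercel/image?url=%2Fimages%2Fhero.png&w=828&q=75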

[1]: https://vercel.com/blog/build-output-api

[2]: https://aws.github.io/copilot-cli/

[3]: https://news.ycombinator.com/item?id=32192498


Didn't you just describe AWS Amplify? It also integrates with Cognito; not sure if Vercel offers auth out of the box.

also I'm not sure if I'm alone in this but I set up everything through the web console and then generate CDK or Terraform.


Or you could use AWS Amplify.


Or use imgix for that part


I'd say give it a shot. Vercel has a much more reasonable free tier, and it was insanely easy to use and incredibly fast. I had my changes pushed to prod from GitHub in <10 secs.

I haven't built anything big on Vercel, but from my test drive, I'd be happy to use them in the future, and this is coming from someone who works AT the A in AWS. AWS does NOT make the easy things easy; though you are right, it does make the "more" possible, but it all depends on what you're optimizing.


Reasonable Free Tier, yes, but "call us" Enterprise pricing starts at a team of 11 people, a fact that is hidden deep in their pricing grid behind a tooltip, which I find distasteful.

In what world does an 11-person team qualify as an Enterprise? Though pedantry aside, my problem is more with the opaque pricing than the naming.


AWS is overwhelming for most developers. Also, Vercel is focused on ease of deployment for the front end, and it's zero config: you can just deploy Next.js, React, Svelte, or Remix apps in a matter of minutes, which is impossible with AWS.


I've started playing with AWS App Runner (https://aws.amazon.com/apprunner/) which makes things fairly simple. You just point to a github repo, tell it the build command and the run command and it will deploy your project in containers and even load balance and scale up automatically.

Now it's not fully baked yet. DNS configuration is broken if you want to use your own domain (see https://github.com/aws/apprunner-roadmap/issues/37 and https://github.com/aws/apprunner-roadmap/issues/65 ) but you can get around that by putting cloudFront in front with just a few more clicks (and you get the benefit of having some of your files cached if you want).


AWS Amplify Hosting is the direct Vercel competitor and has improved quite a bit since its initial launch. It also has a decent free tier.


this is pretty incredible



