I recommend CGI instead of web frameworks (halestrom.net)
77 points by _fnqu on Aug 22, 2021 | 86 comments


CGI really is a beautiful abstraction.

The reason we stopped using it 15+ years ago was performance: forking a new process for every incoming web request just didn't make sense on ~2000 era hardware.

I wonder how true that is today, given that our machines have vastly more RAM and CPU?

If you squint at them the right way AWS Lambda functions are pretty similar to the CGI model.
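
For anyone who hasn't seen the model: a CGI program is just a process that reads request metadata from environment variables and writes an HTTP response to stdout. A minimal sketch in Go (the output is invented, but the env-var and header conventions are the standard CGI ones):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The web server passes request metadata via environment variables
        // (QUERY_STRING, REQUEST_METHOD, ...) and the request body on stdin.
        query := os.Getenv("QUERY_STRING")

        // Whatever we write to stdout becomes the response:
        // headers first, then a blank line, then the body.
        fmt.Println("Content-Type: text/plain")
        fmt.Println()
        fmt.Printf("Hello from a fresh process (pid %d), query: %q\n", os.Getpid(), query)
    }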


It's unusable today, more so than it was a decade ago:

- modern frameworks have a much higher startup time (see Python imports, the Java VM, and others). CGI was fine for running a Perl script with no dependencies.

- it prevents any form of caching. caching is very important for many use cases.

- it requires opening a fresh connection with every request, to the database and elsewhere (too bad if you thought you could use Redis for caching).

- SSL is everywhere and it has a notable overhead on initialization, meaning you really wish you could reuse connections. (Not just to databases but to other API services.)

Speaking from experience: I inherited the CGI platform at JP Morgan (the largest US bank), which dated back to 2006 and approached a thousand running applications at its peak. It works, and two decades later it was still the easiest way to deploy an application in 5 minutes, but the drawbacks are real. It's only for short scripts that can tolerate a 5-second startup time and zero caching.

https://thehftguy.com/2020/07/09/the-most-remarkable-legacy-...


Wait a minute, how is CGI preventing anyone from reusing connections? The web server doesn’t have to cut connections between CGI requests, the same way it doesn’t have to cut connections between the delivery of two different static files.

Maybe current web servers are written in such a way that they do cut connections between CGI requests, but I’d be surprised if they really have to.


They aren't talking about client->server connections but about server->DB or server->other server connections.

With CGI, any connection you create in your script is going to close at the end of the request when your process shuts down.


> prevents any form of caching. caching is very important for many use cases

There are alternatives; even back in 2000 you'd keep the stuff you wanted cached in the filesystem, where the webserver would handle the cache headers for you, and there are ways to hook a cache miss by the webserver to generate content on demand.
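
A sketch of that pattern (the cache path and TTL here are invented): the CGI regenerates a file on disk only when it's stale, and the web server serves the file with its own cache headers.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // render stands in for whatever expensive page generation is being cached.
    func render() []byte { return []byte("<h1>expensive page</h1>\n") }

    func main() {
        const cachePath = "/var/www/cache/page.html" // hypothetical location
        info, err := os.Stat(cachePath)
        if err != nil || time.Since(info.ModTime()) > 5*time.Minute {
            // Cache miss or stale entry: regenerate on demand.
            os.WriteFile(cachePath, render(), 0644)
        }
        body, _ := os.ReadFile(cachePath)
        fmt.Print("Content-Type: text/html\n\n")
        os.Stdout.Write(body)
    }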


>- modern frameworks have a much higher startup time (see Python imports, the Java VM, and others)

That's fine, they can keep it. TFA suggests using CGI in lieu of modern frameworks, not WITH modern frameworks.

>- it prevents any form of caching. caching is very important for many use cases.

No, it really doesn't. You can cache whole responses with a reverse proxy on top, or you can cache complex results in whatever (Redis, disk, etc.) and query them from the CGI program (see the sketch at the end of this comment).

>- it requires opening a fresh connection with every request, to the database and elsewhere

Which might be fine. MySQL for example is notoriously cheap to connect to for each request. Redis as well. (And you can always bump CGI to FastCGI).

>- SSL is everywhere and it has a notable overhead on initialization, meaning you really wish you could reuse connections

You can terminate SSL on your proxy, so that's not an issue.
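
To make the reverse-proxy caching option above concrete, here's a sketch of a CGI response that a shared cache is allowed to store (the max-age is arbitrary):

    package main

    import "fmt"

    func main() {
        // A shared cache (nginx, Varnish, a CDN) may reuse this response
        // for up to five minutes, so the CGI only runs on cache misses.
        fmt.Println("Cache-Control: public, max-age=300")
        fmt.Println("Content-Type: text/html")
        fmt.Println()
        fmt.Println("<h1>generated once, served many times</h1>")
    }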


> it prevents any form of caching. caching is very important for many use cases

> it requires opening a fresh connection with every request, to the database and elsewhere (too bad if you thought you could use Redis for caching).

nutcracker, twemproxy?


An Apache caching reverse proxy, even.


From context I assume the poster meant object caching, since the CGI model wouldn't affect client caching or CDN-style front-end caching.


The problem has got worse rather than better; the machines are approximately 1 million times more capable in terms of raw MIPS, but the runtime startup cost of the languages used has increased, so the wall clock time to start a request has gone up.

2000-era CGI tended to be in Perl or PHP. Even then Java made an appearance, with a long-lived host process, e.g. WebSphere.

One advantage which the author gets right is that CGI consumes no resources while not serving requests, so you can have a lot of different CGI programs on the same machine. It also plays nicely with multi-user systems, so an ISP could host a lot of different customers on one box. We could serve CGIs from several hundred users off a 128MB Pentium system, for significantly less resources than one Slack instance.


There are modern options with really quick startup speeds though, like LuaJIT or compiled Rust.


FastCGI maybe? It's a little more complex but doesn't spawn a process per request like CGI.


It's also generally how PHP is deployed when not using Apache, so it clearly does scale.


Forking a new process for every incoming web request might just make a lot of sense on ~2020-era, Spectre- and Meltdown-prone hardware. And so the Wheel of Samsara turns.


The threat posed by Spectre and Meltdown cannot be mitigated by just switching to a multiprocessing architecture. The threat is exactly about bypassing that abstraction too.


What are typical process creation times nowadays on Linux?

I see that starting up a Python interpreter and running a Python program (or Ruby or whatever…) might be slow, but how far can I get with a Go, C, or OCaml binary?

I don't know, but I'd expect the Linux people to have optimized that to death.
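
A quick way to put a rough number on it yourself; this Go sketch just times fork+exec of a trivial binary (assuming /bin/true exists on your system):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        const n = 100
        start := time.Now()
        for i := 0; i < n; i++ {
            // Spawn a trivial process and wait for it to exit.
            if err := exec.Command("/bin/true").Run(); err != nil {
                panic(err)
            }
        }
        fmt.Printf("average fork+exec+exit: %v\n", time.Since(start)/n)
    }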


I use CGI all the time. But I’m not trying to serve 2000 users a minute.


Like writing C in a simple console-based editor and compiling it with cc on the command line, writing raw CGI scripts like this is a great way to start and learn underlying principles. You can build some simple, even useful things and understand every bit of what is happening.

It doesn't scale to what we do on the web today, of course.

It's a good place to start. Not a good place to stay.


Who’s we? Not everyone is writing services to scale to a billion users.


I love the simplicity of CGI; it was ~2002 when I read about it in a book on Linux and had my first server-side generated HTML output a couple of minutes later. But: it has no place in today's world except for educational use. Especially the fact that every HTTP request spawns a full new process makes it unfeasible for any serious webapp.


> It has no place in today's world except for educational use.

That's overly strong. I will create a CGI app every now and then. I can use basically any language I want. Not a lot of thought has to go into it. As the article says, it's a simple approach that makes sense for those of us that aren't so familiar with web development (we're usually making those apps for ourselves). In particular, the "you can never have too many dependencies" philosophy of modern web developers is strange to me.


I think it's just that everyone on Hacker News likes to pretend they have to scale to the size of Google, even though most people aren't even in the same ballpark.


Currently trying to order room service in a hotel from xxxxx.menu.org.Uk

It isn’t loading, just sits there spinning. Pile of shit. I wish they’d chosen to write the service as a reliable CGI rather than with some web framework where errors are eaten and hidden in Ajax calls.

Uber Eats lets me place the order, then vanishes when it comes to paying.

Nando's yesterday told me error UK03 when I tried ordering in the restaurant. Slightly better, but perhaps those more concerned with scaling to a million concurrent users could deal with 1,000 reliably first.


I mean, on the other hand you could be dealing with an unreliable CGI-served page where errors are eaten and hidden in ajax calls, wishing they’d used a reliable javascript framework[1] instead.

[1]: an oxymoron, I know


The point is that people spend far too much effort worrying about the ten millionth user rather than concentrating on making a reliable product for the hundred or two they have. “Can’t use CGI because we can only support 100 concurrent users per VM” is meaningless; concentrate on the real problem of building a site that doesn’t crash out silently.


It is not necessarily about scaling the number of requests, but about scaling for developers & security. You will pretty soon hit a point where you need smarter cookie access, CSRF protection, multipart/form-encoded POST requests, etc. Sure, you could implement everything yourself, but this is stuff those "frameworks" have already solved.

I think it is also worth mentioning that there are plenty of "slim/light" frameworks that do very little (e.g. take care of threading, provide the HTTP request/response as writable streams, parse the HTTP headers for you into a Map-like data structure, etc.). Examples include Express without middleware (Node.js), Nancy (.NET), and Flask (Python). The "full-blown" counterparts would be ASP.NET or Django, which bring far more to the table than you might need.


Definitely overly strong. I don't personally use CGI because I've got a whole toolbox of pre-built starters that can scale if needed. Still, I've worked on many projects that needed a simple, infrequently used microservice to do something that couldn't be done inside of an enterprise management system (read: SAP, CRM, Salesforce). CGI would be an easy-to-maintain and perfectly acceptable solution.


Hum, I beg to differ. I write small utilities with web frontends that run on my PC's local Apache using CGI. A couple of devs from my team and I are the only people likely to run them, and it's plenty fast enough.

Even for a web-facing tool, as long as it doesn't receive more than a couple of requests per second, CGI is good enough, and that probably represents most web apps in existence. The YAGNI principle applies, too. Most people don't work at Google or Facebook or at hyper-scaling startups. A huge amount of real-world development consists of building boring web front-ends for in-house use at random companies; and 99% of accounting forms or support tickets won't require huge performance, even running with CGI.


I'll fully admit I'm not as well versed in web programming as it's not my specialty, but aren't serverless functions (e.g. AWS Lambda) essentially the same concept?

I understand that they can solve the problem of horizontal scale as they are spawning a container rather than just a process, but surely if you started with CGI scripts it would be easier to move if you needed to at a later date.


How about FCGI?


I use FCGI with Go programs. FCGI is an orchestration system. Each worker process has a subroutine that's called when there's an incoming request, so the FCGI processes are reused for more than one task. The FCGI server will fire up more worker processes when needed, and ask them to exit when there's no work for a while. If a worker process crashes, a fresh copy is started. Every few minutes, each worker process is told to exit, so if you have a memory leak, the problem is contained.
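
The skeleton of such a worker, using Go's standard net/http/fcgi package (the socket address here is made up; the web server is configured to forward requests to it):

    package main

    import (
        "fmt"
        "net"
        "net/http"
        "net/http/fcgi"
    )

    func main() {
        // The web server (Apache mod_fcgid, nginx, ...) forwards requests here.
        l, err := net.Listen("tcp", "127.0.0.1:9000")
        if err != nil {
            panic(err)
        }
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "served by a long-lived worker, path %s\n", r.URL.Path)
        })
        // Unlike plain CGI, this process stays up and serves many requests.
        if err := fcgi.Serve(l, handler); err != nil {
            panic(err)
        }
    }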

Once you've switched to hard-compiled Go, vs. some interpreted language, you've obtained most of the possible speedups.

That's really all you need. I have a moderately busy server that's been up for over a thousand days without a reboot or restarting the FCGI service running under Apache.

You have to have a huge load before FCGI, multiple servers, a load balancer, and back-end database machines are not enough. Wikipedia used to run on something like that, until they hired too many people and had to keep them busy.


Why don't you use the golang http server instead?


I used to use FCGI with Perl. I never thought it could improve the performance of Go, as handler routines are already "running".

So the remaining advantage of using FCGI with Go is that memory leaks would never be an issue?


We used FCGI at my last company, in routers. When you make a request to a router's embedded web server, it internally forwards the request via Unix sockets to a persistent FastCGI process, which sends back the JSON. Worked really well and had a tiny memory footprint.


It's a bit better, but one process can only handle one request at a time. You need a lot more processes compared to an HTTP framework processing requests in parallel.


How do those web frameworks gain the ability to process things in parallel? What prevents a ‘single process’ from adopting those same methods?


Those web frameworks are lower level and have an embedded HTTP server designed to process things in parallel. It's in the standard library in Go or Node.js, for example.


Sounds like one can use libuv in a CGI project and gain the same parallelization.


... and end up with something like Node.js or Vert.x.


Usually you have a thread pool (threads have startup costs too), monitor the server socket and assign incoming connections to one of the worker threads. Nonblocking architectures that do everything in a single thread also exist.

It's even very common to do this: just use Apache `mod_proxy` (or something more specialised) to forward requests to a backend server such as Apache Tomcat.
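
A bare-bones sketch of the pool pattern (not production code; the pool size and canned response are arbitrary):

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "net"
    )

    func main() {
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        conns := make(chan net.Conn)
        // A fixed pool of workers: connections are handed to an already
        // running worker instead of paying a startup cost per request.
        for i := 0; i < 4; i++ {
            go func() {
                for c := range conns {
                    bufio.NewReader(c).ReadString('\n') // read the request line
                    fmt.Fprint(c, "HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\nhi\n")
                    c.Close()
                }
            }()
        }
        for {
            c, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            conns <- c
        }
    }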


Most people don't need to worry about 'parallel' - that's the point of the OP.

Need a site to order XYZ? Unless you are a mega chain, small biz doesn't need to do anything more complex than serve its 100 customers reliably. Most 'modern' stuff is wasted abstraction for complexity which does not exist.


How many features can your process accumulate before it's called a framework?


FCGI apps can handle requests in parallel in a single process just fine, as long as your environment supports it.


You are right, fastcgi supports multiplexing. It seems that it's not implemented very often though.


This is basically what PHP is. PHP can be deployed in several ways (CGI, FCGI, or as a threaded module in Apache), but that rarely matters from the perspective of the programmer: the code will work the same regardless of what you put in front of it, while still keeping CGI-like execution. This is why PHP is a perfect match for the web.


Plain PHP without frameworks is indeed a good choice in the same ball park as plain CGI.

And on the plus side it scales to bigger things thanks to FastCGI with for instance nginx.

At work, every new team member gets the task of adding their profile to our team roster. It works great thanks to the low barrier of changing a single PHP file, and the low risk of breaking the whole web page is great, especially for juniors.


There's an important point I haven't seen discussed yet: I don't believe it's possible to write secure web apps without a templating engine with safe-by-default XSS handling (i.e. interpolated text is sanitized, unless explicitly marked as trusted HTML somehow), which implies some amount of a web framework or at least a web-specific library.
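
Go's html/template is one concrete example of safe-by-default interpolation: untrusted values come out escaped unless explicitly wrapped in template.HTML.

    package main

    import (
        "html/template"
        "os"
    )

    func main() {
        t := template.Must(template.New("page").Parse("<p>Hello, {{.}}</p>\n"))
        // Prints the payload escaped (&lt;script&gt;...), not as executable
        // markup; wrapping a value in template.HTML is the explicit opt-out.
        t.Execute(os.Stdout, "<script>alert(1)</script>")
    }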


I think the important point is the distinction between:

a) static documents - basically HTML+CSS and no scripts on the back-end

There is not much to discuss here; a lot of stuff could be just that, but people don't want to write blog posts directly in HTML :) We have static site generators that are doing great, so it seems to be working well.

b) dynamic documents - you get data from a DB based on a query, like a list of phone numbers in Texas when you want to find a specific city

Static site generators would not be that useful if one wants to reflect changes from the DB. Queries are also nicer in the DB than having an insanely long list on a page with Ctrl-F. I would say a CGI-only thing would work great for such a use case. You probably want to think about SQL injection (see the parameterized-query sketch at the end of this comment), though as it is browse-only it might not be an issue.

c) web applications - here people want all the bells and whistles

Security is important here, as you probably need authentication, and preventing XSS matters a lot. I would never build a web application with only CGI. Security headers are not that hard to add, but authentication and authorization plus XSS prevention are really hard. Then you have lots of requests that send/filter data, and you can have problems with SQL injection, as you have to store users, their passwords, and their data; a framework + ORM helps prevent a lot of trouble. One probably should not use a framework for making a blog/static page. Unfortunately, nowadays most people build web apps.

This is what rubs me the wrong way about posts saying "you don't need X framework, it all should be static documents": well yes, you don't need a big framework if you build a personal website. You probably need one if you build a web app. The downside is that we have HTML+CSS as an interface that was designed as a document framework, not as a framework for building application interfaces. That is why we need back-end and front-end frameworks.
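
(Follow-up to the SQL injection point in (b): parameterized queries are the standard guard, framework or not. A sketch; the driver, DSN, and schema below are invented for the example.)

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/go-sql-driver/mysql" // assumed driver; any database/sql driver works
    )

    func main() {
        db, err := sql.Open("mysql", "user:pass@/phonebook") // hypothetical DSN
        if err != nil {
            log.Fatal(err)
        }
        city := "Austin" // imagine this came in via QUERY_STRING
        // The placeholder keeps user input out of the SQL text entirely.
        rows, err := db.Query("SELECT name, phone FROM listings WHERE city = ?", city)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
        for rows.Next() {
            var name, phone string
            rows.Scan(&name, &phone)
            fmt.Println(name, phone)
        }
    }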


Nobody would use CGI for (a); CGI integrates with web servers, so you already have one to serve your static documents.

You still care about XSS (and other vulns, although the technology the grandparent mentioned is specifically for XSS) in case (b) just like case (c).

- you might host other things on that domain (or even subdomain, which is a lot trickier but not impossible to attack), and this could be a launching point of an attack.

- attackers might rewrite your page to mislead people, e.g. as part of a phishing attack, to harm your reputation or just to redirect to advertisers.

The impact of any security vulnerability is going to depend on what you are doing and what you have to lose. It seems less significant in case (b), but it's a mistake to extrapolate, as it's impossible to tell without business-specific context. Maybe the simple list page is listing out life-saving information.


If we're talking about something written in C, it's at least as easy as not having memory-safety vulnerabilities ;)


For these there's good old "perl -T" :)


How does perl -T protect against XSS?


It's dynamic taint checking: it tries to keep track of which variables are user-controlled (tainted) and prevents you from using them unsafely.

As a strategy for dealing with XSS, it's fallen out of favour, but static taint analysis (the same thing, just not at runtime and less accurate) is still super popular in big shops as a CI step.

As an approach, though, it's more a way to make sure you don't screw up, as opposed to a way to solve the problem in general.


"premature optimisation is the root of all evil" for sure, but getting a denial of service by the Google bot scanning your website, someone running Apache Bench from a smartphone on a GPRS network, or an unfortunate infinite loop client side (it happens) is not great.

I think it's fine to play with CGI to learn, but I wouldn't push that to production.


All those things can happen without CGI?


I'll always have a soft spot for Classic ASP[0]. Makes me dream of a simpler time developing CRUD apps.

[0] https://en.wikipedia.org/wiki/Active_Server_Pages


I really wish I’d had a chance to use ASP in an environment doing it properly, rather than the cowboy atmosphere of a tiny web agency. If I understand correctly the idea was that you’d use a proper language for your core business logic which then gets compiled into DLLs loaded by your ASP application, which could then use VBScript for the simple template logic.

Sadly I was at an agency which (I suspect like many other places) just threw stuff like establishing database connections straight into the templates, and had ludicrously complex stuff being done in VBScript, a language designed for light automation.


That was the theory, but in practice people spent a lot of time debugging the transaction server and cleaning out the registry, trying to figure out why their components weren't working. I think we had something like 50K lines of VBS and it was alright because the code was decently structured.


Just use long variable names please. Had to debug an ASP site written in 2019 that used classic three letter variable names for everything and it was rough.


I have some old VBS ASP code around and it uses Hungarian notation(!)


The reason most Python folks moved away from CGI more than two decades ago now is that the performance is terrible. The interpreter startup time (plus any time spent executing module imports) is paid on every request. This is much less of an issue for shell scripts, which start up very quickly.

The programming model does have its advantages though. Persistent servers risk leaking information across requests (seems to be a particular risk in async programming systems where requests execute concurrently in the same thread) so there's a definite advantage to isolation between requests. I'd love to see a server that used per-request v8 isolates (with snapshots for fast startup time.)


You'll find that the isolation goes out of the window when you need sessions; they need to be stored somewhere that persists across requests.


Where sessions are too large to store in signed cookies (which should usually be preferred when possible, since they require no extra infrastructure and avoid any risk of session-data mixups), they normally end up in some kind of database or memcache, since you likely want to support multiple processes.
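
"Signed" here just means an HMAC tag over the payload, so the client can read the session data but not forge it. A sketch (the secret and payload are placeholders):

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/base64"
        "fmt"
    )

    // sign returns payload + "." + HMAC tag; the server verifies the tag on
    // the way back in, so the client can read but not forge the data.
    func sign(payload string, secret []byte) string {
        mac := hmac.New(sha256.New, secret)
        mac.Write([]byte(payload))
        tag := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
        return payload + "." + tag
    }

    func main() {
        secret := []byte("server-side secret") // placeholder; never hardcode this
        fmt.Println("Set-Cookie: session=" + sign("user=42", secret) + "; HttpOnly")
    }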

It's much less likely you'll lookup the wrong user's session than accidentally store some user data in a module or closure variable shared between requests which may come from different users.


That is literally what session hijacking is: A uses the session created for B, in various ways.

Do not store sessions in signed cookies, because those can be stolen, and they bloat your requests and responses shipping all the data up and down each time. Store sessions in a database, tie them to an IP, and time them out. Transmit only an opaque ID. Ideally, re-create a different ID with each new request. If you are using a web framework, it should have tried and tested mechanisms to do all this for you. This is the kind of thing you lose cooking your own CGI.


FastCGI solved that specific problem mere minutes after CGI was invented.


FastCGI is pretty much equivalent to proxying to another HTTP process (though it does make it easier to share authentication information from the http server to the proxied process.)


I’m all for simplicity, but I don’t understand why the author treats CGI as a beacon of simplicity. In the example of teaching students, I wouldn’t start with dynamic websites at all. You can introduce requests, pages, urls, etc., in the context of static sites or front end development. Then it should be fairly easy to see the patterns that flask is abstracting away.

Most websites are either primarily static, with no server side code, or they’re fully dynamic, where every page is generated. CGI shines when sites are a mix.

One other alternative to CGI that the author doesn’t mention is reverse-proxying a single page. I find that to have more practical use than CGI.


After it's configured, you can focus on a couple of tags and interpolating some variables.

There is nothing else.

Sure, if you want to learn html just write a page and reload in a browser. But that doesn't teach the simple bits of talking with a server from a browser.

I def agree that there is no modern framework that has an easy mode like this - they are all abstractions which insist you learn a bunch of local, mostly throwaway, tribal knowledge.

Do you want to build a multi-user scheduling system to serve tens of thousands of users or robots? Probably not. But it's perfect for hooking up the output of a couple of variables to a webpage.


A somewhat related naive question: is anyone using FastCGI (or just CGI) to interface directly with a web server like Apache, NGINX, or Caddy for their web apps? What reasons would lead you to choose not to interface with these web servers directly?

These web servers are battle-tested and feature-rich. If you add another layer like an application server (Puma, Gunicorn, etc.) that sits between the web server and your programming language, does the application server end up duplicating some (most?) of the functionality of the web server?


> Different units: traffic of visitors vs people running sites. I think we confuse them.

Yes we do. All. The. Freaking. Time.

It’s availability bias: most of the web sites we visit are popular and big and complex and have had to solve serious scalability issues. Most of the web sites we make have few visitors and are small and simple and do not have any scalability problem beyond the occasional aggregator hug.

Likewise, most of the software we use is big and complex and used by many. Most of the software we make has fewer than 10 users. Google, Facebook, Microsoft, Amazon… are everywhere, but a stupidly small proportion of companies in the world are as big as they are.

Mike Acton urges us all to "understand the data". That includes how much data we’ll be processing. How many requests per day are we expecting? Are they evenly spaced, or will there be spikes? What’s considered acceptable latency? Stuff like that. Remember, the Pirate Bay at its most powerful only needed 4 rack servers, stacked on top of each other. Very few of us will exceed the capacity of even a single server.

It’s not always easy to see, because the front page of HN (and the front page of pretty much anything for that matter) doesn’t feature the ordinary. So we only see the extraordinary, and get the impression that we have to measure up to that.


The only downside of CGI that I know about is the fact it starts a new process to handle each user request.

When you say it's preferable to server-side web frameworks, you probably ought to point out that it does far less for you. I started out in web dev working on Perl CGI scripts and they were great (once you got your FTP app to use the right CRLF encoding), but you really had to do everything yourself. Some devs might see that as a benefit, but I don't.


I can agree in that CGI is a cool and simple technology that might make sense for some applications, and can be a good method to teach new developers the basics. I don't think I'd choose it for much now though, unless I knew it was very simple and would never grow more complex.

Most of the complaints here are about performance. I don't think the decreased performance would be much of an issue until you get to pretty large scale. I do think the real issue is the huge complexity in doing things correctly and securely in such a manual environment. Oh, you're going to do cookies by just printing the Set-Cookie header manually? Well now you have to handle everything about your cookies manually and do it correctly. What are the odds you're doing that? Just let Rails etc handle it the right way for you. Going to do CSRF protection manually too, and do anti-XSS escaping correctly everywhere, and mitigate a ton of obscure security issues that most people have never heard of? No way, unless you're a world-class expert. Just use one of the major proven frameworks that already take care of all of that stuff for you.
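
To illustrate just the cookie point: getting the attributes right by hand is exactly the kind of thing a framework (or at least a stdlib type) does for you. A sketch using Go's http.Cookie (the session value is a placeholder):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // The stdlib type knows the attribute syntax; hand-printing the
        // header means remembering every one of these flags yourself.
        c := http.Cookie{
            Name:     "session",
            Value:    "abc123", // hypothetical opaque session id
            Path:     "/",
            HttpOnly: true,
            Secure:   true,
            SameSite: http.SameSiteLaxMode,
        }
        fmt.Println("Set-Cookie: " + c.String())
    }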


I still use Perl CGI scripts for handling chores on the server side.

The newest version of my app uses service workers to store almost all the app code in the user's browser. Almost everything the user does with the app is done on the client side so for the most part they only hit the server to get or put data in the server side database (CouchDB) and for the most part those are very small gets and puts.

Compared to earlier versions going back to 2002, my server barely has any load on it. If your goal is to track every click a user makes, then CGI isn't a great server-side option. But if your goal is to make a fast and reliable app, then an offline-first/local-first side benefit is that you don't need a huge box or tons of bandwidth to run it, and CGI scripts are a fine way to handle small server-side chores, which is pretty much all that's left.


Not really CGI, but use FCGI instead.

Ruby, Python, PHP: every server-side tech uses FCGI at the backend, the only exception being stacks that have a built-in web server [Java].

FCGI works mostly like CGI, but the program doesn't terminate after execution; rather, it continues to listen for the next request and serves it. It runs like a daemon.


CGI is an interface for web servers to communicate with an application.

Web frameworks are libraries/helpers on the application side to help with business logic for serving requests.

You can use CGI with web frameworks (look at the ton of useful PHP/Perl/Ruby frameworks out there).

You can also build a fully competent website without CGI OR web frameworks. Modern languages now all have built-in web servers which perform a lot better, so Apache/nginx etc. need to function at most as reverse proxies.

In fact, even if teaching is the goal, I'd argue that Apache/CGI introduce more opaque abstractions, not less. You can create a web server and request loop in any language of choice in like 10 lines and take it from there.
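
For example, the entire request loop in Go, no CGI and no framework (the port is arbitrary):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // One handler, one listener: a complete dynamic web server.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "Hello, you requested %s\n", r.URL.Path)
        })
        http.ListenAndServe(":8080", nil)
    }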


This is used a lot on routers nowadays; for certain things it's great. I'm not sure running sh in cgi-bin would work for c10m (unless you've got a lot of servers!), or for sanitization, etc.


CGI is awesome! My Go vanity URLs are served over CGI. The overhead is fine because only Go proxies access them once in a while and the only import is templating using Jinja in Python.

Deployed: https://k.malhotra.cc/go/

Code: https://hn.malhotra.cc/git/cgi_k-malhotra-cc/tree/script.py?...


I agree with the author; in fact, I host my website using CGI with zero issues. From people who discredit CGI I would like to see, in addition to personal opinions, clear proof that it is not suitable for production on professional websites, especially low-traffic ones.


An interesting companion may be althttpd, which runs the SQLite website:

https://sqlite.org/althttpd/doc/trunk/althttpd.md

Heard about it in the Changelog podcast the other day.


It was recently featured on Hacker News: https://news.ycombinator.com/item?id=27431910


The author says Python's Flask framework was way too complex for the students to grok, but I don't see why that should be. They will need to learn all those concepts at some point.


This text seems stuck in time. I wouldn't be able to do most of the stuff I am doing without libraries and frameworks. It would simply take so much time redoing the same shit that others have done before that I couldn't finish the task at hand and solve the problems I actually care about.

Plus, just look at this dude's website. Sure, it works just fine, but it's nearly impossible to read on my large screen. It feels like it was designed for an older CRT screen and has never been updated since.

I wonder how people like this can survive in today's world with the requirements people have on web software. You will sooner or later be out-competed by a guy using a web framework.


> Plus, just look at this dude's website. Sure, it works just fine, but it's nearly impossible to read on my large screen. It feels like it was designed for an older CRT screen and has never been updated since.

Using the browser's Reader View mode, I can read the website just fine. In fact, this is my preferred way of reading articles on the web, because I get consistent and controlled styles.

> I wonder how people like this can survive in today's world with the requirements people have on web software. You will sooner or later be out-competed by a guy using a web framework.

Not everyone doing webdev is competing to get a job or gig as a webdev. There are people who make their own sites for various purposes, i.e. they are their own webdev customers. And if you're suggesting these people should "require more", it looks to me like you're starting to meddle with how they run their business (which might benefit from your suggestions, but also maybe not).


Plug: I would probably use https://mkws.sh/pp.html for templating.


This is not a great way to teach anything. I would say that if you're going to go this route, it would probably be more beneficial to learn AWS/GCP/Azure cloud services. The subtext I got was that this was a vocational-type program anyway, and getting those certs will go a long way in corporate environments, versus knowing how those services run.

I digress. Learning CGI might be useful in a historical context, but as others pointed out, its pitfalls are large and there's a reason we don't use it anymore. If you've ever seen a mess of Perl and Bash trying to serialize some JSON, you'll know why. It encourages bad behavior that doesn't scale well.

If your students aren't understanding template generation and mapping routes to functions, they probably lack a background in a lot of fundamentals. At best they'll simply copy and paste best practices without understanding why. I think taking a step back and looking at interop, FFI, and APIs will explain why web servers and the web itself became popular. To understand that, you need to know the basics of operating systems, and by extension compilation and some other undergrad theory. These aren't taught just for fun.

That said, modern web frameworks hide a lot, and that's not necessarily a bad thing. In the attempt to make things isomorphic and route client-side, the delineation between client and server is blurred. Even I had a hard time figuring out whether things were rendered on the server or the client side, and WASM blurs this further.

The weird late-90s derail into "just use Windows" is a huge red herring too. I use OS X, and I've used Windows. I spend most of my day in a terminal or a Linux VM and still won't have Linux as my primary desktop. On the other hand, if you've ever had to do something low-level on Windows or a Mac, it is painful. See Linux's cpufreq vs whatever Windows or Mac has to deal with big-little CPU architectures. The trade-off is that the desktop experience is brittle at best. That's okay; there are a hundred ways to develop for Linux on Windows and Mac, so that's a solved problem.

I'm seeing a lot of React jockeys come out of bootcamps like these with a cursory understanding of computing. Similarly, I see people come out of top colleges thinking they'll be perfecting algorithms in Rust. Neither is right, but the React jockeys are dangerous, as they have a "works on my machine" mentality that a lot of us learned the hard way doesn't work. Now it seems you can sort of throw cloud resources at it to make it go away. I'm not saying everyone needs a CS degree, but a fundamental understanding of what you don't know can go a long way.


This guy doesn't sound like he's qualified to be giving advice, to be perfectly honest. He failed to set up any other kind of web server?



