Programming culture in the late aughts (2022) (morepablo.com)
106 points by luu on June 8, 2023 | 118 comments



12 years ago, even without AWS, you could get a Linode, host an app on its naked IP, and not be made to feel too much like you weren't "serious" about hosting your app. These days you almost always need to know a fair bit of Docker and Docker Compose, a lot of people want to Kubernetes, ELBs got replaced by ALBs + NLBs, which you gotta manage in VPCs, which you gotta manage through Security Groups, and your traffic gets routed through CDNs. Logs are structured and passed to a SaaS which will have a custom search syntax that takes tens of seconds to search them poorly, and they'll get lost in the noise of all those components. You'll spend a non-trivial amount of time devising systems for tracing requests through all these components.

That sounds so spot on.

We do have a choice though. Static HTML, server-side templating, vertical scaling and /var/log still exist and work just fine. We don’t need to adopt the newfangled techniques unless the use case calls for them.


Works better than ever. If anyone ever asks how my search engine, running on consumer hardware and hosted off domestic broadband, survives the hacker news front page time and time again without the load times deteriorating even a bit, while many other websites that are ostensibly static in nature tend to keel over?

Then the answer is something like this.

* nginx

* Java

* Macroservice architecture

* Vertical scaling

* Server-side rendering

* Minimal javascript, no session cookies. These things bump the cost of each page-load.

* /var/log with relatively sparse logging in the sunny day path

Like this is a pretty strict design philosophy. You can probably do more JS and front-end stuff than I do, but it should be understood it's not free. My page loads incur for the most part a single request each. There are websites where each page load incurs hundreds of requests. It's worth repeating: Back-end requests are not free. They are cheap, but they are not free. When you have a dozen users and they incur a few thousand requests, that may be within what your server can deal with. When you have a thousand users and they incur hundreds of thousands of requests, it's probably not as fine.


Let's also not forget that browsers generally limit the number of concurrent requests they send to a given origin - IIRC the current limit is 6 for HTTP/1.1 - so a site may be slow just because it sends many requests.

Even (or maybe especially?) popular products like Gmail are guilty of this, easily sending over 200 requests. Over 6 connections that's more than 30 serialized rounds of fetches, so at, say, ~20ms per round that's 700ms(!) worth of stalled requests in the best case scenario.


HTTP/2 does mitigate this a bit, since it multiplexes many requests over a single connection; at that point I think the limit is mostly the number of threads on the server. Even with pools or green threads, a CPU can only do so many things in parallel.

Context switching is expensive.
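
(For what it's worth, on nginx, which came up above, enabling HTTP/2 multiplexing is a one-line change; a minimal sketch, assuming TLS is already configured and example.com is a placeholder:)

    server {
        listen 443 ssl http2;     # "http2" multiplexes requests over one connection
        server_name example.com;
        # ... the usual TLS and proxy directives go here
    }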


Question: how do you reliably deploy? And if it's just scp-ing tarballs, how do you handle dependencies? Do you run & upgrade any third-party processes that rely on having their own filesystem layout or dependencies?

Struggling with this because even with a small app on a Linode box, I don't mind just cp-ing my bins, but there are always other things/versions they need. Thinking of bringing in Ansible to help me out here.

Deploying feels easier in the container/Kubernetes world.


If your problem is needing to synchronize a complex mesh of services when you upgrade stuff, the solution is to not have a complex mesh of services that needs managing.

I have a dist directory; in it is a directory for each release version, and a symlink current pointing to the current release. To deploy a new version, I create a new directory, unpack each service into it, and then redirect the symlink ... then "systemctl restart service-name".
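
(Roughly, with hypothetical paths and service names, that flow is a few lines of shell; the mv -T trick keeps the current symlink valid at every instant:)

    #!/bin/sh
    set -eu
    version="$1"                                  # e.g. ./deploy.sh 1.4.2
    dist=/opt/myapp/dist

    mkdir "$dist/$version"
    tar -xzf "myapp-$version.tar.gz" -C "$dist/$version"

    ln -sfn "$dist/$version" "$dist/current.new"  # stage the new link
    mv -T "$dist/current.new" "$dist/current"     # atomic flip (GNU mv)

    systemctl restart myapp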


Just read your article: https://www.marginalia.nu/log/09-system-upgrade/

You got rid of a lot, but where did the functionality of those tools go? I understand wanting to get rid of containers & Kubernetes, but did you rewrite/bake the functionality into your app? Or maybe not care about monitoring and logs temporarily for now?


I mostly just got rid of the functionality. I do logs with grep and monitoring with vanilla prometheus.

Kubernetes and the surrounding ecosystem can create many of the problems they set out to solve.


capistrano makes it pretty easy/repeatable, at least for me deploying RoR code onto my VPS

All it is basically doing is a git pull, symlinking in the vendor code (gem) cache, doing all the rails preflight ceremony, and updating a "current release" symlink to point at the new folder. Oh and it restarts the servers (nginx, rails).

At a previous job I had an 8-line shell-script that did almost the same thing and it worked for years.
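
(A sketch of what such a script tends to look like, assuming a Rails app; every path and name here is hypothetical:)

    #!/bin/sh
    set -eu
    release="/srv/app/releases/$(date +%Y%m%d%H%M%S)"
    git clone --depth 1 https://example.com/org/app.git "$release"
    cd "$release"
    bundle install --deployment           # reuse the vendored gem cache
    bundle exec rake assets:precompile    # the rails preflight ceremony
    ln -sfn "$release" /srv/app/current   # flip the "current release" symlink
    systemctl restart app nginx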


How do you handle authentication without cookies?


I don't.


>We do have a choice though.

I took the choice: to run from the web with my hair on fire, screaming.

Towards embedded, where none of this matters, and where the crux of the whole thing is really, really important: it's the atoms, then the electrons...

We don't need the web if the user holds our interface in their hands. Surprisingly enough, the web holds an iron grip on that face/phone interface, but .. one easy solution to all the cruft is simply to return to the roots.

Yes, this requires hardware production capabilities. It's not for everyone, then.

But I tell you, 40 years into this industry, it was so refreshing to abandon all of the fancy front-end stuff and just get straight down to the embedded realm. Nothing nicer than having the machine - and the user - all to oneself.

Of course, this edge case gets discussed further in the article, but that is just one of a few things this old-timer could say about what went wrong with programming culture... (*cough* *cough*..should never have removed the compiler..*cough* *cough*.. this opened the door for all the weirdos to take over what should've stayed in the OS .. *cough*)


I keep finding myself reading Forth literature and playing with gforth in the evenings, thinking fondly of hardware to escape the atrocities of editing yaml and waiting 10-30 minutes to see results. Maybe I should run in the same direction...


You really should. I encourage you to do so. There is still so much potential in the embedded world to "get it right" - i.e. undo all the cruft and bloat of the desktop/web/etc.

But, be careful, it requires just as much rigour as any other market. Perhaps more so, in some ways. Tooling and methodology are a lot more important ..


I feel fortunate to run one of my current projects in that aforementioned "Linode-esque" fashion.

It is just so, so much simpler to debug and trace what's going on. It can handle a surprising amount of traffic and work before any actual issues come up. Nothing changes from an upstream vendor, at all. It really "just works".

That said, I'm very aware there's a sweet spot and that this doesn't work past a certain point. I just think that point is so far beyond where most people actually get.


Niche is ok, niche is good. The whole of Western civilization was built on shopkeepers and tradesmen. The concepts of private domicile, personal freedom, even democracy itself, one could argue, are firmly founded on skilled people doing niche work for their communities. Corporations are inherently imperialistic. Bring back niche entrepreneurship, I say!


But you literally don't have to do any such thing though.

A simple makefile and maybe a load balancer can easily let you launch large web applications used by millions of users.

I do not understand where people are getting stuck here that they need all of this complexity.

How many users do these apps have?


Way less than people think they’re going to.

When I started my current job, the project was veering into microservices hell, led by a team that did not understand how to build a web application. A rough back-of-envelope calculation of the potential customer base and what the product was aimed at (B2B in renewables) told me that even with world domination, the backend of the application would at most have 50k active users ever. So we didn't really need 30 different microservices, all of which could scale independently; maybe just 2 or 3 larger services.


Yup, this is what I'm working on right now.

10 microservices to support a 2-page static HTML site.

We "needed" microservices for "Scalability", but we will never have more than a few hundred users at a time.

We can't have simple logging of our services, because all logs have to go to Loki, and Loki only does full-text search. So first, learn how to query with Grafana's query language in order to do any searches at all, then figure out what you will ever need to search and turn that into a Prometheus metric, but wait, now we have to properly compose our metrics for queryability....
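
(For readers who haven't met it: Loki's query language is LogQL, a label selector plus line filters. A hedged sketch of the same search done both ways, with job="checkout" as a made-up label:)

    # locally, the search is one grep:
    grep 'timeout' /var/log/checkout/app.log

    # against Loki, the rough equivalent via Grafana's Explore or logcli:
    logcli query '{job="checkout"} |= "timeout"'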

I need to implement retry logic in one of my microservices. It would be easily implementable with an Amazon SQS or RabbitMQ queue. But we can't have queues, because they aren't stateless. We need to use Kafka because it has the ability to scale to billions of events, never mind that we never see more than 10k events in a 24-hour period. So now we'll need to write custom logic for retries and manage state outside the queue, which means we're back to using database tables, which means there's no point to using Kafka in the first place...

Our services build in gitlab to docker images which then are hosted in the cloud and managed by k8s in custom vpcs. Sure, makes sense. But also, can someone show me how the developers on the team are supposed to be able to run the system locally? What? Can't do it? Replicating the production environment for local builds is now too heavy a lift and has been deprioritized by management? So how are they supposed to develop again? Oh, they'll just never learn how the whole system works, and continuously make changes they assume are isolated to their specific microservice without realizing they've introduced an error into the wider system unless the lead/manager catch it? Got it.

I could go on. Architects choosing technologies over listening to their engineers, in order to pad their resumes with technologies that look good to other architects. Committees of middle-managers choosing the 'safe' enterprise technologies instead of what is actually needed.


I can feel the raw anger and exasperation. Well done.


Sounds all too familiar. We had an application that ran MRI analyses. The web app to manage the jobs was all set up in Kubernetes to scale on demand. But it was a basic CRUD app and we had fewer than 300 users.


Resume based development!


Do you work where I left earlier this year? Exact same description: extremely small total user base, a ~30-person company, deploying ~30 services on Kubernetes... I was being asked to take over repositories full of configuration files. No, thanks.


Not only that, half this supposedly necessary tech actually slows you down: containers and kubernetes, websockets galore, clients like giant barnacles on your server. Then we get these best practices that weren't thought out well: sessions, etc… mostly it's the cargo-cult, knee-jerk "gotta have it" response, like some frontend maniac from 2010 adding jquery to everything, when what is needed is to keep the hood open and keep people talking about the engine parts, instead of calling it done and schlepping some guy's tractor drag race motor around town delivering pizzas.


> instead of calling it done and schlepping some guy's tractor drag race motor around town delivering pizzas

This sounds way too specific to be a hypothetical


Startup mentality - you're preparing to serve 100M users even though you have 1.5 right now, and it would be rare luck to have even 1M, because if you don't hit high numbers it's doomed: the high investment will not bring equally high returns and everything will be scrapped.


100M records is a ~20 GB database (~200 bytes per record). One of my companies needs to deal with large spikes in traffic coming from sudden virality on TikTok, and it peaks at around 8M users within a day or so. I handle all of that on a single beefy Hetzner machine with cycles to spare.


Stack Overflow has very modest requirements considering it's one of the most visited sites on the net. https://stackexchange.com/performance


> How many users do these apps have?

There aren't that many projects that have "millions" of users.


I disagree. There are so many simpler options available. DO's App Platform is point-and-click deploy from GitHub for $5/month.

Knowing Docker these days is a huge boon - it immediately removes the "works on my machine" problem, and honestly the amount of Docker knowledge required to deploy even a basic CRUD app is about 10 lines that can be copied and pasted from Stack Overflow by searching for "Dockerfile for X".
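
(A hedged illustration of those ~10 lines, assuming a small Python web service; the image tag and app module are placeholders:)

    FROM python:3.11-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    EXPOSE 8000
    # assumes gunicorn is listed in requirements.txt
    CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]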


The problem isn't so much that it's hard to get started with Docker, it's that debugging stuff when it goes wrong is so hard. There's this 80M dockerd binary that needs to run as root, and last time I checked it's over a million lines of code – lots of potential for failure there, ranging from "Docker doesn't do the right thing and I can't figure out why, so let's just nuke the entire server and start up a new one" to "oh, dockerd's magical firewall rules made my firewall ineffective".

Is that a good tradeoff? I don't know; it seems to me you can get 95% of what Docker gives you with 5% of the complexity. Luckily the ecosystem has started to realize that and better tools have started to materialize.


Which alternatives are you thinking of?


I've used runc, which worked out well for me, but it's not a "drop-in replacement" for everything Docker does; it's essentially just an OCI runtime (originally extracted from Docker, I believe), but 9 times out of 10 that's what I want.
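
(The bare runc workflow is pleasingly small; a sketch, assuming you've already unpacked your app's filesystem into rootfs/, and with "myapp" as a made-up container id:)

    mkdir -p mycontainer/rootfs && cd mycontainer
    # ...populate rootfs/ with your application's filesystem...
    runc spec            # writes a default config.json next to rootfs/
    runc run myapp       # runs the container that config.json describes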

There are other tools too, such as containerd (and more whose names I don't recall offhand), but I don't have first-hand experience with them.


Just one anecdote, but I've had a script run flawlessly on its own in multiple environments, while its Docker image exhibited the "works on my machine" problem for some reason!

(Never bothered figuring out why since the script ran fine on its own and was replaced with a proper application shortly thereafter.)


It solves that problem but adds its own. I guess it's overall a positive.


Trying to convince people of that seems to be a losing battle.


> Logs are structured and passed to a SaaS which will have a custom search syntax that takes tens of seconds to search them poorly, and they'll get lost in the noise of all those components. You'll spend a non-trivial amount of time devising systems for tracing requests through all these components.

I feel this one. At a previous gig we had an entire project to try and pick a logging solution, and none on offer let you do the one fundamental operation that makes using logs bearable: full-text search.


I have never found a logging tool that I like more than grep (or ripgrep, I suppose, combined with maybe some awk). In principle there's a lot that could be improved on grep/awk, but somehow all the tools that I've had to use thus far come with implementations that are so bad and constrained that grep still wins. It's almost like the people writing these things have never used it themselves.
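
(As a concrete, hedged example of the kind of pipeline that's hard to beat; the paths, the req_id marker, and the field number are all hypothetical:)

    # every error for one request across rotated logs, then a count
    # per endpoint, assuming the endpoint sits in field 7
    grep 'req_id=abc123' /var/log/app/app.log* \
      | grep -i 'error' \
      | awk '{print $7}' | sort | uniq -c | sort -rn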


I have to say Filebeat bothers me. We already had rsyslogd, and it's a very elegant idea.


Cloudwatch tho


"12 years ago everyone was wondering how we'd program on multicore CPUs" is either laughable or enlightening to read as an OS developer.


Even in user space. Electron became the tool du jour for web developers building desktop apps. Single-threaded memory hogs.

Thankfully many DAWs, video editing, CAD, servers and other useful software saw the value in multi-core workloads.

It’s a thing… just not for web folks most of the time.


Most serious electron based applications are not single threaded.



I don't see any details about their usage of threads in this video.


12 years ago it might have been laughable. But the Core2 Duo was released in 2006, some 7 years before, so the problem was well in hand by then.

But go back 20 years, and you predate Linux RCU. The OS Developers weren't laughing at the problem then.

Arguably the Linux Real Time patches are solving a similar problem, which could be said to be extracting maximum parallelism out of the code. For Real Time this isn't so they can run in parallel, but rather so a higher priority task can interrupt a lower priority one with impunity. Real Time Linux still isn't done yet, although it's getting close.


> But go back 20 years, and you predate Linux RCU. The OS Developers weren't laughing at the problem then.

I believe it was the DEC Ultrix dev team who, reportedly, amused themselves after hours by making transparencies of Linux source code, projecting them on a screen, cracking open a few beers and having a laugh at the idiot mistakes those college kids made -- mistakes, of course, that you can avoid by choosing a real Unix OS.

It took a while for Linux to actually be taken seriously outside of home-lab tinkering.


> It took a while for Linux to actually be taken seriously outside of home-lab tinkering.

It took a while, and big companies like IBM and organizations like the OSDL and FSG (which became the Linux Foundation) poured many thousands of person-hours and millions of dollars into it. The Linux kernel has plenty of independent "amateur" (not paid to do so) contributors, but it's also got a lot of paid contributors for whom it's their day job.

It's an awesome mix of contributors, but I think there's a definite generational shift around 2.2/2.4 to a much more capable kernel than 2.0 and before. Had Linux not gotten the financial backing it did, I don't know that it would be where it is today.


Yeah, s/everyone/webdevs/.

Typical audio software, for example, has been multithreaded since the 90s (simply because the audio callback and the GUI must run in different threads). And in the mid 2000s these threads started to run on different CPUs.


There was certainly a lot of noise about it.


I was a freshman CS student in 2010-2011 and we learned multi-threaded programming in our first year.


Sure but you would have learned it in 1995 too.


As a freshman CS student? Granted things were changing rapidly in the 90s, but this wasn't the case my freshman year a couple of years prior. I know that I was aware of the concept of threading by my sophomore or junior year, and by the time I graduated had encountered them first hand. I'm not sure we even covered multitasking (fork() and friends) at all in my freshman year, but of that I'm far less certain.


From my freshman years around 2005, I vaguely remember that multi-threaded GUI apps were:

- a way to avoid blocking the UI,

- a tool to gain some perf on new fancy multi-core processors,

- making the code much harder to read and debug, which often makes them a victim in a cost-benefit analysis.


Point is, the programming paradigm of POSIX threads or multitasking wasn't a new concept then and had likewise been in the curriculum for a while. Multi-core only brought a new level of performance to it.

There were in fact attempts at new concurrency paradigms (like Software Transactional Memory), which however led nowhere outside academia.


I learned about it in 1991.


Threads are but one way to handle the concurrency/parallelism problem on a given set of processors, old or new.

So what are some other ways of handling multi-core code that you were taught?


> The story that solidified is: Use Boring Technology. Use [...] Ruby [...]

> I think this narrative, while conventional, is bollocks. [...] Ruby's weird semantics are credited for how one person was able to use it to make a world-changing framework; to call Ruby a "boring" choice now is a testament to how successful the right weird tech can be!

Sorry, but this whole section is bollocks. I've never heard anyone call Ruby boring, and I've had one too many methods monkey-patched in from halfway across the universe to ever call Ruby "boring". Basically no-one seems to use Ruby for new Serious™ Programming™ Projects™ anymore.


https://blog.codinghorror.com/why-ruby/

This guy called it boring when justifying its use for Discourse back in 2013.

> Ruby isn't cool any more. Yeah, you heard me. It's not cool to write Ruby code any more. All the cool people moved on to slinging Scala and Node.js years ago. Our project isn't cool, it's just a bunch of boring old Ruby code.


It's not entirely bollocks; you may have just missed the point: technology is cyclical, according to its application. Once, Ruby on Rails was the badass, non-boring way to write apps; now - because it was once so badass - it is 'boring' in the sense that anyone can do it, and you don't really have to think about much. It's quite possible to operate on cruise control, and it is thus 'boring' in the coding cowboy/techno-aristocrat sense of things.

>Basically no-one seems to use Ruby for new Serious™ Programming™ Projects™ anymore.

Yes, and in so witnessing this, you are demonstrating how non-bollocks the original bollocks was and now is. Such is the cyclical nature of things in our highly stratified industry .. it's one layer of bollocks on top of a shit cake, after another ..


Plenty of new Serious™ Programming™ Projects™ use boring technology like Java and have for a long time. Once being the hot new thing and falling out of favor does not mean it's now a proven boring technology.


It's "boring" in the sense that it's been around for a while and has proven itself to be fairly stable and reliable, is not going away any time soon, is no longer changing at a very rapid pace, has some good books written about it. Rust is also fast becoming "boring" in this sense (maybe it already is; I don't keep up that closely).

This is opposed to Rust from 5 years ago, or e.g. Zig today.

That's how I typically use "boring technology" anyway.


It's hard to overstate how monumental Stack Overflow was even 10 years ago. I remember being in University and dreading my computer graphics classes because SO didn't have answers for a lot of those questions at the time.


I got my first programming job in 2011, working on a specialized and computationally heavy desktop app. This already felt like a bit of an anachronism, but smartphones weren't central to our lives yet and Electron didn't exist.

I bet the total number of people who will learn a C++ GUI framework between now and the end of time is smaller than the number who know one now.

I was happy to work on anything algorithmic. It seemed like there was so much CRUD and foundational communication stuff happening. All the programming literature talked about "business logic". Now with the rise of machine learning, a new grad might have a better chance of doing non-CRUD work now vs. then.


I think IRC was a big part of community knowledge and also such a hot spot for weird inspiration. I can attribute my picking up of Common Lisp, and ultimately getting a job in Australia from the UK, to an off-the-cuff comment by someone on IRC.


There were things like IRC and Usenet. But the thing I remember most was books. Lots and lots of books. And doing things like building up a mental model of the indexes for their reference sections so I could look things up quickly. And doing something similar for `man`, being able to recite from memory that I wanted `man 3 foo` instead of `man 5 foo`.

Having direct access to good reference material was critical, and often that meant hardcopy.


I started learning programming (C++) when I was 12-13 (2000-2001) because a guy on QuakeNet told me I should learn how to code if I want to learn about hacking. All these years later I'm very well paid to create distributed systems in technologies I could've never imagined back then.

The funny thing is that I want to stop doing that and have other problems instead, because I'm just tired of everything that (mostly) web development and other distributed systems have become, and of this incessant layering of more things on top of other things, or adding of extraneous things.

I'm currently re-orienting around a possible switch to game development.

(For those who might think I'm being impossibly naïve; I have plenty of lower-level experience and have worked at 2 game studios previously, it's not that foreign of a world. I originally left to do consulting because it paid significantly better.)


With games as a service and the increasing desire to de-risk large projects, I'd be a little wary of grass-is-greener thinking.

There might be something akin to old-school hacking in the indie domain.


I think the point about grass being greener is a good one in general, but I have to say that I'm wording things specifically to account for this: I want other problems than the ones I have now. I don't like the problems modern distributed systems development present me with anymore. It's boring and I genuinely feel like we keep having issues that shouldn't even be issues because of the technologies we use and the layers/systems that have been added to everything.

I will also say, having worked at game studios before I believe them to have a much better barometer for what is extraneous, so even the live services teams at these places tend to be a lot more sensible than your average web shop. Your mileage may vary, though.

Games as a service and de-risking large projects I'm not sure is that relevant? Someone has to work on the engine, the networking code, and so on, and this work is largely not really going anywhere. There have been approximately zero innovations in the last 20 years that remove the need for being able to write lower-level code and knowing the internals of a game engine, provided you want to make something half-way decent.

Even if game studios somehow manage to not do the work I'm interested in, I might just take an extremely long sabbatical while continuing to make games and work on those things.

There is an alternate future where I also try to create custom solutions as a contractor in performance-critical niches and still work primarily with backend stuff, but I'm not sure what that would mean.


If you've got really top-notch low-level skills, then maybe something in the HFT financial space.


I learned linux only because people on IRC were willing to handhold me through it.

Microsoft had a very strong hold on education during that period.

I didn't learn anything outside of the Microsoft Ecosystem: Excel and Word, yes, but also Access and VB.NET when I was trying to get into programming.


I've only learned of the Good Ol' Days of the 90s from second hand sources, and I was in middle school throughout the aughts, so for me these periods are just a mirage of a better time. In high school I pursued reverse engineering and other stuff, so I could blissfully ignore the newer direction the industry was taking all the way until 2019.

But now I'm close to graduating university. Is there any kind of job where this kind of software is still more common? PHP web development, sysadmin work, writing systems software or native software, etc.


My experience of programming as a Junior Software Engineer in the '90s was replacing inner loops with inline assembly, avoiding cache misses (keep your code in the L1 cache) and making sure your memory accesses stayed on one page. Nobody seems to care even in the slightest about this stuff anymore, and the abysmal performance of modern software shows it.

During the last 30 years, everyone's focus has drifted up the stack, to higher and higher levels of abstraction and higher and higher level languages, to the point where we are totally divorced from the electrons and realities of the underlying hardware.


Certain industries still care about this stuff. Some trading firms rely on performant systems which use strategies like the ones you described. Not everything is in hardware, and it varies from place to place.


There are a lot of places that care about performance now, but in many of them, instead of optimizing your own business logic, you would be building performant platforms so that others could build their own business logic on top of them, be it V8 or Unreal.

Honestly, I think that this separation of concerns makes much more sense.


For PHP you're probably best off learning a framework or two (Zend, Symfony, Cake, Laravel), which is drastically different from writing regular old PHP. WordPress devs are always in demand for contract-type work, but it's a grind for minimum pay unless you dominate a niche with a plugin.

Unfortunately sites like square and social media have taken over the small time web dev companies where you used to be able to get your hands dirty at all levels.

I'd probably look around at digital marketing companies. Your tech skills won't be "praised", because they favor creative over technical abilities. But they'll probably throw a wide range of technical problems at you and expect you to be "fullstack".


Heavy industry. Manufacturing, mining etc. This stuff isn't always in the flashiest locations and wfh is bad form in my experience, but it is certainly critical. There are all sorts of custom systems keeping track of (and tying together) high value physical processes.


Embedded. Things are getting fat and wide enough that you can apply all of that tech to an embedded system and, hopefully, no one will be any wiser that you've done so.

SOM-based development (system-on-module) in the embedded Linux world means that being able to productively pick and place the right software components is a highly sought after skill.

There is gold in them thar' repo's, especially if your shovel is a SOM and your mule is a functioning supply chain direct to the customer ..


> just a mirage of a better time

Don't fall into the "good ol' days" trap. There were more than enough problems/challenges/frustrations to go around back then too.


By good ol' days I meant that everything I've read about them tells me that set of problems and frustrations is better than what we have today, and software was actually made better by those circumstances.

Of course, I can fire up DOS, NeXTStep, or Windows XP and get to doing it on my own, or hack on open sourced codebases from the time, but the key component is the teams of people doing it, and the common wisdom that they had.

Rather than thinking it was heaven, it's a hell I'd prefer over our current one, so to speak.


Right, I'm just saying that's not correct. It's easy to view history through a very positive lens... nostalgia (even nostalgia for something you did not personally experience) is a powerful drug.

I have a lot of nostalgia for writing BASIC and the feeling that I got the first time I learned about HTML4 tables and ASP.NET. But you could not pay me to return to those technologies.


There are plenty of legacy systems around that were developed like this and never moved on. Finding them might be tricky though.


At the end of the post the author talks a little bit about intuition vs understanding and seems to (mis-?)attribute it to professionalization. Maybe that’s the case. Could that be related to some of his earlier points of how much more stuff there is, though? The amount of time people have hasn’t grown, but seemingly the amount of stuff they have to deal with has. You have to pick your battles, as it were.


There's more to know now, but it's also much easier to get up to speed on that stuff. Over time I've switched from having everything memorized to having enough of an understanding to get running when I need to. It allows me to spread myself over the larger surface area.


I miss that. There was only one way to do a thing, and I could run. Then, ... many ways to do one thing, and I did freeze.


One thing the author seems to miss: web-based rich UI was facing serious headwinds because W3C accessibility standards were definitely still a thing and nobody wanted to associate with a project that was getting dragged publicly by ADA activists for giving screen readers the finger.


> Oddly, while parallelism didn't turn out to matter much, concurrency did

This is true, but I don't think for the reason the author may think. Basically, parallelism in a single process is hard as hell and nobody can do it reliably. On the other hand, concurrency is easy (mainly due to not sharing memory between threads). Basically we just achieved the parallelism we wanted by utilizing concurrency within individual processes, but the processes themselves are parallelized at the OS/hardware level.
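
(You can see the shape of this even from the shell; a sketch, where process-one.sh is a hypothetical single-threaded worker and the OS fans the processes out across cores:)

    # shared-nothing parallelism: one process per input, eight at a time
    find data/ -name '*.csv' -print0 | xargs -0 -P 8 -n 1 ./process-one.sh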


I was a little bothered by the misunderstanding about Moore’s law and the performance wall we hit: clock speeds stopped doubling every 18 months. At that point, the obvious way forward was more cores and more threads.

We still get clock increases (5 GHz was once the exclusive domain of exotic platforms), but memory latency also hit a wall, and that's why processors add more memory channels alongside more cores.


php and mysql were a much simpler time


Perl is my LAMP stack of choice.

Just generating html templates dynamically, with a $db object interspersed, is still a pretty rad & pure experience. Not much between you & the machine. Not that I've actually done that in the last two decades. I used a little ejs though, & wrote a little build tool to turn ejs-template-like things into standalone modules.


php and mysql are not dead. They make for a great choice for a startup or a Fortune 500.


The milestone is RESTful architecture. It changed everything.


What a nice post... I hope your program is as good as your posting.


> think of how many Americans buy giant cars who don't need giant cars.

I hesitate to comment since I’m nitpicking a parenthetical, but this weird little bit of classism totally broke my immersion in an article I was otherwise really enjoying. Middle class people overwhelmingly can’t afford housing, but let’s shame them for trying to find at least one kind of affordable luxury in life? We’re the industry making giant luxury phones people “””don’t need”””, but people would rightfully call me an asshole if I shamed them for wanting one.


> this weird little bit of classism totally broke my immersion in an article I was otherwise really enjoying. Middle class people overwhelmingly can’t afford housing, but let’s shame them for trying to find at least one kind of affordable luxury in life?

A giant car is probably the most harmful-to-others luxury indulgence you could pick (and a significant factor in why people can't afford housing). I'm all for having a hobby and spending money on your hobby, but maybe pick one that doesn't kill other people's kids so much. And I'm baffled that you're calling this classism and then talking about middle class.


> A giant car is probably the most harmful-to-others luxury indulgence you could pick

Some people are simply refusing to harm themselves when they choose a large vehicle.

Getting in and out of a "regular" sized car routinely injures me. The injuries are painful, immobilize me for weeks at a time, and occur several times per year.

I also know several people for whom being in a compact car causes anxiety or panic.

Do cases like this enter into your calculus, or do you just assume everybody you see in a large vehicle doesn't much care for the rest of humanity?


> Getting in and out of a "regular" sized car routinely injures me. The injuries are painful, immobilize me for weeks at a time, and occur several times per year.

Seek medical assistance, immediately. This is not a joke, I am not intending to offend you- you genuinely should seek medical assistance as in: right now.

> I also know several people for whom being in a compact car causes anxiety or panic.

Now I will intend to offend: Yes, let's make the problem worse! Humans without cars should feel even more anxiety and panic! Especially tiny ones, which you can't even see.


> Humans without cars should feel even more anxiety and panic!

I mean, pedestrians do, right? I guess it depends on the local driving culture.

My best friend is blind, and this is precisely how he feels trying to navigate the world.

But yeah, living in America, car ownership seems unavoidable for most people in most areas.


> Seek medical assistance, immediately.

It's not a medical problem.

I'm just tall and small cars are the wrong size for my body. Like choosing a piece of clothing, the solution is to choose a vehicle that fits my body.

I mean, I could try to have surgery to change my height, but that's wrongheaded, right?


My friend is 6'4" and drives a Honda Fit without issue.

There is definitely middle ground - frankly, I don't believe you are interested in hearing it, because you have already decided that obscenely large cars that easily kill children are the only way you can feel like you're experiencing comfort.


I’m 6’ and I barely fit inside a Lotus. The solution, for me, is simple: I don’t have one.

But sustaining such significant injuries from getting into a car larger than a Lotus or a Mini Cooper is concerning enough to warrant medical attention - we simply shouldn't get damaged so easily.


My position is that large vehicles can be accessibility tools. It's okay for people to need and use such things.

The way you react to that is pretty rude.


I simply refuse to believe that an ordinary-sized European car (no problem for countries with comparatively higher average heights) is "immobilizing [you] for weeks at a time".

I think you're being disingenuous, I have no obligation to be kind to your face when you spin such dramatic falsehoods.


> Getting in and out of a "regular" sized car routinely injures me. The injuries are painful, immobilize me for weeks at a time, and occur several times per year.

How does it injure you?


I'm tall. The doors are too low to the ground and small, so I injure my back getting in and out.

Edit, more context: I've routinely gotten in and out of small sedans all my life. When I was younger, this wasn't much of a problem, but now that I'm in my thirties, it's become very frustrating and debilitating.


I agree with the advice to see a doctor. The thirties is way too early to have such issues. They are starting for me, and I'm in my mid-fifties.

Unless your small car is a Mazda Miata or some other triumph of miniaturisation, it shouldn’t happen.


Thanks for the reply. Being tall has its advantages, but not best for knees and back. Best of luck to you!


> A giant car is probably the most harmful-to-others luxury indulgence you could pick

International travel. People need to get to work, nobody needs to backpack through Europe to "find themselves" (besides perhaps Europeans).


> International travel.

Probably less harmful; the per-year CO2 emissions are similar (depending how long your commute is and how much travel you do), and travel has fewer of the other negative externalities (taking up road space and city space, killing people directly).

> People need to get to work, nobody needs to backpack through Europe

Perhaps, but I don't see how putting out an extra n tonnes of CO2 by driving an unnecessary giant car to work is any better than putting out an extra n tonnes of CO2 on things that are more directly self-indulgent. (If anything I'm more sympathetic to wasteful consumption if the consumer is at least enjoying themselves)


If the CO2 emissions are similar and both are just for fun, I don't see how you ethically justify the one and not the other. This looks elitist and classist.


Because, as I said, the CO2 emissions are not the only negative externality cars impose on other people.


More people, Americans mostly, really should backpack through Europe, not just to find themselves, but to also discover the wonders of universal healthcare and a welfare state.


> People need to get to work, nobody needs to backpack through Europe

That is why American capitalism/consumerism is bad. Our society would be far, far better served if people exposed themselves to different cultures and beliefs and experienced the world first-hand instead of through talking-head media. If more "found themselves", they'd know life isn't about work and money.


As the owner of a normal-sized car: oversized cars are unnecessarily aggressive towards other people, both because of the drivers' attitude and because of the design.

Some examples: their lights always blind me, they block almost all vision, and they offer very poor visibility themselves.


and crush any pedestrians they hit.


At least here in Ireland, they’ll stop if you look like you want to cross the street, regardless of there being a crossing or not.


Those giant cars are pretty expensive themselves though, both to buy and to fuel. If you have a huge truck or SUV but can't afford housing... well, you probably could afford more housing without the huge car. This isn't "avocado toast" style criticism, cars are a significant expense - typically your second largest expense after housing.


The big car owner has an unquestionable license to bully others, got it.



