
When it comes to code reviews, the return on investment is governed by the law of diminishing returns. While many of the comments made in code reviews might be interesting, they are not so interesting that they pay for themselves. If you put some dollar value on the time invested, you'll find that the vast majority of this process is simply burning money. And not only money but also, as this article says, morale.

A curious fact about the politics of modern tech teams is that some of the same people who consider themselves "anti bureaucracy" are strongly in favor of this type of bureaucracy, even when it brings no measurable benefits.

My opinion of code reviews has become more negative over time, mostly because I have not seen them achieve reliable positive outcomes.

I'll give specific examples:

https://www.futurestay.com had the worst code that I'd ever seen. When I joined the company I was horrified at the level of tech debt. And yet, for years, they had a rule that at least two engineers had to review every PR. So this terrible code, the worst I'd seen in my 24 years of coding, had been approved by two engineers.

And likewise:

https://openroadmedia.com also had very bad code, and also had a rule that nothing could be pushed to production until at least 2 engineers had reviewed the PR.

In both companies the code review slowed down deployments, slowed down the team, and created a culture of internal sniping, while leading to no improvements.

But how can I say the code was objectively bad? Because many bugs were found in production. And what would have reduced the number of bugs in production? More tests, especially end-to-end tests. And so I eventually came to this conclusion: it is best to skip code review, and instead have the team invest that same time in writing more tests, especially high-level tests. For the most part, if the engineers on a team are bad then the code reviews will also be bad, but if the engineers on the team are good, then the code reviews will not be needed. And in both cases, the path to improvement comes from writing the kinds of tests that ensure no bugs get into production.

As I've grown in my career, and taken on higher-level management jobs, I've also realized that code reviews do not scale to large-scale leadership, but tests do. Code review seemed like a good idea when I was leading a team of 3 engineers, but not when I was leading a team of 30. When I was leading a team of 30, the only tool-of-oversight that worked for me was automated testing. And so I've concluded this is where the effort should be made. Sitting at my own computer, I cannot review the work of 30 engineers, but I can still run the tests on the various software, to see if they are passing. In particular, I can look at API interfaces and then change the dummy test data to break the tests, and this lets me quickly see how thorough the test coverage is, and how prepared we are for unexpected shocks.
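To make the "change the dummy test data" idea concrete, here is a rough sketch of the kind of high-level API test I mean (pytest style, using the requests library; the endpoint, payload fields, and URL are hypothetical, not taken from any of the projects above):

  import requests

  BASE_URL = "http://localhost:8000"

  def test_create_order_roundtrip():
      # Dummy test data: deliberately easy to mutate when probing how
      # thorough the coverage really is.
      payload = {"customer_id": 42, "items": [{"sku": "ABC-1", "qty": 3}]}
      resp = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)
      assert resp.status_code == 201
      body = resp.json()
      assert body["customer_id"] == payload["customer_id"]
      assert len(body["items"]) == 1

Mutating "qty" to a negative number, or dropping "customer_id", should make some test fail somewhere; if nothing fails, that tells me a lot about the real coverage.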

When I am running a team I will assign software developers the task of writing various tests, including high-level end-to-end stress tests. These tests sometimes reveal problems. We then respond to the problems. But we don't waste time responding to problems that have not been proven by a test, which is to say we do not do code reviews. Code reviews are all about predicting what might be a problem. But many of those concerns are phantoms. It is better to respond to real problems that have been revealed by tests.

I've also come to realize that software developers, quite naturally, develop strong opinions about questions of style, and yet these questions have no long-term impact on the health of the tech in the company. Other than enforcing the rules of some automated linter, the time spent on issues of style is 100% wasted. If you allow the team to spend even one minute discussing issues of style, then you are setting money on fire. But I know this opinion is unpopular, because software developers enjoy enforcing their own opinions about style. But it is all bikeshedding; it has no real-world impact.

I have the impression that there is some middle era, in the career of a software developer, where concerns about issues of style and organization tend to peak. The junior developer does not know enough to care about such issues, but somewhere between 4 years of experience and 10 years of experience, these issues feel important. I think you need more than 10 years of experience to see the waste. In particular, you need to run one large project where the team invests a lot of time into code reviews, and you need to notice how much tech debt builds up, despite the code reviews, to realize that code reviews do not offer a path forward.

On recent projects I've told the team there will be no code reviews, but instead we will focus on building tests. I get a surprising amount of pushback on this. Less experienced software developers get angry with me and tell me that I'm being unprofessional. In some sense they are correct, in the sense that "professional" refers to "standard norms that are accepted by a profession" -- I am deviating from those, clearly.

The most reasonable criticism I get is that code reviews could catch not just problems of style but problems of algorithms. What if, they ask, some junior developer introduces code that is ignorant of the implications of Big O Notation? What if they introduce code that runs in polynomial time? But I would ask, how do we know if it is running slowly? We can only know that through tests. So let's build tests that measure time, and stress test with large loads. Big O Notation offers an excellent example of where software developers can worry about the wrong thing: what if a junior-level software developer writes code that runs in polynomial time but on data that will only ever have a few hundred records? While the algorithm might be sloppy, the code will run quickly because there are few records. In that case, any time invested to find a different algorithm will be a poor investment. Once I'm running a team of 30, the only thing I care about is whether code is actually slow, and I discover that through end-to-end stress tests, not code reviews.
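As a sketch of what I mean by tests that measure time: a hypothetical bulk_import() stands in for whatever code path is under suspicion, and the record count and time budget are made-up numbers you would tune to your own workload:

  import random
  import time

  def bulk_import(records):
      # Stand-in for the real code path under test.
      return sorted(records, key=lambda r: r["id"])

  def test_bulk_import_stays_fast_at_realistic_volume():
      records = [{"id": random.randint(0, 10**9)} for _ in range(100_000)]
      start = time.perf_counter()
      bulk_import(records)
      elapsed = time.perf_counter() - start
      assert elapsed < 2.0, f"bulk_import took {elapsed:.2f}s for 100,000 records"

If the sloppy algorithm passes that test at realistic volumes, I don't care; if it fails, we have a real, reproducible problem to fix.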

Kent Beck, when he invented Extreme Programming, also popularized the phrase "Do not write a comment, instead, write a method with an easy-to-understand name that communicates what the comment was going to communicate."

I've come to a similar conclusion. Do not write a comment in a code review, instead, write a test that would catch whatever danger you want to warn about. And if you cannot find a way to express your concern as a test, the return-on-investment of worrying about that concern is probably zero, so we should ignore it.
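For example, instead of the review comment "what happens with an empty cart?", write the concern down as a test. A minimal sketch, where discount_total() is a hypothetical function standing in for the code under review:

  def discount_total(prices, rate=0.1):
      # Hypothetical function standing in for the code under review.
      if not prices:
          return 0.0
      return sum(prices) * (1 - rate)

  def test_discount_total_handles_empty_cart():
      # The review comment "what about an empty cart?" expressed as a test.
      assert discount_total([]) == 0.0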


> I've come to a similar conclusion. Do not write a comment in a code review, instead, write a test that would catch whatever danger you want to warn about. And if you cannot find a way to express your concern as a test, the return-on-investment of worrying about that concern is probably zero, so we should ignore it.

What if your concern is "this approach makes the code difficult to test"?


You discover that by writing the test. You don't try to predict that ahead of time. The goal is to get away from trying to predict problems ahead of time, because most of those problems turn out to be phantoms.


Interesting that the mania for over-investment in devops is beginning to abate. Here on Hacker News I was a steady critic of both Docker and Kubernetes, going back to at least 2017, but most of those posts were unpopular. I have to go back to 2019 to find one that sparked a conversation:

https://news.ycombinator.com/item?id=20371961

The stuff I posted about Kubernetes did not draw a conversation, but I was simply documenting what I was seeing: vast over-investment in devops even at tiny startups that were just getting going and could have easily dumped everything on a single server, exactly as we used to do things back in 2005.


It's just the hype moving on.

Every generation has to make similar mistakes again and again.

I am sure if we had the opportunity and the hype was there we would've used k8s in 2005 as well.

The same thing is true for e.g. JavaScript on the frontend.

I am currently migrating a project from React to HTMX.

Suddenly there is no build step anymore.

Some people were like: "That's possible?"

Yes, yes it is and it turns out for that project it increases stability and makes everything less complex while adding the exact same business value.

Does that mean that React is always the wrong choice?

Well, yes, React sucks, but solutions like React? No! It depends on what you need, on the project!

Just as a carpenter doesn't use a hammer to saw, we as a profession should strive to use the right tool for the right job. (Albeit it's less clear than for the carpenter, granted)


>Just as a carpenter doesn't use a hammer to saw, we as a profession should strive to use the right tool for the right job. (Albeit it's less clear than for the carpenter, granted)

The problem is that most devs don't view themselves as carpenters. They view themselves as hammer carpenters or saw carpenters etc…

It’s not entirely their fault, some of the tools are so complex that you really need to devote most of your time to 1 of them.

I realize that this kind of tool specialization is sometimes required, but I think it's overused by at least an order of magnitude.

The vast majority of companies that are running k8s, react, kafka etc… with a team of 40+, would be better off running rails (or similar) on heroku (or similar), or a VPS, or a couple servers in the basement. Most of these companies could easily replace their enormous teams of hammer carpenters and saw carpenters with 3-4 carpenters.

But devs have their own gravity. The more devs you have the faster you draw in new ones, so it’s unclear to me if a setup like the above is sustainable long term outside of very specific circumstances.

But if it were simpler there wouldn't be nearly as many jobs, so I really shouldn't complain. And it's not like every other department isn't also bloated.


Along those lines, I am building https://github.com/claceio/clace for teams to deploy internal tools. It provides a Cloud Run type interface to run containers, including scaling down to zero. It implements an application server that runs containerized apps.

Since HTMX was mentioned, Clace also makes it easy to build Hypermedia driven apps.


Would you be open to non-Python support as well? This tool seems useful, very useful in fact, but I mainly use .NET (which yes can run very well in containers).


Starlark (a Python-like config language) is used to configure Clace. For containerized apps, Python frameworks are supported without requiring a Dockerfile. All other languages currently require a user-provided Dockerfile; the `container` spec can be used.

I do plan to add specs for other languages. New specs have to be added here https://github.com/claceio/appspecs. New specs can also be created locally in the config, see https://clace.io/docs/develop/#building-apps-from-spec


> Just as a carpenter doesn't use a hammer to saw, we as a profession should strive to use the right tool for the right job

I think this is a gross misunderstanding of the complexity of tools available to carpenters. Use a saw. Sure, electric, hand powered? Bandsaw, chop saw, jigsaw, scrollsaw? What about using CAD to control the saw?

> Suddenly there is no build step anymore

How do you handle making sure the JS you write works on all the browsers you want to support? Likewise for CSS: do you use something like autoprefixer? Or do you just memorize all the vendor prefixes?


Htmx works on all browsers I want to support.

I don't use any prefixed CSS and haven't for many years.

Last time I did knowingly and voluntarily was about a decade ago.


As far as browser prefixes go, you know that browser vendors have largely stopped using those? Not even recently; that process started way back in 2016. Chances are that if you are using prefixes in 2024 you are supporting browser versions which, by all logic, should no longer have internet access because of all the security implications...


It's actually kinda hilarious how RSC (React Server Components) is pretty much going back to what PHP was. But yeah, it proves your point: as hype moves on, people begin to realize why certain things were good vs not.


Where does Tailwind stand on this? You can use it without a build step, but a build step is strongly recommended in production.


A build step in your pipeline is fine because, chances are, you already have a build step in there.


no, having a build step kills the magic of interactivity when developing for the web.

And that's why you can have the giant tailwind css file instead of "building" when you're developing.

People gravely misunderstand containerization and Docker.

All it lets you do is put shell commands into a text file and then run them, self-contained, anywhere. What is there to hate?

You still use the same local filesystem, the same host networking, still rsync your data dir, still use the same external MySQL server even if you want -- nothing has changed.

You do NOT need a load balancer, a control plane, networked storage, Kubernetes or any of that. You ADD ON those things when you want them like you add on optional heated seats to your car.


Why would you want to run it anywhere? People mostly select an OS and just update that. It may be great when distributing applications for others to host, but not when it's the only strategy. I have to reverse engineer Dockerfiles when the developer doesn't provide proper documentation.


OS upgrades are a pain. Even just package updates could break everything. Having everything in containers makes migrating to another system much easier.


I've worked at a few tiny startups, and I've both manually administered a single server and run small k8s clusters. k8s is way easier. I think I've spent 1, maybe 2 hours on devops this year. It's not a full-time job, it's not a part-time job, it's not even an unpaid internship. Perhaps at a bigger company with more resources and odd requirements...


But how much extra does this cost? Sounds like you are using cloud-provided k8s.


EKS is priced at $876 / yr / cluster at current rates.

Negligible for me personally, it's much less than either our EC2 or RDS costs.


Yeah, using EKS isn't the same thing as "administering k8s", unless I misread you above. Actual administration is already done for you, it's batteries included, turn-key, and integrated with everything AWS.

A job ago we had our own k8s cluster in our own DC, and it required a couple of teams to keep running and reasonably integrated with everything else in the rest of the company. It was probably cheaper overall than cloud given the compute capacity we had, but also probably not by much given the amount of people dedicated to it.

Even my 3-node k3s at home requires more attention than what you described.


You did misread me, I never said I administered k8s. The quoted phrase does not exist :)


I currently use k8s to control a bunch of servers.

The amount of work/cost of using k8s for handling them in comparison to doing it "old style" is probably negative by now.


So, let's say you want to deploy server instances. Let's keep it simple and say you want to have 2 instances running. You want to have zero-downtime-deployment. And you want to have these 2 instances be able to access configuration (that contains secrets). You want load balancing, with the option to integrate an external load balancer. And, last, you want to be able to run this setup both locally and also on at least 2 cloud providers. (EDIT: I meant to be able to run it on 2 cloud providers. Meaning, one at a time, not both at the same time. The idea is that it's easy to migrate if necessary)

This is certainly a small subset of what kubernetes offers, but I'm curious, what would be your goto-solution for those requirements?


That's an interesting set of requirements though. If that is indeed your set of requirements then perhaps Kubernetes is a good choice.

But the set seems somewhat arbitrary. Can you reduce it further? What if you don't require 2 cloud providers? What if you don't need zero-downtime?

Indeed, given that you have 4 machines (2 instances x 2 providers), could a human manage this? Is Kubernetes overkill?

I ask this merely to wonder. Naturally if you are rolling out hundreds of machines you should use something like Kubernetes, and no doubt by then you have significant revenue (and thus are able to pay for dedicated staff), but where is the cross-over?

Because to be honest most startups don't have enough traction to need 2 servers, never mind 4, never mind 100.

I get the aspiration to be large. I get the need to spend that VC cash. But I wonder if Devops is often just premature and that focus would be better spent getting paying customers.


> Can you reduce it further? What if you don't require 2 cloud providers? What if you don't need zero-downtime?

I think the "2 cloud providers" criteria is maybe negotiable. Also, maybe there was a misunderstanding: I didn't mean to say I want to run it on two cloud providers. But rather that I run it on one of them but I could easily migrate to the other one if necessary.

The zero-downtime one isn't. It's not necessarily so much about actually having zero-downtime. It's about that I don't want to think about it. Anything besides zero-downtime actually adds additional complexity to the development process. It has nothing to do with trying to be large actually.


I disagree with that last part. By default, having a few seconds of downtime is not complex. The easiest thing you could do to a server is restart it. It's literally just a restart!


It's not. Imagine there is a bug that stops the app from starting. It could be anything, from a configuration error (e.g. against the database) to a problem with warmup (if necessary) or any kind of other bug like an exception that only triggers in production for whatever reasons.

EDIT: and worse, it could be something that just started and would even happen when trying to deploy the old version of the code. Imagine a database configuration change that allows the old connections to stay open until they are closed but prevents new connections from being created. In that case, even an automatic roll back to the previous code version would not resolve the downtime. This is not theory, I had those cases quite a few times in my career.


I managed a few production services like this and it added a lot of overhead to my work. On the one hand I'd get developers asking me why their stuff hasn't been deployed yet. But then I'd also have to think carefully about when to deploy and actually watch it to make sure it came back up again. I would often miss deployment windows because I was doing something else (my real job).

I'm sure there are many solutions but K8s gives us both fully declarative infrastructure configs and zero downtime deployment out of the box (well, assuming you set appropriate readiness probes etc)

So now I (a developer) don't have to worry about server restarts or anything for normal day to day work. We don't have a dedicated DevOps/platforms/SRE team or whatnot. Now if something needs attention, whatever it is, I put my k8s hat on and look at it. Previously it was like "hmm... how does this service deployment work again..?"


"Imagine you are in a rubber raft, you are surrounded by sharks, and the raft just sprung a massive leak - what do you do?". The answer, of course, is to stop imagining.

Most people on the "just use bash scripts and duct tape" side of things assume that you really don't need these features, that your customers are ok with downtime, and generally that the project you are working on is just your personal cat photo catalog anyway and doesn't need such features. So, stop pretending that you need anything at all and get a job at the local grocery store.

The bottom line is there are use cases, that involve real customers, with real money that do need to scale, do need uptime guarantees, do require diverse deployment environments, etc.


Yep. I'm one of 2 DevOps engineers at an R&D company with about 100 employees. They need these services for development; if an important service goes down you can multiply that downtime by 100, turning hours into man-days and days into man-months. K8s is simply the easiest way to reduce the risk of having to plead for your job.

I guess most businesses are smaller than this, but at what size do you start to need reliability for your internal services?


You know that you can scale servers just as well: you can use good practices with scripts and deployments in bash, having them documented and in version control.

Equating bash scripts and running servers to duct tape and poor engineering, vs k8s YAML being "proper engineering", is, well, wrong.


The question is why solve a solved problem?

I think you are proving the point; there are very, very few applications that need to run on two cloud providers. If you do, sure, use Kubernetes if that makes your job easier. For the other 99% of applications, it’s overkill.

Apart from that requirement, all of this is very doable with EC2 instances behind an ALB, each running nginx as a reverse proxy to an application server with hot restarting (e.g. Puma) launched with a systemd unit.


To me that sounds harder than just using EKS. Also, other people are more likely to understand how it works, can run it in other environments (e.g. locally), etc.


Hmm, let's see, so you've got to know: EC2, ALB, Nginx, Puma, Systemd, then presumably something like Terraform and Ansible to deploy those configs, or write a custom set of bash scripts. And all of that and you're tied to one cloud provider.

Or, instead of reinventing the same wheels for Nth time, I could just use a set of abstractions that work for 99% of network services out there, on any cloud or bare metal. That set of abstractions is k8s.


Sorry, that was a misunderstanding. I meant that I want to be able to run it on two cloud providers, but one at a time is fine. It just means that it would be easy to migrate/switch over if necessary.


My personal goto-solution for those requirements -- well 1 cloud provider, I'll follow up on that in a second -- would be using ECS or an equivalent service. I see the OP was a critic of Docker as well, but for me, ECS hits a sweet spot. I know the compute is at a premium, but at least in my use-cases, it's so far been a sensible trade.

About the 2 cloud providers bit. Is that a common thing? I get wanting to migrate away from one to another, but having a need for running on more than 1 cloud simultaneously just seems alien to me.


Last time I checked ECS was even more expensive than using Lambda but without the ability of fast starting your container, so I really don't get the niche it fits into, compared to Lambda on one side and self-hosting docker on minimal EC2 instances on the other side.


I may need to look at Lambda closer! At least way back, I thought it was a no-go since the main runtime I work with is Ruby. As for minimal EC2 instances, definitely, I do that for environments where it makes sense and that's the case fairly often.


Actually, I totally agree. ECS (in combination with secret manager) is basically fulfilling all needs, except being not so easy to reproduce/simulate locally and of course with the vendor lock-in.


Do you know of actual (not hypothetical) cases, where you could "flip a switch" and run the exact same Kubernetes setups on 2 different cloud providers?


I run clusters on OKE, EKS, and GKE. Code overlap is like 99% with the only real differences all around ingress load balancers.

Kubernetes is what has provided us the abstraction layer to do multicloud in our SaaS. Once you are outside the k8s control plane, it is wildly different, but inside is very consistent.


Yes. I've worked on a number of very large banking and telco Kubernetes platforms.

All used multi-cloud and it was about 95% common code with the other 5% being driver style components for underlying storage, networking, IAM etc. Also using Kind/k3d for local development.


Both EKS (Amazon) and GKE (Google Cloud) run Cilium for the networking part of their managed Kubernetes offerings. That's the only real "hard part". From the users' point of view, the S3 buckets, the network-attached block devices, and compute (CRIO container runtime) are all the same.

If you are using some other cloud provider, or want uniformity, there's https://Talos.dev


Yes, but it would involve first setting up a server instance and then installing k3s :-)


I actually also think that k3s probably comes closest to that. But I have never used it, and ultimately it also uses k8s.


If you are located in Germany and run critical IT infrastructure (banks, insurance companies, energy companies) you have to be able to deal with a cloud provider completely going down in 24 hours. Not everyone who has to can really do it, but the big players can.


I'm just happy to see the tl;dr at the TOP of the document.


I've worked at tiny startups before. Tiny startups don't need zero-downtime-deployment. They don't have enough traffic to need load balancing. Especially when you are running locally, you don't need any of these.


Tiny startups can't afford to lose customers because they can't scale though, right? Who is going to invest in a company that isn't building for scale?

Tiny startups are rarely trying to build projects for small customer bases (eg little scaling required.) They’re trying to be the next unicorn. So they should probably make sure they can easily scale away from tossing everything on the same server


> Tiny startups can't afford to lose customers because they can't scale though, right? Who is going to invest in a company that isn't building for scale?

Having too many (or too big) customers to handle is a nice problem to have, and one you can generally solve when you get there. There are a handful of giant customers that would want you to be giant from day 1, but those customers are very difficult to land and probably not worth the effort.


Startups need product-market fit before they need scale. It’s incredibly hard to come by and most won’t get it. Their number one priority should be to run as many customer acquisition experiments as possible for as little as possible. Every hour they spend on scale before they need it is an hour less of runway.


While true, zero downtime deployments are... trivial... even for a tiny startup. So you might as well do it.


Zero downtime deployments were a thing long before K8S


Tiny startups don't have money to spend on too much PaaS or too many VMs, or to faff around with custom scripts for all sorts of work.

Admittedly, if you don't know k8s, it might be a non-starter... but if you have some knowledge, k3s plus a cheap server is a wonderful combo


Why does a startup need zero-downtime-deployment? Who cares if your site is down for 5 seconds? (This is how long it takes to restart my Django instance after updates).


Because it increases development speed. It's maybe okay to be down for 5 seconds. But if I screw up, I might be down until I fix it. With zero-downtime deployment, if I screw up, then the old instances are still running and I can take my time to fix it.


If you're doing CD where every push is an automated deploy a small company might easily have a hundred deploys a day.

So you need seamless deployments.


I think it's a bit of an exaggeration to say a "small" company easily does 100 deployments a day.


Not necessarily. Some companies prefer to have a "push to master -> auto deploy" workstyle.


We’ve been deploying software like this for a long ass time before kubernetes.

There’s shitloads of solutions.

It’s like minutes of clicking in a ui of any cloud provider to do any of that. So doing it multiple times is a non issue.

Or automate it with like 30 lines of bash. Or chef. Or puppet. Or salt. Or ansible. Or terraform. Or or or or or.

Kubernetes brings in a lot of nonsense that isn’t worth the tradeoff for most software.

If you feel it makes your life better, then great!

But there’s way simpler solutions that work for most things


I'm actually not using kubernetes because I find it too complex. But I'm looking for a solution for that problem and I haven't found one, so I was wondering what OP uses.

Sorry, but I don't want to "click in a UI". And it is certainly not something you can just automate with 30 lines of bash. If you can, please elaborate.


> And it is certainly not something you can just automate with 30 lines of bash. If you can, please elaborate.

Maybe not literally 30.. I didn't bother actually writing it. Also bash was just a single example. It's way less terraform code to do the same thing. You just need an ELB backed by an autoscaling group. That's not all that much to setup. That gets you the two loadbalanced servers and zero downtime deploys. When you want to deploy, you just create a new scaling group and launch configuration and attach to the ELB and ramp down the old one.. Easy peasy. For the secrets, you need at least KMS and maybe secret manager if you're feeling fancy.. That's not much to setup. I know for sure AWS and azure provide nice CLIs that would let you do this in not that many commands. or just use terraform

Personally if I really cared about multi cloud support, I'd go terraform (or whatever it's called now).


> You just need an ELB backed by an autoscaling group

Sure, and then you can neither 1.) test your setup locally nor 2.) easily move to another cloud provider. So that doesn't really fit what I asked.

If they answer is "there is nothing, just accept the vendor lock-in" then fine, but please don't reply with "30 lines of bash" and make me have expectations. :-(


A script that installs some dependencies on an Ubuntu vm. A script that rsyncs the build artifact to the machine. The script can drain connections and restart the service using the new build, then onto the next VM. The cloud load balancer points at those VMs and has a health check. It's very simple. Nothing fancy.

Our small company uses this setup. We migrated from GCP to AWS when our free GCP credits from YC ran out and then we used our free AWS credits. That migration took me about a day of rejiggering scripts and another of stumbling around in the horrible AWS UI and API. Still seems far, far easier than paying the kubernetes tax.


I guess the cloud load balancer is the most custom part. Do you use the ALB from AWS?


For something this simple, multi-cloud seems almost irrelevant to the complexity. If I'm understanding your requirements right, a deployment consists of two instances and a load balancer (which could be another instance or something cloud-specific). Does this really need to have fancy orchestration to launch everything? It could be done by literally clicking the UI to create the instances on a cloud and by literally running three programs to deploy locally.


Serverless containers.

Effectively using Google and Azure managed K8s. (Full GKE > GKE Autopilot > Google Cloud Run). The same containers will run locally, in Azure, or AWS.

It's fantastic for projects big and small. The free monthly grant makes it perfect for weekend projects.


0 downtime. Jesus Christ. Nginx and HAProxy solved this shit decades ago. You can drop out a server or group. Deploy it. Add it back in. With a single telnet command. You don’t need junk containers to solve things like “0 downtime deployments”. That was a solved problem.


Calm down my friend!

You are not wrong, but that only covers a part of what I was asking. How about the rest? How do you actually bring your services to production? I'm curious.

And, PS, I don't use k8s. Just saying.


Cloud Run. Did you read the article?

Migrating to another cloud should be quite easy. There are many PaaS solutions. The hard parts will be things like migrating the data, making sure there's no downtime AND no drift/diff in the underlying data when some clients write to Cloud-A and some write to Cloud-B, etc. But k8s does not fix these problems, so...


Came here to say the same thing: PaaS. Intriguing that none of the other 12 sibling comments mention this… each in their bubble I guess (including me). We use Azure App Service at my day job and it just works. Not multi-cloud obviously, but the other stuff: zero downtime deploys, scale-out with load balancing… and not having to handle OS updates etc. And containers are optional, you can just drop your binaries and it runs.


The attraction of this stuff is mostly the ability to keep your infrastructure configuration as code. However, I have previously checked in my systemd config files for projects and set up a script to pull them on new systems.

It's not clear that docker-compose or even kubernetes* is that much more complicated if you are only running 3 things.

* if you are an experienced user


Having done both: running a small Kubernetes cluster is simpler than managing a bunch of systemd files.


Yeah this is my impression as well which makes me not understand the k8s hate.


The complexity of k8s comes the moment you need to hold state of some kind. Now instead of one systemd entry, we have to worry about persistent volume claims and other such nonsense. When you are doing things that are completely stateless, it's simpler than systemd.


If you need to care about state with systemd you still have the "nonsense" of persistent volume claims; they are just something you keep in notes somewhere, in my experience usually in the heads of the sysadmins, or an Excel sheet, or a text file that tries to track which server has what data connected how.


Understand that in the hypothetical system we are discussing, there are something like 1-2 servers. In that case the "volume claim" is just "it's a file on the obvious filesystem" and does not actually need to be spelled out the way you need to spell it out in k8s. The file path you give in environment variables is where the most up-to-date version of the volume claim is. And that file is free to expand to hundreds of GB without bothering you.


Things get iffier when you start doing things like running multiple instances of something (maybe you're spinning up two test environments for your developers), or suddenly you grew a bit, no longer fit on the server, and start migrating around.

The complexity of PVCs in my experience isn't really that big compared to this, possibly lower, and I did stuff both ways.


As an industry, we spent so much time sharpening our saw that we nearly forgot to cut down the tree.


Kubernetes, as an industry standard that a lot of people complain about is just a sitting duck waiting to be disrupted.

Anybody who doesn't have the money, time or engineering resources will jump on whatever appears as a decent alternative.

My intuition is that the alternative already exists but I can't see it...

A bit like Spring emerged as an alternative to J2EE or what HTMX is to React & co.

Is it k3s or something more radical?

Is it on a Chinese GitHub?


I wish Docker Swarm would get more attention. It could be the perfect Kubernetes lightweight alternative. Instead it seems like it could get deprecated any day now.


ZIRP is over.


Start-ups that don't need to scale will quickly go away, because how else are you going to make a profit?

How have you been going since 2005 and still don't understand the economics of software?


CPUs are ~300x more powerful and storage offers ~10,000x more IOPS than 2005 hardware. More efficient server code exists today. You can scale very far on one server. If you were bootstrapping a startup, you could probably plan to use a pair of gaming PCs until at least the first 1-10M users.


10 million users on a pair of gaming PCs is ridiculous. What's your product, a website that tells the current time?


How many requests do you expect users to actually make? Especially if you're serving a B2B market; not everything is centered around addiction/"engagement". My 8 year old PC can do over 10k page requests/second for a reddit or myspace clone (without getting into caching). A modern high end gaming PC should be around 10x more capable (in terms of both CPU and storage IOPS). The limit in terms of needing to upgrade to "unusual" hardware for a PC would likely be the NIC. Networking is one place where typical consumer gear is stuck in 2005.
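Rough back-of-envelope using the 10k requests/second figure above (the 50 pageviews per user per day is an assumption, not a measurement):

  # Back-of-envelope: how many daily users does 10k requests/second cover?
  peak_rps = 10_000
  seconds_per_day = 86_400
  requests_per_day = peak_rps * seconds_per_day          # 864,000,000
  pageviews_per_user_per_day = 50                        # assumed, not measured
  users_supported = requests_per_day // pageviews_per_user_per_day
  print(f"{users_supported:,} users/day")                # 17,280,000

Real traffic is bursty rather than flat, so knock an order of magnitude off and you are still serving millions of daily users from one old desktop.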

Webapps might make it hard to tell, but a modern computer (or even an old computer like mine) is mindbogglingly fast.


Just to make it clear: There are a million use cases that don't involve scaling fast.

For example B2B businesses where you have very few but extremely high value customers for specialized use cases.

Another one is building bulky hardware. Your software infrastructure does not need to grow any faster than your shop floor is building it.

Whether you want to call that a "startup" is up for debate (and mostly semantics if you ask me), but at one point they were all zero-employee companies and needed to survive their first 5 years.

In general you won't find their products on the app store.


It's disappointing to see how tone deaf some users like yourself are. Such an immature way to speak.


There is no money to be made from individual users. All of the money comes from companies building something on top of the LLMs, and those of us building startups on top of LLMs are very much aware of the differences between the LLMs. And, to the point made in the article, it is trivially easy for us to switch from one LLM to another, so the LLMs don't have much of a moat and therefore they cannot charge much money.


Probably true in the long run, but at the moment OpenAI is making about 90% of their revenue from ChatGPT subscriptions.


I agree that there must have been earlier writing, likely written on wood. Early systems could have evolved from markings on trees, like we still use on the Appalachian Trail, and other trails. Warnings for bears or tigers, symbols for different tribes on different paths. If you've hiked much then you're aware that even experienced woodsmen can get lost as the season changes and a valley changes, or after a hard storm washes away evidence of a trail. Children, in particular, would have been at risk, but would have almost certainly needed to do work over distances, in particular fetching water, which is something that even today children as young as 5 are asked to do. Notches on trees would have been a likely starting point for a system of symbols to communicate.

When I was much younger I used to work as a hike leader for a summer camp in Virginia. We would take a small group of teenagers out for 7-day hikes, during which we could cover something between 70 and 90 miles (112 to 145 kilometers). At one time I knew that stretch of trail so well I thought I could walk it blindfolded. And yet, I only knew it in the summer. One year I went in the fall and I was astonished how different it was. I was helped by the markings on the trees. (This was before cell phones and GPS.)


Exactly - there's probably a fluent transition between symbols and painting and writing and then alphabetic writing.

Territorial animals that we are, I'd add "here starts the territory of the Saber-Toothed Tiger Clan" signs to path markings as likely candidates for earliest symbolic communication.

Nice to see that the earliest examples of writing are still somewhat recognizable (as opposed to modern alphabets) - see https://en.wikipedia.org/wiki/History_of_writing - a hand, a foot, a goat or sheep.

Fun thing is, with modern technology we have regressed (advanced?) to a massive use of pictograms - a modern smartphone-wielding human, in addition to the alphabet, knows at least a few hundred or even thousands of pictograms ¯\_(ツ)_/¯


> there's probably a fluent transition between symbols and painting and writing and then alphabetic writing

I'm with you until we get to alphabetic writing, which has (to our knowledge) only been invented once. To get from other writing systems to an alphabet requires a few conceptual leaps which are much more challenging and, I would suggest, not fluent.

If it were a smooth path, we ought to have seen alphabetic scripts arise independently multiple times (as we have other forms of writing).


Not sure, but I think Hangul counts as a second invention of alphabetic writing.

If you count syllabic writing systems (which are not technically alphabetic, but are more so than Chinese, or Mayan or Egyptian hieroglyphics), there are more: Japanese hiragana and katakana, Cherokee syllabics, Pahawh Hmong, Vai (West Africa), and Linear B (and presumably Linear A).

There's also Thaana, the script used for Maldivian, which uses some Arabic script symbols, as well as Indic digits. So while it's semi-alphabetic (partly abugida), and it's derived from existing writing systems, it uses the borrowed symbols in unique ways.

There are other syllabic writing systems as well, like Inuktitut and Cree, but those were created by missionaries familiar with other writing systems.


> Not sure, but I think Hangul counts as a second invention of alphabetic writing.

It is my understanding that Hangul is believed to have been influenced by other alphabetic writing (e.g. Phagspa) which themselves descended from the original alphabet. Though it was a distinct creation, the core alphabetic idea was not independently discovered.

> If you count syllabic writing systems (which are not technically alphabetic, but are more so than Chinese, or Mayan or Egyptian hieroglyphics), there are more: Japanese hiragana and katakana, Cherokee syllabics, Pahawh Hmong, Vai (West Africa), and Linear B (and presumably Linear A).

Syllabic writing systems are significantly less powerful than the alphabet (hence why they have generally been superseded by alphabetic ones).

They have been invented multiple times, so you can argue the smooth slope goes up to syllabic writing, sure. But only once has that led to an alphabet.

> There's also Thaana

I hadn't heard of this, but Wikipedia seems to suggest it's descended from Phoenician like everything else (although it has made the step from abjad -> alphabet).


Alphabets may only have been invented once, but writing systems that have a (roughly, it's never perfect) 1:1 correspondence with the sounds of the language have been invented several times independently, e.g. in syllabaries (Japanese Kana are derived from Kanji) and abugidas. I would suggest that that conceptual leap is a much bigger one than the one of treating consonants and vowels as independent.


Syllabaries have been invented multiple times independently and an alphabet only once, which to me would suggest the alphabetic step is the harder one to make.

Why would you suggest the opposite? I'm a complete layperson in this area, so I understand my view might be quite limited.


The alternative possibility is that alphabets lend themselves much more naturally to adaptation for other languages, and so, once invented, they spread extremely fast - faster than it would take for another one to appear naturally.


Yes, that's a nice point - this adds censoring to our "data" on other writing systems. My intuition is that even if you accounted for exposure to an existing alphabet, the time-to-develop alphabet would still be much longer than for syllabaries or other writing systems, but that's a guess.


I'm so old that we didn't even have Emojis, not even letters yet, and we had to communicate with punctuation alone! ;)


You had punctuation? We had to make do with empty spaces and silence! I once read an entire poem just using silence!


Dan Morena, CTO at Upright.com, made the point that every startup was unique and therefore every startup had to find out what was best for it, while ignoring whatever was considered "best practice." I wrote what he told me here:

https://respectfulleadership.substack.com/p/dan-morena-is-a-...

My summary of his idea:

No army has ever conquered a country. An army conquers this muddy ditch over here, that open wheat field over there and then the adjoining farm buildings. It conquers that copse of lush oak trees next to the large outcropping of granite rocks. An army seizes that grassy hill top, it digs in on the west side of this particular fast flowing river, it gains control over the 12 story gray and red brick downtown office building, fighting room to room. If you are watching from a great distance, you might think that an army has conquered a country, but if you listen to the people who are involved in the struggle, then you are aware how much "a country" is an abstraction. The real work is made up of specifics: buildings, roads, trees, ditches, rivers, bushes, rocks, fields, houses. When a person talks in abstractions, it only shows how little they know. The people who have meaningful information talk about specifics.

Likewise, no one builds a startup. Instead, you build your startup, and your startup is completely unique, and possesses features that no other startup will ever have. Your success will depend on adapting to those attributes that make it unique.


  > No army has ever conquered a country
Napoleon and his army would like to have a word with you…

I get the analogy, but I think it can be made a lot better, which would reduce the number of people who dismiss it because they got lost where the wording doesn't make sense. I'm pretty confident most would agree that country A conquered country B if country B was nothing but fire and rubble. It's pretty common usage actually. Also, there are plenty of examples of countries ruled by militaries. Even the US president is the head of the military. As for army, it's fairly synonymous with military, only really diverging in recent usage.

Besides that, the Army Corps of Engineers is well known to build bridges, roads, housing, and all sorts of things. But on the topic of corps, that's part of the hierarchy. For yours, a battalion, regiment, company, or platoon may work much better. A platoon or squad might take control of a building. A company might control a hill or river. But it takes a whole army to conquer a country, because it is all these groups working together; even if often disconnected and not in unison, even with infighting and internal conflicts, they rally around the same end goals.

But I'm also not sure this fully aligns with what you say. It's true that the naive only talk at abstract levels, but it's common for experts too. But experts almost always leak specifics in, because the abstraction is derived from a nuanced understanding. And we need to talk in both abstractions and in details. The necessity for abstraction only grows, but so does the whole pie.

https://en.wikipedia.org/wiki/Military_organization


It's a cute analogy, but like all analogies it breaks after inspection. One might try and salvage it by observing that military "best practice" in the field and Best Practice at HQ need not be, and commonly are not, the same, either for reasons of scope or expediency. Moreover, lower case "practice" tends to win more, more quickly. Eg guerillas tend to win battles quickly against hidebound formal armies.

For a startup, winning "battles, not wars," is what you need, because you have finite resources and have an exit in mind before you burn through them. For a large enterprise, "winning wars not battles" is important because you have big targets on your back (regulators, stock market, litigation).

One might paraphrase the whole shooting match with the ever-pithy statement that premature optimization is the root of all evil.


The US president, a civilian, is in command of the US military. This is, in fact, the inverse of a country being run by its military.


  >> Also, there’s plenty of examples of countries ruled by militaries. Even the US president is the head of the military
Maybe I should have reversed the order of these two. I didn't intend to use the US as an example of a country ruled by a military, but rather that the military is integral and connected directly to the top.


Also true in the UK. Even in a war the UK armed forces are ultimately tasked by and report to politicians.


It's true everywhere except for military dictatorships.


> I’m pretty confident most would agree that country A conquered country B if country B was nothing but fire and rubble.

I think we can all agree that if that is the case, you’ve in fact conquered nothing.

Edit: Since we say opposite things, maybe we wouldn’t agree.


So.. how would you make it a lot better?


> If you are watching from a great distance, you might think that an army has conquered a country, but if you listen to the people who are involved in the struggle, then you are aware how much "a country" is an abstraction.

Most things of any value are abstractions. You take a country by persuading everyone you've taken a country; the implementation details of that argument might involve some grassy hill tops, some fields and farm buildings, but it's absolutely not the case that an army needs to control every field and every grassy hill top that makes up "a country" in order to take it. The abstraction is different to the sum of its specific parts.

If you try to invade a country by invading every concrete bit of it, you'll either fail to take it or have nothing of value at the end (i.e fail in your objective). The only reason it has ever been useful or even possible to invade countries is because countries are abstractions and it's the abstraction that is important.

> The real work is made up of specifics: buildings, roads, trees, ditches, rivers, bushes, rocks, fields, houses.

Specifics are important - failing to execute on specifics dooms any steps you might make to help achieve your objective, but if all you see is specifics you won't be able to come up with a coherent objective or choose a path that would stand a chance of getting you there.


The army that is conquering is carrying best practice weapons, wearing best practice boots and fatigues, driving best practice tanks, trucks, etc.

They're following best practices in aiming, shooting, walking, communicating, hiring (mercs), hiding, etc...

The people that are in the weeds are just doing the most simple things for their personal situation as they're taking over that granite rock or "copse of lush oak trees".

It's easy to use a lot of words to pretend your point has meaning, but often, like KH - it doesn't.


This is frequently not true. There are examples all through history of weaker and poorer armies defeating larger ones: from the Zulus, to the American Revolution, to the Great Emu War. Surely the birds were not more advanced than men armed with machine guns. But it's only when the smaller forces can take advantage and leverage what they have better than others. It's best practices, but what's best is not universal; it's best for whom, best for when, best for under what circumstances.


That doesn't defeat my point- is the smaller/poorer army using best practices?

When all things are the same, the army with more will win.

When all things are not the same, there are little bonuses that can cause the smaller/poorer, malnourished army to win against those with machine guns. Often it's just knowing the territory. Again though, these people are individually making decisions. There isn't some massively smart borg ball sending individual orders to shoot 3 inches to the left to each drone.


  > That doesn't defeat my point- is the smaller/poorer army using best practices?
I don't agree, but neither do I disagree. But I do think it is ambiguous enough that it is not using best practices to illustrate the point you intend.

  > malnourished army to win against those with machine guns
With my example I meant literal birds

https://en.wikipedia.org/wiki/Emu_War


The Zulus won a pitched battle or two, but lost the war.


Sure, they (eventually) lost against the British, but they won against many of the southern African tribes before.


Occasionally something novel and innovative beats the best practice. In that case it usually gradually gets adopted as best practice. More often it doesn't, and falls by the wayside.


> It’s best practices, but what’s best is not universal, it’s best for who, best for when, best for under what circumstances.

I’m pretty sure building an organization on a free for all principle is anathema to the idea of an organization.


That's a straw man. The actual argument is about the danger of applying "best practices" uncritically, not about doing away with leadership.

"Do X because it's best practice" is very different than "do X because you were commanded by your rightful authority to do so."


Often not true. Often they are just "good enough" weapons, etc.


Wow what a fantastic little article. Thanks for writing and sharing that.


my irony detector is going off, but it's feeble. do I need a better irony detector?


I was being genuine.


More people should be.


I think the word you're looking for is "nation", not "country". A country is the land area and would be conquered in that example, while a nation is the more abstract entity made of the people. It's why it makes sense to talk about countries after the government falls, or nations without a country.


Likewise, people do business with people, not with companies. Assert that “society” is merely an abstraction invoked for political gain to become an individualist.


> people do business with people, not with companies

Many of my interactions are with electronic systems deployed by companies or the state. It's rare that I deal with an actual person a lot of the time (which is sad, but that's another story).


Seriously? From whom do I buy a computer or a car or a refrigerator?


"Hierarchy is so baked in to every other company, organization, and education system that people just don't know how to operate absent it."

That's partly because it works. It is proven. It is well-known. We have excellent tooling for it. There is absolutely no need to change it. Changing does not bring any advantages. You cannot point to any company and say "That company was an outstanding success because they were flat and rejected all hierarchy." If you run a business then do you want to make a political point or do you want to make a profit?

"There are examples of stable large companies with flat org charts that do work, they're just not the hypergrowth scale-ups."

So why waste any time with them? What is the advantage? What is your real motivation? You seem more focused on some political agenda than you are focused on making a profit.


> Changing does not bring any advantages.

Sometimes, progress requires fundamental change.

> So why waste any time with them?

Can you not envision a business model that is not a high-growth startup?

> What is your real motivation? You seem more focused on some political agenda than you are focused on making a profit.

What agenda do you think I'm focused on?


“progress requires fundamental change”

Progress towards what? Do you have some political goal? There is zero evidence that flatness increases profits. There is no company that we can point to and say “that company was successful because it was flat” (with the obvious exception of franchise models).


> Do you have some political goal?

Again, what political goal or agenda do you think I’m pushing?


Progress towards what? Do you have some political goal? What you've written does not make sense.


This explains everything:

"It’s not practical. Hybrid work may be technically the best route, but it’s also complicated to oversee."

Who cares if workers are productive, when the leadership is clearly less productive? And the leadership's time is extremely valuable. If remote work makes workers more productive, while the leadership is less productive, then remote work is bad for a company, full stop, no other conversation is needed.


If leadership cannot take advantage of, let alone adjust to remote work in 2024, they're not good leaders.


If workers cannot do what the leadership needs, then they are bad workers, they need to be fired.


How would that be possible? Novelty is a known weakness of the LLMs and ideally the only things published in peer-reviewed journals are novel.


Detecting images and data that's reused in different places has nothing to do with novelty.


Wouldn’t it be cool if people got credit for reproducing other people’s work instead of only novel things. It’s like having someone on your team that loves maintaining but not feature building.


Completely irrelevant. Humans cannot survive in the conditions of the Cambrian. Merely going back to the Cambrian is enough to ensure our extinction. No one cares that the CO2 levels were higher then. What matters is that we will all be dead if we go anywhere near those levels.


"The way we discover interesting websites needs innovating, why not let anyone contribute to any webpage?"

I remember there was a website that did this in 1999, using frames to allow people to post comments on any website. The courts shot this down as an illegal infringement of trademark. Does anyone remember the name of that website that did this?

