
Some day I would like a powwow with all you hackers about whether 99% of apps need more than a $5 droplet from Digital Ocean, set up the old-fashioned way, LAMP --- though feel free to switch out the letters: BSD instead of Linux, Nginx instead of Apache, PostgreSQL instead of MySQL, Ruby or Python instead of PHP.

I manage dozens of apps for thousands of users. The apps are all on one server, its load average around 0.1. I know, it isn't web-scale. Okay, how about Hacker News? It runs on one server. Moore's Law reduced most of our impressive workloads to a golf ball in a football field years ago.

I understand these companies needing many, many servers: Google, Facebook, Uber, and medium companies like Basecamp. But to the rest I want to ask, what's the load average on the Kubernetes cluster for your Web 2.0 app? If it's high, is it because you are getting 100,000 requests per second, or is it the frameworks you cargo-culted in? What would the load average be if you just wrote a LAMP app?

EDIT: Okay, a floating IP and two servers.




As somebody who has his own colocated server (and has since Bubble 1.0), I definitely agree that the old-fashioned way still works just fine.

On the other hand, I've been building a home Kubernetes cluster to check out the new hotness. And although I don't think Kubernetes provides huge benefits to small-scale operators, I would still probably recommend that newbs look at some container orchestration approach instead of investing in learning old-school techniques.

The problem for me with the old big-server-many-apps approach is the way it becomes hard to manage. 5 years on, I know that I did a bunch of things for a bunch of reasons, but I don't really remember what or why. It mixes intention with execution in a way that gets muddled over time. Moving to a new server or OS is more archaeology than engineering.

The rise of virtual servers and tools like Chef and Puppet provided some ways to manage that complexity. But "virtual server" is like "horseless carriage". The term itself indicates that some transition is happening, but that we don't really understand it yet.

I believe containers are at least the next step in that direction. Done well, I think containers are a much cleaner way of separating intent from implementation than older approaches. Something like Kubernetes strongly encourages patterns that make scaling easier, sure. But even if the scaling never happens, it makes people better prepared for operational issues that certainly will happen. Migrations, upgrades, hardware failures, transfers of control.


"5 years on, I know that I did a bunch of things for a bunch of reasons, but I don't really remember what or why."

For my home servers, I've settled on "a default install of distro $X and an idempotent shell script that sets everything up for me". You have to use discipline to do everything in the shell script rather than simply fix the problem, but if you can do that, you end up with documentation as to how your server differs from a default install, and the ability to recover it again reasonably well if you store it in git somewhere or something.

It's only "reasonably" well because when you have one server running for years at a time, your script decays more quickly than you are going to fix it. If your server goes down three years later, and you decide to go with the latest $X instead of whatever you used last time, then your script will be out of date and need to be updated. It isn't nirvana. But it's the best bang for the buck when you're in a situation where chef/ansible/puppet/etc. is massive, massive overkill.

If you're already an expert with Docker, go nuts, but IMHO it's a bit silly to run a server just to run two Docker containers, just so you can say you're running Docker or something. Plus, no matter how slick Docker has gotten, it's still more of a pain than just setting a few things up.


From my perspective, a Dockerfile already is an idempotent shell script that sets everything up for me. With the advantage that I can easily write and run tests for it that verify that the app comes up just fine.

The main struggle for me there is existing apps that weren't made with Docker in mind. There, using the OS install tools can be easier. But I think that's changing. The Docker Postgres images, for example, let you configure key things via simple environment variables: https://hub.docker.com/_/postgres/
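
For example, standing one up that way is roughly this (names and credentials are obviously placeholders):

    docker run -d --name mydb \
        -e POSTGRES_USER=app \
        -e POSTGRES_PASSWORD=change-me \
        -e POSTGRES_DB=app \
        -v pgdata:/var/lib/postgresql/data \
        -p 5432:5432 \
        postgres:11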

So I expect that we'll continue to see more and more apps provide Dockerized versions, gradually chipping away at the advantage built up over the years by OS packaging.


Huh, I haven't found Docker to be a pain at all now that I vaguely have an idea of what I'm doing.

A Dockerfile takes maybe ten minutes to write, and is really documentation more than anything.

That with a tmuxp yml file to set up a tmux session for developing can pretty much outline both how the product is released and how it's developed for anyone coming into the project.

Pretty neat, super easy, very cool.

I'm not really doing docker to say I'm doing docker but because once I realized how easy it is to containerize things it's not much more than a few steps to have a development environment as well as a production environment even for my crappy little website.


> That with a tmuxp yml file to set up a tmux session for developing can pretty much outline both how the product is released and how it's developed for anyone coming into the project.

Would you mind sharing more about how your team uses tmuxp? Sounds like an interesting alternative to a README for shared configuration etc.


Hey pcl, so I discovered tmuxp relatively recently and am between jobs at the moment, but for my personal projects I can look at my yaml file and immediately see that there's a gulp dev command run in the front-end directory, a sync bash script, and a gmake run that starts the server.

It's nothing groundbreaking, but it's nice to have it all laid out and it's possible if I got to the point where someone else was working on the same project they'd find it useful to know these three commands without having to wonder why their static assets weren't updating on change, or why make didn't work.

I think wherever I end up I'll likely start creating tmuxp files and possibly docker files for any repos I work in, mainly so it's super easy for me to hop on a terminal, type one command, and have a whole environment to work in. It is pretty neat to have a server start, a watch, a sync, and two windows for vim for front and back end.
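
To give a rough idea, the yaml for that is something like this (commands and paths simplified, so treat it as a sketch):

    session_name: myproject
    windows:
      - window_name: run
        layout: even-vertical
        panes:
          - cd frontend && gulp dev   # watch/rebuild static assets
          - ./sync.sh                 # the sync script
          - gmake run                 # start the server
      - window_name: edit
        panes:
          - cd frontend && vim
          - cd backend && vim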


How can Ansible be "massive overkill"? It's literally an interpreter of scripts, just like sh or bash. It doesn't require daemons or other infrastructure, just connects over SSH and runs the script.
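
A complete "playbook" can be a single small file plus one command (the hostname and package here are just examples):

    # site.yml -- plain YAML, pushed over SSH, no agents or daemons
    - hosts: all
      become: yes
      tasks:
        - name: install nginx
          apt:
            name: nginx
            state: present
        - name: make sure it's running
          service:
            name: nginx
            state: started
            enabled: yes

    # run it:
    #   ansible-playbook -i 'myserver.example.com,' site.yml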


docker image definitions are idempotent as a matter of principle. Creating an idempotent shell script is non-trivial IMO - e.g., what return code does package manager XY return when something is already installed, etc.?


Really? What do your dockerfiles look like? Most of the ones I've seen in the wild look something like:

    FROM debian:jessie
    RUN apt-get install -y somepkg
    ...
What happens when the "debian:jessie" image changes (as it does weekly on dockerhub)? What about "somepkg" in debian's repositories?

The answer is that `docker build` will produce a different image. In fact, very few docker image builds I've seen are idempotent. They're not declarative, they're not reproducible; merely the output (the docker image itself) can be run reproducibly. The actual image definition, not so much.

Creating idempotent shell scripts is no harder than creating an idempotent dockerfile. Both are the same problem. A dockerfile is almost entirely the same as a shell script; it copies files around, it runs commands in an environment, and that's all.


That's why you version-pin and vet new updates before you let them in.


Okay, so walk me through how to do this in a dockerfile?

My first line becomes:

    FROM debian@sha256:14e15b63bf3c26dac4f6e782dbb4c9877fb88d7d5978d202cb64065b1e01a88b
Okay, that's easy.

Now, what about older versions of packages in debian's apt repos that have been deleted? How do I get those?

I guess I run my own apt mirror, which I update in lock-step with my Dockerfile, and thus don't let the Dockerfile reach out to the network.

Is this any different from what you do in a shell script on a server? You use btrfs/zfs/whatever to snapshot the initial version and back it up, you run an apt repository so you can pin package versions, you snapshot before and after updating...

I don't see how a docker image definition makes any of this easy. There's not even a flag to disallow network access during "docker build".

The claim I'm responding to is "docker image definitions are idempotent as a matter of principle".

The large majority of dockerfiles I've seen are not idempotent. Yes, it's possible to make them idempotent, but they do not make it easy.


> Now, what about older versions of packages in debian's apt repos that have been deleted? How do I get those?

You can version-pin your apt packages if you need to; I personally prefer taking the minor patches so I get my security updates, and my build tool will catch it if there's a bug affecting my software.
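
e.g. something like this in the Dockerfile, with made-up version strings:

    RUN apt-get update && apt-get install -y \
        somepkg=1.2.3-1 \
        libsomething0=0.9.8-2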

> Is this any different from what you do in a shell script on a server?

Yes, because I can take that built image and deploy it to any host, and all my developers get to use the same one in their development. But hey, if you like building with shell, you could try out Packer and run that shell script to create an image, which can safely be used on any host that supports Docker or Kubernetes.

> I don't see how a docker image definition makes any of this easy. There's not even a flag to disallow network access during "docker build".

Easier depends on your goals and perspective. For me, it's easier to write a Dockerfile that installs what I need to run a service. Bash doesn't have that; it's just a script that needs an environment to run in. Where do you run bash? Is it locally, on your OS, with your packages, your settings, and your needs? What happens when I run that bash script on a different OS? Who's going to debug that? Are you going to track your changes in version control? How do you update the other servers/users who use your script? I have other, more fun things to do than worry about that.


What you describe is still not an idempotent build process, which is all I'm arguing against.

I'm happy to admit docker images are more portable than a shell script's output.

You're arguing against something I'm not saying. I'm talking about how easy it is to make script/docker-image-definitions idempotent, not about their usability, not about their distribution.

When I wrote "is this any different from what you do", I meant "what you do to make it idempotent", not is the resulting artifact and usability any different.

Same with "any of this easy", "any of this" was "idempotency", not anything else.

Everything you are arguing against is a strawman based on misreading the intent of my comment, I think.


A Dockerfile is an input which produces an image as an output. That image should not suffer from the bit rot examples you gave (e.g. "what about older versions of packages in debian's apt repos that have been deleted?")

However, when security patches are released, your image obviously will not contain them.


I am not arguing that the docker image output mutates. It is a good artifact that can be run reproducibly.

The comment I am originally replying to is 'docker image definitions are idempotent'. Note, 'image definitions', not 'images'.

My point has nothing to do with the image, but with the image definition itself.


Understood, just trying to point out there is still a flaw with the image (in that updates are actually important!)

FWIW at my work, we don't use apt for installing packages. We compile the packages as a part of the Docker build. This generates mostly idempotent builds.


This. Kubernetes (or whatever other container scheduler) might feel like overkill, but if all they do is force you to adopt a container-centric / 12-factor way of building your applications, it was worth trying them. And once you've adopted that workflow it's a no-brainer to go from a single node to a cluster which will dynamically allocate the workloads it runs.

Running a small container cluster at work has even changed how I setup single-host projects in my spare time: I will build everything into a container, bind-mount whatever it might need, create a simple systemd unit that just runs / rms the docker container on start and stop. Bliss.
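
The unit itself is nothing fancy; roughly this shape (image name, ports, and mounts are placeholders):

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=myapp container
    Requires=docker.service
    After=docker.service

    [Service]
    ExecStartPre=-/usr/bin/docker rm -f myapp
    ExecStart=/usr/bin/docker run --rm --name myapp \
        -v /srv/myapp:/data \
        -p 8080:8080 \
        registry.example.com/myapp:latest
    ExecStop=/usr/bin/docker stop myapp
    Restart=always

    [Install]
    WantedBy=multi-user.target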


I've found it just pushes the complexity elsewhere, or opens up (or silences) performance or security problems you wouldn't have had if you'd stuck to the old-fashioned way of doing things.

Keep checklists and script what you can. I find it helpful to stay close to the edge whenever I can, so that if I hit a snag the developers who made the change still have it fresh in their minds. It really doesn't take that much time to keep things updated if you don't go hog-wild on different libraries and if your stuff is reasonably well tested.

As an aside, I've been thinking that there should be a stack that is designed and built for the sole purpose of staying stable over decades. Something with a bunch of stripped down technology. As robust and stable as possible. Only allow security updates. Only allow certain character sets. Something built on a language that is just stupid simple and secure. A Swift or Rust subset maybe? Lua? Lisp without the macro insanity?

If the future needs some sort of tech that we didn't anticipate (say, something to handle quantum computers breaking cryptography) then the stack should be setup in such a way to decouple the varying layers with minimal work.

I liken it to building codes. We should have pre-setup combinations of technologies that are stable, simple, and combinable. Sure, go outside them for skyscrapers, but for day-to-day buildings things are getting too complicated.


How do you use checklists in your workflows - are they part of your repository alongside the code, in some documentation system, printed out?

I'm most of the way through The Checklist Manifesto and I'd love some insight on how software engineers incorporate them into their work.


Funny. After keeping them in files and emails and docs for years I finally decided to systematize it by writing a CLI that I plan to open source one day. If you want, send me an email and we can talk about it further.


I'm curious how docker will help with the "5 years on" problem. I'd be willing to place money saying your docker setup for this week will have trouble running "as is" next month. Especially true for the vast majority of one-offs out there.


For me it separates application environment issues from the machine issues. As an example, I have a daemon that runs my ambient home lighting: https://github.com/wpietri/sunrise

It has been running in a Docker container for a little over 4 years. Moving that from one machine to another was trivial. I didn't have to worry about language runtime or libraries or config files tucked away somewhere in /etc. I just told Kubernetes where to pull the image from and away it went.

That still leaves me with various problems building the app, as I needed to do when I made some configuration changes. But even there Docker was some help. The addition of multi-stage builds [1] means I can describe the build environment and the run environment in one file, giving me an easy way to have a repeatable build process.
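
Not my actual file, but the shape of a multi-stage build is just this (a toy example using a static C binary):

    # build stage: compiler and build deps live only here
    FROM gcc:8 AS build
    COPY hello.c /src/hello.c
    RUN gcc -static -o /hello /src/hello.c

    # run stage: the final image is only the compiled artifact
    FROM scratch
    COPY --from=build /hello /hello
    CMD ["/hello"]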

Over time, my goal is to set up all my home services similarly, so that when I decide to replace my current home server, there won't be a multi-day festival of "what the hell did I do in 2004 to get Plex working?" I'll bring up the new one, add it as a Kubernetes worker, and then kill the old server. I'm hoping it will also make me braver about upgrading server OS versions, as right now I'm pretty slack about that at home.

[1] https://docs.docker.com/v17.09/engine/userguide/eng-image/mu...


Docker-compose is a pretty straightforward infrastructure-as-code tool for casual servers and local-dev. Basically you have a YAML description of your server that you can commit and comment on and pin Docker image versions to and etc. This includes persistence (volumes), networking, dependency management (bringing up the services in the right order), health checking, configuration, environment management, as well as managing the actual services. The only critical thing it's missing is a secrets management solution.
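
A sketch of what that looks like for a typical web-plus-database pair (image names and credentials are placeholders):

    # docker-compose.yml
    version: "3.7"
    services:
      web:
        image: registry.example.com/myapp:1.4.2   # pinned version
        ports:
          - "80:8000"
        environment:
          - DATABASE_URL=postgres://app:change-me@db/app
        depends_on:
          - db
        restart: always
      db:
        image: postgres:11
        environment:
          - POSTGRES_USER=app
          - POSTGRES_PASSWORD=change-me
        volumes:
          - pgdata:/var/lib/postgresql/data
        restart: always
    volumes:
      pgdata: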


Even on a single-node, I think Docker swarm mode is a better choice than docker-compose. Docker swarm mode is integrated in Docker. You just run `docker swarm init` to enable it. It gives you everything docker-compose provides, plus configuration and secret management, and zero-downtime deployments (docker stack deploy).
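
e.g., still on a single node, and with made-up stack/image names:

    docker swarm init
    docker stack deploy -c docker-compose.yml mystack
    docker service ls
    # rolling update of one service to a new image version
    docker service update --image registry.example.com/myapp:1.4.3 mystack_web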


This is already happening. People are using "containers" the way that they used AMIs, which is the way that they used VMs: as a black-box execution environment that magically abstracts all problems. Until something breaks. As soon as you have to upgrade or fix anything inside the container, you're back to the same old set of challenges, which remain unsolved.

But this isn't really why people are using containers. Probably the biggest reason that folks are stuck with this complexity (outside of big server farms) is because containers are being foisted upon them by companies who want to sell software, but don't want to solve deployment issues. Why plan for execution in an uncertain environment when you can just require Docker?


I use Docker for my single server because it (and its ecosystem) offers a straightforward path for:

1. Deployment

2. Distribution (I don't have to build a package for every platform)

3. Supervision

4. Standard logging

5. Configuration management

6. Infrastructure as code

7. Process isolation (not perfect, but I can get some reasonable protection without managing VMs or figuring out how to roll my own isolation and permissioning)

8. Networking

Basically I don't have to be a professional sysadmin but a "mere" engineer (yes, yes, in a perfect world I would have time to learn everything "properly", but for all its faults, Docker lets me build something useful Right Now).

EDIT: For downvoters, I'd really appreciate more elaborate feedback.


Sorry, no idea. Seems like a decent summary of why docker would be interesting to a dev instead of a sysadmin.


I believe those chef and puppet scripts won't help you resurrect a project from 5 years ago. You'll practically have to rewrite all the scripts to get it up and running in whatever new hotness is around in 2023. Package names will have changed, new config systems will be invented, and previous workarounds for bugs will now cause bugs.


You can put everything in containers and still not need much orchestration, though. My personal projects run in dozens of containers, and the "orchestration" consists of a Makefile include pulled into each project that creates a systemd service file based on some variables, and pushes it into the right place. The service files will pull down and set up a suitable Docker container. The full setup for a couple of dozen containers is 40-50 lines of makefile and ~20 lines or so of a template service file.

Of course it won't scale to massive projects, and for work, I occasionally use Kubernetes and other more "serious" orchestration alternatives, but frankly it takes fairly big projects before they start paying for themselves in complexity.

Meanwhile, my docker containers have kept chugging along without needing any maintenance aside from auto-updating security updates for several years.

I do agree with you that Kubernetes may encourage patterns that are useful, though. But really the most essential part is that you can find relatively inexperienced devops people who have picked up some Kubernetes skills. That availability makes up for a lot of pain vs. finding someone experienced enough to wire up a much leaner setup.


When you deploy a new version of a container, how do you avoid downtime? Do you start a new container running the new version, wait for the new container to be ready, switch traffic to the new container, stop traffic to the old container and drain connections, and then stop the old container?


For my home projects it doesn't matter. For work projects, yes. It's an easy thing to automate. Incidentally most of the pain in this is that most load balancers are reverse of what makes most sense: the app servers ought to connect to the load balancer and tell it when it can service more requests, not get things pushed at it.


> most load balancers are reverse of what makes most sense: the app servers ought to connect to the load balancer and tell it when it can service more requests, not get things pushed at it

Reminds me of Mongrel2


Yes, Mongrel2 is an interesting design. So many things get simpler when you invert that relationship.


> Of course it won't scale to massive projects

Most projects aren't massive -- at work, 2 years on, we're still using a single instance of a single node, with the only component that needs to be reliable stored as static files in S3.


Absolutely. Which is one of the reasons I find things like Kubernetes overkill for most setups.


> I've been building a home Kubernetes cluster to check out the new hotness

I tried to do this for the same reason, but all of the writeups seem to stop at "getting a cluster running", but that's not enough to actually run apps since you need a load balancer / ingress, dns, and probably a number of other things (ultimately I was overwhelmed by the number of things I needed but didn't completely understand). I haven't had any luck finding a soup-to-nuts writeup, so if you have any recommendations, I'd love to hear them.


I've heard good things from Kelsey Hightower's https://github.com/kelseyhightower/kubernetes-the-hard-way


Will read! Thanks for the recommendation!


> The problem for me with the old big-server-many-apps approach is the way it becomes hard to manage. 5 years on, I know that I did a bunch of things for a bunch of reasons, but I don't really remember what or why.

I thought this was a solved problem.

I use SaltStack for config management & orchestration on my own machines. (I suggest any config management tool becomes 'worth the effort' once you're managing more than a handful of machines, and/or want to rebuild or spin up new machines with minimal effort and surprises more than once a year.)

Why I do something is described in a comment in the yaml that does the something.

For more nuanced situations, I'll document it in my wiki. (With a yaml comment pointing to same -- I am extremely pessimistic about future-Jedd's ability to remember things.)

If you're running a big-server-many-apps or many-servers-with-their-own-apps, I'd expect the same approach to work equally well.

Though the whole idea of virtual servers & config management (but not necessarily docker or k8s) is that you don't have a bunch of disparate and potentially conflicting apps with potentially conflicting dependencies on a single server.

> But "virtual server" is like "horseless carriage". The term itself indicates that some transition is happening, but that we don't really understand it yet.

That's a challenging assertion to fully unpack. IT's undeniably in a constant state of transition -- and not always thoughtfully directed -- but the problem isn't _'virtual server'_.

The general trend is obviously towards isolation -- but the tooling, performance, scaling, design, and security disparities make the arguments around what level you try to implement your isolation so interesting.


I agree that "virtual server" isn't a problem. Neither was "horseless carriage" or "radio with pictures". All of them were steps forward. But they're transitional states on the way to new paradigms.

When servers were expensive things that had full-time staff, the old ways of installing software made a lot of sense. But as server power got cheaper, they became impractical. A virtual server was at least familiar; slicing big machines up let us turn the clock back to when servers were less powerful. But that didn't really solve the problem, as we now had to do something to manage the explosion in the number of servers, real and virtual. Things like chef and puppet jumped in to solve this, but they are IMHO clumsy; it's all the work of managing a lot of servers the old way, even though it may be a small number of physical boxes.

Containerization says: Forget about installing apps on servers; just wrap the app up with what it needs. Things like Kubernetes take that further, saying: Don't worry about which apps are running on which servers; it'll just work. The impedance mismatch between modern hardware and the 1970s-university-department paradigm that underlies Unix gets solved automatically.

I'm not sure if that's the end state in the paradigm shift. But I'm convinced that the approach to sysadminning I learned in the 1980s is on its way out.


As someone who runs a very successful data business on a simple stack (php, cron, redis, mariadb), I definitely agree. We've avoided the latest trends/tools and just keep humming along while outperforming and outdelivering our competitors.

We're also revenue-funded, so there's no outside VC pushing us to be flashy, but I will definitely admit it makes hiring difficult. Candidates see our stack as boring and bland, which we make up for in comp, but for a lot of people that's not enough.

If you want to run a reliable, simple, and profitable business, keep your tech stack equally simple. If you want to appeal to VCs and have an easy time recruiting, you need to follow the cargo cult, even if it's not technically necessary.


> Candidates see our stack as boring and bland

I would say that there are probably a lot of developers who would be very happy to work on a non-buzzword stack, but the problem is that as a developer, it's extremely hard to know if your tech stack is the result of directed efforts to keep it simple, or if it's the haphazard result of some guy setting it up ten years ago when these technologies were hot.


I would be happy to work on a stack like that, but I can't deny that it seems somewhat career-limiting long term. Especially as I am over 40 now. I will be seen as not keeping up to date.

(Having done a few years of maintenance programming recently, I certainly think I design and build better software than most people; people just create overcomplex monstrosities for what should be simple apps.)


There is nothing "cargo cult" about realizing that PHP is just way more difficult to work with and maintain in any large or long-term project than basically any of the more modern "culty" languages (especially the functional ones, which focus on determinism/reliability/transparency, unlike, say, PHP which last I heard has flagging tests in its very own test suite, and does the same "complexity hiding" (read: brushing tech debt under the rug) that every OOP language with an ORM does.

That said, good on you for running a successful business well using tried-and-true tech, can't knock that!


Backwards compatibility.

It makes the language a bit of a mishmash between things that were popular 10 years ago, and whatever the new hotness is. And errors once made, will never really leave the language.

But the app that I built in php5 10 years ago is still running on 7.2 with a one character change.


Modern PHP's quite a bit better than the utter mess that was 4.0 or even the half-ugly 5.0. They've deprecated the worst of the misfeatures, especially by default. Now if only they'd adopted the HHVM/Hack Collections instead of the terrible arrays...


Hear hear. I lead a team that builds and manages critical emergency services infrastructure.

Our stack is pretty boring, but then it has to be running 99.999% of the time. Rather than wasting time chasing the latest flavour-of-the-month tool or framework we invest our time in plugging any kind of gap that could ever bring our service down.

We don't need people who are only looking to pad their resume with the hottest buzzwords; we look for people who want to build critical services that run all the time, that rarely fail, and that handle failure gracefully when they do.

The number of devops/agile/SaaS style shops I have seen where the product falls over for several hours at a time is astounding, and it can often be attributed to rushed/inexperienced engineering or unnecessary complexity.

Lucky for them it's usually just the shareholders' bottom line that is affected. If the services my team provides don't work, ambulances and fire engines are either not arriving fast enough or not arriving at all.


Good on you mate. I love this approach.

I'll be doing the same thing myself with a few products I'm developing with my brother.

"Keep it simple, keep it stable" is what I like to say.


I work in the same environment, and hiring is the only downside I see. Resume Driven Development and "Dev Sexy" have made it difficult to find developers who are willing to come on board, despite the sanity provided by simplicity & comp.


It will now, but how many people are going to be interested in my LAMP experience 5 or 10 years down the line? While everyone else has been working with the cloud/kubernetes/aws/gcp/serverless technologies.


I have never seen a VC care about what software stack you use. I did have one particularly geeky one asking me for advice on whether he should invest in MariaDB.

Your point about recruiting is spot-on, however. It's not that all candidates necessarily believe in the cargo cult, but they have their own career and employability to consider.


To me the biggest red flag there is the PHP. After developing with typed languages, a dynamic language is honestly a pain. Cron is easy to replace if needed.

Like, I would feel much better if it were Python, Golang, Java, or C#. JavaScript, I feel, is the new PHP. Another issue is what I call COBOL syndrome, where your career future isn't as great. You can still be a shop with an older tech set and a relatively good career future, but it has to be the 'right' old things, unfortunately.

Do you at least use something like Hack to add types?


PHP has had optionally typed function parameters and return values since the late 5.x releases I think (current release is 7.3). Types are checked at runtime and throw TypeErrors if the declared types are violated. They can also be checked ahead of time by IDEs with code inspection such as PhpStorm.

The 7.4 release is adding typed class properties as well[1].

I maintain a ~75k LOC PHP codebase (using the Laravel framework), and we have almost never encountered type issues. The new style of PHP (and Javascript is heading in a similar direction tbh) is to write it almost like it's Java, but with the option to fall back to dynamic "magic" as needed. If you utilize the dynamic elements sparingly and follow widely understood conventions, the productivity-vs.-reliability tradeoff is highly favorable for many applications compared to languages like Java and C#.

P.S. I like Python, but I would argue PHP actually has a better story to tell about types these days. "While Python 3.6 gives you this syntax for declaring types, there’s absolutely nothing in Python itself yet that does anything with these type declarations..." [2]. Declaring types in a dynamic language only to have them ignored at runtime does not inspire much confidence.

1. https://laravel-news.com/php7-typed-properties

2. https://medium.com/@ageitgey/learn-how-to-use-static-type-ch...


> Declaring types in a dynamic language only to have them ignored at runtime does not inspire much confidence.

Funny, I see it the other way; declaring types in a dynamic language yet only having them checked at runtime does not inspire much confidence. With mypy, you actually get static checking, so you're not dependent on your tests hitting the bug.


As an experienced developer I’m finding it harder to find companies who use “boring”, simple and stable solutions. Any chance you’re hiring remotely?


I really think that running a LAMP server for the average beginning developer these days would be just as complicated, maybe more complicated, than running a single deployment on Google Kubernetes Engine. You have to know about package managers and init systems and apache/nginx config files and keep track of security updates for your stack and rotate the logs so the hard drive doesn't fill up. If you already know how do this stuff in your sleep because you've done it for years, then yeah, don't fix what isn't broken. But if you're starting with no background, there's nothing inherently wrong with using a more advanced tool if that tool has good resources to get you started easily.

Just because there's more complexity in the entirety of the stack when running an orchestration system doesn't necessarily mean more complexity for the end user.

Side note - couldn't you make a similar argument about any kind of further abstraction? "Question for all you hackers out there - do you really need HTTP requests with their complicated headers and status codes and keepalive timeouts? I run several apps just sending plain text over TCP sockets and it works fine."


>Side note - couldn't you make a similar argument about any kind of further abstraction? "Question for all you hackers out there - do you really need HTTP requests with their complicated headers and status codes and keepalive timeouts? I run several apps just sending plain text over TCP sockets and it works fine."

No, because the end users already have an HTTP browser. Your analogy doesn't work because switching between k8s and LAMP stacks is invisible to your users, whereas dumping HTTP means you need dedicated clients.


Maybe the end user is someone sending curl requests?


I think from the perspective of getting things deployed, you're probably right. Kubernetes really shines there.

I still think that a LAMP server is better for an average beginning developer because troubleshooting is significantly easier on a stock install. Stock installs of Kubernetes give you very few troubleshooting tools. I've had issues where CoreDNS stops responding, so some pods basically don't have DNS, and even figuring out which pod that traffic was going to is a nightmare.

My devs frequently struggle with things that I would consider basic in Kubernetes, despite having worked with it for a year or so. Things like creating an ingress, a service, and a deployment that all work together are still a struggle, and Kubernetes isn't very helpful when those things don't play nicely together. Just today I had to work through an issue with someone who had created an ingress and service correctly but forgot to open a containerPort, which caused the service to decide there were no valid backends and route all the traffic to the default backend.
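
For reference, the wiring that has to line up looks roughly like this (names are illustrative; the Ingress that points at the Service is omitted). When the Service uses a named targetPort, a forgotten containerPort block leaves it with nothing to match:

    # deployment.yaml (trimmed)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: registry.example.com/myapp:1.0
              ports:
                - name: http          # the part that got forgotten
                  containerPort: 8080
    ---
    # service.yaml -- targetPort refers to the named port declared above
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
        - port: 80
          targetPort: http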

It's probably mostly the network, but the Kubernetes overlay network can make simple troubleshooting very difficult.


I would say that it's still not a fully mature technology. As gaps become known new things emerge to plug those gaps. Istio springs to mind here in terms of making the networking and monitoring side of things easier.

The trend I see is that smaller and smaller teams are becoming capable of bigger and bigger things, and when it comes to smaller apps that don't need it, we're trading complexity known to only a handful or a single individual (Bob's idempotent script for distro $X that only he knows the intricacies of) for complexity familiar to a large group of people (Kubernetes).

I consider it fairly remarkable that a single developer today could accomplish, in terms of building and operating a system, what it would have taken a large team of specialists to do even 5 ~ 6 years ago let alone 10 years ago.

Now the OP's point is "you probably don't need it," and sure, maybe you don't. But I would say watch how it shifts the economics of software development in the broader sense, especially over the next few years as the technologies mature.


It would be nice if kube told me why my containers crash in the event log. Right now it just shows ‘crashed’.


>I really think that running a LAMP server for the average beginning developer these days would be just as complicated, maybe more complicated, than running a single deployment on Google Kubernetes Engine.

Back when I first started doing web dev I went from knowing nothing about server setups or Unix (i.e. running off managed hosting) to a reasonably secure FreeBSD server with a working content management system in 3 days. This included installing the OS. The same FAMP setup (with modifications and updates, obviously) continued to work perfectly fine for the next decade.

Re-edit: In the original version of the post I drew a parallel with several teams at my previous job "figuring out" AWS Lambda for several month, and stumbling over gotchas, multiple ways of doing everything and a myriad of conflicting tools. Since there is a reply to that statement, I guess I will add this note.


I believe you believed that :-) But realistically, after over a decade of doing effectively DevOps, I'm still learning about mistakes I made before. In 3 days you may get something running and learn the basics, but likely it's a false sense of security...


Containers are a mechanism to run your old machine[s], but with a reproducible setup script. A machine packed in a container happens to also run on your dev/CI environments. There isn't much logical difference between a physical machine, a VM and a container [0].

Serverless offers a large surface of APIs, some of them proprietary, tangled in an ever evolving dependency hell.

Historical note: Google Cloud started serverless with AppEngine, then focused on GCE [and later GKE] _because_ serverless was hard and AWS was eating their lunch with VMs.

[0] For example, we can argue about security isolation issues in containers vs VMs. Eventually this will become a moot point as technology advances far enough that we can run each container inside a hardware-backed VM.


>Containers are a mechanism to run your old machine[s], but with a reproducible setup script.

Well, exactly. There is not much to them, conceptually. So why does orchestration have to be so complicated? https://www.influxdata.com/blog/will-kubernetes-collapse-und...

Also, serverless should be simpler. But it's not, like you said. That's my point. There is way too much accidental complexity bundled with these technologies.


Containers are conceptually simple. Products like Kubernetes, Docker, etc., which need to be sold or have a for-profit motive, on the other hand, need to be complex so the supporting company can sell software and support contracts.

The two purposes are directly opposed to each other.


> need to be complex so the supporting company can sell software and support contracts.

I think it's more likely due to the need to solve everybody in the world's use case. Yours is just a convenient (for some) side-effect.


It's not accidental complexity, it's economic complexity: the complexity required for the service to have had the extra performance and features relative to its predecessor that led to its mass adoption.

Let me give just one example, of the replacement of VMs with containers:

VMs have static allocations of CPU/memory/disk/etc. Therefore, you don't need to ask "where" you're running a VM. The VM is some size; it finds a free slot of that size on a hypervisor cluster and stays there. Simple!

Containers are like VMs if VMs were only the size (in CPU, memory, disk-space, etc.) that they were actively using. Which means you can potentially pack lots of containers—i.e. heterogeneous workloads—onto one container-hypervisor. So you "need" to introduce rules about how to do so, if you want to take advantage of that.

And, because a container sees a filesystem (where a VM just sees a block device), you "need" to give containers rules about how to share a hypervisor filesystem. So you "need" a concept of volume mounts, rather than just a concept of disk targets on a SAN, if you want to take advantage of that.
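
Concretely, the kind of rules I mean end up looking like this pod spec fragment (the values are arbitrary):

    containers:
      - name: app
        image: registry.example.com/app:1.0
        resources:
          requests:          # what the scheduler reserves when bin-packing
            cpu: 100m
            memory: 128Mi
          limits:            # hard ceiling before throttling / OOM-kill
            cpu: 500m
            memory: 256Mi
        volumeMounts:
          - name: data
            mountPath: /var/lib/app
    volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data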

These "needs" aren't really needs, if you're okay with your container having the exact same CPU/memory/disk overhead that a VM would. That's what services like Elastic Beanstalk get you: the ability to forget about those details.

But in return, with such services, you only get one container running per VM. So you don't gain any enterprise-y advantages on axes like "marginal per-workload cost-savings" or "time to complete rolling upgrade" versus just using VMs. And so—if everyone did things this way—nobody would have ever switched from VMs to containers.

The fact that containers do have widespread adoption, implies that container advocates managed to convince ops people to do something in a new and more complex way; and that this new complexity lead to advantages for the people who adopted it.

I call this complexity "economic", because it's the result of a https://en.wikipedia.org/wiki/Race_to_the_bottom (of adding complexity to squeeze out higher efficiency at scale) which costs more and more in dev-time per marginal gain in performance, but where we can't opt out, because then we're outcompeted (in terms of having lower costs) by platforms that are willing to go all-in on the increased complexity.


Yeah! If you just encode your plain text as JSON, and then add something like the resource you are operating on, and the action you want to take, that’s much easier than the complicated HTTP.


I tried learning web dev with a LAMP stack and it was frustrating. I hate learning systems that try to do everything for you because you get way too many boxes with question marks over them in your understanding.

It's different after you know how things work, but to start with I really appreciated nc and node http servers.


I realize there is a need for multi-server applications with automated deployment and scaling. However, the accidental complexity of serverless setups and container orchestration tools is just off the charts. When reading these articles I get roughly the same feeling I got when reading J2EE articles back when J2EE was "the future" and "the only way to build scalable infrastructure".


People say that serverless keeps things simpler. In reality it's just moving the complexity to different areas (devops as opposed to application complexity). We have been writing applications for longer, so it should be easier to keep the complexity lower if you keep the logic there. Are there similar well-known patterns / best practices / ways of structuring devops like those that have been developed for applications over the years?


It's not just about scaling. That seems to be the only thing people talk about because it sounds sexy but the reality is about operations.

Kubernetes makes deployments, rolling upgrades, monitoring, load balancing, logging, restarts, and other ops very easy. It can be as simple as a 1-line command to run a container or several YAML files to run complex applications and even databases. Once you become familiar with the options, you tend to think of running all software that way and it becomes really easy to deploy and test anything.
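
For instance (image names are placeholders):

    # run a container and put a service in front of it
    kubectl create deployment hello --image=registry.example.com/hello:1.0
    kubectl expose deployment hello --port=80 --target-port=8080

    # rolling upgrade, status, logs
    kubectl set image deployment/hello hello=registry.example.com/hello:1.1
    kubectl rollout status deployment/hello
    kubectl logs deployment/hello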

So yes, for personal projects a single server with SSH/Docker is fine, but any business can save time and IT overhead with Kubernetes. Considering how easy the clouds have made it to spin up a cluster, it's a great trade-off for most companies.


Exactly. It solves some of the most important problems that come up when working with microservice-based architectures, and establishes mature patterns around the ability for multiple developer teams to update and scale each piece of a distributed application.

Also, what do you do when your Digital Ocean droplet needs to host hundreds or thousands of customers? Maybe each customer needs its own database (well, in my case they do), configuration, storage requirements, and multi-node requirements. How do you keep track of all that, how do you automate it, and how do you QUICKLY recover in failure scenarios? You need to be able to deal with failure on a node, or if there's a bad actor you need to be able to move them off easily without downtime or affecting other customers, automatically, seamlessly. You need to be able to see an overview of your resources across all nodes and where apps are placed, and have something decide whether the hardware your new container is being added to can handle another JVM or whatever. For cost effectiveness, you want to be able to overcommit resources, so you want containers. You want those to translate to other platforms: AWS, Google, Azure, on-prem. You have a single declarative language that works anywhere you can deploy a k8s cluster. You need to deal with growth and good patterns for rolling back and updating versions of parts of the stack. You want all of your deployments to be declarative, to be able to tightly control the options for each one, and to get back to where you were.

I agree that it doesn't make sense for everything, and it requires a fundamental understanding of Linux and software before it even makes sense to try shoehorning it in, but it solves real-world problems for many people; it's not just a hype thing. I would say docker itself was more of a hype thing than k8s; the maturity and features of k8s and the other orchestration systems that came out of the docker model are there for a reason, because they solve all of the real-world problems people couldn't solve with vanilla docker without tons of custom scripting and hacky workarounds. Docker solved the big problem by providing isolated environments for each app and splitting things out into microservices that way, without having to commit a full statically resourced VM or bare metal per service. K8s solves all of the other problems that came out of that (pods, stateful sets, init containers, jobs, cronjobs, service definitions, deployments, volume claims).


Exactly!


FreeBSD, Apache, Python - the FAP stack

Ok. I'll see myself out.


I laughed ;-)


Me too, surprised this hasn't been down-voted to hell on here.


Stop thinking of Kubernetes as an easy way to scale ops for a single app, and start thinking of Kubernetes as an easy way to scale ops for non-trivial amount n apps.

If you're a startup with a monolith then sure, you probably don't need Kubernetes. If you're not using Heroku/GAE/etc. then you generate a machine image from your app, deploy it behind a load balancer (start with two servers), and use some managed database for the backend. That's pretty simple. You can scale development without scaling the size of your ops team (1-2 people, only need two if you're trying to avoid bus factor 1), at least until you need to outscale a monolith.

If you need to run a bunch of applications, made by a bunch of different teams (let alone when they don't work for you - i.e. an off-the-shelf product from a vendor), then using a managed Kubernetes provider makes this relatively simple without needing more people. If you try to do that without containers and orchestration, and want to keep a rapid pace of deployment, and not hire tons more people, you will go crazy.


The reliability and performance story for Hacker News is not great, and that's despite the fact that its design has lots of simplifying assumptions. I wouldn't call HN a success story for the "just drop it on a server" approach.

Of course, HN is a kind of art project, and its scaling and performance goals are not typical of most applications.


I think you're right - at least 90% of servers on the web would be fine with a couple of instances at most backed by a decent db. It can get more complex depending on your resilience requirements, but it really doesn't have to be.

I guess I run a CPG stack - CoreOS, PostgreSQL, Go. I don't bother with containers, as Go produces one binary which can be run under systemd. It is far simpler than Kubernetes, and the only real reason for other servers is redundancy. The only bit of complexity is I usually run the db servers as a separate instance or use a managed service. You can go a long way with very boring tech. I've run a little HN clone written in Go on one $5 Digital Ocean droplet for years - it handles moderate traffic spikes with little effort.


I think of it this way: 99% of apps are developed by developers who are not in the top 1%. Cheap access to computing power has led to a growth of developers beyond the highly skilled ones who can milk everything available out of a less powerful computer. I'd like to believe we are in an Electron phase of development, where we just want to ship as much as possible, as easily as possible, without worrying about hiring great talent (and yeah, I hate that it's inefficient in terms of memory usage). This has led to the explosion of so many frameworks that do a lot of things easily but require such complex devops pipelines.


I personally use Docker combined with a $5 droplet on Digital Ocean. This makes it easy to spin up multiple applications and sites without worrying about conflicting dependencies, and docker-compose gives me most of the benefits of orchestration tools (e.g. Kubernetes) that actually matter for my small scale usage.

Also, Traefik makes a nice load balancer for this usage.


> docker-compose gives me most of the benefits of orchestration

I feel this is a very unappreciated feature of docker-compose. I've gotten pretty far with setting restart: always, baking a machine image, using cloud scaling and load balancers.


Interesting! When you rollout a new version, how do you coordinate docker-compose and Traefik to avoid any user-facing downtime (no 502)?


The simplest answer is that I am the only user currently, so it doesn't matter. Even with users, a short downtime to bring down the old images and bring up the new images should be manageable. If I ever got to a point where that wasn't sufficient, then I would consider that project successful enough to merit investment into zero-downtime strategies, and probably its own VPS as well.

From a more technical side, I haven't seen an issue with Traefik when I take a container down and back up again. The only delay I would anticipate is when you bring a new service online for the first time and Traefik detects it and configures + fetches LetsEncrypt certs for it.


Thanks.


For reference, this runs on a $5 AWS instance:

https://hnprofile.com/

The database is $600 per month, but that data runs five different websites (and it's a few hundred Gb of data).

EDIT: for those mentioning the 502 gateway error, it does auto-scale - now it's costing more per month, at least temporarily.


>502 Bad Gateway

maybe a $5 instance isn't enough


Not a good example as it is currently 502'ing: https://i.imgur.com/sU8Zn5v.png


$5 AWS instance? Aren't they all substantially more than that?



Does Hacker News really run on one server? What if the server goes down?

I've always thought high availability was the more important reason for multiple servers, rather than performance.

Even if you have only two paying customers, they are probably paying for the right to hit your website / service 24/7.


What if hacker news goes down, more work gets done for the day?

It’s really common to over estimate the cost of being down, while underestimating the costs of resiliency. And in the end if you can’t fully afford the resiliency, you end up with a system that’s more complicated and thus less stable than it could have been had you just accepted a tiny bit of risk.


Extremely valid point. The initial setup isn't the challenge; it's the endless tweaks to make it fault tolerant. For anyone looking into k8s, take time to research best practices for monitoring and readiness probes.
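
e.g. at minimum, give every container something like this so traffic only reaches it when it's actually ready (path and timings are illustrative):

    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3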


> What if server goes down?

Productivity of Y Combinator startups goes up. So YC benefits either way.


An interesting case study of an app at scale is Stack Overflow. Rather more than one server, but rather less than you might expect.

Edit: fixed url

https://nickcraver.com/blog/2013/11/22/what-it-takes-to-run-...


Key point:

"The primary reason the utilization is so low is efficient code."


One application server, yes. HN is fronted by Cloudflare for CDN + DDoS protection, which of course is a lot more than one server.

That's why if you get a particularly long thread (1k+ comments), admins will beg people to log out so that the responses can be served from the CDN cache.

Example: American 2016 presidential election https://news.ycombinator.com/item?id=12909752 (1,700 comments)


Not Cloudflare anymore. Traffic from Europe goes to somewhere in San Diego, which it wouldn't if it were Cloudflare (different IP range too).


HN hasn't been fronted by Cloudflare since July.


Hm, interesting. I noticed HN started responding to HEAD requests with 405s recently; perhaps that is the cause.


Why did they stop using Cloudflare?


It was part of their networking rework. Presumably to improve stability of the website. I've also been told that it was a big step towards enabling IPv6 for HN.

There are other parts of Y Combinator that still use Cloudflare though.


Well, there's another important question: What percentage of all services need high availability? Stetson-Harrison method shows that it's less than 5%.


Need is determined by the customer. You might be able to explain to them that they don't need it on their way to a competitor. May not be fair or rational, but redundancy and its peace-of-mind has real benefits in most systems compared to cost.


Has Hacker News ever gone down? I can't remember it ever going down, but I assume it has at some point.


I've seen it happen at least once in the few years that I've been on Hacker News, see https://news.ycombinator.com/item?id=17228704


I got a "server can't respond to your request" type message this morning. OK, the server wasn't down, but it was loaded enough that it couldn't serve me. A page refresh and it was working.


Here's the official status twitter: https://twitter.com/hnstatus


You probably don't need kubernetes.

Let's be fair, it offers:

> Orchestration of block storage

> Resource Management

> Rolling Deploys

> Cloud provider agnostic APIs*

If you don't need any of these things, and your stack fits on a single server or two, and you aren't already familiar with it, I'm not sure why you'd bother other than out of interest.

That said, there's a world of companies that aren't FAANG, ub3r, and Basecamp, and many of those paying reasonable sums of money have more complicated and resource-intensive requirements that don't fit on a single server.

Government Departments, Retail Companies, and Banks all likely have a number of different software development projects where giving a number of developers API access to a platform that offers the above advantages is, in my opinion, a good thing. Once you get to FAANG level, who knows whether kube itself will actually help or hinder at that level.

* Personally I'd rather use the kube APIs than talk to any of the cloud providers directly. I imagine that's somewhat personal preference and somewhat because I've been able to easily run it in my basement.

*2 Namespaces also make creating more environments for CI/CD easier, so as soon as you have a team of developers and you want to do that sort of thing, it also makes sense. Not so much for a lone developer and his server.
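
e.g. a throwaway environment per branch is just (names are illustrative):

    kubectl create namespace review-feature-123
    kubectl -n review-feature-123 apply -f k8s/
    # ...run the tests against it...
    kubectl delete namespace review-feature-123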


There is also overhead in the form of instances to run and maintain the backend datastore and control plane components. You should already be at a certain scale before considering kubernetes.


This is free on GKE and I believe AKS, but of course if you're doing this yourself for some reason, you need to compare it with the alternatives.


Spot on, friend.

So recently I started writing a simple web application for my family. They send emails to each other with gift wish lists in them and we all have to juggle those emails around. I figured some products would exist already to solve this problem, but I wanted to make my own.

When it came time to make it I thought: "This has to be a REST API with a JS front end" and then further down the line, "Man I should use Flutter and only make it a mobile app!" I had other thoughts about making it Serverless and doing $thisCoolThing and using $thatNewTech. In the end nothing got done at all.

Fast forward to today and it's a monolith Go application that renders Bootstrap 4 templates server-side, serves some static CSS directly, sits on a single server (DigitalOcean) and uses a single PostgreSQL instance (on the same server). The Bootstrap 4 CSS and JS come from their CDN.

I made the technology simpler and the job got done. It's an MVP with basic database backups in place, using Docker to deploy the app. It just works.

Lessons for me from this:

* Server-side template rendering is perfectly fine and, frankly, actually easier (a rough sketch follows below)

* JS can still be used client-side to improve the experience without replacing the above or making the entire rendering process client-side

* Although Go compiles to a single static binary, I still need other assets, so it went into a Docker container for the added security benefits, not to mention portability

* Serverless is nice, but unless it has replaced the above in your day-to-day, there's always a steep learning curve around something you haven't done with it yet but need

* Picking the latest and greatest tech tends to stagnate progress or halt it entirely, in most cases

* A software MVP needs an MVP infrastructure to go with it
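
For the curious, a rough sketch of that server-side rendering setup, assuming Go's standard net/http and html/template packages; the handler, template names and data below are illustrative stand-ins, not the actual app's code.

    // Minimal server-side rendering sketch: parse templates once, render per request.
    package main

    import (
        "html/template"
        "log"
        "net/http"
    )

    // Hypothetical view data for a wish-list page.
    type ListPage struct {
        Owner string
        Items []string
    }

    var tmpl = template.Must(template.ParseGlob("templates/*.html"))

    func listHandler(w http.ResponseWriter, r *http.Request) {
        page := ListPage{Owner: "Alice", Items: []string{"Book", "Socks"}}
        // Render templates/list.html on the server; the browser just gets HTML.
        if err := tmpl.ExecuteTemplate(w, "list.html", page); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
        }
    }

    func main() {
        http.HandleFunc("/list", listHandler)
        // Static CSS served directly, as described above.
        http.Handle("/static/", http.StripPrefix("/static/", http.FileServer(http.Dir("static"))))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }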

Just my thoughts.


"I figured some products would exist already to solve this problem, but I wanted to make my own."

But why? You could've used so many different products. You could've even used Google Docs.


There's a bit more to the story than simply solving a problem. I have other plans for the software and we have ideas about how we want to move it forward.

One of the key features is being able to "tag" an item on someone's wish list as "bought" or "buying". This allows others viewing the list to know that item has been taken. But there's also a requirement that the original author of the list/item cannot see that it has been bought otherwise there's no element of surprise for them come time to open the gift(s) :-)

Spreadsheets don't enable that privacy/secrecy.
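
Roughly, that secrecy rule could look something like the sketch below; the types, fields and IDs are invented for illustration and aren't taken from the actual app.

    package main

    import "fmt"

    // Hypothetical wish-list types.
    type Item struct {
        Title  string
        Bought bool
    }

    type WishList struct {
        OwnerID int
        Items   []Item
    }

    // ViewFor returns the list as a given viewer should see it:
    // everyone except the owner can see which items are already bought.
    func (l WishList) ViewFor(viewerID int) []Item {
        out := make([]Item, len(l.Items))
        copy(out, l.Items)
        if viewerID == l.OwnerID {
            for i := range out {
                out[i].Bought = false // the owner always sees items as unclaimed
            }
        }
        return out
    }

    func main() {
        list := WishList{OwnerID: 1, Items: []Item{{Title: "Book", Bought: true}}}
        fmt.Println(list.ViewFor(1)) // owner's view: [{Book false}]
        fmt.Println(list.ViewFor(2)) // everyone else: [{Book true}]
    }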


Still, I highly doubt that there aren't apps that do this already. I mean, my family uses Amazon wish lists and sets them so that you can't see when stuff is purchased.


Like I said, it goes beyond just solving the problem. It's a learning exercise as well as a solution.

Have you never wondered what projects you can write to help you learn X or Y?


I'm interested in something like this - is the code publicly available?


> so it went into a Docker container for the added security benefits

Which are? Last I heard containers (with baked-in dependencies) were generally considered very bad for security because the dependencies never get updated.


Is there a short downtime when you deploy new versions of the Go application? Is the Go application directly exposed to the Internet?


I did a programming project for a job interview recently at a company called Willowtree that makes iOS and Android apps for other companies.

It was a pretty simple project: basically, wrap a REST API around some JSON data provided to you.

I ended up deploying mine to Google Cloud Platform onto a VM running Ubuntu and Apache, and they seemed rather concerned that I took that approach instead of leveraging some kind of containerization or PaaS approach.

My API definitely had problems, as I don't have much back-end experience, but I found it strange that they would look down on deploying to a cloud VM. It doesn't seem like it was that long ago that a VM hosted on AWS or Digital Ocean was the latest and greatest, and it seemed like a logical choice for something that would only ever be used by about five people.


You probably dodged a bullet then. We give out take-home exercises (not my idea, but whatever), and we tell the candidate we don't care which config management tool you use; just pick something you're comfortable with. We use TF+Ansible, but we would never frown upon work that uses Salt, Chef or CFEngine.


They do not. I run tens of low-traffic projects very successfully on a $10/mo Hetzner server with Dokku. Dokku is amazing and so is Hetzner; I don't know why people always go for the high-scalability, expensive options just to end up with zero utilization.


Because my company is risk-averse, and perfectly happy to drop $1000 a month for a ridiculously overprovisioned database instance just to ensure an issue with the database will never cause their contracts to be lost.


If you have zero utilization, then you should scale down or lower your instance type until you are optimized for performance and cost, which is easier to do when you are in an environment such as GCE or AWS.


There's a minimum when you have deployed each one of your hundred microservices to a server, though.


On a service that has been up for almost 20 years, same code base, thousands of daily users: the first server was constantly at 100% CPU. The second server averaged around 10% CPU with lots of spikes. The third server now averages below 1% CPU usage. Next time I need to upgrade I will probably get a NUC, or a smartphone, or something even smaller. But it's not only CPUs that have gotten better. The first server also maxed out the bandwidth! And now, although with fewer users, the bandwidth usage is less than 1%. It started out on 0.5 Mbit DSL, and it's now on Gbit fiber.


> If it's high, is it because you are getting 100,000 requests per second, or is it the frameworks you cargo-culted in?

Mine's high because our business model involves blockchain stuff, and

1. blockchain nodes are CPU+memory+disk hogs;

2. ETL pipelines that feed historical data in from blockchains produce billions of events in their catch-up phase. (And we're constantly re-running the catch-up phase as we change parameters.)

Sadly, we need several fast servers even without any traffic :/


Initially I was skeptical as well. One server in a colocation will handle enough traffic until you can afford to hire all the people to make you web scale. But then I started playing with the various tools and seeing how people used them, and it totally changed my view.

The key point is that many of the new technologies in operations are about simplicity rather than speed. Standing up a stack in AWS can be flipped on and off like a light switch, and all the configuration steps can be much more easily automated/shared/updated/documented etc...

It's not about any of these technologies being more efficient; it is about spending more in order to abstract away many of the headaches that slow down development.

Certainly there are some people who are prematurely planning for a deluge of traffic and spending waayyy too many engineering resources on a 'web scale' stack, but that's not the majority.


This is a really interesting comment, thanks


I think for smaller use cases it's more about high availability than load balancing.


The load average on my kubernetes cluster is actually around 3-4 without it even doing anything.

There’s a bunch of apps running in there, but nothing that would justify the load.

It’s also generating roughly 20 log lines per second.

I’m really not sure what it’s doing...


> I manage dozens of apps for thousands of users. The apps are all on one server, its load average around 0.1.

If you're at this scale you can do whatever you want. Most of the stuff I've made has been built with simple building blocks like you've described, maybe with some caches and a load balancer thrown in.

That said, I've worked with other teams who really did have the high-scale request flows that require you to think about a different architecture. Even so, K8s is not the end game, and you can make something work even by just extending the LAMP stack.


I think that kube and, more generally, cloud providers have made more ambitious projects broadly accessible.

My side project is intended to handle > 1 billion events per day, with fairly low latency. That's well over 10k events per second.

I doubt I could do this easily on a single box, and I wouldn't really want to try. Why constrain myself that way? Is it worth just doing this the standard LAMP way?

More and more problems are available to be solved using commodity systems, so we have more and more people solving those problems with these new systems.


Depends on the box. Per core performance is probably your metric there if the app can utilize multiple cores.


Hacker News running on a single server sets a very bad precedent. I wish the people running the show would address it quickly, since it's being held up as an example to follow.

When building a business you should take care to have a resilient environment. I agree it's not for everyone, but it's quite essential when you have a huge customer base and care about avoiding unpleasant experiences. If someone is running an important business and leaving it to chance, it's just pure arrogance or gross incompetence.


Not everyone uses K8s for webapps. You would be surprised at the level of enterprise penetration of K8s. Those enterprises do boring stuff like data warehousing, etc.


One of the more interesting use cases I have read about in recent memory: Chick-fil-A used it to set up a bare-metal edge compute network across all its restaurants.

The how: https://medium.com/@cfatechblog/bare-metal-k8s-clustering-at...

The why: https://medium.com/@cfatechblog/edge-computing-at-chick-fil-...


> Some day I would like a powwow with all you hackers about whether 99% of apps need more than [...]

Close. But I also need it HA with automatic failover, auto-SSL certs (meaning I might need to give my DNS provider creds, depending on the LE approach), notifications on outages, easy viewing of logs past and present, easy metric viewing, automatic backups, and updates that are easy for me to sign off on and then run. I'll do the plugging-in on the app side (that is, exporting metrics, logs, etc.). And no vendor-specific solutions (even if they are repackaged common components, like RDS is for Postgres); I should be able to run on a couple of VMs on my laptop if I want and have it be exactly the same. And I may want to add an MQ/stream (e.g. Kafka), an in-mem DB (e.g. Redis), etc. later and still want log aggregation, metrics, backups, etc.

Really, that's not asking too much, but it's definitely more than LAMP. We need a pithy name for this startup-in-a-box (again, that's NOT a PaaS, but a self-hosted management layer on an existing set of servers). Nobody wants to fumble with Ansible/Puppet/Salt/Chef/whatever all over the place or hire an ops guy, and people don't want to use vendor-specific solutions.

I agree with the "you only need this"...but we need just a bit more to handle outages and auditing.


It's convenient to be able to trivially create new production-like environments. Great for reproducing bugs or simulating deploys or running demos. My company's setup and scale doesn't necessitate kubernetes, but I still find it useful. It was fairly straightforward to set up.


>I manage dozens of apps for thousands of users. The apps are all on one server,

How are these backed up?


And how do they fail over when the server dies (at least the non-user-app part like their DBs)?



