
You Don’t Need All That Complex/Expensive/Distracting Infrastructure - ingve
https://blog.usejournal.com/you-dont-need-all-that-complex-expensive-distracting-infrastructure-a70dbe0dbccb
======
cwyers
> ‘Engineers get sidetracked by things that make engineers excited, not that
> solve real problems for users’ — we’ve heard it all before. There’s no
> shocking revelation here (you are on Medium.com after all …). But there
> seems to be a pervasive sense that you can’t launch a product without a K8s
> cluster or two, load balanced across a couple of regions, oh and if you have
> to deploy anything manually how can you possibly expect to turn a profit?

Not everyone needs K8s, not everyone needs multi-region. But as far as manual
deployment goes... it's all fun and games until someone loses an eye. Often,
if you have a process that can't be automated as-is, what you really have is a
process with problems. Maybe those problems are bugs. Maybe those problems are
"only one person knows how to run this, and if he wants to take a vacation or
move on to something else, we're hosed." Automation is a good thing at any
scale.

~~~
hjk05
> Automation is a good thing at any scale.

Once dealt with a company that spent close to 2 years developing 2 different
generations of a robotics system to glue together two plastic pieces. One of
the pieces changed slightly near the end, making both robots useless. The
pieces ended up being glued together manually for next to no labor cost,
because a person could actually do the gluing quite fast.

Premature automation wrecks budgets and production lines and can kill entire
companies.

~~~
dtech
There's a lot of difference between a gluing robot and the tasks that DevOps
advocates want to automate.

Spending a few hours setting up an automated build on a SaaS CI server will
start bringing benefits immediately and pay off very quickly. Not only
quantitative benefits (subsequent deployments take less human time) but also
qualitative ones (less chance of errors/bugs/mistakes, it's easier to deploy
more frequently, etc.).
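
To sketch just how little that can be: here's roughly what the build script a
SaaS CI service runs on every push might look like. Everything here is an
illustrative placeholder, not any particular CI's API; $GIT_COMMIT stands in
for whatever commit variable your CI exposes.

    #!/usr/bin/env sh
    # Hypothetical CI build script, run automatically on every push.
    set -eu                                   # fail the build on the first error

    make test                                 # run the test suite before shipping
    docker build -t registry.example.com/myapp:"$GIT_COMMIT" .
    docker push registry.example.com/myapp:"$GIT_COMMIT"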

Obligatory XKCD: [https://xkcd.com/1205/](https://xkcd.com/1205/)

~~~
jdmichal
The _other_ obligatory XKCD: [https://xkcd.com/1319/](https://xkcd.com/1319/)

------
agentultra
I largely agree... but to offer more nuance: you need to know what the right
balance of automation versus _toil_ is for your project. It's worth reading
the Google SRE [0] books, especially the workbook, if you're thinking about
running and operating a service.

Complexity is the enemy in software engineering as much as it is in operations
engineering. Sustainable reliability is about doing as little as you can get
away with to achieve the reliability results that make your customers happy.
That does mean following the advice of TFA and avoiding going straight to k8s
for a small indie site that could be deployed with a simple Ansible/Puppet
script. However, it also means defining what your targets are and having
indicators, so that you're not relying on your gut instincts either: aim for
60% toil, 40% automation at first. Whatever the right balance is, review it
regularly as you scale your project up.

You can do a lot with a couple of beefy VPSs and a solid database these days.

[0]
[https://landing.google.com/sre/books/](https://landing.google.com/sre/books/)

~~~
lostphilosopher
"Simplicity is a necessary precondition for reliability." \- Edsger W.
Dijkstra (handwritten annotation to EWD498)

~~~
ken
That's in EWD1175:
[https://www.cs.utexas.edu/users/EWD/transcriptions/EWD11xx/E...](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD11xx/EWD1175.html)

~~~
lostphilosopher
Ok, maybe you can help settle this for me. I know that phrasing appears in
EWD1175, but years ago I came across this [1], which claims it was originally
a handwritten annotation to EWD498. So that's how I've always cited it (I
prefer Dijkstra's phrasing to Hoare's.)

Some searching today found a wikiquote page[2] which led me here[3] also
claiming an EWD498 handwritten annotation as the source.

Do you know definitively whether the annotation appears in EWD498?

(I thought at one time I actually found an image with the handwritten
annotation, but I can't find it at the moment... Maybe I dreamed that...)

1.
[http://www.cs.virginia.edu/~evans/cs655/readings/ewd498.html](http://www.cs.virginia.edu/~evans/cs655/readings/ewd498.html)
2.
[https://en.wikiquote.org/wiki/Talk:Edsger_W._Dijkstra](https://en.wikiquote.org/wiki/Talk:Edsger_W._Dijkstra)
3.
[http://web.archive.org/web/200011201643/http://www.cbi.umn.e...](http://web.archive.org/web/200011201643/http://www.cbi.umn.edu/inv/burros/ewd498.htm)

------
jeena
Funny and true story:

Two years ago we needed to move from the cloud to an internal network.
Because of time pressure we got a PC as our first server, on which we
installed a whole set of tools for our embedded developers: GitLab, Jenkins,
an LDAP backend, Nagios, Rocket.chat, Crowd, file sharing, nginx, Volumerize
backups, NFS sharing, build artifact storage, and a private Docker registry;
we even run a build slave instance on it.

This was supposed to be an intermediate solution until we moved to the real
infra. The new infra has all of the things: several stages of load balancers,
tons of firewalls between the servers, the slaves physically in a different
network; everything is insanely complex and takes at least 20 times longer to
set up than you'd expect. This is why we still run on that old PC (we added 7
more as build slaves), and ca. 200 people have been using it daily for 2 years
now, which seems pretty weird, but it just works.

Lately we saw that the network card started hanging and we needed to do a hard
reboot, which is not nice. The guys who are physically close to that PC (we
use it remotely) had a USB network card lying around, and we asked them to
connect it, because the old one in the tower had most probably reached its end
of life after being used so heavily over the last 2 years.

We're still onboarding more and more people, and it's not clear when we will
be able to move to the new, very complex infra.

~~~
the_common_man
How do you run all the apps on a single server? Docker?

Also, who is responsible for updating all those apps? Is that a full time job
for somebody?

~~~
jeena
Yes, we run every app from its official Docker image, so updating is quite
easy: basically pulling the newest image and restarting the Docker container.
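For a rough idea, the per-app upgrade cycle is something like this; the
container name is made up, and the image/port are just an example of one of
the apps mentioned:

    docker pull rocketchat/rocket.chat:latest   # fetch the newest official image
    docker stop chat && docker rm chat          # remove the old container
    docker run -d --name chat -p 3000:3000 \
        rocketchat/rocket.chat:latest           # recreate it from the new image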

When everything is on one server, there is surprisingly little to do, so for
that we have a rotating sysadmin: one team member is responsible for the
servers for one sprint, and then the next person takes over. For some time we
had a dedicated sysadmin, but there was never enough work for them, so the
rotating sysadmin does it on the side right now.

In the new infra this is changing significantly: there it is already much more
work, and we will grow by at least two more teams to handle it full time.

------
cle
These sweeping generalizations in either direction are wrong and
counterproductive. Some people need the complex infrastructure, some people
don't.

What you should focus on is finding out what your use cases are, and then
building the simplest thing that meets them. For some folks, high availability
and zonal resiliency is an absolute must. For other folks, like Peter, it
might not be. These context-less platitudes are pretty useless outside of the
context in which they're made.

~~~
batmenace
I think the idea behind this post was more to avoid this kind of thing from
happening: [https://xkcd.com/1319/](https://xkcd.com/1319/)

------
013a
Well, your users are gonna have a fun time when Linode has to take down your
single VPS for maintenance, or an AZ goes down (does Linode even have AZs?
There's some evidence that even GCP doesn't have truly redundant AZs, so I
doubt Linode does.)

Point being: it's a balance. I tell any startup who will listen: App Engine or
Heroku. That'll get you REALLY far and strike a good balance between
autoscaling, simplicity, and redundancy.

If you're still using a single server, you're doing something horribly wrong.
This random guy's strategy isn't something to be proud of. He just doesn't
know that there are better options out there. Simplicity isn't a VM you have
to maintain, update, and secure. It's a fully managed PaaS.

~~~
nickdandakis
I'm not sure when exactly this shift happened, but people (myself included)
fetishize high availability so much nowadays. What would really happen if your
startup-of-the-year with less than 1k active users was down for five minutes?
Or an hour? Do you really think people visiting a site that's down think "I'm
dropping my account"? Or do they think "Oh. I'll check back later."?

NomadList and RemoteOK (from the article) have both had downtime before.
They're both much more profitable than whatever startup of the day has decided
they desperately need complicated infrastructure to run their CRUD app.

I've fallen for this trap also. I've written an article about deploying a
Next.js app to Elastic Beanstalk, when I probably could've just stood up a
server and SSH'd in to deploy. I've used Firebase when I could've just stood
up a simple REST API and PostgreSQL.

There's nothing horribly wrong with using a single server, he knows there are
other (not better) options out there.

~~~
sbov
The shift started back in the late 2000s when the explosion of NoSQL and AWS
made it easier than before for people to pretend they have problems they
don't.

Before NoSQL you had to do expensive/difficult stuff like manual database
sharding and buying expensive dedicated hardware. Now you can instantly spin
up a few instances.

Considering how much easier it is to put the infrastructure in place IF you
ever need it, it's odd to me how focused on it people seem to be.

~~~
nickdandakis
That makes a lot of sense, and if that's the case, I'd chalk it up to
advertising for convincing people they _need_ all of this type of
infrastructure.

------
stirfrykitty
An old Unix admin mentor of mine always said to "do the simplest thing that
works properly". I've always tried to do this and it's great advice.

~~~
bsenftner
That advice is gold. I follow it too. Currently in beta with a C++ application
that has its own web server, video encode/decode, and NSA-quality FR - all in
a 10MB executable with a 200MB runtime footprint. This single application
removes the requirement for separate video encode/decode, video serving, REST
API serving, and all DB needs. And it runs dramatically faster than what it
replaces, while being happy on a single box, anything from an Intel Compute
Stick to heavy iron servers, you name it. I like to say "the infrastructure is
inside, nothing else is needed."

------
pacala
FWIW, I run Minikube to manage a single ML churning machine in my garage,
because I only need to learn 'docker build <dir>', 'docker push <image>',
'docker run <image>', 'kubectl create -f <foo.json>' and 'kubectl delete -f
<foo.json>' to manage the workloads on a regular basis. For this minimal
brain-space investment I get a bunch of features like resource management, a
workload queue, dashboards, and being able to test/debug an image on my dev
machine for free. Then I scale this knowledge up to managed clusters in the
cloud with as few machines as I can get away with, using minimal brain-space
to learn additional systems.
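Sketched out, the whole day-to-day loop is roughly this; image and manifest
names are placeholders:

    docker build -t registry.example.com/ml-job .   # build the workload image
    docker push registry.example.com/ml-job         # publish it for the cluster to pull
    kubectl create -f job.json                      # queue the workload
    kubectl delete -f job.json                      # tear it down when finished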

While I agree with the general sentiment of using the bare minimum to get the
job done, the gratuitous complexity problem is usually caused more by the
people using the tools than by the tools themselves.

~~~
chimen
What are you using in production? Still minikube?

~~~
pacala
GKE.

------
justizin
> I’ve seen the idea that every minute spent on infrastructure is a minute
> less spent shipping features

This is why working in infra is so awful: people follow this principle for
_years_ and then invest a million a year in an infra team whose hands they
tie.

Resilience is a feature, though I agree that at an early stage folks often
need less than they think. A single auto-scaling group and load balancer in
AWS, or something similar, isn't much heavier than a single Linode VPS, except
that it has substantially improved resilience.

~~~
EdwardDiego
> This is why working in infra is so awful, people follow this principle for
> years and then invest a million a year in an infra team whose hands they
> tie.

Yep, having to wait for development resources to build a scalable system
because the legacy one is working just fine... and then the legacy system
falls over and burns because it exceeded capacity, like you warned it would a
year before it did, and the business guys suddenly want you to fix it
yesterday because now every account manager is missing their end-of-month
reports...

A significant downside of Scrum as a methodology is that it assumes product
owners listen to the engineering team as well as to the salespeople yelling at
them 24/7 for the latest feature X before making prioritisation decisions.

------
Animats
Wasn't this on HN last week?

Remember Soylent boasting about their elaborate compute infrastructure, for a
business that made a few sales per minute? I once pointed out that they could
handle their sales volume on a HostGator Hatchling account with an
off-the-shelf shopping cart program, for a few dollars a month. But then they
wouldn't be a "tech" company.

Soylent is apparently still around, competing with SlimFast and Ensure.

~~~
sbuttgereit
In one of my past lives, I managed an IT group where we processed ~50 point-
of-sale transactions per second and did it on two smallish application servers
and a single "large small" Oracle database server. Our entire infrastructure
only had about 10 servers... (including things like email, file services,
redundancy, etc., and this was almost 15 years ago). A few years later I was
brought into another retail company that only did 500 orders a day... with
damn near 150 servers. My jaw dropped on that one...

------
itslennysfault
> Obviously if you’re FAANG-level or some established site where that 0.1%
> downtime translates into vast quantities of cash disappearing from your
> books, this stuff is all great

As the CTO of a small startup, 0.1% downtime translates into vast quantities
of lost trust and irreversible damage to our brand. If Netflix goes offline
for a bit, they have some angry customers and maybe lose some cash, but
they'll still be chugging along. However, if we're down for a while, our
customers, who are already taking a risk trusting a new company, may disappear
forever, and as a smaller/newer company the overall brand damage is far worse.

While I largely agree with the premise that companies overdo it on
infrastructure, I strongly disagree that my uptime is less important than that
of FAANG companies.

~~~
mattmanser
No, it doesn't.

Whether you're talking about a new Twitter or Reddit, which used to go down
all the time (like, constantly), or some business startup, in my experience no
one really cares about 30 minutes or so of downtime. At worst you'll get a
phone call or two and a few emails, but if you handle them compassionately
you'll be fine.

You can run a startup serving thousands or tens of thousands of customers on a
single server, with no micro services, a simple server-side MVC setup and
never have a $600 a year dedicated server even break 15% CPU.

I once took down a server by flooding the email server with error emails,
which generated more error emails, which ran the disk out of space, etc., etc.
It took an hour to get the site up again.

Barely a blip in revenue. The client finally coughed up for proper email
hosting, rather than running their own server on the same box as their site,
as I'd been advising them to for 4 years.

~~~
mayank
> You can run a startup serving thousands or tens of thousands of customers on
> a single server, with no micro services, a simple server-side MVC setup and
> never have a $600 a year dedicated server even break 15% CPU.

This is a gross over-generalization. There are many kinds of problems in
computing that can be approached by a startup AND which require large amounts
of compute. The soup du jour is machine learning problems.

As for downtime, OP has a perfectly valid point. If you're building a B2B
application, the customer has already taken a risk on you. If you go down in
the middle of a busy workday, even for 30 minutes, you can be damned sure that
someone at your client is getting some heat for taking that risk rather than
going with $BIG_CO.

~~~
mattmanser
Not really. Most people hate $BIG_CO; they're invested in your trendy brand,
your lovely UI vs the Lotus Notes-style $BIG_CO.

The early adopters are going to give you slack.

As for the tiny number of startups solving ML business problems, compared to
the thousands of web apps launched daily on Product Hunt: if your USP is
computing power and special tech, then obviously this advice does not apply.

Edit: You should really disclose that you work on AWS when discussing this
sort of stuff.

~~~
hombre_fatal
> Edit: You should really disclose you work on AWS when discussing this sort
> of stuff

Yikes. Is this irrelevant attempt to disarm them why we have these obnoxious
"disclaimers" all over HN?

It's an absolutely meaningless gesture.

------
wiremine
> Your goal, when launching a product, is to build a product that solves a
> problem for your users. Not build the fanciest deployment pipelines, or
> multi-zone, multi-region, multi-cloud Nuclear Winter proof high availability
> setup.

I agree, but context is key: if you're bootstrapping a startup, you don't
need these things. You need to prove your product, then you scale.

But automation != scale. Having a process that streamlines your delivery,
regardless of scale, can be helpful. I've screwed up enough single-box
deployments to learn that lesson.

Stepping back: our industry is pretty horrible about creating tools that can
start small and scale up. I like where CockroachDB is going for that reason
(just as an example). It would be great to start with a single database, and
have a clear path to scale it horizontally across multiple nodes and data
centers.

Kubes might get there... I'm not sure how focused they are on making small
things work well, though... any examples of that?

------
honkycat
I do not want to be overly harsh, but this is just lazy clickbait. Nothing new
is said here, just a hipster developer making sweeping generalizations about
companies he does not work for and whose requirements he does not know.

Honestly, I do not even agree with his premise that you should start on bare
metal with manual deployments. Getting some basic automation set up is STUPID
EASY with Travis CI, Jenkins, Google App Engine, etc. I feel that toiling to
deploy your services is a massive waste of time.

Obviously, the reality lands somewhere in the middle. I feel like Kubernetes
is undeservedly the whipping boy for "over-complicated infrastructure".
Hosting it yourself, I am sure, is a bear, but there are a LOT of great hosted
solutions available.

Google Hosted Kubernetes makes my job easier.

I write a few Deployments, Services, and Ingress controllers, set up keel.sh
to update my deployments based on docker image uploads, and BOOM: Awesome,
absurdly automated infrastructure.

Log aggregation? Comes out of the box with Stackdriver logging.

Monitoring and alerts? Comes out of the box with Stackdriver monitoring.

My developers can edit the Kubernetes resources through the Google Cloud GUI.

Deploying to an environment is as simple as pushing a docker image with the
correct tag and letting Keel take care of the rest.
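Sketched with a made-up registry path and tag; per the Keel setup described
above, the push itself is the deploy:

    docker build -t gcr.io/my-project/api:v1.4.2 .   # build the release image
    docker push gcr.io/my-project/api:v1.4.2         # the push is the deploy:
    # Keel sees the new tag and rolls the Deployment; no kubectl step needed.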

We have looked at alternatives, including bare metal, MULTIPLE TIMES, but in
the end, we keep on deciding that Kubernetes is doing a lot for us and we do
not want to stop using it.

------
mooreds
... you just need heroku.

I remember chatting with someone intimately familiar with k8s and docker. We
were talking about an app I was working on which deployed to heroku. He asked
how many dynos and I told him (it was under 10) and he said: "yup, you'll not
need k8s".

10 dynos and a big database can serve an awful lot of users.

~~~
joevandyk
10 of the large instances cost $5,000 a month. Not insignificant. In my
experience, the smaller ones don't run larger Rails apps very well.

~~~
preordained
$5000 a month could be a single employee's wage. Just pretend you have a new
hire named "Heroku" who is generating immense value for you. If you are a one
or two man shop, sure...but if you find yourself spending inordinate amounts
of man hours toying with infrastructure, I think you'd need to ask what you
are really saving.

~~~
noneeeed
Yeah, I've been frustrated so many times over my career when people get
fixated on the sticker price of some SaaS/PaaS and instead insist on wasting
ridiculous hours building something, or gaffa-taping some open-source solution
together, that then has to be maintained.

I've been in meetings where the combined cost of the time taken to discuss
whether we should use a thing was more than the cost of the thing. Utterly
infuriating.

Even when I worked for an agency there seemed to be an automatic discounting
of the cost of people's time vs spending actual cash (and cashflow wasn't an
issue).

I appreciate that there are often good reasons to DIY, but when there are not,
I will always favour something off-the-shelf unless it is significantly more
expensive.

~~~
superhuzza
I once worked at a company where the CEO would change the CRM every eight
months or so, based on what kind of deals he could negotiate. Can you imagine
how much time was wasted migrating, remapping and relearning the new
systems??!

------
wolframhempel
A good rule of thumb is: "plan for scale, but don't implement until needed".
True, you don't want to engineer yourself into a dead end by designing a
system that can't be parallelized, but I've also seen so many startup founders
(including myself) design for massive scale that won't arrive for years,
introducing complexities that slow down development in the exact phase where
one needs to be the most agile.

~~~
ChikkaChiChi
This is good advice. It's all about risk appetite. We should be identifying
and communicating both the potential for over-optimization and the risk of
not being scalable. You can only hope to hit the sweet spot if you have
clarity on both as potential outcomes.

------
avaika
Well, it's all about a golden mean, as usual.

The mindset shouldn't be about building an ideal infrastructure; it should be
about having a reliable infrastructure instead. That surely doesn't require
having every cool new thing advertised on HN within the last 2 weeks. But a
fully automated pipeline for code delivery, and configuration as code, are
essential. It doesn't take that much time (especially if you reuse one of the
thousands of examples on GitHub), and it will save you later. Even for a
single node on Linode.

Even though it won't help you build new features and attract new users, just
think of it as a necessary action to keep your existing users. Nobody wants to
use the thing that isn't available because a developer messed up a deployment
command, didn't notice it, and left for home.

------
tim333
8 days ago also, 81 comments
[https://news.ycombinator.com/item?id=19299393](https://news.ycombinator.com/item?id=19299393)

After the last time it was posted, I watched a video of the NomadList guy,
Pieter Levels, talking about it, and he said:

>I've woken up so many times at 4:00 a.m. to just check if my website down and
I have to do all this stuff and then I'm awake for three hours because the
server crashed.
[https://www.youtube.com/watch?v=6reLWfFNer0&feature=youtu.be...](https://www.youtube.com/watch?v=6reLWfFNer0&feature=youtu.be&t=2127)

So maybe just PHP on Linode has some drawbacks.

------
geggam
Enterprise systems must employ hundreds of people; therefore we must have
complex stacks with multiple vendors providing services.

~~~
rawoke083600
/s ??

~~~
geggam
my current situation :)

I wish it was sarcastic

------
mindcrime
TFA makes a more-or-less valid point, albeit one that's been made quite a few
times before. My problem with this is that the headline elides a lot of the
nuance involved in these discussions. For example, once you read TFA, you
realize that the author is talking specifically about very early stage
projects, where traffic and expectations are minimal. In that case, yes, it's
probably correct that you don't need a lot of complex infrastructure.

But as even the author allows, at some point you _do_ need this stuff. The
real truth is closer to "You don’t need all that complex/expensive/distracting
infrastructure... until you do."

The other thing I might posit, although I haven't sat down and worked out a
complete argument for it, is that sometimes, even at a very early stage, a bit
of more complex infrastructure (automation in particular) can be very
helpful... specifically when it serves to allow you to run more experiments
per unit of time / effort.

~~~
tim333
For the tweet in TFA

>That single @linode VPS takes 50,000,000+ requests per month for
[http://nomadlist.com](http://nomadlist.com) ,
[http://remoteok.io](http://remoteok.io)

So it's not that early stage. I think remoteok.io is currently the world's #1
remote-working site. ("Remote OK is the #1 remote jobs board in the world
trusted by millions of remote workers", it says.)

------
ngrilly
I like the simplicity advocated by Pieter Levels.

The fact that he uses PHP with PHP-FPM solves the problem of deploying a new
version (starting the new version, switching new connections to the new
version, draining existing connections, stopping old version).

But when using a single host machine, you still have the issue of updating the
kernel and the OS, and this is sometimes better done in an "immutable" way, by
setting up a new VM and switching traffic to it. This is where things become a
bit more complex. You have to use things like AWS EBS or Google Cloud
Persistent Disk, detach them from the old VM and reattach them to the new VM.
You also need to use floating IP or a load balancer.
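For a rough idea of what that swap involves on AWS, using standard AWS CLI
calls (all IDs below are placeholders):

    # Move the data volume and the floating IP from the old VM to the new one.
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-NEW --device /dev/sdf
    aws ec2 associate-address --instance-id i-NEW \
        --allocation-id eipalloc-0123456789abcdef0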

In other words, baby-sitting the machine and its OS, either done manually or
automatically, is a real pain.

Maybe the real simplicity lies in using something like Google App Engine,
Heroku, Clever Cloud or Scalingo.

------
crdoconnor
I used to think that resume driven development was nothing but a curse but
I've softened my stance on it a little.

Realistically developers are compensated by a range of different things - free
gourmet meals might be one - another even more important one being career
development.

If you let developers who want to use kubernetes use kubernetes even if it's
not strictly necessary it might be a net positive for the company.

Hell, even if kubernetes is a slight negative compared to the simpler
equivalent it could be a net positive because it gave somebody who wanted it a
career boost from experience with a "hot" technology which made them happy.

Now, only if resume-driven development is going to cause a massive headache
would I be seriously against it, provided we walk into this situation with
open eyes.

~~~
jjeaff
There's also something to be said about not having to scramble to configure
things once you outgrow that simple VM setup.

Obviously, you shouldn't prematurely optimize, but you also shouldn't wait
until you are getting multiple hours of downtime every day while you are
scrambling to get a better solution in place.

And I think many times we underestimate the server power we will need as we
grow.

After a few hundred users, I used to extrapolate that I would never really
need more than a large VM and a database instance. But then we added more and
more functions and features, and more and more compute power is needed per
user now.

Then we started running into issues where certain procedures would take too
long, so they needed to be queued; well, then you have the overhead of a
queuing system, and on and on.

It's kind of the same thing that happens with companies as you hire employees.
You think: oh, we have 5 developers now, we will never need more than 15 devs
and some customer service and sales. But then you need someone to handle HR,
and maybe a bookkeeper, and then people to manage the non-developers, and then
someone to manage the project, and it just kind of grows exponentially.

------
gfodor
The elephant in the room here is that this is talking about projects made by a
single person. The infrastructure mentioned is only in part about delivering
reliable service; it's also about enabling deployments and concurrent work. If
you go from one to two people, you're going to be thankful you set up a
continuous delivery pipeline and the usual things needed to ship code
consistently, like a working CI build and test suite. Even with one person,
making it easy and reliable to deploy changes without some crazy manual steps
is a good idea.

(I agree with the whole "don't run k8 et al" for your side project, though
obv.)

------
skybrian
For hobby web apps that fit within the limitations of the free tier, I found
deploying to App Engine to be simplest. It's hard to beat free, and this lets
you keep the app running for many years without maintenance.

------
myth2018
It may just be my impression, but it seems that articles proposing a return to
simplicity are appearing here more often. I've just commented on another one
questioning today's richer front-ends.

This article resonated with me as well.

We are currently running a fairly busy web app on a single Linux cloud-based
VM instance, occasionally raising new instances during higher loads, with a
deployment pipeline based on small Python and shell scripts. Maybe rudimentary
by some current standards, but it's been working sufficiently well.

------
jrochkind1
While I agree that some people have much more complicated infrastructure than
they need, and much more complicated infrastructure management tooling than
they need... refraining from automating it and doing it all manually sounds to
me like saying "Why are you writing tests? Your users don't care about tests,
they don't care about how the code gets written, they just care about the
product, you're wasting your time writing tests." Yeah, but, um.

~~~
jjeaff
You also don't need to write tests early on, especially if you are still
validating your product. For the first year or so of our service, we just
manually tested the important features after every deploy.

~~~
jrochkind1
That sounds miserable to me, but it may be that we are working on different
platforms where the cost/benefit of tests differ.

------
sbhn
If it's just a message you want to share, such as 'delete all docker
containers and images', why hire a team of specialists and pay for only the
best in complete server infrastructure, when you can just get it for free from
GitHub Pages?
[https://seanwasere.com/delete-all-docker-containers-and-images/](https://seanwasere.com/delete-all-docker-containers-and-images/)

------
rawoke083600
Could not agree more!!! Most of my side projects, some big, some small, have a
VPS, a self-managed MySQL DB, and a bash script called deploy.sh. I've seen
whole sprints devoted to fancy infrastructure... with the payoff usually not
much... except for one guy ending up owning all the magical pieces and
know-how...
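
A deploy.sh like that rarely needs to be more than a few lines; for
illustration (the host and paths here are made up):

    #!/usr/bin/env bash
    # deploy.sh - the entire "pipeline" for a small side project
    set -euo pipefail

    rsync -az --delete ./app/ deploy@myvps:/srv/app/   # ship the code
    ssh deploy@myvps 'sudo systemctl restart myapp'    # restart the service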

------
cakebrewery
Or in my case, I feel the need to learn these technologies so I implement them
to some extent, maybe that way I'm more employable. Then again, I love
tinkering with new tools and my side projects don't make me any money yet.

------
ktpsns
> The answer? Simple … a single Linode VPS.

Fully second that. I started some of my best projects on the small Intel Atom
Linux server at my residence. And we all know the garage story of Facebook
(and similar).

------
kadendogthing
"I didn't need it so you don't either. Look at how easy it is to claim
everyone else is over complicating things because I didn't have a need for
it!"

Is essentially the TL;DR of that article. Being dismissive while not really
substantially expounding on the faults of other processes is just bland
contrarianism. Bald assertions need only be met with bald assertions.

Allow me to write a retort:

You do need all of that infrastructure and you should spend even more time
making sure your pipelines are a well oiled machine to reduce deployment fears
and establish confidence in your infrastructure. Make it nice to use. That new
tool that came out? It was written for a reason, and is probably tackling a
problem you weren't even aware you'd be facing. So stick it in and call it a
day with the knowledge that you've avoided spending time on a problem someone
else has already solved.

Your turn, cynical blog person.

~~~
coleca
This ^^

I've seen / worked with a number of startups that used the author's advice and
just had a simple setup "that worked". Until it didn't.

Here's how it typically plays out: our "dev" set up "the server" for us at
<AWS/GCP/DO/Linode/etc> and everything worked fine, until the provider
restarted the server, or an OS upgrade happened, or we fired the dev we found
on Upwork and he shut down the server. Now X doesn't work. We don't know how
to reproduce what he did.

Now you are left with trying to go through someone else's bash history and
decipher what steps they used to build the server. Did they forget to tell a
service to autorun? Who knows.

I agree with OP that it's possible to over-engineer a fancy CI/CD pipeline and
matching infrastructure for a founder who just has an idea and zero users,
when you should be chasing product/market fit, esp. when you have just one
developer working on the system. However, the opposite is also true. It's
possible to under-engineer the infrastructure to the point where developer
productivity is dramatically slowed, because you're spending a significant
amount of your devs' time doing deploys and blocking other work from happening
at the same time. This can happen fast when you hire devs #2 and #3. And this
isn't even getting into the perils around security and scaling when you play
fast and loose with the infrastructure side of the house.

------
detaro
duplicate:
[https://news.ycombinator.com/item?id=19299393](https://news.ycombinator.com/item?id=19299393)

------
mruts
Maybe I'm weird, but I like servers that I can touch. Maybe if you're only a
couple-person company it makes sense to use the cloud, but just paying for
some space at a data center and having your own servers seems ideal for any
medium-sized or larger start-up.

My last job was at a financial start-up worth about 200M at the time. And our
server setup was dead simple: 8 servers and a load balancer. No containers, no
bullshit, just servers running JVMs.

~~~
tim333
Coding Horror just had a discussion on using colocation for Discourse. Seems
kind of OK really:
[https://blog.codinghorror.com/the-cloud-is-just-someone-elses-computer/](https://blog.codinghorror.com/the-cloud-is-just-someone-elses-computer/)

------
bribri
It takes a lot of experience and discipline to apply the right amount of
leverage to the problem.

------
efields
Sometimes you do. Sometimes you don't.

