
CapRover: Build your own PaaS - vincent_s
https://caprover.com/
======
xu6ahb8E
For those who have even simpler needs (like side projects, or one-dev projects),
I found that simply using Docker and git is plenty.

Basically, you can create a bare git repository on your server (`git init
--bare`) and put a `hooks/post-receive` script in it that clones the sources
into a temporary directory, builds the Docker image, and rotates the
containers. That way, you can `git push` to build and deploy, and it's easy to
migrate servers.
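
Such a hook could be sketched like this; it's a minimal illustration, assuming a `main` branch, and the app name and paths are placeholders rather than anything from the original setup:

```shell
#!/bin/sh
# hooks/post-receive -- sketch of the push-to-deploy flow described above.
# APP and the branch name are placeholder assumptions.
set -e

APP=myapp
TMP=$(mktemp -d)

# Check the pushed sources out into a temporary directory
git --work-tree="$TMP" checkout -f main

# Build the image, then rotate the container
docker build -t "$APP" "$TMP"
docker rm -f "$APP" 2>/dev/null || true
docker run -d --name "$APP" --restart unless-stopped "$APP"

rm -rf "$TMP"
```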

The added bonus is that you now have a central git repository that can act as
a backup, so you don't need GitHub or GitLab.

The main pain point, which I find dokku interesting for (and I assume CapRover
too), is zero-downtime deployment. But then, if that is critical, you probably
need something more extensive.

~~~
antoniomika
I actually developed a system similar to this but used docker compose as an
alternative to Procfiles and nginx+le to handle dynamic virtual hosting. It's
actually a golang app that will automatically provision git repos with the
necessary hooks and also allow you to exec into a container directly over SSH.
I had the thought of using docker stack to achieve zero downtime but haven't
had a chance to try that out. Happy to open source it if anyone is interested
in using it.

~~~
1337shadow
The problem with nginx-based setups is that one wrong container option (label,
env var, etc.) can cause a syntax error in the nginx configuration file, and
then nginx won't start, so all services will be down. I loved Apache, I loved
nginx, but Traefik has been the only HTTP server on my tech blog for the last
few years...

Nginx is made to load a configuration: you don't get the auto-configuration
that comes with service discovery. Service discovery is doable through a
standard HTTP API over /var/run/docker.sock (or /var/run/podman/podman.sock in
more advanced systems).
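
For illustration (a sketch, not from the comment), that discovery API is plain HTTP over the unix socket, so you can inspect what a service-discovering proxy sees with curl:

```shell
# List running containers straight from the Docker Engine API
# (requires read access to /var/run/docker.sock).
curl --silent --unix-socket /var/run/docker.sock \
  http://localhost/containers/json
```

This is the same data Traefik's docker provider watches to generate its routing table.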

As such, service-discovering HTTP servers are more reliable because they are
built with service isolation from the ground up: if one service has a bad
value, that service won't work, but it won't block the other services.

Nginx is too far behind now. It might have some service discovery module, but
even then, when your configuration is auto-generated (as with snapshot
testing), you still have to read the configuration it generates. Traefik
offers a great dashboard for this, so it's even more pleasant than reading a
configuration file you didn't even write ;)

For sure, I bet that in a patch to something like CapRover (or your own
solution), changing nginx to Traefik would end up removing quite a lot of code
;)

I'm not really sure what you mean by "achieving ZDD". ZDD is complicated any
time there's a data schema migration, not to mention that container deployment
traditionally means "delete a container: KILL a process" and "create another
one, like cattle". uWSGI, for example, can gracefully renew every worker
process on SIGHUP, but re-creating the uWSGI process in another container
defeats that. Maybe you have some kind of blue-green deployment, maybe even
canary; in that case I wonder if basing a container platform on configuration
files such as nginx's would really get you to ZDD. Would love to read more
about your setup.

~~~
antoniomika
These are all extremely valid points! I guess I should've clarified how things
work a bit (I'm currently in the process of documenting how it works and how
to deploy it).

Ingress management is done with the very useful nginx-proxy[0] service, which
loads virtual host definitions directly from the Docker daemon and sets
virtual hosts based on env vars set on the container. Configuration changes
are loaded using an nginx reload, so even if there were an error in the
configuration (which I personally have never run into, though it's likely
possible), it wouldn't take effect. LE is then handled using the nginx-proxy
letsencrypt companion[1]. My goal was to abstract away reverse proxy + cert
management, and I think any solution (Traefik, Caddy, etc.) would work here;
I'm more than happy to change it. I more or less just went with nginx since it
was easy and I didn't have to do any configuration other than adding it to a
docker-compose file.
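
For reference, the nginx-proxy pattern boils down to something like this (a hedged sketch; the image tag, hostname, and app image name are illustrative, not from the comment):

```shell
# Run the proxy; it watches the Docker socket and regenerates
# nginx vhosts from each container's VIRTUAL_HOST env var.
docker run -d --name nginx-proxy \
  -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  nginxproxy/nginx-proxy

# Any app container started with VIRTUAL_HOST is routed automatically.
docker run -d -e VIRTUAL_HOST=app.example.com my-app-image
```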

I guess my goal wasn't to handle ZDD for stateful applications. As you
mentioned, there's a plethora of issues that arise and make that type of
application much more difficult to do ZDD. I tend to write a lot of stateless
web apps for simple use cases and like to have an easy way to deploy them. In
the primitive sense, creating a new container, waiting for it to be ready, and
then swapping the upstream used for the reverse proxy with the new pointer
would be ideal but isn't supported directly with docker-compose (as
mentioned).

Happy to talk this through also, especially if you'd be interested in
contributing!

[0] [https://github.com/nginx-proxy/nginx-proxy](https://github.com/nginx-proxy/nginx-proxy)

[1] [https://github.com/nginx-proxy/docker-letsencrypt-nginx-proxy-companion](https://github.com/nginx-proxy/docker-letsencrypt-nginx-proxy-companion)

~~~
1337shadow
Actually, my experience with broken configs comes from using nginx-proxy
(along with the LE companion) in production for 18 months or so. That and
other inconveniences pushed me to give Traefik a try, and I was really
delighted: no dealing with configuration templates, and I can view the
configuration from the web dashboard instead of having to ssh into the nginx
host to get the config.

Traefik and Caddy are not comparable in my opinion, because Traefik was
literally made for self-configuration based on service discovery; see an
interesting discussion here:
[https://www.reddit.com/r/selfhosted/comments/gq90aw/traefik_...](https://www.reddit.com/r/selfhosted/comments/gq90aw/traefik_v2_or_caddy_2_for_docker_setups/)

I completely agree with you about ZDD: 99.9% uptime is plenty for 99.9% of
projects, and trashing the container to start a fresh process from a fresh
system build does come with other advantages. Sure, any kind of blue-green or
canary deployment would be really nice to see, but it wouldn't seem to create
a lot of value for 99.9% of projects, and for the rest, well, there's k8s,
which deals with clusters.

Currently I'm just using a bunch of Ansible roles with an Ansible command-line
wrapper, so I'll run `bigsudo yourlabs.netdata @somehost` and it'll
auto-install yourlabs.traefik if not already there, which will auto-install
yourlabs.docker if not already there, and basically just leave me with
[https://netdata.somehost.fqdn](https://netdata.somehost.fqdn)

Thank you for the invitation to contribute! As you probably guessed, I'm a bit
like you in the sense that I cannot live without building my own system, and I
have made different design decisions:

\- Python for the server side; I find it more fun than JS, nothing we can do
about that

\- Python for the client side, because we maintain our crazy isomorphic
component library in Python

\- Not Docker, but podman, which can run rootless and daemonless (though we
need the daemon that provides a Docker-compatible API to have Traefik service
discovery)

\- Not docker build, but something I'm cooking up on my own ("shlax") that
suits my taste a lot better, and that uses buildah, which can build rootless

\- Not docker-compose, but shlax, which aims to support a broader range of use
cases (such as backup/restore)

\- The thing I'm building is first a really KISS Sentry alternative, then also
a GitLab alternative, and I'm in the process of adding CI to it... but I've
paused that until I finish my little Python lib ("shlax") that replaces
docker/compose and Ansible, so that the CI test doesn't rely on tech I'm
trying to move away from, and so that it can build/test/deploy itself.

So I suppose our goals and design decisions are a bit too different, but I can
assure you that I'm always happy to see CapRover featured on social media, and
I'm always happy to discuss rare passions like this. If you're looking for a
crazy friend recoding his entire little world just to talk about these kinds
of things, feel free to send me an email or give me a call ;)

------
Smerity
I have been using CapRover and love it. I donate to their OpenCollective[1].

For those noting "why don't you just use Linux / k8s / ...", that feels close
to the original complaints re: Dropbox on Hacker News[2]. I've run clusters
hundreds of nodes in size myself, but CapRover gives me the pleasure of not
having to sweat the small details. You can get this from other platforms, but
usually there's a dollar cost tied to each option. When I'm experimenting, I
don't want a dollar cost attached.

Deploys are trivial. The default nginx setup is most of what I'd want to do.
LetsEncrypt is a single button click. Monitoring is included by default. If I
need to scale up, everything I'm pushing is Docker containers. If I want to
experiment, there's great fun in looking at the included "One click apps /
databases" and just playing around.

CapRover is just a lovely freeing experience that will do what you need :)

[1]:
[https://opencollective.com/caprover](https://opencollective.com/caprover)

[2]:
[https://news.ycombinator.com/item?id=8863](https://news.ycombinator.com/item?id=8863)

~~~
GordonS
Have you used Dokku before, or did you look at it before settling on CapRover?
(I'm just starting to look at both)

~~~
gavinray
I have experience with Dokku, Flynn (also look at this if you're looking at
Dokku), CapRover, k8s, and a tiny bit of Nomad.

I think CapRover has the best experience out of the Dokku/Flynn/CapRover
"group". Not a huge Dokku fan. Would use Flynn over Dokku again, but I'd
rather use CapRover over both.

[https://flynn.io/](https://flynn.io/)

If you're intent on using Dokku, there's a useful web console:

[https://github.com/palfrey/wharf](https://github.com/palfrey/wharf)

~~~
filmgirlcw
I would agree with that. I've used most of the various options for a DIY PaaS
and also prefer CapRover.

Cloudron [1] is actually really great, but its pricing is prohibitive (and
changes frequently) for side projects, which is when I most want to use
something like this rather than just deploying/managing k8s myself.

[1]: [https://cloudron.io/](https://cloudron.io/)

------
julianwachholz
I have been using CapRover for about half a year now on my personal server,
running multiple projects. It does what you'd expect, and the configuration
format is pretty easy to use. Using any Docker image directly works without
any extra steps: just enter the image name and it'll do the rest. I'm pretty
happy with it and will probably stay with it for the time being.

It's not the best for hosting many static pages, as you'll need an HTTP server
for each site anyway.

But my main gripe is that there is only single-factor authentication, and you
can't easily secure it beyond using a strong password and a hidden subdomain
(because of webhooks, ACME, etc., I guess).

~~~
867-5309
> you'll need a HTTP server for each site

isn't that what virtualhosts are for?

~~~
znpy
he's still in the early 90s

~~~
867-5309
awaiting TIL..

------
onion2k
Slightly nitpicky, but is something a PaaS if you run it yourself? _Anything_
as a Service isn't a service if you're running it yourself. It's just ...
infrastructure.

~~~
bomdo
Not nitpicky at all - this is an important distinction to highlight for
pointy-haired decision makers.

This product is undoubtedly the P in PaaS, but there is no service behind it.
If your company uses this as an alternative to a real Heroku/AWS/xyz PaaS, you
must have engineers on hand for 24/7 ops, scaling servers, and fixing bugs. In
my opinion, this is quite risky for anything running in production and should
not survive a cost-benefit analysis.

~~~
1337shadow
> should not survive a cost-benefit analysis

I completely disagree; the price difference between dedicated servers and even
EC2 instances is completely amazing.

This is what you get for less than $200/month with a dedicated server:

1× AMD EPYC 7281 CPU (16C/32T, 2.1 GHz), 2× 1 TB NVMe, 96 GB DDR4 ECC,
unmetered 750 Mbps

In one of my companies, the AWS bill is just completely insane: we have about
half that hardware, with really small, metered bandwidth, for more than
$800/month. Which is fine while we're on free credits.

I love working for cloud companies, it's a lot of fun, but when it comes to my
money then I never go for anything but a dedicated server.

~~~
zeveb
Yes, hardware as a service will always be much more expensive than hardware
you own. But it may be less expensive than the team you will require to run
that hardware at an acceptable service level. It very likely will be less
expensive than the opportunity cost of running your own hardware.

As an example of the latter bit, if you are running your own hardware and need
to add another host and you do not have a spare lying around, then you need to
order one. It has to be shipped. Someone has to unpack it. Someone has to make
sure that the data centre has sufficient power. Someone has to install it, its
power and its network cables. Each of these steps takes time, but also each
step is an opportunity for friction.

By contrast, with a service, you would just add a new host. Five minutes later
you are up and running. That gives you an operational nimbleness that you
wouldn't otherwise have had.

~~~
parliament32
I love how there's this myth that servers and services just blow up every 10
minutes, 24/7, and that unless you have a legion of ops personnel you're going
to get hours of downtime each year.

Servers, for the most part, just work. In climate-controlled DC environments,
hardware failures are _exceedingly_ rare. Apart from hard drives, most
hardware will happily tick along for a decade, if not longer.

Sane production-grade OSes (read: not Ubuntu) will also happily run for
literal years with zero human intervention. For obvious reasons, it's a bad
idea to not patch your systems, but things will continue to "just work" pretty
much forever unless you're running really shitty code.

For renting vs. buying servers, there are upsides and downsides. Buying gear
is far cheaper if you plan to be around for more than a year, but renting
dedicated servers gives you a lot more flexibility: to provision a new server,
you hit a button in their online panel, wait 15 minutes, then let your
deployment strategy take care of the rest.

I find it almost mind-boggling that AWS and friends have convinced people that
it's normal to spend ridiculous amounts of money for fairly "meh" service
specs in what's essentially VMs.

~~~
mm89
The points you make are fine but I think the experience becomes more painful
linearly with the number of servers you manage, since you're N times more
likely to see something happen that takes down a server. It just happens more
frequently. At some point that becomes often enough that you don't want to
deal with it anymore.

~~~
parliament32
I don't think you understand the sheer scale you need to be at to experience a
failure more often than once a month. In my anecdotal experience you'd need at
least 1k servers for that to happen... and if your company is big enough for
$2MM capex on servers alone, you can handle $100 remote hands and 30 minutes
of engineer time.

Not to mention that at that scale you have plenty of redundancy and, if your
ops team knows what they're doing, automagic failover / HA. Anything that
happens can easily "wait till Monday", no need for 24/7 anything.

------
Nextgrid
Curious as to how this compares to Dokku
([http://dokku.viewdocs.io/dokku/](http://dokku.viewdocs.io/dokku/))?

~~~
MrCheese
CapRover has support for multi-server deployments using Docker Swarm. It also
has a nice dashboard with built-in monitoring and such. There is a marketplace
of sorts with single-click deployment for certain applications.

Dokku on the other hand has support for buildpack deployment as well as
Procfile support for running multiple processes.

I prefer Dokku. The main reason is that I only need a single server for my
apps and running Docker Swarm adds complexity.

I wrote about some other differences on my blog:
[https://www.mskog.com/posts/heroku-vs-self-hosted-paas/](https://www.mskog.com/posts/heroku-vs-self-hosted-paas/)

~~~
josegonzalez
Dokku supports multi-server deployment via Nomad and Kubernetes as well.

\- [https://github.com/dokku/dokku-scheduler-kubernetes](https://github.com/dokku/dokku-scheduler-kubernetes)

\- [https://github.com/dokku/dokku-scheduler-nomad](https://github.com/dokku/dokku-scheduler-nomad)

~~~
StavrosK
Do these work well? I've never heard of either.

------
fullito
I can recommend getting into k8s with something like microk8s from Ubuntu.

You will learn k8s, and you will get the same thing these tools offer, but
with open components, industry standards, and a whole industry moving in this
direction.

I already have microk8s running at home with Argo CD. I have never had an IaC
setup that quick and that simple.

With Traefik you can have your domains as well. Then just go to GitLab (or to
GitHub now; I haven't checked yet whether I want to migrate back) and register
your microk8s cluster as a build runner.

That's it, you are set. Quite a future-proof setup: modern, stable, easy to
use.

~~~
MrCheese
How does the deployment process differ from CapRover/Dokku?

Deploying a simple app with a database on Dokku is something like:

1\. Run a command to create a database of your choice (Postgres, MySQL, Redis,
etc.)

2\. Run a command to create the application

3\. Run a command to link the database to the application

4\. Push to the Dokku repo to deploy the application
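
Concretely, those steps might look like this (a hedged sketch: the app and database names are placeholders, and the postgres plugin is assumed to be installed):

```shell
dokku postgres:create mydb      # 1. create the database
dokku apps:create myapp         # 2. create the application
dokku postgres:link mydb myapp  # 3. link the database to the app

# 4. push to deploy (run from your dev machine)
git remote add dokku dokku@your-server:myapp
git push dokku main
```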

~~~
zerubeus
Don't bother; k8s is like a dragon-level boss compared to something like
dokku. If you aren't really looking for autoscaling, or you're running a
business alone, don't go for k8s.

------
hardwaresofton
Another great alternative in this space is dokku[0]. Haven't tried CapRover
recently but it looks fantastic.

[0]: [https://github.com/dokku/dokku](https://github.com/dokku/dokku)

~~~
Longwelwind
I've been using Dokku for a side project, and it's a really nice tool! My only
gripe with it is that it's not easy to deploy an existing Docker image. You
have to pull it, then transmit it over ssh with "docker save" and "docker
load".[1]

Without this limitation, it would be easier to move the Docker image building
from the Dokku server to a CI. On top of that, deploying existing software
onto your machine would be easier.

[1]
[http://dokku.viewdocs.io/dokku/deployment/methods/images/#deploying-an-image-from-ci](http://dokku.viewdocs.io/dokku/deployment/methods/images/#deploying-an-image-from-ci)

~~~
xu6ahb8E
dokku is also meant to build a custom image on deploy: rather than using
Heroku's buildpacks, you can put a Dockerfile at the root of your project and
it will be used instead.

So basically, you could put a Dockerfile containing just FROM and MAINTAINER,
referencing the image you want to use in the FROM, and dokku will download and
run it on `git push` (provided it can access the image registry).
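
A sketch of that trick (the registry path is a placeholder): write a one-line Dockerfile that points at the prebuilt image, commit it, and `git push dokku main`:

```shell
# Generate the minimal Dockerfile; on the next push, Dokku will pull
# and run this image instead of building anything.
cat > Dockerfile <<'EOF'
FROM registry.example.com/myteam/myapp:latest
EOF
```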

~~~
Longwelwind
This is good at first, but I deployed on a very small machine that couldn't
handle the live application plus the Docker image build, which is why I wanted
to build the image in my CI pipelines.

~~~
hardwaresofton
+1 for this -- requiring the Docker image to be built/managed on the machine
it's being deployed on is the simpler architectural choice (easier to debug,
etc.), but it doesn't necessarily make sense for production.

I wonder if there's a ticket about this on dokku already.

[EDIT] - Couldn't find anything... some tickets about how the containers are
built and about changing the base image, but not much else.

I wonder if you could jury-rig something like kraken[0] and make sure wherever
you're building images is a peer or something... Of course, the simpler
solution might be to add a CI step that just pushes the image (via the working
`docker save` method) to the deployment machine(s). Maybe if you have a
staging environment, let CI push _there_, and then if that machine is peered
(via something like kraken) with production, production will get the image
(though it may never run the image).
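
The `docker save` route could be a single CI step along these lines (the host and image names are assumptions, not from the thread):

```shell
# Build in CI, then stream the image to the deployment machine over ssh;
# `docker load` on the far end makes it available to run locally there.
docker build -t myapp:latest .
docker save myapp:latest | ssh deploy@prod-host 'docker load'
```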

[0]: [https://github.com/uber/kraken](https://github.com/uber/kraken)

~~~
josegonzalez
You can deploy a custom image via the `tags` plugin.

------
dan_can_code
This tool looks really cool. The section where it listed reasons for using it
really struck a chord with me. I am not the most comfortable using all the
Linux tools when it comes to setting up servers / system administration. This
product looks to be a really good bridge between devs who dev primarily, and
those with skills in deployment. Super cool. Thanks for sharing, I will be
using this!

~~~
esquire_900
Isn't that a setup waiting for disaster to happen? Everything is happily
running up to a serious production problem, at which point you miss the
experience to debug and fix it.

~~~
dan_can_code
Then I will have to learn. I feel most productive and comfortable working
on a hobby project if I don't need to spend all of my time dotting the i's and
crossing the t's with cli and configuration files. I just want to build. I
don't see the value investing my time learning the ins and outs of tooling
that I will use maybe a few times when it makes minimal impact, as it comes
with an opportunity cost for me elsewhere. That's just me though, I have no
gripes with people who love to tinker with their set ups. It just isn't my
thing.

~~~
esquire_900
And that's the problem; by then it's too late (i.e. you never took the time to
back up the db). It's about finding the balance: writing a PHP app shouldn't
involve studying C compilers and CPU design. But I think these tools
(whichever you decide to use) are such an essential part of what you're
building that "outsourcing" them as much as possible might be a bit ignorant.

That being said, as long as it works, it works. And if your app is small
enough never to get into the grey waters, all the better.

~~~
dan_can_code
I absolutely agree, were it something commercial these things would need to be
considered.

Are there any tools you recommend looking into, were I to take the next step?
I don't plan on depending on CapRover to fill gaps in my knowledge for too
long, but for now this product really is a good start for me.

~~~
esquire_900
No problem in depending on CapRover, as long as you are at least somewhat
familiar with the tools it sets up for you. Combine that with some crude
generic UNIX skills (quickly analyze cpu/ram/disk usage, search in logs,
transfer files, modify configs etc.) and you're way better prepared.

Ironically it's best learned "on the job" (for me at least); just try to
deploy your app from scratch. Play around with nginx/apache, letsencrypt, your
db stack, packages installation etc. and get a working product.

I'm no expert by far in any of this, but think that knowing "just enough"
about these tools really helped along the way. Up to the point where I can now
use CapRover like tools with some degree of confidence, closing the full
circle ;)

~~~
dan_can_code
Thanks for the info. I look forward to learning from it. I will try the 'on
the job' approach as that's how I learn most effectively. :-)

------
wilsonfiifi
CapRover is a great tool to have in one's kit, but coming from Dokku [0] I
think it lacks a certain flexibility when deploying applications with worker
processes. You can get around this by creating multiple "captain-definition"
files in your project, but I prefer Dokku's adherence to Heroku's "Procfile"
approach. However, CapRover's web admin/dashboard and Docker Swarm features
are a nice touch.

[0] [https://github.com/dokku/dokku](https://github.com/dokku/dokku)

------
danr4
I'm using CapRover on a personal server of mine and it's pretty awesome. I use
it for side projects, tinkering, and tooling (analytics, Bitwarden). It's very
stable, with lots of "one click app deploys" of popular open source software.

Combined with Portainer (which you can install with CapRover), I'm improving
my Docker knowledge. I'd recommend it for someone starting out with containers
and "home labs".

------
gnud
CapRover was mentioned in the thread about the Coinbase stack [1] - I guess
that's why it's popping up here now.

As I said in that thread, this looks interesting, but the installation
instructions put me off a bit: open a port on your server, don't change the
default password `captain42`, then run a CLI tool from your dev machine.

1:
[https://news.ycombinator.com/item?id=23460066](https://news.ycombinator.com/item?id=23460066)

------
umaar
A few weeks ago I mentioned on HN [1] I was looking for something where I can
take a $5 VPS and install a bunch of Node.js apps easily. I did this with
DigitalOcean + CapRover and it was extremely smooth. Initialising the app,
enabling SSL, deploying - everything just worked. It was great.

I was hoping to move over, but I won't just yet. I was hoping for two-factor
auth support [2]; the dashboard is publicly facing and only guarded by a
password.

There was also an issue which concerned me: 'netdata image in use is spyware'
[3] - however, I have not yet digested that thread and its related discussions
enough to understand whether it's a genuine issue.

Finally, I was hoping to understand more about the motivations of the project.
Who's funding this project? The OpenCollective [4] page shows an annual budget
of $529.55 USD? What are the long term goals? How can they sustain themselves?

I use PM2 for running some of my Node.js apps, but I can see there's also PM2
Plus and PM2 Enterprise [5] which helps me understand how they're able to
sustain the free version.

I don't mean to be pessimistic towards a piece of software which actually
works extremely well. Just want to better understand its long-term suitability
for deploying production-grade applications.

[1]
[https://news.ycombinator.com/item?id=23278095](https://news.ycombinator.com/item?id=23278095)

[2]
[https://github.com/caprover/caprover/issues/493](https://github.com/caprover/caprover/issues/493)

[3]
[https://github.com/caprover/caprover/issues/553](https://github.com/caprover/caprover/issues/553)

[4] [https://opencollective.com/caprover](https://opencollective.com/caprover)

[5] [https://pm2.io/pricing](https://pm2.io/pricing)

------
Aeolun
Since someone was happy with this before, I'd like to recommend
[https://github.com/exoframejs/exoframe](https://github.com/exoframejs/exoframe)
again, for a more console-oriented way of doing the same thing.

------
rcarmo
Shameless plug: If you don't want to use containers or are using resource-
constrained Linux boxes, have a go at
[https://github.com/piku](https://github.com/piku) :)

------
peterwwillis
Can some developers explain to me why they don't want to set things up
themselves? If you already know how to do it, it's not very time consuming. If
you don't yet know how to do it, learning how it all works only benefits your
understanding of the service you're providing, and empowers you to fix it.
It's almost like learning a new trade, and it can give you a new perspective
on how your code runs.

Maybe it's because there's so much arduous research required to finally figure
out what magic commands to run to get something to work. Would having a set of
HOWTOs that just explain the steps to set up each component work as well for
you as a turn-key solution? (It would be great if we could start a trend of
people writing a _HOWTO.md_ after writing their _README.md_ )

~~~
filmgirlcw
I mean, speaking for myself: I know how to set things up myself and have done
it plenty of times, but sometimes, for side projects and playing around, I
just don't want to go through that process. I like having a dashboard I can
log in to, and I like installing one thing rather than having to set up my
environment from scratch the same way every time. And I like that if I have to
let someone else have access to something, I can give them something without
having to pray they don't break something, and without having to teach them.

Honestly, I do a lot of the setup work for my actual job — including
documenting/creating demos and examples for others — when I’m doing my own
side stuff, I really just don’t want to bother, especially if it isn’t in
production and it’s just on the home lab.

To use a crude analogy, I could build my own robust NAS with hardware
components and a BSD or Linux distro optimized for storage and acting as a
home server with better performance at a lower price than a Synology system.
Or I could continue to use my 8-bay Synology NAS (that I really want to
upgrade), because the appliance nature is worth the extra cost and pure
performance deficits. There was a time I took great pleasure in maintaining
all that stuff myself but honestly, I just want to plug it in and know it’ll
work with all my machines without having to think about it.

------
omk
Looks solid. I am all set for the wave of fully controlled PaaS solutions
coming our way. Most of the innovation has stayed locked behind closed doors
at AWS, Microsoft, and the other major cloud companies. This brings in more
control and an extra dimension to optimize.

------
unixhero
Looks like an open, free, very very early re-implementation of a solution such
as Cloudron.io .

Very cool!

------
vincent_s
Background info:
[https://www.freecodecamp.org/news/how-i-cut-my-heroku-cost-by-400-5b9d0220ce13/](https://www.freecodecamp.org/news/how-i-cut-my-heroku-cost-by-400-5b9d0220ce13/)

------
sandGorgon
This makes me so happy - to see a PaaS on Docker Swarm!

However, I wish CapRover had built this experience on top of Kubernetes (or
k3s) instead of Swarm. The future of Swarm is really unknown, and the
ecosystem is undoubtedly behind k8s.

~~~
mromanuk
But Swarm is much simpler. I'm concerned about its future too.

------
greaber
How does CapRover work with databases? Does it replace something like RDS?

------
yig
What does PaaS stand for? The website doesn't say. Platform? Product?

~~~
chasd00
Platform as a Service. It's like a step up in abstraction from IaaS,
Infrastructure as a Service. The lines begin to blur near the edges, though;
it's a marketing thing really. Just like "cloud" means many different things
to many different people, but it's a simple one-syllable word perfect for
brochures.

------
explodingcamera
Does CapRover support multiple "ingresses"? That is, can I have an external
load balancer balancing across my cluster's servers? I can't seem to find any
info on that in their docs.

~~~
mromanuk
Should be possible; it uses Docker Swarm, which can handle multiple ingresses.
[https://docs.docker.com/engine/swarm/ingress/](https://docs.docker.com/engine/swarm/ingress/)

You can expose ports on different nodes and point your external LB (e.g.
Cloudflare) at them.

------
chris_st
I'd be interested to know how people who use this kind of thing (or Dokku,
etc.) keep their OS, database, applications, etc. up to date, for security
reasons if nothing else.

~~~
progx
You update your image, stop the container, and start the container (with the
new image). That's all.

You could create complex containers that update with security fixes without
restarting, but it's easier to update an image (e.g. once per week/day) and
auto-restart the containers.
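
A minimal sketch of that cycle, with placeholder image/container names:

```shell
docker pull myapp:latest   # fetch the rebuilt image
docker rm -f myapp         # stop and remove the old container
docker run -d --name myapp --restart unless-stopped myapp:latest
```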

~~~
mikepurvis
I've been using Portainer for managing a handful of basic containers on my
home server (ZoneMinder, Deluge, Jellyfin, UniFi controller). Overall I really
like it, but some kind of feature to do this is probably the #1 thing I'm
missing. It even lets you launch "stacks" from a compose file in a git repo,
but it doesn't have any facility to remember that info or do a redeploy, so
you're basically starting from scratch every time:

[https://github.com/portainer/portainer/issues/1753](https://github.com/portainer/portainer/issues/1753)

------
lessname
I wonder what happens if something (like MySQL) crashes for some reason: would
something like that be easier to handle?

~~~
julianwachholz
It will restart the container if the Docker healthchecks fail.
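For context, a healthcheck can be attached to a container at run time; note that plain Docker only marks the container unhealthy, while Docker Swarm (which CapRover runs on) replaces unhealthy tasks automatically. A minimal sketch, assuming a stock MySQL image and a container name `db` chosen for illustration:

```shell
# Attach a healthcheck: mysqladmin ping succeeds once the
# server accepts connections.
docker run -d --name db \
  --health-cmd "mysqladmin ping -h 127.0.0.1 --silent" \
  --health-interval 30s \
  --health-retries 3 \
  -e MYSQL_ROOT_PASSWORD=example \
  mysql:8

# Inspect the current health status (starting/healthy/unhealthy):
docker inspect --format '{{.State.Health.Status}}' db
```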

------
wiradikusuma
Can I say it's a poor man's Kubernetes?

~~~
zerubeus
Sort of, yes. For personal projects and small businesses I would go for
something like this or dokku, and try to split services as much as I can,
rather than managing k8s on my own.

~~~
wiradikusuma
But if the K8s cluster is managed by a cloud provider (e.g. AWS, Google, even
DO), is this/Dokku still worth it (easier)?

~~~
zerubeus
Yes, dokku is still worth it
[https://news.ycombinator.com/item?id=23460066](https://news.ycombinator.com/item?id=23460066)
and, most importantly, it's much easier.

------
jaggs
I can see exactly why this exists. It won't suit hard-core roll-your-own
developers (any more than WordPress theme generators suit DIY web developers),
but for those who may be light on skills and time, this could be a superb
way to get an MVP going really quickly and easily. Very nice tool to fill a
gap.

------
monkeydust
(Non-dev) I'm looking to run my own installation of
[https://github.com/excalidraw/excalidraw](https://github.com/excalidraw/excalidraw)
for my team. Could I use this app? Is it straightforward to do? How would I
estimate the costs?

~~~
lapnitnelav
As long as you have a (sub)domain you can use and a bit of free time, you
could go to your favourite cheap 'cloud' provider, e.g. Digital Ocean,
Scaleway, etc.

Spin up a cheap instance (DO has a preconfigured image ready to go), git pull,
and `caprover deploy` to test. I'm pretty sure even the cheapest ones will be
able to run that.

------
ryanmarsh
A friend often tells me “you’re only ever one CloudFormation template away
from your ideal PaaS”.

------
zerubeus
Ahaha, I like how this comes right after the thread about k8s
[https://news.ycombinator.com/item?id=23460066](https://news.ycombinator.com/item?id=23460066)

------
shuringai
How is this any different from setting up Grafana, nginx-proxy with the
letsencrypt companion, etc. with docker-compose and just replacing my app image?

~~~
ecoqba11
I was thinking the same. I guess it has a user friendly UI and monitoring
features at the same time?

------
pinfisher
Is there anything like this for non-web-based applications? I'm looking for
hosting for some Python apps that pre-process data before delivery to clients.

~~~
rcarmo
Have a look at piku ([https://github.com/piku](https://github.com/piku)). I
built it especially for that kind of thing.

------
97-109-107
Quick tip on the messaging on the homepage: replace _a developer who..._ with
the feature highlighted (e.g. simplicity, batteries included, etc.)

------
ecoqba11
Is there something similar and user friendly, but built on top of kubernetes?
I like the one-click app approach.

------
risyachka
Can I deploy apps on different machines with this? E.g. can I run my app on 3
servers with a load balancer?

~~~
mromanuk
Yes, that is easy and works right out of the box. You should deploy it with 3
nodes and let the manager work as the LB (it's a single point of failure,
though). A more complex solution would be deploying it behind an external LB.
EDIT: Rephrased

------
mleonhard
Does CapRover support all configuration via files? Can I use it for cattle
servers, not pet servers?

------
appleflaxen
This is a bit like sandstorm.io, which is also an open-source platform for web
applications. I've used it for a couple of years and love it. It's cool to
see other people exploring the same software space.

------
netmonk
I find it surprising that they chose nginx as the routing/reverse proxy, while
Traefik does the job seamlessly in a matter of minutes, with the benefit of
Docker container labels for live configuration and full Let's Encrypt
integration.
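For illustration (a sketch, assuming Traefik v2 watching the Docker socket, an entrypoint named `websecure`, and a Let's Encrypt certificate resolver named `le` already configured; the router name `myapp` and `example.com` are placeholders), the per-container labels look like:

```shell
# Route example.com to this container and request a Let's Encrypt cert;
# Traefik picks the labels up live via the Docker socket, no reload needed.
docker run -d --name myapp \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.myapp.rule=Host(`example.com`)' \
  --label 'traefik.http.routers.myapp.entrypoints=websecure' \
  --label 'traefik.http.routers.myapp.tls.certresolver=le' \
  myapp:latest
```

A bad label on one container only breaks that container's route; the proxy itself keeps serving the others, which is the isolation point made upthread.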

------
ev0xmusic
Give Qovery (qovery.com) a try: a very simple Container-as-a-Service
platform for developers.

------
Legogris
I can't not read this as "CA Prover" and think it has something to do with
PKI.

------
bovermyer
I like the idea, but I will not support the use of nginx.

~~~
jchook
Care to elaborate on this? Which http server do you use?

~~~
CSMastermind
HAProxy is what we switched over to at work. I'm not informed enough to give
you the pros and cons of each.

------
ComplexSpidey
Awesome.

