Cockpit – Integrated, glanceable, web-based interface for servers (cockpit-project.org)
347 points by gilad 17 days ago | hide | past | favorite | 126 comments

Shameless plug: I made a lightweight real-time monitoring tool using WebSockets, and it's open source: https://github.com/elestio/ws-monitoring/ Contributions are welcome :)

Is there a reason why successfully running software on a server is so much harder than running software on your phone?

I don't think it needs to be this way. Someone needs to figure out server software for consumers.

Just like PCs became more accessible, so should servers!

Edit: Brainstorming here: specifically, I'd like a more accessible UI, automatic updates, sensible defaults on all apps, an easier way to get started and so on.

Because how you handle downtime and respond to problems is vastly different when you're serving one person, who will generally be actively using the device less than half the day (or far less), versus thousands or millions of people, where at any given moment multiple people might be actively using it.

> Edit: Brainstorming here: specifically, I'd like a more accessible UI, automatic updates, sensible defaults on all apps, an easier way to get started and so on.

To some degree, that's sort of like saying "I can drive a car, and cars are simple, why isn't driving an 18-wheeler truck as simple, or a cargo ship, or a cargo plane? I don't see why it has to be more complicated than a car."

I think of it more like a pickup truck: It's easy to drive and gives consumers all the cargo space they would ever need. Not everyone needs to run Netflix's backend. But for things like a blog, an email server, a matrix server, controls for your smart home, and so on, it would be preferable to have users control them.

Cloudron, Sandstorm, Synology, etc. exist for exactly those people, as do Canonical's Snaps for those more comfortable with a CLI.

I claim that it's possible to design cargo ships and planes that can be driven by anyone after short training, comparable to getting a driving license.

It all comes down to how much you want to invest in being as safe as possible from accidents. The cost of a cargo ship or plane accident is very high, and there's no compelling reason why everyone should be able to drive those vehicles. Therefore it makes sense that they are driven by professionals and designed for professionals.

If your server has millions of users and provides such value that downtime is not an option, maintaining it should be done by professionals and the maintenance tools should be designed for professionals.

However, that's not the case for every server, and the cost of a failing "empty" server is basically zero (unlike an empty cargo plane).

I think it would be interesting to see software designed not around centralized servers, nor PCs, but PSs = Personal Servers, where user data lives on their own servers and services only link and communicate between them.

This is exactly what we do :) check out our demo at https://cloudron.io. Another user commented that there is no market for this, but this is not true, since we exist and are doing well. It's a niche, but that's expected since this is a developing market.

Just wanted to pop in and say that Cloudron is excellent and I really, really love it. I discovered it a few years ago and it's just fantastic.

CapRover is excellent too but of all the various tooling I've tried over the years, Cloudron is hands-down the most polished option I've seen/used!

I started using Cloudron recently and I really like it! I came to this thread to mention it, but I see you're already here.

But it's not as simple as that, though, is it? Does this take into account things like HA? Or offsite backups? What about security? I guess this works for someone who sets it up on a single server, perhaps just for themselves. But running services is a lot more complex than what's possible with a one-click install.

Cloudron can back up to a variety of destinations including S3/GCS/Backblaze (https://docs.cloudron.io/backups/#storage-providers). I guess it depends on what you mean by "service". If you mean running something for a million users, we are not the right product. Cloudron is meant to set up services for personal use and businesses. Our target audience has 10k users max.

What is a good option for doing this on kubernetes?

I am not aware of any. Cloudron itself might run on k8s some day but so far we haven't found the right customers for that scenario yet!

Cloudron.io and Sandstorm.io both do this well. (Cloudron is very actively maintained with a lot of package updates and a very wide package library and feature set, Sandstorm is open source and adds a layer of security in assuming apps are evil or compromised.)

I do think there's probably a good market for a "just plug it into the back of your router" box that has one of these pre-installed and ready to go.

Windows Home Server used to be an attempt by Microsoft to make a home server accessible to the average user crowd. Unfortunately, Microsoft doesn't feel consumers (or businesses, if we're being honest) should have on-premise servers anymore, and has deprecated both Home Server and Small Business Server, and the UI features that made them more accessible to the layman.

Thanks, I did not know about either of these! They do seem like a step in the direction I have in mind!

Unfortunately things are moving very slowly in this area. Sandstorm is basically abandoned and Cloudron is not even open source. Bitnami and Yunohost are also major options.

You might like some of the pointers here: https://github.com/awesome-selfhosted/awesome-selfhosted#sel...

> Sandstorm is basically abandoned

This is untrue. Sandstorm gets monthly releases, and new features tend to show up every couple months or so. Several major improvements to the platform are in the works at the moment. It's definitely true we could use more help, but it's still probably the most secure way to self-host cloud services for personal use.

> Cloudron is not even open source

True, though it's probably the best "successful" approach right now, in that the Cloudron devs have a functional business that allows them to very actively support the platform. (Sandstorm failed here, so as a Sandstorm contributor, I can't really knock their approach.)

> Bitnami and Yunohost are also major options.

Yunohost has zero isolation between applications; a single compromised app can hose your entire server. I would bear that in mind when recommending it widely.

Bitnami is going to leave you mostly on your own to decide how you're going to host its app packages. I'm not sure it's directly comparable; it's more like a Docker Hub than a managed self-hosting platform.

I'm not recommending any of these; I think it's best to just run services manually. With one VPS, Docker, and something like Caddy, the pain isn't that large.

It would be great if Sandstorm could get enough attention to thrive because it looked promising, but the activity of the blog¹ doesn't give me much hope. Since 2019 when it announced the hosting was shutting down there have been just 4 posts and apparently no major releases.

[1] https://sandstorm.io/news/

We probably should write another blog post or two. We generally try to only post substantial content to the blog, our mailing list tends to have a bit more of the mundane, and of course, our GitHub issues/PRs. Many projects make blog posts for releases, but we generally do not.

For what it's worth, the "just four posts" constitute some major things:

1. Continuing Sandstorm as a community project

2. The 1.0 release of the main app packaging tool

3. Let's Encrypt support built-in

4. A major security improvement in disabling apps from making outgoing HTTP requests without permission

> apparently no major releases

I push at least one release each month. The change log is here:


But it's certainly true that development is much slower today than it was in 2016 when there were seven people working full-time on it.

NAS products from the likes of Synology or QNAP are viable home servers for now.

Well... I think the common consensus is that servers and automatic updates don't go together. For the most part you don't want your server deciding to upgrade its database/etc at random times.

Similarly for the "accessible UI" bits, particularly if you're managing more than a single server and you want to, say, upgrade a few thousand of them at the same time.

In both cases it's pretty "easy" to configure a more server-mgmt-related tool to do those operations (hence Cockpit! Or Ansible GUIs/etc). If you're looking for a more Android-like level of software mgmt, then it's pretty easy to install the desktop tooling on something like Fedora Server that comes with Fedora Workstation. With that you get nice app stores layered on top of both the traditional dnf/rpm package mgmt as well as Flatpak and various other container technologies. For a single home/etc server, just install something like Fedora Server, and group-install one of the desktop/etc profiles during setup (or later if it suits you).

I think you're asking a perfectly valid question. IMHO authentication is still an enormous mess and new standards like OAuth did nothing to improve it on the server side.

On your phone app store there's a strong and trusted source of identity coming from Apple or Google. They know who you are, what you're allowed to do, etc. and can delegate that authority to your apps.

On the server though... welcome to the wild west. How does your server know the person on the other end of a TCP connection is really you, or the person you shared a document to view, etc? You can put your trust in a third party authority like Google, Facebook, Auth0, Okta, etc. but that usually comes with a financial cost. You can roll your own auth or self-host an auth server, but then you're taking on a huge security burden and it's a big leap in complexity to manage something like Keycloak, an LDAP server, etc. It's just not an easy problem to solve with the tools the web gives us today.

It seems to me you are making it needlessly complicated (LDAP...). There are many tools for authenticated access to server with minimal cost in terms of administration. TLS+Letsencrypt+Basic HTTP auth, SSH, OpenVPN, Wireguard, etc.
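As one concrete illustration of the minimal-cost option, putting HTTP basic auth in front of a service with Caddy is a short config fragment. The domain is hypothetical and the hash is a placeholder (generated out-of-band with `caddy hash-password`); note the directive is spelled `basicauth` in Caddy releases before 2.8:

    example.com {
        basic_auth {
            alice $2a$14$placeholderhash
        }
        reverse_proxy localhost:9090
    }

Caddy handles the Let's Encrypt certificate for the domain automatically, so this covers TLS + auth in a handful of lines.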

If you’re using N+1 servers that have multiple users, then you definitely want some kind of centralized user management. It doesn’t matter how you connect to the server (ssh, etc). Those won’t solve the problem of keeping user account information in sync between the servers. You still need some way to keep account information (username, password, public keys) consistent between the servers.

That’s what the GP post was comparing to.

I use LDAP to manage access to multiple servers and it’s more work to setup than /etc/passwd, but much easier to keep things in sync.

Does LDAP have some advantage over a generic config management system like Ansible?

> How does your server know the person on the other end of a TCP connection is really you, or the person you shared a document to view, etc?

Client certs solve this problem quite nicely.
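As a hedged sketch of what that looks like in practice (all file names and CNs below are made up): you create a private CA, issue a client certificate from it, and configure the server to only accept connections presenting a certificate that chains to the CA.

```shell
# Sketch only; "my-personal-ca" and "alice" are hypothetical names.
# 1. Create a private CA that the server will be configured to trust.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=my-personal-ca"

# 2. Issue a client certificate signed by that CA.
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr -subj "/CN=alice"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out client.crt

# 3. Verify the chain, as the server does during the TLS handshake.
openssl verify -CAfile ca.crt client.crt
```

On the server side, something like nginx's `ssl_verify_client on` pointed at `ca.crt` then rejects anyone without a matching client cert before any application code runs.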

There's no demand for it. There are some small & generous groups (eg. linuxserver.io) who cater to the tiny niche of "homelabbers", but even that is a somewhat passionate & skilled group of people. It's mostly a waste of time to cater to anyone below that. Like most niche hobbies, no one is getting paid enough to care about the lowest common denominator.

Indeed, and the reason for that, I think, is that if you are at a level of capabilities where you are able to provide services, you have specific needs and ideas about how your business works. You can't outsource your business logic and customer care to some mass product and stay in business.

> Is there a reason why successfully running software on a server is so much harder than running software on your phone?

Sure (exaggerating)

phone - reboot is your FIRST option

server - reboot is your LAST option

Huh? We reboot our servers all the time. We would never design an app that isn’t tolerant to losing servers somewhat arbitrarily. How else could you ever handle hardware failures?

There is probably a niche for unreasonable customers who require 99.9+% uptime but do not want to pay for clustering and redundant servers. The best way to achieve that is to have a physical server that never gets rebooted. HW failures happen but you can explain to the customer it isn't your fault.

The affected user base is considerably different.

When I update an app on my phone, if it fails or changes considerably, it only affects me. When I update anything on a server(s), it's going to affect hundreds, thousands, or even millions of other people and getting back to the state it was in before the update can cost hundreds of hours and thousands of dollars, not to mention potential money lost from the affected users.

Exactly the same can happen by rolling out a buggy app to millions of other people. You can "brick" their app, and then have to work out how to get the fix rolled out to every user.

Updating a server is much cheaper than that.

Because you don’t run your own apps on your phone, it’s some developer’s app on a phone paid by you

This. You want control? You do the work. On a phone, the only easy part is being a user/consumer.

> specifically, I'd like a more accessible UI, automatic updates, sensible defaults on all apps, an easier way to get started and so on.

Isn't this kind of what Ubuntu does with the snaps? You just snap install someapp and there it goes. It will helpfully restart to update whenever the developer publishes a new version, etc.

There's an infamous, quite involved debate over this exact thing, with a bunch of people wanting an option to actually disable this behavior and another bunch arguing why that's not a good idea [0].

I guess the issue with servers is that there isn't really a one size fits all, or at the very least it's not easy to figure out what it is. All this combined with the fact that one argument in favor of running Linux is the customizability, I'm not sure all that many people are looking for such a solution.


[0] https://forum.snapcraft.io/t/disabling-automatic-refresh-for...

Once you get the infrastructure set up and cloud-init-enabled template images, servers can be very turnkey. I can spin up a new DNS-addressable, auto-updating Ubuntu VM in two or three clicks, then deploy software like this on it from the CLI. There's even a system for easy LXC container apps called TurnKey Linux. Works great with Proxmox.
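For anyone unfamiliar with cloud-init, a minimal user-data file is enough to get a fresh VM auto-updating and reachable. This is a sketch; the hostname, package list, and SSH key are all hypothetical placeholders:

```shell
# Write a minimal cloud-init user-data file (values are placeholders).
cat > user-data <<'EOF'
#cloud-config
hostname: web-01
package_update: true
package_upgrade: true
packages:
  - unattended-upgrades
  - cockpit
ssh_authorized_keys:
  - ssh-ed25519 AAAA... admin@example.com
EOF
```

Point the hypervisor's cloud-init datasource (or a tool like Proxmox's cloud-init integration) at this file and the VM configures itself on first boot.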

> Is there a reason why successfully running software on a server is so much harder than running software on your phone?

Yes. "running software on a server" is actually done when running a business, which requires careful attention to detail, quality of service, actual work and dedication to customers. "running software on a phone" is just being a consumer, the hard part of that is done by Google/Apple. In short, provider vs. consumer, it can't be easier to provide than to consume.

EDIT Yes, one can run software on a server as easily as on a phone; it's just a few clicks or an installation command away. But most people running software on servers do not want that, because they want control and understanding and security etc... That is, not yet; maybe in the future every family will have their own home NAS server with apps.

I personally think it’s due to how the internet is designed, where applications are an afterthought. It’s after all easy to just get html pages published online, but anything beyond that is a hack.

With a better protocol and set of primitives it shouldn’t be as complicated (even though the challenges of scale can be unique to server software)

It's not a trivial thing - docker and sandstorm.io might be two examples of making some decent headway here.

Just today I set up a postgres and ms sql server for testing - pretty much identically, running out of their own named docker containers (for those not aware, it's even more similar than it sounds, ms sql runs on Linux now).
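For those curious just how similar the two setups are, the commands look roughly like this. Container names and passwords are placeholders; both images are the official upstream ones:

```shell
# Placeholder names and passwords; adjust versions to taste.
docker run -d --name dev-postgres \
  -e POSTGRES_PASSWORD='s3cret!' -p 5432:5432 postgres:15

docker run -d --name dev-mssql \
  -e ACCEPT_EULA=Y -e MSSQL_SA_PASSWORD='s3cret!Pass1' \
  -p 1433:1433 mcr.microsoft.com/mssql/server:2022-latest
```

Both databases are then reachable on localhost at their standard ports, and `docker rm -f` tears either one down cleanly when the test run is over.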

Why do you want to run MS SQL on Linux? What will you do when the db gets locked or crashes? You call Microsoft and wait?

For development - I was waiting for a dba to sort access to the new prod server, and needed to check that the current app build was minimally working correctly talking to an ms sql instance. It might be interesting to run in CI as well.

Running any rdbms in production in docker isn't a great idea. But for dev and test it can be great.

For my use case, we deploy mostly to traditional setup (dedicated sql server) - but I could also see it useful for prototyping deployment to mssql in azure cloud.

> What will you do when the db gets locked or crashes? You call Microsoft and wait?

Actually, it would appear ms is quite serious about sql server on Linux:


So I guess, similar to running sql server on windows?

It seems like an order of magnitude harder to run something on a phone, personally! I've never had to beg a company to let me run software on a server, at least not yet! ...let alone process payments through the platform.

Besides cloudron and sandstorm, yunohost is another alternative for this: https://yunohost.org/

This is what Canonical tried with snap packages. We need an open source variant of this system. I don’t think this exists.

The sandboxing model for app stores makes apps completely independent and also difficult to configure incorrectly.

Many modern server platforms do this too. Be it something that pushes a web-based app store model, like Cloudron and Sandstorm, or a plain infrastructure tool like Docker.

This is a fascinating thread.

Someone starts a "Brainstorming" and everyone else poo-poos on it. This is how:

1. ideas are killed

2. Entrepreneurs are forged

Related reading: https://www.macleans.ca/society/science/scientists-mrna-covi...

I set it up for my handrolled homelab server automation ( all Arch Linux servers ), back when I was doing everything with virsh.

It worked "ok". It required pulling in a bunch of dependencies I wouldn't have normally installed. I had it set up behind an HAproxy LB, with ssl terminated at the LB. When I was using it ~1 year ago, it was pretty buggy, and certain components would crash and I would have to restart the web page.

Overall it was a mediocre experience, but I suppose better than having to ssh into every server. The main pain point was that I still had to go in to every server, and install cockpit.

In the end, I ended up just moving on to Proxmox. But I suppose cockpit is nice if you don't want a centrally managed cluster, but still want a web interface.

Considering Cockpit is neither supported nor tested on Arch Linux[1], I think forming an opinion of it based on that would be unfair.

[1] https://cockpit-project.org/running.html

It's in the official Arch Linux repository. And they even decided to include it in their table from your link.

I think it's definitely a valid opinion to have. And I would stand by not recommending Cockpit until it becomes more polished.

Glad I am not the only insane person running Arch for their homelab stuff.

Has anyone tried to use this stuff at a scale of greater than a dozen servers? I briefly tried it in my homelab on CentOS 8, and immediately hit enough weirdness that I'm just back to SSH with Ansible now.

I suppose having a GUI can be nice for a health overview of your infrastructure, but in general I dislike GUIs for actual administrative work, since using them is a practice that steers you away from automation.

Most GUI tools don't provide you very good ways to make "atomic" changes, which is made very easy if you run your automation from a git repository, since the final review before hitting go is just "git diff".

Tried it on ubuntu, it also has weirdness there. The whole tool is definitely rough around the edges.

I would never use it to actually change machines though. Definitely Ansible over SSH is the way to go there.

I've mixed feelings about controlling the machines through cockpit. In theory there shouldn't be any difference between SSH and HTTPS on the security of the protocol side, but it definitely feels iffy to have a python (I think?) web app execute administration commands on a server.

I wonder what the security professionals think of it.

I have to admit the Cockpit architecture is not entirely clear to me, but at least it seems to allow using SSH as a remote transport, so you don't actually need to install the web stuff on all servers.

GUIs are great for discoverability and observation, but they always make my life harder when I actually need to manage change in a system.

As simple as it is, there's so far nothing that beats plain old text as the source of truth for how things should be; even if you have a fancy API to actually make changes into a system, you'd still want the desired state of that system to be stored as plain old text, so that changes may be tracked and reviewed easily.

You could have the best of both worlds and only modify the state of your system through a well defined API and then serialize the change in some kind of config file if and only if the change was successful (rollback to the last good state otherwise).

If curious see also

Cockpit – Administer Linux servers via a web browser - https://news.ycombinator.com/item?id=16445612 - Feb 2018 (148 comments)

Would anyone be interested in a desktop application with similar functionality, accessing [multiple] hosts via SSH?

Depends. Open source or proprietary? Actually desktop or just electron? Are you going to maintain it? What use cases would it be intended for? Would the client software be available for macOS? Which GUI framework(s) would you be using?

proprietary, cross-platform desktop linux/mac/windows, javafx.

use cases: server management and monitoring, log viewer, ssh terminals, parallel execution of commands, vnc, sftp, secure storage of credentials and keys.

[0] https://github.com/andy-goryachev/AccessPanelPublic

This is very cool actually.. neat idea! And this just uses ssh protocol for all communication? Nothing needs to be installed server-side right?

thank you!

it's still far, far away from being useful.

Actually no, the web interface is really slick. I would not install a desktop app for it even if there was one.

you do need to install cockpit on [all] your server[s], right?

i am trying to figure out if there is a business case for a desktop app. there are plenty of open source and commercial systems more or less similar to cockpit, and it's hard to compete with free

> you do need to install cockpit on [all] your server[s], right?

apt install cockpit

Installation is not an issue; keeping it up to date package-wise is more of one. Little or nothing extra on top of having the tools and libraries it needs installed.

So unless you have a _really_ compelling auto config system for common enough workloads I doubt there would be a large enough paying market.

There could be a case for it where using Cockpit or similar is not possible due to policy rather than technical concerns (though there aren't going to be many circumstances where someone who can't install Cockpit has an SSH account with the privileges required for your app to be useful much beyond monitoring).

Perhaps there is a use case for people tentatively moving from shared hosting to having their own VMs/servers, so they can give your app the host address, user, and password/key, and let it discover available resources and install what they need. But that is a market that notoriously won't want to pay for anything generally (they want the unlimited ride their last shared hosting provider promised but failed to deliver, for as little as that host or some other is charging), yet will bitch endlessly if their free moon-on-a-stick isn't 100% perfect and 110% reliable.

I don't like having to install Cockpit on every instance/device. That means I have to keep all of them up to date, in addition to installing them and configuring the devices to make it operable.

I'd rather have a client-only app that connects through SSH and gets its data from standard binaries installed on the server.

So, yes. I'd give that a try.

any specific feature that would motivate you to shell out fifty bucks? (half-joke)

As a hobbyist that'd be hard to justify. At work I manage two standard LAMP servers only. So I am not really your audience.

I think I'd like some kind of alerts on some specific events (disk space, some logs, IDK), systemctl management and status/reporting, some instantaneous “update as I type” filtering/searching in logs, cron and/or systemctl timer management, space usage graphics, booting reports, etc. ... maybe these are just things I usually do and think a GUI would be nice to have if I had to do it for multiple servers. Not enough experience with that in a professional setting though so take it all with a grain of salt.

But as-is, I think I could justify paying 50 bucks for the product if I needed it.

Thank you!

Looking at the logs is how this tool was started. This in itself is a fairly involved chunk of functionality. It'll be a year, probably, before I can release even that.

https://www.royalapps.com/ does this. I'm a happy paying user of RoyalTS.

I don’t think just an ssh client is exactly what he is going for/talking about. Check out the GitHub repo he references further down in this thread.

Friends don't let friends configure linux from a web UI. It's messy, built off of assumptions, and is ripe for exploitation. Plus, if all you ever use is a web UI, how are you supposed to troubleshoot or fix the machine when said web UI stops working?

If you're looking for pretty, single host, read-only monitoring dashboards though, checkout Netdata: https://www.netdata.cloud/

I had to fire a guy in 2003 for installing webmin on a bunch of internet facing servers (webhost). Any Linux guy worth their salt isn't using a web gui for administration.

I would say it’s easier to just fix your web UI. And if that fails then revert to SSH and the command line.

Webmin is pretty useful.

Can vouch for netdata

Cockpit is super cool. I've been using it for personal stuff for years now, especially since it's trivial to enable and use.

I even use it in production for monitoring small sites/apps. The graphs for CPU/Mem/Network/Disk are really great, and I can leave them open in a tab on my browser. I run one fairly popular blog that as a web machine and a db machine, and it's great for that.

That said I don't use it for "serious production" where I have more than a couple of machines simply because at that scale I prefer cattle to pets and I prefer aggregation.

I also find myself strongly preferring SSH and the CLI, likely because I'm very familiar with all that and have been doing that for decades.

I've never heard of security issues with Cockpit, but I do firewall it off from everyone but my own IP (or a few others if they are involved). It's pretty easy to do:

    # Get your IP address from home or work:
    MY_IP="$(curl -s 'https://api.ipify.org')"

    firewall-cmd --zone=public --permanent --remove-service=cockpit
    firewall-cmd --zone=public --permanent \
      --add-rich-rule="rule family=\"ipv4\" source address=\"${MY_IP}\" port protocol=\"tcp\" port=\"9090\" accept"
    firewall-cmd --reload
Here's a gist of it: https://gist.github.com/FreedomBen/0aabe5493ba02d1c9bb33fea2...

I installed it on one of my Ubuntu servers and allowed port 9090, but when I try accessing it from Chrome it gives me a warning page:

"You cannot visit (IP) right now because the website sent scrambled credentials that Google Chrome cannot process."

Is this because I run a NGINX server from that box?

This sounds like a certificate problem, or the browser trying to use https on an HTTP port. If you've deployed Let's Encrypt on the same host, the browser might force the connection to work over HTTPS because Let's Encrypt often adds a so-called HSTS header to the config.

If it's just Chrome not trusting the certificate, you can usually override the error by clicking "details" and then continuing by clicking a link. If the override isn't there (because of HSTS or similar), you can type "thisisunsafe" into the web page to override any non-technical certificate errors (there's no input field but it'll work)

Do you have an expired self signed certificate on that server? See https://support.google.com/chrome/thread/10551759?hl=en

sounds like a TLS cert error related to a 'snakeoil' self signed cert or something

I sure wouldn't rely on this to do anything serious, but it's better than the ancient/mediocre alternative (webmin)

I prefer virt-manager to Cockpit's VM management, way more feature complete.

For VM management I agree with you. For other things cockpit is mostly fine.

Cockpit is not ever going to be as good as using the underlying tools it integrates with, but it is still pretty nice for what it is and I like having it.

I haven't really kept up with this, but last I heard virt-manager was deprecated (on RHEL at least) in favor of just Cockpit, hence the comparison.

Cockpit is definitely nice, but it still feels pretty incomplete compared to virt-manager.

virt-manager appears to still be developed, though, just without the same blessing/level of support from Red Hat.

Sadly true: deprecated in RHEL 8, removed in RHEL 9. Although we'll continue to develop and ship virt-manager in EPEL so the majority of people will still easily be able to install it.

virt-manager is just a shim on top of libvirt, same as cockpit's VM component. So there really isn't any excuse on that front. Although you're comparing a tool built as a competitor to something like VMware Workstation/VirtualBox against a general-purpose machine mgmt tool that happens to be able to do some VM mgmt as well.

PS: I think both are good, but I too use virt-manager for all my VM tweaking, because nothing else on Linux is as feature-complete for qemu/KVM while also avoiding having to read the manual just to adjust some VM parameter.

I'm searching for a remote server file manager right now. I want to copy (and compare) large directories on the same Linux server from my MacBook. Cannot find one so far.

Cockpit: "There is no graphical file manager, and we don't plan to add one." https://github.com/cockpit-project/cockpit/issues/11011

Cyberduck: "FTP and SFTP do not support a copy operation."

Is there a way with GUI?

Syncthing if you want continuous sync. Rclone if you want periodic rsync-style copies. Both have web UIs, although they're definitely aimed at the power user and not novices. If they're non-technical users you probably want to setup a nextcloud or similar system so they get a more dropbox-like experience.
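As a hedged sketch of the rclone route (the `myserver` remote and all paths are made up; you'd configure the remote beforehand with `rclone config` over SFTP):

```shell
# Mirror a local tree to the remote; --progress shows transfer status.
rclone sync /local/photos myserver:/srv/photos --progress

# Compare two trees by size/hash without copying anything.
rclone check /local/photos myserver:/srv/photos
```

`rclone check` is handy for exactly the "compare large directories" case, since it reports missing and differing files without moving data.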

edit: reddit.com/r/selfhosted always has good threads on self-hosted web file managers too

To clarify, priority is browsing, copying and moving stuff on same remote server, not copying data between machines. Thanks, will try these anyway. Trying this too https://cloudcmd.io

Edit: Cloud Commander works, gives you two-pane webgui with a row of Fx buttons below. F5 copies files locally.

"lsyncd" is really good for efficient, continuous sync. If you have password-less, public-key SSH auth, lsyncd is a blessing (I'm using it for millions of files and folders and it's doing well).

Can you mount the remote server over SSHFS or NFS or SMB and do the work locally on a file manager of your choice with a remote filesystem?
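A minimal sketch of the SSHFS route (host and paths are hypothetical; on macOS this needs macFUSE plus sshfs installed):

```shell
# Mount the remote filesystem under a local directory.
mkdir -p ~/mnt/server
sshfs user@server:/data ~/mnt/server

# Compare two directories that both live on the server,
# using any local tool against the mounted paths.
diff -rq ~/mnt/server/dir-a ~/mnt/server/dir-b

# Unmount when done.
umount ~/mnt/server
```

One caveat: a copy between two mounted paths round-trips the data through your laptop, so for moving large directories server-side a remote shell or TUI is still faster.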

Or, if that's not a good option, what about installing a TUI remotely that you can use in the console, like Midnight Commander?

Just some ideas. YMMV.

> TUI remotely that you can use in the console like midnight commander

This was it :)) Even mouse works in Terminal!

thanks for mentioning this. I looked up midnight commander and I think I'm gonna end up using it for a few things.

Not sure about your Cyberduck problem, though. You should check your SSH server settings. I always use Cyberduck over SSH/SFTP with my Linux server.

When browsing the remote server, I select a file, then menu Edit - Copy. I go to another place on the remote server, Edit - Paste. It starts downloading the file to my MacBook for some reason.

They're making great progress. It works, and I have a friend who otherwise doesn't work in Linux happily using it on his homelab server.

For anyone experienced it's probably just a hindrance.

It comes on Fedora servers by default. I've been using it for the last 3 years on my home servers. I've also installed it on Armbian running on my *Pi devices.

It works just fine in my experience. I've configured mdraid with it for fun, and it worked. Ditched it in favour of ZFS, which is unfortunately not yet supported.

I think it's a good direction. It will become the default "GUI" for server maintenance, I think.

I've actually been investigating cPanel alternatives, as it's become expensive and I think customers should have a choice.

I find it interesting that I didn't come across this one in all the research I did over the last week or so, even though it's backed by Red Hat.

I actually think that although we're in a time of striving for serverless, there will always be a market for self-hosting, niche though it may be.

Not everyone is building a huge SaaS platform, but many want to run more than just a blog or website.

I don't think this is the answer, though. I think what these types of servers need is a standardised layer to interact with them, an API, something like how we have EPP for domains.

Because as we know, frontends change fast. The underlying hardware and OS change too. Now seems like the right time to invent a new level of abstraction.

I’m not aware of anything that exists like this.

I've been using Cockpit and cockpit-machines at home to manage a Windows VM I need for work. It works pretty well, except I wish there were better documentation for configuration; I haven't yet got the VM to recognize that I have a 4K monitor.

I live in the shell, but for my home VMs/boxes I love having this available. It's particularly neat that you can integrate them all together and jump between different hosts. There are also extension packages to manage VMs and Docker containers, which is fun if I want to fire up a remote task.

Back in the day, the closest thing to this was software called Webmin.

I searched the linked page and the documentation contents for "secu": 0 results found. Can anyone vouch for its security?

In short, most likely, unless you are reckless with it.

I first ran into Cockpit when I put Fedora 33 Server Edition on an RPi. Needless to say, I was blown away by it and now keep it enabled on several servers I run.

I highly recommend it to anyone here, especially on something like a headless server where you don't want or can't have an X11 UI running.

It looks like this is written in JavaScript with C backend components. I would never trust giving root to a web service written in C.

If anyone deploys this, make sure to bind only to localhost, and use an ssh tunnel to access it remotely, otherwise you're opening a massive attack surface.
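Since Cockpit is normally socket-activated by systemd on most distros, restricting it to loopback is typically done with a socket drop-in; something along these lines (verify the details against the Cockpit documentation for your version):

```ini
# /etc/systemd/system/cockpit.socket.d/listen.conf
[Socket]
# first line clears the default wildcard listener,
# second binds to loopback only
ListenStream=
ListenStream=127.0.0.1:9090
```

After `systemctl daemon-reload` and restarting `cockpit.socket`, you can reach it remotely with `ssh -N -L 9090:127.0.0.1:9090 user@server` and browse to https://localhost:9090.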

Does anybody use this in production? Is it good/suitable for managing bare-metal servers behind a VPN?

I haven't used this in production, as something like this gets a bit out of hand once you need to manage more than about five machines. However, for home setups or a very small office, this works great. I've used it on and off for years for my home servers. It's really nice to just be able to restart a service without having to whip out an SSH client; just log into a webpage on your phone and you can fully manage your server.

Check out MultiSSH for administering multiple machines over SSH: https://multissh.dev/

I'm surprised no one has compared this with cPanel yet. Seems to be a wonderful alternative, and I wonder if it will be offered by VPS providers.

Is there a way in Cockpit to log the actual commands run by the system to e.g. format/partition a storage device, create Virtual Networks etc?

This is 1990s philosophy with a pretty wrapper. We know it does not work even for pets, not to mention cattle.

Have you tried Netdata?

I just get a nice white screen on mobile :/

There are so many tools and interfaces for similar things: Prometheus, Elasticsearch (ES), New Relic, Splunk... I think the world is converging on two stacks: ES and Prometheus. ES is hard to operate but can handle everything: metrics, tracing, logs. Prometheus promises to be easy to use and has good metrics and alerting. At work, we chose ES. I don't think ease of use is achievable yet; the team has to get its hands dirty for any good solution.

In what universe do Prometheus and Elasticsearch enable you to manage containers and administer storage?

Oh, I just looked at the UI and saw logs. I didn't realize.

Tenable is using it.

ISPConfig vs Cockpit?

What is this "servers" you speak of?

This is pretty cool, but honestly, why is this written in C? You are needlessly increasing the attack surface for your users.

Sure, the security of a piece of software is always fully dependent on the programming language it is written in. /s

Name a popular C lib that has not had memory safety issues.

It is very nice, but like other Red Hat stuff, this somehow prevents browsers from saving passwords, or at least it did the last time I used it.

EDIT: Actually they seem to have fixed it now. Nifty.

Cockpit is a nice frontend for servers.

But for my homelab I run UnRAID (https://unraid.net/), which is amazing software for rolling out your own NAS and running services as Docker containers. Furthermore, the web UI makes managing Docker containers and VMs a pleasure.

And the community is awesome if you need any help.
