The commoditisation of VPS (and cloud) servers brings exactly the same problem. Updates aren't automatic by default, and many VPS holders are devs sitting in Windows without a clue about anything past the `sudo apt-get install lamp-server^` they read on a forum. Months pass without updates, and before you know it there's a remotely exploitable hole that an automated script will catch.
Webapp updating is another thing altogether. Relatively few are well packaged and even fewer have nice automatic update migrations. So they go stale and get exploited.
The [immediate] future of internet security looks pretty dismal.
Being able to host a personal site comprising a few static pages, which is something that would fit well with the goals of this project, shouldn't require all that much in the way of software; maybe even a full Linux kernel is overkill.
I'm not sure "never needs updating" is ever going to be achievable. As long as it's connected and there could be a vulnerability, it could be exploited.
So there are updates needed from time to time, but I agree that running a small webserver shouldn't require constant updates to fix never-ending security bugs.
Anyways, if those Yuno people are smart enough, they do this once a day automatically via cron. And if their systems are expensive enough, they consist of two redundant engines.
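For the record, the kind of daily automatic update being described can be a one-file cron script along these lines (a sketch for Debian/Ubuntu-style systems; the `unattended-upgrades` package is the more robust way to get the same effect):

```shell
#!/bin/sh
# /etc/cron.daily/auto-upgrade (hypothetical filename) — pull in package
# updates once a day without prompting. Sketch only; error handling and
# reboot-on-kernel-update are deliberately omitted.
export DEBIAN_FRONTEND=noninteractive
apt-get update -q && apt-get upgrade -y -q
```

Whether blindly applying upgrades like this is actually a good idea is, of course, exactly what the rest of this thread is arguing about.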
I mean just think of a file server. Everybody can tell you how much of a security nightmare an FTP server is. Why? Probably because the ftp daemons are so complex and have so many options. So people turned to Dropbox and other expensive but simple solutions.
I definitely don't want to automate my server's updates in such a way.
That is, the really interesting target user here is not the one who will drop into the command line to tweak /etc/ssh/sshd_config and then get annoyed when their changes are overwritten on the next update.
I mean, a zero-configuration server cannot be compared to a server that has > 100 users. The latter usually has a lot of custom settings which would
a) easily get damaged during updates
b) be lost when resetting the whole thing
There is definitely a need for easily manageable servers that should be low on customization and maintenance.
I think that what is happening here in the hosting sphere is the same thing that happened in OS/user space, inasmuch as packaging has become the way we manage the layers of abstraction. Today's .ipa archive is yesterday's .app bundle, which was last century's filesystem-containing standalone .exe, and is now today's 'one dock fits all' deployable-application solution.
Whereas in systems past the user was required to absorb a cognitive load of abstraction ("C:\DOS"), today we have far friendlier flick-and-execute interfaces, and the same is true of sysadmin abstractions becoming less and less significant and more and more point-and-click. Yesterday's 'tar xvf somesources.tar.gz && cd somesources && ./configure && make install' is today's 'docker run -i -t someapp', and so on...
And yet we still must face the quandary that the more these abstractions become compartmentalized, the more we can just pile new crap on top and end up with new abstractions requiring eventual compartmentalization, ad infi...
If an average user (non full-time server admin) wants to set up a server that has common apps like Wordpress, Roundcube and Transmission, YunoHost seems like the OS for the job.
How would a user like that go about this with OpenMirage? Is there a similar web interface for setting up mail, web and torrents (for example)? There's a lot of technical documentation, but how would they actually set up these services?
After reading the overview link, I'd wager that most users still don't really know why it's better than YunoHost, or even what OpenMirage actually is.
Running "sudo apt-get update" and "sudo apt-get upgrade" in a cron job is not a security solution.
Operating systems have to be designed so that a security hole in a single application can't compromise the entire system. If not, I would argue they are unsuitable for the average user to "self host". Modern Linux distros don't meet this criterion.
OpenMirage is solving some of the same problems as YunoHost, just on a longer time scale and with real solutions rather than hacks piled on top of hacks (I'm not affiliated with the project). apt-get is a big hack and not suitable for distributed computing.
I don't really mean to suggest that Mirage/Nymote is ready for use by the public right now (I'm supportive of things that give users that option today). However, I do believe it will be better to build self-hosting options using the Mirage stack later, hence the link to the Nymote work.
That's not nice.
I've wondered for a while why browsers don't catch this and let you back-button over the redirect. Perhaps there are cases where that would be the wrong thing in an annoying way, but it's annoying when I can't back-button. For me, neither OS X Chrome nor OS X Firefox skips the URL with the redirect; both leave the back button not working well. (You can still hold it down to get the back-history menu and skip over the redirect.)
When you install apps in the admin interface, there's a description of what each app does before you install it.
Maybe someone else finds it cute. I didn't.
I'm familiar with this through Telecomix, but nowhere else, really. Seems to fit the bill of a Telecomix op.
We also took "is an operating system" out of the title for similar reasons.
This comment would be better if instead of "Nice job at breaking", it simply said "This breaks".