Comments on Shared Unix Hosting vs. the Cloud (oils.pub)
28 points by ThatGuyRaion 4 days ago | 22 comments





> I just want to write some scripts and SSH to a Unix box. I don't want to maintain kernels, web servers, or SSL certificates.

That's my use case too. I don't want to maintain a VPS, just have some server available. Something fast and cheap which is maintained by someone else.


I maintain a moderately complex site for a nonprofit, with an e-commerce back end and a searchable full-text database. The budget is a shoestring (there's, like, one e-commerce purchase a week) and I don't have the time to keep a server patched.

For me, the right solution is "semi-dedicated hosting" from NameCrane.com. Really it's just high quality shared hosting with resource guarantees. You can pay anywhere from eight to 120 bucks a year depending on the amount of resources you want. For me the sweet spot is $15 a month for a quarter terabyte of NVMe storage, two cores, and four gigs of RAM.

Unlike with Mythic Beasts, you don't get to run your own persistent processes; you have to be content with the mysqld and lighttpd provided. But on the plus side, you get backups, cron jobs, and good outbound email deliverability. SSH access is available. You don't get sudo, which is kind of the point, but you can install whatever software you want, as long as you "pip install --user" and "configure --prefix $HOME". That might take some getting used to for folks used to a VPS, but it's a worthwhile trade-off for not having to do server admin.
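In practice that workflow looks something like this (a rough sketch; the package names and paths are just examples):

    # Python packages go into ~/.local, no sudo needed
    pip install --user requests

    # classic autotools software installs under $HOME instead of /usr/local
    ./configure --prefix=$HOME
    make && make install

    # make sure your shell finds the results
    export PATH=$HOME/bin:$HOME/.local/bin:$PATH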


(author here) Oh interesting, I haven't heard of them

I imagine this is a PHP site? They say:

> cPanel or DirectAdmin with all the goodies: Litespeed, LSCache, PHP8, JetBackup & more

I have a PHP app, but also some Python apps ... How do you use Python? For cron jobs, or with CGI?


cPanel's Python support is, I believe, more robust than DirectAdmin's. There's some pretty good info at the cPanel doc site:

https://support.cpanel.net/hc/en-us/articles/360049921014-Ho...

RE: the CloudLinux-specific Python features, I _think_ Namecrane has CloudLinux on all their shared/crate plans

And since Namecrane, I believe, also uses LiteSpeed on most or all of their plans, here's some related info that includes some Python things: https://docs.litespeedtech.com/lsws/cp/cpanel/cloudlinux/

PS: BuyVM/Namecrane/Frantech have an active Discord with good, active, friendly company participation. Nice little community.

BTW I really enjoyed your article! This topic isn't explored that much these days and was a breath of fresh air


I'm glad you liked the article! I hope to write some follow-ups -- this kind of thing has been bugging me for 1 to 2 decades :-)

I think it's mainly the stability. I don't think it should be that hard to serve some static web pages and some output from a Python script. Yet even Google internally had like 10 different solutions that were constantly being deprecated

Most software infrastructure really assumes you have a big team behind everything, because you need a team to make money

---

I was not aware of the LiteSpeed server ... I looked through the source code and it looks well done!

I think it's a shame that nginx doesn't support the ~user model with CGI. I mentioned in the post that it seems to recommend uwsgi instead, but I think that's a bad and non-Unixy protocol

---

I'm not sure I would rely on the Phusion Passenger server on shared hosting -- I mentioned in the previous post that Dreamhost quietly deprecated it:

https://www.oilshell.org/blog/2024/06/cgi.html#contrast-gate...

And it supports a few languages as special cases, which I don't like. I would like to plug in my language YSH without modifying their codebase! It should be a Unix-y protocol
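That's what appeals to me about CGI: the entire contract is environment variables in, bytes out on stdout, so any language can speak it. A complete example in plain shell (the script body is just for illustration):

    #!/bin/sh
    # the server passes the request in env vars like QUERY_STRING;
    # the script writes a header, a blank line, then the body
    echo "Content-Type: text/plain"
    echo ""
    echo "hello from $SERVER_NAME, query: $QUERY_STRING"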


Hard to get better value than the smorgasbord of offerings that Namecrane has on the books right now. Domain reg is coming soon as well.

I'm surprised containers aren't mentioned once.

With mass virtual hosting (not virtualisation; think in terms of vhosts in Apache) came resource-sharing and security issues that aren't easily fixed with UID/GID quota tricks, and so we got chrooted FTP, SSH, and SCP, then OpenVZ at some point, and LXC. Later down the line we got containers and cgroups, and even later, cgroup v2. All of them have similar goals in mind: shared resources, but isolation strong enough to avoid unwanted side-effects.

This is still something that exists, but isn't really used this way (as far as I can tell, only ISPConfig seems to do this?), because containers were then also used as stateless packaging methods, where you don't edit anything in a running container but rather just re-deploy the whole thing. That is of course not a good match for the article, but there is nothing preventing this from being done in a container. Heck, if you have classic shared storage (e.g. NFS) you could get any container hosting company to do this without them knowing it.

But you'd still be on the hook for managing the container lifecycle...


(author here) Yeah this is something I'd like in a host

Google used/uses cgroups with just plain UID isolation, no namespaces (of course this was for mostly trusted internal users, although they became less and less trusted over time ... due to being attacked by nation states and all that)

Heroku used LXC.

Dreamhost does use cgroups now, but it is not configured well, as mentioned in my post (the FastCGI and SSH processes share a cgroup, which means that you can't debug when it goes wrong!)

But most hosts these days just use Docker (fly.io, Render, etc.), which I think conflates a bunch of things. Not just cgroups, but also networking ...

So yeah it would be nice to have modern shared hosting, without Docker, but with good resource isolation
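Concretely, the cgroup v2 interface is simple enough that per-customer limits are a few writes to a filesystem. A sketch (assuming cgroup v2 is mounted at /sys/fs/cgroup with the cpu and memory controllers enabled, run as root; the names are made up):

    # one cgroup per customer service, so the FastCGI server and the
    # SSH session don't share a limit (the Dreamhost problem above)
    mkdir /sys/fs/cgroup/customer42-web
    echo "512M" > /sys/fs/cgroup/customer42-web/memory.max
    echo "50000 100000" > /sys/fs/cgroup/customer42-web/cpu.max  # 50% of one core
    echo "$FCGI_PID" > /sys/fs/cgroup/customer42-web/cgroup.procs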


YunoHost isn't really meant for shared hosting, but you can have multiple "apps" on a single box, and it doesn't use Docker. Not sure how good the isolation/security is, though.

https://yunohost.org/

You're right, though; I think "modern shared hosting with a more PaaS approach, but without Docker and with good security/isolation" is something that doesn't really exist. Perhaps someone could use Sandstorm, Kenton Varda's fantastic (pretty much abandoned now?) project/isolation technology, as a starting point and gear it towards a more "shared hosting" approach.


Yeah definitely, Sandstorm was a good effort ... they did use plain cgroups and Linux kernel features from what I remember.

To me, it shows that the economics are hard. Writing things like Sandstorm isn't trivial, and having paid employees helps. But the open source, self-hosted model limits your revenue opportunities.

I'm not even sure what the business model of Sandstorm was -- was it just to have a hosted version? Maybe enterprise auth features or something?

---

It's obviously technically possible to do medium-scale, friendly hosting ... but it's not at all obvious from the standpoint of a self-sustaining business, or even non-profit.

I think the "hyper-scalers" are basically swallowing all the engineers and sys admins, which I mentioned in this post

One thing I'd liken it to is that in America, almost everyone eats the same corn, wheat, chicken, frozen Russet potatoes, etc. (or at least they are familiar with these commodities)

That is, the largest food producers are "hyper-scalers", and some of them are monopolies. They cut the costs to the bone, and convinced everyone that the quality was the same, when it isn't

---

I also think customer support is costly, and makes the business hard. Because the customers vary widely in their skill levels ... I almost think that if you could incentivize community support, that might help -- i.e. customers who actually help other customers could get paid perhaps ...

A related thought I've had is that shared hosting companies dropped the ball on git push-to-deploy, which Heroku pioneered over 15 years ago
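The mechanism itself is tiny, too -- a bare repo plus a post-receive hook covers the core of what "git push heroku" did. A sketch (the branch name and paths are placeholders):

    # on the server
    git init --bare ~/repos/site.git
    cat > ~/repos/site.git/hooks/post-receive <<'EOF'
    #!/bin/sh
    # check the pushed commit out into the web root
    GIT_WORK_TREE=$HOME/public_html git checkout -f main
    EOF
    chmod +x ~/repos/site.git/hooks/post-receive

    # on the client
    git remote add deploy user@host:repos/site.git
    git push deploy main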

I think SSH and shell are too hard for 50% or 90% of customers (one reason I'm working on a shell). So you have cPanel and the like.

But actually I think there are a decent number of customers who'd rather use text files? And they have some kind of git GUI or whatever

For example, GitHub Pages lets you check in a CNAME file for configuration - https://til.simonwillison.net/github/custom-subdomain-github...

Although now I see that it actually has a web interface too. So I guess you still need cPanel-like things for most customers (though certainly cPanel itself is showing its age)


> they did use plain cgroups and Linux kernel features from what I remember.

Technically just Linux namespaces and seccomp. Those are the parts important for security. cgroups are more about enforcing resource limits, which Sandstorm never got around to (but planned to, eventually).
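You can poke at the namespace half with util-linux's unshare; something like this drops you into fresh namespaces as an unprivileged user (the seccomp filter is the part that takes actual code, this is just the flavor):

    # new user/PID/mount/net namespaces, no root required
    unshare --user --map-root-user --fork --pid --mount-proc --net /bin/sh
    # inside: ps shows only this shell, and the only interface is a down loopback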

> I'm not even sure what the business model of Sandstorm was

We had basically two plans, and the problem is we didn't consistently follow one of them:

Plan A: Make technology that excites people, especially developers, and shows fast growth and adoption. Sell investors on a long-term vision of an app store and such -- but get them to fund a big Series A pre-revenue that would allow us to develop for many years.

Plan B: Sell a product to organizations (enterprise, government, etc.) that, for policy and/or compliance reasons, could not use cloud apps. There are still a lot of these!

Plan A is really what we needed to stay focused on, and what our team really understood how to execute. We were actually totally succeeding at growing the developer community! But Plan B was enticing because it seemed like a faster path to revenue, and it felt like showing revenue would make our pitch even stronger. But we had no idea how to actually execute on Plan B, how to sell to those kinds of organizations. So our efforts there totally flopped. And when investors saw us fall on our faces they weren't excited to keep investing.

All that is to say, I don't think it's the idea that failed, I think we failed in the execution.

I had intended to keep working on Sandstorm on the side as an open source project when I joined Cloudflare, but then my work there (Cloudflare Workers) was successful, and, well, it's a lot more fun to work on something that is succeeding, so I gave up on Sandstorm.


Interesting, thanks for the response!

I guess my argument is that there are products that people want, that aren't financially viable under the current model

I think Plan A is evidence of that -- investors want to fund hockey stick growth, because they want one win to pay for 99 losses

But that shouldn't be the only way to grow a company, and retain skilled engineers. You may be right that it would have been the best strategy at the time, given the flood of investment money into tech.

---

Some kind of "bootstrapped" path sounds more appealing and sustainable to me, and perhaps more likely to lead to a high-quality product (or at least I don't think it would lead to an inferior product)

That's what Plan B sounds like. I am not sure that a cloud OS designed to be self-hosted is ever going to make revenue justified by Plan A (though I'd be happy to hear an argument otherwise!)

That is, there's the question of what happens if Plan A succeeds, but the company doesn't find revenue.

I guess that's sort of what happened to Docker -- the company shrunk, and lost a bunch of people. Some people may see that as a net positive outcome, but it seems a bit inefficient to me. And I think better tech could have won there ... i.e. it's not clear that the tech that got a lot of VC investment is the one that we should be using

I see a similar dynamic with, say, GitHub Actions -- they give a ton of computing resources away for free, because it's subsidized. But that actually prevents better companies from forming!


I don't realistically think there's a "bootstrapped" path to build what Sandstorm was building. Too many pieces needed to be in place before it was a really viable product. After a dozen or so eng-years we still had something that was pretty janky.

Moreover trying to take a bootstrapped path creates a distraction from the core technology by forcing you to design for ways you can sell more quickly. That might require shortcuts and compromises, or building things that you don't actually need in the long term. That was certainly the case for us: we spent a lot of time engineering a way to paywall certain "enterprise features", and building a paid hosting service (which was sort of antithetical to the whole idea), and we made compromises in the security model to allow us to move faster.

I think it's just a fact that most businesses require upfront capital to get off the ground, and some require a lot of it. So you either need founders who are already independently wealthy or you need some sort of venture capital. And yeah sometimes you have a Docker who makes a great piece of technology and can't figure out how to sell it, but would Docker have been built at all otherwise? Maybe it was a loss for the shareholders but it still seems like a win for the industry.


It was fun while it lasted. Perhaps it can be revived/reinvented, etc.

I've yet to find anything that competes with the combination of easy to use, powerful, and affordable as a cPanel/WHM setup.

If you run a VPS for multiple websites, you can use different Linux users for each site. Obviously not as secure as a container, but simple and lightweight.
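A minimal version of that setup (the account names and paths are made up):

    # one Unix account per site, no login shell
    useradd --create-home --shell /usr/sbin/nologin site-blog
    useradd --create-home --shell /usr/sbin/nologin site-shop

    # keep each site's files unreadable to the other sites
    chmod 750 /home/site-blog /home/site-shop

    # run each site's app server under its own UID
    sudo -u site-blog /home/site-blog/bin/start-server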

For reference, this post prompted a lot of discussion on lobste.rs: https://lobste.rs/s/f5ziu7/comments_on_shared_unix_hosting_v...

"This site now runs on Mythic Beasts, basically because they are a Unix host, not just a PHP host.

They allow you to run persistent HTTP servers, supervised with systemd"

Systemd isn't Unix...


Almost nobody is running "real" Unix for Internet hosting these days. The last time I saw a Solaris or AIX box used in production was over 15 years ago.

Many telecom-related services still run on Solaris and AIX, they aren't going away anytime soon, and there are even new deployments. Verizon is one example.

It's such a security nightmare that it makes sense few people do it anymore; it has nothing to do with cloud hipsters or whatever. A lot of these shared hosting companies were hacked constantly. 0-day local privilege escalation exploits aren't that difficult to find, comparatively; there is a HUGE exposed surface area.

IMHO, if you share computing with random other people, you want AT LEAST Intel VT/AMD-V virtualization for isolation.


There is a risk, but I think you're overstating it

I think the shared hosting companies have issues because of customers who host "grayware" and the like, although I'd probably agree they don't have enough security for a critical business like a bank or health care organization

(I don't know much about their internals, but I know that Dreamhost has grsec-hardened kernels)

> IMHO, if you share computing with random other people, you want AT LEAST Intel VT/AMD-V virtualization for isolation.

I think you can probably do this with Firecracker VMs. That is, preserve the shared hosting model, but have all the protections that VMs have.

It might be possible to plug in Firecracker purely through config, or make a slight patch to sshd.

sshd already has support for ChrootDirectory, which is conceptually the same, but just a weaker form of isolation:

https://unix.stackexchange.com/questions/542440/how-does-chr...
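e.g. a minimal sshd_config sketch (the group name is made up; note the chroot target has to be root-owned, and ForceCommand internal-sftp sidesteps needing a shell inside the chroot):

    # confine members of the "hosted" group to their home directories
    Match Group hosted
        ChrootDirectory %h
        ForceCommand internal-sftp
        AllowTcpForwarding no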

And then for a web process, you can mount some disk and a connection to a database into a Firecracker VM, etc.
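Firecracker's control API is just JSON over a Unix socket, so attaching that disk is a single request before boot -- roughly (the socket path, drive ID, and image path are illustrative):

    curl --unix-socket /tmp/firecracker.socket -i \
        -X PUT 'http://localhost/drives/data' \
        -H 'Content-Type: application/json' \
        -d '{ "drive_id": "data",
              "path_on_host": "/srv/customer42/data.ext4",
              "is_root_device": false,
              "is_read_only": false }'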



