Also open to suggestions for an "Offer HN" post :)
Should come by the Cloudkick/Rackspace offices again sometime, would love to catch up.
I want it to be known that Solomon and Sebastien are awesome. They set us up with hosting for our project when we applied to YC several months ago, and Solomon took several hours of his own time to really sit down with me and walk us through getting set up. Their system is really kick-ass and amazing. We were working with Django for our framework, and configuration for a project is really straightforward. It was awesome not to have to worry about server configuration at all and just focus on getting work done.
Is there any reason you're running an outdated version of Nginx (0.7.65 vs 0.7.68 vs 0.8.54)? It's one major revision behind current and a few minor versions behind stable. Which brings me to a question that's been lingering in my mind: what is your "software" upgrade process, since users have no control over it?
Another question: do you cache non-static objects (cookie-based caching when proxy_pass'ing)? If so, how are you handling the different cookies for each particular app? Can this be turned off?
- 0.7.65 is the version currently shipped in Ubuntu Lucid. We use it only as a proxy.
- On services like python-wsgi we currently use 0.8.52 and we will soon upgrade to 0.8.54
- On the ruby-passenger service we follow the nginx version pulled in by Passenger.
So it's always dependent on the context. The idea is to keep the stacks stable and secure at all times.
Once we have tested and approved an upgrade we create a new revision of the corresponding service and deploy it across the platform. If the upgrade requires downtime we schedule a rolling maintenance window and notify users.
We don't cache by default (but this could change). Users have the option to add caching services like varnish.
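For anyone wondering what cookie-aware cache bypass looks like in practice, here's a minimal nginx sketch (the upstream name and cookie name are hypothetical; this is not their actual configuration):

```nginx
# Hypothetical sketch: skip the cache for any request carrying a session
# cookie, so logged-in users always reach the app backend directly.
location / {
    proxy_cache app_cache;
    # Don't serve from cache, and don't store the response, when a
    # session cookie is present.
    proxy_cache_bypass $cookie_sessionid;
    proxy_no_cache     $cookie_sessionid;
    proxy_pass http://app_backend;
}
```

The same effect is achievable in Varnish VCL, which is presumably why they point users at a Varnish service rather than caching by default.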
Cassandra and Hadoop alone need a good chunk of domain knowledge to keep them running smoothly. Seeing them listed casually next to so many other deployment stacks makes me feel slightly dizzy.
I'm definitely looking forward to seeing whether you'll be able to pull this off.
You're correct, there's a beefy learning curve to properly configure, fine-tune and scale each of these components. Our job is to tackle that learning curve so you don't have to.
As a developer, you get a little more value out of that deal every time you want to play with something new.
It works for us, because after a while you start seeing patterns in proper automation. There are only so many ways to store, modify and move bits around.
That's where a bit of my skepticism stems from: I've dealt with both of these databases first-hand, along with various other stacks ranging from Java and Python to Ruby and even Lua.
The not-so-stellar uptime record of Heroku shows that making one size fit all is quite hard, even when you're doing it only for a relatively small set of components.
Doing it well for nearly all of them is nothing short of the holy grail in systems management.
We'll start with the fundamentals, and gradually expand to the full catalog.
Currently, I don't really have the need or desire to manage my own Hadoop cluster, but I haven't been completely satisfied with Amazon's Elastic MapReduce offering either.
Definitely interested in any beta tests and/or feedback for that!
If someone can tell me which YC batch they're a part of, I'll edit the title - this is the first time I'm hearing about them.
To be honest, I am skeptical that a newcomer startup can do the heavy lifting of supporting such a big stack. Each of these components has a lot of peculiarities that you need to learn and fight.
Background: our stack uses AppJet, nginx, Varnish and CouchDB. Each of these has different challenges: think of optimization, scaling, resource limiting, leveling, monitoring, statistics, and enforcing governor limits / notifying customers. It took us over a year to establish this. I don't want to sound negative; just think about everything we needed to learn.
Yes, it's hard work for us. But 90% of the work is the same across all deployments. We take advantage of that fact to offer massive savings in engineering and sysadmin time.
We don't offer root access. But we're working on customization tools that will make you forget you ever had to ssh into a server directly.
As for pricing: we're already charging a few test customers, and will expand paid plans to all beta users very soon.
As for pricing, looking back, I realize that my question may have been a bit blunt and offensive. What I was really curious to know was whether you were going to be charging "per instance", "per hour", "per instance-hour", "per package", etc. For example, is Apache+MySQL on a single machine (if that's possible) less expensive than Apache on one machine and MySQL on another? What about compared to Nginx+Cassandra? (FWIW: You don't have to answer this directly if you want to keep the cards close while in testing...)
Nonetheless, I wish you the best. (Signed up for the beta.)
Edit: now it's gone... that was strange. I'm sure I saw a gold name!
- For Django, you'll have an easy way to add more instances
- For MySQL, more slaves
- For Celery or Resque, more workers
You can scale up any component in your stack by cranking up the number of instances. We automatically provision and reconfigure instances for you.
Would you be willing to host a production .NET app on Mono?
Azure's addon options are pretty limited too.
In our experience, for most customers 10-second manual scaling is just as good as auto-scaling.
Our problem with Heroku was that we didn't know how many units to allocate, and ended up way over-spending. But if we had tweaked it to be just right, we would have been screwed by an unexpected surge.
We offer resource usage data, so you can make an informed decision when scaling. You are correct that a very sudden surge can still screw you - but we are preparing an alert feature to help you mitigate this.
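An alert like that doesn't have to be fancy. As a rough illustration (the metric shape and thresholds here are hypothetical, not the platform's actual API), the core logic is just "fire when usage stays above a threshold for several consecutive samples":

```python
# Minimal usage-alert sketch: fire only when the last few samples all
# exceed a threshold, so a brief spike doesn't trigger a notification.
# Thresholds and metric values are hypothetical examples.

def should_alert(samples, threshold=0.8, sustained=3):
    """Alert when the last `sustained` samples all exceed `threshold`."""
    if len(samples) < sustained:
        return False
    return all(s > threshold for s in samples[-sustained:])

# One spike at 0.95 doesn't fire; three high samples in a row do.
cpu = [0.42, 0.95, 0.51, 0.88, 0.91, 0.93]
print(should_alert(cpu))  # → True
```

Requiring a sustained window is what separates "unexpected surge, scale up now" from ordinary noise in the usage data.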
We would love to hear your suggestions at email@example.com.
If we start seeing a recurring pattern, we'll try to automate it.