
Great question!

My goal with Deepserve.ai is to be drastically easier than the big cloud providers, with less lock-in. Rather than giving you infinite configurability with low-level primitives, I want to give you an easy, scalable solution right out of the box. SageMaker and TensorFlow Serving have some quick-to-deploy options like this, but they're really built expecting you to use their entire ecosystem.

I started off with fast.ai for simplicity and want to expand to support PyTorch, TensorFlow, and Apache MXNet. My aim is to have an option that is tested and proven for every library version and every mix of inputs/outputs. So if you have a PyTorch 1.5.0 model that expects an image and returns a set of bounding boxes, you choose that configuration and have everything ready to go.
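To make the "choose a configuration" idea concrete, here's a minimal sketch of what that selection could look like as data. This is purely hypothetical on my part (`ModelConfig` and its fields aren't a real Deepserve.ai API), just illustrating the framework/version/input/output combination from the example above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelConfig:
    # Hypothetical configuration a user would pick from a tested matrix.
    framework: str     # e.g. "pytorch"
    version: str       # e.g. "1.5.0"
    input_type: str    # e.g. "image"
    output_type: str   # e.g. "bounding_boxes"

# The example from above: a PyTorch 1.5.0 model that takes an image
# and returns a set of bounding boxes.
cfg = ModelConfig(
    framework="pytorch",
    version="1.5.0",
    input_type="image",
    output_type="bounding_boxes",
)
print(cfg)
```

The point is that the user picks one cell out of a pre-tested grid of (framework, version, input, output) combinations rather than wiring up serving infrastructure by hand.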

You're right on about having things like logging and tuning built in for everyone from the get-go.

I will obviously never be cheaper than the big cloud providers. I want to compete on ease of use and on the soft costs of hiring for, or dealing with, DevOps.

Thanks for your kind words!
