Hosting AI models is challenging: it requires GPUs, and most of the time data scientists are left to design and build the infrastructure themselves. Configuring NVIDIA drivers and CUDA is another hurdle that can consume days or weeks (add a few more for TensorRT).
That’s why we built Inferrd: a Heroku-style GPU hosting platform designed specifically for AI models. All the libraries come pre-configured, there are no cold starts, and most major frameworks (spaCy, TensorFlow, PyTorch, Keras, scikit-learn, XGBoost) are supported out of the box. We also leverage GPU sharing to offer very competitive pricing, and we provide API-key authentication, smooth scaling, and monitoring tools.
You can also use Inferrd directly from your notebook and have a model running in about a minute.
AI hosting is a very interesting new space, and we’ll be around to answer any questions!