
Show HN: Build Machine Learning Web-Service with Python and Django - pplonski86
https://github.com/pplonski/my_ml_service
======
ampdepolymerase
Calling the ML model should be done via Django Channels or a similar
background queue, not directly within the view (request handler): if the
inputs or the inference are too big, it will block and cause load-handling issues.
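The principle can be illustrated with a stdlib-only sketch: a queue plus a worker thread standing in for Celery/Channels, with a placeholder `predict` function (not the repo's actual model):

```python
import queue
import threading

# Hypothetical stand-in for a heavy model; in production this would be a
# deserialized model loaded once per worker process.
def predict(features):
    return sum(features)  # placeholder inference

jobs = queue.Queue()
results = {}

def worker():
    while True:
        job_id, features = jobs.get()
        results[job_id] = predict(features)  # heavy work off the request path
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# The view would only enqueue the job and return a job id immediately:
jobs.put(("job-1", [1.0, 2.0, 3.0]))
jobs.join()  # in a real service the client polls for the result instead
print(results["job-1"])  # 6.0
```

The point is the decoupling: the request handler returns right away, and the client fetches the result later by job id.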

~~~
tecoholic
I agree. ML algorithms and models can run to huge sizes. I have one with
half a gig of serialised model that, when running, takes up close to 3.5 GB.
This approach of using it directly in the views would pose scaling issues.

------
theo31
For anyone looking to deploy ML models a little bit easier, check out
[https://inferrd.com](https://inferrd.com)

It's a simple drag and drop to deploy tensorflow, scikit, spacy, keras or
pytorch.

~~~
pplonski86
Nice!

1\. Is it open-source?

2\. Can I add models with REST API?

3\. How do you handle large models? Do they run in the background, or do you
just use larger servers?

~~~
theo31
1\. No it's not open source, but I'm not excluding it from the roadmap :) The
selling point is really how easy it is.

2\. Yes! We will be releasing tokens soon so that anyone can interact with the
API.

3\. Models can be up to 1GB, each model gets its own server, and we only do
real-time predictions for now, meaning the models are constantly running,
waiting for a request.

------
denimboy
A Django app for managing training data and active labeling would be a nice
complement to this. Something similar to AWS GroundTruth.

We should start thinking about an ML life cycle where data is ingested, data
labeled, models trained, models tested, models deployed and monitored.
Rinse, lather, repeat.

~~~
pplonski86
Agree, a data labeling app in Django would be interesting. It would require a
front-end with a lot of features to make it usable.

------
jeffrwells
For anyone who is looking to make deploying ML models easier, I created
[https://deepserve.ai](https://deepserve.ai)

You can run `deepserve deploy` on the command line and it will stand up a REST
endpoint for your model. I'm also building client side SDKs to make it easier
to call on these models in your application code.

It's still in beta but feel free to reach out to me jeff @ deepserve.ai

~~~
pplonski86
The website is down. Is it open-source? Do you know any open-source
alternatives?

------
Aliabid94
[Gradio](https://gradio.app) can serve the same purpose and save a lot of
time: it makes it super easy to serve any Python-based ML model. It provides
dozens of interface components, including text, image, audio, files, and
dataframes, to support many different types of predictions.

------
tecoholic
I noticed that the models use CharField with a max_length of 10K. Why not use
TextField?

~~~
jamestimmins
Depending on the underlying DB, these might be handled the same way at the DB
level. For example, Postgres uses the same field type for both CharField and
TextField.

~~~
pplonski86
In this tutorial I've used SQLite; I use PostgreSQL for production services. I
don't remember now why I used that type.

------
VHRanger
Why use Django when Flask is so much simpler and has less overhead?

For just a REST ML model endpoint, Flask does everything you need, and no
more. It's a perfect fit.

~~~
anaganisk
Maybe OP wanted to take advantage of batteries-included features like the
admin panel, templates, etc., or is just very comfortable with Django.

If we are talking speed and simplicity, then I would rather go with FastAPI
than Flask.

~~~
pplonski86
You are right! I like the Django admin panel: you get a UI without writing
any code, and I'm comfortable with Django + DRF.

In Flask you need to add many things to make it usable.

If you check the tutorial's code you will see that it differs from most
ML-to-REST-API tutorials. It's not only setting up the endpoint: you can have
many algorithms with many versions and stages (testing, production), and
there is an A/B testing example. Of course you can do all of this in Flask,
but I prefer Django because many things are already there.
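Not the repo's exact implementation, but the A/B idea can be sketched as a weighted router between two algorithm versions (the lambdas stand in for real models):

```python
import random

class ABTest:
    """Route a configurable share of requests to each of two algorithm versions."""

    def __init__(self, algo_a, algo_b, share_a=0.5):
        self.algo_a = algo_a
        self.algo_b = algo_b
        self.share_a = share_a  # fraction of traffic sent to version A

    def predict(self, features):
        algo = self.algo_a if random.random() < self.share_a else self.algo_b
        return algo(features)

# two hypothetical model versions
v1 = lambda x: "positive"
v2 = lambda x: "negative"

ab = ABTest(v1, v2, share_a=1.0)  # share_a=1.0 sends all traffic to v1
print(ab.predict([1, 2]))  # positive
```

In practice you would also log which version served each request, so the two versions' accuracy can be compared before promoting one to production.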

