- ML models are often trained on narrow, specific datasets and fail in embarrassing or costly ways when deployed in real-world settings where the inputs look different from the training data.
- The worst part is that these limitations usually go undetected until after the model is released, because the ML engineers themselves may not realize that their model is biased, unfair, or fragile.
It takes a diverse team of users and testers to surface these problems. Gradio gives you the ability to instantly create a web interface around your model that you can share with a public link. Your users, testers, or collaborators can then try your model right from their browsers, without installing any software, understand its limitations, and point out biases in the model and its data.