I was browsing the projects and found my own, "Quasi-Recurrent Neural Network". I was curious what type of model would have been trained, as the QRNN is a component, not a model. Upon clicking it, I'm shown the README I wrote on GitHub and a "Get model" button that takes you to the GitHub repo, where I know there isn't a model.
This is true for other projects I looked at.
A model zoo without the models seems misleading, especially when the claim is "Model Zoo - Pretrained deep learning models" ...
Combine that with the site already having an Advertise button and I get a little concerned.
The current model offering is limited, but it seems to be headed in the right direction.
The goal of the site was to create a common platform to search and aggregate models (or code) available for reimplementation. I'm planning to add tools to allow users to flag or report errors on pages, since most of the content is automatically scraped.
Not only does it not do that, but it apparently only scrapes third-party resources with little to no manual oversight of whether, e.g., the code repository even contains a pretrained model. But wait: your plan going forward is to offload the moderation onto users?! So instead of you being responsible for the content, the users (who ostensibly came to your site because they couldn't find what they were looking for) are now obligated to do the due diligence. What's the difference between this and just searching GitHub for paper titles or keywords?
The final issue I have is more meta. I don't really see the value of the site as implemented. Why are you automatically scraping all of these resources? Why don't you curate them yourself and demonstrate that competency to the community? As it stands, this is blatantly misleading and seems like a transparent attempt to cash in on buzzwords for views, regardless of whether or not the user is ultimately helped by the content.
Sorry for being harsh, but this is kind of brazenly inept. I can understand that automatically scraping these resources gives you a lot of leeway to scale up inclusion and make the site more viable. However, you really can't just turn on a scraper, point it at a few keywords, and tell your users to sort it out. Users want this site to make their lives easier instead of wading into the complexity themselves. You're not reducing that complexity; you're just adding another layer of abstraction to it.
It's not about an accurate link caption. It's that you made a site whose whole purpose is getting models, you call it a "model zoo", and then you don't link to models and don't check whether models even exist at your links! A site for aggregating models (for the projects that provide them) and code already exists: it's GitHub.
Your site simply does not work.
You don't have to manually verify each one -- that's what machines are for. If you find a file that is recognized as a model, list it and link to the model. Obviously a README is not a model.
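To make that concrete, here's a minimal sketch of the kind of automated check I mean, using the public GitHub API and a heuristic list of model-weight file extensions (the extension list and the salesforce/pytorch-qrnn target are just illustrative choices):

    import requests

    # Heuristic: file extensions that usually indicate serialized model weights.
    MODEL_EXTENSIONS = (".h5", ".hdf5", ".pt", ".pth", ".ckpt",
                        ".pb", ".onnx", ".caffemodel", ".npz")

    def find_model_files(owner, repo):
        """Return paths in a GitHub repo that look like pretrained model files."""
        # Look up the default branch, then fetch the full file tree in one call.
        meta = requests.get(f"https://api.github.com/repos/{owner}/{repo}").json()
        tree = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/git/trees/"
            f"{meta['default_branch']}",
            params={"recursive": "1"},
        ).json()
        return [e["path"] for e in tree.get("tree", [])
                if e["type"] == "blob"
                and e["path"].lower().endswith(MODEL_EXTENSIONS)]

    if __name__ == "__main__":
        hits = find_model_files("salesforce", "pytorch-qrnn")
        # Only show a "Get model" button when this list is non-empty.
        print(hits or "no model files found")

This would still miss weights published via GitHub Releases or external hosts (Google Drive links in a README, etc.), so it's a filter rather than a guarantee -- but it already catches the obvious "there is no model here at all" cases.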
If anyone is looking for curated pre-trained DL models that have IPython Notebooks that run out of the box, check out https://modeldepot.io
If you have some pretrained models that you'd love to share, feel free to share them via the submit button :)
We might not have the volume of modelzoo.co, but we focus on quality and understandability, especially for those who are newer to the ML/DL field.
I thought that by clicking "Get Model" I would get the model right away, but it just redirects me to the GitHub page of the project.
There is certainly value in gathering information about all these models in one place, but I feel more friction could be eliminated by providing the ability to download the model files directly.
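For what it's worth, when the weights are committed to the repo itself, a direct link is trivial to construct from the file path; a sketch, where "weights/model.pt" is a made-up path used only as an example:

    # Hypothetical helper: raw.githubusercontent.com serves file contents
    # directly, so a detected model path maps straight to a download link.
    # (Weights in GitHub Releases or on external hosts need different handling.)
    def direct_download_url(owner, repo, branch, path):
        return f"https://raw.githubusercontent.com/{owner}/{repo}/{branch}/{path}"

    # "weights/model.pt" is an illustrative path, not a real file.
    print(direct_download_url("salesforce", "pytorch-qrnn", "master", "weights/model.pt"))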
It's a pity - I'm still looking for a WikisumWeb pretrained model!
Perhaps the registry would need to be a lot larger before semantic search would be really useful?
I'll just add to the other comments here that I think you need to try a little harder to make this site do what it claims to do.