
Announcing Confident Learning: Finding and Learning with Dataset Label Errors - cgn
https://l7.curtisnorthcutt.com/confident-learning
======
cgn
Hi, Hackers. I'm excited to share confident learning (CL), an approach for
characterizing, finding, and learning with label errors in datasets. To
promote and standardize future research in learning with noisy labels and
weak supervision, I've also open-sourced the cleanlab Python package:
[https://pypi.org/project/cleanlab/](https://pypi.org/project/cleanlab/)

Post: [https://l7.curtisnorthcutt.com/confident-learning](https://l7.curtisnorthcutt.com/confident-learning)

Paper: [https://arxiv.org/abs/1911.00068](https://arxiv.org/abs/1911.00068)

Code:
[https://github.com/cgnorthcutt/cleanlab/](https://github.com/cgnorthcutt/cleanlab/)
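
For a quick sense of what using the package looks like, here's a minimal
sketch on synthetic data. It assumes the current cleanlab interface
(`get_noise_indices` under `cleanlab.pruning`); check the repo docs for the
exact API, and note that the dataset and variable names here are purely
illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab.pruning import get_noise_indices

# Build a toy 3-class dataset and corrupt ~10% of the labels.
X, y = make_classification(n_samples=1000, n_classes=3,
                           n_informative=6, random_state=0)
rng = np.random.RandomState(0)
flip = rng.rand(len(y)) < 0.10
noisy_labels = np.where(flip, (y + 1) % 3, y)

# Out-of-sample predicted probabilities via cross-validation
# (CL needs held-out probabilities, not train-set predictions).
pred_probs = cross_val_predict(LogisticRegression(max_iter=1000),
                               X, noisy_labels,
                               cv=5, method="predict_proba")

# Boolean mask over the dataset; True marks a likely label error.
is_error = get_noise_indices(s=noisy_labels, psx=pred_probs)
print(f"Flagged {is_error.sum()} likely label errors "
      f"({flip.sum()} labels were actually flipped)")
```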

Abstract: Learning exists in the context of data, yet notions of confidence
typically focus on model predictions, not label quality. Confident learning
(CL) has emerged as an approach for characterizing, identifying, and learning
with noisy labels in datasets, based on the principles of pruning noisy data,
counting to estimate noise, and ranking examples to train with confidence.
Here, we generalize CL, building on the assumption of a classification noise
process, to directly estimate the joint distribution between noisy (given)
labels and uncorrupted (unknown) labels. This generalized CL, open-sourced as
cleanlab, is provably consistent under reasonable conditions, and
experimentally performant on ImageNet and CIFAR, outperforming recent
approaches, e.g. MentorNet, by 30% or more, when label noise is non-uniform.
cleanlab also quantifies ontological class overlap, and can increase model
accuracy (e.g. ResNet) by providing clean data for training.
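
To make the "counting" principle from the abstract concrete, here is a
simplified numpy sketch of the confident-joint estimate: each class gets a
threshold equal to the model's average self-confidence on examples given that
label, and off-diagonal counts estimate label noise. This is an illustration
of the idea, not the cleanlab implementation itself, and the function name is
mine:

```python
import numpy as np

def confident_joint(noisy_labels, pred_probs):
    """Simplified confident-joint count. C[i, j] counts examples with
    noisy label i whose predicted probability for class j clears class
    j's threshold (the mean predicted probability of j among examples
    labeled j). Off-diagonal mass estimates label noise."""
    n, m = pred_probs.shape
    # Per-class threshold: average self-confidence of class j
    # (assumes every class has at least one labeled example).
    thresholds = np.array([
        pred_probs[noisy_labels == j, j].mean() for j in range(m)
    ])
    C = np.zeros((m, m), dtype=int)
    for x in range(n):
        # Classes whose predicted probability clears their threshold.
        above = np.flatnonzero(pred_probs[x] >= thresholds)
        if len(above) > 0:
            # Count the example under its most confident qualifying class.
            j = above[np.argmax(pred_probs[x, above])]
            C[noisy_labels[x], j] += 1
    return C

# Example: C[1, 0] counts examples labeled class 1 that the model
# confidently predicts as class 0 -- candidates for label errors.
```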

