I think another very interesting point brought up was that even such a well-ranked model on Kaggle did a poor job when applied to a different dataset and had to be retrained, which is a nice example of over-fitting.
Excellent article and nice details, thanks!
This second challenge actually featured a different dataset, with different hydrophones used, etc. But even without retraining (which was rather trivial to do at that point; the hard work of finding the right hyperparameters had already been done), I would still have scored well above 90%. And I think Nick Kridler reported the same.
So overfitting, yes, but not too much, considering a different sensor was used.
I guess it is technically overfitting, but overfitting sounds wrong when you didn't have access to the extra data in the first place (or even realize you were being given pre-cleaned-up data).
Solution: do the first de-noising pass with actual noise?
The work is great, though. More of this and fewer Bitcoin and GoDaddy announcements and less SV gossip politics on the front page, please.
Learning fluent English doesn't teach you Portuguese.
From the perspective of other source data, I wonder if that limits you to five features (X, Y, and RGB), or whether you could extend to fictional, non-human-visible colours as extra features and just be unable to view them in the weight maps.