
ML Algorithms Addendum: Passive Aggressive Algorithms - gbonaccorso
https://www.bonaccorso.eu/2017/10/06/ml-algorithms-addendum-passive-aggressive-algorithms/
======
clircle
It would be nice if there was a little bit more motivation for using this
method. You claim that the performance is superior to other methods, but there
is no evidence, not even empirical.

Also, what's the deal with the error plot? Is that the error on the training
set? Why do I care about that? I care about out-of-sample error.

~~~
gbonaccorso
Thank you for your comment. I'm working on some examples based on methods
which differ from the ones adopted in the original paper.

In the paper, the comparison is made against an online perceptron and the
MIRA algorithm. Of course, I don't want to repeat the same experiments (which
I believe are correct), so I'm benchmarking in a different way and on more
complex datasets.

I understand this is a limitation, but I'm going to supplement the post with
additional information.

~~~
gbonaccorso
For the error plot: as this is an online method, I implicitly treat every
sample as a test one. The goal was to show that after N corrections, the
algorithm converges to a weight configuration that is compatible with a
specific data distribution. As I wrote, this can be a limitation (common to
many online methods), because it's almost impossible to filter outliers out,
and the tolerance to noise is surely worse than that of an offline algorithm.
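
Concretely, that "test-then-train" (prequential) evaluation can be sketched
like this; the snippet uses scikit-learn's PassiveAggressiveClassifier and a
synthetic dataset as illustrative assumptions, not the exact code behind the
post:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import PassiveAggressiveClassifier

    # Synthetic binary stream (illustrative only, not the post's dataset).
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    classes = np.unique(y)

    clf = PassiveAggressiveClassifier(C=0.01, random_state=0)
    errors = []

    for i, (x_i, y_i) in enumerate(zip(X, y)):
        x_i = x_i.reshape(1, -1)
        if i > 0:
            # Each incoming sample is first treated as a test point...
            errors.append(int(clf.predict(x_i)[0] != y_i))
        # ...and only afterwards used to correct the weights.
        clf.partial_fit(x_i, [y_i], classes=classes)

    # Cumulative online error; it should shrink as the weights stabilize.
    print("online error rate:", np.mean(errors))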

------
tw1010
Totally off topic, but just a word of advice. The recommendations below each
article on your main page are really visually distracting (especially since
they "fade in" only after you've scrolled down to them). And a lot of the
recommendations are repeated (meaning the same article is recommended over and
over as you scroll down). Maybe you don't care but just thought I would give
you some feedback if you're interested.

~~~
gbonaccorso
Thank you. I'm going to check.

