
DeepMarks: A Digital Fingerprinting Framework for Deep Neural Networks - lainon
https://eprint.iacr.org/2018/322
======
black_puppydog
Is it just me, or does anyone else also think the very last thing the world
needs right now is for the Deep Learning crowd to merge with the pro-DRM
crowd? This seems even worse than copyright on databases and such...

EDIT maybe to clarify what I'm afraid of here: Copyright law is _draconian_
these days, and as someone handling pre-trained models on a regular basis I am
(was?) in the lucky position that all the models available are free, as is the
software executing them. If this takes hold, those days are over and we might
start seeing the kind of horror headlines that come with modern day arbitrary
copyright enforcement...

~~~
adfm
Waaaaaaaay too late, I'm afraid...
[https://www.tineye.com/technology](https://www.tineye.com/technology)

~~~
black_puppydog
Content fingerprinting has been around for ages, yes. This is _model_
fingerprinting.

------
mtrn
From the abstract:

> Sharing the trained DL models has become a trend that is ubiquitous in
> various fields.

Apart from the Model Zoo[1], are there any other sites for sharing serialized
models?

[1] [https://github.com/BVLC/caffe/wiki/Model-Zoo](https://github.com/BVLC/caffe/wiki/Model-Zoo)

~~~
black_puppydog
It's not very organized, for sure. But in research, barely anyone I know
trains models from scratch just to _use_ them afterwards. In PyTorch and
TensorFlow, you don't even have to go to a model zoo page to use the most
common architectures; the library will just download them on demand into a
folder under ~. So yes, using and sharing models is ubiquitous from what I can
tell.

~~~
mtrn
Yes, I've used `load_data` and friends, and it is certainly convenient. I was
just curious about a kind of open-source marketplace for models, where you
could find niche models that people train in their spare time.

------
black_puppydog
Because I see some people misunderstand the goal of this:

> DeepMarks introduces the first fingerprinting methodology that enables the
> model owner to embed unique fingerprints within the parameters (weights) of
> her model and later __identify undesired usages of her distributed models__

This is _not_ a content fingerprinting technology, but rather a method for
$google/$facebook to prevent/detect use of their models by others.
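To make the quoted idea concrete: a fingerprint can be hidden in the weights themselves by nudging them so that secret random projections of the weight vector encode the owner's bit string. The toy sketch below (NumPy only) illustrates that general embed/extract principle; it is _not_ DeepMarks' actual scheme, and every name, size, and constant in it is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one layer's weights flattened into a vector.
n_params, n_bits = 256, 16
weights = rng.normal(scale=0.1, size=n_params)

# The owner's secret key: a random projection matrix X and a fingerprint b.
X = rng.normal(size=(n_bits, n_params))
bits = rng.integers(0, 2, size=n_bits)

def embed(w, X, b, strength=0.01, steps=200):
    """Nudge weights so that the sign of each projection X @ w encodes one bit."""
    target = 2.0 * b - 1.0          # map {0, 1} -> {-1, +1}
    w = w.copy()
    for _ in range(steps):
        proj = X @ w
        # Hinge-style update: push every projection still below margin 1
        # further toward its target sign.
        viol = (proj * target) < 1.0
        w += strength * (X[viol].T @ target[viol]) / len(b)
    return w

def extract(w, X):
    """Read the fingerprint back out: positive projection -> 1, else 0."""
    return (X @ w > 0).astype(int)

marked = embed(weights, X, bits)
print(extract(marked, X))        # recovers the embedded bit string
```

A real scheme would fold the embedding term into the training loss so accuracy is preserved, and would have to survive fine-tuning and pruning; this sketch only shows why a party holding the secret `X` can later test a suspect model for her fingerprint.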

------
RcouF1uZ4gsC
I wonder if you can leverage adversarial networks to eliminate the fingerprint
while still maintaining model performance.

------
rerandomizer
Looks like this has already been done:
[https://arxiv.org/abs/1802.04633](https://arxiv.org/abs/1802.04633)

