
Affinity Profiling and Discrimination by Association in Behavioural Advertising [pdf] - DyslexicAtheist
https://poseidon01.ssrn.com/delivery.php?ID=194116069100028027005087019002109122123035068045044085007112029075102008006127078066019021099031023051029091100079092078067027054018089018064000085115108028022083072002007039029083079000104064122099090078001108005072066007085010007015001124121020085111&EXT=pdf
======
DyslexicAtheist
_Abstract:_

Since the approval of the EU General Data Protection Regulation (GDPR) in 2016, it
has been widely and repeatedly claimed that the GDPR will legally mandate a
‘right to explanation’ of all decisions made by automated or artificially
intelligent algorithmic systems. This right to explanation is viewed as an
ideal mechanism to enhance the accountability and transparency of automated
decision-making. However, there are several reasons to doubt both the legal
existence and the feasibility of such a right. In contrast to the right to
explanation of specific automated decisions claimed elsewhere, the GDPR only
mandates that data subjects receive meaningful, but properly limited,
information (Articles 13-15) about the logic involved, as well as the
significance and the envisaged consequences of automated decision-making
systems, what we term a ‘right to be informed’. Further, the ambiguity and
limited scope of the ‘right not to be subject to automated decision-making’
contained in Article 22 (from which the alleged ‘right to explanation’ stems)
raise questions about the protection actually afforded to data subjects. These
problems show that the GDPR lacks precise language as well as explicit and
well-defined rights and safeguards against automated decision-making, and
therefore runs the risk of being toothless. We propose a number of legislative
and policy steps that, if taken, may improve the transparency and
accountability of automated decision-making when the GDPR becomes applicable in
2018.

