This is an awesome (and thorough) response and a great idea. I totally agree about the arms race and how AI + crowdsourced data could be applied to create much more realistic fake viewing patterns. I'm super-glad that this conversation is taking place.
But wouldn't collecting viewing habits and then using AI to define (and emulate) real-looking behavior immediately put the developer(s) in that moral grey area so many algorithms occupy? Technically it could be done, and it would be fascinating to work on, but we'd have to start with a huge browsing dataset (creepy), process it to figure out the patterns (exactly what this tool is trying to subvert), and then feed the result back as output from within the user's browser (probably producing indistinguishable-but-AI-driven data and creating a feedback loop). It's a murky space to wade into, and one that needs a lot more conversation.
Instead I decided to just keep it simple. The first page is chosen at random from a list of (user-approved) sites. A link on that page is chosen at random from the links that open in the same window and point to the same domain, and that's clicked. That's repeated a somewhat-random number of times, usually 2-7, before a new site is chosen from the user-approved list and the process starts over. A rough sketch of that loop is below.
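Here's a rough sketch of that loop in TypeScript, just to make the logic concrete. To be clear, this isn't the extension's actual code: the site list, the helper names, and the fetch/DOMParser approach are all placeholder assumptions (and cross-origin fetches like this would only work with the right extension permissions).

    // Rough sketch of the random-walk loop described above. Assumes a browser
    // context (fetch + DOMParser); site list and helpers are illustrative.
    const approvedSites = ["https://example.com", "https://example.org"];

    function pickRandom<T>(items: T[]): T {
      return items[Math.floor(Math.random() * items.length)];
    }

    // Keep only links that open in the same window and stay on the same domain.
    function sameDomainLinks(doc: Document, pageUrl: string): string[] {
      const origin = new URL(pageUrl).origin;
      return Array.from(doc.querySelectorAll<HTMLAnchorElement>("a[href]"))
        .filter((a) => a.getAttribute("target") !== "_blank")
        .map((a) => {
          // Resolve relative hrefs; drop anything that won't parse.
          try { return new URL(a.getAttribute("href")!, pageUrl).href; }
          catch { return ""; }
        })
        .filter((href) => href.startsWith(origin));
    }

    async function randomWalk(): Promise<void> {
      let url = pickRandom(approvedSites);              // random user-approved start
      const steps = 2 + Math.floor(Math.random() * 6);  // somewhat random: 2-7 clicks

      for (let i = 0; i < steps; i++) {
        const html = await (await fetch(url)).text();
        const doc = new DOMParser().parseFromString(html, "text/html");
        const links = sameDomainLinks(doc, url);
        if (links.length === 0) break;                  // dead end: abandon this walk
        url = pickRandom(links);                        // "click" a random in-domain link
      }
      // ...then the caller picks a new approved site and starts over.
    }

Driving real link clicks in a tab (as the extension presumably does) would replace the fetch/parse steps, but the selection logic would look about the same.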
Check out Cathy O'Neil's definition of "Weapons of Math Destruction" (good overview of her book here: http://money.cnn.com/2016/09/06/technology/weapons-of-math-d...) - I'd love to hear your thoughts on that framework for determining the morality of algorithms.