
Banks and Retailers Are Tracking How You Type, Swipe and Tap - JumpCrisscross
https://www.nytimes.com/2018/08/13/business/behavioral-biometrics-banks-security.html
======
lostimpo
Does anyone else feel a sense of overwhelming futility with respect to
internet privacy? I'm at the point where I feel like I might as well just use
all the fancy features and devices, privacy be damned.

It feels like it's a giant waste of time, even if you go out of your way to
use "privacy protecting" expensive devices and software. Use an iPhone or
LineageOS! Use Firefox! Don't use Google services! Don't use iCloud, back up
everything to a local NAS! Pay for your email services!

It all feels mostly pointless. There's always another thing right around the
corner. You're always defeated and tracked -- this time with "behavioral
biometrics." For the average person, why not just give up? Throw an Amazon
Echo in the corner and at least you can control your lights and play Jeopardy.

It's totally exhausting.

~~~
beat
I deal with it mostly by asking myself what "privacy" I'm really concerned
about. Let's see... I don't want anyone actually stealing my money. I don't
want sensitive security information that can be used to steal my money (i.e.,
passwords) falling into unauthorized hands.

But my personal history? My photos? Words I've written? Oh well. If someone
wants that stuff, they can probably get it. And the gaping maw of robotic
commerce doesn't actually care about me personally, it only cares what it can
sell me.

I'm not worried about the police or some authoritarian tyranny on a _personal_
level, on the level where what I say on the internet matters. I worry about it
in an _impersonal_ way. When the robot overlords are rounding up the granola-
munching people of south Minneapolis for extermination, they won't be checking
our internet history first.

~~~
plurgid
> But my personal history? My photos? Words I've written? Oh well. If someone
> wants that stuff, they can probably get it. And the gaping maw of robotic
> commerce doesn't actually care about me personally, it only cares what it
> can sell me.
>
> I'm not worried about the police or some authoritarian tyranny on a personal
> level, on the level where what I say on the internet matters.

Yeah, but then 2016 happened. This sort of "tactical positioning" on internet
privacy is where I was prior to that year. The fact of the matter is that
"personal threat exposure", while not completely irrelevant, misses the forest
for the trees.

It's not YOU they're trying to manipulate, it's the herd. And it works. And
that eventually comes around to impacting you just as personally as if you'd
posted your bank account number and ssn on facebook.

~~~
dhimes
By "2016 happened" are you talking about the US presidential election?

~~~
cgriswald
I think he is... and if so, he either misspoke and meant to say "9/11" or
"the PATRIOT Act" happened, or he’s young.

------
blakesterz
After reading this one I'm not sure this is as bad as I thought it would be
based on the headline:

"On new account applications, for example, behavioral biometric systems pay
close attention to where and when applicants pause. A legitimate applicant
typically types personal information — their name, their address, their Social
Security number — fluidly, with few breaks. A scammer will often either cut
and paste or take breaks to consult their notes."

It's currently used to do security checks, and it's being done while you're
interacting with the site/app. Seems like a useful security check, though I
suppose there's some potential for abuse, somehow?

I think Google's CAPTCHA thing does this too? It watches how your mouse moves
and knows you're a real person (or something close to that).

~~~
21
> though I suppose there's some potential for abuse, somehow?

One day you want to post online a video obtained through illegal means (for
example, you film animal abuse in a US abattoir). You buy a burner phone and
go to an open Wi-Fi hotspot, but the behavioral fingerprint you left behind
while using YouTube is used to track you down.

I guess this is still fair game, you were breaking the law after all, but you
get the idea.

~~~
jack_pp
I really really doubt you could get such a specific fingerprint and even so,
you could automate video uploading to YouTube in a script

------
giarc
Journalists need to realize that likely every app is doing this. Look at the
companies providing this data (FullStory, MouseFlow, AppSee etc etc), they are
industry agnostic. It's as easy as installing a library or adding a few lines
of code and voila, you have this same data.
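A toy illustration of how little code that takes (hypothetical, not any real vendor's SDK): a handler buffers input events and a flush ships them off in a batch.

```javascript
// Toy illustration of event capture: nothing here is specific to any
// real analytics vendor.
const buffer = [];

// Called from input event listeners; stores one event's essentials.
function recordEvent(type, x, y, t) {
  buffer.push({ type, x, y, t });
}

// Serializes and sends the buffered events, then clears the buffer.
// Returns the number of events sent.
function flush(send) {
  if (buffer.length === 0) return 0;
  const n = buffer.length;
  send(JSON.stringify(buffer));
  buffer.length = 0;
  return n;
}

// In a browser this would be wired up roughly as:
// document.addEventListener('mousemove',
//   (e) => recordEvent('move', e.clientX, e.clientY, Date.now()));
// setInterval(() => flush((p) => navigator.sendBeacon('/collect', p)), 5000);
```

Which is why "a few lines of code" is not an exaggeration: the browser hands all of this to any script on the page.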

~~~
Bartweiss
This would be a huge improvement in the quality of these stories.

There's a meaningful difference between something like Red Shell, where a few
games gathered data that most others don't collect, and the stream of "apps
log events!" stories which find some examples and present them as company-
specific outrages instead of a nasty practice across the industry. Without
understanding that, it's hard to have a useful discussion about tracking.

------
nathanaldensr
Sigh... give them an inch and they take the universe. It seems like nearly
every piece of local data an application can gather is being abused for
privacy invasion, this time under the guise of fraud prevention. It's
bullshit; we all know how this information will be used. Swiping, mouse
movements, keyboard input habits, etc. will all be used to build a profile
about you and target ads. What a sad, sad use of humanity's infinite
ingenuity.

~~~
tomasien
I think there's a fairly likely future that the banks and financial companies
will try (and IMO succeed) to ban the providers of this data from using it for
anything else. As in: we will not use a biometrics service that uses our
users' biometric data for anything other than fraud prevention and
authentication.

The banks are ultra-sensitive about personal data at the moment at least, but
they really want these kinds of solutions to stop fraud without interrupting
your user experience. They have the scale to demand vendors stay in a
particular use case and I think they'll succeed.

I would expect the future you're predicting to exist, but I expect it to come
from e-commerce companies and others adopting tech from similar vendors but
without demanding any restrictions on data use.

------
sailfast
I understand these metrics are easy to capture, but only if the browser /
device makes them accessible via APIs. How difficult would it be to notify users
(like GPS use, or Notifications on a phone) that X information is also being
captured and allow for an opt-in?

It's not trivial as obviously applications need to know at some level where
users are clicking / tapping to function, but does the device need to share
that with the code being executed and/or can it hide that layer?

I'd pay good money (or pay money to support) a browser that did this well on
mobile / desktop. (Tails browser has similar prompts for this kind of thing)

Admittedly, I am not well versed in the lower-level workings of the browser,
so this may be a stupid thing to ask for.

~~~
Raphmedia
How would the browser know which to block?

`onClick(){fetchAllForumTopics();}` vs `onClick(){sendTrackingData();}`. Both
are simple ajax calls.

Even if you somehow managed to know which calls to block, you could simply
hack it by calling a jpg with extra parameters and then have the server save
those:

`onClick(){body.css({backgroundImage: 'http://mysite.com/images/logo.png?x=123&y=512&elementName=BankAccountNumber'});}`

~~~
sailfast
I wasn't thinking of blocking specific function calls so much as making that
data unavailable to the page in the first place: selectively blocking mouse
x,y coordinate tracking, etc., before the code on the page was executed.

Again, this may not be possible without breaking the web given the structure
of the browser and what pages need to render at a low level. Perhaps client-
side middleware that only passes user data when an action is performed rather
than constantly streaming coordinate and interaction data all the time?
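One hypothetical shape for that client-side middleware (the event set and field names here are made up): a gate that drops continuous coordinate streams and forwards only discrete actions, stripped of position data.

```javascript
// Hypothetical sketch of the middleware idea above: pass events through
// only when they represent a deliberate user action, and strip the
// coordinate/timing data that behavioral profiling feeds on.
function gateEvent(e) {
  const allowed = new Set(["click", "submit", "change"]);
  if (!allowed.has(e.type)) return null; // drop mousemove/scroll/keytiming streams
  return { type: e.type, target: e.target }; // no coordinates, no timestamps
}
```

The open question the commenter raises still applies: some legitimate UI (drag-and-drop, drawing) genuinely needs the continuous stream, so a real implementation would likely need per-site opt-ins.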

~~~
Kalium
This is a great idea! It's always a good idea to control carefully what
information is available to a system you might not always be able to trust
completely. It's especially important to be sure you're not leaking info
through a channel you didn't consider, which at times means that an
allowlisting approach is preferable to a blocklisting approach.

Your heart is in the right place. Keep iterating!

------
thecombjelly
This is abuse, and it's wrong, just as it is for every website or app that
does something similar. And it couldn't exist if we as software creators
didn't make it. We as the creators of software need to examine everything we
create for its potential to be used against people, and if it could be used
to abuse someone (say, by exposing interfaces that can be used for
biometrics), we need to refuse to make it. People have survived for ages
without your software, and they might be better off if it stays that way.
Don't be a part of this.

~~~
boboboba
But how would that work? If I work on a word processing application, that
application can be used to write horrible lies and propaganda that many people
would believe. Are you saying I shouldn't work on such a product because of
that?

If you look at the story of how browser cookies came about[0], it wasn't for
advertising and tracking people. It was just intended to be a simple way to
allow users to keep state between web pages to provide services the people
wanted. It was later hijacked by ad companies to do surveillance and then the
government hopped on board.

[0]
[https://en.wikipedia.org/wiki/HTTP_cookie#History](https://en.wikipedia.org/wiki/HTTP_cookie#History)

~~~
thecombjelly
I think you make a great point. We _should_ question whether it's a good idea
to make the word processor. I'm not saying it shouldn't necessarily be made
but we should think about how it can be used and if the potential for harm is
too high or if it can be mitigated. My point is these things should always be
kept in mind and we need to not be afraid of not making the technology.

------
walterbell
_> Because your reaction is so individual, it’s hard for a fraudulent user to
fake._

Malware can record behavioral user data from a compromised device then
replay/modify to simulate the human on that device or another. Yet another
arms race for black marketable behavioral "signatures".

------
econ4all
Do newspapers have to start a privacy panic about every damn thing? There are
other ways to frame these reports. It's like they won't be satisfied until
everything done on an electronic device is outlawed.

~~~
CaptSpify
Do we technologists need to build privacy invading tools on every damn thing?

They are reporting on a problem that we keep creating. I don't see any
problems with this.

~~~
radiorental
Huh? The customer ('banks') asked and paid for this functionality. Not the
other way around.

You make it sound like software engineers lobbied bank CTOs to install this
under duress.

~~~
CaptSpify
A) Where do you think banks got the idea? They just copied an existing shady
business model.

B) The developers could have said no. If my boss got a contract with a bank
that said I had to track users against their will, I would turn it down.

The point is that the newspapers are bringing up a legitimate problem. It's
not a "panic", it's a valid concern.

~~~
jasonlotito
> Where do you think banks got the idea? They just copied an existing shady
> business model.

Tracking user input methods is not a shady business model. I was doing this
for web forms back in 2003/4 to fight fraud, improve the user experience, and
find problems.

> If my boss got a contract with a bank that said I had to track users against
> their will

But they aren't being tracked against their will, so you would have done this.

~~~
CaptSpify
Just because you have done it before doesn't mean it isn't a shady business
model. Are they being notified that they are tracked?

Just because you don't have a problem with this practice doesn't mean it is
without problems.

~~~
jasonlotito
> Just because you have done it before doesn't mean it isn't a shady business
> model.

It's not a business model. It's a security mechanism. Willing to bet that you
track people.

> Just because you don't have a problem with this practice doesn't mean it is
> without problems.

Didn't say it was without problems. I just don't see any issue with using data
people provide freely.

------
rconti
Is this why the New York Times webpage scrolling stops working every time my
mouse arrow ends up over an ad?

------
jasonlotito
This has been going on for more than a decade. I implemented something like
this back in 2004 for a payments page. The goal was completely focused on 1)
fighting fraud 2) finding problems with filling out forms and 3) improving the
experience for users.

People can argue that these things can be used for nefarious purposes, such as
serving up targeted ads, but that's literally every browser you've ever used.
That's literally the internet, and how it works.

------
rahimnathwani
"Then the visitor typed on the numerical strip at the top of a keyboard, not
the side number pad the customer typically used."

How can a browser-based app detect whether I'm using the number pad or not?

------
pishpash
Banks have been recording you non-stop to use for "voiceprints," under the
existing disclaimer and guise of "training purposes." That's also highly
invasive.

~~~
soared
FUD. Provide some proof when you make a claim like that. If you've ever worked
outside of dev, you know how critical a paper trail or physical recording is
for everything that happens in a business. You think a bank would let a
customer and rep make account changes over the phone and not have a recording
of that? What happens when there is a disagreement about what was said?

