
Late-Stage Adversarial Interoperability - panarky
https://www.eff.org/deeplinks/2019/12/mint-late-stage-adversarial-interoperability-demonstrates-what-we-had-and-what-we
======
smitty1e
> But stories like Mint are rare today, thanks to a sustained, successful
> campaign by the companies that owe their own existence to adversarial
> interoperability to shut it down, lest someone do unto them as they had done
> unto the others.

This invites the question of whether there are any consumer-data-friendly
financial services out there.

I'm a fan of both the FSF and Firefox, as both seem relatively less rapacious
than the alternatives.

This was a great article.

Aside: [https://eff.org/join](https://eff.org/join) is confusing. Can I join
if I buy something?

~~~
greglindahl
The EU has a requirement for companies to share financial data about consumers
if the consumer requests it. It's designed to support services like Mint.

[https://www.cnbc.com/2017/12/25/psd2-europes-banks-brace-for-new-eu-data-sharing-rules.html](https://www.cnbc.com/2017/12/25/psd2-europes-banks-brace-for-new-eu-data-sharing-rules.html)

~~~
tialaramex
And because this is a rule about something the banks have to enable, it gets
to build in the security design you'd actually want as well. Since your bank
has to help make this possible, it knows this is "Useful Spend Monitor Inc."
calling the API on behalf of greglindahl and thus is allowed to see
greglindahl's balance and recent transaction list but isn't allowed to create
a new transfer authority for Mr A. Nigerian-Fraudster when they "somehow" get
the credentials from Useful Spend Monitor Inc.

That's never going to happen for services built out of scraping Internet
Explorer sessions or similar shenanigans.
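The scoped-access model described above can be sketched in a few lines: the bank issues the aggregator a token carrying read-only scopes, so even a stolen token cannot initiate a transfer. All names and scope strings here are illustrative, not the real PSD2/open-banking API.

```python
# Illustrative scope check: read access is granted, payment initiation is not.

READ_ONLY_SCOPES = {"accounts:read", "transactions:read"}

class AccessToken:
    """Token the bank issues to a third-party service on a user's behalf."""
    def __init__(self, client, scopes):
        self.client = client
        self.scopes = set(scopes)

def authorize(token, required_scope):
    """Allow an API call only if the token was granted that scope."""
    return required_scope in token.scopes

# "Useful Spend Monitor Inc." can read balances and transactions...
token = AccessToken("Useful Spend Monitor Inc.", READ_ONLY_SCOPES)
assert authorize(token, "transactions:read")

# ...but a leaked token still cannot create a new transfer authority.
assert not authorize(token, "payments:create")
```

This is the key difference from credential sharing: a scraped login grants everything the user can do, while a scoped token grants only what the bank chose to expose.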

------
cromwellian
The downside of such adversarial interoperability is that it reduces security,
because the scraping systems need the user's credentials to log in. I'm not
going to give any third party my banking credentials, and I have 2FA turned on
for every site as well.

I suppose if the scraping is done by the user's browser via, say, a timed
service worker or something, and the credentials never leave the user's
browser, it would be OK, but the idea of a bank of headless (IE!) browsers
with a database of millions of users' bank credentials sends a shudder down my
spine.
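The browser-local variant can be made concrete: the scraping and reduction happen on the user's machine, and only derived, non-secret data ever leaves it. This is a hypothetical sketch; the credential handling and transaction source are stand-ins, not any real bank client.

```python
# Credentials are used only locally; the upload payload contains just a summary.

def summarize(transactions):
    """Reduce raw transactions to per-category totals -- the only data
    that would be sent to the aggregation service."""
    totals = {}
    for t in transactions:
        totals[t["category"]] = totals.get(t["category"], 0.0) + t["amount"]
    return totals

local_credentials = {"user": "alice", "password": "hunter2"}  # never uploaded

transactions = [  # stand-in for data scraped locally with the credentials
    {"category": "food", "amount": 12.5},
    {"category": "food", "amount": 7.5},
    {"category": "rent", "amount": 900.0},
]

payload = summarize(transactions)
assert "hunter2" not in str(payload)  # the secret is not in the upload
```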

~~~
singron
reCAPTCHA prevents this as well. It primarily checks that it's running in a
real browser on the correct website using BotGuard. Only then might it give
you the CAPTCHA challenge to prove you are human. It's not possible to solve
the CAPTCHA challenge without first passing BotGuard.

A good example of this is Discord. Alternative clients aren't very popular,
since you need to pass reCAPTCHA to log in. The currently available clients
require you to copy cookies out of your browser.

~~~
pabs3
The Discord plugin for Pidgin doesn't require copying cookies:

[https://github.com/EionRobb/purple-discord/](https://github.com/EionRobb/purple-discord/)

------
closeparen
The response to Cambridge Analytica set out a clear social contract: to allow
users to share their data with a 3rd party is to take responsibility for what
the 3rd party does with it. Users cannot be trusted to read consent screens or
make their own decisions about authorization. And even if they can, they don't
have the consent of others involved in the data (whether friends on social
media, or counterparties to bank transactions).

Companies taking steps to block interoperability is an outcome of the
righteous campaign that this community fought and won. Comments here should be
celebrating.

~~~
stereolambda
But was Cambridge Analytica really adversarial? If they used the data without
the knowledge or specific consent of FB, the latter should not be blamed in
principle. (And that was not the case.) Although I'm pretty against FB's
practices and business model, I have to admit that the mainstream backlash
against it has been largely "politically" motivated (as in, more partisanship
than ideology). So FB would likely be bashed for this specific reason anyway.

I'm open to the idea that there should be a way to interoperate without the
consent or responsibility of the primary operator and no way of preventing it
by them. I don't know about the oughts, but nowadays the desire to integrate
and annotate your finances seems quaint in light of possible data
weaponization and profiling. (And institutions are trying to do it whether you
like it or not; my bank does try to sort my transactions "intelligently", no
thanks.) I'd say _I_ would want to do it at home on paper if at all, but
people probably[0] should be able to do these things and others if they want.

[0] Unless there is a legitimate case to be made that they endanger freedom of
others.

~~~
closeparen
Facebook published an API for third party apps to access data with user
consent, and Cambridge Analytica used it. The outrage indicates a belief that
there should not have been such an API. Or at least that users were not
competent to consent and that Facebook should have been controlling which 3rd
parties they could share with.

------
skybrian
"Adversarial interoperability" is what happens when you decide to abandon
consent as a governing principle for interoperating, because it prevents
progress. People complain when unicorns do this.

Even if you had consent in principle, you'd still have to agree to terms for
interoperating. In particular, what price and who pays? Or does neither side
pay? Consider ISP's and peering agreements.

Even some open source projects will grumble if Amazon uses their software
following the terms that are right there in the license agreement. There seems
to be an implicit "shadow agreement" that if you use their software to make
money, you should give something back. (But that's not what the license says.)

~~~
mfer
> "Adversarial interoperability" is what happens when you decide to abandon
> consent as a governing principle for interoperating, because it prevents
> progress.

Whose consent is needed?

Open source projects that grumble about Amazon created a model where their
consent was not required. This is where the ideals of open source don't meet
the desires of the VCs paying for the work. It's possible to have a stable
business supporting and serving open source, but it is hard to do something
that meets VC return-on-investment expectations with open source.

The problem is in the financial model and whose consent is required in that.

When it comes to finances, things are a little different, because the business
model is different and so are the regulations. It's also not about open source
but about data control.

There is the whole idea of interoperating, too. The financial organizations
often do not see a benefit to their income by interoperating with their
competitors.

It might be useful to look at how competitors can come together to collaborate
in open source in a way that helps raise all those involved.

~~~
skybrian
Well, typically you need consent from the service being used. If you look at
how the "robots.txt" standard works, consent is assumed even for basic web
crawling by search engines. (At least for legitimate businesses; many crawlers
do ignore robots.txt.)
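The robots.txt convention in question is easy to demonstrate with Python's standard library: a well-behaved crawler parses the site's policy and checks each URL before fetching it. The policy text and bot name below are made-up examples.

```python
from urllib.robotparser import RobotFileParser

policy = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(policy.splitlines())

# A polite crawler consults the policy before every fetch.
assert rp.can_fetch("ExampleBot", "https://example.com/index.html")
assert not rp.can_fetch("ExampleBot", "https://example.com/private/report")
```

Nothing enforces this check, which is exactly the point: compliance is voluntary.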

~~~
AstralStorm
That document should not be considered legally binding. It is not a contract.

Robots.txt is not even really a standard; at best it was a draft, set to
expire on January 2nd next year.

The purpose of the file was to preserve performance, not to hide anything. It
targets crawlers, not API calls.

Getting consent from any business entity larger than a single person takes
months at the very least, if it ever happens. It's easier for them to deny
(incl. lawyers) and intimidate than to compete and handle support tickets.
Welcome to capitalism.

~~~
skybrian
In practice, the crawlers that read robots.txt don't do adversarial
interoperability. If a website doesn't want to be in Google's index, they can
opt out, and no lawyers are needed to do that.

------
geofft
I don't understand why there aren't popular _client-side_ tools that do this
sort of thing. Not only is that hugely better for security (no third party
sees your credentials or your data), it's much harder to distinguish from
automated centralized scraping.

Mint apparently was able to get some users to install browser extensions. I
believe RECAP and Sci-Hub work this way too. So why do trusted-third-party
scrapers remain popular?

~~~
jdsully
Users hate installing software, and investors are used to immediate,
up-to-the-minute metrics.

~~~
taneq
I think hating installing software is actually pretty prudent these days. Look
at how mobile app publishers act towards personal data, now think about the
fact that desktop computer software has almost no restriction on what it can
do with a user's files. We're basically stuck just hoping that any app we
install is respecting our privacy.

~~~
zrm
The problem is the alternative is giving your bank credentials and other
sensitive personal information directly to a third party.

At least when the software is on your computer it's possible to analyze it to
see what it's doing with your information. Once it leaves your computer into
the possession of a third party, it's no longer even within your power to
observe what they're doing with it.

~~~
taneq
I'm not sure I agree. On the one hand, once you provide data to a web service,
that piece of data is permanently out of your control. On the other hand, once
you're running untrusted code on your computer, _all_ of your data is
potentially permanently out of your control if you ever reconnect to the
internet.

As for analyzing software, even for software devs that's impractical. It's not
enough to run it under some kind of process spy to see what it's doing because
the malicious behaviour might be time limited or otherwise triggered
unpredictably, so you really need to decompile and reverse engineer the
program to figure out what files it can read and what it does with them. Not
gonna happen. A better option is to spin up a fresh VM to run untrusted code
on, but even that's cumbersome.

~~~
zrm
Decompiling the program is arduous but it is possible. You only need one
person to do it and expose the nefarious activity to destroy their reputation,
which acts as a deterrent.

Another option is to restrict its network activity. If it can't communicate
with the developer then it can't send your information to them.

You could also merely _log_ its network activity, maybe using some hooks into
any crypto library it links against so you can see the plaintext. That
wouldn't prevent it from doing something bad, but it would at least allow you
to detect it after the fact and then everyone would know to stop trusting that
developer, and you get the deterrent without investing as much time and
effort.
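The logging idea can be sketched by interposing on the socket layer before the untrusted code runs. This only catches code that goes through Python's own socket API (a real audit would sit lower, e.g. at the firewall or in the crypto-library hooks mentioned above), and the destination host is made up.

```python
import socket

connection_log = []

def audited_create_connection(address, *args, **kwargs):
    """Record every outbound connection attempt, and block it here."""
    connection_log.append(address)
    raise OSError("outbound connection blocked: %s:%s" % address)

# Replace the real connection routine with the auditing wrapper.
socket.create_connection = audited_create_connection

# Untrusted code trying to phone home now fails and leaves a trace.
try:
    socket.create_connection(("telemetry.example.com", 443))
except OSError:
    pass

assert connection_log == [("telemetry.example.com", 443)]
```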

------
jdsauce
> Imagine if an adversarial interoperator were to enter the market today with
> a tool that auto-piloted its users through the big tax-prep companies' sites
> to get them to Free File tools that would actually work for them (as opposed
> to tricking them into expensive upgrades, often by letting them get all the
> way to the end of the process before revealing that something about the
> user's tax situation makes them ineligible for that specific Free File
> product).

Sounds like [https://simpletax.ca](https://simpletax.ca)

~~~
lstamour
Canadian taxes were/are simplified: APIs exist to log in and fetch T4s and
other services, and while things could be simplified further (particularly for
corporate taxes and provincial incorporation, PEPPOL e-invoicing and open
banking legislation, etc.), the current state of things in Canada is pretty
straightforward, including a relatively open program so developers can write
and validate their independent tax software with the CRA. By comparison, when
similar simplifications were attempted in the US, they generally failed due to
lobbying efforts from Intuit and others. And my understanding is that in
Canada, provinces try to harmonize with the CRA (with the primary exception of
Quebec), while in the US, states create many different tax situations, in part
because there are more U.S. states than Canadian provinces.

------
viggity
This is somewhat off topic... but can you imagine how much money Mint's data
is worth? I once read a story about a guy who worked in the fraud department
at Citibank(?) who had access to all the transactions. He used that live data
to see that purchases at Chipotle(?) were down X percent and shorted the stock
before the quarterly earnings report was released. He got thrown in jail; I
don't remember exactly why. He wasn't really entitled to that data.

However, Mint/Intuit certainly obtains that data legally. They could make
trades on that information, couldn't they? That is essentially like printing
money. Maybe they already do and it is just a secret? Idk.

------
ybv_weopenly
papergov.com -> adversarial interop for government websites :)

