
Who to Sue When a Robot Loses Your Fortune - lnguyen
https://www.bloomberg.com/news/articles/2019-05-06/who-to-sue-when-a-robot-loses-your-fortune
======
blunte
If you're an accredited or institutional investor, then you have only yourself
to blame (unless the investment was misrepresented, in which case the blame
falls on whoever sold/brokered the investment).

In this case, the investor was foolish to believe that a magic AI system could
generate reliable positive returns. At the same time, it sounds like the
performance of the system was misrepresented. Either way, if you're
responsible for $1bn of assets, you'd better do really thorough homework.

~~~
username444
This.

Investing is not a guaranteed return system. You might win, or you might lose
all your money. At the end of the day, the only person who is responsible for
it is you.

Don't want to lose your money? Don't invest.

~~~
twblalock
> At the end of the day, the only person who is responsible for it is you.

There are rules about fraud and fiduciary duty and disclosure which make
blanket statements like this incorrect.

~~~
blunte
I think the point is, an experienced investor should be able to either see
through a fraud or at least do proper due diligence before investing a large
sum. Plus, institutions often make a small initial test investment. If the
performance is as expected, risk considered, then they may make the big
investment.

A retail investor can't be expected to properly research claims, but a
professional should be.

------
onion2k
Robots don't just spring into existence from nothing. _Someone_ made the
decisions necessary to make the robot do the work they're selling (even if
that's just deciding what training data and biases to feed into something
they choose to call AI). You can sue that person.

~~~
rfugger
You sue whoever misrepresented the robot's capabilities to you. There's
nothing wrong with making a robot that's bad at trading. There is something
wrong with pretending it's not bad at trading when you're selling it.

~~~
hammock
What if that person didn't fully understand how the robot worked? The best
you could hope for then is a negligence claim.

~~~
onion2k
There are places where adopting AI is going to be difficult for this reason -
notably anywhere that has a legal requirement to explain _why_ a decision went
a particular way. That includes things like insurance risk models, pension
investments, etc.

~~~
cosmodisk
The losses caused by the software's trading may be the least of the
institution's problems. If the case goes to court, they may be required to
reveal the reasoning behind the software's decisions, and that means going
through the code, as all that marketing BS called AI is just a trading
algorithm trained on some datasets. No company would want to reveal the code.

------
mruts
It seems like this hedge fund was run by people who didn't understand finance
in the first place. This is how many quant funds work:

1\. You develop an alpha factor that you believe is associated with
outperformance.

2\. You control for risk factors such as beta, volatility, the Fama-French
factors, etc.

3\. Now you create a neutral L/S portfolio that only has exposure to your
alpha factor while also negating the risk factors like beta and volatility.

4\. Backtest it, run a paper portfolio for a while, etc.

5\. Combine your new alpha factor with a bunch of other factors that you have
already developed. The idea being that layering these alpha factors on top of
each other will negate some of the noise inherent to each factor.

6\. Make money, and cycle alpha factors in and out as they start and stop
working.
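A toy sketch of step 3, for anyone curious what "negating the risk factors" looks like mechanically (all names and data here are hypothetical; real factor models are far more involved):

```python
import numpy as np

def neutral_ls_weights(alpha, risk_exposures):
    """Long/short weights that keep exposure to `alpha` while zeroing out
    the given risk factors (beta, volatility, Fama-French loadings, ...).

    alpha:          (n_assets,) signal values
    risk_exposures: (n_assets, k) factor loadings per asset

    Regressing alpha on the risk factors (plus a constant) and keeping the
    residual yields a portfolio with zero net loading on each factor; the
    constant column also makes the weights sum to zero (dollar-neutral).
    """
    X = np.column_stack([np.ones(len(alpha)), risk_exposures])
    coef, *_ = np.linalg.lstsq(X, alpha, rcond=None)
    residual = alpha - X @ coef               # alpha with risk factors projected out
    return residual / np.abs(residual).sum()  # scale gross exposure to 1.0
```

The least-squares residual is orthogonal to every column of `X`, so the resulting weights sum to zero and carry zero loading on each risk factor; steps 4-6 would then happen on top of weights like these.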

It seems like this "hedge fund" was solely trading based on sentiment without
creating a L/S market neutral portfolio or layering on any other factors. This
is a _very_ bad idea. The quote from the guy who made the software is pretty
damning:

"The signals we have been provided have a strong scientific foundation. I
think we did a pretty decent job. I know I can detect sentiment. I’m not a
trader."

This is a big red flag. A lot of AI/ML people have this arrogance that trading
is easy and that you don't need any financial knowledge to make money. Maybe
that was true in the 1980's, but at this point, it requires an incredible
level of expertise to generate and implement profitable quant trading
strategies.

But all of this is off-topic. To get to the point: the Bloomberg article is
garbage click-bait. You sue the General Partners, and I don't think anyone is
confused about this.

~~~
IceWreck
I am not a trader (and have zero knowledge about it) and what I'm about to say
might be utter garbage, but hear me out and correct me where I'm wrong.

> This is a big red flag. A lot of AI/ML people have this arrogance that
> trading is easy and that you don't need any financial knowledge to make
> money. Maybe that was true in the 1980's, but at this point, it requires an
> incredible level of expertise to generate and implement profitable quant
> trading strategies.

If they were building the trading bot by training it on past data and what
they call 'sentiment signals' generated from social media reactions, etc., do
they need experience with trading?

Is there something else that a human trader knows that the bot (which has been
trained on past data and accounts for live reactions) doesn't?

~~~
elliekelly
Something that stood out to me from the article was that the bot "generated a
single trade in the morning." Unless they're trying to realize a quick gain by
exploiting the "dumb money", most knowledgeable investors specifically _avoid_
trading at market open because the volume/volatility typically comes from
Charles Schwab et al. executing trades for regular Joes & Janes. It's noise,
not signal.

Since the bot wasn't implementing a day-trading strategy, it's pretty clear
the programmers didn't know much about how the markets work. I would guess the
training data reflected that knowledge gap. As the saying goes, you don't know
what you don't know.

------
snek
He should sue himself for going into high risk/velocity trading in the first
place.

~~~
wccrawford
To me, this is the right solution. Someone might have persuaded him into the
decision, but he's the one that made that decision in the end. You shouldn't
be making decisions like this if you don't understand the risks.

------
LaserToy
I'm surprised by the comments that put all the blame on the investor and give
the impression that their authors never do similar things (rely on systems
they don't fully understand). The thing is, software is all around us; some of
it is more or less harmless (a web browser) and some can cause a lot of damage
(a self-driving car, a buggy aircraft controller, bank software).

We generally have absolutely no clue about the quality of the piece of code
that is controlling our destiny right now. What if the airplane we are flying
in decides to dive? What if the bank loses all our records? What if the Nest
thermostat goes crazy and burns thousands of dollars on heating during your
vacation? What if your 401k disappears because of an obvious bug in the bot's
script? I could go on and on.

To be able to live in this world without going nuts, we have to trust that
those systems are correct, and if something goes wrong we need a legal way to
punish the responsible party (if the fault is on their side).

The weird thing is that we still treat software and real engineering
differently. If you walk onto a bridge that collapses under your feet because
of a bad design, you will sue. But if you trust a company that sells a
superhuman trading bot which makes silly decisions and loses your money, it
is your problem. Following this logic: don't step on a bridge without
reviewing the design and making sure it is safe.

------
thesausageking
The founder/CEO of the company that built the trading system had previously
"agreed to pay $17 million to the U.S. Securities and Exchange Commission to
settle charges of defrauding investors at his mobile-payments company, Jumio
Inc."

How could you trust him to trade $2.5B?

------
coldcode
Salespeople often exaggerate to make a sale, it's not uncommon. Believing them
with enthusiasm is a sign of someone salespeople love to find. This guy is
right to sue but he's still a sucker who was taken in and then leveraged
himself to the hilt based on a pitch he could not (or would not) verify. A
prudent person would have tested it with a small amount to start with.

------
mring33621
I'm sorry but this sounds like a couple idiots with too much money.

"Li eventually let K1 manage $2.5 billion—$250 million of his own cash and the
rest leverage from Citigroup Inc."

Leverage = Loan
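To spell out the arithmetic (figures from the article): $250M of equity controlling a $2.5B position is 10x leverage, so a mere 10% drawdown on the position wipes out the equity entirely.

```python
equity = 250_000_000           # Li's own cash, per the article
position = 2_500_000_000       # total managed, with Citigroup leverage

leverage = position / equity   # 10x
drawdown = 0.10                # a 10% loss on the position...
loss = position * drawdown     # ...is $250M

assert leverage == 10.0
assert abs(loss - equity) < 1  # the entire equity stake is gone
```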

r/wallstreetbets is that-a-way.

------
tomkaos
If my microwave oven burns my popcorn, I don't sue the microwave oven. If the
product (robot) is not what the seller told you it was, you sue the seller. If
the product (robot) has a defect, you sue the maker.

------
kyrieeschaton
This is a straightforward dispute about to what extent the backtested
performance claims were accurate, and whether the fund was following the
strategy as declared to its client. The fact that it's a "robot" is irrelevant
to the dispute - the same fact pattern would apply to a human fund manager.

------
plouffy
This is an important aspect:

> Over the following months, Costa shared simulations with Li showing K1
> making double-digit returns, although the two now dispute the thoroughness
> of the back-testing.

And I wish the article linked to the filings, or at least discussed this more
thoroughly.

------
DonHopkins
Should have bought Old Glory Robot Insurance!

[https://www.youtube.com/watch?v=KXnL7sdElno](https://www.youtube.com/watch?v=KXnL7sdElno)

------
eyeundersand
“People tend to assume that algorithms are faster and better decision-makers
than human traders,” said Mark Lemley ... “That may often be true, but when
it’s not ...”

Is it just me or is that quote kind of disingenuous? "People tend to assume"
and "That may OFTEN be true" make it sound like it isn't as clear cut or
there's still doubt over whether such algorithms outperform humans. Is that
truly the case? I don't know much about trading but aren't algorithms doing
most of the trading now?

~~~
Nasrudith
The faster part is certainly not a matter of debate. Whether they're better is
less clear, and it depends a lot on the area. They essentially dominate high-
frequency trading, and from what I've heard they have really proven their
role's legitimacy as arbitrage: more HFT bots drove the margins down in a move
towards uniformity, just like arbitrage should.

They are prone to some occasional "dumb" errors, however - like humans, but
different ones. Those which attempt to predict trends using feeds like Twitter
have caused losses when they failed to get the context.

It also depends on how you define algorithm and trading. Your bank has lists
of requirements for mortgages beyond the legal minimums - it is essentially
already following an algorithm, but due diligence keeps humans in the loop.

~~~
eyeundersand
Thank you for the insight! Much appreciated.

------
rajacombinator
10x leverage on a fantasyland AI magic machine. Might as well sue the casino
when you lose playing baccarat. (And people do sue.)

------
cosmodisk
It depends on what basis the algo trading was sold to the customer. Regardless
of whether it's a human or a computer, anyone willing to give such an amount a
go should be asking: how is risk being hedged? What happens when shit hits the
fan? How fast can you exit the position? If it's Forex, what stops are being
used to minimise losses? And so on. I'm sceptical that either side signed the
deal without first going through some details of how it works, the risks, etc.
Also, the legal department probably did their homework to cover as much as
possible in such cases. The responsibility ultimately falls upon the
institution utilising the software, not the software itself.

------
Hasz
Haha, let the algo decide the stop-loss, genius. This guy was sold on a fancy
robo-investor that couldn't beat VTSAX if it had twice as much money to start
with.

If you don't know who the idiot is, it's you.

The dudes who put this thing together were using a theory from, get this,
2015, to do sentiment analysis. Cool 4-character domain, but they couldn't
even be assed to get a Let's Encrypt cert for it - why would you trust them
to manage billions?

[http://42.cx/](http://42.cx/) The number of buzzwords is both overwhelming
and inherently fishy.

------
driverdan
The whole framing of this case is incorrect and stupid. It doesn't matter if
the software used for trading is a bunch of if statements or machine learning,
it's still just software. If it's misrepresented the case is pretty
straightforward. The algos used don't matter.

------
duiker101
If it was a bug that caused him to lose money, sure, he should be compensated
by the company that made/owns the robot. But he says its capabilities were
misrepresented, so the robot isn't actually at fault; he either just made a
bad investment or was scammed.

~~~
elliekelly
It usually works the other way around. A bug in the code doesn't often rise to
the level of negligence. (Though 99.99% of the time a fund will reimburse a
loss caused by a buggy algorithm for customer-relations reasons, they aren't
obligated to.) Misrepresenting the capabilities/returns of a strategy is where
fund salespeople get themselves into trouble time and again. It's perfectly
legal to sell a terrible investment strategy so long as you've been truthful.

------
SmellyGeekBoy
If I buy a chainsaw and accidentally cut my finger off I wouldn't assume that
I have any legal right to sue the chainsaw manufacturer or the person who sold
it to me.

At least not where I'm from - is this not the case in the USA?

~~~
FussyZeus
A more fitting metaphor along this line would be if you bought a chainsaw-
wielding robot and directed it to cut down some trees, and it instead cut off
your finger. Once a product has agency and is making decisions of its own
accord, if it makes the wrong decisions, I don't think it's unreasonable to
say the person who programmed it bears some level of responsibility, likely
some kind of negligence.

~~~
rytill
You really can’t pin anything like this on a single person, as tempting as it
may be. The company that sells a product should deal with the consequences and
accept responsibility as a complete unit. If someone coded something and
somehow the bug eluded understanding all the way up the chain, it’s everyone’s
fault for not putting more “solidness” pressure all the way down.

~~~
FussyZeus
Yeah you’re right, poor wording on my part. Agree.

------
landryraccoon
Maybe the concept of blame is meaningless when it comes to robots, and we’re
clinging to a cultural and spiritual concept of blame that serves no purpose
when it comes to robots.

If a robot doesn’t function properly, you fix it or take it out of service.
Taking retribution against a robot (or abstractly, the civilization that
created it) is of questionable value. The right question is how do we build
robots that don’t lose fortunes?

The fact that human beings get angry at robots and want retribution or
recompense for malfunctions is an evolutionary adaptation for dealing with
other humans. It is useless when dealing with non-sentient deterministic
agents. Sure, if the robot was being controlled by a human, go after the
human. If not, what’s the point?

~~~
rout39574
This cannot work as a practical matter. If a person is killed by a robot, it
is important to have a legal theory to understand some analogue to "the
responsible party". Otherwise, the introduction of a "robot" in the workflow
becomes a wildcard to evade torts of all sorts.

~~~
landryraccoon
If the goal of the legal theory is to reduce human deaths by robots, why not
just reduce human deaths by robots by dismantling or repairing the appropriate
robots?

Again, the legal theory works because when dealing with human beings you have
to assign responsibility to human beings. Hurricanes and earthquakes kill tons
of human beings and there’s no legal theory of responsibility. We simply work
as a society to minimize those deaths without blaming anyone.

~~~
rout39574
You seem to be suggesting that a sufficiently complex system dissipates
responsibility, transforms an act of engineering into an Act Of God.

If that's not your intent, I suggest you rephrase.

When a bridge (another system which is complex but tractable in principle)
fails, we work hard to find the feature of the design that led to the failure,
and if we decide it was negligence, we exact consequences.

From current news: Should Boeing get a pass on their MCAS failures because
"Gosh, it's really hard" ? The MCAS is a robot.

~~~
landryraccoon
Responsibility itself is a heuristic with a goal. As autonomous systems take
on more and more responsibility, tracing back a historical cause will be less
and less relevant than actually fixing the fault in the system.

Take Boeing, for example. Assigning blame to individuals, while rewarding, may
not be as effective as changing the system that allows Boeing to continue to
act the way it does. Boeing scapegoats and fires some people (which may
include the CEO), but the organization, and the nation and legal system that
sustain it, continue largely unchanged.

You’re trying to patch things by punishing humans individually, while ignoring
faults in the system we have collectively created. Our instinct is to assign
blame because historically and evolutionarily harm is usually done by
individual humans, and punishing humans is a good heuristic. But it does not
seem to me that punishment does a good job of creating good large-scale
organizations or systems.

------
zaphirplane
Clearly depends on the laws of the nanny state you live in.

In my nanny state, the government will appoint a team of therapists to manage
the trauma, and employers must grant four weeks of paid pain-and-suffering
leave. The government refunds your loss, plus potentially the lost opportunity
cost.

In the country next to us, they are savages. Can you believe it: each adult is
responsible for the consequences of their decisions. They have some meager
protection against fraud and generally exploitative behavior. But if it’s a
legitimate investment firm and you go in knowing what you are doing and they
lose your money, THAT’S IT, your money is lost. Savagery.

Edit: post is tongue in cheek, assuming there is no fraud; some of the
comments suggest there could be subtle fraud involved.

------
Mikeb85
Forgetting who's responsible for trading losses (the customer should know
investing is never a sure thing), this sounds like a very crude algorithm. The
best trading programs don't use machine learning and sentiment scraped from
the internet, they use well defined strategies that are set by a competent
human. The whole point of using AI is to use it for tasks humans aren't suited
to; we're far better than machines at gauging sentiment over days and weeks.
Computers are better at seeing numerical trends over shorter time periods, or
identifying lesser known securities based on trading signals. Using a computer
to try timing a major benchmark seems like a massive waste of time.

------
chrischen
I don't get the whole issue with the who to blame conundrum when it comes to
robots or AI. They're just glorified buttons, pressed by a human.

------
vorotato
Obviously it's whoever misrepresented the abilities of said robot. If that's
nobody then you don't have a case.

------
jamisteven
Answer: The person responsible for creating the decision making logic behind
said robot.

------
raviolo
What if the robot was written by another robot and a “salesperson” was also a
robot?

~~~
DonHopkins
Wasn't there a Philip K Dick story about a human computer programmer in the
future where robots were taking over all the good programming jobs, who had to
pretend to be a robot, in order to get the high paying robot-only programming
gigs? Every morning when he signed in to work, he had to fail a Turing Test in
order to prove that he was really a robot.

------
crankylinuxuser
Who do you sue when a factory machine smashes your hand?

The factory owner.

------
jandrese
That title is pure 1%er American.

------
Haga
The guild of the guilty?

------
sonnyblarney
There is no new ethical dilemma here.

Software has been around for a long time.

------
DoofusOfDeath
Pedantry: "Whom to sue", not "Who to sue".

(Pedantic tl;dr: I refuse to put the punctuation inside the quotes. I consider
that an outright bug in English and it needs to be stamped out by rebellion.)

~~~
gerbilly
If you want a bug in English, then how about: "Petting Dave's cat", "Going to
Martha's Vineyard", "Using its eyes." (no apostrophe)

What a stupid idea to make "it's" stand for "it is".

~~~
jfk13
Not a bug, IMO. In general, we have separate individual words for possessive
pronouns; we don't create them by adding the possessive suffix to a nominative
pronoun: I -> my (not I's), you -> your, he -> his, she -> her, it -> its,
etc.

"It's" is just the standard use of apostrophe to indicate a contraction.

------
mrhappyunhappy
It’s amazing that people with that kind of money would be persuaded into
paying millions in fees thinking they can beat the index.

~~~
mruts
What is "the index?"

The S&P 500? The FTSE 250? The Nikkei? The Russell 2000? The Russell 3000? The
risk-free rate? LIBOR?

When you make statements like this, they mean nothing at all.

Let's say you're talking about the S&P 500. Many hedge funds have been beating
it for 20 years or more.

But let's say you had $2 billion. What would you do with it? Are you telling
me you would put it all in one index fund? Maybe you would put it in a couple,
maybe you would put some in bonds? Well, guess what: you just made an active
management decision! You decided which index, or which bonds, or whatever.

There is no such thing as passive management.

~~~
soVeryTired
> Many hedge funds have been beating it for 20 years or more

Relatively few, I would say, if you measure it by realised Sharpe. Rentech,
Brevan Howard, D.E. Shaw, and maybe AHL and Winton excluding the past few
years. I'm struggling to name others that have done well over a sustained
period.

There are a _lot_ of funds out there, and IMO much of the supposed
'outperformance' is a combination of leverage and survivor bias (I say this as
a former hedge fund quant).
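For anyone unfamiliar with the metric: realised Sharpe is just the annualised ratio of mean excess return to its volatility. A minimal sketch, assuming the 252-trading-day convention:

```python
import numpy as np

def realised_sharpe(daily_returns, risk_free_daily=0.0, periods=252):
    """Annualised Sharpe ratio from daily returns: mean excess return over
    its standard deviation, scaled by sqrt(trading periods per year).

    Note what this number can't show: funds that blew up drop out of the
    sample entirely - exactly the survivor bias described above.
    """
    excess = np.asarray(daily_returns) - risk_free_daily
    return np.sqrt(periods) * excess.mean() / excess.std(ddof=1)

# e.g. four days of returns: +1%, -1%, +2%, flat
realised_sharpe([0.01, -0.01, 0.02, 0.0])
```

Leverage scales the mean and the volatility equally, which is why a leveraged fund can post big headline returns without its Sharpe improving at all.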

~~~
mruts
Two Sigma and Bridgewater do pretty well for themselves Sharpe-ratio-wise.

Also, if we're talking about market makers, Jane Street, Susquehanna,
Fortress, and Citadel do very well for themselves.

~~~
soVeryTired
Huh. I hadn't realised that Two Sigma was that old. My point still stands
though.

Market making is a different kettle of fish in the sense that it's more of a
financial service than a directional bet on markets. It's also much less
capital intensive so there's less pressure to raise money from clients. As a
market-maker, you can certainly find yourself on the wrong side of a trade,
but generally the goal is to be as market-neutral as possible.

~~~
mruts
I mean, the goal of absolute-return hedge funds is pretty much the same: only
have exposure to alpha factors. With market makers it’s pretty much the same
thing: only have exposure to order flow. That’s the alpha.

I’ve worked as a quant at both hedge funds and market makers, and both have a
similar philosophy of only getting exposure to “alpha”, though the means and
capital required (as you mentioned) are quite different.

I think the concept of Sharpe ratio still applies to both, though, with market
making having a clearly higher one.

Personally, I’ve found working for a hedge fund a lot better, market making is
kind of a drab business when you get down to the nuts and bolts.

------
mschuster91
> "That may often be true, but when it’s not, or when they quickly go astray,
> investors want someone to blame"

Uh... don't get me wrong, but when you go to the casino and lose all your
money, it's no one's fault but yours. Same way for stock markets.

Whoever is dumb enough to put money into essentially a gambling algorithm
should be able to pay for the losses himself.

~~~
daniel-cussen
No not the same way for stock markets. It's not a statistical certainty that
you will lose money like gambling in a casino is, and millions if not billions
of people depend on the markets in order to invest for retirement. There are
people who are certified stock brokers and certified financial advisors who
have a _legal fiduciary duty_ to maximize returns and minimize risk on their
client's behalf.

~~~
mschuster91
> It's not a statistical certainty that you will lose money like gambling in a
> casino is

But it is a very real possibility, as the recent financial crisis showed.
Anyone who cannot stomach a total loss should not put their whole investment
in one basket, and it is doubly dumb to invest in unproven automation like
this.

------
vectorEQ
I'm glad it's not possible to sue in my country. It saves a lot of silly
nonsense and makes people a bit more responsible about their actions, instead
of doing random shit and suing willy-nilly when the shit hits the fan.

~~~
lordfoom
Serious question: how do you handle broken contracts, malicious acts, etc.?

~~~
Kiro
In Sweden you can only sue in civil cases where there's a real disagreement,
not because someone spilled coffee on you. Not sure if it would be applicable
in this case though.

We often hear about the crazy lawsuit culture in the US and how insane it is.
Whether that's true or not, I don't know.

~~~
chki
But tort law does exist, right? So if somebody purposefully spills boiling hot
coffee over you, you will be able to sue them. The claims are just less
outrageous than in the US because there are no punitive damages for example.

