
Machine learning in UK financial services - goatinaboat
https://www.bankofengland.co.uk/-/media/boe/files/report/2019/machine-learning-in-uk-financial-services.pdf?la=en&hash=F8CA6EE7A5A9E0CB182F5D568E033F0EB2D21246
======
tixocloud
Thanks for sharing. Was literally just reading it a few minutes ago with great
intrigue. Would be interesting to see what is classified as real ML and how
much of it is being used for actual decision making - I lead an ML team at a
bank and everyone tries to sell it more than it is.

Fraud detection is one area I've come across that is actually doing
something interesting within retail banking. But agreed - plenty of
opportunities for ML to make things more efficient.

~~~
bigred100
A paraphrase of a self-deprecating but insightful comment I heard from an ML
prof: There’s a lot of hype around machine learning, but it can’t
really do that much. If you want to see what it’s capable of, check your
Amazon recommendations. Sort of works some of the time, and other times offers
you complete nonsense. Amazon has very smart engineers and the best algorithms
and tons of data. This is about the limit of what you can currently do.

------
hogFeast
Watching the UK try to adopt ML is like watching a toddler try to drive a
Ferrari.

The BoE prides itself on being "up" with the latest talking points amongst the
chattering classes. But take a look at how the BoE is actually run and you
will get an idea why British society struggles with change.

Incompetence and knowledge of irrelevant information have pride of place in
British society. Example: Carney has been the most accident-prone BoE governor
since...the last one, who was trying to raise rates in 2008...the Chief
Economist was head of Financial Stability in 2006...yes, seriously
(incidentally, he recovered his reputation by hitting the headlines talking
about trendy new topics like ML).

Where ML has been implemented in the UK it is often by managers who have
almost no competence in any real-world skills. They have heard about ML, think
it is magic, and want to look like they know what they are doing. This has
resulted in very bad outcomes (a recent story:
[https://www.theguardian.com/society/2019/oct/15/councils-using-algorithms-make-welfare-decisions-benefits](https://www.theguardian.com/society/2019/oct/15/councils-using-algorithms-make-welfare-decisions-benefits)).

I don't think this effect is at all well understood (economists, for example,
tend to assume that the person in charge actually knows what they are
doing...despite all the research suggesting the contrary in the UK). Insurance
has done well because there is a good level of statistical knowledge amongst
managers. But I suspect the UK will continue to lag the world (as it does in
almost everything else) as the hype disappears, and the typical British
suspicion of new technology and change returns.

~~~
7952
I work in a corporate and am constantly amazed at how much currency buzzwords
seem to have. We got a new CEO who is into "tech", and different directors are
fighting to deploy poorly thought out AI and VR projects. It has little benefit
to the business but it helps people sell themselves within the company.

~~~
bigred100
After working on a completely failed project in a non-software department of a
large organization, I’ve realized that (depending where you are) the main
deliverable for a lot of things is what your manager can put in his PowerPoint
presentation to his manager.

Of course, if the project lasts long enough, the software being reasonably good
starts to matter, as you start finding out that things you promised people
don’t work anymore, your huge rat's nest of code makes it too hard to add
features, etc. But until it hits the “PowerPoint bottom line”, a
non-technically educated manager will not care at all.

------
osullivj
Interesting to see that insurance is ahead in adoption, compared to capital
markets, where I work. Suspect that the tree based models reported as popular
are used for underwriting decisions. We sales & trading capital markets
people tend to see insurance as the sleepy backwater of wholesale capital:
just cash-cow real-money funds to be gouged by sharp market makers, to put it
bluntly.

~~~
w0rkaccount
Why is this? Insurance providers actually provide a valuable service (not
necessarily implying that much of the enormous UK financial industry does
not...) and the business is profitable and sustainable; I am surprised to hear
it described as something to be "gouged". How would such a gouging occur?

~~~
disgruntledphd2
It's worth noting that traditionally, actual insurance runs at a loss or
break-even, with most of the profits coming from investment.

This has changed recently, as investment returns have dropped, and may also be
driving the adoption of better statistical techniques for modelling risk (as
they can't rely on investment income to make up for insurance losses).

Also, having read this report, I'm very very sceptical about whether or not
companies are using "ML". I suspect that most of them are just doing
linear/logistic regression on larger datasets, which isn't really the same.

~~~
mlthoughts2018
If the linear / logistic regression stuff is improved through Bayesian methods
and hierarchical modeling, and if it grows large enough that there is a need
to use MCMC sampling techniques for approximate posterior inference, then even
though the model specification itself would be very simple, I’d absolutely say
this is “ML”.
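As a minimal sketch of the kind of setup described (the data and all numbers are invented for illustration, and a hand-rolled random-walk Metropolis sampler stands in for a production tool like Stan or PyMC): a plain logistic regression whose posterior is explored by MCMC:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one predictor, "true" intercept -1 and slope 2 (arbitrary).
n = 500
x = rng.normal(size=n)
logits = -1.0 + 2.0 * x
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

def log_posterior(beta):
    """Bernoulli log-likelihood plus a wide N(0, 10^2) prior on each coefficient."""
    z = beta[0] + beta[1] * x
    loglik = np.sum(y * z - np.logaddexp(0.0, z))  # numerically stable log(1+e^z)
    logprior = -0.5 * np.sum(beta ** 2) / 100.0
    return loglik + logprior

# Random-walk Metropolis: the simplest MCMC scheme for posterior inference.
beta = np.zeros(2)
lp = log_posterior(beta)
samples = []
for _ in range(5000):
    prop = beta + 0.1 * rng.normal(size=2)
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
        beta, lp = prop, lp_prop
    samples.append(beta)

posterior = np.array(samples[1000:])  # discard burn-in
print(posterior.mean(axis=0))  # posterior means, which should sit near (-1, 2)
```

The model specification itself is as simple as it gets; the only "ML-flavoured" machinery here is the sampler used to approximate the posterior.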

~~~
disgruntledphd2
I really, really wouldn't. That's Andrew Gelman's gig, and he's been doing
that since the 80's, when ML/AI was all about expert systems. I think of that
as a statistical technique, not an ML one (and for some weird reason, many
quantitative people in insurance don't like Bayesian techniques).

But then, our disagreement here just highlights the difficulties involved in
getting consensus on what ml vs statistics actually are.

~~~
salty_biscuits
I think I have settled on ML = gets better with more examples ("learns"). So
I'd say Bayesian models of all flavours are definitely under the ML umbrella.
AI is super fuzzy, though. I think of it as: does something you'd think a
computer shouldn't be able to do but a human can, and is thus a forever-moving
goalpost.

~~~
disgruntledphd2
I probably wouldn't use that definition, as all statistical techniques will
improve as the number of examples increases (i.e. the estimate will be more
precise). If you mean that ML normally estimates more parameters, and as such
improves more with more examples, I would agree, but then it's very difficult
to draw a dividing line (what about splines: loads of parameters, very
flexible, but not normally classified as an ML technique?).
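The "more precise with more data" point can be shown with a toy simulation (the distribution and numbers are arbitrary): the spread of a plain sample mean across repeated draws shrinks like 1/sqrt(n), with no "learning" machinery involved at all:

```python
import numpy as np

rng = np.random.default_rng(1)

def spread_of_mean(n, reps=2000):
    """Std of the sample mean over repeated draws of size n from N(5, 2^2)."""
    draws = rng.normal(loc=5.0, scale=2.0, size=(reps, n))
    return draws.mean(axis=1).std()

for n in [10, 100, 1000]:
    print(n, spread_of_mean(n))  # shrinks roughly like 2 / sqrt(n)
```

The same scaling holds for any regular estimator, which is why "improves with examples" alone struggles to separate ML from classical statistics.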

~~~
gbrown
Depending on how you look at "improvement", that's not strictly true. Often,
large data sources are slightly biased relative to the population we want to
generalize to (say, a single regional company's customers, trying to
generalize to a national population). Working with larger and larger such data
sets, that bias can become large relative to standard errors/credible interval
width.

Xiao-Li Meng has had some interesting talks/papers about this and related
ideas:
[https://dash.harvard.edu/handle/1/10886849](https://dash.harvard.edu/handle/1/10886849)
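A toy simulation of the effect being described (not taken from Meng's paper; the selection mechanism here is invented for illustration): with a biased data source, the interval around the estimate narrows as n grows, but it narrows around the wrong value, so the fixed bias comes to dominate the shrinking standard error:

```python
import numpy as np

rng = np.random.default_rng(2)

def biased_sample(n):
    """Draw from a population with true mean 0, but keep high values more often,
    mimicking e.g. one regional company's customer base."""
    pop = rng.normal(size=4 * n)
    keep = rng.random(4 * n) < 1.0 / (1.0 + np.exp(-pop))  # selection favours large x
    return pop[keep][:n]

for n in [100, 10000]:
    s = biased_sample(n)
    se = s.std(ddof=1) / np.sqrt(n)
    print(n, (s.mean() - 2 * se, s.mean() + 2 * se))
    # interval narrows with n, but stays centred away from the true mean 0
```

With the large sample, the nominal 95% interval confidently excludes the truth, which is the "big data paradox" in miniature.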

~~~
disgruntledphd2
I was never convinced by that paper, to be honest. I almost always use MLE so
that's not really an issue. It just struck me as a pedantic distinction
without a difference.

But if you use GEE, it's probably great to know.

------
goatinaboat
The key takeaway from this for me is that there is no shortage of machine
learning practitioners in the UK, despite the popular opinion that it is a
good career move.

~~~
bogle
Where I am at the moment, a UK bank's retail data analysis service, any ML
projects are set up on a "learn as you go" basis whilst still doing your main
job. No specialised practitioners are deemed required.

Whether or not this is a good idea is moot: this is in the bowels of the bank
and not the bleeding edge of commerce.

~~~
faceplanted
We don't really need specialised ML practitioners for most jobs, in the same
way we don't need genetic algorithm practitioners in most jobs either; unless
you're a researcher, you can just read a book about it and start trying it out.

~~~
goatinaboat
Indeed. If you already understand the data and the business and you already
know Python or R then you can download Keras in the morning and have something
useful to the business running in the afternoon. Then you can add ML to your
CV.

It’s not clear where an ML expert who isn’t already familiar with the data and
the business fits in or adds value, or even what an expert really means in
industry.

~~~
ska
Domain knowledge is really important; you can't overstate that.

But you've also succinctly described why a huge percentage of ML as practiced
in industry underperforms or flat out fails. To a first approximation, the
person who reads a few tutorials and plays with example datasets, then sets
out to apply it to their own domain, has no real idea what they are doing -
and it shows.

------
BenoitEssiambre
It's hard not to be cynical reading these discussions. Avoiding plastic or
recycling it does near zero for the environment. Reducing travelling, reducing
goods consumption, reducing construction, reusing things and protecting land
are the real effective solutions but require sacrifice. Yet people refuse to
go beyond performing ridiculous plastic straw theatre.

~~~
pjc50
Is this comment supposed to be attached to "machine learning in uk financial
services"?

~~~
BenoitEssiambre
No. I'm sorry. I don't know what happened.

------
bigred100
My frank opinion as someone ~2 years into grad school in something to do with
computers and statistical inference (taken a bunch of numerical classes, ML
courses, maybe ~1 year of grad level stats coursework): if you actually want
to do statistical analysis for a living, you need to get a statistics degree. CS
people are irrelevant except for computational issues. As a CS guy, you can
probably be useful for scaling a system to do automated inference, or as an
advanced database guy, or writing shell scripts or something.

If you go to grad school in CS and work hard in the right subfield you might
develop the understanding of someone with part of a stats BS. Again, the part
you would actually know something about is computation, which as far as I can
tell isn’t a real bottleneck for anything except very large automated systems.
Otherwise some guy can just use R and his laptop.

I mentioned my status above because this is solely based on me evaluating
things in an academic setting and not job experience, but this seems very
clear to me once laid out like this.

ML is “hot” but it seems to me like an extraordinarily specialized area of
expertise with little general applicability.

~~~
mistrial9
Agree, but: previous eras of statistics used important and non-obvious ways
to infer from subsets of data; now whole datasets are used, and the subset
part is more like a representation problem. Second, the things that are made
to be important in school are not always important outside of school. You do
acknowledge that tacitly there.

The point of view in the post here is from an individual, regarding the work
of another (hypothetical) individual, while the linked paper spends quite a
lot of effort characterizing real-world activity by large and very large
organizations, mainly businesses, and mainly the business of money.

You conclude that the field has "little general applicability", and that might
be true in some sense. But large organizations, and particularly large
finance organizations, are not general at all; rather, they are composed of a
very large number of specialized and repetitive operations. Exactly what ML
and AI perform, no?

------
hnarn
My understanding of the history of the stock market is that it was nothing
short of a revolution in the sense that "normal people" suddenly could invest
in large companies, and that these new types of publicly traded companies
through the power of what we today would call "crowdsourcing" managed to
bypass the old giants through their accountability to their shareholders and
their ability to raise capital from the masses. So far, so (pretty) good.

Next, I'm reminded of when Elon Musk talked about integrated AI, and how we
may one day end up in a situation where any type of intellectual or physical
competition between humans, in the labor market or elsewhere, may just come
down to how much money they have to spend on cybernetic, biological
enhancement products.

My synthesis is that, from my perspective, the stock market since the advent
of digital trading tools, now vastly accelerated by the expanded use of
machine learning in financial markets, has completely undermined its own
original benefit: a fairly true democratization of the economy, an almost
incredible one considering it was born in a capitalist society, has been
rolled back by "big capital" finding a tool that allows it to rig the game in
its favor.

So if the assumption is correct that machine learning in financial markets
essentially is just an interface for the already-rich to beat the never-rich
by throwing more money at the problem, doesn't it stand to reason that these
types of tools should be heavily regulated, if we want a stock market with
any semblance of fairness and democratic influence on the economy?

I understand very well we are far away from this today, and I understand that
rich people will not allow regulations that punish them, but if we go beyond
the knee-jerk reaction of "cash rules baby", is this really the way things
should be?

~~~
short_sells_poo
There are two big assumptions in your hypothesis that are, in my opinion,
incorrect:

1. The prevalence of machine learning in trading.

2. That the game wasn't rigged against retail at some point in the past.

First, there is very little (almost none) sophisticated ML in trading. Sure,
if you call linear regression or simple classification tools ML, then yes,
there's plenty of ML in finance, and has been for decades. If you think that
there are super sophisticated DNNs running the majority of trading today, you
are very far from the truth. The vast majority of trading is based on fairly
simple models. It happens mostly based on human insight and exploiting
specific market flows. Sure, almost every firm has some token ML overlay for
marketing purposes, because it has been the fad of recent years. They all want
the public to think that they have some super secret sophisticated AI that
gives them an edge. It's all a smoke screen. Almost none of them mean it in
earnest. In truth, it is incredibly difficult to build anything resembling a
scalable and robust strategy with complex ML tools. The reasons are too
numerous to describe here. A very small handful of firms have the capability
and mandate to actually use these tools with success, but their market
participation is minute, basically negligible.

Second, the game was always rigged in a sense. Big money could always buy
privileged access to liquidity, information and tools. Nothing has changed
here. If anything, the commoditization of all three of those things has
lowered the barrier of entry. You can run a successful quant fund with a
couple of friends now. That was much more difficult before.

------
ArtWomb
Thanks for sharing ;)

Also of interest may be London Fintech Week, 4 July 2020:

[https://www.fintechweek.com/](https://www.fintechweek.com/)

------
ThetaOneOne
Maybe before we rush to adopt machine learning in financial services we should
consider the consequences of blithely giving this technology such a central
position in our lives.

