Why do traders in investment banks feel their jobs are immune from AI, etc? (quora.com)
220 points by aburan28 266 days ago | 220 comments

Because traders are used to seeing such predictions fail.

Reuters has offered electronic FX trading since the early 1990s. At the tier-one IB I worked for, the IT budget was 500m USD a year (across products), and that was in 1997! Huge resources were thrown at automation. However, to this day, large trades in FX (> 10m USD notional) are still almost exclusively performed by humans over a telephone or over the Bloomberg messaging system.

That's because, no matter how much you automate stuff, there is still the 1% "edge case" scenario where something goes wrong, and when that happens, you most definitely want a human that you can "look in the eye", when you have that sort of execution risk. Remember that markets move really fast and there is a lot of risk in big trades that "go wrong" because unwinding said trade will almost certainly cost one of the sides a fortune.

Also, high finance is not just about what you know. It's inevitably about who you know, about "illogical" factors such as salesperson charisma, entertainment, and most importantly, a credible personality type that understands the edge case risks. These things are very hard to replicate with a machine. You may say these things are unfair and shouldn't matter, but they remain a fact after many attempts at removing them have failed.

As for AI, let's for now call it what it is: machine learning. Learning from the past. That's fine for recognising stop signs at different distances, angles and degrees of noise. But in finance, the past is often misleading. Sure there's trend, but there are also very big instabilities in the historical correlation matrix. Paradigms shift without you even realising it. The constant is change. AI is not good enough at that, yet.
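The "instabilities in the historical correlation matrix" point can be made concrete with a toy example (stdlib Python only; the series and the clean regime split are invented for illustration): two return series that are perfectly correlated in one regime and perfectly anti-correlated in the next, so that the correlation estimated over the pooled history shows no relationship at all.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation, standard library only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy "returns": asset B tracks asset A in regime 1,
# then moves against it in regime 2.
a = [0.01, -0.02, 0.015, -0.01, 0.02, 0.01, -0.02, 0.015, -0.01, 0.02]
b = a[:5] + [-x for x in a[5:]]

print(pearson(a[:5], b[:5]))  # +1.0 inside regime 1
print(pearson(a[5:], b[5:]))  # -1.0 inside regime 2
print(pearson(a, b))          # ~0.0 over the pooled history
```

A model calibrated to the full sample would conclude the assets are unrelated, even though at any given moment they are perfectly coupled, just with a sign that the past alone cannot predict.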

BTW, that's not to say machines are not making inroads. It's becoming almost impossible to get a decent trading job now without knowing at least R and Python to a comfortable degree, and good quant programmers cost a fortune. There's massive demand.

Machine learning is "learning from data." It is not the assumption that there are no dynamics, and that the future will simply be a repetition of the past. To the extent that the future is predictable, learning from data is the best that can be done.

The reality is that speech recognition, language translation, face recognition, object classification and detection, semantic segmentation, speech and image synthesis have improved by extraordinary leaps and bounds in recent years. If we used the logic that past failures justify a confident belief that future attempts at a challenge will inevitably fail, then we should have bet heavily against AlphaGo defeating one of the most accomplished human Go champions in the world. Self-driving cars seem like a sci-fi fantasy until they become a mundane reality.

There's an irrational arrogance to human beings in general, and Wall Street types in particular, regarding the specialness / non-reproducibility of their intelligence. It's not unlike the belief that people had that organic molecules were somehow special, "vital," and not synthesizable from base elements.

Certainly, there's a long way to go to replicate the capabilities of a human brain, but I don't think we should exaggerate or fetishize the human power to estimate and mitigate risk. We've seen many spectacular failures of that in recent years.

There are varying levels of sophistication at which data can be learned from. Pigeons can learn non-trivial word concepts and statistics, but they do so at a rate much slower than a human infant.

Thus far, machines have not done well in scenarios of low stationarity. Those are scenarios where the past is not so good a predictor of the future, or where the data manifold is rapidly changing. This occurs, for example, when the act of predicting changes what is being predicted. The related notion of antipredictable sequences is the most interesting consequence of the oft-misconstrued no-free-lunch theorem.

Humans are far from the ideal machine for this scenario, but they vastly outmatch current algorithms. This is because humans are much better at the kind of generalization that is based on theory building. While not perfect, it does well enough at generalizing from observations (think of Newton deriving the law of gravitation from Kepler's laws, themselves distilled from Brahe's data) to scenarios that are not so close to the data.

This means that, for now and in complex environments, simpler models that can adapt to change online are better. Additionally, their inspectability means they are more easily modified to match changed dynamics.
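As one hypothetical sketch of "simple, adaptable online, inspectable" (my example, not anything specific from the thread): an exponentially weighted moving average forecaster, whose entire model is a single smoothing constant you can read and tune.

```python
class EWMAForecaster:
    """Forecast the next value as an exponentially weighted mean.

    alpha near 1 adapts quickly to regime change; alpha near 0
    averages over a long memory. The one parameter IS the model,
    so it is trivial to inspect and adjust when dynamics shift.
    """
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.level = None

    def update(self, x):
        if self.level is None:
            self.level = x
        else:
            self.level += self.alpha * (x - self.level)
        return self.level

f = EWMAForecaster(alpha=0.5)
stream = [10, 10, 10, 10, 10, 20, 20, 20, 20, 20]  # level shift halfway
forecasts = [f.update(x) for x in stream]
# forecasts[4] is still 10.0; by the end the forecast is close to 20
```

A deep model fitted to the first half of that stream has no comparable one-knob story for why it is wrong after the shift, or how to fix it.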

So your examples of: `speech recognition, face recognition, object classification and detection, speech and image synthesis`

are of the type where a learner such as a deep network, whose generalization ability mostly comes from interpolating to fill in missing structure based on data (implying poor out-of-sample performance), does well when you can provide lots of examples with good coverage.

Language translation is somewhere in between (the space is not as smooth, but is still fairly stationary at the scale of years), and so we also find that the advantages of deep learning have not been as pronounced and dramatic as they have been for image and speech.

Digital machines have two big advantages. First, their memory capacity is unmatched. Second, and related to the first, energy needs are not so pressing a concern.

I would add a third big advantage for the digital machine: the ability to seamlessly connect to "classical" algorithms. For example, AlphaGo also used classic Monte Carlo tree search.

A hypothetical "robot-cat-AI" could for example have access to a really precise physics engine, and also could run at a much higher "frame-rate", so it could see the world in slow-motion and execute really exquisite moves as a result.

How do you know that cats, flies, and other lifeforms run their brains at the same "frequency" as humans?

When it comes to the sophistication, adaptivity and functions of life, the in-silico machines we produce and use are very, very primitive by comparison.

No one is suggesting that AI is a solved problem. I simply cited a handful of areas such as computer vision where progress in ML has been remarkably rapid after many years where the problems seemed daunting and skeptics had written off the prospects of the field.

Citing Newton as a typical example of the superiority of human inferential abilities perhaps represents a case of cherry picking. How often do most human beings come close to exhibiting the level of insight and reasoning power required to construct the calculus and discover the laws of classical mechanics? Inventing supernatural explanations for natural phenomena and burning sacrifices / witches / books (on evolution or other heresies) seem more par for the course than Newtonian-level revelations.

Having said that, learning the laws of classical physics from observational data strikes me as a pretty natural task for the right kinds of machine architectures, since they represent essentially geometric symmetries which hold over a vast range of scales, space and time.

My point was that you've underestimated the problem difficulty. When the signal does not lie on a smooth manifold and the dynamics are far from stationary, it is incorrect to infer that techniques which have worked well in stable and smooth scenarios will continue to work well. In fact they have not, and by the looks of it they will not for a while yet.

> Citing Newton as a typical example of the superiority of human inferential abilities perhaps represents a case of cherry picking

Are hyperparameter searches across a sea of AWS instances cherry picking?

Anyway, I don't think I cherry-picked. There is not a single machine that can do the same yet, and, more importantly, no learning algorithm that notices model failure and works out the surgery required to correct or even completely replace the model.

Newton is an example of the human brain at its best, but he was far from the only one: Maxwell, Green, Einstein, Noether, Archimedes, and so the list goes on. But we don't need to go that far, since crows are already capable of generalization that outmatches what machines can do for the near future.

> Inventing supernatural explanations for natural phenomena

If our mathemechanical minions ever start musing about 'The Supreme Eigenvector', that would be the time to start worrying about machines taking over.

The trouble with stock market systems is that any successful one is defeated by its own success - as its logic gets factored into everyone else's strategy.

This self-defeating problem is not present in the other AI applications you mentioned.

Again, I'm not trying to provide an exhaustive survey of AI or ML applications. Merely to point out the fallacy of using past failures of AI and ML to conclude that they will continue to fail at tasks against which they have thus far made limited headway.

Bear in mind that in the 90's only a handful of supercomputers had teraflops of computing power, whereas now you can get an 11 TFLOPS Titan X Ultimate for $1200. Compute power continues to grow exponentially, yet it has only recently reached a level where certain kinds of approaches are truly practical. As Heinlein said, "When it's time to go railroading, people go railroading."

It's interesting that you should talk about antagonistic systems, since Actor-Critic Models, dueling architectures, Generative Adversarial Networks (GANs) are an extremely hot area of AI/ML research at the moment.

Any strategy a human can think of, they can also code as a program. The difficulty is in expressing one's strategy clearly.

The difference is that chess, Go, etc. are all essentially rules-based. Finance has very few rules that do not break over time. Just look at QE. Arguably the financial market represents the collective intelligence of a huge number of very clever people. Machines are only just starting to challenge a single human at a rules-based activity. We're very far from beating a brutally Darwinian, impressively adaptive, human hive-mind whose main skill is figuring out when rules are about to get broken.

What are the rules for image / video captioning? For natural sounding speech synthesis? For realistic image generation? For semantic segmentation? For determining perceptual visual similarity between images?

Frankly, rules based AI is basically a bust compared to learning from data approaches.

The hive was pretty amazing at destabilizing the entire global economy within a brief span following fundamental financial de-regulation.

People flatter themselves in ways which satisfies their egos and self-interest. Of course they want to believe in, and have everyone else credit, their magical superpowers. No government (backstop) wires necessary.

Automatic image and video captioning is in a very sorry state. Machines can't do it.

Machine learning is rule based. The difference is just that it creates many more rules for more and more granular cases. But it will never be intelligent, it will always be a dumb digital bureaucrat.

Given results from e.g. neural networks, ML "rules" appear to be granular and flexible enough that I'm not sure they usefully count as rules anymore.

There are rules in that you're able to create an exhaustive labeled training dataset. The rules may not be generic, but they exist through a specification by example.

Beating a human when there is no discoverability to the problem, and when building an exhaustive dataset is itself a problem, is not on the horizon yet.

> Automatic image and video captioning is in a very sorry state. Machines can't do it.

You may have missed various dramatic improvements over the past year:


Definitely haven't. Where is captioning applied with human like accuracy?

Hmm. I suggest that recent SOTA results contradict your contention that the field is in "a very sorry state" and you up the standard to "human like accuracy". Feels like the goalposts are being moved.

Anyway, this paper demonstrates some impressive results:


Arguably the "crisis" was a perfectly rational response to inputs, namely people no longer being able to service their debts.

"Interest rates cannot go negative" was a commonly accepted rule at the beginning of my career (and interest-rate option traders were using models that did not allow for negative interest rates).


Here's my first result from Google:


Google is getting really good.

Quantitative Easing

But the idea that you can predict the future is absurd.

And even if you could, acting on that knowledge would change the future.

The nuances of the game are different from strictly trying to predict the future.

Casinos don't try to predict the future of every spin of the wheel or roll of the dice. Every individual game is random. But over a large enough sample size (number of games played), the outcome is governed by the odds, which are not random.

The same principle is at play in trading the markets, except the "odds" are determined by the trader's skill, their "edge": roughly some combination of access to the right people, access to better information (where "better" might, but need not, mean quicker), and experience (gut feelings, hunches, what have you).

So the real question is: can machine-learning algorithms develop an "edge", and if so, can they stay solvent long enough to do so?
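The edge-versus-sample-size intuition can be checked with a toy simulation (flat stakes, even payout, invented numbers): a 1% edge looks like noise over a hundred bets and like a law of nature over a million.

```python
import random

def simulate(edge, n_bets, seed=0):
    """Flat-stake betting: win probability 0.5 + edge, payout 1:1."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    bankroll = 0
    for _ in range(n_bets):
        bankroll += 1 if rng.random() < 0.5 + edge else -1
    return bankroll

few = simulate(edge=0.01, n_bets=100)         # lost in the noise
many = simulate(edge=0.01, n_bets=1_000_000)  # near 2 * edge * n = 20_000
```

Expected profit grows like 2 * edge * n, while the noise only grows like sqrt(n); the "stay solvent long enough" question is exactly whether the drift outruns the noise before the bankroll hits zero.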

No. In casinos the games are designed so that in the long run the house always wins. They don't need to care about odds.

The stock market isn't a game; it isn't designed and it continuously evolves.

> In casinos the games are designed so that in the long run the house always wins.

That's exactly what I meant by odds; they always win because the odds of the games are in their favor.

> The stock market isn't a game

It's a game in the same sense that life itself is a game.

> it isn't designed

The mechanisms that fit all the pieces together are very much designed by humans. True the behavior of the market isn't designed; it's an emergent property of the complex system we call "the markets."

> it continuously evolves

Yes, but change is also the constant. What underlies that change is supply and demand.

Of course you're free to disagree. But if you'd like to learn more about the not-100%-accurate analogy I described in abbreviated form, the source is here: http://www.goodreads.com/book/show/253516.Trading_in_the_Zon...

Edit: Good reading on "life as a game": http://www.goodreads.com/book/show/189989.Finite_and_Infinit...

>It's a game in the same sense that life itself is a game.

Life isn't a game.

But the traders are trying to predict the future already! If it's impossible, then the machines are a shoo-in, and if it is possible I'd still give the machines great odds.

I bet Google could write algorithms that predict whether a stock will go up or down seconds before the change, based on live search data. All it takes is for an article or something to come out; then watch people search for "Will IBM stock drop", and perform live sentiment analysis across all such live queries involving the stock name.
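A toy version of that idea (the keyword lists and queries here are made up, and real sentiment analysis is far more involved than keyword matching) would tally positive and negative words across live queries mentioning a ticker:

```python
# Hypothetical keyword lists; a real system would use a trained model.
NEGATIVE = {"drop", "crash", "fall", "sell", "short"}
POSITIVE = {"rise", "buy", "rally", "jump", "soar"}

def query_sentiment(query):
    """Net positive-minus-negative keyword count for one search query."""
    words = set(query.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

queries = [
    "will IBM stock drop",
    "IBM crash today",
    "should I sell IBM",
    "IBM buy opportunity",
]
signal = sum(query_sentiment(q) for q in queries)
print(signal)  # -2: net-negative query flow around the ticker
```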

In a sense, it's all about what information you have access to.

So the rumor is that Goldman Sachs got out of the financial crash first because their systems saw the market move before anyone else. I don't know if that is apocryphal, but more to the point, Google would not be predicting the market; it would be seeing market movements before anyone else.

I highly doubt that it is as technical as you seem to make it.

GS is very well connected, and they are constantly out there asking people for their opinions. Ten years ago they probably had some sort of database where people could share experiences, and that is most likely what led them to call the financial crash.

But it's not like they had some machine that one day told them to short mortgages.

I checked your bio: are you developing your own HFT strats? I ask because I am connected with a couple of startup HFs. Ping me on Symphony {Brian Hewes} if you're up for chatting about it.

Would AlphaGo beat the same human on 17x17 or 21x21? Or even 13x13? Let's not get started about self-driving cars…

Absolutely, both Fan Hui and Lee Sedol.


The link you provide gives no data on games played on 17x17 or 21x21. They were played on the standard 19x19 board.

Maybe not, because it has not been trained for those dimensions. Then again neither have humans (except maybe on 13x13). That could be interesting, but I'm not sure what we could learn from this.

A question - with the answer left as an exercise for the reader:

What do speech recognition, language translation, face recognition, object classification/detection have in common that is not true about predicting the future price of a security?

> because traders are used to seeing such predictions fail.

There was a joke from the 80s: soon the whole trading floor will be replaced by a computer, a man and a dog. The man presses the button to turn on the computer every morning. The computer operates all of the transactions and settlements automatically. And the dog is there to bite the man if he touches any other button.

30 years later, still no dog on the floor!

This is a paraphrase of a semi-well-known quote from a respected business professor named Warren Bennis, about the "factory of the future" - who knows whether he coined it or adapted it from a common joke though. Here's his version of the quote, which I like a bit more:

The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.

The NYSE floor traders are only there for window dressing to be seen by the TV cameras. They are entirely redundant and watch movies for much of the day when there is no need to show a flurry of activity after opening or before closing. The entire building is no longer of any real importance.

Most of those floor traders are running a small business (two or three people) based on past contacts and trader know-how (such as it is; some better, some worse). It's really more of a 'bazaar' for small, likely well-connected trading firms than anything else. It has been hard for them in recent years as order flow has moved to the largest, too-big-to-fail brokers (GS, JPM, etc.). The NYSE certainly gets a lot out of the media coverage, but the brokers themselves are not owned by the NYSE.

I think the joke was that the man is there to feed the dog, and the dog is there to keep the human from touching the computer.

But on the flip side, most physical trading floors are closed. Can't see the last few lasting much longer.

> Can't see the last few lasting much longer.

I believe the remaining ones are effectively TV studios for business channels, so they have a completely different purpose now.

There are several floors that have "real" action. A couple of truly specialist options pits, and the CBOE VIX pit for instance, come to mind.

But I don't think anyone believes those have more than a couple of years left in them.

Yeah, I really don't know what these guys are talking about. The day-to-day business has been massively revamped by automation. There are HFT operations blasting millions and millions of orders a second, and they probably account for 70%+ of all trading volume.

I think you have cash equity in mind. This is a very small subset of what investment banks trade.

I trade financial futures and cash equity. I realize i-banks offer leveraged bets on all kinds of nonsense to provide a "service", but pretty much any strategy known to humanity can be achieved on an exchange with negligible counterparty risk.

Being centrally cleared to reduce counterparty risk and being traded electronically on an exchange are two distinct things.

Electronic trading dramatically changes the shape and profile of a trading floor. Introducing central clearing with margining, not so much.

You won't achieve electronic trading anytime soon on products where there are perhaps at most 200-300 buyers in the world, most of whom are only interested in buying big chunks.

> Also, high finance is not just about what you know. It's inevitably about who you know, about "illogical" factors such as salesperson charisma, entertainment, and most importantly, a credible personality type that understands the edge case risks

That explains the LIBOR scandal.

And virtually every scandal in human history.

We were talking about the finance industry, which some say should be this cold, rational thing where no "personal", charisma-based decisions are ever made and everything is based on supply and demand. Otherwise I do agree with you: our particularities as a species have caused lots of "scandals" in the past (like world-conquerors getting drunk and killing their best friends because of it), but that's to be expected; we're not robots.

> It's inevitably about who you know, about "illogical" factors such as salesperson charisma, entertainment, and most importantly, a credible personality type that understands the edge case risks.

And this is why AI/ML will never be very disruptive in law.

The low-down is: it's not about the facts, it's a social status game.

The trouble with stock market systems is they are great at predicting the past and pretty much useless for predicting the future.

A fellow student at Caltech was developing a stock market AI in the 1970s. He was very secretive about it, and was sure it was going to make him rich. I sometimes wonder whatever happened to him.

One thing we fail to consider when talking about automation is that, at the higher end, tasks are complex and (man+machine) is much much better than machine alone. Further, the time-growth of (man+machine) will asymptotically dominate (machine). This will mean people with newer skills to master (man+machine) will be hired rather than jobs just going away. So predicting a particular job going away is a futile exercise at the higher end. The lower end is different, jobs there are simple enough to fully be automated away.

Though one of the new big players in FX doesn't have any traders [1].

[1] https://www.bloomberg.com/news/articles/2016-10-13/this-bank...

There's a duality of thought going on here (with apologies to the false dichotomy). One says that all intelligence is ultimately materialistic/mechanical and that all we must do is retrace these steps with electronics and we can achieve whatever we want including high-value traders. The other seems to leave room for... let's call it the "irrational" side of things.

My only point here is that just about all values are themselves irrational. The problem in trying to get a computer to recognize that the Mona Lisa (or poached eggs, or Mozart, or whatever) has value is the same one that makes it hard to teach it to figure out whether Tesla is still a good long-term investment, etc.

A lot of traders are losing their jobs, and many fear this.

As others mention, though, Wall Street makes a lot of money trading the edge cases. For instance, many people thought derivatives traders would become obsolete when the Black-Scholes formula arrived. In reality, the model grew the size of the derivatives market, and traders made money knowing where the model was wrong. (Example: it assumes constant volatility.)
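The constant-volatility assumption is visible directly in the formula: Black-Scholes prices a European call with a single `sigma` for the option's whole life. A minimal stdlib-only pricer (standard textbook formula; the parameter values are illustrative):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (no scipy needed)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price.

    Note the single constant `sigma`: the model assumes volatility
    never changes between now and expiry.
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Same at-the-money option under two volatility guesses:
print(bs_call(100, 100, 1.0, 0.05, 0.20))  # about 10.45
print(bs_call(100, 100, 1.0, 0.05, 0.30))  # higher: price rises with vol
```

In real markets, the volatility implied by observed option prices varies by strike and maturity (the "smile"), something no single constant sigma can reproduce; that gap between model and market is exactly where the traders described above made their money.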

Similarly, many investors use an OAS (Option Adjusted Spread) model to justify prices on one-off Mortgage Backed Securities. This also helps grow the market, as there's more transparent pricing. But traders know where the models are wrong, and make money off of them.

When technology enabled FX trade spreads to be less than a penny, people thought traders were done. But this increased the volume of trading (more hedging became cost-efficient) so while the % skimmed by traders decreased, the absolute $s increased.

Net, as long as the financial pie grows, traders can find ways to siphon money off. That amount may grow or shrink, but generally the story is more technology has helped them.

Perhaps the best analogy is a chess expert paired with a computer can beat either the computer or the expert alone.

> Perhaps the best analogy is a chess expert paired with a computer can beat either the computer or the expert alone.

This stopped being true a number of years ago. Computers now play chess so much better that a human will actually impede them.

Think of it this way: could a 12-year-old (the human) help a math graduate (the computer) with some problem? Or would he more likely just be a distraction?

More elaborations on this from gwern: https://www.gwern.net/Notes#advanced-chess-obituary

A better analogy would be computer-assisted poker. The computer can keep track of the cards and provide probabilities, while the human provides the social computations (figuring out exploits by modelling the opponent's mental state). Computers have been very successful at pure optimization problems like chess, but still lag behind on games of asymmetric information.

The stock market is more like poker than like chess, because it is the correct application of theory of mind (Can a computer bluff? Can a computer cheat?) that elevates it above a game of chance.
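The "keep track of the cards and provide probabilities" half really is just bookkeeping. A sketch with the standard flush-draw numbers (9 outs after the flop; the expected-value rule below ignores future betting, so it is the crudest possible version):

```python
def hit_by_river(outs, unseen_after_flop=47, unseen_after_turn=46):
    """Chance of hitting at least one out on the turn or the river."""
    miss_turn = (unseen_after_flop - outs) / unseen_after_flop
    miss_river = (unseen_after_turn - outs) / unseen_after_turn
    return 1.0 - miss_turn * miss_river

def call_is_profitable(pot, to_call, p_win):
    """Naive pot-odds test: call if expected gain beats expected loss."""
    return p_win * pot > (1 - p_win) * to_call

p = hit_by_river(9)  # about 0.35 for a flush draw on the flop
print(call_is_profitable(pot=100, to_call=50, p_win=p))
```

What the machine cannot supply is the adjustment to `p_win` that comes from reading the opponent (is the bet a bluff?), which is the poster's point about theory of mind.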

Kasparov seems to be of a different opinion[0], saying that the human + computer combination is still more powerful.

[0] https://youtu.be/fiyBJeNBIIA?t=1h27m36s

Fair enough, it's a weak analogy for chess. [0] The pairing is tighter in financial markets where the outcomes (prices) are not necessarily deterministic and the models are incomplete.

[0] http://marginalrevolution.com/marginalrevolution/2013/11/wha...

> When technology enabled FX trade spreads to be less than a penny

This also created the incentive for collusion/price fixing in the FX market as regular bid/offer spreads became too small to make any money.

This is true too. (And perhaps another reason traders don't fear computers!)

For around eight years my primary job function was to put investment bank traders out of a job, by automating what they did.

There were still humans in charge of the algorithms, but they moved more towards Python programmers than market traders.

Many of the "old-style" traders bitched about what we did, and most moved jobs to banks that were less advanced.

(I was in the interest rates line; typical trade size is $10M)

This was my first thought. Of course their jobs can be automated. Automation increases the productivity of the average employee to the point where fewer employees are needed. Investment bankers are extremely vulnerable to this given how much fat there tends to be in their organizations.

I am curious about how the old-style traders actually did their jobs. Did they base their trading decisions on data or instinct?

Knowledge and experience of how the market reacts to other movements. I guess you could call some of that "instinct".

Of course, they did know more than the machine did in certain situations, but not enough to make up the cost.

Trading illiquid products is all about knowing your market and having a good idea of whom you will be able to sell something to when you agree to buy it (you are not in the business of taking a position you cannot get out of). It means having a good understanding of the flows: who is buying, who is selling, who still has room for this exposure, etc.

This is achieved by discussing with sales people, who themselves discuss with investors, as well as traders at other banks (through brokers).

Now, the definition of illiquid varies. Many products that used to be illiquid are now pretty liquid (interest rate derivatives), while many others are liquid only for small trade sizes (certain bonds). FX is an interesting example: it is very liquid, but it is also a market where many clients need to make jumbo transactions on the spot, and these would be market-moving if not carefully managed. That's something that could probably be automated.

I find the article very poorly written. "Investment bankers" is extremely vague, and the breadth of very different products in different markets traded by investment banks makes this sort of generalization a bit absurd.

The other reason I think the article doesn't make sense is that investment banks cater not only to certain giant markets (equity, FX, treasuries, etc.) but also to a huge variety of niche markets, in which only a handful of banks and traders are active. If you add up the salaries of the couple of traders at each of the banks active in one particular market, the savings from automating that market wouldn't justify the years of IT development needed to train and fine-tune an algo, which would still need to be maintained as the market evolves (and then you end up overpaying an AI expert instead of overpaying a couple of traders...).

I think AI will shine at solving wide problems that affect a large number of people. Self driving cars. Butler robots. Building a house. Manufacturing something common. These markets have the scale to justify large investments.

I very much doubt that AI will replace every single complex task done by a man today, for the very same reason that software hasn't replaced every single manual task done by a man today: the cost of developing and maintaining software can easily exceed the salary of the few people you are trying to replace. To overcome that with software, we need to dramatically reduce the cost of developing software, enabling ordinary employees to develop their own software. But even today we are very far from that.

What's the percentage of a generation who can actually code out of college today? How easy and useful are the main programming languages? I'd argue we are now moving in the opposite direction. Microsoft is poised to take VBA out of Office. All major OSes are evolving toward the iOS-style locked-down platform where you can only run Apple/Microsoft-approved software. Corporate IT is ever more locking down platforms with software whitelisting, etc. I wonder if the golden age of productivity improvement through software has not peaked.

>To overcome that with software, we need to dramatically reduce the cost of developing software, enabling ordinary employees to develop their own software.

I think this is the key to running an effective organisation.

It requires that most people know how to program, or at least know enough to understand what is possible to program.

I also don't see this coming, partly because the education systems are not really up to par, but also because it's really hard to develop complex systems.

I'd argue that the languages are not really the issue here, even if the trend seems to be toward even crappier languages (like JS and PHP) that really mess up the heads of beginners.

But even with a hypothetical "perfect language", the main problem is the huge amount of time and effort it takes to learn any programming language at all, plus the even more enormous amount of time it takes to write something useful, even for a really experienced developer.

But sure. It's perhaps time for a new cycle of tearing down the "mainframes" (in someone else's basement this time) and reinvent personal computing for the 2nd or 3rd time...

There seems to be some confusion going on about investment bankers and traders in the discussion.

Trading has been changing significantly since the 'Big Bang', when trading went from pits to electronic. From there on you see the evolution of algorithmic/program trading, an area that has been using quants for decades at this point. There are a good few big-name brands out there known for being 'algorithm heavy'; Man, Citadel and DE Shaw come to mind (I'm a few years out of date). That whole field has been open to introducing automation and algorithms to create a business edge and will probably continue to advance because it's good for business. The profile of traders has also changed (barrow boys versus PhDs).

Then I guess on the other side is investment banking, such as M&A and equity and debt capital markets. Generally it's relationship-based there; juniors work on pitch books, which from what I saw/heard were generally overlooked. This is potentially a lot harder to automate away. Then the bank would try to pull in some rainmakers, or grow them internally, to land big deals. Usually these opportunities open up because their clients (other companies) have learnt to trust the organisation, or at least to expect a certain behaviour when enlisting their services.

Agree. But even the trading you are referring to is the trading of liquid products (essentially equity). A lot of OTC trading is still very illiquid and will likely not move to electronic platforms for the foreseeable future.

I've got a story that connects the new and the old way of trading.

When I joined the industry just after the millennium, I joined a firm of guys who used to be floor traders. Basically the guys from "Trading Places" with Eddie Murphy: coloured coats, loud shouting, eating contests. As London got automated, they moved "upstairs", which basically meant holding the eating contests in a room of screens and squawk boxes. It was a fun time (but not for everyone; old school also means macho culture and sexual harassment lawsuits). One day I thought to myself "in what other job in the world would you find your boss breakdancing?"

I checked up on that breakdancing guy the other day. He's continued along the way of old-school market makers, taking calls from brokers and manually entering them into a system. He literally said "Nacho, I'm a dinosaur. I can't code, but I have 25 years of experience trading. And trading changed. Everything is eaten by the computers, and on top of that we have free money keeping the market from having more than one opinion."

We talked about a guy who used to work in the firm we were at. He'd gone the other way, and caught the start of the HFT boom. Now he's a billionaire. It's amazing that someone could go from the pit to trading several percent of global daily volume each day.

But basically, the old school traders are well aware of what the computers can do. They aren't stupid, the ones who can't code know they can't, and they can see the writing on the walls.

As far as I can tell, there's only one area of trading that's relatively immune to the machines. And even that isn't completely immune. It's special situations. That's where you're looking at corporate events like mergers, rights issues, and so on. It's somewhat hard to automate because there's just not that many things to make bets on. One guy can sit and read through a bunch of events and put on big bets, and there aren't that many people who have the specifics of how to make the decisions. So the benefits of automating it are not as huge as with most other types of trading.

The question rests on a logical fallacy. I worked at an IB, and even in 2011 traders were acutely aware of the benefits of technology and eager to invest in it and embrace it. Many of the answers here are regurgitations of techno-utopian talking points that do not take into account three issues:

1. Traders generally exert an advisory/supervisory role on the set of prices that the bank offers, prices which are based on a formula or other fairly automatic means, with an added human adjustment. They already extensively use technology.

2. Traders are therefore most involved where profit can be made but simple algorithms don't work. For example, pricing big deals in illiquid markets, like when a company issues a large complex bond. As this contract is by definition not traded yet and not the same as others, there is necessarily limited applicable training data, so that there is no way to learn by example - i.e., use deep learning techniques (what I assume the question is asking about). In this case, trust and relationships are extremely important as both sides of the deal have limited information.

3. Markets change dynamics, often very rapidly. Traders have to react intelligently to events: like interest rates hitting the zero lower bound, wars breaking out or industrial accidents. They need to anticipate the actual consequence to future cash flows and also to sense the appetite of the market after the event. Publicly announced AI techniques are very far away from this kind of complex general reasoning.

The days of manual trading are long gone: of open-outcry traders, yelling in bullpits and making hand signals, when banks would hire big imposing ex-football players. The question is of the "why do you feel you can get away with beating your wife?" variety.

The last few minutes of this documentary on Long Term Capital Management and its algorithmic failure will tell you why: https://vimeo.com/28554862

The complete lack of any sign of remorse on their faces about having burned a trillion dollars made the hair on the back of my neck stand up. I've never seen such a blatant display of psychopathy.

Due diligence on the financials of a company (what investment bankers are supposed to do) is actually really hard to get right with the algorithms we have today. Much of the data and insight compiled by an I-banker today does not exist in an easily parse-able form for automated algorithms, and a substantial amount of the computation relies on common sense knowledge.

More to the point, most investment banking valuations are just guesses and understood to be as such. Estimating discount rates and growth rates in particular are very much gut-driven and aren't expected to be precise, no matter how complex a model the analyst comes up with.

There's only so much you can ask a kid 10 months out of Harvard econ to do, no matter how many pounds of cocaine and borrowed Excel sheets they have.

At the more senior levels, banking is relationship based, and making partner is a function of how much business you can bring in. This means maintaining good relationships with the clients you work with and reaching out to new ones. Show me an algorithm that can send cards on kids' birthdays, drink champagne, and play golf.

Not to mention the difficulty of an algorithm that can come up with a convincing explanation for why last quarter's predictions were way off but these ones should be fine.

> Due diligence on the financials of a company (what investment bankers are supposed to do) is actually really hard to get right

Diligence is automatable, in the long run. There is a human element to it, but it is small in most contexts.

Bankers solve trust problems. An AI would need to be trustworthy in a personal way to replace bankers--this only happens with AGI. For a long time, as long as humans control capital, there will be other humans interfacing those humans with each other.

> An AI would need to be trustworthy in a personal way to replace bankers--this only happens with AGI.

This happens with an unbeatable public track record. We trust AIs in lots of tasks already, and that is not because they smile and speak nicely.

> Diligence is automatable, in the long run. There is a human element to it, but it is small in most contexts.

I'm not sure I agree with the first sentence, but I agree that there is a human element. If true, then at least some investment banking jobs must remain immune.

> ...actually really hard to get right with the algorithms we have today

Assuming very few companies have publicly available published financials going back over a large (> 50 years) timespan, would it be safer to conjecture that it is actually the _paucity_ of data which impoverishes the algorithms?

It only takes one person to prove you wrong.

That explains the current state, but has little prediction value. The availability of parsable data can change at any moment either because of regulation or through market forces (either external companies providing parsable data as a service, or enough companies realising that providing plentiful parsable data about themselves attracts capital).

Once enough data is available, common sense and gut feeling can be programmed as a combination of fixed rules and machine learning.

I generally don't think that all traders believe their jobs are immune. Any rational person who looks back at history will observe that the markets have constantly been changing due to technology that speeds up a trade or makes it more accurate. However, most of the significant changes have not been due to artificial intelligence innovations. Instead, those changes have been about automating tasks that were once repetitive. I wouldn't necessarily call that AI.

If you'll allow me to simplify the IB Trader's job: there are 2 types of traders: agency and principal traders. Agency traders build relationships with clients, accept orders from them, execute them in the market. They make money on commissions. Principal traders will take risk. Sometimes on behalf of a client - on the back of a client order - or sometimes purely for the bank's own account.

The agency traders have seen their roles decimated by technology - because much of their role (minus the relationship-building part) was automated. On the risk-taking side, AI has crept into some places - albeit in very narrow use cases. For example, we've seen the rise of the robo-advisor, where an "algorithm" comes in and automatically adjusts your portfolio to reduce risk and increase alpha. Well, the risk reduction part is well known (Markowitz portfolio theory). But the increasing-alpha part is the difficult thing. And AI seems to be quite far off in its ability to be a stock picker - simply because the passive approach is superior (i.e., no intelligence needed at all).
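As a rough, hypothetical sketch of the Markowitz-style risk/return machinery behind such robo-advisors (toy numbers, unconstrained weights; real systems add constraints such as no short sales):

```python
import numpy as np

# Toy inputs (illustrative only, not real market data): expected annual
# returns and a covariance matrix for three assets.
mu = np.array([0.05, 0.07, 0.06])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.05]])

# Classic unconstrained mean-variance weights: w proportional to
# inv(Sigma) @ mu, normalised to sum to 1.
raw = np.linalg.solve(cov, mu)
weights = raw / raw.sum()

port_return = weights @ mu                   # expected portfolio return
port_vol = np.sqrt(weights @ cov @ weights)  # portfolio volatility
print(weights, port_return, port_vol)
```

Note that the "increase alpha" part is exactly what this machinery does not give you: `mu` has to come from somewhere, and that forecast is the hard, human (or elusive-AI) bit.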

Because investment bankers are just salespeople? Nobody is suggesting AI will replace salespeople in the near future. I could see there being fewer grunt analysts in the future, though.

> Nobody is suggesting AI will replace salespeople in the near future.

They aren't? Because I see this happening all the time. Automated checkout lines, E-Trade and online brokerages, etc. Low-value transactions are handled more and more by electronic "salespeople". All it takes to reach high-value transactions as well is for the clientele to decide they trust machines more than humans, which is not all that unlikely given the many sources of human error involved.

It's possible that humans will be involved in high-value transactions like investment for a long time, since the marginal cost of human labor is low as the numbers get bigger, but it's not impossible to replace; we already have the technology.

I think that the term 'salespeople' here refers to those who proactively push their products to the market and by doing so widen the market for the company they represent.

The examples you give are indeed places where buying or selling happens, but where does the incentive to do these transactions come from? It's still from humans. Sure, we are bombarded left and right with ads that were carefully targeted for us by complex algorithms, but I don't think I've ever bought anything from Amazon without reading a review about the product from another user, or bought stock without going through several opinions written by more-or-less respected analysts.

You'll note that for a lot of online financial services, there wasn't anyone at all to serve low-profile customers before robo-advisor/self-service offerings came in.

There is a line (a few hundred k) where it's enough to play but not enough to pay for the cost of a human to help you (salesperson/financial advisor/trader types).

TIL cashiers are considered salespeople.

>Nobody is suggesting AI will replace salespeople in the near future.

Huh? For one, investment bankers are "just salespeople" in the same way doctors are just nurses.

Second, everybody and their mother is suggesting AI will replace retail-level salespeople in the future. In fact it's happening in lots of places, from British supermarkets (where you can scan and pay for your purchases yourself), to Amazon's Go experiment, and numerous other uses of automation in retail.

Cashiers aren't salespeople. Salespeople are people who convince you to buy something. Higher-end stores have them, grocery stores do not.

> Salespeople are people who convince you to buy something. Higher-end stores have them

Many lower-end stores did too, before or earlier in Amazon's and online shopping's proliferation. It's already happened to a not insignificant extent.

Well, in that case, travel agents have been replaced by "buy your own tickets online", real estate and rental agents have seen similar automation through web apps (+ reviews, photos, walkthroughs, etc for the "convincing part"), and lots of other salespeople jobs.

Bookstores are probably the most famous example: the people working there were far more than cashiers, but they still got decimated by Amazon and co.

I think grandparent is referring to salespeople as in "human specialized in marketing and selling special/expensive/complicated products one-on-one to entities", not wholesale retailers.

> "Would you trust purchasing a house from a seller, without meeting/talking to them, or a single person before and throughout the purchase?"

You mean I can get an unbiased look at a house in peace, compare the numbers, look at the plans, measure the humidity and do my due diligence without a sales person breathing down my neck?

Hell yea. I'd pay premium for that.

> > > "Would you trust purchasing a house from a seller, without meeting/talking to them, or a single person before and throughout the purchase?"

> You mean I can get an unbiased look at a house in peace, compare the numbers, look at the plans, measure the humidity and do my due diligence without a sales person breathing down my neck?

No, I don't think anyone means that. They mean purchase the house.

You can't go into a company and get them to show you their sales pipeline and their staff's performance evaluations before you (potentially) invest. If you want to do these things, you need to make a series of smaller deals first.

Human beings can be fluid, can enter into non-disclosure agreements and make these kinds of bespoke deals, while a web-based transaction form with a buy-it-now button cannot.

I found my apartment online. Set up the meeting. Negotiated the documents and corrected them where necessary. The broker I paid 10% of my first year's rent? Apparently he bought a ticket to Paris the morning of our closing.

If you negotiated the documents yourselves, why was any broker involved?

For my first apartment I used a broker as well. At the end of the process I discovered their added value was less than zero. Disregarding their fee, I paid too much for the apartment, and they encouraged the high price because they just wanted to close and receive their fee. When I bought my next house I did not use a broker. That's just the way things go; you live and learn.

People are more and more selling without a broker as well, or at least they are starting to use cheaper brokers where you only buy the service you need.

Most apartments in New York City have a broker involved somewhere. In this case, the building owner had a sell-side broker. A condition for renting was paying the building owner's fee directly to the broker, while simultaneously absolving the broker of any responsibility to me, the paying party, as opposed to the building owner, the technical client.

The whole NYC real estate brokers thing is completely fucked. The whole thing is a scam.

Ah, I see. What's in it for the building owner then?

> If you negotiated the documents yourselves, why were any broker involved ?

It's a scamola, with a similar racket being Ticketmaster's "convenience fees". Most apartments in NYC will require going through their broker, who collects a fee (usually a percentage of the first year's rent). A chunk of that fee ends up being a kickback to the owner of the apartment. Ticketmaster does the same thing with venues (tack on a $15 fee, give the venue $7.50).

Since a company is made up of people, and the performance of a company is closely tied to the ability of its management, and when you buy out the company you might be concerned about your affinity as an owner to the group of people that makes up management, I think an investment banker fulfils the same role as an interviewer or a recruiter does for a job candidate.

Would you hire a person for your team just by looking at degree transcripts, LOC written, and the resume? Maybe you would, but I'm not too inclined to do that.

I did exactly this when buying my home through Redfin. I only met the selling agent in person because they unlocked the door so I could see inside. The rest was through email, mostly.

This is the classic HN bias. Normal people would like to talk to humans. HN users would rather get the info, plans, etc. from houses, and eventually buy them and sign the contract, using a REST API with a Node.js client.

That's a rational buyer... what do you think the bank does? They run the numbers to see if the person qualifies, has great credit, etc. Do you think they care how your day is going? Lol...

Of course they don't... but they are not the client. It's the client who needs honeyed words, and that's something an AI can't do.

Rational buyer is almost an oxymoron.

Exactly. Lots of people want the experience of the salesman in picking a house: information on the neighbours, what it's like to live there, and why the previous person is getting rid of it.

The HN bias is to assume the salesman is the inefficient part, disincentivized to help you and trying to rip you off.

The salesman is incentivized to screw you. They work on commission so the faster they can move houses, the better. Real estate agents are screwing both the buyer (feed you bullshit to get you to buy) and the seller (convince to back off of higher prices because the 0-15% difference in price isn't worth the weeks more effort on their commission).

Like a restaurant, though: if they were bad at their job they wouldn't attract new clients. Under an HN analogy, if they screwed clients they'd have a zero rating and lose future clients.

The assumption you have is that they are simply a market matcher, but they're offering more than that.

Their incentive is to get the highest possible sale price in the shortest amount of time and a good reputation. That high price has to also be clearable meaning it cannot be just up in the air and seemingly arbitrary (it can cost either time or reputation).

Without them there is much more information asymmetry. Using an online version, you basically have access to the same information you would with them, but without the human element to judge by, nor their experience on offer.

This is not true. Some professions are based on screwing the current client and moving on to the next one. How many times do you get to buy a house? This is why most people use an agent to sell their house: they can lie without seeming to be lying ("sorry, I was unaware of xyz... that's what the owner told me").

The incentive for the agent is actually the highest cash down payment, because they don't want to deal with the bank to get their money; plus, they get to keep the down payment if the buyer pulls out.

Perhaps things are different internationally, but if word got out that agents from the estate agencies I work with were regularly screwing buyers, they'd sink quickly.

> plus they get to keep the down payment if the buyer pulls out.

What? Where does this happen?

> Like a restaurant though, if they were bad at their job they wouldn't attract new clients.

In many places, the law is forcing you to go through an accredited agent to make a deal.

Talking to a human incurs 10% of fees when you're buying a house/flat.

I don't know anyone who wants to pay that fee.

Kind of defeats the analogy though.

Judging by the way estate agents (realtors in the US) are viewed in the UK, I don't think anybody likes dealing with them. I suspect most people would rather learn JavaScript, Node and a REST API than have to deal with them.

A real estate agent can never be as shitty as JavaScript (few things have the potential to get there).

In an investment bank a trader has a high ratio of support staff around them: legal, compliance, IT, operations, quants, finance/tax, risk. These 'support' jobs are a significant fraction of the real cost of a trading seat - not just the trader's salary.

Automation has happened incrementally in the industry for years, like many others - starting with the easiest stuff (low hanging fruit) like some operations tasks and mechanical trading tasks, and leaving the more complex tasks for humans - or letting a human scale to do more.

The more complex tasks that are left typically require non-trivial intelligence, e.g. understanding why the new product brought to market by your competitor or counterparty is slightly different to what you are trading today, and deciding if you can/should transact in it. Understanding what impact the upcoming compliance rule changes have on your market and activities (there are always regulatory rule changes). Understanding what the limits of your trading are, to avoid concentrating too much exposure in one area. Understanding why your counterparty is upset about some aspect of the transaction. etc.

Famous last words for a lot of businesses:

>When it comes to AI/Machine Learning: the nature of ____ and markets are drastically different to other fields that AI/ML have previously excelled in. The main reason being what are the laws and rules that govern how an AI/ML should view a field?

>Since the very nature of a market is… a constant change respecting an infinite and broad amount of variables (____), a complex system (repeating exact past actions, will not give you the same results) and a social interaction/conflict regarding an ambiguous ____, it becomes incredibly difficult to determine those laws and rules.

>In physics, mathematics, computer programming, you will always have a safe level of predictability when it comes to certain functions and how they interact to give you a reliable solution. With ____, instead of this safe level, there is just irrationality beliefs; something you cannot model with accurate certainty (just look at how AI/ML would handle predictions for Brexit and Trump, and then, how it would cope with the aftermath).

Market predictions put Brexit and Trump at 20% the day before the vote, having moved between 15% and 50% during the week before that. Note that they followed similar patterns.

They assessed the risk fine.

Yup, agreed. IIRC, an AI on Twitter predicted Trump (my own model, trained on data from 2007 onwards). Also, I know for a fact Renaissance has been using ML for a while.

a constant change respecting an infinite and broad amount of variables (macroeconomic), a complex system (repeating exact past actions, will not give you the same results) and a social interaction/conflict regarding an ambiguous valuation of a price, it becomes incredibly difficult to determine those laws and rules.

Just like driving a car.

Well, no, not really. Ideally a car has a definable spec, such as a road with consistent markers. Steering within a consistent set of lines and reacting to other cars on the road is difficult enough, especially if you try to account for random behavior from other drivers.

Modeling and forecasting a security's price mathematically, or a set of security prices, is complex in a different sort of way. A better analogy would be like trying to program a car to deal with a meteorite that suddenly hits the road in a random spot, or a bomb that goes off 100 feet in front of the car as it's cruising on the road. There are far fewer rules with which to start a foundation, and the entire macroeconomic structure is far more delicate and in constant flux from news, other traders competing against you, information closed to the market, etc. With cars, you at least get a road, and add randomness in with other drivers. With security prices, you might have a lower bound of potential losses, if you're buying long. It's just a different animal.

Maybe with their work, labor costs are a relatively small portion of the product price?

Look up the comp ratios of some of the major investment banks. It can easily be 50%. As with all professional services firms, the biggest costs are people and real estate.

Is that percentage high or low compared to the products of low skilled labor?

I think he's saying the money clients pay to investment banks is small compared to the size of the deals bankers advise on / underwrite, etc.

But it is large compared to what the shareholders earn, and they are the ones ultimately pulling the strings. The days of the white-shoe partnership are long gone...

If traders have managed to persuade people that they are able to beat the market against all evidence[1], why would they not be able to convince people that they are better than AI?

[1] Of course, there is anecdotal evidence to the contrary, but we should expect such cases even if the performance of a trader is a random process.

Do you consider several hedge funds consistently beating the market with significant margins for over 20 years to be "anecdotal" evidence?

The evidence shows that it is very difficult for traders to consistently beat the market. The evidence does not show that their performance, as a profession, is random.

Yes, I consider them anecdotal. If you have a billion people flipping coins, a few are for sure getting 20 tails in a row, but that is no evidence that they would be better coin flippers than the others.
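A quick back-of-the-envelope calculation makes the point concrete:

```python
# Chance of a fair coin coming up tails 20 times in a row, and the
# expected number of "winners" among a billion flippers by luck alone.
p_streak = 0.5 ** 20
flippers = 1_000_000_000
expected_lucky = flippers * p_streak

print(p_streak)        # about 9.5e-07
print(expected_lucky)  # about 954 flippers hit 20 tails purely by chance
```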

> The evidence does not show that their performance, as a profession, is random.

I think you are right, if I recall correctly, the evidence points to a conclusion that the performance of their profession is worse than random when you take fees into account...

Yes, that's the classical coin-flipping example from the strong form of Fama's Efficient Market Hypothesis. There are several problems with the coin-flipping analogy:

1. As stated, it's not falsifiable. So you start with a conception of the market as entirely random, and you observe that participants are consistently beating this market. Each time you observe someone beating the market, you chalk it up to the probability distribution. "Well, that's just a two-sigma event." Then you see it happen again. "Well, that's just a three-sigma event." Then again, and again, and again. How many sigmas from the average market performance are you willing to accept before you agree that someone is legitimately and purposely beating the market with a skill-based mechanism, not a chance-based mechanism?

Furthermore, do you have the numbers to turn this into a falsifiable claim? What is your time interval? Daily, weekly, monthly or annually? How many correct forecasts do they have to make ("how many sigmas from the average"), compared to the chance expectation of coin flipping over the same timescale? If you don't have these numbers handy, then it's purely a thought experiment. Consequently, the observation that funds like Berkshire Hathaway, Bridgewater, Renaissance Technologies, Baupost Group, Citadel, DE Shaw, etc. consistently beat the market by at least 20% net of fees over 20-30 years suggests that, per Occam's Razor, people can beat the market due to skill.

2. The analogy is not comparable to active trading. You don't need to hit 20 heads in a row to beat the market consistently; you just need to hit x heads out of y coin flips at a rate greater than chance would suggest (a statistically significant p-value). We don't assume that basketball is a game of chance because the players can't make all their shots in a row; nor do we assume that baseball players with a 0.3 batting average aren't clearly better than the average high school dugout. If your trading interval is weekly or monthly, and you're consistently up over the market (even net of fees!) for 240 or 360 months, it doesn't matter whether every single month was a winner.

3. Have you ever read Warren Buffett's response to the EMH assertion, as postulated by Fama?[1] He outlined an excellent rebuttal in his 1984 essay The Superinvestors of Graham and Doddsville. Essentially, if you assume that the coin flipping analogy does map to trading, then you should expect to see a normal distribution of the winners, given that the market is inherently random and no one is achieving superior coin flips through skill. However, if you observe that the winning coin-flippers consistently hail from a small village with standard coin-flipping training, then it is more reasonable to assume that there is something unique about those particular flippers. This is what we see in reality - yes, most amateur traders fail miserably, and yes, most hedge funds underperform the market over time. But there is a relatively small concentration of extremely successful funds and traders in an uneven distribution.

4. Even Fama has walked back on Efficient Market Hypothesis, and no longer espouses the view that the market is inherently random. It is deeply complex, yes, but it is not efficient, nor entirely random. Several studies have been conducted to empirically examine EMH, and the results in favor of the hypothesis are dubious.[2][3][4] A much more charitable retelling of EMH is the weak position, which essentially states that any obvious alpha will be quickly arb'd out of real utility, but that non-obvious alpha, or alpha which is technically public but not easily accessible will retain utility until it becomes obvious. This also maps more cleanly to reality, in which trading on e.g. news reports is mostly unprofitable (everyone can get a news report at around the same time, for the same level of skill) whereas mathematically modeling pricing relationships can be extremely profitable (doing so accurately requires public, but mostly unclean data and a great deal of skill).
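The p-value argument in point 2 can be made concrete with a quick sketch (hypothetical numbers: a fund that beats the market in 150 of 240 months, tested against the fair-coin null):

```python
from math import comb

def binom_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of at least k wins."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If monthly out/under-performance were a fair coin flip, how likely is
# at least 150 winning months out of 240?
p_value = binom_tail(150, 240)
print(p_value)  # a tiny probability - hard to ascribe to luck alone
```

No 20-in-a-row streak required: a modest but persistent edge over many months is enough to reject the chance explanation.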


1. The Superinvestors of Graham and Doddsville - http://www8.gsb.columbia.edu/rtfiles/cbs/hermes/Buffett1984....

2. Investment Performance of Common Stocks in Relation to Their Price-Earning Ratios: A Test of the Efficient Market Hypothesis - http://onlinelibrary.wiley.com/doi/10.1111/j.1540-6261.1977....

3. The Cross-Section of Expected Stock Returns - http://onlinelibrary.wiley.com/doi/10.1111/j.1540-6261.1992....

4. International Stock Market Efficiency and Integration: A Study of 18 Nations - http://onlinelibrary.wiley.com/doi/10.1111/1468-5957.00134/a...

My belief in some of the weaker forms of EMH does not stem from the idea that markets would be somehow correct, but vice versa. Markets are (almost[1]) always incorrect, and nobody can know how much they are incorrect tomorrow[2]. Thus nobody can beat the markets, other than the random coin tosser.

Now, I fully agree that there have been certain anomalies (the value premium anomaly in Buffett's case) that pretty much align with the strategies of these long-term successful investors. The question is, has it been their skill in picking the right anomaly as a basis for their strategy, or luck? Again, in the world of investment strategies, there is for sure someone trying almost anything. And if that anomaly disappears[3], do they have the skill to change their strategy?

But we are a bit off topic here. The original question was "Why do traders in investment banks...", and they are a different species from the Warren Buffetts.

[1] Asset markets can be right somewhat like a clock that has stopped is right twice a day.

[2] Yes, Keynes said "The market can stay irrational longer than you can stay solvent."

[3] I have actually bet my money that the value premium anomaly is not disappearing, but it is not something investment bank traders can enjoy, as it is far too long-term an anomaly for them.

Is there a name for this fallacy? I.e., asking a question that assumes something ("investment bankers have this feeling") as a premise that may not be true.

It isn't a fallacy to ask a question :)

For similar reasons that spreadsheets didn't put accountants out of work.

Honestly, maybe they just don't care? Rather, what should they do about it, what would we expect people with such huge earning potential right now to do other than push forward with the plan that works under the status quo.

Answering my own question: I'd expect bankers to save more (of their own money) in anticipation of the good times not lasting as long. More conservative types will weather the storm and spendthrifts will get wiped out.

In other words: business as usual, up until the very moment it isn't.

Everybody feels their job is safe from automation. It's the same way planes crash, but not mine.

Believing "I am special" is just built into us.

Regarding your airplane example, it's actually the opposite: plane crashes are so infrequent that you have to believe "I am special" (with a negative twist) if you think you are likely to be in a crash.

They're a lot more frequent than many realise: http://www.planecrashinfo.com/database.htm & https://aviation-safety.net/database/

Although per passenger mile, it's true that air is very safe overall.

Scheduled flights inside or between advanced countries are incredibly safe. Unscheduled flights and any flights in non advanced¹ countries aren't that safe.

The overall statistics are not useful for evaluating your next trip. You must use the ones from your cohort.

1 - Mostly, the countries with broken governments. If human rights aren't respected, that's a great predictor that flights aren't safe.

That's a non answer.

If some belief applies to everybody but to a different degree (per individual, per job sector, etc.), then the mere fact that it applies to everybody doesn't do anything to explain this variability.

And if what you say is that it applies to everybody with no variability, then that is obviously wrong. People in specific job sectors are way more worried about automation than people in other sectors, even if their jobs are not yet starting to get automated. E.g. office workers vs surgeons...

Not only do I believe that my job can be automated - I've spent years trying to do just that.

Of course, I wouldn't be giving the scripts to my employer, just spending more time doing other things...

I think the difference here is "task" !== "job". You're automating tasks. Your job is to keep the system running as effectively as possible (I don't know what you actually do). Meanwhile, I do get that you're not actively trying to lose the responsibility of these tasks, i.e. "I wouldn't be giving the scripts to my employer", but that is because you're trying to hold onto the relaxed transition phase between tasks X, Y, Z to tasks 1, 2, 3. If you really had job security issues, you would not have written those scripts.

Are you sure you haven't perhaps fallen prey to too much job security? Such a large sense of job security that you don't care about automating X part of it away. You, like me, like other "rockstar" software developers, (perhaps[0]) believe it doesn't matter if you automate X, Y, Z, because you're so good, that when you're out of responsibilities you will be offered the followup/orthogonal tasks 1, 2, 3.

Anyways, I'm just trying to highlight that you too, perhaps subconsciously, think "I am special, it won't happen to me", because at least I, who also don't care about automation, feel this way.

[0] I know for me, this is the truth. This overconfidence is why I don't care about automating my tasks away. Honestly, even giving the scripts to my employer makes little difference to me.

To some extent what you say is true, but only partly.

I'm not part of the cult of the "rockstar programmer". I do write a lot of good code, and I tend to do it much faster than other programmers that I work with. But programming is not actually my job description.

Without going into details - I do something else for living, that uses computers quite heavily, and automating my job is more of a hobby. I have a huge amount of flexibility in what I do, so there is an element of substituting 1,2,3 for X,Y,Z because I find them more interesting. My boss wouldn't care one way or another - he is more interested in whether or not the tasks get finished, and maintaining a high quality level in the final product.

You actively sabotage your employer by keeping part of the code you write secret?

Not at all. I do keep part of the code that I write secret. This is not a form of sabotage. My employer likes the fact that the things that I am responsible for run smoothly.

If you wrote those scripts during work time, they may well belong to your employer.

While that would be true of IP in the US, it is not true where I live and the scripts belong to me.

A good trader can explain their trades, a good AI cannot yet. This is important in many financial settings when some human must still be personally responsible for the trades. For a manager it is much easier to approve regression coefficients or a particular trading formula than to sign a neural network into production.
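The explainability gap can be illustrated with a small sketch (my own, with made-up signal names): a linear model's fitted coefficients read as plain statements a manager can sign off on, which is exactly what a neural network's weight matrices don't give you.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: next-period return driven by two observable signals plus noise.
momentum = rng.normal(size=500)
carry = rng.normal(size=500)
ret = 0.3 * momentum - 0.1 * carry + rng.normal(scale=0.05, size=500)

# A linear model a manager can actually approve:
X = np.column_stack([momentum, carry])
coef, *_ = np.linalg.lstsq(X, ret, rcond=None)
print(f"momentum beta: {coef[0]:+.2f}, carry beta: {coef[1]:+.2f}")
# Each coefficient is a human-readable claim ("buy on momentum, fade carry")
# that someone can be held personally responsible for signing off on.
```

A production neural network offers nothing comparable to hand to a risk committee, which is the point the comment is making.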

> A good trader can explain their trades

Yeah, but what are the odds that their explanation is just a post-hoc rationalization (ie. "heads I get credit, tails I avoid blame")?

What a strange question. Technology has eliminated swathes of traders: open outcry exchanges, equity execution traders. If you watch CNBC, you see almost no one on the floor of the NYSE, because there is nothing worth doing there (except for a couple of exceptions due to exchange rules).

Some vocabulary might be helpful here; generally there are a few buckets most traders can be put into:

1. Proprietary trading (think hedge funds or the old bank prop desks, ranges from the bass brothers to renaissance tech in terms of style)

2. Execution or sales traders (often work for or are the counterparts to #1, their goal is get the best price for their clients in the quantities and time frames desired)

3. Market making (essentially providing liquidity to both buyers and sellers)

4. Sales - essentially layered on top of 2 & 3 to interact with 1

With that said, #3 has pretty much been automated in some markets and is the easiest to automate, #2 has had a lot of automation for vanilla stuff, but people are still involved in big, complicated things. #1 may be automated but is largely strategy dependent, #4 is pretty hard to automate...

I don't agree with the question, especially if we are talking about de facto traders. There is a high level of worry because commission margins and returns per unit of risk are the lowest ever. I would say there is a high level of calculated ignorance in the industry.

Truth is, most trader jobs have disappeared in the last 15 years. First it was the automation of order collection (phone orders being replaced by automatic order routing processes), then automation of the execution process (from human order management to algorithmic trading), then automation of the investment management process (from discretionary processes to quantitative management), and the last frontier will be the automation of the relationship with the client.

Problems are different for different trader types.

Execution traders execute orders on behalf of clients. Today a lot of clients execute directly using web terminals and APIs. And if you still execute on behalf of others you spend a lot of time fighting against the machines (one solution is to trade out of market, but this too is being automated). Very very hard to come up with new execution algos.

For delta hedging traders the role has been reduced to monitoring, unless you have allowance to take risk (which most banks don't since the 2008 crisis).

For alpha traders, whose job is to extract money from market, the risk is not in being replaced by the machines, but one of overcrowding, i.e. everyone following the same strategies is reducing returns and increasing risk (skew) across the board.

Sales traders, whose activity is to sell first and trade last, still need to manage relationships with customers. But this has been changing gradually, with clients preferring digital interfaces, passive investment programs, and now the rise of automated investment management programs.

I started working in the industry circa 1999. The head of trading I was working with once told me machines will never be able to invest like a human. He's still there making a lot of money, but certain that it will end someday. His strategy: "let's see if I can make it one more year"...

If the role of investment banking is to optimally allocate capital, then part of that job is research. Think of Andrew Left exposing fraudulent Chinese tech stocks, or the Lumber Liquidators controversy. Algorithms can augment this work, but cannot replace it.

Well, an equally good question is why do programmers feel their jobs are immune from AI, etc.?

People are working on it. Search for: artificial general intelligence. There was a book out last year called "The Master Algorithm" that talked about it (the book is not that great). We all know how well certain AI are doing now (neural nets etc.), and historically there were high hopes for logic-based systems (Prolog) that are not as popular at the moment. Anyway, the author speaks of 4-6 different AI camps that are all separate (depending on definitions), and he believes that combining them together could result in general AI. The book is a half-decent review of some of the less well-known AI (that might be worth exploring), but it has the problem that it's too much info for the novice and not enough for a non-AI CS person. He actually has code etc., can't remember the name of the system.

Interesting, I never thought about that. It seems like something very far off, though. Does anyone know about any research currently being done in this field?

That's what (supervised) machine learning and neural networks do, isn't it? You specify the "what" and not the "how", and they figure out the "how". Basically, techniques for automating the task of computer programming given requirements.

So any advance in AI/ML/NNs can be seen as advances in the field of automating computer programmers. We don't tend to think of it like that because these techniques have to be applied, currently, by computer programmers, and the software the techniques "write" was generally infeasible to write by hand anyway: it's been so far purely additive and solving problems that hand-written software either couldn't do at all or where it sucked and progress was very slow.

But watch out. The tech is getting better scarily fast. Neural networks have been able to learn how to do basic algorithms like sorting given only examples. The primary limit is still the amount of data they need to work with, but R&D is driving that down too.

I can see a time when many tasks that today automatically require skilled programmers are instead done by domain experts who simply shovel data into an advanced neural network and discover it gets results that are good enough. Perhaps hand-written software made by a skilled programmer would be better, but the machine is a lot cheaper ...


Heh, it's the oldest thing there is in the field, almost. FORTRAN and COBOL were invented in the 1950s so that scientists and business managers could write their own programs without having to employ those expensive "programmers" who could write machine code.

Now it's 60+ years later, and there are still human programmers.

Because we're needed to write and maintain the AI.

Well, until the day an AI is good enough to write and improve itself; then we'll all be doomed equally.

Traders will still be needed to justify bad decisions, or 'good' logic with bad outcomes run by AI. ;)

Every time I see a gameshow where you can 'bank' your current winnings, ... I imagine the future of trading will include some strategy that has an AI yelling those sorts of things to other AI agents acting in concert. The strategies of macro stock market outcomes, applied to micro stock actions. https://www.t0.com is also going to be a fun reality.

For one thing, traders have knowledge of multiple markets and real-world events (hurricanes, elections, etc.), and can infer correlations between those and place trades accordingly. AI is completely unaware of these things and can only train itself on the market data it trades against.

Also, not all trading is liquid, nor is it short term, both of which are targets for automated algorithmic trading. Long-term investments are typically human decisions.

God I'm so fucking jealous of computer specialists who work in investment banks. How do I get in without going to Harvard?

They are exactly what they claim to be: specialists. You need to become one in some area that banks deem valuable. Current white-hot areas would be FPGA, machine learning (but you'll also want a PhD in statistics, or at the very least a good degree in it from a good uni) or be a badass systems developer who can write absurdly low latency code in terms of allocation, cache coherency, network sympathy and so on. To get this good, you need to have been doing it for a decade or more, so start now. At the bottom. No one leaves university and gets one of these jobs. They leave university, maybe get a PhD or work for a decade and work up to them. You are jealous of people who have put in thousands of hours of very hard work into their careers, but will you do the same?

Large IBs do hire dozens of new graduates every year into IT.

You have to be bright, articulate, interested.

That tends to be Java CRUD not trading logic.

Got any primers on how to write low-latency code?

Really, you just have to start trying to do it. Find an old, good hashmap library in C somewhere, benchmark it and then try to reach feature and performance parity yourself.
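The benchmark-then-match exercise suggested above would really be done in C, but the shape of it can be sketched in Python (my own toy, assuming a hand-rolled open-addressing map measured against the built-in dict as the "good library" to chase):

```python
import timeit

class NaiveMap:
    """Minimal open-addressing hashmap with linear probing -- a first
    attempt to benchmark against the built-in dict and then improve."""
    def __init__(self, capacity=1 << 16):  # capacity must be a power of two
        self.keys = [None] * capacity
        self.vals = [None] * capacity
        self.mask = capacity - 1

    def put(self, k, v):
        i = hash(k) & self.mask
        while self.keys[i] is not None and self.keys[i] != k:
            i = (i + 1) & self.mask  # probe the next slot on collision
        self.keys[i], self.vals[i] = k, v

    def get(self, k):
        i = hash(k) & self.mask
        while self.keys[i] is not None:
            if self.keys[i] == k:
                return self.vals[i]
            i = (i + 1) & self.mask
        return None

def bench(m, n=10_000):
    """Fill the map, then time repeated lookups of one hot key."""
    for x in range(n):
        m.put(x, x) if isinstance(m, NaiveMap) else m.__setitem__(x, x)
    getter = m.get
    return timeit.timeit(lambda: getter(n // 2), number=100_000)

print("dict :", bench({}))
print("naive:", bench(NaiveMap()))
```

The learning is in closing the gap the numbers expose: resizing, load factors, hash mixing, cache behaviour. In C the same exercise additionally surfaces allocation and cache-line effects, which is where the low-latency skills actually get built.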

Plenty of downsides to balance out the high pay and the excitement of working with big transactions. The hours and level of stress take a heavy, heavy toll on many people.

> I'm so fucking jealous

Don't be.

They've had to work insanely hard to get into and finish Harvard, only to be saddled with hundreds of thousands in debt. Then they had to work even harder to get into hard-to-get-into companies and acquire the experience required to get into even harder-to-get-into ones (i.e. investment banking). After a decade, they're finally there, and all they get is long hours, too many responsibilities and no free food.

What's the hourly wage pre-tax after a decade of experience at these companies?

So, you're in only for the money?

Er, it's trading... what else is there?

Interesting challenges, brilliant people, nice co-workers, instant feedback loop, lots of resources at your disposal, large company benefits (i.e. pension/healthcare), quality matters, work with grown up.

A lot of which can't be found in the usual web startup.

> "Why do traders in investment banks feel their jobs are immune from AI, etc?"

Why do people need to have anxiety and fear about AI is the better question. Let's solve those instead of this blind ambition to automate everything, and then somehow assume our benevolent government will give everyone UBI or some other pipe dream.

I just think that if AI could beat them, they would already have been replaced. Any innovation in trading is automatically implemented. Maybe this will be possible in the future, but it doesn't depend only on deep learning techniques and having huge samples for learning, because they already have both.

Organisational inertia is huge at a big investment bank. It might take a decade to change their behaviour significantly, even when they're being out-competed by smaller firms. The big places simply do not need to compete so much due to the pseudo-monopolistic nature of their business.

AFAIK the bulk of investment bankers are beaten just by chance as this article suggests: http://www.automaticfinances.com/monkey-stock-picking/

The bulk of investment bankers are not picking stocks for funds. In the field of fund management, there are some people doing this, and it is a very common trend that passive indexes do better than the active funds - it is true.

In the field of investment banking trading (which is mostly market making) the amount of automation varies by asset class: very automated for some asset classes like fx, equities, much less for more illiquid asset classes like credit, commodities, bespoke products.

As well, senior traders operate as the 'business', making more decisions than the pricing of products. They make the business decisions, often judging legal, compliance and accounting risks (and not always correctly).

e.g. do you accept to trade with a Dutch counterparty who wants to trade against your German legal entity knowing that you can only hedge the position in London? What is the risk between the two legal setups? What premium should you charge for those risks?

Do you trade the very large size that the counterparty wants, knowing it takes you over your balance sheet position limit - can you get approval for this from your senior management? Can you offset the position in the market without it moving against you? What premium do you charge for this?

I don't think bankers are naive, they are seeing it happen.

If your job can be done overseas, it probably will be outsourced.

If your job is algorithmic, you'll be replaced by a computer.

A lot of M&A activity though doesn't fit that category.

If it's true, it could be that there is more insider information being used than is generally recognized. Computers can't get that info.

They don't. I work in in the trading department of a large bank - directly in finance, not IT. Have been here 10 years.

Here's what I have seen. For more "liquid" markets - like delta-one blue chip equities, IR swaps, spot FX - jobs ARE moving quickly. People are not complacent. Competition comes from firms that are much faster in execution, having automated most of the process - and presumably having lower hedging costs by being able to move quickly. Or just basing their decisions "better" on history.

Having said that, you have to remember that: a) Size of balance sheet matters. A large bunch of clients of such firms trade with them due to "synergies". Large shops are one-stop shops. They don't just sell you a swap. It's usually a loan/funding + an option to hedge it + some tax advice thrown in. It's hard to break it down into components and trade individually, as most other players may not even be willing to face such counterparts. And shopping for each component separately might mean larger time and friction costs. Here being a large bank helps massively. Basically: capital is hard to automate, decision making is easier.

b) There are a large number of markets that cannot be automated anytime soon. These are "illiquid" or "exotic" markets. The data on them is scarce and trades happen at much lower frequency than, say, swaps or delta-one equity. Throw in some options, and your model might say here's a price, but there's no guarantee on it. Then you have to balance it against your book size, and the price is not discoverable, or even the same across firms (for good reason: my cost of cash liquidity might be different to yours). Sometimes counterparts will trade with you even if you have a worse price, because there's lower risk of you going under in 10 years on a 20-year cross-currency trade or inflation swap.

c) Good traders are very defensive and paranoid. They don't work under the paradigm that history will repeat itself. They will try to look for risks that might seem to have nothing to do with their market. I know some traders who have been making good money every year in these exotic markets, regardless of crisis or not. Some of the sharpest people I know - and they love their job. Not just the money aspect. They just love the whole high pressure environment and everything it entails. Now they are being drowned in regulation. Don't get me wrong - regulation is good, but right now we have a mess. I won't try to convince you - and unless you work in a bank, you will not agree with this.

d) Management is old. Might seem obvious, but most of the people at the top have made their money the old-style way. After a certain amount of time, you tend to trust your own instincts more than other things, especially if you've been successful (which may have just been a random process to start with). So they think this is just another fad and it will pass. These things have come and gone - we're still here. This is a prime reason why some banks fail suddenly. The top guys just didn't see it coming.

Because investment banking is a sales job.

Investment banking is all about connections. Connections lead to deals. Machines can never replace that.

Because investment banks are run by sales. Only the people at the bottom actually care about this stuff

>No amount or greater sophistication of the algorithmic structures listed above, can replace genuine human nuance, interaction and trust.

I don't think this is true. Not at all.

I think there is a technical and experiential aspect to these 3 elements. I think the technical aspect can be met, but perhaps it's equally true that the experience of trust etc with another human can not be replaced (in a literal sense, given that it specifically requires a human).

Me neither. How much of the genuine human nuance is present in today's financial services or any commission driven industry - next to nothing. Every agent/broker is motivated by the highest commission he/she can make, nothing more.

I can reasonably say that you make this comment because you know nothing about the business logic of trading desks.

A trader is not necessarily a sales or a broker, though. Nor is he a quant, or a dev. People rarely imagine how many different jobs are involved in the trading job, and how rich the business logic is. At my shop, traders are the piece that connects all of the jobs in the value chain.

I believe, currently, the business logic can be improved locally by learning systems (and it is), but there is no public example of an industrial learning application encompassing a scope comparable to what the usual trading desk handles. Sure, there are many inefficiencies; traders work on heuristics, after all. But I don't believe we have the necessary horizon to aptly predict the end of traders, because I don't see how we could make AIs with better efficiency.

I'd say I know something about the business logic of trading desks. Some of my friends worked at GS. I'm not implying traders are brokers. I do agree that the trading business logic is very rich and that AI will be nowhere close to replacing a trader in the next 15-20 years at least.

I do foresee a future in which a trader can accomplish a lot more than he/she can today using AI. So in the future as the per trader efficiency increases, the number of traders required will most probably decline, unless there is a dramatic increase in trading volume that cannot be matched by the then state of the art AI.

This is essentially what's happening today with X.ai, Facebook messenger and the like. Sure the logic involved in booking air tickets for a group of 5 over 10 conversations isn't as complicated as the rich business logic of a trade, but 5 years back Facebook messenger would've seemed almost impossible, just like the trading business logic seems impossible to do with AI today.

I have no doubt new technology will be leveraged to improve productivity, but that's been quite common in the last decades.

On the other hand, the technology advances needed to transform the tools into standalone actors are not merely a matter of scaling current technology. Especially, the creation of training datasets is a problem for which we currently have no solution, that's why we fall back on human trainers (mturk, etc). That's why a solution to a problem with no clear, bounded model and no easy dataset seems out of reach.

Even if this were true (which it isn't: in most areas of finance, blindly following this strategy is a rapid route to getting sued for breach of fiduciary duty), there's a lot of human nuance involved in actually selling the service to a client who won't take "well, the model isn't tractable, but here's last year's results and an article on ML" as an answer.

So what is a more reasonable answer that the client will accept? Ultimately, it's a pattern of words and data. Just playing the Devil's advocate here. Any ML system that can be trained on such patterns should be able to learn the nuance, per client. No?

I'm certainly not convinced any ML system can pass a high-stakes, niche-interests version of the Turing test (and since winning people's trust back after losing their money is an emotionally charged thing, probably something akin to Philip K. Dick's Voight-Kampff empathy test too). Even if they could, you haven't got a training set because (i) client conversations aren't usually on the record, (ii) none of the previous human conversations that could be used if they had been recorded were remotely related to the whys and wherefores of the investment decisions the algorithm actually took, and (iii) the investment model generated by the ML process probably isn't tractable enough for even its own human designer to convert into words what its actual strategy was and will be in the next period.

You don't need "any ML system" to be able to not only process the data, but also explain the strategy behind its data processing at various levels of granularity, answer abstract questions and do a convincing impression of listening and responding to client feedback, you need advanced general intelligence.

As for what the more reasonable answer a client would accept is, I don't think I really have the requisite years of investment banking experience to know that...

Why do programmers think their jobs are immune from AI? Hubris and history.

This post will get buried in this discussion, but I actually worked for a HFT company trying to break into all sorts of algorithmic trading (and made a small fortune doing so).

The topic question is:

1. Loaded question

2. Uninformed question, looking at a small subset of the electronic trading reality.

3. Overly broad: jobs are immune from AI, etc (what? depression? ancient aliens?)

So, in order:

1. Some traders will always get fired as they can't do their jobs (or are just plain unlucky in the wrong time).

2. The 'traders', or rather what people think of as a 'trader', is the wrong image for what I worked with. They were people configuring algorithms, linking dependencies between tickers in various market segments, etc. These traders did some rudimentary AI guiding of a sort. They used their brains, read newspapers, tried to code engines to parse said newspapers with machine learning long before it became a media catchphrase, failed (badly), and configured various triggers. They weren't 'traders' in your ordinary sense of the word, but struggled to make that work. It takes a lot more in scope than 'buy low, sell high' to make trading work in this millennium. Several magnitudes or more work, if I'm to scope it. It's a corporate effort, not a singular one. Traders do get fired, but whole corporations go under too.

3. The 'etc' part:

- AI will never advance to the level of predicting the future, as it will not know the sum of human experiences driving that future. This is especially true for the financial market, as it is linked with the rest of all human activity in an un-linkable fashion. Take this as a philosophical thought experiment, as economic fact, or just as my limited experience, but, take it as you will, AI is the 2017 buzzword to come. Take it with a grain of 100 shares' worth of salt.

- Liquidity in the market is very finite compared to the money available. When you have to iceberg orders so as not to destroy a trading strategy and move the gap for everybody, it's really obvious that the money you can make is limited. This was true in 2010, when most of the major banks decided to enter as well, and is more true today. So, connected with the previous point: AI will never be fast enough to earn on the limited liquidity relative to the available players, considering we are capping out on silicon CPU processing power right now. I'm open to arguing this point, as it was argued to death among friends in the last decade, but I don't see this moving in any direction unless processing power moves off silicon, and even then not for a while. You can't scale atoms, and you still need 80 of them (or whatever the latest figure is) to build anything, so that's the hard wall. [extra thoughts] Given all of the above, humans have an edge, and humans with experience have a sharper edge than the AI. In trading, nothing will change, and AI won't replace traders. We can argue this point; feel free to reply.
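For readers unfamiliar with the icebergs mentioned above: the idea is to split a large parent order into small visible child orders so only a fraction shows on the book at any time. A toy sketch of the slicing logic (my own illustration, not any venue's actual iceberg mechanics):

```python
def iceberg(total_qty, peak_qty):
    """Yield child-order sizes so only `peak_qty` is visible at a time."""
    remaining = total_qty
    while remaining > 0:
        child = min(peak_qty, remaining)  # last clip may be smaller
        remaining -= child
        yield child

# A 10m parent order worked in 500k visible clips:
clips = list(iceberg(10_000_000, 500_000))
print(len(clips), sum(clips))  # 20 clips totalling the full size
```

Even sliced this way, every filled clip consumes real liquidity, which is the commenter's point: the profit available is capped by the liquidity, not by how clever or fast the execution is.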


[extra thoughts] Purely my take on this, as future trends:

Javascript as popular web language will get capped off on performance soon (few years?). Thus, as demand increases one of these will happen (or all three of them):

- Javascript will get threads, locks, and the rest of the circus and Javascript programmers will have to learn a lot more on top of what they know (in a language not quite friendly for writing threaded code in the first place) and it will be super fun to be one for a while.

- A new language will emerge to replace the web languages and get more performance by simplifying the interaction with the browsers (think CICS + IBM + inline assembly there).

- The JavaScript+Browser Layers+OS+VM approach will get replaced with something more lean in the next 5-10 years because people will want more (whatever more is, nobody predicted it 100% so far).

Anyways, this post got unintentionally too long, I guess I had something to say on the subject :)

Because what they are doing is really (an indirect form of) sales.

A lot of them didn't understand technology.

Insider trading can not be automated ;-)


can AI discover insider info?

The HN title is significantly different from the question the submission actually asks, which is "why do TRADERS in investment banks . . . ", and should be updated.

My 2 cents:

The job is actually extremely complicated. I wear many hats daily, including software developer, IT project manager/architect, salesman to clients, point of contact to other institutions' trading desks, legal drafter, tax/accounting policy designer, deal negotiator, risk taker/manager, systematic/fundamental strategy researcher, etc. My job title is "Trader". There aren't any dumb big swinging dick 80s/90s-style traders left shouting buy and sell (if there are, it's purely relationships, and at small shops). Nearly everyone on my desk has a CS degree or similar quant background.

AI/deep learning is currently nowhere near being able to master one of these fields, let alone cohesively manage all of them together in a direction that actually makes money (just pay a "trader" 200k-1.5mn USD a year; it's much easier). Most grunt work is automated, and most outfits do in fact make use of statistical methods/classifiers/predictors and differentiable networks where appropriate. At the crux of it, someone needs to manage and design the systems and fill in for them with a bit of intuition and common sense where there are gaps. The world is a chaotic place and it's not getting simpler. Machines are not robust against chaos.

And this is why traders won't be replaced by AI: because most are smart individuals who are continuously increasing their worth by learning new skills and ensuring they work WITH new technology rather than opposing it. They regularly elevate their skillset, e.g. from manual trading to monitoring and performing trend analysis on a more automated trading system.

I worked in the finance industry as a business-facing software dev for 10 years and now work in a different but similar industry - my experiences couldn't be more different. My current industry's business is packed full of "traders" who don't trust the systems, won't work with them and outright refuse to admit the systems can outperform humans' PnL numbers (especially not their own), despite being faced with raw numbers saying just that. It's a continuous battle between the technology teams being told to deliver automated systems to reduce costs and increase profitability vs plenty of "traders" who seem to see the systems as some sort of threat. As you say, most of the 'dumb big swinging dick' style traders are long gone in finance - they're definitely still around in mine. My guess is that because our industry is significantly smaller, meaning fewer employment opportunities, and the fact that almost all of these "traders" are relatively poorly educated (compared to those in the finance sector), they find it tough to elevate their own skillset to do things such as statistical quantitative analysis etc.

If it wasn't for the very high difficulty of entry, my industry is ripe for statistically minded, methodical, quant people to come in and make a killing.

What industry is that?

In case the full title makes a difference it's "Why do traders in investment banks feel their jobs are immune from AI, automation, and deep learning?"

cc @dang

It looks like the question title was changed, in order to aggregate similar questions/answers. Most of the answers answer the question in the current title. I agree that trading and IB is a big distinction, and traders are 1.5 feet out the door already.

There's also a length limit to submission titles. I don't know what it is, but I suspect the original title exceeds it.

> Please limit title to 80 characters [0]

[0]: https://news.ycombinator.com/submit

It does make a difference. Investment banks employ traders sure, but "investment banking" as most people understand it is, M&A advisory, IPO underwriting, syndication and so on. These are relationship businesses first and foremost.

"Give me control of a nation's money supply, and I care not who makes its laws." --Rothschild, 1744

Because Warren Buffett uses a computer, but only for playing bridge. He uses a pocket calculator for his investment decisions.

His philosophy is heavily based on soft facts: appreciation of management and understanding of the business model. This is not based only on logic and numbers, therefore a computer does not help.
