Reuters has been trading FX electronically since the early 1990s. At the tier-one IB I worked for, the IT budget was 500m USD a year (across products), and that was in 1997! Huge resources were thrown at automation. However, to this day, large trades in FX (> 10m USD notional) are still almost exclusively performed by humans over a telephone or over the Bloomberg messaging system.
That's because, no matter how much you automate, there is still the 1% "edge case" scenario where something goes wrong, and when that happens you most definitely want a human you can "look in the eye", given that sort of execution risk. Remember that markets move really fast, and there is a lot of risk in big trades that "go wrong", because unwinding such a trade will almost certainly cost one of the sides a fortune.
Also, high finance is not just about what you know. It's inevitably about who you know, about "illogical" factors such as salesperson charisma, entertainment, and most importantly a credible personality type that understands the edge case risks. These things are very hard to replicate with a machine. You'll say these things are unfair and shouldn't matter, but they remain a fact after many attempts at removing them have failed.
As for AI, let's for now call it what it is: machine learning. Learning from the past. That's fine for recognising stop signs at different distances, angles and degrees of noise. But in finance, the past is often misleading. Sure there's trend, but there are also very big instabilities in the historical correlation matrix. Paradigms shift without you even realising it. The constant is change. AI is not good enough at that, yet.
BTW, that's not to say machines are not making inroads. It's becoming almost impossible to get a decent trading job now without knowing at least R and Python to a comfortable degree, and good quant programmers cost a fortune. There's massive demand.
The reality is that speech recognition, language translation, face recognition, object classification and detection, semantic segmentation, and speech and image synthesis have all improved by extraordinary leaps and bounds in recent years. If we used the logic that past failures justify a confident belief that future attempts at a challenge will inevitably fail, then we should have bet heavily against AlphaGo defeating one of the most accomplished human Go champions in the world. Self-driving cars seem like a sci-fi fantasy until they become a mundane reality.
There's an irrational arrogance to human beings in general, and Wall Street types in particular, regarding the specialness / non-reproducibility of their intelligence. It's not unlike the belief that people had that organic molecules were somehow special, "vital," and not synthesizable from base elements.
Certainly, there's a long way to go to replicate the capabilities of a human brain, but I don't think we should exaggerate or fetishize the human power to estimate and mitigate risk. We've seen many spectacular failures of that in recent years.
Thus far, machines have not done well in scenarios of low stationarity: scenarios where the past is not so good a predictor of the future, or where the data manifold is rapidly changing. This occurs, for example, when the act of predicting changes what is being predicted. The related notion of antipredictable sequences is the most interesting consequence of the oft-misconstrued No Free Lunch theorem.
Humans are far from the ideal machine for this scenario, but they vastly outmatch current algorithms. This is because humans are much better at the kind of generalization that is based on theory building. While not perfect, it does well enough at generalizing from observations (think Newton deriving the laws of gravitation from observations and Brahe's data) to scenarios that are not so close to the data.
This means that for now and in complex environments, simpler models that are adaptable to change online are better. Additionally, their inspectability means they are more easily modified to better match any very changed dynamics.
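To make that concrete, here's a toy sketch (all numbers invented): a model fit once on historical data versus a simple exponentially weighted average updated online, scored after a regime shift.

```python
import random

random.seed(0)

# Two regimes: mean +1.0 for the first 500 steps, mean -1.0 afterwards.
data = [random.gauss(1.0, 0.5) for _ in range(500)] + \
       [random.gauss(-1.0, 0.5) for _ in range(500)]

# "Trained once" model: the mean of the first regime, never updated.
static_pred = sum(data[:500]) / 500

# Online model: exponentially weighted moving average, updated every step.
alpha = 0.05
ewma = data[0]
static_err = online_err = 0.0
for t in range(1, len(data)):
    x = data[t]
    if t >= 500:  # score both models on the new regime only
        static_err += (x - static_pred) ** 2
        online_err += (x - ewma) ** 2
    ewma = alpha * x + (1 - alpha) * ewma  # adapt after predicting

print(online_err < static_err)  # the adaptive model wins after the shift
```

The static model keeps predicting the old regime's mean forever; the online one re-converges within a few dozen steps. That's the whole argument for simple adaptable models in one loop.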
So your examples of: `speech recognition, face recognition, object classification and detection, speech`
Are of the type where a learner such as a deep network, whose generalization ability mostly comes from interpolating to fill in missing structure based on data (implying poor out-of-sample performance), does well when you can give it lots of examples that provide good coverage.
Language translation is somewhere in between (the space is not as smooth, but is still fairly stationary at the scale of years), and so we also find that the advantages of deep learning have not been as pronounced and dramatic there as they have been for image and speech.
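A minimal sketch of that coverage point, with a polynomial standing in for any flexible interpolating learner (toy data, numbers invented): inside the densely covered region the fit is fine; far outside it, the prediction is garbage.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)                 # dense coverage of [0, 1]
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.05, 200)

coeffs = np.polyfit(x, y, deg=9)           # flexible interpolator

in_sample = np.polyval(coeffs, 0.5)        # inside the covered region
out_sample = np.polyval(coeffs, 2.0)       # far outside the data

print(abs(in_sample), abs(out_sample))     # tiny error vs. huge error
```

The true function value is 0 at both points (sin(pi) and sin(4*pi)); only the in-sample prediction lands anywhere near it.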
Digital machines have two advantages. First, their memory capacity is unmatched. Second, and related to the first, energy needs are not so pressing a concern.
A hypothetical "robot-cat-AI" could for example have access to a really precise physics engine, and also could run at a much higher "frame-rate", so it could see the world in slow-motion and execute really exquisite moves as a result.
When it comes to the sophistication, adaptivity and functions of life, the in-silico machines we produce and use are very, very primitive by comparison.
Citing Newton as a typical example of the superiority of human inferential abilities perhaps represents a case of cherry picking. How often do most human beings come close to exhibiting the level of insight and reasoning power required to construct the calculus and discover the laws of classical mechanics? Inventing supernatural explanations for natural phenomena and burning sacrifices / witches / books (on evolution or other heresies) seem more par for the course than Newtonian-level revelations.
Having said that, learning the laws of classical physics from observational data strikes me as a pretty natural task for the right kinds of machine architectures, since they represent essentially geometric symmetries which hold over a vast range of scales, space and time.
> Citing Newton as a typical example of the superiority of human inferential abilities perhaps represents a case of cherry picking
Are hyperparameter searches across a sea of AWS instances cherry picking?
Anyway, I don't think I cherry-picked. There is not a single machine that can do the same yet, nor, more importantly, a learning algorithm that notices model failure and works out the surgery required to correct or even completely replace the model.
Newton is an example of the human brain at its best. But he was far from the only one: Maxwell, Green, Einstein, Noether, Archimedes, and so the list goes on. But we don't need to go that far, since crows are already capable of generalization that outmatches what machines will be able to do for the near future.
If our mathemechanical minions ever start musing about 'The Supreme Eigenvector', that would be the time to start worrying about machines taking over.
This self-defeating problem is not present in the other AI applications you mentioned.
Bear in mind that in the 90's only a handful of supercomputers had teraflops of computing power, where now you can get an 11 TFLOPS Titan X Ultimate for $1200. Compute power continues to grow exponentially, yet it has only recently reached a level where certain kinds of approaches are truly practical. As Heinlein said, "When it's time to go railroading, people go railroading."
It's interesting that you should talk about antagonistic systems, since Actor-Critic Models, dueling architectures, Generative Adversarial Networks (GANs) are an extremely hot area of AI/ML research at the moment.
Frankly, rules based AI is basically a bust compared to learning from data approaches.
The hive was pretty amazing at destabilizing the entire global economy within a brief span following fundamental financial de-regulation.
People flatter themselves in ways which satisfies their egos and self-interest. Of course they want to believe in, and have everyone else credit, their magical superpowers. No government (backstop) wires necessary.
Machine learning is rule based. The difference is just that it creates many more rules for more and more granular cases. But it will never be intelligent, it will always be a dumb digital bureaucrat.
Beating a human when the problem offers no discoverability, and when building an exhaustive dataset is itself a problem, is not on the horizon yet.
You may have missed various dramatic improvements over the past year:
Anyway, this paper demonstrates some impressive results:
Google is getting really good.
And even if you could, acting on that knowledge would change the future.
Casinos don't try to predict the future of every spin of the wheel or roll of the dice. Every individual game is random. But over a large enough sample size (number of games played), the odds are not random.
The same principle is at play in trading the markets, except the "odds" are determined by the traders' skill, their "edge": roughly some combination of access to the right people, access to better information (where better might, but does not necessarily, mean quicker), and experience (gut feelings, hunches, what-have-you).
So the real question is: can machine learning algorithms develop an "edge", and if so, can they stay solvent long enough to do so?
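The solvency half of that question is easy to simulate (all parameters invented): even a bettor with a genuine positive edge and fixed-size bets sometimes goes bust before the edge has time to pay off.

```python
import random

random.seed(42)

def run(edge=0.02, bankroll=100, bet=1, rounds=10_000):
    """Fixed-size bets with win probability 0.5 + edge/2.
    Returns the final bankroll, or 0 if we went bust first."""
    p_win = 0.5 + edge / 2
    for _ in range(rounds):
        bankroll += bet if random.random() < p_win else -bet
        if bankroll <= 0:
            return 0
    return bankroll

results = [run() for _ in range(200)]
busts = sum(r == 0 for r in results)
avg = sum(results) / len(results)
print(f"average final bankroll: {avg:.0f}, busts: {busts}/200")
```

On average the edge wins, but a nonzero fraction of runs hit zero first; shrink the starting bankroll and the bust rate climbs fast.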
The stock market isn't a game; it isn't designed and it continuously evolves.
That's exactly what I meant by odds; they always win because the odds of the games are in their favor.
> The stock market isn't a game
It's a game in the same sense that life itself is a game.
> it isn't designed
The mechanisms that fit all the pieces together are very much designed by humans. True the behavior of the market isn't designed; it's an emergent property of the complex system we call "the markets."
> it continuously evolves
Yes, but change is also the constant. What underlies that change is supply and demand.
Of course you're free to disagree. But if you'd like to learn more about the not-100%-accurate analogy I described in abbreviated form, the source is here: http://www.goodreads.com/book/show/253516.Trading_in_the_Zon...
Edit: Good reading on "life as a game": http://www.goodreads.com/book/show/189989.Finite_and_Infinit...
Life isn't a game.
In a sense, it's all about what information you have access to.
GS is very well connected, and they are constantly out there asking people for their opinions. Ten years ago they probably had some sort of database where people could share experiences, and that is most likely what led to them guessing the financial crash.
But it's not like they had some machine that one day told them to short mortgages.
What do speech recognition, language translation, face recognition, object classification/detection have in common that is not true about predicting the future price of a security?
There was a joke from the 80s: soon the whole trading floor will be replaced by a computer, a man and a dog. The man presses the button to turn on the computer every morning. The computer operates all of the transactions and settlements automatically. And the dog is there to bite the man if he touches any other button.
30 years later, still no dog on the floor!
The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.
I believe the remaining ones are effectively TV studios for business channels, so they have a completely different purpose now.
But I don't think anyone believes those have more than a couple of years left in them.
Electronic trading dramatically changes the shape and profile of a trading floor. Introducing central clearing with margining, not so much.
You won't achieve electronic trading anytime soon on products where there are perhaps at most 200-300 buyers in the world, most of whom are only interested in buying big chunks.
That explains the LIBOR scandal.
And this is why AI/ML will never be very disruptive in law.
The low-down is: it's not about the facts, it's a social status game.
A fellow student at Caltech was developing a stock market AI in the 1970s. He was very secretive about it, and was sure it was going to make him rich. I sometimes wonder whatever happened to him.
My only point here is that just about all values themselves are irrational. The problem in trying to get a computer to recognize that the Mona Lisa (or poached eggs, or Mozart, or whatever) has value is the same one that makes it hard to teach it to figure out whether Tesla is still a good long-term investment, etc.
As others mention, though, Wall Street makes a lot of money trading the edge cases. For instance, many people thought derivatives traders would become obsolete when the Black-Scholes formula arrived. In reality, the model grew the size of the derivatives market, and traders made money knowing where the model was wrong. (Example: it assumes constant volatility.)
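For reference, a quick sketch of the model in question. Note that `sigma` enters as a single constant, the very assumption traders exploited:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes European call price; sigma is one constant
    volatility for all strikes and maturities."""
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call, 1 year out, 5% rate, 20% vol:
print(round(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 2))  # 10.45
```

In the real market, implied volatility varies by strike and maturity (the "smile"), which is exactly where the model is wrong and where traders made their money.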
Similarly, many investors use an OAS (Option Adjusted Spread) model to justify prices on one-off Mortgage Backed Securities. This also helps grow the market, as there's more transparent pricing. But traders know where the models are wrong, and make money off of them.
When technology enabled FX trade spreads to be less than a penny, people thought traders were done. But this increased the volume of trading (more hedging became cost-efficient) so while the % skimmed by traders decreased, the absolute $s increased.
Net, as long as the financial pie grows, traders can find ways to siphon money off. That amount may grow or shrink, but generally the story is more technology has helped them.
Perhaps the best analogy is a chess expert paired with a computer can beat either the computer or the expert alone.
This stopped being true a number of years ago. Computers now play chess so much better that a human will actually impede them.
Think of it this way: could a 12-year-old (the human) help a math graduate (the computer) on some problem? Or would he more likely just be a distraction?
More elaborations on this from gwern:
The stock market is more like poker than like chess, because it is the correct application of theory of mind (Can a computer bluff? Can a computer cheat?) that elevates it above a game of chance.
This also created the incentive for collusion/price fixing in the FX market as regular bid/offer spreads became too small to make any money.
There were still humans in charge of the algorithms, but they moved more towards Python programmers than market traders.
Many of the "old-style" traders bitched about what we did, and most moved jobs to banks that were less advanced.
(I was in the interest rates line; typical trade size is $10M)
Of course, they did know more than the machine did in certain situations, but not enough to make up the cost.
This is achieved by discussing with sales people, who themselves discuss with investors, as well as traders at other banks (through brokers).
Now the definition of illiquid varies. Many products that used to be illiquid are now pretty liquid (interest rate derivatives), while many others are liquid only for small trade sizes (certain bonds). FX is an interesting example. It is very liquid, but it is also a market where many clients need to make jumbo transactions on the spot, and these would be market-moving if not carefully managed. That's something that could probably be automated.
I find the article very poorly written. "Investment bankers" is extremely vague and the breadth of very different product in different markets traded by investment banks makes this sort of generalization a bit absurd.
The other reason why I think the article doesn't make sense is that investment banks cater not only to certain giant markets (equity, FX, treasuries, etc.) but also to a huge variety of niche markets, in which only a handful of banks and traders are active. If you add up the salaries of the couple of traders at each of the banks active in one particular market, the savings you would make by automating that market wouldn't justify the years of IT development required to train and fine-tune an algo, which would still need to be maintained as the market evolves (and then you end up overpaying an AI expert instead of overpaying a couple of traders...).
I think AI will shine at solving wide problems that affect a large number of people. Self driving cars. Butler robots. Building a house. Manufacturing something common. These markets have the scale to justify large investments.
I very much doubt that AI will replace every single complex task done by a man today, for the very same reason that software hasn't replaced every single manual task done by a man today: the cost of developing and maintaining software can easily exceed the salary of the few guys you are trying to replace. To overcome that with software, we need to dramatically reduce the cost of developing software, enabling ordinary employees to develop their own. But even today we are very far from that. What's the percentage of a generation coming out of college today who can actually code? How easy and useful are the main programming languages? I'd argue we are now going in the opposite direction. Microsoft is poised to take VBA out of Office. All the major OSes are evolving toward the iOS-style locked-down platform where you can only run Apple/Microsoft-approved software. Corporate IT is ever more locking down platforms with software whitelisting, etc. I wonder if the golden age of productivity improvement through software has not already peaked.
I think this is the key to running an effective organisation.
It requires that most people know how to program, or at least know enough to understand what is possible to program.
I also don't see this coming, partly because the education systems are not really up to par, but also because it's really hard to develop complex systems.
I'd argue that the languages are not really the issue here, even if the trends seem to go toward even crappier languages (like JS and PHP) that really mess up the heads of beginners.
But even with a hypothetical "perfect language", the main problem is the huge amount of time and effort it takes to learn any programming language at all, plus the even more enormous amount of time it takes to write something useful, even for a really experienced developer.
But sure. It's perhaps time for a new cycle of tearing down the "mainframes" (in someone else's basement this time) and reinvent personal computing for the 2nd or 3rd time...
Trading has been changing significantly since the 'Big Bang', when trading went from pits to electronic. From there on you see the evolution of algorithmic / program trading, an area that has been using quants for decades at this point. There are a good few big-name brands out there known for being 'algorithmic heavy'; Man, Citadel and DE Shaw come to mind (I'm a few years out of date). That whole field has been open to introducing automation / algorithms to create a business edge and will probably continue to advance because it's good for business. The profile of traders has also changed (barrow boys versus PhDs).
Then I guess on the other side is investment banking proper: M&A, equity and debt capital markets. Generally it's relationship-based there; juniors work on pitch books which, from what I saw / heard, were generally overlooked. This is potentially a lot harder to automate away. The bank then tries to pull in some rainmakers, or grow them internally, to land big deals. Usually these opportunities open up because their clients (other companies) have learnt to trust the organization, or at least to expect a certain behaviour when enlisting its services.
When I joined the industry just after the millennium, I joined a firm of guys who used to be floor traders. Basically the guys from "Trading Places" with Eddie Murphy: coloured coats, loud shouting, eating contests. As London got automated, they moved "upstairs", which basically meant holding the eating contests in a room of screens and squawk boxes. It was a fun time (but not for everyone; old school also means macho culture and sexual harassment lawsuits). One day I thought to myself: "in what other job in the world would you find your boss breakdancing?"
I checked up on that breakdancing guy the other day. He's continued along the way of old-school market makers, taking calls from brokers and manually entering them into a system. He literally said: "Nacho, I'm a dinosaur. I can't code, but I have 25 years of experience trading. And trading changed. Everything is eaten by the computers, and on top of that we have free money keeping the market from having more than one opinion."
We talked about a guy who used to work in the firm we were at. He'd gone the other way, and caught the start of the HFT boom. Now he's a billionaire. It's amazing that someone could go from the pit to trading several percent of global daily volume each day.
But basically, the old school traders are well aware of what the computers can do. They aren't stupid, the ones who can't code know they can't, and they can see the writing on the walls.
As far as I can tell, there's only one area of trading that's relatively immune to the machines. And even that isn't completely immune. It's special situations. That's where you're looking at corporate events like mergers, rights issues, and so on. It's somewhat hard to automate because there's just not that many things to make bets on. One guy can sit and read through a bunch of events and put on big bets, and there aren't that many people who have the specifics of how to make the decisions. So the benefits of automating it are not as huge as with most other types of trading.
1. Traders generally exert an advisory/supervisory role on the set of prices that the bank offers, prices which are based on a formula or other fairly automatic means, with an added human adjustment. They already extensively use technology.
2. Traders are therefore most involved where profit can be made but simple algorithms don't work. For example, pricing big deals in illiquid markets, like when a company issues a large complex bond. As this contract is by definition not traded yet and not the same as others, there is necessarily limited applicable training data, so that there is no way to learn by example - i.e., use deep learning techniques (what I assume the question is asking about). In this case, trust and relationships are extremely important as both sides of the deal have limited information.
3. Markets change dynamics, often very rapidly. Traders have to react intelligently to events: like interest rates hitting the zero lower bound, wars breaking out or industrial accidents. They need to anticipate the actual consequence to future cash flows and also to sense the appetite of the market after the event. Publicly announced AI techniques are very far away from this kind of complex general reasoning.
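A sketch of what point 1 looks like in practice (every name, parameter and coefficient here is invented for illustration): a formula produces the two-way price, and the human contribution reduces to an adjustment term on top.

```python
def quote(model_mid, volatility, inventory, trader_skew=0.0,
          base_spread_bps=2.0, vol_mult=1.5, inv_mult=0.5):
    """Two-way price: model mid +/- half spread, skewed by inventory
    and a discretionary trader adjustment (all coefficients invented)."""
    half_spread = model_mid * (base_spread_bps / 1e4) * (1 + vol_mult * volatility)
    skew = model_mid * 1e-4 * (trader_skew - inv_mult * inventory)
    return model_mid - half_spread + skew, model_mid + half_spread + skew

# Long 3 units of inventory, trader leaning bearish: both quotes shift down.
bid, ask = quote(model_mid=1.1000, volatility=0.2, inventory=3, trader_skew=-1.0)
print(round(bid, 5), round(ask, 5))
```

The machinery is automatic; the trader's edge lives entirely in choosing the skew, which is exactly the part that needs judgment about flow, events and relationships.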
The days of manual trading are long gone: of open-outcry traders yelling in bullpits and making hand signals, when banks would hire big imposing ex-football players. The question is a loaded one, a "why do you feel you can get away with beating your wife?"
There's only so much you can ask a kid 10 months out of Harvard econ to do, no matter how many pounds of cocaine and borrowed Excel sheets they have.
At the more senior levels, banking is relationship based, and making partner is a function of how much business you can bring in. This means maintaining good relationships with the clients you work with and reaching out to new ones. Show me an algorithm that can send cards on kids' birthdays, drink champagne, and play golf.
Diligence is automatable, in the long run. There is a human element to it, but it is small in most contexts.
Bankers solve trust problems. An AI would need to be trustworthy in a personal way to replace bankers--this only happens with AGI. For a long time, as long as humans control capital, there will be other humans interfacing those humans with each other.
This happens with an unbeatable public track record. We trust AIs with lots of tasks already, and not because they smile and speak nicely.
I'm not sure I agree with the first sentence, but I agree that there is a human element. If true, then at least some investment banking jobs must remain immune.
Assuming very few companies have publicly available published financials covering a large (> 50 years) timespan, would it be safer to conjecture that it is actually the _paucity_ of data which impoverishes the algorithms?
Once enough data is available, common sense and gut feeling can be programmed as a combination of fixed rules and machine learning.
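A sketch of what such a combination could look like (the feature names, thresholds and labels are all invented): hard "common sense" rules act as a veto layer, and a learned score decides whatever gets past them.

```python
def hybrid_decision(features, ml_score):
    """Fixed rules first, statistical model second."""
    # Hard "common sense" constraints that always apply:
    if features["notional"] > 10_000_000:       # too big: hand off to a human
        return "escalate"
    if features["counterparty_rating"] < 3:     # unacceptable credit risk
        return "reject"
    # Otherwise defer to the learned model's score.
    return "accept" if ml_score > 0.5 else "reject"

print(hybrid_decision({"notional": 1_000_000, "counterparty_rating": 5}, 0.7))  # accept
```

The rules encode things no training set should be trusted to learn; the model handles the gradations in between.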
If you'll allow me to simplify the IB Trader's job: there are 2 types of traders: agency and principal traders. Agency traders build relationships with clients, accept orders from them, execute them in the market. They make money on commissions. Principal traders will take risk. Sometimes on behalf of a client - on the back of a client order - or sometimes purely for the bank's own account.
The agency traders have seen their roles decimated by technology, because much of their role (minus the relationship-building part) was automated. On the risk-taking side, AI has crept into some places, albeit in very narrow use cases. For example, we've seen the rise of the robo-advisor, where an "algorithm" comes in and automatically adjusts your portfolio to reduce risk and increase alpha. Well, the risk-reduction part is well known (Markowitz portfolio theory). But the increasing-alpha part is the difficult thing. And AI seems to be quite far off in its ability to be a stock picker, simply because the passive approach is superior (i.e., no intelligence needed at all).
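For context, the "well-known" risk-reduction part really is closed-form. A minimal sketch of the Markowitz global minimum-variance portfolio (covariance numbers invented):

```python
import numpy as np

# Toy covariance matrix for three assets.
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

# Global minimum-variance weights: w = (Sigma^-1 1) / (1' Sigma^-1 1)
inv = np.linalg.inv(cov)
ones = np.ones(len(cov))
w = inv @ ones / (ones @ inv @ ones)

print(w.round(3))           # weights sum to 1
print(float(w @ cov @ w))   # portfolio variance: below any single asset's
```

That's the easy half; estimating the expected returns needed for the alpha half is where things fall apart.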
They aren't? Because I see this happening all the time. Automated checkout lines, E-Trade and online brokerages, etc. Low-value transactions are handled more and more by electronic "salespeople". All it takes to reach high-value transactions as well is for the clientele to decide they trust machines more than humans, which is not all that unlikely given the many sources of human error involved.
It's possible that humans will be involved in high-value transactions like investment for a long time, since the marginal cost of human labor is low as the numbers get bigger, but it's not impossible to replace; we already have the technology.
The examples you give are indeed places where buying or selling happens, but where does the incentive to do these transactions come from? It's still from humans. Sure, we are bombarded left and right with ads that were carefully targeted for us by complex algorithms, but I don't think I've ever bought anything from Amazon without reading a review about the product from another user, or bought stock without going through several opinions written by more-or-less respected analysts.
There is a line (a few hundred k) where it's enough to play, but not enough to pay for the cost of a human to help you (salesperson / financial advisor / trader type).
Huh? For one, investment bankers are "just salespeople" in the same way doctors are just nurses.
Second, everybody and their mother is suggesting AI will replace retail-level salespeople in the future. In fact it's already happening in lots of places, from British supermarkets (where you can scan and pay for your purchases yourself) to Amazon's Go experiment, and numerous other uses of automation in retail.
Many more lower-end stores did too, pre- or earlier in Amazon's and online shopping's proliferation. It's already happened to a not insignificant extent.
Bookstores are probably a famous example: the people working there were far more than cashiers, but they still got decimated by Amazon and co.
You mean I can get an unbiased look at a house in peace, compare the numbers, look at the plans, measure the humidity and do my due diligence without a sales person breathing down my neck?
Hell yea. I'd pay premium for that.
> You mean I can get an unbiased look at a house in peace, compare the numbers, look at the plans, measure the humidity and do my due diligence without a sales person breathing down my neck?
No, I don't think anyone means that. They mean purchase the house.
You can't go into a company and get them to show you their sales pipeline and their staff's performance evaluations before you (potentially) invest. If you want to do these things, you need to make a series of smaller deals first.
Human beings can be fluid, can enter into non-disclosure agreements and make these kinds of bespoke deals, while a web-based transaction form with a buy-it-now button cannot.
People are more and more selling without a broker as well, or at least they start to use cheaper brokers where you only buy the service you need.
It's a scamola with a similar racket being Ticketmaster's "convenience fees". Most apartments in NYC will require going through their broker who collects a fee (usually a percentage of the first years rent). A chunk of that fee ends up being a kickback to the owner of the apartment. Ticketmaster does the same thing with venues (tack on $15 fee, give the venue $7.50).
Would you hire a person for your team just by looking at degree transcripts, LOC written, and the resume? Maybe you would, but I'm not too inclined to do that.
The HN bias is a bit like this: the salesman is the inefficient part, disincentivized to help you and trying to rip you off.
The assumption you have is they are simply a market matcher but they're offering more than that.
Their incentive is to get the highest possible sale price in the shortest amount of time, plus a good reputation. That high price also has to be clearable, meaning it cannot be just up in the air and seemingly arbitrary (that would cost either time or reputation).
Without them there is much more information asymmetry. Using an online version, you basically have access to the same information you would have with them, but without the human element to judge things through, nor their experience on offer.
The incentive for the agent is actually highest cash in down payment because they don't want to deal with the bank to get their money plus they get to keep the down payment if the buyer pulls out.
> plus they get to keep the down payment if the buyer pulls out.
What? Where does this happen?
In many places, the law forces you to go through an accredited agent to make a deal.
I don't know anyone who wants to pay that fee.
Automation has happened incrementally in the industry for years, like many others - starting with the easiest stuff (low hanging fruit) like some operations tasks and mechanical trading tasks, and leaving the more complex tasks for humans - or letting a human scale to do more.
The more complex tasks that are left typically require non-trivial intelligence, e.g. understanding why the new product brought to market by your competitor or counterparty is slightly different to what you are trading today, and deciding if you can/should transact in it. Understanding what impact the upcoming compliance rule changes have on your market and activities (there are always regulatory rule changes). Understanding what the limits of your trading are, to avoid concentrating too much exposure in one area. Understanding why your counterparty is upset about some aspect of the transaction. etc.
>When it comes to AI/Machine Learning: the nature of ____ and markets are drastically different to other fields that AI/ML have previously excelled in. The main reason being what are the laws and rules that govern how an AI/ML should view a field?
>Since the very nature of a market is… a constant change respecting an infinite and broad amount of variables (____), a complex system (repeating exact past actions, will not give you the same results) and a social interaction/conflict regarding an ambiguous ____, it becomes incredibly difficult to determine those laws and rules.
>In physics, mathematics, computer programming, you will always have a safe level of predictability when it comes to certain functions and how they interact to give you a reliable solution. With ____, instead of this safe level, there are just irrational beliefs; something you cannot model with accurate certainty (just look at how AI/ML would handle predictions for Brexit and Trump, and then, how it would cope with the aftermath).
They assessed the risk fine.
Just like driving a car.
Modeling and forecasting a security's price mathematically, or a set of security prices, is complex in a different sort of way. A better analogy would be like trying to program a car to deal with a meteorite that suddenly hits the road in a random spot, or a bomb that goes off 100 feet in front of the car as it's cruising on the road. There are far fewer rules with which to start a foundation, and the entire macroeconomic structure is far more delicate and in constant flux from news, other traders competing against you, information closed to the market, etc. With cars, you at least get a road, and add randomness in with other drivers. With security prices, you might have a lower bound of potential losses, if you're buying long. It's just a different animal.
Of course, there is anecdotal evidence of the opposite, but you would expect such cases even if the performance of a trader were a random process.
The evidence shows that it is very difficult for traders to consistently beat the market. The evidence does not show that their performance, as a profession, is random.
> The evidence does not show that their performance, as a profession, is random.
I think you are right, if I recall correctly, the evidence points to a conclusion that the performance of their profession is worse than random when you take fees into account...
1. As stated, it's not falsifiable. So you start with a conception of the market as entirely random, and you observe that participants are consistently beating this market. Each time you observe someone beating the market, you chalk it up to the probability distribution. "Well, that's just a two-sigma event." Then you see it happen again. "Well, that's just a three-sigma event." Then again, and again, and again. How many sigmas from the average market performance are you willing to accept before you agree that someone is legitimately and purposely beating the market with a skill-based mechanism, not a chance-based mechanism?
Furthermore, do you have the numbers to turn this into a falsifiable claim? What is your time interval? Daily, weekly, monthly or annually? How many correct forecasts do they have to make ("how many sigmas from the average"), compared to the chance expectation of coin flipping over the same timescale? If you don't have these numbers handy, then it's purely a thought experiment. Meanwhile, the observation that funds like Berkshire Hathaway, Bridgewater, Renaissance Technologies, Baupost Group, Citadel, DE Shaw, etc. have consistently beaten the market by 20% or more net of fees over 20-30 years suggests that, per Occam's Razor, people can beat the market through skill.
2. The analogy is not comparable to active trading. You don't need to hit 20 heads in a row to beat the market consistently, you just need to hit a p-value number of x heads correct for y coin flips greater than chance would suggest. We don't assume that basketball is a game of chance if the players can't make all their shots in a row; nor do we assume that baseball players with a 0.3 batting average aren't clearly better than the average high school dugout. If your trading interval is weekly or monthly, and you're consistently up over the market (even net of fees!) for 240 months or 360 months, it doesn't matter if every single month was a winner.
3. Have you ever read Warren Buffett's response to the EMH, as postulated by Fama? He outlined an excellent rebuttal in his 1984 essay The Superinvestors of Graham-and-Doddsville. Essentially, if you assume that the coin-flipping analogy does map to trading, then you should expect the winners to be scattered randomly across the population, given that the market is inherently random and no one is achieving superior coin flips through skill. However, if you observe that the winning coin-flippers consistently hail from a small village with a shared coin-flipping training, then it is more reasonable to assume that there is something unique about those particular flippers. This is what we see in reality - yes, most amateur traders fail miserably, and yes, most hedge funds underperform the market over time. But there is a relatively small concentration of extremely successful funds and traders in an uneven distribution.
4. Even Fama has walked back on Efficient Market Hypothesis, and no longer espouses the view that the market is inherently random. It is deeply complex, yes, but it is not efficient, nor entirely random. Several studies have been conducted to empirically examine EMH, and the results in favor of the hypothesis are dubious. A much more charitable retelling of EMH is the weak position, which essentially states that any obvious alpha will be quickly arb'd out of real utility, but that non-obvious alpha, or alpha which is technically public but not easily accessible will retain utility until it becomes obvious. This also maps more cleanly to reality, in which trading on e.g. news reports is mostly unprofitable (everyone can get a news report at around the same time, for the same level of skill) whereas mathematically modeling pricing relationships can be extremely profitable (doing so accurately requires public, but mostly unclean data and a great deal of skill).
1. The Superinvestors of Graham and Doddsville - http://www8.gsb.columbia.edu/rtfiles/cbs/hermes/Buffett1984....
2. Investment Performance of Common Stocks in Relation to Their Price-Earning Ratios: A Test of the Efficient Market Hypothesis - http://onlinelibrary.wiley.com/doi/10.1111/j.1540-6261.1977....
3. The Cross-Section of Expected Stock Returns - http://onlinelibrary.wiley.com/doi/10.1111/j.1540-6261.1992....
4. International Stock Market Efficiency and Integration: A Study of 18 Nations - http://onlinelibrary.wiley.com/doi/10.1111/1468-5957.00134/a...
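The coin-flip arithmetic in points 1 and 2 above can be made concrete. Here is a toy sketch (plain Python; the function name and the 150-of-240 figures are my own illustration, not data about any real fund) that computes the one-sided binomial tail: the probability of at least k winning months out of n if every month really were a fair coin flip.

```python
from math import comb

def p_value_at_least(k, n, p=0.5):
    """Probability of k or more successes in n trials if each
    succeeds independently with probability p (one-sided tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A trader who is up against the market in 150 of 240 months:
# under pure chance this is a well-under-1-in-1000 event.
p = p_value_at_least(150, 240)
```

This is the "how many sigmas" question in quantitative form: pick the interval, count the wins, and see how far into the tail the record sits before chance stops being a plausible explanation.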
Now, I fully agree that there have been certain anomalies (the value premium anomaly in Buffett's case) that pretty much align with the strategies of these long-term successful investors. The question is, was it skill to pick the right anomaly as the basis for their strategy, or luck? Again, in the world of investment strategies, there is surely someone trying almost anything. And if that anomaly disappears, do they have the skill to change their strategy?
But we are a bit off topic here. The original question was "Why do traders in investment banks..." - that is a different species from the Warren Buffetts.
Asset markets can be right the way a stopped clock is right twice a day.
Yes, Keynes said "The market can stay irrational longer than you can stay solvent."
I have actually bet my own money that the value premium anomaly is not disappearing, but it is not something investment bank traders can enjoy, as it plays out over far too long a horizon for them.
Answering my own question: I'd expect bankers to save more (of their own money) in anticipation of the good times not lasting as long. More conservative types will weather the storm and spendthrifts will get wiped out.
In other words: business as usual, up until the very moment it isn't.
Believing "I am special" is just built into us.
Although per passenger mile, it's true that air is very safe overall.
The overall statistics are not useful for evaluating your next trip. You must use the ones from your cohort.
1 - Mostly, the countries with broken governments. If human rights aren't respected, that's a great predictor that flights aren't safe.
If some belief applies to everybody, but to a different degree (per individual, per job sector, etc.), then the mere fact that it applies to everybody doesn't explain that variability.
And if what you say is that it applies to everybody with no variability, then that is obviously wrong. People in specific job sectors are way more worried about automation than people in other sectors, even if their jobs are not yet starting to get automated. E.g. office workers vs surgeons...
Of course, I wouldn't be giving the scripts to my employer, just spending more time doing other things...
Are you sure you haven't perhaps fallen prey to too much job security? Such a large sense of job security that you don't care about automating X part of it away. You, like me, like other "rockstar" software developers, (perhaps) believe it doesn't matter if you automate X, Y, Z, because you're so good, that when you're out of responsibilities you will be offered the followup/orthogonal tasks 1, 2, 3.
Anyways, I'm just trying to highlight that you too, perhaps subconsciously, think "I am special, it won't happen to me", because at least I, who also don't care about automation, feel this way.
I know for me, this is the truth. This overconfidence is why I don't care about automating my tasks away. Honestly, even giving the scripts to my employer makes little difference to me.
I'm not part of the cult of the "rockstar programmer". I do write a lot of good code, and I tend to do it much faster than other programmers that I work with. But programming is not actually my job description.
Without going into details - I do something else for living, that uses computers quite heavily, and automating my job is more of a hobby. I have a huge amount of flexibility in what I do, so there is an element of substituting 1,2,3 for X,Y,Z because I find them more interesting. My boss wouldn't care one way or another - he is more interested in whether or not the tasks get finished, and maintaining a high quality level in the final product.
Yeah, but what are the odds that their explanation is just a post-hoc rationalization (ie. "heads I get credit, tails I avoid blame")?
1. Proprietary trading (think hedge funds or the old bank prop desks, ranges from the bass brothers to renaissance tech in terms of style)
2. Execution or sales traders (often work for or are the counterparts to #1, their goal is get the best price for their clients in the quantities and time frames desired)
3. Market making (essentially providing liquidity to both buyers and sellers)
4. Sales - essentially layered on top of 2 & 3 to interact with 1
With that said, #3 has pretty much been automated in some markets and is the easiest to automate, #2 has had a lot of automation for vanilla stuff, but people are still involved in big, complicated things. #1 may be automated but is largely strategy dependent, #4 is pretty hard to automate...
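For a sense of why #3 automated first: the core of market making is mechanical two-sided quoting, which fits in a few lines. A deliberately naive sketch - the function name and the inventory-skew parameter are illustrative, not any real desk's logic:

```python
def quotes(mid, half_spread, inventory, skew_per_unit=0.01):
    """Quote a bid and ask around a reference mid price.
    Long inventory shifts both quotes down, so the market is more
    likely to lift our offer (flattening us) than hit our bid."""
    skew = inventory * skew_per_unit
    return mid - half_spread - skew, mid + half_spread - skew

bid, ask = quotes(100.00, 0.05, inventory=0)  # symmetric quotes around 100
```

Everything hard about real market making (adverse selection, queue position, hedging costs) lives outside this sketch, which is part of why only the liquid, vanilla cases automated so cleanly.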
Truth is, most trader jobs have been gone in the last 15 years. First it was the automation of order collection (phone orders being replaced by automatic order routing processes), then automation of the execution process (from human order management to algorithmic trading), then automation of the investment management process (from discretionary processes to quantitative management), and the last frontier will be the automation of the relationship with the client.
Problems are different for different trader types.
Execution traders execute orders on behalf of clients. Today a lot of clients execute directly using web terminals and APIs. And if you still execute on behalf of others you spend a lot of time fighting against the machines (one solution is to trade out of market, but this too is being automated). Very very hard to come up with new execution algos.
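The "vanilla" end of execution that did get automated is largely order slicing. A minimal TWAP-style splitter, purely as illustration (the function name is my own):

```python
def twap_slices(total_qty, n_slices):
    """Split a parent order into n roughly equal child orders,
    spreading any remainder over the earliest slices."""
    base, rem = divmod(total_qty, n_slices)
    return [base + (1 if i < rem else 0) for i in range(n_slices)]

twap_slices(1000, 7)  # [143, 143, 143, 143, 143, 143, 142]
```

Real execution algos layer venue selection, timing randomisation and impact models on top of this skeleton; the difficulty of coming up with new ones is precisely everything beyond it.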
For delta hedging traders the role has been reduced to monitoring, unless you have allowance to take risk (which most banks don't since the 2008 crisis).
For alpha traders, whose job is to extract money from market, the risk is not in being replaced by the machines, but one of overcrowding, i.e. everyone following the same strategies is reducing returns and increasing risk (skew) across the board.
Sales traders, whose activity is to sell first and trade last, still need to manage relationships with customers. But this has been changing gradually, with clients preferring digital interfaces, passive investment programs, and now the rise of automated investment management programs.
I started working in the industry circa 1999. The head of trading I worked with once told me that machines would never be able to invest like a human. He's still there, making a lot of money, but certain that it will end someday. His strategy: "let's see if I can make it one more year"...
So any advance in AI/ML/NNs can be seen as advances in the field of automating computer programmers. We don't tend to think of it like that because these techniques have to be applied, currently, by computer programmers, and the software the techniques "write" was generally infeasible to write by hand anyway: it's been so far purely additive and solving problems that hand-written software either couldn't do at all or where it sucked and progress was very slow.
But watch out. The tech is getting better scarily fast. Neural networks have been able to learn how to do basic algorithms like sorting given only examples. The primary limit is still the amount of data they need to work with, but R&D is driving that down too.
I can see a time when many tasks that today automatically require skilled programmers are instead done by domain experts who simply shovel data into an advanced neural network and discover it gets results that are good enough. Perhaps hand-written software made by a skilled programmer would be better, but the machine is a lot cheaper ...
Now it's 60+ years later, and there are still human programmers.
well, until the day an AI will be good enough to write and improve itself, then we'll all be doomed equally.
Every time I see a game show where you can 'bank' your current winnings, ... I imagine the future of trading will include some strategy that has an AI yelling those sorts of things to other AI agents acting in concert. The strategies behind macro stock market outcomes, applied to micro stock actions. https://www.t0.com is also going to be a fun reality.
Also, not all trading is liquid, nor is it short term, both of which are targets for automated algorithmic trading. Long terms investments are typically human decisions.
You have to be bright, articulate, interested.
They've had to work insanely hard to get into and finish Harvard, only to be saddled with hundreds of thousands in debt. Then they had to work even harder to get into hard-to-get-into companies and acquire the experience required to get into even harder-to-get-into ones (i.e. investment banking). After a decade, they're finally there, and all they get is long hours, too much responsibility and no free food.
A lot of which can't be found in the usual web startup.
Why do people need to have anxiety and fear about AI is the better question. Let's solve those instead of this blind ambition to automate everything and then somehow assume our benevolent government will give everyone UBI or some other pipe dream.
In the field of investment banking trading (which is mostly market making) the amount of automation varies by asset class: very automated for some asset classes like fx, equities, much less for more illiquid asset classes like credit, commodities, bespoke products.
As well, senior traders operate as the 'business', making decisions beyond the pricing of products. They make the business decisions, often judging legal, compliance and accounting risks (and not always correctly).
Do you accept trading with a Dutch counterparty who wants to trade against your German legal entity, knowing that you can only hedge the position in London? What is the risk between the two legal setups? What premium should you charge for those risks?
Do you trade the very large size that the counterparty wants, knowing it takes you over your balance sheet position limit - can you get approval for this from your senior management? Can you offset the position in the market without it moving against you? What premium do you charge for this?
If your job can be done overseas, it probably will be outsourced.
If your job is algorithmic, you'll be replaced by a computer.
A lot of M&A activity though doesn't fit that category.
Here's what I have seen. For more "liquid" markets - like delta-one blue chip equities, IR swaps, spot FX - jobs ARE moving quickly. People are not complacent. Competition comes from firms that are much faster in execution, having automated most of the process - and presumably having lower hedging costs by being able to move quickly. Or just basing their decisions "better" on history.
Having said that, you have to remember that:
a) Size of balance sheet matters. A large bunch of clients of such firms trade with them due to "synergies". Large shops are one-stop shops. They don't just sell you a swap. It's usually a loan/funding + an option to hedge it + some tax advice thrown in. It's hard to break that down into components and trade each individually, as most other players may not even be willing to face such counterparts. And shopping for each component separately might mean larger time and friction costs. Here being a large bank helps massively. Basically - capital is hard to automate, decision making is easier.
b) There are a large number of markets that cannot be automated anytime soon. These are "illiquid" or "exotic" markets. The data on them is scarce, and trades happen far less frequently than in, say, swaps or delta-one equity. Throw in some options, and your model might say here's a price, but there's no guarantee on it. Then you have to balance it against your book size, and the price is not discoverable, or even the same across firms (for good reason - my cost of cash liquidity might be different from yours). Sometimes counterparts will trade with you even if you have a worse price, because there's a lower risk of you collapsing in 10 years on a 20-year cross-currency trade or inflation swap.
c) Good traders are very defensive and paranoid. They don't work under the paradigm that history will repeat itself. They will try to look for risks that might seem to have nothing to do with their market. I know some traders who have been making good money every year in these exotic markets, regardless of crisis or not. Some of the sharpest people I know - and they love their job. Not just the money aspect. They just love the whole high pressure environment and everything it entails. Now they are being drowned in regulation. Don't get me wrong - regulation is good, but right now we have a mess. I won't try to convince you - and unless you work in a bank, you will not agree with this.
d) Management is old. Might seem obvious, but most of the people at the top have made their money the old-style way. After a certain amount of time, you tend to trust your own instincts more than other things, especially if you've been successful (which may have just been a random process to start with). So they think this is just another fad and it will pass. These things have come and gone - we're still here. This is a prime reason why some banks fail suddenly. The top guys just didn't see it coming.
I don't think this is true. Not at all.
A trader is not necessarily a sales or a broker, though. Nor is he a quant, or a dev. People rarely imagine how many different jobs are involved in the trading job, and how rich the business logic is. At my shop, traders are the piece that connects all of the jobs in the value chain.
I believe the business logic can currently be improved locally by learning systems (and it is), but there is no public example of an industrial learning application encompassing a scope comparable to what the usual trading desk handles. Sure, there are many inefficiencies; traders work on heuristics, after all. But I don't believe we have the necessary horizon to aptly predict the end of traders, because I don't see how we could make AIs with better efficiency.
I do foresee a future in which a trader can accomplish a lot more than he/she can today using AI. So in the future as the per trader efficiency increases, the number of traders required will most probably decline, unless there is a dramatic increase in trading volume that cannot be matched by the then state of the art AI.
This is essentially what's happening today with X.ai, Facebook messenger and the like. Sure the logic involved in booking air tickets for a group of 5 over 10 conversations isn't as complicated as the rich business logic of a trade, but 5 years back Facebook messenger would've seemed almost impossible, just like the trading business logic seems impossible to do with AI today.
On the other hand, the technology advances needed to transform the tools into standalone actors are not merely a matter of scaling current technology. Especially, the creation of training datasets is a problem for which we currently have no solution, that's why we fall back on human trainers (mturk, etc). That's why a solution to a problem with no clear, bounded model and no easy dataset seems out of reach.
For a system to not only process the data but also explain the strategy behind its data processing at various levels of granularity, answer abstract questions, and do a convincing impression of listening and responding to client feedback, you don't need just "any ML system", you need advanced general intelligence.
As for what the more reasonable answer a client would accept is, I don't think I really have the requisite years of investment banking experience to know that...
The topic question is:
1. Loaded question
2. An uninformed question: it looks at a small subset of the electronic trading reality.
3. Overly broad: "are jobs immune from AI", etc. (what? depression? ancient aliens?)
So, in order:
1. Some traders will always get fired because they can't do their jobs (or are just plain unlucky at the wrong time).
2. The 'traders', or rather what people picture as a 'trader', is the wrong image for what I worked with. They were people configuring algorithms, linking dependencies between tickers in various market segments, etc. These traders did some rudimentary AI guiding of a sort. They used their brains, read newspapers, tried to code engines to parse said newspapers with machine learning long before it became a media catchphrase, failed (badly), and configured various triggers. They weren't 'traders' in the ordinary sense of the word, but struggled to make that work. It takes a lot more in scope than 'buy low, sell high' to make trading work in this millennium. Several orders of magnitude more work, if I had to scope it. It's a corporate effort, not a singular one. Traders do get fired, but whole corporations go under too.
3. The 'etc' part:
- AI will never advance to the level of predicting the future, because it will not know the sum of human experiences driving that future. This is especially true for the financial market, as it is linked with the rest of all human activity in an un-linkable fashion. You can take this as a philosophy thought experiment, or as an economic fact, or from my limited experience; take it as you will, AI is the 2017 buzzword to come. Take it with a grain of 100 shares worth of salt.
- There is very finite liquidity in the market, compared to the money available. When you have to iceberg orders so as not to destroy a trading strategy and move the gap for everybody, it's really obvious that the money you can make is limited. This was true in 2010, when most of the major banks decided to also enter, and is more true today. So, connected with the previous point: AI will never be fast enough to earn on the limited liquidity relative to the available players, considering we are capping off in silicon CPU processing power right now. I'm open to arguing this point, as it was argued to death among friends in the last decade, but I don't see it moving in any direction unless processing power moves off silicon, and even then, not for a while. You can't scale atoms, and you still need 80 of them (or whatever the latest figure is) to build anything, so that's the hard wall. [extra thoughts]
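To make the icebergging mechanic concrete, here is its simplest form as a sketch (real venues implement this as a native order type with a displayed-quantity parameter; the function name is my own):

```python
def iceberg_clips(total_qty, display_qty):
    """Break a large order into successive visible 'clips' of at
    most display_qty, so the full size never sits in the book at once."""
    clips = []
    remaining = total_qty
    while remaining > 0:
        clip = min(display_qty, remaining)
        clips.append(clip)
        remaining -= clip
    return clips

iceberg_clips(1000, 300)  # [300, 300, 300, 100]
```

The cost is time: each refill rejoins the queue, which is exactly the liquidity constraint on how fast anyone, human or AI, can extract money from a thin book.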
Given all of the above, humans have an edge, and humans with experience have a sharper edge than the AI. In trading, nothing will change, and AI won't replace traders. We can argue this point; feel free to reply.
[extra thoughts] Purely my take on this, as future trends:
- A new language will emerge to replace the web languages and get more performance by simplifying the interaction with the browsers (think CICS + IBM + inline assembly there).
Anyways, this post got unintentionally too long, I guess I had something to say on the subject :)
The job is actually extremely complicated. I wear many hats daily including working as a software developer, IT project manager/architect, salesman to clients, point of contact to other institutions trading desks, legal drafter, tax/accounting policy design, deal negotiator, risk taker/manager, systematic/fundamental strategy research etc. My job title is "Trader". There aren't any dumb big swinging dick 80/90's style traders left shouting buy and sell (if they are it's purely relationships and at small shops). Nearly everyone on my desk has a CS degree or similar quant background.
AI/deep learning is currently nowhere near being able to master one of these fields, let alone cohesively manage all of them together in a direction that actually makes money (just pay a "trader" 200k-1.5mn USD a year; it's much easier). Most grunt work is automated, and most outfits do in fact make use of statistical methods/classifiers/predictors and differentiable networks where appropriate. At the crux of it, someone needs to manage and design the systems, and fill in for them with a bit of intuition and common sense where there are gaps. The world is a chaotic place and it's not getting simpler. Machines are not robust against chaos.
I worked in the finance industry as a business-facing software dev for 10 years and now work in a different but similar industry - my experiences couldn't be more different. My current industry is packed full of "traders" who don't trust the systems, won't work with them, and outright refuse to admit the systems can outperform humans' PnL numbers (especially not their own) despite being faced with raw numbers saying just that. It's a continuous battle between the technology teams, who are told to deliver automated systems to reduce costs and increase profitability, and plenty of "traders" who seem to see the systems as some sort of threat. As you say, most of the 'dumb big swinging dick' style traders are long gone in finance - they're definitely still around in mine. My guess is that because our industry is significantly smaller, meaning fewer employment opportunities, and because almost all of these "traders" are relatively poorly educated (compared to those in the finance sector), they find it tough to elevate their own skillset to do things such as statistical quantitative analysis.
If it wasn't for the very high difficulty of entry, my industry is ripe for statistically minded, methodical, quant people to come in and make a killing.
His philosophy is heavily based on soft factors: appreciation of management and understanding of the business model. That is not purely a matter of logic and numbers, so a computer does not help.