Nate Silver wrote an entire book on this subject called "The Signal and the Noise" [1]. Humans are so often taken in by people claiming to be able to make predictions by combining novel data points. The more unusual, or unrelated to the subject matter, the better. They make good headlines but (not surprisingly) almost always turn out to be heavily flawed in practice.
You can basically measure how much a pundit/expert is going to be wrong in their predictions by how ideological they are in their analysis. The best indicator is when they use only one or two metrics as a basis of a prediction of an otherwise very complex scenario.
One example from the book is how a researcher became famous before the 2000 US presidential election by claiming to predict races with 90% accuracy [2]. He claimed that by measuring a) per-capita disposable income combined with b) the number of military casualties, you can determine whether Democrats or Republicans get elected. He said historical data backed up his theory. He then proceeded to fail to predict that year's election and faded into obscurity.
Nate did his own historical analysis and demonstrated it was only 60% accurate instead of 90%. Plus, that was only if you ignored third-party candidates, since the model assumes a two-party system.
Plenty of other examples are provided in the book, which makes me highly suspicious of the value of the predictions made in this article.
The general idea is that we need to stop looking for simple one-off solutions to complex problems. Instead we should adopt multi-factor approaches which suffer from fewer biases and are better grounded in reality. Otherwise these predictions are just another form of anti-intellectualism.
The law of averages will tell you that the more metrics (however random) you throw into your prediction engine, the closer your prediction will be to the actual result. But it's not very remarkable and it will never put you ahead consistently.
> CEOs whose faces during a media interview showed disgust [...] were associated with a 9.3% boost in overall profits in the following quarter.
I'm surprised I haven't seen anyone say "Regression to the mean" yet.
Suppose the CEO gets obviously-scowly whenever their last quarter was abnormally bad... Well, the next quarter will naturally tend to be better, purely because it's a return to a "normal" state of affairs.
In other words, perhaps they've simply found a way to detect the PAST performance by looking at the CEO's face, which is... rather less-useful.
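A toy simulation of the point, with entirely made-up numbers (a stable mean profit of 100, a "scowl" whenever a quarter comes in below 85): even when quarters are pure noise with zero predictability, the quarter after a scowl looks like a big improvement.

    import random

    random.seed(0)

    # Toy model: quarterly profit is i.i.d. noise around a stable mean,
    # so nothing about one quarter actually predicts the next.
    profits = [random.gauss(100, 15) for _ in range(100_000)]

    changes_after_scowl = []  # quarter-over-quarter change after a bad quarter
    changes_overall = []

    for prev, nxt in zip(profits, profits[1:]):
        change = nxt - prev
        changes_overall.append(change)
        if prev < 85:  # CEO "shows disgust" after an abnormally bad quarter
            changes_after_scowl.append(change)

    print(sum(changes_after_scowl) / len(changes_after_scowl))  # ~ +23
    print(sum(changes_overall) / len(changes_overall))          # ~ 0

The "boost" after a scowl is a pure selection effect: conditioning on a bad quarter guarantees the next one looks better on average.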
I have never heard of regression towards the mean in profits. It certainly doesn't exist for stock prices. You would actually tend to observe trends - the opposite phenomenon. For example, Google would have a 20% increase in profits in one quarter and another similar increase the following quarter, not a sudden loss caused by regression to the mean.
In this case, "regression to the mean" is probably the wrong phrase.
Generally, though, when a CEO is looking stern and fearful and declaring writedowns and layoffs and erasing the 'goodwill' off of their books one quarter along with huge losses and financial penalties, etc then the next quarter usually isn't quite as big of a shitshow...
So the point here would be to buy a stock that has been overly punished and become unfashionable, while the overall business is still sound and will eventually rebound and the stock price should perk up.
If true, though, reading the negative emotions of the CEO would be correlated with past performance and it wouldn't be useful to determine if the company really was sound or if the company was actually heading to zero.
'Regression to the mean' would be buying a stock that is currently cheap because of mistakes last quarter yet still has good fundamentals. So we'd expect it to recover based on those fundamentals. It's what a lot of people do already.
Nice correlation study, but I'm really skeptical of any causation inference that might be possible. I mean, if the software gets to the point where it can identify the next Jeff Skilling then great, but I doubt such surface-level data has a lot of predictive potential.
I do find it kind of funny that the article cites the study mentioning 'negative' type emotional states aligned with ~9% profit boost, when one of the most interesting 'tells' in the Enron case was when Jeff Skilling got really bitchy at an analyst who was probing him hard on some difficult questions. The disgust was holding up a facade in that instance, and I don't doubt dishonesty might be a factor in the emotional state of others.
>“Fear is widely recognized as a powerful motivator. Thus it is not surprising to find that a CEO who appears fearful under interrogation is perceived by the market as a CEO who will work harder to increase firm value,” said the paper, which was co-authored by Steve Ferris of the University of Central Missouri and Ali Akansu and Yanjia Sun of New Jersey Institute of Technology.
This is a quite optimistic view of what one's behaviors might result in when driven by fear. I'm fairly confident fear of failure drives a lot of fraud. It sure seems a familiar story...
Whether or not a causal relationship between this data and future performance can be demonstrated seems to have no bearing on whether the relationship has predictive power.
Create a hedge fund and run your proprietary algorithm. If you succeed over the long term and generate a consistent market premium for a given risk exposure, you've got a story. If you don't do this, you've got an unproven claim like many others in history, the vast majority of which were proven false when put into practice.
The problem with pure research in this field is that your decisions will influence the market. How much you invest will influence your returns and how successful you are will influence the behaviour of others in the future.
Which is actually why you shouldn't invest based on this algorithm if you care about the research. I think, ideally, you should:
1) Generate a few hypothesis algorithms, including one that invests at random.
2) Publish a cryptographic commitment for each algorithm (see the sketch after this list).
3) Never actually invest any money. Alternatively: let someone else invest your money for you, without knowledge of your hypotheses.
4) Run your algorithms privately, without updating them at all. Capture the data the algorithms use (including random choices taken).
5) 5, 10 or 20 years later, publish all your algorithms, the data they had as input, and their results; see if any of them would have predicted the actual performance of the market in a statistically meaningful way.
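For step 2, a minimal sketch of one way to do the commitment (the strategy spec and names here are invented for illustration): hash the full algorithm specification together with a secret salt, publish only the hash now, and reveal the spec and salt at the end so anyone can verify nothing was changed.

    import hashlib
    import json
    import secrets

    # Hypothetical strategy specification; in practice this would be the
    # complete, executable description of the algorithm.
    algorithm_spec = {
        "name": "ceo-disgust-long",
        "rule": "buy if CEO disgust score > 0.7 in the earnings interview",
        "version": 1,
    }

    salt = secrets.token_hex(32)  # keep private until the reveal
    payload = json.dumps(algorithm_spec, sort_keys=True) + salt
    commitment = hashlib.sha256(payload.encode()).hexdigest()

    print("publish now:", commitment)
    # Years later: publish algorithm_spec and salt; anyone can recompute
    # the hash and confirm it matches the earlier commitment.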
I imagine the main reason most researchers are unlikely to do that is the 5-20 year project requirement. It is a lot easier and faster to just take historical data from the market and then produce algorithms that would have predicted performance after year X based on information from before year X. Of course, the problem is that you run into over-fitting and survivor bias (in that only positive results are generally published).
Btw, having your algorithm be run by a fund and having that fund succeed, then publishing the algorithm, would also be susceptible to survivor bias.
This only works under the assumption that your activity would have had absolutely no impact on the market. The real world is complex and interconnected, and everyone is monitoring each other closely.
I believe that the post you were responding to was noting that you can't do pure research, because the results of the trading strategy actually being executed will influence the market. The pure research cannot determine how trading with that strategy will cause the market to react. Over time, the strategy will change the market more and more (if it's successful), which will change its efficacy.
Paul Ekman talks about the "Desdemona Problem" (https://en.wikipedia.org/wiki/Othello_error), which is always an issue with face-reading of emotions. Many facial expressions (especially those that express on the top half of the face) correlate very well with emotions. But you don't know the context of those emotions or what the person is thinking.
Hence the Desdemona Problem. She is fearful when accused by Othello, not because of infidelity, but because she's being accused. You see that already with the surprising finding that fear and disgust actually correlate with positive financial performance. Yet you see those same emotions in suicidal patients and they're undoubtedly negative.
The fact that the article presents different metrics in response to different inputs (CEO expressions of disgust correlated with 9.3% boost in overall profits over the following quarter, CEO expressions of fear correlated with 0.4% rise in stock price the following week) makes me strongly suspect excessive data-mining.
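A quick illustration of why data-mining is the default suspicion here: with a short sample and enough candidate signals, pure noise will produce impressive-looking correlations. A toy Python example (all numbers invented):

    import random

    random.seed(1)

    n_quarters = 40
    returns = [random.gauss(0, 1) for _ in range(n_quarters)]

    def pearson(xs, ys):
        # Plain Pearson correlation coefficient.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Test 1,000 random "signals" (pure noise) against the same returns.
    best = max(
        pearson([random.gauss(0, 1) for _ in range(n_quarters)], returns)
        for _ in range(1000)
    )
    print(f"best spurious correlation: {best:.2f}")  # typically ~0.5

Mine enough emotion/horizon/metric combinations and something will always clear the significance bar.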
:) I have to admit I haven't heard of a fund started to use this idea but I guess it was just a matter of time.
There are always funds you hear about that were created based on some previously unexplored data signal like this; Twitter sentiment is an example that was popular circa 2011.
The problem with most of these signals is that they're really not predictors on their own; each just becomes one of the hundreds of signals consumed by financial models.
This means that you need to go through the trouble of collecting, cleaning, calibrating and discretizing this signal, only to feed it into a model where it might get a weighting of 0.5% of the overall signal.
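As a rough illustration (scores and weights entirely made up), even a clean version of the signal barely moves an ensemble's output once it is one feature among hundreds:

    # Hypothetical ensemble with made-up signal scores and weights.
    signals = {"momentum": 0.8, "value": -0.2, "ceo_disgust": 0.6}
    weights = {"momentum": 0.30, "value": 0.25, "ceo_disgust": 0.005}  # 0.5%

    score = sum(weights[name] * signals[name] for name in signals)
    print(score)  # the CEO-emotion signal contributes only ~0.003 of this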
> However, accuracy is an issue. Dr. Ekman claimed 90% accuracy for his emotion-coding system, but software inspired by his work hasn’t been tested independently.
This seems a bit dubious. Is this 90% accuracy for predicting stock movements? Or 90% accuracy for predicting emotions based on facial features? I doubt it's the former, or someone like Two Sigma would have just hired the author before he published. If it's the latter, then it's really unclear just how accurate their system is.
Whenever someone claims a one-dimensional measure of "accuracy", you can be pretty sure they're lying: they cherry-pick one stage of their pipeline rather than reporting the overall lift in performance versus other known methods of predicting the important variable.
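To make the gap concrete, a toy simulation (every parameter invented): a 90%-accurate emotion classifier produces almost no predictive lift if the emotion itself is only weakly linked to the outcome.

    import random

    random.seed(2)

    n = 100_000
    true_fear = [random.random() < 0.5 for _ in range(n)]
    # Weak true edge: fearful CEOs precede a profit rise 52% of the time.
    profit_up = [random.random() < (0.52 if f else 0.48) for f in true_fear]
    # Classifier flips its label 10% of the time ("90% accuracy").
    predicted_fear = [f if random.random() < 0.9 else not f for f in true_fear]

    hits = sum(p == u for p, u in zip(predicted_fear, profit_up)) / n
    print(f"hit rate on outcomes: {hits:.3f}")  # ~0.52, nowhere near 0.9

The 90% figure describes one stage of the pipeline; the number that matters is the end-to-end lift over existing predictors.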
>Is this 90% accuracy for predicting stock movements? Or 90% accuracy for predicting emotions based on facial features?
The latter. Dr. Ekman's work on microexpressions focuses on facial movements as they relate to a small set of commonly felt emotions (e.g. fear, disgust, surprise, happiness). The author is just presenting a possible application of Dr. Ekman's theories.
With all due respect, this sounds like data dredging mixed with a bit of astrology, filled out in a PR release sent in by the authors and reprinted by the WSJ.
And in the end, a drop one quarter can be followed by a rise the next. Hell, a company should be allowed to take a loss over a few years if it means they are working on something internally that will bring it back to profitability afterwards. But shareholders these days rarely have the icy stomachs for that kind of play.
I would be interested to know how he trained a fear detector.
There are a lot of companies that develop software that track facial expressions.
It is conceivable to use Affectiva's SDKs to automatically annotate video for facial expressions, and then use that data to develop models that map facial expressions of emotion to things like performance prediction ...
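A sketch of what that pipeline might look like, with a hypothetical emotion_scores() standing in for the real SDK call (this is not Affectiva's actual API):

    import random
    from statistics import mean

    def emotion_scores(frame):
        # Hypothetical stand-in for a real emotion-detection SDK call;
        # returns per-frame emotion intensities in [0, 1].
        rng = random.Random(hash(frame))
        return {"fear": rng.random(), "disgust": rng.random()}

    def interview_features(frames):
        # Aggregate per-frame scores into one feature vector per interview,
        # which could then be regressed against next-quarter performance.
        scores = [emotion_scores(f) for f in frames]
        return {
            "mean_fear": mean(s["fear"] for s in scores),
            "mean_disgust": mean(s["disgust"] for s in scores),
        }

    print(interview_features(frames=range(100)))  # frames stubbed as ints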
If the founder and CEO sells 20+% of "his" public company's stock, as one did the other Friday, you can definitely guess his emotion (e.g. NEWR). Then, when the company's stock drops to half its value a few days later, you can guess his emotion, and the rather different emotions of surprised stockholders.
The empirical data is weak. Stock prices change for a myriad reasons, CEO emotion and behavior being only one of them. Dr. Cicon's data may be sufficient for day traders but rigorous real world testing is needed to confirm if the tech is any good.
If this works on CEOs, then it would work on, say, the chairman of the Federal Reserve, or the heads of other central banks. Predictions gleaned from that data should be much more lucrative.
With higher stakes come greater incentives for counter-measures.
The CEO of your company is a paid sociopath. I mean that in the nicest way possible, of course, but it is literally their actual job description to place the interests of a soulless legal fiction over the needs and desires of living, breathing, human beings with actual feelings. He or she probably isn’t inherently evil. But if they can find a way to make the company 100% more profitable by firing you, they have to do it. That is exactly what their job is.
From now on, we'll have to force all CEOs to be autistic and incapable of expressing emotions, or replace them with Geminoids built in their own image.
[1] Nate Silver, "The Signal and the Noise": http://www.amazon.com/Signal-Noise-Many-Predictions-Fail--bu...
[2] The "Bread and Peace" model by Douglas Hibbs of the University of Gothenburg: http://query.nytimes.com/gst/fullpage.html?res=9803E5DD1F3DF...