It really doesn’t claim or do much that isn’t already quite obvious (it also has the other problem of going off on completely unrelated tangents).
First, it does this funny thing where it “proves” that the EMH (or market efficiency) is NP-hard, by building gadgets based on options and then using them to “embed” 3-SAT. This is fine except (a) we often don’t care about an exact solution (so we’d need approximation-hardness, which isn’t shown) and (b) I can come up with millions of other obvious things like this! Here’s a silly one: I issue a derivative whose price is $1 if a solution (that is known to exist) to an NP-hard problem is found by a given time. This “shows the NP-hardness of the EMH” because obviously the derivative price should be $1 (the problem has a solution, by construction) and so anyone should “obviously” price it at $1.
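To make that concrete, here's a toy sketch (mine, not the paper's) of the "silly" derivative in Python: the instrument pays $1 iff a hypothetical 3-SAT formula is satisfiable, so pricing it "correctly" is literally a SAT query (brute-forced here).

```python
from itertools import product

# Toy 3-SAT instance (hypothetical): each clause is a tuple of literals,
# where +i means variable i is true and -i means variable i is false.
clauses = [(1, -2, 3), (-1, 2, 3), (1, 2, -3)]

def satisfies(assignment, clauses):
    """Check whether a {var: bool} assignment satisfies every clause."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def fair_price(clauses, n_vars):
    """The derivative pays $1 iff the formula is satisfiable, so its
    'obvious' fair price is $1 exactly when a satisfying assignment
    exists -- but computing that price is itself a SAT query."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if satisfies(assignment, clauses):
            return 1.0
    return 0.0

print(fair_price(clauses, 3))  # this toy instance is satisfiable -> 1.0
```

The brute force is exponential in the number of variables, which is the whole point: any market that "must" price this instrument exactly is being asked to solve SAT.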
This is like saying “life is NP-hard.” Like, yeah, of course it is. That doesn’t mean we aren’t pretty good at doing the right thing, and rationality (like the EMH) is a good, if imperfect, tool for analyzing behavior. Taken to its extreme, everything breaks, because the model is wrong. We don’t throw up our hands and quit because making real-life decisions is a sampling-hard, query-hard, and NP-hard problem (even if all data is known), yet a basic model of rationality would say that humans can “perfectly” solve it. (Which is obviously crazy, but we obviously don’t throw the assumption of agent rationality out the window because of this.)
Otoh, I am definitely of the opinion that some economists hold EMH as a “magical” axiom, but that’s a different story.
In fact, you can embed more powerful oracles like this, e.g. a publicly traded company announces a $1,000,000 prize for anyone who submits an inconsistent statement in ZFC.
The EMH is a framework. With any scrutiny it's obvious that it can at most be first order accurate since obtaining and processing information is costly. This is not a particularly new insight or one that would be a surprise to any proponent of the EMH.
Great clarification. I'm always suspicious of interpretations of EMH that are a lot more literal than they should be, drawing effectively false conclusions from double negations. I think the truth is as you said.
It is known that, on a quantum computer, factoring can be done in polynomial time (Shor's algorithm).
The EMH is accurate to way better than a tangent line, which is what a first-order approximation is.
The granularity of the "correctness" is the major difference here, and you're asking the author to be OK with an extremely rough success criterion when they are proposing a mathematically perfect success criterion.
So, define how you would quantify "pretty good" in comparison with "mathematically optimal" and you might get what is in effect an infinite delta.
>but a basic model of rationality would say that humans can “perfectly” solve it.
"basic model of rationality" doesn't make sense. Rationalism has not been shown to provide a pathway to a general epistemology.
Fundamentally the author is making a really good case for the idea that oblique or poorly specified/constructed goal and measurement criteria (e.g. EMH) should not be assumed as a competent goal-vectoring mechanism for society.
All I'm saying is that there are easier and clearer ways of presenting the same thing that could be done in a sentence and make the "reason why" obvious. No need to introduce gadgets and go on and on for 30 pages about "exponential time" (is the author talking about the exponential time hypothesis? Or is this P vs. NP?) among other things. There's also some additional crossed wires about "needing to try all cases" or something, that doesn't make any sense. No SAT solver/TSP solver/MILP solver, etc., does this. Anyone who is at all familiar with this topic would not make such statements, and certainly not, in the way you mention, a "mathematically perfect" criterion that "is proven wrong." So which is it? Is the author being cavalier about the literature? Or do they simply not understand the basics?
Additionally, they're not proving anything wrong. This is, like what I mentioned before, similar to bashing people for saying "Nash-equilibria are PPAD-hard (and communication hard) so Nash-equilibrium as a concept is bad because it's computationally difficult." (In fact, I'd say it's even less interesting than this, because at least the latter is a nontrivial result; not a one-line argument that we can embed arbitrarily complex things into the market.)
> Rationalism has not been shown to provide a pathway to a general epistemology.
This is "rational" as in "rational agent" (utility-maximizing, or an agent that makes at least Pareto-optimal choices, or who plays to Nash or correlated equilibria), not "rationalists" as in people who apply rationalism as an epistemology. Applied broadly, "rational agents" could solve halting-hard problems by making an obvious game where an agent who solves such a problem gets a million dollars, since such agents are often assumed to have unbounded computational power. (In particular, nothing is usually assumed about limits on the agent's computational power.)
> Fundamentally the author is making a really good case for the idea that oblique or poorly specified/constructed goal and measurement criteria (eg. EMH) should not be assumed as a competent goal vectoring mechanism for society.
Not really, no. For one, EMH says nothing about exact solutions (this would require a much more careful argument with a different type of reduction), and two, in the same way as before, putting EMH on a pedestal (like putting Nash equilibria on a pedestal, or agent rationality, etc.) leads to absurd results. My point is that (a) this is well-known and (b) you can make much simpler examples (like the ones above) that don't appear to be as "clever" but prove the same (nearly-obvious) point.
I think that's a perfectly cromulent position actually. It's a great example where the theory doesn't help when you're working with real non-homo-economicus agents, and I would argue hurts if you assume that people behave (or should behave) according to those concepts.
>This is "rational" as in "rational agent" (utility-maximizing, or an agent that makes at least Pareto-optimal choices, or who plays to Nash or correlated equilibria)
...even worse. Go find me one of these perfect information people. I suppose you could collapse the stated preference and revealed preference superposition into observed preference to solve the equation.
My point is that, like EMH, none of these are pretending to be the "be all end all solution to all models" and it's obviously silly to pretend otherwise? So it seems you're agreeing with me?
I think that's the whole crux of the paper - the people informing how we structure our economy shouldn't consider EMH doctrine.
In my view it is a good thing to prove the model wrong, because so many people take extreme moral and political stances on account of the model being accurate.
At the very least, displaying counter-examples is a way to show those stances for what they are, exposing them as beliefs without scientific support.
As much as this is certainly the case (people using models wrong is one of my personal pet peeves), there are at least as many people who take equal and opposite extreme moral and political stances on account of the model being inaccurate.
Anyone who actually understands EMH just takes it for what it is.
> beliefs without scientific support
Any political or moral belief is a belief without scientific support. Those are simply not things within the domain of science. Anyone who says otherwise is just trying to bamboozle you into some weird ideology.
But this wouldn't affect those stances (in either direction). An EMH advocate can just (rightly) say, "Markets are efficient up to the limits of computation speed." Those same limits apply to anyone else's attempt to correct the market, so it doesn't validate a view based on its negation either.
>> Last summer Bringsjord and Taylor posted a paper entitled “P=NP” to the arXiv. This paper argues that, since (1) finding a Steiner tree is NP-hard, (2) soap bubbles find a Steiner tree in polynomial time, (3) soap bubbles are classical objects, and (4) classical physics can be simulated by a Turing machine with polynomial slowdown, it follows that P=NP.
>> My immediate reaction was that the paper was a parody.
I mean, did nature "solve the problem" of creating human-like consciousness from inanimate matter? I suppose so, since humans are certainly human-like, and it would appear that there is a reasonable path from t=0 to now that includes building up all the structures at every scale that allowed us to come into existence and talk about it with each other. But this way of speaking implies something that isn't there, namely an intention. Humans, to take one example, aren't a solution to a problem; we just are. And it's the same for all interesting phenomena: the parabolic trajectory of a particle thrown on Earth, or sphere packing, or crystal lattices... The fact that these structures are solutions to a problem is an artifact of the human ability to model the system mathematically in the first place. Nature doesn't model, she just is.
Nature might give efficient solvers for specific problems up to a certain size, but we already have "practical" NP problem solvers in classical silicon in the form of SMT solvers.
They're out there, and practical for some problems, but SMT solvers aren't exactly a solved problem. They're used for deductive proofs about program correctness, which are still pretty finicky. 
That's the idea behind DNA-computing, for instance.
The question is whether it is actually possible to do so at a reasonable cost and at scale.
Regardless, what "nature" does has 0 to do with P vs NP, computability or anything else. P vs NP is a statement strictly about mathematical objects, there is no notion of "nature" anywhere in it.
Here's one particular example: I tried what is essentially a greedy heuristic on a physical design problem (a problem where we design a device such that an excited wave within it best matches a specific shape). I had earlier showed that solving the general problem is NP-hard.
The objective value the greedy heuristic returned? 0.589. It took <100 ms to compute this.
Now, I rewrote the problem in a way that was amenable to a mixed-integer convex problem solver (which involved some somewhat-complicated reductions because the problem isn't easily phrased in this way). Put it on Mosek (which is a pretty fast solver) and let it run.
The globally-certified optimal value (to <1%)? 0.587. Mosek took two and a half hours to find this.
So, tl;dr, the problem was "hard enough" that a branch-and-bound solver took a long time to solve, but still "easy enough" that a basic heuristic might as well have nailed the global optimum.
I think the reason such papers keep popping up (and I see them around surprisingly often) is that people really don't realize how good basic heuristics really can be!
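For a feel of how good cheap heuristics can be, here's a toy analogue of the experiment above (TSP on random points, not the physical design problem; the instance is made up): nearest-neighbour greedy versus certified brute force. On small random instances like this, the greedy tour typically lands within a few percent of optimal.

```python
import itertools
import math
import random

random.seed(1)
# A small random Euclidean TSP instance: 9 points in the unit square.
cities = [(random.random(), random.random()) for _ in range(9)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    """Length of the closed tour visiting cities in the given order."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def greedy():
    """Nearest-neighbour heuristic: O(n^2), no optimality guarantee."""
    unvisited, order = set(range(1, len(cities))), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[order[-1]], cities[c]))
        order.append(nxt)
        unvisited.remove(nxt)
    return tour_length(order)

def exact():
    """Brute force over all (n-1)! tours: exponential, but certified optimal."""
    return min(tour_length([0] + list(p))
               for p in itertools.permutations(range(1, len(cities))))

g, opt = greedy(), exact()
print(g, opt)  # greedy is usually very close to the certified optimum
```

Same shape as the anecdote: the exact solver pays an exponential price for its certificate, while the heuristic gets nearly the same answer almost for free.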
With a similar approach, we could 'prove' that market efficiency is non-computable, right?
I was thinking: if you give me a program that solves the halting problem, I'll give you $1.
On reflection though, that's not just a bargain, it's also an outright contradiction; we know no such program exists, so making this offer doesn't affect the market at all. Perhaps there's a sneakier variation on this.
> On reflection though, that's not just a bargain, it's also an outright contradiction; we know no such program exists, so making this offer doesn't affect the market at all. Perhaps there's a sneakier variation on this.
We do know Busy Beaver values BB(1) to BB(4) (via brute force). We have a lower bound for BB(5). It's definitely possible to compute more of these values. Uncomputable does not mean impossible to compute for specific inputs; it means you cannot find an algorithm that solves the problem in general.
So, maybe by dedicating a bizarre amount of compute, you could find BB(1,000,000) to claim your $1M from me. Get rich quick!
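To illustrate that "uncomputable in general" still allows computing specific values, here's a sketch (my own, for the steps-count variant of Busy Beaver) that brute-forces BB(1): enumerate every 1-state, 2-symbol Turing machine, simulate each with a step cap, and take the longest halting run.

```python
from itertools import product

def run(machine, cap=100):
    """Simulate a 1-state, 2-symbol Turing machine on a blank tape.
    machine maps read-symbol -> (write, move, next) with next in {'A', 'H'}.
    Returns the number of steps taken before halting, or None if the cap
    is exceeded (for 1-state machines a small cap is safe: any machine
    whose blank-symbol transition stays in state A never halts)."""
    tape, pos, state, steps = {}, 0, 'A', 0
    while state != 'H':
        if steps >= cap:
            return None
        write, move, state = machine[tape.get(pos, 0)]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        steps += 1
    return steps

# Enumerate all 64 one-state machines and keep the longest halting run.
actions = [(w, m, s) for w in (0, 1) for m in 'LR' for s in 'AH']
best = max(
    steps
    for a0, a1 in product(actions, repeat=2)
    if (steps := run({0: a0, 1: a1})) is not None
)
print(best)  # BB(1) = 1
```

The same enumerate-and-simulate strategy (with vastly more care about proving non-halting) is how BB(2) through BB(4) were pinned down; it just stops scaling long before BB(1,000,000).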
Most importantly, it doesn't mean that alternatives to markets aren't also NP-hard.
If we're trying to prove NP-hardness, we do.
If we are going to play the academic one-upmanship game, a more general result is that finding the "best" among multiple Nash equilibria of an N-player game (N >= 2) is already NP-hard. The implication of an efficient market would be that every player has a polynomial-time algorithm for an NP-hard problem, ergo P=NP.
Discussed at the time: https://news.ycombinator.com/item?id=1144548
and https://news.ycombinator.com/item?id=1124782 (a bit)
Of course economists know that markets are not perfectly efficient, but what they tend to believe is that markets tend towards efficiency, and markets that are big enough and free enough will correct trivial inefficiencies relatively quickly.
This is why the advice is generally that returns in easy to access markets are very closely related to the risk you took on. If you made better than market returns, it's probably because (whether intentionally or not) you took on more risk than the market as a whole. This doesn't hold true for those with an information edge, but more people believe they have such a thing than actually have it.
This is a particularly... interesting (for lack of a better word) submission to appear so repeatedly. Maybe it's just the right word-combination to appear on HN? Computer science + finance, with a touch of the academic-ish?
Socialists need to discredit the market.
These real events are incompatible with perfectly efficient markets:
* Google (billion-dollar startups)
* Salem witch hunts
That’s not to say markets are perfectly inefficient either. But they’re clearly fallible and imperfect. They clearly misinterpret available information all the time, even when they easily could’ve interpreted it correctly. Markets are full of emotion-driven irrational actors, not Bayesian, robotic, egoless predictors, even though we like to pretend we’re the latter sometimes.
It is a model. All models are wrong. Some are useful.
The EMH is very useful. For example, a good understanding of its failure modes can lead to opportunities for profit. You list a few potential failure modes.
or an understanding that "efficiency" depends on diffusion of information, thus when information has uneven diffusion times-- an efficient market still allows for information-diffusion arbitrage.
The time scale is usually dependent on the transparency of the company's financials to investors. For public companies, the time scale is measured in quarters. For private companies, it could be measured in years or decades.
And this actually lines up with the EMH - stock prices reflect all available information. Nothing about the hypothesis says it reflects the true value when key information is unavailable (e.g. fraud or a private company that doesn't open their books).
The problem with your comment (and this paper) is that it misrepresents what economists mean when they talk about the EMH. A more widely accepted interpretation of the EMH is just that it's impossible to reliably predict asset prices, or colloquially "beat the market", and it has a lot of evidence going for it (e.g. the poor track record of professional traders and fund managers over long periods of time).
In order to make the EMH plausible, you have to restrict it so much that it's essentially meaningless: something on the order of, "If you take the ensemble of market players, they do not beat the market on average." But it's clear that individual market players do often better understand the value of particular assets than the market, and not just because they're lucky, but because they just do better analysis than the crowd does.
In fact, following your reasoning every trade would invalidate the EMH. If the price goes up afterward then the buyer was "right" and if the price goes down afterward then the seller was "right".
> In fact, following your reasoning every trade would invalidate the EMH.
No, only trades in which the person making the right call was right by something other than luck.
The strongest version of the EMH that actually holds is probably something on the order of, "If you pick a random investor, their expected returns are the same as those of the broader market." That is, you have to weaken it until it's trivially correct.
Were there free markets in the 17th century, would they have been able to incorporate Galileo's insight? Or would they have joined the church in denunciation? Part of me thinks that if they did eventually come around to the solar system, it wouldn't be all that quick.
In other words, you can also lose money, even if you're right.
1. Pseudo-Keynes: https://quoteinvestigator.com/2011/08/09/remain-solvent/
Nowadays we know that markets aren’t perfectly efficient, 2008 was quite the instruction, and we can now have a serious discussion about degrees of efficiency and inefficiency that the markets achieve.
"The efficient-market hypothesis (EMH) is a hypothesis in financial economics that states that asset prices reflect all available information."
2008 is not a contradiction and the other examples by the parent aren't contradictions either.
This is a further misrepresentation of the efficient market hypothesis. The efficient market hypothesis states that the price agreed upon by the market is accurate to a risk-adjusted level, to the extent that it is not possible to consistently beat the returns of the market.
This is generally borne out by the fact that most funds fail to beat their benchmarks in any given year, and the longer the term you look at, the closer the number of funds that outperform their benchmarks gets to 0.
Page 9 of this document shows this quite conclusively: https://www.spglobal.com/spdji/en/documents/spiva/spiva-us-y...
Anybody who "beats the market" on any given day could only achieve that by discovering an inefficiency in the market's decision making. The efficient market hypothesis doesn't say that this is impossible. It says that it's impossible to do this consistently, which is reflected quite accurately in the historical performance of mutual funds. I guess the theory is flawed in the sense that it doesn't define how long you have to beat the market before you can be considered 'consistent' at it, but I'd consider the small number of funds that achieve this over any significant period to be within the error bars of luck.
EDIT: So one thing to consider is that markets are predictive. A "correct" interpretation is only correct if it can predict what will happen in the future. And because the future is not predictable (proof needed but whatever) there is no way to know which interpretation is "correct".
The same way fantastic tennis players can consistently outperform the rest of the competition, fantastic traders can certainly outperform others. Of course, in an ultra-competitive environment with a massive number of constant competitors, it's very hard to retain a number-one spot. It's also hard to predict markets because of the very fact that they're inefficient and irrational. There are so many active participants that it's hard to predict how they will react to available information.
Take 2008 for instance. There was public information demonstrating fatal weakness in the housing market, but it was neglected en masse. This makes it very hard to beat the market. Even if you're correct, the market could decide to ignore the facts, and you wouldn't be able to reap any alpha from your observation.
Sure, assuming every market participant has knowledge of the winning strategy, and the skills to employ it..which is a pretty big caveat.
By this logic why do some chess players beat others? Wouldn't they all simply adopt the same strategy?
If you play chess against someone, and you discover a winning strategy, and you play it repeatedly, they'll eventually discover your strategy and be able to counter you. Your opponents in the market are adaptive in this same way, so trying to always stay ahead is very hard. Some funds find some way (which I am not privy to) to succeed at this, and people still make boat loads of money as professional traders or quants, so there is reason to believe it is possible.
The fact that it is possible to have a very successful career in finance as a quant or trader also seems to me like evidence the efficient market hypothesis (taken in its strongest form, that markets are perfectly efficient) is false.
It's the opposite. Traders increase market efficiency up to a certain point. If traders didn't exist then markets couldn't be efficient in the first place. Liquidity problems would persist and cause large spreads between buy and sell prices.
But sure you could always define efficiency in an unattainable way by saying no matter how much you beat the market your streak could end down the line and the limited sample size means it doesn't prove anything.
And to me that's what efficient markets are. This unfalsifiable word game you can play where as long as the definition is flexible enough, you get to say the sentence "markets are efficient".
I think this question is hard to comprehend when you blow it up to this massive scale. Here's a simpler (and, I claim, equivalent) problem.
Imagine you're in a room with a friend. You're both tasked with estimating the price of a company. You both read through the company's 10-Ks and whatever other information you can dig up. You read a bit more carefully than your friend, and detect an issue in the company's finances. Now you have alpha, and can develop a better estimate.
The market is this same scenario, but there are millions of folks playing the same game. That means it's harder to find that undiscovered blip of insight, but that doesn't mean it doesn't lie in wait for the diligent. Just as there are still original thoughts, there are still original insights, and possessing them means, for a moment, you can beat the market.
Sure, but the exception proves the rule. We don't have a whole generation of Warren Buffetts; we only have one.
Be careful before holding up a single example as an exception to the rule. Some basic statistical analysis shows there was a 30% chance the Red Baron, the most famous WW1 fighter pilot, was just lucky. And obviously his luck eventually ran out.
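The "within the error bars of luck" point is easy to check with a toy simulation (my numbers, not from the SPIVA report): give 1,000 funds zero skill, so each beats the benchmark with probability 0.5 every year, and count how many post a perfect 10-year streak anyway.

```python
import random

random.seed(0)
n_funds, n_years, trials = 1000, 10, 200

# Analytic expectation: each zero-skill fund has a 1 / 2**10 chance of a
# perfect streak, so we expect about 1000 / 1024 ~ 0.98 "star" funds.
expected = n_funds / 2 ** n_years

def lucky_streaks():
    """Count funds that beat the benchmark every year by pure coin flips."""
    return sum(
        all(random.random() < 0.5 for _ in range(n_years))
        for _ in range(n_funds)
    )

mean = sum(lucky_streaks() for _ in range(trials)) / trials
print(expected, mean)  # Monte Carlo mean hovers near the analytic value
```

So even with no skill anywhere in the population, you'd still expect to find roughly one fund with a decade-long winning streak, which is why a handful of long streaks isn't evidence against the EMH on its own.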
It's easy for politicians to blame 2008 on the markets, but I think it was caused by Keynesian government policies.
I assume we'll have similar problems soon with all the stimuli being granted.
I doubt it is an “instruction” on market efficiency: yes, if a government puts a lot of tax money and newly-printed money into Ponzi schemes, bad things will happen.
Similarly, the market can be 99.99% efficient, in which case it's obviously not entirely perfect but close enough that the average citizen is extremely unlikely to find any real arbitrages (which is what the EMH is essentially about).
It actually argues that if P=NP, and Moore's law continues indefinitely, then eventually someone could build an efficient market.
Another simple proof is that not everybody bets on the market for every opinion they have, but the market only prices in information if someone acts on it.
Some NP-complete problem classes have polynomial time approximation schemes, but not all, unless P=NP. That is to say, sufficiently large markets may be arbitrarily inefficient. I'm curious to see that dichotomy resolved.
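For concreteness, 0/1 knapsack is the standard example of an NP-hard problem with an FPTAS: scale the values down by K = eps * vmax / n, solve the rounded instance exactly by dynamic programming, and you get a (1 - eps)-approximation in time polynomial in n and 1/eps. A minimal sketch (toy instance, my own code):

```python
def knapsack_fptas(items, capacity, eps):
    """(1 - eps)-approximation for 0/1 knapsack.
    items: list of (value, weight). Runtime is polynomial in n and 1/eps."""
    n = len(items)
    vmax = max(v for v, _ in items)
    K = eps * vmax / n                      # rounding granularity
    scaled = [int(v // K) for v, _ in items]
    vtot = sum(scaled)
    INF = float('inf')
    # min_w[v] = minimum weight needed to achieve scaled value exactly v
    min_w = [0] + [INF] * vtot
    for (_, weight), sv in zip(items, scaled):
        for v in range(vtot, sv - 1, -1):   # iterate downward: 0/1 items
            if min_w[v - sv] + weight < min_w[v]:
                min_w[v] = min_w[v - sv] + weight
    best_scaled = max(v for v in range(vtot + 1) if min_w[v] <= capacity)
    # best_scaled * K is within a factor (1 - eps) of the true optimum.
    return best_scaled * K

items = [(60, 10), (100, 20), (120, 30)]    # (value, weight), made-up data
print(knapsack_fptas(items, 50, 0.1))       # -> 220.0, the true optimum here
```

The catch for the market analogy is exactly the dichotomy above: problems like knapsack round nicely, while strongly NP-hard problems (e.g. general TSP) provably admit no FPTAS unless P=NP.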
Your "if" is doing a ton of work there, and this paper is quite abstract market theory. The "practical use" is to better contextualize the efficient market hypothesis.
Anyway, markets have a lot more going on than a simple greedy algorithm: for instance, injecting noise into the optimization to get out of local minima. Markets clearly have a lot of this.
There's an intriguing recent book (that, admittedly, I haven't read) called "The People's Republic of Walmart" https://www.penguinrandomhouse.com/books/564287/the-peoples-...
If large corporations are the shining beacons of capitalism, how come they are run internally as top-down planned economies? And to the extent they prove that planning can work at scale, could planning work society-wide? Not saying I agree or disagree (again, I haven't actually read it), but dusting off the ye olde calculation debate and updating it for today seems like an interesting idea.
If all Walmart does is make mistakes, then they go out of business. Central planning won't work, but federated planning with 20 super-corporations in competition with each other might actually be good enough. The only problem is that we both need and want the underdogs (startups), and this would undermine them.
You are conflating an economic model with a political one. In fact, the central planner doesn't even need to be a person, but a set of assumptions and strategies that can be measured against the benefit they provide and be continually adjusted.
You touch an important point, however. The people most affected by Walmart's central planning, their employees, have no say in what strategies and assumptions the central planner uses and that removes any incentive to provide benefit to them.
The notion that these huge corporate bureaucracies are somehow "efficient" at anything except trashing the environment and homogenising and controlling - if not ultimately economically cannibalising - their own workers and customer base is clearly counterfactual.
From reading the book synopsis (again, I haven't read the book itself), it seems the authors are proposing some kind of democratic socialism rather than USSR-style authoritarianism. Is it workable? I have no idea. To me it seems that USSR-style authoritarianism and current Western neoliberal capitalism are but two small dots on the political economy spectrum. There are other ways of organizing society as well; certainly worth thought experiments to see what kind of other options there might be and what we might learn from them.
Look. I don't believe that P=NP. I can't describe a superior optimization process, and if I did, I would claim the Clay Math prize. I'm also not an economist.
My takeaway is that any simple strategy is almost certainly bad in the long term. You can optimize your own personal take, but in doing so, "the invisible hand" isn't going to somehow flip that into a benefit for all.
But if large corporations prove the efficacy of planning at scale, then they surely prove the drawbacks as well. What are the society-wide central planning equivalents of layoffs and maximizing shareholder value? Does a centrally planned corporation outsourcing their supply chain or HR department represent a fundamental failure of central planning?
That said, it is indeed quite interesting to figure out at what size such central planning starts to become non-viable. And for that, observations like "it is viable for Walmart, but not for all of the USA" give some interesting bounds.
Large corporations rule top-down because goals are set at the top. That’s very different from a society problem.
I love that meme for how wrong it is. Public companies don’t maximize short-term profits; they maximize total profits (with future profits discounted).
The fact that to a lot of people “profits” == “short term profits”, reveals quite a lot about our assumptions and time preferences..
Now if someone said companies focus on continuous quarterly growth, and sometimes sacrifice a portion of long-term growth for short-term growth, I could agree with that.
My own suspicions have been for quite some time now that fundamentals are what move markets and that technicals are mostly bullshit and are only valid because algo-traders school like fish based on technicals.
Actually it does. The central premise of the Efficient Market Hypothesis is that all relevant information is already priced in.
>"the market can remain irrational longer than you can stay solvent."
Irrational behaviour by uninformed investors is theoretically already priced in, if the market is efficient.
Thing is, the hypothesis can be useful whilst being only approximately true. If it is true to within 0.1%, then, in most cases, it is quite appropriate to model it as being perfectly true. And obviously, there are other cases where that model isn't useful. In those cases, it would probably still be useful to take the 0.1% number into account.
IMO markets can't be efficient because we as humans aren't - our perception of value itself can be subjective and influenced by many things including FOMO, safety in numbers perceptions, risk aversion (usually), etc etc.
The intuition behind the efficient market hypothesis is that each of these deviations from "homo economicus", if it occurs in a market, is an opportunity for someone else to make money. EMH says that opportunity won't be wasted, at least for very long.
Well, define "well enough"...
We do get results. Are they "well enough"?
.... and the paper is written in MS Office.
Math, CS and Physics are the exception here.
Hyper-optimizing just-in-time inventory and production practices and betting on the outcome at nanosecond timescales? Probably harmful.
Investing in useful businesses? Probably pretty beneficial.
Certainly, betting on nanosecond timescales seems very unlikely to be very beneficial for society at large.
Everything is O(1) if you can ignore steps.
That said, recent work on photonic Fourier transforms does look interesting from the perspective of a co-processor. They offer meaningful gains over alternatives.
What I was pointing out is that digital implementations have this exact same property. If the input is bounded, in the sense that there is a maximum number of bits being fed into your digital computer, then the maximum runtime is a constant equal to the worst-case runtime over those finitely many inputs. That is, the runtime is O(1) for the inputs you're actually able to provide (the exact same inputs you could provide to the photonic processor to achieve a constant-time Fourier transform). To scale up beyond that threshold you still need more time or more/better processors.