Markets are efficient if and only if P=NP (2010) (arxiv.org)
214 points by DarkContinent 38 days ago | 169 comments



This paper just gets me every time I see it.

It really doesn’t claim or do much that isn’t already quite obvious (it also has the other problem of going on completely unrelated tangents.)

First, it does this funny thing where it “proves” that EMH (or market efficiency) is NP-hard, by building gadgets based on options and then using them to “embed” 3-SAT. This is fine except (a) we often don’t care about an exact solution (so we need approximation-hardness, which isn’t shown) and (b) I can come up with millions of other obvious things like this! Here’s a silly one: I give a derivative whose price is $1 if a solution (that is known to exist) to an NP-hard problem is found by a given time. This will “show the NP-hardness of EMH” because obviously the derivative price should be 1 (the problem has a solution, by construction) and so anyone should “obviously” price it at $1.
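To make the “silly” construction concrete, here’s a toy sketch (purely illustrative, not from the paper): the derivative’s fair price is $1 exactly when an embedded 3-SAT instance is satisfiable, so pricing it “correctly” means solving the instance.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force 3-SAT check: try every assignment.
    A literal k > 0 means variable k is true; k < 0 means false."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def derivative_price(clauses, n_vars):
    """Fair price of the toy derivative: $1 iff the instance has a solution."""
    return 1.0 if satisfiable(clauses, n_vars) else 0.0

# (x1 or x2 or x3) and (not x1 or x2 or not x3) -- satisfiable
print(derivative_price([[1, 2, 3], [-1, 2, -3]], 3))  # 1.0
```

The brute-force loop here is exponential in the number of variables, which is the whole point: pricing the instrument exactly is as hard as the embedded problem.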

This is like saying “life is NP-hard.” Like, yeah, of course it is. That doesn’t mean we aren’t pretty good at doing the right thing, and rationality (like EMH) is a good, if imperfect, tool to analyze behavior. Taken to its extreme, everything breaks, because the model is wrong. We don’t throw up our hands and quit because making real-life decisions is a sampling-hard, query-hard, and NP-hard problem (if all data is known), but a basic model of rationality would say that humans can “perfectly” solve it. (Which is obviously crazy, but we obviously don’t throw the assumption of rationality of agents out the window because of this.)

Otoh, I am definitely of the opinion that some economists hold EMH as a “magical” axiom, but that’s a different story.


It's not even necessary to introduce weird derivatives: a publicly traded company can announce a $1,000,000 prize for factoring an RSA key.

In fact, you can embed more powerful oracles like this, e.g. a publicly traded company announces a $1,000,000 prize for anyone who submits a proof of an inconsistency in ZFC.

The EMH is a framework. With any scrutiny it's obvious that it can at most be first order accurate since obtaining and processing information is costly. This is not a particularly new insight or one that would be a surprise to any proponent of the EMH.


> The EMH is a framework. With any scrutiny it's obvious that it can at most be first order accurate since obtaining and processing information is costly. This is not a particularly new insight or one that would be a surprise to any proponent of the EMH.

Great clarification. I'm always suspicious of interpretations of EMH that are a lot more literal than they should be, drawing effectively false conclusions from double negations. I think the truth is as you said.


The complexity of factoring RSA keys is not known, so this method isn’t sufficient.

It is known that factoring can be done in polynomial time on a quantum computer (Shor's algorithm), but its classical complexity remains open.


>at most be first order accurate

The EMH is accurate to way better than the tangent line, which is what a first order approximation is.


When someone says a consideration is orthogonal to another one, they are not literally embedded in an Euclidean space. In this context, first order is used loosely to mean that the deviations come from effects which are only secondary in magnitude and relevance, not to indicate that we're looking at the asymptotic behavior of a formula when a parameter is taken to be infinitesimal.


The trouble is that EMH is a formula, so the "to first order" thing has a literal meaning that is not what the author meant. It's kind of like saying that "1-x^2/2 is cos(x) to first order," moreso than saying "my business plan is a good idea for every involved party, to first order."


>This is like saying “life is NP-hard.” Like, yeah, of course it is. That doesn’t mean we aren’t pretty good at doing the right thing and rationality (like EMH) is a good, if imperfect tool to analyze behavior.

The granularity of the "correctness" is the major difference here, and you're asking the author to be OK with an extremely rough success criterion when they are proposing a mathematically perfect one.

So, define how you would quantify "pretty good" in comparison with "mathematically optimal" and you might get what is in effect an infinite delta.

>but a basic model of rationality would say that humans can “perfectly” solve it.

"basic model of rationality" doesn't make sense. Rationalism has not been shown to provide a pathway to a general epistemology.

Fundamentally the author is making a really good case for the idea that oblique or poorly specified/constructed goal and measurement criteria (e.g. EMH) should not be assumed as a competent goal vectoring mechanism for society.


> The granularity of the "correctness" is the major difference here and you're asking the author to be ok with an extremely rough success criteria when they are proposing a mathematically perfect success criteria.

All I'm saying is that there are easier and clearer ways of presenting the same thing that could be done in a sentence and make the "reason why" obvious. No need to introduce gadgets and go on and on for 30 pages about "exponential time" (is the author talking about the exponential time hypothesis? Or is this P vs. NP?) among other things. There's also some additional crossed wires about "needing to try all cases" or something, that doesn't make any sense. No SAT solver/TSP solver/MILP solver, etc., does this. Anyone who is at all familiar with this topic would not make such statements, and certainly not, in the way you mention, a "mathematically perfect" criterion that "is proven wrong." So which is it? Is the author being cavalier about the literature? Or do they simply not understand the basics?

Additionally, they're not proving anything wrong. This is, like what I mentioned before, similar to bashing people for saying "Nash-equilibria are PPAD-hard (and communication hard) so Nash-equilibrium as a concept is bad because it's computationally difficult." (In fact, I'd say it's even less interesting than this, because at least the latter is a nontrivial result; not a one-line argument that we can embed arbitrarily complex things into the market.)

> Rationalism has not been shown to provide a pathway to a general epistemology.

This is "rational" as in "rational agent" (utility-maximizing, or an agent that makes at least Pareto-optimal choices, or who plays to Nash or correlated equilibria) not "rationalists" as in people who apply rationalism as an epistemology. Applied broadly, "rational agents" could solve halting-hard problems by making an obvious game where an agent who solves such a problem gets a million dollars, since such agents are often assumed to have unbounded computational power. (In particular, such models often place no bound at all on the agent's computational power.)

> Fundamentally the author is making a really good case for the idea that oblique or poorly specified/constructed goal and measurement criteria (eg. EMH) should not be assumed as a competent goal vectoring mechanism for society.

Not really, no. For one, EMH says nothing about exact solutions (this would require a much more careful argument with a different type of reduction), and two, in the same way as before, putting EMH on a pedestal (like putting Nash equilibria on a pedestal, or agent rationality, etc.) leads to absurd results. My point is that (a) this is well-known and (b) you can make much simpler examples (like the ones above) that don't appear to be as "clever" but prove the same (nearly-obvious) point.


>"Nash-equilibria are PPAD-hard (and communication hard) so Nash-equilibrium as a concept is bad because it's computationally difficult."

I think that's a perfectly cromulent position actually. It's a great example where the theory doesn't help when you're working with real non-homo economicus agents, and I would argue hurts if you assume that people behave (or should) behave according to those concepts.

>This is "rational" as in "rational agent" (utility-maximizing, or an agent that makes at least Pareto-optimal choices, or who plays to Nash or correlated equilibria)

...even worse. Go find me one of these perfect information people. I suppose you could collapse the stated preference and revealed preference superposition into observed preference to solve the equation.


Sorry, I'm afraid I don't understand your position at all.

My point is that, like EMH, none of these are pretending to be the "be all end all solution to all models" and it's obviously silly to pretend otherwise? So it seems you're agreeing with me?


Maybe you and I agree that EMH shouldn't be considered god's truth, but macroeconomists who inform policymakers and asset pricing theory sure as hell do. Trust the market, market clearing etc... all come out of the philosophy.

I think that's the whole crux of the paper - the people informing how we structure our economy shouldn't consider EMH doctrine.


That only works if the alternatives to EMH are better heuristics than it is.


> Taking it to its extreme then everything breaks, because the model is wrong.

In my view it is a good thing to prove the model wrong, because so many people take extreme moral and political stances on account of the model being accurate.

At the very least, displaying counter-examples is a way to show those stances for what they are, exposing them as beliefs without scientific support.


> because so many people take extreme moral and political stances on account of the model being accurate

As much as this is certainly the case (people using models wrong is one of my personal pet peeves), there are at least as many people who take equal and opposite extreme moral and political stances on account of the model being inaccurate.

Anyone who actually understands EMH just takes it for what it is.

> beliefs without scientific support

Any political or moral belief is a belief without scientific support. Those are simply not things within the domain of science. Anyone who says otherwise is just trying to bamboozle you into some weird ideology.


>In my view it is a good thing to prove the model wrong, because so many people take extreme moral and political stances on account of the model being accurate.

But this wouldn't affect those stances (in either direction). An EMH advocate can just (rightly) say, "Markets are efficient up to the limits of computation speed." Those same limits apply to anyone else's attempt to correct the market, so it doesn't validate a view based on its negation either.


There have been (at least semi-)serious proposals that P=NP based on the fact that nature seems to be able to find optima to NP-hard optimization problems such as minimizing the area:volume ratio in a foam. Of course, they have been unable to give any reasonable arguments as to why those should be global optima.


Shameless plug for my Hitler parody of cranks who write such papers (modeled after Scott Aaronson's reaction to such a paper [0]).

https://www.youtube.com/watch?v=NwUQVv7wCGA

[0] https://www.scottaaronson.com/papers/npcomplete.pdf

>> Last summer Bringsjord and Taylor posted a paper entitled “P=NP” to the arXiv. This paper argues that, since (1) finding a Steiner tree is NP-hard, (2) soap bubbles find a Steiner tree in polynomial time, (3) soap bubbles are classical objects, and (4) classical physics can be simulated by a Turing machine with polynomial slowdown, it follows that P=NP.

>> My immediate reaction was that the paper was a parody.


... visions of the NSA buying lots of washing-up liquid and old wire coat hangers.


Funny. Thanks!


Yeah, that rebuttal paper of Scott's was what I had in mind as well. And that Hitler parody was surprisingly funny, thanks :D


Glad to hear!


It's troubling to me to talk about "nature solving problems". Nature doesn't "solve" anything, those are meanings we humans assign to the various configurations of matter that we see, guided by our understanding of the underlying principles of motion (energy, fields, forces, or however you want to express them).

I mean, did nature "solve the problem" of creating human-like consciousness from inanimate matter? I suppose so, since humans are certainly human-like, and it would appear that there is a reasonable path from t=0 to now that includes building up all the structures at every scale to allow us to come enough into existence to talk about it with each other. But this way of speaking implies something that isn't there, namely an intention. Humans, to take one example, aren't a solution to a problem, we just are. And its the same for all interesting phenomena like, for example, the parabolic shape of a particle being thrown on Earth, or ball packing, or crystal lattices.... The fact that these structures are solutions to a problem is an artifact of the human ability to model the system mathematically in the first place. Nature doesn't model, she just is.


Of course it's just a figure of speech. But it gets awkward to talk about, say, evolution, unless you can use anthropomorphic language at times. Even mathematicians discuss math using sloppy language all the time, only with the silent understanding that they could do everything rigorously if they really wanted to. Programmers anthropomorphize computers and software all the time. Does a sudoku solver not solve a problem just because it has no agency? Or is it different just because the human who wrote it does?


I think the distinction is important to address for the current discussion about P=NP, because of the meta nature of the discussion. Any argument about natural phenomena meaning anything a priori is obscuring the computation required to assert meaning.


Proposals which don't survive contact with experiment. See pp. 3-4 of https://www.scottaaronson.com/papers/npcomplete.pdf


Doesn't that line of argument only work if nature's algorithms can be shown to be both precise (giving exact solutions) and complete (able to solve every valid problem)?


Yeah I doubt that nature is able to solve problems above a certain size. After you put together a certain number of soap bubbles they'll collapse into a black hole. Furthermore there are concerns of speed of light or speed of sound.

Nature might give efficient solvers for specific problems up to a certain size, but we already have "practical" NP problem solvers in classical silicon in the form of SMT solvers.


> we already have "practical" NP problem solvers in classical silicon in the form of SMT solvers.

They're out there, and practical for some problems, but SMT solvers aren't exactly a solved problem. They're used for deductive proofs about program correctness, which are still pretty finicky. [0]

[0] https://blog.adacore.com/using-spark-to-prove-255-bit-intege...


NP-complete problems can be polynomially reduced to each other. If you had a magical oracle that could solve any one specific NP-complete problem in O(1) time, you could turn that into an O(n^k) solution for all NP-complete problems.

That's the idea behind DNA-computing, for instance.

The question is whether it is actually possible to do so at a reasonable cost and at scale.
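As a toy illustration of the reduction idea (not DNA computing itself): PARTITION reduces to SUBSET-SUM in polynomial time, so an oracle for the latter, here faked with brute force, decides the former at essentially no extra cost.

```python
from itertools import combinations

def subset_sum_oracle(nums, target):
    """Stand-in for a hypothetical O(1) oracle; here just a brute-force check
    over all subsets (exponential, of course)."""
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

def partition(nums):
    """PARTITION reduces to SUBSET-SUM in polynomial time:
    a set splits into two equal-sum halves iff some subset hits total/2."""
    total = sum(nums)
    if total % 2:
        return False
    return subset_sum_oracle(nums, total // 2)

print(partition([1, 5, 11, 5]))  # True: {11} and {1, 5, 5}
print(partition([1, 2, 5]))      # False
```

Swap in a genuinely fast `subset_sum_oracle` and `partition` becomes fast too; that composition is all a Karp reduction is.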

Regardless, what "nature" does has 0 to do with P vs NP, computability or anything else. P vs NP is a statement strictly about mathematical objects, there is no notion of "nature" anywhere in it.


Yeah, for sure. I mean, it's often the case (as I've found in my own research) that even relatively unsophisticated heuristics will often nail the global optimum for a good number of problems.

Here's one particular example: I tried what is essentially a greedy heuristic on a physical design problem (a problem where we design a device such that an excited wave within it best matches a specific shape). I had earlier showed that solving the general problem is NP-hard.

The objective value the greedy heuristic returned? .589. It took <100ms to compute this.

Now, I rewrote the problem in a way that was amenable to a mixed-integer convex problem solver (which involved some somewhat-complicated reductions because the problem isn't easily phrased in this way). Put it on Mosek (which is a pretty fast solver) and let it run.

The globally-certified optimal value (to <1%)? .587. Mosek took two and a half hours to find this.

So, tl;dr, the problem was "hard enough" that a branch-and-bound solver took a long time to solve, but still "easy enough" that a basic heuristic might as well have nailed the global optimum.
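That pattern is easy to reproduce on toy instances. Here's a hypothetical comparison of a greedy density heuristic against the brute-force optimum on 0/1 knapsack (not the physical-design problem from the anecdote, just an analogous NP-hard problem):

```python
import random
from itertools import combinations

def greedy_knapsack(values, weights, capacity):
    """Greedy by value density; a classic heuristic with no exactness guarantee."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i],
                   reverse=True)
    total_v, total_w = 0, 0
    for i in order:
        if total_w + weights[i] <= capacity:
            total_v += values[i]
            total_w += weights[i]
    return total_v

def exact_knapsack(values, weights, capacity):
    """Brute force over all subsets: the certified optimum (exponential time)."""
    best = 0
    for r in range(len(values) + 1):
        for subset in combinations(range(len(values)), r):
            if sum(weights[i] for i in subset) <= capacity:
                best = max(best, sum(values[i] for i in subset))
    return best

random.seed(0)
for _ in range(5):
    v = [random.randint(1, 100) for _ in range(12)]
    w = [random.randint(1, 100) for _ in range(12)]
    cap = sum(w) // 2
    g, opt = greedy_knapsack(v, w, cap), exact_knapsack(v, w, cap)
    print(f"greedy={g}, optimal={opt}, ratio={g / opt:.3f}")
```

On random instances like these, the greedy ratio tends to land very close to 1, echoing the .589 vs. .587 result above, though adversarial instances can make greedy arbitrarily bad.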

-----

I think the reason such papers keep popping up (and I see them around surprisingly often) is that people really don't realize how good basic heuristics really can be!


> Here’s a silly one one: I give a derivative whose price is $1 if a solution (that is known to exist) to an NP-hard problem is found by a given time.

With a similar approach, we could 'prove' that market efficiency is non-computable, right?


"If you find me the nth Busy Beaver program for any n>10, I'll give you $n"


I could be wrong, but that isn't known to be undecidable, is it? Merely extremely difficult?

I was thinking If you give me a program that solves the halting problem, I'll give you $1.

On reflection though, that's not just a bargain, it's also an outright contradiction; we know no such program exists, so making this offer doesn't affect the market at all. Perhaps there's a sneakier variation on this.


It is undecidable. If we had an algorithm to compute the number of steps of the Busy Beaver for n states (call it S(n)), then for any Turing machine with n states we could run it for S(n) steps: if it hasn't halted by then, it will never halt. Halting problem solved! (Well, the blank-tape halting problem, which is as hard as the halting problem.)

https://en.wikipedia.org/wiki/Busy_beaver#Proof_for_uncomput...

> On reflection though, that's not just a bargain, it's also an outright contradiction; we know no such program exists, so making this offer doesn't affect the market at all. Perhaps there's a sneakier variation on this.

We do know Busy Beaver values 1 to 4 (via brute force). We have a lower bound for 5. It's definitely possible to compute more of these values. Uncomputable does not mean impossible to compute for specific inputs; it means you cannot find an algorithm that solves the problem in general.

So, maybe by dedicating a bizarre amount of compute, you could find BB #1M to claim your $1M from me. Get rich quick!
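The step-counting argument above can be sketched in code, assuming a hypothetical oracle for the BB step function S(n). Only S(2) = 6 is hard-coded here, since that value is actually known; larger values are uncomputable in general.

```python
def run_tm(delta, max_steps):
    """Simulate a Turing machine on a blank tape for up to max_steps.
    delta maps (state, symbol) -> (write, move, new_state); state 'H' halts.
    Returns the step count if it halted, else None."""
    tape, head, state = {}, 0, 'A'
    for step in range(1, max_steps + 1):
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == 'H':
            return step
    return None

# Hypothetical oracle for the BB step function: the max steps any halting
# n-state machine takes. S(2) = 6 is known; larger values are uncomputable.
S = {2: 6}

def halts_on_blank(delta, n_states):
    """With the (uncomputable) S available, blank-tape halting is decidable:
    run for S(n) steps; if it hasn't halted by then, it never will."""
    return run_tm(delta, S[n_states]) is not None

# The 2-state busy beaver champion: halts after exactly 6 steps.
champion = {('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
            ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H')}
print(halts_on_blank(champion, 2))  # True
```

The simulator is ordinary computable code; all the uncomputability lives in the lookup table `S`, which is exactly what the proof exploits.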


> This is like saying “life is NP-hard.” Like, yeah, of course it is. That doesn’t mean we aren’t pretty good at doing the right thing and rationality (like EMH) is a good, if imperfect tool to analyze behavior.

Most importantly, it doesn't mean that any alternative to markets are not also NP-hard.


> (a) we often don’t care about an exact solution

If we're trying to prove np-hardness we do.


It doesn’t really make sense in this case though, does it? One example is that Knapsack can be approximately solved in polynomial time for any tolerance eps (i.e., it has a fully polynomial-time approximation scheme, or FPTAS). The EMH is essentially only talking about approximate behavior (as the paper also mentions, with repeated experiments).
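For reference, the knapsack FPTAS works by rounding values down to a coarse grid and running an exact value-indexed DP on the rounded instance. A minimal sketch with integer inputs (it returns a lower bound on the chosen set's true value, guaranteed within a (1 - eps) factor of optimal):

```python
def knapsack_fptas(values, weights, capacity, eps):
    """(1 - eps)-approximate 0/1 knapsack in time polynomial in n and 1/eps.
    Scale values down by K = eps * max(values) / n, then run the exact
    minimum-weight-per-value DP on the scaled instance."""
    n = len(values)
    K = eps * max(values) / n
    scaled = [int(v / K) for v in values]
    max_v = sum(scaled)
    INF = float('inf')
    # dp[v] = minimum weight achieving scaled value exactly v
    dp = [0] + [INF] * max_v
    for sv, w in zip(scaled, weights):
        for v in range(max_v, sv - 1, -1):  # reverse order: 0/1, not unbounded
            if dp[v - sv] + w < dp[v]:
                dp[v] = dp[v - sv] + w
    best = max(v for v in range(max_v + 1) if dp[v] <= capacity)
    # Unscale: a lower bound on the true value, within (1 - eps) of optimal.
    return best * K

print(knapsack_fptas([60, 100, 120], [10, 20, 30], 50, 0.1))  # 220.0
```

Shrinking eps tightens the guarantee at the cost of a larger DP table, which is the "fully polynomial" trade-off.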


Which is why this whole venture is more an exercise in pedantry rather than an actual attempt to solve NP problems with markets.


The report, while a fun thought exercise for any aspiring academic, is not a novel insight, and is obviously untrue for most definitions of market efficiency.

If we are going to play the academic one-upmanship game, a more general result is that finding "best" or "multiple" Nash equilibria in N-player games for N > 2 is already NP-hard. The implication of an efficient market would be that every player has a polynomial-time algorithm to solve the NP-hard problem, ergo P=NP.

[1] https://people.csail.mit.edu/costis/simplified.pdf

[2] https://www.quantamagazine.org/in-game-theory-no-clear-path-...

[3] https://arxiv.org/abs/1104.3760#:~:text=Unlike%20general%20N....


Yeah, even with unbounded computational power, Nash equilibria are communication-hard; i.e., you need exponentially many bits for an n-player game to converge to an equilibrium, even when all players have unbounded computational power.

See https://arxiv.org/pdf/1608.06580.pdf.



Curious, why does this specific arxiv submission keep on cropping up popularly over the years? Is this common for any top post?


I think it's because people like to laugh at silly economists for thinking that markets are efficient.

Of course economists know that markets are not perfectly efficient, but what they tend to believe is that markets tend towards efficiency, and markets that are big enough and free enough will correct trivial inefficiencies relatively quickly.

This is why the advice is generally that returns in easy to access markets are very closely related to the risk you took on. If you made better than market returns, it's probably because (whether intentionally or not) you took on more risk than the market as a whole. This doesn't hold true for those with an information edge, but more people believe they have such a thing than actually have it.


I'm also curious.

This is a particularly... interesting (for lack of a better word) submission to appear so repeatedly. Maybe it's just the right word-combination to appear in hn? Computer science + finance, with a touch of academic-ish?


Enough time passed since the collapse of USSR and socialism is becoming popular again.

Socialists need to discredit the market.


Markets are obviously not efficient. The proof is trivial. Have you ever made a mistake? Congrats, that’s your proof. The market is a collection of people making decisions. People are capable of mistakes. People en masse are still people, and just as fallible, if not more. Thus, markets can and constantly do misprice things/inefficiently allocate capital.

These real events are incompatible with perfectly efficient markets:

* tulips

* Bitcoin

* Google (billion-dollar startups)

* crashes

* Theranos

* Salem witch hunts

that’s not to say markets are perfectly inefficient either. But they’re clearly fallible and imperfect. They clearly misinterpret available information all the time, even though they easily could’ve interpreted it correctly. Markets are full of emotion driven irrational actors, not Bayesian robotic egoless predictors, even though we like to pretend we’re the latter sometimes.


But your proof also works against any system that involves people (i.e. all of them), so is the real question whether markets are as efficient as possible given the constraints of having to use fallible people?


The word market loses all meaning without people. Markets are inherently a trade relationship between people.


I agree with your examples and I agree that markets aren't efficient, but the "proof" you present is invalid because individual investors could be inefficient yet that doesn't mean that in aggregate markets are inefficient. (Informally, this reminds me of the central limit theorem.)
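The averaging intuition can be shown with a toy simulation, assuming independent, unbiased individual errors (assumptions real markets notoriously violate, e.g. through correlated herd behavior):

```python
import random
import statistics

random.seed(42)
true_value = 100.0
n_traders = 10_000
# Each trader misprices the asset with independent noise of stdev 20...
estimates = [random.gauss(true_value, 20.0) for _ in range(n_traders)]
# ...but the aggregate "market price" (mean estimate) has stdev ~ 20/sqrt(n)
market_price = statistics.mean(estimates)
typical_individual_error = statistics.mean(abs(e - true_value)
                                           for e in estimates)
print(f"aggregate error:  {abs(market_price - true_value):.2f}")
print(f"individual error: {typical_individual_error:.2f}")
```

The aggregate error comes out roughly a hundred times smaller than the typical individual error here, which is the sense in which inefficient individuals can still produce a near-efficient aggregate, as long as their errors don't point the same way.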


I agree with your logic. To self-interpret: I think my rationale demonstrates that the claim "markets are inefficient" is coherent, and my examples demonstrate that it's true.


The central limit theorem doesn't get you close enough to perfect efficiency to distinguish between P and NP.


As a fan of the EMH..

It is a model. All models are wrong. Some are useful.

The EMH is very useful. For example, a good understanding of its failure modes can lead to opportunities for profit. You list a few potential failure modes.

Or an understanding that "efficiency" depends on the diffusion of information; when information has uneven diffusion times, an efficient market still allows for information-diffusion arbitrage.


I'm going to throw this out there as an idea (it's probably not new) - I would argue the markets are efficient on a long enough time scale. Yes, stock prices don't always instantaneously reflect true value, but on a long enough time scale, months to years, they do.

The time scale is usually dependent on the transparency of the company's financials to investors. For public companies, the time scale is measured in quarters. For private companies, it could be measured in years or decades.

And this actually lines up with the EMH - stock prices reflect all available information. Nothing about the hypothesis says the price reflects the true value when key information is unavailable (e.g. fraud or a private company that doesn't open their books).


> Markets are obviously not efficient. The proof is trivial. Have you ever made a mistake? Congrats, that’s your proof.

The problem with your comment (and this paper) is that it misrepresents what economists mean when they talk about the EMH. A more widely accepted interpretation of the EMH is just that it's impossible to reliably predict asset prices, or colloquially "beat the market", and it has a lot of evidence going for it (e.g. the poor track record of professional traders and fund managers over long periods of time).


That version is also obviously wrong. It is possible for individual people to more correctly analyze the value of an asset than the stock market. People who looked carefully into the mortgages underlying mortgage-backed securities during the housing bubble (e.g., Michael Burry) are a good example of this. They clearly were not right "by accident" - they were right because they carefully analyzed the underlying assets, at a time when most people were not doing so.

In order to make the EMH plausible, you have to restrict it so much that it's essentially meaningless: something on the order of, "If you take the ensemble of market players, they do not beat the market on average." But it's clear that individual market players do often better understand the value of particular assets than the market, and not just because they're lucky, but because they just do better analysis than the crowd does.


This doesn't prove the EMH wrong. In order to prove the EMH wrong you need to show someone like Michael Burry can consistently outperform the market using only publicly available information. Making one good trade (no matter how large) isn't that.

In fact, following your reasoning every trade would invalidate the EMH. If the price goes up afterward then the buyer was "right" and if the price goes down afterward then the seller was "right".


One trade is enough, if the reason for that trade is something other than dumb luck. The idea behind demanding consistency is that you can get lucky with one trade, but that over the long run, your luck will average out, and your true ability will show. The EMH says that your true ability to judge value won't be greater than the market's ability to judge value. But there are cases where it's clear that the trade wasn't merely luck. That's why I gave the example of the housing bubble: people using only public information were able to better judge the value of MBSs, and it wasn't just luck. They were objectively smarter than the market, because they did something that most people weren't doing: they analyzed the underlying assets. Put another way, the market can behave irrationally, and can seriously mis-price assets. In fact, it often does so, and the reason is not simply that there's missing information. The reason is that people are irrational, lazy, hubristic, etc.

> In fact, following your reasoning every trade would invalidate the EMH.

No, only trades in which the person making the right call was right by something other than luck.

The strongest version of the EMH that actually holds is probably something on the order of, "If you pick a random investor, their expected returns are the same as those of the broader market." That is, you have to weaken it until it's trivially correct.


I almost totally agree, and want to add a related observation - it isn't so much that the market is efficient (it isn't) but that anyone who makes the market significantly more efficient will become very wealthy. So the market has big incentives to move towards efficiency.


This is interesting to me. It makes me wonder "are there cases where someone made the market more efficient and was not made very wealthy?" I think you're right that there is a massive incentive for the market to be more efficient, but I suspect there are cases where people were able to develop alpha without being able to capitalize on it, because the market denied or neglected the insight.

Were there free markets in the 17th century, would they have been able to incorporate Galileo's insight? Or would they have joined the church in denunciation? Part of me thinks that if they did eventually come around to the solar system, it wouldn't be all that quick.


Did Jonas Salk make the market more efficient?


"Markets can remain irrational longer than you can remain solvent."[1]

In other words, you can also lose money, even if you're right.

1. Pseudo-Keynes: https://quoteinvestigator.com/2011/08/09/remain-solvent/


Perhaps, but the Google anti-trust suit would seem a timely contradiction too. Monopolies are incentivised to keep the market inefficient (or roughly as efficient as it is) because change jeopardises their dominant position.


A monopoly is not a market. One person is not a group.


A monopoly dominates a market but you are welcome to ignore that.


Interesting. Let me see if I can parrot that back in a different way. If one investor 'messes up' (an inefficiency), another may step in and make things 'closer to efficient', since there is value to be captured there. That would assume an unlimited number of investors, maybe? Inefficiency is therefore removed from the system. On a static view (which the market most certainly is not), it would be in an inefficient state, but as time iterates it would tend toward efficiency as inefficient actors remove themselves.


It’s kind of amazing in retrospect that “the markets are efficient” was a position leading up to 2008 that didn’t get you laughed out of the room.

Nowadays we know that markets aren’t perfectly efficient, 2008 was quite the instruction, and we can now have a serious discussion about degrees of efficiency and inefficiency that the markets achieve.


That's not the definition of an efficient market though. An efficient market isn't a guarantee of a correct price. It just means the known information is reflected in the price:

"The efficient-market hypothesis (EMH) is a hypothesis in financial economics that states that asset prices reflect all available information."

2008 is not a contradiction and the other examples by the parent aren't contradictions either.


This is a misinterpretation of what is meant by the efficient market hypothesis. "a hypothesis that states that share prices reflect all information and consistent alpha generation is impossible". What they're saying is that the market correctly sets share prices based on available information. To say that the market is considering available information is tautological and not interesting. The EMH as it's popularly understood is not just "the market considers information" (again, not an interesting claim) but "the market considers all information and synthesizes it correctly", which is deflating to hopeful investors and trivially false. My examples are contradictions to this hypothesis.


> "the market considers all information and synthesizes it correctly"

This is a further misrepresentation of the efficient market hypothesis. The efficient market hypothesis states that the price agreed upon by the market is accurate to a risk-adjusted level, to the extent that it is not possible to consistently beat the returns of the market.

This is generally proven true by the fact that most funds fail to beat their benchmarks in any given year, and the more long term you start to look, the closer the number of funds that outperform their benchmarks gets to 0.

Page 9 of this document shows this quite conclusively: https://www.spglobal.com/spdji/en/documents/spiva/spiva-us-y...

Anybody who "beats the market" on any given day could only achieve that by discovering an inefficiency in the market's decision making. The efficient market hypothesis doesn't say that this is impossible. It says that it's impossible to do this consistently, which is reflected quite accurately in the historical performance of mutual funds. I guess the theory is flawed in the sense that it doesn't define how long you have to beat the market for before you can be considered 'consistent' at it, but I'd consider the small number of funds that achieve this over any significant period as being within the error bars of luck.


The point is that you cannot beat the market with your interpretation of the available information. This is not a question of the correctness of any given interpretation. Correctness can never be determined; there is no "correct" or "incorrect" interpretation of the available information.

EDIT: So one thing to consider is that markets are predictive. A "correct" interpretation is only correct if it can predict what will happen in the future. And because the future is not predictable (proof needed but whatever) there is no way to know which interpretation is "correct".


But this is also obviously false. Markets are subject to human phenomena, like mob mentality, which is the theme of the examples I listed.

The same way fantastic tennis players can consistently outperform the rest of the competition, fantastic traders can certainly outperform others. Of course, in an ultra-competitive environment, with a massive number of constant competitors, it's very hard to retain a number one spot. It's also hard to predict markets because of the very fact that they're inefficient and irrational. There are so many active participants it's hard to predict how they will react to available information.

Take 2008 for instance. There was public information demonstrating fatal weakness in the housing market, but it was neglected en masse. This makes it very hard to beat the market. Even if you're correct, the market could decide to ignore the facts, and you wouldn't be able to reap any alpha from your observation.


You don't seem to understand the problem. If there is a way to consistently "beat" the market then everyone will adopt the strategy and you're no longer ahead. The only way you can beat the market is by having exclusive access to information or opportunities (dumb luck) and by definition the vast majority of people do not have these, otherwise they wouldn't be exclusive.


But that’s obviously not true, at least not all of the time. In 2008 there was plenty of information indicating that the market was not healthy, and a number of market actors spotted this early on and won big betting against what the market was doing. They neither had insider information nor was it luck; the market in 2008 was catastrophically wrong about what the price of mortgage backed CDOs should be.


> If there is a way to consistently "beat" the market then everyone will adopt the strategy and you're no longer ahead.

Sure, assuming every market participant has knowledge of the winning strategy, and the skills to employ it, which is a pretty big caveat.

By this logic why do some chess players beat others? Wouldn't they all simply adopt the same strategy?


I'm stuck between these definitions, which seem to represent polar opposites. On the one hand, I can follow the argument and agree that by that definition, it is "trivially false". However, how do you square this with the generally correct observation that investors in the long term cannot / don't seem to be able to outperform the markets? Or do you feel this is wrong?


I think it is true that it is very hard to beat the market. That can be true without the market being perfectly efficient. The way I see it is that the market is ultra competitive and full of very competent & very rational (not perfectly rational) individuals with a very high incentive to interpret facts correctly. So it's not easy to beat the market. You have to be very knowledgeable, and invest very deeply, in order to generate insights your competitors don't have. As another commenter mentioned, it's not clear to me how economically sensible it is to make this kind of investment.

If you play chess against someone, and you discover a winning strategy, and you play it repeatedly, they'll eventually discover your strategy and be able to counter you. Your opponents in the market are adaptive in this same way, so trying to always stay ahead is very hard. Some funds find some way (which I am not privy to) to succeed at this, and people still make boat loads of money as professional traders or quants, so there is reason to believe it is possible.

The fact that it is possible to have a very successful career in finance as a quant or trader also seems to me like evidence the efficient market hypothesis (taken in its strongest form, that markets are perfectly efficient) is false.


>The fact that it is possible to have a very successful career in finance as a quant or trader also seems to me like evidence the efficient market hypothesis (taken in its strongest form, that markets are perfectly efficient) is false.

It's the opposite. Traders increase market efficiency up to a certain point. If traders didn't exist then markets couldn't be efficient in the first place. Liquidity problems would persist and cause large spreads between buy and sell prices.


It just isn't true that investors don't beat the markets in the long term, though. Sure, many don't, but there are plenty of examples who do. And really, all it takes is one who does to disprove it.


To be more clear, the statement should have been that on average, they don't. As far as I am aware, this _is_ true, although quite surprising, and not a popular idea among investors. The second part of the argument is interesting. On the one hand, a la Warren Buffet's analogy of thousands of professional coin flippers, it is obvious that a small number will do great, while the average will do average (or slightly worse, because of fees). This does not imply that there are _zero_ people who can consistently and on average outsmart the markets, only that it seems to take more than the median full time, relatively highly trained professional to do so.
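Buffett's coin-flipper analogy is easy to simulate. Here's a minimal sketch (the numbers are mine and purely illustrative): give 10,000 "managers" nothing but 50/50 luck each year and count how many "beat the market" in at least 8 of 10 years.

```python
import random

random.seed(0)

n_managers = 10_000
n_years = 10

# Count managers whose pure-luck record looks like consistent skill.
streaks = 0
for _ in range(n_managers):
    wins = sum(random.random() < 0.5 for _ in range(n_years))
    if wins >= 8:
        streaks += 1

print(f"{streaks} of {n_managers} managers 'beat the market' in >= 8 of {n_years} years")
```

Roughly 5% of them will, by luck alone (the binomial tail P(X >= 8) for Bin(10, 0.5) is about 5.5%); pick the winners after the fact and they look skilled.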


I mean the market literally is the average of everybody's position so I would say it's a tautology that the average performance of everyone who participates in a market will equal an index of that market.

But sure you could always define efficiency in an unattainable way by saying no matter how much you beat the market your streak could end down the line and the limited sample size means it doesn't prove anything.

And to me that's what efficient markets are. This unfalsifiable word game you can play where as long as the definition is flexible enough, you get to say the sentence "markets are efficient".


Is Warren Buffett himself not an example of someone who has been able to consistently generate alpha?

I think this question is hard to comprehend when you blow it up to this massive scale. Here's a simpler (and, I claim, equivalent) problem.

Imagine you're in a room with a friend. You're both tasked with estimating the price of a company. You both read through the company's 10-Ks and whatever other information you can dig up. You read a bit more carefully than your friend, and detect an issue in the company's finances. Now you have alpha, and can develop a better estimate.

The market is this same scenario, but there are millions of folks playing the same game. That means it's harder to find that undiscovered blip of insight, but that doesn't mean it doesn't lie in wait for the diligent. Just as there are still original thoughts, there are still original insights, and possessing them means, for a moment, you can beat the market.


>Is Warren Buffett himself not an example of someone who has been able to consistently generate alpha?

Sure but the exception proves the rule. We don't have a whole generation of Warren Buffets, we only have one.


They might just be statistical outliers. Given infinite time, would any investor beat the market? That's the real question (sticking to the mathematical theory here).

Be careful before holding up a single example as an exception to the rule. Some basic statistical analysis show there was a 30% chance the Red Baron, the most famous WW1 fighter pilot, was just lucky. And obviously his luck eventually ran out.

https://www.scientificamerican.com/article/news-bytes-red-ba...


Or there might not be. If your burden of proof is that no data is valid because new data might come in that disagrees, then sure, it's easy to see why you can't acknowledge that someone has disproved the theory. It also makes the theory unfalsifiable, i.e. meaningless. So we are right back at it not being meaningfully true.


2008 was an example in which known information was not reflected in the price. There was a huge amount of mortgage-backed securities that were backed by non-performing loans. Anyone who did due diligence could have figured this out, and some did. But most people didn't do due diligence, so extremely important, publicly available information was not factored into the price of MBSs.


I have a competing hypothesis: the "efficiently inefficient markets hypothesis". It says that alpha is possible, but that it takes a lot of work and smarts, and that you may be better off working your day job than trying to beat the market unless you have comparative advantages.


The thing is that there is a limited amount of value that the market generates. Like producing half a million tons of steel or something. If you extract more value than the market generates, then you have to take that value from someone else who made a mistake. Therefore the maximum upside from "beating the market" is limited by the amount of mistakes other people are making, and that upside is much smaller than what you get from simply taking market returns.


I think we're arguing semantics. Markets are obviously more efficient than a single investor or you would be able to beat the market with public information. And when you're able to beat the market you're making it more efficient.

It's easy for politicians to blame 2008 on the markets, but I think it was caused by Keynesian government policies.

I assume we'll have similar problems soon with all the stimuli being granted.


Wasn’t 2008 crisis due to excessive risk-taking by government-backed banks?

I doubt it is an “instruction” on markets efficiency — yes, if a government puts a lot of tax and newly-printed money in ponzi-schemes, bad things will happen.


Actually markets are efficient, they just aren't maximally efficient.


That is like saying that Bubblesort is efficient because it is more efficient than https://en.wikipedia.org/wiki/Bogosort :-) Unless you come up with an upper bound on the efficiency loss, your statement is IMHO pretty meaningless.


You're talking about real world markets which are obviously not efficient by theoretical standards. They are not efficient the same way a physical ball is not spherical compared to a mathematical sphere.


Of course there are some errors in real life. The more interesting question is how far from optimality the markets are. To carry on your analogy, a tennis ball is not perfectly spherical but it is round enough for practical use.

Similarly, the market can be 99.99% efficient, in which case it's obviously not entirely perfect but close enough that the average citizen is extremely unlikely to find any real arbitrages (which is what the EMH is essentially about).


I didn't read the full paper carefully, but does this mean P != NP? :)


The paper blatantly lies about proving that if P=NP, then markets are efficient.

It actually argues that if P=NP, and Moore's law continues indefinitely, then eventually someone could build an efficient market.


If Moore's law continues indefinitely, then we will climb the 2^n cost of an NP problem with 2^t computer speed, which means that any specific NP-hard problem will become solvable in an amount of time linear in n.
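A quick back-of-the-envelope sketch of that claim (the doubling model here is an assumption of mine, not from the thread): if the compute budget in period t is 2^t operations and an instance of size n costs 2^n operations, then the first period at which the instance fits in the budget grows only linearly in n.

```python
import math

def first_feasible_period(n: int, base_ops: float = 1.0) -> int:
    """Smallest t with base_ops * 2**t >= 2**n, i.e. the first period
    whose compute budget covers a 2**n-operation instance."""
    return max(0, math.ceil(n - math.log2(base_ops)))

for n in (40, 80, 160):
    # Doubling the instance size only doubles the wait, under this model.
    print(n, first_feasible_period(n))
```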


However the number of stock data points goes up over time, making it more like 2^(n*t) cost.


lol


we did it reddit


Right. The idea that anybody takes efficient markets seriously is outlandish.

Another simple proof is that not everybody bets on the market for every opinion they have, but the market only prices in information if you act in the market.


How much does this matter given that in the vast majority of cases, we don't care about exact/perfect solutions to problems that might be NP complete in the general case, but usually have heuristic methods that work well enough for most practical purposes?


The first corollary that I see is that greedy algorithms will almost certainly suck for society at large. I'm looking specifically at the meme that public companies are somehow required to maximize short-term profits for their shareholders.

Some NP-complete problem classes have polynomial time approximation schemes, but not all, unless P=NP. That is to say, sufficiently large markets may be arbitrarily inefficient. I'm curious to see that dichotomy resolved.


In every case of problem classes I have seen, in my admittedly limited experience, polynomial-time approximation schemes only fail under pathological circumstances. I'm not convinced that there's necessarily a connection between pathological problem instances and problem sizes, and I'm even less convinced that real-world markets would reflect such pathological instances.


Granted, but such a PTAS would need to be agreed upon and faithfully executed by all market players.


Greedy algorithms suck compared to what? If the greedy algorithm still converges fastest and is closest to optimal compared to any other practical scheme, the paper's result still does not say anything of practical use.


They suck at escaping local minima, for one.

Your "if" is doing a ton of work there, and this paper is quite abstract market theory. The "practical use" is to better contextualize the efficient market hypothesis.


They don't escape local minima at all. And that means they suck for society at large compared to what? Random guessing? A corrupt government price system? An omniscient one that can solve NP-hard problems? The "if" motivates answering the question.

Anyway, markets have a lot more going on than a simple greedy algorithm, for instance injecting noise into the optimization to get out of local minima. Markets clearly have a lot of this.


> And that means they suck for society at large compared to what? Random guessing? A corrupt government price system? An omniscient one that can solve NP hard problems?

There's an intriguing recent book (that, admittedly, I haven't read) called "The People's Republic of Walmart" https://www.penguinrandomhouse.com/books/564287/the-peoples-...

If large corporations are the shining beacons of capitalism, how come they are run internally as top-down planned economies? And to the extent they prove that planning can work at scale, could planning work society-wide? Not saying I agree or disagree (again, I haven't actually read it), but dusting off ye olde calculation debate and updating it for today seems like an interesting idea.


The worst part of centrally planned economies is that they are prescriptive and essentially authoritarian in nature. If they make mistakes then nothing prevents them from repeating the same mistake. There is no threat of the central planner being replaced.

If all Walmart does is make mistakes, then they go out of business. Central planning won't work, but federated planning with 20 super-corporations that are in competition with each other might actually be good enough. The only problem is that we both need and want the underdogs (startups), and this would undermine them.


> There is no threat of the central planner being replaced

You are conflating an economic model with a political one. In fact, the central planner doesn't even need to be a person, but a set of assumptions and strategies that can be measured against the benefit they provide and be continually adjusted.

You touch an important point, however. The people most affected by Walmart's central planning, their employees, have no say in what strategies and assumptions the central planner uses and that removes any incentive to provide benefit to them.


The EMH is also conflating an economic model with a political belief system.

The notion that these huge corporate bureaucracies are somehow "efficient" at anything except trashing the environment and homogenising and controlling - if not ultimately economically cannibalising - their own workers and customer base is clearly counterfactual.


> The worst part of centrally planned economies is that they are prescriptive and essentially authoritarian in nature. If they make mistakes then nothing prevents them from repeating the same mistake. There is no threat of the central planner being replaced.

From reading the book synopsis (again, I haven't read the book itself), it seems the authors are proposing some kind of democratic socialism rather than USSR-style authoritarianism. Is it workable? I have no idea. To me it seems that USSR-style authoritarianism and current Western neoliberal capitalism are but two small dots on the political economy spectrum. There are other ways of organizing society as well; certainly worth thought experiments to see what kind of other options there might be and what we might learn from them.


I'd say startups are a much better example. But I didn't come here to argue politics, just asked for someone to describe their superior optimization process (math being my actual interest, so still waiting to hear it). I wouldn't describe "do what Walmart did" as a particularly useful optimization strategy.


> But I didn't come here to argue politics, just asked for someone to describe their superior optimization process

Look. I don't believe that P=NP. I can't describe a superior optimization process, and if I did, I would claim the Clay Math prize. I'm also not an economist.

My takeaway is that any simple strategy is almost certainly bad in the long term. You can optimize your own personal take, but in doing so, "the invisible hand" isn't going to somehow flip that into a benefit for all.


The fact that large corporations are certainly top-down, planned, and successful in a meaningful sense is certainly a rhetorical blow to anyone who believes in a simplistic way that "markets" and "decentralization" are literally the only way to be large and successful.

But if large corporations prove the efficacy of planning at scale, then they surely prove the drawbacks as well. What are the society-wide central planning equivalents of layoffs and maximizing shareholder value? Does a centrally planned corporation outsourcing their supply chain or HR department represent a fundamental failure of central planning?


It seems quite obvious that 'competition' between the various 'planned economies', and choice between them by consumers, is generally a good thing, one that would not work for a fully centrally planned economy.

That said, it is indeed quite interesting to figure out at what size such central planning starts to become non-viable. And for that, observations like "it is viable for Walmart, but not for all of the USA" give some interesting bounds.


I’ve always seen this debate as kind of moot. The thing with optimizing, is that you always optimize with a target.

Large corporations rule top-down because goals are set at the top. That’s very different from a society problem.


Greedy algorithms can be arbitrarily bad on large NP-hard problems. But if we're using terms metaphorically, we could also say each greedy company is just a Monte Carlo sample of the whole economy. Some fail or stagnate in local optima; others succeed for longer. But the economy is too complex to expect anyone to ever find a global optimum, so continuous exploration of the landscape is important for overall growth. Thus we can aim for and admire success but should be vigilant against anticompetitive behaviors that stifle the sampling.
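To make "arbitrarily bad" concrete, here's a toy 0/1-knapsack instance (my own construction, not from the paper): greedy by value density grabs one tiny dense item first, which then blocks the single big item that was the whole optimum, so the approximation ratio degrades without bound as W grows.

```python
def greedy_by_density(items, capacity):
    """Classic greedy heuristic: take items in decreasing value/weight
    order whenever they still fit in the knapsack."""
    total_value, used = 0, 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if used + weight <= capacity:
            total_value += value
            used += weight
    return total_value

W = 1_000_000
# A tiny, dense item vs. one big item that exactly fills the knapsack.
items = [(2, 1), (W, W)]

greedy = greedy_by_density(items, W)
print(greedy)  # 2 -- the optimum is W, so the gap W/2 is unbounded in W
```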


> I'm looking specifically at the meme that public companies are somehow required to maximize short-term profits for their shareholders.

I love that meme for how wrong it is. Public companies don't maximize short-term profits; they maximize profits (with future profits discounted).

The fact that to a lot of people "profits" == "short-term profits" reveals quite a lot about our assumptions and time preferences.


Indeed. I can think of plenty of examples where companies strongly focus on long-term profits, I mean, if they don't it's likely the company won't exist.

Now if someone said companies focus on continuous quarterly growth, and sometimes sacrifice a portion of long-term growth for short-term growth, I could agree with that.


Well, what I think is interesting as an amateur with an interest in economics and investing is that if P!=NP, then it would cast a great deal of doubt on so-called Technical Market Analysis or "technicals", which is predicated on the Efficient Market Hypothesis being true.

My own suspicions have been for quite some time now that fundamentals are what move markets and that technicals are mostly bullshit and are only valid because algo-traders school like fish based on technicals.


The efficient market hypothesis doesn't say anything about the amount of time it might take for markets to iron out inconsistencies and take into account new information; "the market can remain irrational longer than you can stay solvent." It's the sort of thing that's hard to falsify because you can always say "oh, it'll eventually become consistent." Given that, I don't think this fundamentally changes anything about "technicals," even if P!=NP, but my intuition a priori is also the same as yours, that "technicals" are mostly window dressing anyways.


>The efficient market hypothesis doesn't say anything about the amount of time it might take for markets to iron out inconsistencies and take into account new information

Actually it does. The central premise of the Efficient Market Hypothesis is that all relevant information is already priced in.

>"the market can remain irrational longer than you can stay solvent."

Irrational behaviour by uninformed investors is theoretically already priced in, if the market is efficient.


Taking that literally, it's clearly false; information can't affect prices faster than the speed of light.


[flagged]


We're literally talking about a paper comparing EMH to P=NP; isn't pointless pedantry the name of the game?


Ah, but here's an interesting topic for an ambitious research project, "Efficient relativistic markets". The proposal would score cross-disciplinarity brownie points in the evaluation committee.


You might be surprised to learn that the well-known economist Paul Krugman wrote in the late 70s about trade in relativistic scenarios, see https://en.wikipedia.org/wiki/The_Theory_of_Interstellar_Tra... and http://www.standupeconomist.com/pdf/misc/interstellar.pdf (for the paper itself).


Ah, but of course. When you're not a subject matter expert what you think is a novel idea is something the actual experts have already investigated decades ago. sigh, oh well.


I mean, in that sense the efficient market theory is immediately false, since someone must be the first to learn some new information and price it in.

Thing is, the hypothesis can be useful whilst being only approximately true. If it is true to within 0.1%, then in most cases it is quite appropriate to model it as being perfectly true. And obviously, there are other cases where that model isn't useful. In those cases, it would probably still be useful to take the 0.1% number into account.


I thought technical analysis (as opposed to fundamental analysis) assumes markets aren't efficient and are subject to things like herd behavior and other psychological effects of human decision making. Otherwise things like momentum trading wouldn't work (i.e. the price would just gap to the true value if markets were indeed efficient). If technical analysis works for me at times, that would show that markets aren't all that efficient as a whole.

IMO markets can't be efficient because we as humans aren't - our perception of value itself can be subjective and influenced by many things including FOMO, safety in numbers perceptions, risk aversion (usually), etc etc.


> our perception of value itself can be subjective and influenced by many things including FOMO, safety in numbers perceptions, risk aversion (usually), etc etc.

The intuition behind the efficient market hypothesis is that each of these deviations from "homo economicus", if it occurs in a market, is an opportunity for someone else to make money. EMH says that opportunity won't be wasted, at least for very long.


That still means efficiency is lost. E.g. the market might fluctuate between non-optimal states. And if monopolies build up, then there is no easy way to get out of a suboptimal state, because there are no real opportunities for someone else.


Technical analysis doesn't depend on EMH being true, in fact it depends on EMH being false. It is not evidence based in any way.


>How much does this matter given that in the vast majority of cases, we don't care about exact/perfect solutions to problems that might be NP complete in the general case, but usually have heuristic methods that work well enough for most practical purposes?

Well, define "well enough"...

We do get results. Are they "well enough"?


Even better, in practice we have amazing tools nowadays that can solve practical NP complete problems in a reasonable amount of time (I mean, optimal solutions, not OK-ish ones).


Well given that this assumes rational actors and humans aren't really rational, it's probably not very important in practice.


And we know markets to be efficient! Therefore P=NP. QED


The "only if" is clear, but the "if" side seems to be assuming that P=NP entails not just proving the existence of polynomial-time solution algorithms but also constructing said algorithms?


Does this mean the Grossman-Stiglitz paradox proves P != NP?



Like foodgore, but for economics.


From 2010


still arxiv and not published... it does say a lot about the paper.

.... and the paper is written in MS Office.


There are entire conferences out there that only accept papers written in MS Word.

Math, CS and Physics are the exception here.


And it has been cited 39 times in that period, FWICT. If it is a remarkable paper, it has definitely not attracted much attention.


So much for capitalism then?


Inefficient markets still work; you just have to regulate the arbitrage potential so it doesn't eat too much legitimate profit, and regulate scammers to protect information-poor people.

Hyper-optimizing just-in-time inventory and production practices and betting on the outcome at nanosecond timescales? Probably harmful.

Investing in useful businesses? Probably pretty beneficial.


> Hyper-optimizing just-in-time inventory and production practices and betting on the outcome at nanosecond timescales? Probably harmful.

Certainly, betting on nanosecond timescales seems very unlikely to be very beneficial for society at large.


I suspect other solutions also require P=NP.


They just have to be better than what humans can plan.


Yup. Go send yourself to the gulag because somebody published a paper.


... you can't prove mathematical theory with social science 'theories'. The latter is not even well defined. ln(x) \approx x (for interest rates)? Come on...


P==NP on analog photonic machines. A light prism calculates Fourier in O(1) (given a prism is not aware of Big O). Using Fourier, one can factorize a number in O(1), and then solve NP in P.


Whatever you're encoding into the light that gets shined on the prism... that process is probably not gonna be O(1).


It's not, no. What's your point? The factorization step is still O(1).


You can't factorize a number in O(1) if it takes O(n) or whatever to set up the problem.

Everything is O(1) if you can ignore steps.


There's a limited amount of information in the signal to a photonic machine of a given size though. If the inputs are bounded in some sense then a digital computer can calculate Fourier in O(1) too. That doesn't make the problem any easier asymptotically -- you still need more machine or more time to calculate arbitrary Fourier.

That said, recent work on photonic Fourier transforms does look interesting from the perspective of a co-processor. They offer meaningful gains over alternatives.


How can you calculate Fourier in O(1) if the input is bounded?


If the input maxes out at 64 bits, then calculating the Fourier transform is O(64), which is O(1).


I don't follow.


O(whatever) refers to the asymptotic behavior as n gets arbitrarily large. If n is bounded, big O notation doesn’t really mean anything because everything is O(1).


I understand big O; however, Fourier's big-O runtime is bounded by the length of the signal, not by the maximum value of the signal.


The photonic processor in question can only process some maximum M number of bits worth of signal in a given time T though. To have a throughput of better than M/T you need more time or more/better processors (i.e., for large enough inputs the runtime isn't actually O(1) ).

What I was pointing out is that digital implementations have this exact same property -- if the input is bounded in the sense that there are a maximum number of bits being fed into your digital computer then the maximum runtime is a constant equal to the maximum runtime for any of those finite inputs -- i.e., the runtime is O(1) for the inputs you're actually able to provide (the exact same inputs you could provide to the photonic processor to achieve a constant-time Fourier transform). To scale up beyond that threshold you still need more time or more/better processors.


Are we talking about analog computers?


Yes.


Photons are discrete.


What's your point?


You cannot do continuous computation with discrete components.



