
Markets are efficient if and only if P = NP (2010) - perculis
https://arxiv.org/abs/1002.2284
======
CJefferson
While this is a fun paper, the title is a little strong.

There are three limitations (which apply to many papers about P=NP).

1\. The market could still be efficient, because the situations which must
arise to cause P vs NP problems are very complicated. In particular they
require very expensive indivisible things to buy, whereas in most situations
we can treat things like shares as continuous with only a small error.

2\. Markets could be efficient if P=NP and we know how to solve NP problems in
P, and we actually do it. The title makes it sound like the market will
already be efficient if P=NP, which isn't true.

3\. Even if P=NP, the polynomial could still be big enough that the market
can't be efficient. Similarly, P could fail to equal NP but the exponential be
small enough that markets can still be efficient in reality.

~~~
SolarNet
On two, as this appears to be a cross-disciplinary paper, it's important to
consider that some economists currently claim markets are efficient (the
efficient market hypothesis, a big open question in economics). By drawing a
link between the EMH and P=NP (which many computer scientists believe is
unlikely), the author is linking two open questions with opposing beliefs. So
I think point two is sort of a technicality: with context it should be
understood that the author is specifically talking about two open questions as
they stand today.

Also, to further hammer home the point: due to the phrasing of the EMH,
although no one may currently be using P=NP, markets would still have the
efficiency property _now_ even if no one is exploiting it. Perhaps this sort
of vacuously true statement rubs you the wrong way (as it does me, a bit)
given the strength of the "if and only if" the author used. But if you read
"markets are efficient" as the EMH, then it is still a valid literal
formulation.

On three, sure, that matters in reality. But for the formulation of markets
being efficient as an inherent property of markets (again, the EMH), the size
of the market could be held as effectively infinite (or at least extremely
large) and the property should still hold. At some point the size of the
theoretical market will explode past any fixed polynomial, and for the EMH to
hold P=NP must be true.

~~~
ianai
I don't think economists claim markets are efficient. There are far too many
examples to the contrary - see Amazon, Wells Fargo, and Verizon. While those
companies do technically have competition, they operate with significant
market power.

Also, I once heard an economist explain the EMH as “true if enough people
operate under the assumption that it is not true”

Edit: After waking up a little more, I'm not entirely sure of my claim that
Amazon, Wells Fargo, and Verizon serve as examples against the EMH. But the
latter quote I heard from someone with 30+ years of research.

~~~
yellowstuff
The efficient market theory is that all information, public and private, is
incorporated into all stock prices at all times, so there's no point in
researching companies to try to outperform the stock market. I don't think
anyone believes it is literally true, just that it is a good model for most
stocks most of the time. If everyone believed it then no one would bother to
research stocks, and the markets would be less efficient.

~~~
cimmanom
That's a far more limited claim. There's a very common claim that "markets
are efficient", extending to all markets all the time, and explaining why
market-based solutions are the best way to provide health care, education,
and other (arguably common) goods.

But many markets (the labor market, the health care market, etc) work in such
a way that it's ridiculous to assume that most participants know all public
and private information all or most of the time.

~~~
stult
The EMH is specific to asset markets. So stocks and bonds basically. No
(respectable, well-known) economist has ever argued that all markets are
always efficient.

Also, what people mean when they say markets are efficient in other contexts
is something very different. The EMH specifically describes how quickly and
what kinds of information get incorporated into asset prices. It doesn't make
claims about how markets organize production or optimize utility without
centralized direction, which is usually what people mean when they say free
markets are efficient.

I think a lot of confusion has arisen from the very general-sounding name of
the hypothesis which does not reflect its relatively narrow claims. Not that I
buy the EMH (no pun intended), but that's a different story.

------
zwww
Most people seem to read this as a proof that markets are inherently flawed
and as lending support to their ideological distrust of market economies. I
think that if the author's thesis holds true and P indeed != NP, this kind of
conclusion could spell an even bigger problem for those who advocate
augmenting or replacing market economies with another, typically more
centralized, form of economic calculation. Allende's Cybersyn famously used
linear programming (which is in P) to centrally 'simulate' and improve upon
regular market mechanics. If the author's thesis holds, I think it's actually
an argument in favor of the economic calculation problem talking point of
Hayek and the like: efficient calculation of economic distribution problems is
impossible, and the flawed dynamics of the market are probably close to the
best approximation we can afford.
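
For concreteness: linear programming really is solvable in polynomial time,
and off-the-shelf solvers make the kind of allocation problem a Cybersyn-style
planner leans on easy to state. A minimal sketch with made-up numbers (the
objective and constraints are purely illustrative), using scipy:

    # Toy allocation LP: maximize 3x + 2y subject to x + y <= 4, x <= 2.
    from scipy.optimize import linprog

    res = linprog(c=[-3, -2],                  # linprog minimizes, so negate
                  A_ub=[[1, 1], [1, 0]],
                  b_ub=[4, 2],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)                     # -> [2. 2.] 10.0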

~~~
Iv
When I tried to read about Allende's Cybersyn, all I could find was some
retro-futuristic furniture, but no meat whatsoever about the kind of software
behind it. It looked like pure PR to me. Do you have good sources about it? It
has always intrigued me.

Personally I think it is very unlikely that the markets are close to the best
approximation we can afford. The current market-making agents use limited
intelligence on limited data. It is an efficient system in the sense that it
beats randomness and it beats a central (human) intelligence with (allegedly)
superior access to information.

~~~
baursak
I'm interested in this. So far, the best resources I found are:

\- "Red Plenty" by Francis Spufford, a mix of fiction and non-fiction about
planning experience in the USSR. It includes a rich bibliography and
references to papers published over the past 70 years around this issue.

\- There are a few papers by Chinese economists, most notably this one:
[https://boingboing.net/2017/09/14/platform-
socialism.html](https://boingboing.net/2017/09/14/platform-socialism.html)
(you have to mess around with Sci-Hub mirrors to get a free copy).

\- There are a few papers and books by Michael Ellman, e.g.:
[https://www.amazon.com/Planning-Problems-USSR-
Contribution-M...](https://www.amazon.com/Planning-Problems-USSR-Contribution-
Mathematical/dp/0521202493/ref)

\- I also have a few primers on linear programming in my to-do list, e.g.:
[https://www.amazon.com/gp/product/0486654915/](https://www.amazon.com/gp/product/0486654915/)

\- Somewhat tangential, but "the greatest American capitalist" ripping into
EMH is a fun read too: [https://www8.gsb.columbia.edu/articles/columbia-
business/sup...](https://www8.gsb.columbia.edu/articles/columbia-
business/superinvestors)

None of these really talk about the software still, but I would imagine a
combination of:

\- existing supply-chain systems already in place at Amazon, Walmart, etc.,

\- something along the lines of a non-monetary Kickstarter to gauge the
popularity of ideas from the ground up and encourage innovation

\- strong democratic institutions

\- still allowing a free market at low levels, like individual entrepreneurs
who don't employ anybody (once you employ someone, it must be a co-op).

Dunno, these are just random ideas in my head. :)

~~~
rileyphone
Check out "Towards a New Socialism", by the aforementioned Cockshott and
Cotrell. Cockshott himself is a computer scientist and proposes planning the
economy based on solving a linear system of labour inputs.

~~~
baursak
Ah, yes, forgot to mention them. I haven't read the book yet, but I have read
some of their papers; those are good too.

------
aatchb
I'm amazed that someone has written a paper that considers computational
complexity that isn't written in LaTeX...

~~~
viraptor
That reminds me of Scott Aaronson's post about the early signs that a
complexity paper is unlikely to be valid:
[https://www.scottaaronson.com/blog/?p=304](https://www.scottaaronson.com/blog/?p=304)
Not using TeX is the first point.

------
js8
Frankly, everybody with a bit of economic common sense knows that the
efficient market hypothesis (EMH) is just weird theoretical nonsense which is
nowhere close to describing the real world.

If you want a full critique, read Steve Keen's Debunking Economics, it has a
chapter on EMH.

Oh, and by the way, there are quite a few people who believe that P=NP. Most
famously Donald Knuth. I recently became convinced of that as well.

Amusingly enough, the first thing that a rational person would do upon
discovering a relatively efficient algorithm to solve NP problems would be to
cash in all the Bitcoins. Thus proving in practice that no, markets are not
really efficient. :-)

~~~
psergeant
> Frankly, everybody with a bit of economic common sense knows

Citation? No true Scotsman fallacy?

~~~
js8
I gave a citation. When I say "everybody with a bit of economic common
sense", I mean everybody in the economic profession who is capable of some
elementary reflection on what they are doing. Even most neoclassicals (which
is probably the only school where somebody actually believes in the EMH) know
that many of the theoretical assumptions are bullshit. Another classic
example, aside from the EMH, is the SMD theorem.

I am pretty sure that most economics Nobel prize winners do not believe in the
EMH; off the top of my head, Akerlof and Kahneman.

~~~
psergeant
You gave a link to a popsci book whose Wikipedia Criticism section is ...
unforgiving and relentless.

You then double down on your no true Scotsman fallacy, make a sweeping
unsupported claim about a nebulous group of people, and follow it up with
another conjecture about another group in an attempt to appeal to authority.

We can do better than this.

~~~
js8
So do better! Find me an economist who truly believes in the EMH and is
either not a neoclassical (that's about half of all economists), or makes
some real-world economic decisions (like in a central bank), or has expressed
belief in support of the EMH after the 2008 crash.

I gave you two Nobel prize recipients who actually wrote famous articles
explicitly outlining market inefficiencies.

Regarding Steve Keen, I don't have my copy handy, but I am sure it has plenty
of additional citations. While it is written in an accessible style, it is
certainly not unsupported. (I personally think that every student of economics
should read him, but it's up to each individual what they want to do with
their free time.) Many post-Keynesians I have read have expressed similar
dismay about the EMH.

~~~
psergeant
The product description for the book you quoted:

> Debunking Economics exposes what many non-economists may have suspected and
> a minority of economists have long known: that economic theory is not only
> unpalatable, but also plain wrong.

plainly alludes to the fact that the book is not mainstream economic theory.
People arguing for minority views of complex subjects find — rightly — that
the burden of proof is on them, and not the other way around.

Related: it’s not in the least bit surprising to see the book make an appeal
to common sense over “those experts”. Truly we live in the age of Trump and
Farage.

~~~
js8
And Sharpe and Fama, whom he quotes expressing doubts about the empirical
validity of the EMH, are mainstream?

But you can only lead a horse to water...

~~~
sethrin
What's your opinion on Mandelbrot?

------
QML
What does this mean for the class of PPAD-complete [1] problems?

Someone correct me if I'm wrong, but if:

1\. Nash equilibrium ∈ FNP

2\. "Markets are efficient" => FNP ⊆ FP

then how is this different from "FP = FNP if and only if P = NP" [2], which is
a result already known?

[1]
[https://en.wikipedia.org/wiki/PPAD_(complexity)](https://en.wikipedia.org/wiki/PPAD_\(complexity\))
[2]
[https://en.wikipedia.org/wiki/FNP_(complexity)](https://en.wikipedia.org/wiki/FNP_\(complexity\))

------
jkingsbery
> "But given that there are 3n patterns to test, finding a solution, as
> opposed to verifying one, requires O(3^n), i.e. it is exponential in the
> size of n. For small n, this is computable, even though it is exponential.
> But as n grows, it becomes impossible to check every possible pattern
> quickly."

If a strategy is a sequence of BUY-HOLD-SELL decisions, then just because
there are O(3^n) strategies doesn't mean you need to evaluate them all. It
seems pretty easy (if a strategy is only defined in retrospect) to define a
greedy algorithm that finds the optimal strategy (see page 16).

The author goes on to compare this to the Knapsack problem (pg 19). The thing
that makes the Knapsack problem (and NP-complete problems generally) hard is
that greedy algorithms don't work (as far as we know), whereas it seems like a
greedy algorithm would work for the problem the author has laid out.
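
To make that concrete, here is a minimal sketch of such a greedy pass (my own
construction, not taken from the paper): if a strategy is judged purely in
hindsight, the best position before each known move is obvious, so the optimum
among the 3^n candidate strategies falls out of a single linear scan.

    # Hindsight-optimal positions over a known sequence of price changes.
    # price_changes: list of +1 (UP) / -1 (DOWN) moves.
    def optimal_hindsight_strategy(price_changes):
        positions, payoff = [], 0
        for change in price_changes:
            position = 1 if change > 0 else -1   # long into gains, short into losses
            positions.append(position)
            payoff += position * change
        return positions, payoff

    # O(n) work instead of enumerating O(3^n) strategies:
    print(optimal_hindsight_strategy([1, -1, -1, 1]))   # ([1, -1, -1, 1], 4)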

------
em70
This paper is rubbish. Trading is ultimately a partial information, sequential
game with an unknown number of participants whose payoff functions (and
utilities) are also unknown. Could the market be closer to being efficient if
P=NP? Likely. Does P=NP imply efficiency? Not at all.

~~~
argv_empty
On top of that, the argument rests on the kind of error you expect to see
freshmen making in a discrete math course:

> The basic argument is as follows. For simplicity but without loss of
> generality, assume there are n past price changes, each of which is either
> UP (1) or DOWN (0). How many possible trading strategies are there? Suppose
> we allow a strategy to either be long, short, or neutral to the market. Then
> the list of all possible strategies includes the strategy which would have
> been always neutral, the ones that would have been always neutral but for
> the last day when it would have been either long or short, and so on. In
> other words, there are three possibilities for each of the n past price
> changes; there are 3^n possible strategies.

There are 2^n possible histories of length n. If a strategy maps each history
to one of three positions, there are 3^(2^n) strategies that consider n bits
of history.
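
A quick enumeration for tiny n makes the gap explicit (assuming, as above,
that a strategy is any function from a full n-bit history to one of three
positions):

    # Counting strategies as functions from n-bit histories to 3 positions.
    n = 3
    histories = 2 ** n             # 8 possible n-bit price histories
    strategies = 3 ** histories    # 3^(2^n) = 6561 such functions
    per_step = 3 ** n              # the paper's 3^n = 27 is far smaller
    print(histories, strategies, per_step)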

~~~
kizer
Why wouldn't it be 3^n? Edit: wait, it's because bitstring -> strategy is a
function, right?

~~~
argv_empty
Yes. Mapping from 2^n bitstrings to 3 market positions.

~~~
kizer
Thanks.

------
lend000
Interesting topic and thought experiment. But this theory is not very fleshed
out and not at all convincing (especially the part about using an existing
efficient market to perform computation for anything other than the price of
the underlying instrument, i.e. what the computation is intended for). The
following quote sums up how the author makes very open-ended assumptions:

> So what should the market do? If it is truly efficient, and there exists
> some way to execute all of those separate OCO orders in such a way that an
> overall profit is guaranteed, including taking into account the larger
> transactions costs from executing more orders, then the market, by its
> assumed efficiency, ought to be able to find a way to do so. In other words,
> the market allows us to compute in polynomial time the solution to an
> arbitrary 3-SAT problem.

In reality, most financial markets are pretty efficient, but none are
perfectly efficient -- if they were, it would imply not only perfectly
efficient trading systems and an inability to get an 'edge' on the market, but
also perfectly efficient market systems, which are limited by technology,
conventions, and regulations (for example, significant inefficiencies arise in
US securities from markets not being open all the time, with very little
liquidity available in the 'after hours' sessions). Achieving even 'pretty
good' efficiency requires significant energy, and I'm not sure I understand
how the author can imply that the energy used in the past to calculate the
current price is equivalent to the energy needed to verify the current price.
As a trader, I can tell you that most market participants do not care to
verify past calculations of the current price; they only care about the future
price, and will generate an action from the differential between the predicted
price and the current price.

~~~
Ar-Curunir
> _I'm not sure I understand the author's implication that the energy used in
> the past to calculate the current price is equivalent to the energy to
> verify the current price._

That's just what P = NP means: the cost to verify a solution is the same as
the cost of finding one.

~~~
babypistol
In my opinion P = NP does not mean that. There still might be a big polynomial
difference between verifying a solution and finding one.

~~~
Ar-Curunir
Of course, the concrete costs can be different; I was talking asymptotically
:)

~~~
babypistol
Well, that's the problem, man.

And "asymptotically" is not the right word either, because n^3 is
_asymptotically_ different from n^2. You probably meant poly-time reducible,
which is quite different from "_the same_".

------
babypistol
I have always been puzzled by why we as computer scientists give such
importance to P vs NP. I always thought that even if P = NP, solutions might
still be much harder (but only polynomially) to find than to verify. I always
get angry when people say that P = NP would mean that problems would be
equally as easy to solve as to verify. So, because of that, P vs NP always
seemed irrelevant to me.

But in the article, there is an interesting section on that:

> If P = NP, even with a high exponent on the polynomial, that means that
> checking strategies from the past becomes only polynomially harder as time
> passes and data is aggregated. But so long as the combined computational
> power of financial market participants continues to grow exponentially,
> either through population growth or technological advances, then there will
> always come a time when all past strategies can be quickly backtested by the
> then-prevailing amount of computational power. In short, so long as P = NP,
> the markets will ultimately be efficient, regardless of how high the
> exponent on the polynomial is; the only question is when, and the answer
> depends on the available computational power.

So this section, if I understand it correctly, says that problems in P are
easy because computational power in the world grows exponentially, so we can
assume they will eventually become feasible to solve.

That's an interesting way of looking at it. Is this really the reason why we
consider polynomial problems much easier than NP-hard ones?

~~~
leereeves
That's assuming computational resources could grow forever without limit,
which of course they can't.

~~~
QML
To add on: Moore's law is "dying", and that places further pressure on
algorithms to get faster.

However, even in an exponential world, I am reminded of a quote:

"exponential algorithms make polynomially slow progress, while polynomial
algorithms advance exponentially fast".
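
A back-of-the-envelope sketch of that quote (my own toy numbers): if compute
doubles every year, the largest feasible input size grows additively for a
2^n algorithm but multiplicatively for an n^2 one.

    import math

    # Largest solvable n given a compute budget of 2^year operations.
    for year in (0, 10, 20, 30):
        budget = 2.0 ** year
        n_exponential = math.log2(budget)    # solve 2^n = budget
        n_quadratic = budget ** 0.5          # solve n^2 = budget
        print(year, round(n_exponential), round(n_quadratic))
    # year 30: the 2^n algorithm reaches n~30, the n^2 one reaches n~32768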

------
modeless
Before people get too carried away criticising markets, check this quote from
the paper.

> The results of this paper should not be interpreted as support for
> government intervention into the market; on the contrary, the fact that
> market efficiency and computational efficiency are linked suggests that
> government should no more intervene in the market or regulate market
> participants than it should intervene in computations or regulate computer
> algorithms.

~~~
SolarNet
Sure, but if what the author is saying is true, then it implies that there is
nothing special about markets, and a system involving an equal number of
humans and computers following some other optimization algorithm could achieve
similar results in efficiency.

And if a government sponsored and modified such an algorithm in an attempt to
optimize for equality (second only to efficiency of usage), such a system
could be an effective socialism. It is at least an interesting avenue to
consider if mathematical and computational parallels could be constructed.

~~~
lajhsdfkl
> then it implies that there is nothing special about markets

How so? It implies simply that markets are not efficient. It does not imply
that state control would be more efficient and it does not imply that markets
are not the _most_ efficient way of determining the value of a resource.

What it definitely says however, is that a government committee could not, in
any way, successfully determine the value of all goods and fix prices based on
that determination.

~~~
SolarNet
> It implies simply that markets are not efficient.

No it implies that efficient market states are an NP complete problem. And
that we are likely approximating the optimization of efficiency (of the
allocation of resources) using markets. Different approximation algorithms
have different properties. Might markets be the best in every possible way,
sure, but it's very unlikely given what we know about approximation
algorithms. It's like one of the best variants in one way, perhaps that's the
best way, _but we should figure that sort of thing out_. And that's what this
paper is laying the ground work for.

> What it definitely says however, is that a government committee could not,
> in any way, successfully determine the value of all goods and fix prices
> based on that determination.

That's in its editorializing about markets, which isn't incorrect. But the
larger consequence of efficiency being an NP-complete problem is that there
are many possible algorithms we could use to solve it, given equivalent
resources. If those equivalent resources are thousands (or millions) of
government panels, then we should be able to mathematically prove equivalency.
That's my point.

~~~
lajhsdfkl
> If those equivalent resources are thousands (or millions) of government
> panels then we should be able to mathematically prove equivalency.

Is there a method of solving an NP-complete problem with thousands (or
millions) of government panels?

~~~
SolarNet
Is there a method of solving an NP-complete problem with millions of
companies participating in stock markets?

Your question is not related to my point, and you'd have to at least answer
it for markets first before I would bother to try. My point is that this is
an interesting area of research; we should answer both questions and their
interrelationships.

------
grosjona
The market cannot be efficient because a large portion of public information
is not true.

Even if all the information was true, there is still the problem that humans
have very poor reasoning abilities combined with herd mentality which almost
always overrides the reasoning part.

To predict the market, you don't need to understand the market, you need to
understand people's distorted view of the market.

------
Symmetry
I don't think anybody would claim that even the weak form of the EMH applies
in all cases. Like the idea that objects of different weights fall at the same
speed, it's an approximation that can be very close to true in some cases but
badly misleading in others. If you've ever listened to economists talk about
prediction markets, they always seem to bring up the idea that markets with
low capitalization have low efficiency.

And even looking at the highly capitalized stock market I know at least one
story of a major player that got its start noticing a violation of weak-form
efficiency and becoming very rich by fixing the violation.

There's no reason to think that actual markets are now finally efficient, and
in fact there are some inefficiencies that are publicly known, but which bring
only a small return and require decades of investment to fix, so nobody is
interested in trying.

------
lynal
Based on a quick skim, this is not a good paper. Computer scientists writing
on economics is great; it helps grow new ideas in the field. Unfortunately
they sometimes use economic concepts imprecisely, to the detriment of their
question, methodology, and results.

That's the case here. This paper posits a definition of efficiency, but does
not explain why that definition is correct or how it relates to other
efficiency measures.

A better demonstration of arbitrage opportunities in markets is Wah 2016,
which identifies actual arbitrage opportunities in actual markets.

Separately, what does "Since P probably does not equal NP" mean as a
probabilistic statement?

And what is the correct way to concisely and precisely write: "most people
familiar with the P = NP problem believe with varying degrees of confidence
that P is not equal to NP, but so far no proof exists."

~~~
openasocket
> Separately, what does "Since P probably does not equal NP" mean as a
> probabilistic statement?

It actually does have a formal meaning! This is one of the most interesting
results (in my opinion) on the P vs NP problem. Given a random oracle R,
Pr(P^R != NP^R) = 1. So given a random universe, it is almost certainly true
that your version of P != your version of NP. At least that's how my theory
prof liked to explain it.

Here's a link with the proof:
[http://theory.stanford.edu/~trevisan/cs254-14/lecture04.pdf](http://theory.stanford.edu/~trevisan/cs254-14/lecture04.pdf)

------
TekMol
As I understand it, his argument is that there might be information available
that needs time to interpret, and that the time the market took to interpret
the available data is not sufficient to gain the maximum insight possible. So
somebody with more computing power than the market could beat the market.

I don't see that as an argument against the EMH. I think it is inherent in the
EMH that the market has more computing power than any individual.

I would say that is the core of the EMH: that more people make better
predictions, because they have more computing power.

I always found it strange that the EMH is often defined as the market price
including all available data. As if the data could simply be added without
interpretation.

------
mortdeus
So just to better understand the issue of P=NP, am I right in assuming that
an NP problem is like trying to build a neural net for image recognition?

In the sense that it takes a huge amount of time and images to train the thing
to become smart (in other words, to go through the entire network of neurons
one by one and assign better weight values etc), but when it comes to actually
verifying whether our network is smart, all it takes is to run an image
straight through the network?

And the issue of trying to prove P != NP is essentially trying to prove that
there isn't a magical way to train a neural network with just one image of
training data?

~~~
yoklov
No, I don't think building a neural net for image recognition qualifies.

NP problems are essentially problems where the following two properties hold:

1\. No better algorithm is known than using brute force -- generating every
possible result and checking if it's a valid result.

2\. Checking that the given result is valid is doable in polynomial time or
better. (This is less critical to understanding, but essentially your
`isValidSolution(input, solution)` function needs to take `O(n)`, `O(n^2)`,
`O(n^3)`, (etc.) time or better.)

Essentially, an NP problem is a problem where you have to brute force the
solution, but it's easy to know when you've found the correct one.

If P = NP that means that there are no problems where this is true -- it would
mean that any problem where it's easy to know if you've found the solution
also has an algorithm for finding it more efficiently than brute force.

Your neural network example doesn't apply because training a neural network
doesn't require brute forcing the solution space of the neural network
weights. That would be crazy. So it's a problem in P.
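
To illustrate that shape with the canonical NP-complete problem (a sketch
using boolean satisfiability, nothing market-specific): verification is a
quick pass over the clauses, while the only obvious way to find a solution is
to try all 2^n assignments.

    from itertools import product

    # Polynomial-time check: every clause contains a satisfied literal.
    # A clause is a list of ints: +i means variable i, -i means NOT i.
    def is_valid_solution(clauses, assignment):
        return all(
            any((lit > 0) == assignment[abs(lit)] for lit in clause)
            for clause in clauses
        )

    # Exponential search: try all 2^n assignments.
    def brute_force_sat(clauses, n_vars):
        for bits in product([False, True], repeat=n_vars):
            assignment = dict(enumerate(bits, start=1))
            if is_valid_solution(clauses, assignment):
                return assignment
        return None

    # (x1 or x2) and (not x1 or x2) -> satisfied by x2 = True
    print(brute_force_sat([[1, 2], [-1, 2]], 2))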

~~~
mortdeus
But trying to devise a program that can predict somebody's password without
using brute force is an NP problem? Since it's hard to discover what the
password is, but trivial to check whether it lets you into the vault?
I get the reasoning why people aren't sure if P = NP.

Here is another question.

What defines time? And what about a solution?

Consider the fact that in special relativity we have the twin paradox.

What if I sent a computer on a rocket at close to the speed of light, and on
Earth I had the same machine running a calculation that takes a million years,
and then when the other computer gets back from its interstellar vacation, I
just have one computer ask the other what its part of the solution is?

According to the computer that flew away, the amount of time it takes to
solve the problem could be considered ~P, in a way.

My intuition is telling me that this is probably the wrong way to think about
time complexity though.

~~~
zaarn
Time and solution are usually abstract concepts here.

The time a turing machine takes to solve the problem can be considered local
and in absense of general relativity, it doesn't matter much.

When you talk about time in NP/P, you usually talk about the O Notation (Big O
and friends) which gives you the driving factor of the time needed to solve.

Ie, O(n) = 2 + 3x + 9x^2 + 9^x is n^x complexity because this part of the
equation will grow much quicker than the others. In reality of course, it can
take a while for the biggest part to overtake which is why searching an array
linearly in CPU cache can be faster than a hashmap.
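
A quick numeric look at that dominant-term point (toy numbers):

    # f(n) = 2 + 3n + 9n^2 + 9^n: the exponential term takes over fast,
    # even though the quadratic term keeps pace at the smallest n.
    for n in (1, 2, 5, 10):
        terms = (2, 3 * n, 9 * n ** 2, 9 ** n)
        print(n, terms, "dominant:", max(terms))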

The solution in NP/P is usually reduced to either a predicate being proven
true or the answer being boolean, or something that can be reduced to either
(i.e., solving a jigsaw puzzle can be reduced to "is this jigsaw puzzle
solvable?", which can be proven by presenting a solution).

The problem you present is not in the scope of NP/P. You're already involving
multiple computers, and generally people here still assume general relativity
doesn't exist. Plus, NP still holds because one computer simply accessed a
blackbox function (an oracle) while the other solved it in NP.

Time complexity isn't the only complexity either; you also have space
complexity.

Space complexity tells you how much memory you need to solve a problem, or
rather, how quickly that memory need will grow. You can often trade time
against space: many hard problems get faster if you are willing to spend
exponentially more memory.

Space complexity won't spit out a byte-accurate estimate of memory, but it
can tell you that a memory footprint of f(n) = 100 + 100n will grow linearly
(O(n)) while f(n) = 100 + n^2 will grow quadratically (O(n^2)).

------
tomtimtall
Reading this comment thread is hilarious to a degree. I think it really
highlights how much noise gets thrown into a discussion when your naming
becomes too relatable -- essentially the bikeshedding of theory discussion. If
the Efficient Market Hypothesis had been called the "Fundamental Market
Property Hypothesis M = H" or similar, I doubt most would have made the
comments they made. But because "efficient market" sounds too relatable, most
feel they can comment without even knowing what the specifics of the EMH are.

------
soVeryTired
Meh. Whatever your opinions on P = NP, the efficient markets hypothesis is
unfalsifiable. You can only falsify a joint hypothesis of efficiency plus some
model of information flow into a market.

~~~
jonathanstrange
Wait a minute: if the paper correctly links a theoretical definition of
efficiency to the complexity class, and indeed shows that markets are
efficient iff P=NP, then any future proof that P!=NP falsifies the thesis
that markets are efficient. And most experts agree that if we ever get a
proof, it will be a proof that P!=NP.

Knuth is a notable exception, although to my knowledge he has never really
vigorously advocated P=NP but merely suggested the possibility that P=NP,
with the algorithms that transform NP problems into P problems being very,
very complex but still in P. Seems unlikely, though.

~~~
argv_empty
The paper doesn't even manage to prove that finding a significantly profitable
technical strategy is in NP.

------
inputcoffee
The author's claim only applies to the weak form of market efficiency.

The original Fama paper distinguishes between three forms of the EMH: weak,
semi-strong, and strong.

The weak form only alleges that historical price info is fully incorporated
into the current price. You can still make money by working hard and figuring
things out from information outside the price history.

The weak form applies to purely technical trading that extracts value from
price history, like momentum and the simple moving-average crossover studies
you see.

------
known
[https://en.wikipedia.org/wiki/Information_asymmetry](https://en.wikipedia.org/wiki/Information_asymmetry)
is still rampant e.g [https://www.economist.com/books-and-arts/2018/06/02/the-
rise...](https://www.economist.com/books-and-arts/2018/06/02/the-rise-and-
fall-of-elizabeth-holmes-silicon-valleys-startup-queen)

------
Rainymood
Interesting. It is well known in the high-frequency literature that at
sampling frequencies higher than 5-minute one obtains market microstructure
noise. The prices you observe do not fully reflect the "true" price; i.e.,
you observe bid/ask quotes with a spread, and the "true" price is somewhere
in between. This is due to the bid-ask bounce and other latency factors. How
can markets ever be efficient if we cannot observe the true price of
something?
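
A toy simulation of the bid-ask bounce (made-up numbers): trades alternate
randomly between bid and ask around a flat "true" price, so the trade series
shows volatility even though nothing fundamental moved.

    import random

    # Flat true price; each trade prints at either the bid or the ask.
    true_price, half_spread = 100.0, 0.05
    trades = [true_price + random.choice((-1, 1)) * half_spread
              for _ in range(10)]
    print(trades)   # oscillates between 99.95 and 100.05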

~~~
totalZero
There's a transaction cost in either direction.

If you buy a call option, the market-maker buys stock to hedge it. That moves
the underlier, and raises the price of the call option he just sold. The
reverse is true if you sell a call (or buy a put).

Not only that, but there is a cost for all the people and computers that your
order touches as it gets executed.

Without price impact from transactions, markets can't be efficient, because
new information has to get priced into the market somehow.

------
sova
P = NP when you nail it.

From the paper: "if markets are weak-form efficient, meaning current prices
fully reflect all information available in past prices, then P = NP, meaning
every computational problem whose solution can be verified in polynomial time
can also be solved in polynomial time. I also prove the converse by showing
how we can "program" the market to solve NP-complete problems.

However, there has been no proof that there does not exist some algorithm that
can determine satisfiability in polynomial time. If such an algorithm were
found, then we would have that P = NP. If a proof is discovered that no such
algorithm exists, then we would have that P ≠ NP. Just as most people in the
field of finance believe markets are at least weak-form efficient, most
computer scientists believe P ≠ NP. Gasarch (2002) reports that of 100
respondents to his poll of various theorists who are “people whose opinions
can be taken seriously,” the majority thought the ultimate resolution to the
question would be that P ≠ NP; only 9 percent thought the ultimate resolution
would be that P = NP. The majority of financial academics believe in weak form
efficiency and the majority of computer scientists believe that P ≠ NP. The
result of this paper is that they cannot both be right: either P = NP and the
markets are weakly efficient, or P ≠ NP and the markets are not weakly
efficient. "

~~~
hvidgaard
That actually sounds like a refreshingly fun paper to read.

~~~
sova
Only if you can nerd out about EMH "Efficient Market Hypothesis" for 33 pages

------
arrownot
What kind of markets get created when the input to the NP problem is 2^10000?
I think our idea of markets is based on small-scale transactions, and
extrapolations can't take into account intelligent agents. Also, Arrow's
theorem hints at how small formal systems can't achieve agents' desires.
Anyway, I have only read the title, not the arxiv paper.

------
Yuioup
Can't markets just assume that it is and move on? It's not like markets never
do speculation /s

~~~
captainbland
This is bad news for the technology business when those markets only
'efficiently' allocate capital to companies that attempt to exploit P=NP.

------
MaysonL
Somehow this seems to be a confusion of categories: markets are real-world
mechanisms, while P and NP are mathematical abstractions. Does not compute. To
the extent that it does, it's typical mathematical macroeconomic BS.

~~~
jonathanstrange
Since when do mathematical limitations not apply to the real world? Have you
ever tried to square the circle?

~~~
ernst_klim
Let's start with the fact that circles do not exist in the real world.

~~~
jonathanstrange
Well, that's a quite insubstantial point, to say the least. Mathematical
theorems place upper and lower bounds on what is possible in the real world
within almost every domain. Of course, there may often be _other reasons_ why
something is impossible, too.

------
argestes
Does this also mean if we can prove that markets are efficient then P = NP?

~~~
adtac
Yes.

------
sddfd
Even if P is not equal to NP, the problem underlying efficient markets could
still be 2-approximable, i.e. computing a solution that is at most twice as
bad as the optimal solution is in P.
------
carapace
Er, markets are massively parallel, yes?

Um, am I wrong in thinking that P and NP refer to non-parallel algorithms?

(Apologies in advance if I'm having a brain fart.)

------
splitrocket
Profit is the measure of inefficiency in a market.

------
devnull791101
Free-market efficiency is an evolutionary concept, not an accounting one.
There's no intelligent design.

------
adament
First of all, this paper leaves a lot to be desired in terms of rigor, and
for good reason it is not how mathematics is written by most mathematicians.
It is very conversational, and assumptions and derivations are jumbled
together; while you sometimes find this in breakthrough papers from visionary
mathematicians, it often makes the work harder to verify.

My understanding of the central argument in the paper is the following:

Definition: Let N be a positive integer denoting the length of the history
and M a positive integer denoting the number of assets. A market realization
is an element of {-1, 1}^(N x M), i.e. M vectors of length N where all
entries are either +1 or -1.

Definition: We call a function f: {-1, 1}^(T x M) -> {0, 1}^M satisfying
f \circ s = s \circ f for every s in the symmetric group S_M a technical
strategy of lookback T. The symmetric-group condition just states that
permuting the length-T vectors among the M assets is the same as permuting
the output of the strategy, i.e. that the strategy has no inherent
preferences among the assets.

The payoff of a technical strategy f on a market realization h is given by
payoff(f, h) = \sum_{i=T}^N f(h_{i-T}, ..., h_{i-1}) \cdot h_i, where the
indexation is in the time dimension, i.e. h_i denotes a length-M vector. The
budget of a technical strategy is budget(f) = max_{v \in {-1, 1}^(T x M)}
f(v) \cdot (1, ..., 1), that is, the maximal number of assets it wants to
hold in any given state of the world.

Given a market realization h and positive integers B and K, we say that h is
(B, K) EMH-inconsistent if there exists a technical strategy f such that
budget(f) <= B and payoff(f, h) >= K. If a market realization h is not (B, K)
EMH-inconsistent, we call it (B, K) EMH-consistent.
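
A literal transcription of these definitions into code (this is my reading of
them; the momentum strategy at the end is just an illustrative example):

    import itertools

    # h: market realization as a list of N tuples of +/-1 (one per asset).
    # f: technical strategy mapping a T x M block of moves to a 0/1 vector.
    def payoff(f, h, T):
        total = 0
        for i in range(T, len(h)):
            holdings = f(h[i - T:i])          # 0/1 holdings, length M
            total += sum(w * x for w, x in zip(holdings, h[i]))
        return total

    def budget(f, T, M):
        best = 0
        for flat in itertools.product((-1, 1), repeat=T * M):
            v = [flat[j * M:(j + 1) * M] for j in range(T)]
            best = max(best, sum(f(v)))       # most assets it ever holds
        return best

    # T=1, M=2 momentum strategy: hold whatever went up last period.
    momentum = lambda window: [1 if x > 0 else 0 for x in window[-1]]
    h = [(1, -1), (1, 1), (-1, 1)]
    print(payoff(momentum, h, 1), budget(momentum, 1, 2))   # -> 1 2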

Claim (presented as a theorem in the paper): The problem of determining
whether a market realization is (B, K) EMH-consistent is in P if and only if
the knapsack problem is in P.

Claim: The weak efficient market hypothesis is true if and only if EMH-
consistency is in P.

In the second part of the paper he indicates a model of an order book where
he wants to encode 3-SAT as combinations of market orders. I do not
understand how this is intended to work. That is: suppose all information is
available and incorporated into the market, the information-generating
process is stopped, and there are bid-offer spreads because of transaction
costs, and I irrationally (remember, I am not interested in buying or selling
the security, I am just interested in solving a 3-SAT problem, so my actions
should not influence the price-generation process of an efficient market)
enter an OCO-3 order to buy A, B or C at mid. Why should this result in a
transaction? In the case (a or a or a) and (!a or !a or !a) I make one trade
with myself; in the case (a or a or a) and (b or b or b) and (!a or b or b) I
make one trade with myself, yet one of these problems is satisfiable and the
other is not. Now it seems obvious that by inventing new order types we can
get order-book rules that allow for complex computation to resolve clearing,
but this is a problem with the proposed order types, not the efficient market
hypothesis. A (to me) equivalent avenue of investigation would be to imagine
order types such that deciding the clearing of an order book involves solving
an undecidable problem -- i.e., what are the most reasonable order types and
order-book rules such that we can encode the halting problem?

------
cargocult_coder
Therefore it's false.

------
John_KZ
That's why we need peer-review.

------
maxehmookau
The title of this post scores 10000 HN points.

------
wei_jok
But markets are not efficient ...

~~~
coliveira
Now this will degenerate into a fight between economists and computer
scientists...

------
AKifer
Efficient or not efficient, as long as it can make me rich, I'm OK with it.

------
mortdeus
"For simplicity but without loss of generality, assume there are n past price
changes, each of which is either UP (1) or DOWN (0)."

You know, this is the kind of thing that makes me wonder about how much
quantum computation is going to change the game.

Where we aren't just calculating ups and downs but superpositions between the
two.

~~~
cheschire
Digital is a framework built on top of analog. Boolean logic is the
foundation of all of our modern computing, so we built all of our circuits
around the ability to handle HIGH and LOW signals. What you're describing
seems to me like saying that leveraging an analog signal is the benefit of
quantum computation. If so, I don't really agree. If not, then I
misunderstood.

Either way, I suspect that the superposition between the change in value and
the change in time, in relation to all of the other superpositions in
value/time changes, is the true computational power of quantum technology.

------
viraghr
Have another account here but don't want to take the karma hit for being a
crank. I published this 2-page proof that the EMH is false:

[https://arxiv.org/abs/1011.0423](https://arxiv.org/abs/1011.0423)

(didn't set the date so in the document it's wrong, this was published 2010.)

It doesn't depend on P = NP, it's simply a rigorous proof that EMH is false.

Let's switch gears a second. Here's a famous elementary proof[1] that there
are infinite primes. Suppose there are just finite primes, up to some largest.
Multiply them together and add one. No prime divides the new number (because
"every" prime leaves a remainder 1), so you've just produced a new prime. This
new prime is larger than the largest prime in your finite set because you
multiplied that by the rest of them and added one to get it. So this is a
contradiction, you couldn't have had finite primes up to some largest.

Anyway if you think there might be a largest prime after what you just read,
it just means you don't understand the proof. If you believe EMH might be true
it just means you don't understand the proof that it is false.

Of course, nobody ever even hypothesized that academia was efficient :)

[1]
[https://en.wikipedia.org/wiki/Euclid%27s_theorem#Euclid's_pr...](https://en.wikipedia.org/wiki/Euclid%27s_theorem#Euclid's_proof)

\--

EDIT: no mistake in my comment

~~~
ayepif
Maybe I'm being super pedantic and possibly confrontational (I apologise in
advance), but it jumped out at me here. I think you've misunderstood the
'famous elementary proof' that there are infinitely many primes. You do not
create a new prime as you have suggested (quoted below).

 _" Multiply them together and add one. No prime divides the new number
(because "every" prime leaves a remainder 1), so you've just produced a new
prime. This new prime is larger than the largest prime in your finite set
because you multiplied that by the rest of them and added one to get it."_

Instead you have created a number that may or may not be prime but definitely
requires a new prime number (not in your set) to factorise it.
Counterexample: take your set of prime numbers to be {2,3,5,7,11,13}; then

(2.3.5.7.11.13)+1 = 30031

30031 factorises into 59 × 509, so you have found two prime numbers that are
not in your original set.
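
Easy to check mechanically:

    # 2*3*5*7*11*13 + 1 = 30031 is composite: it factors as 59 * 509,
    # and neither factor is in the original set {2, 3, 5, 7, 11, 13}.
    n = 2 * 3 * 5 * 7 * 11 * 13 + 1
    print(n, n == 59 * 509)   # -> 30031 True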

EDIT: Responding to the edit above. The problem is that you claim to make a
new prime number by multiplying them all together and adding one. You didn't
multiply all the numbers and add one to get a prime number; you multiplied
all the numbers and added one to get a number (POSSIBLY NOT PRIME) whose
FACTORS are prime numbers not in your original 'supposed' finite set. Your
proof essentially lacks the step: IF new_number is prime: proof finished;
ELSE: factor new_number and show that at least one of the factors is not in
your finite set.

EDIT 2: Counterexample number 2. Suppose your finite set of primes is {2,7}:
(2.7)+1 = 15, so you have found 3 and 5 as primes that are not in your
original set and are SMALLER than the largest prime in your original set.
This is now a second mistake in your proof. Whether you are trolling or just
too arrogant to see the mistake/error, I do not know.

~~~
n4r9
You're entirely correct. A very ironic mistake/misunderstanding for the parent
post to make.

Edit: I've thought about this a bit more, and you can save the parent post by
prepending a result like "Every number larger than one is either prime, or
divisible by a prime smaller than itself". In this case you can then assert
that your constructed number is prime. However, this result requires its own
proof and was not mentioned by the parent post.

