Scientists Prove Toxic Assets are Impossible to Regulate (dailykos.com)
29 points by amichail on Oct 22, 2009 | 21 comments


A lot of people are complaining that NP-hard doesn't mean you can't get good approximations.

I would like to remind you people that we are not talking about abstract problem spaces where you can count on the problems being essentially "randomly selected" for some suitable definition of random. We are talking about human-created financial instruments, created by agents with every incentive to game the system. Complexity analysis is a whole different kettle of fish when you have to assume a malicious agent is generating the problem instances! Everything you think you know about approximation may not apply, and probably doesn't, because approximation analyses never (or virtually never) start with "Assume a hostile, malicious agent has constructed your problem instance..."
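(A toy illustration of the point, with made-up numbers, and nothing to do with the paper itself: greedy change-making is optimal for US coin denominations, but an adversary who gets to pick the denominations can make it fail.)

    # Toy sketch (not from the article): a heuristic that is fine on
    # "nice" inputs fails once an adversary chooses the instance.
    def greedy_coins(amount, denoms):
        # repeatedly take as many of the largest denomination as fit
        count = 0
        for d in sorted(denoms, reverse=True):
            count += amount // d
            amount %= d
        return count

    print(greedy_coins(6, [1, 5, 10, 25]))  # 2 coins (5+1) -- optimal
    print(greedy_coins(6, [1, 3, 4]))       # 3 coins (4+1+1); optimal is 2 (3+3)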

The correct mindset here is probably the security mindset, where no matter how small or trivial the "arbitrary command execution as root" vulnerability is, you damn well better close it, because in the computing world the smallest crack can be easily widened large enough to drive a truck through.


Very true. Especially if the approximation algorithms become well known, people can find ways around them.

I wonder if some sort of game theory and/or randomized algorithms can help...

In any event, the root problem was still giving loans to people who shouldn't have gotten them. Regulation really needs to focus on that...


It's also impossible for a traveling salesman to find the best route between cities, but that doesn't mean that salesmen don't travel. Approximations are often Good Enough, and there are a variety of polynomial algorithms that produce Good Enough solutions to NP problems.

Everything in the real world is impossible, but we can try and get good results anyway.
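For a concrete (if toy) example of "Good Enough", here is the nearest-neighbour heuristic for TSP: it runs in O(n^2) and usually lands reasonably close to optimal on random instances, even though it carries no strong worst-case guarantee.

    # Minimal sketch: the nearest-neighbour heuristic for TSP.
    # Polynomial time, and usually "Good Enough" on random instances.
    import math, random

    def nearest_neighbour_tour(points):
        unvisited = set(range(1, len(points)))
        tour = [0]                        # start at an arbitrary city
        while unvisited:
            last = points[tour[-1]]
            nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    pts = [(random.random(), random.random()) for _ in range(100)]
    tour = nearest_neighbour_tour(pts)
    length = sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                 for i in range(len(tour)))
    print(f"tour length: {length:.2f}")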


I believe that the paper argues that this is one of those cases where approximations are not good enough. You cannot read RSA-encrypted email with an approximate factorization algorithm. You can't forge a document and fool anyone with an approximate SHA collision. I'm not entirely convinced by this analogy, but it was made by Appel, and others pointed out in an earlier HN thread that the authors are experts in approximation algorithms.
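(To make the hash half of the analogy concrete with a quick, self-contained demo: being "approximately right" buys you nothing, because nearby inputs hash to completely unrelated digests.)

    # Messages differing in one character hash to unrelated digests, so
    # "approximate" collisions are worthless.
    import hashlib

    a = hashlib.sha256(b"pay the bearer $100").hexdigest()
    b = hashlib.sha256(b"pay the bearer $101").hexdigest()
    print(a)
    print(b)
    matching = sum(x == y for x, y in zip(a, b))
    print(f"{matching}/64 hex chars match")  # about what pure chance predicts (~4)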


Sanjeev Arora, a coauthor of the paper and a leading expert on approximations to NP-hard problems, has posted a FAQ detailing why he thinks no approximation would help:

http://www.cs.princeton.edu/~rongge/derivativeFAQ.html

And also his response to RJ Lipton, who believes approximation ought to work (but then again, Lipton believes P=NP, so for him nothing is impossible):

http://rjlipton.wordpress.com/2009/10/22/helping-wall-street...

These are mostly practical reservations, carefully stated so as to make the case for intractability in the real-world setting (which they do) without proving it in theory. Excerpt:

   current pricing and rating algorithms use monte carlo methods
   and would not solve densest subgraphs even for moderate parameters.
   So at the very least those should be changed.
It also turns out that the problem they reduce their model to is open in terms of finding good approximations. Excerpt from the FAQ:

   The paper relies upon a stronger form of "P not equals NP", namely,
   that the planted dense subgraph problem does not have an efficient
   algorithm. (In fact it is conjectured that there is no algorithm
   to even compute any approximate solutions to this problem).
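For anyone who wants to poke at the model, here is my own rough sketch (all parameters invented) of the "booby-trapping" the paper describes: a seller assigns assets to CDO pools, but over-represents known lemons in a few target pools. Spotting that concentration from the pools alone is the planted dense subgraph problem mentioned above.

    # Hedged sketch, parameters invented for illustration: the seller's
    # booby-trapping, viewed as a bipartite graph of assets x CDOs.
    # Honest pools pick assets uniformly; a cheater over-represents a
    # small set of lemons in a few target CDOs, planting a dense subgraph.
    import random

    N_ASSETS, N_CDOS, POOL_SIZE = 1000, 100, 40
    LEMONS = set(range(20))          # assets the seller knows are bad
    TARGETS = set(range(5))          # CDOs the seller booby-traps

    def honest_pool():
        return random.sample(range(N_ASSETS), POOL_SIZE)

    def rigged_pool():
        bad = random.sample(sorted(LEMONS), POOL_SIZE // 2)
        good = random.sample(range(len(LEMONS), N_ASSETS), POOL_SIZE - len(bad))
        return bad + good

    cdos = [rigged_pool() if c in TARGETS else honest_pool()
            for c in range(N_CDOS)]

    # The buyer sees only `cdos`. Distinguishing these from fully random
    # pools is the planted dense subgraph problem, conjectured to be hard
    # even to approximate.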


>It's also impossible for a traveling salesman to find the best route between cities

It is not impossible, merely Hard.


Which, once translated out of engineering jargon, means "Practically impossible in the general case, assuming you are not willing to accept an approximation."


Exactly. And as the work of this year's Economics Nobel laureate Elinor Ostrom has highlighted, communities often work out cultural norms that resolve difficult coordination and trust issues.

A regulator trying to preempt unfair dealings with rules may have no chance, no matter how wise and disinterested, but people trying diverse strategies over time eventually settle on something that's good enough. (And some scams and business cycles are an inevitable part of that discovery process.)


I only read the abstract and the Daily Kos summary, and I'm not a complexity theorist, but I have to say this seems a little facetious. Sure, the valuation problem might be NP-complete, so technically you can't "solve" it. That has a nice ring to it, and I'm sure lots of members of Congress are going to hoist this paper aloft in hearings while they're busy chewing out the CEO of Lehman.

But approximation techniques are used all the time to obtain near-optimal solutions to NP-hard problems. If you can obtain a valuation for a complex bundle of derivatives that's within 10% of the true value, it seems silly to claim that the whole practice should be tossed out the window. This is like saying salesmen should no longer be able to travel because we cannot solve the traveling salesman problem.


Hold the phone: just because a problem is NP-complete does not mean that it is computationally intractable in practice. Many real-world problems are NP-complete, but people still solve them every day.

And just because there's no way to prove which CDOs have been tampered with doesn't mean you can't figure it out. There are local search algorithms that may give you a pretty good answer, even if you can't know for sure. There's some extra risk in CDOs because of this, fine, but that doesn't mean the market would lock up, or that no one would have any clue if there was misconduct.

Regulators also have extra tools for detecting fraud. Maybe the buyer can't tell, but if the buyer suspects something, the regulator can check the seller's internal records, examine their processes, and in general investigate whether the seller did any tampering.
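For what it's worth, here is one such heuristic -- my example, not anything from the article: greedy "peeling" 2-approximates the densest subgraph by average degree, so it could serve as a cheap first screen for suspicious concentration, even though the paper's planted variant is conjectured hard to approximate.

    # Greedy peeling: repeatedly remove the minimum-degree node, tracking
    # the densest intermediate subgraph (a classic 2-approximation for
    # densest subgraph by average degree).
    def densest_subgraph(adj):
        adj = {v: set(ns) for v, ns in adj.items()}   # copy: node -> neighbours
        edges = sum(len(ns) for ns in adj.values()) // 2
        best_density, best_nodes = 0.0, set(adj)
        while adj:
            density = edges / len(adj)
            if density > best_density:
                best_density, best_nodes = density, set(adj)
            v = min(adj, key=lambda u: len(adj[u]))   # peel the min-degree node
            edges -= len(adj[v])
            for u in adj.pop(v):
                adj[u].discard(v)
        return best_nodes, best_density

    # K4 with a pendant node: peeling recovers the dense core {1, 2, 3, 4}
    k4_plus_tail = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4},
                    4: {1, 2, 3, 5}, 5: {4}}
    print(densest_subgraph(k4_plus_tail))             # ({1, 2, 3, 4}, 1.5)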


As much as I love dailykos at times: on top of the factual inaccuracy you point out, the non-watered-down, non-hysterical version [1] was pretty popular here a little while ago, so I think this article deserves some [dead]-ing.

1. http://news.ycombinator.com/item?id=883316


It's probably even worse than that, in that an improbable concentration isn't by itself evidence of cheating, whether you have found the optimal dense subgraphs or not.

The biggest thing I take from this (apart from it being an interesting analysis in itself) is that if there is an assumption of randomness in the creation of CDOs, that randomness should be verified at creation.
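(One sketch of what "verified at creation" could mean in practice, with all details invented: derive the asset-to-CDO assignment from a seed the issuer commits to before issuance, so anyone can recompute the shuffle afterwards.)

    # Sketch (details invented): make the assignment auditable by deriving
    # it from a committed seed. The issuer publishes sha256(seed) before
    # issuance and reveals seed afterwards; anyone can recompute the shuffle.
    import hashlib, random

    def commit(seed: bytes) -> str:
        return hashlib.sha256(seed).hexdigest()

    def assignment(seed: bytes, n_assets: int, n_cdos: int):
        rng = random.Random(seed)          # deterministic, reproducible
        assets = list(range(n_assets))
        rng.shuffle(assets)
        return [assets[i::n_cdos] for i in range(n_cdos)]

    seed = b"issuer-chosen-entropy"        # revealed only after issuance
    print(commit(seed))                    # published before issuance
    pools = assignment(seed, n_assets=1000, n_cdos=100)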



Why would anyone think that the value of a non-trivial asset could be expressed as a single number?

Let's consider a house. Clearly the amount of money that someone would pay for said house varies from person to person. Which value is "correct"?

Before you jump in with some clever aggregation function, suppose that the owners want to sell said house. Will they agree to the value that your aggregation function produces? (They can choose to delay the sale, so if they don't agree, your function may well deprive someone of the ability to buy said house at a price that is acceptable to said someone.)

Ah, but you say that stock is stock. That's true, but again, different people have different priorities. I probably weight dividend-paying history differently than you do. I have different expectations about inflation. I need to go liquid at different times. The end result is that we disagree about the value of the same share of Microsoft stock.


> Let's consider a house. Clearly the amount of money that someone would pay for said house varies from person to person. Which value is "correct"?

The one that actually results in a deal happening, which often means the highest amount on offer. The more standardized a good is and the more it's traded, the more well-defined its price is. If deals don't happen, there's no market-clearing price, e.g. there is no way to price sex with Mother Teresa.

> Before you jump in with some clever aggregation function, suppose that the owners want to sell said house. Will they agree to the value that your aggregation function produces?

Yes they will, by my definition above :-)


The market-clearing price is one definition of "value", but the folks complaining about the complexity of derivative pricing are clearly unwilling to accept it.

BTW - Folks who don't participate in a deal often complain that the price at which it occurred was "wrong". Folks who can't find someone to buy from them at the price they want have an analogous complaint.


The single number does not have to be 100% accurate to be useful. The end result is that though we disagree about the value of the Microsoft stock, it nonetheless has been assigned a price by some invisible hand.


I would have thought the idea that CDO sellers keeping the most vulnerable tranche protects against cherry-picking was a fairly obvious non-starter anyway. (The following is massively oversimplified for easier explanation, but I think it still applies to the far more complicated real-life equivalent.)

If I make 10 CDOs and I think 5% of the underlying assets are bad, then if I share the bad assets out equally, the junior (first-loss) tranche I keep takes all the damage in each CDO; if that tranche is 5%, I basically make nothing. If instead I make one CDO with 50% bad assets and nine with 100% good assets, the losses beyond 5% fall on other people deeper into the bad CDO's structure, and suddenly 90% of my tranches pay off instead of 0%.
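(A quick toy simulation of that arithmetic, under the same oversimplified assumptions: the seller retains a 5% first-loss tranche in each of 10 CDOs.)

    # Toy check of the arithmetic above: the junior tranche absorbs losses
    # first, so it pays off only to the extent losses stay under 5%.
    def junior_payoffs(bad_fraction_per_cdo, tranche=0.05):
        return [max(0.0, tranche - bad) / tranche for bad in bad_fraction_per_cdo]

    spread_evenly = [0.05] * 10              # 5% bad assets in every CDO
    concentrated  = [0.50] + [0.0] * 9       # all bad assets in one CDO

    print(junior_payoffs(spread_evenly))     # ten zeros: seller recovers nothing
    print(junior_payoffs(concentrated))      # one zero, nine full payoffs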


A fairly good summary, though I'm not certain someone who hasn't studied computability is going to get it. The "See Also" amichail posted is good.


The question now is whether this information will have any effect in Washington. My bet is that it will not.


Likewise, we should outlaw tide charts, since we haven't solved the n-body problem.



