Hacker News

What about superoptimization? For instance, Stoke (http://stoke.stanford.edu/) or Souper (https://github.com/google/souper). They won't find every optimization, of course, and program equivalence is indeed undecidable, but I'd say they're a good shot at optimal compilation.
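To make the idea concrete, here's a toy sketch of the brute-force flavour of superoptimization: enumerate short instruction sequences over a tiny made-up instruction set and keep the first (hence shortest) one that matches a target function on all test inputs. This is my own illustration, not how Stoke or Souper actually work (Stoke uses stochastic search, Souper uses an SMT solver); all the instruction names here are invented.

```python
from itertools import product

# Tiny made-up instruction set over 8-bit values (hypothetical, for
# illustration only). Each instruction transforms the current value;
# "and_orig" is handled specially: AND with the original input.
OPS = {
    "dec": lambda x: (x - 1) & 0xFF,
    "neg": lambda x: (-x) & 0xFF,
    "and_orig": None,  # special-cased in run()
}

def run(program, x0):
    """Execute a straight-line program on input x0."""
    x = x0
    for op in program:
        x = (x & x0) if op == "and_orig" else OPS[op](x)
    return x

def superoptimize(target, tests, max_len=3):
    """Return the shortest instruction sequence agreeing with `target`
    on every test input, or None. Pure brute force: cost is
    |OPS| ** length, which is why real superoptimizers don't do this."""
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, x) == target(x) for x in tests):
                return program
    return None

# Target: clear the lowest set bit of an 8-bit value, i.e. x & (x - 1).
target = lambda x: x & (x - 1) & 0xFF
found = superoptimize(target, tests=range(256))
# The search discovers the classic two-instruction trick:
# decrement, then AND with the original input.
```

Even this toy version shows the shape of the scaling problem: the search space grows exponentially in program length, so exhaustive enumeration only works for very short sequences.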



Just because a problem is undecidable doesn't mean you can't in fact solve it for a large set of interesting instances (maybe all the instances you care about).


It's decidable provided the problem-space is finite, right?


Every finite problem is decidable in linear time because you can hardcode the solution.
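To spell out the "hardcode the solution" point: for a finite domain you can, in principle, tabulate every answer ahead of time, and the decision procedure degenerates into a table lookup. A toy illustration (my own, with primality standing in for an arbitrary finite problem):

```python
# Finite problem: "is n prime?" for 8-bit n. Because the domain is
# finite (256 inputs), we can precompute every answer once, offline...
def _is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

PRIME_TABLE = [_is_prime(n) for n in range(256)]  # the "hardcoded" solution

# ...after which deciding any instance is a single lookup. This is the
# sense in which every finite problem is trivially decidable; the catch
# (raised downthread) is that building and storing the table can be
# astronomically expensive for realistic domains.
def decide_prime(n):
    return PRIME_TABLE[n]
```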


Right, meaning decidability isn't an issue for superoptimisation of many kinds of problems.


What does that mean in practice though?

I can say that every finite computational problem can be solved in O(1), so if the universe is finite, all real problems can be solved in O(1).

This sounds great until you realise that the constant factors involved would overwhelm any practical instance of this strategy.


> What does that mean in practice though?

It means the idea isn't necessarily wholly impractical. That's not nothing.

I imagine superoptimisers scale extremely poorly, but that's a different question. They're probably best suited to bit-twiddling problems rather than, say, figuring out the best assembly code to implement a parallel Timsort.

I have to admit ignorance here though, I know almost nothing about superoptimisers.



