Rosser's Theorem via Turing Machines (2011) (scottaaronson.blog)
67 points by dargscisyhp 7 months ago | 9 comments



When I TA'd computability & complexity as a graduate student, I always loved assigning this proof of Gödel's Theorem, as an easy corollary of the Halting Problem, as homework.

It's beautiful, elegant, and easy to understand. I was introduced to the proof by a note in Sipser's text.
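For anyone who hasn't seen it, the diagonalization behind the Halting Problem (the engine of this proof) can be acted out directly in Python: given any claimed halting decider, build a program that does the opposite of whatever the decider predicts about it. A toy sketch, with all names invented for illustration (a real decider would take a program's source code, not a function object):

```python
def make_diagonal(halts):
    # Build a program g that does the opposite of what `halts` predicts for g.
    def g():
        if halts(g):
            while True:          # decider says "g halts" -> g loops forever
                pass
        return "halted"          # decider says "g loops" -> g halts at once
    return g

# A candidate decider that (wrongly) claims every program runs forever:
def always_says_loops(program):
    return False

g = make_diagonal(always_says_loops)
print(always_says_loops(g))  # the decider predicts g never halts...
print(g())                   # ...but g visibly halts. Contradiction.
```

Whatever decider you hand to make_diagonal, it is guaranteed to be wrong on the program it gets back, which is the whole point.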


This seems like a good place to ask - if your knowledge of (computation/complexity) theory is at the level of the Sipser Introduction to the Theory of Computation book, what's a good learning path to get current?


Disclaimer: I have no clue of a complete learning path. But here are some suggestions you might find interesting.

If you want to go heavy on complexity, you could try 'Combinatorial Optimization: Polyhedra and Efficiency' by Alexander Schrijver. But it's basically the equivalent of Knuth's Art of Computer Programming for complexity (in terms of rigour and breadth of approach).

I have the three volumes of Schrijver's book on my desk, and even occasionally look into them. But I admit I never read the whole thing. I love the historical context he gives, e.g. explaining how maximum flow / minimum cut problems were first formally investigated during the Cold War, when Western boffins wanted to work out the most efficient way to cut the Soviet rail network's capacity to move materiel into Western Europe.
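For anyone who hasn't met the problem, here's a minimal sketch of max-flow via the Edmonds-Karp flavour of augmenting paths, on an invented toy "rail network"; by the max-flow min-cut theorem, the value returned equals the capacity of the cheapest cut separating source from sink:

```python
from collections import deque

def max_flow(capacity, source, sink):
    # Edmonds-Karp: repeatedly find a shortest augmenting path by BFS
    # in the residual graph (dict-of-dicts of remaining capacities).
    residual = {u: dict(edges) for u, edges in capacity.items()}
    for u in list(residual):
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)  # reverse edges
    flow = 0
    while True:
        parent, q = {source: None}, deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximal
        # Bottleneck capacity along the path found, then push flow along it.
        path_flow, v = float('inf'), sink
        while parent[v] is not None:
            path_flow = min(path_flow, residual[parent[v]][v])
            v = parent[v]
        v = sink
        while parent[v] is not None:
            residual[parent[v]][v] -= path_flow
            residual[v][parent[v]] += path_flow
            v = parent[v]
        flow += path_flow

# Toy network: capacities on edges s->a, s->b, a->t, a->b, b->t.
rail = {'s': {'a': 10, 'b': 5}, 'a': {'t': 5, 'b': 15}, 'b': {'t': 10}, 't': {}}
print(max_flow(rail, 's', 't'))
```

Here the minimum cut is {s, a, b} vs {t}, with capacity 5 + 10 = 15, matching the flow value.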

If you want something much lighter, you could check out some of Scott Aaronson's backlog. Eg https://www.scottaaronson.com/papers/philos.pdf 'Why Philosophers Should Care About Computational Complexity'. You can follow his bibliography for more background. Scott's blog backlog is also good.

If you like randomised algorithms, you might also like 'The Discrepancy Method: Randomness and Complexity' by Bernard Chazelle. It's available for free online, e.g. at https://api.pageplace.de/preview/DT0400.9781316047804_A25932...

Apropos Bernard Chazelle, I'm working on a little paper myself. It's about how to simulate the outcome of a series of heap operations (like insert and delete-minimum) in O(n) time, instead of the trivial to achieve O(n log n) you get from a naive implementation. I'm looking for some collaborators, if you are interested.
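For reference, the trivial O(n log n) baseline is just replaying the operation sequence against a binary heap, paying O(log n) per operation; the O(n) result is the commenter's work in progress and isn't sketched here. A minimal version with Python's heapq (the operation format is invented for illustration):

```python
import heapq

def simulate(ops):
    """Replay a well-formed sequence of heap operations and return,
    in order, the values extracted by each delete-min.
    Cost: O(log n) per op, hence O(n log n) overall."""
    heap, extracted = [], []
    for op, *args in ops:
        if op == 'insert':
            heapq.heappush(heap, args[0])
        elif op == 'delete-min':
            extracted.append(heapq.heappop(heap))
    return extracted

print(simulate([('insert', 3), ('insert', 1), ('delete-min',),
                ('insert', 2), ('delete-min',)]))
```

The interesting part of the O(n) question is that you only need the *outcome* of the sequence, not an online data structure, which is what opens the door to doing better than log-factor-per-operation.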


The next step, book-wise, is Arora and Barak's 'Computational Complexity: A Modern Approach'. Of course, that book is already out of date. The breakthrough that comes to mind is Ryan Williams' circuit lower bounds; he wrote a nice explanation of the proof here: https://arxiv.org/abs/1111.1261 He defines all the key terms pretty well, but it probably requires a bit of "maturity" to understand. There are also various breakthroughs to do with interactive proofs.


Not OP, but while I aced my class that used Sipser's book, I still lacked the mathematical comfort/maturity for Arora and Barak. The first class started talking about "random functions" and I had no background for that at the time. I don't know if some kind of advanced discrete math class would have helped fill in the gaps, at least for me. Or maybe a remedial discrete math class; I was mainly an engineer, not originally intending to be a CS major.


There is the textbook by Papadimitriou. I don't have a copy, but I once taught a class that had been based on that book, slightly updated to include elements from Arora and Barak. I think it's still a jump up from Sipser, but perhaps a little gentler than A&B. YMMV.


There is more than one "Rosser's theorem", to wit:

  In number theory, Rosser's theorem states that the nth prime number is greater than n log n, where log is the natural logarithm function. It was published by J. Barkley Rosser in 1939.
(Wikipedia)
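That bound is easy to sanity-check numerically: sieve out the primes and verify p_n > n log n for each of them (the limit below is chosen arbitrarily):

```python
import math

def primes_up_to(limit):
    # Sieve of Eratosthenes.
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

primes = primes_up_to(100000)
# Rosser: the nth prime exceeds n * ln(n) for every n >= 1.
ok = all(p > n * math.log(n) for n, p in enumerate(primes, start=1))
print(ok)
```

Of course a finite check proves nothing; Rosser's 1939 paper establishes the bound for all n.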

Wikipedia refers to what Scott Aaronson is calling Rosser's theorem as "Rosser's trick":

  Rosser's trick uses a formula that says "If this sentence is provable, there is a shorter proof of its negation".
Not that Wikipedia is a citation beyond all reproach; and anyway, having more than one theorem named after you is a good thing.


> a basically-similar argument in Kleene's 1967 textbook.

This textbook is an astonishing piece of work by an astonishing mathematical genius. The amount of all kinds of weird (and useful? I'm not much of a logician myself) logical material in it is mind-boggling, as is its writing style. You can keep coming back to this book for years and find something new in it every time.

Barendregt's book on lambda calculus is another example of such a book.


My professor in college also encouraged us to read Kleene's book whenever we could, but the darn thing was (and still is) so hard to find.



