

The Algorithm: Idiom of Modern Science  - ypavan
http://www.cs.princeton.edu/~chazelle/pubs/algorithm.html

======
IsaacL
The article looks like it might have some interesting content, but I don't
fancy going through what appears to be a rather long and basic introduction to
Computer Science. Can anyone enlighten me as to whether there's anything worth
reading here?

~~~
gjm11
Not much, I fear. Chazelle is a very smart guy, but here he's trying waaaay
too hard to be cute, and the information density comes out very low.

Here's a one-paragraph summary. Anything that doesn't immediately seem
familiar is an indication that it might be worth skimming the corresponding
bit of Chazelle's article, although frankly there isn't much more content in
his article than in my paragraph.

Progress in computer hardware is slowing; progress in algorithms is where the
fun will be in future. At a high enough level of abstraction, all computers
are basically the same. You can think of a computer as a thing that applies
programs to data. The correspondence between programs and meanings-of-programs
is basically arbitrary, kinda like some early 20th-century French
intellectuals said about natural language. You can treat programs as data,
which e.g. allows you to write programs that make copies of themselves. This
is a bit like how living things make copies of themselves. An important class
of problems takes the form "Find a thing satisfying such-and-such requirements"
where it's clear how to check an alleged solution rapidly, but there's no
obvious way to find a solution in the first place. For instance, finding
mathematical proofs versus checking that they're valid. It turns out to be
good to interpret "rapidly" as "in time that grows at most polynomially in the
size of the problem specification". Making this all rigorous gets you the
notion of an "NP-complete" problem; there are many such, and it turns out that
in some sense solving one is the same as solving them all. No one knows
whether such problems can after all be solved rapidly (in the lingo: whether "P=NP"
is true). If they could, then mathematics and protein-folding and many other
such things would become easy. Also, e-commerce would collapse, because it
depends on public-key cryptography, one version of which depends on the
conjecture that large numbers can't reliably be factored rapidly. There is no
such thing as a program that looks at another program and the data it operates
on, and tells you whether it will ever terminate. Such undecidability seems
kinda like intractability (i.e., unsolvability in polynomial time), but it turns
out that undecidability is useless but intractability is useful, because often
if you replace "find something such that X" with "find something such that
showing it isn't X is intractable" you get an easier problem but one whose
solution is just as useful. For instance, generation of (pseudo-)random
numbers. You can "prove" you know something without revealing what it is
(e.g., demonstrate that you can factor a large number, without revealing the
factors); the same ideas also allow, e.g., two people to determine which of
them is wealthier without either revealing their actual wealth. Similarly, you
can "prove" that you have a proof of a mathematical theorem without revealing
any of the actual steps in the proof. All this depends on P!=NP, or some
slightly stronger variant thereof. Algorithms are cool. Traditional
mathematics isn't enough for modern science; in the future there'll be lots of
simulations and computer models -- algorithms. But we know how to reason with
mathematical formulae, and as yet we have no equally strong "calculus of
algorithms". This needs more research.

(Some of the claims in that paragraph are pretty silly. I think that in every
case that's because what Chazelle wrote is silly in the same way.)
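
As a toy illustration of the find-vs-check asymmetry (my example, not Chazelle's): for the NP-complete subset-sum problem, checking a proposed answer is a single sum, while the obvious way to find an answer tries exponentially many subsets.

```python
from itertools import combinations

def check(nums, subset, target):
    # Verifying a candidate is fast: one membership test, one sum.
    # (Assumes nums are distinct, which keeps the sketch simple.)
    return set(subset) <= set(nums) and sum(subset) == target

def find(nums, target):
    # Finding a solution the obvious way tries every subset:
    # exponential in len(nums). No polynomial-time algorithm is known.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

sol = find([3, 34, 4, 12, 5, 2], 9)
print(sol, check([3, 34, 4, 12, 5, 2], sol, 9))
```

If P=NP, some algorithm would do the `find` step as cheaply (up to a polynomial) as the `check` step; that is the collapse the paragraph above is pointing at.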

~~~
caffeine
Here's a summary of your summary:

 _Algorithms are cool. Traditional mathematics isn't enough for modern
science; in the future there'll be lots of simulations and computer models --
algorithms. But we know how to reason with mathematical formulae, and as yet
we have no equally strong "calculus of algorithms". This needs more research._

