
"We Really Don't Know How To Compute" - Sussman keynote at Strange Loop - puredanger
https://thestrangeloop.com/sessions/we-really-dont-know-how-to-compute
======
swannodette
I think most people on this thread are missing the point entirely. Computer
Science is the sustained study of processes and how to encode them. Sussman is
simply pointing out the most complex, powerful (yet flexible) processes that
he is aware of and stating that our models of computer programming can't even
begin to describe such processes.

Seems like just the kind of thing that Sussman, who has been invested in logic
programming and constraint logic programming for 30 years, would be
interested in.

Sad I'm going to miss this talk. I'm becoming convinced that future
programming languages will need to combine our rich knowledge of object-
oriented, functional, and (constraint) logic programming.

~~~
elviejo
Completely agree with you. In "My Favourite Interview Question"[1] the author
asks: How would you design a Monopoly game? He goes on to say that with
'basic' OOP you can model the elements: dice, buildings.

But what about the rules? One of his suggestions is to look to the Strategy,
Visitor, and Command patterns.

But I disagree. I want to model the rules using Prolog! That is what Prolog is
great at.

So can the next high-level language please stand up? I just want: top-of-the-
line OOP (Smalltalk), constraint programming (Prolog), functional (Haskell),
and Design by Contract (Eiffel).

And no... I'm not asking for the kitchen sink language [2]

It's simply that these concepts aren't exclusive, and all of them help us
better model reality.

[1] <http://weblog.raganwald.com/2006/06/my-favourite-interview-question.html>
[2] <http://zedshaw.com/essays/kitchensink.html>
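The rules-as-data idea can be sketched even without Prolog. A toy sketch (all
names invented for illustration, not from the linked article): rules live in
data as declarative condition/action pairs that a tiny engine queries, rather
than being buried in object methods.

```python
# Toy sketch of declarative game rules, in the spirit of (but not) Prolog.
# All rule names and state keys are illustrative.

rules = [
    # (condition on game state, resulting action)
    (lambda s: s["doubles_in_a_row"] == 3,     "go_to_jail"),
    (lambda s: s["landed_on"] == "Go To Jail", "go_to_jail"),
    (lambda s: s["landed_on"] == "Income Tax", "pay_tax"),
]

def applicable_actions(state):
    """Return every action whose condition holds in this state."""
    return [action for cond, action in rules if cond(state)]

print(applicable_actions({"doubles_in_a_row": 3, "landed_on": "Chance"}))
# ['go_to_jail']
```

Adding a new rule means appending a pair, not editing class hierarchies, which
is a small taste of what a real logic language gives you for free.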

~~~
jodrellblank
Isn't this something the .Net Framework should be good at? Integrate your OOP
(C#) program with functional (F#) modules, and so on. There are
implementations of Eiffel, Smalltalk and Prolog for .Net too.

------
shawndrost
As many of us know, one fact about dense programs is that they're difficult
for humans to understand. Since human-created programs are sparse, we can
understand them -- which means that we can improve them, and programs don't
have to wait for aeons to improve via genetic selection.

Density is overrated.

------
epsilondelta
100 ms / 10 ms = 10 steps

But that's a straw man argument. The brain is a massively parallel net of
neurons connected by synapses. Saying that it takes "10 steps" to respond to a
stimulus is like saying that a GPU only applies a few pixel shaders on a scene
per frame. Perhaps, but that's over millions of pixels.

Also, human DNA is unique from that of close mammals on a set of base pairs of
size the order of 1-5 percent of our genome. So that means 10-50 MB. That's
actually a pretty substantial size: it can even store a small operating system
kernel (Linux can be compiled to under 10 MB).

Edit: The 1 GB figure comes from the fact that the human genome is almost
3 billion base pairs, where each base pair encodes exactly 2 bits. So
(3x10^9)*2/8 = 750 MB, which rounds up to roughly 1 GB.

[http://www.nature.com/nature/journal/v431/n7011/abs/nature03...](http://www.nature.com/nature/journal/v431/n7011/abs/nature03001.html)
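The arithmetic in the edit above is easy to check:

```python
# Back-of-the-envelope size of the raw human genome, as computed above.
base_pairs = 3_000_000_000   # ~3 billion base pairs
bits_per_base_pair = 2       # 4 possible bases (A, C, G, T) -> 2 bits each
total_bytes = base_pairs * bits_per_base_pair // 8
print(total_bytes)           # 750000000, i.e. 750 MB, rounded up to ~1 GB
```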

~~~
stiff
_Also, human DNA is unique from that of close mammals on a set of base pairs
of size the order of 1-5 percent of our genome. So that means 10-50 MB. That's
actually a pretty substantial size: it can even store a small operating system
kernel (Linux can be compiled to under 10 MB)._

I "passionately love" metaphors and comparisons of this kind, they are so much
meaningful. I mean, 50MB of "code" is "a bit" different when your computer is
build from logic gates and a bit different when your computer is the universe,
or, to be more precise, the laws of physics that determine the final form and
function of the proteins that are build from the DNA. In other words, I don't
think it is right to use a measure of information like it would be a measure
of complexity (which such analogies imply). Also it completely ignores the
very complicated process of development of the brain, where information is
supplied from the outside all the time and without which the brain isn't too
useful.

It's the same with the page linked:

 _We don’t have any idea how to make a description of such a complex machine
that is both dense and flexible._

As if the fact that we have computers built from gates, and not the laws of
the universe, to depend on didn't make the task of writing "dense and flexible
descriptions" quite a bit harder. Especially since we do it with our brains
and don't have millions of years of trial and error at our disposal like
evolution does.

~~~
ckuehne
I agree with quite a bit of your argument. However, what exactly do you think
"computers built from gates" depend on, if not the laws of physics?

~~~
stiff
The question is not really what the computer depends on, but what _the
program_ depends on. The Boolean algebra that computers are an implementation
of is very simple and can be implemented in a wide variety of ways; regardless
of which one you choose, programs can be compiled and run on the new
architecture obtained this way. So you could hypothetically build a biological
computer that realizes ANDs, ORs, etc., in the end providing the same
instruction set as our current PCs, and the unchanged Linux source code
(example from the parent post) could be compiled and run on it. In other
words, while our computers depend on the laws of physics in some way, those
laws are not part of the computational model.
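This substrate independence is easy to demonstrate: pick any physical
realization of one universal gate (NAND here), and the rest of Boolean algebra,
plus everything built on top of it, follows unchanged. A minimal sketch:

```python
# The only "physical" part: replace this with transistors, relays, or a
# biological device and nothing below needs to change.
def nand(a, b):
    return not (a and b)

# Everything else is pure Boolean algebra, derived from NAND alone.
def NOT(a):    return nand(a, a)
def AND(a, b): return nand(nand(a, b), nand(a, b))
def OR(a, b):  return nand(NOT(a), NOT(b))

print(AND(True, False), OR(True, False))  # False True
```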

This is different with DNA. There is no middle layer like Boolean algebra to
abstract out the device that does the computation. The form and function of
the protein that results from connecting the amino acids specified by the DNA
depend directly on physical laws in a very complicated way: if you simulate
protein folding (and consider how hard that is in the first place), you can
see how much the outcome varies when you, for example, change the value of
some physical constant by a small amount. Then all those proteins start
interacting with each other in highly complicated ways, also dependent on a
wide variety of physical laws and on the outside environment. If you consider
DNA a program, those physical laws are part of the computational model,
assuming the concept of a computational model even makes sense when studying
non-man-made artefacts. That's roughly why applying computer science metaphors
to DNA always sounds a bit ridiculous to me.

Of course, it is a different question whether we can find a computational
model that would explain the workings of a healthy, fully developed human
brain; I think this is worth mentioning, as it is easy to confuse those two
questions.

------
ScottBurson
I beg to differ with Gerry. We know a lot about how to compute. It's the
messy, opportunistic, gestalt-driven activity of _thinking_ that we still
don't know much about.

~~~
bdhe
Interesting point: _It's the messy, opportunistic, gestalt-driven activity of
thinking that we still don't know much about._

I have a couple of questions that I would love someone well-versed in AI to
answer:

1\. Does the difficulty of systems like driverless cars arise because we
haven't been able to replicate the feedback loop mechanism that is largely
hardwired? Is it some limitation of control theory (I'm just speculating)? How
is this so fundamentally harder than the ability to exponentiate one 1024-bit
number to another mod a third 1024-bit number (which is done in microseconds)?

2\. With regard to aspects of AI that we might want to interact with, are
human vision and the visual post-processing done in the brain the hardest to
replicate? Is it a matter of unknown algorithms, or rather of massive
parallelism, that gives humans a large advantage? If not, are other senses,
like hearing (voice recognition) or haptics, harder to replicate?

~~~
epsilondelta
A lot of the challenges in making driverless cars have been in visual object
recognition, which is probably the "visual post-processing" you mean.

It is important to note that humans, and other mammals, are very hard-wired to
process vision and other inputs. For example, the retina is more than just a
passive light sensor; it also encodes information about the motion of objects
seen within its visual field:

[http://www.sciencedirect.com/science/article/pii/S0896627307...](http://www.sciencedirect.com/science/article/pii/S0896627307006277)

So in addition to post-processing, there is probably significant pre-
processing done by the sensory extensions of our brain. Similarly, the
physicist Georg Zweig studied the cochlea and found how it mechanically
separates sound into its frequency distribution. Zweig's research on the
cochlea also resulted in the discovery of the continuous wavelet transform,
whose discrete version may be familiar through its use in JPEG2000.

<http://scienceworld.wolfram.com/biography/Zweig.html>

------
seats
Reminded me of this PZ Myers post that really rips into Kurzweil. There are
problems with thinking of DNA too simplistically as a program.

[http://scienceblogs.com/pharyngula/2010/08/ray_kurzweil_does...](http://scienceblogs.com/pharyngula/2010/08/ray_kurzweil_does_not_understa.php)

~~~
Confusion
Well, it's easy to rip into a straw man.

<http://www.kurzweilai.net/ray-kurzweil-responds-to-ray-kurzweil-does-not-understand-the-brain>

------
Rickasaurus
I hope this gets recorded.

~~~
puredanger
It will be recorded and released on <http://infoq.com>

------
marchdown
What I would like to know is how compressible (transcribed) human genome is.

~~~
hxa7241
It is, I believe, fairly well compressible: down to about 200MB (from about
800MB).

(I think I got this from the book 'Genomes' by Brown, 2002.)
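One reason generic compressors do well on genome data is its repetitiveness.
A toy illustration on synthetic, highly repetitive sequence data (not a real
genome, so the ratio here is far better than the roughly 4x quoted above):

```python
import zlib

# Synthetic, highly repetitive stand-in for genomic sequence text.
sequence = ("ACGTACGTTTAGGC" * 50_000).encode()
packed_size = len(sequence) // 4          # naive 2-bits-per-base packing
compressed_size = len(zlib.compress(sequence, 9))
print(compressed_size < packed_size)      # True: repeats beat naive packing
```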

------
gcb
1GB for DNA data assumes we know all the information there is in DNA.

~~~
azakai
Not sure why you are downmodded. I took your comment to imply that there might
be additional information that is not in the standard way we understand DNA,
which is a very legitimate question. But if you meant something else and I am
wrong, maybe the downmod was justified ;)

Getting back to DNA, we measure information there based on the base pairs. But
for all we know, there could be additional sources of information, like some
subtle aspect of physical shape that DNA has, that is also inherited (as part
of the replication process). Perhaps the amount of information encoded is
substantially higher due to that.

(That's wild speculation, of course.)

~~~
wynand
You're right - DNA methylation is inherited (see epigenetics). So DNA alone
does not give you enough information.

~~~
infinite8s
And methylation patterns are believed to affect higher-order structure in DNA.
This higher order structure has to do with the way DNA is packaged when not
being actively read for protein synthesis or cell division. Most of the time
it is stored in a tightly coiled form that renders it inaccessible to most of
the DNA machinery, and it's believed that the methylation pattern on the DNA
affects the structure of this compaction.

------
hasenj
> We don’t have any idea how to make a description of such a complex machine
> that is both dense and flexible.

I've been thinking about this topic for a while and trying to find the right
"words" to describe what I've been thinking about. I think I just found them.

