
Is it really 'Complex'? Or did we just make it 'Complicated'? [video] - mpweiher
https://www.youtube.com/watch?v=ubaX1Smg6pY
======
zackmorris
One of the key points of the talk happens around the 32-37 minute mark,
comparing the crew sizes of various ships. He makes the point that the core of
development is really putting together the requirements, not writing code.
Unfortunately we've put nearly all of the resources of our industry into
engineering rather than science (tactics rather than strategies). What he
stated very succinctly is that a supercomputer can work out the tactics when
provided a strategy, so why are we doing this all by hand?

That’s as far as I’ve gotten in the video so far, but I find it kind of
haunting that I already know what he’s talking about (brute force, simulated
annealing, genetic algorithms, and other simple formulas that evolve a
solution) and that these all run horribly on current hardware but are trivial
to run on highly parallel CPUs like the kind that can be built with FPGAs.
They would also map quite well to existing computer networks, a bit like
SETI@home.
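
(For the curious, the kind of "simple formula that evolves a solution" I mean
fits in a page. A toy genetic algorithm in Python - parameters and target made
up for illustration, nothing from the talk:)

    import random

    TARGET = [1] * 32                 # toy goal: evolve an all-ones bit string
    POP, GENS, MUT = 50, 200, 0.02    # population, generations, mutation rate

    def fitness(bits):
        # count positions that already match the target
        return sum(b == t for b, t in zip(bits, TARGET))

    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]      # selection: keep the fitter half
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(TARGET))             # single-point crossover
            child = [1 - g if random.random() < MUT else g  # point mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children

    print(fitness(max(pop, key=fitness)), "of", len(TARGET))

Every fitness evaluation within a generation is independent, which is why this
style of search maps so naturally onto parallel hardware or a SETI@home-style
network.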

Dunno about anyone else, but nearly the entirety of what I do is a waste of
time, and I know it, deep down inside. That’s been a big factor in bouts with
depression over the years and things like imposter syndrome. I really quite
despise the fact that I’m more of a translator between the spoken word and
computer code than an engineer, architect or even craftsman. Unfortunately I
follow the money for survival, but if it weren’t for that, I really feel that
I could be doing world-class work for what basically amounts to room and
board. I wonder if anyone here feels the same…

~~~
david927
That's exactly it. I sometimes wonder how many brilliant people have left this
industry because they were similarly enlightened. (One prominent person in
applied computer science recently told me she was considering going into
medicine because she wanted to do something useful.)

It doesn't have to be like this. We have the brain power. But I feel we
squander it on making silly websites/apps when we could be making _history_.

~~~
jsprogrammer
I think the previous poster already identified the problem:

>I follow the money for survival

Maybe we can have a system where people can survive without money?

~~~
colund
Nice; however, that would remove an incentive for innovation.

~~~
perfunctory
> incentive for innovation

Do you think Alan Kay was incentivised by money to do all the stuff he talked
about in this video?

~~~
minthd
OK - so Alan Kay did work on creating this great system. Is there any
knowledge from it used in a practical application today? Probably not, because:

a. It's probably too soon.

b. It often takes boring work (and money) to transfer pure ideas to successful
practical applications.

~~~
perfunctory
We were talking about incentives for innovation, not incentives for boring
work. I agree that boring work should be incentivised by money.

~~~
jsprogrammer
I'd be interested in going the route of: boring work should be eliminated
through automation or obsolescence.

------
sivers
Very interesting related talk about complecting things - "Simple Made Easy",
by Rich Hickey, inventor of Clojure:

[http://www.infoq.com/presentations/Simple-Made-Easy](http://www.infoq.com/presentations/Simple-Made-Easy)

If you're a Ruby person, maybe watch this version instead, since it's almost
the same talk but given at a Rails conference, with a few Rails-specific
references:

[https://www.youtube.com/watch?v=rI8tNMsozo0](https://www.youtube.com/watch?v=rI8tNMsozo0)

------
nilkn
I think there's an unfortunate trend throughout a lot of human activity to
make things complicated. The prototypical example in my mind has always been
the Monty Hall problem, where you often see convoluted explanations or
detailed probabilistic calculations, but the solution is as plain as can be
when you consider that if you switch you win iff you chose a goat and if
you stay you win iff you chose the car.

I've also always been a fan of Feynman's assertion that we haven't really
understood something until it can be explained in a freshman lecture. At the
end of his life he gave a now famous explanation of QED that required almost
no mathematics and yet was in fact quite an accurate representation of the
theory.

~~~
cbd1984
> The prototypical example in my mind has always been the Monty Hall problem,
> where you often see convoluted explanations or detailed probabilistic
> calculations, but the solution is as plain as can be when you consider
> that if you switch you win iff you chose a goat and if you stay you win iff
> you chose the car.

The problem here is, when two people disagree, they both think the problem is
simple and they both think their answer is trivial. The thing is, they haven't
solved the same problem.

In the standard formulation, where switching is the best strategy, Monty's
actions are not random: He always knows which door has the good prize ("A new
car!") and which doors have the bad prize ("A goat.") and he'll never pick the
door with the good prize.

If you hear a statement of the problem, or perhaps a slight _misstatement_ of
the problem, and conclude that Monty is just as in the dark as you are on
which door hides which prize and just happened to have picked one of the goat-
concealing doors, then switching confers no advantage.

A large part of simplicity is knowing enough to state the problem accurately,
which circles back to your paragraph about Feynman: understanding his initial
understanding of QED required understanding everything he knew that led him
to his original understanding of the problem QED solved; for his final
formulation of QED, he understood the problems in terms of more fundamental
concepts, and could therefore solve them using those same concepts.
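
(If anyone wants to check the two formulations numerically, here's a quick
Monte Carlo sketch - Python, all names invented for illustration, not anything
from the talk:)

    import random

    def trial(host_knows):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        others = [d for d in doors if d != pick]
        if host_knows:
            # standard formulation: Monty never reveals the car
            opened = next(d for d in others if d != car)
        else:
            # misstated formulation: Monty is guessing
            opened = random.choice(others)
        if opened == car:
            return None  # spoiled game; only possible with a guessing host
        switched = next(d for d in doors if d not in (pick, opened))
        return switched == car

    for host_knows in (True, False):
        runs = (trial(host_knows) for _ in range(100_000))
        wins = [w for w in runs if w is not None]
        print(host_knows, round(sum(wins) / len(wins), 3))
    # True  -> ~0.667 (switching wins 2/3 of the time)
    # False -> ~0.5   (given a goat was revealed, switching confers no advantage)

Same observation ("a goat was revealed"), different likelihoods, different
answer - which is exactly why stating the problem accurately matters.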

~~~
nilkn
Actually, if Monty chooses randomly and just happens to pick a door with a
goat, you should still switch, because it's still true that you win if you
switch iff you chose a goat, and you win if you stay iff you chose the car
already. The host's method of selection is not relevant given the observation
that the host selected a goat. The host's method _is_ relevant if we forgo
that observation and iterate the game repeatedly.

------
david927
A doctor told a programmer friend once, "I have to fix problems that God gave
us. You have to fix problems you give yourselves."

~~~
a_dutch_guy
God on HN? Come on, the world isn't flat anymore.

~~~
jacquesm
You don't have to take that literally in order to understand it, but just in
case you really missed the point: the doctor is essentially saying that he
deals with problems that have no direct cause or architect he can consult,
whereas the IT people have essentially only themselves to blame: they have
(usually) made the mess they're in themselves, rather than having it handed
down to them through the mists of time or from above.

In a general sense that is true, but in a more specific sense it is definitely
possible to be handed a project without docs and with horrible problems, and
then it might as well be an organic problem. Even so, the simplest biology
dwarfs the largest IT systems in complexity.

~~~
dasil003
> _Even so, the simplest biology dwarfs the largest IT systems in complexity._

And also mercifully in continuous real-world testing over time with no Big
Rewrite to be seen. While an individual organism might have unsolvable
problems, nature doesn't have the same tolerance for utterly intractable
systems that we sometimes see in the software community.

------
ad_hominem
See also "No Silver Bullet — Essence and Accidents of Software Engineering"[1]
by Fred Brooks wherein "essential complexity" vs "accidental complexity" is
explored.

[1]:
[http://en.wikipedia.org/wiki/No_Silver_Bullet](http://en.wikipedia.org/wiki/No_Silver_Bullet)

------
TheBiv
I have made it to 31 min in, and I genuinely think he made this talk too
complicated.

~~~
bbcbasic
Ha ha I skipped through until he started talking about code again. Go back and
do that!

------
PuercoPop
This appears to be a presentation of part of the work of VPRI[0]; you should
check it out, they made a whole OS and the apps (word processor, PDF viewer,
web browser, etc.) in under 20K lines of code. There is more info on:
[http://www.vpri.org/vp_wiki/index.php/Main_Page](http://www.vpri.org/vp_wiki/index.php/Main_Page)

Besides Nile, the OMeta parser is pretty cool too, imho.

~~~
renox
> you should check it out, they made a whole OS and the apps (word processor,
> PDF viewer, web browser, etc.) in under 20K lines of code.

Check it out? Where is the source code for the OS? I haven't found it in the
link you gave.

------
3minus1
Alan Kay seems to enjoy making asinine statements. "Teachers are taught
science as a religion." "Romans had the best cement ever made." "We live in
the century where the great art forms are the merging of science and
engineering."

There's also a kind of arrogance when he shits on Intel and Dijkstra that I
find off-putting.

~~~
anfedorov
Only familiar with one of them, but "teachers are taught science as a
religion" seems spot-on.

~~~
3minus1
I disagree. Teaching science as a series of facts is in no way akin to
religious indoctrination.

Edit: And Kay is also ignoring the fact that every high school science
curriculum covers the scientific method and includes hands-on experimentation.

~~~
e12e
You've never seen how students too often get better grades on "perfect"
science experiments than on honest reports of "failures" (the experiment
differed from the expected outcome)?

You must've been in an elite high school.

~~~
3minus1
That actually did happen to me sometimes. I don't think you can convince me
that there's anything religious about it.

------
m_mueller
This talk blows everything out of the water. It's like the current culmination
of the work from Douglas Engelbart to Bret Victor. Maybe, just maybe, there
could be a rebirth of personal computing coming out of this research; it looks
at least promising and ready for use, as demonstrated by Kay.

~~~
mpweiher
>It's like the current culmination of the work from Douglas Engelbart to Bret
Victor.

This is no coincidence.

"Silicon Valley will be in trouble when it runs out of Doug Engelbarts ideas"
\-- Alan Kay

Bret works for Alan now, as far as I know.

~~~
m_mueller
I noticed that in his talk. It easily looks like the most prestigious CS R&D
lab in the world right now.

------
serve_yay
Well, we made it complicated because one must complicate pure things to make
them real. I guess things would be better if our code ran on some
hypothetically better CPU than what Intel makes. But "actually existing in the
world" is a valuable property too.

Plato was pretty cool, though, I agree.

~~~
ozten
So the "frank" system, which he uses throughout the presentation is real
enough to browse the internet and author active essays in. It is written in
1000x less code than Powerpoint running on Windows 7.

------
WoodenChair
People in this thread who seem to be asserting that building for the Web as
opposed to native apps is an example of how things should be "less complex"
perhaps have never written for both platforms... Also the entire argument
completely misses the point of the lecture.

~~~
ankurdhama
I think those people are just suggesting that we should be writing apps, and
NOT an Android app, an iOS app, a Windows app, a Linux app, etc.

~~~
rimantas
We will do that once there is just one OS, not Android, iOS, Windows, Linux,
etc. And no, the web does not fit as a substitute for a common OS.

------
samspot
15 minutes in, I am finding this talk to be slow and unfocused. I assume this
is getting a lot of upvotes because we all agree with the theme?

~~~
bbcbasic
Seeing the 1:42:51 length scared me at first. That's going to take me days to
get through given how much free time I have per day! Anyway, the best bet is
to skim through bits of it. It is all interesting and good, but I do prefer it
when he talks about code.

~~~
dredmorbius
This is where I find 1) downloading the video and 2) playing it back using a
local media player to be _vastly_ superior to viewing online. I can squeeze
the window down to a small size (400px presently), speed it up (150%), start
and stop it at will, zoom to full screen, and _not_ have to figure out where
in a stack of browser windows and tabs it is.

Pretty much the point I was making some time back where I got in a tiff with a
documentary creator for D/L'ing his video from Vimeo (its browser player has
_pants_ UI/UX in terms of scanning back and forth -- the site switches
_videos_ rather than advances/retreats through the actual video you're
watching).

And then as now I maintain: users control presentation; once you've released
your work, you're letting your baby fly. Insisting that people use it only how
you see fit is itself infantile, and ultimately futile.

(Fitting with my Vanity theme above).

------
htor
This video is very interesting.

Very good point about the manufacturing of processors being backwards. Why
_not_ put an interpreter inside the L1 cache that accommodates higher-level
programming of the computer? Why are we still stuck with assembly as the
interface to the CPU?

~~~
frik
We already had Lisp running directly on a CPU:
[http://en.wikipedia.org/wiki/Lisp_machine](http://en.wikipedia.org/wiki/Lisp_machine)

Java runs directly on hardware:
[http://en.wikipedia.org/wiki/Java_processor](http://en.wikipedia.org/wiki/Java_processor),
[http://en.wikipedia.org/wiki/Java_Card](http://en.wikipedia.org/wiki/Java_Card)

Pascal MicroEngine:
[http://en.wikipedia.org/wiki/Pascal_MicroEngine](http://en.wikipedia.org/wiki/Pascal_MicroEngine)

And some more: [http://en.wikipedia.org/wiki/Category:High-level_language_co...](http://en.wikipedia.org/wiki/Category:High-level_language_computer_architecture)

And the x86 instruction set is CISC, but the (modern) x86 architecture is RISC
(inside) - so your modern Intel/AMD CPU already does it.
[http://stackoverflow.com/questions/13071221/is-x86-risc-or-c...](http://stackoverflow.com/questions/13071221/is-x86-risc-or-cisc)

Backward compatibility aside, we could have CPUs that support ANSI C or the
upcoming Rust directly in hardware, instead of ASM.

~~~
vezzy-fnord
Don't forget the Burroughs B5000, from 1961:
[https://en.wikipedia.org/wiki/Burroughs_large_systems#B5000](https://en.wikipedia.org/wiki/Burroughs_large_systems#B5000)

The fact that we had an architecture running entirely on ALGOL, with no ASM,
as far back as 54 years ago is quite astonishing, followed by depressing.

~~~
Animats
I know. I once took a computer architecture course from Bill McKeeman at UCSC,
and we had an obsolete B5500 to play with. We got to step through instruction
execution from the front panel, watching operands push and pop off the stack.
The top two locations on the stack were hardware registers to speed things up.

An address on the Burroughs machines is a path - something like "Process 22 /
function 15 / array 3 / offset 14". The actual memory address is no more
visible to the program than an actual disk address is visible in a modern file
system. Each of those elements is pageable, and arrays can be grown. The OS
controls the memory mapping tree.

One of the things that killed those machines was the rise of C and UNIX. C and
UNIX assume a flat address space. The Burroughs machines don't support C-style
pointers.
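
(A toy model of what that means in practice - Python, with invented names; the
real machines did this in hardware via descriptors, but the shape of the idea
is roughly this:)

    class Descriptor:
        """A node in the addressing tree: sub-descriptors or a leaf array."""
        def __init__(self):
            self.children = {}   # name -> Descriptor
            self.cells = []      # leaf storage; growable and pageable on its own

    def resolve(root, path):
        # An "address" like ("process 22", "function 15", "array 3", 14)
        # is re-walked on every access; no flat integer ever escapes.
        node = root
        for name in path[:-1]:
            node = node.children[name]
        return node.cells[path[-1]]

A C-style pointer wants the result of that walk as one flat integer it can
increment, which is exactly what these machines never hand out.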

------
jamesfisher
Around 51:40 he talks about how semaphores are a bad idea, and says that
something called "pseudotime" was a much better idea but which never caught
on.

What is this "pseudotime"? Google turns up nothing. Am I mis-hearing what he
said?

------
ecesena
My calculus professor once said that complex doesn't mean complicated, but
rather something more like a group/set/combination. Of course he didn't use
any of those terms, as they all have a specific meaning in maths (and I don't
have a better translation from the Italian). I believe it's very true for
complex numbers, and I'm trying to keep the same distinction in real life as
well. So, for me, complex means putting together more things in a smart way to
get a clearer - not more complicated - explanation or solution.

------
rndn
Around the 31 minute mark, shouldn't the gear and biology analogies be the
other way around (biological systems being analogous to bloated software), or
am I missing something?

~~~
rndn
Oh, the tactics should be realized by biological systems rather than by manual
"brick stacking" of technical systems?

------
DonHopkins
Someone asked Alan Kay an excellent question about the iPad, and his answer is
so interesting that I'll transcribe it here.

To his credit, he handled the questioner's faux pas much more gracefully than
RMS typically handles questions about Linux and Open Source. ;)

Questioner: So you came up with the DynaPad --

Alan Kay: DynaBook.

Questioner: DynaBook!

Yes, I'm sorry. Which is mostly -- you know, we've got iPads and all these
tablet computers now.

But does it tick you off that we can't even run Squeak on it now?

Alan Kay: Well, you can...

Q: Yea, but you've got to pay Apple $100 bucks just to get a developer's
license.

Alan Kay: Well, there's a variety of things.

See, I'll tell you what does tick me off, though.

Basically two things.

The number one thing is, yeah, you can run Squeak, and you can run the eToys
version of Squeak on it, so children can do things.

But Apple absolutely forbids any child from putting a creation of theirs to
the internet, and forbids any other child in the world from downloading that
creation.

That couldn't be any more anti-personal-computing if you tried.

That's what ticks me off.

Then the lesser thing is that the user interface on the iPad is so bad.

Because they went for the lowest common denominator.

I actually have a nice slide for that, which shows a two-year-old kid using an
iPad, and an 85-year-old lady using an iPad. And then the next thing shows
both of them in walkers.

Because that's what Apple has catered to: they've catered to the absolute
extreme.

But in between, people, you know, when you're two or three, you start using
crayons, you start using tools.

And yeah, you can buy a capacitive pen for the iPad, but where do you put it?

So there's no place on the iPad for putting that capacitive pen.

So Apple, in spite of the fact of making a pretty good touch sensitive
surface, absolutely has no thought of selling to anybody who wants to learn
something on it.

And again, who cares?

There's nothing wrong with having something that is brain dead, and only shows
ordinary media.

The problem is that people don't know it's brain dead.

And so it's actually replacing computers that can actually do more for
children.

And to me, that's anti-ethical.

My favorite story in the Bible is the one of Esau.

Esau came back from hunting, and his brother Joseph was cooking up a pot of
soup.

And Esau said "I'm hungry, I'd like a cup of soup."

And Joseph said "Well, I'll give it to you for your birth right."

And Esau was hungry, so he said "OK".

That's humanity.

Because we're constantly giving up what's most important just for mere
convenience, and not realizing what the actual cost is.

So you could blame the schools.

I really blame Apple, because they know what they're doing.

And I blame the schools because they haven't taken the trouble to know what
they're doing over the last 30 years.

But I blame Apple more for that.

I spent a lot of -- just to get things like Squeak running, and other systems
like Scratch running on it, took many phone calls between me and Steve, before
he died.

I spent -- you know, he and I used to talk on the phone about once a month,
and I spent a long -- and it was clear that he was not in control of the
company any more.

So he got one little lightning bolt down to allow people to put interpreters
on, but not enough to allow interpretations to be shared over the internet.

So people do crazy things like attaching things into mail.

But that's not the same as finding something via search in a web browser.

So I think it's just completely messed up.

You know, it's the world that we're in.

It's a consumer world where the consumers are thought of as serfs, and only
good enough to provide money.

Not good enough to learn anything worthwhile.

------
scotty79
Do you know of any languages that you could use to specify precisely what a
program should do?

Can you use that specification to verify that a program does what it should? I
know BDD taken to extremes kinda does that.

But could a specification itself be checked for being non-contradictory, and
perhaps for other interesting things, like where its edges are - what does it
not cover?
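
(The nearest everyday thing I can think of is property-based testing, where
the specification is executable and the machine hunts for counterexamples. A
minimal sketch in Python with the Hypothesis library; `my_sort` is a made-up
function under test:)

    from hypothesis import given, strategies as st

    def my_sort(xs):  # hypothetical implementation under test
        return sorted(xs)

    # The decorated property *is* the specification: for any list of
    # integers, the output is ordered and is a permutation of the input.
    @given(st.lists(st.integers()))
    def test_my_sort(xs):
        out = my_sort(xs)
        assert all(a <= b for a, b in zip(out, out[1:]))
        assert sorted(out) == sorted(xs)

It doesn't touch the harder question - whether the specification itself is
consistent or complete - which is more the territory of model checkers and
proof assistants.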

------
raspasov
People that enjoyed this talk should check this out:
[http://www.infoq.com/presentations/Simple-Made-Easy](http://www.infoq.com/presentations/Simple-Made-Easy)

------
tdicola
Wow, this is a really great presentation that resonates a lot with me.

------
stevebmark
I'm 15 minutes in and frankly I'm finding the video hard to follow. Is there
an article that accompanies this that might help summarize his points?

------
andrey-g
What was he talking about at 51:45? Pseudo timer?

~~~
dgreensp
Pseudo-time -- I think he's talking about something called Croquet. There's a
trail here: [http://lambda-the-ultimate.org/node/2021/](http://lambda-the-ultimate.org/node/2021/)
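
(My loose reading of the Croquet idea - not an authoritative account: messages
carry logical "pseudo-time" stamps, and every replica executes them in stamp
order rather than arrival order, so identical replicas stay identical without
locks. A toy sketch in Python, all names invented:)

    import heapq

    class Replica:
        def __init__(self):
            self.queue = []  # (pseudo_time, seq, fn, args); seq breaks ties
            self.state = {}

        def receive(self, pseudo_time, seq, fn, args):
            # Network arrival order doesn't matter; the heap orders by stamp.
            heapq.heappush(self.queue, (pseudo_time, seq, fn, args))

        def advance_to(self, now):
            # Any replica holding the same messages reaches the same state.
            while self.queue and self.queue[0][0] <= now:
                _, _, fn, args = heapq.heappop(self.queue)
                fn(self.state, *args)

Contrast with semaphores, where the final state can depend on which thread
wins each race.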

~~~
e12e
TeaTime is very interesting:
[http://dl.acm.org/citation.cfm?id=1094861](http://dl.acm.org/citation.cfm?id=1094861)

------
Jugurtha
Still no love for Heaviside and Poincaré? :(

