
Alan Kay has agreed to do an AMA today - alankay
This request originated via recent discussions on HN, and the forming of HARC! at YC Research. I'll be around for most of the day today (through the early evening).
======
guelo
When you were envisioning today's computers in the 70s you seemed to have been
focused mostly on the educational benefits but it turns out that these devices
are even better for entertainment to the point where they are dangerously
addictive and steal time away from education. Do you have any thoughts on
interfaces that guide the brain away from its worst impulses and towards more
productive uses?

~~~
alankay
We were mostly thinking of "human advancement" or as Engelbart's group termed
it "Human Augmentation" \-- this includes education along with lots of other
things. I remember noting that if Moore's Law were to go a decade beyond 1995
(Moore's original extrapolation) that things like television and other "legal
drugs" would be possible. We already had a very good sense of this before TV
things were possible from noting how attractive early video games -- like
SpaceWar -- were. This is a part of an industrial civilization being able to
produce surpluses (the "industrial" part) with the "civilization" part being
how well children can be helped to learn not to give into the cravings of
genetics in a world of over-plenty. This is a huge problem in a culture like
the US in which making money is rather separated from worrying about how the
money is made.

~~~
stcredzero
Then what do you think about the concept of "gamification?" Do you think high
densities of reward and variable schedules of reward can be exploited to
productively focus human attention and intelligence on problems? Music itself
could be thought of as an analogy here. Since music is sound structured in a
way that makes it palatable (i.e., it has a high density of reward), much human
attention has been focused on the physics of sound and the biomechanics of
people using objects to produce sound. Games (especially ones like Minecraft)
seem to suggest that there are frameworks where energy and attention can be
focused on abstracted rule systems in much the same way.

~~~
alankay1
I certainly don't think of music along these lines. Or even theater. I like
developed arts of all kinds, and these require learning on the part of the
beholder, not just bones tossed at puppies.

~~~
stcredzero
I've been playing traditional music for decades, even qualifying to compete at
a high level at one point. There is a high density of reward inherent in
music, combined with variable schedules of reward. There is competition and a
challenge to explore the edges of the envelope of one's aesthetic and sensory
awareness along with the limits of one's physical coordination.

Many of the same things can happen in sandbox style games. I think there is a
tremendous potential for learning in such abstracted environments. What about
something like Minecraft, but with abstracted molecules instead of blocks?
Problems, like the ones around portraying how molecules inside a cell are
constantly jostling against water molecules, could be solved in such
environments using design. Many people who play well balanced games at a high
level often seem to be learning something about strategy and tactics in
particular rule systems. I suspect that there is something educationally
valuable in a carefully chosen and implemented rule system.

Also, perhaps it's so much easier to exploit such mechanisms to merely addict
people that this overwhelms any value to be gained.

------
di
Hi Alan,

In "The Power of the Context" (2004) you wrote:

    
    
      ...In programming there is a wide-spread 1st order
      theory that one shouldn’t build one’s own tools,
      languages, and especially operating systems. This is
      true—an incredible amount of time and energy has gone
      down these ratholes. On the 2nd hand, if you can build
      your own tools, languages and operating systems, then
      you absolutely should because the leverage that can be
      obtained (and often the time not wasted in trying to
      fix other people’s not quite right tools) can be
      incredible.
    

I love this quote because it justifies a DIY attitude of experimentation and
reverse engineering, etc., that generally I think we could use more of.

However, more often than not, I find the sentiment paralyzing. There's so much
that one could _probably_ learn to build themselves, but as things become more
and more complex, one has to be able to make a rational tradeoff between
spending the time and energy in the rathole and not doing so. I can't spend all day
rebuilding everything I can simply because I _can_.

My question is: how does one decide when to DIY, and when to use what's
already been built?

~~~
alankay1
This is a tough question. (And always has been in a sense, because every era
has had projects where the tool building has sunk the project into a black
hole.)

It really helped at Parc to work with real geniuses like Chuck Thacker and Dan
Ingalls (and quite a few more). There is a very thin boundary between making
the 2nd order work vs getting wiped out by the effort.

Another perspective on this is to think about "not getting caught by
dependencies" -- what if there were really good _independent module systems_
-- perhaps aided by hardware -- that allowed both worlds to work together (so
one doesn't get buried under "useful patches", etc.)?

One of my favorite things to watch at Parc was how well Dan Ingalls was able
to bootstrap a new system out of an old one by really using what objects are
good for, and especially where the new system was even much better at
facilitating the next bootstrap.

I'm not a big Unix fan -- it was too late on the scene for the level of ideas
that it had -- but if you take the cultural history it came from, there were
several things they tried to do that were admirable -- including really having
a tiny kernel and using Unix processes for all systems building (this was a
very useful version of "OOP" \-- you just couldn't have small objects because
of the way processes were implemented). It was quite sad to see how this
pretty nice mix and match approach gradually decayed into huge loads and
dependencies. Part of this was that the rather good idea of parsing non-
command messages in each process -- we used this in the first Smalltalk at
Parc -- became much too ad hoc because there was not a strong attempt to
intertwine a real language around the message structures (this very same thing
happened with http -- just think of what this could have been if anyone had
been noticing ...)

~~~
pault
> I'm not a big Unix fan

What is your preferred technology stack?

~~~
astrodust
What's a good non-UNIX open-source operating system that's useful for day-to-
day work, or at least academically significant enough that it's worth diving
in to?

~~~
nickpsecurity
Here's a list of alternatives I put together to see some capabilities or
traits UNIX lacked:

[https://news.ycombinator.com/item?id=10957020](https://news.ycombinator.com/item?id=10957020)

I think, for something usable day-to-day, you're down to Haiku, MorphOS, Genode,
MINIX 3, and/or A2 Bluebottle. Haiku is a BeOS clone. MorphOS is one of the last
Amiga-style OSes and looks pretty awesome. Genode OS is a security-oriented,
microkernel architecture that uses UNIX for bootstrapping but doesn't
inherently need it. MINIX 3 similarly bootstraps on NetBSD but adds
microkernels, user-mode drivers, and self-healing functions. A2 Bluebottle is
the most featured version of the Oberon OS, in a safe, GC'd language. It runs fast.

The usability of these, and the third-party software available for them, varies
considerably.
One recommendation I have across the board is to back up your data with a boot
disc onto external media. Do that often. Reason being, any project with few
developers + few users + bare metal is going to have issues to resolve that
long-term projects will have already knocked out.

~~~
MustardTiger
MINIX isn't bootstrapping on NetBSD; the entire goal of the system is to be a
microkernel-based Unix. It uses the NetBSD userland because you don't need to
rewrite an entire Unix userland for no reason just to change kernels.

~~~
nickpsecurity
Mental slip on my part. Thanks for the correction. I stand by the example at
least for the parts under the NetBSD layer, like the drivers and the
reincarnation server. Their style is more like that of the non-UNIX,
microkernel systems of the past. There is some precedent in the Helios
operating system, but that was still a detour from traditional UNIX.

[https://en.wikipedia.org/wiki/Helios_os](https://en.wikipedia.org/wiki/Helios_os)

------
satysin
Hi Alan,

I have three questions -

1. If you were to design a new programming paradigm today, using what we have
learnt about OOP, what would it be?

2. With VR and AR (HoloLens) becoming a reality (heh), how do you see user
interfaces changing to work better with these systems? What new things need to
be invented or rethought?

3. I also worked at Xerox for a number of years, although not at PARC. I was
always frustrated by their attitude to new ideas and lack of interest in new
technologies until everyone else was doing it. Obviously businesses change
over time, and it has been a long time since Xerox was a technology leader. If
you could pick your best and worst memories from Xerox, what would they be?

Cheers for your time and all your amazing work over the years :)

~~~
alankay1
Let me both acknowledge your questions, and also acknowledge that this forum
(the media authoring tools) is not in scale with the needed answers ...

~~~
satysin
Perhaps a reddit AMA would be better? They have a much more flexible/powerful
comment system.

Edit: Not sure why I am getting down voted for making a suggestion. Oh well.

~~~
jensv
Or maybe a Quora session.

~~~
alankay1
A lot more good activity here than on Quora ...

~~~
codinghorror
Quora has some onerous policies, unfortunately:
[https://twitter.com/waxpancake/status/453958676529696769](https://twitter.com/waxpancake/status/453958676529696769)

HN is an excellent venue, but is necessarily text oriented, which is an OK
tradeoff I think.

My next project after Stack Overflow, Discourse, is a 100% open source,
flexible multimedia-friendly discussion system. It's GPL V2 on the code side,
but we also tried to codify Creative Commons as the default license in every
install, so discussion replies belong to the greater community:
[https://discourse.org](https://discourse.org)

(Surprisingly, the default content licenses for most discussion software tend
to be rather restrictive.)

~~~
discordianfish
Could you afterwards build a discussion platform for finding (partial)
agreement on various political and other topics? That seems like it would have
huge impact and is really missing. I thought about starting something like
that but never got to it.

------
ianbicking
1. After Engelbart's group disbanded it seemed like he ended up in the
wilderness for a long time, and focused his attention on management. I'll
project onto him and would guess that he felt more constrained by his social
or economic context than he was by technology, that he envisioned
possibilities that were unattainable for reasons that weren't technical. I'm
curious if you do or have felt the same way, and if you have any intuitions
about how to approach those problems.

2. What are your opinions on Worse Is Better
([https://www.dreamsongs.com/RiseOfWorseIsBetter.html](https://www.dreamsongs.com/RiseOfWorseIsBetter.html))?
It seems to me like you pursue the diamond-like jewel, but maybe that's not
how you see it. (Just noticed you answered this:
[https://news.ycombinator.com/item?id=11940276](https://news.ycombinator.com/item?id=11940276))

3. I've found the Situated Learning perspective interesting
([https://en.wikipedia.org/wiki/Situated_learning](https://en.wikipedia.org/wiki/Situated_learning)).
At least I think about it when I feel grumpy about all the young kids and
Node.js, and I genuinely like that they are excited about what they are doing,
but it seems like they are on a mission to rediscover EVERYTHING, one
technology and one long discussion at a time. But they are a community of
learning, and maybe everyone (or every community) does have to do that if they
are to apply creativity and take ownership over the next step. Is there a
better way?

~~~
alankay
It used to be the case that people were admonished to "not re-invent the
wheel". We now live in an age that spends a lot of time "reinventing the flat
tire!"

The flat tires come from the reinventors often not being in the same league as
the original inventors. This is a symptom of a "pop culture" where identity
and participation are much more important than progress...

~~~
9erdelta
This is incredibly hard hitting and I'm glad I read it, but I'm also afraid it
would "trigger" quite a few people today.

What steps can a person take to get out of pop culture and try to get into the
same league as the inventors? Incredibly stupid question to have to ask but I
feel really lost sometimes.

~~~
alankay1
I think it is first a recognition problem -- in the US we are now embedded in
a pop culture that has progressed far enough to seriously hurt places that
hold "developed cultures". This pervasiveness makes it hard to see anything
else, and certainly makes it difficult for those who care what others think to
put much value on anything but pop culture norms.

The second is to realize that the biggest problems are ones of imbalance. Developed
arts have always needed pop arts for raw "id" and blind pushes of rebellion.
This is a good ingredient -- like salt -- but you can't make a cake just from
salt.

I got a lot of insight about this from reading McLuhan for very different
reasons -- those of media and how they form an environment -- and from delving
into Anthropology in the 60s (before it got really politicized). Nowadays,
books by "Behavioral Economists" like Kahneman, Thaler, Ariely, etc. can be
very helpful, because they are studying what people actually do in their
environments.

Another way to look at it is that finding ways to get "authentically educated"
will turn local into global, tribal into species, dogma into multiple
perspectives, and improvisation into crafting, etc. Each of the starting
places stays useful, but they are no longer dominant.

------
sebastianconcpt
Hi Alan,

1. What do you think about the hardware we are using as the foundation of
computing today? I remember you mentioning how cool the architecture of the
Burroughs B5000 [1] was, being designed to run higher-level programming
languages on the metal. What should hardware vendors do to make hardware that
is friendlier to higher-level programming? Would that help us be less
dependent on VMs while still enjoying silicon-level performance?

2. What software technologies do you feel we're missing?

[1]
[https://en.wikipedia.org/wiki/Burroughs_large_systems](https://en.wikipedia.org/wiki/Burroughs_large_systems)

~~~
alankay1
If you start with "desirable process" you can eventually work your way back to
the power plug in the wall. If you start with something already plugged in,
you might miss a lot of truly desirable processes.

Part of working your way back to reality can often require new hardware to be
made or -- in the case of the days of microcode -- to shape the hardware.

There are lots of things vendors could do. For example: Intel could make its
first level caches large enough to make real HLL emulators (and they could
look at what else would help). Right now a plug-in or available FPGA could be
of great use in many areas. From another direction, one could think of much
better ways to organize memory architectures, especially for multi-core chips
where they are quite starved.

And so on. We've gone very far down the road of "not very good" matchups, and
of vendors getting programmers to make their CPUs useful rather than the exact
opposite approach. This is too large a subject for today's AMA.

~~~
elcritch
Have you looked into the various Haskell/OCaml to hardware translators people
have been coming up with the past few years?

It seems like the field has been growing, and several FPGAs are near that
plug-and-play status. In particular, the notion of developing a
compile-time-proved RTS using continuation passing would be sweet.

Even with newer hardware, it seems we're still stuck in either dynamic mutable
languages or static functional ones. Any thoughts on how we could design
systems incorporating the best of both using modern hardware capacities?
Like... say, a reconfigurable hierarchical element system where each node is
an object/actor? Going out on a bit of a limb with that last one!

~~~
alankay1
Without commenting on Haskell, et al., I think it's important to start with
"good models of processes" and let these interact with the best we can do with
regard to languages and hardware in the light of these good models.

I don't think the "stuckness" in languages is other than like other kinds of
human "stuckness" that come from being so close that it's hard to think of any
other kinds of things.

~~~
elcritch
Thanks! That helps reaffirm my thinking that "good models of processes" are
important, even though implementations will always have limitations. Good to
know I'm not completely off base...

A good example for me has been the virtual memory pattern, where from a
process's point of view you model memory as an ideal unlimited virtual space.
Then you let the kernel implementation (and hardware) deal with the practical
(and difficult) details. Microsoft's Orleans implementation of the actor model
has a similar approach that they call "virtual actors", which is interesting
as well.

My own stuckness has been an idea of implementing processes using hierarchical
state machines, especially for programming systems of IoT type devices. But I
haven't been able to figure out how to incorporate type check theorems into
it.
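
For concreteness, here is a minimal sketch of that hierarchical state machine
idea in Python (all names invented for illustration; a real HSM framework adds
entry/exit actions and guards, and the type-checking story is exactly the open
part):

    class State:
        """One node in a hierarchical state machine; unhandled events bubble up."""
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent
            self.transitions = {}   # event name -> target State

        def on(self, event, target):
            self.transitions[event] = target
            return self

        def next_state(self, event):
            # Walk up the hierarchy until some ancestor handles the event.
            state = self
            while state is not None:
                if event in state.transitions:
                    return state.transitions[event]
                state = state.parent
            return self              # unhandled events leave us where we are

    # Illustrative IoT-ish device: "idle" and "sampling" nest inside "powered",
    # which handles "power_off" on behalf of both of them.
    powered  = State("powered")
    off      = State("off")
    idle     = State("idle", parent=powered)
    sampling = State("sampling", parent=powered)

    powered.on("power_off", off)
    idle.on("tick", sampling)
    sampling.on("done", idle)

    s = idle
    for event in ["tick", "power_off"]:   # "power_off" is inherited from "powered"
        s = s.next_state(event)
    print(s.name)                          # -> off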

------
losvedir
At my office a lot of the non-programmers (marketers, finance people, customer
support, etc) write a fair bit of SQL. I've often wondered what it is about
SQL that allows them to get over their fear of programming, since they would
never drop into ruby or a "real" programming language. Things I've considered:

    
    
        * Graphical programming environment (they run the queries
          from pgadmin, or Postico, or some app like that)
        * Instant feedback - run the query get useful results
        * Compilation step with some type safety - will complain
          if their query is malformed
        * Are tables a "natural" way to think about data for humans?
        * Job relevance
    

Any ideas? Can we learn from that example to make real programming
environments that are more "cross functional" in that more people in a company
are willing to use them?

~~~
numlocked
SQL is declarative. Compare:

    
    
        active_names = []
        for user in table_users:
            if user.is_active:
                active_names.append(user.first_name)
    

vs:

    
    
        SELECT first_name FROM users_table
        WHERE is_active
    

It's unfortunate that the order of the clauses in SQL is "wrong" (e.g. you
should say FROM, WHERE, SELECT: Define the universe of relevant data, filter
it down, select what you care about), but it's still quite easy to wrap your
mind around. You are asking the computer for something, and if you ask nicely,
it tells you what you want to know. Compare that to procedural programming,
where you are telling the computer what to do, and even if it does what you
say, that may not have been what you actually wanted after all.

~~~
niftich
On this point, C# and VB.NET have SQL-like expressions that can be used for
processing, called LINQ [1]. They even get the order of the clauses correct!

A feature like this may help your programmers who are used to thinking in
terms of filter -> select -> order.

[1] [https://msdn.microsoft.com/en-us/library/bb397927.aspx](https://msdn.microsoft.com/en-us/library/bb397927.aspx)
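
For flavor, the same filter-then-select ordering can be sketched in Python as
a generator pipeline (illustrative only -- this is not LINQ itself, and the
toy data is made up so the snippet runs on its own):

    from collections import namedtuple

    # Toy data so the pipeline below is self-contained.
    User = namedtuple("User", ["first_name", "is_active"])
    users = [User("Ada", True), User("Bob", False)]

    # FROM users ... WHERE u.is_active ... SELECT u.first_name
    active = (u for u in users if u.is_active)       # define the universe, filter
    first_names = [u.first_name for u in active]     # then select
    print(first_names)                               # -> ['Ada']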

~~~
numlocked
Yes! Absolutely what I was thinking of when I wrote this :) Getting that right
is one of my favorite parts of LINQ.

------
IsaacL
What do you think of Bret Victor's work?
([http://worrydream.com/](http://worrydream.com/)) Or Rich Hickey?

Who do you think are the people doing the most interesting work in user
interface design today?

~~~
tluyben2
Aren't Alan Kay and Bret Victor working together at SAP currently?

~~~
skadamat
Technically, at HARC -
[https://blog.ycombinator.com/harc](https://blog.ycombinator.com/harc)

------
LeicesterCity
Hi Alan,

Previously you've mentioned the "Oxbridge approach" to reading, whereby--if my
recollection is correct--you take four topics and delve into them as much as
possible. Could you elaborate on this approach (I've searched the internet,
couldn't find anything)? And do you think this structured approach has more
benefits than, say, a non-structured approach of reading whatever is of interest?

Thanks for your time and generosity, Alan!

~~~
alankay1
There are more than 23,000,000 books in the Library of Congress, and a good
reader might be able to read 23,000 books in a lifetime (I know just a few
people who have read more). So we are contemplating a lifetime of reading in
which we might touch 1/10th of 1% of the extent books. We would hope that most
of the ones we aren't able to touch are not useful or good or etc.

So I think we have to put something more than randomness and following links
to use here. (You can spend a lot of time learning about a big system like
Linux without hitting many of the most important ideas in computing -- so we
have to heed the "Art is long and Life is short" idea.)

Part of the "Oxbridge" process is to have a "reader" (a person who helps you
choose what to look at), and these people are worth their weight in gold ...

~~~
randomsearch
General question about this figure, which I've seen before:

> read 23,000 books in a lifetime

As a very conservative lower bound, a person who lives to the age of 80 would
have to read 0.79 books per day, from the day they were born, to reach this
figure.

Or, to put it another way, who has read 288+ books in the last year?

I'm quite sceptical about this figure. Any thoughts as to how this might be
possible? Are the people Alan mentions speed-reading? Anyone else know
similarly prolific readers?

~~~
di4na
As someone who has read at least one book per day, if not more, since the age
of 6: yes, it is possible. I can read between 100 and 200 pages per hour,
depending on the book.

You reach a storage and money problem fast (ebooks are a savior nowadays). And
you tend to have multiple books open at the same time.

How does it work? There are several strategies. First, I read fast. Experience
and training make you read really fast. Secondly, you get a grasp of how
things work and what the writer has to say. In a fiction book, it is not
unusual for me to not read a chapter or two because I _know_ what will happen
inside.

Finally... good writers help. Good writers make reading a breeze and are
faster to read. They present ideas in a concise and efficient way that follows
the flow of thinking.

I will gladly take more questions if you have some :)

~~~
3minus1
> In a fiction book, it is not unusual for me to not read a chapter or two
> because I know what will happen inside.

This is ridiculous. It doesn't count as reading if you skip whole chapters.

~~~
Chlorus
Hell, I 'read' whole books by just reading the back cover! This way, I get
through hundreds of books every time I visit the library!

------
edwintorok
Hi, I have a few questions about your STEPS project:

- Is there a project that is the continuation of the STEPS project?

- What is your opinion of the Elm language?

- How do you envision all the good research from the STEPS model could be
used for building practical systems?

- STEPS focused on personal computing; do you have a vision of how something
similar could be done for server-side programming?

- Where can I find all the source code for the Frank system and the DSLs
described in the STEPS report?

~~~
e12e
Apologies for rambling on a bit - but I also have some questions about VPRI.
As far as I can gather, it was never the intention to publish the entire
system (the whole stack needed to get "Frank" running)? If so, I'd like to
know why not. Were you afraid that the prototypes would be taken "too
seriously" and draw focus away from the ideas you wanted to explore?

The VPRI reports, and before that some of the papers on Croquet (especially
the idea of "teatime" which might be described as event-driven, log-based,
relative time with eventual data/world-consistency) are fascinating, and I'm
grateful for them being published. Also the Ometa-stuff[o] is fascinating (if
anything, I think it's gotten too little mind-share).

It seems to me that we've evolved a bit, in the sense that some things that
used to be considered programming (displaying a text string on screen) no
longer are (you type it into notepad.exe) -- it's considered "using a
computer". At the same time, some things that were considered somewhat
esoteric are becoming mainstream: perhaps most importantly the growing
(resurging?) trend that programming really is meta-programming and language
creation.

ReactJS is a mainstream programming model that fuses HTML, CSS, JavaScript,
and at least one templating language - and in a similar vein we see great
adoption of "transpiled" languages, such as CoffeeScript, TypeScript,
ClojureScript and more. HN runs on top of Arc, which is a Lisp that's been
bent hard in the direction of http/html. I see this as a bit of an evolution
from when the most common DSLs people were writing for themselves were ORMs -
mapping some host language to SQL.

In your time with VPRI - did you find other new patterns or principles for
meta-programming and (micro) language design that you think could/should be
put to use right now?

Other than web developers' tendency to reinvent m4 at every turn, in order
to program html, css and js at a "higher" level, and the before-mentioned ORM
trends -- the only somewhat mainstream system I am aware of that has a good
toolkit for building "real" DSLs is Racket (which shows if one contrasts
something like Sphinx, which is a fine system, with Racket's scribble[s]).

Do you think we'll continue to see a rise of meta-programming and language
design as more and more tools become available, and it becomes more and more
natural to do "real" parsing rather than ad-hoc munging of plain text?

[o] [https://github.com/alexwarth/ometa-js](https://github.com/alexwarth/ometa-js)

[s] [https://docs.racket-lang.org/scribble/getting-started.html](https://docs.racket-lang.org/scribble/getting-started.html)

[http://lambda-the-ultimate.org/node/4017](http://lambda-the-ultimate.org/node/4017)
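
As a toy illustration of the "real parsing" direction, here is a tiny
parser-combinator sketch in Python (everything is invented for illustration;
OMeta itself goes much further, e.g. matching over arbitrary objects, not just
strings):

    import re

    def token(pattern):
        """Match a regex at the current position; return (text, new_pos) or None."""
        rx = re.compile(pattern)
        def parse(s, i):
            m = rx.match(s, i)
            return (m.group(), m.end()) if m else None
        return parse

    def seq(*parsers):
        """Run parsers one after another; fail if any of them fails."""
        def parse(s, i):
            out = []
            for p in parsers:
                r = p(s, i)
                if r is None:
                    return None
                v, i = r
                out.append(v)
            return out, i
        return parse

    # Grammar for a trivial DSL: name "=" number
    ident  = token(r"\s*[A-Za-z_]\w*")
    eq     = token(r"\s*=")
    number = token(r"\s*\d+")
    assign = seq(ident, eq, number)

    print(assign("x = 42", 0))   # -> (['x', ' =', ' 42'], 6)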

------
coldtea
Hi Alan,

On the "worse is better" divide I've always considered you as someone standing
near the "better" (MIT) approach, but with an understanding of the pragmatics
inherent in the "worse is better" (New Jersey) approach too.

What is your actual position on the "worse is better" dichotomy?

Do you believe it is real, and if so, can there be a third alternative that
combines elements from both sides?

And if not, are we always doomed (due to market forces, programming as
"popular culture", etc.) to have tools that are sub-par compared to what can
theoretically be achieved?

~~~
alankay1
I don't think "pop culture" approaches are the best way to do most things
(though "every once in a while" something good does happen).

The real question is "does a hack reset 'normal'?" For most people it tends
to, and this makes it very difficult for them to think about the actual
issues.

A quote I made up some years ago is "Better and Perfect are the enemies of
What-Is-Actually-Needed". The big sin so many people commit in computing is
not really paying attention to "What-Is-Actually-Needed"! And not going below
that.

~~~
tern
I fear this is because "What-Is-Actually-Needed" is non-trivial to figure out.
Related: "scratch your own itch", "bikeshedding", "yak shaving".

~~~
alankay1
Exactly -- this is why people are tempted to choose an increment, and will say
"at least it's a little better" \-- but if the threshold isn't actually
reached, then it is the opposite of a little better, it's an illusion.

------
jarmitage
Hi Alan,

What advice would you give to those who don't have a HARC to call their own?
What would you do to get set up, find a community, or get funding for your
adventure if you were starting out today? What advice do you have for those
who are currently in an industrial/academic institution and seek the true
intellectual freedom you have found? Is it just luck?!

~~~
alankay1
I don't have great advice (I found getting halfway decent funding since 1980
to be quite a chore). I was incredibly lucky to wind up quite accidentally at
the U of Utah ARPA project 50 years ago this year.

Part of the deal is being really stubborn about what you want to do -- for
example, I've never tried to make money from my ideas (because then you are in
a very different kind of process -- and this process is not at all good for
the kinds of things I try to do).

Every once in a while one runs into "large minded people" like Sam Altman and
Vishal Sikka, who do have access to funding that is unfettered enough to lead
to really new ideas.

~~~
jarmitage
Thanks.

Do you have any advice about community building, especially around fostering
new and big ideas?

------
germinalphrase
Hi Alan,

As a high school teacher, I often find that discussions of technology in
education diminish 'education' to curricular and assessment documentation and
planning; however, these artifacts are only a small element of what is,
fundamentally, a social process of discussion and progressive knowledge
building.

If the real work and progress with my students comes from our intellectual
back-and-forth (rather than static documentation of pre-existing knowledge),
are there tools I can look to that have been/will be created to empower and
enrich this kind of in situ interaction?

~~~
alankay1
This is a tough one to try to produce "through the keyhole" of this very non-
WYSIWYG poorly thought through artifact of the WWW people not understanding
what either the Internet or computer media are all about.

Let me just say that it's worth trying to understand what might be a "really
good" balance between traditional oral culture learning and thinking, what
literacy brings to the party, especially via mass media, and what the computer
and pervasive networking should bring as real positive additions.

One way to assess what is going on now is partly a retreat from real literacy
back to oral modes of communication and oral modes of thought (i.e. "texting"
is really a transliteration of an oral utterance, not a literary form).

This is a disaster.

However, even autodidacts really need some oral discussions, and this is one
reason to have a "school experience".

The question is balance. Fluent readers can read many times faster than oral
transmissions, and there are many more resources at hand. This means in the
21st century that most people should be doing a lot of reading -- especially
students (much much more reading than talking). Responsible adults, especially
teachers and parents, should be making all out efforts to help this to happen.

For the last point, I'd recommend perusing Daniel Kahneman's "Thinking, Fast
and Slow", and this will be a good basis for thinking about tradeoffs between
actual interactions (whether with people or computers) and "pondering".

I think most people grow up missing their actual potential as thinkers because
the environment they grow up in does not understand these issues and their
tradeoffs....

~~~
jbrennan
>I think most people grow up missing their actual potential as thinkers
because the environment they grow up in does not understand these issues and
their tradeoffs....

This is the meta-thing that’s been bugging me: how do we help people realize
they’re “missing their actual potential as thinkers”?

The world seems so content to be an oral culture again, how do we convince /
change / equip people to be skeptical of these media?

Joe Edelman’s Centre for Livable Media
([http://livable.media](http://livable.media)) seems like a step in the right
direction. How else can we convince people?

~~~
heurist
Marijuana helped me realize there was a lot about myself I didn't understand
and launched my investigation into more effective thought processes. I've
become much more driven and thoughtful since I began smoking as an adult.

~~~
tomp
What kinds of changes to your thought processes did you make?

~~~
heurist
First of all, I now enjoy talking about myself :)

I stopped assuming I knew everything, and a childlike sense of wonder returned
to my life. I began looking beyond what was directly in front of me and sought
out more comprehensive generalizations. What do atoms have in common with
humans? What does it mean to communicate? Do we communicate with ecosystems?
Do individuals communicate with society? What is consciousness and
intelligence? Is my mind a collection of multiple conscious processes? How do
the disparate pieces of my brain integrate into one conscious entity, how do
they shape my subjective reality?

I found information, individuals, and networks to be fundamental to my
understanding of the world. I was always interested in them before, but not
enough to seek them out or apply them through creative works. I discovered for
myself the language of systems. I found a deep appreciation of mathematics and
a growth path to set my life on.

I was able to do this exploration at a time when my work was slow and steady.
It came along a couple years ago when I was 25, which I've heard is when the
brain's development levels off. I feel lucky to have experienced it when I did
because I was totally unsatisfied with my life before then.

Since then I've found work I love at a seed stage startup where I've been able
to apply my ideas in various ways. I have become much more active as a
creator, including exploring latent artistic sensibilities through writing
poetry and taking oil painting classes with a very talented teacher. I've
found myself becoming an artist in my work - I've become the director and lead
engineer at the startup and am exploring ways to determine and distribute
truth in the products we sell, and further to make a statement on what art is
in a capitalistic society (even if I'm the only one who will ever recognize
it). I've also become more empathic and found a wonderful woman and two pups
to share my life with, despite previously being extremely solitary. Between
work and family I have less time for introspection now, but I expect I'll
learn just as much through these efforts.

Ultimately, I've learned to trust my subconscious. I was always anxious and
nervous about being wrong in any situation before, but now I trust that even
if I am wrong in the moment my brain can figure out good answers over longer
stretches of time.

I don't know how far cannabis led me down this path but it definitely gave me
a good strong push.

~~~
bpchaps
This is almost exactly my experience! I don't think HN talks about it much,
but cannabis is a great way to approach intuitive depth on subjects. For me it
was ego, math, music, civics and information theory concepts.

When I started, it was at a job that I absolutely hated (rewriting Mantis to
be a help desk system), and it helped me get out of it by opening up better
understanding of low-level systems. That eventually led to tuning
high-frequency trading systems and some pretty deep civics using FOIA.

Not that it was a direct contributor, but I do consider it a seed towards
better understanding of the things around me. I don't necessarily feel
happier, but I feel much more content.

~~~
socrates666
It is IDENTICAL to mine as well. Even down to the information theory bit. Very
bizarre, but reassuring.

------
alankay1
-- I was surprised that the HN list page didn't automatically refresh in my
browser (seems as though it should be live and not have to be prompted ...)

~~~
projectramo
Imagine: 1. trying to read something long, or 2. going off to follow a link
and coming back to respond, only to find that the page has been refreshing
while you looked away. Now you have to scroll around to find the place you
were at in order to respond or to continue reading the comments.

~~~
alankay1
How about a little model of time in a GUI?

~~~
diiq
This is maybe the most Alan-Kay-like response so far. Short, simple, but a
tiny bit like a message from an alternate dimension. "No, no, I'm not asking
you to build the also-wrong solution someone else has tried. I'm saying: solve
the problem."
~~~
msutherl
Also feels like worse-is-better vs. the right thing. How much engineering
effort and additional maintenance would be required to develop and support
such a time-model? A lot. Alas, let us re-create software systems to be
radically simpler so that we can do the right thing! Still waiting for Urbit
and VPRI's 10k line operating system ... but that's what Alan stands for in
our industry: "strive to do the right thing," or as you put it, "solve the
problem".

------
testmonkey
Jaron Lanier mentioned you as part of the "humanistic thread within
computing." I understood him to mean folks who have a much broader
appreciation of human experience than the average technologist.

Who are "humanistic technologists" you admire? Critics, artists,
experimenters, even trolls... Which especially creative technologists inspire
you?

I imagine people like Jonathan Harris, Ze Frank, Jaron Lanier, Ben Huh, danah
boyd, Sherry Turkle, Douglas Engelbart, Douglas Rushkoff, etc....

------
nnq
Hi Alan, the question that troubles me now and I want to ask you is:

 _Why_ do you think there is always a difference between:

A. the people who _know best how something should be done_ , and

B. the people who _end up doing it in a practical and economically-successful
or popular way?_

And should we _educate our children_ or _develop our businesses_ in ways that
could encourage both _practicality_ and _invention?_ (do you think it's
possible?). Or would the two tendencies cancel each other out and you'll end
up with mediocre children and underperforming businesses, so the right thing
to do is to pick one side and develop it at the expense of the other?

(The "two camps" are clearly obvious in the space of programming language
design and UI design (imho it's the same thing: programming languages are just
"UIs between programmers and machines"), as you well know and said, with one
group of people (you among them) having the right ideas of what OOP and UIs
should be like, and one people inventing the technologies with success in
industry like C++ and Java. But the pattern is happening at all levels, even
business: the people with the best business ideas are almost never the ones
who end up _doing things_ and so things get done in a "partially wrong" way
most of the time, although we have the information to "do it right".)

~~~
alankay1
We were lucky in the ARPA/PARC communities to have both great funding, and the
time to think things through (and even make mistakes that were kept from
propagating to create bad de facto standards).

The question you are asking is really a societal one -- and about operations
that are like strip mining and waste dumping. "Hunters and gatherers" (our
genetic heritage) find fertile valleys, strip them dry and move on (this only
works on a very small scale). "Civilization" is partly about learning how to
overcome our dangerous atavistic tendencies through education and planning.
It's what we should be about generally (and the CS part of it is just a
symptom of a much larger much more dire situation we are in).

~~~
nnq
So you're rephrasing the question to mean that you see it as 'hunter-gatherer
mode' thinking (doing it in a practical and short-term economically successful
way) vs. 'civilized builder mode' thinking (doing it the way we know it should
be done), that they are antagonistic, and that because of the way our society
is structured, 'hunter-gatherer mode' thinking leads to better economic
results?

This ends up as a _pretty strong_ critique of capitalism's main idea that
market forces drive the progress of science and technology.

 _Your thinking would lead to the conclusion that we'd have to find a way to
totally reshape/re-engineer the current world economy to stop it from being
hugely biased in favor of "hunter-gatherers that strip the fertile valley dry"
... right?_

I hope that people like you are working on this :)

------
nostrademons
What turning points in the history of computing (products that won in the
marketplace, inventions that were ignored, technical decisions where the
individual/company/committee could've explored a different alternative, etc.)
do you wish had gone another way?

~~~
alankay1
Just to pick three (and maybe not even at the top of my list if I were to
write it and sort it):

(a) Intel and Motorola, etc. getting really interested in the Parc HW
architectures that allowed Very High Level Languages to be efficiently
implemented. Not having this in the 80s brought "not very good ideas from the
50s and 60s" back into programming, and was one of the big factors in:

(b) the huge propensity of "we know how to program", etc., that was the other
big factor preventing the best software practices from the 70s from being the
start of much better programming, operating systems, etc. in the 1980s, rather
than the reversion to weak methods (from which we really haven't recovered).

(c) The use of "best ideas about destiny of computing" e.g. in the ARPA
community, rather than weak gestures e.g. the really poorly conceived WWW vs
the really important and needed ideas of Engelbart.

~~~
jonathanlocke
I get (a) and (b) completely. On (c), I felt this way about NCSA Mosaic in
1993 when I first saw it and I'm relieved to hear you say this because
although I definitely misunderstood a major technology shift for a few years,
maybe I wasn't wrong in my initial reaction that it was stupid.

~~~
mmiller
I didn't begin to get it until the industry started trying to use browsers for
applications in the late '90s/early 2000s. I took one look at the "stateful"
architecture they were trying to use, and I said to myself, "This is a hack."
I learned shortly thereafter about criticism of it saying the same thing,
"This is an attempt to impose statefulness on an inherently stateless
architecture."

I kept wondering why the industry wasn't using X11, which already had the
ability to carry out full GUI interactions remotely. Why reject a real-time
interactive architecture that's designed for network use for one that insisted
on page refreshes to update the display? The whole thing felt like a step
backward.

The point where it clobbered me over the head was when I tried to use a web
application framework to make a complex web form application work. I got it to
work, and the customer was very pleased, but I was ashamed of the code I
wrote, because I felt like I had to write it like I was a contortionist. I was
fortunate in that I'd had prior experience with other platforms where the
architecture was more sane, so I didn't think this was a "good design." After
that experience, I left the industry. I've been trying to segue into a
different, more sane way of working with computers since.

I don't think any of my past experience really qualifies, with the exception
of some small aspects and experiences. The key is not to get discouraged once
you've witnessed works that put your own to shame, but to realize that the
difference in quality matters, that it was done by people rather like yourself
who had the opportunity to put focus and attention on it, and that one should
aspire to meet or exceed it, because anything else is a waste of time.

~~~
ontouchstart
How can we bring back X11 and good old interactive architecture to the
generation of programmers growing up with AngularJS and ReactJS?

Or shall we reboot good ideas with IoT?

~~~
mmiller
My reference to X11 was mostly rhetorical, to tell the story. I learned at
some point that the reason X11 wasn't adopted, at least in the realm of
business apps I was in, was that it was considered a security risk. Customers
had the impression that http was "safe." That has since been proven false, as
there have been many exploits of web servers, but I think by the time those
vulnerabilities came to light, X11 was already considered passe. It's like how
stand-alone PCs were put on the internet, and then people discovered they
could be cracked so easily.

I think a perceived weakness was that X11 didn't have a "request-respond"
protocol that worked cleanly over a network for starting a session. One could
have easily been devised, but as I recall, that never happened. In order to
start a remote session of some tool I wanted to use, I always had to log in to
a server, using rlogin or telnet, type out the name of the executable, and
tell it to "display" to my terminal address. It was possible to do this even
without logging in. I'd seen students demonstrate that when I was in school.
While they were logged in, they could start up an executable somewhere and
tell it to "display" to someone else's terminal. The thing was, it could do
this without the "receiver's" permission. It was pretty open that way. (That
would have been another thing to implement in a protocol: don't "display"
without permission, or at least without a request from the same address.)
Http didn't have this problem, since I don't think it's possible to direct a
browser to go somewhere without a corresponding, prior request from that
browser.

X11 was not the best designed GUI framework, from what I understand. I'd heard
some complaints about it over the years, but at least it was designed to work
over a network, which no other GUI framework of the time I knew about could.
It could have been improved upon to create a safer network standard, if some
effort had been put into it.

As Alan Kay said elsewhere on this thread, it's difficult to predict what will
become popular next, even if something is improved to a point where it could
reasonably be used as a substitute for something of lower quality. So, I don't
know how to "bring X11 back." As he also said, the better ideas which
ultimately became popularly adopted were ones that didn't have competitors
already in the marketplace. So, in essence, the concept seemed new and
interesting enough to enough people that the only way to get access to it was
to adopt the better idea. In the case of X11, by the time the internet was
privatized, and had become popular, there were already other competing GUIs,
and web browsers became the de facto way people experienced the internet in a
way that they felt was simple enough for them to use. I remember one
technologist describing the browser as being like a consumer "radio" for the
internet. That's a pretty good analogy.

Leaving that aside, it's been interesting to me to see that thick clients have
actually made a comeback, taking a huge chunk out of the web. What was done
with them is what I just suggested should've been done with X11: the protocol
was (partly) improved. In typical fashion, the industry didn't quite get what
should happen. They deliberately broke aspects of the OS that once allowed
more user control, and they made using software a curated service, to make
existing thick client technology safer to use.

The thinking was, not without some rationale, that allowing user control led
to lots and lots of customer support calls, because people are curious, and
usually don't know what they're doing. The thing was, the industry didn't try
to help people understand what was possible.

Back when X11 was an interesting and productive way you could use Unix, the
industry hadn't figured out how to make computers appealing to most consumers,
and so in order to attract any buyers, they were forced into providing some
help in understanding what they could do with the operating system, and/or the
programming language that came with it. The learning curve was a bit steeper,
but that also had the effect of limiting the size of the market. As the market
has discovered, the path of least resistance is to make the interface simple,
and low-hassle, and utterly powerless from a computational standpoint,
essentially turning a computer into a device, like a Swiss Army knife.

I think a better answer than IoT is education, helping people to understand
that there is something to be had with this new idea. It doesn't just involve
learning to use the technology. As Alan Kay has said, in a phrase that I think
deserves to be explored deeply, "The music is not in the piano."

It's not an easy thing to do, but it's worth doing, and even educators like
Alan continue to explore how to do this.

This is just my opinion, as it comes out of my own personal experience, but I
think it's borne out in the experience of many of the people who have
participated in this AMA: I think an important place to start in all of this
is helping people to even hear that "music," and an important thing to realize
is you don't even need a computer to teach people how to hear it. It's just
that the computer is the best thing that's been invented so far for expressing
it.

~~~
ontouchstart
I had a similar experience to yours and was comfortable coding web pages via
cgi-bin with vi. :-)

That is why now I am very interested in containers and microservices in both
local and network senses.

As a "consumer", I am also very comfortable to communicate with people via
message apps like WeChat and passing wikipedia and GitHub links around. Some
of them are JavaScript "web apps" written and published in GitHub by typing on
my iPhone. Here is an example:

[http://bigdata-mindstorms.github.io/d3-playground/ontouchstart/yaml2json](http://bigdata-mindstorms.github.io/d3-playground/ontouchstart/yaml2json)

Hope I can help more people to "hear the music" and _make_ and _share_ their
own.

------
discreteevent
Hi Alan,

A lot of the VPRI work involved inventing new languages (DSLs). The results
were extremely impressive but there were some extremely impressive people
inventing the languages. Do you think this is a practical approach for
everyday programmers? You have also recommended before that there should be
clear separation between meta model and model. Should there be something
similar to discipline a codebase where people are inventing their own
languages? Or should just e.g. OS writers invent the languages and everyone
else use a lingua franca?

~~~
alankay1
Tricky question. One answer would be to ask whether there is an intrinsic
difference between "computer science" and (say) physics? Or are the
differences just that computing is where science was in the Middle Ages?

~~~
Natanael_L
Computer science is defined by information theory, and we already have
mathematical proofs binding together information theory with the laws of
quantum physics (such as the example of the minimum energy needed to erase one
bit of entropy from memory, something which is bounded by the ambient
temperature).
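
(Concretely, that bound is Landauer's principle: the minimum energy to erase
one bit is

    E_min = k_B * T * ln 2
          ~ (1.38 x 10^-23 J/K) x (300 K) x 0.693
          ~ 2.9 x 10^-21 J

per bit at room temperature.)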

~~~
alankay1
Respectfully ... I think you missed the point of my answer.

~~~
Natanael_L
Did you intend to compare the progress and formalization of the fields? I
didn't pick up on that.

~~~
alankay1
Yes, that was what I was driving at. Anyone could do physics in the Middle
Ages -- they just had to get a pointy hat. A few centuries later after Newton,
one suddenly had to learn a lot of tough stuff, but it was worth it because
the results more than paid for the new levels of effort.

------
wdanilo
Hi Alan! I've got some assumptions regarding the upcoming big paradigm shift
(and I believe it will happen sooner rather than later):

1. focus on data processing rather than an imperative way of thinking (esp.
functional programming)

2. abstraction over parallelism and distributed systems

3. interactive collaboration between developers

4. development accessible to a much broader audience, especially to domain
experts, without sacrificing power users

In fact, the startup I'm working at aims exactly in this direction. We have
created a purely functional visual<->textual language, Luna
( [http://www.luna-lang.org](http://www.luna-lang.org) ).

By visual<->textual I mean that you can always switch between code, graph and
vice versa.

What do you think about these assumptions?

~~~
alankay
What if "data" is a really bad idea?

~~~
richhickey
Data like that sentence? Or all of the other sentences in this chat? I find
'data' hard to consider a bad idea in and of itself, i.e. if data ==
information, records of things known/uttered at a point in time. Could you
talk more about data being a bad idea?

~~~
alankay
What is "data" without an interpreter (and when we send "data" somewhere, how
can we send it so its meaning is preserved?)

~~~
richhickey
Data without an interpreter is certainly subject to (multiple) interpretation
:) For instance, the implications of your sentence weren't clear to me, in
spite of it being in English (evidently, not indicated otherwise). Some
metadata indicated to me that you said it (should I trust that?), and when.
But these seem to be questions of quality of
representation/conveyance/provenance (agreed, important) rather than critiques
of data as an idea. Yes, there is a notion of sufficiency ('42' isn't data).

Data is an old and fundamental idea. Machine interpretation of un- or under-
structured data is fueling a ton of utility for society. None of the inputs to
our sensory systems are accompanied by explanations of their meaning. Data -
something given, seems the raw material of pretty much everything else
interesting, and interpreters are secondary, and perhaps essentially, varied.

~~~
alankay
There are lots of "old and fundamental" ideas that are not good anymore, if
they ever were.

The point here is that you were able to find the interpreter of the sentence
and ask a question, but the two were still separated. For important
negotiations we don't send telegrams, we send ambassadors.

This is what objects are all about, and it continues to be amazing to me that
the real necessities and practical necessities are still not at all
understood. Bundling an interpreter for messages doesn't prevent the message
from being submitted for other possible interpretations, but there simply has
to be a _process_ that can extract signal from noise.

This is particularly germane to your last paragraph. Please think especially
hard about what you are taking for granted in your last sentence.
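
A rough sketch of the contrast in Python (every name here is invented for
illustration -- the point is only that the "ambassador" carries behavior with
it, while raw bits rely entirely on outside knowledge):

    # Raw "data": just bits, meaningless unless the receiver already
    # knows the format out-of-band.
    raw = b"4,2016"

    class Ambassador:
        """A message that carries its own interpreter along with it."""
        def __init__(self, payload):
            self._payload = payload

        def answer(self, question):
            # The receiver negotiates by asking questions, rather than
            # parsing bits whose meaning it has to guess.
            return self._payload.get(question, "cannot answer that")

    msg = Ambassador({"quarter": 4, "year": 2016})
    print(msg.answer("quarter"))   # -> 4, no shared schema needed beyond "ask"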

~~~
ontouchstart
I think the object is a very powerful idea for wrapping "local" context. But
in a networked (communication) environment, it is still challenging to handle
"remote" context with objects. That is why we have APIs and
serialization/deserialization overhead.

In the ideal homogeneous world of Smalltalk, it is less of an issue. But if
you want a Windows machine to talk to a Unix one, the remote context becomes
an issue.

In principle we could send a Windows VM along with the message from Windows
and a Unix VM (Docker?) with a message from Unix, if that is a solution.

~~~
alankay1
This is why "the objects of the future" have to be ambassadors that can
negotiate with other objects they've never seen.

Think about this as one of the consequences of massive scaling ...

~~~
ontouchstart
Along this line of logic, perhaps the future of AI is not "machine learning
from big data" (a lot of buzz words) but computers that generate runtime
interpreters for new contexts.

~~~
yanivt
When high bandwidth communication is omnipresent, is "portability" of the
interpreter really something to optimize for?

~~~
alankay1
How can you find it?

The association between "patterns" and interpretation becomes an "object" when
this is part of the larger scheme. When you've just got bits and you send them
somewhere, you don't even have "data" anymore.

Even with something like EDI or XML, think about what kinds of knowledge and
process are actually needed to even do the simplest things.

------
fchopin
Hi, Alan!

Like many here, I'm a big fan of what you've accomplished in life, and we all
owe you a great debt for the great designs and features of technologies we use
every day!

The majority of us have not accomplished as much in technology, and many of
us, though a minority, are in the top end of the age bell curve. I'm in that
top end.

I've found over the years that I've gone from being frustrated with the churn
of software/web development, to completely apathetic about it, to wanting
something else- something more meaningful, and then to somewhat of an
acceptance that I'm lucky just to be employed and making what I do as an older
developer.

I find it very difficult to have the time and energy to focus on new
technologies that come out all of the time, and, as my brain is perhaps less
plastic, I am less and less able to really get into the latest JavaScript
framework, etc.

I don't get excited anymore, don't have the motivation, ability, or time to
keep up with things like the younger folk. Also, I've even gotten tired of
mentoring them, especially as I become less able and therefore less respected.

Have you ever had or known someone that had similar feelings of futility or a
serious slowdown in their career? If so, what worked/what didn't and what
advice could you provide?

Thank you for taking the time to read and respond to everyone you have here.
It definitely is much appreciated!

~~~
mikekchar
I'm a fair bit closer to the right hand side of the age curve than the left.
My advice: Look at the brevity of Alan Kay's responses. When I was young I
would have soared past them looking for the point. Now I see that one sentence
and I weep. Why didn't anyone say that 20 years ago?

Maybe they did. I was too busy being frustrated with the churn of software
development. All my time and energy was focused on new technologies that came
out all the time. My young plastic brain spent its flexibility absorbing the
latest framework, etc.

Now that I have lost the motivation, ability and time to keep up with things
like the younger folk, I can finally listen to the older folk (hopefully while
there are still folk older than me to listen to).

These days I'm trying just to write code. All those young people have soared
past the wisdom of their elders looking for the point. It's still there. Don't
look at the new frameworks, look at what people were doing 10, 20, 30, 40, 50,
60 years ago. How does it inform what you are doing?

I hope that helps! It's a struggle for me too.

~~~
mmiller
I was fortunate to grow up during a time when Alan Kay was a well-known figure
in the personal computing world, and while what he said didn't make sense to
me at the time, it still interested me intensely, and I always wondered what
he meant by what he said. Strangely enough, looking back on my younger
experience with computers, I think I actually did get a little bit of what he
was talking about. It's just that I came to understand that little bit
independently from listening to him. I didn't realize he was talking about the
same thing. It wasn't until I got older, and got to finally see his talks
through internet video that I finally started seeing that, and realizing more
things by listening to him at length. Having the chance to correspond with
him, talk about those things more in-depth, helped as well.

The way I look at it is just take in how fortunate you are to have your
realizations when you have them (I've had my regrets, too, that I didn't "get"
them sooner), and take advantage of them as much as you can. That's what I've
tried to do.

------
fogus
I can think of no better person to ask than Alan Kay:

What are the best books relevant to programming that have nothing to do with
programming? (e.g. How Buildings Learn, Living Systems, etc.)?

~~~
alankay1
Lots ...

Molecular Biology of the Cell

Notes on the Synthesis of Form

etc

~~~
fogus
I for one would love to see the 'etc' expanded, but in any case I do
appreciate you taking the time to respond. Thanks!

~~~
alankay1
There is a very old reading list online I made for the company that is now
Accenture -- and this was the subject of a recent HN "gig". I think there is a
URL for this discussion in this AMA.

~~~
dang
It's at
[https://news.ycombinator.com/item?id=11803165](https://news.ycombinator.com/item?id=11803165).

------
16bytes
Hi Alan,

I'm preparing a presentation on how to build a mental model of computing by
learning different computer languages. It would be great to include some of
your feedback.

* What programming language maps most closely to the way that you think?

* What concept would you reify into a popular language such that it would more closely fit that mapping?

* What one existing reified language feature do you find impacts the way you write code the most, especially even in languages where it is not available?

~~~
alankay1
I think I'd ask "What programming language design would help us think a lot
better than we do now (we are currently terrible!)?"

Certainly, in this day and age, the lack of safe meta-definition is pretty
much shocking.

~~~
dang
Could you give an example of what you mean by "safe meta-definition"? I'd like
to understand this better.

~~~
alankay1
"Meta is dangerous" so a safe meta-language within a language will have
"fences" to protect.

(Note that "assignment" to a variable is "meta" in a functional language (and
you might want to use a "roll back 'worlds' mechanism" (like transactions) for
safety when this is needed.)

This is a parallel to various kinds of optimization (many of which violate
module boundaries in some way) -- there are ways to make this a lot safer
(most languages don't help much)

~~~
edejong
I've always felt that the meta space is too exponential or hyper to mentally
represent or communicate. Perhaps we need different lenses to project the
effects of the meta space on our mental model. Do you think this is why Gregor
decided to move towards aspects?

~~~
alankay1
I don't think Aspects is nearly as good an idea as MOP was. But the
"hyperness" of it is why the language and the development system have to be
much better. E.g. Dan Ingalls put a lot of work into the Smalltalks to allow
them to safely be used in their own debugging, even very deep mechanisms. Even
as he was making these breakthroughs back then, we were all aware there were
further levels that were yet to be explored. (A later one, done in Smalltalk,
was the PIE system by Goldstein and Bobrow, one of my favorite meta-systems.)

~~~
phaedrus
Aside from metaprogramming, from reading the "four reports" document that is
the first Google link, it seems PIE also addresses another hard problem. In
any hierarchically organized program, there are always related pieces of code
that we would like to maintain together, but which get ripped apart and spread
out because the hierarchy was split according to a different set of aspects.
You can't get around this problem because if you change what criteria the
hierarchy is split on in order to put these pieces near each other, now you've
ripped apart code that was related on the original aspect. I've come to the
conclusion that hierarchical code organization itself is the problem, and we
would be better served by a way to assemble programs relationally (in the
sense of an RDBMS). It seems like PIE was in that same conceptual space. Could
you comment on that or elaborate more on the PIE system? Thanks.

~~~
alankay1
Good insights -- and check out Alex Warth's "Worlds" paper on the Viewpoints
site -- this goes beyond what PIE could do with "possible worlds" reasoning
and computing ...
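
(For anyone who hasn't read it: the paper's "worlds" are first-class contexts
that fence off side effects until you decide to keep them. A much-simplified
Python sketch of the flavor, not the paper's actual semantics:)

    class World:
        def __init__(self, parent=None):
            self.parent = parent
            self.bindings = {}  # changes local to this world

        def sprout(self):
            # A child world whose reads fall through to its parent.
            return World(parent=self)

        def get(self, name):
            w = self
            while w is not None:
                if name in w.bindings:
                    return w.bindings[name]
                w = w.parent
            raise KeyError(name)

        def set(self, name, value):
            self.bindings[name] = value  # invisible outside this world

        def commit(self):
            # Propagate this world's changes into the parent.
            self.parent.bindings.update(self.bindings)
            self.bindings.clear()

    top = World()
    top.set("x", 1)
    trial = top.sprout()
    trial.set("x", 99)         # a fenced-off, tentative side effect
    assert top.get("x") == 1   # the parent is untouched
    trial.commit()             # keep the change ...
    assert top.get("x") == 99  # ... or just drop `trial` to discard it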

~~~
phaedrus
This is a very interesting paper. Its invocation of state space over time as a
model of program side effects reminds me of an idea I had a couple years ago:
if you think of a program as an entity in state-space where one dimension is
time, then "private" object members in OO-programming and immutable values in
functional programming are actually manifestations of the same underlying
concept. Both are ways to create fences in the state-space-time of a program.
Private members create fences along a "space" axis and functional programming
creates fences along the "time" axis.

~~~
jacques_chester
And you get to use "relational" and "relativity" side by side in a discussion.

A lot of interesting things tend to happen when you introduce invariants,
including "everything-is-a" invariants. Everything is a file, everything is an
object, everything is a function, everything is a relation, etc.

------
CharlesMerriam2
Many mainstream programming tools feel like they are moving backwards. For
example, Saber-C of the 1980s allowed hot-editing without restarting processes
and graphical data structures. Similarly, the ability to experiment with
collections of code before assembling them into a function was an advance.

Do you hold much hope for our development environments helping us think?

~~~
Joeri
Hot-editing updates behavior while keeping state, causing wildly unpredictable
behavior given the way objects are constructed from classes in today's
languages. The current approach to OO is to bootstrap fresh state from an
external source every time the behavior changes so guarantees can be made
about the interaction between behavior and state. It seems to me the
equivalent of using a wheelchair because you might stumble while walking: the
concern is genuine, but the cure is possibly worse than the affliction.

I don't know what the solution is. Perhaps a language with a fundamentally
different view of objects, maybe as an ancestry of deltas of state/behavior
pairings, somewhat like prototypes but inheriting by versioning and
incrementally changing so that state and behavior always match up but still
allowing you to revert to a working version. Likely Alan has some better ideas
on what sort of language we need.

~~~
aidos
I use hot-editing in python by default and I find it incredibly useful (now I
feel crippled when I'm on a system without it). There are times when I need to
reload the state completely but it's pretty rare (changing something that uses
metaclasses, like sqlalchemy, is one such place).

Maybe there's something about the style I've adopted that lends itself more to
hot-editing but it's definitely a tool I'd hate to be without.

~~~
jasonamyers
I'm super interested in how you do that! Can you share at all?

~~~
aidos
Yes! I can! I quickly made a video with super crappy audio quality last time
it came up -
[https://www.youtube.com/watch?v=k-mAuNY9szI](https://www.youtube.com/watch?v=k-mAuNY9szI)

It's pretty poor quality listening but you should get the point. You can send
me an email (see my profile) if you wanted to go through it in more detail.
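
(For the curious, one pattern that stock Python supports -- not necessarily
what the video shows; `mymodule` and `Thing` below are hypothetical -- is
importlib.reload plus rebinding the class of live instances:)

    import importlib
    import mymodule          # hypothetical module being developed

    obj = mymodule.Thing()
    obj.counter = 42         # accumulated state we'd like to keep

    # ... edit mymodule.py in your editor, then:
    importlib.reload(mymodule)

    # Existing instances still point at the old class object, so rebind:
    obj.__class__ = mymodule.Thing

    # `obj` keeps its state (counter == 42) but now runs the new methods.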

------
kartD
Hi Alan, What do you think about the current state of language design (Swift,
Rust, Go)? Anything that makes you happy/annoys you?

~~~
alankay1
I think all languages today annoy me -- just put me down as a grump. They seem
to be at a very weak level of discourse for the 21st century. (But a few are
fun when looked at from the perspectives of the past e.g. Erlang ...)

~~~
kartD
I see, is there anything you see today that you like? (Coq, Idris maybe?)

~~~
Ericson2314
While the methods are different from what Alan has espoused in the past, I'd
like to believe the goals are very well aligned.

~~~
alankay1
But are "the good goals of the past" good enough goals for today and tomorrow?

~~~
Ericson2314
I would hope not. But, based on your work and other comments here, I think
we'd agree that our collective sights have been set lower for no good reason.
If the goals of the past cannot be fully realized, what hope can we have for
those of today and tomorrow?

Concretely, I follow existing languages like Agda, Lean, Haskell, and Rust for
pushing the envelope on language semantics, compiler ingenuity, and library
abstractions; and [http://unisonweb.org/](http://unisonweb.org/) and
[http://www.lamdu.org/](http://www.lamdu.org/) for pushing the envelope on the
programming workflow itself. While I don't believe editors and languages are
orthogonal problems, I do believe there is enough independence to make
pursuing these fronts separately in parallel worthwhile.

[Of all of those, [http://unisonweb.org/](http://unisonweb.org/) might
especially fit your interests, if I understand them correctly.]

~~~
ddimitrov
I tried to skim-read through the Unison About page, but all I saw was an
under-designed variation of JetBrains MPS for a single language. I assume you
have spent longer with the project - do you care to summarize the differences?

------
trsohmers
Hi Alan,

We met at a retreat last fall, and it was a real treat for me to hear some
fantastic stories/anecdotes about the last 50 years of computing (which I have
only been directly involved with for about 1/10th of). Another one of my
computing heroes is Seymour Cray, whom we talked about a bit, along with your
time at Chippewa Falls. While a lot of HN'ers know about you talking about the
Burroughs B5000, I (and I bet most others) would have had no idea that you got
to work with Seymour on the CDC 6600. Do you have any particular Seymour
Cray/6600 stories that you think would be of interest to the crowd?

Thanks again for doing this, and I hope to be able to talk again soon!

~~~
alankay1
Seymour Cray was a man of few words. I was there for three weeks before I
realized he was not the janitor.

The "Chippewa OS" is too big a story for here, but it turned out that the
official Control Data software team failed to come up with any software for
the 6600! Hence a bunch of us from Livermore, Los Alamos, NCAR, etc. -- the
places that had bought the machine -- were assembled in Chippewa Falls to "do
something".

Perhaps the most interesting piece of unofficial software was a multitasking
OS with graphical debugger for the 6600 that had been written by Seymour Cray
-- to help debug the machine -- in octal absolute! I had the honor of writing
a de-assembler for this system so we ordinary mortals could make changes and
add to it (this was an amazing tour de force given the parallel architecture
and multiple processes for this machine). And it was also a good object lesson
for what Cray was really good at, and what he was not so good at (there were
some really great decisions on this machine, and some really poor ones -- both
sets quite extreme)

------
logicallee
1. What do you wish someone would ask you so that you could finally share
your thoughts, but nobody has broached as a subject?

2. (This question is about interactive coding, as a dialogue).

Human dialogs (conversations) are interactive. I think in the past computers
were limited, and computer languages had to be very small (as compared with
any human language + culture) so that a programmer could learn what the
computer could do. But now that services can be connected (programming as a
service?), would it make sense to have a dialogue? My example is that in the
1980s it wouldn't have made sense for any programming language to have a
function called double() that just multiplies by 2. There's * 2 for that.

But in 2016, it makes sense for a beginner to write "and double it" and
considerably less sense for a beginner to have to learn x *= 2 if they wanted
to double a number.

Human language is also ambiguous. It would make sense for an interactive
language to ask:

"Did you mean, set x equal to x multiplied by 2?" which most people would
select, but maybe someone would select

"Did you mean, set x equal to the string "x" appended to the string "x"?"

For these reasons: do you think it would make sense to have an interactive
programming language that is connected with a server you "talk" with
interactively?

Or should programmers still have to learn a fixed programming language that
has no room for interpretation, but instead a strict meaning?

Among other things, this means programmers can never write "it", "that",
"which" to refer to a previous thing (since the referent could be ambiguous if
the compiler doesn't confirm.) But every human language includes such
shorthand.

I'd love to hear your thoughts regarding a connected, interactive programming
process similar to the above (or just on whatever lines).
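
(A toy sketch of what such a clarifying dialog might look like in Python; the
phrase table and its glosses are invented for the example:)

    # Each ambiguous phrase maps to (human-readable gloss, interpretation).
    AMBIGUOUS = {
        "double it": [
            ("set x equal to x multiplied by 2", lambda x: x * 2),
            ("set x equal to the string x appended to the string x",
             lambda x: str(x) + str(x)),
        ],
    }

    def interpret(phrase, x):
        candidates = AMBIGUOUS[phrase]
        for i, (gloss, _) in enumerate(candidates):
            print(f"{i}: Did you mean, {gloss}?")
        choice = int(input("choice> "))
        return candidates[choice][1](x)

    # interpret("double it", 7) prompts, then returns 14 or "77".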

~~~
alankay1
1. I actually don't think this way -- my actual interior is a kind of "hair-
ball" and so questions are very helpful.

2. Let's do a dialog about this. Note that taking the process far enough
starts resembling prayer to a deity and gives rise to many kinds of hedging.
Math is completely expressible in ordinary language, but instead, the attempt
to make it less ambiguous leads to conventions that have to be learned.

That said, have you thought about (say) objects needing to negotiate meaning
with each other?

~~~
logicallee
>Let's do a dialog about this.

Okay! I'm up for a dialog with Alan Kay if Alan Kay is up for a dialog with me
:)

>gives rise to many kinds of hedging.

100% agreed. I have a great example of hedging. Human languages are _usually_
quite ambiguous. But there's an exception: when there is a legal document,
then if it gets to court the other party can take any ambiguity and turn it
around. (For example argue that an "it" refers to other than the closest
possible referent.) As a result, legal documents are very unambiguous. This
makes them very, very explicit. Implicit contexts, and human culture, are
difficult - they get lost with time. If you've ever read Shakespeare's
dialogues, they're hard to understand without heavy glossing just 400 years
later. But we actually have a copy of Shakespeare's will. Here it is:
[http://www.cummingsstudyguides.net/xWill.html](http://www.cummingsstudyguides.net/xWill.html)

Without a single gloss or footnote and _without_ modernizing the spelling (as
we do for plays) you can understand nearly 100.00% of it 400 years later.
Without even modernized spelling, you could likely turn it into "code" (it's
very similar to code) with complete unambiguity.

But the actual "conversation" that led to that, in the room where Shakespeare
was talking with the lawyer, if we had a transcript, would be likely
impenetrable to us without careful reading and maybe footnotes -- just like
the dialogue of Shakespeare's plays. Imagine calling up your lawyer and
leaving a voicemail describing what you want in some document. That might lead
to a brief dialog and then a draft for your approval. That dialog would be
hard to understand, possibly even for an outsider today.

So the question is: is there room for an 'agent' (service) that builds up a
shared context with someone interacting with it, then produces something for
outside consumption? That might mean that the user can say "it" or "unless"
but the service turns "it" into a referent (and makes sure it got the right
one) and turns "unless" into "; if ( not ) { }" for outsiders. I say this
because this is only one interpretation of "unless". Another interpretation is
that an exception might happen that you want to address and then not
continue...

>Math is completely expressible in ordinary language, but instead, the attempt
to make it less ambiguous leads to conventions that have to be learned.

This is very true and extremely interesting. When people rearrange an equation
in symbolic form (crossing out common factors, etc), they do so using complex
symbol processing that isn't linguistic in nature. I think it's a different
tangent from the one I'm asking about - after all, would it be common for
anyone to write "The limit of the function f of x, as x tends to 0" instead of
the common lim notation?

So the line of thinking with symbolism is extremely powerful, and after all
isn't that why we have whiteboards that don't have neat lines on them for you
to write sentences into? Diagrams / symbols / pictures are all very powerful
aspects of thinking. This part is tangential to what I was thinking of.

It would be interesting, though, if the interactive process could produce a
diagram for you to rearrange if you wanted. I don't know if you know
electrical engineering (probably!) but imagine being able to ask an interactive
service "I'd like a simple circuit that lights an LED from a battery", and you
get one -- as well as some questions about whether you really didn't need any
fuses in it? what sized batteries you were talking about? that it's a DC
circuit right? And so forth. It's a separate question whether you could
rearrange the results.

Of course, if you are allowed to say something like "I'd like a circuit around
an ARM processor where all the components cost under $30 in quantity under
1000 including PCB setup costs". That's like praying to a deity!

Is there room for some level of interaction between "double x" and praying?

Wolfram Alpha certainly suggests there is. Although it doesn't ask you
anything back / isn't interactive, I've certainly been shocked at some of the
things it was able to interpret.

For example, I could ask it how far sound travels in 10 ms, so that I could
judge how large of a perceived physical offset effect introducing 10 ms of
latency would cause. Well, I just tried it again so I could link to you, and
it didn't get "how far does sound travel in 10 ms", it didn't get "distance
sound travels in 10 ms", but on my third try "speed of sound in air * 10 ms"
it got me the answer - its interpretation was:[1]

>speed of sound in dry air at 20 °C and 1 atmosphere pressure×10 ms
(milliseconds)

and it gave me 3.432 meters.
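
(The arithmetic is easy to confirm, taking 343.2 m/s as the speed of sound in
dry air at 20 °C:)

    speed_of_sound = 343.2         # m/s, dry air at 20 degrees C
    print(speed_of_sound * 0.010)  # 10 ms -> 3.432 meters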

What is interesting is that there was nothing _interactive_ in this process,
just me guessing until it got what I meant. It didn't ask me anything back.
For me, the three phrases I just quoted are equal. For Wolfram Alpha, it
misinterpreted the first two quite badly, and got the third one easily.

So the question is - could such a process be applied to programming? Could the
user try to write "lowest common factor of a and b" have the compiler
completely miss, try "least common factor of a and b", have the compiler
completely miss, try "least common multiple of a and b" and finally have the
compiler get it? Because that's not how programming works today. At all.
(Well, in actual practice it kind of is thanks to Google - but it's not what
goes on in the IDE.)

So it would be interesting to know if some progress could be made along these
lines.

>have you thought about (say) objects needing to negotiate meaning with each
other?

No, it's a tough one. For simplicity, I thought of the current output being
boilerplate code (like existing C/C++ code), so that the headache you just
mentioned doesn't need to be thought about :-D

[1]
[http://www.wolframalpha.com/input/?i=speed+of+sound+in+air+*...](http://www.wolframalpha.com/input/?i=speed+of+sound+in+air+*+10+ms)

------
asymmetric
Do you agree with the POV that sees Erlang as implementing some of the core
tenets of OOP, namely message passing and encapsulation of state? (Cf. for
example [http://tech.noredink.com/post/142689001488/the-most-
object-o...](http://tech.noredink.com/post/142689001488/the-most-object-
oriented-language))

------
ducklord
Hey Alan, you once said that Lisp is the greatest single programming language
ever designed. Recently, with the emergence of statically typed languages
like Haskell and Scala, has that changed? Why do you think, after being around
for so long, Lisp isn't as popular as mainstream languages like Java, C or
Python? And lastly, what are your thoughts on MIT's switch to using Python
instead of Scheme to teach their undergraduate CS program?

~~~
alankay1
I should clarify this. I didn't exactly mean as a language to program in, but
as (a) a "building material" and (b) especially as an "artifact to think
with". Once you grok it, most issues in programming languages (including
today) are much more thinkable (and criticizable).

The second question requires too long an answer for this forum.

~~~
agumonkey
Would it be better on a mailing list? I remember long threads on FONC.

------
emaringolo
Do you still see an advantage of using Smalltalk (like Squeak/Pharo) as a
general purpose language/tool to build software or do you think that most of
its original ideas were somehow "taken" by other alternatives?

~~~
alankay1
Smalltalk in the 70s was "just a great thing" for its time. The pragmatic fact
of also wanting it to run in real time fast enough for dynamic media and
interactions, and to have it fit within the 64Kbyte (maybe a smidge more)
Alto, rendered it not nearly as scalable into the future in many dimensions as
the original ideas intended.

We have to think about why this language is even worth mentioning today
(partly I think by comparison ...)

~~~
emaringolo
I think it is the only language that enables a single individual to understand
a big and complex system like the development environment itself.

~~~
sebastianconcpt
Mmm yeah. The exploration, instant feedback and minimalist syntax are features
that I wish more people would value.

I "secretly" think that Self would have achieved that too (and even better
because is not constrained to the artificial abstraction of classes) but it
never had a chance due to its unsuccessful IDE.

Our cognitive system converges too much to objectify things to ignore. We
compulsively do that. There is something about objects that fits our cognitive
system better. It's a waste if we don't take full advantage of it.

~~~
philippeback
There is a gang here.

------
tlack
Have you spent any time studying machine learning and how it might affect the
fundamental ways we program computers? Any thoughts on how the tooling of
machine learning (TensorFlow, ad hoc processes, etc) could be improved?

~~~
alankay1
Too large a subject -- sorry!

------
erring
In a recent talk, Ivan Sutherland spoke along the lines of, “Imagine that the
hardware we used today had _time_ as a first-class concept. What would
computing be like?” [1]

To expand on Sutherland's point: Today's hardware does not concern itself with
reflecting the realities of programming. The Commodore Amiga, which had a
blitter chip that enabled high-speed bitmap writes with straightforward
software implementation, brought about a whole new level in game programming.
Lisp machines, running Lisp _in silicon_ , famously enabled an incredibly
powerful production environment. Evidence is mounting that the fundamental
concepts we need for a new computing have to be ingrained in silicon, and
programmers, saved from the useless toil of reimplementing the essentials,
should be comfortable working in the (much “higher” and simpler) hardware
level. Today, instead of striving for better infrastructure of this sort, we
are toiling away at building bits of the perpetually rotting superstructure in
slightly better ways.

The more radical voices in computer architecture and language design keep
asserting in their various ways that a paradigm shift in how we do
infrastructure will have to involve _starting over with computing as we know
it_. Do you agree? Is it impossible to have time as a first-class concept in
computing with anything short of a whole new _system of computing_ , complete
with a fundamentally new hardware design, programming environment and
supporting pedagogy? Or can we get there by piling up better abstractions on
top of the von Neumann baggage?

[1] This is from memory. Apologies for a possible misquotation, and
corrections most welcome.

~~~
juanuys
Yes! I want this!

The problem is that distributed systems can't agree on what the time is.
Remember how Google tried to fix this with atomic clocks and GPS? [1]

My hunch is that soon timing devices on distributed systems will reach
instantaneous consensus by way of quantum entanglement.

[1] [http://www.theverge.com/2012/11/26/3692392/google-spanner-
at...](http://www.theverge.com/2012/11/26/3692392/google-spanner-atomic-
clocks-GPS)

------
_mhr_
What is HARC currently working on? Is it for now a continuation of the old CDG
Labs / VPRI projects or are there already new projects planned / underway?

Also, how do you organize and record your ideas? Pen and paper? Some kind of
software? What system do you use? I ask because I'm fascinated by the idea of
software that aids in thought, collaboration, and programming - meshing them
all together.

I've seen elsewhere
([https://news.ycombinator.com/item?id=11940007](https://news.ycombinator.com/item?id=11940007))
that you agreed that many mainstream "paradigms" should be "retired". Retiring
implies that you replace. In particular, I'm curious what you would like to
see filesystems or the Unix terminal replaced with?

------
ontouchstart
Hi Alan,

We know you are not a big fan of the web. Regardless of how we got here, what
is your
view on how we should address the real world decentralization problems in the
context of
[http://www.decentralizedweb.net/](http://www.decentralizedweb.net/) ?

~~~
alankay1
One of several major mistakes with the web has to do with thinking that the
browser is some kind of application "with features" -- if you think of the
actual scale of the Internet (and they didn't) you realize that at the very
least, the browser, etc, has to be more like an operating system, and with as
few features as possible: really to safely run encapsulated modules and deal
out resources. It is crazy that after more than 20 years of the web that this
CS101 principle still can't be done ... and it runs on machines that can do
it....

~~~
ontouchstart
Thanks for the insight.

In your point of view, on top of the current hardware infrastructure, how do
we build a decentralized network operating system (contrary to the old
centralized timesharing system)?

~~~
alankay1
Take a look at the Internet itself -- and then take a look at Dave Reed's 1978
PhD thesis at MIT (can be found via the CSAIL website) -- we used many of
these ideas in the Croquet project

~~~
ontouchstart
Alan, is this the right direction we are looking at?

[https://en.m.wikipedia.org/wiki/Croquet_Project](https://en.m.wikipedia.org/wiki/Croquet_Project)

[https://en.m.wikipedia.org/wiki/Open_Cobalt](https://en.m.wikipedia.org/wiki/Open_Cobalt)

~~~
alankay1
I think you can find much of what is worth knowing about these projects
starting there.

~~~
ontouchstart
Now we feel like Steve Jobs visiting PARC. :-)

------
adamnemecek
What are some opinions (CS related or not) that you completely changed your
mind on?

------
walterbell
Do you see further research paths for metacompilers [1] to reduce code and
enable customizable user interfaces?

With containers and hypervisors now in desktop OSes (Windows, Mac OS, Linux),
could an open-source research OS (e.g. KSWorld) be packaged for developers and
end-users who want to test your team's experimental UIs?

Is there long-term value in "private machine learning" where some data and
algos are focused on user/owner interests, with "public machine learning"
providing variably-trusted signals to user-owned algos for intelligence
augmentation?

[1]
[https://news.ycombinator.com/item?id=8297996](https://news.ycombinator.com/item?id=8297996)

------
defvar
Hi Alan, do you still do coding (any kind of, for any purpose) these days? If
you do, what's your comfortable setup (say, language, editor, tools and etc)?

~~~
DeepAndDark
I would also like to know if he uses emacs or vim, or whatever. And if he
thinks the editor is relevant.

~~~
alankay1
Yikes!

~~~
syngrog66
possible takeaway: even Alan Kay knows not to wade into the religious war that
is vi-or-emacs ;-)

~~~
mmiller
I think one thing that scared him away was the word "relevant"... VIM might
have as well (very primitive). Emacs tries to do some things that are worth
doing, but it brings the user into "textland," not "systemland."

~~~
ontouchstart
Are we in a "textland" or a "systemland" when we have these symbolic
conversations on HN across space and time?

~~~
mmiller
More "textland" than anything else, though that has to do with the medium
we're using to communicate "across space and time." What Alan has advocated is
that the medium we're using in this particular instance should be a version of
"systemland."

~~~
ontouchstart
Imagine we had this "systemland" in place; what kind of communication
experience might we have?

(For example, I came back to this thread from email notification.)

~~~
mmiller
It's difficult at this point to come up with an example, since we don't have
it yet (that I know of), but to give you an idea, take a look at Lively Kernel
[https://www.lively-kernel.org/](https://www.lively-kernel.org/)

If you want a demo, you can take a look here:

[https://youtu.be/QTJRwKOFddc](https://youtu.be/QTJRwKOFddc)

~~~
ontouchstart
Yes I have seen it when it first came out. Very impressive.

However, IMHO, there are a few limitations with these "desktop" type of
environments:

1. Limited I/O capability.

2. Limited network capability.

3. Limited mobility.

Following the trails outlined by Bret Victor
([http://worrydream.com/TheHumaneRepresentationOfThought/note....](http://worrydream.com/TheHumaneRepresentationOfThought/note.html)),
it seems that we have yet to find the "systemland" for the future.

------
panic
Hi Alan,

There's a lot of economic pressure against building new systems. Making new
hardware and software takes longer than building on the existing stuff. As
time goes on, it gets harder and harder to match the features of the existing
systems (imagine the effort involved in reimplementing a web browser from
scratch in a new system, for example), not to mention the massive cost
benefits of manufacturing hardware at a large scale.

Many people working in software realize the systems they use are broken, but
the economics discourage people from trying to fix them. Is it possible to fix
the economics? Or maybe we need more people able to resist this pressure?

~~~
alankay1
One start to this is not to do "a web browser" -- or other software -- with
such bad initial conceptions (see elsewhere above). There was nothing
necessary about this.

We have a technology in which hacks are easy and can be really easily
multiplied by the billions and sent everywhere. If we translate this into the
world of medicine and disease and sanitation, what should we do? How far
should we go?

(But it is still really hard for me to understand the nature and force of the
resistance to "real objects" -- which -- as actual virtual encapsulated
machines -- were invented to deal with scaling and replacement to parallel the
same kinds of thinking we did for the Internet with physical objects (called
"computers").)

Yikes!

~~~
panic
I guess I'm thinking about this from the perspective of someone trying to make
a computer for the general public. How will you convince anyone to buy and use
a computer that can't browse the web?

(And I doubt this is a problem that will go away with time; the web is big
enough that it seems unlikely to go anywhere anytime soon.)

~~~
mmiller
To provide a little hope, I think it should be pointed out that the Frank
project (VPRI) achieved web browsing that is compatible with the existing web
protocol, using real objects. What it allowed is an extension of browsing on
the web. So, it's not as if the two (good architecture and bad architecture)
can't both exist in the same space, and be used at the same time. I think the
key to answering your question is asking whether people in the general public
will find the impetus to understand the limited nature of bad architecture, and (on
one possible track) either use the good architecture to make the experience
better, or (on another possible track) come up with a better computing/content
architecture that does away with the web as we know it altogether?

------
torstenB
Besides objects, one truly revolutionary idea in Smalltalk is the uniformity
of meta facilities - an object knowing about itself and being able to tell
you.

I see so many dev resources burnt just because people build boring UIs or
persistence bindings by wiring them up MANUALLY in traditional languages. All
this is a no-brainer when enough meta information (objects and relations) is
available and a program is reflected as data, as in Smalltalk (not dead text).
You can transform not only data but also your code. Pharo now takes some
additional steps to enhance reflection (metalinks, slots, etc).

What do you see as the next steps in using metadata for (meta)programming?
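
(A tiny Python analogue of the manual-wiring point: given enough meta
information, a persistence binding can be derived instead of hand-wired. The
type mapping and class below are invented for illustration:)

    from dataclasses import dataclass, fields

    @dataclass
    class Person:
        name: str
        age: int

    SQL_TYPES = {str: "TEXT", int: "INTEGER"}

    def create_table_sql(cls):
        # Derive the schema from the class's own meta information.
        cols = ", ".join(f"{f.name} {SQL_TYPES[f.type]}" for f in fields(cls))
        return f"CREATE TABLE {cls.__name__.lower()} ({cols});"

    print(create_table_sql(Person))
    # CREATE TABLE person (name TEXT, age INTEGER);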

~~~
alankay1
I think one of the biggest blindnesses of our field is scaling (and I'm not
sure quite why). This leads to an enormous amount of effort at roughly the
same scales as some good ideas from decades ago. (I think my background in
molecular biology -- which continues to a very small extent -- helped this a
lot. At some point one has to grapple with what is more or less going on and
why it more or less works so amazingly well.)

What would a programming language be like if we actually took the many
dimensions of scaling seriously ... ?

~~~
heurist
Would you consider the actor programming paradigm to be a good scalable model?
It largely matches what I observe in both nature and where we seem to be
headed with software engineering (containers in the cloud, near-trivial
redundancy, stability, and scalability when properly designed). When I
consider society I see a complex network of distributed actors, and when I
consider my mind/brain I see the same. At this point in my philosophical
development I am definitely resonating with the actor model, but I'm sure you
are more familiar with this paradigm than I am - if not actors, where would
you recommend searching?

~~~
alankay1
I have _every reason_ to like Actors! But think about scaling for a while, ...

~~~
heavenlyhash
Can you share some more seeds of thought on this? I sense you have better
questions here than anything I can yet think to ask, myself :)

I'll take a crack at some things:

Actors are great patterns for encapsulating concurrency, but struggle in
scaling --

because failure domains and persistence/forgetfulness on an intergalactic
network aren't always lined up on the same borders as the actors?

What are some of the dimensions of scaling that you would identify as
under-understood?

~~~
alankay1
Just to bring up one possibly helpful analogy:

Suppose we have a good model of atoms -- how much of this will be a good way
to think about living systems? (Or should we come up with better and more
useful architectural ideas that are a better fit to the scales we are trying
to deal with -- hint: in biology, they are not like atomic physics or even
much of chemistry ...)

A point here is that trying to make atomic physics better will only help a
little if at all. So trying to make early languages "much better" likely
misses most if not all of the real progress that is needed on an Internet or
larger scale.

(I personally think that much enterprise software even today needs
architectural ideas and languages that are different in _kind_ from the
languages of the 60s and 70s (meaning most of the languages used today).)

~~~
heavenlyhash
(preface: Riffing wildly here -- and may have gone in a different direction
than your analogy's original intent --)

So, regardless of whether or not actors are a good pattern, what we need is
scale-free patterns?

I can see how getting hung up on actors as a programming language feature
would impede that.

How can we make the jump to scale-free, though?

- With actors, historically, we seem to have gravitated to talking about them
in terms of a programming language feature or design problem -- while in some
sense it implies "message passing", we usually implement the concept at scales
of small bits of an in-memory process.

- With processes in the unixish family, we've made another domain with
boundaries, but the granularity and kind of communication that are well-
standardized at the edges of process aren't anywhere near what we expect from
the languages we use to craft the _interior_ of processes. And processes don't
really compose, sadly.

- With Linux cgroups, things finally go in a tree. Sorta. (It's _rough_
trying to stack them in a way where someone arbitrarily deep in the tree can't
decide to take an axe directly to the trunk and topple the whole thing). Like
processes, we're still handling granularity of failure domains here (better
than nothing), but not defining any meaningful or scalable shepherding of
communication. And we still haven't left the machine.

I'm sold that we need some sort of architectural ideas that transcend these
minutiae and are meaningful at the scale of the-internet-or-larger. But what
patterns are actually scalable in terms of getting many systems to
consensually interoperate on them?

I'm twitchy about trying to define One True Pure Form of message passing, or
even intent passing, which seems to be a dreamier name that still converges at
the same limits when implemented.

But I dream that there's a few true forms of concurrent coordination pattern
that really simplify distributed and asynchronous systems, and perhaps _are_
scale-free. Perhaps we haven't hit them yet. Words like "actor" and "agent"
(divorced of e.g. programming language library) sometimes seem close -- are
there other concepts you think are helpful here?

~~~
alankay1
One of many problems with trying to use Unix as "modules" and "objects" is
that they have things that aren't objects (like strings, etc) and this makes
it difficult for arranging various scales and extensions of use.

It's not so much "scale-free" but this idea I mentioned elsewhere of "find the
most difficult thing you have to do really nicely" and then see how it scales
down (scaling up nicely is rarely even barely possible). This was what worked
with Smalltalk -- I came up with about 20 examples that had to be "nice", and
some of them were "large" (for their day). We -- especially Dan Ingalls and
Ted Kaehler -- were able to find ways to make the bigger more general things
small and efficient enough to work uniformly over all the scales we had to
deal with.

In other parts of this AMA I've mentioned some of the problems when extended
to the whole world (but go for "galactic" to help thinking!)

Almost nothing in today's languages or OSs is in the current state of
"biology".

However, several starts could be to relax from programming by message sending
(a tough prospect in the large) to programming by message receiving, and in
particular to program by intent/meaning negotiation.

And so forth.

Linda was a great idea of the 80s, what is the similar idea scaled for 40
years later? (It won't look like Linda, so don't start your thinking from
there ...)
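
(For readers who never saw Linda: the whole coordination model was a few
operations on a shared tuple space, with senders and receivers fully
decoupled. A toy, single-process Python sketch just to show the primitives --
the real thing was distributed, with blocking reads:)

    class TupleSpace:
        def __init__(self):
            self.tuples = []

        def out(self, t):
            # Publish a tuple for any process to find.
            self.tuples.append(t)

        def _match(self, pattern, t):
            # None acts as a wildcard in the pattern.
            return len(pattern) == len(t) and all(
                p is None or p == v for p, v in zip(pattern, t))

        def inp(self, pattern):
            # Linda's "in": find a matching tuple and remove it.
            for t in self.tuples:
                if self._match(pattern, t):
                    self.tuples.remove(t)
                    return t
            return None

        def rd(self, pattern):
            # Linda's "rd": find a matching tuple, leave it in place.
            return next((t for t in self.tuples
                         if self._match(pattern, t)), None)

    ts = TupleSpace()
    ts.out(("job", 1, "compile"))
    print(ts.rd(("job", None, None)))  # ("job", 1, "compile")
    print(ts.inp(("job", 1, None)))    # removes and returns it

Note that producer and consumer never name each other; the space itself
mediates, which is part of why the idea is interesting for scaling.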

Etc.

------
iyn
Thanks for doing this AMA.

Q: How do you think we can improve today's world (not just with technology)?
What do you think is our species' way forward? How as a civilization can we
'get to a higher level'? Specifically, I'm interested in your views on ending
poverty, suffering, not destroying the Earth, improving our political and
social systems, improving education, etc. I understand that these are very
broad topics without definitive answers, but I'd love to hear some of your
thoughts about these.

Thank you and I just want to mention that I appreciate your work.

~~~
alankay1
"What Fools these Mortals be!" Puck meant that we are easy to fool. In fact we
like to be fooled -- we pay lots of money to be fooled!

One way to look at this is that the most important learning anyone can do is
to understand "Human beings as if from Mars" \-- meaning to get beyond our
fooling ourselves and to start trying to deal with what is dangerous and
counterproductive in our genetic (and hence cultural) makeups. This is quite
different than what most schools think they are supposed to be about -- but
the great Jerome Bruner in the 60s came up with a terrific curriculum for 5th
graders that was an excellent start for "real anthropology" in K-5.

~~~
dang
> _the great Jerome Bruner in the 60s came up with a terrific curriculum for
> 5th graders that was an excellent start for "real anthropology"_

Was that the MACOS program?

[https://en.wikipedia.org/wiki/Man:_A_Course_of_Study](https://en.wikipedia.org/wiki/Man:_A_Course_of_Study)

I was casting around recently looking for something to post to HN about it,
but there's surprisingly little on the web. (I haven't yet watched the
National Film Board documentary on it, which was the only substantive source I
could find.)

~~~
alankay1
Yes, it was MACOS. A very good book about this is "Politics in the Classroom".

There is actually quite a lot of stuff on MACOS on the web, most of the
materials, etc.

Bruner wrote a number of outstanding essays as part of the MACOS design
project ...

------
pizza
Hi Alan, how do you think that object-oriented programming and distributed
computing will intertwine in the not-so-far future?

~~~
alankay1
I've been constantly surprised by how what I called "object-oriented" and
"system-oriented" got neutered into Abstract Data Types, etc. (I think
because people wanted to retain the old ways of programming with procedures,
assignment statements, and data structures). These don't scale well, but
enormous amounts of effort have been expended to retain the old paradigms ...

~~~
mmiller
Indeed.

I think another part of it was people wanted to go on using systems the way
they had been, with the operating system being a base layer on which things
happen, which does not exist as a foundation for building a different system.
To create code, you put it in files, files go in directories, you compile it
into a static executable, which becomes temporarily dynamic when executed, and
then goes back to being static when it exits, because it was based on a static
image. The OS makes it difficult for the executable to update the state of its
own image, assuming that trying to do so is a mistake. You run it on top of
the OS, not _in_ the OS, not part of it. It's the recapitulation of the
application metaphor, where the app only exists while someone is using it,
and then all its state goes away when they're done with it, unless a small
piece of it is serialized (flattened into raw bytes, with no meta-code) for
later retrieval. With that kind of setup, the thinking is there's no need for
inter-entity relationships that become part of the larger whole, though
eventually people wanted application systems with capabilities, and so more
cruft got added onto the pile, trying to create versioned late-binding in the
midst of a system designed for abstract data types.

When I talk to people about OOP, I explicitly try to distinguish it from ADTs
and systems like I've described above, though I'm at a bit of a loss to come
up with a phrase for what to call languages that people commonly call "OO."
I've sometimes called them procedural languages, with an extra layer of
scoping, or that work with ADTs, but that's a mouthful.

------
kens
Looking at your 1972 Dynabook paper [1], would you make any changes to the
Dynabook vision now? Also, what do you see as the biggest missing pieces
(software or hardware) today? What current software gets closest to the
vision?

[1] Everyone should really take a look at the Dynabook paper: [http://history-
computer.com/Library/Kay72.pdf](http://history-computer.com/Library/Kay72.pdf)

~~~
alankay1
There's a lot online, and on the Viewpoints "writings" page

~~~
callil
link: [http://vpri.org/html/writings.php](http://vpri.org/html/writings.php)
\- you can filter by author in the dropdown.

------
brogrammer6431
I remember a few weeks back, you said that you wanted to take a closer look at
the Urbit project (www.urbit.org). Just wondering if you had gotten the chance
to do so, and, if so, what your thoughts were.

~~~
hsribei
Do you have a link to where he mentioned Urbit?

I find the whole project (politics aside, though I wish this caveat were
unnecessary) stunningly beautiful, especially the focus on verbalization
(#spokenDSL), and having Alan Kay be interested in it is flattering even
though I have no relation whatsoever :)

~~~
protopete
This subthread here?
[https://news.ycombinator.com/item?id=11810177](https://news.ycombinator.com/item?id=11810177)

------
siteshwar
Hi Alan,

You have an interesting reading list at
[http://www.squeakland.org/resources/books/readingList.jsp](http://www.squeakland.org/resources/books/readingList.jsp).
However, it seems that it was created a long time ago. Are there any other
books that you would like to add to this list?

~~~
alankay1
Take a look at the HN discussion on this -- perhaps someone will supply a URL?

~~~
cloudmike
[https://news.ycombinator.com/item?id=11803165](https://news.ycombinator.com/item?id=11803165)

~~~
siteshwar
I missed that post somehow. Thanks! :)

------
diiq
As a community, we often think on quite short time-scales ("What can we build
right now, for users right now, to make money asap?"). I feel like you've
always been good at stepping back, and taking a longer view.

So what should a designer or a developer be doing now, to make things better
in 10 years, or 100 years?

~~~
alankay1
What does the world actually need?

~~~
diiq
Not sure how seriously to treat that question, but...

It seems like a cheap answer, but it needs more people, with more
perspectives, trying out various answers to this question.

I suspect the greatest long-term leverage comes from providing people relief
from the cognitive load of fighting for food, shelter, health, transportation,
and education.

Those are heavy goals that lots of people have spent lots of time shifting
only very slowly. The little-Alan-Kay-in-my-mind replies, "We need better
thinking-tools in order to do it faster", which I can half-believe, but beyond
that, I get stuck. I don't see how to work backwards from that to a first step
that can actually be taken.

~~~
spfccmt42
>but it needs more people,

Or less.

------
snowwrestler
I recall reading an article about 10 years ago describing a PARC research
project in which networked computers with antennae were placed throughout a
set of rooms, and the subject carried a small transmitter with them from room
to room. As the computer in each room detected the transmitter, it triggered
actions in each room. I think it was called "ambient computing."

Does this ring a bell for you? I have searched for this article recently and
not been able to find it again.

~~~
alankay1
Yes, this idea was originally Nicholas Negroponte's in the 70s. The Parc
version was called "Ubiquitous Computing" and was led by Mark Weiser in the
80s ...

~~~
snowwrestler
Thanks! This research was my first thought when I saw big companies starting
to come out with things like smart watches, hyper-local beacons, and home
automation products.

But strangely, it seems like companies are still treating those all as
separate lines. For example, Apple markets their home and watch projects
totally independently of one another. But I have to think that they are
working toward ubiquitous computing, much as they worked toward the Dynabook
over the decades.

~~~
msutherl
The people working in R&D at these companies (e.g. Apple) are to varying
extents aware of this history. Indeed there are different "brands" for a set
of concepts that have been evolving since the 70s, IoT being the most popular
term in public discourse as of the past few years, and "ubiquitous computing"
having fallen almost completely out of favor except in academic circles.

Genevieve Bell and Paul Dourish wrote an interesting book a few years ago that
argued that Nicholas Negroponte's and Mark Weiser's visions had indeed come to
pass, as predicted over and over again, just by different names and under
guises that we didn't recognize: [https://www.amazon.com/Divining-Digital-
Future-Mythology-Ubi...](https://www.amazon.com/Divining-Digital-Future-
Mythology-Ubiquitous/dp/0262525895/). The story continues!

------
psibi
What do you think about functional languages like Haskell, OCaml, etc.?

~~~
alankay1
They need a much better idea of _time_ (such as approaches to McCarthy's
fluents).

And then there is the issue that we need to make "systems" ...

I like what a function is, and this idea should be used, but I think it is
better used rather differently ...

~~~
bbcbasic
> They need a much better idea of time (such as approaches to McCarthy's
> fluents).

I am not sure what the _time_ problem is for functional programming, but I
reckon the Elm language/framework solves problems with time in a very elegant
way with its flavour of FRP and Signals.

In Elm, you can play back your UI interactions in a debugger as they happened
and watch the variables as they would have been!
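
(The core of that replay trick is just a pure update function plus a kept
event log; a minimal Python sketch of the idea, not Elm's actual machinery:)

    from functools import reduce

    def update(state, event):
        # Pure transition: all "time" lives in the event log.
        kind, value = event
        if kind == "clicked":
            return {**state, "count": state["count"] + value}
        return state

    log = [("clicked", 1), ("clicked", 1), ("clicked", 3)]

    def state_at(n):
        # Replay the first n events to reconstruct any past state.
        return reduce(update, log[:n], {"count": 0})

    print(state_at(2))  # {'count': 2} -- the variables as they were
    print(state_at(3))  # {'count': 5}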

~~~
alankay1
Worth looking at Bob Balzer's EXDAMS system at Rand in the late 60s early 70s.

~~~
nickpsecurity
Google search gives a lot of gibberish on those terms. Here's the paper for
anyone trying to follow-up on that comment:

[https://www.computer.org/csdl/proceedings/afips/1969/5073/00...](https://www.computer.org/csdl/proceedings/afips/1969/5073/00/50730567.pdf)

------
miguelrochefort
Do you believe that the gap between consuming software and creating software
will disappear at some point? That is, do you expect we will soon see some
homoiconic software environment where the interface for using software is the
same as the interface for creating it?

I feel like the current application paradigm cannot scale, and will only lead
to further fragmentation. We all have 100+ different accounts, and 100+
different apps, none of which can interact with each other. Most people seem
to think that AI will solve this, and make natural languages the main
interface to AI, but I don't buy it. Speech seems so antiquated in comparison
to what can be achieved through other senses (including sight and touch). How
do you imagine humans will interact with future computer systems?

~~~
alankay
Talked to a lawyer recently?

One of the most interesting processes -- especially in engineering -- is to
make a model to find out what it is that you are trying to make. Sounds a
little weird, but the model winds up being a great focuser of "attempts at
intent".

Now let's contemplate just how bad most languages are at allowing model
building and having optimizations being orthogonal rather than intertwined ...

~~~
sebastianconcpt
Indeed. It's like most languages punish exploration instead of rewarding it.
Intellectual curiosity gets bullied that way.

------
unimpressive
Hi Alan,

One of the concepts I've heard you talk about before in interviews and the
like is simulation. I think simulation is huge and we should be seeing
products that cater to it, but we largely aren't.

[http://www.scholastic.com/browse/article.jsp?id=5](http://www.scholastic.com/browse/article.jsp?id=5)

[http://slatestarcodex.com/2014/12/08/links-1214-come-ye-
to-b...](http://slatestarcodex.com/2014/12/08/links-1214-come-ye-to-
bethlinkhem/#comment-165173)

Do you still think simulation is an important promise of the computer
revolution, and are there any products you know of or ideas you have that
are/would be a step in the right direction?

~~~
alankay1
Yes, and I don't really know of any. An interesting one that has been around
for a few
years -- that can be used by the general public, children, etc. -- is NetLogo
(a takeoff from StarLogo). There is also a StarLogo Nova that is worth looking
at.

But these are not really general simulation languages of the kind we need in
2016 and beyond ...

~~~
DigitalJack
I need to read what you've said regarding simulations and pseudotime, but I've
been wondering if this is along the same lines as an HDL like
VHDL/Verilog/SystemVerilog.

I do chip implementations in those languages, which of course involves
simulation and a time concept. I get the idea that is not the sort of
simulation you mean, but I'm not sure. The other thought I had was simulation
more in the SPICE sense, iterating toward a solution in steps.

Any chance you could briefly clarify, and/or toss out a resource that points
in the right direction?

~~~
alankay1
What would a simulation of an epidemic look like? Or a simulation of ants
finding food via pheromone trails? Or a simulation of dye diffusing in water?
Or a simulation of an object being dropped?

Etc
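
(Taking the first question literally, a textbook SIR epidemic model fits in a
dozen lines of Python; the parameter values below are arbitrary:)

    # Discrete-time SIR epidemic: Susceptible, Infected, Recovered.
    s, i, r = 9990.0, 10.0, 0.0    # initial population split
    n = s + i + r
    beta, gamma = 0.3, 0.1         # infection / recovery rates (arbitrary)

    for day in range(161):
        if day % 40 == 0:
            print(f"day {day:3d}: S={s:7.0f} I={i:7.0f} R={r:7.0f}")
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec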

~~~
DigitalJack
Cellular automata comes to mind. I'll think on it.

This AMA has really been neat, thanks so much for doing it.

------
agumonkey
Hi Sir Kay,

How do you feel about the role of computing technology in society today? Is
it still important, or should we work on other domains (education, medicine,
ecology, and the industrial tissue that is our interface to reality nowadays)?

While I'm at it, I just did a MOOC about Pharo (~ex Squeak) and ST was indeed
a very interesting take on OO (if I may say so ;). So thanks to you and your
teammates for your work over the years (from ST to STEPS).

------
tmerr
Hi Alan,

I watched an OOPSLA talk where you described how tapes were used when you were
in the air force. The tape readers were simple because they stupidly followed
instructions on the tapes themselves to access the data. You seemed to like
this arrangement better than what we have with web browsers and HTML, where
the browser is assumed to know everything about the format. One way to
interpret this is that we should have something more minimal like a bytecode
format for the web, in place of HTML.

So I'm interested on your take: Is WebAssembly a step in the right direction
for the web? (Although it's not meant as a replacement for html, maybe it will
displace it over time).

~~~
alankay1
This is a perfect example of "systems blindness" in the face of real scaling.

What should a browser actually know in order to scale? (And why didn't the
browser folks realize this? Rhetorical question -- this knowledge was already
well known by the early 90s.)
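
(One common reading of this point -- my gloss, not necessarily Kay's: instead
of the receiver hard-wiring knowledge of every format, the way browsers
hard-wire HTML, a message can carry a small program that tells the receiver
how to interpret its own payload, the way those Air Force tapes did. A toy
sketch, with every name invented for illustration:)

    # Toy version of "the tape carries its own reader": a message bundles
    # a payload with the code that knows how to render that payload, so
    # the receiving end needs no built-in knowledge of the format.

    message = {
        # arrives over the wire together with the data it describes
        "decoder_source": """
    def render(payload):
        return ' | '.join(field.upper() for field in payload.split(','))
    """,
        "payload": "hello,dynamic,world",
    }

    def receive(msg):
        # the receiver only knows one convention: run the bundled decoder
        # in an empty namespace and call its render() on the payload.
        scope = {}
        exec(msg["decoder_source"], scope)  # (a real system would sandbox this)
        return scope["render"](msg["payload"])

    print(receive(message))   # -> HELLO | DYNAMIC | WORLD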

------
oooooppmba
What is your one piece of advice to college students studying CS?

~~~
alankay
Learn a lot of other things, and at least one _real science_ and one _real
engineering_. This will help to calibrate the somewhat odd lore aka "computing
knowledge". I would certainly urge a number of anthropology courses (and
social psychology, etc), theater, and so forth. In the right school, I'd
suggest "media theory" (of the "Mcluhan", "Innis", "Postman" kind ...)

~~~
stcredzero
Whenever I've gone to an anime con, it seems like the older genre masters like
Yoshiyuki Tomino (Gundam) were always urging the audience to get to know
something besides anime/manga. Specifically to go out and get involved in
something to create media _about_, so as to avoid producing something
completely self-referential and navel-gazing. That also seems to apply to the
medium of programming. (As in: Do we _really_ need another To-Do app?)

It's also related to what Scott Adams urges. It's pretty hard to get to be in
the top 10% in a single field. It's much easier to be in the top 25% of two
different fields, which would make you one of the top 10% in that
interdisciplinary combination.

~~~
chjohasbrouck
> It's also related to what Scott Adams urges. It's pretty hard to get to be
> in the top 10% in a single field. It's much easier to be in the top 25% of
> two different fields, which would make you one of the top 10% in that
> interdisciplinary combination.

This is really great, I've been thinking about that for years. Which Scott
Adams said this, and where is it from?

The best tech founders often seem to have this quality. They're by no means
top 10% in any individual category, but they're strong in technology and in
some specific non-tech-related domain. In the right setting, dividing your
attention between tech and a non-tech-related domain can actually make you
_more_ specialized in a way, not less.

~~~
stcredzero
_Which Scott Adams said this, and where is it from?_

The Dilbert guy. It's from:
[https://amzn.com/1591847745](https://amzn.com/1591847745)

~~~
patrickk
Here's a quote that leapt off the page at me:

Just after college, I took my first airplane trip, destination California, in
search of a job. I was seated next to a businessman who was probably in his
early 60s. I suppose I looked like an odd duck with my serious demeanor, bad
haircut and cheap suit, clearly out of my element. I asked what he did for a
living, and he told me he was the CEO of a company that made screws. He
offered me some career advice. He said that every time he got a new job, he
immediately started looking for a better one. For him, job seeking was not
something one did when necessary. It was a continuing process.

This makes perfect sense if you do the math. Chances are that the best job for
you won't become available at precisely the time you declare yourself ready.
Your best bet, he explained, was to always be looking for a better deal. The
better deal has its own schedule. I believe the way he explained it is that
your job is not your job; your job is to find a better job.

This was my first exposure to the idea that one should have a system instead
of a goal. The system was to continually look for better options.

~~~
Retra
It makes no sense at all if you don't view "having the best possible job" as a
primary motivator in your life.

------
s800
Any conventional paradigms that you'd like to see retired? FS, Unix,
signalling/messaging, etc.?

~~~
alankay1
Most of them ...

------
dookahku
You invented a lot of what I'm using this very instant to compose this
message.

I yearn to do great works of engineering and art, which is what I consider
your work to be.

How do you come up with ideas?

~~~
alankay
There's coming up with ideas: learn to dream while you are awake, the ideas
are there.

There's coming up with a good idea: learn how to not get buried in your ideas
(most are mediocre down to bad even for people who have "good idea skills"!)

I write down ideas in notebooks to get rid of them. Every once in a while one
will capture a different point of view.

And, there's the Princeton Tea joke of scientists comparing what they did for
ideas. One says "I have them in the middle of the night so I have a pad by my
bed". Another says "I have them in the shower so I have a grease pencil to
write them on the walls". Einstein was listening and they asked him about his
ideas. He said "I don't know, I've only had two!"

(Some people are better at filtering than others ...)

~~~
msdos
How much impact have actual dreams had in your work? Have you ever gotten
ideas or answers in dreams?

~~~
alankay1
Bob Barton got most of his great ideas in sleeping dreams.

Most of my ideas come in "waking dreams" (this is a state that most children
indulge in readily, but it can be retained in a more or less useful way -- I
don't think you quite get into adulthood by retaining it, so it's a tradeoff).

Main thing about ideas is that, however they come, most of them are mediocre
down to bad -- so steps have to be taken to deal with this major problem.

------
qwertyuiop924
Hey Alan,

You seem very disappointed and upset with the way computing has gone in the
last decade. Speaking as a younger (15) and more satisfied (I still think the
UNIX abstraction is pretty solid, despite what others may say) programmer, how
do you not get depressed about the way technology is going?

Also, what do you propose to eliminate the "re-inventing the flat tire"
problem? Should every programmer be forced through a decade of learning all of
the significant abstractions, ideas, and paradigms of the last 50 years before
they write anything? Because I don't see another solution.

~~~
alankay
I do get depressed -- how could one not? -- the trick with depression is to
not allow it to take you into inaction.

Re: Unix etc. try to imagine computer systems without "operating systems" as
they are thought of today (hint: look at the Internet, etc.).

The basic heuristic here is to avoid the "when you criticize something you are
implicitly buying into its very existence!". First try to see if there is
anything worth existing! (OSs are not necessary ...)

How long does it take to learn real science? And shouldn't computer science be
a real science?

Another way to look at this is that anyone could be a doctor until recently
(really recently!) because no one knew what was going on. And a lot of damage
was done (and is still being done.) Once some real knowledge is obtained, we
can't afford to have random practitioners dabbling into important things. (The
idea that this might be OK is another pop culture delusion and desire ...)

~~~
qwertyuiop924
...and I'm not sure I agree. The idea that code from randoms shouldn't be put
into critical infrastructure is certainly true. But the idea that people
shouldn't tinker, hack on code, and learn, even if they end up with
non-optimal, possibly broken code, is a dangerous one, because that's how
people learn. And yes, they should dabble in important things. How else do
they learn them? It's then our job, as the Real World, to analyze our
dependencies, and
make sure we can trust them to be well written, and that they aren't just
toys.

As for whether operating systems are necessary, by most common definitions
(being, as I understand it, a set of software that defines a common base of
abstractions over hardware, so as to allow multiple other pieces of software
to access said hardware at a higher level of abstraction), yes, they are, if
you don't want to go mad. But I get the feeling that isn't what you meant...

In short, I can respect your opinions, but I do not agree with all of them.
Much of this derives from the mindset that tinkering with even the most
important things, regardless of skill, is important to learning. If that makes
me stupid, so be it.

~~~
alankay1
It's not the learning process that is in question, but the ramifications of
really extending the industrial revolution to distributing broken code.

(No language has ever been more set up to allow tinkering with everything for
the purpose of learning than Smalltalk -- I think you probably realize this.)

I think most people would not recognize most existing OSs in use by your
definition in your second paragraph.

~~~
qwertyuiop924
Okay. What is the definition of an OS, in your opinion?

And thank you for taking the time to disagree with me respectfully. So many do
not.

And I do recognize the tinkering capabilities of Smalltalk. This is why I
found your points odd.

I have yet to learn Smalltalk. The full environment, as opposed to text-editor
development, feels odd to me, especially after struggling with monsters like
Eclipse. Also, SBE is several years out of date, and there isn't really much
in the way of good documentation for those who don't already know what they're
doing.

------
spamfilter247
Hi Alan. What are your thoughts on how rapidly GUIs are evolving nowadays?
Many apps/services revamp their UI fairly often and this oftentimes hurts
muscle memory for people who have just about gotten a workflow routine figured
out.

Also, what big UI changes do you foresee in the next 10 years - or would like
to see. Thanks.

~~~
alankay
Or devolving?

------
Bystroushaak
I have two questions:

1. It is known that you read a lot. Do you plan to write a book? You have
been a big inspiration for me and I would love to read a book by you.

2. What is your opinion of the Self programming language
([http://www.selflanguage.org](http://www.selflanguage.org))? I've read the
"STEPS Toward The Reinvention of Programming" PDF and this feels related,
especially with respect to the Klein interpreter
([http://kleinvm.sourceforge.net/](http://kleinvm.sourceforge.net/)).

~~~
alankay1
I liked Self. "Good OOP" is still waiting for a much better notion to replace
the idea of a "Class".
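
(For readers who haven't met Self: it replaced classes with prototypes --
concrete objects that are cloned and extended, with lookups delegating to a
parent object. A minimal sketch of that idea, with Python used only as a
notation and all names invented here:)

    # Self-style objects: no classes, just objects with slots that
    # delegate lookups to a parent, and new objects made by cloning.

    class Obj:
        def __init__(self, parent=None, **slots):
            self.parent, self.slots = parent, dict(slots)

        def lookup(self, name):
            if name in self.slots:
                return self.slots[name]
            if self.parent is not None:
                return self.parent.lookup(name)      # delegation
            raise AttributeError(name)

        def send(self, name, *args):
            return self.lookup(name)(self, *args)    # late-bound dispatch

        def clone(self, **new_slots):
            return Obj(parent=self, **new_slots)     # objects from objects

    point = Obj(x=0, y=0,
                describe=lambda self, *_: (self.lookup('x'), self.lookup('y')))
    p = point.clone(x=3, y=4)          # a new object; no class involved
    print(p.send('describe'))          # -> (3, 4), found in p's own slots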

~~~
shkaboinka
Trygve Reenskaug (MVC inventor) and Jim Coplien (Patterns/Hillside) developed
the DCI paradigm as a way to model code around the roles played by objects,
rather than the concrete type (class) of each object.

video: [https://www.infoq.com/presentations/Reflection-OOP-Social](https://www.infoq.com/presentations/Reflection-OOP-Social)

PDF:
[http://fulloo.info/Documents/RestoringFunctionAndFormToPatte...](http://fulloo.info/Documents/RestoringFunctionAndFormToPatterns.pdf)

Website: [http://fulloo.info/](http://fulloo.info/)

~~~
shkaboinka
Alan, if you are aware of DCI, what is your response to it?

It (or more specifically, Jim Coplien) claims to build on your vision of OOP,
but also criticises "emergence" within the OO vision. (Personally, I think
those problems are design issues apart from the OO model)

~~~
alankay1
We had a lot of fun with Trygve and we all learned a lot. I'm glad that
they've been pushing at these issues.

------
gnocchi
Hi Alan,

I'm part of a generation who didn't grow up with the PDP but had LOGO and
BASIC available on computers and calculators. With the Amstrad CPC it was
possible to interrupt a program and change a few lines of code to make it do
something else, which was a great way to stay interested. And with calculators
it was possible to code formulas to solve/check problems.

But how would you teach programming today to a kid? Would you choose a
particular medium such as a computer, a raspberry pi or even a tablet?

And if I may, do you recommend any reading for bedtime stories?

Thank you, Kevin

~~~
alankay
It's time to do another children's language. My answer a few years ago would
have been "in part: Etoys", "in lesser part: Scratch (done by some of the same
people but too much of a subset)".

------
throwathrowaway
Are there any instructions for getting the Frank system (the one you show off
at talks) on a Linux computer? Even instructions with some missing steps to be
filled in. It would beat recreating things from scratch from glimpses.

I find it much easier to explore with a concrete copy that can be queried
with inputs and outputs, even if it is far from its platonic ideal. For
example, the OMeta interpreter I wrote recently was much easier to make by
bootstrapping the creation of an initial tree from the output of an existing
implementation of OMeta.

------
filleokus
Do you believe everyone should be taught, or exposed to, programming in
school? I'm afraid that universal inclusion of programming in the curriculum
would have the opposite effect and make the next generation despise
programming, the same way some people feel about math today.

~~~
alankay1
Everyone should get fluent in "real science" and many other things come along
very nicely with this -- including the kinds of programming that will really
help most people.

------
auggierose
Hi Alan,

what do you think about interactive theorem proving (ITP)? Assuming that you
are aware of it, is it something that you have tried? If yes, how was your
experience, and which system did you try? If no, why not? What do you think
about ITP's role in the grander scheme of things?

------
pdog
Alan,

Computers have been a part of education in the United States for several
decades, but you've argued that technology isn't used to its full potential.

1. Why has technology in schools failed to make an impact?

2. How would you design a curriculum for your children that uses technology
effectively today?

~~~
alankay1
I've written and talked a lot on these subjects, so let me suggest you look at
the Viewpoints website, YouTube, there's a TED talk, etc.

~~~
pdog
Check out _Points of View_ : [http://vpri.org/pov/](http://vpri.org/pov/)

------
anildigital
Do you think Java is an Object Oriented programming language?

~~~
alankay
Object oriented to me has always been about encapsulation, sending messages,
and late-binding. You tell me ...
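
(A minimal sketch of those three properties in isolation -- illustrative
Python, with all names invented for the example: state is reachable only
through messages, and the code that runs is chosen by the receiver at the
moment the message arrives.)

    # Encapsulation + message sending + late binding, and nothing else.

    class Account:
        def __init__(self):
            self._pennies = 0                 # private: only messages touch it

        def receive(self, selector, *args):   # the single doorway in
            return getattr(self, '_' + selector)(*args)

        def _deposit(self, amount):
            self._pennies += amount

        def _balance(self):
            return self._pennies / 100

    class LoggingAccount(Account):
        def _deposit(self, amount):           # receiver decides what 'deposit' means
            print('depositing', amount)
            super()._deposit(amount)

    def send(obj, selector, *args):
        # late binding: the method is looked up when the message is sent,
        # based only on who the receiver happens to be at runtime.
        return obj.receive(selector, *args)

    for acct in (Account(), LoggingAccount()):
        send(acct, 'deposit', 250)
        print(send(acct, 'balance'))          # -> 2.5 (twice; one logs a line)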

~~~
yazaddaruvala
Semi-offtopic: I've always been fascinated by the origins of religions, their
evolution, potential re-definitions and conflicts with other ideas.
Specifically, I like thinking about how the founding members of a set of
ideas, might retrospectively analyze the entire history of their ideas, and
the "idea set"'s metamorphosis into a religion whose followers now treat it as
dogma.

I have this, entirely unprovable, theory that most founders of these types of
"idea sets" are actually poly-ideological, i.e. giving weight to all possible
ideas, and just happened to be exploring ideas which made the most sense at
the time.

While I enjoy your thoughts on "object oriented", "functional", etc, I'd love
to hear your thoughts about philosophy of religion and its origins (i.e. a
slightly meta version of the conversation around "object oriented",
"functional", etc). You may be one of a handful of humans able to provide me
more data. Is this a topic that interests you, and is it something you think
about? If it is something you think about, then regarding the dogma you may
have accidentally helped instigate:

Did you and your peers intend for it to become dogma? The rest of my questions
sorta assume you did not.

Retroactively, do you feel it was inevitable that these ideas, i.e. popular /
powerful / effective ideas, which were especially extremely effective _at the
time_, became dogma for certain people, and potentially the community as a
whole?

Either retroactively or at the time, did you ever identify moments when the
dogma/re-definitions were forming/sticking? If so, did you ever want to
intervene? Did you feel you were unable to?

Do you have any lessons learned about idea creation/popularization without
allowing for re-definitions / accidentally causing their eventual turn into
dogma?

Again, if this type of conversation doesn't interest you, or if, because it
could potentially be delicate, you'd rather not have it in public, I'd
understand.

Thanks regardless

~~~
alankay1
Bob Barton once called systems programmers "High priests of a low cult" and
pointed out that "computing should be in the School of Religion" (ca 1966).

Thinking is difficult in many ways, and we humans are not really well set up
to do it -- genetically we "learn by remembering" (rather than by
understanding) and we "think by recalling" rather than actual pondering. The
larger issues here have to do with various kinds of caching Kahneman's "System
1" does for us for real-time performance and in lieu of actual thinking.

~~~
poppingtonic
1. Spaced repetition can make the recalling -- and thus the thinking and
pondering -- easier. It can certainly make one more consilient, given the
right choice of "other things to study", e.g. biology or social psychology, as
you've mentioned in an earlier comment.

2. It takes quite a bit of training for a reader to detect bias in their own
cognition, particularly the "cognition" that happens when they're reading
someone else's thoughts.

What to do about System 1, though? Truly interactive research/communication
documents, as described by Bret Victor, should be a great help, to my mind,
but what do you think could be beyond that?

~~~
alankay1
I think that the "training" of "System 1" is a key factor in allowing "System
2" to be powerful. This is beyond the scope of this AMA (or at least beyond my
scope to try to put together a decent comment on this today).

~~~
poppingtonic
There's a recursive sense in which "training" "System 1" involves assimilating
more abstractions, through practice and spaced repetition, such as deferring
to the equations of motion when thinking about what happens when one throws a
ball in the air. Going as far as providing useful interfaces to otherwise
difficult cognitive terrain (a la Mathematica) is still part of this
subproject. The process of assimilating new abstractions well enough that they
become part of one's intuition (even noisily) is a function of time and
intense focus. What do you see as a way to aggregate the knowledge complex and
teach further generations of humans what the EEA couldn't, fast enough that
they can solve the environmental challenges ahead? What's HARC's goal for
going about this?

~~~
michaelscott
Personally, I've found that discovering "hazy" intuitive connections between
otherwise dissonant subjects/ideas (such as the mentioned physics example)
cements new concepts at a System 1 level quickly if done early in the learning
process. It's also surprising how far one can go on such noisy assimilations
alone as well, before needing to dig deeper.

------
vainguard
How important is finding the right language?

~~~
alankay1
For big problems, "finding the problem" is paramount -- this will often
suggest representation systems that will help you think and do better. What
helps most is a language that allows you to quickly make and refine your sense
of the context and discourse -- i.e. to make languages as you need them, using
tools that will automatically provide development environments, etc., and
allow practical exploration and progress.

------
testmonkey
What do you think about a "digital Sabbath," [1] specifically in the context
of touchstones like:

Engelbart's Augmenting Human Intellect [2], Edge's annual question How is the
Internet Changing the Way You Think? [3], Carr's Is Google Making Us Stupid?
[4], and other common criticisms of "information overload"

[1] [http://www.sabbathmanifesto.org/](http://www.sabbathmanifesto.org/)
[2] [http://www.dougengelbart.org/pubs/augment-3906.html](http://www.dougengelbart.org/pubs/augment-3906.html)
[3] [https://www.edge.org/annual-question/how-is-the-internet-cha...](https://www.edge.org/annual-question/how-is-the-internet-changing-the-way-you-think)
[4] [http://www.theatlantic.com/magazine/archive/2008/07/is-googl...](http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/)

~~~
alankay1
Candles, wine and bread aren't technologies? Hard to take this seriously. (And
I like to play music, and both music and musical instruments are technology,
etc.)

A better issue is not getting sucked into "legal drugs" that have no
nutritional value.

"We are already stupid" \-- this is why things could be much much better but
aren't. We have to start with ourselves, and a positive way to do this is to
ask "what can real education really do to help humanity?"

------
xt00
Hi Alan, do you think privacy on the web should be guaranteed by design or
malleable such that in special cases the government can look up your google
searches and see if you follow terrorists on twitter? When I say guaranteed by
design, I mean should people be creating a system to obfuscate, encrypt, and
highly confuse the ability of people who wish to track/deduce what people are
doing on the web?

~~~
alankay1
What are the most important purposes for our living in a society?

------
username3
Is there any site that lists all arguments from all sides and reaches a
conclusion? If they don't reach a conclusion, do they have an issue-tracking
system that leaves the issue open for anyone to find easily and respond to?

Debates should have a programming language, have CI for new arguments, have
unit tests to check logic, and have issues tracked and collaborated on via
GitHub.

~~~
alankay1
The problem with most arguments is that the arguers assume they are in a
valid context (this is usually not the case, and this is the central problem
of "being rational"). Another way to look at it is: "Forget about trying to
win an argument -- use argumentation to try to understand the issues better
and from more perspectives ..."

~~~
mmiller
I think another way of looking at what you said is that people can feel
threatened or diminished by certain arguments, even ones not directed at them,
when certain cherished premises are challenged. Part of learning to argue well
is to learn to take those feelings as signals that the argument needs to be
looked at closely and seriously -- but not as truth. It's not the final word;
more checking needs to be done to see whether one's own premises are close to
reality or not. Of course, that always needs to be done, but from what I see,
many people are not equipped to deal with an argument that hits a nerve, much
less to listen to ideas that don't agree with each other, and to consider an
argument based on its content rather than on attitudes ("what it sounds like
or feels like").

Since "fast and slow" have been a part of the discussion here, perhaps what
I'm describing has to do with the relationship between "System 1" and "System
2"?

~~~
alankay1
Most people confuse "arguing" with "debating".

~~~
mmiller
I agree. I hate to be a downer about this, but these days I'm glad to find
someone who knows how to debate well, much less argue well.

------
stuque
What language do you think we should teach first to computing major students
these days? What about non-major students?

~~~
alankay1
"There aren't any really good ones" \-- so computing major students especially
should have to learn 4 or 5 very different ones and write fairly major systems
in them.

We should take another pass at design for both majors and non-majors ...

------
pascient
Hi Alan,

You mentioned Bob Barton's lecture a few times [1], emphasizing the role he
played in debunking some of your (and his own) ideas about computing. Could
you touch on some dogmas that haven't yet been raised in this thread or in
your videos on YouTube? Either from the 70's or today. Let me link to Ted
Nelson's remarks about the tradition of files/lumps [2] for a start.

Bullet points may not be the ideal form for an answer, but feel free to get
away with that :)

[1]
[https://www.youtube.com/watch?v=YyIQKBzIuBY&t=7m39s](https://www.youtube.com/watch?v=YyIQKBzIuBY&t=7m39s)

[2]
[https://youtu.be/c_KbLKm89pU?t=1m46s](https://youtu.be/c_KbLKm89pU?t=1m46s)

------
fcc3
[Earlier](https://news.ycombinator.com/item?id=11945254) you mention "good
models of processes" and
[elsewhere](https://news.ycombinator.com/item?id=11940688) about model
building. Which models have been the best for you? Have you read
[Robin Milner](http://www.cl.cam.ac.uk/archive/rm135/Bigraphs-draft.pdf)? Do
you consider his work to be a good foundation for processes? If not, who do
you consider to give a good foundation?

~~~
alankay1
I think this is "still to be done" and "still extremely important"

------
0xdeadbeefbabe
Well, what is thinking about then? What was the mistake the Greeks made? In
this video [0] you said thinking is not about logic and that that was the
mistake the Greeks made.

[0]
[https://youtu.be/N9c7_8Gp7gI?t=45m45s](https://youtu.be/N9c7_8Gp7gI?t=45m45s)

~~~
alankay1
It's basically confusing "math" (which they had and like all of us were
overwhelmed by how neat it is and how much you can do with "thinking
rationally") with "science" which they didn't really have. "Math" is hermetic
and Science is a negotiation between our representation systems and "What's
out there?"

I.e. it is really difficult (didn't happen) to guess the "self-evident"
principles and operations that will allow you to deduce the universe. And
being super-rational without science is one of the most dangerous things
humans have come up with.

------
ldargin
What do you think of Chris Crawford's work on Interactive Storytelling? His
latest iteration is "Siboot" [http://siboot.org/](http://siboot.org/)

Note: He mentions being inspired for that from discussions with you at Atari.

~~~
alankay1
Chris was fun to work with!

------
paulsutter
Alan, what is your view of deep/machine learning as an approach to
programming? The DeepMind Atari player is only 1500 lines of code, because the
system learns most of the if/thens. Software is eating the world, but will
learning (eventually) eat the software?

------
dang
This is one of the best threads HN has ever seen and we couldn't be more
thrilled to have had such an interesting and wide-ranging discussion. I know
I'm not the only one who will be going back over the wealth of insights,
ideas, and pointers here in the weeks to come.

Alan, a huge and heartfelt thanks from all of us. The quality and quantity
(over 250 posts!) of what you shared with the community surpassed all
expectations from the outset and just kept going. What an amazing gift! Thank
you for giving us such a rich opportunity to learn.

(All are welcome to continue the discussion as appropriate but the AMA part is
officially done now.)

~~~
krmboya
Thanks to all who made this happen. It was really mind bending trying to parse
and understand both the questions and responses, like simultaneously traveling
to both the past and the future of computing.

Since then all other HN threads have felt so lightweight, and it is just now
that that feeling is starting to wear off...

~~~
dang
We should find a way to mine some of the rich veins of discussion here—perhaps
picking one thing and going into it in more detail. Every time I read or hear
Alan I end up with a list of references to new things I'd never heard of
before.

------
lispython
Hi Alan, I'm an editor at Programmer Magazine in China [1]. Would you allow me
to translate your answers into Chinese and share them with readers in China?

[1] [http://programmer.com.cn/](http://programmer.com.cn/)

~~~
alankay1
Sure -- and you have my permission to make things more clear in the process!

------
mempko
Alan,

You have inspired me deeply, thank you. I love working with man's greatest
invention, but I have a deep sense of dread. HN is very good about projecting
a fantasy about the future, that technology can solve all problems. I would
love to see a world where people use computers to compute. However, global
warming is a real threat and my biggest fear is that our pop-culture will
prevent us from solving our problems before the chance to solve them is taken
away from us.

With such a huge threat to humanity on the horizon, do you maintain a sense of
optimism here? Or will humanity forget how to "compute" the same way Europeans
forgot how to make Roman concrete?

~~~
alankay1
There are some good old sci-fi stories about your last question.

People in my position are optimists. People in my position who spend a lot of
time trying to improve education are realists.

------
jfaucett
Hi Alan,

Do you think we'll ever have a programming language that isn't fully text
based and gets closer to a direct mapping of our own thoughts than current
systems? If so, any ideas what properties this language would have?

~~~
hemisphere
A few years ago I asked another visionary, Marvin Minsky, if he thought that,
in the future, we'd do our programming in something other than plain text. He
said, "If it's good enough for Aristotle and Plato, it's good enough for me."

RIP, Professor.

------
shkaboinka
I've seen similarities between your work (OOP) and that of Christopher
Alexander (Patterns).

Do you have anything to say about how your/his works tie together?

(Note that Alexander's work is perhaps even more misrepresented in software
than OOP has come to be).

For example, he talks a lot about how anything that is to evolve naturally, or
even properly serve the human component, must be composed of living structure.

video of C.A. addressing the software community's (mis-)application of his
work at OOPSLA: [https://youtu.be/98LdFA-_zfA](https://youtu.be/98LdFA-_zfA)

~~~
alankay1
My favorite Alexander book by far is "Notes on the Synthesis of Form" (which
he has disavowed ...)

~~~
shkaboinka
Here is a PDF of a version with an added preface containing that disavowal.
His reasoning (focus on human application rather than just on the method
itself) makes sense to me, but I'm still excited to read the methods.

[https://monoskop.org/images/f/ff/Alexander_Christopher_Notes...](https://monoskop.org/images/f/ff/Alexander_Christopher_Notes_on_the_Synthesis_of_Form.pdf)

------
GregBuchholz
At the OOPSLA keynote speech in 1997, you mentioned that "The Art of the
Metaobject Protocol" was one of the best books written in the past ten years.
Any new candidates for "best" books?

------
kirkdouglas
What are you working on now?

------
diiq
How do you seek out the people you choose to work with, now or in the past? Is
it an active process, or do you find interesting people naturally glom around
a nucleus of interesting work?

~~~
alankay1
Interesting people, even as students, leave trails in various ways

------
testmonkey
If you were to design your own high school curriculum for gifted students,
what would it look like?

~~~
alankay1
I would work on the early grades for all students, especially gifted ones. The
epistemological stance you wind up with gets set fairly early -- not in stone
but also hard to work with -- and the early grades are where we should be
putting our resources and efforts.

~~~
dredmorbius
What topics, methods, or activities would you see more of?

Which less?

Is there any specific cognitive, childhood, intellectual, educational, etc.,
development model that this is based on?

How would you account and adapt for different proclivities and/or interests?

~~~
alankay1
So many of the questions point up the problems with the format and media for
an AMA. So many of the best questions are out of the scope ...

But, let's just pick one biggie here: how about taking on "systems" as a
lingua franca and set of perspectives for thinking about: our universe, our
world, our cultures and social systems, our technologies, and the systems that
we ourselves are?

This would be one very good thing to get 21st century children started on ...

~~~
dredmorbius
Your systems-as-a-lingua-franca suggestion seems _straight_ up David
Christian's "Big History" concept:

[https://en.m.wikipedia.org/wiki/Big_History](https://en.m.wikipedia.org/wiki/Big_History)

"Big History is an emerging academic discipline which examines history from
the Big Bang to the present. It examines long time frames using a
multidisciplinary approach based on combining numerous disciplines from
science and the humanities,[1][2][3][4][5] and explores human existence in the
context of this bigger picture.[6] It integrates studies of the cosmos, Earth,
life, and humanity using empirical evidence to explore cause-and-effect
relations..."

I'm rather a fan, though I've only explored it quite briefly.

If you're _not_ familiar with the Santa Fe Institute and its work, I suspect
you'll find it fascinating. The general rubric is "complexity science",
applied across a large number of fields. Geoffrey West and Sander van der
Leeuw are two other fellows. Founders included Murray Gell-Mann and Kenneth
Arrow.

"SFI's original mission was to disseminate the notion of a new
interdisciplinary research area called complexity theory or simply complex
systems. This new effort was intended to provide an alternative to the
increasing specialization the founders observed in science by focusing on
synthesis across disciplines.[4] As the idea of interdisciplinary science
increased in popularity, a number of independent institutes and departments
emerged whose focus emphasized similar goals."

[https://en.m.wikipedia.org/wiki/Santa_Fe_Institute](https://en.m.wikipedia.org/wiki/Santa_Fe_Institute)

I keep discovering that the most interesting people -- in multiple fields --
I'm running across are quite often affiliated.

Related to my other question:
[https://news.ycombinator.com/item?id=11941965](https://news.ycombinator.com/item?id=11941965)

And, since many of your responses point at challenges to the AMA (or HN)
format, what would your preference be? Is there an existing platform or model
that fits, or is there a set of requirements that ...

~~~
alankay1
I will just reply that many of my best friends wound up at SFI and I've had a
long association, etc.

And that -- I think you realize this -- what I was advocating is something
simpler than the "Big History" idea ...

~~~
dredmorbius
On friends winding up at SFI: that's good to hear multiple ways -- for your
friends, your familiarity, and yet another endorsement of the Institute.

Thinking in Systems (and not just Meadows' book) _is_ something I'd also like
to see developed more fully. Big History is _more_ than that, but it's also
one logical development -- systems pervasive throughout the academic
curriculum. I think that's a powerful concept.

There's also the possibility that many people _don't and cannot get systems
thinking_. Another author I've been reading, William Ophuls (most especially
_Plato's Revenge_), discusses this in the context of Jean Piaget's theory of
cognitive development, and comes to sobering conclusions regarding facing
social challenges based on typical population cognitive foundations. Basing
your Solution to the World's Problems on "all the children are above average"
is bound to fail.

Thanks again for your time.

------
juliangamble
Hi Alan,

My understanding was that you were there at the keynote where Steve Jobs
launched the iPad. From what we've heard Steve came up to you after the event
and asked you what you thought (implicitly acknowledging your work on the
Dynabook).

Subsequent interviews suggested you thought that the iOS range of products
"were the first computers good enough to criticise".

My question is: what has to happen next for the iPad to start achieving what
you wanted to do with the Dynabook?

~~~
mmiller
I won't presume to speak for Alan, but my understanding of what he's said is
very different historically. As I recall, he said that the first Macintosh was
"the first computer worth criticizing" (something like that). When Jobs
introduced the iPhone, he asked Alan whether it was "worth criticizing," and
he said no, but if Jobs were to make it a certain size (I forget the
dimensions he mentioned, but one of them was 8"), he would "own the world."
This suggested the iPad, which did become very popular, though arguably the
iPhone became even more so.

I remember one thing Alan said that relates to your question (this was a
couple of years after the introduction of the iPhone, but I think before the
iPad): "I wish they'd allow people to program the darn thing," which I took to
mean, "program _on_ the darn thing!"

Tablet computers existed long before the iPad, though they were bulky and
expensive. I used to write software that ran on Telxon tablet units back in
the mid-1990s, which used WiFi. A bit later they got enough memory and
processing power that we were able to run Microsoft Windows on them, though
they cost several thousand dollars each (not a consumer price point). I
remember a video of a reunion of former Xerox PARC employees (around the year
2000; it's on the internet) in which Chuck Thacker held up a Tablet PC running
Squeak,
and he said, "This is a Dynabook right here." That sort of thing is more of a
Dynabook than the iPad, because the software development licensing
restrictions for iOS don't even allow people to share code from one unit to
the next, because it's considered a security hazard. Apple let up on the
restrictions such that people could write code on them, in an environment such
as Scratch, but they're not allowed to share code from unit to unit. Instead,
they have to jump through hoops, posting code on a web server for others to
download.

Part of what the Dynabook was supposed to do was allow people to share code
without the thought that it was a security hazard, because it would be
designed to be a safe environment for that sort of thing. The iPad has the
hardware form factor of a Dynabook (if I may say so), but its system ideas are
far from it. It's designed as a consumer product where people are supposed to
use it for personal interaction, gaming, consumption of digital content, and
little else.

------
corysama
What are the first 3 books I should read on the topic of teaching tech to
kids? Thanks!

~~~
alankay1
What are the three best books you've read on teaching anything to kids? Let me
know, and we'll start from there.

~~~
corysama
I am completely ignorant. So, it's going to take a lot more than three books
:) How about I blindly guess at my own answer and ask for alternatives?

1) Start with books like: How To Read A Book, Thinking Fast and Slow

2) Then, move on to books like: The Secret of Childhood, Instead of Education,
Mindstorms

3) Then, you are ready for: Flow, The Children's Machine

Thanks for coming back to answer more questions.

If you've missed 'Hare Brain, Tortoise Mind', I do recommend it. IMHO,
Gladwell's Blink was HBTM with most of the material replaced with funny
stories.

~~~
alankay1
A big part of getting into reading is making choices of "this book, this
time". And finding ways to do lots of reading (of pretty much anything ...)

------
mathattack
Hi Alan. A lot has been made of the visions that your colleagues and you had
that ultimately became the fabric of our industry.

Did you have any ideas, predictions or visions which ultimately didn't play
out? (And any ideas on why?)

Thank you very much for your contributions to our industry. Anyone blessed to
be working in this field today owes you an enormous debt of gratitude. You
have made a dent in the universe.

~~~
alankay1
Our most unrealistic assumption was that when presented with really good ideas
most people will most definitely learn and use them to make progress. In
reality, only a tiny percentage can do this, and it is an enormous amount of
work to deal with the norms.

~~~
mathattack
Very insightful. There's a large gap between the intellectually curious early
adopters and the majority in the middle. I just recall how long people were
still auto-paying AOL when better options were available. :-)

Thanks again!

~~~
alankay1
And, early adopters will often adopt poor ideas -- they are mostly just early.

------
lootsauce
What is your take on the future of our general relationship with technology in
the light of the new optimistic view of AI with recent advances in machine
learning? I can't help but think we are over-estimating the upside, and under-
estimating the problems (social, economic, etc.), much like the massive
centralization of the net has had downsides highlighted by Snowden and others.

------
david927
You have stated before that the computer revolution hasn't happened yet. It
seems we stopped trying in earnest back in the early 1980's. Why?

And what could be done to re-spark interest in moving forward?

My gut feeling says that it would require a complete overhaul in almost every
layer in the stack we use today and that there's reluctance to do that. Would
you agree to some degree with that?

~~~
alankay1
Have you seen the "gyre" in the Pacific?

~~~
david927
Yes, of course; we agree. Let me refine my question:

In our not-quite-an-industry, we seem to laud attempts to optimize the
artifact of a residual hack, and we are absolutely dismissive of attempts to
rebuild the stack as being too ambitious. And the problem with being
dismissive is that it's a judgement without trial. We have precious few "crazy
professors" and no tolerance for them.

What can we do?

There's the 2020 group in San Francisco. Is that kind of meetup the right
direction?

~~~
alankay1
"Cops need criminals"

"Doctors need disease"

Things have happened in the past when passionate and confident people with
chops have decided to do something. These have not always correlated with
"What is actually needed" (often not) but they usually get things into a state
where people who are mainly trying to advance their own goals see some
advantage (including "tech chic").

We are certainly not in a position where this can't happen a few more times.

Looking back, I've been struck not by how few really good researchers there
are, but more so by how few really good _managers of researchers_ there have
been, and even more so by how really really few good funders there have been.

Maybe too simplistic, but in my view great funding has caused great stuff. So
the funders should get the gold medals rather than the researchers! (Think
about it: the good funders give out the gold in advance knowing full well that
if even if they are very luckly 70% of the gold will turn to lead in just a
few years!)

~~~
david927
_few really good managers of researchers_

Bob Taylor has been generally praised as a great manager, and I believe them
of course. But when I heard stories of his management style, it seemed to go
against every instinct we have on how to foster creativity. Could you comment
on that?

Also, in terms of funding, I wonder -- haven't things changed? Wasn't funding
more important in, say, the 1960's and 70's due to the cost of computer time,
especially at the processing level that would let you "see the future." A
$1000 computer today is not so different, in terms of power, from a similar
machine 5 years ago, right?

 _Things have happened in the past when passionate and confident people with
chops have decided to do something._

But wasn't the past more open? When a field is just forming, everything is
crazy, so nothing is. It's only when it has solidified (and in exactly the
wrong direction) that you would more expect to find a "crazy professor" having
more of an impact, right?

~~~
alankay1
Tell me the stories, and I'll comment on them.

If you are computing on a $1000 computer, you are computing in the past. Part
of the idea here is that you want to do research and development on
supercomputers of the present that will give you the resources that will be
available at lower prices in the future.

With salaries as they are today -- and real estate much worse -- it is
actually more expensive to fund the same kinds of research today.

The past was similar to today in that most people in computing back then were
orbiting around some local vendor's and local fads' notions of computing. And
it was a lot harder to make computers and other tools back then. It wasn't a
crazy professor having an impact back then, but a whole research community
that was required to make an impact.

~~~
david927
The stories I know came from Dealers of Lightning and what I remember
(correctly I hope) is that the weekly meetings were quite contentious and that
he would even encourage haranguing the presenter. Is that true? Did the people
there feel it as healthy constructive criticism? I would worry that this would
more likely stifle creative thought than encourage it.

 _If you are computing on a $1000 computer, you are computing in the past._

I know that this was true in the past, but it's my contention that hardware
has had Moore's Law while software has stagnated, and that the passing of time
has meant that current software often doesn't take full advantage of the
hardware available. If you did architect based on where the puck is going
(optimized for multi-core, no main memory) current hardware wouldn't slow you
down the same way as it did before and sometimes you would even see a
performance gain.

By the way, thanks for doing this.

~~~
alankay1
The research community was "an arguing community", and knew how to argue "in
good ways" (i.e. no personal attacks, only trying to illuminate the issues,
etc.) This worked pretty well almost always ... (The weekly meetings were not
for being creative, but for discussion ...)

Let me respectfully disagree with your contention. The point that is missed is
not what the hardware could do, but _what can you do without optimizing_. It
is very very hard to put on the optimization hat without removing the design
hat, and once you've removed the latter you are lost.

The key to the Parc approach was to be able to do many experiments in the
future without having to optimize. (There was a second part to this "key" but
I'll omit it here)

~~~
david927
I should have assumed, given the results that PARC had, that it must have been
that way. It's fantastic that I just got the answer to that question from...
you.

For my second point -- I definitely didn't mean optimization that way. I guess
the word I meant to use was "targeted", as in "targeted for multi-core" but I
see your point and respect it fully.

I'm thrilled and deeply, deeply honored to have had this time with you, Dr.
Kay. Thank you so much again. If you're ever in the Bay Area on the first
Saturday of the month and feel inclined to stop briefly by the 2020 group
meeting (which is an attempt at a replacement for the Future of Programming
Workshops that used to take place at Strange Loop), we would be over the moon.
You can reach out to Jonathan Edwards if you need the details.

~~~
alankay1
The basic principle of both points above is that "problem finding" is the
hardest thing in "real research", so a lot of things need to be done to help
this ... (this is a very tough thing to sell and even to "explain" today --
almost everyone is brought up -- especially in schools -- to solve problems,
rather than to actually find good ones) -- and virtually all funders today
want to know what problems their fundees are going to solve -- so they
underfund (to
the point of 0!) the finding processes ...

The ARPA/Parc process "funded people, not projects" -- and today this seems
quite outré to most.

As you probably know, Jonathan is doing some work with us ...

~~~
david927
Yes, I know about Jonathan -- that's why I mentioned him; I didn't want to
publish the details of the group here, and he's been something like an adviser
to the group.

And that group is probably also the reason for my views. I would never qualify
to be in HARC, and yet I don't want to be content merely to scorn the state of
the industry and the state of the art. I see that there's also interesting
research coming from garages and weekend projects. I feel that "problem
finding" is somewhat interchangeable with "point of view", and that can come
from surprising sources. Funding might be needed for the hardware, but my own
experience, for what it's worth, hasn't borne that out. The main component
then is time, and while weekends aren't much, they'll have to suffice. The
last step is to see if we can't get further as a community by giving the kind
of feedback that was so critical at PARC, and that is why the group was
created.

~~~
alankay1
Better to invent the electric light bulb than to cope with candles ... ?

------
drzaiusapelord
How has your relationship with technology changed, especially in regard to its
use politically and socially, as you've gotten older?

------
melloclello
Hi Alan, what do you think of the Unison project? [1]

On the surface it's a structured editor for a type safe language in which it's
impossible to write an invalid program, but the author has some pretty lofty
goals for it.

[1]
[http://unisonweb.org/2015-05-07/about.html](http://unisonweb.org/2015-05-07/about.html)

~~~
alankay1
Many valid programs are quite wrong.

------
compute_me
Since you care about education so deeply, but also seem to be critical of a
lot of recent technological developments that appear to be more accessible
than some of the systems/approaches we had in the past: Do you think it is
reasonable to embrace technologies with shortcomings -- perhaps even to
utilize a reduction in the expressiveness of UIs, such as hiding the ability
to multitask, or to employ the power of games to "draw us in [and keep us
spellbound]" (where the bracketed part may be a danger) -- if this can form
points of entry for young people who may otherwise not have found their way
into technology? (Think smartphones in rural areas without other reliable
means of guaranteeing access to information for large numbers of people, e.g.
a lack of mentors and role models.) Or would it be more promising to focus
instead on developing alternative means of "on-boarding"?

~~~
alankay1
There are several questions here.

A prime UI design principle -- which is also an education principle -- is that
you have to start where the learners/users are. ("All learning happens on the
fringes of what you know" -- David Ausubel)

For children especially -- and most humans -- a main way of thinking and
learning and knowing is via stories. On the other hand most worthwhile ideas
in science, systems, etc. are not in story form (and shouldn't be). So most
modern learning should be about how to help the learner build parallel and
alternate ways of knowing and learning -- bootstrapping from what our genetics
starts us with.

And -- everything we are immersed in causes "normal" to be reset, for most
people invisibly. This should be a conscious part of the "helping learning"
process.

This could be too elliptical an answer ...

------
OoTheNigerian
Hi Alan,

I have read a lot about you and your work at Xerox.

Do you enjoy travel? What continents have you been to? What's your favorite
country outside the US?

How many hours a day did you sleep during your most productive research
years? Because I usually wonder how very productive people seem to achieve
much more than others within the same 24 hours we all have.

Greetings from Lagos, Nigeria.

~~~
alankay1
I used to sleep about 5 hours a night well into my 50s, but then started to
get respiratory infections, especially brought on by plane travel. After many
years, my sleep habits were examined and I was told to get at least 8 or more
hours a night. This is hard, so I usually make up with an afternoon nap. This
has cured the infections but has cut down the number of active hours each day.

------
erwinflaming
Hi Alan,

Imagine we have already gotten to a place where software hasn't been built
like pyramids but with some serious object architecture. Let's just assume a
Benjamin Franklin turned computer science into a real science (as good old
Ben did with electricity). We're there. There are now trillions of objects
within my reach, without creating a huge indexing table and depending on data,
how do I reach the object I am looking for?

This was a huge problem on the internet and Google had a solution that many
tolerated and even accepted. In biology you have structures and many cells
compose bodies but as I understand it, the objects would be like functioning
cells scattered around on the floor. Maybe give me a general and vague idea
what the process for this input could look like: "discover songs that I might
like but have never heard before" (I have a hard time dealing with
abstractions).

Thank you!

------
adamgravitis
Hi Alan,

I've heard you frequently compare the OOP paradigm to microbiology and
molecules. It seems like even Smalltalk-like object interactions are very
different from, say, protein-protein interactions.

How do you think this current paradigm of message-sending could be improved
upon to enable more powerful, perhaps protein-like composition?

~~~
alankay1
Not proteins, but cell to cell (this is still an interesting mechanism to
contemplate and develop ...)

~~~
astrobe_
Do you believe in self-healing/self-repairing software?

~~~
alankay1
This is a worthy goal, and I think quite possible. Note that Biology requires
a lot of organization in order to do this, so it is likely not to be
straightforward from where we are. But we had to make the Internet -- etc. --
self-healing in many respects (we had to go to dynamic stabilities rather than
trying to make perfect machines ...)
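
(A tiny illustration of "dynamic stability rather than perfect machines" --
an invented example, not Kay's: rather than assuming a component never fails,
a supervisor assumes it will, and keeps restoring the system to a good state.)

    # Supervision as dynamic stability: don't build a perfect worker,
    # build a loop that notices failure and restores a working state.
    import random, time

    random.seed(7)

    def flaky_worker(job):
        if random.random() < 0.4:             # the part *will* fail sometimes
            raise RuntimeError('worker crashed on ' + job)
        return job + ' done'

    def supervise(job, retries=5, backoff=0.01):
        for attempt in range(1, retries + 1):
            try:
                return flaky_worker(job)
            except RuntimeError as e:
                print('attempt', attempt, 'failed:', e)
                time.sleep(backoff * attempt)  # wait, then restore service
        raise SystemExit('gave up: no stable state reached')

    print(supervise('job-42'))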

~~~
michaelscott
Do you think that genetic programming and machine learning are effective
avenues to pursue regarding this? Or is that introducing unnecessary
complexity in many/most cases?

~~~
alankay1
I think "real AI" could help (because it could also explain as well as
configure). Systems that can't explain themselves (and most can't) are a very
bad idea.

~~~
michaelscott
Because when things go sideways a human can't fix anything without copious
amounts of reading/testing/poking around? I'm taking your meaning of "explain"
literally here, which might be shortsighted.

Either way, the idea of machines or systems as "living" and able to
communicate intent and process, even if only within their own "umwelt", is
really interesting. Even a taste of that would make modern systems easier to
debug and understand, if not more robust (which would be a better starting
point for many systems anyway I suppose).

~~~
astrobe_
I believe he is referring to something like expert-system explanations, which
were the holy grail 20 years ago (I don't know if it has been achieved), as
opposed to neural networks, which are more like black boxes (at least to me).
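
(In that spirit, a toy forward-chaining rule system that records *why* it
believes each fact -- roughly the kind of explanation facility meant here.
This is a sketch only; all the rules and names are invented for the example.)

    # A toy expert system that remembers why it concluded each fact,
    # so it can explain its reasoning -- unlike a black-box model.

    rules = [
        (('has_fever', 'has_cough'), 'likely_flu'),
        (('likely_flu',),            'recommend_rest'),
    ]

    facts = {'has_fever': 'given', 'has_cough': 'given'}

    changed = True
    while changed:                       # forward chaining to a fixed point
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts[conclusion] = 'from ' + ' and '.join(premises)
                changed = True

    def explain(fact, depth=0):
        # walk back through the recorded justifications
        print('  ' * depth + fact + ': ' + facts[fact])
        if facts[fact] != 'given':
            for premise in facts[fact][5:].split(' and '):
                explain(premise, depth + 1)

    explain('recommend_rest')
    # recommend_rest: from likely_flu
    #   likely_flu: from has_fever and has_cough
    #     has_fever: given
    #     has_cough: given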

~~~
michaelscott
Ah I see, that's quite interesting. So the idea of a system that could explain
its own decision-making and inferences?

Neural networks definitely are black boxes, at least at an individual level.
Sure, the concept remains the same generally, but the internals are different
and hidden from case to case.

------
annasaru
Hi Alan,

I am sometimes involved with mentoring younger people on STEM projects
(Arduino etc.). It's all the buzz. But I heard one of my younger relatives
lament the tendency of young people to gravitate towards a quantitative field
of study/training -- is there too much hype? "Learning to Code" is a general
movement that is helping many youth improve their career prospects. Do you
think it's being effective in improving education on a meaningful scale?

What kind of educational initiatives would you like entrepreneurs (of all
shades) to come up with? Do these need to be intrinsically different in
different parts of the world?

Finally, as a person who gets scared away by bureaucracy ("the school
district") -- what would you advise? School districts don't always make the
best technology investments with precious dollars.

~~~
alankay1
It's a big problem along many dimensions. A good start is to try to get a
handle on what you think should be required. One hint is to forget about
vocational goals and focus on "adults as real citizens in a large diverse
society"

------
sandgraham
Hello Mr. Kay, are you still going to be active with VPRI or CDG now that
HARC! has formed?

PS:

I once ran into you in Westwood and you invited me to check out the CDG lab.
Unfortunately I missed you when I came by. I'm always tempted to try again,
but I'd hate to interrupt the serious thinking of the fellows stationed up
there.

~~~
alankay1
CDG has morphed into HARC, etc. And I do plan to be active.

------
josephhurtado
Alan,

What do you think will be the impact of cognitive computing and AI on the way
software will be built in the next 5 years?

Do you think AI may automate fully some jobs, or part of the jobs people do
today in IT & Software Development?

If so what do you think is the best approach professionals should take?

Thanks in advance for the answer, and thanks for doing this AMA.

Joseph

~~~
alankay1
I think there is a lot of potential here (think 10 years and you might get
something useful in the first 5). This will require great will on the part of
those trying to make it happen (the market is not demanding something great --
and neither are academia or most funders).

"The softer an area, the tougher you have to be"

------
Dangeranger
Hello Alan,

Something that I find striking about you and your work is your cross
discipline approach to hardware, software, and "humanware".

Can you speak about people and subjects from fields other than computer
science which have inspired you, and how they have changed you as a person
and technologist?

~~~
alankay1
I grew up in a house filled with books and parents and relatives who were in
many fields. This ruined me for school (and that helped, though it was a
struggle).

Here is a "personal essay" I was asked to write in 2004:
[http://www.vpri.org/pdf/m2004002_center.pdf](http://www.vpri.org/pdf/m2004002_center.pdf)

------
pierre_d528
Thank you very much for all you have done and will do.

How can we apply to join HARC and make the Dynabook a reality?

------
antoinevg
Hi Alan,

Do you think we're yet at a position where we could catalog a set of
"primitives" that are foundational to programming systems? (Where "systems"
are fundamentally distributed and independent of software platform,
programming language or hardware implementation)

~~~
alankay1
Sure.

------
zyxzevn
Hi Alan, since you may spend another day answering questions... I got some
more for you :-)

What do you think about the different paradigms in programming?

And what do you think about type theory, etc?

Bonus question: I am trying to develop a new general programming system for
children. I was inspired by Smalltalk and ELM.

[http://www.reddit.com/r/unseen_programming](http://www.reddit.com/r/unseen_programming)
It is a graphical system that uses function blocks connected with flow-logic.
So basically it is functional, but much simpler. The function-blocks form a
system, very similar to classes/objects in Smalltalk. What do you think about
such a system, or what tips do you have about designing a new language?

~~~
alankay1
A good heuristic for designing programming languages is to try to take the
largest most complicated kinds of things you want to do, work them out, and
then see if there is a "language lurking".

Most people make the big mistake of lovingly making small neat examples that
are easy to learn -- these often don't scale at all well. E.g. "data" in the
small "seems natural" but the whole idea scales terribly. And so forth for
most of the favorite paradigms around today.

~~~
kkoomi
Could you tell us what you mean by data scaling badly?

~~~
mmiller
The problem with the idea of procedures acting on data structures is that as a
system scales up, it gets more complex both in its data structures and in the
amount of code that must operate on them, and so depend on them. As that
happens, it gets harder to change, both in terms of the structure and the
procedures that work on it. The dependencies between the structure and the
procedures grow in number and in kind. Trying to understand them all creates
a cognitive load that makes it difficult and inefficient (if not impossible)
to keep track of them and to keep their operation consistent. Secondly, the
amount of code required to operate up to spec becomes so voluminous that
finding and fixing bugs becomes a cognitive load that is too much to handle.

Part of scaling is understanding the relationship between what must be
expressed to carry out the complete, intended model, and the number of
relationships (the best I can express this is "in chunks") that we can keep
track of simultaneously. Modern engineering in other fields understands this
notion of cognitive load and complexity: it tries to organize resources so
that a constructed structure can carry out its intended function well, as a
result of principled organization methods.
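
A toy sketch of that coupling (illustrative only; the record shape and names
are made up):

    # Every procedure below depends on the exact shape of the record.
    # Change the shape (say, nest the address) and all of them break;
    # in a large system there are hundreds of such procedures to find.

    def label(rec):
        return rec["name"] + ", " + rec["street"] + ", " + rec["city"]

    def same_city(a, b):
        return a["city"] == b["city"]

    # Hiding the shape behind messages localizes that knowledge:
    # callers depend on the methods, not on the structure itself.

    class Contact:
        def __init__(self, name, street, city):
            self._name, self._street, self._city = name, street, city

        def label(self):
            return "%s, %s, %s" % (self._name, self._street, self._city)

        def same_city(self, other):
            return self._city == other._city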

------
0xdeadbeefbabe
I get the impression from the book Dealers of Lightning that Bob Taylor played
an indispensable role in creating Xerox Parc. What are the Bob Taylors of
today up to, and why aren't they doing something similar?

Edit: just noticed HARC and YC-Research. I'll check it out.

~~~
alankay1
"Dealers of Lightning" is not the best book to read (try Mitchell Waldrop's
"The Dream Machine").

That said, Bob Taylor cannot be praised too highly, both for Parc and for his
earlier stint as one of the ARPA-IPTO directors.

Simple answer: There aren't a lot of Bob Taylors in any decade (but there are
some). Big difference between then and now is that the funders were "just
right" back then, and have been "quite clueless" over the last 30 some odd
years. An interesting very recent exception is what Sam Altman is doing --
this is shaping into the biggest most interesting most important initiative
since the 70s.

------
icarito
Hello Alan, In light of the poor results of the OLPC project, in reference to
the Children's Machine, leaving aside commercial factors, do you think the
Sugar user interface is appropriate for the task? If not, how can it be
improved, what is good/bad about it?

Thanks!

~~~
DonHopkins
A related question: in your opinion, what were the successes and failures of
the OLPC project, what openings and obstacles contributed to that, and where
do we go from here?

I've studied the Sugar design and source code, and programmed Sugar apps and
widgets, including wrapping the X11/TCL/Tk version of SimCity [1] in a thin
Sugar python script wrapper to make it into an activity, working on reducing
the power consumption of the eBook reader, and developing pie menu widgets for
Sugar in Python/GTK/Cairo/Pango [2].

My take on the OLPC project is that they were tackling so many interdependent
problems at once, both software and hardware, that it was impossible to
succeed at its original lofty goals. But it was also impossible to achieve
those goals without tackling all of those problems at once. However, a lot of
good came out of trying.

It was like the "Stone Soup" folk story [3], that brought together brilliant
people from many different fields.

A great example of that effect was that we were able to convince Electronic
Arts to relicense SimCity under GPLv3 so it could be used on the OLPC. [4]

One of the many big goals was reducing power consumption, which cut across
all parts of the system, requiring coordination of hardware, software, and
the firmware in between.

Some of the designs were brilliant and far ahead of their time, especially
Mary Lou Jepsen's hybrid display, and Mitch Bradley's Open Firmware Forth
system.

Red Hat modified Linux to support a tickless kernel, consolidating periodic
interrupts to run at the same time so they didn't each wake the CPU at a
different moment. [5]

Many of the solutions to problems the OLPC project was working on have
benefited other, more successful platforms.

John Gilmore credits the OLPC with lighting a fire under laptop vendors to
make competing low-power, low-cost laptops like the Chromebook a reality.

Some of the ideas were ridiculous, like the silly crank to charge it.

Sugar had too many dependencies on all the other stuff already being in place
and working flawlessly. And it was far too ambitious and revolutionary, while
still being layered on tons of old legacy cruft like X11, Python, GTK, etc.

I love Python, but the OLPC came at a time when it would have been better to
implement the entire user interface in JavaScript/HTML.

Sugar app developers realized they needed the services of a deeply integrated
web browser (not to mention the ability to run in any desktop or mobile web
browser outside of the Sugar ecosystem), but the overhead of plugging
xulrunner into Python and integrating JavaScript and Python via XP/COM and GTK
Objects was just too astronomically complex, not to mention horribly wasteful
of power and memory and simplicity.

You have to navigate the trade-offs of building on top of old stuff, and
building new stuff. And I think Sugar chose the wrong old stuff to build on
top of, for that point in time. Python and Cairo are wonderful, but JavaScript
won, and Cairo migrated down the stack into the web browser rendering layer,
HTML Canvas component, etc.

Also there was no 3D acceleration (or OpenGL/WebGL), which was a big
disappointment to game developers, but at the time it was necessary to keep
power usage low.

I'll try to specifically address your question about Sugar's appropriateness
for the task and what's good and bad about it. I'll quote some stuff I wrote
about it when I was porting SimCity to Sugar (prefixed by ">"), and then make
some retrospective comments (no prefix): [6]

>Sugar is based on Python, and uses the GTK toolkit, Cairo rendering library,
Pango international text layout library, and Hippo drawing canvas, and many
other useful modules. Once SimCity is integrated with Python, it will be
great fun to create a kid-friendly multi-player user interface that's totally
integrated with the OLPC's unique hardware design (like the hires mono/color
LCD screen, which flips over into book mode with a game controller pad) and
Sugar's advanced features, like scalable graphics, journaling, mesh
networking, messaging, collaboration, and (most importantly) applying Seymour
Papert's philosophy of "Constructionist Education" to SimCity.

Sugar was trying to reinvent far too many wheels at once. Python was a great
choice of languages, but it was in the process of being eclipsed by
JavaScript. Python is strong at integrating native code (its API is easy to
use, then there's SWIG, Boost, and many other ways to wrap libraries), and
meta-integrating other code (GTK Objects, COM, XP/COM, etc).
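
As a small concrete instance of that strength -- using only the
standard-library ctypes module, and assuming a Linux system where the C math
library is named libm.so.6:

    # Calling into native code with the standard-library ctypes module,
    # one of the integration routes mentioned above. The library name
    # "libm.so.6" is Linux-specific; other platforms name it differently.
    import ctypes

    libm = ctypes.CDLL("libm.so.6")
    libm.cos.argtypes = [ctypes.c_double]
    libm.cos.restype = ctypes.c_double

    print(libm.cos(0.0))  # 1.0, computed by the C library, not by Python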

But there's an overhead to that, especially when you mix-and-match different
integration layers, like for example if you embedded a web browser in a Sugar
interface, registered event handlers and made calls on DOM objects, etc.

On top of Cairo for graphics and Pango for text -- two wonderful, solid,
well-tested, widely supported libraries used by many other applications --
Sugar had its own half-baked object-oriented drawing canvas, "Hippo", written
in a mish-mash of Python and GTK Objects. Nobody should have to learn how to
wrangle GTK Objects just to draw a circle on the screen.

And there's this thing about universal object oriented graphics representation
APIs, which is why the OLPC didn't support PEX, X11 PHIGS Extension
(Programmer's Hierarchical Interactive Graphics System), and why we're not all
using GKS (Graphical Kernel System) terminals instead of web browsers.

As a Sugar programmer at the time, all parts of the system were in flux, and
it was hard to know what to depend on. For the Sugar pie menus, I stuck to the
solid Python/Cairo/Pango APIs, with a thin layer of GTK Objects and event
handlers.

As for all that stuff about journaling, mesh networking, messaging,
collaboration: great ideas, hard problems, fresh snow, thin ice. As Dave Emory
says: "It's food for thought and grounds for further research."

Sugar activities are implemented in Python. What I had to do with SimCity to
integrate it into Sugar was to take an existing X11/TCL/Tk application and
wrap it in a Sugar activity that just launched it as a separate process, then
sent a few administrative messages back and forth.
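
In outline, such a wrapper amounts to little more than the following sketch
(schematic only -- the real activity subclassed Sugar's Activity base class,
which is omitted here, and the "simcity" command name is a stand-in):

    # Schematic of wrapping a legacy X11/TCL/Tk program as an activity:
    # launch it as a child process and exchange administrative messages.
    import signal
    import subprocess

    class SimCityWrapper:
        def start(self):
            # Run the unmodified legacy application as a separate process.
            self.child = subprocess.Popen(["simcity"])

        def stop(self):
            # Administrative message: ask the child to shut down cleanly.
            self.child.send_signal(signal.SIGTERM)
            self.child.wait()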

That was also the way Scratch/eToys and other monolithic existing applications
were integrated into Sugar.

The idealistic long term plan was to refactor SimCity to make it independent
of the user interface, and plug it into Sugar via Python, then finally re-
implement the user interface with Sugar.

As progress towards that goal, which was independent of Sugar, I stripped out
the UI, refactored and reformatted the code as C++ independent of any
scripting language or UI platform, and then plugged it into Python (and
potentially other languages) with SWIG. I then implemented a pure GTK/Cairo
user interface on top of that in Python (without any Sugar dependencies), and
developed some interfaces so you could script your own agents and zones in
Python. (As an example, I made a plug-in giant PacMan agent who followed the
roads around, turning at corners in the direction of the most traffic, eating
cars thus reducing traffic [7], and a plug-in Church of PacMania whose
worshippers generate lots of traffic, to attract the PacMan to its
neighborhood, and sacrifice themselves to their god [8]!)
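
A hypothetical rendering of that kind of agent plug-in (the real interface is
in micropolisrobot.py, reference [7]; the engine methods named here --
is_road, traffic, clear_traffic -- are stand-ins, not the actual API):

    # Sketch of a scriptable agent: a per-tick callback that queries the
    # city for tile data and moves toward the heaviest traffic.
    class PacManAgent:
        def __init__(self, engine, x, y):
            self.engine, self.x, self.y = engine, x, y

        def tick(self):
            # Each simulation step: turn toward the neighboring road tile
            # with the most traffic, then "eat" the cars on arrival.
            moves = [(dx, dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if self.engine.is_road(self.x + dx, self.y + dy)]
            if moves:
                dx, dy = max(moves, key=lambda m: self.engine.traffic(
                    self.x + m[0], self.y + m[1]))
                self.x += dx
                self.y += dy
            self.engine.clear_traffic(self.x, self.y)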

As it turned out, the SimCity kernel plugged into Python was also useful for
implementing a SimCity web server with a Flash web client interface, which is
a much better architecture for an online multi player game than the half-
baked, untested collaboration APIs that Sugar was developing.

At a high level, there were a lot of great ideas behind Sugar, but they
should have been implemented on top of existing systems instead of developed
from scratch.

>The goals of deeply integrating SimCity with Sugar are to focus on education
and accessibility for younger kids, as well as motivating and enabling older
kids to learn programming, in the spirit of Seymour Papert's work with Logo.
It should be easy to extend and re-program SimCity in many interesting ways.
For example: kids should be able to create new disasters and agents (like the
monster, tornado, helicopter and train), and program them like Logo's turtle
graphics or Robot Odyssey's visual robot programming language!

Even if we didn't achieve those goals for Sugar, we made progress in the
right direction that has its own benefits independent of Sugar.

Choose your lofty goals so that when projected onto what's actually possible,
you still make progress!

[1] [http://wiki.laptop.org/go/SimCity](http://wiki.laptop.org/go/SimCity)

[2]
[http://www.donhopkins.com/drupal/node/128](http://www.donhopkins.com/drupal/node/128)

[3]
[https://en.wikipedia.org/wiki/Stone_Soup](https://en.wikipedia.org/wiki/Stone_Soup)

[4]
[http://micropolisonline.com/static/documentation/HAR2009Tran...](http://micropolisonline.com/static/documentation/HAR2009Transcript.html)

[5] [https://access.redhat.com/documentation/en-
US/Red_Hat_Enterp...](https://access.redhat.com/documentation/en-
US/Red_Hat_Enterprise_Linux/6/html/Power_Management_Guide/Tickless-
kernel.html)

[6]
[http://www.donhopkins.com/drupal/node/129](http://www.donhopkins.com/drupal/node/129)

[7]
[https://github.com/SimHacker/micropolis/blob/master/Micropol...](https://github.com/SimHacker/micropolis/blob/master/MicropolisCore/src/pyMicropolis/micropolisEngine/micropolisrobot.py)

[8]
[https://github.com/SimHacker/micropolis/blob/master/Micropol...](https://github.com/SimHacker/micropolis/blob/master/MicropolisCore/src/pyMicropolis/micropolisEngine/micropoliszone.py)

~~~
icarito
That's not one question, that's four.

I am familiar with your port and found it unplayable on the XO laptop
(although I commend you on your apparently painful task of making it run in
the first place!).

While I appreciate your thoughts on the OLPC, I am more interested in Alan's
thoughts on Sugar.

Nice to meet you.

Sebastian

~~~
DonHopkins
It's great meeting you, and wonderful getting some honest feedback from
somebody who's used it. Thank you! I also hope Alan has a chance to answer
your question, and my four.

Could you please tell me more specifically about what made it unplayable for
you? What was the nature of the problem? Did you remember to disable
disasters? ;)

Please don't blame it on Sugar -- the user interface was based on a 1993
version of TCL/Tk, so it looks pretty klunky since it was designed to emulate
Motif, whose widget design (according to Steve Strassman) is from the same
style manual as the runway at Moscow International Airport [1].

Here's a demo of SimCity running on the OLPC [2] -- does that show any of the
problems you had that made it unplayable?

Once it passed EA's QA regime, I didn't put any more effort into the TCL/Tk
user interface, instead refactoring it to remove TCL/Tk and plug in other
GUIs. Have you given the pure GTK/Cairo interface a try?

What was totally unplayable was the X11 based multi player feature [3], which
I removed from the OLPC version, since no child should be forced to wrangle
xauth permissions on the command line, and David Chapman's MIT-MAGIC-COOKIE-1
tutorial isn't suitable for children [1]. I also disabled the Frob-O-Matic
Dynamic Zone Finder [3 @ 3:35], since that was a prank I played as a tribute
to Ben Shneiderman [4].

Again, thanks for the feedback, which I appreciate!

[1] [http://www.art.net/~hopkins/Don/unix-
haters/x-windows/disast...](http://www.art.net/~hopkins/Don/unix-
haters/x-windows/disaster.html)

[2]
[https://www.youtube.com/watch?v=EpKhh10K-j0](https://www.youtube.com/watch?v=EpKhh10K-j0)

[3]
[https://www.youtube.com/watch?v=_fVl4dGwUrA](https://www.youtube.com/watch?v=_fVl4dGwUrA)

[4]
[https://www.youtube.com/watch?v=5X8XY9430fM](https://www.youtube.com/watch?v=5X8XY9430fM)

------
wslh
Hi Alan, are you envisioning a way to participate in / connect to YC Research
as an independent researcher? I don't mean as an associate, since many of us
have a daily focus on startups, but as a place where our ideas and code would
be better nurtured.

~~~
alankay1
Sure.

------
hydandata
Hi Alan,

1\. Do you think the area of HCI is stagnating today?

2\. What are your thoughts on programming languages that encapsulate Machine
Learning within language constructs and/or generally take the recent
advancements in NLP and AI and integrate them as a way to augment the
programmer?

~~~
alankay1
1\. Yes for the most part. The exceptions are also interesting.

2\. Haven't seen anything above threshold yet

------
IonoclastBrig
I have been designing and hacking my own languages (to varying degrees of
completion) for almost as long as I have been programming. A lot of the time,
their genesis is a thought like, "what if language X did Y?" or, "I've never
seen a language that does this, this, and that... I wonder if that's because
they're insane things to do?"

When you're working on a system, how do you approach the question, "Is this
really useful, or am I spinning my wheels chasing a conceit?" Is the answer as
simple as try it out and see what happens? Or do you have some sort of
heuristic that your many years of experience has proven to be helpful?

~~~
alankay1
I keep ideas on the back burners for a long time. (The nature of the ideas
will determine whether this heuristic works well.)

------
dflock
Hi Alan,

You've said a few times here that maybe "data" (in quotes) is a bad idea.
Clearly data itself isn't a bad idea; it's just data. What do you mean by the
quotes? That the way we think about data in programming is bad? In what
context?

I've been thinking & reading about data-flow programming & languages --
Datalog, Lucid, Dedalus/Bloom, etc. -- in the context of big data &
distributed systems, and about the work that Chris Granger has been doing on
Eve, the BOOM lab at Berkeley, etc. -- and that seems like a lot of _really
good_ ideas.

What's your opinion on data flow/temporal logic - and how does that square
with "maybe data is a bad idea"?

Thanks!

Dunc

~~~
alankay1
Just to say one more time here: the central idea is "meaning", and "data" has
no meaning without "process" (you can't even distinguish a fly speck from an
intentional mark without a process).

One of many perspectives here is to think of "anything" as a "message" and
then ask what does it take to "receive the message"?

People are used to doing (a rather flawed version of) this without being self-
aware, so they tend to focus on the ostensive "message" rather than the
processes needed to "find the actual message and 'understand' it".
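
A small concrete instance of this (an illustration, not part of the original
exchange): the same four bytes yield entirely different "messages" depending
on the process that receives them.

    # The same four bytes, "received" by three different processes:
    # without agreeing on the interpreting process, the bits carry no
    # particular meaning.
    import struct

    bits = b"\x42\x28\x00\x00"
    print(struct.unpack(">f", bits)[0])  # 42.0        (as a big-endian float)
    print(struct.unpack(">i", bits)[0])  # 1109917696  (as a big-endian int)
    print(bits.decode("latin-1"))        # 'B(' + two NULs (as Latin-1 text)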

Both Shannon and McLuhan, in very different but both tremendously useful
ways, were able to home in on what is really important here.

Most humans are quite naive about this -- but it is endlessly surprising to me
-- and depressing -- to see computer people exhibit similar naivete.

For example, the extent to which most code today relies on "outside of code"
programmer views (and hopes) is astounding and distressing.

------
dredmorbius
Do you have any thoughts or favourite authors on the topic of technology and
innovation, and the process of that specifically?

I've been particularly interested lately in the works of the late John
Holland, W. Brian Arthur (of PARC & Stanford), J. Doyne Farmer, Kevin Kelley,
David Krakauer, and others (many of these are affiliated with the Santa Fe
Institute).

In particular, they speak to modularity, technology as an evolutionary
process, and other concepts which strike me as solidly reflected in software
development as well. Steve McConnell's _Code Complete_, for example, first
really hammered home to me the concept of modularity in design.

------
kgr
Hi Alan,

In your paper "A Personal Computer for Children of All Ages", where you
introduce the concept of the DynaBook, you explicitly say that the paper
should be read as a work of science fiction. I understand that you're a big
fan of science fiction. Do you draw any inspiration from science fiction when
inventing the future?

Thanks,

Kevin

Related: I've given a talk on "What Computer Scientists can Learn From Science
Fiction":

[https://docs.google.com/presentation/d/1CaieHjx9j24UFXq0GwmC...](https://docs.google.com/presentation/d/1CaieHjx9j24UFXq0GwmCdnIslK-
KqIztRqlv20aLy_I/edit#slide=id.p)

~~~
alankay1
I was a big fan of science fiction in the 40s and 50s -- pretty much literally
"read everything" \-- and tapered off in the 60s -- partly because science was
taking more of my time than science fiction, and partly because in many areas
science and technology over-ran sci-fi, and partly because my favorite authors
were writing less (and writing less good stuff).

------
grincho
I admire the compactness and power of Smalltalk. What advice would you give
language designers looking to keep the cognitive load of a new language low?
What was your design process like, and would you do it that way again?

~~~
alankay1
What kinds of learning do you want your prospective programmers to go through?
It's what you know fluently that determines cognitive load. In music you are
asking for an answer in a range from kazoos to violins.

The main thing a language shouldn't have is "gratuitous difficulties" (or
stupidities).

That said, it's worth thinking about the problems of introducing things in our
culture today that even have the learning curves of bicycles, let alone
airplanes ...

Or ... what is the place of tensor calculus in a real physical science?

------
azeirah
Hi Alan, this is a bit of a long shot, but I'd like to try anyway. I've been
following CDG from early on, and am really interested in the exploratory
research that's going on in there. I'm a 20-year-old computer science student
looking for an internship; would it at all be possible to pursue an internship
at CDG? My primary selling point is that, given the right environment, I have
a lot of motivation.

I understand this is not the right place to discuss these matters, but I know
it's highly likely that this message will be read here; I am happy to take
this topic elsewhere.

------
stop1234
Hi Alan,

Thank you for spending some of your time here and writing your thoughts.

I would like to ask you for some advice.

The idiom "Everything old is new again" is currently picking up steam,
especially in the hardware and software scene.

Amazing stuff is happening but it is being drowned in the mass pursuit of
profit for mediocrity in both product and experience.

What would you say to those who are creating wonderful (and mostly
educational) machines but finding it difficult to continue due to constraints
and demands of modern life?

Most don't have the privilege to work at a modern day Xerox PARC. Then again
there is no modern day Xerox PARC.

Thanks for all the inspiration!

~~~
alankay1
Take a look at what Sam Altman is doing with YC Research.

------
mythz
Do you still code today? If so what's your preferred language, editor, OS?

~~~
alankay1
When I write code it is usually either "kiddicode" for future "kiddilanguages"
or "metacode" (for future languages ...)

I did have a lot of fun last year writing code in a resurrected version of the
Notetaker Smalltalk-78 (done mostly by Dan Ingalls and Bert Freudenberg from a
rescued disk pack that Xerox had thrown away) to create a visual presentation
for a tribute to Ted Nelson on his 70th birthday:
[https://youtu.be/AnrlSqtpOkw?t=135](https://youtu.be/AnrlSqtpOkw?t=135)

This particular system was a wonderful sweet spot for those days -- it was
hugely expressive and quite small and understandable (my size). (This was the
Smalltalk system that Steve Jobs saw the next year in 1979 -- though with
fewer pictures because of memory limitations back then).

------
arloc
Hi Alan,

Can you confirm that a message is declarative information sent to a
recipient, which can (or cannot) react to it? And what is your opinion about
inheritance in OOP? Is it an absolutely essential feature of an OOP language?

~~~
alankay1
I like "messages" as "non-command" things. And I left out inheritance in the
first Smalltalk because I didn't think Simula's version was semantic enough
(and still don't).

The idea of a "generalization" from which you can get instances is good. Now
the question -- then and now -- is how can you define the "generalization". We
didn't do it well back then, and I don't know of a system that does this well
enough today.

------
bachback
What do you think of Bitcoin and use of computers for money and contracts?

~~~
alankay1
I'm not a huge fan of this particular approach (interesting as it is ...)

------
rjurney
How satisfied are you with the tablets that finally satisfied your vision (did
they?) of a personal computer? How much were you able to infer about how they
would work? Any lessons from this?

~~~
alankay1
I'm not satisfied. They have more computing power and display resolution and
depth than my minimums back then, but don't pay strong enough attention to
"services" (one of the simplest they don't do well enough -- and they could --
is to really allow a person who knows how to draw to draw (and this means
allow a person who doesn't know how to draw to learn to draw)).

There are a lot of similar quite blind spots in today's offerings, and none of
them are at all necessary.

------
nwmcsween
Hi Alan, a few questions:

1\. Do you have any recommended books to read?

2\. Why do you think current programming paradigms are bad?

3\. What changes to current operating systems need to happen?

[2] My view is that you want to pass terse but informative information to a
compiler in order for optimizations to take effect, and there are three roads
programming languages take: abstract away by layering, which burdens the
programmer with unraveling everything (C++); abstract away from the hardware
so much that specifics are hidden (most high-level languages); or something
similar to C.

------
mythz
Hi Alan,

You've been a long-time proponent of creating educational software (e.g.
Squeak Etoys) that helps teach kids how to program, and you have been fairly
critical of the iPad in the past. What are your thoughts on Apple's new iPad
Swift Playgrounds
([http://www.apple.com/swift/playgrounds/](http://www.apple.com/swift/playgrounds/))
for teaching kids to program in Swift?

Do you think UI aesthetics are important in software for kids?

~~~
alankay1
"Criticizing reasonably" takes much longer than praise -- and my reactions to
this question are "critical"

------
kurenainogrey
That was an elucidating dialog, and I have appreciated watching recordings of
your lectures as well.

I realize I am replying a bit after the fact, so I am writing more on the off
chance that you may read this rather than further the dialog. It seems
unlikely that I will have an opportunity to directly collaborate with you
given the present realities of my existence, but I deeply appreciate your
contributions to the field of computing and education and for sharing your
knowledge and wisdom.

Kind regards and sincere thanks.

｜ グ レ ェ

------
gbenegas
Hi Alan, I'm a CS student thinking about graduate school.

1\. Would you suggest going into a popular field that excites me but already
has a lot of brilliant students? (for example, AI and ML)

Or rather into a not-so-popular field where maybe I can be of more help? (for
example computational biology)

2\. If I had to choose between studying my current favorite field at an
average research group, or another still interesting field with a top group,
would you suggest going with the latter for the overall learning experience?

~~~
alankay1
Please try to avoid too much planning for your future at this point and just
try to get to a place where things are going on. Any good educational
experience will take someone who is trying to get from A to B and instead get
them to C (note that an under-educated person is not in a good position to
make big choices -- so learn more!)

~~~
gbenegas
Thanks for your advice! I hope I can keep my intellectual drive fresh in the
different stages of my life -- you are a great example!

------
amasad
Hi Alan,

You've been involved in visual programming environments like GRAIL and Etoys
for kids. What do you think of the current state of visual programming for
both kids and adults?

~~~
alankay1
I wasn't involved in GRAIL, but was (and am) a huge admirer of what these
people were able to do (and when they were able to do it). Worth really
looking into the history here!

The current state of _programming_ (visual or not) for children and adults is
not good enough.

------
akeck
What's your most successful problem solving technique?

~~~
alankay1
Delay ...

------
smd4
Hi Alan - the innovation from PARC appears to be the result of a unique
confluence of hardware, software, market forces, recent government research
investment, and Michelangelo-level talent for bringing big ideas to fruition.

Do you think that any factors that were significant back then are going to be
difficult to reproduce now, as HARC gets started? Conversely are there novel
aspects of today's environment that you wished for at PARC?

~~~
alankay1
Parc in the 70s was an outgrowth of the ARPA projects that started to be set
up in 1962. Bob Taylor was a factor in both, and wanted young researchers who
had already imbibed the mother's milk of "the ARPA dream". This created a
culture that never argued about what the general vision and goals were, and
also was able to argue in good ways about how to get there.

Such a homogeneous culture organized around a particular vision doesn't exist
today (that I know of), and it means that places like HARC will have to do
some of the culture building that was done in the ARPA projects (I think of
the HARC initiatives as being more like the ARPA projects than like Parc at
this point)

------
lispython
Recalling those past days, is there any idea that has not yet really played
an important role, but is in danger of being forgotten?

Especially since we are losing the initial generation of programmers.

------
alehander42
Is syntax important?

Do you imagine a future where there would be just several programming
languages semantically, with a lot more bidirectional "syntax skins"?

~~~
alankay1
Yes.

~~~
alehander42
Do you think some kind of more "flexible/fluid" syntax can help dyslexic
people? Sometimes they are excellent at problem solving and architecture, but
the micromanagement of syntax limits them.

~~~
alankay1
Sure

------
dineshp2
Hi Alan!

From your comments, it's clear that you are not happy with the state of
programming languages as it stands.

You mentioned that the current languages lack safe meta-definition and also
that the next generation of languages should make us think better.

Apart from the above, could you mention more properties or features of
programming languages, at a high level of course, that you consider should be
part of the next generation of languages?

~~~
alankay1
I've written and talked about this over the last decade or so. I think a huge
problem in even thinking about this is that the people who should be thinking
about it have gotten very fluent in many ways of "programming" that are almost
certainly not just obsolete but make it very difficult to think about what are
likely to be the most important issues of today.

If we look at CAD->SIM->FAB in various engineering fields -- mechanical,
electrical, biological, etc. -- we see something more like what is needed.
From another view, if we look at the great need for designing and assessing,
etc. we can see that the representations we need for "requirements", "specs",
"legalities", etc have to be debuggable, have to run, and might as well just
flow into a new kind of "CAD->SIM->FAB" process for programming. A lot of what
has to happen is to replace main-stream "hows" with "whats" (and have many of
the hows be automatic, and all of them "from the side").
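
One way to read "replace hows with whats" in running code (a gloss on the
idea, not anyone's actual system): state the requirement as a runnable,
debuggable predicate -- the "what" -- and check any number of "hows" against
it from the side.

    # A "what": the requirement stated as a runnable, checkable predicate.
    from collections import Counter

    def satisfies_sort_spec(inp, out):
        # out must be ordered and contain exactly the elements of inp.
        ordered = all(a <= b for a, b in zip(out, out[1:]))
        return ordered and Counter(inp) == Counter(out)

    # Two interchangeable "hows", checked against the spec from the side.
    def insertion_sort(xs):
        out = []
        for x in xs:
            i = 0
            while i < len(out) and out[i] <= x:
                i += 1
            out.insert(i, x)
        return out

    for how in (insertion_sort, sorted):
        data = [3, 1, 2, 1]
        assert satisfies_sort_spec(data, how(data))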

------
noobermin
It seems that dynamic, or at least sloppily typed, languages like JavaScript
and Python have become more and more popular. Do you think typeless/dynamic
languages are the future? I personally really like "classless OOP" [0].

[0]
[https://www.youtube.com/watch?v=PSGEjv3Tqo0&t=6m](https://www.youtube.com/watch?v=PSGEjv3Tqo0&t=6m)

~~~
alankay1
It would be great to have a notion of type that would actually pay for itself
in real clarity. For example, what would be really useful in many dimensions
is "semantic types" rather than value oriented types (which I don't think are
particularly valuable enough).
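
One possible reading of "semantic types" (an illustration, not a definition
from the answer above): two quantities can share a value type -- float --
while meaning different things, and only a type that tracks the meaning can
reject the nonsense combination.

    # Both quantities below have the same value type (float); only a
    # semantic layer can reject adding meters to seconds.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Meters:
        value: float
        def __add__(self, other):
            if not isinstance(other, Meters):
                raise TypeError("can only add Meters to Meters")
            return Meters(self.value + other.value)

    @dataclass(frozen=True)
    class Seconds:
        value: float

    print(Meters(3.0) + Meters(4.0))  # Meters(value=7.0)
    Meters(3.0) + Seconds(4.0)        # raises TypeError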

------
yan
What most recent topic in the field of programming languages, or computing
more broadly, have you changed your mind about in a substantial way?

------
state_less
When will we get better at saying what we mean? I don't think this is just
important when speaking with computers, but also in human-to-human
interaction.

What is the best interface for computer programming? I have settled on the
keyboard with an emacs interpreter for now, but I'm curious if you believe
voice, gestures, mouse or touch are or will be better ways of conveying
information?

~~~
AnimalMuppet
> When will we get better at saying what we mean?

The hard part is _knowing_ what we mean - really knowing, not just thinking
that we know.

------
0xdeadbeefbabe
Did you guys ever talk about Man-Computer Symbiosis in terms of the computer
unfairly benefiting some men over other men?

One example could be, give me money and I'll give you a computer that can
translate English to Spanish.

Another example could be: Apple shareholders profit from iPhone sales, and
the iPhone UI leads naive/normal people to think texting while driving is OK.

~~~
alankay1
Books have the same problem, and both are fixable (lots easier than the big
ones we should be tending to)

------
pnathan
Hi,

I'm curious what you think the most interesting line of research is today in
the 'computering' world.

Thanks for taking the time to do this Q&A.

Regards, Paul

~~~
alankay1
I like what Sam Altman is setting up and fostering

------
huherto
Any advice for software engineers in the middle of their careers? How do they
find challenges and leverage their experience?

~~~
alankay1
I don't know -- I'm not a software engineer -- but I think this is a very
tough profession to do well.

------
testmonkey
Any memories or thoughts about Gregory Bateson?

~~~
alankay1
Larger than life character, including literally!

A rather kind big bear of a guy, with lots of ideas (and maybe a bit too much
baggage from the past ...)

Worth reading all "the Macy people", including Gordon Pask, Heinz von
Forester, etc.

------
icc97
What do you do to keep focus during the day?

~~~
alankay1
Take a nap, play music, take showers, goof off, read junk novels, etc.

~~~
icc97
Thank you along with everyone for coming back to answer all the questions. A
very interesting approach, not something I'd considered but definitely worth
trying.

------
poppingtonic
Alan, Thank you for doing this AMA!

I would love to hear your thoughts on how to "train" "System 1", in order to
make "System 2" more powerful. Not necessarily here, due to the time factor,
but if you find some time to think more deeply on this, please let me know and
we can think through this together.

~~~
alankay1
This is a really interesting issue and one we should all be thinking about ...

------
AndrewCrick
Hi Alan,

I came up with an idea that seems a bit like the Dynabook. It helps the user
to understand design decisions. Here's a short video about it (under 2 mins):

[https://www.vimeo.com/24332798](https://www.vimeo.com/24332798)

In this case it's about how to build a digger.

I'd love to know what you think about it.

------
olantonan
There are still no good languages for young kids, in my opinion. Should we
bring LOGO back, including a turtle robot?

------
miguelrochefort
What are your thoughts on Ethereum and DAOs (Decentralized Autonomous
Organizations)? Do you believe they will lead to a new way to think about and
distribute software? It kind of reminds me of the "fifth generation computer",
with constraint/logic programming, smart contracts and smart agents.

------
buzzkills
In terms of real, in use, user interfaces, what do you think are the best
examples? What do you like about them?

~~~
alankay1
I'm not a big fan of most UIs today (not that I was a big fan of those in the
past -- but they seem generally worse in basic outlook)

~~~
BrutallyHonest
Even the minimalistic flat style popularised by Microsoft?

~~~
alankay1
I didn't mean "look" but what they do for the user ...

------
duck
Hi Alan, how do you keep up with technology/news? Do you subscribe to any
newsletters? Visit HN regularly?

------
ehudla
What are your thoughts on work/life balance in the computing industry? On
growing older in our industry?

~~~
alankay1
Try to avoid "growing old"!

A big problem with the general industry is the lack of subsidized "sabbatical
years" \-- these are a good way to recharge and re-orient. Xerox, amazingly,
used to have them for employees. But I haven't heard of any such thing
recently.

If you think about life as needing "renewables" then you shouldn't allow
yourself to be "stripmined" (easier said than done).

~~~
ehudla
I was more interested in the social critique angle. Tech is often perceived as
a young men's [yes!] game.

------
anildigital
What do you think of statement "Erlang is the only true object oriented
language in use today."?

~~~
alankay1
Too absolute, but I like Erlang

------
blendo
Any thoughts on UC Berkeley's "Beauty and Joy of Computing"?
([http://bjc.berkeley.edu/](http://bjc.berkeley.edu/))

Should AP's new "CS Principles" course count towards the math requirement for
college admission?

~~~
alankay1
If it were done much better ...

------
bouh
Dear Alan,

What do you think about the EAST paradigm, which tries to revamp the original
spirit of OOP as you stated it?

Do you think the machine learning community suffers from the syndrome of
"normal considered harmful"? For example, using vendor hardware instead of
designing their own (FPGAs, for instance).

------
infinite8s
A bit late to this AMA, but I've been watching Alan's videos on
Smalltalk/Squeak, and I'm wondering why the OLPC didn't use Squeak as the
basis for its software (instead of the Linux/Sugar/Python combo that they
went with).

------
cardmagic
Why haven't machine learning and neural networks been applied to programming
languages with as much interest as human languages? Wouldn't AI augmentation
of writing computer code lead to faster breakthroughs in all other fields
within computer science?

------
brebla
What impresses you the most about American free enterprise? What most
disappoints you about it?

~~~
alankay1
If there are lots of resources more or less available, then a lot of "hunting
and gathering" types can do things with them, and some other types can see
about what it takes to make resources rather than just consume them. The
former tends to be competitive, and the latter thrives on cooperation.

The biggest problems are that the "enterprisers" very often have no sense that
they are living in a system that has many ecological properties and needs to
be "tended and gardened".

Not an easy problem because we are genetically hunters and gatherers, we had
to invent most of the actual sources of wealth, and these inventions were not
done by the most typical human types.

Yet another thing where education with a big "E" should really make a
difference (today American education itself has pretty much forgotten the
"citizenship" part, which is all about systems and tending them).

------
westoncb
Hi Alan,

I'm curious whether you think it might be an important/interesting direction
for program editors to depart from character sequence manipulation to
something along the lines of AST editors. Or is this only a red herring, and
perhaps not so deep a change?

~~~
alankay1
This has been done -- many times -- check them out and see what you think ...

------
collint
Hello Alan,

I'm curious if you've read much about Activity Theory. (in particular, Yrjö
Engeström's Learning by Expanding.) I feel like it's compatible with much of
what I've heard you discuss in lectures. Is it something you have an opinion
on?

~~~
alankay1
I've looked at it -- some good ideas there. Often formal systems have problems
in scaling, etc.

------
patrec
Hi Alan (and others!),

Logo is ~50 years old now, Squeak 20, and OLPC ~10. Do you know innovators
now in their 20s, 30s, and 40s who at least partly credit their mental
development to childhood exposure to Logo, Etoys, Mindstorms, Turtle
Geometry, etc.?

~~~
fchopin
I'm not someone who would be widely considered an innovator, but I had a few
of the very first microcomputers that came out, along with some accompanying
books on how to program them in BASIC, which was very important to the start
of my career. The only free games I had early on were the ones I wrote
myself, though it was only a few years before I was copying commercial games
from others in an Apple II computer club. I also had the 1979 Big Trak -- the
real-world version of Logo; in fact, if you were to attach a piece of
sidewalk chalk to the back of it, it would be even more similar. I've often
thought of getting the newer version for my daughter, even though it's not
exactly the same: [https://www.amazon.com/BBT-BIGTRAK-Big-
Trak/dp/B0035IZ85G/](https://www.amazon.com/BBT-BIGTRAK-Big-
Trak/dp/B0035IZ85G/)

If you're looking for ideas for getting youth into programming, I've had the
most success with Scratch:
[https://scratch.mit.edu/](https://scratch.mit.edu/), but I think Legos and
Minecraft (or the free imitation, Exploration Lite) -- things that you build
with -- are also important. Read "Jeff Bezos on the best gift he's ever
received": [http://www.marketplace.org/2014/12/08/business/jeff-bezos-
be...](http://www.marketplace.org/2014/12/08/business/jeff-bezos-best-gift-
hes-ever-received) And of course, getting kids into music is a great thing for
creativity.

------
syngrog66
hi Alan!

Q: I've always been a big fan both of text-console UIs like CLIs and REPLs,
and of GUIs. In my mind they each clearly have a different mix of strengths
and weaknesses. One way a user might get a bit of the "best of both worlds"
is an app or client featuring a hybrid input design where all three of these
modes are available for the user to drive. Any thoughts on that?

I'm writing a paper in my free time about some architectural ideas in this
area and would love to hear your thoughts. Feel free to tell me this is a FAQ
and that I should go read a particular book/paper of yours, and/or to get off
your lawn. :-)

thank you!

~~~
alankay1
Smalltalk and some other languages have all three -- easy to do if the system
is a "certain way".

~~~
syngrog66
thanks! do you mean if the underlying architecture design makes it easier?

~~~
alankay1
Yes ...

~~~
syngrog66
ok, that's what I thought you meant. thank you, sir! I began to realize after
I hit return that it was probably a redundant question; I apologize. I get
starstruck a little when dealing with people whose work I've admired for so
long. Each time, boom, my effective IQ probably drops right through the
floor. :-)

one of your influences on me as a software engineer has been to strive for
solutions which make certain things easier and all things possible because of
architecture choices, while still allowing best-of-both-worlds modalities --
e.g. that which is to UIs as DSLs are to a general-purpose programming
language.

------
osense
What is your opinion on so-called function-level programming, and on
languages such as J?

------
miguelrochefort
What are your thoughts on the Semantic Web? Why do you think it hasn't
succeeded yet?

~~~
alankay1
Too weak a model of meaning on all counts. Not a new idea, and still they did
it again. (This is not an easy problem, and machine learning won't do the job
either.)

------
duncanawoods
Hi Alan,

Whats the next step to improve remote working? Face to face still seems to be
so superior for relationship building and problem solving despite the wealth
of video conferencing, social and collaboration tools we have. I don't want to
wear goggles...

thanks!

~~~
alankay1
Take a look at Hiroshi Ishii's "Clearboard" etc (look on YouTube)

~~~
pcunite
Link:
[http://tangible.media.mit.edu/project/clearboard/](http://tangible.media.mit.edu/project/clearboard/)

I Skype frequently. To feel connected, simply having the camera embedded in
the monitor itself is enough to convince the other person that I'm looking
them in the eye.

------
alehander42
Hi Alan,

Do you think the object ~ biological cells metaphor can be related somehow to
automated programming using GP or neural networks? (I've sometimes imagined
neural networks as networks of many small objects with probability-based
inheritance)

------
man2525
Is a web browser sufficient to provide rich and meaningful experiences on the
Internet?

~~~
alankay1
Not close as things are currently and have been ...

------
acd
Hi Alan!

Biggest thanks for helping create the modern computer and its peripherals,
and for helping advocate programming for children! Computers are the hobby I
enjoy the most and what I make my living from.

What is your vision for the future of computing?

------
BrutallyHonest
Hi Alan,

Could you please answer:

1) What is your opinion about Actor Model? Does it have a potential? What is
the next step for OOP?

2) Do you think software of the future should be end-user modifiable?

3) What would be the Dynabook of 2016? Smart contact lenses with a gesture
interface?

Thank you very much!

------
huherto
How do you become a lifelong learner? How do you stay excited about the
future?

~~~
alankay1
Have good parents (and probably some pre-dispositions). Then be really
stubborn about wanting to understand what is going on (no one else really
wants you to)

------
kafkaesq
So what do you think of Scala?

------
msutherl
Hi Alan,

I'd like to go deeper into your notion of "pop cultures" vs. "progress", in
the context of innovation, but also the arts. Can you recommend some readings
that might fill out those concepts?

~~~
alankay1
Try to understand the nature of "pop culture", especially as it relates to
"traditional cultures", to human genetics, etc. What does a pop culture want?
(What do people in a pop culture really want?) And why?

What are "developed cultures" all about? What are the strengths and weaknesses
etc.

What kinds of criticism obtain and help in various kinds of cultures?

Etc.

------
Ericson2314
Alan, while trying to come up with a good question, I learned you are a
musician. Great! As a fellow musician (also jazz and classical), I'm curious
whether you feel this has influenced your engineering.

~~~
alankay1
I'm not really an engineer -- but yes music furnishes many good parallels and
metaphors ...

~~~
Ericson2314
Glad to hear it! I would certainly agree.

[Do you prefer "[computer] science", "design", or another term? I personally
get a bit queasy about calling it "science" when the field isn't centered
around experiments and their analysis. Moreover "sciencing" is not a verb :).]

~~~
alankay1
I like "computer science" when doing real "computer science". Science is
making explanatory models from and about phenomena and looking for more
phenomena.

Most of the time I think I'm "designing".

In English "sciencing" can be a verb. And there's the great line from "The
Martian" \-- "I'm going to have to science the shit out of this!"

~~~
Ericson2314
That makes sense. Heh, didn't see that movie, but good quote.

------
mti27
What TV show or movie have you seen that has realistically portrayed advanced
computer technology, or that we are growing into? In other words, now that we
have Amazon Echo, is the Forbin Project more realistic?

------
quakeguy
He is a great guy; I just want to thank him via this text field I am given.

------
nekopa
Hi Alan,

What is your view on Literate Programming and why it hasn't taken off (yet)?

~~~
alankay1
It would be great for the programs themselves to somehow be "literate". That
said, it is quite a bit of work to write an essay (Don Knuth makes it look
easy, but ...)

------
lpalmes
Hi Alan, if you are familiar with Go, what do you think about its simplicity
as a language? Is that something other languages should start thinking about
in their design?

~~~
alankay1
"Simple" is not the main force in a good programming language

~~~
lucasArg
And, in your opinion, what is the _main force_ in a good programming language?
Or maybe the top 3 :)

~~~
alankay1
Why don't you try to pick a few, and I'll try to comment?

~~~
rubidium
things/persons/places

relationships

time

~~~
alankay1
These are often important, useful, and needed.

I would take a different perspective that puts as "higher forces" things like:

\-- what helps thinking about things in general, about problems, and resolving
them (epistemological concerns, which include the whole environment as
intrinsic to "language")

\-- representational matchups to what we are trying to model and create
dynamic inference processes for (mathematical concerns -- this is why
"mathematics" is a plural, in real math you invent maths when needed ...)

\-- orthogonal axes for many areas, including meaning and optimizations,
including definitions and meta definitions, debugging, reformulation, etc.
(pragmatic concerns for eventually winding up with workable artifacts)

\-- and so forth ...

------
olantonan
Do you have an opinion on text-based vs visual programming languages? I think
the latter is good for learning, but it feels impractical in my day-to-day
job. Is there a sweet spot?

------
corysama
Could you recommend a small number of historic papers in computer science for
undergrads to read so that they can have a bit more context for the state of
modern tech? Thanks!

~~~
alankay1
I should probably do this, but also probably won't ...

------
infinite8s
Did you ever experiment with 'late-binding' hardware? IE something akin to
FPGAs today? Could that be considered the progression of the Alto's microcode
design?

~~~
infinite8s
Possibly related to that is the idea of 'late-binding' of physical items (ie
3D printing). It's interesting to think about the combination of computer
driven synthesis and manufacturing of physical/computational products (ie
smart materials).

------
rudedogg
What are your favorite talks you've given? Can you link to the videos if they
were recorded?

I enjoy watching your presentations, but I'm sure there are some I've missed.

~~~
alankay1
I don't keep track of them ...

------
nextputall
Hi Alan,

What do you think about the Newspeak Programming Language
([http://www.newspeaklanguage.org](http://www.newspeaklanguage.org))?

------
Atwood
Is the Duruflé Requiem hindered or helped by a full pit? Does Chip excite you
the way the OLPC XO did/does? Salutations/felicitations, and appreciation for
the AMA.

------
alehander42
Do you think artificial human languages designed with strong logical rules
(Lojban...) can be successful?

Do you think they could act in a Newspeak-like way (the 1984 Newspeak)?

~~~
alankay1
I was really into this stuff 40-50 years ago. It's still an interesting area
to ponder.

------
mbrock
Do you recall any interesting work on discussion forums or alternatives to
them for promoting collaborative thinking?

Or for another approach, how do you like HN?

------
childintime
Hi Alan,

What skills would your (hypothetical?) apprentice need to have?

If this were more like a partnership what would be the subject to work on?

For that matter, what are you working on now?

~~~
alankay1
I'm always working one way or another on helping children grow up to be
caretakers of the 21st century.

------
dillonforrest
What are your pet peeves within your field of work?

~~~
alankay1
None of my peeves are "pets"

------
ksec
What do you think of Steve Jobs? And the current Apple? Did you have any
meaningful friendship with him? Do you miss him? Any story to share?

~~~
alankay1
Steve was not the kind of person to have friends, but he and I were "pretty
friendly" right up until his death -- this is partly because our lives
intertwined closely a few times -- not just for Apple, but also for Pixar, and
then later as I tried to get him back to real education as an Apple goal.

~~~
melchebo
What do you think of their recent Swift Playgrounds programming education
effort?

~~~
alankay1
Answered (obliquely) elsewhere in this AMA

------
lispython
Hi Alan,

Comparing how you think about doing research now with how you thought when you
were starting out, what is the biggest change in your thinking?

~~~
alankay1
I had the great -- and lucky -- benefit of falling into a well established
research community -- ARPA-IPTO -- in the 60s with roots going back into the
50s and 40s. They "knew how to make progress" and I learned most of what
little I understand about "process" from growing up in it.

Here is a little tribute essay I wrote for this research community in 2004:
[http://www.vpri.org/pdf/m2004001_power.pdf](http://www.vpri.org/pdf/m2004001_power.pdf)

------
arkj
Consider a kid starting to learn programming: which language would you
suggest they learn first?

Also, is there a minimal list of must-know languages?

------
ldargin
Do you consider the recent advances in AR/VR a useful trend, or is its
emphasis on spatial movement mostly superfluous?

------
gorlist
Can you name a few of today's Michelangelos?

------
Adam-Kadmon
What is the best language to learn OOP concepts ?

~~~
alankay1
Good question. If we are talking "real OOP", I'm not sure these days. What do
other people think?

Smalltalk is very long in the tooth these days, but it is still "rather
object-oriented in good ways".

Erlang and its derivatives are fun and good to help that kind of thinking.

~~~
npm83
What do you think is missing from Smalltalk, and what would you expect from a
"21st-century Smalltalk" if there were such a thing?

Even though it is "old", I find it extremely valuable for learning to think
and reason in a pure OO fashion.

~~~
alankay1
It's worth thinking about what scales and what doesn't scale so well. For
example, _names_ are relatively local conventions. We could expect to have to
find better ways to describe resources, or perhaps "send processes rather than
messages". Think about what's really interesting about the way Parc used what
became Postscript instead of trying to define a file format for "documents"
for printers ... (a programming language can have far fewer conventions and
be more powerful, so ...)

~~~
blihp
Granted, names have their problems. But as a tactical solution URLs can be
used to send high-level messages or at least as a name resolution scheme to
initiate communication between two or more OO systems. And URLs are just
names... (credit where it is due: I first heard the thought expressed by you
and it gets the job done until something better comes along)

Admittedly this does not (directly) address the issue of sending processes
(which could be handled indirectly as payload.) Or am I missing the bigger
picture you're driving at here?

~~~
alankay1
Keep on turning the crank ...

What if you need something but don't know its name or URL? Etc.

~~~
blihp
My example definitely presumed things like service discovery, which gets you
to URLs. But I think I see the larger point you're making: what if you don't
even know what kinds of services are available (which my example assumes you
already know), and once told of their existence, how do you negotiate their
capabilities and usage? (My example, as I envisioned it, completely falls
apart here without some major plumbing, which might require running an open
sewage line through the kitchen -- i.e. probably not the best way to do it.)

P.S. I have found the team's work on STEPS quite thought provoking. If anyone
could find the time once things settle down, it would be most appreciated if
some quick docs re: how to reproduce (i.e. build) the Frank environment could
be put together. (if they already exist, a pointer to them would be helpful)

~~~
mmiller
One thing I've found about the web, as it exists, is that the people who set
up a system for the outside world to use change it over time. The URLs change.
The CGI parameters their service will accept change. As things exist now, a program
_must_ use "sticky" names, and if a name changes, all of a sudden the
connection is broken, even though the exact same functionality still exists.
It would be good if a program could find the functionality it needs based on a
functional description of what it needs, rather than a name. That gets more to
what's really important. I once complained to Alan that Smalltalk had the same
problem. If I change the name of a class, all of a sudden all the code in the
system that needs that class could no longer find it, even though I had
changed none of its functional code. This seemed like an extremely brittle
scheme for finding resources. The name is not the important thing about what a
program needs. It's just a reference point that does nothing. Names are still
good, because they allow _us_ to quickly identify resources, but they should
mainly be for us, not the program, because when you really think about it, a
program doesn't care what something is called.

~~~
alankay1
Hi Mark

Actually, this is not quite true about Smalltalk code (which is linked to its
class). But referring to things is also done via variables and selectors of
various kinds, and these are names which have to be locally known. Smalltalk
can also find quite a few things by description (for example it can find
things like the "sine function" via examples of inputs and outputs).
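
(Squeak Smalltalk's MethodFinder does something like this. As a toy illustration of locating a function by behavior rather than by name, here is a sketch in Python; the function and names are invented:)

```python
import math

# Return the candidate functions consistent with all (input, output) examples.
def find_by_example(candidates, examples, tolerance=1e-9):
    matches = []
    for name, fn in candidates.items():
        try:
            if all(abs(fn(x) - y) <= tolerance for x, y in examples):
                matches.append(name)
        except (ValueError, TypeError):
            pass  # candidate not applicable to these inputs
    return matches

library = {"sin": math.sin, "cos": math.cos, "sqrt": math.sqrt, "exp": math.exp}

# Describe the resource by what it does, not what it is called:
examples = [(0.0, 0.0), (math.pi / 2, 1.0)]
print(find_by_example(library, examples))  # -> ['sin']
```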

~~~
chriswarbo
> it can find things like the "sine function" via examples of inputs and
> outputs

I think this is a really important idea. On one hand, it can save us from re-
inventing code which already exists (e.g. "this existing code will satisfy
your test suite"), it can help us discover relationships between existing
things ("the function 'sine' behaves like the function 'compose(cosine,
subtract(90))'"), it can aid refactoring/optimisation/etc.

On the other hand, it could also help us discover services/information which
we could not obtain by ourselves. For example, discovering a database mapping
postcodes to latitude/longitude.

There's some interesting work applying this to pure functions, using testing
[https://hackage.haskell.org/package/quickspec](https://hackage.haskell.org/package/quickspec)
and proofs [https://github.com/danr/hipspec](https://github.com/danr/hipspec)

It's also closely related to inductive programming (e.g. inductive logic
programming, inductive functional programming, or even superoptimisation),
where _combinations of_ existing functions are checked against the
specification. Of course, that leads down the path to genetic programming, and
on to AI and machine learning in general!
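
(As a quick property-style check, in the QuickCheck spirit, of the sine/cosine relationship above, here is a small Python sketch; the degree-based helpers are invented for illustration:)

```python
import math
import random

def sine_deg(x): return math.sin(math.radians(x))
def cosine_deg(x): return math.cos(math.radians(x))

def compose(f, g):
    return lambda x: f(g(x))

def subtract(k):
    return lambda x: x - k

# The claimed relationship: sine(x) == compose(cosine, subtract(90))(x)
candidate = compose(cosine_deg, subtract(90))

# Random testing over many inputs stands in for a proof.
for _ in range(1000):
    x = random.uniform(-720, 720)
    assert abs(sine_deg(x) - candidate(x)) < 1e-9
print("sine behaves like compose(cosine, subtract(90)) on all sampled inputs")
```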

------
musha68k
How can I get my thinking out of the/my box?

~~~
alankay1
Pay a lot of attention to realizing you (and all of us) are in "boxes". This
means what we think of as "reality" is just our "beliefs", and that "the
present" is just a particular construction. The future need have no necessary
connection to this present once you realize it is just one of many possible
presents it could have been. This will allow you to make much more use of the
past (you can now look at things that didn't lead to this present, but which
now can be valuable).

~~~
musha68k
OK I'll continue to get my history straight :)

Thank you, Alan!

------
nxzero
Alan,

What research have you been a part of that is the most promising, yet least
known, and why do you feel it failed to become more well known?

------
agentgt
What other interests do you have that are not technology related. For example
what kind of music do you like? Do you like art?

~~~
alankay1
See what you can find out from the web on this one ...

------
olantonan
A Reddit commenter is implying this AMA is fake. How are HN accounts verified?
How do we know this is a real AMA? Just curious.

~~~
dang
I personally vouch that this isn't fake. (But anyone familiar with Alan's work
could tell that just from reading his comments here—who could possibly fake
these?)

We detached this comment from
[https://news.ycombinator.com/item?id=11940227](https://news.ycombinator.com/item?id=11940227)
and marked it off-topic.

~~~
olantonan
"who could possibly fake these?"

Copying from other sources. I'm not saying it's fake, just curious about
verification, since someone claimed it was.

For instance, your "vouch": are you an official HN account? How can I tell?

~~~
BrutallyHonest
If an answer could be confused with a "real" one, then it's just as good.

~~~
olantonan
No

~~~
coldtea
That's an ongoing issue in philosophy though -- you can't just say "no" as if
it's a solved issue.

~~~
olantonan
Yes

------
olantonan
No language today is able to improve itself like Smalltalk was able to. That's
pretty sad, wouldn't you say?

~~~
alankay1
Let's give Dan Ingalls the majority of the credit here. I will admit to
"seeing" what was possible, but Dan was able to make really great compromises
between what _Should Be_ vs what would allow us to make great progress in
the early 70s on relatively small machines (give Chuck Thacker the majority of
the credit here for similar "art of the possible" with the Parc HW).

I liked the MOP book because they carried the model more deeply into the
actual definitions -- I very much liked their metaphor that you want to supply
a "region of design and implementation space" rather than a single point.

------
ehudla
Do you like the culture of Silicon Valley?

------
olantonan
Does it suck getting old for you? Do you have stamina to make new stuff?

I'm old; it's very hard to stay on top of all the changes.

~~~
alankay1
It doesn't suck "getting old" -- and you only find out about stamina by
trying to do things ...

(We are fortunate that most of what is "new" is more like "particular 'news'"
rather than actually "new". From the standpoint of actual categorical change,
things have been very slow the last 30 years or so.)

~~~
olantonan
About categorical change, I agree... seems like everybody is just busy
optimizing stuff invented in the 70s.

~~~
alankay1
Or less -- I think UIs have really gone downhill.

But, it's also worth looking at places where there's been enough change of one
kind or another to constitute "qualitative". This has certainly happened in
many areas of engineering and in science. How about in computing?

~~~
phantarch
Can you explain more about what in UIs you think has gone downhill? I've seen
you refer to this idea in quite a few of your comments and it would be great
to get insight on what aspects you think need
improving/exterminating/rethinking.

~~~
alankay1
I should get you to articulate this yourself

But let's see -- how about UIs giving up on UNDO, not allowing or not showing
multiple tasks, not having good ways to teach how to use the UI or the apps,
...

And a zillion more ...

Yikes!

------
bitmadness
Hi Alan,

I'm a CS PhD student at Caltech. What advice do you have for young computer
scientists, especially for PhD students?

~~~
alankay1
A PhD program is very much about the context of forefront ideas and people. I
lucked into one -- that got me to realize that this is what prospective grad
students should be looking for.

------
chews
Given that tablets have lived up to the Dynabook concept, what do you think
about seeing 3 year olds with iPads?

~~~
alankay1
The Dynabook was/is much more of a "service idea" than a physical
manifestation (there were actually 3 physical ideas for it). Today's tablets
don't fulfill the service ideas.

------
Woodi
Hi Alan,

Do current OO languages miss, or fail to use, good ideas/features that were
already invented?

Is programming "with GUI only" sensible for the future?

GL

------
lispython
Hi Alan,

Have you ever made a serious mistake -- one where, given the opportunity, you
would start over with a different approach?

~~~
alankay1
"Most ideas are mediocre down to bad" \-- and mine certainly have been.

Beyond ideas, I would do some process things differently, especially those
that could have been done better if I had been able to understand people
better.

------
seccess
Are you familiar with (the programming language) Go? What do you think of Go's
approach to objects?

------
slrigevol
This is a spectacular thread. I gave up everything and traveled to the U of U
in 1976 because alankay1 had done his thesis there. They got so much right in
such a short time -- Eliot Organick (Multics), Tony Hearn (Reduce, symbolic OS
for the TI-92 and TI-89). All inspired by Alan Kay.

------
EGreg
What is Alan Kay doing these days?

~~~
alankay1
Too much typing into too small apertures!

------
mej10
What is your recommendation to someone wanting to get into the kind of
research you do?

~~~
alankay1
"No one owes more to his research community than I do"

I lucked into the ARPA community 50 years ago (without even knowing that it
existed).

A good start is to find people and places that are doing things you think are
interesting ...

------
atarian
What is your stance on the future of AI? Is it something we should be
concerned about?

------
olantonan
I'm no doubt your biggest fan. What do you think of the Simula inventors' work?

~~~
alankay1
The Simula guys cannot be too highly praised, especially Nygaard.

~~~
bakul
Nygaard (along with Møller-Pedersen) later designed the Beta language. A very
fine "object oriented" language that tried to unify the concepts of classes,
procedures and structures into "patterns". Seemed like a far better language
than C++, Java etc. but it never quite took off. I wonder if it "lost" because
a) it was not brashly promoted by an American company or b) it used (# ... #)
instead of { ... } :-)

------
anildigital
Statically typed programming languages or Dynamically typed programming
languages?

~~~
alankay1
What does an "Intergalactic Network" need?

------
BrutallyHonest
What is Actor Model lacking?

~~~
alankay1
For what?

~~~
BrutallyHonest
1) To become as mainstream in a general-purpose way as OOP and functional
programming paradigms are. One of the problems I know of is a lack of
composability.

2) To become your favourite approach.

~~~
alankay1
Historically, (1) has not particularly depended on merit, and (2) would
require a lot (because "nice atoms" were more interesting back in the 60s than
today).

------
cardmagic
What do you believe that many programmers you know don't agree with?

------
philippeback
Alan,

What do you think of the Pharo project?

------
uptownfunk
Alan, what are your thoughts on lbstanza and its class-less object system?

Thanks!

------
textmode
alankay1 says UIs have declined in usefulness. But aren't there hordes of UI
programmers today, getting paid handsomely for their "work"? How to explain
this?

------
jyotipuri
Hi Alan,

What do you find most frustrating about software development these days?

Regards, Jyoti

~~~
alankay1
Yikes!

------
kev009
What is your opinion on Operating Systems research and industry? I find the
Linux monoculture tiresome.

~~~
alankay1
See elsewhere in this AMA

------
samirm
Hi Alan,

Tabs or spaces?

------
skull205485
I need help because I do not know how to hack, so can you help me?

------
alankay1
Hi Folks

Thanks for all your questions, and my apologies to those I didn't answer. I
got wiped out from 4:30 of answering (I should have taken some breaks). Now I
have to. I will look at more of the questions tomorrow.

Very best wishes

Alan

~~~
mathattack
Thank you very much for your time here. More importantly, thank you for your
contribution to the field. All of us here live and work in a more exciting
place because of the visions that you pursued and built. We are better for it.

------
jsprogrammer
You say the problem with Xerox is that they were only interested in billions
(instead of trillions).

Should we currently be interested in quadrillions, upper trillions, or,
perhaps, larger? Once we become interested in an appropriately large number,
what preparations should we be taking so that we can operate at that level? Do
we just start putting product out there and collect the value on the open
markets, or do we need to segment markets to maximize value? Can you tell us
about any other mistakes you feel Xerox might have made in realizing the value
of PARC?

~~~
alankay1
That was a metaphorical hyperbole to attract the attention of people who think
in terms of money rather than in "basic improvements to the world".

The idea is that "qualitatively changing the context in a powerful way"
trivially creates enormous potential for everything (including making money).

~~~
jsprogrammer
I don't think it was hyperbole and I am being 100% serious. You make a good
case that PARC contributed $35+ trillion in value, of which Xerox was only
able to capture a portion.

If we extrapolate an exponential trend from forty years ago, a quadrillion
seems like it might be about right. If I'm doing my math correctly, that is
only about $100k of value per person. Companies will soon be doing $1 trillion
in annual revenue, so a 1000x multiplier doesn't seem out of the question.

I guess my question is: what needs to be done to realize, operate, and
maintain the enormous potential? But, maybe it is just something that happens
as a result of the changes?

~~~
alankay1
I don't dispute the figure, but in most cases when I say that phrase I'm
trying to be hyperbolic to make a different point

~~~
jsprogrammer
Maybe we can talk about ways context should be changed?

I still don't walk into a store with my phone and then quickly walk out with
exactly those items I predetermined. Occasionally I will use my phone as a
list, but a small piece of paper is easier to reference while in the store. I
don't ever see anyone else doing any better and most never use their phone or
any other device.

I think this means that we do not yet have true personal computers.

What might be the most important context to change/solve?

~~~
alankay1
Most of what is between most people's ears

------
smegel
What do you think about 4GLs - do you think they still hold any promise and/or
represent a solution to today's language woes?

------
testmonkey
What role do people like Terry A. Davis (and his TempleOS) serve in imagining
what's possible in computing? I'm thinking of Jaron Lanier's idea of society
getting "locked in" after certain technical decisions become seemingly
irreversible (like the MIDI standard).

------
pyed
Do we "really" need more programming languages ?

~~~
alankay1
We could use a few "good ones" (meaning ones that really are about the
realities, needs and scales of the 21st century).

~~~
anildigital
Could you list top 3 good ones as per your opinion?

~~~
alankay1
I meant, we could "use three good ones", not that I knew of three ...

------
mischief01
It is not unlikely that you will never get to the same league as 'the
inventors'.

Most people are mediocre at everything they do and will always be. Most likely
that includes you. We live in a culture that doesn't just tell everyone that
they can easily outgrow mediocrity, no, this culture tells people they are
above it from the very start.

The following advice doesn't apply to geniuses or those who can be, but it
applies to the majority who will read this:

The best you can do is look back on your life and see if there is only
mediocrity. If there is, you have to be honest with yourself and recognize if
it's so because of factors that you can still change, or not. For most people
reading this the latter will be the case, which means you simply have to live
with it and stop trying to influence the world, because everything you end up
doing is going to make things worse (for you, and everyone else).

~~~
ehmorris
This comment is so annoying - the perfect HN mix of pretentious and nihilist.

If you really think that average people trying to create anything will make
things worse, then you're probably fine with the entertainment-addiction cycle
in the first place. Maybe you work for Supercell.

~~~
aleh
I believe this comment is not directed to people who are trying to create
something, but to people who feel entitled/special because they are trying to
create something they perceive as improvement for humanity.

------
Avshalom
Are you 2 years old?

~~~
dang
We detached this subthread from
[https://news.ycombinator.com/item?id=11952046](https://news.ycombinator.com/item?id=11952046)
and marked it off-topic.

------
olantonan
I'm not into quackery.

~~~
dang
Please stop posting uncivil and unsubstantive comments to Hacker News.

We detached this comment from
[https://news.ycombinator.com/item?id=11941925](https://news.ycombinator.com/item?id=11941925)
and marked it off-topic.

------
ycombinatorMan
>this is stealing my life

Really? Because I quite enjoy my "wasted" time. I can't imagine what situation
you could possibly be in where it's not enough for _you_ not to "waste" time
being happy -- you need other people to do the same. You are what's wrong with
the world.

~~~
dang
> _You are what's wrong with the world._

Oh dear. Comments like this are definitely not allowed on Hacker News. Please
(re-)read the site guidelines and don't post anything else like this!

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

[https://news.ycombinator.com/newswelcome.html](https://news.ycombinator.com/newswelcome.html)

We detached this comment from
[https://news.ycombinator.com/item?id=11948642](https://news.ycombinator.com/item?id=11948642)
and marked it off-topic.

