
Hacker News Highlights, the Alan Kay Edition - craigcannon
http://themacro.com/articles/2016/06/hn-highlights-june-4/
======
nickpsecurity
This was already useful with him saying that Barton was a brilliant engineer.
I have one paper with his name on it, sent by pjmlp, that describes his design
of modern computing:

[https://de.scribd.com/doc/61812037/Barton-B5000](https://de.scribd.com/doc/61812037/Barton-B5000)

He describes the need for ALGOL-like programming languages, how they'll be
implemented, training people, and early notions of hardware/software co-
design. He also touches on various aspects of ALGOL that could be turned into
a CPU. Later, he helped design and build one that was essentially the first
business mainframe, plus one of the first mostly-safe, truly engineered
computers.

[http://www.smecc.org/The%20Architecture%20%20of%20the%20Burr...](http://www.smecc.org/The%20Architecture%20%20of%20the%20Burroughs%20B-5000.htm)

Later, another Burroughs guy, Anderson, used a similar engineering mindset to
invent INFOSEC, per Roger Schell, who expanded on his work. Another, hired by
Intel, put the MMU and segments in there to give security developers a chance.
All of it may ultimately trace to Barton's work and his framework for thinking
about machines from high level to hardware. It's worth highlighting, and it
became clearer thanks to Kay's little remark, which tells me who the brains of
the operation probably was among the many authors' names I see in various
places.

It turns out he has a Wikipedia page:

[https://en.wikipedia.org/wiki/Robert_S._Barton](https://en.wikipedia.org/wiki/Robert_S._Barton)

Note: Reading this, he _may_ be the inventor of practical, abstract machines
as computing solutions. He also taught many people who became success stories
in industry. I wonder what else he would've done if he had stayed in industry.

~~~
alankay1
Bob Barton is in a class of early computer people I don't think can be over-
praised. However, just to put a caution on the hyperbole, I had been a
journeyman programmer in the Air Force and then at NCAR -- the latter to work
my way through the last two years of college -- and I just "didn't know
anything" except how to code a little and design a little. When I got to grad
school 50 years ago in 1966, I was immediately confronted by Ivan Sutherland's
Sketchpad, the first Simula, many other things in the actual field, and
especially in the ARPA community, and ... Bob Barton himself, who Dave Evans
had convinced to be on the Utah faculty for a few years (something he was
uncomfortable with).

In any case, this was like someone going from the 10th century to the 20th
century, where, as Arthur C. Clarke pointed out, "Advanced science resembles
magic". I am quite surprised 50 years later to still feel the same way about
these people, but my initial impressions have stuck. They seemed and do seem:
magical.

I've described Bob a little in "The Early History of Smalltalk" that I wrote
for the ACM in 1993
[http://worrydream.com/EarlyHistoryOfSmalltalk/](http://worrydream.com/EarlyHistoryOfSmalltalk/).
A truly amazing character (and I should try to write a portrait of him at some
point).

Both the details, and especially the overall design philosophy of the
Burroughs machines are worth close study. What was attempted in the B5000 and
B5500 was breathtaking, and as is often the case, perhaps too much in a few
places (this happened in the later Flex Machine as well, and in the much later
Intel 432).

At Parc in the 70s we were able to take a second pass at "higher level
architectures" (and so did Bob Barton at Burroughs with the B1700) via mixing
microcode with fixed functions.

The Parc approach was much simpler and less comprehensive (it had to work on
relatively inexpensive personal computers, and we had a truly wonderful
engineering mind in Chuck Thacker, who was instrumental in retaining powerful
ideas in a parsimonious fashion).

But this hybrid turned out extremely well for a wide variety of needs to make
"well-fitted-processors" for higher level languages. Part of the need for the
hybrid was to tinker with various kinds of storage schemes, formats, garbage
collection, swapping, etc., that were not well understood enough to be put
directly into hardware (this was one of the two or three slight flaws in the
B5000 scheme).

I commend the "The Approach ..." (1961) paper listed above as one of the great
papers in our field -- and it must be in the top two or three for maximum
content in six pages. I should also note that Bob was a mathematician, thought
like a mathematician, and designed like a mathematician -- engineering was
more of a hobby for him (and not one that he always paid a lot of attention
to). I had some of the same "hangups" -- including being a constant reader of
everything -- and these were part of the basis of our relationship.

~~~
nickpsecurity
"I've described Bob a little in "The Early History of Smalltalk" that I wrote
for the ACM in 1993"

That was an amazing read. So much connects now. I'm going to have to think on
it as it's a lot to take in for one morning. I do spot some things to
immediately reference.

"They seemed and do seem: magical."

Well, when I read the paper I linked, it does look like so much then-unheard-
of stuff compressed into one short document that magic is an apt description.
It led to the B5000. Not just that, though. Your Smalltalk history showed it
also helped shape OOP and your work. A Roger Schell interview showed INFOSEC
basically started with Anderson: a Burroughs engineer who looked at people,
software, and hardware to find systemic security risk. His methods are similar
to Barton's, with a different perspective. I bet Anderson's thinking traces to
it and the Burroughs B5000. So Barton, seemingly out of thin air, hit all
kinds of critical points in the concept space of IT, helped put them into a
demo (the B5000), and its ripple effects helped create OOP (indirectly) and
INFOSEC (directly). Fair to call it magic, as most people were more narrow and
incremental, with nowhere near as much impact.

" I should also note that Bob was a mathematician, thought like a
mathematician, and designed like a mathematician"

That makes a lot of sense. His work resembles high-assurance work a bit, in
that he worked from clear specs of what it had to do, to primitives that could
do it, to the right combination of pieces. It's like he was formulating and
solving equations for high-level programming itself. It's why I originally
skipped the paper: I wasn't mathematical enough to read some of it. Then your
HN comment mentioned Barton, I had _one_ paper with that name, and now all
kinds of light bulbs are turning on. :)

Speaking of people doing magic, I landed on Margaret Hamilton's team a year or
two ago:

[https://en.wikipedia.org/wiki/Margaret_Hamilton_%28scientist...](https://en.wikipedia.org/wiki/Margaret_Hamilton_%28scientist%29)

[http://htius.com/Articles/articles.htm](http://htius.com/Articles/articles.htm)

The "Lessons learned from Apollo" paper suggests they were similarly working
in a vacuum around same time as Burroughs stuff was coming out. Invented all
sorts of engineering and software principles in the process. Are you aware of
any others work that went into that or any influence of such work on your side
of the fence? As in, was her team's work... especially person-high stack of
correct code... as independent and ground-breaking as it appears? Or building
on prior work in a way we just haven't seen?

Just curious, as I've tied together INFOSEC, Burroughs, and your work, but I'm
not sure how her work or legacy fits into the larger picture. Outside safety-
critical systems, that is.

" the later Flex Machine as well"

Is the Flex Machine with tagged hardware and the Ten15 VM the same as yours or
different? Yours mentioned an "aerospace" company. In any case, I wish I had
more details on both, as they seem like an extension of Burroughs thinking
that sort of appeared and then disappeared. I think a HW-accelerated Ten15
could be valuable today, even if it were an imperative-style RISC plus one
customized for functional stuff, with clean IPC or function-call mechanisms to
integrate anything written for either.

[https://en.wikipedia.org/wiki/Flex_machine](https://en.wikipedia.org/wiki/Flex_machine)

"via mixing microcode with fixed functions."

I keep recommending that groups doing RISC-V, etc., use microcode instead of
fixed function, the reason being that old work showed microcoded processors
could be repurposed to do many things. The microcode was also used for speed
boosts and atomic execution of arbitrary instruction sequences. Do you second
the recommendation of keeping microcode in CPUs wherever possible?

" Part of the need for the hybrid was to tinker with various kinds of storage
schemes, formats, garbage collection, swapping, etc., that were not well
understood enough to be put directly into hardware"

Here are the two leading candidates for successors to Burroughs and
capability-oriented systems:

[http://www.crash-safe.org/papers.html](http://www.crash-safe.org/papers.html)

[https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/](https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/)

SAFE combines tagged architecture, functional programming, and a device for
arbitrary policy expression and enforcement. It's clean-slate with strong
properties. CHERI implements a capability architecture that allows fine-
grained enforcement and C compatibility. It runs CheriBSD, a FreeBSD port. I
see CHERI as an interim solution, with SAFE going in the direction of a more
ideal system. I think they can be mixed. Yet I think CHERI might not be
thorough enough, and SAFE is more complex than needed. I thought a subset of
SAFE could run an Oberon System port with instant practicality.

So, here are the lines I'm thinking along that I'd like to run by you. I
started with the i432's successor, the i960. The manual has a nice reference
of its capabilities. It was an amazing compromise between an i432 and a RISC.

[https://en.wikipedia.org/wiki/Intel_i960](https://en.wikipedia.org/wiki/Intel_i960)

What it had was common RISC stuff you can run anything with, an MMU, _error
handling_ of common errors, and _object descriptors_ for OOP constructions or
POLA enforcement. The other thing it had was _speed_ despite all that. So,
start with something like that for RISC-V or OpenSPARC. Implement, a la B5000
or SAFE, some specific, tiny-cost checks for basic primitives: pointer/array
bounds checks; stack protection if it has a stack; code-vs-data tags (or
Itanium-style read/write/execute per word or page); and labeling and
protection of pointers themselves. These checks run in parallel in tiny HW as
the CPU runs, so there's essentially no overhead (a toy software model of such
checks is sketched below). An IOMMU auto-manages tagging of incoming or
outgoing I/O data to fit the model. Any failure takes the CPU to an exception
handler with access to relevant data. Optional, dedicated HW for concurrent GC
could be built into the MMU like some LISP machines. It could be implemented
either clean-slate or with a configurable, open CPU like Gaisler's Leon3.
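
Here is a minimal software model of what those parallel checks would enforce.
The tag values, descriptor layout, and check rules are hypothetical, purely
for illustration; a real design would run these checks in hardware alongside
the access, not before it:

    /* Toy model of tag + bounds + code/data checks on every access.
       All names and layouts here are invented for illustration. */
    #include <stdio.h>
    #include <stddef.h>

    typedef enum { TAG_DATA, TAG_CODE, TAG_POINTER } Tag;

    typedef struct {
        Tag    tag;     /* what kind of word this is            */
        size_t base;    /* start of the object it may reference */
        size_t length;  /* object length, for bounds checking   */
    } Descriptor;

    /* Model the hardware check: "trap" (return -1) on any tag or
       bounds violation, otherwise yield the effective address. */
    long check_access(const Descriptor *d, size_t offset, int is_code_write) {
        if (d->tag != TAG_POINTER) return -1;  /* forged/non-pointer word */
        if (offset >= d->length)   return -1;  /* out of bounds           */
        if (is_code_write)         return -1;  /* code/data separation    */
        return (long)(d->base + offset);       /* access permitted        */
    }

    int main(void) {
        Descriptor d = { TAG_POINTER, 0x1000, 16 };
        printf("in bounds:  %ld\n", check_access(&d, 8, 0));  /* 4104 (0x1008) */
        printf("overflow:   %ld\n", check_access(&d, 32, 0)); /* -1: trap */
        printf("code write: %ld\n", check_access(&d, 4, 1));  /* -1: trap */
        return 0;
    }

In hardware, the three `if` tests would be independent comparators running
concurrently with the memory access, with any failure routed to the exception
handler described above.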

It seems those minimal, primitive protections knock out 90+% of the problems.
Functional languages can be mapped onto them. Extra error handling or object
descriptors, like in the i960, do much more while avoiding heavy-weight
constructions. I think this is a nice start. I have primitives for safe
concurrency, too. What do you think is the absolute minimum of safe primitives
we need on top of that for OOP, like a safe Smalltalk? I don't do dynamic or
late binding, or any of that, currently. I don't know if it needs special
consideration in HW or if the prior HW above could handle it with the compiler
doing all that. Anything come to mind?

And thanks for the time and helping me connect dots in IT history. :)

~~~
alankay1
I'll try to reply to this in sections. One way to think of "magic" is height
of leap. Newton had something before him, but his leap was cosmic. I don't
think we've had a Newton yet in computing, but we have had some cosmic leaps.
Just to pick a few: Sketchpad, Lisp, and the B5000 ideas all had something
before them, but their leaps were enormous.

In the "Early History of Smalltalk" I mentioned that I had seen and partly
learned the B5000 while in the Air Force ca 1963. I say "partly" because I
didn't understand all the implications until a few years later in grad school.
For example, I did understand what was great about the byte codes and stack
mechanisms, but it took until later to really grok that the machine was
organized so that environments and access were -granted-, and that the
protected tagged pointers -- Descriptors -- could not be forged, and that the
actual code contained no addresses. (This was the real start of "capability-
style protection".) Also, the multi-tasking OS -- written in an Extended Algol
-- was not just the first OS in a higher level language, but was more profound
than Algol (it was "Simula" before Simula). This was done extremely well
because -- and most people don't realize this -- the B5000 had multiple CPUs!

More in a next note ...

~~~
alankay1
I've been a huge fan of Margaret Hamilton for a long time. Her story is in
many ways what can be wonderful about great design and creation by really
smart and able people, and so disappointing about our not-quite-a-field. (And
if you want a tough computer to code and make a great system for, look no
further than the Apollo Guidance Computer!)

I think that a mainframe computer like the B5000 (or a more conventional
mainframe) was just too much machinery back then to overlap much with the
needs of Apollo. Hamilton realized that a really good notion of "module"
would help tremendously with integrity, and that the OS should have an active
"overlord" to dynamically assess and deal with real-time conditions and needs
for resources. I think this was just amazingly brilliant work for that time
(and any time). NASA did give her a major award (but where was the ACM?).

I've never had the pleasure of talking with her, but I have a feeling that
NASA could also have done something to encourage her ideas to be more
communicated and put out into the open.

~~~
alankay1
"Flex" has been a very popular label for computer systems. I was referring to
an early desktop computer done by Ed Cheadle and myself in the late 60s for a
company owned by LTV (Ling-Temco-Vought -- an aerospace conglomerate).

The one you are referring to was done in the 80s in the UK (I think). I recall
that there were many good things about this system, and it did use microcode
in a rather similar way to Parc in the 70s. Wasn't one of the machines they
used a PERQ (which was an Alto spinoff by some CMU folks)?

I think I recall -- as with the Intel 432 -- they bit off more than they could
chew, and tried to optimize too many things.

~~~
alankay1
The final part is about "microcode" etc. in being able to get the most
flexibility/performance from hardware. A good heuristic is to look far out (30
years) for "It would be ridiculous if we didn't have ...". See what any of
those might mean 10-15 years out. Simulate the promising ones of these today
using $ to pay for what Moore's Law and other engineering will provide at
lower cost in the future. This will give a platform today on which the
software of the future can be invented, developed, and tested. (This is what
we did to get the Alto at Parc -- and this is what it was -- in the mid 70s an
Alto cost about $22K -- about $120K today. But it allowed the software of the
future to be invented.)

Microcode (invented long long ago by Maurice Wilkes, who did the EDSAC,
arguably the first real programmable computer) used the argument that if you
can make a small amount of memory plus CPU machinery be much faster than main
memory, then you can successfully program "machine-level" functionality as
though it were just hardware. For example, the Alto could execute about 5
microinstructions for every main memory cycle -- this allowed us to make
emulators that were "as fast as they could be".
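
To make the flavor of that concrete, here is a toy software analogue: a tiny
dispatch loop playing the role of microcode, emulating an invented two-
instruction accumulator machine (the opcodes and encoding are hypothetical,
not from the Alto or any real machine):

    /* Toy analogue of microcoded emulation: a fast inner dispatch loop
       interprets the "machine code" of an invented target machine. */
    #include <stdio.h>

    enum { OP_ADD, OP_PRINT, OP_HALT };

    int main(void) {
        /* target "machine code": add 5, add 7, print, halt */
        int program[] = { OP_ADD, 5, OP_ADD, 7, OP_PRINT, OP_HALT };
        int acc = 0, pc = 0;

        for (;;) {                      /* the "microcode" dispatch loop */
            switch (program[pc++]) {
            case OP_ADD:   acc += program[pc++]; break;
            case OP_PRINT: printf("acc = %d\n", acc); break;
            case OP_HALT:  return 0;
            }
        }
    }

The microcode argument is that if this inner loop runs several times faster
than main memory, the emulated machine is, for practical purposes, hardware.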

This fit in well with the nature, speed, capacity, etc of the memory available
at the time. But "life is not linear", so we have to look around carefully
each time we set out to design something. As Butler Lampson has pointed out,
one of the things that make good systems design very difficult is that the
exponentials involved every few years mean that major design rules may not
still obtain just a few years later.

So, I would point you here to FPGAs and their current capacities, especially
for commingling processing and memory elements (they are the same) in highly
parallel architectures. Chuck Thacker, who was mainly responsible for most of
the hardware (and more) at Parc, did the world a service by designing the
BEE-3 as "an Alto for today" in the form of a number of large FPGA chips plus
other goodies. Very worth looking at!

The basic principle here is that "Hardware is just software crystallized
early" so it's always good to start off with what is essentially a pie in the
sky software architecture, and then start trying to see the best way to run
this in a particular day and time.

------
dang
We asked Alan to do an AMA a while ago and he said yes, but having him show up
to comment on the topic of a given thread is in a way even better. I thought
those comments were pure gold. If you missed them, take a look.

Alan and his group joining YC is the most mind-blowing thing to happen (for
me) since I started working on HN. No one has influenced me more in thinking
about computing, and his tireless work in talking about computing history
(especially the work and culture around ARPA) is a true service. His talks
seem to be the only easy place to get that information, and watching them is
like one of those dreams where you enter into a wing of your house that you
didn't know existed.

Edit: One of my dreams for HN is for this community to become active in
recovering, learning, and extending the computing culture that Alan talks
about, which is so much more satisfying than the morass of complexity we
mostly find ourselves bogged down in.

~~~
wwweston
> it's like one of those dreams where you enter into a wing of your house that
> you didn't know existed.

Wait. How common is this?

(I've had these dreams semi-frequently over the last 3 years, but no one else
I've told about them has volunteered that they've had any similar experience.)

~~~
mancerayder
The dreams must be incremental. For example, for me, dreaming about a house so
big it has 'wings' is a latter-stage dream.

I'm still fantasizing about the German doors from the other HN thread this
week.

------
themartorana
On object "states" as recorded object history (snapshots in time?):

 _" The just computed stable state is very useful. It will never be changed
again -- so it represents a "version" of the system simulation -- and it can
be safely used as value sources for the functional transitions to the next
stable state. It can also be used as sources for creating visualizations of
the world at that instant. The history can be used for debugging, undos, roll-
backs, etc._

 _" In this model -- again partly from McCarthy, Strachey, Simula, etc., --
"time doesn't exist between stable states": the "clock" only advances when
each new state is completed. The CPU itself doesn't act as a clock as far as
programs are concerned. This gives rise to a very simple way to do
deterministic relationships that has an intrinsic and clean model of time._

 _" For a variety of reasons -- none of them very good -- this way of being
safe lost out in the 60s in favor of allowing race conditions in imperative
programming and then trying to protect against them using terrible semaphores,
etc which can lead to lock ups."_

Oh, my kingdom for atomic, history-recording (and replayable) object states!
Safety existed in this fashion in the 60s!! I didn't know this, and now I'm
sad.

Race conditions continue to haunt us all to this day, especially as languages
start supporting concurrency and parallelism as primitives. (Not to mention
that debugging race conditions is a nightmare - a nod to the Go language devs
for making clear in the stack trace when a race condition caused a crash.)
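
A minimal sketch of the stable-state model quoted above, assuming a toy world
of four cells and an invented update rule (both hypothetical): each step reads
only the previous stable state and writes a fresh one, so there is nothing to
race on, and every version survives for debugging, undo, or replay:

    /* Each step reads ONLY the previous stable state and produces a
       new one; the "clock" is the step index, not the CPU. */
    #include <stdio.h>

    #define CELLS 4
    #define STEPS 3

    int main(void) {
        int history[STEPS + 1][CELLS] = { { 1, 0, 0, 1 } }; /* state 0 */

        for (int t = 0; t < STEPS; t++) {
            const int *prev = history[t];     /* immutable source   */
            int       *next = history[t + 1]; /* the next "version" */
            for (int i = 0; i < CELLS; i++)
                /* toy transition: each cell becomes the sum of itself
                   and its left neighbor in the PREVIOUS state only */
                next[i] = prev[i] + prev[(i + CELLS - 1) % CELLS];
        }

        /* every stable state survives: inspect or replay any of them */
        for (int t = 0; t <= STEPS; t++) {
            printf("t=%d:", t);
            for (int i = 0; i < CELLS; i++) printf(" %d", history[t][i]);
            printf("\n");
        }
        return 0;
    }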

~~~
pjmlp
You will be even more sad when you start researching the history of operating
systems and systems programming and discover the safety that we already had in
the 60s in systems like the Burroughs machines, which was destroyed by UNIX's
industry adoption.

~~~
m_mueller
And MacOS and Windows and and...

It's the "worse is better" approach or whatever that is called that set the
'not-quite-field' (thank you AK for this) back to where it started. To me it
feels like we are in a hamster wheel of constant reinvention rather than using
the software technologies of the 70ies (mainly from Xerox Parc) as a stepping
stone. And whenever the methodologies break down we cry for the next
framework/language/hardware platform to solve everything.

I for one would love to see a refresh about what is happening to OMeta and its
family of technologies developed at Viewpoint [1]. Is this still ongoing?

[1]
[https://www.youtube.com/watch?v=BbwOPzxuJ0s](https://www.youtube.com/watch?v=BbwOPzxuJ0s)

~~~
blihp
Here's a presentation, while chronologically earlier, that is specifically
about the Viewpoints STEPS project which includes a good segment on OMeta:
[https://www.youtube.com/watch?v=HAT4iewOHDs](https://www.youtube.com/watch?v=HAT4iewOHDs)

As jgon mentioned, Alex (the author of OMeta) continues work on Ohm, his
successor to OMeta. Also, various flavors of OMeta implementations live on in
Smalltalk, Lisp, Javascript etc. Recently VPRI released their final annual
report to the NSF publicly with indications that work is ongoing. So there are
at least pockets of interest and activity.

~~~
m_mueller
That's actually the presentation I had in mind but I couldn't find it anymore.
Searching with Alan Kay as a keyword is not a big help xD. Thank you for
linking it. I'd love to play around with this stuff.

~~~
blihp
I can probably point you in the right direction for OMeta... what's your
preferred dynamic language?

The other material is a bit tougher to come by as I don't believe Viewpoints
has released a code snapshot of what they've been working on to date. There
are bits and pieces in terms of code drops but mainly what they've released
are the NSF annual reports and various papers discussing aspects of STEPS on
the VPRI web site (i.e.
[http://vpri.org/html/writings.php](http://vpri.org/html/writings.php))

~~~
m_mueller
OMeta's successor library has been linked here, so that's not a problem. IMO
it's a shame that Professor Kay never fully adopted OSS. Their new way of
doing a whole OS sure is inspirational - I could even see it as a very
interesting basis for a successor to Linux.

------
dirtyaura
The "Alan Kay"-style moment of Hacker News for me was when I posted a link to
article describing sendfile, tcp_nodelay and tcp_nopush. The article referred
to Nagle's algorithm when describing tcp_nodelay and described the problem
incorrectly. And lo and behold, Nagle itself came to enlighten the uninformed.

[https://news.ycombinator.com/item?id=9045125](https://news.ycombinator.com/item?id=9045125)

~~~
dang
From later in that thread:

 _It still bothers me that the Nagle algorithm (which I called tinygram
prevention) and delayed ACKs interact so badly._

Mark of a true engineer.
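
For anyone who wants to experiment with the interaction mentioned above:
disabling Nagle's algorithm is a per-socket option in the standard POSIX
sockets API. A minimal sketch:

    /* Disable Nagle's algorithm on a TCP socket with TCP_NODELAY, so
       small writes go out immediately instead of being coalesced. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        int one = 1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
            perror("setsockopt(TCP_NODELAY)");
        else
            printf("Nagle's algorithm disabled on fd %d\n", fd);

        close(fd);
        return 0;
    }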

------
vonnik
Feature request: Let us follow HN users we like and have all their comments
show up in a feed.

~~~
hkmurakami
A friend of mine made a service that does this. (Iirc it's called HNwatcher)

I follow a small number of people with it and periodically get emails about
their new posts. I think some companies use it to track mentions of their
company or product as well.

A follow feature would be nice but this is currently filling the gap for me.

------
gavinpc
Dr. Kay, if you're still following... then with singular respect and gratitude
for your life-changing work and ideas, I would like to ask you one question.

Is there a good way to use bad systems?

Such as the web, which you describe as a “broken wheel,” lacking even a
fraction of (e.g.) Engelbart's vision. Or Linux, which you call “a budget of
bad ideas.” (And no small budget, at that.) Or the iPad (and all common
tablets, I assume), whose interface you call “brain dead.”[0]

What should we do with these things? Are they dead ends? Are they good for
anything? Can they not be salvaged incrementally?

Here in the Hacker News community, where I am happy to see that my enthusiasm
about your work and message is strongly shared, there is yet a huge amount of
energy being poured into the wrong end of the low-pass filter, or, as you call
it, “the world.” I know that we are not averse to learning curves, but maybe
there is too much sunk cost to question what's already “working”? What should
we do?

One answer is to use bad systems to simulate better ones. But—when this is
even feasible—it's always done at the cost of performance, and VPRI's
publications make no secret of that. A proof-of-concept does not equal a
product. And at any rate, most of us are not researchers.

Because of this apparent dilemma, the exhilaration that I always feel when I
hear you speak or read your writing is always tainted with a sense of despair.
Is there any enlightened way to _use_ today's systems (for example, as
application developers), or should all of our efforts be directed at fixing
(or indeed replacing) the systems themselves?

Thank you again, for all that you've done and continue to do.

[0]
[https://www.youtube.com/watch?v=gTAghAJcO1o&t=28m](https://www.youtube.com/watch?v=gTAghAJcO1o&t=28m)

[https://archive.org/details/130124AlanKayFull](https://archive.org/details/130124AlanKayFull)

And others. I believe that these quotes are representative and not misused out
of context.

~~~
alankay1
I just found your comment. One answer wrt e.g. Javascript is to use it as a
"machine code" and just put a whole better designed thing on top of it. Or try
to get the web world to get behind Native Client or something better that
allows a protected sandbox to be really developed into new facilities.

Another answer is to not to go back to Engelbart, but to at least start with
large ideas like his, and try to think of the Internet and Personal Computing
as something much more than a convenience -- but more as a "lifter" of
humanity. What would that mean?

Another ploy would be to simply think about what is needed close to the end-
users and to follow that back towards the plug in the wall (one hint is that
there is no need to encounter a 60s style "operating system" -- so what would
be much better in a real Internet universe?)

The main heuristic is to posit "I don't really know what I'm doing, and I'm
not a strong enough thinker, and 'You can't learn to see until you admit you
are blind,' etc." This is my starting place for trying to do real thinking,
i.e. we aren't as smart as we need to be -- so we have to put real work and
method into thinking and design.

Tony Hoare had a good observation. He said that debugging is harder than
programming, so don't use all your abilities in programming, or you'll never
get the program working. We can extend that into design. Design is difficult,
but being able to assess one's designs is even harder -- leave something in
reserve to avoid simply making things because we can.

~~~
gavinpc
Thanks for your insights, Dr. Kay. I appreciate your taking the time to reply.

For context, I was recently struggling with these questions in trying to
rationalize a Shakespeare project (on current-day systems) as being good for
humanity. In the section "why new media," I rely on your and Bret Victor's
ideas as a standard for making that argument.[0]

Thanks again.

[0]
[http://gavinpc.com/project_willshake.pdf](http://gavinpc.com/project_willshake.pdf)

------
justin66
Alan Kay's response to the Dijkstra quote was wonderful. I'd always
appreciated Kay's calling out Dijkstra, but that he was amused and not angered
by Dijkstra's attitude is great.

------
JepZ
Since one of my professors forced all students to learn Smalltalk, I deeply
respect Alan Kay. The language is so consistent and readable at the same time.
Whenever I hear a story about him or see something he has done, this respect
grows.

But somehow I stick to coding in Go in my spare time. I am afraid I will never
see a language designed by Alan Kay and Ken Thompson together ;-)

Thx for the HN Alan Kay Edition

~~~
vanderZwan
Have you tried Pharo[0], which aims to be a modern Smalltalk? There is a free
online course[1] going on at the moment as well (French videos, but
subtitled); it's really good.

[0] [http://pharo.org/](http://pharo.org/)

[1] [https://www.fun-mooc.fr/courses/inria/41010/session01/](https://www.fun-mooc.fr/courses/inria/41010/session01/)

~~~
JepZ
Looks interesting, but I don't like those large IDEs, and I don't like the
image philosophy either. So, for example, I use vim instead of Eclipse, and I
really like the golang tooling because it is quite simple to use from a shell.

What I like about Smalltalk is the language and how it is built on very few
principles. The IDEs integrate well with the language but not with the system
around them, and they have a lot of overhead :-/

~~~
nine_k
Re: very few principles: I suppose you're acquainted with Scheme (Racket,
Clojure) and Forth?

------
dang
While we're at it, anybody else have a highlight they'd like us to add to that
list? Either recent or old is fine.

------
curiousgal
I don't know why I found this heart-warming. I love HN. :')

------
xufi
Thanks, Alan, for the Q&A. It was great to see you giving answers and your own
thoughts on the burning questions we had. Glad to have you as part of HN.

------
kenko
The idea that The Glass Bees is "little-known" is ... curious. It was
republished by New York Review Books, not exactly a small little press no
one's ever heard of.

~~~
dang
That was my entirely-unthought-through phrase in sending the link to Craig,
but I can tell you what I meant by it: I've heard of Ernst Junger and had no
idea of that book. I still think it's bizarre and cool that he wrote anything
like it (WWI novelist crossed with Philip K. Dick-level bizarre and cool),
which is why the comment struck me as a highlight.

But I think NYRB's press exists specifically to bring little-known things to
light. Ivy Compton-Burnett, anyone?

------
minimaxir
Er, the "Six Years of Hacker News Comments about Twilio" article was an
admitted troll by the OP.
[https://news.ycombinator.com/item?id=11786464](https://news.ycombinator.com/item?id=11786464)

~~~
dang
Ha, definitely a mistake. Thanks for the QA :)

(We use an internal chat channel to mention links for the highlight list, but
also lots of other stuff that shows up on HN on a given day. Probably the
streams got crossed.)

------
syngrog66
I am 99% confident of this, but would you clarify for us 100%: are you _that_
Alan Kay?

~~~
dang
It's definitely him. Let's not make him bother with authenticating himself.

We detached this comment from
[https://news.ycombinator.com/item?id=11839876](https://news.ycombinator.com/item?id=11839876)
and marked it off-topic.

~~~
alankay1
It's about Bob Barton, why would you mark it off topic?

~~~
dang
Hi Alan. The Barton comments couldn't be more on topic! The only comment
affected was the one asking you to prove that your account was really you,
since we know it's you and I could just say so. If you view the overall page
([https://news.ycombinator.com/item?id=11836832](https://news.ycombinator.com/item?id=11836832))
you'll see that the Barton thread is still at the top, where it belongs.

The reason I did this was so the discussion could focus on the important thing
(Barton) and not the procedural thing (how do we know you're the real Alan).
This seems not to have entirely succeeded :)

Background: A weakness of the comment tree model is that an off-topic reply to
a top comment (like the Barton one) hangs near the top by virtue of its
parent. "Detached" here means snipping a child comment away from its parent
and making it a top-level comment in its own right, which then allows it to
fall in rank on the page. That's what I did in this case. Whenever we do that
we post a little mantra explaining what we did, along with a link to the
original parent, in case anyone wants to see the previous context.

~~~
alankay1
Thank you!

------
jcoffland
This over the top hero worship is part of the HN culture that I just can't get
behind. Alan Kay has an impressive resume but so do a lot of people on here.

Political rallying behind a famous name only leads to the hangers-on getting a
free ride to the top and keeps me standing far clear of the corporate world.
I'm here to discuss the latest news.

~~~
nekopa
But actually, according to the guidelines, this is not a news site. Anything
intellectually stimulating to hackers is fair game.

~~~
jcoffland
Sorry, but Alan Kay's name-dropping history does not stimulate me
intellectually.

~~~
teach
Then move on to a different thread. Don't demand that the rest of HN conform
to your desires.

~~~
jcoffland
What's the Internet for if I cannot demand it conform to my desires?

