Alan Kay has agreed to do an AMA today
1401 points by alankay1 539 days ago | 893 comments
This request originated via recent discussions on HN, and the forming of HARC! at YC Research. I'll be around for most of the day today (through the early evening).



This is one of the best threads HN has ever seen and we couldn't be more thrilled to have had such an interesting and wide-ranging discussion. I know I'm not the only one who will be going back over the wealth of insights, ideas, and pointers here in the weeks to come.

Alan, a huge and heartfelt thanks from all of us. The quality and quantity (over 250 posts!) of what you shared with the community surpassed all expectations from the outset and just kept going. What an amazing gift! Thank you for giving us such a rich opportunity to learn.

(All are welcome to continue the discussion as appropriate but the AMA part is officially done now.)


Thanks to all who made this happen. It was really mind bending trying to parse and understand both the questions and responses, like simultaneously traveling to both the past and the future of computing.

Since then all other HN threads have felt so lightweight, and it is just now that that feeling is starting to wear off...


We should find a way to follow up on some of the rich veins of discussion here—perhaps picking one thing and going into it in more detail. Every time I read or hear Alan I end up with a list of references to new things I'd never heard of before.


@dang thanks to YC for making this happen! I read HN for the comments and this AMA showed why HN has the best comments on the internet.


I agree. Already read several hundred of them. Looks like I gotta come back for another 200 or so. Just so many interesting subthreads here with who knows what impact waiting to happen. Thanks to you and HN for arranging it.


The same here. I've already reread it twice. Made a ton of notes.


Want to share the notes?


When you were envisioning today's computers in the 70s you seemed to have been focused mostly on the educational benefits, but it turns out that these devices are even better for entertainment, to the point where they are dangerously addictive and steal time away from education. Do you have any thoughts on interfaces that guide the brain away from its worst impulses and towards more productive uses?


We were mostly thinking of "human advancement" or as Engelbart's group termed it "Human Augmentation" -- this includes education along with lots of other things. I remember noting that if Moore's Law were to go a decade beyond 1995 (Moore's original extrapolation) that things like television and other "legal drugs" would be possible. We already had a very good sense of this before TV things were possible from noting how attractive early video games -- like SpaceWar -- were. This is a part of an industrial civilization being able to produce surpluses (the "industrial" part) with the "civilization" part being how well children can be helped to learn not to give into the cravings of genetics in a world of over-plenty. This is a huge problem in a culture like the US in which making money is rather separated from worrying about how the money is made.


Then what do you think about the concept of "gamification?" Do you think high densities of reward and variable schedules of reward can be exploited to productively focus human attention and intelligence on problems? Music itself could be thought of as an analogy here. Since music is sound structured in a way that makes it palatable (i.e. it has a high density of reward) much human attention has been focused on the physics of sound and the biomechanics of people using objects to produce sound. Games (especially ones like Minecraft) seem to suggest that there are frameworks where energy and attention can be focused on abstracted rule systems in much the same way.


I certainly don't think of music along these lines. Or even theater. I like developed arts of all kinds, and these require learning on the part of the beholder, not just bones tossed at puppies.


I've been playing traditional music for decades, even qualifying to compete at a high level at one point. There is a high density of reward inherent in music, combined with variable schedules of reward. There is competition and a challenge to explore the edges of the envelope of one's aesthetic and sensory awareness along with the limits of one's physical coordination.

Many of the same things can happen in sandbox style games. I think there is a tremendous potential for learning in such abstracted environments. What about something like Minecraft, but with abstracted molecules instead of blocks? Problems, like the ones around portraying how molecules inside a cell are constantly jostling against water molecules, could be solved in such environments using design. Many people who play well balanced games at a high level often seem to be learning something about strategy and tactics in particular rule systems. I suspect that there is something educationally valuable in a carefully chosen and implemented rule system.

Also, perhaps it's so much easier to exploit such mechanisms to merely addict people that this overwhelms any value to be gained.


I just tried, albeit slightly unsuccessfully, to describe the philosophy of the Montessori system to someone. Your answer, learning on the part of the beholder, sums it up beautifully. Thank you for that.


The way you describe music here sounds a lot like how Steve Pinker has described music: as a mental equivalent of cheesecake; something that just happens to trigger all the right reward systems (the ones based on our love of patterns and structure, and exploiting the same biological systems we use for language) but isn't necessarily nutritious itself.

However, all evidence points to him being wrong about this, making the mistake of starting with language as the centrepiece and explaining everything around it. Human music likely predates human speech by hundreds of thousands of years, and is strongly tied to social bonding, emotions and motor systems in ways that have nothing to do with the symbolic aspects of language.


> The way you describe music here sounds a lot like how Steve Pinker has described music: as a mental equivalent of cheesecake; ...isn't necessarily nutritious itself.

Note that I didn't mean that in a negative way. Also, if you want to consume macro-nutrients, cheesecake is a pretty effective way to get simple carbs and dairy fat.

> ...is strongly tied to social bonding, emotions and motor systems in ways that have nothing to do with the symbolic aspects of language.

I think there is something akin to this that can be found in games, and that there is something particularly positive that can be found in well constructed games.


Yes, sorry: I could have been more clear that what I described was Steve Pinker's judgement, not yours.

And I tried to stay neutral towards games on purpose - I have taught game design myself ;). Having said that, a lot of real-world attempts at gamification are pretty banal carrot/stick schemes.


What are some examples of such well-constructed games?


I think games are more like instruments than they are like music. The game itself isn't as interesting as the gameplay you can perform inside it. Speedrunning in particular has a lot in common with musical performance.


I guess in the use of technology one faces a process rather similar to natural selection: the better the user's ability to restrict his use to what he has to do, the more likely the survival, i.e. the user will not procrastinate and get distracted. The use of computers for entertainment is unstoppable; it's nearly impossible to keep kids from finding and playing those games, chatting with friends on WhatsApp, and otherwise being exploited by companies that make money from that sort of exploitation, even though it comes at the cost of their psychological health and future success. People spend every single second of the day connected and distracted, and this seems irreversible. I wonder if you have any practical thoughts on how this can be remedied.


My friend Neil Postman (our best media critic for many years) advocated teaching children to be "Guerilla Warriors" in the war of thousands of entities trying to seize their brains for food. Most children -- and most parents, most people -- do not even realize the extent to which this is not just aggressive, but regressive ...


Can you elaborate more on that?


Neil's idea was that all of us should become aware of the environments we live in and how our brain/minds are genetically disposed to accommodate to them without our being very aware of the process, and, most importantly, winding up almost completely unaware of what we've accommodated to by winding up at a "new normal".

The start of a better way is similar to the entry point of science "The world is not as it seems". Here, it's "As a human being I'm a collection of traits and behaviors, many of which are atavistic and even detrimental to my progress". Getting aware of how useful cravings for salt, fat, sugar, caffeine, etc., turn into a problem when these are abundant and consumer companies can load foods with them....

And, Neil points out -- in books like "Amusing Ourselves To Death" and "The End Of Childhood" -- we have cravings for "news" and "novelty" and "surprise" and even "blinking", etc. which consumer companies have loaded communications channels with ...

Many of these ideas trace back to McLuhan, Innis, Ong, etc.

Bottom line: children need to learn how to use the 21st century, or there's a good chance they will lose the 21st century.


> Bottom line: children need to learn how to use the 21st century, or there's a good chance they will lose the 21st century.

Most children meet entertainment technology before their first birthday, though. Many pre-teens that I see around possess smartphones and/or tablets. Most of the early teenagers possess multiple devices. None of these will be able to judge what is beneficial to their future and well-being, and opt for it rather than what is immediately fun and pleasing. Just as most of them would live on chocolate bars and crisps if left to do so. The burden falls on the parents, a burden they don't take up.

I myself can't think of a future other than one full of device addicts, and a small bunch that managed to liberate themselves from perennial procrastination and pseudo-socialisation only in their twenties. And while my country can prohibit certain products (food, etc.) from import and production within its own borders (e.g. genetically modified, or chemically engineered to be consumed greedily), this can't be done with websites, because (a) it's technically impossible and (b) it 'contradicts freedom of speech'. I'll ask the reader to philosophise over (b), because neither the founding fathers of the US nor the pioneers of the French revolution, nor most of the libertarian, freedom-bringing revolutionists, had a Facebook to tag their friends' faces.

(edit: I don't want to get into a debate over freedom of speech, and don't support any form of censoring of it, though I don't want freedom of speech at the cost of the exploitation of generations and generations by some companies that use it as a shelter for themselves.)


I once said that "Television is the last technology we should be allowed to invent without a Surgeon General's warning on it"


> Kay: children need to learn how to use the 21st century, or there's a good chance they will lose the 21st century.

> Gkya: I myself can't think of a future other than one full of device addicts, and a small bunch that managed to liberate themselves from perennial procrastination and pseudo-socialisation only in their twenties.

As an infovore this worries me. If we cannot control ourselves and come up with better solutions for self-control, then the authoritarian-minded are likely to do it for us.

The Net is addictive and all those people pretending it ain't so are kidding themselves.

It's easy to imagine anti-Net campaigners in the same way as we see anti-globalization activists today.

I myself have seen the effects of good diet, exercise and meditation on a group of people, and it is quite remarkable how changed for the better people are. So there is hope!

I believe that social change (for example, phubbing becoming widely regarded as taboo) isn't fast enough to keep up with the Net's evolution. By the time a moral stance against phubbing is established, mobile phones probably won't exist. For this I think we need a technological solution which is as adaptive as an immune system, but also one which people can opt in to. Otherwise eventually people will demand governments do things like turn off the Net at certain times during the day or ban email after 6pm and so on.


The introduction to technology (well, essentially I'm talking about the internet) comes so early in a kid's life that we can't just say "we should control ourselves". You can't put your kid in a room full of crisps, sweets, alcohol, drugs, and pornography, and expect them to come out ten, fifteen years later as a healthy individual who is not addicted to any of them. This is what we essentially do with the internet.

> I myself have seen the effects of good diet, exercise and meditation on a group of people, and it is quite remarkable how changed for the better people are. So there is hope!

You're an adult, I am too. We can realize: this is stealing my life. But a kid can't. And stolen days don't return. This is why I'm commenting: we'd rather raise better individuals than letting them do wtf they want and hoping they'll fix themselves later.


I agree. It is pretty sobering.

Just yesterday, before this thread even started (I work as a part-time cleaner), I was polishing a window. Through it I saw some children in a sitting room, one of whom was literally standing centimeters away from a giant flat screen television. Glued to it.

I thought: "Fuck, they don't have a chance". Their attention spans will be torn to pieces like balls of wool by tiny kittens. Now multiply that effect with the Net + VR and you have an extraordinary psychological effect best compared to a drug.

I didn't have a television in my childhood. I read countless books, and without them, I wouldn't be sitting here, I wouldn't have done any of the things I could reasonably consider inventive or innovative. They might not be world changing things, but they were mine and my life was better for doing so.

I was speaking to a friend who has children a few months ago. He was in the process of uploading photos of his family to Facebook. I asked him whether he considered what he was doing to be a moral act, since he is for practical purposes feeding his children's biometrics into a system that they personally have not, and could not, opt in to. He was poleaxed by the thought. He was about to say something along the lines of 'well everybody's doing this' but I could visibly see the thought struck him that "wow, that's actually a really bad line of reasoning I was about to make". Instead he agreed with me, uncomfortably, but he got it.

I don't know how you get millions of people to have that kind of realization. I do think parental responsibility has a huge role though. My parents got rid of the television in the 80s. It was the right thing to do.


The thing that disturbs me about this argument is that IMHO it's a slippery slope towards "back in my day, we didn't have this new-fangled stuff". We have to be extremely careful that our arguments have more substance than that. That requires a lot of introspection, to be honest.

See, my grandparents worried that the new technology that my parents grew up with would somehow make them dumber (growing up with radio, parents getting television); my parents' generation worried that the technology we grew up with would be bad for us (too much computer, too much gaming, too much Internet). The upcoming generation of parents will grow up wondering whether VR and AR is going to ruin their kids' chances.

Yet kids ALWAYS adapt. They don't view smartphones or tablets as anything particularly out of the ordinary. It's just their ordinary. I'm certain their brains will build on top of this foundation. That's the thing - brains are extremely adaptable. All of us adapted.

There's a term for this worry - it's called 'Juvenoia':

https://www.youtube.com/watch?v=LD0x7ho_IYc

http://time.com/19818/whats-really-wrong-with-young-people-t...

Now, I'm not saying that this is a discussion that shouldn't be had - it certainly should. I just think we all need to be mindful about where our concerns might be coming from.


I never said myself that tech per se will make kids dumber. What I say is, there should be measures governing their exposure, just like there are for other things.

Just like an alcohol drinker and an alcohol addict are different, an internet user and an internet addict are different too. Just because some or most are not addicts, we can't dismiss the addiction altogether.


It's just that it seems a bit unfair to decry (or place undue burdens upon) the vast majority of responsible alcohol drinkers because we've found a few people who have an unhealthy relationship with it.

Recognizing potential dangers is a far cry from saying that there's a risk of "losing the century" because of easy access to technology and entertainment, and it strikes me as rather belittling to the younger generation.

Millennials and their children are still humans, after all, and are just as intelligent, motivated, and adaptable as every generation before them.


Who are responsible alcohol drinkers? In my country the minimum age for consumption of alcoholic products is 18. What would you think of a 10- or 15-year-old kid that's a responsible drinker?

What I'm arguing is against an analogue of this in tech. There is a certain period during which the exposure of a minor to technological devices should be governed by parents.

What do you think of adolescents who get recorded nude in chatrooms? Some of them commit suicide. What do you think of children who are bullied online? What do you think of paedophiles tricking kids online? Isn't a parent responsible for protecting a minor from such abuses?

My general argument on this thread is that we should raise our children as well as we can. Protect them from dangers that they cannot be conscious of. We certainly can't place burdens on adults, but we can try to raise adults who are not inept addicts with social deficiencies. And because most of the world's population is tech-illiterate, it falls on governments to provide education and assistance to parents, just as they do with health and education.

Most of the counter-arguments here have been straw men, because while I'm mostly targeting children, I've been countered with arguments about adults.


So by that logic, would you say that the only reasons children should not be allowed to buy alcohol are biological development reasons?


> The thing that disturbs me about this argument is that IMHO it's a slippery slope towards "back in my day, we didn't have this new-fangled stuff".

> I just think we all need to be mindful about where our concerns might be coming from.

Basically we're on the same page.

Here is a proposition. I'll steelman the Conservative view and you tell me what you think. I promise not to claim vidya causes violence or D&D is a leading cause of Satanism.

My proposition is that television media has meaningfully worsened our society by making it dumber. This is an artifact of the medium itself, rather than an issue with any specific content on it. To explain what I mean by dumber I must elaborate.

The television is a unidirectional medium. It contains consensus on various intellectual issues of the day and gives a description of the world I'd call received opinion. There exists no meaningful difference between the advertising that tranches people into buying products and the non-advertising that tranches people into buying ideas. Most ideas that are bought are not presented as items to be sold; they are pictured as 'givens', obvious. Most lying is done by omission. Even were all information presented truthfully, we have a faux sense of sophistication about our awareness, which is a problem. When you buy prepackaged meals at a store you are not on the way to becoming a chef, and in the same way you are not chewing over the ideas presented to you; you do no real cognition. Your state is best described as, and feels like, a hypnotic trance.

One of the problems with this is that television creates a false sense of normalcy that has no objective basis. It asks the questions and provides the answers. All debate is rhetorical debate.

It's the cognitive equivalent of 'traffic shaping' that Quality of Service mechanisms do on routers. In a way that is a much bigger lie. This concept is very similar to Moldbug's Cathedral concept. The people who work for the Cathedral don't realize they represent a very narrow range of thought on the spectrum. Their opinions cannot plausibly be of their own manufacture because one arbitrary idea is held in common with another arbitrary idea and they all hold them.

The key to understanding that this is very real and not at all abstract is that millions of people have synchronized opinions on a range of issues without any discernible cause other than the television (or radio). Why do populations of teenagers become anorexic after the introduction of television where they did not suffer before it? Synchronized opinion is always suspicious. It defies probability theory to think my grandmother and millions of others suddenly came to the conclusion, for example, that gay marriage was a positive idea. Why do millions of conservatives think buying gold is a good idea? It is not that there is something wrong with gay marriage or buying gold. It's that there is no genuine thinking going on about any of this. There are many ways to hedge against inflation that don't involve buying gold. Why is gay marriage the morality tale of the age, and not, say, elder abuse in nursing care facilities?

Why do some things become 'issues' and not a myriad of others? How directed this is is up for debate, but what is not is that the selectivity and constraints of the medium have narrowed our perception of the world, and that has led to the thing that made us dumber: it stunted our native creativity and curiosity.

> Yet kids ALWAYS adapt. They don't view smartphones or tablets as anything particularly out of the ordinary. It's just their ordinary. I'm certain their brains will build on top of this foundation. That's the thing - brains are extremely adaptable. All of us adapted.

There does exist a series of schools in Silicon Valley. The software engineers at Google and Facebook and other firms send their children to them, and they strictly contain no computing related devices. Instead it's schooling of the old fashioned sort, from the early 20th century.

It is possible that this is juvenoia as you suggested. But at least take into account that those parents may understand something else about electronic media and its effects on brains. After all, many of them study human attention seriously for a living.

The other thing I want to ask you is have you ever visited in your country what we call council estates in Europe? These are places which contain the poorer class of people in our society. I've been to many of these gray lifeless places and they all have many characteristics in common. Television is a major part of their lives and their shelves are bare of books. It is ubiquitous. In the past the working classes were much more socially and intellectually mobile. They read. They did things. Little evidence remains of that today, but it was so.

It is possible that television is like a slow poison that affects some classes more than others. You can't just say people you know are unaffected and therefore it does not matter, because it is possible you may be part of an advantaged group for which reasons may exist why they could be more immunized than most e.g. having challenging or interesting work to do. It's worth considering that all the problems I mentioned still exist without television in society but you might say the 'dose' determines whether it's medicine or poison. There is certainly a sense among many people that television has progressively gotten worse and watching old news broadcasts and documentaries it is hard not to see what they mean. I appreciate this isn't objective measurement, but comparing like with like, say James Burke's Connections with Neil deGrasse Tyson's Cosmos, the difference is obvious and the Cosmos reboot would be considered very good relative to its current competition.

Evidence for my claims could be a reduction in the number of inventions (excluding paper patents) per capita, reduced library visitations with respect to population changes, increasing numbers of younger people unable to read, evidence of decreased adventurousness or increased passiveness in the population, some metric for diminished curiosity/creativity over time. If those were mainly found wanting then I'll concede my error.

I'd be much more concerned about curiosity/creativity, than reduction in IQ or school test scores because creativity is really the key to much of what is good about human endeavor.

I'd also like to point out that you might not be able to spot the 'brain damage' so easily, since it's hard to come up with objective measures without a good control group. If it happened to most people then it's a new normal but that doesn't mean it had no effect.


Thank you, this was a wonderfully thought-provoking response (also, the first season of Connections is probably my favourite documentary of all time!).

One thing I will offer is that in my household growing up, television was positive because it was an experience that we shared as a family. We would watch TV shows together, talk about them together, laugh at them together, etc. In that sense, television brought outside viewpoints into our household and spurred conversation. I think that is one of the key factors that may differentiate between TV having good effects and TV having bad effects on different people.

In a sense, I think that although television itself isn't interactive, you could say that our family was 'interactive about' television. So we got the benefits of being able to use television in a positive way.

Thanks for reminding me of how important that was for me :)

By the way, on the limitation of television being a passive medium.... This reminds me of something I read back when I was a kid that was very profound for me. I can't recall exactly now, but I think it was in a Sierra On-Line catalogue where Roberta Williams said something about wanting her children to play adventure games rather than watch television as with adventure games, they had to be actively engaged rather than passive. This really resonated with me at the time, given that I was really getting into the Space Quest & other 'Quest games :)


> Thank you, this was a wonderfully thought-provoking response (also, the first season of Connections is probably my favourite documentary of all time!).

Thank you. I hope to meet or communicate with Mr Burke at some point soon, I know Dan Carlin had a podcast with him a little while back if you're interested in his new take on the world. Connections remains the high water mark for documentary making and it is worth reading the books. If you want to watch a documentary in a similar style I suggest The Ascent of Man.

> In a sense, I think that although television itself isn't interactive, you could say that our family was 'interactive about' television. So we got the benefits of being able to use television in a positive way.

I believe you, I am mainly thinking of the average 5 hours per day the average American (or European) spends in front of the television. The dose makes the poison!

> This really resonated with me at the time, given that I was really getting into the Space Quest & other 'Quest games

Yes, it is clear that videogaming can provide for a shared community and culture, most obviously the MMORPGS. This is not something television achieves, or if it does, it is rare, like fans of Mythbusters or Connections. In the present we are concerned with developing the foundations of the Net, like commerce or the law. But ultimately I think a Net culture will be the most valued feature we ascribe to the Net.


> You can't put your kid in a room full of crisps, sweets, alcohol, drugs, pornography, and expect them to come out ten, fifteen years later as a healthy individual who is not addicted to any of them

I know this is bandied about a lot, but is this actually proven? With the exception of drugs, all of those you mention have been within easy reach for me (actually, as a Dutchman, even softdrugs were just one step away if I'd wanted to). Yet I don't consider myself addicted to any of those.


I'm not a native speaker of English, so I wonder: does 'kid' not mean a person who is not yet an adolescent? I'm referring to 0-14 year olds when I say 'kid'. If we agree on that, and you still say it's not proven so we can try, well, then I can't do much other than hope that you either don't have children, or that no child's responsibility otherwise falls on you.


Reading his post I believe he meant the above mentioned things were within reach of him as a child (I don't believe he meant now as an adult).

" I can't do much than hoping you either don't have children or no child's responsibility is on you otherwise."

That's a strong statement to make. Implying he's unable to raise children because he'd like to see evidence that the internet actually has a negative influence on children.


I interpreted his message as meaning he wanted evidence not only about the internet, but also about the other stuff I mentioned and its effects on kids. I'm sorry if that wasn't the case.


No, I did not mean I wanted evidence of their effect on kids. I want evidence that "putting your kid in a room full of $bad_stuff" always leads to addiction, since that strikes me as nothing more than scare stories.

Good parents can raise their children correctly even with $bad_stuff present around them, that was the point I was trying to make.


> Good parents can raise their children correctly even with $bad_stuff present around them, that was the point I was trying to make.

I concur. But kids' exposure to the internet is mostly not governed by parents. They are either alone with the connected device in their rooms, away from their parents, or out of the home with a mobile device. The best the parents can do is to educate the kids, but the public lacks the knowledge to do so effectively. They should be given the training to be able to educate their children, and furthermore schools should educate minors on the use of tech.

"putting your kid in a room full of $bad_stuff" will mostly lead to addiction if the parent is not there to teach the kid: this is harmful to you; not you think?


Mostly agreed, yes. But I would rephrase it as "introducing kids to $bad_stuff without guidance is a bad idea": I don't think that permanent supervision should be required. Once the novelty wears off, and the parent is confident that the kid can behave themselves even in the presence of $bad_stuff, even "putting your kid in a room full of $bad_stuff" can be fine.

And I don't mean that in the sense of "the kids are fine with their heroin syringes", but in the sense "I can leave the cookie jar on the counter and it will still be there when I leave the room".


I think there are records of hospital mix-ups with babies, with pretty profound differences in how they turned out depending on what environment they wound up in, but this may be mostly anecdotal. There was one case like this in Japan, but it illustrated wealth differences as opposed to what we're looking for here.

http://www.telegraph.co.uk/news/worldnews/asia/japan/1048109...

Provocative but not evidence. I did look up some twin studies but I can't find one with a clear vice/virtue environment study. Gwern is good at ferreting out this kind of information if you ask him.


Who is it a problem for? Why is it a problem?


Hi Alan,

In "The Power of the Context" (2004) you wrote:

  ...In programming there is a wide-spread 1st order
  theory that one shouldn’t build one’s own tools,
  languages, and especially operating systems. This is
  true—an incredible amount of time and energy has gone
  down these ratholes. On the 2nd hand, if you can build
  your own tools, languages and operating systems, then
  you absolutely should because the leverage that can be
  obtained (and often the time not wasted in trying to
  fix other people’s not quite right tools) can be
  incredible.
I love this quote because it justifies a DIY attitude of experimentation and reverse engineering, etc., that generally I think we could use more of.

However, more often than not, I find the sentiment paralyzing. There's so much that one could probably learn to build themselves, but as things become more and more complex, one has to be able to make a rational tradeoff between spending the time and energy in the rathole, or not. I can't spend all day rebuilding everything I can simply because I can.

My question is: how does one decide when to DIY, and when to use what's already been built?


This is a tough question. (And always has been in a sense, because every era has had projects where the tool building has sunk the project into a black hole.)

It really helped at Parc to work with real geniuses like Chuck Thacker and Dan Ingalls (and quite a few more). There is a very thin boundary between making the 2nd order work vs getting wiped out by the effort.

Another perspective on this is to think about "not getting caught by dependencies" -- what if there were really good independent module systems -- perhaps aided by hardware -- that allowed both worlds to work together (so one doesn't get buried under "useful patches", etc.)

One of my favorite things to watch at Parc was how well Dan Ingalls was able to bootstrap a new system out of an old one by really using what objects are good for, and especially where the new system was even much better at facilitating the next bootstrap.

I'm not a big Unix fan -- it was too late on the scene for the level of ideas that it had -- but if you take the cultural history it came from, there were several things they tried to do that were admirable -- including really having a tiny kernel and using Unix processes for all systems building (this was a very useful version of "OOP" -- you just couldn't have small objects because of the way processes were implemented). It was quite sad to see how this pretty nice mix and match approach gradually decayed into huge loads and dependencies. Part of this was that the rather good idea of parsing non-command messages in each process -- we used this in the first Smalltalk at Parc -- became much too ad hoc because there was not a strong attempt to intertwine a real language around the message structures (this very same thing happened with http -- just think of what this could have been if anyone had been noticing ...)


> I'm not a big Unix fan

What is your preferred technology stack?


What's a good non-UNIX open-source operating system that's useful for day-to-day work, or at least academically significant enough that it's worth diving in to?


Here's a list of alternatives I put together to see some capabilities or traits UNIX lacked:

https://news.ycombinator.com/item?id=10957020

I think, usable day-to-day, I'd say you're down to Haiku, MorphOS, Genode, MINIX 3, and/or A2 Bluebottle. Haiku is a BeOS clone. MorphOS is one of the last Amiga descendants and looks pretty awesome. Genode OS is a security-oriented, microkernel architecture that's using UNIX for bootstrapping but doesn't inherently need it. MINIX 3 similarly bootstraps on NetBSD but adds microkernels, user-mode drivers, and self-healing functions. A2 Bluebottle is the most fully featured version of the Oberon OS, written in a safe, GC'd language. Runs fast.

The usability of these and third party software available vary considerably. One recommendation I have across the board is to back up your data with a boot disc onto external media. Do that often. Reason being, any project with few developers + few users + bare metal is going to have issues to resolve that long-term projects will have already knocked out.


MINIX isn't bootstrapping on NetBSD; the entire goal of the system is to be a microkernel-based Unix. It uses the NetBSD userland because you don't need to rewrite an entire Unix userland for no reason just to change kernels.


Mental slip on my part. Thanks for the correction. I stand by the example at least for the parts under NetBSD, like the drivers and the reincarnation server. Their style is more like the non-UNIX, microkernel systems of the past. Well, there's some precedent in the HeliOS operating system, but that was still a detour from traditional UNIX.

https://en.wikipedia.org/wiki/Helios_os


SqueakNOS? http://squeaknos.blogspot.com ;-) It has a native TCP/IP stack in Squeak.



The difference is that PharoNOS has a Linux running underneath, while the idea of SqueakNOS is to build a complete operating system via Squeak. That way you can quickly hack on it. There is a great page about these initiatives here: http://wiki.squeak.org/squeak/5727

BTW, prior to SqueakNOS we implemented this: http://swain.webframe.org/squeak/floppy/ (using Linux and modifying Squeak to work with SVGALib instead of X) in just 900mb, inspired by the QNX Demo Disk: http://toastytech.com/guis/qnxdemo.html


I was going to mention QNX Demo Disk in my UNIX alternatives comment. I think I edited it out for a weak fit to the post. It was an amazing demo, though, showing what a clean-slate, alternative, RTOS architecture could do for a desktop experience. The lack of lag in many user-facing operations was by itself a significant experience. Showed that all the freezes and huge slow-downs that were "to be expected" on normal OS's weren't necessary at all. Just bad design.

It's neat that it was the thing that inspired one of your Squeak projects. Is SqueakNOS usable day-to-day in any console desktop or server appliance context? Key stuff reliable yet?


We implemented SqueakOS while some friends implemented SqueakNOS. I don't think they are being used anywhere but for educational purposes it is amazing that drivers and a TCP/IP stack could be implemented (and debugged!) in plain smalltalk. There was some more information here: http://lists.squeakfoundation.org/pipermail/squeaknos/2009-M...


There's GNU, which is by definition not UNIX. ;)


That depends on your measure of worth, I'd say. Many operating systems had little academic significance at the time when it was most academically or commercially fruitful to invest in them. Microkernel and dependency-specific operating systems would be interesting. Or capability-based operating systems implemented in hardware.


Could someone give hints/pointers that help me understand the following? "parsing non-command messages in each process ... not a strong attempt to intertwine a real language around the message structures (this very same thing happened with http"

Does that mean the messages should have been part of a coherent protocol or spec? That there should have been some thought behind how messages compose into new messages?


Smalltalk was an early attempt at non-command-messages to objects with the realization that you get a "programming language" if you take some care with the conventions used for composing the messages.


By non-command-messages do you mean that the receiver was free to ignore the message?


Yes


Akin to "signals" / "event emitters"?


If you think about the "whole system", even if it's just a Shannon channel, what do you actually need?
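For readers trying to picture the "non-command message" idea, here is a minimal sketch, purely hypothetical and in Python rather than Smalltalk: a message is structured data the receiver interprets, and the receiver decides for itself whether to act on it, answer it, or ignore it.

  # Hypothetical illustration (not Smalltalk): messages are data, not commands,
  # and the receiver may simply ignore ones it doesn't care to handle.

  class Message:
      def __init__(self, selector, **args):
          self.selector = selector   # what is being talked about
          self.args = args           # structured payload, not a command string

  class Account:
      def __init__(self):
          self.balance = 0

      def receive(self, msg):
          handler = getattr(self, "handle_" + msg.selector, None)
          if handler is None:
              return None            # the receiver is free to ignore the message
          return handler(**msg.args)

      def handle_deposit(self, amount):
          self.balance += amount
          return self.balance

      def handle_report(self):
          return {"balance": self.balance}

  acct = Account()
  acct.receive(Message("deposit", amount=10))   # handled
  acct.receive(Message("shutdown"))             # quietly ignored
  print(acct.receive(Message("report")))        # {'balance': 10}

Taking "some care with the conventions used for composing the messages", as Kay puts it above, is what turns a pile of such message formats into something like a language.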


Well, in this talk it sounds like you do advocate tool building - isn't tool building a way to try elliptic orbits instead of the circular ones?

https://www.youtube.com/watch?v=NdSD07U5uBs 'Power of simplicity'


Yes, I do advocate tool building -- basically "if you can do it without getting buried, you should".


I tend to do both in parallel and the first one done wins.

That is, if I have a problem that requires a library or program, and I don't know of one, I semi-simultaneously try to find a library/program that exists out there (scanning forums, googling around, reading stack overflow, searching github, going to language repositories for the languages I care about, etc) and also in parallel try to formulate in my mind what the ideal solution would look like for my particular problem.

As time goes by, I get closer to finding a good enough library/program and closer to being able to picture what a solution would look like if I wrote it.

At some point I either find what I need (it's good enough or it's perfect) or I get to the point where I understand enough about the solution I'm envisioning that I write it up myself.


Yes. If it takes me longer to figure out how to use your library or framework than to just implement the functionality myself, there is no point in using the library.

Some people claim you should still use the 3rd party solution because of the cost of supporting the extra code you have written. But bugs can exist in both my code and the 3rd party code and I understand how to fix bugs in my code much more easily.


Other points of consideration: My coworkers might not already know some library, but they definitely won't know my library. My coworker's code is just about as "3rd party" as any library - as is code I wrote as little as 6 months ago. Also my job owns that code, so rolling my own means I need to write another clone every time I switch employers - assuming there are no patents or overly litigious lawyers to worry about.

But you're of course correct that there is, eventually, a point where it no longer makes sense to use the library.

> Some people claim you should still use the 3rd party solution because of the cost of supporting the extra code you have written. But bugs can exist in both my code and the 3rd party code and I understand how to fix bugs in my code much more easily.

The problem is I got so tired of fixing bugs in coworker / former coworker code that I eventually replaced their stuff with off the shelf libraries, just so the bugs would go away. And in practice, they did go away. And it caught several usage bugs because the library had better sanity checks. And to this day, those former coworkers would use the same justifications, in total earnestness.

I've never said "gee, I wish we used some custom bespoke implementation for this". I'll wish a good implementation had been made commonly available as a reusable library, perhaps. But bespoke just means fewer eyes and fewer bugfixes.


It's all trade-offs.

If there happens to be a well-tested third party library that does what you want, doesn't increase your attack surface more than necessary, is supported by the community, is easy to get up and running with, and has a compatible license with what you are using it in, then by all means go for it.

For me and my work, I tend to find that something from the above list is lacking enough that it makes more sense to write it in-house. Not always, and not as a rule, but it works out that way quite a bit.

I would also argue that if coworkers couldn't write a library without a prohibitive number of bugs, then they won't be able to write application or glue code either. So maybe your issue wasn't in-house vs third party libraries, but the quality control and/or developer aptitude around you.


You're not wrong. The fundamental issue wasn't in-house vs third party libraries.

The developers around me tend to be inept at time estimation. They completely lack that aptitude. To be fair, so do I. I slap a 5x multiplier onto my worst case estimates for feature work... and I'm proud to end up with a good average estimate, because I'm still doing better than many of my coworkers at that point. Thank goodness we're employed for our programming skills, not our time estimation ones, or we'd all be unemployable.

They think "this will only take a day". If I'm lucky, they're wrong, and they'll spend a week on it. If I'm unlucky, they're right, and they'll spend a day on it - unlucky because that comes with at least a week's worth of technical debt, bugs, and other QC issues to fix at some point. In a high time pressure environment - too many things to do, too little time to do it all in even when you're optimistic - and it's understandable that the latter is frequently chosen. It may even be the right choice in the short term. But this only reinforces poor time estimation skills.

The end result? They vastly underestimate the cost of supporting the extra code they'll write. They make the "right" choice based on their understanding of the tradeoffs, and roll their own library instead of using a 3rd party solution. But as we've just established, their understanding was vastly off base. Something must give as a result, no matter how good a programmer they are otherwise: schedule, or quality. Or both.


If you don't have the time or energy for such projects then you CAN'T do them. The answer is there.


Isn't the answer contained in the quote? Do a cost/benefit analysis of the "amount of time and energy" that would go "down these ratholes" versus the "the time not wasted in trying to fix other people’s not quite right tools."


The real reason to do the 2nd order is to get new things rather than incrementing on older, poorer ideas.


But how can you assess this until you have gone down those rat holes?


The Lean Startup advocates proportional investment in solutions. WHEN the problem comes up (again, after deciding to do this), determine what percentage of your week or month it took. Invest that amount to fix it, right now. My interpretation would be: spend that time trying to solve part of it. Every time that problem comes up, keep investing in that thing; that way, if you've made the wrong call you only waste a small portion of your time, but you are also taking steps to mitigate it if it becomes more of an issue in the future.
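A rough worked example of that heuristic, with invented numbers: if a recurring problem ate 3 hours of a 40-hour week (7.5%), you'd spend about 3 hours right away chipping at a fix, and repeat the matching investment each time it recurs.

  # Toy sketch of "proportional investment" (all numbers are made up).
  hours_in_week = 40.0
  hours_lost_each_week = [3.0, 1.0, 4.0]        # the same problem, recurring

  invested = 0.0
  for lost in hours_lost_each_week:
      share = lost / hours_in_week              # e.g. 3 / 40 = 7.5% of the week
      invested += share * hours_in_week         # invest what it just cost you
      print(f"lost {lost:.0f}h ({share:.1%}); invested so far: {invested:.0f}h")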


Having gone down several myself, I can say it's hard. You lose time. You have to accept you've lost time and learn how not to do it in the future.

My advice is to collaborate with people who are much, much smarter than you and have the expectation that things actually get done because they know they could do it. You learn what productivity looks like first, at the most difficult and complex level you're capable of.

That sets the bar.

Everything has to be equal to or beneath that unless your experience tells you you'll be able to do something even greater (possibly) with the right help or inspiration


You gain experience by going down similar rat holes, until you feel that you can adequately compare the situation you are in now to an experience in the past.

You'll still be wrong, but perhaps less often.


For many particular examples, there have already been enough rathole spelunkers to provide useful data. Maybe start looking in the places where there isn't already useful data?


Any area in which enough such spelunkers are found is unlikely to be significantly improved by adding your own effort.


Agreed. It's often much, much harder to articulate why an idea is bad or a rat hole. You just move on.

I've come up with an explanation by analogy. You can demonstrate quite easily in mathematics how you can create a system of notation or a function that quickly becomes impossible to compute. A number that is too large, or an algorithm that would take an infinite amount of time and resources to solve...

It seems to be in nature that bad ideas are easy. Good ideas are harder, because they tend to be refinements of what already exists and what is already good.

So pursue good ideas. Pursue the thing that you have thought about and decided has the best balance between values and highest chance to succeed. Sometimes it's just a strong gut feeling. Go for it, but set limits, because you don't want to fall prey to a gut feeling originating from strong intuition but an equally strong lack of fundamental understanding.


I think you have to weigh your qualms against the difficulty of implementation. They're both spectra, one from 'completely unusable' to 'perfect in its sublime beauty', the other from 'there's a complete solution for this' to 'I need to learn VHDL for this'.

There's some factors that help shift these spectra.

Configurability helps. If I can change a config to get the behavior I want, that is incredible, thank you.

Open source helps. Getting to see how they did it reduces reverse engineering work immensely if I ever have to dig in.

Modularity helps. If I can just plop in my module instead of doing brain surgery on other modules, that makes it a lot easier.

Good components help. Say I need a webscraper and know python. Imagine there was only selenium and not even urllib, but some low level TCP/IP library. I get a choice between heavy but easy or slim but high maintenance. But there's the sexy requests library, and there is the beautiful beautifulsoup4. I tell requests what to get, tell bs4 what I want from it, and I'm done.
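Concretely, that pairing looks roughly like the sketch below; the URL and the tags being pulled out are placeholder assumptions, not anything specific.

  # Minimal sketch: tell requests what to get, tell bs4 what you want from it.
  # The URL and the <h2> choice are hypothetical placeholders.
  import requests
  from bs4 import BeautifulSoup

  resp = requests.get("https://example.com/articles")
  resp.raise_for_status()                        # fail loudly on HTTP errors

  soup = BeautifulSoup(resp.text, "html.parser")
  titles = [h2.get_text(strip=True) for h2 in soup.find_all("h2")]

  for title in titles:
      print(title)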

Another great example for this is emacs. python-mode + elpy (almost complete solution), hide-show mode, electric-pair mode, and if anything still bugs me, it is fixable. If it were OOP, I'd inherit a lot of powerful functions, but I can always override anything that is wrong.

Expertise helps. If I have written a kernel module, that's another avenue to solving problems I have.

Expertise is a special case here worth more attention. It's the main thing that changes for any single programmer, and can skew this equation immensely. Expertise grows when you struggle with new things. Preferably just outside what you know and are comfortable with.

Considering that, DIY whenever you can afford to DIY (eg. pay the upfront cost of acquiring expertise), DIY whenever it is just outside what you can do, or DIY when it makes a lot of sense (eg. squarely in your domain of expertise, and there's a benefit to be had).

In concrete examples, that means don't DIY when you're on a tight deadline, don't attempt to write your own kernel after learning about variables, don't write your own parser generator when say, YACC, solves your problem just fine.


Specifically with regards to languages and OS's, I wonder how much that cost/benefit equation shifts as things have become so much more complex, and as we continue to pile on abstraction layer after abstraction layer.


I think the problem is not complexity but size. Most of the source for the Linux kernel is in the drivers, for instance. As for languages, most of the weight is in the libraries.


Hi Alan,

I have three questions -

1. If you were to design a new programming paradigm today using what we have learnt about OOP what would it be?

2. With VR and AR (Hololens) becoming a reality (heh) how do you see user interfaces changing to work better with these systems? What new things need to be invented or rethought?

3. I also worked at Xerox for a number of years although not at PARC. I was always frustrated by their attitude to new ideas and lack of interest in new technologies until everyone else was doing it. Obviously businesses change over time and it has been a long time since Xerox were a technology leader. If you could pick your best and worst memories from Xerox what would they be?

Cheers for your time and all your amazing work over the years :)


Let me both acknowledge your questions, and also acknowledge that this forum (the media authoring tools) are not in scale with the needed answers ...


Perhaps a reddit AMA would be better? They have much more flexible/powerful comment system.

Edit: Not sure why I am getting down voted for making a suggestion. Oh well.


I like the vibes here


Or maybe a Quora session.


A lot more good activity here than on Quora ...


Quora has some onerous policies, unfortunately: https://twitter.com/waxpancake/status/453958676529696769

HN is an excellent venue, but is necessarily text oriented, which is an OK tradeoff I think.

My next project after Stack Overflow, Discourse, is a 100% open source, flexible, multimedia-friendly discussion system. It's GPL V2 on the code side, but we also tried to codify Creative Commons as the default license in every install, so discussion replies belong to the greater community: https://discourse.org

(Surprisingly, the default content licenses for most discussion software tend to be rather restrictive.)


Could you afterwards build a discussion platform to find (partial) agreement on various political and other topics? That seems like it would have huge impact and is really missing... I've thought about starting something like that but never got to it.


Still, there seems to be only a sandbox install. Why can't we have a Discourse just like Stack Overflow, but with technical discussions allowed instead of attacked by both the mods and the rules?


I'd be curious if he's planning on returning to Croquet/OpenCobalt with the VR revolution.


Come to think of it, AltSpaceVR on the HTC Vive looks a lot like Croquet.

I think Google Glass should've been held back until VR/Augmented Reality gets established. Many Croquet style roving "viewports" projected from Google Glass feeds in an abstracted 3D model of a real world location would be a great way to do reporting on events.


1. After Engelbart's group disbanded it seemed like he ended up in the wilderness for a long time, and focused his attention on management. I'll project onto him and would guess that he felt more constrained by his social or economic context than he was by technology, that he envisioned possibilities that were unattainable for reasons that weren't technical. I'm curious if you do or have felt the same way, and if have any intuitions about how to approach those problems.

2. What are your opinions on Worse Is Better (https://www.dreamsongs.com/RiseOfWorseIsBetter.html)? It seems to me like you pursue the diamond-like jewel, but maybe that's not how you see it. (Just noticed you answered this: https://news.ycombinator.com/item?id=11940276)

3. I've found the Situated Learning perspective interesting (https://en.wikipedia.org/wiki/Situated_learning). At least I think about it when I feel grumpy about all the young kids and Node.js, and I genuinely like that they are excited about what they are doing, but it seems like they are on a mission to rediscover EVERYTHING, one technology and one long discussion at a time. But they are a community of learning, and maybe everyone (or every community) does have to do that if they are to apply creativity and take ownership over the next step. Is there a better way?


It used to be the case that people were admonished to "not re-invent the wheel". We now live in an age that spends a lot of time "reinventing the flat tire!"

The flat tires come from the reinventors often not being in the same league as the original inventors. This is a symptom of a "pop culture" where identity and participation are much more important than progress...


This is incredibly hard hitting and I'm glad I read it, but I'm also afraid it would "trigger" quite a few people today.

What steps can a person take to get out of pop culture and try to get into the same league as the inventors? Incredibly stupid question to have to ask but I feel really lost sometimes.


I think it is first a recognition problem -- in the US we are now embedded in a pop culture that has progressed far enough to seriously hurt places that hold "developed cultures". This pervasiveness makes it hard to see anything else, and certainly makes it difficult for those who care what others think to put much value on anything but pop culture norms.

The second, is to realize that the biggest problems are imbalance. Developed arts have always needed pop arts for raw "id" and blind pushes of rebellion. This is a good ingredient -- like salt -- but you can't make a cake just from salt.

I got a lot of insight about this from reading McLuhan for very different reasons -- those of media and how they form an environment -- and from delving into Anthropology in the 60s (before it got really politicized). Nowadays, books by "Behavioral Economists" like Kahneman, Thaler, Ariely, etc. can be very helpful, because they are studying what people actually do in their environments.

Another way to look at it is that finding ways to get "authentically educated" will turn local into global, tribal into species, dogma into multiple perspectives, and improvisation into crafting, etc. Each of the starting places stays useful, but they are no longer dominant.


What steps would a group of people (civilization?) need to take in order to make progress here? When choices are abundant, the masses have been enabled, and yet knowledge is still at a premium?


All cultures have a lot of knowledge -- the bigger influences are contextual and epistemological (i.e. "points of view" and "stance", and "what is valued", etc.)

Self-awareness of what we are ("from Mars") is the essential step, and it's what real education needs to be about.


What does "from Mars" mean here?


It means "outside our human prejudices about ourselves". As though we actually were a valid object of real science....


Hi Alan,

1. What do you think about the hardware we are using as the foundation of computing today? I remember you mentioning how cool the architecture of the Burroughs B5000 [1] was, designed to run higher-level programming languages on the metal. What should hardware vendors do to make hardware that is friendlier to higher-level programming? Would that help us depend less on VMs while still enjoying silicon-level performance?

2. What software technologies do you feel we're missing?

[1] https://en.wikipedia.org/wiki/Burroughs_large_systems


If you start with "desirable process" you can eventually work your way back to the power plug in the wall. If you start with something already plugged in, you might miss a lot of truly desirable processes.

Part of working your way back to reality can often require new hardware to be made or -- in the case of the days of microcode -- to shape the hardware.

There are lots of things vendors could do. For example: Intel could make its first level caches large enough to make real HLL emulators (and they could look at what else would help). Right now a plug-in or available FPGA could be of great use in many areas. From another direction, one could think of much better ways to organize memory architectures, especially for multi-core chips where they are quite starved.

And so on. We've gone very far down the road of "not very good" matchups, and of vendors getting programmers to make their CPUs useful rather than the exact opposite approach. This is too large a subject for today's AMA.


Thanks for the attention Alan! I love the reverse-engineering-driven-by-desire approach :D

We need to find ways to free ourselves from the cage of "vendors getting programmers to make their CPUs useful rather than the exact opposite approach" <- meditate on this we all should


> Intel could make its first level caches large enough to make real HLL emulators

If you make the L1 cache larger, it will become slower and will be renamed "L2 cache". There are physical reasons why the L1 cache is not larger, even though programs written in non-high-level languages would profit from larger caches (maybe even more so than HLL programs).

> Right now a plug-in or available FPGA could be of great use in many areas.

FPGAs are very, very HLL-unfriendly, despite lots of effort from industry and academia.


Have you looked into the various Haskell/OCaml-to-hardware translators people have been coming up with over the past few years?

It seems like the field has been growing, and several FPGAs are near that plug-and-play status. In particular, the notion of developing a compile-time-proved RTS using continuation passing would be sweet.

Even with newer hardware it seems we're still stuck in either dynamic mutable languages or functional static ones. Any thoughts on how we could design systems incorporating the best of both using modern hardware capacities? Like... say, a reconfigurable hierarchical element system where each node is an object/actor? Going out on a bit of a limb with that last one!


Without commenting on Haskell, et al., I think it's important to start with "good models of processes" and let these interact with the best we can do with regard to languages and hardware in the light of these good models.

I don't think the "stuckness" in languages is other than like other kinds of human "stuckness" that come from being so close that it's hard to think of any other kinds of things.


Thanks! That helps reaffirm my thinking that "good models of processes" are important, even though implementations will always have limitations. Good to know I'm not completely off base...

A good example for me has been the virtual memory pattern, where from a process's point of view you model memory as an ideal, unlimited virtual space. Then you let the kernel implementation (and hardware) deal with the practical (and difficult) details. Microsoft's Orleans implementation of the actor model has a similar approach that they call "virtual actors", which is interesting as well.
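
To make the analogy concrete, here is a minimal Python sketch of the "virtual actor" idea (hypothetical names, not Orleans' actual API): actors are addressed by id and only activated when first messaged, so callers program against an apparently unlimited actor space.

    # Hypothetical sketch (not the Orleans API): actors are addressed by id and
    # activated lazily on first message, so callers see an "unlimited" actor space.
    class Counter:
        def __init__(self):
            self.count = 0

        def receive(self, message):
            if message == "increment":
                self.count += 1
            return self.count

    class VirtualActorRuntime:
        def __init__(self, actor_class):
            self.actor_class = actor_class
            self.live = {}   # only actors that have actually received messages exist

        def send(self, actor_id, message):
            if actor_id not in self.live:              # activate on first message
                self.live[actor_id] = self.actor_class()
            return self.live[actor_id].receive(message)

    runtime = VirtualActorRuntime(Counter)
    runtime.send("sensor-42", "increment")         # activated on first use
    print(runtime.send("sensor-42", "increment"))  # -> 2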

My own stuckness has been an idea of implementing processes using hierarchical state machines, especially for programming systems of IoT type devices. But I haven't been able to figure out how to incorporate type check theorems into it.
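
For example, the kind of hierarchical state machine I mean can be sketched in a few lines (hypothetical states for an imagined IoT device, and no type checking yet): events a state doesn't handle bubble up to its parent state.

    # Hypothetical sketch of a hierarchical state machine: events unhandled by a
    # state bubble up to its parent, which is what makes the hierarchy useful.
    class State:
        def __init__(self, name, parent=None, handlers=None):
            self.name = name
            self.parent = parent
            self.handlers = handlers or {}   # event -> name of the next state

        def handle(self, event):
            if event in self.handlers:
                return self.handlers[event]
            if self.parent is not None:
                return self.parent.handle(event)   # bubble up to the parent state
            return self.name                       # ignore unknown events

    device    = State("device", handlers={"power_off": "off"})
    idle      = State("idle", parent=device, handlers={"measure": "measuring"})
    measuring = State("measuring", parent=device, handlers={"done": "idle"})

    print(idle.handle("measure"))         # -> "measuring"
    print(measuring.handle("power_off"))  # -> "off" (inherited from the parent)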


At my office a lot of the non-programmers (marketers, finance people, customer support, etc) write a fair bit of SQL. I've often wondered what it is about SQL that allows them to get over their fear of programming, since they would never drop into ruby or a "real" programming language. Things I've considered:

    * Graphical programming environment (they run the queries from pgadmin, or Postico, or some app like that)
    * Instant feedback - run the query, get useful results
    * Compilation step with some type safety - will complain if their query is malformed
    * Are tables a "natural" way to think about data for humans?
    * Job relevance
Any ideas? Can we learn from that example to make real programming environments that are more "cross functional" in that more people in a company are willing to use them?


SQL is declarative. Compare:

    results = []
    for user in table_users:
        if user.is_active:
            results.append(user.first_name)
vs:

    SELECT first_name FROM users_table
    WHERE is_active
It's unfortunate that the order of the clauses in SQL is "wrong" (e.g. you should say FROM, WHERE, SELECT: Define the universe of relevant data, filter it down, select what you care about), but it's still quite easy to wrap your mind around. You are asking the computer for something, and if you ask nicely, it tells you what you want to know. Compare that to procedural programming, where you are telling the computer what to do, and even if it does what you say, that may not have been what you actually wanted after all.


> It's unfortunate that the order of the clauses in SQL is "wrong"

SQL is written in a goal-oriented way.

You start with what you want (the goal). Then you specify from where (which can also be read as "what", since each table generally describes a thing) and finally you constrain it to the specific instances you care about.

SELECT the information I want FROM the thing that I care about WHERE condition constrains results to the few I want

Having said that, I would personally still prefer it in reverse like you say. I can see the value of how SQL does it, though, especially for non-programmers who think less about the process of getting the results and more about the results they want (because they haven't been trained to think of the process, like programmers have).

It makes sense for someone who isn't thaaaaat technical to start with "well, I want the name and salary of the employee but only those that are managers": SELECT name, salary FROM employee WHERE position = 'manager'

Admittedly even that isn't perfect and I assume that it wouldn't take much for someone to learn the reverse.


> Compare that to procedural programming, where you are telling the computer what to do, and even if it does what you say, that may not have been what you actually wanted after all.

Procedural vs. functional phrasing in no way changes the basic fact that if you ask a computer the wrong question it'll give you the wrong result.

"go through the list of all users and add the ones which are active to a new list"

vs.

"the list I want contains all active users from the list of all users"


In imperative programming you don't ask questions but give instructions.

As long as the instructions are longer than the question (and they often are, even in your example ;)), you are bound to make more errors here.

Plus, it requires some understanding of how this damn machine works in the first place.

When turning questions into instructions is decidable, it pays off to automate it.


Along these lines, C# and VB.NET have SQL-like query expressions, called LINQ [1], that can be used for general data processing. They even get the order of the clauses correct!

A feature like this may help your programmers who are used to thinking in terms of filter -> select -> order.

[1] https://msdn.microsoft.com/en-us/library/bb397927.aspx


Yes! Absolutely what I was thinking of when I wrote this :) Getting that right is one of my favorite parts of LINQ.


Ecto (Elixir) does the from-in-where syntax as well.


It may be that it's easier for people to define the desired result set, then tweak the query until it gives them what they want.


To play devil's advocate: Prolog is considered much more similar to SQL than any other language, and I suspect it would have an extremely high learning cost. That may be me being biased from having learned procedural languages first; at the same time, I consider myself well versed in SQL.


I think Prolog suffers in that comparison mostly because of its much more ambitious scope. Most non-developer/DBA people have no concept of what a SQL query is actually doing, whereas most nontrivial Prolog programs require conceptualizing the depth-first-search you're asking the language to perform in order to get it right. If you restricted your Prolog world to the kind of "do some inference on a simple family tree database of facts" that people first learn, Prolog would be pretty easy too.


I fail to see a meaningful difference between these two approaches, especially if we transform the first one into a list comprehension:

    [user.first_name for user in table_users if user.is_active]


But a list comprehension is a declarative construct, which can be best appreciated when porting some list comprehensions into loops. Especially nested comprehensions.


Totally meaningful difference! With the list comprehension, you're still telling the machine how to go about getting the data; there is an explicit loop construct. With SQL, I'm simply declaring what results I want, and the implementation is left to the execution engine.

For instance, the SQL query can be parallelized, but not so with the Python list comprehension. If you wanted to create a version that could be run in parallel in Python, you'd have to do it with a map()/filter() construct. Ignoring readability for a sec (pretend it's nice and elegant, like it would be in e.g. Clojure), you are still specifying how the machine should accomplish the goal, not the goal itself.

    filter(lambda x: x is not None, map(lambda u: u.first_name if u.is_active else None, table_users))
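
Though to be fair, that map()-based shape can be parallelized fairly directly -- here's a rough, self-contained sketch using a process pool (with a named function instead of a lambda, since lambdas can't be pickled; the sample data is made up):

    # Rough sketch: the same map()/filter() shape, parallelized with a process pool.
    from collections import namedtuple
    from multiprocessing import Pool

    User = namedtuple("User", ["first_name", "is_active"])
    table_users = [User("Ada", True), User("Bob", False), User("Cleo", True)]

    def active_first_name(user):
        return user.first_name if user.is_active else None

    if __name__ == "__main__":
        with Pool() as pool:
            names = [n for n in pool.map(active_first_name, table_users)
                     if n is not None]
        print(names)  # -> ['Ada', 'Cleo']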


I teach SQL to my journalism students. Example exercise:

http://2015.padjo.org/tutorials/sql-walks/exploring-wsj-medi...

My main reason for teaching it was that it was a skill that helped me immensely as a journalist, in terms of being able to do data analysis. Because I learned it relatively late in my career, I thought it'd be hard for the students but most of them are able to get it.

Even though I use relatively little SQL in my day to day work, it's my favorite thing to teach to novices. First, it has a similar data model to spreadsheets, so it feels like a natural progression. Secondly, for many students, this is the first time that they'll have done "real" programming and the first time that they learn how to tell a computer to do something rather than learn how to use a computer. In Excel, for example, you double click a file and the entire thing opens. With SQL, you're required to not just specify the database and table, but also each and every column...it's annoying at first, but then you realize that there is power in being explicit.

The main advantage of teaching SQL over, say R, as a first language is that SQL's declarative syntax is easy to follow AND you can do most of what you need with a limited subset of the language...for instance, I don't have to teach variables and loops and functions...which is good because I don't even know how to really do those in SQL (just haven't had the need when I can work from R or Pandas).

When a beginner student fucks up a basic Python script, there are any number of reasons for the failure that are beyond the student's expected knowledge. When a novice student fucks up a SQL query... it's easier to blame the mistake on the student (e.g. misspelling of names/syntax).


What are the main factors which encourage (and help) non-programmers to use SQL?

We provide a low-code platform (SQL) to organize data and build custom applications for specific workflow requirements. We are assuming that teaching/educating/training, combined with lots of sample SQL code and real-world examples, is helpful to non-programmers using SQL.

Sample SQL is available at https://mydataorganizer.com/MyDataOrganizer/QuarterDatesCalc...

Thanks, Neal


My guess would be that there is a lot of interesting public data available in SQL/CSV/Excel formats. If a journalist can browse that data efficiently they can probably find some interesting stories and leads.


Just a thought: Is it mostly select statements that your colleagues write? Because if they do, they might not fear accidentally altering the data. I found that new programmers can get confused by the difference between things that are immutable and those that aren't.


>>I've often wondered what it is about SQL that allows them to get over their fear of programming

That's barely programming. Even by the most lenient definition, what they do isn't programming.

Firstly, SQL queries are a little like Excel macros: they lower the barrier to entry for basic twiddling. Got a SQL client (Toad, etc.)? You can throw together a snippet or two quickly. Anything beyond that gets difficult: tricky joins, subqueries, troubleshooting big queries, optimization problems, and so on. Beyond that, writing reusable code, test discipline, and a range of other tasks that keep code running for years are what make up your everyday work as a programmer.

Sure, you could saw a log of wood once in a while, but don't confuse that with being a full-time carpenter.


Why not ask them? It's an interesting question, and I've noticed similar things with business analysts I've worked with in the past.


As someone pretty close to this camp, it comes down to your last bullet point - needing to do it, in my opinion. A smaller subset of those people will also learn VBA for the same reason - it helps them get their job done. The benefit those two have is that they are either built into the tools already (VBA), or a DBA does most of the setup and the user mostly just runs queries against it and doesn't have to worry too much about indexing, performance, schemas, etc. (SQL). If I were to try to turn them onto Python, it'd be an effort to get it installed and then get them to use the command line.


With SQL, you get a complete solution to your problem immediately (the data you want is returned). So, high value return on effort motivates people to learn it.


What do you think of Bret Victor's work? (http://worrydream.com/) Or Rich Hickey?

Who do you think are the people doing the most interesting work in user interface design today?


I love Bret Victor's work!

He is certainly one of the most interesting and best thinkers of today.


Aren't Alan Kay and Bret Victor working together at SAP currently?


Technically, at HARC - https://blog.ycombinator.com/harc


They collaborate together at YCR / HARC!


YCR is not "my group" -- I'm very happy to have helped set up HARC! with its very impressive group of Principal Investigators (including Bret).


Hi Alan,

Previously you've mentioned the "Oxbridge approach" to reading, whereby--if my recollection is correct--you take four topics and delve into them as much as possible. Could you elaborate on this approach (I've searched the internet, couldn't find anything)? And do you think this structured approach has more benefits than, say, a non-structured approach of reading whatever of interest?

Thanks for your time and generosity, Alan!


There are more than 23,000,000 books in the Library of Congress, and a good reader might be able to read 23,000 books in a lifetime (I know just a few people who have read more). So we are contemplating a lifetime of reading in which we might touch 1/10th of 1% of the extant books. We would hope that most of the ones we aren't able to touch are not useful or good or etc.

So I think we have to put something more than randomness and following links to use here. (You can spend a lot of time learning about a big system like Linux without hitting many of the most important ideas in computing -- so we have to heed the "Art is long and Life is short" idea.)

Part of the "Oxbridge" process is to have a "reader" (a person who helps you choose what to look at), and these people are worth their weight in gold ...


The late Carl Sagan had a great sequence in the original Cosmos where he made a similar point about how many books one could read in a lifetime:

  If I finish a book a week, I will read only a few thousand
  books in my lifetime, about a tenth of a percent of the 
  contents of the greatest libraries of our time. The trick 
  is to know which books to read.


General question about this figure, which I've seen before:

> read 23,000 books in a lifetime

As a very conservative lower bound, a person who lives to the age of 80 would have to read 0.79 books per day, from the day they were born, to reach this figure.

Or, to put it another way, who has read 288+ books in the last year?

I'm quite sceptical about this figure. Any thoughts as to how this might be possible? Are the people Alan mentions speed-reading? Anyone else know similarly prolific readers?


Yes, it is possible. It is partly developing a kind of fluency that is very similar to sight-reading music (this is a nice one to think about because you really have to grok what is there to do it, and you have to do it in real time at "prima vista").

Doing a lot of it is one of the keys! Doing it in a way that various short and long-term memories are involved is another key (rapid reading with comprehension of both text and music is partly a kind of memorization and buffering, etc.)

I don't think I've read 23,000 books in 76 years, but very likely somewhere between 16,000 and 20,000 (I haven't been counting). Bertrand Russell easily read 23,000 books in his lifetime, etc.


I was late to this and didn't expect a reply, so thanks for taking the time to come back and respond!

I agree with the practice, as for some periods I've noticed an increase in speed when I've been consistently reading every day.

Regarding the second point - short and long-term memory - do you have a link or other suggestion for where to learn more, please?


There was quite a bit of discussion about this on the HN gig about my long ago "reading list"



As someone who has read at least one book per day, if not more, since the age of 6: yes, it is possible. I can read between 100 and 200 pages per hour, depending on the book.

You reach a storage and money problem fast (ebooks are a savior nowadays). And you tend to have multiple books open at the same time.

How does it work? There are several strategies. First, I read fast; experience and training make you read really fast. Secondly, you get a grasp of how things work and what the writer has to say. In a fiction book, it is not unusual for me to skip a chapter or two because I know what will happen inside.

Finally... good writers help. Good writers make reading a breeze and are faster to read. They present ideas in a concise and efficient way that follows the flow of thinking.

I will gladly take more questions if you have some :)


> In a fiction book, it is not unusual for me to not read a chapter or two because i know what will happen inside.

This is ridiculous. It doesn't count as reading if you skip whole chapters.


Hell, I 'read' whole books by just reading the back cover! This way, I get through hundreds of books every time I visit the library!


Well, to be honest, if it is badly written and contains nothing of interest...


What motivates you? Do you ever apply the knowledge you've gotten this way (do you even care)?


Multiple things. First, it is something I like.

Secondly, it is the only way I can absorb information in a way that works. Talks, videos, podcasts, etc. are too slow for me; they lack a good throughput of information and meaning, which means I tend to either drop out or complete in my head what the speaker is saying.

About applying knowledge: yes, every day, in my life. Once you hit a good amount of knowledge and have a nice way to filter it, think about it, and deal with it, things become nice. Understanding a problem comes faster. You can draw links between different situations or bring ideas from other fields into yours.

Knowledge is rarely lost.


Thanks. What type of training did you do?

I'm not keen on skipping chapters! Do you do the same with non-fiction?

Another question - how do you keep track of what you've read? (would be happy to hear from others esp. Alan on the same topic)


As for training... I read. That is all. I began when I was 5 and never stopped, so I have nearly always been like that. The more you read, the more you train your brain to read -- and your mind to understand how to deal with knowledge and information: filter it, classify it, absorb it, apply it.

For non-fiction, yes, it happens. Lots of books just repeat the same thing over and over again. When you begin a chapter and can complete what will be said in the next 20 pages just from your understanding of the whole situation, reading it is a waste of time -- and it would make me bored and pull me out of "The Zone".

I keep track in my brain. I have the advantage of always being able to remember whether I have read something just by looking at the back cover and the first lines. I have yet to forget a book I have read. I cannot remember all the technicalities, of course, but enough to know whether I have read it before or not.

I reread the books I really like or need when needed anyway, mainly during vacations.


It depends on your definition of "reading a book."

Wait, what?

I've been reading a book called, I kid you not, "How to Read a Book: The Classic Guide to Intelligent Reading."

Adler and Van Doren identify four levels of reading:

1. Elementary: "What does the sentence say?" This is where speed can be gained

2. Inspectional: "What is the book about?" Best and most complete reading given a limited time. Not necessarily reading a book from front to back. Essentially systematic skimming.

3. Analytical: Best and most complete reading given unlimited time. For the sake of understanding.

4. Synoptical: Reading many books of the same subject at once, placing them in relation to one another, and constructing an analysis that may not be found in any of the books.

Amazon link for those interested: https://www.amazon.com/How-Read-Book-Intelligent-Touchstone/...


Recent research (along with past research) has cast doubt on the plausibility of extreme speed reading [1].

I don't mean to contradict Alan; no doubt he's a fast reader. But if you're actually reading an entire book every day or two, you're spending a lot of every day reading.

[1] http://psi.sagepub.com/content/17/1/4


Was it in The Future of Reading [1] perhaps? From page 6:

In a very different approach, most music and sports learning only has contact with a one on one expert once or twice a week, lots of individual practice, group experiences where “playing” is done, and many years of effort. This works because most learners really have difficulty absorbing hours of expert instruction every week that may or may not fit their capacities, styles, or rhythms. They are generally much better off spending a few hours every day learning on their own and seeing the expert for assessment and advice and play a few times a week.

A few universities use a process like this for academics—sometimes called the “tutorial system”, they include Oxford and Cambridge Universities in the UK.

[1] http://www.vpri.org/pdf/future_of_reading.pdf


Hi, I have a few questions about your STEPS project:

- Is there a project that is the continuation of the STEPS project?

- What is your opinion of the Elm language?

- How do you envision all the good research from the STEPS model could be used for building practical systems?

- STEPS focused on personal computing, do you have a vision on how something similar could be done for server-side programming?

- Where can I find all the source code for the Frank system and the DSLs described in the STEPS report?


Apologies for rambling on a bit - but I also have some questions about VPRI. As far as I can gather, it was never the intention to publish the entire system (the whole stack needed to get "Frank" running)? If so, I'd like to know why not. Were you afraid that the prototypes would be taken "too seriously" and draw focus away from the ideas you wanted to explore?

The VPRI reports, and before that some of the papers on Croquet (especially the idea of "teatime" which might be described as event-driven, log-based, relative time with eventual data/world-consistency) are fascinating, and I'm grateful for them being published. Also the Ometa-stuff[o] is fascinating (if anything, I think it's gotten too little mind-share).

It seems to me that we've evolved a bit, in the sense that some things that used to be considered programming (display a text string on screen) no longer are (type it into notepad.exe) -- it's considered "using a computer". At the same time some things that were considered somewhat esoteric are becoming mainstream: perhaps most importantly the growing (resurging?) trend that programming really is meta-programming and language creation.

ReactJS is a mainstream programming model that fuses HTML, CSS, JavaScript and at least one templating language - and in a similar vein we see great adoption of "transpiled" languages, such as CoffeeScript, TypeScript, ClojureScript and more. HN runs on top of Arc, which is a Lisp that's been bent hard in the direction of HTTP/HTML. I see this as a bit of an evolution from when the most common DSLs people were writing for themselves were ORMs - mapping some host language to SQL.

In your time with VPRI - did you find other new patterns or principles for meta-programming and (micro) language design that you think could/should be put to use right now?

Other than web developers' tendency to reinvent m4 at every turn in order to program HTML, CSS and JS at a "higher" level, and the before-mentioned ORM trends -- the only somewhat mainstream system I am aware of that has a good toolkit for building "real" DSLs is Racket Scheme (which shows if one contrasts something like Sphinx, which is a fine system, with Racket's Scribble [s]).

Do you think we'll continue to see a rise of meta-programming and language design as more and more tools become available, and it becomes more and more natural to do "real" parsing rather than ad-hoc munging of plain text?

[o] https://github.com/alexwarth/ometa-js

[s] https://docs.racket-lang.org/scribble/getting-started.html

http://lambda-the-ultimate.org/node/4017


Hi Alan,

What advice would you give to those who don't have a HARC to call their own? What would you do to get set up / find a community / get funding for your adventure if you were starting out today? What advice do you have for those currently in an industrial/academic institution who seek the true intellectual freedom you have found? Is it just luck?!


I don't have great advice (I found getting halfway decent funding since 1980 to be quite a chore). I was incredibly lucky to wind up quite accidentally at the U of Utah ARPA project 50 years ago this year.

Part of the deal is being really stubborn about what you want to do -- for example, I've never tried to make money from my ideas (because then you are in a very different kind of process -- and this process is not at all good for the kinds of things I try to do).

Every once in a while one runs into "large minded people" like Sam Altman and Vishal Sikka, who do have access to funding that is unfettered enough to lead to really new ideas.


Thanks.

Do you have any advice about community building, especially around fostering new and big ideas?


Hi Alan,

On the "worse is better" divide I've always considered you as someone standing near the "better" (MIT) approach, but with an understanding of the pragmatics inherent in the "worse is better" (New Jersey) approach too.

What is your actual position on the "worse is better" dichotomy?

Do you believe it is real, and if so, can there be a third alternative that combines elements from both sides?

And if not, are we always doomed (due to market forces, programming as "popular culture", etc.) to have tools that are sub-par compared to what can theoretically be achieved?


I don't think "pop culture" approaches are the best way to do most things (though "every once in a while" something good does happen).

The real question is "does a hack reset 'normal'?" For most people it tends to, and this makes it very difficult for them to think about the actual issues.

A quote I made up some years ago is "Better and Perfect are the enemies of What-Is-Actually-Needed". The big sin so many people commit in computing is not really paying attention to "What-Is-Actually-Needed"! And not going below that.


I fear this is because "What-Is-Actually-Needed" is non-trivial to figure out. Related: "scratch your own itch", "bikeshedding", "yak shaving".


Exactly -- this is why people are tempted to choose an increment, and will say "at least it's a little better" -- but if the threshold isn't actually reached, then it is the opposite of a little better, it's an illusion.


Jaron Lanier mentioned you as part of the, "humanistic thread within computing." I understood him to mean folks who have a much broader appreciation of human experience than the average technologist.

Who are "humanistic technologists" you admire? Critics, artists, experimenters, even trolls... Which especially creative technologists inspire you?

I imagine people like Jonathan Harris, Ze Frank, Jaron Lanier, Ben Huh, danah boyd, Sherry Turkle, Douglas Engelbart, Douglas Rushkoff, etc....


-- I was surprised that the HN list page didn't automatically refresh in my browser (seems as though it should be live and not have to be prompted ...)


Au contraire, I'm happy that the last-seen state is preserved and I'm given the option to refresh to see the current state should I choose to.


It certainly helps when reading long replies, that's for sure. I do think a mini-update box with "click here to load" like on stackoverflow for replies or edits would be an interesting idea.

Of course, the 90's style is pretty hacker-hipster as well...can't deny that.


HN feels old school. That's why I like it. (I'm considered a fossil by all of my 20, 30-something colleagues.)


How old are you? "Old school" to me is what could be done at Parc, etc. ... (hint: quite a bit more than on this website ...)


Imagine: 1. trying to read something long, or 2. going off to follow a link and coming back to respond, only to find that the page has refreshed while you looked away. Now you have to scroll about to find the place you were at in order to respond or to continue reading the comments.


How about a little model of time in a GUI?


This is maybe the most Alan-Kay-like response so far. Short, simple, but a tiny bit like a message from an alternate dimension. "No, no, I'm not asking you to build the also-wrong solution someone else has tried. I'm saying: solve the problem."


Also feels like worse-is-better vs. the right thing. How much engineering effort and additional maintenance would be required to develop and support such a time-model? A lot. Alas, let us re-create software systems to be radically simpler so that we can do the right thing! Still waiting for Urbit and VPRI's 10k line operating system ... but that's what Alan stands for in our industry: "strive to do the right thing," or as you put it, "solve the problem".


That sounds like a feature. HN doesn't have features, only necessities.


This is the way Facebook is now... Not an improvement. It's not designed for following threads, and it looks like they don't care.


Legend is, this forum runs on an abandoned LISP implementation.

Most things around here are not how they should seem.


Yep- arclanguage.org.


Hi Alan,

As a high school teacher, I often find that discussions of technology in education diminish 'education' to curricular and assessment documentation and planning; however, these artifacts are only a small element of what is, fundamentally, a social process of discussion and progressive knowledge building.

If the real work and progress with my students comes from our intellectual back-and-forth (rather than static documentation of pre-existing knowledge), are there tools I can look to that have been/will be created to empower and enrich this kind of in situ interaction?


This is a tough one to try to produce "through the keyhole" of this very non-WYSIWYG poorly thought through artifact of the WWW people not understanding what either the Internet or computer media are all about.

Let me just say that it's worth trying to understand what might be a "really good" balance between traditional oral culture learning and thinking, what literacy brings to the party, especially via mass media, and what the computer and pervasive networking should bring as real positive additions.

One way to assess what is going on now is partly a retreat from real literacy back to oral modes of communication and oral modes of thought (i.e. "texting" is really a transliteration of an oral utterance, not a literary form).

This is a disaster.

However, even autodidacts really need some oral discussions, and this is one reason to have a "school experience".

The question is balance. Fluent readers can read many times faster than oral transmissions, and there are many more resources at hand. This means in the 21st century that most people should be doing a lot of reading -- especially students (much much more reading than talking). Responsible adults, especially teachers and parents, should be making all out efforts to help this to happen.

For the last point, I'd recommend perusing Daniel Kahneman's "Thinking, Fast and Slow", and this will be a good basis for thinking about tradeoffs between actual interactions (whether with people or computers) and "pondering".

I think most people grow up missing their actual potential as thinkers because the environment they grow up in does not understand these issues and their tradeoffs....


>I think most people grow up missing their actual potential as thinkers because the environment they grow up in does not understand these issues and their tradeoffs....

This is the meta-thing that’s been bugging me: how do we help people realize they’re “missing their actual potential as thinkers”?

The world seems so content to be an oral culture again, how do we convince / change / equip people to be skeptical of these media?

Joe Edelman’s Centre for Livable Media (http://livable.media) seems like a step in the right direction. How else can we convince people?


Marijuana helped me realize there was a lot about myself I didn't understand and launched my investigation into more effective thought processes. I've become much more driven and thoughtful since I began smoking as an adult.


What kinds of changes to your thought processes did you make?


First of all, I now enjoy talking about myself :)

I stopped assuming I knew everything, and a childlike sense of wonder returned to my life. I began looking beyond what was directly in front of me and sought out more comprehensive generalizations. What do atoms have in common with humans? What does it mean to communicate? Do we communicate with ecosystems? Do individuals communicate with society? What is consciousness and intelligence? Is my mind a collection of multiple conscious processes? How do the disparate pieces of my brain integrate into one conscious entity, how do they shape my subjective reality?

I found information, individuals, and networks to be fundamental to my understanding of the world. I was always interested in them before, but not enough to seek them out or apply them through creative works. I discovered for myself the language of systems. I found a deep appreciation of mathematics and a growth path to set my life on.

I was able to do this exploration at a time when my work was slow and steady. It came along a couple years ago when I was 25, which I've heard is when the brain's development levels off. I feel lucky to have experienced it when I did because I was totally unsatisfied with my life before then.

Since then I've found work I love at a seed stage startup where I've been able to apply my ideas in various ways. I have become much more active as a creator, including exploring latent artistic sensibilities through writing poetry and taking oil painting classes with a very talented teacher. I've found myself becoming an artist in my work - I've become the director and lead engineer at the startup and am exploring ways to determine and distribute truth in the products we sell, and further to make a statement on what art is in a capitalistic society (even if I'm the only one who will ever recognize it). I've also become more empathic and found a wonderful woman and two pups to share my life with, despite previously being extremely solitary. Between work and family I have less time for introspection now, but I expect I'll learn just as much through these efforts.

Ultimately, I've learned to trust my subconscious. I was always anxious and nervous about being wrong in any situation before, but now I trust that even if I am wrong in the moment my brain can figure out good answers over longer stretches of time.

I don't know how far cannabis led me down this path but it definitely gave me a good strong push.


This is almost exactly my experience! I don't think HN talks about it much, but cannabis is a great way to approach intuitive depth on subjects. For me it was ego, math, music, civics and information theory concepts.

When I started, it was at a job that I absolutely hated (rewriting Mantis to be a help desk system), and it helped me get out of it by opening up a better understanding of low-level systems. That eventually led to tuning high-frequency trading systems and some pretty deep civics using FOIA.

Not that it was a direct contributor, but I do consider it a seed towards better understanding of the things around me. I don't necessarily feel happier, but I feel much more content.


It is IDENTICAL to mine as well. Even down to the information theory bit. Very bizarre, but reassuring.


Thanks, this is very interesting!


In seeking to consider what form this “‘really good’ balance” might take, can you recommend any favored resources/implementations to illustrate what “real positive additions” computers and networking can bring to the table? I’m familiar with the influence of Piaget/Papert - but I would love to gain some additional depth on the media/networking side of the conversation.

Thank you for your thoughts. I feel similarly about the cultural regression of literacy.


With a good programming language and interface, one -- even children -- can create from scratch important simulations of complex non-linear systems that can help one's thinking about them.
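
For instance, even a few lines of Python are enough to simulate a classic non-linear system -- the logistic map -- and watch steady behavior give way to oscillation and then chaos as the growth parameter r changes (a minimal sketch, not a classroom curriculum):

    # A tiny non-linear simulation: the logistic map x -> r*x*(1-x).
    # Sweeping r shows the jump from a steady state to oscillation to chaos.
    def logistic_run(r, x=0.2, steps=50):
        xs = [x]
        for _ in range(steps):
            x = r * x * (1 - x)
            xs.append(x)
        return xs

    for r in (2.5, 3.2, 3.9):
        tail = logistic_run(r)[-4:]
        print(r, [round(v, 3) for v in tail])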


I wish this were a better platform for fluid discussion, but I'll dig into your writings and talks (Viewpoint/Youtube/TED/elsewhere?) to gain a better understanding of your thoughts on these topics.

Thank you.


What turning points in the history of computing (products that won in the marketplace, inventions that were ignored, technical decisions where the individual/company/committee could've explored a different alternative, etc.) do you wish had gone another way?


Just to pick three (and maybe not even at the top of my list if I were to write it and sort it), are

(a) Intel and Motorola, etc. getting really interested in the Parc HW architectures that allowed Very High Level Languages to be efficiently implemented. Not having this in the 80s brought "not very good ideas from the 50s and 60s" back into programming, and was one of the big factors in:

(b) the huge propensity of "we know how to program" etc., that was the other big factor preventing the best software practices from the 70s from being the start of much better programming, operating systems, etc. in the 1980s, rather the reversion to weak methods (from which we really haven't recovered).

(c) The use of "best ideas about destiny of computing" e.g. in the ARPA community, rather than weak gestures e.g. the really poorly conceived WWW vs the really important and needed ideas of Engelbart.


I get (a) and (b) completely. On (c), I felt this way about NCSA Mosaic in 1993 when I first saw it and I'm relieved to hear you say this because although I definitely misunderstood a major technology shift for a few years, maybe I wasn't wrong in my initial reaction that it was stupid.


I didn't begin to get it until the industry started trying to use browsers for applications in the late '90s/early 2000's. I took one look at the "stateful" architecture they were trying to use, and I said to myself, "This is a hack." I learned shortly thereafter about criticism of it saying the same thing, "This is an attempt to impose statefulness on an inherently stateless architecture." I kept wondering why the industry wasn't using X11, which already had the ability to carry out full GUI interactions remotely. Why reject a real-time interactive architecture that's designed for network use for one that insisted on page refreshes to update the display? The whole thing felt like a step backward.

The point where it clobbered me over the head was when I tried to use a web application framework to make a complex web form application work. I got it to work, and the customer was very pleased, but I was ashamed of the code I wrote, because I felt like I had to write it like I was a contortionist. I was fortunate in that I'd had prior experience with other platforms where the architecture was more sane, so that I didn't think this was a "good design." After that experience, I left the industry.

I've been trying to segue into a different, more sane way of working with computers since. I don't think any of my past experience really qualifies, with the exception of some small aspects and experiences. The key is not to get discouraged once you've witnessed works that put your own to shame, but to realize that the difference in quality matters, that it was done by people rather like yourself who had the opportunity to put focus and attention on it, and that one should aspire to meet or exceed it, because anything else is a waste of time.


How can we bring back X11 and good old interactive architecture to the generation of programmers growing up with AngularJS and ReactJS?

Or shall we reboot good ideas with IoT?


This is not "bringing X11 back," but it's an improvement on JS.

https://news.ycombinator.com/item?id=11965253


My reference to X11 was mostly rhetorical, to tell the story. I learned at some point that the reason X11 wasn't adopted, at least in the realm of business apps I was in, was that it was considered a security risk. Customers had the impression that HTTP was "safe." That has since been proven false, as there have been many exploits of web servers, but I think by the time those vulnerabilities came to light, X11 was already considered passé. It's like how stand-alone PCs were put on the internet, and then people discovered they could be cracked so easily.

I think a perceived weakness was that X11 didn't have a "request-respond" protocol that worked cleanly over a network for starting a session. One could have easily been devised, but as I recall, that never happened. In order to start a remote session of some tool I wanted to use, I always had to log in to a server, using rlogin or telnet, type out the name of the executable, and tell it to "display" to my terminal address. It was possible to do this even without logging in. I'd seen students demonstrate that when I was in school. While they were logged in, they could start up an executable somewhere and tell it to "display" to someone else's terminal. The thing was, it could do this without the "receiver's" permission. It was pretty open that way. (That would have been another thing to implement in a protocol: don't "display" without permission, or at least without a request from the same address.) HTTP didn't have this problem, since I don't think it's possible to direct a browser to go somewhere without a corresponding, prior request from that browser.

X11 was not the best designed GUI framework, from what I understand. I'd heard some complaints about it over the years, but at least it was designed to work over a network, which no other GUI framework of the time I knew about could. It could have been improved upon to create a safer network standard, if some effort had been put into it.

As Alan Kay said elsewhere on this thread, it's difficult to predict what will become popular next, even if something is improved to a point where it could reasonably be used as a substitute for something of lower quality. So, I don't know how to "bring X11 back." As he also said, the better ideas which ultimately became popularly adopted were ones that didn't have competitors already in the marketplace. So, in essence, the concept seemed new and interesting enough to enough people that the only way to get access to it was to adopt the better idea. In the case of X11, by the time the internet was privatized, and had become popular, there were already other competing GUIs, and web browsers became the de facto way people experienced the internet in a way that they felt was simple enough for them to use. I remember one technologist describing the browser as being like a consumer "radio" for the internet. That's a pretty good analogy.

Leaving that aside, it's been interesting to me to see that thick clients have actually made a comeback, taking a huge chunk out of the web. What was done with them is what I just suggested should've been done with X11: The protocol was (partly) improved. In typical fashion, the industry didn't quite get what should happen. They deliberately broke aspects of the OS that once allowed more user control, and they made using software a curated service, to make existing thick client technology safer to use.

The thinking was, not without some rationale, that allowing user control led to lots and lots of customer support calls, because people are curious, and usually don't know what they're doing. The thing was, the industry didn't try to help people understand what was possible. Back when X11 was an interesting and productive way you could use Unix, the industry hadn't figured out how to make computers appealing to most consumers, and so in order to attract any buyers, they were forced into providing some help in understanding what they could do with the operating system, and/or the programming language that came with it.

The learning curve was a bit steeper, but that also had the effect of limiting the size of the market. As the market has discovered, the path of least resistance is to make the interface simple, and low-hassle, and utterly powerless from a computational standpoint, essentially turning a computer into a device, like a Swiss Army knife.

I think a better answer than IoT is education, helping people to understand that there is something to be had with this new idea. It doesn't just involve learning to use the technology. As Alan Kay has said, in a phrase that I think deserves to be explored deeply, "The music is not in the piano."

It's not an easy thing to do, but it's worth doing, and even educators like Alan continue to explore how to do this.

This is just my opinion, as it comes out of my own personal experience, but I think it's borne out in the experience of many of the people who have participated in this AMA: I think an important place to start in all of this is helping people to even hear that "music," and an important thing to realize is you don't even need a computer to teach people how to hear it. It's just that the computer is the best thing that's been invented so far for expressing it.


I had a similar experience to yours and was comfortable coding web pages via cgi-bin with vi. :-)

That is why now I am very interested in containers and microservices in both local and network senses.

As a "consumer", I am also very comfortable to communicate with people via message apps like WeChat and passing wikipedia and GitHub links around. Some of them are JavaScript "web apps" written and published in GitHub by typing on my iPhone. Here is an example:

http://bigdata-mindstorms.github.io/d3-playground/ontouchsta...

Hope I can help more people to "hear the music" and _make_ and _share_ their own.


I don't think networked X11 is quite the web we'd want (it's really outdated), but it does seem better than browsers, which as you point out are so bad you want to stab your eyes out. Unfortunately, now that the web has scaled up to this enormous size, people can't un-see it and it does seem like it's seriously polluted our thinking about how the Internet should interact with end users.

Maybe the trick is something close to this: we need an Internet where it's very easy to do not only WYSIWYG document composition and publishing (which is what the web originally was, minus the WYSIWYG), but really deliver any kind of user experience we want (like VR, for example). It should be based on a network OS (an abstract, extensible microkernel on steroids) where user experiences of the network are actually programs with their own microkernel systems (sort of like an updated take on postscript). The network OS can security check the interpreters and quota and deal out resources and the microkernels that deliver user experiences like documents can be updated as what we want to do changes over time. I think we'd have something more in this direction (although I'm sure I missed any number of obvious problems) if we were to actually pass Alan Kay's OS-101 class as an industry.

We actually sort of very briefly started heading in this direction with Marimba's "Castanet" back at the beginning of Java and I was WILDLY excited to see us trying something less dumb than the browser. Unfortunately, it would seem that economic pressures pushed Marimba into becoming a software deployment provider, which is really not what I think they were originally trying to do. Castanet should have become the OS of the web. I think Java still has the potential to create something much better than the web because a ubiquitous and very mature virtual machine is a very powerful thing, but I don't see anyone trying go there. There's this mentality of "nobody would install something better." And yet we installed Netscape and even IE...

BTW, I do think the security problems of running untrusted code are potentially solvable (at least so much as any network security problems are) using a proper messaging microkernel architecture with the trusted resource-accessing code running in one process and the untrusted code running in another. The problem with the Java sandbox (so far as I understand all that) is that it's in-process. The scary code runs with the trusted code. In theory, Java is controlled enough to protect us from the scary code, but in practice, people are really smart and one tiny screw-up in the JVM or the JDK and bad code gets permissions it shouldn't have. A lot of these errors could be controlled or eliminated by separating the trusted code from the untrusted code as in Windows NT (even if only by making the protocol for resource permissions really clear).
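
A crude sketch of that separation, with ordinary Python processes and a pipe standing in for a real microkernel's messaging (the file path and permission list are made up for illustration):

    # Crude sketch (not a real microkernel): untrusted code runs in its own
    # process and can only reach resources by sending request messages over a
    # pipe; the trusted side checks each request against an explicit allow-list.
    from multiprocessing import Process, Pipe

    ALLOWED_FILES = {"/tmp/demo.txt"}   # hypothetical permission list

    def untrusted(conn):
        conn.send(("read_file", "/etc/passwd"))    # should be refused
        print("untrusted got:", conn.recv())
        conn.send(("read_file", "/tmp/demo.txt"))  # allowed (if the file exists)
        print("untrusted got:", conn.recv())
        conn.close()

    def trusted_broker(conn):
        while True:
            try:
                op, arg = conn.recv()
            except EOFError:
                break                              # untrusted side is done
            if op == "read_file" and arg in ALLOWED_FILES:
                try:
                    with open(arg) as f:
                        conn.send(("ok", f.read()))
                except OSError as e:
                    conn.send(("error", str(e)))
            else:
                conn.send(("denied", op, arg))

    if __name__ == "__main__":
        trusted_end, untrusted_end = Pipe()
        p = Process(target=untrusted, args=(untrusted_end,))
        p.start()
        untrusted_end.close()   # so the broker sees EOF when the child closes its end
        trusted_broker(trusted_end)
        p.join()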


Hi, Alan!

Like many here, I'm a big fan of what you've accomplished in life, and we all owe you a great debt for the great designs and features of technologies we use everyday!

The majority of us have not accomplished as much in technology, and many of us, though a minority, are in the top end of the age bell curve. I'm in that top end.

I've found over the years that I've gone from being frustrated with the churn of software/web development, to completely apathetic about it, to wanting something else- something more meaningful, and then to somewhat of an acceptance that I'm lucky just to be employed and making what I do as an older developer.

I find it very difficult to find the time and energy to focus on new technologies that come out all of the time, and -- perhaps as my brain becomes less plastic -- I'm less and less able to really get into the latest JavaScript framework, etc.

I don't get excited anymore, don't have the motivation, ability, or time to keep up with things like the younger folk. Also, I've even gotten tired of mentoring them, especially as I become less able and therefore less respected.

Have you ever had or known someone that had similar feelings of futility or a serious slowdown in their career? If so, what worked/what didn't and what advice could you provide?

Thank you for taking the time to read and respond to everyone you have here. It definitely is much appreciated!


You could just be seeing things more or less as they are.


I'm a fair bit closer to the right hand side of the age curve than the left. My advice: Look at the brevity of Alan Kay's responses. When I was young I would have soared past them looking for the point. Now I see that one sentence and I weep. Why didn't anyone say that 20 years ago?

Maybe they did. I was too busy being frustrated with the churn of software development. All my time and energy was focused on new technologies that came out all the time. My young plastic brain spent its flexibility absorbing the latest framework, etc.

Now that I have lost the motivation, ability and time to keep up with things like the younger folk, I can finally listen to the older folk (hopefully while there are still folk older than me to listen to).

These days I'm trying just to write code. All those young people have soared past the wisdom of their elders looking for the point. It's still there. Don't look at the new frameworks, look at what people were doing 10, 20, 30, 40, 50, 60 years ago. How does it inform what you are doing?

I hope that helps! It's a struggle for me too.


I was fortunate to grow up during a time when Alan Kay was a well-known figure in the personal computing world, and while what he said didn't make sense to me at the time, it still interested me intensely, and I always wondered what he meant by what he said. Strangely enough, looking back on my younger experience with computers, I think I actually did get a little bit of what he was talking about. It's just that I came to understand that little bit independently from listening to him. I didn't realize he was talking about the same thing. It wasn't until I got older, and got to finally see his talks through internet video that I finally started seeing that, and realizing more things by listening to him at length. Having the chance to correspond with him, talk about those things more in-depth, helped as well.

The way I look at it is just take in how fortunate you are to have your realizations when you have them (I've had my regrets, too, that I didn't "get" them sooner), and take advantage of them as much as you can. That's what I've tried to do.


>> what advice could you provide?

I'm not Mr. Kay, but great question!

I think we need a system, website, TV show, etc. in which experiences could be posted and rated. The best ideas and past experiences would rise to the top. You could vote and push things into view.

For example, your years of experience have taught you that "yet another framework" is not the answer. We need a slower churn. But if the goal is to sell books ... well, now we are fighting capitalism.


Hi Alan,

A lot of the VPRI work involved inventing new languages (DSLs). The results were extremely impressive but there were some extremely impressive people inventing the languages. Do you think this is a practical approach for everyday programmers? You have also recommended before that there should be clear separation between meta model and model. Should there be something similar to discipline a codebase where people are inventing their own languages? Or should just e.g. OS writers invent the languages and everyone else use a lingua franca?


Tricky question. One answer would be to ask whether there is an intrinsic difference between "computer science" and (say) physics? Or are the differences just that computing is where science was in the Middle Ages?


In physics, you can tell you're making progress because you can explain more things that happen in nature. How can you tell when you're making progress in computer science?

To me it seems like "computer science" lumps together too many different goals. It's like if we had a field called "word science" that covered story-writing, linguistics, scientific publication, typesetting, etc.


This is a terrific question, and I'll try to do it justice tomorrow morning.


Now that it is "morning", I'm not sure that I can do justice to this question here...

But certainly we have to take back the term "computer science" and try to give it real meaning as to what might constitute an actual science here. As Herb Simon pointed out, it's a "science of the artificial", meaning that it is a study of what can be made and what has been made.

Science tries to understand phenomena by making models and assessing their powers. Nature provides phenomena, but so do engineers e.g. by making a bridge in any way they can. Like most things in early engineering, bridge-lore was put in "cookbooks of practice". After science got invented, scientist-engineers could use existing bridges as phenomena to be studied, and now develop models/theories of bridges. This got very powerful rather recently (the Tacoma Narrows bridge went down just a few months after I was born!).

When the first Turing Award winner -- Al Perlis -- was asked in the 60s "What is Computer Science?", he said "It is the science of processes!". He meant all processes including those on computers, but also in Biology, society, etc.

His idea was that computing formed a wonderful facility for making better models of pretty much everything, especially dynamic things (which everything actually is), and that it was also the kind of thing that could really be understood much better by using it to make models of itself.

Today, we could still take this as a starting place for "getting 'Computer Science' back from where it was banished".

In any case, this point of view is very different from engineering. A fun thing in any "science of the artificial" is that you have to make artifacts for both phenomena and models.

(And just to confuse things here, note how much engineering practice is really required to make a good theory in a science!)


Thanks for the answer! It seems like there's a distinction here between exploring how models can/should be built (a mathematical/philosophical task), helping people create and understand these models with computers (a design/engineering task), and using these models to formulate and test hypotheses about ourselves and the world (a scientific task). Maybe the lack of science is because we haven't figured out the math/philosophy/design/engineering parts yet!


The lack of science is because most people are not only not interested in science, but really don't understand what it is.


Thanks. I've been thinking about your questions. I might be misreading you, but I think the answer is probably yes to both. So we should try to get out of the Middle Ages by inventing new theories and criticising and testing them the way physics does. But maybe just the physicists should do that. In the meantime the engineers should focus on being able to communicate clearly with the best tools that are currently available (part of which is restraining their desire to invent).


Engineering is wonderful -- but think of what happened after real science got invented!

Today's "computer science" is much more like "library science" than it should be on the one hand, and too much coincident with engineering on the other (and usually not great engineering at that).

It's way past time for our not-quite-a-field to grow up more in important ways.


Agreed. It's really motivating to have someone who has shown, a few times over, what can be done and who keeps pushing for better. It's also helpful that you call a spade a spade when you talk of reinventing the flat tire. If more people recognised both of these, maybe we could have a better future and a more stable engineering present (rather than the framework/language of the week!)


It would be very good if we started to do real engineering and real science wrt software and most design ...


Computer science is defined by information theory, and we already have mathematical proofs binding information theory to the laws of quantum physics (for example, Landauer's principle: the minimum energy needed to erase one bit of entropy from memory is kT ln 2, a bound set by the ambient temperature).
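For concreteness, that bound works out to a tiny but nonzero amount of energy; here is a quick back-of-the-envelope calculation (a minimal sketch using standard physical constants; the Python below is illustrative only):

    import math

    k_B = 1.380649e-23   # Boltzmann constant, joules per kelvin
    T = 300.0            # roughly room temperature, kelvin

    # Landauer bound: minimum energy dissipated to erase one bit
    energy_per_bit = k_B * T * math.log(2)
    print(energy_per_bit)   # about 2.87e-21 joules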


Sort of. There are quite a few theories operating on computer science as we know it today, especially in software and hardware. Examples include model-driven development, flow-based programming, lambda calculus, state machines, logic-oriented systems, and so on. The mathematical models underlying the structuring and verification of anything built in these can be quite different, although often with some overlapping techniques or principles. There has also been lots of work in high-assurance systems, going from requirements and design specifications in a rigorous, mathematical (even mechanical) way down to an implementation in HW, SW, or both. None of them cite information theory. Heck, analog computers might be outside of it entirely, given they implement specific mathematical functions with continuous operation on reals. I know Shannon had a separate model for them.

So, given I don't study it or read on it, I'm actually curious if you or anyone else has references on where information theory impacts real software development over the years. I study lots of formal methods & synthesis research but never even see the phrase mentioned. I've been imagining it's in its own little field working at a strongly theoretical level making abstract or concrete observations about computers. Just don't see them outside some cryptography stuff I've read.

EDIT to add an example below where Bertrand Meyer presents a Theory of Programs that ties it all to basic set theory.

https://bertrandmeyer.com/2015/07/06/new-paper-theory-of-pro...


Respectfully ... I think you missed the point of my answer.


Did you intend to compare the progress and formalization of the fields? Didn't pick up on that


Yes, that was what I was driving at. Anyone could do physics in the Middle Ages -- they just had to get a pointy hat. A few centuries later after Newton, one suddenly had to learn a lot of tough stuff, but it was worth it because the results more than paid for the new levels of effort.


Hi Alan,

I'm preparing a presentation on how to build a mental model of computing by learning different computer languages. It would be great to include some of your feedback.

* What programming language maps most closely to the way that you think?

* What concept would you reify into a popular language such that it would more closely fit that mapping?

* What one existing reified language feature do you find impacts the way you write code the most, especially even in languages where it is not available?


I think I'd ask "What programming language design would help us think a lot better than we do now (we are currently terrible!)?"

Certainly, in this day and age, the lack of safe meta-definition is pretty much shocking.


I have a few things in my links on the topic of safe metaprogramming outside your own work. Here are a few I could remember off the top of my head:

Type-safe metaprogramming Sheard

http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=E63...

Type-safe, reflective metaprogramming Microsoft

http://research.microsoft.com/apps/video/dl.aspx?id=103561

Rascal - Metaprogramming language and platform

http://www.rascal-mpl.org/

So, given work like that, what remaining tough problems are there before you would find a metaprogramming system safe and acceptable? Or do we have the fundamentals available, but you just don't like the lack of deployment in mainstream or pragmatic languages and IDEs?

Note: Just dawned on me that you might mean abstract programming in the sense of specifying, analyzing, and coding up abstract requirements closer to human language. Still interested in what gripes or goals you have on that end if so.


Could you give an example of what you mean by "safe meta-definition"? I'd like to understand this better.


"Meta is dangerous" so a safe meta-language within a language will have "fences" to protect.

(Note that "assignment" to a variable is "meta" in a functional language (and you might want to use a "roll back 'worlds' mechanism" (like transactions) for safety when this is needed.)

This is a parallel to various kinds of optimization (many of which violate module boundaries in some way) -- there are ways to make this a lot safer (most languages don't help much)
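To make the roll-back "worlds" idea concrete, here is a minimal sketch in Python -- not the actual Worlds mechanism from the Viewpoints work, just an illustration with made-up names:

    class World:
        """A toy 'world': assignments go into a scratch layer that can be
        committed to the parent bindings or simply thrown away."""
        def __init__(self, parent):
            self.parent = parent     # existing bindings (a plain dict)
            self.scratch = {}        # tentative assignments live here

        def set(self, name, value):
            self.scratch[name] = value

        def get(self, name):
            return self.scratch.get(name, self.parent.get(name))

        def commit(self):
            self.parent.update(self.scratch)   # make the changes real

        def discard(self):
            self.scratch.clear()               # roll everything back

    bindings = {"x": 1}
    w = World(bindings)
    w.set("x", 99)
    print(w.get("x"), bindings["x"])   # 99 1  -- the parent is untouched
    w.discard()
    print(w.get("x"))                  # 1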


I've always felt that the meta space is too exponential or hyper to mentally represent or communicate. Perhaps we need different lenses to project the effects of the meta space on our mental model. Do you think this is why Gregor decided to move towards aspects?


I don't think Aspects is nearly as good an idea as MOP was. But the "hyperness" of it is why the language and the development system have to be much better. E.g. Dan Ingalls put a lot of work into the Smalltalks to allow them to safely be used in their own debugging, even very deep mechanisms. Even as he was making these breakthroughs back then, we were all aware there were further levels that were yet to be explored. (A later one, done in Smalltalk was the PIE system by Goldstein and Bobrow, one of my favorite meta-systems)


Aside from metaprogramming, from reading the "four reports" document that is the first Google link, it seems PIE also addresses another hard problem. In any hierarchically organized program, there are always related pieces of code that we would like to maintain together, but which get ripped apart and spread out because the hierarchy was split according to a different set of aspects. You can't get around this problem because if you change what criteria the hierarchy is split on in order to put these pieces near each other, now you've ripped apart code that was related on the original aspect. I've come to the conclusion that hierarchical code organization itself is the problem, and we would be better served by a way to assemble programs relationally (in the sense of an RDBMS). It seems like PIE was in that same conceptual space. Could you comment on that or elaborate more on the PIE system? Thanks.


Good insights -- and check out Alex Warth's "Worlds" paper on the Viewpoints site -- this goes beyond what PIE could do with "possible worlds" reasoning and computing ...


This is a very interesting paper. Its invocation of state space over time as a model of program side effects reminds me of an idea I had a couple years ago: if you think of a program as an entity in state-space where one dimension is time, then "private" object members in OO-programming and immutable values in functional programming are actually manifestations of the same underlying concept. Both are ways to create fences in the state-space-time of a program. Private members create fences along a "space" axis and functional programming creates fences along the "time" axis.
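As a loose illustration of those two kinds of fences (a sketch only; Python's privacy is by convention, and all names here are made up):

    from dataclasses import dataclass

    # A fence along the "time" axis: an immutable value can never change
    # after construction, so any reference to it is safe across time.
    @dataclass(frozen=True)
    class Point:
        x: float
        y: float

    # A fence along the "space" axis: a private member that other parts
    # ("places") of the program are not supposed to reach in and mutate.
    class Counter:
        def __init__(self):
            self._count = 0          # private by convention only

        def bump(self):
            self._count += 1

        def value(self):
            return self._count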


And you get to use "relational" and "relativity" side by side in a discussion.

A lot of interesting things tend to happen when you introduce invariants, including "everything-is-a" invariants. Everything is a file, everything is an object, everything is a function, everything is a relation, etc.


Rust has quite powerful macros (hygienic macros). It is still not a major programming language, but it wants to be one.


I'm guessing safe meta-definition means type-safe meta-programming.

For example in Lisp, code is data and data is code (aka homoiconicity). This makes it very convenient to write macros (i.e. functions that accept and return executable code).

Unsafe meta-programming would be like the C pre-processor, whose aptness for abuse makes it a leading feature of IOCCC entries.
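For a toy flavor of "code in, code out" metaprogramming outside Lisp, here is a sketch using Python's ast module -- purely illustrative, and much weaker than real macros:

    import ast

    def times_instead_of_plus(source):
        # A toy "macro": takes source text in, returns a compiled code object
        # in which every + has been rewritten to *.
        tree = ast.parse(source, mode="eval")

        class AddToMul(ast.NodeTransformer):
            def visit_BinOp(self, node):
                self.generic_visit(node)
                if isinstance(node.op, ast.Add):
                    node.op = ast.Mult()
                return node

        new_tree = ast.fix_missing_locations(AddToMul().visit(tree))
        return compile(new_tree, "<macro>", "eval")

    print(eval(times_instead_of_plus("2 + 3 + 4")))   # 24, not 9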


Me too. But if he doesn't answer it, he may mean that languages don't have a well-designed meta protocol. See the one they built for CLOS in that good book (The Art of the Metaobject Protocol).


This reminded me of an interesting dream I had. I dreamt I created a nice language with a meta protocol. In working with the language and using this protocol, I changed the language into a different language, which gave me insights on changing that language -- all through meta protocols. I woke up with a distinct feeling of what it means to not be plodding around in a Turing tarpit.


Hi Alan, the question that troubles me now and I want to ask you is:

Why do you think there is always a difference between:

A. the people who know best how something should be done, and

B. the people who end up doing it in a practical and economically-successful or popular way?

And should we educate our children or develop our businesses in ways that could encourage both practicality and invention? (do you think it's possible?). Or would the two tendencies cancel each other out and you'll end up with mediocre children and underperforming businesses, so the right thing to do is to pick one side and develop it at the expense of the other?

(The "two camps" are clearly obvious in the space of programming language design and UI design (imho it's the same thing: programming languages are just "UIs between programmers and machines"), as you well know and said, with one group of people (you among them) having the right ideas of what OOP and UIs should be like, and one people inventing the technologies with success in industry like C++ and Java. But the pattern is happening at all levels, even business: the people with the best business ideas are almost never the ones who end up doing things and so things get done in a "partially wrong" way most of the time, although we have the information to "do it right".)


We were lucky in the ARPA/PARC communities to have both great funding, and the time to think things through (and even make mistakes that were kept from propagating to create bad defacto standards).

The question you are asking is really a societal one -- and about operations that are like strip mining and waste dumping. "Hunters and gatherers" (our genetic heritage) find fertile valleys, strip them dry and move on (this only works on a very small scale). "Civilization" is partly about learning how to overcome our dangerous atavistic tendencies through education and planning. It's what we should be about generally (and the CS part of it is just a symptom of a much larger much more dire situation we are in).


So you're rephrasing the question to mean that you see it as 'hunter gatherer mode' thinking (doing it in a practical, short-term, economically successful way) vs. 'civilized builder mode' thinking (doing it the way we know it should be done); that the two are antagonistic; and that, because of the way our society is structured, 'hunter gatherer' mode thinking leads to better economic results?

This ends up as a pretty strong critique of capitalism's main idea that market forces drive the progress of science and technology.

Your thinking would lead to the conclusion that we'd have to find a way to totally reshape/re-engineer the current world economy to stop it from being hugely biased in favor of "hunter gatherers that strip the fertile valley dry" ... right?

I hope that people like you are working on this :)


I can think of no better person to ask than Alan Kay:

What are the best books relevant to programming that have nothing to do with programming? (e.g. How Buildings Learn, Living Systems, etc.)?


Lots ...

Molecular Biology of the Cell

Notes on the Synthesis of Form

etc


I for one would love to see the 'etc' expanded, but in any case I do appreciate you taking the time to respond. Thanks!


There is a very old reading list online I made for the company that is now Accenture -- and this was the subject of a recent HN "gig". I think there is a URL for this discussion in this AMA.



I'm interested in this question as well. I'd like to add to your "e.g.": Ant Encounters by Deborah M. Gordon.


Many mainstream programming tools seem to be moving backwards. For example, Saber-C of the 1980s allowed hot-editing without restarting processes, and graphical display of data structures. Similarly, the ability to experiment with collections of code before assembling them into a function was an advance.

Do you hold much hope for our development environments helping us think?


You could "hot-edit" Lisp (1.85 at BBN) in the 60s (and there were other such systems). Smalltalk at Parc in the 70s used many of these ideas, and went even further.

Development environments should help programmers think (but what if most programmers don't want to think?)


Hot-editing updates behavior while keeping state, causing wildly unpredictable behavior given the way objects are constructed from classes in today's languages. The current approach to OO is to bootstrap fresh state from an external source every time the behavior changes so guarantees can be made about the interaction between behavior and state. It seems to me the equivalent of using a wheelchair because you might stumble while walking, the concern is genuine, but the cure is possibly worse than the affliction.

I don't know what the solution is. Perhaps a language with a fundamentally different view of objects, maybe as an ancestry of deltas of state/behavior pairings, somewhat like prototypes but inheriting by versioning and incrementally changing so that state and behavior always match up but still allowing you to revert to a working version. Likely Alan has some better ideas on what sort of language we need.


I use hot-editing in python by default and I find it incredibly useful (now I feel crippled when I'm on a system without it). There are times when I need to reload the state completely but it's pretty rare (changing something that uses metaclasses, like sqlalchemy, is one such place).

Maybe there's something about the style I've adopted that lends itself more to hot-editing but it's definitely a tool I'd hate to be without.
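One common way to get a rough version of this in Python is importlib.reload (a minimal sketch; the module and function names are hypothetical, and this is not necessarily the parent commenter's setup):

    import importlib
    import handlers            # hypothetical module you are actively editing

    def handle(request):
        # During development, re-import the module on every call so that
        # edits to handlers.py take effect without restarting the process.
        importlib.reload(handlers)
        return handlers.process(request)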


I'm super interested in how you do that! Can you share at all?


Yes! I can! I quickly made a video with super crappy audio quality last time it came up - https://www.youtube.com/watch?v=k-mAuNY9szI

It's pretty poor quality listening but you should get the point. You can send me an email (see my profile) if you wanted to go through it in more detail.


Redux does this at a library level - http://redux.js.org/docs/introduction/


Yes. I think they have been slowly getting better.

Visual Studio has let you do hot code editing for over a decade now; they call it "Edit and Continue"[0]. It only works for some languages (C#, Visual Basic, C++). It also lets you modify the program state while stopped on a breakpoint with code of your devising.

Most browsers also let you compose and run code ad hoc without modifying the underlying programs.

Thanks to hardware performance counters, profilers are now able to profile code with much less impact on performance (e.g. no more adjusting timeouts due to profiler overhead). Network debuggers are getting better at decoding traffic and displaying it in a more human-readable format (e.g. automatic gzip decompression, stream reassembly, etc).

[0]: https://msdn.microsoft.com/en-us/library/bcew296c.aspx


I don't know in what context "hot editing" was used to start this thread, but what I read in it is the idea that you can change code while it's running. Edit and Continue has a different feel to it, because it works by a different method: it literally patches memory that the suspended thread is going to execute. It has the convenience of stopping the execution of the program before the patch is done.

What "hot editing" in, say, Smalltalk has been able to do is this: you can have a live program running, call up a class that the thread uses, change the code in a method, and compile it, all while the thread is still running, and instantly see the change take effect. The reason it can do this is that method dispatch is late-bound. In .NET it's bound early. Late binding allows much more of a sense of experimentation. You don't have to stop anything. You just change it like you're changing a setting in an app, and you see the change instantly. This gives you the feel that programming is much more fluid than the typical "stop, edit, compile, debug" cycle.
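A tiny sketch of that late-bound feel in Python rather than Smalltalk (purely illustrative; the class, thread, and timings are made up):

    import threading, time

    class Greeter:
        def message(self):
            return "hello"

    g = Greeter()

    def loop():
        for _ in range(5):
            print(g.message())
            time.sleep(1)

    t = threading.Thread(target=loop)
    t.start()
    time.sleep(2.5)

    # Because the method is looked up on the class at each call (late-bound),
    # this change takes effect immediately in the already-running loop --
    # no restart, no recompile, and the instance keeps its state.
    Greeter.message = lambda self: "hello (edited while running)"
    t.join()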


BASIC also did this on pretty much every microcomputer in the 1980s.


Kind of, but it was clunkier. You could Break out of an executing program, edit the code you wanted, and then type CONT to continue execution from the break point. The state from that point forward might not be what you want, though. At least inside VS it tries to revert state so that the revision executes as if the state came into it "clean."
