Alan Kay has agreed to do an AMA today
1401 points by alankay on June 20, 2016 | 893 comments
This request originated via recent discussions on HN, and the forming of HARC! at YC Research. I'll be around for most of the day today (through the early evening).



When you were envisioning today's computers in the 70s you seemed to have been focused mostly on the educational benefits, but it turns out that these devices are even better for entertainment, to the point where they are dangerously addictive and steal time away from education. Do you have any thoughts on interfaces that guide the brain away from its worst impulses and towards more productive uses?


We were mostly thinking of "human advancement" or, as Engelbart's group termed it, "Human Augmentation" -- this includes education along with lots of other things. I remember noting that if Moore's Law were to go a decade beyond 1995 (Moore's original extrapolation), then things like television and other "legal drugs" would be possible. We already had a very good sense of this before TV-like things were possible, from noting how attractive early video games -- like SpaceWar -- were. This is a part of an industrial civilization being able to produce surpluses (the "industrial" part), with the "civilization" part being how well children can be helped to learn not to give in to the cravings of genetics in a world of over-plenty. This is a huge problem in a culture like the US, in which making money is rather separated from worrying about how the money is made.


Then what do you think about the concept of "gamification?" Do you think high densities of reward and variable schedules of reward can be exploited to productively focus human attention and intelligence on problems? Music itself could be thought of as an analogy here. Since music is sound structured in a way that makes it palatable (i.e. it has a high density of reward) much human attention has been focused on the physics of sound and the biomechanics of people using objects to produce sound. Games (especially ones like Minecraft) seem to suggest that there are frameworks where energy and attention can be focused on abstracted rule systems in much the same way.


I certainly don't think of music along these lines. Or even theater. I like developed arts of all kinds, and these require learning on the part of the beholder, not just bones tossed at puppies.


I've been playing traditional music for decades, even qualifying to compete at a high level at one point. There is a high density of reward inherent in music, combined with variable schedules of reward. There is competition and a challenge to explore the edges of the envelope of one's aesthetic and sensory awareness along with the limits of one's physical coordination.

Many of the same things can happen in sandbox style games. I think there is a tremendous potential for learning in such abstracted environments. What about something like Minecraft, but with abstracted molecules instead of blocks? Problems, like the ones around portraying how molecules inside a cell are constantly jostling against water molecules, could be solved in such environments using design. Many people who play well balanced games at a high level often seem to be learning something about strategy and tactics in particular rule systems. I suspect that there is something educationally valuable in a carefully chosen and implemented rule system.

Also, perhaps it's so much easier to exploit such mechanisms to merely addict people that this overwhelms any value to be gained.


I just tried, albeit slightly unsuccessfully, to describe the philosophy of the Montessori system to someone. Your answer, learning on the part of the beholder, sums it up beautifully. Thank you for that.


The way you describe music here sounds a lot like how Steve Pinker has described music: as a mental equivalent of cheesecake; something that just happens to trigger all the right reward systems (the ones based on our love of patterns and structure, and exploiting the same biological systems we use for language) but isn't necessarily nutritious itself.

However, all evidence points to him being wrong about this, making the mistake of starting with language as the centrepiece and explaining everything around it. Human music likely predates human speech by hundreds of thousands of years, and is strongly tied to social bonding, emotions and motor systems in ways that have nothing to do with the symbolic aspects of language.


> The way you describe music here sounds a lot like how Steve Pinker has described music: as a mental equivalent of cheesecake;...isn't necessarily nutritious itself.

Note that I didn't mean that in a negative way. Also, if you want to consume macro-nutrients, cheesecake is a pretty effective way to get simple carbs and dairy fat.

> is strongly tied to social bonding, emotions and motor systems in ways that have nothing to do with the symbolic aspects of language.

I think there is something akin to this that can be found in games, and that there is something particularly positive that can be found in well constructed games.


Yes, sorry: I could have been clearer that what I described was Steve Pinker's judgement, not yours.

And I tried to stay neutral towards games on purpose - I have taught game design myself ;). Having said that, a lot of real-world attempts at gamification are pretty banal carrot/stick schemes.


What are some examples of such well-constructed games?


I think games are more like instruments than they are like music. The game itself isn't as interesting as the gameplay you can perform inside it. Speedrunning in particular has a lot in common with musical performance.


I guess in the use of technology one faces a process rather similar to natural selection, in which the better the user's ability to restrict his use to what he has to do, the more likely the survival, i.e. the user will not procrastinate and get distracted. The use of computers for entertainment is unstoppable; it's nearly impossible to keep kids from finding and playing those games, chatting with friends on WhatsApp, and otherwise being exploited by companies that make money from that sort of exploitation, even though it comes at the cost of their psychological health and future success. People spend every single second of the day connected and distracted, and this seems irreversible. I wonder if you have any practical thoughts on how this can be remedied.


My friend Neil Postman (our best media critic for many years) advocated teaching children to be "Guerilla Warriors" in the war against the thousands of entities trying to seize their brains for food. Most children -- and most parents, most people -- do not even realize the extent to which this is not just aggressive, but regressive ...


Can you elaborate more on that?


Neil's idea was that all of us should become aware of the environments we live in and how our brain/minds are genetically disposed to accommodate to them without our being very aware of the process, and, most importantly, winding up almost completely unaware of what we've accommodated to by winding up at a "new normal".

The start of a better way is similar to the entry point of science "The world is not as it seems". Here, it's "As a human being I'm a collection of traits and behaviors, many of which are atavistic and even detrimental to my progress". Getting aware of how useful cravings for salt, fat, sugar, caffeine, etc., turn into a problem when these are abundant and consumer companies can load foods with them....

And, Neil points out -- in books like "Amusing Ourselves To Death" and "The Disappearance Of Childhood" -- we have cravings for "news" and "novelty" and "surprise" and even "blinking", etc., which consumer companies have loaded communications channels with ...

Many of these ideas trace back to McLuhan, Innis, Ong, etc.

Bottom line: children need to learn how to use the 21st century, or there's a good chance they will lose the 21st century.


> Bottom line: children need to learn how to use the 21st century, or there's a good chance they will lose the 21st century.

Most children meet entertainment technology as early as before their first birthday, though. Many pre-teens that I see around possess smartphones and/or tablets. Most early teenagers possess multiple devices. None of these children will be able to judge what is beneficial to their future and well-being and opt for it rather than what is immediately fun and pleasing, just as most of them would live on chocolate bars and crisps if left to do so. The burden falls on the parents, a burden they often don't take up.

I myself can't think of a future other than one full of device addicts, and a small bunch that managed to liberate themselves from perennial procrastination and pseudo-socialisation only in their twenties. And while my country can prohibit certain products (food, etc.) from import and production within its own borders (e.g. genetically modified, or chemically engineered to be consumed greedily), this can't be done with websites, because (a) it's technically impossible and (b) it 'contradicts freedom of speech'. I'll ask the reader to philosophise over (b), because neither the founding fathers of the US nor the pioneers of the French Revolution, nor most of the libertarian, freedom-bringing revolutionists, had a Facebook to tag their friends' faces.

(edit: I don't want to get into a debate over freedom of speech, and don't support any form of censoring of it, though I don't want freedom of speech at the cost of the exploitation of generations and generations by some companies that use it as a shelter for themselves.)


I once said that "Television is the last technology we should be allowed to invent without a Surgeon General's warning on it"


> Kay: children need to learn how to use the 21st century, or there's a good chance they will lose the 21st century.

> Gkya: I myself can't think of a future other than one full of device addicts, and a small bunch that managed to liberate themselves from perennial procrastination and pseudo-socialisation only in their twenties.

As an infovore this worries me. If we cannot control ourselves and come up with better solutions for self-control, then the authoritarian-minded are likely to do it for us.

The Net is addictive and all those people pretending it ain't so are kidding themselves.

It's easy to imagine anti-Net campaigners in the same way as we see anti-globalization activists today.

I myself have seen the effects of good diet, exercise and meditation on a group of people, and it is quite remarkable how changed for the better people are. So there is hope!

I believe that social change (for example, phubbing becoming widely regarded as taboo) isn't fast enough to keep up with the Net's evolution. By the time a moral stance against phubbing is established, mobile phones probably won't exist. For this I think we need a technological solution which is as adaptive as an immune system, but also one which people can opt in to. Otherwise, eventually people will demand governments do things like turn off the Net at certain times during the day, or ban email after 6pm, and so on.


The introduction to technology, well, essentially I'm talking about the internet, comes so early in a kid's life that we can't just say "we should control ourselves". You can't put your kid in a room full of crisps, sweets, alcohol, drugs, and pornography, and expect them to come out ten, fifteen years later as a healthy individual who is not addicted to any of them. This is essentially what we do with the internet.

> I myself have seen the effects of good diet, exercise and meditation on a group of people, and it is quite remarkable how changed for the better people are. So there is hope!

You're an adult, I am too. We can realize: this is stealing my life. But a kid can't. And stolen days don't return. This is why I'm commenting: we'd rather raise better individuals than let them do whatever they want and hope they'll fix themselves later.


> You can't put your kid in a room full of crisps, sweets, alcohol, drugs, and pornography, and expect them to come out ten, fifteen years later as a healthy individual who is not addicted to any of them

I know this is bandied about a lot, but is this actually proven? With the exception of drugs, all of those you mention have been within easy reach for me (actually, as a Dutchman, even soft drugs were just one step away if I'd wanted them). Yet I don't consider myself addicted to any of those.


I'm not a native speaker of English, so I wonder: does "kid" not mean a person not yet adolescent? I'm referring to 0-14 year olds when I say kid. If we agree on that, and you still say "it's not proven, we can try", well then I can't do much other than hope you either don't have children, or that no child's care otherwise falls to you.


Reading his post I believe he meant the above mentioned things were within reach of him as a child (I don't believe he meant now as an adult).

" I can't do much than hoping you either don't have children or no child's responsibility is on you otherwise."

That's a strong statement to make. Implying he's unable to raise children because he'd like to see evidence that the internet actually has a negative influence on children.


I interpreted his message as he did not only want evidence for the internet, but also the other stuff I mentioned, and their effects on kids. I'm sorry if that wasn't the case.


No, I did not mean I wanted evidence of their effect on kids. I want evidence that "putting your kid in a room full of $bad_stuff" always leads to addiction, since that strikes me as nothing more than scare stories.

Good parents can raise their children correctly even with $bad_stuff present around them, that was the point I was trying to make.


> Good parents can raise their children correctly even with $bad_stuff present around them, that was the point I was trying to make.

I concur. But kids' exposure to the internet is mostly not governed by parents. They are either alone with a connected device in their rooms, away from their parents, or out of the home with a mobile device. The best the parents can do is educate their kids, but the public lacks the knowledge to do so effectively. They should be given the training to be able to educate their children, and furthermore schools should educate minors on the use of tech.

"putting your kid in a room full of $bad_stuff" will mostly lead to addiction if the parent is not there to teach the kid: this is harmful to you; not you think?


Mostly agreed, yes. But I would rephrase it as "introducing kids to $bad_stuff without guidance is a bad idea": I don't think that permanent supervision should be required. Once the novelty wears off, and the parent is confident that the kid can behave themselves even in the presence of $bad_stuff, even "putting your kid in a room full of $bad_stuff" can be fine.

And I don't mean that in the sense of "the kids are fine with their heroin syringes", but in the sense "I can leave the cookie jar on the counter and it will still be there when I leave the room".


I think there exist records of hospital mix-ups with babies, with pretty profound differences in how the children turned out depending on what environment they wound up in, but this may be mostly anecdotal. There was one case like this in Japan, but it illustrated wealth differences as opposed to what we're looking for here.

http://www.telegraph.co.uk/news/worldnews/asia/japan/1048109...

Provocative but not evidence. I did look up some twin studies but I can't find one with a clear vice/virtue environment study. Gwern is good at ferreting out this kind of information if you ask him.


I agree. It is pretty sobering.

Just yesterday, before this thread even started (I work as a part-time cleaner), I was polishing a window. Through it I saw some children in a sitting room, one of whom was literally standing centimeters away from a giant flat screen television. Glued to it.

I thought: "Fuck, they don't have a chance". Their attention spans will be torn to pieces like balls of wool by tiny kittens. Now multiply that effect with the Net + VR and you have an extraordinary psychological effect best compared to a drug.

I didn't have a television in my childhood. I read countless books, and without them, I wouldn't be sitting here, I wouldn't have done any of the things I could reasonably consider inventive or innovative. They might not be world changing things, but they were mine and my life was better for doing so.

I was speaking to a friend who has children a few months ago. He was in the process of uploading photos of his family to Facebook. I asked him whether he considered what he was doing to be a moral act, since he is for practical purposes feeding his children's biometrics into a system that they personally have not, and could not, opt in to. He was poleaxed by the thought. He was about to say something along the lines of 'well everybody's doing this' but I could visibly see the thought struck him that "wow, that's actually a really bad line of reasoning I was about to make". Instead he agreed with me, uncomfortably, but he got it.

I don't know how you get millions of people to have that kind of realization. I do think parental responsibility has a huge role though. My parents got rid of the television in the 80s. It was the right thing to do.


The thing that disturbs me about this argument is that IMHO it's a slippery slope towards "back in my day, we didn't have this new-fangled stuff". We have to be extremely careful that our arguments have more substance than that. That requires a lot of introspection, to be honest.

See, my grandparents worried that the new technology that my parents grew up with would somehow make them dumber (growing up with radio, parents getting television); my parents' generation worried that the technology we grew up with would be bad for us (too much computer, too much gaming, too much Internet). The upcoming generation of parents will grow up wondering whether VR and AR is going to ruin their kids' chances.

Yet kids ALWAYS adapt. They don't view smartphones or tablets as anything particularly out of the ordinary. It's just their ordinary. I'm certain their brains will build on top of this foundation. That's the thing - brains are extremely adaptable. All of us adapted.

There's a term for this worry - it's called 'Juvenoia':

https://www.youtube.com/watch?v=LD0x7ho_IYc

http://time.com/19818/whats-really-wrong-with-young-people-t...

Now, I'm not saying that this is a discussion that shouldn't be had - it certainly should. I just think we all need to be mindful about where our concerns might be coming from.


I never said myself that tech per se will make kids dumber. What I say is, there should be measures governing their exposure, just like there are for other things.

Just like an alcohol drinker and an alcohol addict are different, an internet user and an internet addict are different too. Just because some or most are not addicts, we can't dismiss the addiction altogether.


It's just that it seems a bit unfair to decry (or place undue burdens upon) the vast majority of responsible alcohol drinkers because we've found a few people who have an unhealthy relationship with it.

Recognizing potential dangers is a far cry from saying that there's a risk of "losing the century" because of easy access to technology and entertainment, and it strikes me as rather belittling to the younger generation.

Millennials and their children are still humans, after all, and are just as intelligent, motivated, and adaptable as every generation before them.


Who are responsible alcohol drinkers? In my country the minimum age for consumption of alcoholic products is 18. What would you think of a 10- or 15-year-old kid who's a responsible drinker?

What I'm arguing is against an analogue of this in tech. There is a certain period during which the exposure of a minor to technological devices should be governed by parents.

What do you think of adolescents who get recorded nude in chatrooms? Some of them commit suicide. What do you think of children who are bullied online? What do you think of paedophiles tricking kids online? Isn't a parent responsible for protecting a minor from such abuses?

My general argument in this thread is that we should raise our children as well as we can, and protect them from dangers that they cannot be conscious of. We certainly can't place burdens on adults, but we can try to raise adults who are not inept addicts with social deficiencies. And because most of the world's population is tech-illiterate, it falls on governments to provide education and assistance to parents, just as they do with health and education.

Most of the counter-arguments here have been strawmen, because while I'm mostly targeting children, I've been countered with arguments about adults.


So by that logic, would you say that the only reasons children should not be allowed to buy alcohol are biological development reasons?


> The thing that disturbs me about this argument is that IMHO it's a slippery slope towards "back in my day, we didn't have this new-fangled stuff".

> I just think we all need to be mindful about where our concerns might be coming from.

Basically we're on the same page.

Here is a proposition. I'll steelman the Conservative view and you tell me what you think. I promise not to claim vidya causes violence or that D&D is a leading cause of Satanism.

My proposition is that television media has meaningfully worsened our society by making it dumber. This is an artifact of the medium itself, rather than an issue with any specific content on it. To explain what I mean by dumber I must elaborate.

The television is a unidirectional medium. It contains consensus on various intellectual issues of the day and gives a description of the world I'd call received opinion. There exists no meaningful difference between the advertising that tranches people into buying products and the non-advertising that tranches people into buying ideas. Most ideas that are bought are not presented as items to be sold; they are pictured as 'givens', obvious. Most lying is done by omission. Even were all information presented truthfully, we have a faux sense of sophistication about our awareness, which is a problem. When you buy prepackaged meals at a store you are not on the way to becoming a chef; in the same way, you do not chew over the ideas presented to you, you do no mental cognition. Your state is best described as, and feels like, a hypnotic trance.

One of the problems with this is that television creates a false sense of normalcy that has no objective basis. It asks the questions and provides the answers. All debate is rhetorical debate.

It's the cognitive equivalent of 'traffic shaping' that Quality of Service mechanisms do on routers. In a way that is a much bigger lie. This concept is very similar to Moldbug's Cathedral concept. The people who work for the Cathedral don't realize they represent a very narrow range of thought on the spectrum. Their opinions cannot plausibly be of their own manufacture because one arbitrary idea is held in common with another arbitrary idea and they all hold them.

The key to understanding that this is very real and not at all abstract is that millions of people have synchronized opinions on a range of issues without any discernible cause other than the television (or radio). Why do populations of teenagers become anorexic after the introduction of television where they did not suffer before it? Synchronized opinion is always suspicious. It defies probability theory to think that my grandmother and millions of others suddenly came to the conclusion, for example, that gay marriage was a positive idea. Why do millions of conservatives think buying gold is a good idea? It is not that there is something wrong with gay marriage or buying gold. It's that there is no genuine thinking going on about any of this. There are many ways to hedge against inflation that don't involve buying gold. Why is gay marriage the morality tale of the age, and not, say, elder abuse in nursing care facilities?

Why do some things become 'issues' and not a myriad of others? How directed this is is up for debate, but what is not is that the selectivity and constraints of the medium have narrowed our perception of the world, and that has led to the thing that made us dumber: it stunted our native creativity and curiosity.

> Yet kids ALWAYS adapt. They don't view smartphones or tablets as anything particularly out of the ordinary. It's just their ordinary. I'm certain their brains will build on top of this foundation. That's the thing - brains are extremely adaptable. All of us adapted.

There does exist a series of schools in Silicon Valley. The software engineers at Google and Facebook and other firms send their children to them, and they strictly contain no computing related devices. Instead it's schooling of the old fashioned sort, from the early 20th century.

It is possible that this is juvenoia, as you suggested. But at least take into account that those parents may understand something else about electronic media and its effects on brains. After all, many of them study human attention seriously for a living.

The other thing I want to ask you is: have you ever visited, in your country, the equivalent of what we call council estates in Europe? These are places which house the poorer classes of people in our society. I've been to many of these gray, lifeless places and they all have many characteristics in common. Television is a major part of their lives and their shelves are bare of books. It is ubiquitous. In the past the working classes were much more socially and intellectually mobile. They read. They did things. Little evidence remains of that today, but it was so.

It is possible that television is like a slow poison that affects some classes more than others. You can't just say people you know are unaffected and therefore it does not matter, because it is possible you may be part of an advantaged group for which reasons may exist why they could be more immunized than most e.g. having challenging or interesting work to do. It's worth considering that all the problems I mentioned still exist without television in society but you might say the 'dose' determines whether it's medicine or poison. There is certainly a sense among many people that television has progressively gotten worse and watching old news broadcasts and documentaries it is hard not to see what they mean. I appreciate this isn't objective measurement, but comparing like with like, say James Burke's Connections with Neil deGrasse Tyson's Cosmos, the difference is obvious and the Cosmos reboot would be considered very good relative to its current competition.

Evidence for my claims could be a reduction in the number of inventions (excluding paper patents) per capita, reduced library visitations with respect to population changes, increasing numbers of younger people unable to read, evidence of decreased adventurousness or increased passiveness in the population, some metric for diminished curiosity/creativity over time. If those were mainly found wanting then I'll concede my error.

I'd be much more concerned about curiosity/creativity, than reduction in IQ or school test scores because creativity is really the key to much of what is good about human endeavor.

I'd also like to point out that you might not be able to spot the 'brain damage' so easily, since it's hard to come up with objective measures without a good control group. If it happened to most people then it's a new normal but that doesn't mean it had no effect.


Thank you, this was a wonderfully thought-provoking response (also, the first season of Connections is probably my favourite documentary of all time!).

One thing I will offer is that in my household growing up, television was positive because it was an experience that we shared as a family. We would watch TV shows together, talk about them together, laugh at them together, etc. In that sense, television brought outside viewpoints into our household and spurred conversation. I think that is one of the key factors that may differentiate between TV having good effects and TV having bad effects on different people.

In a sense, I think that although television itself isn't interactive, you could say that our family was 'interactive about' television. So we got the benefits of being able to use television in a positive way.

Thanks for reminding me of how important that was for me :)

By the way, on the limitation of television being a passive medium.... This reminds me of something I read back when I was a kid that was very profound for me. I can't recall exactly now, but I think it was in a Sierra On-Line catalogue where Roberta Williams said something about wanting her children to play adventure games rather than watch television as with adventure games, they had to be actively engaged rather than passive. This really resonated with me at the time, given that I was really getting into the Space Quest & other 'Quest games :)


> Thank you, this was a wonderfully thought-provoking response (also, the first season of Connections is probably my favourite documentary of all time!).

Thank you. I hope to meet or communicate with Mr Burke at some point soon, I know Dan Carlin had a podcast with him a little while back if you're interested in his new take on the world. Connections remains the high water mark for documentary making and it is worth reading the books. If you want to watch a documentary in a similar style I suggest The Ascent of Man.

> In a sense, I think that although television itself isn't interactive, you could say that our family was 'interactive about' television. So we got the benefits of being able to use television in a positive way.

I believe you, I am mainly thinking of the average 5 hours per day the average American (or European) spends in front of the television. The dose makes the poison!

> This really resonated with me at the time, given that I was really getting into the Space Quest & other 'Quest games

Yes, it is clear that videogaming can provide for a shared community and culture, most obviously the MMORPGs. This is not something television achieves, or if it does, it is rare, like fans of Mythbusters or Connections. In the present we are concerned with developing the foundations of the Net, like commerce or the law. But ultimately I think a Net culture will be the most valued feature we ascribe to the Net.


Who is it a problem for? Why is it a problem?


Hi Alan,

In "The Power of the Context" (2004) you wrote:

  ...In programming there is a wide-spread 1st order
  theory that one shouldn’t build one’s own tools,
  languages, and especially operating systems. This is
  true—an incredible amount of time and energy has gone
  down these ratholes. On the 2nd hand, if you can build
  your own tools, languages and operating systems, then
  you absolutely should because the leverage that can be
  obtained (and often the time not wasted in trying to
  fix other people’s not quite right tools) can be
  incredible.
I love this quote because it justifies a DIY attitude of experimentation and reverse engineering, etc., that generally I think we could use more of.

However, more often than not, I find the sentiment paralyzing. There's so much that one could probably learn to build oneself, but as things become more and more complex, one has to be able to make a rational tradeoff between spending the time and energy in the rathole, or not. I can't spend all day rebuilding everything simply because I can.

My question is: how does one decide when to DIY, and when to use what's already been built?


This is a tough question. (And always has been in a sense, because every era has had projects where the tool building has sunk the project into a black hole.)

It really helped at Parc to work with real geniuses like Chuck Thacker and Dan Ingalls (and quite a few more). There is a very thin boundary between making the 2nd order work vs getting wiped out by the effort.

Another perspective on this is to think about "not getting caught by dependencies" -- what if there were really good independent module systems -- perhaps aided by hardware -- that allowed both worlds to work together (so one doesn't get buried under "useful patches", etc.)

One of my favorite things to watch at Parc was how well Dan Ingalls was able to bootstrap a new system out of an old one by really using what objects are good for, and especially where the new system was even much better at facilitating the next bootstrap.

I'm not a big Unix fan -- it was too late on the scene for the level of ideas that it had -- but if you take the cultural history it came from, there were several things they tried to do that were admirable -- including really having a tiny kernel and using Unix processes for all systems building (this was a very useful version of "OOP" -- you just couldn't have small objects because of the way processes were implemented). It was quite sad to see how this pretty nice mix and match approach gradually decayed into huge loads and dependencies. Part of this was that the rather good idea of parsing non-command messages in each process -- we used this in the first Smalltalk at Parc -- became much too ad hoc because there was not a strong attempt to intertwine a real language around the message structures (this very same thing happened with http -- just think of what this could have been if anyone had been noticing ...)


> I'm not a big Unix fan

What is your preferred technology stack?


What's a good non-UNIX open-source operating system that's useful for day-to-day work, or at least academically significant enough that it's worth diving in to?


Here's a list of alternatives I put together to see some capabilities or traits UNIX lacked:

https://news.ycombinator.com/item?id=10957020

I think, usable day-to-day, I'd say you're down to Haiku, MorphOS, Genode, MINIX 3, and/or A2 Bluebottle. Haiku is a BeOS clone. MorphOS is one of the last Amiga-style OSes and looks pretty awesome. Genode OS is a security-oriented, microkernel architecture that uses UNIX for bootstrapping but doesn't inherently need it. MINIX 3 similarly bootstraps on NetBSD but adds a microkernel, user-mode drivers, and self-healing functions. A2 Bluebottle is the most featured version of the Oberon OS, written in a safe, GC'd language. Runs fast.

The usability of these and third party software available vary considerably. One recommendation I have across the board is to back up your data with a boot disc onto external media. Do that often. Reason being, any project with few developers + few users + bare metal is going to have issues to resolve that long-term projects will have already knocked out.


MINIX isn't bootstrapping on NetBSD; the entire goal of the system is to be a microkernel-based UNIX. It uses the NetBSD userland because you don't need to rewrite an entire UNIX userland for no reason just to change kernels.


Mental slip on my part. Thanks for the correction. I stand by the example, at least for the parts under NetBSD, like the drivers and the reincarnation server. Their style is more like the non-UNIX, microkernel systems of the past. Well, there is some precedent in the Helios operating system, but that was still a detour from traditional UNIX.

https://en.wikipedia.org/wiki/Helios_os


SqueakNOS? http://squeaknos.blogspot.com ;-) It has a native TCP/IP stack in Squeak.



The difference is that PharoNOS has a Linux running underneath, while the idea of SqueakNOS is to build a complete operating system via Squeak. In this way you can quickly hack it. There is a great page about these initiatives here: http://wiki.squeak.org/squeak/5727

BTW, prior to SqueakNOS we implemented this: http://swain.webframe.org/squeak/floppy/ (using Linux and modifying Squeak to work with SVGAlib instead of X) in just 900 KB, inspired by the QNX Demo Disk: http://toastytech.com/guis/qnxdemo.html


I was going to mention QNX Demo Disk in my UNIX alternatives comment. I think I edited it out for a weak fit to the post. It was an amazing demo, though, showing what a clean-slate, alternative, RTOS architecture could do for a desktop experience. The lack of lag in many user-facing operations was by itself a significant experience. Showed that all the freezes and huge slow-downs that were "to be expected" on normal OS's weren't necessary at all. Just bad design.

It's neat that it was the thing that inspired one of your Squeak projects. Is SqueakNOS usable day-to-day in any console desktop or server appliance context? Key stuff reliable yet?


We implemented SqueakOS while some friends implemented SqueakNOS. I don't think they are being used anywhere, but for educational purposes it is amazing that drivers and a TCP/IP stack could be implemented (and debugged!) in plain Smalltalk. There was some more information here: http://lists.squeakfoundation.org/pipermail/squeaknos/2009-M...


There's GNU, which is by definition not UNIX. ;)


That depends on your measure of worth, I'd say. Many operating systems had little academic significance at the time when it would have been most academically or commercially fruitful to invest in them. Microkernel and dependency-specific operating systems would be interesting. Or hardware-based, capability-based operating systems.


Well, in this talk it sounds like you do advocate tool building - isn't tool building a way to try elliptic orbits instead of the circular ones?

https://www.youtube.com/watch?v=NdSD07U5uBs 'Power of simplicity'


Yes, I do advocate tool building -- basically "if you can do it without getting buried, you should".


Could someone give hints/pointers that help me understand the following? "parsing non-command messages in each process ... not a strong attempt to intertwine a real language around the message structures (this very same thing happened with http"

Does that mean the messages should have been part of a coherent protocol or spec? That there should have been some thought behind how messages compose into new messages?


Smalltalk was an early attempt at non-command-messages to objects with the realization that you get a "programming language" if you take some care with the conventions used for composing the messages.


By non-command-messages do you mean that the receiver was free to ignore the message?


Yes


Akin to "signals" / "event emitters"?


If you think about the "whole system", even if it's just a Shannon channel, what do you actually need?
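
To make the non-command-message idea above a bit more concrete, here is a minimal, hypothetical sketch in Python (not Smalltalk, and nothing from Parc): the object receives structured messages, interprets the ones it understands, and is free to ignore the rest, so the message conventions themselves become the shared "language" of the system.

  # Hypothetical sketch: the receiver decides what, if anything, a message
  # means; unknown messages are simply ignored rather than raising errors.
  class Account:
      def __init__(self):
          self.balance = 0

      def receive(self, message):
          kind = message.get("kind")
          if kind == "deposit":
              self.balance += message["amount"]
          elif kind == "report":
              return {"kind": "balance", "amount": self.balance}
          # anything else is ignored -- the object is free to do that

  a = Account()
  a.receive({"kind": "deposit", "amount": 10})
  a.receive({"kind": "blink", "rate": 2})   # ignored, no error
  print(a.receive({"kind": "report"}))      # {'kind': 'balance', 'amount': 10}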


I tend to do both in parallel and the first one done wins.

That is, if I have a problem that requires a library or program, and I don't know of one, I semi-simultaneously try to find a library/program that exists out there (scanning forums, googling around, reading stack overflow, searching github, going to language repositories for the languages I care about, etc) and also in parallel try to formulate in my mind what the ideal solution would look like for my particular problem.

As time goes by, I get closer to finding a good enough library/program and closer to being able to picture what a solution would look like if I wrote it.

At some point I either find what I need (it's good enough or it's perfect) or I get to the point where I understand enough about the solution I'm envisioning that I write it up myself.


Yes. If it takes me longer to figure out how to use your library or framework than to just implement the functionality myself, there is no point in using the library.

Some people claim you should still use the 3rd party solution because of the cost of supporting the extra code you have written. But bugs can exist in both my code and the 3rd party code and I understand how to fix bugs in my code much more easily.


Other points of consideration: My coworkers might not already know some library, but they definitely won't know my library. My coworker's code is just about as "3rd party" as any library - as is code I wrote as little as 6 months ago. Also my job owns that code, so rolling my own means I need to write another clone every time I switch employers - assuming there are no patents or overly litigious lawyers to worry about.

But you're of course correct that there is, eventually, a point where it no longer makes sense to use the library.

> Some people claim you should still use the 3rd party solution because of the cost of supporting the extra code you have written. But bugs can exist in both my code and the 3rd party code and I understand how to fix bugs in my code much more easily.

The problem is I got so tired of fixing bugs in coworker / former coworker code that I eventually replaced their stuff with off the shelf libraries, just so the bugs would go away. And in practice, they did go away. And it caught several usage bugs because the library had better sanity checks. And to this day, those former coworkers would use the same justifications, in total earnestness.

I've never said "gee, I wish we used some custom bespoke implementation for this". I'll wish a good implementation had been made commonly available as a reusable library, perhaps. But bespoke just means fewer eyes and fewer bugfixes.


It's all trade-offs.

If there happens to be a well-tested third party library that does what you want, doesn't increase your attack surface more than necessary, is supported by the community, is easy to get up and running with, and has a compatible license with what you are using it in, then by all means go for it.

For me and my work, I tend to find that something from the above list is lacking enough that it makes more sense to write it in-house. Not always, and not as a rule, but it works out that way quite a bit.

I would also argue that if coworkers couldn't write a library without a prohibitive number of bugs, then they won't be able to write application or glue code either. So maybe your issue wasn't in-house vs third party libraries, but the quality control and/or developer aptitude around you.


You're not wrong. The fundamental issue wasn't in-house vs third party libraries.

The developers around me tend to be inept at time estimation. They completely lack that aptitude. To be fair, so do I. I slap a 5x multiplier onto my worst case estimates for feature work... and I'm proud to end up with a good average estimate, because I'm still doing better than many of my coworkers at that point. Thank goodness we're employed for our programming skills, not our time estimation ones, or we'd all be unemployable.

They think "this will only take a day". If I'm lucky, they're wrong, and they'll spend a week on it. If I'm unlucky, they're right, and they'll spend a day on it - unlucky because that comes with at least a week's worth of technical debt, bugs, and other QC issues to fix at some point. In a high time pressure environment - too many things to do, too little time to do it all in even when you're optimistic - and it's understandable that the latter is frequently chosen. It may even be the right choice in the short term. But this only reinforces poor time estimation skills.

The end result? They vastly underestimate the cost of supporting the extra code they'll write. They make the "right" choice based on their understanding of the tradeoffs, and roll their own library instead of using a 3rd party solution. But as we've just established, their understanding was vastly off base. Something must give as a result, no matter how good a programmer they are otherwise: schedule, or quality. Or both.


If you don't have the time or energy for such projects then you CAN'T do them. The answer is there.


Isn't the answer contained in the quote? Do a cost/benefit analysis of the "amount of time and energy" that would go "down these ratholes" versus the "the time not wasted in trying to fix other people’s not quite right tools."


The real reason to do the 2nd order is to get new things rather than incrementing on older, poorer ideas.


But how can you assess this until you have gone down those rat holes?


The Lean Startup advocates proportional investment in solutions. When the problem comes up (again, after deciding to take this approach), determine what percentage of your week or month it took, and invest that amount of time in fixing it, right now. My interpretation would be: spend that time trying to solve part of it. For example, if a recurring problem ate two hours of a forty-hour week (5%), spend about 5% of the next week chipping away at it. Every time that problem comes up, keep investing in that thing; that way, if you've made the wrong call you only waste a small portion of your time, but you are also taking steps to mitigate it if it becomes more of an issue in the future.


Having gone down several myself, I can say it's hard. You lose time. You have to accept you've lost time and learn how not to do it in the future.

My advice is to collaborate with people who are much, much smarter than you and have the expectation that things actually get done because they know they could do it. You learn what productivity looks like first, at the most difficult and complex level you're capable of.

That sets the bar.

Everything has to be equal to or beneath that, unless your experience tells you you'll be able to do something even greater (possibly) with the right help or inspiration.


You gain experience by going down similar rat holes, until you feel that you can adequately compare the situation you are in now to an experience in the past.

You'll still be wrong, but perhaps less often.


For many particular examples, there have already been enough rathole spelunkers to provide useful data. Maybe start looking in the places where there isn't already useful data?


Any area in which enough such spelunkers are found is unlikely to be significantly improved by adding your own effort.


Agreed. It's often much, much harder to articulate why an idea is bad or a rat hole. You just move on.

I've come up with an explanation by analogy. You can demonstrate quite easily in mathematics how you can create a system of notation or a function that quickly becomes impossible to compute: a number that is too large, or an algorithm that would take an infinite amount of time and resources to solve...

It seems to be in nature that bad ideas are easy. Good ideas are harder, because they tend to be refinements of what already exists and what is already good.

So pursue good ideas. Pursue the thing that you have thought about and decided has the best balance between values and highest chance to succeed. Sometimes it's just a strong gut feeling. Go for it, but set limits, because you don't want to fall prey to a gut feeling originating from strong intuition but an equally strong lack of fundamental understanding.


I think you have to weigh your qualms against the difficulty of implementation. They're both spectra, one from 'completely unusable' to 'perfect in its sublime beauty', the other from 'there's a complete solution for this' to 'I need to learn VHDL for this'.

There's some factors that help shift these spectra.

Configurability helps. If I can change a config to get the behavior I want, that is incredible, thank you.

Open source helps. Getting to see how they did it reduces reverse engineering work immensely if I ever have to dig in.

Modularity helps. If I can just plop in my module instead of doing brain surgery on other modules, that makes it a lot easier.

Good components help. Say I need a web scraper and know Python. Imagine there was only Selenium and not even urllib, but some low-level TCP/IP library: I'd get a choice between heavy but easy or slim but high maintenance. But there's the sexy requests library, and there is the beautiful beautifulsoup4. I tell requests what to get, tell bs4 what I want from it, and I'm done (a minimal sketch of this follows at the end of this comment).

Another great example for this is emacs. python-mode + elpy (almost complete solution), hide-show mode, electric-pair mode, and if anything still bugs me, it is fixable. If it were OOP, I'd inherit a lot of powerful functions, but I can always override anything that is wrong.

Expertise helps. If I have written a kernel module, that's another avenue to solving problems I have.

Expertise is a special case here worth more attention. It's the main thing that changes for any single programmer, and can skew this equation immensely. Expertise grows when you struggle with new things. Preferably just outside what you know and are comfortable with.

Considering that, DIY whenever you can afford to DIY (eg. pay the upfront cost of acquiring expertise), DIY whenever it is just outside what you can do, or DIY when it makes a lot of sense (eg. squarely in your domain of expertise, and there's a benefit to be had).

In concrete examples, that means don't DIY when you're on a tight deadline, don't attempt to write your own kernel after learning about variables, don't write your own parser generator when say, YACC, solves your problem just fine.
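
To make the "good components" point above concrete, here is a rough Python sketch using requests and BeautifulSoup; the URL and the CSS selector are made up, so treat it as an illustration rather than a working scraper.

  # Rough illustration of the requests + BeautifulSoup point above.
  # The URL and the CSS selector are placeholders, not a real target.
  import requests
  from bs4 import BeautifulSoup

  response = requests.get("https://example.com/articles")
  response.raise_for_status()

  soup = BeautifulSoup(response.text, "html.parser")
  for link in soup.select("a.article-title"):
      print(link.get_text(strip=True), link["href"])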


Specifically with regards to languages and OS's, I wonder how much that cost/benefit equation shifts as things have become so much more complex, and as we continue to pile on abstraction layer after abstraction layer.


I think the problem is not complexity but size. Most of the source for the Linux kernel is in the drivers, for instance. As for languages, most of the weight is in the libraries.


Hi Alan,

I have three questions -

1. If you were to design a new programming paradigm today using what we have learnt about OOP what would it be?

2. With VR and AR (Hololens) becoming a reality (heh) how do you see user interfaces changing to work better with these systems? What new things need to be invented or rethought?

3. I also worked at Xerox for a number of years although not at PARC. I was always frustrated by their attitude to new ideas and lack of interest in new technologies until everyone else was doing it. Obviously businesses change over time and it has been a long time since Xerox were a technology leader. If you could pick your best and worst memories from Xerox what would they be?

Cheers for your time and all your amazing work over the years :)


Let me both acknowledge your questions, and also acknowledge that this forum (the media authoring tools) are not in scale with the needed answers ...


Perhaps a reddit AMA would be better? They have a much more flexible/powerful comment system.

Edit: Not sure why I am getting down voted for making a suggestion. Oh well.


I like the vibes here


Or maybe a Quora session.


A lot more good activity here than on Quora ...


Quora has some onerous policies, unfortunately: https://twitter.com/waxpancake/status/453958676529696769

HN is an excellent venue, but is necessarily text oriented, which is an OK tradeoff I think.

My next project after Stack Overflow, Discourse, is a 100% open source, flexible, multimedia-friendly discussion system. It's GPL V2 on the code side, but we also tried to codify Creative Commons as the default license in every install, so discussion replies belong to the greater community: https://discourse.org

(Surprisingly, the default content licenses for most discussion software tend to be rather restrictive.)


Could you afterwards build a discussion platform to find (partial) agreement on various political and other topics? That seems like it would have a huge impact and is really missing... I thought about starting something like that but never got to it.


Still, there seems to be only a sandbox install. Why can't we have a Discourse just like Stack Overflow, only with technical discussions allowed instead of attacked by both the mods and the rules?


I'd be curious if he's planning on returning to Croquet/OpenCobalt with the VR revolution.


Come to think of it, AltSpaceVR on the HTC Vive looks a lot like Croquet.

I think Google Glass should've been held back until VR/Augmented Reality gets established. Many Croquet style roving "viewports" projected from Google Glass feeds in an abstracted 3D model of a real world location would be a great way to do reporting on events.


1. After Engelbart's group disbanded it seemed like he ended up in the wilderness for a long time, and focused his attention on management. I'll project onto him and would guess that he felt more constrained by his social or economic context than he was by technology, that he envisioned possibilities that were unattainable for reasons that weren't technical. I'm curious if you do or have felt the same way, and if you have any intuitions about how to approach those problems.

2. What are your opinions on Worse Is Better (https://www.dreamsongs.com/RiseOfWorseIsBetter.html)? It seems to me like you pursue the diamond-like jewel, but maybe that's not how you see it. (Just noticed you answered this: https://news.ycombinator.com/item?id=11940276)

3. I've found the Situated Learning perspective interesting (https://en.wikipedia.org/wiki/Situated_learning). At least I think about it when I feel grumpy about all the young kids and Node.js, and I genuinely like that they are excited about what they are doing, but it seems like they are on a mission to rediscover EVERYTHING, one technology and one long discussion at a time. But they are a community of learning, and maybe everyone (or every community) does have to do that if they are to apply creativity and take ownership over the next step. Is there a better way?


It used to be the case that people were admonished to "not re-invent the wheel". We now live in an age that spends a lot of time "reinventing the flat tire!"

The flat tires come from the reinventors often not being in the same league as the original inventors. This is a symptom of a "pop culture" where identity and participation are much more important than progress...


This is incredibly hard hitting and I'm glad I read it, but I'm also afraid it would "trigger" quite a few people today.

What steps can a person take to get out of pop culture and try to get into the same league as the inventors? Incredibly stupid question to have to ask but I feel really lost sometimes.


I think it is first a recognition problem -- in the US we are now embedded in a pop culture that has progressed far enough to seriously hurt places that hold "developed cultures". This pervasiveness makes it hard to see anything else, and certainly makes it difficult for those who care what others think to put much value on anything but pop culture norms.

The second is to realize that the biggest problems are ones of imbalance. Developed arts have always needed pop arts for raw "id" and blind pushes of rebellion. This is a good ingredient -- like salt -- but you can't make a cake just from salt.

I got a lot of insight about this from reading McLuhan for very different reasons -- those of media and how they form an environment -- and from delving into Anthropology in the 60s (before it got really politicized). Nowadays, books by "Behavioral Economists" like Kahneman, Thaler, Ariely, etc. can be very helpful, because they are studying what people actually do in their environments.

Another way to look at it is that finding ways to get "authentically educated" will turn local into global, tribal into species, dogma into multiple perspectives, and improvisation into crafting, etc. Each of the starting places stays useful, but they are no longer dominant.


What steps would a group of people (civilization?) need to take in order to make progress here? When choices are abundant, the masses have been enabled, and yet knowledge is still at a premium?


All cultures have a lot of knowledge -- the bigger influences are contextual and epistemological (i.e. "points of view" and "stance", and "what is valued", etc.)

Self-awareness of what we are ("from Mars") is the essential step, and it's what real education needs to be about.


What does "from Mars" mean here?


It means "outside our human prejudices about ourselves". As though we actually were a valid object of real science....


Hi Alan,

1. What do you think about the hardware we are using as the foundation of computing today? I remember you mentioning how cool the architecture of the Burroughs B5000 [1] was, being designed to run higher-level programming languages on the metal. What should hardware vendors do to make hardware that is friendlier to higher-level programming? Would that help us be less dependent on VMs while still enjoying silicon-level performance?

2. What software technologies do you feel we're missing?

[1] https://en.wikipedia.org/wiki/Burroughs_large_systems


If you start with "desirable process" you can eventually work your way back to the power plug in the wall. If you start with something already plugged in, you might miss a lot of truly desirable processes.

Part of working your way back to reality can often require new hardware to be made or -- in the case of the days of microcode -- to shape the hardware.

There are lots of things vendors could do. For example: Intel could make its first level caches large enough to make real HLL emulators (and they could look at what else would help). Right now a plug-in or available FPGA could be of great use in many areas. From another direction, one could think of much better ways to organize memory architectures, especially for multi-core chips where they are quite starved.

And so on. We've gone very far down the road of "not very good" matchups, and of vendors getting programmers to make their CPUs useful rather than the exact opposite approach. This is too large a subject for today's AMA.


> Intel could make its first level caches large enough to make real HLL emulators

If you make the L1 cache larger, it will become slower and will be renamed to "L2 cache". There are physical reasons why the L1 cache is not larger, even though programs written in non-highlevel languages would profit from larger caches (maybe even moreso than HLL programs).

> Right now a plug-in or available FPGA could be of great use in many areas.

FPGAs are very, very HLL-unfriendly, despite lots of effort from industry and academia.


Thanks for the attention, Alan! I love the reverse-engineering-driven-by-desire approach :D

We need to find ways to free ourselves from the cage of "vendors getting programmers to make their CPUs useful rather than the exact opposite approach" <- meditate on this we all should


Have you looked into the various Haskell/OCaml to hardware translators people have been coming up with the past few years?

It seems like it's been growing, and several FPGAs are near that PnP status. In particular, the notion of developing a compile-time-proved RTS using continuation passing would be sweet.

Even with newer hardware it seems we're still stuck in either dynamic mutable languages or functional static ones. Any thoughts on how we could design systems incorporating the best of both using modern hardware capacities? Like... say, a reconfigurable hierarchical element system where each node is an object/actor? Going out on a bit of a limb with that last one!


Without commenting on Haskell, et al., I think it's important to start with "good models of processes" and let these interact with the best we can do with regard to languages and hardware in the light of these good models.

I don't think the "stuckness" in languages is other than like other kinds of human "stuckness" that come from being so close that it's hard to think of any other kinds of things.


Thanks! That helps reaffirm my thinking that "good models of processes" are important, even though implementations will always have limitations. Good to know I'm not completely off base...

A good example for me has been the virtual memory pattern, where from a process's point of view you model memory as an ideal, unlimited virtual space. Then you let the kernel implementation (and hardware) deal with the practical (and difficult) details. Microsoft's Orleans implementation of the actor model has a similar approach, which they call "virtual actors", that is interesting as well.

My own stuckness has been an idea of implementing processes using hierarchical state machines, especially for programming systems of IoT type devices. But I haven't been able to figure out how to incorporate type check theorems into it.
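
(To make that concrete, here is a minimal sketch in Python of the kind of hierarchical state machine I mean. The sensor, its states, and its events are all made up for illustration; a real device would also need timers, guards, entry/exit actions, and the type-checking story I'm still stuck on.)

    # Minimal hierarchical state machine: a state may delegate
    # unhandled events to its parent (super) state.
    class State:
        def __init__(self, name, parent=None, handlers=None):
            self.name = name
            self.parent = parent
            self.handlers = handlers or {}   # event name -> next state name

        def next_state(self, event):
            # Walk up the hierarchy until some ancestor handles the event.
            state = self
            while state is not None:
                if event in state.handlers:
                    return state.handlers[event]
                state = state.parent
            return None                      # unhandled events are ignored

    class Machine:
        def __init__(self, states, initial):
            self.states = {s.name: s for s in states}
            self.current = self.states[initial]

        def dispatch(self, event):
            target = self.current.next_state(event)
            if target is not None:
                self.current = self.states[target]
            return self.current.name

    # Hypothetical sensor: "idle" and "reporting" share an "online"
    # parent, so "disconnect" is handled in one place for both.
    online    = State("online", handlers={"disconnect": "offline"})
    idle      = State("idle", parent=online, handlers={"wake": "reporting"})
    reporting = State("reporting", parent=online, handlers={"sleep": "idle"})
    offline   = State("offline", handlers={"connect": "idle"})

    m = Machine([online, idle, reporting, offline], initial="idle")
    assert m.dispatch("wake") == "reporting"
    assert m.dispatch("disconnect") == "offline"   # inherited from "online"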


At my office a lot of the non-programmers (marketers, finance people, customer support, etc) write a fair bit of SQL. I've often wondered what it is about SQL that allows them to get over their fear of programming, since they would never drop into ruby or a "real" programming language. Things I've considered:

    * Graphical programming environment (they run the queries
      from pgadmin, or Postico, or some app like that)
    * Instant feedback - run the query get useful results
    * Compilation step with some type safety - will complain
      if their query is malformed
    * Are tables a "natural" way to think about data for humans?
    * Job relevance
Any ideas? Can we learn from that example to make real programming environments that are more "cross functional" in that more people in a company are willing to use them?


SQL is declarative. Compare:

    names = []
    for user in table_users:
        if user.is_active:
            names.append(user.first_name)
vs:

    SELECT first_name FROM users_table
    WHERE is_active
It's unfortunate that the order of the clauses in SQL is "wrong" (e.g. you should say FROM, WHERE, SELECT: Define the universe of relevant data, filter it down, select what you care about), but it's still quite easy to wrap your mind around. You are asking the computer for something, and if you ask nicely, it tells you what you want to know. Compare that to procedural programming, where you are telling the computer what to do, and even if it does what you say, that may not have been what you actually wanted after all.


> It's unfortunate that the order of the clauses in SQL is "wrong"

SQL is written goal-oriented.

You start with what you want (the goal). Then you specify from where (which can also be read as "what", since each table generally describes a thing) and finally you constrain it to the specific instances you care about.

SELECT the information I want FROM the thing that I care about WHERE condition constrains results to the few I want

Having said that, I would personally still prefer it in reverse like you say. I can see the value of how SQL does it, though, especially for non-programmers who think less about the process of getting the results and more about the results they want (because they haven't been trained to think of the process, like programmers have).

It makes sense for someone who isn't thaaaaat technical to start with "well, I want the name and salary of the employee but only those that are managers": SELECT name, salary FROM employee WHERE position = 'manager'

Admittedly even that isn't perfect and I assume that it wouldn't take much for someone to learn the reverse.


Along this point, C# and VB.NET have SQL-like expressions that can be used for processing, called LINQ [1]. They even get the order of the clauses correct!

A feature like this may help your programmers who are used to thinking in terms of filter -> select -> order.

[1] https://msdn.microsoft.com/en-us/library/bb397927.aspx


Yes! Absolutely what I was thinking of when I wrote this :) Getting that right is one of my favorite parts of LINQ.


Ecto (Elixir) does the from-in-where syntax as well.


> Compare that to procedural programming, where you are telling the computer what to do, and even if it does what you say, that may not have been what you actually wanted after all.

Procedural vs. functional phrasing in no way changes the basic fact that if you ask a computer the wrong question it'll give you the wrong result.

"go through the list of all users and add the ones which are active to a new list"

vs.

"the list I want contains all active users from the list of all users"


In imperative you don't ask questions but give instructions.

As long as the instructions are longer than the question (and they often are, even in your example ;)), you are bound to make more errors here.

Plus, it requires some understanding of how this damn machine works in the first place.

When turning questions into instructions is decidable, it pays off to automate it.


It may be that it's easier for people to define the desired result set, then tweak the query until it gives them what they want.


To play devil's advocate, Prolog is considered much more similar to SQL than any other language, and I suspect it has an extremely high learning cost. That may be me being biased due to learning procedural languages first. At the same time, I consider myself well versed in SQL.


I think Prolog suffers in that comparison mostly because of its much more ambitious scope. Most non-developer/DBA people have no concept of what a SQL query is actually doing, whereas most nontrivial Prolog programs require conceptualizing the depth-first-search you're asking the language to perform in order to get it right. If you restricted your Prolog world to the kind of "do some inference on a simple family tree database of facts" that people first learn, Prolog would be pretty easy too.


I fail to see a meaningful difference between these two approaches, especially if we transform the first one into a list comprehension:

    [user.first_name for user in table_users if user.is_active]


But a list comprehension is a declarative construct, which can be best appreciated when porting some list comprehensions into loops. Especially nested comprehensions.


Totally meaningful difference! With the list comprehension, you're still telling the machine how to go about getting the data; there is an explicit loop construct. With SQL, I'm simply declaring what results I want, and the implementation is left to the execution engine.

For instance, the SQL query can be parallelized, but not so with the Python list comprehension. If you wanted to create a version that could be run in parallel in Python, you'd have to do it with a map()/filter() construct. Ignoring readability for a sec (pretend it's nice and elegant, like it would be in e.g. Clojure), you are still specifying how the machine should accomplish the goal, not the goal itself.

    filter(lambda x: x is not None, map(lambda u: u.first_name if u.is_active else None, table_users))
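
(For instance -- just a sketch, assuming `table_users` and its user objects exist and can be pickled -- a parallel version still spells out the "how"; it just hands the mechanics to a worker pool:)

    from multiprocessing import Pool

    def first_name_if_active(user):
        # Runs in a worker process; inactive users map to None.
        return user.first_name if user.is_active else None

    def active_first_names(table_users):
        with Pool() as pool:
            mapped = pool.map(first_name_if_active, table_users)
        return [name for name in mapped if name is not None]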


I teach SQL to my journalism students. Example exercise:

http://2015.padjo.org/tutorials/sql-walks/exploring-wsj-medi...

My main reason for teaching it was that it was a skill that helped me immensely as a journalist, in terms of being able to do data analysis. Because I learned it relatively late in my career, I thought it'd be hard for the students but most of them are able to get it.

Even though I use relatively little SQL in my day to day work, it's my favorite thing to teach to novices. First, it has a similar data model to spreadsheets, so it feels like a natural progression. Secondly, for many students, this is the first time that they'll have done "real" programming and the first time that they learn how to tell a computer to do something rather than learn how to use a computer. In Excel, for example, you double click a file and the entire thing opens. With SQL, you're required to not just specify the database and table, but also each and every column...it's annoying at first, but then you realize that there is power in being explicit.

The main advantage of teaching SQL over, say R, as a first language is that SQL's declarative syntax is easy to follow AND you can do most of what you need with a limited subset of the language...for instance, I don't have to teach variables and loops and functions...which is good because I don't even know how to really do those in SQL (just haven't had the need when I can work from R or Pandas).

When a beginner student fucks up a basic Python script, there are any number of reasons for the failure that are beyond the student's expected knowledge. When a novice student fucks up a SQL query...it's easier to blame the mistake on the student (e.g. misspelling of names/syntax).


What are the main factors that encourage (and help) non-programmers to use SQL?

We provide a low-code platform (SQL) to organize data and build custom applications as per specific workflow requirements. We are assuming that teaching/educating/training combined with lots of sample SQL code and real-world examples is helpful to non-programmers using SQL.

Sample SQL is available at https://mydataorganizer.com/MyDataOrganizer/QuarterDatesCalc...

Thanks, Neal


My guess would be that there is a lot of interesting public data available in SQL/CSV/Excel formats. If a journalist can browse that data efficiently they can probably find some interesting stories and leads.


Just a thought: Is it mostly select statements that your colleagues write? Because if they do, they might not fear accidentally altering the data. I found that new programmers can get confused by the difference between things that are immutable and those that aren't.


>>I've often wondered what it is about SQL that allows them to get over their fear of programming

That's barely programming. Even by the most lenient definition, what they do isn't programming.

Firstly, SQL queries are a little like Excel macros: they lower the barrier to entry for basic twiddling. Got a SQL client (Toad, etc.)? You can throw together a snippet or two quickly. Anything beyond that gets difficult: tricky joins, subqueries, troubleshooting big queries, optimization problems, etc. Beyond this, writing reusable code, test discipline, and a range of other tasks that make code run for years are what make up your everyday work as a programmer.

Sure, you could saw a log of wood once in a while, but don't confuse that with being a full-time carpenter.


Why not ask them? It's an interesting question, and I've noticed similar things with business analysts I've worked with in the past.


As someone pretty close to this camp, it comes down to your last bullet point - needing to do it, in my opinion. A smaller subset of those people will also learn VBA for the same reason: it helps them get their job done. The benefit those two have is that they are either built into the tools already (VBA), or a DBA does most of the setup and the user mostly just runs queries against it without having to worry too much about indexing, performance, schemas, etc. (SQL). If I were to try to turn them on to Python, it'd be an effort to get it installed and then get them to use the command line.


With SQL, you get a complete solution to your problem immediately (the data you want is returned). So, high value return on effort motivates people to learn it.


What do you think of Bret Victor's work? (http://worrydream.com/) Or Rich Hickey?

Who do you think are the people doing the most interesting work in user interface design today?


I love Bret Victor's work!

He is certainly one of the most interesting and best thinkers of today.


Aren't Alan Kay and Bret Victor working together at SAP currently?


Technically, at HARC - https://blog.ycombinator.com/harc


They collaborate together at YCR / HARC!


YCR is not "my group" -- I'm very happy to have helped set up HARC! with its very impressive group of Principal Investigators (including Bret).


Hi Alan,

Previously you've mentioned the "Oxbridge approach" to reading, whereby--if my recollection is correct--you take four topics and delve into them as much as possible. Could you elaborate on this approach (I've searched the internet, couldn't find anything)? And do you think this structured approach has more benefits than, say, a non-structured approach of reading whatever of interest?

Thanks for your time and generosity, Alan!


There are more than 23,000,000 books in the Library of Congress, and a good reader might be able to read 23,000 books in a lifetime (I know just a few people who have read more). So we are contemplating a lifetime of reading in which we might touch 1/10th of 1% of the extant books. We would hope that most of the ones we aren't able to touch are not useful or good or etc.

So I think we have to put something more than randomness and following links to use here. (You can spend a lot of time learning about a big system like Linux without hitting many of the most important ideas in computing -- so we have to heed the "Art is long and Life is short" idea.)

Part of the "Oxbridge" process is to have a "reader" (a person who helps you choose what to look at), and these people are worth their weight in gold ...


The late Carl Sagan had a great sequence in the original Cosmos where he made a similar point about how many books one could read in a lifetime:

  If I finish a book a week, I will read only a few thousand
  books in my lifetime, about a tenth of a percent of the 
  contents of the greatest libraries of our time. The trick 
  is to know which books to read.


General question about this figure, which I've seen before:

> read 23,000 books in a lifetime

As a very conservative lower bound, a person who lives to the age of 80 would have to read 0.79 books per day, from the day they were born, to reach this figure.

Or, to put it another way, who has read 288+ books in the last year?

I'm quite sceptical about this figure. Any thoughts as to how this might be possible? Are the people Alan mentions speed-reading? Anyone else know similarly prolific readers?


Yes, it is possible. It is partly developing a kind of fluency that is very similar to sight-reading music (this is a nice one to think about because you really have to grok what is there to do it, and you have to do it in real time at "prima vista").

Doing a lot of it is one of the keys! Doing it in a way that various short and long-term memories are involved is another key (rapid reading with comprehension of both text and music is partly a kind of memorization and buffering, etc.)

I don't think I've read 23,000 books in 76 years, but very likely somewhere between 16,000 and 20,000 (I haven't been counting). Bertrand Russell easily read 23,000 books in his lifetime, etc.


I was late to this and didn't expect a reply, so thanks for taking the time to come back and respond!

I agree with the practice, as for some periods I've noticed an increase in speed when I've been consistently reading every day.

Regarding the second point - short and long-term memory - do you have a link or other suggestion for where to learn more, please?


There was quite a bit of discussion about this on the HN gig about my long ago "reading list"



As someone who has read at least one book per day, if not more, since the age of 6, yes, it is possible. I can read between 100 and 200 pages per hour, depending on the book.

You reach a storage and money problem fast (ebooks are a savior nowadays). And you tend to have multiple books open at the same time.

How does it work? There are several strategies. First, I read fast. Experience and training make you read really fast. Secondly, you get a grasp of how things work and what the writer has to say. In a fiction book, it is not unusual for me to not read a chapter or two because I know what will happen inside.

Finally... good writers help. Good writers make reading a breeze and are faster to read. They present ideas in a concise and efficient way that follows the flow of thinking.

I will gladly take more questions if you have some :)


> In a fiction book, it is not unusual for me to not read a chapter or two because I know what will happen inside.

This is ridiculous. It doesn't count as reading if you skip whole chapters.


Hell, I 'read' whole books by just reading the back cover! This way, I get through hundreds of books every time I visit the library!


Well, to be honest, if it is badly written and contains nothing of interest...


What motivates you? Do you ever apply the knowledge you've gotten this way (do you even care)?


Multiple things. First, it is something I like.

Secondly, it is the only way I can absorb information in a way that works. Talks, videos, podcasts, etc. are too slow for me. They lack a good throughput of information and meaning, which means I tend to either drop out or complete in my head what the speaker is saying.

About applying knowledge: yes, every day, in my life. Once you hit a good amount of knowledge and have a nice way to filter it, think about it, and deal with it, things become nice. Understanding a problem comes faster. You can draw links between different situations or use ideas from other fields in yours.

Knowledge is rarely lost.


Thanks. What type of training did you do?

I'm not keen on skipping chapters! Do you do the same with non-fiction?

Another question - how do you keep track of what you've read? (would be happy to hear from others esp. Alan on the same topic)


As training... I read. That is all. I began when I was 5. Never stopped. So I have nearly always been like that. The more you read, the more you train your brain to read, and your mind to understand how to deal with knowledge and information: filter it, classify it, absorb it, apply it.

For non-fiction, yes, it happens. Lots of books just repeat the same thing over and over again. When you begin to read a chapter and can complete what will be said in the next 20 pages just from your understanding of the whole situation, reading it is a waste of time. And it would make me bored and drop out of "The Zone".

I keep track in my brain. I have the advantage of always being able to remember whether I have read something just by looking at the back cover and the first lines. I have yet to forget a book I have read. I cannot remember all the technicalities, of course, but enough to know whether I have read it before or not.

Anyway, I reread the books I really like or need, when needed. Mainly during vacations.


It depends on your definition of "reading a book."

Wait, what?

I've been reading a book called, I kid you not, "How to Read a Book: The Classic Guide to Intelligent Reading."

Adler and Doren identify four levels of reading:

1. Elementary: "What does the sentence say?" This is where speed can be gained

2. Inspectional: "What is the book about?" Best and most complete reading given a limited time. Not necessarily reading a book from front to back. Essentially systematic skimming.

3. Analytical: Best and most complete reading given unlimited time. For the sake of understanding.

4. Synoptical: Reading many books of the same subject at once, placing them in relation to one another, and constructing an analysis that may not be found in any of the books.

Amazon link for those interested: https://www.amazon.com/How-Read-Book-Intelligent-Touchstone/...


Recent research (along with past research) has cast doubt on the plausibility of extreme speed reading [1].

I don't mean to contradict Alan; no doubt he's a fast reader. But if you're actually reading an entire book every day or two, you're spending a lot of every day reading.

[1] http://psi.sagepub.com/content/17/1/4


Was it in The Future of Reading [1] perhaps? From page 6:

In a very different approach, most music and sports learning only has contact with a one on one expert once or twice a week, lots of individual practice, group experiences where “playing” is done, and many years of effort. This works because most learners really have difficulty absorbing hours of expert instruction every week that may or may not fit their capacities, styles, or rhythms. They are generally much better off spending a few hours every day learning on their own and seeing the expert for assessment and advice and play a few times a week.

A few universities use a process like this for academics—sometimes called the “tutorial system”, they include Oxford and Cambridge Universities in the UK.

[1] http://www.vpri.org/pdf/future_of_reading.pdf


Hi, I have a few questions about your STEPS project:

- Is there a project that is the continuation of the STEPS project?

- What is your opinion of the Elm language?

- How do you envision all the good research from the STEPS model could be used for building practical systems?

- STEPS focused on personal computing, do you have a vision on how something similar could be done for server-side programming?

- Where can I find all the source code for the Frank system and the DSLs described in the STEPS report?


Apologies for rambling on a bit - but I also have some questions about VPRI. As far as I can gather, it was never the intention to publish the entire system (the whole stack needed to get "Frank" running)? If so, I'd like to know why not. Were you afraid that the prototypes would be taken "too seriously" and draw focus away from the ideas you wanted to explore?

The VPRI reports, and before that some of the papers on Croquet (especially the idea of "teatime" which might be described as event-driven, log-based, relative time with eventual data/world-consistency) are fascinating, and I'm grateful for them being published. Also the Ometa-stuff[o] is fascinating (if anything, I think it's gotten too little mind-share).

It seems to me, that we've evolved a bit, in the sense that some things that used to be considered programming (display a text string on screen), no longer is (type it into notepad.exe) -- it's considered "using a computer". At the same time some things that were considered somewhat esoteric is becoming mainstream: perhaps most importantly the growing (resurging?) trend that programming really is meta-programming and language creation.

ReactJS is a mainstream programming model that fuses html, css, javascript and at least one templating language - and in a similar vein we see great adoption of "transpiled" languages, such as coffeescript, typescript, clojurescript and more. HN runs on top of Arc, which is a lisp that's been bent hard in the direction of http/html. I see this as a bit of an evolution from when the most common DSLs people were writing for themselves were ORMs - mapping some host language to SQL.

In your time with VPRI - did you find other new patterns or principles for meta-programming and (micro) language design that you think could/should be put to use right now?

Other than the web developers' tendency to reinvent m4 at every turn, in order to program html, css and js at a "higher" level, and the before-mentioned ORM trends -- the only somewhat mainstream system I am aware of that has a good toolkit for building "real" DSLs is Racket Scheme (which shows if one contrasts something like Sphinx, which is a fine system, with Racket's scribble[s]).

Do you think we'll continue to see a rise of meta-programming and language design as more and more tools become available, and it becomes more and more natural to do "real" parsing rather than ad-hoc munging of plain text?

[o] https://github.com/alexwarth/ometa-js

[s] https://docs.racket-lang.org/scribble/getting-started.html

http://lambda-the-ultimate.org/node/4017


Hi Alan,

On the "worse is better" divide I've always considered you as someone standing near the "better" (MIT) approach, but with an understanding of the pragmatics inherent in the "worse is better" (New Jersey) approach too.

What is your actual position on the "worse is better" dichotomy?

Do you believe it is real, and if so, can there be a third alternative that combines elements from both sides?

And if not, are we always doomed (due to market forces, programming as "popular culture" etc) to have sub-par tools from what can be theoretically achieved?


I don't think "pop culture" approaches are the best way to do most things (though "every once in a while" something good does happen).

The real question is "does a hack reset 'normal'?" For most people it tends to, and this makes it very difficult for them to think about the actual issues.

A quote I made up some years ago is "Better and Perfect are the enemies of What-Is-Actually-Needed". The big sin so many people commit in computing is not really paying attention to "What-Is-Actually-Needed"! And not going below that.


I fear this is because "What-Is-Actually-Needed" is non-trivial to figure out. Related: "scratch your own itch", "bikeshedding", "yak shaving".


Exactly -- this is why people are tempted to choose an increment, and will say "at least it's a little better" -- but if the threshold isn't actually reached, then it is the opposite of a little better, it's an illusion.


Hi Alan,

What advice would you give to those who don't have a HARC to call their own? What would you do to get set up/a community/funding for your adventure if you were starting out today? What advice do you have for those who are currently in an industrial/academic institution who seek the true intellectual freedom you have found? Is it just luck?!


I don't have great advice (I found getting halfway decent funding since 1980 to be quite a chore). I was incredibly lucky to wind up quite accidentally at the U of Utah ARPA project 50 years ago this year.

Part of the deal is being really stubborn about what you want to do -- for example, I've never tried to make money from my ideas (because then you are in a very different kind of process -- and this process is not at all good for the kinds of things I try to do).

Every once in a while one runs into "large minded people" like Sam Altman and Vishal Sikka, who do have access to funding that is unfettered enough to lead to really new ideas.


Thanks.

Do you have any advice about community building, especially around fostering new and big ideas?


Hi Alan, the question that troubles me now and I want to ask you is:

Why do you think there is always a difference between:

A. the people who know best how something should be done, and

B. the people who end up doing it in a practical and economically-successful or popular way?

And should we educate our children or develop our businesses in ways that could encourage both practicality and invention? (do you think it's possible?). Or would the two tendencies cancel each other out and you'll end up with mediocre children and underperforming businesses, so the right thing to do is to pick one side and develop it at the expense of the other?

(The "two camps" are clearly obvious in the space of programming language design and UI design (imho it's the same thing: programming languages are just "UIs between programmers and machines"), as you well know and said, with one group of people (you among them) having the right ideas of what OOP and UIs should be like, and one people inventing the technologies with success in industry like C++ and Java. But the pattern is happening at all levels, even business: the people with the best business ideas are almost never the ones who end up doing things and so things get done in a "partially wrong" way most of the time, although we have the information to "do it right".)


We were lucky in the ARPA/PARC communities to have both great funding, and the time to think things through (and even make mistakes that were kept from propagating to create bad defacto standards).

The question you are asking is really a societal one -- and about operations that are like strip mining and waste dumping. "Hunters and gatherers" (our genetic heritage) find fertile valleys, strip them dry and move on (this only works on a very small scale). "Civilization" is partly about learning how to overcome our dangerous atavistic tendencies through education and planning. It's what we should be about generally (and the CS part of it is just a symptom of a much larger much more dire situation we are in).


So you're rephrasing the question to mean that you see it as 'hunter gatherer mode' thinking (doing it in a practical and short term economically-successful way) vs. 'civilized builder mode' thinking (doing it the way we know it should be done) and that they are antagonistic, and that because of the way our society is structured 'hunter gatherer' mode thinking leads to better economical results?

This ends up as a pretty strong critique of capitalism's main idea that market forces drive the progress of science and technology.

Your thinking would lead to the conclusion that we'd have to find a way to totally reshape/re-engineer the current world economy to stop it from being hugely biased in favor of "hunter gatherers that strip the fertile valley dry" ..right?

I hope that people like you are working on this :)


Hi Alan,

As a high school teacher, I often find that discussions of technology in education diminish 'education' to curricular and assessment documentation and planning; however, these artifacts are only a small element of what is, fundamentally, a social process of discussion and progressive knowledge building.

If the real work and progress with my students comes from our intellectual back-and-forth (rather than static documentation of pre-existing knowledge), are there tools I can look to that have been/will be created to empower and enrich this kind of in situ interaction?


This is a tough one to try to produce "through the keyhole" of this very non-WYSIWYG poorly thought through artifact of the WWW people not understanding what either the Internet or computer media are all about.

Let me just say that it's worth trying to understand what might be a "really good" balance between traditional oral culture learning and thinking, what literacy brings to the party, especially via mass media, and what the computer and pervasive networking should bring as real positive additions.

One way to assess what is going on now is partly a retreat from real literacy back to oral modes of communication and oral modes of thought (i.e. "texting" is really a transliteration of an oral utterance, not a literary form).

This is a disaster.

However, even autodidacts really need some oral discussions, and this is one reason to have a "school experience".

The question is balance. Fluent readers can read many times faster than oral transmissions, and there are many more resources at hand. This means in the 21st century that most people should be doing a lot of reading -- especially students (much much more reading than talking). Responsible adults, especially teachers and parents, should be making all out efforts to help this to happen.

For the last point, I'd recommend perusing Daniel Kahneman's "Thinking: Fast and Slow", and this will be a good basis for thinking about tradeoffs between actual interactions (whether with people or computers) and "pondering".

I think most people grow up missing their actual potential as thinkers because the environment they grow up in does not understand these issues and their tradeoffs....


>I think most people grow up missing their actual potential as thinkers because the environment they grow up in does not understand these issues and their tradeoffs....

This is the meta-thing that’s been bugging me: how do we help people realize they’re “missing their actual potential as thinkers”?

The world seems so content to be an oral culture again, how do we convince / change / equip people to be skeptical of these media?

Joe Edelman’s Centre for Livable Media (http://livable.media) seems like a step in the right direction. How else can we convince people?


Marijuana helped me realize there was a lot about myself I didn't understand and launched my investigation into more effective thought processes. I've become much more driven and thoughtful since I began smoking as an adult.


What kinds of changes to your thought processes did you make?


First of all, I now enjoy talking about myself :)

I stopped assuming I knew everything, and a childlike sense of wonder returned to my life. I began looking beyond what was directly in front of me and sought out more comprehensive generalizations. What do atoms have in common with humans? What does it mean to communicate? Do we communicate with ecosystems? Do individuals communicate with society? What is consciousness and intelligence? Is my mind a collection of multiple conscious processes? How do the disparate pieces of my brain integrate into one conscious entity, how do they shape my subjective reality?

I found information, individuals, and networks to be fundamental to my understanding of the world. I was always interested in them before, but not enough to seek them out or apply them through creative works. I discovered for myself the language of systems. I found a deep appreciation of mathematics and a growth path to set my life on.

I was able to do this exploration at a time when my work was slow and steady. It came along a couple years ago when I was 25, which I've heard is when the brain's development levels off. I feel lucky to have experienced it when I did because I was totally unsatisfied with my life before then.

Since then I've found work I love at a seed stage startup where I've been able to apply my ideas in various ways. I have become much more active as a creator, including exploring latent artistic sensibilities through writing poetry and taking oil painting classes with a very talented teacher. I've found myself becoming an artist in my work - I've become the director and lead engineer at the startup and am exploring ways to determine and distribute truth in the products we sell, and further to make a statement on what art is in a capitalistic society (even if I'm the only one who will ever recognize it). I've also become more empathic and found a wonderful woman and two pups to share my life with, despite previously being extremely solitary. Between work and family I have less time for introspection now, but I expect I'll learn just as much through these efforts.

Ultimately, I've learned to trust my subconscious. I was always anxious and nervous about being wrong in any situation before, but now I trust that even if I am wrong in the moment, my brain can figure out good answers over longer stretches of time.

I don't know how far cannabis led me down this path but it definitely gave me a good strong push.


This is almost exactly my experience! I don't think HN talks about it much, but cannabis is a great way to approach intuitive depth on subjects. For me it was ego, math, music, civics and information theory concepts.

When I started, it was at a job that I absolutely hated (rewriting Mantis to be a help desk system), and it helped me get out of it by opening up a better understanding of low-level systems. That eventually led to high-frequency trading systems tuning and some pretty deep civics using FOIA.

Not that it was a direct contributor, but I do consider it a seed towards better understanding of the things around me. I don't necessarily feel happier, but I feel much more content.


It is IDENTICAL to mine as well. Even down to the information theory bit. Very bizarre, but reassuring.


Thanks, this is very interesting!


In seeking to consider what form this “‘really good’ balance” might take, can you recommend any favored resources/implementations to illustrate what “real positive additions” computers and networking can bring to the table? I’m familiar with the influence of Piaget/Papert - but I would love to gain some additional depth on the media/networking side of the conversation.

Thank you for your thoughts. I feel similarly about the cultural regression of literacy.


With a good programming language and interface, one -- even children -- can create from scratch important simulations of complex non-linear systems that can help one's thinking about them.
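
(One bare-bones illustration of the kind of non-linear system meant here is the logistic map, which settles, oscillates, or goes chaotic depending on a single parameter. This is just a minimal Python sketch, not the kind of live, visual environment being argued for:)

    # Logistic map: x' = r * x * (1 - x), a classic non-linear system.
    def simulate(r, x0=0.2, steps=50):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    for r in (2.8, 3.2, 3.9):   # settles, oscillates, goes chaotic
        print(r, [round(x, 3) for x in simulate(r)[-4:]])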


I wish this were a better platform for fluid discussion, but I'll dig into your writings and talks (Viewpoint/Youtube/TED/elsewhere?) to gain a better understanding of your thoughts on these topics.

Thank you.


-- I was surprised that the HN list page didn't automatically refresh in my browser (seems as though it should be live and not have to be prompted ...)


Au contraire, I'm happy that the last-seen-state is preserved and I'm given the option to refresh to see the current state should I chose to.


It certainly helps when reading long replies, that's for sure. I do think a mini-update box with "click here to load" like on stackoverflow for replies or edits would be an interesting idea.

Of course, the 90's style is pretty hacker-hipster as well...can't deny that.


HN feels old school. That's why I like it. (I'm considered a fossil by all of my 20, 30-something colleagues.)


How old are you? "Old school" to me is what could be done at Parc, etc. ... (hint: quite a bit more than on this website ...)


Imagine: 1. trying to read something long, or 2. going off to a follow a link and to come back and respond, only to find that the page has been refreshing while you looked away. Now you have to scroll about to find the place you were at in order to respond or to continue reading the comments.


How about a little model of time in a GUI?


This is maybe the most Alan-Kay-like response so far. Short, simple, but a tiny bit like a message from an alternate dimension. "No, no, I'm not asking you to build the also-wrong solution someone else has tried. I'm saying: solve the problem."


Also feels like worse-is-better vs. the right thing. How much engineering effort and additional maintenance would be required to develop and support such a time-model? A lot. Alas, let us re-create software systems to be radically simpler so that we can do the right thing! Still waiting for Urbit and VPRI's 10k line operating system ... but that's what Alan stands for in our industry: "strive to do the right thing," or as you put it, "solve the problem".


That sounds like a feature. HN doesn't have features, only necessities.


This is the way Facebook is now... Not an improvement. It's not designed for following threads, and it looks like they don't care.


Legend is, this forum runs on an abandoned LISP implementation.

Most things around here are not how they should seem.


Yep- arclanguage.org.


Jaron Lanier mentioned you as part of the, "humanistic thread within computing." I understood him to mean folks who have a much broader appreciation of human experience than the average technologist.

Who are "humanistic technologists" you admire? Critics, artists, experimenters, even trolls... Which especially creative technologists inspire you?

I imagine people like Jonathan Harris, Ze Frank, Jaron Lanier, Ben Huh, danah boyd, Sherry Turkle, Douglas Engelbart, Douglas Rushkoff, etc....


What turning points in the history of computing (products that won in the marketplace, inventions that were ignored, technical decisions where the individual/company/committee could've explored a different alternative, etc.) do you wish had gone another way?


Just to pick three (and maybe not even at the top of my list if I were to write it and sort it):

(a) Intel and Motorola, etc. getting really interested in the Parc HW architectures that allowed Very High Level Languages to be efficiently implemented. Not having this in the 80s brought "not very good ideas from the 50s and 60s" back into programming, and was one of the big factors in:

(b) the huge propensity of "we know how to program" etc., that was the other big factor preventing the best software practices from the 70s from being the start of much better programming, operating systems, etc. in the 1980s, rather the reversion to weak methods (from which we really haven't recovered).

(c) The use of "best ideas about destiny of computing" e.g. in the ARPA community, rather than weak gestures e.g. the really poorly conceived WWW vs the really important and needed ideas of Engelbart.


I get (a) and (b) completely. On (c), I felt this way about NCSA Mosaic in 1993 when I first saw it and I'm relieved to hear you say this because although I definitely misunderstood a major technology shift for a few years, maybe I wasn't wrong in my initial reaction that it was stupid.


I didn't begin to get it until the industry started trying to use browsers for applications in the late '90s/early 2000's. I took one look at the "stateful" architecture they were trying to use, and I said to myself, "This is a hack." I learned shortly thereafter about criticism of it saying the same thing, "This is an attempt to impose statefulness on an inherently stateless architecture." I kept wondering why the industry wasn't using X11, which already had the ability to carry out full GUI interactions remotely. Why reject a real-time interactive architecture that's designed for network use for one that insisted on page refreshes to update the display? The whole thing felt like a step backward. The point where it clobbered me over the head was when I tried to use a web application framework to make a complex web form application work. I got it to work, and the customer was very pleased, but I was ashamed of the code I wrote, because I felt like I had to write it like I was a contortionist. I was fortunate in that I'd had prior experience with other platforms where the architecture was more sane, so that I didn't think this was a "good design." After that experience, I left the industry. I've been trying to segue into a different, more sane way of working with computers since. I don't think any of my past experience really qualifies, with the exception of some small aspects and experiences. The key is not to get discouraged once you've witnessed works that put your own to shame, but to realize that the difference in quality matters, that it was done by people rather like yourself who had the opportunity to put focus and attention on it, and that one should aspire to meet or exceed it, because anything else is a waste of time.


How can we bring back X11 and good old interactive architecture to the generation of programmers growing up with AngularJS and ReactJS?

Or shall we reboot good ideas with IoT?


My reference to X11 was mostly rhetorical, to tell the story. I learned at some point that the reason X11 wasn't adopted, at least in the realm of business apps I was in, was that it was considered a security risk. Customers had the impression that http was "safe." That has since been proven false, as there have been many exploits of web servers, but I think by the time those vulnerabilities came to light, X11 was already considered passe. It's like how stand-alone PCs were put on the internet, and then people discovered they could be cracked so easily. I think a perceived weakness was that X11 didn't have a "request-respond" protocol that worked cleanly over a network for starting a session. One could have easily been devised, but as I recall, that never happened. In order to start a remote session of some tool I wanted to use, I always had to log in to a server, using rlogin or telnet, type out the name of the executable, and tell it to "display" to my terminal address. It was possible to do this even without logging in. I'd seen students demonstrate that when I was in school. While they were logged in, they could start up an executable somewhere and tell it to "display" to someone else's terminal. The thing was, it could do this without the "receiver's" permission. It was pretty open that way. (That would have been another thing to implement in a protocol: don't "display" without permission, or at least without a request from the same address.) Http didn't have this problem, since I don't think it's possible to direct a browser to go somewhere without a corresponding, prior request from that browser.

X11 was not the best designed GUI framework, from what I understand. I'd heard some complaints about it over the years, but at least it was designed to work over a network, which no other GUI framework of the time I knew about could. It could have been improved upon to create a safer network standard, if some effort had been put into it.

As Alan Kay said elsewhere on this thread, it's difficult to predict what will become popular next, even if something is improved to a point where it could reasonably be used as a substitute for something of lower quality. So, I don't know how to "bring X11 back." As he also said, the better ideas which ultimately became popularly adopted were ones that didn't have competitors already in the marketplace. So, in essence, the concept seemed new and interesting enough to enough people that the only way to get access to it was to adopt the better idea. In the case of X11, by the time the internet was privatized, and had become popular, there were already other competing GUIs, and web browsers became the de facto way people experienced the internet in a way that they felt was simple enough for them to use. I remember one technologist describing the browser as being like a consumer "radio" for the internet. That's a pretty good analogy.

Leaving that aside, it's been interesting to me to see that thick clients have actually made a comeback, taking a huge chunk out of the web. What was done with them is what I just suggested should've been done with X11: The protocol was (partly) improved. In typical fashion, the industry didn't quite get what should happen. They deliberately broke aspects of the OS that once allowed more user control, and they made using software a curated service, to make existing thick client technology safer to use. The thinking was, not without some rationale, that allowing user control led to lots and lots of customer support calls, because people are curious, and usually don't know what they're doing. The thing was, the industry didn't try to help people understand what was possible. Back when X11 was an interesting and productive way you could use Unix, the industry hadn't figured out how to make computers appealing to most consumers, and so in order to attract any buyers, they were forced into providing some help in understanding what they could do with the operating system, and/or the programming language that came with it. The learning curve was a bit steeper, but that also had the effect of limiting the size of the market. As the market has discovered, the path of least resistance is to make the interface simple, and low-hassle, and utterly powerless from a computational standpoint, essentially turning a computer into a device, like a Swiss Army knife.

I think a better answer than IoT is education, helping people to understand that there is something to be had with this new idea. It doesn't just involve learning to use the technology. As Alan Kay has said, in a phrase that I think deserves to be explored deeply, "The music is not in the piano."

It's not an easy thing to do, but it's worth doing, and even educators like Alan continue to explore how to do this.

This is just my opinion, as it comes out of my own personal experience, but I think it's borne out in the experience of many of the people who have participated in this AMA: I think an important place to start in all of this is helping people to even hear that "music," and an important thing to realize is you don't even need a computer to teach people how to hear it. It's just that the computer is the best thing that's been invented so far for expressing it.


I had a similar experience to yours and was comfortable coding web pages via cgi-bin with vi. :-)

That is why now I am very interested in containers and microservices in both local and network senses.

As a "consumer", I am also very comfortable to communicate with people via message apps like WeChat and passing wikipedia and GitHub links around. Some of them are JavaScript "web apps" written and published in GitHub by typing on my iPhone. Here is an example:

http://bigdata-mindstorms.github.io/d3-playground/ontouchsta...

Hope I can help more people to "hear the music" and _make_ and _share_ their own.


This is not "bringing X11 back," but it's an improvement on JS.

https://news.ycombinator.com/item?id=11965253


I don't think networked X11 is quite the web we'd want (it's really outdated), but it does seem better than browsers, which as you point out are so bad you want to stab your eyes out. Unfortunately, now that the web has scaled up to this enormous size, people can't un-see it and it does seem like it's seriously polluted our thinking about how the Internet should interact with end users.

Maybe the trick is something close to this: we need an Internet where it's very easy to do not only WYSIWYG document composition and publishing (which is what the web originally was, minus the WYSIWYG), but really deliver any kind of user experience we want (like VR, for example). It should be based on a network OS (an abstract, extensible microkernel on steroids) where user experiences of the network are actually programs with their own microkernel systems (sort of like an updated take on postscript). The network OS can security check the interpreters and quota and deal out resources and the microkernels that deliver user experiences like documents can be updated as what we want to do changes over time. I think we'd have something more in this direction (although I'm sure I missed any number of obvious problems) if we were to actually pass Alan Kay's OS-101 class as an industry.

We actually sort of very briefly started heading in this direction with Marimba's "Castanet" back at the beginning of Java and I was WILDLY excited to see us trying something less dumb than the browser. Unfortunately, it would seem that economic pressures pushed Marimba into becoming a software deployment provider, which is really not what I think they were originally trying to do. Castanet should have become the OS of the web. I think Java still has the potential to create something much better than the web because a ubiquitous and very mature virtual machine is a very powerful thing, but I don't see anyone trying go there. There's this mentality of "nobody would install something better." And yet we installed Netscape and even IE...

BTW, I do think the security problems of running untrusted code are potentially solvable (at least so much as any network security problems are) using a proper messaging microkernel architecture with the trusted resource-accessing code running in one process and the untrusted code running in another. The problem with the Java sandbox (so far as I understand all that) is that it's in-process. The scary code runs with the trusted code. In theory, Java is controlled enough to protect us from the scary code, but in practice, people are really smart and one tiny screw-up in the JVM or the JDK and bad code gets permissions it shouldn't have. A lot of these errors could be controlled or eliminated by separating the trusted code from the untrusted code as in Windows NT (even if only by making the protocol for resource permissions really clear).


Hi Alan,

A lot of the VPRI work involved inventing new languages (DSLs). The results were extremely impressive but there were some extremely impressive people inventing the languages. Do you think this is a practical approach for everyday programmers? You have also recommended before that there should be clear separation between meta model and model. Should there be something similar to discipline a codebase where people are inventing their own languages? Or should just e.g. OS writers invent the languages and everyone else use a lingua franca?


Tricky question. One answer would be to ask whether there is an intrinsic difference between "computer science" and (say) physics? Or are the differences just that computing is where science was in the Middle Ages?


In physics, you can tell you're making progress because you can explain more things that happen in nature. How can you tell when you're making progress in computer science?

To me it seems like "computer science" lumps together too many different goals. It's like if we had a field called "word science" that covered story-writing, linguistics, scientific publication, typesetting, etc.


This is a terrific question, and I'll try to do it justice tomorrow morning.


Now that it is "morning", I'm not sure that I can do justice to this question here...

But certainly we have to take back the term "computer science" and try to give it real meaning as to what might constitute an actual science here. As Herb Simon pointed out, it's a "science of the artificial", meaning that it is a study of what can be made and what has been made.

Science tries to understand phenomena by making models and assessing their powers. Nature provides phenomena, but so do engineers e.g. by making a bridge in any way they can. Like most things in early engineering, bridge-lore was put in "cookbooks of practice". After science got invented, scientist-engineers could use existing bridges as phenomena to be studied, and now develop models/theories of bridges. This got very powerful rather recently (the Tacoma Narrows bridge went down just a few months after I was born!).

When the first Turing Award winner -- Al Perlis -- was asked in the 60s "What is Computer Science?", he said "It is the science of processes!". He meant all processes including those on computers, but also in Biology, society, etc.

His idea was that computing formed a wonderful facility for making better models of pretty much everything, especially dynamic things (which everything actually is), and that it was also the kind of thing that could really be understood much better by using it to make models of itself.

Today, we could still take this as a starting place for "getting 'Computer Science' back from where it was banished".

In any case, this point of view is very different from engineering. A fun thing in any "science of the artificial" is that you have to make artifacts for both phenomena and models.

(And just to confuse things here, note how much engineering practice is really required to make a good theory in a science!)


Thanks for the answer! It seems like there's a distinction here between exploring how models can/should be built (a mathematical/philosophical task), helping people create and understand these models with computers (a design/engineering task), and using these models to formulate and test hypotheses about ourselves and the world (a scientific task). Maybe the lack of science is because we haven't figured out the math/philosophy/design/engineering parts yet!


The lack of science is because most people are not only not interested in science, but really don't understand what it is.


Thanks. I've been thinking about your questions. I might be misreading you, but I think the answer is probably yes to both. So we should try to get out of the Middle Ages by inventing new theories and criticising and testing them, as in physics. But maybe just the physicists should do that. In the meantime the engineers should focus on communicating clearly with the best tools that are currently available (part of which is restricting their desire to invent).


Engineering is wonderful -- but think of what happened after real science got invented!

Today's "computer science" is much more like "library science" than it should be on the one hand, and too much coincident with engineering on the other (and usually not great engineering at that).

It's way past time for our not-quite-a-field to grow up more in important ways.


Agreed. It's really motivating to have someone who has shown a few times what can be done continuing to push for better. It's also helpful that you call a spade a spade when you talk of reinventing the flat tire. If more people would recognise both of these, then maybe we could have a better future and a more stable engineering present (rather than the framework/language of the week!)


It would be very good if we started to do real engineering and real science wrt software and most design ...


Computer science is defined by information theory, and we already have mathematical proofs binding together information theory with the laws of quantum physics (such as the example of the minimum energy needed to erase one bit of entropy from memory, something which is bounded by the ambient temperature).
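
(For reference, the bound being alluded to is Landauer's limit: erasing one bit dissipates at least E = k·T·ln 2 of energy, roughly 2.9 × 10^-21 joules per bit at room temperature.)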


Sort of. There are quite a few theories operating on computer science as we know it today, especially in software and hardware. Examples include model-driven development, flow-based programming, lambda calculus, state machines, logic-oriented systems, and so on. The mathematical models underlying the structuring and verification of anything built in these can be quite different, although often with some overlapping techniques or principles. There has also been lots of work in high-assurance systems going from requirements and design specifications in a rigorous, mathematical (even mechanical) way down to an implementation in HW, SW, or both. None of them cite information theory. Heck, analog computers might be outside of it entirely, given that they implement specific mathematical functions with continuous operation on reals. I know Shannon had a separate model for them.

So, given that I don't study it or read about it, I'm actually curious whether you or anyone else has references on where information theory has impacted real software development over the years. I study lots of formal methods & synthesis research but never even see the phrase mentioned. I've been imagining it's in its own little field, working at a strongly theoretical level, making abstract or concrete observations about computers. I just don't see it outside some cryptography stuff I've read.

EDIT to add an example below where Bertrand Meyer presents a Theory of Programs that ties it all to basic set theory.

https://bertrandmeyer.com/2015/07/06/new-paper-theory-of-pro...


Respectfully ... I think you missed the point of my answer.


Did you intend to compare the progress and formalization of the fields? Didn't pick up on that


Yes, that was what I was driving at. Anyone could do physics in the Middle Ages -- they just had to get a pointy hat. A few centuries later after Newton, one suddenly had to learn a lot of tough stuff, but it was worth it because the results more than paid for the new levels of effort.


Hi Alan! I've got some assumptions regarding the upcoming big paradigm shift (and I believe it will happen sooner rather than later):

1. focus on data processing rather than imperative way of thinking (esp. functional programming)

2. abstraction over parallelism and distributed systems

3. interactive collaboration between developers

4. development accessible to a much broader audience, especially to domain experts, without sacrificing power users

In fact, the startup I'm working at aims in exactly this direction. We have created a purely functional visual<->textual language, Luna ( http://www.luna-lang.org ).

By visual<->textual I mean that you can always switch between code, graph and vice versa.

What do you think about these assumptions?


What if "data" is a really bad idea?


Data like that sentence? Or all of the other sentences in this chat? I find 'data' hard to consider a bad idea in and of itself, i.e. if data == information, records of things known/uttered at a point in time. Could you talk more about data being a bad idea?


What is "data" without an interpreter (and when we send "data" somewhere, how can we send it so its meaning is preserved?)


Data without an interpreter is certainly subject to (multiple) interpretation :) For instance, the implications of your sentence weren't clear to me, in spite of it being in English (evidently, not indicated otherwise). Some metadata indicated to me that you said it (should I trust that?), and when. But these seem to be questions of quality of representation/conveyance/provenance (agreed, important) rather than critiques of data as an idea. Yes, there is a notion of sufficiency ('42' isn't data).

Data is an old and fundamental idea. Machine interpretation of un- or under-structured data is fueling a ton of utility for society. None of the inputs to our sensory systems are accompanied by explanations of their meaning. Data - something given, seems the raw material of pretty much everything else interesting, and interpreters are secondary, and perhaps essentially, varied.


There are lots of "old and fundamental" ideas that are not good anymore, if they ever were.

The point here is that you were able to find the interpreter of the sentence and ask a question, but the two were still separated. For important negotiations we don't send telegrams, we send ambassadors.

This is what objects are all about, and it continues to be amazing to me that the real necessities and practical necessities are still not at all understood. Bundling an interpreter for messages doesn't prevent the message from being submitted for other possible interpretations, but there simply has to be a process that can extract signal from noise.

This is particularly germane to your last paragraph. Please think especially hard about what you are taking for granted in your last sentence.


Without the 'idea' of data we couldn't even have a conversation about what interpreters interpret. How could it be a "really bad" idea? Data needn't be accompanied by an interpreter. I'm not saying that interpreters are unimportant/uninteresting, but they are separate. Nor have I said or implied that data is inherently meaningful.

Take a stream of data from a seismometer. The seismometer might just record a stream of numbers. It might put them on a disk. Completely separate from that, some person or process, given the numbers and the provenance alone (these numbers are from a seismometer), might declare "there is an earthquake coming". But no object sent an "earthquake coming" "message". The seismometer doesn't "know" an earthquake is coming (nor does the earth, the source of the 'messages' it records), so it can't send a "message" incorporating that "meaning". There is no negotiation or direct connection between the source and the interpretation.

We will soon be drowning in a world of IoT sensors sending context-or-provenance-tagged but otherwise semantic-free data (necessarily, due to constraints, without accompanying interpreters) whose implications will only be determined by downstream statistical processing, aggregation etc, not semantic-rich messaging.

If you meant to convey "data alone makes for weak messages/ambassadors", well, ok. But richer messages will just bottom out at more data (context metadata, semantic tagging -- all more data). Ditto, as someone else said, for any accompanying interpreter (e.g. bytecode? -- more data needing interpretation/execution). Data remains a perfectly useful and more fundamental idea than "message". In any case, I thought we were talking about data, not objects. I don't think there is a conflict between these ideas.


2nd Paragraph: How do they know they are even bits? How do they know the bits are supposed to be numbers? What kind of numbers? Relating to what?

Etc


It contravenes the common and historical use of the word 'data' to imply undifferentiated bits/scribbles. It means facts/observations/measurements/information and you must at least grant it sufficient formatting and metadata to satisfy that definition. The fact that most data requires some human involvement for interpretation (e.g. pointing the right program at the right data) in no way negates its utility (we've learned a lot about the universe by recording data and analyzing it over the centuries), even though it may be insufficient for some bootstrapping system you envision.


I think what Alan was getting at is that what you see as "data" is in fact, at its basis, just signal, and only signal; a wave pattern, for example, but even calling it a "wave pattern" suggests interpretation. What I think he's trying to get across is there is a phenomenon being generated by something, but it requires something else--an interpreter--to even consider it "data" in the first place. As you said, there are multiple ways to interpret that phenomenon, but considering "data" as irreducible misses that point, because the concept of data requires an interpreter to even call it that. Its very existence as a concept from a signal presupposes an interpretation. And I think what he might have been getting at is, "Let's make that relationship explicit." Don't impose a single interpretation on signal by making "data" irreducible. Expose the interpretation by making it explicit, along with the signal, in how one might design a system that persists, processes, and transmits data.


If we can't agree on what words mean we can't communicate. This discussion is undermined by differing meanings for "data", to no purpose. You can of course instead send me a program that (better?) explains yourself, but I don't trust you enough to run it :)

The defining aspect of data is that it reflects a recording of some facts/observations of the universe at some point in time (this is what 'data' means, and meant long before programmers existed and started applying it to any random updatable bits they put on disk). A second critical aspect of data is that it doesn't and can't do anything, i.e. have effects. A third aspect is that it does not change. That static nature is essential, and what makes data a "good idea", where a "good idea" is an abstraction that correlates with reality - people record observations and those recordings (of the past) are data. Other than in this conversation apparently, if you say you have some data, I know what you mean (some recorded observations). Interpretation of those observations is completely orthogonal.

Nothing about the idea of 'data' implies a lack of formatting/labeling/use of common language to convey the facts/observations, in fact it requires it. Data is not merely a signal and that is why we have two different ideas/words. '42' is not, itself, a fact (datum). What constitutes minimal sufficiency of 'data' is a useful and interesting question. E.g. should data always incorporate time, what are the tradeoffs of labeling being in- or out-of-band, per datom or dataset, how to handle provenance etc. That has nothing to do with data as an idea and everything to do with representing data well.

But equating any such labeling with more general interpretation is a mistake. For instance, putting facts behind a dynamic interpreter (one that could answer the same question differently at different times, mix facts with opinions/derivations, or have effects) certainly exceeds (and breaks) the idea of data. Which is precisely why we need the idea of data, so we can differentiate and talk about when that is and is not happening -- am I dealing with facts, an immutable observation of the past ("the king is dead"), or just a temporary (derived) opinion ("there may be a revolt")? Consider the difference between a calculation involving (several times) a fact (date-of-birth) vs a live-updated derivation (age). The latter can produce results that don't add up. 'date-of-birth' is data and 'age' (unless temporally-qualified, 'as-of') is not.
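
(To make that last point concrete -- a minimal sketch, with hypothetical names and dates, of how a "live" age can disagree with itself while a temporally-qualified derivation from the date-of-birth fact cannot:)

    from datetime import date

    DOB = date(1990, 6, 21)   # a fact: an immutable recording (hypothetical value)

    def age_as_of(dob, as_of):
        # derive age relative to an explicit point in time
        return as_of.year - dob.year - ((as_of.month, as_of.day) < (dob.month, dob.day))

    # Temporally-qualified: every use within one calculation shares the same as-of.
    today = date(2016, 6, 20)
    assert age_as_of(DOB, today) == age_as_of(DOB, today)   # always consistent

    # "Live" age: two reads of the same quantity straddling a birthday disagree,
    # so a calculation that consults it twice can produce results that don't add up.
    assert age_as_of(DOB, date(2016, 6, 20)) != age_as_of(DOB, date(2016, 6, 21))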

When interacting with an ambassador one may or may not get the facts, and may get different answers at different times. And one must always fear that some question you ask will start a war. Science couldn't have happened if consuming and reasoning about data had that irreproducibility and risk.

'Data' is not a universal idea, i.e. a single primordial idea that encompasses all things. But the idea that dynamic objects/ambassadors (whatever their other utility) can substitute for facts (data) is a bad idea (does not correspond to reality). Facts are things that have happened, and things that have happened have happened (are not opinions), cannot change and cannot introduce new effects. Data/facts are not in any way dynamic (they are accreting, that's all). Sometimes we want the facts, and other times we want someone to discuss them with. That's why there is more than one good idea.

Data is as bad an idea as numbers, facts and record keeping. These are all great ideas that can be realized more or less well. I would certainly agree that data (the maintenance of facts) has been bungled badly in programming thus far, and lay no small part of the blame on object- and place-oriented programming.


Why do you limit the meaning of 'data' to facts and/or observations?


"datum" means "a thing given" - a fact or presumed fact.

http://www.dictionary.com/browse/datum


I think in the Science of Process that is being related as a desirable goal, everything would necessarily be a dynamic object (or perhaps something similar to this but fuzzier or more relational or different in some other way, but definitely dynamic) because data by itself is static while the world itself is not.


Your selection of data is arbitrary.

Not only is your perception based on an interpreter, but how can you be sure that you were even given all of the relevant bits? Or, even what the bits really meant/are?


Of course the selection of data is arbitrary -- but Rich gives us a definition, which he makes abundantly clear and uses consistently. All definitions can be considered arbitrary. He's not making any claim that we have all the relevant bits of data or that we can be sure what the data really means or represents.

But we can expound on this problem in general. In any experiment where we gather data, how can we be sure we have collected a sufficient quantity to justify conclusions, that we have accrued all the necessary components, and (if we are using statistical methods) that our underlying assumptions are indeed consistent with reality? What you're really getting at is an *epistemological* problem.

My school of thought is that the only way to proceed is to do our best with the data we have. We'll make mistakes, but that's better than the alternative (not extrapolating from the data at all).


I hope we can do our best, I'm just not sure there is really a satisfactory way to define/measure/judge that we have actually done so....


Isn't the interpreter code itself data in the sense that it has no meaning without something (a machine) to run it? How do you avoid having to send an interpreter for the interpreter and so on?


Yes, so think about how to make this work "nicely" in an Intergalactic Network ...


It can't be turtles all the way down, so maybe set theory?


A good question isn't it?

For parallel ideas and situations, take a look at Lincos https://en.wikipedia.org/wiki/Lincos_(artificial_language)


Thank you! I started to think along those lines too, thanks to Carl Sagan's novel Contact. That was the first thing that came to mind.

Now the question is, what if there are "objects" more advanced than others, and what if an advanced object sends a message concealing a Trojan horse? I think this question was also brought up in the novel/movie too...

I think this is a real-life, practical showstopper for developing this concept...


Thanks for the reference. I've been trying to think along these lines.


Wow. Thanks.


I think the object is a very powerful idea for wrapping "local" context. But in a network (communication) environment, it is still challenging to handle "remote" context with objects. That is why we have APIs and serialization/deserialization overhead.

In the ideal homogeneous world of Smalltalk, this is less of an issue. But if you want a Windows machine to talk to a Unix machine, the remote context becomes an issue.

In principle we could send a Windows VM along with a message from Windows and a Unix VM (Docker?) along with a message from Unix, if that counts as a solution.


This is why "the objects of the future" have to be ambassadors that can negotiate with other objects they've never seen.

Think about this as one of the consequences of massive scaling ...


Along this line of logic, perhaps the future of AI is not "machine learning from big data" (a lot of buzz words) but computers that generate runtime interpreters for new contexts.


It's not "Big Data" but "Big Meaning"


When high bandwidth communication is omnipresent, is "portability" of the interpreter really something to optimize for?


How can you find it?

The association between "patterns" and interpretation becomes an "object" when this is part of the larger scheme. When you've just got bits and you send them somewhere, you don't even have "data" anymore.

Even with something like EDI or XML, think about what kinds of knowledge and process are actually needed to even do the simplest things.


Sounds pretty much like the problem of establishing contact with an alien civilization. Definitely set theory, prime numbers, arithmetic and so on... I guess at some point, objects will be equipped with general intelligence for such negotiations if they are to be true digital ambassadors!



It's hard for me to grasp what this negotiation would look like. Particularly with objects that haven't encountered each other. It just seems like such a huge problem.

I don't really know anything at all about microbiology, but maybe it helps to climb the ladder of abstraction to small insects like ants. There is clearly negotiation and communication happening there, but I have to think it's pretty well bounded. Even if one ant encountered another ant and needed to communicate where food was, it's with a fixed set of semantics that are already understood by both parties.

Or with honeybees, doing the communication dance. I have no idea if the communication goes beyond "food here" or if it's "we need to decide who to send out."

It seems like you have to have learning in the object to really negotiate with something it hasn't encountered before. Maybe I'm making things too hard.

Maybe "can we communicate" is the first negotiation, and if not, give up.


It is worth thinking of an analogy to TCP/IP -- what is the smallest thing that could be universal that will allow everything else to happen?


I remember at one point, after listening to one of your talks about TCP/IP as a very good OO system and pondering how to make software like that, an idea came to mind: "translation as computation." The thought was that, as implemented, TCP/IP is about translation between packet-switching systems, so a semantic TCP/IP would be a system that translates between different machine models. In terms of my skill, the best I could imagine was "compilers as translators," which I don't think cuts it, because compilers don't embody a machine model; they assume it. However, perhaps it's not necessary to communicate machine models explicitly, since such a system could translate between them with respect to what state means. This would involve simulating state to satisfy local operation requirements while actual state is occurring and will eventually be communicated. I've heard you reference McCarthy's situation calculus regarding this.


Well, there's the old Component Object Model and cousins ... under this model an object a encountering a new object b will, essentially, ask 'I need this service performed, can you perform it for me?' If b can perform the service, a makes use of it; if not, not.

Another technique that occurs to me is from type theory ... here, instead of objects we'll talk in terms of values and functions, which have types. So e.g. a function a encountering a new function b will examine b's type and thereby figure out if it can/should call it or not. E.g., b might be called toJson and have type (in Haskell notation) ToJson a => a -> Text, so the function a knows that if it can give toJson any value which has a ToJson typeclass instance, it'll get back a Text value, or in other words toJson is a JSON encoder function, and thus it may want to call it.
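
(A toy sketch of the first, capability-query style of negotiation -- hypothetical names, in Python rather than COM or Haskell, just to show the shape:)

    import json

    class Ambassador:
        """Wraps a set of named services; clients query before calling."""
        def __init__(self, capabilities):
            self._caps = capabilities          # name -> callable

        def can(self, name):
            return name in self._caps

        def ask(self, name, *args):
            if not self.can(name):
                raise LookupError("no such service: " + name)
            return self._caps[name](*args)

    b = Ambassador({"to_json": json.dumps})

    # An object meeting b for the first time negotiates before committing:
    if b.can("to_json"):
        print(b.ask("to_json", {"king": "dead"}))     # {"king": "dead"}
    else:
        print("fall back to some other representation")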


Alan, what is your view on the Olive Executable Archive? https://olivearchive.org/


The Internet Archive (http://archive.org) is doing the same thing. They have old software stored that you can run in online emulators. I only wish they had instructions for how to use the emulators. The old keyboards and controllers are not like today's.



Their larger goals are important.


Do you think they are on the right path to their larger goals?


I think for so many important cases, this is almost the only way to do it. The problems were caused by short-sighted vendors and programmers getting locked into particular computers and OS software.

For contrast, one could look at a much more compact way to do this that -- with more foresight -- was used at Parc, not just for the future, but to deal gracefully with the many kinds of computers we designed and built there.

Elsewhere in this AMA I mentioned an example of this: a resurrected Smalltalk image from 1978 (off a disk pack that Xerox had thrown away) that was quite easy to bring back to life because it was already virtualized "for eternity".

This is another example of "trying to think about scaling" -- in this case temporally -- when building systems ....

The idea was that you could make a universal computer in software that would be smaller than almost any media made in it, so ...


I agree that the "image" idea is more powerful than the "data" idea.

However, since the PC revolution, the mainstream seems to have taken the "data" path, for whatever technical or non-technical reasons.

How do you envision the "coming back" of the image path, either by bypassing the data path or by merging with it, in the not-so-faraway future?


Over all of history, there is no accounting for what "the mainstream" decides to believe and do. Many people (wrongly) think that "Darwinian processes" optimize, but any biologist will point out that they only "tend to fit to the environment". So if your environment is weak or uninteresting ...

This also obtains for "thinking" and it took a long time for humans to even imagine thinking processes that could be stronger than cultural ones.

We've only had them for a few hundred years (with a few interesting blips in the past), and they are most definitely not "mainstream".

Good ideas usually take a while to have and to develop -- so when the mainstream has a big enough disaster to make it think about change rather than more epicycles, it will still not allocate enough time for a really good change.

At Parc, the inventions that made it out pretty unscathed were the ones for which there was really no alternative and/or that no one was already doing: Ethernet, GUI, parts of the Internet, Laser Printer, etc.

The programming ideas on the other hand were -- I'll claim -- quite a bit better, but (a) most people thought they already knew how to program and (b) Intel, Motorola thought they already knew how to design CPUs, and were not interested in making the 16 bit microcoded processors that would allow the much higher level languages at Parc to run well in the 80s.


It seems that barriers to entry in hardware innovation are getting higher and higher due to high-risk industrial efforts. In the meantime, barriers to entry in software are getting lower and lower due to improvements in tooling for both software and hardware.

On the other hand, due to the exponential growth of software dependency, "bad ideas" in software development are getting harder and harder to remove, and the social cost of "green field" software innovation is also getting higher and higher.

How do we solve these issues in the coming years?


I don't know.

But e.g. the possibilities for "parametric" parallel computing solutions (via FPGAs and other configurable HW) have not even been scratched (too many people trying to do either nothing or just conventional stuff).

Some of the FPGA modules (like the BEE3) will slip into a blade slot, etc.

Similarly, there is nothing to prevent new SW from being done in non-dependent ways (meaning the initial dependencies to hook up to the current world can be organized to be gradually removable, and the new stuff need not have the same kind of crippling dependencies).

For example, a lot can be done -- especially in a learning curve -- if e.g. a subset of Javascript in a browser (etc) can really be treated as a "fast enough piece of hardware" (of not great design) -- and just "not touch it with human hands". (This is awful in a way, but it's really a question of "really not writing 'machine code' ").

Part of this is to admit to the box, but not accept that the box is inescapable.
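
(A toy illustration of "not touching the JavaScript with human hands" -- a hypothetical little emitter that treats JS purely as a target, the way a compiler treats machine code:)

    def emit(node):
        # numbers emit as JS literals; tuples ("+", left, right) emit as expressions
        if isinstance(node, (int, float)):
            return str(node)
        op, left, right = node
        return "(" + emit(left) + " " + op + " " + emit(right) + ")"

    print("console.log(" + emit(("+", 1, ("*", 2, 3))) + ");")
    # prints: console.log((1 + (2 * 3)));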


Thank you Alan for your deep wisdom and crystal vision.

It is the best online conversation I have ever experienced.

It also reminded me of inspiring conversations with Jerome Bruner at his New York City apartment 15 years ago. (I was working on a project with his wife's NYU social psychology group at the time.) As a Physics Ph.D. student, I never imagined I could become so interested in the Internet and education in the spirit of Licklider and Doug Engelbart.

謝謝。(Thank you.)


You probably know that our mutual friend and mentor Jerry Bruner died peacefully in his sleep a few weeks ago at the age of 100, and with much of his joie de vivre beautifully still with him. There will never be another Jerry.



>Please think especially hard about what you are taking for granted in your last sentence.

Any Meaning can only be the Interpretation of a Model/Signal?


Information in "entropy" sense is objective and meaningless. Meaning only exists within a context. If we think "data" represent information, "interpreters" bring us context and therefore meaning.


Thank you - I was beginning to wonder if anyone in this conversation understood this. It is really the key to meaningfully (!!) move forward in this stuff.


The more meaning you pack into a message, the harder the message is to unpack.

So there's this inherent tradeoff between "easy to process" and "expressive" -- and I imagine deciding which side you want to lean toward depends on the context.

Check this out for a practical example: https://www.practicingruby.com/articles/information-anatomy

(not a Ruby article, but instead about essential structure of messages, loosely inspired by ideas in Gödel, Escher, Bach)


So the idea is to always send the interpreter, along with the data? They should always travel together?

Interesting. But, practically, the interpreter would need to be written in such a way that it works on all target systems. The world isn't set up for that, although it should be.

Hm, I now realize your point about HTML being idiotic. It should be a description, along with instructions for parsing and displaying it (?)


TCP/IP is "written in such a way that it works on all target systems". This partially worked because it was early, partly because it is small and simple, partly because it doesn't try to define structures on the actual messages, but only minimal ones on the "envelopes". And partly because of the "/" which does not force a single theory.

This -- and the Parc PUP "internet" which preceded it and influenced it -- are examples of trying to organize things so that modules can interact universally with minimal assumptions on both sides.

The next step -- of organizing a minimal basis for inter-meanings -- not just internetworking -- was being thought about heavily in the 70s while the communications systems ideas were being worked on, but was quite to the side, and not mature enough to be made part of the apparatus when "Flag Day" happened in 1983.

What is the minimal "stuff" that could be part of the "TCP/IP" apparatus that could allow "meanings" to be sent, not just bits -- and what assumptions need to be made on the receiving end to guarantee the safety of a transmitted meaning?


Would some kind of IDL not be enough to allow meanings to be sent?


Now it's too late to fix.


I don't think it's too late, but it would require fairly large changes in perspective in the general computing community about computing, about scaling, about visions and goals.



Data, and the entirety of human understanding and knowledge derived from recording, measurement and analysis of data, predates computing, so I don't see the relevance of these recent, programming-centric notions in a discussion of its value.


Wouldn't Mr. Kay say that it is education that builds the continuity of the entirety of human understanding? Greek philosophy and astronomy survived in the Muslim world and not in the European, though both possessed plenty of texts, because only the former had an education system that could bootstrap a mind to think in a way capable of understanding and adding to the data. Ultimately, every piece of data is reliant on each generation of humans equipping enough of their children with the mindset capable to use it intelligently.

The value of data is determined by the intelligence of those interpreting it, not those who recorded it.

Of course, this dynamic is sometimes positive. The Babylonians kept excellent astronomical records though apparently making little theoretical advance in understanding them. Greeks with an excellent grasp of geometry put that data to much better use very quickly. But if they had had to wait to gather the data themselves, one can imagine them waiting a long time.


This kind of gets into philosophy, but a metaphor I came up with for thinking about this (another phrase for it is "thought experiment") is:

If I speak something to a rock, what is it to the rock? Is it "signal," or "data"?

Making the concept a little more interesting, what if I resonate the rock with a sound frequency? What is that to the rock? Is that "signal," or "data"?

Up until the Rosetta Stone was found, Egyptian hieroglyphs were indecipherable. Could data be gathered from them, nevertheless? Sure. Researchers could determine what pigments were used, and/or what tools were used to create them, but they couldn't understand the messages. It wasn't "data" up to that point. It was "noise."

I hope I am not giving the impression that I am a postmodernist who is out here saying, "Data is meaningless." That's not what I'm saying. I am saying meaning is not self-evident from signal. The concept of data requires the ability to interpret signal for meaning to be acquired.


Computing has existed for thousands of years. We just have machines do some of it now.


What if "data" is a really great idea?


Your blog looks very interesting. You should share some links to it here on Hacker News!


Thanks. I share it wherever I think it will add to the discussion.


Data is semantically defined by the processes using/interpreting it. Not by the data itself. So Rich Hickey is right and Alan Kay is wrong.


Yes, I think if we could get rid of this notion we could probably move in interesting directions. Another way to look at it: if we take any object of sufficient complexity in the universe, how could it interact with another object of sufficient complexity? If we look at humans as first-order augmentation devices for other humans, it's notable that the complexity of their internal state is much higher than the complexity of the input over any sufficiently small time frame (whatever measurement you decide to take). Basically, the whole state is encoded internally, by means of successive undifferentiated input.

In that sense, neural networks, for example, don't work with data as such; "data" presupposes an internal structure that is absent from the input from the standpoint of the network itself. It is the network's job to convert that input into something we can reasonably call "data". Moreover, this knowledge is encoded in its internal state, so the "interpreter" is essentially bundled in.

Another angle I like to think from: TRIZ has the concept of an ideal device, something performing its function with the minimum overhead required -- at best, the function is performed by itself, in the absence of any device. If we imagine the computer (in a very generic sense) to be such a device, it stands to reason that ideally it will require minimal, or even no, input. Obviously this means we don't need to encode meaning or interpretation into it through directed formal input. The only way for that to happen is for the computer to have a sufficiently complex internal state, capable of converting directed, or even self-acquired, input into whatever we can eventually call "data".

This logic could possibly be applied to some minimal object: we could look for a unit capable of performing a specific function on a defined range of inputs, building the meaning from its internal state. The second task then would be to find a way to compose those objects, provided they have no common internal state, and to build systems in which the combination of those states would render a larger possible field of operation. A third interesting question: how can we build up the internal state of another object, provided we want to feed it input requiring interpretation further down the line, building up from whatever minimum we already have?


Welcome to Claude Shannon! It's not about the message but about the receiver ...


Actually it is as much about the sender and the message as the receiver.


Sure, the message matters insomuch as it contains any information the receiver might be able to receive, but that doesn't guarantee it will be received, so how much does a message really matter? I don't see how the sender matters that much (unless perhaps the sender and receiver are linked, for example, they exchange some kind of abstract interpreter for the message). But does the message matter on its own if it is encrypted so well that it is indistinguishable from noise to any but one particular receiver? It's just noise without the receiver. I'm not sure what was meant, but this is the best I can do in understanding it.


Data isn't the carrier, it isn't the signal (information), and it certainly isn't the meaning (interpretation). A reasonable first approximation is that data is *message*.


Sorry for straying, but can I get an invite?


Yes, you can :) Write an email to me and I'll grant you access. For now a very limited audience is allowed.

