Alan Kay is still waiting for his dream to come true (fastcompany.com)
337 points by sohkamyung on Sept 16, 2017 | 288 comments



Let me try to help this community regarding this article by providing some context. First, you need to realize that in the more than 50 years of my career I have always waited to be asked: Every paper, talk, and interview has been invited, never solicited. But there is a body of results from these that do put forward my opinions.

This article was a surprise, because the interview was a few years ago for a book the interviewer was writing. It's worth noting that nowhere in the actual interview did I advocate going back and doing a Dynabook. My comments are mostly about media and why it's important to understand and design well any medium that will be spread and used en masse.

If you looked closely, then you would have noticed the big difference between the interview and the front matter. For example, I'm not still waiting for my dream to come true. You need to be sophisticated enough to see that this is a headline written to attract. It has nothing to do with what I said.

And, if you looked closely, you might note a non sequitur right at the beginning, from "you want to see old media?" to no follow-up. This is because that section was taken from the chapter of the book but then edited by others.

The first version of the article said I was fired from Apple, but it was Steve who was fired, and some editor misunderstood.

In the interview itself there are transcription mistakes that were not found and corrected. And of course they didn't send me the article ahead of time (I could have fixed all this).

I think I would only have made a few parts of the interview a little more understandable. It's raw, but -- once you understand the criticisms -- I think most will find them quite valid. In the interview -- to say it again -- I'm not calling for the field to implement my vision -- but I am calling for the field to have a larger vision of civilization and how mass media and our tools contribute or detract from it. Thoreau had a good line. He said "We become the tools of our tools" -- meaning, be very careful when you design tools and put them out for use. (Our consumer based technology field is not being at all careful.)


Professor Kay, it's an honor to see you taking the time to provide context to this interview. For what it's worth, I was skimming the article until I saw your comments on human universals, and that made me sit up and take notice. From that point on, it was pretty clear to me you were sounding a note of alarm about the addictive quality of our current consumer electronics industry, and the wasted potential of them as pedagogic devices.


Wow, this article is a pretty egregious misinterpretation of those points. I didn't come away with anything even vaguely resembling what you just said, even if we disregard the factual errors.

Candidly, your comments here make a lot more sense than some "We haven't yet built the DynaBook the way we should" piece.


People often ask me "Is this a Dynabook, is that a Dynabook?". Only about 5% of the idea was in packaging (and there were 2 other different packages contemplated in 1968 besides the tablet idea -- the HMD idea from Ivan, and the ubiquitous idea from Nicholas).

Almost all the thought I did was based on what had already been done in the ARPA community -- rendered as "services" -- and resculpted for the world of the child. It was all quite straightforwardly about what Steve later called "Wheels for the Mind".

If people are interested to see part of what we had in mind, a few of us including Dan Ingalls a few years ago revived a version of the Xerox Parc software from 1978 that Xerox had thrown away (it was rescued from a trash heap). This system was the vintage that Steve Jobs saw the next year when he visited in 1979. I used this system to make a presentation for a Ted Nelson tribute. It should start at 2:15. See what you think about what happens around 9:05. https://youtu.be/AnrlSqtpOkw?t=135

Next year will be the 50th anniversary of this idea, and many things have happened since then, so it would be crazy to hark back to a set of ideas that were conceived in the context of being buildable over 10 years, and that it would have been ridiculous not to have within 30 (that would be 1998, almost 20 years ago).

The large idea of ARPA/PARC was that desirable futures for humanity will require many difficult things to be learned beyond reading and writing and a few sums. If "equal rights" is to mean something over the entire planet, this will be very difficult. If we are to be able to deal with the whole planet as a complex system of which we complex systems are parts, then we'll have to learn a lot of things that our genetics are not particularly well set up for.

We can't say "well, most people aren't interested in stuff like this" because we want them to be voting citizens as part of their rights, and this means that a large part of their education needs to be learning how to argue in ways that make progress rather than just trying to win. This will require considerable knowledge and context.

The people who do say "well, most people aren't interested in stuff like this" are missing the world that we are in, and putting convenience and money making ahead of progress and even survival. That was crazy 50 years ago, and should be even more apparently crazy now.

We are set up genetically to learn the environment/culture around us. If we have media that presents itself to our nervous systems as an environment, we will try to learn those ways of thinking and doing, and even draw our conception of reality from them.

We can't let the commercial lure of "legal drugs" in the form of media and other forms put us into a narcotic sleep when we need to be tending and building our gardens.

The good news about "media as environment" is what attracted a lot of us 50 years ago -- that is, that great environments/cultures, if we make them, will also be readily learned by our nervous systems. That was one of Montessori's great ideas, and one of McLuhan's great ideas, and it's a great idea we need to understand.

There aren't any parents around to take care of childish adults. We are it. So we need to grow up and take responsibility for our world, and the first actions should be to try to understand our actual situations.


I'm curious whether you've come across the concepts of atemporality[1] and network culture[2] floating around. Basically, the core thesis associated with these is that we have adopted the internet as our primary mode of processing information, and in the process have lost the sense of a cohesive narrative that is inherent in reading a book/essay or listening to a whole talk.

You become fully immersed in Plato's worldview when reading The Republic, but if you were to see someone explaining the allegory of the cave in the absence of a wider context, you would only take the elements which fit your worldview and not his wider conception of knowledge.

I think this ties into how the concept of a centralised computer network, working for the good of humanity, turned into today's fractured silos, working to mine individuals for profit.

[1]http://index.varnelis.net/network_culture/1_time_history_und... [2]http://index.varnelis.net/network_culture


Thanks for the references. I haven't seen these, but the ideas have been around since the mid-60s by virtue of a number of the researchers back then having delved into McLuhan, Mumford, Innis, etc. and applied the ideas to the contemplated revolutions in personal computing and networking media we were trying to invent.

I think a big point here is that going to a network culture doesn't mandate losing narrative, it just makes the probability much higher for people who are not self-conscious about their surrounding contexts. If we take a look at Socrates (portrayed as an oral culture genius) complaining about writing -- e.g. it removes the need to remember and so we will lose this, etc. -- we also have to realize that we are reading this from a very good early writer in Plato. Both of them loved irony, and if we look at this from that point of view, Plato is actually saying "Guess what? If you -decide- to remember while reading then you get the best of both worlds -- you'll get the stuff between your ears where it will do you the most good -and- you will have gotten it many times faster than listening, usually in a better form, and from many more resources than oral culture provides".

This was the idea in the 60s: provide something much better -- and by the way it includes narrative and new possibilities for narrative -- but then like reading and writing, teach what it is in the schools so that a pop culture version which misses most of the new powers is avoided.


"There aren't any parents around to take care of childish adults. We are it. So we need to grow up and take responsibility for our world, and the first actions should be to try to understand our actual situations."

I am seeing the expectation that there will be parents around to take care of childish adults, though this has really come into prominence in the last 10 years, the last 3 years, in particular. For me, it's evoked notions of H. G. Wells's "Eloi." If that sentiment moves forward unchanged, we won't get "parents" in reality, of course, but some perverted in loco parentis in society. I've heard hope expressed in some quarters that reality will provide some needed blows from some 2x4's across the head, once the young venture out into the world, but I wonder whether sheer numbers will decide this; whether the young will choose to reorient our society, in an attempt to please themselves, rather than being influenced by its experience.

Re. "most people are not interested in this"

From what I've seen, this excuse came out of a combination of the technical side of the "two cultures," and the distraction of a lot of people becoming excited by some perceived new possibilities. More recently, my perspective has shifted to it coming out of a perverted notion of "self-esteem," that challenge is "harmful," because being contradicted creates a sense of limits, isolation or shame, or more materialistically, the fear of economic isolation, thereby reducing career prospects for something original.

What's emerged is a desire to affirm one's self-image as "good," regardless of notions of good works. This is where the "legal drugs" come in, reinforcing this. Neil Postman was right to fear this dynamic.

Diverting off of what I'm saying here (though staying on your topic), have you looked at William Easterly's critique of how foreign aid is conducted? I think it dovetails nicely with what you're talking about, here. The short of it is that most aid efforts to the undeveloped world offer some form of short-term relief, but they don't address at all the political and economic issues that cause the problems the aid is trying to address in the first place. Secondly, when he's tried to confront the aid organizations about this, there is no interest in pursuing these matters, a version of "most people aren't interested in stuff like this." Whether there's a sincere desire to solve problems, or just go through the motions to "help," to make it appear like something is happening (ie. putting on a kind of show of compassion for public relations, and satisfying certain political goals), I don't know. It seems like the latter.

Do you have any insight on what's causing the reticence to get into these matters? Easterly didn't seem to have answers for that, as I recall.


Closer to home -- for example in the US public educational system -- we have prime examples of your (and Easterly's) point about "band-aids" vs. "health". After acknowledging how politics works, I think we can see other factors at work in those more genuinely interested in dealing with problems. Some of these are almost certainly (a) the idea that "doing something" is better than doing nothing (b) that "large things are harder than small things" (c) the lack of "systems consciousness" amongst most adults (d) pick a few more.

The "it's a start" reply, which is often heard when criticizing actions in education which will get nowhere (or worse, dig the hole deeper) is part of several fallacies about "making progress": the idea that "if we just iterate enough" we will get to the levels of improvement needed. Any biologist will point out that "Darwinian processes" don't optimize, they just find fits to the environment. So if the environment is weak you will get good fits that are weak.

A "being more tough" way to think about this is what I've called in talks "the MacCready Sweet Spot" -- it's the threshold above the "merely better" where something important is different. For example, consider reading scores. They can go up or down, but unless a kid gets over the threshold of "reading for meaning" rather than deciphering codes, none of the ups and downs below count. For a whole population, the US is generally under the needed threshold for reading, and that is the systemic problem that needs to be worked on (not raising the scores a few points).

To stay on this example, we find studies that show it is very hard to learn to read fluently after we've learned oral language fluently. Montessori homed in on this earlier than most, and it has since been confirmed more rigorously. And this is the case for many new things that we need to get fluent at and above threshold.

So at the systems level of thinking we should be putting enormous resources into reforming the elementary grades rather than trying to "fix" high schools.

And so forth. This is the logic behind building dams and levees and installing pumps and runoff paths before flooding. One recent study indicated that the costs of prevention are 20% of the costs of disaster.

We could add to (d) above the real difficulties humans have of imagining certain kinds of things: we have no trouble with imagining gods, demons, witches, etc. but can't get ourselves beforehand into the "go all out" state of mind we have during an actual disaster (where heroes show up from everywhere). The very same people mostly can't take action when there isn't a disaster right in front of them.

This is very human. But, as I've pointed out elsewhere, part of "civilization" is to learn how to "do better than human" for hard to learn things.


Hmm. So, it sounds like the same "keyhole" problem I've seen you talk about before (you used an AIDS epidemic as an example with this). What's seen is taken as "good enough," because the small perspective seems large enough. If there are any frustrations or tragedies, they're taken as, "It goes with the territory. Just keep plugging away."

There's a parable I used to hear that I think plays right into this:

Two people are walking along a beach, and they see an enormous field of starfish stranded ashore, and one of them starts throwing them, one by one, back into the sea. The other is watching, and says, "What's the point? You're not going to be able to save all of them." The person doing the throwing holds up a starfish, and says, "I can save this one."

It's a nice thought about good will and perseverance, and certainly the message shouldn't be, "Give up," but I think it nicely illustrates the "keyhole" problem: ideas like this lead people to believe that because they can see people who need help (even if the number is more than they can handle), and they're trying to help those people in the moment, they're improving their lives in the long term. That may not be true.

I've seen you talk about the "MacCready Sweet Spot" in relation to the Apollo program. BTW, I first heard you talk about that in a web video from some congressional testimony you gave back in 1982, when Al Gore was Chairman of the Science and Technology Committee. When you said that the Apollo rockets were below threshold, not nearly good enough to advance space travel, and that the rockets were a kind of kludge, the camera was panning around the room, showing large posters of different NASA missions that had been hung up around the chamber. Gore said in jest, "The walls in this room are shaking!" I can imagine! When I first heard you say that, it struck me as so contrary to the emotional impact I had from understanding what was accomplished (I do think that landing on the Moon and returning safely to Earth was no mean feat, particularly when the U.S. couldn't get a rocket into space to save our lives 12 years earlier (I don't mean that literally)), but as I listened to you explain how the rocket was designed (450 ft., mostly high explosives, with room for only 3 astronauts, not to mention that the missions were for something like 9 days at a time. Three days to get there. Two, sometimes more days, on the surface, and then three days to get home), it occurred to me for the first time, "My gosh! He's right!" It really helped explain my disappointment at seeing us not get beyond low-Earth orbit for decades. For years, I thought it was just a lack of will.

I've explained to people that when I was growing up in the '80s, I had this expectation drummed into me (willingly), as many people in my generation did, that we would see interplanetary travel, probably within our lifetimes, and in several generations, interstellar travel. It was very disappointing to see the Space Shuttle program cancelled with seemingly nothing beyond it on the horizon, and I think more importantly, no goals for anything beyond it that have been compelling. I heard you explain in a more recent presentation that this was a natural outcome of Apollo, that it set in motion something that had its own inertia to play out, but the end result is no one has any enthusiasm for space travel anymore, because the expectations have been set so low. The message being, "Beware of large efforts below threshold." Indeed!


We are "story creatures" and it takes a lot of training and willpower to depart from "fond stories and beliefs" to "actually think things through".

That the moon shot was just a political gesture -- and also relevant to ICBMs etc -- was known to every scientist and most engineers who were willing to think about the problem for more than a few seconds.

We hoped that the -romance- of the shot would lead to the very different kinds of technologies needed for real space travel (basically it's about MV = mv: if you don't want to have to carry (and move) a lot of M, you have to have a very high V, beyond what chemical reactions can produce). If you have to have a large M you use most of it to move just it! This has been known for more than 100 years.
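To make the MV = mv point concrete, here's a rough back-of-envelope using the standard rocket equation; the numbers (about 9.4 km/s to reach low orbit, about 4.4 km/s exhaust velocity for good chemical rockets) are illustrative assumptions, not figures from any particular mission:

    # Rough sketch: why a low exhaust velocity V forces a huge propellant mass M.
    # Tsiolkovsky: delta_v = V * ln(m0 / m_final)  =>  m0 / m_final = exp(delta_v / V)
    import math

    def propellant_fraction(delta_v_km_s, exhaust_v_km_s):
        mass_ratio = math.exp(delta_v_km_s / exhaust_v_km_s)
        return 1.0 - 1.0 / mass_ratio   # fraction of liftoff mass that is propellant

    # Illustrative assumptions: ~9.4 km/s to reach low Earth orbit;
    # ~4.4 km/s exhaust velocity for chemical rockets, ~30 km/s for, say, ion drives.
    for v_e in (4.4, 30.0):
        print("exhaust velocity %4.1f km/s -> %.0f%% of liftoff mass is propellant"
              % (v_e, 100 * propellant_fraction(9.4, v_e)))

At chemical exhaust velocities, nearly 90% of what leaves the pad is propellant being spent mostly to move other propellant, which is the "use most of it to move just it" point.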

But the real romance and its implications didn't happen in the general public and politicians.


And the story we tell ourselves today is, frankly, a dismal one. It's that all of computing should be invented and put into the service of "the economy" rather than people. Instead of a culture of "computational literacy" in which human thought is extended to another level to the same effect as written literacy hundreds of years ago, we have an environment of complex technologies that cater to our most base evolutionary addictions and surveil us for profit.

Our universities are no longer institutions where people learn how to think, but rather where they learn how to "do" -- usually "doing" involves vocational practices that already exist, especially those that some manager (i.e., a provost or dean) deems economically important. This is why you have generations of programmers bitching about type systems instead of about the politics, history, and social consequences of their own wares.

We don't have funding like ARPA/IPTO anymore and the devices and software of our world show it. Everything is some iteration on ideas that came from that period, good or bad – iterations whose goal is always "efficiency" in some form. Our current political culture prevents big initiatives like this, because how on Earth would they benefit the economy in the short or medium term, the limits of our new horizon?

Because these technologies have been created in service of an economic system that has proliferated social problems, they can never be a meaningful solution to those problems. Sure, we might invent some new systems for dealing with environmental catastrophe, but they are always predicated on the assumption that people should consume more and more. We are at the behest of billionaires – smart ones, mostly – who understand complex systems but also have an interest in ensuring that they remain complex.

It is unlikely that we will achieve a new kind of transcendent way of computing until we change the way we think about politics and economics. That is our environment. That is the "fit" that our technical systems have, as you say.


Great description of the problem (and great description of what we could have instead)! What came to mind as I read what you said here was a bit that I caught Neil deGrasse Tyson talking about from 8 years ago. As I heard him say this, I thought he was right on point, but I also felt sad that it's pretty obvious we're not thinking like this in computing. It turns out this is not just a problem in computing, but in science funding generally. That's what he was talking about, though he was quite polite about it:

https://youtu.be/UlHOAUIIuq0?t=22m30s

It strikes me that a very corrosive thought process in our society has been to politicize the notion of "how competitive we are" economically. Sure, that matters, but I see it more as a symptom than a cause of social problems. I hate seeing it brought up in discussions about education, because sure, competition is going to be a part of societal living, and in many educational environments, there's some aspect of competition to it (a story I heard from my grandfather from when he entered medical school was, "Look to your left. Look to your right. Only one of you will be graduating with a medical degree," because that was the intended ratio along the bell curve), but bringing economic competitiveness into education misses the point badly. I understand where the impulse to focus on that comes from, because globalization tends to produce a much more competitive economic landscape, where people feel much more uncertain about basic questions they have to answer. Part of that is creating the life they want, but in the process of trying to create it, people often end up missing a significant part of actually creating it (if it's even feasible; what I see more often is a compromise, because there are only so many hours in a day, and only so much effort can be put into it). They get caught up in "doing," as you said.

As I've thought back on the '60s, it seems like while there was still competition going on, the emphasis was on a political competition, internationally, not economic. There was a significant technological component to that, because of the Cold War/nuclear weapons. The creation of ARPA and NASA was an effect of that. My understanding is we underwent a reorientation in the 1980s, because it was realized that there was too little attention paid to the benefits that a relatively autonomous economy can produce, killing off bad ideas, where what's being offered doesn't match with what people need or want, and allowing better ones to replace them. That's definitely needed, but I'm in agreement with Kay that what education should be about is helping people understand what they need. Perhaps we could start by telling today's students that if and when they have children, what their children need is to understand the basic thought-inventions of our society in an environment where they're more likely to get that. Instead, what we've been doing is treating schools like glorified daycare centers. Undergraduate education has been turned into much the same thing.


> My understanding is we underwent a reorientation in the 1980s, because it was realized that there was too little attention paid to the benefits that a relatively autonomous economy can produce, killing off bad ideas, where what's being offered doesn't match with what people need or want, and allowing better ones to replace them.

There was a rightward swing in the late 1970s that took root in our political system, then commentariat, and then culture. It has never reverted. The term "neoliberalism" gets thrown around (usually by dweebs like me) but it's the precise term to use. Wendy Brown's recent book is probably the best overview of the topic in recent years.

The cultural shift that was unleashed in that period is so insidious that you don't even notice it half the time. Think about dating apps/sites where users talk about their romantic lives using terms like "R.O.I." Or people discussing ways to "optimize" their lives by making them more efficient. It's nuts.

Steve Jobs' old "bicycle of the mind" chestnut is, in a way, emblematic of this way of thinking. He was talking about how the most "efficient" animal was a human with a bicycle. He wanted human thinking to be "more efficient." If you listen to Kay, on the other hand, he's talking about something entirely different. The transcendent effect of literacy on mankind created the very possibility of civilization, for good or ill. Computing as an aid to thinking in the way the written word was previously could take this to the next, higher stage – one we cannot really describe or talk about because we don't even have the language to do so.

But short term thinking, shareholder value, and the need for economic growth – these are and have been the pillars of our politics and culture for several decades now. No one says who that growth benefits, of course, which is why it's no coincidence that the maw of inequality has opened ever wider during the same period. If you're wondering where all the "good ideas" are, well, we don't have time for good ideas. We only have time for profitable ones, or at least ones that can be sold after a high valuation.

The culture also trickled into the university, and then to funding (not just science funding, but funding for most fields. We need more than science to do new science). I have been on the bitch end of writing NSF grants for pretty ambitious projects, and the requirements are straight out of Kafka. They want you to demonstrate that you'll be able to do the things you're saying you'll hope to be able to do. That's not how it used to work. But the angle is always the same: they want something "innovative" that can be useful as immediately as possible. Useful for the economy, that is. They don't understand this undeniable fact: if you want amazing developments, you have to let passionate and smart people screw around and you have to pay them for it. The university used to be the place to screw around with ideas and methods. Now it's career prep.

> I understand where the impulse to focus on that comes from, because globalization tends to produce a much more competitive economic landscape, where people feel much more uncertain about basic questions they have to answer

This kind of globalization is a choice, one made by powerful people with explicit interests. It was not inevitable. Right now I live in the wealthiest country that has ever existed on the planet. And right now many of its citizens are calling their elected leaders to beg them not to take away the sliver of health care that they have left. We serve the economy and not each other. When there's a big decision to make, our leaders wonder "how the market will react," rather than how people will be affected.

Last point: the idea of this thing called "the economy" as an object of policy is relatively new. Timothy Mitchell has an amazing chapter on it in his book Carbon Democracy. The 20th century was one where we allowed the field of economics to cannibalize all others. The 21st has not taken the chance to escape this.


We should get "Fast Company" to interview you -- you'd do a better job! (Actually I think I did do a better job than their editing wound up with.)

Your comments and criticism of the NSF are dead-on (and are the reason I gave up on NSF a few years ago -- and I was on several of the Advisory Committees and could not convince the Foundation to be tougher about its funding autonomy -- very tricky for them admittedly because of the way it is organized and threaded through both Congress and OMB).

One way to look at it is there is a sense of desperation that has grown larger and larger, and which manifests both in the powerless and the powered.


What came to mind when I read your comments were some complaints I've had that relate to the "looking for the keys under the streetlight" fallacy. There are intuitions and anecdotes we can have about the unknowns, which is the best we can do about many things in the present, until they can be measured and tested. A problem I see often is there are people who believe that if it can't be measured, it's not part of reality. I find that the unknowns can be a very important part of working with reality successfully, and that what can be measured in the present can end up being not that significant. It depends on what you're looking for.

As Kay and I have discussed here, efficiency is not irrelevant, but we agree it's not the only significant factor in a system that we all hope will produce the wealth needed for societal progress. What seems to be needed is some knowledge and ethics re. the wealth of society, ideally enacted voluntarily, as in the philanthropic efforts of Carnegie, and similar efforts.

I happened to watch a bit of Ken Burns's doc. on the Vietnam War, and I was reminded that McNamara was a man of metrics. He wanted data on anything and everything that was happening to our forces, and that of the Viet Cong. He got reams of it, but there were people who asked, "Are we winning the hearts and minds?" There were no metrics on that. We didn't have a way to measure it, so the question was considered irrelevant. The best that could've been done was to get honest opinions from commanders in the field, who understood the war they were fighting, and were interacting with the civilian population, if people were willing to listen to that. In a guerrilla war, which is what that was, "hearts and minds" was one of the most important factors. Most of the rest could've been noise.

I dovetail with your complaint about focus on the economy in policy, but for me, it's philosophical: It's not the government's job to be worrying about that so much. If you look at the Constitution, it doesn't say a thing about "shall maintain a prosperous economy," or, "shall ensure an equitable economy," or any of that. Sure, people want enough wealth to go around, but it's up to us to negotiate how that happens, not the government. I think unfortunately, politicians and voters, no matter their political stripe, have lost track of what the government's job is. I think, broadly, we treat it like an insurer, or banker of last resort. If things don't seem to work out the way we'd like, we appeal to government to magically make us whole (including economically). That's really missing the point of it.

I could go into a whole thing about the medical system (I won't), but I'll say from the research I've done on it (which probably is not the best, but I made an effort of it), it is one of the most tragic things I've seen, because it is grossly distorted from what it could be, but this is because we're not respecting its function. As you've surmised with globalization, it's been set up this way by some interested people. It's a choice. I see a big knowledge problem with what's been done to it for decades. Doesn't it figure that people interested in healing people should be figuring out how to do it, to serve the most people who need their help, rather than people who have no idea how to do that thinking they should tell them how to do it? This relates back to your proposition about scientific research. Shouldn't research be left to people who know how to do that, rather than people who don't trying to micromanage how you do it? I think we'd be better off if people had a sense of understanding the limits of their own knowledge. I don't know what it is that has people thinking otherwise. The best term I can come up with for it is "hubris." Perhaps the more accurate diagnosis, as Kay was saying, is fear. It makes sense that that can cause people to put their nose in deeper than where it should be, but it's like a horde panicking around someone who's collapsed from cardiac arrest, without the good sense to give someone who knows CPR some room, and then to allow medical personnel in, once they show up.

It's looked to me like a feedback loop, and I shudder to think about where it will end up, but I feel pretty powerless to stop the process at the moment. I made some efforts in that direction, only to discover I have no idea what I'm really dealing with. So, with some regret, I've followed Sagan's advice ("Don't waste neurons on what doesn't work."), left it alone, and directed my energy into areas I love, where I hope to make a meaningful contribution someday. The experience of the former has given me an interest in listening to scientists who have studied people, what they're really like. It seems like something I need to get past is what Jon Haidt has called the "rationalist delusion," particularly the idea that rational thought alone can change minds. Not so.


Clear thoughts and summary!


"We are 'story creatures' and it takes a lot of training and willpower to depart from 'fond stories and beliefs' to 'actually think things through'."

What your analysis did for me was help put two and two together, but yes, it "collided" with my notions of what an accomplishment it was, and what I had been led to believe that would lead to. What you exposed was that the reality of "what that would lead to" was quite different, and it explained the reality that was unfolding.

I knew that Apollo was a big rocket (ironically, that was one thing that impressed me about it, but I thought how amazing it was that such a thing could be constructed in the first place, and work. Though, I thought many years later about just what you said, that the more fuel you add, the more the fuel is just expending energy moving itself!), and that there were only three people on it, though the "efficiency" perspective, relating that to how it did not contribute to further knowledge for space travel, didn't occur to me until you laid it out. I also knew from listening to Reagan's science advisor that NASA was heavily influenced by the goals of military contractors that had done R&D on various technologies in the '60s, and which exerted political pressure to put them to use, to get return on investment. He said something to the effect of, "People worry about the Military Industrial Complex. Well, NASA IS the Military Industrial Complex! People don't think of it that way, but it is."

Not too long after I heard you talk about this, I happened to hear about a simulator called the Kerbal Space Program (commonly referred to as KSP), and someone posted a video of a "ludicrous single-launch vehicle to Mars (and back)" in it. Even though I think I've heard that KSP does not completely use realistic physics, it drove your point home fairly well. Though, people would point out that none of the proposed missions to Mars have talked about a single-launch vehicle from surface to surface. All of the proposals I've heard about have talked about constructing the vehicle in orbit. KSP, though, assumes chemical propellant.

https://youtu.be/mrjpELy1xzc

"the real romance and its implications didn't happen in the general public and politicians."

In hindsight, I've been struck by that. When I took the time to learn about the history of the Apollo program, I learned that Apollo 11 made a big impression on people all over the world, but that was really it. I think as far as the U.S. was concerned, people were probably more impressed that it met a political goal, JFK's bold proclamation that we would get men to the Moon and back, and that it was a historic first, but there was no sense of, "Great! Now what?" It was just, "Yay, we did it! Now onto other things." There's even been some speculation I've heard from politicos, who were in politics at the time, that we wouldn't have done any of the moon shots if Kennedy hadn't been assassinated, that it was sympathy for his legacy that drove the political will to follow through with it (if true, that's where the romance lay). Hardly anybody paid attention to it after 11, with the exception of Apollo 13, since there was the drama of a possible tragedy. Apollo 18 never got off the ground. The rocket was all set to go, but the program was scrubbed. People can look at the rocket, laid out in its segments sideways, at the Johnson Space Center in Houston.


James Fletcher -- twice the head of NASA -- gave a very good speech saying that "the moon shot, etc." was really about learning to coordinate 300,000 people and billions of dollars to accomplish something big in a relatively short time. (And that the US should use these kinds of experiences -- wars included, because the moon shot was part of the cold war -- to pick "goals for good" and do these.)

Most of the old hands and historians of the moon shot point to the public in the 70s no longer being afraid of the Russians in the way they had been in the 50s, and the successful moon landings helped assuage their fears. The public in general was not interested in space travel, science, etc. and did not understand it or choose to understand it. I think this is still the case today.


"Most of the old hands and historians of the moon shot point to the public in the 70s no longer being afraid of the Russians in the way they had been in the 50s, and the successful moon landings helped assuage their fears."

That's what I realized about 10 years ago. The primary political motivation for the space missions was to establish higher ground in a military strategic sense, and once that was accomplished, most people didn't care about it anymore. There was also an element of prestige to it, at least from Americans' perspective, that because we had reached a "higher" point in space than the Russians did, that gave us a sense of dominance over their extension of power.

You know this already, but people should keep in mind that what got the ball rolling was the launch of Sputnik in 1957. The message that most people got from that was that the Russians controlled higher ground, militarily, and that we needed to capture that pronto, or else we were going to be at a disadvantage in the nuclear arms race.

It also created a major push, as I understand, by the federal government to put more of an emphasis on math and science education, to seed the population of people who would be needed to pull that off. I've heard thinking that this created a generation of scientists and engineers who eventually came into industry, which created the technological products we eventually came to use. There's been a positive sense of that legacy from people who have reviewed it, but I've since heard from people who went through the "new math" that was taught through that push. They hated it with a passion, and said it turned them off to math for many years to come.

The more positive aspect I like to reflect on is that Sputnik inspired young people to become interested in math, science, and engineering on their own, and they really experienced those disciplines. A nice portrayal of one such person is in the movie "October Sky," based on Homer Hickam's autobiographical book, "Rocket Boys."


It seems to me the Raspberry Pi people have put out a lot of good work making hard things possible and transforming education.

The video you posted reminded me about some of the work of Bret Victor, especially his interactive environments in his video on "inventing on principle". Although what's missing there is the ability to connect and modify the environment itself.

I still have to think a bit more about your link to Montessori, who has been a great inspiration for the school ( http://aventurijn.org ) that my parents started. Also in relation to what you said about teaching real math. Montessori has this system with beads and other countable cubes and pies to teach things like multiplying fractions, that is not used in most schools that call themselves "Montessori schools".


Bret is a great thinker and designer


> See what you think about what happens around 9:05

I saw two interesting things around 9:05. A 13 year old made an 'active essay' on the computer which contains not just text but also a dynamic interactive environment so the reader can follow along and even try out new ideas. This type of media is not prevalent today - essays written by 13 year olds today would be in Google/Word docs and contain only static text and static pictures (i.e. digitized paper), but no interactivity. There are ways to do interactivity today, of course, but they are not easy and not the default. Is this what you are getting at?

The other interesting thing is how two tools - the drawing tool and animation tool - are made to work together, even when they were not created with each other in mind. IIUC the image is not a file format here but an object, but don't both tools then need to work with the same image protocol? I suppose you can always have adapters to connect different image protocols, but it doesn't seem like the best option. Still thinking about how much (or how little) shared knowledge is needed to make this scale to all types of objects.
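To make my question concrete, here's a toy sketch (Python, nothing to do with Smalltalk-78; all the names are made up) of the two options I can see: tools that agree on one tiny image protocol, and an adapter for an object that speaks a different one:

    # Two tools that only agree on one message ("give me your bits"), plus an adapter.
    class PaintImage:                        # produced by a hypothetical drawing tool
        def __init__(self, pixels): self.pixels = pixels
        def bits(self): return self.pixels   # the shared protocol: answer bits()

    class FilmStrip:                          # a hypothetical animation tool
        def __init__(self): self.frames = []
        def add_frame(self, frame): self.frames.append(frame.bits())  # only needs bits()

    class LegacyBitmap:                       # an object with a different protocol
        def __init__(self, raw): self.raw = raw
        def raw_bytes(self): return self.raw

    class BitmapAdapter:                      # bridges raw_bytes() -> bits()
        def __init__(self, legacy): self.legacy = legacy
        def bits(self): return self.legacy.raw_bytes()

    strip = FilmStrip()
    strip.add_frame(PaintImage([1, 2, 3]))                 # shares the protocol directly
    strip.add_frame(BitmapAdapter(LegacyBitmap([4, 5])))   # bridged by the adapter
    print(strip.frames)                                    # [[1, 2, 3], [4, 5]]

The animation tool never knows which kind it got -- it just sends bits() -- so the question reduces to how small that shared vocabulary can be kept as the number of object types grows.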


My reason for drawing your attention to this section of the talk is to show that some of the ideas (now 40-50 years old) were about "dynamic media" -- of course live computing should be part of the combined media experience on a personal computer -- and of course you should be able to do what are now called "mash ups", but to be able to combine useful things easily and at will (it's crazy that this e.g. isn't even provided for maps in a general way on smartphones, tablets and PCs).

But the larger point here is that if one is dealing with dynamic objects as originally intended, the objects can help greatly and safely in coordinating them. This shouldn't be more difficult than what we do in combining ordinary materials in our physical world (it should be even simpler!).

In the system used for the demo -- Smalltalk-78 -- every thing in the system is a dynamic object -- there are no "data structures". This means in part that each object, besides doing its main purpose, can also provide useful help in using it, can include general protocols for "mashing up", etc.

We can do better today, but my whole point in the interview and in these comments is that once e.g. Engelbart showed us great ideas for personal computing, we should not adopt worse ideas (why would any reasonable people do this?), once dynamic media has been demonstrated in a comprehensive way, we should not go back to imitating static media in ways that preclude dynamic media (if you have dynamic media you can do static media but not vice versa!).

Going back and doing Engelbart or Parc also makes no sense, because we have vastly more computing resources today than 50 years ago. We need to go forward -- and -to think things through- ! -- about what computers are, what we are, and how to use the best of both in powerful combinations. This was Licklider's dream from 1960, and some of it was built. The dream is still central to our thinking today because it was so large and good to be always beckoning us ahead.


Thank you for taking the time to respond, and I'd really appreciate if you can clarify my follow up below too!

> every thing in the system is a dynamic object -- there are no "data structures"

I'm still programming in data structures :/. I've seen many of your talks over the years and it took me quite a while to realize what you mean by objects (I think) is not just the textual specification (i.e. 'source code' in today's world), but rather a live, run-able thing that can be probed, inspected and made to do its thing, all by sending messages to it. In the Unix world this would be more akin to a long running server process, but with a much better unified, discover-able IPC mechanism (i.e. 'messaging'). The only thing that needs to be standardized here is the messaging mechanism itself. Larger processes would be constructed by just hooking up existing objects. Automatic persistence would mean these objects don't need to extract and store 'just data' outside themselves, etc.
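A toy way to picture what I mean (my own sketch in Python, not Smalltalk): the only standardized thing is the message send, and any object can be probed at runtime for what it understands and then hooked up on the spot:

    # The one universal mechanism is send(); everything else is objects answering messages.
    def send(obj, selector, *args):
        return getattr(obj, selector)(*args)

    class LiveObject:
        def understands(self):
            # let any object be probed for its message protocol
            return [m for m in dir(self)
                    if callable(getattr(self, m)) and not m.startswith('_')]

    class PhotoAlbum(LiveObject):             # a made-up example object
        def __init__(self): self.photos = []
        def add(self, photo): self.photos.append(photo)
        def count(self): return len(self.photos)

    album = PhotoAlbum()
    print(send(album, 'understands'))         # ['add', 'count', 'understands']
    send(album, 'add', 'sunset.png')          # "using" and "programming" look the same
    print(send(album, 'count'))               # 1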

This model blurs the distinction between what today we call 'programming' (writing large gobs of text), what we call 'operations' (configuring and deploying programs), and what we call 'using' (e.g. reviewing, organizing my photos). Instead, for every case, I would be doing the same kind of operation - i.e. inspecting and hooking up objects - but the objects I'm working with would be different, and the UI could be different. This makes programming more interactive ('let me see if this object can talk to that object by actually connecting them' vs. 'let me see if I can write a large blob of text that satisfies the compiler, by simulating the computer in my head').

The other thing I notice is you don't slice the computation the same way that is so common today. E.g. today I write source code (form #1 of computation), which runs through a compiler to produce an executable file (form #2 of the same computation), which is then executed and loaded in memory (form #3, because now it merges with the data from outside itself). Form #1 is checked in to source control, form #2 is bundled for distribution and form #3 is rather transient.

Instead, you're slicing computation on a different axis and all forms of the same computation are kept together - i.e. the specification, executable and runtime forms are one and the same 'object'. The decomposition happens by breaking down along functional boundaries. This means modification of the specification can happen anywhere I encounter one of these live objects, right then and there. I don't have to trace the computation to its 'source'.

So my main question is - am I on the right track here?

> Going back and doing Engelbart or Parc also makes no sense

I agree, but given the sad state of composition, even if we had some of those ideas today, it would seem like a step forward :) IMO, today we want to think of farms of computers as one large computer, and instead of programming in the small, we want to program all of them together.


Yes, you have the gist of our approach in the 60s and especially at Parc in the 70s. And the Doug McIlroy parts of Unix also got this (the "pipes" programming and other ideas).

What I called "objects" ca 1966 was a takeoff from Simula and Sketchpad that was highly influenced by both biology, by the "processes" (a kind of virtual computer) that were starting to be manifested by time-sharing systems, and by my research community's discussions and goals for doing an "ARPAnet" of distributed computers. If you took the basic elements to be "computers in communication" you could easily get the semantics of everything else (even to simulate data structures if you still thought you needed them).

So, yes, everything could be thought of as "servers". Smalltalk at Parc was entirely structured this way (and the demo I made from one of the Parc Smalltalks for the Ted Nelson tribute shows examples of this).

It's worth noting that you then have made a "software Internet", and this can be mapped in many ways over a physical Internet.

And so forth. This got quite missed. In a later interview Steve said that he missed a bunch of things from his visit to Parc in 1979. What was ironic was that the context of the interview was partly that the SW of the NeXT computer now did have these (in fact, not really).

To be a bit more fair, big culprits in the miss in the 80s were Motorola and Intel for not making IC CPUs with Chuck Thacker's emulation architectures that we used at Parc to be able to run ultra high level languages efficiently enough. The other big culprit was that you could do -something- and sell it for a few thousand dollars, whereas what was needed was something whose price tag in the early 80s would have been more like $10K.

Note that a final culprit here is that the personal computer could not be seen for all it really was, and especially in the upcoming lives of most people. The average price of a car when the Mac came out was about $9K (that according to the web is about $20K today -- the average price of a car today is about $28K). To me a really good personal computer is worth every penny of $28K -- I'd love to be able to buy $28K of personal computer! One way to evaluate "the computer revolution" is to note not just what most people do with their computers in all forms, but what they are willing to pay. I think it will be a while before most people can see enough to put at least the value of a car on their "information and thinking amplifier vehicle"


> It's worth noting that you then have made a "software Internet", and this can be mapped in many ways over a physical Internet.

The more interesting/optimized ways to map this would be where a single object in the software internet somehow maps to multiple computers, either doing parallel computation or partitioned computation on each. I feel the semantics of mapping the object onto a physical computer would have to be encoded in the object itself.

Perhaps some other kinds of higher level semantic model (i.e. not a 'software internet') might also be easy to map onto a physical internet. This is something I am interested in actively exploring. That is, how to build semantic models that are optimized for human comprehension of a problem, but can be directly run on farms of physical computers? Today a lot of the translation is done in our heads - all the way down to a fairly low level.

> big culprits in the miss in the 80s were Motorola and Intel for not making IC CPUs with Chuck Thacker's emulation architectures

Maybe there is a feedback loop where the growth of Unix leads to hardware vendors thinking 'lets optimize for C', which then feeds the growth further? OTOH, even emulated machines are faster than hardware machines used to be.

> I'd love to be able to buy $28K of personal computer!

Well, you can already buy $28K or more of computing resources and connect it to your personal device. It's not easy to get much value from this today, though.


"The more interesting/optimized ways to map this would be where a single object in the software internet somehow maps to multiple computers, either doing parallel computation or partitioned computation on each. I feel the semantics of mapping the object onto a physical computer would have to be encoded in the object itself."

You might be interested in Alan Kay's '97 OOPSLA presentation. He talked in a similar vein to what you're talking about: https://youtu.be/oKg1hTOQXoY?t=26m45s

Inspired by what he said there, I tried a little experiment in Squeak, which worked, as far as it went (scroll down the answer a bit, to see what I'm talking about, here): https://www.quora.com/What-features-of-Smalltalk-are-importa...

I only got that far with it, because I realized once I did it that I had more work to do in understanding what to do with what I got back (mainly translating it into something that would keep with the beauty of what I had started)...

"Maybe there is a feedback loop where the growth of Unix leads to hardware vendors thinking 'lets optimize for C', which then feeds the growth further? OTOH, even emulated machines are faster than hardware machines used to be."

There is a feedback loop to it, though as development platforms change, that feedback gets somewhat attenuated.

As I recall, what you describe with C happened, but it began in the late '90s, and into the 2000s. I started hearing about CPUs being optimized to run C faster at that point.

I once got into an argument with someone on Quora about this re. "If Lisp is so great, why aren't more people using it?" I used Kay's point about how bad processor designs were partly to blame for that, because a large part of why programmers make their choices has to do with tradition (this gets translated to "familiarity"). Lisp and Smalltalk did not run well on the early microprocessors produced by these companies in the 1970s. As a consequence, programmers did not see them as viable for anything other than CS research, and higher-end computing (minicomputers).

A counter to this was the invention of Lisp machines, with processors designed to run Lisp more optimally. A couple companies got started in the '70s to produce them, and they lasted into the early '90s. One of these companies, Symbolics, found a niche in producing high-end computer graphics systems. The catch, as far as developer adoption went, was these systems were more expensive than your typical microcomputer, and their system stuff (the design of their processors, and system software) was not "free as in beer." Unix was distributed for free by AT&T for about 12 years. Once AT&T's long-distance monopoly was broken up, they started charging a licensing fee for it. Unix eventually ran reasonably well on the more popular microprocessors, but I think it's safe to say this was because the processors got faster at what they did, not that they were optimized for C. This effect eventually occurred for Lisp as well by the early '90s, which is one reason the Lisp machines died out. A second cause for their demise was the "AI winter" that started in the late '80s. However, by then, the "tradition" of using C, and later C++ for programming most commercial systems had been set in the marketplace.

The pattern that seems to repeat is that languages become popular because of the platforms they "rode in on," or at least that's the perception. C came on the coattails of Unix. C++ seems to have done this as well. This is the reason Java looks the way it does. It came out of this mindset. It was marketed as "the language for the internet," and it's piggybacked on C++ for its syntax and language features. At the time the internet started becoming popular, Unix was seen as the OS platform on which it ran (which had a lot of truth to it). However, a factor that had to be considered when running software for Unix was portability, since even though there were Unix standards, every Unix system had some differences. C was reasonably portable between them, if you were careful in your implementation, basically sticking to POSIX-compliant libraries. C++ was not so much, because different systems had C++ compilers that only implemented different subsets of the language specification well, and didn't implement some features at all. C++ was used for a time in building early internet services (combined with Perl, which also "rode in" on Unix). Java was seen as a pragmatic improvement on C++ among software engineers, because, "It has one implementation, but it runs on multiple OSes. It has all of the familiarity, better portability, better security features, with none of the hassles." However, it completely gave up on the purpose of C++ (at the time), which was to be a macro language on top of C in a similar way to how Simula was a macro language on top of Algol. Despite this, it kept C++'s overall architectural scheme, because that's what programmers thought was what you used for "serious work."

From a "power" perspective, one has to wonder why programmers, when looking at the prospect of putting services online, didn't look at the programming architecture, since they could see some problems with it pretty early, and say to themselves, "We need something a lot better"? Well, this is because most programmers don't think about what they're really dealing with, and modeling it in the most comprehensive way they can, because that's not a concept in their heads. Going back to my first point about hardware, for many years, the hardware they chose didn't give them the power so they could have the possibility to think about that. As a result, programmers mostly think about traits, and the community that binds them together. That gives them a sense of feeling supported in their endeavors, scaling out the pragmatic implementation details, because they at least know they can't deal with that on their own. Most didn't think to ask (including myself at the time), "Well, gee. We have these systems on the internet. They all have different implementation details, yet it all works the same between systems, even as the systems change... Why don't we model that, if for no other reason than we're targeting the internet, anyway? Why not try to make our software work like that?"

On one level, the way developers behave is tribal. Looked at another way, it's mercantilistic. If there's a feedback loop, that's it.

"OTOH, even emulated machines are faster than hardware machines used to be."

What Kay is talking about is that the Alto didn't implement a hard-coded processor. It was soft-microcoded: you could load the microinstructions that defined the processor's own instruction set, and then load your system software on top of that. This enabled them to make decisions like, "My program runs less efficiently when the processor executes my code this way. I can change the microcode, and make it run faster."

This explains Kay's use of the term "emulated." I didn't know this until a couple of years ago, but at first, they programmed Smalltalk on a Data General Nova minicomputer. When they brought Smalltalk to the Alto, they microcoded the Alto so that it could run Nova machine code. So, it sounds like they could just transfer the Smalltalk VM binary to the Alto and run it. Presumably, they could even transfer the BCPL compiler they were using to the Alto, and compile versions of Smalltalk with that. The point, though, is that they could optimize the performance of their software by tuning the Alto's processor to what they needed. That's what he said was missing from the early microprocessors: you couldn't add or change operators, and you couldn't change how they were implemented.
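
To make that concrete, here's a minimal Python sketch (the names and the instruction set are entirely made up, it's just the shape of the idea): if the "instruction set" is a table you can load and change, you can add or re-tune operations to suit the software you want to run, which is roughly the flexibility the fixed microprocessors took away.

    # Sketch: a "soft-coded" instruction set as a swappable dispatch table.
    # The point is only that the instruction set itself is something you can tune,
    # not a fixed feature of the machine.

    def make_machine(microcode):
        def run(program, stack=None):
            stack = [] if stack is None else stack
            for op, arg in program:
                microcode[op](stack, arg)   # each "instruction" is replaceable
            return stack
        return run

    base_microcode = {
        "push": lambda s, a: s.append(a),
        "add":  lambda s, a: s.append(s.pop() + s.pop()),
    }

    run = make_machine(base_microcode)
    print(run([("push", 2), ("push", 2), ("add", None)]))    # [4]

    # "Re-microcode" the machine: add an operator the original set lacked.
    base_microcode["mul"] = lambda s, a: s.append(s.pop() * s.pop())
    print(run([("push", 3), ("push", 4), ("mul", None)]))    # [12]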


Actually ... only the first version of Smalltalk was done in terms of the NOVA (and not using BCPL). The subsequent versions (Smalltalk-76 on) were done by making a custom virtual machine in the Alto's microcode that could run Smalltalk's byte codes efficiently.

The basic idea is that you can win if the microcode cycles are enough faster than the main memory cycles so that the emulations are always waiting on main memory. This was generally the case on the Alto and Dorado. Intel could have made the "Harvard" 1st level caches large enough to accommodate an emulator -- that would have made a big difference. (This was a moot point in the 80s)


I know this is getting nit-picky, but I think people might be interested in getting some of the details in the history of how Smalltalk developed. Dan Ingalls said in "Smalltalk-80: Bits of History":

"The very first Smalltalk evaluator was a thousand-line BASIC program which first evaluated 3 + 4 in October 1972. It was followed in two months by a Nova assembly code implementation which became known as the Smalltalk-72 system."

The first Altos were produced, if I have this right, in 1973.

I was surprised when I first encountered Ingalls's implementation of an Alto on the web, running Smalltalk-72, because the first thing I was presented with was, "Lively Web Nova emulator", and I had to hit a button labeled "Show Smalltalk" to see the environment. He said what I saw was Nova machine code from a genuine ST-72 image, from an original disk platter.

I take it from your comment that you're saying by the time ST-76 was developed, the Alto hardware had become fast enough that you were able to significantly reduce your use of machine code, and run bytecode directly at the hardware level.

I could've sworn Ingalls said something about using BCPL for earlier versions of Smalltalk, but quoting out of "Bits of History" again, Ingalls, when writing about the Dorado and Smalltalk-80, said of BCPL that the compiler you were using compiled to Alto code, but ...

"As it turned out, we only used Bcpl for initialization, since it could not generate our extended Alto instructions and since its subroutine calling sequence is less efficient than a hand-coded one by a factor of about 3."


The Alto didn't get any faster, and there was not a lot of superfast microcode RAM (if we'd had more it would have made a huge difference). In the beginning we just got Smalltalk-72 going in the NOVA emulator. Then we used the microcode for a variety of real-time graphics and music (2.5 D halftone animation, and several kinds of polytimbral timbre synthesis including 2 keyboards and pedal organ). These were separate demos because both wouldn't fit in the microcode. Then Dan did "Bitblt" in microcode which was a universal screen painting primitive (the ancestor of all others). Then we finally did the byte-code emulator for Smalltalk-76. The last two fit in microcode, but the music and the 2.5 D real-time graphics didn't.

The Notetaker Smalltalk (-78) was a kind of sweet spot in that it was completely in Smalltalk except for 6K bytes of machine code. This was the one we brought to life for the Ted Nelson tribute.


Thanks for the long write up. I found it very interesting.

> You might be interested in Alan Kay's '97 OOPSLA presentation

Oh yeah I have actually seen that - probably time to watch it again.

> Well, this is because most programmers don't think about what they're really dealing with

Agree with that. Most people are working on the 'problem at hand' using the current frame of context and ideas, and focus on cleverness, optimization, or throughput within this framework, when changing the frame of context may in fact be much better.

> What Kay is talking about is that the Alto didn't implement a hard-coded processor. It was soft-microcoded.

Interesting. I wonder if FPGAs could be used for something similar - i.e. program the FPGAs to run your bytecode directly. But I'm speculating because I don't know too much about FPGAs.


Yes re: FPGAs -- they are definitely the modern placeholder of microcode (and better because you can organize how the computation and state are hooked together). The old culprit -- Intel -- is now offering hybrid chips with both an ARM and a good size patch of FPGA -- combine this with a decent memory architecture (in many ways the hidden barrier these days) and this is a pretty good basis for comprehensive new designs.


> The more interesting/optimized ways to map this would be where a single object in the software internet somehow maps to multiple computers ...

Yes, this is the essence of Dave Reed's 1978 MIT thesis on the design of a distributed OS for the Internet of "consistent" objects mapped to multiple computers. In the early 2000s we had the opportunity to test this design by implementing it. This led to a series of systems called "Croquet" and an open source system and foundation called "Open Cobalt".

> how to build semantic models that are optimized for human comprehension of a problem, but can be directly run on farms of physical computers?

Keep on with this ...


> Keep on with this ...

This is still kind of a mishmash of early thoughts and I have a couple of different lines of thought, which I hope will come together. I'll start with a couple of observations:

1. Most programming languages and DSLs are uni-directional - the computer doesn't talk back to the human in the same language.

2. The mental models (not the language) humans use to communicate with each other, even when using a lot of rigor and few ambiguities, are often different from the languages and models used for computation.

The first idea is: there are some repeating structures in mental models. We think new concepts in terms of old by first thinking the structures (which are few and axiomatic, like the structure/function words in English) and then materializing the content, as well as refining the structures. E.g. I can say to a non-programmer that 'classes contain methods' and they kind of get the structure without knowing the content. In my mind this is represented as a graph, where the 'contains' relationship is an edge that connects two 'content nodes'.

   [something called class] --(contains)-> [something called method]
If I follow up with 'methods contain code', they can reason that classes indirectly contain code, without even knowing what these things actually mean! So 'contains' is kind of a universal concept - it applies to abstract content and physical content in a similar way. Another universal connection is 'abstraction of', this implies one node (the abstract thing) is related to other nodes (the concrete things) in a specific way.
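
Here's a tiny Python sketch of that kind of structure-first reasoning (the representation is entirely made up): the graph only knows a couple of universal edge types like 'contains', and it can answer the indirect-containment question without knowing what the nodes mean.

    # Sketch: reason over "contains" edges without knowing what the content means.
    from collections import defaultdict

    contains = defaultdict(set)               # X --(contains)--> Y edges

    def add(container, contained):
        contains[container].add(contained)

    def contains_indirectly(container, target, seen=None):
        seen = set() if seen is None else seen
        if container in seen:                 # guard against cycles
            return False
        seen.add(container)
        return target in contains[container] or any(
            contains_indirectly(child, target, seen) for child in contains[container])

    add("class", "method")
    add("method", "code")
    print(contains_indirectly("class", "code"))   # True -- without knowing what the nodes mean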

Maybe structures can be made composable, and we can operate on graphs structurally, without knowing what the content means? While another operation might eventually figure out what the content means. The main assumption here is that my thoughts are organized as graphs, where connections are both universal and domain-specific, but of few kinds. Can I talk to the computer in terms of such graphs?

The second idea is: I want to combine high-level concepts and strategies from somewhat different domains. E.g. if I know different strategies for 'distributing things into bins' (consistent hashing, sharding, etc.), I invoke this 'idea' manually whenever I see a situation which looks like 'distribute things into bins' and make a choice - irrespective of scale. Can I get the computer to do this for me instead?

So the final thing here is to get to something like this: I take an idea (i.e. a node in a graph) from the distributed computing domain, merge it with a definition (another node) of a computation I created (e.g. persistence strategies), and have the computer offer options on how to distribute that computation (i.e. 'distributed persistence strategies'). Then I can make choices and combine it with a 'convert idea to machine code' strategy and generate a program. This is all a bit abstract at this point, but I'm also trying to find where this overlaps with prior art.


A clarifying comment here. When one thinks in terms of what I called objects ca 1966, one is talking about entities that from the outside are identical to what we think of as computers (and this means not just sending messages and getting outputs, but that we don't get to look inside with our messages, and our messages don't get to command, unless whatever is going on in the interior of the computer has decided to allow).

So from the outside, there are no imperatives, only requests and questions. Another way to look at this is that an object/computer is a kind of "server" (I worry to use this term because it also has "pop" meanings, but it's a good term.)

This is sometimes called "total encapsulation".

From this standpoint, we don't know what's inside. Could be just hardware. Could be variables and methods. Could be some form of ontology. Or mix and match.

This is the meaning of computers on a network, especially large worldwide ones.

The basic idea of "objects" is that what is absolutely needed for doing things at large scale, can be supplied in non-complex terms for also doing the small scale things that used to be done with data structures and procedures. Secondarily, some of the problems of data structures and procedures at any scale can be done away with by going to the "universal servers in systems" ideas.

Similarly, what we have to do for critical "data structures" -- such as large scalings, "atomic transactions", versions, redundancy, distribution, backup, and "procedural fields" (such as the attribute "age") are all more easily and cleanly dealt with using the idea of "objects".

One of the ways of looking at what happened in programming is that many if not most of the naive ways to deal with things when computers were really small did not scale up, but most programmers wanted to stay with the original methods, and they taught next gen programmers the original methods, and created large fragile bodies of legacy code that requires experts in the old methods to maintain, fix, extend ...


> So from the outside, there are no imperatives, only requests and questions.

This threw me off a bit, as Smalltalk collections have imperative-style messages, for instance.

> Could be some form of ontology.

This remark helped me find some clarity.

I want the computer to help me do cross ontological reasoning and mapping. For instance, if I want to compute geometry, how do I map the ontology of 'geometry' onto the ontology of 'smalltalk'? I 'think up' the mapping, but it would be great if the computer helps me here too. Mapping 'smalltalk' onto 'physical machines' is another ontological mapping. The 'mapping of ontologies' is itself an ontology.

In large systems there are a lot of ontological 'views' and 'mappings' at play. I want to inspect and tweak each independently using the language on the ontology, and have the computers automatically map my requests to the physical layer in an efficient way. This is not possible in systems today because there is an incredible amount of pre-translation that happens, so high-level questions cannot be directly answered by the system - I have to track them down manually to a different level.

Maybe the answer is to define the ontologies as object collections and have them talk to each other and figure it out. I want to tweak things after the system is up, of course, so I could send an appropriate message (e.g. 'change the bit representation of integers' or 'change the strategy used in mapping virtual objects to physical') and everything affected would be updated automatically (is this 'extreme late binding'?).


Yes, "collections" and other such things in Smalltalk are "the Christian Scientists with appendicitis". Our implementations were definitely compromises between seeing how to be non-imperative vs already having the "devil's knowledge" of imperative programming. One of the notions we had about objects is that if we had to do something ugly because we didn't have a better idea, then we could at least hide it behind the encapsulation and the fact that message sending in the Smalltalks really is a request.

Another way of looking at this is if an "object" has a "setter" that directly affects a variable inside then you don't have a real object! You've got a data structure however much in disguise.

Another place where the "sweet theory" was not carried into reality was in dependencies of various kinds. Only some important dependencies were mitigated by the actual Smalltalks.

Two things that helped us were that we did many on-the-fly changes to the system over 8-10 years -- about 80 system releases -- including a new language every two years. This allowed us to avoid getting completely gobbed up.

The best and largest practical attempt at an ontology is in Doug Lenat's CYC. The history of this is interesting and required a number of quite different designs and implementations to gather understanding.


> Yes, "collections" and other such things in Smalltalk are "the Christian Scientists with appendicitis".

Interesting to hear this perspective - drives home the point that we shouldn't just stop at generic late bound data structures.

> Only some important dependencies were mitigated by the actual Smalltalks.

Dependency management in today's systems is just mind numbing. If only we had a better way to name and locate these.


One of the things I've realized is that using names for locating what's needed (I assume we're talking about the same idea) is part of the problem. At small scales it's fine. As systems get bigger, it becomes a problem. The internet went through this. When it started as the Arpanet, there was (if I remember correctly) one guy who kept the directory of names for each system on the network. The network started small, so this could work. As it grew into the thousands of nodes, this became less manageable, partly because there started to be duplicate requests for the same name for different nodes--naming conflicts--which is why DNS was created, and why ICANN was ultimately created, to settle who got to use which names.

I doubt something like that would scale properly for code, though many organizations have tried it, by having software architects in charge of assigning names to entities within programs. The problem then comes when companies/organizations try to link their systems together to work more or less cooperatively. I heard despondent software engineers talk about this 15 years ago, saying, "This is our generation's Vietnam." (They didn't lack for the ability to exaggerate, but the point was they could not "win" with this strategy.) They were hoping to build the semantic web, but different orgs couldn't agree on what terms meant. They'd use the same terms, but they would mean different things, and they couldn't make naming work across domains ("domains" in more than one dimension).

So, we need something different for locating things. Names are fine for humans. We could even have names in code, but they wouldn't be used for computers to find things, just for us. If we need to disambiguate, we can find other features to help, but computing needs something, I think, that identifies things by semantic signifiers, so that even though we use the same names to talk about them, computers can disambiguate by what they actually need, by function. It wouldn't get rid of all redundancy, because humans being humans, economics and competition are going to promote some of that, but it would help create a lot more cooperation between systems.


The thing about DNS and naming is that there were a lot of ideas flying around, some of them in the big standards committees. X.400 and X.500 were the OSI standards for messaging and directory services that were going to handle finding entities using specific attributes rather than with direct names or even straight hierarchical naming (like DNS finally used). It's interesting to read all the old stuff -- I had to sift through much of it a few years back when I wrote my dissertation on the early history of DNS (a cure for insomnia).

I wonder with the Internet now if anything effectively different is even possible, considering that it's no longer a small network but everywhere like the air we breathe.


> I wonder with the Internet now if anything effectively different is even possible, considering that it's no longer a small network but everywhere like the air we breathe.

You could slowly bootstrap a new system on the existing one, but you'd need a fleshed out design first :) Everything is replaceable, IMO, even well established conventions and standards, if something compelling comes along.

The CCN ideas are related to naming as well. Maybe the ideas could be extended to handle 'objects' rather than just 'content'.


Hardware architecture is the horizon of my knowledge. But one thing I've always wondered is this: why not just have memory addresses inside a computer map to local IPv6 addresses, then have some other "chip" that can distinguish between non local IP addresses that would, in a perfect object world, point to places in memory on another remote machine?

Obviously there would need to be some kind of virtualization of the memory but hopefully you get the idea. Not exactly related to naming but whatever.


Interesting - is the broader idea here that there is a virtual machine that spans multiple physical machines? Instead of virtual 'memory access', why not model this as a virtual 'software internet'?


I don't even think you need a VM, really. Just have this particular computer equipped with some soft-core that handles IP from the outside. The memory mapping, since it's just IPv6, can determine whether you are dealing with information from the outside world (non-localhost ips) or your own system (local ips). Because logically they are already different blocks in memory, they're already isolated.

With something like that you might be able to have "pure objects" floating around the internet. Of course your computer's interpretation of a network object is something it has to realize inside of itself (kind of like the way you interpret the words coming from someone else's mouth in your own head, realizing them internally), but you will always be able to tell that "this object inside my system came from elsewhere" for its whole lifecycle.

Maybe you could even have another soft-core (FPGA like) that deals with brokering these remote objects, so you can communicate changes to an incoming object that you want to send a message to. This is much more like communication between people, I think.


> I don't even think you need a VM, really.

I mean a VM as in the idea that you are programming an abstract thing, not a physical thing. Not a VM as in a running program. You could emulate the memory mapper in software first - hardware would be an optimization.

The important point is 'memory mapper' sounds like the semantics would be `write(object_ip, at_this_offset, these_bytes)`, but what you really want IMO is `send(object_ip, this_message)`. That is, the memory is private and the pure message is constructed outside the object.

You still need the mapping system to map the object's unique virtual id to a physical machine, physical object. So having one IP for each of these objects could be one way.
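
To make the write-vs-send contrast concrete, here's a toy Python sketch (the address and the names are invented): the only operation the "fabric" offers is send, and the receiver keeps its memory private and decides what the message means.

    # Sketch: objects addressed by an id (think "one IPv6 address per object").
    # The only operation offered is send(id, message) -- there are no writes.

    registry = {}   # object id -> local object (stand-in for real routing)

    def send(object_id, message, *args):
        receiver = registry[object_id]              # could just as well forward over a network
        return receiver.receive(message, *args)     # the receiver decides what the message means

    class Counter:
        def __init__(self):
            self._count = 0                         # private; there is deliberately no setter
        def receive(self, message, *args):
            if message == "increment":
                self._count += 1
                return self._count
            if message == "current count":
                return self._count
            return "message not understood"

    registry["fd00::42"] = Counter()
    print(send("fd00::42", "increment"))       # 1
    print(send("fd00::42", "current count"))   # 1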

Alan Kay mentioned David Reed's 1978 thesis (http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-20...) which develops these ideas (still reading). In fact, a lot of 'recently' popular ideas seem to be related to the stuff in that thesis (e.g. 'pseudo-time')


> One of the things I've realized is that using names for locating what's needed (I assume we're talking about the same idea) is part of the problem.

I don't think naming itself is a problem if you have a fully decentralized system. E.g., each agent (org or person) can manage their namespace any way they choose in a single global virtual namespace. I'm imagining something like ipfs/keybasefs/upspin, but for objects, not files, and with some immutability and availability guarantees.

But yes, there should probably be other ways to find these things, using some kind of semantic lookup/negotiation.
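
A toy Python sketch of what that kind of semantic lookup could look like (entirely made up, just to contrast with name-based lookup): you ask for capabilities, and the names are only there for humans.

    # Sketch: locate a collaborator by what it can do, not by what it is called.
    services = [
        {"name": "acme-renderer", "provides": {"render", "paginate"}},
        {"name": "quickdraw",     "provides": {"render", "animate"}},
    ]

    def find(required):
        return [s["name"] for s in services if required <= s["provides"]]

    print(find({"render", "animate"}))   # ['quickdraw'] -- the name never enters the lookup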


What he means is that names are a local convention, and scaling soon obliterates the conventions. Then you need to go to descriptions that use a much smaller set of agreed on things (and you can use the "ambassador" idea from the 70s as well).


A few years ago when I was doing some historical research about DNS, I came across quite a few interesting papers that all discussed "agents" in a way that seemed based on some shared knowledge/assumptions people had at the time. In particular, these would be agents for locating things in the "future internetwork". There's a paper by Postel and Mockapetris that comes to mind. Is this an example of "ambassadors"?


Ah I see, we're talking about interoperability, not just naming.

This is related to my original interest in language structure words and ontologies. The idea there is that the set of 'relationships' between things is small and universal (X 'contains' Y, A 'is an abstraction of' B) and perhaps can be used to discover and 'hook up' two object worlds that are from different domains.


Yes, this got very clear in a hurry even in the ARPAnet days, and later at Parc. (This is part of the Licklider "communicating with aliens" problem.)

Note that you could do a little of this in Linda, and quite a bit more in a "2nd order Linda".

I've also explained the idea of "processes as 'ambassadors'" in various talks (including a recent one to the "Starship Congress").


> Could be just hardware. Could be variables and methods. Could be some form of ontology. ...

> more easily and cleanly dealt with using the idea of "objects".

OK after this sitting in my mind for a bit longer something 'clicked'. What I'm thinking now is that there are many types of 'computer algebra' that can be designed. Data structures and procedures are only one such algebra - but they have taken over almost all of our mainstream thinking. So instead of designing systems with better suited algebra, we tend to map problems back to the DS+procedures algebra quickly. Smalltalk is well suited to represent any computer algebra (given the DS/procedure algebra is implemented in some 'objects', not the core language).

> created large fragile bodies of legacy code that requires experts in the old methods to maintain, fix, extend

If I understand correctly you are saying that better methods would involve objects and 'algebra' that perhaps don't involve data structures and procedures at all, even all the way down for some systems.


Mathematics is a plural for a reason. The idea is to invent ways to represent and infer that are not just effective but help thinking.

I don't think Smalltalk is well suited to represent any algebra (the earliest version (-72) was closer, and the next phase of this would have been much closer as a "deep" extensible language).

A data structure is something that allows fields to be "set" from the outside. This is not a good idea. My original approach was to try to tame this, but I then realized that you could replace "commands" with "requests" and imperatives with setting goals.


> and the next phase of this would have been much closer as a "deep" extensible language.

Are these ideas (and the 'address space of objects') elaborated on somewhere?

> A data structure is something that allows fields to be "set" from the outside. This is not a good idea. My original approach was to try to tame this, but I then realized that you could replace "commands" with "requests" and imperatives with setting goals.

I agree with this in principle - but I'm having trouble imagining computing completely without data structures (and am reading 'Early History of Smalltalk' to see if it clicks.)


You need to have "things that can answer questions". I'd like to get the "right answer" when I ask a machine for someone's date of birth, and similarly I'd like to get the right answer when I ask for their age. It's quite reasonable that the syntax in English is the same.

? Alan's DOB

? Alan's age

Here "?" is a whole computer. We don't know what it will do to answer these questions. One thing is for sure: we are talking to a -process- not a data structure! And we can also be sure that to answer the second it will have to do the first, it will have to ask another process for the current date and time, and it will have to do a computation to provide the correct answer.

The form of the result could be something static, but possibly something more useful would be to have the result also be a process that will always tell me "Alan's age" (in other words more like a spreadsheet cell (which is also not "data" but a process)).
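
(A throwaway Python sketch of that last point, purely illustrative and with invented names: the thing you get back for "age" is itself a little process that consults a date-of-birth process and a clock process every time it is asked, rather than a stored number.)

    # Illustrative sketch: the answer to "age" is a process, not a stored datum.
    from datetime import date

    class DOB:
        def __init__(self, d):
            self._d = d                       # guarded inside; no public setter
        def ask(self):
            return self._d

    class Age:
        def __init__(self, dob_process, clock=date.today):
            self._dob = dob_process
            self._clock = clock               # "what is today?" is itself another process
        def ask(self):                        # recomputed on every question, like a spreadsheet cell
            today, born = self._clock(), self._dob.ask()
            return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

    age = Age(DOB(date(1970, 1, 1)))          # placeholder date, purely for illustration
    print(age.ask())                          # always the current answer, never a cached one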

If you work through a variety of examples, you will (a) discover that questions are quite independent of the idea of data, and (b) that processes are the big idea -- it's just that some of them change faster or slower than others.

Add in a tidy mind, and you start wanting languages and computing to deal with processes, consistency, inter-relations, and a whole host of things that are far beyond data (yet can trivially simulate the idea in the few cases it's useful).

On the flip side, you don't want to let just anybody change my date of birth willy nilly with the equivalent of a stroke of a pen. And that goes for most answers to most questions. Changes need to be surrounded by processes that protect them, allow them to be rolled back, prevent them from being ambiguous, etc.

This is quite easy stuff, but you have to start with the larger ideas, not with weak religious holdovers from the 50s (or even from the extensional way math thinks via set theory).


Thank you for the elaboration!

(And for anyone else reading this thread I found an old message along the same topic: http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-...)

I'm thinking along these lines now: decompose systems along lines of 'meaning', not data structures (data structures add zero meaning and are a kind of 'premature materialization'), design messages first, late bind everything so you have the most options available for implementation details, etc.

The other thing I'm thinking is why have only one way to implement the internal process of an object/process? There are often multiple ways to accomplish goals, so allow multiple alternative strategies for responding to the same message and let objects choose one eventually.
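
A small Python sketch of that shape (names invented): several strategies sit behind one message, the sender never knows which one is in play, and the choice can be changed after the system is up.

    # Sketch: one message, several interchangeable strategies behind it.
    class Persister:
        def __init__(self):
            self._strategies = {"in memory": self._keep, "append to log": self._log_it}
            self._choice = "in memory"        # late-bound; can change while running
            self._kept, self._log = [], []

        def receive(self, message, *args):
            if message == "persist":
                return self._strategies[self._choice](*args)
            if message == "use strategy":
                self._choice = args[0]
                return "ok"
            return "message not understood"

        def _keep(self, item):
            self._kept.append(item)
            return "kept in memory"

        def _log_it(self, item):
            self._log.append(item)
            return "appended to log"

    p = Persister()
    print(p.receive("persist", "x"))                  # kept in memory
    p.receive("use strategy", "append to log")
    print(p.receive("persist", "y"))                  # appended to log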

Edit: wanted to add that IIUC, 'messages' doesn't imply an implementation strategy - i.e. they are messages in the world of 'meaning'. In the world of implementation, they may just disappear (inlined/fused) or not (physical messages across a network), depending on how the objects have materialized at a specific point in time.


You've got the right idea. If you think about how services on the internet operate, they don't have one way of implementing their internal processes, either.

I finally got the idea, after listening to Kay for a while, that even what we call operating systems should be objects (though, as Xerox PARC demonstrated many years ago, there is a good argument to be made that what we call operating systems need a lot of rethinking in this same regard, i.e. "What we call X should be objects."). It occurred to me recently that we have already been doing that via VMs (through packages such as VMWare and VirtualBox), though in pretty limited ways.

Incidentally, just yesterday, I answered a question from a teacher on Quora who wanted advice on how to teach classes and methods to a student in Python, basically saying that, "The student is having trouble with the concepts of classes and methods. How can I teach those ideas without the other OOP concepts?" (https://www.quora.com/Can-I-explain-classes-and-objects-with...) It prompted me to turn the question around, and really try to communicate, "You can teach OOP by talking about relationships between systems, and semantic messaging." If they want to get into classes and methods later, as a way of representing those concepts, they can. What came to mind as a way around the class/method construct was a visual environment in which the student could experience the idea of different systems communicating through common interfaces, and so I proposed EToys as an alternative to teaching these ideas in Python. I also put up one of Kay's presentations on Sketchpad, demonstrating the idea of masters and instances (which you could analogize to classes and object instances).

I felt an urge, though, to say something much more, to say, "You know what? Don't worry about classes and methods. That's such a tiny concept. Get the student studying different kinds of systems, and the ways they interact, and make larger things happen," but I could tell by the question that, dealing with the situation at hand, the class was nothing close to being about that. It was a programming class, and the task was to teach the student OOP as it's been conceived in Python (or perhaps so-called "OOP". I don't know how it conceives of the concept. I haven't worked in it), and to do it quickly (the teacher said they were running out of time).

The question has gotten me thinking for the first time that introducing people to abstract concepts first is not the right way to go, because by going that route, one's conceptions of what's possible with the idea become so small that it's no wonder it becomes a religion, and it's no wonder our systems don't scale. As Kay keeps saying, you can't scale with a small conception of things, because you end up assuming (locking down) so much, it's impossible for its morphology to expand as you realize new system needs and ways of interacting. The reason for this is that programming is really about, in its strongest conception, modeling what we know and understand about systems. If we know very little about systems that already exist, their strengths and weaknesses, our conception of how semantic connections are made between things is going to be very limited as well, because we don't know what we don't know about systems, if we haven't examined them (and most people haven't). The process of programming, and mastering it, makes it easy enough to tempt us to think we know it, because look, once we get good enough to make some interesting things happen (to us), we realize it offers us facilities for making semantic connections between things all day long. And look, we can impress people with that ability, and be rewarded for it, because look, I used it to solve a problem that someone had today. That's all one needs, right?...

Kay has said this a couple different ways. One was in "The Early History of Smalltalk". He asked the question, "Should we even teach programming?" Another is an argument he's made in a few of his presentations: Mathematics without science is dangerous.


(also, for the benefit of anyone else reading this thread, the following section written in 1993 talks more about these ideas: http://worrydream.com/EarlyHistoryOfSmalltalk#oostyle)


> The main assumption here is my thoughts are organized as graphs

Herbert Simon talks a lot about this in Sciences of the Artificial. It turns out most of human thinking is just lists. I'm not sure if that still stands in the field of psychology (my version of the book is pretty old, from the 80s).

There's a good book (a little dense though) that might help with the more abstract thinking in the direction you're going. It's called "Human Machine Reconfigurations" and it's one of the more clever books I've come across on human machine interaction, written by an anthropologist/sociologist who also worked at PARC. So often the human part is what gets lost here.


Thanks for the references! Sciences of the Artificial is already on my list.

> The main assumption here is my thoughts are organized as graphs

I realize this would be better phrased as 'the information I'm trying to communicate is organized as graphs'.


I think your analogy is going in a better direction. I had the same idea after taking a look at Squeak for a while. What would need to be added to your analogy is a notion of design. You see, in Smalltalk, for example, your programming takes place in the messages. So, even the daemons (which, for the sake of argument, we could think of as analogues to objects), which would be the senders and receivers of messages, would be made out of the same stuff, not C/C++. So, this is a pretty dramatic departure from the way Unix operates. What this should suggest is that the semantic connection between objects is late-bound. Think of them as servers.

Secondly, in the typical Smalltalk implementation, there is still compilation, but it's incremental. You compile expressions and methods, not the whole body of source code for the whole system. What's really different about it is since the semantic actions are late-bound, you can even compile something while a thread is executing through the code you're compiling. So, you get nearly instant feedback on changes for free. Bret Victor's notion of programming environments blurs the axes you're talking about even more, so that you don't have to do two steps to see your change, while a thread is running (edit, then compile). You can see the effect of the change the moment you change an element, such as the upper bound of an iterative loop. To make it even more dynamic, he tied GUI elements (sliders) to such things as the loop parameters, so that you don't have to laboriously type the values to try them out. You can just change the slider, and see the effects of the change in a loop's range very quickly, such that it almost feels like you're using a tool-based design space, rather than programming.

I don't know how this was done in ST-78, but in ST-80, at least in accounts I've heard from people who've used versions of it, and in the version of Squeak I've used from time to time, the source code is not technically stored with the class object, though the system keeps references to the appropriate pieces of source code, and their revisions, mapping them to the classes in the system, so that when you tell the system you want to look at the source code for a class, it pulls up the appropriate version of the code in an editor.

Source code is stored in a separate file, and Smalltalk has a version control system that allows reviewing of source code edits, and reversion of changes (undo). The class object typically exists in the Smalltalk image as compiled code.

There are many things that are different about this vs. what you typically practice in CS, but addressing your point about data, in OOP, objects are supposed to take the place of data. In OOP, data contains its own semantics. It inverts the typical notion of procedures acting on data. Instead, data contains procedures. It's an active "live" part of the programming that you do. So, yes, data is persisted, along with its procedures.

A simple example in Smalltalk is: 2 + 2. If we analyze what's going on, the "2"'s are the pieces of data, the objects/servers, and "+" is used to reference a method in one of the data instances, but it doesn't stop there. The "2" objects communicate with each other to do the addition, getting the result: 4. As you can tell implicitly, "2 + 2" is also source code, to generate the semantic actions that generate the result.
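
A loose Python analogy for that point (this is not how Smalltalk is implemented, it's just the shape of the idea): the first "2" is sent "+" with the other "2" as an argument, and everything proceeds by further message sends.

    # Loose analogy: "2 + 2" as a message send to the first 2, not a built-in operation.
    class Num:
        def __init__(self, value):
            self._value = value
        def send(self, selector, *args):
            if selector == "+":
                other = args[0]
                return Num(self._value + other.send("value"))   # the two objects cooperate
            if selector == "value":
                return self._value
            if selector == "printString":
                return str(self._value)
            return "message not understood"

    four = Num(2).send("+", Num(2))
    print(four.send("printString"))    # 4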


"More or less" ... I can see that it's hard after decades of "data-centric" perspectives to think in terms of "computers" rather than "data", and about semantics rather than pragmatics. It's not "data contains procedures" but that objects are (a) semantically computers, and are impervious to attack from the outside (a computer has to let an attack happen via its own internal programming), and (b) what's inside can be anything that is able to deal with messages in a reasonable way. In Smalltalk, these are more objects (there are only objects in Smalltalk). The way the internals of typical Smalltalk objects are organized could be done better (we used several schemes, but all of them had variables and methods (which were also objects).

So "2" is not "data" in Smalltalk (unfortunately, it is in Java, etc.)

We had planned that the interior of objects should be an "address space of objects" because this is a better and more recursive way to do modularization, and to allow a different inter-viewing scheme (we eventually did some of this for the Etoys system for children about 20 years ago). But the physical computers at Parc were tiny, and the code we could run was tiny (the whole system in the Ted Nelson demo video was a little over 10,000 lines of code for everything). So we stayed with our top priority: to make a real and highly interactive system that was very comprehensive about what it and the user could do together.


Taking your feedback into consideration, I had the thought that it would be more accurate to talk about things like "2" in an OOP context as symbols with semantics (which provide meaning to them), not data, since "data" connotes more a collection of inputs/quantities, where we may be able to attach a meaning to it, or not, and that wasn't what I was going after. I was going after a relationship between information and semantics that can be associated with it, but trying to provide a transition point from the idea of data structures to the idea of objects, for someone just learning about OOP. Doing a sleight of hand may not do the trick.

My starting point was to use a very interesting concept, when I encountered it, in SICP, where it discussed using procedures to emulate data, and everything that can be done with it. It seemed to help explain for the first time what "code is data" meant. It illustrated the inversion I was talking about:

https://tekkie.wordpress.com/2010/07/05/sicp-what-is-meant-b...

"In 2.1.3 it questions our notion of “data”, though it starts from a higher level than most people would. It asserts that data can be thought of as this constructor and set of selectors, so long as a governing principle is applied which says a relationship must exist between the selectors such that the abstraction operates the same way as the actual concept would." It went on to illustrate how in Lisp/Scheme you could use functions to emulate operations like "cons", "car", and "cdr", completely in procedural space, without using data structures at all.

This is what I illustrate with "2 + 2", and such, that code is doing everything in this operation, in OOP. It's not a procedure applied to two operands, even though that's how it looks on the surface.


Yes, the SICP follows "simulate data" ideas much further back in the past, including the B5000 computer, and especially the OOP systems I did. But the big realization is that there are very few things that are helpful when they are passive, and the non-passive parts are the unique gift of computing. The question is not whether ideas from the past can be simulated (easy to see they can be if the building blocks are whole computers) but what do we "mean by 'meaning' "?

Good answers to this are out of the scope of HN, but we should be able to imagine -processes- moving forward in both our time and in simulated time that can answer our questions in a consistent way, and can be set to accomplish goals in a reasonable way.


I was using "data" in the spirit of a saying I heard many years ago in CS, that, "Data is code, and code is data." It seems that people in CS are still familiar with this phrase. I was focusing on the latter part of that phrase. I was trying to answer the question that I think is often implied once you start talking to people about real OOP, "What about data?" I almost don't like the term "data" when talking about this, because as you say, it gets one away from the focus on semantics, but whenever you're talking to people in the computing field, such as it is, I think this question is unavoidable, because people are used to thinking of code and information as separate, hence the notion of data structures. People need a way to translate in their minds between what they've done with information before, and what it can be. So, I used the term "data" to talk about "literal objects" (like "2", or other kinds of input), but I was using the description of "processors" (ie. computers), "containing procedures," which can also be thought of as "operators."

I think the idea of an "inversion" is quite apt, because as you've said before, the idea of data structures is that you have procedures acting on data. With real objects, you still have the same essential elements in programming, the same stuff to deal with, but the kinds of things programmers typically think about as "data" are objects/computers in OOP, with intrinsic semantics. So, you're still dealing with things like "2", just as procedures acting on data do, but instead of it being just a "dead" symbol, that can't do anything, "2" has semantics associated with an interface. It's a computer.


> We had planned that the interior of objects should be an "address space of objects" because this is a better and more recursive way to do modularization

Something that nags me in the back of my mind is that messages are not just any object, they always have the selector attached. Why not let objects handle any other object as a message? Is this what you mean by the above?

Thinking about the biological analogy (maybe taking it too far...): the system of cells is distinct from the system of proteins inside the cells and going up the layers we have the systems of creatures. So the way proteins interact is different from how cells interact, etc. but each system derives its distinct behaviors from the lower ones. Also, the messages are typically not the entities themselves but other lower level stuff (cells communicate using signals that are not cells). So in a large scale OO system we might see layers of objects emerge. Or maybe we need a new model here, not sure.


Take a look at the first implemented Smalltalk (-72). It implemented objects internally as a "receive the message" mechanism -- a kind of quick parser -- and didn't have dedicated selectors. (You can find "The Early History of Smalltalk" via Google to see more.)

This made the first Smalltalk "automatically extensible" in the dimensions of form, meaning, and pragmatics.

When Xerox didn't come through with a replacement for the Alto we (and others at Parc) had to optimize for the next phases, and this led to the compromise of Smalltalk-76 (and the succeeding Smalltalks). Dan Ingalls chose the most common patterns that had proved useful and made a fixed syntax that still allowed some extension via keywords. This also eliminated an ambiguity problem, and the whole thing on the same machine was about 180 times faster.

I like your biological thinking. As a former molecular biologist I was aware of the vast many orders of magnitude differences in scale between biology and computing. (A typical mammalian cell will have billions of molecules, etc. A typical human will have 10 Trillion cells with their own DNA and many more in terms of microbes, etc.) What I chose was the "Cambrian Revolution Recursively": that cells could work together in larger architecture from biology, and that you can make the interiors of things at the same organization of the wholes in computing because of references -- you don't have to copy. So just "everything made from cells, including cells", and messages made from cells, etc.

Some ideas you might find interesting are in an article I wrote in 1984 -- called "Computer Software" -- for a special issue of Scientific American on "Software". This talks about the subject in general, and looks to the possibility of "tissue programming" etc.


I should have mentioned a few other things for the later Smalltalks. First, selectors are just objects. Second, you could use the automatic "message not understood" mechanism to field an unrecognized object. I think I'd do this by adding a method called "any" and letting it take care of arbitrary unknown objects ...


> adding a method called "any"

Right, I understand there are ways to do this with methods but my question was more about the purity aspect, which you already addressed above.


A selector is an object -- so that is pure -- and its use is a convention of the messaging, and the message itself is one object, that is an instance of Class message.

What's fun is that every Smalltalk contained the tools to make their successors while still running themselves. In other words, we can modify pretty much anything in Smalltalk on the fly if we choose to dip into the "meta" parts of it, which are also running. In Smalltalk-72, a message send was just a "notify" to the receiver that there was a message, plus a reference to the whole message. The receiver did the actual work of looking at it, interpreting it, etc.

This is quite possible to make happen in the more modern Smalltalks, and would even be an interesting exercise for deep Smalltalkers.


> A selector is an object -- so that is pure -- and its use is a convention of the messaging

The selector 'convention' is hard-coded in the syntax - this appears to elevate selector-based messaging over other kinds. But now I'm rethinking this differently - i.e. selectors aren't part of the essence, but a specific choice that could be replaced (if we find something better.)


I can't remember if I've brought this up already in this thread, but if you want to "kick the tires" on ST-72, Dan Ingalls has an implementation of it up on the web. It's running off of a real ST-72 image. I wrote about it at https://tekkie.wordpress.com/2014/02/19/encountering-smallta...

I include a link to it, and described how you can use it (to the best of my knowledge), though my description was only current to the time that I wrote it. Looking at it again, Ingalls has obviously updated the emulation.

The nice thing about this version is it includes the original tutorial documentation, written by Kay and Adele Goldberg, so you can download that, and learn how to use it. I found that I couldn't do everything described in the documentation. Some parts of the implementation seemed broken, particularly the class editor, which was unfortunate, and some attempts to use code that detected events from the mouse didn't work. However, you can write classes from the command line (ST-72 was largely a command-line environment, on a graphical display, so it was possible to draw graphics).

If you take a look at it, you will see a strong resemblance to Lisp, if you're familiar with that, in terms of the concepts and conventions they were using. As Kay said in "The Early History of Smalltalk," he was trying to improve on Lisp's use of special forms. I found through using it that his notion of classes, from a Lisp perspective, existed in a nether world between functions and macros. A class could look just like a Lisp function, but if you add parsing behavior, it starts behaving more like a macro, parsing through its arguments, and generating new code to be executed by other classes.

The idea of selectors is still kind of there, informally. It's just that it takes a form that's more like a COND construct in Lisp. So, rather than each selector having its own scope, as in later versions, all of them exist in an environment that exists in the scope of the class/instance.

After using it for a while, I could see why they went to a selector model of message receipt, because the iconic language used in ST-72 allowed you to express a lot in a very small space, but I found that you could make the logic so complex it was hard to keep track of what was going on, especially when it got recursive.


> I wrote about it at https://tekkie.wordpress.com/2014/02/19/encountering-smallta....

Sweet, thanks! There's also the ST-78 system at https://lively-web.org/users/bert/Smalltalk-78.html

> existed in a nether world between functions and macros

Macros are just functions that operate on functions at 'read-time', from my POV. So if you eliminate the distinction between read-time and run-time, they're the same.

> It's just that it takes a form that's more like a COND construct in Lisp.

And even COND isn't special, it's just represented as messaging in Smalltalk, right?

> you could make the logic so complex it was hard to keep track of what was going on

Interesting, I see.


"And even COND isn't special, it's just represented as messaging in Smalltalk, right?"

Right. What I meant was that the parsing would begin with "eyeball" (ST-72 was an iconic language, so you would get a character that looked like an eye viewed sideways), and then everything after that in the line was a message to "eyeball," talking about how you wanted to parse the stream--what patterns you were looking for--and if the patterns matched, what messages you wanted to pass to other objects. That was your "selector" and method. What felt weird about it, after working in Squeak for a while, is these two concepts were combined together into "blobs" of symbolic code. You would have a series of these "messages to eyeball" inside a class. Those were your methods.

The reason I said it was similar to COND was it had a similar format: A series of expressions saying, "Conditions I'm looking for," and "actions to take if conditions are met." It was also similar in the sense that often that's all that would be in a class, in the same way that in Lisp, a function is often just made up of a COND (unless you end up using a PROG instead, which I consider rather like an abomination in the language).

In ST-72, there's one form of conditional that uses a symbol like "implies" in math (can't represent it here, I don't think), and another where you can be verbose, saying in code, "if a = b then do some stuff." But what actually happens is "if" is a class, and everything else ("a = b then do some stuff") is a message to it. Of course, you could create a conditional in any form you want.

In ST-80, they got rid of the "if" keyword altogether (at least in a "standard" system), and just started with a boolean expression, sending it a message.

a = b ifTrue: [<do-one-thing>] ifFalse: [<do-something-else>].

They introduced lambdas (the parts in []'s) as objects, which brought some of the semantics "outside of the class" (when viewed from an ST-72 perspective). It seems to me that presents some problems to its OOP concept, because the receiver is not able to have complete control over the meaning of the message. Some of that meaning is determined by partitioned "blocks" (lambdas) that the receiver can't parse (at least I don't think so). My understanding is all it can do with them is either pass parameter(s) to the blocks, executing them, or ignore them.

One of the big a-ha moments I had in Smalltalk was that you can create whatever control structures you want. The same goes for Lisp. This is something you don't get in most other languages. So, a temptation for me, working in Lisp, has been to spend time using that to work at trying to make code more expressive, rather than verbose. A positive aspect of that has been that it's gotten me to think about "meanings of meaning" in small doses. It creates the appearance to outsiders, though, that I seem to be progressing on a problem very slowly. Rather than just accepting what's there and using it to solve some end goal, which I could easily do, I try to build up from the base that's there to what I want, in terms of expression. What I have just barely scratched the surface of is I also need to do that in terms of structure--what you have been talking about here.
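
A hedged Python analogy for "you can create whatever control structures you want" when booleans are objects (the class names are invented; Smalltalk spells the message ifTrue:ifFalse:): the boolean is handed both branches as blocks and decides which one to run, so you could invent variants of your own the same way.

    # Analogy: booleans as objects that are sent the branches, so "if" is not
    # special syntax -- it's a message, and you could define your own variants.
    class STTrue:
        def if_true_if_false(self, true_block, false_block):
            return true_block()                  # only the chosen block ever runs

    class STFalse:
        def if_true_if_false(self, true_block, false_block):
            return false_block()

    def equals(a, b):
        return STTrue() if a == b else STFalse()

    print(equals(2, 2).if_true_if_false(lambda: "same", lambda: "different"))   # same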


It's an extensible language with a meta system so you can make each and every level of it do what you want. And, as I mentioned, the first version of Smalltalk (-72) did not have a convention to use a selector. The later Smalltalks wound up with the convention because using "keywords" to make the messages more readable for humans was used a lot in Smalltalk-72.


I've seen some of Bret Victor's talks but your description made it click!

> the source code is not technically stored with the class object

This is more of an optimization choice, perhaps? Given the link is maintained, it might be OK to say the source code and the bytecode form are two forms representing the same object?


Nothing is technically stored with the class object (everything is made from objects related by object-references). Semantically everything is "together". Pragmatically, things are where they need to be for particular implementations. In the early versions of Smalltalk on small machines it was convenient to cache the code in a separate file (but also every object -- e.g. in Smalltalk-76) was automatically swappable to the disk -- just another part of the pragmatics of making a very comprehensive system run on a tiny piece of hardware.


"This is more of an optimization choice, perhaps? Given the link is maintained, it might be OK to say the source code and the bytecode form are two forms representing the same object?"

Sure. :)


9:05-12:30 I'm sold. A mindblowing vision for what the universality of computation can truly do... Thank you.


Actually, just what could be done 40 years ago. Much more can be (and should be) done today. That's the biggest point.


It is a privilege of modern media, like HN, that we get to interact with someone like Alan Kay. Just 30 years ago interacting with a person of great importance to a field only happened if you were lucky enough to work with them or go to school where they taught, which is just a lottery. And it's not just Alan Kay, there are many amazing visionaries who stop by HN. It truly is a privilege that we should not take for granted.


One question for Alan, if he's reading on (and of course thanks for decades of great work and inspiration).

Do you think that your criticism against passive consumption etc might be incompatible not just with how things like modern computers and platforms are designed, but also with how most people are wired to behave?

That is, that it's not just that our tools constrain us to being passive consumers, but that we also prefer, promote, and seek tools that make it easier for us to be passive consumers, because most of us would rather not be bothered with creating?

If this is true, this might be (a) an inherent condition of man everywhere, or (b) something having to do with our general culture (above and beyond our tools).

I think that by talking about schools etc you alluded to (b), but could (a) also be the case?

Edit: I see that later on here you commented "We are set up genetically to learn the environment/culture around us. If we have media that seems to our nervous systems as an environment, we will try to learn those ways of thinking and doing, and even our conception of reality." which kind of answers my question.


One perspective on this is to think about your definition of "civilization" and compare it with both "human universals" and what we know about the general lives of hunter-gatherers and the extent that we can use this to guess about the several hundred thousand years before agriculture.

Many of the things on my list for "civilization" are not directly in our genes or traditional cultures: reading and writing, deductive mathematics, empirical science of models, equal rights, representative governments, and many more. It's not that we do these well, or even willingly, but learning how to do them has made a very large difference. We can think of "civilization" not as a state of being that we've achieved, but as societies that are trying to become more civilized (including getting better ideas about what that should mean).

Most of the parts of civilization seem to be relatively recent inventions, and because the inventions are a bit more distant from our genetic and cultural normals, most of them are more difficult to learn. For example, as far as history can tell, schools were invented to concentrate the teaching of writing and reading, and they have been the vehicle for the more difficult learning of some of the other inventions.

And, sure, from history (and even from looking around) we can see that a very high percentage of people would be very happy with servants, even slaves, whether human or technological.

In Tom Paine's argument against the natural-seeming monarchy in "Common Sense" he says don't worry about what seems natural, but try to understand what will work best. His great line is "Instead of having the King be the Law, we can have the Law be the King". In other words, we can design a better society than nature gives rise to, and we can learn to be citizens of that society through learning.

In other contexts I've pointed out that "user friendly" may not always be "friendly". For example, the chore of learning to read fluently is tough for many children, but what's important beyond being able to read afterwards is that the learning of this skill has also forced other skills to be learned that bring forth different and stronger thinking processes.

Marketing, especially for consumers, is aimed at what people -want-, but real education has to be aimed at what people really -need-. Since people often don't want what they need, this creates a lot of tension, and makes what to do with early schooling a problem of rights as well as responsibilities.

One way to home in on what to do is to -- just for a while -- think only about what citizens of the 21st century need to have between their ears to not just get to the 22nd century, but to get there in better shape than we are now. Children born this year will be 83 in 2100. What will be their fate and the fate of their children?

If people cannot imagine that the situation they are in had to be invented and worked at and made, they will have a hard time seeing that they have to learn how to work the garden as they become adults. If they grow up thinking as hunter-gatherers they can only imagine making use of what is around them, and moving on after they've exhausted it. (But there is no place to move on to for the human race -- larger thoughts and views have to be learned as part of schooling and growing up.)


Thank you for the reply.

I think you hit the nail on the head -- even though making the distinction between what people "want" and "deserve" is not very popular in the US (or where I live and anywhere else for that matter) at the moment (or even since the 70s or so).

Despite all the self-improvement books and seminars and everything, the norm is that people should just be given what they want, and not anything harder that could potentially make them better and more creative. In fact a person would be labelled "elitist" to dare suggest anything else -- e.g. that universities are not just vocational schools for getting the skills for one's future job, or that technology should not just be about what Joe Public is capable of mastering, but about advancing Joe and Jane public.


It's not about "deserve" but about what people "need" to be part of a civilization and help sustain and grow it. We tend to lose sight of what is not happening right in front of us and take the larger surround as a given. (But it had to be created from scratch, and not just made but tended and sustained and grown. This is one of the main original reasons for having schools in the US: to help create enough sophistication so that thinking, arguing and voting can be done to make progress rather than just for individual gain.)

The simplest way to think about all this is through the "Human Universals" ideas (more nicely put than "Lord of the Flies"!). Our natural tendencies via genetics and some remnants in our common-sense culture are to be oral, social, tribal, to act like hunter-gatherers, etc. (With reference to the last idea, consider the general behavior of businesses, of "shopping", etc.)

It's not about "creativity" per se -- this has been misunderstood. It's about helping children become part of the "large conversation" beyond just vocational training. (Consider adding becoming a citizen, responsibilities to growing the next generation (whether or not you have children), gathering and participating in "richness of life", etc.)


"In fact a person would be labelled "elitist" to dare suggest anything else -- e.g. that universities are not just vocational schools for getting the skills for one's future job, or that technology should not just be about what Joe Public is capable of mastering, but about advancing Joe and Jane public."

I've run into this very thing. The irony is that advocating for literacy and for understanding the abstract inventions of our civilization is the antithesis of elitism. Think about what would take place in a society where nobody who has the ability to influence events knows these things; where people do not know how to build their own analysis of events, to argue well, to invent new social structures that benefit communities, or to be active agents in negotiating their society's political destiny, keeping in mind the invented conceptual "guardrails" that maintain things like rule through law and equal rights. In a society like that, elitism is all you have! Where the thought-inventions of our modern civilization are maintained, there is more possibility of creating an egalitarian society, more power-sharing in certain contexts, and, I think the evidence suggests, more social progress.

This doesn't get enough credit these days, but I think the historical evidence suggests certain forms of religion have helped promote some of these civilizational inventions as virtues, on a widespread basis, supporting the secular education that spread them.

My experience has been that the reason people may put the label "elitist" on it is that learning these ideas is hard, and everybody knows that. So there's been an understood implication that only a small number of people will be able to know most of them, and that if most of the attention is devoted to inculcating these inventions in a relatively small number of people, then that will create an elite who will use their skills to the disadvantage of everyone else, who did not receive this education. What these people prefer to use instead is a lowest-common-denominator notion of egalitarianism, which doesn't promote egalitarian virtues in the long run, because it attenuates the possibility of maintaining a society where people can enact them. If we forget the hard stuff, because it's hard, we won't get the benefits these ideas offer over time. We will revert to a pre-modern state, because all the power to maintain our civilization is in these ideas. Keeping them as conventions/traditions is not good enough, because, as we see, conventions get challenged via a pragmatic, anti-tradition impulse with each new generation.


I think more needs to be done to understand why some really hard things to do -- like hitting a baseball, shooting 3 pointers, etc. -- are not considered elitist, and have literally millions of children spending many hours practicing them. (And note that it is hard to find complaints about the "unfairness" of sports teams trying to find and hire the very best players.)

And, in the end, very few get really good (I think I saw somewhere that there are only about 70,000 professional athletes in the US -- vs e.g. about 900,000 PhDs in the sciences, math, engineering and medicine.)

Two ideas about this are that (a) these are activities in which the basic act can be seen clearly from the first, and (b) are already part of the larger culture. There are levels that can be seen to be inclusive starting with modest skills (cooking is another such).

Music is interesting to look at. If the music is simple, we find the singing of songs with lyrics vastly more popular than instrumentals on the pop charts. But we also find "guitar gods" at the next and still high level of popularity. As the music gets more complex and requires more learning to be able to hear what is going on, the popularity drops off (and this has happened in many pop genres over the last 100 years or so). A lot of pop culture (I think) comes from teenagers wanting their place in the sun, and quickly. Finding a genre that's doable and can be a proclamation of identity -- akin to trying to be distinctive with clothing or hair cuts -- can be much easier than tackling a developed skill.

I think a very large problem for the learning of both science and math is just how invisible are their processes, especially in schools. The wonderful PSSC physics curriculum from the 50s (https://en.wikipedia.org/wiki/Physical_Science_Study_Committ...) bridged that gap with many short films showing scientists doing their thing on topics and using methods that were completely understandable right from the first minutes. This made quite a difference to many high school students.

I think I said somewhere in the blather of the interview that the easiest way to deal with the problems of teaching reading is to revert back to an oral society, especially if schools increasingly give in to what students expect from their weak uses of media. A talk I gave some years ago showed an alternating title line: between "The best way to predict the future is to invent it" and "The easiest way to predict the future is to prevent it". The latter is more and more popular these days.


> I am calling for the field to have a larger vision of civilization and how mass media and our tools contribute or detract from it

That would be nice. Unfortunately outside of Kay's rarefied circles, "the field" is close to being a Darwinian selection machine for the worst (most unimaginative, trivial, greedy, pusillanimous) traits. I've ducked in and out of various IT and development roles since the early 2000s. Nearly every one of the best people I've known in that time has left the field for others where they felt they could develop more of their broad humane selves. Some have retreated to academia, others moved altogether (nursing seems to be a theme, oddly). Those of the best that didn't leave, are embittered and cynical. For all the febrile contrepreneur-speak, the overall picture is pretty bleak.


See the comment by scroot above. It meshes with yours, but recognizes that the underlying culture has gone in directions that are undermining real progress and value of life. This is why I've put a fair amount of time along with many others in trying to get elementary education to a better place. I think we'll have to grow a few generations of "civilization carriers" to pull out. (And, yes, I've been staggered at the low place that much of computing has gotten to, but it is reflecting much larger cultural problems.)


Yes, wholeheartedly agree with this being part of a wider cultural current. It feels like we're at a low point from which things can only improve, but I have thought that before, only to see the decline continue.

My comment was a mere cri de coeur from the bottom ranks of the industry hierarchy. Reading the article and comments here was a slight shock. I had almost forgotten that these high-minded humanistic strands had once been a prominent part of the tech world. I hope they re-emerge. Thanks for your contributions.


Take a look at how the US responded to WWI re "free speech" etc, and you'll see that we haven't hit bottom yet. But we have fulfilled H.L. Mencken's 1919 prophecy:

“As democracy is perfected, the office of president represents, more and more closely, the inner soul of the people. On some great and glorious day the plain folks of the land will reach their heart's desire at last and the White House will be adorned by a downright moron.” H.L. Mencken, 1919


Hi Alan, Here is my latest attempt to realize some of those dreams in a browser using JavaScript (not in the Lively Kernel or Amber way, but a more browser-native way even as some elegance is lost): https://github.com/pdfernhout/Twirlip7

You can try it here: http://rawgit.com/pdfernhout/Twirlip7/master/src/ui/twirlip7...

Or also here on a shared experimental server (with some more context on a click-through cover page): http://twirlip.net/

Anyway, still plugging along on moving the dream forward in my spare time. Thanks for sharing so much good software and humane inspirations with the world. Nice to meet you in person at OOPSLA 1997. Cheers!


Dear Alan: Your views & ideas expressed in this interview are spot-on. Gives us food for thought for the next 50 years. Thank you!


Yes. But even if the field adopted a larger/better vision, you still have a bigger problem: most smart people are attracted to mediocrity. Example: the LFTR (i.e. molten salt breeder) people have been trying to get their stuff adopted, but most smart people (I know of) are attracted to micro-optimizing windmills and outlawing coal. Even smart people who want to take on a larger vision are still able to snatch mediocrity from the jaws of success.

It may be counter-intuitive, but Hans Hoppe's views on how to improve civilization would lead to a fulfillment of your goals. It's not a coincidence that Japan, Germany, the US, and Italy were centralized in the 1800s and became involved in world wars in the 1900s. Medieval Europe had no central empire, yet was able to catch up and surpass China and Japan, with the Renaissance a culmination. A lack of mega-states led to more diversity, cooperative competition, and a stable global civilization through variety and choice. If we are to repeat past success and get more Faradays and Teslas, micro states or no states are necessary.

The EU is an example of this: a political wing of NATO. First they get you by passing regulations protecting the "consumer", then the wars and corruption come about. Goldman Sachs made money when the wealthier states wanted to pay off the debts of its clients (Greece). This is no different from when Clinton/Congress wanted to bail out Mexico (which owed lots of money to US banks like Citibank/Citigroup). Singapore and Liechtenstein both have a powerful de facto monarchy in control, but their small land size and population limit the destructive desires of the state. Yet look at the diversity of race/opinions/demographics that exists in such a small area.

Until the fetish for mega-states subsides, we are doomed to slow adoption of knowledge. However, I am not a good communicator. Might I suggest this book for a better scholarly discussion: https://mises.org/library/economics-and-ethics-private-prope...


My goals are a lot bigger than Hoppe's (and I fervently hope for something much better than his vision).


Could you explain the "are a lot bigger" part?

I get the sense Hoppe is repulsive to you, but I am very open-minded to hearing a counter-argument and find out what I got wrong about him. For example, are you against a complete free-market in schooling? Or do you want a voter-based approach on the city/state/federal levels?


My perspective is that our planet is small but there are lots of people. "Biology is variation" so there are wide distributions of properties. I like the idea of "Equal Rights" (and also think that it's necessary). The systems that are critical, including the human systems, are non-linear and intertwined. The combination of these is that the human society needs to find - invent - how to organize itself.

This is very much in the spirit of the thinking that led to the American Constitution, and Tom Paine's "Instead of having the King be the Law, we can have the Law be the King".

We need solutions that allow further thinking and design to be done.

One way to look at this is to ask questions about "human nature" and to what extent does it need to be followed and to what extent should we try to teach (even train) the children to act in "designed ways" -- for example, not trying to take revenge.

The interesting and difficult parts of goals like these are that we have humans in the mix while trying to come up with better societal designs. For example, even the best forms of socialism have been "gamed" incessantly, and often fatally, by humans only interested in "harvesting" for themselves.

This is one of many reasons why well-thought-out versions of social reform have often been destroyed when the implementation phase is started: "Everyone loves Change, except for the Change part!"


I can not argue with any of your vision. It's based on sound thought, knowledge, and scholarly research.

However, I disagree with your examples, specifically the Constitution. I apologize for disagreeing with you, but I still don't see how any form of state can function with a population in the double digits of millions. Even if 90%+ of the population were educated in Montessori-based systems.

It goes against the basic economic principles of "incentives". The state is still a monopoly (in the classical sense) of violence, even under a Constitution. The state is also the sole interpreter of its own laws.

Even under a Constitutional Republic, politicians have become the new monarchy, priesthood, and witch doctors. Just as Christians find a backdoor for polytheism, so do the citizens at the voting booth and in the media, worshipping kings and queens.

So the state is still the king. We just play musical chairs at each election. In fact, we even have fewer checks/balances on power today than the British Empire had under American Colonial rule. The US government has absorbed private institutions that could check its power: it subsidizes churches, subsidizes and regulates private schools, the Ivy Leagues have become an extension of the state, and the US has become the #1 employer of scientists, engineers, lawyers, etc. This is a combo of socialism and fascism lite that continues to grow with each election and generation, albeit slowly.

When the Law is King, people still rely on politicians to interpret and carry out the Law. They will defend "their" neo-king/queen (i.e. favored politician). Previously, they would have been suspicious of unelected officials. That suspicion was a de facto check/balance on power. But not the sole check/balance.

Example: Obama (and Congress!) passed legislation to support Neo-Nazis in the Ukraine. https://www.stpete4peace.org/Ukraine https://www.thenation.com/article/congress-has-removed-a-ban...

Where was the outcry? Trump is bad. But Congress and Obama are just as bad. These are the politicians people want in power. These are the crimes people overlook and ignore. This is just one more example of how the voters let senators and presidents get away with crimes. It is easy to expose Trump as a shady/scheming politician and crooked entrepreneur (e.g. Trump Uni.). However, politicians like Bill Clinton, Reagan, etc. get away with murder and Congress is fine with it. So too are the voters. The Constitution has not prevented a global military empire.

Look at how the Constitution was promoted: with G. Washington's celebrity endorsement. The US was thus created partly by a militia leader who used warfare to overthrow a government and won wars. (Losing generals are unpopular.)

In practice, the Law was never fully in the hands of the King. The King/monarchy had checks and balances going back to the medieval period: the various religions and churches (plural), guilds, Parliament, and so on.

The Constitution has given people a false sense of ownership and protection and lowered their distrust of the state. This is one of the reasons why parents love government schooling: they can blame tax-cheats for why Johnny can't read.

"Who cares if the Dept. of Edu. is unconstitutional? The ends justify the means."

Canada, Australia, and other states did not overthrow British rule, yet they still became multi-cultural and prosperous... without a violent Revolution that led to war widows, orphans and high taxes.

This is why I disagree with the idea of a state that would let politicians (including monarchs or dictators) have access to lots of money, weapons, etc. I've seen people much better educated and with high IQ ignore the crimes of entire political parties like Republicans, Democrats, and celebs like Lenin, Che, Mao, etc.

In other words: the Constitution, from my p.o.v., promoted political centralization and was a pseudo-absolute monarchy in disguise. Nobles replace the absolute monarch. So I agree with your vision, but I just disagree with political centralization.

(Sidenote: The Constitution was passed despite the delegates not having authority to overthrow the Articles of Confederation. The Constitution expanded the state's power, did not solve the underlying issues with the Articles of Confederation, and set a bad precedent of upper-class members of the establishment being able to expand state power to solve problems at the expense of freedom. )

Thank you again for reading and putting up with this. I realize we disagree on a few things, but your patience is appreciated.


I'll avoid trying to reply to this. The Roman poet Juvenal quipped "But who will guard the guardians?" referring to one of the main problems of any republic. Plato had one suggested solution, and the US founders had another. Today we have something quite different than either had imagined.

One key question for "civilization" has usually centered around the extent to which enough children can learn to reach beyond their genes to embrace ideas and behaviors that have been invented for the better.

Another key question revolves around the trade-offs between individual choices vs "smartest choices". This reflects the distribution of talents, outlooks, skills, knowledge, etc in any population. And also the distribution of what various people need to "feel whole". (These are often at odds with larger organizations of societies.)


I think I'm beginning to understand. Thanks again for the replies. I'm going to have to think a lot about what you have written. (My apologies if I kept misinterpreting you and going off on the wrong tangents.)


When Neil Postman was a grad student he followed Marshall McLuhan around for a few months. One thing he noted was that McLuhan -- when argued with or when asked a question -- never directly replied, but just came out with another one of his zinger "koans". Neil said he finally realized that McLuhan was not concerned about whether people were agreeing with him or even understanding him, but was most aimed at getting them to think at all!

I've never been able to pull this off, but McLuhan had a real point. Socrates had the idea (via Plato) that there "was truth" and careful thinking would get everyone to the same place. (Much of science has this assumption, if you throw in a lot of experimentation and debugging as part of the careful thinking.) In any case, a reasonable explanation in science is not in the form of sentences.

One of the key ideas here is that modern understanding is a lot more than changing from one set of sentences to another. (This is a huge problem for humans because for tens of thousands of years and more there were no significant differences between our models and our sentences.) Now some of our models can't be reasonably represented in sentences, but for many cases we still have to use sentences to point at the models (or don't even use the models at all).

To me, the consequence of all this is that most of the work needed to be done on thinking about important difficult problems is not primarily "logic" (in the sense of dealing with premises, operations, and inferences) but "extra-logical" (trying to understand contexts and boundaries and models and tests before trying to do anything like classical thinking).

I often try to point out in talks that modern thinking is "not primarily logical" and this is what I'm driving at.


> modern thinking is "not primarily logical" and this is what I'm driving at.

Is that because "logical" thinking makes sense in one context, but fails in a better context? (ie. Pink vs blue: Change the context, understand it, and then later the knowledge is found?)


Yes, it's not the apparent logic in the operations that counts but the choice of definitions (and these include the definitions of the operations). I wrote a paper for the "Mind-Body Conference" in 1975 that discussed this. The idea is that "reasonable things" are done within "stable neighborhoods of 'truth' " that can be thought of as regions whose boundaries are the definitions. Inside we pretend the definitions are true, while the larger view from above knows the neighborhood is arbitrary.

This is an old idea (e.g. Euclid). The "modern" part of it is that the definitions are not assumed to be true outside the neighborhood. This is simple and powerful because the results are larger worlds that can be compared to others and to phenomena and experiment (and without setting up dogmas and religions).

Because of the way human minds work, there will be tendencies to think the definitions are "actually true" (and so the logic inside the boundaries) if they and the conclusions are appealing. But the form of this knowledge helps keep us saner if we are diligent about drawing the maps and boundaries correctly.

Much of science has this character, and the model helps to understand what it means to "know" something scientifically. Science is a negotiation between "what's out there?" and what we can represent inside our heads via phenomena on the one hand and the "boundaries and neighborhoods" on the other. Einstein's nice line about "math vs. reality" hits it perfectly ("As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.").

Newton's Principia was a huge step along these (and many other lines). He completely separates out the math part in the first large parts of the work. And only then does he start looking to see how the math models map onto observed phenomena.

To say it again, Science is partly about being very careful about how the definitions map to "out there".

In terms of context, if you are aware that you are in contexts -- the first step! -- and aware and careful about the ones you are using -- the next steps! -- there is a chance that "reasonable thinking" might happen.


> Science is partly about being very careful about how the definitions map to "out there".

I keep tripping on this part. Definition as in understanding the boundary? As in trying to find the boundaries between better, perfect, and the impossible (ie sweet spot)?

> Because of the way human minds work, there will be tendencies to think the definitions are "actually true" (and so the logic inside the boundaries) if they and the conclusions are appealing.

Does this mean?: Humans are pre-disposed to a model of how the Universe works: Zeus's thunderbolts, witches, Saturn/Satan/Santa, Geo-centric, etc. Science is a set of guidelines to help prevent us from shoehorning the Universe into the inaccurate mental models we are pre-disposed to believing?

> while the larger view from above knows the neighborhood is arbitrary

Can the leap from Geocentric model to Kepler's work be an example of this? As in: Geocentric model becomes irrelevant with Kepler's discoveries and p.o.v.?


1. The definitions are the boundary -- these are what are used to make the interior. The analogy is to definitions (used to be called axioms) in math. Simple for math (because it is only about itself). Difficult for science because we can't just make up the definitions, we have to try to find ones that have some mappings to "out there".

A boundary for chemistry is the physics standard model. Within physics the model is pretty accurate but unsatisfying as far as knowledge. They would like to have a better boundary, and get the standard model (and the key constants) from it. But it makes a good boundary for chemistry to make excellent chemical models.

Similarly (oversimplifying here) chemistry makes a good boundary of definitions for molecular biology.

Note that in this scheme of thinking, the "knowledge and meanings" exist inside the boundary, but don't include the boundary.

One of the issues addressed in this approach is how to make progress in "knowing" without infinite regresses. Philosophically, it is a kind of pragmatism.

As I mentioned elsewhere, science is a negotiation between two different kinds of things not a set of truths. It has many things in common with mapping (and making good maps is a branch of science, and one of the real starts of real science).

2. We are predisposed to believe things. Bacon's notion of why we needed to invent a "new science" is to create a set of processes and heuristics that would help us deal with and get around to some extent "what's wrong with our brains".

As a young scientist, I got the warning that is given to most young scientists "Beware, you always find what you are looking for!"

Some of the interesting examples of good science turning into belief revolve around Newton and both Maxwell's Equations and the orbit of Mercury (neither are "Newtonian"). And if you look at the history you'll see that Newton was a lot less Newtonian than many of his followers.

And yes, we also seem to have some things that are easier to imagine than others -- gods, demons, witches, etc seem easy, but future floods etc seem hard.

3. Sure. E.g. if you think things have to be circles, then this will be part of your implicit context for thinking about orbits. Geocentric used circular orbits and then epicycles to correct them and save the theory (and some great metaphors there for lots of human thinking). But it's important to realize that Copernicus also used circular orbits, and they also used epicycles to save that theory. Kepler worked with Brahe and admired him, so decided to trust his measurements. This led to a different model. The planets themselves didn't care about any of the models.

The definitions are still not -true- and the neighborhood is still not the phenomena. It's just better. You only get Newton from Kepler, not Maxwell or the orbit of Mercury and then Einstein for both.


A few years back, I wondered: Why is it I believe in things no one else agrees with?

I seemed to look at the different schools of thought, pick the best one that answers as many questions as possible, and believe in it until something better comes along (ie can it solve more problems than the previous school of thought). (But, I was at least somewhat aware it was not scientific or scholarly. A combination of logic, emotion, and preferences.)

My peers in HS who went on to higher education (Stanford, Yale, Harvard, Worcester, MIT, etc.) agree with the status quo, work within it, and have careers, children, and so on. (Nothing wrong with that and there is no professional jealousy on my part.)

I suspect, however, they are shipbuilders, who are treated as explorers. Scientific researchers who find juicy hacks and turn them into over-priced drugs with dangerous side-effects.

You, however, take on the far superior approach to accumulating and developing new knowledge. However, it seems to come more naturally to you than to your peers. (Even taking into account your mathematics/musician parents and good teachers you encountered.)

Which cities and universities around the world have been most receptive to your lectures? (I would assume it would be some in Canada, Scandinavian countries, and China. With the least receptive being in the US.)

How was poor Faraday able to contribute so much to science and technology despite the vast resources of the classical educated members of The Royal Institution and Royal Society? (Granted, there were many people who contributed before/during/after Faraday's time to allow him to make those discoveries. Then it took others like Maxwell to carry on even further.)


In 2004 I wrote a tribute to the research community I grew up in "The Power of the Context" (http://www.vpri.org/pdf/m2004001_power.pdf) and this will possibly help with some of your questions. I was just one of many in this community, and as I said at a recent Stanford lecture "The goodness of the results depends primarily on the goodness of the funders". Every era has enough of the kind of people who like to "ask what would real progress be?" and then try to make it happen. The large differences in "real progress" have fluctuated as the good funding has fluctuated (right now and for quite a few years, there has been almost none).

A very important first step in this is to put a lot of work into "learning to see the present" and then where it came from (this is a lot of work, and it's not what human minds generally want to do). This will free up most thinking, will open up many other parts of the useful past, and especially about much better futures.


The article was a pleasure to read. I like the way you present important issues in a high-level way; it always puts me into a mood where I sense what is possible, because, just look at what has already been done! The origins of Montessori were unknown to me, and I tried out the thing with the ruler and the iPad :-)

It seems to me that the combination of an iPad Pro 12.9, Apple Pencil, and a bluetooth keyboard is a great environment for computing. It's just that the software running on it is not really that great, but there is no reason why it should not be possible to make much much much better software for it.


Thanks for being an inspiration to many of us, and keep up the good work.


I found it interesting that in an HN discussion about how to auto-update comments Alan Kay said, "How about a little model of time in a GUI?"[1] It ties into when he said of smartphone apps, "It's painful to see people using billions of devices that have forgot that undo is a good idea." [2]

It made me think about how things like Redux (or even just Google Drive revision history) where you can 'time travel' through state can be more important for UI than we realize sometimes.

1) https://news.ycombinator.com/item?id=11940472

2) https://www.youtube.com/watch?v=S6JC_W9F8-g&t=40m30s
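The machinery behind that kind of time travel is surprisingly small. Here is a toy sketch of the idea (not the real Redux API, just plain state kept as data, with names I made up):

    // Keep every state; "undo" is just moving a cursor through history.
    interface History<S> {
      past: S[];     // states we can step back to
      present: S;    // what the UI currently shows
      future: S[];   // states we stepped back from, available for redo
    }

    function record<S>(h: History<S>, next: S): History<S> {
      return { past: [...h.past, h.present], present: next, future: [] };
    }

    function undo<S>(h: History<S>): History<S> {
      if (h.past.length === 0) return h;
      return {
        past: h.past.slice(0, -1),
        present: h.past[h.past.length - 1],
        future: [h.present, ...h.future],
      };
    }

    function redo<S>(h: History<S>): History<S> {
      if (h.future.length === 0) return h;
      const [next, ...rest] = h.future;
      return { past: [...h.past, h.present], present: next, future: rest };
    }

Once the application state is just data like this, "a little model of time in a GUI" is mostly a matter of exposing it.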


It is painful that most desktop apps never even got branching undo/redo.


Most non-technical people I know have trouble with "simple" undo-redo and copy-paste; this would only make it harder for them.


Toys vs tools. I want paying my electric bill to be trivial. If I'm doing real work, I want all the power software can give me.

I can forgive my retired grandmother for not getting copy and paste. A professional however, should constantly strive to improve their mastery of their tools.


Exactly. Toys vs. tools. Or, as AK put in the interview:

"Well, a saying I made up at PARC was, “Simple things should be simple, complex things should be possible.” They’ve got simple things being simple and they have complex things being impossible, so that’s wrong."


Don't you think that a good GUI and visualization of the situations and consequences would do the trick (i.e. think about what the GUI accomplished for interacting with computers at all)?

I fear that too many technical people have no feel for the plight of the users (and amazingly to me, for their own plight as users -- they are often willing to put up with execrable BS in their own programming environments).


So what? It shouldn't be too difficult to implement three states of userspace and have it be selected upon login:

- Novice
- Advanced
- Expert

'Novice' gives you simple 'undo'. 'Advanced' gives you 'undo' with history. And 'Expert' gives you branching. For example, for file managers this could be made so that the novice gets something like the Finder or Windows Explorer -- basically a file browser that handles its pane contents as a document. 'Advanced' gives you tabs, rearrangeable layouts (dual-pane) and some more configuration. 'Expert' gives you fully configurable menus and toolbars, integrated with the host's generic scripting host. Etc.
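To make the capability split concrete, a rough sketch (names and shapes invented, not any particular toolkit's API):

    // One possible shape for per-level capabilities.
    type UserLevel = "novice" | "advanced" | "expert";

    interface UndoCapabilities {
      undo: boolean;       // single-step undo
      history: boolean;    // linear history you can walk through
      branching: boolean;  // full undo tree
    }

    const undoCapabilities: Record<UserLevel, UndoCapabilities> = {
      novice:   { undo: true, history: false, branching: false },
      advanced: { undo: true, history: true,  branching: false },
      expert:   { undo: true, history: true,  branching: true  },
    };

The UI would then show or hide the history and branching views based on what the selected level reports, while the underlying model stays the same for everyone.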


Finder does have tabs, and further automation is available through scripting (AppleScript and Terminal).

Actually, both scripting and the commandline are effective ways of making you organize your program into nouns and verbs that can be extended by the user.

But I don’t think rearranging the basic UI is that useful, because all your users are basically the same - we’re all human and not Lisp programmers.


Implementing even one of those is hard, and costs tons of developer salaries, for what might ultimately fail anyway (like most software products/startups do).

All 3?


If everyone took the time to really learn and understand advanced undo/redo, I wonder how that would change our thinking? Might we find all kinds of new ways to apply it in the physical world?

Maybe the reason people find it hard now is just the limit of natural languages as they exist today. We haven't made time travel, alternative realities, and hypotheticals a part of our ordinary speech (beyond trivial cases), but maybe we should?


I find it interesting that Google Docs also doesn't have branching undo. There is no way to revert a commit; you can only go back in time. And there is no blame.

I loved the article by the way. The future of writing should be version-controlled Jupyter notebooks. The next step for GitLab is to build a cloud IDE to bring that closer.


Google Docs has blame by looking at the history of changes in the file, although I will admit that it is not the most obvious.


They have a commit history. But if I want to see who last changed a certain line I have to go through the history until I see the change. I would love something like https://gitlab.com/gitlab-org/gitlab-ce/blame/master/CONTRIB...


I'd heard vim does, but when a situation came up where I needed it, it turned out I'm fine with just moving back in time.


The "undo tree" in Emacs is a thing of beauty and really helps with interacting with the undo state. https://www.emacswiki.org/emacs/UndoTree

I certainly agree I can get by without it. It does seem like I pull up the tree daily, though, to walk up and down the history and make sure I understand what the last thing I did was.
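For anyone curious what "branching undo" amounts to under the hood, the data structure is tiny -- just a tree of states with a cursor. A minimal sketch of the idea (my own, not how undo-tree.el actually implements it):

    // Every edit adds a child node; undo moves the cursor to the parent,
    // and redo can follow any previously taken branch, not just the latest.
    interface UndoNode<S> {
      state: S;
      parent: UndoNode<S> | null;
      children: UndoNode<S>[];
    }

    class UndoTree<S> {
      private cursor: UndoNode<S>;

      constructor(initial: S) {
        this.cursor = { state: initial, parent: null, children: [] };
      }

      commit(next: S): void {
        const node: UndoNode<S> = { state: next, parent: this.cursor, children: [] };
        this.cursor.children.push(node);
        this.cursor = node;
      }

      undo(): S {
        if (this.cursor.parent) this.cursor = this.cursor.parent;
        return this.cursor.state;
      }

      // Redo along a chosen branch; defaults to the most recent one.
      redo(branch: number = this.cursor.children.length - 1): S {
        const child = this.cursor.children[branch];
        if (child) this.cursor = child;
        return this.cursor.state;
      }
    }

Nothing is ever thrown away, which is what makes walking back into an abandoned branch possible at all.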


I've used vim + emacs's undo tree quite a bit. It's really nice to be able to make a bunch of explorative changes to the code base, undo them, make another set of changes and then use the undo tree to recover interesting bits from another branch.


If, like Alan Kay, you think we need a major overhaul in software construction and are in the San Francisco Bay Area, let me know. I'm part of a group that meets on Saturdays once a month and we work on side research projects toward that end. My email is in my profile.


If anyone in the Boston area is interested in whiteboarding high-skill software development environments in the context of 2018/2019 hardware, I'd be interested. So: continuous speech recognition; hand tracking, controllers, and haptic exoskeletons, but still keyboards; HMDs with 30+ px/deg, and eye tracking, but still screens.

Dynabook was a nice vision. As always, society executed poorly, and failed to realize much of the potential of its technologies. No doubt that will happen again with VR/AR. For example, there's a toxic cauldron of patents brewing. Screens and keyboards and mice, have had a good half-century run. And GUIs, and personal local hardware. But perhaps time to shift focus.


When you say society always executes poorly, what alternative social reality are you comparing it with?


I don't understand - are you suggesting the only way to know you could have done something better, is to see someone else doing the same thing for comparison? That's certainly one way. Others include... all the ways you evaluate the execution of a company or project, when you don't have a direct competitor to compare it with, no?


Maybe you would agree with the statement "reality is broken"? I certainly don't think it's meaningful.

The unfortunate associations of wanting society to "execute better" (i.e. the merger of industry and the state found under fascism) just underline the fact that applying performance measurements to human life/social phenomena is misguided. We don't all agree on the purpose of life or the nature of happiness, nor should we.


I'll be in Boston next week for a few days, so I'd be down.


Interesting. Email sent!


Emailed! : -)


I don't buy it that everyone would be a digital creative if they just had the right tools. To make quality content, whether it be apps/music/movies/etc., it requires much more than tools.

I respect Alan Kay, but I don't understand the need to bash on modern day technology.

Have we really come a long way in terms of general computing? Maybe not [Example: 0]. But in terms of the digital world, I can take a video of my parents and send it to my cousin who lives 10,000 miles away and he can respond in a matter of seconds.

I can literally meet people in remote areas of the world. I can interact with people who barely have food but somehow can get cell phone access, and now they are learning and communicating with everyone else. I can't imagine a better way to level the playing field (socioeconomically) globally than how we have it.

Do we have a lot more work to do? Sure. But Rome wasn't built in a day.

The future of the internet & technology, the direction it's headed, is going to come from small contributions from millions across the world.

What people will need will turn into what we have and use. And there won't be some magical device that just pops out of nowhere that will change everything.

[0]: https://www.youtube.com/watch?v=yJDv-zdhzMY


> "I don't buy it that everyone would be a digital creative if they just had the right tools."

That is not Kay's argument. Not everyone who scribbles notes or draws something on a piece of paper is a "creative" or artist. Not all who write at all are professional writers. But we do not have the computing equivalent of paper for everyone. Sure, someone can write in a word processor or draw in a drawing program, but that's not all that a computer is for -- that's just imitating paper. The outstanding question is: can we make something that is as extensible as pen and paper and literacy for aiding human thought for the next level, the computing level? All we have now are stiff applications. Saying that such things make people literate in computing is like writing by filling out forms.

> "Do we have a lot more work to do? Sure. But Rome wasn't built in a day."

Kay is your ally here, a constant gadfly putting the lie to all the hype and BS. He's a constant reminder that we shouldn't feel so satisfied with our mud huts. He's reminding us to build Rome at all.


I see the problem right here:

"To make quality content ..."

The right tools shouldn't be about creating "content", they should be about letting you solve your own problems, instead of trying to transform them into problems you can buy a solution for. Creative arts are only a subset of this -- they happen when someone's problem is "I want to make a piece of art".

> Have we really come a long way in terms of general computing? Maybe not

That's the point AK seems to be making, though. All the great things you mention are huge accomplishments, yet they're also disappointing compared to what is possible, what we should have now. Rome wasn't built in a day, but you have this whole army of Rome builders who decided that it's better to sell bricks instead of building the city...


>I don't buy it that everyone would be a digital creative if they just had the right tools. To make _quality_ content,...

Who cares about quality? Who is the judge of what "quality" is? As long as someone can create the content _and distribute it_, they are a digital creative.


It's clear from the interview that Kay thinks mobile computing isn't where it should be. But I haven't studied the DynaBook. Does anyone know exactly what Kay thinks is needed, in order to realize his dream?

I found a few such things in the article:

- more-discoverable undo

- some sort of AI-based virtual assistant

- a stylus and holder, and presumably input methods and applications that are better suited for that stylus

- something nebulous about having warning labels and being designed for the higher-order cognitive centers of our brains


What's missing in today's software is described by the field of End User Development, the ability for normal people to craft new software artifacts without the need to learn a formal programming language.

For most people, using computers means being force-fed a selection of existing applications; and most applications are pro-consumption, which is favoured by the media companies.

App stores (and the Linux repositories that preceded them) were a significant advance for the public, as they allowed common people to find tools to cover their needs; previous to that, only power users were able to tune the computer to their needs.

However, non-developers still depend on the capabilities put in there by the developers, and can't fine-tune their behaviour.


> the ability for normal people to craft new software artifacts without the need to learn a formal programming language.

The problem is not programming. It's conceiving. You can get 8-year-old kids to do pretty advanced imperative algorithms with Scratch. What's hard is to get people to write down in a blank sheet what they actually want to do (mostly because you think you want something and then when you've got this thing you notice you actually wanted something a bit different).


You don't need to be able to write down in a blank sheet what you actually want to do if you don't need someone else to do what you want to do.

Software developers aren't special in this sense -- even when writing our own software we can't just do what we want. This is why "scratching your own itch" is so much easier than the alternative. You reduce the "is this what I really want?" loop by an order of magnitude by cutting out most of the people from the process.


> You don't need to be able to write down in a blank sheet what you actually want to do if you don't need someone else to do what you want to do.

This.

Why is it so hard for developers to understand this simple idea?

Life in computing would be so much easier if the people building the system understood which part of their job is essential (finding out the best way to arrange hardware and software to satisfy a requirement) and which part is circumstantial: gathering a set of precise requirements to begin with.

The last part wouldn't be needed if end users were able to build their own working prototypes to solve their needs, and engineers were simply called to rebuild the prototype with best engineering practices.


You'd be surprised at the kind of people who are able to create functional applications and complex data models in my office. The key to letting common users define software behaviour is doing it iteratively, with every step based on a previous functional version.


>the ability for normal people to craft new software artifacts without the need to learn a formal programming language.

The problem with that is that normal people often have formal requirements that software has to meet in order to be useful.

Therefore I think the secret to end-user programming is not to be less formal. It's to be less complete or less general - DSLs.

But at that point you could legitimately ask whether it's programming at all or just configuration and what was actually achieved compared to the status quo.


You're right about DSLs; they're the greatest tool for tasks that require a formal spec.

The thing is, not all tasks that users engage with need a formal specification. For those, the ability to iterate and refine quickly is more valuable than having a precise description of every machine behaviour; I'm thinking of things like page structure on Wikipedia (optional, and done with simple text markup); or the structure of documents in outlets like OneNote or Evernote, or attributes in Excel tables, which are all easy to re-arrange on the fly without needing a data schema.

All my examples refer to data structure. What's missing is that kind of lightweight, optional structure for behaviour. All current software, except debuggers and unit tests, expects a fully defined program before execution.

Power users need a top level loop that could execute partially specified functions over any DSL, prompting user input for the parts of the procedure not yet specified, and the capability to expand the DSLs themselves. There's nothing in the status quo like that, although environments like Hypertalk, Squeak and Excel are close, yet limited to a single DSL each.
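As a toy illustration of that top-level loop (all names made up, and hand-waving the DSL part): a procedure is a list of steps, some still holes; running it executes the defined steps, asks the user for the rest, and remembers the answers so the procedure is more fully specified next time.

    type Step<Ctx> =
      | { kind: "defined"; name: string; run: (ctx: Ctx) => Ctx }
      | { kind: "hole"; name: string };

    function runPartial<Ctx>(
      steps: Step<Ctx>[],
      ctx: Ctx,
      askUser: (name: string, ctx: Ctx) => (c: Ctx) => Ctx  // interactive prompt for a missing step
    ): { result: Ctx; learned: Step<Ctx>[] } {
      const learned: Step<Ctx>[] = [];
      for (const step of steps) {
        if (step.kind === "defined") {
          ctx = step.run(ctx);
          learned.push(step);
        } else {
          const run = askUser(step.name, ctx);   // the user supplies this step on the spot
          ctx = run(ctx);
          learned.push({ kind: "defined", name: step.name, run });  // now it is specified
        }
      }
      return { result: ctx, learned };
    }

Part of why Hypertalk and Excel feel close to this is that the "program" is always runnable, however incomplete it is.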


I think the first step we can take toward "software artifacts" is modularization.

If each piece of functionality is a discrete module, the entire platform suddenly becomes both malleable and stable.

The best way to accomplish modularization is by using a functional design.


You seem to be describing the ideas underlying Smalltalk, which uses the word 'objects' instead of 'discrete modules'. Every piece of functionality is an object -- any object can talk to any other object, etc.

If you're proposing something different, can you contrast your ideas with Smalltalk-style OOP?


I think that currently the best example of modular code is Haskell.

Functions and Functors/Monads can be used to express the most elegant abstraction over data that I am aware of.

In Haskell, if you write a program that uses maps/folds/etc., it is suddenly able to use any kind of Monad-derived data type.

Everything is discrete, since everything that isn't purely functional is wrapped in a Monad.


> most elegant abstraction over data

This may be. We should also step back and consider whether building functional abstractions over data is a good way of modelling all of computation itself, or whether something like 'smaller virtual computers all the way down' (i.e. Smalltalk) is a better model. Decomposing a thing into smaller things of the same kind seems cleaner from a certain perspective, at least.

In Haskell, higher level functions can be composed from other functions, which is very clean. But the state isn't inside the function (like it is for a real computer), so it doesn't model a computer completely in my mind - it models one aspect (the transformation).

Another perspective is thinking about building large scale systems and coupling of modules (how do you abstract over data from a third party library where you don't have the source code?). I kind of get smalltalk's answer here (an object scales up to become a distributable module), but I don't know if there is a good answer in a full Haskell based world.

(BTW, I'm not saying one is better, I'm still thinking these things through.)


> In Haskell, higher level functions can be composed from other functions, which is very clean. But the state isn't inside the function

> ... Another perspective is thinking about building large scale systems and coupling of modules

Both models have been proven formally equivalent, so it doesn't matter which one you use as the base computation model "all the way down"; you can always transform one into the other.

So in practice you end up using the one which best represents the problem domain that you're solving. State-based object orientation works best to represent simulations of the world where previous states are not needed. Functional is best when you need to reason about the properties of the system, since it allows you to access any present or past state of the computation.


The other neat thing about modular and functional programming is that since modules/functions are discrete, you can drop in any replacement you want to, without the rest of the codebase even noticing (except during the build process).

As a real example, you can use Haskell's Foreign Function Interface (FFI) to call C functions, and wrap C structs. The rest of your Haskell code doesn't even need to know about it. You can use the same interface with a variety of languages in place of C. Rust even has a specific compiler option for generating C-compatible shared object/dll files.


> (except during the build process)

That's the key point though. In a large scale system (think multiple systems written by multiple groups running on multiple machines) when you modify a shared data type, how do you update the systems?


more-discoverable undo

I think he wanted more discoverable everything; the undo was just used as a particularly bad example on the iPhone.

He wants the world to be more discoverable, to modify our very culture so that learning is easy (an example he gave was with math, so it's not just a small, 'smart' percentage of the population who actually gets it, but everyone absorbs it from a young age).


In the end I think he, like many intellectuals, overestimates people's interest in learning.

Aka you can bring a horse to water, but you can't make it drink.


actually I'd argue he understands the problem pretty well. He talks about needing to offer something SO compelling they cannot help but interface with it, in spite of their desire for things that do not push them.


I'm not sure if there's a problem at all. Learning how to do anything is much easier today than ever. Just type "how to do X" in google, and you will probably see a nice YouTube video, or a tutorial. If you get stuck, there are online forums with people willing to help for free, 24/7.

Has the number of people who want to learn decreased? If so, was it because of iPhones/iPads?


When I do that I constantly think: why is it so difficult to learn things? Why do I need to look at at least 3 different sources just to confirm that they are not bullshitting me?

After learning one thing related to the topic I want to learn, I still have 99 to learn, but I don't even know what those 99 things are. So basically I'm learning through dumb luck. Yes, the reason is that I haven't reached the magical "bootstrap knowledge" where you have enough information in your head to just look up the things you don't know.

Those YouTube videos are nice and all, but they will barely teach you anything; they are easy-to-consume, low-quality crap for lazy people. I know that because I'm one of them.

The fundamental problem is still the same. Discoverability sucks. I still have to ask someone who is more knowledgeable than me. So why not let the AI do that task?


I don't know why it's hard for you to learn things; you were supposed to get the "bootstrap knowledge" in your undergrad education.

My point is that the discoverability today is much better than it was, say, 30 years ago. Back then your options were: go to a library, find and read good books, or find someone knowledgeable, and go talk to him in person, or write him a paper letter with your questions, and hope he responds.

For example, I remember how I was given my first access to a computer, with a command prompt blinking on a black screen, and I had not the slightest idea about what to do with it. Oh, and people nearby were not particularly keen on stopping what they were doing and explaining things to me. I was given a poorly translated DOS manual and a few simple BASIC programs to study. My computer access was limited to one hour, twice a week, and I remember how I thought this was so cool, and that I wanted to learn.


It's not a problem. It's a way things can be better.


Sure, nothing wrong with trying to make things more discoverable. I think the first step towards this goal could be making an app which implements what AK is talking about. I'm not sure what this app would be like, or for, but if someone makes it, then we can test how effective his ideas are. You don't have to redesign an iPad to do this.

On the other hand, things can (and probably will) get better organically. The iPad/iPhone is only 10 years old. In that time it has improved both in terms of hardware (e.g. stylus for iPad, 3d touch, extra sensors, slo-mo video recording), and software (voice recognition, Siri, AppStore, multi-tasking, StreetView, even something as simple as cut and paste - none of that was available in iOS 1!). In a few years we will move to smart glasses and virtual/augmented reality. My guess is it will make learning experience better for everyone.


People have little desire to get education force-fed into them. But they have plenty of desire to solve their own problems, whatever those problems are.

What I feel AK (and others) are advocating is for computing to support individual problem-solving in an iterative manner. You have this powerful general-purpose machine, you should be able to mold it towards something that solves the problem you currently have.


Right now the problem is that the horse doesn't even know that water exists, so unless it happens to find it through dumb luck, or talks to some other horse about its problem, it won't ever find water to even consider drinking it.

The idea is that the AI takes over the role of the other horse.


If you want to study the ideas of the dynabook, here you go: http://www.newmediareader.com/book_samples/nmr-26-kay.pdf


> rather than the wheels for the mind that Steve Jobs envisioned.

Proper quote, for those confused:

"I read a study that measured the efficiency of locomotion for various species on the planet. The condor used the least energy to move a kilometer. Humans came in with a rather unimpressive showing about a third of the way down the list....That didn't look so good, but then someone at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle and a man on a bicycle blew the condor away. That's what a computer is to me: the computer is the most remarkable tool that we've ever come up with. It's the equivalent of a bicycle for our minds.”


oblig. Bikes need road infrastructure, or at the very least, a path cleared of trees/underbrush (though less efficient). Condors don't.


Pretty sure Condors need a clear path. You usually don't see them flying below the tree tops low to the ground.


Reminds me of what the people of Red lang are trying to achieve: http://www.red-lang.org/p/about.html?m=1

Maybe I'll finally give it a try!

Every once in a while I daydream about a sort of "distributed app platform" built on top of Scheme and IPFS. -- Perhaps Red would actually be perfect for this...


> Every once in a while I daydream about a sort of "distributed app platform"

Have a peek at dat and beaker browser, they've done interesting stuff in that regard recently.


Without the vision that came from PARC, none of us would be viewing this today. The vision they had was vast, and it still has more to offer today.

Disappointed many comments have disassociated this from where we are now.

Thank you Alan. Excellent read.


Alan Kay comes from a time when the IT business was all about the creation of tools. The companies that formed out of the 60s/70s experiments, of which he was a main protagonist, drove innovation according to the more abstract needs of information management. That era ended in the 2000s.

The companies that drive technical advancement and innovation now are a shopping mall, some telcos, family camps with an added spying bonus, and all manner of information gatherers that do business with the worst part of our economy, selling image rather than delivering identity. I am talking about the advertising business.

Just compare the Windows 10 address book to earlier versions. Back in the old days, the address book was scriptable. You could query it from other apps. Some third-party apps had powerful filtering. And now? It is totally dumbed down to the most basic, primitive needs. And while we use cloud-based address books these days, it is much more difficult to keep them neat and ordered. I could go on for hours. Like XHTML vs HTML5, and so on.


Simplicity is closely related to education, and education is closely related to freedom.

How are we supposed to take education seriously when we create devices of such complexity that they are essentially magic, and hand them to students?

Before computers, complexity always had tight upper bounds. A watch could only get so complex before it wouldn't fit in its housing; and a car could only get so complex before it could not be efficiently manufactured/repaired. With computers, there is no upper bound to the complexity.

We are now overrun with complexity, but it's hard to see how to fix that. A business that puts blind faith in a complex system to do something slightly faster/better will probably win against a business that tries to truly understand what it's doing. An employee who just keeps adding to the complexity will probably win against an employee stepping back to simplify things. A consumer who makes convenient choices about complex things without asking many questions will probably find life easier.


Simplicity is related to education in the sense that education is about teaching you a sequence of simple steps that lead toward the complex.

Also, I think the perception of computers being magic doesn't come as much from their complexity as from the people who try to force the complex into looking like it's simple. They're hiding complexity so hard the end result doesn't make any logical sense, so it's magic.

AK had a lovely quote in the article:

"Well, a saying I made up at PARC was, “Simple things should be simple, complex things should be possible.” They’ve got simple things being simple and they have complex things being impossible, so that’s wrong."


Education offers tools to help manage complexity -- abstract mathematics and lab sciences help us understand the complex world around us.

But that has mostly been used to manage complexity that already existed (e.g. physics, biology), or that is somehow inherent to our world (e.g. complexity resulting from social interactions among humans).

Now, we are creating our own, new kinds of complexity that didn't exist before. Software isn't there to help us manage complexity, it is adding to the complexity of our world, and dramatically so. That moves education backwards.


The problem is not the complexity of a computer.

A child can learn complexity.

The problem is, in fact, limiting a computer's complexity.

The lower the barrier of entry for creation, the sooner one can learn to create.

The second problem is media.

You can use paper as a tool for media consumption, or for creativity. Media can come printed on the page. Creativity, however, requires a pen.

What Apple and others have done is create a platform that favors media consumption, and limits what a user's "pen" can do.


Interesting that neither Kay nor the interviewer seemed aware of Swift Playgrounds. I would have loved to hear his criticisms of that environment as a teaching tool for kids. It seems to be targeted at 8-12 year olds given the graphics. I would be curious as to how closely Swift Playgrounds follows Papert’s ideas.


Neglecting path dependence is the root of all evil. I mean, he’s not wrong, but maybe the answer is just that this is fucking hard. And some of his specifics are just wrong. The pen? No. It’s super useful in some areas, like drawing, but if you include it by default you get lazy developers making UI that depends on it for navigation. Apple absolutely made the right call. I’m not taking away any of his amazing contributions, but his habit of slamming every incremental bit of mainstream, at-scale tech because it doesn’t fulfill his vision (yet) is not that helpful.


This isn't the first time Alan Kay has expressed his views on HTML, CSS, and JavaScript.

But what alternatives do we have, or did we have?


That's the problem here, actually, but not in the way you think. There are alternatives (Canvas, SVG, etc.) but they're not as good for the standard use case of coming up with something quick that fulfills the business requirement.

And then we need to recognize that that stack is a fucking abomination that should be burned with fire. It should so deeply embarrass us, that we wouldn't want to admit that we use it in polite company.

We need people who are willing to experiment with new ideas and give us something better than HTML/CSS/JS. And we need them to get funding.


The basic thing to remember is that HTML was never meant to handle UIs. It was a very basic document layout format, with some simple ways to provide the reader with addresses for references etc.

It was in effect Netscape that tried to turn it into a UI as a way to get into the office network market. This riled up Microsoft, and the rest is history.


You can still get pretty far with the ideas of Smalltalk, html+js and webdav:

http://lively-kernel.org/ (from ~2012)


>We need people who are willing to experiment with new ideas and give us something better than HTML/CSS/JS. And we need them to get funding.

It's coming. In 5 years JavaScript development will be a distant bad memory. Front-end web will become much like native mobile development, but with a richer ecosystem of IDEs and languages based around WASM.


What should embarrass us so deeply and why? HTML and CSS are some of the most generative, enduring, and transformative technologies ever. I wouldn't fund anyone who doesn't recognize how great they are!


They're embarrassing once you realize what's been done in the past, with incredibly smaller 'stacks' in much simpler systems, producing more powerful capabilities.

HTML and CSS require that a large amount of information be pre-shared and implemented by different systems (i.e. all browsers need to implement ginormous standard with all subtleties). After all that, you don't even get much interactivity, you need Javascript.

A better design would be to share a tiny set of primitives that are designed to compose nicely and that can be used to build up much more powerful apps.


HTML & CSS were good for the early web, when it was a simple document layout platform. They're not ideal for building rich UIs and applications. Just because they've been extended to allow that doesn't mean they are the best tool for the job. Similarly, Excel is abused to do all sorts of things it's not intended for as well, when there are better options for databases, financial applications and data science.


There is a huge difference between Excel and HTML: the former is Turing-complete (in addition to containing Turing-complete scripting language(s)) and was like this for as long as I remember, while HTML is not. I guess CSS nowadays may be Turing-complete (?), but HTML is most decidedly not.

That difference means that it's orders of magnitude easier to "abuse" Excel to do some things it wasn't designed to do than to do the same with HTML. I seriously wonder if the reliance on HTML to this day is not a kind of "job security" thing for web devs (like myself, btw)...


Innovation is, by definition, not very respectful of the past or the status-quo. Innovation is, by definition, mutiny.


I have been working with the web since 1997, but almost only on the server side (back in the simpler web 1.0 days) and (during the past decade) on the browser UX/network side.

I just recently started looking at actually creating some web content that uses the modern web stack. Holy moly. What a cluster f__k of insanely over/badly-engineered craziness.


Try Mithril.js/HyperScript for defining HTML and Tachyons for inline-ish CSS. I've been happy with that mix. Both emphasize simplicity and maintainability.
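
A rough sketch of what that mix looks like in practice (the component, class names, and URL here are just illustrative, not anything canonical):

  // assumes Mithril and Tachyons are already installed/loaded
  import m from "mithril";

  // Tachyons utility classes (pa3 = padding, f3 = font size, etc.) go
  // straight into the hyperscript selector, so structure and styling
  // sit side by side without a separate stylesheet.
  const Hello = {
    view: () =>
      m("div.pa3.bg-washed-blue",
        m("h1.f3.dark-gray", "Hello from hyperscript"),
        m("a.link.blue.pointer", { href: "https://mithril.js.org" }, "Docs")),
  };

  m.mount(document.body, Hello);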


Read anything by Ted Nelson. He was the original visionary of hypertext (he even coined the word). He hates the web like Kay hates personal computers.

Of course he's also spent 50 years trying to implement his vision of the web and failing [0] so there's that.

[0] https://en.m.wikipedia.org/wiki/Project_Xanadu


It is terrible though. It is worth stating the obvious fact once in a while.

This organic design of the web stack, with layer upon layer upon layer upon layer, can't go on forever. At some point it will just become too painful and an alternative will emerge.


The web platform has its virtues though. I can still load a webpage made in 1996 and at least read the text if nothing else; we can't say the same of erstwhile competitors like Flash, Silverlight, Java applets...


Perhaps.

But try loading various present-day pages without JS enabled.

Many of them produce just a white screen unless you turn on JS for 3+ domains external to the one you are trying to read.

And once those load, you often find half the screen filled with persistent headers and footers.

In effect Flash didn't go away, it was just turned into JS+CSS.

And in large part I fear we get this because the people in charge are hell-bent on recreating printed media in electronic form.


> And in large part I fear we get this because the people in charge are hell-bent on recreating printed media in electronic form.

I agree. I see the conflict between designers and users to be one of the primary reasons for the current state of the web.

The conflict is about who gets to control how the content looks. The web was intended for users to have that control. The browser is your "user agent", you get to decide what and how gets fetched and displayed. The designers, on the other hand, want to treat webpages as color magazines - they want to have full control over what's shown on your screen, as for them the form is just as important as the actual content. They would happily serve you the web page as a PDF, if they could get away with it.

Personally, I'm firmly in the "user gets to control" camp, but the market prefers the "designers get to control" camp. So here we are, with web browsers continuously removing the ability to control anything, and the HTML/CSS accruing more and more layout control tools.


The JS just fetches the text over the network and transforms it though. The browser still renders HTML markup in the end.
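
To illustrate (the endpoint and element id are made up), a lot of "JS-only" pages boil down to something like this, and the browser still ends up rendering ordinary markup:

  // hypothetical endpoint and element id, just to show the shape of it
  async function render() {
    const res = await fetch("/api/article/42");   // the JS fetches the text...
    const article = await res.json();
    const root = document.querySelector("#content");
    if (root) {
      // ...then hands the browser plain HTML markup to render as usual
      root.innerHTML = "<h1>" + article.title + "</h1><p>" + article.body + "</p>";
    }
  }
  render();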

I agree with the rest of what you said but those are just design decisions. My point was that, regardless of the code headaches, the platform itself (HTML, CSS, JS) is an incredibly flexible and resilient system.


"Can still read text 20 years later" is a pretty low bar. What are the odds the javascript will work?


Open standards tend to win/survive...



I have a new alternative called Tree Notation:

https://github.com/breck7/treenotation

A sample from an HTML replacement:

  @html
   @body
    @div Hello world

A sample from a CSS replacement:

  @a
   cursor pointer
   text-decoration none

It is not HAML, though that was one of my first inspirations many years ago.


So Jade with an ‘@‘ symbol added?


I do like Jade, but my HTML ETN is not Jade either. The little tiny details have big downstream implications for parsing and static tools. My language has a pure Tree Notation grammar.

The benefit of that is tools written for an HTML ETN also work for a C ETN or a Swift ETN, and vice versa. So as we get more ETNs and more ETN tools, the network effects will be quite large.


Just today I read that the VPRI fonc mailing list was shutting down because they shifted focus onto another area (communication design or something like that). A YC incubator thing, IIRC.


Hey, the FONC mailing list isn't shutting down! It's just moving! It's now hosted at MIT instead of VPRI. It's located here:

http://mailman.mit.edu/mailman/listinfo/fonclist


Ha, good to know. I only read a few mails on that thread; maybe I missed the part where they mentioned the relocation.


YC HARC - harc.ycr.org


Thanks a lot.

ps: project list is cool http://harc.ycr.org/project/


The title is ironic, given Alan Kay is the one who said "The best way to predict the future is to invent it."


> let’s do what Sony did with the Walkman, let’s use this technology to make things more convenient in a way that a consumer can recognize, even if the user interface limits them.

That's a fantastic characterization of what I hate about Apple, and to contrast, what I love about free software.

The sad thing (in my opinion) is that people love the limitations Apple gives them. There is no undefined behavior when the behavior is so arbitrarily limited. People don't want to create. They just want content. I find it rather depressing.


Also from the article:

> Simple things should be simple, complex things should be possible.” They’ve got simple things being simple and they have complex things being impossible, so that’s wrong.


Many of Alan Kay's concepts will take not only time, but also some kind of catalyst to gain acceptance. What might that catalyst be?


For me, the flip side of all of Alan's (and other's) inspiring work on Smalltalk is that, having started programming with Smalltalk in the late 1980s (especially with VisualWorks and OTI's Envy), my entire professional career since then has seemed like a long succession of dealing with less pleasant systems around people who just don't get it. That has been frustrating and painful (even if the money is good) -- in some ways, it might be better not to know what is possible because then you don't miss it.

It's sad that ParcPlace Systems made such a mess of commercializing ObjectWorks/VisualWorks Smalltalk (including not licensing it to Sun affordably for settop boxes, where Sun then reacted by creating Oak/Green/Java) and then they sold Smalltalk for a song to Cincom instead of open sourcing it and becoming a services company. I remember talking with one ParcPlace salesperson who kept going on about how Digitalk was their competitor, with my trying to point out that Visual Basic was really their competitor (and ParcPlace buying Digitalk was another part of the disaster for Smalltalk commercially).

It was also sad to see IBM abandon two reliable fast Smalltalks it owned in favor of a buggy slow early Java for marketing reasons. And even working within IBM at IBM Research around 1999, I could not use Smalltalk because IBM's OTI subsidiary (which IBM had bought) wanted around US$250K a year internally just to let the embedded speech research group I was in try their embedded Smalltalk for the product we were making. They claimed they'd have to devote an entire support person to it -- and I tried to explain I was going out on a limb there as it was to suggest using Smalltalk instead of C and VxWorks (which my supervisor had already licensed). So instead I got IBM Research to approve Python for internal use (which took weeks of dealing with IBM's lawyers) and Python, not Smalltalk, ended up on Lou Gerstner's desk when he asked for one of our "Personal Speech Assistant" devices for his office.

I really learned to dislike proprietary software from that experience of seeing something I loved like Smalltalk being killed by a business model focusing on runtime fees and such. I remember how when an IBM Research group licensed their Jikes Java compiler as FOSS, their biggest surprise was not how many external users they had -- but how many internal IBM users they suddenly had who no longer had to jump through complex hoops to gain access to it.

Even now, Java, JavaScript, and Python are not quite where Smalltalk was back then in many ways. Of course, it's decades later, so there are other good things like faster networks and CPUs and mobility and DVCS and above all a widespread culture of FOSS -- and when I squint, I can kind of see all the browsers out there running JavaScript as a sort of big multi-core Smalltalk system (just badly and partially implemented).

For a while I had hopes for Squeak, but it got bogged down early on by not having a clear free and open source license. The same thing happened recently with Automattic when it used React with a PATENTS clause: when I pointed out the concern, some lawyer chimed in with how everything was fine -- and only years later did the growing damage to the community become obvious to all. http://wiki.squeak.org/squeak/501 https://github.com/Automattic/wp-calypso/issues/650

I really learned to dislike unclear licensing on a community project from that early Squeak experience.

Squeak continues to improve and has fixed the licensing issue, but it lost a lot of momentum. Also, the general Squeak community remains more focused on making a better Smalltalk -- not on making something better than Smalltalk. That is something Alan Kay has pointed out -- saying he wanted people to move beyond Smalltalk, with Squeak as a stepping stone. Yet most people seem not to pay attention to that.

But if even Dan Ingalls could be willing to build new ideas like the Lively Kernel on JavaScript, I decided I could too, and I shifted my career in that JavaScript direction inspired by his example.

As luck would have it, my day job is now programming in the overly-complex monstrosity that is Angular, when I know the more elegant Mithril.js is so much better from previous experience using it for personal FOSS projects... I guess every technology has its hipster phase where decision making is more about fads than any sort of technical merit or ergonomics? I can hope that sorts itself eventually. But once bad standards get entrenched either in a group or the world at large, it can be hard to move forward due to sunk costs, retraining costs, retesting costs, and so on. But, thick-headed as I am, I keep trying anyway. :-)

And as in another comment I made, there are important social/political reasons to keep trying to create better systems to support democracy: https://news.ycombinator.com/item?id=15311950


If people could understand what computing was about, the iPhone would not be a bad thing. But because people don’t understand what computing is about, they think they have it in the iPhone, and that illusion is as bad as the illusion that Guitar Hero is the same as a real guitar. That’s the simple long and the short of it.

Some kids learn guitar. Some kids fantasize about learning guitar. Apple makes machines that enable the computing equivalent of both of these things.

You're never going to turn all listeners into musicians, all readers into writers, all theater-goers into actors. You can't even get everyone to dance at a wedding with an open bar. Technology can play a role in getting a few more people over the line, but Alan Kay's head is way too far in the clouds for him to even see the problem that way.

Honestly, I think what he's done is dissociated himself from his own snobbery. He talks about technology failing humanity, but he's really saying he doesn't like people. He doesn't like how most people think and live. Why else would he constantly be disappointed in technology failing to turn all of humanity into always-on cultural creatives? It's like he looks at books and says, well, the dream for books is for everyone to read challenging literature, but at the beach or on the subway you see mostly "Eat Pray Love" and Stephen King, so clearly books have failed.

People are doing the same dumb shit with books and technology that they did without it. They choose passive consumption. When I complain about it, I'll blame it on technology so I don't sound like a misanthrope.

He constantly talks about creating, learning, and education, but he doesn't bring any attention to any current work supporting these activities. He doesn't say, look, here's this great work being done for kids, and it's limited because the iPad needs this feature. He doesn't say, look, here's this great computing system I use to support my own work, and it would be a lot better with this specific kind of OS or hardware support. Instead of talking about how the Apple hardware is holding back current work, he's talking about how it fails to implement his own ideas from thirty years ago.

You'd think he could find SOME new idea to promote and celebrate, if only to point out what he thinks progress looks like, but apparently Apple has snuffed out every worthwhile avenue of improvement by a few unfortunate hardware choices.

Hell, where's even the usesthis interview with Alan Kay?


>Honestly, I think what he's done is dissociated himself from his own snobbery. He talks about technology failing humanity, but he's really saying he doesn't like people. He doesn't like how most people think and live.

And why should he? Millions voted for Trump. Heck, millions also voted for Hillary. Neo-nazis, tele-evangelists, climate change denial, PC, twerking and Justin Bieber are all things. Does that sound like a society we should like how "most people think and live"?

>Why else would he constantly be disappointed in technology failing to turn all of humanity into always-on cultural creatives?

Because he actually loves people and doesn't want to see them squander their potential?

>He constantly talks about creating, learning, and education, but he doesn't bring any attention to any current work supporting these activities. He doesn't say, look, here's this great work being done for kids, and it's limited because the iPad needs this feature.

Err, he literally does just this in the interview.


The topic is Apple because the interviewer is a journalist who was writing a book about the iPhone. Plenty of other resources online show him praising work he likes (Bret Victor, for example) and systems that he thinks are in the right direction. It just wasn't in the bounds of the interview topic.

As for the "not turning all readers into writers," well, what can we say? One would hope that all readers are literate enough to write. Now, we don't expect all literate people to be professional writers, but this is not in line with what Kay is arguing anyway. He's saying that being literate in computing should mean being able to wield computing in order to aid human thinking -- for everyone. He's calling out our current systems, saying that their availability and use, though ubiquitous, do not constitute such literacy.


In particular, even though we don't expect all literate people to produce professional-quality writing, we do hope they can use their writing skills to solve their own problems. Be it filling forms, writing letters, posting "lost cat" messages on trees, whatever.

That's what I think computing should be. Giving people the ability to solve even more complex problems themselves, even more effectively.


His "vision" is too self-centered and he expects people to live after his ideals but in fact people have better things to do than create art, or whatever.

>They choose passive consumption

The same person is likely an active producer in one area, and a passive consumer in others. I know I am.


>His "vision" is too self-centered and he expects people to live after his ideals but in fact people have better things to do than create art, or whatever.

That's what you see around you, talking to most people?


Should kids just look at sand castles?


I kind of feel sorry for Alan Kay. He's trapped in endlessly rehashing ideas from 60's and 70's.


Unlike CS programs in high schools and universities. They're stuck in the 50s rehashing sorting algorithms for their students. Meanwhile an entire community of "programmers" was trained and let loose in the wild to endlessly engage in pedantic debates about type systems, or which ALGOL like language is the best for this-or-that.


What new ideas are there though? Social networking I guess. Have you seen the mother of all demos? Engelbart's team invented, like, everything.

On the programming side, dependent types maybe. Everything else is from ml or lisp.

Neural networks are old old ideas as well. Computers have finally gotten fast enough to make use of the ideas.


> On the programming side, dependent types maybe.

Have you learned about monads yet?


Behind his curtain, it turned out that Alan Kay was a good man but a terrible wizard.


For all the respect and admiration I have for Alan Kay, I think he's often showing a lot of bad faith.

His ideal vision is always described in very abstract terms. If he took the care to write down precisely what it should look like, I'm sure there would be hundreds eager to build it for him. Of course, part of it is "design principles". But then work through some common use cases.

When the abstraction meets reality, we get comments like "the iPad should have a pen". Which is really an interesting observation, but it doesn't quite thrill the crowd like the abstract impressive-sounding stuff does. And you'd hardly call these isolated observations a "vision".

Clearly, the user interfaces we have today are sub-optimal, and I can make hundreds of factual remarks on what could be improved. But I don't think this works out to a "vision". In fact, a vision is, I think, sometimes bound to be too abstract. If we just had products where the shit was fixed relentlessly, I reckon we'd get to "good" without needing a vision.


Maybe you disagree with him or perhaps think his position is ill-considered or too woolly or somesuch. But 'bad faith'? You really believe he's being intentionally duplicitous and deceptive?


His ideal vision is always described in very abstract terms. If he took the care to write down precisely what it should look like, I'm sure there would be hundreds eager to build it for him.

He did. There's "Personal Dynamic Media", the classic paper.[1] Kay's vision focused on the user as creator, rather than consumer, of information. In practice, most people are content to watch the boob tube, now available in handheld. This has been the frustration of new media inventors from Edison to Zworykin. What Kay is complaining about is the banality of computing today. That's a problem with humans, not computers.

Kay also thought that discrete event simulation was going to be really important. Smalltalk is based on SIMULA, which was a discrete event extension of ALGOL-60. Objects were supposed to be entities in the simulation. In practice, discrete event simulation is a tiny niche application of computing.

(I got a tour of PARC in 1975 when taking a course at UC Santa Cruz, met Kay and Goldberg, and saw some of the first Altos. The idea back then was "this is what can be done with enough money. Someday it will be cheap." Eventually it was, but not soon enough for Xerox. Years later, I programmed an Alto in Mesa while at Stanford, although by then it was obsolete.)

[1] http://www.newmediareader.com/book_samples/nmr-26-kay.pdf


Doesn't your typical business application model some discrete events in the real world? Why is object-oriented programming so popular if it doesn't meet the needs of the software it helped to write? There should be some correspondence between the principles that laid the foundation of OOP and their application, and OOP's popularity is not a coincidence.


>> Why is object oriented programming so popular of it doesn't meet needs of software it helped to write?

Perhaps because the vast majority of programmers learn to program in an OOP language first, and then find it very hard to learn a different paradigm. Or maybe they have very little time to do so.

I think this leads right back to the point alankay1 is making (if I got it right): that given a hammer and sufficient time, you become a hammerer; someone who can hammer nails and can't do much else.

So asking "why is OOP so popular" is a bit like asking "why do people watch Hollywood movies" (i.e., if they're not good cinema). That's what they're used to.


Most of what OOP is used for is to implement the thousands of minor details that go into implementing a complex system. Peruse the Firefox source code some time; you'll find a large class hierarchy of different types of strings, and another one for different types of DOM nodes, and another for different types of documents that the nodes belong to, another for windows of various types, another for various types of images that it can decode, etc. None of that has anything to do with modelling entities in the real world.


You must not be familiar with the STEPS project, a complete OS project (with a compiler collection, 2D vector drawing and compositing, desktop publishing, online communications…), in 20K lines of code. Mainstream OSes by comparison are more like 200M lines (10K books).

That stuff is pretty concrete, with an actual GUI and real software. One can get their hands on the source code, though it's really not meant for public consumption (it was a proof of concept, not a commercial product).

Here are the progress reports (latest first):

http://www.vpri.org/pdf/tr2012001_steps.pdf

http://www.vpri.org/pdf/tr2011004_steps11.pdf

http://www.vpri.org/pdf/tr2010004_steps10.pdf

http://www.vpri.org/pdf/tr2009016_steps09.pdf

http://www.vpri.org/pdf/tr2008004_steps08.pdf

http://www.vpri.org/pdf/tr2007008_steps.pdf

That stuff definitely works out to a vision.


"it was a proof of concept, not a commercial product"

Within that distinction, lies dragons.

The issues I have with existing systems are divided in two categories: bugs/issues/not being polished enough, and conceptual inadequacy.

Most of my issues with actual products fall within the first bucket, because the second bucket actually means I'd want another product. This is where innovation is.

That being said, unlike Kay, I don't loathe what we have. I'd be very happy if it was just better. But, like him, I think the fact it's bad has deep roots in the way it is conceived.

I think another commenter hit the nail on the head when he said that Kay was mostly disappointed with users.

About Steps specifically, I haven't tried it out, but I harbor a lot of doubt. A huge amount of work goes into making stuff merely tolerable.

Something I do have experience with is Pharo Smalltalk (which I believe is the most advanced offering in the market). In theory, having your own small universe based on coherent principle is heaven. In practice, it feels very constricting to work inside that box (the Smalltalk image), because you don't have access to some of your best power tools, or even to some basic modern niceties.

(Pharo Smalltalk is great, and improving steadily; you should try it out. But I don't see myself calling myself a Smalltalk developer/hacker any time soon.)


Are binaries available, to run in a VM at least? Also, can you provide a link to the source code?


Some software and code are linked here http://www.vpri.org/vp_wiki/


But not the "STEPS" project's code, apparently.


> One can get their hands on the source code

Where?


His ideal vision is always described in very abstract terms. If he took the care to write down precisely what it should look like, I'm sure there would be hundreds eager to build it for him.

Look at Squeak. http://squeak.org/ The idea that you should be able to dig in and modify things is very powerful.

Another way to look at it is in terms of discoverability: you are at one level of understanding (using the phone, clicking on things), and it should lead you to a deeper level of understanding (perhaps moving things around, modifying them). At each level, there are hooks inviting you deeper, drawing you in. Ideally you get down low enough to start programming, to actually understanding how assembly and the CPU works, so there is no longer any area of magic.


On Hackernews there was once a skeptic of postmodernism who got into a debate with a Derrida fan. The skeptic said "If deconstruction is not BS, how about you explain to us, in plain English, what it's all about." The Derrida fan replied "No. How about you demonstrate that you've read Husserl, Heidegger, Hegel, Nietzsche, Freud, and Levinas. Then we can begin to talk about Derrida."

If Alan Kay were like the Derrida fan, then he could have rebuffed anyone who came asking what the Dynabook was about with "How about you demonstrate that you've studied the achievements of McLuhan, Montessori, Papert, Minsky, Engelbart, etc. Then we'll talk." Of course he's not like that so he's going to try to give the reporter a bit of the background he has in those things and let them figure it out for themself.

The Dynabook is supposed to be, roughly, an iPad with a keyboard, stylus, and a Lisp Machine-like OS. Of course Apple and third parties have closed the gap with keyboards and styli but it's the OS that's the really important bit because in order for a Dynabook to be a Dynabook it must not be a thing that you can just download and run preapproved apps on. You have to be able to discover -- and change -- the entire system from the bare metal on up. Think "Stallmanism on steroids".

It's like a television you can talk back to. Computing is a dynamic medium that demands interaction from its user, so to deny the user the ability to question and change the assumptions of the medium is to lock them in to certain stereotyped modes of interaction and deny the full richness of the medium. This is where Kay's "real guitar vs. Guitar Hero" analogy comes in. You can either have a device that offers its full potential up front, like a real guitar, or you can have a device that can only be used in circumscribed ways, in vague imitation of having the full richness of the original device, like Guitar Hero. The price you pay for having a real guitar, or a real Dynabook, is what we call "ease of use" -- the ability for a complete neophyte to pick it up and immediately display something resembling proficiency with it. (It is much easier for a trained data-entry clerk to operate a 3270 terminal than it is for that same clerk to fill out a modern Web form, but that sort of ease doesn't count.) And Apple has really doubled down on ease of use, at the expense of everything else wonderful about having a powerful computing device you can hold in one hand. To the point where it's become just like regular, can't-talk-back-to-it television.


Deconstruction is a way of viewing reality which comes naturally equipped with a process for disassembling ideas (the "deconstructive process") as well as a process for viewing ideas as the synthesis of other ideas (the "constructive process"). The core is this: all semantics are relative, all observations surface-level, and all truths buried in the raw materials. But since the materials are themselves relative, all truths are ultimately undefinable, recursively unknowable. To see a house is to know that it was built on a foundation, that its walls are made of gypsum, and that the number of children living there are proportional to the number of toys left on the lawn. Conversely, to see a corporate email thread is to know the shape of the corporate power structure, the bickering between departments, the products under development, and the CEO's ability to curse. Everything is connected and related. This is the entailment of postmodernism, where we first glimpsed that narrative truths could be relative; now, we know that reality itself is relative, whether it is the reality of a book or the reality of our current "real-world" narrative.

Deconstruction isn't bullshit, but we've been swimming in deconstructive and reconstructive narrative media for nearly half a century, and it has altered how we critique media so heavily that people don't see it as a distinct feature of their worldview. (Great example: Folks seem to love Rick & Morty, a deconstructive soft sci-fi serial, and also Steven Universe, a reconstructive soft sci-fi serial, both of which are running on a network which markets to youth, not literature postdocs.)

Kay, like Derrida, has been unable to explain it to people not for lack of words, but because the enormity of the paradigm shift means that no number of words will necessarily suffice.

(Aside: If you are rubbed wrong by the first paragraph, in particular because you believe that reality is absolute or that truth is definable, then do not fear. You are merely having trouble overcoming instincts. Trust in mathematics; postmodernism arrived in mathematics in 1874, was completed in 1930, and is widely accepted by the mathematical community as meta-truth. Dissenters exist but necessarily must accept the meta-truth ("There is no one true foundation for mathematics.") to avoid being laughed out of the room.)


I am far from an expert in Deconstruction, but I am an expert in mathematics and have read a fair amount on the foundations of mathematics. I find your description technically correct, but highly misleading.

The vast majority of mathematicians will agree with "no one true foundation". However, unlike the bulk of postmodern/deconstructive work, they are then perfectly happy to say "this particular foundation is good enough to explore some interesting ideas", as opposed to wallowing in the fact that they had to take a generative step to create the mooring of their meaning.

In my non-expert opinion, I think many people instinctively rebel against Deconstruction, not because they can't accept that there has to be a faith-act to pragmatically root meaning... but because Deconstructionists, having discovered that such faith acts are necessary, then refuse to engage with any particular choice made by people who want to discuss pragmatic meaning.

Mathematicians have long ago internalized the important, but ultimately quite limited, idea behind Deconstruction (which, by the way, they didn't need Deconstructionists to tell them, since the idea of having to subjectively choose an axiom system goes back at least to the Greeks). Having understood this, they have since moved on with their lives to continue to do important and valuable things, just as they were doing before. What have the Deconstructionists done?


"Deconstructionists, having discovered that such faith acts are necessary, then refuse to engage with any particular choice made by people who want to discuss pragmatic meaning." I'm perfectly happy to have such discussions, as long as it's understood that, regardless of the beliefs brought to the table, the idea of "good enough" is not set in stone and itself is a perspective that is also brought to the table. I'm also not a Deconstructionist, whatever that is.

I find that most arguments that I have had with people usually end with them implying that their epistemology is rooted in something that is "good" or "reasonable" or "close enough" without any explanation of why it is that their chosen way of looking at the world must necessarily be beneficial. Indeed, you have pointed out that maths is a source of "important and valuable things", which I could agree with, but only with the understanding that our agreement is bound to the reality defined by our joint observations, and not a universal truth.

My point was not about choosing axioms, but about incompleteness, undefinability, uncomputability, etc., which cut short any attempts to bless a particular set of axioms as uniquely correct. You are only seeing the surface level so far; I assure you that it all deconstructs. (Be careful when looking directly at The Void.)

Finally: Where does meaning come from when having dialogues? Hofstadter explains it well: We constantly exchange deeply-coded messages with recursive layers of meaning, trying to reflect the symbols in our mind into words which we think will reconstruct those symbols in the minds we talk to. The "mooring of their meaning" is a fascinating illusion based on the inherent difficulty of stepping outside ourselves to examine where our own sense of meaning comes from.


> What have the Deconstructionists done?

Deconstruction changed the way we think about what we read, see, and hear, and taught us to question the assumptions inherent in the same. It gave us the spaghetti western, cyberpunk, and arguably punk rock itself (and it definitely influenced hip hop and dub).

Always remember that postmodernism is "defense against the dark arts" for the left. The techniques of playing shell games with symbol and meaning were devised by the right and by corporations and weaponized against us; postmodernism exists to make us aware of these techniques.

But I didn't really bring up the Derrida fan to argue about deconstruction or postmodernist thought.


> Deconstruction changed the way we think about what we read, see, and hear, and taught us to question the assumptions inherent in the same. It gave us the spaghetti western, cyberpunk, and arguably punk rock itself (and it definitely influenced hip hop and dub).

I've seen related claims before, and, having attempted to dig into them, my impression as a lay person is that what is called "postmodern literature" is actually not obviously connected to postmodern thought and deconstruction.

I personally find, for example, the idea of an unreliable narrator or non-linear storytelling not to be obviously derived from the idea of the subjectivity of anchors of truth. There are of course superficial similarities, but I see no evidence that the academic and philosophical ideas were in any way necessary for the literary, musical, and cinematic ideas. My feeling about the alleged similarities is about the same as the claims that Cubism was deeply connected to allegedly parallel ideas in the physics of relativity -- ideas that would pass in an art house, given enough narcotics, but that do not withstand serious scrutiny if one actually knows anything about the origins of ideas in physics.

So I wonder if you would be able to draw me a straight line from, say, Derrida to The Sex Pistols? Or to explain to me how Gibson's Neuromancer would not exist but for Foucault? These are not rhetorical questions -- I would be truly grateful to receive a good answer, and it might just turn around my generally low opinion of the post-60s contributions of postmodernism and deconstruction.


It's given us a framework to understand what's going on in a world with fake news. Arguably, Deconstructionists made Trump possible.


The basic problem there is that no company in the industry, from Apple all the way down to the lowliest gray-box manufacturer in China, is interested in that.

OLPC tried, it got hoodwinked by Microsoft et al.

Mozilla tried, FirefoxOS bombed.

Hell, even Linux is turning ever more complex and opaque year by year.

The simple thing is that such flexibility and self-support do not sell widgets and support contracts, and thus companies are not interested in making them.


Well, yeah. For the benefits to be realized you need a society that values that sort of thing. It will look quite a bit different from our society, just like societies with writing are completely different from entirely oral societies -- something else Kay touches on.


In effect Star Trek lala land...


OLPC and Firefox OS are not examples of projects that pushed the idea of an OS and application stack that could be understood by a single person. To get a better idea of what Alan Kay has been working towards I'd recommend checking out the STEPS project.


I'd say that they were the closest in recent memory.


The software for both OLPC and Firefox OS was based on Linux. Is there anything that made them more approachable at the low level than any other Linux distro?


I say that they were the closest, not that they were complete implementations of the vision.

Firefox OS had a particularly hackable UI layer. You could install webpages as if they were apps (all assets stored locally), bypassing the need to get your app into a curated store. Writing new drivers in Javascript could have come later.
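
For those who never used it, the install flow was roughly the following (written from memory, so treat the API details as approximate rather than authoritative):

  // Firefox OS era Open Web Apps API (long since removed): any page could
  // offer to install itself as an app, no curated store involved.
  var request = navigator.mozApps.install("https://example.com/manifest.webapp");
  request.onsuccess = function () {
    console.log("Installed:", request.result);
  };
  request.onerror = function () {
    console.log("Install failed:", request.error.name);
  };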


>> On Hackernews there was once a skeptic of postmodernism who got into a debate with a Derrida fan. The skeptic said "If deconstruction is not BS, how about you explain to us, in plain English, what it's all about." The Derrida fan replied "No. How about you demonstrate that you've read Husserl, Heidegger, Hegel, Nietzsche, Freud, and Levinas. Then we can begin to talk about Derrida.

I find that a valid rebuff of a "skeptic" (on quotes). A true skeptic should know their shit, first and better than anyone they care to er, skepticise.


>> And Apple has really doubled down on ease of use, at the expense of everything else wonderful about having a powerful computing device you can hold in one hand. To the point where it's become just like regular, can't-talk-back-to-it television.

Worse, actually. At least those old TV sets didn't listen in on your intimate conversations, nor did they keep track of your whereabouts and generally spy on you. Your phone does.


Well, he could implement the Dynabook. Why should others do it for him?

Sounds like a lot of hand-waving to me.


That's what he was trying to do at Xerox and Apple.

Without the resources of a large company, the Dynabook will get Pyra levels of adoption and be quickly forgotten about. And no large company wants to take that kind of risk nowadays. Not when there's money to be made pushing "apps" and monetizing eyeballs.


AR / VR might be an interesting place for the ideas.


I'd bet someone at VPRI is porting Croquet to the Oculus as we speak, if it hasn't happened already.


I have total respect for Alan Kay but I hope he stops criticizing other people and actually shows his vision through his own execution (and by execution I don't just mean invention of a prototype but actual productization and distribution)

If you're building something visionary and are not the one who's making tons of money off of it, you have two solutions:

1. IF you're interested in making money: Try to figure out what you're doing wrong and fix it so you actually CAN capitalize on your original invention

2. IF you're NOT interested in making money: Enjoy the respect these people pay you and please don't badmouth others who worked hard to build important products that changed the world.

I feel like Alan Kay is on the second boat, so my heart breaks whenever he badmouths other people who worked hard to build cool things.

It doesn't matter what came first. What matters is what actually managed to pull off the ultimate success.


> (and by execution I don't just mean invention of a prototype but actual productization and distribution)

So, the STEPS project I mentioned above¹ doesn't count? Bret Victor's work wouldn't count? That's incredibly short sighted.

You seem to have missed the part where it's really really hard to be successful with unnatural stuff, even when that stuff is unbelievably useful and valuable. Writing is unnatural, and took a long time to spread. It's still not there yet, and with the television and YouTube, we're actually reverting back a little bit.

Computing is even less natural, so instead of a Victor-esque interactive wonder, we got the caveman interface, where you point and you grunt².

Speaking of badmouthing, my heart breaks every time someone implies that recent Apple devices are cool. They're dystopian demons that seduced people into thinking digital prisons are cool. The Apple ][ was cool. The iPad is crap (it could be cool, with the right OS).

[1]: https://news.ycombinator.com/item?id=15266230

[2]: https://www.youtube.com/watch?v=uKxzK9xtSXM&t=2m18s


> So, the STEPS project I mentioned above¹ doesn't count? Bret Victor's work wouldn't count? That's incredibly short sighted.

It is incredibly short-sighted to make an assumption about what I said and criticize based on that wrong assumption. There's a term for this logical fallacy but I forget it.

Anyway, there's a huge difference between coming up with an idea/prototype and turning it into a product that's meant for mainstream usage. You would only know this if you have actually built something that's meant for mainstream usage so if you don't get this it's fine, but try doing that next time and you'll realize that's true.

What I'm criticizing is his comments on Apple's implementation decisions on things like the stylus. There are plenty of reasons why they decided to go that way, and I totally understand if this criticism was coming from some ignorant journalist who hasn't built anything (because they're ignorant), but Alan Kay should know better. There are way more things to consider than just ideals when you're building a real product.

This is probably why Xerox Parc built so many innovative prototypes but most of them were commercialized by other companies. If they had people like Steve Jobs instead, Xerox would be Apple by now.

If you think I'm being overly critical, just watch some of his speech videos on youtube, too much of what he says is based on the past and not the future.


> It is incredibly short sighted to make an assumption about what I said and criticize on that wrong assumption.

Are you saying I was mistaken and the STEPS project and Bret Victor's demos do count after all? Even though they're prototypes that haven't been "productised and distributed"?

I merely assumed you meant what you wrote. If you didn't, don't complain.

> What I'm criticizing is his comments on Apple's implementation decisions on things like the stylus.

Come on, that criticism was spot on. The iPad could have a spot for the pen. The pen could be tethered to the iPad. Apple could have sold pens like they now do. The Nintendo DS ships with a freaking pen.

I'm reluctant to dispute your point in general, but that was a bad example.


> The iPad could have a spot for the pen. The pen could be tethered to the iPad.

Yes they could, but I really think this is subjective. In my case I prefer NOT to have a spot because of aesthetic reasons, plus I never use the stylus.

But like I said, this is subjective. And that's exactly the point. You can't really judge execution based on subjective qualities like this. It's impossible to satisfy everyone.

It's just a difference in vision. People have different priorities. For example the Bitcoin community right now is divided into two based on their philosophy, and you can't say one is right and one is wrong because each has their own vision and each side makes sense in their own right.

I'm sure Alan Kay had his own vision about what a computer could become. I think if he's so dissatisfied with what it's come to, the best approach is to build one himself and give it to the world. (But honestly, as someone who's been thinking a lot about this field, I see so many "plot holes" in all the criticism he's making)

I am all for visionaries building inspiring products and shipping to market. But it's sad to see people like him dwelling on the past instead of the future.


What part of "if you see a stylus, they blew it" is hard for you to understand?


Anyone who has put as much work into it as Kay has and delivered as many innovations is absolutely entitled to criticize. He is especially entitled to criticize products that built on his ideas but because of some fundamental flaws actually hobble the ideas and hobble society. This is important to point out and if he isn't qualified to do so then who is?


There's a difference between badmouthing and constructive criticism. Try reading through the entire article. I did. And there's tons of badmouthing and condescension.


The article is filtered through another reporter, and I didn't find a specific instance of badmouthing (but my reading is also filtered through my own notion of where I think Alan Kay is coming from).

If you're interested in what Kay really means, maybe listen to some of his numerous other talks?


There are plenty of things worth doing that are not profitable. If anything, the people who made the profits were standing on the shoulders of giants like Kay; I think we can afford him some space to be grumpy over shitty execution and lack of vision. It's a matter of perspective; what you call cool things, Kay would call consumer bull crap, and I tend to agree.


If you're not building something for people, what are you building? Art?

I'm fine with that, but just because you're making art, doesn't mean everyone else is an idiot.

Also I think it's very condescending to think that others didn't have vision.

Lastly, it's cheap to criticize someone else for "shitty execution" when you yourself have the ability to show better with your own execution but clearly haven't. It's easy to throw around the word "execution", but it's not an easy thing to do.


Herein lies a dangerous assumption prominent in neoliberal thinking: that "building something for people" is only possible by making something that is inherently "profitable." Kay has spoken endlessly on the fact that PARC was -- in spirit, structure, and personnel -- largely the last gasp of the ARPA/IPTO funding culture. That we don't have this kind of basic research funding anymore is one of the reasons we don't have inventions on par with the Internet and personal computing. This is one of the points he harps on about.

Few companies today will create anything that is a truly shattering invention, for the simple fact that companies are not structured to invest the time and money in ways that such inventions demand: decade long cycles, where failure and "wasting" money is the norm. The quarterly earnings cycle prevents this. What's worse, that cycle has been absorbed into the way politicians and governments now think. Today the NSF wants you to demonstrate how your research will produce the desired results when applying for grants. That's not how exploratory basic research works. You'll only ever get more arcane iterations on current systems and technologies.

Kay has spent his life doing this kind of exploratory research. PARC, Squeak, STEPS -- I'd call all of that "execution." But perhaps others who see businesses and profits as the sole reason to get up in the morning would disagree.


1. I agree with your general argument about the need for deep research.

2. Which means you're making assumptions about what I'm thinking.

3. Please don't label others on the Internet with words like "Neoliberal", etc. We are all same people and there may be misunderstanding but we are not fundamentally different species.

4. My point was not to say that this not-for-profit research is useless, but that if you're in this field, you should own it. If people build things that are mediocre and you know a better way, why not build one and show it yourself? He hasn't done that because he doesn't know either. And by "knowing" I mean actually knowing how to build a product that people will use without any problem. It is easy to criticize an implementation; it's not easy to actually do it yourself. With all due respect, prototyping and actually turning a prototype into a product that the general public can use are completely different ball games.


He didn't call you a neo-liberal. Your whole argument, however, does rely on the now widespread (and false) assumption that anything worthwhile can be made profitable under the current economic system.

By the way, Alan Kay has built stuff people can actually use. Scratch, for instance, is so usable that even children can learn it.

Ah. Learning. I don't want to assume, but it looks like you missed the part where Alan Kay wants the general public to learn about computers and stop viewing them as magic golems. This is a completely different ball game from merely making a popular consumer device.


What does building for people (which I am doing) have to do with making profits (which I am not)? What about sharing? Building for profit is the opposite of building for people; the two mindsets are not compatible.


> and actually shows his vision through his own execution

As Steve Jobs reportedly was fond of saying, "Great artists ship."


Among the shallowest of cheapjack Silicon Valley slogans, but certainly revealing of its ethos. In such a climate one could be forgiven for forgetting that not everything has to be a commercial product to be used, appreciated, or valuable. What slovenly "artists" we had in the government oh so long ago, who never "shipped" the Internet.


What would the Internet be like today had it remained a government project?


Not the point. "What would information technology be like today if ARPA/IPTO-style funding still existed?" would be the more relevant question. I'd call those dudes "great artists". That's the community Kay comes from, the one that gave rise to the Internet, PARC, and everything else that several generations of hustlers have made careers out of iterating upon.

It's also worth noting that TCP/IP was able to outfox the planned X.25 ISO standard for international network transport in large part because ARPA was releasing code and standards for free and testing them live. All of that happened before the Internet was commercialized.


The ad layout on mobile makes this article unreadable on my phone, to the point of rage-quitting about halfway through. How appropriate, somehow...



I have seen this pattern in other people with interesting but narcissistic personalities:

"Steve wasn’t capable of being friends. That wasn’t his personality."


I hope someone could tell Alan Kay about the Galaxy Note. It has a really nice pen and a place to put it.


I recognize the Alan Kay I met at HP. Very sure of his own genius and quick to snarkily dismiss the genius of others.

Case in point: the iPad and iPhone make kids creative like no other tool has before. Kids know how to use them to text, exchange ideas or pictures of kittens, draw or sketch, learn how to program (Swift Playgrounds, anyone?), and take and edit photos. It is everything but a glorified TV where kids are passive. This is the only environment where kids are truly first-class citizens. How on earth Kay could spend so much energy (a whole article) trying to convince us otherwise, based mostly on super-weak arguments like "nobody is using the shake gesture to undo," is truly beyond my understanding.

Don't get me wrong. The Dynabook was a somewhat interesting idea. But it's about as much an inspiration for the real thing we use daily as the Star Trek tricorder. What Kay, to this day, did not get, unlike Jobs, is the power of networking, both physical and human. What made the iPhone greater than the Dynabook ever would have been is the App Store, and the combined creativity of millions of developers. What made the iPhone greater is apps like Messages, Facetime, Facebook, Twitter, Whatsapp, and countless others. What made the iPhone greater is Instagram, Flickr, sport apps, medical apps, HP-48 emulators, 3D "kill them all and think later" games.

By contrast, what would have made the Dynabook great in Kay's mind was "designed by Kay for Kay's personal interests". Or, as Fake Bill Gates would have said: "One programmer should be enough for everybody".


Personal attacks are not allowed on Hacker News, regardless of whom you're attacking. Personal attacks that purport to read someone's mind and put nasty words in their mouth are particularly self-revealing.

Please read https://news.ycombinator.com/newsguidelines.html and don't comment like this here again.


"This article was a surprise, because the interview was a few years ago for a book the interviewer was writing."

The interview probably predates Swift Playgrounds. Alan's critique was probably part of why it was made available; I heard it was part of why Scratch was eventually allowed.


> What Kay, to this day, did not get, unlike Jobs, is the power of networking, both physical and human. What made the iPhone greater than the Dynabook ever would have been is the App Store,

Interesting phrasing, considering that the iPhone did not come with the App Store. In fact, Apple's was the third software distribution platform to be released for iOS, after Installer.app and Cydia.

Meanwhile, Alan Kay had been working on Croquet[1] -- a project that allows the creation of multi-user collaborative software with live editing and automatic distribution and replication -- since at least 2001.

[1] https://en.wikipedia.org/wiki/Croquet_Project


>Interesting phrasing, considering that the iPhone did not come with the App Store. In fact, Apple's was the third software distribution platform to be released for iOS, after Installer.app and Cydia.

Which is irrelevant.

First, because it eventually came, and second, because whether they intended to put one out or not, they obviously couldn't create a first-class API for third-party use, fix public interfaces as much as possible, write documentation, create a store, and so on, all while creating the first edition of a new product (which they didn't even know would be a success or not).


>> Kids know how to use it to text, exchange ideas or pictures of kittens, draw or sketch, learn how to program (Swift Playground anyone), take and edit photos.

Those things don't sound particularly creative to me [at least not to the extent you can actually do them on a phone] and you don't need an iPhone to do them, either.

Isn't it more the case that kids (and everyone else) are naturally creative and Apple marketing somehow latched on to this as a way to push their devices upon as many people as possible?


Pop-up warning.


Writing diminished thinking and reasoning capacity; computers continued offloading cognitive functions to adaptive machines. "Dig up, stupid!" is a fitting epithet.


Elon didn't wait; he just made it happen.


I don't know Elon Musk personally, but I'm pretty sure he'd refer to Alan Kay as one of the giants on whose shoulders he and quite a few industries stand.


Alan Kay is the one who said "The best way to predict the future is to invent it." He did so by inventing quite a bit of the technology that is in every PC and smartphone today.



