Some excerpts from recent Alan Kay emails (worrydream.com)
337 points by dasmoth 51 days ago | 136 comments



There's an interesting book on the topic discussed in these emails, entitled "Why Greatness Cannot Be Planned":

http://amzn.to/2CtIrRR

There's a YouTube talk by the author on the subject here:

https://youtu.be/dXQPL9GooyI

The rough idea is this:

- creativity often arises by stumbling around in a problem space, or in operating "randomly" under an artificially-imposed constraint

- modern life is obsessed with metrics and goal-setting, and this has extended into creative pursuits including science, research, and business

- sometimes, a short-term focus on the goal defeats the very aims the goal was meant to serve (see e.g. the focus on shareholder value)

- the authors point out that when they were researching artificial intelligence, they discovered that systems that focused too much on an explicitly-coded "objective" would end up producing lackluster results, but systems that did more "playful" exploration within a problem space produced more creative results

- against this backdrop, the authors suggest that perhaps innovation is not driven by narrowly focused heroic effort but instead by serendipitous discovery and playful creativity

I found the ideas compelling, as I do Kay's description of the "art" behind research.


That book was written by computer scientists, and they demonstrate how the optimal search algorithm is often to pick the most unusual thing to try that you haven't tried yet. I'd like to see more discussion about this as an approach to personal development. Many people advise students and new graduates to sample the search space broadly, without detailed goals, just to see what's out there. It almost becomes the "do something every day that scares you" maxim. There is an old saying about "Everything you want in life is waiting just outside your comfort zone."
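For anyone curious, the search strategy the book's authors (Kenneth Stanley and Joel Lehman) formalized is called "novelty search": candidates are scored not against an objective but by how far they sit from everything kept so far. The following is only a toy sketch of the idea, with an arbitrary one-dimensional search space and a k=5 neighborhood I chose for illustration, not the authors' actual implementation:

```python
import random

random.seed(0)  # deterministic for illustration

def novelty(candidate, archive, k=5):
    """Novelty score: mean distance to the k nearest previously kept points."""
    if not archive:
        return float("inf")  # everything is novel when nothing has been tried
    dists = sorted(abs(candidate - a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

archive = []
for _ in range(40):
    # propose random candidates in [0, 100] and keep the most novel one,
    # ignoring any notion of an "objective" entirely
    candidates = [random.uniform(0, 100) for _ in range(10)]
    archive.append(max(candidates, key=lambda c: novelty(c, archive)))

print(min(archive), max(archive))
```

Run repeatedly, the archive tends to spread across the whole space rather than climbing toward any single point, which is the behavior the book credits with turning up unexpected stepping stones.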


I've not watched the video nor read the book but I take it slightly differently.

Reinertsen, in "Managing the Design Factory", describes the need for companies to generate high-value information as efficiently as possible. Using Shannon's theory of information, he argues that low-information events are those where the outcome is nearly certain either way (success or failure is highly likely). It follows that you maximize the information output when the probability of failure is 50%.
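The 50% claim is just the binary entropy function from Shannon's theory: a trial with failure probability p carries H(p) = -p*log2(p) - (1-p)*log2(1-p) bits of information, which peaks at exactly one bit when p = 0.5. A quick sketch (the function name is mine):

```python
import math

def trial_information_bits(p_fail):
    """Shannon entropy (bits) of a binary trial with failure probability p_fail."""
    p = p_fail
    if p <= 0.0 or p >= 1.0:
        return 0.0  # a certain outcome carries no information
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Near-certain outcomes (success or failure) are low-information;
# the 50/50 trial is worth a full bit.
for p in (0.05, 0.25, 0.5, 0.75, 0.95):
    print(f"p(fail) = {p:.2f} -> {trial_information_bits(p):.3f} bits")
```

Note the symmetry: a project that is 95% sure to succeed teaches you about as little as one that is 95% sure to fail.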

How many of us take risks - professionally or personally - where the probability of failure is 50%?


This is the difference between Research (which Kay excelled at) and Development and Commercialization (which he didn't). Both are important. The latter requires focus, discipline, and deadlines.


Have you any references to the Development and Commercialization he didn't excel at? I know of Kay by name, but not his life story.


Well, in a few of his presentations, he talks about the difference between invention, which is what PARC did, and innovation, which is what companies like Apple do.


See Apple. They took a lot of PARC's inventions and brought them to market.


He was a Xerox PARC guy back in the Smalltalk days, when they developed incredibly expensive systems that never really sold. In the 2000s, he was still griping about how everything was worse than PARC in the 1970s. I haven't paid much attention to him since then.


There is a related book, Strategic Intuition [1], by William Duggan, which speaks to the nature of creativity. Highly recommended read if you are interested in this topic.

[1] https://www.amazon.com/Strategic-Intuition-Creative-Achievem...


I'm coming here from the hackerbooks newsletter where I saw this book mentioned. Just wanted to chime in how compelling I find the ideas.

Though the book is slightly repetitive, what stuck with me is the idea of a "stepping stone approach". In order to discover something new or interesting, all we need to do is find the next stepping stone to jump to. The rest is not in our hands. I tell myself this whenever I think I need some "grand vision", which I never have, so I end up doing nothing. I'm happy I stumbled upon it.


"Erasmus Darwin had a theory that once in a while one should perform a damn-fool experiment. It almost always fails but when it does come off, it is terrific. Darwin played the trombone to his tulips. The result of this particular experiment was negative."


> the authors point out that when they were researching artificial intelligence, they discovered that systems that focused too much on an explicitly-coded "objective" would end up producing lackluster results, but systems that did more "playful" exploration within a problem space produced more creative results

A comment of mine from a year ago: "I think the way forward for AI will be in simulating human consciousness processes, including whatever computational limitations come with that. In this sense, I feel that the kind of AI that will break barriers first is going to be in a game, and not on a factory floor."


Off-topic: always interesting to see how referral links fare here.

I personally am not opposed if disclosed, since AFAIK (documented as of 2012 https://news.ycombinator.com/item?id=4156414) this gives access to other items purchased in the same time frame (whether the specific item is purchased or not) and analytics for links with no orders.


For anyone who hasn't run across this, Alan has shown up on HN periodically, and the resulting post history is just astonishingly rich: https://news.ycombinator.com/posts?id=alankay1. Threaded conversations at https://news.ycombinator.com/threads?id=alankay1.

If you read the threads you'll see that he has frequently continued conversations long after the submission dropped off the HN front page, and many topics have been developed further than one usually sees. HN's archives are full of gold and for me this is one of the more obvious veins of it.

If anyone wants to spend some time aggregating and collating these writings, as the OP has done from emails, I would see about getting YC to fund the effort, subject to Alan's permission of course. Email hn@ycombinator.com if you're interested.


Thanks for pointing it out.

I did a search in my mailbox and found a small branch of a huge discussion thread tree 555 days ago:

https://news.ycombinator.com/item?id=11958781

Perhaps we can build a "systemland" in some way.


Emailed! Great idea.


Irony of ironies!


[This bit of pointless & unexplained snark would have been deleted or edited had I revisited in time]


> Socrates didn't charge for "education" because when you are in business, the "customer starts to become right". Whereas in education, the customer is generally "not right".

Very important takeaway!


The whole paragraph is gold:

> Socrates didn't charge for "education" because when you are in business, the "customer starts to become right". Whereas in education, the customer is generally "not right". Marketeers are catering to what people want, educators are trying to deal with what they think people need (and this is often not at all what they want). Part of Montessori's genius was to realize early that children want to get fluent in their surrounding environs and culture, and this can be really powerful if one embeds what they need in the environs and culture.

This is so true on so many levels. Schools and universities came to mind immediately. For me, the most striking fit is educational articles and blog posts on the web.

When the www was young it was full of stuff that catered to my intellectual curiosity. Nowadays content creation is driven by keyword research, search volume, and CPC. After three paragraphs I usually know the piece will end in a call to action. Not that there isn't any interesting stuff out there, but it is drowned in a sea of ever-repeating trivialities.


The web now presents mainly economic activity and validation of currently held opinion. When we were both younger, the web and I, the ideas and opinions expressed by it were on a wider spectrum. It feels like it went from 4:2:2 down to a 32 color fixed pastel palette.


Remember when Google was useful for finding good stuff faster, instead of driving what should be there?


I remember that when I started googling, I would often go to pages 2, 3, 4, 5 to find what I wanted. These days I never go past the first page. I actually had to check just now that there was still a page switcher...

What changed the most? How good I am at formulating a query? What kind of information I'm looking for? How good Google is at curating the first page? How much information is available? I sure hope it's not that my queries are more trivial, or that my preference for answers favors simpler, easier ones...

Thankfully I still get lost on Wikipedia (and Youtube), in a kind of what-is-all-this-I-need-to-know-more learning spree.


A bunch of factors:

- we became impatient, if not lazy

- Google does a huge job of parsing complex queries

- the content itself is more organized (semantic HTML markup, meaningful CSS classes, etc.)

The other day I tried Tails/Tor and .. I can't recall .. IPFS probably. That network is bare, so bare that I felt excited just like in the 56K days. It quickly went bad because a huge percentage of the content is bad/crazy, but I still felt I was entering more interesting territory as a visitor.


> Thankfully I still get lost on Wikipedia (and Youtube), in a kind of what-is-all-this-I-need-to-know-more learning spree.

I know what you mean, but should we be thankful? Learning is stimulating, but are we just indulging in it with little or negative results because we are acquiring misinformation?


Yes, absolutely. I use DuckDuckGo and it feels a lot like the early Google. The only thing I still use Google for is local searches, like opening hours, this is where the filter bubble shines.


I find DDG like the early Google days, in that it's a bit harder to find things that today's Google would have on the first page. Google has gotten to a point where its accuracy is just amazing.

I suspect part of it is also me knowing how to phrase searches, but that just does not work to the same level on DDG.


And this is what's happening in medicine. It's going from an activity where the doctor and patient are trying to 'learn' about an illness to a business where the 'client' must be pleased by the 'provider'. The concept of allopathic medicine would never have been invented in our day and age.


The "concept" of allopathic medicine was invented by homeopaths.


The thing is that Licklider's vision of computers as "interactive intellectual amplifiers for all humans, pervasively networked world-wide" has already come to pass, and created huge economies of scale and exponential pressures for compatibility and conformity that didn't exist before.

In the 1970s a few dozen brilliant people could create a completely new and self-contained computer system because the entire computing world was tiny and fragmented. There wasn't the imperative to be compatible with all-pervasive standards (even IBM's dominance in business was being challenged by the minis).

These days if you want to create a new computer system that people will use, you need at minimum to provide a networking stack and a functional web browser, some emulation or compatibility system to support legacy software that people rely on, device drivers for a huge range of hardware, etc. All this not only takes a huge amount of work, it also punctures the design integrity of your system, making it into a huge mountain of compatibility hacks before you even start on your own new concepts. But the deadliest enemy of innovation is the mental inertia of masses of users with a long history of interacting with computers. They are no longer the blank slates of the 70s who had never seen a computer.

Even in the realm of art people realized that the romantic or modernistic model of artistic revolution that Kay invokes is untenable and retreated into postmodernism.


> All this not only takes a huge amount of work, it also punctures the design integrity of your system, making it into a huge mountain of compatibility hacks before you even start on your own new concepts.

One possible approach that doesn't require writing a full compatibility layer is virtualization. Run your new system alongside a mature system like Linux, then rig it up so windows can appear inside whatever new environment you come up with. You do still need to write the code to send events and get pixel buffers, but it seems like much less work overall.


Then you're constrained by the existing window and input systems, which probably makes many interesting new concepts impossible to implement.

I don't know an alternative except to do everything new within one window, which would be at least as bad.


I think this is why it is necessary to fund "searching for problems" research. We don't need to solve the 1970s vision of computing, we need to solve the 2020s vision of computing, whatever that may be.


I hate to be the one to say it, but Apple’s the only company anywhere near the capacity to make a new platform happen.


Does anyone know what happened to HARC ? It was announced in May 2016, with Alan Kay's involvement and Bret Victor's:

http://blog.ycombinator.com/harc/

https://news.ycombinator.com/item?id=11679680

In September "eleVR" said it was leaving YC research:

http://elevr.com/elevr-leaving-ycr/

I presume these e-mail excerpts are inspired by the end of HARC too? What happened?

In the thread about first year reports a couple months ago, someone suspects that funding has ended too. Also, Bret Victor's work and name are conspicuously missing from the first year reports.

https://news.ycombinator.com/item?id=15564072

https://harc.ycr.org/reports/


Thanks for the link from @j_s; I found the following insightful comment by you (@chubot):

> The misleading thing about Linux is that it IS IN FACT a big idea -- it's just not a technological idea. We already knew how to write monolithic kernels. But the real innovation is the software development process. The fact that thousands of programmers can ship a working kernel with little coordination is amazing. That Linus wrote git is not an accident; he's an expert in software collaboration and evolution.

https://news.ycombinator.com/item?id=15164092

From your comment, I think the funding and development process of these post-Bell Labs/PARC research projects might need a paradigm shift from being technology-idea oriented to being social-development oriented, given the reality of our highly networked social environment (the dream of Licklider).

If we read the CRAZY eleVR thread as a tree/forest (https://news.ycombinator.com/item?id=15162957), we can see how much misunderstanding of each other exists in our tech community. There are much deeper social-development problems to be solved than simple VC or philanthropy funding.

@vihart was right saying

> HN. You're a weird place.

It is a more fundamental problem that vihart and gang can’t solve. And that would be an opportunity at another meta level.

Think what Lick would do.


Thanks, glad you got something out of the comment.

You have a point that, ironically, the success of Licklider's vision may have changed the nature of collaboration for the worse in some ways. It's been a huge net positive, in that humanity as a whole is more productive, but there are things that have been lost.

(Or maybe another way to look at it is that Licklider/Engelbart's vision has been watered down, and you could solve these communication problems with better software design.)

Another example I recall is that Rob Pike said that Bell Labs was a wonderful environment because everyone had to physically sit in the same room, huddled around big shared computers. There is a natural exchange of ideas in that environment, which leads to big research breakthroughs. When Ethernet and personal workstations came around, suddenly you could sit at your desk and close the door and put your headphones on.

I personally noticed that irony at my own job. Often people would talk more to people in a remote office than to the person sitting next to them. And sometimes people would chat electronically with someone in their own office -- e.g. if they didn't want to take off their headphones or disturb other officemates!

I also wonder what a new model for funding basic research looks like. It seems like YCombinator would be a logical entity to fund such research, but as far as I can tell, it was a failed experiment.

And I think there is some danger in longing for the "golden age". I was reminded in reading the "Dream Machine" that a lot of it was driven by fear of the Soviets (Sputnik, etc.). And I also wonder if what made it work back then is no longer present, for better or worse.


The fact that we are having this conversation across time and space shows the power of Lick's "Intergalactic Computer Network". We have reasons to be optimistic.

On the other hand, on top of the computer hardware/software communication layers where this conversation takes place, there is a profound layer that we seem to take for granted and are not consciously aware of (like fish in water): written language composed and consumed by real human beings (English in this case). To communicate effectively through this distributed (non-local in space) and asynchronous (non-local in time) layer, we need more context than the words themselves. In Rob Pike's Bell Labs and Alan Kay's bean-bag-chaired PARC, local synchronous face-to-face communication was much more effective because the audience was from the same background and shared the same context.

Let's take a look at the contexts of Bret Victor's blog post "Some excerpts from recent Alan Kay emails". These are excerpts of private written communication (emails) between Alan Kay and Bret Victor. We could imagine what Alan had in mind when he wrote those emails, or what Bret had in mind when he read them, and furthermore, why Bret made those excerpts and posted them on his blog. However, since we are neither Bret nor Alan, we can only read them within the constraints of our own contexts. We can only guess what message Bret wanted to send with his blog, and we are not even sure we (the HN community) are his intended audience.

I am sure there are some YC funders reading this, and the fact that we don't see any of them making a statement here shows that they are also under the constraints of their contexts. So I guess the challenges of, and solutions to, funding research lie in these contexts as well.

It is all about pink and blue planes.


> Unfortunately, a combination of forces in the world make nonprofit long-term research a tough sell right now. It doesn’t matter how good we are at what we do. Everyone is overextended trying to solve all the world’s problems at once, and we’re in the unpopular space of being neither for-profit nor directly and immediately philanthropic.

http://elevr.com/elevr-leaving-ycr/


I read that (and linked it), but I'm more interested in YC's side of the story.

How long do they commit to funding for? How do they evaluate when to continue funding? What do they consider "success"?

A year and change is incredibly short for basic research. If at the outset you only planned for a year of funding, then you can't expect any results. It's almost wasted money. So I think that something happened to make them discontinue the funding.


Discussion of September "leaving" announcement: https://news.ycombinator.com/item?id=15162957


> An example of the vision was Licklider's "The destiny of computers is to become interactive intellectual amplifiers for all humans, pervasively networked world-wide". This vision does not state what the amplification is like or how you might be able to network everyone in the world.

I am going through 'The Dream Machine', the book about Licklider. It is quite long and dense, but it ties together lots of individual threads (for me at least) that were otherwise floating loose. I had known about luminaries in the field (and their work) individually: Licklider, Shannon, Norbert Wiener, Cerf and Kahn, Metcalfe, von Neumann, Vannevar Bush (to name a few), but this book brings them together.

Interestingly though, Licklider's background was in psychology, as opposed to EE/maths. Did we lose some of that diversity of perspective?

Another book that I found pretty good is 'The Idea Factory'; it complements 'The Dream Machine' quite well.

edit-001 : fmt changes, and added reference to another book.


> Interestingly though, Licklider's background was in psychology, as opposed to EE/maths. Did we lose some of that diversity of perspective?

The answer to this question is almost right there in the emails: the focus of research funding switched from the visionary work of defining the problem space to exploring the solution space. Engineers are people who are educated, and perhaps sometimes predisposed, to solve problems, not to find problems worth solving.


I’m currently reading this book as well, having come across it on Alan Kay’s reading list[0]. It’s definitely worth reading, and the diversity of thought and interest surrounding that time does seem to be in contrast to today’s environment.

[0] http://www.squeakland.org/resources/books/readingList.jsp


Hm I posted an eerily similar comment to this yesterday morning on lobste.rs:

https://lobste.rs/s/i9s39q/best_books_you_have_read_2017#c_z...

It looks like you made at least four of the same points that I did.


The name Licklider is mentioned a couple of times. Recently I read a fascinating account of his work. Great read for those who want to know more about the history of computing and the internet: https://www.amazon.com/dp/B01FIPHEXM/


> "The destiny of computers is to become interactive intellectual amplifiers for all humans, pervasively networked world-wide." This vision does not state what the amplification is like or how you might be able to network everyone in the world.

That's a vision that recognizes the value in creating tools, not constrained solutions for the current problem. Freedom can be frightening[1], but letting people explore what is possible with your new tool sometimes leads to unexpected uses and the occasional paradigm-shifting discovery.

[1] (with apologies to DHH[2]) If we give people unconstrained tools, they might change the String class to shoot out fireworks at inappropriate times!

[2] https://vimeo.com/17420638#t=27m27s


Another way of thinking about this is that large companies have basically decided that executing optimally is more important than necessarily creating huge leaps themselves. They can just focus on making money, create a sizable amount of cash, and acquire new technology.

I'm sure there's a cultural effect here as well. Seems like there are fewer large companies that are willing to just let technologists "muck around for a bit" swinging for the fences without some kind of goal.

But I just wonder: if VC-funded startups are where new tech grows, do we end up focusing only on incremental improvements, instead of potential large leaps?


> Seems like there are fewer large companies that are willing to just let technologists "muck around for a bit"

I wonder if this is true. Many of the major computer industry companies have labs: Google, Microsoft, IBM ... (Apple? Facebook? Amazon?)

I wonder if it's more or less than before.


Compared to Bell Labs or PARC, these are purely applied labs.


Thanks. Do you know of any source that talks about this issue (applied vs basic in the major corporate labs) and on a somewhat sophisticated level?


Much less than before.


What this makes me wonder is, if the problem is not enough funding for visionary work, or more accurately that modern funders misunderstand how and why to find visionary work, then what will change their minds?

Because I feel like I've read a few of these posts, watched similar talks, and I nod along and agree every time. Only problem is, I'm not a wealthy benefactor, director of a research institute, or a business exec with control of discretionary R&D funding.

Where do those people hang out, what would convince them like this post convinces me, and is anyone working on that?


I think it is hard to get people to understand what they do not yet understand, especially things they have had opposing experiences with. It seems to me making money is easy if you a) work hard and b) are extremely goal-driven (and obviously you need to be sufficiently smart). Now this seems to be working in research too, where the extremely goal-driven part means you focus on publishing research papers in prestigious places.

I think it is working in research (given for example that some of the papers in conferences like POPL are just great). Of course you won't hear from those instances where it is not working (until it does).

The problem is you somehow have to decide who to give money to, and you cannot possibly know who carries a vision inside of them that they are actually able to realise given enough funding and time.


> set up deadlines and quotas for the eggs. Make the geese into managers. Make the geese go to meetings to justify their diet and day to day processes. Demand golden coins from the geese rather than eggs. Demand platinum rather than gold. Require that the geese make plans and explain just how they will make the eggs that will be laid. Etc.

-- I have seen these types of situations so many times during project implementation.


Is the underlying assumption behind this text that less innovation is happening right now? Do the billions poured into AI not count?


The assumption is (I believe) that innovation is better when it's not goal-oriented. Currently much of the funding goes to projects that have a clear path to results, but finding that path is a prerequisite, and is not funded.


Kay makes a distinction between invention and innovation. Invention is the creation of qualitatively new things and innovation is the betterment of pre-existing things. The AI research that’s happening now sounds more like innovation. Innovation is fine, but civilization-changing things (like the constitution or Internet) are the result of invention, not incremental improvement (to paraphrase Kay’s argument).


Particularly considering that most of the AI work is openly published (companies averse to publishing have difficulty hiring in AI).


> Do the billions poured into AI not count?

Unless there are any great results, no, they are not.

Tons of research money were poured into AI in the 60s-70s too, and when it didn't lead anywhere great, it all folded in the so-called "AI winter".


The comparison is with an environment where the mouse, windowing and Ethernet were created.

Billions poured into AI have produced some interesting tricks but little that changed the entire world of computing in the way those things have.


It's funny you should mention "the mouse, windowing and Ethernet", because in the last 5 years the majority of people have stopped using all 3 of those (thanks to mobile). So that takes care of the "little that changed the entire world of computing" argument.

Yes, billions have been poured into AI and we still have no strong AI. That just shows that it's a problem way more complicated than inventing the concept of a pointing device, making computers talk to each other, or having applications show something on a screen that isn't text. The people in the 80s weren't smarter (or dumber) than the people of today; there was just a lot of low-hanging fruit back then.


> because in the last 5 years the majority of people have stopped using all 3 of those (thanks to mobile).

Even if this were true, which I disagree with, it has changed the world. Just because we don't use the iPod anymore doesn't make it any less brilliant.

> The people in the 80s weren't smarter (or dumber) than the people of today; there was just a lot of low-hanging fruit back then.

They didn't just take the low hanging fruit. In the 60s it was assumed that computers would just get bigger and bigger. They had to take the highest hanging fruit that everyone thought was ridiculous. They had to envision an entirely different world. So perhaps they weren't more intelligent but they had a more powerful vision of the future.

You're probably laughing at similar people today.


Things wouldn't be too different without the mouse. We already had the trackball.


As far as I can tell, it's a bit like Google's 20% time. Except instead of it being one day a week, it's 25 people in a building for 5 years.

The people being given money for AI are expected to work on AI, not on whatever they think is cool.


Makes me wonder: what could we, or current organizations, do to recreate that environment of research and creativity in today's (and tomorrow's) world? Also, what do you do in your own life (if anything) to create the sort of context that is conducive to this kind of creativity?


The recipe from the article should still work:

Hire a bunch of smart people, give them advance funding for 5 years without strings attached, tell them to build the future without specifying what that means.

From a personal view I have plenty of ideas for things never built before, but I lack the time and energy to do it after hours and I can almost never fit it into the scope of my professional work. Sometimes this makes me sad.


Committees impose compromises and conditions on funding, so government or public-company money won't work, as Kay points out. It must be private money, like the Medicis funding the Renaissance. Something like the Gates Foundation. It could be a prestige project for Bezos or Zuck.


Isn't this framing of for-profit (prestige is a kind of profit) part of the problem he describes?


DARPA was government though


And DARPA is nowhere near as good as ARPA was.


> Parc was "effectively non-profit" because of our agreement with Xerox, which also included the ability to publish our results in public writings (this was a constant battle with Xerox).

Parc was "effectively non-profit" because there was no way to collect or monetize reams of user data like there is today. If there had been, either Kay wouldn't have gotten that agreement or Parc would have been much more selective about what constituted "results."

For example: how many companies have solved the talk-to-our-computers-and-our-computers-talk-back problem now?


> there was no way to collect or monetize reams of user data like there is today

Xerox was at the time the world leader in handling literal reams.

A huge portion of data processing was related to managing customer data, mailing lists, etc.


I'm talking about a different type of user data-- what would be input into things like visual programming languages and devices that Alan Kay talks about-- Grail, light pen inputs, etc. I assume what Kay is talking about is the ability to publish insights, data, and specifications for the languages and UIs that were being developed at the time. So when you see Kay doing user studies working with kids, the stuff he made public was the actual research on what was learned about human-computer interaction during that time, languages used, developed, etc. It wasn't the position of the kids' eye, fingerprint of their typing patterns, mouse movement patterns, etc.

Someone can correct me if I'm wrong, but I've never seen anything Kay wrote from the 70s or 80s that discussed the implications of mining user data on such an enormous and regular basis as is currently done. And I doubt Xerox itself thought of that category of user data as being valuable outside of research, or in any way significant the way "customer data, mailing lists, etc." were. But today, that user data is assumed to be valuable and can cheaply be collected and monetized. So even if some Parc-like team exists today, the results will be practically locked-down to keep a competitive advantage over others. Or it will make much smaller strides with specific goals in mind-- like Mozilla with Rust. Or it will clone and improve upon existing proprietary technology and make it available to the public-- like GNU.

It's a bit like reading someone wax nostalgic about Bitcoin's initial bootstrapping mechanism and complaining that today's altcoins don't put enough emphasis on distributing the tokens to people who aren't speculators. That's fine as a description of a problem, but it's a fairly toothless observation in terms of solving the problem. Distributing tokens when nearly nobody is watching is a completely different problem from distributing them when everyone is not only watching but also speculating. Similarly, having a research team do work that is a public good when nobody thinks the data generated is something that can be directly monetized is a completely different problem from trying to do the same when the greater part of the economy depends on everybody watching, collecting, and monetizing the data that your software/hardware generates.


This bit

"An important part of any art is for the artists to escape the "part of the present that is the past", and for most artists, this is delicate because the present is so everywhere and loud and interruptive. For individual contributors, a good ploy is to disappear for a while"

reminded me of John Carmack explaining in his .plans how he would go off and seclude himself for a couple of weeks to start a new iteration of the id technology.


I have intimately enjoyed the work of Engelbart, Kay, Nelson, Victor, etc over these years. They make computer history meaningful and believable.


It is a very difficult narrative, mainly because while reading it I had to think constantly about survivorship bias and appeal to authority. Would I really read this if it were written by a 20-year-old without a track record? And how many Alan Kays are out there who completely failed and would prescribe a totally different approach, attributing the cause and effect to other factors?

I also totally get the point of funding problem finding versus funding business plans that are full of bullshit and pretend to have a risk free solution with a play book that just needs some money to execute it linearly step-by-step. So, for that part it is written very well and the point is clear.

Another topic is: would all kind of innovations really emerge if the funding mechanism was radically different in the last 20 years? Would we see all kinds of big leaps or was ARPA-Parc just a serendipity moment that couldn't be repeated no matter what the funding structure was?


On your second point -- namely why does Juicero get funded and qualitative innovation doesn't -- I think it comes down to the still very real weight given to subjective alpha-primate-ness in business decision making.

On two occasions I've had VC types (in one case post-beer) tell me they almost never fund anyone who doesn't have (paraphrasing) "presence" or "a reality distortion field" etc. I would have assumed it to be subconscious but these comments convinced me that it's a conscious explicit bias in at least some cases.

This is a really outstanding book on the topic:

https://en.wikipedia.org/wiki/Quiet:_The_Power_of_Introverts...

Put enough weaponized extroversion and alpha-primate signaling behind it and you can get people to invest millions in absolute BS. You can also over-inflate the value of something real to stratospheric degrees. See also: Uber.

On the other hand if you have an underspoken introvert try to sell something outstanding, the majority of businesspeople will not pay attention.

I have seen a case where a meek soft-spoken woman tried to pitch a mobile startup with outstanding user base growth, revenue, and cash efficiency numbers and got yawns. She wasn't asking for much. This was seed stage stuff and these were crazy amazing numbers. In the same pitch fest the investor panel perked up for a number of vapid pre-revenue proposals backed by more extroverted alpha personalities. I don't think it was just sexism since one of these was also a woman. (I suspect some sexism is actually this bias since it may be tougher for women to play the alpha primate card with men.)

Most innovators ("artists") are more on the introvert end of the spectrum.

Edit: this may be rational for funding things that rely heavily on in-person salesmanship, but it's applied broadly.

Edit #2: another reason this may be rational for VCs is that they are only thinking about their cash-out. They may be looking for people who can pump the value of the company's stock to later investors more than people who can actually deliver lasting value. The company's product is its stock, not its product.


If you think that companies by introverts are undervalued, a ton of money could be made exploiting this trend.


I'd do it if I were rich-- I'd look for companies with:

(1) Breakthrough technologies and/or amazing numbers.

(2) Introverted founders passed over by other investors.

(3) A sales process that is not in-person where being introverted, understated, or shy is not a major handicap. Companies with mostly inbound, viral, indirect, or automated sales processes or that can outsource sales easily would work.


Re #2

What tools could you use to decrease the friction caused because introversion makes a founder harder to find?


I think Parc was a unique "serendipity moment". These Cambrian Explosion type moments happen frequently in history: high creativity in which a new form is developed, followed by an increasing focus on small incremental improvements, and finally stagnation. The amount of potential a new form has determines how long this process takes.

I believe that for workstation/desktop computing we have already hit stagnation. I also suspect that programming languages have not advanced in any significant way since around 1990. By this perhaps surprising claim, I mean that the languages available today are no more capable than those available in the 1980s.


> I believe that for workstation/desktop computing we have already hit stagnation

Nope. Not even close. We haven’t even made a user-friendly function editing tool. After that happens there will be another Cambrian explosion on the desktop.

Let me see if I can list more things like that...

Well, smart contracts. Those have approximately zero adoption and will probably constitute a double digit share of scripts in the future. We don’t even know how to write them yet. You won’t be writing smart contracts by talking to Alexa... not at first anyway.

What else... I think if I can come up with three I will have proven my point...

Well, a shared spatial computing environment. VR will encourage prototyping of libraries for structuring that stuff. But there’s nothing about shared computational space that necessitates a headset. I think most of what VR will do is show us new ways to structure desktop software.

And those are just things I, a person from the 80s, can point to and say “we haven’t figured this out on the desktop yet but we will”. That’s not even admitting the things we can’t even imagine.

I would say we have designed less than 1% of the computational structures that will come standard on a mobile phone in 2050.

We’ve been making advances for decades—approaching human lifespan timescales—so it sometimes feels like we’re close to the end. But we’re just at the very beginning of discovering the final form of “windows”.


Funny you should talk about shared spatial computing environments. Have you seen what Alan Kay's research group is doing lately? :-)

https://dynamicland.org/


You need only watch the YouTube video about the Xerox Star from 1982 to see that the desktop is conceptually the same now. Sure, we have better graphics but all the key elements were there 35 years ago. I'm sure there will be innovation in other areas, perhaps in AR or VR, but the desktop is done; frozen in amber dated 1982.


Reading books about the evolution of computing from different angles, it quickly becomes clear 1) how few ideas are original and/or how few are big leaps vs. incremental steps that tend to occur in multiple places at once, even in the beginning; e.g. the number of internal projects that would have been "close enough" but were cancelled because competitors beat them by mere months. 2) how much innovation gets stomped into the ground for ridiculous reasons such as corporate politics.

While there certainly are amazing people doing amazing things that deserve recognition, it seems increasingly clear to me that if any of them weren't there, the space would have been filled, and quickly. Maybe in different ways and things wouldn't be the same, but history is so full of independent near-simultaneous invention of practically the same things that it seems that a lot of ideas end up being realized not because of some unique intellect, but because their time has come, and the best these people can do is speed it up a bit.

Marketing and adoption is a different issue - I think William Gibson got it right: "The future is already here – it's just not evenly distributed."

A lot of the challenge of innovation is not the innovation itself, but getting the knowledge to filter out there. Arguably, it's only in the last decade or so that languages like Lisp and Smalltalk have won people over - not by themselves, but through the success of an increasing array of languages, created by people they inspired, that have themselves taken years to gain traction.

And even then there is so much low-hanging fruit that has not seen traction because, frankly, people are busy, and the amazing stuff often isn't the work most likely to pay the bills then and there.


I don't know how you're defining "capable"; I'd take the view that Haskell/Scala/... are immensely more effective than anything that was comparably mainstream in 1990 (and that Haskell/Scala/... development in practice today is a lot more effective than Haskell in the '90s, even if a language that's notionally the same was available), and that the interesting research languages of today have made similar progress beyond the interesting research languages of 1990. It takes a frustratingly long time for things to percolate into the mainstream, so it's easy to think that, say, ML existed in 1970, and forget that ML is still a long long way ahead of what were commonly used industrial languages even as late as 1990.


I would define "capable" in terms of developer productivity--the time it takes to deliver quality software. For example, if you decided to write a DNS server from scratch, would choosing Haskell or Scala (both fine languages) really enable a massive (order of magnitude) improvement over choosing Smalltalk, C, Ada, or Common Lisp? I doubt it. What we do benefit from is the availability of libraries, IDEs, and various frameworks. These, I would argue, rather than the innate abilities of the modern languages, are what enable us to be more productive (assuming that we are).


> For example, if you decided to write a DNS server from scratch would choosing Haskell or Scala (both fine languages) really enable a massive (order of magnitude) improvement over choosing Smalltalk, C, Ada, or Common Lisp?

Well, for a sufficiently low defect rate, I suspect it would take an order of magnitude longer (for normal developers) to achieve it in C, Smalltalk, or most Lisps. Ada perhaps not, but then maybe it would've taken an order of magnitude longer to implement, particularly if you factor in making enough money to buy the development tools first. Then as now, the future is not evenly distributed.

I mean, you're undeniably right on one level: for a small, standalone executable whose main task is bit-banging, where a high defect rate is acceptable, it was possible to bash that out 20 years ago at a similar speed to what you can do in today's languages. But I think that's actually a much smaller niche than it first appears, and not where the focus of our efforts has been because that's not actually what businesses most need.

> What we do benefit from is the availability of libraries, IDEs, and various frameworks. These, I would argue, rather than the innate abilities of the modern languages, are what enable us to be more productive (assuming that we are).

I think the ability to write long-term-maintainable, general-purpose libraries or frameworks is actually very tightly tied to the languages we use.


Ironically, I'd point at GHC Haskell and Scala as the prime examples of what's so wrong with programming language research today. Today's academic environment only rewards work that can deliver results in a short time frame, which discourages academics from doing badly needed fundamental work. What we need is “a formal semantics for a language, with practical proof principles for designing correct programs”. What academics give us is “a type system with first-class foos, with a proof of type soundness”. Pathetic.


It's the industry that only wants small improvements that it can digest one at a time. Look at how much anti-intellectual hostility Haskell and Scala attract (even on this very site) for the really quite small steps they're taking. It's not the academics' fault.


I am not criticizing the academics behind Haskell and Scala for “not doing enough”. I am criticizing them for doing counterproductive “work”.

(0) Haskell and Scala are already very complicated, and they are becoming even more complicated at a very fast rate. This will only make life harder when we finally bite the bullet and admit that we need a formal semantics for the mess we have created.

(1) Moreover, this complexity only yields at best marginal increases in the extent to which these languages aid software verification. Effort would be better spent trying to figure out how much verification you can wrangle out of relatively simple type systems like ML's. I have actually tried this, and obtained (IMO) promising results: core types prove some simple things, proper use of modules proves a few more simple things, but most importantly, they make complicated things easier to (formally!) prove by other means.

(2) On more philosophical grounds, the extended community behind typed functional languages draws inspiration from the wrong source. Martin-Löf type theory is doubtlessly a beautiful foundation for mathematics, but it has a fatal flaw for programming purposes: it makes no distinction between values (which are already there, you just use them) and computations (which you need to run to produce values, and might have effects during the process). As a result, a programming language based on Martin-Löf type theory has to be crippled in unacceptable ways (e.g. totality, IOW, “fixing a prior notion of acceptable recursion”) to prevent it from being type-unsound.

(3) There is ample theoretical and practical evidence that no single language can be good for everything, so there will be situations in which we need to compose program fragments written in different languages. Alas, type theory is conspicuously silent about FFIs. Is this perhaps because of a refusal to acknowledge that computation is a prior notion, which types can help organize, but by no means define?


> Haskell and Scala are already very complicated, and they are becoming even more complicated at a very fast rate. This will only make life harder when we finally bite the bullet and admit that we need a formal semantics for the mess we have created.

It's very hard to know what we need formal semantics for if we don't simultaneously try to write practical programs. There's a tick-tock pattern at work here: start with a clean formal language, figure out what extensions are needed to write practical programs in it, at first in an ad-hoc way, then figure out a way to make that extension in a sound, compatible way, then finally form a new clean formal language that accommodates these extensions. If you just want a language with formal semantics, ML or Haskell 98 exist already; Scala or GHC Haskell extend these to languages we can write industrial programs with, and then Dotty or Idris are clean formalisable designs with the lessons learned from this.

> Moreover, this complexity only yields at best marginal increases in the extent to which these languages aid software verification. Effort would be better spent trying to figure out how much verification you can wrangle out of relatively simple type systems like ML's. I have actually tried this, and obtained (IMO) promising results: core types prove some simple things, proper use of modules proves a few more simple things, but most importantly, they make complicated things easier to (formally!) prove by other means.

Not my experience. I've found HKT and dependent types extremely useful in practice; they give me a simple way to directly encode the business properties I care about verifying. Of course if you've come up with something useful I'm interested, and I'm sure many others would be, but I'd strongly disagree with the notion that what the academics are doing isn't useful.

> As a result, a programming language based on Martin-Löf type theory has to be crippled in unacceptable ways (e.g. totality, IOW, “fixing a prior notion of acceptable recursion”) to prevent it from being type-unsound.

Not at all convinced that e.g. Idris-style totality is "unacceptable". What's the problem with it?

> There is ample theoretical and practical evidence that no single language can be good for everything, so there will be situations in which we need to compose program fragments written in different languages.

Again not at all convinced. Certainly my practical Scala experience is that it's good for everything.


> figure out what extensions are needed to write practical programs in it, at first in an ad-hoc way, then figure out a way to make that extension in a sound, compatible way, then finally form a new clean formal language that accommodates these extensions.

This is not possible, and the history of Haskell, Scala, and to a lesser extent OCaml, is precisely the evidence for it.

> Not my experience. I've found HKT and dependent types extremely useful in practice;

In practice, most uses of HKTs or even term-level HOFs are just bad workarounds for

(0) The lack of a proper hierarchical module system, allowing type components to be shared by fibration.

(1) The lack of properly designed data structure libraries with decoupled concerns. For example, you need traversals and folds to manipulate sets and maps, but you don't need them to manipulate search trees (i.e., trees that allow you to do binary search, rather than ADTs that might use binary search under the covers).

> they give me a simple way to directly encode the business properties I care about verifying.

Do you actually verify them? How exactly do you verify Scala programs in the absence of a formal semantics for Scala?

> Not at all convinced that e.g. Idris-style totality is "unacceptable". What's the problem with it?

“There is no largest class of recognizably terminating programs.”

https://mail.haskell.org/pipermail/haskell-cafe/2003-May/004...

> Certainly my practical Scala experience is that it's good for everything.

One thing that I find useful, due to my current job, is numerical linear algebra without array bounds checks at runtime. Have fun doing this in Scala.


> This is not possible, and the history of Haskell, Scala, and to a lesser extent OCaml, is precisely the evidence for it.

Then how do you explain the existence of e.g. Idris or Dotty?

> In practice, most uses of HKTs or even term-level HOFs are just bad workarounds for

> (0) The lack of a proper hierarchical module system, allowing type components to be shared by fibration.

They don't feel like a workaround. They feel like a nice way of modelling my problems that has the properties that I want. Maybe there are other ways, but if those other ways work then why aren't we seeing them in industry?

> Do you actually verify them? How exactly do you verify Scala programs in the absence of a formal semantics for Scala?

Any verification method has a false-negative rate. I use a combination of trusting the compiler's typechecking, sticking to the scalazzi safe subset and enforcing with linters etc., manual review of recursion (limiting the cases where it occurs as far as possible) and fast and loose reasoning.

Maybe I'm missing something, because there always seems to be some disconnect in these conversations: as far as I can see any method that lets you associate logical deductions to your program and mechanically verify that those deductions are legitimate is verification, and I can certainly do that with propositions-as-types in the Scala type system. I don't verify the faithfulness of the compiled code, but many verification systems don't do that either. I don't have a proof of soundness, but plenty of verification systems don't have that.

> “There is no largest class of recognizably terminating programs.”

Sure. But not all recognisably terminating programs are important or useful. To form a terminating language we have to have a finite limit on how many levels of nested 'eval' we can do at compile time, sure (though I see no reason we can't form a parameterized family of languages and pass the "depth" as a compiler argument). That's irritating, but it's a big jump from there to "unacceptable".

> One thing that I find useful, due to my current job, is numerical linear algebra without array bounds checks at runtime. Have fun doing this in Scala.

"Without array bounds checks at runtime" is not a real requirement. The real requirement would be something like "using x amount of CPU" or, ultimately, "at cost of no more than $y". FWIW I did numerical linear algebra in Scala in production at a previous job; the benefits outweighed the costs.

More to the point, there's nothing fundamental about Scala that makes it impossible to incorporate linear algebra without array bounds checks at runtime into the language. Yes the current implementation doesn't do it, but that's a contingent fact about the current implementation, not something fundamental.


> Then how do you explain the existence of e.g. Idris or Dotty?

Have fun convincing people to port their existing Haskell and Scala programs to Idris and Dotty, respectively. In particular, Idris is not even remotely similar to Haskell beyond the syntax - it even uses a different evaluation strategy!

> They don't feel like a workaround. They feel like a nice way of modelling my problems that has the properties that I want.

The trouble with Haskell and Scala-like higher-kinded type constructors is that they are merely parameterized by other type constructors, rather than by entire structures. (It is as if, in mathematics, all functors were functors from Set!) Haskell and Scala attempt to remedy this situation using type classes, i.e., identifying every type constructor `T` with a canonical structure whose carrier is `T`. But, what if you need to use two different algebraic structures, both having `T` as their carrier? For example, say you need to use maps whose keys are lexicographically ordered strings, as well as maps whose keys are colexicographically ordered strings. Do you create a `newtype` wrapper as in Haskell? Or do you use different implicits in different contexts, and risk conflating maps created in one context with maps created in the other?

This problem doesn't exist in ML, because functors allow you to parameterize entire structures by other entire structures, not just their carriers.
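To make the conflation concrete, here is a hypothetical Scala sketch (`TreeMap` standing in for any ordered-key container; the names are invented for illustration):

```scala
import scala.collection.immutable.TreeMap

object OrderingConflation {
  // Two different orderings with the same carrier type, String.
  val lex: Ordering[String]   = Ordering.String
  val colex: Ordering[String] = Ordering.by[String, String](_.reverse)

  def demo(): (String, String) = {
    val a = TreeMap("ab" -> 1, "ba" -> 2)(lex)
    val b = TreeMap("ab" -> 1, "ba" -> 2)(colex)
    // Both values have the SAME type, TreeMap[String, Int], so nothing
    // in the types stops us from mixing maps whose orderings disagree:
    val mixed = a ++ b // compiles fine; b's ordering is silently discarded
    (a.firstKey, b.firstKey) // same data, different structure
  }
}
```

In ML, by contrast, applying a map functor to two different ordering structures yields two distinct module types, so mixing them is a compile-time error rather than a silent hazard.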

> Maybe there are other ways, but if those other ways work then why aren't we seeing them in industry?

What do I know. Ask the industry, not me.

> I use a combination of trusting the compiler's typechecking,

Trust is neither here nor there. Without concrete type safety theorems (“such and such doesn't happen in a typable program”), you don't even know what type checking is supposed to buy you.

> sticking to the scalazzi safe subset and enforcing with linters etc., manual review of recursion (limiting the cases where it occurs as far as possible) and fast and loose reasoning.

That's a perfectly fine set of software development practices, but it's not verification.

> I don't have a proof of soundness, but plenty of verification systems don't have that.

Then all you have is a fancy linter.

> More to the point, there's nothing fundamental about Scala that makes it impossible to incorporate linear algebra without array bounds checks at runtime into the language.

Um, the inability to statically verify array index manipulation using Scala's type system?

> "Without array bounds checks at runtime" is not a real requirement.

Not paying the runtime cost of superfluous runtime checks is absolutely a real requirement in my current job.

> More to the point, there's nothing fundamental about Scala that makes it impossible to incorporate linear algebra without array bounds checks at runtime into the language.

It can't be implemented as an ordinary library in the existing Scala language without relying on unsafe features (e.g., a C FFI). Doing this safely requires bona fide dependent types - in spite of their name, so-called “path-dependent types” are just fancy first-class existentials.


Thanks for the reply

> Without concrete type safety theorems (“such and such doesn't happen in a typable program”), you don't even know what type checking is supposed to buy you.

What kind of safety theorem should I be looking for? I mean, what I generally want is something like "if an expression of type A evaluates, the result will be a valid thing of type A i.e. a value constructed by one of the constructors I defined for values of type A". Or perhaps it would be better expressed via elimination - "if I provide eliminators for all my constructors for type A then I can eliminate any expression of type A". I kind of want a "the system will never magic up an A out of nowhere" result, but AIUI I can never get that because I can't have a proof of consistency of the system, no?

> Doing this safely requiers bona fide dependent types - in spite of their name, so-called “path-dependent types” are just fancy first-class existentials.

What is it that we can't do currently? I mean, I can define a natural number type, I can convert between values and types in both directions. I can't use ad-hoc polymorphism with a type defined at runtime (the types I form at runtime must be phantom) but I wouldn't expect to do that in well-behaved code in the first place? The only way I can see for a type to ever get into code, in any language, is either through a literal or a cast, and both those approaches are open to me in Scala. What am I missing?


> What kind of safety theorem should I be looking for? I mean, what I generally want is something like "if an expression of type A evaluates, the result will be a valid thing of type A i.e. a value constructed by one of the constructors I defined for values of type A".

This is probably sufficient for a Coq-style proof assistant, but from a general-purpose programming language I expect more. See below.

> I kind of want a "the system will never magic up an A out of nowhere" result, but AIUI I can never get that because I can't have a proof of consistency of the system, no?

A type system can and should guarantee that you can't create values of a (positive) type other than by using the type's introduction forms.

> What is it that we can't do currently?

(0) The obvious: Be sure your program won't attempt to read from or write to a closed file.

(1) Obtain an arbitrary integer `n`. Test once whether it is a valid array index. If the test succeeds, use `n` arbitrarily often as an array index, without any further bounds-checking.

(2) Obtain three `foo.Index` values `a,b,c`. Test once whether the arrays `foo` and `bar` have the same length. Use `a,b,c` arbitrarily often as values of type `bar.Index`, without any further bounds-checking.

etc. etc. etc.

> The only way I can see for a type to ever get into code, in any language, is either through a literal or a cast, and both those approaches are open to me in Scala.

Are those casts safe without runtime checks?

---

In any case, at some point using types for program verification offers diminishing returns. We have to accept the possibility that maybe it's us, not type checkers, who have to verify our programs.


> (0) The obvious: Be sure your program won't attempt to read from or write to a closed file.

I can do that in Scala with monadic regions, which only need existentials/rank-2 types.
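A minimal Scala 2 sketch of that encoding (all names invented for illustration): because the caller of `runRegion` must supply a function polymorphic in the phantom tag `S`, no `Handle[S]` can leak out of the region.

```scala
// Toy "file region" API; the phantom tag S ties each Handle to one region.
trait Region[S] {
  def open(name: String): Handle[S]
  def read(h: Handle[S]): String
}
final class Handle[S](val name: String)

// Rank-2 encoding in Scala 2: a trait with a polymorphic method.
trait RegionBody[A] { def apply[S](r: Region[S]): A }

def runRegion[A](body: RegionBody[A]): A = {
  val r = new Region[Unit] {
    def open(name: String): Handle[Unit] = new Handle[Unit](name)
    def read(h: Handle[Unit]): String    = s"<contents of ${h.name}>"
  }
  val result = body(r)
  // A real implementation would close every handle opened in this region here.
  result
}
```

Since the result type `A` is fixed before `S` is chosen, `body` cannot return a `Handle[S]`: the tag would escape its scope and the program would not typecheck.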

> (1) Obtain an arbitrary integer `n`. Test once whether it is a valid array index. If the test succeeds, use `n` arbitrarily often as an array index, without any further bounds-checking.

> (2) Obtain three `foo.Index` values `a,b,c`. Test once whether the arrays `foo` and `bar` have the same length. Use `a,b,c` arbitrarily often as values of type `bar.Index`, without any further bounds-checking.

I see no reason these things wouldn't be possible to implement in the language, though one would have to pass around the (phantom) evidence that these things are true to the point where it's used, which is cumbersome and something newer languages fix.

I'm not saying any of this is as easy as it should be, I'm asking how "real" dependent types would be different from what Scala currently has.
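For illustration, here is a minimal version of that phantom-evidence pattern in current Scala (a hypothetical API; note the JVM still performs its own bounds check on the underlying array access, so this removes redundant user-level checks rather than all runtime checks):

```scala
// Check an index once; afterwards, possession of a `c.Index` is the evidence.
final class Checked[A](private val xs: Array[A]) {
  // Path-dependent: a value of type `this.Index` belongs to this array only,
  // which is what keeps a foo.Index from being used as a bar.Index.
  final class Index private[Checked] (private[Checked] val i: Int)

  def check(n: Int): Option[Index] =
    if (n >= 0 && n < xs.length) Some(new Index(n)) else None

  // No user-level bounds check; the JVM's own check still runs underneath.
  def apply(ix: Index): A = xs(ix.i)
}
```

Usage: `arr.check(1).map(ix => arr(ix) + arr(ix))` checks once and indexes twice. Transporting evidence between two arrays of equal length (the grandparent's case 2) would need an explicit, manually justified conversion, which is where real dependent types would help.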

> Are those casts safe without runtime checks?

No, we'd make the runtime check at the point of casting. What I mean is: the only way something I'd want to type as an array of length n could ever get into my program is (ultimately, once we flatten out all the arithmetic from e.g. concatenating arrays of length p and q), either a literal array of length n, or a manually checked cast of a dynamic-length array received at runtime.

> In any case, at some point using types for program verification offers diminishing returns. We have to accept the possibility that maybe it's us, not type checkers, who have to verify our programs.

My experience has been that types are more than good enough. My defect rate is much, much lower than it needs to be, lower than I can really justify putting any more work into, and I don't feel like I'm close to the limits of what's possible with types.


> I can do that in Scala with monadic regions, which only need existentials/rank-2 types.

Monadic regions are inflexible. As I have previously told you on lobste.rs, try implementing braided allocations:

(0) Allocate A

(1) Allocate B, do something with A and B together, deallocate A

(2) Allocate C, do something with B and C together, deallocate B

(3) etc.

Turns out, you can't, because monadic regions are strictly LIFO. Lest you think this is just a theoretical exercise, this allocation pattern is used in the implementation of concurrent database indices.

> I see no reason these things wouldn't be possible to implement in the language, though one would have to pass around the (phantom) evidence that these things are true to the point where it's used, which is cumbersome and something newer languages fix.

The only way you could do it is if the design of the array data type anticipates every use case (including ones not in my original list). To give you an analogy, your response is exactly the same as if someone said “I don't need sum types. I can anticipate every single way in which a sum will be case-analyzed, and turn it into virtual method of a base abstract class.” This is completely ridiculous.

> No, we'd make the runtime check at the point of casting.

Then it's effectively dynamically typed, don't you think?

> My experience has been that types are more than good enough.

Types are good enough for some things. I certainly wouldn't want to program in a completely typeless language - parametric polymorphism and algebraic data types are just too nice to give up. But I also wouldn't want to use types to prove everything. Nor would you, as evidenced by your follow-up.

> My defect rate is much, much lower than it needs to be

Lucky you. Mine has to be zero, but it's always positive, sadly.


> Monadic regions are inflexible. As I have previously told you on lobste.rs, try implementing braided allocations:

Earlier in this thread you said "this complexity only yields at best marginal increases in the extent to which these languages aid software verification. Effort would be better spent trying to figure out how much verification you can wrangle out of relatively simple type systems like ML's" - aren't monadic regions exactly a case of wrangling more verification out of a relatively simple type system rather than building something more complex?

There will always be things we don't know how to do; monadic regions themselves seemed impossible until they were implemented. I don't think we've put anywhere near enough effort in yet to conclude that it's impossible to extend that kind of approach to support braided allocations; indeed that's the sort of thing I'd expect the academic process you deride to come up with.

> The only way you could do it is if the design of the array data type anticipates every use case (including ones not in my original list).

The Scala type system can implement Turing-complete computation, you can do arithmetic with type-level numbers, so I don't see why the array design would need to anticipate anything. Accessing a specific element could demand evidence that that element was within the array's bounds, operations that form new arrays would need to provide evidence about that array's bounds. But the user is free to connect those two ends together in arbitrary ways.
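
As a minimal sketch of this evidence-passing style (my own hypothetical API, not a real library): a phantom brand `N` ties `Index` values to the `Sized` array they were checked against, so indexing itself needs no bounds check, and the brand is hidden behind a rank-2-style eliminator so indices can't leak between arrays.

```scala
object Branded {
  // Index values branded with N are known in-bounds for the Sized[N] that issued them.
  final class Index[N] private[Branded] (val value: Int)

  final class Sized[N] private[Branded] (private val elems: Array[Int]) {
    def length: Int = elems.length
    def apply(i: Index[N]): Int = elems(i.value) // no bounds check needed here
    def index(i: Int): Option[Index[N]] =       // the one place bounds are checked
      if (0 <= i && i < length) Some(new Index[N](i)) else None
  }

  // Rank-2-style eliminator: callers never learn N, so indices from one
  // array cannot be used with another.
  trait User[A] { def apply[N](s: Sized[N]): A }
  def withSized[A](xs: Array[Int])(u: User[A]): A = u(new Sized[Unit](xs))
}

val r = Branded.withSized(Array(10, 20, 30))(new Branded.User[Int] {
  def apply[N](s: Branded.Sized[N]): Int =
    s.index(1).map(i => s(i)).getOrElse(-1) // r == 20
})
```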

> To give you an analogy, your response is exactly the same as if someone said “I don't need sum types. I can anticipate every single way in which a sum will be case-analyzed, and turn it into virtual method of a base abstract class.” This is completely ridiculous.

Um, I do say this. In a language that has first-class functions etc. I don't need any language-level support for sum types (e.g. pattern matching); Either#fold covers all the same use cases. Language support is a nice syntactic convenience but nothing more than that AFAICS.
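
The claim in concrete, runnable form (my example): `fold` is the eliminator for `Either`, so any pattern match on it can be rewritten as a `fold` call.

```scala
val e: Either[String, Int] = Right(42)

// A pattern match on Either...
val viaMatch = e match {
  case Left(msg) => s"error: $msg"
  case Right(n)  => s"ok: $n"
}

// ...and the same elimination via Either#fold.
val viaFold = e.fold(msg => s"error: $msg", n => s"ok: $n")
// both are "ok: 42"
```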

> Then it's effectively dynamically typed, don't you think?

No, because the type system is still checking that my deductions are valid, that my conclusions follow from my premises (e.g. if I concatenate an array that I've dynamically checked to be of size n with one that I've dynamically checked to be of size m, I statically know that the result is of size n + m). I'm just adopting an extra premise of "this particular runtime size check is correctly implemented"; of course if I adopt a false premise then I'll deduce nonsense, but that's on me. I don't see any qualitative difference between having the (outside array of unknown size -> array typed as being of a given size) function be part of the system versus implemented by me.

> Types are good enough for some things. I certainly wouldn't want to program in a completely typeless language - parametric polymorphism and algebraic data types are just too nice to give up. But I also wouldn't want to use types to prove everything. Nor would you, as evidenced by your follow-up.

I don't prove everything, but I feel like I could. Since I'm going to be using types anyway, I don't feel the need to pursue non-type ways of proving things unless and until I hit something I want to prove that's harder to prove with types than it should be, something for which an easier proof method feels like it should exist. So far I haven't.


> Earlier in this thread you said "this complexity only yields at best marginal increases in the extent to which these languages aid software verification. Effort would be better spent trying to figure out how much verification you can wrangle out of relatively simple type systems like ML's"

Substructural types are a much, much simpler addition to a type system than rank-N types, rest assured. For instance, substructural types don't impede global type inference. In fact, adding substructural types to HM requires minimal changes to how the type checker works. Read ATTaPL, chapter 1!

> The Scala type system can implement Turing-complete computation, you can do arithmetic with type-level numbers, so I don't see that the array design would needs to anticipate anything?

By itself, a Turing-complete type system does not give you the ability to enforce arbitrary program properties. It only guarantees that you can wait arbitrarily long for the type checker to complete. Kind of impressive, but not useful.

> Um, I do say this. In a language that has first-class functions[sic] etc. I don't need any language-level support for sum types (e.g. pattern matching); Either#fold covers all the same use case. Language support is a nice syntactic convenience but nothing more than that AFAICS.

Encoding data as particular higher-order procedures only works in an effect-free language. (Nontermination counts as an effect!) Mmm, like Scala:

    trait ChurchBool {
        def pick[A](x: A, y: A): A
    }
    
    object ChurchTrue extends ChurchBool {
        def pick[A](x: A, y: A) = x
    }
     
    object ChurchFalse extends ChurchBool {
        def pick[A](x: A, y: A) = y
    }
 
    object ChurchWTF extends ChurchBool {
        def pick[A](x: A, y: A) = pick(x, y)
    }
Sorry, did you say anything?

> I don't see any qualitative difference between having the (outside array of unknown size -> array typed as being of a given size) function be part of the system versus implemented by me.

Here is an important qualitative difference: A runtime check that checks something that is statically knowable introduces an unreachable control flow path.
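
A contrived illustration of that dead path (my example, assuming ordinary runtime-length arrays):

```scala
def concatChecked(a: Array[Int], b: Array[Int]): Array[Int] = {
  val r = a ++ b
  // With size-indexed arrays, r.length == a.length + b.length is a theorem.
  // Without them, "checking" it creates a branch that can never execute:
  if (r.length != a.length + b.length) sys.error("unreachable")
  r
}
```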

> I don't prove everything, but I feel like I could

Whereas I actually prove everything, and in the process have found a practical boundary between what (IMO) is reasonable to prove with types, and what I'd rather prove using other technology.


> Encoding data as particular higher-order procedures only works in an effect-free language. (Nontermination counts as an effect!) Mmm, like Scala:

Scala pervasively allows nontermination, sure. All the program properties I have are up to "if it terminates ...". (I mean, I would claim I work in a fragment of Scala that disallows definitions like ChurchWTF#pick, but I can't really justify that without formalising it). Language-level support for sum types doesn't change that; programs written in Scala carry just as much risk of nontermination as programs written in Scala-without-pattern-matches.

> Here is an important qualitative difference: A runtime check that checks something that is statically knowable introduces an unreachable control flow path.

I'm not talking about having runtime checks for things that are statically knowable, I'm talking about runtime checks for things that aren't statically knowable like the size of an array received from an external source. Again, how is that any different from what you would do in a language with "true" dependent types?


> Language-level support for sum types doesn't change that; programs written in Scala carry just as much risk of nontermination as programs written in Scala-without-pattern-matches.

Admitting nonterminating computations is actually a good thing. What's a bad thing is conflating computations with the values they produce. In other words, the types `Bool` and `ChurchBool` are not isomorphic.
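
To make the non-isomorphism concrete (my illustration, reusing the ChurchBool definitions from upthread):

```scala
trait ChurchBool { def pick[A](x: A, y: A): A }
object ChurchTrue  extends ChurchBool { def pick[A](x: A, y: A) = x }
object ChurchFalse extends ChurchBool { def pick[A](x: A, y: A) = y }
object ChurchWTF   extends ChurchBool { def pick[A](x: A, y: A) = pick(x, y) }

// Boolean has exactly two values; ChurchBool has a third inhabitant.
def toBool(b: ChurchBool): Boolean = b.pick(true, false)

// toBool(ChurchTrue)  and  toBool(ChurchFalse)  behave as expected, but
// toBool(ChurchWTF) never returns: no Boolean corresponds to it.
```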

> I'm not talking about having runtime checks for things that are statically knowable, I'm talking about runtime checks for things that aren't statically knowable like the size of an array received from an external source. Again, how is that any different from what you would do in a language with "true" dependent types?

There's no difference here. But you totally glossed over my point that your abstract type of arrays with a member type of indices would have to be bloated with every operation imaginable on indices.


But those language communities also have to do some self-reflection.

People are often hostile to things they don't know. Not taking this into account when trying to introduce new things is a recipe for disaster.

This means ...

– meeting developers where they are, not telling potential adopters "if you write Java-like Scala we don't want you here".

– meaningful documentation, not answering efforts to improve documentation with "it's not necessary, because everyone already knows Scala".

– describing and explaining design choices like companion objects, type classes, context bounds, and so on; not expecting that beginners will understand their superiority through osmosis.

– making technologies accessible to people on their platforms, not treating backends such as Scala.js, Scala on Android or Scala-Native as add-ons for existing Scala users (or worse).

– taking tooling seriously, and not releasing major versions of the language without tooling support. Or breaking compatibility in minor versions like in Scala 2.12.

– clean-ups that remove unnecessary baggage from the language and its libraries. Not lecturing people about the number of keywords or counting the lines in the language's grammar in comparison to other languages.

Haskell does some things better in this regard, but both are far from a convincing value proposition over "worse is better" approaches like Kotlin (which will probably wipe the floor with Scala in the future, because it addresses the bad parts of Scala).


Neither Kotlin nor any other industry language addresses my original point, namely:

> What we need is “a formal semantics for a language, with practical proof principles for designing correct programs”.

However, I'm not disappointed in industry languages, because I don't actually expect anything from them.


Another language that has some very interesting features is Perl 6. I know it's the brunt of jokes because of how long it took to bake, but I'd say it's innovative over languages from the 90s, namely its own predecessors.


> I also suspect that programming languages have not advanced in any significant way since around 1990

I would have said not much since 1970? 1950? What big new things have there been?


Yes. This is really funny:

> I once gave a talk to Disney executives about "new ways to kill the geese that lay the golden eggs". For example, set up deadlines and quotas for the eggs. Make the geese into managers. Make the geese go to meetings to justify their diet and day to day processes. (etc.)

However, it's really hard to argue that Apple, Microsoft, or indeed IBM or Disney ever killed the geese that lay the golden eggs.

Or if they did, they had plenty others running around.


> however it's really hard to argue that Apple, Microsoft, or indeed IBM or Disney ever killed the geese that lay the golden eggs.

In this context, you're not talking about the geese, but about the eggs. For example, for Apple, one such golden egg is the iPhone, and what Kay talks about is how there are many capable people to turn this golden egg (invention) into coins or platinum (product => shareholder value).

But the geese are not the products; the geese are the inventors. Jobs turned the failing Apple around from outside its own context. Microsoft missed out on the entire phone eco-system. Lots of people would attribute these long-term business fails to exactly what Kay talks about here.


Fair enough, but if one follows your reasoning, then Apple didn't kill the geese laying the golden eggs, they killed (or failed to let live) the geese about to lay golden eggs.

Which makes more sense. It's hard to kill successful projects, but incredibly easy to build an atmosphere where creative people and new ideas are not welcome.


Jobs gutted Apple's version of PARC when he came back, yet somehow it kick-started Apple's greatest-ever period of innovation.


Which, fittingly, involved getting rid of Alan Kay.


Eh, Disney came closer than they would have liked during the second half of the Eisner era. They released some bombs (e.g. Treasure Planet), and the quality of the parks was decreasing because of Eisner’s cost cutting.


Reminds me of seeing Tom Peters at Planet Tivoli one year, raving about big, dumb companies to 10,000 loyal big bluers. It's not hard to argue about golden eggs dying at these big companies. I won two Master Innovation Catalyst awards in IBM's Thinkplace program, and I firmly believe now that if you want innovation, you have to think small.


Except when Apple forced Jobs to step down in the 90s.


> It is a very difficult narrative. Mainly because while reading it, I had to think constantly about survivorship bias and appeal to authority. Would I really read this if it was written by a 20 year old without a track record?

That's not a problem I see, because this is not about Alan Kay's personal track record.

The kind of organizational freedom he describes was available at PARC and Bell Labs, and it wasn't just Kay but tens of people, one after another, creating the most important technologies while it lasted.


> appeal to authority. Would I really read this if it was written by a 20 year old without a track record?

First, that's not what appeal to authority means. It doesn't mean, "Don't be interested in what Einstein thought," for example.

Second, ironically, the big problem in our industry is that no one ever listens to those who have gone before. It's Alan Kay, not Alan Key.

We reinvent technology, package it with a new logo, give it a new name, and run into the same problems as before, until someone comes up with a new technology to replace it, which is the same technology again, ad infinitum.

You really need to listen to authority, because, frankly your software sucks. How do I know? Because all software sucks. Alan Kay, not Key, is one of the few people who understand that and understand what kind of creativity might lead us out of this mess. We should all be interested in what he's saying.


And most tellingly, the innovations at Bell Labs didn't just arise in computing: the transistor, the solar cell, information theory, statistical process control, the laser, the cell phone, and the discovery of cosmic background radiation are some examples. What Alan Kay is talking about is literally a cornucopia. Long-term scientific research directed by scientists works.


Oh in a different field don't forget 3M.


Are the full emails available somewhere?


> Most don't think of the resources in our centuries as actually part of a human-made garden via inventions and cooperation, and that the garden has to be maintained and renewed.

Just right.


What if a stepping stone to UBI was a basic income to everyone that’s demonstrated some technical aptitude? If scientists and engineers didn’t have to work for, say Zuckerberg, to survive and instead because they genuinely wanted to collaborate, maybe the tech giants wouldn’t be so disproportionately powerful. And creativity in the tech world would skyrocket.


No need to constrain ourselves to engineers. There are probably lots and lots of people who would produce more value on their own agenda.


And there would be plenty more that would produce far less.


Is that a fact or a fear?


"Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime."

UBI has a lot wrong with it, but I'll just start and end with a first principle: Taxation is theft and therefore, government and UBI are immoral.

I advocate for freedom and consent, especially as it pertains to economics. The income I earn through producing goods and services that I exchange voluntarily with other individuals should be mine to keep; all of it. Anything less is not freedom and certainly not consent.


What about all the goods you consume that you do not pay for, but were developed over the decades, centuries, and millennia and by your contemporaries too: Language, liberty, civil rights, the system of laws, security, roads, economics, finance, wheels, computers, mathematics, much of health care and education. Are you paying the Einstein estate for relativity? King James' translation team for their Bible? Alan Kay for all he did that you use? All the FOSS developers? The Suffragettes for your vote?

If not for taxation, who will pay for the police that keep you safe, the education programs that likely educated you (assuming you had some publicly funded education) and most of your customers, employees, and business partners? Public health that prevents epidemics? The list goes on forever.


I actually agree, income tax is not a good thing. I still support UBI though. In fact I derive it as a natural right from the same principles that motivate private property.

For details, check out geo-libertarianism. But in short, the argument is that the common resources, “land”, should be allocated according to the preferences of the commons (see John Locke), not the individual. In practice, have land owners pay the would-be non-owners its full market value for the privilege of being called the owner (see Rawls, Justice as Fairness, for this step). Call it a land value tax (see Henry George) or call it land rent. However, since it’s a payment to the commons, not the state, the only thing to do is to divide it fairly as a public dividend.

End result: a fair and proper UBI.


Freedom to me is having the means and ability to follow my heart. I was lucky enough to come from a family that enabled that freedom, and later sold a business that enabled more of it.

Maybe everyone should have those opportunities. Maybe that’s the future. Maybe it’s the bright side of the consolidation of wealth that’s only accelerating.


Economics has made some progress since Biblical times.

Taxation drives the economy. By creating demand for money. No taxes, no economy.


It only takes one amazing idea to move the needle.


Why the downvotes? This is a perfectly valid and sensible option which is being explored in some countries (towards the general population, not just 'creators').

Can someone explain to me why this comment is deemed to be irrelevant?


Going on about downvotes like this is against the HN guidelines. Among many other reasons, the voting situation is nearly always fluid and comments like this usually end up being factually wrong in addition to pointless noise. Please read https://news.ycombinator.com/newsguidelines.html and follow those rules when commenting here.


A basic income with restrictions is basically a job. UBI needs to apply to everyone to work.


Agreed that UBI for everyone is the ultimate goal. But basic income for all engineers and scientists could be more politically palatable in the US and have quicker results.



